seaweedfs/test/plugin_workers/erasure_coding/detection_test.go
Chris Lu 1f6f473995 refactor(worker): co-locate plugin handlers with their task packages (#9301)
* refactor(worker): co-locate plugin handlers with their task packages

Move every per-task plugin handler from weed/plugin/worker/ into the
matching weed/worker/tasks/<name>/ package, so each task owns its
detection, scheduling, execution, and plugin handler in one place.

Step 0 (within pluginworker, no behavior change): extract shared helpers
that previously lived inside individual handler files into dedicated
files and export the ones now consumed across packages.

  - activity.go: BuildExecutorActivity, BuildDetectorActivity
  - config.go: ReadStringConfig/Double/Int64/Bytes/StringList, MapTaskPriority
  - interval.go: ShouldSkipDetectionByInterval
  - volume_state.go: VolumeState + consts, FilterMetricsByVolumeState/Location
  - collection_filter.go: CollectionFilterMode + consts
  - volume_metrics.go: export CollectVolumeMetricsFromMasters,
    MasterAddressCandidates, FetchVolumeList
  - testing_senders_test.go: shared test stubs
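
For callers in the task packages this mostly means switching from an in-file
helper to the exported pluginworker name. An illustrative call shape (the
parameter list is an assumption; only the exported name comes from this PR):

  // hypothetical parameters, shown only to illustrate cross-package use
  filter := pluginworker.ReadStringConfig(adminConfigValues, "collection_filter", "")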

Phase 1: move the per-task plugin handlers (and the iceberg subpackage)
into their task packages.

  weed/plugin/worker/vacuum_handler.go         -> weed/worker/tasks/vacuum/plugin_handler.go
  weed/plugin/worker/ec_balance_handler.go     -> weed/worker/tasks/ec_balance/plugin_handler.go
  weed/plugin/worker/erasure_coding_handler.go -> weed/worker/tasks/erasure_coding/plugin_handler.go
  weed/plugin/worker/volume_balance_handler.go -> weed/worker/tasks/balance/plugin_handler.go
  weed/plugin/worker/iceberg/                   -> weed/worker/tasks/iceberg/

  weed/plugin/worker/handlers/handlers.go now blank-imports all five
  task subpackages so their init() registrations fire.
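
For reference, the wiring looks roughly like this (a sketch of the file's
shape, not its verbatim contents):

  package handlers

  import (
      // Blank imports pull in each task package purely for side effects:
      // each package's init() registers its plugin handler.
      _ "github.com/seaweedfs/seaweedfs/weed/worker/tasks/balance"
      _ "github.com/seaweedfs/seaweedfs/weed/worker/tasks/ec_balance"
      _ "github.com/seaweedfs/seaweedfs/weed/worker/tasks/erasure_coding"
      _ "github.com/seaweedfs/seaweedfs/weed/worker/tasks/iceberg"
      _ "github.com/seaweedfs/seaweedfs/weed/worker/tasks/vacuum"
  )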

  weed/command/mini.go and the worker tests construct the handler with
  vacuum.DefaultMaxExecutionConcurrency (the constant moved with the
  vacuum handler).

admin_script remains in weed/plugin/worker/ because there is no
underlying weed/worker/tasks/admin_script/ package to merge with.

* refactor(worker): update test/plugin_workers imports for moved handlers

Three handler constructors moved out of pluginworker into their task
packages — update the integration test files in test/plugin_workers/
to import from the new locations:

  pluginworker.NewVacuumHandler        -> vacuum.NewVacuumHandler
  pluginworker.NewVolumeBalanceHandler -> balance.NewVolumeBalanceHandler
  pluginworker.NewErasureCodingHandler -> erasure_coding.NewErasureCodingHandler

The pluginworker import is kept where the file still uses
pluginworker.WorkerOptions / pluginworker.JobHandler.
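
Concretely, for the erasure coding case (the argument list matches the
detection test below; the move is assumed not to have changed the signature):

  // old: handler := pluginworker.NewErasureCodingHandler(dialOption, t.TempDir())
  handler := erasure_coding.NewErasureCodingHandler(dialOption, t.TempDir())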

* refactor(worker): update test/s3tables iceberg import path

The iceberg subpackage moved from weed/plugin/worker/iceberg/ to
weed/worker/tasks/iceberg/. test/s3tables/maintenance/maintenance_integration_test.go
still imported the old path, breaking S3 Tables / RisingWave / Trino /
Spark / Iceberg-catalog / STS integration test builds.

Mirrors the OSS-side fix needed by every job in the run that
transitively imports test/s3tables/maintenance.

* chore: gofmt PR-touched files

The S3 Tables Format Check job runs `gofmt -l` over weed/s3api/s3tables
and test/s3tables, then fails if anything is unformatted. Files this
PR moved or modified had import-grouping and trailing-spacing issues
introduced by perl-based renames; reformat them with gofmt -w.

Touched files:
  test/plugin_workers/erasure_coding/{detection,execution}_test.go
  test/s3tables/maintenance/maintenance_integration_test.go
  weed/plugin/worker/handlers/handlers.go
  weed/worker/tasks/{balance,ec_balance,erasure_coding,vacuum}/plugin_handler*.go

* refactor(worker): bounds-checked int conversions for plugin config values

CodeQL flagged 18 go/incorrect-integer-conversion warnings on the moved
plugin handler files: results of pluginworker.ReadInt64Config (which
ultimately calls strconv.ParseInt with bit size 64) were being narrowed
to int32/uint32/int without an upper-bound check, so a malicious or
malformed admin/worker config value could overflow the target type.

Add three helpers in weed/plugin/worker/config.go that wrap
ReadInt64Config and clamp out-of-range values back to the caller's
fallback:

  ReadInt32Config  (math.MinInt32 .. math.MaxInt32)
  ReadUint32Config (0 .. math.MaxUint32)
  ReadIntConfig    (math.MinInt32 .. math.MaxInt32, platform-portable)
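
A minimal sketch of the clamping idea, assuming the helper signature and the
plugin_pb/math imports (the real ReadInt64Config parameter list may differ):

  // ReadInt32Config narrows a ReadInt64Config result to int32; values outside
  // the int32 range fall back to the caller's default instead of overflowing.
  func ReadInt32Config(values map[string]*plugin_pb.ConfigValue, key string, fallback int32) int32 {
      v := ReadInt64Config(values, key, int64(fallback))
      if v < math.MinInt32 || v > math.MaxInt32 {
          return fallback
      }
      return int32(v)
  }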

Update each flagged call site in the four moved task packages to use
the bounds-checked helper. For protobuf uint32 fields (volume IDs)
the variable type also becomes uint32, removing the trailing
uint32(volumeID) casts and changing the "missing volume_id" check
from `<= 0` to `== 0`.
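
A typical call site then looks roughly like this (parameter and field names
are illustrative, not copied from the handlers):

  // read the volume id as uint32 directly; no trailing uint32(...) cast needed
  volumeID := pluginworker.ReadUint32Config(values, "volume_id", 0)
  if volumeID == 0 {
      return nil, fmt.Errorf("missing volume_id")
  }
  taskParams.VolumeId = volumeID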

Touched files:
  weed/plugin/worker/config.go
  weed/worker/tasks/balance/plugin_handler.go
  weed/worker/tasks/erasure_coding/plugin_handler.go
  weed/worker/tasks/vacuum/plugin_handler.go

* refactor(worker): use ReadIntConfig for clamped derive-worker-config helpers

CodeQL still flagged three call sites where ReadInt64Config was being
narrowed to int after a value-range clamp (max_concurrent_moves <= 50,
batch_size <= 100, min_server_count >= 2). The clamp is correct but
CodeQL's flow analysis didn't recognize the bound, so it flagged them
as unbounded narrowing.

Switch to ReadIntConfig (already int32-bounded by the helper) for those
three sites and drop the now-redundant int64 intermediate variables.
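
The shape of the change, sketched with an assumed fallback value:

  // Previously: raw := ReadInt64Config(values, "max_concurrent_moves", 5),
  // clamp raw to 50, then int(raw). Correct at runtime, but CodeQL flags the
  // int(raw) narrowing as unbounded. Now the helper bounds the value first:
  maxConcurrentMoves := pluginworker.ReadIntConfig(values, "max_concurrent_moves", 5)
  if maxConcurrentMoves > 50 {
      maxConcurrentMoves = 50
  }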

Also drop the now-unused `> math.MaxInt32` clamp in
ec_balance.deriveECBalanceWorkerConfig (the helper covers it).

2026-05-02 18:03:13 -07:00

287 lines · 7.3 KiB · Go

package erasure_coding_test

import (
	"context"
	"fmt"
	"testing"
	"time"

	pluginworkers "github.com/seaweedfs/seaweedfs/test/plugin_workers"
	"github.com/seaweedfs/seaweedfs/weed/pb/master_pb"
	"github.com/seaweedfs/seaweedfs/weed/pb/plugin_pb"
	"github.com/seaweedfs/seaweedfs/weed/pb/worker_pb"
	pluginworker "github.com/seaweedfs/seaweedfs/weed/plugin/worker"
	ecstorage "github.com/seaweedfs/seaweedfs/weed/storage/erasure_coding"
	"github.com/seaweedfs/seaweedfs/weed/worker/tasks/erasure_coding"
	"github.com/stretchr/testify/require"
	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	"google.golang.org/protobuf/proto"
)

type topologySpec struct {
	name         string
	dataCenters  int
	racksPerDC   int
	nodesPerRack int
	diskTypes    []string
	replicas     int
	collection   string
}

type detectionCase struct {
	name                  string
	topology              topologySpec
	adminCollectionFilter string
	expectProposals       bool
}

func TestErasureCodingDetectionAcrossTopologies(t *testing.T) {
	cases := []detectionCase{
		{
			name: "single-dc-multi-rack",
			topology: topologySpec{
				name:         "single-dc-multi-rack",
				dataCenters:  1,
				racksPerDC:   2,
				nodesPerRack: 7,
				diskTypes:    []string{"hdd"},
				replicas:     1,
				collection:   "ec-test",
			},
			expectProposals: true,
		},
		{
			name: "multi-dc",
			topology: topologySpec{
				name:         "multi-dc",
				dataCenters:  2,
				racksPerDC:   1,
				nodesPerRack: 7,
				diskTypes:    []string{"hdd"},
				replicas:     1,
				collection:   "ec-test",
			},
			expectProposals: true,
		},
		{
			name: "multi-dc-multi-rack",
			topology: topologySpec{
				name:         "multi-dc-multi-rack",
				dataCenters:  2,
				racksPerDC:   2,
				nodesPerRack: 4,
				diskTypes:    []string{"hdd"},
				replicas:     1,
				collection:   "ec-test",
			},
			expectProposals: true,
		},
		{
			name: "mixed-disk-types",
			topology: topologySpec{
				name:         "mixed-disk-types",
				dataCenters:  1,
				racksPerDC:   2,
				nodesPerRack: 7,
				diskTypes:    []string{"hdd", "ssd"},
				replicas:     1,
				collection:   "ec-test",
			},
			expectProposals: true,
		},
		{
			name: "multi-replica-volume",
			topology: topologySpec{
				name:         "multi-replica-volume",
				dataCenters:  1,
				racksPerDC:   2,
				nodesPerRack: 7,
				diskTypes:    []string{"hdd"},
				replicas:     3,
				collection:   "ec-test",
			},
			expectProposals: true,
		},
		{
			name: "collection-filter-match",
			topology: topologySpec{
				name:         "collection-filter-match",
				dataCenters:  1,
				racksPerDC:   2,
				nodesPerRack: 7,
				diskTypes:    []string{"hdd"},
				replicas:     1,
				collection:   "filtered",
			},
			adminCollectionFilter: "filtered",
			expectProposals:       true,
		},
		{
			name: "collection-filter-mismatch",
			topology: topologySpec{
				name:         "collection-filter-mismatch",
				dataCenters:  1,
				racksPerDC:   2,
				nodesPerRack: 7,
				diskTypes:    []string{"hdd"},
				replicas:     1,
				collection:   "filtered",
			},
			adminCollectionFilter: "other",
			expectProposals:       false,
		},
		{
			name: "insufficient-disks",
			topology: topologySpec{
				name:         "insufficient-disks",
				dataCenters:  1,
				racksPerDC:   1,
				nodesPerRack: 2,
				diskTypes:    []string{"hdd"},
				replicas:     1,
				collection:   "ec-test",
			},
			expectProposals: false,
		},
	}
	for _, tc := range cases {
		tc := tc
		t.Run(tc.name, func(t *testing.T) {
			volumeID := uint32(7)
			response := buildVolumeListResponse(t, tc.topology, volumeID)
			master := pluginworkers.NewMasterServer(t, response)
			dialOption := grpc.WithTransportCredentials(insecure.NewCredentials())
			handler := erasure_coding.NewErasureCodingHandler(dialOption, t.TempDir())
			harness := pluginworkers.NewHarness(t, pluginworkers.HarnessConfig{
				WorkerOptions: pluginworker.WorkerOptions{
					GrpcDialOption: dialOption,
				},
				Handlers: []pluginworker.JobHandler{handler},
			})
			harness.WaitForJobType("erasure_coding")
			if tc.adminCollectionFilter != "" {
				err := harness.Plugin().SaveJobTypeConfig(&plugin_pb.PersistedJobTypeConfig{
					JobType: "erasure_coding",
					AdminConfigValues: map[string]*plugin_pb.ConfigValue{
						"collection_filter": {
							Kind: &plugin_pb.ConfigValue_StringValue{StringValue: tc.adminCollectionFilter},
						},
					},
				})
				require.NoError(t, err)
			}
			ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
			defer cancel()
			proposals, err := harness.Plugin().RunDetection(ctx, "erasure_coding", &plugin_pb.ClusterContext{
				MasterGrpcAddresses: []string{master.Address()},
			}, 10)
			require.NoError(t, err)
			if !tc.expectProposals {
				require.Empty(t, proposals)
				return
			}
			require.NotEmpty(t, proposals)
			proposal := proposals[0]
			require.Equal(t, "erasure_coding", proposal.JobType)
			paramsValue := proposal.Parameters["task_params_pb"]
			require.NotNil(t, paramsValue)
			params := &worker_pb.TaskParams{}
			require.NoError(t, proto.Unmarshal(paramsValue.GetBytesValue(), params))
			require.NotEmpty(t, params.Sources)
			require.Len(t, params.Targets, ecstorage.TotalShardsCount)
		})
	}
}

func buildVolumeListResponse(t *testing.T, spec topologySpec, volumeID uint32) *master_pb.VolumeListResponse {
	t.Helper()
	volumeSizeLimitMB := uint64(100)
	volumeSize := uint64(90) * 1024 * 1024
	volumeModifiedAt := time.Now().Add(-10 * time.Minute).Unix()
	diskTypes := spec.diskTypes
	if len(diskTypes) == 0 {
		diskTypes = []string{"hdd"}
	}
	replicas := spec.replicas
	if replicas <= 0 {
		replicas = 1
	}
	collection := spec.collection
	if collection == "" {
		collection = "ec-test"
	}
	var dataCenters []*master_pb.DataCenterInfo
	nodeIndex := 0
	replicasPlaced := 0
	for dc := 0; dc < spec.dataCenters; dc++ {
		var racks []*master_pb.RackInfo
		for rack := 0; rack < spec.racksPerDC; rack++ {
			var nodes []*master_pb.DataNodeInfo
			for n := 0; n < spec.nodesPerRack; n++ {
				nodeIndex++
				address := fmt.Sprintf("127.0.0.1:%d", 20000+nodeIndex)
				diskType := diskTypes[(nodeIndex-1)%len(diskTypes)]
				diskInfo := &master_pb.DiskInfo{
					DiskId:         0,
					MaxVolumeCount: 100,
					VolumeCount:    0,
					VolumeInfos:    []*master_pb.VolumeInformationMessage{},
				}
				if replicasPlaced < replicas {
					diskInfo.VolumeCount = 1
					diskInfo.VolumeInfos = append(diskInfo.VolumeInfos, &master_pb.VolumeInformationMessage{
						Id:               volumeID,
						Collection:       collection,
						DiskId:           0,
						Size:             volumeSize,
						DeletedByteCount: 0,
						ModifiedAtSecond: volumeModifiedAt,
						ReplicaPlacement: 1,
						ReadOnly:         false,
					})
					replicasPlaced++
				}
				nodes = append(nodes, &master_pb.DataNodeInfo{
					Id:        address,
					Address:   address,
					DiskInfos: map[string]*master_pb.DiskInfo{diskType: diskInfo},
				})
			}
			racks = append(racks, &master_pb.RackInfo{
				Id:            fmt.Sprintf("rack-%d", rack+1),
				DataNodeInfos: nodes,
			})
		}
		dataCenters = append(dataCenters, &master_pb.DataCenterInfo{
			Id:        fmt.Sprintf("dc-%d", dc+1),
			RackInfos: racks,
		})
	}
	return &master_pb.VolumeListResponse{
		VolumeSizeLimitMb: volumeSizeLimitMB,
		TopologyInfo: &master_pb.TopologyInfo{
			DataCenterInfos: dataCenters,
		},
	}
}