* refactor(worker): co-locate plugin handlers with their task packages
Move every per-task plugin handler from weed/plugin/worker/ into the
matching weed/worker/tasks/<name>/ package, so each task owns its
detection, scheduling, execution, and plugin handler in one place.
Step 0 (within pluginworker, no behavior change): extract shared helpers
that previously lived inside individual handler files into dedicated
files and export the ones now consumed across packages.
- activity.go: BuildExecutorActivity, BuildDetectorActivity
- config.go: ReadStringConfig/Double/Int64/Bytes/StringList, MapTaskPriority
- interval.go: ShouldSkipDetectionByInterval
- volume_state.go: VolumeState + consts, FilterMetricsByVolumeState/Location
- collection_filter.go: CollectionFilterMode + consts
- volume_metrics.go: export CollectVolumeMetricsFromMasters,
MasterAddressCandidates, FetchVolumeList
- testing_senders_test.go: shared test stubs
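For illustration, a minimal self-contained sketch of the interval-skip idea behind interval.go; the real ShouldSkipDetectionByInterval may take different arguments, so treat the signature here as an assumption:

package main

import (
    "fmt"
    "time"
)

// shouldSkipDetectionByInterval reports whether the previous detection pass
// ran recently enough that this cycle can be skipped.
func shouldSkipDetectionByInterval(lastRun time.Time, interval time.Duration) bool {
    return !lastRun.IsZero() && time.Since(lastRun) < interval
}

func main() {
    lastRun := time.Now().Add(-30 * time.Second)
    // Ran 30 seconds ago with a 1-minute interval: skip this cycle.
    fmt.Println(shouldSkipDetectionByInterval(lastRun, time.Minute))
}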
Phase 1: move the per-task plugin handlers (and the iceberg subpackage)
into their task packages.
weed/plugin/worker/vacuum_handler.go -> weed/worker/tasks/vacuum/plugin_handler.go
weed/plugin/worker/ec_balance_handler.go -> weed/worker/tasks/ec_balance/plugin_handler.go
weed/plugin/worker/erasure_coding_handler.go -> weed/worker/tasks/erasure_coding/plugin_handler.go
weed/plugin/worker/volume_balance_handler.go -> weed/worker/tasks/balance/plugin_handler.go
weed/plugin/worker/iceberg/ -> weed/worker/tasks/iceberg/
weed/plugin/worker/handlers/handlers.go now blank-imports all five
task subpackages so their init() registrations fire.
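After the move, handlers.go reduces to blank imports, roughly as follows (import paths taken from the move list above; the exact file contents may differ):

package handlers

// Blank imports pull in each task package purely for its init() side effects,
// which register that task's plugin handler.
import (
    _ "github.com/seaweedfs/seaweedfs/weed/worker/tasks/balance"
    _ "github.com/seaweedfs/seaweedfs/weed/worker/tasks/ec_balance"
    _ "github.com/seaweedfs/seaweedfs/weed/worker/tasks/erasure_coding"
    _ "github.com/seaweedfs/seaweedfs/weed/worker/tasks/iceberg"
    _ "github.com/seaweedfs/seaweedfs/weed/worker/tasks/vacuum"
)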
weed/command/mini.go and the worker tests construct the handler with
vacuum.DefaultMaxExecutionConcurrency (the constant moved with the
vacuum handler).
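The construction now looks roughly like the line below; only the use of the moved constant comes from this commit, the exact argument list of NewVacuumHandler is an assumption:

// Hypothetical call shape in weed/command/mini.go and the worker tests.
handler := vacuum.NewVacuumHandler(dialOption, vacuum.DefaultMaxExecutionConcurrency)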
admin_script remains in weed/plugin/worker/ because there is no
underlying weed/worker/tasks/admin_script/ package to merge with.
* refactor(worker): update test/plugin_workers imports for moved handlers
Three handler constructors moved out of pluginworker into their task
packages — update the integration test files in test/plugin_workers/
to import from the new locations:
pluginworker.NewVacuumHandler -> vacuum.NewVacuumHandler
pluginworker.NewVolumeBalanceHandler -> balance.NewVolumeBalanceHandler
pluginworker.NewErasureCodingHandler -> erasure_coding.NewErasureCodingHandler
The pluginworker import is kept where the file still uses
pluginworker.WorkerOptions / pluginworker.JobHandler.
* refactor(worker): update test/s3tables iceberg import path
The iceberg subpackage moved from weed/plugin/worker/iceberg/ to
weed/worker/tasks/iceberg/. test/s3tables/maintenance/maintenance_integration_test.go
still imported the old path, breaking S3 Tables / RisingWave / Trino /
Spark / Iceberg-catalog / STS integration test builds.
Mirrors the OSS-side fix needed by every job in the run that
transitively imports test/s3tables/maintenance.
* chore: gofmt PR-touched files
The S3 Tables Format Check job runs `gofmt -l` over weed/s3api/s3tables
and test/s3tables, then fails if anything is unformatted. Files this
PR moved or modified had import-grouping and trailing-whitespace issues
introduced by perl-based renames; reformat them with `gofmt -w`.
Touched files:
test/plugin_workers/erasure_coding/{detection,execution}_test.go
test/s3tables/maintenance/maintenance_integration_test.go
weed/plugin/worker/handlers/handlers.go
weed/worker/tasks/{balance,ec_balance,erasure_coding,vacuum}/plugin_handler*.go
* refactor(worker): bounds-checked int conversions for plugin config values
CodeQL flagged 18 go/incorrect-integer-conversion warnings on the moved
plugin handler files: results of pluginworker.ReadInt64Config (which
ultimately calls strconv.ParseInt with bit size 64) were being narrowed
to int32/uint32/int without an upper-bound check, so a malicious or
malformed admin/worker config value could overflow the target type.
Add three helpers in weed/plugin/worker/config.go that wrap
ReadInt64Config and clamp out-of-range values back to the caller's
fallback:
ReadInt32Config (math.MinInt32 .. math.MaxInt32)
ReadUint32Config (0 .. math.MaxUint32)
ReadIntConfig (math.MinInt32 .. math.MaxInt32, platform-portable)
Update each flagged call site in the four moved task packages to use
the bounds-checked helper. For protobuf uint32 fields (volume IDs)
the variable type also becomes uint32, removing the trailing
uint32(volumeID) casts and changing the "missing volume_id" check
from `<= 0` to `== 0`.
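A sketch of the three wrappers as described above; the config-map type, the ReadInt64Config signature, and the package name shown here are assumptions (the real code lives in weed/plugin/worker/config.go):

package pluginworker // name follows the import alias used in this PR; illustrative only

import "math"

// ReadInt64Config is assumed to parse the named entry via strconv.ParseInt
// (bit size 64) and return fallback when the key is missing or malformed;
// this stub only stands in for the real parser.
func ReadInt64Config(cfg map[string]string, key string, fallback int64) int64 {
    return fallback
}

// ReadInt32Config clamps out-of-range values back to the caller's fallback.
func ReadInt32Config(cfg map[string]string, key string, fallback int32) int32 {
    v := ReadInt64Config(cfg, key, int64(fallback))
    if v < math.MinInt32 || v > math.MaxInt32 {
        return fallback
    }
    return int32(v)
}

// ReadUint32Config rejects negative and over-range values.
func ReadUint32Config(cfg map[string]string, key string, fallback uint32) uint32 {
    v := ReadInt64Config(cfg, key, int64(fallback))
    if v < 0 || v > math.MaxUint32 {
        return fallback
    }
    return uint32(v)
}

// ReadIntConfig stays within int32 bounds so the result is portable to
// 32-bit platforms where int is 32 bits wide.
func ReadIntConfig(cfg map[string]string, key string, fallback int) int {
    v := ReadInt64Config(cfg, key, int64(fallback))
    if v < math.MinInt32 || v > math.MaxInt32 {
        return fallback
    }
    return int(v)
}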
Touched files:
weed/plugin/worker/config.go
weed/worker/tasks/balance/plugin_handler.go
weed/worker/tasks/erasure_coding/plugin_handler.go
weed/worker/tasks/vacuum/plugin_handler.go
* refactor(worker): use ReadIntConfig for clamped derive-worker-config helpers
CodeQL still flagged three call sites where ReadInt64Config was being
narrowed to int after a value-range clamp (max_concurrent_moves <= 50,
batch_size <= 100, min_server_count >= 2). The clamp is correct but
CodeQL's flow analysis didn't recognize the bound, so it flagged them
as unbounded narrowing.
Switch those three sites to ReadIntConfig (already int32-bounded by the
helper) and drop the now-redundant int64 intermediate variables.
Also drop the now-unused `> math.MaxInt32` clamp in
ec_balance.deriveECBalanceWorkerConfig (the helper covers it).
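The pattern, roughly, continuing the sketch above (config key name and the <= 50 bound come from this commit; the fallback of 10 and the config-map shape are assumptions; both versions are wrapped in throwaway functions so they can sit side by side):

// Before (flagged by CodeQL): the clamp bounds the value, but the query keys
// on the int64 -> int narrowing below it.
func maxConcurrentMovesBefore(cfg map[string]string) int {
    maxMoves := ReadInt64Config(cfg, "max_concurrent_moves", 10)
    if maxMoves > 50 {
        maxMoves = 50
    }
    return int(maxMoves)
}

// After: ReadIntConfig is already int32-bounded, so only the value-range
// clamp remains and no narrowing conversion is needed.
func maxConcurrentMovesAfter(cfg map[string]string) int {
    maxMoves := ReadIntConfig(cfg, "max_concurrent_moves", 10)
    if maxMoves > 50 {
        maxMoves = 50
    }
    return maxMoves
}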
package volume_balance_test

import (
    "context"
    "testing"
    "time"

    pluginworkers "github.com/seaweedfs/seaweedfs/test/plugin_workers"
    "github.com/seaweedfs/seaweedfs/weed/pb/master_pb"
    "github.com/seaweedfs/seaweedfs/weed/pb/plugin_pb"
    "github.com/seaweedfs/seaweedfs/weed/pb/worker_pb"
    pluginworker "github.com/seaweedfs/seaweedfs/weed/plugin/worker"
    "github.com/seaweedfs/seaweedfs/weed/worker/tasks/balance"
    "github.com/stretchr/testify/require"
    "google.golang.org/grpc"
    "google.golang.org/grpc/credentials/insecure"
    "google.golang.org/protobuf/proto"
)

func TestVolumeBalanceDetectionIntegration(t *testing.T) {
    response := buildBalanceVolumeListResponse(t)
    master := pluginworkers.NewMasterServer(t, response)

    dialOption := grpc.WithTransportCredentials(insecure.NewCredentials())
    handler := balance.NewVolumeBalanceHandler(dialOption)
    harness := pluginworkers.NewHarness(t, pluginworkers.HarnessConfig{
        WorkerOptions: pluginworker.WorkerOptions{
            GrpcDialOption: dialOption,
        },
        Handlers: []pluginworker.JobHandler{handler},
    })
    harness.WaitForJobType("volume_balance")

    ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
    defer cancel()

    proposals, err := harness.Plugin().RunDetection(ctx, "volume_balance", &plugin_pb.ClusterContext{
        MasterGrpcAddresses: []string{master.Address()},
    }, 10)
    require.NoError(t, err)
    // With default batch_size=20 and 10 overloaded volumes vs 1 underloaded,
    // all moves are grouped into a single batch proposal.
    require.Len(t, proposals, 1, "expected exactly one batch proposal")

    proposal := proposals[0]
    require.Equal(t, "volume_balance", proposal.JobType)
    paramsValue := proposal.Parameters["task_params_pb"]
    require.NotNil(t, paramsValue)

    params := &worker_pb.TaskParams{}
    require.NoError(t, proto.Unmarshal(paramsValue.GetBytesValue(), params))

    bp := params.GetBalanceParams()
    require.NotNil(t, bp, "expected BalanceParams in batch proposal")
    require.Greater(t, len(bp.Moves), 1, "batch proposal should contain multiple moves")
    for _, move := range bp.Moves {
        require.NotZero(t, move.VolumeId)
        require.NotEmpty(t, move.SourceNode)
        require.NotEmpty(t, move.TargetNode)
    }
}

func buildBalanceVolumeListResponse(t *testing.T) *master_pb.VolumeListResponse {
    t.Helper()

    volumeSizeLimitMB := uint64(100)
    volumeModifiedAt := time.Now().Add(-2 * time.Hour).Unix()

    overloadedVolumes := make([]*master_pb.VolumeInformationMessage, 0, 10)
    for i := 0; i < 10; i++ {
        volumeID := uint32(1000 + i)
        overloadedVolumes = append(overloadedVolumes, &master_pb.VolumeInformationMessage{
            Id:               volumeID,
            Collection:       "balance",
            DiskId:           0,
            Size:             20 * 1024 * 1024,
            DeletedByteCount: 0,
            ModifiedAtSecond: volumeModifiedAt,
            ReplicaPlacement: 1,
            ReadOnly:         false,
        })
    }

    underloadedVolumes := []*master_pb.VolumeInformationMessage{
        {
            Id:               2000,
            Collection:       "balance",
            DiskId:           0,
            Size:             20 * 1024 * 1024,
            DeletedByteCount: 0,
            ModifiedAtSecond: volumeModifiedAt,
            ReplicaPlacement: 1,
            ReadOnly:         false,
        },
    }

    overloadedDisk := &master_pb.DiskInfo{
        DiskId:         0,
        MaxVolumeCount: 100,
        VolumeCount:    int64(len(overloadedVolumes)),
        VolumeInfos:    overloadedVolumes,
    }

    underloadedDisk := &master_pb.DiskInfo{
        DiskId:         0,
        MaxVolumeCount: 100,
        VolumeCount:    int64(len(underloadedVolumes)),
        VolumeInfos:    underloadedVolumes,
    }

    overloadedNode := &master_pb.DataNodeInfo{
        Id:        "10.0.0.1:8080",
        Address:   "10.0.0.1:8080",
        DiskInfos: map[string]*master_pb.DiskInfo{"hdd": overloadedDisk},
    }

    underloadedNode := &master_pb.DataNodeInfo{
        Id:        "10.0.0.2:8080",
        Address:   "10.0.0.2:8080",
        DiskInfos: map[string]*master_pb.DiskInfo{"hdd": underloadedDisk},
    }

    rack := &master_pb.RackInfo{
        Id:            "rack-1",
        DataNodeInfos: []*master_pb.DataNodeInfo{overloadedNode, underloadedNode},
    }

    return &master_pb.VolumeListResponse{
        VolumeSizeLimitMb: volumeSizeLimitMB,
        TopologyInfo: &master_pb.TopologyInfo{
            DataCenterInfos: []*master_pb.DataCenterInfo{
                {
                    Id:        "dc-1",
                    RackInfos: []*master_pb.RackInfo{rack},
                },
            },
        },
    }
}