* test(s3/lifecycle): integration coverage for versioning + filters
First integration-test bundle building on the existing single-test
backdating harness. Each scenario follows the same shape: create
bucket, set lifecycle, PUT object, backdate mtime via filer
UpdateEntry, run the shell command for one shard sweep, assert
S3-side state.
Five new tests:
- TestLifecycleVersionedBucketCreatesDeleteMarker: Expiration on a
versioned bucket must produce a delete marker (the latest entry after
the worker runs is a marker) AND keep the original version directly
addressable by versionId. ListObjectVersions confirms IsLatest=true on
the marker.
- TestLifecycleNoncurrentVersionExpiration: NoncurrentVersionExpiration
fires only on demoted versions. PUT v1, PUT v2 (so v1 → noncurrent),
backdate v1, run worker. v1 must be gone, v2 still current.
- TestLifecycleExpiredDeleteMarkerCleanup: combined rule (noncurrent +
expired-delete-marker) cleans up a sole-survivor marker. PUT v1,
DELETE (creates marker), backdate both, run worker. Every version
AND marker must be gone for the key.
- TestLifecycleDisabledRuleSkipsObject: rule with Status=Disabled
must not produce dispatches even on a backdated match. Negative
test for the engine's enabled-status gate.
- TestLifecycleTagFilter: rule with And{Prefix, Tag} only matches
objects carrying the tag. Two backdated objects (one tagged, one
not) — only the tagged one is removed.
Helpers extracted to keep each test focused: putVersioningEnabled,
putNoncurrentExpirationLifecycle, putExpiredDeleteMarkerLifecycle,
backdateVersionedMtime (ages a specific .versions/v_<id> entry),
runLifecycleShard (one-shot shell invocation with FATAL guard).
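For reference, a minimal sketch of what the backdating step could look
like, assuming a filer gRPC client and the
/buckets/<bucket>/<key>.versions/v_<versionId> layout named above; the
real harness's wiring may differ:

package lifecycle_test

import (
	"context"
	"fmt"
	"time"

	"github.com/seaweedfs/seaweedfs/weed/pb/filer_pb"
)

// backdateVersionedMtime ages a specific .versions/v_<id> entry by
// rewriting its Mtime through the filer's UpdateEntry RPC. Sketch only:
// the /buckets root and version-file naming are assumptions taken from
// the commit message above.
func backdateVersionedMtime(ctx context.Context, client filer_pb.SeaweedFilerClient,
	bucket, key, versionID string, age time.Duration) error {
	dir := fmt.Sprintf("/buckets/%s/%s.versions", bucket, key)
	name := "v_" + versionID
	// Fetch the live entry so the update preserves chunks and extended attrs.
	lookup, err := client.LookupDirectoryEntry(ctx, &filer_pb.LookupDirectoryEntryRequest{
		Directory: dir,
		Name:      name,
	})
	if err != nil {
		return err
	}
	entry := lookup.Entry
	if entry == nil || entry.Attributes == nil {
		return fmt.Errorf("no entry for %s/%s", dir, name)
	}
	entry.Attributes.Mtime = time.Now().Add(-age).Unix() // age the version's clock
	_, err = client.UpdateEntry(ctx, &filer_pb.UpdateEntryRequest{
		Directory: dir,
		Entry:     entry,
	})
	return err
}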
* test(s3/lifecycle): tighten noncurrent expiration diagnostics
Local run showed TestLifecycleNoncurrentVersionExpiration failing
with a bare 404 on HEAD(latest), not enough to tell whether v2 was
deleted, the bare-key pointer was removed, or a delete marker was
synthesized. Strengthen the test to:
- HEAD by versionId=v2 first, so we pin "v2 file still on disk"
separately from "the latest pointer resolves to v2"
- on HEAD(latest) failure, log ListObjectVersions output (versions +
markers, with IsLatest) so the next failure shows which side the
bug is on rather than just NotFound
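A hedged sketch of that two-step assertion against aws-sdk-go-v2; the
helper name and shape are illustrative, and it assumes a recent SDK
where IsLatest is a *bool:

package lifecycle_test

import (
	"context"
	"testing"

	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/service/s3"
)

// assertV2StillCurrent pins the two halves separately: first that the
// v2 bytes are still addressable, then that the bare key resolves. On
// failure it dumps versions and markers so the log shows which side broke.
func assertV2StillCurrent(ctx context.Context, t *testing.T, s3c *s3.Client, bucket, key, v2 string) {
	t.Helper()
	// Pin "v2 file still on disk" independently of the latest pointer.
	if _, err := s3c.HeadObject(ctx, &s3.HeadObjectInput{
		Bucket: aws.String(bucket), Key: aws.String(key), VersionId: aws.String(v2),
	}); err != nil {
		t.Fatalf("HEAD versionId=%s: %v", v2, err)
	}
	// Now the latest pointer; on failure, log what ListObjectVersions sees.
	if _, err := s3c.HeadObject(ctx, &s3.HeadObjectInput{
		Bucket: aws.String(bucket), Key: aws.String(key),
	}); err != nil {
		listOut, listErr := s3c.ListObjectVersions(ctx, &s3.ListObjectVersionsInput{
			Bucket: aws.String(bucket), Prefix: aws.String(key),
		})
		if listErr != nil || listOut == nil {
			t.Fatalf("HEAD(latest): %v (ListObjectVersions also failed: %v)", err, listErr)
		}
		for _, v := range listOut.Versions {
			t.Logf("version %s IsLatest=%v", aws.ToString(v.VersionId), aws.ToBool(v.IsLatest))
		}
		for _, m := range listOut.DeleteMarkers {
			t.Logf("marker  %s IsLatest=%v", aws.ToString(m.VersionId), aws.ToBool(m.IsLatest))
		}
		t.Fatalf("HEAD(latest): %v", err)
	}
}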
* test(s3/lifecycle): integration coverage for AbortIncompleteMultipartUpload
Exercises the lifecycleAbortMPU handler path that the prefix-based
expiration tests can't reach — it keys off .uploads/<id>/ directory
events rather than regular object events, and the dispatcher uses a
different RPC path (rm on the .uploads/<id>/ folder).
Setup: AbortIncompleteMultipartUpload rule with DaysAfterInitiation=1,
CreateMultipartUpload, UploadPart (so the directory carries the
right shape), backdate the .uploads/<uploadID>/ directory entry 30
days, run the worker. The upload must drop out of
ListMultipartUploads.
Helpers added: putAbortMPULifecycle, backdateUploadDir.
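A sketch of what putAbortMPULifecycle could look like against a recent
aws-sdk-go-v2 (where LifecycleRuleFilter is a struct); the in-tree
helper may differ:

package lifecycle_test

import (
	"context"

	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/service/s3"
	"github.com/aws/aws-sdk-go-v2/service/s3/types"
)

// putAbortMPULifecycle installs the rule the test needs: abort any
// multipart upload one day after initiation.
func putAbortMPULifecycle(ctx context.Context, s3c *s3.Client, bucket string) error {
	_, err := s3c.PutBucketLifecycleConfiguration(ctx, &s3.PutBucketLifecycleConfigurationInput{
		Bucket: aws.String(bucket),
		LifecycleConfiguration: &types.BucketLifecycleConfiguration{
			Rules: []types.LifecycleRule{{
				ID:     aws.String("abort-mpu"),
				Status: types.ExpirationStatusEnabled,
				Filter: &types.LifecycleRuleFilter{Prefix: aws.String("")},
				AbortIncompleteMultipartUpload: &types.AbortIncompleteMultipartUpload{
					DaysAfterInitiation: aws.Int32(1),
				},
			}},
		},
	})
	return err
}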
* test(s3/lifecycle): integration coverage for NewerNoncurrentVersions
NewerNoncurrentVersions=N keeps the N most recent noncurrent versions
and expires the rest. Distinct from per-version NoncurrentDays —
depends on per-version rank, not just per-version age — and routes
through routePointerTransition's "needs full expansion" path.
Setup: PUT v1, v2, v3, v4 on a versioned bucket (v4 current; v1-v3
noncurrent), backdate v1+v2+v3 so all satisfy the NoncurrentDays>=1
floor, run the worker. Expect v1+v2 expired (older noncurrent),
v3 (newest noncurrent within keep=1) and v4 (current) preserved.
Helper added: putNewerNoncurrentLifecycle.
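A possible shape for putNewerNoncurrentLifecycle, sketched against
aws-sdk-go-v2; the rule ID and empty prefix are illustrative:

package lifecycle_test

import (
	"context"

	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/service/s3"
	"github.com/aws/aws-sdk-go-v2/service/s3/types"
)

// putNewerNoncurrentLifecycle: keep the single newest noncurrent
// version, expire older ones once they clear the 1-day floor.
func putNewerNoncurrentLifecycle(ctx context.Context, s3c *s3.Client, bucket string) error {
	_, err := s3c.PutBucketLifecycleConfiguration(ctx, &s3.PutBucketLifecycleConfigurationInput{
		Bucket: aws.String(bucket),
		LifecycleConfiguration: &types.BucketLifecycleConfiguration{
			Rules: []types.LifecycleRule{{
				ID:     aws.String("newer-noncurrent"),
				Status: types.ExpirationStatusEnabled,
				Filter: &types.LifecycleRuleFilter{Prefix: aws.String("")},
				NoncurrentVersionExpiration: &types.NoncurrentVersionExpiration{
					NoncurrentDays:          aws.Int32(1),
					NewerNoncurrentVersions: aws.Int32(1), // keep=1: v3 survives, v1+v2 expire
				},
			}},
		},
	})
	return err
}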
* test(s3/lifecycle): integration coverage for suspended-versioning Expiration
Suspended versioning takes a distinct code path in lifecycleDispatch:
the VersioningSuspended branch first deletes the null version (via
deleteSpecificObjectVersion(versionId="null")) and then writes a
fresh delete marker on top. Other branches (Enabled → only writes a
marker; Off → straight rm) miss this two-step.
Setup: enable versioning, PUT v1 (real versionId), suspend
versioning, PUT again (creates the null version, demotes v1 to
noncurrent), set the Expiration rule, backdate the null at the
bare path. Expect: latest is now a fresh delete marker, the
"null" version is gone from ListObjectVersions, and v1 (noncurrent
under Enabled) still addressable directly — suspended Expiration
must only touch the null, not other versions.
Helper added: putVersioningSuspended.
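putVersioningSuspended is a thin wrapper; a sketch against aws-sdk-go-v2:

package lifecycle_test

import (
	"context"

	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/service/s3"
	"github.com/aws/aws-sdk-go-v2/service/s3/types"
)

// putVersioningSuspended flips the bucket from Enabled to Suspended so
// the next PUT writes the "null" version the test backdates.
func putVersioningSuspended(ctx context.Context, s3c *s3.Client, bucket string) error {
	_, err := s3c.PutBucketVersioning(ctx, &s3.PutBucketVersioningInput{
		Bucket: aws.String(bucket),
		VersioningConfiguration: &types.VersioningConfiguration{
			Status: types.BucketVersioningStatusSuspended,
		},
	})
	return err
}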
* test(s3/lifecycle): integration coverage for multi-bucket sweep
A single shell-driven shard sweep must process every bucket carrying
lifecycle config, not just the first one alphabetically. Pinned
because the scheduler iterates the buckets directory and a regression
that returns early after the first match would silently disable
lifecycle for every later bucket.
Two buckets, each with their own prefix-expiration rule and a
backdated object. Both must be expired after the same sweep.
* test(s3/lifecycle): integration coverage for ObjectSizeGreaterThan filter
ObjectSizeGreaterThan is a strict > gate (filterAllows uses
ev.Size <= rule.FilterSizeGreaterThan to reject). Pinned at the
boundary: an object whose size equals the threshold must remain;
only an object strictly larger expires. Catches a > vs >= flip.
Two backdated objects on the same prefix, sizes 100 and 150 with
threshold=100 — boundary survives, larger expires.
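The gate itself reduces to one comparison; a standalone restatement,
with field names taken from the commit message rather than the engine's
real types:

package s3lifecycle_test

// sizeGateAllows restates the filter's size gate as the test pins it:
// strictly greater, so size == threshold is rejected.
func sizeGateAllows(evSize, filterSizeGreaterThan int64) bool {
	return evSize > filterSizeGreaterThan
}

// Boundary behavior the test asserts:
//   sizeGateAllows(100, 100) == false // equal survives
//   sizeGateAllows(150, 100) == true  // strictly larger expires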
* test(s3/lifecycle): scrub bucket lifecycle config + versions on cleanup
Tests share one weed mini server. Three pollution modes were producing
order-dependent failures:
- A later test's shard sweep would still load the prior test's
lifecycle config (the worker reads every bucket's XML from filer
state, and DeleteBucket alone doesn't drop lifecycle config
cleanly on this codebase).
- Versioned-bucket tests left versions + delete markers behind that
ListObjectsV2 can't see, so the existing best-effort empty-then-
delete didn't actually empty those buckets.
- The AbortMPU test intentionally leaves an in-flight upload; without
an explicit AbortMultipartUpload the bucket DELETE hits NotEmpty.
Cleanup now runs DeleteBucketLifecycle, ListObjectVersions →
DeleteObject(versionId), ListObjectsV2 → DeleteObject (catches what
ListObjectVersions missed), ListMultipartUploads → AbortMultipartUpload,
then DeleteBucket. Best-effort throughout so a half-torn-down bucket
doesn't fail the cleanup chain.
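A best-effort sketch of that teardown order against aws-sdk-go-v2
(pagination elided for brevity; every step ignores errors by design):

package lifecycle_test

import (
	"context"

	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/service/s3"
)

// cleanupBucket tears a test bucket down in the order described above;
// each step is best-effort so a half-torn-down bucket can't break the chain.
func cleanupBucket(ctx context.Context, s3c *s3.Client, bucket string) {
	b := aws.String(bucket)
	// 1. Drop lifecycle config so later tests' sweeps don't load it.
	_, _ = s3c.DeleteBucketLifecycle(ctx, &s3.DeleteBucketLifecycleInput{Bucket: b})
	// 2. Versions and delete markers that ListObjectsV2 can't see.
	if out, err := s3c.ListObjectVersions(ctx, &s3.ListObjectVersionsInput{Bucket: b}); err == nil {
		for _, v := range out.Versions {
			_, _ = s3c.DeleteObject(ctx, &s3.DeleteObjectInput{Bucket: b, Key: v.Key, VersionId: v.VersionId})
		}
		for _, m := range out.DeleteMarkers {
			_, _ = s3c.DeleteObject(ctx, &s3.DeleteObjectInput{Bucket: b, Key: m.Key, VersionId: m.VersionId})
		}
	}
	// 3. Anything ListObjectVersions missed.
	if out, err := s3c.ListObjectsV2(ctx, &s3.ListObjectsV2Input{Bucket: b}); err == nil {
		for _, o := range out.Contents {
			_, _ = s3c.DeleteObject(ctx, &s3.DeleteObjectInput{Bucket: b, Key: o.Key})
		}
	}
	// 4. In-flight multipart uploads block DeleteBucket with NotEmpty.
	if out, err := s3c.ListMultipartUploads(ctx, &s3.ListMultipartUploadsInput{Bucket: b}); err == nil {
		for _, u := range out.Uploads {
			_, _ = s3c.AbortMultipartUpload(ctx, &s3.AbortMultipartUploadInput{
				Bucket: b, Key: u.Key, UploadId: u.UploadId,
			})
		}
	}
	// 5. Finally the bucket itself.
	_, _ = s3c.DeleteBucket(ctx, &s3.DeleteBucketInput{Bucket: b})
}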
* test(s3/lifecycle): backdate both versions for NoncurrentDays clock
Per codex review: NoncurrentDays is clocked from the SUCCESSOR
version's mtime (when the displaced version became noncurrent), not
from the displaced version's own mtime. Backdating only v1 left the
clock (v2's mtime) at "now" and the rule never fired — the test was
wrong, not the production path.
Backdate v1=31d and v2=30d so v1 sits past the 1-day threshold
relative to v2, the noncurrent rule fires, and v2 stays current.
* test(s3/lifecycle): assert specific NotFound on multi-bucket deletion
Per codex review: TestLifecycleMultipleBucketsInOneSweep treated any
HeadObject error as "deleted", which lets a transport failure or
dead endpoint mask a real bug. Recognize NoSuchKey/NotFound/HTTP-404
specifically via a small isS3NotFound helper so the assertion
actually proves deletion happened, not just that the call broke.
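A sketch of what isS3NotFound could look like with aws-sdk-go-v2 error
types; the in-tree helper may differ:

package lifecycle_test

import (
	"errors"
	"net/http"

	awshttp "github.com/aws/aws-sdk-go-v2/aws/transport/http"
	"github.com/aws/aws-sdk-go-v2/service/s3/types"
)

// isS3NotFound recognizes the three spellings the commit names
// (NoSuchKey, NotFound, bare HTTP 404) and nothing else, so transport
// or auth failures still fail the test.
func isS3NotFound(err error) bool {
	if err == nil {
		return false
	}
	var noSuchKey *types.NoSuchKey
	var notFound *types.NotFound
	if errors.As(err, &noSuchKey) || errors.As(err, &notFound) {
		return true
	}
	var respErr *awshttp.ResponseError
	return errors.As(err, &respErr) && respErr.HTTPStatusCode() == http.StatusNotFound
}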
* test(s3/lifecycle): gofmt size-filter test
* test(s3/lifecycle): integration coverage for Object Lock skip
Object Lock retention must override the lifecycle rule. The handler's
enforceObjectLockProtections check (s3api_internal_lifecycle.go:47)
returns an error when retention is active; the dispatcher then
classifies the outcome as SKIPPED_OBJECT_LOCK and the object stays.
No existing integration test reaches that outcome.
Setup: bucket created with ObjectLockEnabledForBucket=true, expiration
rule on prefix "lock/", two backdated objects under the same prefix —
one with GOVERNANCE retention until 1h from now, one without. After
the worker runs, the unlocked object expires (positive control); the
locked one survives.
Custom cleanup uses BypassGovernanceRetention so the test can drop
the locked version when the test finishes — otherwise the retention
window keeps the bucket from being deleted.
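Sketches of the lock setup and the bypass delete against a recent
aws-sdk-go-v2; bucket and key names are illustrative:

package lifecycle_test

import (
	"context"
	"time"

	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/service/s3"
	"github.com/aws/aws-sdk-go-v2/service/s3/types"
)

// createLockedBucket enables Object Lock at creation, which also
// implicitly enables versioning (hence the versioned backdate path).
func createLockedBucket(ctx context.Context, s3c *s3.Client, bucket string) error {
	_, err := s3c.CreateBucket(ctx, &s3.CreateBucketInput{
		Bucket:                     aws.String(bucket),
		ObjectLockEnabledForBucket: aws.Bool(true),
	})
	return err
}

// lockOneObject puts a 1-hour GOVERNANCE retention on the positive
// object so the worker must classify it SKIPPED_OBJECT_LOCK.
func lockOneObject(ctx context.Context, s3c *s3.Client, bucket, key string) error {
	_, err := s3c.PutObjectRetention(ctx, &s3.PutObjectRetentionInput{
		Bucket: aws.String(bucket),
		Key:    aws.String(key),
		Retention: &types.ObjectLockRetention{
			Mode:            types.ObjectLockRetentionModeGovernance,
			RetainUntilDate: aws.Time(time.Now().Add(time.Hour)),
		},
	})
	return err
}

// deleteLockedVersion is the custom-cleanup step: GOVERNANCE (unlike
// COMPLIANCE) can be bypassed with the right flag.
func deleteLockedVersion(ctx context.Context, s3c *s3.Client, bucket, key, versionID string) {
	_, _ = s3c.DeleteObject(ctx, &s3.DeleteObjectInput{
		Bucket:                    aws.String(bucket),
		Key:                       aws.String(key),
		VersionId:                 aws.String(versionID),
		BypassGovernanceRetention: aws.Bool(true),
	})
}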
* test(s3/lifecycle): integration coverage for config update between sweeps
An operator changes the lifecycle rule between two shell-driven
sweeps. The second sweep must respect the NEW rule, not a cached
copy of the old one. Each runLifecycleShard invocation spawns a
fresh weed shell subprocess, so cached engine state from a previous
sweep doesn't persist — but a regression that caches rules across
PutBucketLifecycleConfiguration calls within the S3 server itself
would still surface here.
Sweep 1: rule prefix="first/", PUT + backdate firstKey, run worker
→ firstKey expires.
Update rule to prefix="second/", PUT + backdate secondKey AND a
new key under the OLD prefix ("first/post-update.txt"). Sweep 2
must expire only the second-prefix object; the post-update old-
prefix one must survive — config replacement, not merge.
* test(s3/lifecycle): integration coverage for ExpirationDate (past)
Rules with Expiration{Date: <past>} route through ScanAtDate in the
engine (decideMode's ActionKindExpirationDate case) — a separate
compile + dispatch branch from the EventDriven delay-group path the
Days-based tests exercise.
Past date + in-prefix object → must expire. Out-of-prefix object →
must remain. Object also backdated as defense-in-depth so the
assertion doesn't depend on whether the dispatcher consults
MinTriggerAge for date kinds.
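A sketch of the past-date rule against aws-sdk-go-v2; note S3 expects
the date at midnight UTC, and the rule ID and prefix are illustrative:

package lifecycle_test

import (
	"context"
	"time"

	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/service/s3"
	"github.com/aws/aws-sdk-go-v2/service/s3/types"
)

// putPastDateLifecycle installs a date-based Expiration rule whose date
// has already passed, so in-prefix objects are immediately eligible.
func putPastDateLifecycle(ctx context.Context, s3c *s3.Client, bucket string) error {
	_, err := s3c.PutBucketLifecycleConfiguration(ctx, &s3.PutBucketLifecycleConfigurationInput{
		Bucket: aws.String(bucket),
		LifecycleConfiguration: &types.BucketLifecycleConfiguration{
			Rules: []types.LifecycleRule{{
				ID:     aws.String("expire-by-date"),
				Status: types.ExpirationStatusEnabled,
				Filter: &types.LifecycleRuleFilter{Prefix: aws.String("dated/")},
				Expiration: &types.LifecycleExpiration{
					Date: aws.Time(time.Date(2020, 1, 1, 0, 0, 0, 0, time.UTC)),
				},
			}},
		},
	})
	return err
}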
* test(s3/lifecycle): integration coverage for bootstrap walk on existing objects
Production scenario: operator enables lifecycle on a bucket that
already holds objects from before the policy. The worker must
discover them via the bootstrap walk (BucketBootstrapper) — there
were no meta-log events to observe because the objects predate the
rule. Without the bootstrap path, only NEW writes would ever match.
Setup: PUT 5 objects (no lifecycle config yet) + 1 out-of-prefix
survivor, backdate all, THEN set the Expiration rule, run the
worker. Every in-prefix pre-existing object must be expired; the
out-of-prefix one must remain.
* test(s3/lifecycle): integration coverage for DeleteBucketLifecycle stops dispatching
Operator UX: after DeleteBucketLifecycle, the worker must observe the
removal on the next sweep and stop expiring objects under the now-gone
rule. A regression that caches old configs across
PutBucketLifecycleConfiguration → DeleteBucketLifecycle would keep
silently dropping objects.
Setup: positive control (rule active, backdated obj expires) →
DeleteBucketLifecycle → PUT + backdate a fresh object → second
sweep. The fresh object must remain.
* test(s3/lifecycle): integration coverage for empty bucket sweep no-op
A bucket carrying lifecycle config but no objects must produce a
successful sweep — no hangs, no errors, no dispatches. Pinned
because the bootstrap walker iterates bucket directories, and an
empty directory is a corner of that traversal that's easy to break
(slice-bounds bug on the first listing returning zero entries).
Asserts: worker logs "loaded lifecycle for" and "shards 0-15
complete", no FATAL output, bucket still exists after the sweep.
* test(s3/lifecycle): fix Object Lock backdate path + skip unwired ScanAtDate
ObjectLock: enabling Object Lock on a bucket implicitly enables
versioning, so PUT objects land at .versions/v_<id>, not at the bare
key. The test was calling backdateMtime (bare path) and failing in
the helper with "filer: no entry is found". Switch to
backdateVersionedMtime with the versionId returned by PutObject.
ExpirationDate: ScanAtDate dispatch path isn't wired to the run-shard
shell command yet — the bootstrap walker explicitly skips actions in
ModeScanAtDate (walker.go:141 says "SCAN_AT_DATE runs its own date-
triggered bootstrap" but no such bootstrap exists in the scheduler or
shell). Skip with a t.Skip + explanation so the test activates the
moment the date-triggered path lands.
* fix(s3/lifecycle): wire ExpirationDate dispatch through bootstrap walker
The walker explicitly skipped ModeScanAtDate actions on the comment
"SCAN_AT_DATE runs its own date-triggered bootstrap" — but no such
bootstrap exists in the scheduler or shell layer. The result: rules
with Expiration{Date: ...} compiled correctly, populated the
snapshot's dateActions map, and were never dispatched.
ExpirationDate is silently a no-op in production.
EvaluateAction already handles ActionKindExpirationDate correctly
(rejects when now.Before(rule.ExpirationDate), otherwise emits
ActionDeleteObject). The walker just needed to fall through instead
of skipping. Pre-date walks become no-ops via EvaluateAction's date
check; post-date walks expire eligible objects.
Un-skip TestLifecycleExpirationDateInThePast — it now exercises the
fixed path end-to-end.
* test(s3/lifecycle): integration coverage for multiple rules per bucket
A single bucket carries two independent Expiration rules with disjoint
prefix filters and different Days thresholds. Each rule must fire
only on its prefix; objects outside both prefixes must survive.
Pinned because Compile builds one CompiledAction per rule per kind
all sharing the same bucket index — a bug that lets one rule's
prefix or threshold leak into another (e.g. last-write-wins on a
shared map) would silently expire wrong objects.
Setup: rule A with prefix=logs/ Days=1, rule B with prefix=tmp/
Days=7. Three backdated objects: logs/access.log, tmp/scratch.bin,
data/keep.bin. After the worker runs, logs/ + tmp/ are gone;
data/ — outside both rule prefixes — survives.
* fix(s3/lifecycle): mark ScanAtDate actions active in Compile
Two layers were silently filtering ScanAtDate actions out of routing:
the walker's mode skip (fixed in e785f59d6) and Compile only marking
ModeEventDriven actions active. MatchPath / MatchOriginalWrite both
require IsActive() to emit a key, so a ScanAtDate action that's never
marked active never reaches a dispatch path even after the walker
falls through.
ScanAtDate's only dispatch path is the bootstrap walk's MatchPath
call — there's no bootstrap-completion rendezvous to wait on. Make
the active flag include ModeScanAtDate alongside the
EventDriven+BootstrapComplete combination.
ExpirationDate-based rules now actually fire end-to-end. The
TestLifecycleExpirationDateInThePast integration test exercises this.
* fix(s3/lifecycle): route date kinds via ComputeDueAt
ExpirationDate has MinTriggerAge=0, so router computed
dueTime = info.ModTime + 0 = info.ModTime. For a backdated entry
that mtime is BEFORE rule.ExpirationDate, so EvaluateAction's
now.Before(rule.ExpirationDate) check returned ActionNone and the
date rule never fired through the event-driven path.
ComputeDueAt already knows the per-kind shape — rule.ExpirationDate
for date kinds, ModTime+Days for the rest — so use it as the
single source of truth for dueTime in Route's main loop.
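A hedged restatement of the fix; the type and field names mirror the
commit message, not necessarily the real engine code:

package s3lifecycle

import "time"

// Sketch types; the real engine's shapes differ.
type ActionKind int

const (
	ActionKindExpirationDays ActionKind = iota
	ActionKindExpirationDate
)

type CompiledRule struct {
	Kind           ActionKind
	ExpirationDate time.Time     // set for date kinds
	MinTriggerAge  time.Duration // Days-derived floor for age-based kinds
}

// ComputeDueAt is the single source of truth Route's main loop now uses.
func ComputeDueAt(rule CompiledRule, modTime time.Time) time.Time {
	if rule.Kind == ActionKindExpirationDate {
		// Date kinds are due at the rule's date; ModTime+0 would land
		// before the date for backdated entries and never fire.
		return rule.ExpirationDate
	}
	return modTime.Add(rule.MinTriggerAge)
}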
* test(s3/lifecycle): pin bootstrap walker date dispatch
The original TestWalk_DateActionsSkipped pinned the pre-e785f59d6
behavior that the regular walker skipped ExpirationDate. That
walker was rewired to fire date rules whose date has passed (the
SCAN_AT_DATE bootstrap was never wired); update the test to match.
Split into two: post-date entries dispatch, pre-date entries don't.
* test(s3/lifecycle): drop unused putExpiredDeleteMarkerLifecycle
The helper was never called — TestLifecycleExpiredDeleteMarkerCleanup
constructs a combined noncurrent + expired-marker rule inline, which
the helper doesn't cover. The blank-assignment workaround was just
hiding dead code; remove both.
* test(s3/lifecycle): tighten HeadObject termination check to typed not-found
Generic err != nil also passes on transport/auth/timeouts, letting
the test go green without proving the lifecycle action actually
fired. Switch the three Eventuallyf HeadObject predicates to
isS3NotFound, matching the pattern already in the multi-bucket and
expiration-date tests.
* test(s3/lifecycle): guard ListObjectVersions diagnostic against nil
When ListObjectVersions errors, listOut is nil and the diagnostic
log path panics on listOut.Versions before the real assertion fires.
Branch on (listErr != nil || listOut == nil) so the failure log is
robust no matter what ListObjectVersions returned.
* refactor(s3/lifecycle): extract entryUsesMetadataOnlyDelete predicate
The metadata-only delete decision (entry.Attributes.TtlSec > 0) was
inlined in lifecycleDispatch with no direct test. Lift it into a
named predicate with the rationale comment moved onto the function
and pin the four edge cases: nil entry, nil attributes, TtlSec=0,
TtlSec>0, plus a defensive check that TtlSec<0 doesn't flip the
path on.
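The predicate itself, reconstructed from the edge cases the tests in
the file below pin (a sketch, not the production source):

package s3api

import "github.com/seaweedfs/seaweedfs/weed/pb/filer_pb"

// entryUsesMetadataOnlyDelete: only a strictly positive TtlSec opts
// into the metadata-only delete path; nil entry, nil attributes, and
// TtlSec<=0 all stay on the normal delete path.
func entryUsesMetadataOnlyDelete(entry *filer_pb.Entry) bool {
	return entry != nil && entry.Attributes != nil && entry.Attributes.TtlSec > 0
}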
package s3api

import (
	"bytes"
	"testing"

	"github.com/prometheus/client_golang/prometheus/testutil"
	"github.com/seaweedfs/seaweedfs/weed/pb/filer_pb"
	"github.com/seaweedfs/seaweedfs/weed/pb/s3_lifecycle_pb"
	"github.com/seaweedfs/seaweedfs/weed/s3api/s3lifecycle"
	stats_collect "github.com/seaweedfs/seaweedfs/weed/stats"
)

func TestComputeEntryIdentity_BasicFields(t *testing.T) {
	entry := &filer_pb.Entry{
		Attributes: &filer_pb.FuseAttributes{Mtime: 1700000000, MtimeNs: 123, FileSize: 4096},
		Chunks: []*filer_pb.FileChunk{
			{FileId: "1,abc"},
			{FileId: "1,def"},
		},
	}
	id := computeEntryIdentity(entry)
	want := int64(1700000000)*int64(1e9) + int64(123)
	if id.MtimeNs != want {
		t.Fatalf("MtimeNs want %d, got %d", want, id.MtimeNs)
	}
	if id.Size != 4096 {
		t.Fatalf("Size want 4096, got %d", id.Size)
	}
	if id.HeadFid != "1,abc" {
		t.Fatalf("HeadFid want 1,abc, got %s", id.HeadFid)
	}
}

func TestComputeEntryIdentity_NilSafeMissingChunks(t *testing.T) {
	if got := computeEntryIdentity(nil); got != nil {
		t.Fatalf("nil entry should return nil, got %v", got)
	}
	id := computeEntryIdentity(&filer_pb.Entry{})
	if id == nil {
		t.Fatalf("entry with nil Attributes should still produce identity")
	}
	if id.HeadFid != "" {
		t.Fatalf("missing chunks should yield empty HeadFid, got %s", id.HeadFid)
	}
}

func TestHashExtended_OrderStable(t *testing.T) {
	a := map[string][]byte{"k1": []byte("v1"), "k2": []byte("v2")}
	b := map[string][]byte{"k2": []byte("v2"), "k1": []byte("v1")}
	if !bytes.Equal(s3lifecycle.HashExtended(a), s3lifecycle.HashExtended(b)) {
		t.Fatalf("hash should be insensitive to map iteration order")
	}
}

func TestHashExtended_DelimiterCollisionResistant(t *testing.T) {
	// Naively concatenated: "k1=v1k2v2" could collide with "k1=v1k" / "2v2".
	// Length-prefix encoding must keep them apart.
	a := map[string][]byte{"k1": []byte("v1"), "k2": []byte("v2")}
	b := map[string][]byte{"k1": []byte("v1k2v2")}
	if bytes.Equal(s3lifecycle.HashExtended(a), s3lifecycle.HashExtended(b)) {
		t.Fatalf("delimiter-forged Extended payloads must not collide")
	}
}

func TestHashExtended_NilEqualsEmpty(t *testing.T) {
	if got := s3lifecycle.HashExtended(nil); len(got) != 0 {
		t.Fatalf("nil should produce zero-length hash, got %d bytes", len(got))
	}
	if got := s3lifecycle.HashExtended(map[string][]byte{}); len(got) != 0 {
		t.Fatalf("empty map should produce zero-length hash, got %d bytes", len(got))
	}
}

func TestIdentityMatches_NilWantTreatedAsMatch(t *testing.T) {
	// Bootstrap callers that don't yet have an identity to CAS against
	// pass nil expected_identity; the server treats this as "no CAS".
	live := &s3_lifecycle_pb.EntryIdentity{MtimeNs: 1, Size: 2}
	if !identityMatches(live, nil) {
		t.Fatalf("nil want should match")
	}
}

func TestIdentityMatches_NilLiveDoesNotMatch(t *testing.T) {
	if identityMatches(nil, &s3_lifecycle_pb.EntryIdentity{MtimeNs: 1}) {
		t.Fatalf("nil live should not match a populated want")
	}
}

func TestIdentityMatches_AllFieldsCompared(t *testing.T) {
	base := &s3_lifecycle_pb.EntryIdentity{MtimeNs: 100, Size: 2048, HeadFid: "1,abc", ExtendedHash: []byte{0x01, 0x02}}
	cases := []struct {
		name string
		live *s3_lifecycle_pb.EntryIdentity
		want bool
	}{
		{"identical", &s3_lifecycle_pb.EntryIdentity{MtimeNs: 100, Size: 2048, HeadFid: "1,abc", ExtendedHash: []byte{0x01, 0x02}}, true},
		{"mtime-drift", &s3_lifecycle_pb.EntryIdentity{MtimeNs: 101, Size: 2048, HeadFid: "1,abc", ExtendedHash: []byte{0x01, 0x02}}, false},
		{"size-drift", &s3_lifecycle_pb.EntryIdentity{MtimeNs: 100, Size: 2049, HeadFid: "1,abc", ExtendedHash: []byte{0x01, 0x02}}, false},
		{"fid-drift", &s3_lifecycle_pb.EntryIdentity{MtimeNs: 100, Size: 2048, HeadFid: "1,xyz", ExtendedHash: []byte{0x01, 0x02}}, false},
		{"extended-drift", &s3_lifecycle_pb.EntryIdentity{MtimeNs: 100, Size: 2048, HeadFid: "1,abc", ExtendedHash: []byte{0x03, 0x04}}, false},
	}
	for _, c := range cases {
		t.Run(c.name, func(t *testing.T) {
			if got := identityMatches(c.live, base); got != c.want {
				t.Fatalf("want %v, got %v", c.want, got)
			}
		})
	}
}

func TestLifecycleDelete_RejectsEmptyRequest(t *testing.T) {
	s := &S3ApiServer{}
	resp, err := s.LifecycleDelete(nil, &s3_lifecycle_pb.LifecycleDeleteRequest{})
	if err != nil {
		t.Fatalf("unexpected gRPC error: %v", err)
	}
	if resp.Outcome != s3_lifecycle_pb.LifecycleDeleteOutcome_BLOCKED {
		t.Fatalf("empty request should be BLOCKED, got %v", resp.Outcome)
	}
}

func TestLifecycleAbortMPU_RejectsTraversalUploadIDs(t *testing.T) {
	// "." and ".." pass the no-slash check but resolve to the bucket
	// root via util.JoinPath; they must be rejected before any rm call.
	s := &S3ApiServer{}
	cases := []string{
		"",
		".uploads",
		".uploads/",
		".uploads/.",
		".uploads/..",
		".uploads/u1/extra",
	}
	for _, path := range cases {
		t.Run(path, func(t *testing.T) {
			resp, err := s.LifecycleDelete(nil, &s3_lifecycle_pb.LifecycleDeleteRequest{
				Bucket:     "bk",
				ObjectPath: path,
				ActionKind: s3_lifecycle_pb.ActionKind_ABORT_MPU,
			})
			if err != nil {
				t.Fatalf("unexpected gRPC error: %v", err)
			}
			if resp.Outcome != s3_lifecycle_pb.LifecycleDeleteOutcome_BLOCKED {
				t.Fatalf("path %q: outcome=%v reason=%q, want BLOCKED",
					path, resp.Outcome, resp.Reason)
			}
		})
	}
}

func TestLifecycleDispatch_AbortMPUAfterFetchIsBlocked(t *testing.T) {
	// LifecycleDelete routes ABORT_MPU to lifecycleAbortMPU before
	// getObjectEntry; reaching lifecycleDispatch with ABORT_MPU means
	// some caller bypassed that route. Defensive BLOCKED so a
	// regression there can't accidentally rm a real object via the
	// expiration paths.
	s := &S3ApiServer{}
	resp, err := s.lifecycleDispatch(nil, &s3_lifecycle_pb.LifecycleDeleteRequest{
		Bucket:     "bk",
		ObjectPath: "k",
		ActionKind: s3_lifecycle_pb.ActionKind_ABORT_MPU,
	}, &filer_pb.Entry{Attributes: &filer_pb.FuseAttributes{}})
	if err != nil {
		t.Fatalf("unexpected gRPC error: %v", err)
	}
	if resp.Outcome != s3_lifecycle_pb.LifecycleDeleteOutcome_BLOCKED {
		t.Fatalf("ABORT_MPU at dispatch should be BLOCKED, got %v reason=%q", resp.Outcome, resp.Reason)
	}
	if !contains(resp.Reason, "ABORT_MPU dispatched after fetch") {
		t.Fatalf("reason should name the route bypass, got %q", resp.Reason)
	}
}

func TestLifecycleDispatch_UnknownActionKindIsBlocked(t *testing.T) {
	// An ActionKind value the proto doesn't define yet must be BLOCKED
	// rather than fall through to a default delete path.
	s := &S3ApiServer{}
	const bogus = s3_lifecycle_pb.ActionKind(999)
	resp, err := s.lifecycleDispatch(nil, &s3_lifecycle_pb.LifecycleDeleteRequest{
		Bucket:     "bk",
		ObjectPath: "k",
		ActionKind: bogus,
	}, &filer_pb.Entry{Attributes: &filer_pb.FuseAttributes{}})
	if err != nil {
		t.Fatalf("unexpected gRPC error: %v", err)
	}
	if resp.Outcome != s3_lifecycle_pb.LifecycleDeleteOutcome_BLOCKED {
		t.Fatalf("unknown action kind should be BLOCKED, got %v reason=%q", resp.Outcome, resp.Reason)
	}
	if !contains(resp.Reason, "unknown action_kind") {
		t.Fatalf("reason should name the unknown kind, got %q", resp.Reason)
	}
}

func TestLifecycleDispatch_NoncurrentRequiresVersionID(t *testing.T) {
	// Noncurrent / EXPIRED_DELETE_MARKER target a specific version; an
	// empty version_id is a writer-side bug and must be rejected before
	// any filer call. This pinning keeps the early-return in place so
	// a refactor doesn't accidentally let the empty-version_id path
	// reach deleteSpecificObjectVersion.
	s := &S3ApiServer{}
	for _, kind := range []s3_lifecycle_pb.ActionKind{
		s3_lifecycle_pb.ActionKind_NONCURRENT_DAYS,
		s3_lifecycle_pb.ActionKind_NEWER_NONCURRENT,
		s3_lifecycle_pb.ActionKind_EXPIRED_DELETE_MARKER,
	} {
		t.Run(kind.String(), func(t *testing.T) {
			resp, err := s.lifecycleDispatch(nil, &s3_lifecycle_pb.LifecycleDeleteRequest{
				Bucket:     "bk",
				ObjectPath: "k",
				ActionKind: kind,
				// VersionId intentionally empty
			}, &filer_pb.Entry{Attributes: &filer_pb.FuseAttributes{}})
			if err != nil {
				t.Fatalf("unexpected gRPC error: %v", err)
			}
			if resp.Outcome != s3_lifecycle_pb.LifecycleDeleteOutcome_BLOCKED {
				t.Fatalf("kind %v with empty version_id should be BLOCKED, got %v reason=%q",
					kind, resp.Outcome, resp.Reason)
			}
			if !contains(resp.Reason, "version_id required") {
				t.Fatalf("reason should name the missing version_id, got %q", resp.Reason)
			}
		})
	}
}

// contains is a tiny helper so the tests above don't pull in strings
// just for a substring check.
func contains(haystack, needle string) bool {
	if len(needle) == 0 {
		return true
	}
	for i := 0; i+len(needle) <= len(haystack); i++ {
		if haystack[i:i+len(needle)] == needle {
			return true
		}
	}
	return false
}

func TestEntryUsesMetadataOnlyDelete(t *testing.T) {
	cases := []struct {
		name  string
		entry *filer_pb.Entry
		want  bool
	}{
		{
			name:  "nil entry",
			entry: nil,
			want:  false,
		},
		{
			name:  "nil attributes",
			entry: &filer_pb.Entry{},
			want:  false,
		},
		{
			name:  "TtlSec=0 (no per-write stamp)",
			entry: &filer_pb.Entry{Attributes: &filer_pb.FuseAttributes{TtlSec: 0}},
			want:  false,
		},
		{
			name:  "TtlSec>0 (PR 9377 stamped a fast-path TTL)",
			entry: &filer_pb.Entry{Attributes: &filer_pb.FuseAttributes{TtlSec: 86400}},
			want:  true,
		},
		{
			name:  "TtlSec<0 should not happen but must not flip the path on",
			entry: &filer_pb.Entry{Attributes: &filer_pb.FuseAttributes{TtlSec: -1}},
			want:  false,
		},
	}
	for _, c := range cases {
		t.Run(c.name, func(t *testing.T) {
			if got := entryUsesMetadataOnlyDelete(c.entry); got != c.want {
				t.Fatalf("want %v, got %v", c.want, got)
			}
		})
	}
}

func TestRecordMetadataOnlyIf_OnlyFiresWhenOn(t *testing.T) {
	// Counter must increment exactly once per (bucket, hex(rule_hash))
	// when on=true, and not at all when on=false. Other lifecycle paths
	// in the same suite share the global counter — use distinct bucket
	// names per test so series don't bleed.
	c := stats_collect.S3LifecycleMetadataOnlyCounter
	bucket := "bk-counter-on"
	hash := []byte{0xde, 0xad, 0xbe, 0xef, 0x01, 0x02, 0x03, 0x04}
	hexHash := "deadbeef01020304"

	before := testutil.ToFloat64(c.WithLabelValues(bucket, hexHash))
	recordMetadataOnlyIf(true, &s3_lifecycle_pb.LifecycleDeleteRequest{
		Bucket:   bucket,
		RuleHash: hash,
	})
	if got := testutil.ToFloat64(c.WithLabelValues(bucket, hexHash)); got != before+1 {
		t.Fatalf("on=true should bump by 1; before=%v after=%v", before, got)
	}

	beforeOff := testutil.ToFloat64(c.WithLabelValues("bk-counter-off", hexHash))
	recordMetadataOnlyIf(false, &s3_lifecycle_pb.LifecycleDeleteRequest{
		Bucket:   "bk-counter-off",
		RuleHash: hash,
	})
	if got := testutil.ToFloat64(c.WithLabelValues("bk-counter-off", hexHash)); got != beforeOff {
		t.Fatalf("on=false should not bump; before=%v after=%v", beforeOff, got)
	}
}

func TestRecordMetadataOnlyIf_NilRequestSafe(t *testing.T) {
	// A nil req is a defensive no-op; never panic on the prometheus
	// label call which would otherwise dereference req.Bucket.
	recordMetadataOnlyIf(true, nil)
}

func TestRecordMetadataOnlyIf_EmptyRuleHashCollapsesToEmptyLabel(t *testing.T) {
	// Bootstrap or test paths may not stamp a rule hash; the label
	// must end up as an empty string rather than panicking on
	// hex.EncodeToString(nil).
	c := stats_collect.S3LifecycleMetadataOnlyCounter
	bucket := "bk-counter-emptyhash"
	before := testutil.ToFloat64(c.WithLabelValues(bucket, ""))
	recordMetadataOnlyIf(true, &s3_lifecycle_pb.LifecycleDeleteRequest{Bucket: bucket})
	if got := testutil.ToFloat64(c.WithLabelValues(bucket, "")); got != before+1 {
		t.Fatalf("nil rule_hash should produce empty-label series; before=%v after=%v", before, got)
	}
}