mirror of
https://github.com/seaweedfs/seaweedfs.git
synced 2026-05-14 05:41:29 +00:00
* fix(s3api): stream multipart SSE-S3 chunks lazily to avoid truncated GETs (#8908)

buildMultipartSSES3Reader opened a volume-server HTTP response for EVERY chunk upfront, then walked them with io.MultiReader. For a multipart SSE-S3 object with N internal chunks (e.g. a 200MB Docker Registry blob with 25+ chunks), N volume-server bodies sat live at once; chunks 1..N-1 were idle while io.MultiReader drained chunk 0. Under concurrent load the volume server's keep-alive logic closed those idle responses mid-flight, and the S3 client saw `unexpected EOF` partway through the GET. Truncated bytes hash to the wrong SHA-256, which is exactly the "Digest did not match" symptom Docker Registry reports in #8908 (and which persisted even after the per-chunk metadata fix in #9211 and the completion backfill in #9224).

Introduce lazyMultipartChunkReader + preparedMultipartChunk{chunk, wrap}: a generic lazy chunk streamer with a per-chunk wrap closure for the SSE-specific decryption setup. Per-chunk metadata is still validated UPFRONT so a malformed chunk fails fast without opening any HTTP connection -- the eager validation contract callers and tests rely on is preserved. The volume-server GET and the SSE-specific decrypt wrap, however, fire LAZILY: at most one chunk body is live at any time, regardless of object size.

This commit applies the new pattern to buildMultipartSSES3Reader only; the SSE-KMS and SSE-C multipart readers retain their eager form for now and will be migrated in follow-up commits, since the same shape exists there too.

Tests:
- TestBuildMultipartSSES3Reader_LazyChunkFetch pins the new contract: zero chunks opened at construction, peak liveness == 1, all closed after drain.
- TestBuildMultipartSSES3Reader_RejectsBadChunkBeforeAnyFetch (replaces ClosesAppendedOnError) asserts a malformed chunk in position N causes zero fetches for chunks 0..N -- the previous test pinned a weaker contract (cleanup after eager open).
- TestBuildMultipartSSES3Reader_InvalidIVLength updated for the same reason: the fetch callback must NOT be invoked at all on a bad-IV chunk.
- TestMultipartSSES3RealisticEndToEnd round-trips multiple parts encrypted the way putToFiler writes them (shared DEK + baseIV, partOffset=0, post-completion global offsets) and walks them through buildMultipartSSES3Reader.

* fix(s3api): stream multipart SSE-KMS chunks lazily

Apply the same fix as the previous commit to createMultipartSSEKMSDecryptedReaderDirect: per-chunk SSE-KMS metadata is validated upfront, but volume-server GETs fire lazily through lazyMultipartChunkReader. At most one chunk body is live at any time.

This is the same eager-open-all-chunks shape that produced #8908's truncated GETs for SSE-S3; SSE-KMS multipart objects with many chunks were exposed to the same idle-keepalive failure mode under concurrent load.

The wire format on disk is unchanged (same per-chunk metadata, same encrypted bytes, same object Extended attributes). Existing SSE-KMS multipart objects read back identically -- only when the volume-server GETs fire changes.

* fix(s3api): stream multipart SSE-C chunks lazily

Apply the same fix as the previous two commits to createMultipartSSECDecryptedReaderDirect: per-chunk SSE-C metadata is validated upfront (IV decode, IV length check, non-negative PartOffset), but the volume-server GET and CreateSSECDecryptedReaderWithOffset wrap fire lazily through lazyMultipartChunkReader. At most one chunk body is live at any time.

This is the same eager-open-all-chunks shape that produced #8908's truncated GETs for SSE-S3; SSE-C multipart objects with many chunks were exposed to the same idle-keepalive failure mode under concurrent load. The pre-existing TODO note about CopyObject SSE-C PartOffset handling is preserved verbatim.

The wire format on disk is unchanged (same per-chunk metadata, same encrypted bytes); existing SSE-C multipart objects read back identically.
After this commit all three multipart SSE read paths (SSE-S3, SSE-KMS, SSE-C) share lazyMultipartChunkReader as their streaming engine.

* test(s3): add Docker Registry-shape multipart SSE-S3 GET regression

Pin the end-to-end fix for #8908 with a test that mirrors what Docker Registry actually does on pull: a 25-part * 5MB upload with bucket-default SSE-S3, then a full GET, then SHA-256 over the streamed body must match SHA-256 over the uploaded bytes.

The eager-multipart-reader bug was specifically a streaming truncation under load: the response status was 200 with a Content-Length matching the object size, but the body short-circuited mid-stream because later chunks' volume-server connections had already been closed by keepalive. The hash check is the symptom Docker Registry surfaces ("Digest did not match"), so this is the most faithful regression we can pin without spinning up a registry.

uploadAndVerifyMultipartSSEObject already byte-compares the GET body, but hashing on top is intentionally explicit -- it documents WHY the test exists, and matches the failure mode reported in the issue.

* test(s3): add range-read coverage matrix across SSE modes and sizes

Existing range-read coverage in test/s3/sse was scoped to small (<= 1MB) single-chunk objects, with one ad-hoc range case per SSE mode and one 129-byte boundary-crossing case in TestSSEMultipartUploadIntegration. Nothing exercised:

- Range reads on single-PUT objects whose content crosses the 8MB internal chunk boundary (medium size class).
- Range reads on multipart objects whose parts each span multiple internal chunks (large size class) -- the shape #8908 originally surfaced for full-object GETs and the most likely site of any future regression in per-chunk IV / PartOffset plumbing for partial reads.
- A consistent range-pattern set applied uniformly across SSE modes, so any divergence between modes (SSE-C uses random IV + PartOffset; SSE-S3/KMS use base IV + offset) is comparable at a glance.
TestSSERangeReadCoverageMatrix introduces a parameterized matrix:

  modes:  no_sse, sse_c, sse_kms, sse_s3
  sizes:  small (256KB single chunk), medium (12MB single PUT crossing one internal boundary), large (5x9MB multipart, ~10 internal chunks, every part itself spans an 8MB boundary)
  ranges: single byte at 0, prefix 512B, single byte at last, suffix bytes=-100, open-ended bytes=N-, whole object, AES-block boundary 15-31, mid straddling one internal boundary (medium+large), mid spanning many internal boundaries (large only)

Per case it asserts: body bytes equal the expected slice, Content-Length matches the range length, Content-Range matches start-end/total, and the SSE response headers match the mode.

The sse_kms branch probes once with a 1-byte SSE-KMS PUT and t.Skip's the remaining sse_kms subtests with a clear reason if the local server has no KMS provider configured -- the default `weed mini` setup lacks one; the Makefile target `test-with-kms` provides one via OpenBao. Other modes always run.

Verified locally: 75 subtests pass under no_sse / sse_c / sse_s3 against weed mini, sse_kms cleanly skipped.
Renames:

  TestSSERangeReadCoverageMatrix   -> TestSSERangeReadIntegration
  TestSSES3MultipartManyChunks_... -> TestSSEMultipartManyChunksIntegration

Both names now match `TestSSE.*Integration` (Makefile `test` target) and TestSSEMultipartManyChunksIntegration additionally matches `.*Multipart.*Integration` (CI's comprehensive subset). No behavior change; only the function names move.

Verified locally against `weed mini` with s3-config-template.json: TestSSERangeReadIntegration runs 96 leaf subtests across 4 SSE modes (none, SSE-C, SSE-KMS, SSE-S3) x 3 size classes x 7-9 range patterns, all passing, 0 skipped. The probe-and-skip in the SSE-KMS arm now only fires for ad-hoc local setups that don't load any KMS provider; the project's standard test setup loads the local provider, so CI has full SSE-KMS range coverage.

* fix(s3api): validate SSE-KMS chunk IV during prep, before any fetch

Addresses CodeRabbit review on PR #9228: in createMultipartSSEKMSDecryptedReaderDirect the per-chunk SSE-KMS metadata was deserialized in the prep loop, but the IV length was only validated later, inside CreateSSEKMSDecryptedReader, which runs from the wrap closure -- AFTER the chunk's volume-server fetch has already started. That weakens the new "reject malformed chunks before any fetch" contract for SSE-KMS specifically: a chunk with a missing/short/long IV would fire its HTTP GET, then fail mid-stream during decrypt.

The fix moves the existing ValidateIV check into the prep loop, matching the SSE-S3 and SSE-C paths.

Drive-by: extract the SSE-KMS prep loop into a free buildMultipartSSEKMSReader helper that mirrors buildMultipartSSES3Reader, so the new contract is unit-testable without an S3ApiServer. The exported method (createMultipartSSEKMSDecryptedReaderDirect) stays a thin caller, so behavior for production callers is unchanged.
New tests in weed/s3api/s3api_multipart_ssekms_test.go pin the contract:

- TestBuildMultipartSSEKMSReader_RejectsBadIVBeforeAnyFetch covers missing IV, empty IV, short IV, long IV. Each case asserts both that an error is returned AND that the fetch callback is never invoked.
- TestBuildMultipartSSEKMSReader_RejectsMissingMetadataBeforeAnyFetch pins the analogous behavior when SseMetadata is nil on a chunk in position N: chunks 0..N-1 must not be fetched (the earlier eager implementation depended on a closeAppendedReaders cleanup path; the new contract is stronger -- nothing is opened in the first place).
- TestBuildMultipartSSEKMSReader_RejectsUnparseableMetadataBeforeAnyFetch covers the JSON-unmarshal failure branch.
- TestBuildMultipartSSEKMSReader_SortsByOffset smoke-tests the documented sort-by-offset contract by recording the order in which fetch is invoked.

All four pass under `go test ./weed/s3api/`. The existing weed/s3api unit suite and the SSE integration suite (with the local KMS provider enabled via s3-config-template.json) continue to pass.

* test(s3): address CodeRabbit nitpicks on range coverage matrix

Three small follow-ups on the range-read coverage matrix from the previous commit, per CodeRabbit nitpicks on PR #9228:

1. Promote the body-length check from `assert.Equal` to `require.Equal` so a truncation regression -- the canonical #8908 failure mode -- aborts the subtest immediately. Previously the assertion logged a length mismatch and then `assertDataEqual` ran on differently-sized slices, producing a noisy byte-diff on top of the actual symptom. The redundant trailing `t.Fatalf` block becomes dead and is removed.

2. Broaden the SSE-KMS probe-skip heuristic. The probe previously produced the friendly "KMS provider not configured" message only for 5xx responses; KMS-misconfig surfaces also include 501 NotImplemented, 4xx KMS.NotConfigured, and error messages containing "KMS.NotConfigured" / "NotImplemented" / "not configured".
The behaviour change is purely cosmetic (the caller t.Skip's on any non-empty reason either way), but the new diagnostic is more useful in CI logs.

3. Add `t.Parallel()` at the mode and size-class levels of the matrix. Each (mode, size) writes an independent object key under the shared bucket, with no cross-talk, so parallel execution is safe. Local wall time on the full matrix dropped from ~2.0s to ~1.1s (~45%); the savings scale with chunk count and CI machine concurrency.

Verified locally against `weed mini` with s3-config-template.json:

- go test ./weed/s3api/ -count=1 PASS
- TestSSERangeReadIntegration -v: 112 PASS, 0 SKIP
- TestSSEMultipartUploadIntegration etc. PASS

* fix(s3api): tighten lazy reader error path; unify SSE IV validation

Three CodeRabbit nitpicks on PR #9228:

1. lazyMultipartChunkReader: mark finished on non-EOF Read errors. The Read loop's three earlier failure paths (chunk index past end, fetch error, wrap error) all set l.finished = true before returning. The non-EOF Read path -- where l.current.Read itself errors mid-chunk -- did not, leaving l.current/l.closer set and l.finished = false. A caller that retried Read after an error would re-enter the same broken stream instead of advancing or giving up. Set l.finished = true on non-EOF Read error so post-error state is consistent across all four failure sites; Close() (which the GetObjectHandler defers) still releases the chunk body.

2. Unify IV-length validation across the SSE-S3, SSE-KMS, and SSE-C prep paths. The previous commit moved SSE-KMS to the shared ValidateIV helper but left SSE-S3 and SSE-C with bespoke inline `len(...) != AESBlockSize` checks. All three enforce the same invariant; the inconsistency obscures the symmetry. Move SSE-S3 and SSE-C to ValidateIV too, with the same `<algo> chunk <fileId> IV` name convention. Error message wording shifts from "<algo> chunk X has invalid IV length N (expected 16)" to ValidateIV's "invalid <algo> chunk X IV length: expected 16 bytes, got N".
The substring "IV length" is preserved across both, so the existing TestBuildMultipartSSES3Reader_InvalidIVLength substring assertion is loosened to match either form.

3. TestBuildMultipartSSEKMSReader_SortsByOffset: verify full ordering. The test previously drove Read() to observe fetch-call order, but CreateSSEKMSDecryptedReader requires a live KMS provider to unwrap the encrypted DEK -- unavailable in unit tests -- so the wrap closure failed on the first chunk and only one fetch was ever recorded. The test asserted only fetchOrder[0] == "c0", which is weaker than the comment promised. Switch to a static check: type-assert the returned reader to *lazyMultipartChunkReader (same package, so unexported fields are accessible) and inspect the prepared chunks slice directly. This pins the entire [c0, c1, c2] sort order in one place, doesn't depend on KMS, and runs with zero fetch calls. The fetch closure now asserts it is never invoked during preparation.

All weed/s3api unit tests pass; the integration suite (with the KMS provider configured via s3-config-template.json) passes.

* test(s3): switch range coverage cleanup to t.Cleanup; tighten KMS probe

Two CodeRabbit comments on PR #9228, both about test/s3/sse/s3_sse_range_coverage_test.go:

1. CRITICAL: defer + t.Parallel() race in TestSSERangeReadIntegration. The test creates one bucket up front, then runs subtests that call t.Parallel() at the mode and size levels (added in 058cbf27 to cut wall time). t.Parallel() pauses each subtest and yields back to the parent. The parent's for loop finishes scheduling, the function returns, and the deferred cleanupTestBucket fires -- BEFORE any parallel subtest body has executed. The bucket gets deleted out from under the parallel subtests, which then race the cleanup and either fail with NoSuchBucket or, depending on lazy-deletion behaviour on the server side, mask other regressions because chunks happen to still be readable for a brief window.
The local matrix passing prior to this commit was a server-side coincidence; the t.Cleanup contract is the right one for parent tests with parallel children, and switching to it is a one-line change. t.Cleanup runs after the test AND all its (parallel) subtests complete, so the bucket survives until every leaf subtest is done.

2. MINOR: tighten the SSE-KMS probe-skip heuristic. The previous broadening (058cbf27) treated `code == 400` as "KMS provider not configured", on the theory that some servers return 4xx for KMS misconfig. That is too aggressive: a real misconfiguration in the SSE-KMS test request itself (bad keyID format, missing header) ALSO surfaces as a 400, and would silently t.Skip the SSE-KMS subtree in CI -- which is exactly the integration coverage the new TestSSERangeReadIntegration is supposed to add. Drop the 400 branch (and the redundant 501 match, since 501 >= 500 already covers it). Genuine "KMS.NotConfigured" / "NotImplemented" responses are still recognised via the string-match block immediately below, regardless of status code, so the friendly skip message survives for the cases where it actually applies.

Verified locally against `weed mini` with s3-config-template.json:

- go test ./weed/s3api/ PASS
- TestSSERangeReadIntegration -v: 113 PASS lines, 0 SKIP
- TestSSEMultipartUploadIntegration etc. PASS