Mirror of https://github.com/seaweedfs/seaweedfs.git, synced 2026-05-16 23:01:30 +00:00.

Branch: master (5 commits)
**69da20bdae** volume: gate FetchAndWriteNeedle behind admin auth and refuse internal endpoints (#9441)

volume: require admin auth and refuse loopback endpoints in FetchAndWriteNeedle

Gate the RPC behind checkGrpcAdminAuth for parity with the rest of the destructive volume-server RPCs, and reject cluster-internal remote S3 endpoints (loopback / link-local / IMDS / RFC 1918 / CGNAT) before dialing. Pin the validated address against DNS rebinding by routing the AWS SDK through an HTTP transport whose DialContext re-resolves the host and re-applies the deny list on every dial, so an endpoint that resolves to a public IP at validate time and then flips to 127.0.0.1 at connect time is refused. Operators that legitimately fetch from private hosts can opt out with -volume.allowUntrustedRemoteEndpoints.
**6844ec067c** fix(s3): cache remote-only source before CopyObject (#9304) (#9305)

* fix(s3): cache remote-only source before CopyObject (#9304)

  CopyObject from a remote.mount source whose object lives only upstream created a destination entry with FileSize > 0 but no chunks/content, because the resolved source entry has no local chunks and the copy path fell into the "inline/empty chunks" branch with empty entry.Content. A subsequent GET returned 500 with "data integrity error: size N reported but no content available". CopyObjectPart had the same shape via copyChunksForRange iterating an empty chunk list.

  Detect entry.IsInRemoteOnly() right after resolving the source in both CopyObjectHandler and CopyObjectPartHandler, and cache the object to the local cluster first via a new cacheRemoteObjectForCopy helper (a copy-time analogue of cacheRemoteObjectForStreaming with a bounded 30s timeout and version-aware path resolution). If caching fails or produces no chunks, return 503 with Retry-After: 5 instead of writing a metadata-only destination, mirroring the GetObject behavior added in the #7817 cold-cache fix. Adds TestCopyObjectRemoteOnlySourceDetection pinning the four entry shapes the fix branches on, plus the pre-fix broken-output shape.

* address PR review on remote-only copy fix

  - Use the resolved entry's version id when srcVersionId is empty, so a CopyObject reading the latest object in a versioning-enabled bucket caches the correct .versions/v_<id> path instead of getting stuck in a 503 retry loop. A new helper, resolvedSourceVersionId, handles the fallback for both CopyObject and CopyObjectPart.
  - Drop the redundant cachedEntry.IsInRemoteOnly() recheck in both handlers; the cache helper now reports success based on local data presence, and IsInRemoteOnly does not look at inline Content, so keeping the check would 503 on small inline-cached objects.
  - Treat inline Content as a successful cache result in both cacheRemoteObjectForStreaming and cacheRemoteObjectForCopy via a shared cachedEntryHasLocalData predicate. The CopyObject inline branch already handles entries that have Content but no chunks.
  - Extract buildVersionedRemoteObjectPath so the streaming and copy cache helpers share path construction.

  Adds TestResolvedSourceVersionId and TestCachedEntryHasLocalData to pin the new helpers' contracts.

* narrow streaming cache contract back to chunks-only

  CodeRabbit flagged that cacheRemoteObjectForStreaming's caller in streamFromVolumeServers (lines 997-1002) still required non-empty chunks, so content-only cache hits would fall through to a 503 retry loop instead of being honored. Resolve by keeping the helper's contract chunks-only: the filer's caching code only ever writes chunks, the streaming downstream isn't wired to read inline Content from a cached entry, and a partial range-aware inline writer here would be overkill for a path that doesn't actually occur in practice. cacheRemoteObjectForCopy keeps the relaxed contract, since the copy path's inline branch genuinely handles both chunked and content-only entries. Document the asymmetry on cachedEntryHasLocalData and on cacheRemoteObjectForStreaming so a future reader can see why the two helpers diverge.

* extend version-id resolution to streaming cache path

  CodeRabbit flagged that GetObjectHandler still passed the raw query versionId to cacheRemoteObjectForStreaming. For latest-version reads in versioning-enabled buckets that stays empty even though the resolved entry lives at .versions/v_<id>, so remote-only GETs would keep caching the wrong path and 503-ing forever. Reuse the new resolvedSourceVersionId helper at the streaming call site too. Also document on cachedEntryHasLocalData that the zero-byte case flagged in the same review is handled upstream: IsInRemoteOnly requires RemoteSize > 0, so the cache helper is never invoked for empty remote objects, and CopyObject's pre-existing inline branch writes a correct empty destination directly. Pin this with a new test case.

* trim verbose comments

  Drop tutorial-style and review-history comments. Keep only the WHY that isn't obvious from identifiers: the #9304 reference on the new branches in CopyObject / CopyObjectPart, the latest-version-fallback rationale on resolvedSourceVersionId, and the streaming/copy contract asymmetry on cachedEntryHasLocalData.

* drop issue references from comments

  Issue numbers belong in PR descriptions and commit messages, not in source comments where they rot. Replace them with the underlying invariant the code is preserving.

* test: drive remote-object cache helpers through real gRPC

  Existing tests only re-enacted helper-function branching in test space, so they could not have caught a handler that consumed a remote-only entry without going through the cache. Stand up an in-process filer gRPC server (UnimplementedSeaweedFilerServer + a configurable CacheRemoteObjectToLocalCluster response) and exercise the two cache helpers end to end. What's pinned:

  - cacheRemoteObjectForCopy returns nil when the cache makes no progress (the response is still remote-only), lets gRPC errors through as nil, accepts both chunked and inline-content cache hits, and surfaces deadline-exceeded as nil so callers can 503 instead of holding the request open.
  - Versioned source paths route to .versions/v_<id>; non-versioned and "null" stay at the bucket-relative path. Captured by reading the request the stub server received.
  - cacheRemoteObjectForStreaming holds the stricter chunks-only contract: a content-only cache hit is not propagated, since streamFromVolumeServers' downstream isn't wired to read from inline Content there.

  Any current or future handler that calls these helpers exercises the same gRPC path under test, so the bug class is closed for helper-routed cache calls.

* move remote-only copy test into the integration suite

  The previous gRPC-stub test in weed/s3api/ was integration-flavored but stubbed; relocate the coverage to the existing two-server suite under test/s3/remote_cache/, which already exercises the real remote.mount + remote.uncache flow against a primary SeaweedFS plus a secondary acting as remote storage. The new test/s3/remote_cache/remote_cache_copy_test.go drives:

  - TestRemoteCacheCopyObject: upload to primary, uncache (the entry is now remote-only), CopyObject to a new key, GET the destination. Pre-fix, the GET returned 500 "data integrity error: size N reported but no content"; this pins the fixed behavior over real HTTP through the actual handler stack.
  - TestRemoteCacheCopyObjectPart: the same shape via multipart UploadPartCopy on a 6 MiB object split into two parts, exercising CopyObjectPartHandler's range-copy path.

  Drop weed/s3api/s3api_remote_storage_grpc_test.go: the helper-level classification tests in s3api_remote_storage_test.go still cover the contract pieces (cachedEntryHasLocalData, resolvedSourceVersionId, the remote-only entry shape), and the integration suite covers the end-to-end behavior that those classifications enable.
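The cache-then-copy decision this commit describes can be sketched as below. The Entry shape, the copyObject signature, and the cache callback are invented for illustration; only the predicate names (IsInRemoteOnly, cachedEntryHasLocalData) follow the commit message, and the real filer entry is far richer.

```go
package main

import "fmt"

// Entry is a hypothetical, minimal stand-in for the filer entry.
type Entry struct {
	Chunks     []string // stand-in for file chunk references
	Content    []byte   // small objects may be stored inline
	RemoteSize int64    // size recorded for the upstream copy
}

// Remote-only: a size is known upstream but nothing is materialized
// locally. Note RemoteSize > 0 is required, so empty remote objects
// never take the caching path at all.
func (e *Entry) IsInRemoteOnly() bool {
	return len(e.Chunks) == 0 && e.RemoteSize > 0
}

// cachedEntryHasLocalData mirrors the shared predicate from the commit:
// on the copy path, either chunks or inline content counts as a
// successful cache result.
func cachedEntryHasLocalData(e *Entry) bool {
	return len(e.Chunks) > 0 || len(e.Content) > 0
}

// copyObject sketches the handler-side branching: cache a remote-only
// source first, and answer 503 (with Retry-After) rather than writing a
// metadata-only destination when caching makes no progress.
func copyObject(src *Entry, cache func(*Entry) *Entry) (status int) {
	if src.IsInRemoteOnly() {
		cached := cache(src)
		if cached == nil || !cachedEntryHasLocalData(cached) {
			return 503 // the real handler also sets Retry-After: 5
		}
		src = cached
	}
	// ... proceed with the normal chunked / inline copy branches ...
	return 200
}

func main() {
	remoteOnly := &Entry{RemoteSize: 42}
	failCache := func(*Entry) *Entry { return nil }
	okCache := func(*Entry) *Entry {
		return &Entry{Content: []byte("hi"), RemoteSize: 42}
	}
	fmt.Println(copyObject(remoteOnly, failCache)) // 503
	fmt.Println(copyObject(remoteOnly, okCache))   // 200
}
```

The asymmetry the commit documents is that the streaming path keeps a stricter, chunks-only version of this predicate, because its downstream never reads inline Content.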
**905e7e72d9** Add remote.copy.local command to copy local files to remote storage (#8033)

* Add remote.copy.local command to copy local files to remote storage

  This new command solves the issue described in GitHub Discussion #8031, where files exist locally but are not synced to remote storage due to missing filer logs.

  Features:
  - Copies local-only files to remote storage
  - Supports file filtering (include/exclude patterns)
  - Dry run mode to preview actions
  - Configurable concurrency for performance
  - Force update option for existing remote files
  - Comprehensive error handling with retry logic

  Usage: remote.copy.local -dir=/path/to/mount/dir [options]

  This addresses the need to manually sync files when filer logs were deleted or when local files were never synced to remote storage.

* shell: rename commandRemoteLocalSync to commandRemoteCopyLocal

* test: add comprehensive remote cache integration tests

* shell: fix forceUpdate logic in remote.copy.local

  The previous logic only allowed force updates when localEntry.RemoteEntry was not nil, which defeated the purpose of using -forceUpdate to fix inconsistencies where local metadata might be missing. Now -forceUpdate will overwrite remote files whenever they exist, regardless of local metadata state.

* shell: fix code review issues in remote.copy.local

  - Return the actual error from flag parsing instead of swallowing it
  - Use sync.Once to safely capture the first error in concurrent operations
  - Add an atomic counter to track actual successful copies
  - Protect concurrent writes to output with a mutex to prevent interleaving
  - Fix path matching to prevent false positives with sibling directories (e.g., /mnt/remote2 no longer matches /mnt/remote)

* test: address code review nitpicks in integration tests

  - Improve create_bucket error handling to fail on real errors
  - Fix test assertions to properly verify expected failures
  - Use case-insensitive string matching for error detection
  - Replace weak logging-only tests with proper assertions
  - Remove an extra blank line in the Makefile

* test: remove redundant edge case tests

  Removed 5 tests that were either duplicates or didn't assert meaningful behavior:
  - TestEdgeCaseEmptyDirectory (duplicate of TestRemoteCopyLocalEmptyDirectory)
  - TestEdgeCaseRapidCacheUncache (no meaningful assertions)
  - TestEdgeCaseConcurrentCommands (only logs errors, no assertions)
  - TestEdgeCaseInvalidPaths (no security assertions)
  - TestEdgeCaseFileNamePatterns (duplicate of pattern tests in cache tests)

  Kept valuable stress tests: nested directories, special characters, very large files (100MB), many small files (100), and zero-byte files.

* test: fix CI failures by forcing localhost IP advertising

  Added the -ip=127.0.0.1 flag to both the primary and remote weed mini commands to prevent IP auto-detection issues in CI environments. Without this flag, the master would advertise itself using the actual IP (e.g., 10.1.0.17) while binding to 127.0.0.1, causing connection-refused errors when other services tried to connect to the gRPC port.

* test: address final code review issues

  - Add proper error assertions for the concurrent commands test
  - Require errors for invalid path tests instead of just logging
  - Remove the unused 'match' field from the pattern test struct
  - Add a dry-run output assertion to verify expected behavior
  - Simplify a redundant condition in remote.copy.local (remove the entry.RemoteEntry check)

* test: fix remote.configure tests to match actual validation rules

  - Use only letters in remote names (no numbers) to match validation
  - Relax missing-parameter test expectations, since validation may not be strict
  - Generate unique names using a letter suffix instead of numbers

* shell: rename pathToCopyCopy to localPath for clarity

  Improved variable naming in the concurrent copy loop to make the code more readable and less repetitive.

* test: fix remaining test failures

  - Remove the strict error requirement for invalid paths (commands handle them gracefully)
  - Fix TestRemoteUncacheBasic to actually test uncache instead of cache
  - Use simple numeric names for remote.configure tests (testcfg1234 format) to avoid validation issues with letter-only or complex name generation

* test: use only letters in remote.configure test names

  The validation regex ^[A-Za-z][A-Za-z0-9]*$ requires names to start with a letter, but using static letter-only names avoids any potential issues with the validation.

* test: remove quotes from the -name parameter in remote.configure tests

  Single quotes were being included as part of the name value, causing validation failures. Changed from -name='testremote' to -name=testremote.

* test: fix remote.configure assertion to be flexible about JSON formatting

  Changed from checking the exact JSON format with specific spacing to just checking that the name appears in the output, since JSON spacing may vary.
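The sibling-directory false positive fixed in this commit ("/mnt/remote2 no longer matches /mnt/remote") is the classic naive-prefix-match bug. A hedged sketch of the usual fix (the function name is illustrative, not SeaweedFS's):

```go
package main

import (
	"fmt"
	"strings"
)

// isUnderDir reports whether path is dir itself or a descendant of it.
// A naive strings.HasPrefix(path, dir) wrongly matches sibling
// directories such as /mnt/remote2 against /mnt/remote; requiring the
// prefix to end at a path separator closes that hole.
func isUnderDir(path, dir string) bool {
	if path == dir {
		return true
	}
	return strings.HasPrefix(path, strings.TrimSuffix(dir, "/")+"/")
}

func main() {
	fmt.Println(isUnderDir("/mnt/remote/a.txt", "/mnt/remote")) // true
	fmt.Println(isUnderDir("/mnt/remote2", "/mnt/remote"))      // false
	fmt.Println(isUnderDir("/mnt/remote", "/mnt/remote"))       // true
}
```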
**7064ad420d** Refactor S3 integration tests to use weed mini (#7877)

* Refactor S3 integration tests to use weed mini
* Fix weed mini flags for sse and parquet tests
* Fix IAM test startup: remove -iam.config flag from weed mini
* Enhance logging in IAM Makefile to debug startup failure
* Simplify weed mini flags and checks in S3 tests (IAM, Parquet, SSE, Copying)
* Simplify weed mini flags and checks in all S3 tests
* Fix IAM tests: use -s3.iam.config for weed mini
* Replace the timeout command with a portable loop in the IAM Makefile
* Standardize portable loop-based readiness checks in all S3 Makefiles
* Define SERVER_DIR in the retention Makefile
* Fix versioning and retention Makefiles: remove unsupported weed mini flags
* Fix the filer_group, cors, sse, retention, and parquet tests; remove emojis
* Assorted follow-up fixes and clean-up: avoid a duplicated debug server, update .gitignore, simplify, add credentials, bind address, startup delay and readiness check, update remote credentials, kill leftover processes, Makefile updates
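The "portable loop-based readiness check" that replaced the non-portable `timeout` command might look roughly like this POSIX sh sketch. The function name, retry budget, and the example curl target are illustrative assumptions; the real Makefiles poll the actual weed server endpoints.

```shell
#!/bin/sh
# wait_ready CMD TRIES: run CMD once per second until it succeeds or the
# retry budget is exhausted. Avoids GNU `timeout`, which is not available
# on all CI images.
wait_ready() {
  cmd=$1
  tries=$2
  i=1
  while [ "$i" -le "$tries" ]; do
    if $cmd >/dev/null 2>&1; then
      echo "ready after ${i} attempt(s)"
      return 0
    fi
    sleep 1
    i=$((i + 1))
  done
  echo "not ready after ${tries} attempts" >&2
  return 1
}

# In a Makefile this would wrap a health probe against the server, e.g.
# (hypothetical URL): wait_ready "curl -fs http://127.0.0.1:8333/status" 30
wait_ready true 5
```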
**504b258258** s3: fix remote object not caching (#7790)

* s3: fix remote object not caching

* s3: address review comments for remote object caching

  - Fix the leading slash in the object name by using strings.TrimPrefix
  - Return the cached entry from CacheRemoteObjectToLocalCluster to get updated local chunk locations
  - Reuse an existing helper function instead of an inline gRPC call

* s3/filer: add singleflight deduplication for remote object caching

  - Add a singleflight.Group to FilerServer to deduplicate concurrent cache operations
  - Wrap CacheRemoteObjectToLocalCluster with singleflight so only one caching operation runs per object when multiple clients request the same file
  - Add an early-return check for already-cached objects
  - The S3 API calls the filer gRPC with a timeout and graceful fallback on error
  - Clear the negative bucket cache when a bucket is created via weed shell
  - Add integration tests for remote cache with singleflight deduplication

  This benefits all clients (S3, HTTP, Hadoop) accessing remote-mounted objects by preventing redundant cache operations and improving concurrent access performance.

  Fixes: https://github.com/seaweedfs/seaweedfs/discussions/7599

* fix: data race in concurrent remote object caching

  - Add a mutex to protect the chunks slice from concurrent append
  - Add a mutex to protect fetchAndWriteErr from concurrent read/write
  - Fix an incorrect error check (it was checking assignResult.Error instead of parseErr)
  - Rename an inner variable to avoid shadowing fetchAndWriteErr

* fix: address code review comments

  - Remove the duplicate remote caching block in GetObjectHandler, keeping only the singleflight version
  - Add mutex protection for concurrent chunk slice and error access (data race fix)
  - Use lazy initialization for the S3 client in tests to avoid a panic during package load
  - Fix markdown linting: add a language specifier to a code fence, blank lines around tables
  - Add an 'all' target to the Makefile as an alias for test-with-server
  - Remove an unused 'util' import

* style: remove emojis from test files

* fix: add defensive checks and sort chunks by offset

  - Add nil and type-assertion checks for the singleflight result
  - Sort chunks by offset after concurrent fetching to maintain file order

* fix: improve test diagnostics and path normalization

  - runWeedShell now returns an error for better test diagnostics
  - Add all targets to .PHONY in the Makefile (logs-primary, logs-remote, health)
  - Strip the leading slash from normalizedObject to avoid double slashes in the path

Co-authored-by: chrislu <chris.lu@gmail.com>
Co-authored-by: Chris Lu <chrislusf@users.noreply.github.com>