Mirror of https://github.com/seaweedfs/seaweedfs.git (synced 2026-05-14 13:51:33 +00:00)
Branch: master, 19 commits
9ae905e456
feat(security): hot-reload HTTPS certs without restart (k8s cert-manager) (#9181)
* feat(security): hot-reload HTTPS certs for master/volume/filer/webdav/admin
S3 and filer already use a refreshing pemfile provider for their HTTPS cert, so rotated certificates (e.g. from k8s cert-manager) are picked up without a restart. Master, volume, webdav, and admin, however, passed cert/key paths straight to ServeTLS/ListenAndServeTLS and loaded once at startup — rotating those certs required a pod restart. Add a small helper NewReloadingServerCertificate in weed/security that wraps pemfile.Provider and returns a tls.Config.GetCertificate closure, then wire it into the four remaining HTTPS entry points. httpdown now also calls ServeTLS when TLSConfig carries a GetCertificate/Certificates but CertFile/KeyFile are empty, so volume server can pre-populate TLSConfig. A unit test exercises the rotation path (write cert, rotate on disk, assert the callback returns the new cert) with a short refresh window.
* refactor(security): route filer/s3 HTTPS through the shared cert reloader
Before: filer.go and s3.go each kept a *certprovider.Provider on the options struct plus a duplicated GetCertificateWithUpdate method. Both were loading pemfile themselves. Behaviorally they already reloaded, but the logic was duplicated two ways and neither path was shared with the newly-added master/volume/webdav/admin wiring.
After: both use security.NewReloadingServerCertificate like the other servers. The per-struct certProvider field and GetCertificateWithUpdate method are removed, along with the now-unused certprovider and pemfile imports.
Net: -32 lines, one code path for all HTTPS cert reloading. No behavior change — the refresh window, cache, and handshake contract are identical (the helper wraps the same pemfile.NewProvider).
* feat(security): hot-reload HTTPS client certs for mount/backup/upload/etc
The HTTP client in weed/util/http/client loaded the mTLS client cert once at startup via tls.LoadX509KeyPair. That left every long-lived HTTPS client process (weed mount, backup, filer.copy, filer→volume, s3→filer/volume) unable to pick up a rotated client cert without a restart — even though the same cert-manager setup was already rotating the server side fine.
Swap the client cert loader for a tls.Config.GetClientCertificate callback backed by the same refreshing pemfile provider. New TLS handshakes pick up the rotated cert; in-flight pooled connections keep their old cert and drop as normal transport churn happens.
To keep this reusable from both server and client TLS code without an import cycle (weed/security already imports weed/util/http/client for LoadHTTPClientFromFile), extract the pemfile wrapper into a new weed/security/certreload subpackage. weed/security keeps its thin NewReloadingServerCertificate wrapper. The existing unit test moves with the implementation.
gRPC mTLS was already handled by security.LoadServerTLS / LoadClientTLS; this PR does not change any gRPC paths. MQ broker, MQ agent, Kafka gateway, and FUSE mount control plane are gRPC-only and therefore already rotate. CA bundles (ClientCAs / RootCAs / grpc.ca) are still loaded once — noted as a known limitation in the wiki.
* fix(security): address PR review feedback on cert reloader
Bots (gemini-code-assist + coderabbit) flagged three real issues and a couple of nits. Addressing them here:
1. KeyMaterial used context.Background(). The grpc pemfile provider's KeyMaterial blocks until material arrives or the context deadline expires; with Background() a slow disk could hang the TLS handshake indefinitely. Switched both the server and client callbacks to use hello.Context() / cri.Context() so a stuck read is bounded by the handshake timeout.
2. Admin server loaded TLS inside the serve goroutine. If the cert was bad, the goroutine returned but startAdminServer kept blocking on <-ctx.Done() with no listener, making the process look healthy with nothing bound. Moved TLS setup to run before the goroutine starts and propagate errors via fmt.Errorf; also captures the provider and defers Close().
3. HTTP client discarded the certprovider.Provider from NewClientGetCertificate. That leaked the refresh goroutine, and NewHttpClientWithTLS had a worse case where a CA-file failure after provider creation orphaned the provider entirely. Added a certProvider field and a Close() method on HTTPClient, and made the constructors close the provider on subsequent error paths.
4. Server-side paths (master/volume/filer/s3/webdav/admin) now retain the provider. filer and webdav run ServeTLS synchronously, so a plain defer works. master/volume/s3 dispatch goroutines and return while the server keeps running, so they hook Close() into grace.OnInterrupt.
5. Test: certreload_test now tolerates transient read/parse errors during file rotation (writeSelfSigned rewrites cert before key) and reports the last error only if the deadline expires.
No user-visible behavior change for the happy path.
* test(tls): add end-to-end HTTPS cert rotation integration test
Boots a real `weed master` with HTTPS enabled, captures the leaf cert served at TLS handshake time, atomically rewrites the cert/key files on disk (the same rename-in-place pattern kubelet does when it swaps a cert-manager Secret), and asserts that a subsequent TLS handshake observes the rotated leaf — with no process restart, no SIGHUP, no reloader sidecar. Verifies the full path: on-disk change → pemfile refresh tick → provider.KeyMaterial → tls.Config.GetCertificate → server TLS handshake.
Runtime is ~1s by exposing the reloader's refresh window as an env var (WEED_TLS_CERT_REFRESH_INTERVAL) and setting it to 500ms for the test. The same env var is user-facing — documented in the wiki — so operators running short-lived certs (Vault, cert-manager with duration: 24h, etc.) can tighten the rotation-pickup window without a rebuild. Defaults to 5h to preserve prior behavior. security.CredRefreshingInterval is kept for API compatibility but now aliases certreload.DefaultRefreshInterval so the same env controls both gRPC mTLS and HTTPS reload.
* ci(tls): wire the TLS rotation integration test into GitHub Actions
Mirrors the existing vacuum-integration-tests.yml shape: Ubuntu runner, Go 1.25, build weed, run `go test` in test/tls_rotation, upload master logs on failure. 10-minute job timeout; the test itself finishes in about a second because WEED_TLS_CERT_REFRESH_INTERVAL is set to 500ms inside the test. Runs on every push to master and on every PR to master.
* fix(tls): address follow-up PR review comments
Three new comments on the integration test + volume shutdown path:
1. Test: peekServerCert was swallowing every dial/handshake error, which meant waitForCert's "last err: <nil>" fatal message lost all diagnostic value. Thread errors back through: peekServerCert now returns (*x509.Certificate, error), and waitForCert records the latest error so a CI flake points at the actual cause (master didn't come up, handshake rejected, CA pool mismatch, etc.).
2. Test: set HOME=<tempdir> on the master subprocess. Viper today registers the literal path "$HOME/.seaweedfs" without env expansion, so a developer's ~/.seaweedfs/security.toml is accidentally invisible — the test was relying on that. Pinning HOME is belt-and-braces against a future viper upgrade that does expand env vars.
3. volume.go: startClusterHttpService's provider close was registered via grace.OnInterrupt, which fires on SIGTERM but NOT on the v.shutdownCtx.Done() path used by mini / integration tests. The pemfile refresh goroutine leaked in that shutdown path. Now the helper returns a close func and the caller invokes it on BOTH shutdown paths for parity.
Also add MinVersion: TLS 1.2 to the test's tls.Config to quiet the ast-grep static-analysis nit — zero-risk since the pool only trusts our in-memory CA. Test runs clean 3/3.
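
As a rough illustration of the server-side hook this PR describes, here is a minimal Go sketch of a tls.Config whose GetCertificate callback re-reads a rotated cert from disk on new handshakes. The helper name NewReloadingServerCertificate and the pemfile provider are from the commit message; the simple interval-based reloader below is an assumption for illustration, not the actual SeaweedFS implementation.

```go
package certexample

import (
	"crypto/tls"
	"sync"
	"time"
)

// reloadingCert caches a parsed certificate and re-reads the PEM files from
// disk once the refresh interval has elapsed. Rotated files are picked up on
// the next TLS handshake; no server restart is needed.
type reloadingCert struct {
	certFile, keyFile string
	refresh           time.Duration

	mu       sync.Mutex
	cached   *tls.Certificate
	loadedAt time.Time
}

func (r *reloadingCert) getCertificate(_ *tls.ClientHelloInfo) (*tls.Certificate, error) {
	r.mu.Lock()
	defer r.mu.Unlock()
	if r.cached != nil && time.Since(r.loadedAt) < r.refresh {
		return r.cached, nil
	}
	cert, err := tls.LoadX509KeyPair(r.certFile, r.keyFile)
	if err != nil {
		if r.cached != nil {
			// Transient read error mid-rotation: keep serving the last good cert.
			return r.cached, nil
		}
		return nil, err
	}
	r.cached, r.loadedAt = &cert, time.Now()
	return r.cached, nil
}

// newReloadingTLSConfig wires the callback into a tls.Config, the same shape
// of hook the commit hands to ServeTLS/ListenAndServeTLS.
func newReloadingTLSConfig(certFile, keyFile string, refresh time.Duration) *tls.Config {
	r := &reloadingCert{certFile: certFile, keyFile: keyFile, refresh: refresh}
	return &tls.Config{
		MinVersion:     tls.VersionTLS12,
		GetCertificate: r.getCertificate,
	}
}
```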

08d9193fe1
[nfs] Add NFS (#9067)
* add filer inode foundation for nfs
* nfs command skeleton
* add filer inode index foundation for nfs
* make nfs inode index hardlink aware
* add nfs filehandle and inode lookup plumbing
* add read-only nfs frontend foundation
* add nfs namespace mutation support
* add chunk-backed nfs write path
* add nfs protocol integration tests
* add stale handle nfs coverage
* complete nfs hardlink and failover coverage
* add nfs export access controls
* add nfs metadata cache invalidation
* fix nfs chunk read lookup routing
* fix nfs review findings and rename regression
* address pr 9067 review comments
- filer_inode: fail fast if the snowflake sequencer cannot start, and let
operators override the 10-bit node id via SEAWEEDFS_FILER_SNOWFLAKE_ID
to avoid multi-filer collisions
- filer_inode: drop the redundant retry loop in nextInode
- filerstore_wrapper: treat inode-index writes/removals as best-effort so
a primary store success no longer surfaces as an operation failure
- filer_grpc_server_rename: defer overwritten-target chunk deletion until
after CommitTransaction so a rolled-back rename does not strand live
metadata pointing at freshly deleted chunks
- command/nfs: default ip.bind to loopback and require an explicit
filer.path, so the experimental server does not expose the entire
filer namespace on first run
- nfs integration_test: document why LinkArgs matches go-nfs's on-the-wire
layout rather than RFC 1813 LINK3args
* mount: pre-allocate inode in Mkdir and Symlink
Mkdir and Symlink used to send filer_pb.CreateEntryRequest with
Attributes.Inode = 0. After PR 9067, the filer's CreateEntry now assigns
its own inode in that case, so the filer-side entry ends up with a
different inode than the one the mount allocates via inodeToPath.Lookup
and returns to the kernel. Once applyLocalMetadataEvent stores the
filer's entry in the meta cache, subsequent GetAttr calls read the
cached entry and hit the setAttrByPbEntry override at line 197 of
weedfs_attr.go, returning the filer-assigned inode instead of the
mount's local one. pjdfstest tests/rename/00.t (subtests 81/87/91)
caught this — it lstat'd a freshly-created directory/symlink, renamed
it, lstat'd again, and saw a different inode the second time.
createRegularFile already pre-allocates via inodeToPath.AllocateInode
and stamps it into the create request. Do the same thing in Mkdir and
Symlink so both sides agree on the object identity from the very first
request, and so GetAttr's cache path returns the same value as Mkdir /
Symlink's initial response.
* sequence: mask snowflake node id on int→uint32 conversion
CodeQL flagged the unchecked uint32(snowflakeId) cast in
NewSnowflakeSequencer as a potential truncation bug when snowflakeId is
sourced from user input (e.g. via SEAWEEDFS_FILER_SNOWFLAKE_ID). Mask
to the 10 bits the snowflake library actually uses so any caller-
supplied int is safely clamped into range (a small illustrative sketch of this masking follows at the end of this commit message).
* add test/nfs integration suite
Boots a real SeaweedFS cluster (master + volume + filer) plus the
experimental `weed nfs` frontend as subprocesses and drives it through
the NFSv3 wire protocol via go-nfs-client, mirroring the layout of
test/sftp. The tests run without a kernel NFS mount, privileged ports,
or any platform-specific tooling.
Coverage includes read/write round-trip, mkdir/rmdir, nested
directories, rename content preservation, overwrite + explicit
truncate, 3 MiB binary file, all-byte binary and empty files, symlink
round-trip, ReadDirPlus listing, missing-path remove, FSInfo sanity,
sequential appends, and readdir-after-remove.
Framework notes:
- Picks ephemeral ports with net.Listen("127.0.0.1:0") and passes
-port.grpc explicitly so the default port+10000 convention cannot
overflow uint16 on macOS.
- Pre-creates the /nfs_export directory via the filer HTTP API before
starting the NFS server — the NFS server's ensureIndexedEntry check
requires the export root to exist with a real entry, which filer.Root
does not satisfy when the export path is "/".
- Reuses the same rpc.Client for mount and target so go-nfs-client does
not try to re-dial via portmapper (which concatenates ":111" onto the
address).
* ci: add NFS integration test workflow
Mirror test/sftp's workflow for the new test/nfs suite so PRs that touch
the NFS server, the inode filer plumbing it depends on, or the test
harness itself run the 14 NFSv3-over-RPC integration tests on Ubuntu
22.04 via `make test`.
* nfs: use append for buffer growth in Write and Truncate
The previous make+copy pattern reallocated the full buffer on every
extending write or truncate, giving O(N^2) behaviour for sequential
write loops. Switching to `append(f.content, make([]byte, delta)...)`
lets Go's amortized growth strategy absorb the repeated extensions.
Called out by gemini-code-assist on PR 9067.
* filer: honor caller cancellation in collectInodeIndexEntries
Dropping the WithoutCancel wrapper lets DeleteFolderChildren bail out of
the inode-index scan if the client disconnects mid-walk. The cleanup is
already treated as best-effort by the caller (it logs on error and
continues), so a cancelled walk just means the partial index rebuild is
skipped — the same failure mode as any other index write error.
Flagged as a DoS concern by gemini-code-assist on PR 9067.
* nfs: skip filer read on open when O_TRUNC is set
openFile used to unconditionally loadWritableContent for every writable
open and then discard the buffer if O_TRUNC was set. For large files
that is a pointless 64 MiB round-trip. Reorder the branches so we only
fetch existing content when the caller intends to keep it, and mark the
file dirty right away so the subsequent Close still issues the
truncating write. Called out by gemini-code-assist on PR 9067.
* nfs: allow Seek on O_APPEND files and document buffered write cap
Two related cleanups on filesystem.go:
- POSIX only restricts Write on an O_APPEND fd, not lseek. The existing
Seek error ("append-only file descriptors may only seek to EOF")
prevented read-and-write workloads that legitimately reposition the
read cursor. Write already snaps the offset to EOF before persisting
(see seaweedFile Write), so Seek can unconditionally accept any
offset. Update the unit test that was asserting the old behaviour.
- Add a doc comment on maxBufferedWriteSize explaining that it is a
per-file ceiling, the memory footprint it implies, and that the real
fix for larger whole-file rewrites is streaming / multi-chunk support.
Both changes flagged by gemini-code-assist on PR 9067.
* nfs: guard offset before casting to int in Write
CodeQL flagged `int(f.offset) + len(p)` inside the Write growth path as
a potential overflow on architectures where `int` is 32-bit. The
existing check only bounded the post-cast value, which is too late.
Clamp f.offset against maxBufferedWriteSize before the cast and also
reject negative/overflowed endOffset results. Both branches fall
through to billy.ErrNotSupported, the same behaviour the caller gets
today for any out-of-range buffered write.
* nfs: compute Write endOffset in int64 to satisfy CodeQL
The previous guard bounded f.offset but left len(p) unchecked, so
CodeQL still flagged `int(f.offset) + len(p)` as a possible int-width
overflow path. Bound len(p) against maxBufferedWriteSize first, do the
addition in int64, and only cast down after the total has been clamped
against the buffer ceiling. Behaviour is unchanged: any out-of-range
write still returns billy.ErrNotSupported. (A combined sketch of the clamped, append-based growth path follows at the end of this commit message.)
* ci: drop emojis from nfs-tests workflow summary
Plain-text step summary per user preference — no decorative glyphs in
the NFS CI output or checklist.
* nfs: annotate remaining DEV_PLAN TODOs with status
Three of the unchecked items are genuine follow-up PRs rather than
missing work in this one, and one was actually already done:
- Reuse chunk cache and mutation stream helpers without FUSE deps:
checked off — the NFS server imports weed/filer.ReaderCache and
weed/util/chunk_cache directly with no weed/mount or go-fuse imports.
- Extract shared read/write helpers from mount/WebDAV/SFTP: annotated
as deferred to a separate refactor PR (touches four packages).
- Expand direct data-path writes beyond the 64 MiB buffered fallback:
annotated as deferred — requires a streaming WRITE path.
- Shared lock state + lock tests: annotated as blocked upstream on
go-nfs's missing NLM/NFSv4 lock state RPCs, matching the existing
"Current Blockers" note.
* test/nfs: share port+readiness helpers with test/testutil
Drop the per-suite mustPickFreePort and waitForService re-implementations
in favor of testutil.MustAllocatePorts (atomic batch allocation; no
close-then-hope race) and testutil.WaitForPort / SeaweedMiniStartupTimeout.
Pull testutil in via a local replace directive so this standalone
seaweedfs-nfs-tests module can import the in-repo package without a
separate release.
Subprocess startup is still master + volume + filer + nfs — no switch to
weed mini yet, since mini does not know about the nfs frontend.
* nfs: stream writes to volume servers instead of buffering the whole file
Before this change the NFS write path held the full contents of every
writable open in memory:
- OpenFile(write) called loadWritableContent which read the existing
file into seaweedFile.content up to maxBufferedWriteSize (64 MiB)
- each Write() extended content in-place
- Close() uploaded the whole buffer as a single chunk via
persistContent + AssignVolume
The 64 MiB ceiling made large NFS writes return NFS3ERR_NOTSUPP, and
even below the cap every Write paid a whole-file-in-memory cost. This
PR rewrites the write path to match how `weed filer` and the S3 gateway
persist data:
- openFile(write) no longer loads the existing content at all; it
only issues an UpdateEntry when O_TRUNC is set *and* the file is
non-empty (so a fresh create+trunc is still zero-RPC)
- Write() streams the caller's bytes straight to a volume server via
one AssignVolume + one chunk upload, then atomically appends the
resulting chunk to the filer entry through mutateEntry. Any
previously inlined entry.Content is migrated to a chunk in the same
update so the chunk list becomes the authoritative representation.
- Truncate() becomes a direct mutateEntry (drop chunks past the new
size, clip inline content, update FileSize) instead of resizing an
in-memory buffer.
- Close() is a no-op because everything was flushed inline.
The small-file fast path that the filer HTTP handler uses is preserved:
if the post-write size still fits in maxInlineWriteSize (4 MiB) and
the file has no existing chunks, we rewrite entry.Content directly and
skip the volume-server round-trip. This keeps single-shot tiny writes
(echo, small edits) cheap while completely removing the 64 MiB cap on
larger files. Read() now always reads through the chunk reader instead
of a local byte slice, so reads inside the same session see the freshly
appended data.
Drops the unused seaweedFile.content / dirty fields, the
maxBufferedWriteSize constant, and the loadWritableContent helper.
Updates TestSeaweedFileSystemSupportsNamespaceMutations expectations
to match the new "no extra O_TRUNC UpdateEntry on an empty file"
behavior (still 3 updates: Write + Chmod + Truncate).
* filer: extract shared gateway upload helper for NFS and WebDAV
Three filer-backed gateways (NFS, WebDAV, and mount) each had a local
saveDataAsChunk that wrapped operation.NewUploader().UploadWithRetry
with near-identical bodies: build AssignVolumeRequest, build
UploadOption, build genFileUrlFn with optional filerProxy rewriting,
call UploadWithRetry, validate the result, and call ToPbFileChunk.
Pull that body into filer.SaveGatewayDataAsChunk with a
GatewayChunkUploadRequest struct so both NFS and WebDAV can delegate
to one implementation.
- NFS's saveDataAsChunk is now a thin adapter that assembles the
GatewayChunkUploadRequest from server options and calls the helper.
The chunkUploader interface keeps working for test injection because
the new GatewayChunkUploader interface is structurally identical.
- WebDAV's saveDataAsChunk is similarly a thin adapter — it drops the
local operation.NewUploader call plus the AssignVolume/UploadOption
scaffolding.
- mount is intentionally left alone. mount's saveDataAsChunk has two
features that do not fit the shared helper (a pre-allocated file-id
pool used to skip AssignVolume entirely, and a chunkCache
write-through at offset 0 so future reads hit the mount's local
cache), both of which are mount-specific.
Marks the Phase 2 "extract shared read/write helpers from mount,
WebDAV, and SFTP" DEV_PLAN item as done. The filer-level chunk read
path (NonOverlappingVisibleIntervals + ViewFromVisibleIntervals +
NewChunkReaderAtFromClient) was already shared.
* nfs: remove DESIGN.md and DEV_PLAN.md
The planning documents have served their purpose — all phase 1 and
phase 2 items are landed, phase 3 streaming writes are landed, phase 2
shared helpers are extracted, and the two remaining phase 4 items
(shared lock state + lock tests) are blocked upstream on
github.com/willscott/go-nfs which exposes no NLM or NFSv4 lock state
RPCs. The running decision log no longer reflects current code and
would just drift. The NFS wiki page
(https://github.com/seaweedfs/seaweedfs/wiki/NFS-Server) now carries
the overview, configuration surface, architecture notes, and known
limitations; the source is the source of truth for the rest.

2919bb27e5
fix(sync): use per-cluster TLS for HTTP volume connections in filer.sync (#8974)
* fix(sync): use per-cluster TLS for HTTP volume connections in filer.sync (#8965)
When filer.sync runs with -a.security and -b.security flags, only gRPC connections received per-cluster TLS configuration. HTTP clients for volume server reads and uploads used a global singleton with the default security.toml, causing TLS verification failures when clusters use different self-signed certificates.
Load per-cluster HTTPS client config from the security files and pass dedicated HTTP clients to FilerSource (for downloads) and FilerSink (for uploads) so each direction uses the correct cluster's certificates.
* fix(sync): address review feedback for per-cluster HTTP TLS
- Add insecure_skip_verify support to NewHttpClientWithTLS and read it from per-cluster security config via https.client.insecure_skip_verify
- Error on partial mTLS config (cert without key or vice versa)
- Add nil-check for client parameter in DownloadFileWithClient
- Document SetUploader as init-only (same pattern as SetChunkConcurrency)
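
A hedged sketch of what a per-cluster HTTP client with its own TLS material can look like. NewHttpClientWithTLS, FilerSource, and FilerSink are the names used above; the loader below is an illustrative stand-in that includes the partial-mTLS check and the insecure_skip_verify knob mentioned in the review feedback.

```go
package syncexample

import (
	"crypto/tls"
	"crypto/x509"
	"fmt"
	"net/http"
	"os"
)

// newClusterHTTPClient builds a dedicated client from one cluster's CA,
// cert, and key files so filer.sync's A and B sides each verify against
// their own certificate authority.
func newClusterHTTPClient(caFile, certFile, keyFile string, insecureSkipVerify bool) (*http.Client, error) {
	if (certFile == "") != (keyFile == "") {
		return nil, fmt.Errorf("partial mTLS config: cert and key must be set together")
	}
	tlsCfg := &tls.Config{MinVersion: tls.VersionTLS12, InsecureSkipVerify: insecureSkipVerify}
	if certFile != "" {
		cert, err := tls.LoadX509KeyPair(certFile, keyFile)
		if err != nil {
			return nil, err
		}
		tlsCfg.Certificates = []tls.Certificate{cert}
	}
	if caFile != "" {
		pem, err := os.ReadFile(caFile)
		if err != nil {
			return nil, err
		}
		pool := x509.NewCertPool()
		if !pool.AppendCertsFromPEM(pem) {
			return nil, fmt.Errorf("no certificates parsed from %s", caFile)
		}
		tlsCfg.RootCAs = pool
	}
	return &http.Client{Transport: &http.Transport{TLSClientConfig: tlsCfg}}, nil
}
```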

0798b274dd
feat(s3): add concurrent chunk prefetch for large file downloads (#8917)
* feat(s3): add concurrent chunk prefetch for large file downloads
Add a pipe-based prefetch pipeline that overlaps chunk fetching with response writing during S3 GetObject, SSE downloads, and filer proxy. While chunk N streams to the HTTP response, fetch goroutines for the next K chunks establish HTTP connections to volume servers ahead of time, eliminating the RTT gap between sequential chunk fetches. Uses io.Pipe for minimal memory overhead (~1MB per download regardless of chunk size, vs buffering entire chunks). Also increases the streaming read buffer from 64KB to 256KB to reduce syscall overhead.
Benchmark results (64KB chunks, prefetch=4):
- 0ms latency: 1058 → 2362 MB/s (2.2× faster)
- 5ms latency: 11.0 → 41.7 MB/s (3.8× faster)
- 10ms latency: 5.9 → 23.3 MB/s (4.0× faster)
- 20ms latency: 3.1 → 12.1 MB/s (3.9× faster)
* fix: address review feedback for prefetch pipeline
- Fix data race: use *chunkPipeResult (pointer) on channel to avoid copying struct while fetch goroutines write to it. Confirmed clean with -race detector.
- Remove concurrent map write: retryWithCacheInvalidation no longer updates fileId2Url map. Producer only reads it; consumer never writes.
- Use mem.Allocate/mem.Free for copy buffer to reduce GC pressure.
- Add local cancellable context so consumer errors (client disconnect) immediately stop the producer and all in-flight fetch goroutines.
* fix(test): remove dead code and add Range header support in test server
- Remove unused allData variable in makeChunksAndServer
- Add Range header handling to createTestServer for partial chunk read coverage (206 Partial Content, 416 Range Not Satisfiable)
* fix: correct retry condition and goroutine leak in prefetch pipeline
- Fix retry condition: use result.fetchErr/result.written instead of copied to decide cache-invalidation retry. The old condition wrongly triggered retry when the fetch succeeded but the response writer failed on the first write (copied==0 despite fetcher having data). Now matches the sequential path (stream.go:197) which checks whether the fetcher itself wrote zero bytes.
- Fix goroutine leak: when the producer's send to the results channel is interrupted by context cancellation, the fetch goroutine was already launched but the result was never sent to the channel. The drain loop couldn't handle it. Now waits on result.done before returning so every fetch goroutine is properly awaited.
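
To make the overlap idea concrete, here is an illustrative Go sketch of a bounded prefetch pipeline: up to `prefetch` chunk fetches run ahead while earlier chunks are written in order. The real implementation uses io.Pipe and SeaweedFS-specific types (chunkPipeResult, fileId2Url); this sketch swaps those for an ordered results channel plus golang.org/x/sync/errgroup, and assumes fetchChunk honors its context.

```go
package prefetchexample

import (
	"context"
	"io"

	"golang.org/x/sync/errgroup"
)

// streamChunks keeps up to `prefetch` chunk fetches in flight while earlier
// chunks are written to the response in order. fetchChunk stands in for the
// volume-server read. (Relies on Go 1.22+ per-iteration loop variables.)
func streamChunks(ctx context.Context, w io.Writer, chunkIDs []string, prefetch int,
	fetchChunk func(ctx context.Context, id string) ([]byte, error)) error {

	ctx, cancel := context.WithCancel(ctx) // a consumer error stops all fetchers
	defer cancel()

	type result struct {
		data []byte
		err  error
		done chan struct{}
	}
	results := make(chan *result, prefetch) // lets the producer run ahead of the writer

	g, gctx := errgroup.WithContext(ctx)
	g.SetLimit(prefetch) // bound the number of in-flight fetch goroutines

	go func() {
		defer close(results)
		for _, id := range chunkIDs {
			r := &result{done: make(chan struct{})}
			g.Go(func() error {
				defer close(r.done)
				r.data, r.err = fetchChunk(gctx, id)
				return r.err
			})
			select {
			case results <- r:
			case <-gctx.Done():
				return
			}
		}
	}()

	var writeErr error
	for r := range results {
		<-r.done // results arrive in order; wait for this chunk's fetch
		if writeErr != nil || r.err != nil {
			continue // keep draining so the producer and fetchers can exit
		}
		if _, err := w.Write(r.data); err != nil {
			writeErr = err
			cancel() // client went away: stop launching and fetching
		}
	}
	if fetchErr := g.Wait(); writeErr == nil && fetchErr != nil {
		return fetchErr
	}
	return writeErr
}
```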

92c2fc0d52
Add insecure_skip_verify option for HTTPS client in security.toml (#8781)
* Add -insecureSkipVerify flag and config option for filer.sync HTTPS connections
When using filer.sync between clusters with different CAs (e.g., separate OpenShift clusters), TLS certificate verification fails with "x509: certificate signed by unknown authority". This adds two ways to skip TLS certificate verification:
1. CLI flag: `weed filer.sync -insecureSkipVerify ...`
2. Config option: `insecure_skip_verify = true` under [https.client] in security.toml
Closes #8778
* Add insecure_skip_verify option for HTTPS client in security.toml
When using filer.sync between clusters with different CAs (e.g., separate OpenShift clusters), TLS certificate verification fails. Adding insecure_skip_verify = true under [https.client] in security.toml allows skipping TLS certificate verification. The option is read during global HTTP client initialization so it applies to all HTTPS connections including filer.sync proxy reads and writes.
Closes #8778
---------
Co-authored-by: Copilot <copilot@github.com>

b665c329bc
fix(replication): resume partial chunk reads on EOF instead of re-downloading (#8607)
* fix(replication): resume partial chunk reads on EOF instead of re-downloading
When replicating chunks and the source connection drops mid-transfer, accumulate the bytes already received and retry with a Range header to fetch only the remaining bytes. This avoids re-downloading potentially large chunks from scratch on each retry, reducing load on busy source servers and speeding up recovery.
* test(replication): add tests for downloadWithRange including gzip partial reads
Tests cover:
- No offset (no Range header sent)
- With offset (Range header verified)
- Content-Disposition filename extraction
- Partial read + resume: server drops connection mid-transfer, client resumes with Range from the offset of received bytes
- Gzip partial read + resume: first response is gzip-encoded (Go auto-decompresses), connection drops, resume request gets decompressed data (Go doesn't add Accept-Encoding when Range is set, so the server decompresses), combined bytes match original
* fix(replication): address PR review comments
- Consolidate downloadWithRange into DownloadFile with optional offset parameter (variadic), eliminating code duplication (DRY)
- Validate HTTP response status: require 206 + correct Content-Range when offset > 0, reject when server ignores Range header
- Use if/else for fullData assignment for clarity
- Add test for rejected Range (server returns 200 instead of 206)
* refactor(replication): remove unused ReplicationSource interface
The interface was never referenced and its signature didn't match the actual FilerSource.ReadPart method.
---------
Co-authored-by: Copilot <copilot@github.com>
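
A minimal sketch of the resume-with-Range behaviour described above, assuming a plain net/http client; the actual consolidation lives in DownloadFile with a variadic offset. A fuller version would also verify the Content-Range header, as the review fix notes.

```go
package replicationexample

import (
	"fmt"
	"io"
	"net/http"
)

// downloadResumable keeps whatever bytes already arrived when a connection
// drops, then re-requests only the tail with a Range header. When resuming,
// a 206 response is required so a server that ignores Range cannot silently
// corrupt the result.
func downloadResumable(client *http.Client, url string, maxRetries int) ([]byte, error) {
	var buf []byte
	for attempt := 0; attempt <= maxRetries; attempt++ {
		req, err := http.NewRequest(http.MethodGet, url, nil)
		if err != nil {
			return nil, err
		}
		if len(buf) > 0 {
			req.Header.Set("Range", fmt.Sprintf("bytes=%d-", len(buf)))
		}
		resp, err := client.Do(req)
		if err != nil {
			continue // retry from the current offset
		}
		if len(buf) > 0 && resp.StatusCode != http.StatusPartialContent {
			resp.Body.Close()
			return nil, fmt.Errorf("server ignored Range header: got %d", resp.StatusCode)
		}
		data, readErr := io.ReadAll(resp.Body)
		resp.Body.Close()
		buf = append(buf, data...) // keep whatever arrived before the error
		if readErr == nil {
			return buf, nil
		}
		// readErr != nil: connection dropped mid-transfer, loop and resume
	}
	return nil, fmt.Errorf("download did not complete after %d retries", maxRetries)
}
```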

38e14a867b
fix: cancel volume server requests on client disconnect during S3 downloads (#8373)
* fix: cancel volume server requests on client disconnect during S3 downloads
- Use http.NewRequestWithContext in ReadUrlAsStream so in-flight volume server requests are properly aborted when the client disconnects and the request context is canceled
- Distinguish context-canceled errors (client disconnect, expected) from real server errors in streamFromVolumeServers; log at V(3) instead of ERROR to reduce noise from client-side disconnects (e.g. Nginx upstream timeout, browser cancel, curl --max-time)
Fixes: "streamFromVolumeServers: streamFn failed...context canceled"
* fixup: separate Canceled/DeadlineExceeded log severity in streamFromVolumeServers
- context.Canceled → V(3) Infof "client disconnected" (expected, no noise)
- context.DeadlineExceeded → Warningf "server-side deadline exceeded" (unexpected, needs attention)
- all other errors → Errorf (unchanged)
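
A small illustrative sketch of both changes, using the standard library logger in place of glog and a hypothetical fetchChunk helper rather than the real ReadUrlAsStream/streamFromVolumeServers code.

```go
package s3downloadexample

import (
	"context"
	"errors"
	"io"
	"log"
	"net/http"
)

// fetchChunk lets the volume-server request inherit the S3 client's request
// context, then classifies the error so client disconnects stay quiet while
// real failures remain loud.
func fetchChunk(w http.ResponseWriter, r *http.Request, volumeURL string) error {
	req, err := http.NewRequestWithContext(r.Context(), http.MethodGet, volumeURL, nil)
	if err != nil {
		return err
	}
	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		switch {
		case errors.Is(err, context.Canceled):
			log.Printf("client disconnected: %v", err) // expected; keep the log quiet
			return nil
		case errors.Is(err, context.DeadlineExceeded):
			log.Printf("WARNING: server-side deadline exceeded: %v", err)
			return err
		default:
			log.Printf("ERROR: volume server read failed: %v", err)
			return err
		}
	}
	defer resp.Body.Close()
	_, copyErr := io.Copy(w, resp.Body) // stream the chunk to the S3 client
	return copyErr
}
```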

9012069bd7
chore: execute goimports to format the code (#7983)
* chore: execute goimports to format the code
Signed-off-by: promalert <promalert@outlook.com>
* goimports -w .
---------
Signed-off-by: promalert <promalert@outlook.com>
Co-authored-by: Chris Lu <chris.lu@gmail.com>

5c1de633cb
mount: improve read throughput with parallel chunk fetching (#7627)
* filer: remove lock contention during chunk download
This addresses issue #7504 where a single weed mount FUSE instance does not fully utilize node network bandwidth when reading large files. The SingleChunkCacher was holding a mutex during the entire HTTP download, causing readers to block until the download completed. This serialized chunk reads even when multiple goroutines were downloading in parallel.
Changes:
- Add sync.Cond to SingleChunkCacher for efficient waiting
- Move HTTP download outside the critical section in startCaching()
- Use condition variable in readChunkAt() to wait for download completion
- Add isComplete flag to track download state
Now multiple chunk downloads can proceed truly in parallel, and readers wait efficiently using the condition variable instead of blocking on a mutex held during I/O operations.
Ref: #7504
* filer: parallel chunk fetching within doReadAt
This addresses issue #7504 by enabling parallel chunk downloads within a single read operation. Previously, doReadAt() processed chunks sequentially in a loop, meaning each chunk had to be fully downloaded before the next one started. This left significant network bandwidth unused when chunks resided on different volume servers.
Changes:
- Collect all chunk read tasks upfront
- Use errgroup to fetch multiple chunks in parallel
- Each chunk reads directly into its correct buffer position
- Limit concurrency to prefetchCount (min 4) to avoid overwhelming the system
- Handle gaps and zero-filling before parallel fetch
- Trigger prefetch after parallel reads complete
For a read spanning N chunks on different volume servers, this can now utilize up to N times the bandwidth of a single connection.
Ref: #7504
* http: direct buffer read to reduce memory copies
This addresses issue #7504 by reducing memory copy overhead during chunk downloads. Previously, RetriedFetchChunkData used ReadUrlAsStream which:
1. Allocated a 64KB intermediate buffer
2. Read data in 64KB chunks
3. Called a callback to copy each chunk to the destination
For a 16MB chunk, this meant 256 copy operations plus the callback overhead. Profiling showed significant time spent in memmove.
Changes:
- Add readUrlDirectToBuffer() that reads directly into the destination
- Add retriedFetchChunkDataDirect() for unencrypted, non-gzipped chunks
- Automatically use direct read path when possible (cipher=nil, gzip=false)
- Use http.NewRequestWithContext for proper cancellation
For unencrypted chunks (the common case), this eliminates the intermediate buffer entirely, reading HTTP response bytes directly into the final destination buffer.
Ref: #7504
* address review comments
- Use channel (done) instead of sync.Cond for download completion signaling. This integrates better with context cancellation patterns
- Remove redundant groupErr check in reader_at.go (errors are already captured in task.err)
- Remove buggy URL encoding logic from retriedFetchChunkDataDirect (The existing url.PathEscape on full URL is a pre-existing bug that should be fixed separately)
* address review comments (round 2)
- Return io.ErrUnexpectedEOF when HTTP response is truncated. This prevents silent data corruption from incomplete reads
- Simplify errgroup error handling by using g.Wait() error directly. Remove redundant task.err field and manual error aggregation loop
- Define minReadConcurrency constant instead of magic number 4. Improves code readability and maintainability
Note: Context propagation to startCaching() is intentionally NOT changed. The downloaded chunk is a shared resource that may be used by multiple readers. Using context.Background() ensures the download completes even if one reader cancels, preventing data loss for other waiting readers.
* http: inject request ID for observability in direct read path
Add request_id.InjectToRequest() call to readUrlDirectToBuffer() for consistency with ReadUrlAsStream path. This ensures full-chunk reads carry the same tracing/correlation headers for server logs and metrics.
* filer: consistent timestamp handling in sequential read path
Use max(ts, task.chunk.ModifiedTsNs) in sequential path to match parallel path behavior. Also update ts before error check so that on failure, the returned timestamp reflects the max of all chunks processed so far.
* filer: document why context.Background() is used in startCaching
Add comment explaining the intentional design decision: the downloaded chunk is a shared resource that may be used by multiple concurrent readers. Using context.Background() ensures the download completes even if one reader cancels, preventing errors for other waiting readers.
* filer: propagate context for reader cancellation
Address review comment: pass context through ReadChunkAt call chain so that a reader can cancel its wait for a download. The key distinction is:
- Download uses context.Background() - shared resource, always completes
- Reader wait uses request context - can be cancelled individually
If a reader cancels, it stops waiting and returns ctx.Err(), but the download continues to completion for other readers waiting on the same chunk. This properly handles the shared resource semantics while still allowing individual reader cancellation.
* filer: use defer for close(done) to guarantee signal on panic
Move close(s.done) to a defer statement at the start of startCaching() to ensure the completion signal is always sent, even if an unexpected panic occurs. This prevents readers from blocking indefinitely.
* filer: remove unnecessary code
- Remove close(s.cacheStartedCh) in destroy() - the channel is only used for one-time synchronization, closing it provides no benefit
- Remove task := task loop variable capture - Go 1.22+ fixed loop variable semantics, this capture is no longer necessary (go.mod specifies Go 1.24.0)
* filer: restore fallback to chunkCache when cacher returns no data
Fix critical issue where ReadChunkAt would return 0,nil immediately if SingleChunkCacher couldn't provide data for the requested offset, without trying the chunkCache fallback. Now if cacher.readChunkAt returns n=0 and err=nil, we fall through to try chunkCache.
* filer: add comprehensive tests for ReaderCache
Tests cover:
- Context cancellation while waiting for download
- Fallback to chunkCache when cacher returns n=0, err=nil
- Multiple concurrent readers waiting for same chunk
- Partial reads at different offsets
- Downloader cleanup when exceeding cache limit
- Done channel signaling (no hangs on completion)
* filer: prioritize done channel over context cancellation
If data is already available (done channel closed), return it even if the reader's context is also cancelled. This avoids unnecessary errors when the download has already completed.
* filer: add lookup error test and document test limitations
Add TestSingleChunkCacherLookupError to test error handling when lookup fails. Document that full HTTP integration tests for SingleChunkCacher require global HTTP client initialization which is complex in unit tests. The download path is tested via FUSE integration tests.
* filer: add tests that exercise SingleChunkCacher concurrency logic
Add tests that use blocking lookupFileIdFn to exercise the actual SingleChunkCacher wait/cancellation logic:
- TestSingleChunkCacherContextCancellationDuringLookup: tests reader cancellation while lookup is blocked
- TestSingleChunkCacherMultipleReadersWaitForDownload: tests multiple readers waiting on the same download
- TestSingleChunkCacherOneReaderCancelsOthersContinue: tests that when one reader cancels, other readers continue waiting
These tests properly exercise the done channel wait/cancel logic without requiring HTTP calls - the blocking lookup simulates a slow download.
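
A compact sketch of the waiting contract described above: the download always completes under context.Background(), each reader waits with its own request context, and already-available data is preferred even if that context is cancelled. Type and field names are illustrative, not the real SingleChunkCacher.

```go
package readercacheexample

import "context"

// chunkDownload represents one in-flight download shared by several readers.
type chunkDownload struct {
	done chan struct{} // closed (via defer) when the download finishes
	data []byte
	err  error
}

// wait blocks until the download finishes or the reader's own context is
// cancelled. A cancelled reader stops waiting; the download itself continues
// for other readers.
func (d *chunkDownload) wait(ctx context.Context) ([]byte, error) {
	// Prefer already-available data even if the reader's context is cancelled.
	select {
	case <-d.done:
		return d.data, d.err
	default:
	}
	select {
	case <-d.done:
		return d.data, d.err
	case <-ctx.Done():
		return nil, ctx.Err() // this reader gives up; others keep waiting
	}
}
```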

cd2fac4551
S3: pass HTTP 429 from volume servers to S3 clients (#7556)
With the recent changes (commit

263e891da0
Clients to volume server require JWT tokens for all read operations (#7376)
* [Admin UI] Login not possible due to securecookie error
* avoid 404 favicon
* Update weed/admin/dash/auth_middleware.go Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>
* address comments
* avoid variable over shadowing
* log session save error
* When jwt.signing.read.key is enabled in security.toml, the volume server requires JWT tokens for all read operations.
* reuse fileId
* refactor
---------
Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>

b7b73016dd
S3 API: Add SSE-KMS (#7144)
* implement sse-c
* fix Content-Range
* adding tests
* Update s3_sse_c_test.go
* copy sse-c objects
* adding tests
* refactor
* multi reader
* remove extra write header call
* refactor
* SSE-C encrypted objects do not support HTTP Range requests
* robust
* fix server starts
* Update Makefile
* Update Makefile
* ci: remove SSE-C integration tests and workflows; delete test/s3/encryption/
* s3: SSE-C MD5 must be base64 (case-sensitive); fix validation, comparisons, metadata storage; update tests
* minor
* base64
* Update SSE-C_IMPLEMENTATION.md Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>
* Update weed/s3api/s3api_object_handlers.go Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>
* Update SSE-C_IMPLEMENTATION.md Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>
* address comments
* fix test
* fix compilation
* Bucket Default Encryption
To complete the SSE-KMS implementation for production use:
Add AWS KMS Provider - Implement weed/kms/aws/aws_kms.go using AWS SDK
Integrate with S3 Handlers - Update PUT/GET object handlers to use SSE-KMS
Add Multipart Upload Support - Extend SSE-KMS to multipart uploads
Configuration Integration - Add KMS configuration to filer.toml
Documentation - Update SeaweedFS wiki with SSE-KMS usage examples
* store bucket sse config in proto
* add more tests
* Update SSE-C_IMPLEMENTATION.md Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>
* Fix rebase errors and restore structured BucketMetadata API
Merge Conflict Fixes:
- Fixed merge conflicts in header.go (SSE-C and SSE-KMS headers)
- Fixed merge conflicts in s3api_errors.go (SSE-C and SSE-KMS error codes)
- Fixed merge conflicts in s3_sse_c.go (copy strategy constants)
- Fixed merge conflicts in s3api_object_handlers_copy.go (copy strategy usage)
API Restoration:
- Restored BucketMetadata struct with Tags, CORS, and Encryption fields
- Restored structured API functions: GetBucketMetadata, SetBucketMetadata, UpdateBucketMetadata
- Restored helper functions: UpdateBucketTags, UpdateBucketCORS, UpdateBucketEncryption
- Restored clear functions: ClearBucketTags, ClearBucketCORS, ClearBucketEncryption
Handler Updates:
- Updated GetBucketTaggingHandler to use GetBucketMetadata() directly
- Updated PutBucketTaggingHandler to use UpdateBucketTags()
- Updated DeleteBucketTaggingHandler to use ClearBucketTags()
- Updated CORS handlers to use UpdateBucketCORS() and ClearBucketCORS()
- Updated loadCORSFromBucketContent to use GetBucketMetadata()
Internal Function Updates:
- Updated getBucketMetadata() to return *BucketMetadata struct
- Updated setBucketMetadata() to accept *BucketMetadata struct
- Updated getBucketEncryptionMetadata() to use GetBucketMetadata()
- Updated setBucketEncryptionMetadata() to use SetBucketMetadata()
Benefits:
- Resolved all rebase conflicts while preserving both SSE-C and SSE-KMS functionality
- Maintained consistent structured API throughout the codebase
- Eliminated intermediate wrapper functions for cleaner code
- Proper error handling with better granularity
- All tests passing and build successful
The bucket metadata system now uses a unified, type-safe, structured API that supports tags, CORS, and encryption configuration consistently.
* Fix updateEncryptionConfiguration for first-time bucket encryption setup
- Change getBucketEncryptionMetadata to getBucketMetadata to avoid failures when no encryption config exists
- Change setBucketEncryptionMetadata to setBucketMetadataWithEncryption for consistency
- This fixes the critical issue where bucket encryption configuration failed for buckets without existing encryption
Fixes: https://github.com/seaweedfs/seaweedfs/pull/7144#discussion_r2285669572
* Fix rebase conflicts and maintain structured BucketMetadata API
Resolved Conflicts:
- Fixed merge conflicts in s3api_bucket_config.go between structured API (HEAD) and old intermediate functions
- Kept modern structured API approach: UpdateBucketCORS, ClearBucketCORS, UpdateBucketEncryption
- Removed old intermediate functions: setBucketTags, deleteBucketTags, setBucketMetadataWithEncryption
API Consistency Maintained:
- updateCORSConfiguration: Uses UpdateBucketCORS() directly
- removeCORSConfiguration: Uses ClearBucketCORS() directly
- updateEncryptionConfiguration: Uses UpdateBucketEncryption() directly
- All structured API functions preserved: GetBucketMetadata, SetBucketMetadata, UpdateBucketMetadata
Benefits:
- Maintains clean separation between API layers
- Preserves atomic metadata updates with proper error handling
- Eliminates function indirection for better performance
- Consistent API usage pattern throughout codebase
- All tests passing and build successful
The bucket metadata system continues to use the unified, type-safe, structured API that properly handles tags, CORS, and encryption configuration without any intermediate wrapper functions.
* Fix complex rebase conflicts and maintain clean structured BucketMetadata API
Resolved Complex Conflicts:
- Fixed merge conflicts between modern structured API (HEAD) and mixed approach
- Removed duplicate function declarations that caused compilation errors
- Consistently chose structured API approach over intermediate functions
Fixed Functions:
- BucketMetadata struct: Maintained clean field alignment
- loadCORSFromBucketContent: Uses GetBucketMetadata() directly
- updateCORSConfiguration: Uses UpdateBucketCORS() directly
- removeCORSConfiguration: Uses ClearBucketCORS() directly
- getBucketMetadata: Returns *BucketMetadata struct consistently
- setBucketMetadata: Accepts *BucketMetadata struct consistently
Removed Duplicates:
- Eliminated duplicate GetBucketMetadata implementations
- Eliminated duplicate SetBucketMetadata implementations
- Eliminated duplicate UpdateBucketMetadata implementations
- Eliminated duplicate helper functions (UpdateBucketTags, etc.)
API Consistency Achieved:
- Single, unified BucketMetadata struct for all operations
- Atomic updates through UpdateBucketMetadata with function callbacks
- Type-safe operations with proper error handling
- No intermediate wrapper functions cluttering the API
Benefits:
- Clean, maintainable codebase with no function duplication
- Consistent structured API usage throughout all bucket operations
- Proper error handling and type safety
- Build successful and all tests passing
The bucket metadata system now has a completely clean, structured API without any conflicts, duplicates, or inconsistencies.
* Update remaining functions to use new structured BucketMetadata APIs directly
Updated functions to follow the pattern established in bucket config:
- getEncryptionConfiguration() -> Uses GetBucketMetadata() directly
- removeEncryptionConfiguration() -> Uses ClearBucketEncryption() directly
Benefits:
- Consistent API usage pattern across all bucket metadata operations
- Simpler, more readable code that leverages the structured API
- Eliminates calls to intermediate legacy functions
- Better error handling and logging consistency
- All tests pass with improved functionality
This completes the transition to using the new structured BucketMetadata API throughout the entire bucket configuration and encryption subsystem.
* Fix GitHub PR #7144 code review comments
Address all code review comments from Gemini Code Assist bot:
1. **High Priority - SSE-KMS Key Validation**: Fixed ValidateSSEKMSKey to allow empty KMS key ID
- Empty key ID now indicates use of default KMS key (consistent with AWS behavior)
- Updated ParseSSEKMSHeaders to call validation after parsing
- Enhanced isValidKMSKeyID to reject keys with spaces and invalid characters
2. **Medium Priority - KMS Registry Error Handling**: Improved error collection in CloseAll
- Now collects all provider close errors instead of only returning the last one
- Uses proper error formatting with %w verb for error wrapping
- Returns single error for one failure, combined message for multiple failures
3. **Medium Priority - Local KMS Aliases Consistency**: Fixed alias handling in CreateKey
- Now updates the aliases slice in-place to maintain consistency
- Ensures both p.keys map and key.Aliases slice use the same prefixed format
All changes maintain backward compatibility and improve error handling robustness. Tests updated and passing for all scenarios including edge cases.
* Use errors.Join for KMS registry error handling
Replace manual string building with the more idiomatic errors.Join function:
- Removed manual error message concatenation with strings.Builder
- Simplified error handling logic by using errors.Join(allErrors...)
- Removed unnecessary string import
- Added errors import for errors.Join
This approach is cleaner, more idiomatic, and automatically handles:
- Returning nil for empty error slice
- Returning single error for one-element slice
- Properly formatting multiple errors with newlines
The errors.Join function was introduced in Go 1.20 and is the recommended way to combine multiple errors.
* Update registry.go
* Fix GitHub PR #7144 latest review comments
Address all new code review comments from Gemini Code Assist bot:
1. **High Priority - SSE-KMS Detection Logic**: Tightened IsSSEKMSEncrypted function
- Now relies only on the canonical x-amz-server-side-encryption header
- Removed redundant check for x-amz-encrypted-data-key metadata
- Prevents misinterpretation of objects with inconsistent metadata state
- Updated test case to reflect correct behavior (encrypted data key only = false)
2. **Medium Priority - UUID Validation**: Enhanced KMS key ID validation
- Replaced simplistic length/hyphen count check with proper regex validation
- Added regexp import for robust UUID format checking
- Regex pattern: ^[a-fA-F0-9]{8}-[a-fA-F0-9]{4}-[a-fA-F0-9]{4}-[a-fA-F0-9]{4}-[a-fA-F0-9]{12}$
- Prevents invalid formats like '------------------------------------' from passing
3. **Medium Priority - Alias Mutation Fix**: Avoided input slice modification
- Changed CreateKey to not mutate the input aliases slice in-place
- Uses local variable for modified alias to prevent side effects
- Maintains backward compatibility while being safer for callers
All changes improve code robustness and follow AWS S3 standards more closely. Tests updated and passing for all scenarios including edge cases.
* Fix failing SSE tests
Address two failing test cases:
1. **TestSSEHeaderConflicts**: Fixed SSE-C and SSE-KMS mutual exclusion
- Modified IsSSECRequest to return false if SSE-KMS headers are present
- Modified IsSSEKMSRequest to return false if SSE-C headers are present
- This prevents both detection functions from returning true simultaneously
- Aligns with AWS S3 behavior where SSE-C and SSE-KMS are mutually exclusive
2. **TestBucketEncryptionEdgeCases**: Fixed XML namespace validation
- Added namespace validation in encryptionConfigFromXMLBytes function
- Now rejects XML with invalid namespaces (only allows empty or AWS standard namespace)
- Validates XMLName.Space to ensure proper XML structure
- Prevents acceptance of malformed XML with incorrect namespaces
Both fixes improve compliance with AWS S3 standards and prevent invalid configurations from being accepted. All SSE and bucket encryption tests now pass successfully.
* Fix GitHub PR #7144 latest review comments
Address two new code review comments from Gemini Code Assist bot:
1. **High Priority - Race Condition in UpdateBucketMetadata**: Fixed thread safety issue
- Added per-bucket locking mechanism to prevent race conditions
- Introduced bucketMetadataLocks map with RWMutex for each bucket
- Added getBucketMetadataLock helper with double-checked locking pattern
- UpdateBucketMetadata now uses bucket-specific locks to serialize metadata updates
- Prevents last-writer-wins scenarios when concurrent requests update different metadata parts
2. **Medium Priority - KMS Key ARN Validation**: Improved robustness of ARN validation
- Enhanced isValidKMSKeyID function to strictly validate ARN structure
- Changed from 'len(parts) >= 6' to 'len(parts) != 6' for exact part count
- Added proper resource validation for key/ and alias/ prefixes
- Prevents malformed ARNs with incorrect structure from being accepted
- Now validates: arn:aws:kms:region:account:key/keyid or arn:aws:kms:region:account:alias/aliasname
Both fixes improve system reliability and prevent edge cases that could cause data corruption or security issues. All existing tests continue to pass.
* format
* address comments
* Configuration Adapter
* Regex Optimization
* Caching Integration
* add negative cache for non-existent buckets
* remove bucketMetadataLocks
* address comments
* address comments
* copying objects with sse-kms
* copying strategy
* store IV in entry metadata
* implement compression reader
* extract json map as sse kms context
* bucket key
* comments
* rotate sse chunks
* KMS Data Keys use AES-GCM + nonce
* add comments
* Update weed/s3api/s3_sse_kms.go Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>
* Update s3api_object_handlers_put.go
* get IV from response header
* set sse headers
* Update s3api_object_handlers.go
* deterministic JSON marshaling
* store iv in entry metadata
* address comments
* not used
* store iv in destination metadata
ensures that SSE-C copy operations with re-encryption (decrypt/re-encrypt scenario) now properly store the destination encryption metadata
* add todo
* address comments
* SSE-S3 Deserialization
* add BucketKMSCache to BucketConfig
* fix test compilation
* already not empty
* use constants
* fix: critical metadata (encrypted data keys, encryption context, etc.) was never stored during PUT/copy operations
* address comments
* fix tests
* Fix SSE-KMS Copy Re-encryption
* Cache now persists across requests
* fix test
* iv in metadata only
* SSE-KMS copy operations should follow the same pattern as SSE-C
* fix size overhead calculation
* Filer-Side SSE Metadata Processing
* SSE Integration Tests
* fix tests
* clean up
* Update s3_sse_multipart_test.go
* add s3 sse tests
* unused
* add logs
* Update Makefile
* Update Makefile
* s3 health check
* The tests were failing because they tried to run both SSE-C and SSE-KMS tests
* Update weed/s3api/s3_sse_c.go Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>
* Update Makefile
* add back
* Update Makefile
* address comments
* fix tests
* Update s3-sse-tests.yml
* Update s3-sse-tests.yml
* fix sse-kms for PUT operation
* IV
* Update auth_credentials.go
* fix multipart with kms
* constants
* multipart sse kms
Modified handleSSEKMSResponse to detect multipart SSE-KMS objects
Added createMultipartSSEKMSDecryptedReader to handle each chunk independently
Each chunk now gets its own decrypted reader before combining into the final stream
* validate key id
* add SSEType
* permissive kms key format
* Update s3_sse_kms_test.go
* format
* assert equal
* uploading SSE-KMS metadata per chunk
* persist sse type and metadata
* avoid re-chunk multipart uploads
* decryption process to use stored PartOffset values
* constants
* sse-c multipart upload
* Unified Multipart SSE Copy
* purge
* fix fatalf
* avoid io.MultiReader which does not close underlying readers
* unified cross-encryption
* fix Single-object SSE-C
* adjust constants
* range read sse files
* remove debug logs
---------
Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>
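
A sketch of the key-ID validation rules quoted in the review fixes above: the UUID regex is the one given verbatim, and the ARN rules (exactly six colon-separated parts, resource prefixed with key/ or alias/) follow the same description. The function and its behaviour are illustrative; the PR later notes a more permissive key format, so treat this as the shape of the check, not the final code.

```go
package ssekmsexample

import (
	"regexp"
	"strings"
)

// uuidRe is the UUID pattern quoted in the review comments above.
var uuidRe = regexp.MustCompile(`^[a-fA-F0-9]{8}-[a-fA-F0-9]{4}-[a-fA-F0-9]{4}-[a-fA-F0-9]{4}-[a-fA-F0-9]{12}$`)

// isValidKMSKeyID accepts an empty id (use the default KMS key), a bare UUID,
// an alias/<name>, or a six-part KMS ARN whose resource is key/ or alias/.
func isValidKMSKeyID(id string) bool {
	if id == "" {
		return true // empty means "use the default KMS key", matching AWS behaviour
	}
	if strings.ContainsAny(id, " \t") {
		return false // reject keys with embedded whitespace
	}
	if strings.HasPrefix(id, "arn:") {
		parts := strings.Split(id, ":")
		if len(parts) != 6 {
			return false // exact part count, not ">= 6"
		}
		resource := parts[5]
		return strings.HasPrefix(resource, "key/") || strings.HasPrefix(resource, "alias/")
	}
	if strings.HasPrefix(id, "alias/") {
		return len(id) > len("alias/")
	}
	return uuidRe.MatchString(id)
}
```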

a834327755
context cancellation during reading range reading large files (#7092)
* context cancellation during reading range reading large files
* address comments

9fadd9def8
Fixed weed mount reads with jwt.signing.read.key (#7061)

4511c2cc1f
Changes logging function (#6919)
* updated logging methods for stores
* updated logging methods for stores
* updated logging methods for filer
* updated logging methods for uploader and http_util
* updated logging methods for weed server
---------
Co-authored-by: akosov <a.kosov@kryptonite.ru>

90c128e7a6
Context-based logging with request ID (#6899)

283d9e0079
Add context with request (#6824)

a5fe6e21bc
feat(filer.backup): add ignore errors option (#6235)
* feat(filer.backup): add ignore errors option
* feat(filer.backup): fix 404 error wrap
* feat(filer.backup): fix wrapping function
* feat(filer.backup): fix wrapping errors in genProcessFunction
* Update weed/command/filer_backup.go
* Update weed/command/filer_backup.go
* Update weed/command/filer_backup.go
---------
Co-authored-by: Max Denushev <denushev@tochka.com>
Co-authored-by: Chris Lu <chrislusf@users.noreply.github.com>

86d92a42b4
Added tls for http clients (#5766)
* Added global http client
* Added Do func for global http client
* Changed the code to use the global http client
* Fix http client in volume uploader
* Fixed pkg name
* Fixed http util funcs
* Fixed http client for bench_filer_upload
* Fixed http client for stress_filer_upload
* Fixed http client for filer_server_handlers_proxy
* Fixed http client for command_fs_merge_volumes
* Fixed http client for command_fs_merge_volumes and command_volume_fsck
* Fixed http client for s3api_server
* Added init global client for main funcs
* Rename global_client to client
* Changed:
- fixed NewHttpClient;
- added CheckIsHttpsClientEnabled func
- updated security.toml in scaffold
* Reduce the visibility of some functions in the util/http/client pkg
* Added the loadSecurityConfig function
* Use util.LoadSecurityConfiguration() in NewHttpClient func
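
A minimal sketch of the "global http client" pattern this commit introduces: one process-wide client initialized from security configuration so every HTTPS call shares the same TLS settings. The names below are assumptions; the real code exposes NewHttpClient, Do, CheckIsHttpsClientEnabled, and util.LoadSecurityConfiguration().

```go
package httpclientexample

import (
	"fmt"
	"net/http"
	"sync"
)

var (
	initOnce     sync.Once
	globalClient *http.Client
)

// InitGlobalClient is called once from a main func; later calls are no-ops.
// The build callback would load the security config and construct a client
// with the appropriate TLS settings.
func InitGlobalClient(build func() (*http.Client, error)) error {
	var err error
	initOnce.Do(func() {
		globalClient, err = build()
	})
	return err
}

// Do proxies requests through the shared client, failing loudly if the
// process forgot to initialize it.
func Do(req *http.Request) (*http.Response, error) {
	if globalClient == nil {
		return nil, fmt.Errorf("global http client not initialized")
	}
	return globalClient.Do(req)
}
```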