When a client provides an invalid or incomplete body for a single-part
upload, the handler returns before the link stage. Factor tempfile
removal into cleanup() to catch all cases.
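A minimal sketch of the pattern, assuming names like cleanup() and the tempfile/link flow described above; writeBody and linkObject are illustrative placeholders, not the actual handler code:

```go
import "os"

// putObject creates the temp file up front; cleanup() removes it unless the
// link stage succeeded, so early returns on invalid or incomplete bodies no
// longer leak temp files.
func putObject(dir string) error {
	tmp, err := os.CreateTemp(dir, ".tmp-*")
	if err != nil {
		return err
	}
	linked := false
	cleanup := func() {
		tmp.Close()
		if !linked {
			os.Remove(tmp.Name())
		}
	}
	defer cleanup()

	if err := writeBody(tmp); err != nil { // invalid/incomplete body returns here
		return err
	}
	if err := linkObject(tmp.Name()); err != nil { // the "link stage"
		return err
	}
	linked = true
	return nil
}
```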
The standard io.Copy() will attempt to use copy_file_range when available.
However, this can cause problems with certain filesystems. Add an option
that prevents io.Copy() from using copy_file_range, forcing a standard
data copy between file descriptors when needed for completing multipart
uploads.
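A minimal sketch of the mechanism, assuming an option flag like the one described above; names are illustrative:

```go
import (
	"io"
	"os"
)

// copyData hides the *os.File fast paths (ReaderFrom/WriterTo) when the
// option is set, so io.Copy falls back to a plain userspace buffer copy
// instead of copy_file_range.
func copyData(dst, src *os.File, disableCopyFileRange bool) (int64, error) {
	var r io.Reader = src
	if disableCopyFileRange {
		// Wrapping src in an anonymous struct strips the optional interfaces
		// io.Copy and os.File check for when choosing the kernel fast path.
		r = struct{ io.Reader }{src}
	}
	return io.Copy(dst, r)
}
```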
Rewrite the posix Walk implementation to avoid the extra ReadDir per
directory that was noted as a TODO in the old code. The new algorithm
holds all traversal state in a walkState struct and uses processDir to
interleave sibling entries in correct S3 lexicographic order without a
second syscall.
Key changes:
* prefix optimisation: jump directly into the deepest matching directory rather than scanning from the root on every call
* marker short-circuit: skip entire subtrees that are lexically before the marker, making paginated listing faster
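A rough, illustrative sketch of the ordering and marker checks; these helpers are not the actual walkState/processDir code:

```go
import (
	"io/fs"
	"path"
	"strings"
)

// s3Key returns the key used for S3-lexicographic comparison: directory
// entries compare with a trailing "/" so their descendants interleave
// correctly with sibling files without a second ReadDir.
func s3Key(parent string, e fs.DirEntry) string {
	k := path.Join(parent, e.Name())
	if e.IsDir() {
		k += "/"
	}
	return k
}

// skipSubtree reports whether the whole subtree under dirKey can be skipped:
// if the marker is lexically past dirKey and does not descend into it, every
// key below dirKey sorts before the marker.
func skipSubtree(dirKey, marker string) bool {
	return marker >= dirKey && !strings.HasPrefix(marker, dirKey)
}
```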
Co-authored-by: Ben McClelland <ben.mcclelland@versity.com>
This is a fixup of the codebase using:
go run golang.org/x/tools/go/analysis/passes/modernize/cmd/modernize@latest -fix ./...
This has no behavior changes, and only applies safe updates for
modern Go features.
When access/secret are not provided, let AWS SDK v2 resolve credentials
from the default provider chain (env vars, IRSA, ECS/EC2 roles, etc.)
instead of forcing anonymous credentials.
Add an explicit anonymous credentials option for s3 proxy to force
backend anonymous access.
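A minimal sketch with AWS SDK for Go v2; the anonymous-option wiring here is illustrative, not the gateway's actual flags:

```go
import (
	"context"

	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/config"
	"github.com/aws/aws-sdk-go-v2/credentials"
)

func loadBackendConfig(ctx context.Context, access, secret string, anonymous bool) (aws.Config, error) {
	var opts []func(*config.LoadOptions) error
	switch {
	case anonymous:
		// Explicit anonymous access: requests are sent unsigned.
		opts = append(opts, config.WithCredentialsProvider(aws.AnonymousCredentials{}))
	case access != "" && secret != "":
		opts = append(opts, config.WithCredentialsProvider(
			credentials.NewStaticCredentialsProvider(access, secret, "")))
	}
	// With neither set, LoadDefaultConfig resolves credentials from the
	// default provider chain (env vars, shared config, IRSA, ECS/EC2 roles).
	return config.LoadDefaultConfig(ctx, opts...)
}
```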
Fixes #1955
Fixes #1909
Previously, the mapping between object metadata and posix object was as follows: for each metadata key, we stored a separate xattr with the `user.X-Amz-Meta.<key>` prefix. This resulted in syscall overhead when storing and deleting large numbers of metadata keys.
In addition, very long metadata keys caused failures because most posix filesystems limit xattr key lengths to 127–255 bytes, while S3 does not enforce such a per-key limit.
The logic has now been changed so that all object metadata is stored in a single xattr, `user.metadata`, as a JSON key/value object. For backward compatibility, metadata GET operations still fall back to the old mechanism (`metadata key -> xattr key`) when `user.metadata` is not present.
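A minimal sketch of the new layout, assuming golang.org/x/sys/unix for the xattr call; the helper name is illustrative:

```go
import (
	"encoding/json"

	"golang.org/x/sys/unix"
)

const metadataXattr = "user.metadata"

// storeObjectMetadata writes all user metadata as one JSON document in a
// single xattr, replacing the per-key user.X-Amz-Meta.<key> scheme with its
// per-key syscall overhead and key-length limits.
func storeObjectMetadata(objPath string, meta map[string]string) error {
	data, err := json.Marshal(meta)
	if err != nil {
		return err
	}
	return unix.Setxattr(objPath, metadataXattr, data, 0)
}
```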
A new CLI utility has been added to convert all legacy object metadata to the new metadata format within the provided directory.
**Example usage:**
```
versitygw utils convert-xattr-metadata path/to/bucket
```
or
```
versitygw utils cxm path/to/bucket
```
It is recommended to run this command on bucket directories to convert all legacy metadata for every object in the bucket.
When copying between two different Azure blobs, the source download
stream body was only consumed by PutObject but never explicitly closed.
If PutObject or any subsequent step returned an error, the underlying
HTTP connection held by the Azure SDK was never released, leaking both
the connection and any internal SDK retry goroutines attached to it.
Added a deferred close on downloadResp.Body immediately after the
successful DownloadStream call to ensure the body is always drained and
released regardless of the outcome.
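A minimal sketch of the corrected flow, assuming the azblob SDK's DownloadStream/UploadStream calls; the blob-to-blob copy helper is illustrative:

```go
import (
	"context"

	"github.com/Azure/azure-sdk-for-go/sdk/storage/azblob/blockblob"
)

func copyBlob(ctx context.Context, src, dst *blockblob.Client) error {
	downloadResp, err := src.DownloadStream(ctx, nil)
	if err != nil {
		return err
	}
	// Close the body no matter what happens below, so the SDK's HTTP
	// connection and any retry goroutines attached to it are released.
	defer downloadResp.Body.Close()

	_, err = dst.UploadStream(ctx, downloadResp.Body, nil)
	return err
}
```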
Azure's ListContainers Marker parameter requires an opaque internal token
(e.g. /accountname/containername) rather than a plain container name, so
passing MaxResults and our ContinuationToken directly to the Azure API
caused 400 OutOfRangeInput errors. Rework ListBuckets to iterate all Azure
pages client-side, skip entries at or before the ContinuationToken (matching
the posix backend's "start after" semantics), and stop once MaxBuckets items
have been collected, setting ContinuationToken to the last returned bucket
name. This avoids using Azure's NextMarker entirely and correctly handles
both unpaginated and paginated requests.
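A rough sketch of the client-side paging, assuming the azblob service client's list-containers pager; names and the token handling are illustrative:

```go
import (
	"context"

	"github.com/Azure/azure-sdk-for-go/sdk/storage/azblob"
)

func listBuckets(ctx context.Context, client *azblob.Client, token string, maxBuckets int) ([]string, string, error) {
	var names []string
	pager := client.NewListContainersPager(nil)
	for pager.More() && len(names) < maxBuckets {
		page, err := pager.NextPage(ctx)
		if err != nil {
			return nil, "", err
		}
		for _, c := range page.ContainerItems {
			name := *c.Name
			// "start after" semantics, matching the posix backend: skip
			// everything at or before the continuation token.
			if token != "" && name <= token {
				continue
			}
			names = append(names, name)
			if len(names) == maxBuckets {
				break
			}
		}
	}
	var next string
	if len(names) == maxBuckets {
		next = names[len(names)-1]
	}
	return names, next, nil
}
```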
Azure Storage's StageBlock REST API rejects Content-Length: 0
with InvalidHeaderValue. The tests (PresignedAuth_UploadPart,
UploadPart_success) upload a nil/empty body, which causes the
Azure SDK to send Content-Length: 0. Azurite is lenient and
accepts it; real Azure Storage does not.
Use a new metadata key ("Zerobytesparts") set on the
.sgwtmp/multipart/<uploadId>/<object-hash> blob to track
0-length parts.
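Purely illustrative sketch of the tracking idea; the encoding of the "Zerobytesparts" value shown here is an assumption, not necessarily the gateway's actual format:

```go
import "strconv"

// recordZeroBytePart notes a zero-length part in the multipart temp blob's
// metadata instead of staging an empty block, which real Azure Storage
// rejects with InvalidHeaderValue.
func recordZeroBytePart(meta map[string]string, partNumber int) {
	const key = "Zerobytesparts"
	v := strconv.Itoa(partNumber)
	if existing := meta[key]; existing != "" {
		v = existing + "," + v
	}
	meta[key] = v
}
```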
Closes #1857
Adds object tagging support for directory objects in the posix `PutObject`. Updates the integration tests to cover object metadata and tagging for both file and directory objects.
In sidecar mode, the three StoreAttribute/storeChecksums calls after the
copy loop were using objPath (the source object path) instead of the
destination part's bucket and partPath. This caused checksums and the
internal part-crc64nvme to be written under the source object's sidecar
directory, making them unresolvable when CompleteMultipartUploadWithCopy
tried to retrieve them. All three stores now use *upi.Bucket and partPath,
consistent with the etag store directly below them.
Replace the length-comparison condition with an explicit predicate that
is both more readable and correctly scoped: skip only when the visited
directory is a strict ancestor of the specified prefix (not a descendant
and not when prefix is empty).
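A minimal sketch of such a predicate; the name is illustrative:

```go
import "strings"

// isStrictAncestorOfPrefix reports whether dir is a strict ancestor of the
// listing prefix: the prefix descends through dir, dir is not the prefix
// itself, and an empty prefix never matches.
func isStrictAncestorOfPrefix(dir, prefix string) bool {
	if prefix == "" || dir == "" {
		return false
	}
	if !strings.HasSuffix(dir, "/") {
		dir += "/"
	}
	return strings.HasPrefix(prefix, dir) && prefix != dir
}
```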
Adds tests from the original bug report (#1864) to verify the fix and
guard against future regressions.
This PR optimizes multipart upload checksum handling. When a checksum algorithm/type is specified at multipart-upload initiation, each `UploadPart` request computes, validates, and stores the corresponding part checksum. During `CompleteMultipartUpload`, the final checksum is derived either via composite checksum calculation or by composing the CRC-family checksums.
When **no** checksum algorithm is specified during multipart-upload initiation, each `UploadPart` may supply a different checksum algorithm for data-integrity verification. To support this scenario, a new mechanism has been implemented: for every `UploadPart`, a **crc64nvme** checksum is always computed.
* If the client uses crc64nvme for the part upload, a single hash reader is used.
* Otherwise, two hash readers are used—one for crc64nvme and one for the user-provided checksum.
The crc64nvme value is stored in part xattrs under `user.part-crc64nvme` and later used during `CompleteMultipartUpload` as a composable checksum source.
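A minimal sketch of the part-reader wiring, assuming the reflected CRC-64/NVME polynomial constant; names are illustrative:

```go
import (
	"hash"
	"hash/crc64"
	"io"
)

// Assumption: reflected form of the CRC-64/NVME polynomial.
const crc64NVMEPoly = 0x9A6C9329AC4BC9B5

// wrapPartReader always feeds the part body through a crc64nvme hasher; when
// the client supplied a different checksum algorithm, a second TeeReader
// feeds the user-provided hasher as well, so the data is still read once.
func wrapPartReader(body io.Reader, userHash hash.Hash) (io.Reader, hash.Hash64) {
	crcHash := crc64.New(crc64.MakeTable(crc64NVMEPoly))
	r := io.TeeReader(body, crcHash)
	if userHash != nil {
		r = io.TeeReader(r, userHash)
	}
	return r, crcHash
}
```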
In `CompleteMultipartUpload`, the hash reader is entirely removed; the gateway no longer re-reads part data to compute the final checksum. The logic now follows two distinct paths:
1. **Checksum algorithm/type specified at MP initiation**
* All required per-part checksums have already been stored.
* If the checksum type is `FULL_OBJECT`, the gateway uses the composable path.
* If the type is `COMPOSITE`, the gateway follows the checksum-combining path.
2. **No checksum algorithm specified at MP initiation**
* The gateway loads the stored per-part `crc64nvme` values and composes them to compute the final checksum.
The previous `composableCRC` check has been removed because all `FULL_OBJECT` algorithms are inherently composable (`crc32`, `crc32c`, `crc64nvme`). Validation now relies solely on `checksum.Type`.
Previously, if no object checksum type/algorithm was specified when initiating a multipart upload, the CompleteMultipartUpload request would compute the final object’s CRC64NVME checksum but not persist it. This logic has now been fixed, and in the scenario described above the checksum is stored on the final object. There should no longer be any case where a CompleteMultipartUpload request finishes without persisting the final object checksum.
This brings scoutfs in line with the posix concurrency limiter.
This fixes a hang with scoutfs caused by not correctly initializing
the concurrency in posix, which allowed a concurrency of 0.
This also adds a sane default for the posix concurrency when it is
not initialized.
Add data-integrity checksum support in `PutObject` in the POSIX backend for directory objects. Since the only way to upload a directory object is via `PutObject`, this logic validates and stores the checksum of the empty payload. Support for `GetObject` has also been added to retrieve and return directory-object checksums.
AWS introduced a relatively new option for data integrity checks
that not all non-AWS servers support yet. See this for more info:
https://docs.aws.amazon.com/AmazonS3/latest/userguide/checking-object-integrity.html
This change adds a new option, disable-data-integrity-check,
to disable the data integrity checks in the client SDK for
servers that may not yet support them. Use this only when the s3
service used by the proxy does not support the data integrity features.
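One possible way to wire such a flag with AWS SDK for Go v2, assuming the request/response checksum settings added in recent SDK releases; the exact option fields and wiring are assumptions:

```go
import (
	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/service/s3"
)

// disableIntegrityOptions relaxes the SDK's default integrity behavior so it
// only computes/validates checksums when an operation strictly requires them.
func disableIntegrityOptions() func(*s3.Options) {
	return func(o *s3.Options) {
		o.RequestChecksumCalculation = aws.RequestChecksumCalculationWhenRequired
		o.ResponseChecksumValidation = aws.ResponseChecksumValidationWhenRequired
	}
}
```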
Fixes #1867
Fixes #1849
If no `Content-Type` is provided during object upload, S3 defaults it to `application/octet-stream`. This behavior was missing in the gateway, causing backends to persist an empty `Content-Type`, which Fiber then overrides with its default `text/plain`. The behavior has now been corrected for the object upload operations: `PutObject`, `CreateMultipartUpload`, and `CopyObject`.
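A minimal sketch of the defaulting applied to those operations; the helper name is illustrative:

```go
func defaultContentType(ct string) string {
	if ct == "" {
		// S3's documented default when the client sends no Content-Type.
		return "application/octet-stream"
	}
	return ct
}
```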
Closes #1815
Implements posix actions concurrency limiter. Since posix actions perform filesystem-heavy syscalls, a semaphore-based limiter is introduced to cap the maximum number of concurrent posix actions. When the limit is reached, additional action calls block until a slot becomes available.
For internal posix calls, the `no_acquire_slot` context key is used to prevent acquiring the limiter multiple times within a single action (e.g., PutObject internally calling PutObjectLegalHold).
The posix concurrency limit can be configured via the gateway posix subcommand flag (--concurrency) or the environment variable `VGW_POSIX_CONCURRENCY`. The default value is `5000`.
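A minimal sketch of such a limiter, assuming a context key like `no_acquire_slot` for nested internal calls; names are illustrative, not the gateway's actual implementation:

```go
import "context"

type noAcquireSlotKey struct{}

type limiter struct{ slots chan struct{} }

func newLimiter(n int) *limiter {
	if n <= 0 {
		n = 5000 // default described above
	}
	return &limiter{slots: make(chan struct{}, n)}
}

// acquire blocks until a slot is free, unless the context marks a nested
// internal call (e.g. PutObject calling PutObjectLegalHold) that already
// holds a slot.
func (l *limiter) acquire(ctx context.Context) (release func(), err error) {
	if ctx.Value(noAcquireSlotKey{}) != nil {
		return func() {}, nil
	}
	select {
	case l.slots <- struct{}{}:
		return func() { <-l.slots }, nil
	case <-ctx.Done():
		return nil, ctx.Err()
	}
}
```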
* Update client.go to support anonymous S3 access
* Update s3.go to make access and secret parameters optional
* Update example.conf for clearer S3 access and secret usage
Fixes #1836
Fixes #1835
If-Match in DeleteObject is a precondition header that compares the client-provided ETag with the server-side ETag before deleting the object. Previously, the comparison failed when the client sent an unquoted ETag, because server ETags are stored with quotes. The implementation now trims quotes from both the input ETag and the server ETag before comparison to avoid mismatches. Both quoted and unquoted ETags are valid according to S3.
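A minimal sketch of the quote-insensitive comparison; the helper name is illustrative:

```go
import "strings"

// etagMatches compares client and server ETags after trimming surrounding
// quotes, so both quoted and unquoted forms are accepted.
func etagMatches(clientETag, serverETag string) bool {
	return strings.Trim(clientETag, `"`) == strings.Trim(serverETag, `"`)
}
```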
CopyObject was failing with NoSuchKey when source keys contained special
characters like {} or spaces. The X-Amz-Copy-Source header is URL-encoded
by clients, but ParseCopySource wasn't decoding before filesystem access.
Added url.QueryUnescape() to properly decode bucket and object names,
fixing copy operations for keys with special characters.
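A minimal sketch of the decoding step; the helper shape is illustrative:

```go
import (
	"fmt"
	"net/url"
	"strings"
)

// parseCopySource URL-decodes the X-Amz-Copy-Source header before splitting
// it into bucket and object key, so keys containing characters like {} or
// spaces resolve correctly on the filesystem.
func parseCopySource(header string) (bucket, object string, err error) {
	decoded, err := url.QueryUnescape(strings.TrimPrefix(header, "/"))
	if err != nil {
		return "", "", err
	}
	bucket, object, found := strings.Cut(decoded, "/")
	if !found {
		return "", "", fmt.Errorf("invalid copy source %q", header)
	}
	return bucket, object, nil
}
```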
Fixing this also uncovered an error with azure blob url encoding and
similar special character handling. That fix is included so the
integration tests pass.
Fixes #1832 Fixes #1637
Fixes #1809 Fixes #1806 Fixes #1804 Fixes #1794
This PR focuses on correcting so-called "list-limiter" parsing and validation. The affected limiters include: `max-keys`, `max-uploads`, `max-parts`, `max-buckets`, and `part-number-marker`. When a limiter value is outside the integer range, a specific `InvalidArgument` error is now returned. If the value is a valid integer but negative, a different `InvalidArgument` error is produced.
`max-buckets` has its own validation rules: completely invalid values and values outside the allowed range (`1 <= input <= 10000`) return distinct errors. For `ListObjectVersions`, negative `max-keys` values follow S3’s special-case behavior and return a different `InvalidArgument` error message.
Additionally, `GetObjectAttributes` now follows S3 semantics for `x-amz-max-parts`: S3 ignores invalid values, so the gateway now matches that behavior.
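An illustrative sketch of the two-tier validation for `max-buckets`; the error text and wiring are not the gateway's actual APIError values:

```go
import (
	"fmt"
	"strconv"
)

func parseMaxBuckets(v string) (int32, error) {
	if v == "" {
		return 10000, nil
	}
	n, err := strconv.ParseInt(v, 10, 32)
	if err != nil {
		// not an integer, or outside the integer range
		return 0, fmt.Errorf("InvalidArgument: max-buckets must be an integer")
	}
	if n < 1 || n > 10000 {
		// valid integer, but outside the allowed 1..10000 range
		return 0, fmt.Errorf("InvalidArgument: max-buckets must be between 1 and 10000")
	}
	return int32(n), nil
}
```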
Fixes #1792 Fixes #1747 Fixes #1797 Fixes #1799
This PR primarily introduces delimiter support and several bug fixes for the `ListMultipartUploads` action in the POSIX and Azure backends. Delimiter handling is now implemented — when a delimiter is present in multipart-upload object key names, the backend collects and returns the appropriate common prefixes.
This functionality is achieved by introducing a common multipart-upload lister in the backend package. All backends (Azure, POSIX) now use this lister. The lister accepts a list that is already sorted and filtered by `KeyMarker` and `Prefix`.
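An illustrative sketch of the delimiter grouping the shared lister performs on the pre-sorted, pre-filtered input; names are illustrative:

```go
import "strings"

// groupUploads splits multipart-upload keys into plain keys and common
// prefixes: when the part of the key after Prefix contains the delimiter,
// everything up to and including the delimiter is reported once as a
// common prefix.
func groupUploads(keys []string, prefix, delimiter string) (objects, commonPrefixes []string) {
	seen := map[string]bool{}
	for _, key := range keys {
		rest := strings.TrimPrefix(key, prefix)
		if delimiter != "" {
			if i := strings.Index(rest, delimiter); i >= 0 {
				cp := prefix + rest[:i+len(delimiter)]
				if !seen[cp] {
					seen[cp] = true
					commonPrefixes = append(commonPrefixes, cp)
				}
				continue
			}
		}
		objects = append(objects, key)
	}
	return objects, commonPrefixes
}
```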
Previously, the `KeyMarker` was required to exactly match an existing multipart-upload object key. This restriction is removed. The listing now relies on a lexicographical comparison between the provided `KeyMarker` and existing multipart-upload object keys.
Validation for `UploadIdMarker` is also added to correctly return an `InvalidArgument` error for invalid upload IDs. If `KeyMarker` is missing, the `UploadIdMarker` is ignored entirely. If `KeyMarker` is provided, a valid upload ID is one that matches an upload belonging to *the first object key after the KeyMarker*. For example, if the `KeyMarker` is `foo`, but the provided `UploadIdMarker` corresponds to an upload under `quxx`, it is invalid: it must match one of the uploads for the next object key (`foo` in this example).
Finally, this PR fixes multipart-upload sorting. Multipart uploads must be sorted primarily lexicographically by their object key, and secondarily—when multiple uploads share the same object key—by their initiation time in ascending order.
In the POSIX `ListParts` implementation, there was a code snippet that loaded the object metadata, even though it wasn’t needed and never used in the response. This redundant code has now been removed.
The previous logic did not allow put-object on Windows when the parent directory did not already exist, and did not
always return the correct error if an ancestor in the path already existed as a file.
The problem is the different behavior of os.Stat on Windows compared to *nix in backend/posix/posix.go, in the
function PutObjectWithPostFunc. On *nix, os.Stat returns ENOTDIR if the parent object is a file instead of a
directory. On Windows, if the parent object does not exist at all, os.Stat returns ERROR_PATH_NOT_FOUND, which is
mapped to ENOTDIR. That mapping is inappropriate here: the result of os.Stat is incorrectly interpreted as the
parent object being a file rather than the parent object not existing, which leads to a failed upload.
This fix validates the existing parent structure on put to make sure the correct error is returned or the put succeeds.
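A minimal sketch of the ancestor validation, walking up to the nearest existing path component instead of relying on os.Stat's platform-specific error for a missing path; names are illustrative:

```go
import (
	"os"
	"path/filepath"
	"syscall"
)

// checkParents returns ENOTDIR only when an existing ancestor is a regular
// file; a missing ancestor is fine, since those directories are created later.
func checkParents(bucketRoot, object string) error {
	dir := filepath.Dir(object)
	for dir != "." && dir != string(filepath.Separator) {
		fi, err := os.Lstat(filepath.Join(bucketRoot, dir))
		if err == nil {
			if !fi.IsDir() {
				return syscall.ENOTDIR
			}
			return nil
		}
		if !os.IsNotExist(err) {
			return err
		}
		dir = filepath.Dir(dir)
	}
	return nil
}
```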
Fixes #1702