The gateway currently supports only Signature Version 4 (SigV4) authorization. Deprecated AWS SigV2 requests are now rejected with an AWS-specific `InvalidRequest` error for both Authorization-header requests and query-string requests (presigned URLs).
This also fixes SigV4 Authorization-header handling for date headers. SigV4 accepts two date headers: `Date` and `X-Amz-Date`. `X-Amz-Date` takes precedence, but when it is missing, `Date` should be used. The gateway now uses the `Date` header with lower precedence when `X-Amz-Date` is not present. No SDK integration test was added for this case because the SDK always sets `X-Amz-Date`, and this behavior is not configurable.
Integrate the new S3 checksum types in the gateway, including `SHA512`, `MD5`, `XXHASH64`, `XXHASH3`, and `XXHASH128`. This adds checksum calculation, validation, schema handling, and test coverage for the expanded checksum support.
These external packages have been used:
- `github.com/zeebo/xxh3` for `XXHASH3` and `XXHASH128`
- `github.com/cespare/xxhash/v2` for `XXHASH64`
Adjust integration tests because `aws-sdk-go-v2/service/s3` does not support automatic checksum calculation for the new checksum algorithms and returns an SDK-level error when only the checksum algorithm is provided. Only precalculated checksum values are acceptable for these checksum types.
References:
- `https://github.com/aws/aws-sdk-go-v2/issues/3404`
- `https://github.com/aws/aws-sdk-go-v2/issues/3403`
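For reference, a precalculated checksum value for one of these algorithms can be produced like this. This is a standard-library sketch using SHA512; the XXHASH variants would be computed the same way via the packages listed above (omitted here to keep the example dependency-free):

```go
package main

import (
	"crypto/sha512"
	"encoding/base64"
	"fmt"
)

// checksumSHA512 returns the base64-encoded digest in the form S3 expects
// in the x-amz-checksum-sha512 header (or the equivalent SDK request field).
func checksumSHA512(body []byte) string {
	sum := sha512.Sum512(body)
	return base64.StdEncoding.EncodeToString(sum[:])
}

func main() {
	fmt.Println(checksumSHA512([]byte("hello world")))
}
```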
Add options for embedders to register Fiber routes and middleware before the S3 route table is initialized.
WithRoute registers a top-level route with explicit method and path matching. WithMiddleware registers prefix middleware that can handle a request or call ctx.Next() to continue into the S3 stack.
Add coverage for route registration order when a top-level route and catch-all middleware are both configured.
Closes #1273
Rewrite UnsignedChunkReader to stream the payload bytes directly into the caller buffer instead of allocating and stashing full chunks. With this implementation, no stash is held by the reader and the chunk reader doesn't allocate any memory.
Debug logging is now more descriptive: reader state is recorded on all error paths, and read progress is logged whenever a Read call fills the caller buffer.
Unit tests were added to cover the main moving parts of the reader flow.
The PATCH /:bucket/create admin route was missing
middlewares.ApplyDefaultCORS, while every other admin PATCH route
applies it. The OPTIONS preflight handler already sets CORS headers,
so browsers pass preflight but block the actual response for lacking
Access-Control-Allow-Origin. This caused the WebUI bucket-creation
flow to fail with ERR_FAILED even though the server returned 201.
Fixes #2105. Introduced in #1739 when the endpoint was added.
Co-Authored-By: Claude <noreply@anthropic.com>
Fixes #1986
When a client includes tagging, legal hold, or retention headers in a PutObject, CopyObject or CreateMultipartUpload request, the corresponding bucket policy permissions must be verified in addition to s3:PutObject:
`X-Amz-Tagging` - `s3:PutObjectTagging`
`X-Amz-Object-Lock-Legal-Hold` - `s3:PutObjectLegalHold`
`X-Amz-Object-Lock-Mode` - `s3:PutObjectRetention`
Previously, only s3:PutObject was checked, allowing users to set tagging, legal hold, and retention without holding the required permissions. Now each corresponding permission is checked whenever the matching header is present.
For CopyObject, these permissions are checked on the destination object.
Add a --socket-perm flag (VGW_SOCKET_PERM env var) to control the
file-mode permissions on file-backed UNIX domain sockets. This allows
operators to restrict socket access without relying on the process umask.
The option applies to S3, admin, and WebUI sockets and has no effect
on TCP/IP addresses or Linux abstract namespace sockets.
Fixes #2010
Fixes #2052 Fixes #2056 Fixes #2057
Previously, GetObject and HeadObject used the request's `Range` header to determine the response status code, which caused incorrect 206 responses for invalid Range header values.
The status is now driven by whether res.ContentRange is set in the response, rather than by the presence of a range in the request. Backends (posix and azure) now set Content-Range for PartNumber=1 on non-multipart objects, skipping zero-size objects where no range applies.
HeadObject was also fixed to return 206 when Content-Range is present, and to only return checksums when the full object is requested.
Closes #1064
Use the multipart ETag as the in-progress directory suffix instead of the static `.inprogress` marker, so that concurrent CompleteMultipartUpload calls for the same upload ID are all treated as successful (idempotent) rather than racing, where previously only one succeeded and the rest returned NoSuchUpload.
After finalizing the multipart upload, store an `mp-metadata` xattr on the assembled object that records the upload ID and cumulative byte offsets for each part. GetObject and HeadObject now use this metadata to serve individual part ranges via the `partNumber` query parameter, returning a successful response instead of returning NotImplemented.
Add two new S3 error codes:
- `ErrInvalidPartNumberRange` (416 RequestedRangeNotSatisfiable) — returned
when the requested part number exceeds the number of parts in the upload.
- `ErrRangeAndPartNumber` (400 BadRequest) — returned when both a Range header
and a partNumber query parameter are specified on the same request.
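Resolving a `partNumber` query parameter against the stored cumulative offsets can be sketched like this. Field and function names are illustrative, not the actual `mp-metadata` schema:

```go
package main

import (
	"errors"
	"fmt"
)

// partRange maps a 1-based part number to a byte range using cumulative
// part offsets: offsets[i] is the total size after part i+1, so part n
// covers [offsets[n-2], offsets[n-1]).
func partRange(offsets []int64, partNumber int) (start, end int64, err error) {
	if partNumber < 1 || partNumber > len(offsets) {
		// would map to ErrInvalidPartNumberRange (416)
		return 0, 0, errors.New("part number exceeds part count")
	}
	if partNumber > 1 {
		start = offsets[partNumber-2]
	}
	return start, offsets[partNumber-1], nil
}

func main() {
	offsets := []int64{5 << 20, 10 << 20, 12 << 20} // three parts
	fmt.Println(partRange(offsets, 2))
}
```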
fasthttp v1.70.0 now enforces the HTTP/1.1 requirement of exactly
one Host header, rejecting requests that omit it. Fix tests that
were failing due to missing host.
Closes #1813
We use a specific `versionId` format (the `ulid` package) to generate versionIds in posix, which is not compatible with S3. The versionId validation was performed in the frontend, which is a potential source of failure for an S3 proxy configured against an S3 service that doesn't use ULID for versionId generation (e.g., AWS S3). These changes move the ULID-specific versionId validation to posix so the gateway does not force any specific versionId format.
Fixes the [comment](https://github.com/versity/versitygw/issues/1648#issuecomment-4175425099)
Removes the unnecessary multipart/form-data boundary normalizing. The boundary prefix (`--`) was trimmed in `NewMultipartParser`, which caused incorrect boundary checks for boundaries starting with two dashes (e.g. `----WebKitFormBoundaryABC123`).
Closes #1897
Extract the `X-Amz-Source-Expected-Bucket-Owner` header for CopyObject and UploadPartCopy. Verify the source bucket owner in the backend and if the provided access key id doesn't match, return an `AccessDenied` error.
Fixes #1814
The `x-amz-bucket-region` header is not mentioned in the AWS S3 documentation; however, S3 sends it in all successful ListObjects(V2) responses. The header is now added.
Closes #1648 Fixes #1980 Fixes #1981
This PR implements browser-based POST object uploads for S3-compatible form uploads. It adds support for handling `multipart/form-data` object uploads submitted from browsers, including streaming multipart parsing so file content is not buffered in memory, POST policy decoding and evaluation, SigV4-based form authorization, and integration with the existing `PutObject` backend flow.

The implementation covers the full browser POST upload path, including validation of required form fields, credential scope and request date checks, signature verification, metadata extraction from `x-amz-meta-*` fields, checksum field parsing, object tagging conversion from XML into the query-string format expected by `PutObject`, and browser-compatible success handling through `success_action_status` and `success_action_redirect`. It also wires the new flow into the router and metrics layer and adds POST-specific error handling and debug logging across policy parsing, multipart parsing, and POST authorization.

AWS S3 also accepts the `redirect` form field alongside `success_action_redirect`, but since AWS has marked `redirect` as deprecated and plans to remove it, this gateway intentionally does not support it.
Fixes #1896
Enforces the S3 `5 GiB` copy source size limit across the posix and azure
backends for `CopyObject` and `UploadPartCopy`, returning `InvalidRequest` when
the source object exceeds the threshold.
The limit is now configurable via `--copy-object-threshold`
(`VGW_COPY_OBJECT_THRESHOLD`, default 5 GiB).
A new `--mp-max-parts` flag (`VGW_MP_MAX_PARTS`, default `10000`) has been added to make the multipart upload part-count limit configurable.
No integration test has been added, as GitHub Actions cannot reliably
handle large objects.
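The size gate itself can be sketched in a few lines. This is an illustration, not the gateway's actual code; the threshold is configurable as described above:

```go
package main

import (
	"errors"
	"fmt"
)

// defaultCopyObjectThreshold is the S3 copy source size limit: 5 GiB.
const defaultCopyObjectThreshold = int64(5) << 30

// checkCopySourceSize rejects CopyObject / UploadPartCopy sources larger
// than the configured threshold with an InvalidRequest-style error.
func checkCopySourceSize(size, threshold int64) error {
	if size > threshold {
		return errors.New("InvalidRequest: copy source exceeds maximum allowed size")
	}
	return nil
}

func main() {
	fmt.Println(checkCopySourceSize(6<<30, defaultCopyObjectThreshold))
}
```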
Closes #1967
Add support for response header override query parameters (`response-cache-control`, `response-content-disposition`, `response-content-encoding`, `response-content-language`, `response-content-type`, `response-expires`) in `HeadObject`. Anonymous requests with override params are rejected with `ErrAnonymousResponseHeaders`.
Some SDKs/clients make the assumption that us-east-1 will always
work for bucket listings even if individual buckets are in
different regions. This matches AWS behavior, which allows full
bucket listings from the default us-east-1 region as well as
from any other valid region.
Fixes #1879
This is a fixup of the codebase using:
go run golang.org/x/tools/go/analysis/passes/modernize/cmd/modernize@latest -fix ./...
This has no behavior changes, and only applies safe updates for
modern Go features.
This adds the webui-s3-prefix option to specify a prefix and host
the webui on the same port as the s3 service. Like the health
endpoint, this will mask any bucket with the same name as the
webui prefix.
The benefit of hosting this on the same interface as the s3
service is that no CORS headers are needed for browser access
when the webui and s3 service share the same IP:PORT.
This adds the ability to specify unix domain socket paths for the
service listener with the --port <path> option, where <path> can
be either a path to a file in a filesystem or prefixed with @ for
an abstract socket name.
Anything not matching the <host>:<port> pattern in the --port
option will be considered a socket filename.
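The classification rule can be sketched as follows. This is a deliberately crude sketch of the stated rule, not the gateway's actual parser:

```go
package main

import (
	"fmt"
	"strings"
)

type listenerKind int

const (
	tcpListener listenerKind = iota
	unixListener
	abstractListener
)

// classifyListenAddr applies the --port rule: "@name" is an abstract
// socket, anything matching <host>:<port> is TCP, and everything else is
// treated as a filesystem socket path. The host:port check here is a
// simplistic numeric-suffix test.
func classifyListenAddr(addr string) listenerKind {
	if strings.HasPrefix(addr, "@") {
		return abstractListener
	}
	if i := strings.LastIndex(addr, ":"); i >= 0 {
		port := addr[i+1:]
		if port != "" && strings.Trim(port, "0123456789") == "" {
			return tcpListener
		}
	}
	return unixListener
}

func main() {
	fmt.Println(classifyListenAddr(":7070") == tcpListener)
	fmt.Println(classifyListenAddr("@vgw") == abstractListener)
	fmt.Println(classifyListenAddr("/tmp/vgw.sock") == unixListener)
}
```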
Fixes #1606
According to AWS documentation:
> *“The PUT request header is limited to 8 KB in size. Within the PUT request header, the user-defined metadata is limited to 2 KB in size. The size of user-defined metadata is measured by taking the sum of the number of bytes in the UTF-8 encoding of each key and value.”*
Based on this, object metadata size is now limited to **2 KB** for all object upload operations (`PutObject`, `CopyObject`, and `CreateMultipartUpload`).
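The measurement AWS describes is a straightforward byte sum. A minimal sketch (names are illustrative; Go's `len()` on a string already counts UTF-8 bytes):

```go
package main

import "fmt"

// maxUserMetadataSize is the 2 KB limit applied on object uploads.
const maxUserMetadataSize = 2 * 1024

// userMetadataSize sums the UTF-8 byte lengths of every user-defined
// metadata key and value, as the AWS documentation specifies.
func userMetadataSize(meta map[string]string) int {
	size := 0
	for k, v := range meta {
		size += len(k) + len(v)
	}
	return size
}

func main() {
	meta := map[string]string{"color": "café"} // "café" is 5 bytes in UTF-8
	fmt.Println(userMetadataSize(meta), userMetadataSize(meta) <= maxUserMetadataSize)
}
```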
Fixes handling of metadata HTTP headers when the same header appears multiple times with different casing or even if they are identical. According to S3 behavior, these headers must be merged into a single lower-cased metadata key, with values concatenated using commas.
Example:
```
x-amz-meta-Key: value1
x-amz-meta-kEy: value2
x-amz-meta-keY: value3
```
Translated to:
```
key: value1,value2,value3
```
This PR also introduces an **8 KB limit for request headers**. Although the S3 documentation explicitly mentions the 8 KB limit only for **PUT requests**, in practice this limit applies to **all requests**.
To enforce the header size limit, the Fiber configuration option `ReadBufferSize` is used. This parameter defines the maximum number of bytes read when parsing an incoming request. Note that this limit does not apply strictly to request headers only, since request parsing also includes other parts of the request line (e.g., the HTTP method, protocol string, and version such as `HTTP/1.1`). So `ReadBufferSize` is effectively a limit for request headers size, but not the exact limit.
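The merge rule from the example above can be sketched as a small helper. This is an illustration, not the gateway's actual code:

```go
package main

import (
	"fmt"
	"strings"
)

// mergeMetaHeaders lower-cases x-amz-meta-* header names and joins the
// values of keys that collide after lower-casing with commas, in arrival
// order. Headers are given as (name, value) pairs to preserve order.
func mergeMetaHeaders(headers [][2]string) map[string]string {
	meta := map[string]string{}
	for _, h := range headers {
		key, ok := strings.CutPrefix(strings.ToLower(h[0]), "x-amz-meta-")
		if !ok {
			continue
		}
		if old, exists := meta[key]; exists {
			meta[key] = old + "," + h[1]
		} else {
			meta[key] = h[1]
		}
	}
	return meta
}

func main() {
	m := mergeMetaHeaders([][2]string{
		{"x-amz-meta-Key", "value1"},
		{"x-amz-meta-kEy", "value2"},
		{"x-amz-meta-keY", "value3"},
	})
	fmt.Println(m["key"]) // value1,value2,value3
}
```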
The logic to return a `NotImplemented` error on object upload operations when any ACL header is present has been removed. All object ACL headers are now ignored by default. The `-noacl` flag is preserved to disable bucket ACLs.
**Testing**
The Put/Get object ACL tests are moved to the `NotImplemented` integration tests group as the default gateway behavior. The existing `_acl_not_supported` tests are modified to expect no error when ACLs are used on object uploads.
Closes #1847
This PR introduces a global optional gateway CLI flag `--disable-acl` (`VGW_DISABLE_ACL`) to disable ACL handling. When this flag is enabled, the gateway ignores all ACL-related headers, particularly in `CreateBucket`, `PutObject`, `CopyObject`, and `CreateMultipartUpload`.
`GetBucketAcl` behavior is unchanged, simply returning the bucket ACL config.
There's no change in object ACL actions (`PutObjectACL`, `GetObjectACL`). They return a `NotImplemented` error as before.
A new custom error is added for PutBucketAcl calls when ACLs are disabled at the gateway level. Its HTTP status code and error code match AWS S3’s behavior, with only a slightly different error message.
In the access-control checker, ACL evaluation is fully bypassed. If ACLs are disabled only the bucket owner gets access to the bucket and all grantee checks are ignored.
The PR also includes minor refactoring of the S3 API server and router. The growing list of parameters passed to the router’s Init method has been consolidated into fields within the router struct, initialized during router construction. Parameters not needed by the S3 server are no longer stored in the server configuration and are instead forwarded directly to the router.
Fixes #1870 Fixes #1863
A validation has been added to **PutBucketCors** for `CORSRule.AllowedOrigins`. The `AllowedOrigins` list can no longer be empty; otherwise a **MalformedXML** error is returned. Additionally, each origin is now validated to ensure it does not contain more than one wildcard.
A similar validation has been added for `AllowedMethods`. The list must not be empty, or a **MalformedXML** error is returned. Previously, empty method values (e.g., `[]string{""}`) were incorrectly treated as valid. This has been fixed, and an **UnsupportedCORSMethod** error is now returned.
Fixes #1869
Generally, when object ownership is not explicitly specified during bucket creation, it defaults to `BucketOwnerEnforced`. With `BucketOwnerEnforced`, ACLs are disabled and any attempt to set one results in an `InvalidBucketAclWithObjectOwnership` error.
However, there is an edge case. When the `private` canned ACL is used during bucket creation—which is effectively the default ACL for all buckets—`BucketOwnerEnforced` is still permitted. Moreover, if no explicit object ownership is specified together with the `private` canned ACL, the ownership defaults to `BucketOwnerPreferred`.
This fix also resolves the issue with rclone bucket creation, since rclone sends `x-amz-acl: private` by default:
```
rclone mkdir vgw:test
```
This allows specifying the following options more than once:
port, admin-port, webui
or using a comma-separated list for the env vars:
e.g., VGW_PORT=:7070,:8080,localhost:9090
This will also expand multiple interfaces from hostnames, for example
"localhost" in this case would resolve to both IPv4 and IPv6 interfaces:
localhost has address 127.0.0.1
localhost has IPv6 address ::1
This updates the banner to reflect all of the listening interfaces/ports,
and starts the service listener on all requested interfaces/ports.
Fixes #1761
Fixes #1849
If no `Content-Type` is provided during object upload, S3 defaults it to `application/octet-stream`. This behavior was missing in the gateway, causing backends to persist an empty `Content-Type`, which Fiber then overrides with its default `text/plain`. The behavior has now been corrected for the object upload operations: `PutObject`, `CreateMultipartUpload`, and `CopyObject`.
This is part of the thread exhaustion issue (#1815).
This PR introduces:
* A **maximum Fiber HTTP connections limit**
* A middleware that enforces a **hard limit on in-flight HTTP requests**
When the in-flight request limit is reached, the middleware returns an **S3-compatible `503 SlowDown`** error.
The same mechanism is implemented for the **admin server** (both max connections and max in-flight requests).
All limits are configurable via **CLI flags** and **environment variables**, for both the `s3api` server and the `admin` server.
---
| Setting | CLI Flag | Alias | Environment Variable | Default |
| --------------- | ------------------- | ----- | --------------------- | ------- |
| Max Connections | `--max-connections` | `-mc` | `VGW_MAX_CONNECTIONS` | 250000 |
| Max Requests | `--max-requests` | `-mr` | `VGW_MAX_REQUESTS` | 100000 |
---
| Setting | CLI Flag | Alias | Environment Variable | Default |
| --------------- | ------------------------- | ------ | --------------------------- | ------- |
| Max Connections | `--admin-max-connections` | `-amc` | `VGW_ADMIN_MAX_CONNECTIONS` | 250000 |
| Max Requests | `--admin-max-requests` | `-amr` | `VGW_ADMIN_MAX_REQUESTS` | 100000 |
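The in-flight limit can be illustrated with a dependency-free sketch: a buffered channel acts as a semaphore, and requests that cannot acquire a slot immediately get an S3-style 503 SlowDown. The real gateway uses a Fiber middleware; this sketch uses `net/http` and illustrative names:

```go
package main

import (
	"fmt"
	"net/http"
)

// inflightLimiter caps concurrent in-flight requests with a buffered
// channel used as a counting semaphore.
type inflightLimiter struct{ slots chan struct{} }

func newInflightLimiter(max int) *inflightLimiter {
	return &inflightLimiter{slots: make(chan struct{}, max)}
}

// Wrap rejects requests over the limit with 503 and an S3-style
// SlowDown error body instead of queueing them.
func (l *inflightLimiter) Wrap(next http.Handler) http.Handler {
	return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		select {
		case l.slots <- struct{}{}:
			defer func() { <-l.slots }()
			next.ServeHTTP(w, r)
		default:
			w.WriteHeader(http.StatusServiceUnavailable)
			fmt.Fprint(w, `<Error><Code>SlowDown</Code></Error>`)
		}
	})
}

func main() {
	l := newInflightLimiter(2)
	fmt.Println(cap(l.slots)) // 2
	_ = l.Wrap(http.NotFoundHandler())
}
```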
Fixes #1852 Fixes #1821
Fiber used to return the default `text/plain` `Content-Type` for error responses because it wasn't explicitly set. The `application/xml` content type is now set for all error responses.
Fixes #1835
If-Match in DeleteObject is a precondition header that compares the client-provided ETag with the server-side ETag before deleting the object. Previously, the comparison failed when the client sent an unquoted ETag, because server ETags are stored with quotes. The implementation now trims quotes from both the input ETag and the server ETag before comparison to avoid mismatches. Both quoted and unquoted ETags are valid according to S3.
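The comparison can be sketched in a few lines (an illustration; the function name is hypothetical):

```go
package main

import (
	"fmt"
	"strings"
)

// etagMatch compares ETags with surrounding quotes stripped from both
// sides, so quoted and unquoted client ETags both compare correctly
// against the quoted server-side ETag.
func etagMatch(clientETag, serverETag string) bool {
	return strings.Trim(clientETag, `"`) == strings.Trim(serverETag, `"`)
}

func main() {
	fmt.Println(etagMatch(`d41d8cd98f00b204e9800998ecf8427e`, `"d41d8cd98f00b204e9800998ecf8427e"`))
}
```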
Fixes #1809 Fixes #1806 Fixes #1804 Fixes #1794
This PR focuses on correcting so-called "list-limiter" parsing and validation. The affected limiters are `max-keys`, `max-uploads`, `max-parts`, `max-buckets`, and `part-number-marker`. When a limiter value is outside the integer range, a specific `InvalidArgument` error is now returned. If the value is a valid integer but negative, a different `InvalidArgument` error is produced.
`max-buckets` has its own validation rules: completely invalid values and values outside the allowed range (`1 <= input <= 10000`) return distinct errors. For `ListObjectVersions`, negative `max-keys` values follow S3’s special-case behavior and return a different `InvalidArgument` error message.
Additionally, `GetObjectAttributes` now follows S3 semantics for `x-amz-max-parts`: S3 ignores invalid values, so the gateway now matches that behavior.
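The `max-buckets` rules can be sketched like this (error messages here are illustrative, not the gateway's exact strings):

```go
package main

import (
	"errors"
	"fmt"
	"strconv"
)

// parseMaxBuckets applies the rules described above: non-integer values
// and values outside 1..10000 produce distinct InvalidArgument-style
// errors.
func parseMaxBuckets(v string) (int, error) {
	n, err := strconv.Atoi(v)
	if err != nil {
		return 0, errors.New("InvalidArgument: max-buckets must be an integer")
	}
	if n < 1 || n > 10000 {
		return 0, errors.New("InvalidArgument: max-buckets must be between 1 and 10000")
	}
	return n, nil
}

func main() {
	fmt.Println(parseMaxBuckets("500"))
}
```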
In the refactor for being able to set global CORS headers, the
options router was incorrectly set to use both CORS middlewares
causing duplicate headers to be set. The ApplyBucketCORS()
middleware is not needed for options since this is already handled
by the CORSOptions controller.
Fixes #1819
Fixes #1767 Fixes #1773
As object ACLs are not supported in the gateway, any attempt to set an ACL during object creation must return a NotImplemented error. A check has now been added to `PutObject`, `CopyObject`, and `CreateMultipartUpload` to detect any ACL-related headers and return a NotImplemented error accordingly.