Closes #1525
* Adds validation for the `Content-MD5` header.
* If the header value is invalid, the gateway now returns an `InvalidDigest` error.
* If the value is valid but does not match the payload, it returns a `BadDigest` error.
* Adds integration test cases for `PutBucketCors` with `Content-MD5`.
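A minimal sketch of the Content-MD5 validation described above (function and error names here are illustrative, not the gateway's actual identifiers):

```go
package utils

import (
	"bytes"
	"crypto/md5"
	"encoding/base64"
	"errors"
)

// checkContentMD5 validates the Content-MD5 header value and compares it
// against the MD5 digest of the request payload.
func checkContentMD5(header string, payload []byte) error {
	want, err := base64.StdEncoding.DecodeString(header)
	if err != nil || len(want) != md5.Size {
		return errors.New("InvalidDigest") // not a valid base64-encoded MD5
	}
	sum := md5.Sum(payload)
	if !bytes.Equal(sum[:], want) {
		return errors.New("BadDigest") // valid digest, but does not match the payload
	}
	return nil
}
```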
Fixes #1540 Fixes #1538 Fixes #1513 Fixes #1425
Fixes SigV4 authentication and presigned URL error handling. Adds two sets of errors in the `s3err` package for these authentication mechanisms.
* Adds a check to return a custom "not supported" error when `X-Amz-Security-Token` is present in presigned URLs.
* Adds a check to return a custom "not supported" error when the `AWS4-ECDSA-P256-SHA256` algorithm is used in presigned URLs.
Closes #821
**Implements conditional operations across object APIs:**
* **PutObject** and **CompleteMultipartUpload**:
Supports conditional writes with `If-Match` and `If-None-Match` headers (ETag comparisons).
Evaluation is based on an existing object with the same key in the bucket. The operation is allowed only if the preconditions are satisfied. If no object exists for the key, these headers are ignored.
* **CopyObject** and **UploadPartCopy**:
Adds conditional reads on the copy source object with the following headers:
* `x-amz-copy-source-if-match`
* `x-amz-copy-source-if-none-match`
* `x-amz-copy-source-if-modified-since`
* `x-amz-copy-source-if-unmodified-since`
The first two are ETag comparisons, while the latter two compare against the copy source’s `LastModified` timestamp.
* **AbortMultipartUpload**:
Supports the `x-amz-if-match-initiated-time` header; the abort succeeds only if the multipart upload's initiation time matches the provided timestamp.
* **DeleteObject**:
Adds support for:
* `If-Match` (ETag comparison)
* `x-amz-if-match-last-modified-time` (LastModified comparison)
* `x-amz-if-match-size` (object size comparison)
Additionally, this PR updates precondition date parsing logic to support both **RFC1123** and **RFC3339** formats. Dates set in the future are ignored, matching AWS S3 behavior.
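A sketch of the updated precondition-date parsing (the function name is illustrative): both formats are tried, and future dates are treated as if the header were absent:

```go
package utils

import "time"

// parsePreconditionDate accepts RFC1123 or RFC3339 timestamps and reports
// whether the value should be used; future dates are ignored, matching AWS S3.
func parsePreconditionDate(value string, now time.Time) (time.Time, bool) {
	for _, layout := range []string{time.RFC1123, time.RFC3339} {
		t, err := time.Parse(layout, value)
		if err != nil {
			continue
		}
		if t.After(now) {
			return time.Time{}, false // dates set in the future are ignored
		}
		return t, true
	}
	return time.Time{}, false // unparsable values are ignored
}
```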
The following panic was triggered when the mc client (which uses
chunked uploads) uploaded a 171164 byte file. This likely
could have been hit with other sizes as well, but this size
reliably reproduced the issue.
panic: runtime error: slice bounds out of range [:2] with capacity 1
goroutine 66 [running]:
github.com/versity/versitygw/s3api/utils.(*ChunkReader).parseChunkHeaderBytes(0x14000276200, {0x14000167fff?, 0x14000103180?, 0x200000003?})
versitygw/s3api/utils/signed-chunk-reader.go:372 +0xe54
github.com/versity/versitygw/s3api/utils.(*ChunkReader).parseAndRemoveChunkInfo(0x14000276200, {0x14000167fff, 0x1, 0x1})
versitygw/s3api/utils/signed-chunk-reader.go:251 +0x50
github.com/versity/versitygw/s3api/utils.(*ChunkReader).Read(0x14000276200, {0x14000160000, 0x14000056c00?, 0x8000})
versitygw/s3api/utils/signed-chunk-reader.go:126 +0x188
io.(*teeReader).Read(0x140000b09c0, {0x14000160000, 0x105e7b368?, 0x8000})
/usr/local/go/src/io/io.go:628 +0x34
...
The reproducer is:
% truncate -s 171764 testfile
% mc cp testfile gwtest/mybucket/testfile
mc: <ERROR> Failed to copy `/Users/ben/repo/s3perf/tools/testfile`. Put "http://127.0.0.1:7070/mybucket/testfile": dial tcp 127.0.0.1:7070: connect: connection refused
The panic can happen because the capacity of the header ([]byte)
slice at the point of the debug log call can be less than 2, but we
were always sending the first 2 bytes to the debug log.
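The fix amounts to bounding the slice before logging, roughly like this (the fmt call stands in for the actual debug logger call):

```go
package utils

import "fmt"

// logHeaderPrefix logs at most the first two bytes of the chunk header,
// guarding against short slices instead of always slicing header[:2].
func logHeaderPrefix(header []byte) {
	n := len(header)
	if n > 2 {
		n = 2
	}
	fmt.Printf("[DEBUG] chunk header prefix: %q\n", header[:n])
}
```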
The debuglogger should be a top-level module since we expect
all modules within the project to make use of it. If it's
hidden in s3api, then contributors are less likely to make
use of it outside of s3api.
Closes #882
Implements conditional reads for `GetObject` and `HeadObject` in the gateway for both POSIX and Azure backends. The behavior is controlled by the `If-Match`, `If-None-Match`, `If-Modified-Since`, and `If-Unmodified-Since` request headers, where the first two perform ETag comparisons and the latter two compare against the object’s `LastModified` date. No validation is performed for invalid ETags or malformed date formats, and precondition date headers are expected to follow RFC1123; otherwise, they are ignored.
The integration tests cover all possible combinations of conditional headers to verify AWS S3-compatible behavior.
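A sketch of one way the precondition evaluation can be ordered (If-Match takes priority over If-Unmodified-Since, If-None-Match over If-Modified-Since); the gateway's exact behavior is defined by the integration tests rather than by this illustration:

```go
package object

import (
	"net/http"
	"time"
)

// evalReadPreconditions returns the HTTP status implied by the conditional
// headers; etag and modTime describe the stored object, and nil pointers mean
// the corresponding date header was absent or unparsable (and thus ignored).
func evalReadPreconditions(etag string, modTime time.Time,
	ifMatch, ifNoneMatch string, ifModSince, ifUnmodSince *time.Time) int {

	if ifMatch != "" && ifMatch != etag {
		return http.StatusPreconditionFailed
	}
	if ifMatch == "" && ifUnmodSince != nil && modTime.After(*ifUnmodSince) {
		return http.StatusPreconditionFailed
	}
	if ifNoneMatch != "" && ifNoneMatch == etag {
		return http.StatusNotModified
	}
	if ifNoneMatch == "" && ifModSince != nil && !modTime.After(*ifModSince) {
		return http.StatusNotModified
	}
	return http.StatusOK
}
```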
GetObject allows overriding response headers with the following
parameters:
response-cache-control
response-content-disposition
response-content-encoding
response-content-language
response-content-type
response-expires
This is only valid for signed (and pre-signed) requests. An error
is returned for anonymous requests if these are set.
More info on the GetObject overrides can be found in the GetObject
API reference.
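A sketch of the mapping from override parameters to response headers (the helper name and wiring are hypothetical):

```go
package object

import "net/url"

// responseOverrides maps GetObject query parameters to the response headers
// they override.
var responseOverrides = map[string]string{
	"response-cache-control":       "Cache-Control",
	"response-content-disposition": "Content-Disposition",
	"response-content-encoding":    "Content-Encoding",
	"response-content-language":    "Content-Language",
	"response-content-type":        "Content-Type",
	"response-expires":             "Expires",
}

// applyResponseOverrides copies any override values from the query string
// onto the response via the provided header setter.
func applyResponseOverrides(query url.Values, setHeader func(key, value string)) {
	for param, header := range responseOverrides {
		if v := query.Get(param); v != "" {
			setHeader(header, v)
		}
	}
}
```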
This also clarifies the naming of the AccessOptions IsPublicBucket
to IsPublicRequest to indicate this is a public access request
and not just accessing a bucket that allows public access.
Fixes #1501
Fixes #1345
The previous implementation incorrectly parsed the `x-amz-sdk-checksum-algorithm` header for the `CompleteMultipartUpload` operation, even though this header is not expected and should be ignored. It also mistakenly treated the `x-amz-checksum-algorithm` header as an invalid value for `x-amz-checksum-x`.
The updated implementation only parses the `x-amz-sdk-checksum-algorithm` header for `PutObject` and `UploadPart` operations. Additionally, `x-amz-checksum-algorithm` and `x-amz-checksum-type` headers are now correctly ignored when parsing the precalculated checksum headers (`x-amz-checksum-x`).
Fixes #1339
`x-amz-checksum-type` and `x-amz-checksum-algorithm` request headers should be case insensitive in `CreateMultipartUpload`.
The changes normalize the header values to upper case before validating and passing them to the back-end. The `x-amz-checksum-type` response header, which was previously missing, was added to the `CreateMultipartUpload` response.
Fixes #1352
Adds a validation step in `SigV4` authentication for `x-amz-content-sha256`, checking that it is either a valid SHA-256 hash or a special payload type (`UNSIGNED-PAYLOAD`, `STREAMING-UNSIGNED-PAYLOAD-TRAILER`, ...).
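Roughly, the check accepts either one of the streaming/unsigned payload markers or a 64-character hex digest (the exact set of accepted markers below is an assumption):

```go
package auth

import "regexp"

var sha256Hex = regexp.MustCompile(`^[0-9a-f]{64}$`)

// specialPayloads are the non-hash values allowed in x-amz-content-sha256.
var specialPayloads = map[string]struct{}{
	"UNSIGNED-PAYLOAD":                           {},
	"STREAMING-UNSIGNED-PAYLOAD-TRAILER":         {},
	"STREAMING-AWS4-HMAC-SHA256-PAYLOAD":         {},
	"STREAMING-AWS4-HMAC-SHA256-PAYLOAD-TRAILER": {},
}

// validContentSha256 reports whether the header value is a valid SHA-256 hex
// digest or one of the special payload types.
func validContentSha256(v string) bool {
	if _, ok := specialPayloads[v]; ok {
		return true
	}
	return sha256Hex.MatchString(v)
}
```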
Fixes #1388 Fixes #1389 Fixes #1390 Fixes #1401
Adds `x-amz-copy-source` header validation for `CopyObject` and `UploadPartCopy` on the front end.
The error:
```
ErrInvalidCopySource: {
Code: "InvalidArgument",
Description: "Copy Source must mention the source bucket and key: sourcebucket/sourcekey.",
HTTPStatusCode: http.StatusBadRequest,
},
```
is now deprecated.
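A sketch of the shape of the new validation (bucket and key must both be present; the error strings here are placeholders, not the exact error mapping):

```go
package utils

import (
	"errors"
	"net/url"
	"strings"
)

// parseCopySource splits an x-amz-copy-source value into bucket and key,
// rejecting values that do not name both.
func parseCopySource(raw string) (bucket, key string, err error) {
	decoded, err := url.QueryUnescape(raw)
	if err != nil {
		return "", "", errors.New("invalid copy source encoding")
	}
	decoded = strings.TrimPrefix(decoded, "/")
	bucket, key, ok := strings.Cut(decoded, "/")
	if !ok || bucket == "" || key == "" {
		return "", "", errors.New("copy source must mention the source bucket and key")
	}
	return bucket, key, nil
}
```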
Validation of the conditional read/write headers in `CopyObject` will come with #821 and #822.
Closes #908
This PR introduces a new routing system integrated with Fiber. It matches each S3 action to a route using middleware utility functions (e.g., URL query match, request header match). Each S3 action is mapped to a dedicated route in the Fiber router. This functionality cannot be achieved using standard Fiber methods, as Fiber lacks the necessary tooling for such dynamic routing.
Additionally, this PR implements a generic response handler to manage responses from the backend. This abstraction helps isolate the controller from the data layer and Fiber-specific response logic.
With this approach, controller unit testing becomes simpler and more effective.
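An illustrative-only sketch of the idea, using hypothetical helper and controller names rather than the PR's actual API:

```go
package router

import "github.com/gofiber/fiber/v2"

// matchQuery runs the handler only when the given query parameter is present;
// otherwise it falls through to the next handler registered on the route.
func matchQuery(param string, h fiber.Handler) fiber.Handler {
	return func(c *fiber.Ctx) error {
		if !c.Context().QueryArgs().Has(param) {
			return c.Next()
		}
		return h(c)
	}
}

// Register maps S3 actions that share a method and path onto dedicated handlers.
func Register(app *fiber.App, putBucketTagging, putBucketPolicy, createBucket fiber.Handler) {
	app.Put("/:bucket",
		matchQuery("tagging", putBucketTagging),
		matchQuery("policy", putBucketPolicy),
		createBucket, // fallback when no query parameter matched
	)
}
```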
An update to fasthttp has deprecated the VisitAll() method
in favor of an iterator function All() that can be used to range
over all headers.
This should fix the staticcheck warnings for calling the
deprecated function.
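Assuming the newer fasthttp iterator, the change is roughly a switch from a callback to a range loop (the exact iterator shape is the library's; this is only an approximation):

```go
package utils

import (
	"fmt"

	"github.com/valyala/fasthttp"
)

// logHeaders ranges over all request headers with the All() iterator that
// replaces the deprecated VisitAll() callback.
func logHeaders(ctx *fasthttp.RequestCtx) {
	// Previously: ctx.Request.Header.VisitAll(func(k, v []byte) { ... })
	for key, value := range ctx.Request.Header.All() {
		fmt.Printf("%s: %s\n", key, value)
	}
}
```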
The CRC32, CRC32c, and CRC64NVME data integrity checksums support calculating
the composite full object values for multi-part uploads using the checksum
and length of the individual parts.
Previously, we were reading all of the part data to recalculate the full
object checksum values during the complete multipart upload call. This
disabled the optimized copy_file_range() for certain filesystems such as
XFS because the part data was being read. If the data is not read, and
the file handle is passed directly to io.Copy(), then the filesystem is
allowed to optimize the copying of the data from the source to destination
files.
This now allows both the copy_file_range() optimization and the
data integrity features for the supported composite checksum types.
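The key point is that io.Copy can take the part's file handle directly; when both sides are *os.File on Linux, the runtime may use copy_file_range and never reads the part data into userspace. A minimal sketch:

```go
package posix

import (
	"io"
	"os"
)

// appendPart copies a part file into the destination object file without
// reading the data in userspace, letting the kernel optimize the copy
// (e.g. copy_file_range on XFS), while the composite checksum is computed
// from the per-part checksums and lengths rather than the data itself.
func appendPart(dst *os.File, partPath string) error {
	src, err := os.Open(partPath)
	if err != nil {
		return err
	}
	defer src.Close()
	_, err = io.Copy(dst, src)
	return err
}
```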
This implementation introduces **public buckets**, which are accessible without signature-based authentication.
There are two ways to grant public access to a bucket:
* **Bucket ACLs**
* **Bucket Policies**
Only `Get` and `List` operations are permitted on public buckets. All **write operations** require authentication, regardless of whether public access is granted through an ACL or a policy.
The implementation includes an `AuthorizePublicBucketAccess` middleware, which checks if public access has been granted to the bucket. If so, authentication middlewares are skipped. For unauthenticated requests, appropriate errors are returned based on the specific S3 action.
---
**1. Bucket-Level Operations:**
```json
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Principal": "*",
"Action": "s3:*",
"Resource": "arn:aws:s3:::test"
}
]
}
```
**2. Object-Level Operations:**
```json
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Principal": "*",
"Action": "s3:*",
"Resource": "arn:aws:s3:::test/*"
}
]
}
```
**3. Both Bucket and Object-Level Operations:**
```json
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Principal": "*",
"Action": "s3:*",
"Resource": "arn:aws:s3:::test"
},
{
"Effect": "Allow",
"Principal": "*",
"Action": "s3:*",
"Resource": "arn:aws:s3:::test/*"
}
]
}
```
---
```sh
aws s3api create-bucket --bucket test --object-ownership BucketOwnerPreferred
aws s3api put-bucket-acl --bucket test --acl public-read
```
This adds an object name validation util to check if the object
path would resolve to a path outside of the bucket directory.
S3 returns Bad Request for these types of paths:
% aws s3api put-object --bucket mybucket --key test/../../hello
An error occurred (400) when calling the PutObject operation: Bad Request
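A sketch of the check (names are illustrative): clean the joined path and make sure it still sits under the bucket directory:

```go
package utils

import (
	"path/filepath"
	"strings"
)

// objectEscapesBucket reports whether the object key would resolve to a path
// outside of the bucket directory (e.g. "test/../../hello").
func objectEscapesBucket(bucketDir, object string) bool {
	full := filepath.Clean(filepath.Join(bucketDir, object))
	if full == bucketDir {
		return false
	}
	return !strings.HasPrefix(full, bucketDir+string(filepath.Separator))
}
```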
Closes #803
Implements host-style bucket addressing in the gateway. This feature can be enabled by running the gateway with the `--virtual-domain` flag and specifying a virtual domain name.
Example:
```bash
./versitygw -a user -s secret --virtual-domain localhost:7070 posix /tmp/vgw
```
The implementation follows this approach: it introduces a middleware (`HostStyleParser`) that parses the bucket name from the `Host` header and appends it to the URL path. This effectively transforms the request into a path-style bucket addressing format, which the gateway already supports. With this design, the gateway can handle both path-style and host-style requests when running in host-style mode.
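A minimal sketch of that middleware (names are illustrative, not the PR's exact code):

```go
package middlewares

import (
	"strings"

	"github.com/gofiber/fiber/v2"
)

// HostStyleParser rewrites host-style requests (bucket.<virtual-domain>/key)
// into the path-style form (/bucket/key) that the existing routes handle.
func HostStyleParser(virtualDomain string) fiber.Handler {
	return func(c *fiber.Ctx) error {
		host := c.Get(fiber.HeaderHost)
		if bucket, ok := strings.CutSuffix(host, "."+virtualDomain); ok && bucket != "" {
			c.Path("/" + bucket + c.Path())
		}
		return c.Next()
	}
}
```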
For local testing, one can either set up a local DNS server to wildcard-match all subdomains of a specified domain and resolve them to the local IP address, or manually add entries to `/etc/hosts` to resolve bucket-prefixed hosts to the server IP (e.g., `127.0.0.1`).
Closes #1221
Adds debug logging for `signed`/`unsigned` chunk readers.
Adds the `debuglogger.Infof` log method, which prints out green info logs with `[INFO]:` prefix.
The debug logging includes some chunk details: size, signature, trailers. It also prints out stash/release stash operations.
The error cases are logged with the standard yellow `[DEBUG]:` prefix.
The `String to sign` block in the signed chunk reader is logged between purple horizontal borders with a title.
Fixes #961 Fixes #1248
The gateway should return a `MissingContentLength` error if the `Content-Length` HTTP header is missing for upload operations (`PutObject`, `UploadPart`).
The second fix involves enforcing a maximum object size limit of `5 * 1024 * 1024 * 1024` bytes (5 GB) by validating the value of the `Content-Length` header. If the value exceeds this limit, the gateway should return an `EntityTooLarge` error.
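Roughly, the two checks look like this (the error values stand in for the gateway's `s3err` entries):

```go
package utils

import (
	"errors"
	"strconv"
)

const maxObjectSize = 5 * 1024 * 1024 * 1024 // 5 GB single-upload limit

// checkContentLength enforces the presence and maximum value of the
// Content-Length header for PutObject and UploadPart.
func checkContentLength(header string) error {
	if header == "" {
		return errors.New("MissingContentLength")
	}
	size, err := strconv.ParseInt(header, 10, 64)
	if err != nil || size < 0 {
		return errors.New("MissingContentLength") // illustrative choice for malformed values
	}
	if size > maxObjectSize {
		return errors.New("EntityTooLarge")
	}
	return nil
}
```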
Fixes #1238
The signed chunk reader stashes the header bytes if it can't fully parse the chunk header. On the next `io.Reader` call, the stash is combined with the new buffer data to attempt parsing the header again. The stashing logic was broken due to the premature removal of the first two header bytes (`\r\n`). As a result, the stash was incomplete, leading to parsing issues on subsequent calls.
These changes fix the stashing logic and correct the buffer offset calculation in `parseChunkHeaderBytes`.
Fixes #1214 Fixes #1231 Fixes #1232
Implements `utils.ParseTagging` which is a generic implementation of parsing tags for both `PutObjectTagging` and `PutBucketTagging`.
- The actions now return `MalformedXML` if the provided request body is invalid.
- Adds validation to return `InvalidTag` if duplicate keys are present in tagging.
- For invalid tag keys, it creates a new error: `ErrInvalidTagKey`.
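An illustrative sketch of the generic parse (the XML types and error strings here are assumptions, not the gateway's actual ones):

```go
package utils

import (
	"encoding/xml"
	"errors"
)

type tagging struct {
	TagSet struct {
		Tags []struct {
			Key   string `xml:"Key"`
			Value string `xml:"Value"`
		} `xml:"Tag"`
	} `xml:"TagSet"`
}

// parseTagging parses a Tagging request body shared by PutObjectTagging and
// PutBucketTagging, rejecting malformed XML and duplicate tag keys.
func parseTagging(body []byte) (map[string]string, error) {
	var t tagging
	if err := xml.Unmarshal(body, &t); err != nil {
		return nil, errors.New("MalformedXML")
	}
	tags := make(map[string]string, len(t.TagSet.Tags))
	for _, tag := range t.TagSet.Tags {
		if _, exists := tags[tag.Key]; exists {
			return nil, errors.New("InvalidTag") // duplicate keys are not allowed
		}
		tags[tag.Key] = tag.Value
	}
	return tags, nil
}
```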
Added missing debug logs in the `front-end` and `utility` functions.
Enhanced debug logging with the following improvements:
- Each debug message is now prefixed with [DEBUG] and appears in color.
- The full request URL is printed at the beginning of each debug log block.
- Request/response details are wrapped in framed sections for better readability.
- Headers are displayed in a colored box.
- XML request/response bodies are pretty-printed with indentation and color.
Fixes #1171
As signature version 2 is deprecated, the gateway doesn't support it.
AWS S3 still supports signature version 2 in some regions; in other regions the request fails with the error:
```
InvalidRequest: The authorization mechanism you have provided is not supported. Please use AWS4-HMAC-SHA256.
```
This PR changes the gateway to return the unsupported authorization mechanism error for `SigV2` requests.
Fixes #1186 Fixes #1188 Fixes #1189
If multiple checksum headers are provided, regardless of whether they are empty, the gateway should return `(InvalidRequest): Expecting a single x-amz-checksum- header. Multiple checksum Types are not allowed.`
An empty checksum header is considered invalid, because it is not a valid crc32, crc32c, etc. value.
Fixes #1165
The signed chunk encoding with trailers should return an API error for:
1. Invalid checksum - `(InvalidRequest) Value for x-amz-checksum-x trailing header is invalid.`
2. Incorrect checksum - `(BadDigest) The x you specified did not match the calculated checksum.`
Where `x` could be crc32, crc32c, sha1 ...
Fixes #1000
`GetObjectAttributes` returned `InvalidRequest` instead of `InvalidArgument` with description `Invalid attribute name specified.`.
Fixes the logic in `ParseObjectAttributes` to ignore empty values in the `X-Amz-Object-Attributes` headers and to return `InvalidArgument` if all of the specified object attributes are invalid.
Closes #1159 Fixes #1161
Implements signed chunk encoding with trailers in the gateway.
The signed encoding (both with and without trailers) is now handled by the `ChunkReader`.
Fixes the `ChunkReader` implementation to validate encoding headers byte by byte.
The chunk encoding with trailers follows the general signed chunk encoding pattern, but the final chunk includes the trailing signature (`x-amz-trailing-signature`) and the checksum header (`x-amz-checksum-x`, where `x` can be `crc32`, `crc32c`, `sha1`, `sha256`, or `crc64nvme`).
Adds validation for the `X-Amz-Trailer` header.
Fixes #1147
In the signed chunk encoding implementation, the final chunk header (with 0 length) contains the last signature.
Added verification of this last signature to the signed chunk encoding without trailers.
Fixes #1141 Fixes #1142
Changes the error type to `InvalidArgument` for invalid values of the `x-amz-object-lock-legal-hold` and `x-amz-object-lock-mode` headers.
StreamResponseBody() called ctx.Write() in a loop with a small
buffer in an attempt to stream data back to the client. But
ctx.Write() was just appending the buffer to the response instead
of streaming the data back to the client.
The correct way to stream the response back is to use
(ctx *fasthttp.RequestCtx).SetBodyStream() to set the body stream
reader, and the response will automatically get streamed back
using the reader. This will also call Close() on our body
since we are providing an io.ReadCloser.
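In outline, the streaming approach looks like this (sizes and wiring simplified):

```go
package controllers

import (
	"io"

	"github.com/valyala/fasthttp"
)

// streamResponse hands the object reader to fasthttp, which streams it to the
// client and closes it (the reader is an io.ReadCloser). A negative size means
// the length is unknown and the body is sent with chunked transfer encoding.
func streamResponse(ctx *fasthttp.RequestCtx, body io.ReadCloser, size int64) {
	if size >= 0 {
		ctx.SetBodyStream(body, int(size))
		return
	}
	ctx.SetBodyStream(body, -1)
}
```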
Testing this should be done with single large get requests such as
aws s3api get-object --bucket bucket --key file /tmp/data
for very large objects. The testing shows significantly reduced
memory usage for large objects once the streaming is enabled.
Fixes #1082
We were getting errors such as:
2025/02/07 19:24:28 Internal Error, write object data: write exceeds content length 87
whenever the chunk encoding did not have the correct chunk
signatures. The issue was that the chunk encoding reader
was reading from the underlying reader and then passing the full
buffer read back to the caller if the underlying reader returned
an error. This meant that we were not processing the chunk
headers within the buffer due to the higher level error, and
would possibly hand back the longer unprocessed chunk encoded
stream to the caller that was in turn trying to write to the
object file exceeding the content length limit.
Fixes #1056
An invalid chunk encoding, or parse errors that result in parsing
invalid data, can lead to a server panic if the remaining chunk
header is determined to be larger than the max buffer size.
This was previously seen when the chunk trailer checksums were
used by the client without server-side support for this encoding.
Example panic:
panic: runtime error: slice bounds out of range [4088:1024]
goroutine 5 [running]:
github.com/versity/versitygw/s3api/utils.(*ChunkReader).parseChunkHeaderBytes(0xc0003c4280, {0xc0000e6000?, 0x3000?, 0x423525?})
/home/tester/s3api/utils/chunk-reader.go:242 +0x492
github.com/versity/versitygw/s3api/utils.(*ChunkReader).parseAndRemoveChunkInfo(0xc0003c4280, {0xc0000e6000, 0x3000, 0x8000})
/home/tester/s3api/utils/chunk-reader.go:170 +0x20b
github.com/versity/versitygw/s3api/utils.(*ChunkReader).Read(0xc0003c4280, {0xc0000e6000, 0xc0000b41e0?, 0x8000})
/home/tester/s3api/utils/chunk-reader.go:91 +0x11e
This fix will validate the data length before copying into the
temporary buffer to prevent a panic and instead just return
an error.