Adjusts the admin APIs to the new advanced routing changes.
Enables debug logging for the separate admin server (when a separate server is run for the admin APIs).
Adds the quiet mode for the separate admin server.
Fixes #1036
Fixes the issue where calling a non-existent root endpoint (`POST /`) returned `NoSuchBucket`. Now the gateway returns the correct `MethodNotAllowed` error.
Fixes #896 Fixes #899
Registers a catch-all route handler at the end of the router to handle the cases where the API call doesn't match any S3 action. The catch-all matcher returns `MethodNotAllowed` for such requests.
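A minimal sketch of the idea, assuming a Fiber app (the `s3err` error-helper names are illustrative):
```go
// Registered after all S3 routes: any request that matched no S3
// action falls through here and receives MethodNotAllowed.
app.All("/*", func(c *fiber.Ctx) error {
	return s3err.GetAPIError(s3err.ErrMethodNotAllowed) // error name assumed
})
```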
Closes #908
This PR introduces a new routing system integrated with Fiber. It matches each S3 action to a route using middleware utility functions (e.g., URL query match, request header match). Each S3 action is mapped to a dedicated route in the Fiber router. This functionality cannot be achieved using standard Fiber methods, as Fiber lacks the necessary tooling for such dynamic routing.
Additionally, this PR implements a generic response handler to manage responses from the backend. This abstraction helps isolate the controller from the data layer and Fiber-specific response logic.
With this approach, controller unit testing becomes simpler and more effective.
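A hedged sketch of the matcher pattern (the helper names here are illustrative, not the actual versitygw utilities): each S3 action registers one Fiber route whose handler runs its matcher predicates first and falls through to the next registered route when any of them fails.
```go
// Matcher is a predicate deciding whether a request is this S3 action.
type Matcher func(*fiber.Ctx) bool

// MatchQuery matches when the given query parameter is present,
// e.g. "tagging" for the GetBucketTagging action.
func MatchQuery(param string) Matcher {
	return func(c *fiber.Ctx) bool {
		return c.Context().QueryArgs().Has(param)
	}
}

// Route wraps an action handler with its matchers; a failed match
// falls through to the next registered route via c.Next().
func Route(handler fiber.Handler, matchers ...Matcher) fiber.Handler {
	return func(c *fiber.Ctx) error {
		for _, m := range matchers {
			if !m(c) {
				return c.Next()
			}
		}
		return handler(c)
	}
}
```
A route can then be registered per action, e.g. `app.Get("/:bucket", Route(ctrl.GetBucketTagging, MatchQuery("tagging")))`.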
Public buckets support a set of actions on buckets and objects, returning various errors based on the S3 action type and permissions (ACL or policy). The implementation aligns with the table provided in [this gist](https://gist.github.com/niksis02/5919d52d6112537a31c14d9abfa89ac0).
This implementation introduces **public buckets**, which are accessible without signature-based authentication.
There are two ways to grant public access to a bucket:
* **Bucket ACLs**
* **Bucket Policies**
Only `Get` and `List` operations are permitted on public buckets. All **write operations** require authentication, regardless of whether public access is granted through an ACL or a policy.
The implementation includes an `AuthorizePublicBucketAccess` middleware, which checks if public access has been granted to the bucket. If so, authentication middlewares are skipped. For unauthenticated requests, appropriate errors are returned based on the specific S3 action.
---
**1. Bucket-Level Operations:**
```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": "*",
      "Action": "s3:*",
      "Resource": "arn:aws:s3:::test"
    }
  ]
}
```
**2. Object-Level Operations:**
```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": "*",
      "Action": "s3:*",
      "Resource": "arn:aws:s3:::test/*"
    }
  ]
}
```
**3. Both Bucket and Object-Level Operations:**
```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": "*",
      "Action": "s3:*",
      "Resource": "arn:aws:s3:::test"
    },
    {
      "Effect": "Allow",
      "Principal": "*",
      "Action": "s3:*",
      "Resource": "arn:aws:s3:::test/*"
    }
  ]
}
```
---
```sh
aws s3api create-bucket --bucket test --object-ownership BucketOwnerPreferred
aws s3api put-bucket-acl --bucket test --acl public-read
```
The debuglogger logs only get printed if debug is enabled,
but we always want the internal server error logs to be logged
by the service, since these are usually actionable errors
that need to be addressed in the backend storage system.
This changes internal server error logs to always be sent to
stderr.
Add helper util auth.UpdateBucketACLOwner() that sets a new
default ACL based on the new owner and removes the old bucket
policy.
The ChangeBucketOwner() remains in the backend.Backend
interface in case there is ever a backend that needs to manage
ownership in some other way than with bucket ACLs. The arguments
change to clarify the updated owner. This will break any
plugins implementing the old interface. They should use the new
auth.UpdateBucketACLOwner() or implement the corresponding
change specific to the backend.
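A hedged sketch of how a plugin might adapt; the parameter lists below are assumptions for illustration, and the authoritative signature is the one in backend.Backend:
```go
// Hypothetical plugin method: delegate ownership changes to the shared
// ACL helper instead of a backend-specific mechanism. The argument
// names and types here are assumptions, not the exact interface.
func (b *MyBackend) ChangeBucketOwner(ctx context.Context, bucket, newOwner string) error {
	return auth.UpdateBucketACLOwner(ctx, b, bucket, newOwner)
}
```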
An update to fasthttp has deprecated the VisitAll() method
in favor of an iterator function All() that can be used to range
over all headers.
This should fix the staticcheck warnings for calling the
deprecated function.
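For example, iterating request headers (the same applies to response headers):
```go
// dumpHeaders shows the migration: VisitAll (deprecated) vs. the
// new All() range-over-func iterator.
func dumpHeaders(ctx *fasthttp.RequestCtx) {
	// Before (deprecated):
	ctx.Request.Header.VisitAll(func(key, value []byte) {
		fmt.Printf("%s: %s\n", key, value)
	})

	// After:
	for key, value := range ctx.Request.Header.All() {
		fmt.Printf("%s: %s\n", key, value)
	}
}
```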
The CRC32, CRC32c, and CRC64NVME data integrity checksums support calculating
the composite full object values for multi-part uploads using the checksum
and length of the individual parts.
Previously, we were reading all of the part data to recalculate the full
object checksum values during the complete multipart upload call. This
disabled the optimized copy_file_range() for certain filesystems such as
XFS because the part data was being read. If the data is not read, and
the file handle is passed directly to io.Copy(), then the filesystem is
allowed to optimize the copying of the data from the source to destination
files.
This now allows both the copy_file_range() optimization and the
data integrity features for the supported composite checksum types.
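For illustration, on Linux `io.Copy` between two `*os.File` handles takes the `copy_file_range(2)` fast path (via the `ReaderFrom` implementation), while wrapping the source reader to recalculate a checksum forces the data through userspace:
```go
// copyPart copies one part file into the destination object file.
// Passing the raw *os.File handles lets the kernel move the bytes;
// a hashing wrapper around src (the old behavior) would defeat this.
func copyPart(dst, src *os.File) (int64, error) {
	return io.Copy(dst, src)
}
```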
Implements a middleware that validates incoming bucket and object names before authentication. This helps prevent malicious attempts to access restricted or unreachable data in `POSIX`.
Adds test cases to cover such attack scenarios, including cases where encoded paths are used to try to access resources outside the intended bucket.
Removes bucket validation from all other layers, including `controllers` and both the `POSIX` and `ScoutFS` backends, by moving the logic entirely into the middleware layer.
This adds an object name validation util to check if the object
path would resolve to a path outside of the bucket directory.
S3 returns Bad Request for these types of paths:
```
% aws s3api put-object --bucket mybucket --key test/../../hello
An error occurred (400) when calling the PutObject operation: Bad Request
```
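A minimal sketch of the check (the function name is illustrative): clean the key and see whether it climbs above the bucket root.
```go
import (
	"path"
	"strings"
)

// objectEscapesBucket reports whether an object key would resolve
// outside the bucket directory once cleaned; "test/../../hello"
// cleans to "../hello" and is rejected.
func objectEscapesBucket(key string) bool {
	cleaned := path.Clean(key)
	return cleaned == ".." || strings.HasPrefix(cleaned, "../")
}
```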
Closes #803
Implements host-style bucket addressing in the gateway. This feature can be enabled by running the gateway with the `--virtual-domain` flag and specifying a virtual domain name.
Example:
```bash
./versitygw -a user -s secret --virtual-domain localhost:7070 posix /tmp/vgw
```
The implementation follows this approach: it introduces a middleware (`HostStyleParser`) that parses the bucket name from the `Host` header and appends it to the URL path. This effectively transforms the request into a path-style bucket addressing format, which the gateway already supports. With this design, the gateway can handle both path-style and host-style requests when running in host-style mode.
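A hedged sketch of that middleware (illustrative, not the exact implementation):
```go
// HostStyleParser rewrites <bucket>.<virtual-domain> requests into
// path-style ones so the existing router handles them unchanged.
func HostStyleParser(virtualDomain string) fiber.Handler {
	return func(c *fiber.Ctx) error {
		host := string(c.Request().Header.Host()) // raw Host header, may include port
		if suffix := "." + virtualDomain; strings.HasSuffix(host, suffix) {
			bucket := strings.TrimSuffix(host, suffix)
			c.Path("/" + bucket + c.Path())
		}
		return c.Next()
	}
}
```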
For local testing, one can either set up a local DNS server to wildcard-match all subdomains of a specified domain and resolve them to the local IP address, or manually add entries to `/etc/hosts` to resolve bucket-prefixed hosts to the server IP (e.g., `127.0.0.1`).
Closes #1295
Makes the user `role` mutable in the /update-user admin endpoint.
Integrates the changes into the `admin update-user` CLI command by adding the `role` flag for user role modification.
Fixes #1276
Creates the custom `s3response.CopyObjectOutput` type to handle the `LastModified` date property formatting correctly. It uses `time.RFC3339` to format the date to match the format that S3 uses.
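For reference, this is standard library behavior:
```go
// formatLastModified renders a timestamp the way S3 does for
// LastModified, e.g. "2009-11-10T23:00:00Z".
func formatLastModified(t time.Time) string {
	return t.UTC().Format(time.RFC3339)
}
```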
Closes #1221
Adds debug logging for `signed`/`unsigned` chunk readers.
Adds the `debuglogger.Infof` log method, which prints out green info logs with `[INFO]:` prefix.
The debug logging includes some chunk details: size, signature, trailers. It also prints out stash/release-stash operations.
The error cases are logged with the standard yellow `[DEBUG]:` prefix.
The `String to sign` block in the signed chunk reader is logged with purple horizontal borders and a title.
On 32-bit systems, this value could overflow. Add a check for the
overflow and return ErrInvalidRange if it does overflow.
The type in GetObjectOutput for ContentLength is *int64, but the
fasthttp.RequestCtx.SetBodyStream() takes type int. So there is
no way to set the bodysize to the correct limit if the value
overflows.
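A minimal guard along the lines described, inside the GetObject response path (the error helper name is assumed):
```go
size := *output.ContentLength // *int64 from GetObjectOutput
if int64(int(size)) != size { // conversion overflows int on 32-bit systems
	return s3err.GetAPIError(s3err.ErrInvalidRange) // name assumed
}
ctx.SetBodyStream(body, int(size))
```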
Fixes #1258 Fixes #1257 Closes #1244
Adds range queries support for `HeadObject`.
Fixes the range parsing logic for `GetObject`, which is used for `HeadObject` as well. Both actions follow the same rules for range parsing.
Fixes the error message returned by `GetObject`.
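Example with an aws-sdk-go-v2 client pointed at the gateway (bucket and key are placeholders):
```go
out, err := client.HeadObject(ctx, &s3.HeadObjectInput{
	Bucket: aws.String("test"),
	Key:    aws.String("obj"),
	Range:  aws.String("bytes=0-99"),
})
```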
Fixes #1253
Removes the XML pretty printing from the debug logger. Instead, it prints the raw request/response body. This way we avoid missing or adding anything to the raw XML, which could lead to confusion.
The xml encoding for the s3.CompleteMultipartUploadOutput response
type was not producing exactly the right field names for the
expected complete multipart upload result.
This change follows the pattern we have had to do for other xml
responses to create our own type that will encode better to the
expected response.
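Sketched with the field names from the S3 CompleteMultipartUpload response (the Go type itself is illustrative, not necessarily the exact one added):
```go
// CompleteMultipartUploadResult encodes to the element names S3
// clients expect, rather than what s3.CompleteMultipartUploadOutput
// produces by default.
type CompleteMultipartUploadResult struct {
	XMLName  xml.Name `xml:"CompleteMultipartUploadResult"`
	Location string   `xml:"Location"`
	Bucket   string   `xml:"Bucket"`
	Key      string   `xml:"Key"`
	ETag     string   `xml:"ETag"`
}
```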
This will change the backend.Backend interface, so plugins and
other backends will have to make the corresponding changes.
Fixes #961 Fixes #1248
The gateway should return a `MissingContentLength` error if the `Content-Length` HTTP header is missing for upload operations (`PutObject`, `UploadPart`).
The second fix involves enforcing a maximum object size limit of `5 * 1024 * 1024 * 1024` bytes (5 GB) by validating the value of the `Content-Length` header. If the value exceeds this limit, the gateway should return an `EntityTooLarge` error.
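A hedged sketch of both checks (the constant and error names are illustrative):
```go
const maxObjectSize = 5 * 1024 * 1024 * 1024 // 5 GiB

contentLength := c.Request().Header.ContentLength()
if contentLength < 0 { // negative: header absent or chunked body
	return s3err.GetAPIError(s3err.ErrMissingContentLength)
}
if int64(contentLength) > maxObjectSize {
	return s3err.GetAPIError(s3err.ErrEntityTooLarge)
}
```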
Fixes #1238
The signed chunk reader stashes the header bytes if it can't fully parse the chunk header. On the next `io.Reader` call, the stash is combined with the new buffer data to attempt parsing the header again. The stashing logic was broken due to the premature removal of the first two header bytes (`\r\n`). As a result, the stash was incomplete, leading to parsing issues on subsequent calls.
These changes fix the stashing logic and correct the buffer offset calculation in `parseChunkHeaderBytes`.
Fixes #1214 Fixes #1231 Fixes #1232
Implements `utils.ParseTagging` which is a generic implementation of parsing tags for both `PutObjectTagging` and `PutBucketTagging`.
- The actions now return `MalformedXML` if the provided request body is invalid.
- Adds validation to return `InvalidTag` if duplicate keys are present in tagging (see the sketch after this list).
- For invalid tag keys, it creates a new error: `ErrInvalidTagKey`.
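A sketch of the duplicate-key validation (the type and error names are illustrative):
```go
seen := make(map[string]struct{}, len(tagging.TagSet.Tags))
for _, tag := range tagging.TagSet.Tags {
	if _, dup := seen[tag.Key]; dup {
		return s3err.GetAPIError(s3err.ErrInvalidTag) // duplicate tag keys
	}
	seen[tag.Key] = struct{}{}
}
```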
Added missing debug logs in the `front-end` and `utility` functions.
Enhanced debug logging with the following improvements:
- Each debug message is now prefixed with [DEBUG] and appears in color.
- The full request URL is printed at the beginning of each debug log block.
- Request/response details are wrapped in framed sections for better readability.
- Headers are displayed in a colored box.
- XML request/response bodies are pretty-printed with indentation and color.
Fixes #1204 Fixes #1205
The tag count in `PutBucketTagging` and `PutObjectTagging` is limited:
- `PutBucketTagging`: 50
- `PutObjectTagging`: 10
Adds checks to return the corresponding errors when the limits are exceeded.
Fixes #1171
As signature v2 is deprecated, the gateway doesn't support it.
AWS S3 still supports signature version 2 in some regions; in other regions the request fails with the error:
```
InvalidRequest: The authorization mechanism you have provided is not supported. Please use AWS4-HMAC-SHA256.
```
This PR changes the gateway to return the unsupported authorization mechanism error for `sigV2` requests.
Fixes #1186 Fixes #1188 Fixes #1189
If multiple checksum headers are provided, whether they are empty or not, the gateway should return `(InvalidRequest): Expecting a single x-amz-checksum- header. Multiple checksum Types are not allowed.`
An empty checksum header is considered invalid, because an empty value is not a valid crc32, crc32c, etc.
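A sketch of the detection (illustrative and simplified; the error name is assumed):
```go
checksums := 0
for key := range c.Request().Header.All() {
	if strings.HasPrefix(strings.ToLower(string(key)), "x-amz-checksum-") {
		checksums++
	}
}
if checksums > 1 {
	return s3err.GetAPIError(s3err.ErrMultipleChecksumHeaders) // name assumed
}
```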
Closes #819
ListObjects returns object owner data in each object entry in the result, while ListObjectsV2 has a fetch-owner query param, which indicates whether the object owner data should be fetched.
Adds these changes in the gateway to include `Owner` data in the `ListObjects` and `ListObjectsV2` results. In AWS, objects in the same bucket can be owned by different users; in the gateway, all objects are owned by the bucket owner.
Fixes #1165
The signed chunk encoding with trailers should return api error for:
1. Invalid checksum - `(InvalidRequest) Value for x-amz-checksum-x trailing header is invalid.`
2. Incorrect checksum - `(BadDigest) The x you specified did not match the calculated checksum.`
Where `x` could be crc32, crc32c, sha1 ...
Fixes #1000
`GetObjectAttributes` returned `InvalidRequest` instead of `InvalidArgument` with the description `Invalid attribute name specified.`.
Fixes the logic in `ParseObjectAttributes` to ignore empty values in the `X-Amz-Object-Attributes` header and to return `InvalidArgument` if all the specified object attributes are invalid.