Closes #1003
**Changes Introduced:**
1. **S3 Bucket CORS Actions**
* Implemented the following S3 bucket CORS APIs:
* `PutBucketCors` – Configure CORS rules for a bucket.
* `GetBucketCors` – Retrieve the current CORS configuration for a bucket.
* `DeleteBucketCors` – Remove CORS configuration from a bucket.
2. **CORS Preflight Handling**
* Added an `OPTIONS` endpoint to handle browser preflight requests.
* The endpoint evaluates incoming requests against bucket CORS rules and returns the appropriate `Access-Control-*` headers.
3. **CORS Middleware**
* Implemented middleware that:
* Checks if a bucket has CORS configured.
* Detects the `Origin` header in the request.
* Adds the necessary `Access-Control-*` headers to the response when the request matches the bucket CORS configuration.
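For illustration, here is a minimal sketch of the middleware flow just described. The rule type, the `lookup` function, and the matching logic are simplified stand-ins for the actual implementation:
```go
package middleware

import (
	"strings"

	"github.com/gofiber/fiber/v2"
)

// CORSRule is a simplified stand-in for a parsed bucket CORS rule.
type CORSRule struct {
	AllowedOrigins []string
	AllowedMethods []string
}

// matchCORSRule returns the first rule allowing the origin/method pair, or nil.
func matchCORSRule(rules []CORSRule, origin, method string) *CORSRule {
	for i, r := range rules {
		for _, o := range r.AllowedOrigins {
			if o != "*" && o != origin {
				continue
			}
			for _, m := range r.AllowedMethods {
				if m == method {
					return &rules[i]
				}
			}
		}
	}
	return nil
}

// CORSMiddleware adds the Access-Control-* headers when the request Origin
// matches the bucket CORS configuration. lookup stands in for the real
// per-bucket configuration fetch; the route is assumed to expose :bucket.
func CORSMiddleware(lookup func(bucket string) []CORSRule) fiber.Handler {
	return func(c *fiber.Ctx) error {
		origin := c.Get("Origin")
		if origin == "" {
			return c.Next() // not a cross-origin request
		}
		rules := lookup(c.Params("bucket"))
		if len(rules) == 0 {
			return c.Next() // bucket has no CORS configuration
		}
		if r := matchCORSRule(rules, origin, c.Method()); r != nil {
			c.Set("Access-Control-Allow-Origin", origin)
			c.Set("Access-Control-Allow-Methods", strings.Join(r.AllowedMethods, ", "))
		}
		return c.Next()
	}
}
```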
Fixes #1418
If neither the `Transfer-Encoding` nor the `Content-Length` headers are provided in chunked uploads, **fasthttp** assumes there is no request body and sets the request body reader to `nil`. This leads to a panic in the auth reader when it attempts to read the body.
The fix ensures that if the request body reader is `nil`, it is overridden with an `empty reader` to prevent panics.
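A minimal sketch of the guard, assuming the body is consumed as a plain `io.Reader`:
```go
package auth

import (
	"bytes"
	"io"
)

// ensureBody substitutes an empty reader for a nil one, so downstream
// consumers such as the auth reader never dereference a nil body.
func ensureBody(r io.Reader) io.Reader {
	if r == nil {
		return bytes.NewReader(nil)
	}
	return r
}
```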
Fixes #1345
The previous implementation incorrectly parsed the `x-amz-sdk-checksum-algorithm` header for the `CompleteMultipartUpload` operation, even though this header is not expected and should be ignored. It also mistakenly treated the `x-amz-checksum-algorithm` header as an invalid value for `x-amz-checksum-x`.
The updated implementation only parses the `x-amz-sdk-checksum-algorithm` header for `PutObject` and `UploadPart` operations. Additionally, `x-amz-checksum-algorithm` and `x-amz-checksum-type` headers are now correctly ignored when parsing the precalculated checksum headers (`x-amz-checksum-x`).
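For illustration, a hypothetical sketch of the operation gating (not the actual implementation):
```go
package controllers

import "github.com/gofiber/fiber/v2"

// sdkChecksumAlgorithm reads x-amz-sdk-checksum-algorithm only for the
// operations that expect it; for everything else, including
// CompleteMultipartUpload, the header is ignored.
func sdkChecksumAlgorithm(ctx *fiber.Ctx, action string) string {
	switch action {
	case "PutObject", "UploadPart":
		return ctx.Get("x-amz-sdk-checksum-algorithm")
	default:
		return ""
	}
}
```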
Fixes #1339
The `x-amz-checksum-type` and `x-amz-checksum-algorithm` request headers should be case-insensitive in `CreateMultipartUpload`.
The change converts the header values to upper case before validating them and passing them to the backend. The `x-amz-checksum-type` response header, which was previously missing, is now also returned by `CreateMultipartUpload`.
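A simplified sketch of the normalization, assuming a plain allow-list check (the set of values shown is illustrative):
```go
package controllers

import "strings"

var checksumAlgorithms = map[string]bool{
	"CRC32": true, "CRC32C": true, "SHA1": true, "SHA256": true, "CRC64NVME": true,
}

// normalizeChecksumAlgorithm upper-cases the header value before
// validation, so "crc32", "Crc32", and "CRC32" are all accepted and the
// canonical form is what reaches the backend. x-amz-checksum-type is
// handled the same way.
func normalizeChecksumAlgorithm(value string) (string, bool) {
	v := strings.ToUpper(value)
	return v, checksumAlgorithms[v]
}
```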
Fixes #1352
Adds a validation step to `SigV4` authentication for the `x-amz-content-sha256` header, checking that it is either a valid SHA-256 hash or a special payload type (UNSIGNED-PAYLOAD, STREAMING-UNSIGNED-PAYLOAD-TRAILER, ...).
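Roughly, the check looks like this (the special payload list here is an illustrative subset, not exhaustive):
```go
package auth

import "regexp"

// sha256Hex matches a 64-character hex-encoded SHA-256 digest.
var sha256Hex = regexp.MustCompile(`^[0-9a-f]{64}$`)

// specialPayloads lists non-hash values SigV4 accepts for
// x-amz-content-sha256 (illustrative subset).
var specialPayloads = map[string]bool{
	"UNSIGNED-PAYLOAD":                   true,
	"STREAMING-UNSIGNED-PAYLOAD-TRAILER": true,
	"STREAMING-AWS4-HMAC-SHA256-PAYLOAD": true,
}

// validContentSha256 reports whether the header value is a valid SHA-256
// hex digest or one of the special payload types.
func validContentSha256(v string) bool {
	return sha256Hex.MatchString(v) || specialPayloads[v]
}
```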
Fixes #1385
When accessing a specific object version, the user must have the `s3:GetObjectVersion` permission in the bucket policy. The `s3:GetObject` permission alone is not sufficient for a regular user to query object versions using `HeadObject`.
This PR fixes the issue and adds integration tests for both `HeadObject` and `GetObject`. It also includes cleanup in the integration tests by refactoring the creation of user S3 clients, and moves some test user data to the package level to avoid repetition across tests.
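A hypothetical sketch of the permission selection (not the actual implementation):
```go
package auth

// requiredReadAction picks the policy action to enforce for GetObject and
// HeadObject: a request for a specific version requires
// s3:GetObjectVersion, while s3:GetObject alone only covers reads of the
// current version.
func requiredReadAction(versionID string) string {
	if versionID != "" {
		return "s3:GetObjectVersion"
	}
	return "s3:GetObject"
}
```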
Fixes #1388 Fixes #1389 Fixes #1390 Fixes #1401
Adds `x-amz-copy-source` header validation for `CopyObject` and `UploadPartCopy` in the front-end.
The error:
```
ErrInvalidCopySource: {
Code: "InvalidArgument",
Description: "Copy Source must mention the source bucket and key: sourcebucket/sourcekey.",
HTTPStatusCode: http.StatusBadRequest,
},
```
is now deprecated.
The conditional read/write header validation in `CopyObject` should come with #821 and #822.
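For illustration, a hypothetical sketch of the shape the validation enforces (not the actual implementation):
```go
package controllers

import (
	"net/url"
	"strings"
)

// parseCopySource validates that x-amz-copy-source names both a source
// bucket and a source key ("sourcebucket/sourcekey"), optionally URL
// encoded and optionally carrying a versionId query.
func parseCopySource(header string) (bucket, key, versionID string, ok bool) {
	src, query, _ := strings.Cut(header, "?")
	if decoded, err := url.QueryUnescape(src); err == nil {
		src = decoded
	}
	src = strings.TrimPrefix(src, "/")
	bucket, key, found := strings.Cut(src, "/")
	if !found || bucket == "" || key == "" {
		return "", "", "", false // missing bucket or key
	}
	if vals, err := url.ParseQuery(query); err == nil {
		versionID = vals.Get("versionId")
	}
	return bucket, key, versionID, true
}
```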
Fixes #1398
The `x-amz-mp-object-size` request header can have two erroneous states: an invalid value or a negative integer. AWS returns different error descriptions for each case. This PR fixes the error description for the invalid header value case.
The invalid case can't be integration tested, as the SDK expects an `int64` header value.
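A minimal sketch of the distinction (the error messages are illustrative):
```go
package controllers

import (
	"fmt"
	"strconv"
)

// parseMpObjectSize distinguishes the two erroneous states of
// x-amz-mp-object-size: a value that is not an integer at all, and an
// integer that is negative. Each state maps to a different error
// description, matching AWS behavior.
func parseMpObjectSize(v string) (int64, error) {
	size, err := strconv.ParseInt(v, 10, 64)
	if err != nil {
		return 0, fmt.Errorf("invalid x-amz-mp-object-size value: %q", v)
	}
	if size < 0 {
		return 0, fmt.Errorf("x-amz-mp-object-size must be non-negative, got %d", size)
	}
	return size, nil
}
```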
Adjusts the admin APIs to the new advanced routing changes.
Enables debug logging for the separate admin server (when a separate server is run for the admin APIs).
Adds a quiet mode for the separate admin server.
Fixes #1036
Fixes the issue where calling a non-existing root endpoint (`POST /`) made the gateway return `NoSuchBucket`. Now it returns the correct `MethodNotAllowed` error.
Fixes #896 Fixes #899
Registers a catch-all route matcher handler at the end of the router to handle the cases where the API call doesn't match any S3 action. The catch-all matcher returns `MethodNotAllowed` for these requests, as sketched below.
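A simplified sketch of the trailing matcher (the real handler renders the proper S3 XML error response):
```go
package router

import "github.com/gofiber/fiber/v2"

// registerCatchAll installs a matcher after every S3 route so that
// anything falling through is answered with MethodNotAllowed instead of
// a misleading NoSuchBucket.
func registerCatchAll(app *fiber.App) {
	app.All("/*", func(c *fiber.Ctx) error {
		return c.Status(fiber.StatusMethodNotAllowed).
			SendString("MethodNotAllowed")
	})
}
```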
Closes #908
This PR introduces a new routing system integrated with Fiber. It matches each S3 action to a route using middleware utility functions (e.g., URL query match, request header match). Each S3 action is mapped to a dedicated route in the Fiber router. This functionality cannot be achieved using standard Fiber methods, as Fiber lacks the necessary tooling for such dynamic routing.
Additionally, this PR implements a generic response handler to manage responses from the backend. This abstraction helps isolate the controller from the data layer and Fiber-specific response logic.
With this approach, controller unit testing becomes simpler and more effective.
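For illustration, a sketch of one such matcher utility, using a hypothetical `MatchQuery` helper (S3 disambiguates many actions by URL query, e.g. `GET /bucket?cors` vs. `GET /bucket?policy`):
```go
package router

import "github.com/gofiber/fiber/v2"

// MatchQuery runs handler only when the request carries the given URL
// query parameter; otherwise it falls through so the next registered
// route can try to match.
func MatchQuery(param string, handler fiber.Handler) fiber.Handler {
	return func(c *fiber.Ctx) error {
		if !c.Request().URI().QueryArgs().Has(param) {
			return c.Next()
		}
		return handler(c)
	}
}
```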
This implementation introduces **public buckets**, which are accessible without signature-based authentication.
Public buckets support a set of actions on buckets and objects, returning various errors based on the S3 action type and permissions (ACL or policy). The implementation aligns with the table provided in [this gist](https://gist.github.com/niksis02/5919d52d6112537a31c14d9abfa89ac0).
There are two ways to grant public access to a bucket:
* **Bucket ACLs**
* **Bucket Policies**
Only `Get` and `List` operations are permitted on public buckets. All **write operations** require authentication, regardless of whether public access is granted through an ACL or a policy.
The implementation includes an `AuthorizePublicBucketAccess` middleware, which checks if public access has been granted to the bucket. If so, authentication middlewares are skipped. For unauthenticated requests, appropriate errors are returned based on the specific S3 action.
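A simplified sketch of the middleware (the public-access check and the locals flag are stand-ins for the real wiring):
```go
package middleware

import "github.com/gofiber/fiber/v2"

// AuthorizePublicBucketAccess marks unsigned requests against public
// buckets so the signature-verification middlewares can be skipped.
// isPublic stands in for the real ACL/policy lookup; write operations
// are still rejected later for unauthenticated requests.
func AuthorizePublicBucketAccess(isPublic func(bucket string) bool) fiber.Handler {
	return func(c *fiber.Ctx) error {
		unsigned := c.Get("Authorization") == ""
		if unsigned && isPublic(c.Params("bucket")) {
			c.Locals("publicAccess", true) // later middlewares check this flag
		}
		return c.Next()
	}
}
```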
---
The following example bucket policies grant public access:
**1. Bucket-Level Operations:**
```json
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Principal": "*",
"Action": "s3:*",
"Resource": "arn:aws:s3:::test"
}
]
}
```
**2. Object-Level Operations:**
```json
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Principal": "*",
"Action": "s3:*",
"Resource": "arn:aws:s3:::test/*"
}
]
}
```
**3. Both Bucket and Object-Level Operations:**
```json
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Principal": "*",
"Action": "s3:*",
"Resource": "arn:aws:s3:::test"
},
{
"Effect": "Allow",
"Principal": "*",
"Action": "s3:*",
"Resource": "arn:aws:s3:::test/*"
}
]
}
```
---
**Granting public access via a bucket ACL:**
```sh
aws s3api create-bucket --bucket test --object-ownership BucketOwnerPreferred
aws s3api put-bucket-acl --bucket test --acl public-read
```
The debuglogger logs only get printed if debug is enabled, but we always want the internal server error logs to be logged by the service, since these are usually actionable errors that need to be addressed with the backend storage system. This changes internal server error logs to always be sent to stderr.
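A minimal sketch of the two logging paths (names and wiring are illustrative):
```go
package debuglogger

import (
	"fmt"
	"os"
)

var debugEnabled bool // set from configuration

// Debugf prints only when debug logging is enabled.
func Debugf(format string, args ...any) {
	if debugEnabled {
		fmt.Fprintf(os.Stderr, format+"\n", args...)
	}
}

// InternalError always writes to stderr, regardless of the debug flag,
// because internal server errors usually point at an actionable problem
// in the backend storage system.
func InternalError(err error) {
	fmt.Fprintf(os.Stderr, "internal server error: %v\n", err)
}
```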
Adds a helper util, auth.UpdateBucketACLOwner(), that sets a new default ACL based on the new owner and removes the old bucket policy. ChangeBucketOwner() remains in the backend.Backend interface in case there is ever a backend that needs to manage ownership in some way other than bucket ACLs. The arguments are changing to clarify the updated owner, which will break any plugins implementing the old interface; they should use the new auth.UpdateBucketACLOwner() or implement the corresponding change specific to the backend.
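A hypothetical sketch of how a plugin might adapt; the signatures of ChangeBucketOwner() and auth.UpdateBucketACLOwner() below, and the import path, are assumptions, so consult the updated interface for the real ones:
```go
package myplugin

import (
	"context"

	"github.com/versity/versitygw/auth" // assumed import path
)

// MyBackend is a hypothetical plugin backend; the rest of the
// backend.Backend implementation is elided.
type MyBackend struct{ /* ... */ }

// ChangeBucketOwner delegates to the shared helper, which installs a
// fresh default ACL owned by newOwner and removes the old bucket policy.
// The signature shown here is illustrative.
func (b *MyBackend) ChangeBucketOwner(ctx context.Context, bucket, newOwner string) error {
	return auth.UpdateBucketACLOwner(ctx, bucket, newOwner)
}
```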
An update to fasthttp has deprecated the VisitAll() method in favor of an iterator function, All(), that can be used to range over all headers. This should fix the staticcheck warnings for calling the deprecated function.
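A before/after sketch of the migration:
```go
package main

import (
	"fmt"

	"github.com/valyala/fasthttp"
)

func dumpHeaders(h *fasthttp.ResponseHeader) {
	// Before (deprecated, flagged by staticcheck):
	// h.VisitAll(func(key, value []byte) { fmt.Printf("%s: %s\n", key, value) })

	// After: All() returns an iterator usable with range-over-func.
	for key, value := range h.All() {
		fmt.Printf("%s: %s\n", key, value)
	}
}
```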
The CRC32, CRC32C, and CRC64NVME data integrity checksums support calculating the composite full object values for multi-part uploads using the checksum and length of the individual parts.
Previously, we were reading all of the part data to recalculate the full
object checksum values during the complete multipart upload call. This
disabled the optimized copy_file_range() for certain filesystems such as
XFS because the part data was being read. If the data is not read, and
the file handle is passed directly to io.Copy(), then the filesystem is
allowed to optimize the copying of the data from the source to destination
files.
This now allows both the optimized copy_file_range() path and the data integrity features to remain enabled for the supported composite checksum types.
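A minimal sketch of the fast path, assuming a posix-style backend (names are illustrative): when the open part file is handed directly to io.Copy, Go's *os.File fast paths let the kernel use copy_file_range(2) on filesystems such as XFS, while the full-object checksum is combined from the recorded per-part checksums and lengths instead of re-reading the data.
```go
package posix

import (
	"io"
	"os"
)

// copyPart appends one part to the destination object without reading
// the data into user space. Because *os.File implements the
// ReaderFrom/WriterTo fast paths, io.Copy can use copy_file_range(2)
// where the filesystem supports it. The full-object CRC32/CRC32C/
// CRC64NVME value is combined separately from each part's checksum and
// length, so no wrapping checksum reader defeats the optimization.
func copyPart(dst *os.File, partPath string) (int64, error) {
	src, err := os.Open(partPath)
	if err != nil {
		return 0, err
	}
	defer src.Close()
	return io.Copy(dst, src) // kernel-side copy when supported
}
```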