GetBucketLocation is being deprecated by AWS, but is still used
by some clients. We don't need any backend handlers for this since
the region is managed by the frontend. All we need is to test for
bucket existence, so we can use HeadBucket for this.
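A minimal sketch of the idea with the AWS SDK for Go v2: a HeadBucket call is enough to test for bucket existence (`bucketExists` is an illustrative helper, not gateway code):

```go
package example

import (
	"context"
	"errors"

	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/service/s3"
	"github.com/aws/aws-sdk-go-v2/service/s3/types"
)

// bucketExists probes bucket existence with HeadBucket; a NotFound error
// means the bucket does not exist, any other error is propagated.
func bucketExists(ctx context.Context, client *s3.Client, bucket string) (bool, error) {
	_, err := client.HeadBucket(ctx, &s3.HeadBucketInput{Bucket: aws.String(bucket)})
	if err != nil {
		var nf *types.NotFound
		if errors.As(err, &nf) {
			return false, nil
		}
		return false, err
	}
	return true, nil
}
```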
Fixes #1499
Fixes #1488
Adds full wildcard (`*`) and single-character (`?`) support for bucket policy actions, fixes resource detection with wildcards, and includes unit tests for `bucket_policy_actions`, `bucket_policy_effect`, and `bucket_policy_principals`.
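As an illustration of the intended matching semantics (not the gateway's actual implementation), a minimal wildcard matcher where `*` matches any sequence and `?` matches exactly one character; `matchActionPattern` is a hypothetical name:

```go
package policy

import (
	"regexp"
	"strings"
)

// matchActionPattern reports whether an S3 action (e.g. "s3:GetObject")
// matches a policy action pattern where '*' matches any sequence of
// characters and '?' matches exactly one character.
func matchActionPattern(pattern, action string) bool {
	var sb strings.Builder
	sb.WriteString("^")
	for _, r := range pattern {
		switch r {
		case '*':
			sb.WriteString(".*")
		case '?':
			sb.WriteString(".")
		default:
			sb.WriteString(regexp.QuoteMeta(string(r)))
		}
	}
	sb.WriteString("$")
	matched, err := regexp.MatchString(sb.String(), action)
	return err == nil && matched
}
```

For example, `matchActionPattern("s3:Get*", "s3:GetObject")` is true, while `matchActionPattern("s3:Get?bject", "s3:GetObjects")` is false.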
Fixes #1486
* Adds the `Access-Control-Allow-Headers` response header to CORS responses for both **OPTIONS preflight requests** and any request containing an `Origin` header.
* The `Access-Control-Allow-Headers` response header includes only the headers specified in the `Access-Control-Request-Headers` request header, always returned in lowercase.
* Fixes an issue with the allowed-headers comparison in CORS evaluation by making it case-insensitive (see the sketch after this list).
* Adds missing unit tests for the **OPTIONS controller**.
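A rough sketch of the case-insensitive comparison, assuming a rule's allowed headers are plain names or a lone `*` (`allowedRequestHeaders` is a hypothetical helper; S3-style wildcard headers such as `x-amz-*` are omitted for brevity):

```go
package cors

import "strings"

// allowedRequestHeaders compares the headers listed in
// Access-Control-Request-Headers against a rule's AllowedHeaders
// case-insensitively and returns the matching headers in lowercase,
// ready to be echoed back in Access-Control-Allow-Headers.
func allowedRequestHeaders(requested string, allowed []string) ([]string, bool) {
	allowedSet := make(map[string]bool, len(allowed))
	wildcard := false
	for _, h := range allowed {
		if h == "*" {
			wildcard = true
		}
		allowedSet[strings.ToLower(h)] = true
	}

	var result []string
	for _, h := range strings.Split(requested, ",") {
		h = strings.ToLower(strings.TrimSpace(h))
		if h == "" {
			continue
		}
		if !wildcard && !allowedSet[h] {
			return nil, false // a requested header is not allowed by the rule
		}
		result = append(result, h)
	}
	return result, true
}
```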
This changes the marker/continuation token from the object name
to the marker returned by the Azure list objects pager. This is
needed because passing the object name as the token to the next
Azure call causes the Azure API to return 400 Bad Request with
InvalidQueryParameterValue. So we have to use the Azure marker
for compatibility with the Azure API pager.
To do this we have to align the S3 list objects request with the
Azure ListBlobsHierarchyPager. The V2 requests have an optional
start-after, where we have to page through the Azure blobs
to find the correct starting point, but after that we only
return the single paginated result from the Azure pager to
maintain the correct markers all the way through to Azure.
ListObjects (non-V2) assumes that the marker must be an object
name, so for this case we have to page through the Azure listings
on each call to find the correct starting point. This makes the
V2 method far more efficient, while maintaining correctness for
ListObjects.
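A minimal sketch of the V2 single-page fetch, assuming the azblob container client API (`listPage` is a hypothetical helper; error mapping and result conversion are omitted):

```go
package azure

import (
	"context"

	"github.com/Azure/azure-sdk-for-go/sdk/storage/azblob/container"
)

// listPage fetches exactly one page from the Azure hierarchy pager. The
// ListObjectsV2 ContinuationToken is passed straight through as the Azure
// marker, and the response's NextMarker is handed back to the client as
// the next opaque continuation token (Segment holds the blobs/prefixes).
func listPage(ctx context.Context, cl *container.Client, prefix, delimiter, marker string, maxKeys int32) (container.ListBlobsHierarchyResponse, error) {
	opts := &container.ListBlobsHierarchyOptions{
		Prefix:     &prefix,
		MaxResults: &maxKeys,
	}
	if marker != "" {
		opts.Marker = &marker
	}
	pager := cl.NewListBlobsHierarchyPager(delimiter, opts)
	// Take a single page only, so the marker we return stays the one
	// Azure expects on the next call.
	return pager.NextPage(ctx)
}
```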
Also remove the continuation token string checks in the integration
tests, since this is supposed to be an opaque token that the
client should not care about. This will help to maintain the
tests for multiple backend types.
Fixes #1457
Closes #1454
Adds the implementation of the [S3 GetBucketPolicyStatus action](https://docs.aws.amazon.com/AmazonS3/latest/API/API_GetBucketPolicyStatus.html). The implementation lives in the front end: it loads the bucket policy and checks whether it grants public access to all users.
A bucket policy document is considered public only when `Principal` contains `*` (all users), i.e., only when it grants access to all users.
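A simplified sketch of that check; `Statement`, `Principals`, and `IsPublic` are illustrative stand-ins, not the gateway's actual policy types:

```go
package policy

// Statement and BucketPolicy are simplified stand-ins for illustration only.
type Statement struct {
	Effect     string   // "Allow" or "Deny"
	Principals []string // e.g. ["*"] or a list of account/user identifiers
}

type BucketPolicy struct {
	Statements []Statement
}

// IsPublic reports whether the policy grants access to all users:
// it is public only if at least one Allow statement has Principal "*".
func (p BucketPolicy) IsPublic() bool {
	for _, s := range p.Statements {
		if s.Effect != "Allow" {
			continue
		}
		for _, pr := range s.Principals {
			if pr == "*" {
				return true
			}
		}
	}
	return false
}
```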
Closes #1003
**Changes Introduced:**
1. **S3 Bucket CORS Actions**
* Implemented the following S3 bucket CORS APIs:
* `PutBucketCors` – Configure CORS rules for a bucket.
* `GetBucketCors` – Retrieve the current CORS configuration for a bucket.
* `DeleteBucketCors` – Remove CORS configuration from a bucket.
2. **CORS Preflight Handling**
* Added an `OPTIONS` endpoint to handle browser preflight requests.
* The endpoint evaluates incoming requests against bucket CORS rules and returns the appropriate `Access-Control-*` headers.
3. **CORS Middleware**
* Implemented middleware that:
* Checks if a bucket has CORS configured.
* Detects the `Origin` header in the request.
* Adds the necessary `Access-Control-*` headers to the response when the request matches the bucket CORS configuration.
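A condensed sketch of the middleware logic using `net/http` for illustration (the gateway's router, config lookup, and full rule matching differ; `loadConfig`, `matchRule`, and `bucketFromPath` are placeholders):

```go
package cors

import (
	"net/http"
	"strings"
)

// Rule and Config are simplified stand-ins for a bucket CORS configuration.
type Rule struct {
	AllowedOrigins []string
	AllowedMethods []string
}
type Config struct{ Rules []Rule }

// bucketFromPath extracts the bucket name from a path-style request URL.
func bucketFromPath(p string) string {
	return strings.SplitN(strings.TrimPrefix(p, "/"), "/", 2)[0]
}

// matchRule returns the first rule allowing the given origin and method.
func matchRule(cfg *Config, origin, method string) (*Rule, bool) {
	for i, r := range cfg.Rules {
		originOK, methodOK := false, false
		for _, o := range r.AllowedOrigins {
			if o == "*" || o == origin {
				originOK = true
			}
		}
		for _, m := range r.AllowedMethods {
			if strings.EqualFold(m, method) {
				methodOK = true
			}
		}
		if originOK && methodOK {
			return &cfg.Rules[i], true
		}
	}
	return nil, false
}

// Middleware adds Access-Control-* headers when the bucket has CORS
// configured and the request's Origin matches a rule. loadConfig is a
// placeholder for however the gateway looks up a bucket's CORS config.
func Middleware(loadConfig func(bucket string) (*Config, bool), next http.Handler) http.Handler {
	return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		if origin := r.Header.Get("Origin"); origin != "" {
			if cfg, ok := loadConfig(bucketFromPath(r.URL.Path)); ok {
				if rule, matched := matchRule(cfg, origin, r.Method); matched {
					w.Header().Set("Access-Control-Allow-Origin", origin)
					w.Header().Set("Access-Control-Allow-Methods", strings.Join(rule.AllowedMethods, ", "))
				}
			}
		}
		next.ServeHTTP(w, r)
	})
}
```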
This adds a bunch of test cases for non-zero-length objects,
zero-length objects, and directory objects to match verified AWS
responses for the various range bytes cases.
This also fixes the POSIX head/get range responses for these
test cases.
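For context, the kind of request these cases exercise, sketched with the AWS SDK for Go v2 (`getRange` is an illustrative helper):

```go
package example

import (
	"context"

	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/service/s3"
)

// getRange issues a ranged GetObject, e.g. rng = "bytes=0-99".
// Zero-length and directory objects are the edge cases the new tests cover.
func getRange(ctx context.Context, client *s3.Client, bucket, key, rng string) (*s3.GetObjectOutput, error) {
	return client.GetObject(ctx, &s3.GetObjectInput{
		Bucket: aws.String(bucket),
		Key:    aws.String(key),
		Range:  aws.String(rng),
	})
}
```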
The SDK update has caused Azurite to fail with:
The API version 2025-07-05 is not supported by Azurite
The workaround for now, according to
https://github.com/Azure/Azurite/issues/2562,
is to tell Azurite to skip this check.
Fixes #1342
This PR includes two main changes:
1. It fixes the case where `x-amz-checksum-x` (precalculated checksum headers) are not provided for `UploadPart`, and the checksum type for the multipart upload is `FULL_OBJECT`. In this scenario, the server no longer returns an error.
2. When no `x-amz-checksum-x` is provided for `UploadPart`, and `x-amz-sdk-checksum-algorithm` is also missing, the gateway now calculates the part checksum based on the multipart upload's checksum algorithm and stores it accordingly.
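A sketch of that fallback for two of the algorithms (`partChecksum` is an illustrative helper; the gateway supports more algorithms than shown):

```go
package checksum

import (
	"crypto/sha256"
	"encoding/base64"
	"hash/crc32"
)

// partChecksum computes the part checksum with the multipart upload's
// algorithm when the client sent neither x-amz-checksum-* nor
// x-amz-sdk-checksum-algorithm. Only two algorithms shown for brevity;
// values are base64-encoded as in the S3 checksum headers.
func partChecksum(algorithm string, part []byte) (string, bool) {
	switch algorithm {
	case "SHA256":
		sum := sha256.Sum256(part)
		return base64.StdEncoding.EncodeToString(sum[:]), true
	case "CRC32":
		sum := crc32.ChecksumIEEE(part)
		b := []byte{byte(sum >> 24), byte(sum >> 16), byte(sum >> 8), byte(sum)}
		return base64.StdEncoding.EncodeToString(b), true
	default:
		return "", false
	}
}
```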
Additionally, the PR adds integration tests for:
* The two cases above
* The case where only `x-amz-sdk-checksum-algorithm` is provided
Fixes #1339
`x-amz-checksum-type` and `x-amz-checksum-algorithm` request headers should be case-insensitive in `CreateMultipartUpload`.
The changes include normalizing the header values to upper case before validating them and passing them to the back-end. The `x-amz-checksum-type` response header, which was missing before, was also added to the `CreateMultipartUpload` response.
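A sketch of the normalization step (illustrative helper; the accepted value sets mirror the AWS-documented algorithms and types):

```go
package multipart

import "strings"

var validAlgorithms = map[string]bool{
	"CRC32": true, "CRC32C": true, "CRC64NVME": true, "SHA1": true, "SHA256": true,
}
var validTypes = map[string]bool{"COMPOSITE": true, "FULL_OBJECT": true}

// normalizeChecksumHeaders upper-cases the x-amz-checksum-algorithm and
// x-amz-checksum-type values before validating them, so that mixed-case
// inputs such as "sha256" or "Full_Object" are accepted.
func normalizeChecksumHeaders(algorithm, checksumType string) (string, string, bool) {
	algorithm = strings.ToUpper(strings.TrimSpace(algorithm))
	checksumType = strings.ToUpper(strings.TrimSpace(checksumType))
	ok := (algorithm == "" || validAlgorithms[algorithm]) &&
		(checksumType == "" || validTypes[checksumType])
	return algorithm, checksumType, ok
}
```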
Fixes #1352
Adds a validation step in `SigV4` authentication for `x-amz-content-sha256`, checking that it is either a valid SHA-256 hash or a special payload type (UNSIGNED-PAYLOAD, STREAMING-UNSIGNED-PAYLOAD-TRAILER, ...).
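A sketch of such a check (`validContentSHA256` is a hypothetical helper; the special-value list shown here may not be exhaustive):

```go
package auth

import "regexp"

// sha256HexRe matches a 64-character hex digest (case-insensitive here for
// illustration).
var sha256HexRe = regexp.MustCompile(`^[0-9a-fA-F]{64}$`)

// specialPayloads lists some of the non-hash values x-amz-content-sha256
// may carry.
var specialPayloads = map[string]bool{
	"UNSIGNED-PAYLOAD":                   true,
	"STREAMING-UNSIGNED-PAYLOAD-TRAILER": true,
	"STREAMING-AWS4-HMAC-SHA256-PAYLOAD": true,
}

// validContentSHA256 reports whether the header is a valid SHA-256 hex
// digest or one of the special payload markers.
func validContentSHA256(v string) bool {
	return specialPayloads[v] || sha256HexRe.MatchString(v)
}
```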
Fixes #1385
When accessing a specific object version, the user must have the `s3:GetObjectVersion` permission in the bucket policy. The `s3:GetObject` permission alone is not sufficient for a regular user to query object versions using `HeadObject`.
This PR fixes the issue and adds integration tests for both `HeadObject` and `GetObject`. It also includes cleanup in the integration tests by refactoring the creation of user S3 clients, and moves some test user data to the package level to avoid repetition across tests.
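A minimal sketch of the permission selection (`requiredAction` is an illustrative helper, not the gateway's access-control code):

```go
package auth

// requiredAction picks the policy action to check for a HeadObject/GetObject
// request: accessing a specific version requires s3:GetObjectVersion, while
// a plain read only needs s3:GetObject.
func requiredAction(versionID string) string {
	if versionID != "" {
		return "s3:GetObjectVersion"
	}
	return "s3:GetObject"
}
```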
Fixes #1388
Fixes #1389
Fixes #1390
Fixes #1401
Adds the `x-amz-copy-source` header validation for `CopyObject` and `UploadPartCopy` in front-end.
The error:
```
ErrInvalidCopySource: {
Code: "InvalidArgument",
Description: "Copy Source must mention the source bucket and key: sourcebucket/sourcekey.",
HTTPStatusCode: http.StatusBadRequest,
},
```
is now deprecated.
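For illustration, a minimal sketch of the kind of check the new validation performs (`validCopySource` is a hypothetical helper; the gateway's actual rules may differ):

```go
package validation

import (
	"net/url"
	"strings"
)

// validCopySource checks that x-amz-copy-source names both a source bucket
// and a source key, i.e. "bucket/key" with an optional "?versionId=...".
func validCopySource(raw string) bool {
	src, err := url.QueryUnescape(raw)
	if err != nil {
		return false
	}
	src = strings.TrimPrefix(src, "/")
	if i := strings.Index(src, "?versionId="); i >= 0 {
		src = src[:i]
	}
	bucket, key, found := strings.Cut(src, "/")
	return found && bucket != "" && key != ""
}
```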
The conditional read/write headers validation in `CopyObject` should come with #821 and #822.
Fixes #1398
The `x-amz-mp-object-size` request header can have two erroneous states: an invalid value or a negative integer. AWS returns different error descriptions for each case. This PR fixes the error description for the invalid header value case.
The invalid case can't be integration-tested, as the SDK expects an `int64` header value.
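A sketch of how the two erroneous states can be distinguished (`parseMPObjectSize` is an illustrative helper):

```go
package validation

import "strconv"

// parseMPObjectSize distinguishes the two error cases for
// x-amz-mp-object-size: a value that is not an integer at all, and a
// negative integer. AWS returns different error descriptions for each, so
// the caller can map the two booleans to the matching API errors.
func parseMPObjectSize(v string) (size int64, invalid, negative bool) {
	size, err := strconv.ParseInt(v, 10, 64)
	if err != nil {
		return 0, true, false
	}
	if size < 0 {
		return size, false, true
	}
	return size, false, false
}
```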
Fixes #1036
Fixes the issue where calling a non-existent root endpoint (`POST /`) made the gateway return `NoSuchBucket`. Now it returns the correct `MethodNotAllowed` error.
Implements a middleware that validates incoming bucket and object names before authentication. This helps prevent malicious attacks that attempt to access restricted or unreachable data in `POSIX`.
Adds test cases to cover such attack scenarios, including false negatives where encoded paths are used to try accessing resources outside the intended bucket.
Removes bucket validation from all other layers, including `controllers` and both the `POSIX` and `ScoutFS` backends, by moving the logic entirely into the middleware layer.
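A condensed sketch of such a middleware using `net/http` for illustration (the gateway's actual router, naming rules, and error responses differ; `ValidateNames` and the regexp are assumptions):

```go
package middleware

import (
	"net/http"
	"regexp"
	"strings"
)

// bucketNameRe approximates the S3 bucket naming rules: 3-63 characters,
// lowercase letters, digits, dots, and hyphens, starting and ending with a
// letter or digit.
var bucketNameRe = regexp.MustCompile(`^[a-z0-9][a-z0-9.-]{1,61}[a-z0-9]$`)

// ValidateNames rejects requests whose bucket or object name could escape
// the bucket directory on a POSIX backend (e.g. ".." path segments) before
// authentication and before any backend call.
func ValidateNames(next http.Handler) http.Handler {
	return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		path := strings.TrimPrefix(r.URL.Path, "/")
		if path == "" { // service-level request, nothing to validate
			next.ServeHTTP(w, r)
			return
		}
		bucket, object, _ := strings.Cut(path, "/")
		if !bucketNameRe.MatchString(bucket) {
			http.Error(w, "InvalidBucketName", http.StatusBadRequest)
			return
		}
		for _, seg := range strings.Split(object, "/") {
			if seg == ".." {
				http.Error(w, "InvalidObjectName", http.StatusBadRequest)
				return
			}
		}
		next.ServeHTTP(w, r)
	})
}
```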