The error check for the SDK call in `GetObjectAttributes` in the S3 proxy backend was missing, which caused the gateway to panic in all cases where the SDK method returned an error. The error check has now been added so that the method returns an error when the SDK call fails.
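A minimal sketch of the fixed pattern, assuming an aws-sdk-go-v2 `*s3.Client`; the wrapper function here is hypothetical:

```go
import (
	"context"

	"github.com/aws/aws-sdk-go-v2/service/s3"
)

// getObjectAttributes is a hypothetical wrapper showing the fixed pattern:
// the SDK error is checked before the output is used.
func getObjectAttributes(ctx context.Context, client *s3.Client,
	input *s3.GetObjectAttributesInput) (*s3.GetObjectAttributesOutput, error) {
	out, err := client.GetObjectAttributes(ctx, input)
	if err != nil {
		// Previously this check was missing and the nil output was
		// used anyway, panicking the gateway.
		return nil, err
	}
	return out, nil
}
```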
Fixes #1600 Fixes #1603 Fixes #1607 Fixes #1626 Fixes #1632 Fixes #1652 Fixes #1653 Fixes #1656 Fixes #1657 Fixes #1659
This PR focuses mainly on **request payload trailer parsing** and **checksum calculation** for unsigned streaming payloads. For streaming uploads, there are essentially two ways to specify checksums:
1. via `x-amz-checksum-*` headers, or
2. via `x-amz-trailer`.
If neither is specified, the checksum defaults to **crc64nvme**.
Previously, the implementation calculated the checksum only from `x-amz-checksum-*` headers. Now, `x-amz-trailer` is also treated as a checksum-related header and indicates the checksum algorithm for streaming requests. If `x-amz-trailer` is present, the payload must include a trailing checksum; otherwise, an error is returned.
`x-amz-trailer` and any `x-amz-checksum-*` header **cannot** be used together — doing so results in an error.
If `x-amz-sdk-checksum-algorithm` is specified, then either `x-amz-trailer` or one of the `x-amz-checksum-*` headers must also be present, and the algorithms must match. If they don’t, an error is returned.
The old implementation used to return an internal error when no `x-amz-trailer` was received in streaming requests or when the payload didn’t contain a trailer. This is now fixed.
Checksum calculation used to happen twice in the gateway (once in the chunk reader and once in the backend). A new `ChecksumReader` is introduced to prevent double computation, and the trailing checksum is now read by the backend from the chunk reader. The logic for stacking `io.Reader`s in the Fiber context is preserved, but extended: once a `ChecksumReader` is stacked, all following `io.Reader`s are wrapped with `MockChecksumReader`, which simply delegates to the underlying checksum reader. In the backend, a simple type assertion on `io.Reader` provides the necessary checksum metadata (algorithm, value, etc.).
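A minimal sketch of the type-assertion pattern described above; the interface name and its methods are illustrative, not the gateway's actual API:

```go
import "io"

// checksumReader is an illustrative stand-in for the new ChecksumReader:
// an io.Reader that also exposes the checksum it computed while reading.
type checksumReader interface {
	io.Reader
	ChecksumAlgorithm() string // e.g. "CRC64NVME"
	Checksum() (string, error) // base64 value, valid once the body is consumed
}

// putObject shows how a backend can recover checksum metadata with a simple
// type assertion after consuming the body.
func putObject(body io.Reader) (algo, sum string, err error) {
	// ... write body to the backing store ...
	if cr, ok := body.(checksumReader); ok {
		algo = cr.ChecksumAlgorithm()
		sum, err = cr.Checksum()
	}
	return algo, sum, err
}
```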
Fixes #1649
`GetBucketVersioning` used to cause a panic in the S3 proxy backend because of improper error handling. The error returned from the SDK method is now explicitly checked before the response is returned.
The scoutfs filesystem allows setting project IDs on files and
directories for project-level accounting and tracking. This adds
the option to set the project ID for the following:
- create bucket
- put object
- put part
- complete multipart upload
The project ID will only be set if all of the following are true:
- set project id option enabled
- filesystem format version supports projects (version >1)
- account project id > 0
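A minimal sketch of the gating check, with hypothetical option and field names:

```go
// shouldSetProjectID mirrors the three conditions above; the option,
// version, and account fields are hypothetical names.
func shouldSetProjectID(optEnabled bool, fsFormatVersion int, acctProjectID uint64) bool {
	return optEnabled && fsFormatVersion > 1 && acctProjectID > 0
}
```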
Fixes #1616
Some object-level actions in the gateway that work with object versions used to return `InvalidVersionId` when the specified object version did not exist. The logic has now been fixed, and they correctly return `NoSuchVersion`. These actions include: `HeadObject`, `GetObject`, `PutObjectLegalHold`, `GetObjectLegalHold`, `PutObjectRetention`, and `GetObjectRetention`.
Closes #1346
`GetObject` and `HeadObject` return the `x-amz-tagging-count` header in the response, which specifies the number of tags associated with the object. This was already supported for `GetObject`, but missing for `HeadObject`. This implementation adds support for `HeadObject` in `azure` and `posix` and updates the integration tests to cover this functionality for `GetObject`.
Closes #1343
Object version tagging support was previously missing in the gateway; this PR adds it. If versioning is not enabled at the gateway level and a user attempts to put, get, or delete object version tags, the gateway returns an `InvalidArgument` (Invalid versionId) error.
Closes #1595
This implementation diverges from AWS S3 behavior. The `CreateBucket` request body is no longer ignored. Based on the S3 request body schema, the gateway parses only the `LocationConstraint` and `Tags` fields. If the `LocationConstraint` does not match the gateway’s region, it returns an `InvalidLocationConstraint` error.
In AWS S3, tagging during bucket creation is supported only for directory buckets. The gateway extends this support to general-purpose buckets.
If the request body is malformed, the gateway returns a `MalformedXML` error.
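A minimal sketch of the body handling described above; the struct shape is an assumption based on the S3 request schema, and the error values are hypothetical stand-ins for the gateway's API errors:

```go
import (
	"encoding/xml"
	"errors"
)

// Hypothetical stand-ins for the gateway's API error values.
var (
	errMalformedXML              = errors.New("MalformedXML")
	errInvalidLocationConstraint = errors.New("InvalidLocationConstraint")
)

type tagEntry struct {
	Key   string `xml:"Key"`
	Value string `xml:"Value"`
}

// createBucketConfiguration carries only the two fields the gateway reads.
type createBucketConfiguration struct {
	XMLName            xml.Name   `xml:"CreateBucketConfiguration"`
	LocationConstraint string     `xml:"LocationConstraint"`
	Tags               []tagEntry `xml:"Tags>Tag"`
}

func parseCreateBucketBody(body []byte, region string) (*createBucketConfiguration, error) {
	var cfg createBucketConfiguration
	if len(body) == 0 {
		return &cfg, nil // an empty body is still valid
	}
	if err := xml.Unmarshal(body, &cfg); err != nil {
		return nil, errMalformedXML
	}
	if cfg.LocationConstraint != "" && cfg.LocationConstraint != region {
		return nil, errInvalidLocationConstraint
	}
	return &cfg, nil
}
```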
Fixes #1579
S3 enforces a specific rule for validating bucket and object tag key/value names. This PR integrates the regexp pattern used by S3 for tag validation.
Official S3 documentation for tag validation rules: [AWS S3 Tag](https://docs.aws.amazon.com/AmazonS3/latest/API/API_control_Tag.html)
There are two types of tagging inputs for buckets and objects:
1. **On existing buckets/objects** — used in the `PutObjectTagging` and `PutBucketTagging` actions, where tags are provided in the request body.
2. **On object creation** — used in the `PutObject`, `CreateMultipartUpload`, and `CopyObject` actions, where tags are provided in the request headers and must be URL-encoded.
This implementation ensures correct validation for both types of tag inputs.
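A minimal sketch of the validation, using the pattern and length limits (1 to 128 characters for keys, up to 256 for values) from the linked documentation:

```go
import (
	"regexp"
	"unicode/utf8"
)

// tagPattern is the pattern from the AWS S3 Tag documentation: letters,
// spaces, numbers, and the characters _ . : / = + - @
var tagPattern = regexp.MustCompile(`^[\p{L}\p{Z}\p{N}_.:/=+\-@]*$`)

func validTagKey(k string) bool {
	n := utf8.RuneCountInString(k)
	return n >= 1 && n <= 128 && tagPattern.MatchString(k)
}

func validTagValue(v string) bool {
	return utf8.RuneCountInString(v) <= 256 && tagPattern.MatchString(v)
}
```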
Fixes #1547
When no checksum is specified during multipart upload initialization, the complete multipart upload request should default to **CRC64NVME FULL_OBJECT**. The checksum will not be stored in the final object metadata, as it is used solely for data integrity verification. Note that although CRC64NVME is composable, it is calculated using the standard hash reader, since the part checksums are missing and the final checksum calculation is instead based directly on the parts data.
Fixes #1359
The composite checksums in **CompleteMultipartUpload** generally follow the format `checksum-<number_of_parts>`. Previously, the gateway treated composite checksums as regular checksums without distinguishing between the two formats.
In S3, the `x-amz-checksum-*` headers accept both plain checksum values and the `checksum-<number_of_parts>` format. However, after a successful `CompleteMultipartUpload` request, the final checksum is always stored with the part number included.
This implementation adds support for parsing both formats: checksums with and without the part number. From now on, composite checksums are consistently stored with the part number included.
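A minimal sketch of parsing the two formats; the helper is hypothetical and relies on base64 checksum values never containing `-`:

```go
import (
	"strconv"
	"strings"
)

// splitCompositeChecksum separates an optional "-<number_of_parts>" suffix
// from a checksum value. Base64 values never contain "-", so the last "-"
// can only introduce the part count.
func splitCompositeChecksum(s string) (value string, parts int, hasParts bool) {
	i := strings.LastIndex(s, "-")
	if i < 0 {
		return s, 0, false
	}
	n, err := strconv.Atoi(s[i+1:])
	if err != nil || n < 1 {
		return s, 0, false
	}
	return s[:i], n, true
}
```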
Additionally, two integration tests are added:
* One verifies the final composite checksum with part numbers.
* Another ensures invalid composite checksums are correctly rejected.
Some systems may choose to allow non-AWS-compliant bucket names
and/or handle bucket name validation in the backend instead.
This adds the option to turn off the strict bucket name validation
checks in the frontend API handlers.
When frontend bucket name validation is disabled, we need to do
sanity checks for posix-compliant names in the posix/scoutfs
backends. This is automatically enabled when strict bucket
name validation is disabled.
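A minimal illustrative sketch of the kind of sanity check the posix/scoutfs backends need once strict validation is off (not the gateway's exact rules):

```go
import "strings"

// posixSafeBucketName rejects names that cannot be used as a directory
// name on a posix filesystem.
func posixSafeBucketName(name string) bool {
	return name != "" && name != "." && name != ".." &&
		!strings.ContainsAny(name, "/\x00")
}
```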
Fixes #1564
Fixes #1565 Fixes #1561 Fixes #1300
This PR focuses on three main changes:
1. **Prioritizing object-level lock configuration over bucket-level default retention**
When an object is uploaded with a specific retention configuration, it takes precedence over the bucket’s default retention set via `PutObjectLockConfiguration`. If the object’s retention expires, the object must become available for write operations, even if the bucket-level default retention is still active.
2. **Preventing object lock configuration from being disabled once enabled**
To align with AWS S3 behavior, once object lock is enabled for a bucket, it can no longer be disabled. Previously, sending an empty `Enabled` field in the payload would disable object lock. Now this behavior is removed: an empty `Enabled` field results in a `MalformedXML` error.
This creates a challenge for integration tests that need to clean up locked objects in order to delete the bucket. To handle this, a method has been implemented (sketched after this list) that:
* Removes any legal hold if present.
* Applies a temporary retention with a "retain until" date set 3 seconds ahead.
* Waits for 3 seconds before deleting the object and bucket.
3. **Allowing object lock to be enabled on existing buckets via `PutObjectLockConfiguration`**
Object lock can now be enabled on an existing bucket if it wasn’t enabled at creation time.
* If versioning is enabled at the gateway level, the behavior matches AWS S3: object lock can only be enabled when bucket versioning status is `Enabled`.
* If versioning is not enabled at the gateway level, object lock can always be enabled on existing buckets via `PutObjectLockConfiguration`.
* In Azure (which does not support bucket versioning), enabling object lock is always allowed.
This change also fixes the error message returned in this scenario for better clarity.
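The cleanup method from change 2 might look like the following sketch, using aws-sdk-go-v2 types; the helper name is hypothetical:

```go
import (
	"context"
	"time"

	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/service/s3"
	"github.com/aws/aws-sdk-go-v2/service/s3/types"
)

// removeLockedObject clears any legal hold, shrinks retention to a few
// seconds ahead, waits it out, and then deletes the object.
func removeLockedObject(ctx context.Context, client *s3.Client, bucket, key string) error {
	// Remove a legal hold if one is present; ignore the error if none is set.
	_, _ = client.PutObjectLegalHold(ctx, &s3.PutObjectLegalHoldInput{
		Bucket: &bucket,
		Key:    &key,
		LegalHold: &types.ObjectLockLegalHold{
			Status: types.ObjectLockLegalHoldStatusOff,
		},
	})
	// Apply a temporary retention that expires 3 seconds from now.
	until := time.Now().Add(3 * time.Second)
	_, err := client.PutObjectRetention(ctx, &s3.PutObjectRetentionInput{
		Bucket:                    &bucket,
		Key:                       &key,
		BypassGovernanceRetention: aws.Bool(true),
		Retention: &types.ObjectLockRetention{
			Mode:            types.ObjectLockRetentionModeGovernance,
			RetainUntilDate: &until,
		},
	})
	if err != nil {
		return err
	}
	time.Sleep(3 * time.Second)
	_, err = client.DeleteObject(ctx, &s3.DeleteObjectInput{Bucket: &bucket, Key: &key})
	return err
}
```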
Fixes #1559 Fixes #1330
This PR focuses on three main changes:
1. **Fix object lock error codes and descriptions**
When an object was WORM-protected and delete/overwrite was disallowed due to object lock configurations, the gateway incorrectly returned the `s3.ErrObjectLocked` error code and description. These have now been corrected.
2. **Update `PutObjectRetention` behavior**
Previously, when an object already had a retention mode set, the gateway only allowed modifications if the mode was changed from `GOVERNANCE` to `COMPLIANCE`, and only when the user had the `s3:BypassGovernanceRetention` permission.
The logic has been updated: if the existing retention mode is the same as the one being applied, the operation is now allowed regardless of other factors.
3. **Fix error checks in integration tests (AWS SDK regression)**
Due to an AWS SDK regression, integration tests were previously limited to checking partial error descriptions. This issue seems to be resolved for some actions (though the ticket is still open: https://github.com/aws/aws-sdk-go-v2/issues/2921). Error checks have been reverted to full description comparisons where possible.
Similar to commit 8e18b43116 ("fix: lex sort order of listobjects backend.Walk"),
but now for the "Versions" walk.
The original backend.WalkVersions function used the native WalkDir and ReadDir,
which did not guarantee lexicographic ordering of results for cases where
including the directory slash changes the sort order. This caused incorrect
paginated responses because S3 APIs require strict lexicographic ordering
where directories with trailing slashes sort correctly relative to files.
For example, dir1/a.b/ must come before dir1/a/ in the results, but
fs.WalkDir was returning them in filesystem sort order, which reversed
the order by not taking the trailing "/" into account.
The original Walk function used the native WalkDir and ReadDir, which did not
guarantee lexicographic ordering of results for cases where including the
directory slash changes the sort order. This caused incorrect paginated
responses because S3 APIs require strict lexicographic ordering where
directories with trailing slashes sort correctly relative to files. For
example, dir1/a.b/ must come before dir1/a/ in the results, but fs.WalkDir
was returning them in filesystem sort order, which reversed the order by not
taking the trailing "/" into account.
This also led to continuous looping of paginated listobjects results
when the marker was set out of order from the expected results.
To address this fundamental ordering issue, the entire directory traversal
mechanism was replaced with a custom lexicographic sorting approach. The new
implementation reads each directory's contents using ReadDir, then sorts the
entries using custom sort keys that append trailing slashes to directory paths.
This ensures that dir1/a.b/ correctly sorts before dir1/a/, as well as other
similar failing cases, according to ASCII character ordering rules.
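A minimal sketch of the sort-key approach ('.' is 0x2E and '/' is 0x2F, so the
trailing slash is what fixes the comparison):

```go
import "sort"

type dirEntry struct {
	name  string
	isDir bool
}

// sortKey appends "/" to directory names so that, e.g., "dir1/a.b/" sorts
// before "dir1/a/" as S3 listings require.
func sortKey(e dirEntry) string {
	if e.isDir {
		return e.name + "/"
	}
	return e.name
}

func sortEntries(entries []dirEntry) {
	sort.Slice(entries, func(i, j int) bool {
		return sortKey(entries[i]) < sortKey(entries[j])
	})
}
```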
Fixes #1283
Fixes #1520
Removes the incorrect logic that made HeadObject return a successful response when querying an incomplete multipart upload.
Implements the logic to return a `NotImplemented` error if `GetObject`/`HeadObject` is attempted with `partNumber` in the azure and posix backends. The front-end part is preserved for use by the s3 proxy backend.
Closes #821
**Implements conditional operations across object APIs:**
* **PutObject** and **CompleteMultipartUpload**:
Supports conditional writes with `If-Match` and `If-None-Match` headers (ETag comparisons).
Evaluation is based on an existing object with the same key in the bucket. The operation is allowed only if the preconditions are satisfied. If no object exists for the key, these headers are ignored.
* **CopyObject** and **UploadPartCopy**:
Adds conditional reads on the copy source object with the following headers:
* `x-amz-copy-source-if-match`
* `x-amz-copy-source-if-none-match`
* `x-amz-copy-source-if-modified-since`
* `x-amz-copy-source-if-unmodified-since`
The first two are ETag comparisons, while the latter two compare against the copy source’s `LastModified` timestamp.
* **AbortMultipartUpload**:
Supports the `x-amz-if-match-initiated-time` header, which allows the abort only if the multipart upload's initiation time matches the provided value.
* **DeleteObject**:
Adds support for:
* `If-Match` (ETag comparison)
* `x-amz-if-match-last-modified-time` (LastModified comparison)
* `x-amz-if-match-size` (object size comparison)
Additionally, this PR updates precondition date parsing logic to support both **RFC1123** and **RFC3339** formats. Dates set in the future are ignored, matching AWS S3 behavior.
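A minimal sketch of that date handling, assuming a hypothetical helper name:

```go
import "time"

// parsePreconditionDate accepts both RFC1123 and RFC3339 timestamps and
// reports ok=false for unparsable or future dates, which are then ignored.
func parsePreconditionDate(s string, now time.Time) (time.Time, bool) {
	for _, layout := range []string{time.RFC1123, time.RFC3339} {
		if t, err := time.Parse(layout, s); err == nil {
			if t.After(now) {
				return time.Time{}, false
			}
			return t, true
		}
	}
	return time.Time{}, false
}
```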
Closes #1518
Adds the `x-amz-object-size` header to the `PutObject` response, indicating the size of the uploaded object. This change is applied to the POSIX, Azure, and S3 proxy backends.
The debuglogger should be a top-level module since we expect
all modules within the project to make use of it. If it is
hidden in s3api, then contributors are less likely to use
it outside of s3api.
Closes #882
Implements conditional reads for `GetObject` and `HeadObject` in the gateway for both POSIX and Azure backends. The behavior is controlled by the `If-Match`, `If-None-Match`, `If-Modified-Since`, and `If-Unmodified-Since` request headers, where the first two perform ETag comparisons and the latter two compare against the object’s `LastModified` date. No validation is performed for invalid ETags or malformed date formats, and precondition date headers are expected to follow RFC1123; otherwise, they are ignored.
The integration tests cover all possible combinations of conditional headers, ensuring the feature is 100% AWS S3-compatible.
The CORS actions were directly proxied in the s3 proxy backend. The new implementation stores/retrieves/deletes bucket CORS configuration in the `meta` bucket.
This changes the marker/continuation token from the object name
to the marker from the azure list objects pager. This is needed
because passing the object name as the token to the azure next
call causes the Azure API to throw 400 Bad Request with
InvalidQueryParameterValue. So we have to use the azure marker
for compatibility with the azure API pager.
To do this we have to align the s3 list objects request to the
Azure ListBlobsHierarchyPager. The v2 requests have an optional
startafter where we will have to page through the azure blobs
to find the correct starting point, but after this we will
only return the single paginated results from the Azure
pager to maintain the correct markers all the way through to
Azure.
The ListObjects (non V2) assumes that the marker must be an object
name, so for this case we have to page through the azure listings
for each call to find the correct starting point. This makes the
V2 method far more efficient, but maintains correctness for the
ListObjects.
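A minimal sketch of driving the Azure pager with the opaque marker, using the azblob `container` package; the function and its plumbing are illustrative:

```go
import (
	"context"

	"github.com/Azure/azure-sdk-for-go/sdk/storage/azblob/container"
)

// listPage fetches one page; marker is nil for the first call, otherwise the
// token returned from the previous response.
func listPage(ctx context.Context, cc *container.Client, prefix string,
	marker *string, maxKeys int32) (container.ListBlobsHierarchyResponse, string, error) {
	pager := cc.NewListBlobsHierarchyPager("/", &container.ListBlobsHierarchyOptions{
		Prefix:     &prefix,
		Marker:     marker,
		MaxResults: &maxKeys,
	})
	page, err := pager.NextPage(ctx)
	if err != nil {
		return container.ListBlobsHierarchyResponse{}, "", err
	}
	next := ""
	if page.NextMarker != nil {
		next = *page.NextMarker // becomes the S3 NextContinuationToken
	}
	return page, next, nil
}
```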
Also remove continuation token string checks in the integration
tests since this is supposed to be an opaque token that the
client should not care about. This will help to maintain the
tests for multiple backend types.
Fixes #1457
Closes #1003
**Changes Introduced:**
1. **S3 Bucket CORS Actions**
* Implemented the following S3 bucket CORS APIs:
* `PutBucketCors` – Configure CORS rules for a bucket.
* `GetBucketCors` – Retrieve the current CORS configuration for a bucket.
* `DeleteBucketCors` – Remove CORS configuration from a bucket.
2. **CORS Preflight Handling**
* Added an `OPTIONS` endpoint to handle browser preflight requests.
* The endpoint evaluates incoming requests against bucket CORS rules and returns the appropriate `Access-Control-*` headers.
3. **CORS Middleware**
* Implemented middleware that:
* Checks if a bucket has CORS configured.
* Detects the `Origin` header in the request.
* Adds the necessary `Access-Control-*` headers to the response when the request matches the bucket CORS configuration.
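A minimal sketch of the middleware flow as a Fiber handler; the rule lookup and its types are hypothetical:

```go
import (
	"strings"

	"github.com/gofiber/fiber/v2"
)

// corsRule is a hypothetical, already-matched bucket CORS rule.
type corsRule struct {
	AllowedOrigin  string
	AllowedMethods []string
}

// matchCorsRule is a hypothetical lookup against the stored bucket CORS
// configuration; it returns the matching rule, if any.
func matchCorsRule(bucket, origin, method string) (corsRule, bool) {
	// ... load the config from the meta bucket and match origin/method ...
	return corsRule{}, false
}

func corsMiddleware(c *fiber.Ctx) error {
	origin := c.Get("Origin")
	if origin == "" {
		return c.Next() // not a CORS request
	}
	if rule, ok := matchCorsRule(c.Params("bucket"), origin, c.Method()); ok {
		c.Set("Access-Control-Allow-Origin", rule.AllowedOrigin)
		c.Set("Access-Control-Allow-Methods", strings.Join(rule.AllowedMethods, ", "))
	}
	return c.Next()
}
```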
The C++ SDK (and maybe others?) assumes that S3 ETags
without a "-" in the string are MD5 checksums. So the Azure
ETag, which does not have a "-" but is also not an MD5 checksum,
will fail some of the SDK internal validation checks.
Fix this by appending "-1" to the ETag to make it look like
a multipart-format ETag, which will skip the SDK verification
check.
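A minimal sketch of the workaround:

```go
import "strings"

// azureToS3ETag appends "-1" when the ETag has no "-", so SDKs treat it as
// a multipart ETag and skip MD5 validation.
func azureToS3ETag(etag string) string {
	if !strings.Contains(etag, "-") {
		etag += "-1"
	}
	return etag
}
```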
Fixes: #1380
Co-authored-by: Ben McClelland <ben.mcclelland@versity.com>
This adds a bunch of test cases for non-zero-length objects,
zero-length objects, and directory objects to match verified AWS
responses for the various range bytes cases.
This also fixes the posix head/get range responses for these test
cases.
Fixes #1342
This PR includes two main changes:
1. It fixes the case where `x-amz-checksum-x` (precalculated checksum headers) are not provided for `UploadPart`, and the checksum type for the multipart upload is `FULL_OBJECT`. In this scenario, the server no longer returns an error.
2. When no `x-amz-checksum-x` is provided for `UploadPart`, and `x-amz-sdk-checksum-algorithm` is also missing, the gateway now calculates the part checksum based on the multipart upload's checksum algorithm and stores it accordingly.
Additionally, the PR adds integration tests for:
* The two cases above
* The case where only `x-amz-sdk-checksum-algorithm` is provided
Fixes #1388 Fixes #1389 Fixes #1390 Fixes #1401
Adds `x-amz-copy-source` header validation for `CopyObject` and `UploadPartCopy` in the front end.
The error:
```
ErrInvalidCopySource: {
Code: "InvalidArgument",
Description: "Copy Source must mention the source bucket and key: sourcebucket/sourcekey.",
HTTPStatusCode: http.StatusBadRequest,
},
```
is now deprecated.
The conditional read/write headers validation in `CopyObject` should come with #821 and #822.
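A minimal sketch of what the validation might look like; the exact rules here (optional leading slash, optional versionId query) are assumptions:

```go
import "strings"

// validCopySource checks that the header names a bucket and a key; an
// optional leading "/" and a trailing "?versionId=..." are allowed.
func validCopySource(src string) bool {
	src = strings.TrimPrefix(src, "/")
	if i := strings.Index(src, "?versionId="); i >= 0 {
		src = src[:i]
	}
	bucket, key, ok := strings.Cut(src, "/")
	return ok && bucket != "" && key != ""
}
```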
Fixes #1036
Fixes the issue where calling a non-existent root endpoint (POST /) made the gateway return `NoSuchBucket`. It now returns the correct `MethodNotAllowed` error.
Add helper util auth.UpdateBucketACLOwner() that sets a new
default ACL based on the new owner and removes the old bucket policy.
The ChangeBucketOwner() remains in the backend.Backend
interface in case there is ever a backend that needs to manage
ownership in some other way than with bucket ACLs. The arguments
have changed to clarify the updated owner. This will break any
plugins implementing the old interface. They should use the new
auth.UpdateBucketACLOwner() or implement the corresponding
change specific to the backend.
This fixes a panic seen when there were a lot of multipart uploads in the
same bucket, requiring multiple paginated responses. For example:
panic: runtime error: index out of range [11455] with length 1000
goroutine 418 [running]:
github.com/versity/versitygw/backend/posix.(*Posix).ListMultipartUploads(0xc0004300
/Users/ben/repo/versitygw/backend/posix/posix.go:2122 +0xd25
github.com/versity/versitygw/s3api/controllers.S3ApiController.ListActions({{0x183c
...
This change updates the ListMultipartUploads implementation to properly advance
past the (KeyMarker, UploadIDMarker) tuple when paginating, ensuring that each
response starts after the marker and does not include duplicate uploads.
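A minimal sketch of the marker advance over a lexicographically sorted uploads slice; the types are illustrative:

```go
import "sort"

type upload struct{ Key, UploadID string }

// afterMarker returns the suffix of the sorted uploads slice strictly after
// the (KeyMarker, UploadIDMarker) tuple, so no duplicates are returned.
func afterMarker(uploads []upload, keyMarker, uploadIDMarker string) []upload {
	i := sort.Search(len(uploads), func(i int) bool {
		u := uploads[i]
		return u.Key > keyMarker ||
			(u.Key == keyMarker && u.UploadID > uploadIDMarker)
	})
	return uploads[i:]
}
```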
We were incorrectly trying to pass the admin request
actions through to the backend s3 service in s3proxy. This
was resulting in internal server errors since not all s3
backends would understand these requests. Instead, the
gateway needs to handle these requests directly.
Fixes #1381