Fixes #1708
This PR focuses on evaluating the `x-amz-if-none-match` precondition header for object PUT operations. If any value other than `*` is provided, a `NotImplemented` error is returned. If `If-Match` is used together with `If-None-Match`, regardless of the value combination, a `NotImplemented` error is returned. When only `If-None-Match: *` is specified, a `PreconditionFailed` error is returned if the object already exists in `PutObject` or `CompleteMultipartUpload`; if the object does not exist, object creation is allowed.
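For illustration, here is a minimal aws-sdk-go-v2 sketch of the `If-None-Match: *` behavior described above. The endpoint, credentials source, and bucket/key names are assumptions, and the `IfNoneMatch` field requires a reasonably recent SDK release.

```go
package main

import (
	"context"
	"errors"
	"log"
	"strings"

	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/config"
	"github.com/aws/aws-sdk-go-v2/service/s3"
	"github.com/aws/smithy-go"
)

func main() {
	ctx := context.Background()
	cfg, err := config.LoadDefaultConfig(ctx)
	if err != nil {
		log.Fatal(err)
	}
	client := s3.NewFromConfig(cfg, func(o *s3.Options) {
		o.BaseEndpoint = aws.String("http://localhost:7070") // assumed gateway endpoint
		o.UsePathStyle = true
	})

	// Create the object only if no object with this key already exists.
	_, err = client.PutObject(ctx, &s3.PutObjectInput{
		Bucket:      aws.String("my-bucket"),
		Key:         aws.String("my-key"),
		Body:        strings.NewReader("hello"),
		IfNoneMatch: aws.String("*"), // any value other than "*" yields NotImplemented
	})
	var apiErr smithy.APIError
	if errors.As(err, &apiErr) && apiErr.ErrorCode() == "PreconditionFailed" {
		log.Println("object already exists, creation was refused")
	} else if err != nil {
		log.Fatal(err)
	}
}
```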
Fixes #1709
If any precondition header (`If-Match`, `If-None-Match`) is present in `PutObject` and `CompleteMultipartUpload` and there is no object in the bucket with the given key, a `NoSuchKey` error is now returned. Previously the headers were simply ignored and new object creation was allowed.
Closes #1346
`GetObject` and `HeadObject` return the `x-amz-tagging-count` header in the response, which specifies the number of tags associated with the object. This was already supported for `GetObject`, but missing for `HeadObject`. This implementation adds support for `HeadObject` in `azure` and `posix` and updates the integration tests to cover this functionality for `GetObject`.
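As a rough illustration, the tag count can be read through the SDK field that maps to the `x-amz-tagging-count` header; the configured `*s3.Client` and bucket/key names below are assumptions, and for `HeadObject` the header can be read from the raw response.

```go
package example

import (
	"context"
	"log"

	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/service/s3"
)

func printTagCount(ctx context.Context, client *s3.Client) {
	out, err := client.GetObject(ctx, &s3.GetObjectInput{
		Bucket: aws.String("my-bucket"),
		Key:    aws.String("my-key"),
	})
	if err != nil {
		log.Fatal(err)
	}
	defer out.Body.Close()

	if out.TagCount != nil {
		// Populated from the x-amz-tagging-count response header.
		log.Printf("object has %d tags", *out.TagCount)
	}
}
```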
Closes #1343
Object version tagging support was previously missing in the gateway. The support is added with this PR. If versioning is not enabled at the gateway level and a user attempts to put, get, or delete object version tags, the gateway returns an `InvalidArgument` (Invalid versionId) error.
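A minimal sketch of putting tags on an object version, assuming an already configured `*s3.Client` and hypothetical bucket/key/version values; with gateway-level versioning disabled, any `VersionId` here triggers the `InvalidArgument` error described above.

```go
package example

import (
	"context"
	"log"

	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/service/s3"
	"github.com/aws/aws-sdk-go-v2/service/s3/types"
)

func tagObjectVersion(ctx context.Context, client *s3.Client, versionID string) {
	_, err := client.PutObjectTagging(ctx, &s3.PutObjectTaggingInput{
		Bucket:    aws.String("my-bucket"),
		Key:       aws.String("my-key"),
		VersionId: aws.String(versionID),
		Tagging: &types.Tagging{
			TagSet: []types.Tag{
				{Key: aws.String("env"), Value: aws.String("test")},
			},
		},
	})
	if err != nil {
		// InvalidArgument (Invalid versionId) if versioning is disabled at the gateway.
		log.Fatal(err)
	}
}
```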
Closes #1595
This implementation diverges from AWS S3 behavior. The `CreateBucket` request body is no longer ignored. Based on the S3 request body schema, the gateway parses only the `LocationConstraint` and `Tags` fields. If the `LocationConstraint` does not match the gateway’s region, it returns an `InvalidLocationConstraint` error.
In AWS S3, tagging during bucket creation is supported only for directory buckets. The gateway extends this support to general-purpose buckets.
If the request body is malformed, the gateway returns a `MalformedXML` error.
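A minimal sketch of a `CreateBucket` call that exercises the parsed `LocationConstraint`; the client, bucket name, and region value are assumptions, and the region must match the gateway's configured region to avoid `InvalidLocationConstraint`.

```go
package example

import (
	"context"
	"log"

	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/service/s3"
	"github.com/aws/aws-sdk-go-v2/service/s3/types"
)

func createBucketInRegion(ctx context.Context, client *s3.Client) {
	_, err := client.CreateBucket(ctx, &s3.CreateBucketInput{
		Bucket: aws.String("my-bucket"),
		CreateBucketConfiguration: &types.CreateBucketConfiguration{
			// Must match the region the gateway is configured with.
			LocationConstraint: types.BucketLocationConstraint("us-east-1"),
		},
	})
	if err != nil {
		log.Fatal(err) // InvalidLocationConstraint on a region mismatch
	}
}
```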
Fixes #1565, Fixes #1561, Fixes #1300
This PR focuses on three main changes:
1. **Prioritizing object-level lock configuration over bucket-level default retention**
When an object is uploaded with a specific retention configuration, it takes precedence over the bucket’s default retention set via `PutObjectLockConfiguration`. If the object’s retention expires, the object must become available for write operations, even if the bucket-level default retention is still active.
2. **Preventing object lock configuration from being disabled once enabled**
To align with AWS S3 behavior, once object lock is enabled for a bucket, it can no longer be disabled. Previously, sending an empty `Enabled` field in the payload would disable object lock. Now, this behavior is removed—an empty `Enabled` field will result in a `MalformedXML` error.
This creates a challenge for integration tests that need to clean up locked objects in order to delete the bucket. To handle this, a method has been implemented (sketched after this list) that:
* Removes any legal hold if present.
* Applies a temporary retention with a "retain until" date set 3 seconds ahead.
* Waits for 3 seconds before deleting the object and bucket.
3. **Allowing object lock to be enabled on existing buckets via `PutObjectLockConfiguration`**
Object lock can now be enabled on an existing bucket if it wasn’t enabled at creation time.
* If versioning is enabled at the gateway level, the behavior matches AWS S3: object lock can only be enabled when bucket versioning status is `Enabled`.
* If versioning is not enabled at the gateway level, object lock can always be enabled on existing buckets via `PutObjectLockConfiguration`.
* In Azure (which does not support bucket versioning), enabling object lock is always allowed.
This change also fixes the error message returned in this scenario for better clarity.
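A sketch of the test cleanup flow from item 2, assuming a configured `*s3.Client` and hypothetical bucket/key names; the GOVERNANCE mode and the `BypassGovernanceRetention` flag are assumptions made so the shorter retain-until date can replace an existing retention.

```go
package example

import (
	"context"
	"log"
	"time"

	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/service/s3"
	"github.com/aws/aws-sdk-go-v2/service/s3/types"
)

func cleanupLockedObject(ctx context.Context, client *s3.Client, bucket, key string) {
	// 1. Remove any legal hold, if present.
	_, _ = client.PutObjectLegalHold(ctx, &s3.PutObjectLegalHoldInput{
		Bucket:    aws.String(bucket),
		Key:       aws.String(key),
		LegalHold: &types.ObjectLockLegalHold{Status: types.ObjectLockLegalHoldStatusOff},
	})

	// 2. Apply a temporary retention that expires 3 seconds from now.
	_, err := client.PutObjectRetention(ctx, &s3.PutObjectRetentionInput{
		Bucket: aws.String(bucket),
		Key:    aws.String(key),
		Retention: &types.ObjectLockRetention{
			Mode:            types.ObjectLockRetentionModeGovernance, // assumed mode
			RetainUntilDate: aws.Time(time.Now().Add(3 * time.Second)),
		},
		BypassGovernanceRetention: aws.Bool(true), // assumed, to shorten an existing retention
	})
	if err != nil {
		log.Fatal(err)
	}

	// 3. Wait for the retention to expire, then delete the object and the bucket.
	time.Sleep(3 * time.Second)
	if _, err := client.DeleteObject(ctx, &s3.DeleteObjectInput{
		Bucket: aws.String(bucket), Key: aws.String(key),
	}); err != nil {
		log.Fatal(err)
	}
	if _, err := client.DeleteBucket(ctx, &s3.DeleteBucketInput{Bucket: aws.String(bucket)}); err != nil {
		log.Fatal(err)
	}
}
```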
Fixes #1559, Fixes #1330
This PR focuses on three main changes:
1. **Fix object lock error codes and descriptions**
When an object was WORM-protected and delete/overwrite was disallowed due to object lock configurations, the gateway incorrectly returned the `s3.ErrObjectLocked` error code and description. These have now been corrected.
2. **Update `PutObjectRetention` behavior**
Previously, when an object already had a retention mode set, the gateway only allowed modifications if the mode was changed from `GOVERNANCE` to `COMPLIANCE`, and only when the user had the `s3:BypassGovernanceRetention` permission.
The logic has been updated: if the existing retention mode is the same as the one being applied, the operation is now allowed regardless of other factors.
3. **Fix error checks in integration tests (AWS SDK regression)**
Due to an AWS SDK regression, integration tests were previously limited to checking partial error descriptions. This issue seems to be resolved for some actions (though the ticket is still open: https://github.com/aws/aws-sdk-go-v2/issues/2921). Error checks have been reverted back to full description comparisons where possible.
Fixes #1520
Removes the incorrect logic that allowed `HeadObject` to return a successful response when querying an incomplete multipart upload.
Implements the logic to return a `NotImplemented` error if `GetObject`/`HeadObject` is attempted with `partNumber` in the azure and posix backends. The front-end part is preserved to be used in the s3 proxy backend.
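A tiny sketch of the rejected request shape, assuming a configured `*s3.Client` and hypothetical names; on the azure and posix backends this now fails with `NotImplemented`.

```go
package example

import (
	"context"
	"log"

	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/service/s3"
)

func getObjectPart(ctx context.Context, client *s3.Client) {
	_, err := client.GetObject(ctx, &s3.GetObjectInput{
		Bucket:     aws.String("my-bucket"),
		Key:        aws.String("my-key"),
		PartNumber: aws.Int32(1), // triggers NotImplemented on azure/posix backends
	})
	if err != nil {
		log.Println(err)
	}
}
```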
Closes #821
**Implements conditional operations across object APIs:**
* **PutObject** and **CompleteMultipartUpload**:
Supports conditional writes with `If-Match` and `If-None-Match` headers (ETag comparisons).
Evaluation is based on an existing object with the same key in the bucket. The operation is allowed only if the preconditions are satisfied. If no object exists for the key, these headers are ignored.
* **CopyObject** and **UploadPartCopy**:
Adds conditional reads on the copy source object with the following headers:
* `x-amz-copy-source-if-match`
* `x-amz-copy-source-if-none-match`
* `x-amz-copy-source-if-modified-since`
* `x-amz-copy-source-if-unmodified-since`
The first two are ETag comparisons, while the latter two compare against the copy source’s `LastModified` timestamp.
* **AbortMultipartUpload**:
Supports the `x-amz-if-match-initiated-time` header; the abort succeeds only if the multipart upload’s initiation time matches the provided value.
* **DeleteObject**:
Adds support for:
* `If-Match` (ETag comparison)
* `x-amz-if-match-last-modified-time` (LastModified comparison)
* `x-amz-if-match-size` (object size comparison)
Additionally, this PR updates precondition date parsing logic to support both **RFC1123** and **RFC3339** formats. Dates set in the future are ignored, matching AWS S3 behavior.
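A simplified sketch of that parsing rule (not the gateway's actual helper): try RFC1123, then RFC3339, and ignore malformed or future-dated values.

```go
package example

import (
	"time"
)

// parsePreconditionTime returns the parsed time and true when the header value
// is usable; malformed or future-dated values are ignored (ok == false).
func parsePreconditionTime(value string, now time.Time) (time.Time, bool) {
	t, err := time.Parse(time.RFC1123, value)
	if err != nil {
		if t, err = time.Parse(time.RFC3339, value); err != nil {
			return time.Time{}, false
		}
	}
	if t.After(now) {
		// Dates set in the future are ignored, matching AWS S3 behavior.
		return time.Time{}, false
	}
	return t, true
}
```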
Closes #1518
Adds the `x-amz-object-size` header to the `PutObject` response, indicating the size of the uploaded object. This change is applied to the POSIX, Azure, and S3 proxy backends.
Closes #882
Implements conditional reads for `GetObject` and `HeadObject` in the gateway for both POSIX and Azure backends. The behavior is controlled by the `If-Match`, `If-None-Match`, `If-Modified-Since`, and `If-Unmodified-Since` request headers, where the first two perform ETag comparisons and the latter two compare against the object’s `LastModified` date. Invalid ETags and malformed dates do not produce validation errors; precondition date headers are expected to follow RFC1123 and are ignored otherwise.
The integration tests cover all possible combinations of conditional headers, ensuring the feature is 100% AWS S3–compatible.
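A minimal sketch of a conditional read, assuming a configured `*s3.Client` and hypothetical bucket/key/ETag values; a failed `If-Match`/`If-Unmodified-Since` precondition surfaces as `PreconditionFailed`.

```go
package example

import (
	"context"
	"errors"
	"log"
	"time"

	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/service/s3"
	"github.com/aws/smithy-go"
)

func conditionalGet(ctx context.Context, client *s3.Client, etag string) {
	out, err := client.GetObject(ctx, &s3.GetObjectInput{
		Bucket:            aws.String("my-bucket"),
		Key:               aws.String("my-key"),
		IfMatch:           aws.String(etag),                      // ETag comparison
		IfUnmodifiedSince: aws.Time(time.Now().Add(-time.Hour)),  // LastModified comparison
	})
	var apiErr smithy.APIError
	if errors.As(err, &apiErr) && apiErr.ErrorCode() == "PreconditionFailed" {
		log.Println("object changed since the given ETag/date")
		return
	}
	if err != nil {
		log.Fatal(err)
	}
	defer out.Body.Close()
}
```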
This changes the marker/continuation token from the object name
to the marker from the azure list objects pager. This is needed
because passing the object name as the token to the azure next
call causes the Azure API to throw 400 Bad Request with
InvalidQueryParameterValue. So we have to use the azure marker
for compatibility with the azure API pager.
To do this we have to align the s3 list objects request to the
Azure ListBlobsHierarchyPager. The v2 requests have an optional
startafter where we will have to page through the azure blobs
to find the correct starting point, but after this we will
only return with the single paginated results from the Azure
pager to maintain the correct markers all the way through to
Azure.
The ListObjects (non V2) assumes that the marker must be an object
name, so for this case we have to page through the azure listings
for each call to find the correct starting point. This makes the
V2 method far more efficient, but maintains correctness for the
ListObjects.
Also remove continuation token string checks in the integration
tests since this is supposed to be an opaque token that the
client should not care about. This will help to maintain the
tests for multiple backend types.
Fixes #1457
Closes #1003
**Changes Introduced:**
1. **S3 Bucket CORS Actions**
* Implemented the following S3 bucket CORS APIs (a configuration sketch follows this list):
* `PutBucketCors` – Configure CORS rules for a bucket.
* `GetBucketCors` – Retrieve the current CORS configuration for a bucket.
* `DeleteBucketCors` – Remove CORS configuration from a bucket.
2. **CORS Preflight Handling**
* Added an `OPTIONS` endpoint to handle browser preflight requests.
* The endpoint evaluates incoming requests against bucket CORS rules and returns the appropriate `Access-Control-*` headers.
3. **CORS Middleware**
* Implemented middleware that:
* Checks if a bucket has CORS configured.
* Detects the `Origin` header in the request.
* Adds the necessary `Access-Control-*` headers to the response when the request matches the bucket CORS configuration.
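A minimal sketch of configuring CORS rules (item 1), assuming a configured `*s3.Client`; the bucket name, origins, methods, and headers are hypothetical.

```go
package example

import (
	"context"
	"log"

	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/service/s3"
	"github.com/aws/aws-sdk-go-v2/service/s3/types"
)

func putCors(ctx context.Context, client *s3.Client) {
	_, err := client.PutBucketCors(ctx, &s3.PutBucketCorsInput{
		Bucket: aws.String("my-bucket"),
		CORSConfiguration: &types.CORSConfiguration{
			CORSRules: []types.CORSRule{
				{
					AllowedOrigins: []string{"https://example.com"},
					AllowedMethods: []string{"GET", "PUT"},
					AllowedHeaders: []string{"*"},
					ExposeHeaders:  []string{"ETag"},
					MaxAgeSeconds:  aws.Int32(3600),
				},
			},
		},
	})
	if err != nil {
		log.Fatal(err)
	}
	// Browser preflight (OPTIONS) and matching requests are then answered by the
	// gateway with the appropriate Access-Control-* headers.
}
```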
The C++ SDK (and maybe others?) assumes that S3 ETags
without a "-" in the string are MD5 checksums. So the Azure
ETag that does not have a "-" but also is not an MD5 checksum
will fail some of the sdk internal validation checks.
Fix this by appending "-1" to the ETag to make it look like
the multipart format ETag that will skip the sdk verification
check.
Fixes: #1380
Co-authored-by: Ben McClelland <ben.mcclelland@versity.com>
Add helper util auth.UpdateBucketACLOwner() that sets new
default ACL based on new owner and removes old bucket policy.
The ChangeBucketOwner() remains in the backend.Backend
interface in case there is ever a backend that needs to manage
ownership in some other way than with bucket ACLs. The arguments
are changing to clarify the updated owner. This will break any
plugins implementing the old interface. They should use the new
auth.UpdateBucketACLOwner() or implement the corresponding
change specific for the backend.
There were a couple of cases that would return an error for the
non-existing bucket acl instead of treating that as the default
acl.
This also cleans up the backends that were doing their own
acl parsing instead of using the auth.ParseACL() function.
Fixes #1304
Fixes #1276
Creates the custom `s3response.CopyObjectOutput` type to handle the `LastModified` date property formatting correctly. It uses `time.RFC3339` to format the date to match the format that s3 uses.
Fixes #1258, Fixes #1257, Closes #1244
Adds range queries support for `HeadObject`.
Fixes the range parsing logic for `GetObject`, which is used for `HeadObject` as well. Both actions follow the same rules for range parsing.
Fixes the error message returned by `GetObject`.
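A small sketch of a ranged read, assuming a configured `*s3.Client` and hypothetical names; `HeadObject` now honors the same `Range` header, and the effective (clamped) range is reported back in `ContentRange`.

```go
package example

import (
	"context"
	"log"

	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/service/s3"
)

func rangedRead(ctx context.Context, client *s3.Client) {
	out, err := client.GetObject(ctx, &s3.GetObjectInput{
		Bucket: aws.String("my-bucket"),
		Key:    aws.String("my-key"),
		Range:  aws.String("bytes=0-99"), // first 100 bytes
	})
	if err != nil {
		log.Fatal(err)
	}
	defer out.Body.Close()
	log.Printf("content range: %s", aws.ToString(out.ContentRange))
}
```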
The xml encoding for the s3.CompleteMultipartUploadOutput response
type was not producing exactly the right field names for the
expected complete multipart upload result.
This change follows the pattern we have had to do for other xml
responses to create our own type that will encode better to the
expected response.
This will change the backend.Backend interface, so plugins and
other backends will have to make the corresponding changes.
Fixes #1215, Fixes #1216
`PutObject`, `CopyObject` and `CreateMultipartUpload` accept a tag string as an HTTP request header, which must be a valid URL-encoded string with valid key/value pairs; otherwise the calls fail with an `APIError`.
If the provided tag set contains duplicate keys, the calls fail with the same `InvalidURLEncodedTagging` error.
Not all URL-encoded characters are supported by S3. The tagging string may contain only letters, digits, and the following special characters:
- `-`
- `.`
- `/`
- `_`
- `+`
- ` `(space)
And their URL-encoded versions, e.g. `%2F` (/), `%2E` (.), etc.
If the provided tagging string contains an invalid key or value, the calls fail with the following errors respectively:
- invalid key: `(InvalidTag) The TagKey you have provided is invalid`
- invalid value: `(InvalidTag) The TagValue you have provided is invalid`
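A minimal sketch of sending a URL-encoded tag string, assuming a configured `*s3.Client` and hypothetical names/tags; `net/url` produces the encoding, and only the characters listed above are accepted.

```go
package example

import (
	"context"
	"log"
	"net/url"
	"strings"

	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/service/s3"
)

func putTaggedObject(ctx context.Context, client *s3.Client) {
	tags := url.Values{}
	tags.Set("project", "docs")
	tags.Set("path", "a/b") // '/' is allowed, encoded as %2F

	_, err := client.PutObject(ctx, &s3.PutObjectInput{
		Bucket:  aws.String("my-bucket"),
		Key:     aws.String("my-key"),
		Body:    strings.NewReader("hello"),
		Tagging: aws.String(tags.Encode()), // sent as the x-amz-tagging request header
	})
	if err != nil {
		log.Fatal(err) // e.g. InvalidTag / InvalidURLEncodedTagging for bad input
	}
}
```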
Fixes #1214, Fixes #1231, Fixes #1232
Implements `utils.ParseTagging` which is a generic implementation of parsing tags for both `PutObjectTagging` and `PutBucketTagging`.
- The actions now return `MalformedXML` if the provided request body is invalid.
- Adds validation to return `InvalidTag` if duplicate keys are present in tagging.
- For invalid tag keys, it creates a new error: `ErrInvalidTagKey`.
Closes #819
`ListObjects` returns object owner data in each object entry of the result, while `ListObjectsV2` has a `fetch-owner` query parameter, which indicates whether the object owner data should be fetched.
This PR adds `Owner` data to the `ListObjects` and `ListObjectsV2` results in the gateway. In AWS, objects in the same bucket can be owned by different users; in the gateway, all objects are owned by the bucket owner.
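A small sketch of requesting owner data with `ListObjectsV2`, assuming a configured `*s3.Client` and a hypothetical bucket; in the gateway every listed object is reported as owned by the bucket owner.

```go
package example

import (
	"context"
	"log"

	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/service/s3"
)

func listWithOwner(ctx context.Context, client *s3.Client) {
	out, err := client.ListObjectsV2(ctx, &s3.ListObjectsV2Input{
		Bucket:     aws.String("my-bucket"),
		FetchOwner: aws.Bool(true), // without this, ListObjectsV2 omits Owner data
	})
	if err != nil {
		log.Fatal(err)
	}
	for _, obj := range out.Contents {
		if obj.Owner != nil {
			log.Printf("%s owned by %s", aws.ToString(obj.Key), aws.ToString(obj.Owner.ID))
		}
	}
}
```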
Fixes #998, Closes #1125, Closes #1126, Closes #1127
Implements object meta properties (Content-Disposition, Content-Language, Content-Encoding, Cache-Control, Expires) and tagging based on the directives (metadata, tagging) in CopyObject in the posix and azure backends. The properties/tagging are copied from the source object if the "COPY" directive is provided and replaced otherwise.
Changes the object copy principle in azure: instead of using the `CopyFromURL` method from the azure sdk, it first loads the source object and then creates a new one, to be able to compare and store the meta properties.
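A minimal sketch of the directive handling, assuming a configured `*s3.Client` and hypothetical names; with `REPLACE` the metadata/tagging come from the request, with `COPY` they are taken from the source object.

```go
package example

import (
	"context"
	"log"

	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/service/s3"
	"github.com/aws/aws-sdk-go-v2/service/s3/types"
)

func copyWithDirectives(ctx context.Context, client *s3.Client) {
	_, err := client.CopyObject(ctx, &s3.CopyObjectInput{
		Bucket:             aws.String("dst-bucket"),
		Key:                aws.String("dst-key"),
		CopySource:         aws.String("src-bucket/src-key"),
		MetadataDirective:  types.MetadataDirectiveReplace, // or MetadataDirectiveCopy
		TaggingDirective:   types.TaggingDirectiveCopy,     // keep the source object's tags
		ContentDisposition: aws.String(`attachment; filename="report.pdf"`),
		CacheControl:       aws.String("max-age=300"),
		ContentLanguage:    aws.String("en"),
	})
	if err != nil {
		log.Fatal(err)
	}
}
```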
Closes #1128
Adds `Content-Disposition`, `Content-Language`, `Cache-Control` and `Expires` object meta properties support in posix and azure backends.
Changes the `PutObject` and `CreateMultipartUpload` actions' backend input types to custom `s3response` types, to be able to store `Expires` as an arbitrary string.
Fixes #1004, Fixes #1122, Fixes #1120
Separates `GetObject` and `UploadPartCopy` range parsing/validation.
`GetObject` returns a successful response if the provided range is invalid.
Adjusts the range upper limit if it exceeds the actual object size for `GetObject`.
Corrects the `ContentRange` in the `GetObject` response.
Fixes the `UploadPartCopy` action copy source range parsing/validation.
`UploadPartCopy` returns `InvalidArgument` if the copy source range is not valid.
There were two issues that were preventing correct behavior here.
One was that we need to specifically request the container metadata
when listing containers, and then we also need to handle the case
where the container does not include the acl metadata.
This fixes both of these cases by adding in the metadata request
option for this container listing, and will return a default acl
if not provided in the container metadata.
Fixes #948
The latest azurite made a change where the blob metadata must be
explicitly requested when calling NewListBlobsFlatPager(). We were
taking action on metadata items, and the tests were failing due
to these always missing without requesting metadata to be included
in the response.
Fix is to enable metadata for the response.
The "x-amz-copy-source" header may start with '/' as observed with
WinSCP. However, '/' is also the separator between the bucket and the
object path in "x-amz-copy-source".
Consider the following code in VerifyObjectCopyAccess():
srcBucket, srcObject, found := strings.Cut(copySource, "/")
If `copySource` starts with '/', then `srcBucket` is set to an empty
string. Later, an error is returned because bucket "" does not exist.
This issue was fixed in the Posix and Azure backends by the following
commit:
* 5e484f2 fix: Fixed CopySource parsing to handle the values starting with '/' in CopyObject action in posix and azure backends.
But the issue was not fixed in `VerifyObjectCopyAccess`.
This commit sanitizes "x-amz-copy-source" right after the header is
extracted in `s3api/controllers/base.go`. This ensures that the
`CopySource` argument passed to the backend functions UploadPartCopy()
and CopyObject() does not start with '/'. Since the backends no longer
need to strip away any leading '/' in `CopySource`, the parts of
commit 5e484f2 modifying the Posix and Azure backends are reverted.
Fixes issue #773.
Signed-off-by: Christophe Vu-Brugier <christophe.vu-brugier@seagate.com>
It is better if we let the s3response module handle the xml
formatting spec specifics, and let the backends not worry
about how to format the time fields. This should help to
prevent any future backend modifications or additions from
accidental incorrect time formatting.
Changed ListObjectsV2 and ListObjects actions return types from
*s3.ListObjects(V2)Output to s3response.ListObjects(V2)Result.
Changed the listing objects timestamp to RFC3339 to match AWS
S3 objects timestamp.
Fixes #752
The AWS spec for the create multipart upload response is:
```xml
<?xml version="1.0" encoding="UTF-8"?>
<InitiateMultipartUploadResult>
   <Bucket>string</Bucket>
   <Key>string</Key>
   <UploadId>string</UploadId>
</InitiateMultipartUploadResult>
```
So we need the return type to marshal to this xml format.