This fixes setting a content type on put object. If no content
type is set, then we will return a default content type for
subsequent requests. Multipart upload appears to be ok.
Also fixed content encoding and multipart uploads.
Fixes #783
The "x-amz-copy-source" header may start with '/' as observed with
WinSCP. However, '/' is also the separator between the bucket and the
object path in "x-amz-copy-source".
Consider the following code in VerifyObjectCopyAccess():
srcBucket, srcObject, found := strings.Cut(copySource, "/")
If `copySource` starts with '/', then `srcBucket` is set to an empty
string. Later, an error is returned because bucket "" does not exist.
This issue was fixed in the Posix and Azure backends by the following
commit:
* 5e484f2 fix: Fixed CopySource parsing to handle the values starting with '/' in CopyObject action in posix and azure backends.
But the issue was not fixed in `VerifyObjectCopyAccess`.
This commit sanitizes "x-amz-copy-source" right after the header is
extracted in `s3api/controllers/base.go`. This ensures that the
`CopySource` argument passed to the backend functions UploadPartCopy()
and CopyObject() does not start with '/'. Since the backends no longer
need to strip away any leading '/' in `CopySource`, the parts of
commit 5e484f2 modifying the Posix and Azure backends are reverted.
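A minimal sketch of the idea, assuming a small helper that trims a single
leading '/' before the value is split into bucket and object (the helper
name and surrounding wiring are illustrative):

package main

import (
    "fmt"
    "strings"
)

// sanitizeCopySource strips a single leading '/' so that "bucket/key" and
// "/bucket/key" are treated the same when the value is later split into
// bucket and object with strings.Cut.
func sanitizeCopySource(copySource string) string {
    return strings.TrimPrefix(copySource, "/")
}

func main() {
    src := sanitizeCopySource("/my-bucket/path/to/object")
    bucket, object, _ := strings.Cut(src, "/")
    fmt.Println(bucket, object) // my-bucket path/to/object
}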
Fixes issue #773.
Signed-off-by: Christophe Vu-Brugier <christophe.vu-brugier@seagate.com>
Changed the ListObjectsV2 and ListObjects action return types from
*s3.ListObjects(V2)Output to s3response.ListObjects(V2)Result.
Changed the object listing timestamps to RFC3339 to match the AWS
S3 object timestamp format.
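A small illustration of formatting a listing timestamp as RFC3339 (the
sample time is a placeholder; the actual s3response types are not shown):

package main

import (
    "fmt"
    "time"
)

func main() {
    // RFC3339 produces the ISO-8601 style timestamp used in S3 listings.
    mod := time.Date(2024, 5, 1, 12, 30, 0, 0, time.UTC)
    fmt.Println(mod.Format(time.RFC3339)) // 2024-05-01T12:30:00Z
}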
Fixes #752
When on Linux with O_TMPFILE support, we issue an fallocate for
the expected object size as an optimization for the underlying
filesystem to allocate the full file all at once. With the chunked
transfer encoding, the final object size is recorded in the
X-Amz-Decoded-Content-Length header instead of the standard
Content-Length, which includes the chunk encoding in the payload.
We were incorrectly using the content length to fallocate the
file, which would cause the filesystem to pad out any unwritten
length to this size with 0s.
The fix here is to make sure we pass the X-Amz-Decoded-Content-Length
as the object size to the backend for all PUTs.
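A minimal sketch of picking the object size for a PUT, assuming a helper
that prefers X-Amz-Decoded-Content-Length when present (helper name and
wiring are illustrative):

package main

import (
    "fmt"
    "net/http"
    "strconv"
)

// objectSize returns the size to pass to the backend for a PUT. With
// aws-chunked transfer encoding, Content-Length includes the chunk framing,
// so the real object size comes from X-Amz-Decoded-Content-Length.
func objectSize(h http.Header, contentLength int64) (int64, error) {
    if v := h.Get("X-Amz-Decoded-Content-Length"); v != "" {
        return strconv.ParseInt(v, 10, 64)
    }
    return contentLength, nil
}

func main() {
    h := http.Header{}
    h.Set("X-Amz-Decoded-Content-Length", "1048576")
    size, _ := objectSize(h, 1049000) // the framed length would over-allocate
    fmt.Println(size)                 // 1048576
}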
Fixes #753
We were handing the URL-escaped string to the backend as the
copy source, which includes "%<hex>" for spaces and other special
characters. The backend would then interpret this escaped value as the
source path. This fixes CopyObject and UploadPartCopy.
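A small illustration of decoding the escaped copy source before handing it
to the backend (the example path and the use of url.PathUnescape are
illustrative of the idea, not the exact code):

package main

import (
    "fmt"
    "net/url"
)

func main() {
    // The x-amz-copy-source header arrives URL-escaped, e.g. "%20" for a
    // space; decode it before treating it as a bucket/object path.
    raw := "my-bucket/folder%20name/file%20one.txt"
    decoded, err := url.PathUnescape(raw)
    if err != nil {
        panic(err)
    }
    fmt.Println(decoded) // my-bucket/folder name/file one.txt
}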
Fixes #749
The AWS spec for the create multipart upload response is:
<?xml version="1.0" encoding="UTF-8"?>
<InitiateMultipartUploadResult>
<Bucket>string</Bucket>
<Key>string</Key>
<UploadId>string</UploadId>
</InitiateMultipartUploadResult>
So we need the return type to marshal to this XML format.
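A sketch of a Go type that marshals to the shape above (the field layout
follows the spec shown; the actual type in the codebase may differ):

package main

import (
    "encoding/xml"
    "fmt"
)

// InitiateMultipartUploadResult marshals to the response shape shown above.
type InitiateMultipartUploadResult struct {
    XMLName  xml.Name `xml:"InitiateMultipartUploadResult"`
    Bucket   string   `xml:"Bucket"`
    Key      string   `xml:"Key"`
    UploadId string   `xml:"UploadId"`
}

func main() {
    out, _ := xml.MarshalIndent(InitiateMultipartUploadResult{
        Bucket:   "my-bucket",
        Key:      "my-key",
        UploadId: "abc123",
    }, "", "  ")
    fmt.Println(xml.Header + string(out))
}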
In copy-object, if the source and destination are the same then
X-Amz-Metadata-Directive must be set to "REPLACE" in order to use
this api call to update the metadata of the object in place.
The default X-Amz-Metadata-Directive is "COPY" if not specified.
"COPY" is only valid if source and destination are not the same
object.
When "REPLACE" selected, metadata does not have to differ for the
call to be successful. The "REPLACE" always sets the incoming
metadata (even if empty or the same as the source).
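For illustration, updating metadata in place from the client side with the
AWS SDK for Go v2 might look like this (bucket, key, and metadata values
are placeholders):

package main

import (
    "context"
    "log"

    "github.com/aws/aws-sdk-go-v2/aws"
    "github.com/aws/aws-sdk-go-v2/config"
    "github.com/aws/aws-sdk-go-v2/service/s3"
    "github.com/aws/aws-sdk-go-v2/service/s3/types"
)

func main() {
    cfg, err := config.LoadDefaultConfig(context.TODO())
    if err != nil {
        log.Fatal(err)
    }
    client := s3.NewFromConfig(cfg)

    // Copying an object onto itself requires MetadataDirective=REPLACE;
    // the supplied metadata replaces the existing metadata in place.
    _, err = client.CopyObject(context.TODO(), &s3.CopyObjectInput{
        Bucket:            aws.String("my-bucket"),
        Key:               aws.String("my-object"),
        CopySource:        aws.String("my-bucket/my-object"),
        MetadataDirective: types.MetadataDirectiveReplace,
        Metadata:          map[string]string{"purpose": "updated-in-place"},
    })
    if err != nil {
        log.Fatal(err)
    }
}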
Fixes #734
The API handlers and backend were stripping the trailing "/" in object
paths. So if an object exists and a head/get/delete/copy request came in
for that same name but with a trailing "/", indicating the request should
be for the directory object, the "/" would be stripped and the request
would be handled for the incorrect file object.
This fix adds checks to handle the case with the trailing "/"
in the request.
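A minimal sketch of the distinction the handlers now need to preserve (the
helper is illustrative):

package main

import (
    "fmt"
    "strings"
)

// isDirObject reports whether a head/get/delete/copy request targets the
// directory object ("name/") rather than the file object ("name"). The
// trailing '/' must survive any path cleanup for this to work.
func isDirObject(objectPath string) bool {
    return strings.HasSuffix(objectPath, "/")
}

func main() {
    fmt.Println(isDirObject("photos"))  // false -> file object "photos"
    fmt.Println(isDirObject("photos/")) // true  -> directory object "photos/"
}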
Fixes #709
AWS uses binary/octet-stream for the default content type if the
client doesn't specify the content type. Change the default for
the gateway to match this behavior.
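A tiny sketch of the fallback, assuming an illustrative helper name:

package main

import "fmt"

// defaultContentType mirrors AWS behavior: when the client does not supply
// a Content-Type, fall back to binary/octet-stream.
const defaultContentType = "binary/octet-stream"

func contentTypeOrDefault(ct string) string {
    if ct == "" {
        return defaultContentType
    }
    return ct
}

func main() {
    fmt.Println(contentTypeOrDefault(""))           // binary/octet-stream
    fmt.Println(contentTypeOrDefault("text/plain")) // text/plain
}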
Fixes #697
Bucket ACLs are now disabled by default the same as AWS.
By default the object ownership is BucketOwnerEnforced
which means that bucket ACLs are disabled. If one attempts
to set a bucket ACL, the following error is returned both in
the gateway and on AWS:
ErrAclNotSupported: {
Code: "AccessControlListNotSupported",
Description: "The bucket does not allow ACLs",
HTTPStatusCode: http.StatusBadRequest,
},
ACLs can be enabled with PutBucketOwnershipControls
Changed bucket canned ACL translation
New backend interface methods:
PutBucketOwnershipControls
GetBucketOwnershipControls
DeleteBucketOwnershipControls
Added these to metrics
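An illustrative sketch of what the new interface methods could look like
(the package, signatures, and ownership type here are assumptions, not the
actual backend interface):

package backend

import "context"

// ObjectOwnership mirrors the S3 ownership settings; BucketOwnerEnforced
// disables bucket ACLs.
type ObjectOwnership string

const (
    BucketOwnerEnforced  ObjectOwnership = "BucketOwnerEnforced"
    BucketOwnerPreferred ObjectOwnership = "BucketOwnerPreferred"
    ObjectWriter         ObjectOwnership = "ObjectWriter"
)

// OwnershipControls is an illustrative subset of a backend interface with
// the three new methods.
type OwnershipControls interface {
    PutBucketOwnershipControls(ctx context.Context, bucket string, ownership ObjectOwnership) error
    GetBucketOwnershipControls(ctx context.Context, bucket string) (ObjectOwnership, error)
    DeleteBucketOwnershipControls(ctx context.Context, bucket string) error
}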
The restore object API request handler was incorrectly trying to
unmarshal the request body, but for the standard (all?) case the
request body is empty. We only need the bucket and object params
for now.
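A rough sketch of the handler shape after the fix, using a plain net/http
handler for illustration (the real routing and backend call differ; nothing
is unmarshaled from the empty body):

package main

import (
    "fmt"
    "net/http"
    "strings"
)

// restoreObjectHandler only pulls the bucket and object names from the
// request path. Path layout /<bucket>/<object...> is assumed for this sketch.
func restoreObjectHandler(w http.ResponseWriter, r *http.Request) {
    bucket, object, ok := strings.Cut(strings.TrimPrefix(r.URL.Path, "/"), "/")
    if !ok || bucket == "" || object == "" {
        http.Error(w, "invalid restore request", http.StatusBadRequest)
        return
    }
    fmt.Printf("restore requested for %s/%s\n", bucket, object)
    w.WriteHeader(http.StatusOK)
}

func main() {
    http.HandleFunc("/", restoreObjectHandler)
    _ = http.ListenAndServe(":8080", nil)
}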
This also adds a fix to actually honor the enable glacier mode
in scoutfs.
Added support to enable object lock on bucket creation in posix and azure
backends.
Implemented the logic to add object legal hold and retention on object creation
in azure and posix backends.
Added the functionality for HeadObject to return object lock related headers.
Added integration tests for these features.
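For illustration, the object lock headers a HeadObject response carries
might be set like this (the helper and values are placeholders; the header
names are the standard S3 object lock headers):

package main

import (
    "fmt"
    "net/http"
    "time"
)

// writeObjectLockHeaders adds the standard S3 object lock headers to a
// HeadObject response. The retention values here are placeholders.
func writeObjectLockHeaders(h http.Header, mode string, retainUntil time.Time, legalHold bool) {
    h.Set("x-amz-object-lock-mode", mode) // GOVERNANCE or COMPLIANCE
    h.Set("x-amz-object-lock-retain-until-date", retainUntil.UTC().Format(time.RFC3339))
    if legalHold {
        h.Set("x-amz-object-lock-legal-hold", "ON")
    } else {
        h.Set("x-amz-object-lock-legal-hold", "OFF")
    }
}

func main() {
    h := http.Header{}
    writeObjectLockHeaders(h, "COMPLIANCE", time.Now().Add(24*time.Hour), true)
    fmt.Println(h)
}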
Make the code scanners happy with a bounds check before we do the
integer conversion from int64 to int, since this can overflow on
32 bit platforms.
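A minimal sketch of such a bounds check (the helper name is illustrative):

package main

import (
    "errors"
    "fmt"
    "math"
)

// toInt converts an int64 to int, guarding against overflow on platforms
// where int is 32 bits.
func toInt(v int64) (int, error) {
    if v > math.MaxInt || v < math.MinInt {
        return 0, errors.New("value out of range for int")
    }
    return int(v), nil
}

func main() {
    n, err := toInt(1 << 40) // errors on 32-bit builds, succeeds on 64-bit
    fmt.Println(n, err)
}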
The best error to return here is a signature error, since this is a
client problem and the chunk headers are considered part of the
request signature.