We were trying to parse a non-existent ACL and returning an
internal server error due to invalid JSON ACL data.
If the bucket ACL does not exist, return a default ACL with the
root account as the owner.
This fixes #1060, but does not address the invalid ACL format
from s3cmd reported in #963.
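A minimal sketch of the fallback, with hypothetical type and
function names (the gateway's actual ACL shape may differ):

    package main

    import (
        "encoding/json"
        "fmt"
    )

    // ACL is a hypothetical ACL shape for illustration only.
    type ACL struct {
        Owner string `json:"owner"`
    }

    // getACLOrDefault returns the stored ACL, or a default ACL owned by
    // the root account when no ACL has been stored for the bucket.
    func getACLOrDefault(stored []byte, rootAccount string) (ACL, error) {
        // No stored ACL: fall back to a default owned by the root
        // account instead of attempting to parse empty/invalid JSON.
        if len(stored) == 0 {
            return ACL{Owner: rootAccount}, nil
        }
        var acl ACL
        if err := json.Unmarshal(stored, &acl); err != nil {
            return ACL{}, fmt.Errorf("parse acl: %w", err)
        }
        return acl, nil
    }

    func main() {
        acl, _ := getACLOrDefault(nil, "root")
        fmt.Println(acl.Owner) // root
    }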
We were getting errors such as:
2025/02/07 19:24:28 Internal Error, write object data: write exceeds content length 87
whenever the chunk encoding did not have the correct chunk
signatures. The issue was that the chunk encoding reader
was reading from the underlying reader and, when the underlying
reader returned an error, passing the full buffer read back to
the caller. This meant that we were not processing the chunk
headers within the buffer due to the higher-level error, and
could hand the longer, unprocessed chunk-encoded stream back to
the caller, which in turn tried to write it to the object file
and exceeded the content length limit.
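A minimal sketch of the corrected Read behavior, with simplified
stand-in names for the reader and its chunk parsing:

    package main

    import (
        "fmt"
        "io"
        "strings"
    )

    // chunkReader is a simplified stand-in for the gateway's ChunkReader.
    type chunkReader struct {
        underlying io.Reader
    }

    // parseAndRemoveChunkInfo is a hypothetical stand-in: it would strip
    // the chunk headers/signatures from buf in place and return the
    // decoded payload length.
    func (r *chunkReader) parseAndRemoveChunkInfo(buf []byte) (int, error) {
        return len(buf), nil // real parsing elided
    }

    // Read decodes whatever was buffered before surfacing the underlying
    // reader's error, so the caller never receives raw chunk-encoded
    // bytes longer than the decoded content length.
    func (r *chunkReader) Read(p []byte) (int, error) {
        n, err := r.underlying.Read(p)
        if n > 0 {
            decoded, perr := r.parseAndRemoveChunkInfo(p[:n])
            if perr != nil {
                return 0, perr
            }
            n = decoded
        }
        return n, err
    }

    func main() {
        r := &chunkReader{underlying: strings.NewReader("example payload")}
        buf := make([]byte, 32)
        n, err := r.Read(buf)
        fmt.Println(n, err)
    }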
Fixes #1056
An invalid chunk encoding, or parse errors leading to parsing
invalid data, can cause a server panic if the remaining chunk
header is determined to be larger than the max buffer size.
This was previously seen when a client used the chunk trailer
checksums without server-side support for that encoding.
Example panic:
panic: runtime error: slice bounds out of range [4088:1024]
goroutine 5 [running]:
github.com/versity/versitygw/s3api/utils.(*ChunkReader).parseChunkHeaderBytes(0xc0003c4280, {0xc0000e6000?, 0x3000?, 0x423525?})
/home/tester/s3api/utils/chunk-reader.go:242 +0x492
github.com/versity/versitygw/s3api/utils.(*ChunkReader).parseAndRemoveChunkInfo(0xc0003c4280, {0xc0000e6000, 0x3000, 0x8000})
/home/tester/s3api/utils/chunk-reader.go:170 +0x20b
github.com/versity/versitygw/s3api/utils.(*ChunkReader).Read(0xc0003c4280, {0xc0000e6000, 0xc0000b41e0?, 0x8000})
/home/tester/s3api/utils/chunk-reader.go:91 +0x11e
This fix validates the data length before copying into the
temporary buffer, preventing the panic and instead returning
an error.
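A minimal sketch of the added validation; buffer sizes and the
function name are illustrative:

    package main

    import (
        "errors"
        "fmt"
    )

    // copyHeaderBytes validates the source length before copying so a
    // malformed chunk header returns an error instead of panicking with
    // "slice bounds out of range".
    func copyHeaderBytes(dst, src []byte) ([]byte, error) {
        if len(src) > len(dst) {
            return nil, errors.New("invalid chunk header: exceeds max buffer size")
        }
        return dst[:copy(dst, src)], nil
    }

    func main() {
        buf := make([]byte, 1024)
        _, err := copyHeaderBytes(buf, make([]byte, 4096))
        fmt.Println(err) // error returned instead of a panic
    }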
This fixes put object when setting a content type. If no content
type is set, then we return a default content type for
subsequent requests. Multipart upload appears to be ok.
Also fixed content encoding and multipart uploads.
Fixes #783
The "x-amz-copy-source" header may start with '/' as observed with
WinSCP. However, '/' is also the separator between the bucket and the
object path in "x-amz-copy-source".
Consider the following code in VerifyObjectCopyAccess():
srcBucket, srcObject, found := strings.Cut(copySource, "/")
If `copySource` starts with '/', then `srcBucket` is set to an empty
string. Later, an error is returned because bucket "" does not exist.
This issue was fixed in the Posix and Azure backends by the following
commit:
* 5e484f2 fix: Fixed CopySource parsing to handle the values starting with '/' in CopyObject action in posix and azure backends.
But the issue was not fixed in `VerifyObjectCopyAccess`.
This commit sanitizes "x-amz-copy-source" right after the header is
extracted in `s3api/controllers/base.go`. This ensures that the
`CopySource` argument passed to the backend functions UploadPartCopy()
and CopyObject() does not start with '/'. Since the backends no longer
need to strip away any leading '/' in `CopySource`, the parts of
commit 5e484f2 modifying the Posix and Azure backends are reverted.
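For illustration, the sanitizing can be as simple as trimming the
optional leading '/' before the split (the helper name here is
hypothetical; the gateway does this in the controller):

    package main

    import (
        "fmt"
        "strings"
    )

    // parseCopySource tolerates the optional leading '/' (as sent by
    // WinSCP) before splitting the header into bucket and object key.
    func parseCopySource(header string) (bucket, object string, found bool) {
        return strings.Cut(strings.TrimPrefix(header, "/"), "/")
    }

    func main() {
        b, o, _ := parseCopySource("/srcbucket/path/to/object")
        fmt.Println(b, o) // srcbucket path/to/object
    }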
Fixes issue #773.
Signed-off-by: Christophe Vu-Brugier <christophe.vu-brugier@seagate.com>
Changed the ListObjectsV2 and ListObjects actions' return types from
*s3.ListObjects(V2)Output to s3response.ListObjects(V2)Result.
Changed the object listing timestamps to RFC3339 to match the AWS
S3 object timestamps.
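For reference, a minimal example of producing the RFC3339 format
in Go:

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        // S3 listings report LastModified in RFC3339,
        // e.g. 2025-02-07T19:24:28Z.
        fmt.Println(time.Now().UTC().Format(time.RFC3339))
    }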
Fixes #752
When on Linux with O_TMPFILE support, we issue an fallocate for
the expected object size as an optimization for the underlying
filesystem to allocate the full file all at once. With chunked
transfer encoding, the final object size is recorded in
X-Amz-Decoded-Content-Length instead of the standard ContentLength,
which includes the chunk encoding in the payload.
We were incorrectly using the content length to fallocate the
file, which would cause the filesystem to pad out any unwritten
length to this size with 0s.
The fix here is to make sure we pass X-Amz-Decoded-Content-Length
as the object size to the backend for all PUTs.
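A minimal sketch of the size selection, with header access
simplified to a map; the function name is illustrative:

    package main

    import (
        "fmt"
        "strconv"
    )

    // objectSize picks the size to preallocate for a PUT: with
    // aws-chunked payloads the real object size is in
    // X-Amz-Decoded-Content-Length; Content-Length also counts the
    // chunk framing and would over-allocate.
    func objectSize(headers map[string]string) (int64, error) {
        if v, ok := headers["X-Amz-Decoded-Content-Length"]; ok {
            return strconv.ParseInt(v, 10, 64)
        }
        return strconv.ParseInt(headers["Content-Length"], 10, 64)
    }

    func main() {
        size, _ := objectSize(map[string]string{
            "Content-Length":               "1097", // includes chunk headers
            "X-Amz-Decoded-Content-Length": "1024", // actual object bytes
        })
        fmt.Println(size) // 1024
    }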
Fixes #753
We were handing the URL-escaped string to the backend as the
copy source, which includes "%<hex>" for spaces and other special
characters. The backend would then interpret this as the source
path. This fixes CopyObject and UploadPartCopy.
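A minimal sketch of unescaping the copy source before handing it
to the backend; whether the gateway uses url.PathUnescape or
url.QueryUnescape is an assumption here:

    package main

    import (
        "fmt"
        "net/url"
    )

    func main() {
        // The header arrives URL-escaped; unescape it before treating it
        // as a source path so "my%20object" resolves to "my object".
        src, err := url.PathUnescape("srcbucket/my%20object")
        if err != nil {
            panic(err)
        }
        fmt.Println(src) // srcbucket/my object
    }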
Fixes #749
The AWS spec for the create multipart upload response is:
<?xml version="1.0" encoding="UTF-8"?>
<InitiateMultipartUploadResult>
<Bucket>string</Bucket>
<Key>string</Key>
<UploadId>string</UploadId>
</InitiateMultipartUploadResult>
So we need the return type to marshal to this xml format.
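For illustration, a struct that marshals to this shape (the exact
layout is assumed, not necessarily the gateway's actual type):

    package main

    import (
        "encoding/xml"
        "fmt"
    )

    // InitiateMultipartUploadResult marshals to the XML shape in the
    // AWS spec above; field tags match the spec's element names.
    type InitiateMultipartUploadResult struct {
        XMLName  xml.Name `xml:"InitiateMultipartUploadResult"`
        Bucket   string   `xml:"Bucket"`
        Key      string   `xml:"Key"`
        UploadId string   `xml:"UploadId"`
    }

    func main() {
        out, _ := xml.MarshalIndent(InitiateMultipartUploadResult{
            Bucket: "mybucket", Key: "mykey", UploadId: "abc123",
        }, "", "  ")
        fmt.Println(xml.Header + string(out))
    }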
In copy-object, if the source and destination are the same then
X-Amz-Metadata-Directive must be set to "REPLACE" in order to use
this api call to update the metadata of the object in place.
The default X-Amz-Metadata-Directive is "COPY" if not specified.
"COPY" is only valid if source and destination are not the same
object.
When "REPLACE" selected, metadata does not have to differ for the
call to be successful. The "REPLACE" always sets the incoming
metadata (even if empty or the same as the source).
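A minimal sketch of the directive rule described above;
identifiers are illustrative, not the gateway's actual ones:

    package main

    import (
        "errors"
        "fmt"
    )

    // validateDirective enforces the rule that copying an object onto
    // itself is only allowed with the REPLACE directive.
    func validateDirective(source, dest, directive string) error {
        if directive == "" {
            directive = "COPY" // AWS default when the header is absent
        }
        if source == dest && directive != "REPLACE" {
            return errors.New("copy to self requires " +
                "X-Amz-Metadata-Directive: REPLACE")
        }
        return nil
    }

    func main() {
        fmt.Println(validateDirective("b/o", "b/o", "COPY"))    // error
        fmt.Println(validateDirective("b/o", "b/o", "REPLACE")) // <nil>
    }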
Fixes #734
The API handlers and backend were stripping the trailing "/" in object
paths. So if an object existed and a head/get/delete/copy request came
in for that same name but with a trailing "/", indicating the request
should be for a directory object, the "/" would be stripped and the
request would be handled for the incorrect file object.
This fix adds checks to handle the case with the trailing "/"
in the request.
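A minimal sketch of detecting a directory-object request; the real
handlers also consult the backend for the object's type:

    package main

    import (
        "fmt"
        "strings"
    )

    // isDirObjectRequest reports whether the request path refers to a
    // directory object ("name/") rather than the file object "name".
    func isDirObjectRequest(objectPath string) bool {
        return strings.HasSuffix(objectPath, "/")
    }

    func main() {
        fmt.Println(isDirObjectRequest("photos/")) // true: directory object
        fmt.Println(isDirObjectRequest("photos"))  // false: file object
    }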
Fixes #709
AWS uses binary/octet-stream for the default content type if the
client doesn't specify the content type. Change the default for
the gateway to match this behavior.
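A minimal sketch of the defaulting behavior:

    package main

    import "fmt"

    // defaultContentType matches AWS's default when the client omits
    // the Content-Type header on upload.
    const defaultContentType = "binary/octet-stream"

    func contentType(header string) string {
        if header == "" {
            return defaultContentType
        }
        return header
    }

    func main() {
        fmt.Println(contentType(""))           // binary/octet-stream
        fmt.Println(contentType("text/plain")) // text/plain
    }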
Fixes #697
Bucket ACLs are now disabled by default the same as AWS.
By default the object ownership is BucketOwnerEnforced
which means that bucket ACLs are disabled. If one attempts
to set a bucket ACL, the following error is returned both in
the gateway and on AWS:
ErrAclNotSupported: {
Code: "AccessControlListNotSupported",
Description: "The bucket does not allow ACLs",
HTTPStatusCode: http.StatusBadRequest,
},
ACLs can be enabled with PutBucketOwnershipControls
Changed bucket canned ACL translation
New backend interface methods:
PutBucketOwnershipControls
GetBucketOwnershipControls
DeleteBucketOwnershipControls
Added these to metrics
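A sketch of how the new methods might look on the backend
interface; the parameter and result types here are assumptions,
not the gateway's actual signatures:

    package backend

    import "context"

    // OwnershipControlsBackend sketches the three new interface methods
    // for managing bucket ownership controls.
    type OwnershipControlsBackend interface {
        PutBucketOwnershipControls(ctx context.Context, bucket, ownership string) error
        GetBucketOwnershipControls(ctx context.Context, bucket string) (string, error)
        DeleteBucketOwnershipControls(ctx context.Context, bucket string) error
    }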