The IAM vault client stores an access token once authenticated,
but this token will expire after a certain amount of time set
by the server generating the token. Once this token is expired
or revoked, it can no longer be used by the vault client. So
the client should refresh the token whenever an error indicates
that the token is expired or revoked.
Fixes #976
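A minimal sketch of the refresh-on-error behavior, with placeholder
names (client, authenticate, get, errTokenInvalid) standing in for the
actual vault client API:

```go
package main

import (
	"errors"
	"fmt"
)

// errTokenInvalid stands in for whatever vault error indicates an
// expired or revoked token; the real client would match the specific
// error codes returned by the server.
var errTokenInvalid = errors.New("token is expired or revoked")

// client is a placeholder for the IAM vault client holding a cached token.
type client struct{ token string }

// authenticate obtains a fresh access token from the vault server.
func (c *client) authenticate() error {
	c.token = "fresh-token"
	return nil
}

// get fails when the cached token is missing, expired, or revoked.
func (c *client) get(key string) (string, error) {
	if c.token == "" {
		return "", errTokenInvalid
	}
	return "value-for-" + key, nil
}

// getWithRefresh re-authenticates and retries once when the error
// indicates the cached token is no longer valid.
func (c *client) getWithRefresh(key string) (string, error) {
	val, err := c.get(key)
	if errors.Is(err, errTokenInvalid) {
		if aerr := c.authenticate(); aerr != nil {
			return "", aerr
		}
		return c.get(key)
	}
	return val, err
}

func main() {
	c := &client{} // no token cached yet: first get fails and triggers a refresh
	val, err := c.getWithRefresh("access-key")
	fmt.Println(val, err)
}
```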
The debuglogger logs are only printed when debug is enabled,
but we always want the internal server error logs to be logged
by the service, since these usually indicate an actionable error
that needs to be addressed in the backend storage system.
This changes internal server error logs to always be sent to
stderr.
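A rough sketch of the intended split, assuming a simple debug flag; the
actual debuglogger package likely differs:

```go
package main

import (
	"log"
	"os"
)

var debugEnabled = false // debug logging off by default

// debugLog prints only when debug is enabled.
func debugLog(format string, args ...any) {
	if debugEnabled {
		log.Printf(format, args...)
	}
}

// internalErrLog always writes to stderr so actionable backend storage
// errors are never silently dropped when debug is disabled.
var internalErrLog = log.New(os.Stderr, "internal error: ", log.LstdFlags)

func main() {
	debugLog("only visible with debug enabled")
	internalErrLog.Println("backend storage failure: disk full")
}
```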
Add helper util auth.UpdateBucketACLOwner() that sets a new
default ACL based on the new owner and removes the old bucket
policy. The ChangeBucketOwner() method remains in the
backend.Backend interface in case there is ever a backend that
needs to manage ownership in some way other than with bucket
ACLs. The arguments have changed to clarify the updated owner.
This will break any plugins implementing the old interface. They
should use the new auth.UpdateBucketACLOwner() or implement the
corresponding change specific to the backend.
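A hypothetical illustration of the split described above; the interface
and helper signatures below are invented for the example and are not the
actual versitygw definitions:

```go
package main

import (
	"context"
	"fmt"
)

// Backend is a stand-in for the backend interface; ChangeBucketOwner stays
// so a backend can manage ownership some way other than bucket ACLs.
type Backend interface {
	ChangeBucketOwner(ctx context.Context, bucket, newOwner string) error
}

// updateBucketACLOwner is a stand-in for the new auth helper: it resets
// the default ACL to the new owner and drops the old bucket policy.
func updateBucketACLOwner(ctx context.Context, bucket, newOwner string) error {
	fmt.Printf("bucket %q: default ACL owner set to %q, old policy removed\n",
		bucket, newOwner)
	return nil
}

// aclBackend shows a backend that simply delegates to the ACL helper.
type aclBackend struct{}

func (aclBackend) ChangeBucketOwner(ctx context.Context, bucket, newOwner string) error {
	return updateBucketACLOwner(ctx, bucket, newOwner)
}

func main() {
	var b Backend = aclBackend{}
	_ = b.ChangeBucketOwner(context.Background(), "mybucket", "newowner")
}
```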
This fixes a panic seen when there were a lot of multipart uploads in the
same bucket, requiring multiple paginated responses. For example:
panic: runtime error: index out of range [11455] with length 1000
goroutine 418 [running]:
github.com/versity/versitygw/backend/posix.(*Posix).ListMultipartUploads(0xc0004300
/Users/ben/repo/versitygw/backend/posix/posix.go:2122 +0xd25
github.com/versity/versitygw/s3api/controllers.S3ApiController.ListActions({{0x183c
...
This change updates the ListMultipartUploads implementation to properly advance
past the (KeyMarker, UploadIDMarker) tuple when paginating, ensuring that each
response starts after the marker and does not include duplicate uploads.
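A simplified sketch of the marker handling the fix describes: when
paginating, resume strictly after the (KeyMarker, UploadIDMarker) tuple
rather than indexing into the previous page. The types here are
illustrative only.

```go
package main

import "fmt"

// upload is a simplified (key, uploadID) pair, sorted the way a listing
// would return them.
type upload struct{ key, uploadID string }

// listAfterMarker returns up to maxUploads entries that sort strictly after
// the (keyMarker, uploadIDMarker) tuple, so successive pages never repeat
// or re-index entries from a previous page.
func listAfterMarker(all []upload, keyMarker, uploadIDMarker string, maxUploads int) []upload {
	var page []upload
	for _, u := range all {
		// skip everything at or before the marker tuple
		if u.key < keyMarker ||
			(u.key == keyMarker && u.uploadID <= uploadIDMarker) {
			continue
		}
		page = append(page, u)
		if len(page) == maxUploads {
			break
		}
	}
	return page
}

func main() {
	all := []upload{{"a", "1"}, {"a", "2"}, {"b", "1"}, {"c", "1"}}
	fmt.Println(listAfterMarker(all, "a", "2", 2)) // [{b 1} {c 1}]
}
```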
We were incorrectly passing the admin request actions through
to the backend s3 service in s3proxy. This was resulting in
internal server errors, since not all s3 backends understand
these requests. Instead, the gateway needs to handle these
requests directly.
Fixes #1381
An update to fasthttp has deprecated the VisitAll() method in
favor of an iterator function All() that can be used to range
over all headers.
This should fix the staticcheck warnings for calling the
deprecated function.
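A before/after sketch of the migration, assuming All() returns a
range-over-func iterator as described above (Go 1.23+):

```go
package main

import (
	"fmt"

	"github.com/valyala/fasthttp"
)

func main() {
	var req fasthttp.Request
	req.Header.Set("X-Example", "value")

	// Deprecated callback style that triggers the staticcheck warning:
	req.Header.VisitAll(func(key, value []byte) {
		fmt.Printf("%s: %s\n", key, value)
	})

	// Iterator style; All() can be ranged over directly.
	for key, value := range req.Header.All() {
		fmt.Printf("%s: %s\n", key, value)
	}
}
```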
The IPA service connections have been observed to occasionally
fail on the first network connection attempt. Add retry logic
for errors that appear to be transient network issues.
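A generic sketch of the retry approach, with the transient-error test,
attempt count, and delay as placeholder choices rather than the actual
values used for the IPA service:

```go
package main

import (
	"errors"
	"fmt"
	"net"
	"time"
)

// isTransient reports whether the error looks like a temporary network
// problem worth retrying; the real check may cover more cases.
func isTransient(err error) bool {
	var nerr net.Error
	return errors.As(err, &nerr) && nerr.Timeout()
}

// withRetry runs fn, retrying a few times with a short delay when the
// failure appears to be a transient network issue.
func withRetry(attempts int, delay time.Duration, fn func() error) error {
	var err error
	for i := 0; i < attempts; i++ {
		if err = fn(); err == nil || !isTransient(err) {
			return err
		}
		time.Sleep(delay)
	}
	return err
}

func main() {
	err := withRetry(3, 500*time.Millisecond, func() error {
		_, err := net.DialTimeout("tcp", "ipa.example.com:443", time.Second)
		return err
	})
	fmt.Println(err)
}
```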
The CRC32, CRC32c, and CRC64NVME data integrity checksums support calculating
the composite full object values for multi-part uploads using the checksum
and length of the individual parts.
Previously, we were reading all of the part data to recalculate the full
object checksum values during the complete multipart upload call. This
disabled the optimized copy_file_range() for certain filesystems such as
XFS because the part data was being read. If the data is not read, and
the file handle is passed directly to io.Copy(), then the filesystem is
allowed to optimize the copying of the data from the source to destination
files.
This now allows both the copy_file_range() optimizations and the data
integrity features for the supported composite checksum types.
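An illustrative sketch of the copy path change: when both ends are
*os.File, io.Copy can hand the copy off to the kernel (copy_file_range()
on Linux) instead of reading the part data into user space. The file
names here are placeholders.

```go
package main

import (
	"io"
	"log"
	"os"
)

func main() {
	src, err := os.Open("part.dat") // an uploaded part on disk
	if err != nil {
		log.Fatal(err)
	}
	defer src.Close()

	dst, err := os.Create("object.dat") // the final object being assembled
	if err != nil {
		log.Fatal(err)
	}
	defer dst.Close()

	// Because both ends are *os.File, io.Copy can defer to the kernel
	// rather than copying bytes through the gateway, which also means
	// the part data is never re-read here to recompute checksums.
	if _, err := io.Copy(dst, src); err != nil {
		log.Fatal(err)
	}
}
```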
Implements a middleware that validates incoming bucket and object names before authentication. This helps prevent malicious attacks that attempt to access restricted or unreachable data in `POSIX`.
Adds test cases to cover such attack scenarios, including false negatives where encoded paths are used to try accessing resources outside the intended bucket.
Removes bucket validation from all other layers, including `controllers` and both the `POSIX` and `ScoutFS` backends, by moving the logic entirely into the middleware layer.
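A rough sketch of the kind of name check such a middleware can perform
before authentication; the rules enforced by the actual middleware may be
broader:

```go
package main

import (
	"fmt"
	"regexp"
)

// bucketNameRE approximates the S3 bucket naming rules: 3-63 characters of
// lowercase letters, digits, dots, and hyphens, starting and ending with a
// letter or digit.
var bucketNameRE = regexp.MustCompile(`^[a-z0-9][a-z0-9.-]{1,61}[a-z0-9]$`)

// validBucketName reports whether the request's bucket name is acceptable
// before the request is allowed to proceed to authentication and the backend.
func validBucketName(name string) bool {
	return bucketNameRE.MatchString(name)
}

func main() {
	fmt.Println(validBucketName("my-bucket"))    // true
	fmt.Println(validBucketName("../etc"))       // false
	fmt.Println(validBucketName("..%2f..%2fpw")) // false: encoded traversal attempt
}
```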
The http body stream is not a seekable stream, so most operation
retry attempts will fail with an internal server error. This
change tells the s3 client within the gateway to not retry any
requests, and instead let the client of the gateway handle the
error retry.
Fixes #1353
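A hedged sketch of disabling client-side retries with aws-sdk-go-v2,
which this change likely resembles; the exact option wiring inside
s3proxy may differ:

```go
package main

import (
	"context"
	"log"

	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/config"
	"github.com/aws/aws-sdk-go-v2/service/s3"
)

func main() {
	cfg, err := config.LoadDefaultConfig(context.TODO())
	if err != nil {
		log.Fatal(err)
	}

	// Disable retries entirely: the request body is a non-seekable
	// stream, so a retry could not rewind it anyway. The client of the
	// gateway is expected to handle retrying the failed request.
	client := s3.NewFromConfig(cfg, func(o *s3.Options) {
		o.Retryer = aws.NopRetryer{}
	})
	_ = client
}
```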
We were using the metadata retrieval to check for existing
buckets during create, and then return either BucketAlreadyExists
or ErrBucketAlreadyOwnedByYou accordingly.
However, the metadata retrieval was returning success with a
default ACL when the bucket metadata did not already exist,
causing the gateway to always think the bucket existed.
The fix here is to let the metadata retrieval know that we do
not want the default ACL in this case.
This implementation introduces **public buckets**, which are accessible without signature-based authentication.
There are two ways to grant public access to a bucket:
* **Bucket ACLs**
* **Bucket Policies**
Only `Get` and `List` operations are permitted on public buckets. All **write operations** require authentication, regardless of whether public access is granted through an ACL or a policy.
The implementation includes an `AuthorizePublicBucketAccess` middleware, which checks if public access has been granted to the bucket. If so, authentication middlewares are skipped. For unauthenticated requests, appropriate errors are returned based on the specific S3 action.
---
**1. Bucket-Level Operations:**
```json
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": "*",
            "Action": "s3:*",
            "Resource": "arn:aws:s3:::test"
        }
    ]
}
```
**2. Object-Level Operations:**
```json
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": "*",
            "Action": "s3:*",
            "Resource": "arn:aws:s3:::test/*"
        }
    ]
}
```
**3. Both Bucket and Object-Level Operations:**
```json
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": "*",
            "Action": "s3:*",
            "Resource": "arn:aws:s3:::test"
        },
        {
            "Effect": "Allow",
            "Principal": "*",
            "Action": "s3:*",
            "Resource": "arn:aws:s3:::test/*"
        }
    ]
}
```
---
```sh
aws s3api create-bucket --bucket test --object-ownership BucketOwnerPreferred
aws s3api put-bucket-acl --bucket test --acl public-read
```
This adds an object name validation util to check if the object
path would resolve to a path outside of the bucket directory.
S3 returns Bad Request for these types of paths:
% aws s3api put-object --bucket mybucket --key test/../../hello
An error occurred (400) when calling the PutObject operation: Bad Request
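A minimal sketch of the traversal check, assuming a simple path-cleaning
approach; the function name and exact rules of the actual util may
differ:

```go
package main

import (
	"fmt"
	"path"
	"strings"
)

// objectEscapesBucket reports whether the object key would resolve to a
// path outside the bucket directory, e.g. "test/../../hello".
func objectEscapesBucket(key string) bool {
	cleaned := path.Clean(key)
	// If any ".." survives cleaning at the front of the key, the path
	// climbs above the bucket root and should be rejected.
	return cleaned == ".." || strings.HasPrefix(cleaned, "../")
}

func main() {
	fmt.Println(objectEscapesBucket("test/../../hello")) // true: rejected with 400 Bad Request
	fmt.Println(objectEscapesBucket("test/hello"))       // false: allowed
}
```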