Compare commits

..

205 Commits

Author SHA1 Message Date
Ben McClelland
c2c359e9f0 Merge pull request #1534 from versity/test/delete_bucket_tagging_two
test: more list-buckets, bucket tagging tests, dockerfile enhancements
2025-09-16 16:18:28 -07:00
Ben McClelland
6d081f5a3f Merge pull request #1539 from versity/dependabot/go_modules/dev-dependencies-f333cc90b3 2025-09-15 15:07:53 -07:00
Ben McClelland
7797154812 Merge pull request #1533 from versity/ben/list-versions 2025-09-15 14:25:49 -07:00
dependabot[bot]
eb0a8ee0c0 chore(deps): bump the dev-dependencies group with 10 updates
Bumps the dev-dependencies group with 10 updates:

| Package | From | To |
| --- | --- | --- |
| [github.com/Azure/azure-sdk-for-go/sdk/azcore](https://github.com/Azure/azure-sdk-for-go) | `1.19.0` | `1.19.1` |
| [github.com/aws/aws-sdk-go-v2/service/s3](https://github.com/aws/aws-sdk-go-v2) | `1.88.0` | `1.88.1` |
| [github.com/valyala/fasthttp](https://github.com/valyala/fasthttp) | `1.65.0` | `1.66.0` |
| [github.com/aws/aws-sdk-go-v2/service/sso](https://github.com/aws/aws-sdk-go-v2) | `1.29.2` | `1.29.3` |
| [github.com/aws/aws-sdk-go-v2/service/ssooidc](https://github.com/aws/aws-sdk-go-v2) | `1.34.3` | `1.34.4` |
| [github.com/aws/aws-sdk-go-v2/service/sts](https://github.com/aws/aws-sdk-go-v2) | `1.38.3` | `1.38.4` |
| [golang.org/x/net](https://github.com/golang/net) | `0.43.0` | `0.44.0` |
| [github.com/aws/aws-sdk-go-v2/config](https://github.com/aws/aws-sdk-go-v2) | `1.31.7` | `1.31.8` |
| [github.com/aws/aws-sdk-go-v2/credentials](https://github.com/aws/aws-sdk-go-v2) | `1.18.11` | `1.18.12` |
| [github.com/aws/aws-sdk-go-v2/feature/s3/manager](https://github.com/aws/aws-sdk-go-v2) | `1.19.5` | `1.19.6` |


Updates `github.com/Azure/azure-sdk-for-go/sdk/azcore` from 1.19.0 to 1.19.1
- [Release notes](https://github.com/Azure/azure-sdk-for-go/releases)
- [Changelog](https://github.com/Azure/azure-sdk-for-go/blob/main/documentation/sdk-breaking-changes-guide-migration.md)
- [Commits](https://github.com/Azure/azure-sdk-for-go/compare/sdk/azcore/v1.19.0...sdk/azcore/v1.19.1)

Updates `github.com/aws/aws-sdk-go-v2/service/s3` from 1.88.0 to 1.88.1
- [Release notes](https://github.com/aws/aws-sdk-go-v2/releases)
- [Changelog](https://github.com/aws/aws-sdk-go-v2/blob/main/changelog-template.json)
- [Commits](https://github.com/aws/aws-sdk-go-v2/compare/service/s3/v1.88.0...service/s3/v1.88.1)

Updates `github.com/valyala/fasthttp` from 1.65.0 to 1.66.0
- [Release notes](https://github.com/valyala/fasthttp/releases)
- [Commits](https://github.com/valyala/fasthttp/compare/v1.65.0...v1.66.0)

Updates `github.com/aws/aws-sdk-go-v2/service/sso` from 1.29.2 to 1.29.3
- [Release notes](https://github.com/aws/aws-sdk-go-v2/releases)
- [Changelog](https://github.com/aws/aws-sdk-go-v2/blob/main/changelog-template.json)
- [Commits](https://github.com/aws/aws-sdk-go-v2/compare/config/v1.29.2...config/v1.29.3)

Updates `github.com/aws/aws-sdk-go-v2/service/ssooidc` from 1.34.3 to 1.34.4
- [Release notes](https://github.com/aws/aws-sdk-go-v2/releases)
- [Changelog](https://github.com/aws/aws-sdk-go-v2/blob/main/changelog-template.json)
- [Commits](https://github.com/aws/aws-sdk-go-v2/compare/service/amp/v1.34.3...service/iot/v1.34.4)

Updates `github.com/aws/aws-sdk-go-v2/service/sts` from 1.38.3 to 1.38.4
- [Release notes](https://github.com/aws/aws-sdk-go-v2/releases)
- [Changelog](https://github.com/aws/aws-sdk-go-v2/blob/main/changelog-template.json)
- [Commits](https://github.com/aws/aws-sdk-go-v2/compare/v1.38.3...service/s3/v1.38.4)

Updates `golang.org/x/net` from 0.43.0 to 0.44.0
- [Commits](https://github.com/golang/net/compare/v0.43.0...v0.44.0)

Updates `github.com/aws/aws-sdk-go-v2/config` from 1.31.7 to 1.31.8
- [Release notes](https://github.com/aws/aws-sdk-go-v2/releases)
- [Changelog](https://github.com/aws/aws-sdk-go-v2/blob/main/changelog-template.json)
- [Commits](https://github.com/aws/aws-sdk-go-v2/compare/config/v1.31.7...config/v1.31.8)

Updates `github.com/aws/aws-sdk-go-v2/credentials` from 1.18.11 to 1.18.12
- [Release notes](https://github.com/aws/aws-sdk-go-v2/releases)
- [Changelog](https://github.com/aws/aws-sdk-go-v2/blob/main/changelog-template.json)
- [Commits](https://github.com/aws/aws-sdk-go-v2/compare/config/v1.18.11...config/v1.18.12)

Updates `github.com/aws/aws-sdk-go-v2/feature/s3/manager` from 1.19.5 to 1.19.6
- [Release notes](https://github.com/aws/aws-sdk-go-v2/releases)
- [Changelog](https://github.com/aws/aws-sdk-go-v2/blob/main/changelog-template.json)
- [Commits](https://github.com/aws/aws-sdk-go-v2/compare/service/pi/v1.19.5...service/m2/v1.19.6)

---
updated-dependencies:
- dependency-name: github.com/Azure/azure-sdk-for-go/sdk/azcore
  dependency-version: 1.19.1
  dependency-type: direct:production
  update-type: version-update:semver-patch
  dependency-group: dev-dependencies
- dependency-name: github.com/aws/aws-sdk-go-v2/service/s3
  dependency-version: 1.88.1
  dependency-type: direct:production
  update-type: version-update:semver-patch
  dependency-group: dev-dependencies
- dependency-name: github.com/valyala/fasthttp
  dependency-version: 1.66.0
  dependency-type: direct:production
  update-type: version-update:semver-minor
  dependency-group: dev-dependencies
- dependency-name: github.com/aws/aws-sdk-go-v2/service/sso
  dependency-version: 1.29.3
  dependency-type: indirect
  update-type: version-update:semver-patch
  dependency-group: dev-dependencies
- dependency-name: github.com/aws/aws-sdk-go-v2/service/ssooidc
  dependency-version: 1.34.4
  dependency-type: indirect
  update-type: version-update:semver-patch
  dependency-group: dev-dependencies
- dependency-name: github.com/aws/aws-sdk-go-v2/service/sts
  dependency-version: 1.38.4
  dependency-type: indirect
  update-type: version-update:semver-patch
  dependency-group: dev-dependencies
- dependency-name: golang.org/x/net
  dependency-version: 0.44.0
  dependency-type: indirect
  update-type: version-update:semver-minor
  dependency-group: dev-dependencies
- dependency-name: github.com/aws/aws-sdk-go-v2/config
  dependency-version: 1.31.8
  dependency-type: direct:production
  update-type: version-update:semver-patch
  dependency-group: dev-dependencies
- dependency-name: github.com/aws/aws-sdk-go-v2/credentials
  dependency-version: 1.18.12
  dependency-type: direct:production
  update-type: version-update:semver-patch
  dependency-group: dev-dependencies
- dependency-name: github.com/aws/aws-sdk-go-v2/feature/s3/manager
  dependency-version: 1.19.6
  dependency-type: direct:production
  update-type: version-update:semver-patch
  dependency-group: dev-dependencies
...

Signed-off-by: dependabot[bot] <support@github.com>
2025-09-15 21:12:54 +00:00
Luke McCrone
31799f48c8 test: more list-buckets, bucket tagging tests, dockerfile enhancements 2025-09-15 14:22:19 -03:00
Ben McClelland
34da18337e fix: lex sort order of listobjectversions backend.WalkVersions
Similar to:
  8e18b43116
  fix: lex sort order of listobjects backend.Walk
But now the "Versions" walk.

The original backend.WalkVersions function used the native WalkDir and ReadDir,
which did not guarantee lexicographic ordering of results in cases where
including the directory slash changes the sort order. This caused incorrect
paginated responses because S3 APIs require strict lexicographic ordering,
where directories with trailing slashes sort correctly relative to files.
For example, dir1/a.b/ must come before dir1/a/ in the results, but
fs.WalkDir was returning them in filesystem sort order, which reversed
them because the trailing "/" was not taken into account.
2025-09-12 11:49:58 -07:00
Ben McClelland
148836bb0c Merge pull request #1529 from nick-stephen/main
fix: #1527 - case-insensitive x-amz-checksum-mode header value
2025-09-12 09:10:48 -07:00
Nick Stephen
18e30127d5 fix: #1527 - case-insensitive x-amz-checksum-mode header value 2025-09-12 11:04:19 +02:00
Ben McClelland
6c0b8ea019 Merge pull request #1515 from versity/ben/list-objects-sort
fix: lex sort order of listobjects backend.Walk
2025-09-10 09:26:02 -07:00
Ben McClelland
8e18b43116 fix: lex sort order of listobjects backend.Walk
The original Walk function used the native WalkDir and ReadDir, which did not
guarantee lexicographic ordering of results in cases where including the
directory slash changes the sort order. This caused incorrect paginated
responses because S3 APIs require strict lexicographic ordering, where
directories with trailing slashes sort correctly relative to files. For
example, dir1/a.b/ must come before dir1/a/ in the results, but fs.WalkDir
was returning them in filesystem sort order, which reversed them because
the trailing "/" was not taken into account.

This also led to cases of continuous looping of paginated listobjects results
when the marker was set out of order from the expected results.

To address this fundamental ordering issue, the entire directory traversal
mechanism was replaced with a custom lexicographic sorting approach. The new
implementation reads each directory's contents using ReadDir, then sorts the
entries using custom sort keys that append trailing slashes to directory paths.
This ensures that dir1/a.b/ correctly sorts before dir1/a/, as well as other
similar failing cases, according to ASCII character ordering rules.

Fixes #1283
2025-09-10 08:57:36 -07:00
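A minimal sketch of the sort-key idea described above (illustrative only, not the actual backend.Walk code): read a directory, then re-sort the entries on keys that carry a trailing slash for directories, so byte-wise comparison matches S3 key ordering.

```go
package main

import (
	"fmt"
	"io/fs"
	"log"
	"os"
	"sort"
)

// sortKey appends a trailing slash to directory names so that byte-wise
// comparison matches S3 lexicographic ordering of object keys, where a
// directory is represented with a trailing "/".
func sortKey(e fs.DirEntry) string {
	if e.IsDir() {
		return e.Name() + "/"
	}
	return e.Name()
}

func main() {
	entries, err := os.ReadDir("dir1")
	if err != nil {
		log.Fatal(err)
	}
	// os.ReadDir sorts by plain filename ("a" < "a.b"), which would list
	// dir1/a/ before dir1/a.b/. Re-sorting on slash-terminated keys puts
	// dir1/a.b/ first, which is what S3 pagination expects.
	sort.Slice(entries, func(i, j int) bool {
		return sortKey(entries[i]) < sortKey(entries[j])
	})
	for _, e := range entries {
		fmt.Println(sortKey(e))
	}
}
```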
Ben McClelland
406161ba10 Merge pull request #1524 from versity/sis/object-get-part-number
fix: NotImplemented for GetObject/HeadObject PartNumber
2025-09-10 08:54:54 -07:00
Ben McClelland
dd91cecd00 Merge pull request #1522 from versity/sis/conditional-writes
feat: implement conditional writes
2025-09-10 08:54:04 -07:00
niksis02
2bb8a1eeb7 fix: NotImplemented for GetObject/HeadObject PartNumber
Fixes #1520

Removes the incorrect logic that had HeadObject return a successful response when querying an incomplete multipart upload.

Implements the logic to return a `NotImplemented` error if `GetObject`/`HeadObject` is attempted with `partNumber` in the Azure and POSIX backends. The front-end part is preserved for use in the S3 proxy backend.
2025-09-09 22:40:36 +04:00
Ben McClelland
3375689010 Merge pull request #1516 from versity/test/delete_bucket_tagging
Test/more list buckets, general coverage
2025-09-09 11:06:01 -07:00
Ben McClelland
c206f6414e Merge pull request #1523 from versity/dependabot/go_modules/dev-dependencies-25282f792f
chore(deps): bump the dev-dependencies group with 25 updates
2025-09-09 11:03:17 -07:00
niksis02
7a098b925f feat: implement conditional writes
Closes #821

**Implements conditional operations across object APIs:**

* **PutObject** and **CompleteMultipartUpload**:
  Supports conditional writes with `If-Match` and `If-None-Match` headers (ETag comparisons).
  Evaluation is based on an existing object with the same key in the bucket. The operation is allowed only if the preconditions are satisfied. If no object exists for the key, these headers are ignored.

* **CopyObject** and **UploadPartCopy**:
  Adds conditional reads on the copy source object with the following headers:

  * `x-amz-copy-source-if-match`
  * `x-amz-copy-source-if-none-match`
  * `x-amz-copy-source-if-modified-since`
  * `x-amz-copy-source-if-unmodified-since`
    The first two are ETag comparisons, while the latter two compare against the copy source’s `LastModified` timestamp.

* **AbortMultipartUpload**:
  Supports the `x-amz-if-match-initiated-time` header; the abort succeeds only if the provided timestamp matches the multipart upload's initiation time.

* **DeleteObject**:
  Adds support for:

  * `If-Match` (ETag comparison)
  * `x-amz-if-match-last-modified-time` (LastModified comparison)
  * `x-amz-if-match-size` (object size comparison)

Additionally, this PR updates precondition date parsing logic to support both **RFC1123** and **RFC3339** formats. Dates set in the future are ignored, matching AWS S3 behavior.
2025-09-09 01:55:38 +04:00
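A minimal client-side sketch of one of the new conditional-write headers, assuming the AWS SDK for Go v2 field names (`IfNoneMatch` on `PutObjectInput`); the bucket and key are placeholders.

```go
package main

import (
	"context"
	"log"
	"strings"

	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/config"
	"github.com/aws/aws-sdk-go-v2/service/s3"
)

func main() {
	ctx := context.Background()
	cfg, err := config.LoadDefaultConfig(ctx)
	if err != nil {
		log.Fatal(err)
	}
	client := s3.NewFromConfig(cfg)

	// Create-if-absent: the write is rejected with a precondition failure
	// if an object already exists under this key.
	_, err = client.PutObject(ctx, &s3.PutObjectInput{
		Bucket:      aws.String("mybucket"),
		Key:         aws.String("conditional.txt"),
		Body:        strings.NewReader("hello"),
		IfNoneMatch: aws.String("*"),
	})
	if err != nil {
		log.Fatal(err)
	}
}
```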
dependabot[bot]
8fb020ef83 chore(deps): bump the dev-dependencies group with 25 updates
Bumps the dev-dependencies group with 25 updates:

| Package | From | To |
| --- | --- | --- |
| [github.com/aws/aws-sdk-go-v2](https://github.com/aws/aws-sdk-go-v2) | `1.38.1` | `1.39.0` |
| [github.com/aws/aws-sdk-go-v2/service/s3](https://github.com/aws/aws-sdk-go-v2) | `1.87.1` | `1.88.0` |
| [github.com/aws/smithy-go](https://github.com/aws/smithy-go) | `1.22.5` | `1.23.0` |
| [github.com/stretchr/testify](https://github.com/stretchr/testify) | `1.11.0` | `1.11.1` |
| [golang.org/x/sync](https://github.com/golang/sync) | `0.16.0` | `0.17.0` |
| [golang.org/x/sys](https://github.com/golang/sys) | `0.35.0` | `0.36.0` |
| [github.com/AzureAD/microsoft-authentication-library-for-go](https://github.com/AzureAD/microsoft-authentication-library-for-go) | `1.4.2` | `1.5.0` |
| [github.com/aws/aws-sdk-go-v2/feature/ec2/imds](https://github.com/aws/aws-sdk-go-v2) | `1.18.4` | `1.18.7` |
| [github.com/aws/aws-sdk-go-v2/service/sso](https://github.com/aws/aws-sdk-go-v2) | `1.28.2` | `1.29.2` |
| [github.com/aws/aws-sdk-go-v2/service/ssooidc](https://github.com/aws/aws-sdk-go-v2) | `1.33.2` | `1.34.3` |
| [github.com/aws/aws-sdk-go-v2/service/sts](https://github.com/aws/aws-sdk-go-v2) | `1.38.0` | `1.38.3` |
| [golang.org/x/crypto](https://github.com/golang/crypto) | `0.41.0` | `0.42.0` |
| [golang.org/x/text](https://github.com/golang/text) | `0.28.0` | `0.29.0` |
| [golang.org/x/time](https://github.com/golang/time) | `0.12.0` | `0.13.0` |
| [github.com/aws/aws-sdk-go-v2/aws/protocol/eventstream](https://github.com/aws/aws-sdk-go-v2) | `1.7.0` | `1.7.1` |
| [github.com/aws/aws-sdk-go-v2/config](https://github.com/aws/aws-sdk-go-v2) | `1.31.2` | `1.31.7` |
| [github.com/aws/aws-sdk-go-v2/credentials](https://github.com/aws/aws-sdk-go-v2) | `1.18.6` | `1.18.11` |
| [github.com/aws/aws-sdk-go-v2/feature/s3/manager](https://github.com/aws/aws-sdk-go-v2) | `1.19.0` | `1.19.5` |
| [github.com/aws/aws-sdk-go-v2/internal/configsources](https://github.com/aws/aws-sdk-go-v2) | `1.4.4` | `1.4.7` |
| [github.com/aws/aws-sdk-go-v2/internal/endpoints/v2](https://github.com/aws/aws-sdk-go-v2) | `2.7.4` | `2.7.7` |
| [github.com/aws/aws-sdk-go-v2/internal/v4a](https://github.com/aws/aws-sdk-go-v2) | `1.4.4` | `1.4.7` |
| [github.com/aws/aws-sdk-go-v2/service/internal/accept-encoding](https://github.com/aws/aws-sdk-go-v2) | `1.13.0` | `1.13.1` |
| [github.com/aws/aws-sdk-go-v2/service/internal/checksum](https://github.com/aws/aws-sdk-go-v2) | `1.8.4` | `1.8.7` |
| [github.com/aws/aws-sdk-go-v2/service/internal/presigned-url](https://github.com/aws/aws-sdk-go-v2) | `1.13.4` | `1.13.7` |
| [github.com/aws/aws-sdk-go-v2/service/internal/s3shared](https://github.com/aws/aws-sdk-go-v2) | `1.19.4` | `1.19.7` |


Updates `github.com/aws/aws-sdk-go-v2` from 1.38.1 to 1.39.0
- [Release notes](https://github.com/aws/aws-sdk-go-v2/releases)
- [Changelog](https://github.com/aws/aws-sdk-go-v2/blob/main/changelog-template.json)
- [Commits](https://github.com/aws/aws-sdk-go-v2/compare/v1.38.1...v1.39.0)

Updates `github.com/aws/aws-sdk-go-v2/service/s3` from 1.87.1 to 1.88.0
- [Release notes](https://github.com/aws/aws-sdk-go-v2/releases)
- [Changelog](https://github.com/aws/aws-sdk-go-v2/blob/main/changelog-template.json)
- [Commits](https://github.com/aws/aws-sdk-go-v2/compare/service/s3/v1.87.1...service/s3/v1.88.0)

Updates `github.com/aws/smithy-go` from 1.22.5 to 1.23.0
- [Release notes](https://github.com/aws/smithy-go/releases)
- [Changelog](https://github.com/aws/smithy-go/blob/main/CHANGELOG.md)
- [Commits](https://github.com/aws/smithy-go/compare/v1.22.5...v1.23.0)

Updates `github.com/stretchr/testify` from 1.11.0 to 1.11.1
- [Release notes](https://github.com/stretchr/testify/releases)
- [Commits](https://github.com/stretchr/testify/compare/v1.11.0...v1.11.1)

Updates `golang.org/x/sync` from 0.16.0 to 0.17.0
- [Commits](https://github.com/golang/sync/compare/v0.16.0...v0.17.0)

Updates `golang.org/x/sys` from 0.35.0 to 0.36.0
- [Commits](https://github.com/golang/sys/compare/v0.35.0...v0.36.0)

Updates `github.com/AzureAD/microsoft-authentication-library-for-go` from 1.4.2 to 1.5.0
- [Release notes](https://github.com/AzureAD/microsoft-authentication-library-for-go/releases)
- [Changelog](https://github.com/AzureAD/microsoft-authentication-library-for-go/blob/main/changelog.md)
- [Commits](https://github.com/AzureAD/microsoft-authentication-library-for-go/compare/v1.4.2...v1.5.0)

Updates `github.com/aws/aws-sdk-go-v2/feature/ec2/imds` from 1.18.4 to 1.18.7
- [Release notes](https://github.com/aws/aws-sdk-go-v2/releases)
- [Changelog](https://github.com/aws/aws-sdk-go-v2/blob/config/v1.18.7/CHANGELOG.md)
- [Commits](https://github.com/aws/aws-sdk-go-v2/compare/config/v1.18.4...config/v1.18.7)

Updates `github.com/aws/aws-sdk-go-v2/service/sso` from 1.28.2 to 1.29.2
- [Release notes](https://github.com/aws/aws-sdk-go-v2/releases)
- [Changelog](https://github.com/aws/aws-sdk-go-v2/blob/main/changelog-template.json)
- [Commits](https://github.com/aws/aws-sdk-go-v2/compare/config/v1.28.2...config/v1.29.2)

Updates `github.com/aws/aws-sdk-go-v2/service/ssooidc` from 1.33.2 to 1.34.3
- [Release notes](https://github.com/aws/aws-sdk-go-v2/releases)
- [Changelog](https://github.com/aws/aws-sdk-go-v2/blob/main/changelog-template.json)
- [Commits](https://github.com/aws/aws-sdk-go-v2/compare/service/mq/v1.33.2...service/amp/v1.34.3)

Updates `github.com/aws/aws-sdk-go-v2/service/sts` from 1.38.0 to 1.38.3
- [Release notes](https://github.com/aws/aws-sdk-go-v2/releases)
- [Changelog](https://github.com/aws/aws-sdk-go-v2/blob/main/changelog-template.json)
- [Commits](https://github.com/aws/aws-sdk-go-v2/compare/v1.38.0...v1.38.3)

Updates `golang.org/x/crypto` from 0.41.0 to 0.42.0
- [Commits](https://github.com/golang/crypto/compare/v0.41.0...v0.42.0)

Updates `golang.org/x/text` from 0.28.0 to 0.29.0
- [Release notes](https://github.com/golang/text/releases)
- [Commits](https://github.com/golang/text/compare/v0.28.0...v0.29.0)

Updates `golang.org/x/time` from 0.12.0 to 0.13.0
- [Commits](https://github.com/golang/time/compare/v0.12.0...v0.13.0)

Updates `github.com/aws/aws-sdk-go-v2/aws/protocol/eventstream` from 1.7.0 to 1.7.1
- [Release notes](https://github.com/aws/aws-sdk-go-v2/releases)
- [Changelog](https://github.com/aws/aws-sdk-go-v2/blob/main/changelog-template.json)
- [Commits](https://github.com/aws/aws-sdk-go-v2/compare/v1.7.0...v1.7.1)

Updates `github.com/aws/aws-sdk-go-v2/config` from 1.31.2 to 1.31.7
- [Release notes](https://github.com/aws/aws-sdk-go-v2/releases)
- [Changelog](https://github.com/aws/aws-sdk-go-v2/blob/main/changelog-template.json)
- [Commits](https://github.com/aws/aws-sdk-go-v2/compare/config/v1.31.2...config/v1.31.7)

Updates `github.com/aws/aws-sdk-go-v2/credentials` from 1.18.6 to 1.18.11
- [Release notes](https://github.com/aws/aws-sdk-go-v2/releases)
- [Changelog](https://github.com/aws/aws-sdk-go-v2/blob/main/changelog-template.json)
- [Commits](https://github.com/aws/aws-sdk-go-v2/compare/config/v1.18.6...config/v1.18.11)

Updates `github.com/aws/aws-sdk-go-v2/feature/s3/manager` from 1.19.0 to 1.19.5
- [Release notes](https://github.com/aws/aws-sdk-go-v2/releases)
- [Changelog](https://github.com/aws/aws-sdk-go-v2/blob/main/changelog-template.json)
- [Commits](https://github.com/aws/aws-sdk-go-v2/compare/v1.19.0...service/pi/v1.19.5)

Updates `github.com/aws/aws-sdk-go-v2/internal/configsources` from 1.4.4 to 1.4.7
- [Release notes](https://github.com/aws/aws-sdk-go-v2/releases)
- [Changelog](https://github.com/aws/aws-sdk-go-v2/blob/main/changelog-template.json)
- [Commits](https://github.com/aws/aws-sdk-go-v2/compare/service/m2/v1.4.4...service/m2/v1.4.7)

Updates `github.com/aws/aws-sdk-go-v2/internal/endpoints/v2` from 2.7.4 to 2.7.7
- [Release notes](https://github.com/aws/aws-sdk-go-v2/releases)
- [Changelog](https://github.com/aws/aws-sdk-go-v2/blob/main/changelog-template.json)
- [Commits](https://github.com/aws/aws-sdk-go-v2/compare/internal/endpoints/v2.7.4...internal/endpoints/v2.7.7)

Updates `github.com/aws/aws-sdk-go-v2/internal/v4a` from 1.4.4 to 1.4.7
- [Release notes](https://github.com/aws/aws-sdk-go-v2/releases)
- [Changelog](https://github.com/aws/aws-sdk-go-v2/blob/main/changelog-template.json)
- [Commits](https://github.com/aws/aws-sdk-go-v2/compare/service/m2/v1.4.4...service/m2/v1.4.7)

Updates `github.com/aws/aws-sdk-go-v2/service/internal/accept-encoding` from 1.13.0 to 1.13.1
- [Release notes](https://github.com/aws/aws-sdk-go-v2/releases)
- [Changelog](https://github.com/aws/aws-sdk-go-v2/blob/config/v1.13.1/CHANGELOG.md)
- [Commits](https://github.com/aws/aws-sdk-go-v2/compare/v1.13.0...config/v1.13.1)

Updates `github.com/aws/aws-sdk-go-v2/service/internal/checksum` from 1.8.4 to 1.8.7
- [Release notes](https://github.com/aws/aws-sdk-go-v2/releases)
- [Changelog](https://github.com/aws/aws-sdk-go-v2/blob/main/changelog-template.json)
- [Commits](https://github.com/aws/aws-sdk-go-v2/compare/service/drs/v1.8.4...service/tnb/v1.8.7)

Updates `github.com/aws/aws-sdk-go-v2/service/internal/presigned-url` from 1.13.4 to 1.13.7
- [Release notes](https://github.com/aws/aws-sdk-go-v2/releases)
- [Changelog](https://github.com/aws/aws-sdk-go-v2/blob/service/mq/v1.13.7/CHANGELOG.md)
- [Commits](https://github.com/aws/aws-sdk-go-v2/compare/service/mq/v1.13.4...service/mq/v1.13.7)

Updates `github.com/aws/aws-sdk-go-v2/service/internal/s3shared` from 1.19.4 to 1.19.7
- [Release notes](https://github.com/aws/aws-sdk-go-v2/releases)
- [Changelog](https://github.com/aws/aws-sdk-go-v2/blob/main/changelog-template.json)
- [Commits](https://github.com/aws/aws-sdk-go-v2/compare/service/m2/v1.19.4...service/m2/v1.19.7)

---
updated-dependencies:
- dependency-name: github.com/aws/aws-sdk-go-v2
  dependency-version: 1.39.0
  dependency-type: direct:production
  update-type: version-update:semver-minor
  dependency-group: dev-dependencies
- dependency-name: github.com/aws/aws-sdk-go-v2/service/s3
  dependency-version: 1.88.0
  dependency-type: direct:production
  update-type: version-update:semver-minor
  dependency-group: dev-dependencies
- dependency-name: github.com/aws/smithy-go
  dependency-version: 1.23.0
  dependency-type: direct:production
  update-type: version-update:semver-minor
  dependency-group: dev-dependencies
- dependency-name: github.com/stretchr/testify
  dependency-version: 1.11.1
  dependency-type: direct:production
  update-type: version-update:semver-patch
  dependency-group: dev-dependencies
- dependency-name: golang.org/x/sync
  dependency-version: 0.17.0
  dependency-type: direct:production
  update-type: version-update:semver-minor
  dependency-group: dev-dependencies
- dependency-name: golang.org/x/sys
  dependency-version: 0.36.0
  dependency-type: direct:production
  update-type: version-update:semver-minor
  dependency-group: dev-dependencies
- dependency-name: github.com/AzureAD/microsoft-authentication-library-for-go
  dependency-version: 1.5.0
  dependency-type: indirect
  update-type: version-update:semver-minor
  dependency-group: dev-dependencies
- dependency-name: github.com/aws/aws-sdk-go-v2/feature/ec2/imds
  dependency-version: 1.18.7
  dependency-type: indirect
  update-type: version-update:semver-patch
  dependency-group: dev-dependencies
- dependency-name: github.com/aws/aws-sdk-go-v2/service/sso
  dependency-version: 1.29.2
  dependency-type: indirect
  update-type: version-update:semver-minor
  dependency-group: dev-dependencies
- dependency-name: github.com/aws/aws-sdk-go-v2/service/ssooidc
  dependency-version: 1.34.3
  dependency-type: indirect
  update-type: version-update:semver-minor
  dependency-group: dev-dependencies
- dependency-name: github.com/aws/aws-sdk-go-v2/service/sts
  dependency-version: 1.38.3
  dependency-type: indirect
  update-type: version-update:semver-patch
  dependency-group: dev-dependencies
- dependency-name: golang.org/x/crypto
  dependency-version: 0.42.0
  dependency-type: indirect
  update-type: version-update:semver-minor
  dependency-group: dev-dependencies
- dependency-name: golang.org/x/text
  dependency-version: 0.29.0
  dependency-type: indirect
  update-type: version-update:semver-minor
  dependency-group: dev-dependencies
- dependency-name: golang.org/x/time
  dependency-version: 0.13.0
  dependency-type: indirect
  update-type: version-update:semver-minor
  dependency-group: dev-dependencies
- dependency-name: github.com/aws/aws-sdk-go-v2/aws/protocol/eventstream
  dependency-version: 1.7.1
  dependency-type: indirect
  update-type: version-update:semver-patch
  dependency-group: dev-dependencies
- dependency-name: github.com/aws/aws-sdk-go-v2/config
  dependency-version: 1.31.7
  dependency-type: direct:production
  update-type: version-update:semver-patch
  dependency-group: dev-dependencies
- dependency-name: github.com/aws/aws-sdk-go-v2/credentials
  dependency-version: 1.18.11
  dependency-type: direct:production
  update-type: version-update:semver-patch
  dependency-group: dev-dependencies
- dependency-name: github.com/aws/aws-sdk-go-v2/feature/s3/manager
  dependency-version: 1.19.5
  dependency-type: direct:production
  update-type: version-update:semver-patch
  dependency-group: dev-dependencies
- dependency-name: github.com/aws/aws-sdk-go-v2/internal/configsources
  dependency-version: 1.4.7
  dependency-type: indirect
  update-type: version-update:semver-patch
  dependency-group: dev-dependencies
- dependency-name: github.com/aws/aws-sdk-go-v2/internal/endpoints/v2
  dependency-version: 2.7.7
  dependency-type: indirect
  update-type: version-update:semver-patch
  dependency-group: dev-dependencies
- dependency-name: github.com/aws/aws-sdk-go-v2/internal/v4a
  dependency-version: 1.4.7
  dependency-type: indirect
  update-type: version-update:semver-patch
  dependency-group: dev-dependencies
- dependency-name: github.com/aws/aws-sdk-go-v2/service/internal/accept-encoding
  dependency-version: 1.13.1
  dependency-type: indirect
  update-type: version-update:semver-patch
  dependency-group: dev-dependencies
- dependency-name: github.com/aws/aws-sdk-go-v2/service/internal/checksum
  dependency-version: 1.8.7
  dependency-type: indirect
  update-type: version-update:semver-patch
  dependency-group: dev-dependencies
- dependency-name: github.com/aws/aws-sdk-go-v2/service/internal/presigned-url
  dependency-version: 1.13.7
  dependency-type: indirect
  update-type: version-update:semver-patch
  dependency-group: dev-dependencies
- dependency-name: github.com/aws/aws-sdk-go-v2/service/internal/s3shared
  dependency-version: 1.19.7
  dependency-type: indirect
  update-type: version-update:semver-patch
  dependency-group: dev-dependencies
...

Signed-off-by: dependabot[bot] <support@github.com>
2025-09-08 21:33:00 +00:00
Luke McCrone
d7c73a06ea test: universal REST structure checks, delete-bucket-tagging test 2025-09-07 13:44:34 -03:00
Ben McClelland
04fbe405ca Merge pull request #1519 from versity/sis/putobject-size
feat: adds x-amz-object-size in PutObject response headers
2025-09-05 13:06:07 -07:00
niksis02
818e91ebde feat: adds x-amz-object-size in PutObject response headers
Closes #1518

Adds the `x-amz-object-size` header to the `PutObject` response, indicating the size of the uploaded object. This change is applied to the POSIX, Azure, and S3 proxy backends.
2025-09-05 21:40:46 +04:00
Ben McClelland
743707b9ae Merge pull request #1509 from versity/ben/chunk-panic
fix: panic in signed-chunk-reader with incorrect debug string
2025-09-02 14:06:34 -07:00
Ben McClelland
dd151001a2 Merge pull request #1506 from versity/ben/ldap-debug
cleanup: minor fixes to ldap exported functions and test
2025-09-02 14:06:24 -07:00
Ben McClelland
f50e008ceb Merge pull request #1511 from ondrap/pfix
Fix scoutfs backend s3 upload with non-aligned size.
2025-09-02 10:08:45 -07:00
Ben McClelland
488a9ac1bb fix: panic in signed-chunk-reader with incorrect debug string
The following panic was triggered when the mc client (which uses
chunked uploads) uploaded a 171164 byte file. This likely
could have been hit with other sizes as well, but this size
reliably reproduced the issue.

panic: runtime error: slice bounds out of range [:2] with capacity 1

goroutine 66 [running]:
github.com/versity/versitygw/s3api/utils.(*ChunkReader).parseChunkHeaderBytes(0x14000276200, {0x14000167fff?, 0x14000103180?, 0x200000003?})
	versitygw/s3api/utils/signed-chunk-reader.go:372 +0xe54
github.com/versity/versitygw/s3api/utils.(*ChunkReader).parseAndRemoveChunkInfo(0x14000276200, {0x14000167fff, 0x1, 0x1})
	versitygw/s3api/utils/signed-chunk-reader.go:251 +0x50
github.com/versity/versitygw/s3api/utils.(*ChunkReader).Read(0x14000276200, {0x14000160000, 0x14000056c00?, 0x8000})
	versitygw/s3api/utils/signed-chunk-reader.go:126 +0x188
io.(*teeReader).Read(0x140000b09c0, {0x14000160000, 0x105e7b368?, 0x8000})
	/usr/local/go/src/io/io.go:628 +0x34
...

The reproducer is:
% truncate -s 171764 testfile
% mc cp testfile gwtest/mybucket/testfile
mc: <ERROR> Failed to copy `/Users/ben/repo/s3perf/tools/testfile`. Put "http://127.0.0.1:7070/mybucket/testfile": dial tcp 127.0.0.1:7070: connect: connection refused

The panic can happen because the capacity of header ([]byte) at
the point of the debuglog line can be less than 2, but we were
trying to always send the first 2 bytes to the debug log.
2025-09-02 08:30:03 -07:00
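A minimal sketch of the kind of bounds guard the fix calls for (illustrative, not the actual signed-chunk-reader code): only preview as many header bytes as are actually buffered.

```go
package main

import "fmt"

// debugPreview returns at most the first two bytes of the chunk header for
// debug logging. Slicing header[:2] unconditionally panics when fewer than
// two bytes are buffered, which is what the mc reproducer above hit.
func debugPreview(header []byte) []byte {
	n := 2
	if len(header) < n {
		n = len(header)
	}
	return header[:n]
}

func main() {
	fmt.Printf("%q\n", debugPreview([]byte{';'}))        // only one byte buffered
	fmt.Printf("%q\n", debugPreview([]byte("ff;chunk"))) // normal case
}
```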
Ben McClelland
b46a486d29 cleanup: s3 iam server debug logging done with debuglogger
Move the debug output to the standard debuglogger for more
consistency across the project.
2025-09-01 20:02:04 -07:00
Ben McClelland
5aa407d832 cleanup: ipa iam server debug logging done with debuglogger
Move the debug output to the standard debuglogger for more
consistency across the project.
2025-09-01 20:02:04 -07:00
Ben McClelland
b358e385db cleanup: minor fixes to ldap exported functions and test
The buildSearchFilter function doesn't need to be exported, and
can use strings.Builder. Add a unit test to make sure this didn't
change any logic.

This will also use the debuglogger to enable debugging.
2025-09-01 20:02:04 -07:00
Ben McClelland
24b1c45db3 cleanup: move debuglogger to top level for full project access
The debuglogger should be a top level module since we expect
all modules within the project to make use of it. If it's
hidden in s3api, then contributors are less likely to make
use of it outside of s3api.
2025-09-01 20:02:02 -07:00
Ben McClelland
cae6f3d1fe Merge pull request #1508 from versity/sis/conditional-reads
feat: implements conditional reads for GetObject and HeadObject
2025-09-01 19:20:19 -07:00
niksis02
b3ed7639f0 feat: implements conditional reads for GetObject and HeadObject
Closes #882

Implements conditional reads for `GetObject` and `HeadObject` in the gateway for both POSIX and Azure backends. The behavior is controlled by the `If-Match`, `If-None-Match`, `If-Modified-Since`, and `If-Unmodified-Since` request headers, where the first two perform ETag comparisons and the latter two compare against the object’s `LastModified` date. No validation is performed for invalid ETags or malformed date formats, and precondition date headers are expected to follow RFC1123; otherwise, they are ignored.

The integration tests cover all possible combinations of conditional headers, ensuring the feature is 100% AWS S3–compatible.
2025-09-01 18:33:01 -07:00
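A short client-side sketch using the AWS SDK for Go v2 conditional-read fields on `GetObjectInput`; the bucket, key, and ETag are placeholders.

```go
package main

import (
	"context"
	"log"
	"time"

	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/config"
	"github.com/aws/aws-sdk-go-v2/service/s3"
)

func main() {
	ctx := context.Background()
	cfg, err := config.LoadDefaultConfig(ctx)
	if err != nil {
		log.Fatal(err)
	}
	client := s3.NewFromConfig(cfg)

	// Fetch the object only if its ETag still matches and it has not been
	// modified since the given time; otherwise the request fails with a
	// precondition error (or Not Modified for If-None-Match/If-Modified-Since).
	out, err := client.GetObject(ctx, &s3.GetObjectInput{
		Bucket:            aws.String("mybucket"),
		Key:               aws.String("report.csv"),
		IfMatch:           aws.String(`"0f343b0931126a20f133d67c2b018a3b"`),
		IfUnmodifiedSince: aws.Time(time.Now().Add(-time.Hour)),
	})
	if err != nil {
		log.Fatal(err)
	}
	defer out.Body.Close()
}
```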
Ben McClelland
e2fb272711 Merge pull request #1510 from versity/ben/fix-build
fix: previous pr was not rebased before merging and caused a build error
2025-09-01 18:09:10 -07:00
Ben McClelland
a4091fd61c fix: previous pr was not rebased before merging and caused a build error
There was a change to the auth.VerifyAccess that changed
IsPublicBucket to IsPublicRequest, but another PR
(GetBucketLocation) that was merged at the same time
(and not rebased) was using the old version.

Update this to fix the build.
2025-09-01 17:31:56 -07:00
Ben McClelland
0bf49872cf Merge pull request #1507 from versity/ben/get-object-overrides
feat: add response header overrides for GetObject
2025-09-01 14:17:28 -07:00
Ben McClelland
39de3b9f9a Merge pull request #1504 from versity/ben/bucket-location
feat: add get bucket location frontend handlers
2025-09-01 14:17:06 -07:00
Ben McClelland
8cad7fd6d9 feat: add response header overrides for GetObject
GetObject allows overriding response headers with the following
parameters:
response-cache-control
response-content-disposition
response-content-encoding
response-content-language
response-content-type
response-expires

This is only valid for signed (and pre-signed) requests. An error
is returned for anonymous requests if these are set.

More info on the GetObject overrides can be found in the GetObject
API reference.

This also clarifies the naming of the AccessOptions IsPublicBucket
to IsPublicRequest to indicate this is a public access request
and not just accessing a bucket that allows public access.

Fixes #1501
2025-08-30 14:13:20 -07:00
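A hedged sketch of how a client would use these overrides via a presigned URL with the AWS SDK for Go v2; the `Response*` fields map directly onto the `response-*` query parameters listed above, and the bucket/key are placeholders.

```go
package main

import (
	"context"
	"fmt"
	"log"

	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/config"
	"github.com/aws/aws-sdk-go-v2/service/s3"
)

func main() {
	ctx := context.Background()
	cfg, err := config.LoadDefaultConfig(ctx)
	if err != nil {
		log.Fatal(err)
	}
	presigner := s3.NewPresignClient(s3.NewFromConfig(cfg))

	// The overrides become response-content-type / response-content-disposition
	// query parameters on the presigned URL; the server echoes them back as
	// Content-Type / Content-Disposition on the GetObject response.
	req, err := presigner.PresignGetObject(ctx, &s3.GetObjectInput{
		Bucket:                     aws.String("mybucket"),
		Key:                        aws.String("report.csv"),
		ResponseContentType:        aws.String("text/plain"),
		ResponseContentDisposition: aws.String(`attachment; filename="report.csv"`),
	})
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(req.URL)
}
```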
Ben McClelland
58117c011a feat: add get bucket location frontend handlers
GetBucketLocation is being deprecated by AWS, but is still used
by some clients. We don't need any backend handlers for this since
the region is managed by the frontend. All we need is to test for
bucket existence, so we can use HeadBucket for this.

Fixes #1499
2025-08-30 12:29:26 -07:00
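A hypothetical plain `net/http` sketch of the handler shape (the gateway itself uses its own fiber-based router and HeadBucket for the existence check); the XML body is the standard LocationConstraint response, and `bucketExists` stands in for that check.

```go
package main

import (
	"encoding/xml"
	"log"
	"net/http"
)

// LocationConstraint is the standard GetBucketLocation response body; AWS
// returns an empty value for us-east-1 and the region name otherwise.
type LocationConstraint struct {
	XMLName xml.Name `xml:"LocationConstraint"`
	Xmlns   string   `xml:"xmlns,attr"`
	Region  string   `xml:",chardata"`
}

// handleGetBucketLocation only has to confirm the bucket exists and then
// answer with the frontend-configured region; no other backend work is needed.
func handleGetBucketLocation(w http.ResponseWriter, bucket, region string, bucketExists func(string) bool) {
	if !bucketExists(bucket) {
		http.Error(w, "NoSuchBucket", http.StatusNotFound)
		return
	}
	w.Header().Set("Content-Type", "application/xml")
	_ = xml.NewEncoder(w).Encode(LocationConstraint{
		Xmlns:  "http://s3.amazonaws.com/doc/2006-03-01/",
		Region: region,
	})
}

func main() {
	// Routing is collapsed to a single pattern here; a real router would also
	// dispatch on the ?location query parameter.
	http.HandleFunc("GET /{bucket}", func(w http.ResponseWriter, r *http.Request) {
		handleGetBucketLocation(w, r.PathValue("bucket"), "us-west-2", func(string) bool { return true })
	})
	log.Fatal(http.ListenAndServe(":8080", nil))
}
```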
Ben McClelland
2015cc1ab2 Merge pull request #1502 from tannevaled/main
correct a bug when using glauth as LDAP IAM
2025-08-29 12:33:55 -07:00
tannevaled
fbde51b3ea Enable debugging of LDAP queries; be consistent between GetUserAccount() and ListUserAccounts() in how the search filters are built; objectClasses were missing in the GetUserAccount search filter, leading to bad results, for example when a posixGroup has the same name as a posixUser. 2025-08-29 10:50:08 +02:00
Ben McClelland
5ea9c6e956 Merge pull request #1497 from versity/test/head_object
test: PutBucketOwnershipControls tests
2025-08-28 10:24:51 -07:00
Luke McCrone
278946f132 test: PutBucketOwnershipControls tests 2025-08-28 11:19:17 -03:00
Ondrej Palkovsky
c93d2cd1f2 Fix scoutfs backend s3 upload with non-aligned size. 2025-08-28 12:44:53 +02:00
Ben McClelland
13ea2286f7 Merge pull request #1496 from versity/sis/s3-proxy-cors
feat: changes cors implementation in s3 to store/retrieve in meta bucket
2025-08-27 15:52:24 -07:00
niksis02
4c41b8be3b feat: changes cors implementation in s3 to store/retrieve in meta bucket
The CORS actions were directly proxied in the s3 proxy backend. The new implementation stores/retrieves/deletes the bucket CORS configuration in the `meta` bucket.
2025-08-28 01:43:11 +04:00
Ben McClelland
e7efc1deb9 Merge pull request #1495 from versity/sis/bucket-policy-wildcard-action
fix: adds full wildcard and any character match for bucket policy actions
2025-08-27 12:02:38 -07:00
niksis02
843620235b fix: adds full wildcard and any-character match for bucket policy actions
Fixes #1488

Adds full wildcard (`*`) and single-character (`?`) support for bucket policy actions, fixes resource detection with wildcards, and includes unit tests for `bucket_policy_actions`, `bucket_policy_effect`, and `bucket_policy_principals`.
2025-08-27 20:44:30 +04:00
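A minimal sketch of the wildcard semantics described above, where `*` matches any run of characters and `?` exactly one; the gateway's actual matcher may differ in structure, this is only illustrative.

```go
package main

import "fmt"

// matchAction reports whether an action name matches a bucket policy action
// pattern, where '*' matches any run of characters (including none) and '?'
// matches exactly one character.
func matchAction(pattern, action string) bool {
	if pattern == "" {
		return action == ""
	}
	switch pattern[0] {
	case '*':
		// Try every possible split point for the wildcard.
		for i := 0; i <= len(action); i++ {
			if matchAction(pattern[1:], action[i:]) {
				return true
			}
		}
		return false
	case '?':
		return action != "" && matchAction(pattern[1:], action[1:])
	default:
		return action != "" && pattern[0] == action[0] && matchAction(pattern[1:], action[1:])
	}
}

func main() {
	fmt.Println(matchAction("s3:Get*", "s3:GetObject"))       // true
	fmt.Println(matchAction("s3:*", "s3:DeleteBucketPolicy")) // true
	fmt.Println(matchAction("s3:GetObjec?", "s3:GetObject"))  // true
	fmt.Println(matchAction("s3:Get*", "s3:PutObject"))       // false
}
```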
Ben McClelland
2a4d86d8d0 Merge pull request #1494 from siomporas/fix/add-keepalive-option
fix: add keepalive option (CLI and env var)
2025-08-26 20:17:54 -07:00
Rich Siomporas
6a82213606 fix: add keepalive option (CLI and env var)
This fix enables the Versity Gateway to serve clients that use the AWS C++ SDK. Without keepalive enabled on the fiber connection, clients built on the AWS C++ SDK, such as Run:ai's model streamer, [fail against all of the closed connections](https://github.com/run-ai/runai-model-streamer/issues/55) when connecting to a Versity GW back end.

This fix is intentionally side-effect free in that it retains the current default behavior, with the ability to override it via an env var or CLI arg.
2025-08-26 21:47:19 -04:00
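A hedged sketch of the wiring, assuming (per the description above) that keepalive was previously disabled on the Fiber server; the env var name below is hypothetical, not the gateway's real option name.

```go
package main

import (
	"log"
	"os"

	"github.com/gofiber/fiber/v2"
)

func main() {
	// VGW_KEEPALIVE is a hypothetical env var name used only to show the
	// wiring; the gateway defines the real flag and env var names.
	keepalive := os.Getenv("VGW_KEEPALIVE") == "true"

	app := fiber.New(fiber.Config{
		// Fiber's DisableKeepalive switch is what the new option toggles:
		// leaving the env var unset keeps the prior behavior, setting it
		// keeps connections open for AWS C++ SDK clients.
		DisableKeepalive: !keepalive,
	})
	app.Get("/health", func(c *fiber.Ctx) error { return c.SendString("ok") })
	log.Fatal(app.Listen(":7070"))
}
```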
Ben McClelland
45a4d1892f Merge pull request #1491 from versity/ben/scoutfs-options
feat: add versioning dir option to scoutfs backend
2025-08-26 14:43:21 -07:00
Ben McClelland
a06a1f007a Merge pull request #1492 from versity/sis/bucket-cors-allow-headers
fix: adds Access-Control-Allow-Headers to cors responses
2025-08-26 14:42:57 -07:00
niksis02
3d20a63f75 fix: adds Access-Control-Allow-Headers to cors responses
Fixes #1486

* Adds the `Access-Control-Allow-Headers` response header to CORS responses for both **OPTIONS preflight requests** and any request containing an `Origin` header.
* The `Access-Control-Allow-Headers` response includes only the headers specified in the `Access-Control-Request-Headers` request header, always returned in lowercase.
* Fixes an issue with allow headers comparison in cors evaluation by making it case-insensitive.
* Adds missing unit tests for the **OPTIONS controller**.
2025-08-27 00:31:47 +04:00
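A small sketch of the echo behavior described above, covering only exact header names plus a bare `*` rule: requested headers are compared case-insensitively and returned in lowercase.

```go
package main

import (
	"fmt"
	"strings"
)

// allowedRequestHeaders echoes back the subset of Access-Control-Request-Headers
// that the CORS rule allows, compared case-insensitively and always returned
// in lowercase.
func allowedRequestHeaders(requested string, ruleAllowed []string) string {
	wildcard := false
	allowed := make(map[string]bool, len(ruleAllowed))
	for _, h := range ruleAllowed {
		if h == "*" {
			wildcard = true
		}
		allowed[strings.ToLower(h)] = true
	}

	var out []string
	for _, h := range strings.Split(requested, ",") {
		h = strings.ToLower(strings.TrimSpace(h))
		if h == "" {
			continue
		}
		if wildcard || allowed[h] {
			out = append(out, h)
		}
	}
	return strings.Join(out, ", ")
}

func main() {
	// Prints "content-type, x-amz-date"; the unlisted x-custom is dropped.
	fmt.Println(allowedRequestHeaders(
		"Content-Type, X-Amz-Date, X-Custom",
		[]string{"Content-Type", "X-Amz-Date"},
	))
}
```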
Ben McClelland
1eeb7de0b6 feat: add versioning dir option to scoutfs backend
This adds the same versioning dir option that is found in the
posix backend to scoutfs backend. Functionality is the same.
2025-08-26 11:20:35 -07:00
Ben McClelland
ee1cbeac15 Merge pull request #1490 from versity/dependabot/go_modules/dev-dependencies-03ceddfc4c
chore(deps): bump the dev-dependencies group with 20 updates
2025-08-26 08:53:05 -07:00
dependabot[bot]
f29337aae6 chore(deps): bump the dev-dependencies group with 20 updates
Bumps the dev-dependencies group with 20 updates:

| Package | From | To |
| --- | --- | --- |
| [github.com/Azure/azure-sdk-for-go/sdk/azcore](https://github.com/Azure/azure-sdk-for-go) | `1.18.2` | `1.19.0` |
| [github.com/DataDog/datadog-go/v5](https://github.com/DataDog/datadog-go) | `5.6.0` | `5.7.1` |
| [github.com/aws/aws-sdk-go-v2](https://github.com/aws/aws-sdk-go-v2) | `1.38.0` | `1.38.1` |
| [github.com/aws/aws-sdk-go-v2/service/s3](https://github.com/aws/aws-sdk-go-v2) | `1.87.0` | `1.87.1` |
| [github.com/nats-io/nats.go](https://github.com/nats-io/nats.go) | `1.44.0` | `1.45.0` |
| [github.com/segmentio/kafka-go](https://github.com/segmentio/kafka-go) | `0.4.48` | `0.4.49` |
| [github.com/stretchr/testify](https://github.com/stretchr/testify) | `1.10.0` | `1.11.0` |
| [github.com/aws/aws-sdk-go-v2/feature/ec2/imds](https://github.com/aws/aws-sdk-go-v2) | `1.18.3` | `1.18.4` |
| [github.com/aws/aws-sdk-go-v2/service/sso](https://github.com/aws/aws-sdk-go-v2) | `1.28.0` | `1.28.2` |
| [github.com/aws/aws-sdk-go-v2/service/ssooidc](https://github.com/aws/aws-sdk-go-v2) | `1.33.0` | `1.33.2` |
| [github.com/aws/aws-sdk-go-v2/service/sts](https://github.com/aws/aws-sdk-go-v2) | `1.37.0` | `1.38.0` |
| [github.com/aws/aws-sdk-go-v2/config](https://github.com/aws/aws-sdk-go-v2) | `1.31.0` | `1.31.2` |
| [github.com/aws/aws-sdk-go-v2/credentials](https://github.com/aws/aws-sdk-go-v2) | `1.18.4` | `1.18.6` |
| [github.com/aws/aws-sdk-go-v2/feature/s3/manager](https://github.com/aws/aws-sdk-go-v2) | `1.18.4` | `1.19.0` |
| [github.com/aws/aws-sdk-go-v2/internal/configsources](https://github.com/aws/aws-sdk-go-v2) | `1.4.3` | `1.4.4` |
| [github.com/aws/aws-sdk-go-v2/internal/endpoints/v2](https://github.com/aws/aws-sdk-go-v2) | `2.7.3` | `2.7.4` |
| [github.com/aws/aws-sdk-go-v2/internal/v4a](https://github.com/aws/aws-sdk-go-v2) | `1.4.3` | `1.4.4` |
| [github.com/aws/aws-sdk-go-v2/service/internal/checksum](https://github.com/aws/aws-sdk-go-v2) | `1.8.3` | `1.8.4` |
| [github.com/aws/aws-sdk-go-v2/service/internal/presigned-url](https://github.com/aws/aws-sdk-go-v2) | `1.13.3` | `1.13.4` |
| [github.com/aws/aws-sdk-go-v2/service/internal/s3shared](https://github.com/aws/aws-sdk-go-v2) | `1.19.3` | `1.19.4` |


Updates `github.com/Azure/azure-sdk-for-go/sdk/azcore` from 1.18.2 to 1.19.0
- [Release notes](https://github.com/Azure/azure-sdk-for-go/releases)
- [Changelog](https://github.com/Azure/azure-sdk-for-go/blob/main/documentation/go-mgmt-sdk-release-guideline.md)
- [Commits](https://github.com/Azure/azure-sdk-for-go/compare/sdk/azcore/v1.18.2...sdk/azcore/v1.19.0)

Updates `github.com/DataDog/datadog-go/v5` from 5.6.0 to 5.7.1
- [Release notes](https://github.com/DataDog/datadog-go/releases)
- [Changelog](https://github.com/DataDog/datadog-go/blob/master/CHANGELOG.md)
- [Commits](https://github.com/DataDog/datadog-go/compare/v5.6.0...v5.7.1)

Updates `github.com/aws/aws-sdk-go-v2` from 1.38.0 to 1.38.1
- [Release notes](https://github.com/aws/aws-sdk-go-v2/releases)
- [Changelog](https://github.com/aws/aws-sdk-go-v2/blob/main/changelog-template.json)
- [Commits](https://github.com/aws/aws-sdk-go-v2/compare/v1.38.0...v1.38.1)

Updates `github.com/aws/aws-sdk-go-v2/service/s3` from 1.87.0 to 1.87.1
- [Release notes](https://github.com/aws/aws-sdk-go-v2/releases)
- [Changelog](https://github.com/aws/aws-sdk-go-v2/blob/main/changelog-template.json)
- [Commits](https://github.com/aws/aws-sdk-go-v2/compare/service/s3/v1.87.0...service/s3/v1.87.1)

Updates `github.com/nats-io/nats.go` from 1.44.0 to 1.45.0
- [Release notes](https://github.com/nats-io/nats.go/releases)
- [Commits](https://github.com/nats-io/nats.go/compare/v1.44.0...v1.45.0)

Updates `github.com/segmentio/kafka-go` from 0.4.48 to 0.4.49
- [Release notes](https://github.com/segmentio/kafka-go/releases)
- [Commits](https://github.com/segmentio/kafka-go/compare/v0.4.48...v0.4.49)

Updates `github.com/stretchr/testify` from 1.10.0 to 1.11.0
- [Release notes](https://github.com/stretchr/testify/releases)
- [Commits](https://github.com/stretchr/testify/compare/v1.10.0...v1.11.0)

Updates `github.com/aws/aws-sdk-go-v2/feature/ec2/imds` from 1.18.3 to 1.18.4
- [Release notes](https://github.com/aws/aws-sdk-go-v2/releases)
- [Changelog](https://github.com/aws/aws-sdk-go-v2/blob/config/v1.18.4/CHANGELOG.md)
- [Commits](https://github.com/aws/aws-sdk-go-v2/compare/config/v1.18.3...config/v1.18.4)

Updates `github.com/aws/aws-sdk-go-v2/service/sso` from 1.28.0 to 1.28.2
- [Release notes](https://github.com/aws/aws-sdk-go-v2/releases)
- [Changelog](https://github.com/aws/aws-sdk-go-v2/blob/main/changelog-template.json)
- [Commits](https://github.com/aws/aws-sdk-go-v2/compare/v1.28.0...config/v1.28.2)

Updates `github.com/aws/aws-sdk-go-v2/service/ssooidc` from 1.33.0 to 1.33.2
- [Release notes](https://github.com/aws/aws-sdk-go-v2/releases)
- [Changelog](https://github.com/aws/aws-sdk-go-v2/blob/main/changelog-template.json)
- [Commits](https://github.com/aws/aws-sdk-go-v2/compare/v1.33.0...service/ecs/v1.33.2)

Updates `github.com/aws/aws-sdk-go-v2/service/sts` from 1.37.0 to 1.38.0
- [Release notes](https://github.com/aws/aws-sdk-go-v2/releases)
- [Changelog](https://github.com/aws/aws-sdk-go-v2/blob/main/changelog-template.json)
- [Commits](https://github.com/aws/aws-sdk-go-v2/compare/v1.37.0...v1.38.0)

Updates `github.com/aws/aws-sdk-go-v2/config` from 1.31.0 to 1.31.2
- [Release notes](https://github.com/aws/aws-sdk-go-v2/releases)
- [Changelog](https://github.com/aws/aws-sdk-go-v2/blob/main/changelog-template.json)
- [Commits](https://github.com/aws/aws-sdk-go-v2/compare/v1.31.0...config/v1.31.2)

Updates `github.com/aws/aws-sdk-go-v2/credentials` from 1.18.4 to 1.18.6
- [Release notes](https://github.com/aws/aws-sdk-go-v2/releases)
- [Changelog](https://github.com/aws/aws-sdk-go-v2/blob/config/v1.18.6/CHANGELOG.md)
- [Commits](https://github.com/aws/aws-sdk-go-v2/compare/config/v1.18.4...config/v1.18.6)

Updates `github.com/aws/aws-sdk-go-v2/feature/s3/manager` from 1.18.4 to 1.19.0
- [Release notes](https://github.com/aws/aws-sdk-go-v2/releases)
- [Changelog](https://github.com/aws/aws-sdk-go-v2/blob/v1.19.0/CHANGELOG.md)
- [Commits](https://github.com/aws/aws-sdk-go-v2/compare/config/v1.18.4...v1.19.0)

Updates `github.com/aws/aws-sdk-go-v2/internal/configsources` from 1.4.3 to 1.4.4
- [Release notes](https://github.com/aws/aws-sdk-go-v2/releases)
- [Changelog](https://github.com/aws/aws-sdk-go-v2/blob/main/changelog-template.json)
- [Commits](https://github.com/aws/aws-sdk-go-v2/compare/service/m2/v1.4.3...service/m2/v1.4.4)

Updates `github.com/aws/aws-sdk-go-v2/internal/endpoints/v2` from 2.7.3 to 2.7.4
- [Release notes](https://github.com/aws/aws-sdk-go-v2/releases)
- [Changelog](https://github.com/aws/aws-sdk-go-v2/blob/main/changelog-template.json)
- [Commits](https://github.com/aws/aws-sdk-go-v2/compare/internal/endpoints/v2.7.3...internal/endpoints/v2.7.4)

Updates `github.com/aws/aws-sdk-go-v2/internal/v4a` from 1.4.3 to 1.4.4
- [Release notes](https://github.com/aws/aws-sdk-go-v2/releases)
- [Changelog](https://github.com/aws/aws-sdk-go-v2/blob/main/changelog-template.json)
- [Commits](https://github.com/aws/aws-sdk-go-v2/compare/service/m2/v1.4.3...service/m2/v1.4.4)

Updates `github.com/aws/aws-sdk-go-v2/service/internal/checksum` from 1.8.3 to 1.8.4
- [Release notes](https://github.com/aws/aws-sdk-go-v2/releases)
- [Changelog](https://github.com/aws/aws-sdk-go-v2/blob/service/drs/v1.8.4/CHANGELOG.md)
- [Commits](https://github.com/aws/aws-sdk-go-v2/compare/config/v1.8.3...service/drs/v1.8.4)

Updates `github.com/aws/aws-sdk-go-v2/service/internal/presigned-url` from 1.13.3 to 1.13.4
- [Release notes](https://github.com/aws/aws-sdk-go-v2/releases)
- [Changelog](https://github.com/aws/aws-sdk-go-v2/blob/main/changelog-template.json)
- [Commits](https://github.com/aws/aws-sdk-go-v2/compare/service/m2/v1.13.3...service/mq/v1.13.4)

Updates `github.com/aws/aws-sdk-go-v2/service/internal/s3shared` from 1.19.3 to 1.19.4
- [Release notes](https://github.com/aws/aws-sdk-go-v2/releases)
- [Changelog](https://github.com/aws/aws-sdk-go-v2/blob/main/changelog-template.json)
- [Commits](https://github.com/aws/aws-sdk-go-v2/compare/service/pi/v1.19.3...service/m2/v1.19.4)

---
updated-dependencies:
- dependency-name: github.com/Azure/azure-sdk-for-go/sdk/azcore
  dependency-version: 1.19.0
  dependency-type: direct:production
  update-type: version-update:semver-minor
  dependency-group: dev-dependencies
- dependency-name: github.com/DataDog/datadog-go/v5
  dependency-version: 5.7.1
  dependency-type: direct:production
  update-type: version-update:semver-minor
  dependency-group: dev-dependencies
- dependency-name: github.com/aws/aws-sdk-go-v2
  dependency-version: 1.38.1
  dependency-type: direct:production
  update-type: version-update:semver-patch
  dependency-group: dev-dependencies
- dependency-name: github.com/aws/aws-sdk-go-v2/service/s3
  dependency-version: 1.87.1
  dependency-type: direct:production
  update-type: version-update:semver-patch
  dependency-group: dev-dependencies
- dependency-name: github.com/nats-io/nats.go
  dependency-version: 1.45.0
  dependency-type: direct:production
  update-type: version-update:semver-minor
  dependency-group: dev-dependencies
- dependency-name: github.com/segmentio/kafka-go
  dependency-version: 0.4.49
  dependency-type: direct:production
  update-type: version-update:semver-patch
  dependency-group: dev-dependencies
- dependency-name: github.com/stretchr/testify
  dependency-version: 1.11.0
  dependency-type: direct:production
  update-type: version-update:semver-minor
  dependency-group: dev-dependencies
- dependency-name: github.com/aws/aws-sdk-go-v2/feature/ec2/imds
  dependency-version: 1.18.4
  dependency-type: indirect
  update-type: version-update:semver-patch
  dependency-group: dev-dependencies
- dependency-name: github.com/aws/aws-sdk-go-v2/service/sso
  dependency-version: 1.28.2
  dependency-type: indirect
  update-type: version-update:semver-patch
  dependency-group: dev-dependencies
- dependency-name: github.com/aws/aws-sdk-go-v2/service/ssooidc
  dependency-version: 1.33.2
  dependency-type: indirect
  update-type: version-update:semver-patch
  dependency-group: dev-dependencies
- dependency-name: github.com/aws/aws-sdk-go-v2/service/sts
  dependency-version: 1.38.0
  dependency-type: indirect
  update-type: version-update:semver-minor
  dependency-group: dev-dependencies
- dependency-name: github.com/aws/aws-sdk-go-v2/config
  dependency-version: 1.31.2
  dependency-type: direct:production
  update-type: version-update:semver-patch
  dependency-group: dev-dependencies
- dependency-name: github.com/aws/aws-sdk-go-v2/credentials
  dependency-version: 1.18.6
  dependency-type: direct:production
  update-type: version-update:semver-patch
  dependency-group: dev-dependencies
- dependency-name: github.com/aws/aws-sdk-go-v2/feature/s3/manager
  dependency-version: 1.19.0
  dependency-type: direct:production
  update-type: version-update:semver-minor
  dependency-group: dev-dependencies
- dependency-name: github.com/aws/aws-sdk-go-v2/internal/configsources
  dependency-version: 1.4.4
  dependency-type: indirect
  update-type: version-update:semver-patch
  dependency-group: dev-dependencies
- dependency-name: github.com/aws/aws-sdk-go-v2/internal/endpoints/v2
  dependency-version: 2.7.4
  dependency-type: indirect
  update-type: version-update:semver-patch
  dependency-group: dev-dependencies
- dependency-name: github.com/aws/aws-sdk-go-v2/internal/v4a
  dependency-version: 1.4.4
  dependency-type: indirect
  update-type: version-update:semver-patch
  dependency-group: dev-dependencies
- dependency-name: github.com/aws/aws-sdk-go-v2/service/internal/checksum
  dependency-version: 1.8.4
  dependency-type: indirect
  update-type: version-update:semver-patch
  dependency-group: dev-dependencies
- dependency-name: github.com/aws/aws-sdk-go-v2/service/internal/presigned-url
  dependency-version: 1.13.4
  dependency-type: indirect
  update-type: version-update:semver-patch
  dependency-group: dev-dependencies
- dependency-name: github.com/aws/aws-sdk-go-v2/service/internal/s3shared
  dependency-version: 1.19.4
  dependency-type: indirect
  update-type: version-update:semver-patch
  dependency-group: dev-dependencies
...

Signed-off-by: dependabot[bot] <support@github.com>
2025-08-26 14:50:47 +00:00
Ben McClelland
9772badd43 Merge pull request #1473 from versity/ben/ldap-reconnect
fix: iam ldap reconnect after network disconnects
2025-08-25 13:58:25 -07:00
Ben McClelland
c82582bf07 Merge pull request #1471 from versity/fix/AzureNextMarker
fix: update marker/continuation token to be the azure next marker
2025-08-25 13:58:06 -07:00
nitin
630651254f fix: update marker/continuation token to be the azure next marker
This changes the marker/continuation token from the object name
to the marker from the azure list objects pager. This is needed
because passing the object name as the token to the azure next
call causes the Azure API to throw 400 Bad Request with
InvalidQueryParameterValue. So we have to use the azure marker
for compatibility with the azure API pager.

To do this we have to align the s3 list objects request to the
Azure ListBlobsHierarchyPager. The v2 requests have an optional
startafter where we will have to page through the azure blobs
to find the correct starting point, but after this we will
only return the single paginated result from the Azure
pager to maintain the correct markers all the way through to
Azure.

The ListObjects (non V2) assumes that the marker must be an object
name, so for this case we have to page through the azure listings
for each call to find the correct starting point. This makes the
V2 method far more efficient, but maintains correctness for the
ListObjects.

Also remove continuation token string checks in the integration
tests since this is supposed to be an opaque token that the
client should not care about. This will help to maintain the
tests for multiple backend types.

Fixes #1457
2025-08-25 11:28:42 -07:00
Ben McClelland
5d2a1527e0 Merge pull request #1489 from versity/sis/get-bucket-policy-status-action
feat: implements GetBucketPolicyStatus s3 action
2025-08-25 11:21:11 -07:00
niksis02
d90944afd1 feat: implements GetBucketPolicyStatus s3 action
Closes #1454

Adds the implementation of the [S3 GetBucketPolicyStatus action](https://docs.aws.amazon.com/AmazonS3/latest/API/API_GetBucketPolicyStatus.html). The implementation lives in the front end, which loads the bucket policy and checks whether it grants public access to all users.

A bucket policy document `is public` only when its `Principal` contains `*` (all users), that is, only when it grants access to `ALL` users.
2025-08-25 21:48:06 +04:00
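A simplified sketch of the public-policy check (illustrative; the gateway's real evaluation may consider more of the policy document than the Effect and Principal fields used here).

```go
package main

import (
	"encoding/json"
	"fmt"
)

// statement is a pared-down view of a bucket policy statement, enough to
// decide the "is public" question as described above: the policy is public
// only if some Allow statement's Principal grants access to all users
// ("*" or e.g. {"AWS": "*"}).
type statement struct {
	Effect    string          `json:"Effect"`
	Principal json.RawMessage `json:"Principal"`
}

func principalIsWildcard(raw json.RawMessage) bool {
	var s string
	if json.Unmarshal(raw, &s) == nil {
		return s == "*"
	}
	var m map[string]json.RawMessage
	if json.Unmarshal(raw, &m) == nil {
		for _, v := range m {
			var one string
			if json.Unmarshal(v, &one) == nil && one == "*" {
				return true
			}
			var many []string
			if json.Unmarshal(v, &many) == nil {
				for _, p := range many {
					if p == "*" {
						return true
					}
				}
			}
		}
	}
	return false
}

func isPublic(policy []byte) bool {
	var doc struct {
		Statement []statement `json:"Statement"`
	}
	if err := json.Unmarshal(policy, &doc); err != nil {
		return false
	}
	for _, st := range doc.Statement {
		if st.Effect == "Allow" && principalIsWildcard(st.Principal) {
			return true
		}
	}
	return false
}

func main() {
	policy := []byte(`{"Statement":[{"Effect":"Allow","Principal":"*","Action":"s3:GetObject","Resource":"arn:aws:s3:::mybucket/*"}]}`)
	fmt.Println(isPublic(policy)) // true
}
```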
Ben McClelland
ac4229cd6d Merge pull request #1481 from versity/test/test_put_object_conditional
test: PutObject conditional
2025-08-25 09:09:03 -07:00
Luke McCrone
b3286c44e2 test: REST PutObject, HeadObject, organization, skips removal 2025-08-25 11:55:42 -03:00
Ben McClelland
9992e341da Merge pull request #1485 from versity/sis/bucket-website-actions-not-implemented
feat: adds not implemented routes for bucket website actions
2025-08-22 16:18:59 -07:00
Ben McClelland
8210dc4cbb Merge pull request #1483 from versity/sis/bucket-acceleration-configuration-acttions-not-implemented
feat: adds not implemented routes for bucket accelerate configuration actions
2025-08-22 16:18:32 -07:00
niksis02
14a2984d59 feat: adds not implemented routes for bucket website actions
Closes #1450

Adds `NotImplemented` routes for bucket website S3 actions:
- `PutBucketWebsite`
- `GetBucketWebsite`
- `DeleteBucketWebsite`
2025-08-22 19:56:51 +04:00
niksis02
0895ada9ed feat: adds not implemented routes for bucket accelerate configuration actions
Closes #1452

Adds `NotImplemented` routes for bucket accelerate configuration S3 actions:
- `PutBucketAccelerateConfiguration`
- `GetBucketAccelerateConfiguration`
2025-08-22 14:45:42 +04:00
Ben McClelland
867cdb5f97 Merge pull request #1480 from versity/sis/bucket-notification-actions-not-implemented
feat: adds not implemented routes for bucket notification configuration actions
2025-08-21 10:41:37 -07:00
Ben McClelland
2ae66311a7 Merge pull request #1479 from versity/sis/bucket-public-access-block-actions-not-implemented
feat: adds not implemented routes for bucket public access block actions
2025-08-21 10:41:10 -07:00
niksis02
d784c0a841 feat: adds not implemented routes for bucket notification configuration actions
Closes #1453

Adds `NotImplemented` routes for bucket notification configuration S3 actions:
- `PutBucketNotificationConfiguration`
- `GetBucketNotificationConfiguration`
2025-08-21 20:40:18 +04:00
niksis02
be79fc249d feat: adds not implemented routes for bucket public access block actions
Closes #1451

Adds `NotImplemented` routes for bucket public access block S3 actions:
- `PutPublicAccessBlock`
- `GetPublicAccessBlock`
- `DeletePublicAccessBlock`
2025-08-21 20:10:29 +04:00
Ben McClelland
3a51b1ee5c Merge pull request #1478 from versity/sis/bucket-replication-actions-not-implemented
feat: adds not implemented routes for bucket replication actions
2025-08-21 08:24:30 -07:00
Ben McClelland
7954d386b2 Merge pull request #1477 from versity/sis/bucket-metrics-configuration-actions-not-implemented
feat: adds not implemented routes for bucket metrics configuration actions
2025-08-21 08:23:33 -07:00
niksis02
88f84bfd89 feat: adds not implemented routes for bucket replication actions
Closes #1449

Adds `NotImplemented` routes for bucket replication S3 actions:
- `PutBucketReplication`
- `GetBucketReplication`
- `DeleteBucketReplication`

Adds missing actions in metrics `ActionMap`
2025-08-21 16:44:29 +04:00
niksis02
45a1f7ae7c feat: adds not implemented routes for bucket metrics configuration actions
Closes #1445

Adds `NotImplemented` routes for bucket metrics configuration S3 actions:
- `PutBucketMetricsConfiguration`
- `GetBucketMetricsConfiguration`
- `ListBucketMetricsConfigurations`
- `DeleteBucketMetricsConfiguration`

Adds the missing bucket actions to the bucket policy `supportedActionList`.
2025-08-21 16:05:06 +04:00
Ben McClelland
be1708b1df Merge pull request #1476 from versity/sis/bucket-request-payment-actions-not-implemented
feat: adds not implemented routes for bucket request payment actions
2025-08-20 17:10:54 -07:00
Ben McClelland
617ad0fd31 Merge pull request #1475 from versity/sis/bucket-logging-actions-not-implemented
feat: adds not implemented routes for bucket logging actions
2025-08-20 17:10:32 -07:00
Ben McClelland
3e4c31f14a Merge pull request #1474 from versity/sis/bucket-lifecycle-configuration-actions-not-implemented
feat: adds not implemented routes for bucket lifecycle configuration actions
2025-08-20 17:09:59 -07:00
Ben McClelland
502a72bf20 Merge pull request #1461 from versity/sis/bucket-cors-implementation
feat: bucket cors implementation
2025-08-20 17:09:21 -07:00
niksis02
6b450a5c11 feat: adds not implemented routes for bucket request payment actions
Closes #1455

Adds `NotImplemented` routes for bucket request payment S3 actions:
- `PutBucketRequestPayment`
- `GetBucketRequestPayment`
2025-08-21 00:54:31 +04:00
niksis02
5f28a7449e feat: adds not implemented routes for bucket logging actions
Closes #1444

Adds `NotImplemented` routes for bucket logging S3 actions:
- `PutBucketLogging`
- `GetBucketLogging`
2025-08-20 21:07:09 +04:00
niksis02
025b0ee3c8 feat: adds not implemented routes for bucket lifecycle configuration actions
Closes #1443

Adds `NotImplemented` routes for bucket lifecycle configuration S3 actions:
- `PutBucketLifecycleConfiguration`
- `GetBucketLifecycleConfiguration`
- `DeleteBucketLifecycle`
2025-08-20 20:48:58 +04:00
niksis02
09031a30e5 feat: bucket cors implementation
Closes #1003

**Changes Introduced:**

1. **S3 Bucket CORS Actions**

   * Implemented the following S3 bucket CORS APIs:

     * `PutBucketCors` – Configure CORS rules for a bucket.
     * `GetBucketCors` – Retrieve the current CORS configuration for a bucket.
     * `DeleteBucketCors` – Remove CORS configuration from a bucket.

2. **CORS Preflight Handling**

   * Added an `OPTIONS` endpoint to handle browser preflight requests.
   * The endpoint evaluates incoming requests against bucket CORS rules and returns the appropriate `Access-Control-*` headers.

3. **CORS Middleware**

   * Implemented middleware that:

     * Checks if a bucket has CORS configured.
     * Detects the `Origin` header in the request.
     * Adds the necessary `Access-Control-*` headers to the response when the request matches the bucket CORS configuration.
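
A minimal sketch of such a middleware, assuming Fiber and a hypothetical `getBucketCors` lookup with a simplified rule type (names are illustrative, not the gateway's actual API):

```go
// Hypothetical sketch of a bucket CORS middleware; not the gateway's actual code.
package main

import (
	"strings"

	"github.com/gofiber/fiber/v2"
)

// corsRule is a simplified stand-in for a stored bucket CORS rule.
type corsRule struct {
	AllowedOrigins []string
	AllowedMethods []string
	AllowedHeaders []string
}

// getBucketCors is assumed to load the stored CORS configuration for a bucket;
// a nil result means no CORS configuration is set.
func getBucketCors(bucket string) []corsRule { return nil }

func corsMiddleware(c *fiber.Ctx) error {
	origin := c.Get("Origin")
	bucket, _, _ := strings.Cut(strings.TrimPrefix(c.Path(), "/"), "/")
	rules := getBucketCors(bucket)
	if origin == "" || rules == nil {
		// No Origin header or no CORS configuration: nothing to add.
		return c.Next()
	}
	for _, rule := range rules {
		for _, allowed := range rule.AllowedOrigins {
			if allowed == "*" || strings.EqualFold(allowed, origin) {
				// Request matches a rule: add the Access-Control-* headers.
				c.Set("Access-Control-Allow-Origin", origin)
				c.Set("Access-Control-Allow-Methods", strings.Join(rule.AllowedMethods, ", "))
				if len(rule.AllowedHeaders) > 0 {
					c.Set("Access-Control-Allow-Headers", strings.Join(rule.AllowedHeaders, ", "))
				}
				return c.Next()
			}
		}
	}
	return c.Next()
}

func main() {
	app := fiber.New()
	app.Use(corsMiddleware)
	_ = app.Listen(":7070") // port is illustrative
}
```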
2025-08-20 20:45:09 +04:00
Ben McClelland
5fb73deef1 Merge pull request #1472 from versity/ben/log-panic
fix: panic in access log when region header not set in request context
2025-08-20 09:44:35 -07:00
Ben McClelland
dafe099d9b fix: iam ldap reconnect after network disconnects
Handle LDAP connection failures by attempting to reconnect.
This should resolve the issue of connections being closed by
the LDAP server after a period of inactivity.
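
A minimal sketch of retry-on-disconnect for an LDAP search, assuming the github.com/go-ldap/ldap/v3 client; the type, field names, and reconnect policy here are illustrative, not the gateway's implementation:

```go
package main

import (
	"log"

	"github.com/go-ldap/ldap/v3"
)

type iamLDAP struct {
	url, bindDN, bindPass string
	conn                  *ldap.Conn
}

// connect (re)establishes the LDAP connection and binds.
func (l *iamLDAP) connect() error {
	conn, err := ldap.DialURL(l.url)
	if err != nil {
		return err
	}
	if err := conn.Bind(l.bindDN, l.bindPass); err != nil {
		conn.Close()
		return err
	}
	if l.conn != nil {
		l.conn.Close()
	}
	l.conn = conn
	return nil
}

// search retries once after reconnecting, covering connections the server
// closed after a period of inactivity.
func (l *iamLDAP) search(req *ldap.SearchRequest) (*ldap.SearchResult, error) {
	res, err := l.conn.Search(req)
	if err == nil {
		return res, nil
	}
	// Any error is treated as a possibly dead connection in this sketch.
	if rerr := l.connect(); rerr != nil {
		return nil, rerr
	}
	return l.conn.Search(req)
}

func main() {
	l := &iamLDAP{url: "ldap://localhost:389", bindDN: "cn=admin,dc=example,dc=org", bindPass: "secret"}
	if err := l.connect(); err != nil {
		log.Fatal(err)
	}
	req := ldap.NewSearchRequest("dc=example,dc=org", ldap.ScopeWholeSubtree,
		ldap.NeverDerefAliases, 0, 0, false, "(objectClass=*)", nil, nil)
	if _, err := l.search(req); err != nil {
		log.Fatal(err)
	}
}
```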

Fixes #1464
2025-08-19 18:17:12 -07:00
Ben McClelland
795324109e fix: panic in access log when region header not set in request context
This fixes a nil deref when the region is not set for the access
log. This was reported to happen during network security scans
likely sending unexpected requests that trigger this case.
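
A minimal sketch of the guard, assuming the region is carried in request-scoped values; the key name and fallback are illustrative:

```go
package main

import "fmt"

// regionOrDefault avoids dereferencing a missing value by falling back to a default.
func regionOrDefault(vals map[string]string) string {
	if r, ok := vals["region"]; ok && r != "" {
		return r
	}
	return "us-east-1" // illustrative fallback
}

func main() {
	fmt.Println(regionOrDefault(map[string]string{})) // prints the fallback region
}
```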

Fixes #1463
2025-08-19 18:06:20 -07:00
Ben McClelland
794d01a0ae Merge pull request #1462 from versity/test/test_rest_delete_bucket
Test/test rest delete bucket
2025-08-19 16:12:47 -07:00
Ben McClelland
020542639a Merge pull request #1469 from versity/sis/bucket-inventory-configuration-actions-not-implemented
feat: adds not implemented routes for bucket inventory configuration actions
2025-08-19 16:11:37 -07:00
Ben McClelland
3703d919f6 Merge pull request #1468 from versity/sis/bucket-intelligent-tiering-actions-not-implemented
feat: adds not implemented routes for bucket intelligent tiering actions
2025-08-19 16:11:11 -07:00
Ben McClelland
56af16fcc4 Merge pull request #1467 from versity/sis/bucket-encryption-actions-not-implemented
feat: adds not implemented routes for bucket encryption actions
2025-08-19 16:10:45 -07:00
Ben McClelland
ec80b11cef Merge pull request #1465 from versity/sis/bucket-analytics-actions-not-implemented
fix: adds not implemented routes for bucket analytics s3 actions.
2025-08-19 16:10:02 -07:00
Ben McClelland
12ab923a35 Merge pull request #1466 from versity/dependabot/go_modules/dev-dependencies-af42e1f312
chore(deps): bump github.com/valyala/fasthttp from 1.64.0 to 1.65.0 in the dev-dependencies group
2025-08-19 15:43:39 -07:00
niksis02
24b88e20e0 feat: adds not implemented routes for bucket inventory configuration actions
Closes #1440

Adds `NotImplemented` routes for bucket inventory configuration S3 actions:
- `PutBucketInventoryConfiguration`
- `GetBucketInventoryConfiguration`
- `ListBucketInventoryConfigurations`
- `DeleteBucketInventoryConfiguration`
2025-08-19 21:49:38 +04:00
niksis02
cdccdcc4d6 feat: adds not implemented routes for bucket intelligent tiering actions
Closes #1440

Adds `NotImplemented` routes for intelligent tiering S3 actions:
- `PutBucketIntelligentTieringConfiguration`
- `GetBucketIntelligentTieringConfiguration`
- `ListBucketIntelligentTieringConfigurations`
- `DeleteBucketIntelligentTieringConfiguration`
2025-08-19 21:23:05 +04:00
niksis02
ed92ad3daa feat: adds not implemented routes for bucket encryption actions
Closes #1439

Adds `NotImplemented` routes for bucket encryption S3 actions:

- `PutBucketEncryption`
- `GetBucketEncryption`
- `DeleteBucketEncryption`
2025-08-19 20:30:02 +04:00
Luke McCrone
2679ac70b6 test: more delete bucket tests, more skips removals 2025-08-19 10:07:27 -03:00
dependabot[bot]
3208247597 chore(deps): bump github.com/valyala/fasthttp
Bumps the dev-dependencies group with 1 update: [github.com/valyala/fasthttp](https://github.com/valyala/fasthttp).


Updates `github.com/valyala/fasthttp` from 1.64.0 to 1.65.0
- [Release notes](https://github.com/valyala/fasthttp/releases)
- [Commits](https://github.com/valyala/fasthttp/compare/v1.64.0...v1.65.0)

---
updated-dependencies:
- dependency-name: github.com/valyala/fasthttp
  dependency-version: 1.65.0
  dependency-type: direct:production
  update-type: version-update:semver-minor
  dependency-group: dev-dependencies
...

Signed-off-by: dependabot[bot] <support@github.com>
2025-08-19 07:22:14 +00:00
niksis02
8db196634b fix: adds not implemented routes for bucket analytics s3 actions.
Fixes #1433
Fixes #1437
Fixes #1438

Adds 4 routes to return `NotImplemented` for bucket analytics `S3` actions:

- `PutBucketAnalyticsConfiguration`
- `GetBucketAnalyticsConfiguration`
- `DeleteBucketAnalyticsConfiguration`
- `ListBucketAnalyticsConfigurations`
2025-08-19 02:14:31 +04:00
Ben McClelland
f31a56316b Merge pull request #1460 from versity/fix/EtagAzureIssue
fix: add -1 to azure etag to avoid client sdk verifications
2025-08-14 18:12:19 -07:00
nitin
0eadc3871e fix: add -1 to azure etag to avoid client sdk verifications
The C++ SDK (and maybe others?) assumes that S3 ETags
without a "-" in the string are MD5 checksums. So an Azure
ETag that does not have a "-" but also is not an MD5 checksum
will fail some of the sdk internal validation checks.

Fix this by appending "-1" to the ETag to make it look like
a multipart format ETag, which skips the sdk verification
check.
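
A minimal sketch of the workaround, assuming a hypothetical helper that normalizes the ETag returned by Azure before handing it to S3 clients:

```go
package main

import (
	"fmt"
	"strings"
)

// azureETagToS3 appends "-1" to ETags that contain no "-", so SDKs that treat
// dash-less ETags as MD5 checksums skip their verification check.
func azureETagToS3(etag string) string {
	etag = strings.Trim(etag, `"`)
	if !strings.Contains(etag, "-") {
		etag += "-1"
	}
	return `"` + etag + `"`
}

func main() {
	fmt.Println(azureETagToS3(`"0x8DDBB4F5C2A1E30"`)) // "0x8DDBB4F5C2A1E30-1"
}
```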

Fixes: #1380

Co-authored-by: Ben McClelland <ben.mcclelland@versity.com>
2025-08-14 14:14:12 -07:00
Ben McClelland
84a989a23c Merge pull request #1459 from versity/test/not_implementeds
Test/not implementeds
2025-08-13 16:46:14 -07:00
Ben McClelland
6be62f189d Merge pull request #1448 from versity/ben/rabbitmq-event
feat: add rabbitmq s3 event notification support
2025-08-13 16:34:49 -07:00
Ben McClelland
36d2a55162 feat: add rabbitmq s3 event notification support
This adds support for a rabbitmq publisher for s3 events. The
mechanics are similar to the kafka and nats publishers, but it uses
the amqp protocol to send bucket events.
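
A minimal sketch of publishing an S3 bucket event over AMQP, assuming the github.com/rabbitmq/amqp091-go client; the event shape, exchange, and routing key names are illustrative, not the gateway's configuration:

```go
package main

import (
	"context"
	"encoding/json"
	"log"
	"time"

	amqp "github.com/rabbitmq/amqp091-go"
)

type bucketEvent struct {
	EventName string `json:"eventName"`
	Bucket    string `json:"bucket"`
	Key       string `json:"key"`
}

func main() {
	conn, err := amqp.Dial("amqp://guest:guest@localhost:5672/")
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	ch, err := conn.Channel()
	if err != nil {
		log.Fatal(err)
	}
	defer ch.Close()

	body, _ := json.Marshal(bucketEvent{
		EventName: "s3:ObjectCreated:Put",
		Bucket:    "test",
		Key:       "hello.txt",
	})

	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()

	// Publish to a pre-declared exchange; mandatory/immediate are left false.
	err = ch.PublishWithContext(ctx, "s3-events", "s3.events", false, false, amqp.Publishing{
		ContentType: "application/json",
		Body:        body,
	})
	if err != nil {
		log.Fatal(err)
	}
}
```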
2025-08-13 12:46:57 -07:00
Luke McCrone
15f19cc75c test: "not implemented" commands 2025-08-13 15:49:46 -03:00
Ben McClelland
634396c3c5 Merge pull request #1447 from versity/ben/range-checks
fix: add test cases and fix behavior for head/get range requests
2025-08-13 08:31:56 -07:00
Ben McClelland
e134f63ebc fix: add test cases and fix behavior for head/get range requests
This adds a bunch of test cases for non-zero length objects, zero
length objects, and directory objects to match verified AWS responses
for the various byte range cases.

This fixes the posix head/get range responses for these test
cases as well.
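
A rough sketch of the kind of byte-range evaluation this change exercises, assuming a "bytes=start-end" header; the exact AWS edge cases (zero-length objects, suffix ranges, directory objects) are what the commit's tests pin down:

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
)

var errInvalidRange = fmt.Errorf("requested range not satisfiable")

// parseRange resolves a Range header against an object of the given size and
// returns the first/last byte offsets to serve.
func parseRange(header string, size int64) (first, last int64, err error) {
	spec, ok := strings.CutPrefix(header, "bytes=")
	if !ok {
		return 0, 0, fmt.Errorf("malformed range %q", header)
	}
	startStr, endStr, _ := strings.Cut(spec, "-")
	if startStr == "" {
		// Suffix range: serve the last N bytes.
		n, perr := strconv.ParseInt(endStr, 10, 64)
		if perr != nil || n <= 0 || size == 0 {
			return 0, 0, errInvalidRange
		}
		if n > size {
			n = size
		}
		return size - n, size - 1, nil
	}
	first, err = strconv.ParseInt(startStr, 10, 64)
	if err != nil || first >= size {
		return 0, 0, errInvalidRange
	}
	last = size - 1
	if endStr != "" {
		if e, perr := strconv.ParseInt(endStr, 10, 64); perr == nil && e < last {
			last = e
		}
	}
	return first, last, nil
}

func main() {
	fmt.Println(parseRange("bytes=0-99", 10)) // 0 9 <nil>: end clamped to size-1
	fmt.Println(parseRange("bytes=0-", 0))    // 416-style error for a zero-length object
}
```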
2025-08-12 14:46:58 -07:00
Ben McClelland
01760fdf1c Merge pull request #1446 from versity/dependabot/go_modules/dev-dependencies-fc69ab1dbe
chore(deps): bump the dev-dependencies group with 20 updates
2025-08-12 08:34:07 -07:00
dependabot[bot]
cef2950a79 chore(deps): bump the dev-dependencies group with 20 updates
Bumps the dev-dependencies group with 20 updates:

| Package | From | To |
| --- | --- | --- |
| [github.com/Azure/azure-sdk-for-go/sdk/azidentity](https://github.com/Azure/azure-sdk-for-go) | `1.10.1` | `1.11.0` |
| [github.com/aws/aws-sdk-go-v2](https://github.com/aws/aws-sdk-go-v2) | `1.37.2` | `1.38.0` |
| [github.com/aws/aws-sdk-go-v2/service/s3](https://github.com/aws/aws-sdk-go-v2) | `1.86.0` | `1.87.0` |
| [golang.org/x/sys](https://github.com/golang/sys) | `0.34.0` | `0.35.0` |
| [github.com/aws/aws-sdk-go-v2/feature/ec2/imds](https://github.com/aws/aws-sdk-go-v2) | `1.18.2` | `1.18.3` |
| [github.com/aws/aws-sdk-go-v2/service/sso](https://github.com/aws/aws-sdk-go-v2) | `1.27.0` | `1.28.0` |
| [github.com/aws/aws-sdk-go-v2/service/ssooidc](https://github.com/aws/aws-sdk-go-v2) | `1.32.0` | `1.33.0` |
| [github.com/aws/aws-sdk-go-v2/service/sts](https://github.com/aws/aws-sdk-go-v2) | `1.36.0` | `1.37.0` |
| [golang.org/x/crypto](https://github.com/golang/crypto) | `0.40.0` | `0.41.0` |
| [golang.org/x/net](https://github.com/golang/net) | `0.42.0` | `0.43.0` |
| [golang.org/x/text](https://github.com/golang/text) | `0.27.0` | `0.28.0` |
| [github.com/aws/aws-sdk-go-v2/config](https://github.com/aws/aws-sdk-go-v2) | `1.30.3` | `1.31.0` |
| [github.com/aws/aws-sdk-go-v2/credentials](https://github.com/aws/aws-sdk-go-v2) | `1.18.3` | `1.18.4` |
| [github.com/aws/aws-sdk-go-v2/feature/s3/manager](https://github.com/aws/aws-sdk-go-v2) | `1.18.3` | `1.18.4` |
| [github.com/aws/aws-sdk-go-v2/internal/configsources](https://github.com/aws/aws-sdk-go-v2) | `1.4.2` | `1.4.3` |
| [github.com/aws/aws-sdk-go-v2/internal/endpoints/v2](https://github.com/aws/aws-sdk-go-v2) | `2.7.2` | `2.7.3` |
| [github.com/aws/aws-sdk-go-v2/internal/v4a](https://github.com/aws/aws-sdk-go-v2) | `1.4.2` | `1.4.3` |
| [github.com/aws/aws-sdk-go-v2/service/internal/checksum](https://github.com/aws/aws-sdk-go-v2) | `1.8.2` | `1.8.3` |
| [github.com/aws/aws-sdk-go-v2/service/internal/presigned-url](https://github.com/aws/aws-sdk-go-v2) | `1.13.2` | `1.13.3` |
| [github.com/aws/aws-sdk-go-v2/service/internal/s3shared](https://github.com/aws/aws-sdk-go-v2) | `1.19.2` | `1.19.3` |


Updates `github.com/Azure/azure-sdk-for-go/sdk/azidentity` from 1.10.1 to 1.11.0
- [Release notes](https://github.com/Azure/azure-sdk-for-go/releases)
- [Changelog](https://github.com/Azure/azure-sdk-for-go/blob/main/documentation/go-mgmt-sdk-release-guideline.md)
- [Commits](https://github.com/Azure/azure-sdk-for-go/compare/sdk/azidentity/v1.10.1...sdk/azcore/v1.11.0)

Updates `github.com/aws/aws-sdk-go-v2` from 1.37.2 to 1.38.0
- [Release notes](https://github.com/aws/aws-sdk-go-v2/releases)
- [Changelog](https://github.com/aws/aws-sdk-go-v2/blob/main/changelog-template.json)
- [Commits](https://github.com/aws/aws-sdk-go-v2/compare/v1.37.2...v1.38.0)

Updates `github.com/aws/aws-sdk-go-v2/service/s3` from 1.86.0 to 1.87.0
- [Release notes](https://github.com/aws/aws-sdk-go-v2/releases)
- [Changelog](https://github.com/aws/aws-sdk-go-v2/blob/main/changelog-template.json)
- [Commits](https://github.com/aws/aws-sdk-go-v2/compare/service/s3/v1.86.0...service/s3/v1.87.0)

Updates `golang.org/x/sys` from 0.34.0 to 0.35.0
- [Commits](https://github.com/golang/sys/compare/v0.34.0...v0.35.0)

Updates `github.com/aws/aws-sdk-go-v2/feature/ec2/imds` from 1.18.2 to 1.18.3
- [Release notes](https://github.com/aws/aws-sdk-go-v2/releases)
- [Changelog](https://github.com/aws/aws-sdk-go-v2/blob/config/v1.18.3/CHANGELOG.md)
- [Commits](https://github.com/aws/aws-sdk-go-v2/compare/config/v1.18.2...config/v1.18.3)

Updates `github.com/aws/aws-sdk-go-v2/service/sso` from 1.27.0 to 1.28.0
- [Release notes](https://github.com/aws/aws-sdk-go-v2/releases)
- [Changelog](https://github.com/aws/aws-sdk-go-v2/blob/main/changelog-template.json)
- [Commits](https://github.com/aws/aws-sdk-go-v2/compare/v1.27.0...v1.28.0)

Updates `github.com/aws/aws-sdk-go-v2/service/ssooidc` from 1.32.0 to 1.33.0
- [Release notes](https://github.com/aws/aws-sdk-go-v2/releases)
- [Changelog](https://github.com/aws/aws-sdk-go-v2/blob/main/changelog-template.json)
- [Commits](https://github.com/aws/aws-sdk-go-v2/compare/v1.32.0...v1.33.0)

Updates `github.com/aws/aws-sdk-go-v2/service/sts` from 1.36.0 to 1.37.0
- [Release notes](https://github.com/aws/aws-sdk-go-v2/releases)
- [Changelog](https://github.com/aws/aws-sdk-go-v2/blob/main/changelog-template.json)
- [Commits](https://github.com/aws/aws-sdk-go-v2/compare/v1.36.0...v1.37.0)

Updates `golang.org/x/crypto` from 0.40.0 to 0.41.0
- [Commits](https://github.com/golang/crypto/compare/v0.40.0...v0.41.0)

Updates `golang.org/x/net` from 0.42.0 to 0.43.0
- [Commits](https://github.com/golang/net/compare/v0.42.0...v0.43.0)

Updates `golang.org/x/text` from 0.27.0 to 0.28.0
- [Release notes](https://github.com/golang/text/releases)
- [Commits](https://github.com/golang/text/compare/v0.27.0...v0.28.0)

Updates `github.com/aws/aws-sdk-go-v2/config` from 1.30.3 to 1.31.0
- [Release notes](https://github.com/aws/aws-sdk-go-v2/releases)
- [Changelog](https://github.com/aws/aws-sdk-go-v2/blob/main/changelog-template.json)
- [Commits](https://github.com/aws/aws-sdk-go-v2/compare/v1.30.3...v1.31.0)

Updates `github.com/aws/aws-sdk-go-v2/credentials` from 1.18.3 to 1.18.4
- [Release notes](https://github.com/aws/aws-sdk-go-v2/releases)
- [Changelog](https://github.com/aws/aws-sdk-go-v2/blob/config/v1.18.4/CHANGELOG.md)
- [Commits](https://github.com/aws/aws-sdk-go-v2/compare/config/v1.18.3...config/v1.18.4)

Updates `github.com/aws/aws-sdk-go-v2/feature/s3/manager` from 1.18.3 to 1.18.4
- [Release notes](https://github.com/aws/aws-sdk-go-v2/releases)
- [Changelog](https://github.com/aws/aws-sdk-go-v2/blob/config/v1.18.4/CHANGELOG.md)
- [Commits](https://github.com/aws/aws-sdk-go-v2/compare/config/v1.18.3...config/v1.18.4)

Updates `github.com/aws/aws-sdk-go-v2/internal/configsources` from 1.4.2 to 1.4.3
- [Release notes](https://github.com/aws/aws-sdk-go-v2/releases)
- [Changelog](https://github.com/aws/aws-sdk-go-v2/blob/service/m2/v1.4.3/CHANGELOG.md)
- [Commits](https://github.com/aws/aws-sdk-go-v2/compare/service/m2/v1.4.2...service/m2/v1.4.3)

Updates `github.com/aws/aws-sdk-go-v2/internal/endpoints/v2` from 2.7.2 to 2.7.3
- [Release notes](https://github.com/aws/aws-sdk-go-v2/releases)
- [Changelog](https://github.com/aws/aws-sdk-go-v2/blob/main/changelog-template.json)
- [Commits](https://github.com/aws/aws-sdk-go-v2/compare/internal/endpoints/v2.7.2...internal/endpoints/v2.7.3)

Updates `github.com/aws/aws-sdk-go-v2/internal/v4a` from 1.4.2 to 1.4.3
- [Release notes](https://github.com/aws/aws-sdk-go-v2/releases)
- [Changelog](https://github.com/aws/aws-sdk-go-v2/blob/service/m2/v1.4.3/CHANGELOG.md)
- [Commits](https://github.com/aws/aws-sdk-go-v2/compare/service/m2/v1.4.2...service/m2/v1.4.3)

Updates `github.com/aws/aws-sdk-go-v2/service/internal/checksum` from 1.8.2 to 1.8.3
- [Release notes](https://github.com/aws/aws-sdk-go-v2/releases)
- [Changelog](https://github.com/aws/aws-sdk-go-v2/blob/main/changelog-template.json)
- [Commits](https://github.com/aws/aws-sdk-go-v2/compare/config/v1.8.2...config/v1.8.3)

Updates `github.com/aws/aws-sdk-go-v2/service/internal/presigned-url` from 1.13.2 to 1.13.3
- [Release notes](https://github.com/aws/aws-sdk-go-v2/releases)
- [Changelog](https://github.com/aws/aws-sdk-go-v2/blob/main/changelog-template.json)
- [Commits](https://github.com/aws/aws-sdk-go-v2/compare/service/m2/v1.13.2...service/m2/v1.13.3)

Updates `github.com/aws/aws-sdk-go-v2/service/internal/s3shared` from 1.19.2 to 1.19.3
- [Release notes](https://github.com/aws/aws-sdk-go-v2/releases)
- [Changelog](https://github.com/aws/aws-sdk-go-v2/blob/main/changelog-template.json)
- [Commits](https://github.com/aws/aws-sdk-go-v2/compare/service/m2/v1.19.2...service/pi/v1.19.3)

---
updated-dependencies:
- dependency-name: github.com/Azure/azure-sdk-for-go/sdk/azidentity
  dependency-version: 1.11.0
  dependency-type: direct:production
  update-type: version-update:semver-minor
  dependency-group: dev-dependencies
- dependency-name: github.com/aws/aws-sdk-go-v2
  dependency-version: 1.38.0
  dependency-type: direct:production
  update-type: version-update:semver-minor
  dependency-group: dev-dependencies
- dependency-name: github.com/aws/aws-sdk-go-v2/service/s3
  dependency-version: 1.87.0
  dependency-type: direct:production
  update-type: version-update:semver-minor
  dependency-group: dev-dependencies
- dependency-name: golang.org/x/sys
  dependency-version: 0.35.0
  dependency-type: direct:production
  update-type: version-update:semver-minor
  dependency-group: dev-dependencies
- dependency-name: github.com/aws/aws-sdk-go-v2/feature/ec2/imds
  dependency-version: 1.18.3
  dependency-type: indirect
  update-type: version-update:semver-patch
  dependency-group: dev-dependencies
- dependency-name: github.com/aws/aws-sdk-go-v2/service/sso
  dependency-version: 1.28.0
  dependency-type: indirect
  update-type: version-update:semver-minor
  dependency-group: dev-dependencies
- dependency-name: github.com/aws/aws-sdk-go-v2/service/ssooidc
  dependency-version: 1.33.0
  dependency-type: indirect
  update-type: version-update:semver-minor
  dependency-group: dev-dependencies
- dependency-name: github.com/aws/aws-sdk-go-v2/service/sts
  dependency-version: 1.37.0
  dependency-type: indirect
  update-type: version-update:semver-minor
  dependency-group: dev-dependencies
- dependency-name: golang.org/x/crypto
  dependency-version: 0.41.0
  dependency-type: indirect
  update-type: version-update:semver-minor
  dependency-group: dev-dependencies
- dependency-name: golang.org/x/net
  dependency-version: 0.43.0
  dependency-type: indirect
  update-type: version-update:semver-minor
  dependency-group: dev-dependencies
- dependency-name: golang.org/x/text
  dependency-version: 0.28.0
  dependency-type: indirect
  update-type: version-update:semver-minor
  dependency-group: dev-dependencies
- dependency-name: github.com/aws/aws-sdk-go-v2/config
  dependency-version: 1.31.0
  dependency-type: direct:production
  update-type: version-update:semver-minor
  dependency-group: dev-dependencies
- dependency-name: github.com/aws/aws-sdk-go-v2/credentials
  dependency-version: 1.18.4
  dependency-type: direct:production
  update-type: version-update:semver-patch
  dependency-group: dev-dependencies
- dependency-name: github.com/aws/aws-sdk-go-v2/feature/s3/manager
  dependency-version: 1.18.4
  dependency-type: direct:production
  update-type: version-update:semver-patch
  dependency-group: dev-dependencies
- dependency-name: github.com/aws/aws-sdk-go-v2/internal/configsources
  dependency-version: 1.4.3
  dependency-type: indirect
  update-type: version-update:semver-patch
  dependency-group: dev-dependencies
- dependency-name: github.com/aws/aws-sdk-go-v2/internal/endpoints/v2
  dependency-version: 2.7.3
  dependency-type: indirect
  update-type: version-update:semver-patch
  dependency-group: dev-dependencies
- dependency-name: github.com/aws/aws-sdk-go-v2/internal/v4a
  dependency-version: 1.4.3
  dependency-type: indirect
  update-type: version-update:semver-patch
  dependency-group: dev-dependencies
- dependency-name: github.com/aws/aws-sdk-go-v2/service/internal/checksum
  dependency-version: 1.8.3
  dependency-type: indirect
  update-type: version-update:semver-patch
  dependency-group: dev-dependencies
- dependency-name: github.com/aws/aws-sdk-go-v2/service/internal/presigned-url
  dependency-version: 1.13.3
  dependency-type: indirect
  update-type: version-update:semver-patch
  dependency-group: dev-dependencies
- dependency-name: github.com/aws/aws-sdk-go-v2/service/internal/s3shared
  dependency-version: 1.19.3
  dependency-type: indirect
  update-type: version-update:semver-patch
  dependency-group: dev-dependencies
...

Signed-off-by: dependabot[bot] <support@github.com>
2025-08-12 08:03:21 +00:00
Ben McClelland
b0054fc415 Merge pull request #1435 from ondrap/pr2
Fix O_TMPFILE Linkat race, cleanup of scoutfs integration, fix MoveData non-aligned problem
2025-08-08 08:18:02 -07:00
Ondrej Palkovsky
f0858a47d5 Small cleanups. 2025-08-08 08:56:44 +02:00
Ondrej Palkovsky
298d4ec6b4 Merged scoutfs and posix ListObjects and ListObjectsV2 2025-08-08 08:37:16 +02:00
Ondrej Palkovsky
3934beae2f Lowercase err message. 2025-08-08 07:36:13 +02:00
Ben McClelland
ba017420c4 Merge pull request #1430 from ondrap/main 2025-08-07 18:05:23 -07:00
Ondrej Palkovsky
936239b619 DRY of scoutfs integration, alignment testing for scoutfs.MoveData 2025-08-07 18:28:38 +02:00
Ondrej Palkovsky
e62337f055 Fix O_TMPFILE Linkat race. 2025-08-07 18:28:32 +02:00
Ben McClelland
0be8b2aedd Merge pull request #1432 from versity/dependabot/go_modules/dev-dependencies-8a4a54d917
chore(deps): bump the dev-dependencies group with 19 updates
2025-08-05 14:10:00 -07:00
Ben McClelland
9122f66438 Merge pull request #1431 from versity/test/head_bucket
test: HeadBucket tests, test script reorganization
2025-08-05 14:09:31 -07:00
dependabot[bot]
47e49ce593 chore(deps): bump the dev-dependencies group with 19 updates
Bumps the dev-dependencies group with 19 updates:

| Package | From | To |
| --- | --- | --- |
| [github.com/Azure/azure-sdk-for-go/sdk/azcore](https://github.com/Azure/azure-sdk-for-go) | `1.18.1` | `1.18.2` |
| [github.com/aws/aws-sdk-go-v2](https://github.com/aws/aws-sdk-go-v2) | `1.37.0` | `1.37.2` |
| [github.com/aws/aws-sdk-go-v2/service/s3](https://github.com/aws/aws-sdk-go-v2) | `1.85.0` | `1.86.0` |
| [github.com/nats-io/nats.go](https://github.com/nats-io/nats.go) | `1.43.0` | `1.44.0` |
| [github.com/Azure/azure-sdk-for-go/sdk/internal](https://github.com/Azure/azure-sdk-for-go) | `1.11.1` | `1.11.2` |
| [github.com/aws/aws-sdk-go-v2/feature/ec2/imds](https://github.com/aws/aws-sdk-go-v2) | `1.17.0` | `1.18.2` |
| [github.com/aws/aws-sdk-go-v2/service/sso](https://github.com/aws/aws-sdk-go-v2) | `1.26.0` | `1.27.0` |
| [github.com/aws/aws-sdk-go-v2/service/ssooidc](https://github.com/aws/aws-sdk-go-v2) | `1.31.0` | `1.32.0` |
| [github.com/aws/aws-sdk-go-v2/service/sts](https://github.com/aws/aws-sdk-go-v2) | `1.35.0` | `1.36.0` |
| [github.com/golang-jwt/jwt/v5](https://github.com/golang-jwt/jwt) | `5.2.3` | `5.3.0` |
| [github.com/aws/aws-sdk-go-v2/config](https://github.com/aws/aws-sdk-go-v2) | `1.30.0` | `1.30.3` |
| [github.com/aws/aws-sdk-go-v2/credentials](https://github.com/aws/aws-sdk-go-v2) | `1.18.0` | `1.18.3` |
| [github.com/aws/aws-sdk-go-v2/feature/s3/manager](https://github.com/aws/aws-sdk-go-v2) | `1.18.0` | `1.18.3` |
| [github.com/aws/aws-sdk-go-v2/internal/configsources](https://github.com/aws/aws-sdk-go-v2) | `1.4.0` | `1.4.2` |
| [github.com/aws/aws-sdk-go-v2/internal/endpoints/v2](https://github.com/aws/aws-sdk-go-v2) | `2.7.0` | `2.7.2` |
| [github.com/aws/aws-sdk-go-v2/internal/v4a](https://github.com/aws/aws-sdk-go-v2) | `1.4.0` | `1.4.2` |
| [github.com/aws/aws-sdk-go-v2/service/internal/checksum](https://github.com/aws/aws-sdk-go-v2) | `1.8.0` | `1.8.2` |
| [github.com/aws/aws-sdk-go-v2/service/internal/presigned-url](https://github.com/aws/aws-sdk-go-v2) | `1.13.0` | `1.13.2` |
| [github.com/aws/aws-sdk-go-v2/service/internal/s3shared](https://github.com/aws/aws-sdk-go-v2) | `1.19.0` | `1.19.2` |


Updates `github.com/Azure/azure-sdk-for-go/sdk/azcore` from 1.18.1 to 1.18.2
- [Release notes](https://github.com/Azure/azure-sdk-for-go/releases)
- [Changelog](https://github.com/Azure/azure-sdk-for-go/blob/main/documentation/go-mgmt-sdk-release-guideline.md)
- [Commits](https://github.com/Azure/azure-sdk-for-go/compare/sdk/azcore/v1.18.1...sdk/azcore/v1.18.2)

Updates `github.com/aws/aws-sdk-go-v2` from 1.37.0 to 1.37.2
- [Release notes](https://github.com/aws/aws-sdk-go-v2/releases)
- [Changelog](https://github.com/aws/aws-sdk-go-v2/blob/main/changelog-template.json)
- [Commits](https://github.com/aws/aws-sdk-go-v2/compare/v1.37.0...v1.37.2)

Updates `github.com/aws/aws-sdk-go-v2/service/s3` from 1.85.0 to 1.86.0
- [Release notes](https://github.com/aws/aws-sdk-go-v2/releases)
- [Changelog](https://github.com/aws/aws-sdk-go-v2/blob/main/changelog-template.json)
- [Commits](https://github.com/aws/aws-sdk-go-v2/compare/service/s3/v1.85.0...service/s3/v1.86.0)

Updates `github.com/nats-io/nats.go` from 1.43.0 to 1.44.0
- [Release notes](https://github.com/nats-io/nats.go/releases)
- [Commits](https://github.com/nats-io/nats.go/compare/v1.43.0...v1.44.0)

Updates `github.com/Azure/azure-sdk-for-go/sdk/internal` from 1.11.1 to 1.11.2
- [Release notes](https://github.com/Azure/azure-sdk-for-go/releases)
- [Changelog](https://github.com/Azure/azure-sdk-for-go/blob/main/documentation/go-mgmt-sdk-release-guideline.md)
- [Commits](https://github.com/Azure/azure-sdk-for-go/compare/sdk/azcore/v1.11.1...sdk/internal/v1.11.2)

Updates `github.com/aws/aws-sdk-go-v2/feature/ec2/imds` from 1.17.0 to 1.18.2
- [Release notes](https://github.com/aws/aws-sdk-go-v2/releases)
- [Changelog](https://github.com/aws/aws-sdk-go-v2/blob/config/v1.18.2/CHANGELOG.md)
- [Commits](https://github.com/aws/aws-sdk-go-v2/compare/v1.17.0...config/v1.18.2)

Updates `github.com/aws/aws-sdk-go-v2/service/sso` from 1.26.0 to 1.27.0
- [Release notes](https://github.com/aws/aws-sdk-go-v2/releases)
- [Changelog](https://github.com/aws/aws-sdk-go-v2/blob/main/changelog-template.json)
- [Commits](https://github.com/aws/aws-sdk-go-v2/compare/v1.26.0...v1.27.0)

Updates `github.com/aws/aws-sdk-go-v2/service/ssooidc` from 1.31.0 to 1.32.0
- [Release notes](https://github.com/aws/aws-sdk-go-v2/releases)
- [Changelog](https://github.com/aws/aws-sdk-go-v2/blob/main/changelog-template.json)
- [Commits](https://github.com/aws/aws-sdk-go-v2/compare/v1.31.0...v1.32.0)

Updates `github.com/aws/aws-sdk-go-v2/service/sts` from 1.35.0 to 1.36.0
- [Release notes](https://github.com/aws/aws-sdk-go-v2/releases)
- [Changelog](https://github.com/aws/aws-sdk-go-v2/blob/main/changelog-template.json)
- [Commits](https://github.com/aws/aws-sdk-go-v2/compare/v1.35.0...v1.36.0)

Updates `github.com/golang-jwt/jwt/v5` from 5.2.3 to 5.3.0
- [Release notes](https://github.com/golang-jwt/jwt/releases)
- [Changelog](https://github.com/golang-jwt/jwt/blob/main/VERSION_HISTORY.md)
- [Commits](https://github.com/golang-jwt/jwt/compare/v5.2.3...v5.3.0)

Updates `github.com/aws/aws-sdk-go-v2/config` from 1.30.0 to 1.30.3
- [Release notes](https://github.com/aws/aws-sdk-go-v2/releases)
- [Changelog](https://github.com/aws/aws-sdk-go-v2/blob/main/changelog-template.json)
- [Commits](https://github.com/aws/aws-sdk-go-v2/compare/v1.30.0...v1.30.3)

Updates `github.com/aws/aws-sdk-go-v2/credentials` from 1.18.0 to 1.18.3
- [Release notes](https://github.com/aws/aws-sdk-go-v2/releases)
- [Changelog](https://github.com/aws/aws-sdk-go-v2/blob/config/v1.18.3/CHANGELOG.md)
- [Commits](https://github.com/aws/aws-sdk-go-v2/compare/v1.18.0...config/v1.18.3)

Updates `github.com/aws/aws-sdk-go-v2/feature/s3/manager` from 1.18.0 to 1.18.3
- [Release notes](https://github.com/aws/aws-sdk-go-v2/releases)
- [Changelog](https://github.com/aws/aws-sdk-go-v2/blob/config/v1.18.3/CHANGELOG.md)
- [Commits](https://github.com/aws/aws-sdk-go-v2/compare/v1.18.0...config/v1.18.3)

Updates `github.com/aws/aws-sdk-go-v2/internal/configsources` from 1.4.0 to 1.4.2
- [Release notes](https://github.com/aws/aws-sdk-go-v2/releases)
- [Changelog](https://github.com/aws/aws-sdk-go-v2/blob/service/m2/v1.4.2/CHANGELOG.md)
- [Commits](https://github.com/aws/aws-sdk-go-v2/compare/v1.4.0...service/m2/v1.4.2)

Updates `github.com/aws/aws-sdk-go-v2/internal/endpoints/v2` from 2.7.0 to 2.7.2
- [Release notes](https://github.com/aws/aws-sdk-go-v2/releases)
- [Changelog](https://github.com/aws/aws-sdk-go-v2/blob/main/changelog-template.json)
- [Commits](https://github.com/aws/aws-sdk-go-v2/compare/internal/endpoints/v2.7.0...internal/endpoints/v2.7.2)

Updates `github.com/aws/aws-sdk-go-v2/internal/v4a` from 1.4.0 to 1.4.2
- [Release notes](https://github.com/aws/aws-sdk-go-v2/releases)
- [Changelog](https://github.com/aws/aws-sdk-go-v2/blob/service/m2/v1.4.2/CHANGELOG.md)
- [Commits](https://github.com/aws/aws-sdk-go-v2/compare/v1.4.0...service/m2/v1.4.2)

Updates `github.com/aws/aws-sdk-go-v2/service/internal/checksum` from 1.8.0 to 1.8.2
- [Release notes](https://github.com/aws/aws-sdk-go-v2/releases)
- [Changelog](https://github.com/aws/aws-sdk-go-v2/blob/main/changelog-template.json)
- [Commits](https://github.com/aws/aws-sdk-go-v2/compare/v1.8.0...config/v1.8.2)

Updates `github.com/aws/aws-sdk-go-v2/service/internal/presigned-url` from 1.13.0 to 1.13.2
- [Release notes](https://github.com/aws/aws-sdk-go-v2/releases)
- [Changelog](https://github.com/aws/aws-sdk-go-v2/blob/main/changelog-template.json)
- [Commits](https://github.com/aws/aws-sdk-go-v2/compare/v1.13.0...service/m2/v1.13.2)

Updates `github.com/aws/aws-sdk-go-v2/service/internal/s3shared` from 1.19.0 to 1.19.2
- [Release notes](https://github.com/aws/aws-sdk-go-v2/releases)
- [Changelog](https://github.com/aws/aws-sdk-go-v2/blob/main/changelog-template.json)
- [Commits](https://github.com/aws/aws-sdk-go-v2/compare/v1.19.0...service/m2/v1.19.2)

---
updated-dependencies:
- dependency-name: github.com/Azure/azure-sdk-for-go/sdk/azcore
  dependency-version: 1.18.2
  dependency-type: direct:production
  update-type: version-update:semver-patch
  dependency-group: dev-dependencies
- dependency-name: github.com/aws/aws-sdk-go-v2
  dependency-version: 1.37.2
  dependency-type: direct:production
  update-type: version-update:semver-patch
  dependency-group: dev-dependencies
- dependency-name: github.com/aws/aws-sdk-go-v2/service/s3
  dependency-version: 1.86.0
  dependency-type: direct:production
  update-type: version-update:semver-minor
  dependency-group: dev-dependencies
- dependency-name: github.com/nats-io/nats.go
  dependency-version: 1.44.0
  dependency-type: direct:production
  update-type: version-update:semver-minor
  dependency-group: dev-dependencies
- dependency-name: github.com/Azure/azure-sdk-for-go/sdk/internal
  dependency-version: 1.11.2
  dependency-type: indirect
  update-type: version-update:semver-patch
  dependency-group: dev-dependencies
- dependency-name: github.com/aws/aws-sdk-go-v2/feature/ec2/imds
  dependency-version: 1.18.2
  dependency-type: indirect
  update-type: version-update:semver-minor
  dependency-group: dev-dependencies
- dependency-name: github.com/aws/aws-sdk-go-v2/service/sso
  dependency-version: 1.27.0
  dependency-type: indirect
  update-type: version-update:semver-minor
  dependency-group: dev-dependencies
- dependency-name: github.com/aws/aws-sdk-go-v2/service/ssooidc
  dependency-version: 1.32.0
  dependency-type: indirect
  update-type: version-update:semver-minor
  dependency-group: dev-dependencies
- dependency-name: github.com/aws/aws-sdk-go-v2/service/sts
  dependency-version: 1.36.0
  dependency-type: indirect
  update-type: version-update:semver-minor
  dependency-group: dev-dependencies
- dependency-name: github.com/golang-jwt/jwt/v5
  dependency-version: 5.3.0
  dependency-type: indirect
  update-type: version-update:semver-minor
  dependency-group: dev-dependencies
- dependency-name: github.com/aws/aws-sdk-go-v2/config
  dependency-version: 1.30.3
  dependency-type: direct:production
  update-type: version-update:semver-patch
  dependency-group: dev-dependencies
- dependency-name: github.com/aws/aws-sdk-go-v2/credentials
  dependency-version: 1.18.3
  dependency-type: direct:production
  update-type: version-update:semver-patch
  dependency-group: dev-dependencies
- dependency-name: github.com/aws/aws-sdk-go-v2/feature/s3/manager
  dependency-version: 1.18.3
  dependency-type: direct:production
  update-type: version-update:semver-patch
  dependency-group: dev-dependencies
- dependency-name: github.com/aws/aws-sdk-go-v2/internal/configsources
  dependency-version: 1.4.2
  dependency-type: indirect
  update-type: version-update:semver-patch
  dependency-group: dev-dependencies
- dependency-name: github.com/aws/aws-sdk-go-v2/internal/endpoints/v2
  dependency-version: 2.7.2
  dependency-type: indirect
  update-type: version-update:semver-patch
  dependency-group: dev-dependencies
- dependency-name: github.com/aws/aws-sdk-go-v2/internal/v4a
  dependency-version: 1.4.2
  dependency-type: indirect
  update-type: version-update:semver-patch
  dependency-group: dev-dependencies
- dependency-name: github.com/aws/aws-sdk-go-v2/service/internal/checksum
  dependency-version: 1.8.2
  dependency-type: indirect
  update-type: version-update:semver-patch
  dependency-group: dev-dependencies
- dependency-name: github.com/aws/aws-sdk-go-v2/service/internal/presigned-url
  dependency-version: 1.13.2
  dependency-type: indirect
  update-type: version-update:semver-patch
  dependency-group: dev-dependencies
- dependency-name: github.com/aws/aws-sdk-go-v2/service/internal/s3shared
  dependency-version: 1.19.2
  dependency-type: indirect
  update-type: version-update:semver-patch
  dependency-group: dev-dependencies
...

Signed-off-by: dependabot[bot] <support@github.com>
2025-08-05 04:14:52 +00:00
Luke McCrone
38e43eedfb test: HeadBucket tests, test file reorganization 2025-08-04 20:05:37 -03:00
Ondrej Palkovsky
8e6dd45ce5 Fix race in GetObject 2025-08-04 15:50:46 +02:00
Ben McClelland
742cebb5e5 Merge pull request #1424 from versity/test/more_create_bucket
Test/more create bucket
2025-08-01 08:12:53 -07:00
Luke McCrone
26a8502f29 test: new REST CreateBucket, ACL tests 2025-07-30 16:17:01 -03:00
Ben McClelland
501d57cbb0 Merge pull request #1422 from versity/dependabot/go_modules/dev-dependencies-4a814c34f0
chore(deps): bump the dev-dependencies group with 19 updates
2025-07-29 10:39:12 -07:00
Ben McClelland
46650314af test: update docker azurite command to skip api check
The sdk update has caused azurite to fail with:
The API version 2025-07-05 is not supported by Azurite

The workaround for now according to
https://github.com/Azure/Azurite/issues/2562
is to tell azurite to skip this check.
2025-07-29 09:54:44 -07:00
dependabot[bot]
13c7cb488c chore(deps): bump the dev-dependencies group with 19 updates
Bumps the dev-dependencies group with 19 updates:

| Package | From | To |
| --- | --- | --- |
| [github.com/Azure/azure-sdk-for-go/sdk/storage/azblob](https://github.com/Azure/azure-sdk-for-go) | `1.6.1` | `1.6.2` |
| [github.com/aws/aws-sdk-go-v2](https://github.com/aws/aws-sdk-go-v2) | `1.36.6` | `1.37.0` |
| [github.com/aws/aws-sdk-go-v2/service/s3](https://github.com/aws/aws-sdk-go-v2) | `1.84.1` | `1.85.0` |
| [github.com/aws/smithy-go](https://github.com/aws/smithy-go) | `1.22.4` | `1.22.5` |
| [github.com/aws/aws-sdk-go-v2/feature/ec2/imds](https://github.com/aws/aws-sdk-go-v2) | `1.16.33` | `1.17.0` |
| [github.com/aws/aws-sdk-go-v2/service/sso](https://github.com/aws/aws-sdk-go-v2) | `1.25.6` | `1.26.0` |
| [github.com/aws/aws-sdk-go-v2/service/ssooidc](https://github.com/aws/aws-sdk-go-v2) | `1.30.4` | `1.31.0` |
| [github.com/aws/aws-sdk-go-v2/service/sts](https://github.com/aws/aws-sdk-go-v2) | `1.34.1` | `1.35.0` |
| [github.com/aws/aws-sdk-go-v2/aws/protocol/eventstream](https://github.com/aws/aws-sdk-go-v2) | `1.6.11` | `1.7.0` |
| [github.com/aws/aws-sdk-go-v2/config](https://github.com/aws/aws-sdk-go-v2) | `1.29.18` | `1.30.0` |
| [github.com/aws/aws-sdk-go-v2/credentials](https://github.com/aws/aws-sdk-go-v2) | `1.17.71` | `1.18.0` |
| [github.com/aws/aws-sdk-go-v2/feature/s3/manager](https://github.com/aws/aws-sdk-go-v2) | `1.17.85` | `1.18.0` |
| [github.com/aws/aws-sdk-go-v2/internal/configsources](https://github.com/aws/aws-sdk-go-v2) | `1.3.37` | `1.4.0` |
| [github.com/aws/aws-sdk-go-v2/internal/endpoints/v2](https://github.com/aws/aws-sdk-go-v2) | `2.6.37` | `2.7.0` |
| [github.com/aws/aws-sdk-go-v2/internal/v4a](https://github.com/aws/aws-sdk-go-v2) | `1.3.37` | `1.4.0` |
| [github.com/aws/aws-sdk-go-v2/service/internal/accept-encoding](https://github.com/aws/aws-sdk-go-v2) | `1.12.4` | `1.13.0` |
| [github.com/aws/aws-sdk-go-v2/service/internal/checksum](https://github.com/aws/aws-sdk-go-v2) | `1.7.5` | `1.8.0` |
| [github.com/aws/aws-sdk-go-v2/service/internal/presigned-url](https://github.com/aws/aws-sdk-go-v2) | `1.12.18` | `1.13.0` |
| [github.com/aws/aws-sdk-go-v2/service/internal/s3shared](https://github.com/aws/aws-sdk-go-v2) | `1.18.18` | `1.19.0` |


Updates `github.com/Azure/azure-sdk-for-go/sdk/storage/azblob` from 1.6.1 to 1.6.2
- [Release notes](https://github.com/Azure/azure-sdk-for-go/releases)
- [Changelog](https://github.com/Azure/azure-sdk-for-go/blob/main/documentation/go-mgmt-sdk-release-guideline.md)
- [Commits](https://github.com/Azure/azure-sdk-for-go/compare/sdk/azcore/v1.6.1...sdk/storage/azblob/v1.6.2)

Updates `github.com/aws/aws-sdk-go-v2` from 1.36.6 to 1.37.0
- [Release notes](https://github.com/aws/aws-sdk-go-v2/releases)
- [Changelog](https://github.com/aws/aws-sdk-go-v2/blob/main/changelog-template.json)
- [Commits](https://github.com/aws/aws-sdk-go-v2/compare/v1.36.6...v1.37.0)

Updates `github.com/aws/aws-sdk-go-v2/service/s3` from 1.84.1 to 1.85.0
- [Release notes](https://github.com/aws/aws-sdk-go-v2/releases)
- [Changelog](https://github.com/aws/aws-sdk-go-v2/blob/main/changelog-template.json)
- [Commits](https://github.com/aws/aws-sdk-go-v2/compare/service/s3/v1.84.1...service/s3/v1.85.0)

Updates `github.com/aws/smithy-go` from 1.22.4 to 1.22.5
- [Release notes](https://github.com/aws/smithy-go/releases)
- [Changelog](https://github.com/aws/smithy-go/blob/main/CHANGELOG.md)
- [Commits](https://github.com/aws/smithy-go/compare/v1.22.4...v1.22.5)

Updates `github.com/aws/aws-sdk-go-v2/feature/ec2/imds` from 1.16.33 to 1.17.0
- [Release notes](https://github.com/aws/aws-sdk-go-v2/releases)
- [Changelog](https://github.com/aws/aws-sdk-go-v2/blob/v1.17.0/CHANGELOG.md)
- [Commits](https://github.com/aws/aws-sdk-go-v2/compare/feature/ec2/imds/v1.16.33...v1.17.0)

Updates `github.com/aws/aws-sdk-go-v2/service/sso` from 1.25.6 to 1.26.0
- [Release notes](https://github.com/aws/aws-sdk-go-v2/releases)
- [Changelog](https://github.com/aws/aws-sdk-go-v2/blob/main/changelog-template.json)
- [Commits](https://github.com/aws/aws-sdk-go-v2/compare/config/v1.25.6...v1.26.0)

Updates `github.com/aws/aws-sdk-go-v2/service/ssooidc` from 1.30.4 to 1.31.0
- [Release notes](https://github.com/aws/aws-sdk-go-v2/releases)
- [Changelog](https://github.com/aws/aws-sdk-go-v2/blob/main/changelog-template.json)
- [Commits](https://github.com/aws/aws-sdk-go-v2/compare/v1.30.4...v1.31.0)

Updates `github.com/aws/aws-sdk-go-v2/service/sts` from 1.34.1 to 1.35.0
- [Release notes](https://github.com/aws/aws-sdk-go-v2/releases)
- [Changelog](https://github.com/aws/aws-sdk-go-v2/blob/main/changelog-template.json)
- [Commits](https://github.com/aws/aws-sdk-go-v2/compare/service/s3/v1.34.1...v1.35.0)

Updates `github.com/aws/aws-sdk-go-v2/aws/protocol/eventstream` from 1.6.11 to 1.7.0
- [Release notes](https://github.com/aws/aws-sdk-go-v2/releases)
- [Changelog](https://github.com/aws/aws-sdk-go-v2/blob/v1.7.0/CHANGELOG.md)
- [Commits](https://github.com/aws/aws-sdk-go-v2/compare/service/rum/v1.6.11...v1.7.0)

Updates `github.com/aws/aws-sdk-go-v2/config` from 1.29.18 to 1.30.0
- [Release notes](https://github.com/aws/aws-sdk-go-v2/releases)
- [Changelog](https://github.com/aws/aws-sdk-go-v2/blob/main/changelog-template.json)
- [Commits](https://github.com/aws/aws-sdk-go-v2/compare/config/v1.29.18...v1.30.0)

Updates `github.com/aws/aws-sdk-go-v2/credentials` from 1.17.71 to 1.18.0
- [Release notes](https://github.com/aws/aws-sdk-go-v2/releases)
- [Changelog](https://github.com/aws/aws-sdk-go-v2/blob/v1.18.0/CHANGELOG.md)
- [Commits](https://github.com/aws/aws-sdk-go-v2/compare/credentials/v1.17.71...v1.18.0)

Updates `github.com/aws/aws-sdk-go-v2/feature/s3/manager` from 1.17.85 to 1.18.0
- [Release notes](https://github.com/aws/aws-sdk-go-v2/releases)
- [Changelog](https://github.com/aws/aws-sdk-go-v2/blob/v1.18.0/CHANGELOG.md)
- [Commits](https://github.com/aws/aws-sdk-go-v2/compare/feature/s3/manager/v1.17.85...v1.18.0)

Updates `github.com/aws/aws-sdk-go-v2/internal/configsources` from 1.3.37 to 1.4.0
- [Release notes](https://github.com/aws/aws-sdk-go-v2/releases)
- [Changelog](https://github.com/aws/aws-sdk-go-v2/blob/v1.4.0/CHANGELOG.md)
- [Commits](https://github.com/aws/aws-sdk-go-v2/compare/internal/ini/v1.3.37...v1.4.0)

Updates `github.com/aws/aws-sdk-go-v2/internal/endpoints/v2` from 2.6.37 to 2.7.0
- [Release notes](https://github.com/aws/aws-sdk-go-v2/releases)
- [Changelog](https://github.com/aws/aws-sdk-go-v2/blob/main/changelog-template.json)
- [Commits](https://github.com/aws/aws-sdk-go-v2/compare/internal/endpoints/v2.6.37...internal/endpoints/v2.7.0)

Updates `github.com/aws/aws-sdk-go-v2/internal/v4a` from 1.3.37 to 1.4.0
- [Release notes](https://github.com/aws/aws-sdk-go-v2/releases)
- [Changelog](https://github.com/aws/aws-sdk-go-v2/blob/v1.4.0/CHANGELOG.md)
- [Commits](https://github.com/aws/aws-sdk-go-v2/compare/internal/ini/v1.3.37...v1.4.0)

Updates `github.com/aws/aws-sdk-go-v2/service/internal/accept-encoding` from 1.12.4 to 1.13.0
- [Release notes](https://github.com/aws/aws-sdk-go-v2/releases)
- [Changelog](https://github.com/aws/aws-sdk-go-v2/blob/v1.13.0/CHANGELOG.md)
- [Commits](https://github.com/aws/aws-sdk-go-v2/compare/service/m2/v1.12.4...v1.13.0)

Updates `github.com/aws/aws-sdk-go-v2/service/internal/checksum` from 1.7.5 to 1.8.0
- [Release notes](https://github.com/aws/aws-sdk-go-v2/releases)
- [Changelog](https://github.com/aws/aws-sdk-go-v2/blob/v1.8.0/CHANGELOG.md)
- [Commits](https://github.com/aws/aws-sdk-go-v2/compare/service/m2/v1.7.5...v1.8.0)

Updates `github.com/aws/aws-sdk-go-v2/service/internal/presigned-url` from 1.12.18 to 1.13.0
- [Release notes](https://github.com/aws/aws-sdk-go-v2/releases)
- [Changelog](https://github.com/aws/aws-sdk-go-v2/blob/v1.13.0/CHANGELOG.md)
- [Commits](https://github.com/aws/aws-sdk-go-v2/compare/credentials/v1.12.18...v1.13.0)

Updates `github.com/aws/aws-sdk-go-v2/service/internal/s3shared` from 1.18.18 to 1.19.0
- [Release notes](https://github.com/aws/aws-sdk-go-v2/releases)
- [Changelog](https://github.com/aws/aws-sdk-go-v2/blob/v1.19.0/CHANGELOG.md)
- [Commits](https://github.com/aws/aws-sdk-go-v2/compare/config/v1.18.18...v1.19.0)

---
updated-dependencies:
- dependency-name: github.com/Azure/azure-sdk-for-go/sdk/storage/azblob
  dependency-version: 1.6.2
  dependency-type: direct:production
  update-type: version-update:semver-patch
  dependency-group: dev-dependencies
- dependency-name: github.com/aws/aws-sdk-go-v2
  dependency-version: 1.37.0
  dependency-type: direct:production
  update-type: version-update:semver-minor
  dependency-group: dev-dependencies
- dependency-name: github.com/aws/aws-sdk-go-v2/service/s3
  dependency-version: 1.85.0
  dependency-type: direct:production
  update-type: version-update:semver-minor
  dependency-group: dev-dependencies
- dependency-name: github.com/aws/smithy-go
  dependency-version: 1.22.5
  dependency-type: direct:production
  update-type: version-update:semver-patch
  dependency-group: dev-dependencies
- dependency-name: github.com/aws/aws-sdk-go-v2/feature/ec2/imds
  dependency-version: 1.17.0
  dependency-type: indirect
  update-type: version-update:semver-minor
  dependency-group: dev-dependencies
- dependency-name: github.com/aws/aws-sdk-go-v2/service/sso
  dependency-version: 1.26.0
  dependency-type: indirect
  update-type: version-update:semver-minor
  dependency-group: dev-dependencies
- dependency-name: github.com/aws/aws-sdk-go-v2/service/ssooidc
  dependency-version: 1.31.0
  dependency-type: indirect
  update-type: version-update:semver-minor
  dependency-group: dev-dependencies
- dependency-name: github.com/aws/aws-sdk-go-v2/service/sts
  dependency-version: 1.35.0
  dependency-type: indirect
  update-type: version-update:semver-minor
  dependency-group: dev-dependencies
- dependency-name: github.com/aws/aws-sdk-go-v2/aws/protocol/eventstream
  dependency-version: 1.7.0
  dependency-type: indirect
  update-type: version-update:semver-minor
  dependency-group: dev-dependencies
- dependency-name: github.com/aws/aws-sdk-go-v2/config
  dependency-version: 1.30.0
  dependency-type: direct:production
  update-type: version-update:semver-minor
  dependency-group: dev-dependencies
- dependency-name: github.com/aws/aws-sdk-go-v2/credentials
  dependency-version: 1.18.0
  dependency-type: direct:production
  update-type: version-update:semver-minor
  dependency-group: dev-dependencies
- dependency-name: github.com/aws/aws-sdk-go-v2/feature/s3/manager
  dependency-version: 1.18.0
  dependency-type: direct:production
  update-type: version-update:semver-minor
  dependency-group: dev-dependencies
- dependency-name: github.com/aws/aws-sdk-go-v2/internal/configsources
  dependency-version: 1.4.0
  dependency-type: indirect
  update-type: version-update:semver-minor
  dependency-group: dev-dependencies
- dependency-name: github.com/aws/aws-sdk-go-v2/internal/endpoints/v2
  dependency-version: 2.7.0
  dependency-type: indirect
  update-type: version-update:semver-minor
  dependency-group: dev-dependencies
- dependency-name: github.com/aws/aws-sdk-go-v2/internal/v4a
  dependency-version: 1.4.0
  dependency-type: indirect
  update-type: version-update:semver-minor
  dependency-group: dev-dependencies
- dependency-name: github.com/aws/aws-sdk-go-v2/service/internal/accept-encoding
  dependency-version: 1.13.0
  dependency-type: indirect
  update-type: version-update:semver-minor
  dependency-group: dev-dependencies
- dependency-name: github.com/aws/aws-sdk-go-v2/service/internal/checksum
  dependency-version: 1.8.0
  dependency-type: indirect
  update-type: version-update:semver-minor
  dependency-group: dev-dependencies
- dependency-name: github.com/aws/aws-sdk-go-v2/service/internal/presigned-url
  dependency-version: 1.13.0
  dependency-type: indirect
  update-type: version-update:semver-minor
  dependency-group: dev-dependencies
- dependency-name: github.com/aws/aws-sdk-go-v2/service/internal/s3shared
  dependency-version: 1.19.0
  dependency-type: indirect
  update-type: version-update:semver-minor
  dependency-group: dev-dependencies
...

Signed-off-by: dependabot[bot] <support@github.com>
2025-07-29 09:53:30 -07:00
Ben McClelland
19206b0da2 Merge pull request #1419 from versity/sis/uploadpart-fullobject-empty-checksum
fix: fixes the UploadPart failure with no precalculated checksum header for FULL_OBJECT checksum type
2025-07-28 16:50:41 -07:00
Ben McClelland
16484083ad Merge pull request #1421 from versity/sis/request-body-reader-nil-panic
fix: fixes the nil body reader panic.
2025-07-28 16:49:42 -07:00
niksis02
0972af0783 fix: fixes the nil body reader panic.
Fixes #1418

If neither the `Transfer-Encoding` nor the `Content-Length` headers are provided in chunked uploads, **fasthttp** assumes there is no request body and sets the request body reader to `nil`. This leads to a panic in the auth reader when it attempts to read the body.

The fix ensures that if the request body reader is `nil`, it is overridden with an `empty reader` to prevent panics.
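
A minimal sketch of that guard: if the streamed request body reader is nil, substitute an empty reader before the auth layer reads it. The surrounding plumbing is illustrative, not the gateway's actual code:

```go
package main

import (
	"bytes"
	"fmt"
	"io"
)

// bodyOrEmpty returns an empty reader when the HTTP layer reports no request body.
func bodyOrEmpty(body io.Reader) io.Reader {
	if body == nil {
		return bytes.NewReader(nil)
	}
	return body
}

func main() {
	var missing io.Reader // e.g. no Transfer-Encoding and no Content-Length
	data, _ := io.ReadAll(bodyOrEmpty(missing))
	fmt.Println(len(data)) // 0, instead of a nil-reader panic
}
```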
2025-07-29 02:45:44 +04:00
niksis02
69ba00a25f fix: fixes the UploadPart failure with no precalculated checksum header for FULL_OBJECT checksum type
Fixes #1342

This PR includes two main changes:

1. It fixes the case where `x-amz-checksum-x` (precalculated checksum headers) are not provided for `UploadPart`, and the checksum type for the multipart upload is `FULL_OBJECT`. In this scenario, the server no longer returns an error.

2. When no `x-amz-checksum-x` is provided for `UploadPart`, and `x-amz-sdk-checksum-algorithm` is also missing, the gateway now calculates the part checksum based on the multipart upload's checksum algorithm and stores it accordingly.
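
A minimal sketch of deriving a part checksum from the multipart upload's checksum algorithm when the client sends no precalculated header; the helper below is illustrative, not the gateway's actual implementation:

```go
package main

import (
	"crypto/sha1"
	"crypto/sha256"
	"encoding/base64"
	"fmt"
	"hash"
	"hash/crc32"
	"io"
	"strings"
)

// newChecksumHash maps an S3 checksum algorithm name to a hash implementation
// (the supported set here is illustrative).
func newChecksumHash(algo string) (hash.Hash, error) {
	switch strings.ToUpper(algo) {
	case "CRC32":
		return crc32.NewIEEE(), nil
	case "CRC32C":
		return crc32.New(crc32.MakeTable(crc32.Castagnoli)), nil
	case "SHA1":
		return sha1.New(), nil
	case "SHA256":
		return sha256.New(), nil
	}
	return nil, fmt.Errorf("unsupported checksum algorithm %q", algo)
}

// partChecksum streams the part body through the hash and returns the
// base64-encoded digest, matching how S3 checksum headers are encoded.
func partChecksum(algo string, body io.Reader) (string, error) {
	h, err := newChecksumHash(algo)
	if err != nil {
		return "", err
	}
	if _, err := io.Copy(h, body); err != nil {
		return "", err
	}
	return base64.StdEncoding.EncodeToString(h.Sum(nil)), nil
}

func main() {
	sum, _ := partChecksum("CRC32", strings.NewReader("part data"))
	fmt.Println(sum)
}
```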

Additionally, the PR adds integration tests for:

* The two cases above
* The case where only `x-amz-sdk-checksum-algorithm` is provided
2025-07-28 23:01:35 +04:00
Ben McClelland
3842253962 Merge pull request #1417 from versity/sis/ignore-special-checksum-headers
fix: ignores special checksum headers when parsing x-amz-checksum-x headers
2025-07-25 22:20:34 -07:00
Ben McClelland
fb372e497d Merge pull request #1415 from versity/sis/listbuckets-region
fix: adds bucket region in ListBuckets result
2025-07-25 14:42:23 -07:00
niksis02
e18c4f4080 fix: ignores special checksum headers when parsing x-amz-checksum-x headers
Fixes #1345

The previous implementation incorrectly parsed the `x-amz-sdk-checksum-algorithm` header for the `CompleteMultipartUpload` operation, even though this header is not expected and should be ignored. It also mistakenly treated the `x-amz-checksum-algorithm` header as an invalid value for `x-amz-checksum-x`.

The updated implementation only parses the `x-amz-sdk-checksum-algorithm` header for `PutObject` and `UploadPart` operations. Additionally, `x-amz-checksum-algorithm` and `x-amz-checksum-type` headers are now correctly ignored when parsing the precalculated checksum headers (`x-amz-checksum-x`).
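
A minimal sketch of that header scan: collect the precalculated `x-amz-checksum-<algo>` headers while skipping the descriptive `x-amz-checksum-algorithm` and `x-amz-checksum-type` headers (the helper is illustrative):

```go
package main

import (
	"fmt"
	"strings"
)

func precalculatedChecksums(headers map[string]string) map[string]string {
	out := map[string]string{}
	for name, value := range headers {
		key := strings.ToLower(name)
		if !strings.HasPrefix(key, "x-amz-checksum-") {
			continue
		}
		// These two describe the checksum; they do not carry one.
		if key == "x-amz-checksum-algorithm" || key == "x-amz-checksum-type" {
			continue
		}
		out[strings.TrimPrefix(key, "x-amz-checksum-")] = value
	}
	return out
}

func main() {
	fmt.Println(precalculatedChecksums(map[string]string{
		"x-amz-checksum-crc32":     "AAAAAA==",
		"x-amz-checksum-algorithm": "CRC32",       // descriptive, ignored
		"x-amz-checksum-type":      "FULL_OBJECT", // descriptive, ignored
	}))
}
```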
2025-07-26 01:33:00 +04:00
niksis02
7dc213e68e fix: adds bucket region in ListBuckets result
Fixes #1374

Hardcodes the gateway region as the bucket region for each bucket entry in the `ListBuckets` result.
2025-07-26 00:45:18 +04:00
Ben McClelland
bcbe739158 Merge pull request #1416 from versity/sis/create-mp-checksum-headers-case-sensitivity
fix: makes checksum type and algorithm case insensitive in CreateMultipartUpload
2025-07-25 10:11:03 -07:00
Ben McClelland
c63c0a7a24 Merge pull request #1413 from versity/sis/invalid-x-amz-content-sha256
fix: adds validation for x-amz-content-sha256 header
2025-07-25 10:10:42 -07:00
niksis02
3363988206 fix: makes checksum type and algorithm case insensitive in CreateMultipartUpload
Fixes #1339

`x-amz-checksum-type` and `x-amz-checksum-algorithm` request headers should be case insensitive in `CreateMultipartUpload`.

The changes parse the header values to upper case before validating them and passing them to the backend. The `x-amz-checksum-type` response header, which was previously missing, was also added to the `CreateMultipartUpload` response.
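
A minimal sketch of that normalization: upper-case the `x-amz-checksum-algorithm` and `x-amz-checksum-type` values before validating (the accepted sets here are illustrative, not exhaustive):

```go
package main

import (
	"fmt"
	"strings"
)

var validAlgorithms = map[string]bool{
	"CRC32": true, "CRC32C": true, "SHA1": true, "SHA256": true,
}
var validTypes = map[string]bool{"COMPOSITE": true, "FULL_OBJECT": true}

func normalizeChecksum(algo, typ string) (string, string, error) {
	algo, typ = strings.ToUpper(algo), strings.ToUpper(typ)
	if algo != "" && !validAlgorithms[algo] {
		return "", "", fmt.Errorf("invalid checksum algorithm %q", algo)
	}
	if typ != "" && !validTypes[typ] {
		return "", "", fmt.Errorf("invalid checksum type %q", typ)
	}
	return algo, typ, nil
}

func main() {
	fmt.Println(normalizeChecksum("crc32c", "full_object")) // CRC32C FULL_OBJECT <nil>
}
```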
2025-07-25 20:35:26 +04:00
niksis02
4187b4d400 fix: adds validation for x-amz-content-sha256 header
Fixes #1352

Adds a validation step in `SigV4` authentication for `x-amz-content-sha256`, checking that it is either a valid sha256 hash or a special payload type (UNSIGNED-PAYLOAD, STREAMING-UNSIGNED-PAYLOAD-TRAILER...).
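
A minimal sketch of that check: accept either a 64-character hex SHA-256 digest or one of the special payload markers (the marker list below is illustrative, not exhaustive):

```go
package main

import (
	"fmt"
	"regexp"
)

var sha256Hex = regexp.MustCompile(`^[0-9a-fA-F]{64}$`)

var specialPayloads = map[string]bool{
	"UNSIGNED-PAYLOAD":                           true,
	"STREAMING-UNSIGNED-PAYLOAD-TRAILER":         true,
	"STREAMING-AWS4-HMAC-SHA256-PAYLOAD":         true,
	"STREAMING-AWS4-HMAC-SHA256-PAYLOAD-TRAILER": true,
}

func validContentSha256(v string) bool {
	return specialPayloads[v] || sha256Hex.MatchString(v)
}

func main() {
	fmt.Println(validContentSha256("UNSIGNED-PAYLOAD")) // true
	fmt.Println(validContentSha256("not-a-valid-hash")) // false
}
```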
2025-07-25 01:59:55 +04:00
Ben McClelland
35fc8c214a Merge pull request #1412 from versity/sis/listparts-invalid-part-number-marker
fix: fixes the invalid part number marker error description in ListParts
2025-07-24 13:34:41 -07:00
niksis02
2b9e343132 fix: fixes the invalid part number marker error description in ListParts
Fixes #1383

Fixes the invalid part number marker error description in ListParts. The description should be: `Argument part-number-marker must be an integer between 0 and 2147483647`.
2025-07-24 23:06:43 +04:00
Ben McClelland
70be7d7363 Merge pull request #1409 from versity/sis/bucket-acl-ownership-error-description
fix: fixes the InvalidBucketAclWithObjectOwnership error code.
2025-07-23 15:24:59 -07:00
Ben McClelland
9d129aaa26 Merge pull request #1408 from versity/sis/head-object-version-permission
fix: fixes the HeadObject version access control with policies.
2025-07-23 15:24:18 -07:00
niksis02
4395c9e0f9 fix: fixes the InvalidBucketAclWithObjectOwnership error code.
Fixes #1387

The `Code` for `ErrInvalidBucketAclWithObjectOwnership` error should be `InvalidBucketAclWithObjectOwnership` instead of `ErrInvalidBucketAclWithObjectOwnership`.
The PR fixes the typo in the error code.
2025-07-24 01:19:28 +04:00
niksis02
891672bf7e fix: fixes the HeadObject version access control with policies.
Fixes #1385

When accessing a specific object version, the user must have the `s3:GetObjectVersion` permission in the bucket policy. The `s3:GetObject` permission alone is not sufficient for a regular user to query object versions using `HeadObject`.
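
A minimal sketch of that permission selection; the action names follow IAM's `s3:GetObject` / `s3:GetObjectVersion`, and the policy evaluation itself is assumed:

```go
package main

import "fmt"

// requiredGetAction returns the policy action a read request must be allowed for.
func requiredGetAction(versionID string) string {
	if versionID != "" {
		// Requests addressing a specific version need s3:GetObjectVersion.
		return "s3:GetObjectVersion"
	}
	return "s3:GetObject"
}

func main() {
	fmt.Println(requiredGetAction(""))                // s3:GetObject
	fmt.Println(requiredGetAction("some-version-id")) // s3:GetObjectVersion
}
```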

This PR fixes the issue and adds integration tests for both `HeadObject` and `GetObject`. It also includes cleanup in the integration tests by refactoring the creation of user S3 clients, and moves some test user data to the package level to avoid repetition across tests.
2025-07-24 01:04:45 +04:00
Ben McClelland
1fb3a7d466 Merge pull request #1404 from versity/sis/copy-actions-copy-source-validation
feat: adds copy source validation for x-amz-copy-source header.
2025-07-22 14:56:32 -07:00
niksis02
e5850ff11f feat: adds copy source validation for x-amz-copy-source header.
Fixes #1388
Fixes #1389
Fixes #1390
Fixes #1401

Adds the `x-amz-copy-source` header validation for `CopyObject` and `UploadPartCopy` in front-end.
The error:
```
	ErrInvalidCopySource: {
		Code:           "InvalidArgument",
		Description:    "Copy Source must mention the source bucket and key: sourcebucket/sourcekey.",
		HTTPStatusCode: http.StatusBadRequest,
	},
```
is now deprecated.
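
A minimal sketch of validating `x-amz-copy-source`: the value must reference a source bucket and key ("bucket/key", optionally URL-encoded, with an optional leading slash and `versionId` query). Details beyond that are illustrative:

```go
package main

import (
	"fmt"
	"net/url"
	"strings"
)

func parseCopySource(raw string) (bucket, key, versionID string, err error) {
	// The header may be URL-encoded and may carry ?versionId=...
	unescaped, uerr := url.QueryUnescape(raw)
	if uerr != nil {
		return "", "", "", fmt.Errorf("invalid copy source encoding")
	}
	src, query, _ := strings.Cut(unescaped, "?")
	src = strings.TrimPrefix(src, "/")
	bucket, key, ok := strings.Cut(src, "/")
	if !ok || bucket == "" || key == "" {
		return "", "", "", fmt.Errorf("copy source must be of the form sourcebucket/sourcekey")
	}
	if query != "" {
		vals, _ := url.ParseQuery(query)
		versionID = vals.Get("versionId")
	}
	return bucket, key, versionID, nil
}

func main() {
	fmt.Println(parseCopySource("/srcbucket/dir/key.txt?versionId=abc"))
	fmt.Println(parseCopySource("just-a-bucket")) // error: missing key
}
```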

The conditional read/write headers validation in `CopyObject` should come with #821 and #822.
2025-07-22 14:40:11 -07:00
Ben McClelland
ccb4895367 Merge pull request #1341 from versity/sis/advanced-routing-system
Advanced routing system
2025-07-22 14:31:32 -07:00
niksis02
e74d2c0d19 fix: fixes the invalid x-amz-mp-object-size header error in CompleteMultipartUpload.
Fixes #1398

The `x-amz-mp-object-size` request header can have two erroneous states: an invalid value or a negative integer. AWS returns different error descriptions for each case. This PR fixes the error description for the invalid header value case.

The invalid case can't be integration tested, as the SDK expects `int64` as the header value.
2025-07-22 21:01:32 +04:00
niksis02
dc16c0448f feat: implements integration tests for the new advanced router 2025-07-22 21:00:24 +04:00
niksis02
394675a5a8 feat: implements unit tests for controller utilities 2025-07-22 20:55:23 +04:00
niksis02
ab571a6571 feat: implements unit tests for admin controllers 2025-07-22 20:55:22 +04:00
niksis02
7f9ab35347 feat: implements unit tests for object PUT controllers 2025-07-22 20:55:22 +04:00
niksis02
ba76aea17a feat: adds unit tests for the object HEAD and GET controllers. 2025-07-22 20:55:22 +04:00
niksis02
67d0750ee0 feat: adds unit tests for object DELETE and POST operations 2025-07-22 20:55:22 +04:00
niksis02
866b07b98f feat: implements unit tests for all the bucket action controllers. 2025-07-22 20:55:22 +04:00
niksis02
65cd44aadd fix: fixes the s3 access logs and metrics manager reporting. Fixes the default context keys setter order in the middlewares. 2025-07-22 20:55:22 +04:00
niksis02
5be9e3bd1e feat: a total refactoring of the gateway middlewares by lowering them from server to router level. 2025-07-22 20:55:22 +04:00
niksis02
abdf342ef7 feat: implements advanced routing for the admin apis. Adds debug logging and quiet mode for the separate admin server.
Adjusts the admin apis to the new advanced routing changes.
Enables debug logging for the separate admin server (when a separate server is run for the admin apis).
Adds the quiet mode for the separate admin server.
2025-07-22 20:55:22 +04:00
niksis02
b7c758b065 feat: implements advanced routing for bucket POST and object PUT operations.
Fixes #1036

Fixes the issue where calling a non-existing root endpoint (POST /) made the gateway return `NoSuchBucket`. Now it returns the correct `MethodNotAllowed` error.
2025-07-22 20:55:22 +04:00
niksis02
a3fef4254a feat: implements advanced routing for object DELETE and POST actions.
fixes #896
fixes #899

Registers an all-routes matcher handler at the end of the router to handle cases where the api call doesn't match any s3 action. The all-routes matcher returns `MethodNotAllowed` for such requests.
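
A minimal sketch of that catch-all, assuming a Fiber router: any request no S3 route matched falls through to the last registered handler (the response body and port are illustrative; the real handler serializes a proper S3 error):

```go
package main

import "github.com/gofiber/fiber/v2"

func main() {
	app := fiber.New()

	// ... S3 action routes are registered here ...

	// Registered last, so it only runs when nothing else matched.
	app.Use(func(c *fiber.Ctx) error {
		return c.Status(fiber.StatusMethodNotAllowed).SendString("MethodNotAllowed")
	})

	_ = app.Listen(":7070")
}
```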
2025-07-22 20:55:22 +04:00
niksis02
56d4e4aa3e feat: implements advanced routing for object GET actions. 2025-07-22 20:55:22 +04:00
niksis02
d2038ca973 feat: implements advanced routing for HeadObject and bucket PUT operations. 2025-07-22 20:55:22 +04:00
niksis02
a7c3cb5cf8 feat: implements advanced routing for ListBuckets, HeadBucket and bucket delete operations 2025-07-22 20:55:22 +04:00
niksis02
b8456bc5ab feat: implements advanced routing system for the bucket get operations.
Closes #908

This PR introduces a new routing system integrated with Fiber. It matches each S3 action to a route using middleware utility functions (e.g., URL query match, request header match). Each S3 action is mapped to a dedicated route in the Fiber router. This functionality cannot be achieved using standard Fiber methods, as Fiber lacks the necessary tooling for such dynamic routing.

Additionally, this PR implements a generic response handler to manage responses from the backend. This abstraction helps isolate the controller from the data layer and Fiber-specific response logic.

With this approach, controller unit testing becomes simpler and more effective.
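
A minimal sketch of the route-matching idea, assuming Fiber: each S3 action gets its own route, guarded by a matcher that falls through to the next registered route when a distinguishing query parameter is absent. Handler names and the example actions are illustrative:

```go
package main

import "github.com/gofiber/fiber/v2"

// withQuery runs h only when the query parameter is present; otherwise it lets
// the router continue to the next matching route.
func withQuery(param string, h fiber.Handler) fiber.Handler {
	return func(c *fiber.Ctx) error {
		if !c.Context().QueryArgs().Has(param) {
			return c.Next()
		}
		return h(c)
	}
}

func getBucketTagging(c *fiber.Ctx) error { return c.SendString("GetBucketTagging") }
func getBucketPolicy(c *fiber.Ctx) error  { return c.SendString("GetBucketPolicy") }
func listObjects(c *fiber.Ctx) error      { return c.SendString("ListObjects") }

func main() {
	app := fiber.New()
	app.Get("/:bucket", withQuery("tagging", getBucketTagging))
	app.Get("/:bucket", withQuery("policy", getBucketPolicy))
	app.Get("/:bucket", listObjects) // default GET action for the bucket
	_ = app.Listen(":7070")
}
```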
2025-07-22 20:55:22 +04:00
niksis02
f877502ab0 feat: adds integration tests for public buckets. 2025-07-22 20:55:22 +04:00
niksis02
edaf9d6d4e feat: implements public bucket access for write operations
Public buckets support a set of actions on buckets and objects, returning various errors based on the S3 action type and permissions (ACL or policy). The implementation aligns with the table provided in [this gist](https://gist.github.com/niksis02/5919d52d6112537a31c14d9abfa89ac0).
2025-07-22 20:55:22 +04:00
niksis02
39cef57c87 feat: implements public bucket access.
This implementation introduces **public buckets**, which are accessible without signature-based authentication.

There are two ways to grant public access to a bucket:

* **Bucket ACLs**
* **Bucket Policies**

Only `Get` and `List` operations are permitted on public buckets. All **write operations** require authentication, regardless of whether public access is granted through an ACL or a policy.

The implementation includes an `AuthorizePublicBucketAccess` middleware, which checks if public access has been granted to the bucket. If so, authentication middlewares are skipped. For unauthenticated requests, appropriate errors are returned based on the specific S3 action.

---

**1. Bucket-Level Operations:**

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": "*",
      "Action": "s3:*",
      "Resource": "arn:aws:s3:::test"
    }
  ]
}
```

**2. Object-Level Operations:**

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": "*",
      "Action": "s3:*",
      "Resource": "arn:aws:s3:::test/*"
    }
  ]
}
```

**3. Both Bucket and Object-Level Operations:**

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": "*",
      "Action": "s3:*",
      "Resource": "arn:aws:s3:::test"
    },
    {
      "Effect": "Allow",
      "Principal": "*",
      "Action": "s3:*",
      "Resource": "arn:aws:s3:::test/*"
    }
  ]
}
```

---

```sh
aws s3api create-bucket --bucket test --object-ownership BucketOwnerPreferred
aws s3api put-bucket-acl --bucket test --acl public-read
```
2025-07-22 20:55:22 +04:00
Ben McClelland
4f3c930d52 Merge pull request #1402 from versity/dependabot/go_modules/dev-dependencies-87e55614e3
chore(deps): bump the dev-dependencies group with 18 updates
2025-07-21 17:20:53 -07:00
Ben McClelland
ddbc8911aa Merge pull request #1395 from versity/test/list_buckets_tests
Test/list buckets tests
2025-07-21 17:20:13 -07:00
dependabot[bot]
6e91e874c8 chore(deps): bump the dev-dependencies group with 18 updates
---
updated-dependencies:
- dependency-name: github.com/aws/aws-sdk-go-v2
  dependency-version: 1.36.6
  dependency-type: direct:production
  update-type: version-update:semver-patch
  dependency-group: dev-dependencies
- dependency-name: github.com/aws/aws-sdk-go-v2/service/s3
  dependency-version: 1.84.1
  dependency-type: direct:production
  update-type: version-update:semver-minor
  dependency-group: dev-dependencies
- dependency-name: github.com/gofiber/fiber/v2
  dependency-version: 2.52.9
  dependency-type: direct:production
  update-type: version-update:semver-patch
  dependency-group: dev-dependencies
- dependency-name: github.com/valyala/fasthttp
  dependency-version: 1.64.0
  dependency-type: direct:production
  update-type: version-update:semver-minor
  dependency-group: dev-dependencies
- dependency-name: github.com/aws/aws-sdk-go-v2/feature/ec2/imds
  dependency-version: 1.16.33
  dependency-type: indirect
  update-type: version-update:semver-patch
  dependency-group: dev-dependencies
- dependency-name: github.com/aws/aws-sdk-go-v2/service/sso
  dependency-version: 1.25.6
  dependency-type: indirect
  update-type: version-update:semver-patch
  dependency-group: dev-dependencies
- dependency-name: github.com/aws/aws-sdk-go-v2/service/ssooidc
  dependency-version: 1.30.4
  dependency-type: indirect
  update-type: version-update:semver-patch
  dependency-group: dev-dependencies
- dependency-name: github.com/aws/aws-sdk-go-v2/service/sts
  dependency-version: 1.34.1
  dependency-type: indirect
  update-type: version-update:semver-patch
  dependency-group: dev-dependencies
- dependency-name: github.com/golang-jwt/jwt/v5
  dependency-version: 5.2.3
  dependency-type: indirect
  update-type: version-update:semver-patch
  dependency-group: dev-dependencies
- dependency-name: github.com/aws/aws-sdk-go-v2/config
  dependency-version: 1.29.18
  dependency-type: direct:production
  update-type: version-update:semver-patch
  dependency-group: dev-dependencies
- dependency-name: github.com/aws/aws-sdk-go-v2/credentials
  dependency-version: 1.17.71
  dependency-type: direct:production
  update-type: version-update:semver-patch
  dependency-group: dev-dependencies
- dependency-name: github.com/aws/aws-sdk-go-v2/feature/s3/manager
  dependency-version: 1.17.85
  dependency-type: direct:production
  update-type: version-update:semver-patch
  dependency-group: dev-dependencies
- dependency-name: github.com/aws/aws-sdk-go-v2/internal/configsources
  dependency-version: 1.3.37
  dependency-type: indirect
  update-type: version-update:semver-patch
  dependency-group: dev-dependencies
- dependency-name: github.com/aws/aws-sdk-go-v2/internal/endpoints/v2
  dependency-version: 2.6.37
  dependency-type: indirect
  update-type: version-update:semver-patch
  dependency-group: dev-dependencies
- dependency-name: github.com/aws/aws-sdk-go-v2/internal/v4a
  dependency-version: 1.3.37
  dependency-type: indirect
  update-type: version-update:semver-patch
  dependency-group: dev-dependencies
- dependency-name: github.com/aws/aws-sdk-go-v2/service/internal/checksum
  dependency-version: 1.7.5
  dependency-type: indirect
  update-type: version-update:semver-patch
  dependency-group: dev-dependencies
- dependency-name: github.com/aws/aws-sdk-go-v2/service/internal/presigned-url
  dependency-version: 1.12.18
  dependency-type: indirect
  update-type: version-update:semver-patch
  dependency-group: dev-dependencies
- dependency-name: github.com/aws/aws-sdk-go-v2/service/internal/s3shared
  dependency-version: 1.18.18
  dependency-type: indirect
  update-type: version-update:semver-patch
  dependency-group: dev-dependencies
...

Signed-off-by: dependabot[bot] <support@github.com>
2025-07-21 23:05:49 +00:00
Luke McCrone
70c25de544 test: list-buckets tests 2025-07-19 15:06:19 -03:00
Ben McClelland
b2516e4153 Merge pull request #1397 from versity/ben/vault-refresh
fix: refresh expired iam vault tokens when needed
2025-07-17 14:03:01 -07:00
Ben McClelland
08ccf821f9 fix: refresh expired iam vault tokens when needed
The IAM vault client stores an access token once authenticated,
but this token will expire after a certain amount of time set
by the server generating the token. Once this token is expired
or revoked, it can no longer be used by the vault client. So
the client should try to refresh the token on any errors
indicating an expired or revoked token.

Fixes #976
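
A hedged sketch of the refresh-on-auth-error idea (generic Go, not the vault client's actual code):

```go
package main

import (
	"errors"
	"fmt"
)

// errTokenExpired stands in for whatever error the vault client surfaces for
// an expired or revoked token; purely illustrative.
var errTokenExpired = errors.New("token expired")

// doWithTokenRefresh sketches the idea: if the call fails with an
// expired/revoked-token error, re-authenticate once and retry.
func doWithTokenRefresh(call, reauth func() error, isAuthErr func(error) bool) error {
	err := call()
	if err != nil && isAuthErr(err) {
		if rerr := reauth(); rerr != nil {
			return rerr
		}
		return call()
	}
	return err
}

func main() {
	calls := 0
	err := doWithTokenRefresh(
		func() error { // first call fails with an expired token, retry succeeds
			calls++
			if calls == 1 {
				return errTokenExpired
			}
			return nil
		},
		func() error { return nil }, // re-authenticate
		func(err error) bool { return errors.Is(err, errTokenExpired) },
	)
	fmt.Println(err, calls) // <nil> 2
}
```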
2025-07-17 09:32:40 -07:00
Ben McClelland
b57be7d56f Merge pull request #1393 from mfhunruh/split-vault-mount-path
feat: split the vault mount path into kv and auth
2025-07-16 10:04:40 -07:00
Maksim Loviagin
e39ab6f0ee feat: split the vault mount path into kv and auth 2025-07-15 18:57:44 +00:00
Ben McClelland
4eb13c2fdc Merge pull request #1392 from versity/test/bucket_create_canned_acl
Test/bucket create canned acl
2025-07-14 21:49:42 -07:00
Ben McClelland
0c2252fde0 Merge pull request #1396 from versity/dependabot/go_modules/dev-dependencies-23405cd618
chore(deps): bump the dev-dependencies group with 6 updates
2025-07-14 21:44:55 -07:00
dependabot[bot]
a915c3fec4 chore(deps): bump the dev-dependencies group with 6 updates
---
updated-dependencies:
- dependency-name: github.com/Azure/azure-sdk-for-go/sdk/azcore
  dependency-version: 1.18.1
  dependency-type: direct:production
  update-type: version-update:semver-patch
  dependency-group: dev-dependencies
- dependency-name: golang.org/x/sync
  dependency-version: 0.16.0
  dependency-type: direct:production
  update-type: version-update:semver-minor
  dependency-group: dev-dependencies
- dependency-name: golang.org/x/sys
  dependency-version: 0.34.0
  dependency-type: direct:production
  update-type: version-update:semver-minor
  dependency-group: dev-dependencies
- dependency-name: golang.org/x/crypto
  dependency-version: 0.40.0
  dependency-type: indirect
  update-type: version-update:semver-minor
  dependency-group: dev-dependencies
- dependency-name: golang.org/x/net
  dependency-version: 0.42.0
  dependency-type: indirect
  update-type: version-update:semver-minor
  dependency-group: dev-dependencies
- dependency-name: golang.org/x/text
  dependency-version: 0.27.0
  dependency-type: indirect
  update-type: version-update:semver-minor
  dependency-group: dev-dependencies
...

Signed-off-by: dependabot[bot] <support@github.com>
2025-07-15 01:27:25 +00:00
Ben McClelland
706dee8572 Merge pull request #1391 from versity/ben/server-err-log
fix: always log internal server error messages to stderr
2025-07-14 15:17:53 -07:00
Luke McCrone
c6944650a3 test: CreateBucket ACLs tests, REST command testing update 2025-07-14 15:08:05 -03:00
Ben McClelland
c3201081ce fix: always log internal server error messages to stderr
The debuglogger logs will only get printed if debug is enabled,
but we always want the internal server error logs to be logged
by the service since this is usually some actionable error
that needs to be addressed with the backend storage system.

This changes internal server error logs to always be sent to
stderr.
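
A minimal illustration of the behavior (not the gateway's actual logger, and the message is made up): the error logger writes to stderr unconditionally, independent of any debug setting.

```go
package main

import (
	"log"
	"os"
)

// errLogger always writes to stderr, regardless of whether debug logging is
// enabled.
var errLogger = log.New(os.Stderr, "ERROR: ", log.LstdFlags)

func main() {
	errLogger.Println("internal server error: backend storage unreachable")
}
```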
2025-07-11 10:55:39 -07:00
Ben McClelland
9cc29af073 Merge pull request #1382 from versity/ben/s3proxy-change-bucket-owner
fix: admin bucket actions for s3proxy
2025-07-09 16:37:37 -07:00
Ben McClelland
7d98d1df39 Merge pull request #1386 from versity/ben/list-mp-upload-panic
fix: ListMultipartUploads pagination panic and duplicate results
2025-07-09 16:21:50 -07:00
Ben McClelland
f295df2217 fix: add new auth method to update ownership within acl
Add helper util auth.UpdateBucketACLOwner() that sets new
default ACL based on new owner and removes old bucket policy.

The ChangeBucketOwner() remains in the backend.Backend
interface in case there is ever a backend that needs to manage
ownership in some other way than with bucket ACLs. The arguments
are changing to clarify the updated owner. This will break any
plugins implementing the old interface. They should use the new
auth.UpdateBucketACLOwner() or implement the corresponding
change specific for the backend.
2025-07-09 16:16:34 -07:00
Ben McClelland
cbd3eb1cd2 fix: ListMultipartUploads pagination panic and duplicate results
This fixes a panic seen when there were a lot of multipart uploads in the
same bucket, requiring multiple paginated responses. For example:
panic: runtime error: index out of range [11455] with length 1000
goroutine 418 [running]:
github.com/versity/versitygw/backend/posix.(*Posix).ListMultipartUploads(0xc0004300
/Users/ben/repo/versitygw/backend/posix/posix.go:2122 +0xd25
github.com/versity/versitygw/s3api/controllers.S3ApiController.ListActions({{0x183c
...

This change updates the ListMultipartUploads implementation to properly advance
past the (KeyMarker, UploadIDMarker) tuple when paginating, ensuring that each
response starts after the marker and does not include duplicate uploads.
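
A hedged sketch of the marker handling (illustrative types, assuming entries are sorted by key and then upload ID):

```go
package main

import "fmt"

// upload is an illustrative stand-in for a multipart upload entry.
type upload struct{ Key, UploadID string }

// advancePastMarker starts the next page strictly after the
// (KeyMarker, UploadIDMarker) tuple so repeated calls never re-emit the
// marker entry; this is the idea behind the fix, not the actual code.
func advancePastMarker(uploads []upload, keyMarker, uploadIDMarker string) []upload {
	for i, u := range uploads {
		if u.Key > keyMarker || (u.Key == keyMarker && u.UploadID > uploadIDMarker) {
			return uploads[i:]
		}
	}
	return nil
}

func main() {
	ups := []upload{{"a", "1"}, {"a", "2"}, {"b", "1"}}
	fmt.Println(advancePastMarker(ups, "a", "2")) // [{b 1}]
}
```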
2025-07-09 15:36:16 -07:00
Ben McClelland
c196b5f999 fix: admin bucket actions for s3proxy
We were incorrectly trying to pass the admin request
actions through to the backend s3 service in s3proxy. This
was resulting in internal server errors since not all s3
backends would understand these requests. Instead, the
gateway needs to handle these requests directly.

Fixes #1381
2025-07-09 09:13:14 -07:00
Ben McClelland
839909c880 Merge pull request #1377 from versity/ben/ipa-retry
fix: add retry for iam freeipa http requests
2025-07-08 11:52:57 -07:00
Ben McClelland
68c002486d Merge pull request #1375 from versity/ben/s3proxy-lint
chore: use time.Equal for s3proxy time equality checks
2025-07-08 11:52:37 -07:00
Ben McClelland
4117bcdf65 Merge pull request #1376 from versity/dependabot/go_modules/dev-dependencies-eb784ae51d
chore(deps): bump the dev-dependencies group with 3 updates
2025-07-08 08:13:36 -07:00
Ben McClelland
003bf5db0b fix: convert deprecated fasthttp VisitAll() to All()
An update to fasthttp has deprecated the VisitAll() method
for an iterator function All() that can be used to range over
all headers.
This should fix the staticcheck warnings for calling the
deprecated function.
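
A hedged usage sketch of the migration, assuming the `All()` iterator yields key/value byte slices:

```go
package main

import (
	"fmt"

	"github.com/valyala/fasthttp"
)

// logHeaders ranges over the header iterator instead of passing a callback
// to the deprecated VisitAll.
func logHeaders(h *fasthttp.RequestHeader) {
	for key, value := range h.All() {
		fmt.Printf("%s: %s\n", key, value)
	}
}

func main() {
	var req fasthttp.Request
	req.Header.Set("X-Amz-Mp-Object-Size", "1024")
	logHeaders(&req.Header)
}
```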
2025-07-07 22:34:01 -07:00
Ben McClelland
91b904d10f fix: add retry for iam freeipa http requests
The IPA service connections have been seen to not always work
correctly on the first network connection attempt. Add retry
logic for errors that appear to be transient network issues.
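
A minimal sketch of such retry logic (generic Go; the transient-error check is a placeholder):

```go
package main

import (
	"errors"
	"fmt"
	"time"
)

// doWithRetry retries calls that fail with what look like transient network
// errors, pausing briefly between attempts.
func doWithRetry(call func() error, isTransient func(error) bool, attempts int) error {
	var err error
	for i := 0; i < attempts; i++ {
		if err = call(); err == nil || !isTransient(err) {
			return err
		}
		// brief linear backoff before the next attempt
		time.Sleep(time.Duration(i+1) * 200 * time.Millisecond)
	}
	return err
}

func main() {
	errConnReset := errors.New("connection reset by peer")
	n := 0
	err := doWithRetry(
		func() error {
			n++
			if n < 3 {
				return errConnReset
			}
			return nil
		},
		func(err error) bool { return errors.Is(err, errConnReset) },
		5,
	)
	fmt.Println(err, n) // <nil> 3
}
```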
2025-07-07 22:28:58 -07:00
dependabot[bot]
ee4d0b0c3e chore(deps): bump the dev-dependencies group with 3 updates
Bumps the dev-dependencies group with 3 updates: [github.com/aws/aws-sdk-go-v2/service/s3](https://github.com/aws/aws-sdk-go-v2), [github.com/valyala/fasthttp](https://github.com/valyala/fasthttp) and [github.com/aws/aws-sdk-go-v2/feature/s3/manager](https://github.com/aws/aws-sdk-go-v2).


Updates `github.com/aws/aws-sdk-go-v2/service/s3` from 1.82.0 to 1.83.0
- [Release notes](https://github.com/aws/aws-sdk-go-v2/releases)
- [Changelog](https://github.com/aws/aws-sdk-go-v2/blob/main/changelog-template.json)
- [Commits](https://github.com/aws/aws-sdk-go-v2/compare/service/s3/v1.82.0...service/s3/v1.83.0)

Updates `github.com/valyala/fasthttp` from 1.62.0 to 1.63.0
- [Release notes](https://github.com/valyala/fasthttp/releases)
- [Commits](https://github.com/valyala/fasthttp/compare/v1.62.0...v1.63.0)

Updates `github.com/aws/aws-sdk-go-v2/feature/s3/manager` from 1.17.82 to 1.17.83
- [Release notes](https://github.com/aws/aws-sdk-go-v2/releases)
- [Changelog](https://github.com/aws/aws-sdk-go-v2/blob/main/changelog-template.json)
- [Commits](https://github.com/aws/aws-sdk-go-v2/compare/feature/s3/manager/v1.17.82...feature/s3/manager/v1.17.83)

---
updated-dependencies:
- dependency-name: github.com/aws/aws-sdk-go-v2/service/s3
  dependency-version: 1.83.0
  dependency-type: direct:production
  update-type: version-update:semver-minor
  dependency-group: dev-dependencies
- dependency-name: github.com/valyala/fasthttp
  dependency-version: 1.63.0
  dependency-type: direct:production
  update-type: version-update:semver-minor
  dependency-group: dev-dependencies
- dependency-name: github.com/aws/aws-sdk-go-v2/feature/s3/manager
  dependency-version: 1.17.83
  dependency-type: direct:production
  update-type: version-update:semver-patch
  dependency-group: dev-dependencies
...

Signed-off-by: dependabot[bot] <support@github.com>
2025-07-07 23:51:33 +00:00
Ben McClelland
78a92168bf Merge pull request #1333 from versity/test/multipart_upload_checksums
Test/multipart upload checksums
2025-07-07 14:24:51 -07:00
Ben McClelland
36509daec7 chore: use time.Equal for s3proxy time equality checks
Fixes lint warnings by using time.Equal instead of == for
time equality checks.
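
A small example of why `==` is the wrong comparison for `time.Time`: two values can describe the same instant while differing in monotonic clock reading or location.

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	a := time.Now()
	b := a.Round(0).UTC() // same instant, monotonic reading stripped, UTC location

	fmt.Println(a == b)     // false: struct fields differ
	fmt.Println(a.Equal(b)) // true: same instant in time
}
```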
2025-07-07 14:20:36 -07:00
Luke McCrone
28cb97329e test: multipart upload checksum tests 2025-07-07 15:31:28 -03:00
Ben McClelland
3ec6e634c3 Merge pull request #1348 from versity/ben/crc-mp-complete
feat: calculate full object crc for multi-part uploads for compatible checksums
2025-07-04 09:50:21 -07:00
Ben McClelland
7b8b483dfc feat: calculate full object crc for multi-part uploads for compatible checksums
The CRC32, CRC32c, and CRC64NVME data integrity checksums support calculating
the composite full object values for multi-part uploads using the checksum
and length of the individual parts.

Previously, we were reading all of the part data to recalculate the full
object checksum values during the complete multipart upload call. This
disabled the optimized copy_file_range() for certain filesystems such as
XFS because the part data was being read. If the data is not read, and
the file handle is passed directly to io.Copy(), then the filesystem is
allowed to optimize the copying of the data from the source to destination
files.

This now enables both the copy_file_range() optimization and
the data integrity features for the supported composite checksum types.
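
A minimal illustration of the fast path (file names are placeholders): with `*os.File` on both sides and no intermediate read of the data, `io.Copy` lets the kernel use copy_file_range on filesystems that support it, such as XFS.

```go
package main

import (
	"io"
	"log"
	"os"
)

// copyPart copies a part file into the destination object file without
// reading the data into user space, allowing the copy_file_range fast path.
func copyPart(dst, src *os.File) (int64, error) {
	return io.Copy(dst, src)
}

func main() {
	src, err := os.Open("part.1")
	if err != nil {
		log.Fatal(err)
	}
	defer src.Close()

	dst, err := os.Create("object")
	if err != nil {
		log.Fatal(err)
	}
	defer dst.Close()

	if _, err := copyPart(dst, src); err != nil {
		log.Fatal(err)
	}
}
```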
2025-07-03 19:58:53 -07:00
Ben McClelland
4ce0ba33e9 Merge pull request #1371 from versity/sis/bucket-object-name-validation
feat: adds a middleware to validate bucket/object names
2025-07-03 19:57:18 -07:00
niksis02
98a7b7f402 feat: adds a middleware to validate bucket/object names
Implements a middleware that validates incoming bucket and object names before authentication. This helps prevent malicious attacks that attempt to access restricted or unreachable data in `POSIX`.

Adds test cases to cover such attack scenarios, including false negatives where encoded paths are used to try accessing resources outside the intended bucket.

Removes bucket validation from all other layers, including `controllers` and both `POSIX` and `ScoutFS` backends, by moving the logic entirely into the middleware layer.
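
An illustrative sketch of the kind of pre-auth check such a middleware can run (the gateway's actual rules are more complete):

```go
package main

import (
	"fmt"
	"strings"
)

// validObjectName rejects empty names, absolute paths, and any "."/".."/empty
// path segments that could escape the bucket root on a POSIX backend.
func validObjectName(name string) bool {
	if name == "" || strings.HasPrefix(name, "/") {
		return false
	}
	for _, seg := range strings.Split(name, "/") {
		if seg == "" || seg == "." || seg == ".." {
			return false
		}
	}
	return true
}

func main() {
	fmt.Println(validObjectName("photos/2024/cat.jpg")) // true
	fmt.Println(validObjectName("../etc/passwd"))       // false
	fmt.Println(validObjectName("a//b"))                // false
}
```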
2025-07-04 00:55:03 +04:00
Ben McClelland
b09efa532c Merge pull request #1370 from versity/ben/s3-client-retry
fix: prevent internal request retry to s3proxy backend
2025-07-03 11:39:06 -07:00
Ben McClelland
1066c44a04 Merge pull request #1368 from versity/ben/fix-s3-create-bucket
fix: s3proxy create bucket always returning BucketAlreadyExists
2025-07-03 11:38:52 -07:00
Ben McClelland
0d73e3ebe2 fix: prevent internal request retry to s3proxy backend
The http body stream is not a seekable stream, so most operation
retry attempts will fail with an internal server error. This
change tells the s3 client within the gateway to not retry any
requests, and instead let the client of the gateway handle the
error retry.

Fixes #1353
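
A hedged sketch of one way to disable SDK retries with aws-sdk-go-v2 (not necessarily the gateway's exact change):

```go
package main

import (
	"context"
	"log"

	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/config"
	"github.com/aws/aws-sdk-go-v2/service/s3"
)

func main() {
	cfg, err := config.LoadDefaultConfig(context.TODO())
	if err != nil {
		log.Fatal(err)
	}

	// Disable SDK retries: request bodies streamed from the client are not
	// seekable, so a retry would fail anyway; let the gateway's caller retry.
	_ = s3.NewFromConfig(cfg, func(o *s3.Options) {
		o.Retryer = aws.NopRetryer{}
	})
}
```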
2025-07-03 10:20:44 -07:00
Ben McClelland
5ba5327ba6 fix: s3proxy create bucket always returning BucketAlreadyExists
We were using the metadata retrieval to check for existing
buckets during create, and then return either BucketAlreadyExists
or ErrBucketAlreadyOwnedByYou accordingly.

However, the metadata retrieval was returning success with a
default ACL when the bucket metadata did not already exist,
causing the gateway to always think the bucket existed.

The fix here is to let the metadata retrieval know that we do
not want the default ACL in this case.
2025-07-02 16:29:28 -07:00
Ben McClelland
78537bedf9 Merge pull request #1319 from versity/sis/public-buckets
feat: implements public bucket access.
2025-07-02 15:46:33 -07:00
Ben McClelland
c276e0ebe4 Merge pull request #1323 from versity/test/rest_encode_urls
Test/rest encode urls
2025-07-01 15:54:11 -07:00
Luke McCrone
1c08eaadcd test: PutObject/ListObjects/GetObject/HeadObject encodings 2025-07-01 17:52:19 -03:00
niksis02
458db64e2d feat: implements public bucket access.
This implementation introduces **public buckets**, which are accessible without signature-based authentication.

There are two ways to grant public access to a bucket:

* **Bucket ACLs**
* **Bucket Policies**

Only `Get` and `List` operations are permitted on public buckets. All **write operations** require authentication, regardless of whether public access is granted through an ACL or a policy.

The implementation includes an `AuthorizePublicBucketAccess` middleware, which checks if public access has been granted to the bucket. If so, authentication middlewares are skipped. For unauthenticated requests, appropriate errors are returned based on the specific S3 action.

---

**1. Bucket-Level Operations:**

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": "*",
      "Action": "s3:*",
      "Resource": "arn:aws:s3:::test"
    }
  ]
}
```

**2. Object-Level Operations:**

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": "*",
      "Action": "s3:*",
      "Resource": "arn:aws:s3:::test/*"
    }
  ]
}
```

**3. Both Bucket and Object-Level Operations:**

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": "*",
      "Action": "s3:*",
      "Resource": "arn:aws:s3:::test"
    },
    {
      "Effect": "Allow",
      "Principal": "*",
      "Action": "s3:*",
      "Resource": "arn:aws:s3:::test/*"
    }
  ]
}
```

---

```sh
aws s3api create-bucket --bucket test --object-ownership BucketOwnerPreferred
aws s3api put-bucket-acl --bucket test --acl public-read
```
2025-07-02 00:11:10 +04:00
Ben McClelland
8e5b7ead92 Merge pull request #1322 from versity/test/rest_create_bucket
test - rest bucket creation, put-object test
2025-07-01 10:38:26 -07:00
Luke McCrone
58659ae279 test: REST create bucket test, PutObject w/o Content-Length 2025-07-01 10:33:17 -03:00
228 changed files with 31046 additions and 11894 deletions

View File

@@ -21,15 +21,21 @@ jobs:
RECREATE_BUCKETS: "true"
DELETE_BUCKETS_AFTER_TEST: "true"
BACKEND: "posix"
- set: "REST, posix, non-static, base|acl, folder IAM"
- set: "REST, posix, non-static, base|acl|multipart|put-object, folder IAM"
IAM_TYPE: folder
RUN_SET: "rest-base,rest-acl"
RUN_SET: "rest-base,rest-acl,rest-multipart,rest-put-object"
RECREATE_BUCKETS: "true"
DELETE_BUCKETS_AFTER_TEST: "true"
BACKEND: "posix"
- set: "REST, posix, non-static, chunked|checksum|versioning|bucket, folder IAM"
IAM_TYPE: folder
RUN_SET: "rest-chunked,rest-checksum,rest-versioning,rest-bucket"
RUN_SET: "rest-chunked,rest-checksum,rest-versioning,rest-bucket,rest-list-buckets,rest-create-bucket,rest-head-bucket"
RECREATE_BUCKETS: "true"
DELETE_BUCKETS_AFTER_TEST: "true"
BACKEND: "posix"
- set: "REST, posix, non-static, not implemented|rest-delete-bucket-ownership-controls|rest-delete-bucket-tagging, folder IAM"
IAM_TYPE: folder
RUN_SET: "rest-not-implemented,rest-delete-bucket-ownership-controls,rest-delete-bucket-tagging"
RECREATE_BUCKETS: "true"
DELETE_BUCKETS_AFTER_TEST: "true"
BACKEND: "posix"

View File

@@ -40,7 +40,7 @@ func VerifyObjectCopyAccess(ctx context.Context, be backend.Backend, copySource
// Verify source bucket access
srcBucket, srcObject, found := strings.Cut(copySource, "/")
if !found {
return s3err.GetAPIError(s3err.ErrInvalidCopySource)
return s3err.GetAPIError(s3err.ErrInvalidCopySourceBucket)
}
// Get source bucket ACL
@@ -70,20 +70,20 @@ func VerifyObjectCopyAccess(ctx context.Context, be backend.Backend, copySource
}
type AccessOptions struct {
Acl ACL
AclPermission Permission
IsRoot bool
Acc Account
Bucket string
Object string
Action Action
Readonly bool
IsBucketPublic bool
Acl ACL
AclPermission Permission
IsRoot bool
Acc Account
Bucket string
Object string
Action Action
Readonly bool
IsPublicRequest bool
}
func VerifyAccess(ctx context.Context, be backend.Backend, opts AccessOptions) error {
// Skip the access check for public buckets
if opts.IsBucketPublic {
// Skip the access check for public bucket requests
if opts.IsPublicRequest {
return nil
}
if opts.Readonly {
@@ -155,18 +155,6 @@ func VerifyPublicAccess(ctx context.Context, be backend.Backend, action Action,
return nil
}
func MayCreateBucket(acct Account, isRoot bool) error {
if isRoot {
return nil
}
if acct.Role == RoleUser {
return s3err.GetAPIError(s3err.ErrAccessDenied)
}
return nil
}
func IsAdminOrOwner(acct Account, isRoot bool, acl ACL) error {
// Owner check
if acct.Access == acl.Owner {

View File

@@ -385,7 +385,7 @@ func CheckIfAccountsExist(accs []string, iam IAMService) ([]string, error) {
for _, acc := range accs {
_, err := iam.GetUserAccount(acc)
if err != nil {
if err == ErrNoSuchUser {
if err == ErrNoSuchUser || err == s3err.GetAPIError(s3err.ErrAdminUserNotFound) {
result = append(result, acc)
continue
}
@@ -466,3 +466,30 @@ func VerifyPublicBucketACL(ctx context.Context, be backend.Backend, bucket strin
return nil
}
// UpdateBucketACLOwner sets default ACL with new owner and removes
// any previous bucket policy that was in place
func UpdateBucketACLOwner(ctx context.Context, be backend.Backend, bucket, newOwner string) error {
acl := ACL{
Owner: newOwner,
Grantees: []Grantee{
{
Permission: PermissionFullControl,
Access: newOwner,
Type: types.TypeCanonicalUser,
},
},
}
result, err := json.Marshal(acl)
if err != nil {
return fmt.Errorf("marshal ACL: %w", err)
}
err = be.PutBucketAcl(ctx, bucket, result)
if err != nil {
return err
}
return be.DeleteBucketPolicy(ctx, bucket)
}

338
auth/bucket_cors.go Normal file
View File

@@ -0,0 +1,338 @@
// Copyright 2023 Versity Software
// This file is licensed under the Apache License, Version 2.0
// (the "License"); you may not use this file except in compliance
// with the License. You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing,
// software distributed under the License is distributed on an
// "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
// KIND, either express or implied. See the License for the
// specific language governing permissions and limitations
// under the License.
package auth
import (
"encoding/xml"
"fmt"
"net/http"
"regexp"
"strings"
"github.com/versity/versitygw/debuglogger"
"github.com/versity/versitygw/s3err"
)
// headerRegex is the regexp to validate http header names
var headerRegex = regexp.MustCompile(`^[!#$%&'*+\-.^_` + "`" + `|~0-9A-Za-z]+$`)
type CORSHeader string
type CORSHTTPMethod string
// IsValid validates the CORS http header
// the rules are based on http RFC
// https://datatracker.ietf.org/doc/html/rfc7230#section-3.2
//
// Empty values are considered as valid
func (ch CORSHeader) IsValid() bool {
return ch == "" || headerRegex.MatchString(ch.String())
}
// String converts the header value to 'string'
func (ch CORSHeader) String() string {
return string(ch)
}
// ToLower converts the header to lower case
func (ch CORSHeader) ToLower() string {
return strings.ToLower(string(ch))
}
// IsValid validates the cors http request method:
// the methods are case sensitive
func (cm CORSHTTPMethod) IsValid() bool {
return cm.IsEmpty() || cm == http.MethodGet || cm == http.MethodHead || cm == http.MethodPut ||
cm == http.MethodPost || cm == http.MethodDelete
}
// IsEmpty checks if the cors method is an empty string
func (cm CORSHTTPMethod) IsEmpty() bool {
return cm == ""
}
// String converts the method value to 'string'
func (cm CORSHTTPMethod) String() string {
return string(cm)
}
type CORSConfiguration struct {
Rules []CORSRule `xml:"CORSRule"`
}
// Validate validates the cors configuration rules
func (cc *CORSConfiguration) Validate() error {
if cc == nil || cc.Rules == nil {
debuglogger.Logf("invalid CORS configuration")
return s3err.GetAPIError(s3err.ErrMalformedXML)
}
if len(cc.Rules) == 0 {
debuglogger.Logf("empty CORS config rules")
return s3err.GetAPIError(s3err.ErrMalformedXML)
}
// validate each CORS rule
for _, rule := range cc.Rules {
if err := rule.Validate(); err != nil {
return err
}
}
return nil
}
type CORSAllowanceConfig struct {
Origin string
Methods string
ExposedHeaders string
AllowCredentials string
AllowHeaders string
MaxAge *int32
}
// IsAllowed walks through the CORS rules and finds the first one allowing access.
// If no rule grants access, returns 'AccessForbidden'
func (cc *CORSConfiguration) IsAllowed(origin string, method CORSHTTPMethod, headers []CORSHeader) (*CORSAllowanceConfig, error) {
// if the method is empty, cors is forbidden anyway;
// skip without going through the rules
if method.IsEmpty() {
debuglogger.Logf("empty Access-Control-Request-Method")
return nil, s3err.GetAPIError(s3err.ErrCORSForbidden)
}
for _, rule := range cc.Rules {
// find the first rule granting access
if isAllowed, wilcardOrigin := rule.Match(origin, method, headers); isAllowed {
o := origin
allowCredentials := "true"
if wilcardOrigin {
o = "*"
allowCredentials = "false"
}
return &CORSAllowanceConfig{
Origin: o,
AllowCredentials: allowCredentials,
Methods: rule.GetAllowedMethods(),
ExposedHeaders: rule.GetExposeHeaders(),
AllowHeaders: buildAllowedHeaders(headers),
MaxAge: rule.MaxAgeSeconds,
}, nil
}
}
// if no matching rule is found, return AccessForbidden
return nil, s3err.GetAPIError(s3err.ErrCORSForbidden)
}
type CORSRule struct {
AllowedMethods []CORSHTTPMethod `xml:"AllowedMethod"`
AllowedHeaders []CORSHeader `xml:"AllowedHeader"`
ExposeHeaders []CORSHeader `xml:"ExposeHeader"`
AllowedOrigins []string `xml:"AllowedOrigin"`
ID *string
MaxAgeSeconds *int32
}
// Validate validates and returns error if CORS configuration has invalid rule
func (cr *CORSRule) Validate() error {
// validate CORS allowed headers
for _, header := range cr.AllowedHeaders {
if !header.IsValid() {
debuglogger.Logf("invalid CORS allowed header: %s", header)
return s3err.GetInvalidCORSHeaderErr(header.String())
}
}
// validate CORS allowed methods
for _, method := range cr.AllowedMethods {
if !method.IsValid() {
debuglogger.Logf("invalid CORS allowed method: %s", method)
return s3err.GetUnsopportedCORSMethodErr(method.String())
}
}
// validate CORS expose headers
for _, header := range cr.ExposeHeaders {
if !header.IsValid() {
debuglogger.Logf("invalid CORS exposed header: %s", header)
return s3err.GetInvalidCORSHeaderErr(header.String())
}
}
return nil
}
// Match matches the provided origin, method and headers with the
// CORS configuration rule
// if the matching origin is "*", it returns true as the first argument
func (cr *CORSRule) Match(origin string, method CORSHTTPMethod, headers []CORSHeader) (bool, bool) {
wildcardOrigin := false
originFound := false
// check if the provided origin exists in CORS AllowedOrigins
for _, or := range cr.AllowedOrigins {
if wildcardMatch(or, origin) {
originFound = true
if or == "*" {
// mark wildcardOrigin as true, if "*" is found in AllowedOrigins
wildcardOrigin = true
}
break
}
}
if !originFound {
return false, false
}
// cache the CORS AllowedMethods in a map
allowedMethods := cacheCORSMethods(cr.AllowedMethods)
// check if the provided method exists in CORS AllowedMethods
if _, ok := allowedMethods[method]; !ok {
return false, false
}
// check if the CORS rule allowed headers match
// the requested headers
for _, reqHeader := range headers {
match := false
for _, header := range cr.AllowedHeaders {
if wildcardMatch(header.ToLower(), reqHeader.ToLower()) {
match = true
break
}
}
if !match {
return false, false
}
}
return true, wildcardOrigin
}
// GetExposeHeaders returns comma separated CORS expose headers
func (cr *CORSRule) GetExposeHeaders() string {
var result strings.Builder
for i, h := range cr.ExposeHeaders {
if i > 0 {
result.WriteString(", ")
}
result.WriteString(h.String())
}
return result.String()
}
// buildAllowedHeaders builds a comma separated string from []CORSHeader
func buildAllowedHeaders(headers []CORSHeader) string {
var result strings.Builder
for i, h := range headers {
if i > 0 {
result.WriteString(", ")
}
result.WriteString(h.ToLower())
}
return result.String()
}
// GetAllowedMethods returns comma separated CORS allowed methods
func (cr *CORSRule) GetAllowedMethods() string {
var result strings.Builder
for i, m := range cr.AllowedMethods {
if i > 0 {
result.WriteString(", ")
}
result.WriteString(m.String())
}
return result.String()
}
// ParseCORSOutput parses raw bytes to 'CORSConfiguration'
func ParseCORSOutput(data []byte) (*CORSConfiguration, error) {
var config CORSConfiguration
err := xml.Unmarshal(data, &config)
if err != nil {
debuglogger.Logf("unmarshal cors output: %v", err)
return nil, fmt.Errorf("failed to parse cors config: %w", err)
}
return &config, nil
}
func cacheCORSMethods(input []CORSHTTPMethod) map[CORSHTTPMethod]struct{} {
result := make(map[CORSHTTPMethod]struct{}, len(input))
for _, el := range input {
result[el] = struct{}{}
}
return result
}
// ParseCORSHeaders parses/validates Access-Control-Request-Headers
// and returns []CORSHeaders
func ParseCORSHeaders(headers string) ([]CORSHeader, error) {
result := []CORSHeader{}
if headers == "" {
return result, nil
}
headersSplitted := strings.Split(headers, ",")
for _, h := range headersSplitted {
corsHeader := CORSHeader(strings.TrimSpace(h))
if corsHeader == "" || !corsHeader.IsValid() {
debuglogger.Logf("invalid access control header: %s", h)
return nil, s3err.GetInvalidCORSRequestHeaderErr(h)
}
result = append(result, corsHeader)
}
return result, nil
}
func wildcardMatch(pattern, input string) bool {
pIdx, sIdx := 0, 0
starIdx, matchIdx := -1, 0
for sIdx < len(input) {
if pIdx < len(pattern) && pattern[pIdx] == input[sIdx] {
// exact match of current char
sIdx++
pIdx++
} else if pIdx < len(pattern) && pattern[pIdx] == '*' {
// remember star position
starIdx = pIdx
matchIdx = sIdx
pIdx++
} else if starIdx != -1 {
// backtrack: try to match more characters with '*'
pIdx = starIdx + 1
matchIdx++
sIdx = matchIdx
} else {
return false
}
}
// skip trailing stars
for pIdx < len(pattern) && pattern[pIdx] == '*' {
pIdx++
}
return pIdx == len(pattern)
}

736
auth/bucket_cors_test.go Normal file
View File

@@ -0,0 +1,736 @@
// Copyright 2023 Versity Software
// This file is licensed under the Apache License, Version 2.0
// (the "License"); you may not use this file except in compliance
// with the License. You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing,
// software distributed under the License is distributed on an
// "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
// KIND, either express or implied. See the License for the
// specific language governing permissions and limitations
// under the License.
package auth
import (
"net/http"
"testing"
"github.com/stretchr/testify/assert"
"github.com/versity/versitygw/s3err"
)
func TestCORSHeader_IsValid(t *testing.T) {
tests := []struct {
name string
header CORSHeader
want bool
}{
{"empty", "", true},
{"valid", "X-Custom-Header", true},
{"invalid_1", "Invalid Header", false},
{"invalid_2", "invalid/header", false},
{"invalid_3", "Invalid\tHeader", false},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
if got := tt.header.IsValid(); got != tt.want {
t.Errorf("IsValid() = %v, want %v", got, tt.want)
}
})
}
}
func TestCORSHTTPMethod_IsValid(t *testing.T) {
tests := []struct {
name string
method CORSHTTPMethod
want bool
}{
{"empty valid", "", true},
{"GET valid", http.MethodGet, true},
{"HEAD valid", http.MethodHead, true},
{"PUT valid", http.MethodPut, true},
{"POST valid", http.MethodPost, true},
{"DELETE valid", http.MethodDelete, true},
{"get valid", "get", false},
{"put valid", "put", false},
{"post valid", "post", false},
{"head valid", "head", false},
{"invalid", "FOO", false},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
if got := tt.method.IsValid(); got != tt.want {
t.Errorf("IsValid() = %v, want %v", got, tt.want)
}
})
}
}
func TestCORSHeader_ToLower(t *testing.T) {
tests := []struct {
name string
header CORSHeader
want string
}{
{
name: "already lowercase",
header: CORSHeader("content-type"),
want: "content-type",
},
{
name: "mixed case",
header: CORSHeader("X-CuStOm-HeAdEr"),
want: "x-custom-header",
},
{
name: "uppercase",
header: CORSHeader("AUTHORIZATION"),
want: "authorization",
},
{
name: "empty string",
header: CORSHeader(""),
want: "",
},
{
name: "numeric and symbols",
header: CORSHeader("X-123-HEADER"),
want: "x-123-header",
},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
got := tt.header.ToLower()
assert.Equal(t, tt.want, got)
})
}
}
func TestCORSHTTPMethod_IsEmpty(t *testing.T) {
tests := []struct {
name string
method CORSHTTPMethod
want bool
}{
{
name: "empty string is empty",
method: CORSHTTPMethod(""),
want: true,
},
{
name: "GET method is not empty",
method: CORSHTTPMethod("GET"),
want: false,
},
{
name: "random string is not empty",
method: CORSHTTPMethod("FOO"),
want: false,
},
{
name: "lowercase get is not empty (case sensitive)",
method: CORSHTTPMethod("get"),
want: false,
},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
got := tt.method.IsEmpty()
assert.Equal(t, tt.want, got)
})
}
}
func TestCORSConfiguration_Validate(t *testing.T) {
tests := []struct {
name string
cfg *CORSConfiguration
want error
}{
{"nil config", nil, s3err.GetAPIError(s3err.ErrMalformedXML)},
{"nil rules", &CORSConfiguration{}, s3err.GetAPIError(s3err.ErrMalformedXML)},
{"empty rules", &CORSConfiguration{Rules: []CORSRule{}}, s3err.GetAPIError(s3err.ErrMalformedXML)},
{"invalid rule", &CORSConfiguration{Rules: []CORSRule{{AllowedHeaders: []CORSHeader{"Invalid Header"}}}}, s3err.GetInvalidCORSHeaderErr("Invalid Header")},
{"valid rule", &CORSConfiguration{Rules: []CORSRule{{
AllowedOrigins: []string{"origin"},
AllowedHeaders: []CORSHeader{"X-Test"},
AllowedMethods: []CORSHTTPMethod{http.MethodGet},
ExposeHeaders: []CORSHeader{"X-Expose"},
}}}, nil},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
err := tt.cfg.Validate()
assert.EqualValues(t, tt.want, err)
})
}
}
func TestCORSConfiguration_IsAllowed(t *testing.T) {
type input struct {
cfg *CORSConfiguration
origin string
method CORSHTTPMethod
headers []CORSHeader
}
type output struct {
result *CORSAllowanceConfig
err error
}
tests := []struct {
name string
input input
output output
}{
{
name: "allowed exact origin",
input: input{
cfg: &CORSConfiguration{Rules: []CORSRule{{
AllowedOrigins: []string{"http://allowed.com"},
AllowedMethods: []CORSHTTPMethod{http.MethodGet},
AllowedHeaders: []CORSHeader{"X-Test"},
}}},
origin: "http://allowed.com",
method: http.MethodGet,
headers: []CORSHeader{"X-Test"},
},
output: output{
result: &CORSAllowanceConfig{
Origin: "http://allowed.com",
AllowCredentials: "true",
Methods: http.MethodGet,
AllowHeaders: "x-test",
ExposedHeaders: "",
MaxAge: nil,
},
err: nil,
},
},
{
name: "allowed wildcard origin",
input: input{
cfg: &CORSConfiguration{Rules: []CORSRule{{
AllowedOrigins: []string{"*"},
AllowedMethods: []CORSHTTPMethod{http.MethodGet},
AllowedHeaders: []CORSHeader{"X-Test"},
}}},
origin: "anything",
method: http.MethodGet,
headers: []CORSHeader{"X-Test"},
},
output: output{
result: &CORSAllowanceConfig{
Origin: "*",
AllowCredentials: "false",
AllowHeaders: "x-test",
Methods: http.MethodGet,
ExposedHeaders: "",
MaxAge: nil,
},
err: nil,
},
},
{
name: "forbidden no matching origin",
input: input{
cfg: &CORSConfiguration{Rules: []CORSRule{{
AllowedOrigins: []string{"http://nope.com"},
}}},
origin: "http://not-allowed.com",
method: http.MethodGet,
},
output: output{
result: nil,
err: s3err.GetAPIError(s3err.ErrCORSForbidden),
},
},
{
name: "forbidden method not allowed",
input: input{
cfg: &CORSConfiguration{Rules: []CORSRule{{
AllowedOrigins: []string{"http://allowed.com"},
AllowedMethods: []CORSHTTPMethod{http.MethodPost},
AllowedHeaders: []CORSHeader{"X-Test"},
}}},
origin: "http://allowed.com",
method: http.MethodGet,
headers: []CORSHeader{"X-Test"},
},
output: output{
result: nil,
err: s3err.GetAPIError(s3err.ErrCORSForbidden),
},
},
{
name: "forbidden header not allowed",
input: input{
cfg: &CORSConfiguration{Rules: []CORSRule{{
AllowedOrigins: []string{"http://allowed.com"},
AllowedMethods: []CORSHTTPMethod{http.MethodGet},
AllowedHeaders: []CORSHeader{"X-Test"},
}}},
origin: "http://allowed.com",
method: http.MethodGet,
headers: []CORSHeader{"X-Nope"},
},
output: output{
result: nil,
err: s3err.GetAPIError(s3err.ErrCORSForbidden),
},
},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
got, err := tt.input.cfg.IsAllowed(tt.input.origin, tt.input.method, tt.input.headers)
assert.EqualValues(t, tt.output.err, err)
assert.EqualValues(t, tt.output.result, got)
})
}
}
func TestCORSRule_Validate(t *testing.T) {
tests := []struct {
name string
rule CORSRule
want error
}{
{
name: "valid rule",
rule: CORSRule{
AllowedOrigins: []string{"http://allowed.com"},
AllowedMethods: []CORSHTTPMethod{http.MethodGet},
AllowedHeaders: []CORSHeader{"X-Test"},
},
want: nil,
},
{
name: "invalid allowed methods",
rule: CORSRule{
AllowedOrigins: []string{"http://allowed.com"},
AllowedMethods: []CORSHTTPMethod{"invalid_method"},
AllowedHeaders: []CORSHeader{"X-Test"},
},
want: s3err.GetUnsopportedCORSMethodErr("invalid_method"),
},
{
name: "invalid allowed header",
rule: CORSRule{
AllowedOrigins: []string{"http://allowed.com"},
AllowedMethods: []CORSHTTPMethod{http.MethodGet},
AllowedHeaders: []CORSHeader{"Invalid Header"},
},
want: s3err.GetInvalidCORSHeaderErr("Invalid Header"),
},
{
name: "invalid allowed header",
rule: CORSRule{
AllowedOrigins: []string{"http://allowed.com"},
AllowedMethods: []CORSHTTPMethod{http.MethodGet},
AllowedHeaders: []CORSHeader{"Content-Length"},
ExposeHeaders: []CORSHeader{"Content-Encoding", "invalid header"},
},
want: s3err.GetInvalidCORSHeaderErr("invalid header"),
},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
err := tt.rule.Validate()
assert.EqualValues(t, tt.want, err)
})
}
}
func TestCORSRule_Match(t *testing.T) {
type input struct {
rule CORSRule
origin string
method CORSHTTPMethod
headers []CORSHeader
}
type output struct {
isAllowed bool
isWildcard bool
}
tests := []struct {
name string
input input
output output
}{
{
name: "exact origin and method match",
input: input{
rule: CORSRule{
AllowedOrigins: []string{"http://allowed.com"},
AllowedMethods: []CORSHTTPMethod{http.MethodGet},
AllowedHeaders: []CORSHeader{"X-Test"},
},
origin: "http://allowed.com",
method: http.MethodGet,
headers: []CORSHeader{"X-Test"},
},
output: output{isAllowed: true, isWildcard: false},
},
{
name: "wildcard origin match",
input: input{
rule: CORSRule{
AllowedOrigins: []string{"*"},
AllowedMethods: []CORSHTTPMethod{http.MethodPost},
AllowedHeaders: []CORSHeader{"X-Test"},
},
origin: "http://random.com",
method: http.MethodPost,
headers: []CORSHeader{"X-Test"},
},
output: output{isAllowed: true, isWildcard: true},
},
{
name: "wildcard containing origin match",
input: input{
rule: CORSRule{
AllowedOrigins: []string{"http://random*"},
AllowedMethods: []CORSHTTPMethod{http.MethodPost},
AllowedHeaders: []CORSHeader{"X-Test"},
},
origin: "http://random.com",
method: http.MethodPost,
headers: []CORSHeader{"X-Test"},
},
output: output{isAllowed: true, isWildcard: false},
},
{
name: "wildcard allowed headers match",
input: input{
rule: CORSRule{
AllowedOrigins: []string{"http://something.com"},
AllowedMethods: []CORSHTTPMethod{http.MethodPost},
AllowedHeaders: []CORSHeader{"X-*"},
},
origin: "http://something.com",
method: http.MethodPost,
headers: []CORSHeader{"X-Test", "X-Something", "X-Anyting"},
},
output: output{isAllowed: true, isWildcard: false},
},
{
name: "origin mismatch",
input: input{
rule: CORSRule{
AllowedOrigins: []string{"http://allowed.com"},
AllowedMethods: []CORSHTTPMethod{http.MethodGet},
AllowedHeaders: []CORSHeader{"X-Test"},
},
origin: "http://notallowed.com",
method: http.MethodGet,
headers: []CORSHeader{"X-Test"},
},
output: output{isAllowed: false, isWildcard: false},
},
{
name: "method mismatch",
input: input{
rule: CORSRule{
AllowedOrigins: []string{"http://allowed.com"},
AllowedMethods: []CORSHTTPMethod{http.MethodPost},
AllowedHeaders: []CORSHeader{"X-Test"},
},
origin: "http://allowed.com",
method: http.MethodGet,
headers: []CORSHeader{"X-Test"},
},
output: output{isAllowed: false, isWildcard: false},
},
{
name: "header mismatch",
input: input{
rule: CORSRule{
AllowedOrigins: []string{"http://allowed.com"},
AllowedMethods: []CORSHTTPMethod{http.MethodGet},
AllowedHeaders: []CORSHeader{"X-Test"},
},
origin: "http://allowed.com",
method: http.MethodGet,
headers: []CORSHeader{"X-Other"},
},
output: output{isAllowed: false, isWildcard: false},
},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
isAllowed, wild := tt.input.rule.Match(tt.input.origin, tt.input.method, tt.input.headers)
assert.Equal(t, tt.output.isAllowed, isAllowed)
assert.Equal(t, tt.output.isWildcard, wild)
})
}
}
func TestGetExposeHeaders(t *testing.T) {
tests := []struct {
name string
rule CORSRule
want string
}{
{"multiple headers", CORSRule{ExposeHeaders: []CORSHeader{"Content-Length", "Content-Type", "Content-Encoding"}}, "Content-Length, Content-Type, Content-Encoding"},
{"single header", CORSRule{ExposeHeaders: []CORSHeader{"Authorization"}}, "Authorization"},
{"no headers", CORSRule{}, ""},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
got := tt.rule.GetExposeHeaders()
assert.Equal(t, tt.want, got)
})
}
}
func TestBuildAllowedHeaders(t *testing.T) {
tests := []struct {
name string
headers []CORSHeader
want string
}{
{
name: "empty slice returns empty string",
headers: []CORSHeader{},
want: "",
},
{
name: "single header lowercase",
headers: []CORSHeader{"Content-Type"},
want: "content-type",
},
{
name: "multiple headers lowercased with commas",
headers: []CORSHeader{"Content-Type", "X-Custom-Header", "Authorization"},
want: "content-type, x-custom-header, authorization",
},
{
name: "already lowercase header",
headers: []CORSHeader{"accept"},
want: "accept",
},
{
name: "mixed case headers",
headers: []CORSHeader{"ACCEPT", "x-Powered-By"},
want: "accept, x-powered-by",
},
{
name: "empty header value",
headers: []CORSHeader{""},
want: "",
},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
got := buildAllowedHeaders(tt.headers)
assert.Equal(t, tt.want, got)
})
}
}
func TestGetAllowedMethods(t *testing.T) {
tests := []struct {
name string
rule CORSRule
want string
}{
{"multiple methods", CORSRule{AllowedMethods: []CORSHTTPMethod{http.MethodGet, http.MethodPost, http.MethodPut}}, "GET, POST, PUT"},
{"single method", CORSRule{AllowedMethods: []CORSHTTPMethod{http.MethodGet}}, "GET"},
{"no methods", CORSRule{}, ""},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
got := tt.rule.GetAllowedMethods()
assert.Equal(t, tt.want, got)
})
}
}
func TestParseCORSOutput(t *testing.T) {
tests := []struct {
name string
data string
want bool
}{
{"valid", `<CORSConfiguration><CORSRule></CORSRule></CORSConfiguration>`, true},
{"invalid xml", `<CORSConfiguration><CORSRule>`, false},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
cfg, err := ParseCORSOutput([]byte(tt.data))
if (err == nil) != tt.want {
t.Errorf("ParseCORSOutput() err = %v, want success=%v", err, tt.want)
}
if tt.want && cfg == nil {
t.Errorf("Expected non-nil config")
}
})
}
}
func TestCacheCORSProps(t *testing.T) {
tests := []struct {
name string
in []CORSHTTPMethod
want map[string]struct{}
}{
{
name: "empty CORSHTTPMethod slice",
in: []CORSHTTPMethod{},
want: map[string]struct{}{},
},
{
name: "single CORSHTTPMethod",
in: []CORSHTTPMethod{http.MethodGet},
want: map[string]struct{}{http.MethodGet: {}},
},
{
name: "multiple CORSHTTPMethods",
in: []CORSHTTPMethod{http.MethodGet, http.MethodPost, http.MethodPut},
want: map[string]struct{}{
http.MethodGet: {},
http.MethodPost: {},
http.MethodPut: {},
},
},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
got := cacheCORSMethods(tt.in)
assert.Equal(t, len(tt.want), len(got))
for key := range tt.want {
_, ok := got[CORSHTTPMethod(key)]
assert.True(t, ok)
}
})
}
}
func TestParseCORSHeaders(t *testing.T) {
tests := []struct {
name string
in string
want []CORSHeader
err error
}{
{
name: "empty string",
in: "",
want: []CORSHeader{},
err: nil,
},
{
name: "single valid header",
in: "X-Test",
want: []CORSHeader{"X-Test"},
err: nil,
},
{
name: "multiple valid headers with spaces",
in: "X-Test, Content-Type, Authorization",
want: []CORSHeader{"X-Test", "Content-Type", "Authorization"},
err: nil,
},
{
name: "header with leading/trailing spaces",
in: " X-Test ",
want: []CORSHeader{"X-Test"},
err: nil,
},
{
name: "contains invalid header",
in: "X-Test, Invalid Header, Content-Type",
want: nil,
err: s3err.GetInvalidCORSRequestHeaderErr(" Invalid Header"),
},
{
name: "only invalid header",
in: "Invalid Header",
want: nil,
err: s3err.GetInvalidCORSRequestHeaderErr("Invalid Header"),
},
{
name: "multiple commas in a row",
in: "X-Test,,Content-Type",
want: nil,
err: s3err.GetInvalidCORSRequestHeaderErr(""),
},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
got, err := ParseCORSHeaders(tt.in)
assert.EqualValues(t, tt.err, err)
assert.Equal(t, tt.want, got)
})
}
}
func TestWildcardMatch(t *testing.T) {
tests := []struct {
name string
pattern string
input string
want bool
}{
// Exact match, no wildcards
{"exact match", "hello", "hello", true},
{"exact mismatch", "hello", "hell", false},
// Single '*' matching zero chars
{"star matches zero chars", "he*lo", "helo", true},
// Single '*' matching multiple chars
{"star matches multiple chars", "he*o", "heyyyyyo", true},
// '*' at start
{"star at start", "*world", "hello world", true},
// '*' at end
{"star at end", "hello*", "hello there", true},
// '*' matches whole string
{"only star", "*", "anything", true},
{"only star empty", "*", "", true},
// Multiple '*'s
{"multiple stars", "a*b*c", "axxxbzzzzyc", true},
{"multiple stars no match", "a*b*c", "axxxbzzzzy", false},
// Backtracking needed
{"backtracking required", "a*b*c", "ab123c", true},
// No match with star present
{"star but mismatch", "he*world", "hey there", false},
// Trailing stars in pattern
{"trailing stars match", "abc**", "abc", true},
{"trailing stars match longer", "abc**", "abccc", true},
// Empty pattern cases
{"empty pattern and empty input", "", "", true},
{"empty pattern non-empty input", "", "a", false},
{"only stars pattern with empty input", "***", "", true},
// Pattern longer than input
{"pattern longer no star", "abcd", "abc", false},
// Input longer but no star
{"input longer no star", "abc", "abcd", false},
// Complex interleaved match
{"complex interleaved", "*a*b*cd*", "xxaYYbZZcd123", true},
// Star match at the end after mismatch
{"mismatch then star match", "ab*xyz", "abzzzxyz", true},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
got := wildcardMatch(tt.pattern, tt.input)
assert.Equal(t, tt.want, got)
})
}
}

View File

@@ -17,6 +17,7 @@ package auth
import (
"encoding/json"
"errors"
"fmt"
"net/http"
"github.com/versity/versitygw/s3err"
@@ -91,12 +92,12 @@ func (bp *BucketPolicy) isAllowed(principal string, action Action, resource stri
return isAllowed
}
// isPublic checks if the bucket policy statements contain
// an entity granting public access
func (bp *BucketPolicy) isPublic(resource string, action Action) bool {
// IsPublicFor checks if the bucket policy statements contain
// an entity granting public access to the given resource and action
func (bp *BucketPolicy) isPublicFor(resource string, action Action) bool {
var isAllowed bool
for _, statement := range bp.Statement {
if statement.isPublic(resource, action) {
if statement.isPublicFor(resource, action) {
switch statement.Effect {
case BucketPolicyAccessTypeAllow:
isAllowed = true
@@ -109,6 +110,18 @@ func (bp *BucketPolicy) isPublic(resource string, action Action) bool {
return isAllowed
}
// IsPublic checks if one of the bucket policy statements grants
// public access to ALL users
func (bp *BucketPolicy) IsPublic() bool {
for _, statement := range bp.Statement {
if statement.isPublic() {
return true
}
}
return false
}
type BucketPolicyItem struct {
Effect BucketPolicyAccessType `json:"Effect"`
Principals Principals `json:"Principal"`
@@ -154,9 +167,16 @@ func (bpi *BucketPolicyItem) findMatch(principal string, action Action, resource
return false
}
// isPublic checks if the bucket policy statemant grants public access
func (bpi *BucketPolicyItem) isPublic(resource string, action Action) bool {
return bpi.Principals.IsPublic() && bpi.Actions.FindMatch(action) && bpi.Resources.FindMatch(resource)
// isPublicFor checks if the bucket policy statement grants public access
// for given resource and action
func (bpi *BucketPolicyItem) isPublicFor(resource string, action Action) bool {
return bpi.Principals.isPublic() && bpi.Actions.FindMatch(action) && bpi.Resources.FindMatch(resource)
}
// isPublic checks if the statement grants public access
// to ALL users
func (bpi *BucketPolicyItem) isPublic() bool {
return bpi.Principals.isPublic()
}
func getMalformedPolicyError(err error) error {
@@ -167,17 +187,27 @@ func getMalformedPolicyError(err error) error {
}
}
// ParsePolicyDocument parses raw bytes to 'BucketPolicy'
func ParsePolicyDocument(data []byte) (*BucketPolicy, error) {
var policy BucketPolicy
if err := json.Unmarshal(data, &policy); err != nil {
var pe policyErr
if errors.As(err, &pe) {
return nil, getMalformedPolicyError(err)
}
return nil, getMalformedPolicyError(policyErrInvalidPolicy)
}
return &policy, nil
}
func ValidatePolicyDocument(policyBin []byte, bucket string, iam IAMService) error {
if len(policyBin) == 0 || policyBin[0] != '{' {
return getMalformedPolicyError(policyErrInvalidFirstChar)
}
var policy BucketPolicy
if err := json.Unmarshal(policyBin, &policy); err != nil {
var pe policyErr
if errors.As(err, &pe) {
return getMalformedPolicyError(err)
}
return getMalformedPolicyError(policyErrInvalidPolicy)
policy, err := ParsePolicyDocument(policyBin)
if err != nil {
return err
}
if len(policy.Statement) == 0 {
@@ -194,7 +224,7 @@ func ValidatePolicyDocument(policyBin []byte, bucket string, iam IAMService) err
func VerifyBucketPolicy(policy []byte, access, bucket, object string, action Action) error {
var bucketPolicy BucketPolicy
if err := json.Unmarshal(policy, &bucketPolicy); err != nil {
return err
return fmt.Errorf("failed to parse the bucket policy: %w", err)
}
resource := bucket
@@ -221,9 +251,40 @@ func VerifyPublicBucketPolicy(policy []byte, bucket, object string, action Actio
resource += "/" + object
}
if !bucketPolicy.isPublic(resource, action) {
if !bucketPolicy.isPublicFor(resource, action) {
return ErrAccessDenied
}
return nil
}
// matchPattern checks if the input string matches the given pattern with wildcard(`*`) and any character(`?`).
// - `?` matches exactly one occurrence of any character.
// - `*` matches arbitrary many (including zero) occurrences of any character.
func matchPattern(pattern, input string) bool {
pIdx, sIdx := 0, 0
starIdx, matchIdx := -1, 0
for sIdx < len(input) {
if pIdx < len(pattern) && (pattern[pIdx] == '?' || pattern[pIdx] == input[sIdx]) {
sIdx++
pIdx++
} else if pIdx < len(pattern) && pattern[pIdx] == '*' {
starIdx = pIdx
matchIdx = sIdx
pIdx++
} else if starIdx != -1 {
pIdx = starIdx + 1
matchIdx++
sIdx = matchIdx
} else {
return false
}
}
for pIdx < len(pattern) && pattern[pIdx] == '*' {
pIdx++
}
return pIdx == len(pattern)
}

View File

@@ -22,87 +22,144 @@ import (
type Action string
const (
GetBucketAclAction Action = "s3:GetBucketAcl"
CreateBucketAction Action = "s3:CreateBucket"
PutBucketAclAction Action = "s3:PutBucketAcl"
DeleteBucketAction Action = "s3:DeleteBucket"
PutBucketVersioningAction Action = "s3:PutBucketVersioning"
GetBucketVersioningAction Action = "s3:GetBucketVersioning"
PutBucketPolicyAction Action = "s3:PutBucketPolicy"
GetBucketPolicyAction Action = "s3:GetBucketPolicy"
DeleteBucketPolicyAction Action = "s3:DeleteBucketPolicy"
AbortMultipartUploadAction Action = "s3:AbortMultipartUpload"
ListMultipartUploadPartsAction Action = "s3:ListMultipartUploadParts"
ListBucketMultipartUploadsAction Action = "s3:ListBucketMultipartUploads"
PutObjectAction Action = "s3:PutObject"
GetObjectAction Action = "s3:GetObject"
GetObjectVersionAction Action = "s3:GetObjectVersion"
DeleteObjectAction Action = "s3:DeleteObject"
GetObjectAclAction Action = "s3:GetObjectAcl"
GetObjectAttributesAction Action = "s3:GetObjectAttributes"
PutObjectAclAction Action = "s3:PutObjectAcl"
RestoreObjectAction Action = "s3:RestoreObject"
GetBucketTaggingAction Action = "s3:GetBucketTagging"
PutBucketTaggingAction Action = "s3:PutBucketTagging"
GetObjectTaggingAction Action = "s3:GetObjectTagging"
PutObjectTaggingAction Action = "s3:PutObjectTagging"
DeleteObjectTaggingAction Action = "s3:DeleteObjectTagging"
ListBucketVersionsAction Action = "s3:ListBucketVersions"
ListBucketAction Action = "s3:ListBucket"
GetBucketObjectLockConfigurationAction Action = "s3:GetBucketObjectLockConfiguration"
PutBucketObjectLockConfigurationAction Action = "s3:PutBucketObjectLockConfiguration"
GetObjectLegalHoldAction Action = "s3:GetObjectLegalHold"
PutObjectLegalHoldAction Action = "s3:PutObjectLegalHold"
GetObjectRetentionAction Action = "s3:GetObjectRetention"
PutObjectRetentionAction Action = "s3:PutObjectRetention"
BypassGovernanceRetentionAction Action = "s3:BypassGovernanceRetention"
PutBucketOwnershipControlsAction Action = "s3:PutBucketOwnershipControls"
GetBucketOwnershipControlsAction Action = "s3:GetBucketOwnershipControls"
PutBucketCorsAction Action = "s3:PutBucketCORS"
GetBucketCorsAction Action = "s3:GetBucketCORS"
AllActions Action = "s3:*"
GetBucketAclAction Action = "s3:GetBucketAcl"
CreateBucketAction Action = "s3:CreateBucket"
PutBucketAclAction Action = "s3:PutBucketAcl"
DeleteBucketAction Action = "s3:DeleteBucket"
PutBucketVersioningAction Action = "s3:PutBucketVersioning"
GetBucketVersioningAction Action = "s3:GetBucketVersioning"
PutBucketPolicyAction Action = "s3:PutBucketPolicy"
GetBucketPolicyAction Action = "s3:GetBucketPolicy"
DeleteBucketPolicyAction Action = "s3:DeleteBucketPolicy"
AbortMultipartUploadAction Action = "s3:AbortMultipartUpload"
ListMultipartUploadPartsAction Action = "s3:ListMultipartUploadParts"
ListBucketMultipartUploadsAction Action = "s3:ListBucketMultipartUploads"
PutObjectAction Action = "s3:PutObject"
GetObjectAction Action = "s3:GetObject"
GetObjectVersionAction Action = "s3:GetObjectVersion"
DeleteObjectAction Action = "s3:DeleteObject"
GetObjectAclAction Action = "s3:GetObjectAcl"
GetObjectAttributesAction Action = "s3:GetObjectAttributes"
PutObjectAclAction Action = "s3:PutObjectAcl"
RestoreObjectAction Action = "s3:RestoreObject"
GetBucketTaggingAction Action = "s3:GetBucketTagging"
PutBucketTaggingAction Action = "s3:PutBucketTagging"
GetObjectTaggingAction Action = "s3:GetObjectTagging"
PutObjectTaggingAction Action = "s3:PutObjectTagging"
DeleteObjectTaggingAction Action = "s3:DeleteObjectTagging"
ListBucketVersionsAction Action = "s3:ListBucketVersions"
ListBucketAction Action = "s3:ListBucket"
GetBucketObjectLockConfigurationAction Action = "s3:GetBucketObjectLockConfiguration"
PutBucketObjectLockConfigurationAction Action = "s3:PutBucketObjectLockConfiguration"
GetObjectLegalHoldAction Action = "s3:GetObjectLegalHold"
PutObjectLegalHoldAction Action = "s3:PutObjectLegalHold"
GetObjectRetentionAction Action = "s3:GetObjectRetention"
PutObjectRetentionAction Action = "s3:PutObjectRetention"
BypassGovernanceRetentionAction Action = "s3:BypassGovernanceRetention"
PutBucketOwnershipControlsAction Action = "s3:PutBucketOwnershipControls"
GetBucketOwnershipControlsAction Action = "s3:GetBucketOwnershipControls"
PutBucketCorsAction Action = "s3:PutBucketCORS"
GetBucketCorsAction Action = "s3:GetBucketCORS"
PutAnalyticsConfigurationAction Action = "s3:PutAnalyticsConfiguration"
GetAnalyticsConfigurationAction Action = "s3:GetAnalyticsConfiguration"
PutEncryptionConfigurationAction Action = "s3:PutEncryptionConfiguration"
GetEncryptionConfigurationAction Action = "s3:GetEncryptionConfiguration"
PutIntelligentTieringConfigurationAction Action = "s3:PutIntelligentTieringConfiguration"
GetIntelligentTieringConfigurationAction Action = "s3:GetIntelligentTieringConfiguration"
PutInventoryConfigurationAction Action = "s3:PutInventoryConfiguration"
GetInventoryConfigurationAction Action = "s3:GetInventoryConfiguration"
PutLifecycleConfigurationAction Action = "s3:PutLifecycleConfiguration"
GetLifecycleConfigurationAction Action = "s3:GetLifecycleConfiguration"
PutBucketLoggingAction Action = "s3:PutBucketLogging"
GetBucketLoggingAction Action = "s3:GetBucketLogging"
PutBucketRequestPaymentAction Action = "s3:PutBucketRequestPayment"
GetBucketRequestPaymentAction Action = "s3:GetBucketRequestPayment"
PutMetricsConfigurationAction Action = "s3:PutMetricsConfiguration"
GetMetricsConfigurationAction Action = "s3:GetMetricsConfiguration"
PutReplicationConfigurationAction Action = "s3:PutReplicationConfiguration"
GetReplicationConfigurationAction Action = "s3:GetReplicationConfiguration"
PutBucketPublicAccessBlockAction Action = "s3:PutBucketPublicAccessBlock"
GetBucketPublicAccessBlockAction Action = "s3:GetBucketPublicAccessBlock"
PutBucketNotificationAction Action = "s3:PutBucketNotification"
GetBucketNotificationAction Action = "s3:GetBucketNotification"
PutAccelerateConfigurationAction Action = "s3:PutAccelerateConfiguration"
GetAccelerateConfigurationAction Action = "s3:GetAccelerateConfiguration"
PutBucketWebsiteAction Action = "s3:PutBucketWebsite"
GetBucketWebsiteAction Action = "s3:GetBucketWebsite"
GetBucketPolicyStatusAction Action = "s3:GetBucketPolicyStatus"
GetBucketLocationAction Action = "s3:GetBucketLocation"
AllActions Action = "s3:*"
)
var supportedActionList = map[Action]struct{}{
GetBucketAclAction: {},
CreateBucketAction: {},
PutBucketAclAction: {},
DeleteBucketAction: {},
PutBucketVersioningAction: {},
GetBucketVersioningAction: {},
PutBucketPolicyAction: {},
GetBucketPolicyAction: {},
DeleteBucketPolicyAction: {},
AbortMultipartUploadAction: {},
ListMultipartUploadPartsAction: {},
ListBucketMultipartUploadsAction: {},
PutObjectAction: {},
GetObjectAction: {},
GetObjectVersionAction: {},
DeleteObjectAction: {},
GetObjectAclAction: {},
GetObjectAttributesAction: {},
PutObjectAclAction: {},
RestoreObjectAction: {},
GetBucketTaggingAction: {},
PutBucketTaggingAction: {},
GetObjectTaggingAction: {},
PutObjectTaggingAction: {},
DeleteObjectTaggingAction: {},
ListBucketVersionsAction: {},
ListBucketAction: {},
GetBucketObjectLockConfigurationAction: {},
PutBucketObjectLockConfigurationAction: {},
GetObjectLegalHoldAction: {},
PutObjectLegalHoldAction: {},
GetObjectRetentionAction: {},
PutObjectRetentionAction: {},
BypassGovernanceRetentionAction: {},
PutBucketOwnershipControlsAction: {},
GetBucketOwnershipControlsAction: {},
PutBucketCorsAction: {},
GetBucketCorsAction: {},
AllActions: {},
GetBucketAclAction: {},
CreateBucketAction: {},
PutBucketAclAction: {},
DeleteBucketAction: {},
PutBucketVersioningAction: {},
GetBucketVersioningAction: {},
PutBucketPolicyAction: {},
GetBucketPolicyAction: {},
DeleteBucketPolicyAction: {},
AbortMultipartUploadAction: {},
ListMultipartUploadPartsAction: {},
ListBucketMultipartUploadsAction: {},
PutObjectAction: {},
GetObjectAction: {},
GetObjectVersionAction: {},
DeleteObjectAction: {},
GetObjectAclAction: {},
GetObjectAttributesAction: {},
PutObjectAclAction: {},
RestoreObjectAction: {},
GetBucketTaggingAction: {},
PutBucketTaggingAction: {},
GetObjectTaggingAction: {},
PutObjectTaggingAction: {},
DeleteObjectTaggingAction: {},
ListBucketVersionsAction: {},
ListBucketAction: {},
GetBucketObjectLockConfigurationAction: {},
PutBucketObjectLockConfigurationAction: {},
GetObjectLegalHoldAction: {},
PutObjectLegalHoldAction: {},
GetObjectRetentionAction: {},
PutObjectRetentionAction: {},
BypassGovernanceRetentionAction: {},
PutBucketOwnershipControlsAction: {},
GetBucketOwnershipControlsAction: {},
PutBucketCorsAction: {},
GetBucketCorsAction: {},
PutAnalyticsConfigurationAction: {},
GetAnalyticsConfigurationAction: {},
PutEncryptionConfigurationAction: {},
GetEncryptionConfigurationAction: {},
PutIntelligentTieringConfigurationAction: {},
GetIntelligentTieringConfigurationAction: {},
PutInventoryConfigurationAction: {},
GetInventoryConfigurationAction: {},
PutLifecycleConfigurationAction: {},
GetLifecycleConfigurationAction: {},
PutBucketLoggingAction: {},
GetBucketLoggingAction: {},
PutBucketRequestPaymentAction: {},
GetBucketRequestPaymentAction: {},
PutMetricsConfigurationAction: {},
GetMetricsConfigurationAction: {},
PutReplicationConfigurationAction: {},
GetReplicationConfigurationAction: {},
PutBucketPublicAccessBlockAction: {},
GetBucketPublicAccessBlockAction: {},
PutBucketNotificationAction: {},
GetBucketNotificationAction: {},
PutAccelerateConfigurationAction: {},
GetAccelerateConfigurationAction: {},
PutBucketWebsiteAction: {},
GetBucketWebsiteAction: {},
GetBucketPolicyStatusAction: {},
GetBucketLocationAction: {},
AllActions: {},
}
var supportedObjectActionList = map[Action]struct{}{
@@ -137,55 +194,54 @@ func (a Action) IsValid() error {
return nil
}
if a[len(a)-1] == '*' {
pattern := strings.TrimSuffix(string(a), "*")
for act := range supportedActionList {
if strings.HasPrefix(string(act), pattern) {
return nil
}
// first check for an exact match
if _, ok := supportedActionList[a]; ok {
return nil
}
// walk through the supported actions and try wildcard match
for action := range supportedActionList {
if action.Match(a) {
return nil
}
return policyErrInvalidAction
}
_, found := supportedActionList[a]
if !found {
return policyErrInvalidAction
}
return nil
return policyErrInvalidAction
}
func getBoolPtr(bl bool) *bool {
return &bl
}
// String converts the action to string
func (a Action) String() string {
return string(a)
}
// Match reports whether the action matches the given wildcard pattern
func (a Action) Match(pattern Action) bool {
return matchPattern(pattern.String(), a.String())
}
// IsObjectAction checks if the action is an object-level action.
// A nil result corresponds to 's3:*' (all actions).
func (a Action) IsObjectAction() *bool {
if a == AllActions {
return nil
}
if a[len(a)-1] == '*' {
pattern := strings.TrimSuffix(string(a), "*")
for act := range supportedObjectActionList {
if strings.HasPrefix(string(act), pattern) {
return getBoolPtr(true)
}
// first find an exact match
if _, ok := supportedObjectActionList[a]; ok {
return &ok
}
for action := range supportedObjectActionList {
if action.Match(a) {
return getBoolPtr(true)
}
return getBoolPtr(false)
}
_, found := supportedObjectActionList[a]
return &found
}
func (a Action) WildCardMatch(act Action) bool {
if strings.HasSuffix(string(a), "*") {
pattern := strings.TrimSuffix(string(a), "*")
return strings.HasPrefix(string(act), pattern)
}
return false
return getBoolPtr(false)
}
type Actions map[Action]struct{}
@@ -234,6 +290,7 @@ func (a Actions) Add(str string) error {
return nil
}
// FindMatch tries to match the given action to the actions list
func (a Actions) FindMatch(action Action) bool {
_, ok := a[AllActions]
if ok {
@@ -245,8 +302,9 @@ func (a Actions) FindMatch(action Action) bool {
return true
}
// search for a wildcard match
for act := range a {
if strings.HasSuffix(string(act), "*") && act.WildCardMatch(action) {
if action.Match(act) {
return true
}
}
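A compressed usage sketch of the new matching path; the action test file added later in this change exercises the same cases, so the expected results below mirror that table:

	Action("s3:Get*").IsValid()                       // nil: wildcard patterns are validated against the supported-action list
	Action("s3:*Object?????").IsValid()               // policyErrInvalidAction: no supported action matches
	Actions{"s3:Get*": {}}.FindMatch(GetObjectAction) // true: wildcard entry matches via Action.Match
	Actions{"s3:Put*": {}}.FindMatch(GetObjectAction) // false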

View File

@@ -0,0 +1,175 @@
// Copyright 2023 Versity Software
// This file is licensed under the Apache License, Version 2.0
// (the "License"); you may not use this file except in compliance
// with the License. You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing,
// software distributed under the License is distributed on an
// "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
// KIND, either express or implied. See the License for the
// specific language governing permissions and limitations
// under the License.
package auth
import (
"encoding/json"
"testing"
"github.com/stretchr/testify/assert"
)
func TestAction_IsValid(t *testing.T) {
tests := []struct {
name string
action Action
wantErr bool
}{
{"valid exact action", GetObjectAction, false},
{"valid all actions", AllActions, false},
{"invalid prefix", "invalid:Action", true},
{"unsupported action 1", "s3:Unsupported", true},
{"unsupported action 2", "s3:HeadObject", true},
{"valid wildcard match 1", "s3:Get*", false},
{"valid wildcard match 2", "s3:*Object*", false},
{"valid wildcard match 3", "s3:*Multipart*", false},
{"any char match 1", "s3:Get?bject", false},
{"any char match 2", "s3:Get??bject", true},
{"any char match 3", "s3:???", true},
{"mixed match 1", "s3:Get?*", false},
{"mixed match 2", "s3:*Object?????", true},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
err := tt.action.IsValid()
if tt.wantErr {
assert.EqualValues(t, policyErrInvalidAction, err)
} else {
assert.NoError(t, err)
}
})
}
}
func TestAction_String(t *testing.T) {
a := Action("s3:TestAction")
assert.Equal(t, "s3:TestAction", a.String())
}
func TestAction_Match(t *testing.T) {
tests := []struct {
name string
action Action
pattern Action
want bool
}{
{"exact match", "s3:GetObject", "s3:GetObject", true},
{"wildcard match", "s3:GetObject", "s3:Get*", true},
{"wildcard mismatch", "s3:PutObject", "s3:Get*", false},
{"any character match", "s3:Get1", "s3:Get?", true},
{"any character mismatch", "s3:Get12", "s3:Get?", false},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
got := tt.action.Match(tt.pattern)
assert.Equal(t, tt.want, got)
})
}
}
func TestAction_IsObjectAction(t *testing.T) {
tests := []struct {
name string
action Action
want *bool
}{
{"all actions", AllActions, nil},
{"object action exact", GetObjectAction, getBoolPtr(true)},
{"object action wildcard", "s3:Get*", getBoolPtr(true)},
{"non object action", GetBucketAclAction, getBoolPtr(false)},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
got := tt.action.IsObjectAction()
if tt.want == nil {
assert.Nil(t, got)
} else {
assert.NotNil(t, got)
assert.Equal(t, *tt.want, *got)
}
})
}
}
func TestActions_UnmarshalJSON(t *testing.T) {
tests := []struct {
name string
input string
wantErr bool
}{
{"valid slice", `["s3:GetObject","s3:PutObject"]`, false},
{"empty slice", `[]`, true},
{"invalid action in slice", `["s3:Invalid"]`, true},
{"valid string", `"s3:GetObject"`, false},
{"empty string", `""`, true},
{"invalid string", `"s3:Invalid"`, true},
{"invalid json", `{}`, true},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
var a Actions
err := json.Unmarshal([]byte(tt.input), &a)
if tt.wantErr {
assert.Error(t, err)
} else {
assert.NoError(t, err)
}
})
}
}
func TestActions_Add(t *testing.T) {
tests := []struct {
name string
action string
wantErr bool
}{
{"valid add", "s3:GetObject", false},
{"invalid add", "s3:InvalidAction", true},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
a := make(Actions)
err := a.Add(tt.action)
if tt.wantErr {
assert.Error(t, err)
} else {
assert.NoError(t, err)
_, ok := a[Action(tt.action)]
assert.True(t, ok)
}
})
}
}
func TestActions_FindMatch(t *testing.T) {
tests := []struct {
name string
actions Actions
check Action
want bool
}{
{"all actions present", Actions{AllActions: {}}, GetObjectAction, true},
{"exact match", Actions{GetObjectAction: {}}, GetObjectAction, true},
{"wildcard match", Actions{"s3:Get*": {}}, GetObjectAction, true},
{"no match", Actions{"s3:Put*": {}}, GetObjectAction, false},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
got := tt.actions.FindMatch(tt.check)
assert.Equal(t, tt.want, got)
})
}
}

View File

@@ -0,0 +1,57 @@
// Copyright 2023 Versity Software
// This file is licensed under the Apache License, Version 2.0
// (the "License"); you may not use this file except in compliance
// with the License. You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing,
// software distributed under the License is distributed on an
// "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
// KIND, either express or implied. See the License for the
// specific language governing permissions and limitations
// under the License.
package auth
import (
"testing"
"github.com/stretchr/testify/assert"
)
func TestBucketPolicyAccessType_Validate(t *testing.T) {
tests := []struct {
name string
input BucketPolicyAccessType
wantErr bool
errMsg string
}{
{
name: "valid allow",
input: BucketPolicyAccessTypeAllow,
wantErr: false,
},
{
name: "valid deny",
input: BucketPolicyAccessTypeDeny,
wantErr: false,
},
{
name: "invalid type",
input: BucketPolicyAccessType("InvalidValue"),
wantErr: true,
errMsg: "Invalid effect: InvalidValue",
},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
err := tt.input.Validate()
if tt.wantErr {
assert.EqualError(t, err, tt.errMsg)
} else {
assert.NoError(t, err)
}
})
}
}

View File

@@ -124,7 +124,7 @@ func (p Principals) Contains(userAccess string) bool {
// A bucket policy grants public access if its principals
// contain a wildcard ("*") matching all users
func (p Principals) IsPublic() bool {
func (p Principals) isPublic() bool {
_, ok := p["*"]
return ok
}

View File

@@ -0,0 +1,106 @@
// Copyright 2023 Versity Software
// This file is licensed under the Apache License, Version 2.0
// (the "License"); you may not use this file except in compliance
// with the License. You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing,
// software distributed under the License is distributed on an
// "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
// KIND, either express or implied. See the License for the
// specific language governing permissions and limitations
// under the License.
package auth
import (
"encoding/json"
"testing"
"github.com/stretchr/testify/assert"
)
func TestPrincipals_Add(t *testing.T) {
p := make(Principals)
p.Add("user1")
_, ok := p["user1"]
assert.True(t, ok)
}
func TestPrincipals_UnmarshalJSON(t *testing.T) {
tests := []struct {
name string
input string
want Principals
wantErr bool
}{
{"valid slice", `["user1","user2"]`, Principals{"user1": {}, "user2": {}}, false},
{"empty slice", `[]`, nil, true},
{"valid string", `"user1"`, Principals{"user1": {}}, false},
{"empty string", `""`, nil, true},
{"valid AWS object", `{"AWS":"user1"}`, Principals{"user1": {}}, false},
{"empty AWS object", `{"AWS":""}`, nil, true},
{"valid AWS array", `{"AWS":["user1","user2"]}`, Principals{"user1": {}, "user2": {}}, false},
{"empty AWS array", `{"AWS":[]}`, nil, true},
{"invalid json", `{invalid}`, nil, true},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
var p Principals
err := json.Unmarshal([]byte(tt.input), &p)
if tt.wantErr {
assert.Error(t, err)
} else {
assert.NoError(t, err)
assert.Equal(t, tt.want, p)
}
})
}
}
func TestPrincipals_ToSlice(t *testing.T) {
p := Principals{"user1": {}, "user2": {}, "*": {}}
got := p.ToSlice()
assert.Contains(t, got, "user1")
assert.Contains(t, got, "user2")
assert.NotContains(t, got, "*")
}
func TestPrincipals_Validate(t *testing.T) {
iamSingle := NewIAMServiceSingle(Account{
Access: "user1",
})
tests := []struct {
name string
principals Principals
mockIAM IAMService
err error
}{
{"only wildcard", Principals{"*": {}}, iamSingle, nil},
{"wildcard and user", Principals{"*": {}, "user1": {}}, iamSingle, policyErrInvalidPrincipal},
{"accounts exist returns err", Principals{"user2": {}, "user3": {}}, iamSingle, policyErrInvalidPrincipal},
{"accounts exist non-empty", Principals{"user1": {}}, iamSingle, nil},
{"accounts valid", Principals{"user1": {}}, iamSingle, nil},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
err := tt.principals.Validate(tt.mockIAM)
assert.EqualValues(t, tt.err, err)
})
}
}
func TestPrincipals_Contains(t *testing.T) {
p := Principals{"user1": {}}
assert.True(t, p.Contains("user1"))
assert.False(t, p.Contains("user2"))
p = Principals{"*": {}}
assert.True(t, p.Contains("anyuser"))
}
func TestPrincipals_isPublic(t *testing.T) {
assert.True(t, Principals{"*": {}}.isPublic())
assert.False(t, Principals{"user1": {}}.isPublic())
}

View File

@@ -110,35 +110,9 @@ func (r Resources) FindMatch(resource string) bool {
return false
}
// Match checks if the input string matches the given pattern with wildcards (`*`, `?`).
// - `?` matches exactly one occurrence of any character.
// - `*` matches arbitrary many (including zero) occurrences of any character.
// Match matches the given input resource against the pattern
func (r Resources) Match(pattern, input string) bool {
pIdx, sIdx := 0, 0
starIdx, matchIdx := -1, 0
for sIdx < len(input) {
if pIdx < len(pattern) && (pattern[pIdx] == '?' || pattern[pIdx] == input[sIdx]) {
sIdx++
pIdx++
} else if pIdx < len(pattern) && pattern[pIdx] == '*' {
starIdx = pIdx
matchIdx = sIdx
pIdx++
} else if starIdx != -1 {
pIdx = starIdx + 1
matchIdx++
sIdx = matchIdx
} else {
return false
}
}
for pIdx < len(pattern) && pattern[pIdx] == '*' {
pIdx++
}
return pIdx == len(pattern)
return matchPattern(pattern, input)
}
// Checks the resource to have arn prefix and not starting with /

View File

@@ -121,6 +121,7 @@ type Opts struct {
LDAPGroupIdAtr string
VaultEndpointURL string
VaultSecretStoragePath string
VaultAuthMethod string
VaultMountPath string
VaultRootToken string
VaultRoleId string
@@ -134,7 +135,6 @@ type Opts struct {
S3Bucket string
S3Endpoint string
S3DisableSSlVerfiy bool
S3Debug bool
CacheDisable bool
CacheTTL int
CachePrune int
@@ -143,7 +143,6 @@ type Opts struct {
IpaUser string
IpaPassword string
IpaInsecure bool
IpaDebug bool
}
func New(o *Opts) (IAMService, error) {
@@ -161,16 +160,16 @@ func New(o *Opts) (IAMService, error) {
fmt.Printf("initializing LDAP IAM with %q\n", o.LDAPServerURL)
case o.S3Endpoint != "":
svc, err = NewS3(o.RootAccount, o.S3Access, o.S3Secret, o.S3Region, o.S3Bucket,
o.S3Endpoint, o.S3DisableSSlVerfiy, o.S3Debug)
o.S3Endpoint, o.S3DisableSSlVerfiy)
fmt.Printf("initializing S3 IAM with '%v/%v'\n",
o.S3Endpoint, o.S3Bucket)
case o.VaultEndpointURL != "":
svc, err = NewVaultIAMService(o.RootAccount, o.VaultEndpointURL, o.VaultSecretStoragePath,
o.VaultMountPath, o.VaultRootToken, o.VaultRoleId, o.VaultRoleSecret,
o.VaultAuthMethod, o.VaultMountPath, o.VaultRootToken, o.VaultRoleId, o.VaultRoleSecret,
o.VaultServerCert, o.VaultClientCert, o.VaultClientCertKey)
fmt.Printf("initializing Vault IAM with %q\n", o.VaultEndpointURL)
case o.IpaHost != "":
svc, err = NewIpaIAMService(o.RootAccount, o.IpaHost, o.IpaVaultName, o.IpaUser, o.IpaPassword, o.IpaInsecure, o.IpaDebug)
svc, err = NewIpaIAMService(o.RootAccount, o.IpaHost, o.IpaVaultName, o.IpaUser, o.IpaPassword, o.IpaInsecure)
fmt.Printf("initializing IPA IAM with %q\n", o.IpaHost)
default:
// if no iam options selected, default to the single user mode
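
For reference, a hedged sketch of how the new VaultAuthMethod option reaches the Vault constructor through this factory; the field names come from the Opts struct in this diff, while the endpoint, paths, and credentials are placeholders, and the quoted defaults ("approle", "kv-v2") are only those mentioned in the constructor comments:

	svc, err := auth.New(&auth.Opts{
		RootAccount:            auth.Account{Access: "rootAccess", Secret: "rootSecret"}, // placeholder
		VaultEndpointURL:       "https://vault.example.com:8200",                         // placeholder
		VaultSecretStoragePath: "versitygw-users",                                        // placeholder
		VaultAuthMethod:        "my-approle", // auth mount path; empty falls back to "approle"
		VaultMountPath:         "my-kv",      // KV v2 mount path; empty falls back to "kv-v2"
		VaultRoleId:            "role-id",     // placeholder
		VaultRoleSecret:        "role-secret", // placeholder
	})
	if err != nil {
		// handle initialization failure
	}
	_ = svc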

View File

@@ -26,13 +26,17 @@ import (
"errors"
"fmt"
"io"
"log"
"net"
"net/http"
"net/http/cookiejar"
"net/url"
"slices"
"strconv"
"strings"
"syscall"
"time"
"github.com/versity/versitygw/debuglogger"
)
const IpaVersion = "2.254"
@@ -46,13 +50,12 @@ type IpaIAMService struct {
username string
password string
kraTransportKey *rsa.PublicKey
debug bool
rootAcc Account
}
var _ IAMService = &IpaIAMService{}
func NewIpaIAMService(rootAcc Account, host, vaultName, username, password string, isInsecure, debug bool) (*IpaIAMService, error) {
func NewIpaIAMService(rootAcc Account, host, vaultName, username, password string, isInsecure bool) (*IpaIAMService, error) {
ipa := IpaIAMService{
id: 0,
version: IpaVersion,
@@ -60,7 +63,6 @@ func NewIpaIAMService(rootAcc Account, host, vaultName, username, password strin
vaultName: vaultName,
username: username,
password: password,
debug: debug,
rootAcc: rootAcc,
}
jar, err := cookiejar.New(nil)
@@ -221,6 +223,8 @@ func (ipa *IpaIAMService) Shutdown() error {
// Implementation
const requestRetries = 3
func (ipa *IpaIAMService) login() error {
form := url.Values{}
form.Set("user", ipa.username)
@@ -237,17 +241,33 @@ func (ipa *IpaIAMService) login() error {
req.Header.Set("referer", fmt.Sprintf("%s/ipa", ipa.host))
req.Header.Set("Content-Type", "application/x-www-form-urlencoded")
resp, err := ipa.client.Do(req)
if err != nil {
return err
var resp *http.Response
for i := range requestRetries {
resp, err = ipa.client.Do(req)
if err == nil {
break
}
// Check for transient network errors
if isRetryable(err) {
time.Sleep(time.Second * time.Duration(i+1))
continue
}
return fmt.Errorf("login POST to %s failed: %w", req.URL, err)
}
if err != nil {
return fmt.Errorf("login POST to %s failed after retries: %w",
req.URL, err)
}
defer resp.Body.Close()
if resp.StatusCode == 401 {
return errors.New("cannot login to FreeIPA: invalid credentials")
}
if resp.StatusCode != 200 {
return fmt.Errorf("cannot login to FreeIPA: status code %d", resp.StatusCode)
return fmt.Errorf("cannot login to FreeIPA: status code %d",
resp.StatusCode)
}
return nil
@@ -290,17 +310,34 @@ func (ipa *IpaIAMService) rpcInternal(req rpcRequest) (rpcResponse, error) {
return rpcResponse{}, err
}
ipa.log(fmt.Sprintf("%v", req))
debuglogger.IAMLogf("IPA request: %v", req)
httpReq.Header.Set("referer", fmt.Sprintf("%s/ipa", ipa.host))
httpReq.Header.Set("Content-Type", "application/json")
httpResp, err := ipa.client.Do(httpReq)
var httpResp *http.Response
for i := range requestRetries {
httpResp, err = ipa.client.Do(httpReq)
if err == nil {
break
}
// Check for transient network errors
if isRetryable(err) {
time.Sleep(time.Second * time.Duration(i+1))
continue
}
return rpcResponse{}, fmt.Errorf("ipa request to %s failed: %w",
httpReq.URL, err)
}
if err != nil {
return rpcResponse{}, err
return rpcResponse{},
fmt.Errorf("ipa request to %s failed after retries: %w",
httpReq.URL, err)
}
defer httpResp.Body.Close()
bytes, err := io.ReadAll(httpResp.Body)
ipa.log(string(bytes))
debuglogger.IAMLogf("IPA response (%v): %v", err, string(bytes))
if err != nil {
return rpcResponse{}, err
}
@@ -333,6 +370,30 @@ func (ipa *IpaIAMService) rpcInternal(req rpcRequest) (rpcResponse, error) {
}, nil
}
func isRetryable(err error) bool {
if err == nil {
return false
}
if errors.Is(err, io.EOF) {
return true
}
if err, ok := err.(net.Error); ok && err.Timeout() {
return true
}
if opErr, ok := err.(*net.OpError); ok {
if sysErr, ok := opErr.Err.(*syscall.Errno); ok {
if *sysErr == syscall.ECONNRESET {
return true
}
}
}
return false
}
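A hedged alternative sketch (not what this change ships): the *syscall.Errno assertion above only fires when the chain wraps a pointer to Errno, whereas errors.Is/errors.As walk wrapped errors such as *os.SyscallError regardless of how the errno is stored; all imports used here are already present in this file:

	func isRetryableAlt(err error) bool {
		if err == nil {
			return false
		}
		// covers io.EOF and connection resets anywhere in the wrapped chain
		if errors.Is(err, io.EOF) || errors.Is(err, syscall.ECONNRESET) {
			return true
		}
		// timeouts reported by any net.Error in the chain
		var netErr net.Error
		if errors.As(err, &netErr) && netErr.Timeout() {
			return true
		}
		return false
	}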
func (ipa *IpaIAMService) newRequest(method string, args []string, dict map[string]any) (rpcRequest, error) {
id := ipa.id
@@ -433,9 +494,3 @@ func (b *Base64Encoded) UnmarshalJSON(data []byte) error {
*b, err = base64.StdEncoding.DecodeString(intermediate)
return err
}
func (ipa *IpaIAMService) log(msg string) {
if ipa.debug {
log.Println(msg)
}
}

View File

@@ -18,8 +18,11 @@ import (
"fmt"
"strconv"
"strings"
"sync"
"github.com/davecgh/go-spew/spew"
"github.com/go-ldap/ldap/v3"
"github.com/versity/versitygw/debuglogger"
)
type LdapIAMService struct {
@@ -32,6 +35,10 @@ type LdapIAMService struct {
groupIdAtr string
userIdAtr string
rootAcc Account
url string
bindDN string
pass string
mu sync.Mutex
}
var _ IAMService = &LdapIAMService{}
@@ -60,9 +67,45 @@ func NewLDAPService(rootAcc Account, url, bindDN, pass, queryBase, accAtr, secAt
userIdAtr: userIdAtr,
groupIdAtr: groupIdAtr,
rootAcc: rootAcc,
url: url,
bindDN: bindDN,
pass: pass,
}, nil
}
func (ld *LdapIAMService) reconnect() error {
ld.conn.Close()
conn, err := ldap.DialURL(ld.url)
if err != nil {
return fmt.Errorf("failed to reconnect to LDAP server: %w", err)
}
err = conn.Bind(ld.bindDN, ld.pass)
if err != nil {
conn.Close()
return fmt.Errorf("failed to bind to LDAP server on reconnect: %w", err)
}
ld.conn = conn
return nil
}
func (ld *LdapIAMService) execute(f func(*ldap.Conn) error) error {
ld.mu.Lock()
defer ld.mu.Unlock()
err := f(ld.conn)
if err != nil {
if e, ok := err.(*ldap.Error); ok && e.ResultCode == ldap.ErrorNetwork {
if reconnErr := ld.reconnect(); reconnErr != nil {
return reconnErr
}
return f(ld.conn)
}
}
return err
}
func (ld *LdapIAMService) CreateAccount(account Account) error {
if ld.rootAcc.Access == account.Access {
return ErrUserExists
@@ -75,7 +118,9 @@ func (ld *LdapIAMService) CreateAccount(account Account) error {
userEntry.Attribute(ld.groupIdAtr, []string{fmt.Sprint(account.GroupID)})
userEntry.Attribute(ld.userIdAtr, []string{fmt.Sprint(account.UserID)})
err := ld.conn.Add(userEntry)
err := ld.execute(func(c *ldap.Conn) error {
return c.Add(userEntry)
})
if err != nil {
return fmt.Errorf("error adding an entry: %w", err)
}
@@ -83,10 +128,22 @@ func (ld *LdapIAMService) CreateAccount(account Account) error {
return nil
}
func (ld *LdapIAMService) buildSearchFilter(access string) string {
var searchFilter strings.Builder
for _, el := range ld.objClasses {
searchFilter.WriteString(fmt.Sprintf("(objectClass=%v)", el))
}
if access != "" {
searchFilter.WriteString(fmt.Sprintf("(%v=%v)", ld.accessAtr, access))
}
return fmt.Sprintf("(&%v)", searchFilter.String())
}
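For example (the new auth/iam_ldap_test.go in this change exercises the same cases):

	ld := &LdapIAMService{objClasses: []string{"inetOrgPerson"}, accessAtr: "uid"}
	ld.buildSearchFilter("alice") // "(&(objectClass=inetOrgPerson)(uid=alice))"
	ld.buildSearchFilter("")      // "(&(objectClass=inetOrgPerson))"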
func (ld *LdapIAMService) GetUserAccount(access string) (Account, error) {
if access == ld.rootAcc.Access {
return ld.rootAcc, nil
}
var result *ldap.SearchResult
searchRequest := ldap.NewSearchRequest(
ld.queryBase,
ldap.ScopeWholeSubtree,
@@ -94,12 +151,27 @@ func (ld *LdapIAMService) GetUserAccount(access string) (Account, error) {
0,
0,
false,
fmt.Sprintf("(%v=%v)", ld.accessAtr, access),
ld.buildSearchFilter(access),
[]string{ld.accessAtr, ld.secretAtr, ld.roleAtr, ld.userIdAtr, ld.groupIdAtr},
nil,
)
result, err := ld.conn.Search(searchRequest)
if debuglogger.IsIAMDebugEnabled() {
debuglogger.IAMLogf("LDAP Search Request")
debuglogger.IAMLogf(spew.Sdump(searchRequest))
}
err := ld.execute(func(c *ldap.Conn) error {
var err error
result, err = c.Search(searchRequest)
return err
})
if debuglogger.IsIAMDebugEnabled() {
debuglogger.IAMLogf("LDAP Search Result")
debuglogger.IAMLogf(spew.Sdump(result))
}
if err != nil {
return Account{}, err
}
@@ -143,7 +215,9 @@ func (ld *LdapIAMService) UpdateUserAccount(access string, props MutableProps) e
req.Replace(ld.roleAtr, []string{string(props.Role)})
}
err := ld.conn.Modify(req)
err := ld.execute(func(c *ldap.Conn) error {
return c.Modify(req)
})
//TODO: Handle the non-existent user case
if err != nil {
return err
@@ -154,7 +228,9 @@ func (ld *LdapIAMService) UpdateUserAccount(access string, props MutableProps) e
func (ld *LdapIAMService) DeleteUserAccount(access string) error {
delReq := ldap.NewDelRequest(fmt.Sprintf("%v=%v, %v", ld.accessAtr, access, ld.queryBase), nil)
err := ld.conn.Del(delReq)
err := ld.execute(func(c *ldap.Conn) error {
return c.Del(delReq)
})
if err != nil {
return err
}
@@ -163,10 +239,7 @@ func (ld *LdapIAMService) DeleteUserAccount(access string) error {
}
func (ld *LdapIAMService) ListUserAccounts() ([]Account, error) {
searchFilter := ""
for _, el := range ld.objClasses {
searchFilter += fmt.Sprintf("(objectClass=%v)", el)
}
var resp *ldap.SearchResult
searchRequest := ldap.NewSearchRequest(
ld.queryBase,
ldap.ScopeWholeSubtree,
@@ -174,12 +247,16 @@ func (ld *LdapIAMService) ListUserAccounts() ([]Account, error) {
0,
0,
false,
fmt.Sprintf("(&%v)", searchFilter),
ld.buildSearchFilter(""),
[]string{ld.accessAtr, ld.secretAtr, ld.roleAtr, ld.groupIdAtr, ld.userIdAtr},
nil,
)
resp, err := ld.conn.Search(searchRequest)
err := ld.execute(func(c *ldap.Conn) error {
var err error
resp, err = c.Search(searchRequest)
return err
})
if err != nil {
return nil, err
}
@@ -210,5 +287,7 @@ func (ld *LdapIAMService) ListUserAccounts() ([]Account, error) {
// Shutdown graceful termination of service
func (ld *LdapIAMService) Shutdown() error {
ld.mu.Lock()
defer ld.mu.Unlock()
return ld.conn.Close()
}

auth/iam_ldap_test.go (new file, 56 lines)
View File

@@ -0,0 +1,56 @@
package auth
import "testing"
func TestLdapIAMService_BuildSearchFilter(t *testing.T) {
tests := []struct {
name string
objClasses []string
accessAtr string
access string
expected string
}{
{
name: "single object class with access",
objClasses: []string{"inetOrgPerson"},
accessAtr: "uid",
access: "testuser",
expected: "(&(objectClass=inetOrgPerson)(uid=testuser))",
},
{
name: "single object class without access",
objClasses: []string{"inetOrgPerson"},
accessAtr: "uid",
access: "",
expected: "(&(objectClass=inetOrgPerson))",
},
{
name: "multiple object classes with access",
objClasses: []string{"inetOrgPerson", "organizationalPerson"},
accessAtr: "cn",
access: "john.doe",
expected: "(&(objectClass=inetOrgPerson)(objectClass=organizationalPerson)(cn=john.doe))",
},
{
name: "multiple object classes without access",
objClasses: []string{"inetOrgPerson", "organizationalPerson", "person"},
accessAtr: "cn",
access: "",
expected: "(&(objectClass=inetOrgPerson)(objectClass=organizationalPerson)(objectClass=person))",
},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
ld := &LdapIAMService{
objClasses: tt.objClasses,
accessAtr: tt.accessAtr,
}
result := ld.buildSearchFilter(tt.access)
if result != tt.expected {
t.Errorf("BuildSearchFilter() = %v, want %v", result, tt.expected)
}
})
}
}

View File

@@ -33,6 +33,7 @@ import (
"github.com/aws/aws-sdk-go-v2/service/s3"
"github.com/aws/aws-sdk-go-v2/service/s3/types"
"github.com/aws/smithy-go"
"github.com/versity/versitygw/debuglogger"
)
// IAMServiceS3 stores user accounts in an S3 object
@@ -56,14 +57,13 @@ type IAMServiceS3 struct {
bucket string
endpoint string
sslSkipVerify bool
debug bool
rootAcc Account
client *s3.Client
}
var _ IAMService = &IAMServiceS3{}
func NewS3(rootAcc Account, access, secret, region, bucket, endpoint string, sslSkipVerify, debug bool) (*IAMServiceS3, error) {
func NewS3(rootAcc Account, access, secret, region, bucket, endpoint string, sslSkipVerify bool) (*IAMServiceS3, error) {
if access == "" {
return nil, fmt.Errorf("must provide s3 IAM service access key")
}
@@ -87,7 +87,6 @@ func NewS3(rootAcc Account, access, secret, region, bucket, endpoint string, ssl
bucket: bucket,
endpoint: endpoint,
sslSkipVerify: sslSkipVerify,
debug: debug,
rootAcc: rootAcc,
}
@@ -235,7 +234,7 @@ func (s *IAMServiceS3) getConfig() (aws.Config, error) {
config.WithHTTPClient(client),
}
if s.debug {
if debuglogger.IsIAMDebugEnabled() {
opts = append(opts,
config.WithClientLogMode(aws.LogSigning|aws.LogRetries|aws.LogRequest|aws.LogResponse|aws.LogRequestEventMessage|aws.LogResponseEventMessage))
}

View File

@@ -19,6 +19,7 @@ import (
"encoding/json"
"errors"
"fmt"
"net/http"
"strings"
"time"
@@ -26,20 +27,25 @@ import (
"github.com/hashicorp/vault-client-go/schema"
)
const requestTimeout = 10 * time.Second
type VaultIAMService struct {
client *vault.Client
reqOpts []vault.RequestOption
authReqOpts []vault.RequestOption
kvReqOpts []vault.RequestOption
secretStoragePath string
rootAcc Account
creds schema.AppRoleLoginRequest
}
var _ IAMService = &VaultIAMService{}
func NewVaultIAMService(rootAcc Account, endpoint, secretStoragePath, mountPath, rootToken, roleID, roleSecret, serverCert, clientCert, clientCertKey string) (IAMService, error) {
func NewVaultIAMService(rootAcc Account, endpoint, secretStoragePath,
authMethod, mountPath, rootToken, roleID, roleSecret, serverCert,
clientCert, clientCertKey string) (IAMService, error) {
opts := []vault.ClientOption{
vault.WithAddress(endpoint),
// set request timeout to 10 secs
vault.WithRequestTimeout(10 * time.Second),
vault.WithRequestTimeout(requestTimeout),
}
if serverCert != "" {
tls := vault.TLSConfiguration{}
@@ -62,10 +68,21 @@ func NewVaultIAMService(rootAcc Account, endpoint, secretStoragePath, mountPath,
return nil, fmt.Errorf("init vault client: %w", err)
}
reqOpts := []vault.RequestOption{}
// if mount path is not specified, it defaults to "approle"
authReqOpts := []vault.RequestOption{}
// if auth method path is not specified, it defaults to "approle"
if authMethod != "" {
authReqOpts = append(authReqOpts, vault.WithMountPath(authMethod))
}
kvReqOpts := []vault.RequestOption{}
// if mount path is not specified, it defaults to "kv-v2"
if mountPath != "" {
reqOpts = append(reqOpts, vault.WithMountPath(mountPath))
kvReqOpts = append(kvReqOpts, vault.WithMountPath(mountPath))
}
creds := schema.AppRoleLoginRequest{
RoleId: roleID,
SecretId: roleSecret,
}
// Authentication
@@ -80,12 +97,8 @@ func NewVaultIAMService(rootAcc Account, endpoint, secretStoragePath, mountPath,
return nil, fmt.Errorf("role id and role secret must both be specified")
}
ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
resp, err := client.Auth.AppRoleLogin(ctx, schema.AppRoleLoginRequest{
RoleId: roleID,
SecretId: roleSecret,
}, reqOpts...)
cancel()
resp, err := client.Auth.AppRoleLogin(context.Background(),
creds, authReqOpts...)
if err != nil {
return nil, fmt.Errorf("approle authentication failure: %w", err)
}
@@ -99,33 +112,77 @@ func NewVaultIAMService(rootAcc Account, endpoint, secretStoragePath, mountPath,
return &VaultIAMService{
client: client,
reqOpts: reqOpts,
authReqOpts: authReqOpts,
kvReqOpts: kvReqOpts,
secretStoragePath: secretStoragePath,
rootAcc: rootAcc,
creds: creds,
}, nil
}
func (vt *VaultIAMService) reAuthIfNeeded(err error) error {
if err == nil {
return nil
}
// Vault returns 403 for expired/revoked tokens
// pass all other errors back unchanged
if !vault.IsErrorStatus(err, http.StatusForbidden) {
return err
}
resp, authErr := vt.client.Auth.AppRoleLogin(context.Background(),
vt.creds, vt.authReqOpts...)
if authErr != nil {
return fmt.Errorf("vault re-authentication failure: %w", authErr)
}
if err := vt.client.SetToken(resp.Auth.ClientToken); err != nil {
return fmt.Errorf("vault re-authentication set token failure: %w", err)
}
return nil
}
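The KV methods below each repeat the same call / reAuthIfNeeded / retry-once sequence; a hedged sketch of how that pattern could be factored into one helper (not part of this change), with any needed response captured through the closure. The 404 special case in ListUserAccounts would still need its own handling:

	func (vt *VaultIAMService) withReAuth(op func() error) error {
		err := op()
		if err == nil {
			return nil
		}
		// reAuthIfNeeded returns the original error unless it was a 403,
		// in which case it refreshes the token and returns nil
		if reauthErr := vt.reAuthIfNeeded(err); reauthErr != nil {
			return reauthErr
		}
		// retry exactly once with the fresh token
		return op()
	}

	// usage sketch, e.g. in DeleteUserAccount:
	err := vt.withReAuth(func() error {
		_, e := vt.client.Secrets.KvV2DeleteMetadataAndAllVersions(context.Background(),
			vt.secretStoragePath+"/"+access, vt.kvReqOpts...)
		return e
	})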
func (vt *VaultIAMService) CreateAccount(account Account) error {
if vt.rootAcc.Access == account.Access {
return ErrUserExists
}
ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
_, err := vt.client.Secrets.KvV2Write(ctx, vt.secretStoragePath+"/"+account.Access, schema.KvV2WriteRequest{
Data: map[string]any{
account.Access: account,
},
Options: map[string]interface{}{
"cas": 0,
},
}, vt.reqOpts...)
cancel()
_, err := vt.client.Secrets.KvV2Write(context.Background(),
vt.secretStoragePath+"/"+account.Access, schema.KvV2WriteRequest{
Data: map[string]any{
account.Access: account,
},
Options: map[string]any{
"cas": 0,
},
}, vt.kvReqOpts...)
if err != nil {
if strings.Contains(err.Error(), "check-and-set") {
return ErrUserExists
}
return err
}
reauthErr := vt.reAuthIfNeeded(err)
if reauthErr != nil {
return reauthErr
}
// retry once after re-auth
_, err = vt.client.Secrets.KvV2Write(context.Background(),
vt.secretStoragePath+"/"+account.Access, schema.KvV2WriteRequest{
Data: map[string]any{
account.Access: account,
},
Options: map[string]any{
"cas": 0,
},
}, vt.kvReqOpts...)
if err != nil {
if strings.Contains(err.Error(), "check-and-set") {
return ErrUserExists
}
return err
}
return nil
}
return nil
}
@@ -133,66 +190,84 @@ func (vt *VaultIAMService) GetUserAccount(access string) (Account, error) {
if vt.rootAcc.Access == access {
return vt.rootAcc, nil
}
ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
resp, err := vt.client.Secrets.KvV2Read(ctx, vt.secretStoragePath+"/"+access, vt.reqOpts...)
cancel()
resp, err := vt.client.Secrets.KvV2Read(context.Background(),
vt.secretStoragePath+"/"+access, vt.kvReqOpts...)
if err != nil {
return Account{}, err
reauthErr := vt.reAuthIfNeeded(err)
if reauthErr != nil {
return Account{}, reauthErr
}
// retry once after re-auth
resp, err = vt.client.Secrets.KvV2Read(context.Background(),
vt.secretStoragePath+"/"+access, vt.kvReqOpts...)
if err != nil {
return Account{}, err
}
}
acc, err := parseVaultUserAccount(resp.Data.Data, access)
if err != nil {
return Account{}, err
}
return acc, nil
}
func (vt *VaultIAMService) UpdateUserAccount(access string, props MutableProps) error {
//TODO: We may need something like a transaction here
acc, err := vt.GetUserAccount(access)
if err != nil {
return err
}
updateAcc(&acc, props)
err = vt.DeleteUserAccount(access)
if err != nil {
return err
}
err = vt.CreateAccount(acc)
if err != nil {
return err
}
return nil
}
func (vt *VaultIAMService) DeleteUserAccount(access string) error {
ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
_, err := vt.client.Secrets.KvV2DeleteMetadataAndAllVersions(ctx, vt.secretStoragePath+"/"+access, vt.reqOpts...)
cancel()
_, err := vt.client.Secrets.KvV2DeleteMetadataAndAllVersions(context.Background(),
vt.secretStoragePath+"/"+access, vt.kvReqOpts...)
if err != nil {
return err
reauthErr := vt.reAuthIfNeeded(err)
if reauthErr != nil {
return reauthErr
}
// retry once after re-auth
_, err = vt.client.Secrets.KvV2DeleteMetadataAndAllVersions(context.Background(),
vt.secretStoragePath+"/"+access, vt.kvReqOpts...)
if err != nil {
return err
}
}
return nil
}
func (vt *VaultIAMService) ListUserAccounts() ([]Account, error) {
ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
resp, err := vt.client.Secrets.KvV2List(ctx, vt.secretStoragePath, vt.reqOpts...)
cancel()
resp, err := vt.client.Secrets.KvV2List(context.Background(),
vt.secretStoragePath, vt.kvReqOpts...)
if err != nil {
if vault.IsErrorStatus(err, 404) {
return []Account{}, nil
reauthErr := vt.reAuthIfNeeded(err)
if reauthErr != nil {
if vault.IsErrorStatus(err, http.StatusNotFound) {
return []Account{}, nil
}
return nil, reauthErr
}
// retry once after re-auth
resp, err = vt.client.Secrets.KvV2List(context.Background(),
vt.secretStoragePath, vt.kvReqOpts...)
if err != nil {
if vault.IsErrorStatus(err, http.StatusNotFound) {
return []Account{}, nil
}
return nil, err
}
return nil, err
}
accs := []Account{}
for _, acss := range resp.Data.Keys {
acc, err := vt.GetUserAccount(acss)
if err != nil {
@@ -200,7 +275,6 @@ func (vt *VaultIAMService) ListUserAccounts() ([]Account, error) {
}
accs = append(accs, acc)
}
return accs, nil
}
@@ -211,8 +285,8 @@ func (vt *VaultIAMService) Shutdown() error {
var errInvalidUser error = errors.New("invalid user account entry in secrets engine")
func parseVaultUserAccount(data map[string]interface{}, access string) (acc Account, err error) {
usrAcc, ok := data[access].(map[string]interface{})
func parseVaultUserAccount(data map[string]any, access string) (acc Account, err error) {
usrAcc, ok := data[access].(map[string]any)
if !ok {
return acc, errInvalidUser
}

View File

@@ -60,6 +60,7 @@ const (
keyOwnership key = "Ownership"
keyTags key = "Tags"
keyPolicy key = "Policy"
keyCors key = "Cors"
keyBucketLock key = "Bucketlock"
keyObjRetention key = "Objectretention"
keyObjLegalHold key = "Objectlegalhold"
@@ -67,6 +68,8 @@ const (
onameAttr key = "Objname"
onameAttrLower key = "objname"
metaTmpMultipartPrefix key = ".sgwtmp" + "/multipart"
defaultListingMaxKeys = 1000
)
func (key) Table() map[string]struct{} {
@@ -298,6 +301,11 @@ func (az *Azure) PutObject(ctx context.Context, po s3response.PutObjectInput) (s
return s3response.PutObjectOutput{}, err
}
err = az.evaluateWritePreconditions(ctx, po.Bucket, po.Key, po.IfMatch, po.IfNoneMatch)
if err != nil {
return s3response.PutObjectOutput{}, err
}
metadata := parseMetadata(po.Metadata)
// Store the "Expires" property in the object metadata
@@ -363,7 +371,8 @@ func (az *Azure) PutObject(ctx context.Context, po s3response.PutObjectInput) (s
}
return s3response.PutObjectOutput{
ETag: string(*uploadResp.ETag),
ETag: convertAzureEtag(uploadResp.ETag),
Size: po.ContentLength,
}, nil
}
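
convertAzureEtag is called throughout this file but its definition does not appear in the hunks shown here; judging from the expressions it replaces (string(*uploadResp.ETag) and fmt.Sprintf("%q", *v.Properties.ETag)), it presumably normalizes a *azcore.ETag into the quoted string form returned to S3 clients. A purely hypothetical sketch, not the actual implementation:

	func convertAzureEtagSketch(etag *azcore.ETag) string {
		if etag == nil {
			return ""
		}
		// normalize to a single pair of surrounding quotes,
		// whether or not the Azure value already carries them
		s := strings.Trim(string(*etag), `"`)
		return `"` + s + `"`
	}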
@@ -404,6 +413,11 @@ func (az *Azure) DeleteBucketTagging(ctx context.Context, bucket string) error {
}
func (az *Azure) GetObject(ctx context.Context, input *s3.GetObjectInput) (*s3.GetObjectOutput, error) {
if input.PartNumber != nil {
// querying an object with part number is not supported
return nil, s3err.GetAPIError(s3err.ErrNotImplemented)
}
client, err := az.getBlobClient(*input.Bucket, *input.Key)
if err != nil {
return nil, err
@@ -414,6 +428,19 @@ func (az *Azure) GetObject(ctx context.Context, input *s3.GetObjectInput) (*s3.G
return nil, azureErrToS3Err(err)
}
if resp.ETag != nil && resp.LastModified != nil {
err = backend.EvaluatePreconditions(convertAzureEtag(resp.ETag), *resp.LastModified,
backend.PreConditions{
IfMatch: input.IfMatch,
IfNoneMatch: input.IfNoneMatch,
IfModSince: input.IfModifiedSince,
IfUnmodeSince: input.IfUnmodifiedSince,
})
if err != nil {
return nil, err
}
}
var opts *azblob.DownloadStreamOptions
if *input.Range != "" {
offset, count, isValid, err := backend.ParseObjectRange(*resp.ContentLength, *input.Range)
@@ -453,7 +480,7 @@ func (az *Azure) GetObject(ctx context.Context, input *s3.GetObjectInput) (*s3.G
ContentLanguage: blobDownloadResponse.ContentLanguage,
CacheControl: blobDownloadResponse.CacheControl,
ExpiresString: blobDownloadResponse.Metadata[string(keyExpires)],
ETag: (*string)(blobDownloadResponse.ETag),
ETag: backend.GetPtrFromString(convertAzureEtag(blobDownloadResponse.ETag)),
LastModified: blobDownloadResponse.LastModified,
Metadata: parseAndFilterAzMetadata(blobDownloadResponse.Metadata),
TagCount: &tagcount,
@@ -465,35 +492,8 @@ func (az *Azure) GetObject(ctx context.Context, input *s3.GetObjectInput) (*s3.G
func (az *Azure) HeadObject(ctx context.Context, input *s3.HeadObjectInput) (*s3.HeadObjectOutput, error) {
if input.PartNumber != nil {
client, err := az.getBlockBlobClient(*input.Bucket, *input.Key)
if err != nil {
return nil, err
}
res, err := client.GetBlockList(ctx, blockblob.BlockListTypeUncommitted, nil)
if err != nil {
return nil, azureErrToS3Err(err)
}
partsCount := int32(len(res.UncommittedBlocks))
for _, block := range res.UncommittedBlocks {
partNumber, err := decodeBlockId(*block.Name)
if err != nil {
return nil, err
}
if partNumber == int(*input.PartNumber) {
return &s3.HeadObjectOutput{
ContentLength: block.Size,
ETag: block.Name,
PartsCount: &partsCount,
StorageClass: types.StorageClassStandard,
}, nil
}
}
return nil, s3err.GetAPIError(s3err.ErrNoSuchKey)
// querying an object with part number is not supported
return nil, s3err.GetAPIError(s3err.ErrNotImplemented)
}
client, err := az.getBlobClient(*input.Bucket, *input.Key)
@@ -505,6 +505,20 @@ func (az *Azure) HeadObject(ctx context.Context, input *s3.HeadObjectInput) (*s3
if err != nil {
return nil, azureErrToS3Err(err)
}
if resp.ETag != nil && resp.LastModified != nil {
err = backend.EvaluatePreconditions(convertAzureEtag(resp.ETag), *resp.LastModified,
backend.PreConditions{
IfMatch: input.IfMatch,
IfNoneMatch: input.IfNoneMatch,
IfModSince: input.IfModifiedSince,
IfUnmodeSince: input.IfUnmodifiedSince,
})
if err != nil {
return nil, err
}
}
var size int64
if resp.ContentLength != nil {
size = *resp.ContentLength
@@ -531,7 +545,7 @@ func (az *Azure) HeadObject(ctx context.Context, input *s3.HeadObjectInput) (*s3
ContentDisposition: resp.ContentDisposition,
CacheControl: resp.CacheControl,
ExpiresString: resp.Metadata[string(keyExpires)],
ETag: (*string)(resp.ETag),
ETag: backend.GetPtrFromString(convertAzureEtag(resp.ETag)),
LastModified: resp.LastModified,
Metadata: parseAndFilterAzMetadata(resp.Metadata),
StorageClass: types.StorageClassStandard,
@@ -568,7 +582,7 @@ func (az *Azure) GetObjectAttributes(ctx context.Context, input *s3.GetObjectAtt
}
return s3response.GetObjectAttributesResponse{
ETag: backend.TrimEtag(data.ETag),
ETag: data.ETag,
ObjectSize: data.ContentLength,
StorageClass: data.StorageClass,
LastModified: data.LastModified,
@@ -578,26 +592,6 @@ func (az *Azure) GetObjectAttributes(ctx context.Context, input *s3.GetObjectAtt
}
func (az *Azure) ListObjects(ctx context.Context, input *s3.ListObjectsInput) (s3response.ListObjectsResult, error) {
client, err := az.getContainerClient(*input.Bucket)
if err != nil {
return s3response.ListObjectsResult{}, nil
}
pager := client.NewListBlobsHierarchyPager(*input.Delimiter, &container.ListBlobsHierarchyOptions{
Marker: input.Marker,
MaxResults: input.MaxKeys,
Prefix: input.Prefix,
})
var objects []s3response.Object
var cPrefixes []types.CommonPrefix
var nextMarker *string
var isTruncated bool
var maxKeys int32 = math.MaxInt32
if input.MaxKeys != nil {
maxKeys = *input.MaxKeys
}
// Retrieve the bucket acl to get the bucket owner
// All the objects in the bucket are owned by the bucket owner
aclBytes, err := az.getContainerMetaData(ctx, *input.Bucket, string(keyAclCapital))
@@ -610,20 +604,50 @@ func (az *Azure) ListObjects(ctx context.Context, input *s3.ListObjectsInput) (s
return s3response.ListObjectsResult{}, err
}
Pager:
for pager.More() {
client, err := az.getContainerClient(*input.Bucket)
if err != nil {
return s3response.ListObjectsResult{}, nil
}
var maxKeys int32 = defaultListingMaxKeys
if input.MaxKeys != nil {
maxKeys = *input.MaxKeys
}
pager := client.NewListBlobsHierarchyPager(*input.Delimiter, &container.ListBlobsHierarchyOptions{
MaxResults: &maxKeys,
Prefix: input.Prefix,
})
var objects []s3response.Object
var cPrefixes []types.CommonPrefix
var nextMarker *string
var isTruncated bool
// Convert marker to filter criteria
var markerFilter string
if input.Marker != nil && *input.Marker != "" {
markerFilter = *input.Marker
}
// Loop through pages until we have enough objects or no more pages
objectsFound := int32(0)
for pager.More() && objectsFound < maxKeys {
resp, err := pager.NextPage(ctx)
if err != nil {
return s3response.ListObjectsResult{}, azureErrToS3Err(err)
}
// Process objects from this page
var pageObjects []s3response.Object
for _, v := range resp.Segment.BlobItems {
if len(objects)+len(cPrefixes) >= int(maxKeys) {
nextMarker = objects[len(objects)-1].Key
isTruncated = true
break Pager
// Skip objects that come before or equal to marker
if markerFilter != "" && *v.Name <= markerFilter {
continue
}
objects = append(objects, s3response.Object{
ETag: backend.GetPtrFromString(fmt.Sprintf("%q", *v.Properties.ETag)),
pageObjects = append(pageObjects, s3response.Object{
ETag: backend.GetPtrFromString(convertAzureEtag(v.Properties.ETag)),
Key: v.Name,
LastModified: v.Properties.LastModified,
Size: v.Properties.ContentLength,
@@ -632,20 +656,22 @@ Pager:
ID: &acl.Owner,
},
})
}
for _, v := range resp.Segment.BlobPrefixes {
if *v.Name <= *input.Marker {
continue
}
if len(objects)+len(cPrefixes) >= int(maxKeys) {
nextMarker = cPrefixes[len(cPrefixes)-1].Prefix
isTruncated = true
break Pager
}
marker := getString(input.Marker)
pfx := strings.TrimSuffix(*v.Name, getString(input.Delimiter))
if marker != "" && strings.HasPrefix(marker, pfx) {
objectsFound++
if objectsFound >= maxKeys {
// Set next marker to the current object name for pagination
nextMarker = v.Name
isTruncated = true
break
}
}
objects = append(objects, pageObjects...)
// Process common prefixes from this page
for _, v := range resp.Segment.BlobPrefixes {
// Skip prefixes that come before or equal to marker
if markerFilter != "" && *v.Name <= markerFilter {
continue
}
@@ -653,6 +679,16 @@ Pager:
Prefix: v.Name,
})
}
// If we've reached maxKeys, break
if objectsFound >= maxKeys {
break
}
// If Azure indicates more pages but we need to continue for more objects
if resp.NextMarker != nil && *resp.NextMarker != "" && objectsFound < maxKeys {
continue
}
}
return s3response.ListObjectsResult{
@@ -669,98 +705,104 @@ Pager:
}
func (az *Azure) ListObjectsV2(ctx context.Context, input *s3.ListObjectsV2Input) (s3response.ListObjectsV2Result, error) {
marker := ""
if *input.ContinuationToken > *input.StartAfter {
marker = *input.ContinuationToken
} else {
marker = *input.StartAfter
// Retrieve the bucket acl to get the bucket owner
// All the objects in the bucket are owned by the bucket owner
aclBytes, err := az.getContainerMetaData(ctx, *input.Bucket, string(keyAclCapital))
if err != nil {
return s3response.ListObjectsV2Result{}, azureErrToS3Err(err)
}
acl, err := auth.ParseACL(aclBytes)
if err != nil {
return s3response.ListObjectsV2Result{}, err
}
client, err := az.getContainerClient(*input.Bucket)
if err != nil {
return s3response.ListObjectsV2Result{}, nil
}
var maxKeys int32 = defaultListingMaxKeys
if input.MaxKeys != nil {
maxKeys = *input.MaxKeys
}
pager := client.NewListBlobsHierarchyPager(*input.Delimiter, &container.ListBlobsHierarchyOptions{
Marker: &marker,
MaxResults: input.MaxKeys,
Marker: input.ContinuationToken,
MaxResults: &maxKeys,
Prefix: input.Prefix,
})
var objects []s3response.Object
var cPrefixes []types.CommonPrefix
var nextMarker *string
var isTruncated bool
var maxKeys int32 = math.MaxInt32
var fetchOwner bool
var resp container.ListBlobsHierarchyResponse
if input.MaxKeys != nil {
maxKeys = *input.MaxKeys
}
if input.FetchOwner != nil {
fetchOwner = *input.FetchOwner
}
// Retrieve the bucket acl to get the bucket owner, if "fetchOwner" is true
// All the objects in the bucket are owner by the bucket owner
var acl auth.ACL
if fetchOwner {
aclBytes, err := az.getContainerMetaData(ctx, *input.Bucket, string(keyAclCapital))
// Loop through pages until we find objects or no more pages
for {
resp, err = pager.NextPage(ctx)
if err != nil {
return s3response.ListObjectsV2Result{}, azureErrToS3Err(err)
}
acl, err = auth.ParseACL(aclBytes)
if err != nil {
return s3response.ListObjectsV2Result{}, err
}
}
Pager:
for pager.More() {
resp, err := pager.NextPage(ctx)
if err != nil {
return s3response.ListObjectsV2Result{}, azureErrToS3Err(err)
}
// Convert Azure objects to S3 objects
var pageObjects []s3response.Object
for _, v := range resp.Segment.BlobItems {
if len(objects)+len(cPrefixes) >= int(maxKeys) {
nextMarker = objects[len(objects)-1].Key
isTruncated = true
break Pager
}
obj := s3response.Object{
ETag: backend.GetPtrFromString(fmt.Sprintf("%q", *v.Properties.ETag)),
pageObjects = append(pageObjects, s3response.Object{
ETag: backend.GetPtrFromString(convertAzureEtag(v.Properties.ETag)),
Key: v.Name,
LastModified: v.Properties.LastModified,
Size: v.Properties.ContentLength,
StorageClass: types.ObjectStorageClassStandard,
}
if fetchOwner {
obj.Owner = &types.Owner{
Owner: &types.Owner{
ID: &acl.Owner,
}
}
objects = append(objects, obj)
}
for _, v := range resp.Segment.BlobPrefixes {
if *v.Name <= marker {
continue
}
if len(objects)+len(cPrefixes) >= int(maxKeys) {
nextMarker = cPrefixes[len(cPrefixes)-1].Prefix
isTruncated = true
break Pager
}
marker := getString(input.ContinuationToken)
pfx := strings.TrimSuffix(*v.Name, getString(input.Delimiter))
if marker != "" && strings.HasPrefix(marker, pfx) {
continue
}
cPrefixes = append(cPrefixes, types.CommonPrefix{
Prefix: v.Name,
},
})
}
// If StartAfter is specified, filter objects
if input.StartAfter != nil && *input.StartAfter != "" {
startAfter := *input.StartAfter
startIndex := -1
for i, obj := range pageObjects {
if *obj.Key > startAfter {
startIndex = i
break
}
}
if startIndex != -1 {
// Found objects after StartAfter in this page
objects = append(objects, pageObjects[startIndex:]...)
break
} else {
// No objects after StartAfter in this page
// Check if there are more pages to examine
if resp.NextMarker == nil || *resp.NextMarker == "" {
// No more pages, so no objects after StartAfter
break
}
// Continue to next page without adding any objects
continue
}
} else {
// No StartAfter specified, add all objects from this page
objects = append(objects, pageObjects...)
break
}
}
var cPrefixes []types.CommonPrefix
for _, v := range resp.Segment.BlobPrefixes {
cPrefixes = append(cPrefixes, types.CommonPrefix{
Prefix: v.Name,
})
}
var isTruncated bool
var nextMarker *string
// If Azure returned a NextMarker, set it for the next request
if resp.NextMarker != nil && *resp.NextMarker != "" {
nextMarker = resp.NextMarker
isTruncated = true
}
return s3response.ListObjectsV2Result{
@@ -778,6 +820,42 @@ Pager:
}
func (az *Azure) DeleteObject(ctx context.Context, input *s3.DeleteObjectInput) (*s3.DeleteObjectOutput, error) {
if input.IfMatch != nil || input.IfMatchLastModifiedTime != nil || input.IfMatchSize != nil {
// evaluate the preconditions before deleting the object
props, err := az.HeadObject(ctx, &s3.HeadObjectInput{
Bucket: input.Bucket,
Key: input.Key,
})
if err != nil && !errors.Is(err, s3err.GetAPIError(s3err.ErrNoSuchKey)) {
// if object doesn't exist, skip preconditions
// if unexpected error shows up, return the error
return nil, err
}
if err == nil {
var etag string
if props.ETag != nil {
etag = *props.ETag
}
var lastMod time.Time
if props.LastModified != nil {
lastMod = *props.LastModified
}
var size int64
if props.ContentLength != nil {
size = *props.ContentLength
}
err := backend.EvaluateObjectDeletePreconditions(etag, lastMod, size,
backend.ObjectDeletePreconditions{
IfMatch: input.IfMatch,
IfMatchLastModTime: input.IfMatchLastModifiedTime,
IfMatchSize: input.IfMatchSize,
})
if err != nil {
return nil, err
}
}
}
_, err := az.client.DeleteBlob(ctx, *input.Bucket, *input.Key, nil)
if err != nil {
azerr, ok := err.(*azcore.ResponseError)
@@ -827,6 +905,26 @@ func (az *Azure) CopyObject(ctx context.Context, input s3response.CopyObjectInpu
if err != nil {
return s3response.CopyObjectOutput{}, err
}
srcBucket, srcObj, _, err := backend.ParseCopySource(*input.CopySource)
if err != nil {
return s3response.CopyObjectOutput{}, err
}
if !areNils(input.CopySourceIfMatch, input.CopySourceIfNoneMatch) || !areNils(input.CopySourceIfModifiedSince, input.CopySourceIfUnmodifiedSince) {
_, err = az.HeadObject(ctx, &s3.HeadObjectInput{
Bucket: &srcBucket,
Key: &srcObj,
IfMatch: input.CopySourceIfMatch,
IfNoneMatch: input.CopySourceIfNoneMatch,
IfModifiedSince: input.CopySourceIfModifiedSince,
IfUnmodifiedSince: input.CopySourceIfUnmodifiedSince,
})
if err != nil {
return s3response.CopyObjectOutput{}, err
}
}
if strings.Join([]string{*input.Bucket, *input.Key}, "/") == *input.CopySource {
if input.MetadataDirective != types.MetadataDirectiveReplace {
return s3response.CopyObjectOutput{}, s3err.GetAPIError(s3err.ErrInvalidCopyDest)
@@ -900,16 +998,11 @@ func (az *Azure) CopyObject(ctx context.Context, input s3response.CopyObjectInpu
return s3response.CopyObjectOutput{
CopyObjectResult: &s3response.CopyObjectResult{
LastModified: res.LastModified,
ETag: (*string)(res.ETag),
ETag: backend.GetPtrFromString(convertAzureEtag(res.ETag)),
},
}, nil
}
srcBucket, srcObj, _, err := backend.ParseCopySource(*input.CopySource)
if err != nil {
return s3response.CopyObjectOutput{}, err
}
// Get the source object
downloadResp, err := az.client.DownloadStream(ctx, srcBucket, srcObj, nil)
if err != nil {
@@ -1138,7 +1231,7 @@ func (az *Azure) UploadPart(ctx context.Context, input *s3.UploadPartInput) (*s3
func (az *Azure) UploadPartCopy(ctx context.Context, input *s3.UploadPartCopyInput) (s3response.CopyPartResult, error) {
client, err := az.getBlockBlobClient(*input.Bucket, *input.Key)
if err != nil {
return s3response.CopyPartResult{}, nil
return s3response.CopyPartResult{}, err
}
if err := az.checkIfMpExists(ctx, *input.Bucket, *input.Key, *input.UploadId); err != nil {
@@ -1341,6 +1434,22 @@ func (az *Azure) ListMultipartUploads(ctx context.Context, input *s3.ListMultipa
// Cleans up the initiated multipart upload in .sgwtmp namespace
func (az *Azure) AbortMultipartUpload(ctx context.Context, input *s3.AbortMultipartUploadInput) error {
tmpPath := createMetaTmpPath(*input.Key, *input.UploadId)
if input.IfMatchInitiatedTime != nil {
client, err := az.getBlobClient(*input.Bucket, tmpPath)
if err != nil {
return err
}
resp, err := client.GetProperties(ctx, nil)
if err != nil {
return azureErrToS3Err(err)
}
if resp.LastModified != nil && resp.LastModified.Unix() != input.IfMatchInitiatedTime.Unix() {
return s3err.GetAPIError(s3err.ErrPreconditionFailed)
}
}
_, err := az.client.DeleteBlob(ctx, *input.Bucket, tmpPath, nil)
if err != nil {
return parseMpError(err)
@@ -1368,6 +1477,11 @@ func (az *Azure) AbortMultipartUpload(ctx context.Context, input *s3.AbortMultip
func (az *Azure) CompleteMultipartUpload(ctx context.Context, input *s3.CompleteMultipartUploadInput) (s3response.CompleteMultipartUploadResult, string, error) {
var res s3response.CompleteMultipartUploadResult
err := az.evaluateWritePreconditions(ctx, input.Bucket, input.Key, input.IfMatch, input.IfNoneMatch)
if err != nil {
return s3response.CompleteMultipartUploadResult{}, "", err
}
tmpPath := createMetaTmpPath(*input.Key, *input.UploadId)
blobClient, err := az.getBlobClient(*input.Bucket, tmpPath)
if err != nil {
@@ -1471,7 +1585,7 @@ func (az *Azure) CompleteMultipartUpload(ctx context.Context, input *s3.Complete
return s3response.CompleteMultipartUploadResult{
Bucket: input.Bucket,
Key: input.Key,
ETag: (*string)(resp.ETag),
ETag: backend.GetPtrFromString(convertAzureEtag(resp.ETag)),
}, "", nil
}
@@ -1506,6 +1620,29 @@ func (az *Azure) DeleteBucketPolicy(ctx context.Context, bucket string) error {
return az.PutBucketPolicy(ctx, bucket, nil)
}
func (az *Azure) PutBucketCors(ctx context.Context, bucket string, cors []byte) error {
if cors == nil {
return az.deleteContainerMetaData(ctx, bucket, string(keyCors))
}
return az.setContainerMetaData(ctx, bucket, string(keyCors), cors)
}
func (az *Azure) GetBucketCors(ctx context.Context, bucket string) ([]byte, error) {
p, err := az.getContainerMetaData(ctx, bucket, string(keyCors))
if err != nil {
return nil, err
}
if len(p) == 0 {
return nil, s3err.GetAPIError(s3err.ErrNoSuchCORSConfiguration)
}
return p, nil
}
func (az *Azure) DeleteBucketCors(ctx context.Context, bucket string) error {
return az.PutBucketCors(ctx, bucket, nil)
}
func (az *Azure) PutObjectLockConfiguration(ctx context.Context, bucket string, config []byte) error {
cfg, err := az.getContainerMetaData(ctx, bucket, string(keyBucketLock))
if err != nil {
@@ -1683,8 +1820,8 @@ func (az *Azure) GetObjectLegalHold(ctx context.Context, bucket, object, version
return &status, nil
}
func (az *Azure) ChangeBucketOwner(ctx context.Context, bucket string, acl []byte) error {
return az.PutBucketAcl(ctx, bucket, acl)
func (az *Azure) ChangeBucketOwner(ctx context.Context, bucket, owner string) error {
return auth.UpdateBucketACLOwner(ctx, az, bucket, owner)
}
// The action actually returns the containers owned by the user who initialized the gateway
@@ -1963,6 +2100,29 @@ func (az *Azure) deleteContainerMetaData(ctx context.Context, bucket, key string
return nil
}
func (az *Azure) evaluateWritePreconditions(ctx context.Context, bucket, object, ifMatch, ifNoneMatch *string) error {
if areNils(ifMatch, ifNoneMatch) {
return nil
}
	// call HeadObject to evaluate the preconditions
	// if the object doesn't exist, move forward with the object creation
	// otherwise return the error
_, err := az.HeadObject(ctx, &s3.HeadObjectInput{
Bucket: bucket,
Key: object,
IfMatch: ifMatch,
IfNoneMatch: ifNoneMatch,
})
if errors.Is(err, s3err.GetAPIError(s3err.ErrNotModified)) {
return s3err.GetAPIError(s3err.ErrPreconditionFailed)
}
if err != nil && !errors.Is(err, s3err.GetAPIError(s3err.ErrNoSuchKey)) {
return err
}
return nil
}
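For readers unfamiliar with S3 conditional writes, here is a minimal client-side sketch of the behavior this helper enforces. The bucket and key names are hypothetical, and it assumes an SDK version whose PutObjectInput exposes IfNoneMatch: a PutObject with If-None-Match: "*" should succeed only when the key does not already exist, otherwise the gateway returns PreconditionFailed.

```go
package main

import (
	"context"
	"log"
	"strings"

	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/config"
	"github.com/aws/aws-sdk-go-v2/service/s3"
)

func main() {
	ctx := context.Background()
	cfg, err := config.LoadDefaultConfig(ctx)
	if err != nil {
		log.Fatal(err)
	}
	client := s3.NewFromConfig(cfg)

	// Conditional create: only write the object if the key does not exist yet.
	// If it already exists, the request fails with PreconditionFailed.
	_, err = client.PutObject(ctx, &s3.PutObjectInput{
		Bucket:      aws.String("example-bucket"), // hypothetical
		Key:         aws.String("example-key"),    // hypothetical
		Body:        strings.NewReader("hello"),
		IfNoneMatch: aws.String("*"),
	})
	log.Println(err)
}
```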
func getAclFromMetadata(meta map[string]*string, key key) (*auth.ACL, error) {
data, ok := meta[string(key)]
if !ok {
@@ -2002,3 +2162,21 @@ func (az *Azure) checkIfMpExists(ctx context.Context, bucket, obj, uploadId stri
return nil
}
func convertAzureEtag(etag *azcore.ETag) string {
// Azure ETag values are not S3 compatible,
// so append "-1" to avoid client SDK ETag validation issues.
str := (*string)(etag)
return *backend.TrimEtag(str) + "-1"
}
func areNils[T any](args ...*T) bool {
for _, arg := range args {
if arg != nil {
return false
}
}
return true
}
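A minimal standalone sketch of the ETag rewrite described above, assuming Azure returns quoted ETag strings (the sample value is made up); `toS3Etag` is a hypothetical stand-in for the `backend.TrimEtag` plus `convertAzureEtag` pair:

```go
package main

import (
	"fmt"
	"strings"
)

// toS3Etag mirrors the idea above: strip the surrounding quotes from the
// Azure ETag and append "-1" so S3 client SDKs that validate ETags do not
// reject the value. Illustrative only.
func toS3Etag(azureEtag string) string {
	return strings.Trim(azureEtag, `"`) + "-1"
}

func main() {
	fmt.Println(toS3Etag(`"0x8DBEXAMPLEETAG"`)) // 0x8DBEXAMPLEETAG-1
}
```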

View File

@@ -46,7 +46,7 @@ type Backend interface {
PutBucketOwnershipControls(_ context.Context, bucket string, ownership types.ObjectOwnership) error
GetBucketOwnershipControls(_ context.Context, bucket string) (types.ObjectOwnership, error)
DeleteBucketOwnershipControls(_ context.Context, bucket string) error
PutBucketCors(context.Context, []byte) error
PutBucketCors(_ context.Context, bucket string, cors []byte) error
GetBucketCors(_ context.Context, bucket string) ([]byte, error)
DeleteBucketCors(_ context.Context, bucket string) error
@@ -96,7 +96,7 @@ type Backend interface {
GetObjectLegalHold(_ context.Context, bucket, object, versionId string) (*bool, error)
// non AWS actions
ChangeBucketOwner(_ context.Context, bucket string, acl []byte) error
ChangeBucketOwner(_ context.Context, bucket, owner string) error
ListBucketsAndOwners(context.Context) ([]s3response.Bucket, error)
}
@@ -153,7 +153,7 @@ func (BackendUnsupported) GetBucketOwnershipControls(_ context.Context, bucket s
func (BackendUnsupported) DeleteBucketOwnershipControls(_ context.Context, bucket string) error {
return s3err.GetAPIError(s3err.ErrNotImplemented)
}
func (BackendUnsupported) PutBucketCors(context.Context, []byte) error {
func (BackendUnsupported) PutBucketCors(context.Context, string, []byte) error {
return s3err.GetAPIError(s3err.ErrNotImplemented)
}
func (BackendUnsupported) GetBucketCors(_ context.Context, bucket string) ([]byte, error) {
@@ -280,7 +280,7 @@ func (BackendUnsupported) GetObjectLegalHold(_ context.Context, bucket, object,
return nil, s3err.GetAPIError(s3err.ErrNotImplemented)
}
func (BackendUnsupported) ChangeBucketOwner(_ context.Context, bucket string, acl []byte) error {
func (BackendUnsupported) ChangeBucketOwner(_ context.Context, bucket, owner string) error {
return s3err.GetAPIError(s3err.ErrNotImplemented)
}
func (BackendUnsupported) ListBucketsAndOwners(context.Context) ([]s3response.Bucket, error) {

View File

@@ -22,6 +22,7 @@ import (
"hash"
"io"
"io/fs"
"math"
"net/url"
"os"
"regexp"
@@ -87,6 +88,8 @@ func TrimEtag(etag *string) *string {
var (
errInvalidRange = s3err.GetAPIError(s3err.ErrInvalidRange)
errInvalidCopySourceRange = s3err.GetAPIError(s3err.ErrInvalidCopySourceRange)
errPreconditionFailed = s3err.GetAPIError(s3err.ErrPreconditionFailed)
errNotModified = s3err.GetAPIError(s3err.ErrNotModified)
)
// ParseObjectRange parses the input Range header and returns startOffset, length, isValid
@@ -94,63 +97,81 @@ var (
// for invalid inputs, it returns no error, but isValid=false
// the `InvalidRange` error is returned only if startOffset is greater than the object size
func ParseObjectRange(size int64, acceptRange string) (int64, int64, bool, error) {
// Return full object (invalid range, no error) if header empty
if acceptRange == "" {
return 0, size, false, nil
}
rangeKv := strings.Split(acceptRange, "=")
if len(rangeKv) != 2 {
return 0, size, false, nil
}
if rangeKv[0] != "bytes" {
if rangeKv[0] != "bytes" { // unsupported unit -> ignore
return 0, size, false, nil
}
bRange := strings.Split(rangeKv[1], "-")
if len(bRange) != 2 {
if len(bRange) != 2 { // malformed / multi-range
return 0, size, false, nil
}
startOffset, err := strconv.ParseInt(bRange[0], 10, 64)
if err != nil && bRange[0] != "" {
// Parse start; empty start indicates a suffix-byte-range-spec (e.g. bytes=-100)
startOffset, err := strconv.ParseInt(bRange[0], 10, strconv.IntSize)
if startOffset > int64(math.MaxInt) || startOffset < int64(math.MinInt) {
return 0, size, false, errInvalidRange
}
if err != nil && bRange[0] != "" { // invalid numeric start (non-empty) -> ignore range
return 0, size, false, nil
}
// If end part missing (e.g. bytes=100-)
if bRange[1] == "" {
if bRange[0] == "" {
if bRange[0] == "" { // bytes=- (meaningless) -> ignore
return 0, size, false, nil
}
// start beyond or at size is unsatisfiable -> error (RequestedRangeNotSatisfiable)
if startOffset >= size {
return 0, 0, false, errInvalidRange
}
// bytes=100- => from start to end
return startOffset, size - startOffset, true, nil
}
endOffset, err := strconv.ParseInt(bRange[1], 10, 64)
if err != nil {
endOffset, err := strconv.ParseInt(bRange[1], 10, strconv.IntSize)
if endOffset > int64(math.MaxInt) {
return 0, size, false, errInvalidRange
}
if err != nil { // invalid numeric end -> ignore range
return 0, size, false, nil
}
if startOffset > endOffset {
return 0, size, false, nil
}
	// for ranges like 'bytes=-100', return the last 'endOffset' bytes of the object
// Suffix range handling (bRange[0] == "")
if bRange[0] == "" {
// Disallow -0 (always unsatisfiable)
if endOffset == 0 {
return 0, 0, false, errInvalidRange
}
// For zero-sized objects any positive suffix is treated as invalid (ignored, no error)
if size == 0 {
return 0, size, false, nil
}
// Clamp to object size (request more bytes than exist -> entire object)
endOffset = min(endOffset, size)
return size - endOffset, endOffset, true, nil
}
// Normal range (start-end)
if startOffset > endOffset { // start > end -> ignore
return 0, size, false, nil
}
// Start beyond or at end of object -> error
if startOffset >= size {
return 0, 0, false, errInvalidRange
}
// Adjust end beyond object size (trim)
if endOffset >= size {
endOffset = size - 1
}
return startOffset, endOffset - startOffset + 1, true, nil
}
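A usage sketch of the range parsing above (an object size of 1000 is assumed; the per-line comments reflect my reading of the code rather than verified output):

```go
package main

import (
	"fmt"

	"github.com/versity/versitygw/backend"
)

func main() {
	const size = 1000

	for _, hdr := range []string{
		"",            // empty header: full object, isValid=false
		"bytes=0-499", // first 500 bytes
		"bytes=100-",  // from offset 100 to the end
		"bytes=-100",  // suffix range: last 100 bytes
		"bytes=2000-", // start past EOF: InvalidRange error
	} {
		start, length, ok, err := backend.ParseObjectRange(size, hdr)
		fmt.Printf("%-12q -> start=%d length=%d valid=%v err=%v\n",
			hdr, start, length, ok, err)
	}
}
```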
@@ -223,7 +244,7 @@ func ParseCopySource(copySourceHeader string) (string, string, string, error) {
srcBucket, srcObject, ok := strings.Cut(copySource, "/")
if !ok {
return "", "", "", s3err.GetAPIError(s3err.ErrInvalidCopySource)
return "", "", "", s3err.GetAPIError(s3err.ErrInvalidCopySourceBucket)
}
return srcBucket, srcObject, versionId, nil
@@ -403,3 +424,149 @@ func GenerateEtag(h hash.Hash) string {
func AreEtagsSame(e1, e2 string) bool {
return strings.Trim(e1, `"`) == strings.Trim(e2, `"`)
}
func getBoolPtr(b bool) *bool {
return &b
}
type PreConditions struct {
IfMatch *string
IfNoneMatch *string
IfModSince *time.Time
IfUnmodeSince *time.Time
}
// EvaluatePreconditions takes the object ETag, the last modified time and
// evaluates the read preconditions:
// - if-match,
// - if-none-match
// - if-modified-since
// - if-unmodified-since
// if-match and if-none-match are ETag comparisons
// if-modified-since and if-unmodified-since are last modified time comparisons
func EvaluatePreconditions(etag string, modTime time.Time, preconditions PreConditions) error {
if preconditions.IfMatch == nil && preconditions.IfNoneMatch == nil && preconditions.IfModSince == nil && preconditions.IfUnmodeSince == nil {
return nil
}
// convert all conditions to *bool to evaluate the conditions
var ifMatch, ifNoneMatch, ifModSince, ifUnmodeSince *bool
if preconditions.IfMatch != nil {
ifMatch = getBoolPtr(*preconditions.IfMatch == etag)
}
if preconditions.IfNoneMatch != nil {
ifNoneMatch = getBoolPtr(*preconditions.IfNoneMatch != etag)
}
if preconditions.IfModSince != nil {
ifModSince = getBoolPtr(preconditions.IfModSince.UTC().Before(modTime.UTC()))
}
if preconditions.IfUnmodeSince != nil {
ifUnmodeSince = getBoolPtr(preconditions.IfUnmodeSince.UTC().After(modTime.UTC()))
}
if ifMatch != nil {
		// if `if-match` doesn't match, return PreconditionFailed
if !*ifMatch {
return errPreconditionFailed
}
// if-match matches
if *ifMatch {
if ifNoneMatch != nil {
				// if the `if-none-match` condition fails (the ETag matches), return NotModified
if !*ifNoneMatch {
return errNotModified
}
// if both `if-match` and `if-none-match` match, return no error
return nil
}
// if `if-match` matches but `if-modified-since` is false return NotModified
if ifModSince != nil && !*ifModSince {
return errNotModified
}
// ignore `if-unmodified-since` as `if-match` is true
return nil
}
}
if ifNoneMatch != nil {
if *ifNoneMatch {
// if `if-none-match` is true, but `if-unmodified-since` is false
// return PreconditionFailed
if ifUnmodeSince != nil && !*ifUnmodeSince {
return errPreconditionFailed
}
// ignore `if-modified-since` as `if-none-match` is true
return nil
} else {
// if `if-none-match` is false and `if-unmodified-since` is false
// return PreconditionFailed
if ifUnmodeSince != nil && !*ifUnmodeSince {
return errPreconditionFailed
}
// in all other cases when `if-none-match` is false return NotModified
return errNotModified
}
}
if ifModSince != nil && !*ifModSince {
// if both `if-modified-since` and `if-unmodified-since` are false
// return PreconditionFailed
if ifUnmodeSince != nil && !*ifUnmodeSince {
return errPreconditionFailed
}
// if only `if-modified-since` is false, return NotModified
return errNotModified
}
// if `if-unmodified-since` is false return PreconditionFailed
if ifUnmodeSince != nil && !*ifUnmodeSince {
return errPreconditionFailed
}
return nil
}
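A short usage sketch of the read-precondition evaluation above (the expected errors in the comments follow my reading of the decision tree, not asserted test output):

```go
package main

import (
	"fmt"
	"time"

	"github.com/versity/versitygw/backend"
)

func strPtr(s string) *string { return &s }

func main() {
	etag := "abc123"
	modTime := time.Date(2025, 1, 2, 0, 0, 0, 0, time.UTC)

	// If-Match matches the current ETag: nil error, the read proceeds.
	fmt.Println(backend.EvaluatePreconditions(etag, modTime, backend.PreConditions{
		IfMatch: strPtr("abc123"),
	}))

	// If-None-Match equals the current ETag (condition fails): NotModified.
	fmt.Println(backend.EvaluatePreconditions(etag, modTime, backend.PreConditions{
		IfNoneMatch: strPtr("abc123"),
	}))

	// If-Match does not match: PreconditionFailed.
	fmt.Println(backend.EvaluatePreconditions(etag, modTime, backend.PreConditions{
		IfMatch: strPtr("other"),
	}))
}
```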
// EvaluateMatchPreconditions evaluates if-match and if-none-match preconditions
func EvaluateMatchPreconditions(etag string, ifMatch, ifNoneMatch *string) error {
if ifMatch != nil && *ifMatch != etag {
return errPreconditionFailed
}
if ifNoneMatch != nil && *ifNoneMatch == etag {
return errPreconditionFailed
}
return nil
}
type ObjectDeletePreconditions struct {
IfMatch *string
IfMatchLastModTime *time.Time
IfMatchSize *int64
}
// EvaluateObjectDeletePreconditions evaluates preconditions for DeleteObject
func EvaluateObjectDeletePreconditions(etag string, modTime time.Time, size int64, preconditions ObjectDeletePreconditions) error {
ifMatch := preconditions.IfMatch
if ifMatch != nil && *ifMatch != etag {
return errPreconditionFailed
}
ifMatchTime := preconditions.IfMatchLastModTime
if ifMatchTime != nil && ifMatchTime.Unix() != modTime.Unix() {
return errPreconditionFailed
}
ifMatchSize := preconditions.IfMatchSize
if ifMatchSize != nil && *ifMatchSize != size {
return errPreconditionFailed
}
return nil
}
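And a corresponding sketch for the delete preconditions (values are arbitrary; the comments describe the expected outcome under the rules above):

```go
package main

import (
	"fmt"
	"time"

	"github.com/versity/versitygw/backend"
)

func main() {
	etag := "abc123"
	modTime := time.Date(2025, 1, 2, 0, 0, 0, 0, time.UTC)
	size := int64(42)

	match := "abc123"
	wrongSize := int64(7)

	// All supplied conditions match the object: nil error, delete may proceed.
	fmt.Println(backend.EvaluateObjectDeletePreconditions(etag, modTime, size,
		backend.ObjectDeletePreconditions{IfMatch: &match}))

	// The size condition does not match: PreconditionFailed.
	fmt.Println(backend.EvaluateObjectDeletePreconditions(etag, modTime, size,
		backend.ObjectDeletePreconditions{IfMatchSize: &wrongSize}))
}
```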

File diff suppressed because it is too large

View File

@@ -26,6 +26,7 @@ import (
"path/filepath"
"strconv"
"syscall"
"time"
"github.com/versity/versitygw/auth"
"github.com/versity/versitygw/backend"
@@ -165,14 +166,10 @@ func (tmp *tmpfile) link() error {
// of last upload completed wins and is not some combination of writes
// from simultaneous uploads.
objPath := filepath.Join(tmp.bucket, tmp.objname)
err := os.Remove(objPath)
if err != nil && !errors.Is(err, fs.ErrNotExist) {
return fmt.Errorf("remove stale path: %w", err)
}
dir := filepath.Dir(objPath)
err = backend.MkdirAll(dir, tmp.uid, tmp.gid, tmp.needsChown, tmp.newDirPerm)
err := backend.MkdirAll(dir, tmp.uid, tmp.gid, tmp.needsChown, tmp.newDirPerm)
if err != nil {
return fmt.Errorf("make parent dir: %w", err)
}
@@ -194,21 +191,33 @@ func (tmp *tmpfile) link() error {
}
defer dirf.Close()
for {
err = unix.Linkat(int(procdir.Fd()), filepath.Base(tmp.f.Name()),
int(dirf.Fd()), filepath.Base(objPath), unix.AT_SYMLINK_FOLLOW)
if errors.Is(err, syscall.EEXIST) {
err := os.Remove(objPath)
if err != nil && !errors.Is(err, fs.ErrNotExist) {
return fmt.Errorf("remove stale path: %w", err)
err = unix.Linkat(int(procdir.Fd()), filepath.Base(tmp.f.Name()),
int(dirf.Fd()), filepath.Base(objPath), unix.AT_SYMLINK_FOLLOW)
if errors.Is(err, syscall.EEXIST) {
			// Linkat cannot overwrite files; we allocate a temporary file, Linkat to it and then Renameat it
			// over the target to avoid a potential race condition
retries := 1
for {
tmpName := fmt.Sprintf(".%s.sgwtmp.%d", filepath.Base(objPath), time.Now().UnixNano())
err := unix.Linkat(int(procdir.Fd()), filepath.Base(tmp.f.Name()),
int(dirf.Fd()), tmpName, unix.AT_SYMLINK_FOLLOW)
if errors.Is(err, syscall.EEXIST) && retries < 3 {
retries += 1
continue
}
continue
if err != nil {
return fmt.Errorf("cannot find free temporary file: %w", err)
}
err = unix.Renameat(int(dirf.Fd()), tmpName, int(dirf.Fd()), filepath.Base(objPath))
if err != nil {
return fmt.Errorf("overwriting renameat failed: %w", err)
}
break
}
if err != nil {
return fmt.Errorf("link tmpfile (fd %q as %q): %w",
filepath.Base(tmp.f.Name()), objPath, err)
}
break
} else if err != nil {
return fmt.Errorf("link tmpfile (fd %q as %q): %w",
filepath.Base(tmp.f.Name()), objPath, err)
}
err = tmp.f.Close()
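The overwrite step above (link the unnamed file under a temporary name, then rename it over the target) can be illustrated with a small standalone sketch; the directory paths and file names here are made up, and the retry loop for colliding temporary names is omitted for brevity:

```go
package main

import (
	"fmt"
	"log"
	"os"
	"time"

	"golang.org/x/sys/unix"
)

// linkOverwrite links srcName (relative to srcDir) into dstDir as finalName.
// Because linkat(2) refuses to replace an existing entry, the new link is
// first created under a unique temporary name and then renamed over the
// target; rename(2) replaces the destination atomically.
func linkOverwrite(srcDir, dstDir *os.File, srcName, finalName string) error {
	tmpName := fmt.Sprintf(".%s.tmp.%d", finalName, time.Now().UnixNano())
	if err := unix.Linkat(int(srcDir.Fd()), srcName,
		int(dstDir.Fd()), tmpName, unix.AT_SYMLINK_FOLLOW); err != nil {
		return fmt.Errorf("linkat: %w", err)
	}
	if err := unix.Renameat(int(dstDir.Fd()), tmpName,
		int(dstDir.Fd()), finalName); err != nil {
		return fmt.Errorf("renameat: %w", err)
	}
	return nil
}

func main() {
	src, err := os.Open("/tmp/srcdir") // hypothetical directories
	if err != nil {
		log.Fatal(err)
	}
	defer src.Close()
	dst, err := os.Open("/tmp/dstdir")
	if err != nil {
		log.Fatal(err)
	}
	defer dst.Close()
	fmt.Println(linkOverwrite(src, dst, "upload.part", "object"))
}
```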

View File

@@ -37,6 +37,10 @@ func (s *S3Proxy) getClientWithCtx(ctx context.Context) (*s3.Client, error) {
return s3.NewFromConfig(cfg, func(o *s3.Options) {
o.BaseEndpoint = &s.endpoint
o.UsePathStyle = s.usePathStyle
		// The HTTP body stream is not seekable, so most operations cannot
		// be retried here. The error returned to the original client may
		// instead be retried by that client.
o.Retryer = aws.NopRetryer{}
}), nil
}
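If retries were ever wanted here, the body would have to be made seekable first. A hypothetical sketch of that trade-off follows (buffering the stream so the SDK's default retryer can rewind it, at the cost of holding the whole payload in memory; bucket and key names are made up):

```go
package main

import (
	"bytes"
	"context"
	"io"
	"log"
	"strings"

	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/config"
	"github.com/aws/aws-sdk-go-v2/service/s3"
)

func main() {
	ctx := context.Background()
	cfg, err := config.LoadDefaultConfig(ctx)
	if err != nil {
		log.Fatal(err)
	}
	client := s3.NewFromConfig(cfg) // default retryer left enabled

	// Stand-in for the non-seekable stream arriving from the original client.
	var stream io.Reader = io.LimitReader(strings.NewReader("example body"), 1<<20)

	// Buffering makes the body rewindable, so retries become safe; the cost
	// is keeping the whole payload in memory, which the proxy avoids by
	// disabling retries instead.
	buf, err := io.ReadAll(stream)
	if err != nil {
		log.Fatal(err)
	}

	_, err = client.PutObject(ctx, &s3.PutObjectInput{
		Bucket: aws.String("example-bucket"), // hypothetical
		Key:    aws.String("example-key"),    // hypothetical
		Body:   bytes.NewReader(buf),
	})
	log.Println(err)
}
```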

View File

@@ -17,17 +17,12 @@ package s3proxy
import (
"bytes"
"context"
"crypto/sha256"
"encoding/hex"
"encoding/json"
"errors"
"fmt"
"io"
"net/http"
"strconv"
"time"
"github.com/aws/aws-sdk-go-v2/aws"
v4 "github.com/aws/aws-sdk-go-v2/aws/signer/v4"
awshttp "github.com/aws/aws-sdk-go-v2/aws/transport/http"
"github.com/aws/aws-sdk-go-v2/service/s3"
@@ -44,6 +39,7 @@ type metaPrefix string
const (
metaPrefixAcl metaPrefix = "vgw-meta-acl-"
metaPrefixPolicy metaPrefix = "vgw-meta-policy-"
metaPrefixCors metaPrefix = "vgw-meta-cors-"
)
type S3Proxy struct {
@@ -122,7 +118,7 @@ func (s *S3Proxy) ListBuckets(ctx context.Context, input s3response.ListBucketsI
continue
}
data, err := s.getMetaBucketObjData(ctx, *b.Name, metaPrefixAcl)
data, err := s.getMetaBucketObjData(ctx, *b.Name, metaPrefixAcl, false)
if err != nil {
return s3response.ListAllMyBucketsResult{}, handleError(err)
}
@@ -186,7 +182,7 @@ func (s *S3Proxy) CreateBucket(ctx context.Context, input *s3.CreateBucketInput,
}
if s.metaBucket != "" {
data, err := s.getMetaBucketObjData(ctx, *input.Bucket, metaPrefixAcl)
data, err := s.getMetaBucketObjData(ctx, *input.Bucket, metaPrefixAcl, true)
if err == nil {
acl, err := auth.ParseACL(data)
if err != nil {
@@ -380,7 +376,7 @@ func (s *S3Proxy) CreateMultipartUpload(ctx context.Context, input s3response.Cr
if input.ExpectedBucketOwner != nil && *input.ExpectedBucketOwner == "" {
input.ExpectedBucketOwner = nil
}
if input.ObjectLockRetainUntilDate != nil && *input.ObjectLockRetainUntilDate == defTime {
if input.ObjectLockRetainUntilDate != nil && (*input.ObjectLockRetainUntilDate).Equal(defTime) {
input.ObjectLockRetainUntilDate = nil
}
if input.SSECustomerAlgorithm != nil && *input.SSECustomerAlgorithm == "" {
@@ -530,7 +526,7 @@ func (s *S3Proxy) AbortMultipartUpload(ctx context.Context, input *s3.AbortMulti
if input.ExpectedBucketOwner != nil && *input.ExpectedBucketOwner == "" {
input.ExpectedBucketOwner = nil
}
if input.IfMatchInitiatedTime != nil && *input.IfMatchInitiatedTime == defTime {
if input.IfMatchInitiatedTime != nil && (*input.IfMatchInitiatedTime).Equal(defTime) {
input.IfMatchInitiatedTime = nil
}
_, err := s.client.AbortMultipartUpload(ctx, input)
@@ -735,13 +731,13 @@ func (s *S3Proxy) UploadPartCopy(ctx context.Context, input *s3.UploadPartCopyIn
if input.CopySourceIfMatch != nil && *input.CopySourceIfMatch == "" {
input.CopySourceIfMatch = nil
}
if input.CopySourceIfModifiedSince != nil && *input.CopySourceIfModifiedSince == defTime {
if input.CopySourceIfModifiedSince != nil && (*input.CopySourceIfModifiedSince).Equal(defTime) {
input.CopySourceIfModifiedSince = nil
}
if input.CopySourceIfNoneMatch != nil && *input.CopySourceIfNoneMatch == "" {
input.CopySourceIfNoneMatch = nil
}
if input.CopySourceIfUnmodifiedSince != nil && *input.CopySourceIfUnmodifiedSince == defTime {
if input.CopySourceIfUnmodifiedSince != nil && (*input.CopySourceIfUnmodifiedSince).Equal(defTime) {
input.CopySourceIfUnmodifiedSince = nil
}
if input.CopySourceRange != nil && *input.CopySourceRange == "" {
@@ -942,6 +938,7 @@ func (s *S3Proxy) PutObject(ctx context.Context, input s3response.PutObjectInput
ChecksumCRC64NVME: output.ChecksumCRC64NVME,
ChecksumSHA1: output.ChecksumSHA1,
ChecksumSHA256: output.ChecksumSHA256,
Size: output.Size,
}, nil
}
@@ -955,13 +952,13 @@ func (s *S3Proxy) HeadObject(ctx context.Context, input *s3.HeadObjectInput) (*s
if input.IfMatch != nil && *input.IfMatch == "" {
input.IfMatch = nil
}
if input.IfModifiedSince != nil && *input.IfModifiedSince == defTime {
if input.IfModifiedSince != nil && (*input.IfModifiedSince).Equal(defTime) {
input.IfModifiedSince = nil
}
if input.IfNoneMatch != nil && *input.IfNoneMatch == "" {
input.IfNoneMatch = nil
}
if input.IfUnmodifiedSince != nil && *input.IfUnmodifiedSince == defTime {
if input.IfUnmodifiedSince != nil && (*input.IfUnmodifiedSince).Equal(defTime) {
input.IfUnmodifiedSince = nil
}
if input.PartNumber != nil && *input.PartNumber == 0 {
@@ -985,7 +982,7 @@ func (s *S3Proxy) HeadObject(ctx context.Context, input *s3.HeadObjectInput) (*s
if input.ResponseContentType != nil && *input.ResponseContentType == "" {
input.ResponseContentType = nil
}
if input.ResponseExpires != nil && *input.ResponseExpires == defTime {
if input.ResponseExpires != nil && (*input.ResponseExpires).Equal(defTime) {
input.ResponseExpires = nil
}
if input.SSECustomerAlgorithm != nil && *input.SSECustomerAlgorithm == "" {
@@ -1015,13 +1012,13 @@ func (s *S3Proxy) GetObject(ctx context.Context, input *s3.GetObjectInput) (*s3.
if input.IfMatch != nil && *input.IfMatch == "" {
input.IfMatch = nil
}
if input.IfModifiedSince != nil && *input.IfModifiedSince == defTime {
if input.IfModifiedSince != nil && (*input.IfModifiedSince).Equal(defTime) {
input.IfModifiedSince = nil
}
if input.IfNoneMatch != nil && *input.IfNoneMatch == "" {
input.IfNoneMatch = nil
}
if input.IfUnmodifiedSince != nil && *input.IfUnmodifiedSince == defTime {
if input.IfUnmodifiedSince != nil && (*input.IfUnmodifiedSince).Equal(defTime) {
input.IfUnmodifiedSince = nil
}
if input.PartNumber != nil && *input.PartNumber == 0 {
@@ -1045,7 +1042,7 @@ func (s *S3Proxy) GetObject(ctx context.Context, input *s3.GetObjectInput) (*s3.
if input.ResponseContentType != nil && *input.ResponseContentType == "" {
input.ResponseContentType = nil
}
if input.ResponseExpires != nil && *input.ResponseExpires == defTime {
if input.ResponseExpires != nil && (*input.ResponseExpires).Equal(defTime) {
input.ResponseExpires = nil
}
if input.SSECustomerAlgorithm != nil && *input.SSECustomerAlgorithm == "" {
@@ -1153,13 +1150,13 @@ func (s *S3Proxy) CopyObject(ctx context.Context, input s3response.CopyObjectInp
if input.CopySourceIfMatch != nil && *input.CopySourceIfMatch == "" {
input.CopySourceIfMatch = nil
}
if input.CopySourceIfModifiedSince != nil && *input.CopySourceIfModifiedSince == defTime {
if input.CopySourceIfModifiedSince != nil && (*input.CopySourceIfModifiedSince).Equal(defTime) {
input.CopySourceIfModifiedSince = nil
}
if input.CopySourceIfNoneMatch != nil && *input.CopySourceIfNoneMatch == "" {
input.CopySourceIfNoneMatch = nil
}
if input.CopySourceIfUnmodifiedSince != nil && *input.CopySourceIfUnmodifiedSince == defTime {
if input.CopySourceIfUnmodifiedSince != nil && (*input.CopySourceIfUnmodifiedSince).Equal(defTime) {
input.CopySourceIfUnmodifiedSince = nil
}
if input.CopySourceSSECustomerAlgorithm != nil && *input.CopySourceSSECustomerAlgorithm == "" {
@@ -1192,7 +1189,7 @@ func (s *S3Proxy) CopyObject(ctx context.Context, input s3response.CopyObjectInp
if input.GrantWriteACP != nil && *input.GrantWriteACP == "" {
input.GrantWriteACP = nil
}
if input.ObjectLockRetainUntilDate != nil && *input.ObjectLockRetainUntilDate == defTime {
if input.ObjectLockRetainUntilDate != nil && (*input.ObjectLockRetainUntilDate).Equal(defTime) {
input.ObjectLockRetainUntilDate = nil
}
if input.SSECustomerAlgorithm != nil && *input.SSECustomerAlgorithm == "" {
@@ -1392,7 +1389,7 @@ func (s *S3Proxy) DeleteObject(ctx context.Context, input *s3.DeleteObjectInput)
if input.IfMatch != nil && *input.IfMatch == "" {
input.IfMatch = nil
}
if input.IfMatchLastModifiedTime != nil && *input.IfMatchLastModifiedTime == defTime {
if input.IfMatchLastModifiedTime != nil && (*input.IfMatchLastModifiedTime).Equal(defTime) {
input.IfMatchLastModifiedTime = nil
}
if input.IfMatchSize != nil && *input.IfMatchSize == 0 {
@@ -1436,7 +1433,7 @@ func (s *S3Proxy) DeleteObjects(ctx context.Context, input *s3.DeleteObjectsInpu
}
func (s *S3Proxy) GetBucketAcl(ctx context.Context, input *s3.GetBucketAclInput) ([]byte, error) {
data, err := s.getMetaBucketObjData(ctx, *input.Bucket, metaPrefixAcl)
data, err := s.getMetaBucketObjData(ctx, *input.Bucket, metaPrefixAcl, false)
if err != nil {
return nil, handleError(err)
}
@@ -1501,12 +1498,38 @@ func (s *S3Proxy) DeleteObjectTagging(ctx context.Context, bucket, object string
return handleError(err)
}
func (s *S3Proxy) PutBucketCors(ctx context.Context, bucket string, cors []byte) error {
return handleError(s.putMetaBucketObj(ctx, bucket, cors, metaPrefixCors))
}
func (s *S3Proxy) GetBucketCors(ctx context.Context, bucket string) ([]byte, error) {
data, err := s.getMetaBucketObjData(ctx, bucket, metaPrefixCors, false)
if err != nil {
return nil, handleError(err)
}
return data, nil
}
func (s *S3Proxy) DeleteBucketCors(ctx context.Context, bucket string) error {
key := getMetaKey(bucket, metaPrefixCors)
_, err := s.client.DeleteObject(ctx, &s3.DeleteObjectInput{
Bucket: &s.metaBucket,
Key: &key,
})
if err != nil && !areErrSame(err, s3err.GetAPIError(s3err.ErrNoSuchKey)) {
return handleError(err)
}
return nil
}
func (s *S3Proxy) PutBucketPolicy(ctx context.Context, bucket string, policy []byte) error {
return handleError(s.putMetaBucketObj(ctx, bucket, policy, metaPrefixPolicy))
}
func (s *S3Proxy) GetBucketPolicy(ctx context.Context, bucket string) ([]byte, error) {
data, err := s.getMetaBucketObjData(ctx, bucket, metaPrefixPolicy)
data, err := s.getMetaBucketObjData(ctx, bucket, metaPrefixPolicy, false)
if err != nil {
return nil, handleError(err)
}
@@ -1552,82 +1575,39 @@ func (s *S3Proxy) GetObjectLegalHold(ctx context.Context, bucket, object, versio
return nil, s3err.GetAPIError(s3err.ErrNotImplemented)
}
func (s *S3Proxy) ChangeBucketOwner(ctx context.Context, bucket string, acl []byte) error {
acll, err := auth.ParseACL(acl)
if err != nil {
return err
}
req, err := http.NewRequest(http.MethodPatch, fmt.Sprintf("%v/change-bucket-owner/?bucket=%v&owner=%v", s.endpoint, bucket, acll.Owner), nil)
if err != nil {
return fmt.Errorf("failed to send the request: %w", err)
}
signer := v4.NewSigner()
hashedPayload := sha256.Sum256([]byte{})
hexPayload := hex.EncodeToString(hashedPayload[:])
req.Header.Set("X-Amz-Content-Sha256", hexPayload)
signErr := signer.SignHTTP(req.Context(), aws.Credentials{AccessKeyID: s.access, SecretAccessKey: s.secret}, req, hexPayload, "s3", s.awsRegion, time.Now())
if signErr != nil {
return fmt.Errorf("failed to sign the request: %w", err)
}
client := http.Client{}
resp, err := client.Do(req)
if err != nil {
return fmt.Errorf("failed to send the request: %w", err)
}
if resp.StatusCode > 300 {
body, err := io.ReadAll(resp.Body)
if err != nil {
return err
}
defer resp.Body.Close()
return fmt.Errorf("%v", string(body))
}
return nil
func (s *S3Proxy) ChangeBucketOwner(ctx context.Context, bucket, owner string) error {
return auth.UpdateBucketACLOwner(ctx, s, bucket, owner)
}
func (s *S3Proxy) ListBucketsAndOwners(ctx context.Context) ([]s3response.Bucket, error) {
req, err := http.NewRequest(http.MethodPatch, fmt.Sprintf("%v/list-buckets", s.endpoint), nil)
if err != nil {
return []s3response.Bucket{}, fmt.Errorf("failed to send the request: %w", err)
}
signer := v4.NewSigner()
hashedPayload := sha256.Sum256([]byte{})
hexPayload := hex.EncodeToString(hashedPayload[:])
req.Header.Set("X-Amz-Content-Sha256", hexPayload)
signErr := signer.SignHTTP(req.Context(), aws.Credentials{AccessKeyID: s.access, SecretAccessKey: s.secret}, req, hexPayload, "s3", s.awsRegion, time.Now())
if signErr != nil {
return []s3response.Bucket{}, fmt.Errorf("failed to sign the request: %w", err)
}
client := http.Client{}
resp, err := client.Do(req)
if err != nil {
return []s3response.Bucket{}, fmt.Errorf("failed to send the request: %w", err)
}
body, err := io.ReadAll(resp.Body)
if err != nil {
return []s3response.Bucket{}, err
}
defer resp.Body.Close()
var buckets []s3response.Bucket
if err := json.Unmarshal(body, &buckets); err != nil {
return []s3response.Bucket{}, err
paginator := s3.NewListBucketsPaginator(s.client, &s3.ListBucketsInput{})
for paginator.HasMorePages() {
page, err := paginator.NextPage(ctx)
if err != nil {
return nil, handleError(err)
}
for _, bucket := range page.Buckets {
if *bucket.Name == s.metaBucket {
continue
}
aclJSON, err := s.getMetaBucketObjData(ctx, *bucket.Name, metaPrefixAcl, false)
if err != nil {
return nil, handleError(err)
}
acl, err := auth.ParseACL(aclJSON)
if err != nil {
return buckets, fmt.Errorf("parse acl tag: %w", err)
}
buckets = append(buckets, s3response.Bucket{
Name: *bucket.Name,
Owner: acl.Owner,
})
}
}
return buckets, nil
@@ -1656,15 +1636,13 @@ func (s *S3Proxy) putMetaBucketObj(ctx context.Context, bucket string, data []by
return err
}
func (s *S3Proxy) getMetaBucketObjData(ctx context.Context, bucket string, prefix metaPrefix) ([]byte, error) {
// set checkExists to true when checking for the existence of the bucket; in
// that case it will not return a default acl/policy if the metadata does
// not exist
func (s *S3Proxy) getMetaBucketObjData(ctx context.Context, bucket string, prefix metaPrefix, checkExists bool) ([]byte, error) {
	// return the default behavior of get bucket policy/acl if the meta bucket is not provided
if s.metaBucket == "" {
switch prefix {
case metaPrefixAcl:
return []byte{}, nil
case metaPrefixPolicy:
return nil, s3err.GetAPIError(s3err.ErrNoSuchBucketPolicy)
}
return handleMetaBucketObjectNotFoundErr(prefix)
}
key := getMetaKey(bucket, prefix)
@@ -1674,13 +1652,11 @@ func (s *S3Proxy) getMetaBucketObjData(ctx context.Context, bucket string, prefi
Key: &key,
})
if areErrSame(err, s3err.GetAPIError(s3err.ErrNoSuchKey)) {
switch prefix {
case metaPrefixAcl:
// If bucket acl is not found, return default acl
return []byte{}, nil
case metaPrefixPolicy:
return nil, s3err.GetAPIError(s3err.ErrNoSuchBucketPolicy)
if checkExists {
return nil, err
}
return handleMetaBucketObjectNotFoundErr(prefix)
}
if err != nil {
return nil, err
@@ -1694,6 +1670,23 @@ func (s *S3Proxy) getMetaBucketObjData(ctx context.Context, bucket string, prefi
return data, nil
}
// handles the case when an object with the given metaPrefix
// is not found in the meta bucket. Aggregates the not-found errors
// for each meta prefix
func handleMetaBucketObjectNotFoundErr(prefix metaPrefix) ([]byte, error) {
switch prefix {
case metaPrefixAcl:
// If bucket acl is not found, return default acl
return []byte{}, nil
case metaPrefixPolicy:
return nil, s3err.GetAPIError(s3err.ErrNoSuchBucketPolicy)
case metaPrefixCors:
return nil, s3err.GetAPIError(s3err.ErrNoSuchCORSConfiguration)
}
return []byte{}, nil
}
// Checks if the provided err is a type of smithy.APIError
// and if the error code and message match with the provided apiErr
func areErrSame(err error, apiErr s3err.APIError) bool {

View File

@@ -21,7 +21,6 @@ import (
"errors"
"fmt"
"io/fs"
"net/http"
"os"
"path/filepath"
"strings"
@@ -30,21 +29,27 @@ import (
"github.com/aws/aws-sdk-go-v2/service/s3"
"github.com/aws/aws-sdk-go-v2/service/s3/types"
"github.com/pkg/xattr"
"github.com/versity/versitygw/auth"
"github.com/versity/versitygw/backend"
"github.com/versity/versitygw/backend/meta"
"github.com/versity/versitygw/backend/posix"
"github.com/versity/versitygw/s3api/utils"
"github.com/versity/versitygw/s3err"
"github.com/versity/versitygw/s3response"
)
// ScoutfsOpts are the options for the ScoutFS backend
type ScoutfsOpts struct {
ChownUID bool
ChownGID bool
GlacierMode bool
BucketLinks bool
NewDirPerm fs.FileMode
// ChownUID sets the UID of the object to the UID of the user on PUT
ChownUID bool
// ChownGID sets the GID of the object to the GID of the user on PUT
ChownGID bool
// BucketLinks enables symlinks to directories to be treated as buckets
BucketLinks bool
	// VersioningDir sets the version directory to enable object versioning
VersioningDir string
// NewDirPerm specifies the permission to set on newly created directories
NewDirPerm fs.FileMode
// GlacierMode enables glacier emulation for offline files
GlacierMode bool
// DisableNoArchive prevents setting noarchive on temporary files
DisableNoArchive bool
}
@@ -53,9 +58,6 @@ type ScoutFS struct {
rootfd *os.File
rootdir string
// bucket/object metadata storage facility
meta meta.MetadataStorer
// glaciermode enables the following behavior:
// GET object: if file offline, return invalid object state
// HEAD object: if file offline, set obj storage class to GLACIER
@@ -67,19 +69,6 @@ type ScoutFS struct {
// RestoreObject: add batch stage request to file
glaciermode bool
// chownuid/gid enable chowning of files to the account uid/gid
// when objects are uploaded
chownuid bool
chowngid bool
// euid/egid are the effective uid/gid of the running versitygw process
// used to determine if chowning is needed
euid int
egid int
// newDirPerm is the permissions to use when creating new directories
newDirPerm fs.FileMode
	// disableNoArchive is used to disable setting the scoutam noarchive flag
	// on multipart parts. This is enabled by default to prevent archive
	// copies of temporary multipart parts.
@@ -89,24 +78,6 @@ type ScoutFS struct {
var _ backend.Backend = &ScoutFS{}
const (
metaTmpDir = ".sgwtmp"
metaTmpMultipartDir = metaTmpDir + "/multipart"
tagHdr = "X-Amz-Tagging"
metaHdr = "X-Amz-Meta"
contentTypeHdr = "content-type"
contentEncHdr = "content-encoding"
contentLangHdr = "content-language"
contentDispHdr = "content-disposition"
cacheCtrlHdr = "cache-control"
expiresHdr = "expires"
emptyMD5 = "d41d8cd98f00b204e9800998ecf8427e"
etagkey = "etag"
checksumsKey = "checksums"
objectRetentionKey = "object-retention"
objectLegalHoldKey = "object-legal-hold"
)
var (
stageComplete = "ongoing-request=\"false\", expiry-date=\"Fri, 2 Dec 2050 00:00:00 GMT\""
stageInProgress = "true"
stageNotInProgress = "false"
@@ -146,25 +117,6 @@ func (*ScoutFS) String() string {
return "ScoutFS Gateway"
}
// getChownIDs returns the uid and gid that should be used for chowning
// the object to the account uid/gid. It also returns a boolean indicating
// if chowning is needed.
func (s *ScoutFS) getChownIDs(acct auth.Account) (int, int, bool) {
uid := s.euid
gid := s.egid
var needsChown bool
if s.chownuid && acct.UserID != s.euid {
uid = acct.UserID
needsChown = true
}
if s.chowngid && acct.GroupID != s.egid {
gid = acct.GroupID
needsChown = true
}
return uid, gid, needsChown
}
func (s *ScoutFS) UploadPart(ctx context.Context, input *s3.UploadPartInput) (*s3.UploadPartOutput, error) {
out, err := s.Posix.UploadPart(ctx, input)
if err != nil {
@@ -175,7 +127,7 @@ func (s *ScoutFS) UploadPart(ctx context.Context, input *s3.UploadPartInput) (*s
sum := sha256.Sum256([]byte(*input.Key))
partPath := filepath.Join(
*input.Bucket, // bucket
metaTmpMultipartDir, // temp multipart dir
posix.MetaTmpMultipartDir, // temp multipart dir
fmt.Sprintf("%x", sum), // hashed objname
*input.UploadId, // upload id
fmt.Sprintf("%v", *input.PartNumber), // part number
@@ -194,446 +146,7 @@ func (s *ScoutFS) UploadPart(ctx context.Context, input *s3.UploadPartInput) (*s
// ioctl to not have to read and copy the part data to the final object. This
// saves a read and write cycle for all multipart uploads.
func (s *ScoutFS) CompleteMultipartUpload(ctx context.Context, input *s3.CompleteMultipartUploadInput) (s3response.CompleteMultipartUploadResult, string, error) {
acct, ok := ctx.Value("account").(auth.Account)
if !ok {
acct = auth.Account{}
}
var res s3response.CompleteMultipartUploadResult
if input.Bucket == nil {
return res, "", s3err.GetAPIError(s3err.ErrInvalidBucketName)
}
if input.Key == nil {
return res, "", s3err.GetAPIError(s3err.ErrNoSuchKey)
}
if input.UploadId == nil {
return res, "", s3err.GetAPIError(s3err.ErrNoSuchUpload)
}
if input.MultipartUpload == nil {
return res, "", s3err.GetAPIError(s3err.ErrInvalidRequest)
}
bucket := *input.Bucket
object := *input.Key
uploadID := *input.UploadId
parts := input.MultipartUpload.Parts
_, err := os.Stat(bucket)
if errors.Is(err, fs.ErrNotExist) {
return res, "", s3err.GetAPIError(s3err.ErrNoSuchBucket)
}
if err != nil {
return res, "", fmt.Errorf("stat bucket: %w", err)
}
sum, err := s.checkUploadIDExists(bucket, object, uploadID)
if err != nil {
return res, "", err
}
objdir := filepath.Join(metaTmpMultipartDir, fmt.Sprintf("%x", sum))
checksums, err := s.retrieveChecksums(nil, bucket, filepath.Join(objdir, uploadID))
if err != nil && !errors.Is(err, meta.ErrNoSuchKey) {
return res, "", fmt.Errorf("get mp checksums: %w", err)
}
// ChecksumType should be the same as specified on CreateMultipartUpload
if input.ChecksumType != "" && checksums.Type != input.ChecksumType {
checksumType := checksums.Type
if checksumType == "" {
checksumType = types.ChecksumType("null")
}
return res, "", s3err.GetChecksumTypeMismatchOnMpErr(checksumType)
}
// check all parts ok
last := len(parts) - 1
var totalsize int64
	// The initial value is the lower limit of partNumber: 0
var partNumber int32
for i, part := range parts {
if part.PartNumber == nil {
return res, "", s3err.GetAPIError(s3err.ErrInvalidPart)
}
if *part.PartNumber < 1 {
return res, "", s3err.GetAPIError(s3err.ErrInvalidCompleteMpPartNumber)
}
if *part.PartNumber <= partNumber {
return res, "", s3err.GetAPIError(s3err.ErrInvalidPartOrder)
}
partNumber = *part.PartNumber
partObjPath := filepath.Join(objdir, uploadID, fmt.Sprintf("%v", *part.PartNumber))
fullPartPath := filepath.Join(bucket, partObjPath)
fi, err := os.Lstat(fullPartPath)
if err != nil {
return res, "", s3err.GetAPIError(s3err.ErrInvalidPart)
}
totalsize += fi.Size()
		// all parts except the last need to be greater than
		// the minimum allowed size (5 MiB)
if i < last && fi.Size() < backend.MinPartSize {
return res, "", s3err.GetAPIError(s3err.ErrEntityTooSmall)
}
b, err := s.meta.RetrieveAttribute(nil, bucket, partObjPath, etagkey)
etag := string(b)
if err != nil {
etag = ""
}
if parts[i].ETag == nil || !backend.AreEtagsSame(etag, *parts[i].ETag) {
return res, "", s3err.GetAPIError(s3err.ErrInvalidPart)
}
partChecksum, err := s.retrieveChecksums(nil, bucket, partObjPath)
if err != nil && !errors.Is(err, meta.ErrNoSuchKey) {
return res, "", fmt.Errorf("get part checksum: %w", err)
}
		// If a checksum has been provided on mp initialization
err = validatePartChecksum(partChecksum, part)
if err != nil {
return res, "", err
}
}
if input.MpuObjectSize != nil && totalsize != *input.MpuObjectSize {
return res, "", s3err.GetIncorrectMpObjectSizeErr(totalsize, *input.MpuObjectSize)
}
	// use totalsize=0 because we won't be writing to the file, only moving
// extents around. so we dont want to fallocate this.
f, err := s.openTmpFile(filepath.Join(bucket, metaTmpDir), bucket, object, 0, acct)
if err != nil {
if errors.Is(err, syscall.EDQUOT) {
return res, "", s3err.GetAPIError(s3err.ErrQuotaExceeded)
}
return res, "", fmt.Errorf("open temp file: %w", err)
}
defer f.cleanup()
for _, part := range parts {
if part.PartNumber == nil || *part.PartNumber < 1 {
return res, "", s3err.GetAPIError(s3err.ErrInvalidPart)
}
partObjPath := filepath.Join(objdir, uploadID, fmt.Sprintf("%v", *part.PartNumber))
fullPartPath := filepath.Join(bucket, partObjPath)
pf, err := os.Open(fullPartPath)
if err != nil {
return res, "", fmt.Errorf("open part %v: %v", *part.PartNumber, err)
}
		// scoutfs move data is a metadata-only operation that moves the data
		// extent references from the source, appending to the destination.
		// this needs to be 4k aligned.
err = moveData(pf, f.File())
pf.Close()
if err != nil {
return res, "", fmt.Errorf("move blocks part %v: %v", *part.PartNumber, err)
}
}
userMetaData := make(map[string]string)
upiddir := filepath.Join(objdir, uploadID)
objMeta := s.loadUserMetaData(bucket, upiddir, userMetaData)
err = s.storeObjectMetadata(f.File(), bucket, object, objMeta)
if err != nil {
return res, "", err
}
objname := filepath.Join(bucket, object)
dir := filepath.Dir(objname)
if dir != "" {
uid, gid, doChown := s.getChownIDs(acct)
err = backend.MkdirAll(dir, uid, gid, doChown, s.newDirPerm)
if err != nil {
return res, "", err
}
}
for k, v := range userMetaData {
err = s.meta.StoreAttribute(f.File(), bucket, object, fmt.Sprintf("%v.%v", metaHdr, k), []byte(v))
if err != nil {
return res, "", fmt.Errorf("set user attr %q: %w", k, err)
}
}
// load and set tagging
tagging, err := s.meta.RetrieveAttribute(nil, bucket, upiddir, tagHdr)
if err != nil && !errors.Is(err, meta.ErrNoSuchKey) {
return res, "", fmt.Errorf("get object tagging: %w", err)
}
if err == nil {
err := s.meta.StoreAttribute(f.File(), bucket, object, tagHdr, tagging)
if err != nil {
return res, "", fmt.Errorf("set object tagging: %w", err)
}
}
// load and set legal hold
lHold, err := s.meta.RetrieveAttribute(nil, bucket, upiddir, objectLegalHoldKey)
if err != nil && !errors.Is(err, meta.ErrNoSuchKey) {
return res, "", fmt.Errorf("get object legal hold: %w", err)
}
if err == nil {
err := s.meta.StoreAttribute(f.File(), bucket, object, objectLegalHoldKey, lHold)
if err != nil {
return res, "", fmt.Errorf("set object legal hold: %w", err)
}
}
// load and set retention
ret, err := s.meta.RetrieveAttribute(nil, bucket, upiddir, objectRetentionKey)
if err != nil && !errors.Is(err, meta.ErrNoSuchKey) {
return res, "", fmt.Errorf("get object retention: %w", err)
}
if err == nil {
err := s.meta.StoreAttribute(f.File(), bucket, object, objectRetentionKey, ret)
if err != nil {
return res, "", fmt.Errorf("set object retention: %w", err)
}
}
// Calculate s3 compatible md5sum for complete multipart.
s3MD5 := backend.GetMultipartMD5(parts)
err = s.meta.StoreAttribute(f.File(), bucket, object, etagkey, []byte(s3MD5))
if err != nil {
return res, "", fmt.Errorf("set etag attr: %w", err)
}
err = f.link()
if err != nil {
return res, "", fmt.Errorf("link object in namespace: %w", err)
}
// cleanup tmp dirs
os.RemoveAll(filepath.Join(bucket, upiddir))
// use Remove for objdir in case there are still other uploads
// for same object name outstanding
os.Remove(filepath.Join(bucket, objdir))
return s3response.CompleteMultipartUploadResult{
Bucket: &bucket,
ETag: &s3MD5,
Key: &object,
}, "", nil
}
func (s *ScoutFS) storeObjectMetadata(f *os.File, bucket, object string, m objectMetadata) error {
if getString(m.ContentType) != "" {
err := s.meta.StoreAttribute(f, bucket, object, contentTypeHdr, []byte(*m.ContentType))
if err != nil {
return fmt.Errorf("set content-type: %w", err)
}
}
if getString(m.ContentEncoding) != "" {
err := s.meta.StoreAttribute(f, bucket, object, contentEncHdr, []byte(*m.ContentEncoding))
if err != nil {
return fmt.Errorf("set content-encoding: %w", err)
}
}
if getString(m.ContentDisposition) != "" {
err := s.meta.StoreAttribute(f, bucket, object, contentDispHdr, []byte(*m.ContentDisposition))
if err != nil {
return fmt.Errorf("set content-disposition: %w", err)
}
}
if getString(m.ContentLanguage) != "" {
err := s.meta.StoreAttribute(f, bucket, object, contentLangHdr, []byte(*m.ContentLanguage))
if err != nil {
return fmt.Errorf("set content-language: %w", err)
}
}
if getString(m.CacheControl) != "" {
err := s.meta.StoreAttribute(f, bucket, object, cacheCtrlHdr, []byte(*m.CacheControl))
if err != nil {
return fmt.Errorf("set cache-control: %w", err)
}
}
if getString(m.Expires) != "" {
err := s.meta.StoreAttribute(f, bucket, object, expiresHdr, []byte(*m.Expires))
if err != nil {
return fmt.Errorf("set cache-control: %w", err)
}
}
return nil
}
func validatePartChecksum(checksum s3response.Checksum, part types.CompletedPart) error {
n := numberOfChecksums(part)
if n > 1 {
return s3err.GetAPIError(s3err.ErrInvalidChecksumPart)
}
if checksum.Algorithm == "" {
if n != 0 {
return s3err.GetAPIError(s3err.ErrInvalidPart)
}
return nil
}
algo := checksum.Algorithm
if n == 0 {
return s3err.APIError{
Code: "InvalidRequest",
Description: fmt.Sprintf("The upload was created using a %v checksum. The complete request must include the checksum for each part. It was missing for part %v in the request.", strings.ToLower(string(algo)), *part.PartNumber),
HTTPStatusCode: http.StatusBadRequest,
}
}
for _, cs := range []struct {
checksum *string
expectedChecksum string
algo types.ChecksumAlgorithm
}{
{part.ChecksumCRC32, getString(checksum.CRC32), types.ChecksumAlgorithmCrc32},
{part.ChecksumCRC32C, getString(checksum.CRC32C), types.ChecksumAlgorithmCrc32c},
{part.ChecksumSHA1, getString(checksum.SHA1), types.ChecksumAlgorithmSha1},
{part.ChecksumSHA256, getString(checksum.SHA256), types.ChecksumAlgorithmSha256},
{part.ChecksumCRC64NVME, getString(checksum.CRC64NVME), types.ChecksumAlgorithmCrc64nvme},
} {
if cs.checksum == nil {
continue
}
if !utils.IsValidChecksum(*cs.checksum, cs.algo) {
return s3err.GetAPIError(s3err.ErrInvalidChecksumPart)
}
if *cs.checksum != cs.expectedChecksum {
if algo == cs.algo {
return s3err.GetAPIError(s3err.ErrInvalidPart)
}
return s3err.APIError{
Code: "BadDigest",
Description: fmt.Sprintf("The %v you specified for part %v did not match what we received.", strings.ToLower(string(cs.algo)), *part.PartNumber),
HTTPStatusCode: http.StatusBadRequest,
}
}
}
return nil
}
func numberOfChecksums(part types.CompletedPart) int {
counter := 0
if getString(part.ChecksumCRC32) != "" {
counter++
}
if getString(part.ChecksumCRC32C) != "" {
counter++
}
if getString(part.ChecksumSHA1) != "" {
counter++
}
if getString(part.ChecksumSHA256) != "" {
counter++
}
if getString(part.ChecksumCRC64NVME) != "" {
counter++
}
return counter
}
func (s *ScoutFS) checkUploadIDExists(bucket, object, uploadID string) ([32]byte, error) {
sum := sha256.Sum256([]byte(object))
objdir := filepath.Join(bucket, metaTmpMultipartDir, fmt.Sprintf("%x", sum))
_, err := os.Stat(filepath.Join(objdir, uploadID))
if errors.Is(err, fs.ErrNotExist) {
return [32]byte{}, s3err.GetAPIError(s3err.ErrNoSuchUpload)
}
if err != nil {
return [32]byte{}, fmt.Errorf("stat upload: %w", err)
}
return sum, nil
}
type objectMetadata struct {
ContentType *string
ContentEncoding *string
ContentDisposition *string
ContentLanguage *string
CacheControl *string
Expires *string
}
// fill out the user metadata map with the metadata for the object
// and return the content type and encoding
func (s *ScoutFS) loadUserMetaData(bucket, object string, m map[string]string) objectMetadata {
ents, err := s.meta.ListAttributes(bucket, object)
if err != nil || len(ents) == 0 {
return objectMetadata{}
}
for _, e := range ents {
if !isValidMeta(e) {
continue
}
b, err := s.meta.RetrieveAttribute(nil, bucket, object, e)
if err != nil {
continue
}
if b == nil {
m[strings.TrimPrefix(e, fmt.Sprintf("%v.", metaHdr))] = ""
continue
}
m[strings.TrimPrefix(e, fmt.Sprintf("%v.", metaHdr))] = string(b)
}
var result objectMetadata
b, err := s.meta.RetrieveAttribute(nil, bucket, object, contentTypeHdr)
if err == nil {
result.ContentType = backend.GetPtrFromString(string(b))
}
b, err = s.meta.RetrieveAttribute(nil, bucket, object, contentEncHdr)
if err == nil {
result.ContentEncoding = backend.GetPtrFromString(string(b))
}
b, err = s.meta.RetrieveAttribute(nil, bucket, object, contentDispHdr)
if err == nil {
result.ContentDisposition = backend.GetPtrFromString(string(b))
}
b, err = s.meta.RetrieveAttribute(nil, bucket, object, contentLangHdr)
if err == nil {
result.ContentLanguage = backend.GetPtrFromString(string(b))
}
b, err = s.meta.RetrieveAttribute(nil, bucket, object, cacheCtrlHdr)
if err == nil {
result.CacheControl = backend.GetPtrFromString(string(b))
}
b, err = s.meta.RetrieveAttribute(nil, bucket, object, expiresHdr)
if err == nil {
result.Expires = backend.GetPtrFromString(string(b))
}
return result
}
func isValidMeta(val string) bool {
if strings.HasPrefix(val, metaHdr) {
return true
}
if strings.EqualFold(val, "Expires") {
return true
}
return false
return s.Posix.CompleteMultipartUploadWithCopy(ctx, input, moveData)
}
func (s *ScoutFS) HeadObject(ctx context.Context, input *s3.HeadObjectInput) (*s3.HeadObjectOutput, error) {
@@ -730,216 +243,47 @@ func (s *ScoutFS) GetObject(ctx context.Context, input *s3.GetObjectInput) (*s3.
}
func (s *ScoutFS) ListObjects(ctx context.Context, input *s3.ListObjectsInput) (s3response.ListObjectsResult, error) {
if input.Bucket == nil {
return s3response.ListObjectsResult{}, s3err.GetAPIError(s3err.ErrInvalidBucketName)
if s.glaciermode {
return s.Posix.ListObjectsParametrized(ctx, input, s.glacierFileToObj)
} else {
return s.Posix.ListObjects(ctx, input)
}
bucket := *input.Bucket
prefix := ""
if input.Prefix != nil {
prefix = *input.Prefix
}
marker := ""
if input.Marker != nil {
marker = *input.Marker
}
delim := ""
if input.Delimiter != nil {
delim = *input.Delimiter
}
maxkeys := int32(0)
if input.MaxKeys != nil {
maxkeys = *input.MaxKeys
}
_, err := os.Stat(bucket)
if errors.Is(err, fs.ErrNotExist) {
return s3response.ListObjectsResult{}, s3err.GetAPIError(s3err.ErrNoSuchBucket)
}
if err != nil {
return s3response.ListObjectsResult{}, fmt.Errorf("stat bucket: %w", err)
}
fileSystem := os.DirFS(bucket)
results, err := backend.Walk(ctx, fileSystem, prefix, delim, marker, maxkeys,
s.fileToObj(bucket), []string{metaTmpDir})
if err != nil {
return s3response.ListObjectsResult{}, fmt.Errorf("walk %v: %w", bucket, err)
}
return s3response.ListObjectsResult{
CommonPrefixes: results.CommonPrefixes,
Contents: results.Objects,
Delimiter: backend.GetPtrFromString(delim),
Marker: backend.GetPtrFromString(marker),
NextMarker: backend.GetPtrFromString(results.NextMarker),
Prefix: backend.GetPtrFromString(prefix),
IsTruncated: &results.Truncated,
MaxKeys: &maxkeys,
Name: &bucket,
}, nil
}
func (s *ScoutFS) ListObjectsV2(ctx context.Context, input *s3.ListObjectsV2Input) (s3response.ListObjectsV2Result, error) {
if input.Bucket == nil {
return s3response.ListObjectsV2Result{}, s3err.GetAPIError(s3err.ErrInvalidBucketName)
if s.glaciermode {
return s.Posix.ListObjectsV2Parametrized(ctx, input, s.glacierFileToObj)
} else {
return s.Posix.ListObjectsV2(ctx, input)
}
bucket := *input.Bucket
prefix := ""
if input.Prefix != nil {
prefix = *input.Prefix
}
marker := ""
if input.ContinuationToken != nil {
if input.StartAfter != nil {
marker = max(*input.StartAfter, *input.ContinuationToken)
} else {
marker = *input.ContinuationToken
}
}
delim := ""
if input.Delimiter != nil {
delim = *input.Delimiter
}
maxkeys := int32(0)
if input.MaxKeys != nil {
maxkeys = *input.MaxKeys
}
_, err := os.Stat(bucket)
if errors.Is(err, fs.ErrNotExist) {
return s3response.ListObjectsV2Result{}, s3err.GetAPIError(s3err.ErrNoSuchBucket)
}
if err != nil {
return s3response.ListObjectsV2Result{}, fmt.Errorf("stat bucket: %w", err)
}
fileSystem := os.DirFS(bucket)
results, err := backend.Walk(ctx, fileSystem, prefix, delim, marker, int32(maxkeys),
s.fileToObj(bucket), []string{metaTmpDir})
if err != nil {
return s3response.ListObjectsV2Result{}, fmt.Errorf("walk %v: %w", bucket, err)
}
count := int32(len(results.Objects))
return s3response.ListObjectsV2Result{
CommonPrefixes: results.CommonPrefixes,
Contents: results.Objects,
IsTruncated: &results.Truncated,
MaxKeys: &maxkeys,
Name: &bucket,
KeyCount: &count,
Delimiter: backend.GetPtrFromString(delim),
ContinuationToken: backend.GetPtrFromString(marker),
NextContinuationToken: backend.GetPtrFromString(results.NextMarker),
Prefix: backend.GetPtrFromString(prefix),
StartAfter: backend.GetPtrFromString(*input.StartAfter),
}, nil
}
func (s *ScoutFS) fileToObj(bucket string) backend.GetObjFunc {
// FileToObj function for ListObject calls that adds a Glacier storage class if the file is offline
func (s *ScoutFS) glacierFileToObj(bucket string, fetchOwner bool) backend.GetObjFunc {
posixFileToObj := s.Posix.FileToObj(bucket, fetchOwner)
return func(path string, d fs.DirEntry) (s3response.Object, error) {
res, err := posixFileToObj(path, d)
if err != nil || d.IsDir() {
return res, err
}
objPath := filepath.Join(bucket, path)
if d.IsDir() {
// directory object only happens if directory empty
// check to see if this is a directory object by checking etag
etagBytes, err := s.meta.RetrieveAttribute(nil, bucket, path, etagkey)
if errors.Is(err, meta.ErrNoSuchKey) || errors.Is(err, fs.ErrNotExist) {
return s3response.Object{}, backend.ErrSkipObj
}
if err != nil {
return s3response.Object{}, fmt.Errorf("get etag: %w", err)
}
etag := string(etagBytes)
fi, err := d.Info()
if errors.Is(err, fs.ErrNotExist) {
return s3response.Object{}, backend.ErrSkipObj
}
if err != nil {
return s3response.Object{}, fmt.Errorf("get fileinfo: %w", err)
}
size := int64(0)
mtime := fi.ModTime()
return s3response.Object{
ETag: &etag,
Key: &path,
LastModified: &mtime,
Size: &size,
StorageClass: types.ObjectStorageClassStandard,
}, nil
}
		// Retrieve the object checksum algorithm
checksums, err := s.retrieveChecksums(nil, bucket, path)
if err != nil && !errors.Is(err, meta.ErrNoSuchKey) {
return s3response.Object{}, backend.ErrSkipObj
}
// file object, get object info and fill out object data
b, err := s.meta.RetrieveAttribute(nil, bucket, path, etagkey)
if errors.Is(err, fs.ErrNotExist) {
return s3response.Object{}, backend.ErrSkipObj
}
if err != nil && !errors.Is(err, meta.ErrNoSuchKey) {
return s3response.Object{}, fmt.Errorf("get etag: %w", err)
}
// note: meta.ErrNoSuchKey will return etagBytes = []byte{}
		// so this will just set etag to "" if it's not already set
etag := string(b)
fi, err := d.Info()
		// Check if there are any offline extents associated with this file.
// If so, we will return the Glacier storage class
st, err := statMore(objPath)
if errors.Is(err, fs.ErrNotExist) {
return s3response.Object{}, backend.ErrSkipObj
}
if err != nil {
return s3response.Object{}, fmt.Errorf("get fileinfo: %w", err)
return s3response.Object{}, fmt.Errorf("stat more: %w", err)
}
sc := types.ObjectStorageClassStandard
if s.glaciermode {
			// Check if there are any offline extents associated with this file.
// If so, we will return the InvalidObjectState error.
st, err := statMore(objPath)
if errors.Is(err, fs.ErrNotExist) {
return s3response.Object{}, backend.ErrSkipObj
}
if err != nil {
return s3response.Object{}, fmt.Errorf("stat more: %w", err)
}
if st.Offline_blocks != 0 {
sc = types.ObjectStorageClassGlacier
}
if st.Offline_blocks != 0 {
res.StorageClass = types.ObjectStorageClassGlacier
}
size := fi.Size()
mtime := fi.ModTime()
return s3response.Object{
ETag: &etag,
Key: &path,
LastModified: &mtime,
Size: &size,
StorageClass: sc,
ChecksumAlgorithm: []types.ChecksumAlgorithm{checksums.Algorithm},
ChecksumType: checksums.Type,
}, nil
return res, nil
}
}
func (s *ScoutFS) retrieveChecksums(f *os.File, bucket, object string) (checksums s3response.Checksum, err error) {
checksumsAtr, err := s.meta.RetrieveAttribute(f, bucket, object, checksumsKey)
if err != nil {
return checksums, err
}
err = json.Unmarshal(checksumsAtr, &checksums)
return checksums, err
}
// RestoreObject will set stage request on file if offline and do nothing if
// file is online
func (s *ScoutFS) RestoreObject(_ context.Context, input *s3.RestoreObjectInput) error {
@@ -965,13 +309,6 @@ func (s *ScoutFS) RestoreObject(_ context.Context, input *s3.RestoreObjectInput)
return nil
}
func getString(str *string) string {
if str == nil {
return ""
}
return *str
}
func isStaging(objname string) (bool, error) {
b, err := xattr.Get(objname, flagskey)
if err != nil && !isNoAttr(err) {

View File

@@ -17,32 +17,24 @@
package scoutfs
import (
"errors"
"fmt"
"io/fs"
"os"
"path/filepath"
"strconv"
"syscall"
"golang.org/x/sys/unix"
"github.com/versity/scoutfs-go"
"github.com/versity/versitygw/auth"
"github.com/versity/versitygw/backend"
"github.com/versity/versitygw/backend/meta"
"github.com/versity/versitygw/backend/posix"
"github.com/versity/versitygw/s3err"
"github.com/versity/versitygw/debuglogger"
)
func New(rootdir string, opts ScoutfsOpts) (*ScoutFS, error) {
metastore := meta.XattrMeta{}
p, err := posix.New(rootdir, metastore, posix.PosixOpts{
ChownUID: opts.ChownUID,
ChownGID: opts.ChownGID,
BucketLinks: opts.BucketLinks,
NewDirPerm: opts.NewDirPerm,
ChownUID: opts.ChownUID,
ChownGID: opts.ChownGID,
BucketLinks: opts.BucketLinks,
NewDirPerm: opts.NewDirPerm,
VersioningDir: opts.VersioningDir,
})
if err != nil {
return nil, err
@@ -57,148 +49,30 @@ func New(rootdir string, opts ScoutfsOpts) (*ScoutFS, error) {
Posix: p,
rootfd: f,
rootdir: rootdir,
meta: metastore,
chownuid: opts.ChownUID,
chowngid: opts.ChownGID,
glaciermode: opts.GlacierMode,
newDirPerm: opts.NewDirPerm,
disableNoArchive: opts.DisableNoArchive,
}, nil
}
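For orientation, here is a minimal sketch of constructing the ScoutFS backend with the options this diff touches. The root directory, option values, and the backend/scoutfs import path are illustrative assumptions, not taken from this changeset.

package main

import (
	"log"

	"github.com/versity/versitygw/backend/scoutfs"
)

func main() {
	// Hypothetical paths and option values; only the option names appear in this diff.
	be, err := scoutfs.New("/mnt/scoutfs", scoutfs.ScoutfsOpts{
		GlacierMode:   true,            // report offline files with the GLACIER storage class
		VersioningDir: "/mnt/versions", // must not live under the backend root directory
		NewDirPerm:    0755,
	})
	if err != nil {
		log.Fatalf("init scoutfs backend: %v", err)
	}
	_ = be // hand the backend to the gateway in real use
}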
const procfddir = "/proc/self/fd"
type tmpfile struct {
f *os.File
bucket string
objname string
size int64
needsChown bool
uid int
gid int
newDirPerm fs.FileMode
}
var (
defaultFilePerm uint32 = 0644
)
func (s *ScoutFS) openTmpFile(dir, bucket, obj string, size int64, acct auth.Account) (*tmpfile, error) {
uid, gid, doChown := s.getChownIDs(acct)
// O_TMPFILE allows for a file handle to an unnamed file in the filesystem.
// This can help reduce contention within the namespace (parent directories),
// etc. And will auto cleanup the inode on close if we never link this
// file descriptor into the namespace.
fd, err := unix.Open(dir, unix.O_RDWR|unix.O_TMPFILE|unix.O_CLOEXEC, defaultFilePerm)
if err != nil {
if errors.Is(err, syscall.EROFS) {
return nil, s3err.GetAPIError(s3err.ErrMethodNotAllowed)
}
return nil, err
}
// for O_TMPFILE, filename is /proc/self/fd/<fd> to be used
// later to link file into namespace
f := os.NewFile(uintptr(fd), filepath.Join(procfddir, strconv.Itoa(fd)))
tmp := &tmpfile{
f: f,
bucket: bucket,
objname: obj,
size: size,
needsChown: doChown,
uid: uid,
gid: gid,
newDirPerm: s.newDirPerm,
}
if doChown {
err := f.Chown(uid, gid)
if err != nil {
return nil, fmt.Errorf("set temp file ownership: %w", err)
}
}
return tmp, nil
}
func (tmp *tmpfile) link() error {
// We use Linkat/Rename as the atomic operation for object puts. The
// upload is written to a temp (or unnamed/O_TMPFILE) file to not conflict
// with any other simultaneous uploads. The final operation is to move the
// temp file into place for the object. This ensures the object semantics
// of last upload completed wins and is not some combination of writes
// from simultaneous uploads.
objPath := filepath.Join(tmp.bucket, tmp.objname)
err := os.Remove(objPath)
if err != nil && !errors.Is(err, fs.ErrNotExist) {
return fmt.Errorf("remove stale path: %w", err)
}
dir := filepath.Dir(objPath)
err = backend.MkdirAll(dir, tmp.uid, tmp.gid, tmp.needsChown, tmp.newDirPerm)
if err != nil {
return fmt.Errorf("make parent dir: %w", err)
}
procdir, err := os.Open(procfddir)
if err != nil {
return fmt.Errorf("open proc dir: %w", err)
}
defer procdir.Close()
dirf, err := os.Open(dir)
if err != nil {
return fmt.Errorf("open parent dir: %w", err)
}
defer dirf.Close()
for {
err = unix.Linkat(int(procdir.Fd()), filepath.Base(tmp.f.Name()),
int(dirf.Fd()), filepath.Base(objPath), unix.AT_SYMLINK_FOLLOW)
if errors.Is(err, fs.ErrExist) {
err := os.Remove(objPath)
if err != nil && !errors.Is(err, fs.ErrNotExist) {
return fmt.Errorf("remove stale path: %w", err)
}
continue
}
if err != nil {
return fmt.Errorf("link tmpfile: %w", err)
}
break
}
err = tmp.f.Close()
if err != nil {
return fmt.Errorf("close tmpfile: %w", err)
}
return nil
}
func (tmp *tmpfile) Write(b []byte) (int, error) {
if int64(len(b)) > tmp.size {
return 0, fmt.Errorf("write exceeds content length %v", tmp.size)
}
n, err := tmp.f.Write(b)
tmp.size -= int64(n)
return n, err
}
func (tmp *tmpfile) cleanup() {
tmp.f.Close()
}
func (tmp *tmpfile) File() *os.File {
return tmp.f
}
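Taken together, these helpers give object puts an atomic write path: write into an unnamed O_TMPFILE handle, then link it into place. Below is a minimal sketch of a caller inside the scoutfs package, assuming the content length is known up front and the bucket directory is used as the temp dir (the real PutObject plumbing is not part of this diff):

func putAtomically(s *ScoutFS, bucket, object string, body io.Reader, size int64, acct auth.Account) error {
	tmp, err := s.openTmpFile(filepath.Join(s.rootdir, bucket), bucket, object, size, acct)
	if err != nil {
		return err
	}
	// cleanup closes the unnamed temp file; the inode is dropped automatically
	// if it was never linked into the namespace.
	defer tmp.cleanup()

	// Write enforces the declared content length.
	if _, err := io.Copy(tmp, body); err != nil {
		return err
	}

	// link publishes the object in a single Linkat step, so concurrent uploads
	// resolve to last-completed-upload-wins.
	return tmp.link()
}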
func moveData(from *os.File, to *os.File) error {
// May fail if the files are not 4K aligned; check for alignment
ffi, err := from.Stat()
if err != nil {
return fmt.Errorf("stat from: %v", err)
}
tfi, err := to.Stat()
if err != nil {
return fmt.Errorf("stat to: %v", err)
}
if ffi.Size()%4096 != 0 || tfi.Size()%4096 != 0 {
return os.ErrInvalid
}
err = scoutfs.MoveData(from, to)
if err != nil {
debuglogger.Logf("ScoutFs MoveData failed: %v", err)
}
return err
}
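Because scoutfs.MoveData only works on 4K-aligned files, a caller needs a fallback when moveData reports os.ErrInvalid. A hedged sketch of that pattern (the actual call site is not shown in this diff):

// copyOrMove tries the zero-copy scoutfs move first and falls back to a plain
// byte copy when the files are not 4K aligned. Illustrative only.
func copyOrMove(from, to *os.File) error {
	err := moveData(from, to)
	if err == nil {
		return nil
	}
	if !errors.Is(err, os.ErrInvalid) {
		return err
	}
	if _, err := from.Seek(0, io.SeekStart); err != nil {
		return err
	}
	_, err = io.Copy(to, from)
	return err
}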
func statMore(path string) (stat, error) {

View File

@@ -20,44 +20,16 @@ import (
"errors"
"fmt"
"os"
"github.com/versity/versitygw/auth"
)
func New(rootdir string, opts ScoutfsOpts) (*ScoutFS, error) {
return nil, fmt.Errorf("scoutfs only available on linux")
}
type tmpfile struct{}
var (
errNotSupported = errors.New("not supported")
)
func (s *ScoutFS) openTmpFile(_, _, _ string, _ int64, _ auth.Account) (*tmpfile, error) {
// make these look used for static check
_ = s.chownuid
_ = s.chowngid
_ = s.euid
_ = s.egid
return nil, errNotSupported
}
func (tmp *tmpfile) link() error {
return errNotSupported
}
func (tmp *tmpfile) Write(b []byte) (int, error) {
return 0, errNotSupported
}
func (tmp *tmpfile) cleanup() {
}
func (tmp *tmpfile) File() *os.File {
return nil
}
func moveData(_, _ *os.File) error {
return errNotSupported
}

File diff suppressed because it is too large

View File

@@ -112,6 +112,22 @@ func TestWalk(t *testing.T) {
}},
},
},
{
name: "max objs",
delimiter: "/",
prefix: "photos/2006/February/",
maxObjs: 2,
expected: backend.WalkResults{
Objects: []s3response.Object{
{
Key: backend.GetPtrFromString("photos/2006/February/sample2.jpg"),
},
{
Key: backend.GetPtrFromString("photos/2006/February/sample3.jpg"),
},
},
},
},
},
},
{
@@ -226,7 +242,7 @@ func TestWalk(t *testing.T) {
tt.fsys, tc.prefix, tc.delimiter, tc.marker, tc.maxObjs,
tt.getobj, []string{})
if err != nil {
t.Errorf("tc.name: walk: %v", err)
t.Errorf("%v: walk: %v", tc.name, err)
}
compareResults(tc.name, res, tc.expected, t)
@@ -376,3 +392,702 @@ func TestWalkStop(t *testing.T) {
t.Fatalf("unexpected error: %v", err)
}
}
// TestOrderWalk tests the lexicographic ordering of the object names
// for the case where readdir sort order of a directory is different
// than the lexicographic ordering of the full paths. The below has
// a readdir sort order for dir1/:
// a, a.b
// but when the entries are compared as full key paths (with the trailing "/"
// appended for directories), "." sorts before "/", so the lexicographic ordering is:
// a.b/, a/
func TestOrderWalk(t *testing.T) {
tests := []walkTest{
{
fsys: fstest.MapFS{
"dir1/a/file1": {},
"dir1/a/file2": {},
"dir1/a/file3": {},
"dir1/a.b/file1": {},
"dir1/a.b/file2": {},
},
getobj: getObj,
cases: []testcase{
{
name: "order test",
maxObjs: 1000,
prefix: "dir1/",
expected: backend.WalkResults{
Objects: []s3response.Object{
{Key: backend.GetPtrFromString("dir1/")},
{Key: backend.GetPtrFromString("dir1/a.b/")},
{Key: backend.GetPtrFromString("dir1/a.b/file1")},
{Key: backend.GetPtrFromString("dir1/a.b/file2")},
{Key: backend.GetPtrFromString("dir1/a/")},
{Key: backend.GetPtrFromString("dir1/a/file1")},
{Key: backend.GetPtrFromString("dir1/a/file2")},
{Key: backend.GetPtrFromString("dir1/a/file3")},
},
},
},
},
},
{
fsys: fstest.MapFS{
"dir|1/a/file1": {},
"dir|1/a/file2": {},
"dir|1/a/file3": {},
"dir|1/a.b/file1": {},
"dir|1/a.b/file2": {},
},
getobj: getObj,
cases: []testcase{
{
name: "order test delim",
maxObjs: 1000,
delimiter: "|",
prefix: "dir|",
expected: backend.WalkResults{
Objects: []s3response.Object{
{
Key: backend.GetPtrFromString("dir|1/a.b/file1"),
},
{
Key: backend.GetPtrFromString("dir|1/a.b/file2"),
},
{
Key: backend.GetPtrFromString("dir|1/a/file1"),
},
{
Key: backend.GetPtrFromString("dir|1/a/file2"),
},
{
Key: backend.GetPtrFromString("dir|1/a/file3"),
},
},
},
},
},
},
{
fsys: fstest.MapFS{
"a": &fstest.MapFile{Mode: fs.ModeDir},
},
getobj: getObj,
cases: []testcase{
{
name: "single dir obj",
maxObjs: 1000,
delimiter: "/",
prefix: "a",
expected: backend.WalkResults{
CommonPrefixes: []types.CommonPrefix{
{
Prefix: backend.GetPtrFromString("a/"),
},
},
},
},
{
name: "single dir obj",
maxObjs: 1000,
delimiter: "/",
prefix: "a/",
expected: backend.WalkResults{
Objects: []s3response.Object{
{
Key: backend.GetPtrFromString("a/"),
},
},
},
},
},
},
}
for _, tt := range tests {
for _, tc := range tt.cases {
res, err := backend.Walk(context.Background(),
tt.fsys, tc.prefix, tc.delimiter, tc.marker, tc.maxObjs,
tt.getobj, []string{})
if err != nil {
t.Errorf("%v: walk: %v", tc.name, err)
}
compareResultsOrdered(tc.name, res, tc.expected, t)
}
}
}
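The ordering that TestOrderWalk checks falls out of byte-wise string comparison: '.' (0x2e) sorts before '/' (0x2f), so once the directory separator is appended, every key under "a.b" precedes every key under "a". A stand-alone illustration:

package main

import (
	"fmt"
	"sort"
)

func main() {
	// readdir order inside dir1/ is: a, a.b
	// but comparing full keys (with the trailing "/" for directories)
	// puts "dir1/a.b/" before "dir1/a/" because '.' < '/'.
	keys := []string{"dir1/a/", "dir1/a.b/"}
	sort.Strings(keys)
	fmt.Println(keys) // [dir1/a.b/ dir1/a/]
}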
type markerTest struct {
fsys fs.FS
getobj backend.GetObjFunc
cases []markertestcase
}
type markertestcase struct {
name string
prefix string
delimiter string
marker string
maxObjs int32
expected []backend.WalkResults
}
func TestMarker(t *testing.T) {
tests := []markerTest{
{
fsys: fstest.MapFS{
"dir/sample2.jpg": {},
"dir/sample3.jpg": {},
"dir/sample4.jpg": {},
"dir/sample5.jpg": {},
},
getobj: getObj,
cases: []markertestcase{
{
name: "multi page marker",
delimiter: "/",
prefix: "dir/",
maxObjs: 2,
expected: []backend.WalkResults{
{
Objects: []s3response.Object{
{
Key: backend.GetPtrFromString("dir/sample2.jpg"),
},
{
Key: backend.GetPtrFromString("dir/sample3.jpg"),
},
},
Truncated: true,
},
{
Objects: []s3response.Object{
{
Key: backend.GetPtrFromString("dir/sample4.jpg"),
},
{
Key: backend.GetPtrFromString("dir/sample5.jpg"),
},
},
},
},
},
},
},
{
fsys: fstest.MapFS{
"dir1/subdir/file.txt": {},
"dir1/subdir.ext": {},
"dir1/subdir1.ext": {},
"dir1/subdir2.ext": {},
},
getobj: getObj,
cases: []markertestcase{
{
name: "integration test case 1",
maxObjs: 2,
delimiter: "/",
prefix: "dir1/",
expected: []backend.WalkResults{
{
Objects: []s3response.Object{
{
Key: backend.GetPtrFromString("dir1/subdir.ext"),
},
},
CommonPrefixes: []types.CommonPrefix{
{
Prefix: backend.GetPtrFromString("dir1/subdir/"),
},
},
Truncated: true,
},
{
Objects: []s3response.Object{
{
Key: backend.GetPtrFromString("dir1/subdir1.ext"),
},
{
Key: backend.GetPtrFromString("dir1/subdir2.ext"),
},
},
},
},
},
},
},
{
fsys: fstest.MapFS{
"asdf": {},
"boo/bar": {},
"boo/baz/xyzzy": {},
"cquux/thud": {},
"cquux/bla": {},
},
getobj: getObj,
cases: []markertestcase{
{
name: "integration test case2",
maxObjs: 1,
delimiter: "/",
marker: "boo/",
expected: []backend.WalkResults{
{
Objects: []s3response.Object{},
CommonPrefixes: []types.CommonPrefix{
{
Prefix: backend.GetPtrFromString("cquux/"),
},
},
},
},
},
},
},
{
fsys: fstest.MapFS{
"bar": {},
"baz": {},
"foo": {},
},
getobj: getObj,
cases: []markertestcase{
{
name: "exact limit count",
maxObjs: 3,
expected: []backend.WalkResults{
{
Objects: []s3response.Object{
{
Key: backend.GetPtrFromString("bar"),
},
{
Key: backend.GetPtrFromString("baz"),
},
{
Key: backend.GetPtrFromString("foo"),
},
},
},
},
},
},
},
{
fsys: fstest.MapFS{
"d1/f1": {},
"d2/f2": {},
"d3/f3": {},
"d4/f4": {},
},
getobj: getObj,
cases: []markertestcase{
{
name: "limited common prefix",
maxObjs: 3,
delimiter: "/",
expected: []backend.WalkResults{
{
CommonPrefixes: []types.CommonPrefix{
{
Prefix: backend.GetPtrFromString("d1/"),
},
{
Prefix: backend.GetPtrFromString("d2/"),
},
{
Prefix: backend.GetPtrFromString("d3/"),
},
},
Truncated: true,
},
{
CommonPrefixes: []types.CommonPrefix{
{
Prefix: backend.GetPtrFromString("d4/"),
},
},
},
},
},
},
},
}
for _, tt := range tests {
for _, tc := range tt.cases {
marker := tc.marker
for i, page := range tc.expected {
res, err := backend.Walk(context.Background(),
tt.fsys, tc.prefix, tc.delimiter, marker, tc.maxObjs,
tt.getobj, []string{})
if err != nil {
t.Errorf("%v: walk: %v", tc.name, err)
}
marker = res.NextMarker
compareResultsOrdered(tc.name, res, page, t)
if res.Truncated != page.Truncated {
t.Errorf("%v page %v expected truncated %v, got %v",
tc.name, i, page.Truncated, res.Truncated)
}
}
}
}
}
func compareResultsOrdered(name string, got, wanted backend.WalkResults, t *testing.T) {
if !compareObjectsOrdered(got.Objects, wanted.Objects) {
t.Errorf("%v: unexpected object, got %v wanted %v",
name,
printObjects(got.Objects),
printObjects(wanted.Objects))
}
if !comparePrefixesOrdered(got.CommonPrefixes, wanted.CommonPrefixes) {
t.Errorf("%v: unexpected prefix, got %v wanted %v",
name,
printCommonPrefixes(got.CommonPrefixes),
printCommonPrefixes(wanted.CommonPrefixes))
}
}
func compareObjectsOrdered(a, b []s3response.Object) bool {
if len(a) == 0 && len(b) == 0 {
return true
}
if len(a) != len(b) {
return false
}
for i, obj := range a {
if *obj.Key != *b[i].Key {
return false
}
}
return true
}
func comparePrefixesOrdered(a, b []types.CommonPrefix) bool {
if len(a) == 0 && len(b) == 0 {
return true
}
if len(a) != len(b) {
return false
}
for i, cp := range a {
if *cp.Prefix != *b[i].Prefix {
return false
}
}
return true
}
// ---- Versioning Tests ----
// getVersionsTestFunc is a simple GetVersionsFunc implementation for tests that
// returns a single latest version for each file or directory encountered.
// Directories are reported with a trailing delimiter in the key to match the
// behavior of the non-versioned Walk tests where directory objects are listed.
func getVersionsTestFunc(path, versionIdMarker string, pastVersionIdMarker *bool, availableObjCount int, d fs.DirEntry) (*backend.ObjVersionFuncResult, error) {
// If we have no available slots left, signal truncation (should be rare in these tests)
if availableObjCount <= 0 {
return &backend.ObjVersionFuncResult{Truncated: true, NextVersionIdMarker: ""}, nil
}
key := path
if d.IsDir() {
key = key + "/"
}
ver := "v1"
latest := true
ov := s3response.ObjectVersion{Key: &key, VersionId: &ver, IsLatest: &latest}
return &backend.ObjVersionFuncResult{ObjectVersions: []s3response.ObjectVersion{ov}}, nil
}
// TestWalkVersions mirrors TestWalk but exercises WalkVersions and validates
// common prefixes and object versions for typical delimiter/prefix scenarios.
func TestWalkVersions(t *testing.T) {
fsys := fstest.MapFS{
"dir1/a/file1": {},
"dir1/a/file2": {},
"dir1/b/file3": {},
"rootfile": {},
}
// Without a delimiter, every directory and file becomes an object version
// via the test GetVersionsFunc (directories have trailing '/').
expected := backend.WalkVersioningResults{
ObjectVersions: []s3response.ObjectVersion{
{Key: backend.GetPtrFromString("dir1/")},
{Key: backend.GetPtrFromString("dir1/a/")},
{Key: backend.GetPtrFromString("dir1/a/file1")},
{Key: backend.GetPtrFromString("dir1/a/file2")},
{Key: backend.GetPtrFromString("dir1/b/")},
{Key: backend.GetPtrFromString("dir1/b/file3")},
{Key: backend.GetPtrFromString("rootfile")},
},
}
res, err := backend.WalkVersions(context.Background(), fsys, "", "", "", "", 1000, getVersionsTestFunc, []string{})
if err != nil {
t.Fatalf("walk versions: %v", err)
}
compareVersionResultsOrdered("simple versions no delimiter", res, expected, t)
}
// TestOrderWalkVersions mirrors TestOrderWalk, exercising ordering semantics for
// version listings (lexicographic ordering of directory and file version keys).
func TestOrderWalkVersions(t *testing.T) {
fsys := fstest.MapFS{
"dir1/a/file1": {},
"dir1/a/file2": {},
"dir1/a/file3": {},
"dir1/a.b/file1": {},
"dir1/a.b/file2": {},
}
// Expect lexicographic ordering similar to non-version walk when no delimiter.
expected := backend.WalkVersioningResults{
ObjectVersions: []s3response.ObjectVersion{
{Key: backend.GetPtrFromString("dir1/")},
{Key: backend.GetPtrFromString("dir1/a.b/")},
{Key: backend.GetPtrFromString("dir1/a.b/file1")},
{Key: backend.GetPtrFromString("dir1/a.b/file2")},
{Key: backend.GetPtrFromString("dir1/a/")},
{Key: backend.GetPtrFromString("dir1/a/file1")},
{Key: backend.GetPtrFromString("dir1/a/file2")},
{Key: backend.GetPtrFromString("dir1/a/file3")},
},
}
res, err := backend.WalkVersions(context.Background(), fsys, "dir1/", "", "", "", 1000, getVersionsTestFunc, []string{})
if err != nil {
t.Fatalf("order walk versions: %v", err)
}
compareVersionResultsOrdered("order versions no delimiter", res, expected, t)
}
// compareVersionResultsOrdered compares ordered slices of object versions and common prefixes
func compareVersionResultsOrdered(name string, got, wanted backend.WalkVersioningResults, t *testing.T) {
if !compareObjectVersionsOrdered(got.ObjectVersions, wanted.ObjectVersions) {
t.Errorf("%v: unexpected object versions, got %v wanted %v", name, printVersionObjects(got.ObjectVersions), printVersionObjects(wanted.ObjectVersions))
}
if !comparePrefixesOrdered(got.CommonPrefixes, wanted.CommonPrefixes) {
t.Errorf("%v: unexpected prefix, got %v wanted %v", name, printCommonPrefixes(got.CommonPrefixes), printCommonPrefixes(wanted.CommonPrefixes))
}
}
func compareObjectVersionsOrdered(a, b []s3response.ObjectVersion) bool {
if len(a) == 0 && len(b) == 0 {
return true
}
if len(a) != len(b) {
return false
}
for i, ov := range a {
if ov.Key == nil || b[i].Key == nil {
return false
}
if *ov.Key != *b[i].Key {
return false
}
}
return true
}
func printVersionObjects(list []s3response.ObjectVersion) string {
res := "["
for _, ov := range list {
var key string
if ov.Key == nil {
key = "<nil>"
} else {
key = *ov.Key
}
if res == "[" {
res = res + key
} else {
res = res + ", " + key
}
}
return res + "]"
}
// createMultiVersionFunc returns a GetVersionsFunc that simulates multiple
// versions per object, similar to the integration test behavior. It creates
// a deterministic set of version IDs for each file.
func createMultiVersionFunc(files map[string]int) backend.GetVersionsFunc {
// Pre-generate all versions for deterministic testing
versionedFiles := make(map[string][]s3response.ObjectVersion)
for path, versionCount := range files {
versions := make([]s3response.ObjectVersion, versionCount)
for i := range versionCount {
versionId := fmt.Sprintf("v%d", i+1)
isLatest := i == versionCount-1 // Last version is latest
key := path
versions[i] = s3response.ObjectVersion{
Key: &key,
VersionId: &versionId,
IsLatest: &isLatest,
}
}
// Reverse slice so latest comes first (reverse chronological order)
for i, j := 0, len(versions)-1; i < j; i, j = i+1, j-1 {
versions[i], versions[j] = versions[j], versions[i]
}
versionedFiles[path] = versions
}
return func(path, versionIdMarker string, pastVersionIdMarker *bool, availableObjCount int, d fs.DirEntry) (*backend.ObjVersionFuncResult, error) {
if availableObjCount <= 0 {
return &backend.ObjVersionFuncResult{Truncated: true}, nil
}
// Handle directories - just return a single directory version
if d.IsDir() {
key := path + "/"
ver := "v1"
latest := true
ov := s3response.ObjectVersion{Key: &key, VersionId: &ver, IsLatest: &latest}
return &backend.ObjVersionFuncResult{ObjectVersions: []s3response.ObjectVersion{ov}}, nil
}
// Get versions for this file
versions, exists := versionedFiles[path]
if !exists {
// No versions for this file, skip it
return &backend.ObjVersionFuncResult{}, backend.ErrSkipObj
}
// Handle version ID marker pagination
startIdx := 0
if versionIdMarker != "" && !*pastVersionIdMarker {
// Find the starting position after the marker
for i, version := range versions {
if *version.VersionId == versionIdMarker {
startIdx = i + 1
*pastVersionIdMarker = true
break
}
}
}
// Return available versions up to the limit
endIdx := min(startIdx+availableObjCount, len(versions))
result := &backend.ObjVersionFuncResult{
ObjectVersions: versions[startIdx:endIdx],
}
// Check if we need to truncate
if endIdx < len(versions) {
result.Truncated = true
result.NextVersionIdMarker = *versions[endIdx-1].VersionId
}
return result, nil
}
}
// TestWalkVersionsTruncated tests the pagination behavior of WalkVersions
// when there are multiple versions per object and the result is truncated.
// This mirrors the integration test ListObjectVersions_multiple_object_versions_truncated.
func TestWalkVersionsTruncated(t *testing.T) {
// Create filesystem with the same files as integration test
fsys := fstest.MapFS{
"foo": {},
"bar": {},
"baz": {},
}
// Define version counts per file (matching integration test)
versionCounts := map[string]int{
"foo": 4, // 4 versions
"bar": 3, // 3 versions
"baz": 5, // 5 versions
}
getVersionsFunc := createMultiVersionFunc(versionCounts)
// Test first page with limit of 5 (should be truncated)
maxKeys := 5
res1, err := backend.WalkVersions(context.Background(), fsys, "", "", "", "", maxKeys, getVersionsFunc, []string{})
if err != nil {
t.Fatalf("walk versions first page: %v", err)
}
// Verify first page results
if !res1.Truncated {
t.Error("expected first page to be truncated")
}
if len(res1.ObjectVersions) != maxKeys {
t.Errorf("expected %d versions in first page, got %d", maxKeys, len(res1.ObjectVersions))
}
// Expected order: bar (3 versions), baz (2 versions) - lexicographic order
expectedFirstPage := []string{"bar", "bar", "bar", "baz", "baz"}
if len(res1.ObjectVersions) != len(expectedFirstPage) {
t.Fatalf("first page length mismatch: expected %d, got %d", len(expectedFirstPage), len(res1.ObjectVersions))
}
for i, expected := range expectedFirstPage {
if res1.ObjectVersions[i].Key == nil || *res1.ObjectVersions[i].Key != expected {
t.Errorf("first page[%d]: expected key %s, got %v", i, expected, res1.ObjectVersions[i].Key)
}
}
// Verify next markers are set
if res1.NextMarker == "" {
t.Error("expected NextMarker to be set on truncated result")
}
if res1.NextVersionIdMarker == "" {
t.Error("expected NextVersionIdMarker to be set on truncated result")
}
// Test second page using markers
res2, err := backend.WalkVersions(context.Background(), fsys, "", "", res1.NextMarker, res1.NextVersionIdMarker, maxKeys, getVersionsFunc, []string{})
if err != nil {
t.Fatalf("walk versions second page: %v", err)
}
t.Logf("Second page: ObjectVersions=%d, Truncated=%v, NextMarker=%s, NextVersionIdMarker=%s",
len(res2.ObjectVersions), res2.Truncated, res2.NextMarker, res2.NextVersionIdMarker)
for i, ov := range res2.ObjectVersions {
t.Logf(" [%d] Key=%s, VersionId=%s", i, *ov.Key, *ov.VersionId)
}
// Verify second page results
// With maxKeys=5, we should have 3 pages total: 5 + 5 + 2 = 12
// Test third page if needed
var res3 backend.WalkVersioningResults
if res2.Truncated {
res3, err = backend.WalkVersions(context.Background(), fsys, "", "", res2.NextMarker, res2.NextVersionIdMarker, maxKeys, getVersionsFunc, []string{})
if err != nil {
t.Fatalf("walk versions third page: %v", err)
}
t.Logf("Third page: ObjectVersions=%d, Truncated=%v, NextMarker=%s, NextVersionIdMarker=%s",
len(res3.ObjectVersions), res3.Truncated, res3.NextMarker, res3.NextVersionIdMarker)
for i, ov := range res3.ObjectVersions {
t.Logf(" [%d] Key=%s, VersionId=%s", i, *ov.Key, *ov.VersionId)
}
}
// Verify total count across all pages
totalVersions := len(res1.ObjectVersions) + len(res2.ObjectVersions) + len(res3.ObjectVersions)
expectedTotal := versionCounts["foo"] + versionCounts["bar"] + versionCounts["baz"]
if totalVersions != expectedTotal {
t.Errorf("total versions mismatch: expected %d, got %d", expectedTotal, totalVersions)
}
}

View File

@@ -29,6 +29,7 @@ import (
"github.com/urfave/cli/v2"
"github.com/versity/versitygw/auth"
"github.com/versity/versitygw/backend"
"github.com/versity/versitygw/debuglogger"
"github.com/versity/versitygw/metrics"
"github.com/versity/versitygw/s3api"
"github.com/versity/versitygw/s3api/middlewares"
@@ -45,6 +46,8 @@ var (
certFile, keyFile string
kafkaURL, kafkaTopic, kafkaKey string
natsURL, natsTopic string
rabbitmqURL, rabbitmqExchange string
rabbitmqRoutingKey string
eventWebhookURL string
eventConfigFilePath string
logWebhookURL, accessLog string
@@ -52,6 +55,7 @@ var (
healthPath string
virtualDomain string
debug bool
keepAlive bool
pprof string
quiet bool
readonly bool
@@ -61,14 +65,14 @@ var (
ldapAccessAtr, ldapSecAtr, ldapRoleAtr string
ldapUserIdAtr, ldapGroupIdAtr string
vaultEndpointURL, vaultSecretStoragePath string
vaultAuthMethod, vaultMountPath string
vaultRootToken, vaultRoleId string
vaultRoleSecret, vaultServerCert string
vaultClientCert, vaultClientCertKey string
s3IamAccess, s3IamSecret string
s3IamRegion, s3IamBucket string
s3IamEndpoint string
s3IamSslNoVerify bool
iamCacheDisable bool
iamCacheTTL int
iamCachePrune int
@@ -77,7 +81,8 @@ var (
dogstatsServers string
ipaHost, ipaVaultName string
ipaUser, ipaPassword string
ipaInsecure bool
iamDebug bool
)
var (
@@ -221,6 +226,12 @@ func initFlags() []cli.Flag {
EnvVars: []string{"VGW_PPROF"},
Destination: &pprof,
},
&cli.BoolFlag{
Name: "keep-alive",
Usage: "enable keep-alive connections (for finnicky clients)",
EnvVars: []string{"VGW_KEEP_ALIVE"},
Destination: &keepAlive,
},
&cli.BoolFlag{
Name: "quiet",
Usage: "silence stdout request logging output",
@@ -288,6 +299,27 @@ func initFlags() []cli.Flag {
Destination: &natsTopic,
Aliases: []string{"ent"},
},
&cli.StringFlag{
Name: "event-rabbitmq-url",
Usage: "rabbitmq server url to send the bucket notifications (amqp or amqps scheme)",
EnvVars: []string{"VGW_EVENT_RABBITMQ_URL"},
Destination: &rabbitmqURL,
Aliases: []string{"eru"},
},
&cli.StringFlag{
Name: "event-rabbitmq-exchange",
Usage: "rabbitmq exchange to publish bucket notifications to (blank for default)",
EnvVars: []string{"VGW_EVENT_RABBITMQ_EXCHANGE"},
Destination: &rabbitmqExchange,
Aliases: []string{"ere"},
},
&cli.StringFlag{
Name: "event-rabbitmq-routing-key",
Usage: "rabbitmq routing key when publishing bucket notifications (defaults to bucket name when blank)",
EnvVars: []string{"VGW_EVENT_RABBITMQ_ROUTING_KEY"},
Destination: &rabbitmqRoutingKey,
Aliases: []string{"errk"},
},
&cli.StringFlag{
Name: "event-webhook-url",
Usage: "webhook url to send bucket notifications",
@@ -380,6 +412,12 @@ func initFlags() []cli.Flag {
EnvVars: []string{"VGW_IAM_VAULT_SECRET_STORAGE_PATH"},
Destination: &vaultSecretStoragePath,
},
&cli.StringFlag{
Name: "iam-vault-auth-method",
Usage: "vault server auth method",
EnvVars: []string{"VGW_IAM_VAULT_AUTH_METHOD"},
Destination: &vaultAuthMethod,
},
&cli.StringFlag{
Name: "iam-vault-mount-path",
Usage: "vault server mount path",
@@ -459,12 +497,6 @@ func initFlags() []cli.Flag {
EnvVars: []string{"VGW_S3_IAM_NO_VERIFY"},
Destination: &s3IamSslNoVerify,
},
&cli.BoolFlag{
Name: "s3-iam-debug",
Usage: "s3 IAM debug output",
EnvVars: []string{"VGW_S3_IAM_DEBUG"},
Destination: &s3IamDebug,
},
&cli.BoolFlag{
Name: "iam-cache-disable",
Usage: "disable local iam cache",
@@ -485,6 +517,13 @@ func initFlags() []cli.Flag {
Value: 3600,
Destination: &iamCachePrune,
},
&cli.BoolFlag{
Name: "iam-debug",
Usage: "enable IAM debug output",
Value: false,
EnvVars: []string{"VGW_IAM_DEBUG"},
Destination: &iamDebug,
},
&cli.StringFlag{
Name: "health",
Usage: `health check endpoint path. Health endpoint will be configured on GET http method: GET <health>
@@ -549,12 +588,6 @@ func initFlags() []cli.Flag {
EnvVars: []string{"VGW_IPA_INSECURE"},
Destination: &ipaInsecure,
},
&cli.BoolFlag{
Name: "ipa-debug",
Usage: "FreeIPA IAM debug output",
EnvVars: []string{"VGW_IPA_DEBUG"},
Destination: &ipaDebug,
},
}
}
@@ -575,7 +608,7 @@ func runGateway(ctx context.Context, be backend.Backend) error {
AppName: "versitygw",
ServerHeader: "VERSITYGW",
StreamRequestBody: true,
DisableKeepalive: !keepAlive,
Network: fiber.NetworkTCP,
DisableStartupMessage: true,
})
@@ -596,9 +629,6 @@ func runGateway(ctx context.Context, be backend.Backend) error {
}
opts = append(opts, s3api.WithTLS(cert))
}
if debug {
opts = append(opts, s3api.WithDebug())
}
if admPort == "" {
opts = append(opts, s3api.WithAdminServer())
}
@@ -615,28 +645,12 @@ func runGateway(ctx context.Context, be backend.Backend) error {
opts = append(opts, s3api.WithHostStyle(virtualDomain))
}
if debug {
debuglogger.SetDebugEnabled()
}
if iamDebug {
debuglogger.SetIAMDebugEnabled()
}
iam, err := auth.New(&auth.Opts{
@@ -658,6 +672,7 @@ func runGateway(ctx context.Context, be backend.Backend) error {
LDAPGroupIdAtr: ldapGroupIdAtr,
VaultEndpointURL: vaultEndpointURL,
VaultSecretStoragePath: vaultSecretStoragePath,
VaultAuthMethod: vaultAuthMethod,
VaultMountPath: vaultMountPath,
VaultRootToken: vaultRootToken,
VaultRoleId: vaultRoleId,
@@ -671,7 +686,6 @@ func runGateway(ctx context.Context, be backend.Backend) error {
S3Bucket: s3IamBucket,
S3Endpoint: s3IamEndpoint,
S3DisableSSlVerfiy: s3IamSslNoVerify,
S3Debug: s3IamDebug,
CacheDisable: iamCacheDisable,
CacheTTL: iamCacheTTL,
CachePrune: iamCachePrune,
@@ -680,7 +694,6 @@ func runGateway(ctx context.Context, be backend.Backend) error {
IpaUser: ipaUser,
IpaPassword: ipaPassword,
IpaInsecure: ipaInsecure,
IpaDebug: ipaDebug,
})
if err != nil {
return fmt.Errorf("setup iam: %w", err)
@@ -710,6 +723,9 @@ func runGateway(ctx context.Context, be backend.Backend) error {
KafkaTopicKey: kafkaKey,
NatsURL: natsURL,
NatsTopic: natsTopic,
RabbitmqURL: rabbitmqURL,
RabbitmqExchange: rabbitmqExchange,
RabbitmqRoutingKey: rabbitmqRoutingKey,
WebhookURL: eventWebhookURL,
FilterConfigFilePath: eventConfigFilePath,
})
@@ -725,7 +741,41 @@ func runGateway(ctx context.Context, be backend.Backend) error {
return fmt.Errorf("init gateway: %v", err)
}
var admSrv *s3api.S3AdminServer
if admPort != "" {
admApp := fiber.New(fiber.Config{
AppName: "versitygw",
ServerHeader: "VERSITYGW",
Network: fiber.NetworkTCP,
DisableStartupMessage: true,
})
var opts []s3api.AdminOpt
if admCertFile != "" || admKeyFile != "" {
if admCertFile == "" {
return fmt.Errorf("TLS key specified without cert file")
}
if admKeyFile == "" {
return fmt.Errorf("TLS cert specified without key file")
}
cert, err := tls.LoadX509KeyPair(admCertFile, admKeyFile)
if err != nil {
return fmt.Errorf("tls: load certs: %v", err)
}
opts = append(opts, s3api.WithAdminSrvTLS(cert))
}
if quiet {
opts = append(opts, s3api.WithAdminQuiet())
}
if debug {
opts = append(opts, s3api.WithAdminDebug())
}
admSrv = s3api.NewAdminServer(admApp, be, middlewares.RootUserConfig{Access: rootUserAccess, Secret: rootUserSecret}, admPort, region, iam, loggers.AdminLogger, opts...)
}
if !quiet {
printBanner(port, admPort, certFile != "", admCertFile != "")
@@ -970,10 +1020,7 @@ func getMatchingIPs(spec string) ([]string, error) {
const columnWidth = 70
func centerText(text string) string {
padding := max((columnWidth-2-len(text))/2, 0)
return strings.Repeat(" ", padding) + text
}

View File

@@ -72,6 +72,12 @@ move interfaces as well as support for tiered filesystems.`,
EnvVars: []string{"VGW_BUCKET_LINKS"},
Destination: &bucketlinks,
},
&cli.StringFlag{
Name: "versioning-dir",
Usage: "the directory path to enable bucket versioning",
EnvVars: []string{"VGW_VERSIONING_DIR"},
Destination: &versioningDir,
},
&cli.UintFlag{
Name: "dir-perms",
Usage: "default directory permissions for new directories",
@@ -106,6 +112,7 @@ func runScoutfs(ctx *cli.Context) error {
opts.BucketLinks = bucketlinks
opts.NewDirPerm = fs.FileMode(dirPerms)
opts.DisableNoArchive = disableNoArchive
opts.VersioningDir = versioningDir
be, err := scoutfs.New(ctx.Args().Get(0), opts)
if err != nil {

View File

@@ -45,9 +45,9 @@ func LogFiberRequestDetails(ctx *fiber.Ctx) {
// log request headers
wrapInBox(green, "REQUEST HEADERS", boxWidth, func() {
for key, value := range ctx.Request().Header.All() {
printWrappedLine(yellow, string(key), string(value))
}
})
// skip request body log for PutObject and UploadPart
skipBodyLog := isLargeDataAction(ctx)
@@ -61,18 +61,18 @@ func LogFiberRequestDetails(ctx *fiber.Ctx) {
}
if ctx.Request().URI().QueryArgs().Len() != 0 {
for key, value := range ctx.Request().URI().QueryArgs().All() {
log.Printf("%s: %s", key, value)
}
}
}
// Logs http response details: body, headers
func LogFiberResponseDetails(ctx *fiber.Ctx) {
wrapInBox(green, "RESPONSE HEADERS", boxWidth, func() {
for key, value := range ctx.Response().Header.All() {
printWrappedLine(yellow, string(key), string(value))
}
})
_, ok := ctx.Locals("skip-res-body-log").(bool)
@@ -91,6 +91,11 @@ func SetDebugEnabled() {
debugEnabled.Store(true)
}
// IsDebugEnabled returns true if debugging is enabled
func IsDebugEnabled() bool {
return debugEnabled.Load()
}
// Logf is the same as 'fmt.Printf' with debug prefix,
// a color added and '\n' at the end
func Logf(format string, v ...any) {
@@ -110,6 +115,28 @@ func Infof(format string, v ...any) {
fmt.Printf(string(green)+debugPrefix+format+reset+"\n", v...)
}
var debugIAMEnabled atomic.Bool
// SetIAMDebugEnabled sets the IAM debug mode
func SetIAMDebugEnabled() {
debugIAMEnabled.Store(true)
}
// IsIAMDebugEnabled returns true if IAM debugging is enabled
func IsIAMDebugEnabled() bool {
return debugIAMEnabled.Load()
}
// IAMLogf is the same as 'fmt.Printf' with debug prefix,
// a color added and '\n' at the end
func IAMLogf(format string, v ...any) {
if !debugIAMEnabled.Load() {
return
}
debugPrefix := "[DEBUG]: "
fmt.Printf(string(yellow)+debugPrefix+format+reset+"\n", v...)
}
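A minimal sketch of how an IAM backend might use the new IAM debug helpers; the lookupAccount function is hypothetical, and only the debuglogger calls come from this diff:

package iamexample

import "github.com/versity/versitygw/debuglogger"

// lookupAccount is a made-up IAM call instrumented with the new helpers. The
// guard skips building log arguments when IAM debugging is off, even though
// IAMLogf also checks the flag internally.
func lookupAccount(access string) {
	if debuglogger.IsIAMDebugEnabled() {
		debuglogger.IAMLogf("looking up account %q", access)
	}
	// ... perform the actual IAM lookup here ...
}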
// PrintInsideHorizontalBorders prints the text inside horizontal
// border and title in the center of upper border
func PrintInsideHorizontalBorders(color Color, title, text string, width int) {

View File

@@ -169,6 +169,19 @@ ROOT_SECRET_ACCESS_KEY=
#VGW_EVENT_NATS_URL=
#VGW_EVENT_NATS_TOPIC=
# Bucket events can be sent to a RabbitMQ messaging service. When
# VGW_EVENT_RABBITMQ_URL is specified, events will be published to the specified
# exchange (VGW_EVENT_RABBITMQ_EXCHANGE) using the routing key
# (VGW_EVENT_RABBITMQ_ROUTING_KEY). If the exchange is blank, the default
# exchange is used. If the routing key is blank, it is left empty (the server
# can bind a queue with an empty binding key, or you can set an explicit key).
# Example URL formats:
# amqp://user:pass@rabbitmq:5672/
# amqps://user:pass@rabbitmq:5671/vhost
#VGW_EVENT_RABBITMQ_URL=
#VGW_EVENT_RABBITMQ_EXCHANGE=
#VGW_EVENT_RABBITMQ_ROUTING_KEY=
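For context, a hedged sketch of publishing one bucket notification with the rabbitmq/amqp091-go client this change adds to go.mod. The gateway's own event plumbing is not shown in this diff, so the payload, bucket name, and connection values below are illustrative only:

package main

import (
	"context"
	"log"
	"time"

	amqp "github.com/rabbitmq/amqp091-go"
)

func main() {
	// Values standing in for VGW_EVENT_RABBITMQ_URL / _EXCHANGE / _ROUTING_KEY.
	conn, err := amqp.Dial("amqp://user:pass@rabbitmq:5672/")
	if err != nil {
		log.Fatalf("dial rabbitmq: %v", err)
	}
	defer conn.Close()

	ch, err := conn.Channel()
	if err != nil {
		log.Fatalf("open channel: %v", err)
	}
	defer ch.Close()

	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()

	// A blank exchange publishes to the default exchange; the routing key here
	// stands in for whatever the gateway derives (for example the bucket name).
	err = ch.PublishWithContext(ctx, "", "my-bucket", false, false, amqp.Publishing{
		ContentType: "application/json",
		Body:        []byte(`{"eventName":"s3:ObjectCreated:Put","bucket":"my-bucket"}`),
	})
	if err != nil {
		log.Fatalf("publish event: %v", err)
	}
}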
# Bucket events can be sent to a webhook. When VGW_EVENT_WEBHOOK_URL is
# specified, all configured bucket events will be sent to the webhook.
#VGW_EVENT_WEBHOOK_URL=
@@ -430,6 +443,14 @@ ROOT_SECRET_ACCESS_KEY=
# as any parent directories automatically created with object uploads.
#VGW_DIR_PERMS=0755
# To enable object versions, the VGW_VERSIONING_DIR option must be set to the
# directory that will be used to store the object versions. The version
# directory must NOT be a subdirectory of the VGW_BACKEND_ARG directory.
# Archive policies may need to be updated to cover the version directory as
# well. It is recommended to discuss the archive implications of versioning
# with Versity support before enabling it on an archiving filesystem.
#VGW_VERSIONING_DIR=
# The default behavior of the gateway is to automatically set the noarchive
# flag on the multipart upload parts while the multipart upload is in progress.
# This is to prevent the parts from being archived since they are temporary

go.mod
View File

@@ -1,46 +1,49 @@
module github.com/versity/versitygw
go 1.23.0
go 1.24.0
toolchain go1.24.1
require (
github.com/Azure/azure-sdk-for-go/sdk/azcore v1.18.0
github.com/Azure/azure-sdk-for-go/sdk/azidentity v1.10.1
github.com/Azure/azure-sdk-for-go/sdk/storage/azblob v1.6.1
github.com/DataDog/datadog-go/v5 v5.6.0
github.com/aws/aws-sdk-go-v2 v1.36.5
github.com/aws/aws-sdk-go-v2/service/s3 v1.82.0
github.com/aws/smithy-go v1.22.4
github.com/Azure/azure-sdk-for-go/sdk/azcore v1.19.1
github.com/Azure/azure-sdk-for-go/sdk/azidentity v1.11.0
github.com/Azure/azure-sdk-for-go/sdk/storage/azblob v1.6.2
github.com/DataDog/datadog-go/v5 v5.7.1
github.com/aws/aws-sdk-go-v2 v1.39.0
github.com/aws/aws-sdk-go-v2/service/s3 v1.88.1
github.com/aws/smithy-go v1.23.0
github.com/davecgh/go-spew v1.1.1
github.com/go-ldap/ldap/v3 v3.4.11
github.com/gofiber/fiber/v2 v2.52.8
github.com/gofiber/fiber/v2 v2.52.9
github.com/google/go-cmp v0.7.0
github.com/google/uuid v1.6.0
github.com/hashicorp/vault-client-go v0.4.3
github.com/nats-io/nats.go v1.43.0
github.com/nats-io/nats.go v1.45.0
github.com/oklog/ulid/v2 v2.1.1
github.com/pkg/xattr v0.4.12
github.com/segmentio/kafka-go v0.4.48
github.com/rabbitmq/amqp091-go v1.10.0
github.com/segmentio/kafka-go v0.4.49
github.com/smira/go-statsd v1.3.4
github.com/stretchr/testify v1.11.1
github.com/urfave/cli/v2 v2.27.7
github.com/valyala/fasthttp v1.62.0
github.com/valyala/fasthttp v1.66.0
github.com/versity/scoutfs-go v0.0.0-20240325223134-38eb2f5f7d44
golang.org/x/sync v0.15.0
golang.org/x/sys v0.33.0
golang.org/x/sync v0.17.0
golang.org/x/sys v0.36.0
)
require (
github.com/Azure/azure-sdk-for-go/sdk/internal v1.11.1 // indirect
github.com/Azure/azure-sdk-for-go/sdk/internal v1.11.2 // indirect
github.com/Azure/go-ntlmssp v0.0.0-20221128193559-754e69321358 // indirect
github.com/AzureAD/microsoft-authentication-library-for-go v1.4.2 // indirect
github.com/AzureAD/microsoft-authentication-library-for-go v1.5.0 // indirect
github.com/Microsoft/go-winio v0.6.2 // indirect
github.com/aws/aws-sdk-go-v2/feature/ec2/imds v1.16.32 // indirect
github.com/aws/aws-sdk-go-v2/feature/ec2/imds v1.18.7 // indirect
github.com/aws/aws-sdk-go-v2/internal/ini v1.8.3 // indirect
github.com/aws/aws-sdk-go-v2/service/sso v1.25.5 // indirect
github.com/aws/aws-sdk-go-v2/service/ssooidc v1.30.3 // indirect
github.com/aws/aws-sdk-go-v2/service/sts v1.34.0 // indirect
github.com/aws/aws-sdk-go-v2/service/sso v1.29.3 // indirect
github.com/aws/aws-sdk-go-v2/service/ssooidc v1.34.4 // indirect
github.com/aws/aws-sdk-go-v2/service/sts v1.38.4 // indirect
github.com/go-asn1-ber/asn1-ber v1.5.8-0.20250403174932-29230038a667 // indirect
github.com/golang-jwt/jwt/v5 v5.2.2 // indirect
github.com/golang-jwt/jwt/v5 v5.3.0 // indirect
github.com/hashicorp/go-cleanhttp v0.5.2 // indirect
github.com/hashicorp/go-retryablehttp v0.7.8 // indirect
github.com/hashicorp/go-rootcerts v1.0.2 // indirect
@@ -51,26 +54,28 @@ require (
github.com/nats-io/nuid v1.0.1 // indirect
github.com/pierrec/lz4/v4 v4.1.22 // indirect
github.com/pkg/browser v0.0.0-20240102092130-5ac0b6a4141c // indirect
github.com/pmezard/go-difflib v1.0.0 // indirect
github.com/ryanuber/go-glob v1.0.0 // indirect
golang.org/x/crypto v0.39.0 // indirect
golang.org/x/net v0.41.0 // indirect
golang.org/x/text v0.26.0 // indirect
golang.org/x/time v0.12.0 // indirect
golang.org/x/crypto v0.42.0 // indirect
golang.org/x/net v0.44.0 // indirect
golang.org/x/text v0.29.0 // indirect
golang.org/x/time v0.13.0 // indirect
gopkg.in/yaml.v3 v3.0.1 // indirect
)
require (
github.com/andybalholm/brotli v1.2.0 // indirect
github.com/aws/aws-sdk-go-v2/aws/protocol/eventstream v1.6.11 // indirect
github.com/aws/aws-sdk-go-v2/config v1.29.17
github.com/aws/aws-sdk-go-v2/credentials v1.17.70
github.com/aws/aws-sdk-go-v2/feature/s3/manager v1.17.82
github.com/aws/aws-sdk-go-v2/internal/configsources v1.3.36 // indirect
github.com/aws/aws-sdk-go-v2/internal/endpoints/v2 v2.6.36 // indirect
github.com/aws/aws-sdk-go-v2/internal/v4a v1.3.36 // indirect
github.com/aws/aws-sdk-go-v2/service/internal/accept-encoding v1.12.4 // indirect
github.com/aws/aws-sdk-go-v2/service/internal/checksum v1.7.4 // indirect
github.com/aws/aws-sdk-go-v2/service/internal/presigned-url v1.12.17 // indirect
github.com/aws/aws-sdk-go-v2/service/internal/s3shared v1.18.17 // indirect
github.com/aws/aws-sdk-go-v2/aws/protocol/eventstream v1.7.1 // indirect
github.com/aws/aws-sdk-go-v2/config v1.31.8
github.com/aws/aws-sdk-go-v2/credentials v1.18.12
github.com/aws/aws-sdk-go-v2/feature/s3/manager v1.19.6
github.com/aws/aws-sdk-go-v2/internal/configsources v1.4.7 // indirect
github.com/aws/aws-sdk-go-v2/internal/endpoints/v2 v2.7.7 // indirect
github.com/aws/aws-sdk-go-v2/internal/v4a v1.4.7 // indirect
github.com/aws/aws-sdk-go-v2/service/internal/accept-encoding v1.13.1 // indirect
github.com/aws/aws-sdk-go-v2/service/internal/checksum v1.8.7 // indirect
github.com/aws/aws-sdk-go-v2/service/internal/presigned-url v1.13.7 // indirect
github.com/aws/aws-sdk-go-v2/service/internal/s3shared v1.19.7 // indirect
github.com/cpuguy83/go-md2man/v2 v2.0.7 // indirect
github.com/klauspost/compress v1.18.0 // indirect
github.com/mattn/go-colorable v0.1.14 // indirect

go.sum
View File

@@ -1,23 +1,23 @@
github.com/Azure/azure-sdk-for-go/sdk/azcore v1.18.0 h1:Gt0j3wceWMwPmiazCa8MzMA0MfhmPIz0Qp0FJ6qcM0U=
github.com/Azure/azure-sdk-for-go/sdk/azcore v1.18.0/go.mod h1:Ot/6aikWnKWi4l9QB7qVSwa8iMphQNqkWALMoNT3rzM=
github.com/Azure/azure-sdk-for-go/sdk/azidentity v1.10.1 h1:B+blDbyVIG3WaikNxPnhPiJ1MThR03b3vKGtER95TP4=
github.com/Azure/azure-sdk-for-go/sdk/azidentity v1.10.1/go.mod h1:JdM5psgjfBf5fo2uWOZhflPWyDBZ/O/CNAH9CtsuZE4=
github.com/Azure/azure-sdk-for-go/sdk/azcore v1.19.1 h1:5YTBM8QDVIBN3sxBil89WfdAAqDZbyJTgh688DSxX5w=
github.com/Azure/azure-sdk-for-go/sdk/azcore v1.19.1/go.mod h1:YD5h/ldMsG0XiIw7PdyNhLxaM317eFh5yNLccNfGdyw=
github.com/Azure/azure-sdk-for-go/sdk/azidentity v1.11.0 h1:MhRfI58HblXzCtWEZCO0feHs8LweePB3s90r7WaR1KU=
github.com/Azure/azure-sdk-for-go/sdk/azidentity v1.11.0/go.mod h1:okZ+ZURbArNdlJ+ptXoyHNuOETzOl1Oww19rm8I2WLA=
github.com/Azure/azure-sdk-for-go/sdk/azidentity/cache v0.3.2 h1:yz1bePFlP5Vws5+8ez6T3HWXPmwOK7Yvq8QxDBD3SKY=
github.com/Azure/azure-sdk-for-go/sdk/azidentity/cache v0.3.2/go.mod h1:Pa9ZNPuoNu/GztvBSKk9J1cDJW6vk/n0zLtV4mgd8N8=
github.com/Azure/azure-sdk-for-go/sdk/internal v1.11.1 h1:FPKJS1T+clwv+OLGt13a8UjqeRuh0O4SJ3lUriThc+4=
github.com/Azure/azure-sdk-for-go/sdk/internal v1.11.1/go.mod h1:j2chePtV91HrC22tGoRX3sGY42uF13WzmmV80/OdVAA=
github.com/Azure/azure-sdk-for-go/sdk/resourcemanager/storage/armstorage v1.8.0 h1:LR0kAX9ykz8G4YgLCaRDVJ3+n43R8MneB5dTy2konZo=
github.com/Azure/azure-sdk-for-go/sdk/resourcemanager/storage/armstorage v1.8.0/go.mod h1:DWAciXemNf++PQJLeXUB4HHH5OpsAh12HZnu2wXE1jA=
github.com/Azure/azure-sdk-for-go/sdk/storage/azblob v1.6.1 h1:lhZdRq7TIx0GJQvSyX2Si406vrYsov2FXGp/RnSEtcs=
github.com/Azure/azure-sdk-for-go/sdk/storage/azblob v1.6.1/go.mod h1:8cl44BDmi+effbARHMQjgOKA2AYvcohNm7KEt42mSV8=
github.com/Azure/azure-sdk-for-go/sdk/internal v1.11.2 h1:9iefClla7iYpfYWdzPCRDozdmndjTm8DXdpCzPajMgA=
github.com/Azure/azure-sdk-for-go/sdk/internal v1.11.2/go.mod h1:XtLgD3ZD34DAaVIIAyG3objl5DynM3CQ/vMcbBNJZGI=
github.com/Azure/azure-sdk-for-go/sdk/resourcemanager/storage/armstorage v1.8.1 h1:/Zt+cDPnpC3OVDm/JKLOs7M2DKmLRIIp3XIx9pHHiig=
github.com/Azure/azure-sdk-for-go/sdk/resourcemanager/storage/armstorage v1.8.1/go.mod h1:Ng3urmn6dYe8gnbCMoHHVl5APYz2txho3koEkV2o2HA=
github.com/Azure/azure-sdk-for-go/sdk/storage/azblob v1.6.2 h1:FwladfywkNirM+FZYLBR2kBz5C8Tg0fw5w5Y7meRXWI=
github.com/Azure/azure-sdk-for-go/sdk/storage/azblob v1.6.2/go.mod h1:vv5Ad0RrIoT1lJFdWBZwt4mB1+j+V8DUroixmKDTCdk=
github.com/Azure/go-ntlmssp v0.0.0-20221128193559-754e69321358 h1:mFRzDkZVAjdal+s7s0MwaRv9igoPqLRdzOLzw/8Xvq8=
github.com/Azure/go-ntlmssp v0.0.0-20221128193559-754e69321358/go.mod h1:chxPXzSsl7ZWRAuOIE23GDNzjWuZquvFlgA8xmpunjU=
github.com/AzureAD/microsoft-authentication-extensions-for-go/cache v0.1.1 h1:WJTmL004Abzc5wDB5VtZG2PJk5ndYDgVacGqfirKxjM=
github.com/AzureAD/microsoft-authentication-extensions-for-go/cache v0.1.1/go.mod h1:tCcJZ0uHAmvjsVYzEFivsRTN00oz5BEsRgQHu5JZ9WE=
github.com/AzureAD/microsoft-authentication-library-for-go v1.4.2 h1:oygO0locgZJe7PpYPXT5A29ZkwJaPqcva7BVeemZOZs=
github.com/AzureAD/microsoft-authentication-library-for-go v1.4.2/go.mod h1:wP83P5OoQ5p6ip3ScPr0BAq0BvuPAvacpEuSzyouqAI=
github.com/DataDog/datadog-go/v5 v5.6.0 h1:2oCLxjF/4htd55piM75baflj/KoE6VYS7alEUqFvRDw=
github.com/DataDog/datadog-go/v5 v5.6.0/go.mod h1:K9kcYBlxkcPP8tvvjZZKs/m1edNAUFzBbdpTUKfCsuw=
github.com/AzureAD/microsoft-authentication-library-for-go v1.5.0 h1:XkkQbfMyuH2jTSjQjSoihryI8GINRcs4xp8lNawg0FI=
github.com/AzureAD/microsoft-authentication-library-for-go v1.5.0/go.mod h1:HKpQxkWaGLJ+D/5H8QRpyQXA1eKjxkFlOMwck5+33Jk=
github.com/DataDog/datadog-go/v5 v5.7.1 h1:dNhEwKaO3LJhGYKajl2DjobArfa5R9YF72z3Dy+PH3k=
github.com/DataDog/datadog-go/v5 v5.7.1/go.mod h1:CA9Ih6tb3jtxk+ps1xvTnxmhjr7ldE8TiwrZyrm31ss=
github.com/Microsoft/go-winio v0.5.0/go.mod h1:JPGBdM1cNvN/6ISo+n8V5iA4v8pBzdOpzfwIujj1a84=
github.com/Microsoft/go-winio v0.6.2 h1:F2VQgta7ecxGYO8k3ZZz3RS8fVIXVxONVUPlNERoyfY=
github.com/Microsoft/go-winio v0.6.2/go.mod h1:yd8OoFMLzJbo9gZq8j5qaps8bJ9aShtEA8Ipt1oGCvU=
@@ -25,63 +25,59 @@ github.com/alexbrainman/sspi v0.0.0-20231016080023-1a75b4708caa h1:LHTHcTQiSGT7V
github.com/alexbrainman/sspi v0.0.0-20231016080023-1a75b4708caa/go.mod h1:cEWa1LVoE5KvSD9ONXsZrj0z6KqySlCCNKHlLzbqAt4=
github.com/andybalholm/brotli v1.2.0 h1:ukwgCxwYrmACq68yiUqwIWnGY0cTPox/M94sVwToPjQ=
github.com/andybalholm/brotli v1.2.0/go.mod h1:rzTDkvFWvIrjDXZHkuS16NPggd91W3kUSvPlQ1pLaKY=
github.com/aws/aws-sdk-go-v2 v1.36.5 h1:0OF9RiEMEdDdZEMqF9MRjevyxAQcf6gY+E7vwBILFj0=
github.com/aws/aws-sdk-go-v2 v1.36.5/go.mod h1:EYrzvCCN9CMUTa5+6lf6MM4tq3Zjp8UhSGR/cBsjai0=
github.com/aws/aws-sdk-go-v2/aws/protocol/eventstream v1.6.11 h1:12SpdwU8Djs+YGklkinSSlcrPyj3H4VifVsKf78KbwA=
github.com/aws/aws-sdk-go-v2/aws/protocol/eventstream v1.6.11/go.mod h1:dd+Lkp6YmMryke+qxW/VnKyhMBDTYP41Q2Bb+6gNZgY=
github.com/aws/aws-sdk-go-v2/config v1.29.17 h1:jSuiQ5jEe4SAMH6lLRMY9OVC+TqJLP5655pBGjmnjr0=
github.com/aws/aws-sdk-go-v2/config v1.29.17/go.mod h1:9P4wwACpbeXs9Pm9w1QTh6BwWwJjwYvJ1iCt5QbCXh8=
github.com/aws/aws-sdk-go-v2/credentials v1.17.70 h1:ONnH5CM16RTXRkS8Z1qg7/s2eDOhHhaXVd72mmyv4/0=
github.com/aws/aws-sdk-go-v2/credentials v1.17.70/go.mod h1:M+lWhhmomVGgtuPOhO85u4pEa3SmssPTdcYpP/5J/xc=
github.com/aws/aws-sdk-go-v2/feature/ec2/imds v1.16.32 h1:KAXP9JSHO1vKGCr5f4O6WmlVKLFFXgWYAGoJosorxzU=
github.com/aws/aws-sdk-go-v2/feature/ec2/imds v1.16.32/go.mod h1:h4Sg6FQdexC1yYG9RDnOvLbW1a/P986++/Y/a+GyEM8=
github.com/aws/aws-sdk-go-v2/feature/s3/manager v1.17.82 h1:EO13QJTCD1Ig2IrQnoHTRrn981H9mB7afXsZ89WptI4=
github.com/aws/aws-sdk-go-v2/feature/s3/manager v1.17.82/go.mod h1:AGh1NCg0SH+uyJamiJA5tTQcql4MMRDXGRdMmCxCXzY=
github.com/aws/aws-sdk-go-v2/internal/configsources v1.3.36 h1:SsytQyTMHMDPspp+spo7XwXTP44aJZZAC7fBV2C5+5s=
github.com/aws/aws-sdk-go-v2/internal/configsources v1.3.36/go.mod h1:Q1lnJArKRXkenyog6+Y+zr7WDpk4e6XlR6gs20bbeNo=
github.com/aws/aws-sdk-go-v2/internal/endpoints/v2 v2.6.36 h1:i2vNHQiXUvKhs3quBR6aqlgJaiaexz/aNvdCktW/kAM=
github.com/aws/aws-sdk-go-v2/internal/endpoints/v2 v2.6.36/go.mod h1:UdyGa7Q91id/sdyHPwth+043HhmP6yP9MBHgbZM0xo8=
github.com/aws/aws-sdk-go-v2 v1.39.0 h1:xm5WV/2L4emMRmMjHFykqiA4M/ra0DJVSWUkDyBjbg4=
github.com/aws/aws-sdk-go-v2 v1.39.0/go.mod h1:sDioUELIUO9Znk23YVmIk86/9DOpkbyyVb1i/gUNFXY=
github.com/aws/aws-sdk-go-v2/aws/protocol/eventstream v1.7.1 h1:i8p8P4diljCr60PpJp6qZXNlgX4m2yQFpYk+9ZT+J4E=
github.com/aws/aws-sdk-go-v2/aws/protocol/eventstream v1.7.1/go.mod h1:ddqbooRZYNoJ2dsTwOty16rM+/Aqmk/GOXrK8cg7V00=
github.com/aws/aws-sdk-go-v2/config v1.31.8 h1:kQjtOLlTU4m4A64TsRcqwNChhGCwaPBt+zCQt/oWsHU=
github.com/aws/aws-sdk-go-v2/config v1.31.8/go.mod h1:QPpc7IgljrKwH0+E6/KolCgr4WPLerURiU592AYzfSY=
github.com/aws/aws-sdk-go-v2/credentials v1.18.12 h1:zmc9e1q90wMn8wQbjryy8IwA6Q4XlaL9Bx2zIqdNNbk=
github.com/aws/aws-sdk-go-v2/credentials v1.18.12/go.mod h1:3VzdRDR5u3sSJRI4kYcOSIBbeYsgtVk7dG5R/U6qLWY=
github.com/aws/aws-sdk-go-v2/feature/ec2/imds v1.18.7 h1:Is2tPmieqGS2edBnmOJIbdvOA6Op+rRpaYR60iBAwXM=
github.com/aws/aws-sdk-go-v2/feature/ec2/imds v1.18.7/go.mod h1:F1i5V5421EGci570yABvpIXgRIBPb5JM+lSkHF6Dq5w=
github.com/aws/aws-sdk-go-v2/feature/s3/manager v1.19.6 h1:bByPm7VcaAgeT2+z5m0Lj5HDzm+g9AwbA3WFx2hPby0=
github.com/aws/aws-sdk-go-v2/feature/s3/manager v1.19.6/go.mod h1:PhTe8fR8aFW0wDc6IV9BHeIzXhpv3q6AaVHnqiv5Pyc=
github.com/aws/aws-sdk-go-v2/internal/configsources v1.4.7 h1:UCxq0X9O3xrlENdKf1r9eRJoKz/b0AfGkpp3a7FPlhg=
github.com/aws/aws-sdk-go-v2/internal/configsources v1.4.7/go.mod h1:rHRoJUNUASj5Z/0eqI4w32vKvC7atoWR0jC+IkmVH8k=
github.com/aws/aws-sdk-go-v2/internal/endpoints/v2 v2.7.7 h1:Y6DTZUn7ZUC4th9FMBbo8LVE+1fyq3ofw+tRwkUd3PY=
github.com/aws/aws-sdk-go-v2/internal/endpoints/v2 v2.7.7/go.mod h1:x3XE6vMnU9QvHN/Wrx2s44kwzV2o2g5x/siw4ZUJ9g8=
github.com/aws/aws-sdk-go-v2/internal/ini v1.8.3 h1:bIqFDwgGXXN1Kpp99pDOdKMTTb5d2KyU5X/BZxjOkRo=
github.com/aws/aws-sdk-go-v2/internal/ini v1.8.3/go.mod h1:H5O/EsxDWyU+LP/V8i5sm8cxoZgc2fdNR9bxlOFrQTo=
github.com/aws/aws-sdk-go-v2/internal/v4a v1.3.36 h1:GMYy2EOWfzdP3wfVAGXBNKY5vK4K8vMET4sYOYltmqs=
github.com/aws/aws-sdk-go-v2/internal/v4a v1.3.36/go.mod h1:gDhdAV6wL3PmPqBhiPbnlS447GoWs8HTTOYef9/9Inw=
github.com/aws/aws-sdk-go-v2/service/internal/accept-encoding v1.12.4 h1:CXV68E2dNqhuynZJPB80bhPQwAKqBWVer887figW6Jc=
github.com/aws/aws-sdk-go-v2/service/internal/accept-encoding v1.12.4/go.mod h1:/xFi9KtvBXP97ppCz1TAEvU1Uf66qvid89rbem3wCzQ=
github.com/aws/aws-sdk-go-v2/service/internal/checksum v1.7.4 h1:nAP2GYbfh8dd2zGZqFRSMlq+/F6cMPBUuCsGAMkN074=
github.com/aws/aws-sdk-go-v2/service/internal/checksum v1.7.4/go.mod h1:LT10DsiGjLWh4GbjInf9LQejkYEhBgBCjLG5+lvk4EE=
github.com/aws/aws-sdk-go-v2/service/internal/presigned-url v1.12.17 h1:t0E6FzREdtCsiLIoLCWsYliNsRBgyGD/MCK571qk4MI=
github.com/aws/aws-sdk-go-v2/service/internal/presigned-url v1.12.17/go.mod h1:ygpklyoaypuyDvOM5ujWGrYWpAK3h7ugnmKCU/76Ys4=
github.com/aws/aws-sdk-go-v2/service/internal/s3shared v1.18.17 h1:qcLWgdhq45sDM9na4cvXax9dyLitn8EYBRl8Ak4XtG4=
github.com/aws/aws-sdk-go-v2/service/internal/s3shared v1.18.17/go.mod h1:M+jkjBFZ2J6DJrjMv2+vkBbuht6kxJYtJiwoVgX4p4U=
github.com/aws/aws-sdk-go-v2/service/s3 v1.82.0 h1:JubM8CGDDFaAOmBrd8CRYNr49ZNgEAiLwGwgNMdS0nw=
github.com/aws/aws-sdk-go-v2/service/s3 v1.82.0/go.mod h1:kUklwasNoCn5YpyAqC/97r6dzTA1SRKJfKq16SXeoDU=
github.com/aws/aws-sdk-go-v2/service/sso v1.25.5 h1:AIRJ3lfb2w/1/8wOOSqYb9fUKGwQbtysJ2H1MofRUPg=
github.com/aws/aws-sdk-go-v2/service/sso v1.25.5/go.mod h1:b7SiVprpU+iGazDUqvRSLf5XmCdn+JtT1on7uNL6Ipc=
github.com/aws/aws-sdk-go-v2/service/ssooidc v1.30.3 h1:BpOxT3yhLwSJ77qIY3DoHAQjZsc4HEGfMCE4NGy3uFg=
github.com/aws/aws-sdk-go-v2/service/ssooidc v1.30.3/go.mod h1:vq/GQR1gOFLquZMSrxUK/cpvKCNVYibNyJ1m7JrU88E=
github.com/aws/aws-sdk-go-v2/service/sts v1.34.0 h1:NFOJ/NXEGV4Rq//71Hs1jC/NvPs1ezajK+yQmkwnPV0=
github.com/aws/aws-sdk-go-v2/service/sts v1.34.0/go.mod h1:7ph2tGpfQvwzgistp2+zga9f+bCjlQJPkPUmMgDSD7w=
github.com/aws/smithy-go v1.22.4 h1:uqXzVZNuNexwc/xrh6Tb56u89WDlJY6HS+KC0S4QSjw=
github.com/aws/smithy-go v1.22.4/go.mod h1:t1ufH5HMublsJYulve2RKmHDC15xu1f26kHCp/HgceI=
github.com/cespare/xxhash/v2 v2.3.0 h1:UL815xU9SqsFlibzuggzjXhog7bL6oX9BbNZnL2UFvs=
github.com/cespare/xxhash/v2 v2.3.0/go.mod h1:VGX0DQ3Q6kWi7AoAeZDth3/j3BFtOZR5XLFGgcrjCOs=
github.com/aws/aws-sdk-go-v2/internal/v4a v1.4.7 h1:BszAktdUo2xlzmYHjWMq70DqJ7cROM8iBd3f6hrpuMQ=
github.com/aws/aws-sdk-go-v2/internal/v4a v1.4.7/go.mod h1:XJ1yHki/P7ZPuG4fd3f0Pg/dSGA2cTQBCLw82MH2H48=
github.com/aws/aws-sdk-go-v2/service/internal/accept-encoding v1.13.1 h1:oegbebPEMA/1Jny7kvwejowCaHz1FWZAQ94WXFNCyTM=
github.com/aws/aws-sdk-go-v2/service/internal/accept-encoding v1.13.1/go.mod h1:kemo5Myr9ac0U9JfSjMo9yHLtw+pECEHsFtJ9tqCEI8=
github.com/aws/aws-sdk-go-v2/service/internal/checksum v1.8.7 h1:zmZ8qvtE9chfhBPuKB2aQFxW5F/rpwXUgmcVCgQzqRw=
github.com/aws/aws-sdk-go-v2/service/internal/checksum v1.8.7/go.mod h1:vVYfbpd2l+pKqlSIDIOgouxNsGu5il9uDp0ooWb0jys=
github.com/aws/aws-sdk-go-v2/service/internal/presigned-url v1.13.7 h1:mLgc5QIgOy26qyh5bvW+nDoAppxgn3J2WV3m9ewq7+8=
github.com/aws/aws-sdk-go-v2/service/internal/presigned-url v1.13.7/go.mod h1:wXb/eQnqt8mDQIQTTmcw58B5mYGxzLGZGK8PWNFZ0BA=
github.com/aws/aws-sdk-go-v2/service/internal/s3shared v1.19.7 h1:u3VbDKUCWarWiU+aIUK4gjTr/wQFXV17y3hgNno9fcA=
github.com/aws/aws-sdk-go-v2/service/internal/s3shared v1.19.7/go.mod h1:/OuMQwhSyRapYxq6ZNpPer8juGNrB4P5Oz8bZ2cgjQE=
github.com/aws/aws-sdk-go-v2/service/s3 v1.88.1 h1:+RpGuaQ72qnU83qBKVwxkznewEdAGhIWo/PQCmkhhog=
github.com/aws/aws-sdk-go-v2/service/s3 v1.88.1/go.mod h1:xajPTguLoeQMAOE44AAP2RQoUhF8ey1g5IFHARv71po=
github.com/aws/aws-sdk-go-v2/service/sso v1.29.3 h1:7PKX3VYsZ8LUWceVRuv0+PU+E7OtQb1lgmi5vmUE9CM=
github.com/aws/aws-sdk-go-v2/service/sso v1.29.3/go.mod h1:Ql6jE9kyyWI5JHn+61UT/Y5Z0oyVJGmgmJbZD5g4unY=
github.com/aws/aws-sdk-go-v2/service/ssooidc v1.34.4 h1:e0XBRn3AptQotkyBFrHAxFB8mDhAIOfsG+7KyJ0dg98=
github.com/aws/aws-sdk-go-v2/service/ssooidc v1.34.4/go.mod h1:XclEty74bsGBCr1s0VSaA11hQ4ZidK4viWK7rRfO88I=
github.com/aws/aws-sdk-go-v2/service/sts v1.38.4 h1:PR00NXRYgY4FWHqOGx3fC3lhVKjsp1GdloDv2ynMSd8=
github.com/aws/aws-sdk-go-v2/service/sts v1.38.4/go.mod h1:Z+Gd23v97pX9zK97+tX4ppAgqCt3Z2dIXB02CtBncK8=
github.com/aws/smithy-go v1.23.0 h1:8n6I3gXzWJB2DxBDnfxgBaSX6oe0d/t10qGz7OKqMCE=
github.com/aws/smithy-go v1.23.0/go.mod h1:t1ufH5HMublsJYulve2RKmHDC15xu1f26kHCp/HgceI=
github.com/cpuguy83/go-md2man/v2 v2.0.7 h1:zbFlGlXEAKlwXpmvle3d8Oe3YnkKIK4xSRTd3sHPnBo=
github.com/cpuguy83/go-md2man/v2 v2.0.7/go.mod h1:oOW0eioCTA6cOiMLiUPZOpcVxMig6NIQQ7OS05n1F4g=
github.com/davecgh/go-spew v1.1.0/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=
github.com/davecgh/go-spew v1.1.1 h1:vj9j/u1bqnvCEfJOwUhtlOARqs3+rkHYY13jYWTU97c=
github.com/davecgh/go-spew v1.1.1/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=
github.com/dgryski/go-rendezvous v0.0.0-20200823014737-9f7001d12a5f h1:lO4WD4F/rVNCu3HqELle0jiPLLBs70cWOduZpkS1E78=
github.com/dgryski/go-rendezvous v0.0.0-20200823014737-9f7001d12a5f/go.mod h1:cuUVRXasLTGF7a8hSLbxyZXjz+1KgoB3wDUb6vlszIc=
github.com/fatih/color v1.16.0 h1:zmkK9Ngbjj+K0yRhTVONQh1p/HknKYSlNT+vZCzyokM=
github.com/fatih/color v1.16.0/go.mod h1:fL2Sau1YI5c0pdGEVCbKQbLXB6edEj1ZgiY4NijnWvE=
github.com/go-asn1-ber/asn1-ber v1.5.8-0.20250403174932-29230038a667 h1:BP4M0CvQ4S3TGls2FvczZtj5Re/2ZzkV9VwqPHH/3Bo=
github.com/go-asn1-ber/asn1-ber v1.5.8-0.20250403174932-29230038a667/go.mod h1:hEBeB/ic+5LoWskz+yKT7vGhhPYkProFKoKdwZRWMe0=
github.com/go-ldap/ldap/v3 v3.4.11 h1:4k0Yxweg+a3OyBLjdYn5OKglv18JNvfDykSoI8bW0gU=
github.com/go-ldap/ldap/v3 v3.4.11/go.mod h1:bY7t0FLK8OAVpp/vV6sSlpz3EQDGcQwc8pF0ujLgKvM=
github.com/gofiber/fiber/v2 v2.52.8 h1:xl4jJQ0BV5EJTA2aWiKw/VddRpHrKeZLF0QPUxqn0x4=
github.com/gofiber/fiber/v2 v2.52.8/go.mod h1:YEcBbO/FB+5M1IZNBP9FO3J9281zgPAreiI1oqg8nDw=
github.com/golang-jwt/jwt/v5 v5.2.2 h1:Rl4B7itRWVtYIHFrSNd7vhTiz9UpLdi6gZhZ3wEeDy8=
github.com/golang-jwt/jwt/v5 v5.2.2/go.mod h1:pqrtFR0X4osieyHYxtmOUWsAWrfe1Q5UVIyoH402zdk=
github.com/gofiber/fiber/v2 v2.52.9 h1:YjKl5DOiyP3j0mO61u3NTmK7or8GzzWzCFzkboyP5cw=
github.com/gofiber/fiber/v2 v2.52.9/go.mod h1:YEcBbO/FB+5M1IZNBP9FO3J9281zgPAreiI1oqg8nDw=
github.com/golang-jwt/jwt/v5 v5.3.0 h1:pv4AsKCKKZuqlgs5sUmn4x8UlGa0kEVt/puTpKx9vvo=
github.com/golang-jwt/jwt/v5 v5.3.0/go.mod h1:fxCRLWMO43lRc8nhHWY6LGqRcf+1gQWArsqaEUEa5bE=
github.com/golang/mock v1.6.0/go.mod h1:p6yTPP+5HYm5mzsMV8JkE6ZKdX+/wYM6Hr+LicevLPs=
github.com/google/go-cmp v0.7.0 h1:wk8382ETsv4JYUZwIsn6YpYiWiBsYLSJiTsyBybVuN8=
github.com/google/go-cmp v0.7.0/go.mod h1:pXiqmnSA92OHEEa9HXL2W4E7lf9JzCmGVUdgjX3N/iU=
@@ -115,9 +111,12 @@ github.com/jcmturner/rpc/v2 v2.0.3 h1:7FXXj8Ti1IaVFpSAziCZWNzbNuZmnvw/i6CqLNdWfZ
github.com/jcmturner/rpc/v2 v2.0.3/go.mod h1:VUJYCIDm3PVOEHw8sgt091/20OJjskO/YJki3ELg/Hc=
github.com/keybase/go-keychain v0.0.1 h1:way+bWYa6lDppZoZcgMbYsvC7GxljxrskdNInRtuthU=
github.com/keybase/go-keychain v0.0.1/go.mod h1:PdEILRW3i9D8JcdM+FmY6RwkHGnhHxXwkPPMeUgOK1k=
github.com/klauspost/compress v1.15.9/go.mod h1:PhcZ0MbTNciWF3rruxRgKxI5NkcHHrHUDtV4Yw2GlzU=
github.com/klauspost/compress v1.18.0 h1:c/Cqfb0r+Yi+JtIEq73FWXVkRonBlf0CRNYc8Zttxdo=
github.com/klauspost/compress v1.18.0/go.mod h1:2Pp+KzxcywXVXMr50+X0Q/Lsb43OQHYWRCY2AiWywWQ=
github.com/kr/pretty v0.3.1 h1:flRD4NNwYAUpkphVc1HcthR4KEIFJ65n8Mw5qdRn3LE=
github.com/kr/pretty v0.3.1/go.mod h1:hoEshYVHaxMs3cyo3Yncou5ZscifuDolrwPKZanG3xk=
github.com/kr/text v0.2.0 h1:5Nx0Ya0ZqY2ygV366QzturHI13Jq95ApcVaJBhpS+AY=
github.com/kr/text v0.2.0/go.mod h1:eLer722TekiGuMkidMxC/pM04lWEeraHUUmBw8l2grE=
github.com/kylelemons/godebug v1.1.0 h1:RPNrshWIDI6G2gRW9EHilWtl7Z6Sb1BR0xunSBf0SNc=
github.com/kylelemons/godebug v1.1.0/go.mod h1:9/0rRGxNHcop5bhtWyNeEfOS8JIWk580+fNqagV/RAw=
github.com/mattn/go-colorable v0.1.14 h1:9A9LHSqF/7dyVVX6g0U9cwm9pG3kP9gSzcuIPHPsaIE=
@@ -128,8 +127,8 @@ github.com/mattn/go-runewidth v0.0.16 h1:E5ScNMtiwvlvB5paMFdw9p4kSQzbXFikJ5SQO6T
github.com/mattn/go-runewidth v0.0.16/go.mod h1:Jdepj2loyihRzMpdS35Xk/zdY8IAYHsh153qUoGf23w=
github.com/mitchellh/go-homedir v1.1.0 h1:lukF9ziXFxDFPkA1vsr5zpc1XuPDn/wFntq5mG+4E0Y=
github.com/mitchellh/go-homedir v1.1.0/go.mod h1:SfyaCUpYCn1Vlf4IUYiD9fPX4A5wJrkLzIz1N1q0pr0=
github.com/nats-io/nats.go v1.43.0 h1:uRFZ2FEoRvP64+UUhaTokyS18XBCR/xM2vQZKO4i8ug=
github.com/nats-io/nats.go v1.43.0/go.mod h1:iRWIPokVIFbVijxuMQq4y9ttaBTMe0SFdlZfMDd+33g=
github.com/nats-io/nats.go v1.45.0 h1:/wGPbnYXDM0pLKFjZTX+2JOw9TQPoIgTFrUaH97giwA=
github.com/nats-io/nats.go v1.45.0/go.mod h1:iRWIPokVIFbVijxuMQq4y9ttaBTMe0SFdlZfMDd+33g=
github.com/nats-io/nkeys v0.4.11 h1:q44qGV008kYd9W1b1nEBkNzvnWxtRSQ7A8BoqRrcfa0=
github.com/nats-io/nkeys v0.4.11/go.mod h1:szDimtgmfOi9n25JpfIdGw12tZFYXqhGxjhVxsatHVE=
github.com/nats-io/nuid v1.0.1 h1:5iA8DT8V7q8WK2EScv2padNa/rTESc1KdnPw4TC2paw=
@@ -137,7 +136,6 @@ github.com/nats-io/nuid v1.0.1/go.mod h1:19wcPz3Ph3q0Jbyiqsd0kePYG7A95tJPxeL+1OS
github.com/oklog/ulid/v2 v2.1.1 h1:suPZ4ARWLOJLegGFiZZ1dFAkqzhMjL3J1TzI+5wHz8s=
github.com/oklog/ulid/v2 v2.1.1/go.mod h1:rcEKHmBBKfef9DhnvX7y1HZBYxjXb0cP5ExxNsTT1QQ=
github.com/pborman/getopt v0.0.0-20170112200414-7148bc3a4c30/go.mod h1:85jBQOZwpVEaDAr341tbn15RS4fCAsIst0qp7i8ex1o=
github.com/pierrec/lz4/v4 v4.1.15/go.mod h1:gZWDp/Ze/IJXGXf23ltt2EXimqmTUXEy0GFuRQyBid4=
github.com/pierrec/lz4/v4 v4.1.22 h1:cKFw6uJDK+/gfw5BcDL0JL5aBsAFdsIT18eRtLj7VIU=
github.com/pierrec/lz4/v4 v4.1.22/go.mod h1:gZWDp/Ze/IJXGXf23ltt2EXimqmTUXEy0GFuRQyBid4=
github.com/pkg/browser v0.0.0-20240102092130-5ac0b6a4141c h1:+mdjkGKdHQG3305AYmdv1U2eRNDiU2ErMBj1gwrq8eQ=
@@ -147,36 +145,39 @@ github.com/pkg/xattr v0.4.12 h1:rRTkSyFNTRElv6pkA3zpjHpQ90p/OdHQC1GmGh1aTjM=
github.com/pkg/xattr v0.4.12/go.mod h1:di8WF84zAKk8jzR1UBTEWh9AUlIZZ7M/JNt8e9B6ktU=
github.com/pmezard/go-difflib v1.0.0 h1:4DBwDE0NGyQoBHbLQYPwSUPoCMWR5BEzIk/f1lZbAQM=
github.com/pmezard/go-difflib v1.0.0/go.mod h1:iKH77koFhYxTK1pcRnkKkqfTogsbg7gZNVY4sRDYZ/4=
github.com/redis/go-redis/v9 v9.8.0 h1:q3nRvjrlge/6UD7eTu/DSg2uYiU2mCL0G/uzBWqhicI=
github.com/redis/go-redis/v9 v9.8.0/go.mod h1:huWgSWd8mW6+m0VPhJjSSQ+d6Nh1VICQ6Q5lHuCH/Iw=
github.com/rabbitmq/amqp091-go v1.10.0 h1:STpn5XsHlHGcecLmMFCtg7mqq0RnD+zFr4uzukfVhBw=
github.com/rabbitmq/amqp091-go v1.10.0/go.mod h1:Hy4jKW5kQART1u+JkDTF9YYOQUHXqMuhrgxOEeS7G4o=
github.com/rivo/uniseg v0.2.0/go.mod h1:J6wj4VEh+S6ZtnVlnTBMWIodfgj8LQOQFoIToxlJtxc=
github.com/rivo/uniseg v0.4.7 h1:WUdvkW8uEhrYfLC4ZzdpI2ztxP1I582+49Oc5Mq64VQ=
github.com/rivo/uniseg v0.4.7/go.mod h1:FN3SvrM+Zdj16jyLfmOkMNblXMcoc8DfTHruCPUcx88=
github.com/rogpeppe/go-internal v1.12.0 h1:exVL4IDcn6na9z1rAb56Vxr+CgyK3nn3O+epU5NdKM8=
github.com/rogpeppe/go-internal v1.12.0/go.mod h1:E+RYuTGaKKdloAfM02xzb0FW3Paa99yedzYV+kq4uf4=
github.com/russross/blackfriday/v2 v2.1.0 h1:JIOH55/0cWyOuilr9/qlrm0BSXldqnqwMsf35Ld67mk=
github.com/russross/blackfriday/v2 v2.1.0/go.mod h1:+Rmxgy9KzJVeS9/2gXHxylqXiyQDYRxCVz55jmeOWTM=
github.com/ryanuber/go-glob v1.0.0 h1:iQh3xXAumdQ+4Ufa5b25cRpC5TYKlno6hsv6Cb3pkBk=
github.com/ryanuber/go-glob v1.0.0/go.mod h1:807d1WSdnB0XRJzKNil9Om6lcp/3a0v4qIHxIXzX/Yc=
github.com/segmentio/kafka-go v0.4.48 h1:9jyu9CWK4W5W+SroCe8EffbrRZVqAOkuaLd/ApID4Vs=
github.com/segmentio/kafka-go v0.4.48/go.mod h1:HjF6XbOKh0Pjlkr5GVZxt6CsjjwnmhVOfURM5KMd8qg=
github.com/segmentio/kafka-go v0.4.49 h1:GJiNX1d/g+kG6ljyJEoi9++PUMdXGAxb7JGPiDCuNmk=
github.com/segmentio/kafka-go v0.4.49/go.mod h1:Y1gn60kzLEEaW28YshXyk2+VCUKbJ3Qr6DrnT3i4+9E=
github.com/sirupsen/logrus v1.7.0/go.mod h1:yWOB1SBYBC5VeMP7gHvWumXLIWorT60ONWic61uBYv0=
github.com/smira/go-statsd v1.3.4 h1:kBYWcLSGT+qC6JVbvfz48kX7mQys32fjDOPrfmsSx2c=
github.com/smira/go-statsd v1.3.4/go.mod h1:RjdsESPgDODtg1VpVVf9MJrEW2Hw0wtRNbmB1CAhu6A=
github.com/stretchr/objx v0.1.0/go.mod h1:HFkY916IF+rwdDfMAkV7OtwuqBVzrE8GR6GFx+wExME=
github.com/stretchr/objx v0.4.0/go.mod h1:YvHI0jy2hoMjB+UWwv71VJQ9isScKT/TqJzVSSt89Yw=
github.com/stretchr/objx v0.5.0 h1:1zr/of2m5FGMsad5YfcqgdqdWrIhu+EBEJRhR1U7z/c=
github.com/stretchr/objx v0.5.0/go.mod h1:Yh+to48EsGEfYuaHDzXPcE3xhTkx73EhmCGUpEOglKo=
github.com/stretchr/objx v0.5.2 h1:xuMeJ0Sdp5ZMRXx/aWO6RZxdr3beISkG5/G/aIRr3pY=
github.com/stretchr/objx v0.5.2/go.mod h1:FRsXN1f5AsAjCGJKqEizvkpNtU+EGNCLh3NxZ/8L+MA=
github.com/stretchr/testify v1.2.2/go.mod h1:a8OnRcib4nhh0OaRAV+Yts87kKdq0PP7pXfy6kDkUVs=
github.com/stretchr/testify v1.7.1/go.mod h1:6Fq8oRcR53rry900zMqJjRRixrwX3KX962/h/Wwjteg=
github.com/stretchr/testify v1.8.0/go.mod h1:yNjHg4UonilssWZ8iaSj1OCr/vHnekPRkoO+kdMU+MU=
github.com/stretchr/testify v1.8.1/go.mod h1:w2LPCIKwWwSfY2zedu0+kehJoqGctiVI29o6fzry7u4=
github.com/stretchr/testify v1.10.0 h1:Xv5erBjTwe/5IxqUQTdXv5kgmIvbHo3QQyRwhJsOfJA=
github.com/stretchr/testify v1.10.0/go.mod h1:r2ic/lqez/lEtzL7wO/rwa5dbSLXVDPFyf8C91i36aY=
github.com/stretchr/testify v1.11.1 h1:7s2iGBzp5EwR7/aIZr8ao5+dra3wiQyKjjFuvgVKu7U=
github.com/stretchr/testify v1.11.1/go.mod h1:wZwfW3scLgRK+23gO65QZefKpKQRnfz6sD981Nm4B6U=
github.com/urfave/cli/v2 v2.27.7 h1:bH59vdhbjLv3LAvIu6gd0usJHgoTTPhCFib8qqOwXYU=
github.com/urfave/cli/v2 v2.27.7/go.mod h1:CyNAG/xg+iAOg0N4MPGZqVmv2rCoP267496AOXUZjA4=
github.com/valyala/bytebufferpool v1.0.0 h1:GqA5TC/0021Y/b9FG4Oi9Mr3q7XYx6KllzawFIhcdPw=
github.com/valyala/bytebufferpool v1.0.0/go.mod h1:6bBcMArwyJ5K/AmCkWv1jt77kVWyCJ6HpOuEn7z0Csc=
github.com/valyala/fasthttp v1.62.0 h1:8dKRBX/y2rCzyc6903Zu1+3qN0H/d2MsxPPmVNamiH0=
github.com/valyala/fasthttp v1.62.0/go.mod h1:FCINgr4GKdKqV8Q0xv8b+UxPV+H/O5nNFo3D+r54Htg=
github.com/valyala/fasthttp v1.66.0 h1:M87A0Z7EayeyNaV6pfO3tUTUiYO0dZfEJnRGXTVNuyU=
github.com/valyala/fasthttp v1.66.0/go.mod h1:Y4eC+zwoocmXSVCB1JmhNbYtS7tZPRI2ztPB72EVObs=
github.com/versity/scoutfs-go v0.0.0-20240325223134-38eb2f5f7d44 h1:Wx1o3pNrCzsHIIDyZ2MLRr6tF/1FhAr7HNDn80QqDWE=
github.com/versity/scoutfs-go v0.0.0-20240325223134-38eb2f5f7d44/go.mod h1:gJsq73k+4685y+rbDIpPY8i/5GbsiwP6JFoFyUDB1fQ=
github.com/xdg-go/pbkdf2 v1.0.0 h1:Su7DPu48wXMwC3bs7MCNG+z4FhcyEuz5dlvchbq0B0c=
@@ -190,32 +191,22 @@ github.com/xrash/smetrics v0.0.0-20240521201337-686a1a2994c1/go.mod h1:Ohn+xnUBi
github.com/xyproto/randomstring v1.0.5 h1:YtlWPoRdgMu3NZtP45drfy1GKoojuR7hmRcnhZqKjWU=
github.com/xyproto/randomstring v1.0.5/go.mod h1:rgmS5DeNXLivK7YprL0pY+lTuhNQW3iGxZ18UQApw/E=
github.com/yuin/goldmark v1.3.5/go.mod h1:mwnBkeHKe2W/ZEtQ+71ViKU8L12m81fl3OWwC1Zlc8k=
github.com/yuin/goldmark v1.4.13/go.mod h1:6yULJ656Px+3vBD8DxQVa3kxgyrAnzto9xy5taEt/CY=
go.uber.org/goleak v1.3.0 h1:2K3zAYmnTNqV73imy9J1T3WC+gmCePx2hEGkimedGto=
go.uber.org/goleak v1.3.0/go.mod h1:CoHD4mav9JJNrW/WLlf7HGZPjdw8EucARQHekz1X6bE=
golang.org/x/crypto v0.0.0-20190308221718-c2843e01d9a2/go.mod h1:djNgcEr1/C05ACkg1iLfiJU5Ep61QUkGW8qpdssI0+w=
golang.org/x/crypto v0.0.0-20191011191535-87dc89f01550/go.mod h1:yigFU9vqHzYiE8UmvKecakEJjdnWj3jj499lnFckfCI=
golang.org/x/crypto v0.0.0-20210921155107-089bfa567519/go.mod h1:GvvjBRRGRdwPK5ydBHafDWAxML/pGHZbMvKqRZ5+Abc=
golang.org/x/crypto v0.14.0/go.mod h1:MVFd36DqK4CsrnJYDkBA3VC4m2GkXAM0PvzMCn4JQf4=
golang.org/x/crypto v0.39.0 h1:SHs+kF4LP+f+p14esP5jAoDpHU8Gu/v9lFRK6IT5imM=
golang.org/x/crypto v0.39.0/go.mod h1:L+Xg3Wf6HoL4Bn4238Z6ft6KfEpN0tJGo53AAPC632U=
golang.org/x/crypto v0.42.0 h1:chiH31gIWm57EkTXpwnqf8qeuMUi0yekh6mT2AvFlqI=
golang.org/x/crypto v0.42.0/go.mod h1:4+rDnOTJhQCx2q7/j6rAN5XDw8kPjeaXEUR2eL94ix8=
golang.org/x/mod v0.4.2/go.mod h1:s0Qsj1ACt9ePp/hMypM3fl4fZqREWJwdYDEqhRiZZUA=
golang.org/x/mod v0.6.0-dev.0.20220419223038-86c51ed26bb4/go.mod h1:jJ57K6gSWd91VN4djpZkiMVwK6gcyfeH4XE8wZrZaV4=
golang.org/x/mod v0.8.0/go.mod h1:iBbtSCu2XBx23ZKBPSOrRkjjQPZFPuis4dIYUhu/chs=
golang.org/x/net v0.0.0-20190404232315-eb5bcb51f2a3/go.mod h1:t9HGtf8HONx5eT2rtn7q6eTqICYqUVnKs3thJo3Qplg=
golang.org/x/net v0.0.0-20190620200207-3b0461eec859/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s=
golang.org/x/net v0.0.0-20210226172049-e18ecbb05110/go.mod h1:m0MpNAwzfU5UDzcl9v0D8zg8gWTRqZa9RBIspLL5mdg=
golang.org/x/net v0.0.0-20210405180319-a5a99cb37ef4/go.mod h1:p54w0d4576C0XHj96bSt6lcn1PtDYWL6XObtHCRCNQM=
golang.org/x/net v0.0.0-20220722155237-a158d28d115b/go.mod h1:XRhObCWvk6IyKnWLug+ECip1KBveYUHfp+8e9klMJ9c=
golang.org/x/net v0.6.0/go.mod h1:2Tu9+aMcznHK/AK1HMvgo6xiTLG5rD5rZLDS+rp2Bjs=
golang.org/x/net v0.10.0/go.mod h1:0qNGK6F8kojg2nk9dLZ2mShWaEBan6FAoqfSigmmuDg=
golang.org/x/net v0.17.0/go.mod h1:NxSsAGuq816PNPmqtQdLE42eU2Fs7NoRIZrHJAlaCOE=
golang.org/x/net v0.41.0 h1:vBTly1HeNPEn3wtREYfy4GZ/NECgw2Cnl+nK6Nz3uvw=
golang.org/x/net v0.41.0/go.mod h1:B/K4NNqkfmg07DQYrbwvSluqCJOOXwUjeb/5lOisjbA=
golang.org/x/net v0.44.0 h1:evd8IRDyfNBMBTTY5XRF1vaZlD+EmWx6x8PkhR04H/I=
golang.org/x/net v0.44.0/go.mod h1:ECOoLqd5U3Lhyeyo/QDCEVQ4sNgYsqvCZ722XogGieY=
golang.org/x/sync v0.0.0-20190423024810-112230192c58/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.0.0-20210220032951-036812b2e83c/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.0.0-20220722155255-886fb9371eb4/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.1.0/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.15.0 h1:KWH3jNZsfyT6xfAfKiz6MRNmd46ByHDYaZ7KSkCtdW8=
golang.org/x/sync v0.15.0/go.mod h1:1dzgHSNfp02xaA81J2MS99Qcpr2w7fw1gpm99rleRqA=
golang.org/x/sync v0.17.0 h1:l60nONMj9l5drqw6jlhIELNv9I0A4OFgRsG9k2oT9Ug=
golang.org/x/sync v0.17.0/go.mod h1:9KTHXmSnoGruLpwFjVSX0lNNA75CykiMECbovNTZqGI=
golang.org/x/sys v0.0.0-20190215142949-d0b11bdaac8a/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
golang.org/x/sys v0.0.0-20190412213103-97732733099d/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20191026070338-33540a1f6037/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
@@ -223,42 +214,27 @@ golang.org/x/sys v0.0.0-20201119102817-f84b799fce68/go.mod h1:h1NjWce9XRLGQEsW7w
golang.org/x/sys v0.0.0-20210124154548-22da62e12c0c/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20210330210617-4fbd30eecc44/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20210510120138-977fb7262007/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.0.0-20210615035016-665e8c7367d1/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.0.0-20220408201424-a24fb2fb8a0f/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.0.0-20220520151302-bc2c85ada10a/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.0.0-20220722155257-8c9f86f7a55f/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.1.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.5.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.6.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.8.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.13.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.33.0 h1:q3i8TbbEz+JRD9ywIRlyRAQbM0qF7hu24q3teo2hbuw=
golang.org/x/sys v0.33.0/go.mod h1:BJP2sWEmIv4KK5OTEluFJCKSidICx8ciO85XgH3Ak8k=
golang.org/x/sys v0.36.0 h1:KVRy2GtZBrk1cBYA7MKu5bEZFxQk4NIDV6RLVcC8o0k=
golang.org/x/sys v0.36.0/go.mod h1:OgkHotnGiDImocRcuBABYBEXf8A9a87e/uXjp9XT3ks=
golang.org/x/term v0.0.0-20201126162022-7de9c90e9dd1/go.mod h1:bj7SfCRtBDWHUb9snDiAeCFNEtKQo2Wmx5Cou7ajbmo=
golang.org/x/term v0.0.0-20210927222741-03fcf44c2211/go.mod h1:jbD1KX2456YbFQfuXm/mYQcufACuNUgVhRMnK/tPxf8=
golang.org/x/term v0.5.0/go.mod h1:jMB1sMXY+tzblOD4FWmEbocvup2/aLOaQEp7JmGp78k=
golang.org/x/term v0.8.0/go.mod h1:xPskH00ivmX89bAKVGSKKtLOWNx2+17Eiy94tnKShWo=
golang.org/x/term v0.13.0/go.mod h1:LTmsnFJwVN6bCy1rVCoS+qHT1HhALEFxKncY3WNNh4U=
golang.org/x/text v0.3.0/go.mod h1:NqM8EUOU14njkJ3fqMW+pc6Ldnwhi/IjpwHt7yyuwOQ=
golang.org/x/text v0.3.3/go.mod h1:5Zoc/QRtKVWzQhOtBMvqHzDpF6irO9z98xDceosuGiQ=
golang.org/x/text v0.3.7/go.mod h1:u+2+/6zg+i71rQMx5EYifcz6MCKuco9NR6JIITiCfzQ=
golang.org/x/text v0.3.8/go.mod h1:E6s5w1FMmriuDzIBO73fBruAKo1PCIq6d2Q6DHfQ8WQ=
golang.org/x/text v0.7.0/go.mod h1:mrYo+phRRbMaCq/xk9113O4dZlRixOauAjOtrjsXDZ8=
golang.org/x/text v0.9.0/go.mod h1:e1OnstbJyHTd6l/uOt8jFFHp6TRDWZR/bV3emEE/zU8=
golang.org/x/text v0.13.0/go.mod h1:TvPlkZtksWOMsz7fbANvkp4WM8x/WCo/om8BMLbz+aE=
golang.org/x/text v0.26.0 h1:P42AVeLghgTYr4+xUnTRKDMqpar+PtX7KWuNQL21L8M=
golang.org/x/text v0.26.0/go.mod h1:QK15LZJUUQVJxhz7wXgxSy/CJaTFjd0G+YLonydOVQA=
golang.org/x/time v0.12.0 h1:ScB/8o8olJvc+CQPWrK3fPZNfh7qgwCrY0zJmoEQLSE=
golang.org/x/time v0.12.0/go.mod h1:CDIdPxbZBQxdj6cxyCIdrNogrJKMJ7pr37NYpMcMDSg=
golang.org/x/text v0.29.0 h1:1neNs90w9YzJ9BocxfsQNHKuAT4pkghyXc4nhZ6sJvk=
golang.org/x/text v0.29.0/go.mod h1:7MhJOA9CD2qZyOKYazxdYMF85OwPdEr9jTtBpO7ydH4=
golang.org/x/time v0.13.0 h1:eUlYslOIt32DgYD6utsuUeHs4d7AsEYLuIAdg7FlYgI=
golang.org/x/time v0.13.0/go.mod h1:eL/Oa2bBBK0TkX57Fyni+NgnyQQN4LitPmob2Hjnqw4=
golang.org/x/tools v0.0.0-20180917221912-90fa682c2a6e/go.mod h1:n7NCudcB/nEzxVGmLbDWY5pfWTLqBcC2KZ6jyYvM4mQ=
golang.org/x/tools v0.0.0-20191119224855-298f0cb1881e/go.mod h1:b+2E5dAYhXwXZwtnZ6UAqBI28+e2cm9otk0dWdXHAEo=
golang.org/x/tools v0.1.1/go.mod h1:o0xws9oXOQQZyjljx8fwUC0k7L1pTE6eaCbjGeHmOkk=
golang.org/x/tools v0.1.12/go.mod h1:hNGJHUnrk76NpqgfD5Aqm5Crs+Hm0VOH/i9J2+nxYbc=
golang.org/x/tools v0.6.0/go.mod h1:Xwgl3UAJ/d3gWutnCtw505GrjyAbvKui8lOU390QaIU=
golang.org/x/xerrors v0.0.0-20190717185122-a985d3407aa7/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
golang.org/x/xerrors v0.0.0-20191011141410-1b5146add898/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
golang.org/x/xerrors v0.0.0-20200804184101-5ec99f83aff1/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
gopkg.in/check.v1 v0.0.0-20161208181325-20d25e280405/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0=
gopkg.in/check.v1 v1.0.0-20201130134442-10cb98267c6c h1:Hei/4ADfdWqJk1ZMxUNpqntNwaWcugrBjAiHlqqRiVk=
gopkg.in/check.v1 v1.0.0-20201130134442-10cb98267c6c/go.mod h1:JHkPIbrfpd72SG/EVd6muEfDQjcINNoR0C8j2r3qZ4Q=
gopkg.in/yaml.v3 v3.0.0-20200313102051-9f266ea9e77c/go.mod h1:K4uyk7z7BCEPqu6E+C64Yfv1cQ7kz7rIZviUmN+EgEM=
gopkg.in/yaml.v3 v3.0.1 h1:fxVm/GzAzEWqLHuvctI91KS9hhNmmWOoWu0XTYJS7CA=
gopkg.in/yaml.v3 v3.0.1/go.mod h1:K4uyk7z7BCEPqu6E+C64Yfv1cQ7kz7rIZviUmN+EgEM=

View File

@@ -24,57 +24,99 @@ var (
)
var (
ActionUndetected = "ActionUnDetected"
ActionAbortMultipartUpload = "s3_AbortMultipartUpload"
ActionCompleteMultipartUpload = "s3_CompleteMultipartUpload"
ActionCopyObject = "s3_CopyObject"
ActionCreateBucket = "s3_CreateBucket"
ActionCreateMultipartUpload = "s3_CreateMultipartUpload"
ActionDeleteBucket = "s3_DeleteBucket"
ActionDeleteBucketPolicy = "s3_DeleteBucketPolicy"
ActionDeleteBucketTagging = "s3_DeleteBucketTagging"
ActionDeleteObject = "s3_DeleteObject"
ActionDeleteObjectTagging = "s3_DeleteObjectTagging"
ActionDeleteObjects = "s3_DeleteObjects"
ActionGetBucketAcl = "s3_GetBucketAcl"
ActionGetBucketPolicy = "s3_GetBucketPolicy"
ActionGetBucketTagging = "s3_GetBucketTagging"
ActionGetBucketVersioning = "s3_GetBucketVersioning"
ActionGetObject = "s3_GetObject"
ActionGetObjectAcl = "s3_GetObjectAcl"
ActionGetObjectAttributes = "s3_GetObjectAttributes"
ActionGetObjectLegalHold = "s3_GetObjectLegalHold"
ActionGetObjectLockConfiguration = "s3_GetObjectLockConfiguration"
ActionGetObjectRetention = "s3_GetObjectRetention"
ActionGetObjectTagging = "s3_GetObjectTagging"
ActionHeadBucket = "s3_HeadBucket"
ActionHeadObject = "s3_HeadObject"
ActionListAllMyBuckets = "s3_ListAllMyBuckets"
ActionListMultipartUploads = "s3_ListMultipartUploads"
ActionListObjectVersions = "s3_ListObjectVersions"
ActionListObjects = "s3_ListObjects"
ActionListObjectsV2 = "s3_ListObjectsV2"
ActionListParts = "s3_ListParts"
ActionPutBucketAcl = "s3_PutBucketAcl"
ActionPutBucketPolicy = "s3_PutBucketPolicy"
ActionPutBucketTagging = "s3_PutBucketTagging"
ActionPutBucketVersioning = "s3_PutBucketVersioning"
ActionPutObject = "s3_PutObject"
ActionPutObjectAcl = "s3_PutObjectAcl"
ActionPutObjectLegalHold = "s3_PutObjectLegalHold"
ActionPutObjectLockConfiguration = "s3_PutObjectLockConfiguration"
ActionPutObjectRetention = "s3_PutObjectRetention"
ActionPutObjectTagging = "s3_PutObjectTagging"
ActionRestoreObject = "s3_RestoreObject"
ActionSelectObjectContent = "s3_SelectObjectContent"
ActionUploadPart = "s3_UploadPart"
ActionUploadPartCopy = "s3_UploadPartCopy"
ActionPutBucketOwnershipControls = "s3_PutBucketOwnershipControls"
ActionGetBucketOwnershipControls = "s3_GetBucketOwnershipControls"
ActionDeleteBucketOwnershipControls = "s3_DeleteBucketOwnershipControls"
ActionPutBucketCors = "s3_PutBucketCors"
ActionGetBucketCors = "s3_GetBucketCors"
ActionDeleteBucketCors = "s3_DeleteBucketCors"
ActionUndetected = "ActionUnDetected"
ActionAbortMultipartUpload = "s3_AbortMultipartUpload"
ActionCompleteMultipartUpload = "s3_CompleteMultipartUpload"
ActionCopyObject = "s3_CopyObject"
ActionCreateBucket = "s3_CreateBucket"
ActionCreateMultipartUpload = "s3_CreateMultipartUpload"
ActionDeleteBucket = "s3_DeleteBucket"
ActionDeleteBucketPolicy = "s3_DeleteBucketPolicy"
ActionDeleteBucketTagging = "s3_DeleteBucketTagging"
ActionDeleteObject = "s3_DeleteObject"
ActionDeleteObjectTagging = "s3_DeleteObjectTagging"
ActionDeleteObjects = "s3_DeleteObjects"
ActionGetBucketAcl = "s3_GetBucketAcl"
ActionGetBucketPolicy = "s3_GetBucketPolicy"
ActionGetBucketTagging = "s3_GetBucketTagging"
ActionGetBucketVersioning = "s3_GetBucketVersioning"
ActionGetObject = "s3_GetObject"
ActionGetObjectAcl = "s3_GetObjectAcl"
ActionGetObjectAttributes = "s3_GetObjectAttributes"
ActionGetObjectLegalHold = "s3_GetObjectLegalHold"
ActionGetObjectLockConfiguration = "s3_GetObjectLockConfiguration"
ActionGetObjectRetention = "s3_GetObjectRetention"
ActionGetObjectTagging = "s3_GetObjectTagging"
ActionHeadBucket = "s3_HeadBucket"
ActionHeadObject = "s3_HeadObject"
ActionListAllMyBuckets = "s3_ListAllMyBuckets"
ActionListMultipartUploads = "s3_ListMultipartUploads"
ActionListObjectVersions = "s3_ListObjectVersions"
ActionListObjects = "s3_ListObjects"
ActionListObjectsV2 = "s3_ListObjectsV2"
ActionListParts = "s3_ListParts"
ActionPutBucketAcl = "s3_PutBucketAcl"
ActionPutBucketPolicy = "s3_PutBucketPolicy"
ActionPutBucketTagging = "s3_PutBucketTagging"
ActionPutBucketVersioning = "s3_PutBucketVersioning"
ActionPutObject = "s3_PutObject"
ActionPutObjectAcl = "s3_PutObjectAcl"
ActionPutObjectLegalHold = "s3_PutObjectLegalHold"
ActionPutObjectLockConfiguration = "s3_PutObjectLockConfiguration"
ActionPutObjectRetention = "s3_PutObjectRetention"
ActionPutObjectTagging = "s3_PutObjectTagging"
ActionRestoreObject = "s3_RestoreObject"
ActionSelectObjectContent = "s3_SelectObjectContent"
ActionUploadPart = "s3_UploadPart"
ActionUploadPartCopy = "s3_UploadPartCopy"
ActionPutBucketOwnershipControls = "s3_PutBucketOwnershipControls"
ActionGetBucketOwnershipControls = "s3_GetBucketOwnershipControls"
ActionDeleteBucketOwnershipControls = "s3_DeleteBucketOwnershipControls"
ActionPutBucketCors = "s3_PutBucketCors"
ActionGetBucketCors = "s3_GetBucketCors"
ActionDeleteBucketCors = "s3_DeleteBucketCors"
ActionOptions = "s3_Options"
ActionPutBucketAnalyticsConfiguration = "s3_PutBucketAnalyticsConfiguration"
ActionGetBucketAnalyticsConfiguration = "s3_GetBucketAnalyticsConfiguration"
ActionListBucketAnalyticsConfigurations = "s3_ListBucketAnalyticsConfigurations"
ActionDeleteBucketAnalyticsConfiguration = "s3_DeleteBucketAnalyticsConfiguration"
ActionPutBucketEncryption = "s3_PutBucketEncryption"
ActionGetBucketEncryption = "s3_GetBucketEncryption"
ActionDeleteBucketEncryption = "s3_DeleteBucketEncryption"
ActionPutBucketIntelligentTieringConfiguration = "s3_PutBucketIntelligentTieringConfiguration"
ActionGetBucketIntelligentTieringConfiguration = "s3_GetBucketIntelligentTieringConfiguration"
ActionListBucketIntelligentTieringConfigurations = "s3_ListBucketIntelligentTieringConfigurations"
ActionDeleteBucketIntelligentTieringConfiguration = "s3_DeleteBucketIntelligentTieringConfiguration"
ActionPutBucketInventoryConfiguration = "s3_PutBucketInventoryConfiguration"
ActionGetBucketInventoryConfiguration = "s3_GetBucketInventoryConfiguration"
ActionListBucketInventoryConfigurations = "s3_ListBucketInventoryConfigurations"
ActionDeleteBucketInventoryConfiguration = "s3_DeleteBucketInventoryConfiguration"
ActionPutBucketLifecycleConfiguration = "s3_PutBucketLifecycleConfiguration"
ActionGetBucketLifecycleConfiguration = "s3_GetBucketLifecycleConfiguration"
ActionDeleteBucketLifecycle = "s3_DeleteBucketLifecycle"
ActionPutBucketLogging = "s3_PutBucketLogging"
ActionGetBucketLogging = "s3_GetBucketLogging"
ActionPutBucketRequestPayment = "s3_PutBucketRequestPayment"
ActionGetBucketRequestPayment = "s3_GetBucketRequestPayment"
ActionPutBucketMetricsConfiguration = "s3_PutBucketMetricsConfiguration"
ActionGetBucketMetricsConfiguration = "s3_GetBucketMetricsConfiguration"
ActionListBucketMetricsConfigurations = "s3_ListBucketMetricsConfigurations"
ActionDeleteBucketMetricsConfiguration = "s3_DeleteBucketMetricsConfiguration"
ActionPutBucketReplication = "s3_PutBucketReplication"
ActionGetBucketReplication = "s3_GetBucketReplication"
ActionDeleteBucketReplication = "s3_DeleteBucketReplication"
ActionPutPublicAccessBlock = "s3_PutPublicAccessBlock"
ActionGetPublicAccessBlock = "s3_GetPublicAccessBlock"
ActionDeletePublicAccessBlock = "s3_DeletePublicAccessBlock"
ActionPutBucketNotificationConfiguration = "s3_PutBucketNotificationConfiguration"
ActionGetBucketNotificationConfiguration = "s3_GetBucketNotificationConfiguration"
ActionPutBucketAccelerateConfiguration = "s3_PutBucketAccelerateConfiguration"
ActionGetBucketAccelerateConfiguration = "s3_GetBucketAccelerateConfiguration"
ActionPutBucketWebsite = "s3_PutBucketWebsite"
ActionGetBucketWebsite = "s3_GetBucketWebsite"
ActionDeleteBucketWebsite = "s3_DeleteBucketWebsite"
ActionGetBucketPolicyStatus = "s3_GetBucketPolicyStatus"
ActionGetBucketLocation = "s3_GetBucketLocation"
// Admin actions
ActionAdminCreateUser = "admin_CreateUser"
@@ -281,4 +323,184 @@ func init() {
Name: "DeleteBucketCors",
Service: "s3",
}
ActionMap[ActionPutBucketOwnershipControls] = Action{
Name: "PutBucketOwnershipControls",
Service: "s3",
}
ActionMap[ActionGetBucketOwnershipControls] = Action{
Name: "GetBucketOwnershipControls",
Service: "s3",
}
ActionMap[ActionDeleteBucketOwnershipControls] = Action{
Name: "DeleteBucketOwnershipControls",
Service: "s3",
}
ActionMap[ActionOptions] = Action{
Name: "Options",
Service: "s3",
}
ActionMap[ActionPutBucketAnalyticsConfiguration] = Action{
Name: "PutBucketAnalyticsConfiguration",
Service: "s3",
}
ActionMap[ActionGetBucketAnalyticsConfiguration] = Action{
Name: "GetBucketAnalyticsConfiguration",
Service: "s3",
}
ActionMap[ActionListBucketAnalyticsConfigurations] = Action{
Name: "ListBucketAnalyticsConfigurations",
Service: "s3",
}
ActionMap[ActionDeleteBucketAnalyticsConfiguration] = Action{
Name: "DeleteBucketAnalyticsConfiguration",
Service: "s3",
}
ActionMap[ActionPutBucketEncryption] = Action{
Name: "PutBucketEncryption",
Service: "s3",
}
ActionMap[ActionGetBucketEncryption] = Action{
Name: "GetBucketEncryption",
Service: "s3",
}
ActionMap[ActionDeleteBucketEncryption] = Action{
Name: "DeleteBucketEncryption",
Service: "s3",
}
ActionMap[ActionPutBucketIntelligentTieringConfiguration] = Action{
Name: "PutBucketIntelligentTieringConfiguration",
Service: "s3",
}
ActionMap[ActionGetBucketIntelligentTieringConfiguration] = Action{
Name: "GetBucketIntelligentTieringConfiguration",
Service: "s3",
}
ActionMap[ActionListBucketIntelligentTieringConfigurations] = Action{
Name: "ListBucketIntelligentTieringConfigurations",
Service: "s3",
}
ActionMap[ActionDeleteBucketIntelligentTieringConfiguration] = Action{
Name: "DeleteBucketIntelligentTieringConfiguration",
Service: "s3",
}
ActionMap[ActionPutBucketInventoryConfiguration] = Action{
Name: "PutBucketInventoryConfiguration",
Service: "s3",
}
ActionMap[ActionGetBucketInventoryConfiguration] = Action{
Name: "GetBucketInventoryConfiguration",
Service: "s3",
}
ActionMap[ActionListBucketInventoryConfigurations] = Action{
Name: "ListBucketInventoryConfigurations",
Service: "s3",
}
ActionMap[ActionDeleteBucketInventoryConfiguration] = Action{
Name: "DeleteBucketInventoryConfiguration",
Service: "s3",
}
ActionMap[ActionPutBucketLifecycleConfiguration] = Action{
Name: "PutBucketLifecycleConfiguration",
Service: "s3",
}
ActionMap[ActionGetBucketLifecycleConfiguration] = Action{
Name: "GetBucketLifecycleConfiguration",
Service: "s3",
}
ActionMap[ActionDeleteBucketLifecycle] = Action{
Name: "DeleteBucketLifecycle",
Service: "s3",
}
ActionMap[ActionPutBucketLogging] = Action{
Name: "PutBucketLogging",
Service: "s3",
}
ActionMap[ActionGetBucketLogging] = Action{
Name: "GetBucketLogging",
Service: "s3",
}
ActionMap[ActionPutBucketRequestPayment] = Action{
Name: "PutBucketRequestPayment",
Service: "s3",
}
ActionMap[ActionGetBucketRequestPayment] = Action{
Name: "GetBucketRequestPayment",
Service: "s3",
}
ActionMap[ActionPutBucketMetricsConfiguration] = Action{
Name: "PutBucketMetricsConfiguration",
Service: "s3",
}
ActionMap[ActionGetBucketMetricsConfiguration] = Action{
Name: "GetBucketMetricsConfiguration",
Service: "s3",
}
ActionMap[ActionListBucketMetricsConfigurations] = Action{
Name: "ListBucketMetricsConfigurations",
Service: "s3",
}
ActionMap[ActionDeleteBucketMetricsConfiguration] = Action{
Name: "DeleteBucketMetricsConfiguration",
Service: "s3",
}
ActionMap[ActionPutBucketReplication] = Action{
Name: "PutBucketReplication",
Service: "s3",
}
ActionMap[ActionGetBucketReplication] = Action{
Name: "GetBucketReplication",
Service: "s3",
}
ActionMap[ActionDeleteBucketReplication] = Action{
Name: "DeleteBucketReplication",
Service: "s3",
}
ActionMap[ActionPutPublicAccessBlock] = Action{
Name: "PutPublicAccessBlock",
Service: "s3",
}
ActionMap[ActionGetPublicAccessBlock] = Action{
Name: "GetPublicAccessBlock",
Service: "s3",
}
ActionMap[ActionDeletePublicAccessBlock] = Action{
Name: "DeletePublicAccessBlock",
Service: "s3",
}
ActionMap[ActionPutBucketNotificationConfiguration] = Action{
Name: "PutBucketNotificationConfiguration",
Service: "s3",
}
ActionMap[ActionGetBucketNotificationConfiguration] = Action{
Name: "GetBucketNotificationConfiguration",
Service: "s3",
}
ActionMap[ActionPutBucketAccelerateConfiguration] = Action{
Name: "PutBucketAccelerateConfiguration",
Service: "s3",
}
ActionMap[ActionGetBucketAccelerateConfiguration] = Action{
Name: "GetBucketAccelerateConfiguration",
Service: "s3",
}
ActionMap[ActionPutBucketWebsite] = Action{
Name: "PutBucketWebsite",
Service: "s3",
}
ActionMap[ActionGetBucketWebsite] = Action{
Name: "GetBucketWebsite",
Service: "s3",
}
ActionMap[ActionDeleteBucketWebsite] = Action{
Name: "DeleteBucketWebsite",
Service: "s3",
}
ActionMap[ActionGetBucketPolicyStatus] = Action{
Name: "GetBucketPolicyStatus",
Service: "s3",
}
ActionMap[ActionGetBucketLocation] = Action{
Name: "GetBucketLocation",
Service: "s3",
}
}
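The hunk above only grows the metrics action registry: each new string constant is registered in init() as an ActionMap entry carrying a display Name and its Service. As a reading aid, a minimal consumption sketch follows; it assumes ActionMap is an exported map keyed by those action strings (its declaration sits outside this hunk) and uses only constants and struct fields visible here.

package main

import (
    "fmt"

    "github.com/versity/versitygw/metrics"
)

func main() {
    // Look up one of the newly registered actions; the ok check guards against
    // keys that init() has not registered.
    a, ok := metrics.ActionMap[metrics.ActionGetBucketLocation]
    if !ok {
        fmt.Println("action not registered")
        return
    }
    // Given the registration above, this prints: s3 GetBucketLocation
    fmt.Printf("%s %s\n", a.Service, a.Name)
}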

View File

@@ -41,8 +41,14 @@ type Tag struct {
Value string
}
// Manager is a manager of metrics plugins
type Manager struct {
// Manager is the interface definition for the metrics manager
type Manager interface {
Send(ctx *fiber.Ctx, err error, action string, count int64, status int)
Close()
}
// manager is a manager of metrics plugins
type manager struct {
wg sync.WaitGroup
ctx context.Context
@@ -59,7 +65,7 @@ type Config struct {
}
// NewManager initializes metrics plugins and returns a new metrics manager
func NewManager(ctx context.Context, conf Config) (*Manager, error) {
func NewManager(ctx context.Context, conf Config) (Manager, error) {
if len(conf.StatsdServers) == 0 && len(conf.DogStatsdServers) == 0 {
return nil, nil
}
@@ -74,7 +80,7 @@ func NewManager(ctx context.Context, conf Config) (*Manager, error) {
addDataChan := make(chan datapoint, dataItemCount)
mgr := &Manager{
mgr := &manager{
addDataChan: addDataChan,
ctx: ctx,
config: conf,
@@ -112,7 +118,7 @@ func NewManager(ctx context.Context, conf Config) (*Manager, error) {
return mgr, nil
}
func (m *Manager) Send(ctx *fiber.Ctx, err error, action string, count int64, status int) {
func (m *manager) Send(ctx *fiber.Ctx, err error, action string, count int64, status int) {
// In case of Authentication failures, url parsing ...
if action == "" {
action = ActionUndetected
@@ -168,12 +174,12 @@ func (m *Manager) Send(ctx *fiber.Ctx, err error, action string, count int64, st
}
// increment increments the key by one
func (m *Manager) increment(key string, tags ...Tag) {
func (m *manager) increment(key string, tags ...Tag) {
m.add(key, 1, tags...)
}
// add adds value to key
func (m *Manager) add(key string, value int64, tags ...Tag) {
func (m *manager) add(key string, value int64, tags ...Tag) {
if m.ctx.Err() != nil {
return
}
@@ -192,7 +198,7 @@ func (m *Manager) add(key string, value int64, tags ...Tag) {
}
// Close closes metrics channels, waits for data to complete, closes all plugins
func (m *Manager) Close() {
func (m *manager) Close() {
// drain the datapoint channels
close(m.addDataChan)
m.wg.Wait()
@@ -209,7 +215,7 @@ type publisher interface {
Close()
}
func (m *Manager) addForwarder(addChan <-chan datapoint) {
func (m *manager) addForwarder(addChan <-chan datapoint) {
for data := range addChan {
for _, s := range m.publishers {
s.Add(data.key, data.value, data.tags...)
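This hunk narrows the exported surface to a Manager interface (Send and Close) and makes NewManager return that interface instead of a concrete *Manager. A minimal usage sketch follows, relying only on the signatures visible above; the Config fields are left at their zero values because their types are not shown in this view, and the nil check mirrors the early return NewManager takes when no statsd targets are configured.

package main

import (
    "context"
    "log"

    "github.com/gofiber/fiber/v2"
    "github.com/versity/versitygw/metrics"
)

func main() {
    var conf metrics.Config // StatsdServers / DogStatsdServers would be filled in from flags or config

    mgr, err := metrics.NewManager(context.Background(), conf)
    if err != nil {
        log.Fatal(err)
    }
    // NewManager returns a nil Manager when no metrics targets are configured,
    // so callers still need a nil guard before sending datapoints.
    if mgr == nil {
        log.Println("metrics disabled")
        return
    }
    defer mgr.Close()

    app := fiber.New()
    app.Get("/probe", func(c *fiber.Ctx) error {
        // Record one successful request against the GetObject action.
        mgr.Send(c, nil, metrics.ActionGetObject, 1, fiber.StatusOK)
        return c.SendStatus(fiber.StatusOK)
    })
    log.Fatal(app.Listen(":8080"))
}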

View File

@@ -18,30 +18,59 @@ import (
"github.com/gofiber/fiber/v2"
"github.com/versity/versitygw/auth"
"github.com/versity/versitygw/backend"
"github.com/versity/versitygw/metrics"
"github.com/versity/versitygw/s3api/controllers"
"github.com/versity/versitygw/s3api/middlewares"
"github.com/versity/versitygw/s3log"
)
type S3AdminRouter struct{}
func (ar *S3AdminRouter) Init(app *fiber.App, be backend.Backend, iam auth.IAMService, logger s3log.AuditLogger) {
controller := controllers.NewAdminController(iam, be, logger)
func (ar *S3AdminRouter) Init(app *fiber.App, be backend.Backend, iam auth.IAMService, logger s3log.AuditLogger, root middlewares.RootUserConfig, region string, debug bool) {
ctrl := controllers.NewAdminController(iam, be, logger)
services := &controllers.Services{
Logger: logger,
}
// CreateUser admin api
app.Patch("/create-user", controller.CreateUser)
app.Patch("/create-user",
controllers.ProcessHandlers(ctrl.CreateUser, metrics.ActionAdminCreateUser, services,
middlewares.VerifyV4Signature(root, iam, region),
middlewares.IsAdmin(metrics.ActionAdminCreateUser),
))
// DeleteUser admin api
app.Patch("/delete-user", controller.DeleteUser)
app.Patch("/delete-user",
controllers.ProcessHandlers(ctrl.DeleteUser, metrics.ActionAdminDeleteUser, services,
middlewares.VerifyV4Signature(root, iam, region),
middlewares.IsAdmin(metrics.ActionAdminDeleteUser),
))
// UpdateUser admin api
app.Patch("/update-user", controller.UpdateUser)
app.Patch("/update-user",
controllers.ProcessHandlers(ctrl.UpdateUser, metrics.ActionAdminUpdateUser, services,
middlewares.VerifyV4Signature(root, iam, region),
middlewares.IsAdmin(metrics.ActionAdminUpdateUser),
))
// ListUsers admin api
app.Patch("/list-users", controller.ListUsers)
app.Patch("/list-users",
controllers.ProcessHandlers(ctrl.ListUsers, metrics.ActionAdminListUsers, services,
middlewares.VerifyV4Signature(root, iam, region),
middlewares.IsAdmin(metrics.ActionAdminListUsers),
))
// ChangeBucketOwner admin api
app.Patch("/change-bucket-owner", controller.ChangeBucketOwner)
app.Patch("/change-bucket-owner",
controllers.ProcessHandlers(ctrl.ChangeBucketOwner, metrics.ActionAdminChangeBucketOwner, services,
middlewares.VerifyV4Signature(root, iam, region),
middlewares.IsAdmin(metrics.ActionAdminChangeBucketOwner),
))
// ListBucketsAndOwners admin api
app.Patch("/list-buckets", controller.ListBuckets)
app.Patch("/list-buckets",
controllers.ProcessHandlers(ctrl.ListBuckets, metrics.ActionAdminListBuckets, services,
middlewares.VerifyV4Signature(root, iam, region),
middlewares.IsAdmin(metrics.ActionAdminListBuckets),
))
}
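Each admin route above is now registered through controllers.ProcessHandlers together with per-route VerifyV4Signature and IsAdmin middlewares, instead of a bare fiber handler. ProcessHandlers itself is not part of this compare view; purely as an illustration of the pattern (run the middlewares, call the controller, centralize response writing), a simplified, hypothetical stand-in is sketched below. None of these names are the gateway's real implementation.

package main

import (
    "log"

    "github.com/gofiber/fiber/v2"
)

// response loosely mirrors the (*Response, error) pair the admin controllers now return.
type response struct {
    data   any
    status int
}

// controllerFunc is the new controller shape: it returns data and an error
// instead of writing to the fiber context itself.
type controllerFunc func(*fiber.Ctx) (*response, error)

// processHandlers is a hypothetical stand-in for controllers.ProcessHandlers:
// run each middleware, then the controller, then translate the result once.
func processHandlers(h controllerFunc, mws ...fiber.Handler) fiber.Handler {
    return func(c *fiber.Ctx) error {
        for _, mw := range mws {
            if err := mw(c); err != nil {
                return err // a middleware rejected the request (signature, admin role, ...)
            }
        }
        res, err := h(c)
        if err != nil {
            // The real gateway maps errors to S3-style responses; this sketch
            // simply defers to fiber's default error handler.
            return err
        }
        if res.status == 0 {
            res.status = fiber.StatusOK
        }
        return c.Status(res.status).JSON(res.data)
    }
}

func main() {
    app := fiber.New()

    listUsers := func(c *fiber.Ctx) (*response, error) {
        return &response{data: []string{"alice", "bob"}}, nil
    }
    requireAdmin := func(c *fiber.Ctx) error {
        if c.Get("X-Role") != "admin" {
            return fiber.ErrForbidden
        }
        return nil
    }

    app.Patch("/list-users", processHandlers(listUsers, requireAdmin))
    log.Fatal(app.Listen(":8080"))
}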

View File

@@ -21,6 +21,7 @@ import (
"github.com/gofiber/fiber/v2/middleware/logger"
"github.com/versity/versitygw/auth"
"github.com/versity/versitygw/backend"
"github.com/versity/versitygw/s3api/controllers"
"github.com/versity/versitygw/s3api/middlewares"
"github.com/versity/versitygw/s3log"
)
@@ -31,6 +32,8 @@ type S3AdminServer struct {
router *S3AdminRouter
port string
cert *tls.Certificate
quiet bool
debug bool
}
func NewAdminServer(app *fiber.App, be backend.Backend, root middlewares.RootUserConfig, port, region string, iam auth.IAMService, l s3log.AuditLogger, opts ...AdminOpt) *S3AdminServer {
@@ -46,17 +49,15 @@ func NewAdminServer(app *fiber.App, be backend.Backend, root middlewares.RootUse
}
// Logging middlewares
app.Use(logger.New())
app.Use(middlewares.DecodeURL(l, nil))
if !server.quiet {
app.Use(logger.New(logger.Config{
Format: "${time} | ${status} | ${latency} | ${ip} | ${method} | ${path} | ${error} | ${queryParams}\n",
}))
}
app.Use(controllers.WrapMiddleware(middlewares.DecodeURL, l, nil))
app.Use(middlewares.DebugLogger())
// Authentication middlewares
app.Use(middlewares.VerifyV4Signature(root, iam, l, nil, region, false))
app.Use(middlewares.VerifyMD5Body(l))
// Admin role checker
app.Use(middlewares.IsAdmin(l))
server.router.Init(app, be, iam, l)
server.router.Init(app, be, iam, l, root, region, server.debug)
return server
}
@@ -67,6 +68,16 @@ func WithAdminSrvTLS(cert tls.Certificate) AdminOpt {
return func(s *S3AdminServer) { s.cert = &cert }
}
// WithAdminQuiet silences the default logging output
func WithAdminQuiet() AdminOpt {
return func(s *S3AdminServer) { s.quiet = true }
}
// WithAdminDebug enables debug logging
func WithAdminDebug() AdminOpt {
return func(s *S3AdminServer) { s.debug = true }
}
func (sa *S3AdminServer) Serve() (err error) {
if sa.cert != nil {
return sa.app.ListenTLSWithCertificate(sa.port, *sa.cert)
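With the quiet and debug options added above, constructing the admin server might look like the sketch below. The package name and import path for S3AdminServer are not visible in this view and are assumed to be s3api; the backend, IAM service, and audit logger are left as placeholders because their wiring belongs to the caller.

package main

import (
    "log"

    "github.com/gofiber/fiber/v2"
    "github.com/versity/versitygw/auth"
    "github.com/versity/versitygw/backend"
    "github.com/versity/versitygw/s3api"
    "github.com/versity/versitygw/s3api/middlewares"
    "github.com/versity/versitygw/s3log"
)

func startAdmin(be backend.Backend, iam auth.IAMService, logger s3log.AuditLogger) error {
    app := fiber.New()
    root := middlewares.RootUserConfig{} // assumed struct holding the root account credentials

    srv := s3api.NewAdminServer(app, be, root, ":8081", "us-east-1", iam, logger,
        s3api.WithAdminQuiet(), // silence the per-request log line
        s3api.WithAdminDebug(), // enable the debug logging middleware
    )
    return srv.Serve()
}

func main() {
    // Real implementations must be supplied before serving; the nil values here
    // only keep the sketch compilable.
    var (
        be     backend.Backend
        iam    auth.IAMService
        logger s3log.AuditLogger
    )
    if err := startAdmin(be, iam, logger); err != nil {
        log.Fatal(err)
    }
}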

View File

@@ -15,17 +15,13 @@
package controllers
import (
"encoding/json"
"encoding/xml"
"fmt"
"net/http"
"strings"
"github.com/aws/aws-sdk-go-v2/service/s3/types"
"github.com/gofiber/fiber/v2"
"github.com/versity/versitygw/auth"
"github.com/versity/versitygw/backend"
"github.com/versity/versitygw/metrics"
"github.com/versity/versitygw/s3err"
"github.com/versity/versitygw/s3log"
"github.com/versity/versitygw/s3response"
@@ -41,23 +37,19 @@ func NewAdminController(iam auth.IAMService, be backend.Backend, l s3log.AuditLo
return AdminController{iam: iam, be: be, l: l}
}
func (c AdminController) CreateUser(ctx *fiber.Ctx) error {
func (c AdminController) CreateUser(ctx *fiber.Ctx) (*Response, error) {
var usr auth.Account
err := xml.Unmarshal(ctx.Body(), &usr)
if err != nil {
return SendResponse(ctx, s3err.GetAPIError(s3err.ErrMalformedXML),
&MetaOpts{
Logger: c.l,
Action: metrics.ActionAdminCreateUser,
})
return &Response{
MetaOpts: &MetaOptions{},
}, s3err.GetAPIError(s3err.ErrMalformedXML)
}
if !usr.Role.IsValid() {
return SendResponse(ctx, s3err.GetAPIError(s3err.ErrAdminInvalidUserRole),
&MetaOpts{
Logger: c.l,
Action: metrics.ActionAdminCreateUser,
})
return &Response{
MetaOpts: &MetaOptions{},
}, s3err.GetAPIError(s3err.ErrAdminInvalidUserRole)
}
err = c.iam.CreateAccount(usr)
@@ -66,47 +58,38 @@ func (c AdminController) CreateUser(ctx *fiber.Ctx) error {
err = s3err.GetAPIError(s3err.ErrAdminUserExists)
}
return SendResponse(ctx, err,
&MetaOpts{
Logger: c.l,
Action: metrics.ActionAdminCreateUser,
})
return &Response{
MetaOpts: &MetaOptions{},
}, err
}
return SendResponse(ctx, nil,
&MetaOpts{
Logger: c.l,
Action: metrics.ActionAdminCreateUser,
return &Response{
MetaOpts: &MetaOptions{
Status: http.StatusCreated,
})
},
}, nil
}
func (c AdminController) UpdateUser(ctx *fiber.Ctx) error {
func (c AdminController) UpdateUser(ctx *fiber.Ctx) (*Response, error) {
access := ctx.Query("access")
if access == "" {
return SendResponse(ctx, s3err.GetAPIError(s3err.ErrAdminMissingUserAcess),
&MetaOpts{
Logger: c.l,
Action: metrics.ActionAdminUpdateUser,
})
return &Response{
MetaOpts: &MetaOptions{},
}, s3err.GetAPIError(s3err.ErrAdminMissingUserAcess)
}
var props auth.MutableProps
if err := xml.Unmarshal(ctx.Body(), &props); err != nil {
return SendResponse(ctx, s3err.GetAPIError(s3err.ErrMalformedXML),
&MetaOpts{
Logger: c.l,
Action: metrics.ActionAdminUpdateUser,
})
return &Response{
MetaOpts: &MetaOptions{},
}, s3err.GetAPIError(s3err.ErrMalformedXML)
}
err := props.Validate()
if err != nil {
return SendResponse(ctx, s3err.GetAPIError(s3err.ErrAdminInvalidUserRole),
&MetaOpts{
Logger: c.l,
Action: metrics.ActionAdminUpdateUser,
})
return &Response{
MetaOpts: &MetaOptions{},
}, s3err.GetAPIError(s3err.ErrAdminInvalidUserRole)
}
err = c.iam.UpdateUserAccount(access, props)
@@ -115,98 +98,66 @@ func (c AdminController) UpdateUser(ctx *fiber.Ctx) error {
err = s3err.GetAPIError(s3err.ErrAdminUserNotFound)
}
return SendResponse(ctx, err,
&MetaOpts{
Logger: c.l,
Action: metrics.ActionAdminUpdateUser,
})
return &Response{
MetaOpts: &MetaOptions{},
}, err
}
return SendResponse(ctx, nil,
&MetaOpts{
Logger: c.l,
Action: metrics.ActionAdminUpdateUser,
})
return &Response{
MetaOpts: &MetaOptions{},
}, nil
}
func (c AdminController) DeleteUser(ctx *fiber.Ctx) error {
func (c AdminController) DeleteUser(ctx *fiber.Ctx) (*Response, error) {
access := ctx.Query("access")
if access == "" {
return &Response{
MetaOpts: &MetaOptions{},
}, s3err.GetAPIError(s3err.ErrAdminMissingUserAcess)
}
err := c.iam.DeleteUserAccount(access)
return SendResponse(ctx, err,
&MetaOpts{
Logger: c.l,
Action: metrics.ActionAdminDeleteUser,
})
return &Response{
MetaOpts: &MetaOptions{},
}, err
}
func (c AdminController) ListUsers(ctx *fiber.Ctx) error {
func (c AdminController) ListUsers(ctx *fiber.Ctx) (*Response, error) {
accs, err := c.iam.ListUserAccounts()
return SendXMLResponse(ctx,
auth.ListUserAccountsResult{
Accounts: accs,
}, err,
&MetaOpts{
Logger: c.l,
Action: metrics.ActionAdminListUsers,
})
return &Response{
Data: auth.ListUserAccountsResult{Accounts: accs},
MetaOpts: &MetaOptions{},
}, err
}
func (c AdminController) ChangeBucketOwner(ctx *fiber.Ctx) error {
func (c AdminController) ChangeBucketOwner(ctx *fiber.Ctx) (*Response, error) {
owner := ctx.Query("owner")
bucket := ctx.Query("bucket")
accs, err := auth.CheckIfAccountsExist([]string{owner}, c.iam)
if err != nil {
return SendResponse(ctx, err,
&MetaOpts{
Logger: c.l,
Action: metrics.ActionAdminChangeBucketOwner,
})
return &Response{
MetaOpts: &MetaOptions{},
}, err
}
if len(accs) > 0 {
return SendResponse(ctx, s3err.GetAPIError(s3err.ErrAdminUserNotFound),
&MetaOpts{
Logger: c.l,
Action: metrics.ActionAdminChangeBucketOwner,
})
return &Response{
MetaOpts: &MetaOptions{},
}, s3err.GetAPIError(s3err.ErrAdminUserNotFound)
}
acl := auth.ACL{
Owner: owner,
Grantees: []auth.Grantee{
{
Permission: auth.PermissionFullControl,
Access: owner,
Type: types.TypeCanonicalUser,
},
},
}
aclParsed, err := json.Marshal(acl)
if err != nil {
return SendResponse(ctx, fmt.Errorf("failed to marshal the bucket acl: %w", err),
&MetaOpts{
Logger: c.l,
Action: metrics.ActionAdminChangeBucketOwner,
})
}
err = c.be.ChangeBucketOwner(ctx.Context(), bucket, aclParsed)
return SendResponse(ctx, err,
&MetaOpts{
Logger: c.l,
Action: metrics.ActionAdminChangeBucketOwner,
})
err = c.be.ChangeBucketOwner(ctx.Context(), bucket, owner)
return &Response{
MetaOpts: &MetaOptions{},
}, err
}
func (c AdminController) ListBuckets(ctx *fiber.Ctx) error {
func (c AdminController) ListBuckets(ctx *fiber.Ctx) (*Response, error) {
buckets, err := c.be.ListBucketsAndOwners(ctx.Context())
return SendXMLResponse(ctx,
s3response.ListBucketsResult{
return &Response{
Data: s3response.ListBucketsResult{
Buckets: buckets,
}, err, &MetaOpts{
Logger: c.l,
Action: metrics.ActionAdminListBuckets,
})
},
MetaOpts: &MetaOptions{},
}, err
}
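CreateUser and UpdateUser above unmarshal their request bodies directly into auth.Account and auth.MutableProps. A small client-side sketch of producing those XML payloads follows, using only the types, fields, and helpers that appear elsewhere in this compare view; whether the gateway expects additional fields is not shown here.

package main

import (
    "encoding/xml"
    "fmt"
    "log"

    "github.com/versity/versitygw/auth"
    "github.com/versity/versitygw/s3api/utils"
)

func main() {
    // Body for PATCH /create-user.
    acct, err := xml.Marshal(auth.Account{
        Access: "newuser",
        Secret: "newsecret",
        Role:   auth.RoleAdmin,
    })
    if err != nil {
        log.Fatal(err)
    }
    fmt.Println(string(acct))

    // Body for PATCH /update-user?access=newuser.
    props, err := xml.Marshal(auth.MutableProps{
        Secret: utils.GetStringPtr("rotated-secret"),
        Role:   auth.RoleAdmin,
    })
    if err != nil {
        log.Fatal(err)
    }
    fmt.Println(string(props))
}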

View File

@@ -16,439 +16,564 @@ package controllers
import (
"context"
"fmt"
"encoding/xml"
"errors"
"net/http"
"net/http/httptest"
"strings"
"testing"
"github.com/gofiber/fiber/v2"
"github.com/stretchr/testify/assert"
"github.com/versity/versitygw/auth"
"github.com/versity/versitygw/backend"
"github.com/versity/versitygw/s3api/utils"
"github.com/versity/versitygw/s3err"
"github.com/versity/versitygw/s3log"
"github.com/versity/versitygw/s3response"
)
func TestAdminController_CreateUser(t *testing.T) {
func TestNewAdminController(t *testing.T) {
type args struct {
req *http.Request
iam auth.IAMService
be backend.Backend
l s3log.AuditLogger
}
adminController := AdminController{
iam: &IAMServiceMock{
CreateAccountFunc: func(account auth.Account) error {
return nil
},
},
}
app := fiber.New()
app.Patch("/create-user", adminController.CreateUser)
succUser := `
<Account>
<Access>access</Access>
<Secret>secret</Secret>
<Role>admin</Role>
<UserID>0</UserID>
<GroupID>0</GroupID>
</Account>
`
invuser := `
<Account>
<Access>access</Access>
<Secret>secret</Secret>
<Role>invalid_role</Role>
<UserID>0</UserID>
<GroupID>0</GroupID>
</Account>
`
tests := []struct {
name string
app *fiber.App
args args
wantErr bool
statusCode int
name string
args args
want AdminController
}{
{
name: "Admin-create-user-malformed-body",
app: app,
args: args{
req: httptest.NewRequest(http.MethodPatch, "/create-user", nil),
},
wantErr: false,
statusCode: 400,
},
{
name: "Admin-create-user-invalid-requester-role",
app: app,
args: args{
req: httptest.NewRequest(http.MethodPatch, "/create-user", strings.NewReader(invuser)),
},
wantErr: false,
statusCode: 400,
},
{
name: "Admin-create-user-success",
app: app,
args: args{
req: httptest.NewRequest(http.MethodPatch, "/create-user", strings.NewReader(succUser)),
},
wantErr: false,
statusCode: 201,
name: "initialize admin api",
args: args{},
want: AdminController{},
},
}
for _, tt := range tests {
resp, err := tt.app.Test(tt.args.req)
t.Run(tt.name, func(t *testing.T) {
got := NewAdminController(tt.args.iam, tt.args.be, tt.args.l)
assert.Equal(t, got, tt.want)
})
}
}
if (err != nil) != tt.wantErr {
t.Errorf("AdminController.CreateUser() error = %v, wantErr %v", err, tt.wantErr)
}
func TestAdminController_CreateUser(t *testing.T) {
validBody, err := xml.Marshal(auth.Account{
Access: "access",
Secret: "secret",
Role: auth.RoleAdmin,
})
assert.NoError(t, err)
if resp.StatusCode != tt.statusCode {
t.Errorf("AdminController.CreateUser() statusCode = %v, wantStatusCode = %v", resp.StatusCode, tt.statusCode)
}
invalidUserRoleBody, err := xml.Marshal(auth.Account{
Access: "access",
Secret: "secret",
Role: auth.Role("invalid_role"),
})
assert.NoError(t, err)
tests := []struct {
name string
input testInput
output testOutput
}{
{
name: "invalid request body",
input: testInput{
body: []byte("invalid_request_body"),
},
output: testOutput{
response: &Response{
MetaOpts: &MetaOptions{},
},
err: s3err.GetAPIError(s3err.ErrMalformedXML),
},
},
{
name: "invalid user role",
input: testInput{
body: invalidUserRoleBody,
},
output: testOutput{
response: &Response{
MetaOpts: &MetaOptions{},
},
err: s3err.GetAPIError(s3err.ErrAdminInvalidUserRole),
},
},
{
name: "backend returns user exists error",
input: testInput{
body: validBody,
beErr: auth.ErrUserExists,
},
output: testOutput{
response: &Response{
MetaOpts: &MetaOptions{},
},
err: s3err.GetAPIError(s3err.ErrAdminUserExists),
},
},
{
name: "backend returns other error",
input: testInput{
body: validBody,
beErr: s3err.GetAPIError(s3err.ErrInvalidRequest),
},
output: testOutput{
response: &Response{
MetaOpts: &MetaOptions{},
},
err: s3err.GetAPIError(s3err.ErrInvalidRequest),
},
},
{
name: "successful response",
input: testInput{
body: validBody,
},
output: testOutput{
response: &Response{
MetaOpts: &MetaOptions{
Status: http.StatusCreated,
},
},
},
},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
iam := &IAMServiceMock{
CreateAccountFunc: func(account auth.Account) error {
return tt.input.beErr
},
}
ctrl := AdminController{
iam: iam,
}
testController(
t,
ctrl.CreateUser,
tt.output.response,
tt.output.err,
ctxInputs{
body: tt.input.body,
})
})
}
}
func TestAdminController_UpdateUser(t *testing.T) {
type args struct {
req *http.Request
}
validBody, err := xml.Marshal(auth.MutableProps{
Secret: utils.GetStringPtr("secret"),
Role: auth.RoleAdmin,
})
assert.NoError(t, err)
adminController := AdminController{
iam: &IAMServiceMock{
UpdateUserAccountFunc: func(access string, props auth.MutableProps) error {
return nil
},
},
}
app := fiber.New()
app.Patch("/update-user", adminController.UpdateUser)
adminControllerErr := AdminController{
iam: &IAMServiceMock{
UpdateUserAccountFunc: func(access string, props auth.MutableProps) error {
return auth.ErrNoSuchUser
},
},
}
appNotFound := fiber.New()
appNotFound.Patch("/update-user", adminControllerErr.UpdateUser)
succUser := `
<Account>
<Secret>secret</Secret>
<UserID>0</UserID>
<GroupID>0</GroupID>
</Account>
`
invalidUserRoleBody, err := xml.Marshal(auth.MutableProps{
Secret: utils.GetStringPtr("secret"),
Role: auth.Role("invalid_role"),
})
assert.NoError(t, err)
tests := []struct {
name string
app *fiber.App
args args
wantErr bool
statusCode int
name string
input testInput
output testOutput
}{
{
name: "Admin-update-user-success",
app: app,
args: args{
req: httptest.NewRequest(http.MethodPatch, "/update-user?access=access", strings.NewReader(succUser)),
name: "missing user access key",
output: testOutput{
response: &Response{
MetaOpts: &MetaOptions{},
},
err: s3err.GetAPIError(s3err.ErrAdminMissingUserAcess),
},
wantErr: false,
statusCode: 200,
},
{
name: "Admin-update-user-missing-access",
app: app,
args: args{
req: httptest.NewRequest(http.MethodPatch, "/update-user", strings.NewReader(succUser)),
name: "invalid request body",
input: testInput{
body: []byte("invalid_request_body"),
queries: map[string]string{
"access": "user",
},
},
output: testOutput{
response: &Response{
MetaOpts: &MetaOptions{},
},
err: s3err.GetAPIError(s3err.ErrMalformedXML),
},
wantErr: false,
statusCode: 404,
},
{
name: "Admin-update-user-invalid-request-body",
app: app,
args: args{
req: httptest.NewRequest(http.MethodPatch, "/update-user?access=access", nil),
name: "invalid user role",
input: testInput{
body: invalidUserRoleBody,
queries: map[string]string{
"access": "user",
},
},
output: testOutput{
response: &Response{
MetaOpts: &MetaOptions{},
},
err: s3err.GetAPIError(s3err.ErrAdminInvalidUserRole),
},
wantErr: false,
statusCode: 400,
},
{
name: "Admin-update-user-not-found",
app: appNotFound,
args: args{
req: httptest.NewRequest(http.MethodPatch, "/update-user?access=access", strings.NewReader(succUser)),
name: "backend returns user not found error",
input: testInput{
body: validBody,
beErr: auth.ErrNoSuchUser,
queries: map[string]string{
"access": "user",
},
},
output: testOutput{
response: &Response{
MetaOpts: &MetaOptions{},
},
err: s3err.GetAPIError(s3err.ErrAdminUserNotFound),
},
},
{
name: "backend returns other error",
input: testInput{
body: validBody,
beErr: s3err.GetAPIError(s3err.ErrInvalidRequest),
queries: map[string]string{
"access": "user",
},
},
output: testOutput{
response: &Response{
MetaOpts: &MetaOptions{},
},
err: s3err.GetAPIError(s3err.ErrInvalidRequest),
},
},
{
name: "successful response",
input: testInput{
body: validBody,
queries: map[string]string{
"access": "user",
},
},
output: testOutput{
response: &Response{
MetaOpts: &MetaOptions{},
},
},
wantErr: false,
statusCode: 404,
},
}
for _, tt := range tests {
resp, err := tt.app.Test(tt.args.req)
t.Run(tt.name, func(t *testing.T) {
iam := &IAMServiceMock{
UpdateUserAccountFunc: func(access string, props auth.MutableProps) error {
return tt.input.beErr
},
}
if (err != nil) != tt.wantErr {
t.Errorf("AdminController.UpdateUser() error = %v, wantErr %v", err, tt.wantErr)
}
ctrl := AdminController{
iam: iam,
}
if resp.StatusCode != tt.statusCode {
t.Errorf("AdminController.UpdateUser() statusCode = %v, wantStatusCode = %v", resp.StatusCode, tt.statusCode)
}
testController(
t,
ctrl.UpdateUser,
tt.output.response,
tt.output.err,
ctxInputs{
body: tt.input.body,
queries: tt.input.queries,
})
})
}
}
func TestAdminController_DeleteUser(t *testing.T) {
type args struct {
req *http.Request
}
adminController := AdminController{
iam: &IAMServiceMock{
DeleteUserAccountFunc: func(access string) error {
return nil
},
},
}
app := fiber.New()
app.Patch("/delete-user", adminController.DeleteUser)
tests := []struct {
name string
app *fiber.App
args args
wantErr bool
statusCode int
name string
input testInput
output testOutput
}{
{
name: "Admin-delete-user-success",
app: app,
args: args{
req: httptest.NewRequest(http.MethodPatch, "/delete-user?access=test", nil),
name: "missing user access key",
output: testOutput{
response: &Response{
MetaOpts: &MetaOptions{},
},
err: s3err.GetAPIError(s3err.ErrAdminMissingUserAcess),
},
},
{
name: "backend returns other error",
input: testInput{
beErr: s3err.GetAPIError(s3err.ErrInvalidRequest),
queries: map[string]string{
"access": "user",
},
},
output: testOutput{
response: &Response{
MetaOpts: &MetaOptions{},
},
err: s3err.GetAPIError(s3err.ErrInvalidRequest),
},
},
{
name: "successful response",
input: testInput{
queries: map[string]string{
"access": "user",
},
},
output: testOutput{
response: &Response{
MetaOpts: &MetaOptions{},
},
},
wantErr: false,
statusCode: 200,
},
}
for _, tt := range tests {
resp, err := tt.app.Test(tt.args.req)
t.Run(tt.name, func(t *testing.T) {
iam := &IAMServiceMock{
DeleteUserAccountFunc: func(access string) error {
return tt.input.beErr
},
}
if (err != nil) != tt.wantErr {
t.Errorf("AdminController.DeleteUser() error = %v, wantErr %v", err, tt.wantErr)
}
ctrl := AdminController{
iam: iam,
}
if resp.StatusCode != tt.statusCode {
t.Errorf("AdminController.DeleteUser() statusCode = %v, wantStatusCode = %v", resp.StatusCode, tt.statusCode)
}
testController(
t,
ctrl.DeleteUser,
tt.output.response,
tt.output.err,
ctxInputs{
queries: tt.input.queries,
})
})
}
}
func TestAdminController_ListUsers(t *testing.T) {
type args struct {
req *http.Request
}
adminController := AdminController{
iam: &IAMServiceMock{
ListUserAccountsFunc: func() ([]auth.Account, error) {
return []auth.Account{}, nil
},
accs := []auth.Account{
{
Access: "access",
Secret: "secret",
},
{
Access: "access",
Secret: "secret",
},
}
adminControllerErr := AdminController{
iam: &IAMServiceMock{
ListUserAccountsFunc: func() ([]auth.Account, error) {
return []auth.Account{}, fmt.Errorf("server error")
},
},
}
appErr := fiber.New()
appErr.Patch("/list-users", adminControllerErr.ListUsers)
appSucc := fiber.New()
appSucc.Patch("/list-users", adminController.ListUsers)
tests := []struct {
name string
app *fiber.App
args args
wantErr bool
statusCode int
name string
input testInput
output testOutput
}{
{
name: "Admin-list-users-iam-error",
app: appErr,
args: args{
req: httptest.NewRequest(http.MethodPatch, "/list-users", nil),
name: "backend returns error",
input: testInput{
beRes: []auth.Account{},
beErr: s3err.GetAPIError(s3err.ErrInternalError),
},
output: testOutput{
response: &Response{
Data: auth.ListUserAccountsResult{
Accounts: []auth.Account{},
},
MetaOpts: &MetaOptions{},
},
err: s3err.GetAPIError(s3err.ErrInternalError),
},
wantErr: false,
statusCode: 500,
},
{
name: "Admin-list-users-success",
app: appSucc,
args: args{
req: httptest.NewRequest(http.MethodPatch, "/list-users", nil),
name: "successful response",
input: testInput{
beRes: accs,
},
output: testOutput{
response: &Response{
Data: auth.ListUserAccountsResult{
Accounts: accs,
},
MetaOpts: &MetaOptions{},
},
},
wantErr: false,
statusCode: 200,
},
}
for _, tt := range tests {
resp, err := tt.app.Test(tt.args.req)
t.Run(tt.name, func(t *testing.T) {
iam := &IAMServiceMock{
ListUserAccountsFunc: func() ([]auth.Account, error) {
return tt.input.beRes.([]auth.Account), tt.input.beErr
},
}
if (err != nil) != tt.wantErr {
t.Errorf("AdminController.ListUsers() error = %v, wantErr %v", err, tt.wantErr)
}
ctrl := AdminController{
iam: iam,
}
if resp.StatusCode != tt.statusCode {
t.Errorf("AdminController.ListUsers() statusCode = %v, wantStatusCode = %v", resp.StatusCode, tt.statusCode)
}
testController(
t,
ctrl.ListUsers,
tt.output.response,
tt.output.err,
ctxInputs{
queries: tt.input.queries,
})
})
}
}
func TestAdminController_ChangeBucketOwner(t *testing.T) {
type args struct {
req *http.Request
}
adminController := AdminController{
be: &BackendMock{
ChangeBucketOwnerFunc: func(contextMoqParam context.Context, bucket string, acl []byte) error {
return nil
},
},
iam: &IAMServiceMock{
GetUserAccountFunc: func(access string) (auth.Account, error) {
return auth.Account{}, nil
},
},
}
adminControllerIamErr := AdminController{
iam: &IAMServiceMock{
GetUserAccountFunc: func(access string) (auth.Account, error) {
return auth.Account{}, fmt.Errorf("unknown server error")
},
},
}
adminControllerIamAccDoesNotExist := AdminController{
iam: &IAMServiceMock{
GetUserAccountFunc: func(access string) (auth.Account, error) {
return auth.Account{}, auth.ErrNoSuchUser
},
},
}
app := fiber.New()
app.Patch("/change-bucket-owner", adminController.ChangeBucketOwner)
appIamErr := fiber.New()
appIamErr.Patch("/change-bucket-owner", adminControllerIamErr.ChangeBucketOwner)
appIamNoSuchUser := fiber.New()
appIamNoSuchUser.Patch("/change-bucket-owner", adminControllerIamAccDoesNotExist.ChangeBucketOwner)
tests := []struct {
name string
app *fiber.App
args args
wantErr bool
statusCode int
name string
input testInput
output testOutput
}{
{
name: "Change-bucket-owner-check-account-server-error",
app: appIamErr,
args: args{
req: httptest.NewRequest(http.MethodPatch, "/change-bucket-owner", nil),
name: "fails to get user account",
input: testInput{
extraMockErr: s3err.GetAPIError(s3err.ErrInternalError),
},
output: testOutput{
response: &Response{
MetaOpts: &MetaOptions{},
},
err: errors.New("check user account: "),
},
wantErr: false,
statusCode: 500,
},
{
name: "Change-bucket-owner-acc-does-not-exist",
app: appIamNoSuchUser,
args: args{
req: httptest.NewRequest(http.MethodPatch, "/change-bucket-owner", nil),
name: "user not found",
input: testInput{
extraMockErr: auth.ErrNoSuchUser,
},
output: testOutput{
response: &Response{
MetaOpts: &MetaOptions{},
},
err: s3err.GetAPIError(s3err.ErrAdminUserNotFound),
},
wantErr: false,
statusCode: 404,
},
{
name: "Change-bucket-owner-success",
app: app,
args: args{
req: httptest.NewRequest(http.MethodPatch, "/change-bucket-owner?bucket=bucket&owner=owner", nil),
name: "backend returns error",
input: testInput{
beErr: s3err.GetAPIError(s3err.ErrAdminMethodNotSupported),
},
output: testOutput{
response: &Response{
MetaOpts: &MetaOptions{},
},
err: s3err.GetAPIError(s3err.ErrAdminMethodNotSupported),
},
},
{
name: "successful response",
output: testOutput{
response: &Response{
MetaOpts: &MetaOptions{},
},
},
wantErr: false,
statusCode: 200,
},
}
for _, tt := range tests {
resp, err := tt.app.Test(tt.args.req)
t.Run(tt.name, func(t *testing.T) {
iam := &IAMServiceMock{
GetUserAccountFunc: func(access string) (auth.Account, error) {
return auth.Account{}, tt.input.extraMockErr
},
}
be := &BackendMock{
ChangeBucketOwnerFunc: func(contextMoqParam context.Context, bucket, owner string) error {
return tt.input.beErr
},
}
if (err != nil) != tt.wantErr {
t.Errorf("AdminController.ChangeBucketOwner() error = %v, wantErr %v", err, tt.wantErr)
}
ctrl := AdminController{
iam: iam,
be: be,
}
if resp.StatusCode != tt.statusCode {
t.Errorf("AdminController.ChangeBucketOwner() statusCode = %v, wantStatusCode = %v", resp.StatusCode, tt.statusCode)
}
testController(
t,
ctrl.ChangeBucketOwner,
tt.output.response,
tt.output.err,
ctxInputs{},
)
})
}
}
func TestAdminController_ListBuckets(t *testing.T) {
type args struct {
req *http.Request
}
adminController := AdminController{
be: &BackendMock{
ListBucketsAndOwnersFunc: func(contextMoqParam context.Context) ([]s3response.Bucket, error) {
return []s3response.Bucket{}, nil
},
res := []s3response.Bucket{
{
Name: "bucket",
Owner: "owner",
},
}
app := fiber.New()
app.Patch("/list-buckets", adminController.ListBuckets)
tests := []struct {
name string
app *fiber.App
args args
wantErr bool
statusCode int
name string
input testInput
output testOutput
}{
{
name: "List-buckets-success",
app: app,
args: args{
req: httptest.NewRequest(http.MethodPatch, "/list-buckets", nil),
name: "backend returns other error",
input: testInput{
beRes: []s3response.Bucket{},
beErr: s3err.GetAPIError(s3err.ErrNoSuchBucket),
},
output: testOutput{
response: &Response{
Data: s3response.ListBucketsResult{
Buckets: []s3response.Bucket{},
},
MetaOpts: &MetaOptions{},
},
err: s3err.GetAPIError(s3err.ErrNoSuchBucket),
},
},
{
name: "successful response",
input: testInput{
beRes: res,
},
output: testOutput{
response: &Response{
Data: s3response.ListBucketsResult{
Buckets: res,
},
MetaOpts: &MetaOptions{},
},
},
wantErr: false,
statusCode: 200,
},
}
for _, tt := range tests {
resp, err := tt.app.Test(tt.args.req)
t.Run(tt.name, func(t *testing.T) {
be := &BackendMock{
ListBucketsAndOwnersFunc: func(contextMoqParam context.Context) ([]s3response.Bucket, error) {
return tt.input.beRes.([]s3response.Bucket), tt.input.beErr
},
}
if (err != nil) != tt.wantErr {
t.Errorf("AdminController.ListBuckets() error = %v, wantErr %v", err, tt.wantErr)
}
ctrl := AdminController{
be: be,
}
testController(
t,
ctrl.ListBuckets,
tt.output.response,
tt.output.err,
ctxInputs{},
)
})
}
}


@@ -26,7 +26,7 @@ var _ backend.Backend = &BackendMock{}
// AbortMultipartUploadFunc: func(contextMoqParam context.Context, abortMultipartUploadInput *s3.AbortMultipartUploadInput) error {
// panic("mock out the AbortMultipartUpload method")
// },
// ChangeBucketOwnerFunc: func(contextMoqParam context.Context, bucket string, acl []byte) error {
// ChangeBucketOwnerFunc: func(contextMoqParam context.Context, bucket string, owner string) error {
// panic("mock out the ChangeBucketOwner method")
// },
// CompleteMultipartUploadFunc: func(contextMoqParam context.Context, completeMultipartUploadInput *s3.CompleteMultipartUploadInput) (s3response.CompleteMultipartUploadResult, string, error) {
@@ -134,7 +134,7 @@ var _ backend.Backend = &BackendMock{}
// PutBucketAclFunc: func(contextMoqParam context.Context, bucket string, data []byte) error {
// panic("mock out the PutBucketAcl method")
// },
// PutBucketCorsFunc: func(contextMoqParam context.Context, bytes []byte) error {
// PutBucketCorsFunc: func(contextMoqParam context.Context, bucket string, cors []byte) error {
// panic("mock out the PutBucketCors method")
// },
// PutBucketOwnershipControlsFunc: func(contextMoqParam context.Context, bucket string, ownership types.ObjectOwnership) error {
@@ -196,7 +196,7 @@ type BackendMock struct {
AbortMultipartUploadFunc func(contextMoqParam context.Context, abortMultipartUploadInput *s3.AbortMultipartUploadInput) error
// ChangeBucketOwnerFunc mocks the ChangeBucketOwner method.
ChangeBucketOwnerFunc func(contextMoqParam context.Context, bucket string, acl []byte) error
ChangeBucketOwnerFunc func(contextMoqParam context.Context, bucket string, owner string) error
// CompleteMultipartUploadFunc mocks the CompleteMultipartUpload method.
CompleteMultipartUploadFunc func(contextMoqParam context.Context, completeMultipartUploadInput *s3.CompleteMultipartUploadInput) (s3response.CompleteMultipartUploadResult, string, error)
@@ -304,7 +304,7 @@ type BackendMock struct {
PutBucketAclFunc func(contextMoqParam context.Context, bucket string, data []byte) error
// PutBucketCorsFunc mocks the PutBucketCors method.
PutBucketCorsFunc func(contextMoqParam context.Context, bytes []byte) error
PutBucketCorsFunc func(contextMoqParam context.Context, bucket string, cors []byte) error
// PutBucketOwnershipControlsFunc mocks the PutBucketOwnershipControls method.
PutBucketOwnershipControlsFunc func(contextMoqParam context.Context, bucket string, ownership types.ObjectOwnership) error
@@ -369,8 +369,8 @@ type BackendMock struct {
ContextMoqParam context.Context
// Bucket is the bucket argument value.
Bucket string
// ACL is the acl argument value.
ACL []byte
// Owner is the owner argument value.
Owner string
}
// CompleteMultipartUpload holds details about calls to the CompleteMultipartUpload method.
CompleteMultipartUpload []struct {
@@ -635,8 +635,10 @@ type BackendMock struct {
PutBucketCors []struct {
// ContextMoqParam is the contextMoqParam argument value.
ContextMoqParam context.Context
// Bytes is the bytes argument value.
Bytes []byte
// Bucket is the bucket argument value.
Bucket string
// Cors is the cors argument value.
Cors []byte
}
// PutBucketOwnershipControls holds details about calls to the PutBucketOwnershipControls method.
PutBucketOwnershipControls []struct {
@@ -864,23 +866,23 @@ func (mock *BackendMock) AbortMultipartUploadCalls() []struct {
}
// ChangeBucketOwner calls ChangeBucketOwnerFunc.
func (mock *BackendMock) ChangeBucketOwner(contextMoqParam context.Context, bucket string, acl []byte) error {
func (mock *BackendMock) ChangeBucketOwner(contextMoqParam context.Context, bucket string, owner string) error {
if mock.ChangeBucketOwnerFunc == nil {
panic("BackendMock.ChangeBucketOwnerFunc: method is nil but Backend.ChangeBucketOwner was just called")
}
callInfo := struct {
ContextMoqParam context.Context
Bucket string
ACL []byte
Owner string
}{
ContextMoqParam: contextMoqParam,
Bucket: bucket,
ACL: acl,
Owner: owner,
}
mock.lockChangeBucketOwner.Lock()
mock.calls.ChangeBucketOwner = append(mock.calls.ChangeBucketOwner, callInfo)
mock.lockChangeBucketOwner.Unlock()
return mock.ChangeBucketOwnerFunc(contextMoqParam, bucket, acl)
return mock.ChangeBucketOwnerFunc(contextMoqParam, bucket, owner)
}
// ChangeBucketOwnerCalls gets all the calls that were made to ChangeBucketOwner.
@@ -890,12 +892,12 @@ func (mock *BackendMock) ChangeBucketOwner(contextMoqParam context.Context, buck
func (mock *BackendMock) ChangeBucketOwnerCalls() []struct {
ContextMoqParam context.Context
Bucket string
ACL []byte
Owner string
} {
var calls []struct {
ContextMoqParam context.Context
Bucket string
ACL []byte
Owner string
}
mock.lockChangeBucketOwner.RLock()
calls = mock.calls.ChangeBucketOwner
@@ -2192,21 +2194,23 @@ func (mock *BackendMock) PutBucketAclCalls() []struct {
}
// PutBucketCors calls PutBucketCorsFunc.
func (mock *BackendMock) PutBucketCors(contextMoqParam context.Context, bytes []byte) error {
func (mock *BackendMock) PutBucketCors(contextMoqParam context.Context, bucket string, cors []byte) error {
if mock.PutBucketCorsFunc == nil {
panic("BackendMock.PutBucketCorsFunc: method is nil but Backend.PutBucketCors was just called")
}
callInfo := struct {
ContextMoqParam context.Context
Bytes []byte
Bucket string
Cors []byte
}{
ContextMoqParam: contextMoqParam,
Bytes: bytes,
Bucket: bucket,
Cors: cors,
}
mock.lockPutBucketCors.Lock()
mock.calls.PutBucketCors = append(mock.calls.PutBucketCors, callInfo)
mock.lockPutBucketCors.Unlock()
return mock.PutBucketCorsFunc(contextMoqParam, bytes)
return mock.PutBucketCorsFunc(contextMoqParam, bucket, cors)
}
// PutBucketCorsCalls gets all the calls that were made to PutBucketCors.
@@ -2215,11 +2219,13 @@ func (mock *BackendMock) PutBucketCors(contextMoqParam context.Context, bytes []
// len(mockedBackend.PutBucketCorsCalls())
func (mock *BackendMock) PutBucketCorsCalls() []struct {
ContextMoqParam context.Context
Bytes []byte
Bucket string
Cors []byte
} {
var calls []struct {
ContextMoqParam context.Context
Bytes []byte
Bucket string
Cors []byte
}
mock.lockPutBucketCors.RLock()
calls = mock.calls.PutBucketCors
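
Since the regenerated mock renames both the parameters and the recorded-call fields, here is a minimal sketch (not part of this diff) of a test in the same package driving the new shapes and reading the call records; it assumes the usual context and fmt imports in that test file:

func ExampleBackendMock_updatedSignatures() {
	be := &BackendMock{
		// owner string replaces the old acl []byte argument
		ChangeBucketOwnerFunc: func(ctx context.Context, bucket, owner string) error {
			return nil
		},
		// a bucket argument now precedes the raw CORS document
		PutBucketCorsFunc: func(ctx context.Context, bucket string, cors []byte) error {
			return nil
		},
	}

	_ = be.ChangeBucketOwner(context.Background(), "bucket", "new-owner")
	_ = be.PutBucketCors(context.Background(), "bucket", []byte("<CORSConfiguration/>"))

	// the recorded call structs expose the renamed fields shown above
	fmt.Println(be.ChangeBucketOwnerCalls()[0].Owner, be.PutBucketCorsCalls()[0].Bucket)
	// Output: new-owner bucket
}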

File diff suppressed because it is too large

File diff suppressed because it is too large


@@ -0,0 +1,194 @@
// Copyright 2023 Versity Software
// This file is licensed under the Apache License, Version 2.0
// (the "License"); you may not use this file except in compliance
// with the License. You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing,
// software distributed under the License is distributed on an
// "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
// KIND, either express or implied. See the License for the
// specific language governing permissions and limitations
// under the License.
package controllers
import (
"net/http"
"github.com/gofiber/fiber/v2"
"github.com/versity/versitygw/auth"
"github.com/versity/versitygw/s3api/utils"
)
func (c S3ApiController) DeleteBucketTagging(ctx *fiber.Ctx) (*Response, error) {
bucket := ctx.Params("bucket")
acct := utils.ContextKeyAccount.Get(ctx).(auth.Account)
isRoot := utils.ContextKeyIsRoot.Get(ctx).(bool)
parsedAcl := utils.ContextKeyParsedAcl.Get(ctx).(auth.ACL)
IsBucketPublic := utils.ContextKeyPublicBucket.IsSet(ctx)
err := auth.VerifyAccess(ctx.Context(), c.be,
auth.AccessOptions{
Readonly: c.readonly,
Acl: parsedAcl,
AclPermission: auth.PermissionWrite,
IsRoot: isRoot,
Acc: acct,
Bucket: bucket,
Action: auth.PutBucketTaggingAction,
IsPublicRequest: IsBucketPublic,
})
if err != nil {
return &Response{
MetaOpts: &MetaOptions{
BucketOwner: parsedAcl.Owner,
},
}, err
}
err = c.be.DeleteBucketTagging(ctx.Context(), bucket)
return &Response{
MetaOpts: &MetaOptions{
BucketOwner: parsedAcl.Owner,
Status: http.StatusNoContent,
},
}, err
}
func (c S3ApiController) DeleteBucketOwnershipControls(ctx *fiber.Ctx) (*Response, error) {
bucket := ctx.Params("bucket")
acct := utils.ContextKeyAccount.Get(ctx).(auth.Account)
isRoot := utils.ContextKeyIsRoot.Get(ctx).(bool)
parsedAcl := utils.ContextKeyParsedAcl.Get(ctx).(auth.ACL)
err := auth.VerifyAccess(ctx.Context(), c.be,
auth.AccessOptions{
Readonly: c.readonly,
Acl: parsedAcl,
AclPermission: auth.PermissionWrite,
IsRoot: isRoot,
Acc: acct,
Bucket: bucket,
Action: auth.PutBucketOwnershipControlsAction,
})
if err != nil {
return &Response{
MetaOpts: &MetaOptions{
BucketOwner: parsedAcl.Owner,
},
}, err
}
err = c.be.DeleteBucketOwnershipControls(ctx.Context(), bucket)
return &Response{
MetaOpts: &MetaOptions{
BucketOwner: parsedAcl.Owner,
Status: http.StatusNoContent,
},
}, err
}
func (c S3ApiController) DeleteBucketPolicy(ctx *fiber.Ctx) (*Response, error) {
bucket := ctx.Params("bucket")
acct := utils.ContextKeyAccount.Get(ctx).(auth.Account)
isRoot := utils.ContextKeyIsRoot.Get(ctx).(bool)
parsedAcl := utils.ContextKeyParsedAcl.Get(ctx).(auth.ACL)
err := auth.VerifyAccess(ctx.Context(), c.be,
auth.AccessOptions{
Readonly: c.readonly,
Acl: parsedAcl,
AclPermission: auth.PermissionWrite,
IsRoot: isRoot,
Acc: acct,
Bucket: bucket,
Action: auth.DeleteBucketPolicyAction,
})
if err != nil {
return &Response{
MetaOpts: &MetaOptions{
BucketOwner: parsedAcl.Owner,
},
}, err
}
err = c.be.DeleteBucketPolicy(ctx.Context(), bucket)
return &Response{
MetaOpts: &MetaOptions{
BucketOwner: parsedAcl.Owner,
Status: http.StatusNoContent,
},
}, err
}
func (c S3ApiController) DeleteBucketCors(ctx *fiber.Ctx) (*Response, error) {
bucket := ctx.Params("bucket")
acct := utils.ContextKeyAccount.Get(ctx).(auth.Account)
isRoot := utils.ContextKeyIsRoot.Get(ctx).(bool)
parsedAcl := utils.ContextKeyParsedAcl.Get(ctx).(auth.ACL)
IsBucketPublic := utils.ContextKeyPublicBucket.IsSet(ctx)
err := auth.VerifyAccess(ctx.Context(), c.be,
auth.AccessOptions{
Readonly: c.readonly,
Acl: parsedAcl,
AclPermission: auth.PermissionWrite,
IsRoot: isRoot,
Acc: acct,
Bucket: bucket,
Action: auth.PutBucketCorsAction,
IsPublicRequest: IsBucketPublic,
})
if err != nil {
return &Response{
MetaOpts: &MetaOptions{
BucketOwner: parsedAcl.Owner,
},
}, err
}
err = c.be.DeleteBucketCors(ctx.Context(), bucket)
return &Response{
MetaOpts: &MetaOptions{
BucketOwner: parsedAcl.Owner,
Status: http.StatusNoContent,
},
}, err
}
func (c S3ApiController) DeleteBucket(ctx *fiber.Ctx) (*Response, error) {
bucket := ctx.Params("bucket")
acct := utils.ContextKeyAccount.Get(ctx).(auth.Account)
isRoot := utils.ContextKeyIsRoot.Get(ctx).(bool)
parsedAcl := utils.ContextKeyParsedAcl.Get(ctx).(auth.ACL)
IsBucketPublic := utils.ContextKeyPublicBucket.IsSet(ctx)
err := auth.VerifyAccess(ctx.Context(), c.be,
auth.AccessOptions{
Readonly: c.readonly,
Acl: parsedAcl,
AclPermission: auth.PermissionWrite,
IsRoot: isRoot,
Acc: acct,
Bucket: bucket,
Action: auth.DeleteBucketAction,
IsPublicRequest: IsBucketPublic,
})
if err != nil {
return &Response{
MetaOpts: &MetaOptions{
BucketOwner: parsedAcl.Owner,
},
}, err
}
err = c.be.DeleteBucket(ctx.Context(), bucket)
return &Response{
MetaOpts: &MetaOptions{
BucketOwner: parsedAcl.Owner,
Status: http.StatusNoContent,
},
}, err
}


@@ -0,0 +1,413 @@
// Copyright 2023 Versity Software
// This file is licensed under the Apache License, Version 2.0
// (the "License"); you may not use this file except in compliance
// with the License. You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing,
// software distributed under the License is distributed on an
// "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
// KIND, either express or implied. See the License for the
// specific language governing permissions and limitations
// under the License.
package controllers
import (
"context"
"net/http"
"testing"
"github.com/versity/versitygw/s3err"
)
func TestS3ApiController_DeleteBucketTagging(t *testing.T) {
tests := []struct {
name string
input testInput
output testOutput
}{
{
name: "verify access fails",
input: testInput{
locals: accessDeniedLocals,
},
output: testOutput{
response: &Response{
MetaOpts: &MetaOptions{
BucketOwner: "root",
},
},
err: s3err.GetAPIError(s3err.ErrAccessDenied),
},
},
{
name: "backend returns error",
input: testInput{
locals: defaultLocals,
beErr: s3err.GetAPIError(s3err.ErrAclNotSupported),
},
output: testOutput{
response: &Response{
MetaOpts: &MetaOptions{
BucketOwner: "root",
Status: http.StatusNoContent,
},
},
err: s3err.GetAPIError(s3err.ErrAclNotSupported),
},
},
{
name: "successful response",
input: testInput{
locals: defaultLocals,
},
output: testOutput{
response: &Response{
MetaOpts: &MetaOptions{
BucketOwner: "root",
Status: http.StatusNoContent,
},
},
},
},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
be := &BackendMock{
DeleteBucketTaggingFunc: func(_ context.Context, _ string) error {
return tt.input.beErr
},
GetBucketPolicyFunc: func(contextMoqParam context.Context, bucket string) ([]byte, error) {
return nil, s3err.GetAPIError(s3err.ErrAccessDenied)
},
}
ctrl := S3ApiController{
be: be,
}
testController(
t,
ctrl.DeleteBucketTagging,
tt.output.response,
tt.output.err,
ctxInputs{
locals: tt.input.locals,
})
})
}
}
func TestS3ApiController_DeleteBucketOwnershipControls(t *testing.T) {
tests := []struct {
name string
input testInput
output testOutput
}{
{
name: "verify access fails",
input: testInput{
locals: accessDeniedLocals,
},
output: testOutput{
response: &Response{
MetaOpts: &MetaOptions{
BucketOwner: "root",
},
},
err: s3err.GetAPIError(s3err.ErrAccessDenied),
},
},
{
name: "backend returns error",
input: testInput{
locals: defaultLocals,
beErr: s3err.GetAPIError(s3err.ErrInvalidAccessKeyID),
},
output: testOutput{
response: &Response{
MetaOpts: &MetaOptions{
BucketOwner: "root",
Status: http.StatusNoContent,
},
},
err: s3err.GetAPIError(s3err.ErrInvalidAccessKeyID),
},
},
{
name: "successful response",
input: testInput{
locals: defaultLocals,
},
output: testOutput{
response: &Response{
MetaOpts: &MetaOptions{
BucketOwner: "root",
Status: http.StatusNoContent,
},
},
},
},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
be := &BackendMock{
DeleteBucketOwnershipControlsFunc: func(contextMoqParam context.Context, bucket string) error {
return tt.input.beErr
},
GetBucketPolicyFunc: func(contextMoqParam context.Context, bucket string) ([]byte, error) {
return nil, s3err.GetAPIError(s3err.ErrAccessDenied)
},
}
ctrl := S3ApiController{
be: be,
}
testController(
t,
ctrl.DeleteBucketOwnershipControls,
tt.output.response,
tt.output.err,
ctxInputs{
locals: tt.input.locals,
})
})
}
}
func TestS3ApiController_DeleteBucketPolicy(t *testing.T) {
tests := []struct {
name string
input testInput
output testOutput
}{
{
name: "verify access fails",
input: testInput{
locals: accessDeniedLocals,
},
output: testOutput{
response: &Response{
MetaOpts: &MetaOptions{
BucketOwner: "root",
},
},
err: s3err.GetAPIError(s3err.ErrAccessDenied),
},
},
{
name: "backend returns error",
input: testInput{
locals: defaultLocals,
beErr: s3err.GetAPIError(s3err.ErrInvalidDigest),
},
output: testOutput{
response: &Response{
MetaOpts: &MetaOptions{
BucketOwner: "root",
Status: http.StatusNoContent,
},
},
err: s3err.GetAPIError(s3err.ErrInvalidDigest),
},
},
{
name: "successful response",
input: testInput{
locals: defaultLocals,
},
output: testOutput{
response: &Response{
MetaOpts: &MetaOptions{
BucketOwner: "root",
Status: http.StatusNoContent,
},
},
},
},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
be := &BackendMock{
DeleteBucketPolicyFunc: func(contextMoqParam context.Context, bucket string) error {
return tt.input.beErr
},
GetBucketPolicyFunc: func(contextMoqParam context.Context, bucket string) ([]byte, error) {
return nil, s3err.GetAPIError(s3err.ErrAccessDenied)
},
}
ctrl := S3ApiController{
be: be,
}
testController(
t,
ctrl.DeleteBucketPolicy,
tt.output.response,
tt.output.err,
ctxInputs{
locals: tt.input.locals,
})
})
}
}
func TestS3ApiController_DeleteBucketCors(t *testing.T) {
tests := []struct {
name string
input testInput
output testOutput
}{
{
name: "verify access fails",
input: testInput{
locals: accessDeniedLocals,
},
output: testOutput{
response: &Response{
MetaOpts: &MetaOptions{
BucketOwner: "root",
},
},
err: s3err.GetAPIError(s3err.ErrAccessDenied),
},
},
{
name: "backend returns error",
input: testInput{
locals: defaultLocals,
beErr: s3err.GetAPIError(s3err.ErrAdminMethodNotSupported),
},
output: testOutput{
response: &Response{
MetaOpts: &MetaOptions{
BucketOwner: "root",
Status: http.StatusNoContent,
},
},
err: s3err.GetAPIError(s3err.ErrAdminMethodNotSupported),
},
},
{
name: "successful response",
input: testInput{
locals: defaultLocals,
},
output: testOutput{
response: &Response{
MetaOpts: &MetaOptions{
BucketOwner: "root",
Status: http.StatusNoContent,
},
},
},
},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
be := &BackendMock{
DeleteBucketCorsFunc: func(contextMoqParam context.Context, bucket string) error {
return tt.input.beErr
},
GetBucketPolicyFunc: func(contextMoqParam context.Context, bucket string) ([]byte, error) {
return nil, s3err.GetAPIError(s3err.ErrAccessDenied)
},
}
ctrl := S3ApiController{
be: be,
}
testController(
t,
ctrl.DeleteBucketCors,
tt.output.response,
tt.output.err,
ctxInputs{
locals: tt.input.locals,
})
})
}
}
func TestS3ApiController_DeleteBucket(t *testing.T) {
tests := []struct {
name string
input testInput
output testOutput
}{
{
name: "verify access fails",
input: testInput{
locals: accessDeniedLocals,
},
output: testOutput{
response: &Response{
MetaOpts: &MetaOptions{
BucketOwner: "root",
},
},
err: s3err.GetAPIError(s3err.ErrAccessDenied),
},
},
{
name: "backend returns error",
input: testInput{
locals: defaultLocals,
beErr: s3err.GetAPIError(s3err.ErrInvalidDigest),
},
output: testOutput{
response: &Response{
MetaOpts: &MetaOptions{
BucketOwner: "root",
Status: http.StatusNoContent,
},
},
err: s3err.GetAPIError(s3err.ErrInvalidDigest),
},
},
{
name: "successful response",
input: testInput{
locals: defaultLocals,
},
output: testOutput{
response: &Response{
MetaOpts: &MetaOptions{
BucketOwner: "root",
Status: http.StatusNoContent,
},
},
},
},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
be := &BackendMock{
DeleteBucketFunc: func(contextMoqParam context.Context, bucket string) error {
return tt.input.beErr
},
GetBucketPolicyFunc: func(contextMoqParam context.Context, bucket string) ([]byte, error) {
return nil, s3err.GetAPIError(s3err.ErrAccessDenied)
},
}
ctrl := S3ApiController{
be: be,
}
testController(
t,
ctrl.DeleteBucket,
tt.output.response,
tt.output.err,
ctxInputs{
locals: tt.input.locals,
})
})
}
}


@@ -0,0 +1,670 @@
// Copyright 2023 Versity Software
// This file is licensed under the Apache License, Version 2.0
// (the "License"); you may not use this file except in compliance
// with the License. You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing,
// software distributed under the License is distributed on an
// "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
// KIND, either express or implied. See the License for the
// specific language governing permissions and limitations
// under the License.
package controllers
import (
"strings"
"github.com/aws/aws-sdk-go-v2/service/s3"
"github.com/aws/aws-sdk-go-v2/service/s3/types"
"github.com/gofiber/fiber/v2"
"github.com/versity/versitygw/auth"
"github.com/versity/versitygw/debuglogger"
"github.com/versity/versitygw/s3api/utils"
"github.com/versity/versitygw/s3err"
"github.com/versity/versitygw/s3response"
)
func (c S3ApiController) GetBucketTagging(ctx *fiber.Ctx) (*Response, error) {
bucket := ctx.Params("bucket")
acct := utils.ContextKeyAccount.Get(ctx).(auth.Account)
isRoot := utils.ContextKeyIsRoot.Get(ctx).(bool)
isPublicBucket := utils.ContextKeyPublicBucket.IsSet(ctx)
parsedAcl := utils.ContextKeyParsedAcl.Get(ctx).(auth.ACL)
err := auth.VerifyAccess(ctx.Context(), c.be, auth.AccessOptions{
Readonly: c.readonly,
Acl: parsedAcl,
AclPermission: auth.PermissionRead,
IsRoot: isRoot,
Acc: acct,
Bucket: bucket,
Action: auth.GetBucketTaggingAction,
IsPublicRequest: isPublicBucket,
})
if err != nil {
return &Response{
MetaOpts: &MetaOptions{
BucketOwner: parsedAcl.Owner,
},
}, err
}
tags, err := c.be.GetBucketTagging(ctx.Context(), bucket)
if err != nil {
return &Response{
MetaOpts: &MetaOptions{
BucketOwner: parsedAcl.Owner,
},
}, err
}
resp := s3response.Tagging{
TagSet: s3response.TagSet{
Tags: make([]s3response.Tag, 0, len(tags)),
},
}
for key, val := range tags {
resp.TagSet.Tags = append(resp.TagSet.Tags,
s3response.Tag{Key: key, Value: val})
}
return &Response{
Data: resp,
MetaOpts: &MetaOptions{
BucketOwner: parsedAcl.Owner,
},
}, err
}
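
Because the backend hands back the tags as a map, the loop above emits them in no particular order. A standalone sketch of the same conversion, using local stand-in types with the same shape as the s3response ones (assumed here, since only their usage is visible):

package main

import "fmt"

// stand-ins mirroring the shape used by the handler above
type Tag struct{ Key, Value string }
type TagSet struct{ Tags []Tag }

func toTagSet(tags map[string]string) TagSet {
	ts := TagSet{Tags: make([]Tag, 0, len(tags))}
	for k, v := range tags {
		// Go map iteration order is unspecified, so tag order may vary between calls
		ts.Tags = append(ts.Tags, Tag{Key: k, Value: v})
	}
	return ts
}

func main() {
	fmt.Println(toTagSet(map[string]string{"env": "dev", "team": "storage"}))
}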
func (c S3ApiController) GetBucketOwnershipControls(ctx *fiber.Ctx) (*Response, error) {
bucket := ctx.Params("bucket")
acct := utils.ContextKeyAccount.Get(ctx).(auth.Account)
isRoot := utils.ContextKeyIsRoot.Get(ctx).(bool)
isPublicBucket := utils.ContextKeyPublicBucket.IsSet(ctx)
parsedAcl := utils.ContextKeyParsedAcl.Get(ctx).(auth.ACL)
err := auth.VerifyAccess(ctx.Context(), c.be, auth.AccessOptions{
Readonly: c.readonly,
Acl: parsedAcl,
AclPermission: auth.PermissionRead,
IsRoot: isRoot,
Acc: acct,
Bucket: bucket,
Action: auth.GetBucketOwnershipControlsAction,
IsPublicRequest: isPublicBucket,
})
if err != nil {
return &Response{
MetaOpts: &MetaOptions{
BucketOwner: parsedAcl.Owner,
},
}, err
}
data, err := c.be.GetBucketOwnershipControls(ctx.Context(), bucket)
return &Response{
Data: s3response.OwnershipControls{
Rules: []types.OwnershipControlsRule{
{
ObjectOwnership: data,
},
},
},
MetaOpts: &MetaOptions{
BucketOwner: parsedAcl.Owner,
},
}, err
}
func (c S3ApiController) GetBucketVersioning(ctx *fiber.Ctx) (*Response, error) {
bucket := ctx.Params("bucket")
acct := utils.ContextKeyAccount.Get(ctx).(auth.Account)
isRoot := utils.ContextKeyIsRoot.Get(ctx).(bool)
isPublicBucket := utils.ContextKeyPublicBucket.IsSet(ctx)
parsedAcl := utils.ContextKeyParsedAcl.Get(ctx).(auth.ACL)
err := auth.VerifyAccess(ctx.Context(), c.be, auth.AccessOptions{
Readonly: c.readonly,
Acl: parsedAcl,
AclPermission: auth.PermissionRead,
IsRoot: isRoot,
Acc: acct,
Bucket: bucket,
Action: auth.GetBucketVersioningAction,
IsPublicRequest: isPublicBucket,
})
if err != nil {
return &Response{
MetaOpts: &MetaOptions{
BucketOwner: parsedAcl.Owner,
},
}, err
}
// Only admin users and the bucket owner are allowed to get the versioning state of a bucket.
if err := auth.IsAdminOrOwner(acct, isRoot, parsedAcl); err != nil {
return &Response{
MetaOpts: &MetaOptions{
BucketOwner: parsedAcl.Owner,
},
}, err
}
data, err := c.be.GetBucketVersioning(ctx.Context(), bucket)
return &Response{
Data: data,
MetaOpts: &MetaOptions{
BucketOwner: parsedAcl.Owner,
},
}, err
}
func (c S3ApiController) GetBucketCors(ctx *fiber.Ctx) (*Response, error) {
bucket := ctx.Params("bucket")
acct := utils.ContextKeyAccount.Get(ctx).(auth.Account)
isRoot := utils.ContextKeyIsRoot.Get(ctx).(bool)
isPublicBucket := utils.ContextKeyPublicBucket.IsSet(ctx)
parsedAcl := utils.ContextKeyParsedAcl.Get(ctx).(auth.ACL)
err := auth.VerifyAccess(ctx.Context(), c.be, auth.AccessOptions{
Readonly: c.readonly,
Acl: parsedAcl,
AclPermission: auth.PermissionRead,
IsRoot: isRoot,
Acc: acct,
Bucket: bucket,
Action: auth.GetBucketCorsAction,
IsPublicRequest: isPublicBucket,
})
if err != nil {
return &Response{
MetaOpts: &MetaOptions{
BucketOwner: parsedAcl.Owner,
},
}, err
}
data, err := c.be.GetBucketCors(ctx.Context(), bucket)
if err != nil {
return &Response{
MetaOpts: &MetaOptions{
BucketOwner: parsedAcl.Owner,
},
}, err
}
output, err := auth.ParseCORSOutput(data)
return &Response{
Data: output,
MetaOpts: &MetaOptions{
BucketOwner: parsedAcl.Owner,
},
}, err
}
func (c S3ApiController) GetBucketPolicy(ctx *fiber.Ctx) (*Response, error) {
bucket := ctx.Params("bucket")
acct := utils.ContextKeyAccount.Get(ctx).(auth.Account)
isRoot := utils.ContextKeyIsRoot.Get(ctx).(bool)
isPublicBucket := utils.ContextKeyPublicBucket.IsSet(ctx)
parsedAcl := utils.ContextKeyParsedAcl.Get(ctx).(auth.ACL)
err := auth.VerifyAccess(ctx.Context(), c.be, auth.AccessOptions{
Readonly: c.readonly,
Acl: parsedAcl,
AclPermission: auth.PermissionRead,
IsRoot: isRoot,
Acc: acct,
Bucket: bucket,
Action: auth.GetBucketPolicyAction,
IsPublicRequest: isPublicBucket,
})
if err != nil {
return &Response{
MetaOpts: &MetaOptions{
BucketOwner: parsedAcl.Owner,
},
}, err
}
data, err := c.be.GetBucketPolicy(ctx.Context(), bucket)
return &Response{
Data: data,
MetaOpts: &MetaOptions{
BucketOwner: parsedAcl.Owner,
},
}, err
}
func (c S3ApiController) GetBucketPolicyStatus(ctx *fiber.Ctx) (*Response, error) {
bucket := ctx.Params("bucket")
acct := utils.ContextKeyAccount.Get(ctx).(auth.Account)
isRoot := utils.ContextKeyIsRoot.Get(ctx).(bool)
isPublicBucket := utils.ContextKeyPublicBucket.IsSet(ctx)
parsedAcl := utils.ContextKeyParsedAcl.Get(ctx).(auth.ACL)
err := auth.VerifyAccess(ctx.Context(), c.be, auth.AccessOptions{
Readonly: c.readonly,
Acl: parsedAcl,
AclPermission: auth.PermissionRead,
IsRoot: isRoot,
Acc: acct,
Bucket: bucket,
Action: auth.GetBucketPolicyStatusAction,
IsPublicRequest: isPublicBucket,
})
if err != nil {
return &Response{
MetaOpts: &MetaOptions{
BucketOwner: parsedAcl.Owner,
},
}, err
}
policyRaw, err := c.be.GetBucketPolicy(ctx.Context(), bucket)
if err != nil {
return &Response{
MetaOpts: &MetaOptions{
BucketOwner: parsedAcl.Owner,
},
}, err
}
policy, err := auth.ParsePolicyDocument(policyRaw)
if err != nil {
return &Response{
MetaOpts: &MetaOptions{
BucketOwner: parsedAcl.Owner,
},
}, err
}
isPublic := policy.IsPublic()
return &Response{
Data: types.PolicyStatus{
IsPublic: &isPublic,
},
MetaOpts: &MetaOptions{
BucketOwner: parsedAcl.Owner,
},
}, nil
}
func (c S3ApiController) ListObjectVersions(ctx *fiber.Ctx) (*Response, error) {
// url values
bucket := ctx.Params("bucket")
prefix := ctx.Query("prefix")
delimiter := ctx.Query("delimiter")
maxkeysStr := ctx.Query("max-keys")
keyMarker := ctx.Query("key-marker")
versionIdMarker := ctx.Query("version-id-marker")
// context keys
acct := utils.ContextKeyAccount.Get(ctx).(auth.Account)
isRoot := utils.ContextKeyIsRoot.Get(ctx).(bool)
isPublicBucket := utils.ContextKeyPublicBucket.IsSet(ctx)
parsedAcl := utils.ContextKeyParsedAcl.Get(ctx).(auth.ACL)
err := auth.VerifyAccess(ctx.Context(), c.be, auth.AccessOptions{
Readonly: c.readonly,
Acl: parsedAcl,
AclPermission: auth.PermissionRead,
IsRoot: isRoot,
Acc: acct,
Bucket: bucket,
Action: auth.ListBucketVersionsAction,
IsPublicRequest: isPublicBucket,
})
if err != nil {
return &Response{
MetaOpts: &MetaOptions{
BucketOwner: parsedAcl.Owner,
},
}, err
}
maxkeys, err := utils.ParseUint(maxkeysStr)
if err != nil {
debuglogger.Logf("error parsing max keys %q: %v",
maxkeysStr, err)
return &Response{
MetaOpts: &MetaOptions{
BucketOwner: parsedAcl.Owner,
},
}, s3err.GetAPIError(s3err.ErrInvalidMaxKeys)
}
data, err := c.be.ListObjectVersions(ctx.Context(),
&s3.ListObjectVersionsInput{
Bucket: &bucket,
Delimiter: &delimiter,
KeyMarker: &keyMarker,
MaxKeys: &maxkeys,
Prefix: &prefix,
VersionIdMarker: &versionIdMarker,
})
return &Response{
Data: data,
MetaOpts: &MetaOptions{
BucketOwner: parsedAcl.Owner,
},
}, err
}
func (c S3ApiController) GetObjectLockConfiguration(ctx *fiber.Ctx) (*Response, error) {
// url values
bucket := ctx.Params("bucket")
// context keys
acct := utils.ContextKeyAccount.Get(ctx).(auth.Account)
isRoot := utils.ContextKeyIsRoot.Get(ctx).(bool)
isPublicBucket := utils.ContextKeyPublicBucket.IsSet(ctx)
parsedAcl := utils.ContextKeyParsedAcl.Get(ctx).(auth.ACL)
err := auth.VerifyAccess(ctx.Context(), c.be, auth.AccessOptions{
Readonly: c.readonly,
Acl: parsedAcl,
AclPermission: auth.PermissionRead,
IsRoot: isRoot,
Acc: acct,
Bucket: bucket,
Action: auth.GetBucketObjectLockConfigurationAction,
IsPublicRequest: isPublicBucket,
})
if err != nil {
return &Response{
MetaOpts: &MetaOptions{
BucketOwner: parsedAcl.Owner,
},
}, err
}
data, err := c.be.GetObjectLockConfiguration(ctx.Context(), bucket)
if err != nil {
return &Response{
MetaOpts: &MetaOptions{
BucketOwner: parsedAcl.Owner,
},
}, err
}
resp, err := auth.ParseBucketLockConfigurationOutput(data)
return &Response{
Data: resp,
MetaOpts: &MetaOptions{
BucketOwner: parsedAcl.Owner,
},
}, err
}
func (c S3ApiController) GetBucketAcl(ctx *fiber.Ctx) (*Response, error) {
// url values
bucket := ctx.Params("bucket")
// context keys
acct := utils.ContextKeyAccount.Get(ctx).(auth.Account)
isRoot := utils.ContextKeyIsRoot.Get(ctx).(bool)
isPublicBucket := utils.ContextKeyPublicBucket.IsSet(ctx)
parsedAcl := utils.ContextKeyParsedAcl.Get(ctx).(auth.ACL)
err := auth.VerifyAccess(ctx.Context(), c.be, auth.AccessOptions{
Readonly: c.readonly,
Acl: parsedAcl,
AclPermission: auth.PermissionReadAcp,
IsRoot: isRoot,
Acc: acct,
Bucket: bucket,
Action: auth.GetBucketAclAction,
IsPublicRequest: isPublicBucket,
})
if err != nil {
return &Response{
MetaOpts: &MetaOptions{
BucketOwner: parsedAcl.Owner,
},
}, err
}
data, err := c.be.GetBucketAcl(ctx.Context(),
&s3.GetBucketAclInput{Bucket: &bucket})
if err != nil {
return &Response{
MetaOpts: &MetaOptions{
BucketOwner: parsedAcl.Owner,
},
}, err
}
res, err := auth.ParseACLOutput(data, parsedAcl.Owner)
return &Response{
Data: res,
MetaOpts: &MetaOptions{
BucketOwner: parsedAcl.Owner,
},
}, err
}
func (c S3ApiController) ListMultipartUploads(ctx *fiber.Ctx) (*Response, error) {
// url values
bucket := ctx.Params("bucket")
prefix := ctx.Query("prefix")
delimiter := ctx.Query("delimiter")
keyMarker := ctx.Query("key-marker")
maxUploadsStr := ctx.Query("max-uploads")
uploadIdMarker := ctx.Query("upload-id-marker")
// context keys
acct := utils.ContextKeyAccount.Get(ctx).(auth.Account)
isRoot := utils.ContextKeyIsRoot.Get(ctx).(bool)
isPublicBucket := utils.ContextKeyPublicBucket.IsSet(ctx)
parsedAcl := utils.ContextKeyParsedAcl.Get(ctx).(auth.ACL)
err := auth.VerifyAccess(ctx.Context(), c.be, auth.AccessOptions{
Readonly: c.readonly,
Acl: parsedAcl,
AclPermission: auth.PermissionRead,
IsRoot: isRoot,
Acc: acct,
Bucket: bucket,
Action: auth.ListBucketMultipartUploadsAction,
IsPublicRequest: isPublicBucket,
})
if err != nil {
return &Response{
MetaOpts: &MetaOptions{
BucketOwner: parsedAcl.Owner,
},
}, err
}
maxUploads, err := utils.ParseUint(maxUploadsStr)
if err != nil {
debuglogger.Logf("error parsing max uploads %q: %v",
maxUploadsStr, err)
return &Response{
MetaOpts: &MetaOptions{
BucketOwner: parsedAcl.Owner,
},
}, s3err.GetAPIError(s3err.ErrInvalidMaxUploads)
}
res, err := c.be.ListMultipartUploads(ctx.Context(),
&s3.ListMultipartUploadsInput{
Bucket: &bucket,
Delimiter: &delimiter,
Prefix: &prefix,
UploadIdMarker: &uploadIdMarker,
MaxUploads: &maxUploads,
KeyMarker: &keyMarker,
})
return &Response{
Data: res,
MetaOpts: &MetaOptions{
BucketOwner: parsedAcl.Owner,
},
}, err
}
func (c S3ApiController) ListObjectsV2(ctx *fiber.Ctx) (*Response, error) {
// url values
bucket := ctx.Params("bucket")
prefix := ctx.Query("prefix")
cToken := ctx.Query("continuation-token")
sAfter := ctx.Query("start-after")
delimiter := ctx.Query("delimiter")
maxkeysStr := ctx.Query("max-keys")
fetchOwner := strings.EqualFold(ctx.Query("fetch-owner"), "true")
// context locals
acct := utils.ContextKeyAccount.Get(ctx).(auth.Account)
isRoot := utils.ContextKeyIsRoot.Get(ctx).(bool)
isPublicBucket := utils.ContextKeyPublicBucket.IsSet(ctx)
parsedAcl := utils.ContextKeyParsedAcl.Get(ctx).(auth.ACL)
err := auth.VerifyAccess(ctx.Context(), c.be, auth.AccessOptions{
Readonly: c.readonly,
Acl: parsedAcl,
AclPermission: auth.PermissionRead,
IsRoot: isRoot,
Acc: acct,
Bucket: bucket,
Action: auth.ListBucketAction,
IsPublicRequest: isPublicBucket,
})
if err != nil {
return &Response{
MetaOpts: &MetaOptions{
BucketOwner: parsedAcl.Owner,
},
}, err
}
maxkeys, err := utils.ParseUint(maxkeysStr)
if err != nil {
debuglogger.Logf("error parsing max keys %q: %v",
maxkeysStr, err)
return &Response{
MetaOpts: &MetaOptions{
BucketOwner: parsedAcl.Owner,
},
}, s3err.GetAPIError(s3err.ErrInvalidMaxKeys)
}
res, err := c.be.ListObjectsV2(ctx.Context(),
&s3.ListObjectsV2Input{
Bucket: &bucket,
Prefix: &prefix,
ContinuationToken: &cToken,
Delimiter: &delimiter,
MaxKeys: &maxkeys,
StartAfter: &sAfter,
FetchOwner: &fetchOwner,
})
return &Response{
Data: res,
MetaOpts: &MetaOptions{
BucketOwner: parsedAcl.Owner,
},
}, err
}
func (c S3ApiController) ListObjects(ctx *fiber.Ctx) (*Response, error) {
// url values
bucket := ctx.Params("bucket")
prefix := ctx.Query("prefix")
marker := ctx.Query("marker")
delimiter := ctx.Query("delimiter")
maxkeysStr := ctx.Query("max-keys")
// context locals
acct := utils.ContextKeyAccount.Get(ctx).(auth.Account)
isRoot := utils.ContextKeyIsRoot.Get(ctx).(bool)
isPublicBucket := utils.ContextKeyPublicBucket.IsSet(ctx)
parsedAcl := utils.ContextKeyParsedAcl.Get(ctx).(auth.ACL)
err := auth.VerifyAccess(ctx.Context(), c.be, auth.AccessOptions{
Readonly: c.readonly,
Acl: parsedAcl,
AclPermission: auth.PermissionRead,
IsRoot: isRoot,
Acc: acct,
Bucket: bucket,
Action: auth.ListBucketAction,
IsPublicRequest: isPublicBucket,
})
if err != nil {
return &Response{
MetaOpts: &MetaOptions{
BucketOwner: parsedAcl.Owner,
},
}, err
}
maxkeys, err := utils.ParseUint(maxkeysStr)
if err != nil {
debuglogger.Logf("error parsing max keys %q: %v",
maxkeysStr, err)
return &Response{
MetaOpts: &MetaOptions{
BucketOwner: parsedAcl.Owner,
},
}, s3err.GetAPIError(s3err.ErrInvalidMaxKeys)
}
res, err := c.be.ListObjects(ctx.Context(),
&s3.ListObjectsInput{
Bucket: &bucket,
Prefix: &prefix,
Marker: &marker,
Delimiter: &delimiter,
MaxKeys: &maxkeys,
})
return &Response{
Data: res,
MetaOpts: &MetaOptions{
BucketOwner: parsedAcl.Owner,
},
}, err
}
// GetBucketLocation handles GET /:bucket?location
func (c S3ApiController) GetBucketLocation(ctx *fiber.Ctx) (*Response, error) {
bucket := ctx.Params("bucket")
acct := utils.ContextKeyAccount.Get(ctx).(auth.Account)
isRoot := utils.ContextKeyIsRoot.Get(ctx).(bool)
isPublicBucket := utils.ContextKeyPublicBucket.IsSet(ctx)
parsedAcl := utils.ContextKeyParsedAcl.Get(ctx).(auth.ACL)
err := auth.VerifyAccess(ctx.Context(), c.be, auth.AccessOptions{
Readonly: c.readonly,
Acl: parsedAcl,
AclPermission: auth.PermissionRead,
IsRoot: isRoot,
Acc: acct,
Bucket: bucket,
Action: auth.GetBucketLocationAction,
IsPublicRequest: isPublicBucket,
})
if err != nil {
return &Response{
MetaOpts: &MetaOptions{
BucketOwner: parsedAcl.Owner,
},
}, err
}
// verify bucket existence/access via backend HeadBucket
_, err = c.be.HeadBucket(ctx.Context(), &s3.HeadBucketInput{Bucket: &bucket})
if err != nil {
return &Response{
MetaOpts: &MetaOptions{
BucketOwner: parsedAcl.Owner,
},
}, err
}
// pick up configured region from locals (set by router middleware)
region, _ := ctx.Locals("region").(string)
return &Response{
Data: s3response.LocationConstraint{
Value: region,
},
MetaOpts: &MetaOptions{
BucketOwner: parsedAcl.Owner,
},
}, nil
}

File diff suppressed because it is too large


@@ -0,0 +1,73 @@
// Copyright 2023 Versity Software
// This file is licensed under the Apache License, Version 2.0
// (the "License"); you may not use this file except in compliance
// with the License. You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing,
// software distributed under the License is distributed on an
// "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
// KIND, either express or implied. See the License for the
// specific language governing permissions and limitations
// under the License.
package controllers
import (
"github.com/aws/aws-sdk-go-v2/service/s3"
"github.com/gofiber/fiber/v2"
"github.com/versity/versitygw/auth"
"github.com/versity/versitygw/s3api/utils"
)
func (c S3ApiController) HeadBucket(ctx *fiber.Ctx) (*Response, error) {
bucket := ctx.Params("bucket")
acct := utils.ContextKeyAccount.Get(ctx).(auth.Account)
isRoot := utils.ContextKeyIsRoot.Get(ctx).(bool)
region := utils.ContextKeyRegion.Get(ctx).(string)
parsedAcl := utils.ContextKeyParsedAcl.Get(ctx).(auth.ACL)
isPublicBucket := utils.ContextKeyPublicBucket.IsSet(ctx)
err := auth.VerifyAccess(ctx.Context(), c.be,
auth.AccessOptions{
Readonly: c.readonly,
Acl: parsedAcl,
AclPermission: auth.PermissionRead,
IsRoot: isRoot,
Acc: acct,
Bucket: bucket,
Action: auth.ListBucketAction,
IsPublicRequest: isPublicBucket,
})
if err != nil {
return &Response{
MetaOpts: &MetaOptions{
BucketOwner: parsedAcl.Owner,
},
}, err
}
_, err = c.be.HeadBucket(ctx.Context(),
&s3.HeadBucketInput{
Bucket: &bucket,
})
if err != nil {
return &Response{
MetaOpts: &MetaOptions{
BucketOwner: parsedAcl.Owner,
},
}, err
}
return &Response{
Headers: map[string]*string{
"X-Amz-Access-Point-Alias": utils.GetStringPtr("false"),
"X-Amz-Bucket-Region": utils.GetStringPtr(region),
},
MetaOpts: &MetaOptions{
BucketOwner: parsedAcl.Owner,
},
}, nil
}


@@ -0,0 +1,136 @@
// Copyright 2023 Versity Software
// This file is licensed under the Apache License, Version 2.0
// (the "License"); you may not use this file except in compliance
// with the License. You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing,
// software distributed under the License is distributed on an
// "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
// KIND, either express or implied. See the License for the
// specific language governing permissions and limitations
// under the License.
package controllers
import (
"context"
"testing"
"github.com/aws/aws-sdk-go-v2/service/s3"
"github.com/versity/versitygw/auth"
"github.com/versity/versitygw/s3api/utils"
"github.com/versity/versitygw/s3err"
)
func TestS3ApiController_HeadBucket(t *testing.T) {
region := "us-east-1"
tests := []struct {
name string
input testInput
output testOutput
}{
{
name: "verify access fails",
input: testInput{
locals: map[utils.ContextKey]any{
utils.ContextKeyIsRoot: false,
utils.ContextKeyParsedAcl: auth.ACL{
Owner: "root",
},
utils.ContextKeyAccount: auth.Account{
Access: "user",
Role: auth.RoleUser,
},
utils.ContextKeyRegion: region,
},
},
output: testOutput{
response: &Response{
MetaOpts: &MetaOptions{
BucketOwner: "root",
},
},
err: s3err.GetAPIError(s3err.ErrAccessDenied),
},
},
{
name: "backend returns error",
input: testInput{
locals: map[utils.ContextKey]any{
utils.ContextKeyIsRoot: true,
utils.ContextKeyParsedAcl: auth.ACL{
Owner: "root",
},
utils.ContextKeyAccount: auth.Account{
Access: "root",
Role: auth.RoleAdmin,
},
utils.ContextKeyRegion: region,
},
beErr: s3err.GetAPIError(s3err.ErrInvalidAccessKeyID),
},
output: testOutput{
response: &Response{
MetaOpts: &MetaOptions{
BucketOwner: "root",
},
},
err: s3err.GetAPIError(s3err.ErrInvalidAccessKeyID),
},
},
{
name: "successful response",
input: testInput{
locals: map[utils.ContextKey]any{
utils.ContextKeyIsRoot: true,
utils.ContextKeyParsedAcl: auth.ACL{
Owner: "root",
},
utils.ContextKeyAccount: auth.Account{
Access: "root",
Role: auth.RoleAdmin,
},
utils.ContextKeyRegion: region,
},
},
output: testOutput{
response: &Response{
Headers: map[string]*string{
"X-Amz-Access-Point-Alias": utils.GetStringPtr("false"),
"X-Amz-Bucket-Region": utils.GetStringPtr(region),
},
MetaOpts: &MetaOptions{
BucketOwner: "root",
},
},
},
},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
be := &BackendMock{
HeadBucketFunc: func(contextMoqParam context.Context, headBucketInput *s3.HeadBucketInput) (*s3.HeadBucketOutput, error) {
return &s3.HeadBucketOutput{}, tt.input.beErr
},
GetBucketPolicyFunc: func(contextMoqParam context.Context, bucket string) ([]byte, error) {
return nil, s3err.GetAPIError(s3err.ErrAccessDenied)
},
}
ctrl := S3ApiController{
be: be,
}
testController(
t,
ctrl.HeadBucket,
tt.output.response,
tt.output.err,
ctxInputs{
locals: tt.input.locals,
})
})
}
}


@@ -0,0 +1,69 @@
// Copyright 2023 Versity Software
// This file is licensed under the Apache License, Version 2.0
// (the "License"); you may not use this file except in compliance
// with the License. You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing,
// software distributed under the License is distributed on an
// "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
// KIND, either express or implied. See the License for the
// specific language governing permissions and limitations
// under the License.
package controllers
import (
"strconv"
"github.com/gofiber/fiber/v2"
"github.com/versity/versitygw/auth"
"github.com/versity/versitygw/debuglogger"
"github.com/versity/versitygw/s3api/utils"
"github.com/versity/versitygw/s3err"
"github.com/versity/versitygw/s3response"
)
func (c S3ApiController) ListBuckets(ctx *fiber.Ctx) (*Response, error) {
cToken := ctx.Query("continuation-token")
prefix := ctx.Query("prefix")
maxBucketsStr := ctx.Query("max-buckets")
acct := utils.ContextKeyAccount.Get(ctx).(auth.Account)
region, ok := utils.ContextKeyRegion.Get(ctx).(string)
if !ok {
region = defaultRegion
}
maxBuckets := defaultMaxBuckets
if maxBucketsStr != "" {
maxBucketsParsed, err := strconv.ParseInt(maxBucketsStr, 10, 32)
if err != nil || maxBucketsParsed < 0 || maxBucketsParsed > int64(defaultMaxBuckets) {
debuglogger.Logf("error parsing max-buckets %q: %v", maxBucketsStr, err)
return &Response{
MetaOpts: &MetaOptions{},
}, s3err.GetAPIError(s3err.ErrInvalidMaxBuckets)
}
maxBuckets = int32(maxBucketsParsed)
}
res, err := c.be.ListBuckets(ctx.Context(),
s3response.ListBucketsInput{
Owner: acct.Access,
IsAdmin: acct.Role == auth.RoleAdmin,
MaxBuckets: maxBuckets,
ContinuationToken: cToken,
Prefix: prefix,
})
if err != nil {
return &Response{}, err
}
for i := range res.Buckets.Bucket {
res.Buckets.Bucket[i].BucketRegion = region
}
return &Response{
Data: res,
}, nil
}
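
The max-buckets handling above rejects anything non-numeric, negative, or above the package default. A standalone sketch of that rule; defaultMaxBuckets is defined elsewhere in the package, so the value 10000 used here is an assumption:

package main

import (
	"fmt"
	"strconv"
)

// assumed value; the gateway defines defaultMaxBuckets elsewhere
const defaultMaxBuckets = 10000

// parseMaxBuckets mirrors the handler's check: empty falls back to the
// default, anything unparsable, negative, or above the default is rejected.
func parseMaxBuckets(s string) (int32, error) {
	if s == "" {
		return defaultMaxBuckets, nil
	}
	v, err := strconv.ParseInt(s, 10, 32)
	if err != nil || v < 0 || v > int64(defaultMaxBuckets) {
		return 0, fmt.Errorf("invalid max-buckets %q", s)
	}
	return int32(v), nil
}

func main() {
	for _, q := range []string{"", "3", "-1", "99999", "abc"} {
		v, err := parseMaxBuckets(q)
		fmt.Println(q, v, err)
	}
}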


@@ -0,0 +1,108 @@
// Copyright 2023 Versity Software
// This file is licensed under the Apache License, Version 2.0
// (the "License"); you may not use this file except in compliance
// with the License. You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing,
// software distributed under the License is distributed on an
// "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
// KIND, either express or implied. See the License for the
// specific language governing permissions and limitations
// under the License.
package controllers
import (
"context"
"testing"
"github.com/versity/versitygw/s3err"
"github.com/versity/versitygw/s3response"
)
func TestS3ApiController_ListBuckets(t *testing.T) {
validRes := s3response.ListAllMyBucketsResult{
Owner: s3response.CanonicalUser{
ID: "root",
},
Buckets: s3response.ListAllMyBucketsList{
Bucket: []s3response.ListAllMyBucketsEntry{
{Name: "test"},
},
},
}
tests := []struct {
name string
input testInput
output testOutput
}{
{
name: "invalid max buckets",
input: testInput{
locals: defaultLocals,
queries: map[string]string{
"max-buckets": "-1",
},
},
output: testOutput{
response: &Response{
MetaOpts: &MetaOptions{},
},
err: s3err.GetAPIError(s3err.ErrInvalidMaxBuckets),
},
},
{
name: "backend returns error",
input: testInput{
locals: defaultLocals,
beErr: s3err.GetAPIError(s3err.ErrNoSuchBucket),
beRes: s3response.ListAllMyBucketsResult{},
},
output: testOutput{
response: &Response{},
err: s3err.GetAPIError(s3err.ErrNoSuchBucket),
},
},
{
name: "successful response",
input: testInput{
locals: defaultLocals,
beRes: validRes,
queries: map[string]string{
"max-buckets": "3",
},
},
output: testOutput{
response: &Response{
Data: validRes,
},
},
},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
be := &BackendMock{
ListBucketsFunc: func(contextMoqParam context.Context, listBucketsInput s3response.ListBucketsInput) (s3response.ListAllMyBucketsResult, error) {
return tt.input.beRes.(s3response.ListAllMyBucketsResult), tt.input.beErr
},
}
ctrl := S3ApiController{
be: be,
}
testController(
t,
ctrl.ListBuckets,
tt.output.response,
tt.output.err,
ctxInputs{
locals: tt.input.locals,
queries: tt.input.queries,
})
})
}
}


@@ -0,0 +1,94 @@
// Copyright 2023 Versity Software
// This file is licensed under the Apache License, Version 2.0
// (the "License"); you may not use this file except in compliance
// with the License. You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing,
// software distributed under the License is distributed on an
// "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
// KIND, either express or implied. See the License for the
// specific language governing permissions and limitations
// under the License.
package controllers
import (
"encoding/xml"
"strings"
"github.com/aws/aws-sdk-go-v2/service/s3"
"github.com/aws/aws-sdk-go-v2/service/s3/types"
"github.com/gofiber/fiber/v2"
"github.com/versity/versitygw/auth"
"github.com/versity/versitygw/debuglogger"
"github.com/versity/versitygw/s3api/utils"
"github.com/versity/versitygw/s3err"
"github.com/versity/versitygw/s3event"
"github.com/versity/versitygw/s3response"
)
func (c S3ApiController) DeleteObjects(ctx *fiber.Ctx) (*Response, error) {
bucket := ctx.Params("bucket")
bypass := strings.EqualFold(ctx.Get("X-Amz-Bypass-Governance-Retention"), "true")
acct := utils.ContextKeyAccount.Get(ctx).(auth.Account)
isRoot := utils.ContextKeyIsRoot.Get(ctx).(bool)
parsedAcl := utils.ContextKeyParsedAcl.Get(ctx).(auth.ACL)
IsBucketPublic := utils.ContextKeyPublicBucket.IsSet(ctx)
err := auth.VerifyAccess(ctx.Context(), c.be,
auth.AccessOptions{
Readonly: c.readonly,
Acl: parsedAcl,
AclPermission: auth.PermissionWrite,
IsRoot: isRoot,
Acc: acct,
Bucket: bucket,
Action: auth.DeleteObjectAction,
IsPublicRequest: IsBucketPublic,
})
if err != nil {
return &Response{
MetaOpts: &MetaOptions{
BucketOwner: parsedAcl.Owner,
},
}, err
}
var dObj s3response.DeleteObjects
err = xml.Unmarshal(ctx.Body(), &dObj)
if err != nil {
debuglogger.Logf("error unmarshalling delete objects: %v", err)
return &Response{
MetaOpts: &MetaOptions{
BucketOwner: parsedAcl.Owner,
},
}, s3err.GetAPIError(s3err.ErrInvalidRequest)
}
err = auth.CheckObjectAccess(ctx.Context(), bucket, acct.Access, dObj.Objects, bypass, IsBucketPublic, c.be)
if err != nil {
return &Response{
MetaOpts: &MetaOptions{
BucketOwner: parsedAcl.Owner,
},
}, err
}
res, err := c.be.DeleteObjects(ctx.Context(),
&s3.DeleteObjectsInput{
Bucket: &bucket,
Delete: &types.Delete{
Objects: dObj.Objects,
},
})
return &Response{
Data: res,
MetaOpts: &MetaOptions{
ObjectCount: int64(len(dObj.Objects)),
BucketOwner: parsedAcl.Owner,
EventName: s3event.EventObjectRemovedDeleteObjects,
},
}, err
}
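
A sketch (assumed to be built inside the versitygw module so the imports resolve) of producing the kind of body this handler unmarshals; marshalling the same s3response.DeleteObjects type the handler reads avoids guessing the XML element names:

package main

import (
	"encoding/xml"
	"fmt"

	"github.com/aws/aws-sdk-go-v2/service/s3/types"
	"github.com/versity/versitygw/s3api/utils"
	"github.com/versity/versitygw/s3response"
)

func main() {
	// two object keys to delete in one request
	body, err := xml.Marshal(s3response.DeleteObjects{
		Objects: []types.ObjectIdentifier{
			{Key: utils.GetStringPtr("obj1")},
			{Key: utils.GetStringPtr("obj2")},
		},
	})
	if err != nil {
		panic(err)
	}
	fmt.Println(string(body))
}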


@@ -0,0 +1,165 @@
// Copyright 2023 Versity Software
// This file is licensed under the Apache License, Version 2.0
// (the "License"); you may not use this file except in compliance
// with the License. You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing,
// software distributed under the License is distributed on an
// "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
// KIND, either express or implied. See the License for the
// specific language governing permissions and limitations
// under the License.
package controllers
import (
"context"
"encoding/xml"
"testing"
"github.com/aws/aws-sdk-go-v2/service/s3"
"github.com/aws/aws-sdk-go-v2/service/s3/types"
"github.com/stretchr/testify/assert"
"github.com/versity/versitygw/s3api/utils"
"github.com/versity/versitygw/s3err"
"github.com/versity/versitygw/s3event"
"github.com/versity/versitygw/s3response"
)
func TestS3ApiController_DeleteObjects(t *testing.T) {
validBody, err := xml.Marshal(s3response.DeleteObjects{
Objects: []types.ObjectIdentifier{
{Key: utils.GetStringPtr("obj")},
},
})
assert.NoError(t, err)
validRes := s3response.DeleteResult{
Deleted: []types.DeletedObject{
{Key: utils.GetStringPtr("key")},
},
}
tests := []struct {
name string
input testInput
output testOutput
}{
{
name: "verify access fails",
input: testInput{
locals: accessDeniedLocals,
},
output: testOutput{
response: &Response{
MetaOpts: &MetaOptions{
BucketOwner: "root",
},
},
err: s3err.GetAPIError(s3err.ErrAccessDenied),
},
},
{
name: "invalid request body",
input: testInput{
locals: defaultLocals,
body: []byte("invalid_body"),
},
output: testOutput{
response: &Response{
MetaOpts: &MetaOptions{
BucketOwner: "root",
},
},
err: s3err.GetAPIError(s3err.ErrInvalidRequest),
},
},
{
name: "check object access returns error",
input: testInput{
locals: defaultLocals,
body: validBody,
extraMockErr: s3err.GetAPIError(s3err.ErrObjectLocked),
},
output: testOutput{
response: &Response{
MetaOpts: &MetaOptions{
BucketOwner: "root",
},
},
err: s3err.GetAPIError(s3err.ErrObjectLocked),
},
},
{
name: "backend returns error",
input: testInput{
locals: defaultLocals,
beRes: s3response.DeleteResult{},
beErr: s3err.GetAPIError(s3err.ErrNoSuchBucket),
body: validBody,
extraMockErr: s3err.GetAPIError(s3err.ErrObjectLockConfigurationNotFound),
},
output: testOutput{
response: &Response{
Data: s3response.DeleteResult{},
MetaOpts: &MetaOptions{
BucketOwner: "root",
EventName: s3event.EventObjectRemovedDeleteObjects,
ObjectCount: 1,
},
},
err: s3err.GetAPIError(s3err.ErrNoSuchBucket),
},
},
{
name: "successful response",
input: testInput{
locals: defaultLocals,
body: validBody,
beRes: validRes,
extraMockErr: s3err.GetAPIError(s3err.ErrObjectLockConfigurationNotFound),
},
output: testOutput{
response: &Response{
Data: validRes,
MetaOpts: &MetaOptions{
BucketOwner: "root",
EventName: s3event.EventObjectRemovedDeleteObjects,
ObjectCount: 1,
},
},
},
},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
be := &BackendMock{
DeleteObjectsFunc: func(contextMoqParam context.Context, deleteObjectsInput *s3.DeleteObjectsInput) (s3response.DeleteResult, error) {
return tt.input.beRes.(s3response.DeleteResult), tt.input.beErr
},
GetBucketPolicyFunc: func(contextMoqParam context.Context, bucket string) ([]byte, error) {
return nil, s3err.GetAPIError(s3err.ErrAccessDenied)
},
GetObjectLockConfigurationFunc: func(contextMoqParam context.Context, bucket string) ([]byte, error) {
return nil, tt.input.extraMockErr
},
}
ctrl := S3ApiController{
be: be,
}
testController(
t,
ctrl.DeleteObjects,
tt.output.response,
tt.output.err,
ctxInputs{
locals: tt.input.locals,
body: tt.input.body,
})
})
}
}


@@ -0,0 +1,603 @@
// Copyright 2023 Versity Software
// This file is licensed under the Apache License, Version 2.0
// (the "License"); you may not use this file except in compliance
// with the License. You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing,
// software distributed under the License is distributed on an
// "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
// KIND, either express or implied. See the License for the
// specific language governing permissions and limitations
// under the License.
package controllers
import (
"bytes"
"encoding/xml"
"errors"
"fmt"
"net/http"
"strings"
"github.com/aws/aws-sdk-go-v2/service/s3"
"github.com/aws/aws-sdk-go-v2/service/s3/types"
"github.com/gofiber/fiber/v2"
"github.com/versity/versitygw/auth"
"github.com/versity/versitygw/debuglogger"
"github.com/versity/versitygw/s3api/utils"
"github.com/versity/versitygw/s3err"
"github.com/versity/versitygw/s3response"
)
func (c S3ApiController) PutBucketTagging(ctx *fiber.Ctx) (*Response, error) {
bucket := ctx.Params("bucket")
parsedAcl := utils.ContextKeyParsedAcl.Get(ctx).(auth.ACL)
acct := utils.ContextKeyAccount.Get(ctx).(auth.Account)
isRoot := utils.ContextKeyIsRoot.Get(ctx).(bool)
isPublicBucket := utils.ContextKeyPublicBucket.IsSet(ctx)
err := auth.VerifyAccess(ctx.Context(), c.be, auth.AccessOptions{
Readonly: c.readonly,
Acl: parsedAcl,
AclPermission: auth.PermissionWrite,
IsRoot: isRoot,
Acc: acct,
Bucket: bucket,
Action: auth.PutBucketTaggingAction,
IsPublicRequest: isPublicBucket,
})
if err != nil {
return &Response{
MetaOpts: &MetaOptions{
BucketOwner: parsedAcl.Owner,
},
}, err
}
tagging, err := utils.ParseTagging(ctx.Body(), utils.TagLimitBucket)
if err != nil {
return &Response{
MetaOpts: &MetaOptions{
BucketOwner: parsedAcl.Owner,
},
}, err
}
err = c.be.PutBucketTagging(ctx.Context(), bucket, tagging)
return &Response{
MetaOpts: &MetaOptions{
BucketOwner: parsedAcl.Owner,
Status: http.StatusNoContent,
},
}, err
}
func (c S3ApiController) PutBucketOwnershipControls(ctx *fiber.Ctx) (*Response, error) {
bucket := ctx.Params("bucket")
parsedAcl := utils.ContextKeyParsedAcl.Get(ctx).(auth.ACL)
acct := utils.ContextKeyAccount.Get(ctx).(auth.Account)
isRoot := utils.ContextKeyIsRoot.Get(ctx).(bool)
if err := auth.VerifyAccess(ctx.Context(), c.be, auth.AccessOptions{
Readonly: c.readonly,
Acl: parsedAcl,
AclPermission: auth.PermissionWrite,
IsRoot: isRoot,
Acc: acct,
Bucket: bucket,
Action: auth.PutBucketOwnershipControlsAction,
}); err != nil {
return &Response{
MetaOpts: &MetaOptions{
BucketOwner: parsedAcl.Owner,
},
}, err
}
var ownershipControls s3response.OwnershipControls
if err := xml.Unmarshal(ctx.Body(), &ownershipControls); err != nil {
debuglogger.Logf("failed to unmarshal request body: %v", err)
return &Response{
MetaOpts: &MetaOptions{
BucketOwner: parsedAcl.Owner,
},
}, s3err.GetAPIError(s3err.ErrMalformedXML)
}
rulesCount := len(ownershipControls.Rules)
// check the rules count before indexing so an empty Rules slice cannot panic
if rulesCount != 1 || !utils.IsValidOwnership(ownershipControls.Rules[0].ObjectOwnership) {
if rulesCount != 1 {
debuglogger.Logf("ownership control rules should be 1, got %v", rulesCount)
}
return &Response{
MetaOpts: &MetaOptions{
BucketOwner: parsedAcl.Owner,
},
}, s3err.GetAPIError(s3err.ErrMalformedXML)
}
err := c.be.PutBucketOwnershipControls(ctx.Context(), bucket, ownershipControls.Rules[0].ObjectOwnership)
return &Response{
MetaOpts: &MetaOptions{
BucketOwner: parsedAcl.Owner,
},
}, err
}
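
A sketch (again assuming it is compiled inside the versitygw module) of a body that satisfies the validation above: exactly one rule carrying an SDK ObjectOwnership value; BucketOwnerEnforced is used here as an assumption about what utils.IsValidOwnership accepts. Marshalling the same s3response type the handler unmarshals keeps the element names consistent:

package main

import (
	"encoding/xml"
	"fmt"

	"github.com/aws/aws-sdk-go-v2/service/s3/types"
	"github.com/versity/versitygw/s3response"
)

func main() {
	// exactly one rule, as the handler requires
	body, err := xml.Marshal(s3response.OwnershipControls{
		Rules: []types.OwnershipControlsRule{
			{ObjectOwnership: types.ObjectOwnershipBucketOwnerEnforced},
		},
	})
	if err != nil {
		panic(err)
	}
	fmt.Println(string(body))
}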
func (c S3ApiController) PutBucketVersioning(ctx *fiber.Ctx) (*Response, error) {
bucket := ctx.Params("bucket")
parsedAcl := utils.ContextKeyParsedAcl.Get(ctx).(auth.ACL)
acct := utils.ContextKeyAccount.Get(ctx).(auth.Account)
isRoot := utils.ContextKeyIsRoot.Get(ctx).(bool)
isPublicBucket := utils.ContextKeyPublicBucket.IsSet(ctx)
err := auth.VerifyAccess(ctx.Context(), c.be, auth.AccessOptions{
Readonly: c.readonly,
Acl: parsedAcl,
AclPermission: auth.PermissionWrite,
IsRoot: isRoot,
Acc: acct,
Bucket: bucket,
Action: auth.PutBucketVersioningAction,
IsPublicRequest: isPublicBucket,
})
if err != nil {
return &Response{
MetaOpts: &MetaOptions{
BucketOwner: parsedAcl.Owner,
},
}, err
}
var versioningConf types.VersioningConfiguration
err = xml.Unmarshal(ctx.Body(), &versioningConf)
if err != nil {
debuglogger.Logf("error unmarshalling versioning configuration: %v", err)
return &Response{
MetaOpts: &MetaOptions{
BucketOwner: parsedAcl.Owner,
},
}, s3err.GetAPIError(s3err.ErrInvalidRequest)
}
if versioningConf.Status != types.BucketVersioningStatusEnabled &&
versioningConf.Status != types.BucketVersioningStatusSuspended {
debuglogger.Logf("invalid versioning configuration status: %v", versioningConf.Status)
return &Response{
MetaOpts: &MetaOptions{
BucketOwner: parsedAcl.Owner,
},
}, s3err.GetAPIError(s3err.ErrMalformedXML)
}
err = c.be.PutBucketVersioning(ctx.Context(), bucket, versioningConf.Status)
return &Response{
MetaOpts: &MetaOptions{
BucketOwner: parsedAcl.Owner,
},
}, err
}
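
For reference, a minimal sketch of the body shape this handler accepts: encoding/xml matches the Status element to the SDK struct's field by name, and the handler then only allows Enabled or Suspended:

package main

import (
	"encoding/xml"
	"fmt"

	"github.com/aws/aws-sdk-go-v2/service/s3/types"
)

func main() {
	body := []byte(`<VersioningConfiguration><Status>Enabled</Status></VersioningConfiguration>`)

	var conf types.VersioningConfiguration
	if err := xml.Unmarshal(body, &conf); err != nil {
		panic(err)
	}

	// the handler rejects any status other than these two
	valid := conf.Status == types.BucketVersioningStatusEnabled ||
		conf.Status == types.BucketVersioningStatusSuspended
	fmt.Println(conf.Status, valid) // Enabled true
}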
func (c S3ApiController) PutObjectLockConfiguration(ctx *fiber.Ctx) (*Response, error) {
bucket := ctx.Params("bucket")
parsedAcl := utils.ContextKeyParsedAcl.Get(ctx).(auth.ACL)
acct := utils.ContextKeyAccount.Get(ctx).(auth.Account)
isRoot := utils.ContextKeyIsRoot.Get(ctx).(bool)
isPublicBucket := utils.ContextKeyPublicBucket.IsSet(ctx)
if err := auth.VerifyAccess(ctx.Context(), c.be, auth.AccessOptions{
Readonly: c.readonly,
Acl: parsedAcl,
AclPermission: auth.PermissionWrite,
IsRoot: isRoot,
Acc: acct,
Bucket: bucket,
Action: auth.PutBucketObjectLockConfigurationAction,
IsPublicRequest: isPublicBucket,
}); err != nil {
return &Response{
MetaOpts: &MetaOptions{
BucketOwner: parsedAcl.Owner,
},
}, err
}
config, err := auth.ParseBucketLockConfigurationInput(ctx.Body())
if err != nil {
return &Response{
MetaOpts: &MetaOptions{
BucketOwner: parsedAcl.Owner,
},
}, err
}
err = c.be.PutObjectLockConfiguration(ctx.Context(), bucket, config)
return &Response{
MetaOpts: &MetaOptions{
BucketOwner: parsedAcl.Owner,
},
}, err
}
func (c S3ApiController) PutBucketCors(ctx *fiber.Ctx) (*Response, error) {
bucket := ctx.Params("bucket")
parsedAcl := utils.ContextKeyParsedAcl.Get(ctx).(auth.ACL)
acct := utils.ContextKeyAccount.Get(ctx).(auth.Account)
isRoot := utils.ContextKeyIsRoot.Get(ctx).(bool)
isPublicBucket := utils.ContextKeyPublicBucket.IsSet(ctx)
err := auth.VerifyAccess(ctx.Context(), c.be, auth.AccessOptions{
Readonly: c.readonly,
Acl: parsedAcl,
AclPermission: auth.PermissionWrite,
IsRoot: isRoot,
Acc: acct,
Bucket: bucket,
Action: auth.PutBucketCorsAction,
IsPublicRequest: isPublicBucket,
})
if err != nil {
return &Response{
MetaOpts: &MetaOptions{
BucketOwner: parsedAcl.Owner,
},
}, err
}
body := ctx.Body()
var corsConfig auth.CORSConfiguration
err = xml.Unmarshal(body, &corsConfig)
if err != nil {
debuglogger.Logf("invalid CORS request body: %v", err)
return &Response{
MetaOpts: &MetaOptions{
BucketOwner: parsedAcl.Owner,
},
}, s3err.GetAPIError(s3err.ErrMalformedXML)
}
// validate the CORS configuration rules
err = corsConfig.Validate()
if err != nil {
return &Response{
MetaOpts: &MetaOptions{
BucketOwner: parsedAcl.Owner,
},
}, err
}
algo, checksums, err := utils.ParseChecksumHeadersAndSdkAlgo(ctx)
if err != nil {
return &Response{
MetaOpts: &MetaOptions{
BucketOwner: parsedAcl.Owner,
},
}, err
}
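// if the client supplied a checksum algorithm, verify the request body against the provided checksum before storing the configuration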
if algo != "" {
rdr, err := utils.NewHashReader(bytes.NewReader(body), checksums[algo], utils.HashType(strings.ToLower(string(algo))))
if err != nil {
return &Response{
MetaOpts: &MetaOptions{
BucketOwner: parsedAcl.Owner,
},
}, err
}
// Pass the same body to avoid data duplication
_, err = rdr.Read(body)
if err != nil {
debuglogger.Logf("failed to read hash calculation data: %v", err)
return &Response{
MetaOpts: &MetaOptions{
BucketOwner: parsedAcl.Owner,
},
}, err
}
}
err = c.be.PutBucketCors(ctx.Context(), bucket, body)
return &Response{
MetaOpts: &MetaOptions{
BucketOwner: parsedAcl.Owner,
},
}, err
}
func (c S3ApiController) PutBucketPolicy(ctx *fiber.Ctx) (*Response, error) {
bucket := ctx.Params("bucket")
parsedAcl := utils.ContextKeyParsedAcl.Get(ctx).(auth.ACL)
acct := utils.ContextKeyAccount.Get(ctx).(auth.Account)
isRoot := utils.ContextKeyIsRoot.Get(ctx).(bool)
err := auth.VerifyAccess(ctx.Context(), c.be, auth.AccessOptions{
Readonly: c.readonly,
Acl: parsedAcl,
AclPermission: auth.PermissionWrite,
IsRoot: isRoot,
Acc: acct,
Bucket: bucket,
Action: auth.PutBucketPolicyAction,
})
if err != nil {
return &Response{
MetaOpts: &MetaOptions{
BucketOwner: parsedAcl.Owner,
},
}, err
}
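// validate the policy document before storing it; the iam service is used to resolve the principals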
err = auth.ValidatePolicyDocument(ctx.Body(), bucket, c.iam)
if err != nil {
return &Response{
MetaOpts: &MetaOptions{
BucketOwner: parsedAcl.Owner,
},
}, err
}
err = c.be.PutBucketPolicy(ctx.Context(), bucket, ctx.Body())
return &Response{
MetaOpts: &MetaOptions{
BucketOwner: parsedAcl.Owner,
},
}, err
}
func (c S3ApiController) PutBucketAcl(ctx *fiber.Ctx) (*Response, error) {
bucket := ctx.Params("bucket")
acl := ctx.Get("X-Amz-Acl")
grantFullControl := ctx.Get("X-Amz-Grant-Full-Control")
grantRead := ctx.Get("X-Amz-Grant-Read")
grantReadACP := ctx.Get("X-Amz-Grant-Read-Acp")
grantWrite := ctx.Get("X-Amz-Grant-Write")
grantWriteACP := ctx.Get("X-Amz-Grant-Write-Acp")
// context locals
parsedAcl := utils.ContextKeyParsedAcl.Get(ctx).(auth.ACL)
acct := utils.ContextKeyAccount.Get(ctx).(auth.Account)
isRoot := utils.ContextKeyIsRoot.Get(ctx).(bool)
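// the concatenation is only used to detect whether any grant header was provided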
grants := grantFullControl + grantRead + grantReadACP + grantWrite + grantWriteACP
var input *auth.PutBucketAclInput
err := auth.VerifyAccess(ctx.Context(), c.be,
auth.AccessOptions{
Readonly: c.readonly,
Acl: parsedAcl,
AclPermission: auth.PermissionWriteAcp,
IsRoot: isRoot,
Acc: acct,
Bucket: bucket,
Action: auth.PutBucketAclAction,
})
if err != nil {
return &Response{
MetaOpts: &MetaOptions{
BucketOwner: parsedAcl.Owner,
},
}, err
}
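// ACL updates are rejected when bucket ownership is BucketOwnerEnforced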
ownership, err := c.be.GetBucketOwnershipControls(ctx.Context(), bucket)
if err != nil && !errors.Is(err, s3err.GetAPIError(s3err.ErrOwnershipControlsNotFound)) {
return &Response{
MetaOpts: &MetaOptions{
BucketOwner: parsedAcl.Owner,
},
}, err
}
if ownership == types.ObjectOwnershipBucketOwnerEnforced {
debuglogger.Logf("bucket acls are disabled")
return &Response{
MetaOpts: &MetaOptions{
BucketOwner: parsedAcl.Owner,
},
}, s3err.GetAPIError(s3err.ErrAclNotSupported)
}
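// exactly one ACL source is accepted: an XML body, a canned ACL header, or explicit grant headers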
if len(ctx.Body()) > 0 {
var accessControlPolicy auth.AccessControlPolicy
err := xml.Unmarshal(ctx.Body(), &accessControlPolicy)
if err != nil {
debuglogger.Logf("error unmarshalling access control policy: %v", err)
return &Response{
MetaOpts: &MetaOptions{
BucketOwner: parsedAcl.Owner,
},
}, s3err.GetAPIError(s3err.ErrMalformedACL)
}
err = accessControlPolicy.Validate()
if err != nil {
debuglogger.Logf("invalid access control policy: %v", err)
return &Response{
MetaOpts: &MetaOptions{
BucketOwner: parsedAcl.Owner,
},
}, err
}
if *accessControlPolicy.Owner.ID != parsedAcl.Owner {
debuglogger.Logf("invalid access control policy owner id: %v, expected %v", *accessControlPolicy.Owner.ID, parsedAcl.Owner)
return &Response{
MetaOpts: &MetaOptions{
BucketOwner: parsedAcl.Owner,
},
}, s3err.APIError{
Code: "InvalidArgument",
Description: "Invalid id",
HTTPStatusCode: http.StatusBadRequest,
}
}
if grants+acl != "" {
debuglogger.Logf("invalid request: %q (grants) %q (acl)",
grants, acl)
return &Response{
MetaOpts: &MetaOptions{
BucketOwner: parsedAcl.Owner,
},
}, s3err.GetAPIError(s3err.ErrUnexpectedContent)
}
input = &auth.PutBucketAclInput{
Bucket: &bucket,
AccessControlPolicy: &accessControlPolicy,
}
} else if acl != "" {
if acl != "private" && acl != "public-read" && acl != "public-read-write" {
debuglogger.Logf("invalid acl: %q", acl)
return &Response{
MetaOpts: &MetaOptions{
BucketOwner: parsedAcl.Owner,
},
}, s3err.GetAPIError(s3err.ErrInvalidRequest)
}
if grants != "" {
debuglogger.Logf("invalid request: %q (grants) %q (acl)",
grants, acl)
return &Response{
MetaOpts: &MetaOptions{
BucketOwner: parsedAcl.Owner,
},
}, s3err.GetAPIError(s3err.ErrBothCannedAndHeaderGrants)
}
input = &auth.PutBucketAclInput{
Bucket: &bucket,
ACL: types.BucketCannedACL(acl),
}
} else if grants != "" {
input = &auth.PutBucketAclInput{
Bucket: &bucket,
GrantFullControl: &grantFullControl,
GrantRead: &grantRead,
GrantReadACP: &grantReadACP,
GrantWrite: &grantWrite,
GrantWriteACP: &grantWriteACP,
}
} else {
debuglogger.Logf("none of the bucket acl options has been specified: canned, req headers, req body")
return &Response{
MetaOpts: &MetaOptions{
BucketOwner: parsedAcl.Owner,
},
}, s3err.GetAPIError(s3err.ErrMissingSecurityHeader)
}
updAcl, err := auth.UpdateACL(input, parsedAcl, c.iam, acct.Role == auth.RoleAdmin)
if err != nil {
return &Response{
MetaOpts: &MetaOptions{
BucketOwner: parsedAcl.Owner,
},
}, err
}
err = c.be.PutBucketAcl(ctx.Context(), bucket, updAcl)
return &Response{
MetaOpts: &MetaOptions{
BucketOwner: parsedAcl.Owner,
},
}, err
}
func (c S3ApiController) CreateBucket(ctx *fiber.Ctx) (*Response, error) {
bucket := ctx.Params("bucket")
acl := ctx.Get("X-Amz-Acl")
grantFullControl := ctx.Get("X-Amz-Grant-Full-Control")
grantRead := ctx.Get("X-Amz-Grant-Read")
grantReadACP := ctx.Get("X-Amz-Grant-Read-Acp")
grantWrite := ctx.Get("X-Amz-Grant-Write")
grantWriteACP := ctx.Get("X-Amz-Grant-Write-Acp")
lockEnabled := strings.EqualFold(ctx.Get("X-Amz-Bucket-Object-Lock-Enabled"), "true")
acct := utils.ContextKeyAccount.Get(ctx).(auth.Account)
grants := grantFullControl + grantRead + grantReadACP + grantWrite + grantWriteACP
objectOwnership := types.ObjectOwnership(
ctx.Get("X-Amz-Object-Ownership", string(types.ObjectOwnershipBucketOwnerEnforced)),
)
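// only admin and user-plus accounts are allowed to create buckets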
if acct.Role != auth.RoleAdmin && acct.Role != auth.RoleUserPlus {
return &Response{
MetaOpts: &MetaOptions{},
}, s3err.GetAPIError(s3err.ErrAccessDenied)
}
// validate the bucket name
if ok := utils.IsValidBucketName(bucket); !ok {
return &Response{
MetaOpts: &MetaOptions{},
}, s3err.GetAPIError(s3err.ErrInvalidBucketName)
}
// validate the object ownership value
if ok := utils.IsValidOwnership(objectOwnership); !ok {
return &Response{
MetaOpts: &MetaOptions{},
}, s3err.APIError{
Code: "InvalidArgument",
Description: fmt.Sprintf("Invalid x-amz-object-ownership header: %v", objectOwnership),
HTTPStatusCode: http.StatusBadRequest,
}
}
if acl+grants != "" && objectOwnership == types.ObjectOwnershipBucketOwnerEnforced {
debuglogger.Logf("bucket acls are disabled for %v object ownership", objectOwnership)
return &Response{
MetaOpts: &MetaOptions{
BucketOwner: acct.Access,
},
}, s3err.GetAPIError(s3err.ErrInvalidBucketAclWithObjectOwnership)
}
if acl != "" && grants != "" {
debuglogger.Logf("invalid request: %q (grants) %q (acl)", grants, acl)
return &Response{
MetaOpts: &MetaOptions{
BucketOwner: acct.Access,
},
}, s3err.GetAPIError(s3err.ErrBothCannedAndHeaderGrants)
}
defACL := auth.ACL{
Owner: acct.Access,
}
updAcl, err := auth.UpdateACL(&auth.PutBucketAclInput{
GrantFullControl: &grantFullControl,
GrantRead: &grantRead,
GrantReadACP: &grantReadACP,
GrantWrite: &grantWrite,
GrantWriteACP: &grantWriteACP,
AccessControlPolicy: &auth.AccessControlPolicy{
Owner: &types.Owner{
ID: &acct.Access,
}},
ACL: types.BucketCannedACL(acl),
}, defACL, c.iam, acct.Role == auth.RoleAdmin)
if err != nil {
debuglogger.Logf("failed to update bucket acl: %v", err)
return &Response{
MetaOpts: &MetaOptions{
BucketOwner: acct.Access,
},
}, err
}
err = c.be.CreateBucket(ctx.Context(), &s3.CreateBucketInput{
Bucket: &bucket,
ObjectOwnership: objectOwnership,
ObjectLockEnabledForBucket: &lockEnabled,
}, updAcl)
return &Response{
MetaOpts: &MetaOptions{
BucketOwner: acct.Access,
},
}, err
}

File diff suppressed because it is too large


@@ -0,0 +1,206 @@
// Copyright 2023 Versity Software
// This file is licensed under the Apache License, Version 2.0
// (the "License"); you may not use this file except in compliance
// with the License. You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing,
// software distributed under the License is distributed on an
// "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
// KIND, either express or implied. See the License for the
// specific language governing permissions and limitations
// under the License.
package controllers
import (
"fmt"
"net/http"
"strings"
"github.com/aws/aws-sdk-go-v2/service/s3"
"github.com/aws/aws-sdk-go-v2/service/s3/types"
"github.com/gofiber/fiber/v2"
"github.com/versity/versitygw/auth"
"github.com/versity/versitygw/s3api/utils"
"github.com/versity/versitygw/s3event"
)
func (c S3ApiController) DeleteObjectTagging(ctx *fiber.Ctx) (*Response, error) {
bucket := ctx.Params("bucket")
key := strings.TrimPrefix(ctx.Path(), fmt.Sprintf("/%s/", bucket))
acct := utils.ContextKeyAccount.Get(ctx).(auth.Account)
isRoot := utils.ContextKeyIsRoot.Get(ctx).(bool)
isBucketPublic := utils.ContextKeyPublicBucket.IsSet(ctx)
parsedAcl := utils.ContextKeyParsedAcl.Get(ctx).(auth.ACL)
err := auth.VerifyAccess(ctx.Context(), c.be,
auth.AccessOptions{
Readonly: c.readonly,
Acl: parsedAcl,
AclPermission: auth.PermissionWrite,
IsRoot: isRoot,
Acc: acct,
Bucket: bucket,
Object: key,
Action: auth.DeleteObjectTaggingAction,
IsPublicRequest: isBucketPublic,
})
if err != nil {
return &Response{
MetaOpts: &MetaOptions{
BucketOwner: parsedAcl.Owner,
},
}, err
}
err = c.be.DeleteObjectTagging(ctx.Context(), bucket, key)
return &Response{
MetaOpts: &MetaOptions{
Status: http.StatusNoContent,
BucketOwner: parsedAcl.Owner,
EventName: s3event.EventObjectTaggingDelete,
},
}, err
}
func (c S3ApiController) AbortMultipartUpload(ctx *fiber.Ctx) (*Response, error) {
bucket := ctx.Params("bucket")
key := strings.TrimPrefix(ctx.Path(), fmt.Sprintf("/%s/", bucket))
uploadId := ctx.Query("uploadId")
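// optional precondition header matched against the multipart upload initiation time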
ifMatchInitiatedTime := utils.ParsePreconditionDateHeader(ctx.Get("X-Amz-If-Match-Initiated-Time"))
acct := utils.ContextKeyAccount.Get(ctx).(auth.Account)
isRoot := utils.ContextKeyIsRoot.Get(ctx).(bool)
isBucketPublic := utils.ContextKeyPublicBucket.IsSet(ctx)
parsedAcl := utils.ContextKeyParsedAcl.Get(ctx).(auth.ACL)
err := auth.VerifyAccess(ctx.Context(), c.be,
auth.AccessOptions{
Readonly: c.readonly,
Acl: parsedAcl,
AclPermission: auth.PermissionWrite,
IsRoot: isRoot,
Acc: acct,
Bucket: bucket,
Object: key,
Action: auth.AbortMultipartUploadAction,
IsPublicRequest: isBucketPublic,
})
if err != nil {
return &Response{
MetaOpts: &MetaOptions{
BucketOwner: parsedAcl.Owner,
},
}, err
}
err = c.be.AbortMultipartUpload(ctx.Context(),
&s3.AbortMultipartUploadInput{
UploadId: &uploadId,
Bucket: &bucket,
Key: &key,
IfMatchInitiatedTime: ifMatchInitiatedTime,
})
return &Response{
MetaOpts: &MetaOptions{
BucketOwner: parsedAcl.Owner,
Status: http.StatusNoContent,
},
}, err
}
func (c S3ApiController) DeleteObject(ctx *fiber.Ctx) (*Response, error) {
bucket := ctx.Params("bucket")
key := strings.TrimPrefix(ctx.Path(), fmt.Sprintf("/%s/", bucket))
versionId := ctx.Query("versionId")
bypass := strings.EqualFold(ctx.Get("X-Amz-Bypass-Governance-Retention"), "true")
ifMatch := utils.GetStringPtr(ctx.Get("If-Match"))
ifMatchLastModTime := utils.ParsePreconditionDateHeader(ctx.Get("X-Amz-If-Match-Last-Modified-Time"))
ifMatchSize := utils.ParseIfMatchSize(ctx)
// context locals
acct := utils.ContextKeyAccount.Get(ctx).(auth.Account)
isRoot := utils.ContextKeyIsRoot.Get(ctx).(bool)
isBucketPublic := utils.ContextKeyPublicBucket.IsSet(ctx)
parsedAcl := utils.ContextKeyParsedAcl.Get(ctx).(auth.ACL)
//TODO: check s3:DeleteObjectVersion policy in case a user tries to delete a specific version of an object
err := auth.VerifyAccess(ctx.Context(), c.be,
auth.AccessOptions{
Readonly: c.readonly,
Acl: parsedAcl,
AclPermission: auth.PermissionWrite,
IsRoot: isRoot,
Acc: acct,
Bucket: bucket,
Object: key,
Action: auth.DeleteObjectAction,
IsPublicRequest: isBucketPublic,
})
if err != nil {
return &Response{
MetaOpts: &MetaOptions{
BucketOwner: parsedAcl.Owner,
},
}, err
}
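// make sure object lock retention or legal hold does not block the delete (governance bypass is honored)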
err = auth.CheckObjectAccess(
ctx.Context(),
bucket,
acct.Access,
[]types.ObjectIdentifier{
{
Key: &key,
VersionId: &versionId,
},
},
bypass,
isBucketPublic,
c.be,
)
if err != nil {
return &Response{
MetaOpts: &MetaOptions{
BucketOwner: parsedAcl.Owner,
},
}, err
}
res, err := c.be.DeleteObject(ctx.Context(),
&s3.DeleteObjectInput{
Bucket: &bucket,
Key: &key,
VersionId: &versionId,
IfMatch: ifMatch,
IfMatchLastModifiedTime: ifMatchLastModTime,
IfMatchSize: ifMatchSize,
})
if err != nil {
return &Response{
MetaOpts: &MetaOptions{
BucketOwner: parsedAcl.Owner,
EventName: s3event.EventObjectRemovedDelete,
Status: http.StatusNoContent,
},
}, err
}
headers := map[string]*string{
"x-amz-version-id": res.VersionId,
}
if res.DeleteMarker != nil && *res.DeleteMarker {
headers["x-amz-delete-marker"] = utils.GetStringPtr("true")
}
return &Response{
Headers: headers,
MetaOpts: &MetaOptions{
BucketOwner: parsedAcl.Owner,
EventName: s3event.EventObjectRemovedDelete,
Status: http.StatusNoContent,
},
}, nil
}


@@ -0,0 +1,296 @@
// Copyright 2023 Versity Software
// This file is licensed under the Apache License, Version 2.0
// (the "License"); you may not use this file except in compliance
// with the License. You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing,
// software distributed under the License is distributed on an
// "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
// KIND, either express or implied. See the License for the
// specific language governing permissions and limitations
// under the License.
package controllers
import (
"context"
"net/http"
"testing"
"github.com/aws/aws-sdk-go-v2/service/s3"
"github.com/versity/versitygw/s3api/utils"
"github.com/versity/versitygw/s3err"
"github.com/versity/versitygw/s3event"
)
func TestS3ApiController_DeleteObjectTagging(t *testing.T) {
tests := []struct {
name string
input testInput
output testOutput
}{
{
name: "verify access fails",
input: testInput{
locals: accessDeniedLocals,
},
output: testOutput{
response: &Response{
MetaOpts: &MetaOptions{
BucketOwner: "root",
},
},
err: s3err.GetAPIError(s3err.ErrAccessDenied),
},
},
{
name: "backend returns error",
input: testInput{
locals: defaultLocals,
beErr: s3err.GetAPIError(s3err.ErrInvalidRequest),
},
output: testOutput{
response: &Response{
MetaOpts: &MetaOptions{
BucketOwner: "root",
Status: http.StatusNoContent,
EventName: s3event.EventObjectTaggingDelete,
},
},
err: s3err.GetAPIError(s3err.ErrInvalidRequest),
},
},
{
name: "successful response",
input: testInput{
locals: defaultLocals,
},
output: testOutput{
response: &Response{
MetaOpts: &MetaOptions{
BucketOwner: "root",
Status: http.StatusNoContent,
EventName: s3event.EventObjectTaggingDelete,
},
},
},
},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
be := &BackendMock{
DeleteObjectTaggingFunc: func(contextMoqParam context.Context, bucket, object string) error {
return tt.input.beErr
},
GetBucketPolicyFunc: func(contextMoqParam context.Context, bucket string) ([]byte, error) {
return nil, s3err.GetAPIError(s3err.ErrAccessDenied)
},
}
ctrl := S3ApiController{
be: be,
}
testController(
t,
ctrl.DeleteObjectTagging,
tt.output.response,
tt.output.err,
ctxInputs{
locals: tt.input.locals,
})
})
}
}
func TestS3ApiController_AbortMultipartUpload(t *testing.T) {
tests := []struct {
name string
input testInput
output testOutput
}{
{
name: "verify access fails",
input: testInput{
locals: accessDeniedLocals,
},
output: testOutput{
response: &Response{
MetaOpts: &MetaOptions{
BucketOwner: "root",
},
},
err: s3err.GetAPIError(s3err.ErrAccessDenied),
},
},
{
name: "backend returns error",
input: testInput{
locals: defaultLocals,
beErr: s3err.GetAPIError(s3err.ErrInvalidRequest),
},
output: testOutput{
response: &Response{
MetaOpts: &MetaOptions{
BucketOwner: "root",
Status: http.StatusNoContent,
},
},
err: s3err.GetAPIError(s3err.ErrInvalidRequest),
},
},
{
name: "successful response",
input: testInput{
locals: defaultLocals,
},
output: testOutput{
response: &Response{
MetaOpts: &MetaOptions{
BucketOwner: "root",
Status: http.StatusNoContent,
},
},
},
},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
be := &BackendMock{
AbortMultipartUploadFunc: func(contextMoqParam context.Context, abortMultipartUploadInput *s3.AbortMultipartUploadInput) error {
return tt.input.beErr
},
GetBucketPolicyFunc: func(contextMoqParam context.Context, bucket string) ([]byte, error) {
return nil, s3err.GetAPIError(s3err.ErrAccessDenied)
},
}
ctrl := S3ApiController{
be: be,
}
testController(
t,
ctrl.AbortMultipartUpload,
tt.output.response,
tt.output.err,
ctxInputs{
locals: tt.input.locals,
})
})
}
}
func TestS3ApiController_DeleteObject(t *testing.T) {
delMarker, versionId := true, "versionId"
var emptyRes *s3.DeleteObjectOutput
tests := []struct {
name string
input testInput
output testOutput
}{
{
name: "verify access fails",
input: testInput{
locals: accessDeniedLocals,
},
output: testOutput{
response: &Response{
MetaOpts: &MetaOptions{
BucketOwner: "root",
},
},
err: s3err.GetAPIError(s3err.ErrAccessDenied),
},
},
{
name: "object locked",
input: testInput{
locals: defaultLocals,
extraMockErr: s3err.GetAPIError(s3err.ErrObjectLocked),
},
output: testOutput{
response: &Response{
MetaOpts: &MetaOptions{
BucketOwner: "root",
},
},
err: s3err.GetAPIError(s3err.ErrObjectLocked),
},
},
{
name: "backend returns error",
input: testInput{
locals: defaultLocals,
beErr: s3err.GetAPIError(s3err.ErrInvalidRequest),
extraMockErr: s3err.GetAPIError(s3err.ErrObjectLockConfigurationNotFound),
beRes: emptyRes,
},
output: testOutput{
response: &Response{
MetaOpts: &MetaOptions{
BucketOwner: "root",
Status: http.StatusNoContent,
EventName: s3event.EventObjectRemovedDelete,
},
},
err: s3err.GetAPIError(s3err.ErrInvalidRequest),
},
},
{
name: "successful response",
input: testInput{
locals: defaultLocals,
extraMockErr: s3err.GetAPIError(s3err.ErrObjectLockConfigurationNotFound),
beRes: &s3.DeleteObjectOutput{
DeleteMarker: &delMarker,
VersionId: &versionId,
},
},
output: testOutput{
response: &Response{
Headers: map[string]*string{
"x-amz-delete-marker": utils.GetStringPtr("true"),
"x-amz-version-id": &versionId,
},
MetaOpts: &MetaOptions{
BucketOwner: "root",
Status: http.StatusNoContent,
EventName: s3event.EventObjectRemovedDelete,
},
},
},
},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
be := &BackendMock{
DeleteObjectFunc: func(contextMoqParam context.Context, deleteObjectInput *s3.DeleteObjectInput) (*s3.DeleteObjectOutput, error) {
return tt.input.beRes.(*s3.DeleteObjectOutput), tt.input.beErr
},
GetBucketPolicyFunc: func(contextMoqParam context.Context, bucket string) ([]byte, error) {
return nil, s3err.GetAPIError(s3err.ErrAccessDenied)
},
GetObjectLockConfigurationFunc: func(contextMoqParam context.Context, bucket string) ([]byte, error) {
return nil, tt.input.extraMockErr
},
}
ctrl := S3ApiController{
be: be,
}
testController(
t,
ctrl.DeleteObject,
tt.output.response,
tt.output.err,
ctxInputs{
locals: tt.input.locals,
})
})
}
}


@@ -0,0 +1,560 @@
// Copyright 2023 Versity Software
// This file is licensed under the Apache License, Version 2.0
// (the "License"); you may not use this file except in compliance
// with the License. You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing,
// software distributed under the License is distributed on an
// "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
// KIND, either express or implied. See the License for the
// specific language governing permissions and limitations
// under the License.
package controllers
import (
"fmt"
"math"
"net/http"
"strconv"
"strings"
"time"
"github.com/aws/aws-sdk-go-v2/service/s3"
"github.com/aws/aws-sdk-go-v2/service/s3/types"
"github.com/gofiber/fiber/v2"
"github.com/versity/versitygw/auth"
"github.com/versity/versitygw/debuglogger"
"github.com/versity/versitygw/s3api/utils"
"github.com/versity/versitygw/s3err"
"github.com/versity/versitygw/s3response"
)
func (c S3ApiController) GetObjectTagging(ctx *fiber.Ctx) (*Response, error) {
bucket := ctx.Params("bucket")
key := strings.TrimPrefix(ctx.Path(), fmt.Sprintf("/%s/", bucket))
acct := utils.ContextKeyAccount.Get(ctx).(auth.Account)
isRoot := utils.ContextKeyIsRoot.Get(ctx).(bool)
parsedAcl := utils.ContextKeyParsedAcl.Get(ctx).(auth.ACL)
isPublicBucket := utils.ContextKeyPublicBucket.IsSet(ctx)
err := auth.VerifyAccess(ctx.Context(), c.be, auth.AccessOptions{
Readonly: c.readonly,
Acl: parsedAcl,
AclPermission: auth.PermissionRead,
IsRoot: isRoot,
Acc: acct,
Bucket: bucket,
Object: key,
Action: auth.GetObjectTaggingAction,
IsPublicRequest: isPublicBucket,
})
if err != nil {
return &Response{
MetaOpts: &MetaOptions{
BucketOwner: parsedAcl.Owner,
},
}, err
}
data, err := c.be.GetObjectTagging(ctx.Context(), bucket, key)
if err != nil {
return &Response{
MetaOpts: &MetaOptions{
BucketOwner: parsedAcl.Owner,
},
}, err
}
tags := s3response.Tagging{
TagSet: s3response.TagSet{Tags: []s3response.Tag{}},
}
for key, val := range data {
tags.TagSet.Tags = append(tags.TagSet.Tags,
s3response.Tag{Key: key, Value: val})
}
return &Response{
Data: tags,
MetaOpts: &MetaOptions{
BucketOwner: parsedAcl.Owner,
},
}, err
}
func (c S3ApiController) GetObjectRetention(ctx *fiber.Ctx) (*Response, error) {
bucket := ctx.Params("bucket")
key := strings.TrimPrefix(ctx.Path(), fmt.Sprintf("/%s/", bucket))
versionId := ctx.Query("versionId")
// context locals
acct := utils.ContextKeyAccount.Get(ctx).(auth.Account)
isRoot := utils.ContextKeyIsRoot.Get(ctx).(bool)
parsedAcl := utils.ContextKeyParsedAcl.Get(ctx).(auth.ACL)
isPublicBucket := utils.ContextKeyPublicBucket.IsSet(ctx)
err := auth.VerifyAccess(ctx.Context(), c.be, auth.AccessOptions{
Readonly: c.readonly,
Acl: parsedAcl,
AclPermission: auth.PermissionRead,
IsRoot: isRoot,
Acc: acct,
Bucket: bucket,
Object: key,
Action: auth.GetObjectRetentionAction,
IsPublicRequest: isPublicBucket,
})
if err != nil {
return &Response{
MetaOpts: &MetaOptions{
BucketOwner: parsedAcl.Owner,
},
}, err
}
data, err := c.be.GetObjectRetention(ctx.Context(), bucket, key, versionId)
if err != nil {
return &Response{
MetaOpts: &MetaOptions{
BucketOwner: parsedAcl.Owner,
},
}, err
}
retention, err := auth.ParseObjectLockRetentionOutput(data)
return &Response{
Data: retention,
MetaOpts: &MetaOptions{
BucketOwner: parsedAcl.Owner,
},
}, err
}
func (c S3ApiController) GetObjectLegalHold(ctx *fiber.Ctx) (*Response, error) {
bucket := ctx.Params("bucket")
key := strings.TrimPrefix(ctx.Path(), fmt.Sprintf("/%s/", bucket))
versionId := ctx.Query("versionId")
// context locals
acct := utils.ContextKeyAccount.Get(ctx).(auth.Account)
isRoot := utils.ContextKeyIsRoot.Get(ctx).(bool)
parsedAcl := utils.ContextKeyParsedAcl.Get(ctx).(auth.ACL)
isPublicBucket := utils.ContextKeyPublicBucket.IsSet(ctx)
err := auth.VerifyAccess(ctx.Context(), c.be, auth.AccessOptions{
Readonly: c.readonly,
Acl: parsedAcl,
AclPermission: auth.PermissionRead,
IsRoot: isRoot,
Acc: acct,
Bucket: bucket,
Object: key,
Action: auth.GetObjectLegalHoldAction,
IsPublicRequest: isPublicBucket,
})
if err != nil {
return &Response{
MetaOpts: &MetaOptions{
BucketOwner: parsedAcl.Owner,
},
}, err
}
data, err := c.be.GetObjectLegalHold(ctx.Context(), bucket, key, versionId)
return &Response{
Data: auth.ParseObjectLegalHoldOutput(data),
MetaOpts: &MetaOptions{
BucketOwner: parsedAcl.Owner,
},
}, err
}
func (c S3ApiController) GetObjectAcl(ctx *fiber.Ctx) (*Response, error) {
bucket := ctx.Params("bucket")
key := strings.TrimPrefix(ctx.Path(), fmt.Sprintf("/%s/", bucket))
// context locals
acct := utils.ContextKeyAccount.Get(ctx).(auth.Account)
isRoot := utils.ContextKeyIsRoot.Get(ctx).(bool)
parsedAcl := utils.ContextKeyParsedAcl.Get(ctx).(auth.ACL)
isPublicBucket := utils.ContextKeyPublicBucket.IsSet(ctx)
err := auth.VerifyAccess(ctx.Context(), c.be, auth.AccessOptions{
Readonly: c.readonly,
Acl: parsedAcl,
AclPermission: auth.PermissionReadAcp,
IsRoot: isRoot,
Acc: acct,
Bucket: bucket,
Object: key,
Action: auth.GetObjectAclAction,
IsPublicRequest: isPublicBucket,
})
if err != nil {
return &Response{
MetaOpts: &MetaOptions{
BucketOwner: parsedAcl.Owner,
},
}, err
}
res, err := c.be.GetObjectAcl(ctx.Context(), &s3.GetObjectAclInput{
Bucket: &bucket,
Key: &key,
})
return &Response{
Data: res,
MetaOpts: &MetaOptions{
BucketOwner: parsedAcl.Owner,
},
}, err
}
func (c S3ApiController) ListParts(ctx *fiber.Ctx) (*Response, error) {
bucket := ctx.Params("bucket")
key := strings.TrimPrefix(ctx.Path(), fmt.Sprintf("/%s/", bucket))
uploadId := ctx.Query("uploadId")
partNumberMarker := ctx.Query("part-number-marker")
maxPartsStr := ctx.Query("max-parts")
// context locals
acct := utils.ContextKeyAccount.Get(ctx).(auth.Account)
isRoot := utils.ContextKeyIsRoot.Get(ctx).(bool)
parsedAcl := utils.ContextKeyParsedAcl.Get(ctx).(auth.ACL)
isPublicBucket := utils.ContextKeyPublicBucket.IsSet(ctx)
err := auth.VerifyAccess(ctx.Context(), c.be, auth.AccessOptions{
Readonly: c.readonly,
Acl: parsedAcl,
AclPermission: auth.PermissionRead,
IsRoot: isRoot,
Acc: acct,
Bucket: bucket,
Object: key,
Action: auth.ListMultipartUploadPartsAction,
IsPublicRequest: isPublicBucket,
})
if err != nil {
return &Response{
MetaOpts: &MetaOptions{
BucketOwner: parsedAcl.Owner,
},
}, err
}
// parse the part number marker
if partNumberMarker != "" {
n, err := strconv.Atoi(partNumberMarker)
if err != nil || n < 0 {
debuglogger.Logf("invalid part number marker %q: %v",
partNumberMarker, err)
return &Response{
MetaOpts: &MetaOptions{
BucketOwner: parsedAcl.Owner,
},
}, s3err.GetAPIError(s3err.ErrInvalidPartNumberMarker)
}
}
// parse the max parts
maxParts, err := utils.ParseUint(maxPartsStr)
if err != nil {
debuglogger.Logf("error parsing max parts %q: %v",
maxPartsStr, err)
return &Response{
MetaOpts: &MetaOptions{
BucketOwner: parsedAcl.Owner,
},
}, s3err.GetAPIError(s3err.ErrInvalidMaxParts)
}
res, err := c.be.ListParts(ctx.Context(), &s3.ListPartsInput{
Bucket: &bucket,
Key: &key,
UploadId: &uploadId,
PartNumberMarker: &partNumberMarker,
MaxParts: &maxParts,
})
return &Response{
Data: res,
MetaOpts: &MetaOptions{
BucketOwner: parsedAcl.Owner,
},
}, err
}
func (c S3ApiController) GetObjectAttributes(ctx *fiber.Ctx) (*Response, error) {
bucket := ctx.Params("bucket")
key := strings.TrimPrefix(ctx.Path(), fmt.Sprintf("/%s/", bucket))
versionId := ctx.Query("versionId")
maxPartsStr := ctx.Get("X-Amz-Max-Parts")
partNumberMarker := ctx.Get("X-Amz-Part-Number-Marker")
// context locals
acct := utils.ContextKeyAccount.Get(ctx).(auth.Account)
isRoot := utils.ContextKeyIsRoot.Get(ctx).(bool)
parsedAcl := utils.ContextKeyParsedAcl.Get(ctx).(auth.ACL)
isPublicBucket := utils.ContextKeyPublicBucket.IsSet(ctx)
err := auth.VerifyAccess(ctx.Context(), c.be, auth.AccessOptions{
Readonly: c.readonly,
Acl: parsedAcl,
AclPermission: auth.PermissionRead,
IsRoot: isRoot,
Acc: acct,
Bucket: bucket,
Object: key,
Action: auth.GetObjectAttributesAction,
IsPublicRequest: isPublicBucket,
})
if err != nil {
return &Response{
MetaOpts: &MetaOptions{
BucketOwner: parsedAcl.Owner,
},
}, err
}
// parse max parts
maxParts, err := utils.ParseUint(maxPartsStr)
if err != nil {
debuglogger.Logf("error parsing max parts %q: %v",
maxPartsStr, err)
return &Response{
MetaOpts: &MetaOptions{
BucketOwner: parsedAcl.Owner,
},
}, s3err.GetAPIError(s3err.ErrInvalidMaxParts)
}
// parse the object attributes
attrs, err := utils.ParseObjectAttributes(ctx)
if err != nil {
return &Response{
MetaOpts: &MetaOptions{
BucketOwner: parsedAcl.Owner,
},
}, err
}
res, err := c.be.GetObjectAttributes(ctx.Context(),
&s3.GetObjectAttributesInput{
Bucket: &bucket,
Key: &key,
PartNumberMarker: &partNumberMarker,
MaxParts: &maxParts,
VersionId: &versionId,
})
if err != nil {
headers := map[string]*string{
"x-amz-version-id": res.VersionId,
}
if res.DeleteMarker != nil && *res.DeleteMarker {
headers["x-amz-delete-marker"] = utils.GetStringPtr("true")
}
return &Response{
Headers: headers,
MetaOpts: &MetaOptions{
BucketOwner: parsedAcl.Owner,
},
}, err
}
headers := map[string]*string{
"x-amz-version-id": res.VersionId,
"Last-Modified": utils.FormatDatePtrToString(res.LastModified, iso8601TimeFormatExtended),
}
if res.DeleteMarker != nil && *res.DeleteMarker {
headers["x-amz-delete-marker"] = utils.GetStringPtr("true")
}
return &Response{
Headers: headers,
Data: utils.FilterObjectAttributes(attrs, res),
MetaOpts: &MetaOptions{
BucketOwner: parsedAcl.Owner,
},
}, err
}
func (c S3ApiController) GetObject(ctx *fiber.Ctx) (*Response, error) {
bucket := ctx.Params("bucket")
key := strings.TrimPrefix(ctx.Path(), fmt.Sprintf("/%s/", bucket))
versionId := ctx.Query("versionId")
acceptRange := ctx.Get("Range")
checksumMode := types.ChecksumMode(strings.ToUpper(ctx.Get("x-amz-checksum-mode")))
partNumberQuery := int32(ctx.QueryInt("partNumber", -1))
// Extract response override query parameters
responseOverrides := map[string]*string{
"Cache-Control": utils.GetQueryParam(ctx, "response-cache-control"),
"Content-Disposition": utils.GetQueryParam(ctx, "response-content-disposition"),
"Content-Encoding": utils.GetQueryParam(ctx, "response-content-encoding"),
"Content-Language": utils.GetQueryParam(ctx, "response-content-language"),
"Content-Type": utils.GetQueryParam(ctx, "response-content-type"),
"Expires": utils.GetQueryParam(ctx, "response-expires"),
}
// Check if any response override parameters are present
hasResponseOverrides := false
for _, override := range responseOverrides {
if override != nil {
hasResponseOverrides = true
break
}
}
// context locals
acct := utils.ContextKeyAccount.Get(ctx).(auth.Account)
isRoot := utils.ContextKeyIsRoot.Get(ctx).(bool)
parsedAcl := utils.ContextKeyParsedAcl.Get(ctx).(auth.ACL)
isPublicBucketRequest := utils.ContextKeyPublicBucket.IsSet(ctx)
utils.ContextKeySkipResBodyLog.Set(ctx, true)
// Validate that response override parameters are not used with anonymous requests
if hasResponseOverrides && isPublicBucketRequest {
return &Response{
MetaOpts: &MetaOptions{
BucketOwner: parsedAcl.Owner,
},
}, s3err.GetAPIError(s3err.ErrAnonymousResponseHeaders)
}
action := auth.GetObjectAction
if ctx.Request().URI().QueryArgs().Has("versionId") {
action = auth.GetObjectVersionAction
}
err := auth.VerifyAccess(ctx.Context(), c.be, auth.AccessOptions{
Readonly: c.readonly,
Acl: parsedAcl,
AclPermission: auth.PermissionRead,
IsRoot: isRoot,
Acc: acct,
Bucket: bucket,
Object: key,
Action: action,
IsPublicRequest: isPublicBucketRequest,
})
if err != nil {
return &Response{
MetaOpts: &MetaOptions{
BucketOwner: parsedAcl.Owner,
},
}, err
}
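// partNumber, when provided, must be within the valid part number range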
var partNumber *int32
if ctx.Request().URI().QueryArgs().Has("partNumber") {
if partNumberQuery < minPartNumber || partNumberQuery > maxPartNumber {
debuglogger.Logf("invalid part number: %d", partNumberQuery)
return &Response{
MetaOpts: &MetaOptions{
BucketOwner: parsedAcl.Owner,
},
}, s3err.GetAPIError(s3err.ErrInvalidPartNumber)
}
partNumber = &partNumberQuery
}
// validate the checksum mode
if checksumMode != "" && checksumMode != types.ChecksumModeEnabled {
debuglogger.Logf("invalid x-amz-checksum-mode header value: %v", checksumMode)
return &Response{
MetaOpts: &MetaOptions{
BucketOwner: parsedAcl.Owner,
},
}, s3err.GetInvalidChecksumHeaderErr("x-amz-checksum-mode")
}
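// forward the conditional (If-Match, If-None-Match, If-Modified-Since, If-Unmodified-Since) headers to the backend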
conditionalHeaders := utils.ParsePreconditionHeaders(ctx)
res, err := c.be.GetObject(ctx.Context(), &s3.GetObjectInput{
Bucket: &bucket,
Key: &key,
Range: &acceptRange,
IfMatch: conditionalHeaders.IfMatch,
IfNoneMatch: conditionalHeaders.IfNoneMatch,
IfModifiedSince: conditionalHeaders.IfModSince,
IfUnmodifiedSince: conditionalHeaders.IfUnmodeSince,
VersionId: &versionId,
ChecksumMode: checksumMode,
PartNumber: partNumber,
})
if err != nil {
var headers map[string]*string
if res != nil {
headers = map[string]*string{
"x-amz-delete-marker": utils.GetStringPtr("true"),
"Last-Modified": utils.FormatDatePtrToString(res.LastModified, timefmt),
}
}
return &Response{
Headers: headers,
MetaOpts: &MetaOptions{
BucketOwner: parsedAcl.Owner,
},
}, err
}
// Set x-amz-meta-... headers
utils.SetMetaHeaders(ctx, res.Metadata)
status := http.StatusOK
if acceptRange != "" {
status = http.StatusPartialContent
}
if res.Body != nil {
// -1 will stream response body until EOF if content length not set
contentLen := -1
if res.ContentLength != nil {
if *res.ContentLength > int64(math.MaxInt) {
debuglogger.Logf("content length %v int overflow",
*res.ContentLength)
return &Response{
MetaOpts: &MetaOptions{
ContentLength: utils.GetInt64(res.ContentLength),
BucketOwner: parsedAcl.Owner,
Status: status,
},
}, s3err.GetAPIError(s3err.ErrInvalidRange)
}
contentLen = int(*res.ContentLength)
}
utils.StreamResponseBody(ctx, res.Body, contentLen)
}
return &Response{
Headers: map[string]*string{
"ETag": res.ETag,
"x-amz-restore": res.Restore,
"accept-ranges": res.AcceptRanges,
"Content-Range": res.ContentRange,
"Content-Disposition": utils.ApplyOverride(res.ContentDisposition, responseOverrides["Content-Disposition"]),
"Content-Encoding": utils.ApplyOverride(res.ContentEncoding, responseOverrides["Content-Encoding"]),
"Content-Language": utils.ApplyOverride(res.ContentLanguage, responseOverrides["Content-Language"]),
"Cache-Control": utils.ApplyOverride(res.CacheControl, responseOverrides["Cache-Control"]),
"Expires": utils.ApplyOverride(res.ExpiresString, responseOverrides["Expires"]),
"x-amz-checksum-crc32": res.ChecksumCRC32,
"x-amz-checksum-crc64nvme": res.ChecksumCRC64NVME,
"x-amz-checksum-crc32c": res.ChecksumCRC32C,
"x-amz-checksum-sha1": res.ChecksumSHA1,
"x-amz-checksum-sha256": res.ChecksumSHA256,
"Content-Type": utils.ApplyOverride(res.ContentType, responseOverrides["Content-Type"]),
"x-amz-version-id": res.VersionId,
"Content-Length": utils.ConvertPtrToStringPtr(res.ContentLength),
"x-amz-mp-parts-count": utils.ConvertPtrToStringPtr(res.PartsCount),
"x-amz-tagging-count": utils.ConvertPtrToStringPtr(res.TagCount),
"x-amz-object-lock-mode": utils.ConvertToStringPtr(res.ObjectLockMode),
"x-amz-object-lock-legal-hold": utils.ConvertToStringPtr(res.ObjectLockLegalHoldStatus),
"x-amz-storage-class": utils.ConvertToStringPtr(res.StorageClass),
"x-amz-checksum-type": utils.ConvertToStringPtr(res.ChecksumType),
"x-amz-object-lock-retain-until-date": utils.FormatDatePtrToString(res.ObjectLockRetainUntilDate, time.RFC3339),
"Last-Modified": utils.FormatDatePtrToString(res.LastModified, timefmt),
},
MetaOpts: &MetaOptions{
ContentLength: utils.GetInt64(res.ContentLength),
BucketOwner: parsedAcl.Owner,
Status: status,
},
}, nil
}


@@ -0,0 +1,835 @@
// Copyright 2023 Versity Software
// This file is licensed under the Apache License, Version 2.0
// (the "License"); you may not use this file except in compliance
// with the License. You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing,
// software distributed under the License is distributed on an
// "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
// KIND, either express or implied. See the License for the
// specific language governing permissions and limitations
// under the License.
package controllers
import (
"context"
"encoding/json"
"fmt"
"io"
"net/http"
"strings"
"testing"
"time"
"github.com/aws/aws-sdk-go-v2/service/s3"
"github.com/aws/aws-sdk-go-v2/service/s3/types"
"github.com/stretchr/testify/assert"
"github.com/versity/versitygw/s3api/utils"
"github.com/versity/versitygw/s3err"
"github.com/versity/versitygw/s3response"
)
func TestS3ApiController_GetObjectTagging(t *testing.T) {
tests := []struct {
name string
input testInput
output testOutput
}{
{
name: "verify access fails",
input: testInput{
locals: accessDeniedLocals,
},
output: testOutput{
response: &Response{
MetaOpts: &MetaOptions{
BucketOwner: "root",
},
},
err: s3err.GetAPIError(s3err.ErrAccessDenied),
},
},
{
name: "backend returns error",
input: testInput{
locals: defaultLocals,
beRes: map[string]string{},
beErr: s3err.GetAPIError(s3err.ErrNoSuchBucket),
},
output: testOutput{
response: &Response{
MetaOpts: &MetaOptions{
BucketOwner: "root",
},
},
err: s3err.GetAPIError(s3err.ErrNoSuchBucket),
},
},
{
name: "successful response",
input: testInput{
locals: defaultLocals,
beRes: map[string]string{
"key": "val",
},
},
output: testOutput{
response: &Response{
Data: s3response.Tagging{
TagSet: s3response.TagSet{
Tags: []s3response.Tag{
{Key: "key", Value: "val"},
},
},
},
MetaOpts: &MetaOptions{
BucketOwner: "root",
},
},
},
},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
be := &BackendMock{
GetObjectTaggingFunc: func(contextMoqParam context.Context, bucket, object string) (map[string]string, error) {
return tt.input.beRes.(map[string]string), tt.input.beErr
},
GetBucketPolicyFunc: func(contextMoqParam context.Context, bucket string) ([]byte, error) {
return nil, s3err.GetAPIError(s3err.ErrAccessDenied)
},
}
ctrl := S3ApiController{
be: be,
}
testController(
t,
ctrl.GetObjectTagging,
tt.output.response,
tt.output.err,
ctxInputs{
locals: tt.input.locals,
body: tt.input.body,
})
})
}
}
func TestS3ApiController_GetObjectRetention(t *testing.T) {
retBytes, err := json.Marshal(types.ObjectLockRetention{
Mode: types.ObjectLockRetentionModeCompliance,
})
assert.NoError(t, err)
var retention *types.ObjectLockRetention
tests := []struct {
name string
input testInput
output testOutput
}{
{
name: "verify access fails",
input: testInput{
locals: accessDeniedLocals,
},
output: testOutput{
response: &Response{
MetaOpts: &MetaOptions{
BucketOwner: "root",
},
},
err: s3err.GetAPIError(s3err.ErrAccessDenied),
},
},
{
name: "backend returns error",
input: testInput{
locals: defaultLocals,
beRes: []byte{},
beErr: s3err.GetAPIError(s3err.ErrNoSuchBucket),
},
output: testOutput{
response: &Response{
MetaOpts: &MetaOptions{
BucketOwner: "root",
},
},
err: s3err.GetAPIError(s3err.ErrNoSuchBucket),
},
},
{
name: "invalid data from backend",
input: testInput{
locals: defaultLocals,
beRes: []byte{},
},
output: testOutput{
response: &Response{
Data: retention,
MetaOpts: &MetaOptions{
BucketOwner: "root",
},
},
err: fmt.Errorf("parse object lock retention: "),
},
},
{
name: "successful response",
input: testInput{
locals: defaultLocals,
beRes: retBytes,
},
output: testOutput{
response: &Response{
Data: &types.ObjectLockRetention{
Mode: types.ObjectLockRetentionModeCompliance,
},
MetaOpts: &MetaOptions{
BucketOwner: "root",
},
},
},
},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
be := &BackendMock{
GetObjectRetentionFunc: func(contextMoqParam context.Context, bucket, object, versionId string) ([]byte, error) {
return tt.input.beRes.([]byte), tt.input.beErr
},
GetBucketPolicyFunc: func(contextMoqParam context.Context, bucket string) ([]byte, error) {
return nil, s3err.GetAPIError(s3err.ErrAccessDenied)
},
}
ctrl := S3ApiController{
be: be,
}
testController(
t,
ctrl.GetObjectRetention,
tt.output.response,
tt.output.err,
ctxInputs{
locals: tt.input.locals,
body: tt.input.body,
})
})
}
}
func TestS3ApiController_GetObjectLegalHold(t *testing.T) {
var legalHold *bool
var emptyLegalHold *s3response.GetObjectLegalHoldResult
status := true
tests := []struct {
name string
input testInput
output testOutput
}{
{
name: "verify access fails",
input: testInput{
locals: accessDeniedLocals,
},
output: testOutput{
response: &Response{
MetaOpts: &MetaOptions{
BucketOwner: "root",
},
},
err: s3err.GetAPIError(s3err.ErrAccessDenied),
},
},
{
name: "backend returns error",
input: testInput{
locals: defaultLocals,
beRes: legalHold,
beErr: s3err.GetAPIError(s3err.ErrNoSuchBucket),
},
output: testOutput{
response: &Response{
Data: emptyLegalHold,
MetaOpts: &MetaOptions{
BucketOwner: "root",
},
},
err: s3err.GetAPIError(s3err.ErrNoSuchBucket),
},
},
{
name: "successful response",
input: testInput{
locals: defaultLocals,
beRes: &status,
},
output: testOutput{
response: &Response{
Data: &s3response.GetObjectLegalHoldResult{
Status: types.ObjectLockLegalHoldStatusOn,
},
MetaOpts: &MetaOptions{
BucketOwner: "root",
},
},
},
},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
be := &BackendMock{
GetObjectLegalHoldFunc: func(contextMoqParam context.Context, bucket, object, versionId string) (*bool, error) {
return tt.input.beRes.(*bool), tt.input.beErr
},
GetBucketPolicyFunc: func(contextMoqParam context.Context, bucket string) ([]byte, error) {
return nil, s3err.GetAPIError(s3err.ErrAccessDenied)
},
}
ctrl := S3ApiController{
be: be,
}
testController(
t,
ctrl.GetObjectLegalHold,
tt.output.response,
tt.output.err,
ctxInputs{
locals: tt.input.locals,
body: tt.input.body,
})
})
}
}
func TestS3ApiController_GetObjectAcl(t *testing.T) {
var emptyRes *s3.GetObjectAclOutput
tests := []struct {
name string
input testInput
output testOutput
}{
{
name: "verify access fails",
input: testInput{
locals: accessDeniedLocals,
},
output: testOutput{
response: &Response{
MetaOpts: &MetaOptions{
BucketOwner: "root",
},
},
err: s3err.GetAPIError(s3err.ErrAccessDenied),
},
},
{
name: "backend returns error",
input: testInput{
locals: defaultLocals,
beRes: emptyRes,
beErr: s3err.GetAPIError(s3err.ErrNotImplemented),
},
output: testOutput{
response: &Response{
Data: emptyRes,
MetaOpts: &MetaOptions{
BucketOwner: "root",
},
},
err: s3err.GetAPIError(s3err.ErrNotImplemented),
},
},
{
name: "successful response",
input: testInput{
locals: defaultLocals,
beRes: &s3.GetObjectAclOutput{
Owner: &types.Owner{
ID: utils.GetStringPtr("something"),
},
},
},
output: testOutput{
response: &Response{
Data: &s3.GetObjectAclOutput{
Owner: &types.Owner{
ID: utils.GetStringPtr("something"),
},
},
MetaOpts: &MetaOptions{
BucketOwner: "root",
},
},
},
},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
be := &BackendMock{
GetObjectAclFunc: func(contextMoqParam context.Context, getObjectAclInput *s3.GetObjectAclInput) (*s3.GetObjectAclOutput, error) {
return tt.input.beRes.(*s3.GetObjectAclOutput), tt.input.beErr
},
GetBucketPolicyFunc: func(contextMoqParam context.Context, bucket string) ([]byte, error) {
return nil, s3err.GetAPIError(s3err.ErrAccessDenied)
},
}
ctrl := S3ApiController{
be: be,
}
testController(
t,
ctrl.GetObjectAcl,
tt.output.response,
tt.output.err,
ctxInputs{
locals: tt.input.locals,
body: tt.input.body,
})
})
}
}
func TestS3ApiController_ListParts(t *testing.T) {
listPartsResult := s3response.ListPartsResult{
Bucket: "my-bucket",
Key: "obj",
IsTruncated: false,
Parts: []s3response.Part{
{ETag: "ETag"},
},
}
tests := []struct {
name string
input testInput
output testOutput
}{
{
name: "verify access fails",
input: testInput{
locals: accessDeniedLocals,
},
output: testOutput{
response: &Response{
MetaOpts: &MetaOptions{
BucketOwner: "root",
},
},
err: s3err.GetAPIError(s3err.ErrAccessDenied),
},
},
{
name: "invalid part number marker",
input: testInput{
locals: defaultLocals,
queries: map[string]string{
"part-number-marker": "-1",
},
},
output: testOutput{
response: &Response{
MetaOpts: &MetaOptions{
BucketOwner: "root",
},
},
err: s3err.GetAPIError(s3err.ErrInvalidPartNumberMarker),
},
},
{
name: "invalid max parts",
input: testInput{
locals: defaultLocals,
queries: map[string]string{
"max-parts": "-1",
},
},
output: testOutput{
response: &Response{
MetaOpts: &MetaOptions{
BucketOwner: "root",
},
},
err: s3err.GetAPIError(s3err.ErrInvalidMaxParts),
},
},
{
name: "backend returns error",
input: testInput{
locals: defaultLocals,
beRes: s3response.ListPartsResult{},
beErr: s3err.GetAPIError(s3err.ErrNoSuchBucket),
},
output: testOutput{
response: &Response{
Data: s3response.ListPartsResult{},
MetaOpts: &MetaOptions{
BucketOwner: "root",
},
},
err: s3err.GetAPIError(s3err.ErrNoSuchBucket),
},
},
{
name: "successful response",
input: testInput{
locals: defaultLocals,
beRes: listPartsResult,
},
output: testOutput{
response: &Response{
Data: listPartsResult,
MetaOpts: &MetaOptions{
BucketOwner: "root",
},
},
},
},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
be := &BackendMock{
ListPartsFunc: func(contextMoqParam context.Context, listPartsInput *s3.ListPartsInput) (s3response.ListPartsResult, error) {
return tt.input.beRes.(s3response.ListPartsResult), tt.input.beErr
},
GetBucketPolicyFunc: func(contextMoqParam context.Context, bucket string) ([]byte, error) {
return nil, s3err.GetAPIError(s3err.ErrAccessDenied)
},
}
ctrl := S3ApiController{
be: be,
}
testController(
t,
ctrl.ListParts,
tt.output.response,
tt.output.err,
ctxInputs{
locals: tt.input.locals,
body: tt.input.body,
queries: tt.input.queries,
})
})
}
}
func TestS3ApiController_GetObjectAttributes(t *testing.T) {
delMarker, lastModTime, etag := true, time.Now(), "ETag"
timeFormatted := lastModTime.UTC().Format(iso8601TimeFormatExtended)
validRes := s3response.GetObjectAttributesResponse{
DeleteMarker: &delMarker,
LastModified: &lastModTime,
VersionId: utils.GetStringPtr("versionId"),
ETag: &etag,
}
tests := []struct {
name string
input testInput
output testOutput
}{
{
name: "verify access fails",
input: testInput{
locals: accessDeniedLocals,
},
output: testOutput{
response: &Response{
MetaOpts: &MetaOptions{
BucketOwner: "root",
},
},
err: s3err.GetAPIError(s3err.ErrAccessDenied),
},
},
{
name: "invalid max parts",
input: testInput{
locals: defaultLocals,
headers: map[string]string{
"X-Amz-Max-Parts": "-1",
},
},
output: testOutput{
response: &Response{
MetaOpts: &MetaOptions{
BucketOwner: "root",
},
},
err: s3err.GetAPIError(s3err.ErrInvalidMaxParts),
},
},
{
name: "invalid object attributes",
input: testInput{
locals: defaultLocals,
headers: map[string]string{
"X-Amz-Object-Attributes": "invalid_attribute",
},
},
output: testOutput{
response: &Response{
MetaOpts: &MetaOptions{
BucketOwner: "root",
},
},
err: s3err.GetAPIError(s3err.ErrInvalidObjectAttributes),
},
},
{
name: "backend returns error",
input: testInput{
locals: defaultLocals,
beRes: validRes,
beErr: s3err.GetAPIError(s3err.ErrNoSuchBucket),
headers: map[string]string{
"X-Amz-Object-Attributes": "ETag",
},
},
output: testOutput{
response: &Response{
Headers: map[string]*string{
"x-amz-version-id": utils.GetStringPtr("versionId"),
"x-amz-delete-marker": utils.GetStringPtr("true"),
},
Data: nil,
MetaOpts: &MetaOptions{
BucketOwner: "root",
},
},
err: s3err.GetAPIError(s3err.ErrNoSuchBucket),
},
},
{
name: "successful response",
input: testInput{
locals: defaultLocals,
beRes: validRes,
headers: map[string]string{
"X-Amz-Object-Attributes": "ETag",
},
},
output: testOutput{
response: &Response{
Headers: map[string]*string{
"x-amz-version-id": utils.GetStringPtr("versionId"),
"x-amz-delete-marker": utils.GetStringPtr("true"),
"Last-Modified": &timeFormatted,
},
Data: s3response.GetObjectAttributesResponse{
ETag: &etag,
},
MetaOpts: &MetaOptions{
BucketOwner: "root",
},
},
},
},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
be := &BackendMock{
GetObjectAttributesFunc: func(contextMoqParam context.Context, getObjectAttributesInput *s3.GetObjectAttributesInput) (s3response.GetObjectAttributesResponse, error) {
return tt.input.beRes.(s3response.GetObjectAttributesResponse), tt.input.beErr
},
GetBucketPolicyFunc: func(contextMoqParam context.Context, bucket string) ([]byte, error) {
return nil, s3err.GetAPIError(s3err.ErrAccessDenied)
},
}
ctrl := S3ApiController{
be: be,
}
testController(
t,
ctrl.GetObjectAttributes,
tt.output.response,
tt.output.err,
ctxInputs{
locals: tt.input.locals,
body: tt.input.body,
headers: tt.input.headers,
})
})
}
}
func TestS3ApiController_GetObject(t *testing.T) {
tm := time.Now()
cLength := int64(11)
rdr := io.NopCloser(strings.NewReader("hello world"))
delMarker := true
tests := []struct {
name string
input testInput
output testOutput
}{
{
name: "verify access fails",
input: testInput{
locals: accessDeniedLocals,
},
output: testOutput{
response: &Response{
MetaOpts: &MetaOptions{
BucketOwner: "root",
},
},
err: s3err.GetAPIError(s3err.ErrAccessDenied),
},
},
{
name: "invalid checksum mode",
input: testInput{
locals: defaultLocals,
headers: map[string]string{
"x-amz-checksum-mode": "invalid_checksum_mode",
},
},
output: testOutput{
response: &Response{
MetaOpts: &MetaOptions{
BucketOwner: "root",
},
},
err: s3err.GetInvalidChecksumHeaderErr("x-amz-checksum-mode"),
},
},
{
name: "invalid part number",
input: testInput{
locals: defaultLocals,
queries: map[string]string{
"partNumber": "-2",
},
},
output: testOutput{
response: &Response{
MetaOpts: &MetaOptions{
BucketOwner: "root",
},
},
err: s3err.GetAPIError(s3err.ErrInvalidPartNumber),
},
},
{
name: "backend returns error",
input: testInput{
locals: defaultLocals,
beErr: s3err.GetAPIError(s3err.ErrInvalidAccessKeyID),
beRes: &s3.GetObjectOutput{
DeleteMarker: &delMarker,
LastModified: &tm,
},
},
output: testOutput{
response: &Response{
Headers: map[string]*string{
"x-amz-delete-marker": utils.GetStringPtr("true"),
"Last-Modified": utils.GetStringPtr(tm.UTC().Format(timefmt)),
},
MetaOpts: &MetaOptions{
BucketOwner: "root",
},
},
err: s3err.GetAPIError(s3err.ErrInvalidAccessKeyID),
},
},
{
name: "successful response",
input: testInput{
headers: map[string]string{
"Range": "100-200",
},
queries: map[string]string{
"versionId": "versionId",
},
locals: defaultLocals,
beRes: &s3.GetObjectOutput{
ETag: utils.GetStringPtr("ETag"),
ContentType: utils.GetStringPtr("application/xml"),
ContentLength: &cLength,
Body: rdr,
},
},
output: testOutput{
response: &Response{
Headers: map[string]*string{
"ETag": utils.GetStringPtr("ETag"),
"x-amz-restore": nil,
"accept-ranges": nil,
"Content-Range": nil,
"Content-Disposition": nil,
"Content-Encoding": nil,
"Content-Language": nil,
"Cache-Control": nil,
"Expires": nil,
"x-amz-checksum-crc32": nil,
"x-amz-checksum-crc64nvme": nil,
"x-amz-checksum-crc32c": nil,
"x-amz-checksum-sha1": nil,
"x-amz-checksum-sha256": nil,
"x-amz-version-id": nil,
"x-amz-mp-parts-count": nil,
"x-amz-object-lock-mode": nil,
"x-amz-object-lock-legal-hold": nil,
"x-amz-storage-class": nil,
"x-amz-checksum-type": nil,
"x-amz-object-lock-retain-until-date": nil,
"Last-Modified": nil,
"x-amz-tagging-count": nil,
"Content-Type": utils.GetStringPtr("application/xml"),
"Content-Length": utils.GetStringPtr("11"),
},
MetaOpts: &MetaOptions{
BucketOwner: "root",
Status: http.StatusPartialContent,
ContentLength: cLength,
},
},
},
},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
be := &BackendMock{
GetObjectFunc: func(contextMoqParam context.Context, getObjectInput *s3.GetObjectInput) (*s3.GetObjectOutput, error) {
return tt.input.beRes.(*s3.GetObjectOutput), tt.input.beErr
},
GetBucketPolicyFunc: func(contextMoqParam context.Context, bucket string) ([]byte, error) {
return nil, s3err.GetAPIError(s3err.ErrAccessDenied)
},
}
ctrl := S3ApiController{
be: be,
}
testController(
t,
ctrl.GetObject,
tt.output.response,
tt.output.err,
ctxInputs{
locals: tt.input.locals,
headers: tt.input.headers,
queries: tt.input.queries,
})
})
}
}


@@ -0,0 +1,158 @@
// Copyright 2023 Versity Software
// This file is licensed under the Apache License, Version 2.0
// (the "License"); you may not use this file except in compliance
// with the License. You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing,
// software distributed under the License is distributed on an
// "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
// KIND, either express or implied. See the License for the
// specific language governing permissions and limitations
// under the License.
package controllers
import (
"fmt"
"strings"
"time"
"github.com/aws/aws-sdk-go-v2/service/s3"
"github.com/aws/aws-sdk-go-v2/service/s3/types"
"github.com/gofiber/fiber/v2"
"github.com/versity/versitygw/auth"
"github.com/versity/versitygw/debuglogger"
"github.com/versity/versitygw/s3api/utils"
"github.com/versity/versitygw/s3err"
)
func (c S3ApiController) HeadObject(ctx *fiber.Ctx) (*Response, error) {
// context locals
acct := utils.ContextKeyAccount.Get(ctx).(auth.Account)
isRoot := utils.ContextKeyIsRoot.Get(ctx).(bool)
parsedAcl := utils.ContextKeyParsedAcl.Get(ctx).(auth.ACL)
isPublicBucket := utils.ContextKeyPublicBucket.IsSet(ctx)
// url values
bucket := ctx.Params("bucket")
partNumberQuery := int32(ctx.QueryInt("partNumber", -1))
versionId := ctx.Query("versionId")
objRange := ctx.Get("Range")
key := strings.TrimPrefix(ctx.Path(), fmt.Sprintf("/%s/", bucket))
action := auth.GetObjectAction
if ctx.Request().URI().QueryArgs().Has("versionId") {
action = auth.GetObjectVersionAction
}
err := auth.VerifyAccess(ctx.Context(), c.be,
auth.AccessOptions{
Readonly: c.readonly,
Acl: parsedAcl,
AclPermission: auth.PermissionRead,
IsRoot: isRoot,
Acc: acct,
Bucket: bucket,
Object: key,
Action: action,
IsPublicRequest: isPublicBucket,
})
if err != nil {
return &Response{
MetaOpts: &MetaOptions{
BucketOwner: parsedAcl.Owner,
},
}, err
}
var partNumber *int32
if ctx.Request().URI().QueryArgs().Has("partNumber") {
if partNumberQuery < minPartNumber || partNumberQuery > maxPartNumber {
debuglogger.Logf("invalid part number: %d", partNumberQuery)
return &Response{
MetaOpts: &MetaOptions{
BucketOwner: parsedAcl.Owner,
},
}, s3err.GetAPIError(s3err.ErrInvalidPartNumber)
}
partNumber = &partNumberQuery
}
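// the only valid checksum mode value is ENABLED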
checksumMode := types.ChecksumMode(strings.ToUpper(ctx.Get("x-amz-checksum-mode")))
if checksumMode != "" && checksumMode != types.ChecksumModeEnabled {
debuglogger.Logf("invalid x-amz-checksum-mode header value: %v", checksumMode)
return &Response{
MetaOpts: &MetaOptions{
BucketOwner: parsedAcl.Owner,
},
}, s3err.GetInvalidChecksumHeaderErr("x-amz-checksum-mode")
}
conditionalHeaders := utils.ParsePreconditionHeaders(ctx)
res, err := c.be.HeadObject(ctx.Context(),
&s3.HeadObjectInput{
Bucket: &bucket,
Key: &key,
PartNumber: partNumber,
VersionId: &versionId,
ChecksumMode: checksumMode,
Range: &objRange,
IfMatch: conditionalHeaders.IfMatch,
IfNoneMatch: conditionalHeaders.IfNoneMatch,
IfModifiedSince: conditionalHeaders.IfModSince,
IfUnmodifiedSince: conditionalHeaders.IfUnmodeSince,
})
if err != nil {
var headers map[string]*string
if res != nil {
headers = map[string]*string{
"x-amz-delete-marker": utils.GetStringPtr("true"),
"Last-Modified": utils.GetStringPtr(res.LastModified.UTC().Format(timefmt)),
}
}
return &Response{
Headers: headers,
MetaOpts: &MetaOptions{
BucketOwner: parsedAcl.Owner,
},
}, err
}
// Set the metadata headers
utils.SetMetaHeaders(ctx, res.Metadata)
return &Response{
Headers: map[string]*string{
"ETag": res.ETag,
"x-amz-restore": res.Restore,
"accept-ranges": res.AcceptRanges,
"Content-Range": res.ContentRange,
"Content-Disposition": res.ContentDisposition,
"Content-Encoding": res.ContentEncoding,
"Content-Language": res.ContentLanguage,
"Cache-Control": res.CacheControl,
"Expires": res.ExpiresString,
"x-amz-checksum-crc32": res.ChecksumCRC32,
"x-amz-checksum-crc64nvme": res.ChecksumCRC64NVME,
"x-amz-checksum-crc32c": res.ChecksumCRC32C,
"x-amz-checksum-sha1": res.ChecksumSHA1,
"x-amz-checksum-sha256": res.ChecksumSHA256,
"Content-Type": res.ContentType,
"x-amz-version-id": res.VersionId,
"Content-Length": utils.ConvertPtrToStringPtr(res.ContentLength),
"x-amz-mp-parts-count": utils.ConvertPtrToStringPtr(res.PartsCount),
"x-amz-object-lock-mode": utils.ConvertToStringPtr(res.ObjectLockMode),
"x-amz-object-lock-legal-hold": utils.ConvertToStringPtr(res.ObjectLockLegalHoldStatus),
"x-amz-storage-class": utils.ConvertToStringPtr(res.StorageClass),
"x-amz-checksum-type": utils.ConvertToStringPtr(res.ChecksumType),
"x-amz-object-lock-retain-until-date": utils.FormatDatePtrToString(res.ObjectLockRetainUntilDate, time.RFC3339),
"Last-Modified": utils.FormatDatePtrToString(res.LastModified, timefmt),
},
MetaOpts: &MetaOptions{
BucketOwner: parsedAcl.Owner,
},
}, nil
}
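// Illustrative client-side sketch (separate example, not part of the gateway
// source above): one way the partNumber and x-amz-checksum-mode inputs handled
// by HeadObject might be exercised with the AWS SDK for Go v2. The endpoint,
// credentials, bucket and key names are placeholder assumptions.
package main

import (
	"context"
	"fmt"
	"log"

	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/config"
	"github.com/aws/aws-sdk-go-v2/service/s3"
	"github.com/aws/aws-sdk-go-v2/service/s3/types"
)

func main() {
	// Credentials come from the environment; the endpoint is an assumed local gateway address.
	cfg, err := config.LoadDefaultConfig(context.TODO())
	if err != nil {
		log.Fatal(err)
	}
	client := s3.NewFromConfig(cfg, func(o *s3.Options) {
		o.BaseEndpoint = aws.String("http://localhost:7070") // hypothetical versitygw listener
		o.UsePathStyle = true
	})
	// HEAD a single part of a multipart object and ask for its stored checksums.
	out, err := client.HeadObject(context.TODO(), &s3.HeadObjectInput{
		Bucket:       aws.String("my-bucket"),
		Key:          aws.String("my-key"),
		PartNumber:   aws.Int32(1),
		ChecksumMode: types.ChecksumModeEnabled,
	})
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(aws.ToString(out.ETag), aws.ToInt64(out.ContentLength), aws.ToInt32(out.PartsCount))
}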


@@ -0,0 +1,187 @@
// Copyright 2023 Versity Software
// This file is licensed under the Apache License, Version 2.0
// (the "License"); you may not use this file except in compliance
// with the License. You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing,
// software distributed under the License is distributed on an
// "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
// KIND, either express or implied. See the License for the
// specific language governing permissions and limitations
// under the License.
package controllers
import (
"context"
"testing"
"time"
"github.com/aws/aws-sdk-go-v2/service/s3"
"github.com/versity/versitygw/s3api/utils"
"github.com/versity/versitygw/s3err"
)
func TestS3ApiController_HeadObject(t *testing.T) {
tm := time.Now()
cLength := int64(100)
failingBeRes := &s3.HeadObjectOutput{
LastModified: &tm,
}
tests := []struct {
name string
input testInput
output testOutput
}{
{
name: "verify access fails",
input: testInput{
locals: accessDeniedLocals,
},
output: testOutput{
response: &Response{
MetaOpts: &MetaOptions{
BucketOwner: "root",
},
},
err: s3err.GetAPIError(s3err.ErrAccessDenied),
},
},
{
name: "invalid part number",
input: testInput{
locals: defaultLocals,
queries: map[string]string{
"partNumber": "-4",
"versionId": "id",
},
},
output: testOutput{
response: &Response{
MetaOpts: &MetaOptions{
BucketOwner: "root",
},
},
err: s3err.GetAPIError(s3err.ErrInvalidPartNumber),
},
},
{
name: "invalid checksum mode",
input: testInput{
locals: defaultLocals,
headers: map[string]string{
"x-amz-checksum-mode": "invalid_checksum_mode",
},
},
output: testOutput{
response: &Response{
MetaOpts: &MetaOptions{
BucketOwner: "root",
},
},
err: s3err.GetInvalidChecksumHeaderErr("x-amz-checksum-mode"),
},
},
{
name: "backend returns error",
input: testInput{
locals: defaultLocals,
beErr: s3err.GetAPIError(s3err.ErrInvalidAccessKeyID),
beRes: failingBeRes,
},
output: testOutput{
response: &Response{
Headers: map[string]*string{
"x-amz-delete-marker": utils.GetStringPtr("true"),
"Last-Modified": utils.GetStringPtr(tm.UTC().Format(timefmt)),
},
MetaOpts: &MetaOptions{
BucketOwner: "root",
},
},
err: s3err.GetAPIError(s3err.ErrInvalidAccessKeyID),
},
},
{
name: "successful response",
input: testInput{
queries: map[string]string{
"partNumber": "4",
},
locals: defaultLocals,
headers: map[string]string{
"x-amz-checksum-mode": "enabled",
},
beRes: &s3.HeadObjectOutput{
ETag: utils.GetStringPtr("ETag"),
ContentType: utils.GetStringPtr("application/xml"),
ContentLength: &cLength,
},
},
output: testOutput{
response: &Response{
Headers: map[string]*string{
"ETag": utils.GetStringPtr("ETag"),
"x-amz-restore": nil,
"accept-ranges": nil,
"Content-Range": nil,
"Content-Disposition": nil,
"Content-Encoding": nil,
"Content-Language": nil,
"Cache-Control": nil,
"Expires": nil,
"x-amz-checksum-crc32": nil,
"x-amz-checksum-crc64nvme": nil,
"x-amz-checksum-crc32c": nil,
"x-amz-checksum-sha1": nil,
"x-amz-checksum-sha256": nil,
"x-amz-version-id": nil,
"x-amz-mp-parts-count": nil,
"x-amz-object-lock-mode": nil,
"x-amz-object-lock-legal-hold": nil,
"x-amz-storage-class": nil,
"x-amz-checksum-type": nil,
"x-amz-object-lock-retain-until-date": nil,
"Last-Modified": nil,
"Content-Type": utils.GetStringPtr("application/xml"),
"Content-Length": utils.GetStringPtr("100"),
},
MetaOpts: &MetaOptions{
BucketOwner: "root",
},
},
},
},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
be := &BackendMock{
HeadObjectFunc: func(contextMoqParam context.Context, headObjectInput *s3.HeadObjectInput) (*s3.HeadObjectOutput, error) {
return tt.input.beRes.(*s3.HeadObjectOutput), tt.input.beErr
},
GetBucketPolicyFunc: func(contextMoqParam context.Context, bucket string) ([]byte, error) {
return nil, s3err.GetAPIError(s3err.ErrAccessDenied)
},
}
ctrl := S3ApiController{
be: be,
}
testController(
t,
ctrl.HeadObject,
tt.output.response,
tt.output.err,
ctxInputs{
locals: tt.input.locals,
queries: tt.input.queries,
headers: tt.input.headers,
})
})
}
}


@@ -0,0 +1,358 @@
// Copyright 2023 Versity Software
// This file is licensed under the Apache License, Version 2.0
// (the "License"); you may not use this file except in compliance
// with the License. You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing,
// software distributed under the License is distributed on an
// "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
// KIND, either express or implied. See the License for the
// specific language governing permissions and limitations
// under the License.
package controllers
import (
"encoding/xml"
"fmt"
"strconv"
"strings"
"github.com/aws/aws-sdk-go-v2/service/s3"
"github.com/aws/aws-sdk-go-v2/service/s3/types"
"github.com/gofiber/fiber/v2"
"github.com/versity/versitygw/auth"
"github.com/versity/versitygw/debuglogger"
"github.com/versity/versitygw/s3api/utils"
"github.com/versity/versitygw/s3err"
"github.com/versity/versitygw/s3event"
"github.com/versity/versitygw/s3response"
)
func (c S3ApiController) RestoreObject(ctx *fiber.Ctx) (*Response, error) {
bucket := ctx.Params("bucket")
key := strings.TrimPrefix(ctx.Path(), fmt.Sprintf("/%s/", bucket))
acct := utils.ContextKeyAccount.Get(ctx).(auth.Account)
isRoot := utils.ContextKeyIsRoot.Get(ctx).(bool)
isBucketPublic := utils.ContextKeyPublicBucket.IsSet(ctx)
parsedAcl := utils.ContextKeyParsedAcl.Get(ctx).(auth.ACL)
err := auth.VerifyAccess(ctx.Context(), c.be,
auth.AccessOptions{
Readonly: c.readonly,
Acl: parsedAcl,
AclPermission: auth.PermissionWrite,
IsRoot: isRoot,
Acc: acct,
Bucket: bucket,
Object: key,
Action: auth.RestoreObjectAction,
IsPublicRequest: isBucketPublic,
})
if err != nil {
return &Response{
MetaOpts: &MetaOptions{
BucketOwner: parsedAcl.Owner,
},
}, err
}
var restoreRequest types.RestoreRequest
if err := xml.Unmarshal(ctx.Body(), &restoreRequest); err != nil {
debuglogger.Logf("failed to parse the request body: %v", err)
return &Response{
MetaOpts: &MetaOptions{
BucketOwner: parsedAcl.Owner,
},
}, s3err.GetAPIError(s3err.ErrMalformedXML)
}
err = c.be.RestoreObject(ctx.Context(), &s3.RestoreObjectInput{
Bucket: &bucket,
Key: &key,
RestoreRequest: &restoreRequest,
})
return &Response{
MetaOpts: &MetaOptions{
BucketOwner: parsedAcl.Owner,
EventName: s3event.EventObjectRestoreCompleted,
},
}, err
}
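// A rough client-side companion to the RestoreObject handler above, kept as a
// separate example rather than gateway code; it issues the call whose XML body
// the handler unmarshals. The client is assumed to be configured against the
// gateway as in the earlier HeadObject sketch; bucket, key, days and tier are
// assumptions.
package main

import (
	"context"

	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/service/s3"
	"github.com/aws/aws-sdk-go-v2/service/s3/types"
)

// restoreArchivedObject asks for a one-day standard-tier restore of an archived key.
func restoreArchivedObject(ctx context.Context, client *s3.Client) error {
	_, err := client.RestoreObject(ctx, &s3.RestoreObjectInput{
		Bucket: aws.String("my-bucket"),
		Key:    aws.String("archived-key"),
		RestoreRequest: &types.RestoreRequest{
			Days: aws.Int32(1),
			GlacierJobParameters: &types.GlacierJobParameters{
				Tier: types.TierStandard,
			},
		},
	})
	return err
}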
func (c S3ApiController) SelectObjectContent(ctx *fiber.Ctx) (*Response, error) {
bucket := ctx.Params("bucket")
key := strings.TrimPrefix(ctx.Path(), fmt.Sprintf("/%s/", bucket))
acct := utils.ContextKeyAccount.Get(ctx).(auth.Account)
isRoot := utils.ContextKeyIsRoot.Get(ctx).(bool)
isBucketPublic := utils.ContextKeyPublicBucket.IsSet(ctx)
parsedAcl := utils.ContextKeyParsedAcl.Get(ctx).(auth.ACL)
err := auth.VerifyAccess(ctx.Context(), c.be,
auth.AccessOptions{
Readonly: c.readonly,
Acl: parsedAcl,
AclPermission: auth.PermissionRead,
IsRoot: isRoot,
Acc: acct,
Bucket: bucket,
Object: key,
Action: auth.GetObjectAction,
IsPublicRequest: isBucketPublic,
})
if err != nil {
return &Response{
MetaOpts: &MetaOptions{
BucketOwner: parsedAcl.Owner,
},
}, err
}
var payload s3response.SelectObjectContentPayload
err = xml.Unmarshal(ctx.Body(), &payload)
if err != nil {
debuglogger.Logf("error unmarshalling select object content: %v", err)
return &Response{
MetaOpts: &MetaOptions{
BucketOwner: parsedAcl.Owner,
},
}, s3err.GetAPIError(s3err.ErrMalformedXML)
}
sw := c.be.SelectObjectContent(ctx.Context(),
&s3.SelectObjectContentInput{
Bucket: &bucket,
Key: &key,
Expression: payload.Expression,
ExpressionType: payload.ExpressionType,
InputSerialization: payload.InputSerialization,
OutputSerialization: payload.OutputSerialization,
RequestProgress: payload.RequestProgress,
ScanRange: payload.ScanRange,
})
ctx.Context().SetBodyStreamWriter(sw)
return &Response{
MetaOpts: &MetaOptions{
BucketOwner: parsedAcl.Owner,
},
}, nil
}
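// A minimal client-side sketch of driving the SelectObjectContent stream
// handled above, assuming a CSV object and a simple SQL expression; it is
// illustrative only and not part of the gateway source. Bucket, key, query and
// serialization settings are placeholder assumptions.
package main

import (
	"context"
	"fmt"

	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/service/s3"
	"github.com/aws/aws-sdk-go-v2/service/s3/types"
)

// selectCSVRows streams the rows matching the expression and prints the record payloads.
func selectCSVRows(ctx context.Context, client *s3.Client) error {
	out, err := client.SelectObjectContent(ctx, &s3.SelectObjectContentInput{
		Bucket:         aws.String("my-bucket"),
		Key:            aws.String("data.csv"),
		Expression:     aws.String("SELECT * FROM S3Object s LIMIT 5"),
		ExpressionType: types.ExpressionTypeSql,
		InputSerialization: &types.InputSerialization{
			CSV: &types.CSVInput{FileHeaderInfo: types.FileHeaderInfoUse},
		},
		OutputSerialization: &types.OutputSerialization{
			CSV: &types.CSVOutput{},
		},
	})
	if err != nil {
		return err
	}
	stream := out.GetStream()
	defer stream.Close()
	for event := range stream.Events() {
		if records, ok := event.(*types.SelectObjectContentEventStreamMemberRecords); ok {
			fmt.Print(string(records.Value.Payload))
		}
	}
	return stream.Err()
}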
func (c S3ApiController) CreateMultipartUpload(ctx *fiber.Ctx) (*Response, error) {
bucket := ctx.Params("bucket")
key := strings.TrimPrefix(ctx.Path(), fmt.Sprintf("/%s/", bucket))
contentType := ctx.Get("Content-Type")
contentDisposition := ctx.Get("Content-Disposition")
contentLanguage := ctx.Get("Content-Language")
cacheControl := ctx.Get("Cache-Control")
contentEncoding := ctx.Get("Content-Encoding")
tagging := ctx.Get("X-Amz-Tagging")
expires := ctx.Get("Expires")
metadata := utils.GetUserMetaData(&ctx.Request().Header)
// context locals
acct := utils.ContextKeyAccount.Get(ctx).(auth.Account)
isRoot := utils.ContextKeyIsRoot.Get(ctx).(bool)
parsedAcl := utils.ContextKeyParsedAcl.Get(ctx).(auth.ACL)
err := auth.VerifyAccess(ctx.Context(), c.be,
auth.AccessOptions{
Readonly: c.readonly,
Acl: parsedAcl,
AclPermission: auth.PermissionWrite,
IsRoot: isRoot,
Acc: acct,
Bucket: bucket,
Object: key,
Action: auth.PutObjectAction,
})
if err != nil {
return &Response{
MetaOpts: &MetaOptions{
BucketOwner: parsedAcl.Owner,
},
}, err
}
objLockState, err := utils.ParsObjectLockHdrs(ctx)
if err != nil {
return &Response{
MetaOpts: &MetaOptions{
BucketOwner: parsedAcl.Owner,
},
}, err
}
checksumAlgorithm, checksumType, err := utils.ParseCreateMpChecksumHeaders(ctx)
if err != nil {
return &Response{
MetaOpts: &MetaOptions{
BucketOwner: parsedAcl.Owner,
},
}, err
}
res, err := c.be.CreateMultipartUpload(ctx.Context(),
s3response.CreateMultipartUploadInput{
Bucket: &bucket,
Key: &key,
Tagging: &tagging,
ContentType: &contentType,
ContentEncoding: &contentEncoding,
ContentDisposition: &contentDisposition,
ContentLanguage: &contentLanguage,
CacheControl: &cacheControl,
Expires: &expires,
ObjectLockRetainUntilDate: &objLockState.RetainUntilDate,
ObjectLockMode: objLockState.ObjectLockMode,
ObjectLockLegalHoldStatus: objLockState.LegalHoldStatus,
Metadata: metadata,
ChecksumAlgorithm: checksumAlgorithm,
ChecksumType: checksumType,
})
var headers map[string]*string
if err == nil {
headers = map[string]*string{
"x-amz-checksum-algorithm": utils.ConvertToStringPtr(checksumAlgorithm),
"x-amz-checksum-type": utils.ConvertToStringPtr(checksumType),
}
}
return &Response{
Headers: headers,
Data: res,
MetaOpts: &MetaOptions{
BucketOwner: parsedAcl.Owner,
},
}, err
}
func (c S3ApiController) CompleteMultipartUpload(ctx *fiber.Ctx) (*Response, error) {
bucket := ctx.Params("bucket")
key := strings.TrimPrefix(ctx.Path(), fmt.Sprintf("/%s/", bucket))
uploadId := ctx.Query("uploadId")
mpuObjSizeHdr := ctx.Get("X-Amz-Mp-Object-Size")
checksumType := types.ChecksumType(strings.ToUpper(ctx.Get("x-amz-checksum-type")))
// context locals
acct := utils.ContextKeyAccount.Get(ctx).(auth.Account)
isRoot := utils.ContextKeyIsRoot.Get(ctx).(bool)
isBucketPublic := utils.ContextKeyPublicBucket.IsSet(ctx)
parsedAcl := utils.ContextKeyParsedAcl.Get(ctx).(auth.ACL)
err := auth.VerifyAccess(ctx.Context(), c.be,
auth.AccessOptions{
Readonly: c.readonly,
Acl: parsedAcl,
AclPermission: auth.PermissionWrite,
IsRoot: isRoot,
Acc: acct,
Bucket: bucket,
Object: key,
Action: auth.PutObjectAction,
IsPublicRequest: isBucketPublic,
})
if err != nil {
return &Response{
MetaOpts: &MetaOptions{
BucketOwner: parsedAcl.Owner,
},
}, err
}
var body s3response.CompleteMultipartUploadRequestBody
err = xml.Unmarshal(ctx.Body(), &body)
if err != nil {
debuglogger.Logf("error unmarshalling complete multipart upload: %v", err)
return &Response{
MetaOpts: &MetaOptions{
BucketOwner: parsedAcl.Owner,
},
}, s3err.GetAPIError(s3err.ErrMalformedXML)
}
if len(body.Parts) == 0 {
debuglogger.Logf("empty parts provided for complete multipart upload")
return &Response{
MetaOpts: &MetaOptions{
BucketOwner: parsedAcl.Owner,
},
}, s3err.GetAPIError(s3err.ErrEmptyParts)
}
var mpuObjectSize *int64
if mpuObjSizeHdr != "" {
val, err := strconv.ParseInt(mpuObjSizeHdr, 10, 64)
if err != nil {
debuglogger.Logf("invalid value for 'x-amz-mp-object-size' header: %v", err)
return &Response{
MetaOpts: &MetaOptions{
BucketOwner: parsedAcl.Owner,
},
}, s3err.GetInvalidMpObjectSizeErr(mpuObjSizeHdr)
}
if val < 0 {
debuglogger.Logf("value for 'x-amz-mp-object-size' header is less than 0: %v", val)
return &Response{
MetaOpts: &MetaOptions{
BucketOwner: parsedAcl.Owner,
},
}, s3err.GetNegatvieMpObjectSizeErr(val)
}
mpuObjectSize = &val
}
checksums, err := utils.ParseChecksumHeaders(ctx)
if err != nil {
return &Response{
MetaOpts: &MetaOptions{
BucketOwner: parsedAcl.Owner,
},
}, err
}
err = utils.IsChecksumTypeValid(checksumType)
if err != nil {
return &Response{
MetaOpts: &MetaOptions{
BucketOwner: parsedAcl.Owner,
},
}, err
}
ifMatch, ifNoneMatch := utils.ParsePreconditionMatchHeaders(ctx)
res, versid, err := c.be.CompleteMultipartUpload(ctx.Context(),
&s3.CompleteMultipartUploadInput{
Bucket: &bucket,
Key: &key,
UploadId: &uploadId,
MultipartUpload: &types.CompletedMultipartUpload{
Parts: body.Parts,
},
MpuObjectSize: mpuObjectSize,
ChecksumCRC32: utils.GetStringPtr(checksums[types.ChecksumAlgorithmCrc32]),
ChecksumCRC32C: utils.GetStringPtr(checksums[types.ChecksumAlgorithmCrc32c]),
ChecksumSHA1: utils.GetStringPtr(checksums[types.ChecksumAlgorithmSha1]),
ChecksumSHA256: utils.GetStringPtr(checksums[types.ChecksumAlgorithmSha256]),
ChecksumCRC64NVME: utils.GetStringPtr(checksums[types.ChecksumAlgorithmCrc64nvme]),
ChecksumType: checksumType,
IfMatch: ifMatch,
IfNoneMatch: ifNoneMatch,
})
return &Response{
Data: res,
Headers: map[string]*string{
"x-amz-version-id": &versid,
},
MetaOpts: &MetaOptions{
BucketOwner: parsedAcl.Owner,
ObjectETag: res.ETag,
EventName: s3event.EventCompleteMultipartUpload,
VersionId: &versid,
},
}, err
}
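// An end-to-end client sketch tying the multipart handlers above together
// (create, upload one part, complete), including the x-amz-mp-object-size value
// the completion handler validates. It is illustrative rather than gateway
// code; the bucket, key and payload layout are assumptions.
package main

import (
	"bytes"
	"context"

	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/service/s3"
	"github.com/aws/aws-sdk-go-v2/service/s3/types"
)

// uploadInParts walks the multipart flow for a payload small enough to fit in one part.
func uploadInParts(ctx context.Context, client *s3.Client, data []byte) error {
	create, err := client.CreateMultipartUpload(ctx, &s3.CreateMultipartUploadInput{
		Bucket:            aws.String("my-bucket"),
		Key:               aws.String("big-object"),
		ChecksumAlgorithm: types.ChecksumAlgorithmCrc32,
	})
	if err != nil {
		return err
	}
	// A single part for brevity; real uploads split data into parts of at least 5 MiB.
	part, err := client.UploadPart(ctx, &s3.UploadPartInput{
		Bucket:     aws.String("my-bucket"),
		Key:        aws.String("big-object"),
		UploadId:   create.UploadId,
		PartNumber: aws.Int32(1),
		Body:       bytes.NewReader(data),
	})
	if err != nil {
		return err
	}
	_, err = client.CompleteMultipartUpload(ctx, &s3.CompleteMultipartUploadInput{
		Bucket:        aws.String("my-bucket"),
		Key:           aws.String("big-object"),
		UploadId:      create.UploadId,
		MpuObjectSize: aws.Int64(int64(len(data))), // sent as the X-Amz-Mp-Object-Size header
		MultipartUpload: &types.CompletedMultipartUpload{
			Parts: []types.CompletedPart{{PartNumber: aws.Int32(1), ETag: part.ETag}},
		},
	})
	return err
}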


@@ -0,0 +1,563 @@
// Copyright 2023 Versity Software
// This file is licensed under the Apache License, Version 2.0
// (the "License"); you may not use this file except in compliance
// with the License. You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing,
// software distributed under the License is distributed on an
// "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
// KIND, either express or implied. See the License for the
// specific language governing permissions and limitations
// under the License.
package controllers
import (
"bufio"
"context"
"encoding/xml"
"testing"
"github.com/aws/aws-sdk-go-v2/service/s3"
"github.com/aws/aws-sdk-go-v2/service/s3/types"
"github.com/stretchr/testify/assert"
"github.com/versity/versitygw/s3api/utils"
"github.com/versity/versitygw/s3err"
"github.com/versity/versitygw/s3event"
"github.com/versity/versitygw/s3response"
)
func TestS3ApiController_RestoreObject(t *testing.T) {
validRestoreBody, err := xml.Marshal(types.RestoreRequest{
Description: utils.GetStringPtr("description"),
Type: types.RestoreRequestTypeSelect,
})
assert.NoError(t, err)
tests := []struct {
name string
input testInput
output testOutput
}{
{
name: "verify access fails",
input: testInput{
locals: accessDeniedLocals,
},
output: testOutput{
response: &Response{
MetaOpts: &MetaOptions{
BucketOwner: "root",
},
},
err: s3err.GetAPIError(s3err.ErrAccessDenied),
},
},
{
name: "invalid request body",
input: testInput{
locals: defaultLocals,
body: []byte("invalid_body"),
},
output: testOutput{
response: &Response{
MetaOpts: &MetaOptions{
BucketOwner: "root",
},
},
err: s3err.GetAPIError(s3err.ErrMalformedXML),
},
},
{
name: "backend returns error",
input: testInput{
locals: defaultLocals,
beErr: s3err.GetAPIError(s3err.ErrNoSuchBucket),
body: validRestoreBody,
},
output: testOutput{
response: &Response{
MetaOpts: &MetaOptions{
BucketOwner: "root",
EventName: s3event.EventObjectRestoreCompleted,
},
},
err: s3err.GetAPIError(s3err.ErrNoSuchBucket),
},
},
{
name: "successful response",
input: testInput{
locals: defaultLocals,
body: validRestoreBody,
},
output: testOutput{
response: &Response{
MetaOpts: &MetaOptions{
BucketOwner: "root",
EventName: s3event.EventObjectRestoreCompleted,
},
},
},
},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
be := &BackendMock{
RestoreObjectFunc: func(contextMoqParam context.Context, restoreObjectInput *s3.RestoreObjectInput) error {
return tt.input.beErr
},
GetBucketPolicyFunc: func(contextMoqParam context.Context, bucket string) ([]byte, error) {
return nil, s3err.GetAPIError(s3err.ErrAccessDenied)
},
}
ctrl := S3ApiController{
be: be,
}
testController(
t,
ctrl.RestoreObject,
tt.output.response,
tt.output.err,
ctxInputs{
locals: tt.input.locals,
body: tt.input.body,
})
})
}
}
func TestS3ApiController_SelectObjectContent(t *testing.T) {
validSelectBody, err := xml.Marshal(s3response.SelectObjectContentPayload{
Expression: utils.GetStringPtr("expression"),
ExpressionType: types.ExpressionTypeSql,
})
assert.NoError(t, err)
tests := []struct {
name string
input testInput
output testOutput
}{
{
name: "verify access fails",
input: testInput{
locals: accessDeniedLocals,
},
output: testOutput{
response: &Response{
MetaOpts: &MetaOptions{
BucketOwner: "root",
},
},
err: s3err.GetAPIError(s3err.ErrAccessDenied),
},
},
{
name: "invalid request body",
input: testInput{
locals: defaultLocals,
body: []byte("invalid_body"),
},
output: testOutput{
response: &Response{
MetaOpts: &MetaOptions{
BucketOwner: "root",
},
},
err: s3err.GetAPIError(s3err.ErrMalformedXML),
},
},
{
name: "successful response",
input: testInput{
locals: defaultLocals,
body: validSelectBody,
},
output: testOutput{
response: &Response{
MetaOpts: &MetaOptions{
BucketOwner: "root",
},
},
},
},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
be := &BackendMock{
SelectObjectContentFunc: func(ctx context.Context, input *s3.SelectObjectContentInput) func(w *bufio.Writer) {
return func(w *bufio.Writer) {}
},
GetBucketPolicyFunc: func(contextMoqParam context.Context, bucket string) ([]byte, error) {
return nil, s3err.GetAPIError(s3err.ErrAccessDenied)
},
}
ctrl := S3ApiController{
be: be,
}
testController(
t,
ctrl.SelectObjectContent,
tt.output.response,
tt.output.err,
ctxInputs{
locals: tt.input.locals,
body: tt.input.body,
})
})
}
}
func TestS3ApiController_CreateMultipartUpload(t *testing.T) {
tests := []struct {
name string
input testInput
output testOutput
}{
{
name: "verify access fails",
input: testInput{
locals: accessDeniedLocals,
},
output: testOutput{
response: &Response{
MetaOpts: &MetaOptions{
BucketOwner: "root",
},
},
err: s3err.GetAPIError(s3err.ErrAccessDenied),
},
},
{
name: "invalid object lock headers",
input: testInput{
locals: defaultLocals,
headers: map[string]string{
"X-Amz-Object-Lock-Mode": string(types.ObjectLockModeGovernance),
},
},
output: testOutput{
response: &Response{
MetaOpts: &MetaOptions{
BucketOwner: "root",
},
},
err: s3err.GetAPIError(s3err.ErrObjectLockInvalidHeaders),
},
},
{
name: "invalid checksum headers",
input: testInput{
locals: defaultLocals,
headers: map[string]string{
"X-Amz-Checksum-Algorithm": "invalid_checksum_algo",
},
},
output: testOutput{
response: &Response{
MetaOpts: &MetaOptions{
BucketOwner: "root",
},
},
err: s3err.GetAPIError(s3err.ErrInvalidChecksumAlgorithm),
},
},
{
name: "backend returns error",
input: testInput{
locals: defaultLocals,
beErr: s3err.GetAPIError(s3err.ErrNoSuchBucket),
beRes: s3response.InitiateMultipartUploadResult{},
},
output: testOutput{
response: &Response{
Data: s3response.InitiateMultipartUploadResult{},
MetaOpts: &MetaOptions{
BucketOwner: "root",
},
},
err: s3err.GetAPIError(s3err.ErrNoSuchBucket),
},
},
{
name: "successful response",
input: testInput{
locals: defaultLocals,
beRes: s3response.InitiateMultipartUploadResult{},
headers: map[string]string{
"x-amz-checksum-algorithm": string(types.ChecksumAlgorithmCrc32),
"x-amz-checksum-type": string(types.ChecksumTypeComposite),
},
},
output: testOutput{
response: &Response{
Data: s3response.InitiateMultipartUploadResult{},
Headers: map[string]*string{
"x-amz-checksum-algorithm": utils.ConvertToStringPtr(types.ChecksumAlgorithmCrc32),
"x-amz-checksum-type": utils.ConvertToStringPtr(types.ChecksumTypeComposite),
},
MetaOpts: &MetaOptions{
BucketOwner: "root",
},
},
},
},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
be := &BackendMock{
CreateMultipartUploadFunc: func(contextMoqParam context.Context, createMultipartUploadInput s3response.CreateMultipartUploadInput) (s3response.InitiateMultipartUploadResult, error) {
return tt.input.beRes.(s3response.InitiateMultipartUploadResult), tt.input.beErr
},
GetBucketPolicyFunc: func(contextMoqParam context.Context, bucket string) ([]byte, error) {
return nil, s3err.GetAPIError(s3err.ErrAccessDenied)
},
}
ctrl := S3ApiController{
be: be,
}
testController(
t,
ctrl.CreateMultipartUpload,
tt.output.response,
tt.output.err,
ctxInputs{
locals: tt.input.locals,
body: tt.input.body,
headers: tt.input.headers,
})
})
}
}
func TestS3ApiController_CompleteMultipartUpload(t *testing.T) {
emptyMpPartsBody, err := xml.Marshal(s3response.CompleteMultipartUploadRequestBody{
Parts: []types.CompletedPart{},
})
assert.NoError(t, err)
pn := int32(1)
validMpBody, err := xml.Marshal(s3response.CompleteMultipartUploadRequestBody{
Parts: []types.CompletedPart{
{
PartNumber: &pn,
ETag: utils.GetStringPtr("ETag"),
},
},
})
assert.NoError(t, err)
versionId, ETag := "versionId", "mock-ETag"
tests := []struct {
name string
input testInput
output testOutput
}{
{
name: "verify access fails",
input: testInput{
locals: accessDeniedLocals,
},
output: testOutput{
response: &Response{
MetaOpts: &MetaOptions{
BucketOwner: "root",
},
},
err: s3err.GetAPIError(s3err.ErrAccessDenied),
},
},
{
name: "invalid request body",
input: testInput{
locals: defaultLocals,
body: []byte("invalid_body"),
},
output: testOutput{
response: &Response{
MetaOpts: &MetaOptions{
BucketOwner: "root",
},
},
err: s3err.GetAPIError(s3err.ErrMalformedXML),
},
},
{
name: "request body empty mp parts",
input: testInput{
locals: defaultLocals,
body: emptyMpPartsBody,
},
output: testOutput{
response: &Response{
MetaOpts: &MetaOptions{
BucketOwner: "root",
},
},
err: s3err.GetAPIError(s3err.ErrEmptyParts),
},
},
{
name: "invalid mp parts header string",
input: testInput{
locals: defaultLocals,
body: validMpBody,
headers: map[string]string{
"X-Amz-Mp-Object-Size": "invalid_mp_object_size",
},
},
output: testOutput{
response: &Response{
MetaOpts: &MetaOptions{
BucketOwner: "root",
},
},
err: s3err.GetInvalidMpObjectSizeErr("invalid_mp_object_size"),
},
},
{
name: "negative mp parts header value",
input: testInput{
locals: defaultLocals,
body: validMpBody,
headers: map[string]string{
"X-Amz-Mp-Object-Size": "-4",
},
},
output: testOutput{
response: &Response{
MetaOpts: &MetaOptions{
BucketOwner: "root",
},
},
err: s3err.GetNegatvieMpObjectSizeErr(-4),
},
},
{
name: "invalid checksum headers",
input: testInput{
locals: defaultLocals,
body: validMpBody,
headers: map[string]string{
"X-Amz-Checksum-Crc32": "invalid_checksum",
},
},
output: testOutput{
response: &Response{
MetaOpts: &MetaOptions{
BucketOwner: "root",
},
},
err: s3err.GetInvalidChecksumHeaderErr("x-amz-checksum-crc32"),
},
},
{
name: "invalid checksum type",
input: testInput{
locals: defaultLocals,
body: validMpBody,
headers: map[string]string{
"X-Amz-Checksum-Type": "invalid_checksum_type",
},
},
output: testOutput{
response: &Response{
MetaOpts: &MetaOptions{
BucketOwner: "root",
},
},
err: s3err.GetInvalidChecksumHeaderErr("x-amz-checksum-type"),
},
},
{
name: "backend returns error",
input: testInput{
locals: defaultLocals,
body: validMpBody,
beErr: s3err.GetAPIError(s3err.ErrNoSuchBucket),
beRes: s3response.CompleteMultipartUploadResult{},
},
output: testOutput{
response: &Response{
Data: s3response.CompleteMultipartUploadResult{},
Headers: map[string]*string{
"x-amz-version-id": &versionId,
},
MetaOpts: &MetaOptions{
BucketOwner: "root",
EventName: s3event.EventCompleteMultipartUpload,
VersionId: &versionId,
ObjectETag: nil,
},
},
err: s3err.GetAPIError(s3err.ErrNoSuchBucket),
},
},
{
name: "successful response",
input: testInput{
locals: defaultLocals,
body: validMpBody,
beRes: s3response.CompleteMultipartUploadResult{
ETag: &ETag,
},
headers: map[string]string{
"X-Amz-Mp-Object-Size": "3",
},
},
output: testOutput{
response: &Response{
Data: s3response.CompleteMultipartUploadResult{
ETag: &ETag,
},
Headers: map[string]*string{
"x-amz-version-id": &versionId,
},
MetaOpts: &MetaOptions{
BucketOwner: "root",
EventName: s3event.EventCompleteMultipartUpload,
VersionId: &versionId,
ObjectETag: &ETag,
},
},
},
},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
be := &BackendMock{
CompleteMultipartUploadFunc: func(contextMoqParam context.Context, completeMultipartUploadInput *s3.CompleteMultipartUploadInput) (s3response.CompleteMultipartUploadResult, string, error) {
return tt.input.beRes.(s3response.CompleteMultipartUploadResult), versionId, tt.input.beErr
},
GetBucketPolicyFunc: func(contextMoqParam context.Context, bucket string) ([]byte, error) {
return nil, s3err.GetAPIError(s3err.ErrAccessDenied)
},
}
ctrl := S3ApiController{
be: be,
}
testController(
t,
ctrl.CompleteMultipartUpload,
tt.output.response,
tt.output.err,
ctxInputs{
locals: tt.input.locals,
body: tt.input.body,
headers: tt.input.headers,
})
})
}
}


@@ -0,0 +1,716 @@
// Copyright 2023 Versity Software
// This file is licensed under the Apache License, Version 2.0
// (the "License"); you may not use this file except in compliance
// with the License. You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing,
// software distributed under the License is distributed on an
// "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
// KIND, either express or implied. See the License for the
// specific language governing permissions and limitations
// under the License.
package controllers
import (
"bytes"
"encoding/xml"
"fmt"
"io"
"strconv"
"strings"
"github.com/aws/aws-sdk-go-v2/service/s3"
"github.com/aws/aws-sdk-go-v2/service/s3/types"
"github.com/gofiber/fiber/v2"
"github.com/versity/versitygw/auth"
"github.com/versity/versitygw/debuglogger"
"github.com/versity/versitygw/s3api/utils"
"github.com/versity/versitygw/s3err"
"github.com/versity/versitygw/s3event"
"github.com/versity/versitygw/s3response"
)
func (c S3ApiController) PutObjectTagging(ctx *fiber.Ctx) (*Response, error) {
bucket := ctx.Params("bucket")
key := strings.TrimPrefix(ctx.Path(), fmt.Sprintf("/%s/", bucket))
acct := utils.ContextKeyAccount.Get(ctx).(auth.Account)
isRoot := utils.ContextKeyIsRoot.Get(ctx).(bool)
IsBucketPublic := utils.ContextKeyPublicBucket.IsSet(ctx)
parsedAcl := utils.ContextKeyParsedAcl.Get(ctx).(auth.ACL)
err := auth.VerifyAccess(ctx.Context(), c.be, auth.AccessOptions{
Readonly: c.readonly,
Acl: parsedAcl,
AclPermission: auth.PermissionWrite,
IsRoot: isRoot,
Acc: acct,
Bucket: bucket,
Object: key,
Action: auth.PutObjectTaggingAction,
IsPublicRequest: IsBucketPublic,
})
if err != nil {
return &Response{
MetaOpts: &MetaOptions{
BucketOwner: parsedAcl.Owner,
},
}, err
}
tagging, err := utils.ParseTagging(ctx.Body(), utils.TagLimitObject)
if err != nil {
return &Response{
MetaOpts: &MetaOptions{
BucketOwner: parsedAcl.Owner,
},
}, err
}
err = c.be.PutObjectTagging(ctx.Context(), bucket, key, tagging)
return &Response{
MetaOpts: &MetaOptions{
BucketOwner: parsedAcl.Owner,
EventName: s3event.EventObjectTaggingPut,
},
}, err
}
func (c S3ApiController) PutObjectRetention(ctx *fiber.Ctx) (*Response, error) {
bucket := ctx.Params("bucket")
key := strings.TrimPrefix(ctx.Path(), fmt.Sprintf("/%s/", bucket))
versionId := ctx.Query("versionId")
bypass := strings.EqualFold(ctx.Get("X-Amz-Bypass-Governance-Retention"), "true")
acct := utils.ContextKeyAccount.Get(ctx).(auth.Account)
isRoot := utils.ContextKeyIsRoot.Get(ctx).(bool)
IsBucketPublic := utils.ContextKeyPublicBucket.IsSet(ctx)
parsedAcl := utils.ContextKeyParsedAcl.Get(ctx).(auth.ACL)
if err := auth.VerifyAccess(ctx.Context(), c.be, auth.AccessOptions{
Readonly: c.readonly,
Acl: parsedAcl,
AclPermission: auth.PermissionWrite,
IsRoot: isRoot,
Acc: acct,
Bucket: bucket,
Object: key,
Action: auth.PutObjectRetentionAction,
IsPublicRequest: IsBucketPublic,
}); err != nil {
return &Response{
MetaOpts: &MetaOptions{
BucketOwner: parsedAcl.Owner,
},
}, err
}
if bypass {
policy, err := c.be.GetBucketPolicy(ctx.Context(), bucket)
if err != nil {
bypass = false
} else {
if err := auth.VerifyBucketPolicy(policy, acct.Access, bucket, key, auth.BypassGovernanceRetentionAction); err != nil {
bypass = false
}
}
}
retention, err := auth.ParseObjectLockRetentionInput(ctx.Body())
if err != nil {
debuglogger.Logf("failed to parse object lock configuration input: %v", err)
return &Response{
MetaOpts: &MetaOptions{
BucketOwner: parsedAcl.Owner,
},
}, err
}
err = c.be.PutObjectRetention(ctx.Context(), bucket, key, versionId, bypass, retention)
return &Response{
MetaOpts: &MetaOptions{
BucketOwner: parsedAcl.Owner,
},
}, err
}
func (c S3ApiController) PutObjectLegalHold(ctx *fiber.Ctx) (*Response, error) {
bucket := ctx.Params("bucket")
key := strings.TrimPrefix(ctx.Path(), fmt.Sprintf("/%s/", bucket))
versionId := ctx.Query("versionId")
acct := utils.ContextKeyAccount.Get(ctx).(auth.Account)
isRoot := utils.ContextKeyIsRoot.Get(ctx).(bool)
IsBucketPublic := utils.ContextKeyPublicBucket.IsSet(ctx)
parsedAcl := utils.ContextKeyParsedAcl.Get(ctx).(auth.ACL)
if err := auth.VerifyAccess(ctx.Context(), c.be, auth.AccessOptions{
Readonly: c.readonly,
Acl: parsedAcl,
AclPermission: auth.PermissionWrite,
IsRoot: isRoot,
Acc: acct,
Bucket: bucket,
Object: key,
Action: auth.PutObjectLegalHoldAction,
IsPublicRequest: IsBucketPublic,
}); err != nil {
return &Response{
MetaOpts: &MetaOptions{
BucketOwner: parsedAcl.Owner,
},
}, err
}
var legalHold types.ObjectLockLegalHold
if err := xml.Unmarshal(ctx.Body(), &legalHold); err != nil {
debuglogger.Logf("failed to parse request body: %v", err)
return &Response{
MetaOpts: &MetaOptions{
BucketOwner: parsedAcl.Owner,
},
}, s3err.GetAPIError(s3err.ErrMalformedXML)
}
if legalHold.Status != types.ObjectLockLegalHoldStatusOff && legalHold.Status != types.ObjectLockLegalHoldStatusOn {
debuglogger.Logf("invalid legal hold status: %v", legalHold.Status)
return &Response{
MetaOpts: &MetaOptions{
BucketOwner: parsedAcl.Owner,
},
}, s3err.GetAPIError(s3err.ErrMalformedXML)
}
err := c.be.PutObjectLegalHold(ctx.Context(), bucket, key, versionId, legalHold.Status == types.ObjectLockLegalHoldStatusOn)
return &Response{
MetaOpts: &MetaOptions{
BucketOwner: parsedAcl.Owner,
},
}, err
}
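// A client-side companion sketch for the retention and legal-hold handlers
// above, kept as a separate example and not part of this change; the bucket,
// key, retention window and hold status are assumptions.
package main

import (
	"context"
	"time"

	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/service/s3"
	"github.com/aws/aws-sdk-go-v2/service/s3/types"
)

// lockObject applies a 30-day governance retention window and then a legal hold.
func lockObject(ctx context.Context, client *s3.Client) error {
	until := time.Now().AddDate(0, 0, 30)
	_, err := client.PutObjectRetention(ctx, &s3.PutObjectRetentionInput{
		Bucket: aws.String("my-bucket"),
		Key:    aws.String("my-key"),
		Retention: &types.ObjectLockRetention{
			Mode:            types.ObjectLockRetentionModeGovernance,
			RetainUntilDate: aws.Time(until),
		},
	})
	if err != nil {
		return err
	}
	_, err = client.PutObjectLegalHold(ctx, &s3.PutObjectLegalHoldInput{
		Bucket: aws.String("my-bucket"),
		Key:    aws.String("my-key"),
		LegalHold: &types.ObjectLockLegalHold{
			Status: types.ObjectLockLegalHoldStatusOn,
		},
	})
	return err
}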
func (c S3ApiController) UploadPart(ctx *fiber.Ctx) (*Response, error) {
bucket := ctx.Params("bucket")
key := strings.TrimPrefix(ctx.Path(), fmt.Sprintf("/%s/", bucket))
partNumber := int32(ctx.QueryInt("partNumber", -1))
uploadId := ctx.Query("uploadId")
// context locals
acct := utils.ContextKeyAccount.Get(ctx).(auth.Account)
isRoot := utils.ContextKeyIsRoot.Get(ctx).(bool)
IsBucketPublic := utils.ContextKeyPublicBucket.IsSet(ctx)
parsedAcl := utils.ContextKeyParsedAcl.Get(ctx).(auth.ACL)
contentLengthStr := ctx.Get("Content-Length")
if contentLengthStr == "" {
contentLengthStr = "0"
}
// Use decoded content length if available because the
// middleware will decode the chunked transfer encoding
decodedLength := ctx.Get("X-Amz-Decoded-Content-Length")
if decodedLength != "" {
contentLengthStr = decodedLength
}
err := auth.VerifyAccess(ctx.Context(), c.be,
auth.AccessOptions{
Readonly: c.readonly,
Acl: parsedAcl,
AclPermission: auth.PermissionWrite,
IsRoot: isRoot,
Acc: acct,
Bucket: bucket,
Object: key,
Action: auth.PutObjectAction,
IsPublicRequest: IsBucketPublic,
})
if err != nil {
return &Response{
MetaOpts: &MetaOptions{
BucketOwner: parsedAcl.Owner,
},
}, err
}
if partNumber < minPartNumber || partNumber > maxPartNumber {
debuglogger.Logf("invalid part number: %d", partNumber)
return &Response{
MetaOpts: &MetaOptions{
BucketOwner: parsedAcl.Owner,
},
}, s3err.GetAPIError(s3err.ErrInvalidPartNumber)
}
contentLength, err := strconv.ParseInt(contentLengthStr, 10, 64)
if err != nil {
debuglogger.Logf("error parsing content length %q: %v", contentLengthStr, err)
return &Response{
MetaOpts: &MetaOptions{
BucketOwner: parsedAcl.Owner,
},
}, s3err.GetAPIError(s3err.ErrInvalidRequest)
}
algorithm, checksums, err := utils.ParseChecksumHeadersAndSdkAlgo(ctx)
if err != nil {
debuglogger.Logf("err parsing checksum headers: %v", err)
return &Response{
MetaOpts: &MetaOptions{
BucketOwner: parsedAcl.Owner,
},
}, err
}
var body io.Reader
bodyi := utils.ContextKeyBodyReader.Get(ctx)
if bodyi != nil {
body = bodyi.(io.Reader)
} else {
body = bytes.NewReader([]byte{})
}
res, err := c.be.UploadPart(ctx.Context(),
&s3.UploadPartInput{
Bucket: &bucket,
Key: &key,
UploadId: &uploadId,
PartNumber: &partNumber,
ContentLength: &contentLength,
Body: body,
ChecksumAlgorithm: algorithm,
ChecksumCRC32: utils.GetStringPtr(checksums[types.ChecksumAlgorithmCrc32]),
ChecksumCRC32C: utils.GetStringPtr(checksums[types.ChecksumAlgorithmCrc32c]),
ChecksumSHA1: utils.GetStringPtr(checksums[types.ChecksumAlgorithmSha1]),
ChecksumSHA256: utils.GetStringPtr(checksums[types.ChecksumAlgorithmSha256]),
ChecksumCRC64NVME: utils.GetStringPtr(checksums[types.ChecksumAlgorithmCrc64nvme]),
})
var headers map[string]*string
if err == nil {
headers = map[string]*string{
"ETag": res.ETag,
"x-amz-checksum-crc32": res.ChecksumCRC32,
"x-amz-checksum-crc32c": res.ChecksumCRC32C,
"x-amz-checksum-crc64nvme": res.ChecksumCRC64NVME,
"x-amz-checksum-sha1": res.ChecksumSHA1,
"x-amz-checksum-sha256": res.ChecksumSHA256,
}
}
return &Response{
Headers: headers,
MetaOpts: &MetaOptions{
ContentLength: contentLength,
BucketOwner: parsedAcl.Owner,
},
}, err
}
func (c S3ApiController) UploadPartCopy(ctx *fiber.Ctx) (*Response, error) {
bucket := ctx.Params("bucket")
key := strings.TrimPrefix(ctx.Path(), fmt.Sprintf("/%s/", bucket))
copySource := strings.TrimPrefix(ctx.Get("X-Amz-Copy-Source"), "/")
copySrcRange := ctx.Get("X-Amz-Copy-Source-Range")
partNumber := int32(ctx.QueryInt("partNumber", -1))
uploadId := ctx.Query("uploadId")
// context locals
acct := utils.ContextKeyAccount.Get(ctx).(auth.Account)
isRoot := utils.ContextKeyIsRoot.Get(ctx).(bool)
IsBucketPublic := utils.ContextKeyPublicBucket.IsSet(ctx)
parsedAcl := utils.ContextKeyParsedAcl.Get(ctx).(auth.ACL)
err := utils.ValidateCopySource(copySource)
if err != nil {
return &Response{
MetaOpts: &MetaOptions{
BucketOwner: parsedAcl.Owner,
},
}, err
}
err = auth.VerifyObjectCopyAccess(ctx.Context(), c.be, copySource,
auth.AccessOptions{
Acl: parsedAcl,
AclPermission: auth.PermissionWrite,
IsRoot: isRoot,
Acc: acct,
Bucket: bucket,
Object: key,
Action: auth.PutObjectAction,
IsPublicRequest: IsBucketPublic,
})
if err != nil {
return &Response{
MetaOpts: &MetaOptions{
BucketOwner: parsedAcl.Owner,
},
}, err
}
if partNumber < minPartNumber || partNumber > maxPartNumber {
debuglogger.Logf("invalid part number: %d", partNumber)
return &Response{
MetaOpts: &MetaOptions{
BucketOwner: parsedAcl.Owner,
},
}, s3err.GetAPIError(s3err.ErrInvalidPartNumber)
}
preconditionHdrs := utils.ParsePreconditionHeaders(ctx, utils.WithCopySource())
resp, err := c.be.UploadPartCopy(ctx.Context(),
&s3.UploadPartCopyInput{
Bucket: &bucket,
Key: &key,
CopySource: &copySource,
PartNumber: &partNumber,
UploadId: &uploadId,
CopySourceRange: &copySrcRange,
CopySourceIfMatch: preconditionHdrs.IfMatch,
CopySourceIfNoneMatch: preconditionHdrs.IfNoneMatch,
CopySourceIfModifiedSince: preconditionHdrs.IfModSince,
CopySourceIfUnmodifiedSince: preconditionHdrs.IfUnmodeSince,
})
var headers map[string]*string
if err == nil && resp.CopySourceVersionId != "" {
headers = map[string]*string{
"x-amz-copy-source-version-id": &resp.CopySourceVersionId,
}
}
return &Response{
Headers: headers,
Data: resp,
MetaOpts: &MetaOptions{
BucketOwner: parsedAcl.Owner,
},
}, err
}
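// An illustrative client call for the UploadPartCopy handler above, copying a
// byte range of an existing object into one part of an in-progress multipart
// upload; all names, the upload ID and the range are assumptions, and the
// snippet is a separate example rather than gateway code.
package main

import (
	"context"

	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/service/s3"
)

// copyRangeIntoPart copies the first 5 MiB of src-bucket/src-key into part 1.
func copyRangeIntoPart(ctx context.Context, client *s3.Client, uploadID string) error {
	_, err := client.UploadPartCopy(ctx, &s3.UploadPartCopyInput{
		Bucket:          aws.String("dst-bucket"),
		Key:             aws.String("dst-key"),
		UploadId:        aws.String(uploadID),
		PartNumber:      aws.Int32(1),
		CopySource:      aws.String("src-bucket/src-key"),
		CopySourceRange: aws.String("bytes=0-5242879"),
	})
	return err
}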
func (c S3ApiController) PutObjectAcl(ctx *fiber.Ctx) (*Response, error) {
bucket := ctx.Params("bucket")
key := strings.TrimPrefix(ctx.Path(), fmt.Sprintf("/%s/", bucket))
acl := ctx.Get("X-Amz-Acl")
grantFullControl := ctx.Get("X-Amz-Grant-Full-Control")
grantRead := ctx.Get("X-Amz-Grant-Read")
grantReadACP := ctx.Get("X-Amz-Grant-Read-Acp")
grantWrite := ctx.Get("X-Amz-Grant-Write")
grantWriteACP := ctx.Get("X-Amz-Grant-Write-Acp")
// context locals
acct := utils.ContextKeyAccount.Get(ctx).(auth.Account)
isRoot := utils.ContextKeyIsRoot.Get(ctx).(bool)
parsedAcl := utils.ContextKeyParsedAcl.Get(ctx).(auth.ACL)
err := auth.VerifyAccess(ctx.Context(), c.be,
auth.AccessOptions{
Readonly: c.readonly,
Acl: parsedAcl,
AclPermission: auth.PermissionWrite,
IsRoot: isRoot,
Acc: acct,
Bucket: bucket,
Object: key,
Action: auth.PutObjectAclAction,
})
if err != nil {
return &Response{
MetaOpts: &MetaOptions{
BucketOwner: parsedAcl.Owner,
},
}, err
}
err = c.be.PutObjectAcl(ctx.Context(), &s3.PutObjectAclInput{
Bucket: &bucket,
Key: &key,
GrantFullControl: &grantFullControl,
GrantRead: &grantRead,
GrantWrite: &grantWrite,
ACL: types.ObjectCannedACL(acl),
GrantReadACP: &grantReadACP,
GrantWriteACP: &grantWriteACP,
})
return &Response{
MetaOpts: &MetaOptions{
BucketOwner: parsedAcl.Owner,
EventName: s3event.EventObjectAclPut,
},
}, err
}
func (c S3ApiController) CopyObject(ctx *fiber.Ctx) (*Response, error) {
bucket := ctx.Params("bucket")
key := strings.TrimPrefix(ctx.Path(), fmt.Sprintf("/%s/", bucket))
copySource := strings.TrimPrefix(ctx.Get("X-Amz-Copy-Source"), "/")
metaDirective := types.MetadataDirective(ctx.Get("X-Amz-Metadata-Directive", string(types.MetadataDirectiveCopy)))
taggingDirective := types.TaggingDirective(ctx.Get("X-Amz-Tagging-Directive", string(types.TaggingDirectiveCopy)))
contentType := ctx.Get("Content-Type")
contentEncoding := ctx.Get("Content-Encoding")
contentDisposition := ctx.Get("Content-Disposition")
contentLanguage := ctx.Get("Content-Language")
cacheControl := ctx.Get("Cache-Control")
expires := ctx.Get("Expires")
tagging := ctx.Get("x-amz-tagging")
storageClass := ctx.Get("X-Amz-Storage-Class")
// context locals
acct := utils.ContextKeyAccount.Get(ctx).(auth.Account)
isRoot := utils.ContextKeyIsRoot.Get(ctx).(bool)
parsedAcl := utils.ContextKeyParsedAcl.Get(ctx).(auth.ACL)
err := utils.ValidateCopySource(copySource)
if err != nil {
return &Response{
MetaOpts: &MetaOptions{
BucketOwner: parsedAcl.Owner,
},
}, err
}
err = auth.VerifyObjectCopyAccess(ctx.Context(), c.be, copySource,
auth.AccessOptions{
Acl: parsedAcl,
AclPermission: auth.PermissionWrite,
IsRoot: isRoot,
Acc: acct,
Bucket: bucket,
Object: key,
Action: auth.PutObjectAction,
})
if err != nil {
return &Response{
MetaOpts: &MetaOptions{
BucketOwner: parsedAcl.Owner,
},
}, err
}
metadata := utils.GetUserMetaData(&ctx.Request().Header)
if metaDirective != "" && metaDirective != types.MetadataDirectiveCopy && metaDirective != types.MetadataDirectiveReplace {
debuglogger.Logf("invalid metadata directive: %v", metaDirective)
return &Response{
MetaOpts: &MetaOptions{
BucketOwner: parsedAcl.Owner,
},
}, s3err.GetAPIError(s3err.ErrInvalidMetadataDirective)
}
if taggingDirective != "" && taggingDirective != types.TaggingDirectiveCopy && taggingDirective != types.TaggingDirectiveReplace {
debuglogger.Logf("invalid tagging directive: %v", taggingDirective)
return &Response{
MetaOpts: &MetaOptions{
BucketOwner: parsedAcl.Owner,
},
}, s3err.GetAPIError(s3err.ErrInvalidTaggingDirective)
}
checksumAlgorithm := types.ChecksumAlgorithm(ctx.Get("x-amz-checksum-algorithm"))
err = utils.IsChecksumAlgorithmValid(checksumAlgorithm)
if err != nil {
return &Response{
MetaOpts: &MetaOptions{
BucketOwner: parsedAcl.Owner,
},
}, err
}
objLock, err := utils.ParsObjectLockHdrs(ctx)
if err != nil {
return &Response{
MetaOpts: &MetaOptions{
BucketOwner: parsedAcl.Owner,
},
}, err
}
preconditionHdrs := utils.ParsePreconditionHeaders(ctx, utils.WithCopySource())
res, err := c.be.CopyObject(ctx.Context(),
s3response.CopyObjectInput{
Bucket: &bucket,
Key: &key,
ContentType: &contentType,
ContentDisposition: &contentDisposition,
ContentEncoding: &contentEncoding,
ContentLanguage: &contentLanguage,
CacheControl: &cacheControl,
Expires: &expires,
Tagging: &tagging,
TaggingDirective: taggingDirective,
CopySource: &copySource,
CopySourceIfMatch: preconditionHdrs.IfMatch,
CopySourceIfNoneMatch: preconditionHdrs.IfNoneMatch,
CopySourceIfModifiedSince: preconditionHdrs.IfModSince,
CopySourceIfUnmodifiedSince: preconditionHdrs.IfUnmodeSince,
ExpectedBucketOwner: &acct.Access,
Metadata: metadata,
MetadataDirective: metaDirective,
StorageClass: types.StorageClass(storageClass),
ChecksumAlgorithm: checksumAlgorithm,
ObjectLockRetainUntilDate: &objLock.RetainUntilDate,
ObjectLockLegalHoldStatus: objLock.LegalHoldStatus,
ObjectLockMode: objLock.ObjectLockMode,
})
var etag *string
if err == nil {
etag = res.CopyObjectResult.ETag
}
return &Response{
Headers: map[string]*string{
"x-amz-version-id": res.VersionId,
"x-amz-copy-source-version-id": res.CopySourceVersionId,
},
Data: res.CopyObjectResult,
MetaOpts: &MetaOptions{
BucketOwner: parsedAcl.Owner,
ObjectETag: etag,
VersionId: res.VersionId,
EventName: s3event.EventObjectCreatedCopy,
},
}, err
}
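// An illustrative client-side copy that replaces both metadata and tags,
// exercising the directive validation in the CopyObject handler above; the
// buckets, keys, metadata and tag values are assumptions, not gateway code.
package main

import (
	"context"

	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/service/s3"
	"github.com/aws/aws-sdk-go-v2/service/s3/types"
)

// copyWithReplace copies an object while replacing its user metadata and tagging.
func copyWithReplace(ctx context.Context, client *s3.Client) error {
	_, err := client.CopyObject(ctx, &s3.CopyObjectInput{
		Bucket:            aws.String("dst-bucket"),
		Key:               aws.String("copy-of-key"),
		CopySource:        aws.String("src-bucket/src-key"),
		MetadataDirective: types.MetadataDirectiveReplace,
		Metadata:          map[string]string{"owner": "analytics"},
		TaggingDirective:  types.TaggingDirectiveReplace,
		Tagging:           aws.String("tier=archive"),
	})
	return err
}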
func (c S3ApiController) PutObject(ctx *fiber.Ctx) (*Response, error) {
bucket := ctx.Params("bucket")
key := strings.TrimPrefix(ctx.Path(), fmt.Sprintf("/%s/", bucket))
contentType := ctx.Get("Content-Type")
contentEncoding := ctx.Get("Content-Encoding")
contentDisposition := ctx.Get("Content-Disposition")
contentLanguage := ctx.Get("Content-Language")
cacheControl := ctx.Get("Cache-Control")
expires := ctx.Get("Expires")
tagging := ctx.Get("x-amz-tagging")
// context locals
acct := utils.ContextKeyAccount.Get(ctx).(auth.Account)
isRoot := utils.ContextKeyIsRoot.Get(ctx).(bool)
parsedAcl := utils.ContextKeyParsedAcl.Get(ctx).(auth.ACL)
IsBucketPublic := utils.ContextKeyPublicBucket.IsSet(ctx)
// Content Length
contentLengthStr := ctx.Get("Content-Length")
if contentLengthStr == "" {
contentLengthStr = "0"
}
// Use decoded content length if available because the
// middleware will decode the chunked transfer encoding
decodedLength := ctx.Get("X-Amz-Decoded-Content-Length")
if decodedLength != "" {
contentLengthStr = decodedLength
}
// load the meta headers
metadata := utils.GetUserMetaData(&ctx.Request().Header)
err := auth.VerifyAccess(ctx.Context(), c.be,
auth.AccessOptions{
Readonly: c.readonly,
Acl: parsedAcl,
AclPermission: auth.PermissionWrite,
IsRoot: isRoot,
Acc: acct,
Bucket: bucket,
Object: key,
Action: auth.PutObjectAction,
IsPublicRequest: IsBucketPublic,
})
if err != nil {
return &Response{
MetaOpts: &MetaOptions{
BucketOwner: parsedAcl.Owner,
},
}, err
}
err = auth.CheckObjectAccess(ctx.Context(), bucket, acct.Access, []types.ObjectIdentifier{{Key: &key}}, true, IsBucketPublic, c.be)
if err != nil {
return &Response{
MetaOpts: &MetaOptions{
BucketOwner: parsedAcl.Owner,
},
}, err
}
contentLength, err := strconv.ParseInt(contentLengthStr, 10, 64)
if err != nil {
debuglogger.Logf("error parsing content length %q: %v", contentLengthStr, err)
return &Response{
MetaOpts: &MetaOptions{
BucketOwner: parsedAcl.Owner,
},
}, s3err.GetAPIError(s3err.ErrInvalidRequest)
}
objLock, err := utils.ParsObjectLockHdrs(ctx)
if err != nil {
return &Response{
MetaOpts: &MetaOptions{
BucketOwner: parsedAcl.Owner,
},
}, err
}
algorithm, checksums, err := utils.ParseChecksumHeadersAndSdkAlgo(ctx)
if err != nil {
return &Response{
MetaOpts: &MetaOptions{
BucketOwner: parsedAcl.Owner,
},
}, err
}
var body io.Reader
bodyi := utils.ContextKeyBodyReader.Get(ctx)
if bodyi != nil {
body = bodyi.(io.Reader)
} else {
body = bytes.NewReader([]byte{})
}
ifMatch, ifNoneMatch := utils.ParsePreconditionMatchHeaders(ctx)
res, err := c.be.PutObject(ctx.Context(),
s3response.PutObjectInput{
Bucket: &bucket,
Key: &key,
ContentLength: &contentLength,
ContentType: &contentType,
ContentEncoding: &contentEncoding,
ContentDisposition: &contentDisposition,
ContentLanguage: &contentLanguage,
CacheControl: &cacheControl,
Expires: &expires,
Metadata: metadata,
Body: body,
Tagging: &tagging,
ObjectLockRetainUntilDate: &objLock.RetainUntilDate,
ObjectLockMode: objLock.ObjectLockMode,
ObjectLockLegalHoldStatus: objLock.LegalHoldStatus,
ChecksumAlgorithm: algorithm,
ChecksumCRC32: utils.GetStringPtr(checksums[types.ChecksumAlgorithmCrc32]),
ChecksumCRC32C: utils.GetStringPtr(checksums[types.ChecksumAlgorithmCrc32c]),
ChecksumSHA1: utils.GetStringPtr(checksums[types.ChecksumAlgorithmSha1]),
ChecksumSHA256: utils.GetStringPtr(checksums[types.ChecksumAlgorithmSha256]),
ChecksumCRC64NVME: utils.GetStringPtr(checksums[types.ChecksumAlgorithmCrc64nvme]),
IfMatch: ifMatch,
IfNoneMatch: ifNoneMatch,
})
return &Response{
Headers: map[string]*string{
"ETag": &res.ETag,
"x-amz-checksum-crc32": res.ChecksumCRC32,
"x-amz-checksum-crc32c": res.ChecksumCRC32C,
"x-amz-checksum-crc64nvme": res.ChecksumCRC64NVME,
"x-amz-checksum-sha1": res.ChecksumSHA1,
"x-amz-checksum-sha256": res.ChecksumSHA256,
"x-amz-checksum-type": utils.ConvertToStringPtr(res.ChecksumType),
"x-amz-version-id": &res.VersionID,
"x-amz-object-size": utils.ConvertPtrToStringPtr(res.Size),
},
MetaOpts: &MetaOptions{
ContentLength: contentLength,
BucketOwner: parsedAcl.Owner,
ObjectETag: &res.ETag,
ObjectSize: contentLength,
EventName: s3event.EventObjectCreatedPut,
},
}, err
}
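// A minimal client sketch matching the checksum, tagging and If-None-Match
// handling in the PutObject handler above; the bucket, key, body and tag
// values are assumptions, and the snippet is a separate example rather than
// part of this change.
package main

import (
	"context"
	"strings"

	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/service/s3"
	"github.com/aws/aws-sdk-go-v2/service/s3/types"
)

// putIfAbsent writes the object only if the key does not already exist and
// asks the SDK to compute a CRC32 checksum for the payload.
func putIfAbsent(ctx context.Context, client *s3.Client) error {
	_, err := client.PutObject(ctx, &s3.PutObjectInput{
		Bucket:            aws.String("my-bucket"),
		Key:               aws.String("report.csv"),
		Body:              strings.NewReader("a,b,c\n1,2,3\n"),
		ContentType:       aws.String("text/csv"),
		Tagging:           aws.String("team=storage"),
		ChecksumAlgorithm: types.ChecksumAlgorithmCrc32,
		IfNoneMatch:       aws.String("*"),
	})
	return err
}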

File diff suppressed because it is too large


@@ -0,0 +1,113 @@
// Copyright 2023 Versity Software
// This file is licensed under the Apache License, Version 2.0
// (the "License"); you may not use this file except in compliance
// with the License. You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing,
// software distributed under the License is distributed on an
// "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
// KIND, either express or implied. See the License for the
// specific language governing permissions and limitations
// under the License.
package controllers
import (
"errors"
"github.com/gofiber/fiber/v2"
"github.com/versity/versitygw/auth"
"github.com/versity/versitygw/debuglogger"
"github.com/versity/versitygw/s3api/middlewares"
"github.com/versity/versitygw/s3api/utils"
"github.com/versity/versitygw/s3err"
)
func (s S3ApiController) CORSOptions(ctx *fiber.Ctx) (*Response, error) {
bucket := ctx.Params("bucket")
parsedAcl := utils.ContextKeyParsedAcl.Get(ctx).(auth.ACL)
// get headers
origin := ctx.Get("Origin")
method := auth.CORSHTTPMethod(ctx.Get("Access-Control-Request-Method"))
headers := ctx.Get("Access-Control-Request-Headers")
// Origin is required
if origin == "" {
debuglogger.Logf("origin is missing: %v", origin)
return &Response{
MetaOpts: &MetaOptions{
BucketOwner: parsedAcl.Owner,
},
}, s3err.GetAPIError(s3err.ErrMissingCORSOrigin)
}
// check that the requested method is valid
if !method.IsValid() {
debuglogger.Logf("invalid cors method: %s", method)
return &Response{
MetaOpts: &MetaOptions{
BucketOwner: parsedAcl.Owner,
},
}, s3err.GetInvalidCORSMethodErr(method.String())
}
// parse and validate headers
parsedHeaders, err := auth.ParseCORSHeaders(headers)
if err != nil {
return &Response{
MetaOpts: &MetaOptions{
BucketOwner: parsedAcl.Owner,
},
}, err
}
cors, err := s.be.GetBucketCors(ctx.Context(), bucket)
if err != nil {
debuglogger.Logf("failed to get bucket cors: %v", err)
if errors.Is(err, s3err.GetAPIError(s3err.ErrNoSuchCORSConfiguration)) {
err = s3err.GetAPIError(s3err.ErrCORSIsNotEnabled)
debuglogger.Logf("bucket cors is not set: %v", err)
}
return &Response{
MetaOpts: &MetaOptions{
BucketOwner: parsedAcl.Owner,
},
}, err
}
corsConfig, err := auth.ParseCORSOutput(cors)
if err != nil {
return &Response{
MetaOpts: &MetaOptions{
BucketOwner: parsedAcl.Owner,
},
}, err
}
allowConfig, err := corsConfig.IsAllowed(origin, method, parsedHeaders)
if err != nil {
debuglogger.Logf("cors access forbidden: %v", err)
return &Response{
MetaOpts: &MetaOptions{
BucketOwner: parsedAcl.Owner,
},
}, err
}
return &Response{
Headers: map[string]*string{
"Access-Control-Allow-Origin": &allowConfig.Origin,
"Access-Control-Allow-Methods": &allowConfig.Methods,
"Access-Control-Expose-Headers": &allowConfig.ExposedHeaders,
"Access-Control-Allow-Credentials": &allowConfig.AllowCredentials,
"Access-Control-Allow-Headers": &allowConfig.AllowHeaders,
"Access-Control-Max-Age": utils.ConvertPtrToStringPtr(allowConfig.MaxAge),
"Vary": &middlewares.VaryHdr,
},
MetaOpts: &MetaOptions{
BucketOwner: parsedAcl.Owner,
},
}, nil
}
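// A sketch of the browser-style preflight request the CORSOptions handler
// above answers, written with net/http as a separate example; the gateway
// address, origin and requested headers are assumptions.
package main

import (
	"fmt"
	"log"
	"net/http"
)

func main() {
	// Preflight an intended GET of /my-bucket/my-key against an assumed local gateway.
	req, err := http.NewRequest(http.MethodOptions, "http://localhost:7070/my-bucket/my-key", nil)
	if err != nil {
		log.Fatal(err)
	}
	req.Header.Set("Origin", "https://example.com")
	req.Header.Set("Access-Control-Request-Method", "GET")
	req.Header.Set("Access-Control-Request-Headers", "content-type")
	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		log.Fatal(err)
	}
	defer resp.Body.Close()
	fmt.Println(resp.StatusCode, resp.Header.Get("Access-Control-Allow-Methods"))
}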


@@ -0,0 +1,241 @@
// Copyright 2023 Versity Software
// This file is licensed under the Apache License, Version 2.0
// (the "License"); you may not use this file except in compliance
// with the License. You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing,
// software distributed under the License is distributed on an
// "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
// KIND, either express or implied. See the License for the
// specific language governing permissions and limitations
// under the License.
package controllers
import (
"context"
"encoding/xml"
"errors"
"net/http"
"testing"
"github.com/stretchr/testify/assert"
"github.com/versity/versitygw/auth"
"github.com/versity/versitygw/s3api/middlewares"
"github.com/versity/versitygw/s3api/utils"
"github.com/versity/versitygw/s3err"
)
func TestS3ApiController_CORSOptions(t *testing.T) {
maxAge := int32(10000)
cors, err := xml.Marshal(auth.CORSConfiguration{
Rules: []auth.CORSRule{
{
AllowedOrigins: []string{"example.com"},
AllowedMethods: []auth.CORSHTTPMethod{http.MethodGet, http.MethodPost},
AllowedHeaders: []auth.CORSHeader{"Content-Type", "Content-Disposition"},
ExposeHeaders: []auth.CORSHeader{"Content-Encoding", "date"},
MaxAgeSeconds: &maxAge,
},
},
})
assert.NoError(t, err)
tests := []struct {
name string
input testInput
output testOutput
}{
{
name: "missing origin",
input: testInput{
locals: defaultLocals,
headers: map[string]string{
"Access-Control-Request-Method": "GET",
"Access-Control-Request-Headers": "Content-Type",
},
},
output: testOutput{
response: &Response{
MetaOpts: &MetaOptions{
BucketOwner: "root",
},
},
err: s3err.GetAPIError(s3err.ErrMissingCORSOrigin),
},
},
{
name: "invalid method",
input: testInput{
locals: defaultLocals,
headers: map[string]string{
"Origin": "example.com",
"Access-Control-Request-Method": "invalid_method",
"Access-Control-Request-Headers": "Content-Type",
},
},
output: testOutput{
response: &Response{
MetaOpts: &MetaOptions{
BucketOwner: "root",
},
},
err: s3err.GetInvalidCORSMethodErr("invalid_method"),
},
},
{
name: "invalid headers",
input: testInput{
locals: defaultLocals,
headers: map[string]string{
"Origin": "example.com",
"Access-Control-Request-Method": "GET",
"Access-Control-Request-Headers": "Content Type",
},
},
output: testOutput{
response: &Response{
MetaOpts: &MetaOptions{
BucketOwner: "root",
},
},
err: s3err.GetInvalidCORSRequestHeaderErr("Content Type"),
},
},
{
name: "fails to get bucket cors",
input: testInput{
locals: defaultLocals,
headers: map[string]string{
"Origin": "example.com",
"Access-Control-Request-Method": "GET",
"Access-Control-Request-Headers": "Content-Type",
},
beRes: []byte{},
beErr: s3err.GetAPIError(s3err.ErrNoSuchBucket),
},
output: testOutput{
response: &Response{
MetaOpts: &MetaOptions{
BucketOwner: "root",
},
},
err: s3err.GetAPIError(s3err.ErrNoSuchBucket),
},
},
{
name: "bucket cors is not enabled",
input: testInput{
locals: defaultLocals,
headers: map[string]string{
"Origin": "example.com",
"Access-Control-Request-Method": "GET",
"Access-Control-Request-Headers": "Content-Type",
},
beRes: []byte{},
beErr: s3err.GetAPIError(s3err.ErrNoSuchCORSConfiguration),
},
output: testOutput{
response: &Response{
MetaOpts: &MetaOptions{
BucketOwner: "root",
},
},
err: s3err.GetAPIError(s3err.ErrCORSIsNotEnabled),
},
},
{
name: "fails to parse bucket cors",
input: testInput{
locals: defaultLocals,
headers: map[string]string{
"Origin": "example.com",
"Access-Control-Request-Method": "GET",
"Access-Control-Request-Headers": "Content-Type",
},
beRes: []byte("invalid_cors"),
},
output: testOutput{
response: &Response{
MetaOpts: &MetaOptions{
BucketOwner: "root",
},
},
err: errors.New("failed to parse cors config:"),
},
},
{
name: "cors is not allowed",
input: testInput{
locals: defaultLocals,
headers: map[string]string{
"Origin": "example.com",
"Access-Control-Request-Method": "PUT",
"Access-Control-Request-Headers": "Content-Type",
},
beRes: cors,
},
output: testOutput{
response: &Response{
MetaOpts: &MetaOptions{
BucketOwner: "root",
},
},
err: s3err.GetAPIError(s3err.ErrCORSForbidden),
},
},
{
name: "success: cors is allowed",
input: testInput{
locals: defaultLocals,
headers: map[string]string{
"Origin": "example.com",
"Access-Control-Request-Method": "GET",
"Access-Control-Request-Headers": "content-type, Content-Disposition",
},
beRes: cors,
},
output: testOutput{
response: &Response{
Headers: map[string]*string{
"Access-Control-Allow-Origin": utils.GetStringPtr("example.com"),
"Access-Control-Allow-Methods": utils.GetStringPtr("GET, POST"),
"Access-Control-Expose-Headers": utils.GetStringPtr("Content-Encoding, date"),
"Access-Control-Allow-Credentials": utils.GetStringPtr("true"),
"Access-Control-Allow-Headers": utils.GetStringPtr("content-type, content-disposition"),
"Access-Control-Max-Age": utils.ConvertToStringPtr(maxAge),
"Vary": &middlewares.VaryHdr,
},
MetaOpts: &MetaOptions{
BucketOwner: "root",
},
},
},
},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
be := &BackendMock{
GetBucketCorsFunc: func(contextMoqParam context.Context, bucket string) ([]byte, error) {
return tt.input.beRes.([]byte), tt.input.beErr
},
}
ctrl := S3ApiController{
be: be,
}
testController(
t,
ctrl.CORSOptions,
tt.output.response,
tt.output.err,
ctxInputs{
locals: tt.input.locals,
headers: tt.input.headers,
})
})
}
}


@@ -15,65 +15,26 @@
package middlewares
import (
"net/http"
"regexp"
"strings"
"github.com/aws/aws-sdk-go-v2/service/s3"
"github.com/gofiber/fiber/v2"
"github.com/versity/versitygw/auth"
"github.com/versity/versitygw/backend"
"github.com/versity/versitygw/s3api/controllers"
"github.com/versity/versitygw/s3api/utils"
"github.com/versity/versitygw/s3err"
"github.com/versity/versitygw/s3log"
)
var (
singlePath = regexp.MustCompile(`^/[^/]+/?$`)
)
func AclParser(be backend.Backend, logger s3log.AuditLogger, readonly bool) fiber.Handler {
// ParseAcl retrieves the bucket ACL and stores it in the context locals
// if no bucket is found, it returns 'NoSuchBucket'
func ParseAcl(be backend.Backend) fiber.Handler {
return func(ctx *fiber.Ctx) error {
path := ctx.Path()
pathParts := strings.Split(path, "/")
bucket := pathParts[1]
if path == "/" && ctx.Method() == http.MethodGet {
return ctx.Next()
}
if ctx.Method() == http.MethodPatch {
return ctx.Next()
}
if singlePath.MatchString(path) &&
ctx.Method() == http.MethodPut &&
!ctx.Request().URI().QueryArgs().Has("acl") &&
!ctx.Request().URI().QueryArgs().Has("tagging") &&
!ctx.Request().URI().QueryArgs().Has("versioning") &&
!ctx.Request().URI().QueryArgs().Has("policy") &&
!ctx.Request().URI().QueryArgs().Has("object-lock") &&
!ctx.Request().URI().QueryArgs().Has("ownershipControls") &&
!ctx.Request().URI().QueryArgs().Has("cors") {
isRoot, acct := utils.ContextKeyIsRoot.Get(ctx).(bool), utils.ContextKeyAccount.Get(ctx).(auth.Account)
if err := auth.MayCreateBucket(acct, isRoot); err != nil {
return controllers.SendXMLResponse(ctx, nil, err, &controllers.MetaOpts{Logger: logger, Action: "CreateBucket"})
}
if readonly {
return controllers.SendXMLResponse(ctx, nil, s3err.GetAPIError(s3err.ErrAccessDenied),
&controllers.MetaOpts{
Logger: logger,
Action: "CreateBucket",
})
}
return ctx.Next()
}
bucket := ctx.Params("bucket")
data, err := be.GetBucketAcl(ctx.Context(), &s3.GetBucketAclInput{Bucket: &bucket})
if err != nil {
return controllers.SendResponse(ctx, err, &controllers.MetaOpts{Logger: logger})
return err
}
parsedAcl, err := auth.ParseACL(data)
if err != nil {
return controllers.SendResponse(ctx, err, &controllers.MetaOpts{Logger: logger})
return err
}
// if owner is not set, set default owner to root account
@@ -82,6 +43,6 @@ func AclParser(be backend.Backend, logger s3log.AuditLogger, readonly bool) fibe
}
utils.ContextKeyParsedAcl.Set(ctx, parsedAcl)
return ctx.Next()
return nil
}
}


@@ -15,46 +15,20 @@
package middlewares
import (
"strings"
"github.com/gofiber/fiber/v2"
"github.com/versity/versitygw/auth"
"github.com/versity/versitygw/metrics"
"github.com/versity/versitygw/s3api/controllers"
"github.com/versity/versitygw/s3api/utils"
"github.com/versity/versitygw/s3err"
"github.com/versity/versitygw/s3log"
)
func IsAdmin(logger s3log.AuditLogger) fiber.Handler {
// IsAdmin is a middleware that restricts access to admin APIs, allowing only admin users
func IsAdmin(action string) fiber.Handler {
return func(ctx *fiber.Ctx) error {
acct := utils.ContextKeyAccount.Get(ctx).(auth.Account)
if acct.Role != auth.RoleAdmin {
path := ctx.Path()
return controllers.SendResponse(ctx, s3err.GetAPIError(s3err.ErrAdminAccessDenied),
&controllers.MetaOpts{
Logger: logger,
Action: detectAction(path),
})
return s3err.GetAPIError(s3err.ErrAdminAccessDenied)
}
return ctx.Next()
return nil
}
}
func detectAction(path string) (action string) {
if strings.Contains(path, "create-user") {
action = metrics.ActionAdminCreateUser
} else if strings.Contains(path, "update-user") {
action = metrics.ActionAdminUpdateUser
} else if strings.Contains(path, "delete-user") {
action = metrics.ActionAdminDeleteUser
} else if strings.Contains(path, "list-user") {
action = metrics.ActionAdminListUsers
} else if strings.Contains(path, "list-buckets") {
action = metrics.ActionAdminListBuckets
} else if strings.Contains(path, "change-bucket-owner") {
action = metrics.ActionAdminChangeBucketOwner
}
return action
}


@@ -0,0 +1,105 @@
// Copyright 2023 Versity Software
// This file is licensed under the Apache License, Version 2.0
// (the "License"); you may not use this file except in compliance
// with the License. You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing,
// software distributed under the License is distributed on an
// "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
// KIND, either express or implied. See the License for the
// specific language governing permissions and limitations
// under the License.
package middlewares
import (
"fmt"
"github.com/gofiber/fiber/v2"
"github.com/versity/versitygw/auth"
"github.com/versity/versitygw/backend"
"github.com/versity/versitygw/debuglogger"
"github.com/versity/versitygw/s3err"
)
// the Vary http response header value is always the constant below
var VaryHdr = "Origin, Access-Control-Request-Headers, Access-Control-Request-Method"
// ApplyBucketCORS retrieves the bucket CORS configuration,
// checks if the origin and method meet the cors rules and
// adds the necessary response headers.
// The CORS check is applied only when the 'Origin' request header is present
func ApplyBucketCORS(be backend.Backend) fiber.Handler {
return func(ctx *fiber.Ctx) error {
bucket := ctx.Params("bucket")
origin := ctx.Get("Origin")
// if the origin request header is empty, skip cors validation
if origin == "" {
return nil
}
// if bucket cors is not set, skip the check
data, err := be.GetBucketCors(ctx.Context(), bucket)
if err != nil {
// If CORS is not configured, S3Error will have code NoSuchCORSConfiguration.
// In this case, we can safely continue. For any other error, we should log it.
s3Err, ok := err.(s3err.APIError)
if !ok || s3Err.Code != "NoSuchCORSConfiguration" {
debuglogger.Logf("failed to get bucket cors for bucket %q: %v", bucket, err)
}
return nil
}
cors, err := auth.ParseCORSOutput(data)
if err != nil {
return nil
}
method := auth.CORSHTTPMethod(ctx.Get("Access-Control-Request-Method"))
headers := ctx.Get("Access-Control-Request-Headers")
// if request method is not specified with Access-Control-Request-Method
// override it with the actual request method
if method.IsEmpty() {
method = auth.CORSHTTPMethod(ctx.Request().Header.Method())
} else if !method.IsValid() {
// check if the requested method is valid
debuglogger.Logf("invalid cors method: %s", method)
return s3err.GetInvalidCORSMethodErr(method.String())
}
// parse and validate headers
parsedHeaders, err := auth.ParseCORSHeaders(headers)
if err != nil {
return err
}
allowConfig, err := cors.IsAllowed(origin, method, parsedHeaders)
if err != nil {
// if the bucket cors rules don't grant access, skip
// and don't add any response headers
return nil
}
if allowConfig.MaxAge != nil {
ctx.Response().Header.Add("Access-Control-Max-Age", fmt.Sprint(*allowConfig.MaxAge))
}
for key, val := range map[string]string{
"Access-Control-Allow-Origin": allowConfig.Origin,
"Access-Control-Allow-Methods": allowConfig.Methods,
"Access-Control-Expose-Headers": allowConfig.ExposedHeaders,
"Access-Control-Allow-Credentials": allowConfig.AllowCredentials,
"Access-Control-Allow-Headers": allowConfig.AllowHeaders,
"Vary": VaryHdr,
} {
if val != "" {
ctx.Response().Header.Add(key, val)
}
}
return nil
}
}
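A minimal wiring sketch for the middleware above (not part of this change set): the handler returns nil to continue and an error to abort, so the caller decides when to call ctx.Next(). The backend value and the plain app.Use registration are assumptions for illustration; the real router registers it on bucket-scoped routes where the ":bucket" parameter is available.

package example

import (
	"github.com/gofiber/fiber/v2"
	"github.com/versity/versitygw/backend"
	"github.com/versity/versitygw/s3api/middlewares"
)

// registerBucketCORS wires ApplyBucketCORS so that a nil return continues the
// chain and any error (e.g. an invalid Access-Control-Request-Method) is
// propagated to the app's error handler.
func registerBucketCORS(app *fiber.App, be backend.Backend) {
	corsCheck := middlewares.ApplyBucketCORS(be)
	app.Use(func(ctx *fiber.Ctx) error {
		if err := corsCheck(ctx); err != nil {
			return err
		}
		return ctx.Next()
	})
}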


@@ -25,11 +25,8 @@ import (
"github.com/gofiber/fiber/v2"
"github.com/versity/versitygw/auth"
"github.com/versity/versitygw/metrics"
"github.com/versity/versitygw/s3api/controllers"
"github.com/versity/versitygw/s3api/utils"
"github.com/versity/versitygw/s3err"
"github.com/versity/versitygw/s3log"
)
const (
@@ -42,45 +39,45 @@ type RootUserConfig struct {
Secret string
}
func VerifyV4Signature(root RootUserConfig, iam auth.IAMService, logger s3log.AuditLogger, mm *metrics.Manager, region string, debug bool) fiber.Handler {
func VerifyV4Signature(root RootUserConfig, iam auth.IAMService, region string) fiber.Handler {
acct := accounts{root: root, iam: iam}
return func(ctx *fiber.Ctx) error {
// The bucket is public, no need to check this signature
if utils.ContextKeyPublicBucket.IsSet(ctx) {
return ctx.Next()
return nil
}
// If ContextKeyAuthenticated is set in context locals, it means it was presigned url case
if utils.ContextKeyAuthenticated.IsSet(ctx) {
return ctx.Next()
return nil
}
authorization := ctx.Get("Authorization")
if authorization == "" {
return sendResponse(ctx, s3err.GetAPIError(s3err.ErrAuthHeaderEmpty), logger, mm)
return s3err.GetAPIError(s3err.ErrAuthHeaderEmpty)
}
authData, err := utils.ParseAuthorization(authorization)
if err != nil {
return sendResponse(ctx, err, logger, mm)
return err
}
if authData.Region != region {
return sendResponse(ctx, s3err.APIError{
return s3err.APIError{
Code: "SignatureDoesNotMatch",
Description: fmt.Sprintf("Credential should be scoped to a valid Region, not %v", authData.Region),
HTTPStatusCode: http.StatusForbidden,
}, logger, mm)
}
}
utils.ContextKeyIsRoot.Set(ctx, authData.Access == root.Access)
account, err := acct.getAccount(authData.Access)
if err == auth.ErrNoSuchUser {
return sendResponse(ctx, s3err.GetAPIError(s3err.ErrInvalidAccessKeyID), logger, mm)
return s3err.GetAPIError(s3err.ErrInvalidAccessKeyID)
}
if err != nil {
return sendResponse(ctx, err, logger, mm)
return err
}
utils.ContextKeyAccount.Set(ctx, account)
@@ -88,23 +85,23 @@ func VerifyV4Signature(root RootUserConfig, iam auth.IAMService, logger s3log.Au
// Check X-Amz-Date header
date := ctx.Get("X-Amz-Date")
if date == "" {
return sendResponse(ctx, s3err.GetAPIError(s3err.ErrMissingDateHeader), logger, mm)
return s3err.GetAPIError(s3err.ErrMissingDateHeader)
}
// Parse the date and check the date validity
tdate, err := time.Parse(iso8601Format, date)
if err != nil {
return sendResponse(ctx, s3err.GetAPIError(s3err.ErrMalformedDate), logger, mm)
return s3err.GetAPIError(s3err.ErrMalformedDate)
}
if date[:8] != authData.Date {
return sendResponse(ctx, s3err.GetAPIError(s3err.ErrSignatureDateDoesNotMatch), logger, mm)
return s3err.GetAPIError(s3err.ErrSignatureDateDoesNotMatch)
}
// Validate the dates difference
err = utils.ValidateDate(tdate)
if err != nil {
return sendResponse(ctx, err, logger, mm)
return err
}
var contentLength int64
@@ -113,17 +110,20 @@ func VerifyV4Signature(root RootUserConfig, iam auth.IAMService, logger s3log.Au
contentLength, err = strconv.ParseInt(contentLengthStr, 10, 64)
//TODO: not sure if InvalidRequest should be returned in this case
if err != nil {
return sendResponse(ctx, s3err.GetAPIError(s3err.ErrInvalidRequest), logger, mm)
return s3err.GetAPIError(s3err.ErrInvalidRequest)
}
}
hashPayload := ctx.Get("X-Amz-Content-Sha256")
if !utils.IsValidSh256PayloadHeader(hashPayload) {
return s3err.GetAPIError(s3err.ErrInvalidSHA256Paylod)
}
if utils.IsBigDataAction(ctx) {
// for streaming PUT actions, authorization is deferred
// until end of stream due to need to get length and
// checksum of the stream to validate authorization
wrapBodyReader(ctx, func(r io.Reader) io.Reader {
return utils.NewAuthReader(ctx, r, authData, account.Secret, debug)
return utils.NewAuthReader(ctx, r, authData, account.Secret)
})
// wrap the io.Reader with ChunkReader if x-amz-content-sha256
@@ -136,23 +136,23 @@ func VerifyV4Signature(root RootUserConfig, iam auth.IAMService, logger s3log.Au
return cr
})
if err != nil {
return sendResponse(ctx, err, logger, mm)
return err
}
return ctx.Next()
return nil
}
// Content-Length has to be set for data uploads: PutObject, UploadPart
if contentLengthStr == "" {
return sendResponse(ctx, s3err.GetAPIError(s3err.ErrMissingContentLength), logger, mm)
return s3err.GetAPIError(s3err.ErrMissingContentLength)
}
// the upload limit for big data actions: PutObject, UploadPart
// is 5gb. If the size exceeds the limit, return 'EntityTooLarge' err
if contentLength > maxObjSizeLimit {
return sendResponse(ctx, s3err.GetAPIError(s3err.ErrEntityTooLarge), logger, mm)
return s3err.GetAPIError(s3err.ErrEntityTooLarge)
}
return ctx.Next()
return nil
}
if !utils.IsSpecialPayload(hashPayload) {
@@ -162,16 +162,16 @@ func VerifyV4Signature(root RootUserConfig, iam auth.IAMService, logger s3log.Au
// Compare the calculated hash with the hash provided
if hashPayload != hexPayload {
return sendResponse(ctx, s3err.GetAPIError(s3err.ErrContentSHA256Mismatch), logger, mm)
return s3err.GetAPIError(s3err.ErrContentSHA256Mismatch)
}
}
err = utils.CheckValidSignature(ctx, authData, account.Secret, hashPayload, tdate, contentLength, debug)
err = utils.CheckValidSignature(ctx, authData, account.Secret, hashPayload, tdate, contentLength)
if err != nil {
return sendResponse(ctx, err, logger, mm)
return err
}
return ctx.Next()
return nil
}
}
@@ -185,13 +185,9 @@ func (a accounts) getAccount(access string) (auth.Account, error) {
return auth.Account{
Access: a.root.Access,
Secret: a.root.Secret,
Role: "admin",
Role: auth.RoleAdmin,
}, nil
}
return a.iam.GetUserAccount(access)
}
func sendResponse(ctx *fiber.Ctx, err error, logger s3log.AuditLogger, mm *metrics.Manager) error {
return controllers.SendResponse(ctx, err, &controllers.MetaOpts{Logger: logger, MetricsMng: mm})
}


@@ -15,6 +15,7 @@
package middlewares
import (
"bytes"
"io"
"github.com/gofiber/fiber/v2"
@@ -25,6 +26,11 @@ func wrapBodyReader(ctx *fiber.Ctx, wr func(io.Reader) io.Reader) {
r, ok := utils.ContextKeyBodyReader.Get(ctx).(io.Reader)
if !ok {
r = ctx.Request().BodyStream()
// Override the body reader with an empty reader to prevent panics
// in case of unexpected or malformed HTTP requests.
if r == nil {
r = bytes.NewBuffer([]byte{})
}
}
r = wr(r)


@@ -16,21 +16,27 @@ package middlewares
import (
"github.com/gofiber/fiber/v2"
"github.com/versity/versitygw/metrics"
"github.com/versity/versitygw/s3api/utils"
"github.com/versity/versitygw/s3err"
"github.com/versity/versitygw/s3log"
)
func ValidateBucketObjectNames(l s3log.AuditLogger, mm *metrics.Manager) fiber.Handler {
// BucketObjectNameValidator extracts and validates
// the bucket and object names from the request URI.
func BucketObjectNameValidator() fiber.Handler {
return func(ctx *fiber.Ctx) error {
bucket, object := parsePath(ctx.Path())
if bucket != "" && !utils.IsValidBucketName(bucket) {
return sendResponse(ctx, s3err.GetAPIError(s3err.ErrInvalidBucketName), l, mm)
// check if the provided bucket name is valid
if !utils.IsValidBucketName(bucket) {
return s3err.GetAPIError(s3err.ErrInvalidBucketName)
}
// check if the provided object name is valid
// skip for empty objects, e.g. bucket operations: HeadBucket...
if object != "" && !utils.IsObjectNameValid(object) {
return sendResponse(ctx, s3err.GetAPIError(s3err.ErrBadRequest), l, mm)
return s3err.GetAPIError(s3err.ErrBadRequest)
}
return ctx.Next()
return nil
}
}


@@ -16,7 +16,7 @@ package middlewares
import (
"github.com/gofiber/fiber/v2"
"github.com/versity/versitygw/s3api/debuglogger"
"github.com/versity/versitygw/debuglogger"
)
func DebugLogger() fiber.Handler {


@@ -19,17 +19,15 @@ import (
"io"
"github.com/gofiber/fiber/v2"
"github.com/versity/versitygw/s3api/controllers"
"github.com/versity/versitygw/s3api/utils"
"github.com/versity/versitygw/s3err"
"github.com/versity/versitygw/s3log"
)
func VerifyMD5Body(logger s3log.AuditLogger) fiber.Handler {
func VerifyMD5Body() fiber.Handler {
return func(ctx *fiber.Ctx) error {
incomingSum := ctx.Get("Content-Md5")
if incomingSum == "" {
return ctx.Next()
return nil
}
if utils.IsBigDataAction(ctx) {
@@ -39,18 +37,18 @@ func VerifyMD5Body(logger s3log.AuditLogger) fiber.Handler {
return r
})
if err != nil {
return controllers.SendResponse(ctx, err, &controllers.MetaOpts{Logger: logger})
return err
}
return ctx.Next()
return nil
}
sum := md5.Sum(ctx.Body())
calculatedSum := utils.Base64SumString(sum[:])
if incomingSum != calculatedSum {
return controllers.SendResponse(ctx, s3err.GetAPIError(s3err.ErrInvalidDigest), &controllers.MetaOpts{Logger: logger})
return s3err.GetAPIError(s3err.ErrInvalidDigest)
}
return ctx.Next()
return nil
}
}


@@ -20,22 +20,20 @@ import (
"github.com/gofiber/fiber/v2"
"github.com/versity/versitygw/auth"
"github.com/versity/versitygw/metrics"
"github.com/versity/versitygw/s3api/utils"
"github.com/versity/versitygw/s3err"
"github.com/versity/versitygw/s3log"
)
func VerifyPresignedV4Signature(root RootUserConfig, iam auth.IAMService, logger s3log.AuditLogger, mm *metrics.Manager, region string, debug bool) fiber.Handler {
func VerifyPresignedV4Signature(root RootUserConfig, iam auth.IAMService, region string) fiber.Handler {
acct := accounts{root: root, iam: iam}
return func(ctx *fiber.Ctx) error {
// The bucket is public, no need to check this signature
if utils.ContextKeyPublicBucket.IsSet(ctx) {
return ctx.Next()
return nil
}
if ctx.Query("X-Amz-Signature") == "" {
return ctx.Next()
return nil
}
// Set in the context the "authenticated" key, in case the authentication succeeds,
@@ -44,17 +42,17 @@ func VerifyPresignedV4Signature(root RootUserConfig, iam auth.IAMService, logger
authData, err := utils.ParsePresignedURIParts(ctx)
if err != nil {
return sendResponse(ctx, err, logger, mm)
return err
}
utils.ContextKeyIsRoot.Set(ctx, authData.Access == root.Access)
account, err := acct.getAccount(authData.Access)
if err == auth.ErrNoSuchUser {
return sendResponse(ctx, s3err.GetAPIError(s3err.ErrInvalidAccessKeyID), logger, mm)
return s3err.GetAPIError(s3err.ErrInvalidAccessKeyID)
}
if err != nil {
return sendResponse(ctx, err, logger, mm)
return err
}
utils.ContextKeyAccount.Set(ctx, account)
@@ -64,32 +62,32 @@ func VerifyPresignedV4Signature(root RootUserConfig, iam auth.IAMService, logger
contentLength, err = strconv.ParseInt(contentLengthStr, 10, 64)
//TODO: not sure if InvalidRequest should be returned in this case
if err != nil {
return sendResponse(ctx, s3err.GetAPIError(s3err.ErrInvalidRequest), logger, mm)
return err
}
}
if utils.IsBigDataAction(ctx) {
// Content-Length has to be set for data uploads: PutObject, UploadPart
if contentLengthStr == "" {
return sendResponse(ctx, s3err.GetAPIError(s3err.ErrMissingContentLength), logger, mm)
return s3err.GetAPIError(s3err.ErrMissingContentLength)
}
// the upload limit for big data actions: PutObject, UploadPart
// is 5gb. If the size exceeds the limit, return 'EntityTooLarge' err
if contentLength > maxObjSizeLimit {
return sendResponse(ctx, s3err.GetAPIError(s3err.ErrEntityTooLarge), logger, mm)
return s3err.GetAPIError(s3err.ErrEntityTooLarge)
}
wrapBodyReader(ctx, func(r io.Reader) io.Reader {
return utils.NewPresignedAuthReader(ctx, r, authData, account.Secret, debug)
return utils.NewPresignedAuthReader(ctx, r, authData, account.Secret)
})
return ctx.Next()
return nil
}
err = utils.CheckPresignedSignature(ctx, authData, account.Secret, debug)
err = utils.CheckPresignedSignature(ctx, authData, account.Secret)
if err != nil {
return sendResponse(ctx, err, logger, mm)
return err
}
return ctx.Next()
return nil
}
}


@@ -24,26 +24,40 @@ import (
"github.com/versity/versitygw/metrics"
"github.com/versity/versitygw/s3api/utils"
"github.com/versity/versitygw/s3err"
"github.com/versity/versitygw/s3log"
)
func AuthorizePublicBucketAccess(be backend.Backend, l s3log.AuditLogger, mm *metrics.Manager) fiber.Handler {
// AuthorizePublicBucketAccess checks if the bucket grants public
// access to anonymous requesters
func AuthorizePublicBucketAccess(be backend.Backend, s3action string, policyPermission auth.Action, permission auth.Permission) fiber.Handler {
return func(ctx *fiber.Ctx) error {
// skip for auhtneicated requests
// skip for authenticated requests
if ctx.Query("X-Amz-Algorithm") != "" || ctx.Get("Authorization") != "" {
return ctx.Next()
return nil
}
switch s3action {
case metrics.ActionListAllMyBuckets:
return s3err.GetAPIError(s3err.ErrAccessDenied)
case metrics.ActionGetBucketOwnershipControls:
return s3err.GetAPIError(s3err.ErrAnonymousGetBucketOwnership)
case metrics.ActionPutBucketOwnershipControls, metrics.ActionDeleteBucketOwnershipControls:
return s3err.GetAPIError(s3err.ErrAnonymousPutBucketOwnership)
case metrics.ActionPutBucketAcl, metrics.ActionPutObjectAcl, metrics.ActionSelectObjectContent, metrics.ActionCreateBucket:
return s3err.GetAPIError(s3err.ErrAnonymousRequest)
case metrics.ActionCopyObject:
return s3err.GetAPIError(s3err.ErrAnonymousCopyObject)
case metrics.ActionCreateMultipartUpload:
return s3err.GetAPIError(s3err.ErrAnonymousCreateMp)
case metrics.ActionUploadPartCopy, metrics.ActionDeleteObjects:
// TODO: should be fixed with https://github.com/versity/versitygw/issues/1327
// TODO: should be fixed with https://github.com/versity/versitygw/issues/1338
return s3err.GetAPIError(s3err.ErrAccessDenied)
}
bucket, object := parsePath(ctx.Path())
action, permission, err := detectS3Action(ctx, object == "")
err := auth.VerifyPublicAccess(ctx.Context(), be, policyPermission, permission, bucket, object)
if err != nil {
return sendResponse(ctx, err, l, mm)
}
err = auth.VerifyPublicAccess(ctx.Context(), be, action, permission, bucket, object)
if err != nil {
return sendResponse(ctx, err, l, mm)
return err
}
if utils.IsBigDataAction(ctx) {
@@ -51,7 +65,7 @@ func AuthorizePublicBucketAccess(be backend.Backend, l s3log.AuditLogger, mm *me
if utils.IsUnsignedStreamingPayload(payloadType) {
checksumType, err := utils.ExtractChecksumType(ctx)
if err != nil {
return sendResponse(ctx, err, l, mm)
return err
}
wrapBodyReader(ctx, func(r io.Reader) io.Reader {
@@ -60,232 +74,17 @@ func AuthorizePublicBucketAccess(be backend.Backend, l s3log.AuditLogger, mm *me
return cr
})
if err != nil {
return sendResponse(ctx, err, l, mm)
return err
}
} else {
utils.ContextKeyBodyReader.Set(ctx, ctx.Request().BodyStream())
}
}
utils.ContextKeyPublicBucket.Set(ctx, true)
return ctx.Next()
}
}
func detectS3Action(ctx *fiber.Ctx, isBucketAction bool) (auth.Action, auth.Permission, error) {
path := ctx.Path()
// ListBuckets is not publicly available
if path == "/" {
//TODO: Still not clear what kind of error should be returned in this case(ListBuckets)
return "", auth.PermissionRead, s3err.GetAPIError(s3err.ErrAccessDenied)
}
queryArgs := ctx.Context().QueryArgs()
switch ctx.Method() {
case fiber.MethodPatch:
// Admin apis should always be protected
return "", "", s3err.GetAPIError(s3err.ErrAccessDenied)
case fiber.MethodHead:
// HeadBucket
if isBucketAction {
return auth.ListBucketAction, auth.PermissionRead, nil
}
// HeadObject
return auth.GetObjectAction, auth.PermissionRead, nil
case fiber.MethodGet:
if isBucketAction {
if queryArgs.Has("tagging") {
// GetBucketTagging
return auth.GetBucketTaggingAction, auth.PermissionRead, nil
} else if queryArgs.Has("ownershipControls") {
// GetBucketOwnershipControls
return auth.GetBucketOwnershipControlsAction, auth.PermissionRead, s3err.GetAPIError(s3err.ErrAnonymousGetBucketOwnership)
} else if queryArgs.Has("versioning") {
// GetBucketVersioning
return auth.GetBucketVersioningAction, auth.PermissionRead, nil
} else if queryArgs.Has("policy") {
// GetBucketPolicy
return auth.GetBucketPolicyAction, auth.PermissionRead, nil
} else if queryArgs.Has("cors") {
// GetBucketCors
return auth.GetBucketCorsAction, auth.PermissionRead, nil
} else if queryArgs.Has("versions") {
// ListObjectVersions
return auth.ListBucketVersionsAction, auth.PermissionRead, nil
} else if queryArgs.Has("object-lock") {
// GetObjectLockConfiguration
return auth.GetBucketObjectLockConfigurationAction, auth.PermissionReadAcp, nil
} else if queryArgs.Has("acl") {
// GetBucketAcl
return auth.GetBucketAclAction, auth.PermissionRead, nil
} else if queryArgs.Has("uploads") {
// ListMultipartUploads
return auth.ListBucketMultipartUploadsAction, auth.PermissionRead, nil
} else if queryArgs.GetUintOrZero("list-type") == 2 {
// ListObjectsV2
return auth.ListBucketAction, auth.PermissionRead, nil
}
// All the other requests are considered as ListObjects in the router
// no matter what kind of query arguments are provided apart from the ones above
return auth.ListBucketAction, auth.PermissionRead, nil
}
if queryArgs.Has("tagging") {
// GetObjectTagging
return auth.GetObjectTaggingAction, auth.PermissionRead, nil
} else if queryArgs.Has("retention") {
// GetObjectRetention
return auth.GetObjectRetentionAction, auth.PermissionRead, nil
} else if queryArgs.Has("legal-hold") {
// GetObjectLegalHold
return auth.GetObjectLegalHoldAction, auth.PermissionReadAcp, nil
} else if queryArgs.Has("acl") {
// GetObjectAcl
return auth.GetObjectAclAction, auth.PermissionRead, nil
} else if queryArgs.Has("attributes") {
// GetObjectAttributes
return auth.GetObjectAttributesAction, auth.PermissionRead, nil
} else if queryArgs.Has("uploadId") {
// ListParts
return auth.ListMultipartUploadPartsAction, auth.PermissionRead, nil
}
// All the other requests are considered as GetObject in the router
// no matter what kind of query arguments are provided apart from the ones above
if queryArgs.Has("versionId") {
return auth.GetObjectVersionAction, auth.PermissionRead, nil
}
return auth.GetObjectAction, auth.PermissionRead, nil
case fiber.MethodPut:
if isBucketAction {
if queryArgs.Has("tagging") {
// PutBucketTagging
return auth.PutBucketTaggingAction, auth.PermissionWrite, nil
}
if queryArgs.Has("ownershipControls") {
// PutBucketOwnershipControls
return auth.PutBucketOwnershipControlsAction, auth.PermissionWrite, s3err.GetAPIError(s3err.ErrAnonymousPutBucketOwnership)
}
if queryArgs.Has("versioning") {
// PutBucketVersioning
return auth.PutBucketVersioningAction, auth.PermissionWrite, nil
}
if queryArgs.Has("object-lock") {
// PutObjectLockConfiguration
return auth.PutBucketObjectLockConfigurationAction, auth.PermissionWrite, nil
}
if queryArgs.Has("cors") {
// PutBucketCors
return auth.PutBucketCorsAction, auth.PermissionWrite, nil
}
if queryArgs.Has("policy") {
// PutBucketPolicy
return auth.PutBucketPolicyAction, auth.PermissionWrite, nil
}
if queryArgs.Has("acl") {
// PutBucketAcl
return auth.PutBucketAclAction, auth.PermissionWrite, s3err.GetAPIError(s3err.ErrAnonymousRequest)
}
// All the other requests are considered as 'CreateBucket' in the router
return "", "", s3err.GetAPIError(s3err.ErrAnonymousRequest)
}
if queryArgs.Has("tagging") {
// PutObjectTagging
return auth.PutObjectTaggingAction, auth.PermissionWrite, nil
}
if queryArgs.Has("retention") {
// PutObjectRetention
return auth.PutObjectRetentionAction, auth.PermissionWrite, nil
}
if queryArgs.Has("legal-hold") {
// PutObjectLegalHold
return auth.PutObjectLegalHoldAction, auth.PermissionWrite, nil
}
if queryArgs.Has("acl") {
// PutObjectAcl
return auth.PutObjectAclAction, auth.PermissionWriteAcp, s3err.GetAPIError(s3err.ErrAnonymousRequest)
}
if queryArgs.Has("uploadId") && queryArgs.Has("partNumber") {
if ctx.Get("X-Amz-Copy-Source") != "" {
// UploadPartCopy
//TODO: Add public access check for copy-source
// Return AccessDenied for now
return auth.PutObjectAction, auth.PermissionWrite, s3err.GetAPIError(s3err.ErrAccessDenied)
}
// UploadPart
utils.ContextKeyBodyReader.Set(ctx, ctx.Request().BodyStream())
return auth.PutObjectAction, auth.PermissionWrite, nil
}
if ctx.Get("X-Amz-Copy-Source") != "" {
return auth.PutObjectAction, auth.PermissionWrite, s3err.GetAPIError(s3err.ErrAnonymousCopyObject)
}
// All the other requests are considered as 'PutObject' in the router
utils.ContextKeyBodyReader.Set(ctx, ctx.Request().BodyStream())
return auth.PutObjectAction, auth.PermissionWrite, nil
case fiber.MethodPost:
if isBucketAction {
// DeleteObjects
// FIXME: should be fixed with https://github.com/versity/versitygw/issues/1327
// Return AccessDenied for now
return auth.DeleteObjectAction, auth.PermissionWrite, s3err.GetAPIError(s3err.ErrAccessDenied)
}
if queryArgs.Has("restore") {
return auth.RestoreObjectAction, auth.PermissionWrite, nil
}
if queryArgs.Has("select") && ctx.Query("select-type") == "2" {
// SelectObjectContent
return auth.GetObjectAction, auth.PermissionRead, s3err.GetAPIError(s3err.ErrAnonymousRequest)
}
if queryArgs.Has("uploadId") {
// CompleteMultipartUpload
return auth.PutObjectAction, auth.PermissionWrite, nil
}
// All the other requests are considered as 'CreateMultipartUpload' in the router
return "", "", s3err.GetAPIError(s3err.ErrAnonymousCreateMp)
case fiber.MethodDelete:
if isBucketAction {
if queryArgs.Has("tagging") {
// DeleteBucketTagging
return auth.PutBucketTaggingAction, auth.PermissionWrite, nil
}
if queryArgs.Has("ownershipControls") {
// DeleteBucketOwnershipControls
return auth.PutBucketOwnershipControlsAction, auth.PermissionWrite, s3err.GetAPIError(s3err.ErrAnonymousPutBucketOwnership)
}
if queryArgs.Has("policy") {
// DeleteBucketPolicy
return auth.PutBucketPolicyAction, auth.PermissionWrite, nil
}
if queryArgs.Has("cors") {
// DeleteBucketCors
return auth.PutBucketCorsAction, auth.PermissionWrite, nil
}
// All the other requests are considered as 'DeleteBucket' in the router
return auth.DeleteBucketAction, auth.PermissionWrite, nil
}
if queryArgs.Has("tagging") {
// DeleteObjectTagging
return auth.PutObjectTaggingAction, auth.PermissionWrite, nil
}
if queryArgs.Has("uploadId") {
// AbortMultipartUpload
return auth.AbortMultipartUploadAction, auth.PermissionWrite, nil
}
// All the other requests are considered as 'DeleteObject' in the router
return auth.DeleteObjectAction, auth.PermissionWrite, nil
default:
// if no action is detected, return AccessDenied
return "", "", s3err.GetAPIError(s3err.ErrAccessDenied)
return nil
}
}
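With detectS3Action removed, each route now supplies its own action and permission pair to AuthorizePublicBucketAccess. A hedged construction sketch (the action string literal is a placeholder, not a constant taken from this diff):

package example

import (
	"github.com/gofiber/fiber/v2"
	"github.com/versity/versitygw/auth"
	"github.com/versity/versitygw/backend"
	"github.com/versity/versitygw/s3api/middlewares"
)

// publicGetObjectCheck builds the anonymous-access check for a GetObject
// style route; "GetObject" stands in for the router's metrics action constant.
func publicGetObjectCheck(be backend.Backend) fiber.Handler {
	return middlewares.AuthorizePublicBucketAccess(
		be,
		"GetObject",          // placeholder s3 action name (assumption)
		auth.GetObjectAction, // policy action checked against the bucket policy
		auth.PermissionRead,  // ACL permission checked against the bucket ACL
	)
}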


@@ -0,0 +1,67 @@
// Copyright 2023 Versity Software
// This file is licensed under the Apache License, Version 2.0
// (the "License"); you may not use this file except in compliance
// with the License. You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing,
// software distributed under the License is distributed on an
// "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
// KIND, either express or implied. See the License for the
// specific language governing permissions and limitations
// under the License.
package middlewares
import (
"github.com/gofiber/fiber/v2"
"github.com/versity/versitygw/s3api/utils"
)
// Evaluates/Matches the provided request query params
func MatchQueryArgs(args ...string) fiber.Handler {
return func(ctx *fiber.Ctx) error {
if utils.ContextKeySkip.IsSet(ctx) {
return ctx.Next()
}
for _, query := range args {
if !ctx.Request().URI().QueryArgs().Has(query) {
utils.ContextKeySkip.Set(ctx, true)
break
}
}
return ctx.Next()
}
}
// Evaluates/Matches the request header
func MatchHeader(key string) fiber.Handler {
return func(ctx *fiber.Ctx) error {
if utils.ContextKeySkip.IsSet(ctx) {
return ctx.Next()
}
val := ctx.Get(key)
if val == "" {
utils.ContextKeySkip.Set(ctx, true)
}
return ctx.Next()
}
}
// Evaluates/Matches the request query param and value
func MatchQueryArgWithValue(key, val string) fiber.Handler {
return func(ctx *fiber.Ctx) error {
if utils.ContextKeySkip.IsSet(ctx) {
return ctx.Next()
}
if ctx.Query(key) != val {
utils.ContextKeySkip.Set(ctx, true)
}
return ctx.Next()
}
}
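A hedged composition sketch (the route shape and the skip-reset step are assumptions, not shown in this diff): a matcher flags non-matching requests via ContextKeySkip, and a later handler in the same route checks the flag before doing route-specific work.

package example

import (
	"github.com/gofiber/fiber/v2"
	"github.com/versity/versitygw/s3api/middlewares"
	"github.com/versity/versitygw/s3api/utils"
)

func registerCompleteMultipartUpload(app *fiber.App) {
	app.Post("/:bucket/:object",
		middlewares.MatchQueryArgs("uploadId"),
		func(ctx *fiber.Ctx) error {
			if utils.ContextKeySkip.IsSet(ctx) {
				// not this route's request; clear the flag and keep matching
				utils.ContextKeySkip.Delete(ctx)
				return ctx.Next()
			}
			// CompleteMultipartUpload handling would be invoked here
			return nil
		},
	)
}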


@@ -18,19 +18,15 @@ import (
"net/url"
"github.com/gofiber/fiber/v2"
"github.com/versity/versitygw/metrics"
"github.com/versity/versitygw/s3api/controllers"
"github.com/versity/versitygw/s3err"
"github.com/versity/versitygw/s3log"
)
func DecodeURL(logger s3log.AuditLogger, mm *metrics.Manager) fiber.Handler {
return func(ctx *fiber.Ctx) error {
unescp, err := url.PathUnescape(string(ctx.Request().URI().PathOriginal()))
if err != nil {
return controllers.SendResponse(ctx, s3err.GetAPIError(s3err.ErrInvalidURI), &controllers.MetaOpts{Logger: logger, MetricsMng: mm})
}
ctx.Path(unescp)
return ctx.Next()
// DecodeURL url path unescapes the request url for the gateway
// to handle some special characters
func DecodeURL(ctx *fiber.Ctx) error {
unescp, err := url.PathUnescape(string(ctx.Request().URI().PathOriginal()))
if err != nil {
return err
}
ctx.Path(unescp)
return nil
}
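For reference, a small standard-library illustration of what the unescape step does to a percent-encoded object key (this shows net/url behavior, not anything specific to this change):

package main

import (
	"fmt"
	"net/url"
)

func main() {
	// %20 decodes to a space and %2B decodes to a literal '+'
	p, err := url.PathUnescape("/my-bucket/dir%20name/report%2B2024.txt")
	fmt.Println(p, err) // /my-bucket/dir name/report+2024.txt <nil>
}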

File diff suppressed because it is too large


@@ -20,6 +20,7 @@ import (
"github.com/gofiber/fiber/v2"
"github.com/versity/versitygw/auth"
"github.com/versity/versitygw/backend"
"github.com/versity/versitygw/s3api/middlewares"
)
func TestS3ApiRouter_Init(t *testing.T) {
@@ -45,7 +46,7 @@ func TestS3ApiRouter_Init(t *testing.T) {
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
tt.sa.Init(tt.args.app, tt.args.be, tt.args.iam, nil, nil, nil, nil, false, false)
tt.sa.Init(tt.args.app, tt.args.be, tt.args.iam, nil, nil, nil, nil, false, "us-east-1", middlewares.RootUserConfig{})
})
}
}


@@ -22,7 +22,9 @@ import (
"github.com/gofiber/fiber/v2/middleware/logger"
"github.com/versity/versitygw/auth"
"github.com/versity/versitygw/backend"
"github.com/versity/versitygw/debuglogger"
"github.com/versity/versitygw/metrics"
"github.com/versity/versitygw/s3api/controllers"
"github.com/versity/versitygw/s3api/middlewares"
"github.com/versity/versitygw/s3event"
"github.com/versity/versitygw/s3log"
@@ -35,7 +37,6 @@ type S3ApiServer struct {
port string
cert *tls.Certificate
quiet bool
debug bool
readonly bool
health string
virtualDomain string
@@ -50,7 +51,7 @@ func New(
l s3log.AuditLogger,
adminLogger s3log.AuditLogger,
evs s3event.S3EventSender,
mm *metrics.Manager,
mm metrics.Manager,
opts ...Option,
) (*S3ApiServer, error) {
server := &S3ApiServer{
@@ -76,33 +77,25 @@ func New(
return ctx.SendStatus(http.StatusOK)
})
}
app.Use(middlewares.DecodeURL(l, mm))
// initialize the default value setter middleware
app.Use(middlewares.SetDefaultValues(root, region))
// initialize the 'DecodeURL' middleware which
// path unescapes the url
app.Use(controllers.WrapMiddleware(middlewares.DecodeURL, l, mm))
// initialize the host-style parser if a virtual domain is specified
if server.virtualDomain != "" {
app.Use(middlewares.HostStyleParser(server.virtualDomain))
}
// initialize the default value setter middleware
app.Use(middlewares.SetDefaultValues(root, region))
// initialize the debug logger in debug mode
if server.debug {
if debuglogger.IsDebugEnabled() {
app.Use(middlewares.DebugLogger())
}
app.Use(middlewares.ValidateBucketObjectNames(l, mm))
// Public buckets access checker
app.Use(middlewares.AuthorizePublicBucketAccess(be, l, mm))
// Authentication middlewares
app.Use(middlewares.VerifyPresignedV4Signature(root, iam, l, mm, region, server.debug))
app.Use(middlewares.VerifyV4Signature(root, iam, l, mm, region, server.debug))
app.Use(middlewares.VerifyMD5Body(l))
app.Use(middlewares.AclParser(be, l, server.readonly))
server.router.Init(app, be, iam, l, adminLogger, evs, mm, server.debug, server.readonly)
server.router.Init(app, be, iam, l, adminLogger, evs, mm, server.readonly, region, root)
return server, nil
}
@@ -120,11 +113,6 @@ func WithAdminServer() Option {
return func(s *S3ApiServer) { s.router.WithAdmSrv = true }
}
// WithDebug sets debug output
func WithDebug() Option {
return func(s *S3ApiServer) { s.debug = true }
}
// WithQuiet silences default logging output
func WithQuiet() Option {
return func(s *S3ApiServer) { s.quiet = true }


@@ -27,6 +27,7 @@ import (
"github.com/aws/smithy-go/logging"
"github.com/gofiber/fiber/v2"
v4 "github.com/versity/versitygw/aws/signer/v4"
"github.com/versity/versitygw/debuglogger"
"github.com/versity/versitygw/s3err"
)
@@ -45,14 +46,13 @@ type AuthReader struct {
secret string
size int
r *HashReader
debug bool
}
// NewAuthReader initializes an io.Reader that will verify the request
// v4 auth when the underlying reader returns io.EOF. This postpones the
// authorization check until the reader is consumed. So it is important that
// the consumer of this reader checks for the auth errors while reading.
func NewAuthReader(ctx *fiber.Ctx, r io.Reader, auth AuthData, secret string, debug bool) *AuthReader {
func NewAuthReader(ctx *fiber.Ctx, r io.Reader, auth AuthData, secret string) *AuthReader {
var hr *HashReader
hashPayload := ctx.Get("X-Amz-Content-Sha256")
if !IsSpecialPayload(hashPayload) {
@@ -66,7 +66,6 @@ func NewAuthReader(ctx *fiber.Ctx, r io.Reader, auth AuthData, secret string, de
r: hr,
auth: auth,
secret: secret,
debug: debug,
}
}
@@ -107,7 +106,7 @@ func (ar *AuthReader) validateSignature() error {
return s3err.GetAPIError(s3err.ErrMalformedDate)
}
return CheckValidSignature(ar.ctx, ar.auth, ar.secret, hashPayload, tdate, int64(ar.size), ar.debug)
return CheckValidSignature(ar.ctx, ar.auth, ar.secret, hashPayload, tdate, int64(ar.size))
}
const (
@@ -115,7 +114,7 @@ const (
)
// CheckValidSignature validates the ctx v4 auth signature
func CheckValidSignature(ctx *fiber.Ctx, auth AuthData, secret, checksum string, tdate time.Time, contentLen int64, debug bool) error {
func CheckValidSignature(ctx *fiber.Ctx, auth AuthData, secret, checksum string, tdate time.Time, contentLen int64) error {
signedHdrs := strings.Split(auth.SignedHeaders, ";")
// Create a new http request instance from fasthttp request
@@ -134,7 +133,7 @@ func CheckValidSignature(ctx *fiber.Ctx, auth AuthData, secret, checksum string,
req, checksum, service, auth.Region, tdate, signedHdrs,
func(options *v4.SignerOptions) {
options.DisableURIPathEscaping = true
if debug {
if debuglogger.IsDebugEnabled() {
options.LogSigning = true
options.Logger = logging.NewStandardLogger(os.Stderr)
}
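As the NewAuthReader comment above notes, the signature check is deferred until the stream is drained, so the consumer must treat the error from the final read as a possible auth failure. A hedged consumer-side sketch:

package example

import "io"

// drainWithDeferredAuth copies the wrapped body and surfaces the deferred
// signature error, if any, which is only reported once io.EOF is reached.
func drainWithDeferredAuth(dst io.Writer, authReader io.Reader) error {
	_, err := io.Copy(dst, authReader)
	// err may be a signature mismatch reported at end of stream,
	// not only a transport failure
	return err
}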


@@ -15,6 +15,7 @@
package utils
import (
"encoding/hex"
"fmt"
"io"
"net/http"
@@ -23,7 +24,7 @@ import (
"time"
"github.com/gofiber/fiber/v2"
"github.com/versity/versitygw/s3api/debuglogger"
"github.com/versity/versitygw/debuglogger"
"github.com/versity/versitygw/s3err"
)
@@ -105,6 +106,28 @@ func IsSpecialPayload(str string) bool {
return specialValues[payloadType(str)]
}
// IsValidSh256PayloadHeader checks if the provided x-amz-content-sha256
// payload header is a valid special payload type or a valid sha256 hash
func IsValidSh256PayloadHeader(value string) bool {
// empty header is valid
if value == "" {
return true
}
// special values are valid
if specialValues[payloadType(value)] {
return true
}
// check to be a valid sha256
if len(value) != 64 {
return false
}
// decode the string as hex
_, err := hex.DecodeString(value)
return err == nil
}
// Checks if the provided string is unsigned payload trailer type
func IsUnsignedStreamingPayload(str string) bool {
return payloadType(str) == payloadTypeStreamingUnsignedTrailer


@@ -0,0 +1,43 @@
// Copyright 2024 Versity Software
// This file is licensed under the Apache License, Version 2.0
// (the "License"); you may not use this file except in compliance
// with the License. You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing,
// software distributed under the License is distributed on an
// "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
// KIND, either express or implied. See the License for the
// specific language governing permissions and limitations
// under the License.
package utils
import "testing"
func TestIsValidSh256PayloadHeader(t *testing.T) {
tests := []struct {
name string
hash string
want bool
}{
{"empty header", "", true},
{"special payload type 1", "UNSIGNED-PAYLOAD", true},
{"special payload type 2", "STREAMING-UNSIGNED-PAYLOAD-TRAILER", true},
{"special payload type 3", "STREAMING-AWS4-HMAC-SHA256-PAYLOAD", true},
{"special payload type 4", "STREAMING-AWS4-HMAC-SHA256-PAYLOAD-TRAILER", true},
{"special payload type 5", "STREAMING-AWS4-ECDSA-P256-SHA256-PAYLOAD", true},
{"special payload type 6", "STREAMING-AWS4-ECDSA-P256-SHA256-PAYLOAD-TRAILER", true},
{"invalid hext", "invalid_hex", false},
{"valid hex, but not sha256", "d41d8cd98f00b204e9800998ecf8427e", false},
{"valid sh256", "9c56cc51b374bb0f2f8d55af2b34d2a6f8f7f42dd4bbcccbbf8e3279b6e1e6d4", true},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
if got := IsValidSh256PayloadHeader(tt.hash); got != tt.want {
t.Errorf("IsValidSh256PayloadHeader() = %v, want %v", got, tt.want)
}
})
}
}


@@ -34,6 +34,7 @@ const (
ContextKeyParsedAcl ContextKey = "parsed-acl"
ContextKeySkipResBodyLog ContextKey = "skip-res-body-log"
ContextKeyBodyReader ContextKey = "body-reader"
ContextKeySkip ContextKey = "__skip"
)
func (ck ContextKey) Values() []ContextKey {
@@ -60,6 +61,10 @@ func (ck ContextKey) IsSet(ctx *fiber.Ctx) bool {
return val != nil
}
func (ck ContextKey) Delete(ctx *fiber.Ctx) {
ctx.Locals(string(ck), nil)
}
func (ck ContextKey) Get(ctx *fiber.Ctx) any {
return ctx.Locals(string(ck))
}
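A hedged sketch of the new Delete helper next to Set and IsSet (the key choice is illustrative; ctx is a *fiber.Ctx inside a handler):

package example

import (
	"github.com/gofiber/fiber/v2"
	"github.com/versity/versitygw/s3api/utils"
)

func toggleFlag(ctx *fiber.Ctx) {
	utils.ContextKeyPublicBucket.Set(ctx, true)
	_ = utils.ContextKeyPublicBucket.IsSet(ctx) // true
	// Delete stores nil for the key, so IsSet reports false afterwards
	utils.ContextKeyPublicBucket.Delete(ctx)
	_ = utils.ContextKeyPublicBucket.IsSet(ctx) // false
}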

s3api/utils/crc.go Normal file

@@ -0,0 +1,180 @@
// Copyright (C) 1995-2017 Jean-loup Gailly and Mark Adler
//
// This software is provided 'as-is', without any express or implied
// warranty. In no event will the authors be held liable for any damages
// arising from the use of this software.
//
// Permission is granted to anyone to use this software for any purpose,
// including commercial applications, and to alter it and redistribute it
// freely, subject to the following restrictions:
//
// 1. The origin of this software must not be misrepresented; you must not
// claim that you wrote the original software. If you use this software
// in a product, an acknowledgment in the product documentation would be
// appreciated but is not required.
// 2. Altered source versions must be plainly marked as such, and must not be
// misrepresented as being the original software.
// 3. This notice may not be removed or altered from any source distribution.
//
// Jean-loup Gailly Mark Adler
// jloup@gzip.org madler@alumni.caltech.edu
// Original implementation is from
// https://github.com/vimeo/go-util/blob/8cd4c737f091d9317f72b25df78ce6cf869f7d30/crc32combine/crc32combine.go
// extended for crc64 support.
// Following is ported from C to Go in 2016 by Justin Ruggles, with minimal alteration.
// Used uint for unsigned long. Used uint32 for input arguments in order to match
// the Go hash/crc32 package. zlib CRC32 combine (https://github.com/madler/zlib)
package utils
import (
"hash/crc64"
)
const crc64NVME = 0x9a6c_9329_ac4b_c9b5
var crc64NVMETable = crc64.MakeTable(crc64NVME)
func gf2MatrixTimes(mat []uint64, vec uint64) uint64 {
var sum uint64
for vec != 0 {
if vec&1 != 0 {
sum ^= mat[0]
}
vec >>= 1
mat = mat[1:]
}
return sum
}
func gf2MatrixSquare(square, mat []uint64) {
if len(square) != len(mat) {
panic("square matrix size mismatch")
}
for n := range mat {
square[n] = gf2MatrixTimes(mat, mat[n])
}
}
// crc32Combine returns the combined CRC-32 hash value of the two passed CRC-32
// hash values crc1 and crc2. poly represents the generator polynomial
// and len2 specifies the byte length that the crc2 hash covers.
func crc32Combine(poly uint32, crc1, crc2 uint32, len2 int64) uint32 {
// degenerate case (also disallow negative lengths)
if len2 <= 0 {
return crc1
}
even := make([]uint64, 32) // even-power-of-two zeros operator
odd := make([]uint64, 32) // odd-power-of-two zeros operator
// put operator for one zero bit in odd
odd[0] = uint64(poly) // CRC-32 polynomial
row := uint64(1)
for n := 1; n < 32; n++ {
odd[n] = row
row <<= 1
}
// put operator for two zero bits in even
gf2MatrixSquare(even, odd)
// put operator for four zero bits in odd
gf2MatrixSquare(odd, even)
// apply len2 zeros to crc1 (first square will put the operator for one
// zero byte, eight zero bits, in even)
crc1n := uint64(crc1)
for {
// apply zeros operator for this bit of len2
gf2MatrixSquare(even, odd)
if len2&1 != 0 {
crc1n = gf2MatrixTimes(even, crc1n)
}
len2 >>= 1
// if no more bits set, then done
if len2 == 0 {
break
}
// another iteration of the loop with odd and even swapped
gf2MatrixSquare(odd, even)
if len2&1 != 0 {
crc1n = gf2MatrixTimes(odd, crc1n)
}
len2 >>= 1
// if no more bits set, then done
if len2 == 0 {
break
}
}
// return combined crc
crc1n ^= uint64(crc2)
return uint32(crc1n)
}
// crc64Combine returns the combined CRC-64 hash value of the two passed CRC-64
// hash values crc1 and crc2. poly represents the generator polynomial
// and len2 specifies the byte length that the crc2 hash covers.
func crc64Combine(poly uint64, crc1, crc2 uint64, len2 int64) uint64 {
// degenerate case (also disallow negative lengths)
if len2 <= 0 {
return crc1
}
even := make([]uint64, 64) // even-power-of-two zeros operator
odd := make([]uint64, 64) // odd-power-of-two zeros operator
// put operator for one zero bit in odd
odd[0] = poly // CRC-64 polynomial
row := uint64(1)
for n := 1; n < 64; n++ {
odd[n] = row
row <<= 1
}
// put operator for two zero bits in even
gf2MatrixSquare(even, odd)
// put operator for four zero bits in odd
gf2MatrixSquare(odd, even)
// apply len2 zeros to crc1 (first square will put the operator for one
// zero byte, eight zero bits, in even)
crc1n := crc1
for {
// apply zeros operator for this bit of len2
gf2MatrixSquare(even, odd)
if len2&1 != 0 {
crc1n = gf2MatrixTimes(even, crc1n)
}
len2 >>= 1
// if no more bits set, then done
if len2 == 0 {
break
}
// another iteration of the loop with odd and even swapped
gf2MatrixSquare(odd, even)
if len2&1 != 0 {
crc1n = gf2MatrixTimes(odd, crc1n)
}
len2 >>= 1
// if no more bits set, then done
if len2 == 0 {
break
}
}
// return combined crc
crc1n ^= crc2
return crc1n
}

s3api/utils/crc_test.go Normal file

@@ -0,0 +1,57 @@
// Copyright 2025 Versity Software
// This file is licensed under the Apache License, Version 2.0
// (the "License"); you may not use this file except in compliance
// with the License. You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing,
// software distributed under the License is distributed on an
// "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
// KIND, either express or implied. See the License for the
// specific language governing permissions and limitations
// under the License.
package utils
import (
"hash/crc32"
"hash/crc64"
"testing"
)
func TestCRC32Combine(t *testing.T) {
data := []byte("The quick brown fox jumps over the lazy dog")
mid := len(data) / 2
part1 := data[:mid]
part2 := data[mid:]
var poly uint32 = crc32.IEEE
tab := crc32.MakeTable(poly)
crc1 := crc32.Checksum(part1, tab)
crc2 := crc32.Checksum(part2, tab)
combined := crc32Combine(poly, crc1, crc2, int64(len(part2)))
full := crc32.Checksum(data, tab)
if combined != full {
t.Errorf("crc32Combine failed: got %08x, want %08x", combined, full)
}
}
func TestCRC64Combine(t *testing.T) {
data := []byte("The quick brown fox jumps over the lazy dog")
mid := len(data) / 2
part1 := data[:mid]
part2 := data[mid:]
var poly uint64 = crc64NVME
tab := crc64NVMETable
crc1 := crc64.Checksum(part1, tab)
crc2 := crc64.Checksum(part2, tab)
combined := crc64Combine(poly, crc1, crc2, int64(len(part2)))
full := crc64.Checksum(data, tab)
if combined != full {
t.Errorf("crc64Combine failed: got %016x, want %016x", combined, full)
}
}


@@ -26,7 +26,6 @@ import (
"hash/crc32"
"hash/crc64"
"io"
"math/bits"
"github.com/aws/aws-sdk-go-v2/service/s3/types"
"github.com/versity/versitygw/s3err"
@@ -89,7 +88,7 @@ func NewHashReader(r io.Reader, expectedSum string, ht HashType) (*HashReader, e
case HashTypeCRC32C:
hash = crc32.New(crc32.MakeTable(crc32.Castagnoli))
case HashTypeCRC64NVME:
hash = crc64.New(crc64.MakeTable(bits.Reverse64(0xad93d23594c93659)))
hash = crc64.New(crc64NVMETable)
case HashTypeNone:
hash = noop{}
default:
@@ -185,7 +184,7 @@ func (hr *HashReader) Type() HashType {
return hr.hashType
}
// Md5SumString converts the hash bytes to the string checksum value
// Base64SumString converts the hash bytes to the b64 encoded string checksum value
func Base64SumString(b []byte) string {
return base64.StdEncoding.EncodeToString(b)
}
@@ -198,6 +197,108 @@ func (n noop) Reset() {}
func (n noop) Size() int { return 0 }
func (n noop) BlockSize() int { return 1 }
// IsChecksumComposable tests if the final full object crc can be calculated
// based on the part crc values.
func IsChecksumComposable(algo types.ChecksumAlgorithm) bool {
switch algo {
case types.ChecksumAlgorithmCrc32, types.ChecksumAlgorithmCrc32c, types.ChecksumAlgorithmCrc64nvme:
return true
default:
return false
}
}
// AddCRCChecksum calculates the composite CRC checksum after adding the part crc.
// Only CRC32, CRC32C, and CRC64NVME are supported. The input checksums must be base64-encoded strings.
func AddCRCChecksum(algo types.ChecksumAlgorithm, crc, partCrc string, partLen int64) (string, error) {
switch algo {
case types.ChecksumAlgorithmCrc32:
data, err := base64.StdEncoding.DecodeString(partCrc)
if err != nil {
return "", fmt.Errorf("base64 decode partCrc: %w", err)
}
if len(data) != 4 {
return "", fmt.Errorf("invalid crc32 part checksum length: %d", len(data))
}
currentCRC, err := base64.StdEncoding.DecodeString(crc)
if err != nil {
return "", fmt.Errorf("base64 decode crc: %w", err)
}
if len(currentCRC) != 4 {
return "", fmt.Errorf("invalid crc32 checksum length: %d", len(currentCRC))
}
currentVal := uint32(currentCRC[0])<<24 | uint32(currentCRC[1])<<16 | uint32(currentCRC[2])<<8 | uint32(currentCRC[3])
val := uint32(data[0])<<24 | uint32(data[1])<<16 | uint32(data[2])<<8 | uint32(data[3])
composite := crc32Combine(crc32.IEEE, currentVal, val, partLen)
out := []byte{
byte(composite >> 24),
byte(composite >> 16),
byte(composite >> 8),
byte(composite),
}
return base64.StdEncoding.EncodeToString(out), nil
case types.ChecksumAlgorithmCrc32c:
data, err := base64.StdEncoding.DecodeString(partCrc)
if err != nil {
return "", fmt.Errorf("base64 decode partCrc: %w", err)
}
if len(data) != 4 {
return "", fmt.Errorf("invalid crc32 part checksum length: %d", len(data))
}
currentCRC, err := base64.StdEncoding.DecodeString(crc)
if err != nil {
return "", fmt.Errorf("base64 decode crc: %w", err)
}
if len(currentCRC) != 4 {
return "", fmt.Errorf("invalid crc32 checksum length: %d", len(currentCRC))
}
currentVal := uint32(currentCRC[0])<<24 | uint32(currentCRC[1])<<16 | uint32(currentCRC[2])<<8 | uint32(currentCRC[3])
val := uint32(data[0])<<24 | uint32(data[1])<<16 | uint32(data[2])<<8 | uint32(data[3])
composite := crc32Combine(crc32.Castagnoli, currentVal, val, partLen)
// Convert composite to big-endian bytes
out := []byte{
byte(composite >> 24),
byte(composite >> 16),
byte(composite >> 8),
byte(composite),
}
return base64.StdEncoding.EncodeToString(out), nil
case types.ChecksumAlgorithmCrc64nvme:
data, err := base64.StdEncoding.DecodeString(partCrc)
if err != nil {
return "", fmt.Errorf("base64 decode partCrc: %w", err)
}
if len(data) != 8 {
return "", fmt.Errorf("invalid crc64 part checksum length: %d", len(data))
}
currentCRC, err := base64.StdEncoding.DecodeString(crc)
if err != nil {
return "", fmt.Errorf("base64 decode crc: %w", err)
}
if len(currentCRC) != 8 {
return "", fmt.Errorf("invalid crc64 checksum length: %d", len(currentCRC))
}
currentVal := uint64(currentCRC[0])<<56 | uint64(currentCRC[1])<<48 | uint64(currentCRC[2])<<40 | uint64(currentCRC[3])<<32 |
uint64(currentCRC[4])<<24 | uint64(currentCRC[5])<<16 | uint64(currentCRC[6])<<8 | uint64(currentCRC[7])
val := uint64(data[0])<<56 | uint64(data[1])<<48 | uint64(data[2])<<40 | uint64(data[3])<<32 |
uint64(data[4])<<24 | uint64(data[5])<<16 | uint64(data[6])<<8 | uint64(data[7])
composite := crc64Combine(crc64NVME, currentVal, val, partLen)
out := []byte{
byte(composite >> 56), byte(composite >> 48), byte(composite >> 40), byte(composite >> 32),
byte(composite >> 24), byte(composite >> 16), byte(composite >> 8), byte(composite),
}
return base64.StdEncoding.EncodeToString(out), nil
default:
return "", fmt.Errorf("composite checksum not supported for algorithm: %v", algo)
}
}
// NewCompositeChecksumReader initializes a composite checksum
// processor, which decodes and validates the provided
// checksums and returns the final checksum based on


@@ -0,0 +1,120 @@
// Copyright 2025 Versity Software
// This file is licensed under the Apache License, Version 2.0
// (the "License"); you may not use this file except in compliance
// with the License. You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing,
// software distributed under the License is distributed on an
// "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
// KIND, either express or implied. See the License for the
// specific language governing permissions and limitations
// under the License.
package utils
import (
"encoding/base64"
"hash/crc32"
"hash/crc64"
"testing"
"github.com/aws/aws-sdk-go-v2/service/s3/types"
)
func TestAddCRCChecksum_CRC32(t *testing.T) {
data := []byte("this is a test buffer for crc32")
mid := len(data) / 2
part1 := data[:mid]
part2 := data[mid:]
crc1 := crc32.Checksum(part1, crc32.IEEETable)
crc2 := crc32.Checksum(part2, crc32.IEEETable)
crcFull := crc32.Checksum(data, crc32.IEEETable)
crc1b := []byte{byte(crc1 >> 24), byte(crc1 >> 16), byte(crc1 >> 8), byte(crc1)}
crc2b := []byte{byte(crc2 >> 24), byte(crc2 >> 16), byte(crc2 >> 8), byte(crc2)}
crc1b64 := base64.StdEncoding.EncodeToString(crc1b)
crc2b64 := base64.StdEncoding.EncodeToString(crc2b)
combined, err := AddCRCChecksum(types.ChecksumAlgorithmCrc32, crc1b64, crc2b64, int64(len(part2)))
if err != nil {
t.Fatalf("AddCRCChecksum failed: %v", err)
}
combinedBytes, err := base64.StdEncoding.DecodeString(combined)
if err != nil {
t.Fatalf("base64 decode failed: %v", err)
}
combinedVal := uint32(combinedBytes[0])<<24 | uint32(combinedBytes[1])<<16 | uint32(combinedBytes[2])<<8 | uint32(combinedBytes[3])
if combinedVal != crcFull {
t.Errorf("CRC32 combine mismatch: got %x, want %x", combinedVal, crcFull)
}
}
func TestAddCRCChecksum_CRC32c(t *testing.T) {
data := []byte("this is a test buffer for crc32c")
mid := len(data) / 2
part1 := data[:mid]
part2 := data[mid:]
castagnoli := crc32.MakeTable(crc32.Castagnoli)
crc1 := crc32.Checksum(part1, castagnoli)
crc2 := crc32.Checksum(part2, castagnoli)
crcFull := crc32.Checksum(data, castagnoli)
crc1b := []byte{byte(crc1 >> 24), byte(crc1 >> 16), byte(crc1 >> 8), byte(crc1)}
crc2b := []byte{byte(crc2 >> 24), byte(crc2 >> 16), byte(crc2 >> 8), byte(crc2)}
crc1b64 := base64.StdEncoding.EncodeToString(crc1b)
crc2b64 := base64.StdEncoding.EncodeToString(crc2b)
combined, err := AddCRCChecksum(types.ChecksumAlgorithmCrc32c, crc1b64, crc2b64, int64(len(part2)))
if err != nil {
t.Fatalf("AddCRCChecksum failed: %v", err)
}
combinedBytes, err := base64.StdEncoding.DecodeString(combined)
if err != nil {
t.Fatalf("base64 decode failed: %v", err)
}
combinedVal := uint32(combinedBytes[0])<<24 | uint32(combinedBytes[1])<<16 | uint32(combinedBytes[2])<<8 | uint32(combinedBytes[3])
if combinedVal != crcFull {
t.Errorf("CRC32c combine mismatch: got %x, want %x", combinedVal, crcFull)
}
}
func TestAddCRCChecksum_CRC64NVME(t *testing.T) {
data := []byte("this is a test buffer for crc64nvme")
mid := len(data) / 2
part1 := data[:mid]
part2 := data[mid:]
table := crc64NVMETable
crc1 := crc64.Checksum(part1, table)
crc2 := crc64.Checksum(part2, table)
crcFull := crc64.Checksum(data, table)
crc1b := []byte{
byte(crc1 >> 56), byte(crc1 >> 48), byte(crc1 >> 40), byte(crc1 >> 32),
byte(crc1 >> 24), byte(crc1 >> 16), byte(crc1 >> 8), byte(crc1),
}
crc2b := []byte{
byte(crc2 >> 56), byte(crc2 >> 48), byte(crc2 >> 40), byte(crc2 >> 32),
byte(crc2 >> 24), byte(crc2 >> 16), byte(crc2 >> 8), byte(crc2),
}
crc1b64 := base64.StdEncoding.EncodeToString(crc1b)
crc2b64 := base64.StdEncoding.EncodeToString(crc2b)
combined, err := AddCRCChecksum(types.ChecksumAlgorithmCrc64nvme, crc1b64, crc2b64, int64(len(part2)))
if err != nil {
t.Fatalf("AddCRCChecksum failed: %v", err)
}
combinedBytes, err := base64.StdEncoding.DecodeString(combined)
if err != nil {
t.Fatalf("base64 decode failed: %v", err)
}
combinedVal := uint64(combinedBytes[0])<<56 | uint64(combinedBytes[1])<<48 | uint64(combinedBytes[2])<<40 | uint64(combinedBytes[3])<<32 |
uint64(combinedBytes[4])<<24 | uint64(combinedBytes[5])<<16 | uint64(combinedBytes[6])<<8 | uint64(combinedBytes[7])
if combinedVal != crcFull {
t.Errorf("CRC64NVME combine mismatch: got %x, want %x", combinedVal, crcFull)
}
}

s3api/utils/precondition.go Normal file

@@ -0,0 +1,138 @@
// Copyright 2023 Versity Software
// This file is licensed under the Apache License, Version 2.0
// (the "License"); you may not use this file except in compliance
// with the License. You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing,
// software distributed under the License is distributed on an
// "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
// KIND, either express or implied. See the License for the
// specific language governing permissions and limitations
// under the License.
package utils
import (
"strconv"
"time"
"github.com/gofiber/fiber/v2"
"github.com/versity/versitygw/debuglogger"
)
// ConditionalHeaders holds the conditional header values
type ConditionalHeaders struct {
IfMatch *string
IfNoneMatch *string
IfModSince *time.Time
IfUnmodeSince *time.Time
}
type precondtionCfg struct {
withCopySource bool
}
type preconditionOpt func(*precondtionCfg)
func WithCopySource() preconditionOpt {
return func(o *precondtionCfg) { o.withCopySource = true }
}
// ParsePreconditionHeaders parses the precondition headers:
// - If-Match
// - If-None-Match
// - If-Modified-Since
// - If-Unmodified-Since
func ParsePreconditionHeaders(ctx *fiber.Ctx, opts ...preconditionOpt) ConditionalHeaders {
ifMatch, ifNoneMatch := ParsePreconditionMatchHeaders(ctx, opts...)
ifModSince, ifUnmodeSince := ParsePreconditionDateHeaders(ctx, opts...)
return ConditionalHeaders{
IfMatch: ifMatch,
IfNoneMatch: ifNoneMatch,
IfModSince: ifModSince,
IfUnmodeSince: ifUnmodeSince,
}
}
// ParsePreconditionMatchHeaders extracts "If-Match" and "If-None-Match" headers from fiber Ctx
func ParsePreconditionMatchHeaders(ctx *fiber.Ctx, opts ...preconditionOpt) (*string, *string) {
cfg := new(precondtionCfg)
for _, opt := range opts {
opt(cfg)
}
prefix := ""
if cfg.withCopySource {
prefix = "X-Amz-Copy-Source-"
}
return GetStringPtr(ctx.Get(prefix + "If-Match")), GetStringPtr(ctx.Get(prefix + "If-None-Match"))
}
// ParsePreconditionDateHeaders parses the "If-Modified-Since" and "If-Unmodified-Since"
// headers from fiber context to *time.Time
func ParsePreconditionDateHeaders(ctx *fiber.Ctx, opts ...preconditionOpt) (*time.Time, *time.Time) {
cfg := new(precondtionCfg)
for _, opt := range opts {
opt(cfg)
}
prefix := ""
if cfg.withCopySource {
prefix = "X-Amz-Copy-Source-"
}
ifModSince := ctx.Get(prefix + "If-Modified-Since")
ifUnmodSince := ctx.Get(prefix + "If-Unmodified-Since")
ifModSinceParsed := ParsePreconditionDateHeader(ifModSince)
ifUnmodSinceParsed := ParsePreconditionDateHeader(ifUnmodSince)
return ifModSinceParsed, ifUnmodSinceParsed
}
// ParsePreconditionDateHeader tries to parse the given date string as
// - RFC1123
// - RFC3339
// either format is accepted; dates in the future are ignored
func ParsePreconditionDateHeader(date string) *time.Time {
if date == "" {
return nil
}
// try to parse as RFC1123
parsed, err := time.Parse(time.RFC1123, date)
if err == nil {
// ignore future dates
if parsed.After(time.Now()) {
return nil
}
return &parsed
}
// try to parse as RFC3339
parsed, err = time.Parse(time.RFC3339, date)
if err == nil {
// ignore future dates
if parsed.After(time.Now()) {
return nil
}
return &parsed
}
return nil
}
// ParseIfMatchSize parses the 'x-amz-if-match-size' header to *int64;
// if parsing fails, it returns nil
func ParseIfMatchSize(ctx *fiber.Ctx) *int64 {
ifMatchSizeHdr := ctx.Get("x-amz-if-match-size")
ifMatchSize, err := strconv.ParseInt(ifMatchSizeHdr, 10, 64)
if err != nil {
debuglogger.Logf("failed to parse 'x-amz-if-match-size': %s", ifMatchSizeHdr)
return nil
}
return &ifMatchSize
}
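A quick fiber-free illustration of the layout fallback used by ParsePreconditionDateHeader; the future-date check from the real function is omitted, and parseHTTPDate is an illustrative name, not part of the package:

package main

import (
	"fmt"
	"time"
)

// parseHTTPDate mirrors the layout fallback in ParsePreconditionDateHeader:
// try RFC1123 first, then RFC3339, otherwise return nil.
func parseHTTPDate(s string) *time.Time {
	for _, layout := range []string{time.RFC1123, time.RFC3339} {
		if t, err := time.Parse(layout, s); err == nil {
			return &t
		}
	}
	return nil
}

func main() {
	fmt.Println(parseHTTPDate("Mon, 02 Jan 2006 15:04:05 GMT")) // RFC1123
	fmt.Println(parseHTTPDate("2006-01-02T15:04:05Z"))          // RFC3339
	fmt.Println(parseHTTPDate("not-a-date"))                    // <nil>
}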

View File

@@ -29,6 +29,7 @@ import (
v4 "github.com/aws/aws-sdk-go-v2/aws/signer/v4"
"github.com/aws/smithy-go/logging"
"github.com/gofiber/fiber/v2"
"github.com/versity/versitygw/debuglogger"
"github.com/versity/versitygw/s3err"
)
@@ -45,16 +46,14 @@ type PresignedAuthReader struct {
auth AuthData
secret string
r io.Reader
debug bool
}
func NewPresignedAuthReader(ctx *fiber.Ctx, r io.Reader, auth AuthData, secret string, debug bool) *PresignedAuthReader {
func NewPresignedAuthReader(ctx *fiber.Ctx, r io.Reader, auth AuthData, secret string) *PresignedAuthReader {
return &PresignedAuthReader{
ctx: ctx,
r: r,
auth: auth,
secret: secret,
debug: debug,
}
}
@@ -63,7 +62,7 @@ func (pr *PresignedAuthReader) Read(p []byte) (int, error) {
n, err := pr.r.Read(p)
if errors.Is(err, io.EOF) {
cerr := CheckPresignedSignature(pr.ctx, pr.auth, pr.secret, pr.debug)
cerr := CheckPresignedSignature(pr.ctx, pr.auth, pr.secret)
if cerr != nil {
return n, cerr
}
@@ -73,7 +72,7 @@ func (pr *PresignedAuthReader) Read(p []byte) (int, error) {
}
// CheckPresignedSignature validates presigned request signature
func CheckPresignedSignature(ctx *fiber.Ctx, auth AuthData, secret string, debug bool) error {
func CheckPresignedSignature(ctx *fiber.Ctx, auth AuthData, secret string) error {
signedHdrs := strings.Split(auth.SignedHeaders, ";")
var contentLength int64
@@ -100,7 +99,7 @@ func CheckPresignedSignature(ctx *fiber.Ctx, auth AuthData, secret string, debug
SecretAccessKey: secret,
}, req, unsignedPayload, service, auth.Region, date, func(options *v4.SignerOptions) {
options.DisableURIPathEscaping = true
if debug {
if debuglogger.IsDebugEnabled() {
options.LogSigning = true
options.Logger = logging.NewStandardLogger(os.Stderr)
}
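The shape of this change, dropping the threaded debug bool in favor of a package-level query, can be shown in isolation. The sketch below is an illustrative stand-in rather than the gateway's actual debuglogger implementation; only the IsDebugEnabled name comes from the diff:

package debugtoggle

import "sync/atomic"

var enabled atomic.Bool

// SetDebugEnabled flips the process-wide switch once, typically at startup.
func SetDebugEnabled(on bool) { enabled.Store(on) }

// IsDebugEnabled can be called from any goroutine, which is what lets
// callers such as CheckPresignedSignature stop carrying a debug parameter.
func IsDebugEnabled() bool { return enabled.Load() }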

View File

@@ -31,7 +31,7 @@ import (
"time"
"github.com/aws/aws-sdk-go-v2/service/s3/types"
"github.com/versity/versitygw/s3api/debuglogger"
"github.com/versity/versitygw/debuglogger"
"github.com/versity/versitygw/s3err"
)
@@ -363,7 +363,7 @@ func (cr *ChunkReader) parseChunkHeaderBytes(header []byte) (int64, string, int,
if !cr.isFirstHeader {
err := readAndSkip(rdr, '\r', '\n')
if err != nil {
debuglogger.Logf("failed to read chunk header first 2 bytes: (should be): \\r\\n, (got): %q", header[:2])
debuglogger.Logf("failed to read chunk header first 2 bytes: (should be): \\r\\n, (got): %q", header[:min(2, len(header))])
return cr.handleRdrErr(err, header)
}
}
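The min(2, len(header)) bound added above keeps the debug log from slicing past the end of a short header. The same guard in isolation (the builtin min requires Go 1.21+):

package main

import "fmt"

func main() {
	for _, header := range [][]byte{[]byte("\r\nrest"), []byte("x"), nil} {
		// header[:2] would panic for the one-byte and nil cases;
		// bounding the slice with min keeps the log call safe.
		fmt.Printf("%q\n", header[:min(2, len(header))])
	}
}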

View File

@@ -30,7 +30,7 @@ import (
"strconv"
"strings"
"github.com/versity/versitygw/s3api/debuglogger"
"github.com/versity/versitygw/debuglogger"
)
var (

View File

@@ -31,7 +31,7 @@ import (
"github.com/aws/aws-sdk-go-v2/service/s3/types"
"github.com/gofiber/fiber/v2"
"github.com/valyala/fasthttp"
"github.com/versity/versitygw/s3api/debuglogger"
"github.com/versity/versitygw/debuglogger"
"github.com/versity/versitygw/s3err"
"github.com/versity/versitygw/s3response"
)
@@ -44,14 +44,14 @@ var (
func GetUserMetaData(headers *fasthttp.RequestHeader) (metadata map[string]string) {
metadata = make(map[string]string)
headers.DisableNormalizing()
headers.VisitAllInOrder(func(key, value []byte) {
for key, value := range headers.AllInOrder() {
hKey := string(key)
if strings.HasPrefix(strings.ToLower(hKey), "x-amz-meta-") {
trimmedKey := hKey[11:]
headerValue := string(value)
metadata[trimmedKey] = headerValue
}
})
}
headers.EnableNormalizing()
return
@@ -74,12 +74,12 @@ func createHttpRequestFromCtx(ctx *fiber.Ctx, signedHdrs []string, contentLength
}
// Set the request headers
req.Header.VisitAll(func(key, value []byte) {
for key, value := range req.Header.All() {
keyStr := string(key)
if includeHeader(keyStr, signedHdrs) {
httpReq.Header.Add(keyStr, string(value))
}
})
}
// make sure all headers in the signed headers are present
for _, header := range signedHdrs {
@@ -124,7 +124,7 @@ func createPresignedHttpRequestFromCtx(ctx *fiber.Ctx, signedHdrs []string, cont
uri, _, _ := strings.Cut(ctx.OriginalURL(), "?")
isFirst := true
ctx.Request().URI().QueryArgs().VisitAll(func(key, value []byte) {
for key, value := range ctx.Request().URI().QueryArgs().All() {
_, ok := signedQueryArgs[string(key)]
if !ok {
escapeValue := url.QueryEscape(string(value))
@@ -135,19 +135,19 @@ func createPresignedHttpRequestFromCtx(ctx *fiber.Ctx, signedHdrs []string, cont
uri += fmt.Sprintf("&%s=%s", key, escapeValue)
}
}
})
}
httpReq, err := http.NewRequest(string(req.Header.Method()), uri, body)
if err != nil {
return nil, errors.New("error in creating an http request")
}
// Set the request headers
req.Header.VisitAll(func(key, value []byte) {
for key, value := range req.Header.All() {
keyStr := string(key)
if includeHeader(keyStr, signedHdrs) {
httpReq.Header.Add(keyStr, string(value))
}
})
}
// Check if Content-Length is in the signed headers
// If content length is non-zero, then the header will be included
@@ -297,10 +297,10 @@ func FilterObjectAttributes(attrs map[s3response.ObjectAttributes]struct{}, outp
func ParseObjectAttributes(ctx *fiber.Ctx) (map[s3response.ObjectAttributes]struct{}, error) {
attrs := map[s3response.ObjectAttributes]struct{}{}
var err error
ctx.Request().Header.VisitAll(func(key, value []byte) {
for key, value := range ctx.Request().Header.All() {
if string(key) == "X-Amz-Object-Attributes" {
if len(value) == 0 {
return
break
}
oattrs := strings.Split(string(value), ",")
for _, a := range oattrs {
@@ -313,7 +313,7 @@ func ParseObjectAttributes(ctx *fiber.Ctx) (map[s3response.ObjectAttributes]stru
attrs[attr] = struct{}{}
}
}
})
}
if err != nil {
return nil, err
@@ -413,23 +413,23 @@ func (cv ChecksumValues) Headers() string {
return result
}
func ParseChecksumHeaders(ctx *fiber.Ctx) (types.ChecksumAlgorithm, ChecksumValues, error) {
sdkAlgorithm := types.ChecksumAlgorithm(strings.ToUpper(ctx.Get("X-Amz-Sdk-Checksum-Algorithm")))
err := IsChecksumAlgorithmValid(sdkAlgorithm)
if err != nil {
debuglogger.Logf("invalid checksum algorithm: %v\n", sdkAlgorithm)
return "", nil, err
}
// ParseCalculatedChecksumHeaders parses and validates x-amz-checksum-x header keys
// e.g. x-amz-checksum-crc32, x-amz-checksum-sha256 ...
func ParseCalculatedChecksumHeaders(ctx *fiber.Ctx) (ChecksumValues, error) {
checksums := ChecksumValues{}
var hdrErr error
// Parse and validate checksum headers
ctx.Request().Header.VisitAll(func(key, value []byte) {
// Skip `X-Amz-Checksum-Type` as it's a special header
if hdrErr != nil || !strings.HasPrefix(string(key), "X-Amz-Checksum-") || string(key) == "X-Amz-Checksum-Type" {
return
for key, value := range ctx.Request().Header.All() {
// only check the headers with 'X-Amz-Checksum-' prefix
if !strings.HasPrefix(string(key), "X-Amz-Checksum-") {
continue
}
// "X-Amz-Checksum-Type" and "X-Amz-Checksum-Algorithm" aren't considered
// as invalid values, even if the s3 action doesn't expect these headers
switch string(key) {
case "X-Amz-Checksum-Type", "X-Amz-Checksum-Algorithm":
continue
}
algo := types.ChecksumAlgorithm(strings.ToUpper(strings.TrimPrefix(string(key), "X-Amz-Checksum-")))
@@ -437,19 +437,55 @@ func ParseChecksumHeaders(ctx *fiber.Ctx) (types.ChecksumAlgorithm, ChecksumValu
if err != nil {
debuglogger.Logf("invalid checksum header: %s\n", key)
hdrErr = s3err.GetAPIError(s3err.ErrInvalidChecksumHeader)
return
break
}
checksums[algo] = string(value)
})
}
if hdrErr != nil {
return sdkAlgorithm, nil, hdrErr
return checksums, hdrErr
}
if len(checksums) > 1 {
debuglogger.Logf("multiple checksum headers provided: %v\n", checksums.Headers())
return sdkAlgorithm, checksums, s3err.GetAPIError(s3err.ErrMultipleChecksumHeaders)
return checksums, s3err.GetAPIError(s3err.ErrMultipleChecksumHeaders)
}
return checksums, nil
}
// ParseChecksumHeaders parses/validates x-amz-checksum-x headers key/values
func ParseChecksumHeaders(ctx *fiber.Ctx) (ChecksumValues, error) {
// first parse/validate 'x-amz-checksum-x' headers
checksums, err := ParseCalculatedChecksumHeaders(ctx)
if err != nil {
return checksums, err
}
// check if the values are valid
for al, val := range checksums {
if !IsValidChecksum(val, al) {
return checksums, s3err.GetInvalidChecksumHeaderErr(fmt.Sprintf("x-amz-checksum-%v", strings.ToLower(string(al))))
}
}
return checksums, nil
}
// ParseChecksumHeadersAndSdkAlgo parses/validates 'x-amz-sdk-checksum-algorithm' and
// 'x-amz-checksum-x' precalculated request headers
func ParseChecksumHeadersAndSdkAlgo(ctx *fiber.Ctx) (types.ChecksumAlgorithm, ChecksumValues, error) {
sdkAlgorithm := types.ChecksumAlgorithm(strings.ToUpper(ctx.Get("X-Amz-Sdk-Checksum-Algorithm")))
err := IsChecksumAlgorithmValid(sdkAlgorithm)
if err != nil {
debuglogger.Logf("invalid checksum algorithm: %v\n", sdkAlgorithm)
return "", nil, err
}
checksums, err := ParseCalculatedChecksumHeaders(ctx)
if err != nil {
return sdkAlgorithm, checksums, err
}
for al, val := range checksums {
@@ -569,12 +605,12 @@ func checkChecksumTypeAndAlgo(algo types.ChecksumAlgorithm, t types.ChecksumType
// Parses and validates the x-amz-checksum-algorithm and x-amz-checksum-type headers
func ParseCreateMpChecksumHeaders(ctx *fiber.Ctx) (types.ChecksumAlgorithm, types.ChecksumType, error) {
algo := types.ChecksumAlgorithm(ctx.Get("x-amz-checksum-algorithm"))
algo := types.ChecksumAlgorithm(strings.ToUpper(ctx.Get("x-amz-checksum-algorithm")))
if err := IsChecksumAlgorithmValid(algo); err != nil {
return "", "", err
}
chType := types.ChecksumType(ctx.Get("x-amz-checksum-type"))
chType := types.ChecksumType(strings.ToUpper(ctx.Get("x-amz-checksum-type")))
if err := IsChecksumTypeValid(chType); err != nil {
return "", "", err
}
@@ -664,3 +700,99 @@ func ParseTagging(data []byte, limit TagLimit) (map[string]string, error) {
return tagSet, nil
}
// GetStringPtr returns a pointer to the provided string, or nil if it is empty
func GetStringPtr(str string) *string {
if str == "" {
return nil
}
return &str
}
// ConvertToStringPtr formats any value and returns a pointer to the resulting string, or nil if it formats to an empty string
func ConvertToStringPtr[T any](val T) *string {
str := fmt.Sprint(val)
if str == "" {
return nil
}
return &str
}
// ConvertPtrToStringPtr formats the pointed-to value and returns a pointer to the resulting string; a nil input yields nil
func ConvertPtrToStringPtr[T any](val *T) *string {
if val == nil {
return nil
}
str := fmt.Sprint(*val)
return &str
}
// FormatDatePtrToString formats the date with the given layout and returns a string pointer; nil or zero dates yield nil
func FormatDatePtrToString(date *time.Time, format string) *string {
if date == nil {
return nil
}
if date.IsZero() {
return nil
}
formatted := date.UTC().Format(format)
return &formatted
}
// GetInt64 returns the value of the int64 pointer, or 0 if the pointer is nil
func GetInt64(n *int64) int64 {
if n == nil {
return 0
}
return *n
}
// ValidateCopySource parses and validates the copy-source
func ValidateCopySource(copysource string) error {
var err error
copysource, err = url.QueryUnescape(copysource)
if err != nil {
debuglogger.Logf("invalid copy source encoding: %s", copysource)
return s3err.GetAPIError(s3err.ErrInvalidCopySourceEncoding)
}
bucket, rest, _ := strings.Cut(copysource, "/")
if !IsValidBucketName(bucket) {
debuglogger.Logf("invalid copy source bucket: %s", bucket)
return s3err.GetAPIError(s3err.ErrInvalidCopySourceBucket)
}
// cut at the versionId query param, as it's the only query
// param recognized in the copy source
object, _, _ := strings.Cut(rest, "?versionId=")
// object keys containing '../', '...../', etc. are considered valid in AWS,
// but for security purposes the gateway treats them as invalid
if !IsObjectNameValid(object) {
debuglogger.Logf("invalid copy source object: %s", object)
return s3err.GetAPIError(s3err.ErrInvalidCopySourceObject)
}
return nil
}
// GetQueryParam returns a pointer to the query parameter value if it exists
func GetQueryParam(ctx *fiber.Ctx, key string) *string {
value := ctx.Query(key)
if value == "" {
return nil
}
return &value
}
// ApplyOverride returns the override value if it is set, otherwise returns original
func ApplyOverride(original, override *string) *string {
if override != nil {
return override
}
return original
}
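The header-walking changes in this file all follow one pattern: fasthttp's callback-style VisitAll is replaced by the range-over-func iterators (All, AllInOrder), which allows plain continue and break instead of early returns from a closure. A minimal sketch of that shape using only the standard iter package; visitAll and allHeaders are illustrative names, not fasthttp APIs:

package main

import (
	"fmt"
	"iter"
)

// visitAll is the old callback shape: the consumer supplies a closure
// and can only stop or skip by tracking state inside it.
func visitAll(h map[string]string, fn func(k, v string)) {
	for k, v := range h {
		fn(k, v)
	}
}

// allHeaders exposes the same data as a Go 1.23 iterator, so the caller
// can use continue/break directly, as the updated utils.go does.
func allHeaders(h map[string]string) iter.Seq2[string, string] {
	return func(yield func(string, string) bool) {
		for k, v := range h {
			if !yield(k, v) {
				return
			}
		}
	}
}

func main() {
	h := map[string]string{"X-Amz-Meta-Color": "blue", "Content-Type": "text/plain"}

	visitAll(h, func(k, v string) { fmt.Println("callback:", k, v) })

	for k, v := range allHeaders(h) {
		if k == "Content-Type" {
			continue // skipping is a plain continue, no closure bookkeeping
		}
		fmt.Println("iterator:", k, v)
	}
}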

View File

@@ -26,6 +26,7 @@ import (
"github.com/aws/aws-sdk-go-v2/service/s3/types"
"github.com/gofiber/fiber/v2"
"github.com/stretchr/testify/assert"
"github.com/valyala/fasthttp"
"github.com/versity/versitygw/backend"
"github.com/versity/versitygw/s3err"
@@ -898,3 +899,51 @@ func TestParseTagging(t *testing.T) {
})
}
}
func TestValidateCopySource(t *testing.T) {
tests := []struct {
name string
copysource string
err error
}{
// invalid encoding
{"invalid encoding 1", "%", s3err.GetAPIError(s3err.ErrInvalidCopySourceEncoding)},
{"invalid encoding 2", "%2", s3err.GetAPIError(s3err.ErrInvalidCopySourceEncoding)},
{"invalid encoding 3", "%G1", s3err.GetAPIError(s3err.ErrInvalidCopySourceEncoding)},
{"invalid encoding 4", "%1Z", s3err.GetAPIError(s3err.ErrInvalidCopySourceEncoding)},
{"invalid encoding 5", "%0H", s3err.GetAPIError(s3err.ErrInvalidCopySourceEncoding)},
{"invalid encoding 6", "%XY", s3err.GetAPIError(s3err.ErrInvalidCopySourceEncoding)},
{"invalid encoding 7", "%E", s3err.GetAPIError(s3err.ErrInvalidCopySourceEncoding)},
{"invalid encoding 8", "hello%", s3err.GetAPIError(s3err.ErrInvalidCopySourceEncoding)},
{"invalid encoding 9", "%%", s3err.GetAPIError(s3err.ErrInvalidCopySourceEncoding)},
{"invalid encoding 10", "%2Gmore", s3err.GetAPIError(s3err.ErrInvalidCopySourceEncoding)},
{"invalid encoding 11", "100%%sure", s3err.GetAPIError(s3err.ErrInvalidCopySourceEncoding)},
{"invalid encoding 12", "%#00", s3err.GetAPIError(s3err.ErrInvalidCopySourceEncoding)},
{"invalid encoding 13", "%0%0", s3err.GetAPIError(s3err.ErrInvalidCopySourceEncoding)},
{"invalid encoding 14", "%?versionId=id", s3err.GetAPIError(s3err.ErrInvalidCopySourceEncoding)},
// invalid bucket name
{"invalid bucket name 1", "168.200.1.255/obj/foo", s3err.GetAPIError(s3err.ErrInvalidCopySourceBucket)},
{"invalid bucket name 2", "/0000:0db8:85a3:0000:0000:8a2e:0370:7224/smth", s3err.GetAPIError(s3err.ErrInvalidCopySourceBucket)},
{"invalid bucket name 3", "", s3err.GetAPIError(s3err.ErrInvalidCopySourceBucket)},
{"invalid bucket name 4", "//obj/foo", s3err.GetAPIError(s3err.ErrInvalidCopySourceBucket)},
{"invalid bucket name 5", "//obj/foo?versionId=id", s3err.GetAPIError(s3err.ErrInvalidCopySourceBucket)},
// invalid object name
{"invalid object name 1", "bucket/../foo", s3err.GetAPIError(s3err.ErrInvalidCopySourceObject)},
{"invalid object name 2", "bucket/", s3err.GetAPIError(s3err.ErrInvalidCopySourceObject)},
{"invalid object name 3", "bucket", s3err.GetAPIError(s3err.ErrInvalidCopySourceObject)},
{"invalid object name 4", "bucket/../foo/dir/../../../", s3err.GetAPIError(s3err.ErrInvalidCopySourceObject)},
{"invalid object name 5", "bucket/.?versionId=smth", s3err.GetAPIError(s3err.ErrInvalidCopySourceObject)},
// success
{"no error 1", "bucket/object", nil},
{"no error 2", "bucket/object/key", nil},
{"no error 3", "bucket/4*&(*&(89765))", nil},
{"no error 4", "bucket/foo/../bar", nil},
{"no error 5", "bucket/foo/bar/baz?versionId=id", nil},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
err := ValidateCopySource(tt.copysource)
assert.Equal(t, tt.err, err)
})
}
}

View File

@@ -64,6 +64,7 @@ const (
ErrAnonymousCopyObject
ErrAnonymousPutBucketOwnership
ErrAnonymousGetBucketOwnership
ErrAnonymousResponseHeaders
ErrMethodNotAllowed
ErrBucketNotEmpty
ErrVersionedBucketNotEmpty
@@ -87,8 +88,10 @@ const (
ErrInvalidCompleteMpPartNumber
ErrInternalError
ErrInvalidCopyDest
ErrInvalidCopySource
ErrInvalidCopySourceRange
ErrInvalidCopySourceBucket
ErrInvalidCopySourceObject
ErrInvalidCopySourceEncoding
ErrInvalidTagKey
ErrInvalidTagValue
ErrDuplicateTagKey
@@ -123,6 +126,7 @@ const (
ErrSignatureTerminationStr
ErrSignatureIncorrService
ErrContentSHA256Mismatch
ErrInvalidSHA256Paylod
ErrMissingContentLength
ErrInvalidAccessKeyID
ErrRequestNotReadyYet
@@ -168,6 +172,12 @@ const (
ErrInvalidChecksumHeader
ErrTrailerHeaderNotSupported
ErrBadRequest
ErrMissingUploadId
ErrNoSuchCORSConfiguration
ErrCORSForbidden
ErrMissingCORSOrigin
ErrCORSIsNotEnabled
ErrNotModified
// Non-AWS errors
ErrExistingObjectIsDirectory
@@ -217,6 +227,11 @@ var errorCodeResponse = map[ErrorCode]APIError{
Description: "s3:GetBucketOwnershipControls does not support Anonymous requests!",
HTTPStatusCode: http.StatusForbidden,
},
ErrAnonymousResponseHeaders: {
Code: "InvalidRequest",
Description: "Request specific response headers cannot be used for anonymous GET requests.",
HTTPStatusCode: http.StatusBadRequest,
},
ErrMethodNotAllowed: {
Code: "MethodNotAllowed",
Description: "The specified method is not allowed against this resource.",
@@ -274,7 +289,7 @@ var errorCodeResponse = map[ErrorCode]APIError{
},
ErrInvalidPartNumberMarker: {
Code: "InvalidArgument",
Description: "Argument partNumberMarker must be an integer.",
Description: "Argument part-number-marker must be an integer between 0 and 2147483647",
HTTPStatusCode: http.StatusBadRequest,
},
ErrInvalidObjectAttributes: {
@@ -332,16 +347,26 @@ var errorCodeResponse = map[ErrorCode]APIError{
Description: "This copy request is illegal because it is trying to copy an object to itself without changing the object's metadata, storage class, website redirect location or encryption attributes.",
HTTPStatusCode: http.StatusBadRequest,
},
ErrInvalidCopySource: {
Code: "InvalidArgument",
Description: "Copy Source must mention the source bucket and key: sourcebucket/sourcekey.",
HTTPStatusCode: http.StatusBadRequest,
},
ErrInvalidCopySourceRange: {
Code: "InvalidArgument",
Description: "The x-amz-copy-source-range value must be of the form bytes=first-last where first and last are the zero-based offsets of the first and last bytes to copy",
HTTPStatusCode: http.StatusBadRequest,
},
ErrInvalidCopySourceBucket: {
Code: "InvalidArgument",
Description: "Invalid copy source bucket name",
HTTPStatusCode: http.StatusBadRequest,
},
ErrInvalidCopySourceObject: {
Code: "InvalidArgument",
Description: "Invalid copy source object key",
HTTPStatusCode: http.StatusBadRequest,
},
ErrInvalidCopySourceEncoding: {
Code: "InvalidArgument",
Description: "Invalid copy source encoding",
HTTPStatusCode: http.StatusBadRequest,
},
ErrInvalidTagKey: {
Code: "InvalidTag",
Description: "The TagKey you have provided is invalid",
@@ -517,6 +542,11 @@ var errorCodeResponse = map[ErrorCode]APIError{
Description: "The provided 'x-amz-content-sha256' header does not match what was computed.",
HTTPStatusCode: http.StatusBadRequest,
},
ErrInvalidSHA256Paylod: {
Code: "InvalidArgument",
Description: "x-amz-content-sha256 must be UNSIGNED-PAYLOAD, STREAMING-UNSIGNED-PAYLOAD-TRAILER, STREAMING-AWS4-HMAC-SHA256-PAYLOAD, STREAMING-AWS4-HMAC-SHA256-PAYLOAD-TRAILER, STREAMING-AWS4-ECDSA-P256-SHA256-PAYLOAD, STREAMING-AWS4-ECDSA-P256-SHA256-PAYLOAD-TRAILER or a valid sha256 value.",
HTTPStatusCode: http.StatusBadRequest,
},
ErrMissingContentLength: {
Code: "MissingContentLength",
Description: "You must provide the Content-Length HTTP header.",
@@ -633,8 +663,8 @@ var errorCodeResponse = map[ErrorCode]APIError{
HTTPStatusCode: http.StatusForbidden,
},
ErrInvalidBucketAclWithObjectOwnership: {
Code: "ErrInvalidBucketAclWithObjectOwnership",
Description: "Bucket cannot have ACLs set with ObjectOwnership's BucketOwnerEnforced setting.",
Code: "InvalidBucketAclWithObjectOwnership",
Description: "Bucket cannot have ACLs set with ObjectOwnership's BucketOwnerEnforced setting",
HTTPStatusCode: http.StatusBadRequest,
},
ErrBothCannedAndHeaderGrants: {
@@ -732,6 +762,36 @@ var errorCodeResponse = map[ErrorCode]APIError{
Description: "Bad Request",
HTTPStatusCode: http.StatusBadRequest,
},
ErrMissingUploadId: {
Code: "InvalidArgument",
Description: "This operation does not accept partNumber without uploadId",
HTTPStatusCode: http.StatusBadRequest,
},
ErrNoSuchCORSConfiguration: {
Code: "NoSuchCORSConfiguration",
Description: "The CORS configuration does not exist",
HTTPStatusCode: http.StatusNotFound,
},
ErrCORSForbidden: {
Code: "AccessForbidden",
Description: "CORSResponse: This CORS request is not allowed. This is usually because the evalution of Origin, request method / Access-Control-Request-Method or Access-Control-Request-Headers are not whitelisted by the resource's CORS spec.",
HTTPStatusCode: http.StatusForbidden,
},
ErrMissingCORSOrigin: {
Code: "BadRequest",
Description: "Insufficient information. Origin request header needed.",
HTTPStatusCode: http.StatusBadRequest,
},
ErrCORSIsNotEnabled: {
Code: "AccessForbidden",
Description: "CORSResponse: CORS is not enabled for this bucket.",
HTTPStatusCode: http.StatusForbidden,
},
ErrNotModified: {
Code: "NotModified",
Description: "Not Modified",
HTTPStatusCode: http.StatusNotModified,
},
// non aws errors
ErrExistingObjectIsDirectory: {
@@ -888,7 +948,7 @@ func GetIncorrectMpObjectSizeErr(expected, actual int64) APIError {
}
}
func GetInvalidMpObjectSizeErr(val int64) APIError {
func GetNegatvieMpObjectSizeErr(val int64) APIError {
return APIError{
Code: "InvalidRequest",
Description: fmt.Sprintf("Value for x-amz-mp-object-size header is less than zero: '%v'", val),
@@ -896,6 +956,14 @@ func GetInvalidMpObjectSizeErr(val int64) APIError {
}
}
func GetInvalidMpObjectSizeErr(val string) APIError {
return APIError{
Code: "InvalidRequest",
Description: fmt.Sprintf("Value for x-amz-mp-object-size header is invalid: '%s'", val),
HTTPStatusCode: http.StatusBadRequest,
}
}
func CreateExceedingRangeErr(objSize int64) APIError {
return APIError{
Code: "InvalidArgument",
@@ -903,3 +971,35 @@ func CreateExceedingRangeErr(objSize int64) APIError {
HTTPStatusCode: http.StatusBadRequest,
}
}
func GetInvalidCORSHeaderErr(header string) APIError {
return APIError{
Code: "InvalidRequest",
Description: fmt.Sprintf(`AllowedHeader "%s" contains invalid character.`, header),
HTTPStatusCode: http.StatusBadRequest,
}
}
func GetInvalidCORSRequestHeaderErr(header string) APIError {
return APIError{
Code: "BadRequest",
Description: fmt.Sprintf(`Access-Control-Request-Headers "%s" contains invalid character.`, header),
HTTPStatusCode: http.StatusBadRequest,
}
}
func GetUnsopportedCORSMethodErr(method string) APIError {
return APIError{
Code: "InvalidRequest",
Description: fmt.Sprintf("Found unsupported HTTP method in CORS config. Unsupported method is %s", method),
HTTPStatusCode: http.StatusBadRequest,
}
}
func GetInvalidCORSMethodErr(method string) APIError {
return APIError{
Code: "BadRequest",
Description: fmt.Sprintf("Invalid Access-Control-Request-Method: %s", method),
HTTPStatusCode: http.StatusBadRequest,
}
}
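A hypothetical illustration of how the new ConditionalHeaders type and the ErrNotModified code could meet in a handler; the evaluation rules below are simplified and are not taken from the gateway's handlers, and the import paths are inferred from the repository layout shown in this diff:

package example

import (
	"time"

	"github.com/versity/versitygw/s3api/utils"
	"github.com/versity/versitygw/s3err"
)

// checkNotModified is a hypothetical helper: it maps a matching If-None-Match
// or a not-newer If-Modified-Since to the new ErrNotModified (HTTP 304).
func checkNotModified(hdrs utils.ConditionalHeaders, etag string, modTime time.Time) error {
	if hdrs.IfNoneMatch != nil && *hdrs.IfNoneMatch == etag {
		return s3err.GetAPIError(s3err.ErrNotModified)
	}
	if hdrs.IfModSince != nil && !modTime.After(*hdrs.IfModSince) {
		return s3err.GetAPIError(s3err.ErrNotModified)
	}
	return nil
}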

View File

@@ -72,9 +72,10 @@ type ConfigurationId string
// This field will be changed after implementing per bucket notifications
const (
ConfigurationIdKafka ConfigurationId = "kafka-global"
ConfigurationIdNats ConfigurationId = "nats-global"
ConfigurationIdWebhook ConfigurationId = "webhook-global"
ConfigurationIdKafka ConfigurationId = "kafka-global"
ConfigurationIdNats ConfigurationId = "nats-global"
ConfigurationIdWebhook ConfigurationId = "webhook-global"
ConfigurationIdRabbitMQ ConfigurationId = "rabbitmq-global"
)
type EventS3Data struct {
@@ -113,6 +114,9 @@ type EventConfig struct {
KafkaTopicKey string
NatsURL string
NatsTopic string
RabbitmqURL string
RabbitmqExchange string
RabbitmqRoutingKey string
WebhookURL string
FilterConfigFilePath string
}
@@ -133,6 +137,9 @@ func InitEventSender(cfg *EventConfig) (S3EventSender, error) {
case cfg.NatsURL != "":
evSender, err = InitNatsEventService(cfg.NatsURL, cfg.NatsTopic, filter)
fmt.Printf("initializing S3 Event Notifications with Nats. URL: %v, topic: %v\n", cfg.NatsURL, cfg.NatsTopic)
case cfg.RabbitmqURL != "":
evSender, err = InitRabbitmqEventService(cfg.RabbitmqURL, cfg.RabbitmqExchange, cfg.RabbitmqRoutingKey, filter)
fmt.Printf("initializing S3 Event Notifications with RabbitMQ. URL: %v, exchange: %v, routing key: %v\n", cfg.RabbitmqURL, cfg.RabbitmqExchange, cfg.RabbitmqRoutingKey)
default:
return nil, nil
}
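Wiring the new sender up mirrors the existing Kafka, NATS and webhook cases. A hedged configuration sketch: the s3event import path is assumed from the repository layout, and the broker URL, exchange and routing key are placeholders:

package main

import (
	"fmt"
	"log"

	"github.com/versity/versitygw/s3event"
)

func main() {
	cfg := &s3event.EventConfig{
		// placeholder broker details; only the field names come from the diff
		RabbitmqURL:        "amqp://guest:guest@localhost:5672/",
		RabbitmqExchange:   "s3-events",
		RabbitmqRoutingKey: "versitygw",
	}

	sender, err := s3event.InitEventSender(cfg)
	if err != nil {
		log.Fatalf("init rabbitmq event sender: %v", err)
	}
	fmt.Printf("event sender ready: %T\n", sender)
}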

Some files were not shown because too many files have changed in this diff.