Compare commits


30 Commits

Author SHA1 Message Date
jonaustin09
7b6486cdba feat: Fixes all struct field alignment issues; adds a betteralign check to the pipeline to verify struct field alignment 2024-11-08 13:25:21 -05:00
Ben McClelland
b4190f6749 Merge pull request #938 from versity/test/github_actions_speedup
Test/GitHub actions speedup
2024-11-05 12:16:03 -08:00
Luke McCrone
71f8e9a342 test: github-actions speedup, cleanup 2024-11-05 11:11:19 -08:00
Ben McClelland
ea33612799 Merge pull request #934 from versity/test/rest_get_bucket_tagging
Test/rest get bucket tagging
2024-11-05 11:10:46 -08:00
Ben McClelland
7ececea2c7 Merge pull request #939 from versity/dependabot/go_modules/dev-dependencies-fcb8b8a1c4
chore(deps): bump the dev-dependencies group with 2 updates
2024-11-04 15:01:25 -08:00
dependabot[bot]
8fc0c5a65e chore(deps): bump the dev-dependencies group with 2 updates
Bumps the dev-dependencies group with 2 updates: [github.com/smira/go-statsd](https://github.com/smira/go-statsd) and [github.com/AzureAD/microsoft-authentication-library-for-go](https://github.com/AzureAD/microsoft-authentication-library-for-go).


Updates `github.com/smira/go-statsd` from 1.3.3 to 1.3.4
- [Release notes](https://github.com/smira/go-statsd/releases)
- [Commits](https://github.com/smira/go-statsd/compare/v1.3.3...v1.3.4)

Updates `github.com/AzureAD/microsoft-authentication-library-for-go` from 1.2.2 to 1.2.3
- [Release notes](https://github.com/AzureAD/microsoft-authentication-library-for-go/releases)
- [Changelog](https://github.com/AzureAD/microsoft-authentication-library-for-go/blob/main/changelog.md)
- [Commits](https://github.com/AzureAD/microsoft-authentication-library-for-go/compare/v1.2.2...v1.2.3)

---
updated-dependencies:
- dependency-name: github.com/smira/go-statsd
  dependency-type: direct:production
  update-type: version-update:semver-patch
  dependency-group: dev-dependencies
- dependency-name: github.com/AzureAD/microsoft-authentication-library-for-go
  dependency-type: indirect
  update-type: version-update:semver-patch
  dependency-group: dev-dependencies
...

Signed-off-by: dependabot[bot] <support@github.com>
2024-11-04 22:01:44 +00:00
Ben McClelland
5d8899baf4 Merge pull request #936 from versity/fix/putbuckettagging-status
fix: Changes the PutBucketTagging action response status code from 20…
2024-11-04 08:46:44 -08:00
Ben McClelland
a708a5e272 Merge pull request #935 from versity/fix/getobjectattributes-attrs-header
fix: Adds a check to ensure the x-amz-object-attributes header is set…
2024-11-04 08:46:16 -08:00
jonaustin09
7bd32a2cfa fix: Changes the PutBucketTagging action response status code from 200 (OK) to 204 (No Content) 2024-10-31 18:30:07 -04:00
Luke McCrone
bbcd3642e7 test: REST tagging, cleanup 2024-10-31 21:46:09 +00:00
jonaustin09
66c13ef982 fix: Adds a check to ensure the x-amz-object-attributes header is set and non-empty. 2024-10-31 17:05:54 -04:00
Ben McClelland
96cf88b530 Merge pull request #930 from versity/test/rest_get_obj_attributes
Test/rest get obj attributes
2024-10-31 13:52:23 -07:00
Luke McCrone
68b82b8d08 test: REST GetObjectAttributes, cleanup 2024-10-31 17:53:10 +00:00
Ben McClelland
ab517e6f65 Merge pull request #912 from versity/test_cmdline_list_parts_three
test: REST multipart upload listing, completion
2024-10-31 10:48:35 -07:00
Luke McCrone
bf4fc71bba test: multipart upload REST testing (complete, list, upload) 2024-10-31 00:10:50 +00:00
Ben McClelland
7fdfecf7f9 Merge pull request #933 from versity/fix/getobjectattributes-fixes
fix: Changes GetObjectAttributes action xml encoding root element to …
2024-10-30 15:04:41 -07:00
jonaustin09
06e2f2183d fix: Changes GetObjectAttributes action xml encoding root element to GetObjectAttributesResponse. Adds input validation for x-amz-object-attributes header. Adds x-amz-delete-marker and x-amz-version-id headers for GetObjectAttributes action. Adds VersionId to the HeadObject response if it's not specified in the request 2024-10-30 15:42:15 -04:00
Ben McClelland
98eda968eb Merge pull request #929 from versity/ben/walk_ut 2024-10-29 18:27:43 -07:00
Ben McClelland
c90e8a7f67 chore: add non standard delimiter to walk unit tests 2024-10-29 09:18:34 -07:00
Ryan Hileman
3e04251609 fix: remove unnecessary parent dir traversal in backend.Walk()
The prefix can contain one or more parent directories. In this case it is not necessary to start traversal at the root dir. Instead start the directory traversal at the last directory component within the prefix.
2024-10-28 20:40:32 -07:00
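
A minimal sketch of the optimization (mirroring the backend.Walk() change further down in this compare): compute the traversal root from the prefix instead of always starting at ".". The helper name is hypothetical.

```go
package sketch

import "strings"

// walkRoot returns the directory to start fs.WalkDir from. For prefix
// "photos/2006/Feb", traversal can begin at "photos/2006" instead of
// the bucket root, skipping unrelated directory trees entirely.
func walkRoot(prefix string) string {
	root := "."
	if idx := strings.LastIndex(prefix, "/"); idx > 0 {
		root = prefix[:idx]
	}
	return root
}
```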
Ryan Hileman
56a2d04630 fix: use / for path separation on all platforms and speed up listing with delimiter
Uses / for path separation for all platforms including ones that have other native os path separators. This ensures client compatibility across server platforms.

Fixes a performance issue where ListObjects recursed into all subdirectories even when a delimiter was given. With a delimiter, only the contents at the top-level prefix are returned, and all other objects are represented below common prefixes. Since the common prefixes don't list their objects individually, there is no need to traverse the directories below the top-level list-objects contents.

Fixes #903
2024-10-28 17:15:52 -07:00
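
The delimiter fix boils down to one rule, visible in the Walk() diff later in this compare: once a directory path extends past the prefix and the remainder contains the delimiter, everything under it collapses into a single common prefix, so the walk need not descend. A condensed sketch (hypothetical function name, assuming "/"-separated paths):

```go
package sketch

import "strings"

// shouldCollapse reports whether directory path lies below the requested
// prefix and the remainder contains the delimiter, in which case its
// contents are fully represented by one CommonPrefix entry and the walk
// does not need to recurse into it.
func shouldCollapse(path, prefix, delimiter string) bool {
	return delimiter != "" &&
		strings.HasPrefix(path+"/", prefix) &&
		strings.Contains(strings.TrimPrefix(path+"/", prefix), delimiter)
}
```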
Ben McClelland
b6f1d20c24 Merge pull request #927 from versity/dependabot/go_modules/dev-dependencies-2b2f613808
chore(deps): bump the dev-dependencies group with 16 updates
2024-10-28 15:41:25 -07:00
Ben McClelland
2c1d0b362c Merge pull request #921 from lunixbochs/sendfile
make zero-copy GetObject possible via sendfile
2024-10-28 15:41:10 -07:00
Ryan Hileman
e7a6ce214b feat: make zero-copy GetObject possible via sendfile
From #919: this provides the *os.File handle to io.Copy() for the
case where the full file range is requested, so the *os.File type is
not hidden from io.Copy() optimizations.

This still requires the change to valyala/fasthttp#1889 to expose
the net.Conn similarly to enable the linux sendfile optimization.
2024-10-28 15:13:30 -07:00
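
A sketch of why the concrete type matters (handler shape hypothetical; the fasthttp wiring from valyala/fasthttp#1889 is not shown): io.Copy checks whether the destination implements io.ReaderFrom, and on Linux *net.TCPConn.ReadFrom can use sendfile(2) when the source is an *os.File, so the bytes never pass through user space.

```go
package sketch

import (
	"io"
	"net"
	"os"
)

// serveWholeObject copies an entire file to the client connection.
// With conn being a *net.TCPConn and f an unwrapped *os.File, io.Copy
// delegates to TCPConn.ReadFrom, which can use sendfile(2) on Linux.
func serveWholeObject(conn net.Conn, f *os.File) (int64, error) {
	return io.Copy(conn, f)
	// Wrapping f (e.g. in an io.SectionReader) hides the *os.File type
	// and forces buffered copies, which is why the gateway now returns
	// the bare file unless a byte range was requested.
}
```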
Ben McClelland
a53667cd75 Merge pull request #926 from versity/listbuckets-pagination
Listbuckets pagination
2024-10-28 15:07:54 -07:00
dependabot[bot]
5ce768745d chore(deps): bump the dev-dependencies group with 16 updates
Bumps the dev-dependencies group with 16 updates:

| Package | From | To |
| --- | --- | --- |
| [github.com/aws/aws-sdk-go-v2](https://github.com/aws/aws-sdk-go-v2) | `1.32.2` | `1.32.3` |
| [github.com/aws/aws-sdk-go-v2/service/s3](https://github.com/aws/aws-sdk-go-v2) | `1.66.0` | `1.66.2` |
| [github.com/valyala/fasthttp](https://github.com/valyala/fasthttp) | `1.56.0` | `1.57.0` |
| [github.com/aws/aws-sdk-go-v2/feature/ec2/imds](https://github.com/aws/aws-sdk-go-v2) | `1.16.17` | `1.16.18` |
| [github.com/aws/aws-sdk-go-v2/service/sso](https://github.com/aws/aws-sdk-go-v2) | `1.24.2` | `1.24.3` |
| [github.com/aws/aws-sdk-go-v2/service/ssooidc](https://github.com/aws/aws-sdk-go-v2) | `1.28.2` | `1.28.3` |
| [github.com/aws/aws-sdk-go-v2/service/sts](https://github.com/aws/aws-sdk-go-v2) | `1.32.2` | `1.32.3` |
| [github.com/aws/aws-sdk-go-v2/config](https://github.com/aws/aws-sdk-go-v2) | `1.28.0` | `1.28.1` |
| [github.com/aws/aws-sdk-go-v2/credentials](https://github.com/aws/aws-sdk-go-v2) | `1.17.41` | `1.17.42` |
| [github.com/aws/aws-sdk-go-v2/feature/s3/manager](https://github.com/aws/aws-sdk-go-v2) | `1.17.33` | `1.17.35` |
| [github.com/aws/aws-sdk-go-v2/internal/configsources](https://github.com/aws/aws-sdk-go-v2) | `1.3.21` | `1.3.22` |
| [github.com/aws/aws-sdk-go-v2/internal/endpoints/v2](https://github.com/aws/aws-sdk-go-v2) | `2.6.21` | `2.6.22` |
| [github.com/aws/aws-sdk-go-v2/internal/v4a](https://github.com/aws/aws-sdk-go-v2) | `1.3.21` | `1.3.22` |
| [github.com/aws/aws-sdk-go-v2/service/internal/checksum](https://github.com/aws/aws-sdk-go-v2) | `1.4.2` | `1.4.3` |
| [github.com/aws/aws-sdk-go-v2/service/internal/presigned-url](https://github.com/aws/aws-sdk-go-v2) | `1.12.2` | `1.12.3` |
| [github.com/aws/aws-sdk-go-v2/service/internal/s3shared](https://github.com/aws/aws-sdk-go-v2) | `1.18.2` | `1.18.3` |


Updates `github.com/aws/aws-sdk-go-v2` from 1.32.2 to 1.32.3
- [Release notes](https://github.com/aws/aws-sdk-go-v2/releases)
- [Commits](https://github.com/aws/aws-sdk-go-v2/compare/v1.32.2...v1.32.3)

Updates `github.com/aws/aws-sdk-go-v2/service/s3` from 1.66.0 to 1.66.2
- [Release notes](https://github.com/aws/aws-sdk-go-v2/releases)
- [Commits](https://github.com/aws/aws-sdk-go-v2/compare/service/s3/v1.66.0...service/s3/v1.66.2)

Updates `github.com/valyala/fasthttp` from 1.56.0 to 1.57.0
- [Release notes](https://github.com/valyala/fasthttp/releases)
- [Commits](https://github.com/valyala/fasthttp/compare/v1.56.0...v1.57.0)

Updates `github.com/aws/aws-sdk-go-v2/feature/ec2/imds` from 1.16.17 to 1.16.18
- [Release notes](https://github.com/aws/aws-sdk-go-v2/releases)
- [Commits](https://github.com/aws/aws-sdk-go-v2/compare/service/ram/v1.16.17...service/ram/v1.16.18)

Updates `github.com/aws/aws-sdk-go-v2/service/sso` from 1.24.2 to 1.24.3
- [Release notes](https://github.com/aws/aws-sdk-go-v2/releases)
- [Commits](https://github.com/aws/aws-sdk-go-v2/compare/service/pi/v1.24.2...service/pi/v1.24.3)

Updates `github.com/aws/aws-sdk-go-v2/service/ssooidc` from 1.28.2 to 1.28.3
- [Release notes](https://github.com/aws/aws-sdk-go-v2/releases)
- [Commits](https://github.com/aws/aws-sdk-go-v2/compare/service/pi/v1.28.2...service/pi/v1.28.3)

Updates `github.com/aws/aws-sdk-go-v2/service/sts` from 1.32.2 to 1.32.3
- [Release notes](https://github.com/aws/aws-sdk-go-v2/releases)
- [Commits](https://github.com/aws/aws-sdk-go-v2/compare/v1.32.2...v1.32.3)

Updates `github.com/aws/aws-sdk-go-v2/config` from 1.28.0 to 1.28.1
- [Release notes](https://github.com/aws/aws-sdk-go-v2/releases)
- [Commits](https://github.com/aws/aws-sdk-go-v2/compare/v1.28.0...config/v1.28.1)

Updates `github.com/aws/aws-sdk-go-v2/credentials` from 1.17.41 to 1.17.42
- [Release notes](https://github.com/aws/aws-sdk-go-v2/releases)
- [Commits](https://github.com/aws/aws-sdk-go-v2/compare/credentials/v1.17.41...credentials/v1.17.42)

Updates `github.com/aws/aws-sdk-go-v2/feature/s3/manager` from 1.17.33 to 1.17.35
- [Release notes](https://github.com/aws/aws-sdk-go-v2/releases)
- [Commits](https://github.com/aws/aws-sdk-go-v2/compare/credentials/v1.17.33...credentials/v1.17.35)

Updates `github.com/aws/aws-sdk-go-v2/internal/configsources` from 1.3.21 to 1.3.22
- [Release notes](https://github.com/aws/aws-sdk-go-v2/releases)
- [Commits](https://github.com/aws/aws-sdk-go-v2/compare/internal/ini/v1.3.21...internal/ini/v1.3.22)

Updates `github.com/aws/aws-sdk-go-v2/internal/endpoints/v2` from 2.6.21 to 2.6.22
- [Release notes](https://github.com/aws/aws-sdk-go-v2/releases)
- [Commits](https://github.com/aws/aws-sdk-go-v2/compare/internal/endpoints/v2.6.21...internal/endpoints/v2.6.22)

Updates `github.com/aws/aws-sdk-go-v2/internal/v4a` from 1.3.21 to 1.3.22
- [Release notes](https://github.com/aws/aws-sdk-go-v2/releases)
- [Commits](https://github.com/aws/aws-sdk-go-v2/compare/internal/ini/v1.3.21...internal/ini/v1.3.22)

Updates `github.com/aws/aws-sdk-go-v2/service/internal/checksum` from 1.4.2 to 1.4.3
- [Release notes](https://github.com/aws/aws-sdk-go-v2/releases)
- [Changelog](https://github.com/aws/aws-sdk-go-v2/blob/service/m2/v1.4.3/CHANGELOG.md)
- [Commits](https://github.com/aws/aws-sdk-go-v2/compare/service/mq/v1.4.2...service/m2/v1.4.3)

Updates `github.com/aws/aws-sdk-go-v2/service/internal/presigned-url` from 1.12.2 to 1.12.3
- [Release notes](https://github.com/aws/aws-sdk-go-v2/releases)
- [Commits](https://github.com/aws/aws-sdk-go-v2/compare/service/m2/v1.12.2...service/mq/v1.12.3)

Updates `github.com/aws/aws-sdk-go-v2/service/internal/s3shared` from 1.18.2 to 1.18.3
- [Release notes](https://github.com/aws/aws-sdk-go-v2/releases)
- [Changelog](https://github.com/aws/aws-sdk-go-v2/blob/config/v1.18.3/CHANGELOG.md)
- [Commits](https://github.com/aws/aws-sdk-go-v2/compare/config/v1.18.2...config/v1.18.3)

---
updated-dependencies:
- dependency-name: github.com/aws/aws-sdk-go-v2
  dependency-type: direct:production
  update-type: version-update:semver-patch
  dependency-group: dev-dependencies
- dependency-name: github.com/aws/aws-sdk-go-v2/service/s3
  dependency-type: direct:production
  update-type: version-update:semver-patch
  dependency-group: dev-dependencies
- dependency-name: github.com/valyala/fasthttp
  dependency-type: direct:production
  update-type: version-update:semver-minor
  dependency-group: dev-dependencies
- dependency-name: github.com/aws/aws-sdk-go-v2/feature/ec2/imds
  dependency-type: indirect
  update-type: version-update:semver-patch
  dependency-group: dev-dependencies
- dependency-name: github.com/aws/aws-sdk-go-v2/service/sso
  dependency-type: indirect
  update-type: version-update:semver-patch
  dependency-group: dev-dependencies
- dependency-name: github.com/aws/aws-sdk-go-v2/service/ssooidc
  dependency-type: indirect
  update-type: version-update:semver-patch
  dependency-group: dev-dependencies
- dependency-name: github.com/aws/aws-sdk-go-v2/service/sts
  dependency-type: indirect
  update-type: version-update:semver-patch
  dependency-group: dev-dependencies
- dependency-name: github.com/aws/aws-sdk-go-v2/config
  dependency-type: direct:production
  update-type: version-update:semver-patch
  dependency-group: dev-dependencies
- dependency-name: github.com/aws/aws-sdk-go-v2/credentials
  dependency-type: direct:production
  update-type: version-update:semver-patch
  dependency-group: dev-dependencies
- dependency-name: github.com/aws/aws-sdk-go-v2/feature/s3/manager
  dependency-type: direct:production
  update-type: version-update:semver-patch
  dependency-group: dev-dependencies
- dependency-name: github.com/aws/aws-sdk-go-v2/internal/configsources
  dependency-type: indirect
  update-type: version-update:semver-patch
  dependency-group: dev-dependencies
- dependency-name: github.com/aws/aws-sdk-go-v2/internal/endpoints/v2
  dependency-type: indirect
  update-type: version-update:semver-patch
  dependency-group: dev-dependencies
- dependency-name: github.com/aws/aws-sdk-go-v2/internal/v4a
  dependency-type: indirect
  update-type: version-update:semver-patch
  dependency-group: dev-dependencies
- dependency-name: github.com/aws/aws-sdk-go-v2/service/internal/checksum
  dependency-type: indirect
  update-type: version-update:semver-patch
  dependency-group: dev-dependencies
- dependency-name: github.com/aws/aws-sdk-go-v2/service/internal/presigned-url
  dependency-type: indirect
  update-type: version-update:semver-patch
  dependency-group: dev-dependencies
- dependency-name: github.com/aws/aws-sdk-go-v2/service/internal/s3shared
  dependency-type: indirect
  update-type: version-update:semver-patch
  dependency-group: dev-dependencies
...

Signed-off-by: dependabot[bot] <support@github.com>
2024-10-28 22:05:35 +00:00
jonaustin09
24fea307ba Merge branch 'main' of github.com:versity/versitygw into listbuckets-pagination 2024-10-28 16:26:42 -04:00
jonaustin09
4d6ec783bf feat: Implements pagination for ListBuckets 2024-10-28 16:26:08 -04:00
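
A hedged client-side sketch of the new behavior (client construction omitted; field names taken from the aws-sdk-go-v2 s3.ListBucketsInput used elsewhere in this compare): callers can now page through buckets with MaxBuckets and ContinuationToken.

```go
package sketch

import (
	"context"

	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/service/s3"
)

// listAllBuckets pages through every bucket, 100 per request.
func listAllBuckets(ctx context.Context, client *s3.Client) ([]string, error) {
	var names []string
	var token *string
	for {
		out, err := client.ListBuckets(ctx, &s3.ListBucketsInput{
			MaxBuckets:        aws.Int32(100),
			ContinuationToken: token,
		})
		if err != nil {
			return nil, err
		}
		for _, b := range out.Buckets {
			names = append(names, *b.Name)
		}
		if out.ContinuationToken == nil {
			break // no more pages
		}
		token = out.ContinuationToken
	}
	return names, nil
}
```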
Ben McClelland
c2f6e48bf6 Merge pull request #924 from versity/ben/update_example_config
chore: update example service config for directory perms option
2024-10-28 11:02:39 -07:00
Ben McClelland
565000c3e7 chore: update example service config for directory perms option
This was missed when we added an option for setting directory permissions
different from the default 0755. This adds the VGW_DIR_PERMS option and its
description to the example.conf file.
2024-10-28 09:33:50 -07:00
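
A hedged example of what the new example.conf entry might look like (wording assumed, not copied from the repo):

```
# VGW_DIR_PERMS: octal permissions for directories the gateway creates
# when objects are uploaded; defaults to 0755 if unset
#VGW_DIR_PERMS=0755
```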
95 changed files with 3222 additions and 1402 deletions

.github/workflows/betteralign.yml

@@ -0,0 +1,23 @@
name: betteralign
on: pull_request
jobs:
build:
name: Check
runs-on: ubuntu-latest
steps:
- name: Checkout
uses: actions/checkout@v4
with:
fetch-depth: 1
- name: Set up Go
uses: actions/setup-go@v5
with:
go-version: "stable"
id: go
- name: Install betteralign
run: go install github.com/dkorunic/betteralign/cmd/betteralign@latest
- name: Run betteralign
run: betteralign -test_files ./...
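
The same check can be reproduced locally with the commands from the workflow; per the dkorunic/betteralign README, an -apply flag can additionally rewrite field order in place:

```sh
go install github.com/dkorunic/betteralign/cmd/betteralign@latest
betteralign -test_files ./...   # report misaligned structs, as CI does
betteralign -apply ./...        # optionally fix field order in place
```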


@@ -8,138 +8,96 @@ jobs:
fail-fast: false
matrix:
include:
- set: "s3cmd, posix"
LOCAL_FOLDER: /tmp/gw1
BUCKET_ONE_NAME: versity-gwtest-bucket-one-1
BUCKET_TWO_NAME: versity-gwtest-bucket-two-1
- set: "mc, posix, non-file count, non-static, folder IAM"
IAM_TYPE: folder
USERS_FOLDER: /tmp/iam1
AWS_ENDPOINT_URL: https://127.0.0.1:7070
RUN_SET: "s3cmd"
RUN_SET: "mc-non-file-count"
RECREATE_BUCKETS: "true"
PORT: 7070
BACKEND: "posix"
- set: "s3, posix"
LOCAL_FOLDER: /tmp/gw2
BUCKET_ONE_NAME: versity-gwtest-bucket-one-2
BUCKET_TWO_NAME: versity-gwtest-bucket-two-2
- set: "mc, posix, file count, non-static, folder IAM"
IAM_TYPE: folder
USERS_FOLDER: /tmp/iam2
AWS_ENDPOINT_URL: https://127.0.0.1:7071
RUN_SET: "s3"
RUN_SET: "mc-file-count"
RECREATE_BUCKETS: "true"
PORT: 7071
BACKEND: "posix"
- set: "s3api non-policy, posix"
LOCAL_FOLDER: /tmp/gw3
BUCKET_ONE_NAME: versity-gwtest-bucket-one-3
BUCKET_TWO_NAME: versity-gwtest-bucket-two-3
- set: "REST, posix, non-static, all, folder IAM"
IAM_TYPE: folder
USERS_FOLDER: /tmp/iam3
AWS_ENDPOINT_URL: https://127.0.0.1:7072
RUN_SET: "s3api"
RECREATE_BUCKETS: "true"
PORT: 7072
BACKEND: "posix"
- set: "mc, posix"
LOCAL_FOLDER: /tmp/gw4
BUCKET_ONE_NAME: versity-gwtest-bucket-one-4
BUCKET_TWO_NAME: versity-gwtest-bucket-two-4
IAM_TYPE: folder
USERS_FOLDER: /tmp/iam4
AWS_ENDPOINT_URL: https://127.0.0.1:7073
RUN_SET: "mc"
RECREATE_BUCKETS: "true"
PORT: 7073
BACKEND: "posix"
- set: "s3api-user, posix, s3 IAM"
LOCAL_FOLDER: /tmp/gw5
BUCKET_ONE_NAME: versity-gwtest-bucket-one-5
BUCKET_TWO_NAME: versity-gwtest-bucket-two-5
IAM_TYPE: s3
USERS_BUCKET: versity-gwtest-iam
AWS_ENDPOINT_URL: https://127.0.0.1:7074
RUN_SET: "s3api-user"
RECREATE_BUCKETS: "true"
PORT: 7074
BACKEND: "posix"
- set: "s3api non-policy, static buckets"
LOCAL_FOLDER: /tmp/gw6
BUCKET_ONE_NAME: versity-gwtest-bucket-one-6
BUCKET_TWO_NAME: versity-gwtest-bucket-two-6
IAM_TYPE: folder
USERS_FOLDER: /tmp/iam6
AWS_ENDPOINT_URL: https://127.0.0.1:7075
RUN_SET: "s3api-non-policy"
RECREATE_BUCKETS: "false"
PORT: 7075
BACKEND: "posix"
- set: "s3api non-policy, s3 backend"
LOCAL_FOLDER: /tmp/gw7
BUCKET_ONE_NAME: versity-gwtest-bucket-one-7
BUCKET_TWO_NAME: versity-gwtest-bucket-two-7
IAM_TYPE: folder
USERS_FOLDER: /tmp/iam7
AWS_ENDPOINT_URL: https://127.0.0.1:7076
RUN_SET: "s3api"
RECREATE_BUCKETS: "true"
PORT: 7076
BACKEND: "s3"
- set: "REST, posix"
LOCAL_FOLDER: /tmp/gw8
BUCKET_ONE_NAME: versity-gwtest-bucket-one-7
BUCKET_TWO_NAME: versity-gwtest-bucket-two-7
IAM_TYPE: folder
USERS_FOLDER: /tmp/iam8
AWS_ENDPOINT_URL: https://127.0.0.1:7077
RUN_SET: "rest"
RECREATE_BUCKETS: "true"
PORT: 7077
BACKEND: "posix"
- set: "s3api policy, static buckets"
LOCAL_FOLDER: /tmp/gw9
BUCKET_ONE_NAME: versity-gwtest-bucket-one-8
BUCKET_TWO_NAME: versity-gwtest-bucket-two-8
- set: "s3, posix, non-file count, non-static, folder IAM"
IAM_TYPE: folder
RUN_SET: "s3-non-file-count"
RECREATE_BUCKETS: "true"
BACKEND: "posix"
- set: "s3, posix, file count, non-static, folder IAM"
IAM_TYPE: folder
RUN_SET: "s3-file-count"
RECREATE_BUCKETS: "true"
BACKEND: "posix"
- set: "s3api, posix, bucket|object|multipart, non-static, folder IAM"
IAM_TYPE: folder
RUN_SET: "s3api-bucket,s3api-object,s3api-multipart"
RECREATE_BUCKETS: "true"
BACKEND: "posix"
- set: "s3api, posix, policy, non-static, folder IAM"
IAM_TYPE: folder
RUN_SET: "s3api-policy"
RECREATE_BUCKETS: "true"
BACKEND: "posix"
- set: "s3api, posix, user, non-static, s3 IAM"
IAM_TYPE: s3
RUN_SET: "s3api-user"
RECREATE_BUCKETS: "true"
BACKEND: "posix"
- set: "s3api, posix, bucket, static, folder IAM"
IAM_TYPE: folder
RUN_SET: "s3api-bucket"
RECREATE_BUCKETS: "false"
BACKEND: "posix"
- set: "s3api, posix, multipart, static, folder IAM"
IAM_TYPE: folder
RUN_SET: "s3api-multipart"
RECREATE_BUCKETS: "false"
BACKEND: "posix"
- set: "s3api, posix, object, static, folder IAM"
IAM_TYPE: folder
RUN_SET: "s3api-object"
RECREATE_BUCKETS: "false"
BACKEND: "posix"
- set: "s3api, posix, policy, static, folder IAM"
IAM_TYPE: folder
USERS_FOLDER: /tmp/iam9
AWS_ENDPOINT_URL: https://127.0.0.1:7078
RUN_SET: "s3api-policy"
RECREATE_BUCKETS: "false"
PORT: 7078
BACKEND: "posix"
- set: "s3api user, static buckets"
LOCAL_FOLDER: /tmp/gw10
BUCKET_ONE_NAME: versity-gwtest-bucket-one-9
BUCKET_TWO_NAME: versity-gwtest-bucket-two-9
- set: "s3api, posix, user, static, folder IAM"
IAM_TYPE: folder
USERS_FOLDER: /tmp/iam10
AWS_ENDPOINT_URL: https://127.0.0.1:7079
RUN_SET: "s3api-user"
RECREATE_BUCKETS: "false"
PORT: 7079
BACKEND: "posix"
- set: "s3api policy and user, posix"
LOCAL_FOLDER: /tmp/gw11
BUCKET_ONE_NAME: versity-gwtest-bucket-one-10
BUCKET_TWO_NAME: versity-gwtest-bucket-two-10
- set: "s3api, s3, multipart|object, non-static, folder IAM"
IAM_TYPE: folder
USERS_FOLDER: /tmp/iam11
AWS_ENDPOINT_URL: https://127.0.0.1:7080
RUN_SET: "s3api-policy,s3api-user"
RUN_SET: "s3api-bucket,s3api-object,s3api-multipart"
RECREATE_BUCKETS: "true"
PORT: 7080
BACKEND: "posix"
- set: "s3api policy and user, s3 backend"
LOCAL_FOLDER: /tmp/gw12
BUCKET_ONE_NAME: versity-gwtest-bucket-one-11
BUCKET_TWO_NAME: versity-gwtest-bucket-two-11
IAM_TYPE: folder
USERS_FOLDER: /tmp/iam12
AWS_ENDPOINT_URL: https://127.0.0.1:7081
RUN_SET: "s3api-policy,s3api-user"
RECREATE_BUCKETS: "true"
PORT: 7081
BACKEND: "s3"
- set: "s3api, s3, policy|user, non-static, folder IAM"
IAM_TYPE: folder
RUN_SET: "s3api-policy,s3api-user"
RECREATE_BUCKETS: "true"
BACKEND: "s3"
- set: "s3cmd, posix, file count, non-static, folder IAM"
IAM_TYPE: folder
RUN_SET: "s3cmd-file-count"
RECREATE_BUCKETS: "true"
BACKEND: "posix"
- set: "s3cmd, posix, non-user, non-static, folder IAM"
IAM_TYPE: folder
RUN_SET: "s3cmd-non-user"
RECREATE_BUCKETS: "true"
BACKEND: "posix"
- set: "s3cmd, posix, user, non-static, folder IAM"
IAM_TYPE: folder
RUN_SET: "s3cmd-user"
RECREATE_BUCKETS: "true"
BACKEND: "posix"
steps:
- name: Check out code into the Go module directory
uses: actions/checkout@v4
@@ -176,15 +134,8 @@ jobs:
- name: Build and run
env:
LOCAL_FOLDER: ${{ matrix.LOCAL_FOLDER }}
BUCKET_ONE_NAME: ${{ matrix.BUCKET_ONE_NAME }}
BUCKET_TWO_NAME: ${{ matrix.BUCKET_TWO_NAME }}
USERS_FOLDER: ${{ matrix.USERS_FOLDER }}
USERS_BUCKET: ${{ matrix.USERS_BUCKET }}
IAM_TYPE: ${{ matrix.IAM_TYPE }}
AWS_ENDPOINT_URL: ${{ matrix.AWS_ENDPOINT_URL }}
RUN_SET: ${{ matrix.RUN_SET }}
PORT: ${{ matrix.PORT }}
AWS_PROFILE: versity
VERSITY_EXE: ${{ github.workspace }}/versitygw
RUN_VERSITYGW: true
@@ -192,6 +143,13 @@ jobs:
RECREATE_BUCKETS: ${{ matrix.RECREATE_BUCKETS }}
CERT: ${{ github.workspace }}/cert.pem
KEY: ${{ github.workspace }}/versitygw.pem
LOCAL_FOLDER: /tmp/gw
BUCKET_ONE_NAME: versity-gwtest-bucket-one
BUCKET_TWO_NAME: versity-gwtest-bucket-two
USERS_FOLDER: /tmp/iam
USERS_BUCKET: versity-gwtest-iam
AWS_ENDPOINT_URL: https://127.0.0.1:7070
PORT: 7070
S3CMD_CONFIG: tests/s3cfg.local.default
MC_ALIAS: versity
LOG_LEVEL: 4
@@ -204,6 +162,7 @@ jobs:
REMOVE_TEST_FILE_FOLDER: true
VERSIONING_DIR: ${{ github.workspace }}/versioning
COMMAND_LOG: command.log
TIME_LOG: time.log
run: |
make testbin
export AWS_ACCESS_KEY_ID=ABCDEFGHIJKLMNOPQRST
@@ -224,6 +183,9 @@ jobs:
fi
BYPASS_ENV_FILE=true ${{ github.workspace }}/tests/run.sh $RUN_SET
- name: Time report
run: cat ${{ github.workspace }}/time.log
- name: Coverage report
run: |
go tool covdata percent -i=cover


@@ -45,18 +45,18 @@ type GetBucketAclOutput struct {
type PutBucketAclInput struct {
Bucket *string
ACL types.BucketCannedACL
AccessControlPolicy *AccessControlPolicy
GrantFullControl *string
GrantRead *string
GrantReadACP *string
GrantWrite *string
GrantWriteACP *string
ACL types.BucketCannedACL
}
type AccessControlPolicy struct {
AccessControlList AccessControlList `xml:"AccessControlList"`
Owner *types.Owner
AccessControlList AccessControlList `xml:"AccessControlList"`
}
type AccessControlList struct {
@@ -352,13 +352,13 @@ func IsAdminOrOwner(acct Account, isRoot bool, acl ACL) error {
}
type AccessOptions struct {
Acl ACL
AclPermission types.Permission
IsRoot bool
Acc Account
Bucket string
Object string
Action Action
Acl ACL
Acc Account
IsRoot bool
Readonly bool
}
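
The reorderings above (and in the files that follow) apply betteralign's rule of placing larger fields before smaller ones so Go inserts less padding. A minimal illustration with hypothetical types:

```go
package main

import (
	"fmt"
	"unsafe"
)

// badly ordered: each bool next to an int64 forces 7 bytes of padding
type loose struct {
	a bool  // 1 byte + 7 bytes padding
	n int64 // 8 bytes
	b bool  // 1 byte + 7 bytes trailing padding
}

// largest fields first: padding shrinks to the trailing 6 bytes
type packed struct {
	n int64 // 8 bytes
	a bool  // 1 byte
	b bool  // 1 byte + 6 bytes trailing padding
}

func main() {
	fmt.Println(unsafe.Sizeof(loose{}), unsafe.Sizeof(packed{})) // 24 16
}
```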


@@ -63,10 +63,10 @@ func (bp *BucketPolicy) isAllowed(principal string, action Action, resource stri
}
type BucketPolicyItem struct {
Effect BucketPolicyAccessType `json:"Effect"`
Principals Principals `json:"Principal"`
Actions Actions `json:"Action"`
Resources Resources `json:"Resource"`
Effect BucketPolicyAccessType `json:"Effect"`
}
func (bpi *BucketPolicyItem) Validate(bucket string, iam IAMService) error {


@@ -93,7 +93,6 @@ var (
)
type Opts struct {
RootAccount Account
Dir string
LDAPServerURL string
LDAPBindDN string
@@ -119,11 +118,12 @@ type Opts struct {
S3Region string
S3Bucket string
S3Endpoint string
RootAccount Account
CacheTTL int
CachePrune int
S3DisableSSlVerfiy bool
S3Debug bool
CacheDisable bool
CacheTTL int
CachePrune int
}
func New(o *Opts) (IAMService, error) {


@@ -36,14 +36,14 @@ type IAMCache struct {
var _ IAMService = &IAMCache{}
type item struct {
value Account
exp time.Time
value Account
}
type icache struct {
sync.RWMutex
expire time.Duration
items map[string]item
expire time.Duration
sync.RWMutex
}
func (i *icache) set(k string, v Account) {


@@ -33,6 +33,8 @@ const (
// IAMServiceInternal manages the internal IAM service
type IAMServiceInternal struct {
dir string
rootAcc Account
// This mutex will help with racing updates to the IAM data
// from multiple requests to this gateway instance, but
// will not help with racing updates to multiple load balanced
@@ -40,8 +42,6 @@ type IAMServiceInternal struct {
// IAM service. All account updates should be sent to a single
// gateway instance if possible.
sync.RWMutex
dir string
rootAcc Account
}
// UpdateAcctFunc accepts the current data and returns the new data to be stored


@@ -42,6 +42,14 @@ import (
// coming from iAMConfig and iamFile in iam_internal.
type IAMServiceS3 struct {
client *s3.Client
access string
secret string
region string
bucket string
endpoint string
rootAcc Account
// This mutex will help with racing updates to the IAM data
// from multiple requests to this gateway instance, but
// will not help with racing updates to multiple load balanced
@@ -50,15 +58,8 @@ type IAMServiceS3 struct {
// gateway instance if possible.
sync.RWMutex
access string
secret string
region string
bucket string
endpoint string
sslSkipVerify bool
debug bool
rootAcc Account
client *s3.Client
}
var _ IAMService = &IAMServiceS3{}


@@ -29,9 +29,9 @@ import (
)
type BucketLockConfig struct {
Enabled bool
DefaultRetention *types.DefaultRetention
CreatedAt *time.Time
Enabled bool
}
func ParseBucketLockConfigurationInput(input []byte) ([]byte, error) {


@@ -85,6 +85,10 @@ type keyDerivator interface {
// SignerOptions is the SigV4 Signer options.
type SignerOptions struct {
// The logger to send log messages to.
Logger logging.Logger
// Disables the Signer's moving HTTP header key/value pairs from the HTTP
// request header to the request's query string. This is most commonly used
// with pre-signed requests preventing headers from being added to the
@@ -100,9 +104,6 @@ type SignerOptions struct {
// http://docs.aws.amazon.com/general/latest/gr/sigv4-create-canonical-request.html
DisableURIPathEscaping bool
// The logger to send log messages to.
Logger logging.Logger
// Enable logging of signed requests.
// This will enable logging of the canonical request, the string to sign, and for presigning the subsequent
// presigned URL.
@@ -117,8 +118,8 @@ type SignerOptions struct {
// Signer applies AWS v4 signing to given request. Use this to sign requests
// that need to be signed with AWS V4 Signatures.
type Signer struct {
options SignerOptions
keyDerivator keyDerivator
options SignerOptions
}
// NewSigner returns a new SigV4 Signer
@@ -133,17 +134,19 @@ func NewSigner(optFns ...func(signer *SignerOptions)) *Signer {
}
type httpSigner struct {
KeyDerivator keyDerivator
Request *http.Request
Credentials aws.Credentials
Time v4Internal.SigningTime
ServiceName string
Region string
Time v4Internal.SigningTime
Credentials aws.Credentials
KeyDerivator keyDerivator
IsPreSign bool
SignedHdrs []string
PayloadHash string
SignedHdrs []string
IsPreSign bool
DisableHeaderHoisting bool
DisableURIPathEscaping bool
DisableSessionToken bool


@@ -149,8 +149,8 @@ func (az *Azure) String() string {
func (az *Azure) CreateBucket(ctx context.Context, input *s3.CreateBucketInput, acl []byte) error {
meta := map[string]*string{
string(keyAclCapital): backend.GetStringPtr(encodeBytes(acl)),
string(keyOwnership): backend.GetStringPtr(encodeBytes([]byte(input.ObjectOwnership))),
string(keyAclCapital): backend.GetPtrFromString(encodeBytes(acl)),
string(keyOwnership): backend.GetPtrFromString(encodeBytes([]byte(input.ObjectOwnership))),
}
acct, ok := ctx.Value("account").(auth.Account)
@@ -170,7 +170,7 @@ func (az *Azure) CreateBucket(ctx context.Context, input *s3.CreateBucketInput,
return fmt.Errorf("parse default bucket lock state: %w", err)
}
meta[string(keyBucketLock)] = backend.GetStringPtr(encodeBytes(defaultLockParsed))
meta[string(keyBucketLock)] = backend.GetPtrFromString(encodeBytes(defaultLockParsed))
}
_, err := az.client.CreateContainer(ctx, *input.Bucket, &container.CreateOptions{Metadata: meta})
@@ -195,48 +195,56 @@ func (az *Azure) CreateBucket(ctx context.Context, input *s3.CreateBucketInput,
return azureErrToS3Err(err)
}
func (az *Azure) ListBuckets(ctx context.Context, owner string, isAdmin bool) (s3response.ListAllMyBucketsResult, error) {
func (az *Azure) ListBuckets(ctx context.Context, input s3response.ListBucketsInput) (s3response.ListAllMyBucketsResult, error) {
fmt.Printf("%+v\n", input)
pager := az.client.NewListContainersPager(
&service.ListContainersOptions{
Include: service.ListContainersInclude{
Metadata: true,
},
Marker: &input.ContinuationToken,
MaxResults: &input.MaxBuckets,
Prefix: &input.Prefix,
})
var buckets []s3response.ListAllMyBucketsEntry
var result s3response.ListAllMyBucketsResult
result := s3response.ListAllMyBucketsResult{
Prefix: input.Prefix,
}
for pager.More() {
resp, err := pager.NextPage(ctx)
if err != nil {
return result, azureErrToS3Err(err)
}
for _, v := range resp.ContainerItems {
if isAdmin {
resp, err := pager.NextPage(ctx)
if err != nil {
return result, azureErrToS3Err(err)
}
for _, v := range resp.ContainerItems {
if input.IsAdmin {
buckets = append(buckets, s3response.ListAllMyBucketsEntry{
Name: *v.Name,
// TODO: using modification date here instead of creation, is that ok?
CreationDate: *v.Properties.LastModified,
})
} else {
acl, err := getAclFromMetadata(v.Metadata, keyAclLower)
if err != nil {
return result, err
}
if acl.Owner == input.Owner {
buckets = append(buckets, s3response.ListAllMyBucketsEntry{
Name: *v.Name,
// TODO: using modification date here instead of creation, is that ok?
CreationDate: *v.Properties.LastModified,
})
} else {
acl, err := getAclFromMetadata(v.Metadata, keyAclLower)
if err != nil {
return result, err
}
if acl.Owner == owner {
buckets = append(buckets, s3response.ListAllMyBucketsEntry{
Name: *v.Name,
// TODO: using modification date here instead of creation, is that ok?
CreationDate: *v.Properties.LastModified,
})
}
}
}
}
if resp.NextMarker != nil {
result.ContinuationToken = *resp.NextMarker
}
result.Buckets.Bucket = buckets
result.Owner.ID = owner
result.Owner.ID = input.Owner
return result, nil
}
@@ -303,13 +311,13 @@ func (az *Azure) PutObject(ctx context.Context, po *s3.PutObjectInput) (s3respon
opts.HTTPHeaders.BlobContentDisposition = po.ContentDisposition
if strings.HasSuffix(*po.Key, "/") {
// Hardcode "application/x-directory" for direcoty objects
opts.HTTPHeaders.BlobContentType = backend.GetStringPtr(backend.DirContentType)
opts.HTTPHeaders.BlobContentType = backend.GetPtrFromString(backend.DirContentType)
} else {
opts.HTTPHeaders.BlobContentType = po.ContentType
}
if opts.HTTPHeaders.BlobContentType == nil {
opts.HTTPHeaders.BlobContentType = backend.GetStringPtr(backend.DefaultContentType)
opts.HTTPHeaders.BlobContentType = backend.GetPtrFromString(backend.DefaultContentType)
}
uploadResp, err := az.client.UploadStream(ctx, *po.Bucket, *po.Key, po.Body, opts)
@@ -408,7 +416,7 @@ func (az *Azure) GetObject(ctx context.Context, input *s3.GetObjectInput) (*s3.G
contentType := blobDownloadResponse.ContentType
if contentType == nil {
contentType = backend.GetStringPtr(backend.DefaultContentType)
contentType = backend.GetPtrFromString(backend.DefaultContentType)
}
return &s3.GetObjectOutput{
@@ -504,20 +512,22 @@ func (az *Azure) HeadObject(ctx context.Context, input *s3.HeadObjectInput) (*s3
return result, nil
}
func (az *Azure) GetObjectAttributes(ctx context.Context, input *s3.GetObjectAttributesInput) (s3response.GetObjectAttributesResult, error) {
func (az *Azure) GetObjectAttributes(ctx context.Context, input *s3.GetObjectAttributesInput) (s3response.GetObjectAttributesResponse, error) {
data, err := az.HeadObject(ctx, &s3.HeadObjectInput{
Bucket: input.Bucket,
Key: input.Key,
})
if err != nil {
return s3response.GetObjectAttributesResult{}, err
return s3response.GetObjectAttributesResponse{}, err
}
return s3response.GetObjectAttributesResult{
return s3response.GetObjectAttributesResponse{
ETag: data.ETag,
LastModified: data.LastModified,
ObjectSize: data.ContentLength,
StorageClass: data.StorageClass,
LastModified: data.LastModified,
VersionId: data.VersionId,
DeleteMarker: data.DeleteMarker,
}, nil
}
@@ -712,8 +722,8 @@ func (az *Azure) DeleteObjects(ctx context.Context, input *s3.DeleteObjectsInput
} else {
errs = append(errs, types.Error{
Key: obj.Key,
Code: backend.GetStringPtr("InternalError"),
Message: backend.GetStringPtr(err.Error()),
Code: backend.GetPtrFromString("InternalError"),
Message: backend.GetPtrFromString(err.Error()),
})
}
}
@@ -849,7 +859,7 @@ func (az *Azure) CreateMultipartUpload(ctx context.Context, input *s3.CreateMult
// set blob legal hold status in metadata
if input.ObjectLockLegalHoldStatus == types.ObjectLockLegalHoldStatusOn {
meta[string(keyObjLegalHold)] = backend.GetStringPtr("1")
meta[string(keyObjLegalHold)] = backend.GetPtrFromString("1")
}
// set blob retention date
@@ -862,7 +872,7 @@ func (az *Azure) CreateMultipartUpload(ctx context.Context, input *s3.CreateMult
if err != nil {
return s3response.InitiateMultipartUploadResult{}, azureErrToS3Err(err)
}
meta[string(keyObjRetention)] = backend.GetStringPtr(string(retParsed))
meta[string(keyObjRetention)] = backend.GetPtrFromString(string(retParsed))
}
uploadId := uuid.New().String()
@@ -1319,12 +1329,12 @@ func (az *Azure) PutObjectRetention(ctx context.Context, bucket, object, version
meta := blobProps.Metadata
if meta == nil {
meta = map[string]*string{
string(keyObjRetention): backend.GetStringPtr(string(retention)),
string(keyObjRetention): backend.GetPtrFromString(string(retention)),
}
} else {
objLockCfg, ok := meta[string(keyObjRetention)]
if !ok {
meta[string(keyObjRetention)] = backend.GetStringPtr(string(retention))
meta[string(keyObjRetention)] = backend.GetPtrFromString(string(retention))
} else {
var lockCfg types.ObjectLockRetention
if err := json.Unmarshal([]byte(*objLockCfg), &lockCfg); err != nil {
@@ -1342,7 +1352,7 @@ func (az *Azure) PutObjectRetention(ctx context.Context, bucket, object, version
}
}
meta[string(keyObjRetention)] = backend.GetStringPtr(string(retention))
meta[string(keyObjRetention)] = backend.GetPtrFromString(string(retention))
}
}
@@ -1690,7 +1700,7 @@ func (az *Azure) setContainerMetaData(ctx context.Context, bucket, key string, v
}
str := encodeBytes(value)
mdmap[key] = backend.GetStringPtr(str)
mdmap[key] = backend.GetPtrFromString(str)
_, err = client.SetMetadata(ctx, &container.SetMetadataOptions{Metadata: mdmap})
if err != nil {


@@ -32,7 +32,7 @@ type Backend interface {
Shutdown()
// bucket operations
ListBuckets(_ context.Context, owner string, isAdmin bool) (s3response.ListAllMyBucketsResult, error)
ListBuckets(context.Context, s3response.ListBucketsInput) (s3response.ListAllMyBucketsResult, error)
HeadBucket(context.Context, *s3.HeadBucketInput) (*s3.HeadBucketOutput, error)
GetBucketAcl(context.Context, *s3.GetBucketAclInput) ([]byte, error)
CreateBucket(_ context.Context, _ *s3.CreateBucketInput, defaultACL []byte) error
@@ -61,7 +61,7 @@ type Backend interface {
HeadObject(context.Context, *s3.HeadObjectInput) (*s3.HeadObjectOutput, error)
GetObject(context.Context, *s3.GetObjectInput) (*s3.GetObjectOutput, error)
GetObjectAcl(context.Context, *s3.GetObjectAclInput) (*s3.GetObjectAclOutput, error)
GetObjectAttributes(context.Context, *s3.GetObjectAttributesInput) (s3response.GetObjectAttributesResult, error)
GetObjectAttributes(context.Context, *s3.GetObjectAttributesInput) (s3response.GetObjectAttributesResponse, error)
CopyObject(context.Context, *s3.CopyObjectInput) (*s3.CopyObjectOutput, error)
ListObjects(context.Context, *s3.ListObjectsInput) (s3response.ListObjectsResult, error)
ListObjectsV2(context.Context, *s3.ListObjectsV2Input) (s3response.ListObjectsV2Result, error)
@@ -108,7 +108,7 @@ func (BackendUnsupported) Shutdown() {}
func (BackendUnsupported) String() string {
return "Unsupported"
}
func (BackendUnsupported) ListBuckets(context.Context, string, bool) (s3response.ListAllMyBucketsResult, error) {
func (BackendUnsupported) ListBuckets(context.Context, s3response.ListBucketsInput) (s3response.ListAllMyBucketsResult, error) {
return s3response.ListAllMyBucketsResult{}, s3err.GetAPIError(s3err.ErrNotImplemented)
}
func (BackendUnsupported) HeadBucket(context.Context, *s3.HeadBucketInput) (*s3.HeadBucketOutput, error) {
@@ -185,8 +185,8 @@ func (BackendUnsupported) GetObject(context.Context, *s3.GetObjectInput) (*s3.Ge
func (BackendUnsupported) GetObjectAcl(context.Context, *s3.GetObjectAclInput) (*s3.GetObjectAclOutput, error) {
return nil, s3err.GetAPIError(s3err.ErrNotImplemented)
}
func (BackendUnsupported) GetObjectAttributes(context.Context, *s3.GetObjectAttributesInput) (s3response.GetObjectAttributesResult, error) {
return s3response.GetObjectAttributesResult{}, s3err.GetAPIError(s3err.ErrNotImplemented)
func (BackendUnsupported) GetObjectAttributes(context.Context, *s3.GetObjectAttributesInput) (s3response.GetObjectAttributesResponse, error) {
return s3response.GetObjectAttributesResponse{}, s3err.GetAPIError(s3err.ErrNotImplemented)
}
func (BackendUnsupported) CopyObject(context.Context, *s3.CopyObjectInput) (*s3.CopyObjectOutput, error) {
return nil, s3err.GetAPIError(s3err.ErrNotImplemented)


@@ -50,10 +50,6 @@ func (d ByObjectName) Len() int { return len(d) }
func (d ByObjectName) Swap(i, j int) { d[i], d[j] = d[j], d[i] }
func (d ByObjectName) Less(i, j int) bool { return *d[i].Key < *d[j].Key }
func GetStringPtr(s string) *string {
return &s
}
func GetPtrFromString(str string) *string {
if str == "" {
return nil
@@ -61,6 +57,13 @@ func GetPtrFromString(str string) *string {
return &str
}
func GetStringFromPtr(str *string) string {
if str == nil {
return ""
}
return *str
}
func GetTimePtr(t time.Time) *time.Time {
return &t
}
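
Note the swap is not a pure rename: the removed GetStringPtr always returned a non-nil pointer, while GetPtrFromString maps "" to nil, and GetStringFromPtr is the nil-safe inverse. A quick sketch of the difference:

```go
func example() {
	p := backend.GetPtrFromString("")     // nil: empty string means "field absent"
	q := backend.GetPtrFromString("null") // non-nil *string containing "null"
	s := backend.GetStringFromPtr(nil)    // "": safe on optional response fields
	_, _, _ = p, q, s
}
```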


@@ -52,25 +52,25 @@ type Posix struct {
rootfd *os.File
rootdir string
// chownuid/gid enable chowning of files to the account uid/gid
// when objects are uploaded
chownuid bool
chowngid bool
// bucket versioning directory path
versioningDir string
// euid/egid are the effective uid/gid of the running versitygw process
// used to determine if chowning is needed
euid int
egid int
// newDirPerm is the permission to set on newly created directories
newDirPerm fs.FileMode
// chownuid/gid enable chowning of files to the account uid/gid
// when objects are uploaded
chownuid bool
chowngid bool
// bucketlinks is a flag to enable symlinks to directories at the top
// level gateway directory to be treated as buckets the same as directories
bucketlinks bool
// bucket versioning directory path
versioningDir string
// newDirPerm is the permission to set on newly created directories
newDirPerm fs.FileMode
}
var _ backend.Backend = &Posix{}
@@ -102,11 +102,11 @@ const (
)
type PosixOpts struct {
VersioningDir string
NewDirPerm fs.FileMode
ChownUID bool
ChownGID bool
BucketLinks bool
VersioningDir string
NewDirPerm fs.FileMode
}
func New(rootdir string, meta meta.MetadataStorer, opts PosixOpts) (*Posix, error) {
@@ -208,13 +208,15 @@ func (p *Posix) doesBucketAndObjectExist(bucket, object string) error {
return nil
}
func (p *Posix) ListBuckets(_ context.Context, owner string, isAdmin bool) (s3response.ListAllMyBucketsResult, error) {
func (p *Posix) ListBuckets(_ context.Context, input s3response.ListBucketsInput) (s3response.ListAllMyBucketsResult, error) {
entries, err := os.ReadDir(".")
if err != nil {
return s3response.ListAllMyBucketsResult{},
fmt.Errorf("readdir buckets: %w", err)
}
var cToken string
var buckets []s3response.ListAllMyBucketsEntry
for _, entry := range entries {
fi, err := entry.Info()
@@ -236,8 +238,21 @@ func (p *Posix) ListBuckets(_ context.Context, owner string, isAdmin bool) (s3re
continue
}
if !strings.HasPrefix(fi.Name(), input.Prefix) {
continue
}
if len(buckets) == int(input.MaxBuckets) {
cToken = buckets[len(buckets)-1].Name
break
}
if fi.Name() <= input.ContinuationToken {
continue
}
// return all the buckets for admin users
if isAdmin {
if input.IsAdmin {
buckets = append(buckets, s3response.ListAllMyBucketsEntry{
Name: entry.Name(),
CreationDate: fi.ModTime(),
@@ -260,7 +275,7 @@ func (p *Posix) ListBuckets(_ context.Context, owner string, isAdmin bool) (s3re
return s3response.ListAllMyBucketsResult{}, fmt.Errorf("parse acl tag: %w", err)
}
if acl.Owner == owner {
if acl.Owner == input.Owner {
buckets = append(buckets, s3response.ListAllMyBucketsEntry{
Name: entry.Name(),
CreationDate: fi.ModTime(),
@@ -268,15 +283,15 @@ func (p *Posix) ListBuckets(_ context.Context, owner string, isAdmin bool) (s3re
}
}
sort.Sort(backend.ByBucketName(buckets))
return s3response.ListAllMyBucketsResult{
Buckets: s3response.ListAllMyBucketsList{
Bucket: buckets,
},
Owner: s3response.CanonicalUser{
ID: owner,
ID: input.Owner,
},
Prefix: input.Prefix,
ContinuationToken: cToken,
}, nil
}
@@ -926,7 +941,7 @@ func (p *Posix) fileToObjVersions(bucket string) backend.GetVersionsFunc {
// Check to see if the null versionId object is delete marker or not
if isDel {
nullObjDelMarker = &types.DeleteMarkerEntry{
VersionId: backend.GetStringPtr("null"),
VersionId: backend.GetPtrFromString("null"),
LastModified: backend.GetTimePtr(nf.ModTime()),
Key: &path,
IsLatest: getBoolPtr(false),
@@ -948,7 +963,7 @@ func (p *Posix) fileToObjVersions(bucket string) backend.GetVersionsFunc {
Key: &path,
LastModified: backend.GetTimePtr(nf.ModTime()),
Size: &size,
VersionId: backend.GetStringPtr("null"),
VersionId: backend.GetPtrFromString("null"),
IsLatest: getBoolPtr(false),
StorageClass: types.ObjectVersionStorageClassStandard,
}
@@ -2916,7 +2931,12 @@ func (p *Posix) GetObject(_ context.Context, input *s3.GetObjectInput) (*s3.GetO
return nil, fmt.Errorf("open object: %w", err)
}
rdr := io.NewSectionReader(f, startOffset, length)
// using an os.File allows zero-copy sendfile via io.Copy(os.File, net.Conn)
var body io.ReadCloser = f
if startOffset != 0 || length != objSize {
rdr := io.NewSectionReader(f, startOffset, length)
body = &backend.FileSectionReadCloser{R: rdr, F: f}
}
return &s3.GetObjectOutput{
AcceptRanges: &acceptRange,
@@ -2930,7 +2950,7 @@ func (p *Posix) GetObject(_ context.Context, input *s3.GetObjectInput) (*s3.GetO
ContentRange: &contentRange,
StorageClass: types.StorageClassStandard,
VersionId: &versionId,
Body: &backend.FileSectionReadCloser{R: rdr, F: f},
Body: body,
}, nil
}
@@ -2941,8 +2961,9 @@ func (p *Posix) HeadObject(ctx context.Context, input *s3.HeadObjectInput) (*s3.
if input.Key == nil {
return nil, s3err.GetAPIError(s3err.ErrNoSuchKey)
}
versionId := backend.GetStringFromPtr(input.VersionId)
if !p.versioningEnabled() && *input.VersionId != "" {
if !p.versioningEnabled() && versionId != "" {
//TODO: Maybe we need to return our custom error here?
return nil, s3err.GetAPIError(s3err.ErrInvalidVersionId)
}
@@ -3002,7 +3023,7 @@ func (p *Posix) HeadObject(ctx context.Context, input *s3.HeadObjectInput) (*s3.
return nil, fmt.Errorf("stat bucket: %w", err)
}
if *input.VersionId != "" {
if versionId != "" {
vId, err := p.meta.RetrieveAttribute(nil, bucket, object, versionIdKey)
if errors.Is(err, fs.ErrNotExist) {
return nil, s3err.GetAPIError(s3err.ErrNoSuchKey)
@@ -3012,12 +3033,12 @@ func (p *Posix) HeadObject(ctx context.Context, input *s3.HeadObjectInput) (*s3.
}
if errors.Is(err, meta.ErrNoSuchKey) {
bucket = filepath.Join(p.versioningDir, bucket)
object = filepath.Join(genObjVersionKey(object), *input.VersionId)
object = filepath.Join(genObjVersionKey(object), versionId)
}
if string(vId) != *input.VersionId {
if string(vId) != versionId {
bucket = filepath.Join(p.versioningDir, bucket)
object = filepath.Join(genObjVersionKey(object), *input.VersionId)
object = filepath.Join(genObjVersionKey(object), versionId)
}
}
@@ -3025,7 +3046,7 @@ func (p *Posix) HeadObject(ctx context.Context, input *s3.HeadObjectInput) (*s3.
fi, err := os.Stat(objPath)
if errors.Is(err, fs.ErrNotExist) {
if *input.VersionId != "" {
if versionId != "" {
return nil, s3err.GetAPIError(s3err.ErrInvalidVersionId)
}
return nil, s3err.GetAPIError(s3err.ErrNoSuchKey)
@@ -3043,7 +3064,7 @@ func (p *Posix) HeadObject(ctx context.Context, input *s3.HeadObjectInput) (*s3.
return nil, s3err.GetAPIError(s3err.ErrNoSuchKey)
}
if *input.VersionId != "" {
if p.versioningEnabled() {
isDelMarker, err := p.isObjDeleteMarker(bucket, object)
if err != nil {
return nil, err
@@ -3058,6 +3079,15 @@ func (p *Posix) HeadObject(ctx context.Context, input *s3.HeadObjectInput) (*s3.
}
}
if p.versioningEnabled() && versionId == "" {
vId, err := p.meta.RetrieveAttribute(nil, bucket, object, versionIdKey)
if err != nil && !errors.Is(err, meta.ErrNoSuchKey) {
return nil, fmt.Errorf("get object versionId: %v", err)
}
versionId = string(vId)
}
userMetaData := make(map[string]string)
contentType, contentEncoding := p.loadUserMetaData(bucket, object, userMetaData)
@@ -3074,7 +3104,7 @@ func (p *Posix) HeadObject(ctx context.Context, input *s3.HeadObjectInput) (*s3.
size := fi.Size()
var objectLockLegalHoldStatus types.ObjectLockLegalHoldStatus
status, err := p.GetObjectLegalHold(ctx, bucket, object, *input.VersionId)
status, err := p.GetObjectLegalHold(ctx, bucket, object, versionId)
if err == nil {
if *status {
objectLockLegalHoldStatus = types.ObjectLockLegalHoldStatusOn
@@ -3085,7 +3115,7 @@ func (p *Posix) HeadObject(ctx context.Context, input *s3.HeadObjectInput) (*s3.
var objectLockMode types.ObjectLockMode
var objectLockRetainUntilDate *time.Time
retention, err := p.GetObjectRetention(ctx, bucket, object, *input.VersionId)
retention, err := p.GetObjectRetention(ctx, bucket, object, versionId)
if err == nil {
var config types.ObjectLockRetention
if err := json.Unmarshal(retention, &config); err == nil {
@@ -3094,8 +3124,6 @@ func (p *Posix) HeadObject(ctx context.Context, input *s3.HeadObjectInput) (*s3.
}
}
//TODO: the method must handle multipart upload case
return &s3.HeadObjectOutput{
ContentLength: &size,
ContentType: &contentType,
@@ -3107,25 +3135,34 @@ func (p *Posix) HeadObject(ctx context.Context, input *s3.HeadObjectInput) (*s3.
ObjectLockMode: objectLockMode,
ObjectLockRetainUntilDate: objectLockRetainUntilDate,
StorageClass: types.StorageClassStandard,
VersionId: input.VersionId,
VersionId: &versionId,
}, nil
}
func (p *Posix) GetObjectAttributes(ctx context.Context, input *s3.GetObjectAttributesInput) (s3response.GetObjectAttributesResult, error) {
func (p *Posix) GetObjectAttributes(ctx context.Context, input *s3.GetObjectAttributesInput) (s3response.GetObjectAttributesResponse, error) {
data, err := p.HeadObject(ctx, &s3.HeadObjectInput{
Bucket: input.Bucket,
Key: input.Key,
VersionId: input.VersionId,
})
if err != nil {
return s3response.GetObjectAttributesResult{}, nil
if errors.Is(err, s3err.GetAPIError(s3err.ErrMethodNotAllowed)) && data != nil {
return s3response.GetObjectAttributesResponse{
DeleteMarker: data.DeleteMarker,
VersionId: data.VersionId,
}, s3err.GetAPIError(s3err.ErrNoSuchKey)
}
return s3response.GetObjectAttributesResponse{}, err
}
return s3response.GetObjectAttributesResult{
return s3response.GetObjectAttributesResponse{
ETag: data.ETag,
LastModified: data.LastModified,
ObjectSize: data.ContentLength,
StorageClass: data.StorageClass,
LastModified: data.LastModified,
VersionId: data.VersionId,
DeleteMarker: data.DeleteMarker,
}, nil
}
@@ -3250,7 +3287,7 @@ func (p *Posix) CopyObject(ctx context.Context, input *s3.CopyObjectInput) (*s3.
if errors.Is(err, fs.ErrNotExist) {
return nil, s3err.GetAPIError(s3err.ErrNoSuchKey)
}
version = backend.GetStringPtr(string(vId))
version = backend.GetPtrFromString(string(vId))
} else {
contentLength := fi.Size()
res, err := p.PutObject(ctx,


@@ -38,12 +38,12 @@ type tmpfile struct {
f *os.File
bucket string
objname string
isOTmp bool
size int64
needsChown bool
uid int
gid int
newDirPerm fs.FileMode
isOTmp bool
needsChown bool
}
var (


@@ -75,8 +75,12 @@ func New(access, secret, endpoint, region string, disableChecksum, sslSkipVerify
return s, nil
}
func (s *S3Proxy) ListBuckets(ctx context.Context, owner string, isAdmin bool) (s3response.ListAllMyBucketsResult, error) {
output, err := s.client.ListBuckets(ctx, &s3.ListBucketsInput{})
func (s *S3Proxy) ListBuckets(ctx context.Context, input s3response.ListBucketsInput) (s3response.ListAllMyBucketsResult, error) {
output, err := s.client.ListBuckets(ctx, &s3.ListBucketsInput{
ContinuationToken: &input.ContinuationToken,
MaxBuckets: &input.MaxBuckets,
Prefix: &input.Prefix,
})
if err != nil {
return s3response.ListAllMyBucketsResult{}, handleError(err)
}
@@ -96,6 +100,8 @@ func (s *S3Proxy) ListBuckets(ctx context.Context, owner string, isAdmin bool) (
Buckets: s3response.ListAllMyBucketsList{
Bucket: buckets,
},
ContinuationToken: backend.GetStringFromPtr(output.ContinuationToken),
Prefix: backend.GetStringFromPtr(output.Prefix),
}, nil
}
@@ -112,8 +118,8 @@ func (s *S3Proxy) CreateBucket(ctx context.Context, input *s3.CreateBucketInput,
var tagSet []types.Tag
tagSet = append(tagSet, types.Tag{
Key: backend.GetStringPtr(aclKey),
Value: backend.GetStringPtr(base64Encode(acl)),
Key: backend.GetPtrFromString(aclKey),
Value: backend.GetPtrFromString(base64Encode(acl)),
})
_, err = s.client.PutBucketTagging(ctx, &s3.PutBucketTaggingInput{
@@ -386,7 +392,7 @@ func (s *S3Proxy) GetObject(ctx context.Context, input *s3.GetObjectInput) (*s3.
return output, nil
}
func (s *S3Proxy) GetObjectAttributes(ctx context.Context, input *s3.GetObjectAttributesInput) (s3response.GetObjectAttributesResult, error) {
func (s *S3Proxy) GetObjectAttributes(ctx context.Context, input *s3.GetObjectAttributesInput) (s3response.GetObjectAttributesResponse, error) {
out, err := s.client.GetObjectAttributes(ctx, input)
parts := s3response.ObjectParts{}
@@ -413,12 +419,11 @@ func (s *S3Proxy) GetObjectAttributes(ctx context.Context, input *s3.GetObjectAt
}
}
return s3response.GetObjectAttributesResult{
return s3response.GetObjectAttributesResponse{
ETag: out.ETag,
LastModified: out.LastModified,
ObjectSize: out.ObjectSize,
StorageClass: out.StorageClass,
VersionId: out.VersionId,
ObjectParts: &parts,
}, handleError(err)
}
@@ -525,8 +530,8 @@ func (s *S3Proxy) PutBucketAcl(ctx context.Context, bucket string, data []byte)
for i, tag := range tagout.TagSet {
if *tag.Key == aclKey {
tagout.TagSet[i] = types.Tag{
Key: backend.GetStringPtr(aclKey),
Value: backend.GetStringPtr(base64Encode(data)),
Key: backend.GetPtrFromString(aclKey),
Value: backend.GetPtrFromString(base64Encode(data)),
}
found = true
break
@@ -534,8 +539,8 @@ func (s *S3Proxy) PutBucketAcl(ctx context.Context, bucket string, data []byte)
}
if !found {
tagout.TagSet = append(tagout.TagSet, types.Tag{
Key: backend.GetStringPtr(aclKey),
Value: backend.GetStringPtr(base64Encode(data)),
Key: backend.GetPtrFromString(aclKey),
Value: backend.GetPtrFromString(base64Encode(data)),
})
}
@@ -595,7 +600,7 @@ func (s *S3Proxy) DeleteObjectTagging(ctx context.Context, bucket, object string
func (s *S3Proxy) PutBucketPolicy(ctx context.Context, bucket string, policy []byte) error {
_, err := s.client.PutBucketPolicy(ctx, &s3.PutBucketPolicyInput{
Bucket: &bucket,
Policy: backend.GetStringPtr(string(policy)),
Policy: backend.GetPtrFromString(string(policy)),
})
return handleError(err)
}


@@ -48,12 +48,21 @@ type ScoutfsOpts struct {
}
type ScoutFS struct {
// bucket/object metadata storage facility
meta meta.MetadataStorer
*posix.Posix
rootfd *os.File
rootdir string
// bucket/object metadata storage facility
meta meta.MetadataStorer
// euid/egid are the effective uid/gid of the running versitygw process
// used to determine if chowning is needed
euid int
egid int
// newDirPerm is the permissions to use when creating new directories
newDirPerm fs.FileMode
// glaciermode enables the following behavior:
// GET object: if file offline, return invalid object state
@@ -70,14 +79,6 @@ type ScoutFS struct {
// when objects are uploaded
chownuid bool
chowngid bool
// euid/egid are the effective uid/gid of the running versitygw process
// used to determine if chowning is needed
euid int
egid int
// newDirPerm is the permissions to use when creating new directories
newDirPerm fs.FileMode
}
var _ backend.Backend = &ScoutFS{}


@@ -70,10 +70,10 @@ type tmpfile struct {
bucket string
objname string
size int64
needsChown bool
uid int
gid int
newDirPerm fs.FileMode
needsChown bool
}
var (


@@ -19,7 +19,6 @@ import (
"errors"
"fmt"
"io/fs"
"os"
"sort"
"strings"
@@ -28,10 +27,10 @@ import (
)
type WalkResults struct {
NextMarker string
CommonPrefixes []types.CommonPrefix
Objects []s3response.Object
Truncated bool
NextMarker string
}
type GetObjFunc func(path string, d fs.DirEntry) (s3response.Object, error)
@@ -53,7 +52,15 @@ func Walk(ctx context.Context, fileSystem fs.FS, prefix, delimiter, marker strin
var newMarker string
var truncated bool
err := fs.WalkDir(fileSystem, ".", func(path string, d fs.DirEntry, err error) error {
root := "."
if strings.Contains(prefix, "/") {
idx := strings.LastIndex(prefix, "/")
if idx > 0 {
root = prefix[:idx]
}
}
err := fs.WalkDir(fileSystem, root, func(path string, d fs.DirEntry, err error) error {
if err != nil {
return err
}
@@ -76,6 +83,9 @@ func Walk(ctx context.Context, fileSystem fs.FS, prefix, delimiter, marker strin
return fs.SkipAll
}
// After this point, return skipflag instead of nil
// so we can skip a directory without an early return
var skipflag error
if d.IsDir() {
// If prefix is defined and the directory does not match prefix,
// do not descend into the directory because nothing will
@@ -85,51 +95,57 @@ func Walk(ctx context.Context, fileSystem fs.FS, prefix, delimiter, marker strin
// building to match. So only skip if path isn't a prefix of prefix
// and prefix isn't a prefix of path.
if prefix != "" &&
!strings.HasPrefix(path+string(os.PathSeparator), prefix) &&
!strings.HasPrefix(prefix, path+string(os.PathSeparator)) {
!strings.HasPrefix(path+"/", prefix) &&
!strings.HasPrefix(prefix, path+"/") {
return fs.SkipDir
}
// TODO: can we do better here rather than a second readdir
// per directory?
ents, err := fs.ReadDir(fileSystem, path)
if err != nil {
return fmt.Errorf("readdir %q: %w", path, err)
}
path += string(os.PathSeparator)
if len(ents) == 0 && delimiter == "" {
dirobj, err := getObj(path, d)
if err == ErrSkipObj {
return nil
}
// Don't recurse into subdirectories which contain the delimiter
// after reaching the prefix
if delimiter != "" &&
strings.HasPrefix(path+"/", prefix) &&
strings.Contains(strings.TrimPrefix(path+"/", prefix), delimiter) {
skipflag = fs.SkipDir
} else {
// TODO: can we do better here rather than a second readdir
// per directory?
ents, err := fs.ReadDir(fileSystem, path)
if err != nil {
return fmt.Errorf("directory to object %q: %w", path, err)
return fmt.Errorf("readdir %q: %w", path, err)
}
objects = append(objects, dirobj)
if len(ents) == 0 && delimiter == "" {
dirobj, err := getObj(path+"/", d)
if err == ErrSkipObj {
return skipflag
}
if err != nil {
return fmt.Errorf("directory to object %q: %w", path, err)
}
objects = append(objects, dirobj)
return nil
}
return skipflag
}
if len(ents) != 0 {
return nil
if len(ents) != 0 {
return skipflag
}
}
path += "/"
}
if !pastMarker {
if path == marker {
pastMarker = true
return nil
return skipflag
}
if path < marker {
return nil
return skipflag
}
}
// If object doesn't have prefix, don't include in results.
if prefix != "" && !strings.HasPrefix(path, prefix) {
return nil
return skipflag
}
if delimiter == "" {
@@ -137,7 +153,7 @@ func Walk(ctx context.Context, fileSystem fs.FS, prefix, delimiter, marker strin
// prefix are included in results
obj, err := getObj(path, d)
if err == ErrSkipObj {
return nil
return skipflag
}
if err != nil {
return fmt.Errorf("file to object %q: %w", path, err)
@@ -148,7 +164,7 @@ func Walk(ctx context.Context, fileSystem fs.FS, prefix, delimiter, marker strin
pastMax = true
}
return nil
return skipflag
}
// Since delimiter is specified, we only want results that
@@ -177,7 +193,7 @@ func Walk(ctx context.Context, fileSystem fs.FS, prefix, delimiter, marker strin
if !found {
obj, err := getObj(path, d)
if err == ErrSkipObj {
return nil
return skipflag
}
if err != nil {
return fmt.Errorf("file to object %q: %w", path, err)
@@ -186,7 +202,7 @@ func Walk(ctx context.Context, fileSystem fs.FS, prefix, delimiter, marker strin
if (len(objects) + len(cpmap)) == int(max) {
pastMax = true
}
return nil
return skipflag
}
// Common prefixes are a set, so should not have duplicates.
@@ -196,12 +212,12 @@ func Walk(ctx context.Context, fileSystem fs.FS, prefix, delimiter, marker strin
cpref := prefix + before + delimiter
if cpref == marker {
pastMarker = true
return nil
return skipflag
}
if marker != "" && strings.HasPrefix(marker, cprefNoDelim) {
// skip common prefixes that are before the marker
return nil
return skipflag
}
cpmap[cpref] = struct{}{}
@@ -211,9 +227,13 @@ func Walk(ctx context.Context, fileSystem fs.FS, prefix, delimiter, marker strin
return fs.SkipAll
}
return nil
return skipflag
})
if err != nil {
// suppress file not found caused by user's prefix
if errors.Is(err, fs.ErrNotExist) {
return WalkResults{}, nil
}
return WalkResults{}, err
}
@@ -248,18 +268,18 @@ func contains(a string, strs []string) bool {
}
type WalkVersioningResults struct {
NextMarker string
NextVersionIdMarker string
CommonPrefixes []types.CommonPrefix
ObjectVersions []types.ObjectVersion
DelMarkers []types.DeleteMarkerEntry
Truncated bool
NextMarker string
NextVersionIdMarker string
}
type ObjVersionFuncResult struct {
NextVersionIdMarker string
ObjectVersions []types.ObjectVersion
DelMarkers []types.DeleteMarkerEntry
NextVersionIdMarker string
Truncated bool
}
@@ -315,8 +335,16 @@ func WalkVersions(ctx context.Context, fileSystem fs.FS, prefix, delimiter, keyM
// building to match. So only skip if path isn't a prefix of prefix
// and prefix isn't a prefix of path.
if prefix != "" &&
!strings.HasPrefix(path+string(os.PathSeparator), prefix) &&
!strings.HasPrefix(prefix, path+string(os.PathSeparator)) {
!strings.HasPrefix(path+"/", prefix) &&
!strings.HasPrefix(prefix, path+"/") {
return fs.SkipDir
}
// Don't recurse into subdirectories when listing with delimiter.
if delimiter == "/" &&
prefix != path+"/" &&
strings.HasPrefix(path+"/", prefix) {
cpmap[path+"/"] = struct{}{}
return fs.SkipDir
}
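
Note: the skipflag rework leans on fs.WalkDir's contract: a callback returning fs.SkipDir prunes that directory while the walk continues, and fs.SkipAll stops the walk entirely. A minimal sketch, standard library only, mimicking the delimiter pruning above:

package main

import (
	"fmt"
	"io/fs"
	"strings"
	"testing/fstest"
)

func main() {
	fsys := fstest.MapFS{
		"photos/2006/January/sample.jpg": {},
		"sample.jpg":                     {},
	}
	// Returning fs.SkipDir for any directory below the top level
	// emulates a delimiter of "/": the walk prints ".", "photos",
	// "photos/2006", and "sample.jpg", but never descends into
	// photos/2006.
	fs.WalkDir(fsys, ".", func(path string, d fs.DirEntry, err error) error {
		if err != nil {
			return err
		}
		fmt.Println(path)
		if d.IsDir() && path != "." && strings.Contains(path, "/") {
			return fs.SkipDir
		}
		return nil
	})
}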


@@ -31,9 +31,18 @@ import (
)
type walkTest struct {
fsys fs.FS
expected backend.WalkResults
getobj backend.GetObjFunc
fsys fs.FS
getobj backend.GetObjFunc
cases []testcase
}
type testcase struct {
name string
prefix string
delimiter string
marker string
expected backend.WalkResults
maxObjs int32
}
func getObj(path string, d fs.DirEntry) (s3response.Object, error) {
@@ -88,50 +97,154 @@ func TestWalk(t *testing.T) {
"photos/2006/February/sample3.jpg": {},
"photos/2006/February/sample4.jpg": {},
},
expected: backend.WalkResults{
CommonPrefixes: []types.CommonPrefix{{
Prefix: backend.GetStringPtr("photos/"),
}},
Objects: []s3response.Object{{
Key: backend.GetStringPtr("sample.jpg"),
}},
},
getobj: getObj,
cases: []testcase{
{
name: "aws example",
delimiter: "/",
maxObjs: 1000,
expected: backend.WalkResults{
CommonPrefixes: []types.CommonPrefix{{
Prefix: backend.GetPtrFromString("photos/"),
}},
Objects: []s3response.Object{{
Key: backend.GetPtrFromString("sample.jpg"),
}},
},
},
},
},
{
// test case single dir/single file
fsys: fstest.MapFS{
"test/file": {},
},
expected: backend.WalkResults{
CommonPrefixes: []types.CommonPrefix{{
Prefix: backend.GetStringPtr("test/"),
}},
Objects: []s3response.Object{},
getobj: getObj,
cases: []testcase{
{
name: "single dir single file",
delimiter: "/",
maxObjs: 1000,
expected: backend.WalkResults{
CommonPrefixes: []types.CommonPrefix{{
Prefix: backend.GetPtrFromString("test/"),
}},
Objects: []s3response.Object{},
},
},
},
},
{
// non-standard delimiter
fsys: fstest.MapFS{
"photo|s/200|6/Januar|y/sampl|e1.jpg": {},
"photo|s/200|6/Januar|y/sampl|e2.jpg": {},
"photo|s/200|6/Januar|y/sampl|e3.jpg": {},
},
getobj: getObj,
cases: []testcase{
{
name: "different delimiter 1",
delimiter: "|",
maxObjs: 1000,
expected: backend.WalkResults{
CommonPrefixes: []types.CommonPrefix{{
Prefix: backend.GetPtrFromString("photo|"),
}},
},
},
{
name: "different delimiter 2",
delimiter: "|",
maxObjs: 1000,
prefix: "photo|",
expected: backend.WalkResults{
CommonPrefixes: []types.CommonPrefix{{
Prefix: backend.GetPtrFromString("photo|s/200|"),
}},
},
},
{
name: "different delimiter 3",
delimiter: "|",
maxObjs: 1000,
prefix: "photo|s/200|",
expected: backend.WalkResults{
CommonPrefixes: []types.CommonPrefix{{
Prefix: backend.GetPtrFromString("photo|s/200|6/Januar|"),
}},
},
},
{
name: "different delimiter 4",
delimiter: "|",
maxObjs: 1000,
prefix: "photo|s/200|",
expected: backend.WalkResults{
CommonPrefixes: []types.CommonPrefix{{
Prefix: backend.GetPtrFromString("photo|s/200|6/Januar|"),
}},
},
},
{
name: "different delimiter 5",
delimiter: "|",
maxObjs: 1000,
prefix: "photo|s/200|6/Januar|",
expected: backend.WalkResults{
CommonPrefixes: []types.CommonPrefix{{
Prefix: backend.GetPtrFromString("photo|s/200|6/Januar|y/sampl|"),
}},
},
},
{
name: "different delimiter 6",
delimiter: "|",
maxObjs: 1000,
prefix: "photo|s/200|6/Januar|y/sampl|",
expected: backend.WalkResults{
Objects: []s3response.Object{
{
Key: backend.GetPtrFromString("photo|s/200|6/Januar|y/sampl|e1.jpg"),
},
{
Key: backend.GetPtrFromString("photo|s/200|6/Januar|y/sampl|e2.jpg"),
},
{
Key: backend.GetPtrFromString("photo|s/200|6/Januar|y/sampl|e3.jpg"),
},
},
},
},
},
},
}
for _, tt := range tests {
res, err := backend.Walk(context.Background(), tt.fsys, "", "/", "", 1000, tt.getobj, []string{})
if err != nil {
t.Fatalf("walk: %v", err)
}
for _, tc := range tt.cases {
res, err := backend.Walk(context.Background(),
tt.fsys, tc.prefix, tc.delimiter, tc.marker, tc.maxObjs,
tt.getobj, []string{})
if err != nil {
t.Errorf("tc.name: walk: %v", err)
}
compareResults(res, tt.expected, t)
compareResults(tc.name, res, tc.expected, t)
}
}
}
func compareResults(got, wanted backend.WalkResults, t *testing.T) {
func compareResults(name string, got, wanted backend.WalkResults, t *testing.T) {
if !compareCommonPrefix(got.CommonPrefixes, wanted.CommonPrefixes) {
t.Errorf("unexpected common prefix, got %v wanted %v",
t.Errorf("%v: unexpected common prefix, got %v wanted %v",
name,
printCommonPrefixes(got.CommonPrefixes),
printCommonPrefixes(wanted.CommonPrefixes))
}
if !compareObjects(got.Objects, wanted.Objects) {
t.Errorf("unexpected object, got %v wanted %v",
t.Errorf("%v: unexpected object, got %v wanted %v",
name,
printObjects(got.Objects),
printObjects(wanted.Objects))
}
@@ -202,10 +315,16 @@ func containsObject(c s3response.Object, list []s3response.Object) bool {
func printObjects(list []s3response.Object) string {
res := "["
for _, cp := range list {
if res == "[" {
res = res + *cp.Key
var key string
if cp.Key == nil {
key = "<nil>"
} else {
res = res + ", " + *cp.Key
key = *cp.Key
}
if res == "[" {
res = res + key
} else {
res = res + ", " + key
}
}
return res + "]"
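
Note: the refactored loop threads tc.name into every failure message by hand; wrapping each case in t.Run would give the same labeling plus per-case isolation. A sketch using the identifiers from the test above, not the repository's code:

for _, tt := range tests {
	for _, tc := range tt.cases {
		tc := tc // per-iteration copy, needed before Go 1.22
		t.Run(tc.name, func(t *testing.T) {
			res, err := backend.Walk(context.Background(),
				tt.fsys, tc.prefix, tc.delimiter, tc.marker, tc.maxObjs,
				tt.getobj, []string{})
			if err != nil {
				t.Fatalf("walk: %v", err)
			}
			compareResults(tc.name, res, tc.expected, t)
		})
	}
}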


@@ -311,6 +311,12 @@ ROOT_SECRET_ACCESS_KEY=
# to directories at the top level gateway directory as buckets.
#VGW_BUCKET_LINKS=false
# The default permissions mode when creating new directories is 0755. Use the
# VGW_DIR_PERMS option to set a different mode for any new directory that the
# gateway creates. This applies to buckets created through the gateway as well
# as any parent directories automatically created with object uploads.
#VGW_DIR_PERMS=0755
###########
# scoutfs #
###########
@@ -346,6 +352,12 @@ ROOT_SECRET_ACCESS_KEY=
# to directories at the top level gateway directory as buckets.
#VGW_BUCKET_LINKS=false
# The default permissions mode when creating new directories is 0755. Use the
# VGW_DIR_PERMS option to set a different mode for any new directory that the
# gateway creates. This applies to buckets created through the gateway as well
# as any parent directories automatically created with object uploads.
#VGW_DIR_PERMS=0755
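
Note: VGW_DIR_PERMS is documented as an octal mode string. A hypothetical sketch of turning such a value into the fs.FileMode stored in newDirPerm; parseDirPerms is illustrative, not the gateway's actual option parser:

package main

import (
	"fmt"
	"io/fs"
	"strconv"
)

// parseDirPerms converts an octal string such as "0755" into an
// fs.FileMode. Hypothetical helper, shown only to illustrate the
// expected format of VGW_DIR_PERMS.
func parseDirPerms(s string) (fs.FileMode, error) {
	mode, err := strconv.ParseUint(s, 8, 32)
	if err != nil {
		return 0, fmt.Errorf("invalid dir perms %q: %w", s, err)
	}
	return fs.FileMode(mode), nil
}

func main() {
	m, _ := parseDirPerms("0755")
	fmt.Println(m) // -rwxr-xr-x
}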
######
# s3 #
######

go.mod (36 lines changed)

@@ -7,8 +7,8 @@ require (
github.com/Azure/azure-sdk-for-go/sdk/azidentity v1.8.0
github.com/Azure/azure-sdk-for-go/sdk/storage/azblob v1.4.1
github.com/DataDog/datadog-go/v5 v5.5.0
github.com/aws/aws-sdk-go-v2 v1.32.2
github.com/aws/aws-sdk-go-v2/service/s3 v1.66.0
github.com/aws/aws-sdk-go-v2 v1.32.3
github.com/aws/aws-sdk-go-v2/service/s3 v1.66.2
github.com/aws/smithy-go v1.22.0
github.com/go-ldap/ldap/v3 v3.4.8
github.com/gofiber/fiber/v2 v2.52.5
@@ -19,9 +19,9 @@ require (
github.com/oklog/ulid/v2 v2.1.0
github.com/pkg/xattr v0.4.10
github.com/segmentio/kafka-go v0.4.47
github.com/smira/go-statsd v1.3.3
github.com/smira/go-statsd v1.3.4
github.com/urfave/cli/v2 v2.27.5
github.com/valyala/fasthttp v1.56.0
github.com/valyala/fasthttp v1.57.0
github.com/versity/scoutfs-go v0.0.0-20240325223134-38eb2f5f7d44
golang.org/x/sync v0.8.0
golang.org/x/sys v0.26.0
@@ -30,13 +30,13 @@ require (
require (
github.com/Azure/azure-sdk-for-go/sdk/internal v1.10.0 // indirect
github.com/Azure/go-ntlmssp v0.0.0-20221128193559-754e69321358 // indirect
github.com/AzureAD/microsoft-authentication-library-for-go v1.2.2 // indirect
github.com/AzureAD/microsoft-authentication-library-for-go v1.2.3 // indirect
github.com/Microsoft/go-winio v0.6.2 // indirect
github.com/aws/aws-sdk-go-v2/feature/ec2/imds v1.16.17 // indirect
github.com/aws/aws-sdk-go-v2/feature/ec2/imds v1.16.18 // indirect
github.com/aws/aws-sdk-go-v2/internal/ini v1.8.1 // indirect
github.com/aws/aws-sdk-go-v2/service/sso v1.24.2 // indirect
github.com/aws/aws-sdk-go-v2/service/ssooidc v1.28.2 // indirect
github.com/aws/aws-sdk-go-v2/service/sts v1.32.2 // indirect
github.com/aws/aws-sdk-go-v2/service/sso v1.24.3 // indirect
github.com/aws/aws-sdk-go-v2/service/ssooidc v1.28.3 // indirect
github.com/aws/aws-sdk-go-v2/service/sts v1.32.3 // indirect
github.com/go-asn1-ber/asn1-ber v1.5.7 // indirect
github.com/golang-jwt/jwt/v5 v5.2.1 // indirect
github.com/hashicorp/go-cleanhttp v0.5.2 // indirect
@@ -59,16 +59,16 @@ require (
require (
github.com/andybalholm/brotli v1.1.1 // indirect
github.com/aws/aws-sdk-go-v2/aws/protocol/eventstream v1.6.6 // indirect
github.com/aws/aws-sdk-go-v2/config v1.28.0
github.com/aws/aws-sdk-go-v2/credentials v1.17.41
github.com/aws/aws-sdk-go-v2/feature/s3/manager v1.17.33
github.com/aws/aws-sdk-go-v2/internal/configsources v1.3.21 // indirect
github.com/aws/aws-sdk-go-v2/internal/endpoints/v2 v2.6.21 // indirect
github.com/aws/aws-sdk-go-v2/internal/v4a v1.3.21 // indirect
github.com/aws/aws-sdk-go-v2/config v1.28.1
github.com/aws/aws-sdk-go-v2/credentials v1.17.42
github.com/aws/aws-sdk-go-v2/feature/s3/manager v1.17.35
github.com/aws/aws-sdk-go-v2/internal/configsources v1.3.22 // indirect
github.com/aws/aws-sdk-go-v2/internal/endpoints/v2 v2.6.22 // indirect
github.com/aws/aws-sdk-go-v2/internal/v4a v1.3.22 // indirect
github.com/aws/aws-sdk-go-v2/service/internal/accept-encoding v1.12.0 // indirect
github.com/aws/aws-sdk-go-v2/service/internal/checksum v1.4.2 // indirect
github.com/aws/aws-sdk-go-v2/service/internal/presigned-url v1.12.2 // indirect
github.com/aws/aws-sdk-go-v2/service/internal/s3shared v1.18.2 // indirect
github.com/aws/aws-sdk-go-v2/service/internal/checksum v1.4.3 // indirect
github.com/aws/aws-sdk-go-v2/service/internal/presigned-url v1.12.3 // indirect
github.com/aws/aws-sdk-go-v2/service/internal/s3shared v1.18.3 // indirect
github.com/cpuguy83/go-md2man/v2 v2.0.5 // indirect
github.com/klauspost/compress v1.17.11 // indirect
github.com/mattn/go-colorable v0.1.13 // indirect

go.sum (72 lines changed)

@@ -14,8 +14,8 @@ github.com/Azure/go-ntlmssp v0.0.0-20221128193559-754e69321358 h1:mFRzDkZVAjdal+
github.com/Azure/go-ntlmssp v0.0.0-20221128193559-754e69321358/go.mod h1:chxPXzSsl7ZWRAuOIE23GDNzjWuZquvFlgA8xmpunjU=
github.com/AzureAD/microsoft-authentication-extensions-for-go/cache v0.1.1 h1:WJTmL004Abzc5wDB5VtZG2PJk5ndYDgVacGqfirKxjM=
github.com/AzureAD/microsoft-authentication-extensions-for-go/cache v0.1.1/go.mod h1:tCcJZ0uHAmvjsVYzEFivsRTN00oz5BEsRgQHu5JZ9WE=
github.com/AzureAD/microsoft-authentication-library-for-go v1.2.2 h1:XHOnouVk1mxXfQidrMEnLlPk9UMeRtyBTnEFtxkV0kU=
github.com/AzureAD/microsoft-authentication-library-for-go v1.2.2/go.mod h1:wP83P5OoQ5p6ip3ScPr0BAq0BvuPAvacpEuSzyouqAI=
github.com/AzureAD/microsoft-authentication-library-for-go v1.2.3 h1:6LyjnnaLpcOKK0fbYisI+mb8CE7iNe7i89nMNQxFxs8=
github.com/AzureAD/microsoft-authentication-library-for-go v1.2.3/go.mod h1:wP83P5OoQ5p6ip3ScPr0BAq0BvuPAvacpEuSzyouqAI=
github.com/DataDog/datadog-go/v5 v5.5.0 h1:G5KHeB8pWBNXT4Jtw0zAkhdxEAWSpWH00geHI6LDrKU=
github.com/DataDog/datadog-go/v5 v5.5.0/go.mod h1:K9kcYBlxkcPP8tvvjZZKs/m1edNAUFzBbdpTUKfCsuw=
github.com/Microsoft/go-winio v0.5.0/go.mod h1:JPGBdM1cNvN/6ISo+n8V5iA4v8pBzdOpzfwIujj1a84=
@@ -25,42 +25,42 @@ github.com/alexbrainman/sspi v0.0.0-20231016080023-1a75b4708caa h1:LHTHcTQiSGT7V
github.com/alexbrainman/sspi v0.0.0-20231016080023-1a75b4708caa/go.mod h1:cEWa1LVoE5KvSD9ONXsZrj0z6KqySlCCNKHlLzbqAt4=
github.com/andybalholm/brotli v1.1.1 h1:PR2pgnyFznKEugtsUo0xLdDop5SKXd5Qf5ysW+7XdTA=
github.com/andybalholm/brotli v1.1.1/go.mod h1:05ib4cKhjx3OQYUY22hTVd34Bc8upXjOLL2rKwwZBoA=
github.com/aws/aws-sdk-go-v2 v1.32.2 h1:AkNLZEyYMLnx/Q/mSKkcMqwNFXMAvFto9bNsHqcTduI=
github.com/aws/aws-sdk-go-v2 v1.32.2/go.mod h1:2SK5n0a2karNTv5tbP1SjsX0uhttou00v/HpXKM1ZUo=
github.com/aws/aws-sdk-go-v2 v1.32.3 h1:T0dRlFBKcdaUPGNtkBSwHZxrtis8CQU17UpNBZYd0wk=
github.com/aws/aws-sdk-go-v2 v1.32.3/go.mod h1:2SK5n0a2karNTv5tbP1SjsX0uhttou00v/HpXKM1ZUo=
github.com/aws/aws-sdk-go-v2/aws/protocol/eventstream v1.6.6 h1:pT3hpW0cOHRJx8Y0DfJUEQuqPild8jRGmSFmBgvydr0=
github.com/aws/aws-sdk-go-v2/aws/protocol/eventstream v1.6.6/go.mod h1:j/I2++U0xX+cr44QjHay4Cvxj6FUbnxrgmqN3H1jTZA=
github.com/aws/aws-sdk-go-v2/config v1.28.0 h1:FosVYWcqEtWNxHn8gB/Vs6jOlNwSoyOCA/g/sxyySOQ=
github.com/aws/aws-sdk-go-v2/config v1.28.0/go.mod h1:pYhbtvg1siOOg8h5an77rXle9tVG8T+BWLWAo7cOukc=
github.com/aws/aws-sdk-go-v2/credentials v1.17.41 h1:7gXo+Axmp+R4Z+AK8YFQO0ZV3L0gizGINCOWxSLY9W8=
github.com/aws/aws-sdk-go-v2/credentials v1.17.41/go.mod h1:u4Eb8d3394YLubphT4jLEwN1rLNq2wFOlT6OuxFwPzU=
github.com/aws/aws-sdk-go-v2/feature/ec2/imds v1.16.17 h1:TMH3f/SCAWdNtXXVPPu5D6wrr4G5hI1rAxbcocKfC7Q=
github.com/aws/aws-sdk-go-v2/feature/ec2/imds v1.16.17/go.mod h1:1ZRXLdTpzdJb9fwTMXiLipENRxkGMTn1sfKexGllQCw=
github.com/aws/aws-sdk-go-v2/feature/s3/manager v1.17.33 h1:X+4YY5kZRI/cOoSMVMGTqFXHAMg1bvvay7IBcqHpybQ=
github.com/aws/aws-sdk-go-v2/feature/s3/manager v1.17.33/go.mod h1:DPynzu+cn92k5UQ6tZhX+wfTB4ah6QDU/NgdHqatmvk=
github.com/aws/aws-sdk-go-v2/internal/configsources v1.3.21 h1:UAsR3xA31QGf79WzpG/ixT9FZvQlh5HY1NRqSHBNOCk=
github.com/aws/aws-sdk-go-v2/internal/configsources v1.3.21/go.mod h1:JNr43NFf5L9YaG3eKTm7HQzls9J+A9YYcGI5Quh1r2Y=
github.com/aws/aws-sdk-go-v2/internal/endpoints/v2 v2.6.21 h1:6jZVETqmYCadGFvrYEQfC5fAQmlo80CeL5psbno6r0s=
github.com/aws/aws-sdk-go-v2/internal/endpoints/v2 v2.6.21/go.mod h1:1SR0GbLlnN3QUmYaflZNiH1ql+1qrSiB2vwcJ+4UM60=
github.com/aws/aws-sdk-go-v2/config v1.28.1 h1:oxIvOUXy8x0U3fR//0eq+RdCKimWI900+SV+10xsCBw=
github.com/aws/aws-sdk-go-v2/config v1.28.1/go.mod h1:bRQcttQJiARbd5JZxw6wG0yIK3eLeSCPdg6uqmmlIiI=
github.com/aws/aws-sdk-go-v2/credentials v1.17.42 h1:sBP0RPjBU4neGpIYyx8mkU2QqLPl5u9cmdTWVzIpHkM=
github.com/aws/aws-sdk-go-v2/credentials v1.17.42/go.mod h1:FwZBfU530dJ26rv9saAbxa9Ej3eF/AK0OAY86k13n4M=
github.com/aws/aws-sdk-go-v2/feature/ec2/imds v1.16.18 h1:68jFVtt3NulEzojFesM/WVarlFpCaXLKaBxDpzkQ9OQ=
github.com/aws/aws-sdk-go-v2/feature/ec2/imds v1.16.18/go.mod h1:Fjnn5jQVIo6VyedMc0/EhPpfNlPl7dHV916O6B+49aE=
github.com/aws/aws-sdk-go-v2/feature/s3/manager v1.17.35 h1:ihPPdcCVSN0IvBByXwqVp28/l4VosBZ6sDulcvU2J7w=
github.com/aws/aws-sdk-go-v2/feature/s3/manager v1.17.35/go.mod h1:JkgEhs3SVF51Dj3m1Bj+yL8IznpxzkwlA3jLg3x7Kls=
github.com/aws/aws-sdk-go-v2/internal/configsources v1.3.22 h1:Jw50LwEkVjuVzE1NzkhNKkBf9cRN7MtE1F/b2cOKTUM=
github.com/aws/aws-sdk-go-v2/internal/configsources v1.3.22/go.mod h1:Y/SmAyPcOTmpeVaWSzSKiILfXTVJwrGmYZhcRbhWuEY=
github.com/aws/aws-sdk-go-v2/internal/endpoints/v2 v2.6.22 h1:981MHwBaRZM7+9QSR6XamDzF/o7ouUGxFzr+nVSIhrs=
github.com/aws/aws-sdk-go-v2/internal/endpoints/v2 v2.6.22/go.mod h1:1RA1+aBEfn+CAB/Mh0MB6LsdCYCnjZm7tKXtnk499ZQ=
github.com/aws/aws-sdk-go-v2/internal/ini v1.8.1 h1:VaRN3TlFdd6KxX1x3ILT5ynH6HvKgqdiXoTxAF4HQcQ=
github.com/aws/aws-sdk-go-v2/internal/ini v1.8.1/go.mod h1:FbtygfRFze9usAadmnGJNc8KsP346kEe+y2/oyhGAGc=
github.com/aws/aws-sdk-go-v2/internal/v4a v1.3.21 h1:7edmS3VOBDhK00b/MwGtGglCm7hhwNYnjJs/PgFdMQE=
github.com/aws/aws-sdk-go-v2/internal/v4a v1.3.21/go.mod h1:Q9o5h4HoIWG8XfzxqiuK/CGUbepCJ8uTlaE3bAbxytQ=
github.com/aws/aws-sdk-go-v2/internal/v4a v1.3.22 h1:yV+hCAHZZYJQcwAaszoBNwLbPItHvApxT0kVIw6jRgs=
github.com/aws/aws-sdk-go-v2/internal/v4a v1.3.22/go.mod h1:kbR1TL8llqB1eGnVbybcA4/wgScxdylOdyAd51yxPdw=
github.com/aws/aws-sdk-go-v2/service/internal/accept-encoding v1.12.0 h1:TToQNkvGguu209puTojY/ozlqy2d/SFNcoLIqTFi42g=
github.com/aws/aws-sdk-go-v2/service/internal/accept-encoding v1.12.0/go.mod h1:0jp+ltwkf+SwG2fm/PKo8t4y8pJSgOCO4D8Lz3k0aHQ=
github.com/aws/aws-sdk-go-v2/service/internal/checksum v1.4.2 h1:4FMHqLfk0efmTqhXVRL5xYRqlEBNBiRI7N6w4jsEdd4=
github.com/aws/aws-sdk-go-v2/service/internal/checksum v1.4.2/go.mod h1:LWoqeWlK9OZeJxsROW2RqrSPvQHKTpp69r/iDjwsSaw=
github.com/aws/aws-sdk-go-v2/service/internal/presigned-url v1.12.2 h1:s7NA1SOw8q/5c0wr8477yOPp0z+uBaXBnLE0XYb0POA=
github.com/aws/aws-sdk-go-v2/service/internal/presigned-url v1.12.2/go.mod h1:fnjjWyAW/Pj5HYOxl9LJqWtEwS7W2qgcRLWP+uWbss0=
github.com/aws/aws-sdk-go-v2/service/internal/s3shared v1.18.2 h1:t7iUP9+4wdc5lt3E41huP+GvQZJD38WLsgVp4iOtAjg=
github.com/aws/aws-sdk-go-v2/service/internal/s3shared v1.18.2/go.mod h1:/niFCtmuQNxqx9v8WAPq5qh7EH25U4BF6tjoyq9bObM=
github.com/aws/aws-sdk-go-v2/service/s3 v1.66.0 h1:xA6XhTF7PE89BCNHJbQi8VvPzcgMtmGC5dr8S8N7lHk=
github.com/aws/aws-sdk-go-v2/service/s3 v1.66.0/go.mod h1:cB6oAuus7YXRZhWCc1wIwPywwZ1XwweNp2TVAEGYeB8=
github.com/aws/aws-sdk-go-v2/service/sso v1.24.2 h1:bSYXVyUzoTHoKalBmwaZxs97HU9DWWI3ehHSAMa7xOk=
github.com/aws/aws-sdk-go-v2/service/sso v1.24.2/go.mod h1:skMqY7JElusiOUjMJMOv1jJsP7YUg7DrhgqZZWuzu1U=
github.com/aws/aws-sdk-go-v2/service/ssooidc v1.28.2 h1:AhmO1fHINP9vFYUE0LHzCWg/LfUWUF+zFPEcY9QXb7o=
github.com/aws/aws-sdk-go-v2/service/ssooidc v1.28.2/go.mod h1:o8aQygT2+MVP0NaV6kbdE1YnnIM8RRVQzoeUH45GOdI=
github.com/aws/aws-sdk-go-v2/service/sts v1.32.2 h1:CiS7i0+FUe+/YY1GvIBLLrR/XNGZ4CtM1Ll0XavNuVo=
github.com/aws/aws-sdk-go-v2/service/sts v1.32.2/go.mod h1:HtaiBI8CjYoNVde8arShXb94UbQQi9L4EMr6D+xGBwo=
github.com/aws/aws-sdk-go-v2/service/internal/checksum v1.4.3 h1:kT6BcZsmMtNkP/iYMcRG+mIEA/IbeiUimXtGmqF39y0=
github.com/aws/aws-sdk-go-v2/service/internal/checksum v1.4.3/go.mod h1:Z8uGua2k4PPaGOYn66pK02rhMrot3Xk3tpBuUFPomZU=
github.com/aws/aws-sdk-go-v2/service/internal/presigned-url v1.12.3 h1:qcxX0JYlgWH3hpPUnd6U0ikcl6LLA9sLkXE2w1fpMvY=
github.com/aws/aws-sdk-go-v2/service/internal/presigned-url v1.12.3/go.mod h1:cLSNEmI45soc+Ef8K/L+8sEA3A3pYFEYf5B5UI+6bH4=
github.com/aws/aws-sdk-go-v2/service/internal/s3shared v1.18.3 h1:ZC7Y/XgKUxwqcdhO5LE8P6oGP1eh6xlQReWNKfhvJno=
github.com/aws/aws-sdk-go-v2/service/internal/s3shared v1.18.3/go.mod h1:WqfO7M9l9yUAw0HcHaikwRd/H6gzYdz7vjejCA5e2oY=
github.com/aws/aws-sdk-go-v2/service/s3 v1.66.2 h1:p9TNFL8bFUMd+38YIpTAXpoxyz0MxC7FlbFEH4P4E1U=
github.com/aws/aws-sdk-go-v2/service/s3 v1.66.2/go.mod h1:fNjyo0Coen9QTwQLWeV6WO2Nytwiu+cCcWaTdKCAqqE=
github.com/aws/aws-sdk-go-v2/service/sso v1.24.3 h1:UTpsIf0loCIWEbrqdLb+0RxnTXfWh2vhw4nQmFi4nPc=
github.com/aws/aws-sdk-go-v2/service/sso v1.24.3/go.mod h1:FZ9j3PFHHAR+w0BSEjK955w5YD2UwB/l/H0yAK3MJvI=
github.com/aws/aws-sdk-go-v2/service/ssooidc v1.28.3 h1:2YCmIXv3tmiItw0LlYf6v7gEHebLY45kBEnPezbUKyU=
github.com/aws/aws-sdk-go-v2/service/ssooidc v1.28.3/go.mod h1:u19stRyNPxGhj6dRm+Cdgu6N75qnbW7+QN0q0dsAk58=
github.com/aws/aws-sdk-go-v2/service/sts v1.32.3 h1:wVnQ6tigGsRqSWDEEyH6lSAJ9OyFUsSnbaUWChuSGzs=
github.com/aws/aws-sdk-go-v2/service/sts v1.32.3/go.mod h1:VZa9yTFyj4o10YGsmDO4gbQJUvvhY72fhumT8W4LqsE=
github.com/aws/smithy-go v1.22.0 h1:uunKnWlcoL3zO7q+gG2Pk53joueEOsnNB28QdMsmiMM=
github.com/aws/smithy-go v1.22.0/go.mod h1:irrKGvNn1InZwb2d7fkIRNucdfwR8R+Ts3wxYa/cJHg=
github.com/cespare/xxhash/v2 v2.3.0 h1:UL815xU9SqsFlibzuggzjXhog7bL6oX9BbNZnL2UFvs=
@@ -164,8 +164,8 @@ github.com/ryanuber/go-glob v1.0.0/go.mod h1:807d1WSdnB0XRJzKNil9Om6lcp/3a0v4qIH
github.com/segmentio/kafka-go v0.4.47 h1:IqziR4pA3vrZq7YdRxaT3w1/5fvIH5qpCwstUanQQB0=
github.com/segmentio/kafka-go v0.4.47/go.mod h1:HjF6XbOKh0Pjlkr5GVZxt6CsjjwnmhVOfURM5KMd8qg=
github.com/sirupsen/logrus v1.7.0/go.mod h1:yWOB1SBYBC5VeMP7gHvWumXLIWorT60ONWic61uBYv0=
github.com/smira/go-statsd v1.3.3 h1:WnMlmGTyMpzto+HvOJWRPoLaLlk5EGfzsnlQBcvj4yI=
github.com/smira/go-statsd v1.3.3/go.mod h1:RjdsESPgDODtg1VpVVf9MJrEW2Hw0wtRNbmB1CAhu6A=
github.com/smira/go-statsd v1.3.4 h1:kBYWcLSGT+qC6JVbvfz48kX7mQys32fjDOPrfmsSx2c=
github.com/smira/go-statsd v1.3.4/go.mod h1:RjdsESPgDODtg1VpVVf9MJrEW2Hw0wtRNbmB1CAhu6A=
github.com/stretchr/objx v0.1.0/go.mod h1:HFkY916IF+rwdDfMAkV7OtwuqBVzrE8GR6GFx+wExME=
github.com/stretchr/objx v0.4.0/go.mod h1:YvHI0jy2hoMjB+UWwv71VJQ9isScKT/TqJzVSSt89Yw=
github.com/stretchr/objx v0.5.0 h1:1zr/of2m5FGMsad5YfcqgdqdWrIhu+EBEJRhR1U7z/c=
@@ -181,8 +181,8 @@ github.com/urfave/cli/v2 v2.27.5 h1:WoHEJLdsXr6dDWoJgMq/CboDmyY/8HMMH1fTECbih+w=
github.com/urfave/cli/v2 v2.27.5/go.mod h1:3Sevf16NykTbInEnD0yKkjDAeZDS0A6bzhBH5hrMvTQ=
github.com/valyala/bytebufferpool v1.0.0 h1:GqA5TC/0021Y/b9FG4Oi9Mr3q7XYx6KllzawFIhcdPw=
github.com/valyala/bytebufferpool v1.0.0/go.mod h1:6bBcMArwyJ5K/AmCkWv1jt77kVWyCJ6HpOuEn7z0Csc=
github.com/valyala/fasthttp v1.56.0 h1:bEZdJev/6LCBlpdORfrLu/WOZXXxvrUQSiyniuaoW8U=
github.com/valyala/fasthttp v1.56.0/go.mod h1:sReBt3XZVnudxuLOx4J/fMrJVorWRiWY2koQKgABiVI=
github.com/valyala/fasthttp v1.57.0 h1:Xw8SjWGEP/+wAAgyy5XTvgrWlOD1+TxbbvNADYCm1Tg=
github.com/valyala/fasthttp v1.57.0/go.mod h1:h6ZBaPRlzpZ6O3H5t2gEk1Qi33+TmLvfwgLLp0t9CpE=
github.com/valyala/tcplisten v1.0.0 h1:rBHj/Xf+E1tRGZyWIWwJDiRY0zc1Js+CV5DqwacVSA8=
github.com/valyala/tcplisten v1.0.0/go.mod h1:T0xQ8SeCZGxckz9qRXTfG43PvQ/mcWh7FwZEA7Ioqkc=
github.com/versity/scoutfs-go v0.0.0-20240325223134-38eb2f5f7d44 h1:Wx1o3pNrCzsHIIDyZ2MLRr6tF/1FhAr7HNDn80QqDWE=


@@ -43,13 +43,14 @@ type Tag struct {
// Manager is a manager of metrics plugins
type Manager struct {
wg sync.WaitGroup
ctx context.Context
addDataChan chan datapoint
config Config
publishers []publisher
addDataChan chan datapoint
publishers []publisher
wg sync.WaitGroup
}
type Config struct {
@@ -220,6 +221,6 @@ func (m *Manager) addForwarder(addChan <-chan datapoint) {
type datapoint struct {
key string
value int64
tags []Tag
value int64
}


@@ -26,11 +26,11 @@ import (
)
type S3AdminServer struct {
app *fiber.App
backend backend.Backend
app *fiber.App
router *S3AdminRouter
port string
cert *tls.Certificate
port string
}
func NewAdminServer(app *fiber.App, be backend.Backend, root middlewares.RootUserConfig, port, region string, iam auth.IAMService, l s3log.AuditLogger, opts ...AdminOpt) *S3AdminServer {


@@ -64,11 +64,11 @@ func TestAdminController_CreateUser(t *testing.T) {
`
tests := []struct {
name string
app *fiber.App
args args
wantErr bool
name string
statusCode int
wantErr bool
}{
{
name: "Admin-create-user-malformed-body",
@@ -149,11 +149,11 @@ func TestAdminController_UpdateUser(t *testing.T) {
`
tests := []struct {
name string
app *fiber.App
args args
wantErr bool
name string
statusCode int
wantErr bool
}{
{
name: "Admin-update-user-success",
@@ -223,11 +223,11 @@ func TestAdminController_DeleteUser(t *testing.T) {
app.Patch("/delete-user", adminController.DeleteUser)
tests := []struct {
name string
app *fiber.App
args args
wantErr bool
name string
statusCode int
wantErr bool
}{
{
name: "Admin-delete-user-success",
@@ -280,11 +280,11 @@ func TestAdminController_ListUsers(t *testing.T) {
appSucc.Patch("/list-users", adminController.ListUsers)
tests := []struct {
name string
app *fiber.App
args args
wantErr bool
name string
statusCode int
wantErr bool
}{
{
name: "Admin-list-users-iam-error",
@@ -361,11 +361,11 @@ func TestAdminController_ChangeBucketOwner(t *testing.T) {
appIamNoSuchUser.Patch("/change-bucket-owner", adminControllerIamAccDoesNotExist.ChangeBucketOwner)
tests := []struct {
name string
app *fiber.App
args args
wantErr bool
name string
statusCode int
wantErr bool
}{
{
name: "Change-bucket-owner-check-account-server-error",
@@ -424,11 +424,11 @@ func TestAdminController_ListBuckets(t *testing.T) {
app.Patch("/list-buckets", adminController.ListBuckets)
tests := []struct {
name string
app *fiber.App
args args
wantErr bool
name string
statusCode int
wantErr bool
}{
{
name: "List-buckets-success",


@@ -83,7 +83,7 @@ var _ backend.Backend = &BackendMock{}
// GetObjectAclFunc: func(contextMoqParam context.Context, getObjectAclInput *s3.GetObjectAclInput) (*s3.GetObjectAclOutput, error) {
// panic("mock out the GetObjectAcl method")
// },
// GetObjectAttributesFunc: func(contextMoqParam context.Context, getObjectAttributesInput *s3.GetObjectAttributesInput) (s3response.GetObjectAttributesResult, error) {
// GetObjectAttributesFunc: func(contextMoqParam context.Context, getObjectAttributesInput *s3.GetObjectAttributesInput) (s3response.GetObjectAttributesResponse, error) {
// panic("mock out the GetObjectAttributes method")
// },
// GetObjectLegalHoldFunc: func(contextMoqParam context.Context, bucket string, object string, versionId string) (*bool, error) {
@@ -104,7 +104,7 @@ var _ backend.Backend = &BackendMock{}
// HeadObjectFunc: func(contextMoqParam context.Context, headObjectInput *s3.HeadObjectInput) (*s3.HeadObjectOutput, error) {
// panic("mock out the HeadObject method")
// },
// ListBucketsFunc: func(contextMoqParam context.Context, owner string, isAdmin bool) (s3response.ListAllMyBucketsResult, error) {
// ListBucketsFunc: func(contextMoqParam context.Context, listBucketsInput s3response.ListBucketsInput) (s3response.ListAllMyBucketsResult, error) {
// panic("mock out the ListBuckets method")
// },
// ListBucketsAndOwnersFunc: func(contextMoqParam context.Context) ([]s3response.Bucket, error) {
@@ -244,7 +244,7 @@ type BackendMock struct {
GetObjectAclFunc func(contextMoqParam context.Context, getObjectAclInput *s3.GetObjectAclInput) (*s3.GetObjectAclOutput, error)
// GetObjectAttributesFunc mocks the GetObjectAttributes method.
GetObjectAttributesFunc func(contextMoqParam context.Context, getObjectAttributesInput *s3.GetObjectAttributesInput) (s3response.GetObjectAttributesResult, error)
GetObjectAttributesFunc func(contextMoqParam context.Context, getObjectAttributesInput *s3.GetObjectAttributesInput) (s3response.GetObjectAttributesResponse, error)
// GetObjectLegalHoldFunc mocks the GetObjectLegalHold method.
GetObjectLegalHoldFunc func(contextMoqParam context.Context, bucket string, object string, versionId string) (*bool, error)
@@ -265,7 +265,7 @@ type BackendMock struct {
HeadObjectFunc func(contextMoqParam context.Context, headObjectInput *s3.HeadObjectInput) (*s3.HeadObjectOutput, error)
// ListBucketsFunc mocks the ListBuckets method.
ListBucketsFunc func(contextMoqParam context.Context, owner string, isAdmin bool) (s3response.ListAllMyBucketsResult, error)
ListBucketsFunc func(contextMoqParam context.Context, listBucketsInput s3response.ListBucketsInput) (s3response.ListAllMyBucketsResult, error)
// ListBucketsAndOwnersFunc mocks the ListBucketsAndOwners method.
ListBucketsAndOwnersFunc func(contextMoqParam context.Context) ([]s3response.Bucket, error)
@@ -547,10 +547,8 @@ type BackendMock struct {
ListBuckets []struct {
// ContextMoqParam is the contextMoqParam argument value.
ContextMoqParam context.Context
// Owner is the owner argument value.
Owner string
// IsAdmin is the isAdmin argument value.
IsAdmin bool
// ListBucketsInput is the listBucketsInput argument value.
ListBucketsInput s3response.ListBucketsInput
}
// ListBucketsAndOwners holds details about calls to the ListBucketsAndOwners method.
ListBucketsAndOwners []struct {
@@ -1520,7 +1518,7 @@ func (mock *BackendMock) GetObjectAclCalls() []struct {
}
// GetObjectAttributes calls GetObjectAttributesFunc.
func (mock *BackendMock) GetObjectAttributes(contextMoqParam context.Context, getObjectAttributesInput *s3.GetObjectAttributesInput) (s3response.GetObjectAttributesResult, error) {
func (mock *BackendMock) GetObjectAttributes(contextMoqParam context.Context, getObjectAttributesInput *s3.GetObjectAttributesInput) (s3response.GetObjectAttributesResponse, error) {
if mock.GetObjectAttributesFunc == nil {
panic("BackendMock.GetObjectAttributesFunc: method is nil but Backend.GetObjectAttributes was just called")
}
@@ -1792,23 +1790,21 @@ func (mock *BackendMock) HeadObjectCalls() []struct {
}
// ListBuckets calls ListBucketsFunc.
func (mock *BackendMock) ListBuckets(contextMoqParam context.Context, owner string, isAdmin bool) (s3response.ListAllMyBucketsResult, error) {
func (mock *BackendMock) ListBuckets(contextMoqParam context.Context, listBucketsInput s3response.ListBucketsInput) (s3response.ListAllMyBucketsResult, error) {
if mock.ListBucketsFunc == nil {
panic("BackendMock.ListBucketsFunc: method is nil but Backend.ListBuckets was just called")
}
callInfo := struct {
ContextMoqParam context.Context
Owner string
IsAdmin bool
ContextMoqParam context.Context
ListBucketsInput s3response.ListBucketsInput
}{
ContextMoqParam: contextMoqParam,
Owner: owner,
IsAdmin: isAdmin,
ContextMoqParam: contextMoqParam,
ListBucketsInput: listBucketsInput,
}
mock.lockListBuckets.Lock()
mock.calls.ListBuckets = append(mock.calls.ListBuckets, callInfo)
mock.lockListBuckets.Unlock()
return mock.ListBucketsFunc(contextMoqParam, owner, isAdmin)
return mock.ListBucketsFunc(contextMoqParam, listBucketsInput)
}
// ListBucketsCalls gets all the calls that were made to ListBuckets.
@@ -1816,14 +1812,12 @@ func (mock *BackendMock) ListBuckets(contextMoqParam context.Context, owner stri
//
// len(mockedBackend.ListBucketsCalls())
func (mock *BackendMock) ListBucketsCalls() []struct {
ContextMoqParam context.Context
Owner string
IsAdmin bool
ContextMoqParam context.Context
ListBucketsInput s3response.ListBucketsInput
} {
var calls []struct {
ContextMoqParam context.Context
Owner string
IsAdmin bool
ContextMoqParam context.Context
ListBucketsInput s3response.ListBucketsInput
}
mock.lockListBuckets.RLock()
calls = mock.calls.ListBuckets
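
Note: with the new signature, test doubles receive one s3response.ListBucketsInput instead of separate owner/isAdmin arguments. A fragment (imports elided) showing how a test might wire the generated mock above:

mock := &BackendMock{
	ListBucketsFunc: func(ctx context.Context, input s3response.ListBucketsInput) (s3response.ListAllMyBucketsResult, error) {
		// Echo the paging parameters so the test can assert on them.
		return s3response.ListAllMyBucketsResult{
			Prefix:            input.Prefix,
			ContinuationToken: input.ContinuationToken,
		}, nil
	},
}
res, err := mock.ListBuckets(context.Background(), s3response.ListBucketsInput{
	Owner:      "root",
	IsAdmin:    true,
	MaxBuckets: 100,
})
// mock.ListBucketsCalls() now records one call with these inputs.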


@@ -51,8 +51,9 @@ type S3ApiController struct {
}
const (
iso8601Format = "20060102T150405Z"
defaultContentType = "binary/octet-stream"
iso8601Format = "20060102T150405Z"
iso8601TimeFormatExtended = "Mon Jan _2 15:04:05 2006"
defaultContentType = "binary/octet-stream"
)
func New(be backend.Backend, iam auth.IAMService, logger s3log.AuditLogger, evs s3event.S3EventSender, mm *metrics.Manager, debug bool, readonly bool) S3ApiController {
@@ -68,8 +69,32 @@ func New(be backend.Backend, iam auth.IAMService, logger s3log.AuditLogger, evs
}
func (c S3ApiController) ListBuckets(ctx *fiber.Ctx) error {
cToken := ctx.Query("continuation-token")
prefix := ctx.Query("prefix")
maxBucketsStr := ctx.Query("max-buckets")
acct := ctx.Locals("account").(auth.Account)
res, err := c.be.ListBuckets(ctx.Context(), acct.Access, acct.Role == "admin")
maxBuckets, err := utils.ParseUint(maxBucketsStr)
if err != nil || maxBuckets > 10000 {
if c.debug {
log.Printf("error parsing max-buckets %q: %v\n", maxBucketsStr, err)
}
return SendXMLResponse(ctx, nil, s3err.GetAPIError(s3err.ErrInvalidMaxBuckets),
&MetaOpts{
Logger: c.logger,
MetricsMng: c.mm,
Action: metrics.ActionListAllMyBuckets,
})
}
res, err := c.be.ListBuckets(ctx.Context(),
s3response.ListBucketsInput{
Owner: acct.Access,
IsAdmin: acct.Role == auth.RoleAdmin,
MaxBuckets: maxBuckets,
ContinuationToken: cToken,
Prefix: prefix,
})
return SendXMLResponse(ctx, res, err,
&MetaOpts{
Logger: c.logger,
@@ -355,7 +380,19 @@ func (c S3ApiController) GetActions(ctx *fiber.Ctx) error {
BucketOwner: parsedAcl.Owner,
})
}
attrs := utils.ParseObjectAttributes(ctx)
attrs, err := utils.ParseObjectAttributes(ctx)
if err != nil {
if c.debug {
log.Printf("error parsing object attributes: %v", err)
}
return SendXMLResponse(ctx, nil, err,
&MetaOpts{
Logger: c.logger,
MetricsMng: c.mm,
Action: metrics.ActionGetObjectAttributes,
BucketOwner: parsedAcl.Owner,
})
}
res, err := c.be.GetObjectAttributes(ctx.Context(),
&s3.GetObjectAttributesInput{
@@ -366,6 +403,22 @@ func (c S3ApiController) GetActions(ctx *fiber.Ctx) error {
VersionId: &versionId,
})
if err != nil {
hdrs := []utils.CustomHeader{}
if res.DeleteMarker != nil {
hdrs = append(hdrs, utils.CustomHeader{
Key: "x-amz-delete-marker",
Value: "true",
})
}
if getstring(res.VersionId) != "" {
hdrs = append(hdrs, utils.CustomHeader{
Key: "x-amz-version-id",
Value: getstring(res.VersionId),
})
}
utils.SetResponseHeaders(ctx, hdrs)
return SendXMLResponse(ctx, nil, err,
&MetaOpts{
Logger: c.logger,
@@ -374,7 +427,31 @@ func (c S3ApiController) GetActions(ctx *fiber.Ctx) error {
BucketOwner: parsedAcl.Owner,
})
}
return SendXMLResponse(ctx, utils.FilterObjectAttributes(attrs, res), err,
hdrs := []utils.CustomHeader{}
if getstring(res.VersionId) != "" {
hdrs = append(hdrs, utils.CustomHeader{
Key: "x-amz-version-id",
Value: getstring(res.VersionId),
})
}
if res.DeleteMarker != nil && *res.DeleteMarker {
hdrs = append(hdrs, utils.CustomHeader{
Key: "x-amz-delete-marker",
Value: "true",
})
}
if res.LastModified != nil {
hdrs = append(hdrs, utils.CustomHeader{
Key: "Last-Modified",
Value: res.LastModified.UTC().Format(iso8601TimeFormatExtended),
})
}
utils.SetResponseHeaders(ctx, hdrs)
return SendXMLResponse(ctx, utils.FilterObjectAttributes(attrs, res), nil,
&MetaOpts{
Logger: c.logger,
MetricsMng: c.mm,
@@ -1073,6 +1150,7 @@ func (c S3ApiController) PutBucketActions(ctx *fiber.Ctx) error {
MetricsMng: c.mm,
Action: metrics.ActionPutBucketTagging,
BucketOwner: parsedAcl.Owner,
Status: http.StatusNoContent,
})
}
@@ -2876,15 +2954,6 @@ func (c S3ApiController) HeadObject(ctx *fiber.Ctx) error {
BucketOwner: parsedAcl.Owner,
})
}
if res == nil {
return SendResponse(ctx, fmt.Errorf("head object nil response"),
&MetaOpts{
Logger: c.logger,
MetricsMng: c.mm,
Action: metrics.ActionHeadObject,
BucketOwner: parsedAcl.Owner,
})
}
utils.SetMetaHeaders(ctx, res.Metadata)
headers := []utils.CustomHeader{
@@ -3239,14 +3308,14 @@ type MetaOpts struct {
Logger s3log.AuditLogger
EvSender s3event.S3EventSender
MetricsMng *metrics.Manager
ContentLength int64
Action string
BucketOwner string
ObjectSize int64
ObjectCount int64
EventName s3event.EventType
ObjectETag *string
VersionId *string
Action string
BucketOwner string
EventName s3event.EventType
ContentLength int64
ObjectSize int64
ObjectCount int64
Status int
}
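
Note: the ListBuckets handler above rejects max-buckets values that fail to parse or exceed 10000. A hypothetical standalone version of that guard; the real handler uses utils.ParseUint, and both the lower bound and the default for a missing query value are assumptions taken from the ErrInvalidMaxBuckets message ("between 1 and 10000"):

// validateMaxBuckets mirrors the handler's check in sketch form.
func validateMaxBuckets(s string) (int32, error) {
	if s == "" {
		return 10000, nil // assumed default when max-buckets is absent
	}
	n, err := strconv.ParseInt(s, 10, 32)
	if err != nil || n < 1 || n > 10000 {
		return 0, s3err.GetAPIError(s3err.ErrInvalidMaxBuckets)
	}
	return int32(n), nil
}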


@@ -92,7 +92,7 @@ func TestS3ApiController_ListBuckets(t *testing.T) {
app := fiber.New()
s3ApiController := S3ApiController{
be: &BackendMock{
ListBucketsFunc: func(context.Context, string, bool) (s3response.ListAllMyBucketsResult, error) {
ListBucketsFunc: func(contextMoqParam context.Context, listBucketsInput s3response.ListBucketsInput) (s3response.ListAllMyBucketsResult, error) {
return s3response.ListAllMyBucketsResult{}, nil
},
},
@@ -109,7 +109,7 @@ func TestS3ApiController_ListBuckets(t *testing.T) {
appErr := fiber.New()
s3ApiControllerErr := S3ApiController{
be: &BackendMock{
ListBucketsFunc: func(context.Context, string, bool) (s3response.ListAllMyBucketsResult, error) {
ListBucketsFunc: func(contextMoqParam context.Context, listBucketsInput s3response.ListBucketsInput) (s3response.ListAllMyBucketsResult, error) {
return s3response.ListAllMyBucketsResult{}, s3err.GetAPIError(s3err.ErrMethodNotAllowed)
},
},
@@ -123,11 +123,11 @@ func TestS3ApiController_ListBuckets(t *testing.T) {
appErr.Get("/", s3ApiControllerErr.ListBuckets)
tests := []struct {
name string
args args
app *fiber.App
wantErr bool
name string
statusCode int
wantErr bool
}{
{
name: "List-bucket-method-not-allowed",
@@ -187,8 +187,8 @@ func TestS3ApiController_GetActions(t *testing.T) {
GetObjectAclFunc: func(context.Context, *s3.GetObjectAclInput) (*s3.GetObjectAclOutput, error) {
return &s3.GetObjectAclOutput{}, nil
},
GetObjectAttributesFunc: func(context.Context, *s3.GetObjectAttributesInput) (s3response.GetObjectAttributesResult, error) {
return s3response.GetObjectAttributesResult{}, nil
GetObjectAttributesFunc: func(context.Context, *s3.GetObjectAttributesInput) (s3response.GetObjectAttributesResponse, error) {
return s3response.GetObjectAttributesResponse{}, nil
},
GetObjectFunc: func(context.Context, *s3.GetObjectInput) (*s3.GetObjectOutput, error) {
return &s3.GetObjectOutput{
@@ -233,11 +233,11 @@ func TestS3ApiController_GetActions(t *testing.T) {
getObjAttrs.Header.Set("X-Amz-Object-Attributes", "hello")
tests := []struct {
name string
app *fiber.App
args args
wantErr bool
name string
statusCode int
wantErr bool
}{
{
name: "Get-actions-get-tags-success",
@@ -435,11 +435,11 @@ func TestS3ApiController_ListActions(t *testing.T) {
appError.Get("/:bucket", s3ApiControllerError.ListActions)
tests := []struct {
name string
app *fiber.App
args args
wantErr bool
name string
statusCode int
wantErr bool
}{
{
name: "Get-bucket-tagging-non-existing-bucket",
@@ -728,11 +728,11 @@ func TestS3ApiController_PutBucketActions(t *testing.T) {
invAclOwnershipReq.Header.Set("X-Amz-Grant-Read", "hello")
tests := []struct {
name string
app *fiber.App
args args
wantErr bool
name string
statusCode int
wantErr bool
}{
{
name: "Put-bucket-tagging-invalid-body",
@@ -750,7 +750,7 @@ func TestS3ApiController_PutBucketActions(t *testing.T) {
req: httptest.NewRequest(http.MethodPut, "/my-bucket?tagging", strings.NewReader(tagBody)),
},
wantErr: false,
statusCode: 200,
statusCode: 204,
},
{
name: "Put-bucket-ownership-controls-invalid-ownership",
@@ -1034,11 +1034,11 @@ func TestS3ApiController_PutActions(t *testing.T) {
invAclBodyGrtReq.Header.Set("X-Amz-Grant-Read", "hello")
tests := []struct {
name string
app *fiber.App
args args
wantErr bool
name string
statusCode int
wantErr bool
}{
{
name: "Put-object-part-error-case",
@@ -1243,11 +1243,11 @@ func TestS3ApiController_DeleteBucket(t *testing.T) {
app.Delete("/:bucket", s3ApiController.DeleteBucket)
tests := []struct {
name string
app *fiber.App
args args
wantErr bool
name string
statusCode int
wantErr bool
}{
{
name: "Delete-bucket-success",
@@ -1334,11 +1334,11 @@ func TestS3ApiController_DeleteObjects(t *testing.T) {
request.Header.Set("Content-Type", "application/xml")
tests := []struct {
name string
app *fiber.App
args args
wantErr bool
name string
statusCode int
wantErr bool
}{
{
name: "Delete-Objects-success",
@@ -1432,11 +1432,11 @@ func TestS3ApiController_DeleteActions(t *testing.T) {
appErr.Delete("/:bucket/:key/*", s3ApiControllerErr.DeleteActions)
tests := []struct {
name string
app *fiber.App
args args
wantErr bool
name string
statusCode int
wantErr bool
}{
{
name: "Abort-multipart-upload-success",
@@ -1541,11 +1541,11 @@ func TestS3ApiController_HeadBucket(t *testing.T) {
appErr.Head("/:bucket", s3ApiControllerErr.HeadBucket)
tests := []struct {
name string
app *fiber.App
args args
wantErr bool
name string
statusCode int
wantErr bool
}{
{
name: "Head-bucket-success",
@@ -1628,7 +1628,7 @@ func TestS3ApiController_HeadObject(t *testing.T) {
return acldata, nil
},
HeadObjectFunc: func(context.Context, *s3.HeadObjectInput) (*s3.HeadObjectOutput, error) {
return nil, s3err.GetAPIError(42)
return nil, s3err.GetAPIError(s3err.ErrInvalidRequest)
},
},
}
@@ -1643,11 +1643,11 @@ func TestS3ApiController_HeadObject(t *testing.T) {
appErr.Head("/:bucket/:key/*", s3ApiControllerErr.HeadObject)
tests := []struct {
name string
app *fiber.App
args args
wantErr bool
name string
statusCode int
wantErr bool
}{
{
name: "Head-object-success",
@@ -1723,11 +1723,11 @@ func TestS3ApiController_CreateActions(t *testing.T) {
app.Post("/:bucket/:key/*", s3ApiController.CreateActions)
tests := []struct {
name string
app *fiber.App
args args
wantErr bool
name string
statusCode int
wantErr bool
}{
{
name: "Restore-object-success",
@@ -1808,10 +1808,10 @@ func Test_XMLresponse(t *testing.T) {
ctx := app.AcquireCtx(&fasthttp.RequestCtx{})
tests := []struct {
name string
args args
wantErr bool
name string
statusCode int
wantErr bool
}{
{
name: "Internal-server-error",
@@ -1883,10 +1883,10 @@ func Test_response(t *testing.T) {
ctx := app.AcquireCtx(&fasthttp.RequestCtx{})
tests := []struct {
name string
args args
wantErr bool
name string
statusCode int
wantErr bool
}{
{
name: "Internal-server-error",


@@ -149,8 +149,8 @@ func VerifyV4Signature(root RootUserConfig, iam auth.IAMService, logger s3log.Au
}
type accounts struct {
root RootUserConfig
iam auth.IAMService
root RootUserConfig
}
func (a accounts) getAccount(access string) (auth.Account, error) {


@@ -29,9 +29,9 @@ func TestS3ApiRouter_Init(t *testing.T) {
iam auth.IAMService
}
tests := []struct {
name string
sa *S3ApiRouter
args args
sa *S3ApiRouter
name string
}{
{
name: "Initialize S3 api router",


@@ -29,15 +29,15 @@ import (
)
type S3ApiServer struct {
app *fiber.App
backend backend.Backend
app *fiber.App
router *S3ApiRouter
port string
cert *tls.Certificate
port string
health string
quiet bool
debug bool
readonly bool
health string
}
func New(


@@ -39,9 +39,9 @@ func TestNew(t *testing.T) {
port := ":7070"
tests := []struct {
name string
args args
wantS3ApiServer *S3ApiServer
args args
name string
wantErr bool
}{
{
@@ -78,8 +78,8 @@ func TestNew(t *testing.T) {
func TestS3ApiServer_Serve(t *testing.T) {
tests := []struct {
name string
sa *S3ApiServer
name string
wantErr bool
}{
{


@@ -41,10 +41,10 @@ const (
// the data is completely read.
type AuthReader struct {
ctx *fiber.Ctx
r *HashReader
auth AuthData
secret string
size int
r *HashReader
debug bool
}


@@ -48,15 +48,15 @@ const (
// object data stream
type ChunkReader struct {
r io.Reader
signingKey []byte
chunkHash hash.Hash
prevSig string
parsedSig string
strToSignPrefix string
signingKey []byte
stash []byte
currentChunkSize int64
chunkDataLeft int64
trailerExpected int
stash []byte
chunkHash hash.Hash
strToSignPrefix string
skipcheck bool
}


@@ -41,10 +41,10 @@ const (
// data requests where the data size is not known until
// the data is completely read.
type PresignedAuthReader struct {
r io.Reader
ctx *fiber.Ctx
auth AuthData
secret string
r io.Reader
debug bool
}


@@ -23,13 +23,13 @@ import (
func Test_validateExpiration(t *testing.T) {
type args struct {
str string
date time.Time
str string
}
tests := []struct {
name string
args args
err error
name string
}{
{
name: "empty-expiration",


@@ -254,35 +254,51 @@ func ParseDeleteObjects(objs []types.ObjectIdentifier) (result []string) {
return
}
func FilterObjectAttributes(attrs map[types.ObjectAttributes]struct{}, output s3response.GetObjectAttributesResult) s3response.GetObjectAttributesResult {
if _, ok := attrs[types.ObjectAttributesEtag]; !ok {
func FilterObjectAttributes(attrs map[s3response.ObjectAttributes]struct{}, output s3response.GetObjectAttributesResponse) s3response.GetObjectAttributesResponse {
// These properties shouldn't appear in the final response body
output.LastModified = nil
output.VersionId = nil
output.DeleteMarker = nil
if _, ok := attrs[s3response.ObjectAttributesEtag]; !ok {
output.ETag = nil
}
if _, ok := attrs[types.ObjectAttributesObjectParts]; !ok {
if _, ok := attrs[s3response.ObjectAttributesObjectParts]; !ok {
output.ObjectParts = nil
}
if _, ok := attrs[types.ObjectAttributesObjectSize]; !ok {
if _, ok := attrs[s3response.ObjectAttributesObjectSize]; !ok {
output.ObjectSize = nil
}
if _, ok := attrs[types.ObjectAttributesStorageClass]; !ok {
if _, ok := attrs[s3response.ObjectAttributesStorageClass]; !ok {
output.StorageClass = ""
}
return output
}
func ParseObjectAttributes(ctx *fiber.Ctx) map[types.ObjectAttributes]struct{} {
attrs := map[types.ObjectAttributes]struct{}{}
func ParseObjectAttributes(ctx *fiber.Ctx) (map[s3response.ObjectAttributes]struct{}, error) {
attrs := map[s3response.ObjectAttributes]struct{}{}
var err error
ctx.Request().Header.VisitAll(func(key, value []byte) {
if string(key) == "X-Amz-Object-Attributes" {
oattrs := strings.Split(string(value), ",")
for _, a := range oattrs {
attrs[types.ObjectAttributes(a)] = struct{}{}
attr := s3response.ObjectAttributes(a)
if !attr.IsValid() {
err = s3err.GetAPIError(s3err.ErrInvalidObjectAttributes)
break
}
attrs[attr] = struct{}{}
}
}
})
return attrs
if len(attrs) == 0 {
return nil, s3err.GetAPIError(s3err.ErrObjectAttributesInvalidHeader)
}
return attrs, err
}
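
Note: a standalone sketch of the validation flow in ParseObjectAttributes, with the fiber header iteration replaced by a plain string argument; parseAttrs is a hypothetical helper with the same error semantics:

func parseAttrs(header string) (map[s3response.ObjectAttributes]struct{}, error) {
	if header == "" {
		// A missing or empty header is its own error case.
		return nil, s3err.GetAPIError(s3err.ErrObjectAttributesInvalidHeader)
	}
	attrs := map[s3response.ObjectAttributes]struct{}{}
	for _, a := range strings.Split(header, ",") {
		attr := s3response.ObjectAttributes(a)
		if !attr.IsValid() {
			// Unknown attribute names are rejected outright.
			return nil, s3err.GetAPIError(s3err.ErrInvalidObjectAttributes)
		}
		attrs[attr] = struct{}{}
	}
	return attrs, nil
}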
type objLockCfg struct {


@@ -19,10 +19,12 @@ import (
"net/http"
"reflect"
"testing"
"time"
"github.com/aws/aws-sdk-go-v2/service/s3/types"
"github.com/gofiber/fiber/v2"
"github.com/valyala/fasthttp"
"github.com/versity/versitygw/backend"
"github.com/versity/versitygw/s3response"
)
@@ -47,11 +49,11 @@ func TestCreateHttpRequestFromCtx(t *testing.T) {
request2.Header.Add("X-Amz-Mfa", "Some valid Mfa")
tests := []struct {
name string
args args
want *http.Request
wantErr bool
name string
hdrs []string
wantErr bool
}{
{
name: "Success-response",
@@ -99,9 +101,9 @@ func TestGetUserMetaData(t *testing.T) {
req := ctx.Request()
tests := []struct {
name string
args args
wantMetadata map[string]string
name string
}{
{
name: "Success-empty-response",
@@ -283,47 +285,68 @@ func TestParseUint(t *testing.T) {
func TestFilterObjectAttributes(t *testing.T) {
type args struct {
attrs map[types.ObjectAttributes]struct{}
output s3response.GetObjectAttributesResult
attrs map[s3response.ObjectAttributes]struct{}
output s3response.GetObjectAttributesResponse
}
etag, objSize := "etag", int64(3222)
delMarker := true
tests := []struct {
name string
args args
want s3response.GetObjectAttributesResult
want s3response.GetObjectAttributesResponse
}{
{
name: "keep only ETag",
args: args{
attrs: map[types.ObjectAttributes]struct{}{
types.ObjectAttributesEtag: {},
attrs: map[s3response.ObjectAttributes]struct{}{
s3response.ObjectAttributesEtag: {},
},
output: s3response.GetObjectAttributesResult{
output: s3response.GetObjectAttributesResponse{
ObjectSize: &objSize,
ETag: &etag,
},
},
want: s3response.GetObjectAttributesResult{ETag: &etag},
want: s3response.GetObjectAttributesResponse{ETag: &etag},
},
{
name: "keep multiple props",
args: args{
attrs: map[types.ObjectAttributes]struct{}{
types.ObjectAttributesEtag: {},
types.ObjectAttributesObjectSize: {},
types.ObjectAttributesStorageClass: {},
attrs: map[s3response.ObjectAttributes]struct{}{
s3response.ObjectAttributesEtag: {},
s3response.ObjectAttributesObjectSize: {},
s3response.ObjectAttributesStorageClass: {},
},
output: s3response.GetObjectAttributesResult{
output: s3response.GetObjectAttributesResponse{
ObjectSize: &objSize,
ETag: &etag,
ObjectParts: &s3response.ObjectParts{},
VersionId: &etag,
},
},
want: s3response.GetObjectAttributesResult{
want: s3response.GetObjectAttributesResponse{
ETag: &etag,
ObjectSize: &objSize,
VersionId: &etag,
},
},
{
name: "make sure LastModified, DeleteMarker and VersionId are removed",
args: args{
attrs: map[s3response.ObjectAttributes]struct{}{
s3response.ObjectAttributesEtag: {},
},
output: s3response.GetObjectAttributesResponse{
ObjectSize: &objSize,
ETag: &etag,
ObjectParts: &s3response.ObjectParts{},
VersionId: &etag,
LastModified: backend.GetTimePtr(time.Now()),
DeleteMarker: &delMarker,
},
},
want: s3response.GetObjectAttributesResponse{
ETag: &etag,
},
},
}


@@ -66,9 +66,11 @@ const (
ErrInvalidBucketName
ErrInvalidDigest
ErrInvalidMaxKeys
ErrInvalidMaxBuckets
ErrInvalidMaxUploads
ErrInvalidMaxParts
ErrInvalidPartNumberMarker
ErrInvalidObjectAttributes
ErrInvalidPart
ErrInvalidPartNumber
ErrInternalError
@@ -123,6 +125,7 @@ const (
ErrNoSuchBucketPolicy
ErrBucketTaggingNotFound
ErrObjectLockInvalidHeaders
ErrObjectAttributesInvalidHeader
ErrRequestTimeTooSkewed
ErrInvalidBucketAclWithObjectOwnership
ErrBothCannedAndHeaderGrants
@@ -193,6 +196,11 @@ var errorCodeResponse = map[ErrorCode]APIError{
Description: "The Content-Md5 you specified is not valid.",
HTTPStatusCode: http.StatusBadRequest,
},
ErrInvalidMaxBuckets: {
Code: "InvalidArgument",
Description: "Argument max-buckets must be an integer between 1 and 10000.",
HTTPStatusCode: http.StatusBadRequest,
},
ErrInvalidMaxUploads: {
Code: "InvalidArgument",
Description: "Argument max-uploads must be an integer between 0 and 2147483647.",
@@ -213,6 +221,11 @@ var errorCodeResponse = map[ErrorCode]APIError{
Description: "Argument partNumberMarker must be an integer.",
HTTPStatusCode: http.StatusBadRequest,
},
ErrInvalidObjectAttributes: {
Code: "InvalidArgument",
Description: "Invalid attribute name specified.",
HTTPStatusCode: http.StatusBadRequest,
},
ErrNoSuchBucket: {
Code: "NoSuchBucket",
Description: "The specified bucket does not exist.",
@@ -493,6 +506,11 @@ var errorCodeResponse = map[ErrorCode]APIError{
Description: "x-amz-object-lock-retain-until-date and x-amz-object-lock-mode must both be supplied.",
HTTPStatusCode: http.StatusBadRequest,
},
ErrObjectAttributesInvalidHeader: {
Code: "InvalidRequest",
Description: "The x-amz-object-attributes header specifying the attributes to be retrieved is either missing or empty",
HTTPStatusCode: http.StatusBadRequest,
},
ErrRequestTimeTooSkewed: {
Code: "RequestTimeTooSkewed",
Description: "The difference between the request time and the server's time is too large.",


@@ -30,11 +30,11 @@ type S3EventSender interface {
}
type EventMeta struct {
ObjectETag *string
VersionId *string
BucketOwner string
EventName EventType
ObjectSize int64
ObjectETag *string
VersionId *string
}
type EventSchema struct {
@@ -42,6 +42,8 @@ type EventSchema struct {
}
type EventRecord struct {
ResponseElements EventResponseElements `json:"responseElements"`
GlacierEventData EventGlacierData `json:"glacierEventData"`
EventVersion string `json:"eventVersion"`
EventSource string `json:"eventSource"`
AwsRegion string `json:"awsRegion"`
@@ -49,9 +51,7 @@ type EventRecord struct {
EventName EventType `json:"eventName"`
UserIdentity EventUserIdentity `json:"userIdentity"`
RequestParameters EventRequestParams `json:"requestParameters"`
ResponseElements EventResponseElements `json:"responseElements"`
S3 EventS3Data `json:"s3"`
GlacierEventData EventGlacierData `json:"glacierEventData"`
}
type EventUserIdentity struct {
@@ -99,11 +99,11 @@ type EventS3BucketData struct {
}
type EventObjectData struct {
Key string `json:"key"`
Size int64 `json:"size"`
ETag *string `json:"eTag"`
VersionId *string `json:"versionId"`
Key string `json:"key"`
Sequencer string `json:"sequencer"`
Size int64 `json:"size"`
}
type EventConfig struct {


@@ -31,9 +31,9 @@ import (
var sequencer = 0
type Kafka struct {
key string
writer *kafka.Writer
filter EventFilter
key string
mu sync.Mutex
}


@@ -27,10 +27,10 @@ import (
)
type NatsEventSender struct {
topic string
client *nats.Conn
mu sync.Mutex
filter EventFilter
topic string
mu sync.Mutex
}
func InitNatsEventService(url, topic string, filter EventFilter) (S3EventSender, error) {


@@ -30,9 +30,9 @@ import (
)
type Webhook struct {
url string
client *http.Client
filter EventFilter
url string
mu sync.Mutex
}


@@ -33,8 +33,8 @@ type AuditLogger interface {
type LogMeta struct {
BucketOwner string
ObjectSize int64
Action string
ObjectSize int64
HttpStatus int
}
@@ -45,21 +45,16 @@ type LogConfig struct {
}
type LogFields struct {
Time time.Time
BucketOwner string
Bucket string
Time time.Time
RemoteIP string
Requester string
RequestID string
Operation string
Key string
RequestURI string
HttpStatus int
ErrorCode string
BytesSent int
ObjectSize int64
TotalTime int64
TurnAroundTime int64
Referer string
UserAgent string
VersionID string
@@ -71,6 +66,11 @@ type LogFields struct {
TLSVersion string
AccessPointARN string
AclRequired string
HttpStatus int
BytesSent int
ObjectSize int64
TotalTime int64
TurnAroundTime int64
}
type AdminLogFields struct {
@@ -80,17 +80,17 @@ type AdminLogFields struct {
RequestID string
Operation string
RequestURI string
HttpStatus int
ErrorCode string
BytesSent int
TotalTime int64
TurnAroundTime int64
Referer string
UserAgent string
SignatureVersion string
CipherSuite string
AuthenticationType string
TLSVersion string
HttpStatus int
BytesSent int
TotalTime int64
TurnAroundTime int64
}
type Loggers struct {


@@ -34,10 +34,10 @@ const (
// FileLogger is a local file audit log
type FileLogger struct {
logfile string
f *os.File
gotErr bool
logfile string
mu sync.Mutex
gotErr bool
}
var _ AuditLogger = &FileLogger{}


@@ -33,8 +33,8 @@ import (
// WebhookLogger is a webhook URL audit log
type WebhookLogger struct {
mu sync.Mutex
url string
mu sync.Mutex
}
var _ AuditLogger = &WebhookLogger{}


@@ -35,17 +35,17 @@ type PutObjectOutput struct {
// Part describes part metadata.
type Part struct {
PartNumber int
LastModified time.Time
ETag string
PartNumber int
Size int64
}
func (p Part) MarshalXML(e *xml.Encoder, start xml.StartElement) error {
type Alias Part
aux := &struct {
LastModified string `xml:"LastModified"`
*Alias
LastModified string `xml:"LastModified"`
}{
Alias: (*Alias)(&p),
}
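
Note: the aux/Alias construction recurring in these hunks is the standard way to customize one field in MarshalXML: the type alias drops Part's method set, so encoding the embedded value cannot re-enter MarshalXML, while the outer string field shadows the embedded time field. A self-contained sketch, with RFC 3339 standing in for the repository's iso8601TimeFormat:

package main

import (
	"encoding/xml"
	"os"
	"time"
)

type Part struct {
	PartNumber   int
	LastModified time.Time
	ETag         string
	Size         int64
}

func (p Part) MarshalXML(e *xml.Encoder, start xml.StartElement) error {
	// Alias has Part's fields but not its methods, so encoding the
	// embedded value does not recurse back into this MarshalXML.
	type Alias Part
	aux := &struct {
		*Alias
		LastModified string `xml:"LastModified"`
	}{
		Alias:        (*Alias)(&p),
		LastModified: p.LastModified.UTC().Format(time.RFC3339),
	}
	return e.EncodeElement(aux, start)
}

func main() {
	xml.NewEncoder(os.Stdout).Encode(Part{PartNumber: 1, ETag: "abc"})
}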
@@ -59,57 +59,61 @@ func (p Part) MarshalXML(e *xml.Encoder, start xml.StartElement) error {
type ListPartsResult struct {
XMLName xml.Name `xml:"http://s3.amazonaws.com/doc/2006-03-01/ ListPartsResult" json:"-"`
Initiator Initiator
Owner Owner
Bucket string
Key string
UploadID string `xml:"UploadId"`
Initiator Initiator
Owner Owner
// The class of storage used to store the object.
StorageClass types.StorageClass
// List of parts.
Parts []Part `xml:"Part"`
PartNumberMarker int
NextPartNumberMarker int
MaxParts int
IsTruncated bool
// List of parts.
Parts []Part `xml:"Part"`
}
type GetObjectAttributesResult struct {
ETag *string
LastModified *time.Time
ObjectSize *int64
StorageClass types.StorageClass
type ObjectAttributes string
const (
ObjectAttributesEtag ObjectAttributes = "ETag"
ObjectAttributesChecksum ObjectAttributes = "Checksum"
ObjectAttributesObjectParts ObjectAttributes = "ObjectParts"
ObjectAttributesStorageClass ObjectAttributes = "StorageClass"
ObjectAttributesObjectSize ObjectAttributes = "ObjectSize"
)
func (o ObjectAttributes) IsValid() bool {
return o == ObjectAttributesChecksum ||
o == ObjectAttributesEtag ||
o == ObjectAttributesObjectParts ||
o == ObjectAttributesObjectSize ||
o == ObjectAttributesStorageClass
}
type GetObjectAttributesResponse struct {
ETag *string
ObjectSize *int64
ObjectParts *ObjectParts
// Not included in the response body
VersionId *string
ObjectParts *ObjectParts
}
func (r GetObjectAttributesResult) MarshalXML(e *xml.Encoder, start xml.StartElement) error {
type Alias GetObjectAttributesResult
aux := &struct {
LastModified *string `xml:"LastModified"`
*Alias
}{
Alias: (*Alias)(&r),
}
if r.LastModified != nil {
formattedTime := r.LastModified.UTC().Format(iso8601TimeFormat)
aux.LastModified = &formattedTime
}
return e.EncodeElement(aux, start)
LastModified *time.Time
DeleteMarker *bool
StorageClass types.StorageClass `xml:",omitempty"`
}
type ObjectParts struct {
Parts []types.ObjectPart `xml:"Part"`
PartNumberMarker int
NextPartNumberMarker int
MaxParts int
IsTruncated bool
Parts []types.ObjectPart `xml:"Part"`
}
// ListMultipartUploadsResult - s3 api list multipart uploads result.
@@ -124,18 +128,17 @@ type ListMultipartUploadsResult struct {
Delimiter string
Prefix string
EncodingType string `xml:"EncodingType,omitempty"`
MaxUploads int
IsTruncated bool
// List of pending uploads.
Uploads []Upload `xml:"Upload"`
// Delimited common prefixes.
CommonPrefixes []CommonPrefix
MaxUploads int
IsTruncated bool
}
type ListObjectsResult struct {
XMLName xml.Name `xml:"http://s3.amazonaws.com/doc/2006-03-01/ ListBucketResult" json:"-"`
Name *string
Prefix *string
Marker *string
@@ -143,13 +146,13 @@ type ListObjectsResult struct {
MaxKeys *int32
Delimiter *string
IsTruncated *bool
XMLName xml.Name `xml:"http://s3.amazonaws.com/doc/2006-03-01/ ListBucketResult" json:"-"`
EncodingType types.EncodingType
Contents []Object
CommonPrefixes []types.CommonPrefix
EncodingType types.EncodingType
}
type ListObjectsV2Result struct {
XMLName xml.Name `xml:"http://s3.amazonaws.com/doc/2006-03-01/ ListBucketResult" json:"-"`
Name *string
Prefix *string
StartAfter *string
@@ -159,9 +162,10 @@ type ListObjectsV2Result struct {
MaxKeys *int32
Delimiter *string
IsTruncated *bool
XMLName xml.Name `xml:"http://s3.amazonaws.com/doc/2006-03-01/ ListBucketResult" json:"-"`
EncodingType types.EncodingType
Contents []Object
CommonPrefixes []types.CommonPrefix
EncodingType types.EncodingType
}
type Object struct {
@@ -193,19 +197,19 @@ func (o Object) MarshalXML(e *xml.Encoder, start xml.StartElement) error {
// Upload describes in progress multipart upload
type Upload struct {
Key string
UploadID string `xml:"UploadId"`
Initiated time.Time
Initiator Initiator
Owner Owner
Key string
UploadID string `xml:"UploadId"`
StorageClass types.StorageClass
Initiated time.Time
}
func (u Upload) MarshalXML(e *xml.Encoder, start xml.StartElement) error {
type Alias Upload
aux := &struct {
Initiated string `xml:"Initiated"`
*Alias
Initiated string `xml:"Initiated"`
}{
Alias: (*Alias)(&u),
}
@@ -257,11 +261,11 @@ type DeleteResult struct {
}
type SelectObjectContentPayload struct {
Expression *string
ExpressionType types.ExpressionType
RequestProgress *types.RequestProgress
InputSerialization *types.InputSerialization
OutputSerialization *types.OutputSerialization
ScanRange *types.ScanRange
ExpressionType types.ExpressionType
}
type SelectObjectContentResult struct {
@@ -277,22 +281,32 @@ type Bucket struct {
Owner string `json:"owner"`
}
type ListBucketsInput struct {
Owner string
ContinuationToken string
Prefix string
MaxBuckets int32
IsAdmin bool
}
type ListAllMyBucketsResult struct {
XMLName xml.Name `xml:"http://s3.amazonaws.com/doc/2006-03-01/ ListAllMyBucketsResult" json:"-"`
Owner CanonicalUser
Buckets ListAllMyBucketsList
XMLName xml.Name `xml:"http://s3.amazonaws.com/doc/2006-03-01/ ListAllMyBucketsResult" json:"-"`
Owner CanonicalUser
ContinuationToken string `xml:"ContinuationToken,omitempty"`
Prefix string `xml:"Prefix,omitempty"`
Buckets ListAllMyBucketsList
}
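The new ContinuationToken, Prefix, and MaxBuckets fields mirror paginated ListBuckets. A hedged client-side sketch of how the integration tests below drive that pagination, assuming an aws-sdk-go-v2 release whose ListBucketsInput exposes these fields (as the tests in this changeset do); the page size of 100 is an arbitrary example value:

package main

import (
	"context"

	"github.com/aws/aws-sdk-go-v2/service/s3"
	"github.com/aws/aws-sdk-go-v2/service/s3/types"
)

// listAllBuckets pages through ListBuckets until the server stops returning
// a continuation token; the tests below expect a nil token on the final page.
func listAllBuckets(ctx context.Context, client *s3.Client, prefix string) ([]types.Bucket, error) {
	var all []types.Bucket
	maxBuckets := int32(100)
	var token *string
	for {
		out, err := client.ListBuckets(ctx, &s3.ListBucketsInput{
			Prefix:            &prefix,
			MaxBuckets:        &maxBuckets,
			ContinuationToken: token,
		})
		if err != nil {
			return nil, err
		}
		all = append(all, out.Buckets...)
		if out.ContinuationToken == nil {
			return all, nil
		}
		token = out.ContinuationToken
	}
}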
type ListAllMyBucketsEntry struct {
Name string
CreationDate time.Time
Name string
}
func (r ListAllMyBucketsEntry) MarshalXML(e *xml.Encoder, start xml.StartElement) error {
type Alias ListAllMyBucketsEntry
aux := &struct {
CreationDate string `xml:"CreationDate"`
*Alias
CreationDate string `xml:"CreationDate"`
}{
Alias: (*Alias)(&r),
}
@@ -321,8 +335,8 @@ type CopyObjectResult struct {
func (r CopyObjectResult) MarshalXML(e *xml.Encoder, start xml.StartElement) error {
type Alias CopyObjectResult
aux := &struct {
LastModified string `xml:"LastModified"`
*Alias
LastModified string `xml:"LastModified"`
}{
Alias: (*Alias)(&r),
}
@@ -390,15 +404,15 @@ type ListVersionsResult struct {
}
type GetBucketVersioningOutput struct {
XMLName xml.Name `xml:"http://s3.amazonaws.com/doc/2006-03-01/ VersioningConfiguration" json:"-"`
MFADelete *types.MFADeleteStatus
Status *types.BucketVersioningStatus
XMLName xml.Name `xml:"http://s3.amazonaws.com/doc/2006-03-01/ VersioningConfiguration" json:"-"`
}
type PutObjectRetentionInput struct {
RetainUntilDate AmzDate
XMLName xml.Name `xml:"Retention"`
Mode types.ObjectLockRetentionMode
RetainUntilDate AmzDate
}
type AmzDate struct {

View File

@@ -204,7 +204,6 @@ func genErrorMessage(errorCode, errorMessage string) []byte {
type GetProgress func() (bytesScanned int64, bytesProcessed int64)
type MessageHandler struct {
sync.Mutex
ctx context.Context
cancel context.CancelFunc
writer *bufio.Writer
@@ -213,6 +212,7 @@ type MessageHandler struct {
stopCh chan bool
resetCh chan bool
bytesReturned int64
sync.Mutex
}
// NewMessageHandler creates a new MessageHandler instance and starts the event streaming

View File

@@ -28,4 +28,5 @@ PASSWORD_TWO=OPQRSTU
TEST_FILE_FOLDER=$PWD/versity-gwtest-files
REMOVE_TEST_FILE_FOLDER=true
VERSIONING_DIR=/tmp/versioning
COMMAND_LOG=command.log
COMMAND_LOG=command.log
TIME_LOG=time.log

View File

@@ -74,36 +74,12 @@ put_object_rest() {
log 2 "'put_object_rest' requires local file, bucket name, key"
return 1
fi
generate_hash_for_payload_file "$1"
current_date_time=$(date -u +"%Y%m%dT%H%M%SZ")
aws_endpoint_url_address=${AWS_ENDPOINT_URL#*//}
header=$(echo "$AWS_ENDPOINT_URL" | awk -F: '{print $1}')
# shellcheck disable=SC2154
canonical_request="PUT
/$2/$3
host:$aws_endpoint_url_address
x-amz-content-sha256:$payload_hash
x-amz-date:$current_date_time
host;x-amz-content-sha256;x-amz-date
$payload_hash"
if ! generate_sts_string "$current_date_time" "$canonical_request"; then
log 2 "error generating sts string"
if ! result=$(COMMAND_LOG="$COMMAND_LOG" DATA_FILE="$1" BUCKET_NAME="$2" OBJECT_KEY="$3" OUTPUT_FILE="$TEST_FILE_FOLDER/result.txt" ./tests/rest_scripts/put_object.sh); then
log 2 "error sending object file: $result"
return 1
fi
get_signature
# shellcheck disable=SC2154
reply=$(send_command curl -ks -w "%{http_code}" -X PUT "$header://$aws_endpoint_url_address/$2/$3" \
-H "Authorization: AWS4-HMAC-SHA256 Credential=$AWS_ACCESS_KEY_ID/$ymd/$AWS_REGION/s3/aws4_request,SignedHeaders=host;x-amz-content-sha256;x-amz-date,Signature=$signature" \
-H "x-amz-content-sha256: $payload_hash" \
-H "x-amz-date: $current_date_time" \
-T "$1" -o "$TEST_FILE_FOLDER"/put_object_error.txt 2>&1)
if [[ "$reply" != "200" ]]; then
log 2 "put object command returned error: $(cat "$TEST_FILE_FOLDER"/put_object_error.txt)"
if [ "$result" != "200" ]; then
log 2 "expected response code of '200', was '$result' (output: $(cat "$TEST_FILE_FOLDER/result.txt")"
return 1
fi
return 0

View File

@@ -31,3 +31,22 @@ upload_part() {
fi
export etag
}
upload_part_and_get_etag_rest() {
if [ $# -ne 5 ]; then
log 2 "'upload_part_rest' requires bucket name, key, part number, upload ID, part"
return 1
fi
if ! result=$(COMMAND_LOG="$COMMAND_LOG" BUCKET_NAME="$1" OBJECT_KEY="$2" PART_NUMBER="$4" UPLOAD_ID="$3" DATA_FILE="$5" OUTPUT_FILE="$TEST_FILE_FOLDER/etag.txt" ./tests/rest_scripts/upload_part.sh); then
log 2 "error sending upload-part REST command: $result"
return 1
fi
if [[ "$result" != "200" ]]; then
log 2 "upload-part command returned error $result: $(cat "$TEST_FILE_FOLDER/etag.txt")"
return 1
fi
log 5 "$(cat "$TEST_FILE_FOLDER/etag.txt")"
etag=$(grep -i "etag" "$TEST_FILE_FOLDER/etag.txt" | awk '{print $2}' | tr -d '\r')
log 5 "etag: $etag"
return 0
}
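For reference, the grep/awk pipeline above just strips the surrounding quotes off the ETag response header captured in etag.txt. A Go equivalent with hypothetical values:

package main

import (
	"fmt"
	"net/http"
	"strings"
)

func main() {
	// Stand-in for the UploadPart response headers captured in etag.txt.
	h := http.Header{}
	h.Set("ETag", `"0cc175b9c0f1b6a831c399e269772661"`)
	etag := strings.Trim(h.Get("ETag"), `"`)
	fmt.Println("etag:", etag)
}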

View File

@@ -27,7 +27,7 @@ services:
args:
- CONFIG_FILE=tests/.env.default
image: bats_test
command: ["s3api-non-policy"]
command: ["s3api-bucket,s3api-object"]
direct:
build:
dockerfile: tests/Dockerfile_direct

View File

@@ -26,9 +26,9 @@ import (
)
type prefResult struct {
err error
elapsed time.Duration
size int64
err error
}
func TestUpload(s *S3Conf, files int, objSize int64, bucket, prefix string) error {

View File

@@ -22,9 +22,9 @@ import (
)
type RReader struct {
hash hash.Hash
buf []byte
dataleft int
hash hash.Hash
}
func NewDataReader(totalsize, bufsize int) *RReader {

View File

@@ -82,6 +82,9 @@ func TestHeadBucket(s *S3Conf) {
func TestListBuckets(s *S3Conf) {
ListBuckets_as_user(s)
ListBuckets_as_admin(s)
ListBuckets_with_prefix(s)
ListBuckets_invalid_max_buckets(s)
ListBuckets_truncated(s)
ListBuckets_success(s)
}
@@ -113,6 +116,7 @@ func TestPutBucketTagging(s *S3Conf) {
PutBucketTagging_non_existing_bucket(s)
PutBucketTagging_long_tags(s)
PutBucketTagging_success(s)
PutBucketTagging_success_status(s)
}
func TestGetBucketTagging(s *S3Conf) {
@@ -154,6 +158,8 @@ func TestHeadObject(s *S3Conf) {
func TestGetObjectAttributes(s *S3Conf) {
GetObjectAttributes_non_existing_bucket(s)
GetObjectAttributes_non_existing_object(s)
GetObjectAttributes_invalid_attrs(s)
GetObjectAttributes_empty_attrs(s)
GetObjectAttributes_existing_object(s)
}
@@ -562,6 +568,7 @@ func TestVersioning(s *S3Conf) {
// HeadObject action
Versioning_HeadObject_invalid_versionId(s)
Versioning_HeadObject_success(s)
Versioning_HeadObject_without_versionId(s)
Versioning_HeadObject_delete_marker(s)
// GetObject action
Versioning_GetObject_invalid_versionId(s)
@@ -569,6 +576,9 @@ func TestVersioning(s *S3Conf) {
Versioning_GetObject_delete_marker_without_versionId(s)
Versioning_GetObject_delete_marker(s)
Versioning_GetObject_null_versionId_obj(s)
// GetObjectAttributes action
Versioning_GetObjectAttributes_object_version(s)
Versioning_GetObjectAttributes_delete_marker(s)
// DeleteObject(s) actions
Versioning_DeleteObject_delete_object_version(s)
Versioning_DeleteObject_non_existing_object(s)
@@ -676,6 +686,9 @@ func GetIntTests() IntTests {
"HeadBucket_success": HeadBucket_success,
"ListBuckets_as_user": ListBuckets_as_user,
"ListBuckets_as_admin": ListBuckets_as_admin,
"ListBuckets_with_prefix": ListBuckets_with_prefix,
"ListBuckets_invalid_max_buckets": ListBuckets_invalid_max_buckets,
"ListBuckets_truncated": ListBuckets_truncated,
"ListBuckets_success": ListBuckets_success,
"DeleteBucket_non_existing_bucket": DeleteBucket_non_existing_bucket,
"DeleteBucket_non_empty_bucket": DeleteBucket_non_empty_bucket,
@@ -692,6 +705,7 @@ func GetIntTests() IntTests {
"PutBucketTagging_non_existing_bucket": PutBucketTagging_non_existing_bucket,
"PutBucketTagging_long_tags": PutBucketTagging_long_tags,
"PutBucketTagging_success": PutBucketTagging_success,
"PutBucketTagging_success_status": PutBucketTagging_success_status,
"GetBucketTagging_non_existing_bucket": GetBucketTagging_non_existing_bucket,
"GetBucketTagging_unset_tags": GetBucketTagging_unset_tags,
"GetBucketTagging_success": GetBucketTagging_success,
@@ -714,6 +728,8 @@ func GetIntTests() IntTests {
"HeadObject_success": HeadObject_success,
"GetObjectAttributes_non_existing_bucket": GetObjectAttributes_non_existing_bucket,
"GetObjectAttributes_non_existing_object": GetObjectAttributes_non_existing_object,
"GetObjectAttributes_invalid_attrs": GetObjectAttributes_invalid_attrs,
"GetObjectAttributes_empty_attrs": GetObjectAttributes_empty_attrs,
"GetObjectAttributes_existing_object": GetObjectAttributes_existing_object,
"GetObject_non_existing_key": GetObject_non_existing_key,
"GetObject_directory_object_noslash": GetObject_directory_object_noslash,
@@ -954,12 +970,15 @@ func GetIntTests() IntTests {
"Versioning_CopyObject_special_chars": Versioning_CopyObject_special_chars,
"Versioning_HeadObject_invalid_versionId": Versioning_HeadObject_invalid_versionId,
"Versioning_HeadObject_success": Versioning_HeadObject_success,
"Versioning_HeadObject_without_versionId": Versioning_HeadObject_without_versionId,
"Versioning_HeadObject_delete_marker": Versioning_HeadObject_delete_marker,
"Versioning_GetObject_invalid_versionId": Versioning_GetObject_invalid_versionId,
"Versioning_GetObject_success": Versioning_GetObject_success,
"Versioning_GetObject_delete_marker_without_versionId": Versioning_GetObject_delete_marker_without_versionId,
"Versioning_GetObject_delete_marker": Versioning_GetObject_delete_marker,
"Versioning_GetObject_null_versionId_obj": Versioning_GetObject_null_versionId_obj,
"Versioning_GetObjectAttributes_object_version": Versioning_GetObjectAttributes_object_version,
"Versioning_GetObjectAttributes_delete_marker": Versioning_GetObjectAttributes_delete_marker,
"Versioning_DeleteObject_delete_object_version": Versioning_DeleteObject_delete_object_version,
"Versioning_DeleteObject_non_existing_object": Versioning_DeleteObject_non_existing_object,
"Versioning_DeleteObject_delete_a_delete_marker": Versioning_DeleteObject_delete_a_delete_marker,

View File

@@ -35,10 +35,10 @@ type S3Conf struct {
awsSecret string
awsRegion string
endpoint string
checksumDisable bool
pathStyle bool
PartSize int64
Concurrency int
checksumDisable bool
pathStyle bool
debug bool
versioningEnabled bool
azureTests bool

View File

@@ -35,7 +35,6 @@ import (
"github.com/aws/aws-sdk-go-v2/service/s3/types"
"github.com/versity/versitygw/backend"
"github.com/versity/versitygw/s3err"
"github.com/versity/versitygw/s3response"
"golang.org/x/sync/errgroup"
)
@@ -1967,7 +1966,7 @@ func HeadBucket_success(s *S3Conf) error {
func ListBuckets_as_user(s *S3Conf) error {
testName := "ListBuckets_as_user"
return actionHandler(s, testName, func(s3client *s3.Client, bucket string) error {
buckets := []s3response.ListAllMyBucketsEntry{{Name: bucket}}
buckets := []types.Bucket{{Name: &bucket}}
for i := 0; i < 6; i++ {
bckt := getBucketName()
@@ -1976,8 +1975,8 @@ func ListBuckets_as_user(s *S3Conf) error {
return err
}
buckets = append(buckets, s3response.ListAllMyBucketsEntry{
Name: bckt,
buckets = append(buckets, types.Bucket{
Name: &bckt,
})
}
usr := user{
@@ -1997,7 +1996,7 @@ func ListBuckets_as_user(s *S3Conf) error {
bckts := []string{}
for i := 0; i < 3; i++ {
bckts = append(bckts, buckets[i].Name)
bckts = append(bckts, *buckets[i].Name)
}
err = changeBucketsOwner(s, bckts, usr.access)
@@ -2017,12 +2016,12 @@ func ListBuckets_as_user(s *S3Conf) error {
if *out.Owner.ID != usr.access {
return fmt.Errorf("expected buckets owner to be %v, instead got %v", usr.access, *out.Owner.ID)
}
if ok := compareBuckets(out.Buckets, buckets[:3]); !ok {
if !compareBuckets(out.Buckets, buckets[:3]) {
return fmt.Errorf("expected list buckets result to be %v, instead got %v", buckets[:3], out.Buckets)
}
for _, elem := range buckets[1:] {
err = teardown(s, elem.Name)
err = teardown(s, *elem.Name)
if err != nil {
return err
}
@@ -2035,7 +2034,7 @@ func ListBuckets_as_user(s *S3Conf) error {
func ListBuckets_as_admin(s *S3Conf) error {
testName := "ListBuckets_as_admin"
return actionHandler(s, testName, func(s3client *s3.Client, bucket string) error {
buckets := []s3response.ListAllMyBucketsEntry{{Name: bucket}}
buckets := []types.Bucket{{Name: &bucket}}
for i := 0; i < 6; i++ {
bckt := getBucketName()
@@ -2044,8 +2043,8 @@ func ListBuckets_as_admin(s *S3Conf) error {
return err
}
buckets = append(buckets, s3response.ListAllMyBucketsEntry{
Name: bckt,
buckets = append(buckets, types.Bucket{
Name: &bckt,
})
}
usr := user{
@@ -2070,7 +2069,7 @@ func ListBuckets_as_admin(s *S3Conf) error {
bckts := []string{}
for i := 0; i < 3; i++ {
bckts = append(bckts, buckets[i].Name)
bckts = append(bckts, *buckets[i].Name)
}
err = changeBucketsOwner(s, bckts, usr.access)
@@ -2090,12 +2089,163 @@ func ListBuckets_as_admin(s *S3Conf) error {
if *out.Owner.ID != admin.access {
return fmt.Errorf("expected buckets owner to be %v, instead got %v", admin.access, *out.Owner.ID)
}
if ok := compareBuckets(out.Buckets, buckets); !ok {
if !compareBuckets(out.Buckets, buckets) {
return fmt.Errorf("expected list buckets result to be %v, instead got %v", buckets, out.Buckets)
}
for _, elem := range buckets[1:] {
err = teardown(s, elem.Name)
err = teardown(s, *elem.Name)
if err != nil {
return err
}
}
return nil
})
}
func ListBuckets_with_prefix(s *S3Conf) error {
testName := "ListBuckets_with_prefix"
return actionHandler(s, testName, func(s3client *s3.Client, bucket string) error {
prefix := "my-prefix-"
allBuckets, prefixedBuckets := []types.Bucket{{Name: &bucket}}, []types.Bucket{}
for i := 0; i < 5; i++ {
bckt := getBucketName()
if i%2 == 0 {
bckt = prefix + bckt
}
err := setup(s, bckt)
if err != nil {
return err
}
allBuckets = append(allBuckets, types.Bucket{
Name: &bckt,
})
if i%2 == 0 {
prefixedBuckets = append(prefixedBuckets, types.Bucket{
Name: &bckt,
})
}
}
ctx, cancel := context.WithTimeout(context.Background(), shortTimeout)
out, err := s3client.ListBuckets(ctx, &s3.ListBucketsInput{
Prefix: &prefix,
})
cancel()
if err != nil {
return err
}
if *out.Owner.ID != s.awsID {
return fmt.Errorf("expected owner to be %v, instead got %v", s.awsID, *out.Owner.ID)
}
if getString(out.Prefix) != prefix {
return fmt.Errorf("expected prefix to be %v, instead got %v", prefix, getString(out.Prefix))
}
if !compareBuckets(out.Buckets, prefixedBuckets) {
return fmt.Errorf("expected list buckets result to be %v, instead got %v", prefixedBuckets, out.Buckets)
}
for _, elem := range allBuckets[1:] {
err = teardown(s, *elem.Name)
if err != nil {
return err
}
}
return nil
})
}
func ListBuckets_invalid_max_buckets(s *S3Conf) error {
testName := "ListBuckets_invalid_max_buckets"
return actionHandler(s, testName, func(s3client *s3.Client, bucket string) error {
listBuckets := func(maxBuckets int32) error {
ctx, cancel := context.WithTimeout(context.Background(), shortTimeout)
_, err := s3client.ListBuckets(ctx, &s3.ListBucketsInput{
MaxBuckets: &maxBuckets,
})
cancel()
return err
}
invMaxBuckets := int32(-3)
err := listBuckets(invMaxBuckets)
if err := checkApiErr(err, s3err.GetAPIError(s3err.ErrInvalidMaxBuckets)); err != nil {
return err
}
invMaxBuckets = 2000000
err = listBuckets(invMaxBuckets)
if err := checkApiErr(err, s3err.GetAPIError(s3err.ErrInvalidMaxBuckets)); err != nil {
return err
}
return nil
})
}
func ListBuckets_truncated(s *S3Conf) error {
testName := "ListBuckets_truncated"
return actionHandler(s, testName, func(s3client *s3.Client, bucket string) error {
buckets := []types.Bucket{{Name: &bucket}}
for i := 0; i < 5; i++ {
bckt := getBucketName()
err := setup(s, bckt)
if err != nil {
return err
}
buckets = append(buckets, types.Bucket{
Name: &bckt,
})
}
maxBuckets := int32(3)
ctx, cancel := context.WithTimeout(context.Background(), shortTimeout)
out, err := s3client.ListBuckets(ctx, &s3.ListBucketsInput{
MaxBuckets: &maxBuckets,
})
cancel()
if err != nil {
return err
}
if *out.Owner.ID != s.awsID {
return fmt.Errorf("expected owner to be %v, instead got %v", s.awsID, *out.Owner.ID)
}
if !compareBuckets(out.Buckets, buckets[:maxBuckets]) {
return fmt.Errorf("expected list buckets result to be %v, instead got %v", buckets[:maxBuckets], out.Buckets)
}
if getString(out.ContinuationToken) != *buckets[maxBuckets-1].Name {
return fmt.Errorf("expected ContinuationToken to be %v, instead got %v", *buckets[maxBuckets-1].Name, getString(out.ContinuationToken))
}
ctx, cancel = context.WithTimeout(context.Background(), shortTimeout)
out, err = s3client.ListBuckets(ctx, &s3.ListBucketsInput{
ContinuationToken: out.ContinuationToken,
})
cancel()
if err != nil {
return err
}
if !compareBuckets(out.Buckets, buckets[maxBuckets:]) {
return fmt.Errorf("expected list buckets result to be %v, instead got %v", buckets[maxBuckets:], out.Buckets)
}
if out.ContinuationToken != nil {
return fmt.Errorf("expected nil continuation token, instead got %v", *out.ContinuationToken)
}
if out.Prefix != nil {
return fmt.Errorf("expected nil prefix, instead got %v", *out.Prefix)
}
for _, elem := range buckets[1:] {
err = teardown(s, *elem.Name)
if err != nil {
return err
}
@@ -2108,7 +2258,7 @@ func ListBuckets_as_admin(s *S3Conf) error {
func ListBuckets_success(s *S3Conf) error {
testName := "ListBuckets_success"
return actionHandler(s, testName, func(s3client *s3.Client, bucket string) error {
buckets := []s3response.ListAllMyBucketsEntry{{Name: bucket}}
buckets := []types.Bucket{{Name: &bucket}}
for i := 0; i < 5; i++ {
bckt := getBucketName()
@@ -2117,8 +2267,8 @@ func ListBuckets_success(s *S3Conf) error {
return err
}
buckets = append(buckets, s3response.ListAllMyBucketsEntry{
Name: bckt,
buckets = append(buckets, types.Bucket{
Name: &bckt,
})
}
@@ -2132,12 +2282,12 @@ func ListBuckets_success(s *S3Conf) error {
if *out.Owner.ID != s.awsID {
return fmt.Errorf("expected owner to be %v, instead got %v", s.awsID, *out.Owner.ID)
}
if ok := compareBuckets(out.Buckets, buckets); !ok {
if !compareBuckets(out.Buckets, buckets) {
return fmt.Errorf("expected list buckets result to be %v, instead got %v", buckets, out.Buckets)
}
for _, elem := range buckets[1:] {
err = teardown(s, elem.Name)
err = teardown(s, *elem.Name)
if err != nil {
return err
}
@@ -2522,6 +2672,45 @@ func PutBucketTagging_success(s *S3Conf) error {
})
}
func PutBucketTagging_success_status(s *S3Conf) error {
testName := "PutBucketTagging_success_status"
return actionHandler(s, testName, func(s3client *s3.Client, bucket string) error {
tagging := types.Tagging{
TagSet: []types.Tag{
{
Key: getPtr("key"),
Value: getPtr("val"),
},
},
}
taggingParsed, err := xml.Marshal(tagging)
if err != nil {
return fmt.Errorf("err parsing tagging: %w", err)
}
req, err := createSignedReq(http.MethodPut, s.endpoint, fmt.Sprintf("%v?tagging=", bucket), s.awsID, s.awsSecret, "s3", s.awsRegion, taggingParsed, time.Now(), nil)
if err != nil {
return fmt.Errorf("err signing the request: %w", err)
}
client := http.Client{
Timeout: shortTimeout,
}
resp, err := client.Do(req)
if err != nil {
return fmt.Errorf("err sending request: %w", err)
}
if resp.StatusCode != http.StatusNoContent {
return fmt.Errorf("expected the response status code to be %v, instad got %v", http.StatusNoContent, resp.StatusCode)
}
return nil
})
}
func GetBucketTagging_non_existing_bucket(s *S3Conf) error {
testName := "GetBucketTagging_non_existing_object"
return actionHandler(s, testName, func(s3client *s3.Client, bucket string) error {
@@ -3225,9 +3414,11 @@ func GetObjectAttributes_non_existing_object(s *S3Conf) error {
return actionHandler(s, testName, func(s3client *s3.Client, bucket string) error {
ctx, cancel := context.WithTimeout(context.Background(), shortTimeout)
_, err := s3client.GetObjectAttributes(ctx, &s3.GetObjectAttributesInput{
Bucket: &bucket,
Key: getPtr("my-obj"),
ObjectAttributes: []types.ObjectAttributes{},
Bucket: &bucket,
Key: getPtr("my-obj"),
ObjectAttributes: []types.ObjectAttributes{
types.ObjectAttributesEtag,
},
})
cancel()
if err := checkSdkApiErr(err, "NoSuchKey"); err != nil {
@@ -3238,6 +3429,57 @@ func GetObjectAttributes_non_existing_object(s *S3Conf) error {
})
}
func GetObjectAttributes_invalid_attrs(s *S3Conf) error {
testName := "GetObjectAttributes_invalid_attrs"
return actionHandler(s, testName, func(s3client *s3.Client, bucket string) error {
obj := "my-obj"
_, err := putObjects(s3client, []string{obj}, bucket)
if err != nil {
return err
}
ctx, cancel := context.WithTimeout(context.Background(), shortTimeout)
_, err = s3client.GetObjectAttributes(ctx, &s3.GetObjectAttributesInput{
Bucket: &bucket,
Key: &obj,
ObjectAttributes: []types.ObjectAttributes{
types.ObjectAttributesEtag,
types.ObjectAttributes("Invalid_argument"),
},
})
cancel()
if err := checkApiErr(err, s3err.GetAPIError(s3err.ErrInvalidObjectAttributes)); err != nil {
return err
}
return nil
})
}
func GetObjectAttributes_empty_attrs(s *S3Conf) error {
testName := "GetObjectAttributes_empty_attrs"
return actionHandler(s, testName, func(s3client *s3.Client, bucket string) error {
obj := "my-obj"
_, err := putObjects(s3client, []string{obj}, bucket)
if err != nil {
return err
}
ctx, cancel := context.WithTimeout(context.Background(), shortTimeout)
_, err = s3client.GetObjectAttributes(ctx, &s3.GetObjectAttributesInput{
Bucket: &bucket,
Key: &obj,
ObjectAttributes: []types.ObjectAttributes{},
})
cancel()
if err := checkApiErr(err, s3err.GetAPIError(s3err.ErrObjectAttributesInvalidHeader)); err != nil {
return err
}
return nil
})
}
func GetObjectAttributes_existing_object(s *S3Conf) error {
testName := "GetObjectAttributes_existing_object"
return actionHandler(s, testName, func(s3client *s3.Client, bucket string) error {
@@ -3288,11 +3530,14 @@ func GetObjectAttributes_existing_object(s *S3Conf) error {
return fmt.Errorf("expected object size to be %v, instead got %v", data_len, *out.ObjectSize)
}
if out.Checksum != nil {
return fmt.Errorf("expected checksum do be nil, instead got %v", *out.Checksum)
return fmt.Errorf("expected checksum to be nil, instead got %v", *out.Checksum)
}
if out.StorageClass != types.StorageClassStandard {
return fmt.Errorf("expected the storage class to be %v, instead got %v", types.StorageClassStandard, out.StorageClass)
}
if out.LastModified == nil {
return fmt.Errorf("expected non nil LastModified")
}
return nil
})
@@ -10698,19 +10943,6 @@ func PutObject_name_too_long(s *S3Conf) error {
})
}
// Versioning tests
func PutBucketVersioning_non_existing_bucket(s *S3Conf) error {
testName := "PutBucketVersioning_non_existing_bucket"
return actionHandler(s, testName, func(s3client *s3.Client, bucket string) error {
err := putBucketVersioningStatus(s3client, getBucketName(), types.BucketVersioningStatusSuspended)
if err := checkApiErr(err, s3err.GetAPIError(s3err.ErrNoSuchBucket)); err != nil {
return err
}
return nil
})
}
func HeadObject_name_too_long(s *S3Conf) error {
testName := "HeadObject_name_too_long"
return actionHandler(s, testName, func(s3client *s3.Client, bucket string) error {
@@ -10728,6 +10960,35 @@ func HeadObject_name_too_long(s *S3Conf) error {
})
}
func DeleteObject_name_too_long(s *S3Conf) error {
testName := "DeleteObject_name_too_long"
return actionHandler(s, testName, func(s3client *s3.Client, bucket string) error {
ctx, cancel := context.WithTimeout(context.Background(), shortTimeout)
_, err := s3client.DeleteObject(ctx, &s3.DeleteObjectInput{
Bucket: &bucket,
Key: getPtr(genRandString(300)),
})
cancel()
if err := checkApiErr(err, s3err.GetAPIError(s3err.ErrKeyTooLong)); err != nil {
return err
}
return nil
})
}
// Versioning tests
func PutBucketVersioning_non_existing_bucket(s *S3Conf) error {
testName := "PutBucketVersioning_non_existing_bucket"
return actionHandler(s, testName, func(s3client *s3.Client, bucket string) error {
err := putBucketVersioningStatus(s3client, getBucketName(), types.BucketVersioningStatusSuspended)
if err := checkApiErr(err, s3err.GetAPIError(s3err.ErrNoSuchBucket)); err != nil {
return err
}
return nil
})
}
func PutBucketVersioning_invalid_status(s *S3Conf) error {
testName := "PutBucketVersioning_invalid_status"
return actionHandler(s, testName, func(s3client *s3.Client, bucket string) error {
@@ -11255,22 +11516,6 @@ func Versioning_HeadObject_invalid_versionId(s *S3Conf) error {
})
}
func DeleteObject_name_too_long(s *S3Conf) error {
testName := "DeleteObject_name_too_long"
return actionHandler(s, testName, func(s3client *s3.Client, bucket string) error {
ctx, cancel := context.WithTimeout(context.Background(), shortTimeout)
_, err := s3client.DeleteObject(ctx, &s3.DeleteObjectInput{
Bucket: &bucket,
Key: getPtr(genRandString(300)),
})
cancel()
if err := checkApiErr(err, s3err.GetAPIError(s3err.ErrKeyTooLong)); err != nil {
return err
}
return nil
})
}
func Versioning_HeadObject_success(s *S3Conf) error {
testName := "Versioning_HeadObject_success"
return actionHandler(s, testName, func(s3client *s3.Client, bucket string) error {
@@ -11306,6 +11551,35 @@ func Versioning_HeadObject_success(s *S3Conf) error {
}, withVersioning(types.BucketVersioningStatusEnabled))
}
func Versioning_HeadObject_without_versionId(s *S3Conf) error {
testName := "Versioning_HeadObject_without_versionId"
return actionHandler(s, testName, func(s3client *s3.Client, bucket string) error {
obj := "my-obj"
versions, err := createObjVersions(s3client, bucket, obj, 3)
if err != nil {
return err
}
lastVersion := versions[0]
ctx, cancel := context.WithTimeout(context.Background(), shortTimeout)
res, err := s3client.HeadObject(ctx, &s3.HeadObjectInput{
Bucket: &bucket,
Key: &obj,
})
cancel()
if err != nil {
return err
}
if getString(res.VersionId) != *lastVersion.VersionId {
return fmt.Errorf("expected versionId to be %v, instead got %v", *lastVersion.VersionId, getString(res.VersionId))
}
return nil
}, withVersioning(types.BucketVersioningStatusEnabled))
}
func Versioning_HeadObject_delete_marker(s *S3Conf) error {
testName := "Versioning_HeadObject_delete_marker"
return actionHandler(s, testName, func(s3client *s3.Client, bucket string) error {
@@ -11577,6 +11851,97 @@ func Versioning_GetObject_null_versionId_obj(s *S3Conf) error {
})
}
func Versioning_GetObjectAttributes_object_version(s *S3Conf) error {
testName := "Versioning_GetObjectAttributes_object_version"
return actionHandler(s, testName, func(s3client *s3.Client, bucket string) error {
obj := "my-obj"
versions, err := createObjVersions(s3client, bucket, obj, 1)
if err != nil {
return err
}
version := versions[0]
getObjAttrs := func(versionId *string) (*s3.GetObjectAttributesOutput, error) {
ctx, cancel := context.WithTimeout(context.Background(), shortTimeout)
res, err := s3client.GetObjectAttributes(ctx, &s3.GetObjectAttributesInput{
Bucket: &bucket,
Key: &obj,
VersionId: versionId,
ObjectAttributes: []types.ObjectAttributes{
types.ObjectAttributesEtag,
},
})
cancel()
return res, err
}
// By specifying the versionId
res, err := getObjAttrs(version.VersionId)
if err != nil {
return err
}
if getString(res.ETag) != *version.ETag {
return fmt.Errorf("expected the uploaded object ETag to be %v, instead got %v", *version.ETag, getString(res.ETag))
}
if getString(res.VersionId) != *version.VersionId {
return fmt.Errorf("expected the uploaded versionId to be %v, instead got %v", *version.VersionId, getString(res.VersionId))
}
// Without versionId
res, err = getObjAttrs(nil)
if err != nil {
return err
}
if getString(res.ETag) != *version.ETag {
return fmt.Errorf("expected the uploaded object ETag to be %v, instead got %v", *version.ETag, getString(res.ETag))
}
if getString(res.VersionId) != *version.VersionId {
return fmt.Errorf("expected the uploaded object versionId to be %v, instead got %v", *version.VersionId, getString(res.VersionId))
}
return nil
}, withVersioning(types.BucketVersioningStatusEnabled))
}
func Versioning_GetObjectAttributes_delete_marker(s *S3Conf) error {
testName := "Versioning_GetObjectAttributes_delete_marker"
return actionHandler(s, testName, func(s3client *s3.Client, bucket string) error {
obj := "my-obj"
_, err := createObjVersions(s3client, bucket, obj, 1)
if err != nil {
return err
}
ctx, cancel := context.WithTimeout(context.Background(), shortTimeout)
res, err := s3client.DeleteObject(ctx, &s3.DeleteObjectInput{
Bucket: &bucket,
Key: &obj,
})
cancel()
if err != nil {
return err
}
ctx, cancel = context.WithTimeout(context.Background(), shortTimeout)
_, err = s3client.GetObjectAttributes(ctx, &s3.GetObjectAttributesInput{
Bucket: &bucket,
Key: &obj,
VersionId: res.VersionId,
ObjectAttributes: []types.ObjectAttributes{
types.ObjectAttributesEtag,
},
})
cancel()
if err := checkSdkApiErr(err, "NoSuchKey"); err != nil {
return err
}
return nil
}, withVersioning(types.BucketVersioningStatusEnabled))
}
func Versioning_DeleteObject_delete_object_version(s *S3Conf) error {
testName := "Versioning_DeleteObject_delete_object_version"
return actionHandler(s, testName, func(s3client *s3.Client, bucket string) error {

View File

@@ -39,7 +39,6 @@ import (
"github.com/aws/aws-sdk-go-v2/service/s3/types"
"github.com/aws/smithy-go"
"github.com/versity/versitygw/s3err"
"github.com/versity/versitygw/s3response"
)
var (
@@ -169,9 +168,9 @@ func teardown(s *S3Conf, bucket string) error {
}
type setupCfg struct {
LockEnabled bool
VersioningStatus types.BucketVersioningStatus
Ownership types.ObjectOwnership
LockEnabled bool
}
type setupOpt func(*setupCfg)
@@ -215,12 +214,12 @@ func actionHandler(s *S3Conf, testName string, handler func(s3client *s3.Client,
}
type authConfig struct {
date time.Time
testName string
path string
method string
body []byte
service string
date time.Time
body []byte
}
func authHandler(s *S3Conf, cfg *authConfig, handler func(req *http.Request) error) error {
@@ -435,9 +434,9 @@ func contains(s []string, e string) bool {
}
type putObjectOutput struct {
csum [32]byte
data []byte
res *s3.PutObjectOutput
data []byte
csum [32]byte
}
func putObjectWithData(lgth int64, input *s3.PutObjectInput, client *s3.Client) (*putObjectOutput, error) {
@@ -593,19 +592,13 @@ func areMapsSame(mp1, mp2 map[string]string) bool {
return true
}
func compareBuckets(list1 []types.Bucket, list2 []s3response.ListAllMyBucketsEntry) bool {
func compareBuckets(list1 []types.Bucket, list2 []types.Bucket) bool {
if len(list1) != len(list2) {
return false
}
elementMap := make(map[string]bool)
for _, elem := range list1 {
elementMap[*elem.Name] = true
}
for _, elem := range list2 {
if _, found := elementMap[elem.Name]; !found {
for i, elem := range list1 {
if *elem.Name != *list2[i].Name {
return false
}
}

View File

@@ -39,7 +39,7 @@ UNSIGNED-PAYLOAD"
create_canonical_hash_sts_and_signature
curl_command+=(curl -ks -w "\"%{http_code}\"" -X DELETE "https://$host/$bucket_name/$key?uploadId=$upload_id"
curl_command+=(curl -ks -w "\"%{http_code}\"" -X DELETE "$AWS_ENDPOINT_URL/$bucket_name/$key?uploadId=$upload_id"
-H "\"Authorization: AWS4-HMAC-SHA256 Credential=$aws_access_key_id/$year_month_day/$aws_region/s3/aws4_request,SignedHeaders=host;x-amz-content-sha256;x-amz-date,Signature=$signature\""
-H "\"x-amz-content-sha256: UNSIGNED-PAYLOAD\""
-H "\"x-amz-date: $current_date_time\""

View File

@@ -0,0 +1,54 @@
#!/usr/bin/env bash
# Copyright 2024 Versity Software
# This file is licensed under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance
# with the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing,
# software distributed under the License is distributed on an
# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
# KIND, either express or implied. See the License for the
# specific language governing permissions and limitations
# under the License.
source ./tests/rest_scripts/rest.sh
# Fields
# shellcheck disable=SC2153
bucket_name="$BUCKET_NAME"
# shellcheck disable=SC2153
key="$OBJECT_KEY"
# shellcheck disable=SC2153,SC2034
upload_id="$UPLOAD_ID"
# shellcheck disable=SC2153
parts="$PARTS"
payload="<CompleteMultipartUpload xmlns=\"http://s3.amazonaws.com/doc/2006-03-01/\">$parts</CompleteMultipartUpload>"
payload_hash="$(echo -n "$payload" | sha256sum | awk '{print $1}')"
current_date_time=$(date -u +"%Y%m%dT%H%M%SZ")
canonical_request="POST
/$bucket_name/$key
uploadId=$UPLOAD_ID
host:$host
x-amz-content-sha256:$payload_hash
x-amz-date:$current_date_time
host;x-amz-content-sha256;x-amz-date
$payload_hash"
create_canonical_hash_sts_and_signature
curl_command+=(curl -ks -w "\"%{http_code}\"" -X POST "$AWS_ENDPOINT_URL/$bucket_name/$key?uploadId=$upload_id"
-H "\"Authorization: AWS4-HMAC-SHA256 Credential=$aws_access_key_id/$year_month_day/$aws_region/s3/aws4_request,SignedHeaders=host;x-amz-content-sha256;x-amz-date,Signature=$signature\""
-H "\"x-amz-content-sha256: $payload_hash\""
-H "\"x-amz-date: $current_date_time\""
-H "\"Content-Type: application/xml\""
-d "\"${payload//\"/\\\"}\""
-o "$OUTPUT_FILE")
# shellcheck disable=SC2154
eval "${curl_command[*]}" 2>&1

View File

@@ -0,0 +1,43 @@
#!/usr/bin/env bash
# Copyright 2024 Versity Software
# This file is licensed under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance
# with the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing,
# software distributed under the License is distributed on an
# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
# KIND, either express or implied. See the License for the
# specific language governing permissions and limitations
# under the License.
source ./tests/rest_scripts/rest.sh
# Fields
# shellcheck disable=SC2153
bucket_name="$BUCKET_NAME"
current_date_time=$(date -u +"%Y%m%dT%H%M%SZ")
canonical_request="GET
/$bucket_name
tagging=
host:$host
x-amz-content-sha256:UNSIGNED-PAYLOAD
x-amz-date:$current_date_time
host;x-amz-content-sha256;x-amz-date
UNSIGNED-PAYLOAD"
create_canonical_hash_sts_and_signature
curl_command+=(curl -ks -w "\"%{http_code}\"" "$AWS_ENDPOINT_URL/$bucket_name?tagging="
-H "\"Authorization: AWS4-HMAC-SHA256 Credential=$aws_access_key_id/$year_month_day/$aws_region/s3/aws4_request,SignedHeaders=host;x-amz-content-sha256;x-amz-date,Signature=$signature\""
-H "\"x-amz-content-sha256: UNSIGNED-PAYLOAD\""
-H "\"x-amz-date: $current_date_time\""
-o "$OUTPUT_FILE")
# shellcheck disable=SC2154
eval "${curl_command[*]}" 2>&1

View File

@@ -0,0 +1,53 @@
#!/usr/bin/env bash
# Copyright 2024 Versity Software
# This file is licensed under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance
# with the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing,
# software distributed under the License is distributed on an
# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
# KIND, either express or implied. See the License for the
# specific language governing permissions and limitations
# under the License.
source ./tests/rest_scripts/rest.sh
# Fields
# shellcheck disable=SC2153
bucket_name="$BUCKET_NAME"
# shellcheck disable=SC2154
key="$OBJECT_KEY"
# shellcheck disable=SC2153,SC2154
attributes="$ATTRIBUTES"
current_date_time=$(date -u +"%Y%m%dT%H%M%SZ")
#x-amz-object-attributes:ETag
canonical_request="GET
/$bucket_name/$key
attributes=
host:$host
x-amz-content-sha256:UNSIGNED-PAYLOAD
x-amz-date:$current_date_time
x-amz-object-attributes:$attributes
host;x-amz-content-sha256;x-amz-date;x-amz-object-attributes
UNSIGNED-PAYLOAD"
create_canonical_hash_sts_and_signature
curl_command+=(curl -ks -w "\"%{http_code}\"" "$AWS_ENDPOINT_URL/$bucket_name/$key?attributes="
-H "\"Authorization: AWS4-HMAC-SHA256 Credential=$aws_access_key_id/$year_month_day/$aws_region/s3/aws4_request,SignedHeaders=host;x-amz-content-sha256;x-amz-date;x-amz-object-attributes,Signature=$signature\""
-H "\"x-amz-content-sha256: UNSIGNED-PAYLOAD\""
-H "\"x-amz-date: $current_date_time\""
-H "\"x-amz-object-attributes: $attributes\""
-o "$OUTPUT_FILE")
# shellcheck disable=SC2154
eval "${curl_command[*]}" 2>&1

View File

@@ -0,0 +1,47 @@
#!/usr/bin/env bash
# Copyright 2024 Versity Software
# This file is licensed under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance
# with the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing,
# software distributed under the License is distributed on an
# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
# KIND, either express or implied. See the License for the
# specific language governing permissions and limitations
# under the License.
source ./tests/rest_scripts/rest.sh
# Fields
# shellcheck disable=SC2153
bucket_name="$BUCKET_NAME"
# shellcheck disable=SC2153,SC2034
upload_id="$UPLOAD_ID"
current_date_time=$(date -u +"%Y%m%dT%H%M%SZ")
# shellcheck disable=SC2034
canonical_request="GET
/$bucket_name/
uploads=
host:$host
x-amz-content-sha256:UNSIGNED-PAYLOAD
x-amz-date:$current_date_time
host;x-amz-content-sha256;x-amz-date
UNSIGNED-PAYLOAD"
create_canonical_hash_sts_and_signature
curl_command+=(curl -ks -w "\"%{http_code}\"" "https://$host/$bucket_name/$key?uploads="
-H "\"Authorization: AWS4-HMAC-SHA256 Credential=$aws_access_key_id/$year_month_day/$aws_region/s3/aws4_request,SignedHeaders=host;x-amz-content-sha256;x-amz-date,Signature=$signature\""
-H "\"x-amz-content-sha256: UNSIGNED-PAYLOAD\""
-H "\"x-amz-date: $current_date_time\""
-o "$OUTPUT_FILE")
# shellcheck disable=SC2154
eval "${curl_command[*]}" 2>&1

View File

@@ -0,0 +1,64 @@
#!/usr/bin/env bash
# Copyright 2024 Versity Software
# This file is licensed under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance
# with the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing,
# software distributed under the License is distributed on an
# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
# KIND, either express or implied. See the License for the
# specific language governing permissions and limitations
# under the License.
source ./tests/rest_scripts/rest.sh
# Fields
# shellcheck disable=SC2153
bucket_name="$BUCKET_NAME"
# shellcheck disable=SC2153
key="$TAG_KEY"
# shellcheck disable=SC2153
value="$TAG_VALUE"
payload="<?xml version=\"1.0\" encoding=\"UTF-8\"?>
<Tagging xmlns=\"http://s3.amazonaws.com/doc/2006-03-01/\">
<TagSet>
<Tag>
<Key>$key</Key>
<Value>$value</Value>
</Tag>
</TagSet>
</Tagging>"
content_md5=$(echo -n "$payload" | openssl dgst -binary -md5 | openssl base64)
payload_hash="$(echo -n "$payload" | sha256sum | awk '{print $1}')"
current_date_time=$(date -u +"%Y%m%dT%H%M%SZ")
canonical_request="PUT
/$bucket_name
tagging=
content-md5:$content_md5
host:$host
x-amz-content-sha256:$payload_hash
x-amz-date:$current_date_time
content-md5;host;x-amz-content-sha256;x-amz-date
$payload_hash"
create_canonical_hash_sts_and_signature
curl_command+=(curl -ks -w "\"%{http_code}\"" -X PUT "$AWS_ENDPOINT_URL/$bucket_name?tagging="
-H "\"Authorization: AWS4-HMAC-SHA256 Credential=$aws_access_key_id/$year_month_day/$aws_region/s3/aws4_request,SignedHeaders=content-md5;host;x-amz-content-sha256;x-amz-date,Signature=$signature\""
-H "\"Content-MD5: $content_md5\""
-H "\"x-amz-content-sha256: $payload_hash\""
-H "\"x-amz-date: $current_date_time\""
-d "\"${payload//\"/\\\"}\""
-o "$OUTPUT_FILE")
# shellcheck disable=SC2154
eval "${curl_command[*]}" 2>&1

View File

@@ -0,0 +1,72 @@
#!/usr/bin/env bash
# Copyright 2024 Versity Software
# This file is licensed under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance
# with the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing,
# software distributed under the License is distributed on an
# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
# KIND, either express or implied. See the License for the
# specific language governing permissions and limitations
# under the License.
source ./tests/rest_scripts/rest.sh
# Fields
# shellcheck disable=SC2153
data_file="$DATA_FILE"
# shellcheck disable=SC2153
bucket_name="$BUCKET_NAME"
# shellcheck disable=SC2153
key="$OBJECT_KEY"
# shellcheck disable=SC2153,SC2154
checksum="$CHECKSUM"
current_date_time=$(date -u +"%Y%m%dT%H%M%SZ")
payload_hash="$(sha256sum "$data_file" | awk '{print $1}')"
checksum_hash="$(echo -n "$payload_hash" | xxd -r -p | base64)"
if [ "$CHECKSUM" == "true" ]; then
canonical_request="PUT
/$bucket_name/$key
host:$host
x-amz-checksum-sha256:$checksum_hash
x-amz-content-sha256:$payload_hash
x-amz-date:$current_date_time
host;x-amz-checksum-sha256;x-amz-content-sha256;x-amz-date
$payload_hash"
else
canonical_request="PUT
/$bucket_name/$key
host:$host
x-amz-content-sha256:$payload_hash
x-amz-date:$current_date_time
host;x-amz-content-sha256;x-amz-date
$payload_hash"
fi
create_canonical_hash_sts_and_signature
curl_command+=(curl -ks -w "\"%{http_code}\"" -X PUT "$AWS_ENDPOINT_URL/$bucket_name/$key")
if [ "$CHECKSUM" == "true" ]; then
curl_command+=(-H "\"Authorization: AWS4-HMAC-SHA256 Credential=$aws_access_key_id/$year_month_day/$aws_region/s3/aws4_request,SignedHeaders=host;x-amz-content-sha256;x-amz-date;x-amz-checksum-sha256,Signature=$signature\"")
else
curl_command+=(-H "\"Authorization: AWS4-HMAC-SHA256 Credential=$aws_access_key_id/$year_month_day/$aws_region/s3/aws4_request,SignedHeaders=host;x-amz-content-sha256;x-amz-date,Signature=$signature\"")
fi
curl_command+=(-H "\"x-amz-content-sha256: $payload_hash\""
-H "\"x-amz-date: $current_date_time\"")
if [ "$checksum" == "true" ]; then
curl_command+=(-H "\"x-amz-checksum-sha256: $checksum_hash\"")
fi
curl_command+=(-T "$data_file" -o "$OUTPUT_FILE")
# shellcheck disable=SC2154
eval "${curl_command[*]}" 2>&1

View File

@@ -0,0 +1,58 @@
#!/usr/bin/env bash
# Copyright 2024 Versity Software
# This file is licensed under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance
# with the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing,
# software distributed under the License is distributed on an
# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
# KIND, either express or implied. See the License for the
# specific language governing permissions and limitations
# under the License.
source ./tests/rest_scripts/rest.sh
# shellcheck disable=SC2153
bucket_name="$BUCKET_NAME"
# shellcheck disable=SC2153
key="$OBJECT_KEY"
# shellcheck disable=SC2153
part_number="$PART_NUMBER"
# shellcheck disable=SC2153
upload_id="$UPLOAD_ID"
# shellcheck disable=SC2153
data=$DATA_FILE
payload_hash="$(sha256sum "$data" | awk '{print $1}')"
current_date_time=$(date -u +"%Y%m%dT%H%M%SZ")
aws_endpoint_url_address=${AWS_ENDPOINT_URL#*//}
# shellcheck disable=SC2034
header=$(echo "$AWS_ENDPOINT_URL" | awk -F: '{print $1}')
# shellcheck disable=SC2154
canonical_request="PUT
/$bucket_name/$key
partNumber=$part_number&uploadId=$upload_id
host:$aws_endpoint_url_address
x-amz-content-sha256:$payload_hash
x-amz-date:$current_date_time
host;x-amz-content-sha256;x-amz-date
$payload_hash"
create_canonical_hash_sts_and_signature
sleep 5
curl_command+=(curl -isk -w "\"%{http_code}\"" "\"$AWS_ENDPOINT_URL/$bucket_name/$key?partNumber=$part_number&uploadId=$upload_id\""
-H "\"Authorization: AWS4-HMAC-SHA256 Credential=$aws_access_key_id/$year_month_day/$aws_region/s3/aws4_request,SignedHeaders=host;x-amz-content-sha256;x-amz-date,Signature=$signature\""
-H "\"x-amz-content-sha256: $payload_hash\""
-H "\"x-amz-date: $current_date_time\""
-o "\"$OUTPUT_FILE\""
-T "\"$data\"")
# shellcheck disable=SC2154
eval "${curl_command[*]}" 2>&1

View File

@@ -19,14 +19,21 @@ show_help() {
echo "Usage: $0 [option...]"
echo " -h, --help Display this help message and exit"
echo " Separate the below by comma"
echo " s3api Run tests with s3api cli"
echo " s3api-non-policy Run policy tests with s3api cli"
echo " s3api Run all tests with s3api cli"
echo " s3api-multipart Run multipart tests with s3api cli"
echo " s3api-bucket Run bucket tests with s3api cli"
echo " s3api-object Run object tests with s3api cli"
echo " s3api-policy Run policy tests with s3api cli"
echo " s3api-user Run user tests with s3api cli"
echo " s3 Run tests with s3 cli"
echo " s3cmd Run tests with s3cmd utility"
echo " s3cmd-user Run user tests with s3cmd utility"
echo " s3cmd-non-user Run non-user tests with s3cmd utility"
echo " s3cmd-file-count Run file count test with s3cmd utility"
echo " mc Run tests with mc utility"
echo " mc-non-file-count Run non-file count tests with mc utility"
echo " mc-file-count Run file count test with mc utility"
echo " rest Run tests with rest cli"
echo " s3api-user Run user tests with aws cli"
}
handle_param() {
@@ -35,7 +42,7 @@ handle_param() {
show_help
exit 0
;;
s3|s3api|s3cmd|mc|s3api-user|rest|s3api-policy|s3api-non-policy)
s3|s3-file-count|s3-non-file-count|s3api|s3cmd|s3cmd-user|s3cmd-non-user|s3cmd-file-count|mc|mc-non-file-count|mc-file-count|s3api-user|rest|s3api-policy|s3api-bucket|s3api-object|s3api-multipart)
run_suite "$1"
;;
*) # Handle unrecognized options or positional arguments
@@ -50,25 +57,50 @@ run_suite() {
case $1 in
s3api)
echo "Running all s3api tests ..."
"$HOME"/bin/bats ./tests/test_s3api.sh || exit_code=$?
"$HOME"/bin/bats ./tests/test_s3api_bucket.sh || exit_code=$?
if [[ $exit_code -eq 0 ]]; then
"$HOME"/bin/bats ./tests/test_s3api_object.sh || exit_code=$?
fi
if [[ $exit_code -eq 0 ]]; then
"$HOME"/bin/bats ./tests/test_s3api_policy.sh || exit_code=$?
fi
if [[ $exit_code -eq 0 ]]; then
"$HOME"/bin/bats ./tests/test_s3api_multipart.sh || exit_code=$?
fi
if [[ $exit_code -eq 0 ]]; then
"$HOME"/bin/bats ./tests/test_user_aws.sh || exit_code=$?
fi
;;
s3api-multipart)
echo "Running s3api multipart tests ..."
"$HOME"/bin/bats ./tests/test_s3api_multipart.sh || exit_code=$?
;;
s3api-policy)
echo "Running s3api policy tests ..."
"$HOME"/bin/bats ./tests/test_s3api_policy.sh || exit_code=$?
;;
s3api-non-policy)
echo "Running s3api non-policy tests ..."
"$HOME"/bin/bats ./tests/test_s3api.sh || exit_code=$?
s3api-bucket)
echo "Running s3api bucket tests ..."
"$HOME"/bin/bats ./tests/test_s3api_bucket.sh || exit_code=$?
;;
s3api-object)
echo "Running s3api object tests ..."
"$HOME"/bin/bats ./tests/test_s3api_object.sh || exit_code=$?
;;
s3)
echo "Running s3 tests ..."
"$HOME"/bin/bats ./tests/test_s3.sh || exit_code=$?
if [[ $exit_code -eq 0 ]]; then
"$HOME"/bin/bats ./tests/test_s3_file_count.sh || exit_code=$?
fi
;;
s3-non-file-count)
echo "Running s3 non-file count tests ..."
"$HOME"/bin/bats ./tests/test_s3.sh || exit_code=$?
;;
s3-file-count)
echo "Running s3 file count test ..."
"$HOME"/bin/bats ./tests/test_s3_file_count.sh || exit_code=$?
;;
s3cmd)
echo "Running s3cmd tests ..."
@@ -76,10 +108,36 @@ run_suite() {
if [[ $exit_code -eq 0 ]]; then
"$HOME"/bin/bats ./tests/test_user_s3cmd.sh || exit_code=$?
fi
if [[ $exit_code -eq 0 ]]; then
"$HOME"/bin/bats ./tests/test_s3cmd_file_count.sh || exit_code=$?
fi
;;
s3cmd-user)
echo "Running s3cmd user tests ..."
"$HOME"/bin/bats ./tests/test_user_s3cmd.sh || exit_code=$?
;;
s3cmd-non-user)
echo "Running s3cmd non-user tests ..."
"$HOME"/bin/bats ./tests/test_s3cmd.sh || exit_code=$?
;;
s3cmd-file-count)
echo "Running s3cmd file count test ..."
"$HOME"/bin/bats ./tests/test_s3cmd_file_count.sh || exit_code=$?
;;
mc)
echo "Running mc tests ..."
"$HOME"/bin/bats ./tests/test_mc.sh || exit_code=$?
if [[ $exit_code -eq 0 ]]; then
"$HOME"/bin/bats ./tests/test_mc_file_count.sh || exit_code=$?
fi
;;
mc-non-file-count)
echo "Running mc non-file count tests ..."
"$HOME"/bin/bats ./tests/test_mc.sh || exit_code=$?
;;
mc-file-count)
echo "Running mc file count test ..."
"$HOME"/bin/bats ./tests/test_mc_file_count.sh || exit_code=$?
;;
rest)
echo "Running rest tests ..."

View File

@@ -28,7 +28,7 @@ setup() {
base_setup
log 4 "Running test $BATS_TEST_NAME"
if [[ $LOG_LEVEL -ge 5 ]]; then
if [[ $LOG_LEVEL -ge 5 ]] || [[ -n "$TIME_LOG" ]]; then
start_time=$(date +%s)
export start_time
fi
@@ -62,10 +62,10 @@ teardown() {
echo "**********************************************************************************"
fi
# shellcheck disable=SC2154
if ! delete_bucket_or_contents_if_exists "s3api" "$BUCKET_ONE_NAME"; then
if ! bucket_cleanup_if_bucket_exists "s3api" "$BUCKET_ONE_NAME"; then
log 3 "error deleting bucket $BUCKET_ONE_NAME or contents"
fi
if ! delete_bucket_or_contents_if_exists "s3api" "$BUCKET_TWO_NAME"; then
if ! bucket_cleanup_if_bucket_exists "s3api" "$BUCKET_TWO_NAME"; then
log 3 "error deleting bucket $BUCKET_TWO_NAME or contents"
fi
if [ "$REMOVE_TEST_FILE_FOLDER" == "true" ]; then
@@ -75,9 +75,13 @@ teardown() {
fi
fi
stop_versity
if [[ $LOG_LEVEL -ge 5 ]]; then
if [[ $LOG_LEVEL -ge 5 ]] || [[ -n "$TIME_LOG" ]]; then
end_time=$(date +%s)
log 4 "Total test time: $((end_time - start_time))"
total_time=$((end_time - start_time))
log 4 "Total test time: $total_time"
if [[ -n "$TIME_LOG" ]]; then
echo "$BATS_TEST_NAME: ${total_time}s" >> "$TIME_LOG"
fi
fi
if [[ -n "$COVERAGE_DB" ]]; then
record_result

View File

@@ -87,7 +87,7 @@ test_create_multipart_upload_properties_aws_root() {
run dd if=/dev/urandom of="$TEST_FILE_FOLDER/$bucket_file" bs=5M count=1
assert_success
run delete_bucket_or_contents_if_exists "s3api" "$BUCKET_ONE_NAME"
run bucket_cleanup_if_bucket_exists "s3api" "$BUCKET_ONE_NAME"
assert_success
# in static bucket config, bucket will still exist
if ! bucket_exists "s3api" "$BUCKET_ONE_NAME"; then
@@ -360,7 +360,7 @@ test_retention_bypass_aws_root() {
legal_hold_retention_setup() {
assert [ $# -eq 3 ]
run delete_bucket_or_contents_if_exists "s3api" "$BUCKET_ONE_NAME"
run bucket_cleanup_if_bucket_exists "s3api" "$BUCKET_ONE_NAME"
assert_success
run setup_user "$1" "$2" "user"

View File

@@ -65,7 +65,7 @@ test_common_multipart_upload() {
run download_and_compare_file "$1" "$TEST_FILE_FOLDER/$bucket_file" "$BUCKET_ONE_NAME" "$bucket_file" "$TEST_FILE_FOLDER/$bucket_file-copy"
assert_success
delete_bucket_or_contents "$1" "$BUCKET_ONE_NAME"
bucket_cleanup "$1" "$BUCKET_ONE_NAME"
delete_test_files $bucket_file
}
@@ -85,7 +85,7 @@ test_common_create_delete_bucket() {
run bucket_exists "$1" "$BUCKET_ONE_NAME"
assert_success
run delete_bucket_or_contents "$1" "$BUCKET_ONE_NAME"
run bucket_cleanup "$1" "$BUCKET_ONE_NAME"
assert_success
}

View File

@@ -112,10 +112,6 @@ export RUN_MC=true
test_common_presigned_url_utf8_chars "mc"
}
@test "test_list_objects_file_count" {
test_common_list_objects_file_count "mc"
}
@test "test_create_bucket_invalid_name_mc" {
if [[ $RECREATE_BUCKETS != "true" ]]; then
return

23
tests/test_mc_file_count.sh Executable file
View File

@@ -0,0 +1,23 @@
#!/usr/bin/env bats
# Copyright 2024 Versity Software
# This file is licensed under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance
# with the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing,
# software distributed under the License is distributed on an
# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
# KIND, either express or implied. See the License for the
# specific language governing permissions and limitations
# under the License.
source ./tests/test_common.sh
export RUN_MC=true
@test "test_list_objects_file_count" {
test_common_list_objects_file_count "mc"
}

View File

@@ -29,9 +29,11 @@ source ./tests/commands/put_object_tagging.sh
source ./tests/logger.sh
source ./tests/setup.sh
source ./tests/util.sh
source ./tests/util_attributes.sh
source ./tests/util_legal_hold.sh
source ./tests/util_list_buckets.sh
source ./tests/util_list_objects.sh
source ./tests/util_list_parts.sh
source ./tests/util_lock_config.sh
source ./tests/util_rest.sh
source ./tests/util_tags.sh
@@ -117,7 +119,7 @@ source ./tests/util_versioning.sh
test_key="TestKey"
test_value="TestValue"
run delete_bucket_or_contents_if_exists "s3api" "$BUCKET_ONE_NAME"
run bucket_cleanup_if_bucket_exists "s3api" "$BUCKET_ONE_NAME"
assert_success
# in static bucket config, bucket will still exist
if ! bucket_exists "s3api" "$BUCKET_ONE_NAME"; then
@@ -169,7 +171,7 @@ source ./tests/util_versioning.sh
run check_no_object_lock_config_rest "$BUCKET_ONE_NAME"
assert_success
run delete_bucket_or_contents_if_exists "s3api" "$BUCKET_ONE_NAME"
run bucket_cleanup_if_bucket_exists "s3api" "$BUCKET_ONE_NAME"
assert_success
# in static bucket config, bucket will still exist
@@ -291,3 +293,105 @@ source ./tests/util_versioning.sh
run create_abort_multipart_upload_rest "$BUCKET_ONE_NAME" "$test_file"
assert_success
}
@test "REST - multipart upload create, list parts" {
test_file="test_file"
run create_large_file "$test_file"
assert_success
run split_file "$TEST_FILE_FOLDER/$test_file" 4
assert_success
run setup_bucket "s3api" "$BUCKET_ONE_NAME"
assert_success
run upload_check_parts "$BUCKET_ONE_NAME" "$test_file" \
"$TEST_FILE_FOLDER/$test_file-0" "$TEST_FILE_FOLDER/$test_file-1" "$TEST_FILE_FOLDER/$test_file-2" "$TEST_FILE_FOLDER/$test_file-3"
assert_success
run get_object "s3api" "$BUCKET_ONE_NAME" "$test_file" "$TEST_FILE_FOLDER/$test_file-copy"
assert_success
run compare_files "$TEST_FILE_FOLDER/$test_file" "$TEST_FILE_FOLDER/$test_file-copy"
assert_success
}
@test "REST - get object attributes" {
if [ "$DIRECT" != "true" ]; then
skip "https://github.com/versity/versitygw/issues/916"
fi
test_file="test_file"
run setup_bucket "s3api" "$BUCKET_ONE_NAME"
assert_success
run create_large_file "$test_file"
assert_success
# shellcheck disable=SC2034
file_size=$(stat -c %s "$TEST_FILE_FOLDER/$test_file" 2>/dev/null || stat -f %z "$TEST_FILE_FOLDER/$test_file" 2>/dev/null)
run split_file "$TEST_FILE_FOLDER/$test_file" 4
assert_success
run upload_and_check_attributes "$test_file" "$file_size"
assert_success
}
@test "REST - attributes - invalid param" {
if [ "$DIRECT" != "true" ]; then
skip "https://github.com/versity/versitygw/issues/917"
fi
test_file="test_file"
run setup_bucket "s3api" "$BUCKET_ONE_NAME"
assert_success
run create_test_file "$test_file"
assert_success
run put_object "s3api" "$TEST_FILE_FOLDER/$test_file" "$BUCKET_ONE_NAME" "$test_file"
assert_success
run check_attributes_invalid_param "$test_file"
assert_success
}
@test "REST - attributes - checksum" {
if [ "$DIRECT" != "true" ]; then
skip "https://github.com/versity/versitygw/issues/928"
fi
test_file="test_file"
run setup_bucket "s3api" "$BUCKET_ONE_NAME"
assert_success
run create_test_file "$test_file"
assert_success
run add_and_check_checksum "$TEST_FILE_FOLDER/$test_file" "$test_file"
assert_success
}
@test "REST - bucket tagging - no tags" {
run setup_bucket "s3api" "$BUCKET_ONE_NAME"
assert_success
run verify_no_bucket_tags_rest "$BUCKET_ONE_NAME"
assert_success
}
@test "REST - bucket tagging - tags" {
if [ "$DIRECT" != "true" ]; then
skip "https://github.com/versity/versitygw/issues/932"
fi
test_key="testKey"
test_value="testValue"
run setup_bucket "s3api" "$BUCKET_ONE_NAME"
assert_success
run add_verify_bucket_tags_rest "$BUCKET_ONE_NAME" "$test_key" "$test_value"
assert_success
}


@@ -51,10 +51,6 @@ source ./tests/util_file.sh
test_common_list_buckets "s3"
}
@test "test_list_objects_file_count" {
test_common_list_objects_file_count "s3"
}
@test "test_delete_bucket" {
if [[ $RECREATE_BUCKETS == "false" ]]; then
skip "will not test bucket deletion in static bucket test config"

tests/test_s3_file_count.sh (Executable file, 22 lines)

@@ -0,0 +1,22 @@
#!/usr/bin/env bats
# Copyright 2024 Versity Software
# This file is licensed under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance
# with the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing,
# software distributed under the License is distributed on an
# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
# KIND, either express or implied. See the License for the
# specific language governing permissions and limitations
# under the License.
source ./tests/test_common.sh
source ./tests/util_file.sh
@test "test_list_objects_file_count" {
test_common_list_objects_file_count "s3"
}

tests/test_s3api_bucket.sh (Executable file, 125 lines)

@@ -0,0 +1,125 @@
#!/usr/bin/env bats
# Copyright 2024 Versity Software
# This file is licensed under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance
# with the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing,
# software distributed under the License is distributed on an
# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
# KIND, either express or implied. See the License for the
# specific language governing permissions and limitations
# under the License.
source ./tests/setup.sh
source ./tests/util.sh
source ./tests/util_aws.sh
source ./tests/util_create_bucket.sh
source ./tests/util_file.sh
source ./tests/util_lock_config.sh
source ./tests/util_tags.sh
source ./tests/util_users.sh
source ./tests/test_aws_root_inner.sh
source ./tests/test_common.sh
source ./tests/test_common_acl.sh
source ./tests/commands/copy_object.sh
source ./tests/commands/delete_bucket_policy.sh
source ./tests/commands/delete_object_tagging.sh
source ./tests/commands/get_bucket_acl.sh
source ./tests/commands/get_bucket_policy.sh
source ./tests/commands/get_bucket_versioning.sh
source ./tests/commands/get_object.sh
source ./tests/commands/get_object_attributes.sh
source ./tests/commands/get_object_legal_hold.sh
source ./tests/commands/get_object_lock_configuration.sh
source ./tests/commands/get_object_retention.sh
source ./tests/commands/get_object_tagging.sh
source ./tests/commands/list_object_versions.sh
source ./tests/commands/put_bucket_acl.sh
source ./tests/commands/put_bucket_policy.sh
source ./tests/commands/put_bucket_versioning.sh
source ./tests/commands/put_object.sh
source ./tests/commands/put_object_legal_hold.sh
source ./tests/commands/put_object_lock_configuration.sh
source ./tests/commands/put_object_retention.sh
source ./tests/commands/put_public_access_block.sh
source ./tests/commands/select_object_content.sh
export RUN_USERS=true
# create-bucket
@test "test_create_delete_bucket_aws" {
test_common_create_delete_bucket "aws"
}
@test "test_create_bucket_invalid_name" {
test_create_bucket_invalid_name_aws_root
}
# delete-bucket - test_create_delete_bucket_aws
# delete-bucket-policy
@test "test_get_put_delete_bucket_policy" {
if [[ -n $SKIP_POLICY ]]; then
skip "will not test policy actions with SKIP_POLICY set"
fi
test_common_get_put_delete_bucket_policy "aws"
}
# delete-bucket-tagging
@test "test-set-get-delete-bucket-tags" {
test_common_set_get_delete_bucket_tags "aws"
}
# get-bucket-acl
@test "test_get_bucket_acl" {
test_get_bucket_acl_aws_root
}
# get-bucket-location
@test "test_get_bucket_location" {
test_common_get_bucket_location "aws"
}
# get-bucket-policy - test_get_put_delete_bucket_policy
# get-bucket-tagging - test_set_get_delete_bucket_tags
@test "test_head_bucket_invalid_name" {
if head_bucket "aws" ""; then
fail "able to get bucket info for invalid name"
fi
}
# test listing buckets on versitygw
@test "test_list_buckets" {
test_common_list_buckets "s3api"
}
@test "test_put_bucket_acl" {
test_common_put_bucket_acl "s3api"
}
@test "test_head_bucket" {
run setup_bucket "aws" "$BUCKET_ONE_NAME"
assert_success
head_bucket "aws" "$BUCKET_ONE_NAME" || fail "error getting bucket info"
log 5 "INFO: $bucket_info"
region=$(echo "$bucket_info" | grep -v "InsecureRequestWarning" | jq -r ".BucketRegion" 2>&1) || fail "error getting bucket region: $region"
[[ $region != "" ]] || fail "empty bucket region"
bucket_cleanup "aws" "$BUCKET_ONE_NAME"
}
@test "test_head_bucket_doesnt_exist" {
run setup_bucket "aws" "$BUCKET_ONE_NAME"
assert_success
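# appending a character to the bucket name yields a name that should not exist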
head_bucket "aws" "$BUCKET_ONE_NAME"a || local info_result=$?
[[ $info_result -eq 1 ]] || fail "bucket info for non-existent bucket returned"
[[ $bucket_info == *"404"* ]] || fail "404 not returned for non-existent bucket info"
bucket_cleanup "aws" "$BUCKET_ONE_NAME"
}

tests/test_s3api_multipart.sh (Executable file, 110 lines)

@@ -0,0 +1,110 @@
#!/usr/bin/env bats
# Copyright 2024 Versity Software
# This file is licensed under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance
# with the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing,
# software distributed under the License is distributed on an
# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
# KIND, either express or implied. See the License for the
# specific language governing permissions and limitations
# under the License.
source ./tests/setup.sh
source ./tests/test_aws_root_inner.sh
source ./tests/util_file.sh
source ./tests/util_multipart.sh
source ./tests/util_tags.sh
source ./tests/commands/get_object.sh
source ./tests/commands/put_object.sh
source ./tests/commands/list_multipart_uploads.sh
# abort-multipart-upload
@test "test_abort_multipart_upload" {
test_abort_multipart_upload_aws_root
}
# complete-multipart-upload
@test "test_complete_multipart_upload" {
test_complete_multipart_upload_aws_root
}
# create-multipart-upload
@test "test_create_multipart_upload_properties" {
test_create_multipart_upload_properties_aws_root
}
# test multi-part upload list parts command
@test "test-multipart-upload-list-parts" {
test_multipart_upload_list_parts_aws_root
}
# test listing of active uploads
@test "test-multipart-upload-list-uploads" {
local bucket_file_one="bucket-file-one"
local bucket_file_two="bucket-file-two"
if [[ $RECREATE_BUCKETS == false ]]; then
run abort_all_multipart_uploads "$BUCKET_ONE_NAME"
assert_success
fi
run create_test_files "$bucket_file_one" "$bucket_file_two"
assert_success
run setup_bucket "aws" "$BUCKET_ONE_NAME"
assert_success
run create_list_check_multipart_uploads "$BUCKET_ONE_NAME" "$bucket_file_one" "$bucket_file_two"
assert_success
}
@test "test-multipart-upload-from-bucket" {
local bucket_file="bucket-file"
run create_test_file "$bucket_file"
assert_success
run dd if=/dev/urandom of="$TEST_FILE_FOLDER/$bucket_file" bs=5M count=1
assert_success
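# 5M matches the S3 minimum part size (5 MiB) for non-final multipart parts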
run setup_bucket "aws" "$BUCKET_ONE_NAME"
assert_success
run multipart_upload_from_bucket "$BUCKET_ONE_NAME" "$bucket_file" "$TEST_FILE_FOLDER"/"$bucket_file" 4
assert_success
run get_object "s3api" "$BUCKET_ONE_NAME" "$bucket_file-copy" "$TEST_FILE_FOLDER/$bucket_file-copy"
assert_success
run compare_files "$TEST_FILE_FOLDER"/$bucket_file-copy "$TEST_FILE_FOLDER"/$bucket_file
assert_success
}
@test "test_multipart_upload_from_bucket_range_too_large" {
local bucket_file="bucket-file"
run create_large_file "$bucket_file"
assert_success
run setup_bucket "aws" "$BUCKET_ONE_NAME"
assert_success
run multipart_upload_range_too_large "$BUCKET_ONE_NAME" "$bucket_file" "$TEST_FILE_FOLDER"/"$bucket_file"
assert_success
}
@test "test_multipart_upload_from_bucket_range_valid" {
local bucket_file="bucket-file"
run create_large_file "$bucket_file"
assert_success
run setup_bucket "aws" "$BUCKET_ONE_NAME"
assert_success
run run_and_verify_multipart_upload_with_valid_range "$BUCKET_ONE_NAME" "$bucket_file" "$TEST_FILE_FOLDER/$bucket_file"
assert_success
}


@@ -20,7 +20,6 @@ source ./tests/util_aws.sh
source ./tests/util_create_bucket.sh
source ./tests/util_file.sh
source ./tests/util_lock_config.sh
source ./tests/util_multipart.sh
source ./tests/util_tags.sh
source ./tests/util_users.sh
source ./tests/test_aws_root_inner.sh
@@ -38,7 +37,6 @@ source ./tests/commands/get_object_legal_hold.sh
source ./tests/commands/get_object_lock_configuration.sh
source ./tests/commands/get_object_retention.sh
source ./tests/commands/get_object_tagging.sh
source ./tests/commands/list_multipart_uploads.sh
source ./tests/commands/list_object_versions.sh
source ./tests/commands/put_bucket_acl.sh
source ./tests/commands/put_bucket_policy.sh
@@ -52,16 +50,6 @@ source ./tests/commands/select_object_content.sh
export RUN_USERS=true
# abort-multipart-upload
@test "test_abort_multipart_upload" {
test_abort_multipart_upload_aws_root
}
# complete-multipart-upload
@test "test_complete_multipart_upload" {
test_complete_multipart_upload_aws_root
}
# copy-object
@test "test_copy_object" {
test_common_copy_object "s3api"
@@ -71,35 +59,6 @@ export RUN_USERS=true
copy_object_empty || fail "copy objects with no parameters test failure"
}
# create-bucket
@test "test_create_delete_bucket_aws" {
test_common_create_delete_bucket "aws"
}
@test "test_create_bucket_invalid_name" {
test_create_bucket_invalid_name_aws_root
}
# create-multipart-upload
@test "test_create_multipart_upload_properties" {
test_create_multipart_upload_properties_aws_root
}
# delete-bucket - test_create_delete_bucket_aws
# delete-bucket-policy
@test "test_get_put_delete_bucket_policy" {
if [[ -n $SKIP_POLICY ]]; then
skip "will not test policy actions with SKIP_POLICY set"
fi
test_common_get_put_delete_bucket_policy "aws"
}
# delete-bucket-tagging
@test "test-set-get-delete-bucket-tags" {
test_common_set_get_delete_bucket_tags "aws"
}
# delete-object - tested with bucket cleanup before or after tests
# delete-object-tagging
@@ -115,20 +74,6 @@ export RUN_USERS=true
test_delete_objects_aws_root
}
# get-bucket-acl
@test "test_get_bucket_acl" {
test_get_bucket_acl_aws_root
}
# get-bucket-location
@test "test_get_bucket_location" {
test_common_get_bucket_location "aws"
}
# get-bucket-policy - test_get_put_delete_bucket_policy
# get-bucket-tagging - test_set_get_delete_bucket_tags
# get-object
@test "test_get_object_full_range" {
test_get_object_full_range_aws_root
@@ -143,12 +88,6 @@ export RUN_USERS=true
test_get_object_attributes_aws_root
}
@test "test_head_bucket_invalid_name" {
if head_bucket "aws" ""; then
fail "able to get bucket info for invalid name"
fi
}
@test "test_put_object" {
test_put_object_aws_root
}
@@ -168,11 +107,6 @@ export RUN_USERS=true
test_common_put_object_no_data "aws"
}
# test listing buckets on versitygw
@test "test_list_buckets" {
test_common_list_buckets "s3api"
}
# test listing a bucket's objects on versitygw
@test "test_list_objects" {
test_common_list_objects "aws"
@@ -186,10 +120,6 @@ export RUN_USERS=true
test_get_put_object_retention_aws_root
}
@test "test_put_bucket_acl" {
test_common_put_bucket_acl "s3api"
}
# test v1 s3api list objects command
@test "test-s3api-list-objects-v1" {
test_s3api_list_objects_v1_aws_root
@@ -205,88 +135,6 @@ export RUN_USERS=true
test_common_set_get_object_tags "aws"
}
# test multi-part upload list parts command
@test "test-multipart-upload-list-parts" {
test_multipart_upload_list_parts_aws_root
}
# test listing of active uploads
@test "test-multipart-upload-list-uploads" {
local bucket_file_one="bucket-file-one"
local bucket_file_two="bucket-file-two"
if [[ $RECREATE_BUCKETS == false ]]; then
run abort_all_multipart_uploads "$BUCKET_ONE_NAME"
assert_success
fi
run create_test_files "$bucket_file_one" "$bucket_file_two"
assert_success
run setup_bucket "aws" "$BUCKET_ONE_NAME"
assert_success
run create_list_check_multipart_uploads "$BUCKET_ONE_NAME" "$bucket_file_one" "$bucket_file_two"
assert_success
}
@test "test-multipart-upload-from-bucket" {
local bucket_file="bucket-file"
run create_test_file "$bucket_file"
assert_success
run dd if=/dev/urandom of="$TEST_FILE_FOLDER/$bucket_file" bs=5M count=1
assert_success
run setup_bucket "aws" "$BUCKET_ONE_NAME"
assert_success
run multipart_upload_from_bucket "$BUCKET_ONE_NAME" "$bucket_file" "$TEST_FILE_FOLDER"/"$bucket_file" 4
assert_success
run get_object "s3api" "$BUCKET_ONE_NAME" "$bucket_file-copy" "$TEST_FILE_FOLDER/$bucket_file-copy"
assert_success
run compare_files "$TEST_FILE_FOLDER"/$bucket_file-copy "$TEST_FILE_FOLDER"/$bucket_file
assert_success
}
@test "test_multipart_upload_from_bucket_range_too_large" {
local bucket_file="bucket-file"
run create_large_file "$bucket_file"
assert_success
run setup_bucket "aws" "$BUCKET_ONE_NAME"
assert_success
run multipart_upload_range_too_large "$BUCKET_ONE_NAME" "$bucket_file" "$TEST_FILE_FOLDER"/"$bucket_file"
assert_success
}
@test "test_multipart_upload_from_bucket_range_valid" {
local bucket_file="bucket-file"
run create_large_file "$bucket_file"
assert_success
run setup_bucket "aws" "$BUCKET_ONE_NAME"
assert_success
range_max=$((5*1024*1024-1))
multipart_upload_from_bucket_range "$BUCKET_ONE_NAME" "$bucket_file" "$TEST_FILE_FOLDER"/"$bucket_file" 4 "bytes=0-$range_max" || fail "upload failure"
get_object "s3api" "$BUCKET_ONE_NAME" "$bucket_file-copy" "$TEST_FILE_FOLDER/$bucket_file-copy" || fail "error retrieving object after upload"
if [[ $(uname) == 'Darwin' ]]; then
object_size=$(stat -f%z "$TEST_FILE_FOLDER/$bucket_file-copy")
else
object_size=$(stat --format=%s "$TEST_FILE_FOLDER/$bucket_file-copy")
fi
[[ object_size -eq $((range_max*4+4)) ]] || fail "object size mismatch ($object_size, $((range_max*4+4)))"
delete_bucket_or_contents "aws" "$BUCKET_ONE_NAME"
delete_test_files $bucket_file
}
@test "test-presigned-url-utf8-chars" {
test_common_presigned_url_utf8_chars "aws"
}
@@ -304,18 +152,11 @@ export RUN_USERS=true
run setup_bucket "aws" "$BUCKET_ONE_NAME"
assert_success
put_object "aws" "$TEST_FILE_FOLDER/$folder_name/$object_name" "$BUCKET_ONE_NAME" "$folder_name/$object_name" || fail "failed to add object to bucket"
run put_object "aws" "$TEST_FILE_FOLDER/$folder_name/$object_name" "$BUCKET_ONE_NAME" "$folder_name/$object_name"
assert_success
list_objects_s3api_v1 "$BUCKET_ONE_NAME" "/"
prefix=$(echo "${objects[@]}" | jq -r ".CommonPrefixes[0].Prefix" 2>&1) || fail "error getting object prefix from object list: $prefix"
[[ $prefix == "$folder_name/" ]] || fail "prefix doesn't match (expected $prefix, actual $folder_name/)"
list_objects_s3api_v1 "$BUCKET_ONE_NAME" "#"
key=$(echo "${objects[@]}" | jq -r ".Contents[0].Key" 2>&1) || fail "error getting key from object list: $key"
[[ $key == "$folder_name/$object_name" ]] || fail "key doesn't match (expected $key, actual $folder_name/$object_name)"
delete_bucket_or_contents "aws" "$BUCKET_ONE_NAME"
delete_test_files $folder_name
run check_object_listing_with_prefixes "$BUCKET_ONE_NAME" "$folder_name" "$object_name"
assert_success
}
# ensure that lists of files greater than a size of 1000 (pagination) are returned properly
@@ -342,31 +183,10 @@ export RUN_USERS=true
# [[ $put_object -eq 0 ]] || fail "Failed to add object to bucket"
#}
@test "test_head_bucket" {
run setup_bucket "aws" "$BUCKET_ONE_NAME"
assert_success
head_bucket "aws" "$BUCKET_ONE_NAME" || fail "error getting bucket info"
log 5 "INFO: $bucket_info"
region=$(echo "$bucket_info" | grep -v "InsecureRequestWarning" | jq -r ".BucketRegion" 2>&1) || fail "error getting bucket region: $region"
[[ $region != "" ]] || fail "empty bucket region"
delete_bucket_or_contents "aws" "$BUCKET_ONE_NAME"
}
@test "test_retention_bypass" {
test_retention_bypass_aws_root
}
@test "test_head_bucket_doesnt_exist" {
run setup_bucket "aws" "$BUCKET_ONE_NAME"
assert_success
head_bucket "aws" "$BUCKET_ONE_NAME"a || local info_result=$?
[[ $info_result -eq 1 ]] || fail "bucket info for non-existent bucket returned"
[[ $bucket_info == *"404"* ]] || fail "404 not returned for non-existent bucket info"
delete_bucket_or_contents "aws" "$BUCKET_ONE_NAME"
}
@test "test_add_object_metadata" {
object_one="object-one"
test_key="x-test-data"
@@ -388,7 +208,7 @@ export RUN_USERS=true
[[ $key == "$test_key" ]] || fail "keys don't match (expected \"$test_key\", actual $key)"
[[ $value == "$test_value" ]] || fail "values don't match (expected \"$test_value\", actual $value)"
delete_bucket_or_contents "aws" "$BUCKET_ONE_NAME"
bucket_cleanup "aws" "$BUCKET_ONE_NAME"
delete_test_files "$object_one"
}
@@ -410,7 +230,7 @@ export RUN_USERS=true
run get_and_check_object_lock_config "$bucket_name" "$enabled" "$governance" "$days"
assert_success "error getting and checking object lock config"
delete_bucket_or_contents "aws" "$bucket_name"
bucket_cleanup "aws" "$bucket_name"
}
@test "test_ls_directory_object" {


@@ -182,20 +182,17 @@ test_s3api_policy_invalid_action() {
resource="arn:aws:s3:::$BUCKET_ONE_NAME/*"
# shellcheck disable=SC2154
setup_policy_with_single_statement "$TEST_FILE_FOLDER/$policy_file" "dummy" "$effect" "$principal" "$action" "$resource"
run setup_policy_with_single_statement "$TEST_FILE_FOLDER/$policy_file" "dummy" "$effect" "$principal" "$action" "$resource"
assert_success
run setup_bucket "s3api" "$BUCKET_ONE_NAME"
assert_success
check_for_empty_policy "s3api" "$BUCKET_ONE_NAME" || fail "policy not empty"
run check_for_empty_policy "s3api" "$BUCKET_ONE_NAME"
assert_success
if put_bucket_policy "s3api" "$BUCKET_ONE_NAME" "$TEST_FILE_FOLDER/$policy_file"; then
fail "put succeeded despite malformed policy"
fi
# shellcheck disable=SC2154
[[ "$put_bucket_policy_error" == *"MalformedPolicy"*"invalid action"* ]] || fail "invalid policy error: $put_bucket_policy_error"
delete_bucket_or_contents "aws" "$BUCKET_ONE_NAME"
delete_test_files "$policy_file"
run put_and_check_for_malformed_policy "$BUCKET_ONE_NAME" "$TEST_FILE_FOLDER/$policy_file"
assert_success
}
test_s3api_policy_get_object_with_user() {
@@ -213,30 +210,26 @@ test_s3api_policy_get_object_with_user() {
action="s3:GetObject"
resource="arn:aws:s3:::$BUCKET_ONE_NAME/$test_file"
setup_policy_with_single_statement "$TEST_FILE_FOLDER/$policy_file" "2012-10-17" "$effect" "$principal" "$action" "$resource" || fail "failed to set up policy"
run setup_bucket "s3api" "$BUCKET_ONE_NAME"
assert_success
put_object "s3api" "$TEST_FILE_FOLDER/$test_file" "$BUCKET_ONE_NAME" "$test_file" || fail "error copying object"
if ! check_for_empty_policy "s3api" "$BUCKET_ONE_NAME"; then
delete_bucket_policy "s3api" "$BUCKET_ONE_NAME" || fail "error deleting policy"
check_for_empty_policy "s3api" "$BUCKET_ONE_NAME" || fail "policy not empty after deletion"
fi
setup_user "$username" "$password" "user" || fail "error creating user"
if get_object_with_user "s3api" "$BUCKET_ONE_NAME" "$test_file" "$TEST_FILE_FOLDER/$test_file-copy" "$username" "$password"; then
fail "get object with user succeeded despite lack of permissions"
fi
# shellcheck disable=SC2154
[[ "$get_object_error" == *"Access Denied"* ]] || fail "invalid get object error: $get_object_error"
put_bucket_policy "s3api" "$BUCKET_ONE_NAME" "$TEST_FILE_FOLDER/$policy_file" || fail "error putting policy"
run download_and_compare_file_with_user "s3api" "$TEST_FILE_FOLDER/$test_file" "$BUCKET_ONE_NAME" "$test_file" "$TEST_FILE_FOLDER/$test_file-copy" "$username" "$password"
run put_object "s3api" "$TEST_FILE_FOLDER/$test_file" "$BUCKET_ONE_NAME" "$test_file"
assert_success
delete_bucket_or_contents "aws" "$BUCKET_ONE_NAME"
run setup_user "$username" "$password" "user"
assert_success
run verify_user_cant_get_object "s3api" "$BUCKET_ONE_NAME" "$test_file" "$TEST_FILE_FOLDER/$test_file-copy" "$username" "$password"
assert_success
run setup_policy_with_single_statement "$TEST_FILE_FOLDER/$policy_file" "2012-10-17" "$effect" "$principal" "$action" "$resource"
assert_success
run put_bucket_policy "s3api" "$BUCKET_ONE_NAME" "$TEST_FILE_FOLDER/$policy_file"
assert_success
run download_and_compare_file_with_user "s3api" "$TEST_FILE_FOLDER/$test_file" "$BUCKET_ONE_NAME" "$test_file" "$TEST_FILE_FOLDER/$test_file-copy" "$username" "$password"
assert_success
}
test_s3api_policy_get_object_specific_file() {
@@ -268,12 +261,8 @@ test_s3api_policy_get_object_specific_file() {
run download_and_compare_file_with_user "s3api" "$TEST_FILE_FOLDER/$test_file" "$BUCKET_ONE_NAME" "$test_file" "$TEST_FILE_FOLDER/$test_file-copy" "$username" "$password"
assert_success
if get_object_with_user "s3api" "$BUCKET_ONE_NAME" "$test_file_two" "$TEST_FILE_FOLDER/$test_file_two-copy" "$username" "$password"; then
fail "get object with user succeeded despite lack of permissions"
fi
# shellcheck disable=SC2154
[[ "$get_object_error" == *"Access Denied"* ]] || fail "invalid get object error: $get_object_error"
delete_bucket_or_contents "aws" "$BUCKET_ONE_NAME"
run verify_user_cant_get_object "s3api" "$BUCKET_ONE_NAME" "$test_file_two" "$TEST_FILE_FOLDER/$test_file_two-copy" "$username" "$password"
assert_success
}
test_s3api_policy_get_object_file_wildcard() {
@@ -291,17 +280,23 @@ test_s3api_policy_get_object_file_wildcard() {
action="s3:GetObject"
resource="arn:aws:s3:::$BUCKET_ONE_NAME/policy_file*"
setup_user "$username" "$password" "user" || fail "error creating user account"
run setup_user "$username" "$password" "user"
assert_success
run setup_bucket "s3api" "$BUCKET_ONE_NAME"
assert_success
setup_policy_with_single_statement "$TEST_FILE_FOLDER/$policy_file" "dummy" "$effect" "$principal" "$action" "$resource" || fail "failed to set up policy"
put_bucket_policy "s3api" "$BUCKET_ONE_NAME" "$TEST_FILE_FOLDER/$policy_file" || fail "error putting policy"
run setup_policy_with_single_statement "$TEST_FILE_FOLDER/$policy_file" "dummy" "$effect" "$principal" "$action" "$resource"
assert_success
run put_bucket_policy "s3api" "$BUCKET_ONE_NAME" "$TEST_FILE_FOLDER/$policy_file"
assert_success
put_object "s3api" "$TEST_FILE_FOLDER/$policy_file" "$BUCKET_ONE_NAME" "$policy_file" || fail "error copying object one"
put_object "s3api" "$TEST_FILE_FOLDER/$policy_file_two" "$BUCKET_ONE_NAME" "$policy_file_two" || fail "error copying object two"
put_object "s3api" "$TEST_FILE_FOLDER/$policy_file_three" "$BUCKET_ONE_NAME" "$policy_file_three" || fail "error copying object three"
run put_object "s3api" "$TEST_FILE_FOLDER/$policy_file" "$BUCKET_ONE_NAME" "$policy_file"
assert_success
run put_object "s3api" "$TEST_FILE_FOLDER/$policy_file_two" "$BUCKET_ONE_NAME" "$policy_file_two"
assert_success
run put_object "s3api" "$TEST_FILE_FOLDER/$policy_file_three" "$BUCKET_ONE_NAME" "$policy_file_three"
assert_success
run download_and_compare_file_with_user "s3api" "$TEST_FILE_FOLDER/$policy_file" "$BUCKET_ONE_NAME" "$policy_file" "$TEST_FILE_FOLDER/$policy_file-copy" "$username" "$password"
assert_success
@@ -309,12 +304,8 @@ test_s3api_policy_get_object_file_wildcard() {
run download_and_compare_file_with_user "s3api" "$TEST_FILE_FOLDER/$policy_file_two" "$BUCKET_ONE_NAME" "$policy_file_two" "$TEST_FILE_FOLDER/$policy_file_two-copy" "$username" "$password"
assert_success
if get_object_with_user "s3api" "$BUCKET_ONE_NAME" "$policy_file_three" "$TEST_FILE_FOLDER/$policy_file_three" "$username" "$password"; then
fail "get object three with user succeeded despite lack of permissions"
fi
[[ "$get_object_error" == *"Access Denied"* ]] || fail "invalid get object error: $get_object_error"
delete_bucket_or_contents "aws" "$BUCKET_ONE_NAME"
run verify_user_cant_get_object "s3api" "$BUCKET_ONE_NAME" "$policy_file_three" "$TEST_FILE_FOLDER/$policy_file_three" "$username" "$password"
assert_success
}
test_s3api_policy_get_object_folder_wildcard() {
@@ -335,17 +326,23 @@ test_s3api_policy_get_object_folder_wildcard() {
action="s3:GetObject"
resource="arn:aws:s3:::$BUCKET_ONE_NAME/$test_folder/*"
setup_user "$username" "$password" "user" || fail "error creating user"
run setup_user "$username" "$password" "user"
assert_success
setup_bucket "s3api" "$BUCKET_ONE_NAME"
setup_policy_with_single_statement "$TEST_FILE_FOLDER/$policy_file" "dummy" "$effect" "$principal" "$action" "$resource" || fail "failed to set up policy"
put_bucket_policy "s3api" "$BUCKET_ONE_NAME" "$TEST_FILE_FOLDER/$policy_file" || fail "error putting policy"
run setup_bucket "s3api" "$BUCKET_ONE_NAME"
assert_success
put_object "s3api" "$TEST_FILE_FOLDER/$test_folder/$test_file" "$BUCKET_ONE_NAME" "$test_folder/$test_file" || fail "error copying object to bucket"
run setup_policy_with_single_statement "$TEST_FILE_FOLDER/$policy_file" "dummy" "$effect" "$principal" "$action" "$resource"
assert_success
download_and_compare_file_with_user "s3api" "$TEST_FILE_FOLDER/$test_folder/$test_file" "$BUCKET_ONE_NAME" "$test_folder/$test_file" "$TEST_FILE_FOLDER/$test_file-copy" "$username" "$password" || fail "error downloading and comparing file"
delete_bucket_or_contents "aws" "$BUCKET_ONE_NAME"
delete_test_files "$test_folder/$test_file" "$policy_file"
run put_bucket_policy "s3api" "$BUCKET_ONE_NAME" "$TEST_FILE_FOLDER/$policy_file"
assert_success
run put_object "s3api" "$TEST_FILE_FOLDER/$test_folder/$test_file" "$BUCKET_ONE_NAME" "$test_folder/$test_file"
assert_success
run download_and_compare_file_with_user "s3api" "$TEST_FILE_FOLDER/$test_folder/$test_file" "$BUCKET_ONE_NAME" "$test_folder/$test_file" "$TEST_FILE_FOLDER/$test_file-copy" "$username" "$password"
assert_success
}
test_s3api_policy_allow_deny() {
@@ -357,25 +354,25 @@ test_s3api_policy_allow_deny() {
run create_test_files "$policy_file" "$test_file"
assert_success
setup_user "$username" "$password" "user" || fail "error creating user"
run setup_user "$username" "$password" "user"
assert_success
run setup_bucket "s3api" "$BUCKET_ONE_NAME"
assert_success
setup_policy_with_double_statement "$TEST_FILE_FOLDER/$policy_file" "dummy" \
run setup_policy_with_double_statement "$TEST_FILE_FOLDER/$policy_file" "dummy" \
"Deny" "$username" "s3:GetObject" "arn:aws:s3:::$BUCKET_ONE_NAME/$test_file" \
"Allow" "$username" "s3:GetObject" "arn:aws:s3:::$BUCKET_ONE_NAME/$test_file"
assert_success
put_bucket_policy "s3api" "$BUCKET_ONE_NAME" "$TEST_FILE_FOLDER/$policy_file" || fail "error putting policy"
put_object "s3api" "$TEST_FILE_FOLDER/$test_file" "$BUCKET_ONE_NAME" "$test_file" || fail "error copying object to bucket"
run put_bucket_policy "s3api" "$BUCKET_ONE_NAME" "$TEST_FILE_FOLDER/$policy_file"
assert_success
if get_object_with_user "s3api" "$BUCKET_ONE_NAME" "$test_file" "$TEST_FILE_FOLDER/$test_file-copy" "$username" "$password"; then
fail "able to get object despite deny statement"
fi
[[ "$get_object_error" == *"Access Denied"* ]] || fail "invalid get object error: $get_object_error"
run put_object "s3api" "$TEST_FILE_FOLDER/$test_file" "$BUCKET_ONE_NAME" "$test_file"
assert_success
delete_bucket_or_contents "aws" "$BUCKET_ONE_NAME"
delete_test_files "$test_file" "$test_file-copy" "$policy_file"
run verify_user_cant_get_object "s3api" "$BUCKET_ONE_NAME" "$test_file" "$TEST_FILE_FOLDER/$test_file-copy" "$username" "$password"
assert_success
}
test_s3api_policy_deny() {
@@ -402,12 +399,9 @@ test_s3api_policy_deny() {
put_object "s3api" "$TEST_FILE_FOLDER/$test_file_one" "$BUCKET_ONE_NAME" "$test_file_one" || fail "error copying object one"
put_object "s3api" "$TEST_FILE_FOLDER/$test_file_one" "$BUCKET_ONE_NAME" "$test_file_two" || fail "error copying object two"
get_object_with_user "s3api" "$BUCKET_ONE_NAME" "$test_file_one" "$TEST_FILE_FOLDER/$test_file_one-copy" "$username" "$password" || fail "error getting object"
if get_object_with_user "s3api" "$BUCKET_ONE_NAME" "$test_file_two" "$TEST_FILE_FOLDER/$test_file_two-copy" "$username" "$password"; then
fail "able to get object despite deny statement"
fi
[[ "$get_object_error" == *"Access Denied"* ]] || fail "invalid get object error: $get_object_error"
delete_bucket_or_contents "aws" "$BUCKET_ONE_NAME"
delete_test_files "$test_file_one" "$test_file_two" "$test_file_one-copy" "$test_file_two-copy" "$policy_file"
run verify_user_cant_get_object "s3api" "$BUCKET_ONE_NAME" "$test_file_two" "$TEST_FILE_FOLDER/$test_file_two-copy" "$username" "$password"
assert_success
}
test_s3api_policy_put_wildcard() {
@@ -440,13 +434,11 @@ test_s3api_policy_put_wildcard() {
# shellcheck disable=SC2154
[[ "$put_object_error" == *"Access Denied"* ]] || fail "invalid put object error: $put_object_error"
put_object_with_user "s3api" "$TEST_FILE_FOLDER/$test_folder/$test_file" "$BUCKET_ONE_NAME" "$test_folder/$test_file" "$username" "$password" || fail "error putting file despite policy permissions"
if get_object_with_user "s3api" "$BUCKET_ONE_NAME" "$test_folder/$test_file" "$test_folder/$test_file-copy" "$username" "$password"; then
fail "able to get object without permissions"
fi
[[ "$get_object_error" == *"Access Denied"* ]] || fail "invalid get object error: $get_object_error"
run verify_user_cant_get_object "s3api" "$BUCKET_ONE_NAME" "$test_folder/$test_file" "$test_folder/$test_file-copy" "$username" "$password"
assert_success
download_and_compare_file "s3api" "$TEST_FILE_FOLDER/$test_folder/$test_file" "$BUCKET_ONE_NAME" "$test_folder/$test_file" "$TEST_FILE_FOLDER/$test_file-copy" || fail "files don't match"
delete_bucket_or_contents "aws" "$BUCKET_ONE_NAME"
delete_test_files "$test_folder/$test_file" "$test_file-copy" "$policy_file"
}
test_s3api_policy_delete() {
@@ -481,8 +473,6 @@ test_s3api_policy_delete() {
# shellcheck disable=SC2154
[[ "$delete_object_error" == *"Access Denied"* ]] || fail "invalid delete object error: $delete_object_error"
delete_object_with_user "s3api" "$BUCKET_ONE_NAME" "$test_file_two" "$username" "$password" || fail "error deleting object despite permissions"
delete_bucket_or_contents "aws" "$BUCKET_ONE_NAME"
delete_test_files "$test_file_one" "$test_file_two" "$policy_file"
}
test_s3api_policy_get_bucket_policy() {
@@ -515,8 +505,6 @@ test_s3api_policy_get_bucket_policy() {
log 5 "ORIG: $(cat "$TEST_FILE_FOLDER/$policy_file")"
log 5 "COPY: $(cat "$TEST_FILE_FOLDER/$policy_file-copy")"
compare_files "$TEST_FILE_FOLDER/$policy_file" "$TEST_FILE_FOLDER/$policy_file-copy" || fail "policies not equal"
delete_bucket_or_contents "aws" "$BUCKET_ONE_NAME"
delete_test_files "$policy_file" "$policy_file-copy"
}
test_s3api_policy_list_multipart_uploads() {
@@ -560,8 +548,6 @@ test_s3api_policy_list_multipart_uploads() {
log 5 "$uploads"
upload_key=$(echo "$uploads" | grep -v "InsecureRequestWarning" | jq -r ".Uploads[0].Key" 2>&1) || fail "error parsing upload key from uploads message: $upload_key"
[[ $upload_key == "$test_file" ]] || fail "upload key doesn't match file marked as being uploaded"
delete_bucket_or_contents "aws" "$BUCKET_ONE_NAME"
delete_test_files "$policy_file" "$test_file"
}
test_s3api_policy_put_bucket_policy() {
@@ -597,8 +583,6 @@ test_s3api_policy_put_bucket_policy() {
log 5 "ORIG: $(cat "$TEST_FILE_FOLDER/$policy_file_two")"
log 5 "COPY: $(cat "$TEST_FILE_FOLDER/$policy_file-copy")"
compare_files "$TEST_FILE_FOLDER/$policy_file_two" "$TEST_FILE_FOLDER/$policy_file-copy" || fail "policies not equal"
delete_bucket_or_contents "aws" "$BUCKET_ONE_NAME"
delete_test_files "$policy_file" "$policy_file_two" "$policy_file-copy"
}
test_s3api_policy_delete_bucket_policy() {
@@ -625,8 +609,6 @@ test_s3api_policy_delete_bucket_policy() {
setup_policy_with_single_statement "$TEST_FILE_FOLDER/$policy_file" "dummy" "$effect" "$principal" "$action" "$resource" || fail "failed to set up policy"
put_bucket_policy "s3api" "$BUCKET_ONE_NAME" "$TEST_FILE_FOLDER/$policy_file" || fail "error putting policy"
delete_bucket_policy_with_user "$BUCKET_ONE_NAME" "$username" "$password" || fail "unable to delete bucket policy"
delete_bucket_or_contents "aws" "$BUCKET_ONE_NAME"
delete_test_files "$policy_file"
}
test_s3api_policy_get_bucket_acl() {
@@ -699,9 +681,6 @@ test_s3api_policy_abort_multipart_upload() {
put_bucket_policy "s3api" "$BUCKET_ONE_NAME" "$TEST_FILE_FOLDER/$policy_file" || fail "error putting policy"
abort_multipart_upload_with_user "$BUCKET_ONE_NAME" "$test_file" "$upload_id" "$username" "$password" || fail "error aborting multipart upload despite permissions"
delete_bucket_or_contents "aws" "$BUCKET_ONE_NAME"
delete_test_files "$policy_file" "$test_file"
}
test_s3api_policy_two_principals() {
@@ -734,9 +713,6 @@ test_s3api_policy_two_principals() {
assert_success "error getting object with user $USERNAME_ONE"
run get_object_with_user "s3api" "$BUCKET_ONE_NAME" "$test_file" "$TEST_FILE_FOLDER/copy_two" "$USERNAME_TWO" "$PASSWORD_TWO"
assert_success "error getting object with user $USERNAME_TWO"
delete_test_files "$test_file" "$policy_file" "$TEST_FILE_FOLDER/copy_one" "$TEST_FILE_FOLDER/copy_two"
delete_bucket_or_contents "s3api" "$BUCKET_ONE_NAME"
}
test_s3api_policy_put_bucket_tagging() {
@@ -760,9 +736,8 @@ test_s3api_policy_put_bucket_tagging() {
run put_bucket_tagging_with_user "$BUCKET_ONE_NAME" "$tag_key" "$tag_value" "$USERNAME_ONE" "$PASSWORD_ONE"
assert_success "unable to put bucket tagging despite user permissions"
get_and_check_bucket_tags "$BUCKET_ONE_NAME" "$tag_key" "$tag_value"
delete_bucket_or_contents "s3api" "$BUCKET_ONE_NAME"
run get_and_check_bucket_tags "$BUCKET_ONE_NAME" "$tag_key" "$tag_value"
assert_success
}
test_s3api_policy_put_acl() {
@@ -805,8 +780,6 @@ test_s3api_policy_put_acl() {
id=$(echo "$second_grantee" | jq -r ".ID" 2>&1) || fail "error getting ID: $id"
[[ $id == "all-users" ]] || fail "unexpected ID: $id"
fi
delete_bucket_or_contents "aws" "$BUCKET_ONE_NAME"
delete_test_files "$policy_file"
}
test_s3api_policy_get_bucket_tagging() {
@@ -835,11 +808,9 @@ test_s3api_policy_get_bucket_tagging() {
run put_bucket_policy "s3api" "$BUCKET_ONE_NAME" "$TEST_FILE_FOLDER/$policy_file"
assert_success "error putting policy"
run get_and_check_bucket_tags_with_user "$USERNAME_ONE" "$PASSWORD_ONE" "$BUCKET_ONE_NAME" "$tag_key" "$tag_value"
assert_success "get and check bucket tags failed"
delete_bucket_or_contents "s3api" "$BUCKET_ONE_NAME"
delete_test_files "$policy_file"
}
test_s3api_policy_list_upload_parts() {
@@ -868,7 +839,4 @@ test_s3api_policy_list_upload_parts() {
run create_upload_and_test_parts_listing "$test_file" "$policy_file"
assert_success "error creating upload and testing parts listing"
delete_bucket_or_contents "s3api" "$BUCKET_ONE_NAME"
delete_test_files "$policy_file" "$test_file"
}


@@ -52,7 +52,7 @@ export RUN_USERS=true
[[ "$bucket_create_error" == *"just the bucket name"* ]] || fail "unexpected error: $bucket_create_error"
delete_bucket_or_contents "s3cmd" "$BUCKET_ONE_NAME"
bucket_cleanup "s3cmd" "$BUCKET_ONE_NAME"
}
# delete-bucket - test_create_delete_bucket
@@ -109,17 +109,13 @@ export RUN_USERS=true
# test_common_presigned_url_utf8_chars "s3cmd"
#}
@test "test_list_objects_file_count" {
test_common_list_objects_file_count "s3cmd"
}
@test "test_get_bucket_info_s3cmd" {
run setup_bucket "s3cmd" "$BUCKET_ONE_NAME"
assert_success
head_bucket "s3cmd" "$BUCKET_ONE_NAME"
[[ $bucket_info == *"s3://$BUCKET_ONE_NAME"* ]] || fail "failure to retrieve correct bucket info: $bucket_info"
delete_bucket_or_contents "s3cmd" "$BUCKET_ONE_NAME"
bucket_cleanup "s3cmd" "$BUCKET_ONE_NAME"
}
@test "test_get_bucket_info_doesnt_exist_s3cmd" {
@@ -129,7 +125,7 @@ export RUN_USERS=true
head_bucket "s3cmd" "$BUCKET_ONE_NAME"a || local info_result=$?
[[ $info_result -eq 1 ]] || fail "bucket info for non-existent bucket returned"
[[ $bucket_info == *"404"* ]] || fail "404 not returned for non-existent bucket info"
delete_bucket_or_contents "s3cmd" "$BUCKET_ONE_NAME"
bucket_cleanup "s3cmd" "$BUCKET_ONE_NAME"
}
@test "test_ls_directory_object" {

tests/test_s3cmd_file_count.sh (Executable file, 23 lines)

@@ -0,0 +1,23 @@
#!/usr/bin/env bats
# Copyright 2024 Versity Software
# This file is licensed under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance
# with the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing,
# software distributed under the License is distributed on an
# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
# KIND, either express or implied. See the License for the
# specific language governing permissions and limitations
# under the License.
source ./tests/test_common.sh
export RUN_S3CMD=true
@test "test_list_objects_file_count" {
test_common_list_objects_file_count "s3cmd"
}


@@ -136,7 +136,7 @@ export RUN_USERS=true
fi
# shellcheck disable=SC2154
[[ "$get_object_error" == *"NoSuchKey"* ]] || fail "unexpected error message: $get_object_error"
delete_bucket_or_contents "s3api" "$BUCKET_ONE_NAME"
bucket_cleanup "s3api" "$BUCKET_ONE_NAME"
delete_test_files "$test_file" "$test_file-copy"
}


@@ -70,7 +70,9 @@ test_create_user_already_exists() {
username="$USERNAME_ONE"
password="$PASSWORD_ONE"
setup_user "$username" "123456" "admin" || fail "error setting up user"
run setup_user "$username" "123456" "admin"
assert_success "error setting up user"
if create_user "$username" "123456" "admin"; then
fail "'user already exists' error not returned"
fi
@@ -87,7 +89,7 @@ test_user_user() {
password="$PASSWORD_ONE"
setup_user "$username" "$password" "user" || fail "error setting up user"
delete_bucket_or_contents_if_exists "aws" "versity-gwtest-user-bucket"
bucket_cleanup_if_bucket_exists "aws" "versity-gwtest-user-bucket"
run setup_bucket "aws" "$BUCKET_ONE_NAME"
assert_success
@@ -131,7 +133,7 @@ test_userplus_operation() {
username="$USERNAME_ONE"
password="$PASSWORD_ONE"
delete_bucket_or_contents_if_exists "aws" "versity-gwtest-userplus-bucket"
bucket_cleanup_if_bucket_exists "aws" "versity-gwtest-userplus-bucket"
setup_user "$username" "$password" "userplus" || fail "error creating user '$username'"
run setup_bucket "aws" "$BUCKET_ONE_NAME"


@@ -14,6 +14,7 @@
# specific language governing permissions and limitations
# under the License.
source ./tests/util_bucket.sh
source ./tests/util_create_bucket.sh
source ./tests/util_mc.sh
source ./tests/util_multipart.sh
@@ -46,46 +47,6 @@ source ./tests/commands/upload_part_copy.sh
source ./tests/commands/upload_part.sh
source ./tests/util_users.sh
# recursively delete an AWS bucket
# param: client, bucket name
# fail if error
delete_bucket_recursive() {
log 6 "delete_bucket_recursive"
if [ $# -ne 2 ]; then
log 2 "'delete_bucket_recursive' requires client, bucket name"
return 1
fi
local exit_code=0
local error
if [[ $1 == 's3' ]]; then
error=$(aws --no-verify-ssl s3 rb s3://"$2" --force 2>&1) || exit_code="$?"
elif [[ $1 == "aws" ]] || [[ $1 == 's3api' ]]; then
if ! delete_bucket_recursive_s3api "$2"; then
log 2 "error deleting bucket recursively (s3api)"
return 1
fi
return 0
elif [[ $1 == "s3cmd" ]]; then
error=$(s3cmd "${S3CMD_OPTS[@]}" --no-check-certificate rb s3://"$2" --recursive 2>&1) || exit_code="$?"
elif [[ $1 == "mc" ]]; then
error=$(delete_bucket_recursive_mc "$2" 2>&1) || exit_code="$?"
else
log 2 "invalid client '$1'"
return 1
fi
if [ $exit_code -ne 0 ]; then
if [[ "$error" == *"The specified bucket does not exist"* ]]; then
return 0
else
log 2 "error deleting bucket recursively: $error"
return 1
fi
fi
return 0
}
# params: bucket name
# return 0 for success, 1 for error
add_governance_bypass_policy() {
@@ -200,56 +161,6 @@ check_object_lock_config() {
return 0
}
# restore bucket to pre-test state (or prep for deletion)
# param: bucket name
# return 0 on success, 1 on error
clear_bucket_s3api() {
log 6 "clear_bucket_s3api"
if [ $# -ne 1 ]; then
log 2 "'clear_bucket_s3api' requires bucket name"
return 1
fi
if [[ $LOG_LEVEL_INT -ge 5 ]]; then
if ! log_bucket_policy "$1"; then
log 2 "error logging bucket policy"
return 1
fi
fi
if ! check_object_lock_config "$1"; then
log 2 "error checking object lock config"
return 1
fi
if [[ "$DIRECT" != "true" ]] && ! add_governance_bypass_policy "$1"; then
log 2 "error adding governance bypass policy"
return 1
fi
if ! list_and_delete_objects "$1"; then
log 2 "error listing and deleting objects"
return 1
fi
#run check_ownership_rule_and_reset_acl "$1"
#assert_success "error checking ownership rule and resetting acl"
if [[ $lock_config_exists == true ]] && ! put_object_lock_configuration_disabled "$1"; then
log 2 "error disabling object lock config"
return 1
fi
#if ! put_bucket_versioning "s3api" "$1" "Suspended"; then
# log 2 "error suspending bucket versioning"
# return 1
#fi
#if ! change_bucket_owner "$AWS_ACCESS_KEY_ID" "$AWS_SECRET_ACCESS_KEY" "$1" "$AWS_ACCESS_KEY_ID"; then
# log 2 "error changing bucket owner back to root"
# return 1
#fi
}
# params: bucket, object name
# return 0 for success, 1 for error
clear_object_in_bucket() {
@@ -321,77 +232,6 @@ log_worm_protection() {
log 5 "RETENTION: $retention"
}
# params: bucket name
# return 0 if able to delete recursively, 1 if not
delete_bucket_recursive_s3api() {
log 6 "delete_bucket_recursive_s3api"
if [ $# -ne 1 ]; then
log 2 "'delete_bucket_recursive_s3api' requires bucket name"
return 1
fi
if ! clear_bucket_s3api "$1"; then
log 2 "error clearing bucket (s3api)"
return 1
fi
if ! delete_bucket 's3api' "$1"; then
log 2 "error deleting bucket"
return 1
fi
return 0
}
# params: client, bucket name
# return 0 on success, 1 on error
delete_bucket_contents() {
log 6 "delete_bucket_contents"
if [ $# -ne 2 ]; then
log 2 "'delete_bucket_contents' requires client, bucket name"
return 1
fi
local exit_code=0
local error
if [[ $1 == "aws" ]] || [[ $1 == 's3api' ]]; then
if ! clear_bucket_s3api "$2"; then
log 2 "error clearing bucket (s3api)"
return 1
fi
elif [[ $1 == "s3cmd" ]]; then
delete_bucket_recursive "s3cmd" "$1"
elif [[ $1 == "mc" ]]; then
delete_bucket_recursive "mc" "$1"
elif [[ $1 == "s3" ]]; then
delete_bucket_recursive "s3" "$1"
else
log 2 "unrecognized client: '$1'"
return 1
fi
return 0
}
# check if bucket exists
# param: bucket name
# return 0 for true, 1 for false, 2 for error
bucket_exists() {
if [ $# -ne 2 ]; then
log 2 "bucket_exists command requires client, bucket name"
return 2
fi
local exists=0
head_bucket "$1" "$2" || exists=$?
# shellcheck disable=SC2181
if [ $exists -ne 0 ] && [ $exists -ne 1 ]; then
log 2 "unexpected error checking if bucket exists"
return 2
fi
if [ $exists -eq 0 ]; then
return 0
fi
return 1
}
# param: bucket name
# return 1 for failure, 0 for success
get_object_ownership_rule_and_update_acl() {
@@ -410,126 +250,6 @@ get_object_ownership_rule_and_update_acl() {
fi
}
# params: client, bucket name
# return 0 for success, 1 for error
delete_bucket_or_contents() {
log 6 "delete_bucket_or_contents"
if [ $# -ne 2 ]; then
log 2 "'delete_bucket_or_contents' requires client, bucket name"
return 1
fi
if [[ $RECREATE_BUCKETS == "false" ]]; then
if ! delete_bucket_contents "$1" "$2"; then
log 2 "error deleting bucket contents"
return 1
fi
if ! delete_bucket_policy "$1" "$2"; then
log 2 "error deleting bucket policy"
return 1
fi
if ! get_object_ownership_rule_and_update_acl "$2"; then
log 2 "error getting object ownership rule and updating ACL"
return 1
fi
if ! abort_all_multipart_uploads "$2"; then
log 2 "error aborting all multipart uploads"
return 1
fi
if [ "$RUN_USERS" == "true" ] && ! reset_bucket_owner "$2"; then
log 2 "error resetting bucket owner"
return 1
fi
log 5 "bucket contents, policy, ACL deletion success"
return 0
fi
if ! delete_bucket_recursive "$1" "$2"; then
log 2 "error with recursive bucket delete"
return 1
fi
log 5 "bucket deletion success"
return 0
}
# params: client, bucket name
# return 0 for success, 1 for error
delete_bucket_or_contents_if_exists() {
log 6 "delete_bucket_or_contents_if_exists"
if [ $# -ne 2 ]; then
log 2 "'delete_bucket_or_contents_if_exists' requires client, bucket name"
return 1
fi
if bucket_exists "$1" "$2"; then
if ! delete_bucket_or_contents "$1" "$2"; then
log 2 "error deleting bucket and/or contents"
return 1
fi
log 5 "bucket and/or bucket data deletion success"
return 0
fi
return 0
}
# params: client, bucket name(s)
# return 0 for success, 1 for failure
setup_buckets() {
if [ $# -lt 2 ]; then
log 2 "'setup_buckets' command requires client, bucket names"
return 1
fi
for name in "${@:2}"; do
if ! setup_bucket "$1" "$name"; then
log 2 "error setting up bucket $name"
return 1
fi
done
return 0
}
# params: client, bucket name
# return 0 on successful setup, 1 on error
setup_bucket() {
log 6 "setup_bucket"
if [ $# -ne 2 ]; then
log 2 "'setup_bucket' requires client, bucket name"
return 1
fi
if ! bucket_exists "$1" "$2" && [[ $RECREATE_BUCKETS == "false" ]]; then
log 2 "When RECREATE_BUCKETS isn't set to \"true\", buckets should be pre-created by user"
return 1
fi
if ! delete_bucket_or_contents_if_exists "$1" "$2"; then
log 2 "error deleting bucket or contents if they exist"
return 1
fi
log 5 "util.setup_bucket: command type: $1, bucket name: $2"
if [[ $RECREATE_BUCKETS == "true" ]]; then
if ! create_bucket "$1" "$2"; then
log 2 "error creating bucket"
return 1
fi
else
log 5 "skipping bucket re-creation"
fi
if [[ $1 == "s3cmd" ]]; then
log 5 "putting bucket ownership controls"
if bucket_exists "s3cmd" "$2" && ! put_bucket_ownership_controls "$2" "BucketOwnerPreferred"; then
log 2 "error putting bucket ownership controls"
return 1
fi
fi
return 0
}
# check if object exists on S3 via gateway
# param: command, object path
# return 0 for true, 1 for false, 2 for error
@@ -659,27 +379,6 @@ remove_insecure_request_warning() {
export parsed_output
}
# check if bucket info can be retrieved
# param: path of bucket or folder
# return 0 for yes, 1 for no, 2 for error
bucket_is_accessible() {
if [ $# -ne 1 ]; then
echo "bucket accessibility check missing bucket name"
return 2
fi
local exit_code=0
local error
error=$(aws --no-verify-ssl s3api head-bucket --bucket "$1" 2>&1) || exit_code="$?"
if [ $exit_code -eq 0 ]; then
return 0
fi
if [[ "$error" == *"500"* ]]; then
return 1
fi
echo "Error checking bucket accessibility: $error"
return 2
}
# check if object info (etag) is accessible
# param: path of object
# return 0 for yes, 1 for no, 2 for error

tests/util_attributes.sh (Normal file, 113 lines)

@@ -0,0 +1,113 @@
#!/usr/bin/env bash
upload_and_check_attributes() {
if [ $# -ne 2 ]; then
log 2 "'upload_and_check_attributes' requires test file, file size"
return 1
fi
if ! perform_multipart_upload_rest "$BUCKET_ONE_NAME" "$1" "$TEST_FILE_FOLDER/$1-0" "$TEST_FILE_FOLDER/$1-1" \
"$TEST_FILE_FOLDER/$1-2" "$TEST_FILE_FOLDER/$1-3"; then
log 2 "error uploading and checking parts"
return 1
fi
if ! result=$(COMMAND_LOG="$COMMAND_LOG" BUCKET_NAME="$BUCKET_ONE_NAME" OBJECT_KEY="$1" ATTRIBUTES="ETag,StorageClass,ObjectParts,ObjectSize" OUTPUT_FILE="$TEST_FILE_FOLDER/attributes.txt" ./tests/rest_scripts/get_object_attributes.sh); then
log 2 "error listing object attributes: $result"
return 1
fi
if ! check_attributes_after_upload "$2"; then
log 2 "error checking attributes after upload"
return 1
fi
}
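# The rest_scripts helpers used above are driven entirely by environment variables
# and print the HTTP status code on stdout. A minimal standalone sketch, assuming
# BUCKET_ONE_NAME and TEST_FILE_FOLDER are already set ("my-object" is a
# hypothetical key):
#
#   status=$(BUCKET_NAME="$BUCKET_ONE_NAME" OBJECT_KEY="my-object" \
#     ATTRIBUTES="ETag,ObjectSize" OUTPUT_FILE="$TEST_FILE_FOLDER/attributes.txt" \
#     ./tests/rest_scripts/get_object_attributes.sh)
#   [ "$status" = "200" ] || echo "unexpected status: $status"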
check_attributes_after_upload() {
if [ $# -ne 1 ]; then
log 2 "'check_attributes_after_upload' requires file size"
return 1
fi
if ! object_size=$(xmllint --xpath '//*[local-name()="ObjectSize"]/text()' "$TEST_FILE_FOLDER/attributes.txt" 2>&1); then
log 2 "error getting checksum: $object_size"
return 1
fi
if [ "$object_size" != "$1" ]; then
log 2 "expected file size of '$1', was '$object_size'"
return 1
fi
if ! error=$(xmllint --xpath '//*[local-name()="StorageClass"]/text()' "$TEST_FILE_FOLDER/attributes.txt" 2>&1); then
log 2 "error getting storage class: $error"
return 1
fi
if ! etag=$(xmllint --xpath '//*[local-name()="ETag"]/text()' "$TEST_FILE_FOLDER/attributes.txt" 2>&1); then
log 2 "error getting etag: $etag"
return 1
fi
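# multipart ETags typically take the form <32-hex-digit MD5>-<part count>; four parts were uploaded above, hence the -4 suffix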
if ! [[ $etag =~ ^[a-fA-F0-9]{32}-4$ ]]; then
log 2 "unexpected etag pattern ($etag)"
return 1
fi
if ! parts_count=$(xmllint --xpath '//*[local-name()="PartsCount"]/text()' "$TEST_FILE_FOLDER/attributes.txt" 2>&1); then
log 2 "error getting parts_count: $parts_count"
return 1
fi
if [[ $parts_count != 4 ]]; then
log 2 "unexpected parts count, expected 4, was $parts_count"
return 1
fi
}
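# The local-name() XPath used above matches elements regardless of any XML
# namespace in the response, so parsing works whether or not the gateway emits
# xmlns="http://s3.amazonaws.com/doc/2006-03-01/". A quick sanity check, assuming
# xmllint is installed:
#
#   echo '<r xmlns="urn:x"><ObjectSize>42</ObjectSize></r>' | \
#     xmllint --xpath '//*[local-name()="ObjectSize"]/text()' -   # prints 42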
check_attributes_invalid_param() {
if [ "$1" -ne 1 ]; then
log 2 "'check_attributes_invalid_param' requires test file"
return 1
fi
if ! result=$(COMMAND_LOG="$COMMAND_LOG" BUCKET_NAME="$BUCKET_ONE_NAME" OBJECT_KEY="$1" ATTRIBUTES="ETags" OUTPUT_FILE="$TEST_FILE_FOLDER/attributes.txt" ./tests/rest_scripts/get_object_attributes.sh); then
log 2 "error listing object attributes: $result"
return 1
fi
if [ "$result" != "400" ]; then
log 2 "expected response code of '400', was '$result'"
return 1
fi
log 5 "attributes: $(cat "$TEST_FILE_FOLDER/attributes.txt")"
if ! code=$(xmllint --xpath '//*[local-name()="Code"]/text()' "$TEST_FILE_FOLDER/attributes.txt" 2>&1); then
log 2 "error getting code: $code"
return 1
fi
if [ "$code" != "InvalidArgument" ]; then
log 2 "expected 'InvalidArgument', was '$code'"
return 1
fi
}
add_and_check_checksum() {
if [ $# -ne 2 ]; then
log 2 "'add_and_check_checksum' requires data file, key"
return 1
fi
if ! result=$(COMMAND_LOG="$COMMAND_LOG" DATA_FILE="$1" BUCKET_NAME="$BUCKET_ONE_NAME" OBJECT_KEY="$2" CHECKSUM="true" OUTPUT_FILE="$TEST_FILE_FOLDER/result.txt" ./tests/rest_scripts/put_object.sh); then
log 2 "error sending object file: $result"
return 1
fi
if [ "$result" != "200" ]; then
log 2 "expected response code of '200', was '$result' (output: $(cat "$TEST_FILE_FOLDER/result.txt")"
return 1
fi
if ! result=$(COMMAND_LOG="$COMMAND_LOG" BUCKET_NAME="$BUCKET_ONE_NAME" OBJECT_KEY="$2" ATTRIBUTES="Checksum" OUTPUT_FILE="$TEST_FILE_FOLDER/attributes.txt" ./tests/rest_scripts/get_object_attributes.sh); then
log 2 "error listing object attributes: $result (output: $(cat "$TEST_FILE_FOLDER/attributes.txt")"
return 1
fi
if [ "$result" != "200" ]; then
log 2 "expected response code of '200', was '$result'"
return 1
fi
log 5 "attributes: $(cat "$TEST_FILE_FOLDER/attributes.txt")"
if ! checksum=$(xmllint --xpath '//*[local-name()="ChecksumSHA256"]/text()' "$TEST_FILE_FOLDER/attributes.txt" 2>&1); then
log 2 "error getting checksum: $checksum"
return 1
fi
if [ "$checksum" == "" ]; then
log 2 "empty checksum"
return 1
fi
}
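# S3-style SHA256 checksums are base64-encoded digests, not hex, so a local
# verification sketch (assuming openssl and base64 are available) might be:
#
#   expected=$(openssl dgst -sha256 -binary "$1" | base64)
#   [ "$checksum" = "$expected" ] || echo "checksum mismatch"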


@@ -34,22 +34,21 @@ abort_all_multipart_uploads() {
log 5 "Modified upload list: ${modified_upload_list[*]}"
has_uploads=$(echo "${modified_upload_list[*]}" | jq 'has("Uploads")')
if [[ $has_uploads != false ]]; then
lines=$(echo "${modified_upload_list[*]}" | jq -r '.Uploads[] | "--key \(.Key) --upload-id \(.UploadId)"') || lines_result=$?
if [[ $lines_result -ne 0 ]]; then
echo "error getting lines for multipart upload delete: $lines"
if [[ $has_uploads == false ]]; then
return 0
fi
if ! lines=$(echo "${modified_upload_list[*]}" | jq -r '.Uploads[] | "--key \(.Key) --upload-id \(.UploadId)"' 2>&1); then
log 2 "error getting lines for multipart upload delete: $lines"
return 1
fi
log 5 "$lines"
while read -r line; do
# shellcheck disable=SC2086
if ! error=$(aws --no-verify-ssl s3api abort-multipart-upload --bucket "$1" $line 2>&1); then
echo "error aborting multipart upload: $error"
return 1
fi
log 5 "$lines"
while read -r line; do
# shellcheck disable=SC2086
error=$(aws --no-verify-ssl s3api abort-multipart-upload --bucket "$1" $line 2>&1) || abort_result=$?
if [[ $abort_result -ne 0 ]]; then
echo "error aborting multipart upload: $error"
return 1
fi
done <<< "$lines"
fi
done <<< "$lines"
return 0
}
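# with two in-progress uploads, the jq filter above expands "$lines" to one
# abort invocation per line, for example (upload IDs are illustrative):
#   --key bucket-file-one --upload-id VXBsb2FkSWQx
#   --key bucket-file-two --upload-id VXBsb2FkSWQy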

tests/util_bucket.sh (Normal file, 304 lines)

@@ -0,0 +1,304 @@
#!/usr/bin/env bash
# recursively delete an AWS bucket
# param: client, bucket name
# fail if error
delete_bucket_recursive() {
log 6 "delete_bucket_recursive"
if [ $# -ne 2 ]; then
log 2 "'delete_bucket_recursive' requires client, bucket name"
return 1
fi
local exit_code=0
local error
if [[ $1 == 's3' ]]; then
error=$(aws --no-verify-ssl s3 rb s3://"$2" --force 2>&1) || exit_code="$?"
elif [[ $1 == "aws" ]] || [[ $1 == 's3api' ]]; then
if ! delete_bucket_recursive_s3api "$2"; then
log 2 "error deleting bucket recursively (s3api)"
return 1
fi
return 0
elif [[ $1 == "s3cmd" ]]; then
error=$(s3cmd "${S3CMD_OPTS[@]}" --no-check-certificate rb s3://"$2" --recursive 2>&1) || exit_code="$?"
elif [[ $1 == "mc" ]]; then
error=$(delete_bucket_recursive_mc "$2" 2>&1) || exit_code="$?"
else
log 2 "invalid client '$1'"
return 1
fi
if [ $exit_code -ne 0 ]; then
if [[ "$error" == *"The specified bucket does not exist"* ]]; then
return 0
else
log 2 "error deleting bucket recursively: $error"
return 1
fi
fi
return 0
}
# restore bucket to pre-test state (or prep for deletion)
# param: bucket name
# return 0 on success, 1 on error
clear_bucket_s3api() {
log 6 "clear_bucket_s3api"
if [ $# -ne 1 ]; then
log 2 "'clear_bucket_s3api' requires bucket name"
return 1
fi
if [[ $LOG_LEVEL_INT -ge 5 ]]; then
if ! log_bucket_policy "$1"; then
log 2 "error logging bucket policy"
return 1
fi
fi
if ! check_object_lock_config "$1"; then
log 2 "error checking object lock config"
return 1
fi
if [[ "$DIRECT" != "true" ]] && ! add_governance_bypass_policy "$1"; then
log 2 "error adding governance bypass policy"
return 1
fi
if ! list_and_delete_objects "$1"; then
log 2 "error listing and deleting objects"
return 1
fi
#run check_ownership_rule_and_reset_acl "$1"
#assert_success "error checking ownership rule and resetting acl"
# shellcheck disable=SC2154
if [[ $lock_config_exists == true ]] && ! put_object_lock_configuration_disabled "$1"; then
log 2 "error disabling object lock config"
return 1
fi
#if ! put_bucket_versioning "s3api" "$1" "Suspended"; then
# log 2 "error suspending bucket versioning"
# return 1
#fi
#if ! change_bucket_owner "$AWS_ACCESS_KEY_ID" "$AWS_SECRET_ACCESS_KEY" "$1" "$AWS_ACCESS_KEY_ID"; then
# log 2 "error changing bucket owner back to root"
# return 1
#fi
}
# params: bucket name
# return 0 if able to delete recursively, 1 if not
delete_bucket_recursive_s3api() {
log 6 "delete_bucket_recursive_s3api"
if [ $# -ne 1 ]; then
log 2 "'delete_bucket_recursive_s3api' requires bucket name"
return 1
fi
if ! clear_bucket_s3api "$1"; then
log 2 "error clearing bucket (s3api)"
return 1
fi
if ! delete_bucket 's3api' "$1"; then
log 2 "error deleting bucket"
return 1
fi
return 0
}
# params: client, bucket name
# return 0 on success, 1 on error
delete_bucket_contents() {
log 6 "delete_bucket_contents"
if [ $# -ne 2 ]; then
log 2 "'delete_bucket_contents' requires client, bucket name"
return 1
fi
local exit_code=0
local error
if [[ $1 == "aws" ]] || [[ $1 == 's3api' ]]; then
if ! clear_bucket_s3api "$2"; then
log 2 "error clearing bucket (s3api)"
return 1
fi
elif [[ $1 == "s3cmd" ]]; then
delete_bucket_recursive "s3cmd" "$1"
elif [[ $1 == "mc" ]]; then
delete_bucket_recursive "mc" "$1"
elif [[ $1 == "s3" ]]; then
delete_bucket_recursive "s3" "$1"
else
log 2 "unrecognized client: '$1'"
return 1
fi
return 0
}
# check if bucket exists
# params: client, bucket name
# return 0 for true, 1 for false, 2 for error
bucket_exists() {
if [ $# -ne 2 ]; then
log 2 "bucket_exists command requires client, bucket name"
return 2
fi
local exists=0
head_bucket "$1" "$2" || exists=$?
# shellcheck disable=SC2181
if [ $exists -ne 0 ] && [ $exists -ne 1 ]; then
log 2 "unexpected error checking if bucket exists"
return 2
fi
if [ $exists -eq 0 ]; then
return 0
fi
return 1
}
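# a minimal caller sketch distinguishing the three return values, assuming
# BUCKET_ONE_NAME is set:
#
#   bucket_exists "s3api" "$BUCKET_ONE_NAME"
#   case $? in
#     0) log 5 "bucket exists" ;;
#     1) log 5 "bucket does not exist" ;;
#     *) log 2 "error checking bucket" ;;
#   esac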
# params: client, bucket name
# return 0 for success, 1 for error
bucket_cleanup() {
log 6 "bucket_cleanup"
if [ $# -ne 2 ]; then
log 2 "'bucket_cleanup' requires client, bucket name"
return 1
fi
if [[ $RECREATE_BUCKETS == "false" ]]; then
if ! delete_bucket_contents "$1" "$2"; then
log 2 "error deleting bucket contents"
return 1
fi
if ! delete_bucket_policy "$1" "$2"; then
log 2 "error deleting bucket policy"
return 1
fi
if ! get_object_ownership_rule_and_update_acl "$2"; then
log 2 "error getting object ownership rule and updating ACL"
return 1
fi
if ! abort_all_multipart_uploads "$2"; then
log 2 "error aborting all multipart uploads"
return 1
fi
if [ "$RUN_USERS" == "true" ] && ! reset_bucket_owner "$2"; then
log 2 "error resetting bucket owner"
return 1
fi
log 5 "bucket contents, policy, ACL deletion success"
return 0
fi
if ! delete_bucket_recursive "$1" "$2"; then
log 2 "error with recursive bucket delete"
return 1
fi
log 5 "bucket deletion success"
return 0
}
# params: client, bucket name
# return 0 for success, 1 for error
bucket_cleanup_if_bucket_exists() {
log 6 "bucket_cleanup_if_bucket_exists"
if [ $# -ne 2 ]; then
log 2 "'bucket_cleanup_if_bucket_exists' requires client, bucket name"
return 1
fi
if bucket_exists "$1" "$2"; then
if ! bucket_cleanup "$1" "$2"; then
log 2 "error deleting bucket and/or contents"
return 1
fi
log 5 "bucket and/or bucket data deletion success"
return 0
fi
return 0
}
# params: client, bucket name(s)
# return 0 for success, 1 for failure
setup_buckets() {
if [ $# -lt 2 ]; then
log 2 "'setup_buckets' command requires client, bucket names"
return 1
fi
for name in "${@:2}"; do
if ! setup_bucket "$1" "$name"; then
log 2 "error setting up bucket $name"
return 1
fi
done
return 0
}
# params: client, bucket name
# return 0 on successful setup, 1 on error
setup_bucket() {
log 6 "setup_bucket"
if [ $# -ne 2 ]; then
log 2 "'setup_bucket' requires client, bucket name"
return 1
fi
if ! bucket_exists "$1" "$2" && [[ $RECREATE_BUCKETS == "false" ]]; then
log 2 "When RECREATE_BUCKETS isn't set to \"true\", buckets should be pre-created by user"
return 1
fi
if ! bucket_cleanup_if_bucket_exists "$1" "$2"; then
log 2 "error deleting bucket or contents if they exist"
return 1
fi
log 5 "util.setup_bucket: command type: $1, bucket name: $2"
if [[ $RECREATE_BUCKETS == "true" ]]; then
if ! create_bucket "$1" "$2"; then
log 2 "error creating bucket"
return 1
fi
else
log 5 "skipping bucket re-creation"
fi
if [[ $1 == "s3cmd" ]]; then
log 5 "putting bucket ownership controls"
if bucket_exists "s3cmd" "$2" && ! put_bucket_ownership_controls "$2" "BucketOwnerPreferred"; then
log 2 "error putting bucket ownership controls"
return 1
fi
fi
return 0
}
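In a test file, setup_bucket is typically paired with bucket_cleanup at teardown. A sketch of that pairing, assuming a BATS-style test and the usual BUCKET_ONE_NAME environment variable:
setup() {
  if ! setup_bucket "s3api" "$BUCKET_ONE_NAME"; then
    log 2 "error setting up bucket"
    return 1
  fi
}
teardown() {
  bucket_cleanup "s3api" "$BUCKET_ONE_NAME"
}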
# check if bucket info can be retrieved
# param: path of bucket or folder
# return 0 for yes, 1 for no, 2 for error
bucket_is_accessible() {
if [ $# -ne 1 ]; then
echo "bucket accessibility check missing bucket name"
return 2
fi
local exit_code=0
local error
error=$(aws --no-verify-ssl s3api head-bucket --bucket "$1" 2>&1) || exit_code="$?"
if [ $exit_code -eq 0 ]; then
return 0
fi
if [[ "$error" == *"500"* ]]; then
return 1
fi
echo "Error checking bucket accessibility: $error"
return 2
}


@@ -166,3 +166,35 @@ list_objects_check_file_count() {
fi
return 0
}
check_object_listing_with_prefixes() {
if [ $# -ne 3 ]; then
log 2 "'check_object_listing_with_prefixes' requires bucket name, folder name, object name"
return 1
fi
if ! list_objects_s3api_v1 "$BUCKET_ONE_NAME" "/"; then
log 2 "error listing objects with delimiter '/'"
return 1
fi
if ! prefix=$(echo "${objects[@]}" | jq -r ".CommonPrefixes[0].Prefix" 2>&1); then
log 2 "error getting object prefix from object list: $prefix"
return 1
fi
if [[ $prefix != "$2/" ]]; then
log 2 "prefix doesn't match (expected $2, actual $prefix/)"
return 1
fi
if ! list_objects_s3api_v1 "$1" "#"; then
log 2 "error listing objects with delimiter '#'"
return 1
fi
if ! key=$(echo "${objects[@]}" | jq -r ".Contents[0].Key" 2>&1); then
log 2 "error getting key from object list: $key"
return 1
fi
if [[ $key != "$2/$3" ]]; then
log 2 "key doesn't match (expected $key, actual $2/$3)"
return 1
fi
return 0
}
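For both assertions to pass, the bucket needs a single object stored under a folder-style key, so that delimiter "/" yields a CommonPrefix while delimiter "#" (which matches nothing in the key) yields the full key. A hypothetical setup, with folder and object names illustrative:
if ! put_object "s3api" "$TEST_FILE_FOLDER/myobject" "$BUCKET_ONE_NAME" "myfolder/myobject"; then
  log 2 "error putting object"
  return 1
fi
if ! check_object_listing_with_prefixes "$BUCKET_ONE_NAME" "myfolder" "myobject"; then
  log 2 "error checking object listing"
  return 1
fi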

tests/util_list_parts.sh Normal file

@@ -0,0 +1,158 @@
#!/usr/bin/env bash
check_part_list_rest() {
if [ $# -lt 4 ]; then
log 2 "'check_part_list_rest' requires bucket, file name, upload ID, expected count, etags"
return 1
fi
if ! result=$(COMMAND_LOG="$COMMAND_LOG" BUCKET_NAME="$1" OBJECT_KEY="$2" UPLOAD_ID="$3" OUTPUT_FILE="$TEST_FILE_FOLDER/parts.txt" ./tests/rest_scripts/list_parts.sh); then
log 2 "error listing multipart upload parts: $result"
return 1
fi
if [ "$result" != "200" ]; then
log 2 "list-parts returned response code: $result, reply: $(cat "$TEST_FILE_FOLDER/parts.txt")"
return 1
fi
log 5 "parts list: $(cat "$TEST_FILE_FOLDER/parts.txt")"
if ! parts_upload_id=$(xmllint --xpath '//*[local-name()="UploadId"]/text()' "$TEST_FILE_FOLDER/parts.txt" 2>&1); then
log 2 "error retrieving UploadId: $parts_upload_id"
return 1
fi
if [ "$parts_upload_id" != "$3" ]; then
log 2 "expected '$3', UploadId value is '$parts_upload_id'"
return 1
fi
if ! part_count=$(xmllint --xpath 'count(//*[local-name()="Part"])' "$TEST_FILE_FOLDER/parts.txt" 2>&1); then
log 2 "error retrieving part count: $part_count"
return 1
fi
if [ "$part_count" != "$4" ]; then
log 2 "expected $4, 'Part' count is '$part_count'"
return 1
fi
if [ "$4" == 0 ]; then
return 0
fi
if ! etags=$(xmllint --xpath '//*[local-name()="ETag"]/text()' "$TEST_FILE_FOLDER/parts.txt" 2>&1 | tr '\n' ' '); then
log 2 "error retrieving etags: $etags"
return 1
fi
read -ra etags_array <<< "$etags"
shift 4
idx=0
while [ $# -gt 0 ]; do
if [ "$1" != "${etags_array[$idx]}" ]; then
log 2 "etag mismatch (expected '$1', actual ${etags_array[$idx]})"
return 1
fi
((idx++))
shift
done
return 0
}
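A hypothetical call after two parts have been uploaded, passing the expected ETags positionally in part-number order (the file, upload ID, and ETag variables are illustrative and assumed set by earlier helpers):
if ! check_part_list_rest "$BUCKET_ONE_NAME" "$test_file" "$upload_id" 2 "$etag_one" "$etag_two"; then
  log 2 "error checking part list"
  return 1
fi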
perform_multipart_upload_rest() {
if [ $# -ne 6 ]; then
log 2 "'upload_check_parts' requires bucket, key, part list"
return 1
fi
if ! create_upload_and_get_id_rest "$1" "$2"; then
log 2 "error creating upload"
return 1
fi
# shellcheck disable=SC2154
if ! upload_part_and_get_etag_rest "$1" "$2" "$upload_id" 1 "$3"; then
log 2 "error uploading part 1"
return 1
fi
# shellcheck disable=SC2154
parts_payload="<Part><ETag>$etag</ETag><PartNumber>1</PartNumber></Part>"
if ! upload_part_and_get_etag_rest "$1" "$2" "$upload_id" 2 "$4"; then
log 2 "error uploading part 2"
return 1
fi
parts_payload+="<Part><ETag>$etag</ETag><PartNumber>2</PartNumber></Part>"
if ! upload_part_and_get_etag_rest "$1" "$2" "$upload_id" 3 "$5"; then
log 2 "error uploading part 3"
return 1
fi
parts_payload+="<Part><ETag>$etag</ETag><PartNumber>3</PartNumber></Part>"
if ! upload_part_and_get_etag_rest "$1" "$2" "$upload_id" 4 "$6"; then
log 2 "error uploading part 4"
return 1
fi
parts_payload+="<Part><ETag>$etag</ETag><PartNumber>4</PartNumber></Part>"
if ! result=$(COMMAND_LOG="$COMMAND_LOG" BUCKET_NAME="$1" OBJECT_KEY="$2" UPLOAD_ID="$upload_id" PARTS="$parts_payload" OUTPUT_FILE="$TEST_FILE_FOLDER/result.txt" ./tests/rest_scripts/complete_multipart_upload.sh); then
log 2 "error completing multipart upload: $result"
return 1
fi
if [ "$result" != "200" ]; then
log 2 "complete multipart upload returned code $result: $(cat "$TEST_FILE_FOLDER/result.txt")"
return 1
fi
return 0
}
upload_check_parts() {
if [ $# -ne 6 ]; then
log 2 "'upload_check_parts' requires bucket, key, part list"
return 1
fi
if ! create_upload_and_get_id_rest "$1" "$2"; then
log 2 "error creating upload"
return 1
fi
# shellcheck disable=SC2154
if ! check_part_list_rest "$1" "$2" "$upload_id" 0; then
log 2 "error checking part list before part upload"
return 1
fi
parts_payload=""
if ! upload_check_part "$1" "$2" "$upload_id" 1 "$3"; then
log 2 "error uploading and checking first part"
return 1
fi
# shellcheck disable=SC2154
etag_one=$etag
if ! upload_check_part "$1" "$2" "$upload_id" 2 "$4" "$etag_one"; then
log 2 "error uploading and checking second part"
return 1
fi
etag_two=$etag
if ! upload_check_part "$1" "$2" "$upload_id" 3 "$5" "$etag_one" "$etag_two"; then
log 2 "error uploading and checking third part"
return 1
fi
etag_three=$etag
if ! upload_check_part "$1" "$2" "$upload_id" 4 "$6" "$etag_one" "$etag_two" "$etag_three"; then
log 2 "error uploading and checking fourth part"
return 1
fi
log 5 "PARTS PAYLOAD: $parts_payload"
if ! result=$(COMMAND_LOG="$COMMAND_LOG" BUCKET_NAME="$1" OBJECT_KEY="$2" UPLOAD_ID="$upload_id" PARTS="$parts_payload" OUTPUT_FILE="$TEST_FILE_FOLDER/result.txt" ./tests/rest_scripts/complete_multipart_upload.sh); then
log 2 "error completing multipart upload: $result"
return 1
fi
if [ "$result" != "200" ]; then
log 2 "complete multipart upload returned code $result: $(cat "$TEST_FILE_FOLDER/result.txt")"
return 1
fi
return 0
}
upload_check_part() {
if [ $# -lt 5 ]; then
log 2 "'upload_check_part' requires bucket, key, upload ID, part number, part, etags"
return 1
fi
if ! upload_part_and_get_etag_rest "$1" "$2" "$3" "$4" "$5"; then
log 2 "error uploading part $4"
return 1
fi
parts_payload+="<Part><ETag>$etag</ETag><PartNumber>$4</PartNumber></Part>"
# shellcheck disable=SC2068
if ! check_part_list_rest "$1" "$2" "$3" "$4" "${@:6}" "$etag"; then
log 2 "error checking part list after upload $4"
return 1
fi
return 0
}
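Each upload_check_part call appends one <Part> element to parts_payload. Assuming complete_multipart_upload.sh wraps the PARTS value in the standard CompleteMultipartUpload element, the final request body for two parts would look roughly like this (ETags illustrative):
<CompleteMultipartUpload>
  <Part><ETag>"etag-of-part-one"</ETag><PartNumber>1</PartNumber></Part>
  <Part><ETag>"etag-of-part-two"</ETag><PartNumber>2</PartNumber></Part>
</CompleteMultipartUpload>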


@@ -218,36 +218,32 @@ create_and_list_multipart_uploads() {
multipart_upload_from_bucket() {
if [ $# -ne 4 ]; then
echo "multipart upload from bucket command missing bucket, copy source, key, and/or part count"
log 2 "multipart upload from bucket command missing bucket, copy source, key, and/or part count"
return 1
fi
split_file "$3" "$4" || split_result=$?
if [[ $split_result -ne 0 ]]; then
echo "error splitting file"
if ! split_file "$3" "$4"; then
log 2 "error splitting file"
return 1
fi
for ((i=0;i<$4;i++)) {
echo "key: $3"
put_object "s3api" "$3-$i" "$1" "$2-$i" || copy_result=$?
if [[ $copy_result -ne 0 ]]; then
echo "error copying object"
if ! put_object "s3api" "$3-$i" "$1" "$2-$i"; then
log 2 "error copying object"
return 1
fi
}
create_multipart_upload "$1" "$2-copy" || upload_result=$?
if [[ $upload_result -ne 0 ]]; then
echo "error running first multpart upload"
if ! create_multipart_upload "$1" "$2-copy"; then
log 2 "error running first multpart upload"
return 1
fi
parts="["
for ((i = 1; i <= $4; i++)); do
upload_part_copy "$1" "$2-copy" "$upload_id" "$2" "$i" || local upload_result=$?
if [[ $upload_result -ne 0 ]]; then
echo "error uploading part $i"
if ! upload_part_copy "$1" "$2-copy" "$upload_id" "$2" "$i"; then
log 2 "error uploading part $i"
return 1
fi
parts+="{\"ETag\": $etag, \"PartNumber\": $i}"
@@ -257,9 +253,8 @@ multipart_upload_from_bucket() {
done
parts+="]"
error=$(aws --no-verify-ssl s3api complete-multipart-upload --bucket "$1" --key "$2-copy" --upload-id "$upload_id" --multipart-upload '{"Parts": '"$parts"'}') || local completed=$?
if [[ $completed -ne 0 ]]; then
echo "Error completing upload: $error"
if ! error=$(aws --no-verify-ssl s3api complete-multipart-upload --bucket "$1" --key "$2-copy" --upload-id "$upload_id" --multipart-upload '{"Parts": '"$parts"'}' 2>&1); then
log 2 "Error completing upload: $error"
return 1
fi
return 0
@@ -270,35 +265,27 @@ multipart_upload_from_bucket_range() {
echo "multipart upload from bucket with range command requires bucket, copy source, key, part count, and range"
return 1
fi
split_file "$3" "$4" || local split_result=$?
if [[ $split_result -ne 0 ]]; then
echo "error splitting file"
if ! split_file "$3" "$4"; then
log 2 "error splitting file"
return 1
fi
for ((i=0;i<$4;i++)) {
echo "key: $3"
log 5 "file info: $(ls -l "$3"-"$i")"
put_object "s3api" "$3-$i" "$1" "$2-$i" || local copy_result=$?
if [[ $copy_result -ne 0 ]]; then
echo "error copying object"
log 5 "key: $3, file info: $(ls -l "$3"-"$i")"
if ! put_object "s3api" "$3-$i" "$1" "$2-$i"; then
log 2 "error copying object"
return 1
fi
}
create_multipart_upload "$1" "$2-copy" || local create_multipart_result=$?
if [[ $create_multipart_result -ne 0 ]]; then
echo "error running first multpart upload"
if ! create_multipart_upload "$1" "$2-copy"; then
log 2 "error running first multpart upload"
return 1
fi
parts="["
for ((i = 1; i <= $4; i++)); do
upload_part_copy_with_range "$1" "$2-copy" "$upload_id" "$2" "$i" "$5" || local upload_part_copy_result=$?
if [[ $upload_part_copy_result -ne 0 ]]; then
if ! upload_part_copy_with_range "$1" "$2-copy" "$upload_id" "$2" "$i" "$5"; then
# shellcheck disable=SC2154
echo "error uploading part $i: $upload_part_copy_error"
log 2 "error uploading part $i: $upload_part_copy_error"
return 1
fi
parts+="{\"ETag\": $etag, \"PartNumber\": $i}"
@@ -307,10 +294,8 @@ multipart_upload_from_bucket_range() {
fi
done
parts+="]"
error=$(aws --no-verify-ssl s3api complete-multipart-upload --bucket "$1" --key "$2-copy" --upload-id "$upload_id" --multipart-upload '{"Parts": '"$parts"'}') || local completed=$?
if [[ $completed -ne 0 ]]; then
echo "Error completing upload: $error"
if ! error=$(aws --no-verify-ssl s3api complete-multipart-upload --bucket "$1" --key "$2-copy" --upload-id "$upload_id" --multipart-upload '{"Parts": '"$parts"'}' 2>&1); then
log 2 "Error completing upload: $error"
return 1
fi
return 0
@@ -580,18 +565,34 @@ create_abort_multipart_upload_rest() {
log 2 "'create_abort_upload_rest' requires bucket, key"
return 1
fi
if ! list_and_check_upload "$1" "$2"; then
log 2 "error listing multipart uploads before creation"
return 1
fi
log 5 "uploads before upload: $(cat "$TEST_FILE_FOLDER/uploads.txt")"
if ! create_upload_and_get_id_rest "$1" "$2"; then
log 2 "error creating upload"
return 1
fi
if ! result=$(COMMAND_LOG="$COMMAND_LOG" BUCKET_NAME="$1" OBJECT_KEY="$2" UPLOAD_ID="$upload_id" OUTPUT_FILE="$TEST_FILE_FOLDER/output.txt" ./tests/rest_scripts/abort_multipart_upload.sh); then
if ! list_and_check_upload "$1" "$2" "$upload_id"; then
log 2 "error listing multipart uploads after upload creation"
return 1
fi
log 5 "uploads after upload creation: $(cat "$TEST_FILE_FOLDER/uploads.txt")"
if ! result=$(COMMAND_LOG="$COMMAND_LOG" BUCKET_NAME="$1" OBJECT_KEY="$2" UPLOAD_ID="$upload_id" OUTPUT_FILE="$TEST_FILE_FOLDER/result.txt" ./tests/rest_scripts/abort_multipart_upload.sh); then
log 2 "error aborting multipart upload: $result"
return 1
fi
if [ "$result" != "204" ]; then
log 2 "expected '204' response, actual was '$result' (error: $(cat "$TEST_FILE_FOLDER"/output.txt)"
log 2 "expected '204' response, actual was '$result' (error: $(cat "$TEST_FILE_FOLDER"/result.txt)"
return 1
fi
log 5 "final uploads: $(cat "$TEST_FILE_FOLDER/uploads.txt")"
if ! list_and_check_upload "$1" "$2"; then
log 2 "error listing multipart uploads after abort"
return 1
fi
return 0
}
multipart_upload_range_too_large() {
@@ -610,3 +611,71 @@ multipart_upload_range_too_large() {
fi
return 0
}
list_and_check_upload() {
if [ $# -lt 2 ]; then
log 2 "'list_and_check_upload' requires bucket, key, upload ID (optional)"
return 1
fi
if ! uploads=$(COMMAND_LOG="$COMMAND_LOG" BUCKET_NAME="$1" OUTPUT_FILE="$TEST_FILE_FOLDER/uploads.txt" ./tests/rest_scripts/list_multipart_uploads.sh); then
log 2 "error listing multipart uploads before upload: $result"
return 1
fi
if ! upload_count=$(xmllint --xpath 'count(//*[local-name()="Upload"])' "$TEST_FILE_FOLDER/uploads.txt" 2>&1); then
log 2 "error retrieving upload count: $upload_count"
return 1
fi
if [[ $# -eq 2 && $upload_count != 0 ]]; then
log 2 "upload count mismatch (expected 0, actual $upload_count)"
return 1
elif [[ $# -eq 3 && $upload_count != 1 ]]; then
log 2 "upload count mismatch (expected 1, actual $upload_count)"
return 1
fi
if [ $# -eq 2 ]; then
return 0
fi
if ! key=$(xmllint --xpath '//*[local-name()="Key"]/text()' "$TEST_FILE_FOLDER/uploads.txt" 2>&1); then
log 2 "error retrieving key: $key"
return 1
fi
if [ "$key" != "$2" ]; then
log 2 "key mismatch (expected '$2', actual '$key')"
return 1
fi
if ! upload_id=$(xmllint --xpath '//*[local-name()="UploadId"]/text()' "$TEST_FILE_FOLDER/uploads.txt" 2>&1); then
log 2 "error retrieving upload ID: $upload_id"
return 1
fi
if [ "$upload_id" != "$3" ]; then
log 2 "upload ID mismatch (expected '$3', actual '$upload_id')"
return 1
fi
return 0
}
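A hypothetical round trip exercising both modes of list_and_check_upload: first expecting an empty listing, then a listing containing exactly the created upload (file name illustrative):
if ! list_and_check_upload "$BUCKET_ONE_NAME" "$test_file"; then
  log 2 "error verifying empty upload list"
  return 1
fi
if ! create_upload_and_get_id_rest "$BUCKET_ONE_NAME" "$test_file"; then
  log 2 "error creating upload"
  return 1
fi
if ! list_and_check_upload "$BUCKET_ONE_NAME" "$test_file" "$upload_id"; then
  log 2 "error verifying single-upload list"
  return 1
fi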
run_and_verify_multipart_upload_with_valid_range() {
if [ $# -ne 3 ]; then
log 2 "'run_and_verify_multipart_upload_with_valid_range' requires bucket, key, 5MB file"
return 1
fi
range_max=$((5*1024*1024-1))
if ! multipart_upload_from_bucket_range "$1" "$2" "$3" 4 "bytes=0-$range_max"; then
log 2 "error with multipart upload"
return 1
fi
if ! get_object "s3api" "$1" "$2-copy" "$3-copy"; then
log 2 "error getting object"
return 1
fi
if [[ $(uname) == 'Darwin' ]]; then
object_size=$(stat -f%z "$3-copy")
else
object_size=$(stat --format=%s "$3-copy")
fi
if [[ $object_size -ne $((range_max*4+4)) ]]; then
log 2 "object size mismatch (expected $((range_max*4+4)), actual $object_size)"
return 1
fi
return 0
}
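The size check follows from the range: each of the 4 parts copies bytes 0 through range_max, i.e. range_max+1 bytes, so:
# per-part size: range_max + 1 = 5*1024*1024 = 5242880 bytes (5 MiB)
# total: 4 * (range_max + 1) = 4*range_max + 4 = 20971520 bytes (20 MiB)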


@@ -205,3 +205,20 @@ get_and_check_policy() {
fi
return 0
}
put_and_check_for_malformed_policy() {
if [ $# -ne 2 ]; then
log 2 "'put_and_check_for_malformed_policy' requires bucket name, policy file"
return 1
fi
if put_bucket_policy "s3api" "$1" "$2"; then
log 2 "put succeeded despite malformed policy"
return 1
fi
# shellcheck disable=SC2154
if [[ "$put_bucket_policy_error" != *"MalformedPolicy"*"invalid action"* ]]; then
log 2 "invalid policy error: $put_bucket_policy_error"
return 1
fi
return 0
}
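A policy file that should trip this check would name an action S3 does not define. A hypothetical example (action and bucket name are illustrative):
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {"AWS": "*"},
      "Action": "s3:NotARealAction",
      "Resource": "arn:aws:s3:::my-test-bucket/*"
    }
  ]
}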


@@ -253,3 +253,60 @@ get_and_verify_object_tags() {
fi
return 0
}
verify_no_bucket_tags_rest() {
if [ $# -ne 1 ]; then
log 2 "'verify_no_bucket_tags_rest' requires bucket name"
return 1
fi
if ! result=$(COMMAND_LOG="$COMMAND_LOG" BUCKET_NAME="$1" OUTPUT_FILE="$TEST_FILE_FOLDER/bucket_tagging.txt" ./tests/rest_scripts/get_bucket_tagging.sh); then
log 2 "error listing bucket tags: $result"
return 1
fi
if [ "$result" != "404" ]; then
log 2 "expected response code of '404', was '$result' (error: $(cat "$TEST_FILE_FOLDER/bucket_tagging.txt"))"
return 1
fi
return 0
}
add_verify_bucket_tags_rest() {
if [ $# -ne 3 ]; then
log 2 "'add_verify_bucket_tags_rest' requires bucket name, test key, test value"
return 1
fi
if ! result=$(COMMAND_LOG="$COMMAND_LOG" BUCKET_NAME="$1" TAG_KEY="$2" TAG_VALUE="$3" OUTPUT_FILE="$TEST_FILE_FOLDER/result.txt" ./tests/rest_scripts/put_bucket_tagging.sh); then
log 2 "error putting bucket tags: $result"
return 1
fi
if [ "$result" != "204" ]; then
log 2 "expected response code of '204', was '$result' (error: $(cat "$TEST_FILE_FOLDER/result.txt"))"
return 1
fi
if ! result=$(COMMAND_LOG="$COMMAND_LOG" BUCKET_NAME="$BUCKET_ONE_NAME" OUTPUT_FILE="$TEST_FILE_FOLDER/bucket_tagging.txt" ./tests/rest_scripts/get_bucket_tagging.sh); then
log 2 "error listing bucket tags: $result"
return 1
fi
if [ "$result" != "200" ]; then
log 2 "expected response code of '200', was '$result' (error: $(cat "$TEST_FILE_FOLDER/bucket_tagging.txt"))"
return 1
fi
log 5 "tags: $(cat "$TEST_FILE_FOLDER/bucket_tagging.txt")"
if ! key=$(xmllint --xpath '//*[local-name()="Key"]/text()' "$TEST_FILE_FOLDER/bucket_tagging.txt" 2>&1); then
log 2 "error retrieving key: $key"
return 1
fi
if [ "$key" != "$2" ]; then
log 2 "key mismatch (expected '$2', actual '$key')"
return 1
fi
if ! value=$(xmllint --xpath '//*[local-name()="Value"]/text()' "$TEST_FILE_FOLDER/bucket_tagging.txt" 2>&1); then
log 2 "error retrieving value: $value"
return 1
fi
if [ "$value" != "$3" ]; then
log 2 "value mismatch (expected '$3', actual '$value')"
return 1
fi
return 0
}
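The two xmllint queries read Key and Value from the standard GetBucketTagging response body, which for a single tag looks like this (key and value illustrative):
<Tagging xmlns="http://s3.amazonaws.com/doc/2006-03-01/">
  <TagSet>
    <Tag>
      <Key>test-key</Key>
      <Value>test-value</Value>
    </Tag>
  </TagSet>
</Tagging>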


@@ -90,9 +90,8 @@ put_user_policy_userplus() {
log 2 "unable to create test file folder"
return 1
fi
#"Resource": "arn:aws:s3:::${aws:username}-*"
cat <<EOF > "$TEST_FILE_FOLDER"/user_policy_file
cat <<EOF > "$TEST_FILE_FOLDER"/user_policy_file
{
"Version": "2012-10-17",
"Statement": [
@@ -400,4 +399,21 @@ get_bucket_owner() {
log 3 "bucket owner for bucket '$1' not found"
bucket_owner=
return 0
}
verify_user_cant_get_object() {
if [ $# -ne 6 ]; then
log 2 "'verify_user_cant_get_object' requires client, bucket, key, save file, username, password"
return 1
fi
if get_object_with_user "$1" "$2" "$3" "$4" "$5" "$6"; then
log 2 "get object with user succeeded despite lack of permissions"
return 1
fi
# shellcheck disable=SC2154
if [[ "$get_object_error" != *"Access Denied"* ]]; then
log 2 "invalid get object error: $get_object_error"
return 1
fi
return 0
}


@@ -25,18 +25,7 @@ start_versity_process() {
log 1 "error creating test log folder"
exit 1
fi
IFS=' ' read -r -a full_command <<< "${base_command[@]}"
log 5 "versity command: ${full_command[*]}"
if [ -n "$COMMAND_LOG" ]; then
mask_args "${full_command[*]}"
# shellcheck disable=SC2154
echo "${masked_args[@]}" >> "$COMMAND_LOG"
fi
if [ -n "$VERSITY_LOG_FILE" ]; then
"${full_command[@]}" >> "$VERSITY_LOG_FILE" 2>&1 &
else
"${full_command[@]}" 2>&1 &
fi
build_run_and_log_command
# shellcheck disable=SC2181
if [[ $? -ne 0 ]]; then
sleep 1
@@ -66,6 +55,21 @@ start_versity_process() {
export versitygw_pid_"$1"
}
build_run_and_log_command() {
IFS=' ' read -r -a full_command <<< "${base_command[@]}"
log 5 "versity command: ${full_command[*]}"
if [ -n "$COMMAND_LOG" ]; then
mask_args "${full_command[*]}"
# shellcheck disable=SC2154
echo "${masked_args[@]}" >> "$COMMAND_LOG"
fi
if [ -n "$VERSITY_LOG_FILE" ]; then
"${full_command[@]}" >> "$VERSITY_LOG_FILE" 2>&1 &
else
"${full_command[@]}" 2>&1 &
fi
}
run_versity_app_posix() {
if [[ $# -ne 3 ]]; then
log 1 "run versity app w/posix command requires access ID, secret key, process number"
@@ -147,14 +151,16 @@ run_versity_app() {
log 1 "unrecognized backend type $BACKEND"
exit 1
fi
if [[ $IAM_TYPE == "s3" ]]; then
if ! bucket_exists "s3api" "$USERS_BUCKET"; then
if ! create_bucket "s3api" "$USERS_BUCKET"; then
log 1 "error creating IAM bucket"
teardown
exit 1
fi
fi
if [[ $IAM_TYPE != "s3" ]]; then
return 0
fi
if bucket_exists "s3api" "$USERS_BUCKET"; then
return 0
fi
if ! create_bucket "s3api" "$USERS_BUCKET"; then
log 1 "error creating IAM bucket"
teardown
exit 1
fi
}