Compare commits


68 Commits

Author SHA1 Message Date
Ben McClelland
33b7116aab Merge pull request #546 from versity/test_cmdline_get_put_copy
Test cmdline get put copy
2024-05-03 10:08:34 -07:00
Luke McCrone
0009845acd test: get, copy, put, etc. s3api additions, cleanup 2024-05-03 13:07:53 -03:00
Ben McClelland
a912980173 Merge pull request #545 from versity/aws-error-ref
AWS error refactoring
2024-05-02 15:49:26 -07:00
Luke McCrone
096f370322 test: changes due to policy, tag changes 2024-05-02 15:26:17 -07:00
jonaustin09
b4cd35f60b feat: error refactoring and enable object lock in backends
Added support to enable object lock on bucket creation in posix and azure
backends.
Implemented the logic to add object legal hold and retention on object creation
in azure and posix backends.
Added the functionality for HeadObject to return object lock related headers.
Added integration tests for these features.
2024-05-02 15:23:48 -07:00
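A hedged sketch of the HeadObject header mapping described above (withLockHeaders is a hypothetical helper; the s3 and types packages are the real aws-sdk-go-v2 ones, but the backend wiring differs per posix/azure implementation):

```
// Sketch only: mapping stored object lock state onto HeadObject output.
package sketch

import (
	"github.com/aws/aws-sdk-go-v2/service/s3"
	"github.com/aws/aws-sdk-go-v2/service/s3/types"
)

// withLockHeaders copies retention and legal hold state onto the
// response so clients see the x-amz-object-lock-* headers.
func withLockHeaders(out *s3.HeadObjectOutput, ret *types.ObjectLockRetention, hold *types.ObjectLockLegalHold) {
	if ret != nil {
		out.ObjectLockMode = types.ObjectLockMode(ret.Mode)
		out.ObjectLockRetainUntilDate = ret.RetainUntilDate
	}
	if hold != nil {
		out.ObjectLockLegalHoldStatus = hold.Status
	}
}
```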
Ben McClelland
aba8d03ddf Merge pull request #544 from versity/ben/request_time_skewed
Ben/request time skewed
2024-05-02 10:21:17 -07:00
Ben McClelland
4a7e2296b9 Merge pull request #543 from versity/ben/int_check
fix: int overflow check in chunk reader
2024-05-02 10:21:04 -07:00
Ben McClelland
2c165a632c fix: int overflow check in chunk reader
Make the code scanners happy with a bounds check before we do the
integer conversion from int64 to int, since this can overflow on
32-bit platforms.

The best error to return here is a signature error, since this is a
client problem and the chunk headers are considered part of the
request signature.
2024-05-01 21:27:17 -07:00
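A minimal sketch of such a bounds check (the helper name is hypothetical, and the error code name is assumed from the gateway's s3err package):

```
// Sketch only: guard the int64 -> int narrowing used by the chunk reader.
package sketch

import (
	"math"

	"github.com/versity/versitygw/s3err"
)

// toInt rejects sizes that would overflow int on 32-bit platforms,
// returning a signature error since chunk headers are signed client input.
func toInt(n int64) (int, error) {
	if n > math.MaxInt || n < math.MinInt {
		return 0, s3err.GetAPIError(s3err.ErrSignatureDoesNotMatch)
	}
	return int(n), nil
}
```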
Ben McClelland
3fc8956baf fix: increase valid timestamp window from 1 to 15 minutes
According to:
https://docs.aws.amazon.com/AmazonS3/latest/userguide/RESTAuthentication.html#RESTAuthenticationTimeStamp
The valid time window for authenticated requests is 15 minutes,
and requests outside of that window should return RequestTimeTooSkewed.
2024-05-01 13:56:34 -07:00
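A minimal sketch of the 15-minute window check (helper name hypothetical; assumes the s3err package exposes ErrRequestTimeTooSkewed, as the message above implies):

```
// Sketch only: reject requests whose Date/x-amz-date is too far
// from server time, per the AWS 15 minute rule.
package sketch

import (
	"time"

	"github.com/versity/versitygw/s3err"
)

const maxSkew = 15 * time.Minute

func validateTimestamp(t time.Time) error {
	if d := time.Since(t); d > maxSkew || d < -maxSkew {
		return s3err.GetAPIError(s3err.ErrRequestTimeTooSkewed)
	}
	return nil
}
```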
Ben McClelland
acf69ab03d Merge pull request #541 from versity/test_cmdline_policy
Test cmdline policy
2024-04-29 20:32:26 -07:00
Luke McCrone
60e4a07e65 test: policy 2024-04-29 21:01:27 -03:00
Ben McClelland
ba8e1f7910 Merge pull request #542 from versity/dependabot/go_modules/dev-dependencies-34457f1dff
chore(deps): bump github.com/urfave/cli/v2 from 2.27.1 to 2.27.2 in the dev-dependencies group
2024-04-29 14:47:37 -07:00
Ben McClelland
864bbf81ff Merge pull request #540 from versity/get-object-attributes
GetObjectAttributes action
2024-04-29 14:47:02 -07:00
dependabot[bot]
259a385aea chore(deps): bump github.com/urfave/cli/v2 in the dev-dependencies group
Bumps the dev-dependencies group with 1 update: [github.com/urfave/cli/v2](https://github.com/urfave/cli).


Updates `github.com/urfave/cli/v2` from 2.27.1 to 2.27.2
- [Release notes](https://github.com/urfave/cli/releases)
- [Changelog](https://github.com/urfave/cli/blob/main/docs/CHANGELOG.md)
- [Commits](https://github.com/urfave/cli/compare/v2.27.1...v2.27.2)

---
updated-dependencies:
- dependency-name: github.com/urfave/cli/v2
  dependency-type: direct:production
  update-type: version-update:semver-patch
  dependency-group: dev-dependencies
...

Signed-off-by: dependabot[bot] <support@github.com>
2024-04-29 21:38:59 +00:00
jonaustin09
0c3771ae2d feat: Added GetObjectAttributes actions implementation in posix, azure and s3 backends. Added integration tests for GetObjectAttributes action 2024-04-29 15:31:53 -04:00
Ben McClelland
af469cd279 Merge pull request #539 from versity/event-notif-del-objects
Bucket event notifications DeleteObjects
2024-04-25 15:11:38 -07:00
jonaustin09
6f9c6fde37 feat: Added DeleteObjects event support in bucket event notifications 2024-04-25 16:18:02 -04:00
Ben McClelland
dd7de194f9 Merge pull request #538 from versity/test_cmdline_more_tests
Test cmdline more tests
2024-04-25 13:16:42 -07:00
Luke McCrone
ec53605ea3 test: delete tags, get location, some reorganization 2024-04-25 15:40:23 -03:00
Ben McClelland
47ed2d65c1 Merge pull request #537 from versity/s3proxy-policy-object-lock-actions
S3 proxy bucket policy, object lock actions
2024-04-24 13:27:21 -07:00
jonaustin09
5126aedeff feat: Added bucket policy and object lock actions implementation in s3 proxy 2024-04-24 15:49:02 -04:00
Ben McClelland
a780f89ff0 Merge pull request #536 from versity/azure-object-lock-actions
Azure object lock actions
2024-04-23 15:19:08 -07:00
jonaustin09
4a56d570ad feat: Added object lock actions implementation in azure backend 2024-04-23 17:05:59 -04:00
Ben McClelland
62209cf222 Merge pull request #535 from versity/dependabot/go_modules/dev-dependencies-9433fa9262
chore(deps): bump the dev-dependencies group with 3 updates
2024-04-22 15:31:19 -07:00
dependabot[bot]
f7da252b7a chore(deps): bump the dev-dependencies group with 3 updates
Bumps the dev-dependencies group with 3 updates: [github.com/go-ldap/ldap/v3](https://github.com/go-ldap/ldap), [github.com/Azure/azure-sdk-for-go/sdk/internal](https://github.com/Azure/azure-sdk-for-go) and [github.com/go-asn1-ber/asn1-ber](https://github.com/go-asn1-ber/asn1-ber).


Updates `github.com/go-ldap/ldap/v3` from 3.4.7 to 3.4.8
- [Release notes](https://github.com/go-ldap/ldap/releases)
- [Commits](https://github.com/go-ldap/ldap/compare/v3.4.7...v3.4.8)

Updates `github.com/Azure/azure-sdk-for-go/sdk/internal` from 1.5.2 to 1.6.0
- [Release notes](https://github.com/Azure/azure-sdk-for-go/releases)
- [Changelog](https://github.com/Azure/azure-sdk-for-go/blob/main/documentation/release.md)
- [Commits](https://github.com/Azure/azure-sdk-for-go/compare/sdk/internal/v1.5.2...sdk/azcore/v1.6.0)

Updates `github.com/go-asn1-ber/asn1-ber` from 1.5.5 to 1.5.6
- [Release notes](https://github.com/go-asn1-ber/asn1-ber/releases)
- [Commits](https://github.com/go-asn1-ber/asn1-ber/compare/v1.5.5...v1.5.6)

---
updated-dependencies:
- dependency-name: github.com/go-ldap/ldap/v3
  dependency-type: direct:production
  update-type: version-update:semver-patch
  dependency-group: dev-dependencies
- dependency-name: github.com/Azure/azure-sdk-for-go/sdk/internal
  dependency-type: indirect
  update-type: version-update:semver-minor
  dependency-group: dev-dependencies
- dependency-name: github.com/go-asn1-ber/asn1-ber
  dependency-type: indirect
  update-type: version-update:semver-patch
  dependency-group: dev-dependencies
...

Signed-off-by: dependabot[bot] <support@github.com>
2024-04-22 21:48:10 +00:00
Ben McClelland
8907a50331 Merge pull request #516 from versity/object-locks
WORM protection with S3 object locks
2024-04-22 13:28:15 -07:00
jonaustin09
89755ea5aa feat: Changed object lock actions interface to put/get []byte 2024-04-22 13:19:09 -07:00
jonaustin09
00476ef70c feat: Closes #490, Added integration tests for object lock actions 2024-04-22 13:13:40 -07:00
jonaustin09
fbaba0b944 feat: Added object WORM protection by object-lock feature from AWS with the following actions support: PutObjectLockConfiguration, GetObjectLockConfiguration, PutObjectRetention, GetObjectRetention, PutObjectLegalHold, GetObjectLegalHold 2024-04-22 13:13:40 -07:00
Ben McClelland
c0489f981c Merge pull request #533 from versity/test_cmdline_tags_two
Test cmdline tags two
2024-04-22 13:01:13 -07:00
Luke McCrone
2a072e1580 test: tags, metadata tests, docker, test config cleanup 2024-04-22 15:44:46 -03:00
Ben McClelland
6d868229a8 Merge pull request #532 from versity/ben/readme
chore: more readme cleanup
2024-04-22 10:46:45 -07:00
Ben McClelland
e1a1d7f65f chore: more readme cleanup
fix typo, add use case
2024-04-22 10:38:17 -07:00
Ben McClelland
134672aea2 Merge pull request #531 from versity/ben/readme
chore: minor readme cleanup
2024-04-22 10:11:45 -07:00
Ben McClelland
c75edc2ae5 chore: minor readme cleanup
Move use cases up, and change wording. Add link for global options in the wiki.
2024-04-22 09:18:17 -07:00
Ben McClelland
7ab0e3ebbe Merge pull request #530 from versity/azure-policy-actions
Azure bucket policy actions
2024-04-20 10:00:23 -07:00
jonaustin09
5c835c5c74 feat: Implemented GetBucketPolicy, PutBucketPolicy action in azure backend 2024-04-19 16:36:42 -04:00
Ben McClelland
bd380b4858 Merge pull request #528 from versity/ben/xattr
fix: use xattr.ENOATTR check for posix xattrs
2024-04-19 11:39:54 -07:00
Ben McClelland
fe33532f78 Merge pull request #529 from versity/ben/module_version
fix: as of Go 1.21, toolchain versions must use the 1.N.P syntax
2024-04-18 21:07:52 -07:00
Ben McClelland
892d4d7d17 fix: as of Go 1.21, toolchain versions must use the 1.N.P syntax
Setting min toolchain to 1.21.0 for the gateway.
see: https://go.dev/doc/toolchain#version
2024-04-18 20:29:23 -07:00
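For reference, the 1.N.P form looks like this in go.mod (illustrative snippet; only the go directive is the point here):

```
module github.com/versity/versitygw

go 1.21.0
```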
Ben McClelland
4429570388 fix: use xattr.ENOATTR check for posix xattrs
The xattr package has a more universal error type for xattrs
not existing. Use this for better platform compatibility.

This also adds the xattr.XATTR_SUPPORTED check from the xattr
package for platform xattr support.

Fixes #527
2024-04-18 18:20:43 -07:00
Ben McClelland
ae0354c765 Merge pull request #526 from versity/fix/487-head-bucket-resp
HeadBucket response headers
2024-04-18 15:55:50 -07:00
jonaustin09
84ce40fb54 fix: Fixes #487, added response headers for HeadBucket action 2024-04-18 13:27:45 -04:00
Ben McClelland
5853c3240b Merge pull request #520 from versity/test_cmdline_user_s3cmd
test: s3cmd user, fix for non-bucket-creating testing
2024-04-17 14:42:25 -07:00
Ben McClelland
8bd068c22c Merge pull request #525 from versity/ben/check_account
fix: auth iam single error for GetUserAccount()
2024-04-17 14:32:02 -07:00
Luke McCrone
f08ccacd0f test: s3cmd user, fix for non-bucket-creating testing 2024-04-17 15:24:01 -03:00
Ben McClelland
46aab041cc fix: auth iam single error for GetUserAccount()
Fixes #524. The iam single service needs to return ErrNoSuchUser instead of
ErrNotSupported in GetUserAccount so that the correct error is returned
when the client access is not from the single user account.

This fixes the internal error when accessing the gateway in
iam single user mode with incorrect access keys.
2024-04-17 09:33:03 -07:00
Ben McClelland
a7a8ea9e61 Merge pull request #523 from versity/ben/chunk_uploads
fix: chunkreader invalid signature when header crossed read buffers
2024-04-17 09:13:12 -07:00
Ben McClelland
07b01a738a fix: chunkreader invalid signature when header crossed read buffers
Fixes #512. For chunked uploads, we parse the chunk headers in place
and then move the data payload up on the buffer to overwrite the
chunk headers for the real data stream.

For the special case where the chunk header was truncated in the
current read buffer, the partial header is stashed in a temporary
byte slice. The following read will contain the remainder of the
header that we can put together and parse.

We were parsing this correctly, but the data offset is calculated
from the start of the header, so the special case where part of the
header was stashed meant we were miscalculating the data offset into
the read buffer.

The easy fix is to subtract the stash size from the returned data
offset.
2024-04-16 23:08:25 -07:00
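A simplified sketch of the corrected offset math (names are hypothetical; the real parser lives in the gateway's chunk reader):

```
// Sketch only: when stashLen bytes of a chunk header arrived in the
// previous read, the header logically starts stashLen bytes before the
// current buffer, so the payload offset into the buffer must subtract
// the stash size.
package sketch

func payloadOffset(headerLen, stashLen int) int {
	return headerLen - stashLen
}
```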
Ben McClelland
6f35a5fbaf Merge pull request #521 from versity/ben/readme_news_perf2
feat: add new perf article to readme news
2024-04-16 15:55:05 -07:00
Ben McClelland
05530e02c9 feat: add new perf article to readme news 2024-04-16 14:52:37 -07:00
Ben McClelland
b2f028939e Merge pull request #518 from versity/ben/meta_storer
feat: add metadata storage abstraction layer
2024-04-16 11:43:31 -07:00
Ben McClelland
7ccd1dd619 Merge pull request #519 from versity/dependabot/go_modules/dev-dependencies-e4c8b118df
chore(deps): bump the dev-dependencies group with 3 updates
2024-04-15 15:31:35 -07:00
dependabot[bot]
b10d08a8df chore(deps): bump the dev-dependencies group with 3 updates
Bumps the dev-dependencies group with 3 updates: [github.com/Azure/azure-sdk-for-go/sdk/azidentity](https://github.com/Azure/azure-sdk-for-go), [github.com/Azure/azure-sdk-for-go/sdk/storage/azblob](https://github.com/Azure/azure-sdk-for-go) and [github.com/klauspost/compress](https://github.com/klauspost/compress).


Updates `github.com/Azure/azure-sdk-for-go/sdk/azidentity` from 1.5.1 to 1.5.2
- [Release notes](https://github.com/Azure/azure-sdk-for-go/releases)
- [Changelog](https://github.com/Azure/azure-sdk-for-go/blob/main/documentation/release.md)
- [Commits](https://github.com/Azure/azure-sdk-for-go/compare/sdk/internal/v1.5.1...sdk/internal/v1.5.2)

Updates `github.com/Azure/azure-sdk-for-go/sdk/storage/azblob` from 1.3.1 to 1.3.2
- [Release notes](https://github.com/Azure/azure-sdk-for-go/releases)
- [Changelog](https://github.com/Azure/azure-sdk-for-go/blob/main/documentation/release.md)
- [Commits](https://github.com/Azure/azure-sdk-for-go/compare/sdk/azcore/v1.3.1...sdk/storage/azblob/v1.3.2)

Updates `github.com/klauspost/compress` from 1.17.7 to 1.17.8
- [Release notes](https://github.com/klauspost/compress/releases)
- [Changelog](https://github.com/klauspost/compress/blob/master/.goreleaser.yml)
- [Commits](https://github.com/klauspost/compress/compare/v1.17.7...v1.17.8)

---
updated-dependencies:
- dependency-name: github.com/Azure/azure-sdk-for-go/sdk/azidentity
  dependency-type: direct:production
  update-type: version-update:semver-patch
  dependency-group: dev-dependencies
- dependency-name: github.com/Azure/azure-sdk-for-go/sdk/storage/azblob
  dependency-type: direct:production
  update-type: version-update:semver-patch
  dependency-group: dev-dependencies
- dependency-name: github.com/klauspost/compress
  dependency-type: indirect
  update-type: version-update:semver-patch
  dependency-group: dev-dependencies
...

Signed-off-by: dependabot[bot] <support@github.com>
2024-04-15 21:32:24 +00:00
Ben McClelland
c81403fe90 feat: add metadata storage abstraction layer
Closes #511. This adds an abstraction layer to the metadata
storage to allow for future non-xattr metadata storage
implementations.
2024-04-15 13:57:31 -07:00
Ben McClelland
5f422fefd8 Merge pull request #517 from versity/test_cmdline_iam
Test cmdline iam
2024-04-13 09:55:10 -07:00
Luke McCrone
0a74509d00 test: initial users tests (admin, userplus, user) 2024-04-12 22:33:38 -03:00
Ben McClelland
65abac9823 Merge pull request #515 from versity/ben/admin_insecure
fix: admin change-bucket-owner cert disable verify
2024-04-12 08:08:34 -07:00
Ben McClelland
5ec2de544c fix: admin change-bucket-owner return status 2024-04-11 16:11:59 -07:00
Ben McClelland
53a50df742 fix: admin change-bucket-owner cert disable verify 2024-04-11 14:44:37 -07:00
Ben McClelland
936ba1f84b Merge pull request #509 from versity/ben/admin_insecure
feat: optional disable cert check for admin cli actions
2024-04-09 09:04:54 -07:00
Ben McClelland
ffe1fc4ad3 feat: optional disable cert check for admin cli actions
Fixes #499. Allows running admin cli commands against servers
with self-signed certs.
2024-04-09 08:37:11 -07:00
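A minimal sketch of what the new option enables on the client side (flag plumbing omitted; names illustrative):

```
// Sketch only: build an HTTP client that can skip TLS verification
// for servers using self-signed certs.
package sketch

import (
	"crypto/tls"
	"net/http"
)

func newAdminClient(insecure bool) *http.Client {
	return &http.Client{
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: insecure},
		},
	}
}
```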
Ben McClelland
020b2db975 Merge pull request #506 from versity/ben/cmd_admin_err
fix: return non 0 exit status for cli admin error
2024-04-09 08:36:35 -07:00
Ben McClelland
17b1dbe025 fix: return non 0 exit status for cli admin error
Fixes #505. This returns the body as an error when the http status
for the admin request is non-success.
2024-04-08 17:29:02 -07:00
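A sketch of the described behavior, assuming a plain net/http flow (the admin client itself is not shown in this diff); returning the body as an error is what produces the non-zero exit status in the CLI:

```
// Sketch only: surface a non-success admin response as an error.
package sketch

import (
	"fmt"
	"io"
	"net/http"
)

func checkAdminResponse(resp *http.Response) error {
	body, err := io.ReadAll(resp.Body)
	if err != nil {
		return err
	}
	if resp.StatusCode < 200 || resp.StatusCode >= 300 {
		// The CLI framework turns a returned error into exit status 1.
		return fmt.Errorf("%s", body)
	}
	return nil
}
```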
Ben McClelland
5937af22c6 Merge pull request #507 from versity/dependabot/go_modules/dev-dependencies-d1c995973a
chore(deps): bump github.com/go-ldap/ldap/v3 from 3.4.6 to 3.4.7 in the dev-dependencies group
2024-04-08 16:35:49 -07:00
dependabot[bot]
5c2e7cce05 chore(deps): bump github.com/go-ldap/ldap/v3
Bumps the dev-dependencies group with 1 update: [github.com/go-ldap/ldap/v3](https://github.com/go-ldap/ldap).


Updates `github.com/go-ldap/ldap/v3` from 3.4.6 to 3.4.7
- [Release notes](https://github.com/go-ldap/ldap/releases)
- [Commits](https://github.com/go-ldap/ldap/compare/v3.4.6...v3.4.7)

---
updated-dependencies:
- dependency-name: github.com/go-ldap/ldap/v3
  dependency-type: direct:production
  update-type: version-update:semver-patch
  dependency-group: dev-dependencies
...

Signed-off-by: dependabot[bot] <support@github.com>
2024-04-08 16:24:34 -07:00
Ben McClelland
6b9ee3a587 Merge pull request #508 from versity/ben/ldap_url
fix: use ldap.DialURL instead of deprecated ldap.Dial
2024-04-08 16:24:19 -07:00
Ben McClelland
e9a036d100 fix: use ldap.DialURL instead of deprecated ldap.Dial 2024-04-08 16:10:59 -07:00
91 changed files with 6524 additions and 1546 deletions


@@ -51,7 +51,7 @@ jobs:
export WORKSPACE=$GITHUB_WORKSPACE
openssl genpkey -algorithm RSA -out versitygw.pem -pkeyopt rsa_keygen_bits:2048
openssl req -new -x509 -key versitygw.pem -out cert.pem -days 365 -subj "/C=US/ST=California/L=San Francisco/O=Versity/OU=Software/CN=versity.com"
mkdir cover
mkdir cover iam
VERSITYGW_TEST_ENV=./tests/.env.default ./tests/run_all.sh
#- name: Build and run, s3 backend
@@ -66,7 +66,7 @@ jobs:
# export AWS_ACCESS_KEY_ID_TWO=ABCDEFGHIJKLMNOPQRST
# export AWS_SECRET_ACCESS_KEY_TWO=ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmn
# export WORKSPACE=$GITHUB_WORKSPACE
# VERSITYGW_TEST_ENV=./tests/.env.s3.default GOCOVERDIR=/tmp/cover ./tests/run_all.sh
# VERSITYGW_TEST_ENV=./tests/.env.s3 GOCOVERDIR=/tmp/cover ./tests/run_all.sh
- name: Coverage report
run: |

.gitignore

@@ -47,5 +47,15 @@ tests/.secrets*
users.json
# env files for testing
.env*
!.env.default
**/.env*
**/!.env.default
# s3cmd config files (testing)
tests/s3cfg.local*
tests/!s3cfg.local.default
# keys
*.pem
# patches
*.patch


@@ -6,7 +6,7 @@ COPY go.mod ./
RUN go mod download
COPY ./ ./
COPY certs/* /etc/pki/tls/certs/
COPY ./tests/certs/* /etc/pki/tls/certs/
ARG IAM_DIR=/tmp/vgw
ARG SETUP_DIR=/tmp/vgw


@@ -61,8 +61,6 @@ USER tester
COPY --chown=tester:tester . /home/tester
WORKDIR /home/tester
#RUN cp tests/.env.docker.s3.default tests/.env.docker.s3
RUN cp tests/s3cfg.local.default tests/s3cfg.local
RUN make
RUN . $SECRETS_FILE && \


@@ -14,8 +14,16 @@ Download [latest release](https://github.com/versity/versitygw/releases)
|:-----------:|:-----------:|:-----------:|:-----------:|:---------:|:---------:|
| ✔️ | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ |
### Use Cases
* Turn your local filesystem into an S3 server with a single command!
* Proxy S3 requests to S3 storage
* Simple to deploy S3 server with a single command
* Protocol compatibility in `posix` allows common access to files via posix or S3
* Simplified interface for adding new storage system support
### News
* New performance analysis article [https://github.com/versity/versitygw/wiki/Performance](https://github.com/versity/versitygw/wiki/Performance)
* New performance (scale up) analysis article [https://github.com/versity/versitygw/wiki/Performance](https://github.com/versity/versitygw/wiki/Performance)
* New performance (scale out) Part 2 analysis article [https://github.com/versity/versitygw/wiki/Performance-Part-2](https://github.com/versity/versitygw/wiki/Performance-Part-2)
### Mailing List
Keep up to date with latest gateway announcements by signing up to the [versitygw mailing list](https://www.versity.com/products/versitygw#signup).
@@ -28,12 +36,6 @@ Ask questions in the [community discussions](https://github.com/versity/versityg
<br>
Contact [Versity Sales](https://www.versity.com/contact/) to discuss enterprise support.
### Use Cases
* Share filesystem directory via S3 protocol
* Proxy S3 requests to S3 storage
* Simple to deploy S3 server with a single command
* Protocol compatibility in `posix` allows common access to files via posix or S3
### Overview
The Versity Gateway is a simple-to-use tool for seamless inline translation between AWS S3 object commands and storage systems. It bridges the gap between S3-reliant applications and other storage systems, enabling enhanced compatibility and integration while offering exceptional scalability.
@@ -67,7 +69,7 @@ The command format is
```
versitygw [global options] command [command options] [arguments...]
```
The global options are specified before the backend type and the backend options are specified after.
The [global options](https://github.com/versity/versitygw/wiki/Global-Options) are specified before the backend type and the backend options are specified after.
***


@@ -17,6 +17,7 @@ package auth
import (
"context"
"encoding/json"
"errors"
"fmt"
"strings"
@@ -292,13 +293,13 @@ func VerifyAccess(ctx context.Context, be backend.Backend, opts AccessOptions) e
return nil
}
policy, err := be.GetBucketPolicy(ctx, opts.Bucket)
if err != nil {
return err
policy, policyErr := be.GetBucketPolicy(ctx, opts.Bucket)
if policyErr != nil && !errors.Is(policyErr, s3err.GetAPIError(s3err.ErrNoSuchBucketPolicy)) {
return policyErr
}
// If bucket policy is not set and the ACL is default, only the owner has access
if len(policy) == 0 && opts.Acl.ACL == "" && len(opts.Acl.Grantees) == 0 {
if errors.Is(policyErr, s3err.GetAPIError(s3err.ErrNoSuchBucketPolicy)) && opts.Acl.ACL == "" && len(opts.Acl.Grantees) == 0 {
return s3err.GetAPIError(s3err.ErrAccessDenied)
}


@@ -117,7 +117,7 @@ func ValidatePolicyDocument(policyBin []byte, bucket string, iam IAMService) err
func verifyBucketPolicy(policy []byte, access, bucket, object string, action Action) error {
// If bucket policy is not set
if len(policy) == 0 {
if policy == nil {
return nil
}
@@ -131,7 +131,6 @@ func verifyBucketPolicy(policy []byte, access, bucket, object string, action Act
resource += "/" + object
}
fmt.Println(access, action, resource)
if !bucketPolicy.isAllowed(access, action, resource) {
return s3err.GetAPIError(s3err.ErrAccessDenied)
}


@@ -23,79 +23,97 @@ import (
type Action string
const (
GetBucketAclAction Action = "s3:GetBucketAcl"
CreateBucketAction Action = "s3:CreateBucket"
PutBucketAclAction Action = "s3:PutBucketAcl"
DeleteBucketAction Action = "s3:DeleteBucket"
PutBucketVersioningAction Action = "s3:PutBucketVersioning"
GetBucketVersioningAction Action = "s3:GetBucketVersioning"
PutBucketPolicyAction Action = "s3:PutBucketPolicy"
GetBucketPolicyAction Action = "s3:GetBucketPolicy"
DeleteBucketPolicyAction Action = "s3:DeleteBucketPolicy"
AbortMultipartUploadAction Action = "s3:AbortMultipartUpload"
ListMultipartUploadPartsAction Action = "s3:ListMultipartUploadParts"
ListBucketMultipartUploadsAction Action = "s3:ListBucketMultipartUploads"
PutObjectAction Action = "s3:PutObject"
GetObjectAction Action = "s3:GetObject"
DeleteObjectAction Action = "s3:DeleteObject"
GetObjectAclAction Action = "s3:GetObjectAcl"
GetObjectAttributesAction Action = "s3:GetObjectAttributes"
PutObjectAclAction Action = "s3:PutObjectAcl"
RestoreObjectAction Action = "s3:RestoreObject"
GetBucketTaggingAction Action = "s3:GetBucketTagging"
PutBucketTaggingAction Action = "s3:PutBucketTagging"
GetObjectTaggingAction Action = "s3:GetObjectTagging"
PutObjectTaggingAction Action = "s3:PutObjectTagging"
DeleteObjectTaggingAction Action = "s3:DeleteObjectTagging"
ListBucketVersionsAction Action = "s3:ListBucketVersions"
ListBucketAction Action = "s3:ListBucket"
AllActions Action = "s3:*"
GetBucketAclAction Action = "s3:GetBucketAcl"
CreateBucketAction Action = "s3:CreateBucket"
PutBucketAclAction Action = "s3:PutBucketAcl"
DeleteBucketAction Action = "s3:DeleteBucket"
PutBucketVersioningAction Action = "s3:PutBucketVersioning"
GetBucketVersioningAction Action = "s3:GetBucketVersioning"
PutBucketPolicyAction Action = "s3:PutBucketPolicy"
GetBucketPolicyAction Action = "s3:GetBucketPolicy"
DeleteBucketPolicyAction Action = "s3:DeleteBucketPolicy"
AbortMultipartUploadAction Action = "s3:AbortMultipartUpload"
ListMultipartUploadPartsAction Action = "s3:ListMultipartUploadParts"
ListBucketMultipartUploadsAction Action = "s3:ListBucketMultipartUploads"
PutObjectAction Action = "s3:PutObject"
GetObjectAction Action = "s3:GetObject"
DeleteObjectAction Action = "s3:DeleteObject"
GetObjectAclAction Action = "s3:GetObjectAcl"
GetObjectAttributesAction Action = "s3:GetObjectAttributes"
PutObjectAclAction Action = "s3:PutObjectAcl"
RestoreObjectAction Action = "s3:RestoreObject"
GetBucketTaggingAction Action = "s3:GetBucketTagging"
PutBucketTaggingAction Action = "s3:PutBucketTagging"
GetObjectTaggingAction Action = "s3:GetObjectTagging"
PutObjectTaggingAction Action = "s3:PutObjectTagging"
DeleteObjectTaggingAction Action = "s3:DeleteObjectTagging"
ListBucketVersionsAction Action = "s3:ListBucketVersions"
ListBucketAction Action = "s3:ListBucket"
GetBucketObjectLockConfigurationAction Action = "s3:GetBucketObjectLockConfiguration"
PutBucketObjectLockConfigurationAction Action = "s3:PutBucketObjectLockConfiguration"
GetObjectLegalHoldAction Action = "s3:GetObjectLegalHold"
PutObjectLegalHoldAction Action = "s3:PutObjectLegalHold"
GetObjectRetentionAction Action = "s3:GetObjectRetention"
PutObjectRetentionAction Action = "s3:PutObjectRetention"
BypassGovernanceRetentionAction Action = "s3:BypassGovernanceRetention"
AllActions Action = "s3:*"
)
var supportedActionList = map[Action]struct{}{
GetBucketAclAction: {},
CreateBucketAction: {},
PutBucketAclAction: {},
DeleteBucketAction: {},
PutBucketVersioningAction: {},
GetBucketVersioningAction: {},
PutBucketPolicyAction: {},
GetBucketPolicyAction: {},
DeleteBucketPolicyAction: {},
AbortMultipartUploadAction: {},
ListMultipartUploadPartsAction: {},
ListBucketMultipartUploadsAction: {},
PutObjectAction: {},
GetObjectAction: {},
DeleteObjectAction: {},
GetObjectAclAction: {},
GetObjectAttributesAction: {},
PutObjectAclAction: {},
RestoreObjectAction: {},
GetBucketTaggingAction: {},
PutBucketTaggingAction: {},
GetObjectTaggingAction: {},
PutObjectTaggingAction: {},
DeleteObjectTaggingAction: {},
ListBucketVersionsAction: {},
ListBucketAction: {},
AllActions: {},
GetBucketAclAction: {},
CreateBucketAction: {},
PutBucketAclAction: {},
DeleteBucketAction: {},
PutBucketVersioningAction: {},
GetBucketVersioningAction: {},
PutBucketPolicyAction: {},
GetBucketPolicyAction: {},
DeleteBucketPolicyAction: {},
AbortMultipartUploadAction: {},
ListMultipartUploadPartsAction: {},
ListBucketMultipartUploadsAction: {},
PutObjectAction: {},
GetObjectAction: {},
DeleteObjectAction: {},
GetObjectAclAction: {},
GetObjectAttributesAction: {},
PutObjectAclAction: {},
RestoreObjectAction: {},
GetBucketTaggingAction: {},
PutBucketTaggingAction: {},
GetObjectTaggingAction: {},
PutObjectTaggingAction: {},
DeleteObjectTaggingAction: {},
ListBucketVersionsAction: {},
ListBucketAction: {},
PutBucketObjectLockConfigurationAction: {},
GetObjectLegalHoldAction: {},
PutObjectLegalHoldAction: {},
GetObjectRetentionAction: {},
PutObjectRetentionAction: {},
BypassGovernanceRetentionAction: {},
AllActions: {},
}
var supportedObjectActionList = map[Action]struct{}{
AbortMultipartUploadAction: {},
ListMultipartUploadPartsAction: {},
PutObjectAction: {},
GetObjectAction: {},
DeleteObjectAction: {},
GetObjectAclAction: {},
GetObjectAttributesAction: {},
PutObjectAclAction: {},
RestoreObjectAction: {},
GetObjectTaggingAction: {},
PutObjectTaggingAction: {},
DeleteObjectTaggingAction: {},
AllActions: {},
AbortMultipartUploadAction: {},
ListMultipartUploadPartsAction: {},
PutObjectAction: {},
GetObjectAction: {},
DeleteObjectAction: {},
GetObjectAclAction: {},
GetObjectAttributesAction: {},
PutObjectAclAction: {},
RestoreObjectAction: {},
GetObjectTaggingAction: {},
PutObjectTaggingAction: {},
DeleteObjectTaggingAction: {},
GetObjectLegalHoldAction: {},
PutObjectLegalHoldAction: {},
GetObjectRetentionAction: {},
PutObjectRetentionAction: {},
BypassGovernanceRetentionAction: {},
AllActions: {},
}
// Validates an Action: it must either be present in the supported actions list or wildcard-match an entry in it
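The validation body is not shown in this hunk; a sketch consistent with the comment might look like the following (the method name and the ErrMalformedPolicy code are assumptions, and it relies on the supportedActionList declared above plus the strings package):

```
// Sketch only: an action is valid if it is in the supported list, or if
// it is a wildcard form like "s3:Get*" matching at least one entry.
func (a Action) IsValid() error {
	if _, ok := supportedActionList[a]; ok {
		return nil
	}
	if strings.HasSuffix(string(a), "*") {
		prefix := strings.TrimSuffix(string(a), "*")
		for act := range supportedActionList {
			if strings.HasPrefix(string(act), prefix) {
				return nil
			}
		}
	}
	return s3err.GetAPIError(s3err.ErrMalformedPolicy)
}
```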


@@ -22,7 +22,7 @@ func NewLDAPService(url, bindDN, pass, queryBase, accAtr, secAtr, roleAtr, objCl
if url == "" || bindDN == "" || pass == "" || queryBase == "" || accAtr == "" || secAtr == "" || roleAtr == "" || objClasses == "" {
return nil, fmt.Errorf("required parameters list not fully provided")
}
conn, err := ldap.Dial("tcp", url)
conn, err := ldap.DialURL(url)
if err != nil {
return nil, fmt.Errorf("failed to connect to LDAP server: %w", err)
}

View File

@@ -32,7 +32,7 @@ func (IAMServiceSingle) CreateAccount(account Account) error {
// GetUserAccount no accounts in single tenant mode
func (IAMServiceSingle) GetUserAccount(access string) (Account, error) {
return Account{}, ErrNotSupported
return Account{}, ErrNoSuchUser
}
// DeleteUserAccount no accounts in single tenant mode

auth/object_lock.go (new file)

@@ -0,0 +1,230 @@
// Copyright 2023 Versity Software
// This file is licensed under the Apache License, Version 2.0
// (the "License"); you may not use this file except in compliance
// with the License. You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing,
// software distributed under the License is distributed on an
// "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
// KIND, either express or implied. See the License for the
// specific language governing permissions and limitations
// under the License.
package auth
import (
"context"
"encoding/json"
"encoding/xml"
"errors"
"fmt"
"time"
"github.com/aws/aws-sdk-go-v2/service/s3/types"
"github.com/versity/versitygw/backend"
"github.com/versity/versitygw/s3err"
)
type BucketLockConfig struct {
Enabled bool
DefaultRetention *types.DefaultRetention
CreatedAt *time.Time
}
func ParseBucketLockConfigurationInput(input []byte) ([]byte, error) {
var lockConfig types.ObjectLockConfiguration
if err := xml.Unmarshal(input, &lockConfig); err != nil {
return nil, s3err.GetAPIError(s3err.ErrInvalidRequest)
}
config := BucketLockConfig{
Enabled: lockConfig.ObjectLockEnabled == types.ObjectLockEnabledEnabled,
}
if lockConfig.Rule != nil && lockConfig.Rule.DefaultRetention != nil {
retention := lockConfig.Rule.DefaultRetention
if retention.Years != nil && retention.Days != nil {
return nil, s3err.GetAPIError(s3err.ErrInvalidRequest)
}
config.DefaultRetention = retention
now := time.Now()
config.CreatedAt = &now
}
return json.Marshal(config)
}
func ParseBucketLockConfigurationOutput(input []byte) (*types.ObjectLockConfiguration, error) {
var config BucketLockConfig
if err := json.Unmarshal(input, &config); err != nil {
return nil, fmt.Errorf("parse object lock config: %w", err)
}
result := &types.ObjectLockConfiguration{
Rule: &types.ObjectLockRule{
DefaultRetention: config.DefaultRetention,
},
}
if config.Enabled {
result.ObjectLockEnabled = types.ObjectLockEnabledEnabled
}
return result, nil
}
func ParseObjectLockRetentionInput(input []byte) ([]byte, error) {
var retention types.ObjectLockRetention
if err := xml.Unmarshal(input, &retention); err != nil {
return nil, s3err.GetAPIError(s3err.ErrInvalidRequest)
}
if retention.RetainUntilDate == nil || retention.RetainUntilDate.Before(time.Now()) {
return nil, s3err.GetAPIError(s3err.ErrPastObjectLockRetainDate)
}
switch retention.Mode {
case types.ObjectLockRetentionModeCompliance:
case types.ObjectLockRetentionModeGovernance:
default:
return nil, s3err.GetAPIError(s3err.ErrInvalidRequest)
}
return json.Marshal(retention)
}
func ParseObjectLockRetentionOutput(input []byte) (*types.ObjectLockRetention, error) {
var retention types.ObjectLockRetention
if err := json.Unmarshal(input, &retention); err != nil {
return nil, fmt.Errorf("parse object lock retention: %w", err)
}
return &retention, nil
}
func ParseObjectLegalHoldOutput(status *bool) *types.ObjectLockLegalHold {
if status == nil {
return nil
}
if *status {
return &types.ObjectLockLegalHold{
Status: types.ObjectLockLegalHoldStatusOn,
}
}
return &types.ObjectLockLegalHold{
Status: types.ObjectLockLegalHoldStatusOff,
}
}
func CheckObjectAccess(ctx context.Context, bucket, userAccess string, objects []string, isAdminOrRoot bool, be backend.Backend) error {
data, err := be.GetObjectLockConfiguration(ctx, bucket)
if err != nil {
if errors.Is(err, s3err.GetAPIError(s3err.ErrObjectLockConfigurationNotFound)) {
return nil
}
return err
}
var bucketLockConfig BucketLockConfig
if err := json.Unmarshal(data, &bucketLockConfig); err != nil {
return fmt.Errorf("parse object lock config: %w", err)
}
if !bucketLockConfig.Enabled {
return nil
}
objExists := true
for _, obj := range objects {
checkRetention := true
retentionData, err := be.GetObjectRetention(ctx, bucket, obj, "")
if errors.Is(err, s3err.GetAPIError(s3err.ErrNoSuchKey)) {
objExists = false
continue
}
if errors.Is(err, s3err.GetAPIError(s3err.ErrNoSuchObjectLockConfiguration)) {
checkRetention = false
}
if err != nil && checkRetention {
return err
}
if checkRetention {
retention, err := ParseObjectLockRetentionOutput(retentionData)
if err != nil {
return err
}
if retention.Mode != "" && retention.RetainUntilDate != nil {
if retention.RetainUntilDate.After(time.Now()) {
switch retention.Mode {
case types.ObjectLockRetentionModeGovernance:
if !isAdminOrRoot {
policy, err := be.GetBucketPolicy(ctx, bucket)
if errors.Is(err, s3err.GetAPIError(s3err.ErrNoSuchBucketPolicy)) {
return s3err.GetAPIError(s3err.ErrObjectLocked)
}
if err != nil {
return err
}
err = verifyBucketPolicy(policy, userAccess, bucket, obj, BypassGovernanceRetentionAction)
if err != nil {
return s3err.GetAPIError(s3err.ErrObjectLocked)
}
}
case types.ObjectLockRetentionModeCompliance:
return s3err.GetAPIError(s3err.ErrObjectLocked)
}
}
}
}
status, err := be.GetObjectLegalHold(ctx, bucket, obj, "")
if errors.Is(err, s3err.GetAPIError(s3err.ErrNoSuchObjectLockConfiguration)) {
continue
}
if err != nil {
return err
}
if *status && !isAdminOrRoot {
return s3err.GetAPIError(s3err.ErrObjectLocked)
}
}
if bucketLockConfig.DefaultRetention != nil && bucketLockConfig.CreatedAt != nil && objExists {
expirationDate := *bucketLockConfig.CreatedAt
if bucketLockConfig.DefaultRetention.Days != nil {
expirationDate = expirationDate.AddDate(0, 0, int(*bucketLockConfig.DefaultRetention.Days))
}
if bucketLockConfig.DefaultRetention.Years != nil {
expirationDate = expirationDate.AddDate(int(*bucketLockConfig.DefaultRetention.Years), 0, 0)
}
if expirationDate.After(time.Now()) {
switch bucketLockConfig.DefaultRetention.Mode {
case types.ObjectLockRetentionModeGovernance:
if !isAdminOrRoot {
policy, err := be.GetBucketPolicy(ctx, bucket)
if err != nil {
return err
}
err = verifyBucketPolicy(policy, userAccess, bucket, "", BypassGovernanceRetentionAction)
if err != nil {
return s3err.GetAPIError(s3err.ErrObjectLocked)
}
}
case types.ObjectLockRetentionModeCompliance:
return s3err.GetAPIError(s3err.ErrObjectLocked)
}
}
}
return nil
}
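A hypothetical call-site sketch: handlers would gate mutating object operations on this check before dispatching to the backend.

```
// Sketch only: verify none of the targeted keys are WORM-protected.
if err := auth.CheckObjectAccess(ctx, bucket, userAccess, keys, isAdminOrRoot, be); err != nil {
	return err
}
```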


@@ -20,6 +20,7 @@ import (
"encoding/base64"
"encoding/binary"
"encoding/json"
"errors"
"fmt"
"io"
"math"
@@ -45,10 +46,17 @@ import (
// When getting container metadata with the GetProperties method, the sdk returns
// the keys with the first letter capitalized; when accessing the metadata after
// listing the containers, it returns them with the first letter lowercased
type aclKey string
type key string
const aclKeyCapital aclKey = "Acl"
const aclKeyLower aclKey = "acl"
const (
keyAclCapital key = "Acl"
keyAclLower key = "acl"
keyTags key = "Tags"
keyPolicy key = "Policy"
keyBucketLock key = "Bucket-Lock"
keyObjRetention key = "Object_retention"
keyObjLegalHold key = "Object_legal_hold"
)
type Azure struct {
backend.BackendUnsupported
@@ -116,7 +124,22 @@ func (az *Azure) String() string {
func (az *Azure) CreateBucket(ctx context.Context, input *s3.CreateBucketInput, acl []byte) error {
meta := map[string]*string{
string(aclKeyCapital): backend.GetStringPtr(string(acl)),
string(keyAclCapital): backend.GetStringPtr(string(acl)),
}
if input.ObjectLockEnabledForBucket != nil && *input.ObjectLockEnabledForBucket {
now := time.Now()
defaultLock := auth.BucketLockConfig{
Enabled: true,
CreatedAt: &now,
}
defaultLockParsed, err := json.Marshal(defaultLock)
if err != nil {
return fmt.Errorf("parse default bucket lock state: %w", err)
}
meta[string(keyBucketLock)] = backend.GetStringPtr(string(defaultLockParsed))
}
_, err := az.client.CreateContainer(ctx, *input.Bucket, &container.CreateOptions{Metadata: meta})
return azureErrToS3Err(err)
@@ -181,6 +204,28 @@ func (az *Azure) PutObject(ctx context.Context, po *s3.PutObjectInput) (string,
return "", azureErrToS3Err(err)
}
// Set object legal hold
if po.ObjectLockLegalHoldStatus == types.ObjectLockLegalHoldStatusOn {
if err := az.PutObjectLegalHold(ctx, *po.Bucket, *po.Key, "", true); err != nil {
return "", err
}
}
// Set object retention
if po.ObjectLockMode != "" {
retention := types.ObjectLockRetention{
Mode: types.ObjectLockRetentionMode(po.ObjectLockMode),
RetainUntilDate: po.ObjectLockRetainUntilDate,
}
retParsed, err := json.Marshal(retention)
if err != nil {
return "", fmt.Errorf("parse object lock retention: %w", err)
}
if err := az.PutObjectRetention(ctx, *po.Bucket, *po.Key, "", retParsed); err != nil {
return "", err
}
}
return string(*uploadResp.ETag), nil
}
@@ -196,24 +241,17 @@ func (az *Azure) PutBucketTagging(ctx context.Context, bucket string, tags map[s
}
if tags == nil {
_, err := client.SetMetadata(ctx, &container.SetMetadataOptions{Metadata: map[string]*string{
string(aclKeyCapital): resp.Metadata[string(aclKeyCapital)],
}})
delete(resp.Metadata, string(keyTags))
} else {
tagsJson, err := json.Marshal(tags)
if err != nil {
return azureErrToS3Err(err)
return err
}
return nil
resp.Metadata[string(keyTags)] = backend.GetStringPtr(string(tagsJson))
}
_, ok := tags[string(aclKeyLower)]
if ok {
delete(tags, string(aclKeyLower))
}
tags[string(aclKeyCapital)] = *resp.Metadata[string(aclKeyCapital)]
_, err = client.SetMetadata(ctx, &container.SetMetadataOptions{Metadata: parseMetadata(tags)})
_, err = client.SetMetadata(ctx, &container.SetMetadataOptions{Metadata: resp.Metadata})
if err != nil {
return azureErrToS3Err(err)
}
@@ -232,9 +270,17 @@ func (az *Azure) GetBucketTagging(ctx context.Context, bucket string) (map[strin
return nil, azureErrToS3Err(err)
}
delete(resp.Metadata, string(aclKeyCapital))
tagsJson, ok := resp.Metadata[string(keyTags)]
if !ok {
return nil, s3err.GetAPIError(s3err.ErrBucketTaggingNotFound)
}
return parseAzMetadata(resp.Metadata), nil
var tags map[string]string
if err := json.Unmarshal([]byte(*tagsJson), &tags); err != nil {
return nil, err
}
return tags, nil
}
func (az *Azure) DeleteBucketTagging(ctx context.Context, bucket string) error {
@@ -309,6 +355,61 @@ func (az *Azure) HeadObject(ctx context.Context, input *s3.HeadObjectInput) (*s3
}, nil
}
func (az *Azure) GetObjectAttributes(ctx context.Context, input *s3.GetObjectAttributesInput) (s3response.GetObjectAttributesResult, error) {
data, err := az.HeadObject(ctx, &s3.HeadObjectInput{
Bucket: input.Bucket,
Key: input.Key,
})
if err == nil {
return s3response.GetObjectAttributesResult{
ETag: data.ETag,
LastModified: data.LastModified,
ObjectSize: data.ContentLength,
StorageClass: &data.StorageClass,
VersionId: data.VersionId,
}, nil
}
if !errors.Is(err, s3err.GetAPIError(s3err.ErrNoSuchKey)) {
return s3response.GetObjectAttributesResult{}, err
}
resp, err := az.ListParts(ctx, &s3.ListPartsInput{
Bucket: input.Bucket,
Key: input.Key,
PartNumberMarker: input.PartNumberMarker,
MaxParts: input.MaxParts,
})
if errors.Is(err, s3err.GetAPIError(s3err.ErrNoSuchUpload)) {
return s3response.GetObjectAttributesResult{}, s3err.GetAPIError(s3err.ErrNoSuchKey)
}
if err != nil {
return s3response.GetObjectAttributesResult{}, err
}
parts := []types.ObjectPart{}
for _, p := range resp.Parts {
partNumber := int32(p.PartNumber)
size := p.Size
parts = append(parts, types.ObjectPart{
Size: &size,
PartNumber: &partNumber,
})
}
//TODO: handle PartsCount prop
return s3response.GetObjectAttributesResult{
ObjectParts: &s3response.ObjectParts{
IsTruncated: resp.IsTruncated,
MaxParts: resp.MaxParts,
PartNumberMarker: resp.PartNumberMarker,
NextPartNumberMarker: resp.NextPartNumberMarker,
Parts: parts,
},
}, nil
}
func (az *Azure) ListObjects(ctx context.Context, input *s3.ListObjectsInput) (*s3.ListObjectsOutput, error) {
pager := az.client.NewListBlobsFlatPager(*input.Bucket, &azblob.ListBlobsFlatOptions{
Marker: input.Marker,
@@ -625,7 +726,7 @@ func (az *Azure) ListParts(ctx context.Context, input *s3.ListPartsInput) (s3res
}
parts := []s3response.Part{}
for _, el := range resp.BlockList.UncommittedBlocks {
for _, el := range resp.UncommittedBlocks {
partNumber, err := decodeBlockId(*el.Name)
if err != nil {
return s3response.ListPartsResult{}, err
@@ -751,11 +852,14 @@ func (az *Azure) PutBucketAcl(ctx context.Context, bucket string, data []byte) e
if err != nil {
return err
}
meta := map[string]*string{
string(aclKeyCapital): backend.GetStringPtr(string(data)),
props, err := client.GetProperties(ctx, nil)
if err != nil {
return azureErrToS3Err(err)
}
props.Metadata[string(keyAclCapital)] = backend.GetStringPtr(string(data))
_, err = client.SetMetadata(ctx, &container.SetMetadataOptions{
Metadata: meta,
Metadata: props.Metadata,
})
if err != nil {
return azureErrToS3Err(err)
@@ -773,7 +877,7 @@ func (az *Azure) GetBucketAcl(ctx context.Context, input *s3.GetBucketAclInput)
return nil, azureErrToS3Err(err)
}
aclPtr, ok := props.Metadata[string(aclKeyCapital)]
aclPtr, ok := props.Metadata[string(keyAclCapital)]
if !ok {
return nil, s3err.GetAPIError(s3err.ErrInternalError)
}
@@ -781,6 +885,249 @@ func (az *Azure) GetBucketAcl(ctx context.Context, input *s3.GetBucketAclInput)
return []byte(*aclPtr), nil
}
func (az *Azure) PutBucketPolicy(ctx context.Context, bucket string, policy []byte) error {
client, err := az.getContainerClient(bucket)
if err != nil {
return err
}
props, err := client.GetProperties(ctx, nil)
if err != nil {
return azureErrToS3Err(err)
}
if policy == nil {
delete(props.Metadata, string(keyPolicy))
} else {
// Store policy as base64 encoded, because storing raw json causes an SDK error
policyEncoded := base64.StdEncoding.EncodeToString(policy)
props.Metadata[string(keyPolicy)] = &policyEncoded
}
_, err = client.SetMetadata(ctx, &container.SetMetadataOptions{
Metadata: props.Metadata,
})
if err != nil {
return azureErrToS3Err(err)
}
return nil
}
func (az *Azure) GetBucketPolicy(ctx context.Context, bucket string) ([]byte, error) {
client, err := az.getContainerClient(bucket)
if err != nil {
return nil, err
}
props, err := client.GetProperties(ctx, nil)
if err != nil {
return nil, azureErrToS3Err(err)
}
policyPtr, ok := props.Metadata[string(keyPolicy)]
if !ok {
return nil, s3err.GetAPIError(s3err.ErrNoSuchBucket)
}
policy, err := base64.StdEncoding.DecodeString(*policyPtr)
if err != nil {
return nil, err
}
return policy, nil
}
func (az *Azure) DeleteBucketPolicy(ctx context.Context, bucket string) error {
return az.PutBucketPolicy(ctx, bucket, nil)
}
func (az *Azure) PutObjectLockConfiguration(ctx context.Context, bucket string, config []byte) error {
client, err := az.getContainerClient(bucket)
if err != nil {
return err
}
props, err := client.GetProperties(ctx, nil)
if err != nil {
return azureErrToS3Err(err)
}
props.Metadata[string(keyBucketLock)] = backend.GetStringPtr(string(config))
_, err = client.SetMetadata(ctx, &container.SetMetadataOptions{
Metadata: props.Metadata,
})
if err != nil {
return azureErrToS3Err(err)
}
return nil
}
func (az *Azure) GetObjectLockConfiguration(ctx context.Context, bucket string) ([]byte, error) {
client, err := az.getContainerClient(bucket)
if err != nil {
return nil, err
}
props, err := client.GetProperties(ctx, nil)
if err != nil {
return nil, azureErrToS3Err(err)
}
config, ok := props.Metadata[string(keyBucketLock)]
if !ok {
return nil, s3err.GetAPIError(s3err.ErrObjectLockConfigurationNotFound)
}
return []byte(*config), nil
}
func (az *Azure) PutObjectRetention(ctx context.Context, bucket, object, versionId string, retention []byte) error {
contClient, err := az.getContainerClient(bucket)
if err != nil {
return err
}
contProps, err := contClient.GetProperties(ctx, nil)
if err != nil {
return azureErrToS3Err(err)
}
contCfg, ok := contProps.Metadata[string(keyBucketLock)]
if !ok {
return s3err.GetAPIError(s3err.ErrInvalidBucketObjectLockConfiguration)
}
var bucketLockConfig auth.BucketLockConfig
if err := json.Unmarshal([]byte(*contCfg), &bucketLockConfig); err != nil {
return fmt.Errorf("parse bucket lock config: %w", err)
}
if !bucketLockConfig.Enabled {
return s3err.GetAPIError(s3err.ErrInvalidBucketObjectLockConfiguration)
}
blobClient, err := az.getBlobClient(bucket, object)
if err != nil {
return err
}
blobProps, err := blobClient.GetProperties(ctx, nil)
if err != nil {
return azureErrToS3Err(err)
}
meta := blobProps.Metadata
if meta == nil {
meta = map[string]*string{
string(keyObjRetention): backend.GetStringPtr(string(retention)),
}
} else {
meta[string(keyObjRetention)] = backend.GetStringPtr(string(retention))
}
_, err = blobClient.SetMetadata(ctx, meta, nil)
if err != nil {
return azureErrToS3Err(err)
}
return nil
}
func (az *Azure) GetObjectRetention(ctx context.Context, bucket, object, versionId string) ([]byte, error) {
client, err := az.getBlobClient(bucket, object)
if err != nil {
return nil, err
}
props, err := client.GetProperties(ctx, nil)
if err != nil {
return nil, azureErrToS3Err(err)
}
retentionPtr, ok := props.Metadata[string(keyObjRetention)]
if !ok {
return nil, s3err.GetAPIError(s3err.ErrNoSuchObjectLockConfiguration)
}
return []byte(*retentionPtr), nil
}
func (az *Azure) PutObjectLegalHold(ctx context.Context, bucket, object, versionId string, status bool) error {
contClient, err := az.getContainerClient(bucket)
if err != nil {
return err
}
contProps, err := contClient.GetProperties(ctx, nil)
if err != nil {
return azureErrToS3Err(err)
}
contCfg, ok := contProps.Metadata[string(keyBucketLock)]
if !ok {
return s3err.GetAPIError(s3err.ErrInvalidBucketObjectLockConfiguration)
}
var bucketLockConfig auth.BucketLockConfig
if err := json.Unmarshal([]byte(*contCfg), &bucketLockConfig); err != nil {
return fmt.Errorf("parse bucket lock config: %w", err)
}
if !bucketLockConfig.Enabled {
return s3err.GetAPIError(s3err.ErrInvalidBucketObjectLockConfiguration)
}
blobClient, err := az.getBlobClient(bucket, object)
if err != nil {
return err
}
blobProps, err := blobClient.GetProperties(ctx, nil)
if err != nil {
return azureErrToS3Err(err)
}
var statusData string
if status {
statusData = "1"
} else {
statusData = "0"
}
meta := blobProps.Metadata
if meta == nil {
meta = map[string]*string{
string(keyObjLegalHold): &statusData,
}
} else {
meta[string(keyObjLegalHold)] = &statusData
}
_, err = blobClient.SetMetadata(ctx, meta, nil)
if err != nil {
return azureErrToS3Err(err)
}
return nil
}
func (az *Azure) GetObjectLegalHold(ctx context.Context, bucket, object, versionId string) (*bool, error) {
client, err := az.getBlobClient(bucket, object)
if err != nil {
return nil, err
}
props, err := client.GetProperties(ctx, nil)
if err != nil {
return nil, azureErrToS3Err(err)
}
retentionPtr, ok := props.Metadata[string(keyObjLegalHold)]
if !ok {
return nil, s3err.GetAPIError(s3err.ErrNoSuchObjectLockConfiguration)
}
status := *retentionPtr == "1"
return &status, nil
}
func (az *Azure) ChangeBucketOwner(ctx context.Context, bucket, newOwner string) error {
client, err := az.getContainerClient(bucket)
if err != nil {
@@ -791,7 +1138,7 @@ func (az *Azure) ChangeBucketOwner(ctx context.Context, bucket, newOwner string)
return azureErrToS3Err(err)
}
acl, err := getAclFromMetadata(props.Metadata, aclKeyCapital)
acl, err := getAclFromMetadata(props.Metadata, keyAclCapital)
if err != nil {
return err
}
@@ -822,7 +1169,7 @@ func (az *Azure) ListBucketsAndOwners(ctx context.Context) (buckets []s3response
return buckets, azureErrToS3Err(err)
}
for _, v := range resp.ContainerItems {
acl, err := getAclFromMetadata(v.Metadata, aclKeyLower)
acl, err := getAclFromMetadata(v.Metadata, keyAclLower)
if err != nil {
return buckets, err
}
@@ -999,7 +1346,7 @@ func parseRange(rg string) (offset, count int64, err error) {
return offset, count - offset + 1, nil
}
func getAclFromMetadata(meta map[string]*string, key aclKey) (*auth.ACL, error) {
func getAclFromMetadata(meta map[string]*string, key key) (*auth.ACL, error) {
aclPtr, ok := meta[string(key)]
if !ok {
return nil, s3err.GetAPIError(s3err.ErrInternalError)
@@ -1020,7 +1367,7 @@ func isMetaSame(azMeta map[string]*string, awsMeta map[string]string) bool {
}
for key, val := range azMeta {
if key == string(aclKeyCapital) || key == string(aclKeyLower) {
if key == string(keyAclCapital) || key == string(keyAclLower) {
continue
}
awsVal, ok := awsMeta[key]


@@ -58,7 +58,7 @@ type Backend interface {
HeadObject(context.Context, *s3.HeadObjectInput) (*s3.HeadObjectOutput, error)
GetObject(context.Context, *s3.GetObjectInput, io.Writer) (*s3.GetObjectOutput, error)
GetObjectAcl(context.Context, *s3.GetObjectAclInput) (*s3.GetObjectAclOutput, error)
GetObjectAttributes(context.Context, *s3.GetObjectAttributesInput) (*s3.GetObjectAttributesOutput, error)
GetObjectAttributes(context.Context, *s3.GetObjectAttributesInput) (s3response.GetObjectAttributesResult, error)
CopyObject(context.Context, *s3.CopyObjectInput) (*s3.CopyObjectOutput, error)
ListObjects(context.Context, *s3.ListObjectsInput) (*s3.ListObjectsOutput, error)
ListObjectsV2(context.Context, *s3.ListObjectsV2Input) (*s3.ListObjectsV2Output, error)
@@ -81,6 +81,14 @@ type Backend interface {
PutObjectTagging(_ context.Context, bucket, object string, tags map[string]string) error
DeleteObjectTagging(_ context.Context, bucket, object string) error
// object lock operations
PutObjectLockConfiguration(_ context.Context, bucket string, config []byte) error
GetObjectLockConfiguration(_ context.Context, bucket string) ([]byte, error)
PutObjectRetention(_ context.Context, bucket, object, versionId string, retention []byte) error
GetObjectRetention(_ context.Context, bucket, object, versionId string) ([]byte, error)
PutObjectLegalHold(_ context.Context, bucket, object, versionId string, status bool) error
GetObjectLegalHold(_ context.Context, bucket, object, versionId string) (*bool, error)
// non AWS actions
ChangeBucketOwner(_ context.Context, bucket, newOwner string) error
ListBucketsAndOwners(context.Context) ([]s3response.Bucket, error)
@@ -165,8 +173,8 @@ func (BackendUnsupported) GetObject(context.Context, *s3.GetObjectInput, io.Writ
func (BackendUnsupported) GetObjectAcl(context.Context, *s3.GetObjectAclInput) (*s3.GetObjectAclOutput, error) {
return nil, s3err.GetAPIError(s3err.ErrNotImplemented)
}
func (BackendUnsupported) GetObjectAttributes(context.Context, *s3.GetObjectAttributesInput) (*s3.GetObjectAttributesOutput, error) {
return nil, s3err.GetAPIError(s3err.ErrNotImplemented)
func (BackendUnsupported) GetObjectAttributes(context.Context, *s3.GetObjectAttributesInput) (s3response.GetObjectAttributesResult, error) {
return s3response.GetObjectAttributesResult{}, s3err.GetAPIError(s3err.ErrNotImplemented)
}
func (BackendUnsupported) CopyObject(context.Context, *s3.CopyObjectInput) (*s3.CopyObjectOutput, error) {
return nil, s3err.GetAPIError(s3err.ErrNotImplemented)
@@ -229,6 +237,25 @@ func (BackendUnsupported) DeleteObjectTagging(_ context.Context, bucket, object
return s3err.GetAPIError(s3err.ErrNotImplemented)
}
func (BackendUnsupported) PutObjectLockConfiguration(_ context.Context, bucket string, config []byte) error {
return s3err.GetAPIError(s3err.ErrNotImplemented)
}
func (BackendUnsupported) GetObjectLockConfiguration(_ context.Context, bucket string) ([]byte, error) {
return nil, s3err.GetAPIError(s3err.ErrNotImplemented)
}
func (BackendUnsupported) PutObjectRetention(_ context.Context, bucket, object, versionId string, retention []byte) error {
return s3err.GetAPIError(s3err.ErrNotImplemented)
}
func (BackendUnsupported) GetObjectRetention(_ context.Context, bucket, object, versionId string) ([]byte, error) {
return nil, s3err.GetAPIError(s3err.ErrNotImplemented)
}
func (BackendUnsupported) PutObjectLegalHold(_ context.Context, bucket, object, versionId string, status bool) error {
return s3err.GetAPIError(s3err.ErrNotImplemented)
}
func (BackendUnsupported) GetObjectLegalHold(_ context.Context, bucket, object, versionId string) (*bool, error) {
return nil, s3err.GetAPIError(s3err.ErrNotImplemented)
}
func (BackendUnsupported) ChangeBucketOwner(_ context.Context, bucket, newOwner string) error {
return s3err.GetAPIError(s3err.ErrNotImplemented)
}

backend/meta/meta.go (new file)

@@ -0,0 +1,26 @@
package meta
// MetadataStorer defines the interface for managing metadata.
// When object == "", the operation is on the bucket.
type MetadataStorer interface {
// RetrieveAttribute retrieves the value of a specific attribute for an object or a bucket.
// Returns the value of the attribute, or an error if the attribute does not exist.
RetrieveAttribute(bucket, object, attribute string) ([]byte, error)
// StoreAttribute stores the value of a specific attribute for an object or a bucket.
// If the attribute already exists, the new value replaces the existing one.
// Returns an error if the operation fails.
StoreAttribute(bucket, object, attribute string, value []byte) error
// DeleteAttribute removes the value of a specific attribute for an object or a bucket.
// Returns an error if the operation fails.
DeleteAttribute(bucket, object, attribute string) error
// ListAttributes lists all attributes for an object or a bucket.
// Returns list of attribute names, or an error if the operation fails.
ListAttributes(bucket, object string) ([]string, error)
// DeleteAttributes removes all attributes for an object or a bucket.
// Returns an error if the operation fails.
DeleteAttributes(bucket, object string) error
}
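As a usage illustration (hypothetical, not part of this diff), a backend could persist bucket-level metadata through the interface without knowing whether xattrs back it:

```
// Sketch only: object == "" targets the bucket itself, per the
// interface contract above.
func storeBucketACL(ms MetadataStorer, bucket string, acl []byte) error {
	return ms.StoreAttribute(bucket, "", "acl", acl)
}
```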

backend/meta/xattr.go (new file)

@@ -0,0 +1,87 @@
package meta
import (
"errors"
"fmt"
"path/filepath"
"strings"
"syscall"
"github.com/pkg/xattr"
)
const (
xattrPrefix = "user."
)
var (
// ErrNoSuchKey is returned when the key does not exist.
ErrNoSuchKey = errors.New("no such key")
)
type XattrMeta struct{}
// RetrieveAttribute retrieves the value of a specific attribute for an object in a bucket.
func (x XattrMeta) RetrieveAttribute(bucket, object, attribute string) ([]byte, error) {
b, err := xattr.Get(filepath.Join(bucket, object), xattrPrefix+attribute)
if errors.Is(err, xattr.ENOATTR) {
return nil, ErrNoSuchKey
}
return b, err
}
// StoreAttribute stores the value of a specific attribute for an object in a bucket.
func (x XattrMeta) StoreAttribute(bucket, object, attribute string, value []byte) error {
return xattr.Set(filepath.Join(bucket, object), xattrPrefix+attribute, value)
}
// DeleteAttribute removes the value of a specific attribute for an object in a bucket.
func (x XattrMeta) DeleteAttribute(bucket, object, attribute string) error {
err := xattr.Remove(filepath.Join(bucket, object), xattrPrefix+attribute)
if errors.Is(err, xattr.ENOATTR) {
return ErrNoSuchKey
}
return err
}
// DeleteAttributes is not implemented for xattr since xattrs
// are automatically removed when the file is deleted.
func (x XattrMeta) DeleteAttributes(bucket, object string) error {
return nil
}
// ListAttributes lists all attributes for an object in a bucket.
func (x XattrMeta) ListAttributes(bucket, object string) ([]string, error) {
attrs, err := xattr.List(filepath.Join(bucket, object))
if err != nil {
return nil, err
}
attributes := make([]string, 0, len(attrs))
for _, attr := range attrs {
if !isUserAttr(attr) {
continue
}
attributes = append(attributes, strings.TrimPrefix(attr, xattrPrefix))
}
return attributes, nil
}
func isUserAttr(attr string) bool {
return strings.HasPrefix(attr, xattrPrefix)
}
// Test is a helper function to test if xattrs are supported.
func (x XattrMeta) Test(path string) error {
// check for platform support
if !xattr.XATTR_SUPPORTED {
return fmt.Errorf("xattrs are not supported on this platform")
}
// check if the filesystem supports xattrs
_, err := xattr.Get(path, "user.test")
if errors.Is(err, syscall.ENOTSUP) {
return fmt.Errorf("xattrs are not supported on this filesystem")
}
return nil
}

File diff suppressed because it is too large


@@ -1,24 +0,0 @@
// Copyright 2024 Versity Software
// This file is licensed under the Apache License, Version 2.0
// (the "License"); you may not use this file except in compliance
// with the License. You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing,
// software distributed under the License is distributed on an
// "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
// KIND, either express or implied. See the License for the
// specific language governing permissions and limitations
// under the License.
//go:build !freebsd && !openbsd && !netbsd
// +build !freebsd,!openbsd,!netbsd
package posix
import "syscall"
var (
errNoData = syscall.ENODATA
)

View File

@@ -1,24 +0,0 @@
// Copyright 2024 Versity Software
// This file is licensed under the Apache License, Version 2.0
// (the "License"); you may not use this file except in compliance
// with the License. You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing,
// software distributed under the License is distributed on an
// "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
// KIND, either express or implied. See the License for the
// specific language governing permissions and limitations
// under the License.
//go:build freebsd || openbsd || netbsd
// +build freebsd openbsd netbsd
package posix
import "syscall"
var (
errNoData = syscall.ENOATTR
)

View File

@@ -33,6 +33,7 @@ import (
"github.com/aws/aws-sdk-go-v2/service/s3"
"github.com/aws/aws-sdk-go-v2/service/s3/types"
"github.com/aws/smithy-go"
"github.com/versity/versitygw/auth"
"github.com/versity/versitygw/backend"
"github.com/versity/versitygw/s3err"
"github.com/versity/versitygw/s3response"
@@ -295,9 +296,41 @@ func (s *S3Proxy) GetObject(ctx context.Context, input *s3.GetObjectInput, w io.
return output, nil
}
func (s *S3Proxy) GetObjectAttributes(ctx context.Context, input *s3.GetObjectAttributesInput) (s3response.GetObjectAttributesResult, error) {
    out, err := s.client.GetObjectAttributes(ctx, input)
    if err != nil {
        return s3response.GetObjectAttributesResult{}, handleError(err)
    }
    parts := s3response.ObjectParts{}
    objParts := out.ObjectParts
    if objParts != nil {
        if objParts.PartNumberMarker != nil {
            // the upstream response carries the markers as strings
            partNumberMarker, err := strconv.Atoi(*objParts.PartNumberMarker)
            if err == nil {
                parts.PartNumberMarker = partNumberMarker
            }
        }
        if objParts.NextPartNumberMarker != nil {
            nextPartNumberMarker, err := strconv.Atoi(*objParts.NextPartNumberMarker)
            if err == nil {
                parts.NextPartNumberMarker = nextPartNumberMarker
            }
        }
        if objParts.IsTruncated != nil {
            parts.IsTruncated = *objParts.IsTruncated
        }
        if objParts.MaxParts != nil {
            parts.MaxParts = int(*objParts.MaxParts)
        }
        parts.Parts = objParts.Parts
    }
    return s3response.GetObjectAttributesResult{
        ETag:         out.ETag,
        LastModified: out.LastModified,
        ObjectSize:   out.ObjectSize,
        StorageClass: &out.StorageClass,
        VersionId:    out.VersionId,
        ObjectParts:  &parts,
    }, nil
}
func (s *S3Proxy) CopyObject(ctx context.Context, input *s3.CopyObjectInput) (*s3.CopyObjectOutput, error) {
@@ -436,6 +469,128 @@ func (s *S3Proxy) DeleteObjectTagging(ctx context.Context, bucket, object string
return handleError(err)
}
func (s *S3Proxy) PutBucketPolicy(ctx context.Context, bucket string, policy []byte) error {
_, err := s.client.PutBucketPolicy(ctx, &s3.PutBucketPolicyInput{
Bucket: &bucket,
Policy: backend.GetStringPtr(string(policy)),
})
return handleError(err)
}
func (s *S3Proxy) GetBucketPolicy(ctx context.Context, bucket string) ([]byte, error) {
policy, err := s.client.GetBucketPolicy(ctx, &s3.GetBucketPolicyInput{
Bucket: &bucket,
})
if err != nil {
return nil, handleError(err)
}
result := []byte{}
if policy.Policy != nil {
result = []byte(*policy.Policy)
}
return result, nil
}
func (s *S3Proxy) DeleteBucketPolicy(ctx context.Context, bucket string) error {
_, err := s.client.DeleteBucketPolicy(ctx, &s3.DeleteBucketPolicyInput{
Bucket: &bucket,
})
return handleError(err)
}
func (s *S3Proxy) PutObjectLockConfiguration(ctx context.Context, bucket string, config []byte) error {
cfg, err := auth.ParseBucketLockConfigurationOutput(config)
if err != nil {
return err
}
_, err = s.client.PutObjectLockConfiguration(ctx, &s3.PutObjectLockConfigurationInput{
Bucket: &bucket,
ObjectLockConfiguration: cfg,
})
return handleError(err)
}
func (s *S3Proxy) GetObjectLockConfiguration(ctx context.Context, bucket string) ([]byte, error) {
resp, err := s.client.GetObjectLockConfiguration(ctx, &s3.GetObjectLockConfigurationInput{
Bucket: &bucket,
})
if err != nil {
return nil, handleError(err)
}
config := auth.BucketLockConfig{
Enabled: resp.ObjectLockConfiguration.ObjectLockEnabled == types.ObjectLockEnabledEnabled,
DefaultRetention: resp.ObjectLockConfiguration.Rule.DefaultRetention,
}
return json.Marshal(config)
}
func (s *S3Proxy) PutObjectRetention(ctx context.Context, bucket, object, versionId string, retention []byte) error {
ret, err := auth.ParseObjectLockRetentionOutput(retention)
if err != nil {
return err
}
_, err = s.client.PutObjectRetention(ctx, &s3.PutObjectRetentionInput{
Bucket: &bucket,
Key: &object,
VersionId: &versionId,
Retention: ret,
})
return handleError(err)
}
func (s *S3Proxy) GetObjectRetention(ctx context.Context, bucket, object, versionId string) ([]byte, error) {
resp, err := s.client.GetObjectRetention(ctx, &s3.GetObjectRetentionInput{
Bucket: &bucket,
Key: &object,
VersionId: &versionId,
})
if err != nil {
return nil, handleError(err)
}
return json.Marshal(resp.Retention)
}
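// PutObjectLegalHold translates the boolean status into the S3 legal hold
// enum (true = ON, false = OFF) and forwards it to the upstream service.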
func (s *S3Proxy) PutObjectLegalHold(ctx context.Context, bucket, object, versionId string, status bool) error {
var st types.ObjectLockLegalHoldStatus
if status {
st = types.ObjectLockLegalHoldStatusOn
} else {
st = types.ObjectLockLegalHoldStatusOff
}
_, err := s.client.PutObjectLegalHold(ctx, &s3.PutObjectLegalHoldInput{
Bucket: &bucket,
Key: &object,
VersionId: &versionId,
LegalHold: &types.ObjectLockLegalHold{
Status: st,
},
})
return handleError(err)
}
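// GetObjectLegalHold reports the legal hold state as a *bool, true when the
// upstream status is ON.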
func (s *S3Proxy) GetObjectLegalHold(ctx context.Context, bucket, object, versionId string) (*bool, error) {
resp, err := s.client.GetObjectLegalHold(ctx, &s3.GetObjectLegalHoldInput{
Bucket: &bucket,
Key: &object,
VersionId: &versionId,
})
if err != nil {
return nil, handleError(err)
}
status := resp.LegalHold.Status == types.ObjectLockLegalHoldStatusOn
return &status, nil
}
func (s *S3Proxy) ChangeBucketOwner(ctx context.Context, bucket, newOwner string) error {
req, err := http.NewRequest(http.MethodPatch, fmt.Sprintf("%v/change-bucket-owner/?bucket=%v&owner=%v", s.endpoint, bucket, newOwner), nil)
if err != nil {

View File

@@ -39,7 +39,6 @@ import (
type ScoutfsOpts struct {
ChownUID bool
ChownGID bool
ReadOnly bool
GlacierMode bool
}
@@ -64,9 +63,6 @@ type ScoutFS struct {
chownuid bool
chowngid bool
// read only mode prevents any backend modifications
readonly bool
// euid/egid are the effective uid/gid of the running versitygw process
// used to determine if chowning is needed
euid int
@@ -147,10 +143,6 @@ func (s *ScoutFS) getChownIDs(acct auth.Account) (int, int, bool) {
// ioctl to not have to read and copy the part data to the final object. This
// saves a read and write cycle for all mutlipart uploads.
func (s *ScoutFS) CompleteMultipartUpload(ctx context.Context, input *s3.CompleteMultipartUploadInput) (*s3.CompleteMultipartUploadOutput, error) {
if s.readonly {
return nil, s3err.GetAPIError(s3err.ErrAccessDenied)
}
acct, ok := ctx.Value("account").(auth.Account)
if !ok {
acct = auth.Account{}

View File

@@ -30,14 +30,14 @@ import (
"github.com/versity/scoutfs-go"
"github.com/versity/versitygw/auth"
"github.com/versity/versitygw/backend"
"github.com/versity/versitygw/backend/meta"
"github.com/versity/versitygw/backend/posix"
)
func New(rootdir string, opts ScoutfsOpts) (*ScoutFS, error) {
p, err := posix.New(rootdir, meta.XattrMeta{}, posix.PosixOpts{
ChownUID: opts.ChownUID,
ChownGID: opts.ChownGID,
ReadOnly: opts.ReadOnly,
})
if err != nil {
return nil, err
@@ -54,7 +54,6 @@ func New(rootdir string, opts ScoutfsOpts) (*ScoutFS, error) {
rootdir: rootdir,
chownuid: opts.ChownUID,
chowngid: opts.ChownGID,
readonly: opts.ReadOnly,
}, nil
}

View File

@@ -42,7 +42,6 @@ func (s *ScoutFS) openTmpFile(_, _, _ string, _ int64, _ auth.Account) (*tmpfile
_ = s.chowngid
_ = s.euid
_ = s.egid
_ = s.readonly
return nil, errNotSupported
}

View File

@@ -17,6 +17,7 @@ package main
import (
"bytes"
"crypto/sha256"
"crypto/tls"
"encoding/hex"
"encoding/json"
"fmt"
@@ -37,6 +38,7 @@ var (
adminAccess string
adminSecret string
adminEndpoint string
allowInsecure bool
)
func adminCommand() *cli.Command {
@@ -154,10 +156,24 @@ func adminCommand() *cli.Command {
Required: true,
Destination: &adminEndpoint,
},
&cli.BoolFlag{
Name: "allow-insecure",
Usage: "disable tls certificate verification for the admin endpoint",
EnvVars: []string{"ADMIN_ALLOW_INSECURE"},
Aliases: []string{"ai"},
Destination: &allowInsecure,
},
},
}
}
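// initHTTPClient returns the HTTP client used by the admin commands. When
// --allow-insecure (or ADMIN_ALLOW_INSECURE) is set, TLS certificate
// verification is skipped; this is intended for test endpoints with
// self-signed certificates, not for production use.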
func initHTTPClient() *http.Client {
tr := &http.Transport{
TLSClientConfig: &tls.Config{InsecureSkipVerify: allowInsecure},
}
return &http.Client{Transport: tr}
}
func createUser(ctx *cli.Context) error {
access, secret, role := ctx.String("access"), ctx.String("secret"), ctx.String("role")
userID, groupID, projectID := ctx.Int("user-id"), ctx.Int("group-id"), ctx.Int("projectID")
@@ -199,18 +215,22 @@ func createUser(ctx *cli.Context) error {
return fmt.Errorf("failed to sign the request: %w", err)
}
client := initHTTPClient()
resp, err := client.Do(req)
if err != nil {
return fmt.Errorf("failed to send the request: %w", err)
}
defer resp.Body.Close()
body, err := io.ReadAll(resp.Body)
if err != nil {
return err
}
if resp.StatusCode >= 400 {
return fmt.Errorf("%s", body)
}
fmt.Printf("%s\n", body)
@@ -240,18 +260,22 @@ func deleteUser(ctx *cli.Context) error {
return fmt.Errorf("failed to sign the request: %w", err)
}
client := initHTTPClient()
resp, err := client.Do(req)
if err != nil {
return fmt.Errorf("failed to send the request: %w", err)
}
defer resp.Body.Close()
body, err := io.ReadAll(resp.Body)
if err != nil {
return err
}
if resp.StatusCode >= 400 {
return fmt.Errorf("%s", body)
}
fmt.Printf("%s\n", body)
@@ -276,18 +300,18 @@ func listUsers(ctx *cli.Context) error {
return fmt.Errorf("failed to sign the request: %w", err)
}
client := initHTTPClient()
resp, err := client.Do(req)
if err != nil {
return fmt.Errorf("failed to send the request: %w", err)
}
defer resp.Body.Close()
body, err := io.ReadAll(resp.Body)
if err != nil {
return err
}
if resp.StatusCode >= 400 {
return fmt.Errorf("%s", body)
@@ -343,18 +367,22 @@ func changeBucketOwner(ctx *cli.Context) error {
return fmt.Errorf("failed to sign the request: %w", err)
}
client := initHTTPClient()
resp, err := client.Do(req)
if err != nil {
return fmt.Errorf("failed to send the request: %w", err)
}
defer resp.Body.Close()
body, err := io.ReadAll(resp.Body)
if err != nil {
return err
}
if resp.StatusCode >= 400 {
return fmt.Errorf("%s", body)
}
fmt.Println(string(body))
@@ -391,18 +419,18 @@ func listBuckets(ctx *cli.Context) error {
return fmt.Errorf("failed to sign the request: %w", err)
}
client := initHTTPClient()
resp, err := client.Do(req)
if err != nil {
return fmt.Errorf("failed to send the request: %w", err)
}
defer resp.Body.Close()
body, err := io.ReadAll(resp.Body)
if err != nil {
return err
}
if resp.StatusCode >= 400 {
return fmt.Errorf("%s", body)

View File

@@ -8,6 +8,7 @@ import (
"sync"
"testing"
"github.com/versity/versitygw/backend/meta"
"github.com/versity/versitygw/backend/posix"
"github.com/versity/versitygw/tests/integration"
)
@@ -56,7 +57,7 @@ func initPosix(ctx context.Context) {
log.Fatalf("make temp directory: %v", err)
}
be, err := posix.New(tempdir, meta.XattrMeta{}, posix.PosixOpts{})
if err != nil {
log.Fatalf("init posix: %v", err)
}

View File

@@ -18,12 +18,12 @@ import (
"fmt"
"github.com/urfave/cli/v2"
"github.com/versity/versitygw/backend/meta"
"github.com/versity/versitygw/backend/posix"
)
var (
chownuid, chowngid bool
readonly bool
)
func posixCommand() *cli.Command {
@@ -54,12 +54,6 @@ will be translated into the file /mnt/fs/gwroot/mybucket/a/b/c/myobject`,
EnvVars: []string{"VGW_CHOWN_GID"},
Destination: &chowngid,
},
&cli.BoolFlag{
Name: "readonly",
Usage: "allow only read operations to backend",
EnvVars: []string{"VGW_READ_ONLY"},
Destination: &readonly,
},
},
}
}
@@ -69,10 +63,15 @@ func runPosix(ctx *cli.Context) error {
return fmt.Errorf("no directory provided for operation")
}
gwroot := ctx.Args().Get(0)
err := meta.XattrMeta{}.Test(gwroot)
if err != nil {
return fmt.Errorf("posix xattr check: %v", err)
}
be, err := posix.New(gwroot, meta.XattrMeta{}, posix.PosixOpts{
ChownUID: chownuid,
ChownGID: chowngid,
ReadOnly: readonly,
})
if err != nil {
return fmt.Errorf("init posix: %v", err)

View File

@@ -23,10 +23,6 @@ import (
var (
glacier bool
// defined in posix.go:
// chownuid, chowngid bool
// readonly bool
)
func scoutfsCommand() *cli.Command {
@@ -67,12 +63,6 @@ move interfaces as well as support for tiered filesystems.`,
EnvVars: []string{"VGW_CHOWN_GID"},
Destination: &chowngid,
},
&cli.BoolFlag{
Name: "readonly",
Usage: "allow only read operations to backend",
EnvVars: []string{"VGW_READ_ONLY"},
Destination: &readonly,
},
},
}
}
@@ -86,7 +76,6 @@ func runScoutfs(ctx *cli.Context) error {
opts.GlacierMode = glacier
opts.ChownUID = chownuid
opts.ChownGID = chowngid
opts.ReadOnly = readonly
be, err := scoutfs.New(ctx.Args().Get(0), opts)
if err != nil {

View File

@@ -54,19 +54,21 @@ func generateEventFiltersConfig(ctx *cli.Context) error {
}
config := s3event.EventFilter{
s3event.EventObjectCreated: true,
s3event.EventObjectCreatedPut: true,
s3event.EventObjectCreatedPost: true,
s3event.EventObjectCreatedCopy: true,
s3event.EventCompleteMultipartUpload: true,
s3event.EventObjectRemoved: true,
s3event.EventObjectRemovedDelete: true,
s3event.EventObjectRemovedDeleteObjects: true,
s3event.EventObjectTagging: true,
s3event.EventObjectTaggingPut: true,
s3event.EventObjectTaggingDelete: true,
s3event.EventObjectAclPut: true,
s3event.EventObjectRestore: true,
s3event.EventObjectRestorePost: true,
s3event.EventObjectRestoreCompleted: true,
}
configBytes, err := json.Marshal(config)
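For reference, a minimal consumer-side sketch of reading such a generated filter back. The file name and the event-key string are assumptions here, since the exact keys depend on how s3event.EventType values marshal to JSON:

package main

import (
    "encoding/json"
    "fmt"
    "os"
)

func main() {
    // "events.json" is an illustrative path for a generated filter file.
    data, err := os.ReadFile("events.json")
    if err != nil {
        fmt.Println("read:", err)
        return
    }
    var filter map[string]bool
    if err := json.Unmarshal(data, &filter); err != nil {
        fmt.Println("parse:", err)
        return
    }
    // "s3:ObjectRemoved:*" is the standard S3 event name; the exact key
    // depends on the s3event constants above.
    fmt.Println("object removals enabled:", filter["s3:ObjectRemoved:*"])
}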

docker-compose-bats.yml (new file, 35 lines)

@@ -0,0 +1,35 @@
version: '3'
services:
no_certs:
build:
context: .
dockerfile: Dockerfile_test_bats
args:
- CONFIG_FILE=tests/.env.nocerts
static_buckets:
build:
context: .
dockerfile: Dockerfile_test_bats
args:
- CONFIG_FILE=tests/.env.static
posix_backend:
build:
context: .
dockerfile: Dockerfile_test_bats
args:
- CONFIG_FILE=tests/.env.default
s3_backend:
build:
context: .
dockerfile: Dockerfile_test_bats
args:
- CONFIG_FILE=tests/.env.s3
- SECRETS_FILE=tests/.secrets.s3
direct:
build:
context: .
dockerfile: Dockerfile_test_bats
args:
- CONFIG_FILE=tests/.env.direct
- SECRETS_FILE=tests/.secrets.direct
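As a usage sketch (assuming a Docker Compose v2 CLI and that Dockerfile_test_bats plus the referenced env and secrets files exist in the repository root), a single test profile can be built and run with, for example: docker compose -f docker-compose-bats.yml up --build posix_backend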

View File

@@ -31,7 +31,7 @@ services:
hostname: azurite
command: "azurite --oauth basic --cert /tests/certs/azurite.pem --key /tests/certs/azurite-key.pem --blobHost 0.0.0.0"
volumes:
- ./tests/certs:/tests/certs
azuritegw:
build:
context: .

View File

@@ -254,12 +254,6 @@ ROOT_SECRET_ACCESS_KEY=
#VGW_CHOWN_UID=false
#VGW_CHOWN_GID=false
# The VGW_READ_ONLY option will disable all write operations to the backend
# filesystem. This is useful for creating a read-only gateway for clients.
# This will prevent any PUT, POST, DELETE, and multipart upload operations
# for objects and buckets as well as preventing updating metadata for objects.
#VGW_READ_ONLY=false
###########
# scoutfs #
###########
@@ -291,12 +285,6 @@ ROOT_SECRET_ACCESS_KEY=
#VGW_CHOWN_UID=false
#VGW_CHOWN_GID=false
# The VGW_READ_ONLY option will disable all write operations to the backend
# filesystem. This is useful for creating a read-only gateway for clients.
# This will prevent any PUT, POST, DELETE, and multipart upload operations
# for objects and buckets as well as preventing updating metadata for objects.
#VGW_READ_ONLY=false
######
# s3 #
######

go.mod (16 lines changed)

@@ -1,29 +1,29 @@
module github.com/versity/versitygw
go 1.21.0
require (
github.com/Azure/azure-sdk-for-go/sdk/azcore v1.11.1
github.com/Azure/azure-sdk-for-go/sdk/azidentity v1.5.2
github.com/Azure/azure-sdk-for-go/sdk/storage/azblob v1.3.2
github.com/aws/aws-sdk-go-v2 v1.26.1
github.com/aws/aws-sdk-go-v2/service/s3 v1.53.1
github.com/aws/smithy-go v1.20.2
github.com/go-ldap/ldap/v3 v3.4.8
github.com/gofiber/fiber/v2 v2.52.4
github.com/google/go-cmp v0.6.0
github.com/google/uuid v1.6.0
github.com/nats-io/nats.go v1.34.1
github.com/pkg/xattr v0.4.9
github.com/segmentio/kafka-go v0.4.47
github.com/urfave/cli/v2 v2.27.2
github.com/valyala/fasthttp v1.52.0
github.com/versity/scoutfs-go v0.0.0-20240325223134-38eb2f5f7d44
golang.org/x/sys v0.19.0
)
require (
github.com/Azure/azure-sdk-for-go/sdk/internal v1.6.0 // indirect
github.com/Azure/go-ntlmssp v0.0.0-20221128193559-754e69321358 // indirect
github.com/AzureAD/microsoft-authentication-library-for-go v1.2.2 // indirect
github.com/aws/aws-sdk-go-v2/feature/ec2/imds v1.16.1 // indirect
@@ -31,7 +31,7 @@ require (
github.com/aws/aws-sdk-go-v2/service/sso v1.20.5 // indirect
github.com/aws/aws-sdk-go-v2/service/ssooidc v1.23.4 // indirect
github.com/aws/aws-sdk-go-v2/service/sts v1.28.6 // indirect
github.com/go-asn1-ber/asn1-ber v1.5.6 // indirect
github.com/golang-jwt/jwt/v5 v5.2.1 // indirect
github.com/jmespath/go-jmespath v0.4.0 // indirect
github.com/kylelemons/godebug v1.1.0 // indirect
@@ -58,7 +58,7 @@ require (
github.com/aws/aws-sdk-go-v2/service/internal/presigned-url v1.11.7 // indirect
github.com/aws/aws-sdk-go-v2/service/internal/s3shared v1.17.5 // indirect
github.com/cpuguy83/go-md2man/v2 v2.0.4 // indirect
github.com/klauspost/compress v1.17.8 // indirect
github.com/mattn/go-colorable v0.1.13 // indirect
github.com/mattn/go-isatty v0.0.20 // indirect
github.com/mattn/go-runewidth v0.0.15 // indirect

go.sum (71 lines changed)

@@ -1,19 +1,19 @@
github.com/Azure/azure-sdk-for-go/sdk/azcore v1.11.1 h1:E+OJmp2tPvt1W+amx48v1eqbjDYsgN+RzP4q16yV5eM=
github.com/Azure/azure-sdk-for-go/sdk/azcore v1.11.1/go.mod h1:a6xsAQUZg+VsS3TJ05SRp524Hs4pZ/AeFSr5ENf0Yjo=
github.com/Azure/azure-sdk-for-go/sdk/azidentity v1.5.2 h1:FDif4R1+UUR+00q6wquyX90K7A8dN+R5E8GEadoP7sU=
github.com/Azure/azure-sdk-for-go/sdk/azidentity v1.5.2/go.mod h1:aiYBYui4BJ/BJCAIKs92XiPyQfTaBWqvHujDwKb6CBU=
github.com/Azure/azure-sdk-for-go/sdk/internal v1.6.0 h1:sUFnFjzDUie80h24I7mrKtwCKgLY9L8h5Tp2x9+TWqk=
github.com/Azure/azure-sdk-for-go/sdk/internal v1.6.0/go.mod h1:52JbnQTp15qg5mRkMBHwp0j0ZFwHJ42Sx3zVV5RE9p0=
github.com/Azure/azure-sdk-for-go/sdk/resourcemanager/storage/armstorage v1.5.0 h1:AifHbc4mg0x9zW52WOpKbsHaDKuRhlI7TVl47thgQ70=
github.com/Azure/azure-sdk-for-go/sdk/resourcemanager/storage/armstorage v1.5.0/go.mod h1:T5RfihdXtBDxt1Ch2wobif3TvzTdumDy29kahv6AV9A=
github.com/Azure/azure-sdk-for-go/sdk/storage/azblob v1.3.2 h1:YUUxeiOWgdAQE3pXt2H7QXzZs0q8UBjgRbl56qo8GYM=
github.com/Azure/azure-sdk-for-go/sdk/storage/azblob v1.3.2/go.mod h1:dmXQgZuiSubAecswZE+Sm8jkvEa7kQgTPVRvwL/nd0E=
github.com/Azure/go-ntlmssp v0.0.0-20221128193559-754e69321358 h1:mFRzDkZVAjdal+s7s0MwaRv9igoPqLRdzOLzw/8Xvq8=
github.com/Azure/go-ntlmssp v0.0.0-20221128193559-754e69321358/go.mod h1:chxPXzSsl7ZWRAuOIE23GDNzjWuZquvFlgA8xmpunjU=
github.com/AzureAD/microsoft-authentication-library-for-go v1.2.2 h1:XHOnouVk1mxXfQidrMEnLlPk9UMeRtyBTnEFtxkV0kU=
github.com/AzureAD/microsoft-authentication-library-for-go v1.2.2/go.mod h1:wP83P5OoQ5p6ip3ScPr0BAq0BvuPAvacpEuSzyouqAI=
github.com/alexbrainman/sspi v0.0.0-20231016080023-1a75b4708caa h1:LHTHcTQiSGT7VVbI0o4wBRNQIgn917usHWOd6VAffYI=
github.com/alexbrainman/sspi v0.0.0-20231016080023-1a75b4708caa/go.mod h1:cEWa1LVoE5KvSD9ONXsZrj0z6KqySlCCNKHlLzbqAt4=
github.com/andybalholm/brotli v1.1.0 h1:eLKJA0d02Lf0mVpIDgYnqXcUn0GqVmEFny3VuID1U3M=
github.com/andybalholm/brotli v1.1.0/go.mod h1:sms7XGricyQI9K10gOSf56VKKWS4oLer58Q+mhRPtnY=
github.com/aws/aws-sdk-go-v2 v1.26.1 h1:5554eUqIYVWpU0YmeeYZ0wU64H2VLBs8TlhRB2L+EkA=
@@ -61,26 +61,43 @@ github.com/davecgh/go-spew v1.1.1 h1:vj9j/u1bqnvCEfJOwUhtlOARqs3+rkHYY13jYWTU97c
github.com/davecgh/go-spew v1.1.1/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=
github.com/dnaeon/go-vcr v1.2.0 h1:zHCHvJYTMh1N7xnV7zf1m1GPBF9Ad0Jk/whtQ1663qI=
github.com/dnaeon/go-vcr v1.2.0/go.mod h1:R4UdLID7HZT3taECzJs4YgbbH6PIGXB6W/sc5OLb6RQ=
github.com/go-asn1-ber/asn1-ber v1.5.6 h1:CYsqysemXfEaQbyrLJmdsCRuufHoLa3P/gGWGl5TDrM=
github.com/go-asn1-ber/asn1-ber v1.5.6/go.mod h1:hEBeB/ic+5LoWskz+yKT7vGhhPYkProFKoKdwZRWMe0=
github.com/go-ldap/ldap/v3 v3.4.8 h1:loKJyspcRezt2Q3ZRMq2p/0v8iOurlmeXDPw6fikSvQ=
github.com/go-ldap/ldap/v3 v3.4.8/go.mod h1:qS3Sjlu76eHfHGpUdWkAXQTw4beih+cHsco2jXlIXrk=
github.com/gofiber/fiber/v2 v2.52.4 h1:P+T+4iK7VaqUsq2PALYEfBBo6bJZ4q3FP8cZ84EggTM=
github.com/gofiber/fiber/v2 v2.52.4/go.mod h1:KEOE+cXMhXG0zHc9d8+E38hoX+ZN7bhOtgeF2oT6jrQ=
github.com/golang-jwt/jwt/v5 v5.2.1 h1:OuVbFODueb089Lh128TAcimifWaLhJwVflnrgM17wHk=
github.com/golang-jwt/jwt/v5 v5.2.1/go.mod h1:pqrtFR0X4osieyHYxtmOUWsAWrfe1Q5UVIyoH402zdk=
github.com/google/go-cmp v0.6.0 h1:ofyhxvXcZhMsU5ulbFiLKl/XBFqE1GSq7atu8tAmTRI=
github.com/google/go-cmp v0.6.0/go.mod h1:17dUlkBOakJ0+DkrSSNjCkIjxS6bF9zb3elmeNGIjoY=
github.com/google/uuid v1.3.1/go.mod h1:TIyPZe4MgqvfeYDBFedMoGGpEw/LqOeaOT+nhxU+yHo=
github.com/google/uuid v1.6.0 h1:NIvaJDMOsjHA8n1jAhLSgzrAzy1Hgr+hNrb57e+94F0=
github.com/google/uuid v1.6.0/go.mod h1:TIyPZe4MgqvfeYDBFedMoGGpEw/LqOeaOT+nhxU+yHo=
github.com/gorilla/securecookie v1.1.1/go.mod h1:ra0sb63/xPlUeL+yeDciTfxMRAA+MP+HVt/4epWDjd4=
github.com/gorilla/sessions v1.2.1/go.mod h1:dk2InVEVJ0sfLlnXv9EAgkf6ecYs/i80K/zI+bUmuGM=
github.com/hashicorp/go-uuid v1.0.2/go.mod h1:6SBZvOh/SIDV7/2o3Jml5SYk/TvGqwFJ/bN7x4byOro=
github.com/hashicorp/go-uuid v1.0.3 h1:2gKiV6YVmrJ1i2CKKa9obLvRieoRGviZFL26PcT/Co8=
github.com/hashicorp/go-uuid v1.0.3/go.mod h1:6SBZvOh/SIDV7/2o3Jml5SYk/TvGqwFJ/bN7x4byOro=
github.com/jcmturner/aescts/v2 v2.0.0 h1:9YKLH6ey7H4eDBXW8khjYslgyqG2xZikXP0EQFKrle8=
github.com/jcmturner/aescts/v2 v2.0.0/go.mod h1:AiaICIRyfYg35RUkr8yESTqvSy7csK90qZ5xfvvsoNs=
github.com/jcmturner/dnsutils/v2 v2.0.0 h1:lltnkeZGL0wILNvrNiVCR6Ro5PGU/SeBvVO/8c/iPbo=
github.com/jcmturner/dnsutils/v2 v2.0.0/go.mod h1:b0TnjGOvI/n42bZa+hmXL+kFJZsFT7G4t3HTlQ184QM=
github.com/jcmturner/gofork v1.7.6 h1:QH0l3hzAU1tfT3rZCnW5zXl+orbkNMMRGJfdJjHVETg=
github.com/jcmturner/gofork v1.7.6/go.mod h1:1622LH6i/EZqLloHfE7IeZ0uEJwMSUyQ/nDd82IeqRo=
github.com/jcmturner/goidentity/v6 v6.0.1 h1:VKnZd2oEIMorCTsFBnJWbExfNN7yZr3EhJAxwOkZg6o=
github.com/jcmturner/goidentity/v6 v6.0.1/go.mod h1:X1YW3bgtvwAXju7V3LCIMpY0Gbxyjn/mY9zx4tFonSg=
github.com/jcmturner/gokrb5/v8 v8.4.4 h1:x1Sv4HaTpepFkXbt2IkL29DXRf8sOfZXo8eRKh687T8=
github.com/jcmturner/gokrb5/v8 v8.4.4/go.mod h1:1btQEpgT6k+unzCwX1KdWMEwPPkkgBtP+F6aCACiMrs=
github.com/jcmturner/rpc/v2 v2.0.3 h1:7FXXj8Ti1IaVFpSAziCZWNzbNuZmnvw/i6CqLNdWfZY=
github.com/jcmturner/rpc/v2 v2.0.3/go.mod h1:VUJYCIDm3PVOEHw8sgt091/20OJjskO/YJki3ELg/Hc=
github.com/jmespath/go-jmespath v0.4.0 h1:BEgLn5cpjn8UN1mAw4NjwDrS35OdebyEtFe+9YPoQUg=
github.com/jmespath/go-jmespath v0.4.0/go.mod h1:T8mJZnbsbmF+m6zOOFylbeCJqk5+pHWvzYPziyZiYoo=
github.com/jmespath/go-jmespath/internal/testify v1.5.1 h1:shLQSRRSCCPj3f2gpwzGwWFoC7ycTf1rcQZHOlsJ6N8=
github.com/jmespath/go-jmespath/internal/testify v1.5.1/go.mod h1:L3OGu8Wl2/fWfCI6z80xFu9LTZmf1ZRjMHUOPmWr69U=
github.com/klauspost/compress v1.15.9/go.mod h1:PhcZ0MbTNciWF3rruxRgKxI5NkcHHrHUDtV4Yw2GlzU=
github.com/klauspost/compress v1.17.8 h1:YcnTYrq7MikUT7k0Yb5eceMmALQPYBW/Xltxn0NAMnU=
github.com/klauspost/compress v1.17.8/go.mod h1:Di0epgTjJY877eYKx5yC51cX2A2Vl2ibi7bDH9ttBbw=
github.com/kylelemons/godebug v1.1.0 h1:RPNrshWIDI6G2gRW9EHilWtl7Z6Sb1BR0xunSBf0SNc=
github.com/kylelemons/godebug v1.1.0/go.mod h1:9/0rRGxNHcop5bhtWyNeEfOS8JIWk580+fNqagV/RAw=
github.com/mattn/go-colorable v0.1.13 h1:fFA4WZxdEF4tXPZVKMLwD8oUnCTTo08duU7wxecdEvA=
@@ -114,12 +131,15 @@ github.com/segmentio/kafka-go v0.4.47 h1:IqziR4pA3vrZq7YdRxaT3w1/5fvIH5qpCwstUan
github.com/segmentio/kafka-go v0.4.47/go.mod h1:HjF6XbOKh0Pjlkr5GVZxt6CsjjwnmhVOfURM5KMd8qg=
github.com/stretchr/objx v0.1.0/go.mod h1:HFkY916IF+rwdDfMAkV7OtwuqBVzrE8GR6GFx+wExME=
github.com/stretchr/objx v0.4.0/go.mod h1:YvHI0jy2hoMjB+UWwv71VJQ9isScKT/TqJzVSSt89Yw=
github.com/stretchr/objx v0.5.0/go.mod h1:Yh+to48EsGEfYuaHDzXPcE3xhTkx73EhmCGUpEOglKo=
github.com/stretchr/testify v1.4.0/go.mod h1:j7eGeouHqKxXV5pUuKE4zz7dFj8WfuZ+81PSLYec5m4=
github.com/stretchr/testify v1.7.1/go.mod h1:6Fq8oRcR53rry900zMqJjRRixrwX3KX962/h/Wwjteg=
github.com/stretchr/testify v1.8.0/go.mod h1:yNjHg4UonilssWZ8iaSj1OCr/vHnekPRkoO+kdMU+MU=
github.com/stretchr/testify v1.8.1/go.mod h1:w2LPCIKwWwSfY2zedu0+kehJoqGctiVI29o6fzry7u4=
github.com/stretchr/testify v1.9.0 h1:HtqpIVDClZ4nwg75+f6Lvsy/wHu+3BoSGCbBAcpTsTg=
github.com/stretchr/testify v1.9.0/go.mod h1:r2ic/lqez/lEtzL7wO/rwa5dbSLXVDPFyf8C91i36aY=
github.com/urfave/cli/v2 v2.27.2 h1:6e0H+AkS+zDckwPCUrZkKX38mRaau4nL2uipkJpbkcI=
github.com/urfave/cli/v2 v2.27.2/go.mod h1:g0+79LmHHATl7DAcHO99smiR/T7uGLw84w8Y42x+4eM=
github.com/valyala/bytebufferpool v1.0.0 h1:GqA5TC/0021Y/b9FG4Oi9Mr3q7XYx6KllzawFIhcdPw=
github.com/valyala/bytebufferpool v1.0.0/go.mod h1:6bBcMArwyJ5K/AmCkWv1jt77kVWyCJ6HpOuEn7z0Csc=
github.com/valyala/fasthttp v1.52.0 h1:wqBQpxH71XW0e2g+Og4dzQM8pk34aFYlA1Ga8db7gU0=
@@ -139,18 +159,24 @@ github.com/xrash/smetrics v0.0.0-20240312152122-5f08fbb34913/go.mod h1:4aEEwZQut
github.com/yuin/goldmark v1.4.13/go.mod h1:6yULJ656Px+3vBD8DxQVa3kxgyrAnzto9xy5taEt/CY=
golang.org/x/crypto v0.0.0-20190308221718-c2843e01d9a2/go.mod h1:djNgcEr1/C05ACkg1iLfiJU5Ep61QUkGW8qpdssI0+w=
golang.org/x/crypto v0.0.0-20210921155107-089bfa567519/go.mod h1:GvvjBRRGRdwPK5ydBHafDWAxML/pGHZbMvKqRZ5+Abc=
golang.org/x/crypto v0.13.0/go.mod h1:y6Z2r+Rw4iayiXXAIxJIDAJ1zMW4yaTpebo8fPOliYc=
golang.org/x/crypto v0.6.0/go.mod h1:OFC/31mSvZgRz0V1QTNCzfAI1aIRzbiufJtkMIlEp58=
golang.org/x/crypto v0.14.0/go.mod h1:MVFd36DqK4CsrnJYDkBA3VC4m2GkXAM0PvzMCn4JQf4=
golang.org/x/crypto v0.19.0/go.mod h1:Iy9bg/ha4yyC70EfRS8jz+B6ybOBKMaSxLj6P6oBDfU=
golang.org/x/crypto v0.21.0/go.mod h1:0BP7YvVV9gBbVKyeTG0Gyn+gZm94bibOW5BjDEYAOMs=
golang.org/x/crypto v0.22.0 h1:g1v0xeRhjcugydODzvb3mEM9SQ0HGp9s/nh3COQ/C30=
golang.org/x/crypto v0.22.0/go.mod h1:vr6Su+7cTlO45qkww3VDJlzDn0ctJvRgYbC2NvXHt+M=
golang.org/x/mod v0.6.0-dev.0.20220419223038-86c51ed26bb4/go.mod h1:jJ57K6gSWd91VN4djpZkiMVwK6gcyfeH4XE8wZrZaV4=
golang.org/x/mod v0.8.0/go.mod h1:iBbtSCu2XBx23ZKBPSOrRkjjQPZFPuis4dIYUhu/chs=
golang.org/x/net v0.0.0-20190620200207-3b0461eec859/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s=
golang.org/x/net v0.0.0-20200114155413-6afb5195e5aa/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s=
golang.org/x/net v0.0.0-20210226172049-e18ecbb05110/go.mod h1:m0MpNAwzfU5UDzcl9v0D8zg8gWTRqZa9RBIspLL5mdg=
golang.org/x/net v0.0.0-20220722155237-a158d28d115b/go.mod h1:XRhObCWvk6IyKnWLug+ECip1KBveYUHfp+8e9klMJ9c=
golang.org/x/net v0.6.0/go.mod h1:2Tu9+aMcznHK/AK1HMvgo6xiTLG5rD5rZLDS+rp2Bjs=
golang.org/x/net v0.7.0/go.mod h1:2Tu9+aMcznHK/AK1HMvgo6xiTLG5rD5rZLDS+rp2Bjs=
golang.org/x/net v0.10.0/go.mod h1:0qNGK6F8kojg2nk9dLZ2mShWaEBan6FAoqfSigmmuDg=
golang.org/x/net v0.17.0/go.mod h1:NxSsAGuq816PNPmqtQdLE42eU2Fs7NoRIZrHJAlaCOE=
golang.org/x/net v0.21.0/go.mod h1:bIjVDfnllIU7BJ2DNgfnXvpSvtn8VRwhlsaeUTyUS44=
golang.org/x/net v0.22.0/go.mod h1:JKghWKKOSdJwpW2GEx0Ja7fmaKnMsbu+MWVZTokSYmg=
golang.org/x/net v0.24.0 h1:1PcaxkF854Fu3+lvBIx5SYn9wRlBzzcnHZSiaFFAb0w=
golang.org/x/net v0.24.0/go.mod h1:2Q7sJY5mzlzWjKtYUEXSlBWCdyaioyXzRB2RtU8KVE8=
golang.org/x/sync v0.0.0-20190423024810-112230192c58/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
@@ -167,16 +193,18 @@ golang.org/x/sys v0.1.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.5.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.6.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.8.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.12.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.13.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.17.0/go.mod h1:/VUhepiaJMQUp4+oa/7Zr1D23ma6VTLIYjOOTFZPUcA=
golang.org/x/sys v0.18.0/go.mod h1:/VUhepiaJMQUp4+oa/7Zr1D23ma6VTLIYjOOTFZPUcA=
golang.org/x/sys v0.19.0 h1:q5f1RH2jigJ1MoAWp2KTp3gm5zAGFUTarQZ5U386+4o=
golang.org/x/sys v0.19.0/go.mod h1:/VUhepiaJMQUp4+oa/7Zr1D23ma6VTLIYjOOTFZPUcA=
golang.org/x/term v0.0.0-20201126162022-7de9c90e9dd1/go.mod h1:bj7SfCRtBDWHUb9snDiAeCFNEtKQo2Wmx5Cou7ajbmo=
golang.org/x/term v0.0.0-20210927222741-03fcf44c2211/go.mod h1:jbD1KX2456YbFQfuXm/mYQcufACuNUgVhRMnK/tPxf8=
golang.org/x/term v0.5.0/go.mod h1:jMB1sMXY+tzblOD4FWmEbocvup2/aLOaQEp7JmGp78k=
golang.org/x/term v0.8.0/go.mod h1:xPskH00ivmX89bAKVGSKKtLOWNx2+17Eiy94tnKShWo=
golang.org/x/term v0.12.0/go.mod h1:owVbMEjm3cBLCHdkQu9b1opXd4ETQWc3BhuQGKgXgvU=
golang.org/x/term v0.13.0/go.mod h1:LTmsnFJwVN6bCy1rVCoS+qHT1HhALEFxKncY3WNNh4U=
golang.org/x/term v0.17.0/go.mod h1:lLRBjIVuehSbZlaOtGMbcMncT+aqLLLmKrsjNrUguwk=
golang.org/x/term v0.18.0/go.mod h1:ILwASektA3OnRv7amZ1xhE/KTR+u50pbXfZ03+6Nx58=
golang.org/x/text v0.3.0/go.mod h1:NqM8EUOU14njkJ3fqMW+pc6Ldnwhi/IjpwHt7yyuwOQ=
golang.org/x/text v0.3.3/go.mod h1:5Zoc/QRtKVWzQhOtBMvqHzDpF6irO9z98xDceosuGiQ=
golang.org/x/text v0.3.7/go.mod h1:u+2+/6zg+i71rQMx5EYifcz6MCKuco9NR6JIITiCfzQ=
@@ -192,6 +220,7 @@ golang.org/x/tools v0.1.12/go.mod h1:hNGJHUnrk76NpqgfD5Aqm5Crs+Hm0VOH/i9J2+nxYbc
golang.org/x/tools v0.6.0/go.mod h1:Xwgl3UAJ/d3gWutnCtw505GrjyAbvKui8lOU390QaIU=
golang.org/x/xerrors v0.0.0-20190717185122-a985d3407aa7/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
gopkg.in/check.v1 v0.0.0-20161208181325-20d25e280405/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0=
gopkg.in/yaml.v2 v2.2.2/go.mod h1:hI93XBmqTisBFMUTm0b8Fm+jr3Dg1NNxqwp+5A1VGuI=
gopkg.in/yaml.v2 v2.2.8/go.mod h1:hI93XBmqTisBFMUTm0b8Fm+jr3Dg1NNxqwp+5A1VGuI=
gopkg.in/yaml.v2 v2.4.0 h1:D8xgwECY7CYvx+Y2n4sBz93Jn9JRvxdiyyo8CTfuKaY=
gopkg.in/yaml.v2 v2.4.0/go.mod h1:RDklbk79AGWmwhnvt/jBztapEOGDOx6ZbXqjP6csGnQ=

View File

@@ -49,7 +49,7 @@ func (c AdminController) CreateUser(ctx *fiber.Ctx) error {
err = c.iam.CreateAccount(usr)
if err != nil {
return fmt.Errorf("failed to create a user: %w", err)
return fmt.Errorf("failed to create user: %w", err)
}
return ctx.SendString("The user has been created successfully")

View File

@@ -77,9 +77,18 @@ var _ backend.Backend = &BackendMock{}
// GetObjectAclFunc: func(contextMoqParam context.Context, getObjectAclInput *s3.GetObjectAclInput) (*s3.GetObjectAclOutput, error) {
// panic("mock out the GetObjectAcl method")
// },
// GetObjectAttributesFunc: func(contextMoqParam context.Context, getObjectAttributesInput *s3.GetObjectAttributesInput) (s3response.GetObjectAttributesResult, error) {
// panic("mock out the GetObjectAttributes method")
// },
// GetObjectLegalHoldFunc: func(contextMoqParam context.Context, bucket string, object string, versionId string) (*bool, error) {
// panic("mock out the GetObjectLegalHold method")
// },
// GetObjectLockConfigurationFunc: func(contextMoqParam context.Context, bucket string) ([]byte, error) {
// panic("mock out the GetObjectLockConfiguration method")
// },
// GetObjectRetentionFunc: func(contextMoqParam context.Context, bucket string, object string, versionId string) ([]byte, error) {
// panic("mock out the GetObjectRetention method")
// },
// GetObjectTaggingFunc: func(contextMoqParam context.Context, bucket string, object string) (map[string]string, error) {
// panic("mock out the GetObjectTagging method")
// },
@@ -128,6 +137,15 @@ var _ backend.Backend = &BackendMock{}
// PutObjectAclFunc: func(contextMoqParam context.Context, putObjectAclInput *s3.PutObjectAclInput) error {
// panic("mock out the PutObjectAcl method")
// },
// PutObjectLegalHoldFunc: func(contextMoqParam context.Context, bucket string, object string, versionId string, status bool) error {
// panic("mock out the PutObjectLegalHold method")
// },
// PutObjectLockConfigurationFunc: func(contextMoqParam context.Context, bucket string, config []byte) error {
// panic("mock out the PutObjectLockConfiguration method")
// },
// PutObjectRetentionFunc: func(contextMoqParam context.Context, bucket string, object string, versionId string, retention []byte) error {
// panic("mock out the PutObjectRetention method")
// },
// PutObjectTaggingFunc: func(contextMoqParam context.Context, bucket string, object string, tags map[string]string) error {
// panic("mock out the PutObjectTagging method")
// },
@@ -211,7 +229,16 @@ type BackendMock struct {
GetObjectAclFunc func(contextMoqParam context.Context, getObjectAclInput *s3.GetObjectAclInput) (*s3.GetObjectAclOutput, error)
// GetObjectAttributesFunc mocks the GetObjectAttributes method.
GetObjectAttributesFunc func(contextMoqParam context.Context, getObjectAttributesInput *s3.GetObjectAttributesInput) (s3response.GetObjectAttributesResult, error)
// GetObjectLegalHoldFunc mocks the GetObjectLegalHold method.
GetObjectLegalHoldFunc func(contextMoqParam context.Context, bucket string, object string, versionId string) (*bool, error)
// GetObjectLockConfigurationFunc mocks the GetObjectLockConfiguration method.
GetObjectLockConfigurationFunc func(contextMoqParam context.Context, bucket string) ([]byte, error)
// GetObjectRetentionFunc mocks the GetObjectRetention method.
GetObjectRetentionFunc func(contextMoqParam context.Context, bucket string, object string, versionId string) ([]byte, error)
// GetObjectTaggingFunc mocks the GetObjectTagging method.
GetObjectTaggingFunc func(contextMoqParam context.Context, bucket string, object string) (map[string]string, error)
@@ -261,6 +288,15 @@ type BackendMock struct {
// PutObjectAclFunc mocks the PutObjectAcl method.
PutObjectAclFunc func(contextMoqParam context.Context, putObjectAclInput *s3.PutObjectAclInput) error
// PutObjectLegalHoldFunc mocks the PutObjectLegalHold method.
PutObjectLegalHoldFunc func(contextMoqParam context.Context, bucket string, object string, versionId string, status bool) error
// PutObjectLockConfigurationFunc mocks the PutObjectLockConfiguration method.
PutObjectLockConfigurationFunc func(contextMoqParam context.Context, bucket string, config []byte) error
// PutObjectRetentionFunc mocks the PutObjectRetention method.
PutObjectRetentionFunc func(contextMoqParam context.Context, bucket string, object string, versionId string, retention []byte) error
// PutObjectTaggingFunc mocks the PutObjectTagging method.
PutObjectTaggingFunc func(contextMoqParam context.Context, bucket string, object string, tags map[string]string) error
@@ -425,6 +461,35 @@ type BackendMock struct {
// GetObjectAttributesInput is the getObjectAttributesInput argument value.
GetObjectAttributesInput *s3.GetObjectAttributesInput
}
// GetObjectLegalHold holds details about calls to the GetObjectLegalHold method.
GetObjectLegalHold []struct {
// ContextMoqParam is the contextMoqParam argument value.
ContextMoqParam context.Context
// Bucket is the bucket argument value.
Bucket string
// Object is the object argument value.
Object string
// VersionId is the versionId argument value.
VersionId string
}
// GetObjectLockConfiguration holds details about calls to the GetObjectLockConfiguration method.
GetObjectLockConfiguration []struct {
// ContextMoqParam is the contextMoqParam argument value.
ContextMoqParam context.Context
// Bucket is the bucket argument value.
Bucket string
}
// GetObjectRetention holds details about calls to the GetObjectRetention method.
GetObjectRetention []struct {
// ContextMoqParam is the contextMoqParam argument value.
ContextMoqParam context.Context
// Bucket is the bucket argument value.
Bucket string
// Object is the object argument value.
Object string
// VersionId is the versionId argument value.
VersionId string
}
// GetObjectTagging holds details about calls to the GetObjectTagging method.
GetObjectTagging []struct {
// ContextMoqParam is the contextMoqParam argument value.
@@ -545,6 +610,41 @@ type BackendMock struct {
// PutObjectAclInput is the putObjectAclInput argument value.
PutObjectAclInput *s3.PutObjectAclInput
}
// PutObjectLegalHold holds details about calls to the PutObjectLegalHold method.
PutObjectLegalHold []struct {
// ContextMoqParam is the contextMoqParam argument value.
ContextMoqParam context.Context
// Bucket is the bucket argument value.
Bucket string
// Object is the object argument value.
Object string
// VersionId is the versionId argument value.
VersionId string
// Status is the status argument value.
Status bool
}
// PutObjectLockConfiguration holds details about calls to the PutObjectLockConfiguration method.
PutObjectLockConfiguration []struct {
// ContextMoqParam is the contextMoqParam argument value.
ContextMoqParam context.Context
// Bucket is the bucket argument value.
Bucket string
// Config is the config argument value.
Config []byte
}
// PutObjectRetention holds details about calls to the PutObjectRetention method.
PutObjectRetention []struct {
// ContextMoqParam is the contextMoqParam argument value.
ContextMoqParam context.Context
// Bucket is the bucket argument value.
Bucket string
// Object is the object argument value.
Object string
// VersionId is the versionId argument value.
VersionId string
// Retention is the retention argument value.
Retention []byte
}
// PutObjectTagging holds details about calls to the PutObjectTagging method.
PutObjectTagging []struct {
// ContextMoqParam is the contextMoqParam argument value.
@@ -591,48 +691,54 @@ type BackendMock struct {
UploadPartCopyInput *s3.UploadPartCopyInput
}
}
lockAbortMultipartUpload sync.RWMutex
lockChangeBucketOwner sync.RWMutex
lockCompleteMultipartUpload sync.RWMutex
lockCopyObject sync.RWMutex
lockCreateBucket sync.RWMutex
lockCreateMultipartUpload sync.RWMutex
lockDeleteBucket sync.RWMutex
lockDeleteBucketPolicy sync.RWMutex
lockDeleteBucketTagging sync.RWMutex
lockDeleteObject sync.RWMutex
lockDeleteObjectTagging sync.RWMutex
lockDeleteObjects sync.RWMutex
lockGetBucketAcl sync.RWMutex
lockGetBucketPolicy sync.RWMutex
lockGetBucketTagging sync.RWMutex
lockGetBucketVersioning sync.RWMutex
lockGetObject sync.RWMutex
lockGetObjectAcl sync.RWMutex
lockGetObjectAttributes sync.RWMutex
lockGetObjectLegalHold sync.RWMutex
lockGetObjectLockConfiguration sync.RWMutex
lockGetObjectRetention sync.RWMutex
lockGetObjectTagging sync.RWMutex
lockHeadBucket sync.RWMutex
lockHeadObject sync.RWMutex
lockListBuckets sync.RWMutex
lockListBucketsAndOwners sync.RWMutex
lockListMultipartUploads sync.RWMutex
lockListObjectVersions sync.RWMutex
lockListObjects sync.RWMutex
lockListObjectsV2 sync.RWMutex
lockListParts sync.RWMutex
lockPutBucketAcl sync.RWMutex
lockPutBucketPolicy sync.RWMutex
lockPutBucketTagging sync.RWMutex
lockPutBucketVersioning sync.RWMutex
lockPutObject sync.RWMutex
lockPutObjectAcl sync.RWMutex
lockPutObjectLegalHold sync.RWMutex
lockPutObjectLockConfiguration sync.RWMutex
lockPutObjectRetention sync.RWMutex
lockPutObjectTagging sync.RWMutex
lockRestoreObject sync.RWMutex
lockSelectObjectContent sync.RWMutex
lockShutdown sync.RWMutex
lockString sync.RWMutex
lockUploadPart sync.RWMutex
lockUploadPartCopy sync.RWMutex
}
// AbortMultipartUpload calls AbortMultipartUploadFunc.
@@ -1300,7 +1406,7 @@ func (mock *BackendMock) GetObjectAclCalls() []struct {
}
// GetObjectAttributes calls GetObjectAttributesFunc.
func (mock *BackendMock) GetObjectAttributes(contextMoqParam context.Context, getObjectAttributesInput *s3.GetObjectAttributesInput) (s3response.GetObjectAttributesResult, error) {
if mock.GetObjectAttributesFunc == nil {
panic("BackendMock.GetObjectAttributesFunc: method is nil but Backend.GetObjectAttributes was just called")
}
@@ -1335,6 +1441,130 @@ func (mock *BackendMock) GetObjectAttributesCalls() []struct {
return calls
}
// GetObjectLegalHold calls GetObjectLegalHoldFunc.
func (mock *BackendMock) GetObjectLegalHold(contextMoqParam context.Context, bucket string, object string, versionId string) (*bool, error) {
if mock.GetObjectLegalHoldFunc == nil {
panic("BackendMock.GetObjectLegalHoldFunc: method is nil but Backend.GetObjectLegalHold was just called")
}
callInfo := struct {
ContextMoqParam context.Context
Bucket string
Object string
VersionId string
}{
ContextMoqParam: contextMoqParam,
Bucket: bucket,
Object: object,
VersionId: versionId,
}
mock.lockGetObjectLegalHold.Lock()
mock.calls.GetObjectLegalHold = append(mock.calls.GetObjectLegalHold, callInfo)
mock.lockGetObjectLegalHold.Unlock()
return mock.GetObjectLegalHoldFunc(contextMoqParam, bucket, object, versionId)
}
// GetObjectLegalHoldCalls gets all the calls that were made to GetObjectLegalHold.
// Check the length with:
//
// len(mockedBackend.GetObjectLegalHoldCalls())
func (mock *BackendMock) GetObjectLegalHoldCalls() []struct {
ContextMoqParam context.Context
Bucket string
Object string
VersionId string
} {
var calls []struct {
ContextMoqParam context.Context
Bucket string
Object string
VersionId string
}
mock.lockGetObjectLegalHold.RLock()
calls = mock.calls.GetObjectLegalHold
mock.lockGetObjectLegalHold.RUnlock()
return calls
}
// GetObjectLockConfiguration calls GetObjectLockConfigurationFunc.
func (mock *BackendMock) GetObjectLockConfiguration(contextMoqParam context.Context, bucket string) ([]byte, error) {
if mock.GetObjectLockConfigurationFunc == nil {
panic("BackendMock.GetObjectLockConfigurationFunc: method is nil but Backend.GetObjectLockConfiguration was just called")
}
callInfo := struct {
ContextMoqParam context.Context
Bucket string
}{
ContextMoqParam: contextMoqParam,
Bucket: bucket,
}
mock.lockGetObjectLockConfiguration.Lock()
mock.calls.GetObjectLockConfiguration = append(mock.calls.GetObjectLockConfiguration, callInfo)
mock.lockGetObjectLockConfiguration.Unlock()
return mock.GetObjectLockConfigurationFunc(contextMoqParam, bucket)
}
// GetObjectLockConfigurationCalls gets all the calls that were made to GetObjectLockConfiguration.
// Check the length with:
//
// len(mockedBackend.GetObjectLockConfigurationCalls())
func (mock *BackendMock) GetObjectLockConfigurationCalls() []struct {
ContextMoqParam context.Context
Bucket string
} {
var calls []struct {
ContextMoqParam context.Context
Bucket string
}
mock.lockGetObjectLockConfiguration.RLock()
calls = mock.calls.GetObjectLockConfiguration
mock.lockGetObjectLockConfiguration.RUnlock()
return calls
}
// GetObjectRetention calls GetObjectRetentionFunc.
func (mock *BackendMock) GetObjectRetention(contextMoqParam context.Context, bucket string, object string, versionId string) ([]byte, error) {
if mock.GetObjectRetentionFunc == nil {
panic("BackendMock.GetObjectRetentionFunc: method is nil but Backend.GetObjectRetention was just called")
}
callInfo := struct {
ContextMoqParam context.Context
Bucket string
Object string
VersionId string
}{
ContextMoqParam: contextMoqParam,
Bucket: bucket,
Object: object,
VersionId: versionId,
}
mock.lockGetObjectRetention.Lock()
mock.calls.GetObjectRetention = append(mock.calls.GetObjectRetention, callInfo)
mock.lockGetObjectRetention.Unlock()
return mock.GetObjectRetentionFunc(contextMoqParam, bucket, object, versionId)
}
// GetObjectRetentionCalls gets all the calls that were made to GetObjectRetention.
// Check the length with:
//
// len(mockedBackend.GetObjectRetentionCalls())
func (mock *BackendMock) GetObjectRetentionCalls() []struct {
ContextMoqParam context.Context
Bucket string
Object string
VersionId string
} {
var calls []struct {
ContextMoqParam context.Context
Bucket string
Object string
VersionId string
}
mock.lockGetObjectRetention.RLock()
calls = mock.calls.GetObjectRetention
mock.lockGetObjectRetention.RUnlock()
return calls
}
// GetObjectTagging calls GetObjectTaggingFunc.
func (mock *BackendMock) GetObjectTagging(contextMoqParam context.Context, bucket string, object string) (map[string]string, error) {
if mock.GetObjectTaggingFunc == nil {
@@ -1927,6 +2157,142 @@ func (mock *BackendMock) PutObjectAclCalls() []struct {
return calls
}
// PutObjectLegalHold calls PutObjectLegalHoldFunc.
func (mock *BackendMock) PutObjectLegalHold(contextMoqParam context.Context, bucket string, object string, versionId string, status bool) error {
if mock.PutObjectLegalHoldFunc == nil {
panic("BackendMock.PutObjectLegalHoldFunc: method is nil but Backend.PutObjectLegalHold was just called")
}
callInfo := struct {
ContextMoqParam context.Context
Bucket string
Object string
VersionId string
Status bool
}{
ContextMoqParam: contextMoqParam,
Bucket: bucket,
Object: object,
VersionId: versionId,
Status: status,
}
mock.lockPutObjectLegalHold.Lock()
mock.calls.PutObjectLegalHold = append(mock.calls.PutObjectLegalHold, callInfo)
mock.lockPutObjectLegalHold.Unlock()
return mock.PutObjectLegalHoldFunc(contextMoqParam, bucket, object, versionId, status)
}
// PutObjectLegalHoldCalls gets all the calls that were made to PutObjectLegalHold.
// Check the length with:
//
// len(mockedBackend.PutObjectLegalHoldCalls())
func (mock *BackendMock) PutObjectLegalHoldCalls() []struct {
ContextMoqParam context.Context
Bucket string
Object string
VersionId string
Status bool
} {
var calls []struct {
ContextMoqParam context.Context
Bucket string
Object string
VersionId string
Status bool
}
mock.lockPutObjectLegalHold.RLock()
calls = mock.calls.PutObjectLegalHold
mock.lockPutObjectLegalHold.RUnlock()
return calls
}
// PutObjectLockConfiguration calls PutObjectLockConfigurationFunc.
func (mock *BackendMock) PutObjectLockConfiguration(contextMoqParam context.Context, bucket string, config []byte) error {
if mock.PutObjectLockConfigurationFunc == nil {
panic("BackendMock.PutObjectLockConfigurationFunc: method is nil but Backend.PutObjectLockConfiguration was just called")
}
callInfo := struct {
ContextMoqParam context.Context
Bucket string
Config []byte
}{
ContextMoqParam: contextMoqParam,
Bucket: bucket,
Config: config,
}
mock.lockPutObjectLockConfiguration.Lock()
mock.calls.PutObjectLockConfiguration = append(mock.calls.PutObjectLockConfiguration, callInfo)
mock.lockPutObjectLockConfiguration.Unlock()
return mock.PutObjectLockConfigurationFunc(contextMoqParam, bucket, config)
}
// PutObjectLockConfigurationCalls gets all the calls that were made to PutObjectLockConfiguration.
// Check the length with:
//
// len(mockedBackend.PutObjectLockConfigurationCalls())
func (mock *BackendMock) PutObjectLockConfigurationCalls() []struct {
ContextMoqParam context.Context
Bucket string
Config []byte
} {
var calls []struct {
ContextMoqParam context.Context
Bucket string
Config []byte
}
mock.lockPutObjectLockConfiguration.RLock()
calls = mock.calls.PutObjectLockConfiguration
mock.lockPutObjectLockConfiguration.RUnlock()
return calls
}
// PutObjectRetention calls PutObjectRetentionFunc.
func (mock *BackendMock) PutObjectRetention(contextMoqParam context.Context, bucket string, object string, versionId string, retention []byte) error {
if mock.PutObjectRetentionFunc == nil {
panic("BackendMock.PutObjectRetentionFunc: method is nil but Backend.PutObjectRetention was just called")
}
callInfo := struct {
ContextMoqParam context.Context
Bucket string
Object string
VersionId string
Retention []byte
}{
ContextMoqParam: contextMoqParam,
Bucket: bucket,
Object: object,
VersionId: versionId,
Retention: retention,
}
mock.lockPutObjectRetention.Lock()
mock.calls.PutObjectRetention = append(mock.calls.PutObjectRetention, callInfo)
mock.lockPutObjectRetention.Unlock()
return mock.PutObjectRetentionFunc(contextMoqParam, bucket, object, versionId, retention)
}
// PutObjectRetentionCalls gets all the calls that were made to PutObjectRetention.
// Check the length with:
//
// len(mockedBackend.PutObjectRetentionCalls())
func (mock *BackendMock) PutObjectRetentionCalls() []struct {
ContextMoqParam context.Context
Bucket string
Object string
VersionId string
Retention []byte
} {
var calls []struct {
ContextMoqParam context.Context
Bucket string
Object string
VersionId string
Retention []byte
}
mock.lockPutObjectRetention.RLock()
calls = mock.calls.PutObjectRetention
mock.lockPutObjectRetention.RUnlock()
return calls
}
// PutObjectTagging calls PutObjectTaggingFunc.
func (mock *BackendMock) PutObjectTagging(contextMoqParam context.Context, bucket string, object string, tags map[string]string) error {
if mock.PutObjectTaggingFunc == nil {

View File

@@ -131,6 +131,72 @@ func (c S3ApiController) GetActions(ctx *fiber.Ctx) error {
})
}
if ctx.Request().URI().QueryArgs().Has("retention") {
err := auth.VerifyAccess(ctx.Context(), c.be, auth.AccessOptions{
Acl: parsedAcl,
AclPermission: types.PermissionRead,
IsRoot: isRoot,
Acc: acct,
Bucket: bucket,
Object: key,
Action: auth.GetObjectRetentionAction,
})
if err != nil {
return SendXMLResponse(ctx, nil, err,
&MetaOpts{
Logger: c.logger,
Action: "GetObjectRetention",
BucketOwner: parsedAcl.Owner,
})
}
data, err := c.be.GetObjectRetention(ctx.Context(), bucket, key, versionId)
if err != nil {
return SendXMLResponse(ctx, data, err,
&MetaOpts{
Logger: c.logger,
Action: "GetObjectRetention",
BucketOwner: parsedAcl.Owner,
})
}
retention, err := auth.ParseObjectLockRetentionOutput(data)
return SendXMLResponse(ctx, retention, err,
&MetaOpts{
Logger: c.logger,
Action: "GetObjectRetention",
BucketOwner: parsedAcl.Owner,
})
}
if ctx.Request().URI().QueryArgs().Has("legal-hold") {
err := auth.VerifyAccess(ctx.Context(), c.be, auth.AccessOptions{
Acl: parsedAcl,
AclPermission: types.PermissionRead,
IsRoot: isRoot,
Acc: acct,
Bucket: bucket,
Object: key,
Action: auth.GetObjectLegalHoldAction,
})
if err != nil {
return SendXMLResponse(ctx, nil, err,
&MetaOpts{
Logger: c.logger,
Action: "GetObjectLegalHold",
BucketOwner: parsedAcl.Owner,
})
}
data, err := c.be.GetObjectLegalHold(ctx.Context(), bucket, key, versionId)
return SendXMLResponse(ctx, auth.ParseObjectLegalHoldOutput(data), err,
&MetaOpts{
Logger: c.logger,
Action: "GetObjectLegalHold",
BucketOwner: parsedAcl.Owner,
})
}
if uploadId != "" {
if maxParts < 0 && ctx.Request().URI().QueryArgs().Has("max-parts") {
return SendResponse(ctx,
@@ -225,7 +291,7 @@ func (c S3ApiController) GetActions(ctx *fiber.Ctx) error {
})
}
if ctx.Request().URI().QueryArgs().Has("attributes") {
err := auth.VerifyAccess(ctx.Context(), c.be, auth.AccessOptions{
Acl: parsedAcl,
AclPermission: types.PermissionRead,
@@ -243,17 +309,36 @@ func (c S3ApiController) GetActions(ctx *fiber.Ctx) error {
BucketOwner: parsedAcl.Owner,
})
}
maxParts := ctx.Get("X-Amz-Max-Parts")
partNumberMarker := ctx.Get("X-Amz-Part-Number-Marker")
maxPartsParsed, err := utils.ParseUint(maxParts)
if err != nil {
return SendXMLResponse(ctx, nil, err,
&MetaOpts{
Logger: c.logger,
Action: "GetObjectAttributes",
BucketOwner: parsedAcl.Owner,
})
}
attrs := utils.ParseObjectAttributes(ctx)
res, err := c.be.GetObjectAttributes(ctx.Context(),
&s3.GetObjectAttributesInput{
Bucket: &bucket,
Key: &key,
PartNumberMarker: &partNumberMarker,
MaxParts: &maxPartsParsed,
VersionId: &versionId,
})
if err != nil {
return SendXMLResponse(ctx, nil, err,
&MetaOpts{
Logger: c.logger,
Action: "GetObjectAttributes",
BucketOwner: parsedAcl.Owner,
})
}
return SendXMLResponse(ctx, utils.FilterObjectAttributes(attrs, res), err,
&MetaOpts{
Logger: c.logger,
Action: "GetObjectAttributes",
@@ -547,6 +632,43 @@ func (c S3ApiController) ListActions(ctx *fiber.Ctx) error {
})
}
if ctx.Request().URI().QueryArgs().Has("object-lock") {
err := auth.VerifyAccess(ctx.Context(), c.be, auth.AccessOptions{
Acl: parsedAcl,
AclPermission: types.PermissionRead,
IsRoot: isRoot,
Acc: acct,
Bucket: bucket,
Action: auth.GetBucketObjectLockConfigurationAction,
})
if err != nil {
return SendXMLResponse(ctx, nil, err,
&MetaOpts{
Logger: c.logger,
Action: "GetObjectLockConfiguration",
BucketOwner: parsedAcl.Owner,
})
}
data, err := c.be.GetObjectLockConfiguration(ctx.Context(), bucket)
if err != nil {
return SendXMLResponse(ctx, nil, err,
&MetaOpts{
Logger: c.logger,
Action: "GetObjectLockConfiguration",
BucketOwner: parsedAcl.Owner,
})
}
resp, err := auth.ParseBucketLockConfigurationOutput(data)
return SendXMLResponse(ctx, resp, err,
&MetaOpts{
Logger: c.logger,
Action: "GetObjectLockConfiguration",
BucketOwner: parsedAcl.Owner,
})
}
if ctx.Request().URI().QueryArgs().Has("acl") {
err := auth.VerifyAccess(ctx.Context(), c.be, auth.AccessOptions{
Acl: parsedAcl,
@@ -845,6 +967,44 @@ func (c S3ApiController) PutBucketActions(ctx *fiber.Ctx) error {
})
}
if ctx.Request().URI().QueryArgs().Has("object-lock") {
parsedAcl := ctx.Locals("parsedAcl").(auth.ACL)
if err := auth.VerifyAccess(ctx.Context(), c.be, auth.AccessOptions{
Acl: parsedAcl,
AclPermission: types.PermissionWrite,
IsRoot: isRoot,
Acc: acct,
Bucket: bucket,
Action: auth.PutBucketObjectLockConfigurationAction,
}); err != nil {
return SendResponse(ctx, err,
&MetaOpts{
Logger: c.logger,
Action: "PutObjectLockConfiguration",
BucketOwner: parsedAcl.Owner,
})
}
config, err := auth.ParseBucketLockConfigurationInput(ctx.Body())
if err != nil {
return SendResponse(ctx, err,
&MetaOpts{
Logger: c.logger,
Action: "PutObjectLockConfiguration",
BucketOwner: parsedAcl.Owner,
})
}
err = c.be.PutObjectLockConfiguration(ctx.Context(), bucket, config)
return SendResponse(ctx, err,
&MetaOpts{
Logger: c.logger,
Action: "PutObjectLockConfiguration",
BucketOwner: parsedAcl.Owner,
})
}
if ctx.Request().URI().QueryArgs().Has("policy") {
parsedAcl := ctx.Locals("parsedAcl").(auth.ACL)
err := auth.VerifyAccess(ctx.Context(), c.be, auth.AccessOptions{
@@ -1059,9 +1219,14 @@ func (c S3ApiController) PutBucketActions(ctx *fiber.Ctx) error {
})
}
lockHeader := ctx.Get("X-Amz-Bucket-Object-Lock-Enabled")
// the CLI sends "True", while the SDK sends "true"
lockEnabled := lockHeader == "True" || lockHeader == "true"
err = c.be.CreateBucket(ctx.Context(), &s3.CreateBucketInput{
Bucket: &bucket,
ObjectOwnership: types.ObjectOwnership(acct.Access),
ObjectLockEnabledForBucket: &lockEnabled,
}, updAcl)
return SendResponse(ctx, err,
&MetaOpts{
@@ -1076,6 +1241,7 @@ func (c S3ApiController) PutActions(ctx *fiber.Ctx) error {
keyStart := ctx.Params("key")
keyEnd := ctx.Params("*1")
uploadId := ctx.Query("uploadId")
versionId := ctx.Query("versionId")
acct := ctx.Locals("account").(auth.Account)
isRoot := ctx.Locals("isRoot").(bool)
parsedAcl := ctx.Locals("parsedAcl").(auth.ACL)
@@ -1176,6 +1342,76 @@ func (c S3ApiController) PutActions(ctx *fiber.Ctx) error {
})
}
if ctx.Request().URI().QueryArgs().Has("retention") {
if err := auth.VerifyAccess(ctx.Context(), c.be, auth.AccessOptions{
Acl: parsedAcl,
AclPermission: types.PermissionWrite,
IsRoot: isRoot,
Acc: acct,
Bucket: bucket,
Object: keyStart,
Action: auth.PutObjectRetentionAction,
}); err != nil {
return SendResponse(ctx, err,
&MetaOpts{
Logger: c.logger,
Action: "PutObjectRetention",
BucketOwner: parsedAcl.Owner,
})
}
retention, err := auth.ParseObjectLockRetentionInput(ctx.Body())
if err != nil {
return SendResponse(ctx, err, &MetaOpts{
Logger: c.logger,
Action: "PutObjectRetention",
BucketOwner: parsedAcl.Owner,
})
}
err = c.be.PutObjectRetention(ctx.Context(), bucket, keyStart, versionId, retention)
return SendResponse(ctx, err, &MetaOpts{
Logger: c.logger,
Action: "PutObjectRetention",
BucketOwner: parsedAcl.Owner,
})
}
if ctx.Request().URI().QueryArgs().Has("legal-hold") {
var legalHold types.ObjectLockLegalHold
if err := xml.Unmarshal(ctx.Body(), &legalHold); err != nil {
return SendResponse(ctx, s3err.GetAPIError(s3err.ErrInvalidRequest), &MetaOpts{
Logger: c.logger,
Action: "PutObjectLegalHold",
BucketOwner: parsedAcl.Owner,
})
}
if err := auth.VerifyAccess(ctx.Context(), c.be, auth.AccessOptions{
Acl: parsedAcl,
AclPermission: types.PermissionWrite,
IsRoot: isRoot,
Acc: acct,
Bucket: bucket,
Object: keyStart,
Action: auth.PutObjectLegalHoldAction,
}); err != nil {
return SendResponse(ctx, err,
&MetaOpts{
Logger: c.logger,
Action: "PutObjectLegalHold",
BucketOwner: parsedAcl.Owner,
})
}
err := c.be.PutObjectLegalHold(ctx.Context(), bucket, keyStart, versionId, legalHold.Status == types.ObjectLockLegalHoldStatusOn)
return SendResponse(ctx, err, &MetaOpts{
Logger: c.logger,
Action: "PutObjectLegalHold",
BucketOwner: parsedAcl.Owner,
})
}
if ctx.Request().URI().QueryArgs().Has("uploadId") &&
ctx.Request().URI().QueryArgs().Has("partNumber") &&
copySource != "" {
@@ -1525,6 +1761,16 @@ func (c S3ApiController) PutActions(ctx *fiber.Ctx) error {
})
}
err = auth.CheckObjectAccess(ctx.Context(), bucket, acct.Access, []string{keyStart}, isRoot || acct.Role == auth.RoleAdmin, c.be)
if err != nil {
return SendResponse(ctx, err,
&MetaOpts{
Logger: c.logger,
Action: "PutObject",
BucketOwner: parsedAcl.Owner,
})
}
contentLength, err := strconv.ParseInt(contentLengthStr, 10, 64)
if err != nil {
if c.debug {
@@ -1539,6 +1785,52 @@ func (c S3ApiController) PutActions(ctx *fiber.Ctx) error {
})
}
legalHoldHdr := ctx.Get("X-Amz-Object-Lock-Legal-Hold")
objLockModeHdr := ctx.Get("X-Amz-Object-Lock-Mode")
objLockDate := ctx.Get("X-Amz-Object-Lock-Retain-Until-Date")
if (objLockDate != "" && objLockModeHdr == "") || (objLockDate == "" && objLockModeHdr != "") {
return SendResponse(ctx, s3err.GetAPIError(s3err.ErrObjectLockInvalidHeaders),
&MetaOpts{
Logger: c.logger,
Action: "PutObject",
BucketOwner: parsedAcl.Owner,
})
}
var retainUntilDate *time.Time
if objLockDate != "" {
rDate, err := time.Parse(time.RFC3339, objLockDate)
if err != nil {
return SendResponse(ctx, s3err.GetAPIError(s3err.ErrInvalidRequest),
&MetaOpts{
Logger: c.logger,
Action: "PutObject",
BucketOwner: parsedAcl.Owner,
})
}
if rDate.Before(time.Now()) {
return SendResponse(ctx, s3err.GetAPIError(s3err.ErrPastObjectLockRetainDate),
&MetaOpts{
Logger: c.logger,
Action: "PutObject",
BucketOwner: parsedAcl.Owner,
})
}
retainUntilDate = &rDate
}
if objLockModeHdr != "" &&
objLockModeHdr != string(types.ObjectLockModeCompliance) &&
objLockModeHdr != string(types.ObjectLockModeGovernance) {
return SendResponse(ctx, s3err.GetAPIError(s3err.ErrInvalidRequest),
&MetaOpts{
Logger: c.logger,
Action: "PutObject",
BucketOwner: parsedAcl.Owner,
})
}
var body io.Reader
bodyi := ctx.Locals("body-reader")
if bodyi != nil {
@@ -1550,12 +1842,15 @@ func (c S3ApiController) PutActions(ctx *fiber.Ctx) error {
ctx.Locals("logReqBody", false)
etag, err := c.be.PutObject(ctx.Context(),
&s3.PutObjectInput{
Bucket: &bucket,
Key: &keyStart,
ContentLength: &contentLength,
Metadata: metadata,
Body: body,
Tagging: &tagging,
ObjectLockRetainUntilDate: retainUntilDate,
ObjectLockMode: types.ObjectLockMode(objLockModeHdr),
ObjectLockLegalHoldStatus: types.ObjectLockLegalHoldStatus(legalHoldHdr),
})
ctx.Response().Header.Set("ETag", etag)
return SendResponse(ctx, err,
@@ -1703,6 +1998,16 @@ func (c S3ApiController) DeleteObjects(ctx *fiber.Ctx) error {
})
}
err = auth.CheckObjectAccess(ctx.Context(), bucket, acct.Access, utils.ParseDeleteObjects(dObj.Objects), isRoot || acct.Role == auth.RoleAdmin, c.be)
if err != nil {
return SendResponse(ctx, err,
&MetaOpts{
Logger: c.logger,
Action: "DeleteObjects",
BucketOwner: parsedAcl.Owner,
})
}
res, err := c.be.DeleteObjects(ctx.Context(),
&s3.DeleteObjectsInput{
Bucket: &bucket,
@@ -1715,6 +2020,8 @@ func (c S3ApiController) DeleteObjects(ctx *fiber.Ctx) error {
Logger: c.logger,
Action: "DeleteObjects",
BucketOwner: parsedAcl.Owner,
EvSender: c.evSender,
EventName: s3event.EventObjectRemovedDeleteObjects,
})
}
@@ -1823,6 +2130,16 @@ func (c S3ApiController) DeleteActions(ctx *fiber.Ctx) error {
})
}
err = auth.CheckObjectAccess(ctx.Context(), bucket, acct.Access, []string{key}, isRoot || acct.Role == auth.RoleAdmin, c.be)
if err != nil {
return SendResponse(ctx, err,
&MetaOpts{
Logger: c.logger,
Action: "DeleteObject",
BucketOwner: parsedAcl.Owner,
})
}
err = c.be.DeleteObject(ctx.Context(),
&s3.DeleteObjectInput{
Bucket: &bucket,
@@ -1835,7 +2152,7 @@ func (c S3ApiController) DeleteActions(ctx *fiber.Ctx) error {
EvSender: c.evSender,
Action: "DeleteObject",
BucketOwner: parsedAcl.Owner,
EventName: s3event.EventObjectRemovedDelete,
Status: http.StatusNoContent,
})
}
@@ -1844,6 +2161,7 @@ func (c S3ApiController) HeadBucket(ctx *fiber.Ctx) error {
bucket := ctx.Params("bucket")
acct := ctx.Locals("account").(auth.Account)
isRoot := ctx.Locals("isRoot").(bool)
region := ctx.Locals("region").(string)
parsedAcl := ctx.Locals("parsedAcl").(auth.ACL)
err := auth.VerifyAccess(ctx.Context(), c.be,
@@ -1868,7 +2186,17 @@ func (c S3ApiController) HeadBucket(ctx *fiber.Ctx) error {
&s3.HeadBucketInput{
Bucket: &bucket,
})
utils.SetResponseHeaders(ctx, []utils.CustomHeader{
{
Key: "X-Amz-Access-Point-Alias",
Value: "false",
},
{
Key: "X-Amz-Bucket-Region",
Value: region,
},
})
return SendResponse(ctx, err,
&MetaOpts{
Logger: c.logger,
@@ -1938,7 +2266,7 @@ func (c S3ApiController) HeadObject(ctx *fiber.Ctx) error {
if res.LastModified != nil {
lastmod = res.LastModified.Format(timefmt)
}
headers := []utils.CustomHeader{
{
Key: "Content-Length",
Value: fmt.Sprint(getint64(res.ContentLength)),
@@ -1967,7 +2295,27 @@ func (c S3ApiController) HeadObject(ctx *fiber.Ctx) error {
Key: "x-amz-restore",
Value: getstring(res.Restore),
},
}
if res.ObjectLockMode != "" {
headers = append(headers, utils.CustomHeader{
Key: "x-amz-object-lock-mode",
Value: string(res.ObjectLockMode),
})
}
if res.ObjectLockLegalHoldStatus != "" {
headers = append(headers, utils.CustomHeader{
Key: "x-amz-object-lock-legal-hold",
Value: string(res.ObjectLockLegalHoldStatus),
})
}
if res.ObjectLockRetainUntilDate != nil {
retainUntilDate := res.ObjectLockRetainUntilDate.Format(time.RFC3339)
headers = append(headers, utils.CustomHeader{
Key: "x-amz-object-lock-retain-until-date",
Value: retainUntilDate,
})
}
utils.SetResponseHeaders(ctx, headers)
return SendResponse(ctx, nil,
&MetaOpts{

View File

@@ -188,8 +188,8 @@ func TestS3ApiController_GetActions(t *testing.T) {
GetObjectAclFunc: func(context.Context, *s3.GetObjectAclInput) (*s3.GetObjectAclOutput, error) {
return &s3.GetObjectAclOutput{}, nil
},
GetObjectAttributesFunc: func(context.Context, *s3.GetObjectAttributesInput) (s3response.GetObjectAttributesResult, error) {
return s3response.GetObjectAttributesResult{}, nil
},
GetObjectFunc: func(context.Context, *s3.GetObjectInput, io.Writer) (*s3.GetObjectOutput, error) {
return &s3.GetObjectOutput{
@@ -205,6 +205,19 @@ func TestS3ApiController_GetActions(t *testing.T) {
GetObjectTaggingFunc: func(_ context.Context, bucket, object string) (map[string]string, error) {
return map[string]string{"hello": "world"}, nil
},
GetObjectRetentionFunc: func(contextMoqParam context.Context, bucket, object, versionId string) ([]byte, error) {
result, err := json.Marshal(types.ObjectLockRetention{
Mode: types.ObjectLockRetentionModeCompliance,
})
if err != nil {
return nil, err
}
return result, nil
},
GetObjectLegalHoldFunc: func(contextMoqParam context.Context, bucket, object, versionId string) (*bool, error) {
result := true
return &result, nil
},
},
}
app.Use(func(ctx *fiber.Ctx) error {
@@ -236,6 +249,24 @@ func TestS3ApiController_GetActions(t *testing.T) {
wantErr: false,
statusCode: 200,
},
{
name: "Get-actions-get-object-retention-success",
app: app,
args: args{
req: httptest.NewRequest(http.MethodGet, "/my-bucket/my-obj?retention", nil),
},
wantErr: false,
statusCode: 200,
},
{
name: "Get-actions-get-object-legal-hold-success",
app: app,
args: args{
req: httptest.NewRequest(http.MethodGet, "/my-bucket/my-obj?legal-hold", nil),
},
wantErr: false,
statusCode: 200,
},
{
name: "Get-actions-invalid-max-parts-string",
app: app,
@@ -329,6 +360,11 @@ func TestS3ApiController_ListActions(t *testing.T) {
req *http.Request
}
objectLockResult, err := json.Marshal(auth.BucketLockConfig{})
if err != nil {
t.Errorf("failed to parse object lock result %v", err)
}
app := fiber.New()
s3ApiController := S3ApiController{
be: &BackendMock{
@@ -356,6 +392,9 @@ func TestS3ApiController_ListActions(t *testing.T) {
GetBucketPolicyFunc: func(contextMoqParam context.Context, bucket string) ([]byte, error) {
return []byte{}, nil
},
GetObjectLockConfigurationFunc: func(contextMoqParam context.Context, bucket string) ([]byte, error) {
return objectLockResult, nil
},
},
}
@@ -369,7 +408,7 @@ func TestS3ApiController_ListActions(t *testing.T) {
app.Get("/:bucket", s3ApiController.ListActions)
// Error case
s3ApiControllerError := S3ApiController{
be: &BackendMock{
GetBucketAclFunc: func(context.Context, *s3.GetBucketAclInput) ([]byte, error) {
@@ -418,6 +457,15 @@ func TestS3ApiController_ListActions(t *testing.T) {
wantErr: false,
statusCode: 200,
},
{
name: "Get-object-lock-configuration-success",
app: app,
args: args{
req: httptest.NewRequest(http.MethodGet, "/my-bucket?object-lock", nil),
},
wantErr: false,
statusCode: 200,
},
{
name: "Get-bucket-acl-success",
app: app,
@@ -584,6 +632,18 @@ func TestS3ApiController_PutBucketActions(t *testing.T) {
}
`
objectLockBody := `
<ObjectLockConfiguration xmlns="http://s3.amazonaws.com/doc/2006-03-01/">
<ObjectLockEnabled>Enabled</ObjectLockEnabled>
<Rule>
<DefaultRetention>
<Mode>GOVERNANCE</Mode>
<Years>2</Years>
</DefaultRetention>
</Rule>
</ObjectLockConfiguration>
`
s3ApiController := S3ApiController{
be: &BackendMock{
GetBucketAclFunc: func(context.Context, *s3.GetBucketAclInput) ([]byte, error) {
@@ -604,6 +664,9 @@ func TestS3ApiController_PutBucketActions(t *testing.T) {
PutBucketPolicyFunc: func(contextMoqParam context.Context, bucket string, policy []byte) error {
return nil
},
PutObjectLockConfigurationFunc: func(contextMoqParam context.Context, bucket string, config []byte) error {
return nil
},
},
}
// Mock ctx.Locals
@@ -662,6 +725,24 @@ func TestS3ApiController_PutBucketActions(t *testing.T) {
wantErr: false,
statusCode: 200,
},
{
name: "Put-object-lock-configuration-invalid-body",
app: app,
args: args{
req: httptest.NewRequest(http.MethodPut, "/my-bucket?object-lock", nil),
},
wantErr: false,
statusCode: 400,
},
{
name: "Put-object-lock-configuration-success",
app: app,
args: args{
req: httptest.NewRequest(http.MethodPut, "/my-bucket?object-lock", strings.NewReader(objectLockBody)),
},
wantErr: false,
statusCode: 200,
},
{
name: "Put-bucket-versioning-invalid-body",
app: app,
@@ -806,6 +887,19 @@ func TestS3ApiController_PutActions(t *testing.T) {
</Tagging>
`
retentionBody := `
<Retention xmlns="http://s3.amazonaws.com/doc/2006-03-01/">
<Mode>GOVERNANCE</Mode>
<RetainUntilDate>2025-01-01T00:00:00Z</RetainUntilDate>
</Retention>
`
legalHoldBody := `
<LegalHold xmlns="http://s3.amazonaws.com/doc/2006-03-01/">
<Status>string</Status>
</LegalHold>
`
app := fiber.New()
s3ApiController := S3ApiController{
be: &BackendMock{
@@ -832,6 +926,15 @@ func TestS3ApiController_PutActions(t *testing.T) {
UploadPartCopyFunc: func(context.Context, *s3.UploadPartCopyInput) (s3response.CopyObjectResult, error) {
return s3response.CopyObjectResult{}, nil
},
PutObjectLegalHoldFunc: func(contextMoqParam context.Context, bucket, object, versionId string, status bool) error {
return nil
},
PutObjectRetentionFunc: func(contextMoqParam context.Context, bucket, object, versionId string, retention []byte) error {
return nil
},
GetObjectLockConfigurationFunc: func(contextMoqParam context.Context, bucket string) ([]byte, error) {
return nil, s3err.GetAPIError(s3err.ErrObjectLockConfigurationNotFound)
},
},
}
app.Use(func(ctx *fiber.Ctx) error {
@@ -910,6 +1013,42 @@ func TestS3ApiController_PutActions(t *testing.T) {
wantErr: false,
statusCode: 200,
},
{
name: "put-object-retention-invalid-request",
app: app,
args: args{
req: httptest.NewRequest(http.MethodPut, "/my-bucket/my-key?retention", nil),
},
wantErr: false,
statusCode: 400,
},
{
name: "put-object-retention-success",
app: app,
args: args{
req: httptest.NewRequest(http.MethodPut, "/my-bucket/my-key?retention", strings.NewReader(retentionBody)),
},
wantErr: false,
statusCode: 200,
},
{
name: "put-legal-hold-invalid-request",
app: app,
args: args{
req: httptest.NewRequest(http.MethodPut, "/my-bucket/my-key?legal-hold", nil),
},
wantErr: false,
statusCode: 400,
},
{
name: "put-legal-hold-success",
app: app,
args: args{
req: httptest.NewRequest(http.MethodPut, "/my-bucket/my-key?legal-hold", strings.NewReader(legalHoldBody)),
},
wantErr: false,
statusCode: 200,
},
{
name: "Put-object-acl-invalid-acl",
app: app,
@@ -1096,6 +1235,9 @@ func TestS3ApiController_DeleteObjects(t *testing.T) {
DeleteObjectsFunc: func(context.Context, *s3.DeleteObjectsInput) (s3response.DeleteResult, error) {
return s3response.DeleteResult{}, nil
},
GetObjectLockConfigurationFunc: func(contextMoqParam context.Context, bucket string) ([]byte, error) {
return nil, s3err.GetAPIError(s3err.ErrObjectLockConfigurationNotFound)
},
},
}
@@ -1173,6 +1315,9 @@ func TestS3ApiController_DeleteActions(t *testing.T) {
DeleteObjectTaggingFunc: func(_ context.Context, bucket, object string) error {
return nil
},
GetObjectLockConfigurationFunc: func(contextMoqParam context.Context, bucket string) ([]byte, error) {
return nil, s3err.GetAPIError(s3err.ErrObjectLockConfigurationNotFound)
},
},
}
@@ -1195,6 +1340,9 @@ func TestS3ApiController_DeleteActions(t *testing.T) {
DeleteObjectFunc: func(context.Context, *s3.DeleteObjectInput) error {
return s3err.GetAPIError(7)
},
GetObjectLockConfigurationFunc: func(contextMoqParam context.Context, bucket string) ([]byte, error) {
return nil, s3err.GetAPIError(s3err.ErrObjectLockConfigurationNotFound)
},
}}
appErr.Use(func(ctx *fiber.Ctx) error {
@@ -1285,6 +1433,7 @@ func TestS3ApiController_HeadBucket(t *testing.T) {
ctx.Locals("isRoot", true)
ctx.Locals("isDebug", false)
ctx.Locals("parsedAcl", auth.ACL{})
ctx.Locals("region", "us-east-1")
return ctx.Next()
})
@@ -1308,6 +1457,7 @@ func TestS3ApiController_HeadBucket(t *testing.T) {
ctx.Locals("isRoot", true)
ctx.Locals("isDebug", false)
ctx.Locals("parsedAcl", auth.ACL{})
ctx.Locals("region", "us-east-1")
return ctx.Next()
})

View File

@@ -48,7 +48,8 @@ func AclParser(be backend.Backend, logger s3log.AuditLogger) fiber.Handler {
!ctx.Request().URI().QueryArgs().Has("acl") &&
!ctx.Request().URI().QueryArgs().Has("tagging") &&
!ctx.Request().URI().QueryArgs().Has("versioning") &&
!ctx.Request().URI().QueryArgs().Has("policy") {
!ctx.Request().URI().QueryArgs().Has("policy") &&
!ctx.Request().URI().QueryArgs().Has("object-lock") {
if err := auth.MayCreateBucket(acct, isRoot); err != nil {
return controllers.SendXMLResponse(ctx, nil, err, &controllers.MetaOpts{Logger: logger, Action: "CreateBucket"})
}

View File

@@ -23,6 +23,7 @@ import (
"fmt"
"hash"
"io"
"math"
"strconv"
"time"
@@ -192,12 +193,15 @@ func (cr *ChunkReader) parseAndRemoveChunkInfo(p []byte) (int, error) {
cr.chunkDataLeft = 0
cr.chunkHash.Write(p[:chunkSize])
n, err := cr.parseAndRemoveChunkInfo(p[chunkSize:n])
if (chunkSize + int64(n)) > math.MaxInt {
return 0, s3err.GetAPIError(s3err.ErrSignatureDoesNotMatch)
}
return n + int(chunkSize), err
} else {
cr.chunkDataLeft = chunkSize - int64(n)
cr.chunkHash.Write(p[:n])
}
return n, nil
}
@@ -231,6 +235,7 @@ const (
// error if any. See the AWS documentation for the chunk header format. The
// header[0] byte is expected to be the first byte of the chunk size here.
func (cr *ChunkReader) parseChunkHeaderBytes(header []byte) (int64, string, int, error) {
stashLen := len(cr.stash)
if cr.stash != nil {
tmp := make([]byte, maxHeaderSize)
copy(tmp, cr.stash)
@@ -265,5 +270,5 @@ func (cr *ChunkReader) parseChunkHeaderBytes(header []byte) (int64, string, int,
signature := string(header[sigIndex:(sigIndex + sigEndIndex)])
dataStartOffset := sigIndex + sigEndIndex + len(chunkHdrDelim)
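// dataStartOffset is relative to the stash-prefixed copy of the header,
// so subtract the stash length to get an offset into the caller's buffer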
return chunkSize, signature, dataStartOffset - stashLen, nil
}

View File

@@ -26,10 +26,12 @@ import (
"strings"
"time"
"github.com/aws/aws-sdk-go-v2/service/s3/types"
"github.com/aws/smithy-go/encoding/httpbinding"
"github.com/gofiber/fiber/v2"
"github.com/valyala/fasthttp"
"github.com/versity/versitygw/s3err"
"github.com/versity/versitygw/s3response"
)
var (
@@ -222,25 +224,57 @@ func IsBigDataAction(ctx *fiber.Ctx) bool {
return false
}
// expiration time window
// https://docs.aws.amazon.com/AmazonS3/latest/userguide/RESTAuthentication.html#RESTAuthenticationTimeStamp
const timeExpirationSec = 15 * 60
func ValidateDate(date time.Time) error {
now := time.Now().UTC()
diff := date.Unix() - now.Unix()
// Checks the dates difference to be within allotted window
if diff > timeExpirationSec || diff < -timeExpirationSec {
return s3err.GetAPIError(s3err.ErrRequestTimeTooSkewed)
}
return nil
}
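// ParseDeleteObjects collects the object keys from a DeleteObjects request into a string slice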
func ParseDeleteObjects(objs []types.ObjectIdentifier) (result []string) {
for _, obj := range objs {
result = append(result, *obj.Key)
}
return
}
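// FilterObjectAttributes clears any fields not requested in the
// X-Amz-Object-Attributes header so only the requested attributes are returned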
func FilterObjectAttributes(attrs map[types.ObjectAttributes]struct{}, output s3response.GetObjectAttributesResult) s3response.GetObjectAttributesResult {
if _, ok := attrs[types.ObjectAttributesEtag]; !ok {
output.ETag = nil
}
if _, ok := attrs[types.ObjectAttributesObjectParts]; !ok {
output.ObjectParts = nil
}
if _, ok := attrs[types.ObjectAttributesObjectSize]; !ok {
output.ObjectSize = nil
}
if _, ok := attrs[types.ObjectAttributesStorageClass]; !ok {
output.StorageClass = nil
}
return output
}
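// ParseObjectAttributes collects the comma-separated attribute names from the
// X-Amz-Object-Attributes request header into a set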
func ParseObjectAttributes(ctx *fiber.Ctx) map[types.ObjectAttributes]struct{} {
attrs := map[types.ObjectAttributes]struct{}{}
ctx.Request().Header.VisitAll(func(key, value []byte) {
if string(key) == "X-Amz-Object-Attributes" {
oattrs := strings.Split(string(value), ",")
for _, a := range oattrs {
attrs[types.ObjectAttributes(a)] = struct{}{}
}
}
})
return attrs
}

View File

@@ -6,8 +6,10 @@ import (
"reflect"
"testing"
"github.com/aws/aws-sdk-go-v2/service/s3/types"
"github.com/gofiber/fiber/v2"
"github.com/valyala/fasthttp"
"github.com/versity/versitygw/s3response"
)
func TestCreateHttpRequestFromCtx(t *testing.T) {
@@ -264,3 +266,58 @@ func TestParseUint(t *testing.T) {
})
}
}
func TestFilterObjectAttributes(t *testing.T) {
type args struct {
attrs map[types.ObjectAttributes]struct{}
output s3response.GetObjectAttributesResult
}
etag, objSize := "etag", int64(3222)
tests := []struct {
name string
args args
want s3response.GetObjectAttributesResult
}{
{
name: "keep only ETag",
args: args{
attrs: map[types.ObjectAttributes]struct{}{
types.ObjectAttributesEtag: {},
},
output: s3response.GetObjectAttributesResult{
ObjectSize: &objSize,
ETag: &etag,
},
},
want: s3response.GetObjectAttributesResult{ETag: &etag},
},
{
name: "keep multiple props",
args: args{
attrs: map[types.ObjectAttributes]struct{}{
types.ObjectAttributesEtag: {},
types.ObjectAttributesObjectSize: {},
types.ObjectAttributesStorageClass: {},
},
output: s3response.GetObjectAttributesResult{
ObjectSize: &objSize,
ETag: &etag,
ObjectParts: &s3response.ObjectParts{},
VersionId: &etag,
},
},
want: s3response.GetObjectAttributesResult{
ETag: &etag,
ObjectSize: &objSize,
VersionId: &etag,
},
},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
if got := FilterObjectAttributes(tt.args.attrs, tt.args.output); !reflect.DeepEqual(got, tt.want) {
t.Errorf("FilterObjectAttributes() = %v, want %v", got, tt.want)
}
})
}
}

View File

@@ -111,6 +111,15 @@ const (
ErrInvalidObjectState
ErrInvalidRange
ErrInvalidURI
ErrObjectLockConfigurationNotFound
ErrNoSuchObjectLockConfiguration
ErrInvalidBucketObjectLockConfiguration
ErrObjectLocked
ErrPastObjectLockRetainDate
ErrNoSuchBucketPolicy
ErrBucketTaggingNotFound
ErrObjectLockInvalidHeaders
ErrRequestTimeTooSkewed
// Non-AWS errors
ErrExistingObjectIsDirectory
@@ -400,6 +409,53 @@ var errorCodeResponse = map[ErrorCode]APIError{
Description: "The specified URI couldn't be parsed.",
HTTPStatusCode: http.StatusBadRequest,
},
ErrObjectLockConfigurationNotFound: {
Code: "ObjectLockConfigurationNotFoundError",
Description: "Object Lock configuration does not exist for this bucket",
HTTPStatusCode: http.StatusNotFound,
},
ErrNoSuchObjectLockConfiguration: {
Code: "NoSuchObjectLockConfiguration",
Description: "The specified object does not have an ObjectLock configuration",
HTTPStatusCode: http.StatusBadRequest,
},
ErrInvalidBucketObjectLockConfiguration: {
Code: "InvalidRequest",
Description: "Bucket is missing ObjectLockConfiguration",
HTTPStatusCode: http.StatusBadRequest,
},
ErrObjectLocked: {
Code: "InvalidRequest",
Description: "Object is WORM protected and cannot be overwritten",
HTTPStatusCode: http.StatusBadRequest,
},
ErrPastObjectLockRetainDate: {
Code: "InvalidRequest",
Description: "the retain until date must be in the future",
HTTPStatusCode: http.StatusBadRequest,
},
ErrNoSuchBucketPolicy: {
Code: "NoSuchBucketPolicy",
Description: "The bucket policy does not exist",
HTTPStatusCode: http.StatusNotFound,
},
ErrBucketTaggingNotFound: {
Code: "NoSuchTagSet",
Description: "The TagSet does not exist",
HTTPStatusCode: http.StatusNotFound,
},
ErrObjectLockInvalidHeaders: {
Code: "InvalidRequest",
Description: "x-amz-object-lock-retain-until-date and x-amz-object-lock-mode must both be supplied",
HTTPStatusCode: http.StatusBadRequest,
},
ErrRequestTimeTooSkewed: {
Code: "RequestTimeTooSkewed",
Description: "The difference between the request time and the server's time is too large.",
HTTPStatusCode: http.StatusForbidden,
},
// non aws errors
ErrExistingObjectIsDirectory: {
Code: "ExistingObjectIsDirectory",
Description: "Existing Object is a directory.",

View File

@@ -37,11 +37,11 @@ type EventMeta struct {
VersionId *string
}
type EventSchema struct {
Records []EventRecord
}
type EventRecord struct {
EventVersion string `json:"eventVersion"`
EventSource string `json:"eventSource"`
AwsRegion string `json:"awsRegion"`
@@ -139,54 +139,54 @@ func InitEventSender(cfg *EventConfig) (S3EventSender, error) {
return evSender, err
}
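// createEventSchema builds a single-record event following the AWS S3 event
// message structure for the current request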
func createEventSchema(ctx *fiber.Ctx, meta EventMeta, configId ConfigurationId) EventSchema {
path := strings.Split(ctx.Path(), "/")
bucket, object := path[1], strings.Join(path[2:], "/")
acc := ctx.Locals("account").(auth.Account)
return EventSchema{
Records: []EventRecord{
{
EventVersion: "2.2",
EventSource: "aws:s3",
AwsRegion: ctx.Locals("region").(string),
EventTime: time.Now().Format(time.RFC3339),
EventName: meta.EventName,
UserIdentity: EventUserIdentity{
PrincipalId: acc.Access,
},
RequestParameters: EventRequestParams{
SourceIPAddress: ctx.IP(),
},
ResponseElements: EventResponseElements{
RequestId: ctx.Get("X-Amz-Request-Id"),
HostId: ctx.Get("X-Amz-Id-2"),
},
S3: EventS3Data{
S3SchemaVersion: "1.0",
ConfigurationId: configId,
Bucket: EventS3BucketData{
Name: bucket,
OwnerIdentity: EventUserIdentity{
PrincipalId: meta.BucketOwner,
},
Arn: fmt.Sprintf("arn:aws:s3:::%v", strings.Join(path, "/")),
},
Object: EventObjectData{
Key: object,
Size: meta.ObjectSize,
ETag: meta.ObjectETag,
VersionId: meta.VersionId,
Sequencer: genSequencer(),
},
},
GlacierEventData: EventGlacierData{
// Not supported
RestoreEventData: EventRestoreData{},
},
},
},
}
}
func generateTestEvent() ([]byte, error) {

View File

@@ -25,19 +25,21 @@ import (
type EventType string
const (
EventObjectCreated EventType = "s3:ObjectCreated:*" // ObjectCreated
EventObjectCreatedPut EventType = "s3:ObjectCreated:Put"
EventObjectCreatedPost EventType = "s3:ObjectCreated:Post"
EventObjectCreatedCopy EventType = "s3:ObjectCreated:Copy"
EventCompleteMultipartUpload EventType = "s3:ObjectCreated:CompleteMultipartUpload"
EventObjectRemoved EventType = "s3:ObjectRemoved:*"
EventObjectRemovedDelete EventType = "s3:ObjectRemoved:Delete"
EventObjectRemovedDeleteObjects EventType = "s3:ObjectRemoved:DeleteObjects" // non AWS custom type for DeleteObjects
EventObjectTagging EventType = "s3:ObjectTagging:*" // ObjectTagging
EventObjectTaggingPut EventType = "s3:ObjectTagging:Put"
EventObjectTaggingDelete EventType = "s3:ObjectTagging:Delete"
EventObjectAclPut EventType = "s3:ObjectAcl:Put"
EventObjectRestore EventType = "s3:ObjectRestore:*" // ObjectRestore
EventObjectRestorePost EventType = "s3:ObjectRestore:Post"
EventObjectRestoreCompleted EventType = "s3:ObjectRestore:Completed"
// EventObjectRestorePost EventType = "s3:ObjectRestore:Post"
// EventObjectRestoreDelete EventType = "s3:ObjectRestore:Delete"
)
@@ -48,19 +50,21 @@ func (event EventType) IsValid() bool {
}
var supportedEventFilters = map[EventType]struct{}{
EventObjectCreated: {},
EventObjectCreatedPut: {},
EventObjectCreatedPost: {},
EventObjectCreatedCopy: {},
EventCompleteMultipartUpload: {},
EventObjectRemoved: {},
EventObjectRemovedDelete: {},
EventObjectRemovedDeleteObjects: {},
EventObjectTagging: {},
EventObjectTaggingPut: {},
EventObjectTaggingDelete: {},
EventObjectAclPut: {},
EventObjectRestore: {},
EventObjectRestorePost: {},
EventObjectRestoreCompleted: {},
}
type EventFilter map[EventType]bool

View File

@@ -16,6 +16,8 @@ package s3event
import (
"context"
"encoding/json"
"encoding/xml"
"fmt"
"os"
"sync"
@@ -23,6 +25,7 @@ import (
"github.com/gofiber/fiber/v2"
"github.com/segmentio/kafka-go"
"github.com/versity/versitygw/s3response"
)
var sequencer = 0
@@ -78,12 +81,29 @@ func (ks *Kafka) SendEvent(ctx *fiber.Ctx, meta EventMeta) {
return
}
if meta.EventName == EventObjectRemovedDeleteObjects {
var dObj s3response.DeleteObjects
if err := xml.Unmarshal(ctx.Body(), &dObj); err != nil {
fmt.Fprintf(os.Stderr, "failed to parse delete objects input payload: %v\n", err.Error())
return
}
// events are sent concurrently, so delivery order is not guaranteed
for _, obj := range dObj.Objects {
key := *obj.Key
schema := createEventSchema(ctx, meta, ConfigurationIdKafka)
schema.Records[0].S3.Object.Key = key
schema.Records[0].S3.Object.VersionId = obj.VersionId
go ks.send(schema)
}
return
}
schema := createEventSchema(ctx, meta, ConfigurationIdKafka)
go ks.send(schema)
}
@@ -91,14 +111,20 @@ func (ks *Kafka) Close() error {
return ks.writer.Close()
}
func (ks *Kafka) send(event EventSchema) {
eventBytes, err := json.Marshal(event)
if err != nil {
fmt.Fprintf(os.Stderr, "failed to parse event data: %v\n", err.Error())
return
}
message := kafka.Message{
Key: []byte(ks.key),
Value: eventBytes,
}
ctx := context.Background()
err = ks.writer.WriteMessages(ctx, message)
if err != nil {
fmt.Fprintf(os.Stderr, "failed to send kafka event: %v\n", err.Error())
}

View File

@@ -15,12 +15,15 @@
package s3event
import (
"encoding/json"
"encoding/xml"
"fmt"
"os"
"sync"
"github.com/gofiber/fiber/v2"
"github.com/nats-io/nats.go"
"github.com/versity/versitygw/s3response"
)
type NatsEventSender struct {
@@ -65,12 +68,29 @@ func (ns *NatsEventSender) SendEvent(ctx *fiber.Ctx, meta EventMeta) {
return
}
if meta.EventName == EventObjectRemovedDeleteObjects {
var dObj s3response.DeleteObjects
if err := xml.Unmarshal(ctx.Body(), &dObj); err != nil {
fmt.Fprintf(os.Stderr, "failed to parse delete objects input payload: %v\n", err.Error())
return
}
// events are sent concurrently, so delivery order is not guaranteed
for _, obj := range dObj.Objects {
key := *obj.Key
schema := createEventSchema(ctx, meta, ConfigurationIdNats)
schema.Records[0].S3.Object.Key = key
schema.Records[0].S3.Object.VersionId = obj.VersionId
go ns.send(schema)
}
return
}
schema := createEventSchema(ctx, meta, ConfigurationIdNats)
go ns.send(schema)
}
@@ -79,8 +99,13 @@ func (ns *NatsEventSender) Close() error {
return nil
}
func (ns *NatsEventSender) send(event EventSchema) {
eventBytes, err := json.Marshal(event)
if err != nil {
fmt.Fprintf(os.Stderr, "failed to parse event data: %v\n", err.Error())
return
}
err = ns.client.Publish(ns.topic, eventBytes)
if err != nil {
fmt.Fprintf(os.Stderr, "failed to send nats event: %v\n", err.Error())
}

View File

@@ -16,6 +16,8 @@ package s3event
import (
"bytes"
"encoding/json"
"encoding/xml"
"fmt"
"net"
"net/http"
@@ -24,6 +26,7 @@ import (
"time"
"github.com/gofiber/fiber/v2"
"github.com/versity/versitygw/s3response"
)
type Webhook struct {
@@ -77,12 +80,29 @@ func (w *Webhook) SendEvent(ctx *fiber.Ctx, meta EventMeta) {
return
}
if meta.EventName == EventObjectRemovedDeleteObjects {
var dObj s3response.DeleteObjects
if err := xml.Unmarshal(ctx.Body(), &dObj); err != nil {
fmt.Fprintf(os.Stderr, "failed to parse delete objects input payload: %v\n", err.Error())
return
}
// events are sent concurrently, so delivery order is not guaranteed
for _, obj := range dObj.Objects {
key := *obj.Key
schema := createEventSchema(ctx, meta, ConfigurationIdWebhook)
schema.Records[0].S3.Object.Key = key
schema.Records[0].S3.Object.VersionId = obj.VersionId
go w.send(schema)
}
return
}
schema := createEventSchema(ctx, meta, ConfigurationIdWebhook)
go w.send(schema)
}
@@ -90,8 +110,14 @@ func (w *Webhook) Close() error {
return nil
}
func (w *Webhook) send(event EventSchema) {
eventBytes, err := json.Marshal(event)
if err != nil {
fmt.Fprintf(os.Stderr, "failed to parse event data: %v\n", err.Error())
return
}
req, err := http.NewRequest(http.MethodPost, w.url, bytes.NewReader(eventBytes))
if err != nil {
fmt.Fprintf(os.Stderr, "failed to create webhook event request: %v\n", err.Error())
return

View File

@@ -52,6 +52,23 @@ type ListPartsResult struct {
Parts []Part `xml:"Part"`
}
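// GetObjectAttributesResult - s3 api get object attributes response.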
type GetObjectAttributesResult struct {
ETag *string
LastModified *time.Time
ObjectSize *int64
StorageClass *types.StorageClass
VersionId *string
ObjectParts *ObjectParts
}
type ObjectParts struct {
PartNumberMarker int
NextPartNumberMarker int
MaxParts int
IsTruncated bool
Parts []types.ObjectPart `xml:"Part"`
}
// ListMultipartUploadsResponse - s3 api list multipart uploads response.
type ListMultipartUploadsResult struct {
XMLName xml.Name `xml:"http://s3.amazonaws.com/doc/2006-03-01/ ListMultipartUploadsResult" json:"-"`

View File

@@ -6,11 +6,12 @@ BACKEND=posix
LOCAL_FOLDER=/tmp/gw
BUCKET_ONE_NAME=versity-gwtest-bucket-one
BUCKET_TWO_NAME=versity-gwtest-bucket-two
RECREATE_BUCKETS=true
CERT=$PWD/cert.pem
KEY=$PWD/versitygw.pem
S3CMD_CONFIG=./tests/s3cfg.local.default
SECRETS_FILE=./tests/.secrets
MC_ALIAS=versity
LOG_LEVEL=2
GOCOVERDIR=$PWD/cover
USERS_FOLDER=$PWD/iam

View File

@@ -1,14 +0,0 @@
AWS_PROFILE=versity_s3
AWS_ENDPOINT_URL=https://127.0.0.1:7070
VERSITY_EXE=./versitygw
RUN_VERSITYGW=true
BACKEND=s3
LOCAL_FOLDER=/tmp/gw
BUCKET_ONE_NAME=versity-gwtest-bucket-one
BUCKET_TWO_NAME=versity-gwtest-bucket-two
#RECREATE_BUCKETS=true
CERT=$PWD/cert.pem
KEY=$PWD/versitygw.pem
S3CMD_CONFIG=./tests/s3cfg.local.default
SECRETS_FILE=./tests/.secrets.s3
MC_ALIAS=versity_s3

View File

@@ -36,12 +36,38 @@ Instructions are mostly the same; however, testing with the S3 backend requires
To set up the latter:
1. Create a new AWS profile with ID and key values set to dummy 20-char allcaps and 40-char alphabetical values respectively.
2. In the `.secrets` file being used, create the fields `AWS_ACCESS_KEY_ID_TWO` and `AWS_SECRET_ACCESS_KEY_TWO`. Set these values to the actual AWS ID and key.
3. Set the values for `AWS_ACCESS_KEY_ID` and `AWS_SECRET_ACCESS_KEY` to the same dummy values set in the AWS profile, and set `AWS_PROFILE` to the profile you just created (see the sketch below).
4. Create a new AWS profile with these dummy values. In the `.env` file being used, set the `AWS_PROFILE` parameter to the name of this new profile, and the ID and key fields to the dummy values.
5. Set `BACKEND` to `s3`. Also, change the `MC_ALIAS` value if testing **mc** in this configuration.
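For reference, a minimal sketch of the fields involved (the first two values are placeholders in the dummy format described above, and the `_TWO` fields hold the real AWS credentials):
```
# .secrets (sketch)
AWS_ACCESS_KEY_ID=AAAAAAAAAAAAAAAAAAAA
AWS_SECRET_ACCESS_KEY=abcdefghijklmnopqrstabcdefghijklmnopqrst
AWS_ACCESS_KEY_ID_TWO=<actual AWS access key ID>
AWS_SECRET_ACCESS_KEY_TWO=<actual AWS secret access key>
AWS_PROFILE=<name of the dummy-value profile>
```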
### Direct Mode
To communicate directly with s3, in order to compare the gateway results to direct results:
1. Create an AWS profile with the direct connection info. Set `AWS_PROFILE` to this.
2. Set `RUN_VERSITYGW` to false.
3. Set `AWS_ENDPOINT_URL` to the typical endpoint location (usually `https://s3.amazonaws.com`).
4. If testing **s3cmd**, create a new `s3cfg.local` file with `host_base` and `host_bucket` set to `s3.amazonaws.com`.
5. If testing **mc**, change the `MC_ALIAS` value to a new value such as `versity-direct` (see the example overrides below).
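A possible set of `.env` overrides for direct mode (the profile and alias names here are only examples):
```
AWS_PROFILE=direct
AWS_ENDPOINT_URL=https://s3.amazonaws.com
RUN_VERSITYGW=false
MC_ALIAS=versity-direct
```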
## Instructions - Running With Docker
1. Create a `.secrets` file in the `tests` folder, and add the `AWS_PROFILE`, `AWS_ACCESS_KEY_ID`, and `AWS_SECRET_ACCESS_KEY` fields, as well as the additional s3 fields explained in the **S3 Backend** section above if running with the s3 backend.
2. Build and run the `Dockerfile_test_bats` file. Change the `SECRETS_FILE` and `CONFIG_FILE` parameters to point to your secrets and config file, respectively. Example: `docker build -t <tag> -f Dockerfile_test_bats --build-arg="SECRETS_FILE=<file>" --build-arg="CONFIG_FILE=<file>" .`.
## Instructions - Running with docker-compose
A file named `docker-compose-bats.yml` is provided in the root folder. Five configurations are available:
* insecure (without certificates), with creation/removal of buckets
* secure, posix backend, with static buckets
* secure, posix backend, with creation/removal of buckets
* secure, s3 backend, with creation/removal of buckets
* direct mode
To use each of these, creating a separate `.env` file for each is suggested. How to do so is explained below.
To run in insecure mode, comment out the `CERT` and `KEY` parameters in the `.env` file, and change the prefix for the `AWS_ENDPOINT_URL` parameter to `http://`. Also, set `S3CMD_CONFIG` to point to a copy of the default s3cmd config file that has `use_https` set to false. Finally, change `MC_ALIAS` to something new to avoid overwriting the secure `MC_ALIAS` values.
To use static buckets set the `RECREATE_BUCKETS` value to `false`.
For the s3 backend, see the **S3 Backend** instructions above.
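As an example, a run against one of these configurations might look like the following, assuming a per-configuration env file named `.env.s3` (the file name and `--env-file` usage are illustrative):
```
docker-compose -f docker-compose-bats.yml --env-file .env.s3 up
```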

View File

@@ -0,0 +1,15 @@
#!/usr/bin/env bash
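# abort an in-progress multipart upload
# params: bucket name, object key, upload ID
# return 0 for success, 1 for failure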
abort_multipart_upload() {
if [ $# -ne 3 ]; then
echo "command to run abort requires bucket, key, upload ID"
return 1
fi
error=$(aws --no-verify-ssl s3api abort-multipart-upload --bucket "$1" --key "$2" --upload-id "$3") || local aborted=$?
if [[ $aborted -ne 0 ]]; then
echo "Error aborting upload: $error"
return 1
fi
return 0
}

View File

@@ -0,0 +1,28 @@
#!/usr/bin/env bash
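# copy an object to a bucket
# params: command type, source, destination bucket, destination key
# return 0 for success, 1 for failure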
copy_object() {
if [ $# -ne 4 ]; then
echo "copy object command requires command type, source, bucket, key"
return 1
fi
local exit_code=0
local error
if [[ $1 == 's3' ]]; then
error=$(aws --no-verify-ssl s3 cp "$2" s3://"$3/$4" 2>&1) || exit_code=$?
elif [[ $1 == 's3api' ]] || [[ $1 == 'aws' ]]; then
error=$(aws --no-verify-ssl s3api copy-object --copy-source "$2" --bucket "$3" --key "$4" 2>&1) || exit_code=$?
elif [[ $1 == 's3cmd' ]]; then
error=$(s3cmd "${S3CMD_OPTS[@]}" --no-check-certificate cp "s3://$2" s3://"$3/$4" 2>&1) || exit_code=$?
elif [[ $1 == 'mc' ]]; then
error=$(mc --insecure cp "$2" "$MC_ALIAS/$3/$4" 2>&1) || exit_code=$?
else
echo "'copy-object' not implemented for '$1'"
return 1
fi
log 5 "copy object exit code: $exit_code"
if [ $exit_code -ne 0 ]; then
echo "error copying object to bucket: $error"
return 1
fi
return 0
}

View File

@@ -0,0 +1,31 @@
#!/usr/bin/env bash
# create an AWS bucket
# params: command type, bucket name
# return 0 for success, 1 for failure
create_bucket() {
if [ $# -ne 2 ]; then
echo "create bucket missing command type, bucket name"
return 1
fi
local exit_code=0
local error
if [[ $1 == 's3' ]]; then
error=$(aws --no-verify-ssl s3 mb s3://"$2" 2>&1) || exit_code=$?
elif [[ $1 == "aws" ]] || [[ $1 == 's3api' ]]; then
error=$(aws --no-verify-ssl s3api create-bucket --bucket "$2" 2>&1) || exit_code=$?
elif [[ $1 == "s3cmd" ]]; then
error=$(s3cmd "${S3CMD_OPTS[@]}" --no-check-certificate mb s3://"$2" 2>&1) || exit_code=$?
elif [[ $1 == "mc" ]]; then
error=$(mc --insecure mb "$MC_ALIAS"/"$2" 2>&1) || exit_code=$?
else
echo "invalid command type $1"
return 1
fi
if [ $exit_code -ne 0 ]; then
echo "error creating bucket: $error"
return 1
fi
return 0
}

View File

@@ -0,0 +1,35 @@
#!/usr/bin/env bash
# delete an AWS bucket
# params: command type, bucket name
# return 0 for success, 1 for failure
delete_bucket() {
if [ $# -ne 2 ]; then
echo "delete bucket missing command type, bucket name"
return 1
fi
local exit_code=0
local error
if [[ $1 == 's3' ]]; then
error=$(aws --no-verify-ssl s3 rb s3://"$2" 2>&1) || exit_code=$?
elif [[ $1 == 'aws' ]] || [[ $1 == 's3api' ]]; then
error=$(aws --no-verify-ssl s3api delete-bucket --bucket "$2" 2>&1) || exit_code=$?
elif [[ $1 == 's3cmd' ]]; then
error=$(s3cmd "${S3CMD_OPTS[@]}" --no-check-certificate rb s3://"$2" 2>&1) || exit_code=$?
elif [[ $1 == 'mc' ]]; then
error=$(mc --insecure rb "$MC_ALIAS/$2" 2>&1) || exit_code=$?
else
echo "Invalid command type $1"
return 1
fi
if [ $exit_code -ne 0 ]; then
if [[ "$error" == *"The specified bucket does not exist"* ]]; then
return 0
else
echo "error deleting bucket: $error"
return 1
fi
fi
return 0
}

View File

@@ -0,0 +1,23 @@
#!/usr/bin/env bash
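# delete the policy attached to a bucket
# params: command type, bucket name
# return 0 for success, 1 for failure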
delete_bucket_policy() {
if [[ $# -ne 2 ]]; then
echo "delete bucket policy command requires command type, bucket"
return 1
fi
if [[ $1 == 'aws' ]]; then
error=$(aws --no-verify-ssl s3api delete-bucket-policy --bucket "$2") || delete_result=$?
elif [[ $1 == 's3cmd' ]]; then
error=$(s3cmd "${S3CMD_OPTS[@]}" --no-check-certificate delpolicy "s3://$2") || delete_result=$?
elif [[ $1 == 'mc' ]]; then
error=$(mc --insecure anonymous set none "$MC_ALIAS/$2") || delete_result=$?
else
echo "command 'get bucket policy' not implemented for '$1'"
return 1
fi
if [[ $delete_result -ne 0 ]]; then
echo "error deleting bucket policy: $error"
return 1
fi
return 0
}

View File

@@ -0,0 +1,27 @@
#!/usr/bin/env bash
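# delete an object from a bucket
# params: command type, bucket name, object key
# return 0 for success, 1 for failure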
delete_object() {
if [ $# -ne 3 ]; then
echo "delete object command requires command type, bucket, key"
return 1
fi
local exit_code=0
local error
if [[ $1 == 's3' ]]; then
error=$(aws --no-verify-ssl s3 rm "s3://$2/$3" 2>&1) || exit_code=$?
elif [[ $1 == 's3api' ]] || [[ $1 == 'aws' ]]; then
error=$(aws --no-verify-ssl s3api delete-object --bucket "$2" --key "$3" 2>&1) || exit_code=$?
elif [[ $1 == 's3cmd' ]]; then
error=$(s3cmd "${S3CMD_OPTS[@]}" --no-check-certificate rm "s3://$2/$3" 2>&1) || exit_code=$?
elif [[ $1 == 'mc' ]]; then
error=$(mc --insecure rm "$MC_ALIAS/$2/$3" 2>&1) || exit_code=$?
else
echo "invalid command type $1"
return 1
fi
if [ $exit_code -ne 0 ]; then
echo "error deleting object: $error"
return 1
fi
return 0
}

View File

@@ -0,0 +1,21 @@
#!/usr/bin/env bash
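# delete the tags attached to an object
# params: command type, bucket name, object key
# return 0 for success, 1 for failure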
delete_object_tagging() {
if [[ $# -ne 3 ]]; then
echo "delete object tagging command missing command type, bucket, key"
return 1
fi
if [[ $1 == 'aws' ]]; then
error=$(aws --no-verify-ssl s3api delete-object-tagging --bucket "$2" --key "$3" 2>&1) || delete_result=$?
elif [[ $1 == 'mc' ]]; then
error=$(mc --insecure tag remove "$MC_ALIAS/$2/$3") || delete_result=$?
else
echo "delete-object-tagging command not implemented for '$1'"
return 1
fi
if [[ $delete_result -ne 0 ]]; then
echo "error deleting object tagging: $error"
return 1
fi
return 0
}

View File

@@ -0,0 +1,68 @@
#!/usr/bin/env bash
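# get the location (region) of a bucket
# params: command type, bucket name
# export 'location' on success, return 1 for error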
get_bucket_location() {
if [[ $# -ne 2 ]]; then
echo "get bucket location command requires command type, bucket name"
return 1
fi
if [[ $1 == 'aws' ]]; then
get_bucket_location_aws "$2" || get_result=$?
elif [[ $1 == 's3cmd' ]]; then
get_bucket_location_s3cmd "$2" || get_result=$?
elif [[ $1 == 'mc' ]]; then
get_bucket_location_mc "$2" || get_result=$?
else
echo "command type '$1' not implemented for get_bucket_location"
return 1
fi
if [[ $get_result -ne 0 ]]; then
return 1
fi
location=$(echo "$location_json" | jq -r '.LocationConstraint')
export location
}
get_bucket_location_aws() {
if [[ $# -ne 1 ]]; then
echo "get bucket location (aws) requires bucket name"
return 1
fi
location_json=$(aws --no-verify-ssl s3api get-bucket-location --bucket "$1") || location_result=$?
if [[ $location_result -ne 0 ]]; then
echo "error getting bucket location: $location"
return 1
fi
bucket_location=$(echo "$location_json" | jq -r '.LocationConstraint')
export bucket_location
return 0
}
get_bucket_location_s3cmd() {
if [[ $# -ne 1 ]]; then
echo "get bucket location (s3cmd) requires bucket name"
return 1
fi
info=$(s3cmd --no-check-certificate info "s3://$1") || results=$?
if [[ $results -ne 0 ]]; then
echo "error getting s3cmd info: $info"
return 1
fi
bucket_location=$(echo "$info" | grep -o 'Location:.*' | awk '{print $2}')
export bucket_location
return 0
}
get_bucket_location_mc() {
if [[ $# -ne 1 ]]; then
echo "get bucket location (mc) requires bucket name"
return 1
fi
info=$(mc --insecure stat "$MC_ALIAS/$1") || results=$?
if [[ $results -ne 0 ]]; then
echo "error getting s3cmd info: $info"
return 1
fi
bucket_location=$(echo "$info" | grep -o 'Location:.*' | awk '{print $2}')
export bucket_location
return 0
}

View File

@@ -0,0 +1,97 @@
#!/usr/bin/env bash
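# get the policy attached to a bucket
# params: command type, bucket name
# export 'bucket_policy' on success, return 1 for error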
get_bucket_policy() {
if [[ $# -ne 2 ]]; then
echo "get bucket policy command requires command type, bucket"
return 1
fi
local get_bucket_policy_result=0
if [[ $1 == 'aws' ]]; then
get_bucket_policy_aws "$2" || get_bucket_policy_result=$?
elif [[ $1 == 's3cmd' ]]; then
get_bucket_policy_s3cmd "$2" || get_bucket_policy_result=$?
elif [[ $1 == 'mc' ]]; then
get_bucket_policy_mc "$2" || get_bucket_policy_result=$?
else
echo "command 'get bucket policy' not implemented for '$1'"
return 1
fi
if [[ $get_bucket_policy_result -ne 0 ]]; then
echo "error getting policy: $bucket_policy"
return 1
fi
export bucket_policy
return 0
}
get_bucket_policy_aws() {
if [[ $# -ne 1 ]]; then
echo "aws 'get bucket policy' command requires bucket"
return 1
fi
policy_json=$(aws --no-verify-ssl s3api get-bucket-policy --bucket "$1" 2>&1) || get_result=$?
if [[ $policy_json == *"InsecureRequestWarning"* ]]; then
policy_json=$(awk 'NR>2' <<< "$policy_json")
fi
if [[ $get_result -ne 0 ]]; then
if [[ "$policy_json" == *"(NoSuchBucketPolicy)"* ]]; then
bucket_policy=
else
echo "error getting policy: $policy_json"
return 1
fi
else
bucket_policy=$(echo "{$policy_json}" | jq -r '.Policy')
fi
export bucket_policy
return 0
}
get_bucket_policy_s3cmd() {
if [[ $# -ne 1 ]]; then
echo "s3cmd 'get bucket policy' command requires bucket"
return 1
fi
info=$(s3cmd "${S3CMD_OPTS[@]}" --no-check-certificate info "s3://$1") || get_result=$?
if [[ $get_result -ne 0 ]]; then
echo "error getting bucket policy: $info"
return 1
fi
bucket_policy=""
policy_brackets=false
while IFS= read -r line; do
if [[ $policy_brackets == false ]]; then
policy_line=$(echo "$line" | grep 'Policy: ')
if [[ $policy_line != "" ]]; then
if [[ $policy_line != *'{' ]]; then
break
fi
policy_brackets=true
bucket_policy+="{"
fi
else
bucket_policy+=$line
if [[ $line == "" ]]; then
break
fi
fi
done <<< "$info"
export bucket_policy
return 0
}
get_bucket_policy_mc() {
if [[ $# -ne 1 ]]; then
echo "aws 'get bucket policy' command requires bucket"
return 1
fi
bucket_policy=$(mc --insecure anonymous get-json "$MC_ALIAS/$1") || get_result=$?
if [[ $get_result -ne 0 ]]; then
echo "error getting policy: $bucket_policy"
return 1
fi
export bucket_policy
return 0
}

View File

@@ -0,0 +1,31 @@
#!/usr/bin/env bash
# get bucket tags
# params: command type, bucket name
# export 'tags' on success, return 1 for error
get_bucket_tagging() {
if [ $# -ne 2 ]; then
echo "get bucket tag command missing command type, bucket name"
return 1
fi
local result
if [[ $1 == 'aws' ]]; then
tags=$(aws --no-verify-ssl s3api get-bucket-tagging --bucket "$2" 2>&1) || result=$?
elif [[ $1 == 'mc' ]]; then
tags=$(mc --insecure tag list "$MC_ALIAS"/"$2" 2>&1) || result=$?
else
echo "invalid command type $1"
return 1
fi
log 5 "Tags: $tags"
tags=$(echo "$tags" | grep -v "InsecureRequestWarning")
if [[ $result -ne 0 ]]; then
if [[ $tags =~ "No tags found" ]] || [[ $tags =~ "The TagSet does not exist" ]]; then
export tags=
return 0
fi
echo "error getting bucket tags: $tags"
return 1
fi
export tags
}

View File

@@ -0,0 +1,28 @@
#!/usr/bin/env bash
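# retrieve an object from a bucket
# params: command type, bucket name, object key, local destination
# return 0 for success, 1 for failure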
get_object() {
if [ $# -ne 4 ]; then
echo "get object command requires command type, bucket, key, destination"
return 1
fi
local exit_code=0
local error
if [[ $1 == 's3' ]]; then
error=$(aws --no-verify-ssl s3 mv "s3://$2/$3" "$4" 2>&1) || exit_code=$?
elif [[ $1 == 's3api' ]] || [[ $1 == 'aws' ]]; then
error=$(aws --no-verify-ssl s3api get-object --bucket "$2" --key "$3" "$4" 2>&1) || exit_code=$?
elif [[ $1 == 's3cmd' ]]; then
error=$(s3cmd "${S3CMD_OPTS[@]}" --no-check-certificate get "s3://$2/$3" "$4" 2>&1) || exit_code=$?
elif [[ $1 == 'mc' ]]; then
error=$(mc --insecure get "$MC_ALIAS/$2/$3" "$4" 2>&1) || exit_code=$?
else
echo "'get object' command not implemented for '$1'"
return 1
fi
log 5 "get object exit code: $exit_code"
if [ $exit_code -ne 0 ]; then
echo "error putting object into bucket: $error"
return 1
fi
return 0
}

View File

@@ -0,0 +1,25 @@
#!/usr/bin/env bash
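# get bucket info
# params: command type, bucket name
# export 'bucket_info' on success, return 1 for error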
head_bucket() {
if [ $# -ne 2 ]; then
echo "head bucket command missing command type, bucket name"
return 1
fi
local exit_code=0
if [[ $1 == "aws" ]] || [[ $1 == 's3api' ]] || [[ $1 == 's3' ]]; then
bucket_info=$(aws --no-verify-ssl s3api head-bucket --bucket "$2" 2>&1) || exit_code=$?
elif [[ $1 == "s3cmd" ]]; then
bucket_info=$(s3cmd --no-check-certificate info "s3://$2" 2>&1) || exit_code=$?
elif [[ $1 == 'mc' ]]; then
bucket_info=$(mc --insecure stat "$MC_ALIAS"/"$2" 2>&1) || exit_code=$?
else
echo "invalid command type $1"
return 1
fi
if [ $exit_code -ne 0 ]; then
echo "error getting bucket info: $bucket_info"
return 1
fi
export bucket_info
return 0
}

View File

@@ -0,0 +1,29 @@
#!/usr/bin/env bash
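# check if an object exists in a bucket
# params: command type, bucket name, object name
# return 0 if it exists, 1 if not, 2 for error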
head_object() {
if [ $# -ne 3 ]; then
echo "head-object missing command, bucket name, object name"
return 2
fi
local exit_code=0
local error=""
if [[ $1 == 'aws' ]] || [[ $1 == 's3api' ]] || [[ $1 == 's3' ]]; then
error=$(aws --no-verify-ssl s3api head-object --bucket "$2" --key "$3" 2>&1) || exit_code="$?"
elif [[ $1 == 's3cmd' ]]; then
error=$(s3cmd "${S3CMD_OPTS[@]}" --no-check-certificate info s3://"$2/$3" 2>&1) || exit_code="$?"
elif [[ $1 == 'mc' ]]; then
error=$(mc --insecure stat "$MC_ALIAS/$2/$3" 2>&1) || exit_code=$?
else
echo "invalid command type $1"
return 2
fi
if [ $exit_code -ne 0 ]; then
if [[ "$error" == *"404"* ]] || [[ "$error" == *"does not exist"* ]]; then
return 1
else
echo "error checking if object exists: $error"
return 2
fi
fi
return 0
}

View File

@@ -0,0 +1,61 @@
#!/usr/bin/env bash
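# list all buckets
# param: command type
# export 'bucket_array' on success, return 1 for error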
list_buckets() {
if [ $# -ne 1 ]; then
echo "list buckets command missing command type"
return 1
fi
local exit_code=0
local error
if [[ $1 == 's3' ]]; then
buckets=$(aws --no-verify-ssl s3 ls 2>&1 s3://) || exit_code=$?
elif [[ $1 == 's3api' ]] || [[ $1 == 'aws' ]]; then
list_buckets_s3api || exit_code=$?
elif [[ $1 == 's3cmd' ]]; then
buckets=$(s3cmd "${S3CMD_OPTS[@]}" --no-check-certificate ls s3:// 2>&1) || exit_code=$?
elif [[ $1 == 'mc' ]]; then
buckets=$(mc --insecure ls "$MC_ALIAS" 2>&1) || exit_code=$?
else
echo "list buckets command not implemented for '$1'"
return 1
fi
if [ $exit_code -ne 0 ]; then
echo "error listing buckets: $buckets"
return 1
fi
if [[ $1 == 's3api' ]] || [[ $1 == 'aws' ]]; then
return 0
fi
bucket_array=()
while IFS= read -r line; do
bucket_name=$(echo "$line" | awk '{print $NF}')
bucket_array+=("${bucket_name%/}")
done <<< "$buckets"
export bucket_array
return 0
}
list_buckets_s3api() {
output=$(aws --no-verify-ssl s3api list-buckets 2>&1) || exit_code=$?
if [[ $exit_code -ne 0 ]]; then
echo "error listing buckets: $output"
return 1
fi
modified_output=""
while IFS= read -r line; do
if [[ $line != *InsecureRequestWarning* ]]; then
modified_output+="$line"
fi
done <<< "$output"
bucket_array=()
names=$(jq -r '.Buckets[].Name' <<<"$modified_output")
IFS=$'\n' read -rd '' -a bucket_array <<<"$names"
export bucket_array
return 0
}

View File

@@ -0,0 +1,65 @@
#!/usr/bin/env bash
list_objects() {
if [ $# -ne 2 ]; then
echo "list objects command requires command type, and bucket or folder"
return 1
fi
local exit_code=0
local output
if [[ $1 == "aws" ]] || [[ $1 == 's3' ]]; then
output=$(aws --no-verify-ssl s3 ls s3://"$2" 2>&1) || exit_code=$?
elif [[ $1 == 's3api' ]]; then
list_objects_s3api "$2" || exit_code=$?
elif [[ $1 == 's3cmd' ]]; then
output=$(s3cmd "${S3CMD_OPTS[@]}" --no-check-certificate ls s3://"$2" 2>&1) || exit_code=$?
elif [[ $1 == 'mc' ]]; then
output=$(mc --insecure ls "$MC_ALIAS"/"$2" 2>&1) || exit_code=$?
else
echo "invalid command type $1"
return 1
fi
if [ $exit_code -ne 0 ]; then
echo "error listing objects: $output"
return 1
fi
if [[ $1 == 's3api' ]]; then
return 0
fi
object_array=()
while IFS= read -r line; do
if [[ $line != *InsecureRequestWarning* ]]; then
object_name=$(echo "$line" | awk '{print $NF}')
object_array+=("$object_name")
fi
done <<< "$output"
export object_array
}
list_objects_s3api() {
if [[ $# -ne 1 ]]; then
echo "list objects s3api command requires bucket name"
return 1
fi
local exit_code=0
output=$(aws --no-verify-ssl s3api list-objects --bucket "$1" 2>&1) || exit_code=$?
if [[ $exit_code -ne 0 ]]; then
echo "error listing objects: $output"
return 1
fi
modified_output=""
while IFS= read -r line; do
if [[ $line != *InsecureRequestWarning* ]]; then
modified_output+="$line"
fi
done <<< "$output"
object_array=()
keys=$(jq -r '.Contents[]?.Key' <<<"$modified_output")
IFS=$'\n' read -rd '' -a object_array <<<"$keys"
export object_array
}
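As with the bucket helpers, both code paths export object_array for the caller. A hedged usage sketch (BUCKET_ONE_NAME as used elsewhere in these tests):
list_objects "s3api" "$BUCKET_ONE_NAME" || exit 1
echo "bucket contains ${#object_array[@]} objects"
for key in "${object_array[@]}"; do
  echo "  $key"
done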


@@ -0,0 +1,23 @@
#!/usr/bin/env bash
put_bucket_policy() {
if [[ $# -ne 3 ]]; then
echo "put bucket policy command requires command type, bucket, policy file"
return 1
fi
local put_result=0
if [[ $1 == 'aws' ]]; then
policy=$(aws --no-verify-ssl s3api put-bucket-policy --bucket "$2" --policy "file://$3" 2>&1) || put_result=$?
elif [[ $1 == 's3cmd' ]]; then
policy=$(s3cmd "${S3CMD_OPTS[@]}" --no-check-certificate setpolicy "$3" "s3://$2" 2>&1) || put_result=$?
elif [[ $1 == 'mc' ]]; then
policy=$(mc --insecure anonymous set-json "$3" "$MC_ALIAS/$2" 2>&1) || put_result=$?
else
echo "command 'put bucket policy' not implemented for '$1'"
return 1
fi
if [[ $put_result -ne 0 ]]; then
echo "error putting policy: $policy"
return 1
fi
return 0
}
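A sketch of driving this helper with a minimal policy document (bucket name and principal are illustrative only; the fields follow the standard S3 policy grammar):
# illustrative policy file
cat > "$test_file_folder/policy.json" <<'EOF'
{
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": "*",
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::my-bucket/*"
    }
  ]
}
EOF
put_bucket_policy "aws" "my-bucket" "$test_file_folder/policy.json" || exit 1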


@@ -0,0 +1,28 @@
#!/usr/bin/env bash
put_object() {
if [ $# -ne 4 ]; then
echo "put object command requires command type, source, destination bucket, destination key"
return 1
fi
local exit_code=0
local error
if [[ $1 == 's3' ]]; then
error=$(aws --no-verify-ssl s3 mv "$2" s3://"$3/$4" 2>&1) || exit_code=$?
elif [[ $1 == 's3api' ]] || [[ $1 == 'aws' ]]; then
error=$(aws --no-verify-ssl s3api put-object --body "$2" --bucket "$3" --key "$4" 2>&1) || exit_code=$?
elif [[ $1 == 's3cmd' ]]; then
error=$(s3cmd "${S3CMD_OPTS[@]}" --no-check-certificate put "$2" s3://"$3/$4" 2>&1) || exit_code=$?
elif [[ $1 == 'mc' ]]; then
error=$(mc --insecure put "$2" "$MC_ALIAS/$3/$4" 2>&1) || exit_code=$?
else
echo "'put object' command not implemented for '$1'"
return 1
fi
log 5 "put object exit code: $exit_code"
if [ $exit_code -ne 0 ]; then
echo "error putting object into bucket: $error"
return 1
fi
return 0
}
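Usage mirrors the other wrappers; note that the s3 path uses mv rather than cp, so the local source file is removed on success. A sketch using helpers seen elsewhere in this suite:
create_test_files "data-file" || exit 1
put_object "s3api" "$test_file_folder/data-file" "$BUCKET_ONE_NAME" "data-file" || exit 1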


@@ -54,6 +54,7 @@ func TestCreateBucket(s *S3Conf) {
CreateBucket_default_acl(s)
CreateBucket_non_default_acl(s)
CreateDeleteBucket_success(s)
CreateBucket_default_object_lock(s)
}
func TestHeadBucket(s *S3Conf) {
@@ -81,6 +82,7 @@ func TestPutBucketTagging(s *S3Conf) {
func TestGetBucketTagging(s *S3Conf) {
GetBucketTagging_non_existing_bucket(s)
GetBucketTagging_unset_tags(s)
GetBucketTagging_success(s)
}
@@ -94,6 +96,8 @@ func TestPutObject(s *S3Conf) {
PutObject_non_existing_bucket(s)
PutObject_special_chars(s)
PutObject_invalid_long_tags(s)
PutObject_missing_object_lock_retention_config(s)
PutObject_with_object_lock(s)
PutObject_success(s)
PutObject_invalid_credentials(s)
}
@@ -103,6 +107,14 @@ func TestHeadObject(s *S3Conf) {
HeadObject_success(s)
}
func TestGetObjectAttributes(s *S3Conf) {
GetObjectAttributes_non_existing_bucket(s)
GetObjectAttributes_non_existing_object(s)
GetObjectAttributes_existing_object(s)
GetObjectAttributes_multipart_upload(s)
GetObjectAttributes_multipart_upload_truncated(s)
}
func TestGetObject(s *S3Conf) {
GetObject_non_existing_key(s)
GetObject_invalid_ranges(s)
@@ -157,6 +169,7 @@ func TestPutObjectTagging(s *S3Conf) {
func TestGetObjectTagging(s *S3Conf) {
GetObjectTagging_non_existing_object(s)
GetObjectTagging_unset_tags(s)
GetObjectTagging_success(s)
}
@@ -267,7 +280,7 @@ func TestPutBucketPolicy(s *S3Conf) {
func TestGetBucketPolicy(s *S3Conf) {
GetBucketPolicy_non_existing_bucket(s)
GetBucketPolicy_default_empty_policy(s)
GetBucketPolicy_not_set(s)
GetBucketPolicy_success(s)
}
@@ -277,6 +290,61 @@ func TestDeleteBucketPolicy(s *S3Conf) {
DeleteBucketPolicy_success(s)
}
func TestPutObjectLockConfiguration(s *S3Conf) {
PutObjectLockConfiguration_non_existing_bucket(s)
PutObjectLockConfiguration_empty_config(s)
PutObjectLockConfiguration_both_years_and_days(s)
PutObjectLockConfiguration_success(s)
}
func TestGetObjectLockConfiguration(s *S3Conf) {
GetObjectLockConfiguration_non_existing_bucket(s)
GetObjectLockConfiguration_unset_config(s)
GetObjectLockConfiguration_success(s)
}
func TestPutObjectRetention(s *S3Conf) {
PutObjectRetention_non_existing_bucket(s)
PutObjectRetention_non_existing_object(s)
PutObjectRetention_unset_bucket_object_lock_config(s)
PutObjectRetention_disabled_bucket_object_lock_config(s)
PutObjectRetention_expired_retain_until_date(s)
PutObjectRetention_success(s)
}
func TestGetObjectRetention(s *S3Conf) {
GetObjectRetention_non_existing_bucket(s)
GetObjectRetention_non_existing_object(s)
GetObjectRetention_unset_config(s)
GetObjectRetention_success(s)
}
func TestPutObjectLegalHold(s *S3Conf) {
PutObjectLegalHold_non_existing_bucket(s)
PutObjectLegalHold_non_existing_object(s)
PutObjectLegalHold_invalid_body(s)
PutObjectLegalHold_unset_bucket_object_lock_config(s)
PutObjectLegalHold_disabled_bucket_object_lock_config(s)
PutObjectLegalHold_success(s)
}
func TestGetObjectLegalHold(s *S3Conf) {
GetObjectLegalHold_non_existing_bucket(s)
GetObjectLegalHold_non_existing_object(s)
GetObjectLegalHold_unset_config(s)
GetObjectLegalHold_success(s)
}
func TestWORMProtection(s *S3Conf) {
WORMProtection_bucket_object_lock_configuration_compliance_mode(s)
WORMProtection_bucket_object_lock_governance_root_overwrite(s)
WORMProtection_object_lock_retention_compliance_root_access_denied(s)
WORMProtection_object_lock_retention_governance_root_overwrite(s)
WORMProtection_object_lock_retention_governance_user_access_denied(s)
WORMProtection_object_lock_legal_hold_user_access_denied(s)
WORMProtection_object_lock_legal_hold_root_overwrite(s)
}
func TestFullFlow(s *S3Conf) {
TestAuthentication(s)
TestPresignedAuthentication(s)
@@ -289,6 +357,7 @@ func TestFullFlow(s *S3Conf) {
TestDeleteBucketTagging(s)
TestPutObject(s)
TestHeadObject(s)
TestGetObjectAttributes(s)
TestGetObject(s)
TestListObjects(s)
TestListObjectsV2(s)
@@ -309,6 +378,13 @@ func TestFullFlow(s *S3Conf) {
TestPutBucketPolicy(s)
TestGetBucketPolicy(s)
TestDeleteBucketPolicy(s)
TestPutObjectLockConfiguration(s)
TestGetObjectLockConfiguration(s)
TestPutObjectRetention(s)
TestGetObjectRetention(s)
TestPutObjectLegalHold(s)
TestGetObjectLegalHold(s)
TestWORMProtection(s)
TestAccessControl(s)
}
@@ -340,193 +416,237 @@ type IntTests map[string]func(s *S3Conf) error
func GetIntTests() IntTests {
return IntTests{
"Authentication_empty_auth_header": Authentication_empty_auth_header,
"Authentication_invalid_auth_header": Authentication_invalid_auth_header,
"Authentication_unsupported_signature_version": Authentication_unsupported_signature_version,
"Authentication_malformed_credentials": Authentication_malformed_credentials,
"Authentication_malformed_credentials_invalid_parts": Authentication_malformed_credentials_invalid_parts,
"Authentication_credentials_terminated_string": Authentication_credentials_terminated_string,
"Authentication_credentials_incorrect_service": Authentication_credentials_incorrect_service,
"Authentication_credentials_incorrect_region": Authentication_credentials_incorrect_region,
"Authentication_credentials_invalid_date": Authentication_credentials_invalid_date,
"Authentication_credentials_future_date": Authentication_credentials_future_date,
"Authentication_credentials_past_date": Authentication_credentials_past_date,
"Authentication_credentials_non_existing_access_key": Authentication_credentials_non_existing_access_key,
"Authentication_invalid_signed_headers": Authentication_invalid_signed_headers,
"Authentication_missing_date_header": Authentication_missing_date_header,
"Authentication_invalid_date_header": Authentication_invalid_date_header,
"Authentication_date_mismatch": Authentication_date_mismatch,
"Authentication_incorrect_payload_hash": Authentication_incorrect_payload_hash,
"Authentication_incorrect_md5": Authentication_incorrect_md5,
"Authentication_signature_error_incorrect_secret_key": Authentication_signature_error_incorrect_secret_key,
"PresignedAuth_missing_algo_query_param": PresignedAuth_missing_algo_query_param,
"PresignedAuth_unsupported_algorithm": PresignedAuth_unsupported_algorithm,
"PresignedAuth_missing_credentials_query_param": PresignedAuth_missing_credentials_query_param,
"PresignedAuth_malformed_creds_invalid_parts": PresignedAuth_malformed_creds_invalid_parts,
"PresignedAuth_creds_invalid_terminator": PresignedAuth_creds_invalid_terminator,
"PresignedAuth_creds_incorrect_service": PresignedAuth_creds_incorrect_service,
"PresignedAuth_creds_incorrect_region": PresignedAuth_creds_incorrect_region,
"PresignedAuth_creds_invalid_date": PresignedAuth_creds_invalid_date,
"PresignedAuth_missing_date_query": PresignedAuth_missing_date_query,
"PresignedAuth_dates_mismatch": PresignedAuth_dates_mismatch,
"PresignedAuth_non_existing_access_key_id": PresignedAuth_non_existing_access_key_id,
"PresignedAuth_missing_signed_headers_query_param": PresignedAuth_missing_signed_headers_query_param,
"PresignedAuth_missing_expiration_query_param": PresignedAuth_missing_expiration_query_param,
"PresignedAuth_invalid_expiration_query_param": PresignedAuth_invalid_expiration_query_param,
"PresignedAuth_negative_expiration_query_param": PresignedAuth_negative_expiration_query_param,
"PresignedAuth_exceeding_expiration_query_param": PresignedAuth_exceeding_expiration_query_param,
"PresignedAuth_expired_request": PresignedAuth_expired_request,
"PresignedAuth_incorrect_secret_key": PresignedAuth_incorrect_secret_key,
"PresignedAuth_PutObject_success": PresignedAuth_PutObject_success,
"PresignedAuth_Put_GetObject_with_data": PresignedAuth_Put_GetObject_with_data,
"PresignedAuth_Put_GetObject_with_UTF8_chars": PresignedAuth_Put_GetObject_with_UTF8_chars,
"PresignedAuth_UploadPart": PresignedAuth_UploadPart,
"CreateBucket_invalid_bucket_name": CreateBucket_invalid_bucket_name,
"CreateBucket_existing_bucket": CreateBucket_existing_bucket,
"CreateBucket_as_user": CreateBucket_as_user,
"CreateDeleteBucket_success": CreateDeleteBucket_success,
"CreateBucket_default_acl": CreateBucket_default_acl,
"CreateBucket_non_default_acl": CreateBucket_non_default_acl,
"HeadBucket_non_existing_bucket": HeadBucket_non_existing_bucket,
"HeadBucket_success": HeadBucket_success,
"ListBuckets_as_user": ListBuckets_as_user,
"ListBuckets_as_admin": ListBuckets_as_admin,
"ListBuckets_success": ListBuckets_success,
"DeleteBucket_non_existing_bucket": DeleteBucket_non_existing_bucket,
"DeleteBucket_non_empty_bucket": DeleteBucket_non_empty_bucket,
"DeleteBucket_success_status_code": DeleteBucket_success_status_code,
"PutBucketTagging_non_existing_bucket": PutBucketTagging_non_existing_bucket,
"PutBucketTagging_long_tags": PutBucketTagging_long_tags,
"PutBucketTagging_success": PutBucketTagging_success,
"GetBucketTagging_non_existing_bucket": GetBucketTagging_non_existing_bucket,
"GetBucketTagging_success": GetBucketTagging_success,
"DeleteBucketTagging_non_existing_object": DeleteBucketTagging_non_existing_object,
"DeleteBucketTagging_success_status": DeleteBucketTagging_success_status,
"DeleteBucketTagging_success": DeleteBucketTagging_success,
"PutObject_non_existing_bucket": PutObject_non_existing_bucket,
"PutObject_special_chars": PutObject_special_chars,
"PutObject_invalid_long_tags": PutObject_invalid_long_tags,
"PutObject_success": PutObject_success,
"HeadObject_non_existing_object": HeadObject_non_existing_object,
"HeadObject_success": HeadObject_success,
"GetObject_non_existing_key": GetObject_non_existing_key,
"GetObject_invalid_ranges": GetObject_invalid_ranges,
"GetObject_with_meta": GetObject_with_meta,
"GetObject_success": GetObject_success,
"GetObject_by_range_success": GetObject_by_range_success,
"ListObjects_non_existing_bucket": ListObjects_non_existing_bucket,
"ListObjects_with_prefix": ListObjects_with_prefix,
"ListObject_truncated": ListObject_truncated,
"ListObjects_invalid_max_keys": ListObjects_invalid_max_keys,
"ListObjects_max_keys_0": ListObjects_max_keys_0,
"ListObjects_delimiter": ListObjects_delimiter,
"ListObjects_max_keys_none": ListObjects_max_keys_none,
"ListObjects_marker_not_from_obj_list": ListObjects_marker_not_from_obj_list,
"ListObjectsV2_start_after": ListObjectsV2_start_after,
"ListObjectsV2_both_start_after_and_continuation_token": ListObjectsV2_both_start_after_and_continuation_token,
"ListObjectsV2_start_after_not_in_list": ListObjectsV2_start_after_not_in_list,
"ListObjectsV2_start_after_empty_result": ListObjectsV2_start_after_empty_result,
"DeleteObject_non_existing_object": DeleteObject_non_existing_object,
"DeleteObject_success": DeleteObject_success,
"DeleteObject_success_status_code": DeleteObject_success_status_code,
"DeleteObjects_empty_input": DeleteObjects_empty_input,
"DeleteObjects_non_existing_objects": DeleteObjects_non_existing_objects,
"DeleteObjects_success": DeleteObjects_success,
"CopyObject_non_existing_dst_bucket": CopyObject_non_existing_dst_bucket,
"CopyObject_not_owned_source_bucket": CopyObject_not_owned_source_bucket,
"CopyObject_copy_to_itself": CopyObject_copy_to_itself,
"CopyObject_to_itself_with_new_metadata": CopyObject_to_itself_with_new_metadata,
"CopyObject_success": CopyObject_success,
"PutObjectTagging_non_existing_object": PutObjectTagging_non_existing_object,
"PutObjectTagging_long_tags": PutObjectTagging_long_tags,
"PutObjectTagging_success": PutObjectTagging_success,
"GetObjectTagging_non_existing_object": GetObjectTagging_non_existing_object,
"GetObjectTagging_success": GetObjectTagging_success,
"DeleteObjectTagging_non_existing_object": DeleteObjectTagging_non_existing_object,
"DeleteObjectTagging_success_status": DeleteObjectTagging_success_status,
"DeleteObjectTagging_success": DeleteObjectTagging_success,
"CreateMultipartUpload_non_existing_bucket": CreateMultipartUpload_non_existing_bucket,
"CreateMultipartUpload_success": CreateMultipartUpload_success,
"UploadPart_non_existing_bucket": UploadPart_non_existing_bucket,
"UploadPart_invalid_part_number": UploadPart_invalid_part_number,
"UploadPart_non_existing_key": UploadPart_non_existing_key,
"UploadPart_non_existing_mp_upload": UploadPart_non_existing_mp_upload,
"UploadPart_success": UploadPart_success,
"UploadPartCopy_non_existing_bucket": UploadPartCopy_non_existing_bucket,
"UploadPartCopy_incorrect_uploadId": UploadPartCopy_incorrect_uploadId,
"UploadPartCopy_incorrect_object_key": UploadPartCopy_incorrect_object_key,
"UploadPartCopy_invalid_part_number": UploadPartCopy_invalid_part_number,
"UploadPartCopy_invalid_copy_source": UploadPartCopy_invalid_copy_source,
"UploadPartCopy_non_existing_source_bucket": UploadPartCopy_non_existing_source_bucket,
"UploadPartCopy_non_existing_source_object_key": UploadPartCopy_non_existing_source_object_key,
"UploadPartCopy_success": UploadPartCopy_success,
"UploadPartCopy_by_range_invalid_range": UploadPartCopy_by_range_invalid_range,
"UploadPartCopy_greater_range_than_obj_size": UploadPartCopy_greater_range_than_obj_size,
"UploadPartCopy_by_range_success": UploadPartCopy_by_range_success,
"ListParts_incorrect_uploadId": ListParts_incorrect_uploadId,
"ListParts_incorrect_object_key": ListParts_incorrect_object_key,
"ListParts_success": ListParts_success,
"ListMultipartUploads_non_existing_bucket": ListMultipartUploads_non_existing_bucket,
"ListMultipartUploads_empty_result": ListMultipartUploads_empty_result,
"ListMultipartUploads_invalid_max_uploads": ListMultipartUploads_invalid_max_uploads,
"ListMultipartUploads_max_uploads": ListMultipartUploads_max_uploads,
"ListMultipartUploads_incorrect_next_key_marker": ListMultipartUploads_incorrect_next_key_marker,
"ListMultipartUploads_ignore_upload_id_marker": ListMultipartUploads_ignore_upload_id_marker,
"ListMultipartUploads_success": ListMultipartUploads_success,
"AbortMultipartUpload_non_existing_bucket": AbortMultipartUpload_non_existing_bucket,
"AbortMultipartUpload_incorrect_uploadId": AbortMultipartUpload_incorrect_uploadId,
"AbortMultipartUpload_incorrect_object_key": AbortMultipartUpload_incorrect_object_key,
"AbortMultipartUpload_success": AbortMultipartUpload_success,
"AbortMultipartUpload_success_status_code": AbortMultipartUpload_success_status_code,
"CompletedMultipartUpload_non_existing_bucket": CompletedMultipartUpload_non_existing_bucket,
"CompleteMultipartUpload_invalid_part_number": CompleteMultipartUpload_invalid_part_number,
"CompleteMultipartUpload_invalid_ETag": CompleteMultipartUpload_invalid_ETag,
"CompleteMultipartUpload_success": CompleteMultipartUpload_success,
"PutBucketAcl_non_existing_bucket": PutBucketAcl_non_existing_bucket,
"PutBucketAcl_invalid_acl_canned_and_acp": PutBucketAcl_invalid_acl_canned_and_acp,
"PutBucketAcl_invalid_acl_canned_and_grants": PutBucketAcl_invalid_acl_canned_and_grants,
"PutBucketAcl_invalid_acl_acp_and_grants": PutBucketAcl_invalid_acl_acp_and_grants,
"PutBucketAcl_invalid_owner": PutBucketAcl_invalid_owner,
"PutBucketAcl_success_access_denied": PutBucketAcl_success_access_denied,
"PutBucketAcl_success_grants": PutBucketAcl_success_grants,
"PutBucketAcl_success_canned_acl": PutBucketAcl_success_canned_acl,
"PutBucketAcl_success_acp": PutBucketAcl_success_acp,
"GetBucketAcl_non_existing_bucket": GetBucketAcl_non_existing_bucket,
"GetBucketAcl_access_denied": GetBucketAcl_access_denied,
"GetBucketAcl_success": GetBucketAcl_success,
"PutBucketPolicy_non_existing_bucket": PutBucketPolicy_non_existing_bucket,
"PutBucketPolicy_invalid_effect": PutBucketPolicy_invalid_effect,
"PutBucketPolicy_empty_actions_string": PutBucketPolicy_empty_actions_string,
"PutBucketPolicy_empty_actions_array": PutBucketPolicy_empty_actions_array,
"PutBucketPolicy_invalid_action": PutBucketPolicy_invalid_action,
"PutBucketPolicy_unsupported_action": PutBucketPolicy_unsupported_action,
"PutBucketPolicy_incorrect_action_wildcard_usage": PutBucketPolicy_incorrect_action_wildcard_usage,
"PutBucketPolicy_empty_principals_string": PutBucketPolicy_empty_principals_string,
"PutBucketPolicy_empty_principals_array": PutBucketPolicy_empty_principals_array,
"PutBucketPolicy_principals_incorrect_wildcard_usage": PutBucketPolicy_principals_incorrect_wildcard_usage,
"PutBucketPolicy_non_existing_principals": PutBucketPolicy_non_existing_principals,
"PutBucketPolicy_empty_resources_string": PutBucketPolicy_empty_resources_string,
"PutBucketPolicy_empty_resources_array": PutBucketPolicy_empty_resources_array,
"PutBucketPolicy_invalid_resource_prefix": PutBucketPolicy_invalid_resource_prefix,
"PutBucketPolicy_invalid_resource_with_starting_slash": PutBucketPolicy_invalid_resource_with_starting_slash,
"PutBucketPolicy_duplicate_resource": PutBucketPolicy_duplicate_resource,
"PutBucketPolicy_incorrect_bucket_name": PutBucketPolicy_incorrect_bucket_name,
"PutBucketPolicy_object_action_on_bucket_resource": PutBucketPolicy_object_action_on_bucket_resource,
"PutBucketPolicy_bucket_action_on_object_resource": PutBucketPolicy_bucket_action_on_object_resource,
"PutBucketPolicy_success": PutBucketPolicy_success,
"GetBucketPolicy_non_existing_bucket": GetBucketPolicy_non_existing_bucket,
"GetBucketPolicy_default_empty_policy": GetBucketPolicy_default_empty_policy,
"GetBucketPolicy_success": GetBucketPolicy_success,
"DeleteBucketPolicy_non_existing_bucket": DeleteBucketPolicy_non_existing_bucket,
"DeleteBucketPolicy_remove_before_setting": DeleteBucketPolicy_remove_before_setting,
"DeleteBucketPolicy_success": DeleteBucketPolicy_success,
"PutObject_overwrite_dir_obj": PutObject_overwrite_dir_obj,
"PutObject_overwrite_file_obj": PutObject_overwrite_file_obj,
"PutObject_dir_obj_with_data": PutObject_dir_obj_with_data,
"CreateMultipartUpload_dir_obj": CreateMultipartUpload_dir_obj,
"IAM_user_access_denied": IAM_user_access_denied,
"IAM_userplus_access_denied": IAM_userplus_access_denied,
"IAM_userplus_CreateBucket": IAM_userplus_CreateBucket,
"IAM_admin_ChangeBucketOwner": IAM_admin_ChangeBucketOwner,
"Authentication_empty_auth_header": Authentication_empty_auth_header,
"Authentication_invalid_auth_header": Authentication_invalid_auth_header,
"Authentication_unsupported_signature_version": Authentication_unsupported_signature_version,
"Authentication_malformed_credentials": Authentication_malformed_credentials,
"Authentication_malformed_credentials_invalid_parts": Authentication_malformed_credentials_invalid_parts,
"Authentication_credentials_terminated_string": Authentication_credentials_terminated_string,
"Authentication_credentials_incorrect_service": Authentication_credentials_incorrect_service,
"Authentication_credentials_incorrect_region": Authentication_credentials_incorrect_region,
"Authentication_credentials_invalid_date": Authentication_credentials_invalid_date,
"Authentication_credentials_future_date": Authentication_credentials_future_date,
"Authentication_credentials_past_date": Authentication_credentials_past_date,
"Authentication_credentials_non_existing_access_key": Authentication_credentials_non_existing_access_key,
"Authentication_invalid_signed_headers": Authentication_invalid_signed_headers,
"Authentication_missing_date_header": Authentication_missing_date_header,
"Authentication_invalid_date_header": Authentication_invalid_date_header,
"Authentication_date_mismatch": Authentication_date_mismatch,
"Authentication_incorrect_payload_hash": Authentication_incorrect_payload_hash,
"Authentication_incorrect_md5": Authentication_incorrect_md5,
"Authentication_signature_error_incorrect_secret_key": Authentication_signature_error_incorrect_secret_key,
"PresignedAuth_missing_algo_query_param": PresignedAuth_missing_algo_query_param,
"PresignedAuth_unsupported_algorithm": PresignedAuth_unsupported_algorithm,
"PresignedAuth_missing_credentials_query_param": PresignedAuth_missing_credentials_query_param,
"PresignedAuth_malformed_creds_invalid_parts": PresignedAuth_malformed_creds_invalid_parts,
"PresignedAuth_creds_invalid_terminator": PresignedAuth_creds_invalid_terminator,
"PresignedAuth_creds_incorrect_service": PresignedAuth_creds_incorrect_service,
"PresignedAuth_creds_incorrect_region": PresignedAuth_creds_incorrect_region,
"PresignedAuth_creds_invalid_date": PresignedAuth_creds_invalid_date,
"PresignedAuth_missing_date_query": PresignedAuth_missing_date_query,
"PresignedAuth_dates_mismatch": PresignedAuth_dates_mismatch,
"PresignedAuth_non_existing_access_key_id": PresignedAuth_non_existing_access_key_id,
"PresignedAuth_missing_signed_headers_query_param": PresignedAuth_missing_signed_headers_query_param,
"PresignedAuth_missing_expiration_query_param": PresignedAuth_missing_expiration_query_param,
"PresignedAuth_invalid_expiration_query_param": PresignedAuth_invalid_expiration_query_param,
"PresignedAuth_negative_expiration_query_param": PresignedAuth_negative_expiration_query_param,
"PresignedAuth_exceeding_expiration_query_param": PresignedAuth_exceeding_expiration_query_param,
"PresignedAuth_expired_request": PresignedAuth_expired_request,
"PresignedAuth_incorrect_secret_key": PresignedAuth_incorrect_secret_key,
"PresignedAuth_PutObject_success": PresignedAuth_PutObject_success,
"PutObject_missing_object_lock_retention_config": PutObject_missing_object_lock_retention_config,
"PutObject_with_object_lock": PutObject_with_object_lock,
"PresignedAuth_Put_GetObject_with_data": PresignedAuth_Put_GetObject_with_data,
"PresignedAuth_Put_GetObject_with_UTF8_chars": PresignedAuth_Put_GetObject_with_UTF8_chars,
"PresignedAuth_UploadPart": PresignedAuth_UploadPart,
"CreateBucket_invalid_bucket_name": CreateBucket_invalid_bucket_name,
"CreateBucket_existing_bucket": CreateBucket_existing_bucket,
"CreateBucket_as_user": CreateBucket_as_user,
"CreateDeleteBucket_success": CreateDeleteBucket_success,
"CreateBucket_default_acl": CreateBucket_default_acl,
"CreateBucket_non_default_acl": CreateBucket_non_default_acl,
"CreateBucket_default_object_lock": CreateBucket_default_object_lock,
"HeadBucket_non_existing_bucket": HeadBucket_non_existing_bucket,
"HeadBucket_success": HeadBucket_success,
"ListBuckets_as_user": ListBuckets_as_user,
"ListBuckets_as_admin": ListBuckets_as_admin,
"ListBuckets_success": ListBuckets_success,
"DeleteBucket_non_existing_bucket": DeleteBucket_non_existing_bucket,
"DeleteBucket_non_empty_bucket": DeleteBucket_non_empty_bucket,
"DeleteBucket_success_status_code": DeleteBucket_success_status_code,
"PutBucketTagging_non_existing_bucket": PutBucketTagging_non_existing_bucket,
"PutBucketTagging_long_tags": PutBucketTagging_long_tags,
"PutBucketTagging_success": PutBucketTagging_success,
"GetBucketTagging_non_existing_bucket": GetBucketTagging_non_existing_bucket,
"GetBucketTagging_unset_tags": GetBucketTagging_unset_tags,
"GetBucketTagging_success": GetBucketTagging_success,
"DeleteBucketTagging_non_existing_object": DeleteBucketTagging_non_existing_object,
"DeleteBucketTagging_success_status": DeleteBucketTagging_success_status,
"DeleteBucketTagging_success": DeleteBucketTagging_success,
"PutObject_non_existing_bucket": PutObject_non_existing_bucket,
"PutObject_special_chars": PutObject_special_chars,
"PutObject_invalid_long_tags": PutObject_invalid_long_tags,
"PutObject_success": PutObject_success,
"HeadObject_non_existing_object": HeadObject_non_existing_object,
"HeadObject_success": HeadObject_success,
"GetObjectAttributes_non_existing_bucket": GetObjectAttributes_non_existing_bucket,
"GetObjectAttributes_non_existing_object": GetObjectAttributes_non_existing_object,
"GetObjectAttributes_existing_object": GetObjectAttributes_existing_object,
"GetObjectAttributes_multipart_upload": GetObjectAttributes_multipart_upload,
"GetObjectAttributes_multipart_upload_truncated": GetObjectAttributes_multipart_upload_truncated,
"GetObject_non_existing_key": GetObject_non_existing_key,
"GetObject_invalid_ranges": GetObject_invalid_ranges,
"GetObject_with_meta": GetObject_with_meta,
"GetObject_success": GetObject_success,
"GetObject_by_range_success": GetObject_by_range_success,
"ListObjects_non_existing_bucket": ListObjects_non_existing_bucket,
"ListObjects_with_prefix": ListObjects_with_prefix,
"ListObject_truncated": ListObject_truncated,
"ListObjects_invalid_max_keys": ListObjects_invalid_max_keys,
"ListObjects_max_keys_0": ListObjects_max_keys_0,
"ListObjects_delimiter": ListObjects_delimiter,
"ListObjects_max_keys_none": ListObjects_max_keys_none,
"ListObjects_marker_not_from_obj_list": ListObjects_marker_not_from_obj_list,
"ListObjectsV2_start_after": ListObjectsV2_start_after,
"ListObjectsV2_both_start_after_and_continuation_token": ListObjectsV2_both_start_after_and_continuation_token,
"ListObjectsV2_start_after_not_in_list": ListObjectsV2_start_after_not_in_list,
"ListObjectsV2_start_after_empty_result": ListObjectsV2_start_after_empty_result,
"DeleteObject_non_existing_object": DeleteObject_non_existing_object,
"DeleteObject_success": DeleteObject_success,
"DeleteObject_success_status_code": DeleteObject_success_status_code,
"DeleteObjects_empty_input": DeleteObjects_empty_input,
"DeleteObjects_non_existing_objects": DeleteObjects_non_existing_objects,
"DeleteObjects_success": DeleteObjects_success,
"CopyObject_non_existing_dst_bucket": CopyObject_non_existing_dst_bucket,
"CopyObject_not_owned_source_bucket": CopyObject_not_owned_source_bucket,
"CopyObject_copy_to_itself": CopyObject_copy_to_itself,
"CopyObject_to_itself_with_new_metadata": CopyObject_to_itself_with_new_metadata,
"CopyObject_success": CopyObject_success,
"PutObjectTagging_non_existing_object": PutObjectTagging_non_existing_object,
"PutObjectTagging_long_tags": PutObjectTagging_long_tags,
"PutObjectTagging_success": PutObjectTagging_success,
"GetObjectTagging_non_existing_object": GetObjectTagging_non_existing_object,
"GetObjectTagging_unset_tags": GetObjectTagging_unset_tags,
"GetObjectTagging_success": GetObjectTagging_success,
"DeleteObjectTagging_non_existing_object": DeleteObjectTagging_non_existing_object,
"DeleteObjectTagging_success_status": DeleteObjectTagging_success_status,
"DeleteObjectTagging_success": DeleteObjectTagging_success,
"CreateMultipartUpload_non_existing_bucket": CreateMultipartUpload_non_existing_bucket,
"CreateMultipartUpload_success": CreateMultipartUpload_success,
"UploadPart_non_existing_bucket": UploadPart_non_existing_bucket,
"UploadPart_invalid_part_number": UploadPart_invalid_part_number,
"UploadPart_non_existing_key": UploadPart_non_existing_key,
"UploadPart_non_existing_mp_upload": UploadPart_non_existing_mp_upload,
"UploadPart_success": UploadPart_success,
"UploadPartCopy_non_existing_bucket": UploadPartCopy_non_existing_bucket,
"UploadPartCopy_incorrect_uploadId": UploadPartCopy_incorrect_uploadId,
"UploadPartCopy_incorrect_object_key": UploadPartCopy_incorrect_object_key,
"UploadPartCopy_invalid_part_number": UploadPartCopy_invalid_part_number,
"UploadPartCopy_invalid_copy_source": UploadPartCopy_invalid_copy_source,
"UploadPartCopy_non_existing_source_bucket": UploadPartCopy_non_existing_source_bucket,
"UploadPartCopy_non_existing_source_object_key": UploadPartCopy_non_existing_source_object_key,
"UploadPartCopy_success": UploadPartCopy_success,
"UploadPartCopy_by_range_invalid_range": UploadPartCopy_by_range_invalid_range,
"UploadPartCopy_greater_range_than_obj_size": UploadPartCopy_greater_range_than_obj_size,
"UploadPartCopy_by_range_success": UploadPartCopy_by_range_success,
"ListParts_incorrect_uploadId": ListParts_incorrect_uploadId,
"ListParts_incorrect_object_key": ListParts_incorrect_object_key,
"ListParts_success": ListParts_success,
"ListMultipartUploads_non_existing_bucket": ListMultipartUploads_non_existing_bucket,
"ListMultipartUploads_empty_result": ListMultipartUploads_empty_result,
"ListMultipartUploads_invalid_max_uploads": ListMultipartUploads_invalid_max_uploads,
"ListMultipartUploads_max_uploads": ListMultipartUploads_max_uploads,
"ListMultipartUploads_incorrect_next_key_marker": ListMultipartUploads_incorrect_next_key_marker,
"ListMultipartUploads_ignore_upload_id_marker": ListMultipartUploads_ignore_upload_id_marker,
"ListMultipartUploads_success": ListMultipartUploads_success,
"AbortMultipartUpload_non_existing_bucket": AbortMultipartUpload_non_existing_bucket,
"AbortMultipartUpload_incorrect_uploadId": AbortMultipartUpload_incorrect_uploadId,
"AbortMultipartUpload_incorrect_object_key": AbortMultipartUpload_incorrect_object_key,
"AbortMultipartUpload_success": AbortMultipartUpload_success,
"AbortMultipartUpload_success_status_code": AbortMultipartUpload_success_status_code,
"CompletedMultipartUpload_non_existing_bucket": CompletedMultipartUpload_non_existing_bucket,
"CompleteMultipartUpload_invalid_part_number": CompleteMultipartUpload_invalid_part_number,
"CompleteMultipartUpload_invalid_ETag": CompleteMultipartUpload_invalid_ETag,
"CompleteMultipartUpload_success": CompleteMultipartUpload_success,
"PutBucketAcl_non_existing_bucket": PutBucketAcl_non_existing_bucket,
"PutBucketAcl_invalid_acl_canned_and_acp": PutBucketAcl_invalid_acl_canned_and_acp,
"PutBucketAcl_invalid_acl_canned_and_grants": PutBucketAcl_invalid_acl_canned_and_grants,
"PutBucketAcl_invalid_acl_acp_and_grants": PutBucketAcl_invalid_acl_acp_and_grants,
"PutBucketAcl_invalid_owner": PutBucketAcl_invalid_owner,
"PutBucketAcl_success_access_denied": PutBucketAcl_success_access_denied,
"PutBucketAcl_success_grants": PutBucketAcl_success_grants,
"PutBucketAcl_success_canned_acl": PutBucketAcl_success_canned_acl,
"PutBucketAcl_success_acp": PutBucketAcl_success_acp,
"GetBucketAcl_non_existing_bucket": GetBucketAcl_non_existing_bucket,
"GetBucketAcl_access_denied": GetBucketAcl_access_denied,
"GetBucketAcl_success": GetBucketAcl_success,
"PutBucketPolicy_non_existing_bucket": PutBucketPolicy_non_existing_bucket,
"PutBucketPolicy_invalid_effect": PutBucketPolicy_invalid_effect,
"PutBucketPolicy_empty_actions_string": PutBucketPolicy_empty_actions_string,
"PutBucketPolicy_empty_actions_array": PutBucketPolicy_empty_actions_array,
"PutBucketPolicy_invalid_action": PutBucketPolicy_invalid_action,
"PutBucketPolicy_unsupported_action": PutBucketPolicy_unsupported_action,
"PutBucketPolicy_incorrect_action_wildcard_usage": PutBucketPolicy_incorrect_action_wildcard_usage,
"PutBucketPolicy_empty_principals_string": PutBucketPolicy_empty_principals_string,
"PutBucketPolicy_empty_principals_array": PutBucketPolicy_empty_principals_array,
"PutBucketPolicy_principals_incorrect_wildcard_usage": PutBucketPolicy_principals_incorrect_wildcard_usage,
"PutBucketPolicy_non_existing_principals": PutBucketPolicy_non_existing_principals,
"PutBucketPolicy_empty_resources_string": PutBucketPolicy_empty_resources_string,
"PutBucketPolicy_empty_resources_array": PutBucketPolicy_empty_resources_array,
"PutBucketPolicy_invalid_resource_prefix": PutBucketPolicy_invalid_resource_prefix,
"PutBucketPolicy_invalid_resource_with_starting_slash": PutBucketPolicy_invalid_resource_with_starting_slash,
"PutBucketPolicy_duplicate_resource": PutBucketPolicy_duplicate_resource,
"PutBucketPolicy_incorrect_bucket_name": PutBucketPolicy_incorrect_bucket_name,
"PutBucketPolicy_object_action_on_bucket_resource": PutBucketPolicy_object_action_on_bucket_resource,
"PutBucketPolicy_bucket_action_on_object_resource": PutBucketPolicy_bucket_action_on_object_resource,
"PutBucketPolicy_success": PutBucketPolicy_success,
"GetBucketPolicy_non_existing_bucket": GetBucketPolicy_non_existing_bucket,
"GetBucketPolicy_not_set": GetBucketPolicy_not_set,
"GetBucketPolicy_success": GetBucketPolicy_success,
"DeleteBucketPolicy_non_existing_bucket": DeleteBucketPolicy_non_existing_bucket,
"DeleteBucketPolicy_remove_before_setting": DeleteBucketPolicy_remove_before_setting,
"DeleteBucketPolicy_success": DeleteBucketPolicy_success,
"PutObjectLockConfiguration_non_existing_bucket": PutObjectLockConfiguration_non_existing_bucket,
"PutObjectLockConfiguration_empty_config": PutObjectLockConfiguration_empty_config,
"PutObjectLockConfiguration_both_years_and_days": PutObjectLockConfiguration_both_years_and_days,
"PutObjectLockConfiguration_success": PutObjectLockConfiguration_success,
"GetObjectLockConfiguration_non_existing_bucket": GetObjectLockConfiguration_non_existing_bucket,
"GetObjectLockConfiguration_unset_config": GetObjectLockConfiguration_unset_config,
"GetObjectLockConfiguration_success": GetObjectLockConfiguration_success,
"PutObjectRetention_non_existing_bucket": PutObjectRetention_non_existing_bucket,
"PutObjectRetention_non_existing_object": PutObjectRetention_non_existing_object,
"PutObjectRetention_unset_bucket_object_lock_config": PutObjectRetention_unset_bucket_object_lock_config,
"PutObjectRetention_disabled_bucket_object_lock_config": PutObjectRetention_disabled_bucket_object_lock_config,
"PutObjectRetention_expired_retain_until_date": PutObjectRetention_expired_retain_until_date,
"PutObjectRetention_success": PutObjectRetention_success,
"GetObjectRetention_non_existing_bucket": GetObjectRetention_non_existing_bucket,
"GetObjectRetention_non_existing_object": GetObjectRetention_non_existing_object,
"GetObjectRetention_unset_config": GetObjectRetention_unset_config,
"GetObjectRetention_success": GetObjectRetention_success,
"PutObjectLegalHold_non_existing_bucket": PutObjectLegalHold_non_existing_bucket,
"PutObjectLegalHold_non_existing_object": PutObjectLegalHold_non_existing_object,
"PutObjectLegalHold_invalid_body": PutObjectLegalHold_invalid_body,
"PutObjectLegalHold_unset_bucket_object_lock_config": PutObjectLegalHold_unset_bucket_object_lock_config,
"PutObjectLegalHold_disabled_bucket_object_lock_config": PutObjectLegalHold_disabled_bucket_object_lock_config,
"PutObjectLegalHold_success": PutObjectLegalHold_success,
"GetObjectLegalHold_non_existing_bucket": GetObjectLegalHold_non_existing_bucket,
"GetObjectLegalHold_non_existing_object": GetObjectLegalHold_non_existing_object,
"GetObjectLegalHold_unset_config": GetObjectLegalHold_unset_config,
"GetObjectLegalHold_success": GetObjectLegalHold_success,
"WORMProtection_bucket_object_lock_configuration_compliance_mode": WORMProtection_bucket_object_lock_configuration_compliance_mode,
"WORMProtection_bucket_object_lock_governance_root_overwrite": WORMProtection_bucket_object_lock_governance_root_overwrite,
"WORMProtection_object_lock_retention_compliance_root_access_denied": WORMProtection_object_lock_retention_compliance_root_access_denied,
"WORMProtection_object_lock_retention_governance_root_overwrite": WORMProtection_object_lock_retention_governance_root_overwrite,
"WORMProtection_object_lock_retention_governance_user_access_denied": WORMProtection_object_lock_retention_governance_user_access_denied,
"WORMProtection_object_lock_legal_hold_user_access_denied": WORMProtection_object_lock_legal_hold_user_access_denied,
"WORMProtection_object_lock_legal_hold_root_overwrite": WORMProtection_object_lock_legal_hold_root_overwrite,
"PutObject_overwrite_dir_obj": PutObject_overwrite_dir_obj,
"PutObject_overwrite_file_obj": PutObject_overwrite_file_obj,
"PutObject_dir_obj_with_data": PutObject_dir_obj_with_data,
"CreateMultipartUpload_dir_obj": CreateMultipartUpload_dir_obj,
"IAM_user_access_denied": IAM_user_access_denied,
"IAM_userplus_access_denied": IAM_userplus_access_denied,
"IAM_userplus_CreateBucket": IAM_userplus_CreateBucket,
"IAM_admin_ChangeBucketOwner": IAM_admin_ChangeBucketOwner,
}
}

File diff suppressed because it is too large


@@ -30,7 +30,7 @@ import (
var (
bcktCount = 0
succUsrCrt = "The user has been created successfully"
failUsrCrt = "failed to create a user: update iam data: account already exists"
failUsrCrt = "failed to create user: update iam data: account already exists"
adminAccessDeniedMsg = "access denied: only admin users have access to this resource"
succDeleteUserMsg = "The user has been deleted successfully"
)
@@ -522,6 +522,7 @@ func uploadParts(client *s3.Client, size, partCount int, bucket, key, uploadId s
parts = append(parts, types.Part{
ETag: out.ETag,
PartNumber: &pn,
Size: &partSize,
})
offset += partSize
}
@@ -546,7 +547,7 @@ func createUsers(s *S3Conf, users []user) error {
return err
}
if !strings.Contains(string(out), succUsrCrt) && !strings.Contains(string(out), failUsrCrt) {
return fmt.Errorf("failed to create a user account")
return fmt.Errorf("failed to create user account")
}
}
return nil
@@ -645,3 +646,62 @@ func getUserS3Client(usr user, cfg *S3Conf) *s3.Client {
return s3.NewFromConfig(config.Config())
}
// changeBucketObjectLockStatus enables object lock on the bucket when status is true, otherwise disables it
func changeBucketObjectLockStatus(client *s3.Client, bucket string, status bool) error {
cfg := types.ObjectLockConfiguration{}
if status {
cfg.ObjectLockEnabled = types.ObjectLockEnabledEnabled
}
ctx, cancel := context.WithTimeout(context.Background(), shortTimeout)
_, err := client.PutObjectLockConfiguration(ctx, &s3.PutObjectLockConfigurationInput{
Bucket: &bucket,
ObjectLockConfiguration: &cfg,
})
cancel()
if err != nil {
return err
}
return nil
}
func checkWORMProtection(client *s3.Client, bucket, object string) error {
ctx, cancel := context.WithTimeout(context.Background(), shortTimeout)
_, err := client.PutObject(ctx, &s3.PutObjectInput{
Bucket: &bucket,
Key: &object,
})
cancel()
if err := checkApiErr(err, s3err.GetAPIError(s3err.ErrObjectLocked)); err != nil {
return err
}
ctx, cancel = context.WithTimeout(context.Background(), shortTimeout)
_, err = client.DeleteObject(ctx, &s3.DeleteObjectInput{
Bucket: &bucket,
Key: &object,
})
cancel()
if err := checkApiErr(err, s3err.GetAPIError(s3err.ErrObjectLocked)); err != nil {
return err
}
ctx, cancel = context.WithTimeout(context.Background(), shortTimeout)
_, err = client.DeleteObjects(ctx, &s3.DeleteObjectsInput{
Bucket: &bucket,
Delete: &types.Delete{
Objects: []types.ObjectIdentifier{
{
Key: &object,
},
},
},
})
cancel()
if err := checkApiErr(err, s3err.GetAPIError(s3err.ErrObjectLocked)); err != nil {
return err
}
return nil
}
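The same object-lock toggle can be exercised from the shell side of the suite with the standard AWS CLI call. A hedged sketch (bucket name illustrative; depending on the backend, the bucket may need object lock enabled at creation):
aws --no-verify-ssl s3api put-object-lock-configuration \
  --bucket "$BUCKET_ONE_NAME" \
  --object-lock-configuration '{"ObjectLockEnabled": "Enabled"}'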


@@ -11,4 +11,7 @@ log() {
return 0
fi
echo "$2"
if [[ -n "$TEST_LOG_FILE" ]]; then
echo "$2" >> "$TEST_LOG_FILE"
fi
}
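A usage sketch (the level check at the top of the function sits outside this hunk; TEST_LOG_FILE is optional, and the path here is illustrative):
export LOG_LEVEL=5
export TEST_LOG_FILE=/tmp/versitygw-tests.log  # illustrative path
log 4 "setting up bucket"  # echoed, and appended to the log file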


@@ -6,7 +6,6 @@ show_help() {
echo " -h, --help Display this help message and exit"
echo " -s, --static Don't remove buckets between tests"
echo " aws Run tests with aws cli"
echo " aws-posix Run posix tests with aws cli"
echo " s3cmd Run tests with s3cmd utility"
echo " mc Run tests with mc utility"
}
@@ -20,7 +19,7 @@ handle_param() {
-s|--static)
export RECREATE_BUCKETS=false
;;
aws|aws-posix|s3cmd|mc)
s3|s3api|aws|s3cmd|mc|user)
set_command_type "$1"
;;
*) # Handle unrecognized options or positional arguments
@@ -39,7 +38,14 @@ set_command_type() {
export command_type
}
export RECREATE_BUCKETS=true
if [[ -z $RECREATE_BUCKETS ]]; then
export RECREATE_BUCKETS=true
elif [[ $RECREATE_BUCKETS != true ]] && [[ $RECREATE_BUCKETS != false ]]; then
echo "Invalid RECREATE_BUCKETS value: $RECREATE_BUCKETS"
exit 1
else
export RECREATE_BUCKETS
fi
while [[ "$#" -gt 0 ]]; do
handle_param "$1"
shift # past argument or value
@@ -58,16 +64,26 @@ if [[ $RECREATE_BUCKETS == false ]]; then
fi
case $command_type in
aws)
s3api|aws)
echo "Running aws tests ..."
"$HOME"/bin/bats ./tests/test_aws.sh || exit_code=$?
if [[ $exit_code -eq 0 ]]; then
"$HOME"/bin/bats ./tests/test_user_aws.sh || exit_code=$?
fi
;;
aws-posix)
"$HOME"/bin/bats ./tests/test_aws_posix.sh || exit_code=$?
s3)
echo "Running s3 tests ..."
"$HOME"/bin/bats ./tests/test_s3.sh || exit_code=$?
;;
s3cmd)
echo "Running s3cmd tests ..."
"$HOME"/bin/bats ./tests/test_s3cmd.sh || exit_code=$?
if [[ $exit_code -eq 0 ]]; then
"$HOME"/bin/bats ./tests/test_user_s3cmd.sh || exit_code=$?
fi
;;
mc)
echo "Running mc tests ..."
"$HOME"/bin/bats ./tests/test_mc.sh || exit_code=$?
;;
esac
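Typical invocations, given the parameter handling above (-s keeps buckets between tests by setting RECREATE_BUCKETS=false):
./tests/run.sh s3api     # aws s3api tests, buckets recreated per test
./tests/run.sh -s s3cmd  # s3cmd tests against pre-created buckets
./tests/run.sh mc        # mc tests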


@@ -4,10 +4,15 @@ if [[ -z "$VERSITYGW_TEST_ENV" ]]; then
echo "Error: VERSITYGW_TEST_ENV parameter must be set"
exit 1
fi
# shellcheck source=./.env.default
source "$VERSITYGW_TEST_ENV"
export RECREATE_BUCKETS
if ! ./tests/run.sh aws; then
exit 1
fi
if ! ./tests/run.sh aws-posix; then
if ! ./tests/run.sh s3; then
exit 1
fi
if ! ./tests/run.sh s3cmd; then
@@ -16,16 +21,7 @@ fi
if ! ./tests/run.sh mc; then
exit 1
fi
if ! ./tests/run.sh -s aws; then
exit 1
fi
if ! ./tests/run.sh -s aws-posix; then
exit 1
fi
if ! ./tests/run.sh -s s3cmd; then
exit 1
fi
if ! ./tests/run.sh -s mc; then
if ! ./tests/run.sh user; then
exit 1
fi
exit 0


@@ -0,0 +1,9 @@
# Setup endpoint
host_base = 127.0.0.1:7070
host_bucket = 127.0.0.1:7070
bucket_location = us-east-1
use_https = True
signurl_use_https = True
# Enable S3 v4 signature APIs
signature_v2 = False
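The suite points s3cmd at this file via S3CMD_OPTS (see the setup below); outside the harness, the equivalent manual call would be (config path illustrative):
s3cmd -c ./tests/s3cmd.cfg --no-check-certificate ls s3://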


@@ -17,6 +17,12 @@ setup() {
return 1
fi
log 4 "Running test $BATS_TEST_NAME"
if [[ $LOG_LEVEL -ge 5 ]]; then
start_time=$(date +%s)
export start_time
fi
if [[ $RUN_S3CMD == true ]]; then
S3CMD_OPTS=()
S3CMD_OPTS+=(-c "$S3CMD_CONFIG")
@@ -59,6 +65,9 @@ check_params() {
else
export LOG_LEVEL
fi
if [[ -n "$TEST_LOG_FILE" ]]; then
export TEST_LOG_FILE
fi
return 0
}
@@ -72,4 +81,8 @@ fail() {
# bats teardown function
teardown() {
stop_versity
if [[ $LOG_LEVEL -ge 5 ]]; then
end_time=$(date +%s)
log 4 "Total test time: $((end_time - start_time))"
fi
}


@@ -2,8 +2,81 @@
source ./tests/setup.sh
source ./tests/util.sh
source ./tests/util_aws.sh
source ./tests/util_bucket_create.sh
source ./tests/util_file.sh
source ./tests/test_common.sh
source ./tests/commands/copy_object.sh
source ./tests/commands/delete_bucket_policy.sh
source ./tests/commands/delete_object_tagging.sh
source ./tests/commands/get_bucket_policy.sh
source ./tests/commands/get_object.sh
source ./tests/commands/put_bucket_policy.sh
source ./tests/commands/put_object.sh
@test "test_abort_multipart_upload" {
local bucket_file="bucket-file"
bucket_file_data="test file\n"
create_test_files "$bucket_file" || local created=$?
printf "%s" "$bucket_file_data" > "$test_file_folder"/$bucket_file
[[ $created -eq 0 ]] || fail "Error creating test files"
setup_bucket "aws" "$BUCKET_ONE_NAME" || local result=$?
[[ $result -eq 0 ]] || fail "Failed to create bucket '$BUCKET_ONE_NAME'"
run_then_abort_multipart_upload "$BUCKET_ONE_NAME" "$bucket_file" "$test_file_folder"/"$bucket_file" 4 || abort_result=$?
[[ $abort_result -eq 0 ]] || fail "Abort failed"
object_exists "aws" "$BUCKET_ONE_NAME" "$bucket_file" || exists=$?
[[ $exists -eq 1 ]] || fail "Upload file exists after abort"
delete_bucket_or_contents "aws" "$BUCKET_ONE_NAME"
delete_test_files $bucket_file
}
@test "test_complete_multipart_upload" {
local bucket_file="bucket-file"
bucket_file_data="test file\n"
create_test_files "$bucket_file" || local created=$?
printf "%s" "$bucket_file_data" > "$test_file_folder"/$bucket_file
[[ $created -eq 0 ]] || fail "Error creating test files"
setup_bucket "aws" "$BUCKET_ONE_NAME" || local result=$?
[[ $result -eq 0 ]] || fail "Failed to create bucket '$BUCKET_ONE_NAME'"
multipart_upload "$BUCKET_ONE_NAME" "$bucket_file" "$test_file_folder"/"$bucket_file" 4 || upload_result=$?
[[ $upload_result -eq 0 ]] || fail "Error performing multipart upload"
copy_file "s3://$BUCKET_ONE_NAME/$bucket_file" "$test_file_folder/$bucket_file-copy"
compare_files "$test_file_folder/$bucket_file-copy" "$test_file_folder"/$bucket_file || compare_result=$?
[[ $compare_result -eq 0 ]] || fail "Files do not match"
delete_bucket_or_contents "aws" "$BUCKET_ONE_NAME"
delete_test_files $bucket_file
}
@test "test_put_object" {
bucket_file="bucket_file"
create_test_files "$bucket_file" || local created=$?
[[ $created -eq 0 ]] || fail "Error creating test files"
setup_bucket "s3api" "$BUCKET_ONE_NAME" || local setup_result=$?
[[ $setup_result -eq 0 ]] || fail "error setting up bucket"
setup_bucket "s3api" "$BUCKET_TWO_NAME" || local setup_result_two=$?
[[ $setup_result_two -eq 0 ]] || fail "Bucket two setup error"
put_object "s3api" "$test_file_folder/$bucket_file" "$BUCKET_ONE_NAME" "$bucket_file" || local copy_result=$?
[[ $copy_result -eq 0 ]] || fail "Failed to add object to bucket"
error=$(aws --no-verify-ssl s3api copy-object --copy-source "$BUCKET_ONE_NAME/$bucket_file" --key "$bucket_file" --bucket "$BUCKET_TWO_NAME" 2>&1) || local copy_result=$?
[[ $copy_result -eq 0 ]] || fail "Error copying file: $error"
copy_file "s3://$BUCKET_TWO_NAME/$bucket_file" "$test_file_folder/${bucket_file}_copy" || local copy_result=$?
[[ $copy_result -eq 0 ]] || fail "Failed to add object to bucket"
compare_files "$test_file_folder/$bucket_file" "$test_file_folder/${bucket_file}_copy" || local compare_result=$?
[[ $compare_result -eq 0 ]] || fail "files don't match"
delete_bucket_or_contents "aws" "$BUCKET_ONE_NAME"
delete_bucket_or_contents "aws" "$BUCKET_TWO_NAME"
delete_test_files "$bucket_file"
}
# test creation and deletion of a bucket on versitygw
@test "test_create_delete_bucket_aws" {
@@ -19,22 +92,20 @@ source ./tests/test_common.sh
[[ $create_result -eq 0 ]] || fail "Invalid name test failed"
[[ "$bucket_create_error" == *"Invalid bucket name "* ]] || fail "unexpected error: $bucket_create_error"
delete_bucket_or_contents "aws" "$BUCKET_ONE_NAME"
}
# test adding and removing an object on versitygw
@test "test_put_object-with-data" {
@test "test_put_object_with_data" {
test_common_put_object_with_data "aws"
}
@test "test_put_object-no-data" {
@test "test_put_object_no_data" {
test_common_put_object_no_data "aws"
}
# test listing buckets on versitygw
@test "test_list_buckets" {
test_common_list_buckets "aws"
test_common_list_buckets "s3api"
}
# test listing a bucket's objects on versitygw
@@ -78,10 +149,8 @@ source ./tests/test_common.sh
# delete_bucket_or_contents "$BUCKET_ONE_NAME"
#}
# test ability to delete multiple objects from bucket
@test "test_delete_objects" {
local object_one="test-file-one"
local object_two="test-file-two"
@@ -90,9 +159,9 @@ source ./tests/test_common.sh
setup_bucket "aws" "$BUCKET_ONE_NAME" || local result_one=$?
[[ $result_one -eq 0 ]] || fail "Error creating bucket"
put_object "aws" "$test_file_folder"/"$object_one" "$BUCKET_ONE_NAME"/"$object_one" || local result_two=$?
put_object "s3api" "$test_file_folder"/"$object_one" "$BUCKET_ONE_NAME" "$object_one" || local result_two=$?
[[ $result_two -eq 0 ]] || fail "Error adding object one"
put_object "aws" "$test_file_folder"/"$object_two" "$BUCKET_ONE_NAME"/"$object_two" || local result_three=$?
put_object "s3api" "$test_file_folder"/"$object_two" "$BUCKET_ONE_NAME" "$object_two" || local result_three=$?
[[ $result_three -eq 0 ]] || fail "Error adding object two"
error=$(aws --no-verify-ssl s3api delete-objects --bucket "$BUCKET_ONE_NAME" --delete '{
@@ -103,9 +172,9 @@ source ./tests/test_common.sh
}') || local result=$?
[[ $result -eq 0 ]] || fail "Error deleting objects: $error"
object_exists "aws" "$BUCKET_ONE_NAME"/"$object_one" || local exists_one=$?
object_exists "aws" "$BUCKET_ONE_NAME" "$object_one" || local exists_one=$?
[[ $exists_one -eq 1 ]] || fail "Object one not deleted"
object_exists "aws" "$BUCKET_ONE_NAME"/"$object_two" || local exists_two=$?
object_exists "aws" "$BUCKET_ONE_NAME" "$object_two" || local exists_two=$?
[[ $exists_two -eq 1 ]] || fail "Object two not deleted"
delete_bucket_or_contents "aws" "$BUCKET_ONE_NAME"
@@ -113,13 +182,12 @@ source ./tests/test_common.sh
}
# test ability to set, get, and delete bucket tags
@test "test-set-get-bucket-tags" {
test_common_set_get_bucket_tags "aws"
@test "test-set-get-delete-bucket-tags" {
test_common_set_get_delete_bucket_tags "aws"
}
# test v1 s3api list objects command
@test "test-s3api-list-objects-v1" {
local object_one="test-file-one"
local object_two="test-file-two"
local object_two_data="test data\n"
@@ -129,20 +197,22 @@ source ./tests/test_common.sh
printf "%s" "$object_two_data" > "$test_file_folder"/"$object_two"
setup_bucket "aws" "$BUCKET_ONE_NAME" || local result=$?
[[ $result -eq 0 ]] || fail "Failed to create bucket '$BUCKET_ONE_NAME'"
put_object "aws" "$test_file_folder"/"$object_one" "$BUCKET_ONE_NAME"/"$object_one" || local put_object_one=$?
[[ $put_object_one -eq 0 ]] || fail "Failed to add object $object_one"
put_object "aws" "$test_file_folder"/"$object_two" "$BUCKET_ONE_NAME"/"$object_two" || local put_object_two=$?
[[ $put_object_two -eq 0 ]] || fail "Failed to add object $object_two"
put_object "s3api" "$test_file_folder"/"$object_one" "$BUCKET_ONE_NAME" "$object_one" || local copy_result_one=$?
[[ $copy_result_one -eq 0 ]] || fail "Failed to add object $object_one"
put_object "s3api" "$test_file_folder"/"$object_two" "$BUCKET_ONE_NAME" "$object_two" || local copy_result_two=$?
[[ $copy_result_two -eq 0 ]] || fail "Failed to add object $object_two"
sleep 1
list_objects_s3api_v1 "$BUCKET_ONE_NAME"
key_one=$(echo "$objects" | jq '.Contents[0].Key')
[[ $key_one == '"'$object_one'"' ]] || fail "Object one mismatch"
size_one=$(echo "$objects" | jq '.Contents[0].Size')
[[ $size_one -eq 0 ]] || fail "Object one size mismatch"
key_two=$(echo "$objects" | jq '.Contents[1].Key')
[[ $key_two == '"'$object_two'"' ]] || fail "Object two mismatch"
key_one=$(echo "$objects" | jq -r '.Contents[0].Key')
[[ $key_one == "$object_one" ]] || fail "Object one mismatch ($key_one, $object_one)"
size_one=$(echo "$objects" | jq -r '.Contents[0].Size')
[[ $size_one -eq 0 ]] || fail "Object one size mismatch ($size_one, 0)"
key_two=$(echo "$objects" | jq -r '.Contents[1].Key')
[[ $key_two == "$object_two" ]] || fail "Object two mismatch ($key_two, $object_two)"
size_two=$(echo "$objects" | jq '.Contents[1].Size')
[[ $size_two -eq ${#object_two_data} ]] || fail "Object two size mismatch"
[[ $size_two -eq ${#object_two_data} ]] || fail "Object two size mismatch ($size_two, ${#object_two_data})"
delete_bucket_or_contents "aws" "$BUCKET_ONE_NAME"
delete_test_files "$object_one" "$object_two"
@@ -150,7 +220,6 @@ source ./tests/test_common.sh
# test v2 s3api list objects command
@test "test-s3api-list-objects-v2" {
local object_one="test-file-one"
local object_two="test-file-two"
local object_two_data="test data\n"
@@ -160,20 +229,20 @@ source ./tests/test_common.sh
printf "%s" "$object_two_data" > "$test_file_folder"/"$object_two"
setup_bucket "aws" "$BUCKET_ONE_NAME" || local result=$?
[[ $result -eq 0 ]] || fail "Failed to create bucket '$BUCKET_ONE_NAME'"
put_object "aws" "$test_file_folder"/"$object_one" "$BUCKET_ONE_NAME"/"$object_one" || local put_object_one=$?
[[ $put_object_one -eq 0 ]] || fail "Failed to add object $object_one"
put_object "aws" "$test_file_folder"/"$object_two" "$BUCKET_ONE_NAME"/"$object_two" || local put_object_two=$?
[[ $put_object_two -eq 0 ]] || fail "Failed to add object $object_two"
put_object "s3api" "$test_file_folder"/"$object_one" "$BUCKET_ONE_NAME" "$object_one" || local copy_object_one=$?
[[ $copy_object_one -eq 0 ]] || fail "Failed to add object $object_one"
put_object "s3api" "$test_file_folder"/"$object_two" "$BUCKET_ONE_NAME" "$object_two" || local copy_object_two=$?
[[ $copy_object_two -eq 0 ]] || fail "Failed to add object $object_two"
list_objects_s3api_v2 "$BUCKET_ONE_NAME"
key_one=$(echo "$objects" | jq '.Contents[0].Key')
[[ $key_one == '"'$object_one'"' ]] || fail "Object one mismatch"
size_one=$(echo "$objects" | jq '.Contents[0].Size')
[[ $size_one -eq 0 ]] || fail "Object one size mismatch"
key_two=$(echo "$objects" | jq '.Contents[1].Key')
[[ $key_two == '"'$object_two'"' ]] || fail "Object two mismatch"
size_two=$(echo "$objects" | jq '.Contents[1].Size')
[[ $size_two -eq ${#object_two_data} ]] || fail "Object two size mismatch"
key_one=$(echo "$objects" | jq -r '.Contents[0].Key')
[[ $key_one == "$object_one" ]] || fail "Object one mismatch ($key_one, $object_one)"
size_one=$(echo "$objects" | jq -r '.Contents[0].Size')
[[ $size_one -eq 0 ]] || fail "Object one size mismatch ($size_one, 0)"
key_two=$(echo "$objects" | jq -r '.Contents[1].Key')
[[ $key_two == "$object_two" ]] || fail "Object two mismatch ($key_two, $object_two)"
size_two=$(echo "$objects" | jq -r '.Contents[1].Size')
[[ $size_two -eq ${#object_two_data} ]] || fail "Object two size mismatch ($size_two, ${#object_two_data})"
delete_bucket_or_contents "aws" "$BUCKET_ONE_NAME"
delete_test_files "$object_one" "$object_two"
@@ -184,54 +253,8 @@ source ./tests/test_common.sh
test_common_set_get_object_tags "aws"
}
# test multi-part upload
@test "test-multi-part-upload" {
local bucket_file="bucket-file"
bucket_file_data="test file\n"
create_test_files "$bucket_file" || local created=$?
printf "%s" "$bucket_file_data" > "$test_file_folder"/$bucket_file
[[ $created -eq 0 ]] || fail "Error creating test files"
setup_bucket "aws" "$BUCKET_ONE_NAME" || local result=$?
[[ $result -eq 0 ]] || fail "Failed to create bucket '$BUCKET_ONE_NAME'"
multipart_upload "$BUCKET_ONE_NAME" "$bucket_file" "$test_file_folder"/"$bucket_file" 4 || upload_result=$?
[[ $upload_result -eq 0 ]] || fail "Error performing multipart upload"
copy_file "s3://$BUCKET_ONE_NAME/$bucket_file" "$test_file_folder/$bucket_file-copy"
compare_files "$test_file_folder/$bucket_file-copy" "$test_file_folder"/$bucket_file || compare_result=$?
[[ $compare_result -eq 0 ]] || fail "Files do not match"
delete_bucket_or_contents "aws" "$BUCKET_ONE_NAME"
delete_test_files $bucket_file
}
# test multi-part upload abort
@test "test-multi-part-upload-abort" {
local bucket_file="bucket-file"
bucket_file_data="test file\n"
create_test_files "$bucket_file" || local created=$?
printf "%s" "$bucket_file_data" > "$test_file_folder"/$bucket_file
[[ $created -eq 0 ]] || fail "Error creating test files"
setup_bucket "aws" "$BUCKET_ONE_NAME" || local result=$?
[[ $result -eq 0 ]] || fail "Failed to create bucket '$BUCKET_ONE_NAME'"
abort_multipart_upload "$BUCKET_ONE_NAME" "$bucket_file" "$test_file_folder"/"$bucket_file" 4 || abort_result=$?
[[ $abort_result -eq 0 ]] || fail "Abort failed"
object_exists "aws" "$BUCKET_ONE_NAME/$bucket_file" || exists=$?
[[ $exists -eq 1 ]] || fail "Upload file exists after abort"
delete_bucket_or_contents "aws" "$BUCKET_ONE_NAME"
delete_test_files $bucket_file
}
# test multi-part upload list parts command
@test "test-multipart-upload-list-parts" {
local bucket_file="bucket-file"
local bucket_file_data="test file\n"
@@ -245,7 +268,7 @@ source ./tests/test_common.sh
[[ $list_result -eq 0 ]] || fail "Listing multipart upload parts failed"
declare -a parts_map
for ((i=0;i<$4;i++)) {
for i in {0..3}; do
local part_number
local etag
part_number=$(echo "$parts" | jq ".[$i].PartNumber")
@@ -259,9 +282,10 @@ source ./tests/test_common.sh
return 1
fi
parts_map[$part_number]=$etag
}
done
[[ ${#parts_map[@]} -ne 0 ]] || fail "error loading multipart upload parts to check"
for ((i=0;i<$4;i++)) {
for i in {0..3}; do
local part_number
local etag
part_number=$(echo "$listed_parts" | jq ".Parts[$i].PartNumber")
@@ -270,19 +294,23 @@ source ./tests/test_common.sh
echo "error: etags don't match (part number: $part_number, etags ${parts_map[$part_number]},$etag)"
return 1
fi
}
done
run_abort_command "$BUCKET_ONE_NAME" "$bucket_file" $upload_id
run_then_abort_multipart_upload "$BUCKET_ONE_NAME" "$bucket_file" "$test_file_folder/$bucket_file" 4
delete_bucket_or_contents "aws" "$BUCKET_ONE_NAME"
delete_test_files $bucket_file
}
# test listing of active uploads
@test "test-multipart-upload-list-uploads" {
local bucket_file_one="bucket-file-one"
local bucket_file_two="bucket-file-two"
if [[ $RECREATE_BUCKETS == false ]]; then
abort_all_multipart_uploads "$BUCKET_ONE_NAME" || local abort_result=$?
[[ $abort_result -eq 0 ]] || fail "error aborting all uploads"
fi
create_test_files "$bucket_file_one" "$bucket_file_two" || local created=$?
[[ $created -eq 0 ]] || fail "Error creating test files"
setup_bucket "aws" "$BUCKET_ONE_NAME" || local result=$?
@@ -293,6 +321,7 @@ source ./tests/test_common.sh
local key_one
local key_two
log 5 "$uploads"
key_one=$(echo "$uploads" | jq '.Uploads[0].Key')
key_two=$(echo "$uploads" | jq '.Uploads[1].Key')
key_one=${key_one//\"/}
@@ -321,7 +350,7 @@ source ./tests/test_common.sh
multipart_upload_from_bucket "$BUCKET_ONE_NAME" "$bucket_file" "$test_file_folder"/"$bucket_file" 4 || upload_result=$?
[[ $upload_result -eq 0 ]] || fail "Error performing multipart upload"
copy_file "s3://$BUCKET_ONE_NAME/$bucket_file-copy" "$test_file_folder/$bucket_file-copy"
get_object "s3api" "$BUCKET_ONE_NAME" "$bucket_file-copy" "$test_file_folder/$bucket_file-copy"
compare_files "$test_file_folder"/$bucket_file-copy "$test_file_folder"/$bucket_file || compare_result=$?
[[ $compare_result -eq 0 ]] || fail "Data doesn't match"
@@ -344,8 +373,8 @@ source ./tests/test_common.sh
setup_bucket "aws" "$BUCKET_ONE_NAME" || local setup_result=$?
[[ $setup_result -eq 0 ]] || fail "error setting up bucket"
put_object "aws" "$test_file_folder"/"$folder_name"/"$object_name" "$BUCKET_ONE_NAME"/"$folder_name"/"$object_name" || local put_object=$?
[[ $put_object -eq 0 ]] || fail "Failed to add object to bucket"
put_object "aws" "$test_file_folder/$folder_name/$object_name" "$BUCKET_ONE_NAME" "$folder_name/$object_name" || local copy_result=$?
[[ $copy_result -eq 0 ]] || fail "Failed to add object to bucket"
list_objects_s3api_v1 "$BUCKET_ONE_NAME" "/"
prefix=$(echo "${objects[@]}" | jq ".CommonPrefixes[0].Prefix")
@@ -360,9 +389,9 @@ source ./tests/test_common.sh
}
# ensure that lists of more than 1000 files (pagination) are returned properly
@test "test_list_objects_file_count" {
test_common_list_objects_file_count "aws"
}
#@test "test_list_objects_file_count" {
# test_common_list_objects_file_count "aws"
#}
#@test "test_filename_length" {
# file_name=$(printf "%0.sa" $(seq 1 1025))
@@ -394,26 +423,40 @@ source ./tests/test_common.sh
delete_bucket_or_contents "aws" "$BUCKET_ONE_NAME"
}
@test "test_copy_object_aws" {
@test "test_add_object_metadata" {
bucket_file="bucket_file"
object_one="object-one"
test_key="x-test-data"
test_value="test-value"
create_test_files "$bucket_file" || local created=$?
create_test_files "$object_one" || local created=$?
[[ $created -eq 0 ]] || fail "Error creating test files"
setup_bucket "aws" "$BUCKET_ONE_NAME" || local setup_result=$?
[[ $setup_result -eq 0 ]] || fail "error setting up bucket"
setup_bucket "aws" "$BUCKET_TWO_NAME" || local setup_result_two=$?
[[ $setup_result_two -eq 0 ]] || fail "Bucket two setup error"
put_object "aws" "$test_file_folder"/"$bucket_file" "$BUCKET_ONE_NAME"/"$bucket_file" || local put_object=$?
[[ $put_object -eq 0 ]] || fail "Failed to add object to bucket"
error=$(aws --no-verify-ssl s3api copy-object --copy-source "$BUCKET_ONE_NAME"/"$bucket_file" --key "$bucket_file" --bucket "$BUCKET_TWO_NAME" 2>&1) || local copy_result=$?
[[ $copy_result -eq 0 ]] || fail "Error copying file: $error"
copy_file "s3://$BUCKET_TWO_NAME"/"$bucket_file" "$test_file_folder/${bucket_file}_copy" || local put_object=$?
[[ $put_object -eq 0 ]] || fail "Failed to add object to bucket"
compare_files "$test_file_folder/$bucket_file" "$test_file_folder/${bucket_file}_copy" || local compare_result=$?
[[ $compare_result -eq 0 ]] || fail "files don't match"
delete_bucket_or_contents "aws" "$BUCKET_ONE_NAME"
delete_bucket_or_contents "aws" "$BUCKET_TWO_NAME"
delete_test_files "$bucket_file"
object="$test_file_folder"/"$object_one"
put_object_with_metadata "aws" "$object" "$BUCKET_ONE_NAME" "$object_one" "$test_key" "$test_value" || copy_result=$?
[[ $copy_result -eq 0 ]] || fail "Failed to add object to bucket"
object_exists "aws" "$BUCKET_ONE_NAME" "$object_one" || local exists_result_one=$?
[[ $exists_result_one -eq 0 ]] || fail "Object not added to bucket"
get_object_metadata "aws" "$BUCKET_ONE_NAME" "$object_one" || get_result=$?
[[ $get_result -eq 0 ]] || fail "error getting object metadata"
key=$(echo "$metadata" | jq 'keys[]')
value=$(echo "$metadata" | jq '.[]')
[[ $key == "\"$test_key\"" ]] || fail "key mismatch (expected \"$test_key\", actual $key)"
[[ $value == "\"$test_value\"" ]] || fail "value mismatch (expected \"$test_value\", actual $value)"
}
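# For reference, a hedged sketch of the raw s3api calls the metadata helpers
# above are assumed to wrap (hypothetical bucket and key names):
demo_metadata_calls_sketch() {
  aws --no-verify-ssl s3api put-object --body "$test_file_folder/object-one" --bucket "my-bucket" --key "object-one" --metadata '{"x-test-data":"test-value"}'
  aws --no-verify-ssl s3api head-object --bucket "my-bucket" --key "object-one" | jq '.Metadata'   # expected: {"x-test-data": "test-value"}
}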
@test "test_delete_object_tagging" {
test_common_delete_object_tagging "aws"
}
@test "test_get_bucket_location" {
test_common_get_bucket_location "aws"
}
@test "test_get_put_delete_bucket_policy" {
test_common_get_put_delete_bucket_policy "aws"
}


@@ -1,93 +0,0 @@
#!/usr/bin/env bats
source ./tests/setup.sh
source ./tests/util.sh
source ./tests/util_file.sh
source ./tests/util_posix.sh
# test that changes to local folders and files are reflected on S3
@test "test_local_creation_deletion" {
if [[ $RECREATE_BUCKETS != "true" ]]; then
return
fi
local object_name="test-object"
if [[ -e "$LOCAL_FOLDER"/"$BUCKET_ONE_NAME" ]]; then
rm -rf "${LOCAL_FOLDER:?}"/"${BUCKET_ONE_NAME:?}"
fi
mkdir "$LOCAL_FOLDER"/"$BUCKET_ONE_NAME"
local object="$BUCKET_ONE_NAME"/"$object_name"
touch "$LOCAL_FOLDER"/"$object"
bucket_exists_remote_and_local "$BUCKET_ONE_NAME" || local bucket_exists_two=$?
[[ $bucket_exists_two -eq 0 ]] || fail "Failed bucket existence check"
object_exists_remote_and_local "$object" || local object_exists_two=$?
[[ $object_exists_two -eq 0 ]] || fail "Failed object existence check"
rm "$LOCAL_FOLDER"/"$object"
sleep 1
object_not_exists_remote_and_local "$object" || local object_deleted=$?
[[ $object_deleted -eq 0 ]] || fail "Failed object deletion check"
rmdir "$LOCAL_FOLDER"/"$BUCKET_ONE_NAME"
sleep 1
bucket_not_exists_remote_and_local "$BUCKET_ONE_NAME" || local bucket_deleted=$?
[[ $bucket_deleted -eq 0 ]] || fail "Failed bucket deletion check"
}
# test head-object command
@test "test_head_object" {
local bucket_name=$BUCKET_ONE_NAME
local object_name="object-one"
create_test_files $object_name
if [ -e "$LOCAL_FOLDER"/"$bucket_name"/$object_name ]; then
chmod 755 "$LOCAL_FOLDER"/"$bucket_name"/$object_name
fi
setup_bucket "aws" "$bucket_name" || local created=$?
[[ $created -eq 0 ]] || fail "Error creating bucket"
put_object "aws" "$test_file_folder"/"$object_name" "$bucket_name"/"$object_name" || local result="$?"
[[ $result -eq 0 ]] || fail "Error adding object one"
chmod 000 "$LOCAL_FOLDER"/"$bucket_name"/$object_name
sleep 1
object_is_accessible "$bucket_name" $object_name || local accessible=$?
[[ $accessible -eq 1 ]] || fail "Object should be inaccessible"
chmod 755 "$LOCAL_FOLDER"/"$bucket_name"/$object_name
sleep 1
object_is_accessible "$bucket_name" $object_name || local accessible_two=$?
[[ $accessible_two -eq 0 ]] || fail "Object should be accessible"
delete_object "aws" "$bucket_name"/$object_name
delete_bucket_or_contents "aws" "$bucket_name"
delete_test_files $object_name
}
# check info, accessibility of bucket
@test "test_get_bucket_info" {
if [ -e "$LOCAL_FOLDER"/"$BUCKET_ONE_NAME" ]; then
chmod 755 "$LOCAL_FOLDER"/"$BUCKET_ONE_NAME"
sleep 1
else
setup_bucket "aws" "$BUCKET_ONE_NAME" || local created=$?
[[ $created -eq 0 ]] || fail "Error creating bucket"
fi
chmod 000 "$LOCAL_FOLDER"/"$BUCKET_ONE_NAME"
sleep 1
bucket_is_accessible "$BUCKET_ONE_NAME" || local accessible=$?
[[ $accessible -eq 1 ]] || fail "Bucket should be inaccessible"
chmod 755 "$LOCAL_FOLDER"/"$BUCKET_ONE_NAME"
sleep 1
bucket_is_accessible "$BUCKET_ONE_NAME" || local accessible_two=$?
[[ $accessible_two -eq 0 ]] || fail "Bucket should be accessible"
delete_bucket_or_contents "aws" "$BUCKET_ONE_NAME"
}


@@ -1,7 +1,35 @@
#!/usr/bin/env bats
source ./tests/setup.sh
source ./tests/util.sh
source ./tests/util_file.sh
source ./tests/util_policy.sh
source ./tests/commands/copy_object.sh
source ./tests/commands/delete_object_tagging.sh
source ./tests/commands/get_bucket_location.sh
source ./tests/commands/get_bucket_tagging.sh
source ./tests/commands/list_buckets.sh
source ./tests/commands/put_object.sh
test_common_multipart_upload() {
if [[ $# -ne 1 ]]; then
echo "multipart upload command missing command type"
return 1
fi
bucket_file="largefile"
create_large_file "$bucket_file" || local created=$?
[[ $created -eq 0 ]] || fail "Error creating test file for multipart upload"
setup_bucket "$1" "$BUCKET_ONE_NAME" || local result=$?
[[ $result -eq 0 ]] || fail "Failed to create bucket '$BUCKET_ONE_NAME'"
put_object "$1" "$test_file_folder/$bucket_file" "$BUCKET_ONE_NAME" "$bucket_file" || local put_result=$?
[[ $put_result -eq 0 ]] || fail "failed to copy file"
delete_bucket_or_contents "$1" "$BUCKET_ONE_NAME"
delete_test_files $bucket_file
}
# common test for creating, deleting buckets
# param: "aws" or "s3cmd"
@@ -34,6 +62,7 @@ test_common_put_object_with_data() {
create_test_files "$object_name" || local create_result=$?
[[ $create_result -eq 0 ]] || fail "Error creating test file"
echo "test data" > "$test_file_folder"/"$object_name"
test_common_put_object "$1" "$object_name"
}
test_common_put_object_no_data() {
@@ -55,15 +84,14 @@ test_common_put_object() {
setup_bucket "$1" "$BUCKET_ONE_NAME" || local setup_result=$?
[[ $setup_result -eq 0 ]] || fail "error setting up bucket"
object="$BUCKET_ONE_NAME"/"$2"
put_object "$1" "$test_file_folder"/"$2" "$object" || local put_object=$?
[[ $put_object -eq 0 ]] || fail "Failed to add object to bucket"
object_exists "$1" "$object" || local exists_result_one=$?
put_object "$1" "$test_file_folder/$2" "$BUCKET_ONE_NAME" "$2" || local copy_result=$?
[[ $copy_result -eq 0 ]] || fail "Failed to add object to bucket"
object_exists "$1" "$BUCKET_ONE_NAME" "$2" || local exists_result_one=$?
[[ $exists_result_one -eq 0 ]] || fail "Object not added to bucket"
delete_object "$1" "$object" || local delete_result=$?
delete_object "$1" "$BUCKET_ONE_NAME" "$2" || local delete_result=$?
[[ $delete_result -eq 0 ]] || fail "Failed to delete object"
object_exists "$1" "$object" || local exists_result_two=$?
object_exists "$1" "$BUCKET_ONE_NAME" "$2" || local exists_result_two=$?
[[ $exists_result_two -eq 1 ]] || fail "Object not removed from bucket"
delete_bucket_or_contents "$1" "$BUCKET_ONE_NAME"
@@ -89,6 +117,7 @@ test_common_list_buckets() {
if [ -z "$bucket_array" ]; then
fail "bucket_array parameter not exported"
fi
log 5 "bucket array: ${bucket_array[*]}"
for bucket in "${bucket_array[@]}"; do
if [ "$bucket" == "$BUCKET_ONE_NAME" ] || [ "$bucket" == "s3://$BUCKET_ONE_NAME" ]; then
bucket_one_found=true
@@ -122,9 +151,9 @@ test_common_list_objects() {
echo "test data 2" > "$test_file_folder"/"$object_two"
setup_bucket "$1" "$BUCKET_ONE_NAME" || local result_one=$?
[[ $result_one -eq 0 ]] || fail "Error creating bucket"
put_object "$1" "$test_file_folder"/$object_one "$BUCKET_ONE_NAME"/"$object_one" || local result_two=$?
put_object "$1" "$test_file_folder"/$object_one "$BUCKET_ONE_NAME" "$object_one" || local result_two=$?
[[ $result_two -eq 0 ]] || fail "Error adding object one"
put_object "$1" "$test_file_folder"/$object_two "$BUCKET_ONE_NAME"/"$object_two" || local result_three=$?
put_object "$1" "$test_file_folder"/$object_two "$BUCKET_ONE_NAME" "$object_two" || local result_three=$?
[[ $result_three -eq 0 ]] || fail "Error adding object two"
list_objects "$1" "$BUCKET_ONE_NAME"
@@ -147,7 +176,7 @@ test_common_list_objects() {
fi
}
test_common_set_get_bucket_tags() {
test_common_set_get_delete_bucket_tags() {
if [[ $# -ne 1 ]]; then
fail "set/get bucket tags test requires command type"
fi
@@ -158,27 +187,22 @@ test_common_set_get_bucket_tags() {
setup_bucket "$1" "$BUCKET_ONE_NAME" || local result=$?
[[ $result -eq 0 ]] || fail "Failed to create bucket '$BUCKET_ONE_NAME'"
get_bucket_tags "$1" "$BUCKET_ONE_NAME" || local get_result=$?
[[ $get_result -eq 0 ]] || fail "Error getting bucket tags"
get_bucket_tagging "$1" "$BUCKET_ONE_NAME" || local get_result=$?
[[ $get_result -eq 0 ]] || fail "Error getting bucket tags first time"
if [[ $1 == 'aws' ]]; then
if [[ $tags != "" ]]; then
tag_set=$(echo "$tags" | sed '1d' | jq '.TagSet')
[[ $tag_set == "[]" ]] || fail "Error: tags not empty: $tags"
fi
else
[[ $tags == "" ]] || [[ $tags =~ "No tags found" ]] || fail "Error: tags not empty: $tags"
fi
check_bucket_tags_empty "$1" "$BUCKET_ONE_NAME" || local check_result=$?
[[ $check_result -eq 0 ]] || fail "error checking if bucket tags are empty"
put_bucket_tag "$1" "$BUCKET_ONE_NAME" $key $value
get_bucket_tags "$1" "$BUCKET_ONE_NAME" || local get_result_two=$?
[[ $get_result_two -eq 0 ]] || fail "Error getting bucket tags"
get_bucket_tagging "$1" "$BUCKET_ONE_NAME" || local get_result_two=$?
[[ $get_result_two -eq 0 ]] || fail "Error getting bucket tags second time"
local tag_set_key
local tag_set_value
if [[ $1 == 'aws' ]]; then
tag_set_key=$(echo "$tags" | sed '1d' | jq '.TagSet[0].Key')
tag_set_value=$(echo "$tags" | sed '1d' | jq '.TagSet[0].Value')
log 5 "Post-export tags: $tags"
tag_set_key=$(echo "$tags" | jq '.TagSet[0].Key')
tag_set_value=$(echo "$tags" | jq '.TagSet[0].Value')
[[ $tag_set_key == '"'$key'"' ]] || fail "Key mismatch"
[[ $tag_set_value == '"'$value'"' ]] || fail "Value mismatch"
else
@@ -187,6 +211,12 @@ test_common_set_get_bucket_tags() {
[[ $tag_set_value == "$value" ]] || fail "Value mismatch"
fi
delete_bucket_tags "$1" "$BUCKET_ONE_NAME"
get_bucket_tagging "$1" "$BUCKET_ONE_NAME" || local get_result=$?
[[ $get_result -eq 0 ]] || fail "Error getting bucket tags third time"
check_bucket_tags_empty "$1" "$BUCKET_ONE_NAME" || local check_result=$?
[[ $check_result -eq 0 ]] || fail "error checking if bucket tags are empty"
delete_bucket_or_contents "$1" "$BUCKET_ONE_NAME"
}
@@ -204,27 +234,26 @@ test_common_set_get_object_tags() {
[[ $created -eq 0 ]] || fail "Error creating test files"
setup_bucket "$1" "$BUCKET_ONE_NAME" || local result=$?
[[ $result -eq 0 ]] || fail "Failed to create bucket '$BUCKET_ONE_NAME'"
local object_path="$BUCKET_ONE_NAME"/"$bucket_file"
put_object "$1" "$test_file_folder"/"$bucket_file" "$object_path" || local put_object=$?
[[ $put_object -eq 0 ]] || fail "Failed to add object to bucket '$BUCKET_ONE_NAME'"
put_object "$1" "$test_file_folder"/"$bucket_file" "$BUCKET_ONE_NAME" "$bucket_file" || local copy_result=$?
[[ $copy_result -eq 0 ]] || fail "Failed to add object to bucket '$BUCKET_ONE_NAME'"
get_object_tags "$1" "$BUCKET_ONE_NAME" $bucket_file || local get_result=$?
[[ $get_result -eq 0 ]] || fail "Error getting object tags"
if [[ $1 == 'aws' ]]; then
tag_set=$(echo "$tags" | sed '1d' | jq '.TagSet')
[[ $tag_set == "[]" ]] || fail "Error: tags not empty"
elif [[ ! $tags == *"No tags found"* ]]; then
tag_set=$(echo "$tags" | jq '.TagSet')
[[ $tag_set == "[]" ]] || [[ $tag_set == "" ]] || fail "Error: tags not empty"
elif [[ $tags != *"No tags found"* ]] && [[ $tags != "" ]]; then
fail "no tags found (tags: $tags)"
fi
put_object_tag "$1" "$BUCKET_ONE_NAME" $bucket_file $key $value
get_object_tags "$1" "$BUCKET_ONE_NAME" $bucket_file || local get_result_two=$?
get_object_tags "$1" "$BUCKET_ONE_NAME" "$bucket_file" || local get_result_two=$?
[[ $get_result_two -eq 0 ]] || fail "Error getting object tags"
if [[ $1 == 'aws' ]]; then
tag_set_key=$(echo "$tags" | sed '1d' | jq '.TagSet[0].Key')
tag_set_value=$(echo "$tags" | sed '1d' | jq '.TagSet[0].Value')
[[ $tag_set_key == '"'$key'"' ]] || fail "Key mismatch"
[[ $tag_set_value == '"'$value'"' ]] || fail "Value mismatch"
tag_set_key=$(echo "$tags" | jq -r '.TagSet[0].Key')
tag_set_value=$(echo "$tags" | jq -r '.TagSet[0].Value')
[[ $tag_set_key == "$key" ]] || fail "Key mismatch"
[[ $tag_set_value == "$value" ]] || fail "Value mismatch"
else
read -r tag_set_key tag_set_value <<< "$(echo "$tags" | awk 'NR==2 {print $1, $3}')"
[[ $tag_set_key == "$key" ]] || fail "Key mismatch"
@@ -235,28 +264,7 @@ test_common_set_get_object_tags() {
delete_test_files $bucket_file
}
test_common_multipart_upload() {
if [[ $# -ne 1 ]]; then
echo "multipart upload command missing command type"
return 1
fi
bucket_file="largefile"
create_large_file "$bucket_file" || local created=$?
[[ $created -eq 0 ]] || fail "Error creating test file for multipart upload"
setup_bucket "$1" "$BUCKET_ONE_NAME" || local result=$?
[[ $result -eq 0 ]] || fail "Failed to create bucket '$BUCKET_ONE_NAME'"
put_object "$1" "$test_file_folder"/$bucket_file "$BUCKET_ONE_NAME/$bucket_file" || local put_result=$?
[[ $put_result -eq 0 ]] || fail "failed to copy file"
delete_bucket_or_contents "$1" "$BUCKET_ONE_NAME"
delete_test_files $bucket_file
}
test_common_presigned_url_utf8_chars() {
if [[ $# -ne 1 ]]; then
echo "presigned url command missing command type"
return 1
@@ -271,7 +279,7 @@ test_common_presigned_url_utf8_chars() {
setup_bucket "$1" "$BUCKET_ONE_NAME" || local result=$?
[[ $result -eq 0 ]] || fail "Failed to create bucket '$BUCKET_ONE_NAME'"
put_object "$1" "$test_file_folder"/"$bucket_file" "$BUCKET_ONE_NAME"/"$bucket_file" || put_result=$?
put_object "$1" "$test_file_folder"/"$bucket_file" "$BUCKET_ONE_NAME" "$bucket_file" || put_result=$?
[[ $put_result -eq 0 ]] || fail "Failed to add object $bucket_file"
create_presigned_url "$1" "$BUCKET_ONE_NAME" "$bucket_file" || presigned_result=$?
@@ -311,3 +319,101 @@ test_common_list_objects_file_count() {
[[ $file_count == 1001 ]] || fail "file count should be 1001, is $file_count"
delete_bucket_or_contents "$1" "$BUCKET_ONE_NAME"
}
test_common_delete_object_tagging() {
[[ $# -eq 1 ]] || fail "test common delete object tagging requires command type"
bucket_file="bucket_file"
tag_key="key"
tag_value="value"
create_test_files "$bucket_file" || local created=$?
[[ $created -eq 0 ]] || fail "Error creating test files"
setup_bucket "$1" "$BUCKET_ONE_NAME" || local setup_result=$?
[[ $setup_result -eq 0 ]] || fail "error setting up bucket"
put_object "$1" "$test_file_folder"/"$bucket_file" "$BUCKET_ONE_NAME" "$bucket_file" || local copy_result=$?
[[ $copy_result -eq 0 ]] || fail "Failed to add object to bucket"
put_object_tag "$1" "$BUCKET_ONE_NAME" "$bucket_file" "$tag_key" "$tag_value" || put_result=$?
[[ $put_result -eq 0 ]] || fail "failed to add tags to object"
get_and_verify_object_tags "$1" "$BUCKET_ONE_NAME" "$bucket_file" "$tag_key" "$tag_value" || get_result=$?
[[ $get_result -eq 0 ]] || fail "failed to get tags"
delete_object_tagging "$1" "$BUCKET_ONE_NAME" "$bucket_file" || delete_result=$?
[[ $delete_result -eq 0 ]] || fail "error deleting object tagging"
check_object_tags_empty "$1" "$BUCKET_ONE_NAME" "$bucket_file" || check_result=$?
[[ $check_result -eq 0 ]] || fail "object tags not empty after deletion"
delete_bucket_or_contents "$1" "$BUCKET_ONE_NAME"
delete_test_files "$bucket_file"
}
test_common_get_bucket_location() {
[[ $# -eq 1 ]] || fail "test common get bucket location missing command type"
setup_bucket "aws" "$BUCKET_ONE_NAME" || local setup_result=$?
[[ $setup_result -eq 0 ]] || fail "error setting up bucket"
get_bucket_location "aws" "$BUCKET_ONE_NAME"
# shellcheck disable=SC2154
[[ $bucket_location == "null" ]] || [[ $bucket_location == "us-east-1" ]] || fail "wrong location: '$bucket_location'"
}
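# The "null or us-east-1" check above reflects s3api behavior: get-bucket-location
# reports a null LocationConstraint for us-east-1 buckets. A hedged sketch with a
# hypothetical bucket name:
demo_bucket_location_sketch() {
  aws --no-verify-ssl s3api get-bucket-location --bucket "my-bucket" | jq -r '.LocationConstraint'   # prints "null" for us-east-1
}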
test_common_get_put_delete_bucket_policy() {
[[ $# -eq 1 ]] || fail "get/put/delete policy test requires command type"
policy_file="policy_file"
create_test_files "$policy_file" || local created=$?
[[ $created -eq 0 ]] || fail "Error creating policy file"
effect="Allow"
principal="*"
action="s3:GetObject"
resource="arn:aws:s3:::$BUCKET_ONE_NAME/*"
cat <<EOF > "$test_file_folder"/$policy_file
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "$effect",
"Principal": "$principal",
"Action": "$action",
"Resource": "$resource"
}
]
}
EOF
setup_bucket "$1" "$BUCKET_ONE_NAME" || local setup_result=$?
[[ $setup_result -eq 0 ]] || fail "error setting up bucket"
check_for_empty_policy "$1" "$BUCKET_ONE_NAME" || check_result=$?
[[ $check_result -eq 0 ]] || fail "policy not empty"
put_bucket_policy "$1" "$BUCKET_ONE_NAME" "$test_file_folder"/"$policy_file" || put_result=$?
[[ $put_result -eq 0 ]] || fail "error putting bucket policy"
get_bucket_policy "$1" "$BUCKET_ONE_NAME" || local get_result=$?
[[ $get_result -eq 0 ]] || fail "error getting bucket policy after setting"
returned_effect=$(echo "$bucket_policy" | jq -r '.Statement[0].Effect')
[[ $effect == "$returned_effect" ]] || fail "effect mismatch ($effect, $returned_effect)"
returned_principal=$(echo "$bucket_policy" | jq -r '.Statement[0].Principal')
[[ $principal == "$returned_principal" ]] || fail "principal mismatch ($principal, $returned_principal)"
returned_action=$(echo "$bucket_policy" | jq -r '.Statement[0].Action')
[[ $action == "$returned_action" ]] || fail "action mismatch ($action, $returned_action)"
returned_resource=$(echo "$bucket_policy" | jq -r '.Statement[0].Resource')
[[ $resource == "$returned_resource" ]] || fail "resource mismatch ($resource, $returned_resource)"
delete_bucket_policy "$1" "$BUCKET_ONE_NAME" || delete_result=$?
[[ $delete_result -eq 0 ]] || fail "error deleting policy"
check_for_empty_policy "$1" "$BUCKET_ONE_NAME" || check_result=$?
[[ $check_result -eq 0 ]] || fail "policy not empty after deletion"
delete_bucket_or_contents "$1" "$BUCKET_ONE_NAME"
}
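# A hedged sketch of the policy round trip the sourced command wrappers are
# assumed to perform (hypothetical bucket name; get-bucket-policy returns the
# policy as a JSON-encoded string, hence the fromjson step):
demo_bucket_policy_sketch() {
  aws --no-verify-ssl s3api put-bucket-policy --bucket "my-bucket" --policy "file://$test_file_folder/policy_file"
  aws --no-verify-ssl s3api get-bucket-policy --bucket "my-bucket" | jq -r '.Policy | fromjson | .Statement[0].Action'
  aws --no-verify-ssl s3api delete-bucket-policy --bucket "my-bucket"
}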


@@ -2,9 +2,17 @@
source ./tests/test_common.sh
source ./tests/setup.sh
source ./tests/util_bucket_create.sh
source ./tests/commands/delete_bucket_policy.sh
source ./tests/commands/get_bucket_policy.sh
source ./tests/commands/put_bucket_policy.sh
export RUN_MC=true
@test "test_multipart_upload_mc" {
test_common_multipart_upload "mc"
}
# test mc bucket creation/deletion
@test "test_create_delete_bucket_mc" {
test_common_create_delete_bucket "mc"
@@ -27,17 +35,13 @@ export RUN_MC=true
}
@test "test_set_get_bucket_tags_mc" {
test_common_set_get_bucket_tags "mc"
test_common_set_get_delete_bucket_tags "mc"
}
@test "test_set_get_object_tags_mc" {
test_common_set_get_object_tags "mc"
}
@test "test_multipart_upload_mc" {
test_common_multipart_upload "mc"
}
@test "test_presigned_url_utf8_chars_mc" {
test_common_presigned_url_utf8_chars "mc"
}
@@ -75,3 +79,15 @@ export RUN_MC=true
[[ $bucket_info == *"does not exist"* ]] || fail "404 not returned for non-existent bucket info"
delete_bucket_or_contents "mc" "$BUCKET_ONE_NAME"
}
@test "test_delete_object_tagging" {
test_common_delete_object_tagging "mc"
}
@test "test_get_bucket_location" {
test_common_get_bucket_location "mc"
}
@test "test_get_put_delete_bucket_policy" {
test_common_get_put_delete_bucket_policy "mc"
}

tests/test_s3.sh Executable file

@@ -0,0 +1,19 @@
#!/usr/bin/env bats
source ./tests/test_common.sh
@test "test_multipart_upload" {
test_common_multipart_upload "s3"
}
@test "test_put_object" {
test_common_put_object_no_data "s3"
}
@test "test_list_buckets" {
test_common_list_buckets "s3"
}
@test "test_list_objects_file_count" {
test_common_list_objects_file_count "s3"
}


@@ -3,20 +3,28 @@
source ./tests/setup.sh
source ./tests/test_common.sh
source ./tests/util.sh
source ./tests/util_bucket_create.sh
source ./tests/commands/delete_bucket_policy.sh
source ./tests/commands/get_bucket_policy.sh
source ./tests/commands/put_bucket_policy.sh
export RUN_S3CMD=true
@test "test_multipart_upload_s3cmd" {
test_common_multipart_upload "s3cmd"
}
# test s3cmd bucket creation/deletion
@test "test_create_delete_bucket_s3cmd" {
test_common_create_delete_bucket "s3cmd"
}
# test s3cmd put object
@test "test_put_object_with_data_s3cmd" {
@test "test_copy_object_with_data" {
test_common_put_object_with_data "s3cmd"
}
@test "test_put_object_no_data_s3cmd" {
@test "test_copy_object_no_data" {
test_common_put_object_no_data "s3cmd"
}
@@ -29,10 +37,6 @@ export RUN_S3CMD=true
test_common_list_objects "s3cmd"
}
@test "test_multipart_upload_s3cmd" {
test_common_multipart_upload "s3cmd"
}
#@test "test_presigned_url_utf8_chars_s3cmd" {
# test_common_presigned_url_utf8_chars "s3cmd"
#}
@@ -70,3 +74,11 @@ export RUN_S3CMD=true
[[ $bucket_info == *"404"* ]] || fail "404 not returned for non-existent bucket info"
delete_bucket_or_contents "s3cmd" "$BUCKET_ONE_NAME"
}
@test "test_get_bucket_location" {
test_common_get_bucket_location "s3cmd"
}
@test "test_get_put_delete_bucket_policy" {
test_common_get_put_delete_bucket_policy "s3cmd"
}

tests/test_user_aws.sh Executable file

@@ -0,0 +1,19 @@
#!/usr/bin/env bats
source ./tests/test_user_common.sh
@test "test_admin_user_aws" {
test_admin_user "aws"
}
@test "test_create_user_already_exists_aws" {
test_create_user_already_exists "aws"
}
@test "test_user_user_aws" {
test_user_user "aws"
}
@test "test_userplus_operation_aws" {
test_userplus_operation "aws"
}

tests/test_user_common.sh Executable file

@@ -0,0 +1,178 @@
#!/usr/bin/env bats
source ./tests/setup.sh
source ./tests/util_users.sh
source ./tests/util.sh
source ./tests/util_bucket_create.sh
test_admin_user() {
if [[ $# -ne 1 ]]; then
fail "test admin user command requires command type"
fi
admin_username="ABCDEF"
user_username="GHIJKL"
admin_password="123456"
user_password="789012"
user_exists "$admin_username" || local admin_exists_result=$?
if [[ $admin_exists_result -eq 0 ]]; then
delete_user "$admin_username" || local delete_admin_result=$?
[[ $delete_admin_result -eq 0 ]] || fail "failed to delete admin user"
fi
create_user "$admin_username" "$admin_password" "admin" || create_admin_result=$?
[[ $create_admin_result -eq 0 ]] || fail "failed to create admin user"
user_exists "$user_username" || local user_exists_result=$?
if [[ $user_exists_result -eq 0 ]]; then
delete_user "$user_username" || local delete_user_result=$?
[[ $delete_user_result -eq 0 ]] || fail "failed to delete user user"
fi
create_user_with_user "$admin_username" "$admin_password" "$user_username" "$user_password" "user"
setup_bucket "aws" "$BUCKET_ONE_NAME" || local setup_result=$?
[[ $setup_result -eq 0 ]] || fail "error setting up bucket"
delete_bucket "aws" "versity-gwtest-admin-bucket" || local delete_result=$?
[[ $delete_result -eq 0 ]] || fail "error deleting bucket if it exists"
create_bucket_with_user "aws" "versity-gwtest-admin-bucket" "$admin_username" "$admin_password" || create_result_two=$?
[[ $create_result_two -eq 0 ]] || fail "error creating bucket with user"
bucket_one_found=false
bucket_two_found=false
list_buckets_with_user "aws" "$admin_username" "$admin_password"
for bucket in "${bucket_array[@]}"; do
if [ "$bucket" == "$BUCKET_ONE_NAME" ]; then
bucket_one_found=true
elif [ "$bucket" == "versity-gwtest-admin-bucket" ]; then
bucket_two_found=true
fi
if [ $bucket_one_found == true ] && [ $bucket_two_found == true ]; then
break
fi
done
if [ $bucket_one_found == false ] || [ $bucket_two_found == false ]; then
fail "not all expected buckets listed"
fi
change_bucket_owner "$admin_username" "$admin_password" "versity-gwtest-admin-bucket" "$user_username" || local change_result=$?
[[ $change_result -eq 0 ]] || fail "error changing bucket owner"
delete_bucket "aws" "versity-gwtest-admin-bucket"
delete_user "$user_username"
delete_user "$admin_username"
}
test_create_user_already_exists() {
if [[ $# -ne 1 ]]; then
fail "test admin user command requires command type"
fi
username="ABCDEG"
password="123456"
user_exists "$username" || local exists_result=$?
if [[ $exists_result -eq 0 ]]; then
delete_user "$username" || local delete_result=$?
[[ $delete_result -eq 0 ]] || fail "failed to delete user '$username'"
fi
create_user "$username" "123456" "admin" || local create_result=$?
[[ $create_result -eq 0 ]] || fail "error creating user"
create_user "$username" "123456" "admin" || local create_result=$?
[[ $create_result -eq 1 ]] || fail "'user already exists' error not returned"
delete_bucket "aws" "versity-gwtest-admin-bucket"
delete_user "$username"
}
test_user_user() {
if [[ $# -ne 1 ]]; then
fail "test admin user command requires command type"
fi
username="ABCDEG"
password="123456"
user_exists "$username" || local exists_result=$?
if [[ $exists_result -eq 0 ]]; then
delete_user "$username" || local delete_result=$?
[[ $delete_result -eq 0 ]] || fail "failed to delete user '$username'"
fi
delete_bucket "aws" "versity-gwtest-user-bucket"
create_user "$username" "123456" "user" || local create_result=$?
[[ $create_result -eq 0 ]] || fail "error creating user"
setup_bucket "aws" "$BUCKET_ONE_NAME" || local setup_result=$?
[[ $setup_result -eq 0 ]] || fail "error setting up bucket"
create_bucket_with_user "aws" "versity-gwtest-user-bucket" "$username" "$password" || create_result_two=$?
[[ $create_result_two -eq 1 ]] || fail "creating bucket with 'user' account failed to return error"
[[ $error == *"Access Denied"* ]] || fail "error message '$error' doesn't contain 'Access Denied'"
create_bucket "aws" "versity-gwtest-user-bucket" || create_result_three=$?
[[ $create_result_three -eq 0 ]] || fail "creating bucket with root account returned error"
change_bucket_owner "$AWS_ACCESS_KEY_ID" "$AWS_SECRET_ACCESS_KEY" "versity-gwtest-user-bucket" "$username" || local change_result=$?
[[ $change_result -eq 0 ]] || fail "error changing bucket owner"
change_bucket_owner "$username" "$password" "versity-gwtest-user-bucket" "admin" || local change_result_two=$?
[[ $change_result_two -eq 1 ]] || fail "user shouldn't be able to change bucket owner"
list_buckets_with_user "aws" "$username" "$password"
bucket_found=false
for bucket in "${bucket_array[@]}"; do
if [ "$bucket" == "$BUCKET_ONE_NAME" ]; then
fail "$BUCKET_ONE_NAME shouldn't show up in 'user' bucket list"
elif [ "$bucket" == "versity-gwtest-user-bucket" ]; then
bucket_found=true
fi
done
if [ $bucket_found == false ]; then
fail "user-owned bucket not found in user list"
fi
delete_bucket "aws" "versity-gwtest-user-bucket"
delete_user "$username"
}
test_userplus_operation() {
if [[ $# -ne 1 ]]; then
fail "test admin user command requires command type"
fi
username="ABCDEG"
password="123456"
user_exists "$username" || local exists_result=$?
if [[ $exists_result -eq 0 ]]; then
delete_user "$username" || local delete_result=$?
[[ $delete_result -eq 0 ]] || fail "failed to delete user '$username'"
fi
delete_bucket "aws" "versity-gwtest-userplus-bucket"
create_user "$username" "123456" "userplus" || local create_result=$?
[[ $create_result -eq 0 ]] || fail "error creating user"
setup_bucket "aws" "$BUCKET_ONE_NAME" || local setup_result=$?
[[ $setup_result -eq 0 ]] || fail "error setting up bucket"
create_bucket_with_user "aws" "versity-gwtest-userplus-bucket" "$username" "$password" || create_result_two=$?
[[ $create_result_two -eq 0 ]] || fail "error creating bucket"
list_buckets_with_user "aws" "$username" "$password"
bucket_found=false
for bucket in "${bucket_array[@]}"; do
if [ "$bucket" == "$BUCKET_ONE_NAME" ]; then
fail "$BUCKET_ONE_NAME shouldn't show up in 'userplus' bucket list"
elif [ "$bucket" == "versity-gwtest-userplus-bucket" ]; then
bucket_found=true
fi
done
if [ $bucket_found == false ]; then
fail "userplus-owned bucket not found in user list"
fi
change_bucket_owner "$username" "$password" "versity-gwtest-userplus-bucket" "admin" || local change_result_two=$?
[[ $change_result_two -eq 1 ]] || fail "userplus shouldn't be able to change bucket owner"
delete_bucket "aws" "versity-gwtest-admin-bucket"
delete_user "$username" || delete_result=$?
[[ $delete_result -eq 0 ]] || fail "error deleting user"
}

tests/test_user_s3cmd.sh Executable file

@@ -0,0 +1,19 @@
#!/usr/bin/env bats
source ./tests/test_user_common.sh
@test "test_admin_user_s3cmd" {
test_admin_user "s3cmd"
}
@test "test_create_user_already_exists_s3cmd" {
test_create_user_already_exists "s3cmd"
}
@test "test_user_user_s3cmd" {
test_user_user "s3cmd"
}
@test "test_userplus_operation_s3cmd" {
test_userplus_operation "s3cmd"
}


@@ -1,88 +1,16 @@
#!/usr/bin/env bats
#!/usr/bin/env bash
source ./tests/util_bucket_create.sh
source ./tests/util_mc.sh
source ./tests/logger.sh
# create an AWS bucket
# param: bucket name
# return 0 for success, 1 for failure
create_bucket() {
if [ $# -ne 2 ]; then
echo "create bucket missing command type, bucket name"
return 1
fi
local exit_code=0
local error
if [[ $1 == "aws" ]]; then
error=$(aws --no-verify-ssl s3 mb s3://"$2" 2>&1) || exit_code=$?
elif [[ $1 == "s3cmd" ]]; then
error=$(s3cmd "${S3CMD_OPTS[@]}" --no-check-certificate mb s3://"$2" 2>&1) || exit_code=$?
elif [[ $1 == "mc" ]]; then
error=$(mc --insecure mb "$MC_ALIAS"/"$2" 2>&1) || exit_code=$?
else
echo "invalid command type $1"
return 1
fi
if [ $exit_code -ne 0 ]; then
echo "error creating bucket: $error"
return 1
fi
return 0
}
create_bucket_invalid_name() {
if [ $# -ne 1 ]; then
echo "create bucket w/invalid name missing command type"
return 1
fi
local exit_code=0
if [[ $1 == "aws" ]]; then
bucket_create_error=$(aws --no-verify-ssl s3 mb "s3://" 2>&1) || exit_code=$?
elif [[ $1 == 's3cmd' ]]; then
bucket_create_error=$(s3cmd "${S3CMD_OPTS[@]}" --no-check-certificate mb "s3://" 2>&1) || exit_code=$?
elif [[ $1 == 'mc' ]]; then
bucket_create_error=$(mc --insecure mb "$MC_ALIAS" 2>&1) || exit_code=$?
else
echo "invalid command type $1"
return 1
fi
if [ $exit_code -eq 0 ]; then
echo "error: bucket should have not been created but was"
return 1
fi
export bucket_create_error
}
# delete an AWS bucket
# param: bucket name
# return 0 for success, 1 for failure
delete_bucket() {
if [ $# -ne 2 ]; then
echo "delete bucket missing command type, bucket name"
return 1
fi
local exit_code=0
local error
if [[ $1 == 'aws' ]]; then
error=$(aws --no-verify-ssl s3 rb s3://"$2" 2>&1) || exit_code=$?
elif [[ $1 == 'mc' ]]; then
error=$(mc --insecure rb "$MC_ALIAS/$2" 2>&1) || exit_code=$?
else
echo "Invalid command type $1"
return 1
fi
if [ $exit_code -ne 0 ]; then
if [[ "$error" == *"The specified bucket does not exist"* ]]; then
return 0
else
echo "error deleting bucket: $error"
return 1
fi
fi
return 0
}
source ./tests/commands/abort_multipart_upload.sh
source ./tests/commands/create_bucket.sh
source ./tests/commands/delete_bucket.sh
source ./tests/commands/delete_object.sh
source ./tests/commands/get_bucket_tagging.sh
source ./tests/commands/head_bucket.sh
source ./tests/commands/head_object.sh
source ./tests/commands/list_objects.sh
# recursively delete an AWS bucket
# param: bucket name
@@ -95,8 +23,10 @@ delete_bucket_recursive() {
local exit_code=0
local error
if [[ $1 == "aws" ]]; then
if [[ $1 == 's3' ]]; then
error=$(aws --no-verify-ssl s3 rb s3://"$2" --force 2>&1) || exit_code="$?"
elif [[ $1 == "aws" ]] || [[ $1 == 's3api' ]]; then
delete_bucket_recursive_s3api "$2" 2>&1 || exit_code="$?"
elif [[ $1 == "s3cmd" ]]; then
error=$(s3cmd "${S3CMD_OPTS[@]}" --no-check-certificate rb s3://"$2" --recursive 2>&1) || exit_code="$?"
elif [[ $1 == "mc" ]]; then
@@ -117,6 +47,33 @@ delete_bucket_recursive() {
return 0
}
delete_bucket_recursive_s3api() {
if [[ $# -ne 1 ]]; then
echo "delete bucket recursive command for s3api requires bucket name"
return 1
fi
list_objects 's3api' "$1" || list_result=$?
if [[ $list_result -ne 0 ]]; then
echo "error listing objects"
return 1
fi
# shellcheck disable=SC2154
for object in "${object_array[@]}"; do
delete_object 's3api' "$1" "$object" || delete_result=$?
if [[ $delete_result -ne 0 ]]; then
echo "error deleting object $object"
return 1
fi
done
delete_bucket 's3api' "$1" || delete_result=$?
if [[ $delete_result -ne 0 ]]; then
echo "error deleting bucket"
return 1
fi
return 0
}
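# Hedged usage sketch (hypothetical bucket name): empty the bucket object by
# object, then remove it.
#   delete_bucket_recursive_s3api "my-test-bucket" || echo "recursive delete failed"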
# delete contents of a bucket
# param: command type, bucket name
# return 0 for success, 1 for failure
@@ -154,28 +111,14 @@ bucket_exists() {
return 2
fi
local exit_code=0
local error
if [[ $1 == 'aws' ]]; then
error=$(aws --no-verify-ssl s3 ls s3://"$2" 2>&1) || exit_code=$?
elif [[ $1 == 's3cmd' ]]; then
# NOTE: s3cmd sometimes takes longer with direct connection
sleep 1
error=$(s3cmd "${S3CMD_OPTS[@]}" --no-check-certificate ls s3://"$2" 2>&1) || exit_code=$?
elif [[ $1 == 'mc' ]]; then
error=$(mc --insecure ls "$MC_ALIAS/$2" 2>&1) || exit_code=$?
else
echo "invalid command type: $1"
return 2
fi
if [ $exit_code -ne 0 ]; then
if [[ "$error" == *"does not exist"* ]] || [[ "$error" == *"Access Denied"* ]]; then
head_bucket "$1" "$2" || local check_result=$?
if [[ $check_result -ne 0 ]]; then
# shellcheck disable=SC2154
if [[ "$bucket_info" == *"404"* ]] || [[ "$bucket_info" == *"does not exist"* ]]; then
return 1
else
echo "error checking if bucket exists: $error"
return 2
fi
echo "error checking if bucket exists"
return 2
fi
return 0
}
@@ -248,18 +191,28 @@ setup_bucket() {
# param: command, object path
# return 0 for true, 1 for false, 2 for error
object_exists() {
if [ $# -ne 2 ]; then
echo "object exists check missing command, object name"
if [ $# -ne 3 ]; then
echo "object exists check missing command, bucket name, object name"
return 2
fi
head_object "$1" "$2" "$3" || head_result=$?
if [[ $head_result -eq 2 ]]; then
echo "error checking if object exists"
return 2
fi
return $head_result
return 0
local exit_code=0
local error=""
if [[ $1 == 'aws' ]]; then
error=$(aws --no-verify-ssl s3 ls s3://"$2" 2>&1) || exit_code="$?"
if [[ $1 == 's3' ]]; then
error=$(aws --no-verify-ssl s3 ls "s3://$2/$3" 2>&1) || exit_code="$?"
elif [[ $1 == 'aws' ]] || [[ $1 == 's3api' ]]; then
error=$(aws --no-verify-ssl s3api head-object --bucket "$2" --key "$3" 2>&1) || exit_code="$?"
elif [[ $1 == 's3cmd' ]]; then
error=$(s3cmd "${S3CMD_OPTS[@]}" --no-check-certificate ls s3://"$2" 2>&1) || exit_code="$?"
error=$(s3cmd "${S3CMD_OPTS[@]}" --no-check-certificate ls s3://"$2/$3" 2>&1) || exit_code="$?"
elif [[ $1 == 'mc' ]]; then
error=$(mc --insecure ls "$MC_ALIAS"/"$2" 2>&1) || exit_code=$?
error=$(mc --insecure ls "$MC_ALIAS/$2/$3" 2>&1) || exit_code=$?
else
echo "invalid command type $1"
return 2
@@ -278,22 +231,37 @@ object_exists() {
return 0
}
# add object to versitygw
# params: source file, destination copy location
# return 0 for success, 1 for failure
put_object() {
if [ $# -ne 3 ]; then
echo "put object command requires command type, source, destination"
put_object_with_metadata() {
if [ $# -ne 6 ]; then
echo "put object command requires command type, source, destination, key, metadata key, metadata value"
return 1
fi
local exit_code=0
local error
if [[ $1 == 'aws' ]]; then
error=$(aws --no-verify-ssl s3 cp "$2" s3://"$3" 2>&1) || exit_code=$?
elif [[ $1 == 's3cmd' ]]; then
error=$(s3cmd "${S3CMD_OPTS[@]}" --no-check-certificate put "$2" s3://"$(dirname "$3")" 2>&1) || exit_code=$?
elif [[ $1 == 'mc' ]]; then
error=$(mc --insecure cp "$2" "$MC_ALIAS"/"$(dirname "$3")" 2>&1) || exit_code=$?
error=$(aws --no-verify-ssl s3api put-object --body "$2" --bucket "$3" --key "$4" --metadata "{\"$5\":\"$6\"}") || exit_code=$?
else
echo "invalid command type $1"
return 1
fi
log 5 "put object exit code: $exit_code"
if [ $exit_code -ne 0 ]; then
echo "error copying object to bucket: $error"
return 1
fi
return 0
}
get_object_metadata() {
if [ $# -ne 3 ]; then
echo "get object metadata command requires command type, bucket, key"
return 1
fi
local exit_code=0
if [[ $1 == 'aws' ]]; then
metadata_struct=$(aws --no-verify-ssl s3api head-object --bucket "$2" --key "$3") || exit_code=$?
else
echo "invalid command type $1"
return 1
@@ -302,6 +270,10 @@ put_object() {
echo "error copying object to bucket: $error"
return 1
fi
log 5 "$metadata_struct"
metadata=$(echo "$metadata_struct" | jq '.Metadata')
log 5 "metadata: $metadata"
export metadata
return 0
}
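# Hedged usage sketch for the two helpers above (hypothetical bucket and key):
#   put_object_with_metadata "aws" "$test_file_folder/object-one" "my-bucket" "object-one" "x-test-data" "test-value"
#   get_object_metadata "aws" "my-bucket" "object-one"
#   echo "$metadata" | jq -r '.["x-test-data"]'   # expected output: test-value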
@@ -312,7 +284,7 @@ put_object_multiple() {
fi
local exit_code=0
local error
if [[ $1 == 'aws' ]]; then
if [[ $1 == 'aws' ]] || [[ $1 == 's3' ]]; then
# shellcheck disable=SC2086
error=$(aws --no-verify-ssl s3 cp "$(dirname "$2")" s3://"$3" --recursive --exclude="*" --include="$2" 2>&1) || exit_code=$?
elif [[ $1 == 's3cmd' ]]; then
@@ -338,18 +310,18 @@ put_object_multiple() {
# params: source file, destination copy location
# return 0 for success or already exists, 1 for failure
check_and_put_object() {
if [ $# -ne 2 ]; then
echo "check and put object function requires source, destination"
if [ $# -ne 3 ]; then
echo "check and put object function requires source, bucket, destination"
return 1
fi
object_exists "aws" "$2" || local exists_result=$?
object_exists "aws" "$2" "$3" || local exists_result=$?
if [ "$exists_result" -eq 2 ]; then
echo "error checking if object exists"
return 1
fi
if [ "$exists_result" -eq 1 ]; then
put_object "$1" "$2" || local put_result=$?
if [ "$put_result" -ne 0 ]; then
copy_object "$1" "$2" || local copy_result=$?
if [ "$copy_result" -ne 0 ]; then
echo "error adding object"
return 1
fi
@@ -357,50 +329,16 @@ check_and_put_object() {
return 0
}
# delete object from versitygw
# param: object path, including bucket name
# return 0 for success, 1 for failure
delete_object() {
if [ $# -ne 2 ]; then
echo "delete object command requires command type, object parameter"
return 1
fi
local exit_code=0
local error
if [[ $1 == 'aws' ]]; then
error=$(aws --no-verify-ssl s3 rm s3://"$2" 2>&1) || exit_code=$?
elif [[ $1 == 's3cmd' ]]; then
error=$(s3cmd "${S3CMD_OPTS[@]}" --no-check-certificate rm s3://"$2" 2>&1) || exit_code=$?
elif [[ $1 == 'mc' ]]; then
error=$(mc --insecure rm "$MC_ALIAS"/"$2" 2>&1) || exit_code=$?
else
echo "invalid command type $1"
return 1
fi
if [ $exit_code -ne 0 ]; then
echo "error deleting object: $error"
return 1
fi
return 0
}
# list buckets on versitygw
# params: format (aws, s3cmd)
# export bucket_array (bucket names) on success, return 1 for failure
list_buckets() {
if [[ $# -ne 1 ]]; then
echo "List buckets command missing format"
list_buckets_with_user() {
if [[ $# -ne 3 ]]; then
echo "List buckets command missing format, user id, key"
return 1
fi
local exit_code=0
local output
if [[ $1 == "aws" ]]; then
output=$(aws --no-verify-ssl s3 ls s3:// 2>&1) || exit_code=$?
elif [[ $1 == "s3cmd" ]]; then
output=$(s3cmd "${S3CMD_OPTS[@]}" --no-check-certificate ls s3:// 2>&1) || exit_code=$?
elif [[ $1 == 'mc' ]]; then
output=$(mc --insecure ls "$MC_ALIAS" 2>&1) || exit_code=$?
output=$(AWS_ACCESS_KEY_ID="$2" AWS_SECRET_ACCESS_KEY="$3" aws --no-verify-ssl s3 ls s3:// 2>&1) || exit_code=$?
else
echo "invalid format: $1"
return 1
@@ -420,40 +358,18 @@ list_buckets() {
export bucket_array
}
# list objects on versitygw, in bucket or folder
# param: path of bucket or folder
# export object_array (object names) on success, return 1 for failure
list_objects() {
if [ $# -ne 2 ]; then
echo "list objects command requires command type, and bucket or folder"
remove_insecure_request_warning() {
if [[ $# -ne 1 ]]; then
echo "remove insecure request warning requires input lines"
return 1
fi
local exit_code=0
local output
if [[ $1 == "aws" ]]; then
output=$(aws --no-verify-ssl s3 ls s3://"$2" 2>&1) || exit_code=$?
elif [[ $1 == 's3cmd' ]]; then
output=$(s3cmd "${S3CMD_OPTS[@]}" --no-check-certificate ls s3://"$2" 2>&1) || exit_code=$?
elif [[ $1 == 'mc' ]]; then
output=$(mc --insecure ls "$MC_ALIAS"/"$2" 2>&1) || exit_code=$?
else
echo "invalid command type $1"
return 1
fi
if [ $exit_code -ne 0 ]; then
echo "error listing objects: $output"
return 1
fi
object_array=()
parsed_output=()
while IFS= read -r line; do
if [[ $line != *InsecureRequestWarning* ]]; then
object_name=$(echo "$line" | awk '{print $NF}')
object_array+=("$object_name")
parsed_output+=("$line")
fi
done <<< "$output"
export object_array
done <<< "$1"
export parsed_output
}
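# Hedged usage sketch: strip urllib3 InsecureRequestWarning noise from captured
# CLI output before parsing, e.g.
#   remove_insecure_request_warning "$output"
#   output=$(printf "%s\n" "${parsed_output[@]}")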
# check if bucket info can be retrieved
@@ -557,32 +473,56 @@ put_bucket_tag() {
return 0
}
# get bucket tags
# params: bucket
# export 'tags' on success, return 1 for error
get_bucket_tags() {
if [ $# -ne 2 ]; then
echo "get bucket tag command missing command type, bucket name"
check_tags_empty() {
if [[ $# -ne 1 ]]; then
echo "check tags empty requires command type"
return 1
fi
local result
if [[ $1 == 'aws' ]]; then
tags=$(aws --no-verify-ssl s3api get-bucket-tagging --bucket "$2" 2>&1) || result=$?
elif [[ $1 == 'mc' ]]; then
tags=$(mc --insecure tag list "$MC_ALIAS"/"$2" 2>&1) || result=$?
else
echo "invalid command type $1"
return 1
fi
if [[ $result -ne 0 ]]; then
if [[ $tags =~ "No tags found" ]] || [[ $tags =~ "The TagSet does not exist" ]]; then
export tags=
return 0
if [[ $tags != "" ]]; then
tag_set=$(echo "$tags" | jq '.TagSet')
if [[ $tag_set != "[]" ]]; then
echo "error: tags not empty: $tags"
return 1
fi
fi
else
if [[ $tags != "" ]] && [[ $tags != *"No tags found"* ]]; then
echo "Error: tags not empty: $tags"
return 1
fi
echo "error getting bucket tags: $tags"
return 1
fi
export tags
return 0
}
check_object_tags_empty() {
if [[ $# -ne 3 ]]; then
echo "bucket tags empty check requires command type, bucket, and key"
return 2
fi
get_object_tags "$1" "$2" "$3" || get_result=$?
if [[ $get_result -ne 0 ]]; then
echo "failed to get tags"
return 2
fi
check_tags_empty "$1" || local check_result=$?
return $check_result
}
check_bucket_tags_empty() {
if [[ $# -ne 2 ]]; then
echo "bucket tags empty check requires command type, bucket"
return 2
fi
get_bucket_tagging "$1" "$2" || get_result=$?
if [[ $get_result -ne 0 ]]; then
echo "failed to get tags"
return 2
fi
check_tags_empty "$1" || local check_result=$?
return $check_result
}
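# Hedged usage sketch for the two wrappers above (hypothetical names); both
# return 0 when no tags are set and 2 on lookup errors:
#   check_bucket_tags_empty "aws" "my-bucket" || echo "bucket has tags"
#   check_object_tags_empty "aws" "my-bucket" "my-key" || echo "object has tags"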
delete_bucket_tags() {
@@ -627,6 +567,35 @@ put_object_tag() {
return 0
}
get_and_verify_object_tags() {
if [[ $# -ne 5 ]]; then
echo "get and verify object tags missing command type, bucket, key, tag key, tag value"
return 1
fi
get_object_tags "$1" "$2" "$3" || get_result=$?
if [[ $get_result -ne 0 ]]; then
echo "failed to get tags"
return 1
fi
if [[ $1 == 'aws' ]]; then
tag_set_key=$(echo "$tags" | jq '.TagSet[0].Key')
tag_set_value=$(echo "$tags" | jq '.TagSet[0].Value')
if [[ $tag_set_key != '"'$4'"' ]]; then
echo "Key mismatch ($tag_set_key, \"$4\")"
return 1
fi
if [[ $tag_set_value != '"'$5'"' ]]; then
echo "Value mismatch ($tag_set_value, \"$5\")"
return 1
fi
else
read -r tag_set_key tag_set_value <<< "$(echo "$tags" | awk 'NR==2 {print $1, $3}')"
[[ $tag_set_key == "$4" ]] || fail "Key mismatch"
[[ $tag_set_value == "$5" ]] || fail "Value mismatch"
fi
return 0
}
# get object tags
# params: bucket
# export 'tags' on success, return 1 for error
@@ -645,8 +614,15 @@ get_object_tags() {
return 1
fi
if [[ $result -ne 0 ]]; then
echo "error getting object tags: $tags"
return 1
if [[ "$tags" == *"NoSuchTagSet"* ]] || [[ "$tags" == *"No tags found"* ]]; then
tags=
else
echo "error getting object tags: $tags"
return 1
fi
else
log 5 "$tags"
tags=$(echo "$tags" | grep -v "InsecureRequestWarning")
fi
export tags
}
@@ -787,29 +763,12 @@ multipart_upload() {
return 0
}
# run the abort multipart command
# params: bucket, key, upload ID
# return 0 for success, 1 for failure
run_abort_command() {
if [ $# -ne 3 ]; then
echo "command to run abort requires bucket, key, upload ID"
return 1
fi
error=$(aws --no-verify-ssl s3api abort-multipart-upload --bucket "$1" --key "$2" --upload-id "$3") || local aborted=$?
if [[ $aborted -ne 0 ]]; then
echo "Error aborting upload: $error"
return 1
fi
return 0
}
# run upload, then abort it
# params: bucket, key, local file location, number of parts to split into before uploading
# return 0 for success, 1 for failure
abort_multipart_upload() {
run_then_abort_multipart_upload() {
if [ $# -ne 4 ]; then
echo "abort multipart upload command missing bucket, key, file, and/or part count"
echo "run then abort multipart upload command missing bucket, key, file, and/or part count"
return 1
fi
@@ -819,7 +778,7 @@ abort_multipart_upload() {
return 1
fi
run_abort_command "$1" "$2" "$upload_id"
abort_multipart_upload "$1" "$2" "$upload_id"
return $?
}
@@ -909,9 +868,10 @@ multipart_upload_from_bucket() {
fi
for ((i=0;i<$4;i++)) {
put_object "aws" "$3"-"$i" "$1" || put_result=$?
if [[ $put_result -ne 0 ]]; then
echo "error putting object"
echo "key: $3"
put_object "s3api" "$3-$i" "$1" "$2-$i" || copy_result=$?
if [[ $copy_result -ne 0 ]]; then
echo "error copying object"
return 1
fi
}
@@ -952,6 +912,7 @@ upload_part_copy() {
return 1
fi
local etag_json
echo "$1 $2 $3 $4 $5"
etag_json=$(aws --no-verify-ssl s3api upload-part-copy --bucket "$1" --key "$2" --upload-id "$3" --part-number "$5" --copy-source "$1/$4-$(($5-1))") || local uploaded=$?
if [[ $uploaded -ne 0 ]]; then
echo "Error uploading part $5: $etag_json"
@@ -985,27 +946,3 @@ create_presigned_url() {
fi
export presigned_url
}
head_bucket() {
if [ $# -ne 2 ]; then
echo "head bucket command missing command type, bucket name"
return 1
fi
local exit_code=0
local error
if [[ $1 == "aws" ]]; then
bucket_info=$(aws --no-verify-ssl s3api head-bucket --bucket "$2" 2>&1) || exit_code=$?
elif [[ $1 == "s3cmd" ]]; then
bucket_info=$(s3cmd --no-check-certificate info "s3://$2" 2>&1) || exit_code=$?
elif [[ $1 == 'mc' ]]; then
bucket_info=$(mc --insecure stat "$MC_ALIAS"/"$2" 2>&1) || exit_code=$?
else
echo "invalid command type $1"
return 1
fi
if [ $exit_code -ne 0 ]; then
echo "error getting bucket info: $bucket_info"
return 1
fi
export bucket_info
}

tests/util_aws.sh Normal file

@@ -0,0 +1,40 @@
#!/usr/bin/env bash
abort_all_multipart_uploads() {
if [[ $# -ne 1 ]]; then
echo "abort all multipart uploads command missing bucket name"
return 1
fi
upload_list=$(aws --no-verify-ssl s3api list-multipart-uploads --bucket "$1" 2>&1) || list_result=$?
if [[ $list_result -ne 0 ]]; then
echo "error listing multipart uploads: $upload_list"
return 1
fi
log 5 "$upload_list"
while IFS= read -r line; do
if [[ $line != *"InsecureRequestWarning"* ]]; then
modified_upload_list+=("$line")
fi
done <<< "$upload_list"
log 5 "Modified upload list: ${modified_upload_list[*]}"
has_uploads=$(echo "${modified_upload_list[*]}" | jq 'has("Uploads")')
if [[ $has_uploads != false ]]; then
lines=$(echo "${modified_upload_list[*]}" | jq -r '.Uploads[] | "--key \(.Key) --upload-id \(.UploadId)"') || lines_result=$?
if [[ $lines_result -ne 0 ]]; then
echo "error getting lines for multipart upload delete: $lines"
return 1
fi
log 5 "$lines"
while read -r line; do
error=$(aws --no-verify-ssl s3api abort-multipart-upload --bucket "$1" $line 2>&1) || abort_result=$?
if [[ $abort_result -ne 0 ]]; then
echo "error aborting multipart upload: $error"
return 1
fi
done <<< "$lines"
fi
return 0
}
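# Hedged usage sketch: clear any leftover uploads before reusing a bucket, as
# the list-uploads test above does when RECREATE_BUCKETS is false:
#   abort_all_multipart_uploads "$BUCKET_ONE_NAME" || echo "error aborting leftover uploads"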


@@ -0,0 +1,53 @@
#!/usr/bin/env bash
source ./tests/util_mc.sh
source ./tests/logger.sh
create_bucket_with_user() {
if [ $# -ne 4 ]; then
echo "create bucket missing command type, bucket name, access, secret"
return 1
fi
local exit_code=0
if [[ $1 == "aws" ]]; then
error=$(AWS_ACCESS_KEY_ID="$3" AWS_SECRET_ACCESS_KEY="$4" aws --no-verify-ssl s3 mb s3://"$2" 2>&1) || exit_code=$?
elif [[ $1 == "s3cmd" ]]; then
error=$(s3cmd "${S3CMD_OPTS[@]}" --no-check-certificate mb s3://"$2" 2>&1) || exit_code=$?
elif [[ $1 == "mc" ]]; then
error=$(mc --insecure mb "$MC_ALIAS"/"$2" 2>&1) || exit_code=$?
else
echo "invalid command type $1"
return 1
fi
if [ $exit_code -ne 0 ]; then
echo "error creating bucket: $error"
export error
return 1
fi
return 0
}
create_bucket_invalid_name() {
if [ $# -ne 1 ]; then
echo "create bucket w/invalid name missing command type"
return 1
fi
local exit_code=0
if [[ $1 == "aws" ]] || [[ $1 == 's3' ]]; then
bucket_create_error=$(aws --no-verify-ssl s3 mb "s3://" 2>&1) || exit_code=$?
elif [[ $1 == 's3api' ]]; then
bucket_create_error=$(aws --no-verify-ssl s3api create-bucket --bucket "s3://" 2>&1) || exit_code=$?
elif [[ $1 == 's3cmd' ]]; then
bucket_create_error=$(s3cmd "${S3CMD_OPTS[@]}" --no-check-certificate mb "s3://" 2>&1) || exit_code=$?
elif [[ $1 == 'mc' ]]; then
bucket_create_error=$(mc --insecure mb "$MC_ALIAS" 2>&1) || exit_code=$?
else
echo "invalid command type $1"
return 1
fi
if [ $exit_code -eq 0 ]; then
echo "error: bucket should have not been created but was"
return 1
fi
export bucket_create_error
}

tests/util_policy.sh Normal file

@@ -0,0 +1,28 @@
#!/usr/bin/env bash
check_for_empty_policy() {
if [[ $# -ne 2 ]]; then
echo "check for empty policy command requires command type, bucket name"
return 1
fi
local get_result=0
get_bucket_policy "$1" "$2" || get_result=$?
if [[ $get_result -ne 0 ]]; then
echo "error getting bucket policy"
return 1
fi
# shellcheck disable=SC2154
if [[ $bucket_policy == "" ]]; then
return 0
fi
policy=$(echo "$bucket_policy" | jq -r '.Policy')
statement=$(echo "$policy" | jq -r '.Statement[0]')
if [[ "" != "$statement" ]] && [[ "null" != "$statement" ]]; then
echo "policy should be empty (actual value: '$statement')"
return 1
fi
return 0
}
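# Hedged usage sketch: both a missing policy and one with an empty Statement
# list count as "empty" here.
#   check_for_empty_policy "aws" "$BUCKET_ONE_NAME" || echo "policy still set"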


@@ -1,97 +0,0 @@
#!/usr/bin/env bats
# check if object exists both on S3 and locally
# param: object path
# 0 for yes, 1 for no, 2 for error
object_exists_remote_and_local() {
if [ $# -ne 1 ]; then
echo "object existence check requires single name parameter"
return 2
fi
object_exists "aws" "$1" || local exist_result=$?
if [[ $exist_result -eq 2 ]]; then
echo "Error checking if object exists"
return 2
fi
if [[ $exist_result -eq 1 ]]; then
echo "Error: object doesn't exist remotely"
return 1
fi
if [[ ! -e "$LOCAL_FOLDER"/"$1" ]]; then
echo "Error: object doesn't exist locally"
return 1
fi
return 0
}
# check if object doesn't exist both on S3 and locally
# param: object path
# return 0 for doesn't exist, 1 for still exists, 2 for error
object_not_exists_remote_and_local() {
if [ $# -ne 1 ]; then
echo "object non-existence check requires single name parameter"
return 2
fi
object_exists "aws" "$1" || local exist_result=$?
if [[ $exist_result -eq 2 ]]; then
echo "Error checking if object doesn't exist"
return 2
fi
if [[ $exist_result -eq 0 ]]; then
echo "Error: object exists remotely"
return 1
fi
if [[ -e "$LOCAL_FOLDER"/"$1" ]]; then
echo "Error: object exists locally"
return 1
fi
return 0
}
# check if a bucket doesn't exist both on S3 and on gateway
# param: bucket name
# return: 0 for doesn't exist, 1 for does, 2 for error
bucket_not_exists_remote_and_local() {
if [ $# -ne 1 ]; then
echo "bucket existence check requires single name parameter"
return 2
fi
bucket_exists "aws" "$1" || local exist_result=$?
if [[ $exist_result -eq 2 ]]; then
echo "Error checking if bucket exists"
return 2
fi
if [[ $exist_result -eq 0 ]]; then
echo "Error: bucket exists remotely"
return 1
fi
if [[ -e "$LOCAL_FOLDER"/"$1" ]]; then
echo "Error: bucket exists locally"
return 1
fi
return 0
}
# check if a bucket exists both on S3 and on gateway
# param: bucket name
# return: 0 for yes, 1 for no, 2 for error
bucket_exists_remote_and_local() {
if [ $# -ne 1 ]; then
echo "bucket existence check requires single name parameter"
return 2
fi
bucket_exists "aws" "$1" || local exist_result=$?
if [[ $exist_result -eq 2 ]]; then
echo "Error checking if bucket exists"
return 2
fi
if [[ $exist_result -eq 1 ]]; then
echo "Error: bucket doesn't exist remotely"
return 1
fi
if [[ ! -e "$LOCAL_FOLDER"/"$1" ]]; then
echo "Error: bucket doesn't exist locally"
return 1
fi
return 0
}

tests/util_users.sh Normal file

@@ -0,0 +1,85 @@
#!/usr/bin/env bash
create_user() {
if [[ $# -ne 3 ]]; then
echo "create user command requires user ID, key, and role"
return 1
fi
create_user_with_user "$AWS_ACCESS_KEY_ID" "$AWS_SECRET_ACCESS_KEY" "$1" "$2" "$3" || create_result=$?
if [[ $create_result -ne 0 ]]; then
echo "error creating user: $error"
return 1
fi
return 0
}
create_user_with_user() {
if [[ $# -ne 5 ]]; then
echo "create user with user command requires creator ID, key, and new user ID, key, and role"
return 1
fi
error=$($VERSITY_EXE admin --allow-insecure --access "$1" --secret "$2" --endpoint-url "$AWS_ENDPOINT_URL" create-user --access "$3" --secret "$4" --role "$5") || local create_result=$?
if [[ $create_result -ne 0 ]]; then
echo "error creating user: $error"
return 1
fi
return 0
}
list_users() {
users=$($VERSITY_EXE admin --allow-insecure --access "$AWS_ACCESS_KEY_ID" --secret "$AWS_SECRET_ACCESS_KEY" --endpoint-url "$AWS_ENDPOINT_URL" list-users) || local list_result=$?
if [[ $list_result -ne 0 ]]; then
echo "error listing users: $users"
return 1
fi
parsed_users=()
while IFS= read -r line; do
parsed_users+=("$line")
done < <(awk 'NR>2 {print $1}' <<< "$users")
export parsed_users
return 0
}
user_exists() {
if [[ $# -ne 1 ]]; then
echo "user exists command requires username"
return 2
fi
list_users || local list_result=$?
if [[ $list_result -ne 0 ]]; then
echo "error listing user"
return 2
fi
for element in "${parsed_users[@]}"; do
if [[ $element == "$1" ]]; then
return 0
fi
done
return 1
}
delete_user() {
if [[ $# -ne 1 ]]; then
echo "delete user command requires user ID"
return 1
fi
error=$($VERSITY_EXE admin --allow-insecure --access "$AWS_ACCESS_KEY_ID" --secret "$AWS_SECRET_ACCESS_KEY" --endpoint-url "$AWS_ENDPOINT_URL" delete-user --access "$1") || local delete_result=$?
if [[ $delete_result -ne 0 ]]; then
echo "error deleting user: $error"
return 1
fi
return 0
}
change_bucket_owner() {
if [[ $# -ne 4 ]]; then
echo "change bucket owner command requires ID, key, bucket name, and new owner"
return 1
fi
error=$($VERSITY_EXE admin --allow-insecure --access "$1" --secret "$2" --endpoint-url "$AWS_ENDPOINT_URL" change-bucket-owner --bucket "$3" --owner "$4" 2>&1) || local change_result=$?
if [[ $change_result -ne 0 ]]; then
echo "error changing bucket owner: $error"
return 1
fi
return 0
}
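# Hedged end-to-end usage sketch for the helpers above (hypothetical user and
# bucket names):
#   create_user "testuser" "testpass123" "user"
#   change_bucket_owner "$AWS_ACCESS_KEY_ID" "$AWS_SECRET_ACCESS_KEY" "my-bucket" "testuser"
#   delete_user "testuser"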


@@ -44,6 +44,9 @@ check_exe_params() {
elif [[ $RUN_VERSITYGW != "true" ]] && [[ $RUN_VERSITYGW != "false" ]]; then
echo "RUN_VERSITYGW must be 'true' or 'false'"
return 1
elif [ -z "$USERS_FOLDER" ]; then
echo "No users folder parameter set"
return 1
fi
if [[ -r $GOCOVERDIR ]]; then
export GOCOVERDIR=$GOCOVERDIR
@@ -89,7 +92,7 @@ start_versity() {
fi
fi
export AWS_ACCESS_KEY_ID AWS_SECRET_ACCESS_KEY AWS_REGION AWS_PROFILE AWS_ENDPOINT_URL
export AWS_ACCESS_KEY_ID AWS_SECRET_ACCESS_KEY AWS_REGION AWS_PROFILE AWS_ENDPOINT_URL VERSITY_EXE
}
start_versity_process() {
@@ -128,7 +131,7 @@ run_versity_app_posix() {
echo "run versity app w/posix command requires access ID, secret key, process number"
return 1
fi
base_command=("$VERSITY_EXE" --access="$1" --secret="$2" --region="$AWS_REGION")
base_command=("$VERSITY_EXE" --access="$1" --secret="$2" --region="$AWS_REGION" --iam-dir="$USERS_FOLDER")
if [ -n "$CERT" ] && [ -n "$KEY" ]; then
base_command+=(--cert "$CERT" --key "$KEY")
fi