Compare commits

..

25 Commits

Author SHA1 Message Date
Ben McClelland
adbc8140ed Merge pull request #714 from versity/ben/get_obj_dir
fix: head/get/delete/copy directory object should fail when corresponding …
2024-08-06 10:12:51 -07:00
Ben McClelland
ce9d3aa01a Merge pull request #715 from versity/dependabot/go_modules/dev-dependencies-18a6b4804e
chore(deps): bump the dev-dependencies group with 4 updates
2024-08-05 15:29:18 -07:00
dependabot[bot]
2e6bef6760 chore(deps): bump the dev-dependencies group with 4 updates
Bumps the dev-dependencies group with 4 updates: [github.com/aws/aws-sdk-go-v2/service/s3](https://github.com/aws/aws-sdk-go-v2), [golang.org/x/sys](https://github.com/golang/sys), [golang.org/x/time](https://github.com/golang/time) and [github.com/aws/aws-sdk-go-v2/feature/s3/manager](https://github.com/aws/aws-sdk-go-v2).


Updates `github.com/aws/aws-sdk-go-v2/service/s3` from 1.58.2 to 1.58.3
- [Release notes](https://github.com/aws/aws-sdk-go-v2/releases)
- [Commits](https://github.com/aws/aws-sdk-go-v2/compare/service/s3/v1.58.2...service/s3/v1.58.3)

Updates `golang.org/x/sys` from 0.22.0 to 0.23.0
- [Commits](https://github.com/golang/sys/compare/v0.22.0...v0.23.0)

Updates `golang.org/x/time` from 0.5.0 to 0.6.0
- [Commits](https://github.com/golang/time/compare/v0.5.0...v0.6.0)

Updates `github.com/aws/aws-sdk-go-v2/feature/s3/manager` from 1.17.9 to 1.17.10
- [Release notes](https://github.com/aws/aws-sdk-go-v2/releases)
- [Commits](https://github.com/aws/aws-sdk-go-v2/compare/config/v1.17.9...config/v1.17.10)

---
updated-dependencies:
- dependency-name: github.com/aws/aws-sdk-go-v2/service/s3
  dependency-type: direct:production
  update-type: version-update:semver-patch
  dependency-group: dev-dependencies
- dependency-name: golang.org/x/sys
  dependency-type: direct:production
  update-type: version-update:semver-minor
  dependency-group: dev-dependencies
- dependency-name: golang.org/x/time
  dependency-type: indirect
  update-type: version-update:semver-minor
  dependency-group: dev-dependencies
- dependency-name: github.com/aws/aws-sdk-go-v2/feature/s3/manager
  dependency-type: direct:production
  update-type: version-update:semver-patch
  dependency-group: dev-dependencies
...

Signed-off-by: dependabot[bot] <support@github.com>
2024-08-05 21:33:43 +00:00
Ben McClelland
797376a235 fix: head/get/delete/copy directory object should fail when corresponding file object exists
The API handlers and backend were stripping the trailing "/" in object
paths. So if a file object existed and a head/get/delete/copy request
came in for the same name with a trailing "/" (indicating a request
for the directory object), the "/" would be stripped and the request
would be handled against the incorrect file object.

This fix adds checks to handle the case with the trailing "/"
in the request.

Fixes #709
2024-08-05 11:55:32 -07:00
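In code terms, the fix is a guard after each stat call, as seen in the posix and scoutfs diffs below. A minimal sketch of the check; the helper name is illustrative, not the gateway's actual API:

```go
package main

import (
	"fmt"
	"os"
	"strings"
)

// isDirectoryObjectMismatch reports whether a key ending in "/" (a
// directory-object request) resolves to a regular file on disk. In
// that case the backend should return NoSuchKey instead of serving
// the file object of the same name.
func isDirectoryObjectMismatch(key string, fi os.FileInfo) bool {
	return strings.HasSuffix(key, "/") && !fi.IsDir()
}

func main() {
	// Stat a regular file and probe both key forms.
	f, err := os.CreateTemp("", "obj")
	if err != nil {
		panic(err)
	}
	defer os.Remove(f.Name())
	fi, _ := f.Stat()
	fmt.Println(isDirectoryObjectMismatch("key/", fi)) // true: must return NoSuchKey
	fmt.Println(isDirectoryObjectMismatch("key", fi))  // false: normal file object
}
```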
Ben McClelland
cb992a4794 Merge pull request #710 from versity/ben/content_type
fix: set default content type to binary/octet-stream
2024-08-02 16:36:54 -07:00
Ben McClelland
61a97e94db fix: set default content type to binary/octet-stream
AWS uses binary/octet-stream for the default content type if the
client doesn't specify the content type. Change the default for
the gateway to match this behavior.

Fixes #697
2024-08-02 09:02:57 -07:00
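A minimal sketch of the defaulting logic; responseContentType is a simplified stand-in for the gateway's response-header plumbing shown in the S3ApiController diff below:

```go
package main

import "fmt"

const defaultContentType = "binary/octet-stream"

// responseContentType falls back to AWS's default when the stored
// object has no content type (the client never specified one).
func responseContentType(ct *string) string {
	if ct == nil || *ct == "" {
		return defaultContentType
	}
	return *ct
}

func main() {
	fmt.Println(responseContentType(nil)) // binary/octet-stream
	text := "text/plain"
	fmt.Println(responseContentType(&text)) // text/plain
}
```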
Luke
18a8813ce7 Test cmdline acl hardcode fix (#712)
* test: s3cmd, s3api acls tests (direct vs s3)
* test: completed s3:PutBucketAcl tests
2024-08-01 17:24:49 -07:00
Ben McClelland
8872e2a428 Merge pull request #707 from versity/ben/scoutfs_listing_etag
fix: allow object listing in scoutfs for files created without etag attrs
2024-07-31 14:56:37 -07:00
Ben McClelland
b421598647 fix: allow object listing in scoutfs for files created without etag attrs
Files created outside of versitygw can be missing etag attributes.
Allow an empty etag so that listing these files does not cause errors.

Fixes #694
2024-07-31 10:01:00 -07:00
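A self-contained sketch of the tolerant etag handling; errNoSuchKey and etagForListing are illustrative stand-ins for the gateway's meta.ErrNoSuchKey and listing code in the scoutfs diff below:

```go
package main

import (
	"errors"
	"fmt"
)

// errNoSuchKey stands in for the gateway's meta.ErrNoSuchKey.
var errNoSuchKey = errors.New("no such key")

// etagForListing maps a missing etag attribute to an empty etag so
// files created outside versitygw still appear in listings.
func etagForListing(attr []byte, err error) (string, error) {
	if err != nil && !errors.Is(err, errNoSuchKey) {
		return "", fmt.Errorf("get etag: %w", err) // real metadata failure
	}
	return string(attr), nil // "" when the attribute is absent
}

func main() {
	etag, err := etagForListing(nil, errNoSuchKey)
	fmt.Printf("etag=%q err=%v\n", etag, err) // etag="" err=<nil>, listing continues
}
```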
Ben McClelland
cacd1d28ea Merge pull request #700 from versity/ben/test-template
chore: add issue template for test cases
2024-07-30 12:34:02 -07:00
Ben McClelland
ad30c251bc chore: add issue template for test cases 2024-07-30 11:13:32 -07:00
Ben McClelland
55c7109c94 Merge pull request #696 from versity/dependabot/go_modules/dev-dependencies-6a99dab1aa
chore(deps): bump the dev-dependencies group with 3 updates
2024-07-29 16:03:38 -07:00
dependabot[bot]
331996d3dd chore(deps): bump the dev-dependencies group with 3 updates
Bumps the dev-dependencies group with 3 updates: [github.com/urfave/cli/v2](https://github.com/urfave/cli), [github.com/aws/aws-sdk-go-v2/feature/s3/manager](https://github.com/aws/aws-sdk-go-v2) and [github.com/xrash/smetrics](https://github.com/xrash/smetrics).


Updates `github.com/urfave/cli/v2` from 2.27.2 to 2.27.3
- [Release notes](https://github.com/urfave/cli/releases)
- [Changelog](https://github.com/urfave/cli/blob/main/docs/CHANGELOG.md)
- [Commits](https://github.com/urfave/cli/compare/v2.27.2...v2.27.3)

Updates `github.com/aws/aws-sdk-go-v2/feature/s3/manager` from 1.17.8 to 1.17.9
- [Release notes](https://github.com/aws/aws-sdk-go-v2/releases)
- [Commits](https://github.com/aws/aws-sdk-go-v2/compare/v1.17.8...config/v1.17.9)

Updates `github.com/xrash/smetrics` from 0.0.0-20240312152122-5f08fbb34913 to 0.0.0-20240521201337-686a1a2994c1
- [Commits](https://github.com/xrash/smetrics/commits)

---
updated-dependencies:
- dependency-name: github.com/urfave/cli/v2
  dependency-type: direct:production
  update-type: version-update:semver-patch
  dependency-group: dev-dependencies
- dependency-name: github.com/aws/aws-sdk-go-v2/feature/s3/manager
  dependency-type: direct:production
  update-type: version-update:semver-patch
  dependency-group: dev-dependencies
- dependency-name: github.com/xrash/smetrics
  dependency-type: indirect
  update-type: version-update:semver-patch
  dependency-group: dev-dependencies
...

Signed-off-by: dependabot[bot] <support@github.com>
2024-07-29 21:19:34 +00:00
Ben McClelland
18a9a23f2f Merge pull request #692 from versity/test_cmdline_coverage
Test cmdline coverage
2024-07-25 13:33:39 -07:00
Luke McCrone
60c8eb795d test: improve command coverage for tools, retention bypass 2024-07-25 15:23:31 -03:00
Ben McClelland
1173ea920b Merge pull request #689 from versity/ben/docker
feat: add arm64 docker images
2024-07-22 22:06:30 -07:00
Ben McClelland
370b51d327 Merge pull request #690 from versity/dependabot/go_modules/dev-dependencies-bdabae4bc6
chore(deps): bump the dev-dependencies group with 9 updates
2024-07-22 22:06:15 -07:00
dependabot[bot]
1a3937de90 chore(deps): bump the dev-dependencies group with 9 updates
Bumps the dev-dependencies group with 9 updates:

| Package | From | To |
| --- | --- | --- |
| [github.com/Azure/azure-sdk-for-go/sdk/azcore](https://github.com/Azure/azure-sdk-for-go) | `1.12.0` | `1.13.0` |
| [github.com/Azure/azure-sdk-for-go/sdk/storage/azblob](https://github.com/Azure/azure-sdk-for-go) | `1.3.2` | `1.4.0` |
| [github.com/pkg/xattr](https://github.com/pkg/xattr) | `0.4.9` | `0.4.10` |
| [github.com/Azure/azure-sdk-for-go/sdk/internal](https://github.com/Azure/azure-sdk-for-go) | `1.9.1` | `1.10.0` |
| [github.com/aws/aws-sdk-go-v2/service/sso](https://github.com/aws/aws-sdk-go-v2) | `1.22.3` | `1.22.4` |
| [github.com/aws/aws-sdk-go-v2/config](https://github.com/aws/aws-sdk-go-v2) | `1.27.26` | `1.27.27` |
| [github.com/aws/aws-sdk-go-v2/credentials](https://github.com/aws/aws-sdk-go-v2) | `1.17.26` | `1.17.27` |
| [github.com/aws/aws-sdk-go-v2/feature/s3/manager](https://github.com/aws/aws-sdk-go-v2) | `1.17.7` | `1.17.8` |
| [github.com/mattn/go-runewidth](https://github.com/mattn/go-runewidth) | `0.0.15` | `0.0.16` |


Updates `github.com/Azure/azure-sdk-for-go/sdk/azcore` from 1.12.0 to 1.13.0
- [Release notes](https://github.com/Azure/azure-sdk-for-go/releases)
- [Changelog](https://github.com/Azure/azure-sdk-for-go/blob/main/documentation/release.md)
- [Commits](https://github.com/Azure/azure-sdk-for-go/compare/sdk/azcore/v1.12.0...sdk/azcore/v1.13.0)

Updates `github.com/Azure/azure-sdk-for-go/sdk/storage/azblob` from 1.3.2 to 1.4.0
- [Release notes](https://github.com/Azure/azure-sdk-for-go/releases)
- [Changelog](https://github.com/Azure/azure-sdk-for-go/blob/main/documentation/release.md)
- [Commits](https://github.com/Azure/azure-sdk-for-go/compare/sdk/storage/azblob/v1.3.2...sdk/azcore/v1.4.0)

Updates `github.com/pkg/xattr` from 0.4.9 to 0.4.10
- [Release notes](https://github.com/pkg/xattr/releases)
- [Commits](https://github.com/pkg/xattr/compare/v0.4.9...v0.4.10)

Updates `github.com/Azure/azure-sdk-for-go/sdk/internal` from 1.9.1 to 1.10.0
- [Release notes](https://github.com/Azure/azure-sdk-for-go/releases)
- [Changelog](https://github.com/Azure/azure-sdk-for-go/blob/main/documentation/release.md)
- [Commits](https://github.com/Azure/azure-sdk-for-go/compare/sdk/azcore/v1.9.1...sdk/azcore/v1.10.0)

Updates `github.com/aws/aws-sdk-go-v2/service/sso` from 1.22.3 to 1.22.4
- [Release notes](https://github.com/aws/aws-sdk-go-v2/releases)
- [Commits](https://github.com/aws/aws-sdk-go-v2/compare/config/v1.22.3...service/pi/v1.22.4)

Updates `github.com/aws/aws-sdk-go-v2/config` from 1.27.26 to 1.27.27
- [Release notes](https://github.com/aws/aws-sdk-go-v2/releases)
- [Commits](https://github.com/aws/aws-sdk-go-v2/compare/config/v1.27.26...config/v1.27.27)

Updates `github.com/aws/aws-sdk-go-v2/credentials` from 1.17.26 to 1.17.27
- [Release notes](https://github.com/aws/aws-sdk-go-v2/releases)
- [Commits](https://github.com/aws/aws-sdk-go-v2/compare/credentials/v1.17.26...credentials/v1.17.27)

Updates `github.com/aws/aws-sdk-go-v2/feature/s3/manager` from 1.17.7 to 1.17.8
- [Release notes](https://github.com/aws/aws-sdk-go-v2/releases)
- [Changelog](https://github.com/aws/aws-sdk-go-v2/blob/v1.17.8/CHANGELOG.md)
- [Commits](https://github.com/aws/aws-sdk-go-v2/compare/v1.17.7...v1.17.8)

Updates `github.com/mattn/go-runewidth` from 0.0.15 to 0.0.16
- [Commits](https://github.com/mattn/go-runewidth/compare/v0.0.15...v0.0.16)

---
updated-dependencies:
- dependency-name: github.com/Azure/azure-sdk-for-go/sdk/azcore
  dependency-type: direct:production
  update-type: version-update:semver-minor
  dependency-group: dev-dependencies
- dependency-name: github.com/Azure/azure-sdk-for-go/sdk/storage/azblob
  dependency-type: direct:production
  update-type: version-update:semver-minor
  dependency-group: dev-dependencies
- dependency-name: github.com/pkg/xattr
  dependency-type: direct:production
  update-type: version-update:semver-patch
  dependency-group: dev-dependencies
- dependency-name: github.com/Azure/azure-sdk-for-go/sdk/internal
  dependency-type: indirect
  update-type: version-update:semver-minor
  dependency-group: dev-dependencies
- dependency-name: github.com/aws/aws-sdk-go-v2/service/sso
  dependency-type: indirect
  update-type: version-update:semver-patch
  dependency-group: dev-dependencies
- dependency-name: github.com/aws/aws-sdk-go-v2/config
  dependency-type: direct:production
  update-type: version-update:semver-patch
  dependency-group: dev-dependencies
- dependency-name: github.com/aws/aws-sdk-go-v2/credentials
  dependency-type: direct:production
  update-type: version-update:semver-patch
  dependency-group: dev-dependencies
- dependency-name: github.com/aws/aws-sdk-go-v2/feature/s3/manager
  dependency-type: direct:production
  update-type: version-update:semver-patch
  dependency-group: dev-dependencies
- dependency-name: github.com/mattn/go-runewidth
  dependency-type: indirect
  update-type: version-update:semver-patch
  dependency-group: dev-dependencies
...

Signed-off-by: dependabot[bot] <support@github.com>
2024-07-22 21:25:55 +00:00
Ben McClelland
ca79182c95 feat: add arm64 docker images 2024-07-22 10:00:58 -07:00
Ben McClelland
d2b004af9a Merge pull request #688 from versity/fix/cp-obj-cp-src-slash
CopySource parsing starting with '/' for CopyObject action
2024-07-22 09:13:30 -07:00
Ben McClelland
93b4926aeb Merge pull request #687 from versity/fix/aws-cli-obj-lock-header
Object lock X-Amz-Bypass-Governance-Retention header for AWS CLI
2024-07-22 09:13:13 -07:00
jonaustin09
12da1e2099 fix: Added X-Amz-Bypass-Governance-Retention header check to accept both 'true' and 'True' values for DeleteObject(s) actions. 2024-07-22 11:43:34 -04:00
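A self-contained illustration of the case-insensitive check; the S3ApiController diff below applies strings.EqualFold the same way:

```go
package main

import (
	"fmt"
	"strings"
)

func main() {
	// The AWS CLI sends "True" while the Go SDK sends "true";
	// strings.EqualFold accepts both, where the previous plain
	// string comparison only matched the lowercase form.
	for _, hdr := range []string{"true", "True", "TRUE", "false"} {
		fmt.Printf("%-5s -> %v\n", hdr, strings.EqualFold(hdr, "true"))
	}
}
```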
jonaustin09
5e484f2355 fix: Fixed CopySource parsing to handle values starting with '/' in the CopyObject action in the posix and azure backends. 2024-07-22 11:30:32 -04:00
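A minimal sketch of the normalization; splitCopySource is illustrative, and strings.TrimPrefix behaves like the index-based guard in the backend diffs below:

```go
package main

import (
	"fmt"
	"strings"
)

// splitCopySource accepts CopySource values that some clients send
// with a leading '/' ("/bucket/key") and splits out bucket and key.
func splitCopySource(cpSrc string) (bucket, key string, ok bool) {
	return strings.Cut(strings.TrimPrefix(cpSrc, "/"), "/")
}

func main() {
	for _, src := range []string{"bkt/obj.txt", "/bkt/obj.txt"} {
		b, k, ok := splitCopySource(src)
		fmt.Println(b, k, ok) // bkt obj.txt true, for both forms
	}
}
```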
Ben McClelland
d521c66171 Merge pull request #677 from versity/test_cmdline_more_user_ops
Test cmdline more user ops
2024-07-17 10:44:41 -07:00
Luke McCrone
c580947b98 test: added user tests, added command recording, re-added s3cmd tests 2024-07-17 13:45:01 -03:00
73 changed files with 1873 additions and 785 deletions

View File

@@ -1,27 +1,23 @@
---
name: Bug report
name: Bug Report
about: Create a report to help us improve
title: ''
title: '[Bug] - <Short Description>'
labels: bug
assignees: ''
---
**Describe the bug**
A clear and concise description of what the bug is.
<!-- A clear and concise description of what the bug is. -->
**To Reproduce**
Steps to reproduce the behavior.
<!-- Steps to reproduce the behavior. -->
**Expected behavior**
A clear and concise description of what you expected to happen.
<!-- A clear and concise description of what you expected to happen. -->
**Server Version**
output of
```
./versitygw -version
uname -a
```
<!-- output of: './versitygw -version && uname -a' -->
**Additional context**
Describe s3 client and version if applicable.
<!-- Describe s3 client and version if applicable. -->

View File

@@ -1,14 +1,14 @@
---
name: Feature request
name: Feature Request
about: Suggest an idea for this project
title: ''
title: '[Feature] - <Short Description>'
labels: enhancement
assignees: ''
---
**Describe the solution you'd like**
A clear and concise description of what you want to happen.
<!-- A clear and concise description of what you want to happen. -->
**Additional context**
Add any other context or screenshots about the feature request here.
<!-- Add any other context or screenshots about the feature request here. -->

33 .github/ISSUE_TEMPLATE/test_case.md vendored Normal file
View File

@@ -0,0 +1,33 @@
---
name: Test Case Request
about: Request new test cases or additional test coverage
title: '[Test Case] - <Short Description>'
labels: 'testcase'
assignees: ''
---
## Description
<!-- Please provide a detailed description of the test case or test coverage request. -->
## Purpose
<!-- Explain why this test case is important and what it aims to achieve. -->
## Scope
<!-- Describe the scope of the test case, including any specific functionalities, features, or modules that should be tested. -->
## Acceptance Criteria
<!-- List the criteria that must be met for the test case to be considered complete. -->
1.
2.
3.
## Additional Context
<!-- Add any other context or screenshots about the feature request here. -->
## Resources
<!-- Provide any resources, documentation, or links that could help in writing the test case. -->
**Thank you for contributing to our project!**

View File

@@ -15,6 +15,13 @@ jobs:
- name: Checkout
uses: actions/checkout@v4
- name: Set up QEMU
uses: docker/setup-qemu-action@v3
- name: Set up Docker Buildx
id: buildx
uses: docker/setup-buildx-action@v3
- name: Log in to Docker Hub
uses: docker/login-action@v3
with:
@@ -43,6 +50,7 @@ jobs:
push: true
tags: ${{ steps.meta.outputs.tags }}
labels: ${{ steps.meta.outputs.labels }}
platforms: linux/amd64,linux/arm64
build-args: |
VERSION=${{ github.event.release.tag_name }}
TIME=${{ github.event.release.published_at }}

View File

@@ -8,17 +8,17 @@ jobs:
fail-fast: false
matrix:
include:
#- set: 1
# LOCAL_FOLDER: /tmp/gw1
# BUCKET_ONE_NAME: versity-gwtest-bucket-one-1
# BUCKET_TWO_NAME: versity-gwtest-bucket-two-1
# IAM_TYPE: folder
# USERS_FOLDER: /tmp/iam1
# AWS_ENDPOINT_URL: https://127.0.0.1:7070
# RUN_SET: "s3cmd"
# RECREATE_BUCKETS: "true"
# PORT: 7070
# BACKEND: "posix"
- set: 1
LOCAL_FOLDER: /tmp/gw1
BUCKET_ONE_NAME: versity-gwtest-bucket-one-1
BUCKET_TWO_NAME: versity-gwtest-bucket-two-1
IAM_TYPE: folder
USERS_FOLDER: /tmp/iam1
AWS_ENDPOINT_URL: https://127.0.0.1:7070
RUN_SET: "s3cmd"
RECREATE_BUCKETS: "true"
PORT: 7070
BACKEND: "posix"
- set: 2
LOCAL_FOLDER: /tmp/gw2
BUCKET_ONE_NAME: versity-gwtest-bucket-one-2

View File

@@ -744,7 +744,12 @@ func (az *Azure) CopyObject(ctx context.Context, input *s3.CopyObjectInput) (*s3
return nil, azureErrToS3Err(err)
}
if strings.Join([]string{*input.Bucket, *input.Key}, "/") == *input.CopySource && isMetaSame(res.Metadata, input.Metadata) {
cpSrc := *input.CopySource
if cpSrc[0] == '/' {
cpSrc = cpSrc[1:]
}
if strings.Join([]string{*input.Bucket, *input.Key}, "/") == cpSrc && isMetaSame(res.Metadata, input.Metadata) {
return nil, s3err.GetAPIError(s3err.ErrInvalidCopyDest)
}
@@ -758,7 +763,7 @@ func (az *Azure) CopyObject(ctx context.Context, input *s3.CopyObjectInput) (*s3
return nil, err
}
resp, err := client.CopyFromURL(ctx, az.serviceURL+"/"+*input.CopySource, &blob.CopyFromURLOptions{
resp, err := client.CopyFromURL(ctx, az.serviceURL+"/"+cpSrc, &blob.CopyFromURLOptions{
BlobTags: tags,
Metadata: parseMetadata(input.Metadata),
})

View File

@@ -1523,7 +1523,20 @@ func (p *Posix) DeleteObject(_ context.Context, input *s3.DeleteObjectInput) err
return fmt.Errorf("stat bucket: %w", err)
}
err = os.Remove(filepath.Join(bucket, object))
objpath := filepath.Join(bucket, object)
fi, err := os.Stat(objpath)
if errors.Is(err, fs.ErrNotExist) {
return s3err.GetAPIError(s3err.ErrNoSuchKey)
}
if err != nil {
return fmt.Errorf("stat object: %w", err)
}
if strings.HasSuffix(object, "/") && !fi.IsDir() {
return s3err.GetAPIError(s3err.ErrNoSuchKey)
}
err = os.Remove(objpath)
if errors.Is(err, fs.ErrNotExist) {
return s3err.GetAPIError(s3err.ErrNoSuchKey)
}
@@ -1629,6 +1642,7 @@ func (p *Posix) GetObject(_ context.Context, input *s3.GetObjectInput) (*s3.GetO
object := *input.Key
objPath := filepath.Join(bucket, object)
fi, err := os.Stat(objPath)
if errors.Is(err, fs.ErrNotExist) {
return nil, s3err.GetAPIError(s3err.ErrNoSuchKey)
@@ -1637,6 +1651,10 @@ func (p *Posix) GetObject(_ context.Context, input *s3.GetObjectInput) (*s3.GetO
return nil, fmt.Errorf("stat object: %w", err)
}
if strings.HasSuffix(object, "/") && !fi.IsDir() {
return nil, s3err.GetAPIError(s3err.ErrNoSuchKey)
}
acceptRange := *input.Range
startOffset, length, err := backend.ParseRange(fi.Size(), acceptRange)
if err != nil {
@@ -1801,6 +1819,7 @@ func (p *Posix) HeadObject(ctx context.Context, input *s3.HeadObjectInput) (*s3.
}
objPath := filepath.Join(bucket, object)
fi, err := os.Stat(objPath)
if errors.Is(err, fs.ErrNotExist) {
return nil, s3err.GetAPIError(s3err.ErrNoSuchKey)
@@ -1808,6 +1827,9 @@ func (p *Posix) HeadObject(ctx context.Context, input *s3.HeadObjectInput) (*s3.
if err != nil {
return nil, fmt.Errorf("stat object: %w", err)
}
if strings.HasSuffix(object, "/") && !fi.IsDir() {
return nil, s3err.GetAPIError(s3err.ErrNoSuchKey)
}
userMetaData := make(map[string]string)
contentType, contentEncoding := p.loadUserMetaData(bucket, object, userMetaData)
@@ -1938,7 +1960,13 @@ func (p *Posix) CopyObject(ctx context.Context, input *s3.CopyObjectInput) (*s3.
if input.ExpectedBucketOwner == nil {
return nil, s3err.GetAPIError(s3err.ErrInvalidRequest)
}
srcBucket, srcObject, ok := strings.Cut(*input.CopySource, "/")
cpSrc := *input.CopySource
if cpSrc[0] == '/' {
cpSrc = cpSrc[1:]
}
srcBucket, srcObject, ok := strings.Cut(cpSrc, "/")
if !ok {
return nil, s3err.GetAPIError(s3err.ErrInvalidCopySource)
}
@@ -1971,10 +1999,13 @@ func (p *Posix) CopyObject(ctx context.Context, input *s3.CopyObjectInput) (*s3.
}
defer f.Close()
fInfo, err := f.Stat()
fi, err := f.Stat()
if err != nil {
return nil, fmt.Errorf("stat object: %w", err)
}
if strings.HasSuffix(srcObject, "/") && !fi.IsDir() {
return nil, s3err.GetAPIError(s3err.ErrNoSuchKey)
}
meta := make(map[string]string)
p.loadUserMetaData(srcBucket, srcObject, meta)
@@ -2000,7 +2031,7 @@ func (p *Posix) CopyObject(ctx context.Context, input *s3.CopyObjectInput) (*s3.
}
}
contentLength := fInfo.Size()
contentLength := fi.Size()
etag, err := p.PutObject(ctx,
&s3.PutObjectInput{
@@ -2014,7 +2045,7 @@ func (p *Posix) CopyObject(ctx context.Context, input *s3.CopyObjectInput) (*s3.
return nil, err
}
fi, err := os.Stat(dstObjdPath)
fi, err = os.Stat(dstObjdPath)
if err != nil {
return nil, fmt.Errorf("stat dst object: %w", err)
}

View File

@@ -487,6 +487,7 @@ func (s *ScoutFS) HeadObject(ctx context.Context, input *s3.HeadObjectInput) (*s
}
objPath := filepath.Join(bucket, object)
fi, err := os.Stat(objPath)
if errors.Is(err, fs.ErrNotExist) {
return nil, s3err.GetAPIError(s3err.ErrNoSuchKey)
@@ -494,6 +495,9 @@ func (s *ScoutFS) HeadObject(ctx context.Context, input *s3.HeadObjectInput) (*s
if err != nil {
return nil, fmt.Errorf("stat object: %w", err)
}
if strings.HasSuffix(object, "/") && !fi.IsDir() {
return nil, s3err.GetAPIError(s3err.ErrNoSuchKey)
}
userMetaData := make(map[string]string)
contentType, contentEncoding := s.loadUserMetaData(bucket, object, userMetaData)
@@ -604,6 +608,7 @@ func (s *ScoutFS) GetObject(_ context.Context, input *s3.GetObjectInput) (*s3.Ge
}
objPath := filepath.Join(bucket, object)
fi, err := os.Stat(objPath)
if errors.Is(err, fs.ErrNotExist) {
return nil, s3err.GetAPIError(s3err.ErrNoSuchKey)
@@ -612,6 +617,10 @@ func (s *ScoutFS) GetObject(_ context.Context, input *s3.GetObjectInput) (*s3.Ge
return nil, fmt.Errorf("stat object: %w", err)
}
if strings.HasSuffix(object, "/") && !fi.IsDir() {
return nil, s3err.GetAPIError(s3err.ErrNoSuchKey)
}
startOffset, length, err := backend.ParseRange(fi.Size(), acceptRange)
if err != nil {
return nil, err
@@ -821,11 +830,14 @@ func (s *ScoutFS) fileToObj(bucket string) backend.GetObjFunc {
if d.IsDir() {
// directory object only happens if directory empty
// check to see if this is a directory object by checking etag
b, err := s.meta.RetrieveAttribute(bucket, path, etagkey)
etagBytes, err := s.meta.RetrieveAttribute(bucket, path, etagkey)
if errors.Is(err, meta.ErrNoSuchKey) || errors.Is(err, fs.ErrNotExist) {
return types.Object{}, backend.ErrSkipObj
}
if err != nil {
return types.Object{}, fmt.Errorf("get etag: %w", err)
}
etag := string(b)
etag := string(etagBytes)
fi, err := d.Info()
if errors.Is(err, fs.ErrNotExist) {
@@ -841,6 +853,7 @@ func (s *ScoutFS) fileToObj(bucket string) backend.GetObjFunc {
ETag: &etag,
Key: &key,
LastModified: backend.GetTimePtr(fi.ModTime()),
StorageClass: types.ObjectStorageClassStandard,
}, nil
}
@@ -849,9 +862,12 @@ func (s *ScoutFS) fileToObj(bucket string) backend.GetObjFunc {
if errors.Is(err, fs.ErrNotExist) {
return types.Object{}, backend.ErrSkipObj
}
if err != nil {
if err != nil && !errors.Is(err, meta.ErrNoSuchKey) {
return types.Object{}, fmt.Errorf("get etag: %w", err)
}
// note: meta.ErrNoSuchKey will return etagBytes = []byte{}
// so this will just set etag to "" if it's not already set
etag := string(b)
fi, err := d.Info()

28 go.mod
View File

@@ -3,12 +3,12 @@ module github.com/versity/versitygw
go 1.21.0
require (
github.com/Azure/azure-sdk-for-go/sdk/azcore v1.12.0
github.com/Azure/azure-sdk-for-go/sdk/azcore v1.13.0
github.com/Azure/azure-sdk-for-go/sdk/azidentity v1.7.0
github.com/Azure/azure-sdk-for-go/sdk/storage/azblob v1.3.2
github.com/Azure/azure-sdk-for-go/sdk/storage/azblob v1.4.0
github.com/DataDog/datadog-go/v5 v5.5.0
github.com/aws/aws-sdk-go-v2 v1.30.3
github.com/aws/aws-sdk-go-v2/service/s3 v1.58.2
github.com/aws/aws-sdk-go-v2/service/s3 v1.58.3
github.com/aws/smithy-go v1.20.3
github.com/go-ldap/ldap/v3 v3.4.8
github.com/gofiber/fiber/v2 v2.52.5
@@ -16,23 +16,23 @@ require (
github.com/google/uuid v1.6.0
github.com/hashicorp/vault-client-go v0.4.3
github.com/nats-io/nats.go v1.36.0
github.com/pkg/xattr v0.4.9
github.com/pkg/xattr v0.4.10
github.com/segmentio/kafka-go v0.4.47
github.com/smira/go-statsd v1.3.3
github.com/urfave/cli/v2 v2.27.2
github.com/urfave/cli/v2 v2.27.3
github.com/valyala/fasthttp v1.55.0
github.com/versity/scoutfs-go v0.0.0-20240325223134-38eb2f5f7d44
golang.org/x/sys v0.22.0
golang.org/x/sys v0.23.0
)
require (
github.com/Azure/azure-sdk-for-go/sdk/internal v1.9.1 // indirect
github.com/Azure/azure-sdk-for-go/sdk/internal v1.10.0 // indirect
github.com/Azure/go-ntlmssp v0.0.0-20221128193559-754e69321358 // indirect
github.com/AzureAD/microsoft-authentication-library-for-go v1.2.2 // indirect
github.com/Microsoft/go-winio v0.6.2 // indirect
github.com/aws/aws-sdk-go-v2/feature/ec2/imds v1.16.11 // indirect
github.com/aws/aws-sdk-go-v2/internal/ini v1.8.0 // indirect
github.com/aws/aws-sdk-go-v2/service/sso v1.22.3 // indirect
github.com/aws/aws-sdk-go-v2/service/sso v1.22.4 // indirect
github.com/aws/aws-sdk-go-v2/service/ssooidc v1.26.4 // indirect
github.com/aws/aws-sdk-go-v2/service/sts v1.30.3 // indirect
github.com/go-asn1-ber/asn1-ber v1.5.7 // indirect
@@ -51,15 +51,15 @@ require (
golang.org/x/crypto v0.25.0 // indirect
golang.org/x/net v0.27.0 // indirect
golang.org/x/text v0.16.0 // indirect
golang.org/x/time v0.5.0 // indirect
golang.org/x/time v0.6.0 // indirect
)
require (
github.com/andybalholm/brotli v1.1.0 // indirect
github.com/aws/aws-sdk-go-v2/aws/protocol/eventstream v1.6.3 // indirect
github.com/aws/aws-sdk-go-v2/config v1.27.26
github.com/aws/aws-sdk-go-v2/credentials v1.17.26
github.com/aws/aws-sdk-go-v2/feature/s3/manager v1.17.7
github.com/aws/aws-sdk-go-v2/config v1.27.27
github.com/aws/aws-sdk-go-v2/credentials v1.17.27
github.com/aws/aws-sdk-go-v2/feature/s3/manager v1.17.10
github.com/aws/aws-sdk-go-v2/internal/configsources v1.3.15 // indirect
github.com/aws/aws-sdk-go-v2/internal/endpoints/v2 v2.6.15 // indirect
github.com/aws/aws-sdk-go-v2/internal/v4a v1.3.15 // indirect
@@ -71,10 +71,10 @@ require (
github.com/klauspost/compress v1.17.9 // indirect
github.com/mattn/go-colorable v0.1.13 // indirect
github.com/mattn/go-isatty v0.0.20 // indirect
github.com/mattn/go-runewidth v0.0.15 // indirect
github.com/mattn/go-runewidth v0.0.16 // indirect
github.com/rivo/uniseg v0.4.7 // indirect
github.com/russross/blackfriday/v2 v2.1.0 // indirect
github.com/valyala/bytebufferpool v1.0.0 // indirect
github.com/valyala/tcplisten v1.0.0 // indirect
github.com/xrash/smetrics v0.0.0-20240312152122-5f08fbb34913 // indirect
github.com/xrash/smetrics v0.0.0-20240521201337-686a1a2994c1 // indirect
)

60 go.sum
View File

@@ -1,13 +1,13 @@
github.com/Azure/azure-sdk-for-go/sdk/azcore v1.12.0 h1:1nGuui+4POelzDwI7RG56yfQJHCnKvwfMoU7VsEp+Zg=
github.com/Azure/azure-sdk-for-go/sdk/azcore v1.12.0/go.mod h1:99EvauvlcJ1U06amZiksfYz/3aFGyIhWGHVyiZXtBAI=
github.com/Azure/azure-sdk-for-go/sdk/azcore v1.13.0 h1:GJHeeA2N7xrG3q30L2UXDyuWRzDM900/65j70wcM4Ww=
github.com/Azure/azure-sdk-for-go/sdk/azcore v1.13.0/go.mod h1:l38EPgmsp71HHLq9j7De57JcKOWPyhrsW1Awm1JS6K0=
github.com/Azure/azure-sdk-for-go/sdk/azidentity v1.7.0 h1:tfLQ34V6F7tVSwoTf/4lH5sE0o6eCJuNDTmH09nDpbc=
github.com/Azure/azure-sdk-for-go/sdk/azidentity v1.7.0/go.mod h1:9kIvujWAA58nmPmWB1m23fyWic1kYZMxD9CxaWn4Qpg=
github.com/Azure/azure-sdk-for-go/sdk/internal v1.9.1 h1:Xy/qV1DyOhhqsU/z0PyFMJfYCxnzna+vBEUtFW0ksQo=
github.com/Azure/azure-sdk-for-go/sdk/internal v1.9.1/go.mod h1:oib6iWdC+sILvNUoJbbBn3xv7TXow7mEp/WRcsYvmow=
github.com/Azure/azure-sdk-for-go/sdk/resourcemanager/storage/armstorage v1.5.0 h1:AifHbc4mg0x9zW52WOpKbsHaDKuRhlI7TVl47thgQ70=
github.com/Azure/azure-sdk-for-go/sdk/resourcemanager/storage/armstorage v1.5.0/go.mod h1:T5RfihdXtBDxt1Ch2wobif3TvzTdumDy29kahv6AV9A=
github.com/Azure/azure-sdk-for-go/sdk/storage/azblob v1.3.2 h1:YUUxeiOWgdAQE3pXt2H7QXzZs0q8UBjgRbl56qo8GYM=
github.com/Azure/azure-sdk-for-go/sdk/storage/azblob v1.3.2/go.mod h1:dmXQgZuiSubAecswZE+Sm8jkvEa7kQgTPVRvwL/nd0E=
github.com/Azure/azure-sdk-for-go/sdk/internal v1.10.0 h1:ywEEhmNahHBihViHepv3xPBn1663uRv2t2q/ESv9seY=
github.com/Azure/azure-sdk-for-go/sdk/internal v1.10.0/go.mod h1:iZDifYGJTIgIIkYRNWPENUnqx6bJ2xnSDFI2tjwZNuY=
github.com/Azure/azure-sdk-for-go/sdk/resourcemanager/storage/armstorage v1.6.0 h1:PiSrjRPpkQNjrM8H0WwKMnZUdu1RGMtd/LdGKUrOo+c=
github.com/Azure/azure-sdk-for-go/sdk/resourcemanager/storage/armstorage v1.6.0/go.mod h1:oDrbWx4ewMylP7xHivfgixbfGBT6APAwsSoHRKotnIc=
github.com/Azure/azure-sdk-for-go/sdk/storage/azblob v1.4.0 h1:Be6KInmFEKV81c0pOAEbRYehLMwmmGI1exuFj248AMk=
github.com/Azure/azure-sdk-for-go/sdk/storage/azblob v1.4.0/go.mod h1:WCPBHsOXfBVnivScjs2ypRfimjEW0qPVLGgJkZlrIOA=
github.com/Azure/go-ntlmssp v0.0.0-20221128193559-754e69321358 h1:mFRzDkZVAjdal+s7s0MwaRv9igoPqLRdzOLzw/8Xvq8=
github.com/Azure/go-ntlmssp v0.0.0-20221128193559-754e69321358/go.mod h1:chxPXzSsl7ZWRAuOIE23GDNzjWuZquvFlgA8xmpunjU=
github.com/AzureAD/microsoft-authentication-library-for-go v1.2.2 h1:XHOnouVk1mxXfQidrMEnLlPk9UMeRtyBTnEFtxkV0kU=
@@ -25,14 +25,14 @@ github.com/aws/aws-sdk-go-v2 v1.30.3 h1:jUeBtG0Ih+ZIFH0F4UkmL9w3cSpaMv9tYYDbzILP
github.com/aws/aws-sdk-go-v2 v1.30.3/go.mod h1:nIQjQVp5sfpQcTc9mPSr1B0PaWK5ByX9MOoDadSN4lc=
github.com/aws/aws-sdk-go-v2/aws/protocol/eventstream v1.6.3 h1:tW1/Rkad38LA15X4UQtjXZXNKsCgkshC3EbmcUmghTg=
github.com/aws/aws-sdk-go-v2/aws/protocol/eventstream v1.6.3/go.mod h1:UbnqO+zjqk3uIt9yCACHJ9IVNhyhOCnYk8yA19SAWrM=
github.com/aws/aws-sdk-go-v2/config v1.27.26 h1:T1kAefbKuNum/AbShMsZEro6eRkeOT8YILfE9wyjAYQ=
github.com/aws/aws-sdk-go-v2/config v1.27.26/go.mod h1:ivWHkAWFrw/nxty5Fku7soTIVdqZaZ7dw+tc5iGW3GA=
github.com/aws/aws-sdk-go-v2/credentials v1.17.26 h1:tsm8g/nJxi8+/7XyJJcP2dLrnK/5rkFp6+i2nhmz5fk=
github.com/aws/aws-sdk-go-v2/credentials v1.17.26/go.mod h1:3vAM49zkIa3q8WT6o9Ve5Z0vdByDMwmdScO0zvThTgI=
github.com/aws/aws-sdk-go-v2/config v1.27.27 h1:HdqgGt1OAP0HkEDDShEl0oSYa9ZZBSOmKpdpsDMdO90=
github.com/aws/aws-sdk-go-v2/config v1.27.27/go.mod h1:MVYamCg76dFNINkZFu4n4RjDixhVr51HLj4ErWzrVwg=
github.com/aws/aws-sdk-go-v2/credentials v1.17.27 h1:2raNba6gr2IfA0eqqiP2XiQ0UVOpGPgDSi0I9iAP+UI=
github.com/aws/aws-sdk-go-v2/credentials v1.17.27/go.mod h1:gniiwbGahQByxan6YjQUMcW4Aov6bLC3m+evgcoN4r4=
github.com/aws/aws-sdk-go-v2/feature/ec2/imds v1.16.11 h1:KreluoV8FZDEtI6Co2xuNk/UqI9iwMrOx/87PBNIKqw=
github.com/aws/aws-sdk-go-v2/feature/ec2/imds v1.16.11/go.mod h1:SeSUYBLsMYFoRvHE0Tjvn7kbxaUhl75CJi1sbfhMxkU=
github.com/aws/aws-sdk-go-v2/feature/s3/manager v1.17.7 h1:kNemAUX+bJFBSfPkGVZ8HFOKIadjLoI2Ua1ZKivhGSo=
github.com/aws/aws-sdk-go-v2/feature/s3/manager v1.17.7/go.mod h1:71S2C1g/Zjn+ANmyoOqJ586OrPF9uC9iiHt9ZAT+MOw=
github.com/aws/aws-sdk-go-v2/feature/s3/manager v1.17.10 h1:zeN9UtUlA6FTx0vFSayxSX32HDw73Yb6Hh2izDSFxXY=
github.com/aws/aws-sdk-go-v2/feature/s3/manager v1.17.10/go.mod h1:3HKuexPDcwLWPaqpW2UR/9n8N/u/3CKcGAzSs8p8u8g=
github.com/aws/aws-sdk-go-v2/internal/configsources v1.3.15 h1:SoNJ4RlFEQEbtDcCEt+QG56MY4fm4W8rYirAmq+/DdU=
github.com/aws/aws-sdk-go-v2/internal/configsources v1.3.15/go.mod h1:U9ke74k1n2bf+RIgoX1SXFed1HLs51OgUSs+Ph0KJP8=
github.com/aws/aws-sdk-go-v2/internal/endpoints/v2 v2.6.15 h1:C6WHdGnTDIYETAm5iErQUiVNsclNx9qbJVPIt03B6bI=
@@ -49,10 +49,10 @@ github.com/aws/aws-sdk-go-v2/service/internal/presigned-url v1.11.17 h1:HGErhhrx
github.com/aws/aws-sdk-go-v2/service/internal/presigned-url v1.11.17/go.mod h1:RkZEx4l0EHYDJpWppMJ3nD9wZJAa8/0lq9aVC+r2UII=
github.com/aws/aws-sdk-go-v2/service/internal/s3shared v1.17.15 h1:246A4lSTXWJw/rmlQI+TT2OcqeDMKBdyjEQrafMaQdA=
github.com/aws/aws-sdk-go-v2/service/internal/s3shared v1.17.15/go.mod h1:haVfg3761/WF7YPuJOER2MP0k4UAXyHaLclKXB6usDg=
github.com/aws/aws-sdk-go-v2/service/s3 v1.58.2 h1:sZXIzO38GZOU+O0C+INqbH7C2yALwfMWpd64tONS/NE=
github.com/aws/aws-sdk-go-v2/service/s3 v1.58.2/go.mod h1:Lcxzg5rojyVPU/0eFwLtcyTaek/6Mtic5B1gJo7e/zE=
github.com/aws/aws-sdk-go-v2/service/sso v1.22.3 h1:Fv1vD2L65Jnp5QRsdiM64JvUM4Xe+E0JyVsRQKv6IeA=
github.com/aws/aws-sdk-go-v2/service/sso v1.22.3/go.mod h1:ooyCOXjvJEsUw7x+ZDHeISPMhtwI3ZCB7ggFMcFfWLU=
github.com/aws/aws-sdk-go-v2/service/s3 v1.58.3 h1:hT8ZAZRIfqBqHbzKTII+CIiY8G2oC9OpLedkZ51DWl8=
github.com/aws/aws-sdk-go-v2/service/s3 v1.58.3/go.mod h1:Lcxzg5rojyVPU/0eFwLtcyTaek/6Mtic5B1gJo7e/zE=
github.com/aws/aws-sdk-go-v2/service/sso v1.22.4 h1:BXx0ZIxvrJdSgSvKTZ+yRBeSqqgPM89VPlulEcl37tM=
github.com/aws/aws-sdk-go-v2/service/sso v1.22.4/go.mod h1:ooyCOXjvJEsUw7x+ZDHeISPMhtwI3ZCB7ggFMcFfWLU=
github.com/aws/aws-sdk-go-v2/service/ssooidc v1.26.4 h1:yiwVzJW2ZxZTurVbYWA7QOrAaCYQR72t0wrSBfoesUE=
github.com/aws/aws-sdk-go-v2/service/ssooidc v1.26.4/go.mod h1:0oxfLkpz3rQ/CHlx5hB7H69YUpFiI1tql6Q6Ne+1bCw=
github.com/aws/aws-sdk-go-v2/service/sts v1.30.3 h1:ZsDKRLXGWHk8WdtyYMoGNO7bTudrvuKpDKgMVRlepGE=
@@ -119,8 +119,8 @@ github.com/mattn/go-colorable v0.1.13/go.mod h1:7S9/ev0klgBDR4GtXTXX8a3vIGJpMovk
github.com/mattn/go-isatty v0.0.16/go.mod h1:kYGgaQfpe5nmfYZH+SKPsOc2e4SrIfOl2e/yFXSvRLM=
github.com/mattn/go-isatty v0.0.20 h1:xfD0iDuEKnDkl03q4limB+vH+GxLEtL/jb4xVJSWWEY=
github.com/mattn/go-isatty v0.0.20/go.mod h1:W+V8PltTTMOvKvAeJH7IuucS94S2C6jfK/D7dTCTo3Y=
github.com/mattn/go-runewidth v0.0.15 h1:UNAjwbU9l54TA3KzvqLGxwWjHmMgBUVhBiTjelZgg3U=
github.com/mattn/go-runewidth v0.0.15/go.mod h1:Jdepj2loyihRzMpdS35Xk/zdY8IAYHsh153qUoGf23w=
github.com/mattn/go-runewidth v0.0.16 h1:E5ScNMtiwvlvB5paMFdw9p4kSQzbXFikJ5SQO6TULQc=
github.com/mattn/go-runewidth v0.0.16/go.mod h1:Jdepj2loyihRzMpdS35Xk/zdY8IAYHsh153qUoGf23w=
github.com/mitchellh/go-homedir v1.1.0 h1:lukF9ziXFxDFPkA1vsr5zpc1XuPDn/wFntq5mG+4E0Y=
github.com/mitchellh/go-homedir v1.1.0/go.mod h1:SfyaCUpYCn1Vlf4IUYiD9fPX4A5wJrkLzIz1N1q0pr0=
github.com/nats-io/nats.go v1.36.0 h1:suEUPuWzTSse/XhESwqLxXGuj8vGRuPRoG7MoRN/qyU=
@@ -135,8 +135,8 @@ github.com/pierrec/lz4/v4 v4.1.21/go.mod h1:gZWDp/Ze/IJXGXf23ltt2EXimqmTUXEy0GFu
github.com/pkg/browser v0.0.0-20240102092130-5ac0b6a4141c h1:+mdjkGKdHQG3305AYmdv1U2eRNDiU2ErMBj1gwrq8eQ=
github.com/pkg/browser v0.0.0-20240102092130-5ac0b6a4141c/go.mod h1:7rwL4CYBLnjLxUqIJNnCWiEdr3bn6IUYi15bNlnbCCU=
github.com/pkg/errors v0.9.1/go.mod h1:bwawxfHBFNV+L2hUp1rHADufV3IMtnDRdf1r5NINEl0=
github.com/pkg/xattr v0.4.9 h1:5883YPCtkSd8LFbs13nXplj9g9tlrwoJRjgpgMu1/fE=
github.com/pkg/xattr v0.4.9/go.mod h1:di8WF84zAKk8jzR1UBTEWh9AUlIZZ7M/JNt8e9B6ktU=
github.com/pkg/xattr v0.4.10 h1:Qe0mtiNFHQZ296vRgUjRCoPHPqH7VdTOrZx3g0T+pGA=
github.com/pkg/xattr v0.4.10/go.mod h1:di8WF84zAKk8jzR1UBTEWh9AUlIZZ7M/JNt8e9B6ktU=
github.com/pmezard/go-difflib v1.0.0 h1:4DBwDE0NGyQoBHbLQYPwSUPoCMWR5BEzIk/f1lZbAQM=
github.com/pmezard/go-difflib v1.0.0/go.mod h1:iKH77koFhYxTK1pcRnkKkqfTogsbg7gZNVY4sRDYZ/4=
github.com/rivo/uniseg v0.2.0/go.mod h1:J6wj4VEh+S6ZtnVlnTBMWIodfgj8LQOQFoIToxlJtxc=
@@ -162,8 +162,8 @@ github.com/stretchr/testify v1.8.0/go.mod h1:yNjHg4UonilssWZ8iaSj1OCr/vHnekPRkoO
github.com/stretchr/testify v1.8.1/go.mod h1:w2LPCIKwWwSfY2zedu0+kehJoqGctiVI29o6fzry7u4=
github.com/stretchr/testify v1.9.0 h1:HtqpIVDClZ4nwg75+f6Lvsy/wHu+3BoSGCbBAcpTsTg=
github.com/stretchr/testify v1.9.0/go.mod h1:r2ic/lqez/lEtzL7wO/rwa5dbSLXVDPFyf8C91i36aY=
github.com/urfave/cli/v2 v2.27.2 h1:6e0H+AkS+zDckwPCUrZkKX38mRaau4nL2uipkJpbkcI=
github.com/urfave/cli/v2 v2.27.2/go.mod h1:g0+79LmHHATl7DAcHO99smiR/T7uGLw84w8Y42x+4eM=
github.com/urfave/cli/v2 v2.27.3 h1:/POWahRmdh7uztQ3CYnaDddk0Rm90PyOgIxgW2rr41M=
github.com/urfave/cli/v2 v2.27.3/go.mod h1:m4QzxcD2qpra4z7WhzEGn74WZLViBnMpb1ToCAKdGRQ=
github.com/valyala/bytebufferpool v1.0.0 h1:GqA5TC/0021Y/b9FG4Oi9Mr3q7XYx6KllzawFIhcdPw=
github.com/valyala/bytebufferpool v1.0.0/go.mod h1:6bBcMArwyJ5K/AmCkWv1jt77kVWyCJ6HpOuEn7z0Csc=
github.com/valyala/fasthttp v1.55.0 h1:Zkefzgt6a7+bVKHnu/YaYSOPfNYNisSVBo/unVCf8k8=
@@ -178,8 +178,8 @@ github.com/xdg-go/scram v1.1.2 h1:FHX5I5B4i4hKRVRBCFRxq1iQRej7WO3hhBuJf+UUySY=
github.com/xdg-go/scram v1.1.2/go.mod h1:RT/sEzTbU5y00aCK8UOx6R7YryM0iF1N2MOmC3kKLN4=
github.com/xdg-go/stringprep v1.0.4 h1:XLI/Ng3O1Atzq0oBs3TWm+5ZVgkq2aqdlvP9JtoZ6c8=
github.com/xdg-go/stringprep v1.0.4/go.mod h1:mPGuuIYwz7CmR2bT9j4GbQqutWS1zV24gijq1dTyGkM=
github.com/xrash/smetrics v0.0.0-20240312152122-5f08fbb34913 h1:+qGGcbkzsfDQNPPe9UDgpxAWQrhbbBXOYJFQDq/dtJw=
github.com/xrash/smetrics v0.0.0-20240312152122-5f08fbb34913/go.mod h1:4aEEwZQutDLsQv2Deui4iYQ6DWTxR14g6m8Wv88+Xqk=
github.com/xrash/smetrics v0.0.0-20240521201337-686a1a2994c1 h1:gEOO8jv9F4OT7lGCjxCBTO/36wtF6j2nSip77qHd4x4=
github.com/xrash/smetrics v0.0.0-20240521201337-686a1a2994c1/go.mod h1:Ohn+xnUBiLI6FVj/9LpzZWtj1/D6lUovWYBkxHVV3aM=
github.com/yuin/goldmark v1.3.5/go.mod h1:mwnBkeHKe2W/ZEtQ+71ViKU8L12m81fl3OWwC1Zlc8k=
github.com/yuin/goldmark v1.4.13/go.mod h1:6yULJ656Px+3vBD8DxQVa3kxgyrAnzto9xy5taEt/CY=
golang.org/x/crypto v0.0.0-20190308221718-c2843e01d9a2/go.mod h1:djNgcEr1/C05ACkg1iLfiJU5Ep61QUkGW8qpdssI0+w=
@@ -231,8 +231,8 @@ golang.org/x/sys v0.8.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.13.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.17.0/go.mod h1:/VUhepiaJMQUp4+oa/7Zr1D23ma6VTLIYjOOTFZPUcA=
golang.org/x/sys v0.18.0/go.mod h1:/VUhepiaJMQUp4+oa/7Zr1D23ma6VTLIYjOOTFZPUcA=
golang.org/x/sys v0.22.0 h1:RI27ohtqKCnwULzJLqkv897zojh5/DwS/ENaMzUOaWI=
golang.org/x/sys v0.22.0/go.mod h1:/VUhepiaJMQUp4+oa/7Zr1D23ma6VTLIYjOOTFZPUcA=
golang.org/x/sys v0.23.0 h1:YfKFowiIMvtgl1UERQoTPPToxltDeZfbj4H7dVUCwmM=
golang.org/x/sys v0.23.0/go.mod h1:/VUhepiaJMQUp4+oa/7Zr1D23ma6VTLIYjOOTFZPUcA=
golang.org/x/term v0.0.0-20201126162022-7de9c90e9dd1/go.mod h1:bj7SfCRtBDWHUb9snDiAeCFNEtKQo2Wmx5Cou7ajbmo=
golang.org/x/term v0.0.0-20210927222741-03fcf44c2211/go.mod h1:jbD1KX2456YbFQfuXm/mYQcufACuNUgVhRMnK/tPxf8=
golang.org/x/term v0.5.0/go.mod h1:jMB1sMXY+tzblOD4FWmEbocvup2/aLOaQEp7JmGp78k=
@@ -250,8 +250,8 @@ golang.org/x/text v0.13.0/go.mod h1:TvPlkZtksWOMsz7fbANvkp4WM8x/WCo/om8BMLbz+aE=
golang.org/x/text v0.14.0/go.mod h1:18ZOQIKpY8NJVqYksKHtTdi31H5itFRjB5/qKTNYzSU=
golang.org/x/text v0.16.0 h1:a94ExnEXNtEwYLGJSIUxnWoxoRz/ZcCsV63ROupILh4=
golang.org/x/text v0.16.0/go.mod h1:GhwF1Be+LQoKShO3cGOHzqOgRrGaYc9AvblQOmPVHnI=
golang.org/x/time v0.5.0 h1:o7cqy6amK/52YcAKIPlM3a+Fpj35zvRj2TP+e1xFSfk=
golang.org/x/time v0.5.0/go.mod h1:3BpzKBy/shNhVucY/MWOyx10tF3SFh9QdLuxbVysPQM=
golang.org/x/time v0.6.0 h1:eTDhh4ZXt5Qf0augr54TN6suAUudPcawVZeIAPU7D4U=
golang.org/x/time v0.6.0/go.mod h1:3BpzKBy/shNhVucY/MWOyx10tF3SFh9QdLuxbVysPQM=
golang.org/x/tools v0.0.0-20180917221912-90fa682c2a6e/go.mod h1:n7NCudcB/nEzxVGmLbDWY5pfWTLqBcC2KZ6jyYvM4mQ=
golang.org/x/tools v0.0.0-20191119224855-298f0cb1881e/go.mod h1:b+2E5dAYhXwXZwtnZ6UAqBI28+e2cm9otk0dWdXHAEo=
golang.org/x/tools v0.1.1/go.mod h1:o0xws9oXOQQZyjljx8fwUC0k7L1pTE6eaCbjGeHmOkk=

View File

@@ -50,7 +50,8 @@ type S3ApiController struct {
}
const (
iso8601Format = "20060102T150405Z"
iso8601Format = "20060102T150405Z"
defaultContentType = "binary/octet-stream"
)
func New(be backend.Backend, iam auth.IAMService, logger s3log.AuditLogger, evs s3event.S3EventSender, mm *metrics.Manager, debug bool, readonly bool) S3ApiController {
@@ -91,6 +92,10 @@ func (c S3ApiController) GetActions(ctx *fiber.Ctx) error {
if keyEnd != "" {
key = strings.Join([]string{key, keyEnd}, "/")
}
path := ctx.Path()
if path[len(path)-1:] == "/" && key[len(key)-1:] != "/" {
key = key + "/"
}
if ctx.Request().URI().QueryArgs().Has("tagging") {
err := auth.VerifyAccess(ctx.Context(), c.be, auth.AccessOptions{
@@ -419,10 +424,15 @@ func (c S3ApiController) GetActions(ctx *fiber.Ctx) error {
lastmod = res.LastModified.Format(timefmt)
}
contentType := getstring(res.ContentType)
if contentType == "" {
contentType = defaultContentType
}
utils.SetResponseHeaders(ctx, []utils.CustomHeader{
{
Key: "Content-Type",
Value: getstring(res.ContentType),
Value: contentType,
},
{
Key: "Content-Encoding",
@@ -1610,7 +1620,7 @@ func (c S3ApiController) PutActions(ctx *fiber.Ctx) error {
}
bypassHdr := ctx.Get("X-Amz-Bypass-Governance-Retention")
bypass := bypassHdr == "true"
bypass := strings.EqualFold(bypassHdr, "true")
if bypass {
policy, err := c.be.GetBucketPolicy(ctx.Context(), bucket)
if err != nil {
@@ -2289,7 +2299,7 @@ func (c S3ApiController) DeleteObjects(ctx *fiber.Ctx) error {
acct := ctx.Locals("account").(auth.Account)
isRoot := ctx.Locals("isRoot").(bool)
parsedAcl := ctx.Locals("parsedAcl").(auth.ACL)
bypass := ctx.Get("X-Amz-Bypass-Governance-Retention")
bypassHdr := ctx.Get("X-Amz-Bypass-Governance-Retention")
var dObj s3response.DeleteObjects
err := xml.Unmarshal(ctx.Body(), &dObj)
@@ -2326,7 +2336,10 @@ func (c S3ApiController) DeleteObjects(ctx *fiber.Ctx) error {
})
}
err = auth.CheckObjectAccess(ctx.Context(), bucket, acct.Access, utils.ParseDeleteObjects(dObj.Objects), bypass == "true", c.be)
// The AWS CLI sends 'True', while Go SDK sends 'true'
bypass := strings.EqualFold(bypassHdr, "true")
err = auth.CheckObjectAccess(ctx.Context(), bucket, acct.Access, utils.ParseDeleteObjects(dObj.Objects), bypass, c.be)
if err != nil {
return SendResponse(ctx, err,
&MetaOpts{
@@ -2365,11 +2378,15 @@ func (c S3ApiController) DeleteActions(ctx *fiber.Ctx) error {
acct := ctx.Locals("account").(auth.Account)
isRoot := ctx.Locals("isRoot").(bool)
parsedAcl := ctx.Locals("parsedAcl").(auth.ACL)
bypass := ctx.Get("X-Amz-Bypass-Governance-Retention")
bypassHdr := ctx.Get("X-Amz-Bypass-Governance-Retention")
if keyEnd != "" {
key = strings.Join([]string{key, keyEnd}, "/")
}
path := ctx.Path()
if path[len(path)-1:] == "/" && key[len(key)-1:] != "/" {
key = key + "/"
}
if ctx.Request().URI().QueryArgs().Has("tagging") {
err := auth.VerifyAccess(ctx.Context(), c.be,
@@ -2470,7 +2487,10 @@ func (c S3ApiController) DeleteActions(ctx *fiber.Ctx) error {
})
}
err = auth.CheckObjectAccess(ctx.Context(), bucket, acct.Access, []string{key}, bypass == "true", c.be)
// The AWS CLI sends 'True', while Go SDK sends 'true'
bypass := strings.EqualFold(bypassHdr, "true")
err = auth.CheckObjectAccess(ctx.Context(), bucket, acct.Access, []string{key}, bypass, c.be)
if err != nil {
return SendResponse(ctx, err,
&MetaOpts{
@@ -2565,6 +2585,10 @@ func (c S3ApiController) HeadObject(ctx *fiber.Ctx) error {
if keyEnd != "" {
key = strings.Join([]string{key, keyEnd}, "/")
}
path := ctx.Path()
if path[len(path)-1:] == "/" && key[len(key)-1:] != "/" {
key = key + "/"
}
var partNumber *int32
if ctx.Request().URI().QueryArgs().Has("partNumber") {
@@ -2687,12 +2711,16 @@ func (c S3ApiController) HeadObject(ctx *fiber.Ctx) error {
Value: getstring(res.ContentEncoding),
})
}
if res.ContentType != nil {
headers = append(headers, utils.CustomHeader{
Key: "Content-Type",
Value: getstring(res.ContentType),
})
contentType := getstring(res.ContentType)
if contentType == "" {
contentType = defaultContentType
}
headers = append(headers, utils.CustomHeader{
Key: "Content-Type",
Value: contentType,
})
utils.SetResponseHeaders(ctx, headers)
return SendResponse(ctx, nil,

View File

@@ -1,6 +1,7 @@
#!/usr/bin/env bash
abort_multipart_upload() {
record_command "abort-multipart-upload" "client:s3api"
if [ $# -ne 3 ]; then
log 2 "'abort multipart upload' command requires bucket, key, upload ID"
return 1
@@ -17,6 +18,7 @@ abort_multipart_upload_with_user() {
log 2 "'abort multipart upload' command requires bucket, key, upload ID, username, password"
return 1
fi
record_command "abort-multipart-upload" "client:s3api"
if ! abort_multipart_upload_error=$(AWS_ACCESS_KEY_ID="$4" AWS_SECRET_ACCESS_KEY="$5" aws --no-verify-ssl s3api abort-multipart-upload --bucket "$1" --key "$2" --upload-id "$3" 2>&1); then
log 2 "Error aborting upload: $abort_multipart_upload_error"
export abort_multipart_upload_error

View File

@@ -6,6 +6,7 @@ complete_multipart_upload() {
return 1
fi
log 5 "complete multipart upload id: $3, parts: $4"
record_command "complete-multipart-upload" "client:s3api"
error=$(aws --no-verify-ssl s3api complete-multipart-upload --bucket "$1" --key "$2" --upload-id "$3" --multipart-upload '{"Parts": '"$4"'}' 2>&1) || local completed=$?
if [[ $completed -ne 0 ]]; then
log 2 "error completing multipart upload: $error"

View File

@@ -7,6 +7,7 @@ copy_object() {
fi
local exit_code=0
local error
record_command "copy-object" "client:$1"
if [[ $1 == 's3' ]]; then
error=$(aws --no-verify-ssl s3 cp "$2" s3://"$3/$4" 2>&1) || exit_code=$?
elif [[ $1 == 's3api' ]] || [[ $1 == 'aws' ]]; then
@@ -29,6 +30,7 @@ copy_object() {
}
copy_object_empty() {
record_command "copy-object" "client:s3api"
error=$(aws --no-verify-ssl s3api copy-object 2>&1) || local result=$?
if [[ $result -eq 0 ]]; then
log 2 "copy object with empty parameters returned no error"

View File

@@ -1,5 +1,7 @@
#!/usr/bin/env bash
source ./tests/report.sh
# create an AWS bucket
# param: bucket name
# return 0 for success, 1 for failure
@@ -9,6 +11,7 @@ create_bucket() {
return 1
fi
record_command "create-bucket" "client:$1"
local exit_code=0
local error
log 6 "create bucket"
@@ -32,7 +35,32 @@ create_bucket() {
return 0
}
create_bucket_with_user() {
if [ $# -ne 4 ]; then
log 2 "create bucket missing command type, bucket name, access, secret"
return 1
fi
local exit_code=0
if [[ $1 == "aws" ]] || [[ $1 == "s3api" ]]; then
error=$(AWS_ACCESS_KEY_ID="$3" AWS_SECRET_ACCESS_KEY="$4" aws --no-verify-ssl s3 mb s3://"$2" 2>&1) || exit_code=$?
elif [[ $1 == "s3cmd" ]]; then
error=$(s3cmd "${S3CMD_OPTS[@]}" --no-check-certificate mb --access_key="$3" --secret_key="$4" s3://"$2" 2>&1) || exit_code=$?
elif [[ $1 == "mc" ]]; then
error=$(mc --insecure mb "$MC_ALIAS"/"$2" 2>&1) || exit_code=$?
else
log 2 "invalid command type $1"
return 1
fi
if [ $exit_code -ne 0 ]; then
log 2 "error creating bucket: $error"
export error
return 1
fi
return 0
}
create_bucket_object_lock_enabled() {
record_command "create-bucket" "client:s3api"
if [ $# -ne 1 ]; then
log 2 "create bucket missing bucket name"
return 1

View File

@@ -4,6 +4,7 @@
# params: bucket, key
# return 0 for success, 1 for failure
create_multipart_upload() {
record_command "create-multipart-upload" "client:s3api"
if [ $# -ne 2 ]; then
log 2 "create multipart upload function must have bucket, key"
return 1
@@ -24,6 +25,7 @@ create_multipart_upload() {
}
create_multipart_upload_with_user() {
record_command "create-multipart-upload" "client:s3api"
if [ $# -ne 4 ]; then
log 2 "create multipart upload function must have bucket, key, username, password"
return 1
@@ -44,6 +46,7 @@ create_multipart_upload_with_user() {
}
create_multipart_upload_params() {
record_command "create-multipart-upload" "client:s3api"
if [ $# -ne 8 ]; then
log 2 "create multipart upload function with params must have bucket, key, content type, metadata, object lock legal hold status, " \
"object lock mode, object lock retain until date, and tagging"
@@ -71,6 +74,7 @@ create_multipart_upload_params() {
}
create_multipart_upload_custom() {
record_command "create-multipart-upload" "client:s3api"
if [ $# -lt 2 ]; then
log 2 "create multipart upload custom function must have at least bucket and key"
return 1

View File

@@ -4,11 +4,17 @@
# param: bucket name
# return 0 for success, 1 for failure
delete_bucket() {
record_command "delete-bucket" "client:$1"
if [ $# -ne 2 ]; then
log 2 "delete bucket missing command type, bucket name"
return 1
fi
if [[ ( $RECREATE_BUCKETS == "false" ) && (( "$2" == "$BUCKET_ONE_NAME" ) || ( "$2" == "$BUCKET_TWO_NAME" )) ]]; then
log 2 "attempt to delete main buckets in static mode"
return 1
fi
local exit_code=0
local error
if [[ $1 == 's3' ]]; then

View File

@@ -1,10 +1,12 @@
#!/usr/bin/env bash
delete_bucket_policy() {
record_command "delete-bucket-policy" "client:$1"
if [[ $# -ne 2 ]]; then
log 2 "delete bucket policy command requires command type, bucket"
return 1
fi
local delete_result=0
if [[ $1 == 'aws' ]] || [[ $1 == 's3api' ]]; then
error=$(aws --no-verify-ssl s3api delete-bucket-policy --bucket "$2" 2>&1) || delete_result=$?
elif [[ $1 == 's3cmd' ]]; then
@@ -23,6 +25,7 @@ delete_bucket_policy() {
}
delete_bucket_policy_with_user() {
record_command "delete-bucket-policy" "client:s3api"
if [[ $# -ne 3 ]]; then
log 2 "'delete bucket policy with user' command requires bucket, username, password"
return 1

View File

@@ -0,0 +1,23 @@
#!/usr/bin/env bash
delete_bucket_tagging() {
record_command "delete-bucket-tagging" "client:$1"
if [ $# -ne 2 ]; then
log 2 "delete bucket tagging command missing command type, bucket name"
return 1
fi
local result
if [[ $1 == 'aws' ]]; then
tags=$(aws --no-verify-ssl s3api delete-bucket-tagging --bucket "$2" 2>&1) || result=$?
elif [[ $1 == 'mc' ]]; then
tags=$(mc --insecure tag remove "$MC_ALIAS"/"$2" 2>&1) || result=$?
else
log 2 "invalid command type $1"
return 1
fi
if [[ $result -ne 0 ]]; then
log 2 "error deleting bucket tagging: $tags"
return 1
fi
return 0
}

View File

@@ -1,6 +1,7 @@
#!/usr/bin/env bash
delete_object() {
record_command "delete-object" "client:$1"
if [ $# -ne 3 ]; then
log 2 "delete object command requires command type, bucket, key"
return 1
@@ -27,7 +28,20 @@ delete_object() {
return 0
}
delete_object_bypass_retention() {
if [[ $# -ne 4 ]]; then
log 2 "'delete-object with bypass retention' requires bucket, key, user, password"
return 1
fi
if ! delete_object_error=$(AWS_ACCESS_KEY_ID="$3" AWS_SECRET_ACCESS_KEY="$4" aws --no-verify-ssl s3api delete-object --bucket "$1" --key "$2" --bypass-governance-retention 2>&1); then
log 2 "error deleting object with bypass retention: $delete_object_error"
return 1
fi
return 0
}
delete_object_with_user() {
record_command "delete-object" "client:$1"
if [ $# -ne 5 ]; then
log 2 "delete object with user command requires command type, bucket, key, access ID, secret key"
return 1
@@ -36,7 +50,7 @@ delete_object_with_user() {
if [[ $1 == 's3' ]]; then
delete_object_error=$(AWS_ACCESS_KEY_ID="$4" AWS_SECRET_ACCESS_KEY="$5" aws --no-verify-ssl s3 rm "s3://$2/$3" 2>&1) || exit_code=$?
elif [[ $1 == 's3api' ]] || [[ $1 == 'aws' ]]; then
delete_object_error=$(AWS_ACCESS_KEY_ID="$4" AWS_SECRET_ACCESS_KEY="$5" aws --no-verify-ssl s3api delete-object --bucket "$2" --key "$3" 2>&1) || exit_code=$?
delete_object_error=$(AWS_ACCESS_KEY_ID="$4" AWS_SECRET_ACCESS_KEY="$5" aws --no-verify-ssl s3api delete-object --bucket "$2" --key "$3" --bypass-governance-retention 2>&1) || exit_code=$?
elif [[ $1 == 's3cmd' ]]; then
delete_object_error=$(s3cmd "${S3CMD_OPTS[@]}" --no-check-certificate rm --access_key="$4" --secret_key="$5" "s3://$2/$3" 2>&1) || exit_code=$?
else

View File

@@ -1,6 +1,7 @@
#!/usr/bin/env bash
delete_object_tagging() {
record_command "delete-object-tagging" "client:$1"
if [[ $# -ne 3 ]]; then
echo "delete object tagging command missing command type, bucket, key"
return 1

View File

@@ -0,0 +1,19 @@
#!/usr/bin/env bash
delete_objects() {
record_command "delete-objects" "client:s3api"
if [[ $# -ne 3 ]]; then
log 2 "'delete-objects' command requires bucket name, two object keys"
return 1
fi
if ! error=$(aws --no-verify-ssl s3api delete-objects --bucket "$1" --delete "{
\"Objects\": [
{\"Key\": \"$2\"},
{\"Key\": \"$3\"}
]
}" 2>&1); then
log 2 "error deleting objects: $error"
return 1
fi
return 0
}

View File

@@ -1,6 +1,7 @@
#!/usr/bin/env bash
get_bucket_acl() {
record_command "get-bucket-acl" "client:$1"
if [ $# -ne 2 ]; then
log 2 "bucket ACL command missing command type, bucket name"
return 1
@@ -18,10 +19,12 @@ get_bucket_acl() {
log 2 "Error getting bucket ACLs: $acl"
return 1
fi
acl=$(echo "$acl" | grep -v "InsecureRequestWarning")
export acl
}
get_bucket_acl_with_user() {
record_command "get-bucket-acl" "client:s3api"
if [ $# -ne 3 ]; then
log 2 "'get bucket ACL with user' command requires bucket name, username, password"
return 1

View File

@@ -1,6 +1,7 @@
#!/usr/bin/env bash
get_bucket_location() {
record_command "get-bucket-location" "client:$1"
if [[ $# -ne 2 ]]; then
echo "get bucket location command requires command type, bucket name"
return 1
@@ -23,6 +24,7 @@ get_bucket_location() {
}
get_bucket_location_aws() {
record_command "get-bucket-location" "client:s3api"
if [[ $# -ne 1 ]]; then
echo "get bucket location (aws) requires bucket name"
return 1
@@ -38,6 +40,7 @@ get_bucket_location_aws() {
}
get_bucket_location_s3cmd() {
record_command "get-bucket-location" "client:s3cmd"
if [[ $# -ne 1 ]]; then
echo "get bucket location (s3cmd) requires bucket name"
return 1
@@ -53,6 +56,7 @@ get_bucket_location_s3cmd() {
}
get_bucket_location_mc() {
record_command "get-bucket-location" "client:mc"
if [[ $# -ne 1 ]]; then
echo "get bucket location (mc) requires bucket name"
return 1

View File

@@ -1,6 +1,7 @@
#!/usr/bin/env bash
get_bucket_ownership_controls() {
record_command "get-bucket-ownership-controls" "client:s3api"
if [[ $# -ne 1 ]]; then
log 2 "'get bucket ownership controls' command requires bucket name"
return 1

View File

@@ -1,6 +1,7 @@
#!/usr/bin/env bash
get_bucket_policy() {
record_command "get-bucket-policy" "client:$1"
if [[ $# -ne 2 ]]; then
log 2 "get bucket policy command requires command type, bucket"
return 1
@@ -25,6 +26,7 @@ get_bucket_policy() {
}
get_bucket_policy_aws() {
record_command "get-bucket-policy" "client:s3api"
if [[ $# -ne 1 ]]; then
log 2 "aws 'get bucket policy' command requires bucket"
return 1
@@ -47,6 +49,7 @@ get_bucket_policy_aws() {
}
get_bucket_policy_with_user() {
record_command "get-bucket-policy" "client:s3api"
if [[ $# -ne 3 ]]; then
log 2 "'get bucket policy with user' command requires bucket, username, password"
return 1
@@ -67,6 +70,7 @@ get_bucket_policy_with_user() {
}
get_bucket_policy_s3cmd() {
record_command "get-bucket-policy" "client:s3cmd"
if [[ $# -ne 1 ]]; then
log 2 "s3cmd 'get bucket policy' command requires bucket"
return 1
@@ -110,6 +114,7 @@ get_bucket_policy_s3cmd() {
}
get_bucket_policy_mc() {
record_command "get-bucket-policy" "client:mc"
if [[ $# -ne 1 ]]; then
echo "aws 'get bucket policy' command requires bucket"
return 1

View File

@@ -4,6 +4,7 @@
# params: bucket
# export 'tags' on success, return 1 for error
get_bucket_tagging() {
record_command "get-bucket-tagging" "client:$1"
if [ $# -ne 2 ]; then
echo "get bucket tag command missing command type, bucket name"
return 1

View File

@@ -1,6 +1,7 @@
#!/usr/bin/env bash
get_bucket_versioning() {
record_command "get-bucket-versioning" "client:s3api"
if [[ $# -ne 2 ]]; then
log 2 "put bucket versioning command requires command type, bucket name"
return 1

View File

@@ -1,6 +1,7 @@
#!/usr/bin/env bash
get_object() {
record_command "get-object" "client:$1"
if [ $# -ne 4 ]; then
log 2 "get object command requires command type, bucket, key, destination"
return 1
@@ -8,26 +9,28 @@ get_object() {
local exit_code=0
local error
if [[ $1 == 's3' ]]; then
error=$(aws --no-verify-ssl s3 mv "s3://$2/$3" "$4" 2>&1) || exit_code=$?
get_object_error=$(aws --no-verify-ssl s3 mv "s3://$2/$3" "$4" 2>&1) || exit_code=$?
elif [[ $1 == 's3api' ]] || [[ $1 == 'aws' ]]; then
error=$(aws --no-verify-ssl s3api get-object --bucket "$2" --key "$3" "$4" 2>&1) || exit_code=$?
get_object_error=$(aws --no-verify-ssl s3api get-object --bucket "$2" --key "$3" "$4" 2>&1) || exit_code=$?
elif [[ $1 == 's3cmd' ]]; then
error=$(s3cmd "${S3CMD_OPTS[@]}" --no-check-certificate get "s3://$2/$3" "$4" 2>&1) || exit_code=$?
get_object_error=$(s3cmd "${S3CMD_OPTS[@]}" --no-check-certificate get "s3://$2/$3" "$4" 2>&1) || exit_code=$?
elif [[ $1 == 'mc' ]]; then
error=$(mc --insecure get "$MC_ALIAS/$2/$3" "$4" 2>&1) || exit_code=$?
get_object_error=$(mc --insecure get "$MC_ALIAS/$2/$3" "$4" 2>&1) || exit_code=$?
else
log 2 "'get object' command not implemented for '$1'"
return 1
fi
log 5 "get object exit code: $exit_code"
if [ $exit_code -ne 0 ]; then
log 2 "error getting object: $error"
log 2 "error getting object: $get_object_error"
export get_object_error
return 1
fi
return 0
}
get_object_with_range() {
record_command "get-object" "client:s3api"
if [[ $# -ne 4 ]]; then
log 2 "'get object with range' requires bucket, key, range, outfile"
return 1
@@ -41,6 +44,7 @@ get_object_with_range() {
}
get_object_with_user() {
record_command "get-object" "client:$1"
if [ $# -ne 6 ]; then
log 2 "'get object with user' command requires command type, bucket, key, save location, aws ID, aws secret key"
return 1

View File

@@ -1,6 +1,7 @@
#!/usr/bin/env bash
get_object_attributes() {
record_command "get-object-attributes" "client:s3api"
if [[ $# -ne 2 ]]; then
log 2 "'get object attributes' command requires bucket, key"
return 1

View File

@@ -5,6 +5,7 @@ get_object_legal_hold() {
log 2 "'get object legal hold' command requires bucket, key"
return 1
fi
record_command "get-object-legal-hold" "client:s3api"
legal_hold=$(aws --no-verify-ssl s3api get-object-legal-hold --bucket "$1" --key "$2" 2>&1) || local get_result=$?
if [[ $get_result -ne 0 ]]; then
log 2 "error getting object legal hold: $legal_hold"

View File

@@ -1,6 +1,7 @@
#!/usr/bin/env bash
get_object_lock_configuration() {
record_command "get-object-lock-configuration" "client:s3api"
if [[ $# -ne 1 ]]; then
log 2 "'get object lock configuration' command missing bucket name"
return 1

View File

@@ -1,13 +1,15 @@
#!/usr/bin/env bash
get_object_retention() {
record_command "get-object-retention" "client:s3api"
if [[ $# -ne 2 ]]; then
log 2 "'get object retention' command requires bucket, key"
return 1
fi
retention=$(aws --no-verify-ssl s3api get-object-retention --bucket "$1" --key "$2" 2>&1) || local get_result=$?
if [[ $get_result -ne 0 ]]; then
if ! retention=$(aws --no-verify-ssl s3api get-object-retention --bucket "$1" --key "$2" 2>&1); then
log 2 "error getting object retention: $retention"
get_object_retention_error=$retention
export get_object_retention_error
return 1
fi
export retention

View File

@@ -1,6 +1,7 @@
#!/usr/bin/env bash
get_object_tagging() {
record_command "get-object-tagging" "client:$1"
if [ $# -ne 3 ]; then
log 2 "get object tag command missing command type, bucket, and/or key"
return 1

View File

@@ -1,6 +1,9 @@
#!/usr/bin/env bash
source ./tests/report.sh
head_bucket() {
record_command "head-bucket" "client:$1"
if [ $# -ne 2 ]; then
echo "head bucket command missing command type, bucket name"
return 1

View File

@@ -1,6 +1,7 @@
#!/usr/bin/env bash
head_object() {
record_command "head-object" "client:$1"
if [ $# -ne 3 ]; then
log 2 "head-object missing command, bucket name, object name"
return 2

View File

@@ -1,6 +1,7 @@
#!/usr/bin/env bash
list_buckets() {
record_command "list-buckets" "client:$1"
if [ $# -ne 1 ]; then
echo "list buckets command missing command type"
return 1
@@ -10,7 +11,7 @@ list_buckets() {
if [[ $1 == 's3' ]]; then
buckets=$(aws --no-verify-ssl s3 ls 2>&1 s3://) || exit_code=$?
elif [[ $1 == 's3api' ]] || [[ $1 == 'aws' ]]; then
list_buckets_s3api || exit_code=$?
list_buckets_s3api "$AWS_ACCESS_KEY_ID" "$AWS_SECRET_ACCESS_KEY" || exit_code=$?
elif [[ $1 == 's3cmd' ]]; then
buckets=$(s3cmd "${S3CMD_OPTS[@]}" --no-check-certificate ls s3:// 2>&1) || exit_code=$?
elif [[ $1 == 'mc' ]]; then
@@ -37,12 +38,54 @@ list_buckets() {
return 0
}
list_buckets_with_user() {
record_command "list-buckets" "client:$1"
if [ $# -ne 3 ]; then
echo "'list buckets as user' command missing command type, username, password"
return 1
fi
local exit_code=0
if [[ $1 == 's3' ]]; then
buckets=$(AWS_ACCESS_KEY_ID="$2" AWS_SECRET_ACCESS_KEY="$3" aws --no-verify-ssl s3 ls 2>&1 s3://) || exit_code=$?
elif [[ $1 == 's3api' ]] || [[ $1 == 'aws' ]]; then
list_buckets_s3api "$2" "$3" || exit_code=$?
elif [[ $1 == 's3cmd' ]]; then
buckets=$(s3cmd "${S3CMD_OPTS[@]}" --no-check-certificate --access_key="$2" --secret_key="$3" ls s3:// 2>&1) || exit_code=$?
elif [[ $1 == 'mc' ]]; then
buckets=$(mc --insecure ls "$MC_ALIAS" 2>&1) || exit_code=$?
else
echo "list buckets command not implemented for '$1'"
return 1
fi
if [ $exit_code -ne 0 ]; then
echo "error listing buckets: $buckets"
return 1
fi
if [[ $1 == 's3api' ]] || [[ $1 == 'aws' ]]; then
return 0
fi
bucket_array=()
while IFS= read -r line; do
bucket_name=$(echo "$line" | awk '{print $NF}')
bucket_array+=("${bucket_name%/}")
done <<< "$buckets"
export bucket_array
return 0
}
list_buckets_s3api() {
output=$(aws --no-verify-ssl s3api list-buckets 2>&1) || exit_code=$?
if [[ $exit_code -ne 0 ]]; then
if [[ $# -ne 2 ]]; then
log 2 "'list_buckets_s3api' requires username, password"
return 1
fi
if ! output=$(AWS_ACCESS_KEY_ID="$1" AWS_SECRET_ACCESS_KEY="$2" aws --no-verify-ssl s3api list-buckets 2>&1); then
echo "error listing buckets: $output"
return 1
fi
log 5 "bucket data: $output"
modified_output=""
while IFS= read -r line; do

View File

@@ -1,6 +1,7 @@
#!/usr/bin/env bash
list_multipart_uploads() {
record_command "list-multipart-uploads" "client:s3api"
if [[ $# -ne 1 ]]; then
log 2 "'list multipart uploads' command requires bucket name"
return 1
@@ -13,6 +14,7 @@ list_multipart_uploads() {
}
list_multipart_uploads_with_user() {
record_command "list-multipart-uploads" "client:s3api"
if [[ $# -ne 3 ]]; then
log 2 "'list multipart uploads' command requires bucket name, username, password"
return 1

View File

@@ -1,6 +1,7 @@
#!/usr/bin/env bash
list_object_versions() {
record_command "list-object-versions" "client:s3api"
if [[ $# -ne 1 ]]; then
log 2 "'list object versions' command requires bucket name"
return 1

View File

@@ -1,6 +1,7 @@
#!/usr/bin/env bash
list_objects() {
record_command "list-objects" "client:$1"
if [ $# -ne 2 ]; then
echo "list objects command requires command type, and bucket or folder"
return 1

View File

@@ -0,0 +1,18 @@
#!/usr/bin/env bash
# list objects in bucket, v2
# param: bucket
# export objects on success, return 1 for failure
list_objects_v2() {
if [ $# -ne 1 ]; then
echo "list objects command missing bucket and/or path"
return 1
fi
record_command "list-objects-v2 client:s3api"
objects=$(aws --no-verify-ssl s3api list-objects-v2 --bucket "$1") || local result=$?
if [[ $result -ne 0 ]]; then
echo "error listing objects: $objects"
return 1
fi
export objects
}
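# Usage sketch (illustrative; assumes the suite's fail helper and an existing
# bucket): list the bucket, then read the key count from the exported JSON.
#   list_objects_v2 "$BUCKET_ONE_NAME" || fail "error listing objects"
#   key_count=$(echo "$objects" | jq -r '.KeyCount')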

View File

@@ -0,0 +1,14 @@
#!/usr/bin/env bash
list_parts() {
if [[ $# -ne 3 ]]; then
log 2 "'list-parts' command requires bucket, key, upload ID"
return 1
fi
record_command "list-parts" "client:s3api"
if ! listed_parts=$(aws --no-verify-ssl s3api list-parts --bucket "$1" --key "$2" --upload-id "$3" 2>&1); then
log 2 "Error listing multipart upload parts: $listed_parts"
return 1
fi
export listed_parts
}
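# Usage sketch (illustrative; upload_id would come from a prior
# create-multipart-upload call): list the parts, then read the first ETag.
#   list_parts "$BUCKET_ONE_NAME" "$bucket_file" "$upload_id" || fail "list-parts failed"
#   first_etag=$(echo "$listed_parts" | grep -v "InsecureRequestWarning" | jq -r '.Parts[0].ETag')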

View File

@@ -1,23 +1,27 @@
#!/usr/bin/env bash
put_bucket_acl() {
put_bucket_acl_s3api() {
record_command "put-bucket-acl" "client:$1"
if [[ $# -ne 3 ]]; then
log 2 "put bucket acl command requires command type, bucket name, acls or username"
return 1
fi
local error=""
local put_result=0
if [[ $1 == 's3api' ]]; then
log 5 "bucket name: $2, acls: $3"
error=$(aws --no-verify-ssl s3api put-bucket-acl --bucket "$2" --access-control-policy "file://$3" 2>&1) || put_result=$?
elif [[ $1 == 's3cmd' ]]; then
error=$(s3cmd "${S3CMD_OPTS[@]}" --no-check-certificate setacl "s3://$2" --acl-grant=read:"$3" 2>&1) || put_result=$?
else
log 2 "put_bucket_acl not implemented for '$1'"
log 5 "bucket name: $2, acls: $3"
if ! error=$(aws --no-verify-ssl s3api put-bucket-acl --bucket "$2" --access-control-policy "file://$3" 2>&1); then
log 2 "error putting bucket acl: $error"
return 1
fi
if [[ $put_result -ne 0 ]]; then
log 2 "error putting bucket acl: $error"
return 0
}
put_bucket_canned_acl_s3cmd() {
record_command "put-bucket-acl" "client:s3cmd"
if [[ $# -ne 2 ]]; then
log 2 "put bucket acl command requires bucket name, permission"
return 1
fi
if ! error=$(s3cmd "${S3CMD_OPTS[@]}" --no-check-certificate setacl "s3://$1" "$2" 2>&1); then
log 2 "error putting s3cmd canned ACL: $error"
return 1
fi
return 0
@@ -28,7 +32,8 @@ put_bucket_canned_acl() {
log 2 "'put bucket canned acl' command requires bucket name, canned ACL"
return 1
fi
if ! error=$(aws --no-verify-ssl s3api put-bucket-acl --bucket "$1" --acl "$2"); then
record_command "put-bucket-acl" "client:s3api"
if ! error=$(aws --no-verify-ssl s3api put-bucket-acl --bucket "$1" --acl "$2" 2>&1); then
log 2 "error re-setting bucket acls: $error"
return 1
fi
@@ -36,11 +41,12 @@ put_bucket_canned_acl() {
}
put_bucket_canned_acl_with_user() {
if [[ $# -ne 2 ]]; then
if [[ $# -ne 4 ]]; then
log 2 "'put bucket canned acl with user' command requires bucket name, canned ACL, username, password"
return 1
fi
if ! error=$(AWS_ACCESS_KEY_ID="$3" AWS_SECRET_ACCESS_KEY="$4" aws --no-verify-ssl s3api put-bucket-acl --bucket "$1" --acl "$2"); then
record_command "put-bucket-acl" "client:s3api"
if ! error=$(AWS_ACCESS_KEY_ID="$3" AWS_SECRET_ACCESS_KEY="$4" aws --no-verify-ssl s3api put-bucket-acl --bucket "$1" --acl "$2" 2>&1); then
log 2 "error re-setting bucket acls: $error"
return 1
fi

View File

@@ -1,6 +1,7 @@
#!/usr/bin/env bash
put_bucket_ownership_controls() {
record_command "put-bucket-ownership-controls" "client:s3api"
if [[ $# -ne 2 ]]; then
log 2 "'put bucket ownership controls' command requires bucket name, control"
return 1

View File

@@ -1,6 +1,7 @@
#!/usr/bin/env bash
put_bucket_policy() {
record_command "put-bucket-policy" "client:$1"
if [[ $# -ne 3 ]]; then
log 2 "'put bucket policy' command requires command type, bucket, policy file"
return 1
@@ -26,6 +27,7 @@ put_bucket_policy() {
}
put_bucket_policy_with_user() {
record_command "put-bucket-policy" "client:s3api"
if [[ $# -ne 4 ]]; then
log 2 "'put bucket policy with user' command requires bucket, policy file, username, password"
return 1

View File

@@ -0,0 +1,24 @@
#!/usr/bin/env bash
put_bucket_tagging() {
if [ $# -ne 4 ]; then
echo "bucket tag command missing command type, bucket name, key, value"
return 1
fi
local error
local result
record_command "put-bucket-tagging" "client:$1"
if [[ $1 == 'aws' ]]; then
error=$(aws --no-verify-ssl s3api put-bucket-tagging --bucket "$2" --tagging "TagSet=[{Key=$3,Value=$4}]" 2>&1) || result=$?
elif [[ $1 == 'mc' ]]; then
error=$(mc --insecure tag set "$MC_ALIAS"/"$2" "$3=$4" 2>&1) || result=$?
else
log 2 "invalid command type $1"
return 1
fi
if [[ $result -ne 0 ]]; then
echo "Error adding bucket tag: $error"
return 1
fi
return 0
}

View File

@@ -1,6 +1,7 @@
#!/usr/bin/env bash
put_bucket_versioning() {
record_command "put-bucket-versioning" "client:s3api"
if [[ $# -ne 3 ]]; then
log 2 "put bucket versioning command requires command type, bucket name, 'Enabled' or 'Suspended'"
return 1

View File

@@ -1,6 +1,9 @@
#!/usr/bin/env bash
source ./tests/report.sh
put_object() {
record_command "put-object" "client:$1"
if [ $# -ne 4 ]; then
log 2 "put object command requires command type, source, destination bucket, destination key"
return 1
@@ -28,6 +31,7 @@ put_object() {
}
put_object_with_user() {
record_command "put-object" "client:$1"
if [ $# -ne 6 ]; then
log 2 "put object command requires command type, source, destination bucket, destination key, aws ID, aws secret key"
return 1

View File

@@ -1,6 +1,7 @@
#!/usr/bin/env bash
put_object_legal_hold() {
record_command "put-object-legal-hold" "client:s3api"
if [[ $# -ne 3 ]]; then
log 2 "'put object legal hold' command requires bucket, key, hold status ('ON' or 'OFF')"
return 1

View File

@@ -0,0 +1,27 @@
#!/usr/bin/env bash
put_object_lock_configuration() {
if [[ $# -ne 4 ]]; then
log 2 "'put-object-lock-configuration' command requires bucket name, enabled, mode, period"
return 1
fi
local config="{\"ObjectLockEnabled\": \"$2\", \"Rule\": {\"DefaultRetention\": {\"Mode\": \"$3\", \"Days\": $4}}}"
if ! error=$(aws --no-verify-ssl s3api put-object-lock-configuration --bucket "$1" --object-lock-configuration "$config" 2>&1); then
log 2 "error putting object lock configuration: $error"
return 1
fi
return 0
}
put_object_lock_configuration_disabled() {
if [[ $# -ne 1 ]]; then
log 2 "'put-object-lock-configuration' disable command requires bucket name"
return 1
fi
local config="{\"ObjectLockEnabled\": \"Enabled\"}"
if ! error=$(aws --no-verify-ssl s3api put-object-lock-configuration --bucket "$1" --object-lock-configuration "$config" 2>&1); then
log 2 "error putting object lock configuration: $error"
return 1
fi
return 0
}
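# For reference, put_object_lock_configuration "$bucket" "Enabled" "GOVERNANCE" 1
# expands $config to the following JSON (illustrative):
#   {"ObjectLockEnabled": "Enabled", "Rule": {"DefaultRetention": {"Mode": "GOVERNANCE", "Days": 1}}}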

View File

@@ -1,6 +1,7 @@
#!/usr/bin/env bash
put_object_retention() {
record_command "put-object-retention" "client:s3api"
if [[ $# -ne 4 ]]; then
log 2 "'put object retention' command requires bucket, key, retention mode, retention date"
return 1

View File

@@ -0,0 +1,24 @@
#!/usr/bin/env bash
put_object_tagging() {
if [ $# -ne 5 ]; then
log 2 "'put-object-tagging' command missing command type, object name, file, key, and/or value"
return 1
fi
local error
local result
record_command "put-object-tagging" "client:$1"
if [[ $1 == 'aws' ]]; then
error=$(aws --no-verify-ssl s3api put-object-tagging --bucket "$2" --key "$3" --tagging "TagSet=[{Key=$4,Value=$5}]" 2>&1) || result=$?
elif [[ $1 == 'mc' ]]; then
error=$(mc --insecure tag set "$MC_ALIAS"/"$2"/"$3" "$4=$5" 2>&1) || result=$?
else
log 2 "invalid command type $1"
return 1
fi
if [[ $result -ne 0 ]]; then
log 2 "Error adding object tag: $error"
return 1
fi
return 0
}

View File

@@ -0,0 +1,37 @@
#!/usr/bin/env bash
put_public_access_block() {
if [[ $# -ne 2 ]]; then
log 2 "'put_public_access_block' command requires bucket, access block list"
return 1
fi
if ! error=$(aws --no-verify-ssl s3api put-public-access-block --bucket "$1" --public-access-block-configuration "$2" 2>&1); then
log 2 "error updating public access block: $error"
return 1
fi
}
put_public_access_block_enable_public_acls() {
if [[ $# -ne 1 ]]; then
log 2 "command requires bucket"
return 1
fi
if ! put_public_access_block "$1" "BlockPublicAcls=false,IgnorePublicAcls=false,BlockPublicPolicy=true,RestrictPublicBuckets=true"; then
log 2 "error putting public acccess block"
return 1
fi
return 0
}
put_public_access_block_disable_public_acls() {
if [[ $# -ne 1 ]]; then
log 2 "command requires bucket"
return 1
fi
if ! put_public_access_block "$1" "BlockPublicAcls=true,IgnorePublicAcls=true,BlockPublicPolicy=true,RestrictPublicBuckets=true"; then
log 2 "error putting public access block"
return 1
fi
return 0
}
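# Usage sketch (illustrative): the second argument is passed straight through
# to --public-access-block-configuration, so any comma-separated subset of the
# four flags works, e.g.
#   put_public_access_block "$BUCKET_ONE_NAME" "BlockPublicPolicy=true,RestrictPublicBuckets=true"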

View File

@@ -1,6 +1,7 @@
#!/usr/bin/env bash
select_object_content() {
record_command "select-object-content" "client:s3api"
if [[ $# -ne 7 ]]; then
log 2 "'select object content' command requires bucket, key, expression, expression type, input serialization, output serialization, outfile"
return 1

View File

@@ -0,0 +1,19 @@
#!/usr/bin/env bash
upload_part() {
if [ $# -ne 5 ]; then
log 2 "upload multipart part function must have bucket, key, upload ID, file name, part number"
return 1
fi
local etag_json
record_command "upload-part" "client:s3api"
if ! etag_json=$(aws --no-verify-ssl s3api upload-part --bucket "$1" --key "$2" --upload-id "$3" --part-number "$5" --body "$4-$(($5-1))" 2>&1); then
log 2 "Error uploading part $5: $etag_json"
return 1
fi
if ! etag=$(echo "$etag_json" | grep -v "InsecureRequestWarning" | jq '.ETag' 2>&1); then
log 2 "error obtaining etag: $etag"
return 1
fi
export etag
}
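# Note: the --body argument "$4-$(($5-1))" assumes the caller has already split
# the source file into zero-indexed chunks named "<file>-0", "<file>-1", ...
# A GNU-coreutils sketch producing such 5MB chunks (names are illustrative):
#   split -b 5m -d -a 1 "$test_file_folder/$bucket_file" "$test_file_folder/$bucket_file-"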

View File

@@ -1,6 +1,7 @@
#!/usr/bin/env bash
upload_part_copy() {
record_command "upload-part-copy" "client:s3api"
if [ $# -ne 5 ]; then
echo "upload multipart part copy function must have bucket, key, upload ID, file name, part number"
return 1
@@ -17,6 +18,7 @@ upload_part_copy() {
}
upload_part_copy_with_range() {
record_command "upload-part-copy" "client:s3api"
if [ $# -ne 6 ]; then
log 2 "upload multipart part copy function must have bucket, key, upload ID, file name, part number, range"
return 1

View File

@@ -92,6 +92,12 @@ check_universal_vars() {
if [[ -n "$DIRECT" ]]; then
export DIRECT
fi
if [[ -n "$DIRECT_DISPLAY_NAME" ]]; then
export DIRECT_DISPLAY_NAME
fi
if [[ -n "$COVERAGE_DB" ]]; then
export COVERAGE_DB
fi
}
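# Usage sketch (illustrative paths): command-coverage recording is opt-in and
# activates when COVERAGE_DB points at a writable sqlite3 file, e.g.
#   COVERAGE_DB=./coverage.db bats ./tests/test_aws.sh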
check_versity_vars() {

View File

@@ -142,6 +142,7 @@ func TestHeadObject(s *S3Conf) {
HeadObject_invalid_part_number(s)
HeadObject_non_existing_mp(s)
HeadObject_mp_success(s)
HeadObject_non_existing_dir_object(s)
HeadObject_success(s)
}
@@ -160,6 +161,7 @@ func TestGetObject(s *S3Conf) {
GetObject_success(s)
GetObject_by_range_success(s)
GetObject_by_range_resp_status(s)
GetObject_non_existing_dir_object(s)
}
func TestListObjects(s *S3Conf) {
@@ -182,6 +184,7 @@ func TestListObjectsV2(s *S3Conf) {
func TestDeleteObject(s *S3Conf) {
DeleteObject_non_existing_object(s)
DeleteObject_non_existing_dir_object(s)
DeleteObject_success(s)
DeleteObject_success_status_code(s)
}
@@ -197,6 +200,8 @@ func TestCopyObject(s *S3Conf) {
CopyObject_not_owned_source_bucket(s)
CopyObject_copy_to_itself(s)
CopyObject_to_itself_with_new_metadata(s)
CopyObject_CopySource_starting_with_slash(s)
CopyObject_non_existing_dir_object(s)
CopyObject_success(s)
}
@@ -575,6 +580,7 @@ func GetIntTests() IntTests {
"HeadObject_invalid_part_number": HeadObject_invalid_part_number,
"HeadObject_non_existing_mp": HeadObject_non_existing_mp,
"HeadObject_mp_success": HeadObject_mp_success,
"HeadObject_non_existing_dir_object": HeadObject_non_existing_dir_object,
"HeadObject_success": HeadObject_success,
"GetObjectAttributes_non_existing_bucket": GetObjectAttributes_non_existing_bucket,
"GetObjectAttributes_non_existing_object": GetObjectAttributes_non_existing_object,
@@ -587,6 +593,7 @@ func GetIntTests() IntTests {
"GetObject_success": GetObject_success,
"GetObject_by_range_success": GetObject_by_range_success,
"GetObject_by_range_resp_status": GetObject_by_range_resp_status,
"GetObject_non_existing_dir_object": GetObject_non_existing_dir_object,
"ListObjects_non_existing_bucket": ListObjects_non_existing_bucket,
"ListObjects_with_prefix": ListObjects_with_prefix,
"ListObject_truncated": ListObject_truncated,
@@ -600,6 +607,7 @@ func GetIntTests() IntTests {
"ListObjectsV2_start_after_not_in_list": ListObjectsV2_start_after_not_in_list,
"ListObjectsV2_start_after_empty_result": ListObjectsV2_start_after_empty_result,
"DeleteObject_non_existing_object": DeleteObject_non_existing_object,
"DeleteObject_non_existing_dir_object": DeleteObject_non_existing_dir_object,
"DeleteObject_success": DeleteObject_success,
"DeleteObject_success_status_code": DeleteObject_success_status_code,
"DeleteObjects_empty_input": DeleteObjects_empty_input,
@@ -609,6 +617,8 @@ func GetIntTests() IntTests {
"CopyObject_not_owned_source_bucket": CopyObject_not_owned_source_bucket,
"CopyObject_copy_to_itself": CopyObject_copy_to_itself,
"CopyObject_to_itself_with_new_metadata": CopyObject_to_itself_with_new_metadata,
"CopyObject_CopySource_starting_with_slash": CopyObject_CopySource_starting_with_slash,
"CopyObject_non_existing_dir_object": CopyObject_non_existing_dir_object,
"CopyObject_success": CopyObject_success,
"PutObjectTagging_non_existing_object": PutObjectTagging_non_existing_object,
"PutObjectTagging_long_tags": PutObjectTagging_long_tags,

View File

@@ -2954,6 +2954,41 @@ func HeadObject_mp_success(s *S3Conf) error {
})
}
func HeadObject_non_existing_dir_object(s *S3Conf) error {
testName := "HeadObject_non_existing_dir_object"
return actionHandler(s, testName, func(s3client *s3.Client, bucket string) error {
obj, dataLen := "my-obj", int64(1234567)
meta := map[string]string{
"key1": "val1",
"key2": "val2",
}
_, _, err := putObjectWithData(dataLen, &s3.PutObjectInput{
Bucket: &bucket,
Key: &obj,
Metadata: meta,
}, s3client)
if err != nil {
return err
}
obj = "my-obj/"
ctx, cancel := context.WithTimeout(context.Background(), shortTimeout)
_, err = s3client.HeadObject(ctx, &s3.HeadObjectInput{
Bucket: &bucket,
Key: &obj,
})
defer cancel()
if err := checkSdkApiErr(err, "NotFound"); err != nil {
return err
}
return nil
})
}
const defaultContentType = "binary/octet-stream"
func HeadObject_success(s *S3Conf) error {
testName := "HeadObject_success"
return actionHandler(s, testName, func(s3client *s3.Client, bucket string) error {
@@ -2992,6 +3027,9 @@ func HeadObject_success(s *S3Conf) error {
if contentLength != dataLen {
return fmt.Errorf("expected data length %v, instead got %v", dataLen, contentLength)
}
if *out.ContentType != defaultContentType {
return fmt.Errorf("expected content type %v, instead got %v", defaultContentType, *out.ContentType)
}
return nil
})
@@ -3376,6 +3414,9 @@ func GetObject_success(s *S3Conf) error {
if *out.ContentLength != dataLength {
return fmt.Errorf("expected content-length %v, instead got %v", dataLength, out.ContentLength)
}
if *out.ContentType != defaultContentType {
return fmt.Errorf("expected content type %v, instead got %v", defaultContentType, *out.ContentType)
}
bdy, err := io.ReadAll(out.Body)
if err != nil {
@@ -3506,6 +3547,33 @@ func GetObject_by_range_resp_status(s *S3Conf) error {
})
}
func GetObject_non_existing_dir_object(s *S3Conf) error {
testName := "GetObject_non_existing_dir_object"
return actionHandler(s, testName, func(s3client *s3.Client, bucket string) error {
dataLength, obj := int64(1234567), "my-obj"
_, _, err := putObjectWithData(dataLength, &s3.PutObjectInput{
Bucket: &bucket,
Key: &obj,
}, s3client)
if err != nil {
return err
}
obj = "my-obj/"
ctx, cancel := context.WithTimeout(context.Background(), shortTimeout)
_, err = s3client.GetObject(ctx, &s3.GetObjectInput{
Bucket: &bucket,
Key: &obj,
})
defer cancel()
if err := checkSdkApiErr(err, "NoSuchKey"); err != nil {
return err
}
return nil
})
}
func ListObjects_non_existing_bucket(s *S3Conf) error {
testName := "ListObjects_non_existing_bucket"
return actionHandler(s, testName, func(s3client *s3.Client, bucket string) error {
@@ -3893,6 +3961,30 @@ func DeleteObject_non_existing_object(s *S3Conf) error {
})
}
func DeleteObject_non_existing_dir_object(s *S3Conf) error {
testName := "DeleteObject_non_existing_dir_object"
return actionHandler(s, testName, func(s3client *s3.Client, bucket string) error {
obj := "my-obj"
err := putObjects(s3client, []string{obj}, bucket)
if err != nil {
return err
}
obj = "my-obj/"
ctx, cancel := context.WithTimeout(context.Background(), shortTimeout)
_, err = s3client.DeleteObject(ctx, &s3.DeleteObjectInput{
Bucket: &bucket,
Key: &obj,
})
cancel()
if err := checkApiErr(err, s3err.GetAPIError(s3err.ErrNoSuchKey)); err != nil {
return err
}
return nil
})
}
func DeleteObject_success(s *S3Conf) error {
testName := "DeleteObject_success"
return actionHandler(s, testName, func(s3client *s3.Client, bucket string) error {
@@ -4215,6 +4307,106 @@ func CopyObject_to_itself_with_new_metadata(s *S3Conf) error {
})
}
func CopyObject_CopySource_starting_with_slash(s *S3Conf) error {
testName := "CopyObject_CopySource_starting_with_slash"
return actionHandler(s, testName, func(s3client *s3.Client, bucket string) error {
dataLength, obj := int64(1234567), "src-obj"
dstBucket := getBucketName()
if err := setup(s, dstBucket); err != nil {
return err
}
csum, _, err := putObjectWithData(dataLength, &s3.PutObjectInput{
Bucket: &bucket,
Key: &obj,
}, s3client)
if err != nil {
return err
}
ctx, cancel := context.WithTimeout(context.Background(), shortTimeout)
_, err = s3client.CopyObject(ctx, &s3.CopyObjectInput{
Bucket: &dstBucket,
Key: &obj,
CopySource: getPtr(fmt.Sprintf("/%v/%v", bucket, obj)),
})
cancel()
if err != nil {
return err
}
ctx, cancel = context.WithTimeout(context.Background(), shortTimeout)
out, err := s3client.GetObject(ctx, &s3.GetObjectInput{
Bucket: &dstBucket,
Key: &obj,
})
defer cancel()
if err != nil {
return err
}
if *out.ContentLength != dataLength {
return fmt.Errorf("expected content-length %v, instead got %v", dataLength, out.ContentLength)
}
defer out.Body.Close()
bdy, err := io.ReadAll(out.Body)
if err != nil {
return err
}
outCsum := sha256.Sum256(bdy)
if outCsum != csum {
return fmt.Errorf("invalid object data")
}
if err := teardown(s, dstBucket); err != nil {
return err
}
return nil
})
}
func CopyObject_non_existing_dir_object(s *S3Conf) error {
testName := "CopyObject_non_existing_dir_object"
return actionHandler(s, testName, func(s3client *s3.Client, bucket string) error {
dataLength, obj := int64(1234567), "my-obj"
dstBucket := getBucketName()
err := setup(s, dstBucket)
if err != nil {
return err
}
_, _, err = putObjectWithData(dataLength, &s3.PutObjectInput{
Bucket: &bucket,
Key: &obj,
}, s3client)
if err != nil {
return err
}
obj = "my-obj/"
ctx, cancel := context.WithTimeout(context.Background(), shortTimeout)
_, err = s3client.CopyObject(ctx, &s3.CopyObjectInput{
Bucket: &dstBucket,
Key: &obj,
CopySource: getPtr(fmt.Sprintf("%v/%v", bucket, obj)),
})
cancel()
if err := checkApiErr(err, s3err.GetAPIError(s3err.ErrNoSuchKey)); err != nil {
return err
}
err = teardown(s, dstBucket)
if err != nil {
return err
}
return nil
})
}
func CopyObject_success(s *S3Conf) error {
testName := "CopyObject_success"
return actionHandler(s, testName, func(s3client *s3.Client, bucket string) error {

91
tests/report.sh Normal file
View File

@@ -0,0 +1,91 @@
#!/usr/bin/env bash
check_and_create_database() {
# Define SQL commands to create a table
SQL_CREATE_TABLE="CREATE TABLE IF NOT EXISTS entries (
id INTEGER PRIMARY KEY AUTOINCREMENT,
command TEXT NOT NULL,
client TEXT NOT NULL,
count INTEGER DEFAULT 1,
UNIQUE(command, client)
);"
# Execute the SQL commands to create the database and table
sqlite3 "$COVERAGE_DB" <<EOF
$SQL_CREATE_TABLE
.exit
EOF
log 5 "Database '$COVERAGE_DB' and table 'entries' created successfully."
}
record_command() {
if [ -z "$COVERAGE_DB" ]; then
log 5 "no coverage db set, not recording"
return 0
fi
if [[ $# -lt 1 ]]; then
log 2 "'record command' requires at least command name"
return 1
fi
check_and_create_database
log 5 "command to record: $1"
client=""
#role="root"
for arg in "${@:2}"; do
log 5 "Argument: $arg"
if [[ $arg != *":"* ]]; then
log 3 "'$arg' must contain colon to record client"
continue
fi
header=$(echo "$arg" | awk -F: '{print $1}')
case $header in
"client")
client=$(echo "$arg" | awk -F: '{print $2}')
;;
#"role")
# role=$(echo "$arg" | awk -F: '{print $2}')
# ;;
esac
done
if ! error=$(sqlite3 "$COVERAGE_DB" "INSERT INTO entries (command, client, count) VALUES(\"$1\", \"$client\", 1) ON CONFLICT(command, client) DO UPDATE SET count = count + 1" 2>&1); then
log 2 "error in sqlite statement: $error"
fi
}
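# Usage sketch (illustrative): helpers tag each call with the operation name
# and a "client:<tool>" pair; repeated calls bump the row's count.
#   record_command "get-object" "client:s3api"
#   sqlite3 "$COVERAGE_DB" "SELECT command, client, count FROM entries;"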
record_result() {
if [ -z "$COVERAGE_DB" ]; then
log 5 "no coverage db set, not recording"
return 0
fi
# Define SQL commands to create a table
SQL_CREATE_TABLE="CREATE TABLE IF NOT EXISTS results (
id INTEGER PRIMARY KEY AUTOINCREMENT,
command TEXT NOT NULL,
client TEXT,
count INTEGER,
pass INTEGER DEFAULT 1,
UNIQUE(command, client)
);"
# Execute the SQL commands to create the database and table
sqlite3 "$COVERAGE_DB" <<EOF
$SQL_CREATE_TABLE
.exit
EOF
# Iterate over each command in the entries table
while IFS="|" read -r command client count; do
if [[ $BATS_TEST_STATUS -eq 0 ]]; then
# Test passed
sqlite3 "$COVERAGE_DB" "INSERT INTO results (command, client, count) VALUES ('$command', '$client', '$count')
ON CONFLICT(command, client) DO UPDATE SET count = count + $count;"
else
# Test failed
sqlite3 "$COVERAGE_DB" "INSERT INTO results (command, client, count, pass) VALUES ('$command', '$client', '$count', 0)
ON CONFLICT(command, client) DO UPDATE SET count = count + $count;"
fi
done < <(sqlite3 "$COVERAGE_DB" "SELECT command, client, count FROM entries;")
sqlite3 "$COVERAGE_DB" "DROP TABLE entries;"
log 5 "Database '$COVERAGE_DB' and table 'entries' created successfully."
}
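# Inspection sketch (illustrative query): after a run, per-command pass/fail
# coverage can be read back from the results table.
#   sqlite3 "$COVERAGE_DB" "SELECT command, client, count, pass FROM results ORDER BY command;"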

View File

@@ -57,4 +57,7 @@ teardown() {
end_time=$(date +%s)
log 4 "Total test time: $((end_time - start_time))"
fi
if [[ -n "$COVERAGE_DB" ]]; then
record_result
fi
}

View File

@@ -6,6 +6,7 @@ source ./tests/util_aws.sh
source ./tests/util_bucket_create.sh
source ./tests/util_file.sh
source ./tests/util_users.sh
source ./tests/test_aws_root_inner.sh
source ./tests/test_common.sh
source ./tests/commands/copy_object.sh
source ./tests/commands/delete_bucket_policy.sh
@@ -26,45 +27,21 @@ source ./tests/commands/put_bucket_policy.sh
source ./tests/commands/put_bucket_versioning.sh
source ./tests/commands/put_object.sh
source ./tests/commands/put_object_legal_hold.sh
source ./tests/commands/put_object_lock_configuration.sh
source ./tests/commands/put_object_retention.sh
source ./tests/commands/put_public_access_block.sh
source ./tests/commands/select_object_content.sh
export RUN_USERS=true
# abort-multipart-upload
@test "test_abort_multipart_upload" {
local bucket_file="bucket-file"
create_test_files "$bucket_file" || fail "error creating test files"
dd if=/dev/urandom of="$test_file_folder/$bucket_file" bs=5M count=1 || fail "error creating test file"
setup_bucket "aws" "$BUCKET_ONE_NAME" || fail "Failed to create bucket '$BUCKET_ONE_NAME'"
run_then_abort_multipart_upload "$BUCKET_ONE_NAME" "$bucket_file" "$test_file_folder"/"$bucket_file" 4 || fail "abort failed"
if object_exists "aws" "$BUCKET_ONE_NAME" "$bucket_file"; then
fail "Upload file exists after abort"
fi
delete_bucket_or_contents "aws" "$BUCKET_ONE_NAME"
delete_test_files $bucket_file
test_abort_multipart_upload_aws_root
}
# complete-multipart-upload
@test "test_complete_multipart_upload" {
local bucket_file="bucket-file"
create_test_files "$bucket_file" || fail "error creating test files"
dd if=/dev/urandom of="$test_file_folder/$bucket_file" bs=5M count=1 || fail "error creating test file"
setup_bucket "aws" "$BUCKET_ONE_NAME" || fail "failed to create bucket '$BUCKET_ONE_NAME'"
multipart_upload "$BUCKET_ONE_NAME" "$bucket_file" "$test_file_folder"/"$bucket_file" 4 || fail "error performing multipart upload"
download_and_compare_file "s3api" "$test_file_folder/$bucket_file" "$BUCKET_ONE_NAME" "$bucket_file" "$test_file_folder/$bucket_file-copy" || fail "error downloading and comparing file"
delete_bucket_or_contents "aws" "$BUCKET_ONE_NAME"
delete_test_files $bucket_file
test_complete_multipart_upload_aws_root
}
# copy-object
@@ -73,8 +50,7 @@ export RUN_USERS=true
}
@test "test_copy_object_empty" {
copy_object_empty || local result=$?
[[ result -eq 0 ]] || fail "copy objects with no parameters test failure"
copy_object_empty || fail "copy objects with no parameters test failure"
}
# create-bucket
@@ -82,82 +58,14 @@ export RUN_USERS=true
test_common_create_delete_bucket "aws"
}
# create-multipart-upload
@test "test_create_multipart_upload_properties" {
local bucket_file="bucket-file"
local expected_content_type="application/zip"
local expected_meta_key="testKey"
local expected_meta_val="testValue"
local expected_hold_status="ON"
local expected_retention_mode="GOVERNANCE"
local expected_tag_key="TestTag"
local expected_tag_val="TestTagVal"
local five_seconds_later
os_name="$(uname)"
if [[ "$os_name" == "Darwin" ]]; then
now=$(date -u +"%Y-%m-%dT%H:%M:%S")
later=$(date -j -v +15S -f "%Y-%m-%dT%H:%M:%S" "$now" +"%Y-%m-%dT%H:%M:%S")
else
now=$(date +"%Y-%m-%dT%H:%M:%S")
later=$(date -d "$now 15 seconds" +"%Y-%m-%dT%H:%M:%S")
fi
create_test_files "$bucket_file" || fail "error creating test file"
dd if=/dev/urandom of="$test_file_folder/$bucket_file" bs=5M count=1 || fail "error creating test file"
delete_bucket_or_contents_if_exists "s3api" "$BUCKET_ONE_NAME" || fail "error deleting bucket, or checking for existence"
# in static bucket config, bucket will still exist
bucket_exists "s3api" "$BUCKET_ONE_NAME" || local exists_result=$?
[[ $exists_result -ne 2 ]] || fail "error checking for bucket existence"
if [[ $exists_result -eq 1 ]]; then
create_bucket_object_lock_enabled "$BUCKET_ONE_NAME" || fail "error creating bucket"
fi
log 5 "LATER: $later"
multipart_upload_with_params "$BUCKET_ONE_NAME" "$bucket_file" "$test_file_folder"/"$bucket_file" 4 \
"$expected_content_type" \
"{\"$expected_meta_key\": \"$expected_meta_val\"}" \
"$expected_hold_status" \
"$expected_retention_mode" \
"$later" \
"$expected_tag_key=$expected_tag_val" || fail "error performing multipart upload"
head_object "s3api" "$BUCKET_ONE_NAME" "$bucket_file" || fail "error getting metadata"
raw_metadata=$(echo "$metadata" | grep -v "InsecureRequestWarning")
log 5 "raw metadata: $raw_metadata"
content_type=$(echo "$raw_metadata" | jq -r ".ContentType")
[[ $content_type == "$expected_content_type" ]] || fail "content type mismatch ($content_type, $expected_content_type)"
meta_val=$(echo "$raw_metadata" | jq -r ".Metadata.$expected_meta_key")
[[ $meta_val == "$expected_meta_val" ]] || fail "metadata val mismatch ($meta_val, $expected_meta_val)"
hold_status=$(echo "$raw_metadata" | jq -r ".ObjectLockLegalHoldStatus")
[[ $hold_status == "$expected_hold_status" ]] || fail "hold status mismatch ($hold_status, $expected_hold_status)"
retention_mode=$(echo "$raw_metadata" | jq -r ".ObjectLockMode")
[[ $retention_mode == "$expected_retention_mode" ]] || fail "retention mode mismatch ($retention_mode, $expected_retention_mode)"
retain_until_date=$(echo "$raw_metadata" | jq -r ".ObjectLockRetainUntilDate")
[[ $retain_until_date == "$later"* ]] || fail "retention date mismatch ($retain_until_date, $five_seconds_later)"
get_object_tagging "aws" "$BUCKET_ONE_NAME" "$bucket_file" || fail "error getting tagging"
log 5 "tags: $tags"
tag_key=$(echo "$tags" | jq -r ".TagSet[0].Key")
[[ $tag_key == "$expected_tag_key" ]] || fail "tag mismatch ($tag_key, $expected_tag_key)"
tag_val=$(echo "$tags" | jq -r ".TagSet[0].Value")
[[ $tag_val == "$expected_tag_val" ]] || fail "tag mismatch ($tag_val, $expected_tag_val)"
put_object_legal_hold "$BUCKET_ONE_NAME" "$bucket_file" "OFF" || fail "error disabling legal hold"
head_object "s3api" "$BUCKET_ONE_NAME" "$bucket_file" || fail "error getting metadata"
get_object "s3api" "$BUCKET_ONE_NAME" "$bucket_file" "$test_file_folder/$bucket_file-copy" || fail "error getting object"
compare_files "$test_file_folder/$bucket_file" "$test_file_folder/$bucket_file-copy" || fail "files not equal"
sleep 15
delete_bucket_or_contents "aws" "$BUCKET_ONE_NAME"
delete_test_files $bucket_file
@test "test_create_bucket_invalid_name" {
test_create_bucket_invalid_name_aws_root
}
# create-multipart-upload
@test "test_create_multipart_upload_properties" {
test_create_multipart_upload_properties_aws_root
}
# delete-bucket - test_create_delete_bucket_aws
@@ -180,48 +88,12 @@ export RUN_USERS=true
# delete-objects
@test "test_delete_objects" {
local object_one="test-file-one"
local object_two="test-file-two"
create_test_files "$object_one" "$object_two" || local created=$?
[[ $created -eq 0 ]] || fail "Error creating test files"
setup_bucket "aws" "$BUCKET_ONE_NAME" || local result_one=$?
[[ $result_one -eq 0 ]] || fail "Error creating bucket"
put_object "s3api" "$test_file_folder"/"$object_one" "$BUCKET_ONE_NAME" "$object_one" || local result_two=$?
[[ $result_two -eq 0 ]] || fail "Error adding object one"
put_object "s3api" "$test_file_folder"/"$object_two" "$BUCKET_ONE_NAME" "$object_two" || local result_three=$?
[[ $result_three -eq 0 ]] || fail "Error adding object two"
error=$(aws --no-verify-ssl s3api delete-objects --bucket "$BUCKET_ONE_NAME" --delete '{
"Objects": [
{"Key": "test-file-one"},
{"Key": "test-file-two"}
]
}') || local result=$?
[[ $result -eq 0 ]] || fail "Error deleting objects: $error"
object_exists "aws" "$BUCKET_ONE_NAME" "$object_one" || local exists_one=$?
[[ $exists_one -eq 1 ]] || fail "Object one not deleted"
object_exists "aws" "$BUCKET_ONE_NAME" "$object_two" || local exists_two=$?
[[ $exists_two -eq 1 ]] || fail "Object two not deleted"
delete_bucket_or_contents "aws" "$BUCKET_ONE_NAME"
delete_test_files "$object_one" "$object_two"
test_delete_objects_aws_root
}
# get-bucket-acl
@test "test_get_bucket_acl" {
setup_bucket "aws" "$BUCKET_ONE_NAME" || local created=$?
[[ $created -eq 0 ]] || fail "Error creating bucket"
get_bucket_acl "s3api" "$BUCKET_ONE_NAME" || local result=$?
[[ $result -eq 0 ]] || fail "Error retrieving acl"
id=$(echo "$acl" | grep -v "InsecureRequestWarning" | jq '.Owner.ID')
[[ $id == '"'"$AWS_ACCESS_KEY_ID"'"' ]] || fail "Acl mismatch"
delete_bucket_or_contents "aws" "$BUCKET_ONE_NAME"
test_get_bucket_acl_aws_root
}
# get-bucket-location
@@ -235,62 +107,20 @@ export RUN_USERS=true
# get-object
@test "test_get_object_full_range" {
bucket_file="bucket_file"
create_test_files "$bucket_file" || local created=$?
[[ $created -eq 0 ]] || fail "Error creating test files"
echo -n "0123456789" > "$test_file_folder/$bucket_file"
setup_bucket "s3api" "$BUCKET_ONE_NAME" || local setup_result=$?
[[ $setup_result -eq 0 ]] || fail "error setting up bucket"
put_object "s3api" "$test_file_folder/$bucket_file" "$BUCKET_ONE_NAME" "$bucket_file" || fail "error putting object"
get_object_with_range "$BUCKET_ONE_NAME" "$bucket_file" "bytes=9-15" "$test_file_folder/$bucket_file-range" || fail "error getting range"
[[ "$(cat "$test_file_folder/$bucket_file-range")" == "9" ]] || fail "byte range not copied properly"
test_get_object_full_range_aws_root
}
@test "test_get_object_invalid_range" {
bucket_file="bucket_file"
test_get_object_invalid_range_aws_root
}
create_test_files "$bucket_file" || local created=$?
[[ $created -eq 0 ]] || fail "Error creating test files"
setup_bucket "s3api" "$BUCKET_ONE_NAME" || local setup_result=$?
[[ $setup_result -eq 0 ]] || fail "error setting up bucket"
put_object "s3api" "$test_file_folder/$bucket_file" "$BUCKET_ONE_NAME" "$bucket_file" || fail "error putting object"
get_object_with_range "$BUCKET_ONE_NAME" "$bucket_file" "bytes=0-0" "$test_file_folder/$bucket_file-range" || local get_result=$?
[[ $get_result -ne 0 ]] || fail "Get object with zero range returned no error"
# get-object-attributes
@test "test_get_object_attributes" {
test_get_object_attributes_aws_root
}
@test "test_put_object" {
bucket_file="bucket_file"
create_test_files "$bucket_file" || local created=$?
[[ $created -eq 0 ]] || fail "Error creating test files"
setup_bucket "s3api" "$BUCKET_ONE_NAME" || local setup_result=$?
[[ $setup_result -eq 0 ]] || fail "error setting up bucket"
setup_bucket "s3api" "$BUCKET_TWO_NAME" || local setup_result_two=$?
[[ $setup_result_two -eq 0 ]] || fail "Bucket two setup error"
put_object "s3api" "$test_file_folder/$bucket_file" "$BUCKET_ONE_NAME" "$bucket_file" || local copy_result=$?
[[ $copy_result -eq 0 ]] || fail "Failed to add object to bucket"
copy_error=$(aws --no-verify-ssl s3api copy-object --copy-source "$BUCKET_ONE_NAME/$bucket_file" --key "$bucket_file" --bucket "$BUCKET_TWO_NAME" 2>&1) || local copy_result=$?
[[ $copy_result -eq 0 ]] || fail "Error copying file: $copy_error"
copy_file "s3://$BUCKET_TWO_NAME/$bucket_file" "$test_file_folder/${bucket_file}_copy" || local copy_result=$?
[[ $copy_result -eq 0 ]] || fail "Failed to add object to bucket"
compare_files "$test_file_folder/$bucket_file" "$test_file_folder/${bucket_file}_copy" || local compare_result=$?
[[ $compare_result -eq 0 ]] || fail "files don't match"
delete_bucket_or_contents "aws" "$BUCKET_ONE_NAME"
delete_bucket_or_contents "aws" "$BUCKET_TWO_NAME"
delete_test_files "$bucket_file"
}
@test "test_create_bucket_invalid_name" {
if [[ $RECREATE_BUCKETS != "true" ]]; then
return
fi
create_bucket_invalid_name "aws" || local create_result=$?
[[ $create_result -eq 0 ]] || fail "Invalid name test failed"
[[ "$bucket_create_error" == *"Invalid bucket name "* ]] || fail "unexpected error: $bucket_create_error"
test_put_object_aws_root
}
# test adding and removing an object on versitygw
@@ -312,131 +142,12 @@ export RUN_USERS=true
test_common_list_objects "aws"
}
@test "test_get_object_attributes" {
bucket_file="bucket_file"
create_test_files "$bucket_file" || local created=$?
[[ $created -eq 0 ]] || fail "Error creating test files"
setup_bucket "s3api" "$BUCKET_ONE_NAME" || local created=$?
[[ $created -eq 0 ]] || fail "Error creating bucket"
put_object "s3api" "$test_file_folder/$bucket_file" "$BUCKET_ONE_NAME" "$bucket_file" || local copy_result=$?
[[ $copy_result -eq 0 ]] || fail "Failed to add object to bucket"
get_object_attributes "$BUCKET_ONE_NAME" "$bucket_file" || local get_result=$?
[[ $get_result -eq 0 ]] || fail "failed to get object attributes"
# shellcheck disable=SC2154
if echo "$attributes" | jq -e 'has("ObjectSize")'; then
object_size=$(echo "$attributes" | jq ".ObjectSize")
[[ $object_size == 0 ]] || fail "Incorrect object size: $object_size"
else
fail "ObjectSize parameter missing: $attributes"
fi
delete_bucket_or_contents "s3api" "$BUCKET_ONE_NAME"
@test "test_get_put_object_legal_hold" {
test_get_put_object_legal_hold_aws_root
}
#@test "test_get_put_object_legal_hold" {
# # bucket must be created with lock for legal hold
# if [[ $RECREATE_BUCKETS == false ]]; then
# return
# fi
#
# bucket_file="bucket_file"
# username="ABCDEFG"
# password="HIJKLMN"
#
# legal_hold_retention_setup "$username" "$password" "$bucket_file"
#
# get_object_lock_configuration "$BUCKET_ONE_NAME" || fail "error getting lock configuration"
# # shellcheck disable=SC2154
# log 5 "$lock_config"
# enabled=$(echo "$lock_config" | jq -r ".ObjectLockConfiguration.ObjectLockEnabled")
# [[ $enabled == "Enabled" ]] || fail "ObjectLockEnabled should be 'Enabled', is '$enabled'"
#
# put_object_legal_hold "$BUCKET_ONE_NAME" "$bucket_file" "ON" || fail "error putting legal hold on object"
# get_object_legal_hold "$BUCKET_ONE_NAME" "$bucket_file" || fail "error getting object legal hold status"
# # shellcheck disable=SC2154
# log 5 "$legal_hold"
# hold_status=$(echo "$legal_hold" | grep -v "InsecureRequestWarning" | jq -r ".LegalHold.Status" 2>&1) || fail "error obtaining hold status: $hold_status"
# [[ $hold_status == "ON" ]] || fail "Status should be 'ON', is '$hold_status'"
#
# echo "fdkljafajkfs" > "$test_file_folder/$bucket_file"
# if put_object_with_user "s3api" "$test_file_folder/$bucket_file" "$BUCKET_ONE_NAME" "$bucket_file" "$username" "$password"; then
# fail "able to overwrite object with hold"
# fi
# # shellcheck disable=SC2154
# #[[ $put_object_error == *"Object is WORM protected and cannot be overwritten"* ]] || fail "unexpected error message: $put_object_error"
#
# if delete_object_with_user "s3api" "$BUCKET_ONE_NAME" "$bucket_file" "$username" "$password"; then
# fail "able to delete object with hold"
# fi
# # shellcheck disable=SC2154
# [[ $delete_object_error == *"Object is WORM protected and cannot be overwritten"* ]] || fail "unexpected error message: $delete_object_error"
# put_object_legal_hold "$BUCKET_ONE_NAME" "$bucket_file" "OFF" || fail "error removing legal hold on object"
# delete_object_with_user "s3api" "$BUCKET_ONE_NAME" "$bucket_file" "$username" "$password" || fail "error deleting object after removing legal hold"
#
# delete_bucket_recursive "s3api" "$BUCKET_ONE_NAME"
#}
#@test "test_get_put_object_retention" {
# # bucket must be created with lock for legal hold
# if [[ $RECREATE_BUCKETS == false ]]; then
# return
# fi
#
# bucket_file="bucket_file"
# username="ABCDEFG"
# secret_key="HIJKLMN"
#
# legal_hold_retention_setup "$username" "$secret_key" "$bucket_file"
#
# get_object_lock_configuration "$BUCKET_ONE_NAME" || fail "error getting lock configuration"
# log 5 "$lock_config"
# enabled=$(echo "$lock_config" | jq -r ".ObjectLockConfiguration.ObjectLockEnabled")
# [[ $enabled == "Enabled" ]] || fail "ObjectLockEnabled should be 'Enabled', is '$enabled'"
#
# if [[ "$OSTYPE" == "darwin"* ]]; then
# retention_date=$(date -v+2d +"%Y-%m-%dT%H:%M:%S")
# else
# retention_date=$(date -d "+2 days" +"%Y-%m-%dT%H:%M:%S")
# fi
# put_object_retention "$BUCKET_ONE_NAME" "$bucket_file" "GOVERNANCE" "$retention_date" || fail "failed to add object retention"
# get_object_retention "$BUCKET_ONE_NAME" "$bucket_file" || fail "failed to get object retention"
# log 5 "$retention"
# retention=$(echo "$retention" | grep -v "InsecureRequestWarning")
# mode=$(echo "$retention" | jq -r ".Retention.Mode")
# retain_until_date=$(echo "$retention" | jq -r ".Retention.RetainUntilDate")
# [[ $mode == "GOVERNANCE" ]] || fail "retention mode should be governance, is $mode"
# [[ $retain_until_date == "$retention_date"* ]] || fail "retain until date should be $retention_date, is $retain_until_date"
#
# echo "fdkljafajkfs" > "$test_file_folder/$bucket_file"
# put_object_with_user "s3api" "$test_file_folder/$bucket_file" "$BUCKET_ONE_NAME" "$bucket_file" "$username" "$secret_key" || local put_result=$?
# [[ $put_result -ne 0 ]] || fail "able to overwrite object with hold"
# [[ $error == *"Object is WORM protected and cannot be overwritten"* ]] || fail "unexpected error message: $error"
#
# delete_object_with_user "s3api" "$BUCKET_ONE_NAME" "$bucket_file" "$username" "$secret_key" || local delete_result=$?
# [[ $delete_result -ne 0 ]] || fail "able to delete object with hold"
# [[ $error == *"Object is WORM protected and cannot be overwritten"* ]] || fail "unexpected error message: $error"
#
# delete_object "s3api" "$BUCKET_ONE_NAME" "$bucket_file" || fail "error deleting object"
# delete_bucket_recursive "s3api" "$BUCKET_ONE_NAME"
#}
legal_hold_retention_setup() {
[[ $# -eq 3 ]] || fail "legal hold or retention setup requires username, secret key, bucket file"
delete_bucket_or_contents_if_exists "s3api" "$BUCKET_ONE_NAME" || fail "error deleting bucket, or checking for existence"
setup_user "$1" "$2" "user" || fail "error creating user if nonexistent"
create_test_files "$3" || fail "error creating test files"
#create_bucket "s3api" "$BUCKET_ONE_NAME" || fail "error creating bucket"
create_bucket_object_lock_enabled "$BUCKET_ONE_NAME" || fail "error creating bucket"
change_bucket_owner "$AWS_ACCESS_KEY_ID" "$AWS_SECRET_ACCESS_KEY" "$BUCKET_ONE_NAME" "$1" || fail "error changing bucket ownership"
get_bucket_policy "s3api" "$BUCKET_ONE_NAME" || fail "error getting bucket policy"
log 5 "POLICY: $bucket_policy"
get_bucket_owner "$BUCKET_ONE_NAME"
log 5 "owner: $bucket_owner"
#put_bucket_ownership_controls "$BUCKET_ONE_NAME" "BucketOwnerPreferred" || fail "error putting bucket ownership controls"
put_object_with_user "s3api" "$test_file_folder/$3" "$BUCKET_ONE_NAME" "$3" "$1" "$2" || fail "failed to add object to bucket"
@test "test_get_put_object_retention" {
test_get_put_object_retention_aws_root
}
@test "test_put_bucket_acl" {
@@ -445,62 +156,12 @@ legal_hold_retention_setup() {
# test v1 s3api list objects command
@test "test-s3api-list-objects-v1" {
local object_one="test-file-one"
local object_two="test-file-two"
local object_two_data="test data\n"
create_test_files "$object_one" "$object_two" || local created=$?
[[ $created -eq 0 ]] || fail "Error creating test files"
printf "%s" "$object_two_data" > "$test_file_folder"/"$object_two"
setup_bucket "aws" "$BUCKET_ONE_NAME" || local result=$?
[[ $result -eq 0 ]] || fail "Failed to create bucket '$BUCKET_ONE_NAME'"
put_object "s3api" "$test_file_folder"/"$object_one" "$BUCKET_ONE_NAME" "$object_one" || local copy_result_one=$?
[[ $copy_result_one -eq 0 ]] || fail "Failed to add object $object_one"
put_object "s3api" "$test_file_folder"/"$object_two" "$BUCKET_ONE_NAME" "$object_two" || local copy_result_two=$?
[[ $copy_result_two -eq 0 ]] || fail "Failed to add object $object_two"
list_objects_s3api_v1 "$BUCKET_ONE_NAME"
key_one=$(echo "$objects" | jq -r '.Contents[0].Key')
[[ $key_one == "$object_one" ]] || fail "Object one mismatch ($key_one, $object_one)"
size_one=$(echo "$objects" | jq -r '.Contents[0].Size')
[[ $size_one -eq 0 ]] || fail "Object one size mismatch ($size_one, 0)"
key_two=$(echo "$objects" | jq -r '.Contents[1].Key')
[[ $key_two == "$object_two" ]] || fail "Object two mismatch ($key_two, $object_two)"
size_two=$(echo "$objects" | jq '.Contents[1].Size')
[[ $size_two -eq ${#object_two_data} ]] || fail "Object two size mismatch ($size_two, ${#object_two_data})"
delete_bucket_or_contents "aws" "$BUCKET_ONE_NAME"
delete_test_files "$object_one" "$object_two"
test_s3api_list_objects_v1_aws_root
}
# test v2 s3api list objects command
@test "test-s3api-list-objects-v2" {
local object_one="test-file-one"
local object_two="test-file-two"
local object_two_data="test data\n"
create_test_files "$object_one" "$object_two" || local created=$?
[[ $created -eq 0 ]] || fail "Error creating test files"
printf "%s" "$object_two_data" > "$test_file_folder"/"$object_two"
setup_bucket "aws" "$BUCKET_ONE_NAME" || local result=$?
[[ $result -eq 0 ]] || fail "Failed to create bucket '$BUCKET_ONE_NAME'"
put_object "s3api" "$test_file_folder"/"$object_one" "$BUCKET_ONE_NAME" "$object_one" || local copy_object_one=$?
[[ $copy_object_one -eq 0 ]] || fail "Failed to add object $object_one"
put_object "s3api" "$test_file_folder"/"$object_two" "$BUCKET_ONE_NAME" "$object_two" || local copy_object_two=$?
[[ $copy_object_two -eq 0 ]] || fail "Failed to add object $object_two"
list_objects_s3api_v2 "$BUCKET_ONE_NAME"
key_one=$(echo "$objects" | jq -r '.Contents[0].Key')
[[ $key_one == "$object_one" ]] || fail "Object one mismatch ($key_one, $object_one)"
size_one=$(echo "$objects" | jq -r '.Contents[0].Size')
[[ $size_one -eq 0 ]] || fail "Object one size mismatch ($size_one, 0)"
key_two=$(echo "$objects" | jq -r '.Contents[1].Key')
[[ $key_two == "$object_two" ]] || fail "Object two mismatch ($key_two, $object_two)"
size_two=$(echo "$objects" | jq -r '.Contents[1].Size')
[[ $size_two -eq ${#object_two_data} ]] || fail "Object two size mismatch ($size_two, ${#object_two_data})"
delete_bucket_or_contents "aws" "$BUCKET_ONE_NAME"
delete_test_files "$object_one" "$object_two"
test_s3api_list_objects_v2_aws_root
}
# test ability to set and retrieve object tags
@@ -510,45 +171,7 @@ legal_hold_retention_setup() {
# test multi-part upload list parts command
@test "test-multipart-upload-list-parts" {
local bucket_file="bucket-file"
create_test_files "$bucket_file" || fail "error creating test file"
dd if=/dev/urandom of="$test_file_folder/$bucket_file" bs=5M count=1 || fail "error creating test file"
setup_bucket "aws" "$BUCKET_ONE_NAME" || fail "failed to create bucket '$BUCKET_ONE_NAME'"
list_parts "$BUCKET_ONE_NAME" "$bucket_file" "$test_file_folder"/"$bucket_file" 4 || fail "listing multipart upload parts failed"
declare -a parts_map
# shellcheck disable=SC2154
log 5 "parts: $parts"
for i in {0..3}; do
local part_number
local etag
# shellcheck disable=SC2154
part=$(echo "$parts" | grep -v "InsecureRequestWarning" | jq -r ".[$i]" 2>&1) || fail "error getting part: $part"
part_number=$(echo "$part" | jq ".PartNumber" 2>&1) || fail "error parsing part number: $part_number"
[[ $part_number != "" ]] || fail "error: blank part number"
etag=$(echo "$part" | jq ".ETag" 2>&1) || fail "error parsing etag: $etag"
[[ $etag != "" ]] || fail "error: blank etag"
# shellcheck disable=SC2004
parts_map[$part_number]=$etag
done
[[ ${#parts_map[@]} -ne 0 ]] || fail "error loading multipart upload parts to check"
for i in {0..3}; do
local part_number
local etag
# shellcheck disable=SC2154
listed_part=$(echo "$listed_parts" | grep -v "InsecureRequestWarning" | jq -r ".Parts[$i]" 2>&1) || fail "error parsing listed part: $listed_part"
part_number=$(echo "$listed_part" | jq ".PartNumber" 2>&1) || fail "error parsing listed part number: $part_number"
etag=$(echo "$listed_part" | jq ".ETag" 2>&1) || fail "error getting listed etag: $etag"
[[ ${parts_map[$part_number]} == "$etag" ]] || fail "error: etags don't match (part number: $part_number, etags ${parts_map[$part_number]},$etag)"
done
run_then_abort_multipart_upload "$BUCKET_ONE_NAME" "$bucket_file" "$test_file_folder/$bucket_file" 4
delete_bucket_or_contents "aws" "$BUCKET_ONE_NAME"
delete_test_files $bucket_file
test_multipart_upload_list_parts_aws_root
}
# test listing of active uploads
@@ -584,18 +207,14 @@ legal_hold_retention_setup() {
@test "test-multipart-upload-from-bucket" {
local bucket_file="bucket-file"
create_test_files "$bucket_file" || local created=$?
[[ $created -eq 0 ]] || fail "Error creating test files"
dd if=/dev/urandom of="$test_file_folder/$bucket_file" bs=5M count=1 || fail "error creating test file"
setup_bucket "aws" "$BUCKET_ONE_NAME" || local result=$?
[[ $result -eq 0 ]] || fail "Failed to create bucket '$BUCKET_ONE_NAME'"
create_test_files "$bucket_file" || fail "error creating test files"
dd if=/dev/urandom of="$test_file_folder/$bucket_file" bs=5M count=1 || fail "error adding data to test file"
setup_bucket "aws" "$BUCKET_ONE_NAME" || fail "failed to create bucket: $BUCKET_ONE_NAME"
multipart_upload_from_bucket "$BUCKET_ONE_NAME" "$bucket_file" "$test_file_folder"/"$bucket_file" 4 || upload_result=$?
[[ $upload_result -eq 0 ]] || fail "Error performing multipart upload"
multipart_upload_from_bucket "$BUCKET_ONE_NAME" "$bucket_file" "$test_file_folder"/"$bucket_file" 4 || fail "error performing multipart upload"
get_object "s3api" "$BUCKET_ONE_NAME" "$bucket_file-copy" "$test_file_folder/$bucket_file-copy"
compare_files "$test_file_folder"/$bucket_file-copy "$test_file_folder"/$bucket_file || compare_result=$?
[[ $compare_result -eq 0 ]] || fail "Data doesn't match"
get_object "s3api" "$BUCKET_ONE_NAME" "$bucket_file-copy" "$test_file_folder/$bucket_file-copy" || fail "error getting object"
compare_files "$test_file_folder"/$bucket_file-copy "$test_file_folder"/$bucket_file || fail "data doesn't match"
delete_bucket_or_contents "aws" "$BUCKET_ONE_NAME"
delete_test_files $bucket_file
@@ -1230,11 +849,6 @@ EOF
# test_common_list_objects_file_count "aws"
#}
#@test "test_filename_length" {
# file_name=$(printf "%0.sa" $(seq 1 1025))
# echo "$file_name"
# ensure that lists of files greater than a size of 1000 (pagination) are returned properly
#@test "test_list_objects_file_count" {
# test_common_list_objects_file_count "aws"
@@ -1269,6 +883,10 @@ EOF
fi
}
@test "test_retention_bypass" {
test_retention_bypass_aws_root
}
@test "test_head_bucket_doesnt_exist" {
setup_bucket "aws" "$BUCKET_ONE_NAME" || local setup_result=$?
[[ $setup_result -eq 0 ]] || fail "error setting up bucket"
@@ -1370,3 +988,95 @@ EOF
delete_bucket_or_contents "aws" "$BUCKET_ONE_NAME"
delete_test_files "$policy_file" "$test_file"
}
@test "test_policy_put_acl" {
if [[ $DIRECT != "true" ]]; then
# https://github.com/versity/versitygw/issues/702
skip
fi
policy_file="policy_file"
test_file="test_file"
username="ABCDEFG"
password="HIJLKMN"
create_test_files "$policy_file" || fail "error creating policy file"
create_large_file "$test_file" || fail "error creating large file"
setup_bucket "s3api" "$BUCKET_ONE_NAME" || fail "error setting up bucket"
put_bucket_ownership_controls "$BUCKET_ONE_NAME" "BucketOwnerPreferred" || fail "error putting bucket ownership controls"
if [[ $DIRECT == "true" ]]; then
setup_user_direct "$username" "user" "$BUCKET_ONE_NAME" || fail "error setting up direct user $username"
principal="{\"AWS\": \"arn:aws:iam::$DIRECT_AWS_USER_ID:user/$username\"}"
# shellcheck disable=SC2154
username=$key_id
# shellcheck disable=SC2154
password=$secret_key
else
password="HIJLKMN"
setup_user "$username" "$password" "user" || fail "error setting up user $username"
principal="\"$username\""
fi
cat <<EOF > "$test_file_folder"/$policy_file
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Principal": $principal,
"Action": "s3:PutBucketAcl",
"Resource": "arn:aws:s3:::$BUCKET_ONE_NAME"
}
]
}
EOF
if [[ $DIRECT == "true" ]]; then
put_public_access_block_enable_public_acls "$BUCKET_ONE_NAME" || fail "error enabling public ACLs"
fi
put_bucket_policy "s3api" "$BUCKET_ONE_NAME" "$test_file_folder/$policy_file" || fail "error putting policy"
put_bucket_canned_acl_with_user "$BUCKET_ONE_NAME" "public-read" "$username" "$password" || fail "error putting canned acl"
get_bucket_acl "s3api" "$BUCKET_ONE_NAME" || fail "error getting bucket acl"
# shellcheck disable=SC2154
log 5 "ACL: $acl"
second_grant=$(echo "$acl" | jq -r ".Grants[1]" 2>&1) || fail "error getting second grant: $second_grant"
second_grantee=$(echo "$second_grant" | jq -r ".Grantee" 2>&1) || fail "error getting second grantee: $second_grantee"
permission=$(echo "$second_grant" | jq -r ".Permission" 2>&1) || fail "error getting permission: $permission"
log 5 "second grantee: $second_grantee"
[[ $permission == "READ" ]] || fail "incorrect permission: $permission"
if [[ $DIRECT == "true" ]]; then
uri=$(echo "$second_grantee" | jq -r ".URI" 2>&1) || fail "error getting uri: $uri"
[[ $uri == "http://acs.amazonaws.com/groups/global/AllUsers" ]] || fail "unexpected URI: $uri"
else
id=$(echo "$second_grantee" | jq -r ".ID" 2>&1) || fail "error getting ID: $id"
[[ $id == "$username" ]] || fail "unexpected ID: $id"
fi
delete_bucket_or_contents "aws" "$BUCKET_ONE_NAME"
}
@test "test_put_object_lock_configuration" {
bucket_name=$BUCKET_ONE_NAME
if [[ $RECREATE_BUCKETS == "true" ]]; then
delete_bucket "s3api" "$bucket_name" || fail "error deleting bucket"
create_bucket_object_lock_enabled "$bucket_name" || fail "error setting up bucket"
fi
local enabled="Enabled"
local governance="GOVERNANCE"
local days="1"
put_object_lock_configuration "$bucket_name" "$enabled" "$governance" "$days" || fail "error putting object lock configuration"
get_object_lock_configuration "$bucket_name" || fail "error getting object lock configuration"
log 5 "LOCK CONFIG: $lock_config"
object_lock_configuration=$(echo "$lock_config" | jq -r ".ObjectLockConfiguration" 2>&1) || fail "error getting ObjectLockConfiguration: $object_lock_configuration"
object_lock_enabled=$(echo "$object_lock_configuration" | jq -r ".ObjectLockEnabled" 2>&1) || fail "error getting ObjectLockEnabled: $object_lock_enabled"
[[ $object_lock_enabled == "$enabled" ]] || fail "incorrect ObjectLockEnabled value: $object_lock_enabled"
default_retention=$(echo "$object_lock_configuration" | jq -r ".Rule.DefaultRetention" 2>&1) || fail "error getting DefaultRetention: $default_retention"
mode=$(echo "$default_retention" | jq -r ".Mode" 2>&1) || fail "error getting Mode: $mode"
[[ $mode == "$governance" ]] || fail "incorrect Mode value: $mode"
returned_days=$(echo "$default_retention" | jq -r ".Days" 2>&1) || fail "error getting Days: $returned_days"
[[ $returned_days == "1" ]] || fail "incorrect Days value: $returned_days"
delete_bucket_or_contents "aws" "$bucket_name"
}

494
tests/test_aws_root_inner.sh Executable file
View File

@@ -0,0 +1,494 @@
#!/usr/bin/env bats
source ./tests/commands/delete_objects.sh
source ./tests/commands/list_objects_v2.sh
source ./tests/commands/list_parts.sh
test_abort_multipart_upload_aws_root() {
local bucket_file="bucket-file"
create_test_files "$bucket_file" || fail "error creating test files"
# shellcheck disable=SC2154
dd if=/dev/urandom of="$test_file_folder/$bucket_file" bs=5M count=1 || fail "error creating test file"
setup_bucket "aws" "$BUCKET_ONE_NAME" || fail "Failed to create bucket '$BUCKET_ONE_NAME'"
run_then_abort_multipart_upload "$BUCKET_ONE_NAME" "$bucket_file" "$test_file_folder"/"$bucket_file" 4 || fail "abort failed"
if object_exists "aws" "$BUCKET_ONE_NAME" "$bucket_file"; then
fail "Upload file exists after abort"
fi
delete_bucket_or_contents "aws" "$BUCKET_ONE_NAME"
delete_test_files $bucket_file
}
test_complete_multipart_upload_aws_root() {
local bucket_file="bucket-file"
create_test_files "$bucket_file" || fail "error creating test files"
dd if=/dev/urandom of="$test_file_folder/$bucket_file" bs=5M count=1 || fail "error creating test file"
setup_bucket "aws" "$BUCKET_ONE_NAME" || fail "failed to create bucket '$BUCKET_ONE_NAME'"
multipart_upload "$BUCKET_ONE_NAME" "$bucket_file" "$test_file_folder"/"$bucket_file" 4 || fail "error performing multipart upload"
download_and_compare_file "s3api" "$test_file_folder/$bucket_file" "$BUCKET_ONE_NAME" "$bucket_file" "$test_file_folder/$bucket_file-copy" || fail "error downloading and comparing file"
delete_bucket_or_contents "aws" "$BUCKET_ONE_NAME"
delete_test_files $bucket_file
}
test_create_multipart_upload_properties_aws_root() {
local bucket_file="bucket-file"
local expected_content_type="application/zip"
local expected_meta_key="testKey"
local expected_meta_val="testValue"
local expected_hold_status="ON"
local expected_retention_mode="GOVERNANCE"
local expected_tag_key="TestTag"
local expected_tag_val="TestTagVal"
os_name="$(uname)"
if [[ "$os_name" == "Darwin" ]]; then
now=$(date -u +"%Y-%m-%dT%H:%M:%S")
later=$(date -j -v +15S -f "%Y-%m-%dT%H:%M:%S" "$now" +"%Y-%m-%dT%H:%M:%S")
else
now=$(date +"%Y-%m-%dT%H:%M:%S")
later=$(date -d "$now 15 seconds" +"%Y-%m-%dT%H:%M:%S")
fi
create_test_files "$bucket_file" || fail "error creating test file"
dd if=/dev/urandom of="$test_file_folder/$bucket_file" bs=5M count=1 || fail "error creating test file"
delete_bucket_or_contents_if_exists "s3api" "$BUCKET_ONE_NAME" || fail "error deleting bucket, or checking for existence"
# in static bucket config, bucket will still exist
bucket_exists "s3api" "$BUCKET_ONE_NAME" || local exists_result=$?
[[ $exists_result -ne 2 ]] || fail "error checking for bucket existence"
if [[ $exists_result -eq 1 ]]; then
create_bucket_object_lock_enabled "$BUCKET_ONE_NAME" || fail "error creating bucket"
fi
get_object_lock_configuration "$BUCKET_ONE_NAME" || fail "error getting lock config"
# shellcheck disable=SC2154
log 5 "LOCK CONFIG: $lock_config"
log 5 "LATER: $later"
multipart_upload_with_params "$BUCKET_ONE_NAME" "$bucket_file" "$test_file_folder"/"$bucket_file" 4 \
"$expected_content_type" \
"{\"$expected_meta_key\": \"$expected_meta_val\"}" \
"$expected_hold_status" \
"$expected_retention_mode" \
"$later" \
"$expected_tag_key=$expected_tag_val" || fail "error performing multipart upload"
head_object "s3api" "$BUCKET_ONE_NAME" "$bucket_file" || fail "error getting metadata"
# shellcheck disable=SC2154
raw_metadata=$(echo "$metadata" | grep -v "InsecureRequestWarning")
log 5 "raw metadata: $raw_metadata"
content_type=$(echo "$raw_metadata" | jq -r ".ContentType")
[[ $content_type == "$expected_content_type" ]] || fail "content type mismatch ($content_type, $expected_content_type)"
meta_val=$(echo "$raw_metadata" | jq -r ".Metadata.$expected_meta_key")
[[ $meta_val == "$expected_meta_val" ]] || fail "metadata val mismatch ($meta_val, $expected_meta_val)"
hold_status=$(echo "$raw_metadata" | jq -r ".ObjectLockLegalHoldStatus")
[[ $hold_status == "$expected_hold_status" ]] || fail "hold status mismatch ($hold_status, $expected_hold_status)"
retention_mode=$(echo "$raw_metadata" | jq -r ".ObjectLockMode")
[[ $retention_mode == "$expected_retention_mode" ]] || fail "retention mode mismatch ($retention_mode, $expected_retention_mode)"
retain_until_date=$(echo "$raw_metadata" | jq -r ".ObjectLockRetainUntilDate")
[[ $retain_until_date == "$later"* ]] || fail "retention date mismatch ($retain_until_date, $later)"
get_object_tagging "aws" "$BUCKET_ONE_NAME" "$bucket_file" || fail "error getting tagging"
# shellcheck disable=SC2154
log 5 "tags: $tags"
tag_key=$(echo "$tags" | jq -r ".TagSet[0].Key")
[[ $tag_key == "$expected_tag_key" ]] || fail "tag mismatch ($tag_key, $expected_tag_key)"
tag_val=$(echo "$tags" | jq -r ".TagSet[0].Value")
[[ $tag_val == "$expected_tag_val" ]] || fail "tag mismatch ($tag_val, $expected_tag_val)"
put_object_legal_hold "$BUCKET_ONE_NAME" "$bucket_file" "OFF" || fail "error disabling legal hold"
head_object "s3api" "$BUCKET_ONE_NAME" "$bucket_file" || fail "error getting metadata"
get_object "s3api" "$BUCKET_ONE_NAME" "$bucket_file" "$test_file_folder/$bucket_file-copy" || fail "error getting object"
compare_files "$test_file_folder/$bucket_file" "$test_file_folder/$bucket_file-copy" || fail "files not equal"
sleep 15
delete_bucket_or_contents "aws" "$BUCKET_ONE_NAME"
delete_test_files $bucket_file
}
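The Darwin/Linux date branching above recurs in several tests; one possible consolidation, assuming only BSD or GNU date is present (future_ts is a hypothetical helper, not part of the suite):
future_ts() {
# BSD/macOS date accepts -v for relative adjustment; GNU date rejects it
if date -v +0S >/dev/null 2>&1; then
TZ=UTC date -v "+$1S" +"%Y-%m-%dT%H:%M:%S"
else
TZ=UTC date -d "+$1 seconds" +"%Y-%m-%dT%H:%M:%S"
fi
}
later=$(future_ts 15)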
test_delete_objects_aws_root() {
local object_one="test-file-one"
local object_two="test-file-two"
create_test_files "$object_one" "$object_two" || fail "error creating test files"
setup_bucket "s3api" "$BUCKET_ONE_NAME" || fail "error creating bucket"
put_object "s3api" "$test_file_folder"/"$object_one" "$BUCKET_ONE_NAME" "$object_one" || fail "error adding object one"
put_object "s3api" "$test_file_folder"/"$object_two" "$BUCKET_ONE_NAME" "$object_two" || fail "error adding object two"
delete_objects "$BUCKET_ONE_NAME" "$object_one" "$object_two" || fail "error deleting objects"
object_exists "s3api" "$BUCKET_ONE_NAME" "$object_one" || local object_one_exists_result=$?
[[ $object_one_exists_result -eq 1 ]] || fail "object $object_one not deleted"
object_exists "s3api" "$BUCKET_ONE_NAME" "$object_two" || local object_two_exists_result=$?
[[ $object_two_exists_result -eq 1 ]] || fail "object $object_two not deleted"
delete_bucket_or_contents "s3api" "$BUCKET_ONE_NAME"
delete_test_files "$object_one" "$object_two"
}
test_get_bucket_acl_aws_root() {
# TODO remove when able to assign bucket ownership back to root
if [[ $RECREATE_BUCKETS == "false" ]]; then
skip
fi
setup_bucket "aws" "$BUCKET_ONE_NAME" || fail "error creating bucket"
get_bucket_acl "s3api" "$BUCKET_ONE_NAME" || fail "error retrieving ACL"
# shellcheck disable=SC2154
log 5 "ACL: $acl"
id=$(echo "$acl" | grep -v "InsecureRequestWarning" | jq -r '.Owner.ID')
[[ $id == "$AWS_ACCESS_KEY_ID" ]] || fail "Acl mismatch"
delete_bucket_or_contents "aws" "$BUCKET_ONE_NAME"
}
test_get_object_full_range_aws_root() {
bucket_file="bucket_file"
create_test_files "$bucket_file" || local created=$?
[[ $created -eq 0 ]] || fail "Error creating test files"
echo -n "0123456789" > "$test_file_folder/$bucket_file"
setup_bucket "s3api" "$BUCKET_ONE_NAME" || local setup_result=$?
[[ $setup_result -eq 0 ]] || fail "error setting up bucket"
put_object "s3api" "$test_file_folder/$bucket_file" "$BUCKET_ONE_NAME" "$bucket_file" || fail "error putting object"
get_object_with_range "$BUCKET_ONE_NAME" "$bucket_file" "bytes=9-15" "$test_file_folder/$bucket_file-range" || fail "error getting range"
[[ "$(cat "$test_file_folder/$bucket_file-range")" == "9" ]] || fail "byte range not copied properly"
}
test_get_object_invalid_range_aws_root() {
bucket_file="bucket_file"
create_test_files "$bucket_file" || local created=$?
[[ $created -eq 0 ]] || fail "Error creating test files"
setup_bucket "s3api" "$BUCKET_ONE_NAME" || local setup_result=$?
[[ $setup_result -eq 0 ]] || fail "error setting up bucket"
put_object "s3api" "$test_file_folder/$bucket_file" "$BUCKET_ONE_NAME" "$bucket_file" || fail "error putting object"
get_object_with_range "$BUCKET_ONE_NAME" "$bucket_file" "bytes=0-0" "$test_file_folder/$bucket_file-range" || local get_result=$?
[[ $get_result -ne 0 ]] || fail "Get object with zero range returned no error"
}
test_put_object_aws_root() {
bucket_file="bucket_file"
create_test_files "$bucket_file" || local created=$?
[[ $created -eq 0 ]] || fail "Error creating test files"
setup_bucket "s3api" "$BUCKET_ONE_NAME" || local setup_result=$?
[[ $setup_result -eq 0 ]] || fail "error setting up bucket"
setup_bucket "s3api" "$BUCKET_TWO_NAME" || local setup_result_two=$?
[[ $setup_result_two -eq 0 ]] || fail "Bucket two setup error"
put_object "s3api" "$test_file_folder/$bucket_file" "$BUCKET_ONE_NAME" "$bucket_file" || local copy_result=$?
[[ $copy_result -eq 0 ]] || fail "Failed to add object to bucket"
copy_error=$(aws --no-verify-ssl s3api copy-object --copy-source "$BUCKET_ONE_NAME/$bucket_file" --key "$bucket_file" --bucket "$BUCKET_TWO_NAME" 2>&1) || local copy_result=$?
[[ $copy_result -eq 0 ]] || fail "Error copying file: $copy_error"
copy_file "s3://$BUCKET_TWO_NAME/$bucket_file" "$test_file_folder/${bucket_file}_copy" || local copy_result=$?
[[ $copy_result -eq 0 ]] || fail "Failed to add object to bucket"
compare_files "$test_file_folder/$bucket_file" "$test_file_folder/${bucket_file}_copy" || local compare_result=$?
[[ $compare_result -eq 0 ]] || fail "files don't match"
delete_bucket_or_contents "aws" "$BUCKET_ONE_NAME"
delete_bucket_or_contents "aws" "$BUCKET_TWO_NAME"
delete_test_files "$bucket_file"
}
test_create_bucket_invalid_name_aws_root() {
if [[ $RECREATE_BUCKETS != "true" ]]; then
return
fi
create_bucket_invalid_name "aws" || local create_result=$?
[[ $create_result -eq 0 ]] || fail "Invalid name test failed"
# shellcheck disable=SC2154
[[ "$bucket_create_error" == *"Invalid bucket name "* ]] || fail "unexpected error: $bucket_create_error"
}
test_get_object_attributes_aws_root() {
bucket_file="bucket_file"
create_test_files "$bucket_file" || fail "error creating test files"
setup_bucket "s3api" "$BUCKET_ONE_NAME" || fail "error setting up bucket"
put_object "s3api" "$test_file_folder/$bucket_file" "$BUCKET_ONE_NAME" "$bucket_file" || fail "failed to add object to bucket"
get_object_attributes "$BUCKET_ONE_NAME" "$bucket_file" || fail "failed to get object attributes"
# shellcheck disable=SC2154
has_object_size=$(echo "$attributes" | jq 'has("ObjectSize")' 2>&1) || fail "error checking for ObjectSize parameter: $has_object_size"
if [[ $has_object_size == "true" ]]; then
object_size=$(echo "$attributes" | jq -r ".ObjectSize")
[[ $object_size == 0 ]] || fail "Incorrect object size: $object_size"
else
fail "ObjectSize parameter missing: $attributes"
fi
delete_bucket_or_contents "s3api" "$BUCKET_ONE_NAME"
}
test_get_put_object_legal_hold_aws_root() {
# bucket must be created with lock for legal hold
if [[ $RECREATE_BUCKETS == false ]]; then
return
fi
bucket_file="bucket_file"
username="ABCDEFG"
password="HIJKLMN"
legal_hold_retention_setup "$username" "$password" "$bucket_file"
get_object_lock_configuration "$BUCKET_ONE_NAME" || fail "error getting lock configuration"
# shellcheck disable=SC2154
log 5 "$lock_config"
enabled=$(echo "$lock_config" | jq -r ".ObjectLockConfiguration.ObjectLockEnabled")
[[ $enabled == "Enabled" ]] || fail "ObjectLockEnabled should be 'Enabled', is '$enabled'"
put_object_legal_hold "$BUCKET_ONE_NAME" "$bucket_file" "ON" || fail "error putting legal hold on object"
get_object_legal_hold "$BUCKET_ONE_NAME" "$bucket_file" || fail "error getting object legal hold status"
# shellcheck disable=SC2154
log 5 "$legal_hold"
hold_status=$(echo "$legal_hold" | grep -v "InsecureRequestWarning" | jq -r ".LegalHold.Status" 2>&1) || fail "error obtaining hold status: $hold_status"
[[ $hold_status == "ON" ]] || fail "Status should be 'ON', is '$hold_status'"
echo "fdkljafajkfs" > "$test_file_folder/$bucket_file"
if put_object_with_user "s3api" "$test_file_folder/$bucket_file" "$BUCKET_ONE_NAME" "$bucket_file" "$username" "$password"; then
fail "able to overwrite object with hold"
fi
# shellcheck disable=SC2154
#[[ $put_object_error == *"Object is WORM protected and cannot be overwritten"* ]] || fail "unexpected error message: $put_object_error"
if delete_object_with_user "s3api" "$BUCKET_ONE_NAME" "$bucket_file" "$username" "$password"; then
fail "able to delete object with hold"
fi
# shellcheck disable=SC2154
[[ $delete_object_error == *"Object is WORM protected and cannot be overwritten"* ]] || fail "unexpected error message: $delete_object_error"
put_object_legal_hold "$BUCKET_ONE_NAME" "$bucket_file" "OFF" || fail "error removing legal hold on object"
delete_object_with_user "s3api" "$BUCKET_ONE_NAME" "$bucket_file" "$username" "$password" || fail "error deleting object after removing legal hold"
delete_bucket_recursive "s3api" "$BUCKET_ONE_NAME"
}
test_get_put_object_retention_aws_root() {
bucket_file="bucket_file"
username="ABCDEFG"
secret_key="HIJKLMN"
# TODO remove after able to change bucket owner back to root user
if [[ $RECREATE_BUCKETS == "false" ]]; then
skip
fi
legal_hold_retention_setup "$username" "$secret_key" "$bucket_file"
get_object_lock_configuration "$BUCKET_ONE_NAME" || fail "error getting lock configuration"
log 5 "$lock_config"
enabled=$(echo "$lock_config" | jq -r ".ObjectLockConfiguration.ObjectLockEnabled")
[[ $enabled == "Enabled" ]] || fail "ObjectLockEnabled should be 'Enabled', is '$enabled'"
if [[ "$OSTYPE" == "darwin"* ]]; then
retention_date=$(TZ="UTC" date -v+5S +"%Y-%m-%dT%H:%M:%S")
else
retention_date=$(TZ="UTC" date -d "+5 seconds" +"%Y-%m-%dT%H:%M:%S")
fi
log 5 "retention date: $retention_date"
put_object_retention "$BUCKET_ONE_NAME" "$bucket_file" "GOVERNANCE" "$retention_date" || fail "failed to add object retention"
get_object_retention "$BUCKET_ONE_NAME" "$bucket_file" || fail "failed to get object retention"
log 5 "$retention"
retention=$(echo "$retention" | grep -v "InsecureRequestWarning")
mode=$(echo "$retention" | jq -r ".Retention.Mode")
retain_until_date=$(echo "$retention" | jq -r ".Retention.RetainUntilDate")
[[ $mode == "GOVERNANCE" ]] || fail "retention mode should be governance, is $mode"
[[ $retain_until_date == "$retention_date"* ]] || fail "retain until date should be $retention_date, is $retain_until_date"
echo "fdkljafajkfs" > "$test_file_folder/$bucket_file"
put_object_with_user "s3api" "$test_file_folder/$bucket_file" "$BUCKET_ONE_NAME" "$bucket_file" "$username" "$secret_key" || local put_result=$?
[[ $put_result -ne 0 ]] || fail "able to overwrite object with hold"
# shellcheck disable=SC2154
[[ $put_object_error == *"Object is WORM protected and cannot be overwritten"* ]] || fail "unexpected error message: $put_object_error"
delete_object_with_user "s3api" "$BUCKET_ONE_NAME" "$bucket_file" "$username" "$secret_key" || local delete_result=$?
[[ $delete_result -ne 0 ]] || fail "able to delete object with hold"
[[ $delete_object_error == *"Object is WORM protected and cannot be overwritten"* ]] || fail "unexpected error message: $delete_object_error"
sleep 5
delete_object "s3api" "$BUCKET_ONE_NAME" "$bucket_file" || fail "error deleting object"
delete_bucket_or_contents "s3api" "$BUCKET_ONE_NAME"
delete_test_files "$bucket_file"
}
test_retention_bypass_aws_root() {
bucket_file="bucket_file"
username="ABCDEFG"
secret_key="HIJKLMN"
policy_file="policy_file"
legal_hold_retention_setup "$username" "$secret_key" "$bucket_file"
get_object_lock_configuration "$BUCKET_ONE_NAME" || fail "error getting lock configuration"
log 5 "$lock_config"
enabled=$(echo "$lock_config" | jq -r ".ObjectLockConfiguration.ObjectLockEnabled")
[[ $enabled == "Enabled" ]] || fail "ObjectLockEnabled should be 'Enabled', is '$enabled'"
if [[ "$OSTYPE" == "darwin"* ]]; then
retention_date=$(TZ="UTC" date -v+30S +"%Y-%m-%dT%H:%M:%S")
else
retention_date=$(TZ="UTC" date -d "+30 seconds" +"%Y-%m-%dT%H:%M:%S")
fi
log 5 "retention date: $retention_date"
put_object_retention "$BUCKET_ONE_NAME" "$bucket_file" "GOVERNANCE" "$retention_date" || fail "failed to add object retention"
if delete_object_with_user "s3api" "$BUCKET_ONE_NAME" "$bucket_file" "$username" "$secret_key"; then
log 2 "able to delete object despite retention"
return 1
fi
cat <<EOF > "$test_file_folder/$policy_file"
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Principal": "$username",
"Action": ["s3:BypassGovernanceRetention","s3:DeleteObject"],
"Resource": "arn:aws:s3:::$BUCKET_ONE_NAME/*"
}
]
}
EOF
put_bucket_policy "s3api" "$BUCKET_ONE_NAME" "$test_file_folder/$policy_file" || fail "error putting bucket policy"
delete_object_bypass_retention "$BUCKET_ONE_NAME" "$bucket_file" "$username" "$secret_key" || fail "error deleting object and bypassing retention"
delete_bucket_or_contents "s3api" "$BUCKET_ONE_NAME"
delete_test_files "$bucket_file" "$policy_file"
}
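delete_object_bypass_retention presumably maps to the s3api bypass flag; a hand-run sketch under that assumption (names and credentials illustrative):
# requires s3:BypassGovernanceRetention for the calling identity
AWS_ACCESS_KEY_ID=ABCDEFG AWS_SECRET_ACCESS_KEY=HIJKLMN \
aws --no-verify-ssl s3api delete-object --bucket my-bucket --key my-key --bypass-governance-retention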
legal_hold_retention_setup() {
[[ $# -eq 3 ]] || fail "legal hold or retention setup requires username, secret key, bucket file"
delete_bucket_or_contents_if_exists "s3api" "$BUCKET_ONE_NAME" || fail "error deleting bucket, or checking for existence"
setup_user "$1" "$2" "user" || fail "error creating user if nonexistent"
create_test_files "$3" || fail "error creating test files"
#create_bucket "s3api" "$BUCKET_ONE_NAME" || fail "error creating bucket"
if [[ $RECREATE_BUCKETS == "true" ]]; then
create_bucket_object_lock_enabled "$BUCKET_ONE_NAME" || fail "error creating bucket"
fi
change_bucket_owner "$AWS_ACCESS_KEY_ID" "$AWS_SECRET_ACCESS_KEY" "$BUCKET_ONE_NAME" "$1" || fail "error changing bucket ownership"
get_bucket_policy "s3api" "$BUCKET_ONE_NAME" || fail "error getting bucket policy"
# shellcheck disable=SC2154
log 5 "POLICY: $bucket_policy"
get_bucket_owner "$BUCKET_ONE_NAME"
# shellcheck disable=SC2154
log 5 "owner: $bucket_owner"
#put_bucket_ownership_controls "$BUCKET_ONE_NAME" "BucketOwnerPreferred" || fail "error putting bucket ownership controls"
put_object_with_user "s3api" "$test_file_folder/$3" "$BUCKET_ONE_NAME" "$3" "$1" "$2" || fail "failed to add object to bucket"
}
test_s3api_list_objects_v1_aws_root() {
local object_one="test-file-one"
local object_two="test-file-two"
local object_two_data="test data\n"
create_test_files "$object_one" "$object_two" || local created=$?
[[ $created -eq 0 ]] || fail "Error creating test files"
printf "%s" "$object_two_data" > "$test_file_folder"/"$object_two"
setup_bucket "aws" "$BUCKET_ONE_NAME" || local result=$?
[[ $result -eq 0 ]] || fail "Failed to create bucket '$BUCKET_ONE_NAME'"
put_object "s3api" "$test_file_folder"/"$object_one" "$BUCKET_ONE_NAME" "$object_one" || local copy_result_one=$?
[[ $copy_result_one -eq 0 ]] || fail "Failed to add object $object_one"
put_object "s3api" "$test_file_folder"/"$object_two" "$BUCKET_ONE_NAME" "$object_two" || local copy_result_two=$?
[[ $copy_result_two -eq 0 ]] || fail "Failed to add object $object_two"
list_objects_s3api_v1 "$BUCKET_ONE_NAME" || fail "error listing objects (v1)"
# shellcheck disable=SC2154
key_one=$(echo "$objects" | jq -r '.Contents[0].Key')
[[ $key_one == "$object_one" ]] || fail "Object one mismatch ($key_one, $object_one)"
size_one=$(echo "$objects" | jq -r '.Contents[0].Size')
[[ $size_one -eq 0 ]] || fail "Object one size mismatch ($size_one, 0)"
key_two=$(echo "$objects" | jq -r '.Contents[1].Key')
[[ $key_two == "$object_two" ]] || fail "Object two mismatch ($key_two, $object_two)"
size_two=$(echo "$objects" | jq '.Contents[1].Size')
[[ $size_two -eq ${#object_two_data} ]] || fail "Object two size mismatch ($size_two, ${#object_two_data})"
delete_bucket_or_contents "aws" "$BUCKET_ONE_NAME"
delete_test_files "$object_one" "$object_two"
}
test_s3api_list_objects_v2_aws_root() {
local object_one="test-file-one"
local object_two="test-file-two"
local object_two_data="test data\n"
create_test_files "$object_one" "$object_two" || local created=$?
[[ $created -eq 0 ]] || fail "Error creating test files"
printf "%s" "$object_two_data" > "$test_file_folder"/"$object_two"
setup_bucket "aws" "$BUCKET_ONE_NAME" || local result=$?
[[ $result -eq 0 ]] || fail "Failed to create bucket '$BUCKET_ONE_NAME'"
put_object "s3api" "$test_file_folder"/"$object_one" "$BUCKET_ONE_NAME" "$object_one" || local copy_object_one=$?
[[ $copy_object_one -eq 0 ]] || fail "Failed to add object $object_one"
put_object "s3api" "$test_file_folder"/"$object_two" "$BUCKET_ONE_NAME" "$object_two" || local copy_object_two=$?
[[ $copy_object_two -eq 0 ]] || fail "Failed to add object $object_two"
list_objects_v2 "$BUCKET_ONE_NAME" || fail "error listing objects (v2)"
key_one=$(echo "$objects" | jq -r '.Contents[0].Key')
[[ $key_one == "$object_one" ]] || fail "Object one mismatch ($key_one, $object_one)"
size_one=$(echo "$objects" | jq -r '.Contents[0].Size')
[[ $size_one -eq 0 ]] || fail "Object one size mismatch ($size_one, 0)"
key_two=$(echo "$objects" | jq -r '.Contents[1].Key')
[[ $key_two == "$object_two" ]] || fail "Object two mismatch ($key_two, $object_two)"
size_two=$(echo "$objects" | jq -r '.Contents[1].Size')
[[ $size_two -eq ${#object_two_data} ]] || fail "Object two size mismatch ($size_two, ${#object_two_data})"
delete_bucket_or_contents "aws" "$BUCKET_ONE_NAME"
delete_test_files "$object_one" "$object_two"
}
test_multipart_upload_list_parts_aws_root() {
local bucket_file="bucket-file"
create_test_files "$bucket_file" || fail "error creating test file"
dd if=/dev/urandom of="$test_file_folder/$bucket_file" bs=5M count=1 || fail "error creating test file"
setup_bucket "aws" "$BUCKET_ONE_NAME" || fail "failed to create bucket '$BUCKET_ONE_NAME'"
start_multipart_upload_and_list_parts "$BUCKET_ONE_NAME" "$bucket_file" "$test_file_folder"/"$bucket_file" 4 || fail "listing multipart upload parts failed"
declare -a parts_map
# shellcheck disable=SC2154
log 5 "parts: $parts"
for i in {0..3}; do
local part_number
local etag
# shellcheck disable=SC2154
part=$(echo "$parts" | grep -v "InsecureRequestWarning" | jq -r ".[$i]" 2>&1) || fail "error getting part: $part"
part_number=$(echo "$part" | jq ".PartNumber" 2>&1) || fail "error parsing part number: $part_number"
[[ $part_number != "" ]] || fail "error: blank part number"
etag=$(echo "$part" | jq ".ETag" 2>&1) || fail "error parsing etag: $etag"
[[ $etag != "" ]] || fail "error: blank etag"
# shellcheck disable=SC2004
parts_map[$part_number]=$etag
done
[[ ${#parts_map[@]} -ne 0 ]] || fail "error loading multipart upload parts to check"
for i in {0..3}; do
local part_number
local etag
# shellcheck disable=SC2154
listed_part=$(echo "$listed_parts" | grep -v "InsecureRequestWarning" | jq -r ".Parts[$i]" 2>&1) || fail "error parsing listed part: $listed_part"
part_number=$(echo "$listed_part" | jq ".PartNumber" 2>&1) || fail "error parsing listed part number: $part_number"
etag=$(echo "$listed_part" | jq ".ETag" 2>&1) || fail "error getting listed etag: $etag"
[[ ${parts_map[$part_number]} == "$etag" ]] || fail "error: etags don't match (part number: $part_number, etags ${parts_map[$part_number]},$etag)"
done
run_then_abort_multipart_upload "$BUCKET_ONE_NAME" "$bucket_file" "$test_file_folder/$bucket_file" 4
delete_bucket_or_contents "aws" "$BUCKET_ONE_NAME"
delete_test_files $bucket_file
}
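Outside the helpers, the part-listing round trip above is roughly this sequence (names illustrative; the upload is aborted rather than completed, matching the test):
upload_id=$(aws --no-verify-ssl s3api create-multipart-upload --bucket my-bucket --key my-key | jq -r '.UploadId')
aws --no-verify-ssl s3api upload-part --bucket my-bucket --key my-key --upload-id "$upload_id" --part-number 1 --body part-file
aws --no-verify-ssl s3api list-parts --bucket my-bucket --key my-key --upload-id "$upload_id" | jq -r '.Parts[] | "\(.PartNumber) \(.ETag)"'
aws --no-verify-ssl s3api abort-multipart-upload --bucket my-bucket --key my-key --upload-id "$upload_id"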


@@ -5,6 +5,7 @@ source ./tests/util.sh
source ./tests/util_file.sh
source ./tests/util_policy.sh
source ./tests/commands/copy_object.sh
source ./tests/commands/delete_bucket_tagging.sh
source ./tests/commands/delete_object_tagging.sh
source ./tests/commands/get_bucket_acl.sh
source ./tests/commands/get_bucket_location.sh
@@ -13,7 +14,10 @@ source ./tests/commands/get_object.sh
source ./tests/commands/get_object_tagging.sh
source ./tests/commands/list_buckets.sh
source ./tests/commands/put_bucket_acl.sh
source ./tests/commands/put_bucket_tagging.sh
source ./tests/commands/put_object_tagging.sh
source ./tests/commands/put_object.sh
source ./tests/commands/put_public_access_block.sh
test_common_multipart_upload() {
if [[ $# -ne 1 ]]; then
@@ -81,6 +85,7 @@ test_common_copy_object() {
delete_bucket_or_contents "$1" "$BUCKET_ONE_NAME"
delete_bucket_or_contents "$1" "$BUCKET_TWO_NAME"
delete_test_files "$object_name" "$object_name-copy"
}
test_common_put_object_with_data() {
@@ -266,7 +271,7 @@ test_common_set_get_delete_bucket_tags() {
check_bucket_tags_empty "$1" "$BUCKET_ONE_NAME" || fail "error checking if bucket tags are empty"
put_bucket_tag "$1" "$BUCKET_ONE_NAME" $key $value
put_bucket_tagging "$1" "$BUCKET_ONE_NAME" $key $value || fail "error putting bucket tags"
get_bucket_tagging "$1" "$BUCKET_ONE_NAME" || fail "Error getting bucket tags second time"
local tag_set_key
@@ -282,7 +287,7 @@ test_common_set_get_delete_bucket_tags() {
[[ $tag_set_key == "$key" ]] || fail "Key mismatch"
[[ $tag_set_value == "$value" ]] || fail "Value mismatch"
fi
delete_bucket_tags "$1" "$BUCKET_ONE_NAME"
delete_bucket_tagging "$1" "$BUCKET_ONE_NAME"
get_bucket_tagging "$1" "$BUCKET_ONE_NAME" || fail "Error getting bucket tags third time"
@@ -300,15 +305,11 @@ test_common_set_get_object_tags() {
local key="test_key"
local value="test_value"
create_test_files "$bucket_file" || local created=$?
[[ $created -eq 0 ]] || fail "Error creating test files"
setup_bucket "$1" "$BUCKET_ONE_NAME" || local result=$?
[[ $result -eq 0 ]] || fail "Failed to create bucket '$BUCKET_ONE_NAME'"
put_object "$1" "$test_file_folder"/"$bucket_file" "$BUCKET_ONE_NAME" "$bucket_file" || local copy_result=$?
[[ $copy_result -eq 0 ]] || fail "Failed to add object to bucket '$BUCKET_ONE_NAME'"
create_test_files "$bucket_file" || fail "error creating test files"
setup_bucket "$1" "$BUCKET_ONE_NAME" || fail "Failed to create bucket '$BUCKET_ONE_NAME'"
put_object "$1" "$test_file_folder"/"$bucket_file" "$BUCKET_ONE_NAME" "$bucket_file" || fail "Failed to add object to bucket '$BUCKET_ONE_NAME'"
get_object_tagging "$1" "$BUCKET_ONE_NAME" $bucket_file || local get_result=$?
[[ $get_result -eq 0 ]] || fail "Error getting object tags"
get_object_tagging "$1" "$BUCKET_ONE_NAME" $bucket_file || fail "Error getting object tags"
if [[ $1 == 'aws' ]]; then
tag_set=$(echo "$tags" | jq '.TagSet')
[[ $tag_set == "[]" ]] || [[ $tag_set == "" ]] || fail "Error: tags not empty"
@@ -316,9 +317,8 @@ test_common_set_get_object_tags() {
fail "no tags found (tags: $tags)"
fi
put_object_tag "$1" "$BUCKET_ONE_NAME" $bucket_file $key $value
get_object_tagging "$1" "$BUCKET_ONE_NAME" "$bucket_file" || local get_result_two=$?
[[ $get_result_two -eq 0 ]] || fail "Error getting object tags"
put_object_tagging "$1" "$BUCKET_ONE_NAME" $bucket_file $key $value || fail "error putting object tagging"
get_object_tagging "$1" "$BUCKET_ONE_NAME" "$bucket_file" || fail "error getting object tags"
if [[ $1 == 'aws' ]]; then
tag_set_key=$(echo "$tags" | jq -r '.TagSet[0].Key')
tag_set_value=$(echo "$tags" | jq -r '.TagSet[0].Value')
@@ -396,26 +396,19 @@ test_common_delete_object_tagging() {
tag_key="key"
tag_value="value"
create_test_files "$bucket_file" || local created=$?
[[ $created -eq 0 ]] || fail "Error creating test files"
create_test_files "$bucket_file" || fail "Error creating test files"
setup_bucket "$1" "$BUCKET_ONE_NAME" || local setup_result=$?
[[ $setup_result -eq 0 ]] || fail "error setting up bucket"
setup_bucket "$1" "$BUCKET_ONE_NAME" || fail "error setting up bucket"
put_object "$1" "$test_file_folder"/"$bucket_file" "$BUCKET_ONE_NAME" "$bucket_file" || local copy_result=$?
[[ $copy_result -eq 0 ]] || fail "Failed to add object to bucket"
put_object "$1" "$test_file_folder"/"$bucket_file" "$BUCKET_ONE_NAME" "$bucket_file" || fail "Failed to add object to bucket"
put_object_tag "$1" "$BUCKET_ONE_NAME" "$bucket_file" "$tag_key" "$tag_value" || put_result=$?
[[ $put_result -eq 0 ]] || fail "failed to add tags to object"
put_object_tagging "$1" "$BUCKET_ONE_NAME" "$bucket_file" "$tag_key" "$tag_value" || fail "failed to add tags to object"
get_and_verify_object_tags "$1" "$BUCKET_ONE_NAME" "$bucket_file" "$tag_key" "$tag_value" || get_result=$?
[[ $get_result -eq 0 ]] || fail "failed to get tags"
get_and_verify_object_tags "$1" "$BUCKET_ONE_NAME" "$bucket_file" "$tag_key" "$tag_value" || fail "failed to get tags"
delete_object_tagging "$1" "$BUCKET_ONE_NAME" "$bucket_file" || delete_result=$?
[[ $delete_result -eq 0 ]] || fail "error deleting object tagging"
delete_object_tagging "$1" "$BUCKET_ONE_NAME" "$bucket_file" || fail "error deleting object tagging"
check_object_tags_empty "$1" "$BUCKET_ONE_NAME" "$bucket_file" || get_result=$?
[[ $get_result -eq 0 ]] || fail "failed to get tags"
check_object_tags_empty "$1" "$BUCKET_ONE_NAME" "$bucket_file" || fail "failed to get tags"
delete_bucket_or_contents "aws" "$BUCKET_ONE_NAME"
delete_test_files "$bucket_file"
@@ -423,41 +416,93 @@ test_common_delete_object_tagging() {
test_common_get_bucket_location() {
[[ $# -eq 1 ]] || fail "test common get bucket location missing command type"
setup_bucket "aws" "$BUCKET_ONE_NAME" || local setup_result=$?
setup_bucket "$1" "$BUCKET_ONE_NAME" || local setup_result=$?
[[ $setup_result -eq 0 ]] || fail "error setting up bucket"
get_bucket_location "aws" "$BUCKET_ONE_NAME"
get_bucket_location "$1" "$BUCKET_ONE_NAME"
# shellcheck disable=SC2154
[[ $bucket_location == "null" ]] || [[ $bucket_location == "us-east-1" ]] || fail "wrong location: '$bucket_location'"
}
test_put_bucket_acl_s3cmd() {
if [[ $DIRECT != "true" ]]; then
# https://github.com/versity/versitygw/issues/695
skip
fi
setup_bucket "s3cmd" "$BUCKET_ONE_NAME" || fail "error creating bucket"
put_bucket_ownership_controls "$BUCKET_ONE_NAME" "BucketOwnerPreferred" || fail "error putting bucket ownership controls"
username="abcdefgh"
if [[ $DIRECT != "true" ]]; then
setup_user "$username" "HIJKLMN" "user" || fail "error creating user"
fi
sleep 5
get_bucket_acl "s3cmd" "$BUCKET_ONE_NAME" || fail "error retrieving acl"
log 5 "Initial ACLs: $acl"
acl_line=$(echo "$acl" | grep "ACL")
user_id=$(echo "$acl_line" | awk '{print $2}')
if [[ $DIRECT == "true" ]]; then
[[ $user_id == "$DIRECT_DISPLAY_NAME:" ]] || fail "ID mismatch ($user_id, $DIRECT_DISPLAY_NAME)"
else
[[ $user_id == "$AWS_ACCESS_KEY_ID:" ]] || fail "ID mismatch ($user_id, $AWS_ACCESS_KEY_ID)"
fi
permission=$(echo "$acl_line" | awk '{print $3}')
[[ $permission == "FULL_CONTROL" ]] || fail "Permission mismatch ($permission)"
if [[ $DIRECT == "true" ]]; then
put_public_access_block_enable_public_acls "$BUCKET_ONE_NAME" || fail "error enabling public ACLs"
fi
put_bucket_canned_acl_s3cmd "$BUCKET_ONE_NAME" "--acl-public" || fail "error putting canned s3cmd ACL"
get_bucket_acl "s3cmd" "$BUCKET_ONE_NAME" || fail "error retrieving acl"
log 5 "ACL after read put: $acl"
acl_lines=$(echo "$acl" | grep "ACL")
log 5 "ACL lines: $acl_lines"
while IFS= read -r line; do
lines+=("$line")
done <<< "$acl_lines"
log 5 "lines: ${lines[*]}"
[[ ${#lines[@]} -eq 2 ]] || fail "unexpected number of ACL lines: ${#lines[@]}"
anon_name=$(echo "${lines[1]}" | awk '{print $2}')
anon_permission=$(echo "${lines[1]}" | awk '{print $3}')
[[ $anon_name == "*anon*:" ]] || fail "unexpected anon name: $anon_name"
[[ $anon_permission == "READ" ]] || fail "unexpected anon permission: $anon_permission"
delete_bucket_or_contents "s3cmd" "$BUCKET_ONE_NAME"
}
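put_bucket_canned_acl_s3cmd presumably wraps s3cmd's setacl; the equivalent hand-run check, under that assumption (bucket illustrative):
s3cmd --no-check-certificate setacl s3://my-bucket --acl-public
s3cmd --no-check-certificate info s3://my-bucket | grep ACL   # expect a second "*anon*: READ" grant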
test_common_put_bucket_acl() {
[[ $# -eq 1 ]] || fail "test common put bucket acl missing command type"
setup_bucket "$1" "$BUCKET_ONE_NAME" || fail "error creating bucket"
put_bucket_ownership_controls "$BUCKET_ONE_NAME" "BucketOwnerPreferred" || fail "error putting bucket ownership controls"
setup_user "ABCDEFG" "HIJKLMN" "user" || fail "error creating user"
username="ABCDEFG"
setup_user "$username" "HIJKLMN" "user" || fail "error creating user"
get_bucket_acl "$1" "$BUCKET_ONE_NAME" || fail "error retrieving acl"
log 5 "Initial ACLs: $acl"
id=$(echo "$acl" | grep -v "InsecureRequestWarning" | jq '.Owner.ID' 2>&1) || fail "error getting ID: $id"
if [[ $id != '"'"$AWS_ACCESS_KEY_ID"'"' ]]; then
id=$(echo "$acl" | grep -v "InsecureRequestWarning" | jq -r '.Owner.ID' 2>&1) || fail "error getting ID: $id"
if [[ $id != "$AWS_ACCESS_KEY_ID" ]]; then
# for direct, ID is canonical user ID rather than AWS_ACCESS_KEY_ID
canonical_id=$(aws --no-verify-ssl s3api list-buckets --query 'Owner.ID' 2>&1) || fail "error getting caononical ID: $canonical_id"
canonical_id=$(aws --no-verify-ssl s3api list-buckets --query 'Owner.ID' 2>&1) || fail "error getting canonical ID: $canonical_id"
[[ $id == "$canonical_id" ]] || fail "acl ID doesn't match AWS key or canonical ID"
fi
acl_file="test-acl"
create_test_files "$acl_file"
if [[ $DIRECT == "true" ]]; then
grantee="{\"Type\": \"Group\", \"URI\": \"http://acs.amazonaws.com/groups/global/AllUsers\"}"
else
grantee="{\"ID\": \"$username\", \"Type\": \"CanonicalUser\"}"
fi
cat <<EOF > "$test_file_folder"/"$acl_file"
{
"Grants": [
{
"Grantee": {
"ID": "ABCDEFG",
"Type": "CanonicalUser"
},
"Grantee": $grantee,
"Permission": "READ"
}
],
@@ -468,12 +513,7 @@ cat <<EOF > "$test_file_folder"/"$acl_file"
EOF
log 6 "before 1st put acl"
if [[ $1 == 's3api' ]] || [[ $1 == 'aws' ]]; then
put_bucket_acl "$1" "$BUCKET_ONE_NAME" "$test_file_folder"/"$acl_file" || fail "error putting first acl"
else
put_bucket_acl "$1" "$BUCKET_ONE_NAME" "ABCDEFG" || fail "error putting first acl"
fi
put_bucket_acl_s3api "$1" "$BUCKET_ONE_NAME" "$test_file_folder"/"$acl_file" || fail "error putting first acl"
get_bucket_acl "$1" "$BUCKET_ONE_NAME" || fail "error retrieving second ACL"
log 5 "Acls after 1st put: $acl"
@@ -486,7 +526,7 @@ cat <<EOF > "$test_file_folder"/"$acl_file"
"Grants": [
{
"Grantee": {
"ID": "ABCDEFG",
"ID": "$username",
"Type": "CanonicalUser"
},
"Permission": "FULL_CONTROL"
@@ -498,8 +538,7 @@ cat <<EOF > "$test_file_folder"/"$acl_file"
}
EOF
put_bucket_acl "$1" "$BUCKET_ONE_NAME" "$test_file_folder"/"$acl_file" || fail "error putting second acl"
put_bucket_acl_s3api "$1" "$BUCKET_ONE_NAME" "$test_file_folder"/"$acl_file" || fail "error putting second acl"
get_bucket_acl "$1" "$BUCKET_ONE_NAME" || fail "error retrieving second ACL"
log 5 "Acls after 2nd put: $acl"


@@ -24,7 +24,14 @@ export RUN_MC=true
test_common_create_delete_bucket "mc"
}
# delete-bucket - test_create_delete_bucket
# delete-bucket
@test "test_delete_bucket" {
if [[ $RECREATE_BUCKETS == "false" ]]; then
skip "will not test bucket deletion in static bucket test config"
fi
setup_bucket "mc" "$BUCKET_ONE_NAME" || fail "error setting up bucket"
delete_bucket "mc" "$BUCKET_ONE_NAME" || fail "error deleting bucket"
}
# delete-bucket-policy
@test "test_get_put_delete_bucket_policy" {


@@ -39,3 +39,11 @@ source ./tests/test_common.sh
@test "test_list_objects_file_count" {
test_common_list_objects_file_count "s3"
}
@test "test_delete_bucket" {
if [[ $RECREATE_BUCKETS == "false" ]]; then
skip "will not test bucket deletion in static bucket test config"
fi
setup_bucket "s3" "$BUCKET_ONE_NAME" || fail "error setting up bucket"
delete_bucket "s3" "$BUCKET_ONE_NAME" || fail "error deleting bucket"
}


@@ -18,9 +18,9 @@ export RUN_USERS=true
}
# copy-object
#@test "test_copy_object" {
# test_common_copy_object "s3cmd"
#}
@test "test_copy_object" {
test_common_copy_object "s3cmd"
}
# create-bucket
@test "test_create_delete_bucket" {
@@ -73,9 +73,9 @@ export RUN_USERS=true
test_common_put_object_no_data "s3cmd"
}
#@test "test_put_bucket_acl" {
# test_common_put_bucket_acl "s3cmd"
#}
@test "test_put_bucket_acl" {
test_put_bucket_acl_s3cmd
}
# test listing buckets on versitygw
@test "test_list_buckets_s3cmd" {


@@ -2,6 +2,8 @@
source ./tests/test_user_common.sh
source ./tests/util_users.sh
source ./tests/commands/get_object.sh
source ./tests/commands/put_object.sh
export RUN_USERS=true
@@ -26,3 +28,88 @@ export RUN_USERS=true
@test "test_userplus_operation_aws" {
test_userplus_operation "aws"
}
@test "test_user_get_object" {
username="ABCDEFG"
password="HIJKLMN"
test_file="test_file"
setup_user "$username" "$password" "user" || fail "error creating user if nonexistent"
create_test_files "$test_file" || fail "error creating test files"
setup_bucket "s3api" "$BUCKET_ONE_NAME" || fail "error setting up bucket"
if get_object_with_user "s3api" "$BUCKET_ONE_NAME" "$test_file" "$test_file_folder/$test_file-copy" "$username" "$password"; then
fail "able to get object despite not being bucket owner"
fi
change_bucket_owner "$AWS_ACCESS_KEY_ID" "$AWS_SECRET_ACCESS_KEY" "$BUCKET_ONE_NAME" "$username" || fail "error changing bucket ownership"
put_object "s3api" "$test_file_folder/$test_file" "$BUCKET_ONE_NAME" "$test_file" || fail "failed to add object to bucket"
get_object_with_user "s3api" "$BUCKET_ONE_NAME" "$test_file" "$test_file_folder/$test_file-copy" "$username" "$password" || fail "error getting object"
}
@test "test_userplus_get_object" {
username="ABCDEFG"
password="HIJKLMN"
test_file="test_file"
setup_user "$username" "$password" "admin" || fail "error creating user if nonexistent"
create_test_files "$test_file" || fail "error creating test files"
setup_bucket "s3api" "$BUCKET_ONE_NAME" || fail "error setting up bucket"
if get_object_with_user "s3api" "$BUCKET_ONE_NAME" "$test_file" "$test_file_folder/$test_file-copy" "$username" "$password"; then
fail "able to get object despite not being bucket owner"
fi
change_bucket_owner "$AWS_ACCESS_KEY_ID" "$AWS_SECRET_ACCESS_KEY" "$BUCKET_ONE_NAME" "$username" || fail "error changing bucket ownership"
put_object "s3api" "$test_file_folder/$test_file" "$BUCKET_ONE_NAME" "$test_file" || fail "failed to add object to bucket"
get_object_with_user "s3api" "$BUCKET_ONE_NAME" "$test_file" "$test_file_folder/$test_file-copy" "$username" "$password" || fail "error getting object"
}
@test "test_user_delete_object" {
username="ABCDEFG"
password="HIJKLMN"
test_file="test_file"
setup_user "$username" "$password" "user" || fail "error creating user if nonexistent"
create_test_files "$test_file" || fail "error creating test files"
setup_bucket "s3api" "$BUCKET_ONE_NAME" || fail "error setting up bucket"
if get_object_with_user "s3api" "$BUCKET_ONE_NAME" "$test_file" "$test_file_folder/$test_file-copy" "$username" "$password"; then
fail "able to get object despite not being bucket owner"
fi
change_bucket_owner "$AWS_ACCESS_KEY_ID" "$AWS_SECRET_ACCESS_KEY" "$BUCKET_ONE_NAME" "$username" || fail "error changing bucket ownership"
put_object "s3api" "$test_file_folder/$test_file" "$BUCKET_ONE_NAME" "$test_file" || fail "failed to add object to bucket"
delete_object_with_user "s3api" "$BUCKET_ONE_NAME" "$test_file" "$username" "$password" || fail "error deleting object"
}
@test "test_admin_put_get_object" {
username="ABCDEFG"
password="HIJKLMN"
test_file="test_file"
setup_user "$username" "$password" "admin" || fail "error creating user if nonexistent"
create_test_file_with_size "$test_file" 10 || fail "error creating test file"
setup_bucket "s3api" "$BUCKET_ONE_NAME" || fail "error setting up bucket"
put_object_with_user "s3api" "$test_file_folder/$test_file" "$BUCKET_ONE_NAME" "$test_file" "$username" "$password" || fail "failed to add object to bucket"
get_object_with_user "s3api" "$BUCKET_ONE_NAME" "$test_file" "$test_file_folder/$test_file-copy" "$username" "$password" || fail "error getting object"
compare_files "$test_file_folder/$test_file" "$test_file_folder/$test_file-copy" || fail "files don't match"
delete_object_with_user "s3api" "$BUCKET_ONE_NAME" "$test_file" "$username" "$password" || fail "error deleting object"
if get_object "s3api" "$BUCKET_ONE_NAME" "$test_file" "$test_file_folder/$test_file-copy"; then
fail "file not successfully deleted"
fi
# shellcheck disable=SC2154
[[ "$get_object_error" == *"NoSuchKey"* ]] || fail "unexpected error message: $get_object_error"
delete_bucket_or_contents "s3api" "$BUCKET_ONE_NAME"
delete_test_files "$test_file" "$test_file-copy"
}
@test "test_user_create_multipart_upload" {
username="ABCDEFG"
password="HIJKLMN"
test_file="test_file"
setup_user "$username" "$password" "user" || fail "error creating user if nonexistent"
create_large_file "$test_file" || fail "error creating test file"
setup_bucket "s3api" "$BUCKET_ONE_NAME" || fail "error setting up bucket"
change_bucket_owner "$AWS_ACCESS_KEY_ID" "$AWS_SECRET_ACCESS_KEY" "$BUCKET_ONE_NAME" "$username" || fail "error changing bucket ownership"
create_multipart_upload_with_user "$BUCKET_ONE_NAME" "dummy" "$username" "$password" || fail "unable to create multipart upload"
}
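create_multipart_upload_with_user presumably scopes the user's credentials to a single call, along these lines (a sketch, not the helper's actual body):
AWS_ACCESS_KEY_ID="$username" AWS_SECRET_ACCESS_KEY="$password" \
aws --no-verify-ssl s3api create-multipart-upload --bucket "$BUCKET_ONE_NAME" --key dummy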


@@ -4,6 +4,7 @@ source ./tests/setup.sh
source ./tests/util_users.sh
source ./tests/util.sh
source ./tests/util_bucket_create.sh
source ./tests/commands/list_buckets.sh
test_admin_user() {
if [[ $# -ne 1 ]]; then


@@ -10,15 +10,19 @@ source ./tests/commands/create_bucket.sh
source ./tests/commands/delete_bucket.sh
source ./tests/commands/delete_bucket_policy.sh
source ./tests/commands/delete_object.sh
source ./tests/commands/get_bucket_acl.sh
source ./tests/commands/get_bucket_ownership_controls.sh
source ./tests/commands/get_bucket_tagging.sh
source ./tests/commands/get_object_tagging.sh
source ./tests/commands/head_bucket.sh
source ./tests/commands/head_object.sh
source ./tests/commands/list_objects.sh
source ./tests/commands/list_parts.sh
source ./tests/commands/put_bucket_acl.sh
source ./tests/commands/put_bucket_ownership_controls.sh
source ./tests/commands/put_object_lock_configuration.sh
source ./tests/commands/upload_part_copy.sh
source ./tests/commands/upload_part.sh
# recursively delete an AWS bucket
# param: bucket name
@@ -55,11 +59,32 @@ delete_bucket_recursive() {
return 0
}
delete_bucket_recursive_s3api() {
add_governance_bypass_policy() {
if [[ $# -ne 1 ]]; then
log 2 "delete bucket recursive command for s3api requires bucket name"
log 2 "'add governance bypass policy' command requires command ID"
return 1
fi
test_file_folder=$PWD
if [[ -z "$GITHUB_ACTIONS" ]]; then
create_test_file_folder
fi
cat <<EOF > "$test_file_folder/policy-bypass-governance.txt"
{
"Version": "dummy",
"Statement": [
{
"Effect": "Allow",
"Principal": "*",
"Action": "s3:BypassGovernanceRetention",
"Resource": "arn:aws:s3:::$1/*"
}
]
}
EOF
put_bucket_policy "s3api" "$1" "$test_file_folder/policy-bypass-governance.txt" || fail "error putting bucket policy"
}
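Usage matches the call later in clear_bucket_s3api, e.g.:
add_governance_bypass_policy "$BUCKET_ONE_NAME" || fail "error adding governance bypass policy"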
clear_bucket_s3api() {
if ! list_objects 's3api' "$1"; then
log 2 "error listing objects"
return 1
@@ -74,7 +99,25 @@ delete_bucket_recursive_s3api() {
log 2 "error removing object legal hold"
return 1
fi
if ! delete_object 's3api' "$1" "$object"; then
sleep 1
if [[ $LOG_LEVEL_INT -ge 5 ]]; then
if ! get_object_legal_hold "$1" "$object"; then
log 2 "error getting object legal hold status"
return 1
fi
log 5 "LEGAL HOLD: $legal_hold"
if ! get_object_retention "$1" "$object"; then
log 2 "error getting object retention"
if [[ $get_object_retention_error != *"NoSuchObjectLockConfiguration"* ]]; then
return 1
fi
fi
log 5 "RETENTION: $retention"
get_bucket_policy "s3api" "$1" || fail "error getting bucket policy"
log 5 "BUCKET POLICY: $bucket_policy"
fi
add_governance_bypass_policy "$1" || fail "error adding governance bypass policy"
if ! delete_object_bypass_retention "$1" "$object" "$AWS_ACCESS_KEY_ID" "$AWS_SECRET_ACCESS_KEY"; then
log 2 "error deleting object after legal hold removal"
return 1
fi
@@ -83,6 +126,19 @@ delete_bucket_recursive_s3api() {
return 1
fi
done
delete_bucket_policy "s3api" "$1" || fail "error deleting bucket policy"
put_bucket_canned_acl "$1" "private" || fail "error deleting bucket ACLs"
put_object_lock_configuration_disabled "$1" || fail "error removing object lock config"
#change_bucket_owner "$AWS_ACCESS_KEY_ID" "$AWS_SECRET_ACCESS_KEY" "$1" "$AWS_ACCESS_KEY_ID" || fail "error changing bucket owner"
}
delete_bucket_recursive_s3api() {
if [[ $# -ne 1 ]]; then
log 2 "delete bucket recursive command for s3api requires bucket name"
return 1
fi
clear_bucket_s3api "$1" || fail "error clearing bucket"
delete_bucket 's3api' "$1" || local delete_bucket_result=$?
if [[ $delete_bucket_result -ne 0 ]]; then
@@ -104,7 +160,7 @@ delete_bucket_contents() {
local exit_code=0
local error
if [[ $1 == "aws" ]] || [[ $1 == 's3api' ]]; then
error=$(aws --no-verify-ssl s3 rm s3://"$2" --recursive 2>&1) || exit_code="$?"
clear_bucket_s3api "$2" || exit_code="$?"
elif [[ $1 == "s3cmd" ]]; then
error=$(s3cmd "${S3CMD_OPTS[@]}" --no-check-certificate del s3://"$2" --recursive --force 2>&1) || exit_code="$?"
elif [[ $1 == "mc" ]]; then
@@ -156,7 +212,7 @@ delete_bucket_or_contents() {
log 2 "error deleting bucket contents"
return 1
fi
if ! delete_bucket_policy "s3api" "$2"; then
if ! delete_bucket_policy "$1" "$2"; then
log 2 "error deleting bucket policies"
return 1
fi
@@ -164,7 +220,7 @@ delete_bucket_or_contents() {
log 2 "error getting object ownership rule"
return 1
fi
# shellcheck disable=SC2154
log 5 "object ownership rule: $object_ownership_rule"
if [[ "$object_ownership_rule" != "BucketOwnerEnforced" ]] && ! put_bucket_canned_acl "$2" "private"; then
log 2 "error resetting bucket ACLs"
return 1
@@ -192,8 +248,7 @@ delete_bucket_or_contents_if_exists() {
return 1
fi
if [[ $bucket_exists_result -eq 0 ]]; then
delete_bucket_or_contents "$1" "$2" || local delete_result=$?
if [[ delete_result -ne 0 ]]; then
if ! delete_bucket_or_contents "$1" "$2"; then
log 2 "error deleting bucket or contents"
return 1
fi
@@ -247,16 +302,16 @@ setup_bucket() {
# return 0 for true, 1 for false, 2 for error
object_exists() {
if [ $# -ne 3 ]; then
echo "object exists check missing command, bucket name, object name"
log 2 "object exists check missing command, bucket name, object name"
return 2
fi
head_object "$1" "$2" "$3" || head_result=$?
if [[ $head_result -eq 2 ]]; then
echo "error checking if object exists"
head_object "$1" "$2" "$3" || local head_object_result=$?
if [[ $head_object_result -eq 2 ]]; then
log 2 "error checking if object exists"
return 2
fi
# shellcheck disable=SC2086
return $head_result
return $head_object_result
}
put_object_with_metadata() {
@@ -357,35 +412,6 @@ check_and_put_object() {
return 0
}
list_buckets_with_user() {
if [[ $# -ne 3 ]]; then
echo "List buckets command missing format, user id, key"
return 1
fi
local exit_code=0
local output
if [[ $1 == "aws" ]]; then
output=$(AWS_ACCESS_KEY_ID="$2" AWS_SECRET_ACCESS_KEY="$3" aws --no-verify-ssl s3 ls s3:// 2>&1) || exit_code=$?
else
echo "invalid format: $1"
return 1
fi
if [ $exit_code -ne 0 ]; then
echo "error listing buckets: $output"
return 1
fi
bucket_array=()
while IFS= read -r line; do
bucket_name=$(echo "$line" | awk '{print $NF}')
bucket_array+=("${bucket_name%/}")
done <<< "$output"
export bucket_array
}
remove_insecure_request_warning() {
if [[ $# -ne 1 ]]; then
echo "remove insecure request warning requires input lines"
@@ -459,33 +485,6 @@ get_object_acl() {
export acl
}
# add tags to bucket
# params: bucket, key, value
# return: 0 for success, 1 for error
put_bucket_tag() {
if [ $# -ne 4 ]; then
echo "bucket tag command missing command type, bucket name, key, value"
return 1
fi
local error
local result
if [[ $1 == 'aws' ]]; then
error=$(aws --no-verify-ssl s3api put-bucket-tagging --bucket "$2" --tagging "TagSet=[{Key=$3,Value=$4}]") || result=$?
elif [[ $1 == 'mc' ]]; then
error=$(mc --insecure tag set "$MC_ALIAS"/"$2" "$3=$4" 2>&1) || result=$?
else
log 2 "invalid command type $1"
return 1
fi
if [[ $result -ne 0 ]]; then
echo "Error adding bucket tag: $error"
return 1
fi
return 0
}
check_tags_empty() {
if [[ $# -ne 1 ]]; then
echo "check tags empty requires command type"
@@ -536,48 +535,6 @@ check_bucket_tags_empty() {
return $check_result
}
delete_bucket_tags() {
if [ $# -ne 2 ]; then
echo "delete bucket tag command missing command type, bucket name"
return 1
fi
local result
if [[ $1 == 'aws' ]]; then
tags=$(aws --no-verify-ssl s3api delete-bucket-tagging --bucket "$2" 2>&1) || result=$?
elif [[ $1 == 'mc' ]]; then
tags=$(mc --insecure tag remove "$MC_ALIAS"/"$2" 2>&1) || result=$?
else
echo "invalid command type $1"
return 1
fi
return 0
}
# add tags to object
# params: object, key, value
# return: 0 for success, 1 for error
put_object_tag() {
if [ $# -ne 5 ]; then
echo "object tag command missing command type, object name, file, key, and/or value"
return 1
fi
local error
local result
if [[ $1 == 'aws' ]]; then
error=$(aws --no-verify-ssl s3api put-object-tagging --bucket "$2" --key "$3" --tagging "TagSet=[{Key=$4,Value=$5}]" 2>&1) || result=$?
elif [[ $1 == 'mc' ]]; then
error=$(mc --insecure tag set "$MC_ALIAS"/"$2"/"$3" "$4=$5" 2>&1) || result=$?
else
echo "invalid command type $1"
return 1
fi
if [[ $result -ne 0 ]]; then
echo "Error adding object tag: $error"
return 1
fi
return 0
}
get_and_verify_object_tags() {
if [[ $# -ne 5 ]]; then
echo "get and verify object tags missing command type, bucket, key, tag key, tag value"
@@ -627,40 +584,6 @@ list_objects_s3api_v1() {
export objects
}
# list objects in bucket, v2
# param: bucket
# export objects on success, return 1 for failure
list_objects_s3api_v2() {
if [ $# -ne 1 ]; then
echo "list objects command missing bucket and/or path"
return 1
fi
objects=$(aws --no-verify-ssl s3api list-objects-v2 --bucket "$1") || local result=$?
if [[ $result -ne 0 ]]; then
echo "error listing objects: $objects"
return 1
fi
export objects
}
# upload a single part of a multipart upload
# params: bucket, key, upload ID, original (unsplit) file name, part number
# return: 0 for success, 1 for failure
upload_part() {
if [ $# -ne 5 ]; then
echo "upload multipart part function must have bucket, key, upload ID, file name, part number"
return 1
fi
local etag_json
etag_json=$(aws --no-verify-ssl s3api upload-part --bucket "$1" --key "$2" --upload-id "$3" --part-number "$5" --body "$4-$(($5-1))") || local uploaded=$?
if [[ $uploaded -ne 0 ]]; then
echo "Error uploading part $5: $etag_json"
return 1
fi
etag=$(echo "$etag_json" | jq '.ETag')
export etag
}
# perform all parts of a multipart upload before completion command
# params: bucket, key, file to split and upload, number of file parts to upload
# return: 0 for success, 1 for failure
@@ -878,7 +801,7 @@ copy_file() {
# list parts of an unfinished multipart upload
# params: bucket, key, local file location, and parts to split into before upload
# export parts on success, return 1 for error
list_parts() {
start_multipart_upload_and_list_parts() {
if [ $# -ne 4 ]; then
log 2 "list multipart upload parts command requires bucket, key, file, and part count"
return 1
@@ -889,7 +812,7 @@ list_parts() {
return 1
fi
if ! listed_parts=$(aws --no-verify-ssl s3api list-parts --bucket "$1" --key "$2" --upload-id "$upload_id" 2>&1); then
if ! list_parts "$1" "$2" "$upload_id"; then
log 2 "Error listing multipart upload parts: $listed_parts"
return 1
fi


@@ -3,30 +3,6 @@
source ./tests/util_mc.sh
source ./tests/logger.sh
create_bucket_with_user() {
if [ $# -ne 4 ]; then
log 2 "create bucket missing command type, bucket name, access, secret"
return 1
fi
local exit_code=0
if [[ $1 == "aws" ]]; then
error=$(AWS_ACCESS_KEY_ID="$3" AWS_SECRET_ACCESS_KEY="$4" aws --no-verify-ssl s3 mb s3://"$2" 2>&1) || exit_code=$?
elif [[ $1 == "s3cmd" ]]; then
error=$(s3cmd "${S3CMD_OPTS[@]}" --no-check-certificate mb --access_key="$3" --secret_key="$4" s3://"$2" 2>&1) || exit_code=$?
elif [[ $1 == "mc" ]]; then
error=$(mc --insecure mb "$MC_ALIAS"/"$2" 2>&1) || exit_code=$?
else
log 2 "invalid command type $1"
return 1
fi
if [ $exit_code -ne 0 ]; then
log 2 "error creating bucket: $error"
export error
return 1
fi
return 0
}
create_bucket_invalid_name() {
if [ $# -ne 1 ]; then
log 2 "create bucket w/invalid name missing command type"


@@ -15,14 +15,30 @@ create_test_files() {
create_test_file_folder
fi
for name in "$@"; do
touch "$test_file_folder"/"$name" || local touch_result=$?
if [[ $touch_result -ne 0 ]]; then
echo "error creating file $name"
if [[ -e "$test_file_folder/$name" ]]; then
error=$(rm "$test_file_folder/$name" 2>&1) || fail "error removing existing test file: $error"
fi
error=$(touch "$test_file_folder"/"$name" 2>&1) || fail "error creating new file: $error"
done
export test_file_folder
}
create_test_file_with_size() {
if [ $# -ne 2 ]; then
log 2 "'create test file with size' function requires name, size"
return 1
fi
if ! create_test_file_folder "$1"; then
log 2 "error creating test file"
return 1
fi
if ! error=$(dd if=/dev/urandom of="$test_file_folder"/"$1" bs=1 count="$2" 2>&1); then
log 2 "error writing file data: $error"
return 1
fi
return 0
}
create_test_folder() {
if [ $# -lt 1 ]; then
echo "create test folder command missing folder name"
@@ -110,9 +126,11 @@ create_test_file_folder() {
else
test_file_folder=$PWD/versity-gwtest
fi
mkdir -p "$test_file_folder" || local mkdir_result=$?
if [[ $mkdir_result -ne 0 ]]; then
echo "error creating test file folder"
if ! error=$(mkdir -p "$test_file_folder" 2>&1); then
if [[ $error != *"File exists"* ]]; then
log 2 "error creating test file folder: $error"
return 1
fi
fi
export test_file_folder
}


@@ -61,35 +61,43 @@ create_user_if_nonexistent() {
return $?
}
put_user_policy() {
if [[ $# -ne 3 ]]; then
log 2 "attaching user policy requires user ID, role, bucket name"
put_user_policy_userplus() {
if [[ $# -ne 1 ]]; then
log 2 "'put user policy userplus' function requires username"
return 1
fi
if [[ -z "$test_file_folder" ]]; then
log 2 "no test folder defined"
return 1
fi
# TODO add other roles
if [[ $2 != "user" ]]; then
log 2 "role for '$2' not currently supported"
if [[ -z "$test_file_folder" ]] && [[ -z "$GITHUB_ACTIONS" ]] && ! create_test_file_folder; then
log 2 "unable to create test file folder"
return 1
fi
#"Resource": "arn:aws:s3:::${aws:username}-*"
cat <<EOF > "$test_file_folder"/user_policy_file
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": "*",
"Resource": "arn:aws:s3:::$3/*"
}
{
"Effect": "Allow",
"Action": [
"s3:CreateBucket",
"s3:ListBucket",
"s3:ListAllMyBuckets",
"s3:ListBucketMultipartUploads",
"s3:GetBucketLocation"
],
"Resource": "arn:aws:s3:::$1-*"
},
{
"Effect": "Allow",
"Action": "s3:*",
"Resource": [
"arn:aws:s3:::$1-*",
"arn:aws:s3:::$1-*/*"
]
}
]
}
EOF
if ! error=$(aws iam put-user-policy --user-name "$1" --policy-name "UserPolicy" --policy-document "file://$test_file_folder/user_policy_file" 2>&1); then
log 2 "error putting user policy: $error"
return 1
@@ -97,6 +105,29 @@ EOF
return 0
}
put_user_policy() {
if [[ $# -ne 3 ]]; then
log 2 "attaching user policy requires user ID, role, bucket name"
return 1
fi
if [[ -z "$test_file_folder" ]] && [[ -z "$GITHUB_ACTIONS" ]] && ! create_test_file_folder; then
log 2 "unable to create test file folder"
return 1
fi
case $2 in
"user")
;;
"userplus")
if ! put_user_policy_userplus "$1"; then
log 2 "error adding userplus policy"
return 1
fi
;;
esac
return 0
}
create_user_direct() {
if [[ $# -ne 3 ]]; then
log 2 "create user direct command requires desired username, role, bucket name"
@@ -272,11 +303,25 @@ delete_user() {
fi
}
change_bucket_owner_direct() {
if [[ $# -ne 4 ]]; then
echo "change bucket owner command requires ID, key, bucket name, and new owner"
return 1
fi
}
change_bucket_owner() {
if [[ $# -ne 4 ]]; then
echo "change bucket owner command requires ID, key, bucket name, and new owner"
return 1
fi
if [[ $DIRECT == "true" ]]; then
if ! change_bucket_owner_direct "$1" "$2" "$3" "$4"; then
log 2 "error changing bucket owner direct to s3"
return 1
fi
fi
error=$($VERSITY_EXE admin --allow-insecure --access "$1" --secret "$2" --endpoint-url "$AWS_ENDPOINT_URL" change-bucket-owner --bucket "$3" --owner "$4" 2>&1) || local change_result=$?
if [[ $change_result -ne 0 ]]; then
echo "error changing bucket owner: $error"