Compare commits

...

75 Commits

Author SHA1 Message Date
Ben McClelland
905b283421 Merge pull request #402 from versity/ben/sign_with_user_agent 2024-02-14 10:53:53 -08:00
Ben McClelland
6fea34acda fix: request signature check with signed user-agent
This is a hack to replace the ignored headers in the aws-sdk-go-v2
internal/v4 package. The headers in the default ignore list include
User-Agent, but this header is included in the signed headers by some clients.

fixes #396
2024-02-13 22:56:13 -08:00
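Editor's note: for context on this fix, below is a minimal from-scratch sketch of the SigV4 signing chain (not versitygw's actual code; all names and values are invented) showing why the canonical request must carry every header the client listed in SignedHeaders — including User-Agent — for a recomputed signature to match.

package main

import (
    "crypto/hmac"
    "crypto/sha256"
    "encoding/hex"
    "fmt"
    "strings"
)

func hmacSHA256(key, data []byte) []byte {
    h := hmac.New(sha256.New, key)
    h.Write(data)
    return h.Sum(nil)
}

// sign recomputes a request signature over the given signed headers.
// headers must contain every name in signedHeaders (e.g. "user-agent");
// skipping one, as a default ignore list would, changes the result.
func sign(secret, date, region, service, method, uri, query, payloadHash string,
    headers map[string]string, signedHeaders []string) string {
    var canonHdrs strings.Builder
    for _, h := range signedHeaders {
        canonHdrs.WriteString(h + ":" + strings.TrimSpace(headers[h]) + "\n")
    }
    // canonical request: method, URI, query, headers, signed-header list, payload hash
    canonReq := strings.Join([]string{
        method, uri, query,
        canonHdrs.String(),
        strings.Join(signedHeaders, ";"),
        payloadHash,
    }, "\n")

    scope := date[:8] + "/" + region + "/" + service + "/aws4_request"
    sum := sha256.Sum256([]byte(canonReq))
    toSign := strings.Join([]string{
        "AWS4-HMAC-SHA256", date, scope, hex.EncodeToString(sum[:]),
    }, "\n")

    // derive the signing key: date -> region -> service -> "aws4_request"
    k := hmacSHA256([]byte("AWS4"+secret), []byte(date[:8]))
    k = hmacSHA256(k, []byte(region))
    k = hmacSHA256(k, []byte(service))
    k = hmacSHA256(k, []byte("aws4_request"))
    return hex.EncodeToString(hmacSHA256(k, []byte(toSign)))
}

func main() {
    sig := sign("secret", "20240213T000000Z", "us-east-1", "s3",
        "GET", "/bucket/key", "",
        "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855", // SHA-256 of an empty payload
        map[string]string{"host": "gw.example.com", "user-agent": "aws-cli/2.15"},
        []string{"host", "user-agent"})
    fmt.Println(sig)
}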
Ben McClelland
1c29fbfd81 Merge pull request #397 from versity/presigned-url-authentication
Presigned URL authentication
2024-02-13 11:33:49 -08:00
jonaustin09
a3b14d3a05 feat: Added an integration test for UploadPart action with v4 query params authentication, added unit tests for validateDate function 2024-02-13 11:38:28 -05:00
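Editor's note: the validateDate function mentioned above is not shown on this page; the sketch below is a hypothetical illustration of the kind of window check such a function performs for query-parameter (presigned) authentication, with bounds taken from the SigV4 spec.

package main

import (
    "fmt"
    "time"
)

// validateDate sketch: X-Amz-Date must parse, X-Amz-Expires must be within
// SigV4's 1s..7d bounds, and "now" must fall inside [date, date+expires].
// Illustrative only; the gateway's actual logic may differ.
func validateDate(amzDate string, expiresSec int64, now time.Time) error {
    t, err := time.Parse("20060102T150405Z", amzDate)
    if err != nil {
        return fmt.Errorf("malformed X-Amz-Date: %w", err)
    }
    if expiresSec < 1 || expiresSec > 604800 { // 604800s = 7 days, the SigV4 maximum
        return fmt.Errorf("X-Amz-Expires out of range: %d", expiresSec)
    }
    if now.Before(t) {
        return fmt.Errorf("request date is in the future")
    }
    if now.After(t.Add(time.Duration(expiresSec) * time.Second)) {
        return fmt.Errorf("presigned URL expired")
    }
    return nil
}

func main() {
    err := validateDate("20240213T000000Z", 900, time.Now().UTC())
    fmt.Println(err) // reports expiry unless run within 15 minutes of the stamp
}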
Ben McClelland
cafb57eb33 Merge pull request #399 from versity/ben/xml_responses
fix: correct xml response encoding for list-buckets
2024-02-13 08:16:55 -08:00
Ben McClelland
0760467c3d fix: correct xml response encoding for list-buckets and tagging
fixes #395
2024-02-12 16:20:07 -08:00
Ben McClelland
4d168da376 Merge pull request #401 from versity/dependabot/go_modules/dev-dependencies-0c6b2d3779
chore(deps): bump the dev-dependencies group with 4 updates
2024-02-12 16:09:21 -08:00
Ben McClelland
cde033811f Merge pull request #400 from versity/test_cmdline_list_parts
Test cmdline list parts
2024-02-12 16:08:30 -08:00
Luke McCrone
7a56c7e15e test: multipart upload - list parts, uploads 2024-02-12 19:36:24 -03:00
jonaustin09
e21e514997 feat: Added 20 integration tests for v4 authentication with query params. Fixed a few bugs in v4 query params authentication 2024-02-12 16:31:01 -05:00
dependabot[bot]
660709fe6d chore(deps): bump the dev-dependencies group with 4 updates
Bumps the dev-dependencies group with 4 updates: [github.com/Azure/azure-sdk-for-go/sdk/azcore](https://github.com/Azure/azure-sdk-for-go), [github.com/Azure/azure-sdk-for-go/sdk/storage/azblob](https://github.com/Azure/azure-sdk-for-go), [github.com/valyala/fasthttp](https://github.com/valyala/fasthttp) and [golang.org/x/sys](https://github.com/golang/sys).


Updates `github.com/Azure/azure-sdk-for-go/sdk/azcore` from 1.9.1 to 1.9.2
- [Release notes](https://github.com/Azure/azure-sdk-for-go/releases)
- [Changelog](https://github.com/Azure/azure-sdk-for-go/blob/main/documentation/release.md)
- [Commits](https://github.com/Azure/azure-sdk-for-go/compare/sdk/azcore/v1.9.1...sdk/azcore/v1.9.2)

Updates `github.com/Azure/azure-sdk-for-go/sdk/storage/azblob` from 1.2.1 to 1.3.0
- [Release notes](https://github.com/Azure/azure-sdk-for-go/releases)
- [Changelog](https://github.com/Azure/azure-sdk-for-go/blob/main/documentation/release.md)
- [Commits](https://github.com/Azure/azure-sdk-for-go/compare/sdk/azidentity/v1.2.1...sdk/azcore/v1.3.0)

Updates `github.com/valyala/fasthttp` from 1.51.0 to 1.52.0
- [Release notes](https://github.com/valyala/fasthttp/releases)
- [Commits](https://github.com/valyala/fasthttp/compare/v1.51.0...v1.52.0)

Updates `golang.org/x/sys` from 0.16.0 to 0.17.0
- [Commits](https://github.com/golang/sys/compare/v0.16.0...v0.17.0)

---
updated-dependencies:
- dependency-name: github.com/Azure/azure-sdk-for-go/sdk/azcore
  dependency-type: direct:production
  update-type: version-update:semver-patch
  dependency-group: dev-dependencies
- dependency-name: github.com/Azure/azure-sdk-for-go/sdk/storage/azblob
  dependency-type: direct:production
  update-type: version-update:semver-minor
  dependency-group: dev-dependencies
- dependency-name: github.com/valyala/fasthttp
  dependency-type: direct:production
  update-type: version-update:semver-minor
  dependency-group: dev-dependencies
- dependency-name: golang.org/x/sys
  dependency-type: direct:production
  update-type: version-update:semver-minor
  dependency-group: dev-dependencies
...

Signed-off-by: dependabot[bot] <support@github.com>
2024-02-12 21:21:05 +00:00
Ben McClelland
5931d713f2 Merge pull request #398 from versity/test_cmdline_multipart_abort
test: multipart abort
2024-02-08 18:34:25 -08:00
Luke McCrone
08e5eb02a0 test: multipart abort 2024-02-08 18:31:19 -03:00
Ben McClelland
7cba952546 Merge pull request #394 from versity/test_cmdline_obj_tag
Test cmdline obj tag
2024-02-07 10:20:26 -08:00
Luke McCrone
5d6c0f8b67 another shellcheck fix 2024-02-07 14:58:06 -03:00
jonaustin09
be17b3fd33 feat: Closes #355. Added support for presigned URLs, particularly v4 authentication with query params 2024-02-07 09:17:35 -05:00
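Editor's note: a short client-side example of what this feature enables, using the public aws-sdk-go-v2 presign API pointed at the gateway; the endpoint, bucket, and key below are placeholders.

package main

import (
    "context"
    "fmt"
    "log"
    "time"

    "github.com/aws/aws-sdk-go-v2/aws"
    "github.com/aws/aws-sdk-go-v2/config"
    "github.com/aws/aws-sdk-go-v2/service/s3"
)

func main() {
    ctx := context.Background()
    cfg, err := config.LoadDefaultConfig(ctx)
    if err != nil {
        log.Fatal(err)
    }
    client := s3.NewFromConfig(cfg, func(o *s3.Options) {
        o.BaseEndpoint = aws.String("http://localhost:7070") // assumed gateway address
        o.UsePathStyle = true
    })
    presigner := s3.NewPresignClient(client)
    // Generate a GET URL signed with v4 query parameters, valid for 15 minutes.
    req, err := presigner.PresignGetObject(ctx, &s3.GetObjectInput{
        Bucket: aws.String("mybucket"),
        Key:    aws.String("mykey"),
    }, s3.WithPresignExpires(15*time.Minute))
    if err != nil {
        log.Fatal(err)
    }
    fmt.Println(req.URL) // carries X-Amz-Algorithm, X-Amz-Credential, X-Amz-Signature, ...
}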
Ben McClelland
e6440da30a Merge pull request #393 from versity/ben/releaser_naming
fix: add release version to release artifacts
2024-02-06 08:42:26 -08:00
Ben McClelland
443da7f9a4 fix: add release version to release artifacts 2024-02-05 11:04:51 -08:00
Ben McClelland
6c56307746 Merge pull request #391 from versity/ben/docker_actions
feat: add docker images to release
2024-02-04 10:21:07 -08:00
Ben McClelland
9765eadd84 feat: add docker images to release 2024-02-04 10:17:15 -08:00
Ben McClelland
4619171f86 Merge pull request #389 from versity/test_cmdline_head_data
Test cmdline head data
2024-02-02 11:35:35 -08:00
Luke McCrone
89b4b615ab test: cmdline tests (acls, get bucket/object info) 2024-02-02 11:32:13 -08:00
Jon Austin
0c056f935b ListObjectsV2 start-after prop (#388)
* fix: Fixes #138, Added StartAfter property in ListObjectsV2 action, added a couple of integration tests for ListObjectsV2
2024-02-01 11:04:52 -08:00
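Editor's note: client-side usage sketch for the new StartAfter support, again with placeholder endpoint, bucket, and key; keys come back in lexical order strictly after the start-after value.

package main

import (
    "context"
    "fmt"
    "log"

    "github.com/aws/aws-sdk-go-v2/aws"
    "github.com/aws/aws-sdk-go-v2/config"
    "github.com/aws/aws-sdk-go-v2/service/s3"
)

func main() {
    ctx := context.Background()
    cfg, err := config.LoadDefaultConfig(ctx)
    if err != nil {
        log.Fatal(err)
    }
    client := s3.NewFromConfig(cfg, func(o *s3.Options) {
        o.BaseEndpoint = aws.String("http://localhost:7070") // assumed gateway address
        o.UsePathStyle = true
    })
    // List keys lexically after "photos/2024/".
    out, err := client.ListObjectsV2(ctx, &s3.ListObjectsV2Input{
        Bucket:     aws.String("mybucket"), // placeholder
        StartAfter: aws.String("photos/2024/"),
    })
    if err != nil {
        log.Fatal(err)
    }
    for _, obj := range out.Contents {
        fmt.Println(aws.ToString(obj.Key))
    }
}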
Ben McClelland
bf1e2c83d5 Merge pull request #385 from versity/bucket-tagging-actions
Bucket tagging actions
2024-01-31 10:15:22 -08:00
Ben McClelland
68794518af fix: remove special proxy handling for bucket acls in posix backend 2024-01-31 10:10:12 -08:00
jonaustin09
3cce3a5201 feat: Added unit and integration test cases for posix bucket tagging related actions 2024-01-31 10:09:48 -08:00
jonaustin09
d70ea61830 feat: Added the following actions support in posix backend: PutBucketTagging, GetBucketTagging, DeleteBucketTagging 2024-01-31 10:09:48 -08:00
Ben McClelland
9d0cf77b25 Merge pull request #387 from versity/bucket-acl-on-creation
Bucket ACL on bucket creation
2024-01-31 09:55:12 -08:00
jonaustin09
0d3a238ceb feat: Implemented logic to add bucket ACL on bucket creation 2024-01-31 09:49:56 -08:00
Ben McClelland
99d0d9a007 Merge pull request #384 from versity/luke/posix_test
Luke/posix test
2024-01-29 15:20:54 -08:00
Luke McCrone
1409d664b4 test: initial aws cli bats tests 2024-01-29 20:07:00 -03:00
Ben McClelland
b908a4b981 Merge pull request #386 from versity/dependabot/go_modules/dev-dependencies-55f64e24bf
chore(deps): bump the dev-dependencies group with 3 updates
2024-01-29 14:53:14 -08:00
dependabot[bot]
ac06b5c4ae chore(deps): bump the dev-dependencies group with 3 updates
Bumps the dev-dependencies group with 3 updates: [github.com/aws/aws-sdk-go-v2/service/s3](https://github.com/aws/aws-sdk-go-v2), [github.com/google/uuid](https://github.com/google/uuid) and [github.com/aws/aws-sdk-go-v2/feature/s3/manager](https://github.com/aws/aws-sdk-go-v2).


Updates `github.com/aws/aws-sdk-go-v2/service/s3` from 1.48.0 to 1.48.1
- [Release notes](https://github.com/aws/aws-sdk-go-v2/releases)
- [Commits](https://github.com/aws/aws-sdk-go-v2/compare/service/s3/v1.48.0...service/s3/v1.48.1)

Updates `github.com/google/uuid` from 1.5.0 to 1.6.0
- [Release notes](https://github.com/google/uuid/releases)
- [Changelog](https://github.com/google/uuid/blob/master/CHANGELOG.md)
- [Commits](https://github.com/google/uuid/compare/v1.5.0...v1.6.0)

Updates `github.com/aws/aws-sdk-go-v2/feature/s3/manager` from 1.15.14 to 1.15.15
- [Release notes](https://github.com/aws/aws-sdk-go-v2/releases)
- [Commits](https://github.com/aws/aws-sdk-go-v2/compare/config/v1.15.14...config/v1.15.15)

---
updated-dependencies:
- dependency-name: github.com/aws/aws-sdk-go-v2/service/s3
  dependency-type: direct:production
  update-type: version-update:semver-patch
  dependency-group: dev-dependencies
- dependency-name: github.com/google/uuid
  dependency-type: direct:production
  update-type: version-update:semver-minor
  dependency-group: dev-dependencies
- dependency-name: github.com/aws/aws-sdk-go-v2/feature/s3/manager
  dependency-type: direct:production
  update-type: version-update:semver-patch
  dependency-group: dev-dependencies
...

Signed-off-by: dependabot[bot] <support@github.com>
2024-01-29 22:03:46 +00:00
Ben McClelland
3146556293 Merge pull request #380 from versity/ben/chunked_reader
feat: add chunked upload support
2024-01-25 13:41:43 -08:00
Ben McClelland
1c03fce3f5 Merge pull request #383 from versity/dependabot/go_modules/dev-dependencies-83121c2333
chore(deps): bump the dev-dependencies group with 3 updates
2024-01-22 15:39:54 -08:00
dependabot[bot]
b83e2393a5 chore(deps): bump the dev-dependencies group with 3 updates
Bumps the dev-dependencies group with 3 updates: [github.com/Azure/azure-sdk-for-go/sdk/azidentity](https://github.com/Azure/azure-sdk-for-go), [github.com/aws/aws-sdk-go-v2/config](https://github.com/aws/aws-sdk-go-v2) and [github.com/aws/aws-sdk-go-v2/feature/s3/manager](https://github.com/aws/aws-sdk-go-v2).


Updates `github.com/Azure/azure-sdk-for-go/sdk/azidentity` from 1.4.0 to 1.5.1
- [Release notes](https://github.com/Azure/azure-sdk-for-go/releases)
- [Changelog](https://github.com/Azure/azure-sdk-for-go/blob/main/documentation/release.md)
- [Commits](https://github.com/Azure/azure-sdk-for-go/compare/sdk/azcore/v1.4.0...sdk/internal/v1.5.1)

Updates `github.com/aws/aws-sdk-go-v2/config` from 1.26.3 to 1.26.6
- [Release notes](https://github.com/aws/aws-sdk-go-v2/releases)
- [Commits](https://github.com/aws/aws-sdk-go-v2/compare/config/v1.26.3...config/v1.26.6)

Updates `github.com/aws/aws-sdk-go-v2/feature/s3/manager` from 1.15.11 to 1.15.14
- [Release notes](https://github.com/aws/aws-sdk-go-v2/releases)
- [Commits](https://github.com/aws/aws-sdk-go-v2/compare/config/v1.15.11...config/v1.15.14)

---
updated-dependencies:
- dependency-name: github.com/Azure/azure-sdk-for-go/sdk/azidentity
  dependency-type: direct:production
  update-type: version-update:semver-minor
  dependency-group: dev-dependencies
- dependency-name: github.com/aws/aws-sdk-go-v2/config
  dependency-type: direct:production
  update-type: version-update:semver-patch
  dependency-group: dev-dependencies
- dependency-name: github.com/aws/aws-sdk-go-v2/feature/s3/manager
  dependency-type: direct:production
  update-type: version-update:semver-patch
  dependency-group: dev-dependencies
...

Signed-off-by: dependabot[bot] <support@github.com>
2024-01-22 21:45:22 +00:00
Ben McClelland
1366408baa feat: add chunked upload support
As described in
https://docs.aws.amazon.com/AmazonS3/latest/API/sigv4-streaming.html
this adds support for reading from a chunked upload encoded request
body. The chunked reader modifies the data stream to remove the
chunk encoding while validating the chunk signatures in line. This
allows the upper layers to get just the object data stream.
2024-01-22 11:35:01 -08:00
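Editor's note: a minimal sketch of such a decoding reader follows — it strips the "<hex-size>;chunk-signature=<sig>" framing and yields only object bytes. Chunk-signature verification is omitted here; this is illustrative, not the gateway's actual reader.

package main

import (
    "bufio"
    "fmt"
    "io"
    "strconv"
    "strings"
)

// chunkReader removes aws-chunked framing from a request body stream.
type chunkReader struct {
    r         *bufio.Reader
    remaining int64 // bytes left in the current chunk payload
    done      bool
}

func newChunkReader(r io.Reader) *chunkReader {
    return &chunkReader{r: bufio.NewReader(r)}
}

func (c *chunkReader) Read(p []byte) (int, error) {
    if c.done {
        return 0, io.EOF
    }
    if c.remaining == 0 {
        // read the chunk header: "<hex-size>;chunk-signature=<sig>\r\n"
        line, err := c.r.ReadString('\n')
        if err != nil {
            return 0, err
        }
        sizeHex, _, _ := strings.Cut(strings.TrimRight(line, "\r\n"), ";")
        size, err := strconv.ParseInt(sizeHex, 16, 64)
        if err != nil {
            return 0, fmt.Errorf("bad chunk size %q: %w", sizeHex, err)
        }
        if size == 0 { // final zero-length chunk terminates the stream
            c.done = true
            return 0, io.EOF
        }
        c.remaining = size
    }
    if int64(len(p)) > c.remaining {
        p = p[:c.remaining]
    }
    n, err := c.r.Read(p)
    c.remaining -= int64(n)
    if c.remaining == 0 {
        c.r.ReadString('\n') // consume the CRLF that follows each chunk payload
    }
    return n, err
}

func main() {
    body := "5;chunk-signature=sig1\r\nhello\r\n0;chunk-signature=sig2\r\n\r\n"
    data, _ := io.ReadAll(newChunkReader(strings.NewReader(body)))
    fmt.Printf("%s\n", data) // hello
}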
Jon Austin
cf92b6fd80 Fix/azure copy object (#382)
* fix: Added destination bucket acl check and metadata comparison for CopyObject action in azure backend

---------

Co-authored-by: Ben McClelland <ben.mcclelland@versity.com>
2024-01-22 10:01:16 -08:00
Jon Austin
d956ecacd7 Fix/azure iam (#381)
* fix: Fixed internal IAM file removal bug
2024-01-22 10:00:41 -08:00
Jon Austin
68e800492e Fix/azure list objects (#379)
* fix: Added pagination to ListObjects and ListObjectsV2 actions, fixed multipart upload non existing key error handling
2024-01-22 09:54:45 -08:00
Ben McClelland
f836d96717 Merge pull request #378 from versity/ben/signature 2024-01-17 13:18:54 -08:00
Ben McClelland
b5894dd714 fix: allow spaces in Authorization string
This change removes all spaces after the algorithm to have
standard parsing for the following key/value pairs. This fixes
some clients that were using a slightly different format than
the example AWS request strings.
2024-01-17 10:45:57 -08:00
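Editor's note: a sketch of tolerant parsing along those lines — strip all spaces after the algorithm token, then split the key/value fields; illustrative, not the gateway's exact code.

package main

import (
    "fmt"
    "strings"
)

// parseAuthorization splits an Authorization header into its SigV4 fields,
// tolerating extra spaces after the algorithm token.
func parseAuthorization(h string) (map[string]string, error) {
    algo, rest, found := strings.Cut(h, " ")
    if !found || algo != "AWS4-HMAC-SHA256" {
        return nil, fmt.Errorf("unsupported authorization: %q", h)
    }
    // Removing all remaining spaces makes clients that insert extra
    // whitespace parse the same as the canonical AWS examples.
    rest = strings.ReplaceAll(rest, " ", "")
    kv := make(map[string]string)
    for _, part := range strings.Split(rest, ",") {
        k, v, ok := strings.Cut(part, "=")
        if !ok {
            return nil, fmt.Errorf("malformed field: %q", part)
        }
        kv[k] = v
    }
    return kv, nil
}

func main() {
    m, _ := parseAuthorization("AWS4-HMAC-SHA256 Credential=user/20240117/us-east-1/s3/aws4_request, SignedHeaders=host;x-amz-date, Signature=abc123")
    fmt.Println(m["SignedHeaders"]) // host;x-amz-date
}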
Ben McClelland
17bdc58da9 Merge pull request #374 from versity/ben/test_fixup
Ben/test fixup
2024-01-17 10:45:43 -08:00
jonaustin09
03e4a28d57 fix: Fixed couple of bugs regarding to GetObject range errors, blob metadata reference losing 2024-01-17 08:27:37 -08:00
jonaustin09
240db54feb feat: Added ChangeBucketOwner, ListBucketsAndOwners action implementation in azure backend. Fixed acl key bug in getting container metadata. Added container owner in ListBuckets action 2024-01-17 08:27:37 -08:00
Ben McClelland
d404f96320 fix: translate azure errors to s3 for compatibility 2024-01-17 08:27:37 -08:00
Ben McClelland
1cdf0706e7 fix: fix crashes in test cases when fields missing 2024-01-17 08:27:37 -08:00
Ben McClelland
ca6d9e3c11 fix: docker env set to tests defaults 2024-01-17 08:27:37 -08:00
Ben McClelland
e16c54c1a3 Merge pull request #375 from versity/dependabot/go_modules/dev-dependencies-88fd56ff93
chore(deps): bump the dev-dependencies group with 1 update
2024-01-16 08:12:42 -08:00
dependabot[bot]
15daec9f51 chore(deps): bump the dev-dependencies group with 1 update
Bumps the dev-dependencies group with 1 update: [github.com/nats-io/nats.go](https://github.com/nats-io/nats.go).


Updates `github.com/nats-io/nats.go` from 1.31.0 to 1.32.0
- [Release notes](https://github.com/nats-io/nats.go/releases)
- [Commits](https://github.com/nats-io/nats.go/compare/v1.31.0...v1.32.0)

---
updated-dependencies:
- dependency-name: github.com/nats-io/nats.go
  dependency-type: direct:production
  update-type: version-update:semver-minor
  dependency-group: dev-dependencies
...

Signed-off-by: dependabot[bot] <support@github.com>
2024-01-15 21:10:11 +00:00
Ben McClelland
c406d7069f Merge pull request #371 from versity/ben/default_acl
fix: cleanup backend ACLs
2024-01-11 12:30:44 -08:00
Ben McClelland
6481e2aac5 fix: cleanup backend ACLs
This adds the default ACL to the CreateBucket backend method so
that the backend doesn't need to know how to construct an ACL.

This also moves the s3proxy ACLs to a tag key/value because the
gateway ACLs are not the same accounts as the backend s3 server.
TODO: we may need to mask this tag key/value if we add support
for the Get/PutBucketTagging API.
2024-01-10 09:36:00 -08:00
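Editor's note: a sketch of the resulting caller-side flow, using the auth.ACL shape visible in the posix diff further down; the helper name is made up and the snippet is illustrative, not the gateway's actual call site.

package gateway

import (
    "context"
    "encoding/json"
    "fmt"

    "github.com/aws/aws-sdk-go-v2/service/s3"
    "github.com/versity/versitygw/auth"
    "github.com/versity/versitygw/backend"
)

// createBucketWithDefaultACL builds the default private ACL once at the API
// layer and hands the marshaled bytes to whichever backend implements the
// new CreateBucket(ctx, input, acl []byte) signature.
func createBucketWithDefaultACL(ctx context.Context, be backend.Backend, input *s3.CreateBucketInput, owner string) error {
    acl := auth.ACL{ACL: "private", Owner: owner, Grantees: []auth.Grantee{}}
    jsonACL, err := json.Marshal(acl)
    if err != nil {
        return fmt.Errorf("marshal acl: %w", err)
    }
    return be.CreateBucket(ctx, input, jsonACL)
}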
Ben McClelland
45cf5e6373 Merge pull request #366 from versity/ben/az_ident
feat: add azure local env auth
2024-01-09 22:24:43 -08:00
Ben McClelland
3db43b7206 feat: add azure local env auth
This is the recommended auth from the following:
https://github.com/Azure-Samples/storage-blobs-go-quickstart/blob/master/storage-quickstart.go
https://learn.microsoft.com/en-us/azure/storage/blobs/storage-quickstart-blobs-go?toc=%2Fazure%2Fdeveloper%2Fgo%2Ftoc.json&bc=%2Fazure%2Fdeveloper%2Fgo%2Fbreadcrumb%2Ftoc.json&tabs=roles-azure-portal#authenticate-to-azure-and-authorize-access-to-blob-data
2024-01-09 22:21:39 -08:00
Ben McClelland
6786a6385a Merge pull request #367 from versity/azure-sas-token
Azure sas token authentication
2024-01-09 22:06:57 -08:00
jonaustin09
e5fc12042b feat: Added sas token authentication for azure backend 2024-01-09 22:03:13 -08:00
Ben McClelland
06ccd7496e Merge pull request #369 from versity/ben/az_cleanup
chore: remove azure bug comment
2024-01-09 08:29:42 -08:00
Ben McClelland
c86362b269 Merge pull request #370 from versity/dependabot/go_modules/dev-dependencies-925c4d3e9f
chore(deps): bump the dev-dependencies group with 6 updates
2024-01-09 08:28:59 -08:00
Ben McClelland
a86a8cbce5 fix: add azure CreateMultipartUpload to allow clients to work as expected
The azure sdk doesn't use a separate function to initialize a
multipart upload, so CreateMultipartUpload becomes a no-op.
But we still need to have it return success so that clients
won't get an unexpected error.
2024-01-08 13:40:20 -08:00
dependabot[bot]
328ea4f4b7 chore(deps): bump the dev-dependencies group with 6 updates
Bumps the dev-dependencies group with 6 updates:

| Package | From | To |
| --- | --- | --- |
| [github.com/aws/aws-sdk-go-v2](https://github.com/aws/aws-sdk-go-v2) | `1.24.0` | `1.24.1` |
| [github.com/aws/aws-sdk-go-v2/service/s3](https://github.com/aws/aws-sdk-go-v2) | `1.47.7` | `1.48.0` |
| [github.com/gofiber/fiber/v2](https://github.com/gofiber/fiber) | `2.51.0` | `2.52.0` |
| [golang.org/x/sys](https://github.com/golang/sys) | `0.15.0` | `0.16.0` |
| [github.com/aws/aws-sdk-go-v2/config](https://github.com/aws/aws-sdk-go-v2) | `1.26.2` | `1.26.3` |
| [github.com/aws/aws-sdk-go-v2/feature/s3/manager](https://github.com/aws/aws-sdk-go-v2) | `1.15.9` | `1.15.11` |


Updates `github.com/aws/aws-sdk-go-v2` from 1.24.0 to 1.24.1
- [Release notes](https://github.com/aws/aws-sdk-go-v2/releases)
- [Commits](https://github.com/aws/aws-sdk-go-v2/compare/v1.24.0...v1.24.1)

Updates `github.com/aws/aws-sdk-go-v2/service/s3` from 1.47.7 to 1.48.0
- [Release notes](https://github.com/aws/aws-sdk-go-v2/releases)
- [Commits](https://github.com/aws/aws-sdk-go-v2/compare/service/s3/v1.47.7...service/s3/v1.48.0)

Updates `github.com/gofiber/fiber/v2` from 2.51.0 to 2.52.0
- [Release notes](https://github.com/gofiber/fiber/releases)
- [Commits](https://github.com/gofiber/fiber/compare/v2.51.0...v2.52.0)

Updates `golang.org/x/sys` from 0.15.0 to 0.16.0
- [Commits](https://github.com/golang/sys/compare/v0.15.0...v0.16.0)

Updates `github.com/aws/aws-sdk-go-v2/config` from 1.26.2 to 1.26.3
- [Release notes](https://github.com/aws/aws-sdk-go-v2/releases)
- [Commits](https://github.com/aws/aws-sdk-go-v2/compare/config/v1.26.2...config/v1.26.3)

Updates `github.com/aws/aws-sdk-go-v2/feature/s3/manager` from 1.15.9 to 1.15.11
- [Release notes](https://github.com/aws/aws-sdk-go-v2/releases)
- [Changelog](https://github.com/aws/aws-sdk-go-v2/blob/config/v1.15.11/CHANGELOG.md)
- [Commits](https://github.com/aws/aws-sdk-go-v2/compare/config/v1.15.9...config/v1.15.11)

---
updated-dependencies:
- dependency-name: github.com/aws/aws-sdk-go-v2
  dependency-type: direct:production
  update-type: version-update:semver-patch
  dependency-group: dev-dependencies
- dependency-name: github.com/aws/aws-sdk-go-v2/service/s3
  dependency-type: direct:production
  update-type: version-update:semver-minor
  dependency-group: dev-dependencies
- dependency-name: github.com/gofiber/fiber/v2
  dependency-type: direct:production
  update-type: version-update:semver-minor
  dependency-group: dev-dependencies
- dependency-name: golang.org/x/sys
  dependency-type: direct:production
  update-type: version-update:semver-minor
  dependency-group: dev-dependencies
- dependency-name: github.com/aws/aws-sdk-go-v2/config
  dependency-type: direct:production
  update-type: version-update:semver-patch
  dependency-group: dev-dependencies
- dependency-name: github.com/aws/aws-sdk-go-v2/feature/s3/manager
  dependency-type: direct:production
  update-type: version-update:semver-patch
  dependency-group: dev-dependencies
...

Signed-off-by: dependabot[bot] <support@github.com>
2024-01-08 21:30:29 +00:00
Ben McClelland
bf38a03af9 chore: remove azure bug comment
This comment references a bug that was fixed in the v1.2.1 sdk
update:
https://github.com/Azure/azure-sdk-for-go/issues/22171
2024-01-08 13:11:41 -08:00
Ben McClelland
f237d06a01 Merge pull request #368 from versity/azure-docker
Azure docker
2024-01-08 10:10:18 -08:00
jonaustin09
8fc16392d1 feat: Dockerized azure backend to run 2 images: one for azurite, one for azure backend 2024-01-08 10:07:50 -08:00
Jon Austin
9bfec719f3 Azure ACL (#364)
feat: Added GetBucketAcl and PutBucketAcl actions implementation in azure backend. ACL is stored in the container metadata
2024-01-03 11:15:53 -08:00
Ben McClelland
4a1d479bcb Merge pull request #365 from versity/ben/readme_update
chore: update docs for s3 backend support
2024-01-03 11:13:26 -08:00
Ben McClelland
9226999ae9 chore: update docs for s3 backend support 2024-01-03 11:00:09 -08:00
Ben McClelland
3f18bb5977 Merge pull request #362 from versity/dependabot/go_modules/dev-dependencies-21be33ef01
chore(deps): bump the dev-dependencies group with 1 update
2024-01-01 14:02:48 -08:00
dependabot[bot]
b145777340 chore(deps): bump the dev-dependencies group with 1 update
Bumps the dev-dependencies group with 1 update: [github.com/urfave/cli/v2](https://github.com/urfave/cli).


Updates `github.com/urfave/cli/v2` from 2.26.0 to 2.27.1
- [Release notes](https://github.com/urfave/cli/releases)
- [Changelog](https://github.com/urfave/cli/blob/main/docs/CHANGELOG.md)
- [Commits](https://github.com/urfave/cli/compare/v2.26.0...v2.27.1)

---
updated-dependencies:
- dependency-name: github.com/urfave/cli/v2
  dependency-type: direct:production
  update-type: version-update:semver-minor
  dependency-group: dev-dependencies
...

Signed-off-by: dependabot[bot] <support@github.com>
2024-01-01 21:11:27 +00:00
Ben McClelland
bae716b012 Merge pull request #352 from versity/ben/azure_blob
Ben/azure blob
2023-12-29 21:59:23 -08:00
Ben McClelland
4343252c1f Merge pull request #361 from versity/ben/readme
chore: update readme status and news
2023-12-29 21:58:07 -08:00
Ben McClelland
5a3ecc2db4 fix: azure run go mod tidy 2023-12-29 21:56:47 -08:00
jonaustin09
cafa45760c feat: Added pagination for ListParts azure action and added get range support for GetObject azure action 2023-12-29 21:55:32 -08:00
jonaustin09
8cc89fa713 feat: Azure backend implementation 2023-12-29 21:55:32 -08:00
Ben McClelland
3b945f72fc feat: azure blob backend initial pass 2023-12-29 21:54:56 -08:00
Ben McClelland
111d75b5d4 chore: update readme status and news 2023-12-29 21:46:47 -08:00
50 changed files with 6834 additions and 1673 deletions


@@ -1,6 +1,8 @@
-POSIX_PORT=
-PROXY_PORT=
-ACCESS_KEY_ID=
-SECRET_ACCESS_KEY=
-IAM_DIR=
-SETUP_DIR=
+POSIX_PORT=7071
+PROXY_PORT=7070
+ACCESS_KEY_ID=user
+SECRET_ACCESS_KEY=pass
+IAM_DIR=.
+SETUP_DIR=.
+AZ_ACCOUNT_NAME=devstoreaccount1
+AZ_ACCOUNT_KEY=Eby8vdM02xNOcqFlqUwJPLlmEtlCDXJ1OUzFT50uSRZ6IFsuFq2UVErCz4I6tq/K1SZFPTOtr/KBHBeksoGMGw==

.github/workflows/docker.yaml vendored Normal file

@@ -0,0 +1,45 @@
name: Publish Docker image

on:
  release:
    types: [published]

jobs:
  push_to_registries:
    name: Push Docker image to multiple registries
    runs-on: ubuntu-latest
    permissions:
      packages: write
      contents: read
    steps:
      - name: Check out the repo
        uses: actions/checkout@v4

      - name: Log in to Docker Hub
        uses: docker/login-action@v3
        with:
          username: ${{ secrets.DOCKER_USERNAME }}
          password: ${{ secrets.DOCKER_PASSWORD }}

      - name: Log in to the Container registry
        uses: docker/login-action@v3
        with:
          registry: ghcr.io
          username: ${{ github.actor }}
          password: ${{ secrets.GITHUB_TOKEN }}

      - name: Extract metadata (tags, labels) for Docker
        id: meta
        uses: docker/metadata-action@v5
        with:
          images: |
            versity/versitygw
            ghcr.io/${{ github.repository }}

      - name: Build and push Docker images
        uses: docker/build-push-action@v5
        with:
          context: .
          push: true
          tags: ${{ steps.meta.outputs.tags }}
          labels: ${{ steps.meta.outputs.labels }}

.github/workflows/system.yml vendored Normal file

@@ -0,0 +1,42 @@
name: system tests
on: pull_request
jobs:
  build:
    name: RunTests
    runs-on: ubuntu-latest
    steps:
      - name: Check out code into the Go module directory
        uses: actions/checkout@v3

      - name: Install ShellCheck
        run: sudo apt-get install shellcheck

      - name: Run ShellCheck
        run: shellcheck -S warning ./tests/*.sh

      - name: Set up Go
        uses: actions/setup-go@v4
        with:
          go-version: 'stable'
        id: go

      - name: Get Dependencies
        run: |
          go get -v -t -d ./...

      - name: Install BATS
        run: |
          git clone https://github.com/bats-core/bats-core.git
          cd bats-core && ./install.sh $HOME

      - name: Build and Run
        run: |
          make testbin
          export AWS_ACCESS_KEY_ID=user
          export AWS_SECRET_ACCESS_KEY=pass
          aws configure set aws_access_key_id $AWS_ACCESS_KEY_ID --profile versity
          aws configure set aws_secret_access_key $AWS_SECRET_ACCESS_KEY --profile versity
          export VERSITY_EXE=./versitygw
          mkdir /tmp/gw
          VERSITYGW_TEST_ENV=$GITHUB_WORKSPACE/tests/.env.default $HOME/bin/bats ./tests/s3_bucket_tests.sh
          VERSITYGW_TEST_ENV=$GITHUB_WORKSPACE/tests/.env.default $HOME/bin/bats ./tests/posix_tests.sh

.gitignore vendored

@@ -39,3 +39,10 @@ VERSION
 /profile.txt
 dist/
+
+# secrets file for local github-actions testing
+.secrets
+
+# env files for testing
+.env*
+!.env.default


@@ -27,6 +27,7 @@ archives:
     # this name template makes the OS and Arch compatible with the results of uname.
     name_template: >-
       {{ .ProjectName }}_
+      v{{ .Version }}_
       {{- title .Os }}_
       {{- if eq .Arch "amd64" }}x86_64
       {{- else if eq .Arch "386" }}i386


@@ -1,4 +1,4 @@
-FROM golang:1.20-alpine
+FROM golang:latest

 WORKDIR /app
@@ -8,6 +8,7 @@ RUN go mod download
 COPY ./ ./

 WORKDIR /app/cmd/versitygw
+ENV CGO_ENABLED=0
 RUN go build -o versitygw

 FROM alpine:latest


@@ -85,6 +85,11 @@ up-posix:
 up-proxy:
 	docker compose --env-file .env.dev up proxy

+# Creates and runs S3 gateway to azurite instance in a docker container
+.PHONY: up-azurite
+up-azurite:
+	docker compose --env-file .env.dev up azurite azuritegw
+
 # Creates and runs both S3 gateway and proxy server instances in docker containers
 .PHONY: up-app
 up-app:


@@ -8,13 +8,18 @@
 [![Apache V2 License](https://img.shields.io/badge/license-Apache%20V2-blue.svg)](https://github.com/versity/versitygw/blob/main/LICENSE)

-**Current status:** Beta: Most clients functional, work in progress for more test coverage. Issue reports welcome.
+**Current status:** Ready for general testing, Issue reports welcome.

 **News:**<br>
+* New performance analysis article [https://github.com/versity/versitygw/wiki/Performance](https://github.com/versity/versitygw/wiki/Performance)

 See project [documentation](https://github.com/versity/versitygw/wiki) on the wiki.

 * Share filesystem directory via S3 protocol
 * Proxy S3 requests to S3 storage
 * Simple to deploy S3 server with a single command
-* Protocol compatibility allows common access to files via posix or S3
+* Protocol compatibility in `posix` allows common access to files via posix or S3

 Versity Gateway, a simple to use tool for seamless inline translation between AWS S3 object commands and storage systems. The Versity Gateway bridges the gap between S3-reliant applications and other storage systems, enabling enhanced compatibility and integration while offering exceptional scalability.


@@ -270,7 +270,7 @@ func (s *IAMServiceInternal) storeIAM(update UpdateAcctFunc) error {
 		// reset retries on successful read
 		retries = 0

-		err = os.Remove(iamFile)
+		err = os.Remove(fname)
 		if errors.Is(err, fs.ErrNotExist) {
 			// racing with someone else updating
 			// keep retrying after backoff

backend/azure/azure.go Normal file

@@ -0,0 +1,986 @@
// Copyright 2023 Versity Software
// This file is licensed under the Apache License, Version 2.0
// (the "License"); you may not use this file except in compliance
// with the License. You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing,
// software distributed under the License is distributed on an
// "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
// KIND, either express or implied. See the License for the
// specific language governing permissions and limitations
// under the License.
package azure

import (
    "bytes"
    "context"
    "encoding/base64"
    "encoding/binary"
    "encoding/json"
    "fmt"
    "io"
    "math"
    "os"
    "strconv"
    "strings"
    "time"

    "github.com/Azure/azure-sdk-for-go/sdk/azcore/streaming"
    "github.com/Azure/azure-sdk-for-go/sdk/azidentity"
    "github.com/Azure/azure-sdk-for-go/sdk/storage/azblob"
    "github.com/Azure/azure-sdk-for-go/sdk/storage/azblob/blob"
    "github.com/Azure/azure-sdk-for-go/sdk/storage/azblob/blockblob"
    "github.com/Azure/azure-sdk-for-go/sdk/storage/azblob/container"
    "github.com/aws/aws-sdk-go-v2/service/s3"
    "github.com/aws/aws-sdk-go-v2/service/s3/types"
    "github.com/versity/versitygw/auth"
    "github.com/versity/versitygw/backend"
    "github.com/versity/versitygw/s3err"
    "github.com/versity/versitygw/s3response"
)

// When getting container metadata with GetProperties method the sdk returns
// the first letter capital, when accessing the metadata after listing the containers
// it returns the first letter lower
type aclKey string

const aclKeyCapital aclKey = "Acl"
const aclKeyLower aclKey = "acl"

type Azure struct {
    backend.BackendUnsupported

    client         *azblob.Client
    sharedkeyCreds *azblob.SharedKeyCredential
    defaultCreds   *azidentity.DefaultAzureCredential
    serviceURL     string
    sasToken       string
}

var _ backend.Backend = &Azure{}

func New(accountName, accountKey, serviceURL, sasToken string) (*Azure, error) {
    url := serviceURL
    if serviceURL == "" && accountName != "" {
        // if not otherwise specified, use the typical form:
        // http(s)://<account>.blob.core.windows.net/
        url = fmt.Sprintf("https://%s.blob.core.windows.net/", accountName)
    }

    if sasToken != "" {
        client, err := azblob.NewClientWithNoCredential(url+"?"+sasToken, nil)
        if err != nil {
            return nil, fmt.Errorf("init client: %w", err)
        }
        return &Azure{client: client, serviceURL: serviceURL, sasToken: sasToken}, nil
    }

    if accountName == "" {
        // if account name not provided, try to get from env var
        accountName = os.Getenv("AZURE_CLIENT_ID")
    }

    if accountName == "" || accountKey == "" {
        cred, err := azidentity.NewDefaultAzureCredential(nil)
        if err != nil {
            return nil, fmt.Errorf("init default credentials: %w", err)
        }
        client, err := azblob.NewClient(url, cred, nil)
        if err != nil {
            return nil, fmt.Errorf("init client: %w", err)
        }
        return &Azure{client: client, serviceURL: url, defaultCreds: cred}, nil
    }

    cred, err := azblob.NewSharedKeyCredential(accountName, accountKey)
    if err != nil {
        return nil, fmt.Errorf("init credentials: %w", err)
    }

    client, err := azblob.NewClientWithSharedKeyCredential(url, cred, nil)
    if err != nil {
        return nil, fmt.Errorf("init client: %w", err)
    }

    return &Azure{client: client, serviceURL: url, sharedkeyCreds: cred}, nil
}

func (az *Azure) Shutdown() {}

func (az *Azure) String() string {
    return "Azure Blob Gateway"
}

func (az *Azure) CreateBucket(ctx context.Context, input *s3.CreateBucketInput, acl []byte) error {
    meta := map[string]*string{
        string(aclKeyCapital): backend.GetStringPtr(string(acl)),
    }
    _, err := az.client.CreateContainer(ctx, *input.Bucket, &container.CreateOptions{Metadata: meta})
    return azureErrToS3Err(err)
}

func (az *Azure) ListBuckets(ctx context.Context, owner string, isAdmin bool) (s3response.ListAllMyBucketsResult, error) {
    pager := az.client.NewListContainersPager(nil)

    var buckets []s3response.ListAllMyBucketsEntry
    var result s3response.ListAllMyBucketsResult

    for pager.More() {
        resp, err := pager.NextPage(ctx)
        if err != nil {
            return result, azureErrToS3Err(err)
        }
        for _, v := range resp.ContainerItems {
            buckets = append(buckets, s3response.ListAllMyBucketsEntry{
                Name: *v.Name,
                // TODO: using modification date here instead of creation, is that ok?
                CreationDate: *v.Properties.LastModified,
            })
        }
    }

    result.Buckets.Bucket = buckets
    result.Owner.ID = owner

    return result, nil
}

func (az *Azure) HeadBucket(ctx context.Context, input *s3.HeadBucketInput) (*s3.HeadBucketOutput, error) {
    client, err := az.getContainerClient(*input.Bucket)
    if err != nil {
        return nil, err
    }
    _, err = client.GetProperties(ctx, nil)
    if err != nil {
        return nil, azureErrToS3Err(err)
    }
    return &s3.HeadBucketOutput{}, nil
}

func (az *Azure) DeleteBucket(ctx context.Context, input *s3.DeleteBucketInput) error {
    _, err := az.client.DeleteContainer(ctx, *input.Bucket, nil)
    return azureErrToS3Err(err)
}

func (az *Azure) PutObject(ctx context.Context, po *s3.PutObjectInput) (string, error) {
    tags, err := parseTags(po.Tagging)
    if err != nil {
        return "", err
    }
    uploadResp, err := az.client.UploadStream(ctx, *po.Bucket, *po.Key, po.Body, &blockblob.UploadStreamOptions{
        Metadata: parseMetadata(po.Metadata),
        Tags:     tags,
    })
    if err != nil {
        return "", azureErrToS3Err(err)
    }
    return string(*uploadResp.ETag), nil
}

func (az *Azure) GetObject(ctx context.Context, input *s3.GetObjectInput, writer io.Writer) (*s3.GetObjectOutput, error) {
    var opts *azblob.DownloadStreamOptions
    if *input.Range != "" {
        offset, count, err := parseRange(*input.Range)
        if err != nil {
            return nil, err
        }
        opts = &azblob.DownloadStreamOptions{
            Range: blob.HTTPRange{
                Count:  count,
                Offset: offset,
            },
        }
    }
    blobDownloadResponse, err := az.client.DownloadStream(ctx, *input.Bucket, *input.Key, opts)
    if err != nil {
        return nil, azureErrToS3Err(err)
    }
    defer blobDownloadResponse.Body.Close()

    _, err = io.Copy(writer, blobDownloadResponse.Body)
    if err != nil {
        return nil, fmt.Errorf("copy data: %w", err)
    }

    var tagcount int32
    if blobDownloadResponse.TagCount != nil {
        tagcount = int32(*blobDownloadResponse.TagCount)
    }

    return &s3.GetObjectOutput{
        AcceptRanges:    input.Range,
        ContentLength:   blobDownloadResponse.ContentLength,
        ContentEncoding: blobDownloadResponse.ContentEncoding,
        ContentType:     blobDownloadResponse.ContentType,
        ETag:            (*string)(blobDownloadResponse.ETag),
        LastModified:    blobDownloadResponse.LastModified,
        Metadata:        parseAzMetadata(blobDownloadResponse.Metadata),
        TagCount:        &tagcount,
        ContentRange:    blobDownloadResponse.ContentRange,
    }, nil
}

func (az *Azure) HeadObject(ctx context.Context, input *s3.HeadObjectInput) (*s3.HeadObjectOutput, error) {
    client, err := az.getBlobClient(*input.Bucket, *input.Key)
    if err != nil {
        return nil, err
    }

    resp, err := client.GetProperties(ctx, nil)
    if err != nil {
        return nil, azureErrToS3Err(err)
    }

    return &s3.HeadObjectOutput{
        AcceptRanges:       resp.AcceptRanges,
        ContentLength:      resp.ContentLength,
        ContentType:        resp.ContentType,
        ContentEncoding:    resp.ContentEncoding,
        ContentLanguage:    resp.ContentLanguage,
        ContentDisposition: resp.ContentDisposition,
        ETag:               (*string)(resp.ETag),
        LastModified:       resp.LastModified,
        Metadata:           parseAzMetadata(resp.Metadata),
        Expires:            resp.ExpiresOn,
    }, nil
}

func (az *Azure) ListObjects(ctx context.Context, input *s3.ListObjectsInput) (*s3.ListObjectsOutput, error) {
    pager := az.client.NewListBlobsFlatPager(*input.Bucket, &azblob.ListBlobsFlatOptions{
        Marker:     input.Marker,
        MaxResults: input.MaxKeys,
        Prefix:     input.Prefix,
    })

    var objects []types.Object
    var nextMarker *string
    var isTruncated bool
    var maxKeys int32 = math.MaxInt32

    if input.MaxKeys != nil {
        maxKeys = *input.MaxKeys
    }

Pager:
    for pager.More() {
        resp, err := pager.NextPage(ctx)
        if err != nil {
            return nil, azureErrToS3Err(err)
        }
        for _, v := range resp.Segment.BlobItems {
            if nextMarker == nil && *resp.NextMarker != "" {
                nextMarker = resp.NextMarker
                isTruncated = true
            }
            if len(objects) >= int(maxKeys) {
                break Pager
            }
            objects = append(objects, types.Object{
                ETag:         (*string)(v.Properties.ETag),
                Key:          v.Name,
                LastModified: v.Properties.LastModified,
                Size:         v.Properties.ContentLength,
                StorageClass: types.ObjectStorageClass(*v.Properties.AccessTier),
            })
        }
    }

    // TODO: generate common prefixes when appropriate

    return &s3.ListObjectsOutput{
        Contents:    objects,
        Marker:      input.Marker,
        MaxKeys:     input.MaxKeys,
        Name:        input.Bucket,
        NextMarker:  nextMarker,
        Prefix:      input.Prefix,
        IsTruncated: &isTruncated,
    }, nil
}

func (az *Azure) ListObjectsV2(ctx context.Context, input *s3.ListObjectsV2Input) (*s3.ListObjectsV2Output, error) {
    pager := az.client.NewListBlobsFlatPager(*input.Bucket, &azblob.ListBlobsFlatOptions{
        Marker:     input.ContinuationToken,
        MaxResults: input.MaxKeys,
        Prefix:     input.Prefix,
    })

    var objects []types.Object
    var nextMarker *string
    var isTruncated bool
    var maxKeys int32 = math.MaxInt32

    if input.MaxKeys != nil {
        maxKeys = *input.MaxKeys
    }

Pager:
    for pager.More() {
        resp, err := pager.NextPage(ctx)
        if err != nil {
            return nil, azureErrToS3Err(err)
        }
        for _, v := range resp.Segment.BlobItems {
            if nextMarker == nil && *resp.NextMarker != "" {
                nextMarker = resp.NextMarker
                isTruncated = true
            }
            if len(objects) >= int(maxKeys) {
                break Pager
            }
            nextMarker = resp.NextMarker
            objects = append(objects, types.Object{
                ETag:         (*string)(v.Properties.ETag),
                Key:          v.Name,
                LastModified: v.Properties.LastModified,
                Size:         v.Properties.ContentLength,
                StorageClass: types.ObjectStorageClass(*v.Properties.AccessTier),
            })
        }
    }

    // TODO: generate common prefixes when appropriate

    return &s3.ListObjectsV2Output{
        Contents:              objects,
        ContinuationToken:     input.ContinuationToken,
        MaxKeys:               input.MaxKeys,
        Name:                  input.Bucket,
        NextContinuationToken: nextMarker,
        Prefix:                input.Prefix,
        IsTruncated:           &isTruncated,
    }, nil
}

func (az *Azure) DeleteObject(ctx context.Context, input *s3.DeleteObjectInput) error {
    _, err := az.client.DeleteBlob(ctx, *input.Bucket, *input.Key, nil)
    return azureErrToS3Err(err)
}

func (az *Azure) DeleteObjects(ctx context.Context, input *s3.DeleteObjectsInput) (s3response.DeleteObjectsResult, error) {
    delResult, errs := []types.DeletedObject{}, []types.Error{}
    for _, obj := range input.Delete.Objects {
        err := az.DeleteObject(ctx, &s3.DeleteObjectInput{
            Bucket: input.Bucket,
            Key:    obj.Key,
        })
        if err == nil {
            delResult = append(delResult, types.DeletedObject{Key: obj.Key})
        } else {
            serr, ok := err.(s3err.APIError)
            if ok {
                errs = append(errs, types.Error{
                    Key:     obj.Key,
                    Code:    &serr.Code,
                    Message: &serr.Description,
                })
            } else {
                errs = append(errs, types.Error{
                    Key:     obj.Key,
                    Code:    backend.GetStringPtr("InternalError"),
                    Message: backend.GetStringPtr(err.Error()),
                })
            }
        }
    }

    return s3response.DeleteObjectsResult{
        Deleted: delResult,
        Error:   errs,
    }, nil
}

func (az *Azure) CopyObject(ctx context.Context, input *s3.CopyObjectInput) (*s3.CopyObjectOutput, error) {
    containerClient, err := az.getContainerClient(*input.Bucket)
    if err != nil {
        return nil, err
    }
    res, err := containerClient.GetProperties(ctx, &container.GetPropertiesOptions{})
    if err != nil {
        return nil, azureErrToS3Err(err)
    }

    dstContainerAcl, err := getAclFromMetadata(res.Metadata, aclKeyCapital)
    if err != nil {
        return nil, err
    }

    err = auth.VerifyACL(*dstContainerAcl, *input.ExpectedBucketOwner, types.PermissionWrite, false)
    if err != nil {
        return nil, err
    }

    if strings.Join([]string{*input.Bucket, *input.Key}, "/") == *input.CopySource && isMetaSame(res.Metadata, input.Metadata) {
        return nil, s3err.GetAPIError(s3err.ErrInvalidCopyDest)
    }

    tags, err := parseTags(input.Tagging)
    if err != nil {
        return nil, err
    }

    client, err := az.getBlobClient(*input.Bucket, *input.Key)
    if err != nil {
        return nil, err
    }

    resp, err := client.CopyFromURL(ctx, az.serviceURL+"/"+*input.CopySource, &blob.CopyFromURLOptions{
        BlobTags: tags,
        Metadata: parseMetadata(input.Metadata),
    })
    if err != nil {
        return nil, azureErrToS3Err(err)
    }

    return &s3.CopyObjectOutput{
        CopyObjectResult: &types.CopyObjectResult{
            ETag:         (*string)(resp.ETag),
            LastModified: resp.LastModified,
        },
    }, nil
}

func (az *Azure) PutObjectTagging(ctx context.Context, bucket, object string, tags map[string]string) error {
    client, err := az.getBlobClient(bucket, object)
    if err != nil {
        return err
    }
    _, err = client.SetTags(ctx, tags, nil)
    if err != nil {
        return azureErrToS3Err(err)
    }
    return nil
}

func (az *Azure) GetObjectTagging(ctx context.Context, bucket, object string) (map[string]string, error) {
    client, err := az.getBlobClient(bucket, object)
    if err != nil {
        return nil, err
    }
    tags, err := client.GetTags(ctx, nil)
    if err != nil {
        return nil, azureErrToS3Err(err)
    }

    return parseAzTags(tags.BlobTagSet), nil
}

func (az *Azure) DeleteObjectTagging(ctx context.Context, bucket, object string) error {
    client, err := az.getBlobClient(bucket, object)
    if err != nil {
        return err
    }
    _, err = client.SetTags(ctx, map[string]string{}, nil)
    if err != nil {
        return azureErrToS3Err(err)
    }
    return nil
}

func (az *Azure) CreateMultipartUpload(ctx context.Context, input *s3.CreateMultipartUploadInput) (*s3.CreateMultipartUploadOutput, error) {
    // Multipart upload starts with UploadPart action so there is no
    // correlating function for creating multipart uploads.
    // TODO: since azure only allows for a single multipart upload
    // for an object name at a time, we need to send an error back to
    // the client if there is already an outstanding upload in progress
    // for this object.
    // Alternatively, is there something we can do with upload ids to
    // keep concurrent uploads unique still? I haven't found an efficient
    // way to rename final objects.
    return &s3.CreateMultipartUploadOutput{
        Bucket:   input.Bucket,
        Key:      input.Key,
        UploadId: input.Key,
    }, nil
}

// Each part is translated into an uncommitted block in a newly created blob in staging area
func (az *Azure) UploadPart(ctx context.Context, input *s3.UploadPartInput) (etag string, err error) {
    client, err := az.getBlockBlobClient(*input.Bucket, *input.Key)
    if err != nil {
        return "", err
    }

    // TODO: request streamable version of StageBlock()
    // (*blockblob.Client).StageBlock does not have a streamable
    // version of this function at this time, so we need to cache
    // the body in memory to create an io.ReadSeekCloser
    rdr, err := getReadSeekCloser(input.Body)
    if err != nil {
        return "", err
    }

    // block id serves as etag here
    etag = blockIDInt32ToBase64(*input.PartNumber)
    _, err = client.StageBlock(ctx, etag, rdr, nil)
    if err != nil {
        return "", parseMpError(err)
    }

    return etag, nil
}

func (az *Azure) UploadPartCopy(ctx context.Context, input *s3.UploadPartCopyInput) (s3response.CopyObjectResult, error) {
    client, err := az.getBlockBlobClient(*input.Bucket, *input.Key)
    if err != nil {
        return s3response.CopyObjectResult{}, nil
    }

    //TODO: handle block copy by range
    //TODO: the action returns not implemented on azurite, maybe in production this will work?
    // UploadId here is the source block id
    _, err = client.StageBlockFromURL(ctx, *input.UploadId, *input.CopySource, nil)
    if err != nil {
        return s3response.CopyObjectResult{}, parseMpError(err)
    }

    return s3response.CopyObjectResult{}, nil
}

// Lists all uncommitted parts from the blob
func (az *Azure) ListParts(ctx context.Context, input *s3.ListPartsInput) (s3response.ListPartsResult, error) {
    client, err := az.getBlockBlobClient(*input.Bucket, *input.Key)
    if err != nil {
        return s3response.ListPartsResult{}, nil
    }

    resp, err := client.GetBlockList(ctx, blockblob.BlockListTypeUncommitted, nil)
    if err != nil {
        return s3response.ListPartsResult{}, parseMpError(err)
    }

    var partNumberMarker int
    var nextPartNumberMarker int
    var maxParts int32 = math.MaxInt32
    var isTruncated bool

    if *input.PartNumberMarker != "" {
        partNumberMarker, err = strconv.Atoi(*input.PartNumberMarker)
        if err != nil {
            return s3response.ListPartsResult{}, s3err.GetAPIError(s3err.ErrInvalidPartNumberMarker)
        }
    }
    if input.MaxParts != nil {
        maxParts = *input.MaxParts
    }

    parts := []s3response.Part{}
    for _, el := range resp.BlockList.UncommittedBlocks {
        partNumber, err := decodeBlockId(*el.Name)
        if err != nil {
            return s3response.ListPartsResult{}, err
        }
        if partNumberMarker != 0 && partNumberMarker < partNumber {
            continue
        }
        if len(parts) >= int(maxParts) {
            nextPartNumberMarker = partNumber
            isTruncated = true
            break
        }
        parts = append(parts, s3response.Part{
            Size:         *el.Size,
            ETag:         *el.Name,
            PartNumber:   partNumber,
            LastModified: time.Now().Format(backend.RFC3339TimeFormat),
        })
    }

    return s3response.ListPartsResult{
        Bucket:               *input.Bucket,
        Key:                  *input.Key,
        Parts:                parts,
        NextPartNumberMarker: nextPartNumberMarker,
        PartNumberMarker:     partNumberMarker,
        IsTruncated:          isTruncated,
        MaxParts:             int(maxParts),
    }, nil
}

// Lists all block blobs, which have uncommitted blocks
func (az *Azure) ListMultipartUploads(ctx context.Context, input *s3.ListMultipartUploadsInput) (s3response.ListMultipartUploadsResult, error) {
    client, err := az.getContainerClient(*input.Bucket)
    if err != nil {
        return s3response.ListMultipartUploadsResult{}, err
    }

    pager := client.NewListBlobsFlatPager(&container.ListBlobsFlatOptions{
        Include: container.ListBlobsInclude{UncommittedBlobs: true},
        Marker:  input.KeyMarker,
        Prefix:  input.Prefix,
    })

    var maxUploads int32
    if input.MaxUploads != nil {
        maxUploads = *input.MaxUploads
    }
    isTruncated := false
    nextKeyMarker := ""
    uploads := []s3response.Upload{}
    breakFlag := false

    for pager.More() {
        resp, err := pager.NextPage(ctx)
        if err != nil {
            return s3response.ListMultipartUploadsResult{}, azureErrToS3Err(err)
        }
        for _, el := range resp.Segment.BlobItems {
            if el.Properties.AccessTier == nil {
                if len(uploads) >= int(*input.MaxUploads) && maxUploads != 0 {
                    breakFlag = true
                    nextKeyMarker = *el.Name
                    isTruncated = true
                    break
                }
                uploads = append(uploads, s3response.Upload{
                    Key:       *el.Name,
                    Initiated: el.Properties.CreationTime.Format(backend.RFC3339TimeFormat),
                })
            }
        }
        if breakFlag {
            break
        }
    }

    return s3response.ListMultipartUploadsResult{
        Uploads:       uploads,
        Bucket:        *input.Bucket,
        KeyMarker:     *input.KeyMarker,
        NextKeyMarker: nextKeyMarker,
        MaxUploads:    int(maxUploads),
        Prefix:        *input.Prefix,
        IsTruncated:   isTruncated,
        Delimiter:     *input.Delimiter,
    }, nil
}

// Deletes the block blob with committed/uncommitted blocks
func (az *Azure) AbortMultipartUpload(ctx context.Context, input *s3.AbortMultipartUploadInput) error {
    // TODO: need to verify this blob has uncommitted blocks?
    _, err := az.client.DeleteBlob(ctx, *input.Bucket, *input.Key, nil)
    if err != nil {
        return parseMpError(err)
    }
    return nil
}

// Commits all the uncommitted blocks inside the block blob
// And moves the block blob from staging area into the blobs list
// It indicates the end of the multipart upload
func (az *Azure) CompleteMultipartUpload(ctx context.Context, input *s3.CompleteMultipartUploadInput) (*s3.CompleteMultipartUploadOutput, error) {
    client, err := az.getBlockBlobClient(*input.Bucket, *input.Key)
    if err != nil {
        return nil, err
    }

    blockIds := []string{}
    for _, el := range input.MultipartUpload.Parts {
        blockIds = append(blockIds, *el.ETag)
    }

    resp, err := client.CommitBlockList(ctx, blockIds, nil)
    if err != nil {
        return nil, parseMpError(err)
    }

    return &s3.CompleteMultipartUploadOutput{
        Bucket: input.Bucket,
        Key:    input.Key,
        ETag:   (*string)(resp.ETag),
    }, nil
}

func (az *Azure) PutBucketAcl(ctx context.Context, bucket string, data []byte) error {
    client, err := az.getContainerClient(bucket)
    if err != nil {
        return err
    }
    meta := map[string]*string{
        string(aclKeyCapital): backend.GetStringPtr(string(data)),
    }
    _, err = client.SetMetadata(ctx, &container.SetMetadataOptions{
        Metadata: meta,
    })
    if err != nil {
        return azureErrToS3Err(err)
    }
    return nil
}

func (az *Azure) GetBucketAcl(ctx context.Context, input *s3.GetBucketAclInput) ([]byte, error) {
    client, err := az.getContainerClient(*input.Bucket)
    if err != nil {
        return nil, err
    }
    props, err := client.GetProperties(ctx, nil)
    if err != nil {
        return nil, azureErrToS3Err(err)
    }

    aclPtr, ok := props.Metadata[string(aclKeyCapital)]
    if !ok {
        return nil, s3err.GetAPIError(s3err.ErrInternalError)
    }

    return []byte(*aclPtr), nil
}

func (az *Azure) ChangeBucketOwner(ctx context.Context, bucket, newOwner string) error {
    client, err := az.getContainerClient(bucket)
    if err != nil {
        return err
    }
    props, err := client.GetProperties(ctx, nil)
    if err != nil {
        return azureErrToS3Err(err)
    }

    acl, err := getAclFromMetadata(props.Metadata, aclKeyCapital)
    if err != nil {
        return err
    }

    acl.Owner = newOwner

    newAcl, err := json.Marshal(acl)
    if err != nil {
        return fmt.Errorf("marshal acl: %w", err)
    }

    err = az.PutBucketAcl(ctx, bucket, newAcl)
    if err != nil {
        return err
    }

    return nil
}

// The action actually returns the containers owned by the user who initialized the gateway
// TODO: Not sure if there's a way to list all the containers and owners?
func (az *Azure) ListBucketsAndOwners(ctx context.Context) (buckets []s3response.Bucket, err error) {
    pager := az.client.NewListContainersPager(nil)

    for pager.More() {
        resp, err := pager.NextPage(ctx)
        if err != nil {
            return buckets, azureErrToS3Err(err)
        }
        for _, v := range resp.ContainerItems {
            acl, err := getAclFromMetadata(v.Metadata, aclKeyLower)
            if err != nil {
                return buckets, err
            }
            buckets = append(buckets, s3response.Bucket{
                Name:  *v.Name,
                Owner: acl.Owner,
            })
        }
    }

    return buckets, nil
}

func (az *Azure) getContainerURL(cntr string) string {
    return fmt.Sprintf("%v/%v", az.serviceURL, cntr)
}

func (az *Azure) getBlobURL(cntr, blb string) string {
    return fmt.Sprintf("%v/%v", az.getContainerURL(cntr), blb)
}

func (az *Azure) getBlobClient(cntr, blb string) (*blob.Client, error) {
    blobURL := az.getBlobURL(cntr, blb)
    if az.defaultCreds != nil {
        return blob.NewClient(blobURL, az.defaultCreds, nil)
    }
    if az.sasToken != "" {
        return blob.NewClientWithNoCredential(blobURL+"?"+az.sasToken, nil)
    }
    return blob.NewClientWithSharedKeyCredential(blobURL, az.sharedkeyCreds, nil)
}

func (az *Azure) getContainerClient(cntr string) (*container.Client, error) {
    containerURL := az.getContainerURL(cntr)
    if az.defaultCreds != nil {
        return container.NewClient(containerURL, az.defaultCreds, nil)
    }
    if az.sasToken != "" {
        return container.NewClientWithNoCredential(containerURL+"?"+az.sasToken, nil)
    }
    return container.NewClientWithSharedKeyCredential(containerURL, az.sharedkeyCreds, nil)
}

func (az *Azure) getBlockBlobClient(cntr, blb string) (*blockblob.Client, error) {
    blobURL := az.getBlobURL(cntr, blb)
    if az.defaultCreds != nil {
        return blockblob.NewClient(blobURL, az.defaultCreds, nil)
    }
    if az.sasToken != "" {
        return blockblob.NewClientWithNoCredential(blobURL+"?"+az.sasToken, nil)
    }
    return blockblob.NewClientWithSharedKeyCredential(blobURL, az.sharedkeyCreds, nil)
}

func parseMetadata(m map[string]string) map[string]*string {
    if m == nil {
        return nil
    }

    meta := make(map[string]*string)

    for k, v := range m {
        val := v
        meta[k] = &val
    }
    return meta
}

func parseAzMetadata(m map[string]*string) map[string]string {
    if m == nil {
        return nil
    }

    meta := make(map[string]string)

    for k, v := range m {
        meta[k] = *v
    }
    return meta
}

func parseTags(tagstr *string) (map[string]string, error) {
    tagsStr := getString(tagstr)
    tags := make(map[string]string)

    if tagsStr != "" {
        tagParts := strings.Split(tagsStr, "&")
        for _, prt := range tagParts {
            p := strings.Split(prt, "=")
            if len(p) != 2 {
                return nil, s3err.GetAPIError(s3err.ErrInvalidTag)
            }
            tags[p[0]] = p[1]
        }
    }

    return tags, nil
}

func parseAzTags(tagSet []*blob.Tags) map[string]string {
    tags := map[string]string{}
    for _, tag := range tagSet {
        tags[*tag.Key] = *tag.Value
    }

    return tags
}

func getString(str *string) string {
    if str == nil {
        return ""
    }
    return *str
}

// Converts io.Reader into io.ReadSeekCloser
func getReadSeekCloser(input io.Reader) (io.ReadSeekCloser, error) {
    var buffer bytes.Buffer
    _, err := io.Copy(&buffer, input)
    if err != nil {
        return nil, err
    }

    return streaming.NopCloser(bytes.NewReader(buffer.Bytes())), nil
}

// Creates a new Base64 encoded block id from a 32 bit integer
func blockIDInt32ToBase64(blockID int32) string {
    binaryBlockID := &[4]byte{} // All block IDs are 4 bytes long
    binary.LittleEndian.PutUint32(binaryBlockID[:], uint32(blockID))
    return base64.StdEncoding.EncodeToString(binaryBlockID[:])
}

// Decodes Base64 encoded string to integer
func decodeBlockId(blockID string) (int, error) {
    slice, err := base64.StdEncoding.DecodeString(blockID)
    if err != nil {
        return 0, nil
    }

    return int(binary.LittleEndian.Uint32(slice)), nil
}

func parseRange(rg string) (offset, count int64, err error) {
    rangeKv := strings.Split(rg, "=")

    if len(rangeKv) < 2 {
        return 0, 0, s3err.GetAPIError(s3err.ErrInvalidRange)
    }

    bRange := strings.Split(rangeKv[1], "-")
    if len(bRange) < 1 || len(bRange) > 2 {
        return 0, 0, s3err.GetAPIError(s3err.ErrInvalidRange)
    }

    offset, err = strconv.ParseInt(bRange[0], 10, 64)
    if err != nil {
        return 0, 0, s3err.GetAPIError(s3err.ErrInvalidRange)
    }

    if len(bRange) == 1 || bRange[1] == "" {
        return offset, count, nil
    }

    count, err = strconv.ParseInt(bRange[1], 10, 64)
    if err != nil {
        return 0, 0, s3err.GetAPIError(s3err.ErrInvalidRange)
    }

    if count < offset {
        return 0, 0, s3err.GetAPIError(s3err.ErrInvalidRange)
    }

    return offset, count - offset + 1, nil
}

func getAclFromMetadata(meta map[string]*string, key aclKey) (*auth.ACL, error) {
    aclPtr, ok := meta[string(key)]
    if !ok {
        return nil, s3err.GetAPIError(s3err.ErrInternalError)
    }

    var acl auth.ACL
    err := json.Unmarshal([]byte(*aclPtr), &acl)
    if err != nil {
        return nil, fmt.Errorf("unmarshal acl: %w", err)
    }

    return &acl, nil
}

func isMetaSame(azMeta map[string]*string, awsMeta map[string]string) bool {
    if len(azMeta) != len(awsMeta)+1 {
        return false
    }

    for key, val := range azMeta {
        if key == string(aclKeyCapital) || key == string(aclKeyLower) {
            continue
        }
        awsVal, ok := awsMeta[key]
        if !ok || awsVal != *val {
            return false
        }
    }

    return true
}

backend/azure/err.go Normal file

@@ -0,0 +1,63 @@
// Copyright 2023 Versity Software
// This file is licensed under the Apache License, Version 2.0
// (the "License"); you may not use this file except in compliance
// with the License. You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing,
// software distributed under the License is distributed on an
// "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
// KIND, either express or implied. See the License for the
// specific language governing permissions and limitations
// under the License.
package azure

import (
    "errors"

    "github.com/Azure/azure-sdk-for-go/sdk/azcore"

    "github.com/versity/versitygw/s3err"
)

// Parses azure ResponseError into AWS APIError
func azureErrToS3Err(apiErr error) error {
    var azErr *azcore.ResponseError
    if !errors.As(apiErr, &azErr) {
        return apiErr
    }

    return azErrToS3err(azErr)
}

func azErrToS3err(azErr *azcore.ResponseError) s3err.APIError {
    switch azErr.ErrorCode {
    case "ContainerAlreadyExists":
        return s3err.GetAPIError(s3err.ErrBucketAlreadyExists)
    case "InvalidResourceName", "ContainerNotFound":
        return s3err.GetAPIError(s3err.ErrNoSuchBucket)
    case "BlobNotFound":
        return s3err.GetAPIError(s3err.ErrNoSuchKey)
    case "TagsTooLarge":
        return s3err.GetAPIError(s3err.ErrInvalidTag)
    case "Requested Range Not Satisfiable":
        return s3err.GetAPIError(s3err.ErrInvalidRange)
    }
    return s3err.APIError{
        Code:           azErr.ErrorCode,
        Description:    azErr.RawResponse.Status,
        HTTPStatusCode: azErr.StatusCode,
    }
}

func parseMpError(mpErr error) error {
    err := azureErrToS3Err(mpErr)
    serr, ok := err.(s3err.APIError)
    if !ok || serr.Code != "NoSuchKey" {
        return mpErr
    }
    return s3err.GetAPIError(s3err.ErrNoSuchUpload)
}


@@ -35,7 +35,7 @@ type Backend interface {
 	ListBuckets(_ context.Context, owner string, isAdmin bool) (s3response.ListAllMyBucketsResult, error)
 	HeadBucket(context.Context, *s3.HeadBucketInput) (*s3.HeadBucketOutput, error)
 	GetBucketAcl(context.Context, *s3.GetBucketAclInput) ([]byte, error)
-	CreateBucket(context.Context, *s3.CreateBucketInput) error
+	CreateBucket(_ context.Context, _ *s3.CreateBucketInput, defaultACL []byte) error
 	PutBucketAcl(_ context.Context, bucket string, data []byte) error
 	DeleteBucket(context.Context, *s3.DeleteBucketInput) error
@@ -65,6 +65,11 @@ type Backend interface {
 	RestoreObject(context.Context, *s3.RestoreObjectInput) error
 	SelectObjectContent(ctx context.Context, input *s3.SelectObjectContentInput) func(w *bufio.Writer)

+	// bucket tagging operations
+	GetBucketTagging(_ context.Context, bucket string) (map[string]string, error)
+	PutBucketTagging(_ context.Context, bucket string, tags map[string]string) error
+	DeleteBucketTagging(_ context.Context, bucket string) error
+
 	// object tags operations
 	GetObjectTagging(_ context.Context, bucket, object string) (map[string]string, error)
 	PutObjectTagging(_ context.Context, bucket, object string, tags map[string]string) error
@@ -95,7 +100,7 @@ func (BackendUnsupported) HeadBucket(context.Context, *s3.HeadBucketInput) (*s3.
 func (BackendUnsupported) GetBucketAcl(context.Context, *s3.GetBucketAclInput) ([]byte, error) {
 	return nil, s3err.GetAPIError(s3err.ErrNotImplemented)
 }
-func (BackendUnsupported) CreateBucket(context.Context, *s3.CreateBucketInput) error {
+func (BackendUnsupported) CreateBucket(context.Context, *s3.CreateBucketInput, []byte) error {
 	return s3err.GetAPIError(s3err.ErrNotImplemented)
 }
 func (BackendUnsupported) PutBucketAcl(_ context.Context, bucket string, data []byte) error {
@@ -179,6 +184,16 @@ func (BackendUnsupported) SelectObjectContent(ctx context.Context, input *s3.Sel
 	}
 }

+func (BackendUnsupported) GetBucketTagging(_ context.Context, bucket string) (map[string]string, error) {
+	return nil, s3err.GetAPIError(s3err.ErrNotImplemented)
+}
+func (BackendUnsupported) PutBucketTagging(_ context.Context, bucket string, tags map[string]string) error {
+	return s3err.GetAPIError(s3err.ErrNotImplemented)
+}
+func (BackendUnsupported) DeleteBucketTagging(_ context.Context, bucket string) error {
+	return s3err.GetAPIError(s3err.ErrNotImplemented)
+}
 func (BackendUnsupported) GetObjectTagging(_ context.Context, bucket, object string) (map[string]string, error) {
 	return nil, s3err.GetAPIError(s3err.ErrNotImplemented)
 }


@@ -161,13 +161,12 @@ func (p *Posix) HeadBucket(_ context.Context, input *s3.HeadBucketInput) (*s3.He
return &s3.HeadBucketOutput{}, nil
}
func (p *Posix) CreateBucket(_ context.Context, input *s3.CreateBucketInput) error {
func (p *Posix) CreateBucket(_ context.Context, input *s3.CreateBucketInput, acl []byte) error {
if input.Bucket == nil {
return s3err.GetAPIError(s3err.ErrInvalidBucketName)
}
bucket := *input.Bucket
owner := string(input.ObjectOwnership)
err := os.Mkdir(bucket, 0777)
if err != nil && os.IsExist(err) {
@@ -177,13 +176,7 @@ func (p *Posix) CreateBucket(_ context.Context, input *s3.CreateBucketInput) err
return fmt.Errorf("mkdir bucket: %w", err)
}
acl := auth.ACL{ACL: "private", Owner: owner, Grantees: []auth.Grantee{}}
jsonACL, err := json.Marshal(acl)
if err != nil {
return fmt.Errorf("marshal acl: %w", err)
}
if err := xattr.Set(bucket, aclkey, jsonACL); err != nil {
if err := xattr.Set(bucket, aclkey, acl); err != nil {
return fmt.Errorf("set acl: %w", err)
}
@@ -1647,7 +1640,15 @@ func (p *Posix) ListObjectsV2(_ context.Context, input *s3.ListObjectsV2Input) (
}
marker := ""
if input.ContinuationToken != nil {
marker = *input.ContinuationToken
if input.StartAfter != nil {
if *input.StartAfter > *input.ContinuationToken {
marker = *input.StartAfter
} else {
marker = *input.ContinuationToken
}
} else {
marker = *input.ContinuationToken
}
}
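// In effect: when a request carries both parameters, the marker resolves to
// whichever of StartAfter and ContinuationToken sorts later, since Go's >
// on strings compares bytewise (lexicographically). For example (made-up
// keys), StartAfter "photos/0007" with ContinuationToken "photos/0003"
// yields marker "photos/0007".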
delim := ""
if input.Delimiter != nil {
@@ -1727,6 +1728,57 @@ func (p *Posix) GetBucketAcl(_ context.Context, input *s3.GetBucketAclInput) ([]
return b, nil
}
func (p *Posix) PutBucketTagging(_ context.Context, bucket string, tags map[string]string) error {
_, err := os.Stat(bucket)
if errors.Is(err, fs.ErrNotExist) {
return s3err.GetAPIError(s3err.ErrNoSuchBucket)
}
if err != nil {
return fmt.Errorf("stat bucket: %w", err)
}
if tags == nil {
err = xattr.Remove(bucket, "user."+tagHdr)
if err != nil {
return fmt.Errorf("remove tags: %w", err)
}
return nil
}
b, err := json.Marshal(tags)
if err != nil {
return fmt.Errorf("marshal tags: %w", err)
}
err = xattr.Set(bucket, "user."+tagHdr, b)
if err != nil {
return fmt.Errorf("set tags: %w", err)
}
return nil
}
func (p *Posix) GetBucketTagging(_ context.Context, bucket string) (map[string]string, error) {
_, err := os.Stat(bucket)
if errors.Is(err, fs.ErrNotExist) {
return nil, s3err.GetAPIError(s3err.ErrNoSuchBucket)
}
if err != nil {
return nil, fmt.Errorf("stat bucket: %w", err)
}
tags, err := p.getXattrTags(bucket, "")
if err != nil {
return nil, err
}
return tags, nil
}
func (p *Posix) DeleteBucketTagging(ctx context.Context, bucket string) error {
return p.PutBucketTagging(ctx, bucket, nil)
}
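// (Illustrative sketch, not part of the change; assumes posix.go's existing
// imports of xattr, json, and fmt.) The tag set is stored as JSON under a
// single extended attribute, so reading it back is symmetric. readBucketTags
// is a hypothetical helper; tagAttr is assumed to be the same
// "user."-prefixed name written above ("user." + tagHdr).
func readBucketTags(bucket, tagAttr string) (map[string]string, error) {
	b, err := xattr.Get(bucket, tagAttr)
	if err != nil {
		return nil, fmt.Errorf("get tags: %w", err)
	}
	tags := make(map[string]string)
	if err := json.Unmarshal(b, &tags); err != nil {
		return nil, fmt.Errorf("unmarshal tags: %w", err)
	}
	return tags, nil
}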
func (p *Posix) GetObjectTagging(_ context.Context, bucket, object string) (map[string]string, error) {
_, err := os.Stat(bucket)
if errors.Is(err, fs.ErrNotExist) {

View File

@@ -17,6 +17,7 @@ package s3proxy
import (
"context"
"crypto/sha256"
"encoding/base64"
"encoding/hex"
"encoding/json"
"errors"
@@ -32,12 +33,13 @@ import (
"github.com/aws/aws-sdk-go-v2/service/s3"
"github.com/aws/aws-sdk-go-v2/service/s3/types"
"github.com/aws/smithy-go"
"github.com/versity/versitygw/auth"
"github.com/versity/versitygw/backend"
"github.com/versity/versitygw/s3err"
"github.com/versity/versitygw/s3response"
)
const aclKey string = "versitygwAcl"
type S3Proxy struct {
backend.BackendUnsupported
@@ -72,9 +74,8 @@ func New(access, secret, endpoint, region string, disableChecksum, sslSkipVerify
func (s *S3Proxy) ListBuckets(ctx context.Context, owner string, isAdmin bool) (s3response.ListAllMyBucketsResult, error) {
output, err := s.client.ListBuckets(ctx, &s3.ListBucketsInput{})
err = handleError(err)
if err != nil {
return s3response.ListAllMyBucketsResult{}, err
return s3response.ListAllMyBucketsResult{}, handleError(err)
}
var buckets []s3response.ListAllMyBucketsEntry
@@ -97,13 +98,27 @@ func (s *S3Proxy) ListBuckets(ctx context.Context, owner string, isAdmin bool) (
func (s *S3Proxy) HeadBucket(ctx context.Context, input *s3.HeadBucketInput) (*s3.HeadBucketOutput, error) {
out, err := s.client.HeadBucket(ctx, input)
err = handleError(err)
return out, err
return out, handleError(err)
}
func (s *S3Proxy) CreateBucket(ctx context.Context, input *s3.CreateBucketInput) error {
func (s *S3Proxy) CreateBucket(ctx context.Context, input *s3.CreateBucketInput, acl []byte) error {
_, err := s.client.CreateBucket(ctx, input)
if err != nil {
return handleError(err)
}
var tagSet []types.Tag
tagSet = append(tagSet, types.Tag{
Key: backend.GetStringPtr(aclKey),
Value: backend.GetStringPtr(base64Encode(acl)),
})
_, err = s.client.PutBucketTagging(ctx, &s3.PutBucketTaggingInput{
Bucket: input.Bucket,
Tagging: &types.Tagging{
TagSet: tagSet,
},
})
return handleError(err)
}
@@ -114,27 +129,23 @@ func (s *S3Proxy) DeleteBucket(ctx context.Context, input *s3.DeleteBucketInput)
func (s *S3Proxy) CreateMultipartUpload(ctx context.Context, input *s3.CreateMultipartUploadInput) (*s3.CreateMultipartUploadOutput, error) {
out, err := s.client.CreateMultipartUpload(ctx, input)
err = handleError(err)
return out, err
return out, handleError(err)
}
func (s *S3Proxy) CompleteMultipartUpload(ctx context.Context, input *s3.CompleteMultipartUploadInput) (*s3.CompleteMultipartUploadOutput, error) {
out, err := s.client.CompleteMultipartUpload(ctx, input)
err = handleError(err)
return out, err
return out, handleError(err)
}
func (s *S3Proxy) AbortMultipartUpload(ctx context.Context, input *s3.AbortMultipartUploadInput) error {
_, err := s.client.AbortMultipartUpload(ctx, input)
err = handleError(err)
return err
return handleError(err)
}
func (s *S3Proxy) ListMultipartUploads(ctx context.Context, input *s3.ListMultipartUploadsInput) (s3response.ListMultipartUploadsResult, error) {
output, err := s.client.ListMultipartUploads(ctx, input)
err = handleError(err)
if err != nil {
return s3response.ListMultipartUploadsResult{}, err
return s3response.ListMultipartUploadsResult{}, handleError(err)
}
var uploads []s3response.Upload
@@ -180,9 +191,8 @@ func (s *S3Proxy) ListMultipartUploads(ctx context.Context, input *s3.ListMultip
func (s *S3Proxy) ListParts(ctx context.Context, input *s3.ListPartsInput) (s3response.ListPartsResult, error) {
output, err := s.client.ListParts(ctx, input)
err = handleError(err)
if err != nil {
return s3response.ListPartsResult{}, err
return s3response.ListPartsResult{}, handleError(err)
}
var parts []s3response.Part
@@ -233,9 +243,8 @@ func (s *S3Proxy) UploadPart(ctx context.Context, input *s3.UploadPartInput) (et
output, err := s.client.UploadPart(ctx, input, s3.WithAPIOptions(
v4.SwapComputePayloadSHA256ForUnsignedPayloadMiddleware,
))
err = handleError(err)
if err != nil {
return "", err
return "", handleError(err)
}
return *output.ETag, nil
@@ -243,9 +252,8 @@ func (s *S3Proxy) UploadPart(ctx context.Context, input *s3.UploadPartInput) (et
func (s *S3Proxy) UploadPartCopy(ctx context.Context, input *s3.UploadPartCopyInput) (s3response.CopyObjectResult, error) {
output, err := s.client.UploadPartCopy(ctx, input)
err = handleError(err)
if err != nil {
return s3response.CopyObjectResult{}, err
return s3response.CopyObjectResult{}, handleError(err)
}
return s3response.CopyObjectResult{
@@ -260,9 +268,8 @@ func (s *S3Proxy) PutObject(ctx context.Context, input *s3.PutObjectInput) (stri
output, err := s.client.PutObject(ctx, input, s3.WithAPIOptions(
v4.SwapComputePayloadSHA256ForUnsignedPayloadMiddleware,
))
err = handleError(err)
if err != nil {
return "", err
return "", handleError(err)
}
return *output.ETag, nil
@@ -270,16 +277,13 @@ func (s *S3Proxy) PutObject(ctx context.Context, input *s3.PutObjectInput) (stri
func (s *S3Proxy) HeadObject(ctx context.Context, input *s3.HeadObjectInput) (*s3.HeadObjectOutput, error) {
out, err := s.client.HeadObject(ctx, input)
err = handleError(err)
return out, err
return out, handleError(err)
}
func (s *S3Proxy) GetObject(ctx context.Context, input *s3.GetObjectInput, w io.Writer) (*s3.GetObjectOutput, error) {
output, err := s.client.GetObject(ctx, input)
err = handleError(err)
if err != nil {
return nil, err
return nil, handleError(err)
}
defer output.Body.Close()
@@ -293,30 +297,22 @@ func (s *S3Proxy) GetObject(ctx context.Context, input *s3.GetObjectInput, w io.
func (s *S3Proxy) GetObjectAttributes(ctx context.Context, input *s3.GetObjectAttributesInput) (*s3.GetObjectAttributesOutput, error) {
out, err := s.client.GetObjectAttributes(ctx, input)
err = handleError(err)
return out, err
return out, handleError(err)
}
func (s *S3Proxy) CopyObject(ctx context.Context, input *s3.CopyObjectInput) (*s3.CopyObjectOutput, error) {
out, err := s.client.CopyObject(ctx, input)
err = handleError(err)
return out, err
return out, handleError(err)
}
func (s *S3Proxy) ListObjects(ctx context.Context, input *s3.ListObjectsInput) (*s3.ListObjectsOutput, error) {
out, err := s.client.ListObjects(ctx, input)
err = handleError(err)
return out, err
return out, handleError(err)
}
func (s *S3Proxy) ListObjectsV2(ctx context.Context, input *s3.ListObjectsV2Input) (*s3.ListObjectsV2Output, error) {
out, err := s.client.ListObjectsV2(ctx, input)
err = handleError(err)
return out, err
return out, handleError(err)
}
func (s *S3Proxy) DeleteObject(ctx context.Context, input *s3.DeleteObjectInput) error {
@@ -330,9 +326,8 @@ func (s *S3Proxy) DeleteObjects(ctx context.Context, input *s3.DeleteObjectsInpu
}
output, err := s.client.DeleteObjects(ctx, input)
err = handleError(err)
if err != nil {
return s3response.DeleteObjectsResult{}, err
return s3response.DeleteObjectsResult{}, handleError(err)
}
return s3response.DeleteObjectsResult{
@@ -342,53 +337,58 @@ func (s *S3Proxy) DeleteObjects(ctx context.Context, input *s3.DeleteObjectsInpu
}
func (s *S3Proxy) GetBucketAcl(ctx context.Context, input *s3.GetBucketAclInput) ([]byte, error) {
output, err := s.client.GetBucketAcl(ctx, input)
err = handleError(err)
tagout, err := s.client.GetBucketTagging(ctx, &s3.GetBucketTaggingInput{
Bucket: input.Bucket,
})
if err != nil {
return nil, err
return nil, handleError(err)
}
var acl auth.ACL
acl.Owner = *output.Owner.ID
for _, el := range output.Grants {
acl.Grantees = append(acl.Grantees, auth.Grantee{
Permission: el.Permission,
Access: *el.Grantee.ID,
})
for _, tag := range tagout.TagSet {
if *tag.Key == aclKey {
acl, err := base64Decode(*tag.Value)
if err != nil {
return nil, handleError(err)
}
return acl, nil
}
}
return json.Marshal(acl)
return []byte{}, nil
}
func (s *S3Proxy) PutBucketAcl(ctx context.Context, bucket string, data []byte) error {
acl, err := auth.ParseACL(data)
if err != nil {
return err
}
input := &s3.PutBucketAclInput{
tagout, err := s.client.GetBucketTagging(ctx, &s3.GetBucketTaggingInput{
Bucket: &bucket,
ACL: acl.ACL,
AccessControlPolicy: &types.AccessControlPolicy{
Owner: &types.Owner{
ID: &acl.Owner,
},
},
})
if err != nil {
return handleError(err)
}
for _, el := range acl.Grantees {
acc := el.Access
input.AccessControlPolicy.Grants = append(input.AccessControlPolicy.Grants, types.Grant{
Permission: el.Permission,
Grantee: &types.Grantee{
ID: &acc,
Type: types.TypeCanonicalUser,
},
var found bool
for i, tag := range tagout.TagSet {
if *tag.Key == aclKey {
tagout.TagSet[i] = types.Tag{
Key: backend.GetStringPtr(aclKey),
Value: backend.GetStringPtr(base64Encode(data)),
}
found = true
break
}
}
if !found {
tagout.TagSet = append(tagout.TagSet, types.Tag{
Key: backend.GetStringPtr(aclKey),
Value: backend.GetStringPtr(base64Encode(data)),
})
}
_, err = s.client.PutBucketAcl(ctx, input)
_, err = s.client.PutBucketTagging(ctx, &s3.PutBucketTaggingInput{
Bucket: &bucket,
Tagging: &types.Tagging{
TagSet: tagout.TagSet,
},
})
return handleError(err)
}
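Note the upsert semantics here: the existing tag set is fetched first, the aclKey entry is replaced in place when present and appended otherwise, so an ACL change leaves unrelated bucket tags intact.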
@@ -416,9 +416,8 @@ func (s *S3Proxy) GetObjectTagging(ctx context.Context, bucket, object string) (
Bucket: &bucket,
Key: &object,
})
err = handleError(err)
if err != nil {
return nil, err
return nil, handleError(err)
}
tags := make(map[string]string)
@@ -532,3 +531,15 @@ func handleError(err error) error {
}
return err
}
func base64Encode(input []byte) string {
return base64.StdEncoding.EncodeToString(input)
}
func base64Decode(encoded string) ([]byte, error) {
decoded, err := base64.StdEncoding.DecodeString(encoded)
if err != nil {
return nil, err
}
return decoded, nil
}
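
The base64 step matters because S3 tag values only admit a restricted character set, which raw ACL JSON would violate; encoding makes the bytes tag-safe and exactly recoverable. A quick round-trip sanity sketch:

package main

import (
	"encoding/base64"
	"fmt"
)

func main() {
	acl := []byte(`{"ACL":"private"}`) // illustrative ACL JSON
	enc := base64.StdEncoding.EncodeToString(acl)
	dec, err := base64.StdEncoding.DecodeString(enc)
	if err != nil {
		panic(err)
	}
	fmt.Println(string(dec) == string(acl)) // true
}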

74 cmd/versitygw/azure.go Normal file
View File

@@ -0,0 +1,74 @@
// Copyright 2023 Versity Software
// This file is licensed under the Apache License, Version 2.0
// (the "License"); you may not use this file except in compliance
// with the License. You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing,
// software distributed under the License is distributed on an
// "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
// KIND, either express or implied. See the License for the
// specific language governing permissions and limitations
// under the License.
package main
import (
"fmt"
"github.com/urfave/cli/v2"
"github.com/versity/versitygw/backend/azure"
)
var (
azAccount, azKey, azServiceURL, azSASToken string
)
func azureCommand() *cli.Command {
return &cli.Command{
Name: "azure",
Usage: "azure blob storage backend",
Description: `direct translation from s3 objects to azure blobs`,
Action: runAzure,
Flags: []cli.Flag{
&cli.StringFlag{
Name: "account",
Usage: "azure account name",
EnvVars: []string{"AZ_ACCOUNT_NAME"},
Aliases: []string{"a"},
Destination: &azAccount,
},
&cli.StringFlag{
Name: "access-key",
Usage: "azure account key",
EnvVars: []string{"AZ_ACCESS_KEY"},
Aliases: []string{"k"},
Destination: &azKey,
},
&cli.StringFlag{
Name: "sas-token",
Usage: "azure blob storage SAS token",
EnvVars: []string{"AZ_SAS_TOKEN"},
Aliases: []string{"st"},
Destination: &azSASToken,
},
&cli.StringFlag{
Name: "url",
Usage: "azure service URL",
EnvVars: []string{"AZ_ENDPOINT"},
Aliases: []string{"u"},
Destination: &azServiceURL,
},
},
}
}
func runAzure(ctx *cli.Context) error {
be, err := azure.New(azAccount, azKey, azServiceURL, azSASToken)
if err != nil {
return fmt.Errorf("init azure: %w", err)
}
return runGateway(ctx.Context, be)
}
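
With the Azurite service from the compose file further down, a typical invocation looks like versitygw -a <gw-access> -s <gw-secret> azure -a <account> -k <key> --url http://127.0.0.1:10000/<account>. Note the two -a flags differ: the first is the gateway's S3 access key, the second the Azure account name defined above.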

View File

@@ -75,6 +75,7 @@ func main() {
posixCommand(),
scoutfsCommand(),
s3Command(),
azureCommand(),
adminCommand(),
testCommand(),
}

View File

@@ -268,11 +268,11 @@ func getAction(tf testFunc) func(*cli.Context) error {
func extractIntTests() (commands []*cli.Command) {
tests := integration.GetIntTests()
for key, val := range tests {
testKey := key
k := key
testFunc := val
commands = append(commands, &cli.Command{
Name: testKey,
Usage: fmt.Sprintf("Runs %v integration test", testKey),
Name: k,
Usage: fmt.Sprintf("Runs %v integration test", key),
Action: func(ctx *cli.Context) error {
opts := []integration.Option{
integration.WithAccess(awsID),

View File

@@ -21,3 +21,18 @@ services:
ports:
- "${PROXY_PORT}:${PROXY_PORT}"
command: ["sh", "-c", CompileDaemon -build="go build -C ./cmd/versitygw -o versitygw" -command="./cmd/versitygw/versitygw -p :$PROXY_PORT s3 -a $ACCESS_KEY_ID -s $SECRET_ACCESS_KEY --endpoint http://posix:$POSIX_PORT"]
azurite:
image: mcr.microsoft.com/azure-storage/azurite
ports:
- "10000:10000"
- "10001:10001"
- "10002:10002"
azuritegw:
build:
context: .
dockerfile: ./Dockerfile.dev
volumes:
- ./:/app
ports:
- 7070:7070
command: ["sh", "-c", CompileDaemon -build="go build -C ./cmd/versitygw -o versitygw" -command="./cmd/versitygw/versitygw -a $ACCESS_KEY_ID -s $SECRET_ACCESS_KEY --iam-dir $IAM_DIR azure -a $AZ_ACCOUNT_NAME -k $AZ_ACCOUNT_KEY --url http://azurite:10000/$AZ_ACCOUNT_NAME"]

63 go.mod
View File

@@ -3,52 +3,61 @@ module github.com/versity/versitygw
go 1.20
require (
github.com/aws/aws-sdk-go-v2 v1.24.0
github.com/aws/aws-sdk-go-v2/service/s3 v1.47.7
github.com/Azure/azure-sdk-for-go/sdk/azcore v1.9.2
github.com/Azure/azure-sdk-for-go/sdk/azidentity v1.5.1
github.com/Azure/azure-sdk-for-go/sdk/storage/azblob v1.3.0
github.com/aws/aws-sdk-go-v2 v1.24.1
github.com/aws/aws-sdk-go-v2/service/s3 v1.48.1
github.com/aws/smithy-go v1.19.0
github.com/go-ldap/ldap/v3 v3.4.6
github.com/gofiber/fiber/v2 v2.51.0
github.com/google/uuid v1.5.0
github.com/nats-io/nats.go v1.31.0
github.com/gofiber/fiber/v2 v2.52.0
github.com/google/uuid v1.6.0
github.com/nats-io/nats.go v1.32.0
github.com/pkg/xattr v0.4.9
github.com/segmentio/kafka-go v0.4.47
github.com/urfave/cli/v2 v2.26.0
github.com/valyala/fasthttp v1.51.0
github.com/urfave/cli/v2 v2.27.1
github.com/valyala/fasthttp v1.52.0
github.com/versity/scoutfs-go v0.0.0-20230606232754-0474b14343b9
golang.org/x/sys v0.15.0
golang.org/x/sys v0.17.0
)
require (
github.com/Azure/azure-sdk-for-go/sdk/internal v1.5.2 // indirect
github.com/Azure/go-ntlmssp v0.0.0-20221128193559-754e69321358 // indirect
github.com/aws/aws-sdk-go-v2/feature/ec2/imds v1.14.10 // indirect
github.com/aws/aws-sdk-go-v2/internal/ini v1.7.2 // indirect
github.com/aws/aws-sdk-go-v2/service/sso v1.18.5 // indirect
github.com/aws/aws-sdk-go-v2/service/ssooidc v1.21.5 // indirect
github.com/aws/aws-sdk-go-v2/service/sts v1.26.6 // indirect
github.com/AzureAD/microsoft-authentication-library-for-go v1.2.1 // indirect
github.com/aws/aws-sdk-go-v2/feature/ec2/imds v1.14.11 // indirect
github.com/aws/aws-sdk-go-v2/internal/ini v1.7.3 // indirect
github.com/aws/aws-sdk-go-v2/service/sso v1.18.7 // indirect
github.com/aws/aws-sdk-go-v2/service/ssooidc v1.21.7 // indirect
github.com/aws/aws-sdk-go-v2/service/sts v1.26.7 // indirect
github.com/go-asn1-ber/asn1-ber v1.5.5 // indirect
github.com/golang-jwt/jwt/v5 v5.2.0 // indirect
github.com/jmespath/go-jmespath v0.4.0 // indirect
github.com/nats-io/nkeys v0.4.6 // indirect
github.com/kylelemons/godebug v1.1.0 // indirect
github.com/nats-io/nkeys v0.4.7 // indirect
github.com/nats-io/nuid v1.0.1 // indirect
github.com/pierrec/lz4/v4 v4.1.18 // indirect
github.com/stretchr/testify v1.8.1 // indirect
golang.org/x/crypto v0.17.0 // indirect
github.com/pkg/browser v0.0.0-20240102092130-5ac0b6a4141c // indirect
golang.org/x/crypto v0.19.0 // indirect
golang.org/x/net v0.21.0 // indirect
golang.org/x/text v0.14.0 // indirect
)
require (
github.com/andybalholm/brotli v1.0.5 // indirect
github.com/andybalholm/brotli v1.1.0 // indirect
github.com/aws/aws-sdk-go-v2/aws/protocol/eventstream v1.5.4 // indirect
github.com/aws/aws-sdk-go-v2/config v1.26.2
github.com/aws/aws-sdk-go-v2/credentials v1.16.13
github.com/aws/aws-sdk-go-v2/feature/s3/manager v1.15.9
github.com/aws/aws-sdk-go-v2/internal/configsources v1.2.9 // indirect
github.com/aws/aws-sdk-go-v2/internal/endpoints/v2 v2.5.9 // indirect
github.com/aws/aws-sdk-go-v2/internal/v4a v1.2.9 // indirect
github.com/aws/aws-sdk-go-v2/config v1.26.6
github.com/aws/aws-sdk-go-v2/credentials v1.16.16
github.com/aws/aws-sdk-go-v2/feature/s3/manager v1.15.15
github.com/aws/aws-sdk-go-v2/internal/configsources v1.2.10 // indirect
github.com/aws/aws-sdk-go-v2/internal/endpoints/v2 v2.5.10 // indirect
github.com/aws/aws-sdk-go-v2/internal/v4a v1.2.10 // indirect
github.com/aws/aws-sdk-go-v2/service/internal/accept-encoding v1.10.4 // indirect
github.com/aws/aws-sdk-go-v2/service/internal/checksum v1.2.9 // indirect
github.com/aws/aws-sdk-go-v2/service/internal/presigned-url v1.10.9 // indirect
github.com/aws/aws-sdk-go-v2/service/internal/s3shared v1.16.9 // indirect
github.com/aws/aws-sdk-go-v2/service/internal/checksum v1.2.10 // indirect
github.com/aws/aws-sdk-go-v2/service/internal/presigned-url v1.10.10 // indirect
github.com/aws/aws-sdk-go-v2/service/internal/s3shared v1.16.10 // indirect
github.com/cpuguy83/go-md2man/v2 v2.0.2 // indirect
github.com/klauspost/compress v1.17.0 // indirect
github.com/klauspost/compress v1.17.6 // indirect
github.com/mattn/go-colorable v0.1.13 // indirect
github.com/mattn/go-isatty v0.0.20 // indirect
github.com/mattn/go-runewidth v0.0.15 // indirect

133 go.sum
View File

@@ -1,45 +1,56 @@
github.com/Azure/azure-sdk-for-go/sdk/azcore v1.9.2 h1:c4k2FIYIh4xtwqrQwV0Ct1v5+ehlNXj5NI/MWVsiTkQ=
github.com/Azure/azure-sdk-for-go/sdk/azcore v1.9.2/go.mod h1:5FDJtLEO/GxwNgUxbwrY3LP0pEoThTQJtk2oysdXHxM=
github.com/Azure/azure-sdk-for-go/sdk/azidentity v1.5.1 h1:sO0/P7g68FrryJzljemN+6GTssUXdANk6aJ7T1ZxnsQ=
github.com/Azure/azure-sdk-for-go/sdk/azidentity v1.5.1/go.mod h1:h8hyGFDsU5HMivxiS2iYFZsgDbU9OnnJ163x5UGVKYo=
github.com/Azure/azure-sdk-for-go/sdk/internal v1.5.2 h1:LqbJ/WzJUwBf8UiaSzgX7aMclParm9/5Vgp+TY51uBQ=
github.com/Azure/azure-sdk-for-go/sdk/internal v1.5.2/go.mod h1:yInRyqWXAuaPrgI7p70+lDDgh3mlBohis29jGMISnmc=
github.com/Azure/azure-sdk-for-go/sdk/resourcemanager/storage/armstorage v1.5.0 h1:AifHbc4mg0x9zW52WOpKbsHaDKuRhlI7TVl47thgQ70=
github.com/Azure/azure-sdk-for-go/sdk/storage/azblob v1.3.0 h1:IfFdxTUDiV58iZqPKgyWiz4X4fCxZeQ1pTQPImLYXpY=
github.com/Azure/azure-sdk-for-go/sdk/storage/azblob v1.3.0/go.mod h1:SUZc9YRRHfx2+FAQKNDGrssXehqLpxmwRv2mC/5ntj4=
github.com/Azure/go-ntlmssp v0.0.0-20221128193559-754e69321358 h1:mFRzDkZVAjdal+s7s0MwaRv9igoPqLRdzOLzw/8Xvq8=
github.com/Azure/go-ntlmssp v0.0.0-20221128193559-754e69321358/go.mod h1:chxPXzSsl7ZWRAuOIE23GDNzjWuZquvFlgA8xmpunjU=
github.com/AzureAD/microsoft-authentication-library-for-go v1.2.1 h1:DzHpqpoJVaCgOUdVHxE8QB52S6NiVdDQvGlny1qvPqA=
github.com/AzureAD/microsoft-authentication-library-for-go v1.2.1/go.mod h1:wP83P5OoQ5p6ip3ScPr0BAq0BvuPAvacpEuSzyouqAI=
github.com/alexbrainman/sspi v0.0.0-20210105120005-909beea2cc74 h1:Kk6a4nehpJ3UuJRqlA3JxYxBZEqCeOmATOvrbT4p9RA=
github.com/alexbrainman/sspi v0.0.0-20210105120005-909beea2cc74/go.mod h1:cEWa1LVoE5KvSD9ONXsZrj0z6KqySlCCNKHlLzbqAt4=
github.com/andybalholm/brotli v1.0.5 h1:8uQZIdzKmjc/iuPu7O2ioW48L81FgatrcpfFmiq/cCs=
github.com/andybalholm/brotli v1.0.5/go.mod h1:fO7iG3H7G2nSZ7m0zPUDn85XEX2GTukHGRSepvi9Eig=
github.com/aws/aws-sdk-go-v2 v1.24.0 h1:890+mqQ+hTpNuw0gGP6/4akolQkSToDJgHfQE7AwGuk=
github.com/aws/aws-sdk-go-v2 v1.24.0/go.mod h1:LNh45Br1YAkEKaAqvmE1m8FUx6a5b/V0oAKV7of29b4=
github.com/andybalholm/brotli v1.1.0 h1:eLKJA0d02Lf0mVpIDgYnqXcUn0GqVmEFny3VuID1U3M=
github.com/andybalholm/brotli v1.1.0/go.mod h1:sms7XGricyQI9K10gOSf56VKKWS4oLer58Q+mhRPtnY=
github.com/aws/aws-sdk-go-v2 v1.24.1 h1:xAojnj+ktS95YZlDf0zxWBkbFtymPeDP+rvUQIH3uAU=
github.com/aws/aws-sdk-go-v2 v1.24.1/go.mod h1:LNh45Br1YAkEKaAqvmE1m8FUx6a5b/V0oAKV7of29b4=
github.com/aws/aws-sdk-go-v2/aws/protocol/eventstream v1.5.4 h1:OCs21ST2LrepDfD3lwlQiOqIGp6JiEUqG84GzTDoyJs=
github.com/aws/aws-sdk-go-v2/aws/protocol/eventstream v1.5.4/go.mod h1:usURWEKSNNAcAZuzRn/9ZYPT8aZQkR7xcCtunK/LkJo=
github.com/aws/aws-sdk-go-v2/config v1.26.2 h1:+RWLEIWQIGgrz2pBPAUoGgNGs1TOyF4Hml7hCnYj2jc=
github.com/aws/aws-sdk-go-v2/config v1.26.2/go.mod h1:l6xqvUxt0Oj7PI/SUXYLNyZ9T/yBPn3YTQcJLLOdtR8=
github.com/aws/aws-sdk-go-v2/credentials v1.16.13 h1:WLABQ4Cp4vXtXfOWOS3MEZKr6AAYUpMczLhgKtAjQ/8=
github.com/aws/aws-sdk-go-v2/credentials v1.16.13/go.mod h1:Qg6x82FXwW0sJHzYruxGiuApNo31UEtJvXVSZAXeWiw=
github.com/aws/aws-sdk-go-v2/feature/ec2/imds v1.14.10 h1:w98BT5w+ao1/r5sUuiH6JkVzjowOKeOJRHERyy1vh58=
github.com/aws/aws-sdk-go-v2/feature/ec2/imds v1.14.10/go.mod h1:K2WGI7vUvkIv1HoNbfBA1bvIZ+9kL3YVmWxeKuLQsiw=
github.com/aws/aws-sdk-go-v2/feature/s3/manager v1.15.9 h1:5zA8qVCXMPGt6YneFnll5B157SfdK2SewU85PH9/yM0=
github.com/aws/aws-sdk-go-v2/feature/s3/manager v1.15.9/go.mod h1:t4gy210hPxkbtYM8xOzrWdxVq1PyekR76OOKXy3s0Vs=
github.com/aws/aws-sdk-go-v2/internal/configsources v1.2.9 h1:v+HbZaCGmOwnTTVS86Fleq0vPzOd7tnJGbFhP0stNLs=
github.com/aws/aws-sdk-go-v2/internal/configsources v1.2.9/go.mod h1:Xjqy+Nyj7VDLBtCMkQYOw1QYfAEZCVLrfI0ezve8wd4=
github.com/aws/aws-sdk-go-v2/internal/endpoints/v2 v2.5.9 h1:N94sVhRACtXyVcjXxrwK1SKFIJrA9pOJ5yu2eSHnmls=
github.com/aws/aws-sdk-go-v2/internal/endpoints/v2 v2.5.9/go.mod h1:hqamLz7g1/4EJP+GH5NBhcUMLjW+gKLQabgyz6/7WAU=
github.com/aws/aws-sdk-go-v2/internal/ini v1.7.2 h1:GrSw8s0Gs/5zZ0SX+gX4zQjRnRsMJDJ2sLur1gRBhEM=
github.com/aws/aws-sdk-go-v2/internal/ini v1.7.2/go.mod h1:6fQQgfuGmw8Al/3M2IgIllycxV7ZW7WCdVSqfBeUiCY=
github.com/aws/aws-sdk-go-v2/internal/v4a v1.2.9 h1:ugD6qzjYtB7zM5PN/ZIeaAIyefPaD82G8+SJopgvUpw=
github.com/aws/aws-sdk-go-v2/internal/v4a v1.2.9/go.mod h1:YD0aYBWCrPENpHolhKw2XDlTIWae2GKXT1T4o6N6hiM=
github.com/aws/aws-sdk-go-v2/config v1.26.6 h1:Z/7w9bUqlRI0FFQpetVuFYEsjzE3h7fpU6HuGmfPL/o=
github.com/aws/aws-sdk-go-v2/config v1.26.6/go.mod h1:uKU6cnDmYCvJ+pxO9S4cWDb2yWWIH5hra+32hVh1MI4=
github.com/aws/aws-sdk-go-v2/credentials v1.16.16 h1:8q6Rliyv0aUFAVtzaldUEcS+T5gbadPbWdV1WcAddK8=
github.com/aws/aws-sdk-go-v2/credentials v1.16.16/go.mod h1:UHVZrdUsv63hPXFo1H7c5fEneoVo9UXiz36QG1GEPi0=
github.com/aws/aws-sdk-go-v2/feature/ec2/imds v1.14.11 h1:c5I5iH+DZcH3xOIMlz3/tCKJDaHFwYEmxvlh2fAcFo8=
github.com/aws/aws-sdk-go-v2/feature/ec2/imds v1.14.11/go.mod h1:cRrYDYAMUohBJUtUnOhydaMHtiK/1NZ0Otc9lIb6O0Y=
github.com/aws/aws-sdk-go-v2/feature/s3/manager v1.15.15 h1:2MUXyGW6dVaQz6aqycpbdLIH1NMcUI6kW6vQ0RabGYg=
github.com/aws/aws-sdk-go-v2/feature/s3/manager v1.15.15/go.mod h1:aHbhbR6WEQgHAiRj41EQ2W47yOYwNtIkWTXmcAtYqj8=
github.com/aws/aws-sdk-go-v2/internal/configsources v1.2.10 h1:vF+Zgd9s+H4vOXd5BMaPWykta2a6Ih0AKLq/X6NYKn4=
github.com/aws/aws-sdk-go-v2/internal/configsources v1.2.10/go.mod h1:6BkRjejp/GR4411UGqkX8+wFMbFbqsUIimfK4XjOKR4=
github.com/aws/aws-sdk-go-v2/internal/endpoints/v2 v2.5.10 h1:nYPe006ktcqUji8S2mqXf9c/7NdiKriOwMvWQHgYztw=
github.com/aws/aws-sdk-go-v2/internal/endpoints/v2 v2.5.10/go.mod h1:6UV4SZkVvmODfXKql4LCbaZUpF7HO2BX38FgBf9ZOLw=
github.com/aws/aws-sdk-go-v2/internal/ini v1.7.3 h1:n3GDfwqF2tzEkXlv5cuy4iy7LpKDtqDMcNLfZDu9rls=
github.com/aws/aws-sdk-go-v2/internal/ini v1.7.3/go.mod h1:6fQQgfuGmw8Al/3M2IgIllycxV7ZW7WCdVSqfBeUiCY=
github.com/aws/aws-sdk-go-v2/internal/v4a v1.2.10 h1:5oE2WzJE56/mVveuDZPJESKlg/00AaS2pY2QZcnxg4M=
github.com/aws/aws-sdk-go-v2/internal/v4a v1.2.10/go.mod h1:FHbKWQtRBYUz4vO5WBWjzMD2by126ny5y/1EoaWoLfI=
github.com/aws/aws-sdk-go-v2/service/internal/accept-encoding v1.10.4 h1:/b31bi3YVNlkzkBrm9LfpaKoaYZUxIAj4sHfOTmLfqw=
github.com/aws/aws-sdk-go-v2/service/internal/accept-encoding v1.10.4/go.mod h1:2aGXHFmbInwgP9ZfpmdIfOELL79zhdNYNmReK8qDfdQ=
github.com/aws/aws-sdk-go-v2/service/internal/checksum v1.2.9 h1:/90OR2XbSYfXucBMJ4U14wrjlfleq/0SB6dZDPncgmo=
github.com/aws/aws-sdk-go-v2/service/internal/checksum v1.2.9/go.mod h1:dN/Of9/fNZet7UrQQ6kTDo/VSwKPIq94vjlU16bRARc=
github.com/aws/aws-sdk-go-v2/service/internal/presigned-url v1.10.9 h1:Nf2sHxjMJR8CSImIVCONRi4g0Su3J+TSTbS7G0pUeMU=
github.com/aws/aws-sdk-go-v2/service/internal/presigned-url v1.10.9/go.mod h1:idky4TER38YIjr2cADF1/ugFMKvZV7p//pVeV5LZbF0=
github.com/aws/aws-sdk-go-v2/service/internal/s3shared v1.16.9 h1:iEAeF6YC3l4FzlJPP9H3Ko1TXpdjdqWffxXjp8SY6uk=
github.com/aws/aws-sdk-go-v2/service/internal/s3shared v1.16.9/go.mod h1:kjsXoK23q9Z/tLBrckZLLyvjhZoS+AGrzqzUfEClvMM=
github.com/aws/aws-sdk-go-v2/service/s3 v1.47.7 h1:o0ASbVwUAIrfp/WcCac+6jioZt4Hd8k/1X8u7GJ/QeM=
github.com/aws/aws-sdk-go-v2/service/s3 v1.47.7/go.mod h1:vADO6Jn+Rq4nDtfwNjhgR84qkZwiC6FqCaXdw/kYwjA=
github.com/aws/aws-sdk-go-v2/service/sso v1.18.5 h1:ldSFWz9tEHAwHNmjx2Cvy1MjP5/L9kNoR0skc6wyOOM=
github.com/aws/aws-sdk-go-v2/service/sso v1.18.5/go.mod h1:CaFfXLYL376jgbP7VKC96uFcU8Rlavak0UlAwk1Dlhc=
github.com/aws/aws-sdk-go-v2/service/ssooidc v1.21.5 h1:2k9KmFawS63euAkY4/ixVNsYYwrwnd5fIvgEKkfZFNM=
github.com/aws/aws-sdk-go-v2/service/ssooidc v1.21.5/go.mod h1:W+nd4wWDVkSUIox9bacmkBP5NMFQeTJ/xqNabpzSR38=
github.com/aws/aws-sdk-go-v2/service/sts v1.26.6 h1:HJeiuZ2fldpd0WqngyMR6KW7ofkXNLyOaHwEIGm39Cs=
github.com/aws/aws-sdk-go-v2/service/sts v1.26.6/go.mod h1:XX5gh4CB7wAs4KhcF46G6C8a2i7eupU19dcAAE+EydU=
github.com/aws/aws-sdk-go-v2/service/internal/checksum v1.2.10 h1:L0ai8WICYHozIKK+OtPzVJBugL7culcuM4E4JOpIEm8=
github.com/aws/aws-sdk-go-v2/service/internal/checksum v1.2.10/go.mod h1:byqfyxJBshFk0fF9YmK0M0ugIO8OWjzH2T3bPG4eGuA=
github.com/aws/aws-sdk-go-v2/service/internal/presigned-url v1.10.10 h1:DBYTXwIGQSGs9w4jKm60F5dmCQ3EEruxdc0MFh+3EY4=
github.com/aws/aws-sdk-go-v2/service/internal/presigned-url v1.10.10/go.mod h1:wohMUQiFdzo0NtxbBg0mSRGZ4vL3n0dKjLTINdcIino=
github.com/aws/aws-sdk-go-v2/service/internal/s3shared v1.16.10 h1:KOxnQeWy5sXyS37fdKEvAsGHOr9fa/qvwxfJurR/BzE=
github.com/aws/aws-sdk-go-v2/service/internal/s3shared v1.16.10/go.mod h1:jMx5INQFYFYB3lQD9W0D8Ohgq6Wnl7NYOJ2TQndbulI=
github.com/aws/aws-sdk-go-v2/service/s3 v1.48.1 h1:5XNlsBsEvBZBMO6p82y+sqpWg8j5aBCe+5C2GBFgqBQ=
github.com/aws/aws-sdk-go-v2/service/s3 v1.48.1/go.mod h1:4qXHrG1Ne3VGIMZPCB8OjH/pLFO94sKABIusjh0KWPU=
github.com/aws/aws-sdk-go-v2/service/sso v1.18.7 h1:eajuO3nykDPdYicLlP3AGgOyVN3MOlFmZv7WGTuJPow=
github.com/aws/aws-sdk-go-v2/service/sso v1.18.7/go.mod h1:+mJNDdF+qiUlNKNC3fxn74WWNN+sOiGOEImje+3ScPM=
github.com/aws/aws-sdk-go-v2/service/ssooidc v1.21.7 h1:QPMJf+Jw8E1l7zqhZmMlFw6w1NmfkfiSK8mS4zOx3BA=
github.com/aws/aws-sdk-go-v2/service/ssooidc v1.21.7/go.mod h1:ykf3COxYI0UJmxcfcxcVuz7b6uADi1FkiUz6Eb7AgM8=
github.com/aws/aws-sdk-go-v2/service/sts v1.26.7 h1:NzO4Vrau795RkUdSHKEwiR01FaGzGOH1EETJ+5QHnm0=
github.com/aws/aws-sdk-go-v2/service/sts v1.26.7/go.mod h1:6h2YuIoxaMSCFf5fi1EgZAwdfkGMgDY+DVfa61uLe4U=
github.com/aws/smithy-go v1.19.0 h1:KWFKQV80DpP3vJrrA9sVAHQ5gc2z8i4EzrLhLlWXcBM=
github.com/aws/smithy-go v1.19.0/go.mod h1:NukqUGpCZIILqqiV0NIjeFh24kd/FAa4beRb6nbIUPE=
github.com/cpuguy83/go-md2man/v2 v2.0.2 h1:p1EgwI/C7NhT0JmVkwCD2ZBK8j4aeHQX2pMHHBfMQ6w=
@@ -47,23 +58,28 @@ github.com/cpuguy83/go-md2man/v2 v2.0.2/go.mod h1:tgQtvFlXSQOSOSIRvRPT7W67SCa46t
github.com/davecgh/go-spew v1.1.0/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=
github.com/davecgh/go-spew v1.1.1 h1:vj9j/u1bqnvCEfJOwUhtlOARqs3+rkHYY13jYWTU97c=
github.com/davecgh/go-spew v1.1.1/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=
github.com/dnaeon/go-vcr v1.2.0 h1:zHCHvJYTMh1N7xnV7zf1m1GPBF9Ad0Jk/whtQ1663qI=
github.com/go-asn1-ber/asn1-ber v1.5.5 h1:MNHlNMBDgEKD4TcKr36vQN68BA00aDfjIt3/bD50WnA=
github.com/go-asn1-ber/asn1-ber v1.5.5/go.mod h1:hEBeB/ic+5LoWskz+yKT7vGhhPYkProFKoKdwZRWMe0=
github.com/go-ldap/ldap/v3 v3.4.6 h1:ert95MdbiG7aWo/oPYp9btL3KJlMPKnP58r09rI8T+A=
github.com/go-ldap/ldap/v3 v3.4.6/go.mod h1:IGMQANNtxpsOzj7uUAMjpGBaOVTC4DYyIy8VsTdxmtc=
github.com/gofiber/fiber/v2 v2.51.0 h1:JNACcZy5e2tGApWB2QrRpenTWn0fq0hkFm6k0C86gKQ=
github.com/gofiber/fiber/v2 v2.51.0/go.mod h1:xaQRZQJGqnKOQnbQw+ltvku3/h8QxvNi8o6JiJ7Ll0U=
github.com/gofiber/fiber/v2 v2.52.0 h1:S+qXi7y+/Pgvqq4DrSmREGiFwtB7Bu6+QFLuIHYw/UE=
github.com/gofiber/fiber/v2 v2.52.0/go.mod h1:KEOE+cXMhXG0zHc9d8+E38hoX+ZN7bhOtgeF2oT6jrQ=
github.com/golang-jwt/jwt/v5 v5.2.0 h1:d/ix8ftRUorsN+5eMIlF4T6J8CAt9rch3My2winC1Jw=
github.com/golang-jwt/jwt/v5 v5.2.0/go.mod h1:pqrtFR0X4osieyHYxtmOUWsAWrfe1Q5UVIyoH402zdk=
github.com/google/go-cmp v0.5.8 h1:e6P7q2lk1O+qJJb4BtCQXlK8vWEO8V1ZeuEdJNOqZyg=
github.com/google/uuid v1.3.1/go.mod h1:TIyPZe4MgqvfeYDBFedMoGGpEw/LqOeaOT+nhxU+yHo=
github.com/google/uuid v1.5.0 h1:1p67kYwdtXjb0gL0BPiP1Av9wiZPo5A8z2cWkTZ+eyU=
github.com/google/uuid v1.5.0/go.mod h1:TIyPZe4MgqvfeYDBFedMoGGpEw/LqOeaOT+nhxU+yHo=
github.com/google/uuid v1.6.0 h1:NIvaJDMOsjHA8n1jAhLSgzrAzy1Hgr+hNrb57e+94F0=
github.com/google/uuid v1.6.0/go.mod h1:TIyPZe4MgqvfeYDBFedMoGGpEw/LqOeaOT+nhxU+yHo=
github.com/jmespath/go-jmespath v0.4.0 h1:BEgLn5cpjn8UN1mAw4NjwDrS35OdebyEtFe+9YPoQUg=
github.com/jmespath/go-jmespath v0.4.0/go.mod h1:T8mJZnbsbmF+m6zOOFylbeCJqk5+pHWvzYPziyZiYoo=
github.com/jmespath/go-jmespath/internal/testify v1.5.1 h1:shLQSRRSCCPj3f2gpwzGwWFoC7ycTf1rcQZHOlsJ6N8=
github.com/jmespath/go-jmespath/internal/testify v1.5.1/go.mod h1:L3OGu8Wl2/fWfCI6z80xFu9LTZmf1ZRjMHUOPmWr69U=
github.com/klauspost/compress v1.15.9/go.mod h1:PhcZ0MbTNciWF3rruxRgKxI5NkcHHrHUDtV4Yw2GlzU=
github.com/klauspost/compress v1.17.0 h1:Rnbp4K9EjcDuVuHtd0dgA4qNuv9yKDYKK1ulpJwgrqM=
github.com/klauspost/compress v1.17.0/go.mod h1:ntbaceVETuRiXiv4DpjP66DpAtAGkEQskQzEyD//IeE=
github.com/klauspost/compress v1.17.6 h1:60eq2E/jlfwQXtvZEeBUYADs+BwKBWURIY+Gj2eRGjI=
github.com/klauspost/compress v1.17.6/go.mod h1:/dCuZOvVtNoHsyb+cuJD3itjs3NbnF6KH9zAO4BDxPM=
github.com/kylelemons/godebug v1.1.0 h1:RPNrshWIDI6G2gRW9EHilWtl7Z6Sb1BR0xunSBf0SNc=
github.com/kylelemons/godebug v1.1.0/go.mod h1:9/0rRGxNHcop5bhtWyNeEfOS8JIWk580+fNqagV/RAw=
github.com/mattn/go-colorable v0.1.13 h1:fFA4WZxdEF4tXPZVKMLwD8oUnCTTo08duU7wxecdEvA=
github.com/mattn/go-colorable v0.1.13/go.mod h1:7S9/ev0klgBDR4GtXTXX8a3vIGJpMovkB8vQcUbaXHg=
github.com/mattn/go-isatty v0.0.16/go.mod h1:kYGgaQfpe5nmfYZH+SKPsOc2e4SrIfOl2e/yFXSvRLM=
@@ -71,15 +87,17 @@ github.com/mattn/go-isatty v0.0.20 h1:xfD0iDuEKnDkl03q4limB+vH+GxLEtL/jb4xVJSWWE
github.com/mattn/go-isatty v0.0.20/go.mod h1:W+V8PltTTMOvKvAeJH7IuucS94S2C6jfK/D7dTCTo3Y=
github.com/mattn/go-runewidth v0.0.15 h1:UNAjwbU9l54TA3KzvqLGxwWjHmMgBUVhBiTjelZgg3U=
github.com/mattn/go-runewidth v0.0.15/go.mod h1:Jdepj2loyihRzMpdS35Xk/zdY8IAYHsh153qUoGf23w=
github.com/nats-io/nats.go v1.31.0 h1:/WFBHEc/dOKBF6qf1TZhrdEfTmOZ5JzdJ+Y3m6Y/p7E=
github.com/nats-io/nats.go v1.31.0/go.mod h1:di3Bm5MLsoB4Bx61CBTsxuarI36WbhAwOm8QrW39+i8=
github.com/nats-io/nkeys v0.4.6 h1:IzVe95ru2CT6ta874rt9saQRkWfe2nFj1NtvYSLqMzY=
github.com/nats-io/nkeys v0.4.6/go.mod h1:4DxZNzenSVd1cYQoAa8948QY3QDjrHfcfVADymtkpts=
github.com/nats-io/nats.go v1.32.0 h1:Bx9BZS+aXYlxW08k8Gd3yR2s73pV5XSoAQUyp1Kwvp0=
github.com/nats-io/nats.go v1.32.0/go.mod h1:Ubdu4Nh9exXdSz0RVWRFBbRfrbSxOYd26oF0wkWclB8=
github.com/nats-io/nkeys v0.4.7 h1:RwNJbbIdYCoClSDNY7QVKZlyb/wfT6ugvFCiKy6vDvI=
github.com/nats-io/nkeys v0.4.7/go.mod h1:kqXRgRDPlGy7nGaEDMuYzmiJCIAAWDK0IMBtDmGD0nc=
github.com/nats-io/nuid v1.0.1 h1:5iA8DT8V7q8WK2EScv2padNa/rTESc1KdnPw4TC2paw=
github.com/nats-io/nuid v1.0.1/go.mod h1:19wcPz3Ph3q0Jbyiqsd0kePYG7A95tJPxeL+1OSON2c=
github.com/pierrec/lz4/v4 v4.1.15/go.mod h1:gZWDp/Ze/IJXGXf23ltt2EXimqmTUXEy0GFuRQyBid4=
github.com/pierrec/lz4/v4 v4.1.18 h1:xaKrnTkyoqfh1YItXl56+6KJNVYWlEEPuAQW9xsplYQ=
github.com/pierrec/lz4/v4 v4.1.18/go.mod h1:gZWDp/Ze/IJXGXf23ltt2EXimqmTUXEy0GFuRQyBid4=
github.com/pkg/browser v0.0.0-20240102092130-5ac0b6a4141c h1:+mdjkGKdHQG3305AYmdv1U2eRNDiU2ErMBj1gwrq8eQ=
github.com/pkg/browser v0.0.0-20240102092130-5ac0b6a4141c/go.mod h1:7rwL4CYBLnjLxUqIJNnCWiEdr3bn6IUYi15bNlnbCCU=
github.com/pkg/xattr v0.4.9 h1:5883YPCtkSd8LFbs13nXplj9g9tlrwoJRjgpgMu1/fE=
github.com/pkg/xattr v0.4.9/go.mod h1:di8WF84zAKk8jzR1UBTEWh9AUlIZZ7M/JNt8e9B6ktU=
github.com/pmezard/go-difflib v1.0.0 h1:4DBwDE0NGyQoBHbLQYPwSUPoCMWR5BEzIk/f1lZbAQM=
@@ -93,17 +111,15 @@ github.com/segmentio/kafka-go v0.4.47 h1:IqziR4pA3vrZq7YdRxaT3w1/5fvIH5qpCwstUan
github.com/segmentio/kafka-go v0.4.47/go.mod h1:HjF6XbOKh0Pjlkr5GVZxt6CsjjwnmhVOfURM5KMd8qg=
github.com/stretchr/objx v0.1.0/go.mod h1:HFkY916IF+rwdDfMAkV7OtwuqBVzrE8GR6GFx+wExME=
github.com/stretchr/objx v0.4.0/go.mod h1:YvHI0jy2hoMjB+UWwv71VJQ9isScKT/TqJzVSSt89Yw=
github.com/stretchr/objx v0.5.0/go.mod h1:Yh+to48EsGEfYuaHDzXPcE3xhTkx73EhmCGUpEOglKo=
github.com/stretchr/testify v1.7.1/go.mod h1:6Fq8oRcR53rry900zMqJjRRixrwX3KX962/h/Wwjteg=
github.com/stretchr/testify v1.8.0/go.mod h1:yNjHg4UonilssWZ8iaSj1OCr/vHnekPRkoO+kdMU+MU=
github.com/stretchr/testify v1.8.1 h1:w7B6lhMri9wdJUVmEZPGGhZzrYTPvgJArz7wNPgYKsk=
github.com/stretchr/testify v1.8.1/go.mod h1:w2LPCIKwWwSfY2zedu0+kehJoqGctiVI29o6fzry7u4=
github.com/urfave/cli/v2 v2.26.0 h1:3f3AMg3HpThFNT4I++TKOejZO8yU55t3JnnSr4S4QEI=
github.com/urfave/cli/v2 v2.26.0/go.mod h1:8qnjx1vcq5s2/wpsqoZFndg2CE5tNFyrTvS6SinrnYQ=
github.com/stretchr/testify v1.8.4 h1:CcVxjf3Q8PM0mHUKJCdn+eZZtm5yQwehR5yeSVQQcUk=
github.com/urfave/cli/v2 v2.27.1 h1:8xSQ6szndafKVRmfyeUMxkNUJQMjL1F2zmsZ+qHpfho=
github.com/urfave/cli/v2 v2.27.1/go.mod h1:8qnjx1vcq5s2/wpsqoZFndg2CE5tNFyrTvS6SinrnYQ=
github.com/valyala/bytebufferpool v1.0.0 h1:GqA5TC/0021Y/b9FG4Oi9Mr3q7XYx6KllzawFIhcdPw=
github.com/valyala/bytebufferpool v1.0.0/go.mod h1:6bBcMArwyJ5K/AmCkWv1jt77kVWyCJ6HpOuEn7z0Csc=
github.com/valyala/fasthttp v1.51.0 h1:8b30A5JlZ6C7AS81RsWjYMQmrZG6feChmgAolCl1SqA=
github.com/valyala/fasthttp v1.51.0/go.mod h1:oI2XroL+lI7vdXyYoQk03bXBThfFl2cVdIA3Xl7cH8g=
github.com/valyala/fasthttp v1.52.0 h1:wqBQpxH71XW0e2g+Og4dzQM8pk34aFYlA1Ga8db7gU0=
github.com/valyala/fasthttp v1.52.0/go.mod h1:hf5C4QnVMkNXMspnsUlfM3WitlgYflyhHYoKol/szxQ=
github.com/valyala/tcplisten v1.0.0 h1:rBHj/Xf+E1tRGZyWIWwJDiRY0zc1Js+CV5DqwacVSA8=
github.com/valyala/tcplisten v1.0.0/go.mod h1:T0xQ8SeCZGxckz9qRXTfG43PvQ/mcWh7FwZEA7Ioqkc=
github.com/versity/scoutfs-go v0.0.0-20230606232754-0474b14343b9 h1:ZfmQR01Kk6/kQh6+zlqfBYszVY02fzf9xYrchOY4NFM=
@@ -121,8 +137,8 @@ golang.org/x/crypto v0.0.0-20190308221718-c2843e01d9a2/go.mod h1:djNgcEr1/C05ACk
golang.org/x/crypto v0.0.0-20210921155107-089bfa567519/go.mod h1:GvvjBRRGRdwPK5ydBHafDWAxML/pGHZbMvKqRZ5+Abc=
golang.org/x/crypto v0.13.0/go.mod h1:y6Z2r+Rw4iayiXXAIxJIDAJ1zMW4yaTpebo8fPOliYc=
golang.org/x/crypto v0.14.0/go.mod h1:MVFd36DqK4CsrnJYDkBA3VC4m2GkXAM0PvzMCn4JQf4=
golang.org/x/crypto v0.17.0 h1:r8bRNjWL3GshPW3gkd+RpvzWrZAwPS49OmTGZ/uhM4k=
golang.org/x/crypto v0.17.0/go.mod h1:gCAAfMLgwOJRpTjQ2zCCt2OcSfYMTeZVSRtQlPC7Nq4=
golang.org/x/crypto v0.19.0 h1:ENy+Az/9Y1vSrlrvBSyna3PITt4tiZLf7sgCjZBX7Wo=
golang.org/x/crypto v0.19.0/go.mod h1:Iy9bg/ha4yyC70EfRS8jz+B6ybOBKMaSxLj6P6oBDfU=
golang.org/x/mod v0.6.0-dev.0.20220419223038-86c51ed26bb4/go.mod h1:jJ57K6gSWd91VN4djpZkiMVwK6gcyfeH4XE8wZrZaV4=
golang.org/x/mod v0.8.0/go.mod h1:iBbtSCu2XBx23ZKBPSOrRkjjQPZFPuis4dIYUhu/chs=
golang.org/x/net v0.0.0-20190620200207-3b0461eec859/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s=
@@ -130,8 +146,9 @@ golang.org/x/net v0.0.0-20210226172049-e18ecbb05110/go.mod h1:m0MpNAwzfU5UDzcl9v
golang.org/x/net v0.0.0-20220722155237-a158d28d115b/go.mod h1:XRhObCWvk6IyKnWLug+ECip1KBveYUHfp+8e9klMJ9c=
golang.org/x/net v0.6.0/go.mod h1:2Tu9+aMcznHK/AK1HMvgo6xiTLG5rD5rZLDS+rp2Bjs=
golang.org/x/net v0.10.0/go.mod h1:0qNGK6F8kojg2nk9dLZ2mShWaEBan6FAoqfSigmmuDg=
golang.org/x/net v0.17.0 h1:pVaXccu2ozPjCXewfr1S7xza/zcXTity9cCdXQYSjIM=
golang.org/x/net v0.17.0/go.mod h1:NxSsAGuq816PNPmqtQdLE42eU2Fs7NoRIZrHJAlaCOE=
golang.org/x/net v0.21.0 h1:AQyQV4dYCvJ7vGmJyKki9+PBdyvhkSd8EIx/qb0AYv4=
golang.org/x/net v0.21.0/go.mod h1:bIjVDfnllIU7BJ2DNgfnXvpSvtn8VRwhlsaeUTyUS44=
golang.org/x/sync v0.0.0-20190423024810-112230192c58/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.0.0-20220722155255-886fb9371eb4/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.1.0/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
@@ -142,13 +159,14 @@ golang.org/x/sys v0.0.0-20220408201424-a24fb2fb8a0f/go.mod h1:oPkhp1MJrh7nUepCBc
golang.org/x/sys v0.0.0-20220520151302-bc2c85ada10a/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.0.0-20220722155257-8c9f86f7a55f/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.0.0-20220811171246-fbc7d0a398ab/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.1.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.5.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.6.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.8.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.12.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.13.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.15.0 h1:h48lPFYpsTvQJZF4EKyI4aLHaev3CxivZmv7yZig9pc=
golang.org/x/sys v0.15.0/go.mod h1:/VUhepiaJMQUp4+oa/7Zr1D23ma6VTLIYjOOTFZPUcA=
golang.org/x/sys v0.17.0 h1:25cE3gD+tdBA7lp7QfhuV+rJiE9YXTcS3VG1SqssI/Y=
golang.org/x/sys v0.17.0/go.mod h1:/VUhepiaJMQUp4+oa/7Zr1D23ma6VTLIYjOOTFZPUcA=
golang.org/x/term v0.0.0-20201126162022-7de9c90e9dd1/go.mod h1:bj7SfCRtBDWHUb9snDiAeCFNEtKQo2Wmx5Cou7ajbmo=
golang.org/x/term v0.0.0-20210927222741-03fcf44c2211/go.mod h1:jbD1KX2456YbFQfuXm/mYQcufACuNUgVhRMnK/tPxf8=
golang.org/x/term v0.5.0/go.mod h1:jMB1sMXY+tzblOD4FWmEbocvup2/aLOaQEp7JmGp78k=
@@ -163,14 +181,15 @@ golang.org/x/text v0.7.0/go.mod h1:mrYo+phRRbMaCq/xk9113O4dZlRixOauAjOtrjsXDZ8=
golang.org/x/text v0.9.0/go.mod h1:e1OnstbJyHTd6l/uOt8jFFHp6TRDWZR/bV3emEE/zU8=
golang.org/x/text v0.13.0/go.mod h1:TvPlkZtksWOMsz7fbANvkp4WM8x/WCo/om8BMLbz+aE=
golang.org/x/text v0.14.0 h1:ScX5w1eTa3QqT8oi6+ziP7dTV1S2+ALU0bI+0zXKWiQ=
golang.org/x/text v0.14.0/go.mod h1:18ZOQIKpY8NJVqYksKHtTdi31H5itFRjB5/qKTNYzSU=
golang.org/x/tools v0.0.0-20180917221912-90fa682c2a6e/go.mod h1:n7NCudcB/nEzxVGmLbDWY5pfWTLqBcC2KZ6jyYvM4mQ=
golang.org/x/tools v0.0.0-20191119224855-298f0cb1881e/go.mod h1:b+2E5dAYhXwXZwtnZ6UAqBI28+e2cm9otk0dWdXHAEo=
golang.org/x/tools v0.1.12/go.mod h1:hNGJHUnrk76NpqgfD5Aqm5Crs+Hm0VOH/i9J2+nxYbc=
golang.org/x/tools v0.6.0/go.mod h1:Xwgl3UAJ/d3gWutnCtw505GrjyAbvKui8lOU390QaIU=
golang.org/x/xerrors v0.0.0-20190717185122-a985d3407aa7/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
gopkg.in/check.v1 v0.0.0-20161208181325-20d25e280405/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0=
gopkg.in/yaml.v2 v2.2.8 h1:obN1ZagJSUGI0Ek/LBmuj4SNLPfIny3KsKFopxRdj10=
gopkg.in/yaml.v2 v2.2.8/go.mod h1:hI93XBmqTisBFMUTm0b8Fm+jr3Dg1NNxqwp+5A1VGuI=
gopkg.in/yaml.v2 v2.4.0 h1:D8xgwECY7CYvx+Y2n4sBz93Jn9JRvxdiyyo8CTfuKaY=
gopkg.in/yaml.v3 v3.0.0-20200313102051-9f266ea9e77c/go.mod h1:K4uyk7z7BCEPqu6E+C64Yfv1cQ7kz7rIZviUmN+EgEM=
gopkg.in/yaml.v3 v3.0.1 h1:fxVm/GzAzEWqLHuvctI91KS9hhNmmWOoWu0XTYJS7CA=
gopkg.in/yaml.v3 v3.0.1/go.mod h1:K4uyk7z7BCEPqu6E+C64Yfv1cQ7kz7rIZviUmN+EgEM=

View File

@@ -22,10 +22,36 @@ func TestAuthentication(s *S3Conf) {
Authentication_signature_error_incorrect_secret_key(s)
}
func TestPresignedAuthentication(s *S3Conf) {
PresignedAuth_missing_algo_query_param(s)
PresignedAuth_unsupported_algorithm(s)
PresignedAuth_missing_credentials_query_param(s)
PresignedAuth_malformed_creds_invalid_parts(s)
PresignedAuth_creds_invalid_terminator(s)
PresignedAuth_creds_incorrect_service(s)
PresignedAuth_creds_incorrect_region(s)
PresignedAuth_creds_invalid_date(s)
PresignedAuth_missing_date_query(s)
PresignedAuth_dates_mismatch(s)
PresignedAuth_non_existing_access_key_id(s)
PresignedAuth_missing_signed_headers_query_param(s)
PresignedAuth_missing_expiration_query_param(s)
PresignedAuth_invalid_expiration_query_param(s)
PresignedAuth_negative_expiration_query_param(s)
PresignedAuth_exceeding_expiration_query_param(s)
PresignedAuth_expired_request(s)
PresignedAuth_incorrect_secret_key(s)
PresignedAuth_PutObject_success(s)
PresignedAuth_Put_GetObject_with_data(s)
PresignedAuth_UploadPart(s)
}
func TestCreateBucket(s *S3Conf) {
CreateBucket_invalid_bucket_name(s)
CreateBucket_existing_bucket(s)
CreateBucket_as_user(s)
CreateBucket_default_acl(s)
CreateBucket_non_default_acl(s)
CreateDeleteBucket_success(s)
}
@@ -46,6 +72,23 @@ func TestDeleteBucket(s *S3Conf) {
DeleteBucket_success_status_code(s)
}
func TestPutBucketTagging(s *S3Conf) {
PutBucketTagging_non_existing_bucket(s)
PutBucketTagging_long_tags(s)
PutBucketTagging_success(s)
}
func TestGetBucketTagging(s *S3Conf) {
GetBucketTagging_non_existing_bucket(s)
GetBucketTagging_success(s)
}
func TestDeleteBucketTagging(s *S3Conf) {
DeleteBucketTagging_non_existing_object(s)
DeleteBucketTagging_success_status(s)
DeleteBucketTagging_success(s)
}
func TestPutObject(s *S3Conf) {
PutObject_non_existing_bucket(s)
PutObject_special_chars(s)
@@ -78,6 +121,13 @@ func TestListObjects(s *S3Conf) {
ListObjects_marker_not_from_obj_list(s)
}
func TestListObjectsV2(s *S3Conf) {
ListObjectsV2_start_after(s)
ListObjectsV2_both_start_after_and_continuation_token(s)
ListObjectsV2_start_after_not_in_list(s)
ListObjectsV2_start_after_empty_result(s)
}
func TestDeleteObject(s *S3Conf) {
DeleteObject_non_existing_object(s)
DeleteObject_success(s)
@@ -193,14 +243,19 @@ func TestGetBucketAcl(s *S3Conf) {
func TestFullFlow(s *S3Conf) {
TestAuthentication(s)
TestPresignedAuthentication(s)
TestCreateBucket(s)
TestHeadBucket(s)
TestListBuckets(s)
TestDeleteBucket(s)
TestPutBucketTagging(s)
TestGetBucketTagging(s)
TestDeleteBucketTagging(s)
TestPutObject(s)
TestHeadObject(s)
TestGetObject(s)
TestListObjects(s)
TestListObjectsV2(s)
TestDeleteObject(s)
TestDeleteObjects(s)
TestCopyObject(s)
@@ -228,128 +283,162 @@ type IntTests map[string]func(s *S3Conf) error
func GetIntTests() IntTests {
return IntTests{
"Authentication_empty_auth_header": Authentication_empty_auth_header,
"Authentication_invalid_auth_header": Authentication_invalid_auth_header,
"Authentication_unsupported_signature_version": Authentication_unsupported_signature_version,
"Authentication_malformed_credentials": Authentication_malformed_credentials,
"Authentication_malformed_credentials_invalid_parts": Authentication_malformed_credentials_invalid_parts,
"Authentication_credentials_terminated_string": Authentication_credentials_terminated_string,
"Authentication_credentials_incorrect_service": Authentication_credentials_incorrect_service,
"Authentication_credentials_incorrect_region": Authentication_credentials_incorrect_region,
"Authentication_credentials_invalid_date": Authentication_credentials_invalid_date,
"Authentication_credentials_future_date": Authentication_credentials_future_date,
"Authentication_credentials_past_date": Authentication_credentials_past_date,
"Authentication_credentials_non_existing_access_key": Authentication_credentials_non_existing_access_key,
"Authentication_invalid_signed_headers": Authentication_invalid_signed_headers,
"Authentication_missing_date_header": Authentication_missing_date_header,
"Authentication_invalid_date_header": Authentication_invalid_date_header,
"Authentication_date_mismatch": Authentication_date_mismatch,
"Authentication_incorrect_payload_hash": Authentication_incorrect_payload_hash,
"Authentication_incorrect_md5": Authentication_incorrect_md5,
"Authentication_signature_error_incorrect_secret_key": Authentication_signature_error_incorrect_secret_key,
"CreateBucket_invalid_bucket_name": CreateBucket_invalid_bucket_name,
"CreateBucket_existing_bucket": CreateBucket_existing_bucket,
"CreateBucket_as_user": CreateBucket_as_user,
"CreateDeleteBucket_success": CreateDeleteBucket_success,
"HeadBucket_non_existing_bucket": HeadBucket_non_existing_bucket,
"HeadBucket_success": HeadBucket_success,
"ListBuckets_as_user": ListBuckets_as_user,
"ListBuckets_as_admin": ListBuckets_as_admin,
"ListBuckets_success": ListBuckets_success,
"DeleteBucket_non_existing_bucket": DeleteBucket_non_existing_bucket,
"DeleteBucket_non_empty_bucket": DeleteBucket_non_empty_bucket,
"DeleteBucket_success_status_code": DeleteBucket_success_status_code,
"PutObject_non_existing_bucket": PutObject_non_existing_bucket,
"PutObject_special_chars": PutObject_special_chars,
"PutObject_invalid_long_tags": PutObject_invalid_long_tags,
"PutObject_success": PutObject_success,
"PutObject_invalid_credentials": PutObject_invalid_credentials,
"HeadObject_non_existing_object": HeadObject_non_existing_object,
"HeadObject_success": HeadObject_success,
"GetObject_non_existing_key": GetObject_non_existing_key,
"GetObject_invalid_ranges": GetObject_invalid_ranges,
"GetObject_with_meta": GetObject_with_meta,
"GetObject_success": GetObject_success,
"GetObject_by_range_success": GetObject_by_range_success,
"ListObjects_non_existing_bucket": ListObjects_non_existing_bucket,
"ListObjects_with_prefix": ListObjects_with_prefix,
"ListObject_truncated": ListObject_truncated,
"ListObjects_invalid_max_keys": ListObjects_invalid_max_keys,
"ListObjects_max_keys_0": ListObjects_max_keys_0,
"ListObjects_delimiter": ListObjects_delimiter,
"ListObjects_max_keys_none": ListObjects_max_keys_none,
"ListObjects_marker_not_from_obj_list": ListObjects_marker_not_from_obj_list,
"DeleteObject_non_existing_object": DeleteObject_non_existing_object,
"DeleteObject_success": DeleteObject_success,
"DeleteObject_success_status_code": DeleteObject_success_status_code,
"DeleteObjects_empty_input": DeleteObjects_empty_input,
"DeleteObjects_non_existing_objects": DeleteObjects_non_existing_objects,
"DeleteObjects_success": DeleteObjects_success,
"CopyObject_non_existing_dst_bucket": CopyObject_non_existing_dst_bucket,
"CopyObject_not_owned_source_bucket": CopyObject_not_owned_source_bucket,
"CopyObject_copy_to_itself": CopyObject_copy_to_itself,
"CopyObject_to_itself_with_new_metadata": CopyObject_to_itself_with_new_metadata,
"CopyObject_success": CopyObject_success,
"PutObjectTagging_non_existing_object": PutObjectTagging_non_existing_object,
"PutObjectTagging_long_tags": PutObjectTagging_long_tags,
"PutObjectTagging_success": PutObjectTagging_success,
"GetObjectTagging_non_existing_object": GetObjectTagging_non_existing_object,
"GetObjectTagging_success": GetObjectTagging_success,
"DeleteObjectTagging_non_existing_object": DeleteObjectTagging_non_existing_object,
"DeleteObjectTagging_success_status": DeleteObjectTagging_success_status,
"DeleteObjectTagging_success": DeleteObjectTagging_success,
"CreateMultipartUpload_non_existing_bucket": CreateMultipartUpload_non_existing_bucket,
"CreateMultipartUpload_success": CreateMultipartUpload_success,
"UploadPart_non_existing_bucket": UploadPart_non_existing_bucket,
"UploadPart_invalid_part_number": UploadPart_invalid_part_number,
"UploadPart_non_existing_key": UploadPart_non_existing_key,
"UploadPart_non_existing_mp_upload": UploadPart_non_existing_mp_upload,
"UploadPart_success": UploadPart_success,
"UploadPartCopy_non_existing_bucket": UploadPartCopy_non_existing_bucket,
"UploadPartCopy_incorrect_uploadId": UploadPartCopy_incorrect_uploadId,
"UploadPartCopy_incorrect_object_key": UploadPartCopy_incorrect_object_key,
"UploadPartCopy_invalid_part_number": UploadPartCopy_invalid_part_number,
"UploadPartCopy_invalid_copy_source": UploadPartCopy_invalid_copy_source,
"UploadPartCopy_non_existing_source_bucket": UploadPartCopy_non_existing_source_bucket,
"UploadPartCopy_non_existing_source_object_key": UploadPartCopy_non_existing_source_object_key,
"UploadPartCopy_success": UploadPartCopy_success,
"UploadPartCopy_by_range_invalid_range": UploadPartCopy_by_range_invalid_range,
"UploadPartCopy_greater_range_than_obj_size": UploadPartCopy_greater_range_than_obj_size,
"UploadPartCopy_by_range_success": UploadPartCopy_by_range_success,
"ListParts_incorrect_uploadId": ListParts_incorrect_uploadId,
"ListParts_incorrect_object_key": ListParts_incorrect_object_key,
"ListParts_success": ListParts_success,
"ListMultipartUploads_non_existing_bucket": ListMultipartUploads_non_existing_bucket,
"ListMultipartUploads_empty_result": ListMultipartUploads_empty_result,
"ListMultipartUploads_invalid_max_uploads": ListMultipartUploads_invalid_max_uploads,
"ListMultipartUploads_max_uploads": ListMultipartUploads_max_uploads,
"ListMultipartUploads_incorrect_next_key_marker": ListMultipartUploads_incorrect_next_key_marker,
"ListMultipartUploads_ignore_upload_id_marker": ListMultipartUploads_ignore_upload_id_marker,
"ListMultipartUploads_success": ListMultipartUploads_success,
"AbortMultipartUpload_non_existing_bucket": AbortMultipartUpload_non_existing_bucket,
"AbortMultipartUpload_incorrect_uploadId": AbortMultipartUpload_incorrect_uploadId,
"AbortMultipartUpload_incorrect_object_key": AbortMultipartUpload_incorrect_object_key,
"AbortMultipartUpload_success": AbortMultipartUpload_success,
"AbortMultipartUpload_success_status_code": AbortMultipartUpload_success_status_code,
"CompletedMultipartUpload_non_existing_bucket": CompletedMultipartUpload_non_existing_bucket,
"CompleteMultipartUpload_invalid_part_number": CompleteMultipartUpload_invalid_part_number,
"CompleteMultipartUpload_invalid_ETag": CompleteMultipartUpload_invalid_ETag,
"CompleteMultipartUpload_success": CompleteMultipartUpload_success,
"PutBucketAcl_non_existing_bucket": PutBucketAcl_non_existing_bucket,
"PutBucketAcl_invalid_acl_canned_and_acp": PutBucketAcl_invalid_acl_canned_and_acp,
"PutBucketAcl_invalid_acl_canned_and_grants": PutBucketAcl_invalid_acl_canned_and_grants,
"PutBucketAcl_invalid_acl_acp_and_grants": PutBucketAcl_invalid_acl_acp_and_grants,
"PutBucketAcl_invalid_owner": PutBucketAcl_invalid_owner,
"PutBucketAcl_success_access_denied": PutBucketAcl_success_access_denied,
"PutBucketAcl_success_grants": PutBucketAcl_success_grants,
"PutBucketAcl_success_canned_acl": PutBucketAcl_success_canned_acl,
"PutBucketAcl_success_acp": PutBucketAcl_success_acp,
"GetBucketAcl_non_existing_bucket": GetBucketAcl_non_existing_bucket,
"GetBucketAcl_access_denied": GetBucketAcl_access_denied,
"GetBucketAcl_success": GetBucketAcl_success,
"PutObject_overwrite_dir_obj": PutObject_overwrite_dir_obj,
"PutObject_overwrite_file_obj": PutObject_overwrite_file_obj,
"PutObject_dir_obj_with_data": PutObject_dir_obj_with_data,
"CreateMultipartUpload_dir_obj": CreateMultipartUpload_dir_obj,
"Authentication_empty_auth_header": Authentication_empty_auth_header,
"Authentication_invalid_auth_header": Authentication_invalid_auth_header,
"Authentication_unsupported_signature_version": Authentication_unsupported_signature_version,
"Authentication_malformed_credentials": Authentication_malformed_credentials,
"Authentication_malformed_credentials_invalid_parts": Authentication_malformed_credentials_invalid_parts,
"Authentication_credentials_terminated_string": Authentication_credentials_terminated_string,
"Authentication_credentials_incorrect_service": Authentication_credentials_incorrect_service,
"Authentication_credentials_incorrect_region": Authentication_credentials_incorrect_region,
"Authentication_credentials_invalid_date": Authentication_credentials_invalid_date,
"Authentication_credentials_future_date": Authentication_credentials_future_date,
"Authentication_credentials_past_date": Authentication_credentials_past_date,
"Authentication_credentials_non_existing_access_key": Authentication_credentials_non_existing_access_key,
"Authentication_invalid_signed_headers": Authentication_invalid_signed_headers,
"Authentication_missing_date_header": Authentication_missing_date_header,
"Authentication_invalid_date_header": Authentication_invalid_date_header,
"Authentication_date_mismatch": Authentication_date_mismatch,
"Authentication_incorrect_payload_hash": Authentication_incorrect_payload_hash,
"Authentication_incorrect_md5": Authentication_incorrect_md5,
"Authentication_signature_error_incorrect_secret_key": Authentication_signature_error_incorrect_secret_key,
"PresignedAuth_missing_algo_query_param": PresignedAuth_missing_algo_query_param,
"PresignedAuth_unsupported_algorithm": PresignedAuth_unsupported_algorithm,
"PresignedAuth_missing_credentials_query_param": PresignedAuth_missing_credentials_query_param,
"PresignedAuth_malformed_creds_invalid_parts": PresignedAuth_malformed_creds_invalid_parts,
"PresignedAuth_creds_invalid_terminator": PresignedAuth_creds_invalid_terminator,
"PresignedAuth_creds_incorrect_service": PresignedAuth_creds_incorrect_service,
"PresignedAuth_creds_incorrect_region": PresignedAuth_creds_incorrect_region,
"PresignedAuth_creds_invalid_date": PresignedAuth_creds_invalid_date,
"PresignedAuth_missing_date_query": PresignedAuth_missing_date_query,
"PresignedAuth_dates_mismatch": PresignedAuth_dates_mismatch,
"PresignedAuth_non_existing_access_key_id": PresignedAuth_non_existing_access_key_id,
"PresignedAuth_missing_signed_headers_query_param": PresignedAuth_missing_signed_headers_query_param,
"PresignedAuth_missing_expiration_query_param": PresignedAuth_missing_expiration_query_param,
"PresignedAuth_invalid_expiration_query_param": PresignedAuth_invalid_expiration_query_param,
"PresignedAuth_negative_expiration_query_param": PresignedAuth_negative_expiration_query_param,
"PresignedAuth_exceeding_expiration_query_param": PresignedAuth_exceeding_expiration_query_param,
"PresignedAuth_expired_request": PresignedAuth_expired_request,
"PresignedAuth_incorrect_secret_key": PresignedAuth_incorrect_secret_key,
"PresignedAuth_PutObject_success": PresignedAuth_PutObject_success,
"PresignedAuth_Put_GetObject_with_data": PresignedAuth_Put_GetObject_with_data,
"PresignedAuth_UploadPart": PresignedAuth_UploadPart,
"CreateBucket_invalid_bucket_name": CreateBucket_invalid_bucket_name,
"CreateBucket_existing_bucket": CreateBucket_existing_bucket,
"CreateBucket_as_user": CreateBucket_as_user,
"CreateDeleteBucket_success": CreateDeleteBucket_success,
"CreateBucket_default_acl": CreateBucket_default_acl,
"CreateBucket_non_default_acl": CreateBucket_non_default_acl,
"HeadBucket_non_existing_bucket": HeadBucket_non_existing_bucket,
"HeadBucket_success": HeadBucket_success,
"ListBuckets_as_user": ListBuckets_as_user,
"ListBuckets_as_admin": ListBuckets_as_admin,
"ListBuckets_success": ListBuckets_success,
"DeleteBucket_non_existing_bucket": DeleteBucket_non_existing_bucket,
"DeleteBucket_non_empty_bucket": DeleteBucket_non_empty_bucket,
"DeleteBucket_success_status_code": DeleteBucket_success_status_code,
"PutBucketTagging_non_existing_bucket": PutBucketTagging_non_existing_bucket,
"PutBucketTagging_long_tags": PutBucketTagging_long_tags,
"PutBucketTagging_success": PutBucketTagging_success,
"GetBucketTagging_non_existing_bucket": GetBucketTagging_non_existing_bucket,
"GetBucketTagging_success": GetBucketTagging_success,
"DeleteBucketTagging_non_existing_object": DeleteBucketTagging_non_existing_object,
"DeleteBucketTagging_success_status": DeleteBucketTagging_success_status,
"DeleteBucketTagging_success": DeleteBucketTagging_success,
"PutObject_non_existing_bucket": PutObject_non_existing_bucket,
"PutObject_special_chars": PutObject_special_chars,
"PutObject_invalid_long_tags": PutObject_invalid_long_tags,
"PutObject_success": PutObject_success,
"HeadObject_non_existing_object": HeadObject_non_existing_object,
"HeadObject_success": HeadObject_success,
"GetObject_non_existing_key": GetObject_non_existing_key,
"GetObject_invalid_ranges": GetObject_invalid_ranges,
"GetObject_with_meta": GetObject_with_meta,
"GetObject_success": GetObject_success,
"GetObject_by_range_success": GetObject_by_range_success,
"ListObjects_non_existing_bucket": ListObjects_non_existing_bucket,
"ListObjects_with_prefix": ListObjects_with_prefix,
"ListObject_truncated": ListObject_truncated,
"ListObjects_invalid_max_keys": ListObjects_invalid_max_keys,
"ListObjects_max_keys_0": ListObjects_max_keys_0,
"ListObjects_delimiter": ListObjects_delimiter,
"ListObjects_max_keys_none": ListObjects_max_keys_none,
"ListObjects_marker_not_from_obj_list": ListObjects_marker_not_from_obj_list,
"ListObjectsV2_start_after": ListObjectsV2_start_after,
"ListObjectsV2_both_start_after_and_continuation_token": ListObjectsV2_both_start_after_and_continuation_token,
"ListObjectsV2_start_after_not_in_list": ListObjectsV2_start_after_not_in_list,
"ListObjectsV2_start_after_empty_result": ListObjectsV2_start_after_empty_result,
"DeleteObject_non_existing_object": DeleteObject_non_existing_object,
"DeleteObject_success": DeleteObject_success,
"DeleteObject_success_status_code": DeleteObject_success_status_code,
"DeleteObjects_empty_input": DeleteObjects_empty_input,
"DeleteObjects_non_existing_objects": DeleteObjects_non_existing_objects,
"DeleteObjects_success": DeleteObjects_success,
"CopyObject_non_existing_dst_bucket": CopyObject_non_existing_dst_bucket,
"CopyObject_not_owned_source_bucket": CopyObject_not_owned_source_bucket,
"CopyObject_copy_to_itself": CopyObject_copy_to_itself,
"CopyObject_to_itself_with_new_metadata": CopyObject_to_itself_with_new_metadata,
"CopyObject_success": CopyObject_success,
"PutObjectTagging_non_existing_object": PutObjectTagging_non_existing_object,
"PutObjectTagging_long_tags": PutObjectTagging_long_tags,
"PutObjectTagging_success": PutObjectTagging_success,
"GetObjectTagging_non_existing_object": GetObjectTagging_non_existing_object,
"GetObjectTagging_success": GetObjectTagging_success,
"DeleteObjectTagging_non_existing_object": DeleteObjectTagging_non_existing_object,
"DeleteObjectTagging_success_status": DeleteObjectTagging_success_status,
"DeleteObjectTagging_success": DeleteObjectTagging_success,
"CreateMultipartUpload_non_existing_bucket": CreateMultipartUpload_non_existing_bucket,
"CreateMultipartUpload_success": CreateMultipartUpload_success,
"UploadPart_non_existing_bucket": UploadPart_non_existing_bucket,
"UploadPart_invalid_part_number": UploadPart_invalid_part_number,
"UploadPart_non_existing_key": UploadPart_non_existing_key,
"UploadPart_non_existing_mp_upload": UploadPart_non_existing_mp_upload,
"UploadPart_success": UploadPart_success,
"UploadPartCopy_non_existing_bucket": UploadPartCopy_non_existing_bucket,
"UploadPartCopy_incorrect_uploadId": UploadPartCopy_incorrect_uploadId,
"UploadPartCopy_incorrect_object_key": UploadPartCopy_incorrect_object_key,
"UploadPartCopy_invalid_part_number": UploadPartCopy_invalid_part_number,
"UploadPartCopy_invalid_copy_source": UploadPartCopy_invalid_copy_source,
"UploadPartCopy_non_existing_source_bucket": UploadPartCopy_non_existing_source_bucket,
"UploadPartCopy_non_existing_source_object_key": UploadPartCopy_non_existing_source_object_key,
"UploadPartCopy_success": UploadPartCopy_success,
"UploadPartCopy_by_range_invalid_range": UploadPartCopy_by_range_invalid_range,
"UploadPartCopy_greater_range_than_obj_size": UploadPartCopy_greater_range_than_obj_size,
"UploadPartCopy_by_range_success": UploadPartCopy_by_range_success,
"ListParts_incorrect_uploadId": ListParts_incorrect_uploadId,
"ListParts_incorrect_object_key": ListParts_incorrect_object_key,
"ListParts_success": ListParts_success,
"ListMultipartUploads_non_existing_bucket": ListMultipartUploads_non_existing_bucket,
"ListMultipartUploads_empty_result": ListMultipartUploads_empty_result,
"ListMultipartUploads_invalid_max_uploads": ListMultipartUploads_invalid_max_uploads,
"ListMultipartUploads_max_uploads": ListMultipartUploads_max_uploads,
"ListMultipartUploads_incorrect_next_key_marker": ListMultipartUploads_incorrect_next_key_marker,
"ListMultipartUploads_ignore_upload_id_marker": ListMultipartUploads_ignore_upload_id_marker,
"ListMultipartUploads_success": ListMultipartUploads_success,
"AbortMultipartUpload_non_existing_bucket": AbortMultipartUpload_non_existing_bucket,
"AbortMultipartUpload_incorrect_uploadId": AbortMultipartUpload_incorrect_uploadId,
"AbortMultipartUpload_incorrect_object_key": AbortMultipartUpload_incorrect_object_key,
"AbortMultipartUpload_success": AbortMultipartUpload_success,
"AbortMultipartUpload_success_status_code": AbortMultipartUpload_success_status_code,
"CompletedMultipartUpload_non_existing_bucket": CompletedMultipartUpload_non_existing_bucket,
"CompleteMultipartUpload_invalid_part_number": CompleteMultipartUpload_invalid_part_number,
"CompleteMultipartUpload_invalid_ETag": CompleteMultipartUpload_invalid_ETag,
"CompleteMultipartUpload_success": CompleteMultipartUpload_success,
"PutBucketAcl_non_existing_bucket": PutBucketAcl_non_existing_bucket,
"PutBucketAcl_invalid_acl_canned_and_acp": PutBucketAcl_invalid_acl_canned_and_acp,
"PutBucketAcl_invalid_acl_canned_and_grants": PutBucketAcl_invalid_acl_canned_and_grants,
"PutBucketAcl_invalid_acl_acp_and_grants": PutBucketAcl_invalid_acl_acp_and_grants,
"PutBucketAcl_invalid_owner": PutBucketAcl_invalid_owner,
"PutBucketAcl_success_access_denied": PutBucketAcl_success_access_denied,
"PutBucketAcl_success_grants": PutBucketAcl_success_grants,
"PutBucketAcl_success_canned_acl": PutBucketAcl_success_canned_acl,
"PutBucketAcl_success_acp": PutBucketAcl_success_acp,
"GetBucketAcl_non_existing_bucket": GetBucketAcl_non_existing_bucket,
"GetBucketAcl_access_denied": GetBucketAcl_access_denied,
"GetBucketAcl_success": GetBucketAcl_success,
"PutObject_overwrite_dir_obj": PutObject_overwrite_dir_obj,
"PutObject_overwrite_file_obj": PutObject_overwrite_file_obj,
"PutObject_dir_obj_with_data": PutObject_dir_obj_with_data,
"CreateMultipartUpload_dir_obj": CreateMultipartUpload_dir_obj,
}
}
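A minimal dispatch sketch for the registry above, assuming the map is exposed as map[string]func(*S3Conf) error (the signature implied by the entries, but not shown in this diff):

func runIntTest(s *S3Conf, tests map[string]func(*S3Conf) error, name string) error {
	// Look the test up by its string name and run it against the gateway config.
	fn, ok := tests[name]
	if !ok {
		return fmt.Errorf("unknown integration test: %q", name)
	}
	return fn(s)
}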

File diff suppressed because it is too large


@@ -12,6 +12,7 @@ import (
"io"
rnd "math/rand"
"net/http"
"net/url"
"os"
"os/exec"
"strings"
@@ -81,7 +82,7 @@ func teardown(s *S3Conf, bucket string) error {
}
}
if *out.IsTruncated {
if out.IsTruncated != nil && *out.IsTruncated {
in.ContinuationToken = out.ContinuationToken
} else {
break
@@ -150,6 +151,20 @@ func authHandler(s *S3Conf, cfg *authConfig, handler func(req *http.Request) err
return nil
}
func presignedAuthHandler(s *S3Conf, testName string, handler func(client *s3.PresignClient) error) error {
runF(testName)
clt := s3.NewPresignClient(s3.NewFromConfig(s.Config()))
err := handler(clt)
if err != nil {
failF("%v: %v", testName, err)
return fmt.Errorf("%v: %w", testName, err)
}
passF(testName)
return nil
}
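// A minimal sketch of a handler as passed to presignedAuthHandler above,
// assuming the aws-sdk-go-v2 PresignClient API; the bucket, key, and body
// below are placeholders, not values from this diff.
func examplePresignedPut(s *S3Conf) error {
	return presignedAuthHandler(s, "PresignedAuth_PutObject_example", func(client *s3.PresignClient) error {
		// Generate a presigned PUT URL, then exercise it with plain net/http.
		v4req, err := client.PresignPutObject(context.Background(), &s3.PutObjectInput{
			Bucket: aws.String("my-bucket"),
			Key:    aws.String("my-obj"),
		})
		if err != nil {
			return err
		}
		httpReq, err := http.NewRequest(http.MethodPut, v4req.URL, strings.NewReader("hello world"))
		if err != nil {
			return err
		}
		resp, err := http.DefaultClient.Do(httpReq)
		if err != nil {
			return err
		}
		defer resp.Body.Close()
		if resp.StatusCode != http.StatusOK {
			return fmt.Errorf("expected status 200, got %v", resp.StatusCode)
		}
		return nil
	})
}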
func createSignedReq(method, endpoint, path, access, secret, service, region string, body []byte, date time.Time) (*http.Request, error) {
req, err := http.NewRequest(method, fmt.Sprintf("%v/%v", endpoint, path), bytes.NewReader(body))
if err != nil {
@@ -215,7 +230,7 @@ func checkSdkApiErr(err error, code string) error {
var ae smithy.APIError
if errors.As(err, &ae) {
if ae.ErrorCode() != code {
return fmt.Errorf("expected %v, instead got %v", ae.ErrorCode(), code)
return fmt.Errorf("expected %v, instead got %v", code, ae.ErrorCode())
}
return nil
}
@@ -551,3 +566,26 @@ func genRandString(length int) string {
}
return string(result)
}
const (
credAccess int = iota
credDate
credRegion
credService
credTerminator
)
func changeAuthCred(uri, newVal string, index int) (string, error) {
urlParsed, err := url.Parse(uri)
if err != nil {
return "", err
}
queries := urlParsed.Query()
creds := strings.Split(queries.Get("X-Amz-Credential"), "/")
creds[index] = newVal
queries.Set("X-Amz-Credential", strings.Join(creds, "/"))
urlParsed.RawQuery = queries.Encode()
return urlParsed.String(), nil
}
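// A short sketch of how the helper above can corrupt one element of the
// X-Amz-Credential scope for a negative test; presignedURL is a placeholder
// for a URL produced by the presign client.
func exampleTamperRegion(presignedURL string) (string, error) {
	// Replace the region element of the credential scope; the gateway's
	// signature check should then reject a request to the returned URL.
	return changeAuthCred(presignedURL, "fake-region", credRegion)
}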


@@ -35,7 +35,7 @@ var _ backend.Backend = &BackendMock{}
// CopyObjectFunc: func(contextMoqParam context.Context, copyObjectInput *s3.CopyObjectInput) (*s3.CopyObjectOutput, error) {
// panic("mock out the CopyObject method")
// },
// CreateBucketFunc: func(contextMoqParam context.Context, createBucketInput *s3.CreateBucketInput) error {
// CreateBucketFunc: func(contextMoqParam context.Context, createBucketInput *s3.CreateBucketInput, defaultACL []byte) error {
// panic("mock out the CreateBucket method")
// },
// CreateMultipartUploadFunc: func(contextMoqParam context.Context, createMultipartUploadInput *s3.CreateMultipartUploadInput) (*s3.CreateMultipartUploadOutput, error) {
@@ -44,6 +44,9 @@ var _ backend.Backend = &BackendMock{}
// DeleteBucketFunc: func(contextMoqParam context.Context, deleteBucketInput *s3.DeleteBucketInput) error {
// panic("mock out the DeleteBucket method")
// },
// DeleteBucketTaggingFunc: func(contextMoqParam context.Context, bucket string) error {
// panic("mock out the DeleteBucketTagging method")
// },
// DeleteObjectFunc: func(contextMoqParam context.Context, deleteObjectInput *s3.DeleteObjectInput) error {
// panic("mock out the DeleteObject method")
// },
@@ -56,6 +59,9 @@ var _ backend.Backend = &BackendMock{}
// GetBucketAclFunc: func(contextMoqParam context.Context, getBucketAclInput *s3.GetBucketAclInput) ([]byte, error) {
// panic("mock out the GetBucketAcl method")
// },
// GetBucketTaggingFunc: func(contextMoqParam context.Context, bucket string) (map[string]string, error) {
// panic("mock out the GetBucketTagging method")
// },
// GetObjectFunc: func(contextMoqParam context.Context, getObjectInput *s3.GetObjectInput, writer io.Writer) (*s3.GetObjectOutput, error) {
// panic("mock out the GetObject method")
// },
@@ -95,6 +101,9 @@ var _ backend.Backend = &BackendMock{}
// PutBucketAclFunc: func(contextMoqParam context.Context, bucket string, data []byte) error {
// panic("mock out the PutBucketAcl method")
// },
// PutBucketTaggingFunc: func(contextMoqParam context.Context, bucket string, tags map[string]string) error {
// panic("mock out the PutBucketTagging method")
// },
// PutObjectFunc: func(contextMoqParam context.Context, putObjectInput *s3.PutObjectInput) (string, error) {
// panic("mock out the PutObject method")
// },
@@ -142,7 +151,7 @@ type BackendMock struct {
CopyObjectFunc func(contextMoqParam context.Context, copyObjectInput *s3.CopyObjectInput) (*s3.CopyObjectOutput, error)
// CreateBucketFunc mocks the CreateBucket method.
CreateBucketFunc func(contextMoqParam context.Context, createBucketInput *s3.CreateBucketInput) error
CreateBucketFunc func(contextMoqParam context.Context, createBucketInput *s3.CreateBucketInput, defaultACL []byte) error
// CreateMultipartUploadFunc mocks the CreateMultipartUpload method.
CreateMultipartUploadFunc func(contextMoqParam context.Context, createMultipartUploadInput *s3.CreateMultipartUploadInput) (*s3.CreateMultipartUploadOutput, error)
@@ -150,6 +159,9 @@ type BackendMock struct {
// DeleteBucketFunc mocks the DeleteBucket method.
DeleteBucketFunc func(contextMoqParam context.Context, deleteBucketInput *s3.DeleteBucketInput) error
// DeleteBucketTaggingFunc mocks the DeleteBucketTagging method.
DeleteBucketTaggingFunc func(contextMoqParam context.Context, bucket string) error
// DeleteObjectFunc mocks the DeleteObject method.
DeleteObjectFunc func(contextMoqParam context.Context, deleteObjectInput *s3.DeleteObjectInput) error
@@ -162,6 +174,9 @@ type BackendMock struct {
// GetBucketAclFunc mocks the GetBucketAcl method.
GetBucketAclFunc func(contextMoqParam context.Context, getBucketAclInput *s3.GetBucketAclInput) ([]byte, error)
// GetBucketTaggingFunc mocks the GetBucketTagging method.
GetBucketTaggingFunc func(contextMoqParam context.Context, bucket string) (map[string]string, error)
// GetObjectFunc mocks the GetObject method.
GetObjectFunc func(contextMoqParam context.Context, getObjectInput *s3.GetObjectInput, writer io.Writer) (*s3.GetObjectOutput, error)
@@ -201,6 +216,9 @@ type BackendMock struct {
// PutBucketAclFunc mocks the PutBucketAcl method.
PutBucketAclFunc func(contextMoqParam context.Context, bucket string, data []byte) error
// PutBucketTaggingFunc mocks the PutBucketTagging method.
PutBucketTaggingFunc func(contextMoqParam context.Context, bucket string, tags map[string]string) error
// PutObjectFunc mocks the PutObject method.
PutObjectFunc func(contextMoqParam context.Context, putObjectInput *s3.PutObjectInput) (string, error)
@@ -266,6 +284,8 @@ type BackendMock struct {
ContextMoqParam context.Context
// CreateBucketInput is the createBucketInput argument value.
CreateBucketInput *s3.CreateBucketInput
// DefaultACL is the defaultACL argument value.
DefaultACL []byte
}
// CreateMultipartUpload holds details about calls to the CreateMultipartUpload method.
CreateMultipartUpload []struct {
@@ -281,6 +301,13 @@ type BackendMock struct {
// DeleteBucketInput is the deleteBucketInput argument value.
DeleteBucketInput *s3.DeleteBucketInput
}
// DeleteBucketTagging holds details about calls to the DeleteBucketTagging method.
DeleteBucketTagging []struct {
// ContextMoqParam is the contextMoqParam argument value.
ContextMoqParam context.Context
// Bucket is the bucket argument value.
Bucket string
}
// DeleteObject holds details about calls to the DeleteObject method.
DeleteObject []struct {
// ContextMoqParam is the contextMoqParam argument value.
@@ -311,6 +338,13 @@ type BackendMock struct {
// GetBucketAclInput is the getBucketAclInput argument value.
GetBucketAclInput *s3.GetBucketAclInput
}
// GetBucketTagging holds details about calls to the GetBucketTagging method.
GetBucketTagging []struct {
// ContextMoqParam is the contextMoqParam argument value.
ContextMoqParam context.Context
// Bucket is the bucket argument value.
Bucket string
}
// GetObject holds details about calls to the GetObject method.
GetObject []struct {
// ContextMoqParam is the contextMoqParam argument value.
@@ -408,6 +442,15 @@ type BackendMock struct {
// Data is the data argument value.
Data []byte
}
// PutBucketTagging holds details about calls to the PutBucketTagging method.
PutBucketTagging []struct {
// ContextMoqParam is the contextMoqParam argument value.
ContextMoqParam context.Context
// Bucket is the bucket argument value.
Bucket string
// Tags is the tags argument value.
Tags map[string]string
}
// PutObject holds details about calls to the PutObject method.
PutObject []struct {
// ContextMoqParam is the contextMoqParam argument value.
@@ -475,10 +518,12 @@ type BackendMock struct {
lockCreateBucket sync.RWMutex
lockCreateMultipartUpload sync.RWMutex
lockDeleteBucket sync.RWMutex
lockDeleteBucketTagging sync.RWMutex
lockDeleteObject sync.RWMutex
lockDeleteObjectTagging sync.RWMutex
lockDeleteObjects sync.RWMutex
lockGetBucketAcl sync.RWMutex
lockGetBucketTagging sync.RWMutex
lockGetObject sync.RWMutex
lockGetObjectAcl sync.RWMutex
lockGetObjectAttributes sync.RWMutex
@@ -492,6 +537,7 @@ type BackendMock struct {
lockListObjectsV2 sync.RWMutex
lockListParts sync.RWMutex
lockPutBucketAcl sync.RWMutex
lockPutBucketTagging sync.RWMutex
lockPutObject sync.RWMutex
lockPutObjectAcl sync.RWMutex
lockPutObjectTagging sync.RWMutex
@@ -652,21 +698,23 @@ func (mock *BackendMock) CopyObjectCalls() []struct {
}
// CreateBucket calls CreateBucketFunc.
func (mock *BackendMock) CreateBucket(contextMoqParam context.Context, createBucketInput *s3.CreateBucketInput) error {
func (mock *BackendMock) CreateBucket(contextMoqParam context.Context, createBucketInput *s3.CreateBucketInput, defaultACL []byte) error {
if mock.CreateBucketFunc == nil {
panic("BackendMock.CreateBucketFunc: method is nil but Backend.CreateBucket was just called")
}
callInfo := struct {
ContextMoqParam context.Context
CreateBucketInput *s3.CreateBucketInput
DefaultACL []byte
}{
ContextMoqParam: contextMoqParam,
CreateBucketInput: createBucketInput,
DefaultACL: defaultACL,
}
mock.lockCreateBucket.Lock()
mock.calls.CreateBucket = append(mock.calls.CreateBucket, callInfo)
mock.lockCreateBucket.Unlock()
return mock.CreateBucketFunc(contextMoqParam, createBucketInput)
return mock.CreateBucketFunc(contextMoqParam, createBucketInput, defaultACL)
}
// CreateBucketCalls gets all the calls that were made to CreateBucket.
@@ -676,10 +724,12 @@ func (mock *BackendMock) CreateBucket(contextMoqParam context.Context, createBuc
func (mock *BackendMock) CreateBucketCalls() []struct {
ContextMoqParam context.Context
CreateBucketInput *s3.CreateBucketInput
DefaultACL []byte
} {
var calls []struct {
ContextMoqParam context.Context
CreateBucketInput *s3.CreateBucketInput
DefaultACL []byte
}
mock.lockCreateBucket.RLock()
calls = mock.calls.CreateBucket
@@ -759,6 +809,42 @@ func (mock *BackendMock) DeleteBucketCalls() []struct {
return calls
}
// DeleteBucketTagging calls DeleteBucketTaggingFunc.
func (mock *BackendMock) DeleteBucketTagging(contextMoqParam context.Context, bucket string) error {
if mock.DeleteBucketTaggingFunc == nil {
panic("BackendMock.DeleteBucketTaggingFunc: method is nil but Backend.DeleteBucketTagging was just called")
}
callInfo := struct {
ContextMoqParam context.Context
Bucket string
}{
ContextMoqParam: contextMoqParam,
Bucket: bucket,
}
mock.lockDeleteBucketTagging.Lock()
mock.calls.DeleteBucketTagging = append(mock.calls.DeleteBucketTagging, callInfo)
mock.lockDeleteBucketTagging.Unlock()
return mock.DeleteBucketTaggingFunc(contextMoqParam, bucket)
}
// DeleteBucketTaggingCalls gets all the calls that were made to DeleteBucketTagging.
// Check the length with:
//
// len(mockedBackend.DeleteBucketTaggingCalls())
func (mock *BackendMock) DeleteBucketTaggingCalls() []struct {
ContextMoqParam context.Context
Bucket string
} {
var calls []struct {
ContextMoqParam context.Context
Bucket string
}
mock.lockDeleteBucketTagging.RLock()
calls = mock.calls.DeleteBucketTagging
mock.lockDeleteBucketTagging.RUnlock()
return calls
}
// DeleteObject calls DeleteObjectFunc.
func (mock *BackendMock) DeleteObject(contextMoqParam context.Context, deleteObjectInput *s3.DeleteObjectInput) error {
if mock.DeleteObjectFunc == nil {
@@ -907,6 +993,42 @@ func (mock *BackendMock) GetBucketAclCalls() []struct {
return calls
}
// GetBucketTagging calls GetBucketTaggingFunc.
func (mock *BackendMock) GetBucketTagging(contextMoqParam context.Context, bucket string) (map[string]string, error) {
if mock.GetBucketTaggingFunc == nil {
panic("BackendMock.GetBucketTaggingFunc: method is nil but Backend.GetBucketTagging was just called")
}
callInfo := struct {
ContextMoqParam context.Context
Bucket string
}{
ContextMoqParam: contextMoqParam,
Bucket: bucket,
}
mock.lockGetBucketTagging.Lock()
mock.calls.GetBucketTagging = append(mock.calls.GetBucketTagging, callInfo)
mock.lockGetBucketTagging.Unlock()
return mock.GetBucketTaggingFunc(contextMoqParam, bucket)
}
// GetBucketTaggingCalls gets all the calls that were made to GetBucketTagging.
// Check the length with:
//
// len(mockedBackend.GetBucketTaggingCalls())
func (mock *BackendMock) GetBucketTaggingCalls() []struct {
ContextMoqParam context.Context
Bucket string
} {
var calls []struct {
ContextMoqParam context.Context
Bucket string
}
mock.lockGetBucketTagging.RLock()
calls = mock.calls.GetBucketTagging
mock.lockGetBucketTagging.RUnlock()
return calls
}
// GetObject calls GetObjectFunc.
func (mock *BackendMock) GetObject(contextMoqParam context.Context, getObjectInput *s3.GetObjectInput, writer io.Writer) (*s3.GetObjectOutput, error) {
if mock.GetObjectFunc == nil {
@@ -1387,6 +1509,46 @@ func (mock *BackendMock) PutBucketAclCalls() []struct {
return calls
}
// PutBucketTagging calls PutBucketTaggingFunc.
func (mock *BackendMock) PutBucketTagging(contextMoqParam context.Context, bucket string, tags map[string]string) error {
if mock.PutBucketTaggingFunc == nil {
panic("BackendMock.PutBucketTaggingFunc: method is nil but Backend.PutBucketTagging was just called")
}
callInfo := struct {
ContextMoqParam context.Context
Bucket string
Tags map[string]string
}{
ContextMoqParam: contextMoqParam,
Bucket: bucket,
Tags: tags,
}
mock.lockPutBucketTagging.Lock()
mock.calls.PutBucketTagging = append(mock.calls.PutBucketTagging, callInfo)
mock.lockPutBucketTagging.Unlock()
return mock.PutBucketTaggingFunc(contextMoqParam, bucket, tags)
}
// PutBucketTaggingCalls gets all the calls that were made to PutBucketTagging.
// Check the length with:
//
// len(mockedBackend.PutBucketTaggingCalls())
func (mock *BackendMock) PutBucketTaggingCalls() []struct {
ContextMoqParam context.Context
Bucket string
Tags map[string]string
} {
var calls []struct {
ContextMoqParam context.Context
Bucket string
Tags map[string]string
}
mock.lockPutBucketTagging.RLock()
calls = mock.calls.PutBucketTagging
mock.lockPutBucketTagging.RUnlock()
return calls
}
// PutObject calls PutObjectFunc.
func (mock *BackendMock) PutObject(contextMoqParam context.Context, putObjectInput *s3.PutObjectInput) (string, error) {
if mock.PutObjectFunc == nil {

File diff suppressed because it is too large


@@ -343,6 +343,9 @@ func TestS3ApiController_ListActions(t *testing.T) {
ListObjectsFunc: func(context.Context, *s3.ListObjectsInput) (*s3.ListObjectsOutput, error) {
return &s3.ListObjectsOutput{}, nil
},
GetBucketTaggingFunc: func(contextMoqParam context.Context, bucket string) (map[string]string, error) {
return map[string]string{}, nil
},
},
}
@@ -365,6 +368,9 @@ func TestS3ApiController_ListActions(t *testing.T) {
ListObjectsFunc: func(context.Context, *s3.ListObjectsInput) (*s3.ListObjectsOutput, error) {
return nil, s3err.GetAPIError(s3err.ErrNotImplemented)
},
GetBucketTaggingFunc: func(contextMoqParam context.Context, bucket string) (map[string]string, error) {
return nil, s3err.GetAPIError(s3err.ErrNoSuchBucket)
},
},
}
appError := fiber.New()
@@ -384,6 +390,24 @@ func TestS3ApiController_ListActions(t *testing.T) {
wantErr bool
statusCode int
}{
{
name: "Get-bucket-tagging-non-existing-bucket",
app: appError,
args: args{
req: httptest.NewRequest(http.MethodGet, "/my-bucket?tagging", nil),
},
wantErr: false,
statusCode: 404,
},
{
name: "Get-bucket-tagging-success",
app: app,
args: args{
req: httptest.NewRequest(http.MethodGet, "/my-bucket?tagging", nil),
},
wantErr: false,
statusCode: 200,
},
{
name: "Get-bucket-acl-success",
app: app,
@@ -492,6 +516,17 @@ func TestS3ApiController_PutBucketActions(t *testing.T) {
</AccessControlPolicy>
`
tagBody := `
<Tagging xmlns="http://s3.amazonaws.com/doc/2006-03-01/">
<TagSet>
<Tag>
<Key>organization</Key>
<Value>marketing</Value>
</Tag>
</TagSet>
</Tagging>
`
s3ApiController := S3ApiController{
be: &BackendMock{
GetBucketAclFunc: func(context.Context, *s3.GetBucketAclInput) ([]byte, error) {
@@ -500,7 +535,10 @@ func TestS3ApiController_PutBucketActions(t *testing.T) {
PutBucketAclFunc: func(context.Context, string, []byte) error {
return nil
},
CreateBucketFunc: func(context.Context, *s3.CreateBucketInput) error {
CreateBucketFunc: func(context.Context, *s3.CreateBucketInput, []byte) error {
return nil
},
PutBucketTaggingFunc: func(contextMoqParam context.Context, bucket string, tags map[string]string) error {
return nil
},
},
@@ -543,6 +581,24 @@ func TestS3ApiController_PutBucketActions(t *testing.T) {
wantErr bool
statusCode int
}{
{
name: "Put-bucket-tagging-invalid-body",
app: app,
args: args{
req: httptest.NewRequest(http.MethodPut, "/my-bucket?tagging", nil),
},
wantErr: false,
statusCode: 400,
},
{
name: "Put-bucket-tagging-success",
app: app,
args: args{
req: httptest.NewRequest(http.MethodPut, "/my-bucket?tagging", strings.NewReader(tagBody)),
},
wantErr: false,
statusCode: 200,
},
{
name: "Put-bucket-acl-invalid-acl",
app: app,
@@ -869,12 +925,12 @@ func TestS3ApiController_DeleteBucket(t *testing.T) {
app := fiber.New()
s3ApiController := S3ApiController{
be: &BackendMock{
GetBucketAclFunc: func(context.Context, *s3.GetBucketAclInput) ([]byte, error) {
return acldata, nil
},
DeleteBucketFunc: func(context.Context, *s3.DeleteBucketInput) error {
return nil
},
DeleteBucketTaggingFunc: func(contextMoqParam context.Context, bucket string) error {
return nil
},
},
}
@@ -904,6 +960,15 @@ func TestS3ApiController_DeleteBucket(t *testing.T) {
wantErr: false,
statusCode: 204,
},
{
name: "Delete-bucket-tagging-success",
app: app,
args: args{
req: httptest.NewRequest(http.MethodDelete, "/my-bucket?tagging", nil),
},
wantErr: false,
statusCode: 204,
},
}
for _, tt := range tests {
resp, err := tt.app.Test(tt.args.req)


@@ -38,7 +38,7 @@ func AclParser(be backend.Backend, logger s3log.AuditLogger) fiber.Handler {
if ctx.Method() == http.MethodPatch {
return ctx.Next()
}
if len(pathParts) == 2 && pathParts[1] != "" && ctx.Method() == http.MethodPut && !ctx.Request().URI().QueryArgs().Has("acl") {
if len(pathParts) == 2 && pathParts[1] != "" && ctx.Method() == http.MethodPut && !ctx.Request().URI().QueryArgs().Has("acl") && !ctx.Request().URI().QueryArgs().Has("tagging") {
if err := auth.IsAdmin(acct, isRoot); err != nil {
return controllers.SendXMLResponse(ctx, nil, err, &controllers.MetaOpts{Logger: logger, Action: "CreateBucket"})
}


@@ -44,6 +44,12 @@ func VerifyV4Signature(root RootUserConfig, iam auth.IAMService, logger s3log.Au
acct := accounts{root: root, iam: iam}
return func(ctx *fiber.Ctx) error {
// If the account is already set in context locals, the request was authenticated via presigned URL
_, ok := ctx.Locals("account").(auth.Account)
if ok {
return ctx.Next()
}
ctx.Locals("region", region)
ctx.Locals("startTime", time.Now())
authorization := ctx.Get("Authorization")
@@ -96,7 +102,7 @@ func VerifyV4Signature(root RootUserConfig, iam auth.IAMService, logger s3log.Au
}
// Validate the dates difference
err = validateDate(tdate)
err = utils.ValidateDate(tdate)
if err != nil {
return sendResponse(ctx, err, logger)
}
@@ -158,29 +164,6 @@ func (a accounts) getAccount(access string) (auth.Account, error) {
return a.iam.GetUserAccount(access)
}
func validateDate(date time.Time) error {
now := time.Now().UTC()
diff := date.Unix() - now.Unix()
// Checks the dates difference to be less than a minute
if diff > 60 {
return s3err.APIError{
Code: "SignatureDoesNotMatch",
Description: fmt.Sprintf("Signature not yet current: %s is still later than %s", date.Format(iso8601Format), now.Format(iso8601Format)),
HTTPStatusCode: http.StatusForbidden,
}
}
if diff < -60 {
return s3err.APIError{
Code: "SignatureDoesNotMatch",
Description: fmt.Sprintf("Signature expired: %s is now earlier than %s", date.Format(iso8601Format), now.Format(iso8601Format)),
HTTPStatusCode: http.StatusForbidden,
}
}
return nil
}
func sendResponse(ctx *fiber.Ctx, err error, logger s3log.AuditLogger) error {
return controllers.SendResponse(ctx, err, &controllers.MetaOpts{Logger: logger})
}


@@ -0,0 +1,61 @@
// Copyright 2024 Versity Software
// This file is licensed under the Apache License, Version 2.0
// (the "License"); you may not use this file except in compliance
// with the License. You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing,
// software distributed under the License is distributed on an
// "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
// KIND, either express or implied. See the License for the
// specific language governing permissions and limitations
// under the License.
package middlewares
import (
"io"
"time"
"github.com/gofiber/fiber/v2"
"github.com/versity/versitygw/auth"
"github.com/versity/versitygw/s3api/utils"
"github.com/versity/versitygw/s3log"
)
// ProcessChunkedBody initializes the chunked upload stream if the
// request appears to be a chunked upload
func ProcessChunkedBody(root RootUserConfig, iam auth.IAMService, logger s3log.AuditLogger, region string) fiber.Handler {
return func(ctx *fiber.Ctx) error {
decodedLength := ctx.Get("X-Amz-Decoded-Content-Length")
if decodedLength == "" {
return ctx.Next()
}
// TODO: validate content length
authData, err := utils.ParseAuthorization(ctx.Get("Authorization"))
if err != nil {
return sendResponse(ctx, err, logger)
}
acct := ctx.Locals("account").(auth.Account)
amzdate := ctx.Get("X-Amz-Date")
date, _ := time.Parse(iso8601Format, amzdate)
if utils.IsBigDataAction(ctx) {
var err error
wrapBodyReader(ctx, func(r io.Reader) io.Reader {
var cr *utils.ChunkReader
cr, err = utils.NewChunkReader(ctx, r, authData, region, acct.Secret, date)
return cr
})
if err != nil {
return sendResponse(ctx, err, logger)
}
return ctx.Next()
}
return ctx.Next()
}
}


@@ -33,10 +33,14 @@ func VerifyMD5Body(logger s3log.AuditLogger) fiber.Handler {
}
if utils.IsBigDataAction(ctx) {
var err error
wrapBodyReader(ctx, func(r io.Reader) io.Reader {
r, _ = utils.NewHashReader(r, incomingSum, utils.HashTypeMd5)
r, err = utils.NewHashReader(r, incomingSum, utils.HashTypeMd5)
return r
})
if err != nil {
return controllers.SendResponse(ctx, err, &controllers.MetaOpts{Logger: logger})
}
return ctx.Next()
}


@@ -0,0 +1,69 @@
// Copyright 2023 Versity Software
// This file is licensed under the Apache License, Version 2.0
// (the "License"); you may not use this file except in compliance
// with the License. You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing,
// software distributed under the License is distributed on an
// "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
// KIND, either express or implied. See the License for the
// specific language governing permissions and limitations
// under the License.
package middlewares
import (
"io"
"time"
"github.com/gofiber/fiber/v2"
"github.com/versity/versitygw/auth"
"github.com/versity/versitygw/s3api/utils"
"github.com/versity/versitygw/s3err"
"github.com/versity/versitygw/s3log"
)
func VerifyPresignedV4Signature(root RootUserConfig, iam auth.IAMService, logger s3log.AuditLogger, region string, debug bool) fiber.Handler {
acct := accounts{root: root, iam: iam}
return func(ctx *fiber.Ctx) error {
if ctx.Query("X-Amz-Signature") == "" {
return ctx.Next()
}
ctx.Locals("region", region)
ctx.Locals("startTime", time.Now())
authData, err := utils.ParsePresignedURIParts(ctx)
if err != nil {
return sendResponse(ctx, err, logger)
}
ctx.Locals("isRoot", authData.Access == root.Access)
account, err := acct.getAccount(authData.Access)
if err == auth.ErrNoSuchUser {
return sendResponse(ctx, s3err.GetAPIError(s3err.ErrInvalidAccessKeyID), logger)
}
if err != nil {
return sendResponse(ctx, err, logger)
}
ctx.Locals("account", account)
if utils.IsBigDataAction(ctx) {
wrapBodyReader(ctx, func(r io.Reader) io.Reader {
return utils.NewPresignedAuthReader(ctx, r, authData, account.Secret, debug)
})
return ctx.Next()
}
err = utils.CheckPresignedSignature(ctx, authData, account.Secret, debug)
if err != nil {
return sendResponse(ctx, err, logger)
}
return ctx.Next()
}
}


@@ -56,7 +56,9 @@ func New(app *fiber.App, be backend.Backend, root middlewares.RootUserConfig, po
app.Use(middlewares.RequestLogger(server.debug))
// Authentication middlewares
app.Use(middlewares.VerifyPresignedV4Signature(root, iam, l, region, server.debug))
app.Use(middlewares.VerifyV4Signature(root, iam, l, region, server.debug))
app.Use(middlewares.ProcessChunkedBody(root, iam, l, region))
app.Use(middlewares.VerifyMD5Body(l))
app.Use(middlewares.AclParser(be, l))


@@ -21,6 +21,7 @@ import (
"os"
"strings"
"time"
"unicode"
"github.com/aws/aws-sdk-go-v2/aws"
v4 "github.com/aws/aws-sdk-go-v2/aws/signer/v4"
@@ -125,16 +126,19 @@ func CheckValidSignature(ctx *fiber.Ctx, auth AuthData, secret, checksum string,
signer := v4.NewSigner()
signErr := signer.SignHTTP(req.Context(), aws.Credentials{
AccessKeyID: auth.Access,
SecretAccessKey: secret,
}, req, checksum, service, auth.Region, tdate, func(options *v4.SignerOptions) {
options.DisableURIPathEscaping = true
if debug {
options.LogSigning = true
options.Logger = logging.NewStandardLogger(os.Stderr)
}
})
signErr := signer.SignHTTP(req.Context(),
aws.Credentials{
AccessKeyID: auth.Access,
SecretAccessKey: secret,
},
req, checksum, service, auth.Region, tdate,
func(options *v4.SignerOptions) {
options.DisableURIPathEscaping = true
if debug {
options.LogSigning = true
options.Logger = logging.NewStandardLogger(os.Stderr)
}
})
if signErr != nil {
return fmt.Errorf("sign generated http request: %w", err)
}
@@ -173,18 +177,20 @@ func ParseAuthorization(authorization string) (AuthData, error) {
// authorization must start with:
// Authorization: <ALGORITHM>
// followed by key=value pairs separated by ","
authParts := strings.Fields(authorization)
authParts := strings.SplitN(authorization, " ", 2)
for i, el := range authParts {
authParts[i] = strings.TrimSpace(el)
if strings.Contains(el, " ") {
authParts[i] = removeSpace(el)
}
}
if len(authParts) < 3 {
if len(authParts) < 2 {
return a, s3err.GetAPIError(s3err.ErrMissingFields)
}
algo := authParts[0]
kvData := strings.Join(authParts[1:], "")
kvData := authParts[1]
kvPairs := strings.Split(kvData, ",")
// we are expecting at least Credential, SignedHeaders, and Signature
// key value pairs here
@@ -244,6 +250,17 @@ func ParseAuthorization(authorization string) (AuthData, error) {
}, nil
}
func removeSpace(str string) string {
var b strings.Builder
b.Grow(len(str))
for _, ch := range str {
if !unicode.IsSpace(ch) {
b.WriteRune(ch)
}
}
return b.String()
}
var (
specialValues = map[string]bool{
"UNSIGNED-PAYLOAD": true,

s3api/utils/auth_test.go (new file, 130 lines)

@@ -0,0 +1,130 @@
package utils
import (
"net"
"testing"
"time"
"github.com/aws/aws-sdk-go-v2/aws"
v4 "github.com/aws/aws-sdk-go-v2/aws/signer/v4"
"github.com/gofiber/fiber/v2"
"github.com/valyala/fasthttp/fasthttputil"
)
func TestAuthParse(t *testing.T) {
vectors := []struct {
name string // name of test string
authstr string // Authorization string
algo string
sig string
}{
{
name: "restic",
authstr: "AWS4-HMAC-SHA256 Credential=user/20240116/us-east-1/s3/aws4_request,SignedHeaders=content-md5;host;x-amz-content-sha256;x-amz-date;x-amz-decoded-content-length,Signature=d5199fc7f3aa35dd3d400427be2ae4c98bfad390785280cbb9eea015b51e12ac",
algo: "AWS4-HMAC-SHA256",
sig: "d5199fc7f3aa35dd3d400427be2ae4c98bfad390785280cbb9eea015b51e12ac",
},
{
name: "aws eaxample",
authstr: "AWS4-HMAC-SHA256 Credential=AKIAIOSFODNN7EXAMPLE/20130524/us-east-1/s3/aws4_request, SignedHeaders=host;range;x-amz-date, Signature=fe5f80f77d5fa3beca038a248ff027d0445342fe2855ddc963176630326f1024",
algo: "AWS4-HMAC-SHA256",
sig: "fe5f80f77d5fa3beca038a248ff027d0445342fe2855ddc963176630326f1024",
},
{
name: "s3browser",
authstr: "AWS4-HMAC-SHA256 Credential=access_key/20240206/us-east-1/s3/aws4_request,SignedHeaders=host;user-agent;x-amz-content-sha256;x-amz-date, Signature=37a35d96998d786113ad420c57c22c5433f6aca74f88f26566caa047fc3601c6",
algo: "AWS4-HMAC-SHA256",
sig: "37a35d96998d786113ad420c57c22c5433f6aca74f88f26566caa047fc3601c6",
},
}
for _, v := range vectors {
t.Run(v.name, func(t *testing.T) {
data, err := ParseAuthorization(v.authstr)
if err != nil {
t.Fatal(err)
}
if data.Algorithm != v.algo {
t.Errorf("algo got %v, expected %v", data.Algorithm, v.algo)
}
if data.Signature != v.sig {
t.Errorf("signature got %v, expected %v", data.Signature, v.sig)
}
})
}
}
// 2024/02/06 21:03:28 Request headers:
// 2024/02/06 21:03:28 Host: 172.21.0.160:11000
// 2024/02/06 21:03:28 User-Agent: S3 Browser/11.5.7 (https://s3browser.com)
// 2024/02/06 21:03:28 Authorization: AWS4-HMAC-SHA256 Credential=access_key/20240206/us-east-1/s3/aws4_request,SignedHeaders=host;user-agent;x-amz-content-sha256;x-amz-date, Signature=37a35d96998d786113ad420c57c22c5433f6aca74f88f26566caa047fc3601c6
// 2024/02/06 21:03:28 X-Amz-Content-Sha256: e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855
// 2024/02/06 21:03:28 X-Amz-Date: 20240206T210328Z
func Test_Client_UserAgent(t *testing.T) {
signedHdrs := []string{"host", "user-agent", "x-amz-content-sha256", "x-amz-date"}
access := "access_key"
secret := "secret_key"
region := "us-east-1"
host := "172.21.0.160:11000"
agent := "S3 Browser/11.5.7 (https://s3browser.com)"
expectedSig := "37a35d96998d786113ad420c57c22c5433f6aca74f88f26566caa047fc3601c6"
dateStr := "20240206T210328Z"
app := fiber.New(fiber.Config{DisableStartupMessage: true})
tdate, err := time.Parse(iso8601Format, dateStr)
if err != nil {
t.Fatal(err)
}
app.Get("/", func(c *fiber.Ctx) error {
req, err := createHttpRequestFromCtx(c, signedHdrs, int64(c.Request().Header.ContentLength()))
if err != nil {
t.Fatal(err)
}
req.Host = host
req.Header.Add("X-Amz-Content-Sha256", zeroLenSig)
signer := v4.NewSigner()
signErr := signer.SignHTTP(req.Context(),
aws.Credentials{
AccessKeyID: access,
SecretAccessKey: secret,
},
req, zeroLenSig, service, region, tdate,
func(options *v4.SignerOptions) {
options.DisableURIPathEscaping = true
})
if signErr != nil {
t.Fatalf("sign generated http request: %v", err)
}
genAuth, err := ParseAuthorization(req.Header.Get("Authorization"))
if err != nil {
return err
}
if genAuth.Signature != expectedSig {
t.Errorf("SIG: %v\nexpected: %v\n", genAuth.Signature, expectedSig)
}
return c.Send(c.Request().Header.UserAgent())
})
ln := fasthttputil.NewInmemoryListener()
go func() {
err := app.Listener(ln)
if err != nil {
panic(err)
}
}()
c := fiber.AcquireClient()
c.UserAgent = agent
a := c.Get("http://example.com")
a.HostClient.Dial = func(_ string) (net.Conn, error) { return ln.Dial() }
a.String()
fiber.ReleaseClient(c)
}

s3api/utils/chunk-reader.go (new file, 269 lines)

@@ -0,0 +1,269 @@
// Copyright 2024 Versity Software
// This file is licensed under the Apache License, Version 2.0
// (the "License"); you may not use this file except in compliance
// with the License. You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing,
// software distributed under the License is distributed on an
// "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
// KIND, either express or implied. See the License for the
// specific language governing permissions and limitations
// under the License.
package utils
import (
"bytes"
"crypto/hmac"
"crypto/sha256"
"encoding/hex"
"errors"
"fmt"
"hash"
"io"
"strconv"
"time"
"github.com/gofiber/fiber/v2"
"github.com/versity/versitygw/s3err"
)
// chunked uploads described in:
// https://docs.aws.amazon.com/AmazonS3/latest/API/sigv4-streaming.html
const (
chunkHdrStr = ";chunk-signature="
chunkHdrDelim = "\r\n"
zeroLenSig = "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855"
awsV4 = "AWS4"
awsS3Service = "s3"
awsV4Request = "aws4_request"
streamPayloadAlgo = "AWS4-HMAC-SHA256-PAYLOAD"
)
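// For reference, per the sigv4-streaming docs linked above, the request body
// interleaves hex-encoded chunk sizes, chunk signatures, and object data
// (sizes and signatures below are illustrative placeholders):
//
//	10000;chunk-signature=<sig1>\r\n
//	<65536 bytes of object data>\r\n
//	400;chunk-signature=<sig2>\r\n
//	<1024 bytes of object data>\r\n
//	0;chunk-signature=<final-sig>\r\n
//	\r\n
//
// Each signature chains off the previous one, starting from the seed
// signature carried in the Authorization header.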
// ChunkReader reads from chunked upload request body, and returns
// object data stream
type ChunkReader struct {
r io.Reader
signingKey []byte
prevSig string
parsedSig string
currentChunkSize int64
chunkDataLeft int64
trailerExpected int
stash []byte
chunkHash hash.Hash
strToSignPrefix string
skipcheck bool
}
// NewChunkReader reads from request body io.Reader and parses out the
// chunk metadata in stream. The headers are validated for proper signatures.
// Reading from the chunk reader will read only the object data stream
// without the chunk headers/trailers.
func NewChunkReader(ctx *fiber.Ctx, r io.Reader, authdata AuthData, region, secret string, date time.Time) (*ChunkReader, error) {
return &ChunkReader{
r: r,
signingKey: getSigningKey(secret, region, date),
// the authdata.Signature is validated in the auth-reader,
// so we can use that here without any other checks
prevSig: authdata.Signature,
chunkHash: sha256.New(),
strToSignPrefix: getStringToSignPrefix(date, region),
}, nil
}
// Read satisfies the io.Reader for this type
func (cr *ChunkReader) Read(p []byte) (int, error) {
n, err := cr.r.Read(p)
if err != nil && err != io.EOF {
return n, err
}
if cr.chunkDataLeft < int64(n) {
chunkSize := cr.chunkDataLeft
if chunkSize > 0 {
cr.chunkHash.Write(p[:chunkSize])
}
n, err := cr.parseAndRemoveChunkInfo(p[chunkSize:n])
n += int(chunkSize)
return n, err
}
cr.chunkDataLeft -= int64(n)
cr.chunkHash.Write(p[:n])
return n, err
}
// https://docs.aws.amazon.com/AmazonS3/latest/API/sigv4-streaming.html#sigv4-chunked-body-definition
// This part is the same for all chunks,
// only the previous signature and hash of current chunk changes
func getStringToSignPrefix(date time.Time, region string) string {
credentialScope := fmt.Sprintf("%s/%s/%s/%s",
date.Format("20060102"),
region,
awsS3Service,
awsV4Request)
return fmt.Sprintf("%s\n%s\n%s",
streamPayloadAlgo,
date.Format("20060102T150405Z"),
credentialScope)
}
// https://docs.aws.amazon.com/AmazonS3/latest/API/sigv4-streaming.html#sigv4-chunked-body-definition
// signature For each chunk, you calculate the signature using the following
// string to sign. For the first chunk, you use the seed-signature as the
// previous signature.
func getChunkStringToSign(prefix, prevSig string, chunkHash []byte) string {
return fmt.Sprintf("%s\n%s\n%s\n%s",
prefix,
prevSig,
zeroLenSig,
hex.EncodeToString(chunkHash))
}
// The provided p should have all of the previous chunk data and trailer
// consumed already. The expected positioning is that p[0] starts the new
// chunk size, with the ";chunk-signature=" following. The only exception
// is if we started consuming the trailer, but hit the end of the read buffer.
// In this case, parseAndRemoveChunkInfo is called with skipcheck=true to
// finish consuming the final trailer bytes.
// This parses the chunk metadata in situ without allocating an extra buffer.
// It will just read and validate the chunk metadata and then move the
// following chunk data to overwrite the metadata in the provided buffer.
func (cr *ChunkReader) parseAndRemoveChunkInfo(p []byte) (int, error) {
n := len(p)
if !cr.skipcheck && cr.parsedSig != "" {
chunkhash := cr.chunkHash.Sum(nil)
cr.chunkHash.Reset()
sigstr := getChunkStringToSign(cr.strToSignPrefix, cr.prevSig, chunkhash)
cr.prevSig = hex.EncodeToString(hmac256(cr.signingKey, []byte(sigstr)))
if cr.currentChunkSize != 0 && cr.prevSig != cr.parsedSig {
return 0, s3err.GetAPIError(s3err.ErrSignatureDoesNotMatch)
}
}
if cr.trailerExpected != 0 {
if len(p) < len(chunkHdrDelim) {
// This is the special case where we need to consume the
// trailer, but instead hit the end of the buffer. The
// subsequent call will finish consuming the trailer.
cr.chunkDataLeft = 0
cr.trailerExpected -= len(p)
cr.skipcheck = true
return 0, nil
}
// move data up to remove trailer
copy(p, p[cr.trailerExpected:])
n -= cr.trailerExpected
}
cr.skipcheck = false
chunkSize, sig, bufOffset, err := cr.parseChunkHeaderBytes(p[:n])
cr.currentChunkSize = chunkSize
cr.parsedSig = sig
if err == errskipHeader {
cr.chunkDataLeft = 0
return 0, nil
}
if err != nil {
return 0, err
}
if chunkSize == 0 {
return 0, io.EOF
}
cr.trailerExpected = len(chunkHdrDelim)
// move data up to remove chunk header
copy(p, p[bufOffset:n])
n -= bufOffset
// if remaining buffer larger than chunk data,
// parse next header in buffer
if int64(n) > chunkSize {
cr.chunkDataLeft = 0
cr.chunkHash.Write(p[:chunkSize])
n, err := cr.parseAndRemoveChunkInfo(p[chunkSize:n])
return n + int(chunkSize), err
} else {
cr.chunkDataLeft = chunkSize - int64(n)
cr.chunkHash.Write(p[:n])
}
return n, nil
}
// https://docs.aws.amazon.com/AmazonS3/latest/API/sig-v4-header-based-auth.html
// Task 3: Calculate Signature
// https://docs.aws.amazon.com/AmazonS3/latest/API/sig-v4-authenticating-requests.html#signing-request-intro
func getSigningKey(secret, region string, date time.Time) []byte {
dateKey := hmac256([]byte(awsV4+secret), []byte(date.Format(yyyymmdd)))
dateRegionKey := hmac256(dateKey, []byte(region))
dateRegionServiceKey := hmac256(dateRegionKey, []byte(awsS3Service))
signingKey := hmac256(dateRegionServiceKey, []byte(awsV4Request))
return signingKey
}
func hmac256(key []byte, data []byte) []byte {
hash := hmac.New(sha256.New, key)
hash.Write(data)
return hash.Sum(nil)
}
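// A worked sketch of the derivation above, following the linked AWS docs
// (the date and region are placeholders, not test vectors from this repo):
//
//	dateKey              = HMAC-SHA256("AWS4"+secret, "20130524")
//	dateRegionKey        = HMAC-SHA256(dateKey, "us-east-1")
//	dateRegionServiceKey = HMAC-SHA256(dateRegionKey, "s3")
//	signingKey           = HMAC-SHA256(dateRegionServiceKey, "aws4_request")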
var (
errInvalidChunkFormat = errors.New("invalid chunk header format")
errskipHeader = errors.New("skip to next header")
)
const (
maxHeaderSize = 1024
)
// This returns the chunk payload size, signature, data start offset, and
// error if any. See the AWS documentation for the chunk header format. The
// header[0] byte is expected to be the first byte of the chunk size here.
func (cr *ChunkReader) parseChunkHeaderBytes(header []byte) (int64, string, int, error) {
if cr.stash != nil {
tmp := make([]byte, maxHeaderSize)
copy(tmp, cr.stash)
copy(tmp[len(cr.stash):], header)
header = tmp
cr.stash = nil
}
semicolonIndex := bytes.Index(header, []byte(chunkHdrStr))
if semicolonIndex == -1 {
cr.stash = make([]byte, len(header))
copy(cr.stash, header)
cr.trailerExpected = 0
return 0, "", 0, errskipHeader
}
sigIndex := semicolonIndex + len(chunkHdrStr)
sigEndIndex := bytes.Index(header[sigIndex:], []byte(chunkHdrDelim))
if sigEndIndex == -1 {
cr.stash = make([]byte, len(header))
copy(cr.stash, header)
cr.trailerExpected = 0
return 0, "", 0, errskipHeader
}
chunkSizeBytes := header[:semicolonIndex]
chunkSize, err := strconv.ParseInt(string(chunkSizeBytes), 16, 64)
if err != nil {
return 0, "", 0, errInvalidChunkFormat
}
signature := string(header[sigIndex:(sigIndex + sigEndIndex)])
dataStartOffset := sigIndex + sigEndIndex + len(chunkHdrDelim)
return chunkSize, signature, dataStartOffset, nil
}


@@ -0,0 +1,243 @@
// Copyright 2023 Versity Software
// This file is licensed under the Apache License, Version 2.0
// (the "License"); you may not use this file except in compliance
// with the License. You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing,
// software distributed under the License is distributed on an
// "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
// KIND, either express or implied. See the License for the
// specific language governing permissions and limitations
// under the License.
package utils
import (
"errors"
"fmt"
"io"
"net/http"
"net/url"
"os"
"strconv"
"strings"
"time"
"github.com/aws/aws-sdk-go-v2/aws"
v4 "github.com/aws/aws-sdk-go-v2/aws/signer/v4"
"github.com/aws/smithy-go/logging"
"github.com/gofiber/fiber/v2"
"github.com/versity/versitygw/s3err"
)
const (
unsignedPayload string = "UNSIGNED-PAYLOAD"
)
// PresignedAuthReader is an io.Reader that validates presigned request authorization
// once the underlying reader returns io.EOF. This is needed for streaming
// data requests where the data size is not known until
// the data is completely read.
type PresignedAuthReader struct {
ctx *fiber.Ctx
auth AuthData
secret string
r io.Reader
debug bool
}
func NewPresignedAuthReader(ctx *fiber.Ctx, r io.Reader, auth AuthData, secret string, debug bool) *PresignedAuthReader {
return &PresignedAuthReader{
ctx: ctx,
r: r,
auth: auth,
secret: secret,
debug: debug,
}
}
// Read allows *PresignedAuthReader to be used as an io.Reader
func (pr *PresignedAuthReader) Read(p []byte) (int, error) {
n, err := pr.r.Read(p)
if errors.Is(err, io.EOF) {
cerr := CheckPresignedSignature(pr.ctx, pr.auth, pr.secret, pr.debug)
if cerr != nil {
return n, cerr
}
}
return n, err
}
// CheckPresignedSignature validates presigned request signature
func CheckPresignedSignature(ctx *fiber.Ctx, auth AuthData, secret string, debug bool) error {
signedHdrs := strings.Split(auth.SignedHeaders, ";")
var contentLength int64
var err error
contentLengthStr := ctx.Get("Content-Length")
if contentLengthStr != "" {
contentLength, err = strconv.ParseInt(contentLengthStr, 10, 64)
if err != nil {
return s3err.GetAPIError(s3err.ErrInvalidRequest)
}
}
// Create a new http request instance from fasthttp request
req, err := createPresignedHttpRequestFromCtx(ctx, signedHdrs, contentLength)
if err != nil {
return fmt.Errorf("create http request from context: %w", err)
}
date, _ := time.Parse(iso8601Format, auth.Date)
signer := v4.NewSigner()
uri, _, signErr := signer.PresignHTTP(ctx.Context(), aws.Credentials{
AccessKeyID: auth.Access,
SecretAccessKey: secret,
}, req, unsignedPayload, service, auth.Region, date, func(options *v4.SignerOptions) {
options.DisableURIPathEscaping = true
if debug {
options.LogSigning = true
options.Logger = logging.NewStandardLogger(os.Stderr)
}
})
if signErr != nil {
return fmt.Errorf("presign generated http request: %w", err)
}
urlParts, err := url.Parse(uri)
if err != nil {
return fmt.Errorf("parse presigned url: %w", err)
}
signature := urlParts.Query().Get("X-Amz-Signature")
if signature != auth.Signature {
return s3err.GetAPIError(s3err.ErrSignatureDoesNotMatch)
}
return nil
}
// https://docs.aws.amazon.com/AmazonS3/latest/API/sigv4-query-string-auth.html
//
// # ParsePresignedURIParts parses and validates request URL query parameters
//
// ?X-Amz-Algorithm=AWS4-HMAC-SHA256
// &X-Amz-Credential=access-key-id/20130721/us-east-1/s3/aws4_request
// &X-Amz-Date=20130721T201207Z
// &X-Amz-Expires=86400
// &X-Amz-SignedHeaders=host
// &X-Amz-Signature=1e68ad45c1db540284a4a1eca3884c293ba1a0ff63ab9db9a15b5b29dfa02cd8
func ParsePresignedURIParts(ctx *fiber.Ctx) (AuthData, error) {
a := AuthData{}
// Get and verify algorithm query parameter
algo := ctx.Query("X-Amz-Algorithm")
if algo == "" {
return a, s3err.GetAPIError(s3err.ErrInvalidQueryParams)
}
if algo != "AWS4-HMAC-SHA256" {
return a, s3err.GetAPIError(s3err.ErrInvalidQuerySignatureAlgo)
}
// Parse and validate credentials query parameter
credsQuery := ctx.Query("X-Amz-Credential")
if credsQuery == "" {
return a, s3err.GetAPIError(s3err.ErrInvalidQueryParams)
}
creds := strings.Split(credsQuery, "/")
if len(creds) != 5 {
return a, s3err.GetAPIError(s3err.ErrCredMalformed)
}
if creds[3] != "s3" {
return a, s3err.GetAPIError(s3err.ErrSignatureIncorrService)
}
if creds[4] != "aws4_request" {
return a, s3err.GetAPIError(s3err.ErrSignatureTerminationStr)
}
_, err := time.Parse(yyyymmdd, creds[1])
if err != nil {
return a, s3err.GetAPIError(s3err.ErrSignatureDateDoesNotMatch)
}
// Parse and validate Date query param
date := ctx.Query("X-Amz-Date")
if date == "" {
return a, s3err.GetAPIError(s3err.ErrInvalidQueryParams)
}
tdate, err := time.Parse(iso8601Format, date)
if err != nil {
return a, s3err.GetAPIError(s3err.ErrMalformedDate)
}
if date[:8] != creds[1] {
return a, s3err.GetAPIError(s3err.ErrSignatureDateDoesNotMatch)
}
if ctx.Locals("region") != creds[2] {
return a, s3err.APIError{
Code: "SignatureDoesNotMatch",
Description: fmt.Sprintf("Credential should be scoped to a valid Region, not %v", creds[2]),
HTTPStatusCode: http.StatusForbidden,
}
}
signature := ctx.Query("X-Amz-Signature")
if signature == "" {
return a, s3err.GetAPIError(s3err.ErrInvalidQueryParams)
}
signedHdrs := ctx.Query("X-Amz-SignedHeaders")
if signedHdrs == "" {
return a, s3err.GetAPIError(s3err.ErrInvalidQueryParams)
}
// Validate X-Amz-Expires query param and check if request is expired
err = validateExpiration(ctx.Query("X-Amz-Expires"), tdate)
if err != nil {
return a, err
}
a.Signature = signature
a.Access = creds[0]
a.Algorithm = algo
a.Region = creds[2]
a.SignedHeaders = signedHdrs
a.Date = date
return a, nil
}
func validateExpiration(str string, date time.Time) error {
if str == "" {
return s3err.GetAPIError(s3err.ErrInvalidQueryParams)
}
exp, err := strconv.Atoi(str)
if err != nil {
return s3err.GetAPIError(s3err.ErrMalformedExpires)
}
if exp < 0 {
return s3err.GetAPIError(s3err.ErrNegativeExpires)
}
if exp > 604800 {
return s3err.GetAPIError(s3err.ErrMaximumExpires)
}
now := time.Now()
passed := int(now.Sub(date).Seconds())
if passed > exp {
return s3err.GetAPIError(s3err.ErrExpiredPresignRequest)
}
return nil
}


@@ -0,0 +1,100 @@
// Copyright 2023 Versity Software
// This file is licensed under the Apache License, Version 2.0
// (the "License"); you may not use this file except in compliance
// with the License. You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing,
// software distributed under the License is distributed on an
// "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
// KIND, either express or implied. See the License for the
// specific language governing permissions and limitations
// under the License.
package utils
import (
"testing"
"time"
"github.com/versity/versitygw/s3err"
)
func Test_validateExpiration(t *testing.T) {
type args struct {
str string
date time.Time
}
tests := []struct {
name string
args args
err error
}{
{
name: "empty-expiration",
args: args{
str: "",
date: time.Now(),
},
err: s3err.GetAPIError(s3err.ErrInvalidQueryParams),
},
{
name: "invalid-expiration",
args: args{
str: "invalid_expiration",
date: time.Now(),
},
err: s3err.GetAPIError(s3err.ErrMalformedExpires),
},
{
name: "negative-expiration",
args: args{
str: "-320",
date: time.Now(),
},
err: s3err.GetAPIError(s3err.ErrNegativeExpires),
},
{
name: "exceeding-expiration",
args: args{
str: "6048000",
date: time.Now(),
},
err: s3err.GetAPIError(s3err.ErrMaximumExpires),
},
{
name: "expired value",
args: args{
str: "200",
date: time.Now().AddDate(0, 0, -1),
},
err: s3err.GetAPIError(s3err.ErrExpiredPresignRequest),
},
{
name: "valid expiration",
args: args{
str: "300",
date: time.Now(),
},
err: nil,
},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
err := validateExpiration(tt.args.str, tt.args.date)
// Check for nil case
if tt.err == nil && err != nil {
t.Errorf("Expected nil error, got: %v", err)
return
} else if tt.err == nil && err == nil {
// Both are nil, no need for further comparison
return
}
if err.Error() != tt.err.Error() {
t.Errorf("Expected error: %v, got: %v", tt.err, err)
}
})
}
}

s3api/utils/sign_hack.go (new file, 48 lines)

@@ -0,0 +1,48 @@
// Copyright 2023 Versity Software
// This file is licensed under the Apache License, Version 2.0
// (the "License"); you may not use this file except in compliance
// with the License. You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing,
// software distributed under the License is distributed on an
// "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
// KIND, either express or implied. See the License for the
// specific language governing permissions and limitations
// under the License.
package utils
import (
"reflect"
"unsafe"
)
// This is a hack to replace the default IgnoredHeaders in the aws-sdk-go-v2
// internal/v4 package. Some AWS applications
// (e.g. AWS Java SDK v1, Athena JDBC driver, s3 browser) sign the requests
// including the User-Agent header. The aws sdk doesn't allow directly
// modifying the ignored header list. Below is a hack to replace this list
// with our own.
type Rule interface {
IsValid(value string) bool
}
type Rules []Rule
//go:linkname __ignoredHeaders github.com/aws/aws-sdk-go-v2/aws/signer/internal/v4.IgnoredHeaders
var __ignoredHeaders unsafe.Pointer
func init() {
// Avoids "go.info.github.com/aws/aws-sdk-go-v2/aws/signer/internal/v4.IgnoredHeaders:
// relocation target go.info.github.com/xxx/xxx/xxx.Rules not defined"
var ignoredHeaders = (*Rules)(unsafe.Pointer(&__ignoredHeaders))
// clear the map, and set just the ignored headers we want
reflect.ValueOf((*ignoredHeaders)[0]).FieldByName("Rule").Elem().Clear()
reflect.ValueOf((*ignoredHeaders)[0]).FieldByName("Rule").Elem().SetMapIndex(
reflect.ValueOf("Authorization"), reflect.ValueOf(struct{}{}))
reflect.ValueOf((*ignoredHeaders)[0]).FieldByName("Rule").Elem().SetMapIndex(
reflect.ValueOf("Expect"), reflect.ValueOf(struct{}{}))
}


@@ -23,6 +23,7 @@ import (
"regexp"
"strconv"
"strings"
"time"
"github.com/gofiber/fiber/v2"
"github.com/valyala/fasthttp"
@@ -86,6 +87,66 @@ func createHttpRequestFromCtx(ctx *fiber.Ctx, signedHdrs []string, contentLength
return httpReq, nil
}
var (
signedQueryArgs = map[string]bool{
"X-Amz-Algorithm": true,
"X-Amz-Credential": true,
"X-Amz-Date": true,
"X-Amz-SignedHeaders": true,
"X-Amz-Signature": true,
}
)
func createPresignedHttpRequestFromCtx(ctx *fiber.Ctx, signedHdrs []string, contentLength int64) (*http.Request, error) {
req := ctx.Request()
var body io.Reader
if IsBigDataAction(ctx) {
body = req.BodyStream()
} else {
body = bytes.NewReader(req.Body())
}
uri := string(ctx.Request().URI().Path())
isFirst := true
ctx.Request().URI().QueryArgs().VisitAll(func(key, value []byte) {
_, ok := signedQueryArgs[string(key)]
if !ok {
if isFirst {
uri += fmt.Sprintf("?%s=%s", key, value)
isFirst = false
} else {
uri += fmt.Sprintf("&%s=%s", key, value)
}
}
})
httpReq, err := http.NewRequest(string(req.Header.Method()), uri, body)
if err != nil {
return nil, errors.New("error in creating an http request")
}
// Set the request headers
req.Header.VisitAll(func(key, value []byte) {
keyStr := string(key)
if includeHeader(keyStr, signedHdrs) {
httpReq.Header.Add(keyStr, string(value))
}
})
// If Content-Length is not in the signed headers, zero it out so it is
// excluded from signing; otherwise keep the provided content length
if !includeHeader("Content-Length", signedHdrs) {
httpReq.ContentLength = 0
} else {
httpReq.ContentLength = contentLength
}
// Set the Host header
httpReq.Host = string(req.Header.Host())
return httpReq, nil
}
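A small illustration (not gateway code) of the same query filtering using net/url; note the real code above preserves the original argument order and raw encoding, while url.Values.Encode sorts keys:

```go
u, _ := url.Parse("http://127.0.0.1:7070/bucket/key?partNumber=1&uploadId=abc" +
	"&X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Signature=deadbeef")
kept := url.Values{}
for k, vs := range u.Query() {
	// drop the SigV4 presign parameters, keep everything else
	if !signedQueryArgs[k] {
		kept[k] = vs
	}
}
fmt.Println(u.Path + "?" + kept.Encode())
// Output: /bucket/key?partNumber=1&uploadId=abc
```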
func SetMetaHeaders(ctx *fiber.Ctx, meta map[string]string) {
ctx.Response().Header.DisableNormalizing()
for key, val := range meta {
@@ -149,3 +210,26 @@ func IsBigDataAction(ctx *fiber.Ctx) bool {
}
return false
}
func ValidateDate(date time.Time) error {
now := time.Now().UTC()
diff := date.Unix() - now.Unix()
// Reject timestamps that differ from the current time by more than a minute
if diff > 60 {
return s3err.APIError{
Code: "SignatureDoesNotMatch",
Description: fmt.Sprintf("Signature not yet current: %s is still later than %s", date.Format(iso8601Format), now.Format(iso8601Format)),
HTTPStatusCode: http.StatusForbidden,
}
}
if diff < -60 {
return s3err.APIError{
Code: "SignatureDoesNotMatch",
Description: fmt.Sprintf("Signature expired: %s is now earlier than %s", date.Format(iso8601Format), now.Format(iso8601Format)),
HTTPStatusCode: http.StatusForbidden,
}
}
return nil
}
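Expected behavior, sketched against the one-minute skew window above:

```go
now := time.Now().UTC()
fmt.Println(ValidateDate(now))                       // <nil>
fmt.Println(ValidateDate(now.Add(2 * time.Minute)))  // SignatureDoesNotMatch (not yet current)
fmt.Println(ValidateDate(now.Add(-2 * time.Minute))) // SignatureDoesNotMatch (expired)
```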

(deleted file)

@@ -1,6 +0,0 @@
https://doc.s3.amazonaws.com/2006-03-01/AmazonS3.xsd
see https://blog.aqwari.net/xml-schema-go/
go install aqwari.net/xml/cmd/xsdgen@latest
xsdgen -o s3api_xsd_generated.go -pkg s3response AmazonS3.xsd

File diff suppressed because it is too large

s3response/s3response.go (modified)

@@ -16,6 +16,7 @@ package s3response
import (
"encoding/xml"
"time"
"github.com/aws/aws-sdk-go-v2/service/s3/types"
)
@@ -73,7 +74,7 @@ type ListMultipartUploadsResult struct {
CommonPrefixes []CommonPrefix
}
-// Upload desribes in progress multipart upload
+// Upload describes in progress multipart upload
type Upload struct {
Key string
UploadID string `xml:"UploadId"`
@@ -107,7 +108,8 @@ type TagSet struct {
}
type Tagging struct {
-TagSet TagSet `xml:"TagSet"`
+XMLName xml.Name `xml:"http://s3.amazonaws.com/doc/2006-03-01/ Tagging" json:"-"`
+TagSet TagSet `xml:"TagSet"`
}
type DeleteObjects struct {
@@ -139,3 +141,58 @@ type Bucket struct {
Name string `json:"name"`
Owner string `json:"owner"`
}
type ListAllMyBucketsResult struct {
XMLName xml.Name `xml:"http://s3.amazonaws.com/doc/2006-03-01/ ListAllMyBucketsResult" json:"-"`
Owner CanonicalUser
Buckets ListAllMyBucketsList
}
type ListAllMyBucketsEntry struct {
Name string
CreationDate time.Time
}
type ListAllMyBucketsList struct {
Bucket []ListAllMyBucketsEntry
}
type CanonicalUser struct {
ID string
DisplayName string
}
type CopyObjectResult struct {
XMLName xml.Name `xml:"http://s3.amazonaws.com/doc/2006-03-01/ CopyObjectResult" json:"-"`
LastModified time.Time
ETag string
}
type AccessControlPolicy struct {
XMLName xml.Name `xml:"http://s3.amazonaws.com/doc/2006-03-01/ AccessControlPolicy" json:"-"`
Owner CanonicalUser
AccessControlList AccessControlList
}
type AccessControlList struct {
Grant []Grant
}
type Grant struct {
Grantee Grantee
Permission string
}
// Set the following to encode correctly:
//
// Grantee: s3response.Grantee{
// Xsi: "http://www.w3.org/2001/XMLSchema-instance",
// Type: "CanonicalUser",
// },
type Grantee struct {
XMLName xml.Name `xml:"Grantee"`
Xsi string `xml:"xmlns:xsi,attr,omitempty"`
Type string `xml:"xsi:type,attr,omitempty"`
ID string
DisplayName string
}
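As a quick sanity check on the list-buckets encoding fix, marshaling the namespaced type with encoding/xml yields the xmlns attribute on the root element. A minimal sketch with placeholder owner and bucket values:

```go
res := s3response.ListAllMyBucketsResult{
	Owner: s3response.CanonicalUser{ID: "admin", DisplayName: "admin"},
	Buckets: s3response.ListAllMyBucketsList{
		Bucket: []s3response.ListAllMyBucketsEntry{
			{Name: "versity-gwtest-bucket-one", CreationDate: time.Now()},
		},
	},
}
out, _ := xml.MarshalIndent(res, "", "  ")
fmt.Println(string(out))
// <ListAllMyBucketsResult xmlns="http://s3.amazonaws.com/doc/2006-03-01/">
//   <Owner>...</Owner>
//   <Buckets>...</Buckets>
// </ListAllMyBucketsResult>
```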

tests/.env.default (new file, 9 lines)

@@ -0,0 +1,9 @@
AWS_REGION=us-east-1
AWS_PROFILE=versity
VERSITY_EXE=./versitygw
BACKEND=posix
LOCAL_FOLDER=/tmp/gw
AWS_ENDPOINT_URL=http://127.0.0.1:7070
BUCKET_ONE_NAME=versity-gwtest-bucket-one
BUCKET_TWO_NAME=versity-gwtest-bucket-two
RECREATE_BUCKETS=true

tests/.env.versitygw (new file, 9 lines)

@@ -0,0 +1,9 @@
AWS_REGION=us-east-1
AWS_PROFILE=versity
VERSITY_EXE=./versitygw
BACKEND=posix
LOCAL_FOLDER=/tmp/gw
AWS_ENDPOINT_URL=http://127.0.0.1:7070
BUCKET_ONE_NAME=versity-gwtest-bucket-one
BUCKET_TWO_NAME=versity-gwtest-bucket-two
RECREATE_BUCKETS=true

tests/README.md (new file, 13 lines)

@@ -0,0 +1,13 @@
# Command-Line Tests

Instructions:

1. Build the `versitygw` binary.
2. Create a local AWS profile for the S3 connection, and add your `AWS_ACCESS_KEY_ID` and `AWS_SECRET_ACCESS_KEY` values to the profile.
3. Create an environment file (`.env`) similar to the ones in this folder, setting the `AWS_PROFILE` parameter to the name of the profile you created.
4. From the root repo folder, run `VERSITYGW_TEST_ENV=<env file> tests/s3_bucket_tests.sh`.
5. If running/testing the GitHub workflow locally, create a `.secrets` file, and set the `AWS_ACCESS_KEY_ID` and `AWS_SECRET_ACCESS_KEY` parameters in it to the values of your AWS S3 IAM account:
```
AWS_ACCESS_KEY_ID=<key_id>
AWS_SECRET_ACCESS_KEY=<secret_key>
```
6. To run the workflow locally, install **act** and run it with `act -W .github/workflows/system.yml`.

tests/posix_tests.sh (new executable file, 88 lines)

@@ -0,0 +1,88 @@
#!/usr/bin/env bats
source ./tests/setup.sh
source ./tests/util.sh
source ./tests/util_posix.sh
# test that changes to local folders and files are reflected on S3
@test "test_local_creation_deletion" {
if [[ $RECREATE_BUCKETS != "true" ]]; then
return
fi
local object_name="test-object"
mkdir "$LOCAL_FOLDER"/"$BUCKET_ONE_NAME"
local object="$BUCKET_ONE_NAME"/"$object_name"
touch "$LOCAL_FOLDER"/"$object"
bucket_exists_remote_and_local "$BUCKET_ONE_NAME" || local bucket_exists_two=$?
[[ $bucket_exists_two -eq 0 ]] || fail "Failed bucket existence check"
object_exists_remote_and_local "$object" || local object_exists_two=$?
[[ $object_exists_two -eq 0 ]] || fail "Failed object existence check"
rm "$LOCAL_FOLDER"/"$object"
sleep 1
object_not_exists_remote_and_local "$object" || local object_deleted=$?
[[ $object_deleted -eq 0 ]] || fail "Failed object deletion check"
rmdir "$LOCAL_FOLDER"/"$BUCKET_ONE_NAME"
sleep 1
bucket_not_exists_remote_and_local "$BUCKET_ONE_NAME" || local bucket_deleted=$?
[[ $bucket_deleted -eq 0 ]] || fail "Failed bucket deletion check"
}
# test head-object command
@test "test_head_object" {
local bucket_name=$BUCKET_ONE_NAME
local object_name="object-one"
create_test_files $object_name
if [ -e "$LOCAL_FOLDER"/"$bucket_name"/$object_name ]; then
chmod 755 "$LOCAL_FOLDER"/"$bucket_name"/$object_name
fi
setup_bucket "$bucket_name" || local created=$?
[[ $created -eq 0 ]] || fail "Error creating bucket"
put_object "$test_file_folder"/"$object_name" "$bucket_name"/"$object_name" || local result="$?"
[[ $result -eq 0 ]] || fail "Error adding object one"
chmod 000 "$LOCAL_FOLDER"/"$bucket_name"/$object_name
sleep 1
object_is_accessible "$bucket_name" $object_name || local accessible=$?
[[ $accessible -eq 1 ]] || fail "Object should be inaccessible"
chmod 755 "$LOCAL_FOLDER"/"$bucket_name"/$object_name
sleep 1
object_is_accessible "$bucket_name" $object_name || local accessible_two=$?
[[ $accessible_two -eq 0 ]] || fail "Object should be accessible"
delete_object "$bucket_name"/$object_name
delete_bucket_or_contents "$bucket_name"
delete_test_files $object_name
}
# check info, accessibility of bucket
@test "test_get_bucket_info" {
if [ -e "$LOCAL_FOLDER"/"$BUCKET_ONE_NAME" ]; then
chmod 755 "$LOCAL_FOLDER"/"$BUCKET_ONE_NAME"
sleep 1
else
setup_bucket "$BUCKET_ONE_NAME" || local created=$?
[[ $created -eq 0 ]] || fail "Error creating bucket"
fi
chmod 000 "$LOCAL_FOLDER"/"$BUCKET_ONE_NAME"
sleep 1
bucket_is_accessible "$BUCKET_ONE_NAME" || local accessible=$?
[[ $accessible -eq 1 ]] || fail "Bucket should be inaccessible"
chmod 755 "$LOCAL_FOLDER"/"$BUCKET_ONE_NAME"
sleep 1
bucket_is_accessible "$BUCKET_ONE_NAME" || local accessible_two=$?
[[ $accessible_two -eq 0 ]] || fail "Bucket should be accessible"
delete_bucket_or_contents "$BUCKET_ONE_NAME"
}

tests/s3_bucket_tests.sh (new executable file, 403 lines)

@@ -0,0 +1,403 @@
#!/usr/bin/env bats
source ./tests/setup.sh
source ./tests/util.sh
# test creation and deletion of bucket on versitygw
@test "test_create_delete_bucket" {
if [[ $RECREATE_BUCKETS != "true" ]]; then
return
fi
setup_bucket "$BUCKET_ONE_NAME" || local create_result=$?
[[ $create_result -eq 0 ]] || fail "Failed to create bucket"
bucket_exists "$BUCKET_ONE_NAME" || local exists_three=$?
[[ $exists_three -eq 0 ]] || fail "Failed bucket existence check"
delete_bucket_or_contents "$BUCKET_ONE_NAME" || local delete_result_two=$?
[[ $delete_result_two -eq 0 ]] || fail "Failed to delete bucket"
}
# test adding and removing an object on versitygw
@test "test_put_object" {
local object_name="test-object"
setup_bucket "$BUCKET_ONE_NAME" || local setup_result=$?
[[ $setup_result -eq 0 ]] || fail "error setting up bucket"
create_test_files "$object_name" || local create_result=$?
object="$BUCKET_ONE_NAME"/$object_name
put_object "$test_file_folder"/"$object_name" "$object" || local put_object=$?
[[ $put_object -eq 0 ]] || fail "Failed to add object to bucket"
object_exists "$object" || local exists_result_one=$?
[[ $exists_result_one -eq 0 ]] || fail "Object not added to bucket"
delete_object "$object" || local delete_result=$?
[[ $delete_result -eq 0 ]] || fail "Failed to delete object"
object_exists "$object" || local exists_result_two=$?
[[ $exists_result_two -eq 1 ]] || fail "Object not removed from bucket"
delete_bucket_or_contents "$BUCKET_ONE_NAME"
delete_test_files "$object_name"
}
# test listing buckets on versitygw
@test "test_list_buckets" {
setup_bucket "$BUCKET_ONE_NAME" || local setup_result_one=$?
[[ $setup_result_one -eq 0 ]] || fail "Bucket one setup error"
setup_bucket "$BUCKET_TWO_NAME" || local setup_result_two=$?
[[ $setup_result_two -eq 0 ]] || fail "Bucket two setup error"
list_buckets
local bucket_one_found=false
local bucket_two_found=false
for bucket in "${bucket_array[@]}"; do
if [ "$bucket" == "$BUCKET_ONE_NAME" ]; then
bucket_one_found=true
elif [ "$bucket" == "$BUCKET_TWO_NAME" ]; then
bucket_two_found=true
fi
if [ $bucket_one_found == true ] && [ $bucket_two_found == true ]; then
break
fi
done
delete_bucket_or_contents "$BUCKET_ONE_NAME"
delete_bucket_or_contents "$BUCKET_TWO_NAME"
if [ $bucket_one_found != true ] || [ $bucket_two_found != true ]; then
fail "'$BUCKET_ONE_NAME' and/or '$BUCKET_TWO_NAME' not listed (all buckets: ${bucket_array[*]})"
fi
}
# test listing a bucket's objects on versitygw
@test "test_list_objects" {
object_one="test-file-one"
object_two="test-file-two"
create_test_files $object_one $object_two
setup_bucket "$BUCKET_ONE_NAME" || local result_one=$?
[[ $result_one -eq 0 ]] || fail "Error creating bucket"
put_object "$test_file_folder"/$object_one "$BUCKET_ONE_NAME"/"$object_one" || local result_two=$?
[[ $result_two -eq 0 ]] || fail "Error adding object one"
put_object "$test_file_folder"/$object_two "$BUCKET_ONE_NAME"/"$object_two" || local result_three=$?
[[ $result_three -eq 0 ]] || fail "Error adding object two"
list_objects "$BUCKET_ONE_NAME"
local object_one_found=false
local object_two_found=false
for object in "${object_array[@]}"; do
if [ "$object" == $object_one ]; then
object_one_found=true
elif [ "$object" == $object_two ]; then
object_two_found=true
fi
done
delete_bucket_or_contents "$BUCKET_ONE_NAME"
delete_test_files $object_one $object_two
if [ $object_one_found != true ] || [ $object_two_found != true ]; then
fail "$object_one and/or $object_two not listed (all objects: ${object_array[*]})"
fi
}
# test ability to retrieve bucket ACLs
@test "test_get_bucket_acl" {
setup_bucket "$BUCKET_ONE_NAME" || local created=$?
[[ $created -eq 0 ]] || fail "Error creating bucket"
get_bucket_acl "$BUCKET_ONE_NAME" || local result=$?
[[ $result -eq 0 ]] || fail "Error retrieving acl"
id=$(echo "$acl" | jq '.Owner.ID')
[[ $id == '"'"$AWS_ACCESS_KEY_ID"'"' ]] || fail "Acl mismatch"
delete_bucket_or_contents "$BUCKET_ONE_NAME"
}
# test ability to delete multiple objects from bucket
@test "test_delete_objects" {
local object_one="test-file-one"
local object_two="test-file-two"
create_test_files "$object_one" "$object_two" || local created=$?
[[ $created -eq 0 ]] || fail "Error creating test files"
setup_bucket "$BUCKET_ONE_NAME" || local result_one=$?
[[ $result_one -eq 0 ]] || fail "Error creating bucket"
put_object "$test_file_folder"/"$object_one" "$BUCKET_ONE_NAME"/"$object_one" || local result_two=$?
[[ $result_two -eq 0 ]] || fail "Error adding object one"
put_object "$test_file_folder"/"$object_two" "$BUCKET_ONE_NAME"/"$object_two" || local result_three=$?
[[ $result_three -eq 0 ]] || fail "Error adding object two"
error=$(aws s3api delete-objects --bucket "$BUCKET_ONE_NAME" --delete '{
"Objects": [
{"Key": "test-file-one"},
{"Key": "test-file-two"}
]
}') || local result=$?
[[ $result -eq 0 ]] || fail "Error deleting objects: $error"
object_exists "$BUCKET_ONE_NAME"/"$object_one" || local exists_one=$?
[[ $exists_one -eq 1 ]] || fail "Object one not deleted"
object_exists "$BUCKET_ONE_NAME"/"$object_two" || local exists_two=$?
[[ $exists_two -eq 1 ]] || fail "Object two not deleted"
delete_bucket_or_contents "$BUCKET_ONE_NAME"
delete_test_files "$object_one" "$object_two"
}
# test ability to set and retrieve bucket tags
@test "test-set-get-bucket-tags" {
local key="test_key"
local value="test_value"
setup_bucket "$BUCKET_ONE_NAME" || local result=$?
[[ $result -eq 0 ]] || fail "Failed to create bucket '$BUCKET_ONE_NAME'"
get_bucket_tags "$BUCKET_ONE_NAME" || local get_result=$?
[[ $get_result -eq 0 ]] || fail "Error getting bucket tags"
tag_set=$(echo "$tags" | jq '.TagSet')
[[ $tag_set == "[]" ]] || fail "Error: tags not empty"
put_bucket_tag "$BUCKET_ONE_NAME" $key $value
get_bucket_tags "$BUCKET_ONE_NAME" || local get_result_two=$?
[[ $get_result_two -eq 0 ]] || fail "Error getting bucket tags"
tag_set_key=$(echo "$tags" | jq '.TagSet[0].Key')
tag_set_value=$(echo "$tags" | jq '.TagSet[0].Value')
[[ $tag_set_key == '"'$key'"' ]] || fail "Key mismatch"
[[ $tag_set_value == '"'$value'"' ]] || fail "Value mismatch"
delete_bucket_or_contents "$BUCKET_ONE_NAME"
}
# test v1 s3api list objects command
@test "test-s3api-list-objects-v1" {
local object_one="test-file-one"
local object_two="test-file-two"
local object_two_data="test data\n"
create_test_files "$object_one" "$object_two" || local created=$?
[[ $created -eq 0 ]] || fail "Error creating test files"
printf "%s" "$object_two_data" > "$test_file_folder"/"$object_two"
setup_bucket "$BUCKET_ONE_NAME" || local result=$?
[[ $result -eq 0 ]] || fail "Failed to create bucket '$BUCKET_ONE_NAME'"
put_object "$test_file_folder"/"$object_one" "$BUCKET_ONE_NAME"/"$object_one" || local put_object_one=$?
[[ $put_object_one -eq 0 ]] || fail "Failed to add object $object_one"
put_object "$test_file_folder"/"$object_two" "$BUCKET_ONE_NAME"/"$object_two" || local put_object_two=$?
[[ $put_object_two -eq 0 ]] || fail "Failed to add object $object_two"
list_objects_s3api_v1 "$BUCKET_ONE_NAME"
key_one=$(echo "$objects" | jq '.Contents[0].Key')
[[ $key_one == '"'$object_one'"' ]] || fail "Object one mismatch"
size_one=$(echo "$objects" | jq '.Contents[0].Size')
[[ $size_one -eq 0 ]] || fail "Object one size mismatch"
key_two=$(echo "$objects" | jq '.Contents[1].Key')
[[ $key_two == '"'$object_two'"' ]] || fail "Object two mismatch"
size_two=$(echo "$objects" | jq '.Contents[1].Size')
[[ $size_two -eq ${#object_two_data} ]] || fail "Object two size mismatch"
delete_bucket_or_contents "$BUCKET_ONE_NAME"
delete_test_files "$object_one" "$object_two"
}
# test v2 s3api list objects command
@test "test-s3api-list-objects-v2" {
local object_one="test-file-one"
local object_two="test-file-two"
local object_two_data="test data\n"
create_test_files "$object_one" "$object_two" || local created=$?
[[ $created -eq 0 ]] || fail "Error creating test files"
printf "%s" "$object_two_data" > "$test_file_folder"/"$object_two"
setup_bucket "$BUCKET_ONE_NAME" || local result=$?
[[ $result -eq 0 ]] || fail "Failed to create bucket '$BUCKET_ONE_NAME'"
put_object "$test_file_folder"/"$object_one" "$BUCKET_ONE_NAME"/"$object_one" || local put_object_one=$?
[[ $put_object_one -eq 0 ]] || fail "Failed to add object $object_one"
put_object "$test_file_folder"/"$object_two" "$BUCKET_ONE_NAME"/"$object_two" || local put_object_two=$?
[[ $put_object_two -eq 0 ]] || fail "Failed to add object $object_two"
list_objects_s3api_v2 "$BUCKET_ONE_NAME"
key_one=$(echo "$objects" | jq '.Contents[0].Key')
[[ $key_one == '"'$object_one'"' ]] || fail "Object one mismatch"
size_one=$(echo "$objects" | jq '.Contents[0].Size')
[[ $size_one -eq 0 ]] || fail "Object one size mismatch"
key_two=$(echo "$objects" | jq '.Contents[1].Key')
[[ $key_two == '"'$object_two'"' ]] || fail "Object two mismatch"
size_two=$(echo "$objects" | jq '.Contents[1].Size')
[[ $size_two -eq ${#object_two_data} ]] || fail "Object two size mismatch"
delete_bucket_or_contents "$BUCKET_ONE_NAME"
delete_test_files "$object_one" "$object_two"
}
# test ability to set and retrieve object tags
@test "test-set-get-object-tags" {
local bucket_file="bucket-file"
local key="test_key"
local value="test_value"
create_test_files "$bucket_file" || local created=$?
[[ $created -eq 0 ]] || fail "Error creating test files"
setup_bucket "$BUCKET_ONE_NAME" || local result=$?
[[ $result -eq 0 ]] || fail "Failed to create bucket '$BUCKET_ONE_NAME'"
local object_path="$BUCKET_ONE_NAME"/"$bucket_file"
put_object "$test_file_folder"/"$bucket_file" "$object_path" || local put_object=$?
[[ $put_object -eq 0 ]] || fail "Failed to add object to bucket '$BUCKET_ONE_NAME'"
get_object_tags "$BUCKET_ONE_NAME" $bucket_file || local get_result=$?
[[ $get_result -eq 0 ]] || fail "Error getting object tags"
tag_set=$(echo "$tags" | jq '.TagSet')
[[ $tag_set == "[]" ]] || fail "Error: tags not empty"
put_object_tag "$BUCKET_ONE_NAME" $bucket_file $key $value
get_object_tags "$BUCKET_ONE_NAME" $bucket_file || local get_result_two=$?
[[ $get_result_two -eq 0 ]] || fail "Error getting object tags"
tag_set_key=$(echo "$tags" | jq '.TagSet[0].Key')
tag_set_value=$(echo "$tags" | jq '.TagSet[0].Value')
[[ $tag_set_key == '"'$key'"' ]] || fail "Key mismatch"
[[ $tag_set_value == '"'$value'"' ]] || fail "Value mismatch"
delete_bucket_or_contents "$BUCKET_ONE_NAME"
delete_test_files $bucket_file
}
# test multi-part upload
@test "test-multi-part-upload" {
local bucket_file="bucket-file"
bucket_file_data="test file\n"
create_test_files "$bucket_file" || local created=$?
printf "%s" "$bucket_file_data" > "$test_file_folder"/$bucket_file
[[ $created -eq 0 ]] || fail "Error creating test files"
setup_bucket "$BUCKET_ONE_NAME" || local result=$?
[[ $result -eq 0 ]] || fail "Failed to create bucket '$BUCKET_ONE_NAME'"
multipart_upload "$BUCKET_ONE_NAME" "$bucket_file" "$test_file_folder"/"$bucket_file" 4 || upload_result=$?
[[ $upload_result -eq 0 ]] || fail "Error performing multipart upload"
copy_file "s3://$BUCKET_ONE_NAME/$bucket_file" "$test_file_folder/$bucket_file-copy"
copy_data=$(<"$test_file_folder"/$bucket_file-copy)
[[ $bucket_file_data == "$copy_data" ]] || fail "Data doesn't match"
delete_bucket_or_contents "$BUCKET_ONE_NAME"
delete_test_files $bucket_file
}
# test multi-part upload abort
@test "test-multi-part-upload-abort" {
local bucket_file="bucket-file"
bucket_file_data="test file\n"
create_test_files "$bucket_file" || local created=$?
printf "%s" "$bucket_file_data" > "$test_file_folder"/$bucket_file
[[ $created -eq 0 ]] || fail "Error creating test files"
setup_bucket "$BUCKET_ONE_NAME" || local result=$?
[[ $result -eq 0 ]] || fail "Failed to create bucket '$BUCKET_ONE_NAME'"
abort_multipart_upload "$BUCKET_ONE_NAME" "$bucket_file" "$test_file_folder"/"$bucket_file" 4 || abort_result=$?
[[ $abort_result -eq 0 ]] || fail "Abort failed"
object_exists "$BUCKET_ONE_NAME/$bucket_file" || exists=$?
[[ $exists -eq 1 ]] || fail "Upload file exists after abort"
delete_bucket_or_contents "$BUCKET_ONE_NAME"
delete_test_files $bucket_file
}
# test multi-part upload list parts command
@test "test-multipart-upload-list-parts" {
local bucket_file="bucket-file"
local bucket_file_data="test file\n"
create_test_files "$bucket_file" || local created=$?
[[ $created -eq 0 ]] || fail "Error creating test files"
printf "%s" "$bucket_file_data" > "$test_file_folder"/$bucket_file
setup_bucket "$BUCKET_ONE_NAME" || local result=$?
[[ $result -eq 0 ]] || fail "Failed to create bucket '$BUCKET_ONE_NAME'"
list_parts "$BUCKET_ONE_NAME" "$bucket_file" "$test_file_folder"/"$bucket_file" 4 || list_result=$?
[[ $list_result -eq 0 ]] || fail "Listing multipart upload parts failed"
# part count must be a local value: positional params aren't set in a @test body
local part_count=4
declare -a parts_map
for ((i=0;i<part_count;i++)); do
local part_number
local etag
part_number=$(echo "$parts" | jq ".[$i].PartNumber")
if [[ -z $part_number ]]; then
echo "error: blank part number"
return 1
fi
etag=$(echo "$parts" | jq ".[$i].ETag")
if [[ -z $etag ]]; then
echo "error: blank etag"
return 1
fi
parts_map[$part_number]=$etag
done
for ((i=0;i<part_count;i++)); do
local part_number
local etag
part_number=$(echo "$listed_parts" | jq ".Parts[$i].PartNumber")
etag=$(echo "$listed_parts" | jq ".Parts[$i].ETag")
if [[ ${parts_map[$part_number]} != "$etag" ]]; then
echo "error: etags don't match (part number: $part_number, etags ${parts_map[$part_number]},$etag)"
return 1
fi
done
delete_bucket_or_contents "$BUCKET_ONE_NAME"
delete_test_files $bucket_file
}
# test listing of active uploads
@test "test-multipart-upload-list-uploads" {
local bucket_file_one="bucket-file-one"
local bucket_file_two="bucket-file-two"
create_test_files "$bucket_file_one" "$bucket_file_two" || local created=$?
[[ $created -eq 0 ]] || fail "Error creating test files"
setup_bucket "$BUCKET_ONE_NAME" || local result=$?
[[ $result -eq 0 ]] || fail "Failed to create bucket '$BUCKET_ONE_NAME'"
list_multipart_uploads "$BUCKET_ONE_NAME" "$test_file_folder"/"$bucket_file_one" "$test_file_folder"/"$bucket_file_two"
[[ $? -eq 0 ]] || fail "failed to list multipart uploads"
local key_one
local key_two
key_one=$(echo "$uploads" | jq '.Uploads[0].Key')
key_two=$(echo "$uploads" | jq '.Uploads[1].Key')
key_one=${key_one//\"/}
key_two=${key_two//\"/}
echo "$test_file_folder/${bucket_file_one}abc"
echo "${key_one}abc"
echo "Length of test_file_folder/bucket_file_one: ${#test_file_folder}/${#bucket_file_one}"
echo "Length of key_one: ${#key_one}"
if [[ "$test_file_folder/$bucket_file_one" != *"$key_one" ]]; then
fail "Key mismatch ($test_file_folder/$bucket_file_one, $key_one)"
fi
if [[ "$test_file_folder/$bucket_file_two" != *"$key_two" ]]; then
fail "Key mismatch ($test_file_folder/$bucket_file_two, $key_two)"
fi
delete_bucket_or_contents "$BUCKET_ONE_NAME"
delete_test_files "$bucket_file_one" "$bucket_file_two"
}

tests/setup.sh (new file, 89 lines)

@@ -0,0 +1,89 @@
#!/usr/bin/env bats
setup() {
if [ "$GITHUB_ACTIONS" != "true" ] && [ -r .secrets ]; then
source .secrets
else
echo "Warning: no secrets file found"
fi
if [ -z "$VERSITYGW_TEST_ENV" ]; then
if [ -r tests/.env ]; then
source tests/.env
else
echo "Warning: no .env file found in tests folder"
fi
else
echo "$VERSITYGW_TEST_ENV"
# shellcheck source=./.env.default
source "$VERSITYGW_TEST_ENV"
fi
if [ -z "$AWS_ACCESS_KEY_ID" ]; then
echo "No AWS access key set"
return 1
elif [ -z "$AWS_SECRET_ACCESS_KEY" ]; then
echo "No AWS secret access key set"
return 1
elif [ -z "$VERSITY_EXE" ]; then
echo "No versity executable location set"
return 1
elif [ -z "$BACKEND" ]; then
echo "No backend parameter set (options: 'posix')"
return 1
elif [ -z "$AWS_REGION" ]; then
echo "No AWS region set"
return 1
elif [ -z "$AWS_PROFILE" ]; then
echo "No AWS profile set"
return 1
elif [ -z "$LOCAL_FOLDER" ]; then
echo "No local storage folder set"
return 1
elif [ -z "$AWS_ENDPOINT_URL" ]; then
echo "No AWS endpoint URL set"
return 1
elif [ -z "$BUCKET_ONE_NAME" ]; then
echo "No bucket one name set"
return 1
elif [ -z "$BUCKET_TWO_NAME" ]; then
echo "No bucket two name set"
return 1
elif [ -z "$RECREATE_BUCKETS" ]; then
echo "No recreate buckets parameter set"
return 1
elif [[ $RECREATE_BUCKETS != "true" ]] && [[ $RECREATE_BUCKETS != "false" ]]; then
echo "RECREATE_BUCKETS must be 'true' or 'false'"
return 1
fi
ROOT_ACCESS_KEY="$AWS_ACCESS_KEY_ID" ROOT_SECRET_KEY="$AWS_SECRET_ACCESS_KEY" "$VERSITY_EXE" "$BACKEND" "$LOCAL_FOLDER" &
# capture the gateway PID immediately after backgrounding it
versitygw_pid=$!
export versitygw_pid
export AWS_REGION
export AWS_PROFILE
export AWS_ENDPOINT_URL
export LOCAL_FOLDER
export BUCKET_ONE_NAME
export BUCKET_TWO_NAME
}
fail() {
echo "$1"
return 1
}
teardown() {
if [ -n "$versitygw_pid" ]; then
if ps -p "$versitygw_pid" > /dev/null; then
kill "$versitygw_pid"
wait "$versitygw_pid" || true
else
echo "Process with PID $versitygw_pid does not exist."
fi
else
echo "versitygw_pid is not set or empty."
fi
}

tests/util.sh (new file, 720 lines)

@@ -0,0 +1,720 @@
#!/usr/bin/env bats
# create an AWS bucket
# param: bucket name
# return 0 for success, 1 for failure
create_bucket() {
if [ $# -ne 1 ]; then
echo "create bucket missing bucket name"
return 1
fi
local exit_code=0
local error
error=$(aws s3 mb s3://"$1" 2>&1) || exit_code=$?
if [ $exit_code -ne 0 ]; then
echo "error creating bucket: $error"
return 1
fi
return 0
}
# delete an AWS bucket
# param: bucket name
# return 0 for success, 1 for failure
delete_bucket() {
if [ $# -ne 1 ]; then
echo "delete bucket missing bucket name"
return 1
fi
local exit_code=0
local error
error=$(aws s3 rb s3://"$1" 2>&1) || exit_code="$?"
if [ $exit_code -ne 0 ]; then
if [[ "$error" == *"The specified bucket does not exist"* ]]; then
return 0
else
echo "error deleting bucket: $error"
return 1
fi
fi
return 0
}
# recursively delete an AWS bucket
# param: bucket name
# return 0 for success, 1 for failure
delete_bucket_recursive() {
if [ $# -ne 1 ]; then
echo "delete bucket missing bucket name"
return 1
fi
local exit_code=0
local error
error=$(aws s3 rb s3://"$1" --force 2>&1) || exit_code="$?"
if [ $exit_code -ne 0 ]; then
if [[ "$error" == *"The specified bucket does not exist"* ]]; then
return 0
else
echo "error deleting bucket: $error"
return 1
fi
fi
return 0
}
# delete contents of a bucket
# param: bucket name
# return 0 for success, 1 for failure
delete_bucket_contents() {
if [ $# -ne 1 ]; then
echo "delete bucket missing bucket name"
return 1
fi
local exit_code=0
local error
error=$(aws s3 rm s3://"$1" --recursive 2>&1) || exit_code="$?"
if [ $exit_code -ne 0 ]; then
echo "error deleting bucket: $error"
return 1
fi
return 0
}
# check if bucket exists
# param: bucket name
# return 0 for true, 1 for false, 2 for error
bucket_exists() {
if [ $# -ne 1 ]; then
echo "bucket exists check missing bucket name"
return 2
fi
local exit_code=0
local error
error=$(aws s3 ls s3://"$1" 2>&1) || exit_code="$?"
echo "Exit code: $exit_code, error: $error"
if [ $exit_code -ne 0 ]; then
if [[ "$error" == *"The specified bucket does not exist"* ]] || [[ "$error" == *"Access Denied"* ]]; then
return 1
else
echo "error checking if bucket exists: $error"
return 2
fi
fi
return 0
}
# delete buckets or just the contents depending on RECREATE_BUCKETS parameter
# param: bucket name
# return: 0 for success, 1 for failure
delete_bucket_or_contents() {
if [ $# -ne 1 ]; then
echo "delete bucket or contents function requires bucket name"
return 1
fi
if [[ $RECREATE_BUCKETS != "true" ]]; then
delete_bucket_contents "$1" || local delete_result=$?
if [[ $delete_result -ne 0 ]]; then
echo "error deleting bucket contents"
return 1
fi
return 0
fi
delete_bucket_recursive "$1" || local delete_result=$?
if [[ $delete_result -ne 0 ]]; then
echo "Bucket deletion error"
return 1
fi
return 0
}
# if RECREATE_BUCKETS is set to true create bucket, deleting it if it exists to clear state. If not,
# check to see if it exists and return an error if it does not.
# param: bucket name
# return 0 for success, 1 for failure
setup_bucket() {
if [ $# -ne 1 ]; then
echo "bucket creation function requires bucket name"
return 1
fi
local exists_result
bucket_exists "$1" || exists_result=$?
if [[ $exists_result -eq 2 ]]; then
echo "Bucket existence check error"
return 1
fi
if [[ $exists_result -eq 0 ]]; then
delete_bucket_or_contents "$1" || delete_result=$?
if [[ $delete_result -ne 0 ]]; then
echo "error deleting bucket or contents"
return 1
fi
if [[ $RECREATE_BUCKETS != "true" ]]; then
# bucket pre-exists and its contents have been cleared; nothing to create
return 0
fi
elif [[ $RECREATE_BUCKETS != "true" ]]; then
echo "When RECREATE_BUCKETS isn't set to \"true\", buckets should be pre-created by user"
return 1
fi
local create_result
create_bucket "$1" || create_result=$?
if [[ $create_result -ne 0 ]]; then
echo "Error creating bucket"
return 1
fi
echo "Bucket creation success"
return 0
}
# check if object exists on S3 via gateway
# param: object path
# return 0 for true, 1 for false, 2 for error
object_exists() {
if [ $# -ne 1 ]; then
echo "object exists check missing object name"
return 2
fi
local exit_code=0
local error
error=$(aws s3 ls s3://"$1" 2>&1) || exit_code="$?"
if [ $exit_code -ne 0 ]; then
if [[ "$error" == "" ]]; then
return 1
else
echo "error checking if object exists: $error"
return 2
fi
fi
return 0
}
# add object to versitygw
# params: source file, destination copy location
# return 0 for success, 1 for failure
put_object() {
if [ $# -ne 2 ]; then
echo "put object command requires source, destination"
return 1
fi
local exit_code=0
local error
error=$(aws s3 cp "$1" s3://"$2" 2>&1) || exit_code=$?
if [ $exit_code -ne 0 ]; then
echo "error copying object to bucket: $error"
return 1
fi
return 0
}
# add object to versitygw if it doesn't exist
# params: source file, destination copy location
# return 0 for success or already exists, 1 for failure
check_and_put_object() {
if [ $# -ne 2 ]; then
echo "check and put object function requires source, destination"
return 1
fi
object_exists "$2" || local exists_result=$?
if [ "$exists_result" -eq 2 ]; then
echo "error checking if object exists"
return 1
fi
if [ "$exists_result" -eq 1 ]; then
put_object "$1" "$2" || local put_result=$?
if [ "$put_result" -ne 0 ]; then
echo "error adding object"
return 1
fi
fi
return 0
}
# delete object from versitygw
# param: object path, including bucket name
# return 0 for success, 1 for failure
delete_object() {
if [ $# -ne 1 ]; then
echo "delete object command requires object parameter"
return 1
fi
local exit_code=0
local error
error=$(aws s3 rm s3://"$1" 2>&1) || exit_code=$?
if [ $exit_code -ne 0 ]; then
echo "error deleting object: $error"
return 1
fi
return 0
}
# list buckets on versitygw
# no params
# export bucket_array (bucket names) on success, return 1 for failure
list_buckets() {
local exit_code=0
local output
output=$(aws s3 ls 2>&1) || exit_code=$?
if [ $exit_code -ne 0 ]; then
echo "error listing buckets: $output"
return 1
fi
bucket_array=()
while IFS= read -r line; do
bucket_name=$(echo "$line" | awk '{print $NF}')
bucket_array+=("$bucket_name")
done <<< "$output"
export bucket_array
}
# list objects on versitygw, in bucket or folder
# param: path of bucket or folder
# export object_array (object names) on success, return 1 for failure
list_objects() {
if [ $# -ne 1 ]; then
echo "list objects command requires bucket or folder"
return 1
fi
local exit_code=0
local output
output=$(aws s3 ls s3://"$1" 2>&1) || exit_code=$?
if [ $exit_code -ne 0 ]; then
echo "error listing objects: $output"
return 1
fi
object_array=()
while IFS= read -r line; do
object_name=$(echo "$line" | awk '{print $NF}')
object_array+=("$object_name")
done <<< "$output"
export object_array
}
# check if bucket info can be retrieved
# param: path of bucket or folder
# return 0 for yes, 1 for no, 2 for error
bucket_is_accessible() {
if [ $# -ne 1 ]; then
echo "bucket accessibility check missing bucket name"
return 2
fi
local exit_code=0
local error
error=$(aws s3api head-bucket --bucket "$1" 2>&1) || exit_code="$?"
if [ $exit_code -eq 0 ]; then
return 0
fi
if [[ "$error" == *"500"* ]]; then
return 1
fi
echo "Error checking bucket accessibility: $error"
return 2
}
# check if object info (etag) is accessible
# param: path of object
# return 0 for yes, 1 for no, 2 for error
object_is_accessible() {
if [ $# -ne 2 ]; then
echo "object accessibility check missing bucket and/or key"
return 2
fi
local exit_code=0
object_data=$(aws s3api head-object --bucket "$1" --key "$2" 2>&1) || exit_code="$?"
if [ $exit_code -ne 0 ]; then
echo "Error obtaining object data: $object_data"
return 2
fi
etag=$(echo "$object_data" | jq '.ETag')
if [[ "$etag" == '""' ]]; then
return 1
fi
return 0
}
# get bucket acl
# param: bucket path
# export acl for success, return 1 for error
get_bucket_acl() {
if [ $# -ne 1 ]; then
echo "bucket ACL command missing bucket name"
return 1
fi
local exit_code=0
acl=$(aws s3api get-bucket-acl --bucket "$1" 2>&1) || exit_code="$?"
if [ $exit_code -ne 0 ]; then
echo "Error getting bucket ACLs: $acl"
return 1
fi
export acl
}
# add tags to bucket
# params: bucket, key, value
# return: 0 for success, 1 for error
put_bucket_tag() {
if [ $# -ne 3 ]; then
echo "bucket tag command missing bucket name, key, value"
return 1
fi
local error
local result
error=$(aws s3api put-bucket-tagging --bucket "$1" --tagging "TagSet=[{Key=$2,Value=$3}]") || result=$?
if [[ $result -ne 0 ]]; then
echo "Error adding bucket tag: $error"
return 1
fi
return 0
}
# get bucket tags
# params: bucket
# export 'tags' on success, return 1 for error
get_bucket_tags() {
if [ $# -ne 1 ]; then
echo "get bucket tag command missing bucket name"
return 1
fi
local result
tags=$(aws s3api get-bucket-tagging --bucket "$1") || result=$?
if [[ $result -ne 0 ]]; then
echo "error getting bucket tags: $tags"
return 1
fi
export tags
}
# add tags to object
# params: object, key, value
# return: 0 for success, 1 for error
put_object_tag() {
if [ $# -ne 4 ]; then
echo "object tag command missing object name, file, key, and/or value"
return 1
fi
local error
local result
error=$(aws s3api put-object-tagging --bucket "$1" --key "$2" --tagging "TagSet=[{Key=$3,Value=$4}]") || result=$?
if [[ $result -ne 0 ]]; then
echo "Error adding object tag: $error"
return 1
fi
return 0
}
# get object tags
# params: bucket
# export 'tags' on success, return 1 for error
get_object_tags() {
if [ $# -ne 2 ]; then
echo "get object tag command missing bucket and/or key"
return 1
fi
local result
tags=$(aws s3api get-object-tagging --bucket "$1" --key "$2") || result=$?
if [[ $result -ne 0 ]]; then
echo "error getting object tags: $tags"
return 1
fi
export tags
}
# create a test file and export folder. do so in temp folder
# params: filename
# export test file folder on success, return 1 for error
create_test_files() {
if [ $# -lt 1 ]; then
echo "create test files command missing filename"
return 1
fi
test_file_folder=.
if [[ -z "$GITHUB_ACTIONS" ]]; then
test_file_folder=${TMPDIR:-/tmp/}versity-gwtest
mkdir -p "$test_file_folder" || local mkdir_result=$?
if [[ $mkdir_result -ne 0 ]]; then
echo "error creating test file folder"
return 1
fi
fi
for name in "$@"; do
touch "$test_file_folder"/"$name" || local touch_result=$?
if [[ $touch_result -ne 0 ]]; then
echo "error creating file $name"
fi
done
export test_file_folder
}
# delete a test file
# params: filename
# return: 0 for success, 1 for error
delete_test_files() {
if [ $# -lt 1 ]; then
echo "delete test files command missing filenames"
return 1
fi
if [ -z "$test_file_folder" ]; then
echo "no test file folder defined, not deleting"
return 1
fi
for name in "$@"; do
rm "$test_file_folder"/"$name" || rm_result=$?
if [[ $rm_result -ne 0 ]]; then
echo "error deleting file $name"
fi
done
return 0
}
# list objects in bucket, v1
# param: bucket
# export objects on success, return 1 for failure
list_objects_s3api_v1() {
if [ $# -ne 1 ]; then
echo "list objects command missing bucket"
return 1
fi
objects=$(aws s3api list-objects --bucket "$1") || local result=$?
if [[ $result -ne 0 ]]; then
echo "error listing objects: $objects"
return 1
fi
export objects
}
# list objects in bucket, v2
# param: bucket
# export objects on success, return 1 for failure
list_objects_s3api_v2() {
if [ $# -ne 1 ]; then
echo "list objects command missing bucket and/or path"
return 1
fi
objects=$(aws s3api list-objects-v2 --bucket "$1") || local result=$?
if [[ $result -ne 0 ]]; then
echo "error listing objects: $objects"
return 1
fi
export objects
}
# initialize a multipart upload
# params: bucket, key
# return 0 for success, 1 for failure
create_multipart_upload() {
if [ $# -ne 2 ]; then
echo "create multipart upload function must have bucket, key"
return 1
fi
local multipart_data
multipart_data=$(aws s3api create-multipart-upload --bucket "$1" --key "$2") || local created=$?
if [[ $created -ne 0 ]]; then
echo "Error creating multipart upload: $multipart_data"
return 1
fi
upload_id=$(echo "$multipart_data" | jq '.UploadId')
upload_id="${upload_id//\"/}"
export upload_id
}
# upload a single part of a multipart upload
# params: bucket, key, upload ID, original (unsplit) file name, part number
# return: 0 for success, 1 for failure
upload_part() {
if [ $# -ne 5 ]; then
echo "upload multipart part function must have bucket, key, upload ID, file name, part number"
return 1
fi
local etag_json
etag_json=$(aws s3api upload-part --bucket "$1" --key "$2" --upload-id "$3" --part-number "$5" --body "$4-$(($5-1))") || local uploaded=$?
if [[ $uploaded -ne 0 ]]; then
echo "Error uploading part $5: $etag_json"
return 1
fi
etag=$(echo "$etag_json" | jq '.ETag')
export etag
}
# perform all parts of a multipart upload before completion command
# params: bucket, key, file to split and upload, number of file parts to upload
# return: 0 for success, 1 for failure
multipart_upload_before_completion() {
if [ $# -ne 4 ]; then
echo "multipart upload pre-completion command missing bucket, key, file, and/or part count"
return 1
fi
file_size=$(stat -c %s "$3" 2>/dev/null || stat -f %z "$3" 2>/dev/null)
part_size=$((file_size / $4))
remainder=$((file_size % $4))
if [[ $remainder -ne 0 ]]; then
part_size=$((part_size+1))
fi
local error
local split_result
error=$(split -a 1 -d -b "$part_size" "$3" "$3"-) || split_result=$?
if [[ $split_result -ne 0 ]]; then
echo "error splitting file: $error"
return 1
fi
create_multipart_upload "$1" "$2" || create_result=$?
if [[ $create_result -ne 0 ]]; then
echo "error creating multpart upload"
return 1
fi
parts="["
for ((i = 1; i <= $4; i++)); do
upload_part "$1" "$2" "$upload_id" "$3" "$i" || local upload_result=$?
if [[ $upload_result -ne 0 ]]; then
echo "error uploading part $i"
return 1
fi
parts+="{\"ETag\": $etag, \"PartNumber\": $i}"
if [[ $i -ne $4 ]]; then
parts+=","
fi
done
parts+="]"
export parts
}
# perform a multi-part upload
# params: bucket, key, source file location, number of parts
# return 0 for success, 1 for failure
multipart_upload() {
if [ $# -ne 4 ]; then
echo "multipart upload command missing bucket, key, file, and/or part count"
return 1
fi
multipart_upload_before_completion "$1" "$2" "$3" "$4" || result=$?
if [[ $result -ne 0 ]]; then
echo "error performing pre-completion multipart upload"
return 1
fi
error=$(aws s3api complete-multipart-upload --bucket "$1" --key "$2" --upload-id "$upload_id" --multipart-upload '{"Parts": '"$parts"'}') || local completed=$?
if [[ $completed -ne 0 ]]; then
echo "Error completing upload: $error"
return 1
fi
return 0
}
# run the abort multipart command
# params: bucket, key, upload ID
# return 0 for success, 1 for failure
run_abort_command() {
if [ $# -ne 3 ]; then
echo "command to run abort requires bucket, key, upload ID"
return 1
fi
error=$(aws s3api abort-multipart-upload --bucket "$1" --key "$2" --upload-id "$3") || local aborted=$?
if [[ $aborted -ne 0 ]]; then
echo "Error aborting upload: $error"
return 1
fi
return 0
}
# run upload, then abort it
# params: bucket, key, local file location, number of parts to split into before uploading
# return 0 for success, 1 for failure
abort_multipart_upload() {
if [ $# -ne 4 ]; then
echo "abort multipart upload command missing bucket, key, file, and/or part count"
return 1
fi
multipart_upload_before_completion "$1" "$2" "$3" "$4" || result=$?
if [[ $result -ne 0 ]]; then
echo "error performing pre-completion multipart upload"
return 1
fi
run_abort_command "$1" "$2" "$upload_id"
return $?
}
# copy a file to/from S3
# params: source, destination
# return 0 for success, 1 for failure
copy_file() {
if [ $# -ne 2 ]; then
echo "copy file command requires src and dest"
return 1
fi
local result
error=$(aws s3 cp "$1" "$2") || result=$?
if [[ $result -ne 0 ]]; then
echo "error copying file: $error"
return 1
fi
return 0
}
# list parts of an unfinished multipart upload
# params: bucket, key, local file location, and parts to split into before upload
# export parts on success, return 1 for error
list_parts() {
if [ $# -ne 4 ]; then
echo "list multipart upload parts command missing bucket, key, file, and/or part count"
return 1
fi
multipart_upload_before_completion "$1" "$2" "$3" "$4" || result=$?
if [[ $result -ne 0 ]]; then
echo "error performing pre-completion multipart upload"
return 1
fi
listed_parts=$(aws s3api list-parts --bucket "$1" --key "$2" --upload-id "$upload_id") || local listed=$?
if [[ $listed -ne 0 ]]; then
echo "Error aborting upload: $parts"
return 1
fi
export listed_parts
}
# list unfinished multipart uploads
# params: bucket, key one, key two
# export current two uploads on success, return 1 for error
list_multipart_uploads() {
if [ $# -ne 3 ]; then
echo "list multipart uploads command requires bucket and two keys"
return 1
fi
create_multipart_upload "$1" "$2" || local create_result=$?
if [[ $create_result -ne 0 ]]; then
echo "error creating multpart upload"
return 1
fi
create_multipart_upload "$1" "$3" || local create_result_two=$?
if [[ $create_result_two -ne 0 ]]; then
echo "error creating multpart upload two"
return 1
fi
uploads=$(aws s3api list-multipart-uploads --bucket "$1") || local list_result=$?
if [[ $list_result -ne 0 ]]; then
echo "error listing uploads: $uploads"
return 1
fi
export uploads
}

tests/util_posix.sh (new file, 97 lines)

@@ -0,0 +1,97 @@
#!/usr/bin/env bats
# check if object exists both on S3 and locally
# param: object path
# 0 for yes, 1 for no, 2 for error
object_exists_remote_and_local() {
if [ $# -ne 1 ]; then
echo "object existence check requires single name parameter"
return 2
fi
object_exists "$1" || local exist_result=$?
if [[ $exist_result -eq 2 ]]; then
echo "Error checking if object exists"
return 2
fi
if [[ $exist_result -eq 1 ]]; then
echo "Error: object doesn't exist remotely"
return 1
fi
if [[ ! -e "$LOCAL_FOLDER"/"$1" ]]; then
echo "Error: object doesn't exist locally"
return 1
fi
return 0
}
# check if object doesn't exist both on S3 and locally
# param: object path
# return 0 for doesn't exist, 1 for still exists, 2 for error
object_not_exists_remote_and_local() {
if [ $# -ne 1 ]; then
echo "object non-existence check requires single name parameter"
return 2
fi
object_exists "$1" || local exist_result=$?
if [[ $exist_result -eq 2 ]]; then
echo "Error checking if object doesn't exist"
return 2
fi
if [[ $exist_result -eq 0 ]]; then
echo "Error: object exists remotely"
return 1
fi
if [[ -e "$LOCAL_FOLDER"/"$1" ]]; then
echo "Error: object exists locally"
return 1
fi
return 0
}
# check if a bucket doesn't exist both on S3 and on gateway
# param: bucket name
# return: 0 for doesn't exist, 1 for does, 2 for error
bucket_not_exists_remote_and_local() {
if [ $# -ne 1 ]; then
echo "bucket existence check requires single name parameter"
return 2
fi
bucket_exists "$1" || local exist_result=$?
if [[ $exist_result -eq 2 ]]; then
echo "Error checking if bucket exists"
return 2
fi
if [[ $exist_result -eq 0 ]]; then
echo "Error: bucket exists remotely"
return 1
fi
if [[ -e "$LOCAL_FOLDER"/"$1" ]]; then
echo "Error: bucket exists locally"
return 1
fi
return 0
}
# check if a bucket exists both on S3 and on gateway
# param: bucket name
# return: 0 for yes, 1 for no, 2 for error
bucket_exists_remote_and_local() {
if [ $# -ne 1 ]; then
echo "bucket existence check requires single name parameter"
return 2
fi
bucket_exists "$1" || local exist_result=$?
if [[ $exist_result -eq 2 ]]; then
echo "Error checking if bucket exists"
return 2
fi
if [[ $exist_result -eq 1 ]]; then
echo "Error: bucket doesn't exist remotely"
return 1
fi
if [[ ! -e "$LOCAL_FOLDER"/"$1" ]]; then
echo "Error: bucket doesn't exist locally"
return 1
fi
return 0
}