Compare commits

..

100 Commits

Author SHA1 Message Date
Ben McClelland
9cfc2c7b08 Merge pull request #776 from versity/copy-object-with-starting-slash
feat: Added an integration test to cover the case to CopyObject with …
2024-08-29 12:58:57 -07:00
jonaustin09
9be4f27550 feat: Added an integration test to cover the case to CopyObject with the copysource starting with / 2024-08-29 15:32:20 -04:00
Ben McClelland
cdb5187ca2 Merge pull request #774 from cvubrugier/fix-773-copy-source-slash
fix: handle "x-amz-copy-source" header starting with '/' in s3api
2024-08-29 09:10:12 -07:00
Ben McClelland
c2f9e801ef Merge pull request #775 from versity/fix/list-objects-next-marker
fix: Added an integration test case for ListObjectsV2 to specify max-…
2024-08-29 08:56:45 -07:00
jonaustin09
201777c819 fix: Added an integration test case for ListObjectsV2 to specify max-keys as the exact number of objects in the bucket 2024-08-29 11:14:29 -04:00
Christophe Vu-Brugier
20940f0b46 fix: handle "x-amz-copy-source" header starting with '/' in s3api
The "x-amz-copy-source" header may start with '/' as observed with
WinSCP. However, '/' is also the separator between the bucket and the
object path in "x-amz-copy-source".

Consider the following code in VerifyObjectCopyAccess():

    srcBucket, srcObject, found := strings.Cut(copySource, "/")

If `copySource` starts with '/', then `srcBucket` is set to an empty
string. Later, an error is returned because bucket "" does not exist.

This issue was fixed in the Posix and Azure backends by the following
commit:

 * 5e484f2 fix: Fixed CopySource parsing to handle the values starting with '/' in CopyObject action in posix and azure backends.

But the issue was not fixed in `VerifyObjectCopyAccess`.

This commit sanitizes "x-amz-copy-source" right after the header is
extracted in `s3api/controllers/base.go`. This ensures that the
`CopySource` argument passed to the backend functions UploadPartCopy()
and CopyObject() does not start with '/'. Since the backends no longer
need to strip away any leading '/' in `CopySource`, the parts of
commit 5e484f2 modifying the Posix and Azure backends are reverted.
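
For illustration, a minimal Go sketch of that sanitization (the helper name and error text are made up here, not taken from the repository):

```go
package main

import (
	"fmt"
	"strings"
)

// splitCopySource is an illustrative helper: it tolerates an optional leading
// '/' (as sent by clients such as WinSCP) before cutting the header value
// into bucket and object key.
func splitCopySource(copySource string) (bucket, object string, err error) {
	copySource = strings.TrimPrefix(copySource, "/")
	bucket, object, found := strings.Cut(copySource, "/")
	if !found || bucket == "" || object == "" {
		return "", "", fmt.Errorf("invalid copy source %q", copySource)
	}
	return bucket, object, nil
}

func main() {
	fmt.Println(splitCopySource("/my-bucket/path/to/object")) // my-bucket path/to/object <nil>
	fmt.Println(splitCopySource("my-bucket/path/to/object"))  // same result
}
```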

Fixes issue #773.

Signed-off-by: Christophe Vu-Brugier <christophe.vu-brugier@seagate.com>
2024-08-29 15:08:17 +02:00
Ben McClelland
26ef99c593 Merge pull request #772 from versity/ben/docs
fix: update help description to reference repo
2024-08-28 19:41:26 -07:00
Ben McClelland
923ee5f0db fix: update help description to reference repo 2024-08-28 19:00:46 -07:00
Izzy-B37
282213a9de Demonstration dashboard for Versity gateway metrics (#771)
* feat: demo grafana dashboard with docker compose
2024-08-28 18:31:38 -07:00
Ben McClelland
366993d9d1 Merge pull request #748 from versity/ben/banner
feat: change startup banner to versitygw version
2024-08-28 11:14:30 -07:00
Ben McClelland
3cb53d0fad Merge pull request #768 from versity/fix/s3-ls-page-size
ListObjects(V2) common prefixes pagination
2024-08-28 11:14:07 -07:00
Ben McClelland
810bf01871 feat: change startup banner to versitygw version
This changes the startup banner to report the versitygw version
and build info, along with the interfaces configured for the admin and
s3 services, when the quiet option is not enabled.

Fixes #728
2024-08-28 10:50:12 -07:00
Ben McClelland
1cc72e1055 Merge pull request #769 from versity/test_cmdline_copyright
test: copyright
2024-08-28 10:17:14 -07:00
Luke McCrone
1b4db1fd96 test: copyright 2024-08-28 12:51:39 -03:00
jonaustin09
227fdaa00b fix: Fixed the pagination for common prefixes in ListObjects & ListObjectsV2 actions 2024-08-28 11:07:44 -04:00
Ben McClelland
a2ba263d31 Merge pull request #766 from versity/ben/xml_response
fix: add XMLName to InitiateMultipartUploadResult for consistency
2024-08-27 17:26:13 -07:00
Ben McClelland
e1c2945fb0 fix: add XMLName to InitiateMultipartUploadResult for consistency 2024-08-27 15:32:07 -07:00
Jon Austin
d79f978df9 feat: Added the standard storage class to all the available get/list actions responses in posix. (#765) 2024-08-27 15:28:40 -07:00
Ben McClelland
3ed7c18839 Merge pull request #764 from versity/test_cmdline_policy_delete_tagging_two
Test cmdline policy delete tagging two
2024-08-27 13:28:08 -07:00
Ben McClelland
3afc3f9c5d Merge pull request #761 from versity/ben/time_marshal
fix: move RFC 3339 time formatting to s3response
2024-08-27 13:27:56 -07:00
Luke McCrone
3238aac4bd test: delete tagging test, dockerfile 2024-08-27 14:45:30 -03:00
Ben McClelland
ee202b76f3 fix: move RFC 3339 time formatting to s3response
It is better to let the s3response module handle the XML formatting
specifics so the backends do not have to worry about how to format
the time fields. This should help prevent any future backend
modifications or additions from accidentally formatting times
incorrectly.
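
As an illustration of the idea, a wrapper time type can own the formatting so backends hand over plain timestamps; a minimal sketch (type and field names here are made up, not the actual s3response definitions):

```go
package main

import (
	"encoding/xml"
	"fmt"
	"os"
	"time"
)

// rfc3339Time is an illustrative wrapper that always marshals as RFC 3339,
// so backends can pass plain time.Time values without formatting them.
type rfc3339Time time.Time

func (t rfc3339Time) MarshalXML(e *xml.Encoder, start xml.StartElement) error {
	return e.EncodeElement(time.Time(t).UTC().Format(time.RFC3339), start)
}

type object struct {
	Key          string      `xml:"Key"`
	LastModified rfc3339Time `xml:"LastModified"`
}

func main() {
	o := object{Key: "photos/cat.jpg", LastModified: rfc3339Time(time.Now())}
	_ = xml.NewEncoder(os.Stdout).Encode(o)
	fmt.Println()
}
```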
2024-08-26 21:08:24 -07:00
Ben McClelland
e065c86e62 Merge pull request #759 from versity/fix/list-objects-v2-timestamp
ListObjects & ListObjectsV2 return types
2024-08-26 17:39:52 -07:00
jonaustin09
684ab2371b fix: Changed ListObjects and ListObjectsV2 actions return types
Changed ListObjectsV2 and ListObjects actions return types from
*s3.ListObjects(V2)Output to s3response.ListObjects(V2)Result.

Changed the object listing timestamps to RFC 3339 to match the AWS
S3 object timestamp format.

Fixes #752
2024-08-26 15:46:45 -07:00
Ben McClelland
908356fa34 Merge pull request #760 from versity/dependabot/go_modules/dev-dependencies-bf2fce9727
chore(deps): bump the dev-dependencies group with 5 updates
2024-08-26 14:53:52 -07:00
Ben McClelland
54c17e39c5 Merge pull request #737 from versity/test_cmdline_policy_get_tagging
Test cmdline policy get tagging
2024-08-26 14:43:11 -07:00
dependabot[bot]
1198dee565 chore(deps): bump the dev-dependencies group with 5 updates
Bumps the dev-dependencies group with 5 updates:

| Package | From | To |
| --- | --- | --- |
| [github.com/aws/aws-sdk-go-v2/service/s3](https://github.com/aws/aws-sdk-go-v2) | `1.59.0` | `1.60.1` |
| [github.com/aws/aws-sdk-go-v2/service/sts](https://github.com/aws/aws-sdk-go-v2) | `1.30.4` | `1.30.5` |
| [github.com/aws/aws-sdk-go-v2/config](https://github.com/aws/aws-sdk-go-v2) | `1.27.28` | `1.27.31` |
| [github.com/aws/aws-sdk-go-v2/credentials](https://github.com/aws/aws-sdk-go-v2) | `1.17.28` | `1.17.30` |
| [github.com/aws/aws-sdk-go-v2/feature/s3/manager](https://github.com/aws/aws-sdk-go-v2) | `1.17.11` | `1.17.15` |


Updates `github.com/aws/aws-sdk-go-v2/service/s3` from 1.59.0 to 1.60.1
- [Release notes](https://github.com/aws/aws-sdk-go-v2/releases)
- [Commits](https://github.com/aws/aws-sdk-go-v2/compare/service/s3/v1.59.0...service/s3/v1.60.1)

Updates `github.com/aws/aws-sdk-go-v2/service/sts` from 1.30.4 to 1.30.5
- [Release notes](https://github.com/aws/aws-sdk-go-v2/releases)
- [Commits](https://github.com/aws/aws-sdk-go-v2/compare/v1.30.4...service/s3/v1.30.5)

Updates `github.com/aws/aws-sdk-go-v2/config` from 1.27.28 to 1.27.31
- [Release notes](https://github.com/aws/aws-sdk-go-v2/releases)
- [Commits](https://github.com/aws/aws-sdk-go-v2/compare/config/v1.27.28...config/v1.27.31)

Updates `github.com/aws/aws-sdk-go-v2/credentials` from 1.17.28 to 1.17.30
- [Release notes](https://github.com/aws/aws-sdk-go-v2/releases)
- [Commits](https://github.com/aws/aws-sdk-go-v2/compare/credentials/v1.17.28...credentials/v1.17.30)

Updates `github.com/aws/aws-sdk-go-v2/feature/s3/manager` from 1.17.11 to 1.17.15
- [Release notes](https://github.com/aws/aws-sdk-go-v2/releases)
- [Commits](https://github.com/aws/aws-sdk-go-v2/compare/config/v1.17.11...credentials/v1.17.15)

---
updated-dependencies:
- dependency-name: github.com/aws/aws-sdk-go-v2/service/s3
  dependency-type: direct:production
  update-type: version-update:semver-minor
  dependency-group: dev-dependencies
- dependency-name: github.com/aws/aws-sdk-go-v2/service/sts
  dependency-type: indirect
  update-type: version-update:semver-patch
  dependency-group: dev-dependencies
- dependency-name: github.com/aws/aws-sdk-go-v2/config
  dependency-type: direct:production
  update-type: version-update:semver-patch
  dependency-group: dev-dependencies
- dependency-name: github.com/aws/aws-sdk-go-v2/credentials
  dependency-type: direct:production
  update-type: version-update:semver-patch
  dependency-group: dev-dependencies
- dependency-name: github.com/aws/aws-sdk-go-v2/feature/s3/manager
  dependency-type: direct:production
  update-type: version-update:semver-patch
  dependency-group: dev-dependencies
...

Signed-off-by: dependabot[bot] <support@github.com>
2024-08-26 21:27:59 +00:00
Luke McCrone
a5c3332dc6 test: added put, get tagging tests, some reorganization 2024-08-26 14:24:40 -03:00
Ben McClelland
df7fcef34e Merge pull request #751 from versity/ben/ipv6
feat: enable ipv6 support for listening socket
2024-08-26 09:00:01 -07:00
Ben McClelland
3d28c5753f Merge pull request #756 from versity/ben/invalid_names
fix: return KeyTooLongError when filenames exceed allowed length
2024-08-26 08:59:43 -07:00
Ben McClelland
d93322cf4e Merge pull request #757 from versity/ben/o_tmpfile_corruption
fix: put file corruption with chunked transfer
2024-08-26 08:54:53 -07:00
Ben McClelland
453136bd5a fix: return KeyTooLongError when filenames exceed allowed length
The posix limits won't exactly match up with the AWS key length
limits because posix has component length limits as well as path
length limits.

This now responds with the AWS-compatible KeyTooLongError under these
conditions.

Note that delete object returns success even in the error cases.

Fixes #755
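
A rough sketch of that mapping, assuming a posix backend that surfaces ENAMETOOLONG from the filesystem (the error message is illustrative; the gateway's real S3 error types are not shown here):

```go
package main

import (
	"errors"
	"fmt"
	"os"
	"strings"
	"syscall"
)

// mapKeyTooLong is an illustrative translation of POSIX name/path length
// errors (ENAMETOOLONG) into an S3-style KeyTooLongError. POSIX enforces both
// a per-component limit (NAME_MAX) and a full-path limit (PATH_MAX), so the
// mapping cannot match the AWS key-length limit exactly.
func mapKeyTooLong(err error) error {
	if errors.Is(err, syscall.ENAMETOOLONG) {
		return errors.New("KeyTooLongError: object key exceeds the allowed length")
	}
	return err
}

func main() {
	_, err := os.Open("/tmp/" + strings.Repeat("a", 5000)) // deliberately too long
	fmt.Println(mapKeyTooLong(err))
}
```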
2024-08-24 14:53:42 -07:00
Ben McClelland
756d155a62 fix: put file corruption with chunked transfer
When on linux with O_TMPFILE support, we issue an fallocate for
the expected object size as an optimization, so the underlying
filesystem can allocate the full file all at once. With chunked
transfer encoding, the final object size is recorded in
X-Amz-Decoded-Content-Length instead of the standard Content-Length,
which also counts the chunk encoding in the payload.

We were incorrectly using the content length to fallocate the
file which would cause the filesystem to pad out any unwritten
length to this size with 0s.

The fix here is to make sure we pass the X-Amz-Decoded-Content-Length
as the object size to the backend for all PUTs.

Fixes #753
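
A sketch of that size selection using the standard header names (the helper and the surrounding handler are illustrative; the fallocate call itself is elided):

```go
package main

import (
	"fmt"
	"net/http"
	"strconv"
)

// objectSize is an illustrative helper: for aws-chunked uploads the real
// object size is carried in X-Amz-Decoded-Content-Length, while Content-Length
// also counts the chunk-encoding framing, so preferring the decoded value
// avoids preallocating more space than the object needs.
func objectSize(h http.Header, contentLength int64) int64 {
	if v := h.Get("X-Amz-Decoded-Content-Length"); v != "" {
		if n, err := strconv.ParseInt(v, 10, 64); err == nil {
			return n
		}
	}
	return contentLength
}

func main() {
	h := http.Header{}
	h.Set("X-Amz-Decoded-Content-Length", "1048576")
	fmt.Println(objectSize(h, 1048801)) // 1048576: chunk framing excluded
}
```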
2024-08-24 14:31:36 -07:00
Ben McClelland
77e037ae87 Merge pull request #754 from versity/fix/list-objects-v2-delim
Directory objects listing with delimiter
2024-08-23 08:59:56 -07:00
jonaustin09
71df685fb7 fix: Fixed directory objects listing with delimiter 2024-08-23 11:28:52 -04:00
Ben McClelland
296a78ed56 feat: enable ipv6 support for listening socket
Fiber allows for dual-stack ipv4/ipv6 by setting the Network option to
fiber.NetworkTCP. The default is fiber.NetworkTCP4, which is ipv4 only,
because dual stack is not compatible with prefork. But we do not use
prefork, so it is fine to enable dual ipv4/ipv6 support.
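
A minimal sketch of the Fiber setting in question (port and handler are placeholders):

```go
package main

import "github.com/gofiber/fiber/v2"

func main() {
	// NetworkTCP enables a dual-stack ipv4/ipv6 listener; the fiber default
	// is NetworkTCP4 only because dual stack is incompatible with prefork,
	// which the gateway does not use.
	app := fiber.New(fiber.Config{
		Network: fiber.NetworkTCP,
	})

	app.Get("/", func(c *fiber.Ctx) error { return c.SendString("ok") })

	// ":7070" binds both the ipv4 and ipv6 wildcard addresses with NetworkTCP.
	_ = app.Listen(":7070")
}
```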
2024-08-22 13:46:06 -07:00
Ben McClelland
8f89d32121 Merge pull request #746 from versity/ben/list_buckets
fix: allow listing buckets without acl attribute
2024-08-22 13:02:06 -07:00
Ben McClelland
72ad820e07 Merge pull request #750 from versity/ben/unescape_copy_source
fix: unescape copy source before handing to backend
2024-08-22 12:30:00 -07:00
Ben McClelland
77aa4366b5 fix: unescape copy source before handing to backend
We were handing the URL-escaped string to the backend as the copy
source, which includes "%<hex>" for spaces and other special
characters. The backend would then interpret the escaped string as
the source path. This fixes both CopyObject and UploadPartCopy.

Fixes #749
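
A sketch of the decoding step with the standard library (the helper name is illustrative):

```go
package main

import (
	"fmt"
	"net/url"
	"strings"
)

// decodeCopySource is an illustrative helper: it percent-decodes the
// x-amz-copy-source value (so "my%20file.txt" becomes "my file.txt") before
// splitting it into bucket and object key for the backend.
func decodeCopySource(raw string) (bucket, object string, err error) {
	decoded, err := url.PathUnescape(raw)
	if err != nil {
		return "", "", fmt.Errorf("invalid copy source %q: %w", raw, err)
	}
	bucket, object, _ = strings.Cut(strings.TrimPrefix(decoded, "/"), "/")
	return bucket, object, nil
}

func main() {
	fmt.Println(decodeCopySource("my-bucket/some%20object%20name.txt"))
}
```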
2024-08-22 10:06:38 -07:00
Ben McClelland
2942b162a2 fix: allow listing buckets without acl attribute
The admin/root user has access to all buckets. So we don't need a
valid ACL defined on the bucket to list these. Treat a non-existing
ACL as an empty ACL for this case.

Fixes #727
2024-08-21 15:20:47 -07:00
Ben McClelland
2aef5e42d4 Merge pull request #745 from versity/fix/create-mp-return-type
CreateMultipartUpload return type fix
2024-08-21 15:07:06 -07:00
jonaustin09
cc3c62cd9d fix: Change CreateMultipartUpload return type to match expected xml response
The AWS spec for the create multipart upload response is:
<?xml version="1.0" encoding="UTF-8"?>
<InitiateMultipartUploadResult>
   <Bucket>string</Bucket>
   <Key>string</Key>
   <UploadId>string</UploadId>
</InitiateMultipartUploadResult>

So we need the return type to marshal to this xml format.
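
For illustration, a struct along these lines marshals to that shape (the tags are the obvious ones, not copied from the repository):

```go
package main

import (
	"encoding/xml"
	"fmt"
)

// InitiateMultipartUploadResult mirrors the XML shape above; the explicit
// XMLName makes the root element name independent of the Go type name.
type InitiateMultipartUploadResult struct {
	XMLName  xml.Name `xml:"InitiateMultipartUploadResult"`
	Bucket   string   `xml:"Bucket"`
	Key      string   `xml:"Key"`
	UploadId string   `xml:"UploadId"`
}

func main() {
	out, _ := xml.MarshalIndent(InitiateMultipartUploadResult{
		Bucket:   "my-bucket",
		Key:      "my-object",
		UploadId: "example-upload-id",
	}, "", "   ")
	fmt.Println(string(out))
}
```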
2024-08-21 14:49:39 -07:00
Ben McClelland
853143eb3d Merge pull request #744 from versity/ben/ldap_systemd
fix: add ldap uid/gid attribute options to systemd example config
2024-08-21 14:41:25 -07:00
Ben McClelland
baaffea59a fix: add ldap uid/gid attribute options to systemd example config
This updates the systemd config to add VGW_IAM_LDAP_USER_ID_ATR
and VGW_IAM_LDAP_GROUP_ID_ATR env var options that were already
in the cli config.

Fixes #733
2024-08-21 14:02:13 -07:00
Ben McClelland
876c76ba65 Merge pull request #743 from versity/ben/azure_fixes
fix: azure backend errors with storage url and container metadata
2024-08-21 13:55:07 -07:00
Ben McClelland
c0c32298cd fix: azure backend errors with storage url and container metadata
The container URL was adding an extra / when the service URL ended
with /. This results in an invalid container URI error from Azure.

This also fixes the container metadata keys to not include "-" that
the Azure service disallows. The metadata values are base64 encoded
to prevent any special character handling on the Azure service side.

The content type is set for uploads consistent with S3 expectations.

Fixes #736
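
A sketch of the metadata transformation being described (the exact key rewriting in the gateway may differ; this only shows dropping '-' from keys and base64-encoding values):

```go
package main

import (
	"encoding/base64"
	"fmt"
	"strings"
)

// toAzureMetadata is an illustrative conversion: Azure blob metadata keys must
// be valid identifiers (no '-'), and base64-encoding the values avoids any
// special-character handling on the service side.
func toAzureMetadata(s3Meta map[string]string) map[string]string {
	azMeta := make(map[string]string, len(s3Meta))
	for k, v := range s3Meta {
		key := strings.ReplaceAll(k, "-", "_")
		azMeta[key] = base64.StdEncoding.EncodeToString([]byte(v))
	}
	return azMeta
}

func main() {
	fmt.Println(toAzureMetadata(map[string]string{
		"content-language": "en-US",
		"my-key":           "value with spaces & symbols",
	}))
}
```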
2024-08-20 11:36:19 -07:00
Ben McClelland
009e501b20 Merge pull request #742 from versity/dependabot/go_modules/dev-dependencies-9a4755fd03
chore(deps): bump the dev-dependencies group with 20 updates
2024-08-19 20:46:22 -07:00
dependabot[bot]
59b12e0ea8 chore(deps): bump the dev-dependencies group with 20 updates
Bumps the dev-dependencies group with 20 updates:

| Package | From | To |
| --- | --- | --- |
| [github.com/aws/aws-sdk-go-v2](https://github.com/aws/aws-sdk-go-v2) | `1.30.3` | `1.30.4` |
| [github.com/aws/aws-sdk-go-v2/service/s3](https://github.com/aws/aws-sdk-go-v2) | `1.58.3` | `1.59.0` |
| [github.com/aws/smithy-go](https://github.com/aws/smithy-go) | `1.20.3` | `1.20.4` |
| [github.com/nats-io/nats.go](https://github.com/nats-io/nats.go) | `1.36.0` | `1.37.0` |
| [github.com/aws/aws-sdk-go-v2/feature/ec2/imds](https://github.com/aws/aws-sdk-go-v2) | `1.16.11` | `1.16.12` |
| [github.com/aws/aws-sdk-go-v2/internal/ini](https://github.com/aws/aws-sdk-go-v2) | `1.8.0` | `1.8.1` |
| [github.com/aws/aws-sdk-go-v2/service/sso](https://github.com/aws/aws-sdk-go-v2) | `1.22.4` | `1.22.5` |
| [github.com/aws/aws-sdk-go-v2/service/ssooidc](https://github.com/aws/aws-sdk-go-v2) | `1.26.4` | `1.26.5` |
| [github.com/aws/aws-sdk-go-v2/service/sts](https://github.com/aws/aws-sdk-go-v2) | `1.30.3` | `1.30.4` |
| [github.com/aws/aws-sdk-go-v2/aws/protocol/eventstream](https://github.com/aws/aws-sdk-go-v2) | `1.6.3` | `1.6.4` |
| [github.com/aws/aws-sdk-go-v2/config](https://github.com/aws/aws-sdk-go-v2) | `1.27.27` | `1.27.28` |
| [github.com/aws/aws-sdk-go-v2/credentials](https://github.com/aws/aws-sdk-go-v2) | `1.17.27` | `1.17.28` |
| [github.com/aws/aws-sdk-go-v2/feature/s3/manager](https://github.com/aws/aws-sdk-go-v2) | `1.17.10` | `1.17.11` |
| [github.com/aws/aws-sdk-go-v2/internal/configsources](https://github.com/aws/aws-sdk-go-v2) | `1.3.15` | `1.3.16` |
| [github.com/aws/aws-sdk-go-v2/internal/endpoints/v2](https://github.com/aws/aws-sdk-go-v2) | `2.6.15` | `2.6.16` |
| [github.com/aws/aws-sdk-go-v2/internal/v4a](https://github.com/aws/aws-sdk-go-v2) | `1.3.15` | `1.3.16` |
| [github.com/aws/aws-sdk-go-v2/service/internal/accept-encoding](https://github.com/aws/aws-sdk-go-v2) | `1.11.3` | `1.11.4` |
| [github.com/aws/aws-sdk-go-v2/service/internal/checksum](https://github.com/aws/aws-sdk-go-v2) | `1.3.17` | `1.3.18` |
| [github.com/aws/aws-sdk-go-v2/service/internal/presigned-url](https://github.com/aws/aws-sdk-go-v2) | `1.11.17` | `1.11.18` |
| [github.com/aws/aws-sdk-go-v2/service/internal/s3shared](https://github.com/aws/aws-sdk-go-v2) | `1.17.15` | `1.17.16` |


Updates `github.com/aws/aws-sdk-go-v2` from 1.30.3 to 1.30.4
- [Release notes](https://github.com/aws/aws-sdk-go-v2/releases)
- [Commits](https://github.com/aws/aws-sdk-go-v2/compare/v1.30.3...v1.30.4)

Updates `github.com/aws/aws-sdk-go-v2/service/s3` from 1.58.3 to 1.59.0
- [Release notes](https://github.com/aws/aws-sdk-go-v2/releases)
- [Commits](https://github.com/aws/aws-sdk-go-v2/compare/service/s3/v1.58.3...service/s3/v1.59.0)

Updates `github.com/aws/smithy-go` from 1.20.3 to 1.20.4
- [Release notes](https://github.com/aws/smithy-go/releases)
- [Changelog](https://github.com/aws/smithy-go/blob/main/CHANGELOG.md)
- [Commits](https://github.com/aws/smithy-go/compare/v1.20.3...v1.20.4)

Updates `github.com/nats-io/nats.go` from 1.36.0 to 1.37.0
- [Release notes](https://github.com/nats-io/nats.go/releases)
- [Commits](https://github.com/nats-io/nats.go/compare/v1.36.0...v1.37.0)

Updates `github.com/aws/aws-sdk-go-v2/feature/ec2/imds` from 1.16.11 to 1.16.12
- [Release notes](https://github.com/aws/aws-sdk-go-v2/releases)
- [Commits](https://github.com/aws/aws-sdk-go-v2/compare/v1.16.11...v1.16.12)

Updates `github.com/aws/aws-sdk-go-v2/internal/ini` from 1.8.0 to 1.8.1
- [Release notes](https://github.com/aws/aws-sdk-go-v2/releases)
- [Changelog](https://github.com/aws/aws-sdk-go-v2/blob/v1.8.1/CHANGELOG.md)
- [Commits](https://github.com/aws/aws-sdk-go-v2/compare/v1.8.0...v1.8.1)

Updates `github.com/aws/aws-sdk-go-v2/service/sso` from 1.22.4 to 1.22.5
- [Release notes](https://github.com/aws/aws-sdk-go-v2/releases)
- [Commits](https://github.com/aws/aws-sdk-go-v2/compare/service/pi/v1.22.4...service/pi/v1.22.5)

Updates `github.com/aws/aws-sdk-go-v2/service/ssooidc` from 1.26.4 to 1.26.5
- [Release notes](https://github.com/aws/aws-sdk-go-v2/releases)
- [Commits](https://github.com/aws/aws-sdk-go-v2/compare/config/v1.26.4...config/v1.26.5)

Updates `github.com/aws/aws-sdk-go-v2/service/sts` from 1.30.3 to 1.30.4
- [Release notes](https://github.com/aws/aws-sdk-go-v2/releases)
- [Commits](https://github.com/aws/aws-sdk-go-v2/compare/v1.30.3...v1.30.4)

Updates `github.com/aws/aws-sdk-go-v2/aws/protocol/eventstream` from 1.6.3 to 1.6.4
- [Release notes](https://github.com/aws/aws-sdk-go-v2/releases)
- [Changelog](https://github.com/aws/aws-sdk-go-v2/blob/service/drs/v1.6.4/CHANGELOG.md)
- [Commits](https://github.com/aws/aws-sdk-go-v2/compare/credentials/v1.6.3...service/drs/v1.6.4)

Updates `github.com/aws/aws-sdk-go-v2/config` from 1.27.27 to 1.27.28
- [Release notes](https://github.com/aws/aws-sdk-go-v2/releases)
- [Commits](https://github.com/aws/aws-sdk-go-v2/compare/config/v1.27.27...config/v1.27.28)

Updates `github.com/aws/aws-sdk-go-v2/credentials` from 1.17.27 to 1.17.28
- [Release notes](https://github.com/aws/aws-sdk-go-v2/releases)
- [Commits](https://github.com/aws/aws-sdk-go-v2/compare/credentials/v1.17.27...credentials/v1.17.28)

Updates `github.com/aws/aws-sdk-go-v2/feature/s3/manager` from 1.17.10 to 1.17.11
- [Release notes](https://github.com/aws/aws-sdk-go-v2/releases)
- [Changelog](https://github.com/aws/aws-sdk-go-v2/blob/config/v1.17.11/CHANGELOG.md)
- [Commits](https://github.com/aws/aws-sdk-go-v2/compare/config/v1.17.10...config/v1.17.11)

Updates `github.com/aws/aws-sdk-go-v2/internal/configsources` from 1.3.15 to 1.3.16
- [Release notes](https://github.com/aws/aws-sdk-go-v2/releases)
- [Commits](https://github.com/aws/aws-sdk-go-v2/compare/internal/ini/v1.3.15...internal/ini/v1.3.16)

Updates `github.com/aws/aws-sdk-go-v2/internal/endpoints/v2` from 2.6.15 to 2.6.16
- [Release notes](https://github.com/aws/aws-sdk-go-v2/releases)
- [Commits](https://github.com/aws/aws-sdk-go-v2/compare/internal/endpoints/v2.6.15...internal/endpoints/v2.6.16)

Updates `github.com/aws/aws-sdk-go-v2/internal/v4a` from 1.3.15 to 1.3.16
- [Release notes](https://github.com/aws/aws-sdk-go-v2/releases)
- [Commits](https://github.com/aws/aws-sdk-go-v2/compare/internal/ini/v1.3.15...internal/ini/v1.3.16)

Updates `github.com/aws/aws-sdk-go-v2/service/internal/accept-encoding` from 1.11.3 to 1.11.4
- [Release notes](https://github.com/aws/aws-sdk-go-v2/releases)
- [Commits](https://github.com/aws/aws-sdk-go-v2/compare/service/oam/v1.11.3...service/dax/v1.11.4)

Updates `github.com/aws/aws-sdk-go-v2/service/internal/checksum` from 1.3.17 to 1.3.18
- [Release notes](https://github.com/aws/aws-sdk-go-v2/releases)
- [Commits](https://github.com/aws/aws-sdk-go-v2/compare/internal/ini/v1.3.17...internal/ini/v1.3.18)

Updates `github.com/aws/aws-sdk-go-v2/service/internal/presigned-url` from 1.11.17 to 1.11.18
- [Release notes](https://github.com/aws/aws-sdk-go-v2/releases)
- [Commits](https://github.com/aws/aws-sdk-go-v2/compare/service/dax/v1.11.17...service/dax/v1.11.18)

Updates `github.com/aws/aws-sdk-go-v2/service/internal/s3shared` from 1.17.15 to 1.17.16
- [Release notes](https://github.com/aws/aws-sdk-go-v2/releases)
- [Commits](https://github.com/aws/aws-sdk-go-v2/compare/credentials/v1.17.15...credentials/v1.17.16)

---
updated-dependencies:
- dependency-name: github.com/aws/aws-sdk-go-v2
  dependency-type: direct:production
  update-type: version-update:semver-patch
  dependency-group: dev-dependencies
- dependency-name: github.com/aws/aws-sdk-go-v2/service/s3
  dependency-type: direct:production
  update-type: version-update:semver-minor
  dependency-group: dev-dependencies
- dependency-name: github.com/aws/smithy-go
  dependency-type: direct:production
  update-type: version-update:semver-patch
  dependency-group: dev-dependencies
- dependency-name: github.com/nats-io/nats.go
  dependency-type: direct:production
  update-type: version-update:semver-minor
  dependency-group: dev-dependencies
- dependency-name: github.com/aws/aws-sdk-go-v2/feature/ec2/imds
  dependency-type: indirect
  update-type: version-update:semver-patch
  dependency-group: dev-dependencies
- dependency-name: github.com/aws/aws-sdk-go-v2/internal/ini
  dependency-type: indirect
  update-type: version-update:semver-patch
  dependency-group: dev-dependencies
- dependency-name: github.com/aws/aws-sdk-go-v2/service/sso
  dependency-type: indirect
  update-type: version-update:semver-patch
  dependency-group: dev-dependencies
- dependency-name: github.com/aws/aws-sdk-go-v2/service/ssooidc
  dependency-type: indirect
  update-type: version-update:semver-patch
  dependency-group: dev-dependencies
- dependency-name: github.com/aws/aws-sdk-go-v2/service/sts
  dependency-type: indirect
  update-type: version-update:semver-patch
  dependency-group: dev-dependencies
- dependency-name: github.com/aws/aws-sdk-go-v2/aws/protocol/eventstream
  dependency-type: indirect
  update-type: version-update:semver-patch
  dependency-group: dev-dependencies
- dependency-name: github.com/aws/aws-sdk-go-v2/config
  dependency-type: direct:production
  update-type: version-update:semver-patch
  dependency-group: dev-dependencies
- dependency-name: github.com/aws/aws-sdk-go-v2/credentials
  dependency-type: direct:production
  update-type: version-update:semver-patch
  dependency-group: dev-dependencies
- dependency-name: github.com/aws/aws-sdk-go-v2/feature/s3/manager
  dependency-type: direct:production
  update-type: version-update:semver-patch
  dependency-group: dev-dependencies
- dependency-name: github.com/aws/aws-sdk-go-v2/internal/configsources
  dependency-type: indirect
  update-type: version-update:semver-patch
  dependency-group: dev-dependencies
- dependency-name: github.com/aws/aws-sdk-go-v2/internal/endpoints/v2
  dependency-type: indirect
  update-type: version-update:semver-patch
  dependency-group: dev-dependencies
- dependency-name: github.com/aws/aws-sdk-go-v2/internal/v4a
  dependency-type: indirect
  update-type: version-update:semver-patch
  dependency-group: dev-dependencies
- dependency-name: github.com/aws/aws-sdk-go-v2/service/internal/accept-encoding
  dependency-type: indirect
  update-type: version-update:semver-patch
  dependency-group: dev-dependencies
- dependency-name: github.com/aws/aws-sdk-go-v2/service/internal/checksum
  dependency-type: indirect
  update-type: version-update:semver-patch
  dependency-group: dev-dependencies
- dependency-name: github.com/aws/aws-sdk-go-v2/service/internal/presigned-url
  dependency-type: indirect
  update-type: version-update:semver-patch
  dependency-group: dev-dependencies
- dependency-name: github.com/aws/aws-sdk-go-v2/service/internal/s3shared
  dependency-type: indirect
  update-type: version-update:semver-patch
  dependency-group: dev-dependencies
...

Signed-off-by: dependabot[bot] <support@github.com>
2024-08-19 21:27:44 +00:00
Ben McClelland
b1c072548a Merge pull request #735 from versity/ben/copy_object_meta
fix: copy-object with replace metadata-directive
2024-08-13 14:16:42 -07:00
Ben McClelland
54490f55cc chore: cleanup staticcheck errors 2024-08-13 11:09:14 -07:00
Ben McClelland
a36d974942 fix: copy-object with replace metadata-directive
In copy-object, if the source and destination are the same then
X-Amz-Metadata-Directive must be set to "REPLACE" in order to use
this api call to update the metadata of the object in place.

The default X-Amz-Metadata-Directive is "COPY" if not specified.
"COPY" is only valid if source and destination are not the same
object.

When "REPLACE" selected, metadata does not have to differ for the
call to be successful. The "REPLACE" always sets the incoming
metadata (even if empty or the same as the source).

Fixes #734
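
An illustrative client-side call showing the directive with the AWS Go SDK v2 (bucket and key names are placeholders):

```go
package main

import (
	"context"
	"log"

	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/config"
	"github.com/aws/aws-sdk-go-v2/service/s3"
	"github.com/aws/aws-sdk-go-v2/service/s3/types"
)

func main() {
	cfg, err := config.LoadDefaultConfig(context.TODO())
	if err != nil {
		log.Fatal(err)
	}
	client := s3.NewFromConfig(cfg)

	// Copying an object onto itself is only accepted with the REPLACE
	// directive; the supplied metadata overwrites whatever is stored,
	// even if it is empty or identical to the existing metadata.
	_, err = client.CopyObject(context.TODO(), &s3.CopyObjectInput{
		Bucket:            aws.String("my-bucket"),
		Key:               aws.String("my-object"),
		CopySource:        aws.String("my-bucket/my-object"),
		MetadataDirective: types.MetadataDirectiveReplace,
		Metadata:          map[string]string{"updated": "true"},
	})
	if err != nil {
		log.Fatal(err)
	}
}
```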
2024-08-13 10:52:47 -07:00
Ben McClelland
42f554b0d6 Merge pull request #731 from versity/dependabot/go_modules/dev-dependencies-ce20a30350
chore(deps): bump the dev-dependencies group with 6 updates
2024-08-12 15:02:30 -07:00
dependabot[bot]
adbf53505a chore(deps): bump the dev-dependencies group with 6 updates
Bumps the dev-dependencies group with 6 updates:

| Package | From | To |
| --- | --- | --- |
| [github.com/Azure/azure-sdk-for-go/sdk/azcore](https://github.com/Azure/azure-sdk-for-go) | `1.13.0` | `1.14.0` |
| [github.com/urfave/cli/v2](https://github.com/urfave/cli) | `2.27.3` | `2.27.4` |
| [golang.org/x/sys](https://github.com/golang/sys) | `0.23.0` | `0.24.0` |
| [golang.org/x/crypto](https://github.com/golang/crypto) | `0.25.0` | `0.26.0` |
| [golang.org/x/net](https://github.com/golang/net) | `0.27.0` | `0.28.0` |
| [golang.org/x/text](https://github.com/golang/text) | `0.16.0` | `0.17.0` |


Updates `github.com/Azure/azure-sdk-for-go/sdk/azcore` from 1.13.0 to 1.14.0
- [Release notes](https://github.com/Azure/azure-sdk-for-go/releases)
- [Changelog](https://github.com/Azure/azure-sdk-for-go/blob/main/documentation/release.md)
- [Commits](https://github.com/Azure/azure-sdk-for-go/compare/sdk/azcore/v1.13.0...sdk/azcore/v1.14.0)

Updates `github.com/urfave/cli/v2` from 2.27.3 to 2.27.4
- [Release notes](https://github.com/urfave/cli/releases)
- [Changelog](https://github.com/urfave/cli/blob/main/docs/CHANGELOG.md)
- [Commits](https://github.com/urfave/cli/compare/v2.27.3...v2.27.4)

Updates `golang.org/x/sys` from 0.23.0 to 0.24.0
- [Commits](https://github.com/golang/sys/compare/v0.23.0...v0.24.0)

Updates `golang.org/x/crypto` from 0.25.0 to 0.26.0
- [Commits](https://github.com/golang/crypto/compare/v0.25.0...v0.26.0)

Updates `golang.org/x/net` from 0.27.0 to 0.28.0
- [Commits](https://github.com/golang/net/compare/v0.27.0...v0.28.0)

Updates `golang.org/x/text` from 0.16.0 to 0.17.0
- [Release notes](https://github.com/golang/text/releases)
- [Commits](https://github.com/golang/text/compare/v0.16.0...v0.17.0)

---
updated-dependencies:
- dependency-name: github.com/Azure/azure-sdk-for-go/sdk/azcore
  dependency-type: direct:production
  update-type: version-update:semver-minor
  dependency-group: dev-dependencies
- dependency-name: github.com/urfave/cli/v2
  dependency-type: direct:production
  update-type: version-update:semver-patch
  dependency-group: dev-dependencies
- dependency-name: golang.org/x/sys
  dependency-type: direct:production
  update-type: version-update:semver-minor
  dependency-group: dev-dependencies
- dependency-name: golang.org/x/crypto
  dependency-type: indirect
  update-type: version-update:semver-minor
  dependency-group: dev-dependencies
- dependency-name: golang.org/x/net
  dependency-type: indirect
  update-type: version-update:semver-minor
  dependency-group: dev-dependencies
- dependency-name: golang.org/x/text
  dependency-type: indirect
  update-type: version-update:semver-minor
  dependency-group: dev-dependencies
...

Signed-off-by: dependabot[bot] <support@github.com>
2024-08-12 21:43:22 +00:00
Ben McClelland
7785288957 Merge pull request #730 from versity/fix/bucket-acl-error-handling
PutBucketAcl error handling
2024-08-12 13:31:00 -07:00
jonaustin09
23fd0d3fdd fix: Fixed PutBucketAcl action error handling, removed the bucket owner check for all the acl options 2024-08-12 15:27:03 -04:00
Ben McClelland
cbf03c30ce Merge pull request #726 from versity/fix/iam-get-root-user
Root user credentials in IAM services
2024-08-12 10:18:36 -07:00
Ben McClelland
9f53d0f584 Merge pull request #725 from versity/ben/delete_error
fix: non-existing object delete response
2024-08-09 13:26:52 -07:00
Ben McClelland
252bb0e120 Merge pull request #721 from versity/test_cmdline_multiple_principals
Test cmdline multiple principals
2024-08-09 13:26:39 -07:00
jonaustin09
34b7fd6ee7 fix: Added the root user data in the iam services records 2024-08-09 16:14:51 -04:00
Luke McCrone
0facfdc9fd test: multiple policy principals, improved bucket cleanup, general cleanup 2024-08-09 16:40:44 -03:00
Ben McClelland
e92b36a12c fix: non-existing object delete response
The response code for deleting a non-existing object is expected to
be 204 (No Content) rather than a NoSuchKey error. The tests are
updated to validate the expected responses.

Fixes #724
2024-08-08 11:46:36 -07:00
Ben McClelland
adbc8140ed Merge pull request #714 from versity/ben/get_obj_dir
fix: head/get/delete/copy directory object should fail when corresponding …
2024-08-06 10:12:51 -07:00
Ben McClelland
ce9d3aa01a Merge pull request #715 from versity/dependabot/go_modules/dev-dependencies-18a6b4804e
chore(deps): bump the dev-dependencies group with 4 updates
2024-08-05 15:29:18 -07:00
dependabot[bot]
2e6bef6760 chore(deps): bump the dev-dependencies group with 4 updates
Bumps the dev-dependencies group with 4 updates: [github.com/aws/aws-sdk-go-v2/service/s3](https://github.com/aws/aws-sdk-go-v2), [golang.org/x/sys](https://github.com/golang/sys), [golang.org/x/time](https://github.com/golang/time) and [github.com/aws/aws-sdk-go-v2/feature/s3/manager](https://github.com/aws/aws-sdk-go-v2).


Updates `github.com/aws/aws-sdk-go-v2/service/s3` from 1.58.2 to 1.58.3
- [Release notes](https://github.com/aws/aws-sdk-go-v2/releases)
- [Commits](https://github.com/aws/aws-sdk-go-v2/compare/service/s3/v1.58.2...service/s3/v1.58.3)

Updates `golang.org/x/sys` from 0.22.0 to 0.23.0
- [Commits](https://github.com/golang/sys/compare/v0.22.0...v0.23.0)

Updates `golang.org/x/time` from 0.5.0 to 0.6.0
- [Commits](https://github.com/golang/time/compare/v0.5.0...v0.6.0)

Updates `github.com/aws/aws-sdk-go-v2/feature/s3/manager` from 1.17.9 to 1.17.10
- [Release notes](https://github.com/aws/aws-sdk-go-v2/releases)
- [Commits](https://github.com/aws/aws-sdk-go-v2/compare/config/v1.17.9...config/v1.17.10)

---
updated-dependencies:
- dependency-name: github.com/aws/aws-sdk-go-v2/service/s3
  dependency-type: direct:production
  update-type: version-update:semver-patch
  dependency-group: dev-dependencies
- dependency-name: golang.org/x/sys
  dependency-type: direct:production
  update-type: version-update:semver-minor
  dependency-group: dev-dependencies
- dependency-name: golang.org/x/time
  dependency-type: indirect
  update-type: version-update:semver-minor
  dependency-group: dev-dependencies
- dependency-name: github.com/aws/aws-sdk-go-v2/feature/s3/manager
  dependency-type: direct:production
  update-type: version-update:semver-patch
  dependency-group: dev-dependencies
...

Signed-off-by: dependabot[bot] <support@github.com>
2024-08-05 21:33:43 +00:00
Ben McClelland
797376a235 fix: head/get/delete/copy directory object should fail when corresponding file object exists
The API handlers and backend were stripping the trailing "/" in object
paths. So if an object exists and a head/get/delete/copy request came
in for the same name but with a trailing "/" (indicating the request
should be for a directory object), the "/" would be stripped and the
request would be handled for the incorrect file object.

This fix adds checks to handle the case of a trailing "/" in the
request.

Fixes #709
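
A sketch of the check, assuming a posix-style layout where a directory object maps to a directory on disk (the helper is illustrative):

```go
package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
)

// statObject is an illustrative lookup: a key with a trailing "/" must resolve
// to a directory, and a key without one must resolve to a regular file, so
// "photos/" no longer silently matches a file named "photos".
func statObject(root, key string) (os.FileInfo, error) {
	fi, err := os.Stat(filepath.Join(root, strings.TrimSuffix(key, "/")))
	if err != nil {
		return nil, err
	}
	wantDir := strings.HasSuffix(key, "/")
	if fi.IsDir() != wantDir {
		return nil, fmt.Errorf("NoSuchKey: %q", key)
	}
	return fi, nil
}

func main() {
	fi, err := statObject("/tmp", "example/")
	fmt.Println(fi, err)
}
```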
2024-08-05 11:55:32 -07:00
Ben McClelland
cb992a4794 Merge pull request #710 from versity/ben/content_type
fix: set default content type to binary/octet-stream
2024-08-02 16:36:54 -07:00
Ben McClelland
61a97e94db fix: set default content type to binary/octet-stream
AWS uses binary/octet-stream for the default content type if the
client doesn't specify the content type. Change the default for
the gateway to match this behavior.

Fixes #697
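
A sketch of the defaulting rule (constant and helper names are illustrative):

```go
package main

import "fmt"

// defaultContentType mirrors AWS behavior: if the client does not send a
// Content-Type, fall back to binary/octet-stream instead of leaving it empty.
const defaultContentType = "binary/octet-stream"

func contentTypeOrDefault(requested string) string {
	if requested == "" {
		return defaultContentType
	}
	return requested
}

func main() {
	fmt.Println(contentTypeOrDefault(""))          // binary/octet-stream
	fmt.Println(contentTypeOrDefault("image/png")) // image/png
}
```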
2024-08-02 09:02:57 -07:00
Luke
18a8813ce7 Test cmdline acl hardcode fix (#712)
* test: s3cmd, s3api acls tests (direct vs s3)
* test: completed s3:PutBucketAcl tests
2024-08-01 17:24:49 -07:00
Ben McClelland
8872e2a428 Merge pull request #707 from versity/ben/scoutfs_listing_etag
fix: allow object listing in scoutfs for files created without etag attrs
2024-07-31 14:56:37 -07:00
Ben McClelland
b421598647 fix: allow object listing in scoutfs for files created without etag attrs
Files created outside of versitygw can be missing etag attributes.
Allow the empty etag so that it does not cause errors when listing the files.

Fixes #694
2024-07-31 10:01:00 -07:00
Ben McClelland
cacd1d28ea Merge pull request #700 from versity/ben/test-template
chore: add issue template for test cases
2024-07-30 12:34:02 -07:00
Ben McClelland
ad30c251bc chore: add issue template for test cases 2024-07-30 11:13:32 -07:00
Ben McClelland
55c7109c94 Merge pull request #696 from versity/dependabot/go_modules/dev-dependencies-6a99dab1aa
chore(deps): bump the dev-dependencies group with 3 updates
2024-07-29 16:03:38 -07:00
dependabot[bot]
331996d3dd chore(deps): bump the dev-dependencies group with 3 updates
Bumps the dev-dependencies group with 3 updates: [github.com/urfave/cli/v2](https://github.com/urfave/cli), [github.com/aws/aws-sdk-go-v2/feature/s3/manager](https://github.com/aws/aws-sdk-go-v2) and [github.com/xrash/smetrics](https://github.com/xrash/smetrics).


Updates `github.com/urfave/cli/v2` from 2.27.2 to 2.27.3
- [Release notes](https://github.com/urfave/cli/releases)
- [Changelog](https://github.com/urfave/cli/blob/main/docs/CHANGELOG.md)
- [Commits](https://github.com/urfave/cli/compare/v2.27.2...v2.27.3)

Updates `github.com/aws/aws-sdk-go-v2/feature/s3/manager` from 1.17.8 to 1.17.9
- [Release notes](https://github.com/aws/aws-sdk-go-v2/releases)
- [Commits](https://github.com/aws/aws-sdk-go-v2/compare/v1.17.8...config/v1.17.9)

Updates `github.com/xrash/smetrics` from 0.0.0-20240312152122-5f08fbb34913 to 0.0.0-20240521201337-686a1a2994c1
- [Commits](https://github.com/xrash/smetrics/commits)

---
updated-dependencies:
- dependency-name: github.com/urfave/cli/v2
  dependency-type: direct:production
  update-type: version-update:semver-patch
  dependency-group: dev-dependencies
- dependency-name: github.com/aws/aws-sdk-go-v2/feature/s3/manager
  dependency-type: direct:production
  update-type: version-update:semver-patch
  dependency-group: dev-dependencies
- dependency-name: github.com/xrash/smetrics
  dependency-type: indirect
  update-type: version-update:semver-patch
  dependency-group: dev-dependencies
...

Signed-off-by: dependabot[bot] <support@github.com>
2024-07-29 21:19:34 +00:00
Ben McClelland
18a9a23f2f Merge pull request #692 from versity/test_cmdline_coverage
Test cmdline coverage
2024-07-25 13:33:39 -07:00
Luke McCrone
60c8eb795d test: improve command coverage for tools, retention bypass 2024-07-25 15:23:31 -03:00
Ben McClelland
1173ea920b Merge pull request #689 from versity/ben/docker
feat: add arm64 docker images
2024-07-22 22:06:30 -07:00
Ben McClelland
370b51d327 Merge pull request #690 from versity/dependabot/go_modules/dev-dependencies-bdabae4bc6
chore(deps): bump the dev-dependencies group with 9 updates
2024-07-22 22:06:15 -07:00
dependabot[bot]
1a3937de90 chore(deps): bump the dev-dependencies group with 9 updates
Bumps the dev-dependencies group with 9 updates:

| Package | From | To |
| --- | --- | --- |
| [github.com/Azure/azure-sdk-for-go/sdk/azcore](https://github.com/Azure/azure-sdk-for-go) | `1.12.0` | `1.13.0` |
| [github.com/Azure/azure-sdk-for-go/sdk/storage/azblob](https://github.com/Azure/azure-sdk-for-go) | `1.3.2` | `1.4.0` |
| [github.com/pkg/xattr](https://github.com/pkg/xattr) | `0.4.9` | `0.4.10` |
| [github.com/Azure/azure-sdk-for-go/sdk/internal](https://github.com/Azure/azure-sdk-for-go) | `1.9.1` | `1.10.0` |
| [github.com/aws/aws-sdk-go-v2/service/sso](https://github.com/aws/aws-sdk-go-v2) | `1.22.3` | `1.22.4` |
| [github.com/aws/aws-sdk-go-v2/config](https://github.com/aws/aws-sdk-go-v2) | `1.27.26` | `1.27.27` |
| [github.com/aws/aws-sdk-go-v2/credentials](https://github.com/aws/aws-sdk-go-v2) | `1.17.26` | `1.17.27` |
| [github.com/aws/aws-sdk-go-v2/feature/s3/manager](https://github.com/aws/aws-sdk-go-v2) | `1.17.7` | `1.17.8` |
| [github.com/mattn/go-runewidth](https://github.com/mattn/go-runewidth) | `0.0.15` | `0.0.16` |


Updates `github.com/Azure/azure-sdk-for-go/sdk/azcore` from 1.12.0 to 1.13.0
- [Release notes](https://github.com/Azure/azure-sdk-for-go/releases)
- [Changelog](https://github.com/Azure/azure-sdk-for-go/blob/main/documentation/release.md)
- [Commits](https://github.com/Azure/azure-sdk-for-go/compare/sdk/azcore/v1.12.0...sdk/azcore/v1.13.0)

Updates `github.com/Azure/azure-sdk-for-go/sdk/storage/azblob` from 1.3.2 to 1.4.0
- [Release notes](https://github.com/Azure/azure-sdk-for-go/releases)
- [Changelog](https://github.com/Azure/azure-sdk-for-go/blob/main/documentation/release.md)
- [Commits](https://github.com/Azure/azure-sdk-for-go/compare/sdk/storage/azblob/v1.3.2...sdk/azcore/v1.4.0)

Updates `github.com/pkg/xattr` from 0.4.9 to 0.4.10
- [Release notes](https://github.com/pkg/xattr/releases)
- [Commits](https://github.com/pkg/xattr/compare/v0.4.9...v0.4.10)

Updates `github.com/Azure/azure-sdk-for-go/sdk/internal` from 1.9.1 to 1.10.0
- [Release notes](https://github.com/Azure/azure-sdk-for-go/releases)
- [Changelog](https://github.com/Azure/azure-sdk-for-go/blob/main/documentation/release.md)
- [Commits](https://github.com/Azure/azure-sdk-for-go/compare/sdk/azcore/v1.9.1...sdk/azcore/v1.10.0)

Updates `github.com/aws/aws-sdk-go-v2/service/sso` from 1.22.3 to 1.22.4
- [Release notes](https://github.com/aws/aws-sdk-go-v2/releases)
- [Commits](https://github.com/aws/aws-sdk-go-v2/compare/config/v1.22.3...service/pi/v1.22.4)

Updates `github.com/aws/aws-sdk-go-v2/config` from 1.27.26 to 1.27.27
- [Release notes](https://github.com/aws/aws-sdk-go-v2/releases)
- [Commits](https://github.com/aws/aws-sdk-go-v2/compare/config/v1.27.26...config/v1.27.27)

Updates `github.com/aws/aws-sdk-go-v2/credentials` from 1.17.26 to 1.17.27
- [Release notes](https://github.com/aws/aws-sdk-go-v2/releases)
- [Commits](https://github.com/aws/aws-sdk-go-v2/compare/credentials/v1.17.26...credentials/v1.17.27)

Updates `github.com/aws/aws-sdk-go-v2/feature/s3/manager` from 1.17.7 to 1.17.8
- [Release notes](https://github.com/aws/aws-sdk-go-v2/releases)
- [Changelog](https://github.com/aws/aws-sdk-go-v2/blob/v1.17.8/CHANGELOG.md)
- [Commits](https://github.com/aws/aws-sdk-go-v2/compare/v1.17.7...v1.17.8)

Updates `github.com/mattn/go-runewidth` from 0.0.15 to 0.0.16
- [Commits](https://github.com/mattn/go-runewidth/compare/v0.0.15...v0.0.16)

---
updated-dependencies:
- dependency-name: github.com/Azure/azure-sdk-for-go/sdk/azcore
  dependency-type: direct:production
  update-type: version-update:semver-minor
  dependency-group: dev-dependencies
- dependency-name: github.com/Azure/azure-sdk-for-go/sdk/storage/azblob
  dependency-type: direct:production
  update-type: version-update:semver-minor
  dependency-group: dev-dependencies
- dependency-name: github.com/pkg/xattr
  dependency-type: direct:production
  update-type: version-update:semver-patch
  dependency-group: dev-dependencies
- dependency-name: github.com/Azure/azure-sdk-for-go/sdk/internal
  dependency-type: indirect
  update-type: version-update:semver-minor
  dependency-group: dev-dependencies
- dependency-name: github.com/aws/aws-sdk-go-v2/service/sso
  dependency-type: indirect
  update-type: version-update:semver-patch
  dependency-group: dev-dependencies
- dependency-name: github.com/aws/aws-sdk-go-v2/config
  dependency-type: direct:production
  update-type: version-update:semver-patch
  dependency-group: dev-dependencies
- dependency-name: github.com/aws/aws-sdk-go-v2/credentials
  dependency-type: direct:production
  update-type: version-update:semver-patch
  dependency-group: dev-dependencies
- dependency-name: github.com/aws/aws-sdk-go-v2/feature/s3/manager
  dependency-type: direct:production
  update-type: version-update:semver-patch
  dependency-group: dev-dependencies
- dependency-name: github.com/mattn/go-runewidth
  dependency-type: indirect
  update-type: version-update:semver-patch
  dependency-group: dev-dependencies
...

Signed-off-by: dependabot[bot] <support@github.com>
2024-07-22 21:25:55 +00:00
Ben McClelland
ca79182c95 feat: add arm64 docker images 2024-07-22 10:00:58 -07:00
Ben McClelland
d2b004af9a Merge pull request #688 from versity/fix/cp-obj-cp-src-slash
CopySource parsing starting with '/' for CopyObject action
2024-07-22 09:13:30 -07:00
Ben McClelland
93b4926aeb Merge pull request #687 from versity/fix/aws-cli-obj-lock-header
Object lock X-Amz-Bypass-Governance-Retention header for AWS CLI
2024-07-22 09:13:13 -07:00
jonaustin09
12da1e2099 fix: Added X-Amz-Bypass-Governance-Retention header check to accept both 'true' and 'True' values for DeleteObject(s) actions. 2024-07-22 11:43:34 -04:00
jonaustin09
5e484f2355 fix: Fixed CopySource parsing to handle the values starting with '/' in CopyObject action in posix and azure backends. 2024-07-22 11:30:32 -04:00
Ben McClelland
d521c66171 Merge pull request #677 from versity/test_cmdline_more_user_ops
Test cmdline more user ops
2024-07-17 10:44:41 -07:00
Luke McCrone
c580947b98 test: added user tests, added command recording, re-added s3cmd tests 2024-07-17 13:45:01 -03:00
Ben McClelland
733b6e7b2f Merge pull request #681 from versity/fix/root-put-bucket-acl-owner
Bucket ACL owner check for admin/root users
2024-07-17 08:52:04 -07:00
jonaustin09
23a40d86a2 fix: Removed the bucket ACL owner check for admin and root users 2024-07-17 09:39:00 -04:00
Ben McClelland
ed9a10a337 Merge pull request #679 from versity/fix/s3cmd-acl-grt-type
Type property support in bucket ACL
2024-07-16 16:03:12 -07:00
jonaustin09
828eb93bee fix: Added 'Type' property support in bucket ACL Grantee schema 2024-07-16 18:17:16 -04:00
Ben McClelland
3361391506 Merge pull request #674 from versity/admin-api-access-logs
Admin APIs access logs
2024-07-16 08:47:19 -07:00
Ben McClelland
55cf7674b8 Merge pull request #673 from versity/ben/symlinks
feat: add option to allow symlinked directories as buckets
2024-07-16 08:40:41 -07:00
Ben McClelland
cf5d164b9f Merge pull request #676 from versity/dependabot/go_modules/dev-dependencies-5c50e37e7d
chore(deps): bump the dev-dependencies group with 15 updates
2024-07-15 15:11:51 -07:00
dependabot[bot]
5ddc5418a6 chore(deps): bump the dev-dependencies group with 15 updates
Bumps the dev-dependencies group with 15 updates:

| Package | From | To |
| --- | --- | --- |
| [github.com/aws/aws-sdk-go-v2](https://github.com/aws/aws-sdk-go-v2) | `1.30.1` | `1.30.3` |
| [github.com/aws/aws-sdk-go-v2/service/s3](https://github.com/aws/aws-sdk-go-v2) | `1.58.0` | `1.58.2` |
| [github.com/aws/aws-sdk-go-v2/feature/ec2/imds](https://github.com/aws/aws-sdk-go-v2) | `1.16.9` | `1.16.11` |
| [github.com/aws/aws-sdk-go-v2/service/sso](https://github.com/aws/aws-sdk-go-v2) | `1.22.1` | `1.22.3` |
| [github.com/aws/aws-sdk-go-v2/service/ssooidc](https://github.com/aws/aws-sdk-go-v2) | `1.26.2` | `1.26.4` |
| [github.com/aws/aws-sdk-go-v2/service/sts](https://github.com/aws/aws-sdk-go-v2) | `1.30.1` | `1.30.3` |
| [github.com/aws/aws-sdk-go-v2/config](https://github.com/aws/aws-sdk-go-v2) | `1.27.24` | `1.27.26` |
| [github.com/aws/aws-sdk-go-v2/credentials](https://github.com/aws/aws-sdk-go-v2) | `1.17.24` | `1.17.26` |
| [github.com/aws/aws-sdk-go-v2/feature/s3/manager](https://github.com/aws/aws-sdk-go-v2) | `1.17.5` | `1.17.7` |
| [github.com/aws/aws-sdk-go-v2/internal/configsources](https://github.com/aws/aws-sdk-go-v2) | `1.3.13` | `1.3.15` |
| [github.com/aws/aws-sdk-go-v2/internal/endpoints/v2](https://github.com/aws/aws-sdk-go-v2) | `2.6.13` | `2.6.15` |
| [github.com/aws/aws-sdk-go-v2/internal/v4a](https://github.com/aws/aws-sdk-go-v2) | `1.3.13` | `1.3.15` |
| [github.com/aws/aws-sdk-go-v2/service/internal/checksum](https://github.com/aws/aws-sdk-go-v2) | `1.3.15` | `1.3.17` |
| [github.com/aws/aws-sdk-go-v2/service/internal/presigned-url](https://github.com/aws/aws-sdk-go-v2) | `1.11.15` | `1.11.17` |
| [github.com/aws/aws-sdk-go-v2/service/internal/s3shared](https://github.com/aws/aws-sdk-go-v2) | `1.17.13` | `1.17.15` |


Updates `github.com/aws/aws-sdk-go-v2` from 1.30.1 to 1.30.3
- [Release notes](https://github.com/aws/aws-sdk-go-v2/releases)
- [Commits](https://github.com/aws/aws-sdk-go-v2/compare/v1.30.1...v1.30.3)

Updates `github.com/aws/aws-sdk-go-v2/service/s3` from 1.58.0 to 1.58.2
- [Release notes](https://github.com/aws/aws-sdk-go-v2/releases)
- [Commits](https://github.com/aws/aws-sdk-go-v2/compare/service/s3/v1.58.0...service/s3/v1.58.2)

Updates `github.com/aws/aws-sdk-go-v2/feature/ec2/imds` from 1.16.9 to 1.16.11
- [Release notes](https://github.com/aws/aws-sdk-go-v2/releases)
- [Commits](https://github.com/aws/aws-sdk-go-v2/compare/v1.16.9...v1.16.11)

Updates `github.com/aws/aws-sdk-go-v2/service/sso` from 1.22.1 to 1.22.3
- [Release notes](https://github.com/aws/aws-sdk-go-v2/releases)
- [Commits](https://github.com/aws/aws-sdk-go-v2/compare/v1.22.1...config/v1.22.3)

Updates `github.com/aws/aws-sdk-go-v2/service/ssooidc` from 1.26.2 to 1.26.4
- [Release notes](https://github.com/aws/aws-sdk-go-v2/releases)
- [Commits](https://github.com/aws/aws-sdk-go-v2/compare/v1.26.2...config/v1.26.4)

Updates `github.com/aws/aws-sdk-go-v2/service/sts` from 1.30.1 to 1.30.3
- [Release notes](https://github.com/aws/aws-sdk-go-v2/releases)
- [Commits](https://github.com/aws/aws-sdk-go-v2/compare/v1.30.1...v1.30.3)

Updates `github.com/aws/aws-sdk-go-v2/config` from 1.27.24 to 1.27.26
- [Release notes](https://github.com/aws/aws-sdk-go-v2/releases)
- [Commits](https://github.com/aws/aws-sdk-go-v2/compare/config/v1.27.24...config/v1.27.26)

Updates `github.com/aws/aws-sdk-go-v2/credentials` from 1.17.24 to 1.17.26
- [Release notes](https://github.com/aws/aws-sdk-go-v2/releases)
- [Commits](https://github.com/aws/aws-sdk-go-v2/compare/credentials/v1.17.24...credentials/v1.17.26)

Updates `github.com/aws/aws-sdk-go-v2/feature/s3/manager` from 1.17.5 to 1.17.7
- [Release notes](https://github.com/aws/aws-sdk-go-v2/releases)
- [Changelog](https://github.com/aws/aws-sdk-go-v2/blob/v1.17.7/CHANGELOG.md)
- [Commits](https://github.com/aws/aws-sdk-go-v2/compare/v1.17.5...v1.17.7)

Updates `github.com/aws/aws-sdk-go-v2/internal/configsources` from 1.3.13 to 1.3.15
- [Release notes](https://github.com/aws/aws-sdk-go-v2/releases)
- [Commits](https://github.com/aws/aws-sdk-go-v2/compare/internal/ini/v1.3.13...internal/ini/v1.3.15)

Updates `github.com/aws/aws-sdk-go-v2/internal/endpoints/v2` from 2.6.13 to 2.6.15
- [Release notes](https://github.com/aws/aws-sdk-go-v2/releases)
- [Commits](https://github.com/aws/aws-sdk-go-v2/compare/internal/endpoints/v2.6.13...internal/endpoints/v2.6.15)

Updates `github.com/aws/aws-sdk-go-v2/internal/v4a` from 1.3.13 to 1.3.15
- [Release notes](https://github.com/aws/aws-sdk-go-v2/releases)
- [Commits](https://github.com/aws/aws-sdk-go-v2/compare/internal/ini/v1.3.13...internal/ini/v1.3.15)

Updates `github.com/aws/aws-sdk-go-v2/service/internal/checksum` from 1.3.15 to 1.3.17
- [Release notes](https://github.com/aws/aws-sdk-go-v2/releases)
- [Commits](https://github.com/aws/aws-sdk-go-v2/compare/internal/ini/v1.3.15...internal/ini/v1.3.17)

Updates `github.com/aws/aws-sdk-go-v2/service/internal/presigned-url` from 1.11.15 to 1.11.17
- [Release notes](https://github.com/aws/aws-sdk-go-v2/releases)
- [Changelog](https://github.com/aws/aws-sdk-go-v2/blob/service/dax/v1.11.17/CHANGELOG.md)
- [Commits](https://github.com/aws/aws-sdk-go-v2/compare/service/waf/v1.11.15...service/dax/v1.11.17)

Updates `github.com/aws/aws-sdk-go-v2/service/internal/s3shared` from 1.17.13 to 1.17.15
- [Release notes](https://github.com/aws/aws-sdk-go-v2/releases)
- [Commits](https://github.com/aws/aws-sdk-go-v2/compare/credentials/v1.17.13...credentials/v1.17.15)

---
updated-dependencies:
- dependency-name: github.com/aws/aws-sdk-go-v2
  dependency-type: direct:production
  update-type: version-update:semver-patch
  dependency-group: dev-dependencies
- dependency-name: github.com/aws/aws-sdk-go-v2/service/s3
  dependency-type: direct:production
  update-type: version-update:semver-patch
  dependency-group: dev-dependencies
- dependency-name: github.com/aws/aws-sdk-go-v2/feature/ec2/imds
  dependency-type: indirect
  update-type: version-update:semver-patch
  dependency-group: dev-dependencies
- dependency-name: github.com/aws/aws-sdk-go-v2/service/sso
  dependency-type: indirect
  update-type: version-update:semver-patch
  dependency-group: dev-dependencies
- dependency-name: github.com/aws/aws-sdk-go-v2/service/ssooidc
  dependency-type: indirect
  update-type: version-update:semver-patch
  dependency-group: dev-dependencies
- dependency-name: github.com/aws/aws-sdk-go-v2/service/sts
  dependency-type: indirect
  update-type: version-update:semver-patch
  dependency-group: dev-dependencies
- dependency-name: github.com/aws/aws-sdk-go-v2/config
  dependency-type: direct:production
  update-type: version-update:semver-patch
  dependency-group: dev-dependencies
- dependency-name: github.com/aws/aws-sdk-go-v2/credentials
  dependency-type: direct:production
  update-type: version-update:semver-patch
  dependency-group: dev-dependencies
- dependency-name: github.com/aws/aws-sdk-go-v2/feature/s3/manager
  dependency-type: direct:production
  update-type: version-update:semver-patch
  dependency-group: dev-dependencies
- dependency-name: github.com/aws/aws-sdk-go-v2/internal/configsources
  dependency-type: indirect
  update-type: version-update:semver-patch
  dependency-group: dev-dependencies
- dependency-name: github.com/aws/aws-sdk-go-v2/internal/endpoints/v2
  dependency-type: indirect
  update-type: version-update:semver-patch
  dependency-group: dev-dependencies
- dependency-name: github.com/aws/aws-sdk-go-v2/internal/v4a
  dependency-type: indirect
  update-type: version-update:semver-patch
  dependency-group: dev-dependencies
- dependency-name: github.com/aws/aws-sdk-go-v2/service/internal/checksum
  dependency-type: indirect
  update-type: version-update:semver-patch
  dependency-group: dev-dependencies
- dependency-name: github.com/aws/aws-sdk-go-v2/service/internal/presigned-url
  dependency-type: indirect
  update-type: version-update:semver-patch
  dependency-group: dev-dependencies
- dependency-name: github.com/aws/aws-sdk-go-v2/service/internal/s3shared
  dependency-type: indirect
  update-type: version-update:semver-patch
  dependency-group: dev-dependencies
...

Signed-off-by: dependabot[bot] <support@github.com>
2024-07-15 21:54:01 +00:00
Ben McClelland
f949e2d5ea Merge pull request #672 from versity/ben/ctx_walk
fix: cancel filesystem traversal when listing request cancelled
2024-07-15 14:19:15 -07:00
Ben McClelland
a8adb471fe fix: cancel filesystem traversal when listing request cancelled
For large directories, the treewalk can take longer than the
client request timeout. If the client times out the request
then we need to stop walking the filesystem and just return
the context error.

This should prevent the gateway from consuming system resources
unnecessarily after an incoming request is terminated.
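
A sketch of the cancellation check using the standard library walker (simplified; the gateway's own walk code is not shown):

```go
package main

import (
	"context"
	"fmt"
	"io/fs"
	"path/filepath"
	"time"
)

// walkWithContext is an illustrative tree walk that stops as soon as the
// request context is cancelled, so a timed-out listing does not keep
// consuming filesystem and CPU resources.
func walkWithContext(ctx context.Context, root string, visit func(path string) error) error {
	return filepath.WalkDir(root, func(path string, d fs.DirEntry, err error) error {
		if err != nil {
			return err
		}
		if ctxErr := ctx.Err(); ctxErr != nil {
			return ctxErr // abort the walk and surface the context error
		}
		return visit(path)
	})
}

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), time.Millisecond)
	defer cancel()
	err := walkWithContext(ctx, "/usr", func(path string) error { return nil })
	fmt.Println(err) // context.DeadlineExceeded once the timeout hits
}
```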
2024-07-15 13:47:21 -07:00
jonaustin09
ddd048495a feat: Implemented server access logs with file for Admin APIs 2024-07-15 15:49:03 -04:00
Ben McClelland
f6dd2f947c feat: add option to allow symlinked directories as buckets
This adds the ability to treat symlinks to directories at the top
level of the gateway directory as buckets, the same as normal
directories.

This could be a potential security issue, allowing traversal into
other filesystems on the system, so it is defaulted to off. It can be
enabled when specifically needed for both the posix and scoutfs
backends.

Fixes #644
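
A sketch of the opt-in check using Lstat to spot symlinks (the flag name is illustrative, not the actual gateway option):

```go
package main

import (
	"fmt"
	"os"
)

// isBucketDir is an illustrative check: by default only real directories at
// the gateway root count as buckets; with followSymlinks enabled, a symlink
// that resolves to a directory is accepted as well.
func isBucketDir(path string, followSymlinks bool) (bool, error) {
	fi, err := os.Lstat(path)
	if err != nil {
		return false, err
	}
	if fi.IsDir() {
		return true, nil
	}
	if fi.Mode()&os.ModeSymlink != 0 && followSymlinks {
		target, err := os.Stat(path) // follows the link
		if err != nil {
			return false, err
		}
		return target.IsDir(), nil
	}
	return false, nil
}

func main() {
	ok, err := isBucketDir("/tmp", false)
	fmt.Println(ok, err)
}
```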
2024-07-13 10:21:15 -07:00
Ben McClelland
5d33c7bde5 Merge pull request #671 from versity/fix/change-bucket-owner-ownership
ChangeBucketOwner acl update
2024-07-11 14:02:09 -07:00
jonaustin09
2843cdbd45 fix: Fixed ChangeBucketOwnership action implementation to update the bucket acl 2024-07-11 13:45:01 -04:00
141 changed files with 9474 additions and 3181 deletions


@@ -1,27 +1,23 @@
---
name: Bug report
name: Bug Report
about: Create a report to help us improve
title: ''
title: '[Bug] - <Short Description>'
labels: bug
assignees: ''
---
**Describe the bug**
A clear and concise description of what the bug is.
<!-- A clear and concise description of what the bug is. -->
**To Reproduce**
Steps to reproduce the behavior.
<!-- Steps to reproduce the behavior. -->
**Expected behavior**
A clear and concise description of what you expected to happen.
<!-- A clear and concise description of what you expected to happen. -->
**Server Version**
output of
```
./versitygw -version
uname -a
```
<!-- output of: './versitygw -version && uname -a' -->
**Additional context**
Describe s3 client and version if applicable.
<!-- Describe s3 client and version if applicable.


@@ -1,14 +1,14 @@
---
name: Feature request
name: Feature Request
about: Suggest an idea for this project
title: ''
title: '[Feature] - <Short Description>'
labels: enhancement
assignees: ''
---
**Describe the solution you'd like**
A clear and concise description of what you want to happen.
<!-- A clear and concise description of what you want to happen. -->
**Additional context**
Add any other context or screenshots about the feature request here.
<!-- Add any other context or screenshots about the feature request here. -->

.github/ISSUE_TEMPLATE/test_case.md (new file)

@@ -0,0 +1,33 @@
---
name: Test Case Request
about: Request new test cases or additional test coverage
title: '[Test Case] - <Short Description>'
labels: 'testcase'
assignees: ''
---
## Description
<!-- Please provide a detailed description of the test case or test coverage request. -->
## Purpose
<!-- Explain why this test case is important and what it aims to achieve. -->
## Scope
<!-- Describe the scope of the test case, including any specific functionalities, features, or modules that should be tested. -->
## Acceptance Criteria
<!-- List the criteria that must be met for the test case to be considered complete. -->
1.
2.
3.
## Additional Context
<!-- Add any other context or screenshots about the feature request here. -->
## Resources
<!-- Provide any resources, documentation, or links that could help in writing the test case. -->
**Thank you for contributing to our project!**

.github/workflows/docker-bats.yaml (new file)

@@ -0,0 +1,28 @@
name: docker bats tests
on: pull_request
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout code
        uses: actions/checkout@v2
      - name: Set up Docker Buildx
        uses: docker/setup-buildx-action@v2
      - name: Build Docker Image
        run: |
          mv tests/.env.docker.default tests/.env.docker
          mv tests/.secrets.default tests/.secrets
          docker build --build-arg="GO_LIBRARY=go1.21.7.linux-amd64.tar.gz" \
            --build-arg="AWS_CLI=awscli-exe-linux-x86_64.zip" --build-arg="MC_FOLDER=linux-amd64" \
            --progress=plain -f Dockerfile_test_bats -t bats_test .
      - name: Set up Docker Compose
        run: sudo apt-get install -y docker-compose
      - name: Run Docker Container
        run: docker-compose -f docker-compose-bats.yml up posix_backend

View File

@@ -15,6 +15,13 @@ jobs:
- name: Checkout
uses: actions/checkout@v4
- name: Set up QEMU
uses: docker/setup-qemu-action@v3
- name: Set up Docker Buildx
id: buildx
uses: docker/setup-buildx-action@v3
- name: Log in to Docker Hub
uses: docker/login-action@v3
with:
@@ -43,6 +50,7 @@ jobs:
push: true
tags: ${{ steps.meta.outputs.tags }}
labels: ${{ steps.meta.outputs.labels }}
platforms: linux/amd64,linux/arm64
build-args: |
VERSION=${{ github.event.release.tag_name }}
TIME=${{ github.event.release.published_at }}

View File

@@ -8,17 +8,17 @@ jobs:
fail-fast: false
matrix:
include:
#- set: 1
# LOCAL_FOLDER: /tmp/gw1
# BUCKET_ONE_NAME: versity-gwtest-bucket-one-1
# BUCKET_TWO_NAME: versity-gwtest-bucket-two-1
# IAM_TYPE: folder
# USERS_FOLDER: /tmp/iam1
# AWS_ENDPOINT_URL: https://127.0.0.1:7070
# RUN_SET: "s3cmd"
# RECREATE_BUCKETS: "true"
# PORT: 7070
# BACKEND: "posix"
- set: 1
LOCAL_FOLDER: /tmp/gw1
BUCKET_ONE_NAME: versity-gwtest-bucket-one-1
BUCKET_TWO_NAME: versity-gwtest-bucket-two-1
IAM_TYPE: folder
USERS_FOLDER: /tmp/iam1
AWS_ENDPOINT_URL: https://127.0.0.1:7070
RUN_SET: "s3cmd"
RECREATE_BUCKETS: "true"
PORT: 7070
BACKEND: "posix"
- set: 2
LOCAL_FOLDER: /tmp/gw2
BUCKET_ONE_NAME: versity-gwtest-bucket-one-2
@@ -103,6 +103,8 @@ jobs:
run: |
git clone https://github.com/bats-core/bats-core.git
cd bats-core && ./install.sh $HOME
git clone https://github.com/bats-core/bats-support.git ${{ github.workspace }}/tests/bats-support
git clone https://github.com/ztombol/bats-assert.git ${{ github.workspace }}/tests/bats-assert
- name: Install s3cmd
run: |
@@ -135,6 +137,11 @@ jobs:
MC_ALIAS: versity
LOG_LEVEL: 4
GOCOVERDIR: ${{ github.workspace }}/cover
USERNAME_ONE: ABCDEFG
PASSWORD_ONE: 1234567
USERNAME_TWO: HIJKLMN
PASSWORD_TWO: 8901234
TEST_FILE_FOLDER: ${{ github.workspace }}/versity-gwtest-files
run: |
make testbin
export AWS_ACCESS_KEY_ID=ABCDEFGHIJKLMNOPQRST

3 .gitignore vendored
View File

@@ -59,3 +59,6 @@ tests/!s3cfg.local.default
# patches
*.patch
# grafana's local database (kept on filesystem for survival between instantiations)
metrics-exploration/grafana_data/**

View File

@@ -1,8 +1,11 @@
FROM --platform=linux/arm64 ubuntu:latest
FROM ubuntu:latest
ARG DEBIAN_FRONTEND=noninteractive
ARG SECRETS_FILE=tests/.secrets
ARG CONFIG_FILE=tests/.env.docker
ARG GO_LIBRARY=go1.21.7.linux-arm64.tar.gz
ARG AWS_CLI=awscli-exe-linux-aarch64.zip
ARG MC_FOLDER=linux-arm64
ENV TZ=Etc/UTC
RUN apt-get update && \
@@ -24,20 +27,20 @@ RUN apt-get update && \
WORKDIR /tmp
# Install AWS cli
RUN curl "https://awscli.amazonaws.com/awscli-exe-linux-aarch64.zip" -o "awscliv2.zip" && unzip awscliv2.zip && ./aws/install
RUN curl "https://awscli.amazonaws.com/${AWS_CLI}" -o "awscliv2.zip" && unzip awscliv2.zip && ./aws/install
# Install mc
RUN curl https://dl.min.io/client/mc/release/linux-arm64/mc \
RUN curl https://dl.min.io/client/mc/release/${MC_FOLDER}/mc \
--create-dirs \
-o /usr/local/minio-binaries/mc && \
chmod -R 755 /usr/local/minio-binaries
ENV PATH="/usr/local/minio-binaries":${PATH}
# Download Go 1.21 (adjust the version and platform as needed)
RUN wget https://golang.org/dl/go1.21.7.linux-arm64.tar.gz
RUN wget https://golang.org/dl/${GO_LIBRARY}
# Extract the downloaded archive
RUN tar -xvf go1.21.7.linux-arm64.tar.gz -C /usr/local
RUN tar -xvf $GO_LIBRARY -C /usr/local
# Set Go environment variables
ENV PATH="/usr/local/go/bin:${PATH}"
@@ -60,6 +63,10 @@ RUN git clone https://github.com/bats-core/bats-core.git && \
USER tester
COPY --chown=tester:tester . /home/tester
# add bats support libraries
RUN git clone https://github.com/bats-core/bats-support.git && rm -rf /home/tester/tests/bats-support && mv bats-support /home/tester/tests
RUN git clone https://github.com/ztombol/bats-assert.git && rm -rf /home/tester/tests/bats-assert && mv bats-assert /home/tester/tests
WORKDIR /home/tester
RUN make

View File

@@ -35,6 +35,7 @@ type ACL struct {
type Grantee struct {
Permission types.Permission
Access string
Type types.Type
}
type GetBucketAclOutput struct {
@@ -42,12 +43,36 @@ type GetBucketAclOutput struct {
AccessControlList AccessControlList
}
type AccessControlList struct {
Grants []types.Grant `xml:"Grant"`
type PutBucketAclInput struct {
Bucket *string
ACL types.BucketCannedACL
AccessControlPolicy *AccessControlPolicy
GrantFullControl *string
GrantRead *string
GrantReadACP *string
GrantWrite *string
GrantWriteACP *string
}
type AccessControlPolicy struct {
AccessControlList AccessControlList `xml:"AccessControlList"`
Owner types.Owner
Owner *types.Owner
}
type AccessControlList struct {
Grants []Grant `xml:"Grant"`
}
type Grant struct {
Grantee *Grt
Permission types.Permission
}
type Grt struct {
XMLNS string `xml:"xmlns:xsi,attr"`
XMLXSI types.Type `xml:"xsi:type,attr"`
Type types.Type `xml:"Type"`
ID string `xml:"ID"`
}
func ParseACL(data []byte) (ACL, error) {
@@ -68,11 +93,19 @@ func ParseACLOutput(data []byte) (GetBucketAclOutput, error) {
return GetBucketAclOutput{}, fmt.Errorf("parse acl: %w", err)
}
grants := []types.Grant{}
grants := []Grant{}
for _, elem := range acl.Grantees {
acs := elem.Access
grants = append(grants, types.Grant{Grantee: &types.Grantee{ID: &acs}, Permission: elem.Permission})
grants = append(grants, Grant{
Grantee: &Grt{
XMLNS: "http://www.w3.org/2001/XMLSchema-instance",
XMLXSI: elem.Type,
ID: acs,
Type: elem.Type,
},
Permission: elem.Permission,
})
}
return GetBucketAclOutput{
@@ -85,18 +118,16 @@ func ParseACLOutput(data []byte) (GetBucketAclOutput, error) {
}, nil
}
func UpdateACL(input *s3.PutBucketAclInput, acl ACL, iam IAMService) ([]byte, error) {
func UpdateACL(input *PutBucketAclInput, acl ACL, iam IAMService, isAdmin bool) ([]byte, error) {
if input == nil {
return nil, s3err.GetAPIError(s3err.ErrInvalidRequest)
}
if acl.Owner != *input.AccessControlPolicy.Owner.ID {
return nil, s3err.GetAPIError(s3err.ErrAccessDenied)
}
defaultGrantees := []Grantee{
{
Permission: types.PermissionFullControl,
Access: acl.Owner,
Type: types.TypeCanonicalUser,
},
}
@@ -107,16 +138,19 @@ func UpdateACL(input *s3.PutBucketAclInput, acl ACL, iam IAMService) ([]byte, er
defaultGrantees = append(defaultGrantees, Grantee{
Permission: types.PermissionRead,
Access: "all-users",
Type: types.TypeGroup,
})
case types.BucketCannedACLPublicReadWrite:
defaultGrantees = append(defaultGrantees, []Grantee{
{
Permission: types.PermissionRead,
Access: "all-users",
Type: types.TypeGroup,
},
{
Permission: types.PermissionWrite,
Access: "all-users",
Type: types.TypeGroup,
},
}...)
}
@@ -129,45 +163,71 @@ func UpdateACL(input *s3.PutBucketAclInput, acl ACL, iam IAMService) ([]byte, er
if input.GrantFullControl != nil && *input.GrantFullControl != "" {
fullControlList = splitUnique(*input.GrantFullControl, ",")
for _, str := range fullControlList {
defaultGrantees = append(defaultGrantees, Grantee{Access: str, Permission: "FULL_CONTROL"})
defaultGrantees = append(defaultGrantees, Grantee{
Access: str,
Permission: types.PermissionFullControl,
Type: types.TypeCanonicalUser,
})
}
}
if input.GrantRead != nil && *input.GrantRead != "" {
readList = splitUnique(*input.GrantRead, ",")
for _, str := range readList {
defaultGrantees = append(defaultGrantees, Grantee{Access: str, Permission: "READ"})
defaultGrantees = append(defaultGrantees, Grantee{
Access: str,
Permission: types.PermissionRead,
Type: types.TypeCanonicalUser,
})
}
}
if input.GrantReadACP != nil && *input.GrantReadACP != "" {
readACPList = splitUnique(*input.GrantReadACP, ",")
for _, str := range readACPList {
defaultGrantees = append(defaultGrantees, Grantee{Access: str, Permission: "READ_ACP"})
defaultGrantees = append(defaultGrantees, Grantee{
Access: str,
Permission: types.PermissionReadAcp,
Type: types.TypeCanonicalUser,
})
}
}
if input.GrantWrite != nil && *input.GrantWrite != "" {
writeList = splitUnique(*input.GrantWrite, ",")
for _, str := range writeList {
defaultGrantees = append(defaultGrantees, Grantee{Access: str, Permission: "WRITE"})
defaultGrantees = append(defaultGrantees, Grantee{
Access: str,
Permission: types.PermissionWrite,
Type: types.TypeCanonicalUser,
})
}
}
if input.GrantWriteACP != nil && *input.GrantWriteACP != "" {
writeACPList = splitUnique(*input.GrantWriteACP, ",")
for _, str := range writeACPList {
defaultGrantees = append(defaultGrantees, Grantee{Access: str, Permission: "WRITE_ACP"})
defaultGrantees = append(defaultGrantees, Grantee{
Access: str,
Permission: types.PermissionWriteAcp,
Type: types.TypeCanonicalUser,
})
}
}
accs = append(append(append(append(fullControlList, readList...), writeACPList...), readACPList...), writeList...)
} else {
cache := make(map[string]bool)
for _, grt := range input.AccessControlPolicy.Grants {
if grt.Grantee == nil || grt.Grantee.ID == nil || grt.Permission == "" {
for _, grt := range input.AccessControlPolicy.AccessControlList.Grants {
if grt.Grantee == nil || grt.Grantee.ID == "" || grt.Permission == "" {
return nil, s3err.GetAPIError(s3err.ErrInvalidRequest)
}
defaultGrantees = append(defaultGrantees, Grantee{Access: *grt.Grantee.ID, Permission: grt.Permission})
if _, ok := cache[*grt.Grantee.ID]; !ok {
cache[*grt.Grantee.ID] = true
accs = append(accs, *grt.Grantee.ID)
access := grt.Grantee.ID
defaultGrantees = append(defaultGrantees, Grantee{
Access: access,
Permission: grt.Permission,
Type: types.TypeCanonicalUser,
})
if _, ok := cache[access]; !ok {
cache[access] = true
accs = append(accs, access)
}
}
}
@@ -227,9 +287,21 @@ func splitUnique(s, divider string) []string {
}
func verifyACL(acl ACL, access string, permission types.Permission) error {
grantee := Grantee{Access: access, Permission: permission}
granteeFullCtrl := Grantee{Access: access, Permission: "FULL_CONTROL"}
granteeAllUsers := Grantee{Access: "all-users", Permission: permission}
grantee := Grantee{
Access: access,
Permission: permission,
Type: types.TypeCanonicalUser,
}
granteeFullCtrl := Grantee{
Access: access,
Permission: types.PermissionFullControl,
Type: types.TypeCanonicalUser,
}
granteeAllUsers := Grantee{
Access: "all-users",
Permission: permission,
Type: types.TypeGroup,
}
isFound := false

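The new Grant and Grt types above carry explicit xmlns:xsi and xsi:type attributes so that grantees serialize the way S3 clients expect. A minimal, self-contained sketch of how structs with these tags marshal, using plain strings in place of the AWS SDK's types.Permission and types.Type (a simplification for illustration, not the package's actual definitions):

package main

import (
	"encoding/xml"
	"fmt"
)

// Local stand-ins mirroring the Grt/Grant/AccessControlList shapes in the
// diff above; Permission and Type are plain strings here for brevity.
type Grt struct {
	XMLNS  string `xml:"xmlns:xsi,attr"`
	XMLXSI string `xml:"xsi:type,attr"`
	Type   string `xml:"Type"`
	ID     string `xml:"ID"`
}

type Grant struct {
	Grantee    *Grt
	Permission string
}

type AccessControlList struct {
	Grants []Grant `xml:"Grant"`
}

func main() {
	acl := AccessControlList{
		Grants: []Grant{{
			Grantee: &Grt{
				XMLNS:  "http://www.w3.org/2001/XMLSchema-instance",
				XMLXSI: "CanonicalUser",
				Type:   "CanonicalUser",
				ID:     "my-account",
			},
			Permission: "FULL_CONTROL",
		}},
	}

	// Prints a <Grant> element whose <Grantee> carries the xmlns:xsi and
	// xsi:type attributes, matching the ACL response shape clients expect.
	out, err := xml.MarshalIndent(acl, "", "  ")
	if err != nil {
		panic(err)
	}
	fmt.Println(string(out))
}
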
View File

@@ -76,6 +76,7 @@ var (
)
type Opts struct {
RootAccount Account
Dir string
LDAPServerURL string
LDAPBindDN string
@@ -114,20 +115,20 @@ func New(o *Opts) (IAMService, error) {
switch {
case o.Dir != "":
svc, err = NewInternal(o.Dir)
svc, err = NewInternal(o.RootAccount, o.Dir)
fmt.Printf("initializing internal IAM with %q\n", o.Dir)
case o.LDAPServerURL != "":
svc, err = NewLDAPService(o.LDAPServerURL, o.LDAPBindDN, o.LDAPPassword,
svc, err = NewLDAPService(o.RootAccount, o.LDAPServerURL, o.LDAPBindDN, o.LDAPPassword,
o.LDAPQueryBase, o.LDAPAccessAtr, o.LDAPSecretAtr, o.LDAPRoleAtr, o.LDAPUserIdAtr,
o.LDAPGroupIdAtr, o.LDAPObjClasses)
fmt.Printf("initializing LDAP IAM with %q\n", o.LDAPServerURL)
case o.S3Endpoint != "":
svc, err = NewS3(o.S3Access, o.S3Secret, o.S3Region, o.S3Bucket,
svc, err = NewS3(o.RootAccount, o.S3Access, o.S3Secret, o.S3Region, o.S3Bucket,
o.S3Endpoint, o.S3DisableSSlVerfiy, o.S3Debug)
fmt.Printf("initializing S3 IAM with '%v/%v'\n",
o.S3Endpoint, o.S3Bucket)
case o.VaultEndpointURL != "":
svc, err = NewVaultIAMService(o.VaultEndpointURL, o.VaultSecretStoragePath,
svc, err = NewVaultIAMService(o.RootAccount, o.VaultEndpointURL, o.VaultSecretStoragePath,
o.VaultMountPath, o.VaultRootToken, o.VaultRoleId, o.VaultRoleSecret,
o.VaultServerCert, o.VaultClientCert, o.VaultClientCertKey)
fmt.Printf("initializing Vault IAM with %q\n", o.VaultEndpointURL)

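Each IAM backend below now receives the gateway root account and answers lookups for it from memory, so the root credentials never have to exist in the internal store, LDAP, S3, or Vault. A minimal sketch of that shared guard pattern, with stand-in Account and error values in place of the real auth package types:

package main

import (
	"errors"
	"fmt"
)

// Stand-ins for auth.Account and its sentinel errors (names chosen to
// mirror the diffs below; this is an illustrative sketch, not the package).
type Account struct {
	Access string
	Secret string
	Role   string
}

var (
	ErrUserExists = errors.New("user already exists")
	ErrNoSuchUser = errors.New("user not found")
)

// rootGuardedIAM shows the pattern each backend adopts: CreateAccount
// refuses to shadow the root access key, and GetUserAccount returns the
// root account without touching the backing store.
type rootGuardedIAM struct {
	rootAcc Account
	users   map[string]Account // stand-in for the real storage backend
}

func (s *rootGuardedIAM) CreateAccount(acct Account) error {
	if acct.Access == s.rootAcc.Access {
		return ErrUserExists
	}
	s.users[acct.Access] = acct
	return nil
}

func (s *rootGuardedIAM) GetUserAccount(access string) (Account, error) {
	if access == s.rootAcc.Access {
		return s.rootAcc, nil
	}
	acct, ok := s.users[access]
	if !ok {
		return Account{}, ErrNoSuchUser
	}
	return acct, nil
}

func main() {
	iam := &rootGuardedIAM{
		rootAcc: Account{Access: "root", Secret: "rootsecret", Role: "admin"},
		users:   map[string]Account{},
	}
	fmt.Println(iam.CreateAccount(Account{Access: "root"})) // user already exists
	acct, _ := iam.GetUserAccount("root")
	fmt.Println(acct.Role) // admin
}
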
View File

@@ -40,7 +40,8 @@ type IAMServiceInternal struct {
// IAM service. All account updates should be sent to a single
// gateway instance if possible.
sync.RWMutex
dir string
dir string
rootAcc Account
}
// UpdateAcctFunc accepts the current data and returns the new data to be stored
@@ -54,9 +55,10 @@ type iAMConfig struct {
var _ IAMService = &IAMServiceInternal{}
// NewInternal creates a new instance for the Internal IAM service
func NewInternal(dir string) (*IAMServiceInternal, error) {
func NewInternal(rootAcc Account, dir string) (*IAMServiceInternal, error) {
i := &IAMServiceInternal{
dir: dir,
dir: dir,
rootAcc: rootAcc,
}
err := i.initIAM()
@@ -70,6 +72,10 @@ func NewInternal(dir string) (*IAMServiceInternal, error) {
// CreateAccount creates a new IAM account. Returns an error if the account
// already exists.
func (s *IAMServiceInternal) CreateAccount(account Account) error {
if account.Access == s.rootAcc.Access {
return ErrUserExists
}
s.Lock()
defer s.Unlock()
@@ -97,6 +103,10 @@ func (s *IAMServiceInternal) CreateAccount(account Account) error {
// GetUserAccount retrieves account info for the requested user. Returns
// ErrNoSuchUser if the account does not exist.
func (s *IAMServiceInternal) GetUserAccount(access string) (Account, error) {
if access == s.rootAcc.Access {
return s.rootAcc, nil
}
s.RLock()
defer s.RUnlock()

View File

@@ -31,11 +31,12 @@ type LdapIAMService struct {
roleAtr string
groupIdAtr string
userIdAtr string
rootAcc Account
}
var _ IAMService = &LdapIAMService{}
func NewLDAPService(url, bindDN, pass, queryBase, accAtr, secAtr, roleAtr, userIdAtr, groupIdAtr, objClasses string) (IAMService, error) {
func NewLDAPService(rootAcc Account, url, bindDN, pass, queryBase, accAtr, secAtr, roleAtr, userIdAtr, groupIdAtr, objClasses string) (IAMService, error) {
if url == "" || bindDN == "" || pass == "" || queryBase == "" || accAtr == "" ||
secAtr == "" || roleAtr == "" || userIdAtr == "" || groupIdAtr == "" || objClasses == "" {
return nil, fmt.Errorf("required parameters list not fully provided")
@@ -58,10 +59,14 @@ func NewLDAPService(url, bindDN, pass, queryBase, accAtr, secAtr, roleAtr, userI
roleAtr: roleAtr,
userIdAtr: userIdAtr,
groupIdAtr: groupIdAtr,
rootAcc: rootAcc,
}, nil
}
func (ld *LdapIAMService) CreateAccount(account Account) error {
if ld.rootAcc.Access == account.Access {
return ErrUserExists
}
userEntry := ldap.NewAddRequest(fmt.Sprintf("%v=%v,%v", ld.accessAtr, account.Access, ld.queryBase), nil)
userEntry.Attribute("objectClass", ld.objClasses)
userEntry.Attribute(ld.accessAtr, []string{account.Access})
@@ -79,6 +84,9 @@ func (ld *LdapIAMService) CreateAccount(account Account) error {
}
func (ld *LdapIAMService) GetUserAccount(access string) (Account, error) {
if access == ld.rootAcc.Access {
return ld.rootAcc, nil
}
searchRequest := ldap.NewSearchRequest(
ld.queryBase,
ldap.ScopeWholeSubtree,

View File

@@ -57,12 +57,13 @@ type IAMServiceS3 struct {
endpoint string
sslSkipVerify bool
debug bool
rootAcc Account
client *s3.Client
}
var _ IAMService = &IAMServiceS3{}
func NewS3(access, secret, region, bucket, endpoint string, sslSkipVerify, debug bool) (*IAMServiceS3, error) {
func NewS3(rootAcc Account, access, secret, region, bucket, endpoint string, sslSkipVerify, debug bool) (*IAMServiceS3, error) {
if access == "" {
return nil, fmt.Errorf("must provide s3 IAM service access key")
}
@@ -87,6 +88,7 @@ func NewS3(access, secret, region, bucket, endpoint string, sslSkipVerify, debug
endpoint: endpoint,
sslSkipVerify: sslSkipVerify,
debug: debug,
rootAcc: rootAcc,
}
cfg, err := i.getConfig()
@@ -106,6 +108,10 @@ func NewS3(access, secret, region, bucket, endpoint string, sslSkipVerify, debug
}
func (s *IAMServiceS3) CreateAccount(account Account) error {
if s.rootAcc.Access == account.Access {
return ErrUserExists
}
s.Lock()
defer s.Unlock()
@@ -124,6 +130,10 @@ func (s *IAMServiceS3) CreateAccount(account Account) error {
}
func (s *IAMServiceS3) GetUserAccount(access string) (Account, error) {
if access == s.rootAcc.Access {
return s.rootAcc, nil
}
s.RLock()
defer s.RUnlock()
@@ -242,7 +252,7 @@ func (s *IAMServiceS3) getAccounts() (iAMConfig, error) {
})
if err != nil {
// if the error is object not exists,
// init empty accounts stuct and return that
// init empty accounts struct and return that
var nsk *types.NoSuchKey
if errors.As(err, &nsk) {
return iAMConfig{AccessAccounts: map[string]Account{}}, nil

View File

@@ -30,11 +30,12 @@ type VaultIAMService struct {
client *vault.Client
reqOpts []vault.RequestOption
secretStoragePath string
rootAcc Account
}
var _ IAMService = &VaultIAMService{}
func NewVaultIAMService(endpoint, secretStoragePath, mountPath, rootToken, roleID, roleSecret, serverCert, clientCert, clientCertKey string) (IAMService, error) {
func NewVaultIAMService(rootAcc Account, endpoint, secretStoragePath, mountPath, rootToken, roleID, roleSecret, serverCert, clientCert, clientCertKey string) (IAMService, error) {
opts := []vault.ClientOption{
vault.WithAddress(endpoint),
// set request timeout to 10 secs
@@ -100,10 +101,14 @@ func NewVaultIAMService(endpoint, secretStoragePath, mountPath, rootToken, roleI
client: client,
reqOpts: reqOpts,
secretStoragePath: secretStoragePath,
rootAcc: rootAcc,
}, nil
}
func (vt *VaultIAMService) CreateAccount(account Account) error {
if vt.rootAcc.Access == account.Access {
return ErrUserExists
}
ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
_, err := vt.client.Secrets.KvV2Write(ctx, vt.secretStoragePath+"/"+account.Access, schema.KvV2WriteRequest{
Data: map[string]any{
@@ -125,6 +130,9 @@ func (vt *VaultIAMService) CreateAccount(account Account) error {
}
func (vt *VaultIAMService) GetUserAccount(access string) (Account, error) {
if vt.rootAcc.Access == access {
return vt.rootAcc, nil
}
ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
resp, err := vt.client.Secrets.KvV2Read(ctx, vt.secretStoragePath+"/"+access, vt.reqOpts...)
cancel()

View File

@@ -30,6 +30,7 @@ import (
"strings"
"time"
"github.com/Azure/azure-sdk-for-go/sdk/azcore"
"github.com/Azure/azure-sdk-for-go/sdk/azcore/streaming"
"github.com/Azure/azure-sdk-for-go/sdk/azidentity"
"github.com/Azure/azure-sdk-for-go/sdk/storage/azblob"
@@ -56,9 +57,11 @@ const (
keyOwnership key = "Ownership"
keyTags key = "Tags"
keyPolicy key = "Policy"
keyBucketLock key = "Bucket-Lock"
keyObjRetention key = "Object_retention"
keyObjLegalHold key = "Object_legal_hold"
keyBucketLock key = "Bucketlock"
keyObjRetention key = "Objectretention"
keyObjLegalHold key = "Objectlegalhold"
defaultContentType = "binary/octet-stream"
)
type Azure struct {
@@ -127,8 +130,8 @@ func (az *Azure) String() string {
func (az *Azure) CreateBucket(ctx context.Context, input *s3.CreateBucketInput, acl []byte) error {
meta := map[string]*string{
string(keyAclCapital): backend.GetStringPtr(string(acl)),
string(keyOwnership): backend.GetStringPtr(string(input.ObjectOwnership)),
string(keyAclCapital): backend.GetStringPtr(encodeBytes(acl)),
string(keyOwnership): backend.GetStringPtr(encodeBytes([]byte(input.ObjectOwnership))),
}
acct, ok := ctx.Value("account").(auth.Account)
@@ -148,28 +151,21 @@ func (az *Azure) CreateBucket(ctx context.Context, input *s3.CreateBucketInput,
return fmt.Errorf("parse default bucket lock state: %w", err)
}
meta[string(keyBucketLock)] = backend.GetStringPtr(string(defaultLockParsed))
meta[string(keyBucketLock)] = backend.GetStringPtr(encodeBytes(defaultLockParsed))
}
_, err := az.client.CreateContainer(ctx, *input.Bucket, &container.CreateOptions{Metadata: meta})
if errors.Is(s3err.GetAPIError(s3err.ErrBucketAlreadyExists), azureErrToS3Err(err)) {
client, err := az.getContainerClient(*input.Bucket)
aclBytes, err := az.getContainerMetaData(ctx, *input.Bucket, string(keyAclCapital))
if err != nil {
return err
}
props, err := client.GetProperties(ctx, nil)
if err != nil {
return azureErrToS3Err(err)
}
aclPtr, ok := props.Metadata[string(keyAclCapital)]
if !ok {
return fmt.Errorf("missing acl in the bucket")
}
var acl auth.ACL
if err := json.Unmarshal([]byte(*aclPtr), &acl); err != nil {
return fmt.Errorf("unmarshal bucket acl: %w", err)
if len(aclBytes) > 0 {
if err := json.Unmarshal(aclBytes, &acl); err != nil {
return fmt.Errorf("unmarshal bucket acl: %w", err)
}
}
if acl.Owner == acct.Access {
return s3err.GetAPIError(s3err.ErrBucketAlreadyOwnedByYou)
@@ -225,12 +221,7 @@ func (az *Azure) ListBuckets(ctx context.Context, owner string, isAdmin bool) (s
}
func (az *Azure) HeadBucket(ctx context.Context, input *s3.HeadBucketInput) (*s3.HeadBucketOutput, error) {
client, err := az.getContainerClient(*input.Bucket)
if err != nil {
return nil, err
}
_, err = client.GetProperties(ctx, nil)
_, err := az.getContainerMetaData(ctx, *input.Bucket, "any")
if err != nil {
return nil, azureErrToS3Err(err)
}
@@ -254,64 +245,21 @@ func (az *Azure) DeleteBucket(ctx context.Context, input *s3.DeleteBucketInput)
}
func (az *Azure) PutBucketOwnershipControls(ctx context.Context, bucket string, ownership types.ObjectOwnership) error {
client, err := az.getContainerClient(bucket)
if err != nil {
return err
}
resp, err := client.GetProperties(ctx, &container.GetPropertiesOptions{})
if err != nil {
return azureErrToS3Err(err)
}
resp.Metadata[string(keyOwnership)] = backend.GetStringPtr(string(ownership))
_, err = client.SetMetadata(ctx, &container.SetMetadataOptions{Metadata: resp.Metadata})
if err != nil {
return azureErrToS3Err(err)
}
return nil
return az.setContainerMetaData(ctx, bucket, string(keyOwnership), []byte(ownership))
}
func (az *Azure) GetBucketOwnershipControls(ctx context.Context, bucket string) (types.ObjectOwnership, error) {
var ownship types.ObjectOwnership
client, err := az.getContainerClient(bucket)
ownership, err := az.getContainerMetaData(ctx, bucket, string(keyOwnership))
if err != nil {
return ownship, err
}
resp, err := client.GetProperties(ctx, &container.GetPropertiesOptions{})
if err != nil {
return ownship, azureErrToS3Err(err)
}
ownership, ok := resp.Metadata[string(keyOwnership)]
if !ok {
return ownship, s3err.GetAPIError(s3err.ErrOwnershipControlsNotFound)
}
return types.ObjectOwnership(*ownership), nil
return types.ObjectOwnership(ownership), nil
}
func (az *Azure) DeleteBucketOwnershipControls(ctx context.Context, bucket string) error {
client, err := az.getContainerClient(bucket)
if err != nil {
return err
}
resp, err := client.GetProperties(ctx, &container.GetPropertiesOptions{})
if err != nil {
return azureErrToS3Err(err)
}
delete(resp.Metadata, string(keyOwnership))
_, err = client.SetMetadata(ctx, &container.SetMetadataOptions{Metadata: resp.Metadata})
if err != nil {
return azureErrToS3Err(err)
}
return nil
return az.deleteContainerMetaData(ctx, bucket, string(keyOwnership))
}
func (az *Azure) PutObject(ctx context.Context, po *s3.PutObjectInput) (string, error) {
@@ -320,17 +268,30 @@ func (az *Azure) PutObject(ctx context.Context, po *s3.PutObjectInput) (string,
return "", err
}
uploadResp, err := az.client.UploadStream(ctx, *po.Bucket, *po.Key, po.Body, &blockblob.UploadStreamOptions{
opts := &blockblob.UploadStreamOptions{
Metadata: parseMetadata(po.Metadata),
Tags: tags,
})
}
opts.HTTPHeaders = &blob.HTTPHeaders{}
opts.HTTPHeaders.BlobContentEncoding = po.ContentEncoding
opts.HTTPHeaders.BlobContentLanguage = po.ContentLanguage
opts.HTTPHeaders.BlobContentDisposition = po.ContentDisposition
opts.HTTPHeaders.BlobContentType = po.ContentType
if opts.HTTPHeaders.BlobContentType == nil {
opts.HTTPHeaders.BlobContentType = backend.GetStringPtr(string(defaultContentType))
}
uploadResp, err := az.client.UploadStream(ctx, *po.Bucket, *po.Key, po.Body, opts)
if err != nil {
return "", azureErrToS3Err(err)
}
// Set object legal hold
if po.ObjectLockLegalHoldStatus == types.ObjectLockLegalHoldStatusOn {
if err := az.PutObjectLegalHold(ctx, *po.Bucket, *po.Key, "", true); err != nil {
err := az.PutObjectLegalHold(ctx, *po.Bucket, *po.Key, "", true)
if err != nil {
return "", err
}
}
@@ -345,7 +306,8 @@ func (az *Azure) PutObject(ctx context.Context, po *s3.PutObjectInput) (string,
if err != nil {
return "", fmt.Errorf("parse object lock retention: %w", err)
}
if err := az.PutObjectRetention(ctx, *po.Bucket, *po.Key, "", true, retParsed); err != nil {
err = az.PutObjectRetention(ctx, *po.Bucket, *po.Key, "", true, retParsed)
if err != nil {
return "", err
}
}
@@ -354,53 +316,31 @@ func (az *Azure) PutObject(ctx context.Context, po *s3.PutObjectInput) (string,
}
func (az *Azure) PutBucketTagging(ctx context.Context, bucket string, tags map[string]string) error {
client, err := az.getContainerClient(bucket)
if tags == nil {
return az.deleteContainerMetaData(ctx, bucket, string(keyTags))
}
tagsJson, err := json.Marshal(tags)
if err != nil {
return err
}
resp, err := client.GetProperties(ctx, &container.GetPropertiesOptions{})
if err != nil {
return azureErrToS3Err(err)
}
if tags == nil {
delete(resp.Metadata, string(keyTags))
} else {
tagsJson, err := json.Marshal(tags)
if err != nil {
return err
}
resp.Metadata[string(keyTags)] = backend.GetStringPtr(string(tagsJson))
}
_, err = client.SetMetadata(ctx, &container.SetMetadataOptions{Metadata: resp.Metadata})
if err != nil {
return azureErrToS3Err(err)
}
return nil
return az.setContainerMetaData(ctx, bucket, string(keyTags), tagsJson)
}
func (az *Azure) GetBucketTagging(ctx context.Context, bucket string) (map[string]string, error) {
client, err := az.getContainerClient(bucket)
tagsJson, err := az.getContainerMetaData(ctx, bucket, string(keyTags))
if err != nil {
return nil, err
}
resp, err := client.GetProperties(ctx, &container.GetPropertiesOptions{})
if err != nil {
return nil, azureErrToS3Err(err)
}
tagsJson, ok := resp.Metadata[string(keyTags)]
if !ok {
return nil, s3err.GetAPIError(s3err.ErrBucketTaggingNotFound)
}
var tags map[string]string
if json.Unmarshal([]byte(*tagsJson), &tags); err != nil {
if len(tagsJson) == 0 {
return tags, nil
}
err = json.Unmarshal(tagsJson, &tags)
if err != nil {
return nil, err
}
@@ -435,11 +375,16 @@ func (az *Azure) GetObject(ctx context.Context, input *s3.GetObjectInput) (*s3.G
tagcount = int32(*blobDownloadResponse.TagCount)
}
contentType := blobDownloadResponse.ContentType
if contentType == nil {
contentType = backend.GetStringPtr(defaultContentType)
}
return &s3.GetObjectOutput{
AcceptRanges: input.Range,
ContentLength: blobDownloadResponse.ContentLength,
ContentEncoding: blobDownloadResponse.ContentEncoding,
ContentType: blobDownloadResponse.ContentType,
ContentType: contentType,
ETag: (*string)(blobDownloadResponse.ETag),
LastModified: blobDownloadResponse.LastModified,
Metadata: parseAzMetadata(blobDownloadResponse.Metadata),
@@ -535,7 +480,7 @@ func (az *Azure) GetObjectAttributes(ctx context.Context, input *s3.GetObjectAtt
ETag: data.ETag,
LastModified: data.LastModified,
ObjectSize: data.ContentLength,
StorageClass: &data.StorageClass,
StorageClass: data.StorageClass,
VersionId: data.VersionId,
}, nil
}
@@ -580,14 +525,14 @@ func (az *Azure) GetObjectAttributes(ctx context.Context, input *s3.GetObjectAtt
}, nil
}
func (az *Azure) ListObjects(ctx context.Context, input *s3.ListObjectsInput) (*s3.ListObjectsOutput, error) {
func (az *Azure) ListObjects(ctx context.Context, input *s3.ListObjectsInput) (s3response.ListObjectsResult, error) {
pager := az.client.NewListBlobsFlatPager(*input.Bucket, &azblob.ListBlobsFlatOptions{
Marker: input.Marker,
MaxResults: input.MaxKeys,
Prefix: input.Prefix,
})
var objects []types.Object
var objects []s3response.Object
var nextMarker *string
var isTruncated bool
var maxKeys int32 = math.MaxInt32
@@ -600,7 +545,7 @@ Pager:
for pager.More() {
resp, err := pager.NextPage(ctx)
if err != nil {
return nil, azureErrToS3Err(err)
return s3response.ListObjectsResult{}, azureErrToS3Err(err)
}
for _, v := range resp.Segment.BlobItems {
@@ -611,7 +556,7 @@ Pager:
if len(objects) >= int(maxKeys) {
break Pager
}
objects = append(objects, types.Object{
objects = append(objects, s3response.Object{
ETag: (*string)(v.Properties.ETag),
Key: v.Name,
LastModified: v.Properties.LastModified,
@@ -623,7 +568,7 @@ Pager:
// TODO: generate common prefixes when appropriate
return &s3.ListObjectsOutput{
return s3response.ListObjectsResult{
Contents: objects,
Marker: input.Marker,
MaxKeys: input.MaxKeys,
@@ -631,10 +576,11 @@ Pager:
NextMarker: nextMarker,
Prefix: input.Prefix,
IsTruncated: &isTruncated,
Delimiter: input.Delimiter,
}, nil
}
func (az *Azure) ListObjectsV2(ctx context.Context, input *s3.ListObjectsV2Input) (*s3.ListObjectsV2Output, error) {
func (az *Azure) ListObjectsV2(ctx context.Context, input *s3.ListObjectsV2Input) (s3response.ListObjectsV2Result, error) {
marker := ""
if *input.ContinuationToken > *input.StartAfter {
marker = *input.ContinuationToken
@@ -647,7 +593,7 @@ func (az *Azure) ListObjectsV2(ctx context.Context, input *s3.ListObjectsV2Input
Prefix: input.Prefix,
})
var objects []types.Object
var objects []s3response.Object
var nextMarker *string
var isTruncated bool
var maxKeys int32 = math.MaxInt32
@@ -660,7 +606,7 @@ Pager:
for pager.More() {
resp, err := pager.NextPage(ctx)
if err != nil {
return nil, azureErrToS3Err(err)
return s3response.ListObjectsV2Result{}, azureErrToS3Err(err)
}
for _, v := range resp.Segment.BlobItems {
if nextMarker == nil && *resp.NextMarker != "" {
@@ -671,7 +617,7 @@ Pager:
break Pager
}
nextMarker = resp.NextMarker
objects = append(objects, types.Object{
objects = append(objects, s3response.Object{
ETag: (*string)(v.Properties.ETag),
Key: v.Name,
LastModified: v.Properties.LastModified,
@@ -683,7 +629,7 @@ Pager:
// TODO: generate common prefixes when appropriate
return &s3.ListObjectsV2Output{
return s3response.ListObjectsV2Result{
Contents: objects,
ContinuationToken: input.ContinuationToken,
MaxKeys: input.MaxKeys,
@@ -697,6 +643,13 @@ Pager:
func (az *Azure) DeleteObject(ctx context.Context, input *s3.DeleteObjectInput) error {
_, err := az.client.DeleteBlob(ctx, *input.Bucket, *input.Key, nil)
if err != nil {
azerr, ok := err.(*azcore.ResponseError)
if ok && azerr.StatusCode == 404 {
// if the object does not exist, S3 returns success
return nil
}
}
return azureErrToS3Err(err)
}
@@ -734,17 +687,12 @@ func (az *Azure) DeleteObjects(ctx context.Context, input *s3.DeleteObjectsInput
}
func (az *Azure) CopyObject(ctx context.Context, input *s3.CopyObjectInput) (*s3.CopyObjectOutput, error) {
containerClient, err := az.getContainerClient(*input.Bucket)
mdmap, err := az.getContainerMetaDataMap(ctx, *input.Bucket)
if err != nil {
return nil, err
}
res, err := containerClient.GetProperties(ctx, &container.GetPropertiesOptions{})
if err != nil {
return nil, azureErrToS3Err(err)
}
if strings.Join([]string{*input.Bucket, *input.Key}, "/") == *input.CopySource && isMetaSame(res.Metadata, input.Metadata) {
if strings.Join([]string{*input.Bucket, *input.Key}, "/") == *input.CopySource && isMetaSame(mdmap, input.Metadata) {
return nil, s3err.GetAPIError(s3err.ErrInvalidCopyDest)
}
@@ -753,12 +701,12 @@ func (az *Azure) CopyObject(ctx context.Context, input *s3.CopyObjectInput) (*s3
return nil, err
}
client, err := az.getBlobClient(*input.Bucket, *input.Key)
bclient, err := az.getBlobClient(*input.Bucket, *input.Key)
if err != nil {
return nil, err
}
resp, err := client.CopyFromURL(ctx, az.serviceURL+"/"+*input.CopySource, &blob.CopyFromURLOptions{
resp, err := bclient.CopyFromURL(ctx, az.serviceURL+"/"+*input.CopySource, &blob.CopyFromURLOptions{
BlobTags: tags,
Metadata: parseMetadata(input.Metadata),
})
@@ -816,7 +764,7 @@ func (az *Azure) DeleteObjectTagging(ctx context.Context, bucket, object string)
return nil
}
func (az *Azure) CreateMultipartUpload(ctx context.Context, input *s3.CreateMultipartUploadInput) (*s3.CreateMultipartUploadOutput, error) {
func (az *Azure) CreateMultipartUpload(ctx context.Context, input *s3.CreateMultipartUploadInput) (s3response.InitiateMultipartUploadResult, error) {
// Multipart upload starts with UploadPart action so there is no
// correlating function for creating mutlipart uploads.
// TODO: since azure only allows for a single multipart upload
@@ -826,10 +774,10 @@ func (az *Azure) CreateMultipartUpload(ctx context.Context, input *s3.CreateMult
// Alternatively, is there something we can do with upload ids to
// keep concurrent uploads unique still? I haven't found an efficient
// way to rename final objects.
return &s3.CreateMultipartUploadOutput{
Bucket: input.Bucket,
Key: input.Key,
UploadId: input.Key,
return s3response.InitiateMultipartUploadResult{
Bucket: *input.Bucket,
Key: *input.Key,
UploadId: *input.Key,
}, nil
}
@@ -915,7 +863,7 @@ func (az *Azure) ListParts(ctx context.Context, input *s3.ListPartsInput) (s3res
Size: *el.Size,
ETag: *el.Name,
PartNumber: partNumber,
LastModified: time.Now().Format(backend.RFC3339TimeFormat),
LastModified: time.Now(),
})
if len(parts) >= int(maxParts) {
nextPartNumberMarker = partNumber
@@ -970,7 +918,7 @@ func (az *Azure) ListMultipartUploads(ctx context.Context, input *s3.ListMultipa
}
uploads = append(uploads, s3response.Upload{
Key: *el.Name,
Initiated: el.Properties.CreationTime.Format(backend.RFC3339TimeFormat),
Initiated: *el.Properties.CreationTime,
})
}
}
@@ -1053,92 +1001,30 @@ func (az *Azure) CompleteMultipartUpload(ctx context.Context, input *s3.Complete
}
func (az *Azure) PutBucketAcl(ctx context.Context, bucket string, data []byte) error {
client, err := az.getContainerClient(bucket)
if err != nil {
return err
}
props, err := client.GetProperties(ctx, nil)
if err != nil {
return azureErrToS3Err(err)
}
props.Metadata[string(keyAclCapital)] = backend.GetStringPtr(string(data))
_, err = client.SetMetadata(ctx, &container.SetMetadataOptions{
Metadata: props.Metadata,
})
if err != nil {
return azureErrToS3Err(err)
}
return nil
return az.setContainerMetaData(ctx, bucket, string(keyAclCapital), data)
}
func (az *Azure) GetBucketAcl(ctx context.Context, input *s3.GetBucketAclInput) ([]byte, error) {
client, err := az.getContainerClient(*input.Bucket)
if err != nil {
return nil, err
}
props, err := client.GetProperties(ctx, nil)
if err != nil {
return nil, azureErrToS3Err(err)
}
aclPtr, ok := props.Metadata[string(keyAclCapital)]
if !ok {
return nil, s3err.GetAPIError(s3err.ErrInternalError)
}
return []byte(*aclPtr), nil
return az.getContainerMetaData(ctx, *input.Bucket, string(keyAclCapital))
}
func (az *Azure) PutBucketPolicy(ctx context.Context, bucket string, policy []byte) error {
client, err := az.getContainerClient(bucket)
if err != nil {
return err
}
props, err := client.GetProperties(ctx, nil)
if err != nil {
return azureErrToS3Err(err)
}
if policy == nil {
delete(props.Metadata, string(keyPolicy))
} else {
// Store policy as base64 encoded, because storing raw json causes an SDK error
policyEncoded := base64.StdEncoding.EncodeToString(policy)
props.Metadata[string(keyPolicy)] = &policyEncoded
return az.deleteContainerMetaData(ctx, bucket, string(keyPolicy))
}
_, err = client.SetMetadata(ctx, &container.SetMetadataOptions{
Metadata: props.Metadata,
})
if err != nil {
return azureErrToS3Err(err)
}
return nil
return az.setContainerMetaData(ctx, bucket, string(keyPolicy), policy)
}
func (az *Azure) GetBucketPolicy(ctx context.Context, bucket string) ([]byte, error) {
client, err := az.getContainerClient(bucket)
p, err := az.getContainerMetaData(ctx, bucket, string(keyPolicy))
if err != nil {
return nil, err
}
props, err := client.GetProperties(ctx, nil)
if err != nil {
return nil, azureErrToS3Err(err)
}
policyPtr, ok := props.Metadata[string(keyPolicy)]
if !ok {
if len(p) == 0 {
return nil, s3err.GetAPIError(s3err.ErrNoSuchBucketPolicy)
}
policy, err := base64.StdEncoding.DecodeString(*policyPtr)
if err != nil {
return nil, err
}
return policy, nil
return p, nil
}
func (az *Azure) DeleteBucketPolicy(ctx context.Context, bucket string) error {
@@ -1146,23 +1032,17 @@ func (az *Azure) DeleteBucketPolicy(ctx context.Context, bucket string) error {
}
func (az *Azure) PutObjectLockConfiguration(ctx context.Context, bucket string, config []byte) error {
client, err := az.getContainerClient(bucket)
cfg, err := az.getContainerMetaData(ctx, bucket, string(keyBucketLock))
if err != nil {
return err
}
props, err := client.GetProperties(ctx, nil)
if err != nil {
return azureErrToS3Err(err)
}
cfg, exists := props.Metadata[string(keyBucketLock)]
if !exists {
if len(cfg) == 0 {
return s3err.GetAPIError(s3err.ErrObjectLockConfigurationNotAllowed)
}
var bucketLockCfg auth.BucketLockConfig
if err := json.Unmarshal([]byte(*cfg), &bucketLockCfg); err != nil {
if err := json.Unmarshal(cfg, &bucketLockCfg); err != nil {
return fmt.Errorf("unmarshal object lock config: %w", err)
}
@@ -1170,53 +1050,34 @@ func (az *Azure) PutObjectLockConfiguration(ctx context.Context, bucket string,
return s3err.GetAPIError(s3err.ErrObjectLockConfigurationNotAllowed)
}
props.Metadata[string(keyBucketLock)] = backend.GetStringPtr(string(config))
_, err = client.SetMetadata(ctx, &container.SetMetadataOptions{
Metadata: props.Metadata,
})
if err != nil {
return azureErrToS3Err(err)
}
return nil
return az.setContainerMetaData(ctx, bucket, string(keyBucketLock), config)
}
func (az *Azure) GetObjectLockConfiguration(ctx context.Context, bucket string) ([]byte, error) {
client, err := az.getContainerClient(bucket)
cfg, err := az.getContainerMetaData(ctx, bucket, string(keyBucketLock))
if err != nil {
return nil, err
}
props, err := client.GetProperties(ctx, nil)
if err != nil {
return nil, azureErrToS3Err(err)
}
config, ok := props.Metadata[string(keyBucketLock)]
if !ok {
if len(cfg) == 0 {
return nil, s3err.GetAPIError(s3err.ErrObjectLockConfigurationNotFound)
}
return []byte(*config), nil
return cfg, nil
}
func (az *Azure) PutObjectRetention(ctx context.Context, bucket, object, versionId string, bypass bool, retention []byte) error {
contClient, err := az.getContainerClient(bucket)
if err != nil {
return err
}
contProps, err := contClient.GetProperties(ctx, nil)
cfg, err := az.getContainerMetaData(ctx, bucket, string(keyBucketLock))
if err != nil {
return azureErrToS3Err(err)
}
contCfg, ok := contProps.Metadata[string(keyBucketLock)]
if !ok {
if len(cfg) == 0 {
return s3err.GetAPIError(s3err.ErrInvalidBucketObjectLockConfiguration)
}
var bucketLockConfig auth.BucketLockConfig
if err := json.Unmarshal([]byte(*contCfg), &bucketLockConfig); err != nil {
if err := json.Unmarshal(cfg, &bucketLockConfig); err != nil {
return fmt.Errorf("parse bucket lock config: %w", err)
}
@@ -1291,22 +1152,17 @@ func (az *Azure) GetObjectRetention(ctx context.Context, bucket, object, version
}
func (az *Azure) PutObjectLegalHold(ctx context.Context, bucket, object, versionId string, status bool) error {
contClient, err := az.getContainerClient(bucket)
if err != nil {
return err
}
contProps, err := contClient.GetProperties(ctx, nil)
cfg, err := az.getContainerMetaData(ctx, bucket, string(keyBucketLock))
if err != nil {
return azureErrToS3Err(err)
}
contCfg, ok := contProps.Metadata[string(keyBucketLock)]
if !ok {
if len(cfg) == 0 {
return s3err.GetAPIError(s3err.ErrInvalidBucketObjectLockConfiguration)
}
var bucketLockConfig auth.BucketLockConfig
if err := json.Unmarshal([]byte(*contCfg), &bucketLockConfig); err != nil {
if err := json.Unmarshal(cfg, &bucketLockConfig); err != nil {
return fmt.Errorf("parse bucket lock config: %w", err)
}
@@ -1368,34 +1224,8 @@ func (az *Azure) GetObjectLegalHold(ctx context.Context, bucket, object, version
return &status, nil
}
func (az *Azure) ChangeBucketOwner(ctx context.Context, bucket, newOwner string) error {
client, err := az.getContainerClient(bucket)
if err != nil {
return err
}
props, err := client.GetProperties(ctx, nil)
if err != nil {
return azureErrToS3Err(err)
}
acl, err := getAclFromMetadata(props.Metadata, keyAclCapital)
if err != nil {
return err
}
acl.Owner = newOwner
newAcl, err := json.Marshal(acl)
if err != nil {
return fmt.Errorf("marshal acl: %w", err)
}
err = az.PutBucketAcl(ctx, bucket, newAcl)
if err != nil {
return err
}
return nil
func (az *Azure) ChangeBucketOwner(ctx context.Context, bucket string, acl []byte) error {
return az.PutBucketAcl(ctx, bucket, acl)
}
// The action actually returns the containers owned by the user, who initialized the gateway
@@ -1424,7 +1254,7 @@ func (az *Azure) ListBucketsAndOwners(ctx context.Context) (buckets []s3response
}
func (az *Azure) getContainerURL(cntr string) string {
return fmt.Sprintf("%v/%v", az.serviceURL, cntr)
return fmt.Sprintf("%v/%v", strings.TrimRight(az.serviceURL, "/"), cntr)
}
func (az *Azure) getBlobURL(cntr, blb string) string {
@@ -1553,14 +1383,132 @@ func decodeBlockId(blockID string) (int, error) {
return int(binary.LittleEndian.Uint32(slice)), nil
}
func encodeBytes(b []byte) string {
return base64.StdEncoding.EncodeToString(b)
}
func decodeString(str string) ([]byte, error) {
if str == "" {
return []byte{}, nil
}
decoded, err := base64.StdEncoding.DecodeString(str)
if err != nil {
return nil, err
}
return decoded, nil
}
func (az *Azure) getContainerMetaData(ctx context.Context, bucket, key string) ([]byte, error) {
client, err := az.getContainerClient(bucket)
if err != nil {
return nil, err
}
props, err := client.GetProperties(ctx, nil)
if err != nil {
return nil, azureErrToS3Err(err)
}
if props.Metadata == nil {
return []byte{}, nil
}
data, ok := props.Metadata[key]
if !ok {
return []byte{}, nil
}
value, err := decodeString(*data)
if err != nil {
return nil, err
}
return value, nil
}
func (az *Azure) getContainerMetaDataMap(ctx context.Context, bucket string) (map[string]*string, error) {
client, err := az.getContainerClient(bucket)
if err != nil {
return nil, err
}
props, err := client.GetProperties(ctx, nil)
if err != nil {
return nil, azureErrToS3Err(err)
}
return props.Metadata, nil
}
func (az *Azure) setContainerMetaData(ctx context.Context, bucket, key string, value []byte) error {
client, err := az.getContainerClient(bucket)
if err != nil {
return err
}
props, err := client.GetProperties(ctx, nil)
if err != nil {
return azureErrToS3Err(err)
}
mdmap := props.Metadata
if mdmap == nil {
mdmap = make(map[string]*string)
}
str := encodeBytes(value)
mdmap[key] = backend.GetStringPtr(str)
_, err = client.SetMetadata(ctx, &container.SetMetadataOptions{Metadata: mdmap})
if err != nil {
return err
}
return nil
}
func (az *Azure) deleteContainerMetaData(ctx context.Context, bucket, key string) error {
client, err := az.getContainerClient(bucket)
if err != nil {
return err
}
props, err := client.GetProperties(ctx, nil)
if err != nil {
return azureErrToS3Err(err)
}
mdmap := props.Metadata
if mdmap == nil {
mdmap = make(map[string]*string)
}
delete(mdmap, key)
_, err = client.SetMetadata(ctx, &container.SetMetadataOptions{Metadata: mdmap})
if err != nil {
return err
}
return nil
}
func getAclFromMetadata(meta map[string]*string, key key) (*auth.ACL, error) {
aclPtr, ok := meta[string(key)]
data, ok := meta[string(key)]
if !ok {
return nil, s3err.GetAPIError(s3err.ErrInternalError)
}
value, err := decodeString(*data)
if err != nil {
return nil, err
}
var acl auth.ACL
err := json.Unmarshal([]byte(*aclPtr), &acl)
if len(value) == 0 {
return &acl, nil
}
err = json.Unmarshal(value, &acl)
if err != nil {
return nil, fmt.Errorf("unmarshal acl: %w", err)
}

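The Azure backend now funnels bucket-level state (ACL, policy, tags, ownership, lock config) through the getContainerMetaData/setContainerMetaData helpers, which base64-encode values before storing them as container metadata; the removed code only did this for bucket policies because raw JSON in a metadata value causes an SDK error. A small round-trip sketch of that encoding scheme, with a simplified ACL shape used purely for illustration:

package main

import (
	"encoding/base64"
	"encoding/json"
	"fmt"
)

// Stand-ins for the encodeBytes/decodeString helpers shown in the diff above.
func encodeBytes(b []byte) string { return base64.StdEncoding.EncodeToString(b) }

func decodeString(s string) ([]byte, error) {
	if s == "" {
		return []byte{}, nil
	}
	return base64.StdEncoding.DecodeString(s)
}

func main() {
	// Hypothetical, simplified ACL value; the real gateway stores its own
	// auth.ACL JSON here.
	acl := map[string]string{"Owner": "my-account"}
	raw, _ := json.Marshal(acl)

	stored := encodeBytes(raw) // what actually lands in container metadata
	back, err := decodeString(stored)
	if err != nil {
		panic(err)
	}

	var got map[string]string
	_ = json.Unmarshal(back, &got)
	fmt.Println(stored)       // base64 text, safe as an Azure metadata value
	fmt.Println(got["Owner"]) // my-account
}
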
View File

@@ -48,7 +48,7 @@ type Backend interface {
DeleteBucketOwnershipControls(_ context.Context, bucket string) error
// multipart operations
CreateMultipartUpload(context.Context, *s3.CreateMultipartUploadInput) (*s3.CreateMultipartUploadOutput, error)
CreateMultipartUpload(context.Context, *s3.CreateMultipartUploadInput) (s3response.InitiateMultipartUploadResult, error)
CompleteMultipartUpload(context.Context, *s3.CompleteMultipartUploadInput) (*s3.CompleteMultipartUploadOutput, error)
AbortMultipartUpload(context.Context, *s3.AbortMultipartUploadInput) error
ListMultipartUploads(context.Context, *s3.ListMultipartUploadsInput) (s3response.ListMultipartUploadsResult, error)
@@ -63,8 +63,8 @@ type Backend interface {
GetObjectAcl(context.Context, *s3.GetObjectAclInput) (*s3.GetObjectAclOutput, error)
GetObjectAttributes(context.Context, *s3.GetObjectAttributesInput) (s3response.GetObjectAttributesResult, error)
CopyObject(context.Context, *s3.CopyObjectInput) (*s3.CopyObjectOutput, error)
ListObjects(context.Context, *s3.ListObjectsInput) (*s3.ListObjectsOutput, error)
ListObjectsV2(context.Context, *s3.ListObjectsV2Input) (*s3.ListObjectsV2Output, error)
ListObjects(context.Context, *s3.ListObjectsInput) (s3response.ListObjectsResult, error)
ListObjectsV2(context.Context, *s3.ListObjectsV2Input) (s3response.ListObjectsV2Result, error)
DeleteObject(context.Context, *s3.DeleteObjectInput) error
DeleteObjects(context.Context, *s3.DeleteObjectsInput) (s3response.DeleteResult, error)
PutObjectAcl(context.Context, *s3.PutObjectAclInput) error
@@ -93,7 +93,7 @@ type Backend interface {
GetObjectLegalHold(_ context.Context, bucket, object, versionId string) (*bool, error)
// non AWS actions
ChangeBucketOwner(_ context.Context, bucket, newOwner string) error
ChangeBucketOwner(_ context.Context, bucket string, acl []byte) error
ListBucketsAndOwners(context.Context) ([]s3response.Bucket, error)
}
@@ -151,8 +151,8 @@ func (BackendUnsupported) DeleteBucketOwnershipControls(_ context.Context, bucke
return s3err.GetAPIError(s3err.ErrNotImplemented)
}
func (BackendUnsupported) CreateMultipartUpload(context.Context, *s3.CreateMultipartUploadInput) (*s3.CreateMultipartUploadOutput, error) {
return nil, s3err.GetAPIError(s3err.ErrNotImplemented)
func (BackendUnsupported) CreateMultipartUpload(context.Context, *s3.CreateMultipartUploadInput) (s3response.InitiateMultipartUploadResult, error) {
return s3response.InitiateMultipartUploadResult{}, s3err.GetAPIError(s3err.ErrNotImplemented)
}
func (BackendUnsupported) CompleteMultipartUpload(context.Context, *s3.CompleteMultipartUploadInput) (*s3.CompleteMultipartUploadOutput, error) {
return nil, s3err.GetAPIError(s3err.ErrNotImplemented)
@@ -191,11 +191,11 @@ func (BackendUnsupported) GetObjectAttributes(context.Context, *s3.GetObjectAttr
func (BackendUnsupported) CopyObject(context.Context, *s3.CopyObjectInput) (*s3.CopyObjectOutput, error) {
return nil, s3err.GetAPIError(s3err.ErrNotImplemented)
}
func (BackendUnsupported) ListObjects(context.Context, *s3.ListObjectsInput) (*s3.ListObjectsOutput, error) {
return nil, s3err.GetAPIError(s3err.ErrNotImplemented)
func (BackendUnsupported) ListObjects(context.Context, *s3.ListObjectsInput) (s3response.ListObjectsResult, error) {
return s3response.ListObjectsResult{}, s3err.GetAPIError(s3err.ErrNotImplemented)
}
func (BackendUnsupported) ListObjectsV2(context.Context, *s3.ListObjectsV2Input) (*s3.ListObjectsV2Output, error) {
return nil, s3err.GetAPIError(s3err.ErrNotImplemented)
func (BackendUnsupported) ListObjectsV2(context.Context, *s3.ListObjectsV2Input) (s3response.ListObjectsV2Result, error) {
return s3response.ListObjectsV2Result{}, s3err.GetAPIError(s3err.ErrNotImplemented)
}
func (BackendUnsupported) DeleteObject(context.Context, *s3.DeleteObjectInput) error {
return s3err.GetAPIError(s3err.ErrNotImplemented)
@@ -268,7 +268,7 @@ func (BackendUnsupported) GetObjectLegalHold(_ context.Context, bucket, object,
return nil, s3err.GetAPIError(s3err.ErrNotImplemented)
}
func (BackendUnsupported) ChangeBucketOwner(_ context.Context, bucket, newOwner string) error {
func (BackendUnsupported) ChangeBucketOwner(_ context.Context, bucket string, acl []byte) error {
return s3err.GetAPIError(s3err.ErrNotImplemented)
}
func (BackendUnsupported) ListBucketsAndOwners(context.Context) ([]s3response.Bucket, error) {

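ChangeBucketOwner no longer takes the new owner name; the backend now receives the already-updated ACL bytes and only has to persist them (the Azure implementation above simply delegates to PutBucketAcl), with the owner swap presumably done by the caller beforehand. A stand-in sketch of that call shape, using a simplified ACL type in place of auth.ACL:

package main

import (
	"encoding/json"
	"fmt"
)

// Simplified stand-in for auth.ACL; only the owner matters for this sketch.
type ACL struct {
	Owner string `json:"Owner"`
}

// fakeBackend models the new ChangeBucketOwner(bucket string, acl []byte)
// shape from the interface diff above: it just stores what it is given.
type fakeBackend struct {
	stored map[string][]byte // bucket -> serialized ACL
}

func (b *fakeBackend) ChangeBucketOwner(bucket string, acl []byte) error {
	b.stored[bucket] = acl
	return nil
}

func main() {
	be := &fakeBackend{stored: map[string][]byte{}}

	// The caller updates the owner in the parsed ACL and re-marshals it
	// before handing it to the backend.
	acl := ACL{Owner: "old-owner"}
	acl.Owner = "new-owner"
	data, _ := json.Marshal(acl)

	_ = be.ChangeBucketOwner("my-bucket", data)
	fmt.Println(string(be.stored["my-bucket"])) // {"Owner":"new-owner"}
}
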
View File

@@ -30,11 +30,6 @@ import (
"github.com/versity/versitygw/s3response"
)
var (
// RFC3339TimeFormat RFC3339 time format
RFC3339TimeFormat = "2006-01-02T15:04:05.999Z"
)
func IsValidBucketName(name string) bool { return true }
type ByBucketName []s3response.ListAllMyBucketsEntry

View File

@@ -61,6 +61,10 @@ type Posix struct {
// used to determine if chowning is needed
euid int
egid int
// bucketlinks is a flag to enable symlinks to directories at the top
// level gateway directory to be treated as buckets the same as directories
bucketlinks bool
}
var _ backend.Backend = &Posix{}
@@ -87,8 +91,9 @@ const (
)
type PosixOpts struct {
ChownUID bool
ChownGID bool
ChownUID bool
ChownGID bool
BucketLinks bool
}
func New(rootdir string, meta meta.MetadataStorer, opts PosixOpts) (*Posix, error) {
@@ -103,13 +108,14 @@ func New(rootdir string, meta meta.MetadataStorer, opts PosixOpts) (*Posix, erro
}
return &Posix{
meta: meta,
rootfd: f,
rootdir: rootdir,
euid: os.Geteuid(),
egid: os.Getegid(),
chownuid: opts.ChownUID,
chowngid: opts.ChownGID,
meta: meta,
rootfd: f,
rootdir: rootdir,
euid: os.Geteuid(),
egid: os.Getegid(),
chownuid: opts.ChownUID,
chowngid: opts.ChownGID,
bucketlinks: opts.BucketLinks,
}, nil
}
@@ -130,17 +136,25 @@ func (p *Posix) ListBuckets(_ context.Context, owner string, isAdmin bool) (s3re
var buckets []s3response.ListAllMyBucketsEntry
for _, entry := range entries {
if !entry.IsDir() {
// buckets must be a directory
continue
}
fi, err := entry.Info()
if err != nil {
// skip entries returning errors
continue
}
if p.bucketlinks && entry.Type() == fs.ModeSymlink {
fi, err = os.Stat(entry.Name())
if err != nil {
// skip entries returning errors
continue
}
}
if !fi.IsDir() {
// buckets must be a directory
continue
}
// return all the buckets for admin users
if isAdmin {
buckets = append(buckets, s3response.ListAllMyBucketsEntry{
@@ -151,6 +165,10 @@ func (p *Posix) ListBuckets(_ context.Context, owner string, isAdmin bool) (s3re
}
aclTag, err := p.meta.RetrieveAttribute(entry.Name(), "", aclkey)
if errors.Is(err, meta.ErrNoSuchKey) {
// skip buckets without acl tag
continue
}
if err != nil {
return s3response.ListAllMyBucketsResult{}, fmt.Errorf("get acl tag: %w", err)
}
@@ -363,12 +381,12 @@ func (p *Posix) DeleteBucketOwnershipControls(_ context.Context, bucket string)
return nil
}
func (p *Posix) CreateMultipartUpload(ctx context.Context, mpu *s3.CreateMultipartUploadInput) (*s3.CreateMultipartUploadOutput, error) {
func (p *Posix) CreateMultipartUpload(ctx context.Context, mpu *s3.CreateMultipartUploadInput) (s3response.InitiateMultipartUploadResult, error) {
if mpu.Bucket == nil {
return nil, s3err.GetAPIError(s3err.ErrInvalidBucketName)
return s3response.InitiateMultipartUploadResult{}, s3err.GetAPIError(s3err.ErrInvalidBucketName)
}
if mpu.Key == nil {
return nil, s3err.GetAPIError(s3err.ErrNoSuchKey)
return s3response.InitiateMultipartUploadResult{}, s3err.GetAPIError(s3err.ErrNoSuchKey)
}
bucket := *mpu.Bucket
@@ -376,16 +394,16 @@ func (p *Posix) CreateMultipartUpload(ctx context.Context, mpu *s3.CreateMultipa
_, err := os.Stat(bucket)
if errors.Is(err, fs.ErrNotExist) {
return nil, s3err.GetAPIError(s3err.ErrNoSuchBucket)
return s3response.InitiateMultipartUploadResult{}, s3err.GetAPIError(s3err.ErrNoSuchBucket)
}
if err != nil {
return nil, fmt.Errorf("stat bucket: %w", err)
return s3response.InitiateMultipartUploadResult{}, fmt.Errorf("stat bucket: %w", err)
}
if strings.HasSuffix(*mpu.Key, "/") {
// directory objects can't be uploaded with mutlipart uploads
// because posix directories can't contain data
return nil, s3err.GetAPIError(s3err.ErrDirectoryObjectContainsData)
return s3response.InitiateMultipartUploadResult{}, s3err.GetAPIError(s3err.ErrDirectoryObjectContainsData)
}
// parse object tags
@@ -396,10 +414,10 @@ func (p *Posix) CreateMultipartUpload(ctx context.Context, mpu *s3.CreateMultipa
for _, prt := range tagParts {
p := strings.Split(prt, "=")
if len(p) != 2 {
return nil, s3err.GetAPIError(s3err.ErrInvalidTag)
return s3response.InitiateMultipartUploadResult{}, s3err.GetAPIError(s3err.ErrInvalidTag)
}
if len(p[0]) > 128 || len(p[1]) > 256 {
return nil, s3err.GetAPIError(s3err.ErrInvalidTag)
return s3response.InitiateMultipartUploadResult{}, s3err.GetAPIError(s3err.ErrInvalidTag)
}
tags[p[0]] = p[1]
}
@@ -417,7 +435,7 @@ func (p *Posix) CreateMultipartUpload(ctx context.Context, mpu *s3.CreateMultipa
// associated with this specific multipart upload
err = os.MkdirAll(filepath.Join(tmppath, uploadID), 0755)
if err != nil {
return nil, fmt.Errorf("create upload temp dir: %w", err)
return s3response.InitiateMultipartUploadResult{}, fmt.Errorf("create upload temp dir: %w", err)
}
// set an attribute with the original object name so that we can
@@ -429,7 +447,7 @@ func (p *Posix) CreateMultipartUpload(ctx context.Context, mpu *s3.CreateMultipa
// other uploads for the same object name outstanding
os.RemoveAll(filepath.Join(tmppath, uploadID))
os.Remove(tmppath)
return nil, fmt.Errorf("set name attr for upload: %w", err)
return s3response.InitiateMultipartUploadResult{}, fmt.Errorf("set name attr for upload: %w", err)
}
// set user metadata
@@ -440,7 +458,7 @@ func (p *Posix) CreateMultipartUpload(ctx context.Context, mpu *s3.CreateMultipa
// cleanup object if returning error
os.RemoveAll(filepath.Join(tmppath, uploadID))
os.Remove(tmppath)
return nil, fmt.Errorf("set user attr %q: %w", k, err)
return s3response.InitiateMultipartUploadResult{}, fmt.Errorf("set user attr %q: %w", k, err)
}
}
@@ -451,7 +469,7 @@ func (p *Posix) CreateMultipartUpload(ctx context.Context, mpu *s3.CreateMultipa
// cleanup object if returning error
os.RemoveAll(filepath.Join(tmppath, uploadID))
os.Remove(tmppath)
return nil, err
return s3response.InitiateMultipartUploadResult{}, err
}
}
@@ -463,7 +481,7 @@ func (p *Posix) CreateMultipartUpload(ctx context.Context, mpu *s3.CreateMultipa
// cleanup object if returning error
os.RemoveAll(filepath.Join(tmppath, uploadID))
os.Remove(tmppath)
return nil, fmt.Errorf("set content-type: %w", err)
return s3response.InitiateMultipartUploadResult{}, fmt.Errorf("set content-type: %w", err)
}
}
@@ -473,7 +491,7 @@ func (p *Posix) CreateMultipartUpload(ctx context.Context, mpu *s3.CreateMultipa
// cleanup object if returning error
os.RemoveAll(filepath.Join(tmppath, uploadID))
os.Remove(tmppath)
return nil, err
return s3response.InitiateMultipartUploadResult{}, err
}
}
@@ -488,20 +506,20 @@ func (p *Posix) CreateMultipartUpload(ctx context.Context, mpu *s3.CreateMultipa
// cleanup object if returning error
os.RemoveAll(filepath.Join(tmppath, uploadID))
os.Remove(tmppath)
return nil, fmt.Errorf("parse object lock retention: %w", err)
return s3response.InitiateMultipartUploadResult{}, fmt.Errorf("parse object lock retention: %w", err)
}
if err := p.PutObjectRetention(ctx, bucket, filepath.Join(objdir, uploadID), "", true, retParsed); err != nil {
// cleanup object if returning error
os.RemoveAll(filepath.Join(tmppath, uploadID))
os.Remove(tmppath)
return nil, err
return s3response.InitiateMultipartUploadResult{}, err
}
}
return &s3.CreateMultipartUploadOutput{
Bucket: &bucket,
Key: &object,
UploadId: &uploadID,
return s3response.InitiateMultipartUploadResult{
Bucket: bucket,
Key: object,
UploadId: uploadID,
}, nil
}
@@ -791,20 +809,6 @@ func (p *Posix) loadUserMetaData(bucket, object string, m map[string]string) (st
return contentType, contentEncoding
}
func compareUserMetadata(meta1, meta2 map[string]string) bool {
if len(meta1) != len(meta2) {
return false
}
for key, val := range meta1 {
if meta2[key] != val {
return false
}
}
return true
}
func isValidMeta(val string) bool {
if strings.HasPrefix(val, metaHdr) {
return true
@@ -937,9 +941,10 @@ func (p *Posix) ListMultipartUploads(_ context.Context, mpu *s3.ListMultipartUpl
keyMarkerInd = len(uploads)
}
uploads = append(uploads, s3response.Upload{
Key: objectName,
UploadID: uploadID,
Initiated: fi.ModTime().Format(backend.RFC3339TimeFormat),
Key: objectName,
UploadID: uploadID,
StorageClass: types.StorageClassStandard,
Initiated: fi.ModTime(),
})
}
}
@@ -1084,7 +1089,7 @@ func (p *Posix) ListParts(_ context.Context, input *s3.ListPartsInput) (s3respon
parts = append(parts, s3response.Part{
PartNumber: pn,
ETag: etag,
LastModified: fi.ModTime().Format(backend.RFC3339TimeFormat),
LastModified: fi.ModTime(),
Size: fi.Size(),
})
}
@@ -1116,6 +1121,7 @@ func (p *Posix) ListParts(_ context.Context, input *s3.ListPartsInput) (s3respon
PartNumberMarker: partNumberMarker,
Parts: parts,
UploadID: uploadID,
StorageClass: types.StorageClassStandard,
}, nil
}
@@ -1227,6 +1233,9 @@ func (p *Posix) UploadPartCopy(ctx context.Context, upi *s3.UploadPartCopyInput)
if errors.Is(err, fs.ErrNotExist) {
return s3response.CopyObjectResult{}, s3err.GetAPIError(s3err.ErrNoSuchUpload)
}
if errors.Is(err, syscall.ENAMETOOLONG) {
return s3response.CopyObjectResult{}, s3err.GetAPIError(s3err.ErrKeyTooLong)
}
if err != nil {
return s3response.CopyObjectResult{}, fmt.Errorf("stat uploadid: %w", err)
}
@@ -1254,6 +1263,9 @@ func (p *Posix) UploadPartCopy(ctx context.Context, upi *s3.UploadPartCopyInput)
if errors.Is(err, fs.ErrNotExist) {
return s3response.CopyObjectResult{}, s3err.GetAPIError(s3err.ErrNoSuchKey)
}
if errors.Is(err, syscall.ENAMETOOLONG) {
return s3response.CopyObjectResult{}, s3err.GetAPIError(s3err.ErrKeyTooLong)
}
if err != nil {
return s3response.CopyObjectResult{}, fmt.Errorf("stat object: %w", err)
}
@@ -1409,6 +1421,12 @@ func (p *Posix) PutObject(ctx context.Context, po *s3.PutObjectInput) (string, e
if err == nil && d.IsDir() {
return "", s3err.GetAPIError(s3err.ErrExistingObjectIsDirectory)
}
if errors.Is(err, syscall.ENAMETOOLONG) {
return "", s3err.GetAPIError(s3err.ErrKeyTooLong)
}
if err != nil && !errors.Is(err, fs.ErrNotExist) {
return "", fmt.Errorf("stat object: %w", err)
}
f, err := p.openTmpFile(filepath.Join(*po.Bucket, metaTmpDir),
*po.Bucket, *po.Key, contentLength, acct, doFalloc)
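The syscall.ENAMETOOLONG check is repeated at each stat/open site touched by this change (PutObject, GetObject, HeadObject, CopyObject, UploadPartCopy, and the ScoutFS equivalents) so oversized keys surface as the S3 KeyTooLong error instead of a generic stat failure. A hypothetical helper, not part of this patch, showing the shared pattern used on the lookup paths:

    // hypothetical helper (not in the patch) illustrating the repeated mapping
    func mapObjectStatErr(err error) error {
        switch {
        case errors.Is(err, fs.ErrNotExist):
            return s3err.GetAPIError(s3err.ErrNoSuchKey)
        case errors.Is(err, syscall.ENAMETOOLONG):
            return s3err.GetAPIError(s3err.ErrKeyTooLong)
        default:
            return fmt.Errorf("stat object: %w", err)
        }
    }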
@@ -1509,7 +1527,25 @@ func (p *Posix) DeleteObject(_ context.Context, input *s3.DeleteObjectInput) err
return fmt.Errorf("stat bucket: %w", err)
}
err = os.Remove(filepath.Join(bucket, object))
objpath := filepath.Join(bucket, object)
fi, err := os.Stat(objpath)
if err != nil {
// AWS returns success if the object does not exist or
// is invalid somehow.
// TODO: log if !errors.Is(err, fs.ErrNotExist) somewhere?
return nil
}
if strings.HasSuffix(object, "/") && !fi.IsDir() {
// the request expects a directory object (key ends with a
// trailing slash), but the stored object is not a directory.
// treat this as a non-existent object.
// AWS returns success if the object does not exist
return nil
}
err = os.Remove(objpath)
if errors.Is(err, fs.ErrNotExist) {
return s3err.GetAPIError(s3err.ErrNoSuchKey)
}
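With the stat-first flow above, DeleteObject matches AWS by reporting success for keys that do not exist, including trailing-slash keys that do not resolve to a directory. A minimal sketch, assuming be is this posix backend and the key is absent:

    // sketch: deleting a missing key is a no-op success, matching AWS behavior
    key := "no/such/object"
    err := be.DeleteObject(ctx, &s3.DeleteObjectInput{Bucket: &bucket, Key: &key})
    // err is expected to be nil even though the key was never created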
@@ -1615,14 +1651,22 @@ func (p *Posix) GetObject(_ context.Context, input *s3.GetObjectInput) (*s3.GetO
object := *input.Key
objPath := filepath.Join(bucket, object)
fi, err := os.Stat(objPath)
if errors.Is(err, fs.ErrNotExist) {
return nil, s3err.GetAPIError(s3err.ErrNoSuchKey)
}
if errors.Is(err, syscall.ENAMETOOLONG) {
return nil, s3err.GetAPIError(s3err.ErrKeyTooLong)
}
if err != nil {
return nil, fmt.Errorf("stat object: %w", err)
}
if strings.HasSuffix(object, "/") && !fi.IsDir() {
return nil, s3err.GetAPIError(s3err.ErrNoSuchKey)
}
acceptRange := *input.Range
startOffset, length, err := backend.ParseRange(fi.Size(), acceptRange)
if err != nil {
@@ -1681,6 +1725,7 @@ func (p *Posix) GetObject(_ context.Context, input *s3.GetObjectInput) (*s3.GetO
Metadata: userMetaData,
TagCount: tagCount,
ContentRange: &contentRange,
StorageClass: types.StorageClassStandard,
}, nil
}
@@ -1724,6 +1769,7 @@ func (p *Posix) GetObject(_ context.Context, input *s3.GetObjectInput) (*s3.GetO
Metadata: userMetaData,
TagCount: tagCount,
ContentRange: &contentRange,
StorageClass: types.StorageClassStandard,
Body: &backend.FileSectionReadCloser{R: rdr, F: f},
}, nil
}
@@ -1758,6 +1804,9 @@ func (p *Posix) HeadObject(ctx context.Context, input *s3.HeadObjectInput) (*s3.
if errors.Is(err, fs.ErrNotExist) {
return nil, s3err.GetAPIError(s3err.ErrInvalidPart)
}
if errors.Is(err, syscall.ENAMETOOLONG) {
return nil, s3err.GetAPIError(s3err.ErrKeyTooLong)
}
if err != nil {
return nil, fmt.Errorf("stat part: %w", err)
}
@@ -1775,6 +1824,7 @@ func (p *Posix) HeadObject(ctx context.Context, input *s3.HeadObjectInput) (*s3.
ETag: &etag,
PartsCount: &partsCount,
ContentLength: &size,
StorageClass: types.StorageClassStandard,
}, nil
}
@@ -1787,13 +1837,20 @@ func (p *Posix) HeadObject(ctx context.Context, input *s3.HeadObjectInput) (*s3.
}
objPath := filepath.Join(bucket, object)
fi, err := os.Stat(objPath)
if errors.Is(err, fs.ErrNotExist) {
return nil, s3err.GetAPIError(s3err.ErrNoSuchKey)
}
if errors.Is(err, syscall.ENAMETOOLONG) {
return nil, s3err.GetAPIError(s3err.ErrKeyTooLong)
}
if err != nil {
return nil, fmt.Errorf("stat object: %w", err)
}
if strings.HasSuffix(object, "/") && !fi.IsDir() {
return nil, s3err.GetAPIError(s3err.ErrNoSuchKey)
}
userMetaData := make(map[string]string)
contentType, contentEncoding := p.loadUserMetaData(bucket, object, userMetaData)
@@ -1844,6 +1901,7 @@ func (p *Posix) HeadObject(ctx context.Context, input *s3.HeadObjectInput) (*s3.
ObjectLockLegalHoldStatus: objectLockLegalHoldStatus,
ObjectLockMode: objectLockMode,
ObjectLockRetainUntilDate: objectLockRetainUntilDate,
StorageClass: types.StorageClassStandard,
}, nil
}
@@ -1857,7 +1915,7 @@ func (p *Posix) GetObjectAttributes(ctx context.Context, input *s3.GetObjectAttr
ETag: data.ETag,
LastModified: data.LastModified,
ObjectSize: data.ContentLength,
StorageClass: &data.StorageClass,
StorageClass: data.StorageClass,
VersionId: data.VersionId,
}, nil
}
@@ -1908,6 +1966,7 @@ func (p *Posix) GetObjectAttributes(ctx context.Context, input *s3.GetObjectAttr
NextPartNumberMarker: resp.NextPartNumberMarker,
Parts: parts,
},
StorageClass: types.StorageClassStandard,
}, nil
}
@@ -1952,55 +2011,66 @@ func (p *Posix) CopyObject(ctx context.Context, input *s3.CopyObjectInput) (*s3.
if errors.Is(err, fs.ErrNotExist) {
return nil, s3err.GetAPIError(s3err.ErrNoSuchKey)
}
if errors.Is(err, syscall.ENAMETOOLONG) {
return nil, s3err.GetAPIError(s3err.ErrKeyTooLong)
}
if err != nil {
return nil, fmt.Errorf("stat object: %w", err)
return nil, fmt.Errorf("open object: %w", err)
}
defer f.Close()
fInfo, err := f.Stat()
fi, err := f.Stat()
if err != nil {
return nil, fmt.Errorf("stat object: %w", err)
}
if strings.HasSuffix(srcObject, "/") && !fi.IsDir() {
return nil, s3err.GetAPIError(s3err.ErrNoSuchKey)
}
meta := make(map[string]string)
p.loadUserMetaData(srcBucket, srcObject, meta)
mdmap := make(map[string]string)
p.loadUserMetaData(srcBucket, srcObject, mdmap)
var etag string
dstObjdPath := filepath.Join(dstBucket, dstObject)
if dstObjdPath == objPath {
if compareUserMetadata(meta, input.Metadata) {
if input.MetadataDirective == types.MetadataDirectiveCopy {
return &s3.CopyObjectOutput{}, s3err.GetAPIError(s3err.ErrInvalidCopyDest)
} else {
for key := range meta {
err := p.meta.DeleteAttribute(dstBucket, dstObject, key)
if err != nil {
return nil, fmt.Errorf("delete user metadata: %w", err)
}
}
for key := range mdmap {
err := p.meta.DeleteAttribute(dstBucket, dstObject, key)
if err != nil && !errors.Is(err, meta.ErrNoSuchKey) {
return nil, fmt.Errorf("delete user metadata: %w", err)
}
for k, v := range input.Metadata {
err := p.meta.StoreAttribute(dstBucket, dstObject,
fmt.Sprintf("%v.%v", metaHdr, k), []byte(v))
if err != nil {
return nil, fmt.Errorf("set user attr %q: %w", k, err)
}
}
for k, v := range input.Metadata {
err := p.meta.StoreAttribute(dstBucket, dstObject,
fmt.Sprintf("%v.%v", metaHdr, k), []byte(v))
if err != nil {
return nil, fmt.Errorf("set user attr %q: %w", k, err)
}
}
b, _ := p.meta.RetrieveAttribute(dstBucket, dstObject, etagkey)
etag = string(b)
} else {
contentLength := fi.Size()
etag, err = p.PutObject(ctx,
&s3.PutObjectInput{
Bucket: &dstBucket,
Key: &dstObject,
Body: f,
ContentLength: &contentLength,
Metadata: input.Metadata,
})
if err != nil {
return nil, err
}
}
contentLength := fInfo.Size()
etag, err := p.PutObject(ctx,
&s3.PutObjectInput{
Bucket: &dstBucket,
Key: &dstObject,
Body: f,
ContentLength: &contentLength,
Metadata: meta,
})
if err != nil {
return nil, err
}
fi, err := os.Stat(dstObjdPath)
fi, err = os.Stat(dstObjdPath)
if err != nil {
return nil, fmt.Errorf("stat dst object: %w", err)
}
@@ -2013,9 +2083,9 @@ func (p *Posix) CopyObject(ctx context.Context, input *s3.CopyObjectInput) (*s3.
}, nil
}
func (p *Posix) ListObjects(_ context.Context, input *s3.ListObjectsInput) (*s3.ListObjectsOutput, error) {
func (p *Posix) ListObjects(ctx context.Context, input *s3.ListObjectsInput) (s3response.ListObjectsResult, error) {
if input.Bucket == nil {
return nil, s3err.GetAPIError(s3err.ErrInvalidBucketName)
return s3response.ListObjectsResult{}, s3err.GetAPIError(s3err.ErrInvalidBucketName)
}
bucket := *input.Bucket
prefix := ""
@@ -2037,20 +2107,20 @@ func (p *Posix) ListObjects(_ context.Context, input *s3.ListObjectsInput) (*s3.
_, err := os.Stat(bucket)
if errors.Is(err, fs.ErrNotExist) {
return nil, s3err.GetAPIError(s3err.ErrNoSuchBucket)
return s3response.ListObjectsResult{}, s3err.GetAPIError(s3err.ErrNoSuchBucket)
}
if err != nil {
return nil, fmt.Errorf("stat bucket: %w", err)
return s3response.ListObjectsResult{}, fmt.Errorf("stat bucket: %w", err)
}
fileSystem := os.DirFS(bucket)
results, err := backend.Walk(fileSystem, prefix, delim, marker, maxkeys,
results, err := backend.Walk(ctx, fileSystem, prefix, delim, marker, maxkeys,
p.fileToObj(bucket), []string{metaTmpDir})
if err != nil {
return nil, fmt.Errorf("walk %v: %w", bucket, err)
return s3response.ListObjectsResult{}, fmt.Errorf("walk %v: %w", bucket, err)
}
return &s3.ListObjectsOutput{
return s3response.ListObjectsResult{
CommonPrefixes: results.CommonPrefixes,
Contents: results.Objects,
Delimiter: &delim,
@@ -2064,43 +2134,46 @@ func (p *Posix) ListObjects(_ context.Context, input *s3.ListObjectsInput) (*s3.
}
func (p *Posix) fileToObj(bucket string) backend.GetObjFunc {
return func(path string, d fs.DirEntry) (types.Object, error) {
return func(path string, d fs.DirEntry) (s3response.Object, error) {
if d.IsDir() {
// directory object only happens if directory empty
// check to see if this is a directory object by checking etag
etagBytes, err := p.meta.RetrieveAttribute(bucket, path, etagkey)
if errors.Is(err, meta.ErrNoSuchKey) || errors.Is(err, fs.ErrNotExist) {
return types.Object{}, backend.ErrSkipObj
return s3response.Object{}, backend.ErrSkipObj
}
if err != nil {
return types.Object{}, fmt.Errorf("get etag: %w", err)
return s3response.Object{}, fmt.Errorf("get etag: %w", err)
}
etag := string(etagBytes)
fi, err := d.Info()
if errors.Is(err, fs.ErrNotExist) {
return types.Object{}, backend.ErrSkipObj
return s3response.Object{}, backend.ErrSkipObj
}
if err != nil {
return types.Object{}, fmt.Errorf("get fileinfo: %w", err)
return s3response.Object{}, fmt.Errorf("get fileinfo: %w", err)
}
key := path + "/"
size := int64(0)
mtime := fi.ModTime()
return types.Object{
return s3response.Object{
ETag: &etag,
Key: &key,
LastModified: backend.GetTimePtr(fi.ModTime()),
Key: &path,
LastModified: &mtime,
Size: &size,
StorageClass: types.ObjectStorageClassStandard,
}, nil
}
// file object, get object info and fill out object data
etagBytes, err := p.meta.RetrieveAttribute(bucket, path, etagkey)
if errors.Is(err, fs.ErrNotExist) {
return types.Object{}, backend.ErrSkipObj
return s3response.Object{}, backend.ErrSkipObj
}
if err != nil && !errors.Is(err, meta.ErrNoSuchKey) {
return types.Object{}, fmt.Errorf("get etag: %w", err)
return s3response.Object{}, fmt.Errorf("get etag: %w", err)
}
// note: meta.ErrNoSuchKey will return etagBytes = []byte{}
// so this will just set etag to "" if it's not already set
@@ -2109,26 +2182,28 @@ func (p *Posix) fileToObj(bucket string) backend.GetObjFunc {
fi, err := d.Info()
if errors.Is(err, fs.ErrNotExist) {
return types.Object{}, backend.ErrSkipObj
return s3response.Object{}, backend.ErrSkipObj
}
if err != nil {
return types.Object{}, fmt.Errorf("get fileinfo: %w", err)
return s3response.Object{}, fmt.Errorf("get fileinfo: %w", err)
}
size := fi.Size()
mtime := fi.ModTime()
return types.Object{
return s3response.Object{
ETag: &etag,
Key: &path,
LastModified: backend.GetTimePtr(fi.ModTime()),
LastModified: &mtime,
Size: &size,
StorageClass: types.ObjectStorageClassStandard,
}, nil
}
}
func (p *Posix) ListObjectsV2(_ context.Context, input *s3.ListObjectsV2Input) (*s3.ListObjectsV2Output, error) {
func (p *Posix) ListObjectsV2(ctx context.Context, input *s3.ListObjectsV2Input) (s3response.ListObjectsV2Result, error) {
if input.Bucket == nil {
return nil, s3err.GetAPIError(s3err.ErrInvalidBucketName)
return s3response.ListObjectsV2Result{}, s3err.GetAPIError(s3err.ErrInvalidBucketName)
}
bucket := *input.Bucket
prefix := ""
@@ -2158,22 +2233,22 @@ func (p *Posix) ListObjectsV2(_ context.Context, input *s3.ListObjectsV2Input) (
_, err := os.Stat(bucket)
if errors.Is(err, fs.ErrNotExist) {
return nil, s3err.GetAPIError(s3err.ErrNoSuchBucket)
return s3response.ListObjectsV2Result{}, s3err.GetAPIError(s3err.ErrNoSuchBucket)
}
if err != nil {
return nil, fmt.Errorf("stat bucket: %w", err)
return s3response.ListObjectsV2Result{}, fmt.Errorf("stat bucket: %w", err)
}
fileSystem := os.DirFS(bucket)
results, err := backend.Walk(fileSystem, prefix, delim, marker, maxkeys,
results, err := backend.Walk(ctx, fileSystem, prefix, delim, marker, maxkeys,
p.fileToObj(bucket), []string{metaTmpDir})
if err != nil {
return nil, fmt.Errorf("walk %v: %w", bucket, err)
return s3response.ListObjectsV2Result{}, fmt.Errorf("walk %v: %w", bucket, err)
}
count := int32(len(results.Objects))
return &s3.ListObjectsV2Output{
return s3response.ListObjectsV2Result{
CommonPrefixes: results.CommonPrefixes,
Contents: results.Objects,
Delimiter: &delim,
@@ -2619,39 +2694,8 @@ func (p *Posix) GetObjectRetention(_ context.Context, bucket, object, versionId
return data, nil
}
func (p *Posix) ChangeBucketOwner(ctx context.Context, bucket, newOwner string) error {
_, err := os.Stat(bucket)
if errors.Is(err, fs.ErrNotExist) {
return s3err.GetAPIError(s3err.ErrNoSuchBucket)
}
if err != nil {
return fmt.Errorf("stat bucket: %w", err)
}
aclTag, err := p.meta.RetrieveAttribute(bucket, "", aclkey)
if err != nil {
return fmt.Errorf("get acl: %w", err)
}
var acl auth.ACL
err = json.Unmarshal(aclTag, &acl)
if err != nil {
return fmt.Errorf("unmarshal acl: %w", err)
}
acl.Owner = newOwner
newAcl, err := json.Marshal(acl)
if err != nil {
return fmt.Errorf("marshal acl: %w", err)
}
err = p.meta.StoreAttribute(bucket, "", aclkey, newAcl)
if err != nil {
return fmt.Errorf("set acl: %w", err)
}
return nil
func (p *Posix) ChangeBucketOwner(ctx context.Context, bucket string, acl []byte) error {
return p.PutBucketAcl(ctx, bucket, acl)
}
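ChangeBucketOwner now receives the marshaled ACL and delegates to PutBucketAcl rather than rewriting the owner attribute itself. A hedged sketch of constructing that argument, assuming auth.ACL keeps its Owner field:

    // sketch: build the ACL bytes expected by the new signature
    acl, err := json.Marshal(auth.ACL{Owner: newOwner})
    if err != nil {
        return fmt.Errorf("marshal acl: %w", err)
    }
    if err := be.ChangeBucketOwner(ctx, bucket, acl); err != nil {
        return fmt.Errorf("change bucket owner: %w", err)
    }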
func (p *Posix) ListBucketsAndOwners(ctx context.Context) (buckets []s3response.Bucket, err error) {
@@ -2671,14 +2715,16 @@ func (p *Posix) ListBucketsAndOwners(ctx context.Context) (buckets []s3response.
}
aclTag, err := p.meta.RetrieveAttribute(entry.Name(), "", aclkey)
if err != nil {
if err != nil && !errors.Is(err, meta.ErrNoSuchKey) {
return buckets, fmt.Errorf("get acl tag: %w", err)
}
var acl auth.ACL
err = json.Unmarshal(aclTag, &acl)
if err != nil {
return buckets, fmt.Errorf("parse acl tag: %w", err)
if len(aclTag) > 0 {
err = json.Unmarshal(aclTag, &acl)
if err != nil {
return buckets, fmt.Errorf("parse acl tag: %w", err)
}
}
buckets = append(buckets, s3response.Bucket{


@@ -161,9 +161,17 @@ func (s *S3Proxy) DeleteBucketOwnershipControls(ctx context.Context, bucket stri
return handleError(err)
}
func (s *S3Proxy) CreateMultipartUpload(ctx context.Context, input *s3.CreateMultipartUploadInput) (*s3.CreateMultipartUploadOutput, error) {
func (s *S3Proxy) CreateMultipartUpload(ctx context.Context, input *s3.CreateMultipartUploadInput) (s3response.InitiateMultipartUploadResult, error) {
out, err := s.client.CreateMultipartUpload(ctx, input)
return out, handleError(err)
if err != nil {
return s3response.InitiateMultipartUploadResult{}, handleError(err)
}
return s3response.InitiateMultipartUploadResult{
Bucket: *out.Bucket,
Key: *out.Key,
UploadId: *out.UploadId,
}, nil
}
func (s *S3Proxy) CompleteMultipartUpload(ctx context.Context, input *s3.CompleteMultipartUploadInput) (*s3.CompleteMultipartUploadOutput, error) {
@@ -195,8 +203,8 @@ func (s *S3Proxy) ListMultipartUploads(ctx context.Context, input *s3.ListMultip
ID: *u.Owner.ID,
DisplayName: *u.Owner.DisplayName,
},
StorageClass: string(u.StorageClass),
Initiated: u.Initiated.Format(backend.RFC3339TimeFormat),
StorageClass: u.StorageClass,
Initiated: *u.Initiated,
})
}
@@ -233,7 +241,7 @@ func (s *S3Proxy) ListParts(ctx context.Context, input *s3.ListPartsInput) (s3re
for _, p := range output.Parts {
parts = append(parts, s3response.Part{
PartNumber: int(*p.PartNumber),
LastModified: p.LastModified.Format(backend.RFC3339TimeFormat),
LastModified: *p.LastModified,
ETag: *p.ETag,
Size: *p.Size,
})
@@ -262,7 +270,7 @@ func (s *S3Proxy) ListParts(ctx context.Context, input *s3.ListPartsInput) (s3re
ID: *output.Owner.ID,
DisplayName: *output.Owner.DisplayName,
},
StorageClass: string(output.StorageClass),
StorageClass: output.StorageClass,
PartNumberMarker: pnm,
NextPartNumberMarker: npmn,
MaxParts: int(*output.MaxParts),
@@ -354,7 +362,7 @@ func (s *S3Proxy) GetObjectAttributes(ctx context.Context, input *s3.GetObjectAt
ETag: out.ETag,
LastModified: out.LastModified,
ObjectSize: out.ObjectSize,
StorageClass: &out.StorageClass,
StorageClass: out.StorageClass,
VersionId: out.VersionId,
ObjectParts: &parts,
}, handleError(err)
@@ -365,14 +373,47 @@ func (s *S3Proxy) CopyObject(ctx context.Context, input *s3.CopyObjectInput) (*s
return out, handleError(err)
}
func (s *S3Proxy) ListObjects(ctx context.Context, input *s3.ListObjectsInput) (*s3.ListObjectsOutput, error) {
func (s *S3Proxy) ListObjects(ctx context.Context, input *s3.ListObjectsInput) (s3response.ListObjectsResult, error) {
out, err := s.client.ListObjects(ctx, input)
return out, handleError(err)
if err != nil {
return s3response.ListObjectsResult{}, handleError(err)
}
contents := convertObjects(out.Contents)
return s3response.ListObjectsResult{
CommonPrefixes: out.CommonPrefixes,
Contents: contents,
Delimiter: out.Delimiter,
IsTruncated: out.IsTruncated,
Marker: out.Marker,
MaxKeys: out.MaxKeys,
Name: out.Name,
NextMarker: out.NextMarker,
Prefix: out.Prefix,
}, nil
}
func (s *S3Proxy) ListObjectsV2(ctx context.Context, input *s3.ListObjectsV2Input) (*s3.ListObjectsV2Output, error) {
func (s *S3Proxy) ListObjectsV2(ctx context.Context, input *s3.ListObjectsV2Input) (s3response.ListObjectsV2Result, error) {
out, err := s.client.ListObjectsV2(ctx, input)
return out, handleError(err)
if err != nil {
return s3response.ListObjectsV2Result{}, handleError(err)
}
contents := convertObjects(out.Contents)
return s3response.ListObjectsV2Result{
CommonPrefixes: out.CommonPrefixes,
Contents: contents,
Delimiter: out.Delimiter,
IsTruncated: out.IsTruncated,
ContinuationToken: out.ContinuationToken,
MaxKeys: out.MaxKeys,
Name: out.Name,
NextContinuationToken: out.NextContinuationToken,
Prefix: out.Prefix,
KeyCount: out.KeyCount,
}, nil
}
func (s *S3Proxy) DeleteObject(ctx context.Context, input *s3.DeleteObjectInput) error {
@@ -619,8 +660,12 @@ func (s *S3Proxy) GetObjectLegalHold(ctx context.Context, bucket, object, versio
return &status, nil
}
func (s *S3Proxy) ChangeBucketOwner(ctx context.Context, bucket, newOwner string) error {
req, err := http.NewRequest(http.MethodPatch, fmt.Sprintf("%v/change-bucket-owner/?bucket=%v&owner=%v", s.endpoint, bucket, newOwner), nil)
func (s *S3Proxy) ChangeBucketOwner(ctx context.Context, bucket string, acl []byte) error {
var acll auth.ACL
if err := json.Unmarshal(acl, &acll); err != nil {
return fmt.Errorf("unmarshal acl: %w", err)
}
req, err := http.NewRequest(http.MethodPatch, fmt.Sprintf("%v/change-bucket-owner/?bucket=%v&owner=%v", s.endpoint, bucket, acll.Owner), nil)
if err != nil {
return fmt.Errorf("failed to send the request: %w", err)
}
@@ -650,7 +695,7 @@ func (s *S3Proxy) ChangeBucketOwner(ctx context.Context, bucket, newOwner string
return err
}
defer resp.Body.Close()
return fmt.Errorf(string(body))
return fmt.Errorf("%v", string(body))
}
return nil
@@ -726,3 +771,21 @@ func base64Decode(encoded string) ([]byte, error) {
}
return decoded, nil
}
func convertObjects(objs []types.Object) []s3response.Object {
// allocate capacity only; a non-zero length here would leave len(objs) zero-value entries before the appended ones
result := make([]s3response.Object, 0, len(objs))
for _, obj := range objs {
result = append(result, s3response.Object{
ETag: obj.ETag,
Key: obj.Key,
LastModified: obj.LastModified,
Owner: obj.Owner,
Size: obj.Size,
RestoreStatus: obj.RestoreStatus,
StorageClass: obj.StorageClass,
})
}
return result
}


@@ -36,12 +36,14 @@ import (
"github.com/versity/versitygw/backend/meta"
"github.com/versity/versitygw/backend/posix"
"github.com/versity/versitygw/s3err"
"github.com/versity/versitygw/s3response"
)
type ScoutfsOpts struct {
ChownUID bool
ChownGID bool
GlacierMode bool
BucketLinks bool
}
type ScoutFS struct {
@@ -457,6 +459,9 @@ func (s *ScoutFS) HeadObject(ctx context.Context, input *s3.HeadObjectInput) (*s
if errors.Is(err, fs.ErrNotExist) {
return nil, s3err.GetAPIError(s3err.ErrInvalidPart)
}
if errors.Is(err, syscall.ENAMETOOLONG) {
return nil, s3err.GetAPIError(s3err.ErrKeyTooLong)
}
if err != nil {
return nil, fmt.Errorf("stat part: %w", err)
}
@@ -486,13 +491,20 @@ func (s *ScoutFS) HeadObject(ctx context.Context, input *s3.HeadObjectInput) (*s
}
objPath := filepath.Join(bucket, object)
fi, err := os.Stat(objPath)
if errors.Is(err, fs.ErrNotExist) {
return nil, s3err.GetAPIError(s3err.ErrNoSuchKey)
}
if errors.Is(err, syscall.ENAMETOOLONG) {
return nil, s3err.GetAPIError(s3err.ErrKeyTooLong)
}
if err != nil {
return nil, fmt.Errorf("stat object: %w", err)
}
if strings.HasSuffix(object, "/") && !fi.IsDir() {
return nil, s3err.GetAPIError(s3err.ErrNoSuchKey)
}
userMetaData := make(map[string]string)
contentType, contentEncoding := s.loadUserMetaData(bucket, object, userMetaData)
@@ -603,14 +615,22 @@ func (s *ScoutFS) GetObject(_ context.Context, input *s3.GetObjectInput) (*s3.Ge
}
objPath := filepath.Join(bucket, object)
fi, err := os.Stat(objPath)
if errors.Is(err, fs.ErrNotExist) {
return nil, s3err.GetAPIError(s3err.ErrNoSuchKey)
}
if errors.Is(err, syscall.ENAMETOOLONG) {
return nil, s3err.GetAPIError(s3err.ErrKeyTooLong)
}
if err != nil {
return nil, fmt.Errorf("stat object: %w", err)
}
if strings.HasSuffix(object, "/") && !fi.IsDir() {
return nil, s3err.GetAPIError(s3err.ErrNoSuchKey)
}
startOffset, length, err := backend.ParseRange(fi.Size(), acceptRange)
if err != nil {
return nil, err
@@ -714,9 +734,9 @@ func (s *ScoutFS) getXattrTags(bucket, object string) (map[string]string, error)
return tags, nil
}
func (s *ScoutFS) ListObjects(_ context.Context, input *s3.ListObjectsInput) (*s3.ListObjectsOutput, error) {
func (s *ScoutFS) ListObjects(ctx context.Context, input *s3.ListObjectsInput) (s3response.ListObjectsResult, error) {
if input.Bucket == nil {
return nil, s3err.GetAPIError(s3err.ErrInvalidBucketName)
return s3response.ListObjectsResult{}, s3err.GetAPIError(s3err.ErrInvalidBucketName)
}
bucket := *input.Bucket
prefix := ""
@@ -738,20 +758,20 @@ func (s *ScoutFS) ListObjects(_ context.Context, input *s3.ListObjectsInput) (*s
_, err := os.Stat(bucket)
if errors.Is(err, fs.ErrNotExist) {
return nil, s3err.GetAPIError(s3err.ErrNoSuchBucket)
return s3response.ListObjectsResult{}, s3err.GetAPIError(s3err.ErrNoSuchBucket)
}
if err != nil {
return nil, fmt.Errorf("stat bucket: %w", err)
return s3response.ListObjectsResult{}, fmt.Errorf("stat bucket: %w", err)
}
fileSystem := os.DirFS(bucket)
results, err := backend.Walk(fileSystem, prefix, delim, marker, maxkeys,
results, err := backend.Walk(ctx, fileSystem, prefix, delim, marker, maxkeys,
s.fileToObj(bucket), []string{metaTmpDir})
if err != nil {
return nil, fmt.Errorf("walk %v: %w", bucket, err)
return s3response.ListObjectsResult{}, fmt.Errorf("walk %v: %w", bucket, err)
}
return &s3.ListObjectsOutput{
return s3response.ListObjectsResult{
CommonPrefixes: results.CommonPrefixes,
Contents: results.Objects,
Delimiter: &delim,
@@ -764,9 +784,9 @@ func (s *ScoutFS) ListObjects(_ context.Context, input *s3.ListObjectsInput) (*s
}, nil
}
func (s *ScoutFS) ListObjectsV2(_ context.Context, input *s3.ListObjectsV2Input) (*s3.ListObjectsV2Output, error) {
func (s *ScoutFS) ListObjectsV2(ctx context.Context, input *s3.ListObjectsV2Input) (s3response.ListObjectsV2Result, error) {
if input.Bucket == nil {
return nil, s3err.GetAPIError(s3err.ErrInvalidBucketName)
return s3response.ListObjectsV2Result{}, s3err.GetAPIError(s3err.ErrInvalidBucketName)
}
bucket := *input.Bucket
prefix := ""
@@ -788,20 +808,20 @@ func (s *ScoutFS) ListObjectsV2(_ context.Context, input *s3.ListObjectsV2Input)
_, err := os.Stat(bucket)
if errors.Is(err, fs.ErrNotExist) {
return nil, s3err.GetAPIError(s3err.ErrNoSuchBucket)
return s3response.ListObjectsV2Result{}, s3err.GetAPIError(s3err.ErrNoSuchBucket)
}
if err != nil {
return nil, fmt.Errorf("stat bucket: %w", err)
return s3response.ListObjectsV2Result{}, fmt.Errorf("stat bucket: %w", err)
}
fileSystem := os.DirFS(bucket)
results, err := backend.Walk(fileSystem, prefix, delim, marker, int32(maxkeys),
results, err := backend.Walk(ctx, fileSystem, prefix, delim, marker, int32(maxkeys),
s.fileToObj(bucket), []string{metaTmpDir})
if err != nil {
return nil, fmt.Errorf("walk %v: %w", bucket, err)
return s3response.ListObjectsV2Result{}, fmt.Errorf("walk %v: %w", bucket, err)
}
return &s3.ListObjectsV2Output{
return s3response.ListObjectsV2Result{
CommonPrefixes: results.CommonPrefixes,
Contents: results.Objects,
Delimiter: &delim,
@@ -815,50 +835,58 @@ func (s *ScoutFS) ListObjectsV2(_ context.Context, input *s3.ListObjectsV2Input)
}
func (s *ScoutFS) fileToObj(bucket string) backend.GetObjFunc {
return func(path string, d fs.DirEntry) (types.Object, error) {
return func(path string, d fs.DirEntry) (s3response.Object, error) {
objPath := filepath.Join(bucket, path)
if d.IsDir() {
// directory object only happens if directory empty
// check to see if this is a directory object by checking etag
b, err := s.meta.RetrieveAttribute(bucket, path, etagkey)
if err != nil {
return types.Object{}, fmt.Errorf("get etag: %w", err)
etagBytes, err := s.meta.RetrieveAttribute(bucket, path, etagkey)
if errors.Is(err, meta.ErrNoSuchKey) || errors.Is(err, fs.ErrNotExist) {
return s3response.Object{}, backend.ErrSkipObj
}
etag := string(b)
if err != nil {
return s3response.Object{}, fmt.Errorf("get etag: %w", err)
}
etag := string(etagBytes)
fi, err := d.Info()
if errors.Is(err, fs.ErrNotExist) {
return types.Object{}, backend.ErrSkipObj
return s3response.Object{}, backend.ErrSkipObj
}
if err != nil {
return types.Object{}, fmt.Errorf("get fileinfo: %w", err)
return s3response.Object{}, fmt.Errorf("get fileinfo: %w", err)
}
key := path + "/"
mtime := fi.ModTime()
return types.Object{
return s3response.Object{
ETag: &etag,
Key: &key,
LastModified: backend.GetTimePtr(fi.ModTime()),
LastModified: &mtime,
StorageClass: types.ObjectStorageClassStandard,
}, nil
}
// file object, get object info and fill out object data
b, err := s.meta.RetrieveAttribute(bucket, path, etagkey)
if errors.Is(err, fs.ErrNotExist) {
return types.Object{}, backend.ErrSkipObj
return s3response.Object{}, backend.ErrSkipObj
}
if err != nil {
return types.Object{}, fmt.Errorf("get etag: %w", err)
if err != nil && !errors.Is(err, meta.ErrNoSuchKey) {
return s3response.Object{}, fmt.Errorf("get etag: %w", err)
}
// note: meta.ErrNoSuchKey will return etagBytes = []byte{}
// so this will just set etag to "" if it's not already set
etag := string(b)
fi, err := d.Info()
if errors.Is(err, fs.ErrNotExist) {
return types.Object{}, backend.ErrSkipObj
return s3response.Object{}, backend.ErrSkipObj
}
if err != nil {
return types.Object{}, fmt.Errorf("get fileinfo: %w", err)
return s3response.Object{}, fmt.Errorf("get fileinfo: %w", err)
}
sc := types.ObjectStorageClassStandard
@@ -867,10 +895,10 @@ func (s *ScoutFS) fileToObj(bucket string) backend.GetObjFunc {
// If so, we will return the InvalidObjectState error.
st, err := statMore(objPath)
if errors.Is(err, fs.ErrNotExist) {
return types.Object{}, backend.ErrSkipObj
return s3response.Object{}, backend.ErrSkipObj
}
if err != nil {
return types.Object{}, fmt.Errorf("stat more: %w", err)
return s3response.Object{}, fmt.Errorf("stat more: %w", err)
}
if st.Offline_blocks != 0 {
sc = types.ObjectStorageClassGlacier
@@ -878,11 +906,12 @@ func (s *ScoutFS) fileToObj(bucket string) backend.GetObjFunc {
}
size := fi.Size()
mtime := fi.ModTime()
return types.Object{
return s3response.Object{
ETag: &etag,
Key: &path,
LastModified: backend.GetTimePtr(fi.ModTime()),
LastModified: &mtime,
Size: &size,
StorageClass: sc,
}, nil


@@ -37,8 +37,9 @@ func New(rootdir string, opts ScoutfsOpts) (*ScoutFS, error) {
metastore := meta.XattrMeta{}
p, err := posix.New(rootdir, metastore, posix.PosixOpts{
ChownUID: opts.ChownUID,
ChownGID: opts.ChownGID,
ChownUID: opts.ChownUID,
ChownGID: opts.ChownGID,
BucketLinks: opts.BucketLinks,
})
if err != nil {
return nil, err


@@ -15,6 +15,7 @@
package backend
import (
"context"
"errors"
"fmt"
"io/fs"
@@ -23,24 +24,25 @@ import (
"strings"
"github.com/aws/aws-sdk-go-v2/service/s3/types"
"github.com/versity/versitygw/s3response"
)
type WalkResults struct {
CommonPrefixes []types.CommonPrefix
Objects []types.Object
Objects []s3response.Object
Truncated bool
NextMarker string
}
type GetObjFunc func(path string, d fs.DirEntry) (types.Object, error)
type GetObjFunc func(path string, d fs.DirEntry) (s3response.Object, error)
var ErrSkipObj = errors.New("skip this object")
// Walk walks the supplied fs.FS and returns results compatible with list
// objects responses
func Walk(fileSystem fs.FS, prefix, delimiter, marker string, max int32, getObj GetObjFunc, skipdirs []string) (WalkResults, error) {
func Walk(ctx context.Context, fileSystem fs.FS, prefix, delimiter, marker string, max int32, getObj GetObjFunc, skipdirs []string) (WalkResults, error) {
cpmap := make(map[string]struct{})
var objects []types.Object
var objects []s3response.Object
var pastMarker bool
if marker == "" {
@@ -55,6 +57,9 @@ func Walk(fileSystem fs.FS, prefix, delimiter, marker string, max int32, getObj
if err != nil {
return err
}
if ctx.Err() != nil {
return ctx.Err()
}
// Ignore the root directory
if path == "." {
return nil
@@ -91,7 +96,10 @@ func Walk(fileSystem fs.FS, prefix, delimiter, marker string, max int32, getObj
if err != nil {
return fmt.Errorf("readdir %q: %w", path, err)
}
if len(ents) == 0 {
path += string(os.PathSeparator)
if len(ents) == 0 && delimiter == "" {
dirobj, err := getObj(path, d)
if err == ErrSkipObj {
return nil
@@ -100,9 +108,13 @@ func Walk(fileSystem fs.FS, prefix, delimiter, marker string, max int32, getObj
return fmt.Errorf("directory to object %q: %w", path, err)
}
objects = append(objects, dirobj)
return nil
}
return nil
if len(ents) != 0 {
return nil
}
}
if !pastMarker {
@@ -180,9 +192,16 @@ func Walk(fileSystem fs.FS, prefix, delimiter, marker string, max int32, getObj
// Common prefixes are a set, so should not have duplicates.
// These are abstractly a "directory", so need to include the
// delimiter at the end.
cpmap[prefix+before+delimiter] = struct{}{}
cpref := prefix + before + delimiter
if cpref == marker {
pastMarker = true
return nil
}
cpmap[cpref] = struct{}{}
if (len(objects) + len(cpmap)) == int(max) {
pastMax = true
newMarker = cpref
truncated = true
return fs.SkipAll
}
return nil
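Walk now takes a context and checks ctx.Err() for every visited entry, so a caller can bound a deep traversal. A minimal usage sketch, assuming fsys, getObj, and maxKeys from the surrounding code:

    // sketch: cap a potentially slow listing at five seconds
    ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
    defer cancel()
    results, err := backend.Walk(ctx, fsys, prefix, delim, marker, maxKeys, getObj, []string{metaTmpDir})
    if errors.Is(err, context.DeadlineExceeded) {
        // the traversal was cut short by the timeout
    } else if err == nil {
        fmt.Println("objects listed:", len(results.Objects))
    }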


@@ -15,15 +15,19 @@
package backend_test
import (
"context"
"crypto/md5"
"encoding/hex"
"fmt"
"io/fs"
"sync"
"testing"
"testing/fstest"
"time"
"github.com/aws/aws-sdk-go-v2/service/s3/types"
"github.com/versity/versitygw/backend"
"github.com/versity/versitygw/s3response"
)
type walkTest struct {
@@ -32,19 +36,20 @@ type walkTest struct {
getobj backend.GetObjFunc
}
func getObj(path string, d fs.DirEntry) (types.Object, error) {
func getObj(path string, d fs.DirEntry) (s3response.Object, error) {
if d.IsDir() {
etag := getMD5(path)
fi, err := d.Info()
if err != nil {
return types.Object{}, fmt.Errorf("get fileinfo: %w", err)
return s3response.Object{}, fmt.Errorf("get fileinfo: %w", err)
}
mtime := fi.ModTime()
return types.Object{
return s3response.Object{
ETag: &etag,
Key: &path,
LastModified: backend.GetTimePtr(fi.ModTime()),
LastModified: &mtime,
}, nil
}
@@ -52,15 +57,16 @@ func getObj(path string, d fs.DirEntry) (types.Object, error) {
fi, err := d.Info()
if err != nil {
return types.Object{}, fmt.Errorf("get fileinfo: %w", err)
return s3response.Object{}, fmt.Errorf("get fileinfo: %w", err)
}
size := fi.Size()
mtime := fi.ModTime()
return types.Object{
return s3response.Object{
ETag: &etag,
Key: &path,
LastModified: backend.GetTimePtr(fi.ModTime()),
LastModified: &mtime,
Size: &size,
}, nil
}
@@ -86,7 +92,7 @@ func TestWalk(t *testing.T) {
CommonPrefixes: []types.CommonPrefix{{
Prefix: backend.GetStringPtr("photos/"),
}},
Objects: []types.Object{{
Objects: []s3response.Object{{
Key: backend.GetStringPtr("sample.jpg"),
}},
},
@@ -101,14 +107,14 @@ func TestWalk(t *testing.T) {
CommonPrefixes: []types.CommonPrefix{{
Prefix: backend.GetStringPtr("test/"),
}},
Objects: []types.Object{},
Objects: []s3response.Object{},
},
getobj: getObj,
},
}
for _, tt := range tests {
res, err := backend.Walk(tt.fsys, "", "/", "", 1000, tt.getobj, []string{})
res, err := backend.Walk(context.Background(), tt.fsys, "", "/", "", 1000, tt.getobj, []string{})
if err != nil {
t.Fatalf("walk: %v", err)
}
@@ -168,7 +174,7 @@ func printCommonPrefixes(list []types.CommonPrefix) string {
return res + "]"
}
func compareObjects(a, b []types.Object) bool {
func compareObjects(a, b []s3response.Object) bool {
if len(a) == 0 && len(b) == 0 {
return true
}
@@ -184,7 +190,7 @@ func compareObjects(a, b []types.Object) bool {
return false
}
func containsObject(c types.Object, list []types.Object) bool {
func containsObject(c s3response.Object, list []s3response.Object) bool {
for _, cp := range list {
if *c.Key == *cp.Key {
return true
@@ -193,7 +199,7 @@ func containsObject(c types.Object, list []types.Object) bool {
return false
}
func printObjects(list []types.Object) string {
func printObjects(list []s3response.Object) string {
res := "["
for _, cp := range list {
if res == "[" {
@@ -204,3 +210,50 @@ func printObjects(list []types.Object) string {
}
return res + "]"
}
type slowFS struct {
fstest.MapFS
}
const (
readDirPause = 100 * time.Millisecond
// walkTimeOut should be less than the tree traversal time
// which is the readdirPause time * the number of directories
walkTimeOut = 500 * time.Millisecond
)
func (s *slowFS) ReadDir(name string) ([]fs.DirEntry, error) {
time.Sleep(readDirPause)
return s.MapFS.ReadDir(name)
}
func TestWalkStop(t *testing.T) {
s := &slowFS{MapFS: fstest.MapFS{
"/a/b/c/d/e/f/g/h/i/g/k/l/m/n": &fstest.MapFile{},
}}
ctx, cancel := context.WithTimeout(context.Background(), walkTimeOut)
defer cancel()
var err error
var wg sync.WaitGroup
wg.Add(1)
go func() {
defer wg.Done()
_, err = backend.Walk(ctx, s, "", "/", "", 1000,
func(path string, d fs.DirEntry) (s3response.Object, error) {
return s3response.Object{}, nil
}, []string{})
}()
select {
case <-time.After(1 * time.Second):
t.Fatalf("walk is not terminated in time")
case <-ctx.Done():
}
wg.Wait()
if err != ctx.Err() {
t.Fatalf("unexpected error: %v", err)
}
}


@@ -19,9 +19,11 @@ import (
"crypto/tls"
"fmt"
"log"
"net"
"net/http"
_ "net/http/pprof"
"os"
"strings"
"github.com/gofiber/fiber/v2"
"github.com/urfave/cli/v2"
@@ -45,8 +47,8 @@ var (
natsURL, natsTopic string
eventWebhookURL string
eventConfigFilePath string
logWebhookURL string
accessLog string
logWebhookURL, accessLog string
adminLogFile string
healthPath string
debug bool
pprof string
@@ -112,10 +114,13 @@ func main() {
func initApp() *cli.App {
return &cli.App{
Name: "versitygw",
Usage: "Start S3 gateway service with specified backend storage.",
Description: `The S3 gateway is an S3 protocol translator that allows an S3 client
to access the supported backend storage as if it was a native S3 service.`,
Usage: "Versity S3 Gateway",
Description: `The Versity S3 Gateway is an S3 protocol translator that allows an S3 client
to access the supported backend storage as if it was a native S3 service.
VersityGW is an open-source project licensed under the Apache 2.0 License. The
source code is hosted on GitHub at https://github.com/versity/versitygw, and
documentation can be found in the GitHub wiki.`,
Copyright: "Copyright (c) 2023-2024 Versity Software",
Action: func(ctx *cli.Context) error {
return ctx.App.Command("help").Run(ctx)
},
@@ -223,6 +228,12 @@ func initFlags() []cli.Flag {
EnvVars: []string{"LOGFILE", "VGW_ACCESS_LOG"},
Destination: &accessLog,
},
&cli.StringFlag{
Name: "admin-access-log",
Usage: "enable admin server access logging to specified file",
EnvVars: []string{"LOGFILE", "VGW_ADMIN_ACCESS_LOG"},
Destination: &adminLogFile,
},
&cli.StringFlag{
Name: "log-webhook-url",
Usage: "webhook url to send the audit logs",
@@ -512,10 +523,12 @@ func runGateway(ctx context.Context, be backend.Backend) error {
}
app := fiber.New(fiber.Config{
AppName: "versitygw",
ServerHeader: "VERSITYGW",
StreamRequestBody: true,
DisableKeepalive: true,
AppName: "versitygw",
ServerHeader: "VERSITYGW",
StreamRequestBody: true,
DisableKeepalive: true,
Network: fiber.NetworkTCP,
DisableStartupMessage: true,
})
var opts []s3api.Option
@@ -551,8 +564,10 @@ func runGateway(ctx context.Context, be backend.Backend) error {
}
admApp := fiber.New(fiber.Config{
AppName: "versitygw",
ServerHeader: "VERSITYGW",
AppName: "versitygw",
ServerHeader: "VERSITYGW",
Network: fiber.NetworkTCP,
DisableStartupMessage: true,
})
var admOpts []s3api.AdminOpt
@@ -573,6 +588,11 @@ func runGateway(ctx context.Context, be backend.Backend) error {
}
iam, err := auth.New(&auth.Opts{
RootAccount: auth.Account{
Access: rootUserAccess,
Secret: rootUserSecret,
Role: auth.RoleAdmin,
},
Dir: iamDir,
LDAPServerURL: ldapURL,
LDAPBindDN: ldapBindDN,
@@ -608,9 +628,10 @@ func runGateway(ctx context.Context, be backend.Backend) error {
return fmt.Errorf("setup iam: %w", err)
}
logger, err := s3log.InitLogger(&s3log.LogConfig{
LogFile: accessLog,
WebhookURL: logWebhookURL,
loggers, err := s3log.InitLogger(&s3log.LogConfig{
LogFile: accessLog,
WebhookURL: logWebhookURL,
AdminLogFile: adminLogFile,
})
if err != nil {
return fmt.Errorf("setup logger: %w", err)
@@ -641,12 +662,16 @@ func runGateway(ctx context.Context, be backend.Backend) error {
srv, err := s3api.New(app, be, middlewares.RootUserConfig{
Access: rootUserAccess,
Secret: rootUserSecret,
}, port, region, iam, logger, evSender, metricsManager, opts...)
}, port, region, iam, loggers.S3Logger, loggers.AdminLogger, evSender, metricsManager, opts...)
if err != nil {
return fmt.Errorf("init gateway: %v", err)
}
admSrv := s3api.NewAdminServer(admApp, be, middlewares.RootUserConfig{Access: rootUserAccess, Secret: rootUserSecret}, admPort, region, iam, admOpts...)
admSrv := s3api.NewAdminServer(admApp, be, middlewares.RootUserConfig{Access: rootUserAccess, Secret: rootUserSecret}, admPort, region, iam, loggers.AdminLogger, admOpts...)
if !quiet {
printBanner(port, admPort, certFile != "", admCertFile != "")
}
c := make(chan error, 2)
go func() { c <- srv.Serve() }()
@@ -663,10 +688,17 @@ Loop:
case err = <-c:
break Loop
case <-sigHup:
if logger != nil {
err = logger.HangUp()
if loggers.S3Logger != nil {
err = loggers.S3Logger.HangUp()
if err != nil {
err = fmt.Errorf("HUP logger: %w", err)
err = fmt.Errorf("HUP s3 logger: %w", err)
break Loop
}
}
if loggers.AdminLogger != nil {
err = loggers.AdminLogger.HangUp()
if err != nil {
err = fmt.Errorf("HUP admin logger: %w", err)
break Loop
}
}
@@ -684,13 +716,22 @@ Loop:
fmt.Fprintf(os.Stderr, "shutdown iam: %v\n", err)
}
if logger != nil {
err := logger.Shutdown()
if loggers.S3Logger != nil {
err := loggers.S3Logger.Shutdown()
if err != nil {
if saveErr == nil {
saveErr = err
}
fmt.Fprintf(os.Stderr, "shutdown logger: %v\n", err)
fmt.Fprintf(os.Stderr, "shutdown s3 logger: %v\n", err)
}
}
if loggers.AdminLogger != nil {
err := loggers.AdminLogger.Shutdown()
if err != nil {
if saveErr == nil {
saveErr = err
}
fmt.Fprintf(os.Stderr, "shutdown admin logger: %v\n", err)
}
}
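InitLogger now returns a pair of loggers so S3 access and admin API access can be logged to separate files. A minimal sketch of the new configuration, using the fields shown above (file paths are hypothetical):

    // sketch: one InitLogger call yields both the S3 and admin access loggers
    loggers, err := s3log.InitLogger(&s3log.LogConfig{
        LogFile:      "/var/log/versitygw/access.log", // hypothetical path
        AdminLogFile: "/var/log/versitygw/admin.log",  // hypothetical path
        WebhookURL:   logWebhookURL,
    })
    if err != nil {
        return fmt.Errorf("setup logger: %w", err)
    }
    // loggers.S3Logger is passed to the S3 server, loggers.AdminLogger to the admin server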
@@ -710,3 +751,177 @@ Loop:
return saveErr
}
func printBanner(port, admPort string, ssl, admSsl bool) {
interfaces, err := getMatchingIPs(port)
if err != nil {
fmt.Fprintf(os.Stderr, "Failed to match local IP addresses: %v\n", err)
return
}
var admInterfaces []string
if admPort != "" {
admInterfaces, err = getMatchingIPs(admPort)
if err != nil {
fmt.Fprintf(os.Stderr, "Failed to match admin port local IP addresses: %v\n", err)
return
}
}
title := "VersityGW"
version := fmt.Sprintf("Version %v, Build %v", Version, Build)
urls := []string{}
hst, prt, err := net.SplitHostPort(port)
if err != nil {
fmt.Fprintf(os.Stderr, "Failed to parse port: %v\n", err)
return
}
for _, ip := range interfaces {
url := fmt.Sprintf("http://%s:%s", ip, prt)
if ssl {
url = fmt.Sprintf("https://%s:%s", ip, prt)
}
urls = append(urls, url)
}
if hst == "" {
hst = "0.0.0.0"
}
boundHost := fmt.Sprintf("(bound on host %s and port %s)", hst, prt)
lines := []string{
centerText(title),
centerText(version),
centerText(boundHost),
centerText(""),
}
if len(admInterfaces) > 0 {
lines = append(lines,
leftText("S3 service listening on:"),
)
} else {
lines = append(lines,
leftText("Admin/S3 service listening on:"),
)
}
for _, url := range urls {
lines = append(lines, leftText(" "+url))
}
if len(admInterfaces) > 0 {
lines = append(lines,
centerText(""),
leftText("Admin service listening on:"),
)
_, prt, err := net.SplitHostPort(admPort)
if err != nil {
fmt.Fprintf(os.Stderr, "Failed to parse port: %v\n", err)
return
}
for _, ip := range admInterfaces {
url := fmt.Sprintf("http://%s:%s", ip, prt)
if admSsl {
url = fmt.Sprintf("https://%s:%s", ip, prt)
}
lines = append(lines, leftText(" "+url))
}
}
// Print the top border
fmt.Println("┌" + strings.Repeat("─", columnWidth-2) + "┐")
// Print each line
for _, line := range lines {
fmt.Printf("│%-*s│\n", columnWidth-2, line)
}
// Print the bottom border
fmt.Println("└" + strings.Repeat("─", columnWidth-2) + "┘")
}
// getMatchingIPs returns all IP addresses for local system interfaces that
// match the input address specification.
func getMatchingIPs(spec string) ([]string, error) {
// Split the input spec into IP and port
host, _, err := net.SplitHostPort(spec)
if err != nil {
return nil, fmt.Errorf("parse address/port: %v", err)
}
// Handle cases where IP is omitted (e.g., ":1234")
if host == "" {
host = "0.0.0.0"
}
ipaddr, err := net.ResolveIPAddr("ip", host)
if err != nil {
return nil, err
}
parsedInputIP := ipaddr.IP
var result []string
// Get all network interfaces
interfaces, err := net.Interfaces()
if err != nil {
return nil, err
}
for _, iface := range interfaces {
// Get all addresses associated with the interface
addrs, err := iface.Addrs()
if err != nil {
return nil, err
}
for _, addr := range addrs {
// Parse the address to get the IP part
ipAddr, _, err := net.ParseCIDR(addr.String())
if err != nil {
return nil, err
}
if ipAddr.IsLinkLocalUnicast() {
continue
}
if ipAddr.IsInterfaceLocalMulticast() {
continue
}
if ipAddr.IsLinkLocalMulticast() {
continue
}
// Check if the IP matches the input specification
if parsedInputIP.Equal(net.IPv4(0, 0, 0, 0)) || parsedInputIP.Equal(ipAddr) {
result = append(result, ipAddr.String())
}
}
}
return result, nil
}
const columnWidth = 70
func centerText(text string) string {
padding := (columnWidth - 2 - len(text)) / 2
if padding < 0 {
padding = 0
}
return strings.Repeat(" ", padding) + text
}
func leftText(text string) string {
if len(text) > columnWidth-2 {
return text
}
return text + strings.Repeat(" ", columnWidth-2-len(text))
}


@@ -24,6 +24,7 @@ import (
var (
chownuid, chowngid bool
bucketlinks bool
)
func posixCommand() *cli.Command {
@@ -54,6 +55,12 @@ will be translated into the file /mnt/fs/gwroot/mybucket/a/b/c/myobject`,
EnvVars: []string{"VGW_CHOWN_GID"},
Destination: &chowngid,
},
&cli.BoolFlag{
Name: "bucketlinks",
Usage: "allow symlinked directories at bucket level to be treated as buckets",
EnvVars: []string{"VGW_BUCKET_LINKS"},
Destination: &bucketlinks,
},
},
}
}
@@ -70,8 +77,9 @@ func runPosix(ctx *cli.Context) error {
}
be, err := posix.New(gwroot, meta.XattrMeta{}, posix.PosixOpts{
ChownUID: chownuid,
ChownGID: chowngid,
ChownUID: chownuid,
ChownGID: chowngid,
BucketLinks: bucketlinks,
})
if err != nil {
return fmt.Errorf("init posix: %v", err)


@@ -63,6 +63,12 @@ move interfaces as well as support for tiered filesystems.`,
EnvVars: []string{"VGW_CHOWN_GID"},
Destination: &chowngid,
},
&cli.BoolFlag{
Name: "bucketlinks",
Usage: "allow symlinked directories at bucket level to be treated as buckets",
EnvVars: []string{"VGW_BUCKET_LINKS"},
Destination: &bucketlinks,
},
},
}
}
@@ -76,6 +82,7 @@ func runScoutfs(ctx *cli.Context) error {
opts.GlacierMode = glacier
opts.ChownUID = chownuid
opts.ChownGID = chowngid
opts.BucketLinks = bucketlinks
be, err := scoutfs.New(ctx.Args().Get(0), opts)
if err != nil {


@@ -19,6 +19,7 @@ services:
dockerfile: Dockerfile_test_bats
args:
- CONFIG_FILE=tests/.env.default
image: bats_test
s3_backend:
build:
context: .


@@ -0,0 +1,64 @@
services:
telegraf:
image: telegraf
container_name: telegraf
restart: always
volumes:
- ./metrics-exploration/telegraf.conf:/etc/telegraf/telegraf.conf:ro
depends_on:
- influxdb
links:
- influxdb
ports:
- '8125:8125/udp'
influxdb:
image: influxdb
container_name: influxdb
restart: always
environment:
- DOCKER_INFLUXDB_INIT_MODE=setup
- DOCKER_INFLUXDB_INIT_USERNAME=admin
- DOCKER_INFLUXDB_INIT_PASSWORD=adminpass
- DOCKER_INFLUXDB_INIT_ORG=myorg
- DOCKER_INFLUXDB_INIT_BUCKET=metrics
- DOCKER_INFLUXDB_INIT_ADMIN_TOKEN=my-super-secret-auth-token
ports:
- '8086:8086'
volumes:
- influxdb_data:/var/lib/influxdb
grafana:
image: grafana/grafana
container_name: grafana-server
restart: always
depends_on:
- influxdb
environment:
- GF_SECURITY_ADMIN_USER=admin
- GF_SECURITY_ADMIN_PASSWORD=admin
- GF_INSTALL_PLUGINS=
links:
- influxdb
ports:
- '3000:3000'
volumes:
- ./metrics-exploration/grafana_data:/var/lib/grafana
- ./metrics-exploration/provisioning:/etc/grafana/provisioning
versitygw:
image: versity/versitygw:latest
container_name: versitygw
ports:
- "7070:7070"
environment:
- ROOT_ACCESS_KEY=user
- ROOT_SECRET_KEY=password
- VGW_METRICS_STATSD_SERVERS=telegraf:8125
depends_on:
- telegraf
command: >
posix /tmp/vgw
volumes:
influxdb_data: {}


@@ -237,6 +237,8 @@ ROOT_SECRET_ACCESS_KEY=
#VGW_IAM_LDAP_ACCESS_ATR=
#VGW_IAM_LDAP_SECRET_ATR=
#VGW_IAM_LDAP_ROLE_ATR=
#VGW_IAM_LDAP_USER_ID_ATR=
#VGW_IAM_LDAP_GROUP_ID_ATR=
###############
# IAM caching #
@@ -305,6 +307,10 @@ ROOT_SECRET_ACCESS_KEY=
#VGW_CHOWN_UID=false
#VGW_CHOWN_GID=false
# The VGW_BUCKET_LINKS option will enable the gateway to treat symbolic links
# to directories at the top level gateway directory as buckets.
#VGW_BUCKET_LINKS=false
###########
# scoutfs #
###########
@@ -336,6 +342,10 @@ ROOT_SECRET_ACCESS_KEY=
#VGW_CHOWN_UID=false
#VGW_CHOWN_GID=false
# The VGW_BUCKET_LINKS option will enable the gateway to treat symbolic links
# to directories at the top level gateway directory as buckets.
#VGW_BUCKET_LINKS=false
######
# s3 #
######

go.mod

@@ -3,45 +3,44 @@ module github.com/versity/versitygw
go 1.21.0
require (
github.com/Azure/azure-sdk-for-go/sdk/azcore v1.12.0
github.com/Azure/azure-sdk-for-go/sdk/azcore v1.14.0
github.com/Azure/azure-sdk-for-go/sdk/azidentity v1.7.0
github.com/Azure/azure-sdk-for-go/sdk/storage/azblob v1.3.2
github.com/Azure/azure-sdk-for-go/sdk/storage/azblob v1.4.0
github.com/DataDog/datadog-go/v5 v5.5.0
github.com/aws/aws-sdk-go-v2 v1.30.1
github.com/aws/aws-sdk-go-v2/service/s3 v1.58.0
github.com/aws/smithy-go v1.20.3
github.com/aws/aws-sdk-go-v2 v1.30.4
github.com/aws/aws-sdk-go-v2/service/s3 v1.60.1
github.com/aws/smithy-go v1.20.4
github.com/go-ldap/ldap/v3 v3.4.8
github.com/gofiber/fiber/v2 v2.52.5
github.com/google/go-cmp v0.6.0
github.com/google/uuid v1.6.0
github.com/hashicorp/vault-client-go v0.4.3
github.com/nats-io/nats.go v1.36.0
github.com/pkg/xattr v0.4.9
github.com/nats-io/nats.go v1.37.0
github.com/pkg/xattr v0.4.10
github.com/segmentio/kafka-go v0.4.47
github.com/smira/go-statsd v1.3.3
github.com/urfave/cli/v2 v2.27.2
github.com/urfave/cli/v2 v2.27.4
github.com/valyala/fasthttp v1.55.0
github.com/versity/scoutfs-go v0.0.0-20240325223134-38eb2f5f7d44
golang.org/x/sys v0.22.0
golang.org/x/sys v0.24.0
)
require (
github.com/Azure/azure-sdk-for-go/sdk/internal v1.9.1 // indirect
github.com/Azure/azure-sdk-for-go/sdk/internal v1.10.0 // indirect
github.com/Azure/go-ntlmssp v0.0.0-20221128193559-754e69321358 // indirect
github.com/AzureAD/microsoft-authentication-library-for-go v1.2.2 // indirect
github.com/Microsoft/go-winio v0.6.2 // indirect
github.com/aws/aws-sdk-go-v2/feature/ec2/imds v1.16.9 // indirect
github.com/aws/aws-sdk-go-v2/internal/ini v1.8.0 // indirect
github.com/aws/aws-sdk-go-v2/service/sso v1.22.1 // indirect
github.com/aws/aws-sdk-go-v2/service/ssooidc v1.26.2 // indirect
github.com/aws/aws-sdk-go-v2/service/sts v1.30.1 // indirect
github.com/aws/aws-sdk-go-v2/feature/ec2/imds v1.16.12 // indirect
github.com/aws/aws-sdk-go-v2/internal/ini v1.8.1 // indirect
github.com/aws/aws-sdk-go-v2/service/sso v1.22.5 // indirect
github.com/aws/aws-sdk-go-v2/service/ssooidc v1.26.5 // indirect
github.com/aws/aws-sdk-go-v2/service/sts v1.30.5 // indirect
github.com/go-asn1-ber/asn1-ber v1.5.7 // indirect
github.com/golang-jwt/jwt/v5 v5.2.1 // indirect
github.com/hashicorp/go-cleanhttp v0.5.2 // indirect
github.com/hashicorp/go-retryablehttp v0.7.7 // indirect
github.com/hashicorp/go-rootcerts v1.0.2 // indirect
github.com/hashicorp/go-secure-stdlib/strutil v0.1.2 // indirect
github.com/jmespath/go-jmespath v0.4.0 // indirect
github.com/kylelemons/godebug v1.1.0 // indirect
github.com/mitchellh/go-homedir v1.1.0 // indirect
github.com/nats-io/nkeys v0.4.7 // indirect
@@ -49,33 +48,33 @@ require (
github.com/pierrec/lz4/v4 v4.1.21 // indirect
github.com/pkg/browser v0.0.0-20240102092130-5ac0b6a4141c // indirect
github.com/ryanuber/go-glob v1.0.0 // indirect
golang.org/x/crypto v0.25.0 // indirect
golang.org/x/net v0.27.0 // indirect
golang.org/x/text v0.16.0 // indirect
golang.org/x/time v0.5.0 // indirect
golang.org/x/crypto v0.26.0 // indirect
golang.org/x/net v0.28.0 // indirect
golang.org/x/text v0.17.0 // indirect
golang.org/x/time v0.6.0 // indirect
)
require (
github.com/andybalholm/brotli v1.1.0 // indirect
github.com/aws/aws-sdk-go-v2/aws/protocol/eventstream v1.6.3 // indirect
github.com/aws/aws-sdk-go-v2/config v1.27.24
github.com/aws/aws-sdk-go-v2/credentials v1.17.24
github.com/aws/aws-sdk-go-v2/feature/s3/manager v1.17.5
github.com/aws/aws-sdk-go-v2/internal/configsources v1.3.13 // indirect
github.com/aws/aws-sdk-go-v2/internal/endpoints/v2 v2.6.13 // indirect
github.com/aws/aws-sdk-go-v2/internal/v4a v1.3.13 // indirect
github.com/aws/aws-sdk-go-v2/service/internal/accept-encoding v1.11.3 // indirect
github.com/aws/aws-sdk-go-v2/service/internal/checksum v1.3.15 // indirect
github.com/aws/aws-sdk-go-v2/service/internal/presigned-url v1.11.15 // indirect
github.com/aws/aws-sdk-go-v2/service/internal/s3shared v1.17.13 // indirect
github.com/aws/aws-sdk-go-v2/aws/protocol/eventstream v1.6.4 // indirect
github.com/aws/aws-sdk-go-v2/config v1.27.31
github.com/aws/aws-sdk-go-v2/credentials v1.17.30
github.com/aws/aws-sdk-go-v2/feature/s3/manager v1.17.15
github.com/aws/aws-sdk-go-v2/internal/configsources v1.3.16 // indirect
github.com/aws/aws-sdk-go-v2/internal/endpoints/v2 v2.6.16 // indirect
github.com/aws/aws-sdk-go-v2/internal/v4a v1.3.16 // indirect
github.com/aws/aws-sdk-go-v2/service/internal/accept-encoding v1.11.4 // indirect
github.com/aws/aws-sdk-go-v2/service/internal/checksum v1.3.18 // indirect
github.com/aws/aws-sdk-go-v2/service/internal/presigned-url v1.11.18 // indirect
github.com/aws/aws-sdk-go-v2/service/internal/s3shared v1.17.16 // indirect
github.com/cpuguy83/go-md2man/v2 v2.0.4 // indirect
github.com/klauspost/compress v1.17.9 // indirect
github.com/mattn/go-colorable v0.1.13 // indirect
github.com/mattn/go-isatty v0.0.20 // indirect
github.com/mattn/go-runewidth v0.0.15 // indirect
github.com/mattn/go-runewidth v0.0.16 // indirect
github.com/rivo/uniseg v0.4.7 // indirect
github.com/russross/blackfriday/v2 v2.1.0 // indirect
github.com/valyala/bytebufferpool v1.0.0 // indirect
github.com/valyala/tcplisten v1.0.0 // indirect
github.com/xrash/smetrics v0.0.0-20240312152122-5f08fbb34913 // indirect
github.com/xrash/smetrics v0.0.0-20240521201337-686a1a2994c1 // indirect
)

go.sum

@@ -1,13 +1,13 @@
github.com/Azure/azure-sdk-for-go/sdk/azcore v1.12.0 h1:1nGuui+4POelzDwI7RG56yfQJHCnKvwfMoU7VsEp+Zg=
github.com/Azure/azure-sdk-for-go/sdk/azcore v1.12.0/go.mod h1:99EvauvlcJ1U06amZiksfYz/3aFGyIhWGHVyiZXtBAI=
github.com/Azure/azure-sdk-for-go/sdk/azcore v1.14.0 h1:nyQWyZvwGTvunIMxi1Y9uXkcyr+I7TeNrr/foo4Kpk8=
github.com/Azure/azure-sdk-for-go/sdk/azcore v1.14.0/go.mod h1:l38EPgmsp71HHLq9j7De57JcKOWPyhrsW1Awm1JS6K0=
github.com/Azure/azure-sdk-for-go/sdk/azidentity v1.7.0 h1:tfLQ34V6F7tVSwoTf/4lH5sE0o6eCJuNDTmH09nDpbc=
github.com/Azure/azure-sdk-for-go/sdk/azidentity v1.7.0/go.mod h1:9kIvujWAA58nmPmWB1m23fyWic1kYZMxD9CxaWn4Qpg=
github.com/Azure/azure-sdk-for-go/sdk/internal v1.9.1 h1:Xy/qV1DyOhhqsU/z0PyFMJfYCxnzna+vBEUtFW0ksQo=
github.com/Azure/azure-sdk-for-go/sdk/internal v1.9.1/go.mod h1:oib6iWdC+sILvNUoJbbBn3xv7TXow7mEp/WRcsYvmow=
github.com/Azure/azure-sdk-for-go/sdk/resourcemanager/storage/armstorage v1.5.0 h1:AifHbc4mg0x9zW52WOpKbsHaDKuRhlI7TVl47thgQ70=
github.com/Azure/azure-sdk-for-go/sdk/resourcemanager/storage/armstorage v1.5.0/go.mod h1:T5RfihdXtBDxt1Ch2wobif3TvzTdumDy29kahv6AV9A=
github.com/Azure/azure-sdk-for-go/sdk/storage/azblob v1.3.2 h1:YUUxeiOWgdAQE3pXt2H7QXzZs0q8UBjgRbl56qo8GYM=
github.com/Azure/azure-sdk-for-go/sdk/storage/azblob v1.3.2/go.mod h1:dmXQgZuiSubAecswZE+Sm8jkvEa7kQgTPVRvwL/nd0E=
github.com/Azure/azure-sdk-for-go/sdk/internal v1.10.0 h1:ywEEhmNahHBihViHepv3xPBn1663uRv2t2q/ESv9seY=
github.com/Azure/azure-sdk-for-go/sdk/internal v1.10.0/go.mod h1:iZDifYGJTIgIIkYRNWPENUnqx6bJ2xnSDFI2tjwZNuY=
github.com/Azure/azure-sdk-for-go/sdk/resourcemanager/storage/armstorage v1.6.0 h1:PiSrjRPpkQNjrM8H0WwKMnZUdu1RGMtd/LdGKUrOo+c=
github.com/Azure/azure-sdk-for-go/sdk/resourcemanager/storage/armstorage v1.6.0/go.mod h1:oDrbWx4ewMylP7xHivfgixbfGBT6APAwsSoHRKotnIc=
github.com/Azure/azure-sdk-for-go/sdk/storage/azblob v1.4.0 h1:Be6KInmFEKV81c0pOAEbRYehLMwmmGI1exuFj248AMk=
github.com/Azure/azure-sdk-for-go/sdk/storage/azblob v1.4.0/go.mod h1:WCPBHsOXfBVnivScjs2ypRfimjEW0qPVLGgJkZlrIOA=
github.com/Azure/go-ntlmssp v0.0.0-20221128193559-754e69321358 h1:mFRzDkZVAjdal+s7s0MwaRv9igoPqLRdzOLzw/8Xvq8=
github.com/Azure/go-ntlmssp v0.0.0-20221128193559-754e69321358/go.mod h1:chxPXzSsl7ZWRAuOIE23GDNzjWuZquvFlgA8xmpunjU=
github.com/AzureAD/microsoft-authentication-library-for-go v1.2.2 h1:XHOnouVk1mxXfQidrMEnLlPk9UMeRtyBTnEFtxkV0kU=
@@ -21,44 +21,44 @@ github.com/alexbrainman/sspi v0.0.0-20231016080023-1a75b4708caa h1:LHTHcTQiSGT7V
github.com/alexbrainman/sspi v0.0.0-20231016080023-1a75b4708caa/go.mod h1:cEWa1LVoE5KvSD9ONXsZrj0z6KqySlCCNKHlLzbqAt4=
github.com/andybalholm/brotli v1.1.0 h1:eLKJA0d02Lf0mVpIDgYnqXcUn0GqVmEFny3VuID1U3M=
github.com/andybalholm/brotli v1.1.0/go.mod h1:sms7XGricyQI9K10gOSf56VKKWS4oLer58Q+mhRPtnY=
github.com/aws/aws-sdk-go-v2 v1.30.1 h1:4y/5Dvfrhd1MxRDD77SrfsDaj8kUkkljU7XE83NPV+o=
github.com/aws/aws-sdk-go-v2 v1.30.1/go.mod h1:nIQjQVp5sfpQcTc9mPSr1B0PaWK5ByX9MOoDadSN4lc=
github.com/aws/aws-sdk-go-v2/aws/protocol/eventstream v1.6.3 h1:tW1/Rkad38LA15X4UQtjXZXNKsCgkshC3EbmcUmghTg=
github.com/aws/aws-sdk-go-v2/aws/protocol/eventstream v1.6.3/go.mod h1:UbnqO+zjqk3uIt9yCACHJ9IVNhyhOCnYk8yA19SAWrM=
github.com/aws/aws-sdk-go-v2/config v1.27.24 h1:NM9XicZ5o1CBU/MZaHwFtimRpWx9ohAUAqkG6AqSqPo=
github.com/aws/aws-sdk-go-v2/config v1.27.24/go.mod h1:aXzi6QJTuQRVVusAO8/NxpdTeTyr/wRcybdDtfUwJSs=
github.com/aws/aws-sdk-go-v2/credentials v1.17.24 h1:YclAsrnb1/GTQNt2nzv+756Iw4mF8AOzcDfweWwwm/M=
github.com/aws/aws-sdk-go-v2/credentials v1.17.24/go.mod h1:Hld7tmnAkoBQdTMNYZGzztzKRdA4fCdn9L83LOoigac=
github.com/aws/aws-sdk-go-v2/feature/ec2/imds v1.16.9 h1:Aznqksmd6Rfv2HQN9cpqIV/lQRMaIpJkLLaJ1ZI76no=
github.com/aws/aws-sdk-go-v2/feature/ec2/imds v1.16.9/go.mod h1:WQr3MY7AxGNxaqAtsDWn+fBxmd4XvLkzeqQ8P1VM0/w=
github.com/aws/aws-sdk-go-v2/feature/s3/manager v1.17.5 h1:qkipTyOc+ElVS+TgGJCf/6gqu0CL5+ii19W/eMQfY94=
github.com/aws/aws-sdk-go-v2/feature/s3/manager v1.17.5/go.mod h1:UjB35RXl+ESpnVtyaKqdw11NhMxm90lF9o2zqJNbi14=
github.com/aws/aws-sdk-go-v2/internal/configsources v1.3.13 h1:5SAoZ4jYpGH4721ZNoS1znQrhOfZinOhc4XuTXx/nVc=
github.com/aws/aws-sdk-go-v2/internal/configsources v1.3.13/go.mod h1:+rdA6ZLpaSeM7tSg/B0IEDinCIBJGmW8rKDFkYpP04g=
github.com/aws/aws-sdk-go-v2/internal/endpoints/v2 v2.6.13 h1:WIijqeaAO7TYFLbhsZmi2rgLEAtWOC1LhxCAVTJlSKw=
github.com/aws/aws-sdk-go-v2/internal/endpoints/v2 v2.6.13/go.mod h1:i+kbfa76PQbWw/ULoWnp51EYVWH4ENln76fLQE3lXT8=
github.com/aws/aws-sdk-go-v2/internal/ini v1.8.0 h1:hT8rVHwugYE2lEfdFE0QWVo81lF7jMrYJVDWI+f+VxU=
github.com/aws/aws-sdk-go-v2/internal/ini v1.8.0/go.mod h1:8tu/lYfQfFe6IGnaOdrpVgEL2IrrDOf6/m9RQum4NkY=
github.com/aws/aws-sdk-go-v2/internal/v4a v1.3.13 h1:THZJJ6TU/FOiM7DZFnisYV9d49oxXWUzsVIMTuf3VNU=
github.com/aws/aws-sdk-go-v2/internal/v4a v1.3.13/go.mod h1:VISUTg6n+uBaYIWPBaIG0jk7mbBxm7DUqBtU2cUDDWI=
github.com/aws/aws-sdk-go-v2/service/internal/accept-encoding v1.11.3 h1:dT3MqvGhSoaIhRseqw2I0yH81l7wiR2vjs57O51EAm8=
github.com/aws/aws-sdk-go-v2/service/internal/accept-encoding v1.11.3/go.mod h1:GlAeCkHwugxdHaueRr4nhPuY+WW+gR8UjlcqzPr1SPI=
github.com/aws/aws-sdk-go-v2/service/internal/checksum v1.3.15 h1:2jyRZ9rVIMisyQRnhSS/SqlckveoxXneIumECVFP91Y=
github.com/aws/aws-sdk-go-v2/service/internal/checksum v1.3.15/go.mod h1:bDRG3m382v1KJBk1cKz7wIajg87/61EiiymEyfLvAe0=
github.com/aws/aws-sdk-go-v2/service/internal/presigned-url v1.11.15 h1:I9zMeF107l0rJrpnHpjEiiTSCKYAIw8mALiXcPsGBiA=
github.com/aws/aws-sdk-go-v2/service/internal/presigned-url v1.11.15/go.mod h1:9xWJ3Q/S6Ojusz1UIkfycgD1mGirJfLLKqq3LPT7WN8=
github.com/aws/aws-sdk-go-v2/service/internal/s3shared v1.17.13 h1:Eq2THzHt6P41mpjS2sUzz/3dJYFRqdWZ+vQaEMm98EM=
github.com/aws/aws-sdk-go-v2/service/internal/s3shared v1.17.13/go.mod h1:FgwTca6puegxgCInYwGjmd4tB9195Dd6LCuA+8MjpWw=
github.com/aws/aws-sdk-go-v2/service/s3 v1.58.0 h1:4rhV0Hn+bf8IAIUphRX1moBcEvKJipCPmswMCl6Q5mw=
github.com/aws/aws-sdk-go-v2/service/s3 v1.58.0/go.mod h1:hdV0NTYd0RwV4FvNKhKUNbPLZoq9CTr/lke+3I7aCAI=
github.com/aws/aws-sdk-go-v2/service/sso v1.22.1 h1:p1GahKIjyMDZtiKoIn0/jAj/TkMzfzndDv5+zi2Mhgc=
github.com/aws/aws-sdk-go-v2/service/sso v1.22.1/go.mod h1:/vWdhoIoYA5hYoPZ6fm7Sv4d8701PiG5VKe8/pPJL60=
github.com/aws/aws-sdk-go-v2/service/ssooidc v1.26.2 h1:ORnrOK0C4WmYV/uYt3koHEWBLYsRDwk2Np+eEoyV4Z0=
github.com/aws/aws-sdk-go-v2/service/ssooidc v1.26.2/go.mod h1:xyFHA4zGxgYkdD73VeezHt3vSKEG9EmFnGwoKlP00u4=
github.com/aws/aws-sdk-go-v2/service/sts v1.30.1 h1:+woJ607dllHJQtsnJLi52ycuqHMwlW+Wqm2Ppsfp4nQ=
github.com/aws/aws-sdk-go-v2/service/sts v1.30.1/go.mod h1:jiNR3JqT15Dm+QWq2SRgh0x0bCNSRP2L25+CqPNpJlQ=
github.com/aws/smithy-go v1.20.3 h1:ryHwveWzPV5BIof6fyDvor6V3iUL7nTfiTKXHiW05nE=
github.com/aws/smithy-go v1.20.3/go.mod h1:krry+ya/rV9RDcV/Q16kpu6ypI4K2czasz0NC3qS14E=
github.com/aws/aws-sdk-go-v2 v1.30.4 h1:frhcagrVNrzmT95RJImMHgabt99vkXGslubDaDagTk8=
github.com/aws/aws-sdk-go-v2 v1.30.4/go.mod h1:CT+ZPWXbYrci8chcARI3OmI/qgd+f6WtuLOoaIA8PR0=
github.com/aws/aws-sdk-go-v2/aws/protocol/eventstream v1.6.4 h1:70PVAiL15/aBMh5LThwgXdSQorVr91L127ttckI9QQU=
github.com/aws/aws-sdk-go-v2/aws/protocol/eventstream v1.6.4/go.mod h1:/MQxMqci8tlqDH+pjmoLu1i0tbWCUP1hhyMRuFxpQCw=
github.com/aws/aws-sdk-go-v2/config v1.27.31 h1:kxBoRsjhT3pq0cKthgj6RU6bXTm/2SgdoUMyrVw0rAI=
github.com/aws/aws-sdk-go-v2/config v1.27.31/go.mod h1:z04nZdSWFPaDwK3DdJOG2r+scLQzMYuJeW0CujEm9FM=
github.com/aws/aws-sdk-go-v2/credentials v1.17.30 h1:aau/oYFtibVovr2rDt8FHlU17BTicFEMAi29V1U+L5Q=
github.com/aws/aws-sdk-go-v2/credentials v1.17.30/go.mod h1:BPJ/yXV92ZVq6G8uYvbU0gSl8q94UB63nMT5ctNO38g=
github.com/aws/aws-sdk-go-v2/feature/ec2/imds v1.16.12 h1:yjwoSyDZF8Jth+mUk5lSPJCkMC0lMy6FaCD51jm6ayE=
github.com/aws/aws-sdk-go-v2/feature/ec2/imds v1.16.12/go.mod h1:fuR57fAgMk7ot3WcNQfb6rSEn+SUffl7ri+aa8uKysI=
github.com/aws/aws-sdk-go-v2/feature/s3/manager v1.17.15 h1:ijB7hr56MngOiELJe0C5aQRaBQ11LveNgWFyG02AUto=
github.com/aws/aws-sdk-go-v2/feature/s3/manager v1.17.15/go.mod h1:0QEmQSSWMVfiAk93l1/ayR9DQ9+jwni7gHS2NARZXB0=
github.com/aws/aws-sdk-go-v2/internal/configsources v1.3.16 h1:TNyt/+X43KJ9IJJMjKfa3bNTiZbUP7DeCxfbTROESwY=
github.com/aws/aws-sdk-go-v2/internal/configsources v1.3.16/go.mod h1:2DwJF39FlNAUiX5pAc0UNeiz16lK2t7IaFcm0LFHEgc=
github.com/aws/aws-sdk-go-v2/internal/endpoints/v2 v2.6.16 h1:jYfy8UPmd+6kJW5YhY0L1/KftReOGxI/4NtVSTh9O/I=
github.com/aws/aws-sdk-go-v2/internal/endpoints/v2 v2.6.16/go.mod h1:7ZfEPZxkW42Afq4uQB8H2E2e6ebh6mXTueEpYzjCzcs=
github.com/aws/aws-sdk-go-v2/internal/ini v1.8.1 h1:VaRN3TlFdd6KxX1x3ILT5ynH6HvKgqdiXoTxAF4HQcQ=
github.com/aws/aws-sdk-go-v2/internal/ini v1.8.1/go.mod h1:FbtygfRFze9usAadmnGJNc8KsP346kEe+y2/oyhGAGc=
github.com/aws/aws-sdk-go-v2/internal/v4a v1.3.16 h1:mimdLQkIX1zr8GIPY1ZtALdBQGxcASiBd2MOp8m/dMc=
github.com/aws/aws-sdk-go-v2/internal/v4a v1.3.16/go.mod h1:YHk6owoSwrIsok+cAH9PENCOGoH5PU2EllX4vLtSrsY=
github.com/aws/aws-sdk-go-v2/service/internal/accept-encoding v1.11.4 h1:KypMCbLPPHEmf9DgMGw51jMj77VfGPAN2Kv4cfhlfgI=
github.com/aws/aws-sdk-go-v2/service/internal/accept-encoding v1.11.4/go.mod h1:Vz1JQXliGcQktFTN/LN6uGppAIRoLBR2bMvIMP0gOjc=
github.com/aws/aws-sdk-go-v2/service/internal/checksum v1.3.18 h1:GckUnpm4EJOAio1c8o25a+b3lVfwVzC9gnSBqiiNmZM=
github.com/aws/aws-sdk-go-v2/service/internal/checksum v1.3.18/go.mod h1:Br6+bxfG33Dk3ynmkhsW2Z/t9D4+lRqdLDNCKi85w0U=
github.com/aws/aws-sdk-go-v2/service/internal/presigned-url v1.11.18 h1:tJ5RnkHCiSH0jyd6gROjlJtNwov0eGYNz8s8nFcR0jQ=
github.com/aws/aws-sdk-go-v2/service/internal/presigned-url v1.11.18/go.mod h1:++NHzT+nAF7ZPrHPsA+ENvsXkOO8wEu+C6RXltAG4/c=
github.com/aws/aws-sdk-go-v2/service/internal/s3shared v1.17.16 h1:jg16PhLPUiHIj8zYIW6bqzeQSuHVEiWnGA0Brz5Xv2I=
github.com/aws/aws-sdk-go-v2/service/internal/s3shared v1.17.16/go.mod h1:Uyk1zE1VVdsHSU7096h/rwnXDzOzYQVl+FNPhPw7ShY=
github.com/aws/aws-sdk-go-v2/service/s3 v1.60.1 h1:mx2ucgtv+MWzJesJY9Ig/8AFHgoE5FwLXwUVgW/FGdI=
github.com/aws/aws-sdk-go-v2/service/s3 v1.60.1/go.mod h1:BSPI0EfnYUuNHPS0uqIo5VrRwzie+Fp+YhQOUs16sKI=
github.com/aws/aws-sdk-go-v2/service/sso v1.22.5 h1:zCsFCKvbj25i7p1u94imVoO447I/sFv8qq+lGJhRN0c=
github.com/aws/aws-sdk-go-v2/service/sso v1.22.5/go.mod h1:ZeDX1SnKsVlejeuz41GiajjZpRSWR7/42q/EyA/QEiM=
github.com/aws/aws-sdk-go-v2/service/ssooidc v1.26.5 h1:SKvPgvdvmiTWoi0GAJ7AsJfOz3ngVkD/ERbs5pUnHNI=
github.com/aws/aws-sdk-go-v2/service/ssooidc v1.26.5/go.mod h1:20sz31hv/WsPa3HhU3hfrIet2kxM4Pe0r20eBZ20Tac=
github.com/aws/aws-sdk-go-v2/service/sts v1.30.5 h1:OMsEmCyz2i89XwRwPouAJvhj81wINh+4UK+k/0Yo/q8=
github.com/aws/aws-sdk-go-v2/service/sts v1.30.5/go.mod h1:vmSqFK+BVIwVpDAGZB3CoCXHzurt4qBE8lf+I/kRTh0=
github.com/aws/smithy-go v1.20.4 h1:2HK1zBdPgRbjFOHlfeQZfpC4r72MOb9bZkiFwggKO+4=
github.com/aws/smithy-go v1.20.4/go.mod h1:irrKGvNn1InZwb2d7fkIRNucdfwR8R+Ts3wxYa/cJHg=
github.com/cpuguy83/go-md2man/v2 v2.0.4 h1:wfIWP927BUkWJb2NmU/kNDYIBTh/ziUX91+lVfRxZq4=
github.com/cpuguy83/go-md2man/v2 v2.0.4/go.mod h1:tgQtvFlXSQOSOSIRvRPT7W67SCa46tRHOmNcaadrF8o=
github.com/davecgh/go-spew v1.1.0/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=
@@ -109,10 +109,6 @@ github.com/jcmturner/gokrb5/v8 v8.4.4 h1:x1Sv4HaTpepFkXbt2IkL29DXRf8sOfZXo8eRKh6
github.com/jcmturner/gokrb5/v8 v8.4.4/go.mod h1:1btQEpgT6k+unzCwX1KdWMEwPPkkgBtP+F6aCACiMrs=
github.com/jcmturner/rpc/v2 v2.0.3 h1:7FXXj8Ti1IaVFpSAziCZWNzbNuZmnvw/i6CqLNdWfZY=
github.com/jcmturner/rpc/v2 v2.0.3/go.mod h1:VUJYCIDm3PVOEHw8sgt091/20OJjskO/YJki3ELg/Hc=
github.com/jmespath/go-jmespath v0.4.0 h1:BEgLn5cpjn8UN1mAw4NjwDrS35OdebyEtFe+9YPoQUg=
github.com/jmespath/go-jmespath v0.4.0/go.mod h1:T8mJZnbsbmF+m6zOOFylbeCJqk5+pHWvzYPziyZiYoo=
github.com/jmespath/go-jmespath/internal/testify v1.5.1 h1:shLQSRRSCCPj3f2gpwzGwWFoC7ycTf1rcQZHOlsJ6N8=
github.com/jmespath/go-jmespath/internal/testify v1.5.1/go.mod h1:L3OGu8Wl2/fWfCI6z80xFu9LTZmf1ZRjMHUOPmWr69U=
github.com/klauspost/compress v1.15.9/go.mod h1:PhcZ0MbTNciWF3rruxRgKxI5NkcHHrHUDtV4Yw2GlzU=
github.com/klauspost/compress v1.17.9 h1:6KIumPrER1LHsvBVuDa0r5xaG0Es51mhhB9BQB2qeMA=
github.com/klauspost/compress v1.17.9/go.mod h1:Di0epgTjJY877eYKx5yC51cX2A2Vl2ibi7bDH9ttBbw=
@@ -123,12 +119,12 @@ github.com/mattn/go-colorable v0.1.13/go.mod h1:7S9/ev0klgBDR4GtXTXX8a3vIGJpMovk
github.com/mattn/go-isatty v0.0.16/go.mod h1:kYGgaQfpe5nmfYZH+SKPsOc2e4SrIfOl2e/yFXSvRLM=
github.com/mattn/go-isatty v0.0.20 h1:xfD0iDuEKnDkl03q4limB+vH+GxLEtL/jb4xVJSWWEY=
github.com/mattn/go-isatty v0.0.20/go.mod h1:W+V8PltTTMOvKvAeJH7IuucS94S2C6jfK/D7dTCTo3Y=
github.com/mattn/go-runewidth v0.0.15 h1:UNAjwbU9l54TA3KzvqLGxwWjHmMgBUVhBiTjelZgg3U=
github.com/mattn/go-runewidth v0.0.15/go.mod h1:Jdepj2loyihRzMpdS35Xk/zdY8IAYHsh153qUoGf23w=
github.com/mattn/go-runewidth v0.0.16 h1:E5ScNMtiwvlvB5paMFdw9p4kSQzbXFikJ5SQO6TULQc=
github.com/mattn/go-runewidth v0.0.16/go.mod h1:Jdepj2loyihRzMpdS35Xk/zdY8IAYHsh153qUoGf23w=
github.com/mitchellh/go-homedir v1.1.0 h1:lukF9ziXFxDFPkA1vsr5zpc1XuPDn/wFntq5mG+4E0Y=
github.com/mitchellh/go-homedir v1.1.0/go.mod h1:SfyaCUpYCn1Vlf4IUYiD9fPX4A5wJrkLzIz1N1q0pr0=
github.com/nats-io/nats.go v1.36.0 h1:suEUPuWzTSse/XhESwqLxXGuj8vGRuPRoG7MoRN/qyU=
github.com/nats-io/nats.go v1.36.0/go.mod h1:Ubdu4Nh9exXdSz0RVWRFBbRfrbSxOYd26oF0wkWclB8=
github.com/nats-io/nats.go v1.37.0 h1:07rauXbVnnJvv1gfIyghFEo6lUcYRY0WXc3x7x0vUxE=
github.com/nats-io/nats.go v1.37.0/go.mod h1:Ubdu4Nh9exXdSz0RVWRFBbRfrbSxOYd26oF0wkWclB8=
github.com/nats-io/nkeys v0.4.7 h1:RwNJbbIdYCoClSDNY7QVKZlyb/wfT6ugvFCiKy6vDvI=
github.com/nats-io/nkeys v0.4.7/go.mod h1:kqXRgRDPlGy7nGaEDMuYzmiJCIAAWDK0IMBtDmGD0nc=
github.com/nats-io/nuid v1.0.1 h1:5iA8DT8V7q8WK2EScv2padNa/rTESc1KdnPw4TC2paw=
@@ -139,8 +135,8 @@ github.com/pierrec/lz4/v4 v4.1.21/go.mod h1:gZWDp/Ze/IJXGXf23ltt2EXimqmTUXEy0GFu
github.com/pkg/browser v0.0.0-20240102092130-5ac0b6a4141c h1:+mdjkGKdHQG3305AYmdv1U2eRNDiU2ErMBj1gwrq8eQ=
github.com/pkg/browser v0.0.0-20240102092130-5ac0b6a4141c/go.mod h1:7rwL4CYBLnjLxUqIJNnCWiEdr3bn6IUYi15bNlnbCCU=
github.com/pkg/errors v0.9.1/go.mod h1:bwawxfHBFNV+L2hUp1rHADufV3IMtnDRdf1r5NINEl0=
github.com/pkg/xattr v0.4.9 h1:5883YPCtkSd8LFbs13nXplj9g9tlrwoJRjgpgMu1/fE=
github.com/pkg/xattr v0.4.9/go.mod h1:di8WF84zAKk8jzR1UBTEWh9AUlIZZ7M/JNt8e9B6ktU=
github.com/pkg/xattr v0.4.10 h1:Qe0mtiNFHQZ296vRgUjRCoPHPqH7VdTOrZx3g0T+pGA=
github.com/pkg/xattr v0.4.10/go.mod h1:di8WF84zAKk8jzR1UBTEWh9AUlIZZ7M/JNt8e9B6ktU=
github.com/pmezard/go-difflib v1.0.0 h1:4DBwDE0NGyQoBHbLQYPwSUPoCMWR5BEzIk/f1lZbAQM=
github.com/pmezard/go-difflib v1.0.0/go.mod h1:iKH77koFhYxTK1pcRnkKkqfTogsbg7gZNVY4sRDYZ/4=
github.com/rivo/uniseg v0.2.0/go.mod h1:J6wj4VEh+S6ZtnVlnTBMWIodfgj8LQOQFoIToxlJtxc=
@@ -166,8 +162,8 @@ github.com/stretchr/testify v1.8.0/go.mod h1:yNjHg4UonilssWZ8iaSj1OCr/vHnekPRkoO
github.com/stretchr/testify v1.8.1/go.mod h1:w2LPCIKwWwSfY2zedu0+kehJoqGctiVI29o6fzry7u4=
github.com/stretchr/testify v1.9.0 h1:HtqpIVDClZ4nwg75+f6Lvsy/wHu+3BoSGCbBAcpTsTg=
github.com/stretchr/testify v1.9.0/go.mod h1:r2ic/lqez/lEtzL7wO/rwa5dbSLXVDPFyf8C91i36aY=
github.com/urfave/cli/v2 v2.27.2 h1:6e0H+AkS+zDckwPCUrZkKX38mRaau4nL2uipkJpbkcI=
github.com/urfave/cli/v2 v2.27.2/go.mod h1:g0+79LmHHATl7DAcHO99smiR/T7uGLw84w8Y42x+4eM=
github.com/urfave/cli/v2 v2.27.4 h1:o1owoI+02Eb+K107p27wEX9Bb8eqIoZCfLXloLUSWJ8=
github.com/urfave/cli/v2 v2.27.4/go.mod h1:m4QzxcD2qpra4z7WhzEGn74WZLViBnMpb1ToCAKdGRQ=
github.com/valyala/bytebufferpool v1.0.0 h1:GqA5TC/0021Y/b9FG4Oi9Mr3q7XYx6KllzawFIhcdPw=
github.com/valyala/bytebufferpool v1.0.0/go.mod h1:6bBcMArwyJ5K/AmCkWv1jt77kVWyCJ6HpOuEn7z0Csc=
github.com/valyala/fasthttp v1.55.0 h1:Zkefzgt6a7+bVKHnu/YaYSOPfNYNisSVBo/unVCf8k8=
@@ -182,8 +178,8 @@ github.com/xdg-go/scram v1.1.2 h1:FHX5I5B4i4hKRVRBCFRxq1iQRej7WO3hhBuJf+UUySY=
github.com/xdg-go/scram v1.1.2/go.mod h1:RT/sEzTbU5y00aCK8UOx6R7YryM0iF1N2MOmC3kKLN4=
github.com/xdg-go/stringprep v1.0.4 h1:XLI/Ng3O1Atzq0oBs3TWm+5ZVgkq2aqdlvP9JtoZ6c8=
github.com/xdg-go/stringprep v1.0.4/go.mod h1:mPGuuIYwz7CmR2bT9j4GbQqutWS1zV24gijq1dTyGkM=
github.com/xrash/smetrics v0.0.0-20240312152122-5f08fbb34913 h1:+qGGcbkzsfDQNPPe9UDgpxAWQrhbbBXOYJFQDq/dtJw=
github.com/xrash/smetrics v0.0.0-20240312152122-5f08fbb34913/go.mod h1:4aEEwZQutDLsQv2Deui4iYQ6DWTxR14g6m8Wv88+Xqk=
github.com/xrash/smetrics v0.0.0-20240521201337-686a1a2994c1 h1:gEOO8jv9F4OT7lGCjxCBTO/36wtF6j2nSip77qHd4x4=
github.com/xrash/smetrics v0.0.0-20240521201337-686a1a2994c1/go.mod h1:Ohn+xnUBiLI6FVj/9LpzZWtj1/D6lUovWYBkxHVV3aM=
github.com/yuin/goldmark v1.3.5/go.mod h1:mwnBkeHKe2W/ZEtQ+71ViKU8L12m81fl3OWwC1Zlc8k=
github.com/yuin/goldmark v1.4.13/go.mod h1:6yULJ656Px+3vBD8DxQVa3kxgyrAnzto9xy5taEt/CY=
golang.org/x/crypto v0.0.0-20190308221718-c2843e01d9a2/go.mod h1:djNgcEr1/C05ACkg1iLfiJU5Ep61QUkGW8qpdssI0+w=
@@ -193,8 +189,8 @@ golang.org/x/crypto v0.6.0/go.mod h1:OFC/31mSvZgRz0V1QTNCzfAI1aIRzbiufJtkMIlEp58
golang.org/x/crypto v0.14.0/go.mod h1:MVFd36DqK4CsrnJYDkBA3VC4m2GkXAM0PvzMCn4JQf4=
golang.org/x/crypto v0.19.0/go.mod h1:Iy9bg/ha4yyC70EfRS8jz+B6ybOBKMaSxLj6P6oBDfU=
golang.org/x/crypto v0.21.0/go.mod h1:0BP7YvVV9gBbVKyeTG0Gyn+gZm94bibOW5BjDEYAOMs=
golang.org/x/crypto v0.25.0 h1:ypSNr+bnYL2YhwoMt2zPxHFmbAN1KZs/njMG3hxUp30=
golang.org/x/crypto v0.25.0/go.mod h1:T+wALwcMOSE0kXgUAnPAHqTLW+XHgcELELW8VaDgm/M=
golang.org/x/crypto v0.26.0 h1:RrRspgV4mU+YwB4FYnuBoKsUapNIL5cohGAmSH3azsw=
golang.org/x/crypto v0.26.0/go.mod h1:GY7jblb9wI+FOo5y8/S2oY4zWP07AkOJ4+jxCqdqn54=
golang.org/x/mod v0.4.2/go.mod h1:s0Qsj1ACt9ePp/hMypM3fl4fZqREWJwdYDEqhRiZZUA=
golang.org/x/mod v0.6.0-dev.0.20220419223038-86c51ed26bb4/go.mod h1:jJ57K6gSWd91VN4djpZkiMVwK6gcyfeH4XE8wZrZaV4=
golang.org/x/mod v0.8.0/go.mod h1:iBbtSCu2XBx23ZKBPSOrRkjjQPZFPuis4dIYUhu/chs=
@@ -210,8 +206,8 @@ golang.org/x/net v0.10.0/go.mod h1:0qNGK6F8kojg2nk9dLZ2mShWaEBan6FAoqfSigmmuDg=
golang.org/x/net v0.17.0/go.mod h1:NxSsAGuq816PNPmqtQdLE42eU2Fs7NoRIZrHJAlaCOE=
golang.org/x/net v0.21.0/go.mod h1:bIjVDfnllIU7BJ2DNgfnXvpSvtn8VRwhlsaeUTyUS44=
golang.org/x/net v0.22.0/go.mod h1:JKghWKKOSdJwpW2GEx0Ja7fmaKnMsbu+MWVZTokSYmg=
golang.org/x/net v0.27.0 h1:5K3Njcw06/l2y9vpGCSdcxWOYHOUk3dVNGDXN+FvAys=
golang.org/x/net v0.27.0/go.mod h1:dDi0PyhWNoiUOrAS8uXv/vnScO4wnHQO4mj9fn/RytE=
golang.org/x/net v0.28.0 h1:a9JDOJc5GMUJ0+UDqmLT86WiEy7iWyIhz8gz8E4e5hE=
golang.org/x/net v0.28.0/go.mod h1:yqtgsTWOOnlGLG9GFRrK3++bGOUEkNBoHZc8MEDWPNg=
golang.org/x/sync v0.0.0-20190423024810-112230192c58/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.0.0-20210220032951-036812b2e83c/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.0.0-20220722155255-886fb9371eb4/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
@@ -235,8 +231,8 @@ golang.org/x/sys v0.8.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.13.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.17.0/go.mod h1:/VUhepiaJMQUp4+oa/7Zr1D23ma6VTLIYjOOTFZPUcA=
golang.org/x/sys v0.18.0/go.mod h1:/VUhepiaJMQUp4+oa/7Zr1D23ma6VTLIYjOOTFZPUcA=
golang.org/x/sys v0.22.0 h1:RI27ohtqKCnwULzJLqkv897zojh5/DwS/ENaMzUOaWI=
golang.org/x/sys v0.22.0/go.mod h1:/VUhepiaJMQUp4+oa/7Zr1D23ma6VTLIYjOOTFZPUcA=
golang.org/x/sys v0.24.0 h1:Twjiwq9dn6R1fQcyiK+wQyHWfaz/BJB+YIpzU/Cv3Xg=
golang.org/x/sys v0.24.0/go.mod h1:/VUhepiaJMQUp4+oa/7Zr1D23ma6VTLIYjOOTFZPUcA=
golang.org/x/term v0.0.0-20201126162022-7de9c90e9dd1/go.mod h1:bj7SfCRtBDWHUb9snDiAeCFNEtKQo2Wmx5Cou7ajbmo=
golang.org/x/term v0.0.0-20210927222741-03fcf44c2211/go.mod h1:jbD1KX2456YbFQfuXm/mYQcufACuNUgVhRMnK/tPxf8=
golang.org/x/term v0.5.0/go.mod h1:jMB1sMXY+tzblOD4FWmEbocvup2/aLOaQEp7JmGp78k=
@@ -252,10 +248,10 @@ golang.org/x/text v0.7.0/go.mod h1:mrYo+phRRbMaCq/xk9113O4dZlRixOauAjOtrjsXDZ8=
golang.org/x/text v0.9.0/go.mod h1:e1OnstbJyHTd6l/uOt8jFFHp6TRDWZR/bV3emEE/zU8=
golang.org/x/text v0.13.0/go.mod h1:TvPlkZtksWOMsz7fbANvkp4WM8x/WCo/om8BMLbz+aE=
golang.org/x/text v0.14.0/go.mod h1:18ZOQIKpY8NJVqYksKHtTdi31H5itFRjB5/qKTNYzSU=
golang.org/x/text v0.16.0 h1:a94ExnEXNtEwYLGJSIUxnWoxoRz/ZcCsV63ROupILh4=
golang.org/x/text v0.16.0/go.mod h1:GhwF1Be+LQoKShO3cGOHzqOgRrGaYc9AvblQOmPVHnI=
golang.org/x/time v0.5.0 h1:o7cqy6amK/52YcAKIPlM3a+Fpj35zvRj2TP+e1xFSfk=
golang.org/x/time v0.5.0/go.mod h1:3BpzKBy/shNhVucY/MWOyx10tF3SFh9QdLuxbVysPQM=
golang.org/x/text v0.17.0 h1:XtiM5bkSOt+ewxlOE/aE/AKEHibwj/6gvWMl9Rsh0Qc=
golang.org/x/text v0.17.0/go.mod h1:BuEKDfySbSR4drPmRPG/7iBdf8hvFMuRexcpahXilzY=
golang.org/x/time v0.6.0 h1:eTDhh4ZXt5Qf0augr54TN6suAUudPcawVZeIAPU7D4U=
golang.org/x/time v0.6.0/go.mod h1:3BpzKBy/shNhVucY/MWOyx10tF3SFh9QdLuxbVysPQM=
golang.org/x/tools v0.0.0-20180917221912-90fa682c2a6e/go.mod h1:n7NCudcB/nEzxVGmLbDWY5pfWTLqBcC2KZ6jyYvM4mQ=
golang.org/x/tools v0.0.0-20191119224855-298f0cb1881e/go.mod h1:b+2E5dAYhXwXZwtnZ6UAqBI28+e2cm9otk0dWdXHAEo=
golang.org/x/tools v0.1.1/go.mod h1:o0xws9oXOQQZyjljx8fwUC0k7L1pTE6eaCbjGeHmOkk=
@@ -266,9 +262,6 @@ golang.org/x/xerrors v0.0.0-20191011141410-1b5146add898/go.mod h1:I/5z698sn9Ka8T
golang.org/x/xerrors v0.0.0-20200804184101-5ec99f83aff1/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
gopkg.in/check.v1 v0.0.0-20161208181325-20d25e280405/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0=
gopkg.in/yaml.v2 v2.2.2/go.mod h1:hI93XBmqTisBFMUTm0b8Fm+jr3Dg1NNxqwp+5A1VGuI=
gopkg.in/yaml.v2 v2.2.8/go.mod h1:hI93XBmqTisBFMUTm0b8Fm+jr3Dg1NNxqwp+5A1VGuI=
gopkg.in/yaml.v2 v2.4.0 h1:D8xgwECY7CYvx+Y2n4sBz93Jn9JRvxdiyyo8CTfuKaY=
gopkg.in/yaml.v2 v2.4.0/go.mod h1:RDklbk79AGWmwhnvt/jBztapEOGDOx6ZbXqjP6csGnQ=
gopkg.in/yaml.v3 v3.0.0-20200313102051-9f266ea9e77c/go.mod h1:K4uyk7z7BCEPqu6E+C64Yfv1cQ7kz7rIZviUmN+EgEM=
gopkg.in/yaml.v3 v3.0.1 h1:fxVm/GzAzEWqLHuvctI91KS9hhNmmWOoWu0XTYJS7CA=
gopkg.in/yaml.v3 v3.0.1/go.mod h1:K4uyk7z7BCEPqu6E+C64Yfv1cQ7kz7rIZviUmN+EgEM=

View File

@@ -0,0 +1,19 @@
# Versity Gateway Dashboard
This project is a Grafana dashboard that visualizes the six metrics emitted by the Versity Gateway.
The Versity Gateway emits metrics in the statsd format; Telegraf is used as the bridge from statsd to InfluxDB.
The dashboard queries are written in the InfluxQL query language.
## Usage
From the root of this repository, run `docker compose -f docker-compose-metrics.yml up` to start the stack.
To shut it down, run `docker compose -f docker-compose-metrics.yml down -v`.
The Grafana database is intentionally not destroyed when the containers are shut down; the InfluxDB database, however, is.
The dashboard is automatically provisioned at container startup and is visible at http://localhost:3000 with username: `admin` and password: `admin`.
To use the gateway and generate metrics, `source metrics-exploration/aws_env_setup.sh` and then use the AWS CLI as usual.
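As a quick smoke test (a sketch only; the bucket and object names below are just examples), the following commands push a little traffic through the gateway so the dashboard panels have data to display:

```sh
# Point the AWS CLI at the local gateway (credentials and endpoint come from the env script)
source metrics-exploration/aws_env_setup.sh

# Generate some example traffic: create a bucket, upload, list, and delete an object
aws s3 mb s3://metrics-demo
aws s3 cp docker-compose-metrics.yml s3://metrics-demo/example.yaml
aws s3 ls s3://metrics-demo
aws s3 rm s3://metrics-demo/example.yaml
```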

View File

@@ -0,0 +1,4 @@
export AWS_SECRET_ACCESS_KEY=password
export AWS_ACCESS_KEY_ID=user
export AWS_ENDPOINT_URL=http://127.0.0.1:7070
export AWS_REGION=us-east-1

View File

@@ -0,0 +1,64 @@
services:
telegraf:
image: telegraf
container_name: telegraf
restart: always
volumes:
- ./telegraf.conf:/etc/telegraf/telegraf.conf:ro
depends_on:
- influxdb
links:
- influxdb
ports:
- '8125:8125/udp'
influxdb:
image: influxdb
container_name: influxdb
restart: always
environment:
- DOCKER_INFLUXDB_INIT_MODE=setup
- DOCKER_INFLUXDB_INIT_USERNAME=admin
- DOCKER_INFLUXDB_INIT_PASSWORD=adminpass
- DOCKER_INFLUXDB_INIT_ORG=myorg
- DOCKER_INFLUXDB_INIT_BUCKET=metrics
- DOCKER_INFLUXDB_INIT_ADMIN_TOKEN=my-super-secret-auth-token
ports:
- '8086:8086'
volumes:
- influxdb_data:/var/lib/influxdb
grafana:
image: grafana/grafana
container_name: grafana-server
restart: always
depends_on:
- influxdb
environment:
- GF_SECURITY_ADMIN_USER=admin
- GF_SECURITY_ADMIN_PASSWORD=admin
- GF_INSTALL_PLUGINS=
links:
- influxdb
ports:
- '3000:3000'
volumes:
- ./grafana_data:/var/lib/grafana
- ./provisioning:/etc/grafana/provisioning
versitygw:
image: versity/versitygw:latest
container_name: versitygw
ports:
- "7070:7070"
environment:
- ROOT_ACCESS_KEY=user
- ROOT_SECRET_KEY=password
- VGW_METRICS_STATSD_SERVERS=telegraf:8125
depends_on:
- telegraf
command: >
posix /tmp/vgw
volumes:
influxdb_data: {}

View File

@@ -0,0 +1,25 @@
apiVersion: 1
providers:
# <string> a unique provider name. Required
- name: 'influxql'
# <int> Org id. Default to 1
orgId: 1
# <string> name of the dashboard folder.
folder: 'influxql'
# <string> folder UID. will be automatically generated if not specified
folderUid: ''
# <string> provider type. Default to 'file'
type: file
# <bool> disable dashboard deletion
disableDeletion: false
# <int> how often Grafana will scan for changed dashboards
updateIntervalSeconds: 10
# <bool> allow updating provisioned dashboards from the UI
allowUiUpdates: true
options:
# <string, required> path to dashboard files on disk. Required when using the 'file' type
path: /etc/grafana/provisioning/dashboards/influxql
# <bool> use folder names from filesystem to create folders in Grafana
foldersFromFilesStructure: true

File diff suppressed because it is too large

View File

@@ -0,0 +1,13 @@
apiVersion: 1
datasources:
- name: influxdb
type: influxdb
isDefault: true
access: proxy
url: http://influxdb:8086
jsonData:
dbName: 'metrics'
httpHeaderName1: 'Authorization'
secureJsonData:
httpHeaderValue1: 'Token my-super-secret-auth-token'

View File

@@ -0,0 +1,34 @@
[global_tags]
[agent]
debug = true
quiet = false
interval = "60s"
round_interval = true
metric_batch_size = 1000
metric_buffer_limit = 10000
collection_jitter = "0s"
flush_interval = "10s"
flush_jitter = "0s"
precision = ""
hostname = "versitygw"
omit_hostname = false
[[outputs.file]]
files = ["stdout"]
[[outputs.influxdb_v2]]
urls = ["http://influxdb:8086"]
timeout = "5s"
token = "my-super-secret-auth-token"
organization = "myorg"
bucket = "metrics"
[[inputs.statsd]]
protocol = "udp4"
service_address = ":8125"
percentiles = [90]
metric_separator = "_"
datadog_extensions = false
allowed_pending_messages = 10000
percentile_limit = 1000

View File

@@ -0,0 +1,6 @@
#!/usr/bin/env bash
. ./aws_env_setup.sh
aws s3 mb s3://test
aws s3 cp docker-compose.yml s3://test/test.yaml

View File

@@ -19,12 +19,13 @@ import (
"github.com/versity/versitygw/auth"
"github.com/versity/versitygw/backend"
"github.com/versity/versitygw/s3api/controllers"
"github.com/versity/versitygw/s3log"
)
type S3AdminRouter struct{}
func (ar *S3AdminRouter) Init(app *fiber.App, be backend.Backend, iam auth.IAMService) {
controller := controllers.NewAdminController(iam, be)
func (ar *S3AdminRouter) Init(app *fiber.App, be backend.Backend, iam auth.IAMService, logger s3log.AuditLogger) {
controller := controllers.NewAdminController(iam, be, logger)
// CreateUser admin api
app.Patch("/create-user", controller.CreateUser)

View File

@@ -22,6 +22,7 @@ import (
"github.com/versity/versitygw/auth"
"github.com/versity/versitygw/backend"
"github.com/versity/versitygw/s3api/middlewares"
"github.com/versity/versitygw/s3log"
)
type S3AdminServer struct {
@@ -32,7 +33,7 @@ type S3AdminServer struct {
cert *tls.Certificate
}
func NewAdminServer(app *fiber.App, be backend.Backend, root middlewares.RootUserConfig, port, region string, iam auth.IAMService, opts ...AdminOpt) *S3AdminServer {
func NewAdminServer(app *fiber.App, be backend.Backend, root middlewares.RootUserConfig, port, region string, iam auth.IAMService, l s3log.AuditLogger, opts ...AdminOpt) *S3AdminServer {
server := &S3AdminServer{
app: app,
backend: be,
@@ -46,13 +47,13 @@ func NewAdminServer(app *fiber.App, be backend.Backend, root middlewares.RootUse
// Logging middlewares
app.Use(logger.New())
app.Use(middlewares.DecodeURL(nil, nil))
app.Use(middlewares.DecodeURL(l, nil))
// Authentication middlewares
app.Use(middlewares.VerifyV4Signature(root, iam, nil, nil, region, false))
app.Use(middlewares.VerifyMD5Body(nil))
app.Use(middlewares.VerifyV4Signature(root, iam, l, nil, region, false))
app.Use(middlewares.VerifyMD5Body(l))
server.router.Init(app, be, iam)
server.router.Init(app, be, iam, l)
return server
}

View File

@@ -16,146 +16,310 @@ package controllers
import (
"encoding/json"
"errors"
"fmt"
"strings"
"github.com/aws/aws-sdk-go-v2/service/s3/types"
"github.com/gofiber/fiber/v2"
"github.com/versity/versitygw/auth"
"github.com/versity/versitygw/backend"
"github.com/versity/versitygw/s3log"
)
type AdminController struct {
iam auth.IAMService
be backend.Backend
l s3log.AuditLogger
}
func NewAdminController(iam auth.IAMService, be backend.Backend) AdminController {
return AdminController{iam: iam, be: be}
func NewAdminController(iam auth.IAMService, be backend.Backend, l s3log.AuditLogger) AdminController {
return AdminController{iam: iam, be: be, l: l}
}
func (c AdminController) CreateUser(ctx *fiber.Ctx) error {
acct := ctx.Locals("account").(auth.Account)
if acct.Role != "admin" {
return ctx.Status(fiber.StatusForbidden).SendString("access denied: only admin users have access to this resource")
return sendResponse(ctx, errors.New("access denied: only admin users have access to this resource"), nil,
&metaOptions{
logger: c.l,
status: fiber.StatusForbidden,
action: "admin:CreateUser",
})
}
var usr auth.Account
err := json.Unmarshal(ctx.Body(), &usr)
if err != nil {
return ctx.Status(fiber.StatusBadRequest).SendString(fmt.Errorf("failed to parse request body: %w", err).Error())
return sendResponse(ctx, fmt.Errorf("failed to parse request body: %w", err), nil,
&metaOptions{
logger: c.l,
status: fiber.StatusBadRequest,
action: "admin:CreateUser",
})
}
if usr.Role != auth.RoleAdmin && usr.Role != auth.RoleUser && usr.Role != auth.RoleUserPlus {
return ctx.Status(fiber.StatusBadRequest).SendString("invalid parameters: user role have to be one of the following: 'user', 'admin', 'userplus'")
return sendResponse(ctx, errors.New("invalid parameters: user role have to be one of the following: 'user', 'admin', 'userplus'"), nil,
&metaOptions{
logger: c.l,
status: fiber.StatusBadRequest,
action: "admin:CreateUser",
})
}
err = c.iam.CreateAccount(usr)
if err != nil {
status := fiber.StatusInternalServerError
msg := fmt.Errorf("failed to create user: %w", err).Error()
err = fmt.Errorf("failed to create user: %w", err)
if strings.Contains(msg, "user already exists") {
if strings.Contains(err.Error(), "user already exists") {
status = fiber.StatusConflict
}
return ctx.Status(status).SendString(msg)
return sendResponse(ctx, err, nil,
&metaOptions{
status: status,
logger: c.l,
action: "admin:CreateUser",
})
}
return ctx.Status(fiber.StatusCreated).SendString("The user has been created successfully")
return sendResponse(ctx, nil, "The user has been created successfully", &metaOptions{
status: fiber.StatusCreated,
logger: c.l,
action: "admin:CreateUser",
})
}
func (c AdminController) UpdateUser(ctx *fiber.Ctx) error {
acct := ctx.Locals("account").(auth.Account)
if acct.Role != "admin" {
return ctx.Status(fiber.StatusForbidden).SendString("access denied: only admin users have access to this resource")
return sendResponse(ctx, errors.New("access denied: only admin users have access to this resource"), nil,
&metaOptions{
logger: c.l,
status: fiber.StatusForbidden,
action: "admin:UpdateUser",
})
}
access := ctx.Query("access")
if access == "" {
return ctx.Status(fiber.StatusBadRequest).SendString("missing user access parameter")
return sendResponse(ctx, errors.New("missing user access parameter"), nil,
&metaOptions{
status: fiber.StatusBadRequest,
logger: c.l,
action: "admin:UpdateUser",
})
}
var props auth.MutableProps
if err := json.Unmarshal(ctx.Body(), &props); err != nil {
return ctx.Status(fiber.StatusBadRequest).SendString(fmt.Errorf("invalid request body %w", err).Error())
return sendResponse(ctx, fmt.Errorf("invalid request body %w", err), nil,
&metaOptions{
status: fiber.StatusBadRequest,
logger: c.l,
action: "admin:UpdateUser",
})
}
err := c.iam.UpdateUserAccount(access, props)
if err != nil {
status := fiber.StatusInternalServerError
msg := fmt.Errorf("failed to update user account: %w", err).Error()
err = fmt.Errorf("failed to update user account: %w", err)
if strings.Contains(msg, "user not found") {
if strings.Contains(err.Error(), "user not found") {
status = fiber.StatusNotFound
}
return ctx.Status(status).SendString(msg)
return sendResponse(ctx, err, nil,
&metaOptions{
status: status,
logger: c.l,
action: "admin:UpdateUser",
})
}
return ctx.SendString("the user has been updated successfully")
return sendResponse(ctx, nil, "the user has been updated successfully",
&metaOptions{
logger: c.l,
action: "admin:UpdateUser",
})
}
func (c AdminController) DeleteUser(ctx *fiber.Ctx) error {
access := ctx.Query("access")
acct := ctx.Locals("account").(auth.Account)
if acct.Role != "admin" {
return ctx.Status(fiber.StatusForbidden).SendString("access denied: only admin users have access to this resource")
return sendResponse(ctx, errors.New("access denied: only admin users have access to this resource"), nil,
&metaOptions{
logger: c.l,
status: fiber.StatusForbidden,
action: "admin:DeleteUser",
})
}
err := c.iam.DeleteUserAccount(access)
if err != nil {
return err
return sendResponse(ctx, err, nil,
&metaOptions{
logger: c.l,
action: "admin:DeleteUser",
})
}
return ctx.SendString("The user has been deleted successfully")
return sendResponse(ctx, nil, "The user has been deleted successfully",
&metaOptions{
logger: c.l,
action: "admin:DeleteUser",
})
}
func (c AdminController) ListUsers(ctx *fiber.Ctx) error {
acct := ctx.Locals("account").(auth.Account)
if acct.Role != "admin" {
return ctx.Status(fiber.StatusForbidden).SendString("access denied: only admin users have access to this resource")
return sendResponse(ctx, errors.New("access denied: only admin users have access to this resource"), nil,
&metaOptions{
logger: c.l,
status: fiber.StatusForbidden,
action: "admin:ListUsers",
})
}
accs, err := c.iam.ListUserAccounts()
if err != nil {
return err
}
return ctx.JSON(accs)
return sendResponse(ctx, err, accs,
&metaOptions{
logger: c.l,
action: "admin:ListUsers",
})
}
func (c AdminController) ChangeBucketOwner(ctx *fiber.Ctx) error {
acct := ctx.Locals("account").(auth.Account)
if acct.Role != "admin" {
return ctx.Status(fiber.StatusForbidden).SendString("access denied: only admin users have access to this resource")
return sendResponse(ctx, errors.New("access denied: only admin users have access to this resource"), nil,
&metaOptions{
logger: c.l,
status: fiber.StatusForbidden,
action: "admin:ChangeBucketOwner",
})
}
owner := ctx.Query("owner")
bucket := ctx.Query("bucket")
accs, err := auth.CheckIfAccountsExist([]string{owner}, c.iam)
if err != nil {
return err
return sendResponse(ctx, err, nil,
&metaOptions{
logger: c.l,
action: "admin:ChangeBucketOwner",
})
}
if len(accs) > 0 {
return ctx.Status(fiber.StatusNotFound).SendString("user specified as the new bucket owner does not exist")
return sendResponse(ctx, errors.New("user specified as the new bucket owner does not exist"), nil,
&metaOptions{
logger: c.l,
action: "admin:ChangeBucketOwner",
status: fiber.StatusNotFound,
})
}
err = c.be.ChangeBucketOwner(ctx.Context(), bucket, owner)
acl := auth.ACL{
Owner: owner,
Grantees: []auth.Grantee{
{
Permission: types.PermissionFullControl,
Access: owner,
Type: types.TypeCanonicalUser,
},
},
}
aclParsed, err := json.Marshal(acl)
if err != nil {
return err
return sendResponse(ctx, fmt.Errorf("failed to marshal the bucket acl: %w", err), nil,
&metaOptions{
logger: c.l,
action: "admin:ChangeBucketOwner",
})
}
return ctx.SendString("Bucket owner has been updated successfully")
err = c.be.ChangeBucketOwner(ctx.Context(), bucket, aclParsed)
return sendResponse(ctx, err, "Bucket owner has been updated successfully",
&metaOptions{
logger: c.l,
action: "admin:ChangeBucketOwner",
})
}
func (c AdminController) ListBuckets(ctx *fiber.Ctx) error {
acct := ctx.Locals("account").(auth.Account)
if acct.Role != "admin" {
return ctx.Status(fiber.StatusForbidden).SendString("access denied: only admin users have access to this resource")
return sendResponse(ctx, errors.New("access denied: only admin users have access to this resource"), nil,
&metaOptions{
logger: c.l,
status: fiber.StatusForbidden,
action: "admin:ListBuckets",
})
}
buckets, err := c.be.ListBucketsAndOwners(ctx.Context())
return sendResponse(ctx, err, buckets,
&metaOptions{
logger: c.l,
action: "admin:ListBuckets",
})
}
type metaOptions struct {
action string
status int
logger s3log.AuditLogger
}
func sendResponse(ctx *fiber.Ctx, err error, data any, m *metaOptions) error {
status := m.status
if err != nil {
if status == 0 {
status = fiber.StatusInternalServerError
}
if m.logger != nil {
m.logger.Log(ctx, err, []byte(err.Error()), s3log.LogMeta{
Action: m.action,
HttpStatus: status,
})
}
return ctx.Status(status).SendString(err.Error())
}
if status == 0 {
status = fiber.StatusOK
}
msg, ok := data.(string)
if ok {
if m.logger != nil {
m.logger.Log(ctx, nil, []byte(msg), s3log.LogMeta{
Action: m.action,
HttpStatus: status,
})
}
return ctx.Status(status).SendString(msg)
}
dataJSON, err := json.Marshal(data)
if err != nil {
return err
}
return ctx.JSON(buckets)
if m.logger != nil {
m.logger.Log(ctx, nil, dataJSON, s3log.LogMeta{
HttpStatus: status,
Action: m.action,
})
}
ctx.Set(fiber.HeaderContentType, fiber.MIMEApplicationJSON)
return ctx.Status(status).Send(dataJSON)
}

View File

@@ -407,7 +407,7 @@ func TestAdminController_ChangeBucketOwner(t *testing.T) {
}
adminController := AdminController{
be: &BackendMock{
ChangeBucketOwnerFunc: func(contextMoqParam context.Context, bucket, newOwner string) error {
ChangeBucketOwnerFunc: func(contextMoqParam context.Context, bucket string, acl []byte) error {
return nil
},
},

View File

@@ -26,7 +26,7 @@ var _ backend.Backend = &BackendMock{}
// AbortMultipartUploadFunc: func(contextMoqParam context.Context, abortMultipartUploadInput *s3.AbortMultipartUploadInput) error {
// panic("mock out the AbortMultipartUpload method")
// },
// ChangeBucketOwnerFunc: func(contextMoqParam context.Context, bucket string, newOwner string) error {
// ChangeBucketOwnerFunc: func(contextMoqParam context.Context, bucket string, acl []byte) error {
// panic("mock out the ChangeBucketOwner method")
// },
// CompleteMultipartUploadFunc: func(contextMoqParam context.Context, completeMultipartUploadInput *s3.CompleteMultipartUploadInput) (*s3.CompleteMultipartUploadOutput, error) {
@@ -38,7 +38,7 @@ var _ backend.Backend = &BackendMock{}
// CreateBucketFunc: func(contextMoqParam context.Context, createBucketInput *s3.CreateBucketInput, defaultACL []byte) error {
// panic("mock out the CreateBucket method")
// },
// CreateMultipartUploadFunc: func(contextMoqParam context.Context, createMultipartUploadInput *s3.CreateMultipartUploadInput) (*s3.CreateMultipartUploadOutput, error) {
// CreateMultipartUploadFunc: func(contextMoqParam context.Context, createMultipartUploadInput *s3.CreateMultipartUploadInput) (s3response.InitiateMultipartUploadResult, error) {
// panic("mock out the CreateMultipartUpload method")
// },
// DeleteBucketFunc: func(contextMoqParam context.Context, deleteBucketInput *s3.DeleteBucketInput) error {
@@ -116,10 +116,10 @@ var _ backend.Backend = &BackendMock{}
// ListObjectVersionsFunc: func(contextMoqParam context.Context, listObjectVersionsInput *s3.ListObjectVersionsInput) (*s3.ListObjectVersionsOutput, error) {
// panic("mock out the ListObjectVersions method")
// },
// ListObjectsFunc: func(contextMoqParam context.Context, listObjectsInput *s3.ListObjectsInput) (*s3.ListObjectsOutput, error) {
// ListObjectsFunc: func(contextMoqParam context.Context, listObjectsInput *s3.ListObjectsInput) (s3response.ListObjectsResult, error) {
// panic("mock out the ListObjects method")
// },
// ListObjectsV2Func: func(contextMoqParam context.Context, listObjectsV2Input *s3.ListObjectsV2Input) (*s3.ListObjectsV2Output, error) {
// ListObjectsV2Func: func(contextMoqParam context.Context, listObjectsV2Input *s3.ListObjectsV2Input) (s3response.ListObjectsV2Result, error) {
// panic("mock out the ListObjectsV2 method")
// },
// ListPartsFunc: func(contextMoqParam context.Context, listPartsInput *s3.ListPartsInput) (s3response.ListPartsResult, error) {
@@ -187,7 +187,7 @@ type BackendMock struct {
AbortMultipartUploadFunc func(contextMoqParam context.Context, abortMultipartUploadInput *s3.AbortMultipartUploadInput) error
// ChangeBucketOwnerFunc mocks the ChangeBucketOwner method.
ChangeBucketOwnerFunc func(contextMoqParam context.Context, bucket string, newOwner string) error
ChangeBucketOwnerFunc func(contextMoqParam context.Context, bucket string, acl []byte) error
// CompleteMultipartUploadFunc mocks the CompleteMultipartUpload method.
CompleteMultipartUploadFunc func(contextMoqParam context.Context, completeMultipartUploadInput *s3.CompleteMultipartUploadInput) (*s3.CompleteMultipartUploadOutput, error)
@@ -199,7 +199,7 @@ type BackendMock struct {
CreateBucketFunc func(contextMoqParam context.Context, createBucketInput *s3.CreateBucketInput, defaultACL []byte) error
// CreateMultipartUploadFunc mocks the CreateMultipartUpload method.
CreateMultipartUploadFunc func(contextMoqParam context.Context, createMultipartUploadInput *s3.CreateMultipartUploadInput) (*s3.CreateMultipartUploadOutput, error)
CreateMultipartUploadFunc func(contextMoqParam context.Context, createMultipartUploadInput *s3.CreateMultipartUploadInput) (s3response.InitiateMultipartUploadResult, error)
// DeleteBucketFunc mocks the DeleteBucket method.
DeleteBucketFunc func(contextMoqParam context.Context, deleteBucketInput *s3.DeleteBucketInput) error
@@ -277,10 +277,10 @@ type BackendMock struct {
ListObjectVersionsFunc func(contextMoqParam context.Context, listObjectVersionsInput *s3.ListObjectVersionsInput) (*s3.ListObjectVersionsOutput, error)
// ListObjectsFunc mocks the ListObjects method.
ListObjectsFunc func(contextMoqParam context.Context, listObjectsInput *s3.ListObjectsInput) (*s3.ListObjectsOutput, error)
ListObjectsFunc func(contextMoqParam context.Context, listObjectsInput *s3.ListObjectsInput) (s3response.ListObjectsResult, error)
// ListObjectsV2Func mocks the ListObjectsV2 method.
ListObjectsV2Func func(contextMoqParam context.Context, listObjectsV2Input *s3.ListObjectsV2Input) (*s3.ListObjectsV2Output, error)
ListObjectsV2Func func(contextMoqParam context.Context, listObjectsV2Input *s3.ListObjectsV2Input) (s3response.ListObjectsV2Result, error)
// ListPartsFunc mocks the ListParts method.
ListPartsFunc func(contextMoqParam context.Context, listPartsInput *s3.ListPartsInput) (s3response.ListPartsResult, error)
@@ -351,8 +351,8 @@ type BackendMock struct {
ContextMoqParam context.Context
// Bucket is the bucket argument value.
Bucket string
// NewOwner is the newOwner argument value.
NewOwner string
// ACL is the acl argument value.
ACL []byte
}
// CompleteMultipartUpload holds details about calls to the CompleteMultipartUpload method.
CompleteMultipartUpload []struct {
@@ -822,23 +822,23 @@ func (mock *BackendMock) AbortMultipartUploadCalls() []struct {
}
// ChangeBucketOwner calls ChangeBucketOwnerFunc.
func (mock *BackendMock) ChangeBucketOwner(contextMoqParam context.Context, bucket string, newOwner string) error {
func (mock *BackendMock) ChangeBucketOwner(contextMoqParam context.Context, bucket string, acl []byte) error {
if mock.ChangeBucketOwnerFunc == nil {
panic("BackendMock.ChangeBucketOwnerFunc: method is nil but Backend.ChangeBucketOwner was just called")
}
callInfo := struct {
ContextMoqParam context.Context
Bucket string
NewOwner string
ACL []byte
}{
ContextMoqParam: contextMoqParam,
Bucket: bucket,
NewOwner: newOwner,
ACL: acl,
}
mock.lockChangeBucketOwner.Lock()
mock.calls.ChangeBucketOwner = append(mock.calls.ChangeBucketOwner, callInfo)
mock.lockChangeBucketOwner.Unlock()
return mock.ChangeBucketOwnerFunc(contextMoqParam, bucket, newOwner)
return mock.ChangeBucketOwnerFunc(contextMoqParam, bucket, acl)
}
// ChangeBucketOwnerCalls gets all the calls that were made to ChangeBucketOwner.
@@ -848,12 +848,12 @@ func (mock *BackendMock) ChangeBucketOwner(contextMoqParam context.Context, buck
func (mock *BackendMock) ChangeBucketOwnerCalls() []struct {
ContextMoqParam context.Context
Bucket string
NewOwner string
ACL []byte
} {
var calls []struct {
ContextMoqParam context.Context
Bucket string
NewOwner string
ACL []byte
}
mock.lockChangeBucketOwner.RLock()
calls = mock.calls.ChangeBucketOwner
@@ -974,7 +974,7 @@ func (mock *BackendMock) CreateBucketCalls() []struct {
}
// CreateMultipartUpload calls CreateMultipartUploadFunc.
func (mock *BackendMock) CreateMultipartUpload(contextMoqParam context.Context, createMultipartUploadInput *s3.CreateMultipartUploadInput) (*s3.CreateMultipartUploadOutput, error) {
func (mock *BackendMock) CreateMultipartUpload(contextMoqParam context.Context, createMultipartUploadInput *s3.CreateMultipartUploadInput) (s3response.InitiateMultipartUploadResult, error) {
if mock.CreateMultipartUploadFunc == nil {
panic("BackendMock.CreateMultipartUploadFunc: method is nil but Backend.CreateMultipartUpload was just called")
}
@@ -1934,7 +1934,7 @@ func (mock *BackendMock) ListObjectVersionsCalls() []struct {
}
// ListObjects calls ListObjectsFunc.
func (mock *BackendMock) ListObjects(contextMoqParam context.Context, listObjectsInput *s3.ListObjectsInput) (*s3.ListObjectsOutput, error) {
func (mock *BackendMock) ListObjects(contextMoqParam context.Context, listObjectsInput *s3.ListObjectsInput) (s3response.ListObjectsResult, error) {
if mock.ListObjectsFunc == nil {
panic("BackendMock.ListObjectsFunc: method is nil but Backend.ListObjects was just called")
}
@@ -1970,7 +1970,7 @@ func (mock *BackendMock) ListObjectsCalls() []struct {
}
// ListObjectsV2 calls ListObjectsV2Func.
func (mock *BackendMock) ListObjectsV2(contextMoqParam context.Context, listObjectsV2Input *s3.ListObjectsV2Input) (*s3.ListObjectsV2Output, error) {
func (mock *BackendMock) ListObjectsV2(contextMoqParam context.Context, listObjectsV2Input *s3.ListObjectsV2Input) (s3response.ListObjectsV2Result, error) {
if mock.ListObjectsV2Func == nil {
panic("BackendMock.ListObjectsV2Func: method is nil but Backend.ListObjectsV2 was just called")
}

View File

@@ -22,6 +22,7 @@ import (
"io"
"log"
"net/http"
"net/url"
"strconv"
"strings"
"time"
@@ -50,7 +51,8 @@ type S3ApiController struct {
}
const (
iso8601Format = "20060102T150405Z"
iso8601Format = "20060102T150405Z"
defaultContentType = "binary/octet-stream"
)
func New(be backend.Backend, iam auth.IAMService, logger s3log.AuditLogger, evs s3event.S3EventSender, mm *metrics.Manager, debug bool, readonly bool) S3ApiController {
@@ -91,6 +93,10 @@ func (c S3ApiController) GetActions(ctx *fiber.Ctx) error {
if keyEnd != "" {
key = strings.Join([]string{key, keyEnd}, "/")
}
path := ctx.Path()
if path[len(path)-1:] == "/" && key[len(key)-1:] != "/" {
key = key + "/"
}
if ctx.Request().URI().QueryArgs().Has("tagging") {
err := auth.VerifyAccess(ctx.Context(), c.be, auth.AccessOptions{
@@ -413,51 +419,65 @@ func (c S3ApiController) GetActions(ctx *fiber.Ctx) error {
})
}
utils.SetMetaHeaders(ctx, res.Metadata)
var lastmod string
if res.LastModified != nil {
lastmod = res.LastModified.Format(timefmt)
contentType := getstring(res.ContentType)
if contentType == "" {
contentType = defaultContentType
}
acceptRanges := getstring(res.AcceptRanges)
if acceptRanges == "" {
acceptRanges = "bytes"
}
utils.SetResponseHeaders(ctx, []utils.CustomHeader{
hdrs := []utils.CustomHeader{
{
Key: "Content-Type",
Value: getstring(res.ContentType),
},
{
Key: "Content-Encoding",
Value: getstring(res.ContentEncoding),
Value: contentType,
},
{
Key: "ETag",
Value: getstring(res.ETag),
},
{
Key: "Last-Modified",
Value: lastmod,
Key: "accept-ranges",
Value: acceptRanges,
},
{
Key: "x-amz-storage-class",
Value: string(res.StorageClass),
},
{
}
if getstring(res.ContentRange) != "" {
hdrs = append(hdrs, utils.CustomHeader{
Key: "Content-Range",
Value: getstring(res.ContentRange),
},
{
Key: "accept-ranges",
Value: getstring(res.AcceptRanges),
},
})
if res.TagCount != nil {
utils.SetResponseHeaders(ctx, []utils.CustomHeader{
{
Key: "x-amz-tagging-count",
Value: fmt.Sprint(*res.TagCount),
},
})
}
if res.LastModified != nil {
hdrs = append(hdrs, utils.CustomHeader{
Key: "Last-Modified",
Value: res.LastModified.Format(timefmt),
})
}
if getstring(res.ContentEncoding) != "" {
hdrs = append(hdrs, utils.CustomHeader{
Key: "Content-Encoding",
Value: getstring(res.ContentEncoding),
})
}
if res.TagCount != nil {
hdrs = append(hdrs, utils.CustomHeader{
Key: "x-amz-tagging-count",
Value: fmt.Sprint(*res.TagCount),
})
}
if res.StorageClass != "" {
hdrs = append(hdrs, utils.CustomHeader{
Key: "x-amz-storage-class",
Value: string(res.StorageClass),
})
}
// Set x-amz-meta-... headers
utils.SetMetaHeaders(ctx, res.Metadata)
// Set other response headers
utils.SetResponseHeaders(ctx, hdrs)
status := http.StatusOK
if acceptRange != "" {
@@ -941,10 +961,7 @@ func (c S3ApiController) ListActions(ctx *fiber.Ctx) error {
Delimiter: &delimiter,
MaxKeys: &maxkeys,
})
return SendXMLResponse(ctx, struct {
*s3.ListObjectsOutput
XMLName struct{} `xml:"http://s3.amazonaws.com/doc/2006-03-01/ ListBucketResult"`
}{ListObjectsOutput: res}, err,
return SendXMLResponse(ctx, res, err,
&MetaOpts{
Logger: c.logger,
MetricsMng: c.mm,
@@ -1224,7 +1241,7 @@ func (c S3ApiController) PutBucketActions(ctx *fiber.Ctx) error {
if ctx.Request().URI().QueryArgs().Has("acl") {
parsedAcl := ctx.Locals("parsedAcl").(auth.ACL)
var input *s3.PutBucketAclInput
var input *auth.PutBucketAclInput
ownership, err := c.be.GetBucketOwnershipControls(ctx.Context(), bucket)
if err != nil && !errors.Is(err, s3err.GetAPIError(s3err.ErrOwnershipControlsNotFound)) {
@@ -1276,7 +1293,37 @@ func (c S3ApiController) PutBucketActions(ctx *fiber.Ctx) error {
if c.debug {
log.Printf("error unmarshalling access control policy: %v", err)
}
return SendResponse(ctx, s3err.GetAPIError(s3err.ErrMalformedXML),
return SendResponse(ctx, s3err.GetAPIError(s3err.ErrMalformedACL),
&MetaOpts{
Logger: c.logger,
Action: metrics.ActionPutBucketAcl,
BucketOwner: parsedAcl.Owner,
})
}
if accessControlPolicy.Owner == nil ||
accessControlPolicy.Owner.ID == nil ||
*accessControlPolicy.Owner.ID == "" {
if c.debug {
log.Println("empty access control policy owner")
}
return SendResponse(ctx, s3err.GetAPIError(s3err.ErrMalformedACL),
&MetaOpts{
Logger: c.logger,
Action: metrics.ActionPutBucketAcl,
BucketOwner: parsedAcl.Owner,
})
}
if *accessControlPolicy.Owner.ID != parsedAcl.Owner {
if c.debug {
log.Printf("invalid access control policy owner id: %v, expected %v", *accessControlPolicy.Owner.ID, parsedAcl.Owner)
}
return SendResponse(ctx, s3err.APIError{
Code: "InvalidArgument",
Description: "Invalid id",
HTTPStatusCode: http.StatusBadRequest,
},
&MetaOpts{
Logger: c.logger,
Action: metrics.ActionPutBucketAcl,
@@ -1290,7 +1337,7 @@ func (c S3ApiController) PutBucketActions(ctx *fiber.Ctx) error {
grants, acl)
}
return SendResponse(ctx,
s3err.GetAPIError(s3err.ErrInvalidRequest),
s3err.GetAPIError(s3err.ErrUnexpectedContent),
&MetaOpts{
Logger: c.logger,
MetricsMng: c.mm,
@@ -1299,16 +1346,11 @@ func (c S3ApiController) PutBucketActions(ctx *fiber.Ctx) error {
})
}
input = &s3.PutBucketAclInput{
Bucket: &bucket,
ACL: "",
AccessControlPolicy: &types.AccessControlPolicy{
Owner: &accessControlPolicy.Owner,
Grants: accessControlPolicy.AccessControlList.Grants,
},
input = &auth.PutBucketAclInput{
Bucket: &bucket,
AccessControlPolicy: &accessControlPolicy,
}
}
if acl != "" {
} else if acl != "" {
if acl != "private" && acl != "public-read" && acl != "public-read-write" {
if c.debug {
log.Printf("invalid acl: %q", acl)
@@ -1322,13 +1364,13 @@ func (c S3ApiController) PutBucketActions(ctx *fiber.Ctx) error {
BucketOwner: parsedAcl.Owner,
})
}
if len(ctx.Body()) > 0 || grants != "" {
if grants != "" {
if c.debug {
log.Printf("invalid request: %q (grants) %q (acl)",
grants, acl)
}
return SendResponse(ctx,
s3err.GetAPIError(s3err.ErrInvalidRequest),
s3err.GetAPIError(s3err.ErrBothCannedAndHeaderGrants),
&MetaOpts{
Logger: c.logger,
MetricsMng: c.mm,
@@ -1337,34 +1379,34 @@ func (c S3ApiController) PutBucketActions(ctx *fiber.Ctx) error {
})
}
input = &s3.PutBucketAclInput{
input = &auth.PutBucketAclInput{
Bucket: &bucket,
ACL: types.BucketCannedACL(acl),
AccessControlPolicy: &types.AccessControlPolicy{
Owner: &types.Owner{
ID: &acct.Access,
},
},
}
}
if grants != "" {
input = &s3.PutBucketAclInput{
} else if grants != "" {
input = &auth.PutBucketAclInput{
Bucket: &bucket,
GrantFullControl: &grantFullControl,
GrantRead: &grantRead,
GrantReadACP: &grantReadACP,
GrantWrite: &granWrite,
GrantWriteACP: &grantWriteACP,
AccessControlPolicy: &types.AccessControlPolicy{
Owner: &types.Owner{
ID: &acct.Access,
},
},
ACL: "",
}
} else {
if c.debug {
log.Println("none of the bucket acl options has been specified: canned, req headers, req body")
}
return SendResponse(ctx,
s3err.GetAPIError(s3err.ErrMissingSecurityHeader),
&MetaOpts{
Logger: c.logger,
MetricsMng: c.mm,
Action: metrics.ActionPutBucketAcl,
BucketOwner: parsedAcl.Owner,
})
}
updAcl, err := auth.UpdateACL(input, parsedAcl, c.iam)
updAcl, err := auth.UpdateACL(input, parsedAcl, c.iam, acct.Role == auth.RoleAdmin)
if err != nil {
return SendResponse(ctx, err,
&MetaOpts{
@@ -1440,17 +1482,18 @@ func (c S3ApiController) PutBucketActions(ctx *fiber.Ctx) error {
Owner: acct.Access,
}
updAcl, err := auth.UpdateACL(&s3.PutBucketAclInput{
updAcl, err := auth.UpdateACL(&auth.PutBucketAclInput{
GrantFullControl: &grantFullControl,
GrantRead: &grantRead,
GrantReadACP: &grantReadACP,
GrantWrite: &granWrite,
GrantWriteACP: &grantWriteACP,
AccessControlPolicy: &types.AccessControlPolicy{Owner: &types.Owner{
ID: &acct.Access,
}},
AccessControlPolicy: &auth.AccessControlPolicy{
Owner: &types.Owner{
ID: &acct.Access,
}},
ACL: types.BucketCannedACL(acl),
}, defACL, c.iam)
}, defACL, c.iam, acct.Role == auth.RoleAdmin)
if err != nil {
return SendResponse(ctx, err,
&MetaOpts{
@@ -1492,11 +1535,15 @@ func (c S3ApiController) PutActions(ctx *fiber.Ctx) error {
// Copy source headers
copySource := ctx.Get("X-Amz-Copy-Source")
if len(copySource) > 0 && copySource[0] == '/' {
copySource = copySource[1:]
}
copySrcIfMatch := ctx.Get("X-Amz-Copy-Source-If-Match")
copySrcIfNoneMatch := ctx.Get("X-Amz-Copy-Source-If-None-Match")
copySrcModifSince := ctx.Get("X-Amz-Copy-Source-If-Modified-Since")
copySrcUnmodifSince := ctx.Get("X-Amz-Copy-Source-If-Unmodified-Since")
copySrcRange := ctx.Get("X-Amz-Copy-Source-Range")
directive := ctx.Get("X-Amz-Metadata-Directive")
// Permission headers
acl := ctx.Get("X-Amz-Acl")
@@ -1506,11 +1553,19 @@ func (c S3ApiController) PutActions(ctx *fiber.Ctx) error {
granWrite := ctx.Get("X-Amz-Grant-Write")
grantWriteACP := ctx.Get("X-Amz-Grant-Write-Acp")
// Other headers
// Content Length
contentLengthStr := ctx.Get("Content-Length")
if contentLengthStr == "" {
contentLengthStr = "0"
}
// Use decoded content length if available because the
// middleware will decode the chunked transfer encoding
decodedLength := ctx.Get("X-Amz-Decoded-Content-Length")
if decodedLength != "" {
contentLengthStr = decodedLength
}
// Other headers
bucketOwner := ctx.Get("X-Amz-Expected-Bucket-Owner")
storageClass := ctx.Get("X-Amz-Storage-Class")
@@ -1612,7 +1667,7 @@ func (c S3ApiController) PutActions(ctx *fiber.Ctx) error {
}
bypassHdr := ctx.Get("X-Amz-Bypass-Governance-Retention")
bypass := bypassHdr == "true"
bypass := strings.EqualFold(bypassHdr, "true")
if bypass {
policy, err := c.be.GetBucketPolicy(ctx.Context(), bucket)
if err != nil {
@@ -1696,6 +1751,24 @@ func (c S3ApiController) PutActions(ctx *fiber.Ctx) error {
if ctx.Request().URI().QueryArgs().Has("uploadId") &&
ctx.Request().URI().QueryArgs().Has("partNumber") &&
copySource != "" {
cs := copySource
copySource, err := url.QueryUnescape(copySource)
if err != nil {
if c.debug {
log.Printf("error unescaping copy source %q: %v",
cs, err)
}
return SendXMLResponse(ctx, nil,
s3err.GetAPIError(s3err.ErrInvalidCopySource),
&MetaOpts{
Logger: c.logger,
MetricsMng: c.mm,
Action: metrics.ActionUploadPartCopy,
BucketOwner: parsedAcl.Owner,
})
}
partNumber := int32(ctx.QueryInt("partNumber", -1))
if partNumber < 1 || partNumber > 10000 {
if c.debug {
@@ -1711,7 +1784,7 @@ func (c S3ApiController) PutActions(ctx *fiber.Ctx) error {
})
}
err := auth.VerifyObjectCopyAccess(ctx.Context(), c.be, copySource,
err = auth.VerifyObjectCopyAccess(ctx.Context(), c.be, copySource,
auth.AccessOptions{
Acl: parsedAcl,
AclPermission: types.PermissionWrite,
@@ -1867,13 +1940,26 @@ func (c S3ApiController) PutActions(ctx *fiber.Ctx) error {
})
}
//TODO: This part will be changed when object acls are implemented
grants := []types.Grant{}
for _, grt := range accessControlPolicy.AccessControlList.Grants {
grants = append(grants, types.Grant{
Grantee: &types.Grantee{
ID: &grt.Grantee.ID,
Type: grt.Grantee.Type,
},
Permission: grt.Permission,
})
}
input = &s3.PutObjectAclInput{
Bucket: &bucket,
Key: &keyStart,
ACL: "",
AccessControlPolicy: &types.AccessControlPolicy{
Owner: &accessControlPolicy.Owner,
Grants: accessControlPolicy.AccessControlList.Grants,
Owner: accessControlPolicy.Owner,
Grants: grants,
},
}
}
@@ -1944,7 +2030,24 @@ func (c S3ApiController) PutActions(ctx *fiber.Ctx) error {
}
if copySource != "" {
err := auth.VerifyObjectCopyAccess(ctx.Context(), c.be, copySource,
cs := copySource
copySource, err := url.QueryUnescape(copySource)
if err != nil {
if c.debug {
log.Printf("error unescaping copy source %q: %v",
cs, err)
}
return SendXMLResponse(ctx, nil,
s3err.GetAPIError(s3err.ErrInvalidCopySource),
&MetaOpts{
Logger: c.logger,
MetricsMng: c.mm,
Action: metrics.ActionCopyObject,
BucketOwner: parsedAcl.Owner,
})
}
err = auth.VerifyObjectCopyAccess(ctx.Context(), c.be, copySource,
auth.AccessOptions{
Acl: parsedAcl,
AclPermission: types.PermissionWrite,
@@ -2005,6 +2108,22 @@ func (c S3ApiController) PutActions(ctx *fiber.Ctx) error {
metadata := utils.GetUserMetaData(&ctx.Request().Header)
if directive != "" && directive != "COPY" && directive != "REPLACE" {
return SendXMLResponse(ctx, nil,
s3err.GetAPIError(s3err.ErrInvalidMetadataDirective),
&MetaOpts{
Logger: c.logger,
MetricsMng: c.mm,
Action: metrics.ActionCopyObject,
BucketOwner: parsedAcl.Owner,
})
}
metaDirective := types.MetadataDirectiveCopy
if directive == "REPLACE" {
metaDirective = types.MetadataDirectiveReplace
}
res, err := c.be.CopyObject(ctx.Context(),
&s3.CopyObjectInput{
Bucket: &bucket,
@@ -2016,6 +2135,7 @@ func (c S3ApiController) PutActions(ctx *fiber.Ctx) error {
CopySourceIfUnmodifiedSince: umtime,
ExpectedBucketOwner: &acct.Access,
Metadata: metadata,
MetadataDirective: metaDirective,
StorageClass: types.StorageClass(storageClass),
})
if err == nil {
@@ -2278,7 +2398,7 @@ func (c S3ApiController) DeleteObjects(ctx *fiber.Ctx) error {
acct := ctx.Locals("account").(auth.Account)
isRoot := ctx.Locals("isRoot").(bool)
parsedAcl := ctx.Locals("parsedAcl").(auth.ACL)
bypass := ctx.Get("X-Amz-Bypass-Governance-Retention")
bypassHdr := ctx.Get("X-Amz-Bypass-Governance-Retention")
var dObj s3response.DeleteObjects
err := xml.Unmarshal(ctx.Body(), &dObj)
@@ -2315,7 +2435,10 @@ func (c S3ApiController) DeleteObjects(ctx *fiber.Ctx) error {
})
}
err = auth.CheckObjectAccess(ctx.Context(), bucket, acct.Access, utils.ParseDeleteObjects(dObj.Objects), bypass == "true", c.be)
// The AWS CLI sends 'True', while Go SDK sends 'true'
bypass := strings.EqualFold(bypassHdr, "true")
err = auth.CheckObjectAccess(ctx.Context(), bucket, acct.Access, utils.ParseDeleteObjects(dObj.Objects), bypass, c.be)
if err != nil {
return SendResponse(ctx, err,
&MetaOpts{
@@ -2354,11 +2477,15 @@ func (c S3ApiController) DeleteActions(ctx *fiber.Ctx) error {
acct := ctx.Locals("account").(auth.Account)
isRoot := ctx.Locals("isRoot").(bool)
parsedAcl := ctx.Locals("parsedAcl").(auth.ACL)
bypass := ctx.Get("X-Amz-Bypass-Governance-Retention")
bypassHdr := ctx.Get("X-Amz-Bypass-Governance-Retention")
if keyEnd != "" {
key = strings.Join([]string{key, keyEnd}, "/")
}
path := ctx.Path()
if path[len(path)-1:] == "/" && key[len(key)-1:] != "/" {
key = key + "/"
}
if ctx.Request().URI().QueryArgs().Has("tagging") {
err := auth.VerifyAccess(ctx.Context(), c.be,
@@ -2459,7 +2586,10 @@ func (c S3ApiController) DeleteActions(ctx *fiber.Ctx) error {
})
}
err = auth.CheckObjectAccess(ctx.Context(), bucket, acct.Access, []string{key}, bypass == "true", c.be)
// The AWS CLI sends 'True', while Go SDK sends 'true'
bypass := strings.EqualFold(bypassHdr, "true")
err = auth.CheckObjectAccess(ctx.Context(), bucket, acct.Access, []string{key}, bypass, c.be)
if err != nil {
return SendResponse(ctx, err,
&MetaOpts{
@@ -2554,6 +2684,10 @@ func (c S3ApiController) HeadObject(ctx *fiber.Ctx) error {
if keyEnd != "" {
key = strings.Join([]string{key, keyEnd}, "/")
}
path := ctx.Path()
if path[len(path)-1:] == "/" && key[len(key)-1:] != "/" {
key = key + "/"
}
var partNumber *int32
if ctx.Request().URI().QueryArgs().Has("partNumber") {
@@ -2629,10 +2763,6 @@ func (c S3ApiController) HeadObject(ctx *fiber.Ctx) error {
Key: "ETag",
Value: getstring(res.ETag),
},
{
Key: "x-amz-storage-class",
Value: string(res.StorageClass),
},
{
Key: "x-amz-restore",
Value: getstring(res.Restore),
@@ -2676,12 +2806,22 @@ func (c S3ApiController) HeadObject(ctx *fiber.Ctx) error {
Value: getstring(res.ContentEncoding),
})
}
if res.ContentType != nil {
if res.StorageClass != "" {
headers = append(headers, utils.CustomHeader{
Key: "Content-Type",
Value: getstring(res.ContentType),
Key: "x-amz-storage-class",
Value: string(res.StorageClass),
})
}
contentType := getstring(res.ContentType)
if contentType == "" {
contentType = defaultContentType
}
headers = append(headers, utils.CustomHeader{
Key: "Content-Type",
Value: contentType,
})
utils.SetResponseHeaders(ctx, headers)
return SendResponse(ctx, nil,
@@ -2716,7 +2856,7 @@ func (c S3ApiController) CreateActions(ctx *fiber.Ctx) error {
if ctx.Request().URI().QueryArgs().Has("restore") {
var restoreRequest types.RestoreRequest
if err := xml.Unmarshal(ctx.Body(), &restoreRequest); err != nil {
if !errors.Is(io.EOF, err) {
if !errors.Is(err, io.EOF) {
return SendResponse(ctx, s3err.GetAPIError(s3err.ErrMalformedXML),
&MetaOpts{
Logger: c.logger,

View File

@@ -373,11 +373,11 @@ func TestS3ApiController_ListActions(t *testing.T) {
ListMultipartUploadsFunc: func(_ context.Context, output *s3.ListMultipartUploadsInput) (s3response.ListMultipartUploadsResult, error) {
return s3response.ListMultipartUploadsResult{}, nil
},
ListObjectsV2Func: func(context.Context, *s3.ListObjectsV2Input) (*s3.ListObjectsV2Output, error) {
return &s3.ListObjectsV2Output{}, nil
ListObjectsV2Func: func(context.Context, *s3.ListObjectsV2Input) (s3response.ListObjectsV2Result, error) {
return s3response.ListObjectsV2Result{}, nil
},
ListObjectsFunc: func(context.Context, *s3.ListObjectsInput) (*s3.ListObjectsOutput, error) {
return &s3.ListObjectsOutput{}, nil
ListObjectsFunc: func(context.Context, *s3.ListObjectsInput) (s3response.ListObjectsResult, error) {
return s3response.ListObjectsResult{}, nil
},
GetBucketTaggingFunc: func(contextMoqParam context.Context, bucket string) (map[string]string, error) {
return map[string]string{}, nil
@@ -416,8 +416,8 @@ func TestS3ApiController_ListActions(t *testing.T) {
GetBucketAclFunc: func(context.Context, *s3.GetBucketAclInput) ([]byte, error) {
return acldata, nil
},
ListObjectsFunc: func(context.Context, *s3.ListObjectsInput) (*s3.ListObjectsOutput, error) {
return nil, s3err.GetAPIError(s3err.ErrNotImplemented)
ListObjectsFunc: func(context.Context, *s3.ListObjectsInput) (s3response.ListObjectsResult, error) {
return s3response.ListObjectsResult{}, s3err.GetAPIError(s3err.ErrNotImplemented)
},
GetBucketTaggingFunc: func(contextMoqParam context.Context, bucket string) (map[string]string, error) {
return nil, s3err.GetAPIError(s3err.ErrNoSuchBucket)
@@ -858,7 +858,7 @@ func TestS3ApiController_PutBucketActions(t *testing.T) {
req: incorrectBucketOwner,
},
wantErr: false,
statusCode: 403,
statusCode: 400,
},
{
name: "Put-bucket-acl-success",
@@ -1697,8 +1697,8 @@ func TestS3ApiController_CreateActions(t *testing.T) {
CompleteMultipartUploadFunc: func(context.Context, *s3.CompleteMultipartUploadInput) (*s3.CompleteMultipartUploadOutput, error) {
return &s3.CompleteMultipartUploadOutput{}, nil
},
CreateMultipartUploadFunc: func(context.Context, *s3.CreateMultipartUploadInput) (*s3.CreateMultipartUploadOutput, error) {
return &s3.CreateMultipartUploadOutput{}, nil
CreateMultipartUploadFunc: func(context.Context, *s3.CreateMultipartUploadInput) (s3response.InitiateMultipartUploadResult, error) {
return s3response.InitiateMultipartUploadResult{}, nil
},
SelectObjectContentFunc: func(context.Context, *s3.SelectObjectContentInput) func(w *bufio.Writer) {
return func(w *bufio.Writer) {}

View File

@@ -28,11 +28,11 @@ type S3ApiRouter struct {
WithAdmSrv bool
}
func (sa *S3ApiRouter) Init(app *fiber.App, be backend.Backend, iam auth.IAMService, logger s3log.AuditLogger, evs s3event.S3EventSender, mm *metrics.Manager, debug bool, readonly bool) {
func (sa *S3ApiRouter) Init(app *fiber.App, be backend.Backend, iam auth.IAMService, logger s3log.AuditLogger, aLogger s3log.AuditLogger, evs s3event.S3EventSender, mm *metrics.Manager, debug bool, readonly bool) {
s3ApiController := controllers.New(be, iam, logger, evs, mm, debug, readonly)
if sa.WithAdmSrv {
adminController := controllers.NewAdminController(iam, be)
adminController := controllers.NewAdminController(iam, be, aLogger)
// CreateUser admin api
app.Patch("/create-user", adminController.CreateUser)

View File

@@ -45,7 +45,7 @@ func TestS3ApiRouter_Init(t *testing.T) {
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
tt.sa.Init(tt.args.app, tt.args.be, tt.args.iam, nil, nil, nil, false, false)
tt.sa.Init(tt.args.app, tt.args.be, tt.args.iam, nil, nil, nil, nil, false, false)
})
}
}

View File

@@ -47,6 +47,7 @@ func New(
port, region string,
iam auth.IAMService,
l s3log.AuditLogger,
adminLogger s3log.AuditLogger,
evs s3event.S3EventSender,
mm *metrics.Manager,
opts ...Option,
@@ -64,7 +65,9 @@ func New(
// Logging middlewares
if !server.quiet {
app.Use(logger.New())
app.Use(logger.New(logger.Config{
Format: "${time} | ${status} | ${latency} | ${ip} | ${method} | ${path} | ${error} | ${queryParams}\n",
}))
}
// Set up health endpoint if specified
if server.health != "" {
@@ -82,7 +85,7 @@ func New(
app.Use(middlewares.VerifyMD5Body(l))
app.Use(middlewares.AclParser(be, l, server.readonly))
server.router.Init(app, be, iam, l, evs, mm, server.debug, server.readonly)
server.router.Init(app, be, iam, l, adminLogger, evs, mm, server.debug, server.readonly)
return server, nil
}

View File

@@ -64,7 +64,7 @@ func TestNew(t *testing.T) {
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
gotS3ApiServer, err := New(tt.args.app, tt.args.be, tt.args.root,
tt.args.port, "us-east-1", &auth.IAMServiceInternal{}, nil, nil, nil)
tt.args.port, "us-east-1", &auth.IAMServiceInternal{}, nil, nil, nil, nil)
if (err != nil) != tt.wantErr {
t.Errorf("New() error = %v, wantErr %v", err, tt.wantErr)
return

View File

@@ -259,7 +259,7 @@ func FilterObjectAttributes(attrs map[types.ObjectAttributes]struct{}, output s3
output.ObjectSize = nil
}
if _, ok := attrs[types.ObjectAttributesStorageClass]; !ok {
output.StorageClass = nil
output.StorageClass = ""
}
return output

View File

@@ -127,6 +127,11 @@ const (
ErrBothCannedAndHeaderGrants
ErrOwnershipControlsNotFound
ErrAclNotSupported
ErrMalformedACL
ErrUnexpectedContent
ErrMissingSecurityHeader
ErrInvalidMetadataDirective
ErrKeyTooLong
// Non-AWS errors
ErrExistingObjectIsDirectory
@@ -148,7 +153,7 @@ var errorCodeResponse = map[ErrorCode]APIError{
},
ErrBucketNotEmpty: {
Code: "BucketNotEmpty",
Description: "The bucket you tried to delete is not empty",
Description: "The bucket you tried to delete is not empty.",
HTTPStatusCode: http.StatusConflict,
},
ErrBucketAlreadyExists: {
@@ -173,17 +178,17 @@ var errorCodeResponse = map[ErrorCode]APIError{
},
ErrInvalidMaxUploads: {
Code: "InvalidArgument",
Description: "Argument max-uploads must be an integer between 0 and 2147483647",
Description: "Argument max-uploads must be an integer between 0 and 2147483647.",
HTTPStatusCode: http.StatusBadRequest,
},
ErrInvalidMaxKeys: {
Code: "InvalidArgument",
Description: "Argument maxKeys must be an integer between 0 and 2147483647",
Description: "Argument maxKeys must be an integer between 0 and 2147483647.",
HTTPStatusCode: http.StatusBadRequest,
},
ErrInvalidMaxParts: {
Code: "InvalidArgument",
Description: "Argument max-parts must be an integer between 0 and 2147483647",
Description: "Argument max-parts must be an integer between 0 and 2147483647.",
HTTPStatusCode: http.StatusBadRequest,
},
ErrInvalidPartNumberMarker: {
@@ -193,7 +198,7 @@ var errorCodeResponse = map[ErrorCode]APIError{
},
ErrNoSuchBucket: {
Code: "NoSuchBucket",
Description: "The specified bucket does not exist",
Description: "The specified bucket does not exist.",
HTTPStatusCode: http.StatusNotFound,
},
ErrNoSuchKey: {
@@ -218,7 +223,7 @@ var errorCodeResponse = map[ErrorCode]APIError{
},
ErrInvalidPartNumber: {
Code: "InvalidArgument",
Description: "Part number must be an integer between 1 and 10000, inclusive",
Description: "Part number must be an integer between 1 and 10000, inclusive.",
HTTPStatusCode: http.StatusBadRequest,
},
ErrInvalidCopyDest: {
@@ -263,7 +268,7 @@ var errorCodeResponse = map[ErrorCode]APIError{
},
ErrPostPolicyConditionInvalidFormat: {
Code: "PostPolicyInvalidKeyName",
Description: "Invalid according to Policy: Policy Condition failed",
Description: "Invalid according to Policy: Policy Condition failed.",
HTTPStatusCode: http.StatusForbidden,
},
ErrEntityTooSmall: {
@@ -298,7 +303,7 @@ var errorCodeResponse = map[ErrorCode]APIError{
},
ErrMalformedPresignedDate: {
Code: "AuthorizationQueryParametersError",
Description: "X-Amz-Date must be in the ISO8601 Long Format \"yyyyMMdd'T'HHmmss'Z'\"",
Description: "X-Amz-Date must be in the ISO8601 Long Format \"yyyyMMdd'T'HHmmss'Z'\".",
HTTPStatusCode: http.StatusBadRequest,
},
ErrMissingSignHeadersTag: {
@@ -313,7 +318,7 @@ var errorCodeResponse = map[ErrorCode]APIError{
},
ErrUnsignedHeaders: {
Code: "AccessDenied",
Description: "There were headers present in the request which were not signed",
Description: "There were headers present in the request which were not signed.",
HTTPStatusCode: http.StatusBadRequest,
},
ErrInvalidQueryParams: {
@@ -328,22 +333,22 @@ var errorCodeResponse = map[ErrorCode]APIError{
},
ErrExpiredPresignRequest: {
Code: "AccessDenied",
Description: "Request has expired",
Description: "Request has expired.",
HTTPStatusCode: http.StatusForbidden,
},
ErrMalformedExpires: {
Code: "AuthorizationQueryParametersError",
Description: "X-Amz-Expires should be a number",
Description: "X-Amz-Expires should be a number.",
HTTPStatusCode: http.StatusBadRequest,
},
ErrNegativeExpires: {
Code: "AuthorizationQueryParametersError",
Description: "X-Amz-Expires must be non-negative",
Description: "X-Amz-Expires must be non-negative.",
HTTPStatusCode: http.StatusBadRequest,
},
ErrMaximumExpires: {
Code: "AuthorizationQueryParametersError",
Description: "X-Amz-Expires must be less than a week (in seconds); that is, the given X-Amz-Expires must be less than 604800 seconds",
Description: "X-Amz-Expires must be less than a week (in seconds); that is, the given X-Amz-Expires must be less than 604800 seconds.",
HTTPStatusCode: http.StatusBadRequest,
},
ErrInvalidAccessKeyID: {
@@ -353,7 +358,7 @@ var errorCodeResponse = map[ErrorCode]APIError{
},
ErrRequestNotReadyYet: {
Code: "AccessDenied",
Description: "Request is not valid yet",
Description: "Request is not valid yet.",
HTTPStatusCode: http.StatusForbidden,
},
ErrSignatureDoesNotMatch: {
@@ -363,17 +368,17 @@ var errorCodeResponse = map[ErrorCode]APIError{
},
ErrSignatureDateDoesNotMatch: {
Code: "SignatureDoesNotMatch",
Description: "Date in Credential scope does not match YYYYMMDD from ISO-8601 version of date from HTTP",
Description: "Date in Credential scope does not match YYYYMMDD from ISO-8601 version of date from HTTP.",
HTTPStatusCode: http.StatusForbidden,
},
ErrSignatureTerminationStr: {
Code: "SignatureDoesNotMatch",
Description: "Credential should be scoped with a valid terminator: 'aws4_request'",
Description: "Credential should be scoped with a valid terminator: 'aws4_request'.",
HTTPStatusCode: http.StatusForbidden,
},
ErrSignatureIncorrService: {
Code: "SignatureDoesNotMatch",
Description: "Credential should be scoped to correct service: s3",
Description: "Credential should be scoped to correct service: s3.",
HTTPStatusCode: http.StatusForbidden,
},
ErrContentSHA256Mismatch: {
@@ -383,32 +388,32 @@ var errorCodeResponse = map[ErrorCode]APIError{
},
ErrMissingDateHeader: {
Code: "AccessDenied",
Description: "AWS authentication requires a valid Date or x-amz-date header",
Description: "AWS authentication requires a valid Date or x-amz-date header.",
HTTPStatusCode: http.StatusBadRequest,
},
ErrInvalidRequest: {
Code: "InvalidRequest",
Description: "Invalid Request",
Description: "Invalid Request.",
HTTPStatusCode: http.StatusBadRequest,
},
ErrAuthNotSetup: {
Code: "InvalidRequest",
Description: "Signed request requires setting up SeaweedFS S3 authentication",
Description: "Signed request requires setting up SeaweedFS S3 authentication.",
HTTPStatusCode: http.StatusBadRequest,
},
ErrNotImplemented: {
Code: "NotImplemented",
Description: "A header you provided implies functionality that is not implemented",
Description: "A header you provided implies functionality that is not implemented.",
HTTPStatusCode: http.StatusNotImplemented,
},
ErrPreconditionFailed: {
Code: "PreconditionFailed",
Description: "At least one of the pre-conditions you specified did not hold",
Description: "At least one of the pre-conditions you specified did not hold.",
HTTPStatusCode: http.StatusPreconditionFailed,
},
ErrInvalidObjectState: {
Code: "InvalidObjectState",
Description: "The operation is not valid for the current state of the object",
Description: "The operation is not valid for the current state of the object.",
HTTPStatusCode: http.StatusForbidden,
},
ErrInvalidRange: {
@@ -423,52 +428,52 @@ var errorCodeResponse = map[ErrorCode]APIError{
},
ErrObjectLockConfigurationNotFound: {
Code: "ObjectLockConfigurationNotFoundError",
Description: "Object Lock configuration does not exist for this bucket",
Description: "Object Lock configuration does not exist for this bucket.",
HTTPStatusCode: http.StatusNotFound,
},
ErrNoSuchObjectLockConfiguration: {
Code: "NoSuchObjectLockConfiguration",
Description: "The specified object does not have an ObjectLock configuration",
Description: "The specified object does not have an ObjectLock configuration.",
HTTPStatusCode: http.StatusBadRequest,
},
ErrInvalidBucketObjectLockConfiguration: {
Code: "InvalidRequest",
Description: "Bucket is missing ObjectLockConfiguration",
Description: "Bucket is missing ObjectLockConfiguration.",
HTTPStatusCode: http.StatusBadRequest,
},
ErrObjectLockConfigurationNotAllowed: {
Code: "InvalidBucketState",
Description: "Object Lock configuration cannot be enabled on existing buckets",
Description: "Object Lock configuration cannot be enabled on existing buckets.",
HTTPStatusCode: http.StatusConflict,
},
ErrObjectLocked: {
Code: "InvalidRequest",
Description: "Object is WORM protected and cannot be overwritten",
Description: "Object is WORM protected and cannot be overwritten.",
HTTPStatusCode: http.StatusBadRequest,
},
ErrPastObjectLockRetainDate: {
Code: "InvalidRequest",
Description: "the retain until date must be in the future",
Description: "the retain until date must be in the future.",
HTTPStatusCode: http.StatusBadRequest,
},
ErrObjectLockInvalidRetentionPeriod: {
Code: "InvalidRetentionPeriod",
Description: "the retention days/years must be positive integer",
Description: "the retention days/years must be positive integer.",
HTTPStatusCode: http.StatusBadRequest,
},
ErrNoSuchBucketPolicy: {
Code: "NoSuchBucketPolicy",
Description: "The bucket policy does not exist",
Description: "The bucket policy does not exist.",
HTTPStatusCode: http.StatusNotFound,
},
ErrBucketTaggingNotFound: {
Code: "NoSuchTagSet",
Description: "The TagSet does not exist",
Description: "The TagSet does not exist.",
HTTPStatusCode: http.StatusNotFound,
},
ErrObjectLockInvalidHeaders: {
Code: "InvalidRequest",
Description: "x-amz-object-lock-retain-until-date and x-amz-object-lock-mode must both be supplied",
Description: "x-amz-object-lock-retain-until-date and x-amz-object-lock-mode must both be supplied.",
HTTPStatusCode: http.StatusBadRequest,
},
ErrRequestTimeTooSkewed: {
@@ -478,22 +483,47 @@ var errorCodeResponse = map[ErrorCode]APIError{
},
ErrInvalidBucketAclWithObjectOwnership: {
Code: "ErrInvalidBucketAclWithObjectOwnership",
Description: "Bucket cannot have ACLs set with ObjectOwnership's BucketOwnerEnforced setting",
Description: "Bucket cannot have ACLs set with ObjectOwnership's BucketOwnerEnforced setting.",
HTTPStatusCode: http.StatusBadRequest,
},
ErrBothCannedAndHeaderGrants: {
Code: "InvalidRequest",
Description: "Specifying both Canned ACLs and Header Grants is not allowed",
Description: "Specifying both Canned ACLs and Header Grants is not allowed.",
HTTPStatusCode: http.StatusBadRequest,
},
ErrOwnershipControlsNotFound: {
Code: "OwnershipControlsNotFoundError",
Description: "The bucket ownership controls were not found",
Description: "The bucket ownership controls were not found.",
HTTPStatusCode: http.StatusNotFound,
},
ErrAclNotSupported: {
Code: "AccessControlListNotSupported",
Description: "The bucket does not allow ACLs",
Description: "The bucket does not allow ACLs.",
HTTPStatusCode: http.StatusBadRequest,
},
ErrMalformedACL: {
Code: "MalformedACLError",
Description: "The XML you provided was not well-formed or did not validate against our published schema.",
HTTPStatusCode: http.StatusBadRequest,
},
ErrUnexpectedContent: {
Code: "UnexpectedContent",
Description: "This request does not support content.",
HTTPStatusCode: http.StatusBadRequest,
},
ErrMissingSecurityHeader: {
Code: "MissingSecurityHeader",
Description: "Your request was missing a required header.",
HTTPStatusCode: http.StatusNotFound,
},
ErrInvalidMetadataDirective: {
Code: "InvalidArgument",
Description: "Unknown metadata directive.",
HTTPStatusCode: http.StatusBadRequest,
},
ErrKeyTooLong: {
Code: "KeyTooLongError",
Description: "Your key is too long.",
HTTPStatusCode: http.StatusBadRequest,
},

View File

@@ -35,11 +35,13 @@ type LogMeta struct {
BucketOwner string
ObjectSize int64
Action string
HttpStatus int
}
type LogConfig struct {
LogFile string
WebhookURL string
LogFile string
WebhookURL string
AdminLogFile string
}
type LogFields struct {
@@ -71,18 +73,66 @@ type LogFields struct {
AclRequired string
}
func InitLogger(cfg *LogConfig) (AuditLogger, error) {
type AdminLogFields struct {
Time time.Time
RemoteIP string
Requester string
RequestID string
Operation string
RequestURI string
HttpStatus int
ErrorCode string
BytesSent int
TotalTime int64
TurnAroundTime int64
Referer string
UserAgent string
SignatureVersion string
CipherSuite string
AuthenticationType string
TLSVersion string
}
type Loggers struct {
S3Logger AuditLogger
AdminLogger AuditLogger
}
func InitLogger(cfg *LogConfig) (*Loggers, error) {
if cfg.WebhookURL != "" && cfg.LogFile != "" {
return nil, fmt.Errorf("there should be specified one of the following: file, webhook")
}
if cfg.WebhookURL != "" {
return InitWebhookLogger(cfg.WebhookURL)
}
if cfg.LogFile != "" {
return InitFileLogger(cfg.LogFile)
loggers := new(Loggers)
switch {
case cfg.WebhookURL != "":
fmt.Printf("initializing S3 access logs with '%v' webhook url\n", cfg.WebhookURL)
l, err := InitWebhookLogger(cfg.WebhookURL)
if err != nil {
return nil, err
}
loggers.S3Logger = l
case cfg.LogFile != "":
fmt.Printf("initializing S3 access logs with '%v' file\n", cfg.LogFile)
l, err := InitFileLogger(cfg.LogFile)
if err != nil {
return nil, err
}
loggers.S3Logger = l
}
return nil, nil
if cfg.AdminLogFile != "" {
fmt.Printf("initializing admin access logs with '%v' file\n", cfg.AdminLogFile)
l, err := InitAdminFileLogger(cfg.AdminLogFile)
if err != nil {
return nil, err
}
loggers.AdminLogger = l
}
return loggers, nil
}
func genID() string {

151
s3log/file_admin.go Normal file
View File

@@ -0,0 +1,151 @@
// Copyright 2023 Versity Software
// This file is licensed under the Apache License, Version 2.0
// (the "License"); you may not use this file except in compliance
// with the License. You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing,
// software distributed under the License is distributed on an
// "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
// KIND, either express or implied. See the License for the
// specific language governing permissions and limitations
// under the License.
package s3log
import (
"crypto/tls"
"fmt"
"os"
"time"
"github.com/gofiber/fiber/v2"
"github.com/versity/versitygw/auth"
)
// AdminFileLogger is a local file audit log for admin API operations
type AdminFileLogger struct {
FileLogger
}
var _ AuditLogger = &AdminFileLogger{}
// InitAdminFileLogger initializes admin audit logs to a local file
func InitAdminFileLogger(logname string) (AuditLogger, error) {
f, err := os.OpenFile(logname, os.O_APPEND|os.O_CREATE|os.O_WRONLY, 0644)
if err != nil {
return nil, fmt.Errorf("open log: %w", err)
}
f.WriteString(fmt.Sprintf("log starts %v\n", time.Now()))
return &AdminFileLogger{FileLogger: FileLogger{logfile: logname, f: f}}, nil
}
// Log sends log message to file logger
func (f *AdminFileLogger) Log(ctx *fiber.Ctx, err error, body []byte, meta LogMeta) {
f.mu.Lock()
defer f.mu.Unlock()
if f.gotErr {
return
}
lf := AdminLogFields{}
access := "-"
reqURI := ctx.OriginalURL()
errorCode := ""
startTime := ctx.Locals("startTime").(time.Time)
tlsConnState := ctx.Context().TLSConnectionState()
if tlsConnState != nil {
lf.CipherSuite = tls.CipherSuiteName(tlsConnState.CipherSuite)
lf.TLSVersion = getTLSVersionName(tlsConnState.Version)
}
if err != nil {
errorCode = err.Error()
}
switch ctx.Locals("account").(type) {
case auth.Account:
access = ctx.Locals("account").(auth.Account).Access
}
lf.Time = time.Now()
lf.RemoteIP = ctx.IP()
lf.Requester = access
lf.RequestID = genID()
lf.Operation = meta.Action
lf.RequestURI = reqURI
lf.HttpStatus = meta.HttpStatus
lf.ErrorCode = errorCode
lf.BytesSent = len(body)
lf.TotalTime = time.Since(startTime).Milliseconds()
lf.TurnAroundTime = time.Since(startTime).Milliseconds()
lf.Referer = ctx.Get("Referer")
lf.UserAgent = ctx.Get("User-Agent")
lf.SignatureVersion = "SigV4"
lf.AuthenticationType = "AuthHeader"
f.writeLog(lf)
}
func (f *AdminFileLogger) writeLog(lf AdminLogFields) {
if lf.RemoteIP == "" {
lf.RemoteIP = "-"
}
if lf.Requester == "" {
lf.Requester = "-"
}
if lf.Operation == "" {
lf.Operation = "-"
}
if lf.RequestURI == "" {
lf.RequestURI = "-"
}
if lf.ErrorCode == "" {
lf.ErrorCode = "-"
}
if lf.Referer == "" {
lf.Referer = "-"
}
if lf.UserAgent == "" {
lf.UserAgent = "-"
}
if lf.CipherSuite == "" {
lf.CipherSuite = "-"
}
if lf.TLSVersion == "" {
lf.TLSVersion = "-"
}
log := fmt.Sprintf("%v %v %v %v %v %v %v %v %v %v %v %v %v %v %v %v %v\n",
fmt.Sprintf("[%v]", lf.Time.Format(timeFormat)),
lf.RemoteIP,
lf.Requester,
lf.RequestID,
lf.Operation,
lf.RequestURI,
lf.HttpStatus,
lf.ErrorCode,
lf.BytesSent,
lf.TotalTime,
lf.TurnAroundTime,
lf.Referer,
lf.UserAgent,
lf.SignatureVersion,
lf.CipherSuite,
lf.AuthenticationType,
lf.TLSVersion,
)
_, err := f.f.WriteString(log)
if err != nil {
fmt.Fprintf(os.Stderr, "error writing to log file: %v\n", err)
// TODO: do we need to terminate on log error?
// set err for now so that we don't spew errors
f.gotErr = true
}
}

View File

@@ -21,14 +21,30 @@ import (
"github.com/aws/aws-sdk-go-v2/service/s3/types"
)
const RFC3339TimeFormat = "2006-01-02T15:04:05.999Z"
// Part describes part metadata.
type Part struct {
PartNumber int
LastModified string
LastModified time.Time
ETag string
Size int64
}
func (p Part) MarshalXML(e *xml.Encoder, start xml.StartElement) error {
type Alias Part
aux := &struct {
LastModified string `xml:"LastModified"`
*Alias
}{
Alias: (*Alias)(&p),
}
aux.LastModified = p.LastModified.Format(RFC3339TimeFormat)
return e.EncodeElement(aux, start)
}
// ListPartsResponse - s3 api list parts response.
type ListPartsResult struct {
XMLName xml.Name `xml:"http://s3.amazonaws.com/doc/2006-03-01/ ListPartsResult" json:"-"`
@@ -41,7 +57,7 @@ type ListPartsResult struct {
Owner Owner
// The class of storage used to store the object.
StorageClass string
StorageClass types.StorageClass
PartNumberMarker int
NextPartNumberMarker int
@@ -56,7 +72,7 @@ type GetObjectAttributesResult struct {
ETag *string
LastModified *time.Time
ObjectSize *int64
StorageClass *types.StorageClass
StorageClass types.StorageClass
VersionId *string
ObjectParts *ObjectParts
}
@@ -91,14 +107,85 @@ type ListMultipartUploadsResult struct {
CommonPrefixes []CommonPrefix
}
type ListObjectsResult struct {
XMLName xml.Name `xml:"http://s3.amazonaws.com/doc/2006-03-01/ ListBucketResult" json:"-"`
Name *string
Prefix *string
Marker *string
NextMarker *string
MaxKeys *int32
Delimiter *string
IsTruncated *bool
Contents []Object
CommonPrefixes []types.CommonPrefix
EncodingType types.EncodingType
}
type ListObjectsV2Result struct {
XMLName xml.Name `xml:"http://s3.amazonaws.com/doc/2006-03-01/ ListBucketResult" json:"-"`
Name *string
Prefix *string
StartAfter *string
ContinuationToken *string
NextContinuationToken *string
KeyCount *int32
MaxKeys *int32
Delimiter *string
IsTruncated *bool
Contents []Object
CommonPrefixes []types.CommonPrefix
EncodingType types.EncodingType
}
type Object struct {
ETag *string
Key *string
LastModified *time.Time
Owner *types.Owner
RestoreStatus *types.RestoreStatus
Size *int64
StorageClass types.ObjectStorageClass
}
func (o Object) MarshalXML(e *xml.Encoder, start xml.StartElement) error {
type Alias Object
aux := &struct {
LastModified *string `xml:"LastModified,omitempty"`
*Alias
}{
Alias: (*Alias)(&o),
}
if o.LastModified != nil {
formattedTime := o.LastModified.Format(RFC3339TimeFormat)
aux.LastModified = &formattedTime
}
return e.EncodeElement(aux, start)
}
// Upload describes in progress multipart upload
type Upload struct {
Key string
UploadID string `xml:"UploadId"`
Initiator Initiator
Owner Owner
StorageClass string
Initiated string
StorageClass types.StorageClass
Initiated time.Time
}
func (u Upload) MarshalXML(e *xml.Encoder, start xml.StartElement) error {
type Alias Upload
aux := &struct {
Initiated string `xml:"Initiated"`
*Alias
}{
Alias: (*Alias)(&u),
}
aux.Initiated = u.Initiated.Format(RFC3339TimeFormat)
return e.EncodeElement(aux, start)
}
// CommonPrefix ListObjectsResponse common prefixes (directory abstraction)
@@ -221,3 +308,10 @@ type Grantee struct {
type OwnershipControls struct {
Rules []types.OwnershipControlsRule `xml:"Rule"`
}
type InitiateMultipartUploadResult struct {
XMLName xml.Name `xml:"http://s3.amazonaws.com/doc/2006-03-01/ InitiateMultipartUploadResult" json:"-"`
Bucket string
Key string
UploadId string
}

View File

@@ -17,4 +17,12 @@ GOCOVERDIR=$PWD/cover
USERS_FOLDER=$PWD/iam
#TEST_LOG_FILE=test.log
#VERSITY_LOG_FILE=versity.log
IAM_TYPE=folder
IAM_TYPE=folder
DIRECT=false
#DIRECT_DISPLAY_NAME=
#COVERAGE_DB=coverage.sql
USERNAME_ONE=ABCDEFG
PASSWORD_ONE=HIJKLMN
USERNAME_TWO=HIJKLMN
PASSWORD_TWO=OPQRSTU
TEST_FILE_FOLDER=$PWD/versity-gwtest-files

27
tests/.env.docker.default Normal file
View File

@@ -0,0 +1,27 @@
AWS_PROFILE=versity
AWS_ENDPOINT_URL=https://127.0.0.1:7070
VERSITY_EXE=./versitygw
RUN_VERSITYGW=true
BACKEND=posix
LOCAL_FOLDER=/tmp/gw
BUCKET_ONE_NAME=versity-gwtest-bucket-one
BUCKET_TWO_NAME=versity-gwtest-bucket-two
CERT=$PWD/cert-docker.pem
KEY=$PWD/versitygw-docker.pem
S3CMD_CONFIG=./tests/s3cfg.local.default
SECRETS_FILE=./tests/.secrets
MC_ALIAS=versity
LOG_LEVEL=2
USERS_FOLDER=$PWD/iam
#TEST_LOG_FILE=test.log
#VERSITY_LOG_FILE=versity.log
IAM_TYPE=folder
DIRECT=false
#DIRECT_DISPLAY_NAME=
#COVERAGE_DB=coverage.sql
USERNAME_ONE=ABCDEFG
PASSWORD_ONE=HIJKLMN
USERNAME_TWO=HIJKLMN
PASSWORD_TWO=OPQRSTU
TEST_FILE_FOLDER=$PWD/versity-gwtest-files
RECREATE_BUCKETS=true

5
tests/.secrets.default Normal file
View File

@@ -0,0 +1,5 @@
# change to your account attributes
AWS_ACCESS_KEY_ID=ABCDEFGHIJKLMNOPQRST
AWS_SECRET_ACCESS_KEY=ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmn
AWS_REGION=us-east-1
AWS_PROFILE=versity

View File

@@ -9,10 +9,11 @@
* **aws cli**: Instructions are [here](https://docs.aws.amazon.com/cli/latest/userguide/getting-started-install.html).
* **s3cmd**: Instructions are [here](https://github.com/s3tools/s3cmd/blob/master/INSTALL.md).
* **mc**: Instructions are [here](https://min.io/docs/minio/linux/reference/minio-mc.html).
3. Install BATS. Instructions are [here](https://bats-core.readthedocs.io/en/stable/installation.html).
4. If running on Mac OS, install **jq** with the command `brew install jq`.
4. Create a `.secrets` file in the `tests` folder, and add the `AWS_ACCESS_KEY_ID` and `AWS_SECRET_ACCESS_KEY` values to the file.
5. Create a local AWS profile for connection to S3, and add the `AWS_ACCESS_KEY_ID`, `AWS_SECRET_ACCESS_KEY`, and `AWS_REGION` values for your account to the profile. Example:
3. Install **BATS**. Instructions are [here](https://bats-core.readthedocs.io/en/stable/installation.html).
4. Install **bats-support** and **bats-assert**. This can be done by saving the root folder of each repo (https://github.com/bats-core/bats-support and https://github.com/ztombol/bats-assert) in the `tests` folder.
5. If running on Mac OS, install **jq** with the command `brew install jq`.
6. Create a `.secrets` file in the `tests` folder, and add the `AWS_ACCESS_KEY_ID`, `AWS_SECRET_ACCESS_KEY`, `AWS_REGION`, and `AWS_PROFILE` values to the file.
7. Create a local AWS profile for connection to S3, and add the `AWS_ACCESS_KEY_ID`, `AWS_SECRET_ACCESS_KEY`, and `AWS_REGION` values for your account to the profile. Example:
```
export AWS_PROFILE=versity-test
export AWS_ACCESS_KEY_ID=<your account ID>
@@ -22,18 +23,18 @@
aws configure set aws_secret_access_key $AWS_SECRET_ACCESS_KEY --profile $AWS_PROFILE
aws configure set aws_region $AWS_REGION --profile $AWS_PROFILE
```
6. Create an environment file (`.env`) similar to the ones in this folder, setting the `AWS_PROFILE` parameter to the name of the profile you created.
7. If using SSL, create a local private key and certificate, such as with the commands below. Afterwards, set the `KEY` and `CERT` fields in the `.env` file to these, respectively.
8. Create an environment file (`.env`) similar to the ones in this folder, setting the `AWS_PROFILE` parameter to the name of the profile you created.
9. If using SSL, create a local private key and certificate, such as with the commands below. Afterwards, set the `KEY` and `CERT` fields in the `.env` file to these, respectively.
```
openssl genpkey -algorithm RSA -out versitygw.pem -pkeyopt rsa_keygen_bits:2048
openssl req -new -x509 -key versitygw.pem -out cert.pem -days 365
```
8. Set `BUCKET_ONE_NAME` and `BUCKET_TWO_NAME` to the desired names of your buckets. If you don't want them to be created each time, set `RECREATE_BUCKETS` to `false`.
9. In the root repo folder, run single test group with `VERSITYGW_TEST_ENV=<env file> tests/run.sh <options>`. To print options, run `tests/run.sh -h`. To run all tests, run `VERSITYGW_TEST_ENV=<env file> tests/run_all.sh`.
10. Set `BUCKET_ONE_NAME` and `BUCKET_TWO_NAME` to the desired names of your buckets. If you don't want them to be created each time, set `RECREATE_BUCKETS` to `false`.
11. In the root repo folder, run a single test group with `VERSITYGW_TEST_ENV=<env file> tests/run.sh <options>`. To print options, run `tests/run.sh -h`. To run all tests, run `VERSITYGW_TEST_ENV=<env file> tests/run_all.sh`.
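For example, assuming the environment file is `tests/.env` (substitute whatever file you created in the earlier steps):
```
# list the available options for a single test group
tests/run.sh -h
# run one test group with the chosen options
VERSITYGW_TEST_ENV=tests/.env tests/run.sh <options>
# run every test group
VERSITYGW_TEST_ENV=tests/.env tests/run_all.sh
```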
### Static Bucket Mode
To preserve buckets while running tests, set `RECREATE_BUCKETS` to `false`. Two utility functions are included, if needed, to create, and delete buckets for this: `tests/setup_static.sh` and `tests/remove_static.sh`.
To preserve buckets while running tests, set `RECREATE_BUCKETS` to `false`. Two utility scripts are included to create and delete these buckets if needed: `tests/setup_static.sh` and `tests/remove_static.sh`. Note that the setup script creates a bucket with object lock enabled, and some tests may fail if the bucket being tested doesn't have object lock enabled.
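A minimal sketch of the static-mode workflow, assuming the helper scripts read the same `VERSITYGW_TEST_ENV` environment file as the test runners (an assumption; check the scripts for their exact inputs):
```
# create the static buckets once (the setup script enables object lock on the bucket)
VERSITYGW_TEST_ENV=tests/.env tests/setup_static.sh
# run the suite with RECREATE_BUCKETS=false in the environment file
VERSITYGW_TEST_ENV=tests/.env tests/run_all.sh
# remove the static buckets when finished
VERSITYGW_TEST_ENV=tests/.env tests/remove_static.sh
```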
### S3 Backend
@@ -57,8 +58,9 @@ To communicate directly with s3, in order to compare the gateway results to dire
## Instructions - Running With Docker
1. Create a `.secrets` file in the `tests` folder, and add the `AWS_PROFILE`, `AWS_ACCESS_KEY_ID`, `AWS_SECRET_ACCESS_KEY`, and the `AWS_PROFILE` fields, as well as the additional s3 fields explained in the **S3 Backend** section above if running with the s3 backend.
2. Build and run the `Dockerfile_test_bats` file. Change the `SECRETS_FILE` and `CONFIG_FILE` parameters to point to your secrets and config file, respectively. Example: `docker build -t <tag> -f Dockerfile_test_bats --build-arg="SECRETS_FILE=<file>" --build-arg="CONFIG_FILE=<file>" .`.
1. Copy `.secrets.default` to `.secrets` in the `tests` folder and change the parameters. If running with the s3 backend, also add the additional s3 fields explained in the **S3 Backend** section above.
2. By default, the Dockerfile uses the **arm** architecture (usually modern Mac). If using **amd** (usually earlier Mac or Linux), either replace the corresponding `ARG` values directly or override them with `--build-arg="<param>=<amd library or folder>"`. You can determine which architecture your OS uses with `uname -a`.
3. Build and run the `Dockerfile_test_bats` file. Change the `SECRETS_FILE` and `CONFIG_FILE` parameters to point to your secrets and config files, respectively, if not using the defaults. Example: `docker build -t <tag> -f Dockerfile_test_bats --build-arg="SECRETS_FILE=<file>" --build-arg="CONFIG_FILE=<file>" .`.
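As a concrete sketch, a build using the default-style files shipped in `tests` might look like the following; the image tag is arbitrary, the file paths are assumptions based on the defaults described above, and any **amd** overrides would need additional `--build-arg` values matching the `ARG` names in `Dockerfile_test_bats`:
```
docker build -t versitygw-bats -f Dockerfile_test_bats \
  --build-arg="SECRETS_FILE=tests/.secrets" \
  --build-arg="CONFIG_FILE=tests/.env.docker.default" .
# run the test image; assumes the image's default entrypoint starts the BATS suite
docker run --rm versitygw-bats
```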
## Instructions - Running with docker-compose
@@ -76,3 +78,7 @@ To run in insecure mode, comment out the `CERT` and `KEY` parameters in the `.en
To use static buckets set the `RECREATE_BUCKETS` value to `false`.
For the s3 backend, see the **S3 Backend** instructions above.
If using the AMD rather than the ARM architecture, add the corresponding **args** values matching the **amd** library entries in the Dockerfile.
A single instance can be run with `docker-compose -f docker-compose-bats.yml up <service name>`.
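For example (the service name is a placeholder; use one defined in `docker-compose-bats.yml`):
```
# run every service defined in the compose file
docker-compose -f docker-compose-bats.yml up
# or bring up a single service
docker-compose -f docker-compose-bats.yml up <service name>
```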

View File

@@ -1,6 +1,21 @@
#!/usr/bin/env bash
# Copyright 2024 Versity Software
# This file is licensed under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance
# with the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing,
# software distributed under the License is distributed on an
# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
# KIND, either express or implied. See the License for the
# specific language governing permissions and limitations
# under the License.
abort_multipart_upload() {
record_command "abort-multipart-upload" "client:s3api"
if [ $# -ne 3 ]; then
log 2 "'abort multipart upload' command requires bucket, key, upload ID"
return 1
@@ -17,6 +32,7 @@ abort_multipart_upload_with_user() {
log 2 "'abort multipart upload' command requires bucket, key, upload ID, username, password"
return 1
fi
record_command "abort-multipart-upload" "client:s3api"
if ! abort_multipart_upload_error=$(AWS_ACCESS_KEY_ID="$4" AWS_SECRET_ACCESS_KEY="$5" aws --no-verify-ssl s3api abort-multipart-upload --bucket "$1" --key "$2" --upload-id "$3" 2>&1); then
log 2 "Error aborting upload: $abort_multipart_upload_error"
export abort_multipart_upload_error

View File

@@ -1,11 +1,26 @@
#!/usr/bin/env bash
# Copyright 2024 Versity Software
# This file is licensed under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance
# with the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing,
# software distributed under the License is distributed on an
# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
# KIND, either express or implied. See the License for the
# specific language governing permissions and limitations
# under the License.
complete_multipart_upload() {
if [[ $# -ne 4 ]]; then
log 2 "'complete multipart upload' command requires bucket, key, upload ID, parts list"
return 1
fi
log 5 "complete multipart upload id: $3, parts: $4"
record_command "complete-multipart-upload" "client:s3api"
error=$(aws --no-verify-ssl s3api complete-multipart-upload --bucket "$1" --key "$2" --upload-id "$3" --multipart-upload '{"Parts": '"$4"'}' 2>&1) || local completed=$?
if [[ $completed -ne 0 ]]; then
log 2 "error completing multipart upload: $error"

View File

@@ -1,5 +1,19 @@
#!/usr/bin/env bash
# Copyright 2024 Versity Software
# This file is licensed under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance
# with the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing,
# software distributed under the License is distributed on an
# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
# KIND, either express or implied. See the License for the
# specific language governing permissions and limitations
# under the License.
copy_object() {
if [ $# -ne 4 ]; then
echo "copy object command requires command type, source, bucket, key"
@@ -7,6 +21,7 @@ copy_object() {
fi
local exit_code=0
local error
record_command "copy-object" "client:$1"
if [[ $1 == 's3' ]]; then
error=$(aws --no-verify-ssl s3 cp "$2" s3://"$3/$4" 2>&1) || exit_code=$?
elif [[ $1 == 's3api' ]] || [[ $1 == 'aws' ]]; then
@@ -29,6 +44,7 @@ copy_object() {
}
copy_object_empty() {
record-command "copy-object" "client:s3api"
error=$(aws --no-verify-ssl s3api copy-object 2>&1) || local result=$?
if [[ $result -eq 0 ]]; then
log 2 "copy object with empty parameters returned no error"

View File

@@ -1,5 +1,21 @@
#!/usr/bin/env bash
# Copyright 2024 Versity Software
# This file is licensed under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance
# with the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing,
# software distributed under the License is distributed on an
# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
# KIND, either express or implied. See the License for the
# specific language governing permissions and limitations
# under the License.
source ./tests/report.sh
# create an AWS bucket
# param: bucket name
# return 0 for success, 1 for failure
@@ -9,6 +25,7 @@ create_bucket() {
return 1
fi
record_command "create-bucket" "client:$1"
local exit_code=0
local error
log 6 "create bucket"
@@ -32,7 +49,31 @@ create_bucket() {
return 0
}
create_bucket_with_user() {
if [ $# -ne 4 ]; then
log 2 "create bucket missing command type, bucket name, access, secret"
return 1
fi
local exit_code=0
if [[ $1 == "aws" ]] || [[ $1 == "s3api" ]]; then
error=$(AWS_ACCESS_KEY_ID="$3" AWS_SECRET_ACCESS_KEY="$4" aws --no-verify-ssl s3 mb s3://"$2" 2>&1) || exit_code=$?
elif [[ $1 == "s3cmd" ]]; then
error=$(s3cmd "${S3CMD_OPTS[@]}" --no-check-certificate mb --access_key="$3" --secret_key="$4" s3://"$2" 2>&1) || exit_code=$?
elif [[ $1 == "mc" ]]; then
error=$(mc --insecure mb "$MC_ALIAS"/"$2" 2>&1) || exit_code=$?
else
log 2 "invalid command type $1"
return 1
fi
if [ $exit_code -ne 0 ]; then
log 2 "error creating bucket: $error"
return 1
fi
return 0
}
create_bucket_object_lock_enabled() {
record_command "create-bucket" "client:s3api"
if [ $# -ne 1 ]; then
log 2 "create bucket missing bucket name"
return 1

View File

@@ -1,9 +1,24 @@
#!/usr/bin/env bash
# Copyright 2024 Versity Software
# This file is licensed under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance
# with the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing,
# software distributed under the License is distributed on an
# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
# KIND, either express or implied. See the License for the
# specific language governing permissions and limitations
# under the License.
# initialize a multipart upload
# params: bucket, key
# return 0 for success, 1 for failure
create_multipart_upload() {
record_command "create-multipart-upload" "client:s3api"
if [ $# -ne 2 ]; then
log 2 "create multipart upload function must have bucket, key"
return 1
@@ -19,11 +34,11 @@ create_multipart_upload() {
return 1
fi
upload_id="${upload_id//\"/}"
export upload_id
return 0
}
create_multipart_upload_with_user() {
record_command "create-multipart-upload" "client:s3api"
if [ $# -ne 4 ]; then
log 2 "create multipart upload function must have bucket, key, username, password"
return 1
@@ -39,11 +54,11 @@ create_multipart_upload_with_user() {
return 1
fi
upload_id="${upload_id//\"/}"
export upload_id
return 0
}
create_multipart_upload_params() {
record_command "create-multipart-upload" "client:s3api"
if [ $# -ne 8 ]; then
log 2 "create multipart upload function with params must have bucket, key, content type, metadata, object lock legal hold status, " \
"object lock mode, object lock retain until date, and tagging"
@@ -63,14 +78,13 @@ create_multipart_upload_params() {
log 2 "error creating multipart upload with params: $multipart_data"
return 1
fi
export multipart_data
upload_id=$(echo "$multipart_data" | grep -v "InsecureRequestWarning" | jq '.UploadId')
upload_id="${upload_id//\"/}"
export upload_id
return 0
}
create_multipart_upload_custom() {
record_command "create-multipart-upload" "client:s3api"
if [ $# -lt 2 ]; then
log 2 "create multipart upload custom function must have at least bucket and key"
return 1
@@ -87,11 +101,9 @@ create_multipart_upload_custom() {
log 2 "error creating custom multipart data command: $multipart_data"
return 1
fi
export multipart_data
log 5 "multipart data: $multipart_data"
upload_id=$(echo "$multipart_data" | grep -v "InsecureRequestWarning" | jq '.UploadId')
upload_id="${upload_id//\"/}"
log 5 "upload id: $upload_id"
export upload_id
return 0
}

View File

@@ -1,18 +1,37 @@
#!/usr/bin/env bash
# delete an AWS bucket
# Copyright 2024 Versity Software
# This file is licensed under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance
# with the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing,
# software distributed under the License is distributed on an
# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
# KIND, either express or implied. See the License for the
# specific language governing permissions and limitations
# under the License.
# param: bucket name
# return 0 for success, 1 for failure
# fail if params are bad, or bucket exists and user is unable to delete bucket
delete_bucket() {
log 6 "delete_bucket"
record_command "delete-bucket" "client:$1"
if [ $# -ne 2 ]; then
log 2 "delete bucket missing command type, bucket name"
log 2 "'delete_bucket' command requires client, bucket"
return 1
fi
local exit_code=0
local error
if [[ ( $RECREATE_BUCKETS == "false" ) && (( "$2" == "$BUCKET_ONE_NAME" ) || ( "$2" == "$BUCKET_TWO_NAME" )) ]]; then
log 2 "attempt to delete main buckets in static mode"
return 1
fi
exit_code=0
if [[ $1 == 's3' ]]; then
error=$(aws --no-verify-ssl s3 rb s3://"$2" 2>&1) || exit_code=$?
error=$(aws --no-verify-ssl s3 rb s3://"$2") || exit_code=$?
elif [[ $1 == 'aws' ]] || [[ $1 == 's3api' ]]; then
error=$(aws --no-verify-ssl s3api delete-bucket --bucket "$2" 2>&1) || exit_code=$?
elif [[ $1 == 's3cmd' ]]; then
@@ -26,10 +45,9 @@ delete_bucket() {
if [ $exit_code -ne 0 ]; then
if [[ "$error" == *"The specified bucket does not exist"* ]]; then
return 0
else
log 2 "error deleting bucket: $error"
return 1
fi
log 2 "error deleting bucket: $error"
return 1
fi
return 0
}

View File

@@ -1,10 +1,26 @@
#!/usr/bin/env bash
# Copyright 2024 Versity Software
# This file is licensed under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance
# with the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing,
# software distributed under the License is distributed on an
# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
# KIND, either express or implied. See the License for the
# specific language governing permissions and limitations
# under the License.
delete_bucket_policy() {
record_command "delete-bucket-policy" "client:$1"
if [[ $# -ne 2 ]]; then
log 2 "delete bucket policy command requires command type, bucket"
return 1
fi
local delete_result=0
if [[ $1 == 'aws' ]] || [[ $1 == 's3api' ]]; then
error=$(aws --no-verify-ssl s3api delete-bucket-policy --bucket "$2" 2>&1) || delete_result=$?
elif [[ $1 == 's3cmd' ]]; then
@@ -23,6 +39,7 @@ delete_bucket_policy() {
}
delete_bucket_policy_with_user() {
record_command "delete-bucket-policy" "client:s3api"
if [[ $# -ne 3 ]]; then
log 2 "'delete bucket policy with user' command requires bucket, username, password"
return 1

View File

@@ -0,0 +1,51 @@
#!/usr/bin/env bash
# Copyright 2024 Versity Software
# This file is licensed under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance
# with the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing,
# software distributed under the License is distributed on an
# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
# KIND, either express or implied. See the License for the
# specific language governing permissions and limitations
# under the License.
delete_bucket_tagging() {
record_command "delete-bucket-tagging" "client:$1"
if [ $# -ne 2 ]; then
log 2 "delete bucket tagging command missing command type, bucket name"
return 1
fi
local result
if [[ $1 == 'aws' ]]; then
tags=$(aws --no-verify-ssl s3api delete-bucket-tagging --bucket "$2" 2>&1) || result=$?
elif [[ $1 == 'mc' ]]; then
tags=$(mc --insecure tag remove "$MC_ALIAS"/"$2" 2>&1) || result=$?
else
log 2 "invalid command type $1"
return 1
fi
if [[ $result -ne 0 ]]; then
log 2 "error deleting bucket tagging: $tags"
return 1
fi
return 0
}
delete_bucket_tagging_with_user() {
log 6 "delete_bucket_tagging_with_user"
record_command "delete-bucket-tagging" "client:s3api"
if [ $# -ne 3 ]; then
log 2 "delete bucket tagging command missing username, password, bucket name"
return 1
fi
if ! error=$(AWS_ACCESS_KEY_ID="$1" AWS_SECRET_ACCESS_KEY="$2" aws --no-verify-ssl s3api delete-bucket-tagging --bucket "$3" 2>&1); then
log 2 "error deleting bucket tagging with user: $error"
return 1
fi
return 0
}

View File

@@ -1,6 +1,23 @@
#!/usr/bin/env bash
# Copyright 2024 Versity Software
# This file is licensed under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance
# with the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing,
# software distributed under the License is distributed on an
# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
# KIND, either express or implied. See the License for the
# specific language governing permissions and limitations
# under the License.
# params: client, bucket, key
delete_object() {
log 6 "delete_object"
record_command "delete-object" "client:$1"
if [ $# -ne 3 ]; then
log 2 "delete object command requires command type, bucket, key"
return 1
@@ -27,7 +44,20 @@ delete_object() {
return 0
}
delete_object_bypass_retention() {
if [[ $# -ne 4 ]]; then
log 2 "'delete-object with bypass retention' requires bucket, key, user, password"
return 1
fi
if ! delete_object_error=$(AWS_ACCESS_KEY_ID="$3" AWS_SECRET_ACCESS_KEY="$4" aws --no-verify-ssl s3api delete-object --bucket "$1" --key "$2" --bypass-governance-retention 2>&1); then
log 2 "error deleting object with bypass retention: $delete_object_error"
return 1
fi
return 0
}
delete_object_with_user() {
record_command "delete-object" "client:$1"
if [ $# -ne 5 ]; then
log 2 "delete object with user command requires command type, bucket, key, access ID, secret key"
return 1
@@ -36,7 +66,7 @@ delete_object_with_user() {
if [[ $1 == 's3' ]]; then
delete_object_error=$(AWS_ACCESS_KEY_ID="$4" AWS_SECRET_ACCESS_KEY="$5" aws --no-verify-ssl s3 rm "s3://$2/$3" 2>&1) || exit_code=$?
elif [[ $1 == 's3api' ]] || [[ $1 == 'aws' ]]; then
delete_object_error=$(AWS_ACCESS_KEY_ID="$4" AWS_SECRET_ACCESS_KEY="$5" aws --no-verify-ssl s3api delete-object --bucket "$2" --key "$3" 2>&1) || exit_code=$?
delete_object_error=$(AWS_ACCESS_KEY_ID="$4" AWS_SECRET_ACCESS_KEY="$5" aws --no-verify-ssl s3api delete-object --bucket "$2" --key "$3" --bypass-governance-retention 2>&1) || exit_code=$?
elif [[ $1 == 's3cmd' ]]; then
delete_object_error=$(s3cmd "${S3CMD_OPTS[@]}" --no-check-certificate rm --access_key="$4" --secret_key="$5" "s3://$2/$3" 2>&1) || exit_code=$?
else

View File

@@ -1,6 +1,21 @@
#!/usr/bin/env bash
# Copyright 2024 Versity Software
# This file is licensed under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance
# with the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing,
# software distributed under the License is distributed on an
# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
# KIND, either express or implied. See the License for the
# specific language governing permissions and limitations
# under the License.
delete_object_tagging() {
record_command "delete-object-tagging" "client:$1"
if [[ $# -ne 3 ]]; then
echo "delete object tagging command missing command type, bucket, key"
return 1

View File

@@ -0,0 +1,33 @@
#!/usr/bin/env bash
# Copyright 2024 Versity Software
# This file is licensed under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance
# with the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing,
# software distributed under the License is distributed on an
# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
# KIND, either express or implied. See the License for the
# specific language governing permissions and limitations
# under the License.
delete_objects() {
record_command "delete-objects" "client:s3api"
if [[ $# -ne 3 ]]; then
log 2 "'delete-objects' command requires bucket name, two object keys"
return 1
fi
if ! error=$(aws --no-verify-ssl s3api delete-objects --bucket "$1" --delete "{
\"Objects\": [
{\"Key\": \"$2\"},
{\"Key\": \"$3\"}
]
}" 2>&1); then
log 2 "error deleting objects: $error"
return 1
fi
return 0
}

View File

@@ -1,6 +1,21 @@
#!/usr/bin/env bash
# Copyright 2024 Versity Software
# This file is licensed under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance
# with the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing,
# software distributed under the License is distributed on an
# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
# KIND, either express or implied. See the License for the
# specific language governing permissions and limitations
# under the License.
get_bucket_acl() {
record_command "get-bucket-acl" "client:$1"
if [ $# -ne 2 ]; then
log 2 "bucket ACL command missing command type, bucket name"
return 1
@@ -18,10 +33,11 @@ get_bucket_acl() {
log 2 "Error getting bucket ACLs: $acl"
return 1
fi
export acl
acl=$(echo "$acl" | grep -v "InsecureRequestWarning")
}
get_bucket_acl_with_user() {
record_command "get-bucket-acl" "client:s3api"
if [ $# -ne 3 ]; then
log 2 "'get bucket ACL with user' command requires bucket name, username, password"
return 1
@@ -30,6 +46,5 @@ get_bucket_acl_with_user() {
log 2 "error getting bucket ACLs: $bucket_acl"
return 1
fi
export bucket_acl
return 0
}

View File

@@ -1,6 +1,21 @@
#!/usr/bin/env bash
# Copyright 2024 Versity Software
# This file is licensed under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance
# with the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing,
# software distributed under the License is distributed on an
# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
# KIND, either express or implied. See the License for the
# specific language governing permissions and limitations
# under the License.
get_bucket_location() {
record_command "get-bucket-location" "client:$1"
if [[ $# -ne 2 ]]; then
echo "get bucket location command requires command type, bucket name"
return 1
@@ -19,10 +34,10 @@ get_bucket_location() {
return 1
fi
location=$(echo "$location_json" | jq -r '.LocationConstraint')
export location
}
get_bucket_location_aws() {
record_command "get-bucket-location" "client:s3api"
if [[ $# -ne 1 ]]; then
echo "get bucket location (aws) requires bucket name"
return 1
@@ -33,11 +48,11 @@ get_bucket_location_aws() {
return 1
fi
bucket_location=$(echo "$location_json" | jq -r '.LocationConstraint')
export bucket_location
return 0
}
get_bucket_location_s3cmd() {
record_command "get-bucket-location" "client:s3cmd"
if [[ $# -ne 1 ]]; then
echo "get bucket location (s3cmd) requires bucket name"
return 1
@@ -48,11 +63,11 @@ get_bucket_location_s3cmd() {
return 1
fi
bucket_location=$(echo "$info" | grep -o 'Location:.*' | awk '{print $2}')
export bucket_location
return 0
}
get_bucket_location_mc() {
record_command "get-bucket-location" "client:mc"
if [[ $# -ne 1 ]]; then
echo "get bucket location (mc) requires bucket name"
return 1
@@ -62,7 +77,7 @@ get_bucket_location_mc() {
echo "error getting s3cmd info: $info"
return 1
fi
# shellcheck disable=SC2034
bucket_location=$(echo "$info" | grep -o 'Location:.*' | awk '{print $2}')
export bucket_location
return 0
}

View File

@@ -1,6 +1,21 @@
#!/usr/bin/env bash
# Copyright 2024 Versity Software
# This file is licensed under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance
# with the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing,
# software distributed under the License is distributed on an
# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
# KIND, either express or implied. See the License for the
# specific language governing permissions and limitations
# under the License.
get_bucket_ownership_controls() {
record_command "get-bucket-ownership-controls" "client:s3api"
if [[ $# -ne 1 ]]; then
log 2 "'get bucket ownership controls' command requires bucket name"
return 1
@@ -13,7 +28,6 @@ get_bucket_ownership_controls() {
log 5 "Raw bucket Ownership Controls: $raw_bucket_ownership_controls"
bucket_ownership_controls=$(echo "$raw_bucket_ownership_controls" | grep -v "InsecureRequestWarning")
export bucket_ownership_controls
return 0
}
@@ -31,6 +45,5 @@ get_object_ownership_rule() {
return 1
fi
log 5 "object ownership rule: $object_ownership_rule"
export object_ownership_rule
return 0
}

View File

@@ -1,6 +1,21 @@
#!/usr/bin/env bash
# Copyright 2024 Versity Software
# This file is licensed under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance
# with the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing,
# software distributed under the License is distributed on an
# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
# KIND, either express or implied. See the License for the
# specific language governing permissions and limitations
# under the License.
get_bucket_policy() {
record_command "get-bucket-policy" "client:$1"
if [[ $# -ne 2 ]]; then
log 2 "get bucket policy command requires command type, bucket"
return 1
@@ -20,11 +35,11 @@ get_bucket_policy() {
log 2 "error getting policy: $bucket_policy"
return 1
fi
export bucket_policy
return 0
}
get_bucket_policy_aws() {
record_command "get-bucket-policy" "client:s3api"
if [[ $# -ne 1 ]]; then
log 2 "aws 'get bucket policy' command requires bucket"
return 1
@@ -42,11 +57,11 @@ get_bucket_policy_aws() {
else
bucket_policy=$(echo "$policy_json" | jq -r '.Policy')
fi
export bucket_policy
return 0
}
get_bucket_policy_with_user() {
record_command "get-bucket-policy" "client:s3api"
if [[ $# -ne 3 ]]; then
log 2 "'get bucket policy with user' command requires bucket, username, password"
return 1
@@ -62,11 +77,11 @@ get_bucket_policy_with_user() {
return 1
fi
fi
export bucket_policy
return 0
}
get_bucket_policy_s3cmd() {
record_command "get-bucket-policy" "client:s3cmd"
if [[ $# -ne 1 ]]; then
log 2 "s3cmd 'get bucket policy' command requires bucket"
return 1
@@ -105,11 +120,11 @@ get_bucket_policy_s3cmd() {
fi
done <<< "$info"
log 5 "bucket policy: $bucket_policy"
export bucket_policy
return 0
}
get_bucket_policy_mc() {
record_command "get-bucket-policy" "client:mc"
if [[ $# -ne 1 ]]; then
echo "aws 'get bucket policy' command requires bucket"
return 1
@@ -119,6 +134,5 @@ get_bucket_policy_mc() {
echo "error getting policy: $bucket_policy"
return 1
fi
export bucket_policy
return 0
}

View File

@@ -1,21 +1,32 @@
#!/usr/bin/env bash
# get bucket tags
# params: bucket
# Copyright 2024 Versity Software
# This file is licensed under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance
# with the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing,
# software distributed under the License is distributed on an
# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
# KIND, either express or implied. See the License for the
# specific language governing permissions and limitations
# under the License.
# params: client, bucket
# export 'tags' on success, return 1 for error
get_bucket_tagging() {
if [ $# -ne 2 ]; then
echo "get bucket tag command missing command type, bucket name"
return 1
fi
log 6 "get_bucket_tagging"
assert [ $# -eq 2 ]
record_command "get-bucket-tagging" "client:$1"
local result
if [[ $1 == 'aws' ]]; then
tags=$(aws --no-verify-ssl s3api get-bucket-tagging --bucket "$2" 2>&1) || result=$?
elif [[ $1 == 'mc' ]]; then
tags=$(mc --insecure tag list "$MC_ALIAS"/"$2" 2>&1) || result=$?
else
echo "invalid command type $1"
return 1
fail "invalid command type $1"
fi
log 5 "Tags: $tags"
tags=$(echo "$tags" | grep -v "InsecureRequestWarning")
@@ -28,4 +39,27 @@ get_bucket_tagging() {
return 1
fi
export tags
}
}
get_bucket_tagging_with_user() {
log 6 "get_bucket_tagging_with_user"
if [ $# -ne 3 ]; then
log 2 "'get_bucket_tagging_with_user' command requires ID, key, bucket"
return 1
fi
record_command "get-bucket-tagging" "client:s3api"
local result
if ! tags=$(AWS_ACCESS_KEY_ID="$1" AWS_SECRET_ACCESS_KEY="$2" aws --no-verify-ssl s3api get-bucket-tagging --bucket "$3" 2>&1); then
log 5 "tags error: $tags"
if [[ $tags =~ "No tags found" ]] || [[ $tags =~ "The TagSet does not exist" ]]; then
export tags=
return 0
fi
fail "unrecognized error getting bucket tagging with user: $tags"
return 1
fi
log 5 "raw tags data: $tags"
tags=$(echo "$tags" | grep -v "InsecureRequestWarning")
log 5 "modified tags data: $tags"
return 0
}
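A successful call leaves the response in $tags (empty when the bucket has no tags). An illustrative consumer for the aws client, assuming the standard TagSet layout of the s3api response and that a tag was set earlier in the test:

  get_bucket_tagging "aws" "$BUCKET_ONE_NAME" || fail "error getting bucket tagging"
  if [[ -n "$tags" ]]; then
    tag_key=$(echo "$tags" | jq -r '.TagSet[0].Key')
    tag_value=$(echo "$tags" | jq -r '.TagSet[0].Value')
    log 5 "first tag: $tag_key=$tag_value"
  fi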

View File

@@ -1,6 +1,21 @@
#!/usr/bin/env bash
# Copyright 2024 Versity Software
# This file is licensed under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance
# with the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing,
# software distributed under the License is distributed on an
# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
# KIND, either express or implied. See the License for the
# specific language governing permissions and limitations
# under the License.
get_bucket_versioning() {
record_command "get-bucket-versioning" "client:s3api"
if [[ $# -ne 2 ]]; then
log 2 "put bucket versioning command requires command type, bucket name"
return 1

View File

@@ -1,6 +1,21 @@
#!/usr/bin/env bash
# Copyright 2024 Versity Software
# This file is licensed under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance
# with the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing,
# software distributed under the License is distributed on an
# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
# KIND, either express or implied. See the License for the
# specific language governing permissions and limitations
# under the License.
get_object() {
record_command "get-object" "client:$1"
if [ $# -ne 4 ]; then
log 2 "get object command requires command type, bucket, key, destination"
return 1
@@ -8,26 +23,28 @@ get_object() {
local exit_code=0
local error
if [[ $1 == 's3' ]]; then
error=$(aws --no-verify-ssl s3 mv "s3://$2/$3" "$4" 2>&1) || exit_code=$?
get_object_error=$(aws --no-verify-ssl s3 mv "s3://$2/$3" "$4" 2>&1) || exit_code=$?
elif [[ $1 == 's3api' ]] || [[ $1 == 'aws' ]]; then
error=$(aws --no-verify-ssl s3api get-object --bucket "$2" --key "$3" "$4" 2>&1) || exit_code=$?
get_object_error=$(aws --no-verify-ssl s3api get-object --bucket "$2" --key "$3" "$4" 2>&1) || exit_code=$?
elif [[ $1 == 's3cmd' ]]; then
error=$(s3cmd "${S3CMD_OPTS[@]}" --no-check-certificate get "s3://$2/$3" "$4" 2>&1) || exit_code=$?
get_object_error=$(s3cmd "${S3CMD_OPTS[@]}" --no-check-certificate get "s3://$2/$3" "$4" 2>&1) || exit_code=$?
elif [[ $1 == 'mc' ]]; then
error=$(mc --insecure get "$MC_ALIAS/$2/$3" "$4" 2>&1) || exit_code=$?
get_object_error=$(mc --insecure get "$MC_ALIAS/$2/$3" "$4" 2>&1) || exit_code=$?
else
log 2 "'get object' command not implemented for '$1'"
return 1
fi
log 5 "get object exit code: $exit_code"
if [ $exit_code -ne 0 ]; then
log 2 "error getting object: $error"
log 2 "error getting object: $get_object_error"
export get_object_error
return 1
fi
return 0
}
get_object_with_range() {
record_command "get-object" "client:s3api"
if [[ $# -ne 4 ]]; then
log 2 "'get object with range' requires bucket, key, range, outfile"
return 1
@@ -41,6 +58,7 @@ get_object_with_range() {
}
get_object_with_user() {
record_command "get-object" "client:$1"
if [ $# -ne 6 ]; then
log 2 "'get object with user' command requires command type, bucket, key, save location, aws ID, aws secret key"
return 1
@@ -55,7 +73,6 @@ get_object_with_user() {
log 5 "put object exit code: $exit_code"
if [ $exit_code -ne 0 ]; then
log 2 "error getting object: $get_object_error"
export get_object_error
return 1
fi
return 0

View File

@@ -1,6 +1,21 @@
#!/usr/bin/env bash
# Copyright 2024 Versity Software
# This file is licensed under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance
# with the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing,
# software distributed under the License is distributed on an
# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
# KIND, either express or implied. See the License for the
# specific language governing permissions and limitations
# under the License.
get_object_attributes() {
record_command "get-object-attributes" "client:s3api"
if [[ $# -ne 2 ]]; then
log 2 "'get object attributes' command requires bucket, key"
return 1
@@ -12,6 +27,5 @@ get_object_attributes() {
fi
attributes=$(echo "$attributes" | grep -v "InsecureRequestWarning")
log 5 "$attributes"
export attributes
return 0
}

View File

@@ -1,15 +1,29 @@
#!/usr/bin/env bash
# Copyright 2024 Versity Software
# This file is licensed under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance
# with the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing,
# software distributed under the License is distributed on an
# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
# KIND, either express or implied. See the License for the
# specific language governing permissions and limitations
# under the License.
get_object_legal_hold() {
if [[ $# -ne 2 ]]; then
log 2 "'get object legal hold' command requires bucket, key"
return 1
fi
record_command "get-object-legal-hold" "client:s3api"
legal_hold=$(aws --no-verify-ssl s3api get-object-legal-hold --bucket "$1" --key "$2" 2>&1) || local get_result=$?
if [[ $get_result -ne 0 ]]; then
log 2 "error getting object legal hold: $legal_hold"
return 1
fi
export legal_hold
return 0
}

View File

@@ -1,15 +1,31 @@
#!/usr/bin/env bash
# Copyright 2024 Versity Software
# This file is licensed under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance
# with the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing,
# software distributed under the License is distributed on an
# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
# KIND, either express or implied. See the License for the
# specific language governing permissions and limitations
# under the License.
get_object_lock_configuration() {
record_command "get-object-lock-configuration" "client:s3api"
if [[ $# -ne 1 ]]; then
log 2 "'get object lock configuration' command missing bucket name"
return 1
fi
lock_config=$(aws --no-verify-ssl s3api get-object-lock-configuration --bucket "$1") || local get_result=$?
if [[ $get_result -ne 0 ]]; then
if ! lock_config=$(aws --no-verify-ssl s3api get-object-lock-configuration --bucket "$1" 2>&1); then
log 2 "error obtaining lock config: $lock_config"
# shellcheck disable=SC2034
get_object_lock_config_err=$lock_config
return 1
fi
export lock_config
lock_config=$(echo "$lock_config" | grep -v "InsecureRequestWarning")
return 0
}

View File

@@ -1,15 +1,30 @@
#!/usr/bin/env bash
# Copyright 2024 Versity Software
# This file is licensed under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance
# with the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing,
# software distributed under the License is distributed on an
# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
# KIND, either express or implied. See the License for the
# specific language governing permissions and limitations
# under the License.
get_object_retention() {
record_command "get-object-retention" "client:s3api"
if [[ $# -ne 2 ]]; then
log 2 "'get object retention' command requires bucket, key"
return 1
fi
retention=$(aws --no-verify-ssl s3api get-object-retention --bucket "$1" --key "$2" 2>&1) || local get_result=$?
if [[ $get_result -ne 0 ]]; then
if ! retention=$(aws --no-verify-ssl s3api get-object-retention --bucket "$1" --key "$2" 2>&1); then
log 2 "error getting object retention: $retention"
get_object_retention_error=$retention
export get_object_retention_error
return 1
fi
export retention
return 0
}

View File

@@ -1,6 +1,21 @@
#!/usr/bin/env bash
# Copyright 2024 Versity Software
# This file is licensed under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance
# with the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing,
# software distributed under the License is distributed on an
# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
# KIND, either express or implied. See the License for the
# specific language governing permissions and limitations
# under the License.
get_object_tagging() {
record_command "get-object-tagging" "client:$1"
if [ $# -ne 3 ]; then
log 2 "get object tag command missing command type, bucket, and/or key"
return 1

View File

@@ -1,10 +1,30 @@
#!/usr/bin/env bash
# Copyright 2024 Versity Software
# This file is licensed under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance
# with the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing,
# software distributed under the License is distributed on an
# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
# KIND, either express or implied. See the License for the
# specific language governing permissions and limitations
# under the License.
source ./tests/report.sh
# params: client, bucket name
# fail for invalid params, return
# 0 - bucket exists
# 1 - bucket does not exist
# 2 - misc error
head_bucket() {
if [ $# -ne 2 ]; then
echo "head bucket command missing command type, bucket name"
return 1
fi
log 6 "head_bucket"
record_command "head-bucket" "client:$1"
assert [ $# -eq 2 ]
local exit_code=0
if [[ $1 == "aws" ]] || [[ $1 == 's3api' ]] || [[ $1 == 's3' ]]; then
bucket_info=$(aws --no-verify-ssl s3api head-bucket --bucket "$2" 2>&1) || exit_code=$?
@@ -13,13 +33,14 @@ head_bucket() {
elif [[ $1 == 'mc' ]]; then
bucket_info=$(mc --insecure stat "$MC_ALIAS"/"$2" 2>&1) || exit_code=$?
else
echo "invalid command type $1"
return 1
fail "invalid command type $1"
fi
if [ $exit_code -ne 0 ]; then
echo "error getting bucket info: $bucket_info"
return 1
if [[ "$bucket_info" == *"404"* ]] || [[ "$bucket_info" == *"does not exist"* ]]; then
return 1
fi
log 2 "error getting bucket info: $bucket_info"
return 2
fi
export bucket_info
return 0
}
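The three return codes let callers distinguish a missing bucket from other failures. A minimal illustrative caller, using only helpers already present in these tests (log, fail):

  return_code=0
  head_bucket "s3api" "$BUCKET_ONE_NAME" || return_code=$?
  if [ "$return_code" -eq 2 ]; then
    fail "unexpected error checking bucket: $bucket_info"
  elif [ "$return_code" -eq 1 ]; then
    log 5 "bucket $BUCKET_ONE_NAME does not exist"
  fi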

View File

@@ -1,6 +1,21 @@
#!/usr/bin/env bash
# Copyright 2024 Versity Software
# This file is licensed under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance
# with the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing,
# software distributed under the License is distributed on an
# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
# KIND, either express or implied. See the License for the
# specific language governing permissions and limitations
# under the License.
head_object() {
record_command "head-object" "client:$1"
if [ $# -ne 3 ]; then
log 2 "head-object missing command, bucket name, object name"
return 2
@@ -25,6 +40,5 @@ head_object() {
return 2
fi
fi
export metadata
return 0
}

View File

@@ -1,6 +1,22 @@
#!/usr/bin/env bash
# Copyright 2024 Versity Software
# This file is licensed under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance
# with the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing,
# software distributed under the License is distributed on an
# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
# KIND, either express or implied. See the License for the
# specific language governing permissions and limitations
# under the License.
list_buckets() {
log 6 "list_buckets"
record_command "list-buckets" "client:$1"
if [ $# -ne 1 ]; then
echo "list buckets command missing command type"
return 1
@@ -10,7 +26,7 @@ list_buckets() {
if [[ $1 == 's3' ]]; then
buckets=$(aws --no-verify-ssl s3 ls 2>&1 s3://) || exit_code=$?
elif [[ $1 == 's3api' ]] || [[ $1 == 'aws' ]]; then
list_buckets_s3api || exit_code=$?
list_buckets_s3api "$AWS_ACCESS_KEY_ID" "$AWS_SECRET_ACCESS_KEY" || exit_code=$?
elif [[ $1 == 's3cmd' ]]; then
buckets=$(s3cmd "${S3CMD_OPTS[@]}" --no-check-certificate ls s3:// 2>&1) || exit_code=$?
elif [[ $1 == 'mc' ]]; then
@@ -33,16 +49,56 @@ list_buckets() {
bucket_name=$(echo "$line" | awk '{print $NF}')
bucket_array+=("${bucket_name%/}")
done <<< "$buckets"
export bucket_array
return 0
}
list_buckets_with_user() {
record_command "list-buckets" "client:$1"
if [ $# -ne 3 ]; then
echo "'list buckets as user' command missing command type, username, password"
return 1
fi
local exit_code=0
if [[ $1 == 's3' ]]; then
buckets=$(AWS_ACCESS_KEY_ID="$2" AWS_SECRET_ACCESS_KEY="$3" aws --no-verify-ssl s3 ls 2>&1 s3://) || exit_code=$?
elif [[ $1 == 's3api' ]] || [[ $1 == 'aws' ]]; then
list_buckets_s3api "$2" "$3" || exit_code=$?
elif [[ $1 == 's3cmd' ]]; then
buckets=$(s3cmd "${S3CMD_OPTS[@]}" --no-check-certificate --access_key="$2" --secret_key="$3" ls s3:// 2>&1) || exit_code=$?
elif [[ $1 == 'mc' ]]; then
buckets=$(mc --insecure ls "$MC_ALIAS" 2>&1) || exit_code=$?
else
echo "list buckets command not implemented for '$1'"
return 1
fi
if [ $exit_code -ne 0 ]; then
echo "error listing buckets: $buckets"
return 1
fi
if [[ $1 == 's3api' ]] || [[ $1 == 'aws' ]]; then
return 0
fi
bucket_array=()
while IFS= read -r line; do
bucket_name=$(echo "$line" | awk '{print $NF}')
bucket_array+=("${bucket_name%/}")
done <<< "$buckets"
return 0
}
list_buckets_s3api() {
output=$(aws --no-verify-ssl s3api list-buckets 2>&1) || exit_code=$?
if [[ $exit_code -ne 0 ]]; then
if [[ $# -ne 2 ]]; then
log 2 "'list_buckets_s3api' requires username, password"
return 1
fi
if ! output=$(AWS_ACCESS_KEY_ID="$1" AWS_SECRET_ACCESS_KEY="$2" aws --no-verify-ssl s3api list-buckets 2>&1); then
echo "error listing buckets: $output"
return 1
fi
log 5 "bucket data: $output"
modified_output=""
while IFS= read -r line; do
@@ -55,6 +111,5 @@ list_buckets_s3api() {
names=$(jq -r '.Buckets[].Name' <<<"$modified_output")
IFS=$'\n' read -rd '' -a bucket_array <<<"$names"
export bucket_array
return 0
}
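Both list_buckets and list_buckets_with_user leave the parsed names in bucket_array. A hedged example of scanning it for an expected bucket, assuming the default credentials are exported in the test environment:

  list_buckets "s3api" || fail "error listing buckets"
  found="false"
  for bucket in "${bucket_array[@]}"; do
    if [ "$bucket" == "$BUCKET_ONE_NAME" ]; then
      found="true"
    fi
  done
  [ "$found" == "true" ] || fail "expected bucket not found in listing"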

View File

@@ -1,6 +1,21 @@
#!/usr/bin/env bash
# Copyright 2024 Versity Software
# This file is licensed under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance
# with the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing,
# software distributed under the License is distributed on an
# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
# KIND, either express or implied. See the License for the
# specific language governing permissions and limitations
# under the License.
list_multipart_uploads() {
record_command "list-multipart-uploads" "client:s3api"
if [[ $# -ne 1 ]]; then
log 2 "'list multipart uploads' command requires bucket name"
return 1
@@ -9,19 +24,18 @@ list_multipart_uploads() {
log 2 "error listing uploads: $uploads"
return 1
fi
export uploads
}
list_multipart_uploads_with_user() {
record_command "list-multipart-uploads" "client:s3api"
if [[ $# -ne 3 ]]; then
log 2 "'list multipart uploads' command requires bucket name, username, password"
return 1
fi
if ! uploads=$(AWS_ACCESS_KEY_ID="$2" AWS_SECRET_ACCESS_KEY="$3" aws --no-verify-ssl s3api list-multipart-uploads --bucket "$1" 2>&1); then
log 2 "error listing uploads: $uploads"
# shellcheck disable=SC2034
list_multipart_uploads_error=$uploads
export list_multipart_uploads_error
return 1
fi
export uploads
}

View File

@@ -1,6 +1,21 @@
#!/usr/bin/env bash
# Copyright 2024 Versity Software
# This file is licensed under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance
# with the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing,
# software distributed under the License is distributed on an
# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
# KIND, either express or implied. See the License for the
# specific language governing permissions and limitations
# under the License.
list_object_versions() {
record_command "list-object-versions" "client:s3api"
if [[ $# -ne 1 ]]; then
log 2 "'list object versions' command requires bucket name"
return 1
@@ -10,6 +25,5 @@ list_object_versions() {
log 2 "error listing object versions: $versions"
return 1
fi
export versions
return 0
}

View File

@@ -1,32 +1,46 @@
#!/usr/bin/env bash
# Copyright 2024 Versity Software
# This file is licensed under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance
# with the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing,
# software distributed under the License is distributed on an
# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
# KIND, either express or implied. See the License for the
# specific language governing permissions and limitations
# under the License.
# args: client, bucket name
# return 0 if able to list, 1 if not
list_objects() {
log 6 "list_objects"
record_command "list-objects" "client:$1"
if [ $# -ne 2 ]; then
echo "list objects command requires command type, and bucket or folder"
return 1
fi
local exit_code=0
local output
if [[ $1 == "aws" ]] || [[ $1 == 's3' ]]; then
output=$(aws --no-verify-ssl s3 ls s3://"$2" 2>&1) || exit_code=$?
elif [[ $1 == 's3api' ]]; then
list_objects_s3api "$2" || exit_code=$?
elif [[ $1 == 's3cmd' ]]; then
output=$(s3cmd "${S3CMD_OPTS[@]}" --no-check-certificate ls s3://"$2" 2>&1) || exit_code=$?
elif [[ $1 == 'mc' ]]; then
output=$(mc --insecure ls "$MC_ALIAS"/"$2" 2>&1) || exit_code=$?
else
echo "invalid command type $1"
return 1
fi
if [ $exit_code -ne 0 ]; then
echo "error listing objects: $output"
log 2 "'list_objects' command requires client, bucket"
return 1
fi
if [[ $1 == 's3api' ]]; then
return 0
local output
local result=0
if [[ $1 == "aws" ]] || [[ $1 == 's3' ]]; then
output=$(aws --no-verify-ssl s3 ls s3://"$2" 2>&1) || result=$?
elif [[ $1 == 's3api' ]]; then
list_objects_s3api "$2" || result=$?
return $result
elif [[ $1 == 's3cmd' ]]; then
output=$(s3cmd "${S3CMD_OPTS[@]}" --no-check-certificate ls s3://"$2" 2>&1) || result=$?
elif [[ $1 == 'mc' ]]; then
output=$(mc --insecure ls "$MC_ALIAS"/"$2" 2>&1) || result=$?
else
fail "invalid command type $1"
return 1
fi
# shellcheck disable=SC2154
assert_success "error listing objects: $output"
object_array=()
while IFS= read -r line; do
@@ -39,23 +53,21 @@ list_objects() {
export object_array
}
# args: bucket name
# fail if unable to list
list_objects_s3api() {
if [[ $# -ne 1 ]]; then
echo "list objects s3api command requires bucket name"
log 6 "list_objects_s3api"
if [ $# -ne 1 ]; then
log 2 "'list_objects_s3api' requires bucket"
return 1
fi
output=$(aws --no-verify-ssl s3api list-objects --bucket "$1" 2>&1) || local exit_code=$?
if [[ $exit_code -ne 0 ]]; then
echo "error listing objects: $output"
if ! output=$(aws --no-verify-ssl s3api list-objects --bucket "$1" 2>&1); then
log 2 "error listing objects: $output"
return 1
fi
modified_output=""
while IFS= read -r line; do
if [[ $line != *InsecureRequestWarning* ]]; then
modified_output+="$line"
fi
done <<< "$output"
log 5 "list_objects_s3api: raw data returned: $output"
modified_output=$(echo "$output" | grep -v "InsecureRequestWarning")
object_array=()
log 5 "modified output: $modified_output"
@@ -65,6 +77,52 @@ list_objects_s3api() {
keys=$(echo "$contents" | jq -r '.Key')
IFS=$'\n' read -rd '' -a object_array <<<"$keys"
fi
return 0
}
export object_array
}
# list objects in bucket, v1
# param: bucket
# export objects on success, return 1 for failure
list_objects_s3api_v1() {
if [ $# -lt 1 ] || [ $# -gt 2 ]; then
echo "list objects command requires bucket, (optional) delimiter"
return 1
fi
if [ "$2" == "" ]; then
objects=$(aws --no-verify-ssl s3api list-objects --bucket "$1") || local result=$?
else
objects=$(aws --no-verify-ssl s3api list-objects --bucket "$1" --delimiter "$2") || local result=$?
fi
if [[ $result -ne 0 ]]; then
echo "error listing objects: $objects"
return 1
fi
export objects
}
list_objects_with_prefix() {
if [ $# -ne 3 ]; then
log 2 "'list_objects_with_prefix' command requires, client, bucket, prefix"
return 1
fi
local result=0
if [ "$1" == 's3' ]; then
objects=$(aws --no-verify-ssl s3 ls s3://"$2/$3" 2>&1) || result=$?
elif [ "$1" == 's3api' ]; then
objects=$(aws --no-verify-ssl s3api list-objects --bucket "$2" --prefix "$3" 2>&1) || result=$?
elif [ "$1" == 's3cmd' ]; then
objects=$(s3cmd "${S3CMD_OPTS[@]}" --no-check-certificate ls s3://"$2/$3" 2>&1) || result=$?
elif [[ "$1" == 'mc' ]]; then
objects=$(mc --insecure ls "$MC_ALIAS/$2/$3" 2>&1) || result=$?
else
log 2 "invalid command type '$1'"
return 1
fi
if [ $result -ne 0 ]; then
log 2 "error listing objects: $objects"
return 1
fi
log 5 "output: $objects"
export objects
return 0
}
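A sketch of consuming the exported $objects after listing with a prefix via the s3api client (the prefix is an illustrative value; '.Contents | length' evaluates to 0 when nothing matches):

  if ! list_objects_with_prefix "s3api" "$BUCKET_ONE_NAME" "prefix/"; then
    fail "error listing objects with prefix"
  fi
  key_count=$(echo "$objects" | jq -r '.Contents | length')
  [ "$key_count" -ge 1 ] || fail "expected at least one key under prefix"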

View File

@@ -0,0 +1,31 @@
#!/usr/bin/env bash
# Copyright 2024 Versity Software
# This file is licensed under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance
# with the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing,
# software distributed under the License is distributed on an
# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
# KIND, either express or implied. See the License for the
# specific language governing permissions and limitations
# under the License.
# list objects in bucket, v2
# param: bucket
# export objects on success, return 1 for failure
list_objects_v2() {
if [ $# -ne 1 ]; then
echo "list objects command missing bucket and/or path"
return 1
fi
record_command "list-objects-v2 client:s3api"
objects=$(aws --no-verify-ssl s3api list-objects-v2 --bucket "$1") || local result=$?
if [[ $result -ne 0 ]]; then
echo "error listing objects: $objects"
return 1
fi
}

View File

@@ -0,0 +1,39 @@
#!/usr/bin/env bash
# Copyright 2024 Versity Software
# This file is licensed under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance
# with the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing,
# software distributed under the License is distributed on an
# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
# KIND, either express or implied. See the License for the
# specific language governing permissions and limitations
# under the License.
list_parts() {
if [[ $# -ne 3 ]]; then
log 2 "'list-parts' command requires bucket, key, upload ID"
return 1
fi
record_command "list-parts" "client:s3api"
if ! listed_parts=$(aws --no-verify-ssl s3api list-parts --bucket "$1" --key "$2" --upload-id "$3" 2>&1); then
log 2 "Error listing multipart upload parts: $listed_parts"
return 1
fi
}
list_parts_with_user() {
if [ $# -ne 5 ]; then
log 2 "'list_parts_with_user' requires username, password, bucket, key, upload ID"
return 1
fi
record_command 'list-parts' 'client:s3api'
if ! listed_parts=$(AWS_ACCESS_KEY_ID="$1" AWS_SECRET_ACCESS_KEY="$2" aws --no-verify-ssl s3api list-parts --bucket "$3" --key "$4" --upload-id "$5" 2>&1); then
log 2 "Error listing multipart upload parts: $listed_parts"
return 1
fi
}

View File

@@ -1,34 +1,106 @@
#!/usr/bin/env bash
put_bucket_acl() {
if [[ $# -ne 3 ]]; then
log 2 "put bucket acl command requires command type, bucket name, acls or username"
# Copyright 2024 Versity Software
# This file is licensed under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance
# with the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing,
# software distributed under the License is distributed on an
# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
# KIND, either express or implied. See the License for the
# specific language governing permissions and limitations
# under the License.
source ./tests/util_file.sh
put_bucket_acl_s3api() {
log 6 "put_bucket_acl_s3api"
record_command "put-bucket-acl" "client:s3api"
if [[ $# -ne 2 ]]; then
log 2 "put bucket acl command requires bucket name, acl file"
return 1
fi
local error=""
local put_result=0
if [[ $1 == 's3api' ]]; then
log 5 "bucket name: $2, acls: $3"
error=$(aws --no-verify-ssl s3api put-bucket-acl --bucket "$2" --access-control-policy "file://$3" 2>&1) || put_result=$?
elif [[ $1 == 's3cmd' ]]; then
error=$(s3cmd "${S3CMD_OPTS[@]}" --no-check-certificate setacl "s3://$2" --acl-grant=read:"$3" 2>&1) || put_result=$?
else
log 2 "put_bucket_acl not implemented for '$1'"
return 1
fi
if [[ $put_result -ne 0 ]]; then
log 5 "bucket name: $1, acls: $2"
if ! error=$(aws --no-verify-ssl s3api put-bucket-acl --bucket "$1" --access-control-policy "file://$2" 2>&1); then
log 2 "error putting bucket acl: $error"
return 1
fi
return 0
}
put_bucket_acl_s3api_with_user() {
log 6 "put_bucket_acl_s3api_with_user"
record_command "put-bucket-acl" "client:s3api"
if [[ $# -ne 4 ]]; then
log 2 "put bucket acl command requires bucket name, acl file, username, password"
return 1
fi
log 5 "bucket name: $1, acls: $2"
if ! error=$(AWS_ACCESS_KEY_ID="$3" AWS_SECRET_ACCESS_KEY="$4" aws --no-verify-ssl s3api put-bucket-acl --bucket "$1" --access-control-policy "file://$2" 2>&1); then
log 2 "error putting bucket acl: $error"
return 1
fi
return 0
}
reset_bucket_acl() {
if [ $# -ne 1 ]; then
log 2 "'reset_bucket_acl' requires bucket name"
return 1
fi
acl_file="acl_file"
if ! create_test_files "$acl_file"; then
log 2 "error creating test files"
return 1
fi
# shellcheck disable=SC2154
cat <<EOF > "$test_file_folder/$acl_file"
{
"Grants": [
{
"Grantee": {
"ID": "$AWS_ACCESS_KEY_ID",
"Type": "CanonicalUser"
},
"Permission": "FULL_CONTROL"
}
],
"Owner": {
"ID": "$AWS_ACCESS_KEY_ID"
}
}
EOF
if ! put_bucket_acl_s3api "$BUCKET_ONE_NAME" "$test_file_folder/$acl_file"; then
log 2 "error putting bucket acl (s3api)"
return 1
fi
delete_test_files "$acl_file"
return 0
}
put_bucket_canned_acl_s3cmd() {
record_command "put-bucket-acl" "client:s3cmd"
if [[ $# -ne 2 ]]; then
log 2 "put bucket acl command requires bucket name, permission"
return 1
fi
if ! error=$(s3cmd "${S3CMD_OPTS[@]}" --no-check-certificate setacl "s3://$1" "$2" 2>&1); then
log 2 "error putting s3cmd canned ACL: $error"
return 1
fi
return 0
}
put_bucket_canned_acl() {
if [[ $# -ne 2 ]]; then
log 2 "'put bucket canned acl' command requires bucket name, canned ACL"
return 1
fi
if ! error=$(aws --no-verify-ssl s3api put-bucket-acl --bucket "$1" --acl "$2"); then
record_command "put-bucket-acl" "client:s3api"
if ! error=$(aws --no-verify-ssl s3api put-bucket-acl --bucket "$1" --acl "$2" 2>&1); then
log 2 "error re-setting bucket acls: $error"
return 1
fi
@@ -36,11 +108,12 @@ put_bucket_canned_acl() {
}
put_bucket_canned_acl_with_user() {
if [[ $# -ne 2 ]]; then
if [[ $# -ne 4 ]]; then
log 2 "'put bucket canned acl with user' command requires bucket name, canned ACL, username, password"
return 1
fi
if ! error=$(AWS_ACCESS_KEY_ID="$3" AWS_SECRET_ACCESS_KEY="$4" aws --no-verify-ssl s3api put-bucket-acl --bucket "$1" --acl "$2"); then
record_command "put-bucket-acl" "client:s3api"
if ! error=$(AWS_ACCESS_KEY_ID="$3" AWS_SECRET_ACCESS_KEY="$4" aws --no-verify-ssl s3api put-bucket-acl --bucket "$1" --acl "$2" 2>&1); then
log 2 "error re-setting bucket acls: $error"
return 1
fi

View File

@@ -1,14 +1,25 @@
#!/usr/bin/env bash
# Copyright 2024 Versity Software
# This file is licensed under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance
# with the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing,
# software distributed under the License is distributed on an
# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
# KIND, either express or implied. See the License for the
# specific language governing permissions and limitations
# under the License.
# fail if unable to put bucket ownership controls
put_bucket_ownership_controls() {
if [[ $# -ne 2 ]]; then
log 2 "'put bucket ownership controls' command requires bucket name, control"
return 1
fi
if ! controls_error=$(aws --no-verify-ssl s3api put-bucket-ownership-controls --bucket "$1" \
--ownership-controls="Rules=[{ObjectOwnership=$2}]" 2>&1); then
log 2 "error putting bucket ownership controls: $controls_error"
return 1
fi
return 0
log 6 "put_bucket_ownership_controls"
record_command "put-bucket-ownership-controls" "client:s3api"
assert [ $# -eq 2 ]
run aws --no-verify-ssl s3api put-bucket-ownership-controls --bucket "$1" --ownership-controls="Rules=[{ObjectOwnership=$2}]"
# shellcheck disable=SC2154
assert_success "error putting bucket ownership controls: $output"
}

View File

@@ -1,6 +1,21 @@
#!/usr/bin/env bash
# Copyright 2024 Versity Software
# This file is licensed under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance
# with the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing,
# software distributed under the License is distributed on an
# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
# KIND, either express or implied. See the License for the
# specific language governing permissions and limitations
# under the License.
put_bucket_policy() {
record_command "put-bucket-policy" "client:$1"
if [[ $# -ne 3 ]]; then
log 2 "'put bucket policy' command requires command type, bucket, policy file"
return 1
@@ -26,6 +41,7 @@ put_bucket_policy() {
}
put_bucket_policy_with_user() {
record_command "put-bucket-policy" "client:s3api"
if [[ $# -ne 4 ]]; then
log 2 "'put bucket policy with user' command requires bucket, policy file, username, password"
return 1

View File

@@ -0,0 +1,50 @@
#!/usr/bin/env bash
# Copyright 2024 Versity Software
# This file is licensed under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance
# with the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing,
# software distributed under the License is distributed on an
# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
# KIND, either express or implied. See the License for the
# specific language governing permissions and limitations
# under the License.
put_bucket_tagging() {
log 6 "put_bucket_tagging"
if [ $# -ne 4 ]; then
log 2 "bucket tag command missing command type, bucket name, key, value"
return 1
fi
local error
local result=0
record_command "put-bucket-tagging" "client:$1"
if [[ $1 == 'aws' ]] || [[ $1 == 's3api' ]]; then
error=$(aws --no-verify-ssl s3api put-bucket-tagging --bucket "$2" --tagging "TagSet=[{Key=$3,Value=$4}]") || result=$?
elif [[ $1 == 'mc' ]]; then
error=$(mc --insecure tag set "$MC_ALIAS"/"$2" "$3=$4" 2>&1) || result=$?
else
log 2 "invalid command type $1"
return 1
fi
if [[ $result -ne 0 ]]; then
log 2 "Error adding bucket tag: $error"
return 1
fi
return 0
}
put_bucket_tagging_with_user() {
log 6 "put_bucket_tagging_with_user"
assert [ $# -eq 5 ]
record_command "put-bucket-tagging" "client:$1"
if ! error=$(AWS_ACCESS_KEY_ID="$4" AWS_SECRET_ACCESS_KEY="$5" aws --no-verify-ssl s3api put-bucket-tagging --bucket "$1" --tagging "TagSet=[{Key=$2,Value=$3}]"); then
log 2 "error putting bucket tagging: $error"
return 1
fi
return 0
}

View File

@@ -1,6 +1,21 @@
#!/usr/bin/env bash
# Copyright 2024 Versity Software
# This file is licensed under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance
# with the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing,
# software distributed under the License is distributed on an
# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
# KIND, either express or implied. See the License for the
# specific language governing permissions and limitations
# under the License.
put_bucket_versioning() {
record_command "put-bucket-versioning" "client:s3api"
if [[ $# -ne 3 ]]; then
log 2 "put bucket versioning command requires command type, bucket name, 'Enabled' or 'Suspended'"
return 1

View File

@@ -1,6 +1,24 @@
#!/usr/bin/env bash
# Copyright 2024 Versity Software
# This file is licensed under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance
# with the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing,
# software distributed under the License is distributed on an
# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
# KIND, either express or implied. See the License for the
# specific language governing permissions and limitations
# under the License.
source ./tests/report.sh
put_object() {
log 6 "put_object"
record_command "put-object" "client:$1"
if [ $# -ne 4 ]; then
log 2 "put object command requires command type, source, destination bucket, destination key"
return 1
@@ -28,6 +46,7 @@ put_object() {
}
put_object_with_user() {
record_command "put-object" "client:$1"
if [ $# -ne 6 ]; then
log 2 "put object command requires command type, source, destination bucket, destination key, aws ID, aws secret key"
return 1

View File

@@ -1,6 +1,21 @@
#!/usr/bin/env bash
# Copyright 2024 Versity Software
# This file is licensed under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance
# with the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing,
# software distributed under the License is distributed on an
# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
# KIND, either express or implied. See the License for the
# specific language governing permissions and limitations
# under the License.
put_object_legal_hold() {
record_command "put-object-legal-hold" "client:s3api"
if [[ $# -ne 3 ]]; then
log 2 "'put object legal hold' command requires bucket, key, hold status ('ON' or 'OFF')"
return 1

View File

@@ -0,0 +1,41 @@
#!/usr/bin/env bash
# Copyright 2024 Versity Software
# This file is licensed under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance
# with the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing,
# software distributed under the License is distributed on an
# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
# KIND, either express or implied. See the License for the
# specific language governing permissions and limitations
# under the License.
put_object_lock_configuration() {
if [[ $# -ne 4 ]]; then
log 2 "'put-object-lock-configuration' command requires bucket name, enabled, mode, period"
return 1
fi
local config="{\"ObjectLockEnabled\": \"$2\", \"Rule\": {\"DefaultRetention\": {\"Mode\": \"$3\", \"Days\": $4}}}"
if ! error=$(aws --no-verify-ssl s3api put-object-lock-configuration --bucket "$1" --object-lock-configuration "$config" 2>&1); then
log 2 "error putting object lock configuration: $error"
return 1
fi
return 0
}
put_object_lock_configuration_disabled() {
if [[ $# -ne 1 ]]; then
log 2 "'put-object-lock-configuration' disable command requires bucket name"
return 1
fi
local config="{\"ObjectLockEnabled\": \"Enabled\"}"
if ! error=$(aws --no-verify-ssl s3api put-object-lock-configuration --bucket "$1" --object-lock-configuration "$config" 2>&1); then
log 2 "error putting object lock configuration: $error"
return 1
fi
return 0
}
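The two helpers differ only in whether a DefaultRetention rule is included. An illustrative round trip with get_object_lock_configuration from the earlier file, assuming the bucket was created with object lock enabled:

  put_object_lock_configuration "$BUCKET_ONE_NAME" "Enabled" "GOVERNANCE" 1 || fail "error putting lock configuration"
  get_object_lock_configuration "$BUCKET_ONE_NAME" || fail "error getting lock configuration"
  mode=$(echo "$lock_config" | jq -r '.ObjectLockConfiguration.Rule.DefaultRetention.Mode')
  [ "$mode" == "GOVERNANCE" ] || fail "unexpected retention mode: $mode"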

View File

@@ -1,6 +1,21 @@
#!/usr/bin/env bash
# Copyright 2024 Versity Software
# This file is licensed under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance
# with the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing,
# software distributed under the License is distributed on an
# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
# KIND, either express or implied. See the License for the
# specific language governing permissions and limitations
# under the License.
put_object_retention() {
record_command "put-object-retention" "client:s3api"
if [[ $# -ne 4 ]]; then
log 2 "'put object retention' command requires bucket, key, retention mode, retention date"
return 1

View File

@@ -0,0 +1,38 @@
#!/usr/bin/env bash
# Copyright 2024 Versity Software
# This file is licensed under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance
# with the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing,
# software distributed under the License is distributed on an
# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
# KIND, either express or implied. See the License for the
# specific language governing permissions and limitations
# under the License.
put_object_tagging() {
if [ $# -ne 5 ]; then
log 2 "'put-object-tagging' command missing command type, object name, file, key, and/or value"
return 1
fi
local error
local result
record_command "put-object-tagging" "client:$1"
if [[ $1 == 'aws' ]]; then
error=$(aws --no-verify-ssl s3api put-object-tagging --bucket "$2" --key "$3" --tagging "TagSet=[{Key=$4,Value=$5}]" 2>&1) || result=$?
elif [[ $1 == 'mc' ]]; then
error=$(mc --insecure tag set "$MC_ALIAS"/"$2"/"$3" "$4=$5" 2>&1) || result=$?
else
log 2 "invalid command type $1"
return 1
fi
if [[ $result -ne 0 ]]; then
log 2 "Error adding object tag: $error"
return 1
fi
return 0
}
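An illustrative pairing with get_object_tagging for the aws client (the object key and tag are placeholder values, and the check assumes get_object_tagging leaves the s3api response in $tags like the bucket-level helper does):

  put_object_tagging "aws" "$BUCKET_ONE_NAME" "my-object" "owner" "qa" || fail "error putting object tag"
  get_object_tagging "aws" "$BUCKET_ONE_NAME" "my-object" || fail "error getting object tag"
  echo "$tags" | jq -e '.TagSet[] | select(.Key == "owner" and .Value == "qa")' > /dev/null || fail "expected tag not found"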

Some files were not shown because too many files have changed in this diff.