Compare commits


21 Commits
v0.1 ... v0.2

Author SHA1 Message Date
Ben McClelland
a95d03c498 Merge pull request #78 from versity/ben/cleanup_base
Ben/cleanup base
2023-06-12 08:00:05 -07:00
Ben McClelland
feace16fa9 set response headers for get object 2023-06-12 07:46:09 -07:00
Ben McClelland
33e1d39138 cleanup responses to split out expected xml body response 2023-06-12 07:46:09 -07:00
Ben McClelland
115910eafe Merge pull request #72 from versity/ben/posix_multipart
Ben/posix multipart
2023-06-12 07:45:35 -07:00
Ben McClelland
ef06d11d7c fix: get simple multipart upload tests passing 2023-06-12 07:37:21 -07:00
Ben McClelland
2697edd40a head object time format 2023-06-12 07:15:57 -07:00
Ben McClelland
f88cb9fa7f Merge pull request #70 from versity/ben/internal_error_log
feat: add log for internal server errors not of type s3err.APIError
2023-06-12 07:15:16 -07:00
Ben McClelland
38bb042a32 Merge pull request #74 from versity/benmcclelland-patch-1
Added dark/light theme logo and footer to README.md
2023-06-10 20:21:26 -07:00
Ben McClelland
7682defa95 Added dark/light theme logo and footer to README.md 2023-06-10 20:21:07 -07:00
Ben McClelland
12df87577b Merge pull request #73 from versity/benmcclelland-patch-1
Add documentation/wiki links to README.md
2023-06-10 13:57:05 -07:00
Ben McClelland
92a763e53a Add documentation/wiki links to README.md 2023-06-10 13:56:03 -07:00
Ben McClelland
c3aaf1538e Merge pull request #71 from versity/ben/readme_updates
update README
2023-06-09 10:59:34 -07:00
Ben McClelland
c7625c9b58 update README 2023-06-09 10:58:30 -07:00
Ben McClelland
50357ce61a feat: add log for internal server errors not of type s3err.APIError 2023-06-09 10:35:21 -07:00
Jon Austin
160a99cbbd feat: Added admin CLI, created api endpoint for creating new user, cr… (#68)
* feat: Added admin CLI, created api endpoint for creating new user, created action for admin CLI to create a new user, changed the authentication middleware to verify the users from db

* feat: Added both single and multi user support, added caching layer for getting IAM users

* fix: Added all the files
2023-06-09 10:30:20 -07:00
Ben McClelland
0350215e2e Merge pull request #69 from versity/ben/dir_objects
Ben/dir objects
2023-06-09 10:25:32 -07:00
Ben McClelland
de346816fc fix put directory object 2023-06-08 22:32:54 -07:00
Ben McClelland
f1ac6b808b fix list objects for directory type objects 2023-06-08 22:04:08 -07:00
Ben McClelland
8ade0c96cf Merge pull request #67 from versity/ben/list
fix list objects
2023-06-08 10:33:54 -07:00
Ben McClelland
f4400edaa0 fix list objects 2023-06-07 22:57:00 -07:00
meghanmcclelland
f337aa288d Update README.md (#66)
* Update README.md

* Update README.md
2023-06-07 17:34:01 -07:00
24 changed files with 1053 additions and 372 deletions


@@ -1,24 +1,37 @@
# Versity S3 Gateway
# The Versity Gateway: A High-Performance Open Source S3 to File Translation Tool
[![Versity Logo](https://www.versity.com/wp-content/themes/versity-theme/assets/img/svg/logo.svg)](https://www.versity.com)
<picture>
<source media="(prefers-color-scheme: dark)" srcset="https://github.com/versity/versitygw/blob/assets/assets/logo-white.svg">
<source media="(prefers-color-scheme: light)" srcset="https://github.com/versity/versitygw/blob/assets/assets/logo.svg">
<a href="https://www.versity.com"><img alt="Versity Software logo image." src="https://github.com/versity/versitygw/blob/assets/assets/logo.svg"></a>
</picture>
[![Apache V2 License](https://img.shields.io/badge/license-Apache%20V2-blue.svg)](https://github.com/versity/versitygw/blob/main/LICENSE)
The Versity S3 Gateway provides an S3 server that translates S3 client access to a modular backend service. The server translates incoming S3 API requests and transforms them into equivalent operations to the backend service. By leveraging this gateway server, applications can interact with the S3-compatible API on top of already existing storage systems. This project enables leveraging existing infrastructure investments while seamlessly integrating with S3-compatible systems, offering increased flexibility and compatibility in managing data storage.
The Versity Gateway: A High-Performance Open Source S3 to File Translation Tool
The Versity S3 Gateway is focused on performance, simplicity, and expandability. New backend types can be added to support new storage systems. The initial backend is a posix filesystem. The posix backend allows standing up an S3 compatible server from an existing filesystem mount with a simple command.
Current status: Alpha, in development not yet suited for production use
The gateway is completely stateless. Mutliple gateways can host the same backend service and clients can load balance across the gateways.
See project [documentation](https://github.com/versity/versitygw/wiki) on the wiki.
The Versity Gateway is a simple-to-use tool for seamless inline translation between AWS S3 object commands and file-based storage systems. The Versity Gateway bridges the gap between S3-reliant applications and file-based storage systems, enabling enhanced compatibility and integration while offering exceptional scalability.
The server translates incoming S3 API requests and transforms them into equivalent operations to the backend service. By leveraging this gateway server, applications can interact with the S3-compatible API on top of already existing storage systems. This project enables leveraging existing infrastructure investments while seamlessly integrating with S3-compatible systems, offering increased flexibility and compatibility in managing data storage.
The Versity Gateway is focused on performance, simplicity, and expandability. The Versity Gateway is designed with modularity in mind, enabling future extensions to support additional backend storage systems. At present, the Versity Gateway supports any generic POSIX file backend storage and Versity's open source ScoutFS filesystem.
The gateway is completely stateless. Multiple Versity Gateway instances may be deployed in a cluster to increase aggregate throughput. The Versity Gateway's stateless architecture allows any request to be serviced by any gateway, thereby distributing workloads and enhancing performance. Load balancers may be used to evenly distribute requests across the cluster of gateways for optimal performance.
The S3 HTTP(S) server and routing is implemented using the [Fiber](https://gofiber.io) web framework. This framework is actively developed with a focus on performance. S3 API compatibility leverages the official [aws-sdk-go-v2](https://github.com/aws/aws-sdk-go-v2) whenever possible for maximum service compatibility with AWS S3.
## Getting Started
See the [Quickstart](https://github.com/versity/versitygw/wiki/Quickstart) documentation.
### Run the gateway with posix backend:
```
mkdir /tmp/vgw
ADMIN_ACCESS_KEY="testuser" ADMIN_SECRET_KEY="secret" ./versitygw --port :10000 posix /tmp/vgw
ROOT_ACCESS_KEY="testuser" ROOT_SECRET_KEY="secret" ./versitygw --port :10000 posix /tmp/vgw
```
This will enable an S3 server on the current host listening on port 10000 and hosting the directory `/tmp/vgw`.
@@ -34,3 +47,18 @@ The command format is
versitygw [global options] command [command options] [arguments...]
```
The global options are specified before the backend type and the backend options are specified after.
***
#### Versity gives you clarity and control over your archival storage, so you can allocate more resources to your core mission.
### Contact
info@versity.com <br />
+1 844 726 8826
### @versitysoftware
[![linkedin](https://github.com/versity/versitygw/blob/assets/assets/linkedin.jpg)](https://www.linkedin.com/company/versity/) &nbsp;
[![twitter](https://github.com/versity/versitygw/blob/assets/assets/twitter.jpg)](https://twitter.com/VersitySoftware) &nbsp;
[![facebook](https://github.com/versity/versitygw/blob/assets/assets/facebook.jpg)](https://www.facebook.com/versitysoftware) &nbsp;
[![instagram](https://github.com/versity/versitygw/blob/assets/assets/instagram.jpg)](https://www.instagram.com/versitysoftware/) &nbsp;


@@ -14,24 +14,124 @@
package auth
import "github.com/versity/versitygw/s3err"
import (
"encoding/json"
"fmt"
"os"
"sync"
"github.com/versity/versitygw/s3err"
)
type Account struct {
Secret string `json:"secret"`
Role string `json:"role"`
Region string `json:"region"`
}
type IAMConfig struct {
AccessAccounts map[string]string
AccessAccounts map[string]Account `json:"accessAccounts"`
}
type AccountsCache struct {
mu sync.Mutex
Accounts map[string]Account
}
func (c *AccountsCache) getAccount(access string) *Account {
c.mu.Lock()
defer c.mu.Unlock()
acc, ok := c.Accounts[access]
if !ok {
return nil
}
return &acc
}
func (c *AccountsCache) updateAccounts() error {
c.mu.Lock()
defer c.mu.Unlock()
var data IAMConfig
file, err := os.ReadFile("users.json")
if err != nil {
return fmt.Errorf("error reading config file: %w", err)
}
if err := json.Unmarshal(file, &data); err != nil {
return fmt.Errorf("error parsing the data: %w", err)
}
c.Accounts = data.AccessAccounts
return nil
}
type IAMService interface {
GetIAMConfig() (*IAMConfig, error)
CreateAccount(access string, account *Account) error
GetUserAccount(access string) *Account
}
type IAMServiceUnsupported struct{}
type IAMServiceUnsupported struct {
accCache *AccountsCache
}
var _ IAMService = &IAMServiceUnsupported{}
func New() IAMService {
return &IAMServiceUnsupported{}
return &IAMServiceUnsupported{accCache: &AccountsCache{Accounts: map[string]Account{}}}
}
func (IAMServiceUnsupported) GetIAMConfig() (*IAMConfig, error) {
return nil, s3err.GetAPIError(s3err.ErrNotImplemented)
}
func (s IAMServiceUnsupported) CreateAccount(access string, account *Account) error {
var data IAMConfig
file, err := os.ReadFile("users.json")
if err != nil {
data = IAMConfig{AccessAccounts: map[string]Account{
access: *account,
}}
} else {
if err := json.Unmarshal(file, &data); err != nil {
return err
}
_, ok := data.AccessAccounts[access]
if ok {
return fmt.Errorf("user with the given access already exists")
}
data.AccessAccounts[access] = *account
}
updatedJSON, err := json.MarshalIndent(data, "", " ")
if err != nil {
return err
}
if err := os.WriteFile("users.json", updatedJSON, 0644); err != nil {
return err
}
return nil
}
func (s IAMServiceUnsupported) GetUserAccount(access string) *Account {
acc := s.accCache.getAccount(access)
if acc == nil {
err := s.accCache.updateAccounts()
if err != nil {
return nil
}
return s.accCache.getAccount(access)
}
return acc
}


@@ -21,6 +21,7 @@ import (
"github.com/aws/aws-sdk-go-v2/service/s3"
"github.com/aws/aws-sdk-go-v2/service/s3/types"
"github.com/versity/versitygw/s3err"
"github.com/versity/versitygw/s3response"
)
//go:generate moq -out backend_moq_test.go . Backend
@@ -39,8 +40,8 @@ type Backend interface {
CreateMultipartUpload(*s3.CreateMultipartUploadInput) (*s3.CreateMultipartUploadOutput, error)
CompleteMultipartUpload(bucket, object, uploadID string, parts []types.Part) (*s3.CompleteMultipartUploadOutput, error)
AbortMultipartUpload(*s3.AbortMultipartUploadInput) error
ListMultipartUploads(output *s3.ListMultipartUploadsInput) (*s3.ListMultipartUploadsOutput, error)
ListObjectParts(bucket, object, uploadID string, partNumberMarker int, maxParts int) (*s3.ListPartsOutput, error)
ListMultipartUploads(output *s3.ListMultipartUploadsInput) (s3response.ListMultipartUploadsResponse, error)
ListObjectParts(bucket, object, uploadID string, partNumberMarker int, maxParts int) (s3response.ListPartsResponse, error)
CopyPart(srcBucket, srcObject, DstBucket, uploadID, rangeHeader string, part int) (*types.CopyPartResult, error)
PutObjectPart(bucket, object, uploadID string, part int, length int64, r io.Reader) (etag string, err error)
@@ -115,11 +116,11 @@ func (BackendUnsupported) CompleteMultipartUpload(bucket, object, uploadID strin
func (BackendUnsupported) AbortMultipartUpload(input *s3.AbortMultipartUploadInput) error {
return s3err.GetAPIError(s3err.ErrNotImplemented)
}
func (BackendUnsupported) ListMultipartUploads(output *s3.ListMultipartUploadsInput) (*s3.ListMultipartUploadsOutput, error) {
return nil, s3err.GetAPIError(s3err.ErrNotImplemented)
func (BackendUnsupported) ListMultipartUploads(output *s3.ListMultipartUploadsInput) (s3response.ListMultipartUploadsResponse, error) {
return s3response.ListMultipartUploadsResponse{}, s3err.GetAPIError(s3err.ErrNotImplemented)
}
func (BackendUnsupported) ListObjectParts(bucket, object, uploadID string, partNumberMarker int, maxParts int) (*s3.ListPartsOutput, error) {
return nil, s3err.GetAPIError(s3err.ErrNotImplemented)
func (BackendUnsupported) ListObjectParts(bucket, object, uploadID string, partNumberMarker int, maxParts int) (s3response.ListPartsResponse, error) {
return s3response.ListPartsResponse{}, s3err.GetAPIError(s3err.ErrNotImplemented)
}
func (BackendUnsupported) CopyPart(srcBucket, srcObject, DstBucket, uploadID, rangeHeader string, part int) (*types.CopyPartResult, error) {
return nil, s3err.GetAPIError(s3err.ErrNotImplemented)


@@ -6,6 +6,7 @@ package backend
import (
"github.com/aws/aws-sdk-go-v2/service/s3"
"github.com/aws/aws-sdk-go-v2/service/s3/types"
"github.com/versity/versitygw/s3response"
"io"
"sync"
)
@@ -68,10 +69,10 @@ var _ Backend = &BackendMock{}
// ListBucketsFunc: func() (*s3.ListBucketsOutput, error) {
// panic("mock out the ListBuckets method")
// },
// ListMultipartUploadsFunc: func(output *s3.ListMultipartUploadsInput) (*s3.ListMultipartUploadsOutput, error) {
// ListMultipartUploadsFunc: func(output *s3.ListMultipartUploadsInput) (s3response.ListMultipartUploadsResponse, error) {
// panic("mock out the ListMultipartUploads method")
// },
// ListObjectPartsFunc: func(bucket string, object string, uploadID string, partNumberMarker int, maxParts int) (*s3.ListPartsOutput, error) {
// ListObjectPartsFunc: func(bucket string, object string, uploadID string, partNumberMarker int, maxParts int) (s3response.ListPartsResponse, error) {
// panic("mock out the ListObjectParts method")
// },
// ListObjectsFunc: func(bucket string, prefix string, marker string, delim string, maxkeys int) (*s3.ListObjectsOutput, error) {
@@ -172,10 +173,10 @@ type BackendMock struct {
ListBucketsFunc func() (*s3.ListBucketsOutput, error)
// ListMultipartUploadsFunc mocks the ListMultipartUploads method.
ListMultipartUploadsFunc func(output *s3.ListMultipartUploadsInput) (*s3.ListMultipartUploadsOutput, error)
ListMultipartUploadsFunc func(output *s3.ListMultipartUploadsInput) (s3response.ListMultipartUploadsResponse, error)
// ListObjectPartsFunc mocks the ListObjectParts method.
ListObjectPartsFunc func(bucket string, object string, uploadID string, partNumberMarker int, maxParts int) (*s3.ListPartsOutput, error)
ListObjectPartsFunc func(bucket string, object string, uploadID string, partNumberMarker int, maxParts int) (s3response.ListPartsResponse, error)
// ListObjectsFunc mocks the ListObjects method.
ListObjectsFunc func(bucket string, prefix string, marker string, delim string, maxkeys int) (*s3.ListObjectsOutput, error)
@@ -1094,7 +1095,7 @@ func (mock *BackendMock) ListBucketsCalls() []struct {
}
// ListMultipartUploads calls ListMultipartUploadsFunc.
func (mock *BackendMock) ListMultipartUploads(output *s3.ListMultipartUploadsInput) (*s3.ListMultipartUploadsOutput, error) {
func (mock *BackendMock) ListMultipartUploads(output *s3.ListMultipartUploadsInput) (s3response.ListMultipartUploadsResponse, error) {
if mock.ListMultipartUploadsFunc == nil {
panic("BackendMock.ListMultipartUploadsFunc: method is nil but Backend.ListMultipartUploads was just called")
}
@@ -1126,7 +1127,7 @@ func (mock *BackendMock) ListMultipartUploadsCalls() []struct {
}
// ListObjectParts calls ListObjectPartsFunc.
func (mock *BackendMock) ListObjectParts(bucket string, object string, uploadID string, partNumberMarker int, maxParts int) (*s3.ListPartsOutput, error) {
func (mock *BackendMock) ListObjectParts(bucket string, object string, uploadID string, partNumberMarker int, maxParts int) (s3response.ListPartsResponse, error) {
if mock.ListObjectPartsFunc == nil {
panic("BackendMock.ListObjectPartsFunc: method is nil but Backend.ListObjectParts was just called")
}


@@ -24,6 +24,11 @@ import (
"github.com/aws/aws-sdk-go-v2/service/s3/types"
)
var (
// RFC3339TimeFormat RFC3339 time format
RFC3339TimeFormat = "2006-01-02T15:04:05.999Z"
)
func IsValidBucketName(name string) bool { return true }
type ByBucketName []types.Bucket


@@ -36,6 +36,7 @@ import (
"github.com/pkg/xattr"
"github.com/versity/versitygw/backend"
"github.com/versity/versitygw/s3err"
"github.com/versity/versitygw/s3response"
)
type Posix struct {
@@ -51,12 +52,7 @@ const (
metaTmpMultipartDir = metaTmpDir + "/multipart"
onameAttr = "user.objname"
tagHdr = "X-Amz-Tagging"
dirObjKey = "user.dirisobject"
)
var (
newObjUID = 0
newObjGID = 0
emptyMD5 = "d41d8cd98f00b204e9800998ecf8427e"
)
func New(rootdir string) (*Posix, error) {
@@ -236,8 +232,10 @@ func (p *Posix) CompleteMultipartUpload(bucket, object, uploadID string, parts [
// check all parts ok
last := len(parts) - 1
partsize := int64(0)
var totalsize int64
for i, p := range parts {
fi, err := os.Lstat(filepath.Join(objdir, uploadID, fmt.Sprintf("%v", p.PartNumber)))
partPath := filepath.Join(objdir, uploadID, fmt.Sprintf("%v", p.PartNumber))
fi, err := os.Lstat(partPath)
if err != nil {
return nil, s3err.GetAPIError(s3err.ErrInvalidPart)
}
@@ -245,13 +243,21 @@ func (p *Posix) CompleteMultipartUpload(bucket, object, uploadID string, parts [
if i == 0 {
partsize = fi.Size()
}
totalsize += fi.Size()
// all parts except the last need to be the same size
if i < last && partsize != fi.Size() {
return nil, s3err.GetAPIError(s3err.ErrInvalidPart)
}
b, err := xattr.Get(partPath, "user.etag")
etag := string(b)
if err != nil {
etag = ""
}
parts[i].ETag = &etag
}
f, err := openTmpFile(filepath.Join(bucket, metaTmpDir), bucket, object, 0)
f, err := openTmpFile(filepath.Join(bucket, metaTmpDir), bucket, object, totalsize)
if err != nil {
return nil, fmt.Errorf("open temp file: %w", err)
}
@@ -277,11 +283,8 @@ func (p *Posix) CompleteMultipartUpload(bucket, object, uploadID string, parts [
dir := filepath.Dir(objname)
if dir != "" {
if err = mkdirAll(dir, os.FileMode(0755), bucket, object); err != nil {
if err != nil && os.IsExist(err) {
return nil, s3err.GetAPIError(s3err.ErrObjectParentIsFile)
}
if err != nil {
return nil, fmt.Errorf("make object parent directories: %w", err)
return nil, s3err.GetAPIError(s3err.ErrExistingObjectIsDirectory)
}
}
}
@@ -309,15 +312,6 @@ func (p *Posix) CompleteMultipartUpload(bucket, object, uploadID string, parts [
return nil, fmt.Errorf("set etag attr: %w", err)
}
if newObjUID != 0 || newObjGID != 0 {
err = os.Chown(objname, newObjUID, newObjGID)
if err != nil {
// cleanup object if returning error
os.Remove(objname)
return nil, fmt.Errorf("set object uid/gid: %w", err)
}
}
// cleanup tmp dirs
os.RemoveAll(upiddir)
// use Remove for objdir in case there are still other uploads
@@ -438,12 +432,6 @@ func mkdirAll(path string, perm os.FileMode, bucket, object string) error {
}
return s3err.GetAPIError(s3err.ErrObjectParentIsFile)
}
if newObjUID != 0 || newObjGID != 0 {
err = os.Chown(path, newObjUID, newObjGID)
if err != nil {
return fmt.Errorf("set parent ownership: %w", err)
}
}
return nil
}
@@ -499,24 +487,40 @@ func (p *Posix) AbortMultipartUpload(mpu *s3.AbortMultipartUploadInput) error {
return nil
}
func (p *Posix) ListMultipartUploads(mpu *s3.ListMultipartUploadsInput) (*s3.ListMultipartUploadsOutput, error) {
func (p *Posix) ListMultipartUploads(mpu *s3.ListMultipartUploadsInput) (s3response.ListMultipartUploadsResponse, error) {
bucket := *mpu.Bucket
var delimiter string
if mpu.Delimiter != nil {
delimiter = *mpu.Delimiter
}
var prefix string
if mpu.Prefix != nil {
prefix = *mpu.Prefix
}
var lmu s3response.ListMultipartUploadsResponse
_, err := os.Stat(bucket)
if errors.Is(err, fs.ErrNotExist) {
return nil, s3err.GetAPIError(s3err.ErrNoSuchBucket)
return lmu, s3err.GetAPIError(s3err.ErrNoSuchBucket)
}
if err != nil {
return nil, fmt.Errorf("stat bucket: %w", err)
return lmu, fmt.Errorf("stat bucket: %w", err)
}
// ignore readdir error and use the empty list returned
objs, _ := os.ReadDir(filepath.Join(bucket, metaTmpMultipartDir))
var uploads []types.MultipartUpload
var uploads []s3response.Upload
keyMarker := *mpu.KeyMarker
uploadIDMarker := *mpu.UploadIdMarker
var keyMarker string
if mpu.KeyMarker != nil {
keyMarker = *mpu.KeyMarker
}
var uploadIDMarker string
if mpu.UploadIdMarker != nil {
uploadIDMarker = *mpu.UploadIdMarker
}
var pastMarker bool
if keyMarker == "" && uploadIDMarker == "" {
pastMarker = true
@@ -532,7 +536,7 @@ func (p *Posix) ListMultipartUploads(mpu *s3.ListMultipartUploadsInput) (*s3.Lis
continue
}
objectName := string(b)
if !strings.HasPrefix(objectName, *mpu.Prefix) {
if mpu.Prefix != nil && !strings.HasPrefix(objectName, *mpu.Prefix) {
continue
}
@@ -558,64 +562,71 @@ func (p *Posix) ListMultipartUploads(mpu *s3.ListMultipartUploadsInput) (*s3.Lis
upiddir := filepath.Join(bucket, metaTmpMultipartDir, obj.Name(), upid.Name())
loadUserMetaData(upiddir, userMetaData)
fi, err := upid.Info()
if err != nil {
return lmu, fmt.Errorf("stat %q: %w", upid.Name(), err)
}
uploadID := upid.Name()
uploads = append(uploads, types.MultipartUpload{
Key: &objectName,
UploadId: &uploadID,
uploads = append(uploads, s3response.Upload{
Key: objectName,
UploadID: uploadID,
Initiated: fi.ModTime().Format(backend.RFC3339TimeFormat),
})
if len(uploads) == int(mpu.MaxUploads) {
return &s3.ListMultipartUploadsOutput{
Bucket: &bucket,
Delimiter: mpu.Delimiter,
return s3response.ListMultipartUploadsResponse{
Bucket: bucket,
Delimiter: delimiter,
IsTruncated: i != len(objs) || j != len(upids),
KeyMarker: &keyMarker,
MaxUploads: mpu.MaxUploads,
NextKeyMarker: &objectName,
NextUploadIdMarker: &uploadID,
Prefix: mpu.Prefix,
UploadIdMarker: mpu.UploadIdMarker,
KeyMarker: keyMarker,
MaxUploads: int(mpu.MaxUploads),
NextKeyMarker: objectName,
NextUploadIDMarker: uploadID,
Prefix: prefix,
UploadIDMarker: uploadIDMarker,
Uploads: uploads,
}, nil
}
}
}
return &s3.ListMultipartUploadsOutput{
Bucket: &bucket,
Delimiter: mpu.Delimiter,
KeyMarker: &keyMarker,
MaxUploads: mpu.MaxUploads,
Prefix: mpu.Prefix,
UploadIdMarker: mpu.UploadIdMarker,
return s3response.ListMultipartUploadsResponse{
Bucket: bucket,
Delimiter: delimiter,
KeyMarker: keyMarker,
MaxUploads: int(mpu.MaxUploads),
Prefix: prefix,
UploadIDMarker: uploadIDMarker,
Uploads: uploads,
}, nil
}
func (p *Posix) ListObjectParts(bucket, object, uploadID string, partNumberMarker int, maxParts int) (*s3.ListPartsOutput, error) {
func (p *Posix) ListObjectParts(bucket, object, uploadID string, partNumberMarker int, maxParts int) (s3response.ListPartsResponse, error) {
var lpr s3response.ListPartsResponse
_, err := os.Stat(bucket)
if errors.Is(err, fs.ErrNotExist) {
return nil, s3err.GetAPIError(s3err.ErrNoSuchBucket)
return lpr, s3err.GetAPIError(s3err.ErrNoSuchBucket)
}
if err != nil {
return nil, fmt.Errorf("stat bucket: %w", err)
return lpr, fmt.Errorf("stat bucket: %w", err)
}
sum, err := p.checkUploadIDExists(bucket, object, uploadID)
if err != nil {
return nil, err
return lpr, err
}
objdir := filepath.Join(bucket, metaTmpMultipartDir, fmt.Sprintf("%x", sum))
ents, err := os.ReadDir(filepath.Join(objdir, uploadID))
if errors.Is(err, fs.ErrNotExist) {
return nil, s3err.GetAPIError(s3err.ErrNoSuchUpload)
return lpr, s3err.GetAPIError(s3err.ErrNoSuchUpload)
}
if err != nil {
return nil, fmt.Errorf("readdir upload: %w", err)
return lpr, fmt.Errorf("readdir upload: %w", err)
}
var parts []types.Part
var parts []s3response.Part
for _, e := range ents {
pn, _ := strconv.Atoi(e.Name())
if pn <= partNumberMarker {
@@ -634,10 +645,10 @@ func (p *Posix) ListObjectParts(bucket, object, uploadID string, partNumberMarke
continue
}
parts = append(parts, types.Part{
PartNumber: int32(pn),
ETag: &etag,
LastModified: backend.GetTimePtr(fi.ModTime()),
parts = append(parts, s3response.Part{
PartNumber: pn,
ETag: etag,
LastModified: fi.ModTime().Format(backend.RFC3339TimeFormat),
Size: fi.Size(),
})
}
@@ -646,12 +657,12 @@ func (p *Posix) ListObjectParts(bucket, object, uploadID string, partNumberMarke
func(i int, j int) bool { return parts[i].PartNumber < parts[j].PartNumber })
oldLen := len(parts)
if len(parts) > maxParts {
if maxParts > 0 && len(parts) > maxParts {
parts = parts[:maxParts]
}
newLen := len(parts)
nextpart := int32(0)
nextpart := 0
if len(parts) != 0 {
nextpart = parts[len(parts)-1].PartNumber
}
@@ -660,15 +671,15 @@ func (p *Posix) ListObjectParts(bucket, object, uploadID string, partNumberMarke
upiddir := filepath.Join(objdir, uploadID)
loadUserMetaData(upiddir, userMetaData)
return &s3.ListPartsOutput{
Bucket: &bucket,
return s3response.ListPartsResponse{
Bucket: bucket,
IsTruncated: oldLen != newLen,
Key: &object,
MaxParts: int32(maxParts),
NextPartNumberMarker: backend.GetStringPtr(fmt.Sprintf("%v", nextpart)),
PartNumberMarker: backend.GetStringPtr(fmt.Sprintf("%v", partNumberMarker)),
Key: object,
MaxParts: maxParts,
NextPartNumberMarker: nextpart,
PartNumberMarker: partNumberMarker,
Parts: parts,
UploadId: &uploadID,
UploadID: uploadID,
}, nil
}
@@ -709,7 +720,7 @@ func (p *Posix) PutObjectPart(bucket, object, uploadID string, part int, length
}
dataSum := hash.Sum(nil)
etag := hex.EncodeToString(dataSum[:])
etag := hex.EncodeToString(dataSum)
xattr.Set(partPath, "user.etag", []byte(etag))
return etag, nil
@@ -737,13 +748,10 @@ func (p *Posix) PutObject(po *s3.PutObjectInput) (string, error) {
xattr.Set(name, "user."+k, []byte(v))
}
// set our attribute that this dir was specifically put
xattr.Set(name, dirObjKey, nil)
// set etag attribute to signify this dir was specifically put
xattr.Set(name, "user.etag", []byte(emptyMD5))
// TODO: what etag should be returned here
// and we should set etag xattr to identify dir was
// specifically uploaded
return "", nil
return emptyMD5, nil
}
// object is file
@@ -764,7 +772,7 @@ func (p *Posix) PutObject(po *s3.PutObjectInput) (string, error) {
if dir != "" {
err = mkdirAll(dir, os.FileMode(0755), *po.Bucket, *po.Key)
if err != nil {
return "", fmt.Errorf("make object parent directories: %w", err)
return "", s3err.GetAPIError(s3err.ErrExistingObjectIsDirectory)
}
}
@@ -781,13 +789,6 @@ func (p *Posix) PutObject(po *s3.PutObjectInput) (string, error) {
etag := hex.EncodeToString(dataSum[:])
xattr.Set(name, "user.etag", []byte(etag))
if newObjUID != 0 || newObjGID != 0 {
err = os.Chown(name, newObjUID, newObjGID)
if err != nil {
return "", fmt.Errorf("set object uid/gid: %v", err)
}
}
return etag, nil
}
@@ -826,7 +827,7 @@ func (p *Posix) removeParents(bucket, object string) error {
break
}
_, err := xattr.Get(parent, dirObjKey)
_, err := xattr.Get(parent, "user.etag")
if err == nil {
break
}
@@ -1015,7 +1016,20 @@ func (p *Posix) ListObjects(bucket, prefix, marker, delim string, maxkeys int) (
}
fileSystem := os.DirFS(bucket)
results, err := backend.Walk(fileSystem, prefix, delim, marker, maxkeys)
results, err := backend.Walk(fileSystem, prefix, delim, marker, maxkeys,
func(path string) (bool, error) {
_, err := xattr.Get(filepath.Join(bucket, path), "user.etag")
if isNoAttr(err) {
return false, nil
}
if err != nil {
return false, err
}
return true, nil
}, func(path string) (string, error) {
etag, err := xattr.Get(filepath.Join(bucket, path), "user.etag")
return string(etag), err
}, []string{metaTmpDir})
if err != nil {
return nil, fmt.Errorf("walk %v: %w", bucket, err)
}
@@ -1043,7 +1057,20 @@ func (p *Posix) ListObjectsV2(bucket, prefix, marker, delim string, maxkeys int)
}
fileSystem := os.DirFS(bucket)
results, err := backend.Walk(fileSystem, prefix, delim, marker, maxkeys)
results, err := backend.Walk(fileSystem, prefix, delim, marker, maxkeys,
func(path string) (bool, error) {
_, err := xattr.Get(filepath.Join(bucket, path), "user.etag")
if isNoAttr(err) {
return false, nil
}
if err != nil {
return false, err
}
return true, nil
}, func(path string) (string, error) {
etag, err := xattr.Get(filepath.Join(bucket, path), "user.etag")
return string(etag), err
}, []string{metaTmpDir})
if err != nil {
return nil, fmt.Errorf("walk %v: %w", bucket, err)
}


@@ -76,7 +76,7 @@ func (tmp *tmpfile) link() error {
func (tmp *tmpfile) Write(b []byte) (int, error) {
if int64(len(b)) > tmp.size {
return 0, fmt.Errorf("write exceeds content length")
return 0, fmt.Errorf("write exceeds content length %v", tmp.size)
}
n, err := tmp.f.Write(b)


@@ -150,7 +150,7 @@ func (tmp *tmpfile) fallbackLink() error {
func (tmp *tmpfile) Write(b []byte) (int, error) {
if int64(len(b)) > tmp.size {
return 0, fmt.Errorf("write exceeds content length")
return 0, fmt.Errorf("write exceeds content length %v", tmp.size)
}
n, err := tmp.f.Write(b)


@@ -31,9 +31,12 @@ type WalkResults struct {
NextMarker string
}
type DirObjCheck func(path string) (bool, error)
type GetETag func(path string) (string, error)
// Walk walks the supplied fs.FS and returns results compatible with list
// objects responses
func Walk(fileSystem fs.FS, prefix, delimiter, marker string, max int) (WalkResults, error) {
func Walk(fileSystem fs.FS, prefix, delimiter, marker string, max int, dirchk DirObjCheck, getetag GetETag, skipdirs []string) (WalkResults, error) {
cpmap := make(map[string]struct{})
var objects []types.Object
@@ -63,16 +66,52 @@ func Walk(fileSystem fs.FS, prefix, delimiter, marker string, max int) (WalkResu
return nil
}
// If prefix is defined and the directory does not match prefix,
// do not descend into the directory because nothing will
// match this prefix. Make sure to append the / at the end of
// directories since this is implied as a directory path name.
if prefix != "" && !strings.HasPrefix(path+string(os.PathSeparator), prefix) {
if contains(d.Name(), skipdirs) {
return fs.SkipDir
}
// TODO: special case handling if directory is empty
// and was "PUT" explicitly
// If prefix is defined and the directory does not match prefix,
// do not descend into the directory because nothing will
// match this prefix. Make sure to append the / at the end of
// directories since this is implied as a directory path name.
// If path is a prefix of prefix, then path could still be
// building to match. So only skip if path isn't a prefix of prefix
// and prefix isn't a prefix of path.
if prefix != "" &&
!strings.HasPrefix(path+string(os.PathSeparator), prefix) &&
!strings.HasPrefix(prefix, path+string(os.PathSeparator)) {
return fs.SkipDir
}
// TODO: can we do better here rather than a second readdir
// per directory?
ents, err := fs.ReadDir(fileSystem, path)
if err != nil {
return fmt.Errorf("readdir %q: %w", path, err)
}
if len(ents) == 0 {
dirobj, err := dirchk(path)
if err != nil {
return fmt.Errorf("directory object check %q: %w", path, err)
}
if dirobj {
fi, err := d.Info()
if err != nil {
return fmt.Errorf("dir info %q: %w", path, err)
}
etag, err := getetag(path)
if err != nil {
return fmt.Errorf("get etag %q: %w", path, err)
}
path := path + "/"
objects = append(objects, types.Object{
ETag: &etag,
Key: &path,
LastModified: GetTimePtr(fi.ModTime()),
})
}
}
return nil
}
@@ -95,16 +134,22 @@ func Walk(fileSystem fs.FS, prefix, delimiter, marker string, max int) (WalkResu
if err != nil {
return fmt.Errorf("get info for %v: %w", path, err)
}
etag, err := getetag(path)
if err != nil {
return fmt.Errorf("get etag %q: %w", path, err)
}
objects = append(objects, types.Object{
ETag: new(string),
ETag: &etag,
Key: &path,
LastModified: GetTimePtr(fi.ModTime()),
Size: fi.Size(),
})
if (len(objects) + len(cpmap)) == max {
if max > 0 && (len(objects)+len(cpmap)) == max {
pastMax = true
}
return nil
}
@@ -128,7 +173,7 @@ func Walk(fileSystem fs.FS, prefix, delimiter, marker string, max int) (WalkResu
// these are all rolled up into the common prefix.
// Note: The delimiter can be anything, so we have to operate on
// the full path without any assumptions on posix directory hierarchy
// here. Usually the delimiter with be "/", but that's not required.
// here. Usually the delimiter will be "/", but that's not required.
suffix := strings.TrimPrefix(path, prefix)
before, _, found := strings.Cut(suffix, delimiter)
if !found {
@@ -136,8 +181,12 @@ func Walk(fileSystem fs.FS, prefix, delimiter, marker string, max int) (WalkResu
if err != nil {
return fmt.Errorf("get info for %v: %w", path, err)
}
etag, err := getetag(path)
if err != nil {
return fmt.Errorf("get etag %q: %w", path, err)
}
objects = append(objects, types.Object{
ETag: new(string),
ETag: &etag,
Key: &path,
LastModified: GetTimePtr(fi.ModTime()),
Size: fi.Size(),
@@ -162,15 +211,16 @@ func Walk(fileSystem fs.FS, prefix, delimiter, marker string, max int) (WalkResu
return WalkResults{}, err
}
commonPrefixStrings := make([]string, 0, len(cpmap))
var commonPrefixStrings []string
for k := range cpmap {
commonPrefixStrings = append(commonPrefixStrings, k)
}
sort.Strings(commonPrefixStrings)
commonPrefixes := make([]types.CommonPrefix, 0, len(commonPrefixStrings))
for _, cp := range commonPrefixStrings {
pfx := cp
commonPrefixes = append(commonPrefixes, types.CommonPrefix{
Prefix: &cp,
Prefix: &pfx,
})
}
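The `pfx := cp` copy added in this hunk guards against a classic Go pitfall: before Go 1.22, a `range` loop reuses a single iteration variable, so taking `&cp` directly would make every `CommonPrefix` point at the same (final) string. A minimal sketch of the pattern (`collectPtrs` is a hypothetical illustration, not gateway code):

```go
package main

import "fmt"

// collectPtrs shows why the diff copies the loop variable before taking
// its address: the shadowing `s := s` creates a fresh per-iteration
// variable, so each appended pointer refers to a distinct string.
func collectPtrs(in []string) []*string {
	out := make([]*string, 0, len(in))
	for _, s := range in {
		s := s // per-iteration copy, like pfx := cp in the walk code
		out = append(out, &s)
	}
	return out
}

func main() {
	for _, p := range collectPtrs([]string{"a/", "b/"}) {
		fmt.Println(*p)
	}
}
```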
@@ -181,3 +231,12 @@ func Walk(fileSystem fs.FS, prefix, delimiter, marker string, max int) (WalkResu
NextMarker: newMarker,
}, nil
}
func contains(a string, strs []string) bool {
for _, s := range strs {
if s == a {
return true
}
}
return false
}


@@ -26,31 +26,50 @@ import (
type walkTest struct {
fsys fs.FS
expected backend.WalkResults
dc backend.DirObjCheck
}
func gettag(string) (string, error) { return "myetag", nil }
func TestWalk(t *testing.T) {
tests := []walkTest{{
// test case from
// https://docs.aws.amazon.com/AmazonS3/latest/userguide/using-prefixes.html
fsys: fstest.MapFS{
"sample.jpg": {},
"photos/2006/January/sample.jpg": {},
"photos/2006/February/sample2.jpg": {},
"photos/2006/February/sample3.jpg": {},
"photos/2006/February/sample4.jpg": {},
tests := []walkTest{
{
// test case from
// https://docs.aws.amazon.com/AmazonS3/latest/userguide/using-prefixes.html
fsys: fstest.MapFS{
"sample.jpg": {},
"photos/2006/January/sample.jpg": {},
"photos/2006/February/sample2.jpg": {},
"photos/2006/February/sample3.jpg": {},
"photos/2006/February/sample4.jpg": {},
},
expected: backend.WalkResults{
CommonPrefixes: []types.CommonPrefix{{
Prefix: backend.GetStringPtr("photos/"),
}},
Objects: []types.Object{{
Key: backend.GetStringPtr("sample.jpg"),
}},
},
dc: func(string) (bool, error) { return false, nil },
},
expected: backend.WalkResults{
CommonPrefixes: []types.CommonPrefix{{
Prefix: backend.GetStringPtr("photos/"),
}},
Objects: []types.Object{{
Key: backend.GetStringPtr("sample.jpg"),
}},
{
// test case single dir/single file
fsys: fstest.MapFS{
"test/file": {},
},
expected: backend.WalkResults{
CommonPrefixes: []types.CommonPrefix{{
Prefix: backend.GetStringPtr("test/"),
}},
Objects: []types.Object{},
},
dc: func(string) (bool, error) { return true, nil },
},
}}
}
for _, tt := range tests {
res, err := backend.Walk(tt.fsys, "", "/", "", 1000)
res, err := backend.Walk(tt.fsys, "", "/", "", 1000, tt.dc, gettag, []string{})
if err != nil {
t.Fatalf("walk: %v", err)
}
@@ -67,13 +86,16 @@ func compareResults(got, wanted backend.WalkResults, t *testing.T) {
}
if !compareObjects(got.Objects, wanted.Objects) {
t.Errorf("unexpected common prefix, got %v wanted %v",
t.Errorf("unexpected object, got %v wanted %v",
printObjects(got.Objects),
printObjects(wanted.Objects))
}
}
func compareCommonPrefix(a, b []types.CommonPrefix) bool {
if len(a) == 0 && len(b) == 0 {
return true
}
if len(a) != len(b) {
return false
}
@@ -108,6 +130,9 @@ func printCommonPrefixes(list []types.CommonPrefix) string {
}
func compareObjects(a, b []types.Object) bool {
if len(a) == 0 && len(b) == 0 {
return true
}
if len(a) != len(b) {
return false
}

cmd/versitygw/admin.go Normal file

@@ -0,0 +1,145 @@
// Copyright 2023 Versity Software
// This file is licensed under the Apache License, Version 2.0
// (the "License"); you may not use this file except in compliance
// with the License. You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing,
// software distributed under the License is distributed on an
// "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
// KIND, either express or implied. See the License for the
// specific language governing permissions and limitations
// under the License.
package main
import (
"crypto/sha256"
"encoding/hex"
"fmt"
"io"
"net/http"
"time"
"github.com/aws/aws-sdk-go-v2/aws"
v4 "github.com/aws/aws-sdk-go-v2/aws/signer/v4"
"github.com/urfave/cli/v2"
)
var (
adminAccess string
adminSecret string
adminRegion string
)
func adminCommand() *cli.Command {
return &cli.Command{
Name: "admin",
Usage: "admin CLI tool",
Description: `admin CLI tool for interacting with the admin API.
Here is the list of available API commands:
create-user
`,
Subcommands: []*cli.Command{
{
Name: "create-user",
Usage: "Create a new user",
Action: createUser,
Flags: []cli.Flag{
&cli.StringFlag{
Name: "access",
Usage: "access value for the new user",
Required: true,
Aliases: []string{"a"},
},
&cli.StringFlag{
Name: "secret",
Usage: "secret value for the new user",
Required: true,
Aliases: []string{"s"},
},
&cli.StringFlag{
Name: "role",
Usage: "role for the new user",
Required: true,
Aliases: []string{"r"},
},
&cli.StringFlag{
Name: "region",
Usage: "s3 region string for the user",
Value: "us-east-1",
Aliases: []string{"rg"},
},
},
},
},
Flags: []cli.Flag{
// TODO: create a configuration file for this
&cli.StringFlag{
Name: "adminAccess",
Usage: "admin access account",
EnvVars: []string{"ADMIN_ACCESS_KEY_ID", "ADMIN_ACCESS_KEY"},
Aliases: []string{"aa"},
Destination: &adminAccess,
},
&cli.StringFlag{
Name: "adminSecret",
Usage: "admin secret access key",
EnvVars: []string{"ADMIN_SECRET_ACCESS_KEY", "ADMIN_SECRET_KEY"},
Aliases: []string{"as"},
Destination: &adminSecret,
},
&cli.StringFlag{
Name: "adminRegion",
Usage: "s3 region string",
Value: "us-east-1",
Destination: &adminRegion,
Aliases: []string{"ar"},
},
},
}
}
func createUser(ctx *cli.Context) error {
access, secret, role, region := ctx.String("access"), ctx.String("secret"), ctx.String("role"), ctx.String("region")
if access == "" || secret == "" || region == "" {
return fmt.Errorf("invalid input parameters for the new user")
}
if role != "admin" && role != "user" {
return fmt.Errorf("invalid input parameter for role")
}
req, err := http.NewRequest(http.MethodPost, fmt.Sprintf("http://localhost:7070/create-user?access=%v&secret=%v&role=%v&region=%v", access, secret, role, region), nil)
if err != nil {
return fmt.Errorf("failed to create the request: %w", err)
}
signer := v4.NewSigner()
hashedPayload := sha256.Sum256([]byte{})
hexPayload := hex.EncodeToString(hashedPayload[:])
req.Header.Set("X-Amz-Content-Sha256", hexPayload)
signErr := signer.SignHTTP(req.Context(), aws.Credentials{AccessKeyID: adminAccess, SecretAccessKey: adminSecret}, req, hexPayload, "s3", adminRegion, time.Now())
if signErr != nil {
return fmt.Errorf("failed to sign the request: %w", signErr)
}
client := http.Client{}
resp, err := client.Do(req)
if err != nil {
return fmt.Errorf("failed to send the request: %w", err)
}
defer resp.Body.Close()
body, err := io.ReadAll(resp.Body)
if err != nil {
return err
}
fmt.Printf("%s", body)
return nil
}
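The request above carries no body, so the CLI pre-computes the `X-Amz-Content-Sha256` value as the SHA-256 of zero bytes before SigV4 signing. That hash is a well-known constant, reproducible with the standard library alone (the `emptyPayloadHash` helper below is a sketch for illustration, not part of the gateway):

```go
package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
)

// emptyPayloadHash computes the value the admin CLI sends in
// X-Amz-Content-Sha256 for a body-less request: hex(SHA-256("")).
func emptyPayloadHash() string {
	sum := sha256.Sum256([]byte{})
	return hex.EncodeToString(sum[:])
}

func main() {
	fmt.Println(emptyPayloadHash())
	// e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855
}
```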


@@ -30,8 +30,8 @@ import (
var (
port string
adminAccess string
adminSecret string
rootUserAccess string
rootUserSecret string
region string
certFile, keyFile string
debug bool
@@ -51,6 +51,7 @@ func main() {
app.Commands = []*cli.Command{
posixCommand(),
adminCommand(),
}
if err := app.Run(os.Args); err != nil {
@@ -94,21 +95,24 @@ func initFlags() []cli.Flag {
},
&cli.StringFlag{
Name: "access",
Usage: "admin access account",
Destination: &adminAccess,
EnvVars: []string{"ADMIN_ACCESS_KEY_ID", "ADMIN_ACCESS_KEY"},
Usage: "root user access key",
EnvVars: []string{"ROOT_ACCESS_KEY_ID", "ROOT_ACCESS_KEY"},
Aliases: []string{"a"},
Destination: &rootUserAccess,
},
&cli.StringFlag{
Name: "secret",
Usage: "admin secret access key",
Destination: &adminSecret,
EnvVars: []string{"ADMIN_SECRET_ACCESS_KEY", "ADMIN_SECRET_KEY"},
Usage: "root user secret access key",
EnvVars: []string{"ROOT_SECRET_ACCESS_KEY", "ROOT_SECRET_KEY"},
Aliases: []string{"s"},
Destination: &rootUserSecret,
},
&cli.StringFlag{
Name: "region",
Usage: "s3 region string",
Value: "us-east-1",
Destination: &region,
Aliases: []string{"r"},
},
&cli.StringFlag{
Name: "cert",
@@ -132,6 +136,7 @@ func runGateway(be backend.Backend) error {
app := fiber.New(fiber.Config{
AppName: "versitygw",
ServerHeader: "VERSITYGW",
BodyLimit: 5 * 1024 * 1024 * 1024,
})
var opts []s3api.Option
@@ -155,12 +160,11 @@ func runGateway(be backend.Backend) error {
opts = append(opts, s3api.WithDebug())
}
srv, err := s3api.New(app, be, port,
middlewares.AdminConfig{
AdminAccess: adminAccess,
AdminSecret: adminSecret,
Region: region,
}, auth.IAMServiceUnsupported{}, opts...)
srv, err := s3api.New(app, be, middlewares.RootUserConfig{
Access: rootUserAccess,
Secret: rootUserSecret,
Region: region,
}, port, auth.New(), opts...)
if err != nil {
return fmt.Errorf("init gateway: %v", err)
}


@@ -0,0 +1,49 @@
// Copyright 2023 Versity Software
// This file is licensed under the Apache License, Version 2.0
// (the "License"); you may not use this file except in compliance
// with the License. You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing,
// software distributed under the License is distributed on an
// "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
// KIND, either express or implied. See the License for the
// specific language governing permissions and limitations
// under the License.
package controllers
import (
"fmt"
"github.com/gofiber/fiber/v2"
"github.com/versity/versitygw/backend/auth"
)
type AdminController struct {
IAMService auth.IAMService
}
func NewAdminController() AdminController {
return AdminController{IAMService: auth.New()}
}
func (c AdminController) CreateUser(ctx *fiber.Ctx) error {
access, secret, role, region := ctx.Query("access"), ctx.Query("secret"), ctx.Query("role"), ctx.Query("region")
requesterRole := ctx.Locals("role")
if requesterRole != "admin" {
return fmt.Errorf("access denied: only admin users have access to this resource")
}
user := auth.Account{Secret: secret, Role: role, Region: region}
err := c.IAMService.CreateAccount(access, &user)
if err != nil {
return fmt.Errorf("failed to create a user: %w", err)
}
return ctx.SendString("The user has been created successfully")
}


@@ -7,6 +7,7 @@ import (
"github.com/aws/aws-sdk-go-v2/service/s3"
"github.com/aws/aws-sdk-go-v2/service/s3/types"
"github.com/versity/versitygw/backend"
"github.com/versity/versitygw/s3response"
"io"
"sync"
)
@@ -69,10 +70,10 @@ var _ backend.Backend = &BackendMock{}
// ListBucketsFunc: func() (*s3.ListBucketsOutput, error) {
// panic("mock out the ListBuckets method")
// },
// ListMultipartUploadsFunc: func(output *s3.ListMultipartUploadsInput) (*s3.ListMultipartUploadsOutput, error) {
// ListMultipartUploadsFunc: func(output *s3.ListMultipartUploadsInput) (s3response.ListMultipartUploadsResponse, error) {
// panic("mock out the ListMultipartUploads method")
// },
// ListObjectPartsFunc: func(bucket string, object string, uploadID string, partNumberMarker int, maxParts int) (*s3.ListPartsOutput, error) {
// ListObjectPartsFunc: func(bucket string, object string, uploadID string, partNumberMarker int, maxParts int) (s3response.ListPartsResponse, error) {
// panic("mock out the ListObjectParts method")
// },
// ListObjectsFunc: func(bucket string, prefix string, marker string, delim string, maxkeys int) (*s3.ListObjectsOutput, error) {
@@ -173,10 +174,10 @@ type BackendMock struct {
ListBucketsFunc func() (*s3.ListBucketsOutput, error)
// ListMultipartUploadsFunc mocks the ListMultipartUploads method.
ListMultipartUploadsFunc func(output *s3.ListMultipartUploadsInput) (*s3.ListMultipartUploadsOutput, error)
ListMultipartUploadsFunc func(output *s3.ListMultipartUploadsInput) (s3response.ListMultipartUploadsResponse, error)
// ListObjectPartsFunc mocks the ListObjectParts method.
ListObjectPartsFunc func(bucket string, object string, uploadID string, partNumberMarker int, maxParts int) (*s3.ListPartsOutput, error)
ListObjectPartsFunc func(bucket string, object string, uploadID string, partNumberMarker int, maxParts int) (s3response.ListPartsResponse, error)
// ListObjectsFunc mocks the ListObjects method.
ListObjectsFunc func(bucket string, prefix string, marker string, delim string, maxkeys int) (*s3.ListObjectsOutput, error)
@@ -1095,7 +1096,7 @@ func (mock *BackendMock) ListBucketsCalls() []struct {
}
// ListMultipartUploads calls ListMultipartUploadsFunc.
func (mock *BackendMock) ListMultipartUploads(output *s3.ListMultipartUploadsInput) (*s3.ListMultipartUploadsOutput, error) {
func (mock *BackendMock) ListMultipartUploads(output *s3.ListMultipartUploadsInput) (s3response.ListMultipartUploadsResponse, error) {
if mock.ListMultipartUploadsFunc == nil {
panic("BackendMock.ListMultipartUploadsFunc: method is nil but Backend.ListMultipartUploads was just called")
}
@@ -1127,7 +1128,7 @@ func (mock *BackendMock) ListMultipartUploadsCalls() []struct {
}
// ListObjectParts calls ListObjectPartsFunc.
func (mock *BackendMock) ListObjectParts(bucket string, object string, uploadID string, partNumberMarker int, maxParts int) (*s3.ListPartsOutput, error) {
func (mock *BackendMock) ListObjectParts(bucket string, object string, uploadID string, partNumberMarker int, maxParts int) (s3response.ListPartsResponse, error) {
if mock.ListObjectPartsFunc == nil {
panic("BackendMock.ListObjectPartsFunc: method is nil but Backend.ListObjectParts was just called")
}


@@ -21,9 +21,9 @@ import (
"fmt"
"io"
"net/http"
"os"
"strconv"
"strings"
"time"
"github.com/aws/aws-sdk-go-v2/aws"
"github.com/aws/aws-sdk-go-v2/service/s3"
@@ -44,65 +44,111 @@ func New(be backend.Backend) S3ApiController {
func (c S3ApiController) ListBuckets(ctx *fiber.Ctx) error {
res, err := c.be.ListBuckets()
return Responce(ctx, res, err)
return SendXMLResponse(ctx, res, err)
}
func (c S3ApiController) GetActions(ctx *fiber.Ctx) error {
bucket, key, keyEnd, uploadId, maxPartsStr, partNumberMarkerStr, acceptRange := ctx.Params("bucket"), ctx.Params("key"), ctx.Params("*1"), ctx.Query("uploadId"), ctx.Query("max-parts"), ctx.Query("part-number-marker"), ctx.Get("Range")
bucket := ctx.Params("bucket")
key := ctx.Params("key")
keyEnd := ctx.Params("*1")
uploadId := ctx.Query("uploadId")
maxParts := ctx.QueryInt("max-parts", 0)
partNumberMarker := ctx.QueryInt("part-number-marker", 0)
acceptRange := ctx.Get("Range")
if keyEnd != "" {
key = strings.Join([]string{key, keyEnd}, "/")
}
if uploadId != "" {
maxParts, err := strconv.Atoi(maxPartsStr)
if err != nil && maxPartsStr != "" {
return errors.New("wrong api call")
if maxParts < 0 || (maxParts == 0 && ctx.Query("max-parts") != "") {
return SendResponse(ctx, s3err.GetAPIError(s3err.ErrInvalidMaxParts))
}
partNumberMarker, err := strconv.Atoi(partNumberMarkerStr)
if err != nil && partNumberMarkerStr != "" {
return errors.New("wrong api call")
if partNumberMarker < 0 || (partNumberMarker == 0 && ctx.Query("part-number-marker") != "") {
return SendResponse(ctx, s3err.GetAPIError(s3err.ErrInvalidPartNumberMarker))
}
res, err := c.be.ListObjectParts(bucket, "", uploadId, partNumberMarker, maxParts)
return Responce(ctx, res, err)
res, err := c.be.ListObjectParts(bucket, key, uploadId, partNumberMarker, maxParts)
return SendXMLResponse(ctx, res, err)
}
if ctx.Request().URI().QueryArgs().Has("acl") {
res, err := c.be.GetObjectAcl(bucket, key)
return Responce(ctx, res, err)
return SendXMLResponse(ctx, res, err)
}
if attrs := ctx.Get("X-Amz-Object-Attributes"); attrs != "" {
res, err := c.be.GetObjectAttributes(bucket, key, strings.Split(attrs, ","))
return Responce(ctx, res, err)
return SendXMLResponse(ctx, res, err)
}
res, err := c.be.GetObject(bucket, key, acceptRange, ctx.Response().BodyWriter())
if err != nil {
return Responce(ctx, res, err)
return SendResponse(ctx, err)
}
return nil
if res == nil {
return SendResponse(ctx, fmt.Errorf("get object nil response"))
}
utils.SetMetaHeaders(ctx, res.Metadata)
var lastmod string
if res.LastModified != nil {
lastmod = res.LastModified.Format(timefmt)
}
utils.SetResponseHeaders(ctx, []utils.CustomHeader{
{
Key: "Content-Length",
Value: fmt.Sprint(res.ContentLength),
},
{
Key: "Content-Type",
Value: getstring(res.ContentType),
},
{
Key: "Content-Encoding",
Value: getstring(res.ContentEncoding),
},
{
Key: "ETag",
Value: getstring(res.ETag),
},
{
Key: "Last-Modified",
Value: lastmod,
},
})
return ctx.SendStatus(http.StatusOK)
}
func getstring(s *string) string {
if s == nil {
return ""
}
return *s
}
func (c S3ApiController) ListActions(ctx *fiber.Ctx) error {
bucket := ctx.Params("bucket")
prefix := ctx.Query("prefix")
marker := ctx.Query("continuation-token")
delimiter := ctx.Query("delimiter")
maxkeys := ctx.QueryInt("max-keys")
if ctx.Request().URI().QueryArgs().Has("acl") {
res, err := c.be.GetBucketAcl(ctx.Params("bucket"))
return Responce(ctx, res, err)
return SendXMLResponse(ctx, res, err)
}
if ctx.Request().URI().QueryArgs().Has("uploads") {
res, err := c.be.ListMultipartUploads(&s3.ListMultipartUploadsInput{Bucket: aws.String(ctx.Params("bucket"))})
return Responce(ctx, res, err)
return SendXMLResponse(ctx, res, err)
}
if ctx.QueryInt("list-type") == 2 {
res, err := c.be.ListObjectsV2(ctx.Params("bucket"), "", "", "", 1)
return Responce(ctx, res, err)
res, err := c.be.ListObjectsV2(bucket, prefix, marker, delimiter, maxkeys)
return SendXMLResponse(ctx, res, err)
}
res, err := c.be.ListObjects(ctx.Params("bucket"), "", "", "", 1)
return Responce(ctx, res, err)
res, err := c.be.ListObjects(bucket, prefix, marker, delimiter, maxkeys)
return SendXMLResponse(ctx, res, err)
}
func (c S3ApiController) PutBucketActions(ctx *fiber.Ctx) error {
@@ -131,76 +177,68 @@ func (c S3ApiController) PutBucketActions(ctx *fiber.Ctx) error {
GrantWriteACP: &grantWriteACP,
})
return Responce[any](ctx, nil, err)
return SendResponse(ctx, err)
}
err := c.be.PutBucket(bucket)
return Responce[any](ctx, nil, err)
return SendResponse(ctx, err)
}
func (c S3ApiController) PutActions(ctx *fiber.Ctx) error {
dstBucket, dstKeyStart, dstKeyEnd, uploadId, partNumberStr := ctx.Params("bucket"), ctx.Params("key"), ctx.Params("*1"), ctx.Query("uploadId"), ctx.Query("partNumber")
copySource, copySrcIfMatch, copySrcIfNoneMatch,
copySrcModifSince, copySrcUnmodifSince, acl,
grantFullControl, grantRead, grantReadACP,
granWrite, grantWriteACP, contentLengthStr :=
// Copy source headers
ctx.Get("X-Amz-Copy-Source"),
ctx.Get("X-Amz-Copy-Source-If-Match"),
ctx.Get("X-Amz-Copy-Source-If-None-Match"),
ctx.Get("X-Amz-Copy-Source-If-Modified-Since"),
ctx.Get("X-Amz-Copy-Source-If-Unmodified-Since"),
// Permission headers
ctx.Get("X-Amz-Acl"),
ctx.Get("X-Amz-Grant-Full-Control"),
ctx.Get("X-Amz-Grant-Read"),
ctx.Get("X-Amz-Grant-Read-Acp"),
ctx.Get("X-Amz-Grant-Write"),
ctx.Get("X-Amz-Grant-Write-Acp"),
// Other headers
ctx.Get("Content-Length")
bucket := ctx.Params("bucket")
keyStart := ctx.Params("key")
keyEnd := ctx.Params("*1")
uploadId := ctx.Query("uploadId")
partNumberStr := ctx.Query("partNumber")
// Copy source headers
copySource := ctx.Get("X-Amz-Copy-Source")
copySrcIfMatch := ctx.Get("X-Amz-Copy-Source-If-Match")
copySrcIfNoneMatch := ctx.Get("X-Amz-Copy-Source-If-None-Match")
copySrcModifSince := ctx.Get("X-Amz-Copy-Source-If-Modified-Since")
copySrcUnmodifSince := ctx.Get("X-Amz-Copy-Source-If-Unmodified-Since")
// Permission headers
acl := ctx.Get("X-Amz-Acl")
grantFullControl := ctx.Get("X-Amz-Grant-Full-Control")
grantRead := ctx.Get("X-Amz-Grant-Read")
grantReadACP := ctx.Get("X-Amz-Grant-Read-Acp")
granWrite := ctx.Get("X-Amz-Grant-Write")
grantWriteACP := ctx.Get("X-Amz-Grant-Write-Acp")
// Other headers
contentLengthStr := ctx.Get("Content-Length")
grants := grantFullControl + grantRead + grantReadACP + granWrite + grantWriteACP
if dstKeyEnd != "" {
dstKeyStart = strings.Join([]string{dstKeyStart, dstKeyEnd}, "/")
if keyEnd != "" {
keyStart = strings.Join([]string{keyStart, keyEnd}, "/")
}
path := ctx.Path()
if path[len(path)-1:] == "/" && keyStart[len(keyStart)-1:] != "/" {
keyStart = keyStart + "/"
}
if partNumberStr != "" {
copySrcModifSinceDate, err := time.Parse(time.RFC3339, copySrcModifSince)
if err != nil && copySrcModifSince != "" {
return errors.New("wrong api call")
}
copySrcUnmodifSinceDate, err := time.Parse(time.RFC3339, copySrcUnmodifSince)
if err != nil && copySrcUnmodifSince != "" {
return errors.New("wrong api call")
}
partNumber, err := strconv.ParseInt(partNumberStr, 10, 64)
var contentLength int64
if contentLengthStr != "" {
var err error
contentLength, err = strconv.ParseInt(contentLengthStr, 10, 64)
if err != nil {
return errors.New("wrong api call")
return SendResponse(ctx, s3err.GetAPIError(s3err.ErrInvalidRequest))
}
res, err := c.be.UploadPartCopy(&s3.UploadPartCopyInput{
Bucket: &dstBucket,
Key: &dstKeyStart,
PartNumber: int32(partNumber),
UploadId: &uploadId,
CopySource: &copySource,
CopySourceIfMatch: &copySrcIfMatch,
CopySourceIfNoneMatch: &copySrcIfNoneMatch,
CopySourceIfModifiedSince: &copySrcModifSinceDate,
CopySourceIfUnmodifiedSince: &copySrcUnmodifSinceDate,
})
return Responce(ctx, res, err)
}
if uploadId != "" {
if uploadId != "" && partNumberStr != "" {
partNumber := ctx.QueryInt("partNumber", -1)
if partNumber < 1 {
return SendResponse(ctx, s3err.GetAPIError(s3err.ErrInvalidPart))
}
body := io.ReadSeeker(bytes.NewReader([]byte(ctx.Body())))
res, err := c.be.UploadPart(dstBucket, dstKeyStart, uploadId, body)
return Responce(ctx, res, err)
etag, err := c.be.PutObjectPart(bucket, keyStart, uploadId,
partNumber, contentLength, body)
ctx.Response().Header.Set("ETag", etag)
return SendResponse(ctx, err)
}
if grants != "" || acl != "" {
@@ -209,8 +247,8 @@ func (c S3ApiController) PutActions(ctx *fiber.Ctx) error {
}
err := c.be.PutObjectAcl(&s3.PutObjectAclInput{
Bucket: &dstBucket,
Key: &dstKeyStart,
Bucket: &bucket,
Key: &keyStart,
ACL: types.ObjectCannedACL(acl),
GrantFullControl: &grantFullControl,
GrantRead: &grantRead,
@@ -218,37 +256,35 @@ func (c S3ApiController) PutActions(ctx *fiber.Ctx) error {
GrantWrite: &granWrite,
GrantWriteACP: &grantWriteACP,
})
return Responce[any](ctx, nil, err)
return SendResponse(ctx, err)
}
if copySource != "" {
_, _, _, _ = copySrcIfMatch, copySrcIfNoneMatch,
copySrcModifSince, copySrcUnmodifSince
copySourceSplit := strings.Split(copySource, "/")
srcBucket, srcObject := copySourceSplit[0], copySourceSplit[1:]
res, err := c.be.CopyObject(srcBucket, strings.Join(srcObject, "/"), dstBucket, dstKeyStart)
return Responce(ctx, res, err)
}
contentLength, err := strconv.ParseInt(contentLengthStr, 10, 64)
if err != nil {
return errors.New("wrong api call")
res, err := c.be.CopyObject(srcBucket, strings.Join(srcObject, "/"), bucket, keyStart)
return SendXMLResponse(ctx, res, err)
}
metadata := utils.GetUserMetaData(&ctx.Request().Header)
res, err := c.be.PutObject(&s3.PutObjectInput{
Bucket: &dstBucket,
Key: &dstKeyStart,
etag, err := c.be.PutObject(&s3.PutObjectInput{
Bucket: &bucket,
Key: &keyStart,
ContentLength: contentLength,
Metadata: metadata,
Body: bytes.NewReader(ctx.Request().Body()),
})
return Responce(ctx, res, err)
ctx.Response().Header.Set("ETag", etag)
return SendResponse(ctx, err)
}
func (c S3ApiController) DeleteBucket(ctx *fiber.Ctx) error {
err := c.be.DeleteBucket(ctx.Params("bucket"))
return Responce[any](ctx, nil, err)
return SendResponse(ctx, err)
}
func (c S3ApiController) DeleteObjects(ctx *fiber.Ctx) error {
@@ -258,11 +294,14 @@ func (c S3ApiController) DeleteObjects(ctx *fiber.Ctx) error {
}
err := c.be.DeleteObjects(ctx.Params("bucket"), &s3.DeleteObjectsInput{Delete: &dObj})
return Responce[any](ctx, nil, err)
return SendResponse(ctx, err)
}
func (c S3ApiController) DeleteActions(ctx *fiber.Ctx) error {
bucket, key, keyEnd, uploadId := ctx.Params("bucket"), ctx.Params("key"), ctx.Params("*1"), ctx.Query("uploadId")
bucket := ctx.Params("bucket")
key := ctx.Params("key")
keyEnd := ctx.Params("*1")
uploadId := ctx.Query("uploadId")
if keyEnd != "" {
key = strings.Join([]string{key, keyEnd}, "/")
@@ -278,30 +317,44 @@ func (c S3ApiController) DeleteActions(ctx *fiber.Ctx) error {
ExpectedBucketOwner: &expectedBucketOwner,
RequestPayer: types.RequestPayer(requestPayer),
})
return Responce[any](ctx, nil, err)
return SendResponse(ctx, err)
}
err := c.be.DeleteObject(bucket, key)
return Responce[any](ctx, nil, err)
return SendResponse(ctx, err)
}
func (c S3ApiController) HeadBucket(ctx *fiber.Ctx) error {
res, err := c.be.HeadBucket(ctx.Params("bucket"))
return Responce(ctx, res, err)
_, err := c.be.HeadBucket(ctx.Params("bucket"))
// TODO: set bucket response headers
return SendResponse(ctx, err)
}
const (
timefmt = "Mon, 02 Jan 2006 15:04:05 GMT"
)
func (c S3ApiController) HeadObject(ctx *fiber.Ctx) error {
bucket, key, keyEnd := ctx.Params("bucket"), ctx.Params("key"), ctx.Params("*1")
bucket := ctx.Params("bucket")
key := ctx.Params("key")
keyEnd := ctx.Params("*1")
if keyEnd != "" {
key = strings.Join([]string{key, keyEnd}, "/")
}
res, err := c.be.HeadObject(bucket, key)
if err != nil {
return ErrorResponse(ctx, err)
return SendResponse(ctx, err)
}
if res == nil {
return SendResponse(ctx, fmt.Errorf("head object nil response"))
}
utils.SetMetaHeaders(ctx, res.Metadata)
var lastmod string
if res.LastModified != nil {
lastmod = res.LastModified.Format(timefmt)
}
utils.SetResponseHeaders(ctx, []utils.CustomHeader{
{
Key: "Content-Length",
@@ -309,60 +362,62 @@ func (c S3ApiController) HeadObject(ctx *fiber.Ctx) error {
},
{
Key: "Content-Type",
Value: *res.ContentType,
Value: getstring(res.ContentType),
},
{
Key: "Content-Encoding",
Value: *res.ContentEncoding,
Value: getstring(res.ContentEncoding),
},
{
Key: "ETag",
Value: *res.ETag,
Value: getstring(res.ETag),
},
{
Key: "Last-Modified",
Value: res.LastModified.Format("20060102T150405Z"),
Value: lastmod,
},
})
// https://github.com/gofiber/fiber/issues/2080
// ctx.SendStatus() sets incorrect content length on HEAD request
ctx.Status(http.StatusOK)
return nil
return SendResponse(ctx, nil)
}
func (c S3ApiController) CreateActions(ctx *fiber.Ctx) error {
bucket, key, keyEnd, uploadId := ctx.Params("bucket"), ctx.Params("key"), ctx.Params("*1"), ctx.Query("uploadId")
var restoreRequest s3.RestoreObjectInput
bucket := ctx.Params("bucket")
key := ctx.Params("key")
keyEnd := ctx.Params("*1")
uploadId := ctx.Query("uploadId")
if keyEnd != "" {
key = strings.Join([]string{key, keyEnd}, "/")
}
var restoreRequest s3.RestoreObjectInput
if ctx.Request().URI().QueryArgs().Has("restore") {
xmlErr := xml.Unmarshal(ctx.Body(), &restoreRequest)
if xmlErr != nil {
return errors.New("wrong api call")
}
err := c.be.RestoreObject(bucket, key, &restoreRequest)
return Responce[any](ctx, nil, err)
return SendResponse(ctx, err)
}
if uploadId != "" {
var parts []types.Part
data := struct {
Parts []types.Part `xml:"Part"`
}{}
if err := xml.Unmarshal(ctx.Body(), &parts); err != nil {
if err := xml.Unmarshal(ctx.Body(), &data); err != nil {
return errors.New("wrong api call")
}
res, err := c.be.CompleteMultipartUpload(bucket, "", uploadId, parts)
return Responce(ctx, res, err)
res, err := c.be.CompleteMultipartUpload(bucket, key, uploadId, data.Parts)
return SendXMLResponse(ctx, res, err)
}
res, err := c.be.CreateMultipartUpload(&s3.CreateMultipartUploadInput{Bucket: &bucket, Key: &key})
return Responce(ctx, res, err)
return SendXMLResponse(ctx, res, err)
}
func Responce[R comparable](ctx *fiber.Ctx, resp R, err error) error {
func SendResponse(ctx *fiber.Ctx, err error) error {
if err != nil {
serr, ok := err.(s3err.APIError)
if ok {
@@ -373,20 +428,38 @@ func Responce[R comparable](ctx *fiber.Ctx, resp R, err error) error {
s3err.GetAPIError(s3err.ErrInternalError), "", "", ""))
}
// https://github.com/gofiber/fiber/issues/2080
// ctx.SendStatus() sets incorrect content length on HEAD request
ctx.Status(http.StatusOK)
return nil
}
func SendXMLResponse(ctx *fiber.Ctx, resp any, err error) error {
if err != nil {
serr, ok := err.(s3err.APIError)
if ok {
ctx.Status(serr.HTTPStatusCode)
return ctx.Send(s3err.GetAPIErrorResponse(serr, "", "", ""))
}
fmt.Fprintf(os.Stderr, "Internal Error, req:\n%v\nerr:\n%v\n",
ctx.Request(), err)
return ctx.Send(s3err.GetAPIErrorResponse(
s3err.GetAPIError(s3err.ErrInternalError), "", "", ""))
}
var b []byte
if b, err = xml.Marshal(resp); err != nil {
return err
if resp != nil {
if b, err = xml.Marshal(resp); err != nil {
return err
}
if len(b) > 0 {
ctx.Response().Header.SetContentType(fiber.MIMEApplicationXML)
}
}
return ctx.Send(b)
}
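The nil guard added to `SendXMLResponse` means an empty result sends no XML payload at all, and the content type is only set when bytes were actually marshaled. A minimal sketch of that pattern, using only `encoding/xml` (`marshalIfPresent` and `listResult` are hypothetical names for illustration):

```go
package main

import (
	"encoding/xml"
	"fmt"
)

// marshalIfPresent mirrors the guard in SendXMLResponse: marshal only
// when a response body exists, otherwise return an empty payload.
func marshalIfPresent(resp any) ([]byte, error) {
	if resp == nil {
		return nil, nil
	}
	return xml.Marshal(resp)
}

type listResult struct {
	XMLName xml.Name `xml:"ListResult"`
	Key     string   `xml:"Key"`
}

func main() {
	b, _ := marshalIfPresent(listResult{Key: "sample.jpg"})
	fmt.Println(string(b))
	b, _ = marshalIfPresent(nil)
	fmt.Println(len(b)) // empty body for nil response
}
```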
func ErrorResponse(ctx *fiber.Ctx, err error) error {
serr, ok := err.(s3err.APIError)
if ok {
ctx.Status(serr.HTTPStatusCode)
return ctx.Send(s3err.GetAPIErrorResponse(serr, "", "", ""))
}
return ctx.Send(s3err.GetAPIErrorResponse(
s3err.GetAPIError(s3err.ErrInternalError), "", "", ""))
}


@@ -29,6 +29,7 @@ import (
"github.com/valyala/fasthttp"
"github.com/versity/versitygw/backend"
"github.com/versity/versitygw/s3err"
"github.com/versity/versitygw/s3response"
)
func TestNew(t *testing.T) {
@@ -128,8 +129,8 @@ func TestS3ApiController_GetActions(t *testing.T) {
app := fiber.New()
s3ApiController := S3ApiController{be: &BackendMock{
ListObjectPartsFunc: func(bucket, object, uploadID string, partNumberMarker int, maxParts int) (*s3.ListPartsOutput, error) {
return &s3.ListPartsOutput{}, nil
ListObjectPartsFunc: func(bucket, object, uploadID string, partNumberMarker int, maxParts int) (s3response.ListPartsResponse, error) {
return s3response.ListPartsResponse{}, nil
},
GetObjectAclFunc: func(bucket, object string) (*s3.GetObjectAclOutput, error) {
return &s3.GetObjectAclOutput{}, nil
@@ -169,16 +170,16 @@ func TestS3ApiController_GetActions(t *testing.T) {
req: httptest.NewRequest(http.MethodGet, "/my-bucket/key?uploadId=hello&max-parts=InvalidMaxParts", nil),
},
wantErr: false,
statusCode: 500,
statusCode: 400,
},
{
name: "Get-actions-invalid-part-number",
name: "Get-actions-invalid-part-number-marker",
app: app,
args: args{
req: httptest.NewRequest(http.MethodGet, "/my-bucket/key?uploadId=hello&max-parts=200&part-number-marker=InvalidPartNumber", nil),
},
wantErr: false,
statusCode: 500,
statusCode: 400,
},
{
name: "Get-actions-list-object-parts-success",
@@ -233,8 +234,8 @@ func TestS3ApiController_ListActions(t *testing.T) {
GetBucketAclFunc: func(bucket string) (*s3.GetBucketAclOutput, error) {
return &s3.GetBucketAclOutput{}, nil
},
ListMultipartUploadsFunc: func(output *s3.ListMultipartUploadsInput) (*s3.ListMultipartUploadsOutput, error) {
return &s3.ListMultipartUploadsOutput{}, nil
ListMultipartUploadsFunc: func(output *s3.ListMultipartUploadsInput) (s3response.ListMultipartUploadsResponse, error) {
return s3response.ListMultipartUploadsResponse{}, nil
},
ListObjectsV2Func: func(bucket, prefix, marker, delim string, maxkeys int) (*s3.ListObjectsV2Output, error) {
return &s3.ListObjectsV2Output{}, nil
@@ -441,13 +442,13 @@ func TestS3ApiController_PutActions(t *testing.T) {
statusCode int
}{
{
name: "Upload-copy-part-error-case",
name: "Upload-put-part-error-case",
app: app,
args: args{
req: httptest.NewRequest(http.MethodPut, "/my-bucket/my-key?partNumber=invalid", nil),
req: httptest.NewRequest(http.MethodPut, "/my-bucket/my-key?uploadId=abc&partNumber=invalid", nil),
},
wantErr: false,
statusCode: 500,
statusCode: 400,
},
{
name: "Upload-copy-part-success",
@@ -517,11 +518,13 @@ func TestS3ApiController_PutActions(t *testing.T) {
resp, err := tt.app.Test(tt.args.req)
if (err != nil) != tt.wantErr {
- t.Errorf("S3ApiController.GetActions() error = %v, wantErr %v", err, tt.wantErr)
+ t.Errorf("S3ApiController.GetActions() %v error = %v, wantErr %v",
+ tt.name, err, tt.wantErr)
}
if resp.StatusCode != tt.statusCode {
- t.Errorf("S3ApiController.GetActions() statusCode = %v, wantStatusCode = %v", resp.StatusCode, tt.statusCode)
+ t.Errorf("S3ApiController.GetActions() %v statusCode = %v, wantStatusCode = %v",
+ tt.name, resp.StatusCode, tt.statusCode)
}
}
}
@@ -951,7 +954,7 @@ func TestS3ApiController_CreateActions(t *testing.T) {
}
}
- func Test_responce(t *testing.T) {
+ func Test_XMLresponse(t *testing.T) {
type args struct {
ctx *fiber.Ctx
resp any
@@ -1008,14 +1011,74 @@ func Test_responce(t *testing.T) {
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
- if err := Responce(tt.args.ctx, tt.args.resp, tt.args.err); (err != nil) != tt.wantErr {
- t.Errorf("responce() error = %v, wantErr %v", err, tt.wantErr)
+ if err := SendXMLResponse(tt.args.ctx, tt.args.resp, tt.args.err); (err != nil) != tt.wantErr {
+ t.Errorf("responce() %v error = %v, wantErr %v", tt.name, err, tt.wantErr)
}
statusCode := tt.args.ctx.Response().StatusCode()
if statusCode != tt.statusCode {
- t.Errorf("responce() code = %v, wantErr %v", statusCode, tt.wantErr)
+ t.Errorf("responce() %v code = %v, wantErr %v", tt.name, statusCode, tt.wantErr)
}
})
}
}
func Test_response(t *testing.T) {
type args struct {
ctx *fiber.Ctx
resp any
err error
}
app := fiber.New()
tests := []struct {
name string
args args
wantErr bool
statusCode int
}{
{
name: "Internal-server-error",
args: args{
ctx: app.AcquireCtx(&fasthttp.RequestCtx{}),
resp: nil,
err: s3err.GetAPIError(16),
},
wantErr: false,
statusCode: 500,
},
{
name: "Error-not-implemented",
args: args{
ctx: app.AcquireCtx(&fasthttp.RequestCtx{}),
resp: nil,
err: s3err.GetAPIError(50),
},
wantErr: false,
statusCode: 501,
},
{
name: "Successful-response",
args: args{
ctx: app.AcquireCtx(&fasthttp.RequestCtx{}),
resp: "Valid response",
err: nil,
},
wantErr: false,
statusCode: 200,
},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
if err := SendResponse(tt.args.ctx, tt.args.err); (err != nil) != tt.wantErr {
t.Errorf("responce() %v error = %v, wantErr %v", tt.name, err, tt.wantErr)
}
statusCode := tt.args.ctx.Response().StatusCode()
if statusCode != tt.statusCode {
t.Errorf("responce() %v code = %v, wantErr %v", tt.name, statusCode, tt.wantErr)
}
})
}
}


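The new Test_response cases above expect s3err table index 16 to map to HTTP 500 and index 50 to 501. A minimal sketch of the kind of indexed error table those tests imply — only the two index-to-status pairs come from the tests; the error code names and the table shape are assumptions, not the real s3err package:

```go
package main

import "fmt"

// APIError is a simplified stand-in for s3err.APIError.
type APIError struct {
	Code           string
	HTTPStatusCode int
}

func (e APIError) Error() string { return e.Code }

// errTable: only the 16 -> 500 and 50 -> 501 mappings are taken from the
// tests above; the code names here are invented for illustration.
var errTable = map[int]APIError{
	16: {Code: "InternalError", HTTPStatusCode: 500},
	50: {Code: "NotImplemented", HTTPStatusCode: 501},
}

// GetAPIError looks an error up by index, falling back to a 500.
func GetAPIError(idx int) APIError {
	if e, ok := errTable[idx]; ok {
		return e
	}
	return APIError{Code: "InternalError", HTTPStatusCode: 500}
}

func main() {
	fmt.Println(GetAPIError(16).HTTPStatusCode) // 500
	fmt.Println(GetAPIError(50).HTTPStatusCode) // 501
}
```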
@@ -35,63 +35,60 @@ const (
iso8601Format = "20060102T150405Z"
)
- type AdminConfig struct {
- AdminAccess string
- AdminSecret string
- Region string
+ type RootUserConfig struct {
+ Access string
+ Secret string
+ Region string
}
- func VerifyV4Signature(config AdminConfig, iam auth.IAMService, debug bool) fiber.Handler {
- acct := accounts{
- admin: config,
- iam: iam,
- }
+ func VerifyV4Signature(root RootUserConfig, iam auth.IAMService, debug bool) fiber.Handler {
+ acct := accounts{root: root, iam: iam}
return func(ctx *fiber.Ctx) error {
authorization := ctx.Get("Authorization")
if authorization == "" {
- return controllers.Responce[any](ctx, nil, s3err.GetAPIError(s3err.ErrAuthHeaderEmpty))
+ return controllers.SendResponse(ctx, s3err.GetAPIError(s3err.ErrAuthHeaderEmpty))
}
// Check the signature version
authParts := strings.Split(authorization, " ")
if len(authParts) < 4 {
- return controllers.Responce[any](ctx, nil, s3err.GetAPIError(s3err.ErrMissingFields))
+ return controllers.SendResponse(ctx, s3err.GetAPIError(s3err.ErrMissingFields))
}
if authParts[0] != "AWS4-HMAC-SHA256" {
- return controllers.Responce[any](ctx, nil, s3err.GetAPIError(s3err.ErrSignatureVersionNotSupported))
+ return controllers.SendResponse(ctx, s3err.GetAPIError(s3err.ErrSignatureVersionNotSupported))
}
credKv := strings.Split(authParts[1], "=")
if len(credKv) != 2 {
- return controllers.Responce[any](ctx, nil, s3err.GetAPIError(s3err.ErrCredMalformed))
+ return controllers.SendResponse(ctx, s3err.GetAPIError(s3err.ErrCredMalformed))
}
creds := strings.Split(credKv[1], "/")
if len(creds) < 4 {
- return controllers.Responce[any](ctx, nil, s3err.GetAPIError(s3err.ErrCredMalformed))
+ return controllers.SendResponse(ctx, s3err.GetAPIError(s3err.ErrCredMalformed))
}
signHdrKv := strings.Split(authParts[2], "=")
if len(signHdrKv) != 2 {
- return controllers.Responce[any](ctx, nil, s3err.GetAPIError(s3err.ErrCredMalformed))
+ return controllers.SendResponse(ctx, s3err.GetAPIError(s3err.ErrCredMalformed))
}
signedHdrs := strings.Split(signHdrKv[1], ";")
- secret, ok := acct.getAcctSecret(creds[0])
- if !ok {
- return controllers.Responce[any](ctx, nil, s3err.GetAPIError(s3err.ErrInvalidAccessKeyID))
+ account := acct.getAccount(creds[0])
+ if account == nil {
+ return controllers.SendResponse(ctx, s3err.GetAPIError(s3err.ErrInvalidAccessKeyID))
}
// Check X-Amz-Date header
date := ctx.Get("X-Amz-Date")
if date == "" {
- return controllers.Responce[any](ctx, nil, s3err.GetAPIError(s3err.ErrMissingDateHeader))
+ return controllers.SendResponse(ctx, s3err.GetAPIError(s3err.ErrMissingDateHeader))
}
// Parse the date and check the date validity
tdate, err := time.Parse(iso8601Format, date)
if err != nil {
- return controllers.Responce[any](ctx, nil, s3err.GetAPIError(s3err.ErrMalformedDate))
+ return controllers.SendResponse(ctx, s3err.GetAPIError(s3err.ErrMalformedDate))
}
// Calculate the hash of the request payload
@@ -102,60 +99,62 @@ func VerifyV4Signature(config AdminConfig, iam auth.IAMService, debug bool) fibe
// Compare the calculated hash with the hash provided
if hashPayloadHeader != hexPayload {
- return controllers.Responce[any](ctx, nil, s3err.GetAPIError(s3err.ErrContentSHA256Mismatch))
+ return controllers.SendResponse(ctx, s3err.GetAPIError(s3err.ErrContentSHA256Mismatch))
}
// Create a new http request instance from fasthttp request
req, err := utils.CreateHttpRequestFromCtx(ctx, signedHdrs)
if err != nil {
- return controllers.Responce[any](ctx, nil, s3err.GetAPIError(s3err.ErrInternalError))
+ return controllers.SendResponse(ctx, s3err.GetAPIError(s3err.ErrInternalError))
}
signer := v4.NewSigner()
signErr := signer.SignHTTP(req.Context(), aws.Credentials{
AccessKeyID: creds[0],
- SecretAccessKey: secret,
- }, req, hexPayload, creds[3], config.Region, tdate, func(options *v4.SignerOptions) {
+ SecretAccessKey: account.Secret,
+ }, req, hexPayload, creds[3], account.Region, tdate, func(options *v4.SignerOptions) {
if debug {
options.LogSigning = true
options.Logger = logging.NewStandardLogger(os.Stderr)
}
})
if signErr != nil {
- return controllers.Responce[any](ctx, nil, s3err.GetAPIError(s3err.ErrInternalError))
+ return controllers.SendResponse(ctx, s3err.GetAPIError(s3err.ErrInternalError))
}
parts := strings.Split(req.Header.Get("Authorization"), " ")
if len(parts) < 4 {
- return controllers.Responce[any](ctx, nil, s3err.GetAPIError(s3err.ErrMissingFields))
+ return controllers.SendResponse(ctx, s3err.GetAPIError(s3err.ErrMissingFields))
}
calculatedSign := strings.Split(parts[3], "=")[1]
expectedSign := strings.Split(authParts[3], "=")[1]
if expectedSign != calculatedSign {
- return controllers.Responce[any](ctx, nil, s3err.GetAPIError(s3err.ErrSignatureDoesNotMatch))
+ return controllers.SendResponse(ctx, s3err.GetAPIError(s3err.ErrSignatureDoesNotMatch))
}
+ ctx.Locals("role", account.Role)
return ctx.Next()
}
}
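The validation chain above can be sketched as a standalone parser. This is an illustrative stdlib-only reduction, not the project's code: it returns plain errors where the middleware returns specific s3err values, and the trailing-comma trimming is an assumption about the header shape the middleware sees.

```go
package main

import (
	"fmt"
	"strings"
)

// parseAuthHeader performs the same structural checks as the middleware
// above on a SigV4 Authorization header. Errors here are plain fmt
// errors; the real code maps each failure to a distinct s3err.APIError.
func parseAuthHeader(auth string) (access, region, signature string, signedHdrs []string, err error) {
	parts := strings.Split(auth, " ")
	if len(parts) < 4 {
		return "", "", "", nil, fmt.Errorf("missing fields")
	}
	if parts[0] != "AWS4-HMAC-SHA256" {
		return "", "", "", nil, fmt.Errorf("signature version not supported")
	}
	credKv := strings.Split(strings.TrimSuffix(parts[1], ","), "=")
	if len(credKv) != 2 {
		return "", "", "", nil, fmt.Errorf("credential malformed")
	}
	// Credential scope layout: access-key/date/region/service/aws4_request
	creds := strings.Split(credKv[1], "/")
	if len(creds) < 4 {
		return "", "", "", nil, fmt.Errorf("credential malformed")
	}
	hdrKv := strings.Split(strings.TrimSuffix(parts[2], ","), "=")
	if len(hdrKv) != 2 {
		return "", "", "", nil, fmt.Errorf("credential malformed")
	}
	signedHdrs = strings.Split(hdrKv[1], ";")
	sigKv := strings.Split(parts[3], "=")
	if len(sigKv) != 2 {
		return "", "", "", nil, fmt.Errorf("signature malformed")
	}
	return creds[0], creds[2], sigKv[1], signedHdrs, nil
}

func main() {
	access, region, sig, hdrs, err := parseAuthHeader(
		"AWS4-HMAC-SHA256 Credential=AKID/20230612/us-east-1/s3/aws4_request, SignedHeaders=host;x-amz-date, Signature=deadbeef")
	fmt.Println(access, region, sig, hdrs, err)
}
```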
type accounts struct {
- admin AdminConfig
- iam auth.IAMService
+ root RootUserConfig
+ iam auth.IAMService
}
- func (a accounts) getAcctSecret(access string) (string, bool) {
- if a.admin.AdminAccess == access {
- return a.admin.AdminSecret, true
+ func (a accounts) getAccount(access string) *auth.Account {
+ var account *auth.Account
+ if access == a.root.Access {
+ account = &auth.Account{
+ Secret: a.root.Secret,
+ Role: "admin",
+ Region: a.root.Region,
+ }
+ } else {
+ account = a.iam.GetUserAccount(access)
+ }
- conf, err := a.iam.GetIAMConfig()
- if err != nil {
- return "", false
- }
- secret, ok := conf.AccessAccounts[access]
- return secret, ok
+ return account
}
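The lookup order getAccount introduces — static root credentials first, then the IAM service — can be sketched with stdlib-only stand-ins. The type names below (Account, rootUser, iamStore) are illustrative, not the project's real auth types:

```go
package main

import "fmt"

// Account is a trimmed stand-in for auth.Account.
type Account struct {
	Secret string
	Role   string
	Region string
}

// rootUser mirrors the RootUserConfig fields from the diff above.
type rootUser struct {
	Access, Secret, Region string
}

// iamStore stands in for the IAM service's user lookup.
type iamStore map[string]*Account

// getAccount checks the root credentials first; root always resolves to
// the "admin" role. Any other access key falls through to IAM, and an
// unknown key yields nil (the middleware maps nil to ErrInvalidAccessKeyID).
func getAccount(root rootUser, iam iamStore, access string) *Account {
	if access == root.Access {
		return &Account{Secret: root.Secret, Role: "admin", Region: root.Region}
	}
	return iam[access]
}

func main() {
	root := rootUser{Access: "ROOT", Secret: "rootsecret", Region: "us-east-1"}
	iam := iamStore{"USER1": {Secret: "s1", Role: "user", Region: "us-east-1"}}

	fmt.Println(getAccount(root, iam, "ROOT").Role)  // admin
	fmt.Println(getAccount(root, iam, "USER1").Role) // user
	fmt.Println(getAccount(root, iam, "NOPE"))       // <nil>
}
```

Returning a single nullable account (instead of the old `(secret, bool)` pair) lets the caller carry the role and region forward, which is what the new `ctx.Locals("role", account.Role)` line relies on.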


@@ -34,7 +34,7 @@ func VerifyMD5Body() fiber.Handler {
calculatedSum := base64.StdEncoding.EncodeToString(sum[:])
if incomingSum != calculatedSum {
- return controllers.Responce[any](ctx, nil, s3err.GetAPIError(s3err.ErrInvalidDigest))
+ return controllers.SendResponse(ctx, s3err.GetAPIError(s3err.ErrInvalidDigest))
}
return ctx.Next()


@@ -17,13 +17,18 @@ package s3api
import (
"github.com/gofiber/fiber/v2"
"github.com/versity/versitygw/backend"
+ "github.com/versity/versitygw/backend/auth"
"github.com/versity/versitygw/s3api/controllers"
)
type S3ApiRouter struct{}
- func (sa *S3ApiRouter) Init(app *fiber.App, be backend.Backend) {
+ func (sa *S3ApiRouter) Init(app *fiber.App, be backend.Backend, iam auth.IAMService) {
s3ApiController := controllers.New(be)
+ adminController := controllers.AdminController{IAMService: iam}
+ // TODO: think of better routing system
+ app.Post("/create-user", adminController.CreateUser)
// ListBuckets action
app.Get("/", s3ApiController.ListBuckets)


@@ -19,12 +19,14 @@ import (
"github.com/gofiber/fiber/v2"
"github.com/versity/versitygw/backend"
+ "github.com/versity/versitygw/backend/auth"
)
func TestS3ApiRouter_Init(t *testing.T) {
type args struct {
app *fiber.App
be backend.Backend
+ iam auth.IAMService
}
tests := []struct {
name string
@@ -37,12 +39,13 @@ func TestS3ApiRouter_Init(t *testing.T) {
args: args{
app: fiber.New(),
be: backend.BackendUnsupported{},
+ iam: auth.IAMServiceUnsupported{},
},
},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
- tt.sa.Init(tt.args.app, tt.args.be)
+ tt.sa.Init(tt.args.app, tt.args.be, tt.args.iam)
})
}
}


@@ -33,7 +33,7 @@ type S3ApiServer struct {
debug bool
}
- func New(app *fiber.App, be backend.Backend, port string, adminUser middlewares.AdminConfig, iam auth.IAMService, opts ...Option) (*S3ApiServer, error) {
+ func New(app *fiber.App, be backend.Backend, root middlewares.RootUserConfig, port string, iam auth.IAMService, opts ...Option) (*S3ApiServer, error) {
server := &S3ApiServer{
app: app,
backend: be,
@@ -45,10 +45,10 @@ func New(app *fiber.App, be backend.Backend, port string, adminUser middlewares.
opt(server)
}
- app.Use(middlewares.VerifyV4Signature(adminUser, iam, server.debug))
+ app.Use(middlewares.VerifyV4Signature(root, iam, server.debug))
app.Use(logger.New())
app.Use(middlewares.VerifyMD5Body())
- server.router.Init(app, be)
+ server.router.Init(app, be, iam)
return server, nil
}


@@ -26,10 +26,10 @@ import (
func TestNew(t *testing.T) {
type args struct {
- app *fiber.App
- be backend.Backend
- port string
- adminUser middlewares.AdminConfig
+ app *fiber.App
+ be backend.Backend
+ port string
+ root middlewares.RootUserConfig
}
app := fiber.New()
@@ -46,10 +46,10 @@ func TestNew(t *testing.T) {
{
name: "Create S3 api server",
args: args{
- app: app,
- be: be,
- port: port,
- adminUser: middlewares.AdminConfig{},
+ app: app,
+ be: be,
+ port: port,
+ root: middlewares.RootUserConfig{},
},
wantS3ApiServer: &S3ApiServer{
app: app,
@@ -62,8 +62,8 @@ func TestNew(t *testing.T) {
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
- gotS3ApiServer, err := New(tt.args.app, tt.args.be,
- tt.args.port, tt.args.adminUser, auth.IAMServiceUnsupported{})
+ gotS3ApiServer, err := New(tt.args.app, tt.args.be, tt.args.root,
+ tt.args.port, auth.IAMServiceUnsupported{})
if (err != nil) != tt.wantErr {
t.Errorf("New() error = %v, wantErr %v", err, tt.wantErr)
return


@@ -2,7 +2,6 @@ package utils
import (
"bytes"
- "fmt"
"net/http"
"reflect"
"testing"
@@ -62,8 +61,6 @@ func TestCreateHttpRequestFromCtx(t *testing.T) {
return
}
- fmt.Println(got.Header, tt.want.Header)
if !reflect.DeepEqual(got.Header, tt.want.Header) {
t.Errorf("CreateHttpRequestFromCtx() got = %v, want %v", got, tt.want)
}

s3response/s3response.go Normal file

@@ -0,0 +1,96 @@
// Copyright 2023 Versity Software
// This file is licensed under the Apache License, Version 2.0
// (the "License"); you may not use this file except in compliance
// with the License. You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing,
// software distributed under the License is distributed on an
// "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
// KIND, either express or implied. See the License for the
// specific language governing permissions and limitations
// under the License.
package s3response
import (
"encoding/xml"
)
// Part describes part metadata.
type Part struct {
PartNumber int
LastModified string
ETag string
Size int64
}
// ListPartsResponse - s3 api list parts response.
type ListPartsResponse struct {
XMLName xml.Name `xml:"http://s3.amazonaws.com/doc/2006-03-01/ ListPartsResult" json:"-"`
Bucket string
Key string
UploadID string `xml:"UploadId"`
Initiator Initiator
Owner Owner
// The class of storage used to store the object.
StorageClass string
PartNumberMarker int
NextPartNumberMarker int
MaxParts int
IsTruncated bool
// List of parts.
Parts []Part `xml:"Part"`
}
// ListMultipartUploadsResponse - s3 api list multipart uploads response.
type ListMultipartUploadsResponse struct {
XMLName xml.Name `xml:"http://s3.amazonaws.com/doc/2006-03-01/ ListMultipartUploadsResult" json:"-"`
Bucket string
KeyMarker string
UploadIDMarker string `xml:"UploadIdMarker"`
NextKeyMarker string
NextUploadIDMarker string `xml:"NextUploadIdMarker"`
Delimiter string
Prefix string
EncodingType string `xml:"EncodingType,omitempty"`
MaxUploads int
IsTruncated bool
// List of pending uploads.
Uploads []Upload `xml:"Upload"`
// Delimited common prefixes.
CommonPrefixes []CommonPrefix
}
// Upload describes an in-progress multipart upload
type Upload struct {
Key string
UploadID string `xml:"UploadId"`
Initiator Initiator
Owner Owner
StorageClass string
Initiated string
}
// CommonPrefix ListObjectsResponse common prefixes (directory abstraction)
type CommonPrefix struct {
Prefix string
}
// Initiator same fields as Owner
type Initiator Owner
// Owner bucket ownership
type Owner struct {
ID string
DisplayName string
}