Compare commits

...

12 Commits

Author SHA1 Message Date
Harshavardhana
048af5e5cd Fix an urgent issue in OS X build for 10.12 sierra. 2016-10-24 13:34:01 -07:00
Harshavardhana
b367365daa Fix dockerfile. 2016-10-21 18:46:33 -07:00
Minio Trusted
d1331ecc5c Fix user-agent prefix to have docker instead of suffix. 2016-10-21 17:49:56 -07:00
Harshavardhana
774e9c8e3a Block change to 1M from 128KiB and extensively use buffer pools.
Following are the benefits of this change.
```
benchmark                           old allocs     new allocs     delta
BenchmarkPutObjectPart5MbFS-4       46172          16383          -64.52%
BenchmarkPutObjectPart10MbFS-4      36849          7223           -80.40%
BenchmarkPutObjectPart25MbFS-4      31699          3865           -87.81%
BenchmarkPutObjectPart50MbFS-4      27577          2481           -91.00%
BenchmarkPutObjectVerySmallFS-4     57             56             -1.75%
BenchmarkPutObject10KbFS-4          57             56             -1.75%
BenchmarkPutObject100KbFS-4         57             56             -1.75%
BenchmarkPutObject1MbFS-4           176            56             -68.18%
BenchmarkPutObject5MbFS-4           720            90             -87.50%
BenchmarkPutObject10MbFS-4          1400           124            -91.14%
BenchmarkPutObject25MbFS-4          3440           261            -92.41%
BenchmarkPutObject50MbFS-4          6841           466            -93.19%
```
2016-10-20 13:39:24 -07:00
Harshavardhana
614a8cf7ad Sign V2 support. 2016-10-18 17:22:20 -07:00
Harshavardhana
e32e8d1922 Fix docker file. 2016-10-13 21:18:36 -07:00
Harshavardhana
8c59a41668 Checking updates should send a proper user-agent. 2016-10-13 20:52:09 -07:00
Harshavardhana
29a2ffef5d Fix build issue. 2016-10-13 09:27:44 -07:00
Harshavardhana
c836c454cc Fix README.md on release branch 2016-10-12 18:26:59 -07:00
Harshavardhana
0f7300058b Fix docker file. 2016-10-12 18:19:13 -07:00
Harshavardhana
c5300ec279 update: Deprecate the usage of update=yes query param. (#2801)
Fixes #2799
2016-10-12 18:10:52 -07:00
Kevin Qiu
3d8d12ba1b Use Set instead of Add in the event that the request already contains the content-length (#2683) 2016-10-12 18:10:52 -07:00
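A minimal illustration of the behaviour the last commit above relies on: in Go's net/http, `Header.Add` appends a second value for a key that is already present, whereas `Header.Set` replaces it, so switching to Set avoids emitting a duplicated Content-Length header. This snippet is illustrative only; the values are made up.

```go
package main

import (
	"fmt"
	"net/http"
)

func main() {
	h := http.Header{}
	h.Set("Content-Length", "1024")

	// Add appends a second value, leaving a duplicated header.
	h.Add("Content-Length", "2048")
	fmt.Println(h["Content-Length"]) // [1024 2048]

	// Set replaces any existing values, which is what the commit switches to.
	h.Set("Content-Length", "2048")
	fmt.Println(h["Content-Length"]) // [2048]
}
```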
26 changed files with 1146 additions and 325 deletions

View File

@@ -1,5 +1,8 @@
go_import_path: github.com/minio/minio
sudo: required
dist: trusty
language: go
os:
@@ -20,4 +23,4 @@ after_success:
- bash <(curl -s https://codecov.io/bash)
go:
- 1.6.2
- 1.7.1

View File

@@ -1,19 +1,17 @@
FROM golang:1.6-alpine
FROM golang:1.7-alpine
WORKDIR /go/src/app
ENV ALLOW_CONTAINER_ROOT=1
COPY . /go/src/app
RUN \
apk add --no-cache git && \
go-wrapper download && \
go-wrapper install && \
go-wrapper install -ldflags "-X github.com/minio/minio/cmd.Version=2016-10-22T00:50:41Z -X github.com/minio/minio/cmd.ReleaseTag=RELEASE.2016-10-22T00-50-41Z -X github.com/minio/minio/cmd.CommitID=d1331ecc5cc4abc7304157cc26be64618821a77c" && \
mkdir -p /export/docker && \
cp /go/src/app/docs/Docker.md /export/docker/ && \
rm -rf /go/pkg /go/src && \
apk del git
EXPOSE 9000
ENTRYPOINT ["go-wrapper", "run", "server"]
ENTRYPOINT ["go-wrapper", "run"]
VOLUME ["/export"]
CMD ["/export"]

README.md (131 changed lines)
View File

@@ -1,9 +1,8 @@
# Minio Quickstart Guide [![Gitter](https://badges.gitter.im/Join%20Chat.svg)](https://gitter.im/minio/minio?utm_source=badge&utm_medium=badge&utm_campaign=pr-badge&utm_content=badge) [![Go Report Card](https://goreportcard.com/badge/minio/minio)](https://goreportcard.com/report/minio/minio) [![codecov](https://codecov.io/gh/minio/minio/branch/master/graph/badge.svg)](https://codecov.io/gh/minio/minio)
# Minio Quickstart Guide [![Gitter](https://badges.gitter.im/Join%20Chat.svg)](https://gitter.im/minio/minio?utm_source=badge&utm_medium=badge&utm_campaign=pr-badge&utm_content=badge) [![Go Report Card](https://goreportcard.com/badge/minio/minio)](https://goreportcard.com/report/minio/minio) [![Docker Pulls](https://img.shields.io/docker/pulls/minio/minio.svg?maxAge=604800)](https://hub.docker.com/r/minio/minio/) [![codecov](https://codecov.io/gh/minio/minio/branch/master/graph/badge.svg)](https://codecov.io/gh/minio/minio)
Minio is an object storage server released under Apache License v2.0. It is compatible with Amazon S3 cloud storage service. It is best suited for storing unstructured data such as photos, videos, log files, backups and container / VM images. Size of an object can range from a few KBs to a maximum of 5TB.
## 1. Download
Minio server is light enough to be bundled with the application stack, similar to NodeJS, Redis and MySQL.
| Platform| Architecture | URL|
@@ -17,7 +16,6 @@ Minio server is light enough to be bundled with the application stack, similar t
|FreeBSD|64-bit|https://dl.minio.io/server/minio/release/freebsd-amd64/minio|
### Install from Homebrew
Install minio packages using [Homebrew](http://brew.sh/)
```sh
@@ -26,169 +24,79 @@ $ minio --help
```
### Install from Source
Source installation is only intended for developers and advanced users. If you do not have a working Golang environment, please follow [How to install Golang](https://docs.minio.io/docs/how-to-install-golang).
```sh
$ go get -u github.com/minio/minio
```
## 2. Run Minio Server
In the examples below, Minio serves the contents of the ``Photos`` directory as an object store.
### Docker Container
```sh
$ docker pull minio/minio
$ docker run -p 9000:9000 minio/minio server /export
```
Please visit Minio Docker quickstart guide for more [here](https://docs.minio.io/docs/minio-docker-quickstart-guide)
### GNU/Linux
```sh
```sh
$ chmod +x minio
$ ./minio --help
$ ./minio server ~/Photos
Endpoint: http://10.0.0.10:9000 http://127.0.0.1:9000 http://172.17.0.1:9000
AccessKey: USWUXHGYZQYFYFFIT3RE
SecretKey: MOJRH0mkL1IPauahWITSVvyDrQbEEIwljvmxdq03
Region: us-east-1
Browser Access:
http://10.0.0.10:9000 http://127.0.0.1:9000 http://172.17.0.1:9000
Command-line Access: https://docs.minio.io/docs/minio-client-quickstart-guide
$ mc config host add myminio http://10.0.0.10:9000 USWUXHGYZQYFYFFIT3RE MOJRH0mkL1IPauahWITSVvyDrQbEEIwljvmxdq03
Object API (Amazon S3 compatible):
Go: https://docs.minio.io/docs/golang-client-quickstart-guide
Java: https://docs.minio.io/docs/java-client-quickstart-guide
Python: https://docs.minio.io/docs/python-client-quickstart-guide
JavaScript: https://docs.minio.io/docs/javascript-client-quickstart-guide
...
```
### OS X
```sh
```sh
$ chmod 755 minio
$ ./minio --help
$ ./minio server ~/Photos
Endpoint: http://10.0.0.10:9000 http://127.0.0.1:9000 http://172.17.0.1:9000
AccessKey: USWUXHGYZQYFYFFIT3RE
SecretKey: MOJRH0mkL1IPauahWITSVvyDrQbEEIwljvmxdq03
Region: us-east-1
Browser Access:
http://10.0.0.10:9000 http://127.0.0.1:9000 http://172.17.0.1:9000
Command-line Access: https://docs.minio.io/docs/minio-client-quickstart-guide
$ mc config host add myminio http://10.0.0.10:9000 USWUXHGYZQYFYFFIT3RE MOJRH0mkL1IPauahWITSVvyDrQbEEIwljvmxdq03
Object API (Amazon S3 compatible):
Go: https://docs.minio.io/docs/golang-client-quickstart-guide
Java: https://docs.minio.io/docs/java-client-quickstart-guide
Python: https://docs.minio.io/docs/python-client-quickstart-guide
JavaScript: https://docs.minio.io/docs/javascript-client-quickstart-guide
...
```
### Microsoft Windows
```sh
C:\Users\Username\Downloads> minio.exe --help
C:\Users\Username\Downloads> minio.exe server D:\Photos
Endpoint: http://10.0.0.10:9000 http://127.0.0.1:9000 http://172.17.0.1:9000
AccessKey: USWUXHGYZQYFYFFIT3RE
SecretKey: MOJRH0mkL1IPauahWITSVvyDrQbEEIwljvmxdq03
Region: us-east-1
Browser Access:
http://10.0.0.10:9000 http://127.0.0.1:9000 http://172.17.0.1:9000
Command-line Access: https://docs.minio.io/docs/minio-client-quickstart-guide
$ mc.exe config host add myminio http://10.0.0.10:9000 USWUXHGYZQYFYFFIT3RE MOJRH0mkL1IPauahWITSVvyDrQbEEIwljvmxdq03
Object API (Amazon S3 compatible):
Go: https://docs.minio.io/docs/golang-client-quickstart-guide
Java: https://docs.minio.io/docs/java-client-quickstart-guide
Python: https://docs.minio.io/docs/python-client-quickstart-guide
JavaScript: https://docs.minio.io/docs/javascript-client-quickstart-guide
...
```
### Docker Container
```sh
$ docker pull minio/minio
$ docker run -p 9000:9000 minio/minio
```
Please visit Minio Docker quickstart guide for more [here](https://docs.minio.io/docs/minio-docker-quickstart-guide)
### FreeBSD
```sh
$ chmod 755 minio
$ ./minio --help
$ ./minio server ~/Photos
Endpoint: http://10.0.0.10:9000 http://127.0.0.1:9000 http://172.17.0.1:9000
AccessKey: USWUXHGYZQYFYFFIT3RE
SecretKey: MOJRH0mkL1IPauahWITSVvyDrQbEEIwljvmxdq03
Region: us-east-1
Browser Access:
http://10.0.0.10:9000 http://127.0.0.1:9000 http://172.17.0.1:9000
Command-line Access: https://docs.minio.io/docs/minio-client-quickstart-guide
$ mc config host add myminio http://10.0.0.10:9000 USWUXHGYZQYFYFFIT3RE MOJRH0mkL1IPauahWITSVvyDrQbEEIwljvmxdq03
Object API (Amazon S3 compatible):
Go: https://docs.minio.io/docs/golang-client-quickstart-guide
Java: https://docs.minio.io/docs/java-client-quickstart-guide
Python: https://docs.minio.io/docs/python-client-quickstart-guide
JavaScript: https://docs.minio.io/docs/javascript-client-quickstart-guide
...
```
Please visit official zfs FreeBSD guide for more details [here](https://www.freebsd.org/doc/handbook/zfs-quickstart.html)
## 3. Test Minio Server using Minio Browser
Open a web browser and navigate to http://127.0.0.1:9000 to view your buckets on minio server.
![Screenshot](https://github.com/minio/minio/blob/master/docs/screenshots/minio-browser.jpg?raw=true)
## 4. Test Minio Server using `mc`
Install mc from [here](https://docs.minio.io/docs/minio-client-quickstart-guide). Use `mc ls` command to list all the buckets on your minio server.
```sh
$ mc config host add myminio http://10.0.0.10:9000 USWUXHGYZQYFYFFIT3RE MOJRH0mkL1IPauahWITSVvyDrQbEEIwljvmxdq03
$ mc mb myminio/mybucket
$ mc ls myminio/
[2015-08-05 08:13:22 IST] 0B andoria/
[2015-08-05 06:14:26 IST] 0B deflector/
[2015-08-05 08:13:11 IST] 0B ferenginar/
[2016-03-08 14:56:35 IST] 0B jarjarbing/
[2016-01-20 16:07:41 IST] 0B my.minio.io/
[2015-08-05 08:13:11 IST] 0B mybucket/
```
For more examples please navigate to [Minio Client Complete Guide](https://docs.minio.io/docs/minio-client-complete-guide).
## 5. Explore Further
- [Minio Erasure Code QuickStart Guide](https://docs.minio.io/docs/minio-erasure-code-quickstart-guide)
- [Minio Docker Quickstart Guide](https://docs.minio.io/docs/minio-docker-quickstart-guide)
- [Use `mc` with Minio Server](https://docs.minio.io/docs/minio-client-quickstart-guide)
@@ -196,6 +104,5 @@ For more examples please navigate to [Minio Client Complete Guide](https://docs.
- [Use `s3cmd` with Minio Server](https://docs.minio.io/docs/s3cmd-with-minio)
- [Use `minio-go` SDK with Minio Server](https://docs.minio.io/docs/golang-client-quickstart-guide)
## 6. Contribute to Minio Project
Please follow Minio [Contributor's Guide](https://github.com/minio/minio/blob/master/CONTRIBUTING.md)

View File

@@ -58,6 +58,18 @@ func isRequestSignStreamingV4(r *http.Request) bool {
return r.Header.Get("x-amz-content-sha256") == streamingContentSHA256 && r.Method == "PUT"
}
// Verify if request has AWS Signature Version '2'.
func isRequestSignatureV2(r *http.Request) bool {
return (!strings.HasPrefix(r.Header.Get("Authorization"), signV4Algorithm) &&
strings.HasPrefix(r.Header.Get("Authorization"), signV2Algorithm))
}
// Verify request has AWS PreSign Version '2'.
func isRequestPresignedSignatureV2(r *http.Request) bool {
_, ok := r.URL.Query()["AWSAccessKeyId"]
return ok
}
// Authorization type.
type authType int
@@ -66,15 +78,21 @@ const (
authTypeUnknown authType = iota
authTypeAnonymous
authTypePresigned
authTypePresignedV2
authTypePostPolicy
authTypeStreamingSigned
authTypeSigned
authTypeSignedV2
authTypeJWT
)
// Get request authentication type.
func getRequestAuthType(r *http.Request) authType {
if isRequestSignStreamingV4(r) {
if isRequestSignatureV2(r) {
return authTypeSignedV2
} else if isRequestPresignedSignatureV2(r) {
return authTypePresignedV2
} else if isRequestSignStreamingV4(r) {
return authTypeStreamingSigned
} else if isRequestSignatureV4(r) {
return authTypeSigned
@@ -104,8 +122,16 @@ func sumMD5(data []byte) []byte {
return hash.Sum(nil)
}
// Verify if request has valid AWS Signature Version '2'.
func isReqAuthenticatedV2(r *http.Request) (s3Error APIErrorCode) {
if isRequestSignatureV2(r) {
return doesSignV2Match(r)
}
return doesPresignV2SignatureMatch(r)
}
// Verify if request has valid AWS Signature Version '4'.
func isReqAuthenticated(r *http.Request) (s3Error APIErrorCode) {
func isReqAuthenticated(r *http.Request, region string) (s3Error APIErrorCode) {
if r == nil {
return ErrInternalError
}
@@ -121,7 +147,6 @@ func isReqAuthenticated(r *http.Request) (s3Error APIErrorCode) {
}
// Populate back the payload.
r.Body = ioutil.NopCloser(bytes.NewReader(payload))
validateRegion := true // Validate region.
var sha256sum string
// Skips calculating sha256 on the payload on server,
// if client requested for it.
@@ -131,9 +156,9 @@ func isReqAuthenticated(r *http.Request) (s3Error APIErrorCode) {
sha256sum = hex.EncodeToString(sum256(payload))
}
if isRequestSignatureV4(r) {
return doesSignatureMatch(sha256sum, r, validateRegion)
return doesSignatureMatch(sha256sum, r, region)
} else if isRequestPresignedSignatureV4(r) {
return doesPresignedSignatureMatch(sha256sum, r, validateRegion)
return doesPresignedSignatureMatch(sha256sum, r, region)
}
return ErrAccessDenied
}
@@ -145,13 +170,21 @@ func isReqAuthenticated(r *http.Request) (s3Error APIErrorCode) {
// request headers and body are used to calculate the signature validating
// the client signature present in request.
func checkAuth(r *http.Request) APIErrorCode {
aType := getRequestAuthType(r)
if aType != authTypePresigned && aType != authTypeSigned {
// For all unhandled auth types return error AccessDenied.
return ErrAccessDenied
}
return checkAuthWithRegion(r, serverConfig.GetRegion())
}
// checkAuthWithRegion - similar to checkAuth but takes a custom region.
func checkAuthWithRegion(r *http.Request, region string) APIErrorCode {
// Validates the request for both Presigned and Signed.
return isReqAuthenticated(r)
aType := getRequestAuthType(r)
switch aType {
case authTypeSignedV2, authTypePresignedV2: // Signature V2.
return isReqAuthenticatedV2(r)
case authTypeSigned, authTypePresigned: // Signature V4.
return isReqAuthenticated(r, region)
}
// For all unhandled auth types return error AccessDenied.
return ErrAccessDenied
}
// authHandler - handles all the incoming authorization headers and validates them if possible.
@@ -168,7 +201,9 @@ func setAuthHandler(h http.Handler) http.Handler {
var supportedS3AuthTypes = map[authType]struct{}{
authTypeAnonymous: {},
authTypePresigned: {},
authTypePresignedV2: {},
authTypeSigned: {},
authTypeSignedV2: {},
authTypePostPolicy: {},
authTypeStreamingSigned: {},
}
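For reference, a rough standalone sketch of the two Authorization header shapes the new `isRequestSignatureV2` check tells apart. The credential and signature values below are made-up placeholders, and the literal prefixes stand in for the `signV4Algorithm` / `signV2Algorithm` constants used in the code above.

```go
package main

import (
	"fmt"
	"strings"
)

func main() {
	// Hypothetical placeholder credentials and signatures.
	v2Auth := "AWS AKIAIOSFODNN7EXAMPLE:frJIUN8DYpKDtOLCwo//yllqDzg="
	v4Auth := "AWS4-HMAC-SHA256 Credential=AKIAIOSFODNN7EXAMPLE/20161024/us-east-1/s3/aws4_request, " +
		"SignedHeaders=host;x-amz-date, Signature=deadbeef"

	// Mirrors the prefix logic in isRequestSignatureV2: a V2 header starts
	// with "AWS" but not with the V4 algorithm string.
	isV2 := func(auth string) bool {
		return !strings.HasPrefix(auth, "AWS4-HMAC-SHA256") && strings.HasPrefix(auth, "AWS")
	}
	fmt.Println(isV2(v2Auth), isV2(v4Auth)) // true false
}
```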

View File

@@ -20,9 +20,104 @@ import (
"bytes"
"io"
"net/http"
"net/url"
"testing"
)
// Test get request auth type.
func TestGetRequestAuthType(t *testing.T) {
type testCase struct {
req *http.Request
authT authType
}
testCases := []testCase{
// Test case - 1
// Check for generic signature v4 header.
{
req: &http.Request{
URL: &url.URL{
Host: "localhost:9000",
Scheme: "http",
Path: "/",
},
Header: http.Header{
"Authorization": []string{"AWS4-HMAC-SHA256 <cred_string>"},
"X-Amz-Content-Sha256": []string{streamingContentSHA256},
},
Method: "PUT",
},
authT: authTypeStreamingSigned,
},
// Test case - 2
// Check for JWT header.
{
req: &http.Request{
URL: &url.URL{
Host: "localhost:9000",
Scheme: "http",
Path: "/",
},
Header: http.Header{
"Authorization": []string{"Bearer 12313123"},
},
},
authT: authTypeJWT,
},
// Test case - 3
// Empty authorization header.
{
req: &http.Request{
URL: &url.URL{
Host: "localhost:9000",
Scheme: "http",
Path: "/",
},
Header: http.Header{
"Authorization": []string{""},
},
},
authT: authTypeUnknown,
},
// Test case - 4
// Check for presigned.
{
req: &http.Request{
URL: &url.URL{
Host: "localhost:9000",
Scheme: "http",
Path: "/",
RawQuery: "X-Amz-Credential=EXAMPLEINVALIDEXAMPL%2Fs3%2F20160314%2Fus-east-1",
},
},
authT: authTypePresigned,
},
// Test case - 5
// Check for post policy.
{
req: &http.Request{
URL: &url.URL{
Host: "localhost:9000",
Scheme: "http",
Path: "/",
},
Header: http.Header{
"Content-Type": []string{"multipart/form-data"},
},
Method: "POST",
},
authT: authTypePostPolicy,
},
}
// Tests all request auth types.
for i, testc := range testCases {
authT := getRequestAuthType(testc.req)
if authT != testc.authT {
t.Errorf("Test %d: Expected %d, got %d", i+1, testc.authT, authT)
}
}
}
// Test all s3 supported auth types.
func TestS3SupportedAuthType(t *testing.T) {
type testCase struct {
@@ -56,19 +151,29 @@ func TestS3SupportedAuthType(t *testing.T) {
authT: authTypeStreamingSigned,
pass: true,
},
// Test 6 - JWT is not supported s3 type.
// Test 6 - supported s3 type with signature v2.
{
authT: authTypeSignedV2,
pass: true,
},
// Test 7 - supported s3 type with presign v2.
{
authT: authTypePresignedV2,
pass: true,
},
// Test 8 - JWT is not supported s3 type.
{
authT: authTypeJWT,
pass: false,
},
// Test 7 - unknown auth header is not supported s3 type.
// Test 9 - unknown auth header is not supported s3 type.
{
authT: authTypeUnknown,
pass: false,
},
// Test 8 - some new auth type is not supported s3 type.
// Test 10 - some new auth type is not supported s3 type.
{
authT: authType(7),
authT: authType(9),
pass: false,
},
}
@@ -115,6 +220,39 @@ func TestIsRequestUnsignedPayload(t *testing.T) {
}
}
func TestIsRequestPresignedSignatureV2(t *testing.T) {
testCases := []struct {
inputQueryKey string
inputQueryValue string
expectedResult bool
}{
// Test case - 1.
// Test case with no query parameter set.
{"", "", false},
// Test case - 2.
// Test case with query key "AWSAccessKeyId" set.
{"AWSAccessKeyId", "", true},
// Test case - 3.
{"X-Amz-Content-Sha256", "", false},
}
for i, testCase := range testCases {
// creating an input HTTP request.
// Only the query parameters are relevant for this particular test.
inputReq, err := http.NewRequest("GET", "http://example.com", nil)
if err != nil {
t.Fatalf("Error initializing input HTTP request: %v", err)
}
q := inputReq.URL.Query()
q.Add(testCase.inputQueryKey, testCase.inputQueryValue)
inputReq.URL.RawQuery = q.Encode()
actualResult := isRequestPresignedSignatureV2(inputReq)
if testCase.expectedResult != actualResult {
t.Errorf("Test %d: Expected the result to `%v`, but instead got `%v`", i+1, testCase.expectedResult, actualResult)
}
}
}
// TestIsRequestPresignedSignatureV4 - Test validates the logic for presign signature version v4 detection.
func TestIsRequestPresignedSignatureV4(t *testing.T) {
testCases := []struct {
@@ -199,7 +337,7 @@ func TestIsReqAuthenticated(t *testing.T) {
if testCase.s3Error == ErrBadDigest {
testCase.req.Header.Set("Content-Md5", "garbage")
}
if s3Error := isReqAuthenticated(testCase.req); s3Error != testCase.s3Error {
if s3Error := isReqAuthenticated(testCase.req, serverConfig.GetRegion()); s3Error != testCase.s3Error {
t.Fatalf("Unexpected s3error returned wanted %d, got %d", testCase.s3Error, s3Error)
}
}

View File

@@ -75,8 +75,14 @@ func (api objectAPIHandlers) ListObjectsV2Handler(w http.ResponseWriter, r *http
writeErrorResponse(w, r, s3Error, r.URL.Path)
return
}
case authTypePresignedV2, authTypeSignedV2:
// Signature V2 validation.
if s3Error := isReqAuthenticatedV2(r); s3Error != ErrNone {
writeErrorResponse(w, r, s3Error, r.URL.Path)
return
}
case authTypeSigned, authTypePresigned:
if s3Error := isReqAuthenticated(r); s3Error != ErrNone {
if s3Error := isReqAuthenticated(r, serverConfig.GetRegion()); s3Error != ErrNone {
writeErrorResponse(w, r, s3Error, r.URL.Path)
return
}
@@ -135,8 +141,14 @@ func (api objectAPIHandlers) ListObjectsV1Handler(w http.ResponseWriter, r *http
writeErrorResponse(w, r, s3Error, r.URL.Path)
return
}
case authTypePresignedV2, authTypeSignedV2:
// Signature V2 validation.
if s3Error := isReqAuthenticatedV2(r); s3Error != ErrNone {
writeErrorResponse(w, r, s3Error, r.URL.Path)
return
}
case authTypeSigned, authTypePresigned:
if s3Error := isReqAuthenticated(r); s3Error != ErrNone {
if s3Error := isReqAuthenticated(r, serverConfig.GetRegion()); s3Error != ErrNone {
writeErrorResponse(w, r, s3Error, r.URL.Path)
return
}

View File

@@ -75,8 +75,14 @@ func (api objectAPIHandlers) GetBucketLocationHandler(w http.ResponseWriter, r *
writeErrorResponse(w, r, s3Error, r.URL.Path)
return
}
case authTypePresignedV2, authTypeSignedV2:
// Signature V2 validation.
if s3Error := isReqAuthenticatedV2(r); s3Error != ErrNone {
writeErrorResponse(w, r, s3Error, r.URL.Path)
return
}
case authTypeSigned, authTypePresigned:
if s3Error := isReqAuthenticated(r); s3Error != ErrNone {
if s3Error := isReqAuthenticated(r, "us-east-1"); s3Error != ErrNone {
writeErrorResponse(w, r, s3Error, r.URL.Path)
return
}
@@ -124,8 +130,14 @@ func (api objectAPIHandlers) ListMultipartUploadsHandler(w http.ResponseWriter,
writeErrorResponse(w, r, s3Error, r.URL.Path)
return
}
case authTypePresignedV2, authTypeSignedV2:
// Signature V2 validation.
if s3Error := isReqAuthenticatedV2(r); s3Error != ErrNone {
writeErrorResponse(w, r, s3Error, r.URL.Path)
return
}
case authTypePresigned, authTypeSigned:
if s3Error := isReqAuthenticated(r); s3Error != ErrNone {
if s3Error := isReqAuthenticated(r, serverConfig.GetRegion()); s3Error != ErrNone {
writeErrorResponse(w, r, s3Error, r.URL.Path)
return
}
@@ -165,7 +177,8 @@ func (api objectAPIHandlers) ListMultipartUploadsHandler(w http.ResponseWriter,
// owned by the authenticated sender of the request.
func (api objectAPIHandlers) ListBucketsHandler(w http.ResponseWriter, r *http.Request) {
// List buckets does not support bucket policies, no need to enforce it.
if s3Error := checkAuth(r); s3Error != ErrNone {
// Proceed to validate signature. Validates the request for both Presigned and Signed.
if s3Error := checkAuthWithRegion(r, ""); s3Error != ErrNone {
writeErrorResponse(w, r, s3Error, r.URL.Path)
return
}
@@ -202,8 +215,14 @@ func (api objectAPIHandlers) DeleteMultipleObjectsHandler(w http.ResponseWriter,
writeErrorResponse(w, r, s3Error, r.URL.Path)
return
}
case authTypePresignedV2, authTypeSignedV2:
// Signature V2 validation.
if s3Error := isReqAuthenticatedV2(r); s3Error != ErrNone {
writeErrorResponse(w, r, s3Error, r.URL.Path)
return
}
case authTypePresigned, authTypeSigned:
if s3Error := isReqAuthenticated(r); s3Error != ErrNone {
if s3Error := isReqAuthenticated(r, serverConfig.GetRegion()); s3Error != ErrNone {
writeErrorResponse(w, r, s3Error, r.URL.Path)
return
}
@@ -312,7 +331,7 @@ func (api objectAPIHandlers) DeleteMultipleObjectsHandler(w http.ResponseWriter,
// This implementation of the PUT operation creates a new bucket for authenticated request
func (api objectAPIHandlers) PutBucketHandler(w http.ResponseWriter, r *http.Request) {
// PutBucket does not support policies, use checkAuth to validate signature.
if s3Error := checkAuth(r); s3Error != ErrNone {
if s3Error := checkAuthWithRegion(r, "us-east-1"); s3Error != ErrNone {
writeErrorResponse(w, r, s3Error, r.URL.Path)
return
}
@@ -444,8 +463,14 @@ func (api objectAPIHandlers) HeadBucketHandler(w http.ResponseWriter, r *http.Re
writeErrorResponse(w, r, s3Error, r.URL.Path)
return
}
case authTypePresignedV2, authTypeSignedV2:
// Signature V2 validation.
if s3Error := isReqAuthenticatedV2(r); s3Error != ErrNone {
writeErrorResponse(w, r, s3Error, r.URL.Path)
return
}
case authTypePresigned, authTypeSigned:
if s3Error := isReqAuthenticated(r); s3Error != ErrNone {
if s3Error := isReqAuthenticated(r, serverConfig.GetRegion()); s3Error != ErrNone {
writeErrorResponse(w, r, s3Error, r.URL.Path)
return
}

View File

@@ -134,7 +134,7 @@ func (api objectAPIHandlers) PutBucketPolicyHandler(w http.ResponseWriter, r *ht
writeErrorResponse(w, r, ErrAccessDenied, r.URL.Path)
return
case authTypePresigned, authTypeSigned:
if s3Error := isReqAuthenticated(r); s3Error != ErrNone {
if s3Error := isReqAuthenticated(r, serverConfig.GetRegion()); s3Error != ErrNone {
writeErrorResponse(w, r, s3Error, r.URL.Path)
return
}
@@ -203,21 +203,15 @@ func (api objectAPIHandlers) PutBucketPolicyHandler(w http.ResponseWriter, r *ht
// This implementation of the DELETE operation uses the policy
// subresource to add to remove a policy on a bucket.
func (api objectAPIHandlers) DeleteBucketPolicyHandler(w http.ResponseWriter, r *http.Request) {
// DeleteBucketPolicy does not support bucket policies, use checkAuth to validate signature.
if s3Error := checkAuth(r); s3Error != ErrNone {
writeErrorResponse(w, r, s3Error, r.URL.Path)
return
}
vars := mux.Vars(r)
bucket := vars["bucket"]
switch getRequestAuthType(r) {
default:
// For all unknown auth types return error.
writeErrorResponse(w, r, ErrAccessDenied, r.URL.Path)
return
case authTypePresigned, authTypeSigned:
if s3Error := isReqAuthenticated(r); s3Error != ErrNone {
writeErrorResponse(w, r, s3Error, r.URL.Path)
return
}
}
// Delete bucket access policy.
if err := removeBucketPolicy(bucket, api.ObjectAPI); err != nil {
errorIf(err, "Unable to remove bucket policy.")
@@ -244,21 +238,15 @@ func (api objectAPIHandlers) DeleteBucketPolicyHandler(w http.ResponseWriter, r
// This operation uses the policy
// subresource to return the policy of a specified bucket.
func (api objectAPIHandlers) GetBucketPolicyHandler(w http.ResponseWriter, r *http.Request) {
// GetBucketPolicy does not support bucket policies, use checkAuth to validate signature.
if s3Error := checkAuth(r); s3Error != ErrNone {
writeErrorResponse(w, r, s3Error, r.URL.Path)
return
}
vars := mux.Vars(r)
bucket := vars["bucket"]
switch getRequestAuthType(r) {
default:
// For all unknown auth types return error.
writeErrorResponse(w, r, ErrAccessDenied, r.URL.Path)
return
case authTypePresigned, authTypeSigned:
if s3Error := isReqAuthenticated(r); s3Error != ErrNone {
writeErrorResponse(w, r, s3Error, r.URL.Path)
return
}
}
// Read bucket access policy.
policy, err := readBucketPolicy(bucket, api.ObjectAPI)
if err != nil {

View File

@@ -21,6 +21,7 @@ import (
"errors"
"hash"
"io"
"sync"
"github.com/klauspost/reedsolomon"
"github.com/minio/blake2b-simd"
@@ -47,13 +48,23 @@ func newHash(algo string) hash.Hash {
}
}
var hashBufPool = sync.Pool{
New: func() interface{} {
b := make([]byte, readSizeV1)
return &b
},
}
// hashSum calculates the hash of the entire path and returns.
func hashSum(disk StorageAPI, volume, path string, writer hash.Hash) ([]byte, error) {
// Allocate staging buffer of 128KiB for copyBuffer.
buf := make([]byte, readSizeV1)
// Allocate staging buffer of 2MiB for copyBuffer.
bufp := hashBufPool.Get().(*[]byte)
// Reuse buffer.
defer hashBufPool.Put(bufp)
// Copy entire buffer to writer.
if err := copyBuffer(writer, disk, volume, path, buf); err != nil {
if err := copyBuffer(writer, disk, volume, path, *bufp); err != nil {
return nil, err
}
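The `sync.Pool` idiom introduced above (hashBufPool here, and the parts/get/put pools in later files) recurs throughout this change set; the pool stores a `*[]byte` rather than a `[]byte` so that Get and Put do not force a fresh allocation for the slice header. A condensed, standalone sketch of the pattern follows, with an illustrative pool name and buffer size rather than the exact ones from the code.

```go
package main

import (
	"fmt"
	"sync"
)

// bufPool hands out reusable 2MiB staging buffers, mirroring the
// buffer-pool pattern introduced in this change set.
var bufPool = sync.Pool{
	New: func() interface{} {
		b := make([]byte, 2*1024*1024)
		return &b // store a pointer to avoid an extra allocation on Put
	},
}

func process(data []byte) int {
	bufp := bufPool.Get().(*[]byte)
	defer bufPool.Put(bufp) // return the buffer for reuse
	buf := *bufp

	// Use buf as scratch space; here we just count the copied bytes.
	return copy(buf, data)
}

func main() {
	fmt.Println(process([]byte("hello")))
}
```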

View File

@@ -24,6 +24,7 @@ import (
"path"
"strconv"
"strings"
"sync"
"time"
"github.com/skyrings/skyring-common/tools/uuid"
@@ -289,6 +290,14 @@ func getFSAppendDataPath(uploadID string) string {
return path.Join(tmpMetaPrefix, uploadID+".data")
}
// Parts pool shared buffer.
var appendPartsBufPool = sync.Pool{
New: func() interface{} {
b := make([]byte, readSizeV1)
return &b
},
}
// Append parts to fsAppendDataFile.
func appendParts(disk StorageAPI, bucket, object, uploadID string) {
uploadIDPath := path.Join(mpartMetaPrefix, bucket, object, uploadID)
@@ -348,7 +357,10 @@ func appendParts(disk StorageAPI, bucket, object, uploadID string) {
partPath := path.Join(mpartMetaPrefix, bucket, object, uploadID, part.Name)
offset := int64(0)
totalLeft := part.Size
buf := make([]byte, readSizeV1)
// Get buffer from parts pool.
bufp := appendPartsBufPool.Get().(*[]byte)
buf := *bufp
for totalLeft > 0 {
curLeft := int64(readSizeV1)
if totalLeft < readSizeV1 {
@@ -370,12 +382,16 @@ func appendParts(disk StorageAPI, bucket, object, uploadID string) {
offset += n
totalLeft -= n
}
appendPartsBufPool.Put(bufp)
// All good, the part has been appended to the tmp file, rename it back.
if err = disk.RenameFile(minioMetaBucket, tmpDataPath, minioMetaBucket, fsAppendDataPath); err != nil {
errorIf(err, "Unable to rename %s to %s", tmpDataPath, fsAppendDataPath)
return
}
fsAppendMeta.AddObjectPart(part.Number, part.Name, part.ETag, part.Size)
if err = writeFSMetadata(disk, minioMetaBucket, fsAppendMetaPath, fsAppendMeta); err != nil {
errorIf(err, "Unable to write FS metadata %s", fsAppendMetaPath)
return
}
// If there are more parts that need to be appended to fsAppendDataFile
@@ -385,6 +401,14 @@ func appendParts(disk StorageAPI, bucket, object, uploadID string) {
}
}
// Parts pool shared buffer.
var partsBufPool = sync.Pool{
New: func() interface{} {
b := make([]byte, readSizeV1)
return &b
},
}
// PutObjectPart - reads incoming data until EOF for the part file on
// an ongoing multipart transaction. Internally incoming data is
// written to '.minio/tmp' location and safely renamed to
@@ -433,8 +457,11 @@ func (fs fsObjects) PutObjectPart(bucket, object, uploadID string, partID int, s
if size > 0 && bufSize > size {
bufSize = size
}
buf := make([]byte, int(bufSize))
bytesWritten, cErr := fsCreateFile(fs.storage, teeReader, buf, minioMetaBucket, tmpPartPath)
bufp := partsBufPool.Get().(*[]byte)
buf := *bufp
defer partsBufPool.Put(bufp)
bytesWritten, cErr := fsCreateFile(fs.storage, teeReader, buf[:bufSize], minioMetaBucket, tmpPartPath)
if cErr != nil {
fs.storage.DeleteFile(minioMetaBucket, tmpPartPath)
return "", toObjectErr(cErr, minioMetaBucket, tmpPartPath)
@@ -578,6 +605,14 @@ func (fs fsObjects) ListObjectParts(bucket, object, uploadID string, partNumberM
return fs.listObjectParts(bucket, object, uploadID, partNumberMarker, maxParts)
}
// Complete multipart pool shared buffer.
var completeBufPool = sync.Pool{
New: func() interface{} {
b := make([]byte, readSizeV1)
return &b
},
}
// CompleteMultipartUpload - completes an ongoing multipart
// transaction after receiving all the parts indicated by the client.
// Returns an md5sum calculated by concatenating all the individual
@@ -642,8 +677,9 @@ func (fs fsObjects) CompleteMultipartUpload(bucket string, object string, upload
} else {
tempObj := path.Join(tmpMetaPrefix, uploadID+"-"+"part.1")
// Allocate staging buffer.
var buf = make([]byte, readSizeV1)
bufp := completeBufPool.Get().(*[]byte)
buf := *bufp
defer completeBufPool.Put(bufp)
// Loop through all parts, validate them and then commit to disk.
for i, part := range parts {

View File

@@ -25,6 +25,7 @@ import (
"path"
"sort"
"strings"
"sync"
"github.com/minio/minio/pkg/disk"
"github.com/minio/minio/pkg/mimedb"
@@ -227,6 +228,13 @@ func (fs fsObjects) DeleteBucket(bucket string) error {
/// Object Operations
var getBufPool = sync.Pool{
New: func() interface{} {
b := make([]byte, readSizeV1)
return &b
},
}
// GetObject - get an object.
func (fs fsObjects) GetObject(bucket, object string, offset int64, length int64, writer io.Writer) (err error) {
// Verify if bucket is valid.
@@ -266,8 +274,11 @@ func (fs fsObjects) GetObject(bucket, object string, offset int64, length int64,
if length > 0 && bufSize > length {
bufSize = length
}
// Allocate a staging buffer.
buf := make([]byte, int(bufSize))
bufp := getBufPool.Get().(*[]byte)
buf := *bufp
defer getBufPool.Put(bufp)
for {
// Figure out the right size for the buffer.
curLeft := bufSize
@@ -356,7 +367,15 @@ func (fs fsObjects) GetObjectInfo(bucket, object string) (ObjectInfo, error) {
}, nil
}
// PutObject - create an object.
var putBufPool = sync.Pool{
New: func() interface{} {
b := make([]byte, readSizeV1)
return &b
},
}
// PutObject - saves an object atomically of size - size bytes.
// With size bytes of '-1' this function reads till EOF.
func (fs fsObjects) PutObject(bucket string, object string, size int64, data io.Reader, metadata map[string]string) (string, error) {
// Verify if bucket is valid.
if !IsValidBucketName(bucket) {
@@ -405,9 +424,12 @@ func (fs fsObjects) PutObject(bucket string, object string, size int64, data io.
if size > 0 && bufSize > size {
bufSize = size
}
buf := make([]byte, int(bufSize))
bufp := putBufPool.Get().(*[]byte)
buf := *bufp
defer putBufPool.Put(bufp)
teeReader := io.TeeReader(limitDataReader, md5Writer)
bytesWritten, err := fsCreateFile(fs.storage, teeReader, buf, minioMetaBucket, tempObj)
bytesWritten, err := fsCreateFile(fs.storage, teeReader, buf[:bufSize], minioMetaBucket, tempObj)
if err != nil {
fs.storage.DeleteFile(minioMetaBucket, tempObj)
return "", toObjectErr(err, bucket, object)

View File

@@ -25,7 +25,7 @@ import (
// Global constants for Minio.
const (
minGoVersion = ">= 1.6" // Minio requires at least Go v1.6
minGoVersion = ">= 1.7" // Minio requires at least Go v1.7
)
// minio configuration related constants.

View File

@@ -20,6 +20,8 @@ import (
"fmt"
"os"
"sort"
"strings"
"time"
"github.com/minio/cli"
"github.com/minio/mc/pkg/console"
@@ -165,14 +167,17 @@ func Main() {
// Do not print update messages, if quiet flag is set.
if !globalQuiet {
// Do not print any errors in release update function.
noError := true
updateMsg := getReleaseUpdate(minioUpdateStableURL, noError)
if updateMsg.Update {
console.Println(updateMsg)
if strings.HasPrefix(ReleaseTag, "RELEASE.") && c.Args().Get(0) != "update" {
updateMsg, _, err := getReleaseUpdate(minioUpdateStableURL, time.Second*1)
if err != nil {
// Ignore all network related errors.
return nil
}
if updateMsg.Update {
console.Println(updateMsg)
}
}
}
return nil
}

View File

@@ -27,7 +27,7 @@ const (
blockSizeV1 = 10 * 1024 * 1024 // 10MiB.
// Staging buffer read size for all internal operations version 1.
readSizeV1 = 128 * 1024 // 128KiB.
readSizeV1 = 2 * 1024 * 1024 // 2MiB.
// Buckets meta prefix.
bucketMetaPrefix = "buckets"

View File

@@ -95,8 +95,14 @@ func (api objectAPIHandlers) GetObjectHandler(w http.ResponseWriter, r *http.Req
writeErrorResponse(w, r, s3Error, r.URL.Path)
return
}
case authTypePresignedV2, authTypeSignedV2:
// Signature V2 validation.
if s3Error := isReqAuthenticatedV2(r); s3Error != ErrNone {
writeErrorResponse(w, r, s3Error, r.URL.Path)
return
}
case authTypePresigned, authTypeSigned:
if s3Error := isReqAuthenticated(r); s3Error != ErrNone {
if s3Error := isReqAuthenticated(r, serverConfig.GetRegion()); s3Error != ErrNone {
writeErrorResponse(w, r, s3Error, r.URL.Path)
return
}
@@ -201,8 +207,14 @@ func (api objectAPIHandlers) HeadObjectHandler(w http.ResponseWriter, r *http.Re
writeErrorResponse(w, r, s3Error, r.URL.Path)
return
}
case authTypePresignedV2, authTypeSignedV2:
// Signature V2 validation.
if s3Error := isReqAuthenticatedV2(r); s3Error != ErrNone {
writeErrorResponse(w, r, s3Error, r.URL.Path)
return
}
case authTypePresigned, authTypeSigned:
if s3Error := isReqAuthenticated(r); s3Error != ErrNone {
if s3Error := isReqAuthenticated(r, serverConfig.GetRegion()); s3Error != ErrNone {
writeErrorResponse(w, r, s3Error, r.URL.Path)
return
}
@@ -251,8 +263,14 @@ func (api objectAPIHandlers) CopyObjectHandler(w http.ResponseWriter, r *http.Re
writeErrorResponse(w, r, s3Error, r.URL.Path)
return
}
case authTypePresignedV2, authTypeSignedV2:
// Signature V2 validation.
if s3Error := isReqAuthenticatedV2(r); s3Error != ErrNone {
writeErrorResponse(w, r, s3Error, r.URL.Path)
return
}
case authTypePresigned, authTypeSigned:
if s3Error := isReqAuthenticated(r); s3Error != ErrNone {
if s3Error := isReqAuthenticated(r, serverConfig.GetRegion()); s3Error != ErrNone {
writeErrorResponse(w, r, s3Error, r.URL.Path)
return
}
@@ -440,9 +458,16 @@ func (api objectAPIHandlers) PutObjectHandler(w http.ResponseWriter, r *http.Req
return
}
md5Sum, err = api.ObjectAPI.PutObject(bucket, object, size, reader, metadata)
case authTypeSignedV2, authTypePresignedV2:
s3Error := isReqAuthenticatedV2(r)
if s3Error != ErrNone {
writeErrorResponse(w, r, s3Error, r.URL.Path)
return
}
md5Sum, err = api.ObjectAPI.PutObject(bucket, object, size, r.Body, metadata)
case authTypePresigned, authTypeSigned:
// Initialize signature verifier.
reader := newSignVerify(r)
reader := newSignVerify(r, serverConfig.GetRegion())
// Create object.
md5Sum, err = api.ObjectAPI.PutObject(bucket, object, size, reader, metadata)
}
@@ -496,8 +521,14 @@ func (api objectAPIHandlers) NewMultipartUploadHandler(w http.ResponseWriter, r
writeErrorResponse(w, r, s3Error, r.URL.Path)
return
}
case authTypePresignedV2, authTypeSignedV2:
// Signature V2 validation.
if s3Error := isReqAuthenticatedV2(r); s3Error != ErrNone {
writeErrorResponse(w, r, s3Error, r.URL.Path)
return
}
case authTypePresigned, authTypeSigned:
if s3Error := isReqAuthenticated(r); s3Error != ErrNone {
if s3Error := isReqAuthenticated(r, serverConfig.GetRegion()); s3Error != ErrNone {
writeErrorResponse(w, r, s3Error, r.URL.Path)
return
}
@@ -597,9 +628,16 @@ func (api objectAPIHandlers) PutObjectPartHandler(w http.ResponseWriter, r *http
return
}
partMD5, err = api.ObjectAPI.PutObjectPart(bucket, object, uploadID, partID, size, reader, incomingMD5)
case authTypeSignedV2, authTypePresignedV2:
s3Error := isReqAuthenticatedV2(r)
if s3Error != ErrNone {
writeErrorResponse(w, r, s3Error, r.URL.Path)
return
}
partMD5, err = api.ObjectAPI.PutObjectPart(bucket, object, uploadID, partID, size, r.Body, incomingMD5)
case authTypePresigned, authTypeSigned:
// Initialize signature verifier.
reader := newSignVerify(r)
reader := newSignVerify(r, serverConfig.GetRegion())
partMD5, err = api.ObjectAPI.PutObjectPart(bucket, object, uploadID, partID, size, reader, incomingMD5)
}
if err != nil {
@@ -631,8 +669,14 @@ func (api objectAPIHandlers) AbortMultipartUploadHandler(w http.ResponseWriter,
writeErrorResponse(w, r, s3Error, r.URL.Path)
return
}
case authTypePresignedV2, authTypeSignedV2:
// Signature V2 validation.
if s3Error := isReqAuthenticatedV2(r); s3Error != ErrNone {
writeErrorResponse(w, r, s3Error, r.URL.Path)
return
}
case authTypePresigned, authTypeSigned:
if s3Error := isReqAuthenticated(r); s3Error != ErrNone {
if s3Error := isReqAuthenticated(r, serverConfig.GetRegion()); s3Error != ErrNone {
writeErrorResponse(w, r, s3Error, r.URL.Path)
return
}
@@ -664,8 +708,14 @@ func (api objectAPIHandlers) ListObjectPartsHandler(w http.ResponseWriter, r *ht
writeErrorResponse(w, r, s3Error, r.URL.Path)
return
}
case authTypePresignedV2, authTypeSignedV2:
// Signature V2 validation.
if s3Error := isReqAuthenticatedV2(r); s3Error != ErrNone {
writeErrorResponse(w, r, s3Error, r.URL.Path)
return
}
case authTypePresigned, authTypeSigned:
if s3Error := isReqAuthenticated(r); s3Error != ErrNone {
if s3Error := isReqAuthenticated(r, serverConfig.GetRegion()); s3Error != ErrNone {
writeErrorResponse(w, r, s3Error, r.URL.Path)
return
}
@@ -715,8 +765,14 @@ func (api objectAPIHandlers) CompleteMultipartUploadHandler(w http.ResponseWrite
writeErrorResponse(w, r, s3Error, r.URL.Path)
return
}
case authTypePresignedV2, authTypeSignedV2:
// Signature V2 validation.
if s3Error := isReqAuthenticatedV2(r); s3Error != ErrNone {
writeErrorResponse(w, r, s3Error, r.URL.Path)
return
}
case authTypePresigned, authTypeSigned:
if s3Error := isReqAuthenticated(r); s3Error != ErrNone {
if s3Error := isReqAuthenticated(r, serverConfig.GetRegion()); s3Error != ErrNone {
writeErrorResponse(w, r, s3Error, r.URL.Path)
return
}
@@ -836,8 +892,14 @@ func (api objectAPIHandlers) DeleteObjectHandler(w http.ResponseWriter, r *http.
writeErrorResponse(w, r, s3Error, r.URL.Path)
return
}
case authTypePresignedV2, authTypeSignedV2:
// Signature V2 validation.
if s3Error := isReqAuthenticatedV2(r); s3Error != ErrNone {
writeErrorResponse(w, r, s3Error, r.URL.Path)
return
}
case authTypeSigned, authTypePresigned:
if s3Error := isReqAuthenticated(r); s3Error != ErrNone {
if s3Error := isReqAuthenticated(r, serverConfig.GetRegion()); s3Error != ErrNone {
writeErrorResponse(w, r, s3Error, r.URL.Path)
return
}

View File

@@ -75,11 +75,8 @@ func IsValidBucketName(bucket string) bool {
// Rejects strings with following characters.
//
// - Backslash ("\")
// - Caret ("^")
// - Grave accent / back tick ("`")
// - Vertical bar / pipe ("|")
//
// Minio does not support object names with trailing "/".
// Additionally minio does not support object names with trailing "/".
func IsValidObjectName(object string) bool {
if len(object) == 0 {
return false
@@ -103,7 +100,7 @@ func IsValidObjectPrefix(object string) bool {
return false
}
// Reject unsupported characters in object name.
if strings.ContainsAny(object, "`^|\\\"") {
if strings.ContainsAny(object, `\`) {
return false
}
return true

View File

@@ -23,6 +23,7 @@ import (
"path"
"runtime"
"strings"
"sync"
"syscall"
"unsafe"
)
@@ -30,9 +31,14 @@ import (
const (
// readDirentBufSize for syscall.ReadDirent() to hold multiple
// directory entries in one buffer. golang source uses 4096 as
// buffer size whereas we want 25 times larger to save lots of
// entries to avoid multiple syscall.ReadDirent() call.
readDirentBufSize = 4096 * 25
// buffer size whereas we use 256KiB to hold more entries in a single
// operation and avoid repeated syscall.ReadDirent() calls.
//
// We would ideally want 1MiB, but we stick to 256KiB for now due to
// a bug in OS X 10.12 where large buffers result in
// syscall.ReadDirent() returning garbage values.
// - https://github.com/minio/minio/issues/3075
readDirentBufSize = 4096 * 64 // 256KiB.
)
// actual length of the byte array from the c - world.
@@ -107,9 +113,19 @@ func parseDirents(dirPath string, buf []byte) (entries []string, err error) {
return entries, nil
}
var readDirBufPool = sync.Pool{
New: func() interface{} {
b := make([]byte, readDirentBufSize)
return &b
},
}
// Return all the entries at the directory dirPath.
func readDir(dirPath string) (entries []string, err error) {
buf := make([]byte, readDirentBufSize)
bufp := readDirBufPool.Get().(*[]byte)
buf := *bufp
defer readDirBufPool.Put(bufp)
d, err := os.Open(dirPath)
if err != nil {
// File is really not found.

View File

@@ -26,6 +26,7 @@ import (
"path/filepath"
"runtime"
"strings"
"sync"
"sync/atomic"
"syscall"
@@ -42,6 +43,7 @@ type posix struct {
ioErrCount int32 // ref: https://golang.org/pkg/sync/atomic/#pkg-note-BUG
diskPath string
minFreeDisk int64
pool sync.Pool
}
var errFaultyDisk = errors.New("Faulty disk")
@@ -70,16 +72,22 @@ func checkPathLength(pathName string) error {
func isDirEmpty(dirname string) bool {
f, err := os.Open(dirname)
if err != nil {
if os.IsNotExist(err) {
return false
}
errorIf(err, "Unable to access directory.")
return false
}
defer f.Close()
// List one entry.
_, err = f.Readdirnames(1)
if err == io.EOF {
// Returns true if we have reached EOF, directory is indeed empty.
return true
}
if err != nil {
if err == io.EOF {
// Returns true if we have reached EOF, directory is indeed empty.
return true
if os.IsNotExist(err) {
return false
}
errorIf(err, "Unable to list directory.")
return false
@@ -102,6 +110,13 @@ func newPosix(diskPath string) (StorageAPI, error) {
fs := &posix{
diskPath: diskPath,
minFreeDisk: fsMinSpacePercent, // Minimum 5% disk should be free.
// 1MiB buffer pool for posix internal operations.
pool: sync.Pool{
New: func() interface{} {
b := make([]byte, 1*1024*1024)
return &b
},
},
}
st, err := os.Stat(preparePath(diskPath))
if err != nil {
@@ -592,8 +607,13 @@ func (s *posix) AppendFile(volume, path string, buf []byte) (err error) {
// Close upon return.
defer w.Close()
// Return io.Copy
_, err = io.Copy(w, bytes.NewReader(buf))
bufp := s.pool.Get().(*[]byte)
// Reuse buffer.
defer s.pool.Put(bufp)
// Return io.CopyBuffer
_, err = io.CopyBuffer(w, bytes.NewReader(buf), *bufp)
return err
}
@@ -680,6 +700,11 @@ func deleteFile(basePath, deletePath string) error {
}
// Attempt to remove path.
if err := os.Remove(preparePath(deletePath)); err != nil {
if os.IsNotExist(err) {
return errFileNotFound
} else if os.IsPermission(err) {
return errFileAccessDenied
}
return err
}
// Recursively go down the next path and delete again.

View File

@@ -679,8 +679,8 @@ func TestPosixListDir(t *testing.T) {
t.Errorf("Unable to initialize posix, %s", err)
}
if err = posixStorage.DeleteFile("bin", "yes"); !os.IsPermission(err) {
t.Errorf("expected: Permission error, got: %s", err)
if err = posixStorage.DeleteFile("bin", "yes"); err != errFileAccessDenied {
t.Errorf("expected: %s error, got: %s", errFileAccessDenied, err)
}
}
@@ -793,8 +793,8 @@ func TestDeleteFile(t *testing.T) {
t.Errorf("Unable to initialize posix, %s", err)
}
if err = posixStorage.DeleteFile("bin", "yes"); !os.IsPermission(err) {
t.Errorf("expected: Permission error, got: %s", err)
if err = posixStorage.DeleteFile("bin", "yes"); err != errFileAccessDenied {
t.Errorf("expected: %s error, got: %s", errFileAccessDenied, err)
}
}

cmd/signature-v2.go (new file, 326 lines)
View File

@@ -0,0 +1,326 @@
/*
* Minio Cloud Storage, (C) 2016 Minio, Inc.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package cmd
import (
"crypto/hmac"
"crypto/sha1"
"encoding/base64"
"fmt"
"net/http"
"sort"
"strconv"
"strings"
"time"
)
// Signature and API related constants.
const (
signV2Algorithm = "AWS"
)
// The AWS S3 Signature V2 calculation rule is given here:
// http://docs.aws.amazon.com/AmazonS3/latest/dev/RESTAuthentication.html#RESTAuthenticationStringToSign
// Whitelist resource list that will be used in query string for signature-V2 calculation.
var resourceList = []string{
"acl",
"delete",
"lifecycle",
"location",
"logging",
"notification",
"partNumber",
"policy",
"requestPayment",
"torrent",
"uploadId",
"uploads",
"versionId",
"versioning",
"versions",
"website",
}
// TODO add post policy signature.
// doesPresignV2SignatureMatch - Verify query headers with presigned signature
// - http://docs.aws.amazon.com/AmazonS3/latest/dev/RESTAuthentication.html#RESTAuthenticationQueryStringAuth
// returns ErrNone if matches. S3 errors otherwise.
func doesPresignV2SignatureMatch(r *http.Request) APIErrorCode {
// Access credentials.
cred := serverConfig.GetCredential()
// url.RawPath will be valid if path has any encoded characters, if not it will
// be empty - in which case we need to consider url.Path (bug in net/http?)
encodedResource := r.URL.RawPath
encodedQuery := r.URL.RawQuery
if encodedResource == "" {
splits := strings.Split(r.URL.Path, "?")
if len(splits) > 0 {
encodedResource = splits[0]
}
}
queries := strings.Split(encodedQuery, "&")
var filteredQueries []string
var gotSignature string
var expires string
var accessKey string
for _, query := range queries {
keyval := strings.Split(query, "=")
switch keyval[0] {
case "AWSAccessKeyId":
accessKey = keyval[1]
case "Signature":
gotSignature = keyval[1]
case "Expires":
expires = keyval[1]
default:
filteredQueries = append(filteredQueries, query)
}
}
if accessKey == "" {
return ErrInvalidQueryParams
}
// Validate if access key id same.
if accessKey != cred.AccessKeyID {
return ErrInvalidAccessKeyID
}
// Make sure the request has not expired.
expiresInt, err := strconv.ParseInt(expires, 10, 64)
if err != nil {
return ErrMalformedExpires
}
if expiresInt < time.Now().UTC().Unix() {
return ErrExpiredPresignRequest
}
expectedSignature := preSignatureV2(r.Method, encodedResource, strings.Join(filteredQueries, "&"), r.Header, expires)
if gotSignature != getURLEncodedName(expectedSignature) {
return ErrSignatureDoesNotMatch
}
return ErrNone
}
// Authorization = "AWS" + " " + AWSAccessKeyId + ":" + Signature;
// Signature = Base64( HMAC-SHA1( YourSecretAccessKeyID, UTF-8-Encoding-Of( StringToSign ) ) );
//
// StringToSign = HTTP-Verb + "\n" +
// Content-Md5 + "\n" +
// Content-Type + "\n" +
// Date + "\n" +
// CanonicalizedProtocolHeaders +
// CanonicalizedResource;
//
// CanonicalizedResource = [ "/" + Bucket ] +
// <HTTP-Request-URI, from the protocol name up to the query string> +
// [ subresource, if present. For example "?acl", "?location", "?logging", or "?torrent"];
//
// CanonicalizedProtocolHeaders = <described below>
// doesSignV2Match - Verify authorization header with calculated header in accordance with
// - http://docs.aws.amazon.com/AmazonS3/latest/dev/auth-request-sig-v2.html
// returns ErrNone if the signature matches, an S3 error code otherwise.
func validateV2AuthHeader(v2Auth string) APIErrorCode {
if v2Auth == "" {
return ErrAuthHeaderEmpty
}
// Verify if the header algorithm is supported or not.
if !strings.HasPrefix(v2Auth, signV2Algorithm) {
return ErrSignatureVersionNotSupported
}
// below is V2 Signed Auth header format, splitting on `space` (after the `AWS` string).
// Authorization = "AWS" + " " + AWSAccessKeyId + ":" + Signature
authFields := strings.Split(v2Auth, " ")
if len(authFields) != 2 {
return ErrMissingFields
}
// Then we split on ":", which separates the `AWSAccessKeyId` and `Signature` strings.
keySignFields := strings.Split(strings.TrimSpace(authFields[1]), ":")
if len(keySignFields) != 2 {
return ErrMissingFields
}
// Access credentials.
cred := serverConfig.GetCredential()
if keySignFields[0] != cred.AccessKeyID {
return ErrInvalidAccessKeyID
}
return ErrNone
}
func doesSignV2Match(r *http.Request) APIErrorCode {
v2Auth := r.Header.Get("Authorization")
if apiError := validateV2AuthHeader(v2Auth); apiError != ErrNone {
return apiError
}
// url.RawPath will be valid if path has any encoded characters, if not it will
// be empty - in which case we need to consider url.Path (bug in net/http?)
encodedResource := r.URL.RawPath
encodedQuery := r.URL.RawQuery
if encodedResource == "" {
splits := strings.Split(r.URL.Path, "?")
if len(splits) > 0 {
encodedResource = splits[0]
}
}
expectedAuth := signatureV2(r.Method, encodedResource, encodedQuery, r.Header)
if v2Auth != expectedAuth {
return ErrSignatureDoesNotMatch
}
return ErrNone
}
// Return signature-v2 for the presigned request.
func preSignatureV2(method string, encodedResource string, encodedQuery string, headers http.Header, expires string) string {
cred := serverConfig.GetCredential()
stringToSign := presignV2STS(method, encodedResource, encodedQuery, headers, expires)
hm := hmac.New(sha1.New, []byte(cred.SecretAccessKey))
hm.Write([]byte(stringToSign))
signature := base64.StdEncoding.EncodeToString(hm.Sum(nil))
return signature
}
// Return signature-v2 authorization header.
func signatureV2(method string, encodedResource string, encodedQuery string, headers http.Header) string {
cred := serverConfig.GetCredential()
stringToSign := signV2STS(method, encodedResource, encodedQuery, headers)
hm := hmac.New(sha1.New, []byte(cred.SecretAccessKey))
hm.Write([]byte(stringToSign))
signature := base64.StdEncoding.EncodeToString(hm.Sum(nil))
return fmt.Sprintf("%s %s:%s", signV2Algorithm, cred.AccessKeyID, signature)
}
// Return canonical headers.
func canonicalizedAmzHeadersV2(headers http.Header) string {
var keys []string
keyval := make(map[string]string)
for key := range headers {
lkey := strings.ToLower(key)
if !strings.HasPrefix(lkey, "x-amz-") {
continue
}
keys = append(keys, lkey)
keyval[lkey] = strings.Join(headers[key], ",")
}
sort.Strings(keys)
var canonicalHeaders []string
for _, key := range keys {
canonicalHeaders = append(canonicalHeaders, key+":"+keyval[key])
}
return strings.Join(canonicalHeaders, "\n")
}
// Return canonical resource string.
func canonicalizedResourceV2(encodedPath string, encodedQuery string) string {
queries := strings.Split(encodedQuery, "&")
keyval := make(map[string]string)
for _, query := range queries {
key := query
val := ""
index := strings.Index(query, "=")
if index != -1 {
key = query[:index]
val = query[index+1:]
}
keyval[key] = val
}
var canonicalQueries []string
for _, key := range resourceList {
val, ok := keyval[key]
if !ok {
continue
}
if val == "" {
canonicalQueries = append(canonicalQueries, key)
continue
}
canonicalQueries = append(canonicalQueries, key+"="+val)
}
if len(canonicalQueries) == 0 {
return encodedPath
}
// the queries will be already sorted as resourceList is sorted.
return encodedPath + "?" + strings.Join(canonicalQueries, "&")
}
// Return string to sign for authz header calculation.
func signV2STS(method string, encodedResource string, encodedQuery string, headers http.Header) string {
canonicalHeaders := canonicalizedAmzHeadersV2(headers)
if len(canonicalHeaders) > 0 {
canonicalHeaders += "\n"
}
// From the Amazon docs:
//
// StringToSign = HTTP-Verb + "\n" +
// Content-Md5 + "\n" +
// Content-Type + "\n" +
// Date + "\n" +
// CanonicalizedProtocolHeaders +
// CanonicalizedResource;
stringToSign := strings.Join([]string{
method,
headers.Get("Content-MD5"),
headers.Get("Content-Type"),
headers.Get("Date"),
canonicalHeaders,
}, "\n") + canonicalizedResourceV2(encodedResource, encodedQuery)
return stringToSign
}
// Return string to sign for pre-sign signature calculation.
func presignV2STS(method string, encodedResource string, encodedQuery string, headers http.Header, expires string) string {
canonicalHeaders := canonicalizedAmzHeadersV2(headers)
if len(canonicalHeaders) > 0 {
canonicalHeaders += "\n"
}
// From the Amazon docs:
//
// StringToSign = HTTP-Verb + "\n" +
// Content-Md5 + "\n" +
// Content-Type + "\n" +
// Expires + "\n" +
// CanonicalizedProtocolHeaders +
// CanonicalizedResource;
stringToSign := strings.Join([]string{
method,
headers.Get("Content-MD5"),
headers.Get("Content-Type"),
expires,
canonicalHeaders,
}, "\n") + canonicalizedResourceV2(encodedResource, encodedQuery)
return stringToSign
}
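For context, a client produces the header that `doesSignV2Match` verifies by assembling the string-to-sign documented in `signV2STS` above, signing it with HMAC-SHA1 under the secret key, and Base64-encoding the result. A minimal, standalone sketch follows; the credentials, date, and resource path are made-up placeholders.

```go
package main

import (
	"crypto/hmac"
	"crypto/sha1"
	"encoding/base64"
	"fmt"
	"strings"
)

func main() {
	// Placeholder credentials; a real client would use its configured keys.
	accessKey := "AKIAIOSFODNN7EXAMPLE"
	secretKey := "wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY"

	// StringToSign = HTTP-Verb + "\n" + Content-Md5 + "\n" + Content-Type + "\n" +
	//                Date + "\n" + CanonicalizedAmzHeaders + CanonicalizedResource
	stringToSign := strings.Join([]string{
		"GET", // HTTP verb
		"",    // Content-MD5 (empty for a GET)
		"",    // Content-Type
		"Mon, 24 Oct 2016 20:34:01 +0000", // Date header value
		"",                                // no x-amz-* headers in this example
	}, "\n") + "/mybucket/myobject"

	hm := hmac.New(sha1.New, []byte(secretKey))
	hm.Write([]byte(stringToSign))
	signature := base64.StdEncoding.EncodeToString(hm.Sum(nil))

	// The resulting header has the same shape signatureV2 reconstructs server-side.
	fmt.Printf("Authorization: AWS %s:%s\n", accessKey, signature)
}
```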

cmd/signature-v2_test.go (new file, 182 lines)
View File

@@ -0,0 +1,182 @@
package cmd
import (
"fmt"
"net/http"
"net/url"
"sort"
"testing"
"time"
)
// Tests for 'func TestResourceListSorting(t *testing.T)'.
func TestResourceListSorting(t *testing.T) {
sortedResourceList := make([]string, len(resourceList))
copy(sortedResourceList, resourceList)
sort.Strings(sortedResourceList)
for i := 0; i < len(resourceList); i++ {
if resourceList[i] != sortedResourceList[i] {
t.Errorf("Expected resourceList[%d] = \"%s\", resourceList is not correctly sorted.", i, sortedResourceList[i])
break
}
}
}
func TestDoesPresignedV2SignatureMatch(t *testing.T) {
root, err := newTestConfig("us-east-1")
if err != nil {
t.Fatal("Unable to initialize test config.")
}
defer removeAll(root)
now := time.Now().UTC()
testCases := []struct {
queryParams map[string]string
headers map[string]string
expected APIErrorCode
}{
// (0) Should error without a set URL query.
{
expected: ErrInvalidQueryParams,
},
// (1) Should error on an invalid access key.
{
queryParams: map[string]string{
"Expires": "60",
"Signature": "badsignature",
"AWSAccessKeyId": "Z7IXGOO6BZ0REAN1Q26I",
},
expected: ErrInvalidAccessKeyID,
},
// (2) Should error with malformed expires.
{
queryParams: map[string]string{
"Expires": "60s",
"Signature": "badsignature",
"AWSAccessKeyId": serverConfig.GetCredential().AccessKeyID,
},
expected: ErrMalformedExpires,
},
// (3) Should give an expired request if it has expired.
{
queryParams: map[string]string{
"Expires": "60",
"Signature": "badsignature",
"AWSAccessKeyId": serverConfig.GetCredential().AccessKeyID,
},
expected: ErrExpiredPresignRequest,
},
// (4) Should error when the signature does not match.
{
queryParams: map[string]string{
"Expires": fmt.Sprintf("%d", now.Unix()+60),
"Signature": "badsignature",
"AWSAccessKeyId": serverConfig.GetCredential().AccessKeyID,
},
expected: ErrSignatureDoesNotMatch,
},
}
// Run each test case individually.
for i, testCase := range testCases {
// Turn the map[string]string into map[string][]string, because Go.
query := url.Values{}
for key, value := range testCase.queryParams {
query.Set(key, value)
}
// Create a request to use.
req, e := http.NewRequest(http.MethodGet, "http://host/a/b?"+query.Encode(), nil)
if e != nil {
t.Errorf("(%d) failed to create http.Request, got %v", i, e)
}
// Do the same for the headers.
for key, value := range testCase.headers {
req.Header.Set(key, value)
}
// Check if it matches!
err := doesPresignV2SignatureMatch(req)
if err != testCase.expected {
t.Errorf("(%d) expected to get %s, instead got %s", i, niceError(testCase.expected), niceError(err))
}
}
}
// TestValidateV2AuthHeader - Tests validate the logic of V2 Authorization header validator.
func TestValidateV2AuthHeader(t *testing.T) {
// Initialize server config.
if err := initConfig(); err != nil {
t.Fatal(err)
}
// Save config.
if err := serverConfig.Save(); err != nil {
t.Fatal(err)
}
accessID := serverConfig.GetCredential().AccessKeyID
testCases := []struct {
authString string
expectedError APIErrorCode
}{
// Test case - 1.
// Case with empty V2AuthString.
{
authString: "",
expectedError: ErrAuthHeaderEmpty,
},
// Test case - 2.
// Test case with `signV2Algorithm` ("AWS") not being the prefix.
{
authString: "NoV2Prefix",
expectedError: ErrSignatureVersionNotSupported,
},
// Test case - 3.
// Test case with missing parts in the Auth string.
// Below is the correct format of the V2 Authorization header.
// Authorization = "AWS" + " " + AWSAccessKeyId + ":" + Signature
{
authString: signV2Algorithm,
expectedError: ErrMissingFields,
},
// Test case - 4.
// Test case with signature part missing.
{
authString: fmt.Sprintf("%s %s", signV2Algorithm, accessID),
expectedError: ErrMissingFields,
},
// Test case - 5.
// Test case with wrong accessID.
{
authString: fmt.Sprintf("%s %s:%s", signV2Algorithm, "InvalidAccessID", "signature"),
expectedError: ErrInvalidAccessKeyID,
},
// Test case - 6.
// Case with right accessID and format.
{
authString: fmt.Sprintf("%s %s:%s", signV2Algorithm, accessID, "signature"),
expectedError: ErrNone,
},
}
for i, testCase := range testCases {
t.Run(fmt.Sprintf("Case %d AuthStr \"%s\".", i+1, testCase.authString), func(t *testing.T) {
actualErrCode := validateV2AuthHeader(testCase.authString)
if testCase.expectedError != actualErrCode {
t.Errorf("Expected the error code to be %v, got %v.", testCase.expectedError, actualErrCode)
}
})
}
}
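
The header-based variant parsed by validateV2AuthHeader follows the format quoted in test case 3. A hedged client-side sketch of producing that header; the access key and signature are placeholders:

```go
package main

import (
	"fmt"
	"net/http"
)

func main() {
	req, err := http.NewRequest(http.MethodGet, "http://host/testbucket/testobject", nil)
	if err != nil {
		panic(err)
	}

	// Placeholder access key and signature; the signature itself is
	// base64(HMAC-SHA1(secretKey, stringToSign)) per the V2 spec.
	accessKey := "Z7IXGOO6BZ0REAN1Q26I"
	signature := "placeholder-signature"
	req.Header.Set("Authorization", fmt.Sprintf("AWS %s:%s", accessKey, signature))

	fmt.Println(req.Header.Get("Authorization"))
}
```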

View File

@@ -147,7 +147,7 @@ func getSignature(signingKey []byte, stringToSign string) string {
// doesPolicySignatureMatch - Verify query headers with post policy
// - http://docs.aws.amazon.com/AmazonS3/latest/API/sigv4-HTTPPOSTConstructPolicy.html
// returns true if matches, false otherwise. if error is not nil then it is always false
// returns ErrNone if the signature matches.
func doesPolicySignatureMatch(formValues map[string]string) APIErrorCode {
// Access credentials.
cred := serverConfig.GetCredential()
@@ -193,14 +193,11 @@ func doesPolicySignatureMatch(formValues map[string]string) APIErrorCode {
// doesPresignedSignatureMatch - Verify query headers with presigned signature
// - http://docs.aws.amazon.com/AmazonS3/latest/API/sigv4-query-string-auth.html
// returns true if matches, false otherwise. if error is not nil then it is always false
func doesPresignedSignatureMatch(hashedPayload string, r *http.Request, validateRegion bool) APIErrorCode {
// returns ErrNone if the signature matches.
func doesPresignedSignatureMatch(hashedPayload string, r *http.Request, region string) APIErrorCode {
// Access credentials.
cred := serverConfig.GetCredential()
// Server region.
region := serverConfig.GetRegion()
// Copy request
req := *r
@@ -223,15 +220,13 @@ func doesPresignedSignatureMatch(hashedPayload string, r *http.Request, validate
// Verify if region is valid.
sRegion := pSignValues.Credential.scope.region
// Should validate region, only if region is set. Some operations
// do not need region validated for example GetBucketLocation.
if validateRegion {
if !isValidRegion(sRegion, region) {
return ErrInvalidRegion
}
} else {
// Should validate region, only if region is set.
if region == "" {
region = sRegion
}
if !isValidRegion(sRegion, region) {
return ErrInvalidRegion
}
// Extract all the signed headers along with its values.
extractedSignedHeaders, errCode := extractSignedHeaders(pSignValues.SignedHeaders, req.Header)
@@ -321,14 +316,11 @@ func doesPresignedSignatureMatch(hashedPayload string, r *http.Request, validate
// doesSignatureMatch - Verify authorization header with calculated header in accordance with
// - http://docs.aws.amazon.com/AmazonS3/latest/API/sig-v4-authenticating-requests.html
// returns true if matches, false otherwise. if error is not nil then it is always false
func doesSignatureMatch(hashedPayload string, r *http.Request, validateRegion bool) APIErrorCode {
// returns ErrNone if signature matches.
func doesSignatureMatch(hashedPayload string, r *http.Request, region string) APIErrorCode {
// Access credentials.
cred := serverConfig.GetCredential()
// Server region.
region := serverConfig.GetRegion()
// Copy request.
req := *r
@@ -354,7 +346,7 @@ func doesSignatureMatch(hashedPayload string, r *http.Request, validateRegion bo
for _, h := range signV4Values.SignedHeaders {
if h == "content-length" {
header = cloneHeader(req.Header)
header.Add("content-length", strconv.FormatInt(r.ContentLength, 10))
header.Set("content-length", strconv.FormatInt(r.ContentLength, 10))
break
}
}
@@ -372,14 +364,17 @@ func doesSignatureMatch(hashedPayload string, r *http.Request, validateRegion bo
// Verify if region is valid.
sRegion := signV4Values.Credential.scope.region
// Should validate region, only if region is set. Some operations
// do not need region validated for example GetBucketLocation.
if validateRegion {
if !isValidRegion(sRegion, region) {
return ErrInvalidRegion
}
// Region is set to be empty, we use whatever was sent by the
// request and proceed further. This is a work-around to address
// an important problem for ListBuckets() getting signed with
// different regions.
if region == "" {
region = sRegion
}
// Should validate region, only if region is set.
if !isValidRegion(sRegion, region) {
return ErrInvalidRegion
}
region = sRegion
// Extract date, if not present throw error.
var date string

View File

@@ -106,15 +106,15 @@ func TestDoesPresignedSignatureMatch(t *testing.T) {
credentialTemplate := "%s/%s/%s/s3/aws4_request"
testCases := []struct {
queryParams map[string]string
headers map[string]string
verifyRegion bool
expected APIErrorCode
queryParams map[string]string
headers map[string]string
region string
expected APIErrorCode
}{
// (0) Should error without a set URL query.
{
verifyRegion: false,
expected: ErrInvalidQueryParams,
region: "us-east-1",
expected: ErrInvalidQueryParams,
},
// (1) Should error on an invalid access key.
{
@@ -126,8 +126,8 @@ func TestDoesPresignedSignatureMatch(t *testing.T) {
"X-Amz-SignedHeaders": "host;x-amz-content-sha256;x-amz-date",
"X-Amz-Credential": fmt.Sprintf(credentialTemplate, "Z7IXGOO6BZ0REAN1Q26I", now.Format(yyyymmdd), "us-west-1"),
},
verifyRegion: false,
expected: ErrInvalidAccessKeyID,
region: "us-west-1",
expected: ErrInvalidAccessKeyID,
},
// (2) Should error when the payload sha256 doesn't match.
{
@@ -140,8 +140,8 @@ func TestDoesPresignedSignatureMatch(t *testing.T) {
"X-Amz-Credential": fmt.Sprintf(credentialTemplate, serverConfig.GetCredential().AccessKeyID, now.Format(yyyymmdd), "us-west-1"),
"X-Amz-Content-Sha256": "ThisIsNotThePayloadHash",
},
verifyRegion: false,
expected: ErrContentSHA256Mismatch,
region: "us-west-1",
expected: ErrContentSHA256Mismatch,
},
// (3) Should fail with an invalid region.
{
@@ -154,8 +154,8 @@ func TestDoesPresignedSignatureMatch(t *testing.T) {
"X-Amz-Credential": fmt.Sprintf(credentialTemplate, serverConfig.GetCredential().AccessKeyID, now.Format(yyyymmdd), "us-west-1"),
"X-Amz-Content-Sha256": payload,
},
verifyRegion: true,
expected: ErrInvalidRegion,
region: "us-east-1",
expected: ErrInvalidRegion,
},
// (4) Should NOT fail with an invalid region if it doesn't verify it.
{
@@ -168,8 +168,8 @@ func TestDoesPresignedSignatureMatch(t *testing.T) {
"X-Amz-Credential": fmt.Sprintf(credentialTemplate, serverConfig.GetCredential().AccessKeyID, now.Format(yyyymmdd), "us-west-1"),
"X-Amz-Content-Sha256": payload,
},
verifyRegion: false,
expected: ErrUnsignedHeaders,
region: "us-west-1",
expected: ErrUnsignedHeaders,
},
// (5) Should fail to extract headers if the host header is not signed.
{
@@ -182,8 +182,8 @@ func TestDoesPresignedSignatureMatch(t *testing.T) {
"X-Amz-Credential": fmt.Sprintf(credentialTemplate, serverConfig.GetCredential().AccessKeyID, now.Format(yyyymmdd), serverConfig.GetRegion()),
"X-Amz-Content-Sha256": payload,
},
verifyRegion: true,
expected: ErrUnsignedHeaders,
region: serverConfig.GetRegion(),
expected: ErrUnsignedHeaders,
},
// (6) Should give an expired request if it has expired.
{
@@ -200,8 +200,8 @@ func TestDoesPresignedSignatureMatch(t *testing.T) {
"X-Amz-Date": now.AddDate(0, 0, -2).Format(iso8601Format),
"X-Amz-Content-Sha256": payload,
},
verifyRegion: false,
expected: ErrExpiredPresignRequest,
region: serverConfig.GetRegion(),
expected: ErrExpiredPresignRequest,
},
// (7) Should error if the signature is incorrect.
{
@@ -218,8 +218,8 @@ func TestDoesPresignedSignatureMatch(t *testing.T) {
"X-Amz-Date": now.Format(iso8601Format),
"X-Amz-Content-Sha256": payload,
},
verifyRegion: false,
expected: ErrSignatureDoesNotMatch,
region: serverConfig.GetRegion(),
expected: ErrSignatureDoesNotMatch,
},
}
@@ -243,7 +243,7 @@ func TestDoesPresignedSignatureMatch(t *testing.T) {
}
// Check if it matches!
err := doesPresignedSignatureMatch(payload, req, testCase.verifyRegion)
err := doesPresignedSignatureMatch(payload, req, testCase.region)
if err != testCase.expected {
t.Errorf("(%d) expected to get %s, instead got %s", i, niceError(testCase.expected), niceError(err))
}

View File

@@ -19,10 +19,11 @@ package cmd
import (
"encoding/hex"
"fmt"
"github.com/minio/sha256-simd"
"hash"
"io"
"net/http"
"github.com/minio/sha256-simd"
)
// signVerifyReader represents an io.Reader compatible interface which
@@ -31,13 +32,15 @@ import (
type signVerifyReader struct {
Request *http.Request // HTTP request to be validated and read.
HashWriter hash.Hash // sha256 hash writer.
Region string
}
// Initializes a new signature verify reader.
func newSignVerify(req *http.Request) *signVerifyReader {
func newSignVerify(req *http.Request, region string) *signVerifyReader {
return &signVerifyReader{
Request: req, // Save the request.
HashWriter: sha256.New(), // Initialize sha256.
Region: region,
}
}
@@ -49,7 +52,6 @@ func isSignVerify(reader io.Reader) bool {
// Verify - verifies signature and returns error upon signature mismatch.
func (v *signVerifyReader) Verify() error {
validateRegion := true // Defaults to validating region.
shaPayloadHex := hex.EncodeToString(v.HashWriter.Sum(nil))
if skipContentSha256Cksum(v.Request) {
// Sets 'UNSIGNED-PAYLOAD' if client requested to not calculated sha256.
@@ -58,9 +60,9 @@ func (v *signVerifyReader) Verify() error {
// Signature verification block.
var s3Error APIErrorCode
if isRequestSignatureV4(v.Request) {
s3Error = doesSignatureMatch(shaPayloadHex, v.Request, validateRegion)
s3Error = doesSignatureMatch(shaPayloadHex, v.Request, v.Region)
} else if isRequestPresignedSignatureV4(v.Request) {
s3Error = doesPresignedSignatureMatch(shaPayloadHex, v.Request, validateRegion)
s3Error = doesPresignedSignatureMatch(shaPayloadHex, v.Request, v.Region)
} else {
// Couldn't figure out the request type, set the error as AccessDenied.
s3Error = ErrAccessDenied

View File

@@ -206,6 +206,8 @@ func signRequest(req *http.Request, accessKey, secretKey string) error {
}
sort.Strings(headers)
region := serverConfig.GetRegion()
// Get canonical headers.
var buf bytes.Buffer
for _, k := range headers {
@@ -257,7 +259,7 @@ func signRequest(req *http.Request, accessKey, secretKey string) error {
// Get scope.
scope := strings.Join([]string{
currTime.Format(yyyymmdd),
"us-east-1",
region,
"s3",
"aws4_request",
}, "/")
@@ -267,8 +269,8 @@ func signRequest(req *http.Request, accessKey, secretKey string) error {
stringToSign = stringToSign + hex.EncodeToString(sum256([]byte(canonicalRequest)))
date := sumHMAC([]byte("AWS4"+secretKey), []byte(currTime.Format(yyyymmdd)))
region := sumHMAC(date, []byte("us-east-1"))
service := sumHMAC(region, []byte("s3"))
regionHMAC := sumHMAC(date, []byte(region))
service := sumHMAC(regionHMAC, []byte("s3"))
signingKey := sumHMAC(service, []byte("aws4_request"))
signature := hex.EncodeToString(sumHMAC(signingKey, []byte(stringToSign)))
@@ -305,9 +307,9 @@ func newTestRequest(method, urlStr string, contentLength int64, body io.ReadSeek
case body == nil:
hashedPayload = hex.EncodeToString(sum256([]byte{}))
default:
payloadBytes, e := ioutil.ReadAll(body)
if e != nil {
return nil, e
payloadBytes, err := ioutil.ReadAll(body)
if err != nil {
return nil, err
}
hashedPayload = hex.EncodeToString(sum256(payloadBytes))
md5Base64 := base64.StdEncoding.EncodeToString(sumMD5(payloadBytes))

View File

@@ -17,10 +17,12 @@
package cmd
import (
"bytes"
"encoding/json"
"errors"
"io/ioutil"
"net/http"
"os"
"runtime"
"strings"
"time"
@@ -33,10 +35,6 @@ import (
// command specific flags.
var (
updateFlags = []cli.Flag{
cli.BoolFlag{
Name: "help, h",
Usage: "Help for update.",
},
cli.BoolFlag{
Name: "experimental, E",
Usage: "Check experimental update.",
@@ -49,7 +47,7 @@ var updateCmd = cli.Command{
Name: "update",
Usage: "Check for a new software update.",
Action: mainUpdate,
Flags: updateFlags,
Flags: append(updateFlags, globalFlags...),
CustomHelpTemplate: `Name:
minio {{.Name}} - {{.Usage}}
@@ -114,7 +112,8 @@ func parseReleaseData(data string) (time.Time, error) {
if releaseDateSplits[0] != "minio" {
return time.Time{}, (errors.New("Update data malformed, missing minio tag"))
}
// "OFFICIAL" tag is still kept for backward compatibility, we should remove this for the next release.
// "OFFICIAL" tag is still kept for backward compatibility.
// We should remove this for the next release.
if releaseDateSplits[1] != "RELEASE" && releaseDateSplits[1] != "OFFICIAL" {
return time.Time{}, (errors.New("Update data malformed, missing RELEASE tag"))
}
@@ -132,8 +131,26 @@ func parseReleaseData(data string) (time.Time, error) {
return parsedDate, nil
}
// User Agent should always follow the style below.
// Please open an issue to discuss any new changes here.
//
// Minio (OS; ARCH) APP/VER APP/VER
var (
userAgentSuffix = "Minio/" + Version + " " + "Minio/" + ReleaseTag + " " + "Minio/" + CommitID
)
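
Assembled with the OS/arch prefix built further below, the header comes out roughly as in this sketch. The version, release tag and commit are placeholders for the build-time Version, ReleaseTag and CommitID values, and the "; docker" token appears only inside a container:

```go
package main

import (
	"fmt"
	"runtime"
)

func main() {
	// Placeholder build values; the real Version, ReleaseTag and CommitID
	// are set at build time.
	suffix := "Minio/<version> Minio/<release-tag> Minio/<commit-id>"

	// Outside a container the "; docker" token is omitted.
	prefix := "Minio (" + runtime.GOOS + "; " + runtime.GOARCH + ") "
	fmt.Println(prefix + suffix)
	// e.g. Minio (linux; amd64) Minio/<version> Minio/<release-tag> Minio/<commit-id>
}
```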
// Check if the operating system is a docker container.
func isDocker() bool {
cgroup, err := ioutil.ReadFile("/proc/self/cgroup")
if err != nil && os.IsNotExist(err) {
return false
}
fatalIf(err, "Unable to read `cgroup` file.")
return bytes.Contains(cgroup, []byte("docker"))
}
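
The detection relies on Docker naming the cgroup hierarchy paths after the container. A small illustration with sample /proc/self/cgroup contents; the container ID is made up:

```go
package main

import (
	"bytes"
	"fmt"
)

func main() {
	// Sample /proc/self/cgroup lines as they might appear inside a Docker
	// container (the container ID is a placeholder).
	sample := []byte("9:perf_event:/docker/2f3c8e1a0000\n8:memory:/docker/2f3c8e1a0000\n")
	fmt.Println(bytes.Contains(sample, []byte("docker"))) // true

	// On a plain host the hierarchy paths typically do not mention docker.
	host := []byte("9:perf_event:/\n8:memory:/user.slice\n")
	fmt.Println(bytes.Contains(host, []byte("docker"))) // false
}
```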
// verify updates for releases.
func getReleaseUpdate(updateURL string, noError bool) updateMessage {
func getReleaseUpdate(updateURL string, duration time.Duration) (updateMsg updateMessage, errMsg string, err error) {
// Construct a new update url.
newUpdateURLPrefix := updateURL + "/" + runtime.GOOS + "-" + runtime.GOARCH
newUpdateURL := newUpdateURLPrefix + "/minio.shasum"
@@ -143,78 +160,87 @@ func getReleaseUpdate(updateURL string, noError bool) updateMessage {
switch runtime.GOOS {
case "windows":
// For windows.
downloadURL = newUpdateURLPrefix + "/minio.exe?update=yes"
downloadURL = newUpdateURLPrefix + "/minio.exe"
default:
// For all other operating systems.
downloadURL = newUpdateURLPrefix + "/minio?update=yes"
downloadURL = newUpdateURLPrefix + "/minio"
}
// Initialize update message.
updateMsg := updateMessage{
updateMsg = updateMessage{
Download: downloadURL,
Version: Version,
}
// Instantiate a new client with 3 sec timeout.
client := &http.Client{
Timeout: 3 * time.Second,
}
// Fetch new update.
data, err := client.Get(newUpdateURL)
if err != nil && noError {
return updateMsg
}
fatalIf((err), "Unable to read from update URL "+newUpdateURL+".")
// Error out if 'update' command is issued for development based builds.
if Version == "DEVELOPMENT.GOGET" && !noError {
fatalIf((errors.New("")),
"Update mechanism is not supported for go get based binary builds. Please download official releases from https://minio.io/#minio")
Timeout: duration,
}
// Parse current minio version into RFC3339.
current, err := time.Parse(time.RFC3339, Version)
if err != nil && noError {
return updateMsg
if err != nil {
errMsg = "Unable to parse version string as time."
return
}
fatalIf((err), "Unable to parse version string as time.")
// Verify if current minio version is zero.
if current.IsZero() && !noError {
fatalIf((errors.New("")),
"Updates mechanism is not supported for custom builds. Please download official releases from https://minio.io/#minio")
if current.IsZero() {
err = errors.New("date should not be zero")
errMsg = "Updates mechanism is not supported for custom builds. Please download official releases from https://minio.io/#minio"
return
}
// Initialize new request.
req, err := http.NewRequest("GET", newUpdateURL, nil)
if err != nil {
return
}
userAgentPrefix := func() string {
if isDocker() {
return "Minio (" + runtime.GOOS + "; " + runtime.GOARCH + "; " + "docker) "
}
return "Minio (" + runtime.GOOS + "; " + runtime.GOARCH + ") "
}()
// Set user agent.
req.Header.Set("User-Agent", userAgentPrefix+" "+userAgentSuffix)
// Fetch new update.
resp, err := client.Do(req)
if err != nil {
return
}
// Verify if we have a valid http response i.e http.StatusOK.
if data != nil {
if data.StatusCode != http.StatusOK {
// Return quickly if noError is set.
if noError {
return updateMsg
}
fatalIf((errors.New("")), "Failed to retrieve update notice. "+data.Status)
if resp != nil {
if resp.StatusCode != http.StatusOK {
errMsg = "Failed to retrieve update notice."
err = errors.New("http status : " + resp.Status)
return
}
}
// Read the response body.
updateBody, err := ioutil.ReadAll(data.Body)
if err != nil && noError {
return updateMsg
updateBody, err := ioutil.ReadAll(resp.Body)
if err != nil {
errMsg = "Failed to retrieve update notice. Please try again later."
return
}
fatalIf((err), "Failed to retrieve update notice. Please try again later.")
errMsg = "Failed to retrieve update notice. Please try again later. Please report this issue at https://github.com/minio/minio/issues"
// Parse the date if its valid.
latest, err := parseReleaseData(string(updateBody))
if err != nil && noError {
return updateMsg
if err != nil {
return
}
errMsg := "Failed to retrieve update notice. Please try again later. Please report this issue at https://github.com/minio/minio/issues"
fatalIf(err, errMsg)
// Verify if the date is not zero.
if latest.IsZero() && !noError {
fatalIf((errors.New("")), errMsg)
if latest.IsZero() {
err = errors.New("date should not be zero")
return
}
// Is the update the latest?
@@ -223,18 +249,26 @@ func getReleaseUpdate(updateURL string, noError bool) updateMessage {
}
// Return update message.
return updateMsg
return updateMsg, "", nil
}
// main entry point for update command.
func mainUpdate(ctx *cli.Context) {
// Print all errors as they occur.
noError := false
// Error out if 'update' command is issued for development based builds.
if Version == "DEVELOPMENT.GOGET" {
fatalIf(errors.New(""), "Update mechanism is not supported for go get based binary builds. Please download official releases from https://minio.io/#minio")
}
// Check for update.
var updateMsg updateMessage
var errMsg string
var err error
var secs = time.Second * 3
if ctx.Bool("experimental") {
console.Println(getReleaseUpdate(minioUpdateExperimentalURL, noError))
updateMsg, errMsg, err = getReleaseUpdate(minioUpdateExperimentalURL, secs)
} else {
console.Println(getReleaseUpdate(minioUpdateStableURL, noError))
updateMsg, errMsg, err = getReleaseUpdate(minioUpdateStableURL, secs)
}
fatalIf(err, errMsg)
console.Println(updateMsg)
}