Compare commits

...

444 Commits

Author SHA1 Message Date
Harshavardhana
2062add05f fs/posix: On windows use helpers and init format.json properly. (#3434)
Fixes #3433
2016-12-12 15:43:41 -08:00
Krishnan Parthasarathi
6b4e6bcebf Move LoginHandler into LoginServer which others embed (#3431)
* Move LoginHandler into LoginServer which others embed

* Add unit tests for loginServer
2016-12-12 08:11:23 -08:00
Harshavardhana
2d6f8153fa format: Check properly for disks in valid formats. (#3427)
There was an error in how we validated disk formats:
if one of the disks was formatted with FS, it would
cause confusion and the object layer would never
initialize, essentially going into an infinite loop.

Validate pre-emptively and also check for the FS format
properly.
2016-12-11 15:18:55 -08:00
Anis Elleuch
5c10f4adf0 presign v2: include resp headers in signature calc (#3428)
Include response headers when presigning a URL using the signature v2 algorithm
2016-12-11 14:32:25 -08:00
Harshavardhana
4daa0d2cee lock: Moving locking to handler layer. (#3381)
This is implemented so that the issues like in the
following flow don't affect the behavior of operation.

```
GetObjectInfo()
.... --> Time window for mutation (no lock held)
.... --> Time window for mutation (no lock held)
GetObject()
```

This happens when two simultaneous uploads are made
to the same object; the object returns wrong
info to the client.

Another classic example is "CopyObject" API itself
which reads from a source object and copies to
destination object.

Fixes #3370
Fixes #2912
2016-12-10 16:15:12 -08:00
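
A minimal sketch of the handler-level locking described in the commit above, using a plain sync.RWMutex in place of minio's namespace lock (the names here are illustrative, not the actual minio API): one read lock spans both the stat and the read, so no mutation window opens between them.

```
package main

import (
	"fmt"
	"sync"
)

var nsLock sync.RWMutex // stand-in for minio's per-object namespace lock

var object = "hello" // toy object store

func getObjectInfo() int { return len(object) }
func getObject() string  { return object }

func handler() {
	// Holding a single read lock across both calls closes the
	// "time window for mutation" described above.
	nsLock.RLock()
	defer nsLock.RUnlock()
	size := getObjectInfo()
	data := getObject() // info is guaranteed consistent with data
	fmt.Println(size, data)
}

func main() { handler() }
```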
Harshavardhana
cd0f350c02 env: Bring back MINIO_BROWSER env. (#3423)
Set MINIO_BROWSER=off to disable web browser completely.

Fixes #3422
2016-12-10 00:42:22 -08:00
Krishna Srinivas
ac554bf663 FS/Multipart: Fix race between PutObjectPart and Complete/Abort multi… (#3419)
FS/Multipart: Fix race between PutObjectPart and Complete/Abort multipart. close(timeoutCh) on complete/abort so that a racing PutObjectPart does not leave a dangling go-routine.

Fixes #3351
2016-12-09 16:10:18 -08:00
Harshavardhana
b363709c11 caching: Optimize memory allocations. (#3405)
This change brings in changes at multiple places

 - Reuse buffers at almost all locations, ranging
   from rpc, fs, xl, checksum etc.
 - Change caching behavior to disable itself
   under low memory conditions, i.e. < 8GB of RAM.
 - Only objects of size up to 1/10th the size of the
   cache are cached; for example, if 4GB is the cache
   size the maximum object size which will be cached
   is going to be 400MB. This change is an
   optimization to cache more objects rather
   than a few larger objects.
 - If the object cache is enabled, the default GC
   percent has been reduced to 20% in line
   with newly found behavior of GC. If the cache
   utilization reaches 75% of the maximum value,
   the GC percent is reduced to 10% to make GC
   more aggressive.
 - Do not use *bytes.Buffer* due to its growth
   requirements. For every allocation *bytes.Buffer*
   allocates an additional buffer for its internal
   purposes. This is undesirable for us, so we
   implemented a new cappedWriter which is capped to a
   desired size; beyond this all writes are rejected.

Possible fix for #3403.
2016-12-08 20:35:07 -08:00
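
A sketch of what a capped writer like the one described could look like (hypothetical type, not the actual minio implementation): one fixed buffer allocated up front, with writes beyond the cap rejected instead of triggering bytes.Buffer-style growth.

```
package main

import (
	"errors"
	"fmt"
)

// errTooBig is returned once a write would exceed the cap (illustrative).
var errTooBig = errors.New("write exceeds capped size")

// cappedWriter is a hypothetical io.Writer over a fixed buffer.
type cappedWriter struct {
	buf []byte // allocated once, never grows
	n   int    // bytes written so far
}

func newCappedWriter(capSize int) *cappedWriter {
	return &cappedWriter{buf: make([]byte, capSize)}
}

func (w *cappedWriter) Write(p []byte) (int, error) {
	if w.n+len(p) > len(w.buf) {
		return 0, errTooBig // reject instead of growing
	}
	n := copy(w.buf[w.n:], p)
	w.n += n
	return n, nil
}

func (w *cappedWriter) Bytes() []byte { return w.buf[:w.n] }

func main() {
	w := newCappedWriter(8)
	fmt.Println(w.Write([]byte("12345")))
	fmt.Println(w.Write([]byte("67890"))) // rejected: would exceed 8 bytes
	fmt.Printf("%s\n", w.Bytes())
}
```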
Anis Elleuch
410b579e87 startup: Show elapsed time in disks format process (#3413) 2016-12-07 10:22:00 -08:00
Karthic Rao
7b7c0bba58 Use a non member mutex lock for serverConfig access. (#3411)
- This is to ensure that any new config reference made to the
  serverConfig is also backed by a mutex lock.
- Otherwise any new config assignment would also replace the member mutex
  which is currently used for safe access.
2016-12-07 03:41:54 -08:00
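
The hazard this commit avoids, in a small standalone sketch (illustrative names): if the mutex were a field of the config struct, replacing the whole config value would also replace the mutex guarding it, while a separate package-level mutex survives reassignment.

```
package main

import (
	"fmt"
	"sync"
)

type serverConfig struct {
	Region string
}

var (
	configMu sync.RWMutex  // lives outside the struct it protects
	config   *serverConfig // replaced wholesale on reload
)

func setConfig(c *serverConfig) {
	configMu.Lock()
	defer configMu.Unlock()
	config = c // safe: the guarding mutex is not part of c
}

func getRegion() string {
	configMu.RLock()
	defer configMu.RUnlock()
	return config.Region
}

func main() {
	setConfig(&serverConfig{Region: "us-east-1"})
	fmt.Println(getRegion())
}
```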
Anis Elleuch
0cef971832 Fix max cache size calculation when system RAM is smaller than the default cache size (#3410) 2016-12-06 16:09:26 -08:00
Anis Elleuch
5c9a95df32 srv-mux: do not print peek protocol EOF err msg (#3402)
The EOF error message in the peek protocol is shown when a client closes the
connection in the middle of the peek protocol; this commit hides it since it
doesn't make sense to show it.
2016-12-05 14:49:32 -08:00
Anis Elleuch
3b455d6137 tests: Add tests for xl-v1-list-objects-heal (#3399) 2016-12-05 09:40:33 -08:00
Anis Elleuch
b2a0e5754b bucket-handlers: More tests for post form handler (#3392) 2016-12-04 12:23:19 -08:00
Anis Elleuch
63d9bb626a postform: fix check when ${filename} is provided (#3391)
Checking the key condition when ${filename} is provided wasn't working well;
this patch fixes the wrong behavior.
2016-12-04 10:30:52 -08:00
Anis Elleuch
372da5eaf5 tests: Enhance checkPostPolicy() coverage (#3389) 2016-12-03 12:41:07 -08:00
Harshavardhana
cf17fc7774 fs: PutObject create 0byte objects properly. (#3387)
Current code appended to a file only if 1 byte or
more was sent on the wire, which affected both PutObject
and PutObjectPart uploads.

This patch fixes such a situation and resolves #3385
2016-12-03 11:53:12 -08:00
Krishnan Parthasarathi
67509453d3 FS: sync abortMultipart cleanup and bg append (#3388)
The backgroundAppend type's abort method should wait for appendParts to finish
any ongoing appending of parts in the background before cleaning up
the part files.
2016-12-02 23:33:06 -08:00
Harshavardhana
d31f256020 Fail on lint errors during CI build. 2016-12-02 18:08:12 -08:00
Harshavardhana
d67f47927c api: Fix the formatting issues in last patch. 2016-12-02 17:39:21 -08:00
Anis Elleuch
85bb5870a9 Post Policy Form: exhaustive post policy check (#3386)
Add support of all conditions check described in
http://docs.aws.amazon.com/AmazonS3/latest/API/sigv4-HTTPPOSTConstructPolicy.html
and simplify a little bit the existing code
2016-12-02 17:00:33 -08:00
Harshavardhana
4a9b205a15 docs: Add missing comments for exported functions. 2016-12-02 14:39:23 -08:00
koolhead17
f3b346cbb3 docs: removed the Edge tag reference. (#3366) 2016-12-02 10:47:18 -08:00
Harshavardhana
ff4ce0ee14 fs/xl: Combine input checks into re-usable functions. (#3383)
Repeated code around both object layers are moved
and combined into simple re-usable functions.
2016-12-01 23:15:17 -08:00
Anis Elleuch
918924796f fs: Enable shutdown test with faulty disks (#3380) 2016-12-01 13:59:06 -08:00
Krishnan Parthasarathi
feb6685359 posix: Use preparePath only for paths used with syscall or os functions (#3377) 2016-11-30 20:56:15 -08:00
Bala FA
0d59ea1e94 postpolicy: handle Amazon S3 compatible content-length-range condition (#3376)
Previously the minio server expected content-length-range values to be
integers in JSON.  However Amazon S3 handles content-length-range values as
both integers and strings.

This patch adds support for string values.
2016-11-30 18:30:59 -08:00
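
A sketch of accepting both forms (the policy shape is simplified here): unmarshal the condition into interface{} and convert either a JSON number or a numeric string.

```
package main

import (
	"encoding/json"
	"fmt"
	"strconv"
)

// toInt64 converts a JSON-decoded value that may be a number or a
// numeric string, mirroring the compatibility described above.
func toInt64(v interface{}) (int64, error) {
	switch t := v.(type) {
	case float64: // encoding/json decodes all JSON numbers to float64
		return int64(t), nil
	case string:
		return strconv.ParseInt(t, 10, 64)
	default:
		return 0, fmt.Errorf("unsupported type %T", v)
	}
}

func main() {
	for _, raw := range []string{
		`["content-length-range", 1048576, 10485760]`,
		`["content-length-range", "1048576", "10485760"]`, // S3 also allows strings
	} {
		var cond []interface{}
		if err := json.Unmarshal([]byte(raw), &cond); err != nil {
			panic(err)
		}
		min, _ := toInt64(cond[1])
		max, _ := toInt64(cond[2])
		fmt.Println(min, max)
	}
}
```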
Krishna Srinivas
38edd94282 ListBuckets: Allow listBuckets request to be signed with region configured in config.json (#3374)
Fixes #3373
2016-11-30 13:55:56 -08:00
Krishna Srinivas
8021061bd8 Implement BucketUpdater interface to call BucketMetaState methods. (#3375) 2016-11-30 13:37:38 -08:00
Anand Babu (AB) Periasamy
bc9509bc8a edits on Limits 2016-11-30 01:58:31 -08:00
Krishna Srinivas
e3b4910b66 FS/CompleteMultipart: lock the namespace before renaming the appended tmp file. (#3371) 2016-11-29 23:26:36 -08:00
Harshavardhana
d056f19d07 api: Allow reconnection of policy/notification rpc clients. (#3368)
Since we moved reconnection logic out of net-rpc-client.go
we should do it properly from the top layer and bring back
the code to reconnect in case the connection is lost.
2016-11-29 22:39:32 -08:00
Harshavardhana
834007728c fs: Do not print redundant md5Sum response header. (#3369)
For both GET and HEAD requests.
2016-11-29 16:47:01 -08:00
Krishnan Parthasarathi
a609a4126c Vendorize recent fixes in dsync (#3365)
* Update 'i' only if lock grant read from buffered channel
2016-11-28 23:03:46 -08:00
Krishna Srinivas
bcd1a2308b FS/Shutdown: cleanup and delete .minio.sys during Shutdown() (#3360) 2016-11-28 22:54:48 -08:00
koolhead17
694bad434c docs: Modified the Docker doc to reflect distributed release image & (#3362)
modification.
2016-11-28 12:45:35 -08:00
Anis Elleuch
01f625824a args: Honor config-dir & quiet wherever they are (#3356)
setGlobalsFromContext() is added to set global variables after parsing
command line arguments. Thus, global flags will be honored wherever
they are placed in the minio command.
2016-11-28 12:15:36 -08:00
Bala FA
9ccfb70104 Minor cleanup. (#3361) 2016-11-28 12:14:24 -08:00
Harshavardhana
201a20ac02 handlers: Handle re-direction properly for S3 requests. (#3355)
Make sure all S3 signature requests are not re-directed
to `/minio`. This should be only done for JWT and some
Anonymous requests.

This also fixes a bug found from https://github.com/bji/libs3

```
$ s3 -u list

ERROR: XmlParseFailure
```

Now after this fix it shows the proper output
```
$ s3 -u list
                         Bucket                                 Created
--------------------------------------------------------  --------------------
andoria                                                   2016-11-27T08:19:06Z
```
2016-11-27 16:30:46 -08:00
Krishna Srinivas
f3322e94c8 FS: Skip creating fs.json for objects created by minio (ex. policy.json) (#3353) 2016-11-27 11:33:08 -08:00
Harshavardhana
46a6fde813 xl/fs: Fix initializing meta volume bug. 2016-11-25 18:17:53 -08:00
Anis Elleuch
fd1f09a66c log: Enable loggers just after configuration load (#3348)
It would make sense to enable the logger just after config initialisation.
That way, errorIf() and fatalIf() will be usable and can catch errors
like invalid access and secret key errors.
2016-11-25 10:39:00 -08:00
Remco Verhoef
752ed7915b Add windows services docs with nssm instead of sc. 2016-11-24 16:53:10 -08:00
Bala FA
d3064e40b3 isDocker() logs error than fatal error. (#3347) 2016-11-24 16:06:49 -08:00
Bala FA
39f9324616 Remove unnecessary err != nil check. (#3346) 2016-11-24 15:22:33 -08:00
Aditya Manthramurthy
a822b8e782 docs: Fix command sample to be copy-pastable (#3345) 2016-11-24 11:38:19 -08:00
Harshavardhana
a6922f94ee docs: Fix more formatting issues. 2016-11-23 20:25:05 -08:00
Harshavardhana
b4d57bf4c4 docs: Fix some formatting issues for rendering 2016-11-23 20:20:12 -08:00
Bala FA
0f2e493c9a Use isErrIgnored() function wherever applicable. (#3343) 2016-11-23 20:05:04 -08:00
Nitish Tiwari
4ef2d8940c Rename quick-start-guide.md to README.md (#3344) 2016-11-23 20:04:29 -08:00
Nitish Tiwari
96099a1e97 Added distributed Minio architecture images and quick start guide (#3342) 2016-11-23 19:30:23 -08:00
Harshavardhana
0e87f29de9 Disable heal message printing, comment it out as todo. 2016-11-23 17:54:29 -08:00
Harshavardhana
dd74e5a809 Revert "init: Honor config-dir flag when it is passed as global or local flag (#3337)"
This reverts commit e2ef95af7d.

This is reverted since the previous patch caused crashes.
2016-11-23 17:31:36 -08:00
Harshavardhana
12c1abed98 Vendorize with bug fixes from minio browser. (#3341)
This patch brings in changes from miniobrowser repo.

- Bucket policy UI and functionality fixes by @krishnasrinivas
- Bucket policy implementation by @balamurugana
- UI changes and new functionality changing password etc. @rushenn
- UI and new functionality for sharing URLs, deleting files
  @rushenn and @krishnasrinivas.
- Other misc fixes by @vadmeste @brendanashworth
2016-11-23 17:31:11 -08:00
Anis Elleuch
e2ef95af7d init: Honor config-dir flag when it is passed as global or local flag (#3337)
setGlobalsFromContext() is added to set global variables after parsing
command line arguments.
2016-11-23 17:13:40 -08:00
Harshavardhana
d711ff454e logs: Do not log common successful errors. (#3340)
Errors like `BucketNotFound`, `BucketExists` shouldn't be logged.

Fixes #3229
2016-11-23 16:36:26 -08:00
Anis Elleuch
c667d20dfc config-migrate: Fix buggy continuous re-migration of v9 to v10 config (#3338) 2016-11-23 15:53:55 -08:00
Harshavardhana
6efee2072d objectLayer: Check for format.json in a wrapped disk. (#3311)
This is needed to validate if the `format.json` indeed exists
when a fresh node is brought online.

This wrapped implementation also connects to the remote node
by attempting a re-login. Subsequently after a successful
connect `format.json` is validated as well.

Fixes #3207
2016-11-23 15:48:10 -08:00
Harshavardhana
7a5bbf7a2e Revert "Vendor update for dsync, fixing major go routine leak issue. (#3308)"
This reverts commit 273228fafa.
2016-11-23 15:17:41 -08:00
Anis Elleuch
14cb3645a3 config/logger: remove syslogger and upgrade to config v10 which eliminates syslog config (#3336) 2016-11-23 15:00:53 -08:00
Krishna Srinivas
f4f512fedd FS/multipart: Bug fix related to part path. Hold lock on part while appending. (#3335) 2016-11-23 12:50:09 -08:00
Nitish Tiwari
1983925dcf Added distributed Minio architecture files. (#3330) 2016-11-23 11:36:52 -08:00
Anis Elleuch
22c98d3fa2 logger: Disassociate shared log config between console, file and syslog (#3333)
logrus does not help us set different log formats (json/text) for
different loggers. Now, we create different logrus instances and call
them in errorIf and fatalIf.
2016-11-23 11:35:04 -08:00
Krishna Srinivas
01ae5bb39c FS/multipart: Fix append-parts to use minioMetaTmpBucket. (#3304) 2016-11-23 03:04:04 -08:00
koolhead17
11faf3f16d docs: Modified FreeBSD guide with official how to install golang link, (#3293) 2016-11-23 02:35:19 -08:00
Remco Verhoef
d3df6c711a How to run Minio as a service on Windows (#3327) 2016-11-23 02:27:54 -08:00
Bala FA
ed6e781679 globals: make read only variables as constants. (#3326) 2016-11-22 20:13:20 -08:00
Bala FA
baf1c1638d server: set maximum allowed request body. (#3324)
This patch sets the value as 5GiB + 64MiB.  5GiB is the maximum
allowed object size and 64MiB is for form data fields and headers.
2016-11-22 19:58:51 -08:00
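
A sketch of enforcing such a limit with the standard library (the wrapper and constant here are illustrative; the actual minio mechanism may differ): http.MaxBytesReader caps each request body.

```
package main

import (
	"io/ioutil"
	"log"
	"net/http"
)

// Illustrative constant: 5 GiB maximum object size + 64 MiB for form
// data fields and headers, as described in the commit message.
const maxRequestBody = 5*1024*1024*1024 + 64*1024*1024

// withBodyLimit wraps a handler so each request body is capped.
func withBodyLimit(h http.Handler) http.Handler {
	return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		r.Body = http.MaxBytesReader(w, r.Body, maxRequestBody)
		h.ServeHTTP(w, r)
	})
}

func main() {
	h := http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		// Reading past the cap makes MaxBytesReader return an error.
		if _, err := ioutil.ReadAll(r.Body); err != nil {
			http.Error(w, "request body too large", http.StatusRequestEntityTooLarge)
			return
		}
		w.WriteHeader(http.StatusOK)
	})
	log.Fatal(http.ListenAndServe(":8080", withBodyLimit(h)))
}
```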
Bala FA
825000bc34 Use humanize constants for KiB, MiB and GiB units. (#3322) 2016-11-22 18:18:22 -08:00
Bala FA
c1ebcbcda2 Remove unused code. (#3321) 2016-11-22 17:44:18 -08:00
Anis Elleuch
41a3a9e402 server: forbid zero port in address flag since it confuses clients and (#3318) 2016-11-22 17:01:15 -08:00
Bala FA
1d4ac4b084 Rename getUUID() into mustGetUUID() (#3320)
In case of UUID generation failure mustGetUUID() will panic rather than
retrying infinitely in a for loop.
2016-11-22 16:52:37 -08:00
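
A sketch of the renamed helper's contract (assumed shape, not the actual minio code): generate the UUID and panic on failure instead of looping.

```
package main

import (
	"crypto/rand"
	"fmt"
)

// mustGetUUID sketch: generate a random (version 4 style) UUID and
// panic on failure rather than retrying forever, matching the
// behavior change described above.
func mustGetUUID() string {
	var u [16]byte
	if _, err := rand.Read(u[:]); err != nil {
		panic(fmt.Errorf("unable to generate UUID: %v", err))
	}
	u[6] = (u[6] & 0x0f) | 0x40 // version 4
	u[8] = (u[8] & 0x3f) | 0x80 // variant 10
	return fmt.Sprintf("%x-%x-%x-%x-%x", u[0:4], u[4:6], u[6:8], u[8:10], u[10:16])
}

func main() { fmt.Println(mustGetUUID()) }
```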
Bala FA
71b357e4f2 Remove uploadIDChange structure. (#3309)
addUploadID() and removeUploadID() are wrappers around updateUploadJSON(),
which is called with the respective arguments.
2016-11-22 15:29:39 -08:00
Anis Elleuch
339c9019b9 Protect multipart directory from removing when it is empty (#3315) 2016-11-22 13:15:06 -08:00
Harshavardhana
dd93f808c8 web: Add more data for jsonrpc responses. (#3296)
This change adds a richer error response
for JSON-RPC by interpreting object layer
errors into corresponding meaningful errors
for the web browser.

```go
&json2.Error{
   Message: "Bucket Name Invalid, Only lowercase letters, full stops, and numbers are allowed.",
}
```

Additionally this patch also allows PresignedGetObject()
to take expiry parameter to have variable expiry.
2016-11-22 11:12:38 -08:00
Anis Elleuch
4098025c11 Remove XL multipart tmp files when the latter is canceled (#3214)
XL multipart fails to remove tmp files when an error occurs during upload; this case covers the scenario where an upload is canceled manually by the client in the middle of the job.
2016-11-21 16:34:57 -08:00
Bala FA
bef0a50bc1 Cleanup and fixes (#3273)
* newRequestID() (previously generateUploadID()) returns a string rather than a byte array.
* Remove unclear comments and add appropriate comments.
* SHA-256 and MD5 hash functions return Hex/Base64 encoded strings rather than byte arrays.
* Remove duplicate MD5 hasher functions.
* Rename listObjectsValidateArgs() into validateListObjectsArgs()
* Remove repeated auth check code in all bucket request handlers.
* Remove abbreviated names in bucket-metadata
* Avoid nested if in bucketPolicyMatchStatement()
* Use ioutil.ReadFile() instead of os.Open() and ioutil.ReadAll()
* Set crossDomainXML as constant.
2016-11-21 13:51:05 -08:00
Anis Elleuch
71ada9d6f8 commitXLMetadata() expects src and dst buckets in its arguments (#3307) 2016-11-21 12:45:02 -08:00
tibbes
33c022fcec Fix checkdeps.sh on Mac (#3306)
Update the check_minimum_version function to use numeric comparison (not
string comparison) on components of version numbers. Fixes the following
output:

```
$ make
Checking deps:
ERROR
OSX version '10.11.6' not supported.
Minimum supported version: 10.8
make: *** [checks] Error 1
```
2016-11-21 12:25:46 -08:00
Karthic Rao
273228fafa Vendor update for dsync, fixing major go routine leak issue. (#3308) 2016-11-21 10:47:48 -08:00
Harshavardhana
aa98702908 api: Handle content-length-range policy properly. (#3297)
The content-length-range policy in the postPolicy API was
not working properly; handle it. The reflection
strategy used has changed in recent versions of Go:
any free-form interface{} holding an integer is treated
as `float64`. This caused a bug where content-length-range
parsing failed to provide any value.

Fixes #3295
2016-11-21 04:15:26 -08:00
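
The gotcha in a standalone example: encoding/json decodes every JSON number held in an interface{} as float64, so asserting an integer type silently fails.

```
package main

import (
	"encoding/json"
	"fmt"
)

func main() {
	var v interface{}
	json.Unmarshal([]byte(`1048576`), &v)

	// This assertion fails: the decoded value is float64, not int.
	if _, ok := v.(int); !ok {
		fmt.Printf("not an int, it is %T\n", v)
	}

	// Correct handling: assert float64, then convert.
	if f, ok := v.(float64); ok {
		fmt.Println(int64(f))
	}
}
```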
Harshavardhana
5197649081 utils: reduceErrs returns and validates quorum errors. (#3300)
This is needed as explained by @krisis

Let's say we have the following errors.

```
[]error{nil, errFileNotFound, errDiskAccessDenied, errDiskAccessDenied}
```

Since the last two errors are filtered, the maximum is nil,
depending on map order.

Let's say we get nil from reduceErrs. Clearly at this point
we don't have quorum nodes agreeing about the data, and since
GetObject only requires N/2 (read quorum), isDiskQuorum
would have returned true. This is problematic and can lead to
undesirable consequences.

Fixes #3298
2016-11-21 01:47:26 -08:00
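
A sketch of quorum-aware error reduction (simplified signature; the real function differs): return the most frequent error only if its count meets quorum, otherwise report that quorum was not reached.

```
package main

import (
	"errors"
	"fmt"
)

var (
	errFileNotFound     = errors.New("file not found")
	errDiskAccessDenied = errors.New("disk access denied")
	errQuorumUnreached  = errors.New("read quorum not reached")
)

// reduceErrs sketch: return the most common error only when it has at
// least `quorum` votes, so filtered/ignored errors cannot let a
// minority answer (like nil) win by map-iteration order.
func reduceErrs(errs []error, quorum int) error {
	counts := make(map[error]int)
	for _, err := range errs {
		counts[err]++
	}
	var winner error
	max := 0
	for err, n := range counts {
		if n > max {
			winner, max = err, n
		}
	}
	if max < quorum {
		return errQuorumUnreached
	}
	return winner
}

func main() {
	errs := []error{nil, errFileNotFound, errDiskAccessDenied, errDiskAccessDenied}
	fmt.Println(reduceErrs(errs, 3)) // read quorum not reached
}
```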
Harshavardhana
066f64d34a bootup: MetaVolume init should use isErrIgnored helper. (#3303) 2016-11-21 01:46:55 -08:00
Krishna Srinivas
afa4c7c3ef fs/multipart: Append multipart parts in a proper Go routine in background. (#3282) 2016-11-20 23:42:53 -08:00
Krishna Srinivas
38537c7df2 Print line numbers to give more info on the failed tests in ExecObjectLayerAPIAnonTest() (#3302) 2016-11-20 23:41:39 -08:00
Krishnan Parthasarathi
eed9ab0464 XL: pickValidXLMeta should return error instead of panic'ing (#3277) 2016-11-20 20:56:44 -08:00
Harshavardhana
0b9f0d14a1 auth/rpc: Take remote disk offline after maximum allowed attempts. (#3288)
When disks are offline for a long period of time, we should
ignore them after trying Login up to 5 times.

This is to reduce the network chattiness; this also reduces
the overall time spent on `net.Dial`.

Fixes #3286
2016-11-20 16:57:12 -08:00
Anis Elleuch
ffbee70e04 Avoid removing 'tmp' directory inside '.minio.sys' (#3294) 2016-11-20 14:25:43 -08:00
Harshavardhana
2c3a2241e7 update: Change update notifier for new style banner. (#3289)
For binary releases it would look as follows.

On all operating systems:
```
┏━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┓
┃ Minio is 25 days 12 hours 30 minutes old                           ┃
┃ Update: https://dl.minio.io/server/minio/release/linux-amd64/minio ┃
┗━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┛
```

On Docker:
```
┏━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┓
┃ Minio is 25 days 12 hours 32 minutes old ┃
┃ Update: docker pull minio/minio          ┃
┗━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┛
```
2016-11-19 23:20:13 -08:00
Harshavardhana
1c47365445 xl/bootup: Upon bootup handle errors loading bucket and event configs. (#3287)
In a situation where we have lots of buckets, the bootup time
might slow down a bit, and during this window servers quickly
going up and down leave the system in an in-transit state.

Certain calls which do not use quorum, like `readXLMetaStat`,
might return an error saying `errDiskNotFound` in place of the
expected `errFileNotFound`, which leads to an issue
where the server doesn't start.

To avoid this situation we need to treat these as safe values
to be ignored; for the most part these are network related errors.

Fixes #3275
2016-11-19 17:37:57 -08:00
Krishnan Parthasarathi
81eb7c0301 Have separate directory names for docker-compose and Dockerfile (#3292) 2016-11-19 10:42:22 -08:00
Anis Elleuch
4a926d19a7 Add GnuTLS config documentation for Windows (#3285) 2016-11-18 12:03:23 -08:00
Bala FA
05dc52a206 fix: use constants for access/secret key min/max length (#3271) 2016-11-16 17:33:55 -08:00
Harshavardhana
c91d3791f9 heal: Add healing support for bucket, bucket metadata files. (#3252)
This patch implements healing in general but it is only used
as part of quickHeal().

Fixes #3237
2016-11-16 16:42:23 -08:00
Bala FA
df8153859c docs: Add comments of using canonical import paths. (#3269) 2016-11-16 16:23:32 -08:00
Aditya Manthramurthy
2f43709f85 Prevent gorilla mux from normalizing path (Fixes #3256) (#3268)
Also fix test to not use a bucket name with a leading slash - this
causes the bucket name to become empty and go to an unintended API
call (listbuckets).
2016-11-16 16:23:22 -08:00
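
gorilla/mux exposes a SkipClean option for exactly this class of problem; a sketch of how path normalization can be disabled (assuming this is the mechanism used):

```
package main

import (
	"fmt"
	"log"
	"net/http"

	"github.com/gorilla/mux"
)

func main() {
	// SkipClean(true) stops gorilla/mux from rewriting paths such as
	// "/bucket//object" to "/bucket/object" before routing, which
	// would silently change S3 object keys.
	router := mux.NewRouter().SkipClean(true)
	router.PathPrefix("/").HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		fmt.Fprintln(w, r.URL.Path) // the path arrives un-normalized
	})
	log.Fatal(http.ListenAndServe(":8080", router))
}
```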
Bala FA
61d67a061c tests: add unit test for DeleteMultipleObjectsHandler. (#3267)
Fixes #3058
2016-11-16 09:46:09 -08:00
Harshavardhana
1b85302161 Fix spelling and golint errors. (#3266)
Fixes #3263
2016-11-15 18:14:23 -08:00
Anis Elleuch
6512d9978e Add support for user self-signed certificates
Additionally add documentation about how to configure TLS with Minio
2016-11-15 16:15:23 -08:00
Aditya Manthramurthy
e216201901 Remove control command from minio binary (Fixes #3264) (#3265) 2016-11-15 13:39:02 -08:00
Krishnan Parthasarathi
7abcededf2 Add tests for storage rpc handlers (#3262) 2016-11-15 12:12:06 -08:00
Harshavardhana
398421b9f5 xl/bootup: Server bootup shouldn't return for missing buckets. (#3255)
Ref #3196
2016-11-14 15:45:00 -08:00
Anis Elleuch
b8f0d9352f signature-v2: encode path and query strings when calculating signature (#3253) 2016-11-14 10:23:21 -08:00
Harshavardhana
f234c35020 lock: slice length of lock clients should be precisely urls. (#3254)
This patch fixes a possible bug which reproduces rarely; it has only
been seen once.

```
panic: runtime error: index out of range

goroutine 136 [running]:
panic(0xac1a40, 0xc4200120b0)
    /usr/local/go/src/runtime/panic.go:500 +0x1a1
github.com/minio/minio/vendor/github.com/minio/dsync.lock.func1(0xc4203d2240, 0x4, 0xc420474080, 0x4, 0x4, 0xc4202abb60, 0x0, 0xa86d01, 0xefcfc0, 0xc420417a80)
    /go/src/github.com/minio/minio/vendor/github.com/minio/dsync/drwmutex.go:170 +0x69b
created by github.com/minio/minio/vendor/github.com/minio/dsync.lock
    /go/src/github.com/minio/minio/vendor/github.com/minio/dsync/drwmutex.go:191 +0xf4
```
2016-11-14 10:18:56 -08:00
koolhead17
3bcf7b7593 Docs: Modified Docker instruction adding distributed image tag. (#3250) 2016-11-13 13:40:56 -08:00
Anis Elleuch
0c042a622a Return objects content types in Web List Objects handler (#3249) 2016-11-13 12:26:40 -08:00
Harshavardhana
c57a358c9d Validate date header only for Signed{,V2} and StreamingSigned. (#3248)
Fixes #2941
2016-11-13 12:08:24 -08:00
Anis Elleuch
380d6c6435 Use getObjectInfo() in both FS and XL ListObjects() to simplify and to return complete object information (#3247) 2016-11-13 11:48:02 -08:00
Harshavardhana
716316f711 Reduce number of envs and options from command line. (#3230)
Ref #3229

After review with @abperiasamy we decided to remove all the unnecessary options

- MINIO_BROWSER (implemented as a security feature but now deemed obsolete,
  since even if access to the browser is blocked, the S3 API port is open)
- MINIO_CACHE_EXPIRY (defaults to 72h)
- MINIO_MAXCONN (no one used this option and we don't test it)
- MINIO_ENABLE_FSMETA (enable FSMETA all the time)

Remove the --ignore-disks option - this option was introduced when the XL
 layer would initialize the backend disks and heal them automatically, to
 disallow XL from accidentally using the root partition itself.

This behavior has changed: XL no longer automatically initializes
`format.json`, and a HEAL is a controlled activity, so ignore-disks is not
useful anymore. This change also addresses the problems of our documentation
going forward and keeps things simple. This patch brings a reduction of
options, defaulting them to valid known inputs. It also
serves as a guideline for limiting many ways to do the same thing.
2016-11-11 16:40:55 -08:00
Anis Elleuch
98e79b4b50 Use 307 StatusTemporaryRedirect to redirect clients from http to https with forcing them to keep HTTP Verb (#3246) 2016-11-11 15:04:51 -08:00
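Why 307 rather than 301/302, in a minimal sketch: with a temporary redirect clients must keep the original method and body, so a PUT stays a PUT.

```
package main

import (
	"log"
	"net/http"
)

func main() {
	// Redirect HTTP to HTTPS with 307 so a PUT stays a PUT: unlike
	// 301/302, the client may not downgrade the method to GET.
	log.Fatal(http.ListenAndServe(":80", http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		target := "https://" + r.Host + r.URL.RequestURI()
		http.Redirect(w, r, target, http.StatusTemporaryRedirect)
	})))
}
```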
Anis Elleuch
e2216a0936 exponentialBackoffWait returns zero after some retries, limit attempt number inside (#3245) 2016-11-11 09:49:30 -08:00
Harshavardhana
e8d9d710d0 rpc: Protect racy access of internal auth states. (#3238)
Fixes #3232
2016-11-11 00:14:32 -08:00
Harshavardhana
a8ab02a73a v4/presign: Fix presign requests when there are more signed headers. (#3222)
This fix removes a wrong logic which fails for requests which
have more signed headers in a presign request.

Fixes #3217
2016-11-10 21:57:15 -08:00
Aditya Manthramurthy
cf022de4d5 Add tests for s3PeerAPIHandlers (Fixes #3067) (#3242) 2016-11-10 16:43:04 -08:00
Bala FA
3995e21c5b fix: Ignore object not found error in RemoveObject() in web-handler. (#3228)
Fixes #3181
2016-11-10 15:02:03 -08:00
Harshavardhana
2f7fb78692 rpc: Our rpcClient should make an attempt to reconnect. (#3221)
rpcClient should attempt a reconnect if the call fails
with 'rpc.ErrShutdown'; this is needed since at times
the servers are taken down and brought back up.

The hijacked connection from net.Dial is usually closed.

So upon the first attempt rpcClient might falsely indicate that the
disk is down; to avoid this state, make another dial attempt
to really fail.

Fixes #3206
Fixes #3205
2016-11-10 07:44:41 -08:00
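
A sketch of the retry-on-ErrShutdown pattern (simplified; the real rpcClient carries more state): redial once when a call fails with rpc.ErrShutdown before declaring the remote down.

```
package main

import (
	"net/rpc"
	"sync"
)

// reconnectingClient sketches the behavior described above: if a call
// fails with rpc.ErrShutdown (stale hijacked connection), dial again
// and retry once before declaring the remote down.
type reconnectingClient struct {
	mu   sync.Mutex
	addr string
	rpc  *rpc.Client
}

func (c *reconnectingClient) dial() error {
	cl, err := rpc.DialHTTP("tcp", c.addr)
	if err != nil {
		return err
	}
	c.rpc = cl
	return nil
}

func (c *reconnectingClient) Call(method string, args, reply interface{}) error {
	c.mu.Lock()
	defer c.mu.Unlock()
	if c.rpc == nil {
		if err := c.dial(); err != nil {
			return err
		}
	}
	err := c.rpc.Call(method, args, reply)
	if err == rpc.ErrShutdown { // stale connection: reconnect, retry once
		if err = c.dial(); err != nil {
			return err
		}
		err = c.rpc.Call(method, args, reply)
	}
	return err
}

func main() {
	c := &reconnectingClient{addr: "localhost:9000"}
	_ = c // used as c.Call("Service.Method", args, &reply)
}
```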
Bala FA
cf2fb30ac7 event: Add event notification for object deletion in web browser. (#3226)
Fixes #3201
2016-11-10 07:42:55 -08:00
Harshavardhana
51d1e6f75b Fix missing CompleteMultipartUpload Etag. (#3227)
Fixes #3224
2016-11-10 07:41:02 -08:00
Harshavardhana
2f373684f5 Fix the server startup messages and help text. (#3211) 2016-11-09 23:37:12 -08:00
Anis Elleuch
5741a53d46 More portable way to list files to be spellchecked and include docs/ directory (#3220) 2016-11-09 15:17:47 -08:00
Aditya Manthramurthy
dd0698d14c Improve namespace lock API: (#3203)
- abstract out instrumentation information.
- use separate lockInstance type that encapsulates the nsMutex, volume,
  path and opsID as the frontend or top-level lock object.
2016-11-09 10:58:41 -08:00
Harshavardhana
3e67bfcc88 heal: Print heal command appropriately without export path. (#3208)
Fixes #3204
2016-11-09 10:50:14 -08:00
Anis Elleuch
ea579f5b69 Avoid shutdown fs multiple times and create a new fs in each loop (#3213) 2016-11-09 10:10:14 -08:00
Anis Elleuch
daf6f3a5c0 Fix path comparing in checkgopath (#3215) 2016-11-09 10:09:42 -08:00
Aditya Manthramurthy
d44e9d6da9 Prevent weird messages from rpc lib on start (Fixes #3209): (#3212)
This is done by not making the methods of the BucketMetaState interface
as methods (via type nesting) on the type implementing
RPCs (s3PeerAPIHandlers).
2016-11-08 23:47:44 -08:00
Bala FA
9c2cfb5cb6 tests: Add missing unit test cases for AbortMultipartUploadHandler(). (#3200)
Fixes #3070
2016-11-08 16:25:00 -08:00
Bala FA
91a0ade908 tests: add unit test for HeadObjectHandler (#3197)
Fixes #3068
2016-11-07 16:02:27 -08:00
Bala FA
5eb4002bf7 cleanup build scripts. (#3192) 2016-11-07 13:28:41 -08:00
Aditya Manthramurthy
85a5c358d8 Add bucket metadata state client/handler (Fixes #3022) (#3152)
- Adds an interface to update in-memory bucket metadata state called
  BucketMetaState - this interface has functions to:
     - update bucket notification configuration,
     - bucket listener configuration,
     - bucket policy configuration, and
     - send bucket event

- This interface is implemented by `localBMS` a type for manipulating
  local node in-memory bucket metadata, and by `remoteBMS` a type for
  manipulating remote node in-memory bucket metadata.

- The remote node interface, makes an RPC call, but the local node
  interface does not - it updates in-memory bucket state directly.

- Rename mkPeersFromEndpoints to makeS3Peers and refactored it.

- Use arrayslice instead of map in s3Peers struct

- `s3Peers.SendUpdate` now receives an arrayslice of peer indexes to
  send the request to, with a special nil value slice indicating that
  all peers should be sent the update.

- `s3Peers.SendUpdate` now returns an arrayslice of errors, representing
  errors from peers when sending an update. The array positions
  correspond to peer array s3Peers.peers

Improve globalS3Peers:

- Make isDistXL a global `globalIsDistXL` and remove from s3Peers

- Make globalS3Peers an array of (address, bucket-meta-state) pairs.

- Fix code and tests.
2016-11-07 12:09:24 -08:00
Harshavardhana
33c771bb3e tests: Add tests for browser peer rpc. Fixes #3062 (#3189) 2016-11-07 11:43:35 -08:00
Karthic Rao
286a8924fd Add leak detection to object-handler tests. (#3195) 2016-11-06 21:53:50 -08:00
Karthic Rao
efca29b00e Fix typos and comments in leak_detect_test.go. (#3193) 2016-11-06 21:02:29 -08:00
Harshavardhana
9161016962 tests: Improve coverage on signature v4 tests. (#3188)
Fixes #3065
2016-11-06 11:47:16 -08:00
Anis Elleuch
5ff30777e1 Rewrite connection muxer peek process to avoid server blocking by silent clients (#3187) 2016-11-06 11:41:01 -08:00
Anis Elleuch
754c0770d6 Merge ListenAndServe and ListenAndServeTLS for simplification purpose (#3186) 2016-11-05 20:32:13 -07:00
Harshavardhana
1105508453 connection muxer should use bufio.Reader to be simpler. (#3177) 2016-11-05 12:57:31 -07:00
Harshavardhana
1ba497950c Fix net.Listener to fully close the underlying socket. (#3171)
This led to races and to the socket still accepting connections. This patch
implements a way to reject new connections.
2016-11-05 09:43:28 -07:00
Harshavardhana
bf3c93a8cc Fix docker file to use binary endpoint. (#3180) 2016-11-04 18:25:30 -07:00
Aditya Manthramurthy
eb1bc67db1 Refactor: simplify signV4TrimAll() (#3179) 2016-11-04 13:52:22 -07:00
Krishna Srinivas
8408dfaa6c bootup-validation: Allow port configuration only using --address option. (#3166) 2016-11-04 12:14:19 -07:00
Harshavardhana
d192044915 router: PathPrefix router was wrong. (#3172) 2016-11-04 12:13:22 -07:00
Anis Elleuch
d9bab6b3bd sigv4: Trim and shrink spaces in headers values (#3162) 2016-11-03 16:41:25 -07:00
koolhead17
4503491a0a docs:Browser/added storageInfo content to the doc. (#3146) 2016-11-03 01:58:55 -07:00
Anis Elleuch
e6965ca066 Quit initializing disks process when term signal is invoked (#3163) 2016-11-02 15:27:36 -07:00
Harshavardhana
d9674f7524 Improve coverage of web-handlers.go (#3157)
This patch additionally relaxes the requirement for
access keys to be in a regex-restricted set of values.

Fixes #3063
2016-11-02 14:45:11 -07:00
koolhead17
f024deb1f8 docs: Fixed path for docker compose (#3156) 2016-11-02 08:54:20 -07:00
Anis Elleuch
79601d27f2 Use endpoint url when printing disks status in distributed mode (#3151) 2016-11-02 08:51:06 -07:00
Aditya Manthramurthy
226a69fe15 Add test for updateUploadJSON (Fixes #3060) (#3155) 2016-11-01 20:04:32 -07:00
Harshavardhana
9bb799462e Start all listeners when a given host resolves to multiple IPs. (#3145)
Default golang net.Listen only listens on the first IP when
host resolves to multiple IPs.

This change addresses a problem for example your ``/etc/hosts``
has entries as following

```
127.0.1.1  minio1

192.168.1.10 minio1
```

Trying to start minio as

```
minio server --address "minio1:9001" ~/Photos
```

Causes the minio server to be bound only to "127.0.1.1" which
is an incorrect behavior since we are generally interested in
`192.168.1.10` as well.

This patch addresses this issue: if the hostname is resolvable
and gives back a list of addresses associated with it,
we just bind on all of them, as that is the expected behavior.
2016-11-01 15:38:28 -07:00
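
A sketch of the multi-IP bind described above (simplified, with no cleanup on partial failure): resolve the host explicitly and open one listener per address.

```
package main

import (
	"fmt"
	"net"
)

// listenOnAllIPs sketches the fix described above: net.Listen binds
// only the first resolved address, so resolve explicitly and bind each.
func listenOnAllIPs(host, port string) ([]net.Listener, error) {
	addrs, err := net.LookupHost(host)
	if err != nil {
		return nil, err
	}
	var listeners []net.Listener
	for _, addr := range addrs {
		l, err := net.Listen("tcp", net.JoinHostPort(addr, port))
		if err != nil {
			return nil, err
		}
		listeners = append(listeners, l)
	}
	return listeners, nil
}

func main() {
	ls, err := listenOnAllIPs("minio1", "9001")
	if err != nil {
		fmt.Println(err)
		return
	}
	for _, l := range ls {
		fmt.Println("listening on", l.Addr())
	}
}
```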
Karthic Rao
8bffa78f7f Fix Instance type during benchmarks. (#3147)
- The benchmark initialization function was not taking into account the
  instance type (FS/XL), and was using the XL ObjectLayer even for FS
  benchmarks.
- This was leading to incorrect benchmark results for FS related
  benchmarks.
- The fix takes into account the instance type (FS/XL) and correctly
  returns FS backend for FS benchmarks.
2016-11-01 10:21:16 -07:00
Anis Elleuch
4b302173ae fs: Remove object append file if found since it is not always used (#3139) 2016-11-01 00:56:03 -07:00
Karthic Rao
9db2205db3 Fix List-Object Benchmark failure. (#3142) 2016-10-31 13:57:30 -07:00
Aditya Manthramurthy
6914fe436c Remove unused function. (#3143) 2016-10-31 12:18:15 -07:00
Anis Elleuch
807cc3c28d Tracing locking errors for better debugging (#3140) 2016-10-31 10:17:14 -07:00
Harshavardhana
f3c6c55719 posix: Fix windows performance issues. (#3132)
Do not attempt to fetch volume/drive information for
each i/o operation. We were doing this in all calls in
`posix.go`, which in turn created a terrible situation for
Windows. This issue does not affect the i/o path on Unix
platforms since statvfs calls are in the range of micro
seconds on these platforms.

This verification is only needed during startup, and we
let things fail at a later stage on Windows.
2016-10-31 09:34:44 -07:00
Harshavardhana
a773a6dce6 docker: Fix docker build. 2016-10-31 02:51:00 -07:00
Harshavardhana
62dcee3b14 dist: Moved systemd scripts to minio-systemd. (#3136)
Removing this from the repo.
2016-10-31 02:37:32 -07:00
Harshavardhana
490159ea89 docker: Move the docker document and compose to a separate dir. (#3135) 2016-10-31 02:08:16 -07:00
Karthic Rao
3e8cb8c937 build: ineffassign fixes. (#3134) 2016-10-30 23:32:46 -07:00
Aditya Manthramurthy
dd6ecf1193 Read/write quorum algo with uploads.json in xl (Fixes #3123) (#3124)
- Reads and writes of uploads.json in XL now uses quorum for
  newMultipart, completeMultipart and abortMultipart operations.

- Each disk's `uploads.json` file is read and updated independently for
  adding or removing an upload id from the file. Quorum is used to
  decide if the high-level operation actually succeeded.

- Refactor FS code to simplify the flow, and fix a bug while reading
  uploads.json.
2016-10-30 09:27:29 -07:00
Anis Elleuch
90e1803798 Fix docker-compose to follow the new server args style (#3128) 2016-10-29 16:53:09 -07:00
Anis Elleuch
a47ce7ab22 Add support of fallocate for FS and XL backends (#3032) 2016-10-29 12:44:44 -07:00
Krishna Srinivas
0b3282ac9f Logging: errorIf fatalIf print in the format [file.go:82:funcName()] (#3127) 2016-10-29 02:34:16 -07:00
Krishna Srinivas
79b98b5c25 boot: getPath() should take care of simple directory exports. (#3122)
fixes #3120
2016-10-28 13:08:56 -07:00
Krishnan Parthasarathi
6a57f2c1f0 XL: Add more information to panic msg (#3119) 2016-10-28 08:46:03 -07:00
Krishna Srinivas
0fa2477cb0 boot: check parameter syntax before initializing the system. (#3114)
fixes #3112
2016-10-28 08:45:32 -07:00
Anis Elleuch
7ffb337cd7 Update control cmd USAGE to indicate that it is possible to put access secret keys in URL (#3115) 2016-10-27 13:26:11 -07:00
Karthic Rao
a9dd2793f9 Better formatting and adding systemd config to run Minio server on custom port. (#3113) 2016-10-27 10:04:06 -07:00
Harshavardhana
9e2d0ac50b Move to URL based syntax formatting. (#3092)
For command line arguments we are currently following

- <node-1>:/path ... <node-n>:/path

This patch changes this to

- http://<node-1>/path ... http://<node-n>/path
2016-10-27 03:30:52 -07:00
Aditya Manthramurthy
30dc11a931 No listener.json for single-node mode (Fixes #3052) (#3108)
In FS or single-node XL mode, there is no need to save listener
configuration to persistent storage. As there is only one server, if it
is restarted, any connected listenBucketAPI clients are disconnected
and will have to reconnect - so there is nothing to actually store.

This incidentally solves #3052 by avoiding the problem.
2016-10-26 20:13:00 -07:00
Anis Elleuch
a15dc5fed5 Print message when creating the config file (#3089) 2016-10-26 18:44:22 -07:00
Anis Elleuch
f7c20b97a1 control cmds: Extract access and secret keys from URL if specified (#3109) 2016-10-26 18:42:12 -07:00
Harshavardhana
e9c45102b0 posix: Use sync.Pool buffers to copy in large buffers. (#3106)
These fixes are borrowed from the fixes required for GlusterFS i/o throughput.
2016-10-26 17:14:05 -07:00
Anis Elleuch
8871eb8e1e Show offline nodes after a fixed number of init retry (#3107) 2016-10-26 16:09:06 -07:00
Krishna Srinivas
0f32efb825 PostPolicy - rename of files/functions + add testcases (#3104) 2016-10-26 10:15:57 -07:00
Karthic Rao
63f1b4fdf4 mispell fixes. (#3100) 2016-10-26 08:46:14 -07:00
Krishnan Parthasarathi
31f2db6880 Remove leftover debug statement from PutObject StreamingSignature unit-test (#3099) 2016-10-26 03:17:47 -07:00
Krishnan Parthasarathi
2c9b406f6c Add TLS based tests to functional test suite (#3083) 2016-10-26 02:30:31 -07:00
Krishnan Parthasarathi
12cd2da265 Add PutObjectHandler unit tests covering failure cases (#3096) 2016-10-26 02:06:22 -07:00
Harshavardhana
485c0ea8bf tests: Combine v2 tests with the Suite itself. (#3088) 2016-10-25 13:34:14 -07:00
Krishnan Parthasarathi
49ba07d1d6 Use net.ParseCIDR instead of custom-built parsers (#3055)
Removes avoidable conversions between net.IP and string.
2016-10-25 11:14:47 -07:00
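
What the standard-library replacement looks like in isolation: parse the CIDR once, then test membership with net.IPNet.Contains.

```
package main

import (
	"fmt"
	"net"
)

func main() {
	// net.ParseCIDR replaces hand-rolled CIDR parsing: it returns the
	// IP and the network, and Contains() does the membership test.
	_, network, err := net.ParseCIDR("192.168.1.0/24")
	if err != nil {
		panic(err)
	}
	fmt.Println(network.Contains(net.ParseIP("192.168.1.10"))) // true
	fmt.Println(network.Contains(net.ParseIP("10.0.0.1")))     // false
}
```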
Krishna Srinivas
35e541e0b1 content-length-range policy should be honored for the uploaded object sizes. (#3076) 2016-10-24 23:47:03 -07:00
Aditya Manthramurthy
3977d6b7bd Lock bucket while modifying its metadata (Fixes #2979) (#3019)
- When modifying notification configuration
- When modifying listener configuration
- When modifying policy configuration

With this change we also stop early checking if the bucket exists, since
that uses a Read-lock and causes a deadlock due to the outer Write-lock.
2016-10-24 19:52:24 -07:00
Harshavardhana
0905398459 Fix benchmark tests. (#3082)
Fixes #3081
2016-10-24 18:45:06 -07:00
Aditya Manthramurthy
f41faf96b7 Fix newMultipartUpload to not leave stale uploads.json (Fixes #3014) (#3079) 2016-10-24 17:37:18 -07:00
Bala FA
36639b65a9 rename completeMultipartMD5() into getCompleteMultipartMD5(). (#3051) 2016-10-24 13:56:13 -07:00
Harshavardhana
7fc598b73f Fix user-agent prefix to have docker instead of suffix. (#3074) 2016-10-24 13:44:15 -07:00
Krishna Srinivas
21d41ad7fd init[windows]: Fix to handle the case when export path is a relative path. (#3054)
ex. to handle "minio server export"
2016-10-24 08:26:28 -07:00
Harshavardhana
fe56220d1a Do not print nil when hostname is provided as --address (#3053)
Fixes #3018
2016-10-23 23:55:12 -07:00
Harshavardhana
5782ec3ada Fix peers and web UIVersion validation. (#3048) 2016-10-23 12:32:35 -07:00
Krishnan Parthasarathi
8839c5105a Pass values to closures esp. when passed to defer statement. (#3050)
opsID, a variable on the stack, changes over the course of the
CompleteMultipartUpload function in xl-v1-multipart.go.  It was
being used in a function closure which was passed to a defer
statement. The variables used in a closure take the values they hold
at the time the closure is evaluated, which is indeterminate
behaviour here. It is incorrect to depend on the values of stack
variables at the end of a function, when deferred functions are executed.
2016-10-23 09:57:52 -07:00
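
The pitfall in miniature (a standalone example, not the minio code): a deferred closure sees the variable's final value, whereas passing the value as an argument snapshots it at the defer statement.

```
package main

import "fmt"

func main() {
	opsID := "op-1"

	// Closure: captures the variable itself, so it prints the value at
	// the time the deferred call runs ("op-2"), not at the defer statement.
	defer func() { fmt.Println("closure sees:", opsID) }()

	// Argument: evaluated now, so it snapshots "op-1".
	defer func(id string) { fmt.Println("argument sees:", id) }(opsID)

	opsID = "op-2"
}
```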
Harshavardhana
e293f079f5 github: Add PR and issue templates. (#3046) 2016-10-22 17:35:52 -07:00
Anis Elleuch
6c2d5e3d05 Correct the number of failed disks that we can withstand in startup message (#3045) 2016-10-22 10:36:50 -07:00
Krishna Srinivas
5999a23d3e When an object whose size is greater than 5G is uploaded using presigned POST we should return an error. (#3033)
fixes #2961
2016-10-22 09:05:01 -07:00
Krishna Srinivas
e51be73ac7 PresignedPost: Support for Signature V2 presigned POST Policy. (#3043)
fixes #2993
2016-10-22 08:57:12 -07:00
Harshavardhana
4b5b363c6c tests: Fix race between SetBucketListenerConfig and eventNotifyForBucketNotifications (#3041) 2016-10-22 02:35:33 -07:00
Krishna Srinivas
f2b0c08e34 logging: print file:line:funcName of the caller of errorIf and fatalIf (#3035) 2016-10-22 02:24:34 -07:00
Harshavardhana
83b364891d tests: Fix a potential race in ListenBucketNotificationHandler. (#3040) 2016-10-21 22:56:27 -07:00
Karthic Rao
87af2dbc43 dist: Adding systemd script for running Minio distributed. (#3034) 2016-10-21 20:56:56 -07:00
Justin Clift
885bac330b Added a "catch all" documentation link (#3038)
This should help guide other newbies ;)
2016-10-21 19:31:21 -07:00
Harshavardhana
e3ab478c70 tests: Fix a potential race in RemoveBucketNotification. (#3037)
Fixes #3036
2016-10-21 17:12:56 -07:00
Harshavardhana
ecaccefd2e tests: Implement GetBucketNotification handler tests. (#3029) 2016-10-21 02:39:37 -07:00
Harshavardhana
ece559afe2 api: Do not use sqs for ListenBucketNotification. (#3023)
Add more tests. Fixes #3024
2016-10-21 01:25:17 -07:00
Krishna Srinivas
d3aaf50a40 posix: Split on ":" in path d:\export makes minio use wrong disk. (#3027)
As the host/path split happens at a higher layer now, split at posix is not needed.
fixes part of #2987
2016-10-20 23:39:33 -07:00
Karthic Rao
43ce028840 Point link of the docs to docs.minio.io. (#3025) 2016-10-20 22:28:22 -07:00
Krishna Srinivas
32c3a558e9 distributed-XL: Support to run one minio process per export even on the same machine. (#2999)
fixes #2983
2016-10-20 18:31:02 -07:00
Anis Elleuch
41f9ab1c69 Translate storage access denied error to S3 Access Denied response (#3015) 2016-10-20 16:09:55 -07:00
Aditya Manthramurthy
8876e0a80a Delete bucket listener config file from disk (#3016) 2016-10-20 16:09:19 -07:00
Anis Elleuch
c21ac80268 Validate access/secret keys found in the config file and enhance invalid keys messages (#3017) 2016-10-20 16:07:24 -07:00
Frank
0e2cd1a64d Added clear subcommand for control lock (#3013)
Added clear subcommand for control lock with following options:

```
  3. Clear lock named 'bucket/object' (exact match).
    $ minio control lock clear http://localhost:9000/bucket/object

  4. Clear all locks with names that start with 'bucket/prefix' (wildcard match).
    $ minio control lock --recursive clear http://localhost:9000/bucket/prefix

  5. Clear all locks older than 10minutes.
    $ minio control lock --older-than=10m clear http://localhost:9000/

  6. Clear all locks with names that start with 'bucket/a' and that are older than 1hour.
    $ minio control lock --recursive --older-than=1h clear http://localhost:9000/bucket/a
```
2016-10-20 13:15:28 -07:00
Aditya Manthramurthy
6274727b71 Pick up server address from --address option (#3002) (#3008)
This makes sure that when SSL is enabled (for FS/single node mode),
the server address is picked up from the --address option (that needs
to include the hostname for SSL verification, and has to be input
appropriately by user), instead of just using ":<port>".
2016-10-20 11:39:10 -07:00
Harshavardhana
95567c68bf posix: Do not print errors in expected errors. (#3012)
Fixes #3011
2016-10-20 09:26:18 -07:00
Anis Elleuch
c189337b6e rpc: Support SNI in TLS certificates (#3009) 2016-10-20 07:43:31 -07:00
Krishnan Parthasarathi
6fc81dc162 Delete temp object/part when PutObject{,Part} fails (#3004) 2016-10-19 22:52:03 -07:00
Krishnan Parthasarathi
7d50361ca9 Move housekeeping before object layer initialization (#3001)
In a distributed setup the server should not perform any operation
on the storage layer after it is exported via RPC. E.g., cleaning up
temporary directories under .minio.sys/tmp may interfere with ongoing
PUT objects being served by the distributed setup.
2016-10-19 19:59:48 -07:00
Frank
19c51f3f3c Added ForceUnlock to namespace-lock (#2990) 2016-10-19 09:27:36 -07:00
Aditya Manthramurthy
c3bbadacbf Improve Peer RPC error handling (Fixes #2992) (#2995)
* Check for RPC connection shutdown and try again just once.

* Refactor SendRPC to use sync.WaitGroup
2016-10-18 21:26:58 -07:00
Anis Elleuch
2208992e6a More informative message when erasure fails to read a part of an object (#2989) 2016-10-18 13:09:26 -07:00
Anis Elleuch
bbba8e432a Add ssl support to s3/web peers connections (#2988) 2016-10-18 11:46:33 -07:00
Harshavardhana
39331b6b4e xl: GetCheckSumInfo() shouldn't fail if hash not available. (#2984)
In a multipart upload scenario, disks going down and coming back up
can lead to certain parts missing on the disk/server which was
going down. This is a valid case since these blocks can be
missing and should be healed through a heal operation. But we are
not supposed to fail prematurely since we have enough data on
the other disks as well within read-quorum.

This fix relaxes previous assumption, fixes a major corruption
issue reproduced by @vadmeste.

Fixes #2976
2016-10-18 11:13:25 -07:00
Mike Ralphson
6e748cb1cf Report when invalid bucket names are skipped in FS backend. (#2947) 2016-10-18 01:42:46 -07:00
Anis Elleuch
2005d656e6 Properly load creds from env and save them when server cmd is executed (#2970) 2016-10-17 23:14:41 -07:00
Aditya Manthramurthy
0f26ec8095 Propagate creds change to cluster (Fixes #2855) (#2929) 2016-10-17 20:18:08 -07:00
Harshavardhana
8d2347bc7b storage: DeleteFile should return errFileNotFound for ENOENT. (#2978) 2016-10-17 16:38:46 -07:00
Aditya Manthramurthy
0ff359ca0e Fix early init. problem for notifications (Fixes #2972) (#2977) 2016-10-17 16:38:29 -07:00
Harshavardhana
f8e13fb00e server: Startup sequence should be more idempotent. (#2974)
Fixes #2971 - honors ignore-disks option properly.
Fixes #2969 - change the net.Dial to have a timeout of 3secs.
2016-10-17 14:31:33 -07:00
Harshavardhana
686a610fc3 api: Nanosecond precision for API responses is valid with S3. (#2957)
We need to be compatible as well. Fixes #2955
2016-10-17 08:44:55 -07:00
Mike Ralphson
7fc1685b7a Allow Travis builds from GitHub forks (#2958)
Set the go_import_path explicitly. See
https://docs.travis-ci.com/user/languages/go#Go-Import-Path
2016-10-17 08:42:20 -07:00
Krishnan Parthasarathi
b89609dc2e XL: Filter out md5Sum from user defined headers (#2962) 2016-10-17 08:41:33 -07:00
Anis Elleuch
fa50312220 Avoid returning disk corrupted by servers in the middle of init all disks formats (#2964) 2016-10-17 08:39:55 -07:00
Harshavardhana
fee3f99a6e xl: heal bucket should validate if bucket exists first. (#2953)
Fixes #2944
2016-10-17 02:10:23 -07:00
Frank
ea406754a6 New dsync and added ForceUnlock to lock rpc server (#2956)
* Update dsync and added ForceUnlock function
* Added test cases for ForceUnlock
2016-10-17 01:53:29 -07:00
Aditya Manthramurthy
d02cb963d5 Fix listen-bucket (Fixes #2942) (#2949)
Don't close socket while re-initializing notify-listeners, as the rpc
client object is shared between notify-listeners and peer clients.

Also, improves SendRPC() readability by using GetPeerClient().
2016-10-16 20:52:10 -07:00
Anis Elleuch
334cdb5d64 XL total/free space calculation is done inside xl module (#2945) 2016-10-16 14:24:15 -07:00
Harshavardhana
a681af6953 Update minio browser with new changes. (#2940)
- Bucket policy set/unset support.
- Shareable URL support.
- Delete object support.
2016-10-15 08:51:53 -07:00
Anis Elleuch
5c3639c1b7 Redirect /minio to /minio/ when requests come from browsers (#2937) 2016-10-15 06:21:51 -07:00
Krishna Srinivas
903574db90 copy-object: Do not use ETag of source as MD5 as it will not be MD5 if source was uploaded as multipart. (#2938)
fixes #2934
2016-10-15 06:20:55 -07:00
Anis Elleuch
f463d3ce42 Fix a crash when service shutdown is signaled and object API is not ready yet (#2939) 2016-10-15 06:20:16 -07:00
Aditya Manthramurthy
17eeec6895 Bucket policy propagation (Fixes #2930) (#2932)
Fixes a serialisation bug - encoding/gob does not directly support
serializing `map[string]interface{}`, so we serialise to JSON and send a
byte array in the RPC call, and deserialize and update on the receiver.
2016-10-14 22:49:51 -07:00
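
The workaround in a standalone sketch (hypothetical type names): JSON-encode the map into a byte slice, which gob handles without issue, and decode it on the receiver.

```
package main

import (
	"encoding/json"
	"fmt"
)

// policyArgs sketches the RPC payload: the map travels as JSON bytes
// because encoding/gob cannot serialize map[string]interface{} without
// registering every concrete value type it may contain.
type policyArgs struct {
	PolicyJSON []byte
}

func main() {
	policy := map[string]interface{}{"Version": "2012-10-17", "Statement": []interface{}{}}

	// Sender: serialize to JSON and ship the byte slice over RPC.
	buf, err := json.Marshal(policy)
	if err != nil {
		panic(err)
	}
	args := policyArgs{PolicyJSON: buf}

	// Receiver: deserialize and apply the update.
	var got map[string]interface{}
	if err := json.Unmarshal(args.PolicyJSON, &got); err != nil {
		panic(err)
	}
	fmt.Println(got["Version"])
}
```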
Karthic Rao
070d3610ff tests: V2 Signature tests for object-handlers. (#2931) 2016-10-14 20:52:46 -07:00
Harshavardhana
f22862aa28 heal: Refactor heal command. (#2901)
- return errors for heal operation through rpc replies.
  - implement rotating wheel for healing status.

Fixes #2491
2016-10-14 19:57:40 -07:00
koolhead17
18be3bc95a docs: added space in README.md so that its compatible with doctor. (#2927) 2016-10-14 13:20:33 -07:00
Harshavardhana
2f520ed92f Remove errors package, add comments and simplify. (#2925) 2016-10-14 12:31:00 -07:00
Mateusz Gajewski
c03ce0f74a Display SSL expiry warnings (#2925) 2016-10-14 12:30:36 -07:00
Krishna Srinivas
0320a77dc0 HealBucket: create the bucket if it is missing in one of the disks. (#2924) 2016-10-14 11:12:17 -07:00
koolhead17
3349153058 docs: added space in source download steps so it appears as desired in (#2923)
doctor.
2016-10-14 08:49:12 -07:00
Harshavardhana
5e86352464 doc: Fix docker.md instructions and words. 2016-10-13 21:34:03 -07:00
Harshavardhana
18d125ef1c doc: Redo install instructions (#2922) 2016-10-13 19:44:27 -07:00
Aditya Manthramurthy
31be826f51 Fix missing error check for jsonrpc.Server.RegisterService() (#2921) 2016-10-13 17:34:10 -07:00
Harshavardhana
eb372d53df Fix docker release titles 2016-10-13 16:24:18 -07:00
Harshavardhana
1788c58d5c Add docker edge instructions 2016-10-13 16:19:14 -07:00
Karthic Rao
17e49a9ed2 signature-v2 fix. (#2918)
- Return errors similar to V4 Sign processing.
- Return ErrMissing fields when Auth Header fields are missing.
- Return InvalidAccessID when accessID doesn't match.

* tests: Adding V2 signature tests for bucket handler API's.
2016-10-13 09:25:56 -07:00
Aditya Manthramurthy
0aabc1d8d9 Use Peer RPC to propagate bucket policy changes (#2891) 2016-10-13 09:19:04 -07:00
Harshavardhana
55f6828750 Do not print update message unless there is an update. (#2919) 2016-10-13 09:17:08 -07:00
Aditya Manthramurthy
6303f26330 Protect map from concurrent access (Fixes #2915) (#2916)
Protects the Peers RPC clients map from concurrent access to fix a data race condition.
2016-10-13 01:33:50 -07:00
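
The standard fix, sketched generically (not minio's exact type): guard the map with a sync.RWMutex so concurrent readers and writers cannot race.

```
package main

import (
	"fmt"
	"sync"
)

// peerClients sketches an RPC-client map guarded against the data
// race described above; the string value stands in for a client type.
type peerClients struct {
	mu      sync.RWMutex
	clients map[string]string // addr -> client
}

func (p *peerClients) get(addr string) (string, bool) {
	p.mu.RLock()
	defer p.mu.RUnlock()
	c, ok := p.clients[addr]
	return c, ok
}

func (p *peerClients) set(addr, c string) {
	p.mu.Lock()
	defer p.mu.Unlock()
	p.clients[addr] = c
}

func main() {
	p := &peerClients{clients: make(map[string]string)}
	p.set("10.0.0.1:9000", "client-1")
	fmt.Println(p.get("10.0.0.1:9000"))
}
```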
Krishnan Parthasarathi
b59bac670a Handle err returned by rpc.Server.RegisterName (#2910) 2016-10-12 23:13:24 -07:00
Anis Elleuch
84acc820c7 Fix free drive space calculation in XL mode (#2917) 2016-10-12 20:22:15 -07:00
Harshavardhana
92858c7db2 Fix docker documentation. 2016-10-12 18:31:23 -07:00
Harshavardhana
fdaa129a5b Fix dockerfile container image. (#2892) 2016-10-12 18:09:08 -07:00
Anis Elleuch
df59967f59 Avoid checking date header of web requests by properly applying generic handlers (#2914) 2016-10-12 12:58:36 -07:00
Mateusz Gajewski
73982c8cb6 Listen bucket notification for multiple prefixes/suffixes (#2911)
* Listen bucket notification for multiple prefixes/suffixes

* After review fixes
2016-10-12 11:02:15 -07:00
Aditya Manthramurthy
6199aa0707 Peer RPCs for bucket notifications (#2877)
* Implements a Peer RPC router that sends info to all Minio servers in the cluster.
* Bucket notifications are propagated to all nodes via this RPC router.
* Bucket listener configuration is persisted to separate object layer
  file (`listener.json`) and peer RPCs are used to communicate changes
  throughout the cluster.
* When events are generated, RPC calls are made to send them to other
  servers where bucket listeners may be connected.
* Some bucket notification tests are now disabled as they cannot work in
  the new design.
* Minor fix in `funcFromPC` to use `path.Join`
2016-10-12 01:03:50 -07:00
Krishnan Parthasarathi
a5921b5743 Use same timestamp for all chunks in chunked signature (#2908) 2016-10-11 23:46:51 -07:00
Karthic Rao
f0538dbb5c fix broken link for Go Installation in CONTRIBUTING.md (#2907) 2016-10-11 22:19:35 -07:00
Karthic Rao
ff91ecb177 tests: Adding unknown signature type test for API handlers. (#2905) 2016-10-11 20:38:10 -07:00
Frank
a6357502c1 Correct typo in error string (#2902) 2016-10-11 08:56:02 -07:00
Harshavardhana
fa8ea41cd9 lock/instrumentation: Cleanup and print in user friendly form. (#2807) 2016-10-11 00:50:27 -07:00
Karthic Rao
3ac6790ca2 tests: Add Object Layer nil test for bucket-handler API's (#2899) 2016-10-11 00:00:02 -07:00
Krishna Srinivas
268b96058f ns-lock: lock namespace during FS object operations. (#2896) 2016-10-10 10:20:04 -07:00
Frank
0d031c432b Fix typo in serverVersion (#2894) 2016-10-10 10:11:56 -07:00
Frank
6e8f3224c5 Test coverage for lock rpc server (#2893)
* Add test coverage for removeEntry and removeEntryIfExists
* Initial test framework for Lock/Unlock functionality
* Add clarification comments
* Add test coverage code for RLock() and RUnlock()
* Add test coverage for Expired() function
* Have all lock-rpc-server test functions start with the same prefix
* Properly initialize JWT security token
2016-10-10 10:11:29 -07:00
Karthic Rao
9c53e9f4c3 tests: Enhance coverage for bucket policy handlers. (#2895) 2016-10-10 09:29:56 -07:00
Krishnan Parthasarathi
2d5e988a6d Refactor streaming signatureV4 w/ state machine (#2862)
* Refactor streaming signatureV4 w/ state machine

- Used state machine to make transitions between reading chunk header,
  chunk data and trailer explicit.

* debug: add print/panic statements to gather more info on CI failure

* Persist lastChunk status between Read() on ChunkReader

... remove panic() which was added as interim aid for debugging.

* Add unit-tests to cover v4 streaming signature
2016-10-10 01:42:32 -07:00
Harshavardhana
3cfb23750a control: Implement service command 'stop,restart,status'. (#2883)
- stop - stops all the servers.
- restart - restart all the servers.
- status - prints status of storage info about the cluster.
2016-10-09 23:03:10 -07:00
Anis Elleuch
57f75b1d9b Ignore copy conditions when ETag is not available (#2888) 2016-10-09 16:21:42 -07:00
Anand Babu (AB) Periasamy
4560cbc20c List docker as first example 2016-10-09 13:06:25 -07:00
Karthic Rao
e213172431 tests: Missing anonymous tests for bucket-handlers. (#2885) 2016-10-09 09:21:37 -07:00
Krishna Srinivas
f5f007e183 Test: Add test case for xl.HealObject() (#2884)
fixes #2842
2016-10-08 17:08:17 -07:00
Karthic Rao
09463265ce tests: Adding anonymous requests tests for bucket policy handlers. (#2882) 2016-10-08 01:04:26 -07:00
Karthic Rao
8f4cf2a7d0 tests: anonymous/unsigned tests for object handler API's . (#2881) 2016-10-07 23:28:50 -07:00
Karthic Rao
30183c4a9a tests: cleanup and unsigned request test. (#2880)
- Cleaning up of the ListMultipartUpload API test for improving readability,
  code maintenance and extensibility.
- Moving ListMultipartUploads to Go 1.7 sub tests.
- Using the new Anonymous request helper function for
  ListMultipartUploads.
2016-10-07 20:16:57 -07:00
Karthic Rao
d1df5e0ae1 tests: Add helper function for API handler anonymous request tests. (#2876)
- Add helper function for API handler anonymous request tests.
- Add PutObject Part Anonymous request case using the new helper
  function to validate its functionality.
2016-10-07 11:16:11 -07:00
Harshavardhana
f1bc9343a1 prep: Initialization should wait instead of exit the servers. (#2872)
- Servers do not exit for invalid credentials; instead they print and wait.
- Servers do not exit for version mismatch; instead they print and wait.
- Servers do not exit for time differences between nodes; they print and wait.
2016-10-07 11:15:55 -07:00
Frank
e53a9f6cab Update vendorized version of dsync that relaxes read quorum to N/2 (#2874) 2016-10-07 08:02:59 -07:00
Karthic Rao
97f4989945 tests: cleaning up. (#2875)
- Clean up PutObjectPart and ListObjectPart API handler tests.
- Add more comments, make the tests more readable.
- Add verification for HTTP response status code.
- Initialize the test using object Layer.
- Move to Go 1.7 sub tests.
2016-10-07 08:02:37 -07:00
Harshavardhana
ed676667d0 vendor: update reedsolomon package with new changes. (#2870)
- Cached inverse matrices for better reconstruct performance.
- New error reconstruction required is returned, helpful in
  initiating healing.
2016-10-06 21:57:42 -07:00
Harshavardhana
1e5e213d24 auth: Make sure we initialize or change config before RPC requests. (#2867) 2016-10-06 13:35:56 -07:00
Karthic Rao
a8105ec068 - Test utility function for easy asserting of cases wherein objectLayer (#2865)
is `nil` in API handlers.
- Remove the existing tests for the `nil` check and use the new method
  to test for object layer being `nil`.
2016-10-06 13:34:33 -07:00
Krishna Srinivas
c6d2967b84 Doc: Document list of supported environment variables. (#2864)
fixes #2773
2016-10-06 09:30:08 -07:00
Krishna Srinivas
bb9be02228 minio-browser: do not redirect to /minio if MINIO_BROWSER=off (#2863)
fixes #2837
2016-10-06 08:30:32 -07:00
Harshavardhana
64f37bbf5b rpc: Add RPC client tests. (#2858) 2016-10-06 02:30:54 -07:00
Karthic Rao
0fc96fa25c Refactor bucket policy handler test to use API test initializer. (#2859) 2016-10-06 02:02:42 -07:00
Karthic Rao
2d8c6f8288 unit test for bucketPolicyConditionMatch function. (#2857) 2016-10-06 00:23:46 -07:00
Harshavardhana
b94211bd66 api: ListObjectsV1 compliance with AWS S3. (#2856)
XSD, the XML schema definition for SOAP operations
on S3, imposes positional restrictions on XML
output.

Fix the response by re-arranging the positions in
accordance with S3 behavior.

Fixes #2849
2016-10-05 20:12:47 -07:00
Harshavardhana
6494b77d41 server: Add more elaborate startup messages. (#2731)
These messages are based on our prep stage during XL
setup and print more informative messages regarding
drive information.

This change also does a much needed refactoring.
2016-10-05 12:48:07 -07:00
Bala FA
63a7ca1af0 web: fix jwt token expiry set to one day by default. (#2819)
Fixes #2818
2016-10-05 10:18:55 -07:00
Krishna Srinivas
95f544657a Signature-V2: use raw resource/query from the request for signature calculation. (#2833) 2016-10-05 09:18:53 -07:00
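For reference, signature v2 computes an HMAC-SHA1 over a string-to-sign that ends with the canonicalized resource, which is why path-cleaning the raw resource/query breaks verification. A minimal sketch of the standard scheme follows (amz headers omitted for brevity; illustrative, not minio's actual code):

```go
package main

import (
	"crypto/hmac"
	"crypto/sha1"
	"encoding/base64"
	"fmt"
	"strings"
)

// signV2 builds the AWS v2 string-to-sign and signs it with HMAC-SHA1.
// CanonicalizedAmzHeaders are omitted for brevity; the resource must be
// the raw resource/query taken verbatim from the request.
func signV2(secretKey, method, contentMD5, contentType, date, rawResource string) string {
	stringToSign := strings.Join([]string{
		method, contentMD5, contentType, date, rawResource,
	}, "\n")
	mac := hmac.New(sha1.New, []byte(secretKey))
	mac.Write([]byte(stringToSign))
	return base64.StdEncoding.EncodeToString(mac.Sum(nil))
}

func main() {
	fmt.Println(signV2("SECRET", "GET", "", "", "Tue, 27 Mar 2007 19:36:42 +0000", "/bucket/key?acl"))
}
```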
Harshavardhana
740a919e25 config: Use migrateV8 to v9 function properly. (#2852) 2016-10-05 02:28:04 -07:00
Krishnan Parthasarathi
402c92beda Add listObjectParts test w/ unknown request signature type (#2847) 2016-10-04 07:57:35 -07:00
Karthic Rao
6a9013b97c misspell fixes. (#2835) 2016-10-04 00:09:21 -07:00
Krishnan Parthasarathi
73b50aea2d Add preSign auth type tests for ListObjectPartsHandler and PutObjectPartHandler (#2834) 2016-10-03 22:05:33 -07:00
Aditya Manthramurthy
315e66858c Add PostgreSQL notifier (#2739) (#2824)
* The user is required to specify a table name and database connection
  information in the configuration file.

* INSERTs and DELETEs are done via prepared statements for speed.

* Assumes a table structure, and requires PostgreSQL 9.5 or above due to
  the use of UPSERT.

* Creates the table if it does not exist with the given table name using
  a query like:

    CREATE TABLE myminio (
        key varchar PRIMARY KEY,
        value JSONB
    );

* Vendors some required libraries.
2016-10-03 17:29:55 -07:00
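A minimal sketch of the UPSERT described above, assuming the `myminio` table from the commit's example and the lib/pq driver (the commit only says some required libraries are vendored):

```go
package main

import (
	"database/sql"
	"log"

	_ "github.com/lib/pq" // assumed Postgres driver
)

func main() {
	db, err := sql.Open("postgres", "host=localhost dbname=minio sslmode=disable")
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()

	// ON CONFLICT ... DO UPDATE is the UPSERT that requires PostgreSQL 9.5+.
	upsert, err := db.Prepare(
		`INSERT INTO myminio (key, value) VALUES ($1, $2)
		 ON CONFLICT (key) DO UPDATE SET value = EXCLUDED.value`)
	if err != nil {
		log.Fatal(err)
	}
	defer upsert.Close()

	// On an object-created event, store the event JSON under the object key.
	if _, err = upsert.Exec("mybucket/photo.jpg", `{"eventName":"s3:ObjectCreated:Put"}`); err != nil {
		log.Fatal(err)
	}
}
```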
Krishnan Parthasarathi
4f902d42b2 Add unit-tests for ListObjectParts API handler (#2826)
* Add missing uploadID test
... make variables in test code unexported.
* Add ServerNotInitialized test for ListObjectPartsHandler
* Add tests for ListObjectParts with signatureV2 and Anonymous requests
* Add failure test cases for ListObjectParts
2016-10-03 08:54:57 -07:00
Krishna Srinivas
61a18ed48f sha256: Verify sha256 along with md5sum, signature is verified on the request early. (#2813) 2016-10-02 15:51:49 -07:00
Anis Elleuch
b5a6dd1395 Avoid path-cleaning policy resources for a better compliance with S3 (#2823) 2016-10-01 21:30:25 -07:00
Krishnan Parthasarathi
83e6e1060e Layer LimitReader responsibly allowing sign verification to work (#2821) 2016-10-01 09:37:40 -07:00
Krishnan Parthasarathi
ddeb8242d8 PutObjectPartHandler unit-tests (#2810) 2016-10-01 08:23:26 -07:00
Harshavardhana
a08052f640 Add docker pulls badge. 2016-09-30 19:19:19 -07:00
Harshavardhana
5ecba587f7 api: Relax object name validation. (#2814)
Fixes #2812
2016-09-30 16:56:36 -07:00
Harshavardhana
db3da97a50 signature/v2: Fix presigned requests. 2016-09-30 15:22:00 -07:00
Harshavardhana
5885ffc8ae signature: Add legacy signature v2 support transparently. (#2811)
Add new tests as well.
2016-09-30 14:32:13 -07:00
Anis Elleuch
9fb1c89f81 Add TLS encryption capability to RPC clients (#2789) 2016-09-29 23:42:37 -07:00
Anis Elleuch
1e6afac3bd Add NATS notifier (#2795) 2016-09-29 23:42:10 -07:00
Harshavardhana
64083b9227 signature: Region changes should be handled just like AWS. (#2805)
- PutBucket happens with 'us-east-1'.
- ListBuckets happens with any region.
- GetBucketLocation happens with 'us-east-1' and location is returned.
2016-09-29 15:51:00 -07:00
Krishnan Parthasarathi
5fdd768903 Make addition of TopicConfig to globalEventNotifier go-routine safe (#2806) 2016-09-28 22:46:19 -07:00
Harshavardhana
f72163f856 build: Deprecate requirement of GOROOT (#2803) 2016-09-28 18:49:16 -07:00
Krishnan Parthasarathi
428629f577 Add unit tests for server-main.go (#2802) 2016-09-28 11:19:07 -07:00
Harshavardhana
1edd74dda2 update: Deprecate the usage of update=yes query param. (#2801)
Fixes #2799
2016-09-28 02:41:21 -07:00
Krishnan Parthasarathi
740ecf530c Add PutBucketNotification, ListenBucketNotification handler unit-tests. (#2787) 2016-09-28 01:08:03 -07:00
Aditya Manthramurthy
10d2ef5449 Remove comments relating to deprecated MINIO_DEBUG envvar (#2797) 2016-09-27 18:28:46 -07:00
Aditya Manthramurthy
8ea571c7f7 Remove MINIO_DEBUG environment variable (#2794)
Removes the unimplemented settings of MINIO_DEBUG=mem and makes
MINIO_DEBUG=lock the default behaviour.
2016-09-27 14:35:43 -07:00
Harshavardhana
ca3022d545 api: Change ListenBucketNotification with new API format. (#2791)
Take prefix, suffix and events as query params.
2016-09-27 13:17:43 -07:00
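A small sketch of what such a request URL could look like under the new format, with the prefix, suffix and events parameter names taken from the commit message and the endpoint, bucket and event values purely illustrative:

```go
package main

import (
	"fmt"
	"net/url"
)

func main() {
	v := url.Values{}
	v.Set("prefix", "photos/")
	v.Set("suffix", ".jpg")
	v.Add("events", "s3:ObjectCreated:*") // standard S3 event names
	v.Add("events", "s3:ObjectRemoved:*")

	listenURL := "http://localhost:9000/mybucket?" + v.Encode()
	fmt.Println(listenURL)
}
```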
Anis Elleuch
9417614a8e Recalculate free minimum disk space (#2788)
* Fix calculation of free disk space by using blocks available to unprivileged users

* Use a fixed minimum free disk space instead of a percentage
2016-09-27 12:46:38 -07:00
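A minimal, Linux-specific sketch of the first point, computing free space from Bavail (blocks available to unprivileged users) rather than Bfree (which includes the root-reserved blocks); this illustrates the idea, not minio's exact code:

```go
package main

import (
	"fmt"
	"log"
	"syscall"
)

// freeDiskSpace returns the bytes an unprivileged user can actually use.
func freeDiskSpace(path string) (uint64, error) {
	var stat syscall.Statfs_t
	if err := syscall.Statfs(path, &stat); err != nil {
		return 0, err
	}
	// Bavail excludes the blocks reserved for root; Bfree would not.
	return stat.Bavail * uint64(stat.Bsize), nil
}

func main() {
	free, err := freeDiskSpace("/")
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("free: %d bytes\n", free)
}
```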
Aditya Manthramurthy
70d52bbc4c Add unit test for rate-limit-handler (#2661) (#2784) 2016-09-26 21:31:12 -07:00
Harshavardhana
6aa2fc95c0 Revert "bucket: refactor policies and fix bugs related to enforcing policies. (#2766)"
This reverts commit ca5ca8332b.
2016-09-26 19:32:33 -07:00
Harshavardhana
cfbab22237 web: Remove bucket policy when we have no more statements. (#2779) 2016-09-26 03:11:22 -07:00
Harshavardhana
be0e06c0aa web: Simplify and rename GetAllBucketPolicy --> ListAllBucketPolicies. (#2778) 2016-09-25 21:53:19 -07:00
Harshavardhana
1c941fd787 rpc: Should validate server versions. (#2775)
Fixes #2764
2016-09-24 03:34:45 -07:00
Krishnan Parthasarathi
669783f875 Purge stale object cache entry (#2770) 2016-09-23 19:55:28 -07:00
Krishnan Parthasarathi
27e474b3d2 Improve code coverage in bucket-notification-handlers.go (#2759)
* Fix incorrect test cases for bucket-notification handler

* Add tests covering failure cases for bucket notification
2016-09-23 13:32:51 -07:00
Krishna Srinivas
1e53316241 Add tests for presigned-get (#2767)
* web-handlers: support for presigned-get json-rpc call for MinioBrowser's "share" feature.

* Add tests for presigned-get
2016-09-23 01:25:49 -07:00
Harshavardhana
ca5ca8332b bucket: refactor policies and fix bugs related to enforcing policies. (#2766)
This patch also addresses the problem of double caching at the
object layer: once at XL and again at the handler layer.
2016-09-22 23:47:48 -07:00
Bala FA
aa579bbc20 web: add method to get all policies for given bucket name. (#2756)
Refer #1858
2016-09-22 23:06:45 -07:00
Harshavardhana
e375d822da bucket: SetBucketPolicy should save a valid Version and validate. (#2762) 2016-09-22 22:27:21 -07:00
astaxie
4bc2eb9a4d chinese translate (#2637) 2016-09-22 20:58:10 -07:00
Anis Elleuch
fc783f8407 More tests for web handlers (#2755)
* Return negative values of Total and Free in StorageInfo() when we fail to get disk info

* Return consistent messages in web handlers when the server is not initialized
2016-09-22 16:35:12 -07:00
Anis Elleuch
79e0c52fc2 Increment golang version for docker images (#2761) 2016-09-22 16:32:54 -07:00
Anis Elleuch
ef22330563 Require go 1.7.1 to build Minio server (#2727) 2016-09-22 10:33:52 -07:00
Harshavardhana
e1adbf83d8 docker: Compose config should point to edge tag. 2016-09-22 09:55:45 -07:00
Karthic Rao
1148f95292 ineffassign fixes (#2758) 2016-09-21 23:03:54 -07:00
Karthic Rao
f7430ec09c use runtime/debug.Stack() in leak detect test (#2757) 2016-09-21 22:04:35 -07:00
Karthic Rao
b8903d842c api/complete-multipart: fixes and tests. (#2719)
* api/complete-multipart: tests and simplification.

- Removing the logic of sending white space characters.
- Fix for incorrect HTTP response status for certain cases.
- Tests for New Multipart Upload and Complete Multipart Upload.

* tests: test for Delete Object API handler
2016-09-21 20:08:08 -07:00
Aditya Manthramurthy
32f097b4d6 Controller rpc tests (#2709)
* Test code for controller-handler operations:

* Heal operations
* List operation
* Switch to "testing" lib, moving away from gocheck
* Minor refactors

* Remove extra call to initGracefulShutdown

* Remove dead code in mainControl:

Dead code found by the TestControlMain() test function that always
passes.

* Add tests for control-*-main.go
2016-09-21 19:58:50 -07:00
Krishnan Parthasarathi
559ad38b8c Add bucket-notification-handler tests (#2750) 2016-09-21 17:41:34 -07:00
Anis Elleuch
90417d2dd6 Check for bucket existence in Set/Get/Remove bucket policy workflow + tests (#2745) 2016-09-21 16:38:50 -07:00
Anis Elleuch
e66fb4bd7b configMigrate() returns errors + tests (#2735) 2016-09-21 09:44:57 -07:00
Harshavardhana
018c90dae7 events: ElasticSearch doesn't support objects with '/' in them. (#2747)
Fix this by using a unique sha256 generated for each unique key.
2016-09-20 16:36:18 -07:00
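A minimal sketch of deriving such a key, assuming the bucket/object concatenation as the unique input (the exact input string is not spelled out in the commit):

```go
package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
)

// eventID derives a stable, slash-free document ID from an object path,
// since ElasticSearch rejects IDs containing '/'.
func eventID(bucket, object string) string {
	sum := sha256.Sum256([]byte(bucket + "/" + object))
	return hex.EncodeToString(sum[:])
}

func main() {
	fmt.Println(eventID("mybucket", "photos/2016/pic.jpg"))
}
```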
Anis Elleuch
a5066e8f76 Better code coverage of handler-utils.go (#2746) 2016-09-20 10:40:46 -07:00
Harshavardhana
0a3448c8b6 events: Change event notifiers to delete and update keys. (#2742)
ElasticSearch and Redis are both treated like databases.
Each index is keyed on the object name, uniquely identifying
the event. A delete event of the named object removes the
corresponding index on ElasticSearch and Redis respectively.
2016-09-20 02:11:17 -07:00
Harshavardhana
c4964232eb config: Fail to start for config mistakes. (#2740) 2016-09-19 15:23:49 -07:00
Harshavardhana
ef3c807b4a policies: Parser should handle Principals with various forms. (#2733)
Handles cases for these three combinations

  - "Principal": "*",
  - "Principal": { "AWS" : "*" }
  - "Principal": { "AWS" : [ "*" ]}

Fixes #2732
2016-09-19 13:52:28 -07:00
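One way to accept all three combinations is a custom UnmarshalJSON that tries each shape in turn; the type and field names below are hypothetical, a sketch rather than minio's actual parser:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// principal accepts "*", {"AWS": "*"} and {"AWS": ["*"]}.
type principal struct {
	AWS []string
}

func (p *principal) UnmarshalJSON(data []byte) error {
	var s string
	// Form 1: a bare string, "Principal": "*"
	if err := json.Unmarshal(data, &s); err == nil {
		p.AWS = []string{s}
		return nil
	}
	// Forms 2 and 3: an object whose AWS field is a string or a list.
	var obj struct {
		AWS json.RawMessage `json:"AWS"`
	}
	if err := json.Unmarshal(data, &obj); err != nil {
		return err
	}
	if err := json.Unmarshal(obj.AWS, &s); err == nil {
		p.AWS = []string{s}
		return nil
	}
	return json.Unmarshal(obj.AWS, &p.AWS)
}

func main() {
	for _, in := range []string{`"*"`, `{"AWS":"*"}`, `{"AWS":["*"]}`} {
		var p principal
		if err := json.Unmarshal([]byte(in), &p); err != nil {
			fmt.Println("error:", err)
			continue
		}
		fmt.Printf("%s -> %v\n", in, p.AWS)
	}
}
```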
Harshavardhana
113b93346b lock: Make some cleanup and moving the code around. (#2718)
This patch just avoids a lot of ifs and inverts some logic.
2016-09-19 13:14:55 -07:00
Krishna Srinivas
a955676986 Signature-V4: Dump the request with error message on signature mismatch. (#2734)
fixes #2691
2016-09-19 10:17:46 -07:00
Harshavardhana
725df557b5 tests: Add tests for bucket-notification-utils (#2726)
Part - 2 final fix #2711
2016-09-17 03:19:39 -07:00
koolhead17
250ac644d6 docs: Modified README.md with alias addition and bucket creation steps. (#2725) 2016-09-17 02:57:36 -07:00
Harshavardhana
490056eee3 tests: Add tests for bucket-notification-utils (#2723)
Part fix - 1 for #2711
2016-09-16 17:26:27 -07:00
Harshavardhana
797d749322 tests: Add tests for filterRuleMatch (#2722)
Part-1 fix for #2418
2016-09-16 16:44:44 -07:00
Harshavardhana
79888bfff7 tests: Add auth-handler. (#2721)
Fixes #2658
2016-09-16 15:17:49 -07:00
koolhead17
6ca57e81f1 docs: Fixed markdown typo in README.md. (#2720) 2016-09-16 14:26:28 -07:00
Anis Elleuch
010f61e91f Add more tests for event-notifier code (#2716) 2016-09-16 14:26:05 -07:00
Harshavardhana
9216981262 tests: Add test for diskCount. (#2717)
Fixes #2312
2016-09-16 13:44:52 -07:00
Anis Elleuch
b89a1cd482 tests: Implemented more tests for fs-v1*.go (#2686) 2016-09-16 13:06:49 -07:00
Harshavardhana
7d37dea449 tests: Add more streaming signature tests. (#2713)
Part fix for #2621
2016-09-16 02:45:42 -07:00
Frank
df2ef64d20 Upgrade to new dsync version incl. stale lock detection (#2708) 2016-09-16 00:30:55 -07:00
Anis Elleuch
7a549096de XL and FS use different tree walk ignored errors (#2707) 2016-09-15 13:43:40 -07:00
Harshavardhana
a1ff351f21 tests: Fix ListMultipartUploadsHandler tests. (#2705) 2016-09-15 01:44:19 -07:00
Harshavardhana
03430d0db8 tests: Add ListBucketHandler tests. (#2701)
part-3 final fix for #2412
2016-09-14 23:53:42 -07:00
Anis Elleuch
6f73d597e0 Fix tracing twice an error in fs Complete Multipart Upload (#2703) 2016-09-14 21:24:54 -07:00
koolhead17
e273a40345 docs: Modified README.md by providing information about Minio server (#2704)
data directory.
2016-09-14 21:24:12 -07:00
Anis Elleuch
a84548d7ea Fix FS remove bucket regression bug (#2693) 2016-09-14 16:41:39 -07:00
Kartik Lunkad
19e01ceb19 QuickStart docs for Minio Server Setup needs update #2698 (#2700) 2016-09-14 15:16:59 -07:00
Harshavardhana
1e6d67b16d server: Remove deadcode. (#2699) 2016-09-14 13:43:08 -07:00
Aditya Manthramurthy
6533927237 Lock-free rate-limit algorithm + bug-fix (#2694) 2016-09-14 11:27:37 -07:00
Harshavardhana
da9ae574df server: We should fail properly during server startup. (#2689)
Fixes #2688
2016-09-14 01:11:03 -07:00
Harshavardhana
16e4a7c200 Merge pull request #2657 from minio/distributed
Distributed XL support
2016-09-13 22:34:49 -07:00
Harshavardhana
ee7e70c992 tests: Add tests for ListMultipartUploads, DeleteMultipleObjects. (#2649)
Additionally adds PostPolicyHandler tests.
2016-09-13 21:22:31 -07:00
Krishna Srinivas
54a9f59a13 Init: Print SQS ARNs after globalEventNotifier is inited. (#2682)
fixes #2681
2016-09-13 21:18:30 -07:00
Harshavardhana
e6fd664331 tests: Fix format-config tests. 2016-09-13 21:18:30 -07:00
Karthic Rao
b247ec9352 tests: refactor object-handler tests. (#2656)
- Move the initialization to a common executor for object layer API
  tests.
2016-09-13 21:18:30 -07:00
Harshavardhana
43befab8ef Change distributed server wording. 2016-09-13 21:18:30 -07:00
Harshavardhana
eae0281c64 tests: Add GetBucketLocation, HeadBucket tests. (#2644) 2016-09-13 21:18:30 -07:00
Karthic Rao
8bd78fbdfb performance: gjson parsing for readXLMeta, listParts, getObjectInfo. (#2631)
- Using gjson for constructing xlMetaV1{} in readXLMeta.
- Test for parsing constructing xlMetaV1{} using gjson.
- Changes made since benchmarks showed 30-40% improvement in speed.
- Follow up comments in issue https://github.com/minio/minio/issues/2208
  for more details.
- gjson parsing of parts from xl.json for listParts.
- gjson parsing of statInfo from xl.json for getObjectInfo.
- Vendorizing gjson dependency.
2016-09-13 21:18:30 -07:00
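A small sketch of the kind of lookup gjson enables here: single paths are read without unmarshalling the whole document, which is where the quoted speedup comes from. The xl.json field names below are assumptions for illustration:

```go
package main

import (
	"fmt"

	"github.com/tidwall/gjson"
)

func main() {
	// Trimmed xl.json-like document; field names are illustrative.
	xlMetaBuf := []byte(`{
		"stat": {"size": 5242880},
		"parts": [{"number": 1, "size": 5242880, "etag": "abc123"}]
	}`)

	// Fetch one value without decoding the full JSON.
	fmt.Println("object size:", gjson.GetBytes(xlMetaBuf, "stat.size").Int())

	// Walk only the parts array, as listParts would.
	for _, part := range gjson.GetBytes(xlMetaBuf, "parts").Array() {
		fmt.Printf("part %d: %d bytes\n",
			part.Get("number").Int(), part.Get("size").Int())
	}
}
```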
Krishnan Parthasarathi
66459a4ce0 Add unit-tests for formatting disks during initialization (#2635)
* Add unit-tests for formatting disks during initialization

- Fixed corresponding code at places where it was deviating from the
  tabular spec.

* Added more test cases and simplified algo

... based on feedback from ``go test -coverprofile``.
2016-09-13 21:18:30 -07:00
Harshavardhana
182109f0de xl: Heal format.json properly on fresh disks. 2016-09-13 21:18:30 -07:00
Harshavardhana
9998e9ea19 api: Response timeFormat does not need to have nano-second precision.
Fixes an error reported by s3verify.
2016-09-13 21:18:30 -07:00
Harshavardhana
ba2ba328da server: Fixes for various conditions
- Fix distributed branch to be able to run FS version.
- Fix distributed branch to be able to run XL local disks.
- Ignore initialization failures of notification and bucket
  policies; the codepath should load whatever is possible.
2016-09-13 21:18:30 -07:00
Anis Elleuch
67b8080144 Fix control lock rpc name in control lock cmd (#2627) 2016-09-13 21:18:30 -07:00
Anis Elleuch
239a34ca97 Add tests for regular and streaming v4 PutObject Handler (#2618) 2016-09-13 21:18:30 -07:00
Krishna Srinivas
81d8263ae2 binary-update: Do not fetch update info for minio binary compiled from source.
fixes #2494
2016-09-13 21:18:30 -07:00
Krishna Srinivas
b4e4846e9f PutObject: object layer now returns ObjectInfo instead of md5sum to avoid extra GetObjectInfo call. (#2599)
From the S3 layer, after PutObject we were calling GetObjectInfo for bucket notification. This can
be avoided if PutObject returns ObjectInfo.

fixes #2567
2016-09-13 21:18:30 -07:00
Krishna Srinivas
92e49eab5a FS/Multipart: Do not rename append files to another tmp file as the append files are already in tmp location. (#2612) 2016-09-13 21:18:30 -07:00
Harshavardhana
c4a7b950a0 fs: Fix asynchronous multipart bug.
Construct part path properly.
2016-09-13 21:18:30 -07:00
Karthic Rao
1ce339abeb Fixing ineffassign errors (#2608) 2016-09-13 21:18:30 -07:00
Aditya Manthramurthy
a1f922315b Add docker-compose file to run Minio in distributed mode (#2606)
Serves as a starting point to run a Minio cluster using Docker. The
file can be used as configuration for the docker-compose tool to start
4 Minio servers in distributed mode.

* Add a docker-compose.yml file to run 4 minio server instances in
  distributed mode

* Update Docker.md with command to use the file
2016-09-13 21:18:30 -07:00
Anis Elleuch
3e284162d7 Add global flags to all commands and subcommands (#2605) 2016-09-13 21:18:30 -07:00
Anis Elleuch
ff99392102 Enhance minio server help template (#2603) 2016-09-13 21:18:30 -07:00
Krishna Srinivas
9358ee011b logging: Print stack trace in case of errors.
fixes #1827
2016-09-13 21:18:30 -07:00
Harshavardhana
37cbcae6ba xl: Remove an unnecessary lock with isBucketExist() (#2593)
Fixes #2566
2016-09-13 21:18:30 -07:00
Harshavardhana
ae64b7fac8 XL: Handle object layer initialization properly.
During initialization, when a disk was down, the network disk
reported an incorrect error rather than errDiskNotFound.

This resulted in incorrect error handling during the
prepInitStorage() stage.

Fixes #2577
2016-09-13 21:18:30 -07:00
Anis Elleuch
d936ed90ae Avoid testing on system errors strings in posix (#2583) 2016-09-13 21:18:30 -07:00
Krishna Srinivas
7cc77eba45 XL/Healing: errDiskNotFound is the only pardonable error in xlShouldHeal. (#2586)
This is so that we try to heal a file for all the "bad" cases except when the disk is down.
2016-09-13 21:18:30 -07:00
Karthic Rao
07d232c7b4 instrumentation: instrumentation for locks. (#2584)
- Instrumentation for locks.
- Detailed test coverage.
- Adding RPC control handler to fetch lock instrumentation.
- RPC control handlers suite tests with a test RPC server.
2016-09-13 21:18:30 -07:00
Krishnan Parthasarathi
de67bca211 Move formatting of disks out of object layer initialization (#2572) 2016-09-13 21:18:30 -07:00
Anis Elleuch
5c4dbc966f Add Howto setup minio distributed with Docker 2016-09-13 21:18:30 -07:00
Harshavardhana
bca1385683 lock: Fix support single node XL locking as well. 2016-09-13 21:18:30 -07:00
Harshavardhana
f655592ff5 vendorize new dsync with new changes of logging. 2016-09-13 21:18:30 -07:00
Harshavardhana
cbe87cb2ed Fix fd-leak in rpcClient; close it proactively. 2016-09-13 21:18:30 -07:00
Anis Elleuch
0513b3ed07 Add Heal Disk Metadata RPC API + tests (#2556) 2016-09-13 21:18:30 -07:00
Frank
7f92165c79 Single use DRWMutex, RPCClient refactor and added missing cases to lock-rpc-server (#2560)
This PR contains various fixes for the distributed release:
- Use DRWMutex in namespace-lock only for a single Lock()/RLock() call in conformance to server-side rw-locking as implemented in minio/dsync
- Implement missing cases in lock-rpc-server to catch Unlock() for active read locks and RUnlock() for an active write lock
- Refactor RPCClient to release local mutex while making actual RPC.Call()
2016-09-13 21:18:30 -07:00
Harshavardhana
780ccc26f7 server: Validate server arguments for duplicates. (#2554)
- Validates against invalid format inputs.
- Validates duplicate entries.
- Validates sufficient amount of disks.

Partially fixes #2502
2016-09-13 21:18:30 -07:00
Harshavardhana
339425fd52 server: Fetch StorageInfo() from underlying disks transparently. (#2549)
Fixes #2511
2016-09-13 21:18:30 -07:00
Harshavardhana
fa6e9540a8 server: We shouldn't exit the server in lazy init. (#2548)
Avoid fatalIf; instead, since these are non-critical errors,
continue running the server.

 - initializing bucket notifications
 - initializing bucket policies.
 - migrating bucket policies failure.

Fixes #2547
2016-09-13 21:18:30 -07:00
Harshavardhana
9605fde04d controller/auth: Implement JWT based authorization for controller. (#2544)
Fixes #2474
2016-09-13 21:18:30 -07:00
Anis Elleuch
200d327737 List only objects that need healing (#2546) 2016-09-13 21:18:30 -07:00
Harshavardhana
e1b0985b5b rpc: Refactor authentication and login changes. (#2543)
- Cache login requests.
- Converge validating distributed setup.
2016-09-13 21:18:30 -07:00
Krishnan Parthasarathi
c8dfc4cda4 auth-rpc: Reset token on disconnect (#2542) 2016-09-13 21:18:30 -07:00
Bala FA
7922a54c9a rpc-client: remove unwanted nil check of rpcClient. (#2538) 2016-09-13 21:18:30 -07:00
Krishnan Parthasarathi
bda6bcd5be Layered rpc-client implementation (#2512) 2016-09-13 21:18:30 -07:00
Harshavardhana
7e3e24b394 rpc: client login should ignore server versions. 2016-09-13 21:18:30 -07:00
Harshavardhana
bb0466f4ce control: Fix controller CLI handling with distributed server object layer.
Object layer initialization is done lazily; fix it. 2016-09-13 21:18:30 -07:00
2016-09-13 21:18:30 -07:00
Harshavardhana
8797952409 server: Add server command line for running in distributed mode 2016-09-13 21:18:30 -07:00
Krishnan Parthasarathi
c33d1b8ee6 Vendorize minio/dsync for server-side read lock (#2484)
- Prevention of stale lock accumulation.
- Removal of dead code.
2016-09-13 21:18:30 -07:00
Krishnan Parthasarathi
1f67c18222 Lock/Rlock rpc reply was incorrect (#2479) 2016-09-13 21:18:30 -07:00
Krishnan Parthasarathi
a4691611a7 Move initGracefulShutdown before objectLayer initialization (#2468) 2016-09-13 21:18:30 -07:00
awwalker
7c7eb1475d splitNetPath: Add support for windows paths including volumeNames e.g. ip:C:\network\path 2016-09-13 21:18:30 -07:00
Harshavardhana
0bce3d6d63 Fix web handler initialize with distributed lazy init. 2016-09-13 21:18:30 -07:00
Krishnan Parthasarathi
804d91ef61 storage/rpc-client: Reconnect on network disconnect (#2436) 2016-09-13 21:18:30 -07:00
Krishnan Parthasarathi
229600ce9b Implement RLock, RUnlock rpc handlers (#2437)
This simplifies dsync.RWMutex's algorithm for acquiring a
distributed read lock and makes it tolerant to N/2-1 failures.
2016-09-13 21:18:30 -07:00
Harshavardhana
43098df9d8 rpc: Re-factor ReadFile behavior client <--> server.
Current code did not marshal/unmarshal buffers properly from
the server; buffers have to be allocated and sent properly
back to the client to be consumable.
2016-09-13 21:18:30 -07:00
Harshavardhana
6908a0dcd4 Extract rpc server wrapped errors and translate to storage error. 2016-09-13 21:18:30 -07:00
Harshavardhana
cae5761f16 rpc/client: Add missing rpcTokens for each rpc calls. 2016-09-13 21:18:30 -07:00
Harshavardhana
83074ed57e Add missing rpc-client.go - missing in previous commit. 2016-09-13 21:18:30 -07:00
Krishnan Parthasarathi
64a7e6992e Vendorize rpc reconnect changes from minio/dsync (#2405) 2016-09-13 21:18:30 -07:00
Harshavardhana
61af764f8a Add rpc layer authentication. 2016-09-13 21:18:30 -07:00
Harshavardhana
b4172ad3c8 Rename rpc-{client,server} storage-rpc-{client,server} 2016-09-13 21:18:30 -07:00
Harshavardhana
4917038f55 Move the ObjectAPI() resource to the beginning of each handler.
We should return proper errors so that clients can
retry until the server has been initialized.
2016-09-13 21:18:30 -07:00
Krishnan Parthasarathi
b0f3f94163 unify single-node and distributed namespace locking (#2401) 2016-09-13 21:18:30 -07:00
Krishnan Parthasarathi
b7c169d71d object layer initialization using distributed locking (#2397)
* vendorized latest minio/dsync

* wip - object layer initialization using distributed locking
2016-09-13 21:18:30 -07:00
Frank
3939c75345 Added distributed RWMutex (#2369) 2016-09-13 21:18:30 -07:00
Krishnan Parthasarathi
e55926e8cf distribute: Make server work with multiple remote disks
This change initializes rpc servers associated with disks that are
local. It makes object layer initialization on demand, namely on the
first request to the object layer.

Also adds lock RPC service vendorized minio/dsync
2016-09-13 21:18:30 -07:00
Anis Elleuch
f82f535509 Fix possible race in shutdown callbacks process and simplify shutting down the profiler (#2684) 2016-09-13 18:43:45 -07:00
Anis Elleuch
51e337228e Avoid hiding disk errors in some cases in FS Shutdown (#2668) 2016-09-13 11:01:10 -07:00
Kevin Qiu
241c56e6d7 Use Set instead of Add in the event that the request already contains the content-length (#2683) 2016-09-13 10:59:40 -07:00
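The distinction matters because Add appends a second value while Set replaces the existing one; a tiny sketch with Go's net/http header map:

```go
package main

import (
	"fmt"
	"net/http"
)

func main() {
	h := http.Header{}
	h.Add("Content-Length", "1024") // pretend the request already carried one

	h.Add("Content-Length", "2048")  // Add appends: two values, ambiguous
	fmt.Println(h["Content-Length"]) // [1024 2048]

	h.Set("Content-Length", "2048")  // Set replaces: exactly one value
	fmt.Println(h["Content-Length"]) // [2048]
}
```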
Aditya Manthramurthy
895471afa1 Fixes #2678 (#2679)
Refactor `ratelimit.acquire()` to properly enforce the *globalMaxConn*
limit.
2016-09-12 15:53:54 -07:00
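A buffered channel is one common way to get an acquire/release pair without an explicit mutex; the sketch below only illustrates that shape under an assumed globalMaxConn cap and is not necessarily this PR's implementation (channels do serialize internally):

```go
package main

import "fmt"

// rateLimit caps concurrent connections at a fixed maximum.
type rateLimit struct {
	slots chan struct{}
}

func newRateLimit(maxConn int) *rateLimit {
	return &rateLimit{slots: make(chan struct{}, maxConn)}
}

func (r *rateLimit) acquire() { r.slots <- struct{}{} } // blocks once maxConn are held
func (r *rateLimit) release() { <-r.slots }

func main() {
	rl := newRateLimit(2)
	rl.acquire()
	rl.acquire()
	fmt.Println("two connections admitted; a third acquire would block")
	rl.release()
	rl.release()
}
```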
319 changed files with 45121 additions and 9549 deletions


@@ -1,13 +1,33 @@
## Expected behaviour
<!--- Provide a general summary of the issue in the Title above -->
## Actual behaviour
## Expected Behavior
<!--- If you're describing a bug, tell us what should happen -->
<!--- If you're suggesting a change/improvement, tell us how it should work -->
## Steps to reproduce the behaviour
## Current Behavior
<!--- If describing a bug, tell us what happens instead of the expected behavior -->
<!--- If suggesting a change/improvement, explain the difference from current behavior -->
## Minio version
- (paste output of `minio version`)
## System information
## Possible Solution
<!--- Not obligatory, but suggest a fix/reason for the bug, -->
<!--- or ideas how to implement the addition or change -->
## Steps to Reproduce (for bugs)
<!--- Provide a link to a live example, or an unambiguous set of steps to -->
<!--- reproduce this bug. Include code to reproduce, if relevant -->
1.
2.
3.
4.
## Context
<!--- How has this issue affected you? What are you trying to accomplish? -->
<!--- Providing context helps us come up with a solution that is most useful in the real world -->
## Your Environment
<!--- Include as many relevant details about the environment you experienced the bug in -->
* Version used:
* Environment name and version (e.g. nginx 1.9.1):
* Server type and version:
* Operating System and version:
* Link to your project:

.github/PULL_REQUEST_TEMPLATE.md vendored Normal file

@@ -0,0 +1,28 @@
<!--- Provide a general summary of your changes in the Title above -->
## Description
<!--- Describe your changes in detail -->
## Motivation and Context
<!--- Why is this change required? What problem does it solve? -->
<!--- If it fixes an open issue, please link to the issue here. -->
## How Has This Been Tested?
<!--- Please describe in detail how you tested your changes. -->
<!--- Include details of your testing environment, and the tests you ran to -->
<!--- see how your change affects other areas of the code, etc. -->
## Types of changes
<!--- What types of changes does your code introduce? Put an `x` in all the boxes that apply: -->
- [ ] Bug fix (non-breaking change which fixes an issue)
- [ ] New feature (non-breaking change which adds functionality)
- [ ] Breaking change (fix or feature that would cause existing functionality to change)
## Checklist:
<!--- Go over all the following points, and put an `x` in all the boxes that apply. -->
<!--- If you're unsure about any of these, don't hesitate to ask. We're here to help! -->
- [ ] My code follows the code style of this project.
- [ ] My change requires a change to the documentation.
- [ ] I have updated the documentation accordingly.
- [ ] I have added tests to cover my changes.
- [ ] All new and existing tests passed.


@@ -1,18 +1,19 @@
go_import_path: github.com/minio/minio
sudo: required
dist: trusty
language: go
os:
- linux
- osx
osx_image: xcode7.2
env:
- ARCH=x86_64
- ARCH=i686
script:
- make
- make test GOFLAGS="-race"
- make coverage
@@ -20,4 +21,4 @@ after_success:
- bash <(curl -s https://codecov.io/bash)
go:
- 1.6.2
- 1.7.3


@@ -1,6 +1,6 @@
### Install Golang
If you do not have a working Golang environment setup please follow [Golang Installation Guide](./INSTALLGO.md).
If you do not have a working Golang environment setup please follow [Golang Installation Guide](https://docs.minio.io/docs/how-to-install-golang).
### Setup your Minio Github Repository
Fork [Minio upstream](https://github.com/minio/minio/fork) source repository to your own personal repository. Copy the URL for minio from your personal github repo (you will need it for the `git clone` command below).


@@ -1,19 +1,18 @@
FROM golang:1.6-alpine
FROM golang:1.7-alpine
WORKDIR /go/src/app
ENV ALLOW_CONTAINER_ROOT=1
COPY . /go/src/app
RUN \
apk add --no-cache git && \
go-wrapper download && \
go-wrapper install && \
go-wrapper install -ldflags "$(go run buildscripts/gen-ldflags.go)" && \
mkdir -p /export/docker && \
cp /go/src/app/docs/Docker.md /export/docker/ && \
cp /go/src/app/docs/docker/README.md /export/docker/ && \
rm -rf /go/pkg /go/src && \
apk del git
EXPOSE 9000
ENTRYPOINT ["go-wrapper", "run", "server"]
ENTRYPOINT ["minio"]
VOLUME ["/export"]
CMD ["/export"]


@@ -18,7 +18,7 @@ $
- Run `go get foo/bar`
- Edit your code to import foo/bar
- Run `govendor add foo/bar` from top-level folder
- Run `govendor add foo/bar` from top-level directory
#### Remove dependencies


@@ -83,8 +83,8 @@ fmt:
lint:
@echo "Running $@:"
@GO15VENDOREXPERIMENT=1 ${GOPATH}/bin/golint github.com/minio/minio/cmd...
@GO15VENDOREXPERIMENT=1 ${GOPATH}/bin/golint github.com/minio/minio/pkg...
@GO15VENDOREXPERIMENT=1 ${GOPATH}/bin/golint -set_exit_status github.com/minio/minio/cmd...
@GO15VENDOREXPERIMENT=1 ${GOPATH}/bin/golint -set_exit_status github.com/minio/minio/pkg...
ineffassign:
@echo "Running $@:"
@@ -92,8 +92,8 @@ ineffassign:
cyclo:
@echo "Running $@:"
@GO15VENDOREXPERIMENT=1 ${GOPATH}/bin/gocyclo -over 65 cmd
@GO15VENDOREXPERIMENT=1 ${GOPATH}/bin/gocyclo -over 65 pkg
@GO15VENDOREXPERIMENT=1 ${GOPATH}/bin/gocyclo -over 100 cmd
@GO15VENDOREXPERIMENT=1 ${GOPATH}/bin/gocyclo -over 100 pkg
build: getdeps verifiers $(UI_ASSETS)
@@ -101,8 +101,9 @@ deadcode:
@GO15VENDOREXPERIMENT=1 ${GOPATH}/bin/deadcode
spelling:
@GO15VENDOREXPERIMENT=1 ${GOPATH}/bin/misspell -error cmd/**/*
@GO15VENDOREXPERIMENT=1 ${GOPATH}/bin/misspell -error pkg/**/*
@-GO15VENDOREXPERIMENT=1 ${GOPATH}/bin/misspell -error `find cmd/`
@-GO15VENDOREXPERIMENT=1 ${GOPATH}/bin/misspell -error `find pkg/`
@-GO15VENDOREXPERIMENT=1 ${GOPATH}/bin/misspell -error `find docs/`
test: build
@echo "Running all minio testing:"

README.md

@@ -1,201 +1,99 @@
# Minio Quickstart Guide [![Gitter](https://badges.gitter.im/Join%20Chat.svg)](https://gitter.im/minio/minio?utm_source=badge&utm_medium=badge&utm_campaign=pr-badge&utm_content=badge) [![Go Report Card](https://goreportcard.com/badge/minio/minio)](https://goreportcard.com/report/minio/minio) [![codecov](https://codecov.io/gh/minio/minio/branch/master/graph/badge.svg)](https://codecov.io/gh/minio/minio)
# Minio Quickstart Guide [![Gitter](https://badges.gitter.im/Join%20Chat.svg)](https://gitter.im/minio/minio?utm_source=badge&utm_medium=badge&utm_campaign=pr-badge&utm_content=badge) [![Go Report Card](https://goreportcard.com/badge/minio/minio)](https://goreportcard.com/report/minio/minio) [![Docker Pulls](https://img.shields.io/docker/pulls/minio/minio.svg?maxAge=604800)](https://hub.docker.com/r/minio/minio/) [![codecov](https://codecov.io/gh/minio/minio/branch/master/graph/badge.svg)](https://codecov.io/gh/minio/minio)
Minio is an object storage server released under Apache License v2.0. It is compatible with Amazon S3 cloud storage service. It is best suited for storing unstructured data such as photos, videos, log files, backups and container / VM images. Size of an object can range from a few KBs to a maximum of 5TB.
## 1. Download
Minio server is light enough to be bundled with the application stack, similar to NodeJS, Redis and MySQL.
## Docker Container
### Stable
```sh
$ docker pull minio/minio
$ docker run -p 9000:9000 minio/minio server /export
```
### Edge
```sh
$ docker pull minio/minio:edge
$ docker run -p 9000:9000 minio/minio:edge server /export
```
Please visit Minio Docker quickstart guide for more [here](https://docs.minio.io/docs/minio-docker-quickstart-guide)
## OS X
### Homebrew
Install minio packages using [Homebrew](http://brew.sh/)
```sh
$ brew install minio
$ minio server ~/Photos
```
### Binary Download
| Platform| Architecture | URL|
| ----------| -------- | ------|
|Apple OS X|64-bit Intel|https://dl.minio.io/server/minio/release/darwin-amd64/minio|
```sh
$ chmod 755 minio
$ ./minio server ~/Photos
```
## GNU/Linux
### Binary Download
| Platform| Architecture | URL|
| ----------| -------- | ------|
|GNU/Linux|64-bit Intel|https://dl.minio.io/server/minio/release/linux-amd64/minio|
||32-bit Intel|https://dl.minio.io/server/minio/release/linux-386/minio|
||32-bit ARM|https://dl.minio.io/server/minio/release/linux-arm/minio|
|Apple OS X|64-bit Intel|https://dl.minio.io/server/minio/release/darwin-amd64/minio|
|Microsoft Windows|64-bit|https://dl.minio.io/server/minio/release/windows-amd64/minio.exe|
||32-bit|https://dl.minio.io/server/minio/release/windows-386/minio.exe|
|FreeBSD|64-bit|https://dl.minio.io/server/minio/release/freebsd-amd64/minio|
### Install from Homebrew
Install minio packages using [Homebrew](http://brew.sh/)
```sh
$ brew install minio
$ minio --help
$ chmod +x minio
$ ./minio server ~/Photos
```
### Install from Source
## Microsoft Windows
### Binary Download
| Platform| Architecture | URL|
| ----------| -------- | ------|
|Microsoft Windows|64-bit|https://dl.minio.io/server/minio/release/windows-amd64/minio.exe|
||32-bit|https://dl.minio.io/server/minio/release/windows-386/minio.exe|
```sh
C:\Users\Username\Downloads> minio.exe server D:\Photos
```
## FreeBSD
### Binary Download
| Platform| Architecture | URL|
| ----------| -------- | ------|
|FreeBSD|64-bit|https://dl.minio.io/server/minio/release/freebsd-amd64/minio|
```sh
$ chmod 755 minio
$ ./minio server ~/Photos
```
Please visit official zfs FreeBSD guide for more details [here](https://www.freebsd.org/doc/handbook/zfs-quickstart.html)
## Install from Source
Source installation is only intended for developers and advanced users. If you do not have a working Golang environment, please follow [How to install Golang](https://docs.minio.io/docs/how-to-install-golang).
```sh
$ go get -u github.com/minio/minio
```
## 2. Run Minio Server
### GNU/Linux
```sh
$ chmod +x minio
$ ./minio --help
$ ./minio server ~/Photos
Endpoint: http://10.0.0.10:9000 http://127.0.0.1:9000 http://172.17.0.1:9000
AccessKey: USWUXHGYZQYFYFFIT3RE
SecretKey: MOJRH0mkL1IPauahWITSVvyDrQbEEIwljvmxdq03
Region: us-east-1
Browser Access:
http://10.0.0.10:9000 http://127.0.0.1:9000 http://172.17.0.1:9000
Command-line Access: https://docs.minio.io/docs/minio-client-quickstart-guide
$ mc config host add myminio http://10.0.0.10:9000 USWUXHGYZQYFYFFIT3RE MOJRH0mkL1IPauahWITSVvyDrQbEEIwljvmxdq03
Object API (Amazon S3 compatible):
Go: https://docs.minio.io/docs/golang-client-quickstart-guide
Java: https://docs.minio.io/docs/java-client-quickstart-guide
Python: https://docs.minio.io/docs/python-client-quickstart-guide
JavaScript: https://docs.minio.io/docs/javascript-client-quickstart-guide
```
### OS X
```sh
$ chmod 755 minio
$ ./minio --help
$ ./minio server ~/Photos
Endpoint: http://10.0.0.10:9000 http://127.0.0.1:9000 http://172.17.0.1:9000
AccessKey: USWUXHGYZQYFYFFIT3RE
SecretKey: MOJRH0mkL1IPauahWITSVvyDrQbEEIwljvmxdq03
Region: us-east-1
Browser Access:
http://10.0.0.10:9000 http://127.0.0.1:9000 http://172.17.0.1:9000
Command-line Access: https://docs.minio.io/docs/minio-client-quickstart-guide
$ mc config host add myminio http://10.0.0.10:9000 USWUXHGYZQYFYFFIT3RE MOJRH0mkL1IPauahWITSVvyDrQbEEIwljvmxdq03
Object API (Amazon S3 compatible):
Go: https://docs.minio.io/docs/golang-client-quickstart-guide
Java: https://docs.minio.io/docs/java-client-quickstart-guide
Python: https://docs.minio.io/docs/python-client-quickstart-guide
JavaScript: https://docs.minio.io/docs/javascript-client-quickstart-guide
```
### Microsoft Windows
```sh
C:\Users\Username\Downloads> minio.exe --help
C:\Users\Username\Downloads> minio.exe server D:\Photos
Endpoint: http://10.0.0.10:9000 http://127.0.0.1:9000 http://172.17.0.1:9000
AccessKey: USWUXHGYZQYFYFFIT3RE
SecretKey: MOJRH0mkL1IPauahWITSVvyDrQbEEIwljvmxdq03
Region: us-east-1
Browser Access:
http://10.0.0.10:9000 http://127.0.0.1:9000 http://172.17.0.1:9000
Command-line Access: https://docs.minio.io/docs/minio-client-quickstart-guide
$ mc.exe config host add myminio http://10.0.0.10:9000 USWUXHGYZQYFYFFIT3RE MOJRH0mkL1IPauahWITSVvyDrQbEEIwljvmxdq03
Object API (Amazon S3 compatible):
Go: https://docs.minio.io/docs/golang-client-quickstart-guide
Java: https://docs.minio.io/docs/java-client-quickstart-guide
Python: https://docs.minio.io/docs/python-client-quickstart-guide
JavaScript: https://docs.minio.io/docs/javascript-client-quickstart-guide
```
### Docker Container
```sh
$ docker pull minio/minio
$ docker run -p 9000:9000 minio/minio
```
Please visit Minio Docker quickstart guide for more [here](https://docs.minio.io/docs/minio-docker-quickstart-guide)
### FreeBSD
```sh
$ chmod 755 minio
$ ./minio --help
$ ./minio server ~/Photos
Endpoint: http://10.0.0.10:9000 http://127.0.0.1:9000 http://172.17.0.1:9000
AccessKey: USWUXHGYZQYFYFFIT3RE
SecretKey: MOJRH0mkL1IPauahWITSVvyDrQbEEIwljvmxdq03
Region: us-east-1
Browser Access:
http://10.0.0.10:9000 http://127.0.0.1:9000 http://172.17.0.1:9000
Command-line Access: https://docs.minio.io/docs/minio-client-quickstart-guide
$ mc config host add myminio http://10.0.0.10:9000 USWUXHGYZQYFYFFIT3RE MOJRH0mkL1IPauahWITSVvyDrQbEEIwljvmxdq03
Object API (Amazon S3 compatible):
Go: https://docs.minio.io/docs/golang-client-quickstart-guide
Java: https://docs.minio.io/docs/java-client-quickstart-guide
Python: https://docs.minio.io/docs/python-client-quickstart-guide
JavaScript: https://docs.minio.io/docs/javascript-client-quickstart-guide
```
Please visit official zfs FreeBSD guide for more details [here](https://www.freebsd.org/doc/handbook/zfs-quickstart.html)
## 3. Test Minio Server using Minio Browser
Open a web browser and navigate to http://127.0.0.1:9000 to view your buckets on minio server.
## Test using Minio Browser
Minio Server comes with an embedded web based object browser. Point your web browser to http://127.0.0.1:9000 to ensure your server has started successfully.
![Screenshot](https://github.com/minio/minio/blob/master/docs/screenshots/minio-browser.jpg?raw=true)
## Test using Minio Client `mc`
`mc` provides a modern alternative to UNIX commands like ls, cat, cp, mirror, diff etc. It supports filesystems and Amazon S3 compatible cloud storage services. Follow the Minio Client [Quickstart Guide](https://docs.minio.io/docs/minio-client-quickstart-guide) for further instructions.
## 4. Test Minio Server using `mc`
Install mc from [here](https://docs.minio.io/docs/minio-client-quickstart-guide). Use `mc ls` command to list all the buckets on your minio server.
```sh
$ mc ls myminio/
[2015-08-05 08:13:22 IST] 0B andoria/
[2015-08-05 06:14:26 IST] 0B deflector/
[2015-08-05 08:13:11 IST] 0B ferenginar/
[2016-03-08 14:56:35 IST] 0B jarjarbing/
[2016-01-20 16:07:41 IST] 0B my.minio.io/
```
For more examples please navigate to [Minio Client Complete Guide](https://docs.minio.io/docs/minio-client-complete-guide).
## 5. Explore Further
## Explore Further
- [Minio Erasure Code QuickStart Guide](https://docs.minio.io/docs/minio-erasure-code-quickstart-guide)
- [Minio Docker Quickstart Guide](https://docs.minio.io/docs/minio-docker-quickstart-guide)
- [Use `mc` with Minio Server](https://docs.minio.io/docs/minio-client-quickstart-guide)
- [Use `aws-cli` with Minio Server](https://docs.minio.io/docs/aws-cli-with-minio)
- [Use `s3cmd` with Minio Server](https://docs.minio.io/docs/s3cmd-with-minio)
- [Use `minio-go` SDK with Minio Server](https://docs.minio.io/docs/golang-client-quickstart-guide)
- [The Minio documentation website](https://docs.minio.io)
## 6. Contribute to Minio Project
## Contribute to Minio Project
Please follow Minio [Contributor's Guide](https://github.com/minio/minio/blob/master/CONTRIBUTING.md)

README_ZH.md Normal file

@@ -0,0 +1,200 @@
# Minio Quickstart Guide [![Gitter](https://badges.gitter.im/Join%20Chat.svg)](https://gitter.im/minio/minio?utm_source=badge&utm_medium=badge&utm_campaign=pr-badge&utm_content=badge) [![Go Report Card](https://goreportcard.com/badge/minio/minio)](https://goreportcard.com/report/minio/minio) [![codecov](https://codecov.io/gh/minio/minio/branch/master/graph/badge.svg)](https://codecov.io/gh/minio/minio)
Minio is an object storage server released under Apache License v2.0. It is fully compatible with Amazon's S3 cloud storage service and is well suited for storing unstructured data such as photos, videos, log files, backups and container/VM images. An object can be of any size, from a few KBs up to a maximum of 5TB.
## 1. Download
Minio is a very lightweight server that can easily be combined with other application stacks, similar to NodeJS, Redis or MySQL.
| Platform| Architecture | URL|
| ----------| -------- | ------|
|GNU/Linux|64-bit Intel|https://dl.minio.io/server/minio/release/linux-amd64/minio|
||32-bit Intel|https://dl.minio.io/server/minio/release/linux-386/minio|
||32-bit ARM|https://dl.minio.io/server/minio/release/linux-arm/minio|
|Apple OS X|64-bit Intel|https://dl.minio.io/server/minio/release/darwin-amd64/minio|
|Microsoft Windows|64-bit|https://dl.minio.io/server/minio/release/windows-amd64/minio.exe|
||32-bit|https://dl.minio.io/server/minio/release/windows-386/minio.exe|
|FreeBSD|64-bit|https://dl.minio.io/server/minio/release/freebsd-amd64/minio|
### Install from Homebrew
Install minio using [Homebrew](http://brew.sh/)
```sh
$ brew install minio
$ minio --help
```
### Install from Source
Source installation is intended only for developers and advanced users. If you do not have a working Golang environment yet, please follow [How to install Golang](https://docs.minio.io/docs/zh-CN/how-to-install-golang).
```sh
$ go get -u github.com/minio/minio
```
## 2. Run Minio Server
### GNU/Linux
```sh
$ chmod +x minio
$ ./minio --help
$ ./minio server ~/Photos
Endpoint: http://10.0.0.10:9000 http://127.0.0.1:9000 http://172.17.0.1:9000
AccessKey: USWUXHGYZQYFYFFIT3RE
SecretKey: MOJRH0mkL1IPauahWITSVvyDrQbEEIwljvmxdq03
Region: us-east-1
Browser Access:
http://10.0.0.10:9000 http://127.0.0.1:9000 http://172.17.0.1:9000
Command-line Access: https://docs.minio.io/docs/zh-CN/minio-client-quickstart-guide
$ mc config host add myminio http://10.0.0.10:9000 USWUXHGYZQYFYFFIT3RE MOJRH0mkL1IPauahWITSVvyDrQbEEIwljvmxdq03
Object API (Amazon S3 compatible):
Go: https://docs.minio.io/docs/zh-CN/golang-client-quickstart-guide
Java: https://docs.minio.io/docs/zh-CN/java-client-quickstart-guide
Python: https://docs.minio.io/docs/zh-CN/python-client-quickstart-guide
JavaScript: https://docs.minio.io/docs/zh-CN/javascript-client-quickstart-guide
```
### OS X
```sh
$ chmod 755 minio
$ ./minio --help
$ ./minio server ~/Photos
Endpoint: http://10.0.0.10:9000 http://127.0.0.1:9000 http://172.17.0.1:9000
AccessKey: USWUXHGYZQYFYFFIT3RE
SecretKey: MOJRH0mkL1IPauahWITSVvyDrQbEEIwljvmxdq03
Region: us-east-1
Browser Access:
http://10.0.0.10:9000 http://127.0.0.1:9000 http://172.17.0.1:9000
Command-line Access: https://docs.minio.io/docs/zh-CN/minio-client-quickstart-guide
$ mc config host add myminio http://10.0.0.10:9000 USWUXHGYZQYFYFFIT3RE MOJRH0mkL1IPauahWITSVvyDrQbEEIwljvmxdq03
Object API (Amazon S3 compatible):
Go: https://docs.minio.io/docs/zh-CN/golang-client-quickstart-guide
Java: https://docs.minio.io/docs/zh-CN/java-client-quickstart-guide
Python: https://docs.minio.io/docs/zh-CN/python-client-quickstart-guide
JavaScript: https://docs.minio.io/docs/zh-CN/javascript-client-quickstart-guide
```
### Microsoft Windows
```sh
C:\Users\Username\Downloads> minio.exe --help
C:\Users\Username\Downloads> minio.exe server D:\Photos
Endpoint: http://10.0.0.10:9000 http://127.0.0.1:9000 http://172.17.0.1:9000
AccessKey: USWUXHGYZQYFYFFIT3RE
SecretKey: MOJRH0mkL1IPauahWITSVvyDrQbEEIwljvmxdq03
Region: us-east-1
Browser Access:
http://10.0.0.10:9000 http://127.0.0.1:9000 http://172.17.0.1:9000
Command-line Access: https://docs.minio.io/docs/zh-CN/minio-client-quickstart-guide
$ mc.exe config host add myminio http://10.0.0.10:9000 USWUXHGYZQYFYFFIT3RE MOJRH0mkL1IPauahWITSVvyDrQbEEIwljvmxdq03
Object API (Amazon S3 compatible):
Go: https://docs.minio.io/docs/zh-CN/golang-client-quickstart-guide
Java: https://docs.minio.io/docs/zh-CN/java-client-quickstart-guide
Python: https://docs.minio.io/docs/zh-CN/python-client-quickstart-guide
JavaScript: https://docs.minio.io/docs/zh-CN/javascript-client-quickstart-guide
```
### Docker
```sh
$ docker pull minio/minio
$ docker run -p 9000:9000 minio/minio
```
Please visit the Minio Docker quickstart guide for more details [here](https://docs.minio.io/docs/zh-CN/minio-docker-quickstart-guide)
### FreeBSD
```sh
$ chmod 755 minio
$ ./minio --help
$ ./minio server ~/Photos
Endpoint: http://10.0.0.10:9000 http://127.0.0.1:9000 http://172.17.0.1:9000
AccessKey: USWUXHGYZQYFYFFIT3RE
SecretKey: MOJRH0mkL1IPauahWITSVvyDrQbEEIwljvmxdq03
Region: us-east-1
Browser Access:
http://10.0.0.10:9000 http://127.0.0.1:9000 http://172.17.0.1:9000
Command-line Access: https://docs.minio.io/docs/zh-CN/minio-client-quickstart-guide
$ mc config host add myminio http://10.0.0.10:9000 USWUXHGYZQYFYFFIT3RE MOJRH0mkL1IPauahWITSVvyDrQbEEIwljvmxdq03
Object API (Amazon S3 compatible):
Go: https://docs.minio.io/docs/zh-CN/golang-client-quickstart-guide
Java: https://docs.minio.io/docs/zh-CN/java-client-quickstart-guide
Python: https://docs.minio.io/docs/zh-CN/python-client-quickstart-guide
JavaScript: https://docs.minio.io/docs/zh-CN/javascript-client-quickstart-guide
```
Please visit the official FreeBSD ZFS guide for more details [here](https://www.freebsd.org/doc/handbook/zfs-quickstart.html)
## 3. Test Minio Server using the Browser
Open a web browser and navigate to http://127.0.0.1:9000 to view all the buckets on your minio server.
![Screenshot](https://github.com/minio/minio/blob/master/docs/screenshots/minio-browser.jpg?raw=true)
## 4. Test Minio Server using `mc`
Install mc following [this guide](https://docs.minio.io/docs/minio-client-quickstart-guide). Use the `mc ls` command to list all the buckets on your minio server.
```sh
$ mc ls myminio/
[2015-08-05 08:13:22 IST] 0B andoria/
[2015-08-05 06:14:26 IST] 0B deflector/
[2015-08-05 08:13:11 IST] 0B ferenginar/
[2016-03-08 14:56:35 IST] 0B jarjarbing/
[2016-01-20 16:07:41 IST] 0B my.minio.io/
```
For more examples please visit the [Minio Client Complete Guide](https://docs.minio.io/docs/zh-CN/minio-client-complete-guide).
## 5. Explore Further
- [Minio Erasure Code Quickstart Guide](https://docs.minio.io/docs/zh-CN/minio-erasure-code-quickstart-guide)
- [Minio Docker Quickstart Guide](https://docs.minio.io/docs/zh-CN/minio-docker-quickstart-guide)
- [Use `mc` with Minio Server](https://docs.minio.io/docs/zh-CN/minio-client-quickstart-guide)
- [Use `aws-cli` with Minio Server](https://docs.minio.io/docs/zh-CN/aws-cli-with-minio)
- [Use `s3cmd` with Minio Server](https://docs.minio.io/docs/zh-CN/s3cmd-with-minio)
- [Use `minio-go` SDK with Minio Server](https://docs.minio.io/docs/zh-CN/golang-client-quickstart-guide)
## 6. Contribute to Minio Project
Please follow the Minio [Contributor's Guide](https://github.com/minio/minio/blob/master/CONTRIBUTING.md)


@@ -1,6 +1,6 @@
#!/usr/bin/env bash
#
# Minio Cloud Storage, (C) 2015 Minio, Inc.
# Minio Cloud Storage, (C) 2014-2016 Minio, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
@@ -21,14 +21,21 @@ _init() {
## Minimum required versions for build dependencies
GIT_VERSION="1.0"
GO_VERSION="1.6"
GO_VERSION="1.7.1"
OSX_VERSION="10.8"
UNAME=$(uname -sm)
## Check all dependencies are present
MISSING=""
KNAME=$(uname -s)
ARCH=$(uname -m)
}
## FIXME:
## In OSX, 'readlink -f' option does not exist, hence
## we have our own readlink -f behaviour here.
## Once OSX has the option, below function is good enough.
##
## readlink() {
## return /bin/readlink -f "$1"
## }
##
readlink() {
TARGET_FILE=$1
@@ -50,172 +57,114 @@ readlink() {
echo $RESULT
}
###
#
# Takes two arguments
# arg1: version number in `x.x.x` format
# arg2: version number in `x.x.x` format
#
# example: check_version "$version1" "$version2"
#
# returns:
# 0 - Installed version is equal to required
# 1 - Installed version is greater than required
# 2 - Installed version is lesser than required
# 3 - If args have length zero
#
####
check_version() {
## validate args
[[ -z "$1" ]] && return 3
[[ -z "$2" ]] && return 3
if [[ $1 == $2 ]]; then
return 0
fi
local IFS=.
local i ver1=($1) ver2=($2)
# fill empty fields in ver1 with zeros
for ((i=${#ver1[@]}; i<${#ver2[@]}; i++)); do
ver1[i]=0
done
for ((i=0; i<${#ver1[@]}; i++)); do
if [[ -z ${ver2[i]} ]]; then
# fill empty fields in ver2 with zeros
ver2[i]=0
fi
if ((10#${ver1[i]} > 10#${ver2[i]})); then
## FIXME:
## In OSX, 'sort -V' option does not exist, hence
## we have our own version compare function.
## Once OSX has the option, below function is good enough.
##
## check_minimum_version() {
## versions=($(echo -e "$1\n$2" | sort -V))
## return [ "$1" == "${versions[0]}" ]
## }
##
check_minimum_version() {
IFS='.' read -r -a varray1 <<< "$1"
IFS='.' read -r -a varray2 <<< "$2"
for i in "${!varray1[@]}"; do
if [[ ${varray1[i]} -lt ${varray2[i]} ]]; then
return 0
elif [[ ${varray1[i]} -gt ${varray2[i]} ]]; then
return 1
fi
if ((10#${ver1[i]} < 10#${ver2[i]})); then
## Installed version is lesser than required - Bad condition
return 2
fi
done
return 0
}
check_golang_env() {
echo ${GOROOT:?} 2>&1 >/dev/null
if [ $? -eq 1 ]; then
echo "ERROR"
echo "GOROOT environment variable missing, please refer to Go installation document"
echo "https://github.com/minio/minio/blob/master/INSTALLGO.md#install-go-13"
exit 1
fi
echo ${GOPATH:?} 2>&1 >/dev/null
if [ $? -eq 1 ]; then
echo "ERROR"
echo "GOPATH environment variable missing, please refer to Go installation document"
echo "https://github.com/minio/minio/blob/master/INSTALLGO.md#install-go-13"
exit 1
fi
local go_binary_path=$(which go)
if [ -z "${go_binary_path}" ] ; then
echo "Cannot find go binary in your PATH configuration, please refer to Go installation document"
echo "https://github.com/minio/minio/blob/master/INSTALLGO.md#install-go-13"
exit -1
fi
local new_go_binary_path=${go_binary_path}
if [ -h "${go_binary_path}" ]; then
new_go_binary_path=$(readlink ${go_binary_path})
fi
if [[ !"$(dirname ${new_go_binary_path})" =~ *"${GOROOT%%*(/)}"* ]] ; then
echo "The go binary found in your PATH configuration does not belong to the Go installation pointed by your GOROOT environment," \
"please refer to Go installation document"
echo "https://github.com/minio/minio/blob/master/INSTALLGO.md#install-go-13"
exit -1
fi
}
is_supported_os() {
case ${UNAME%% *} in
"Linux")
os="linux"
;;
"FreeBSD")
os="freebsd"
;;
"Darwin")
osx_host_version=$(env sw_vers -productVersion)
check_version "${osx_host_version}" "${OSX_VERSION}"
[[ $? -ge 2 ]] && die "Minimum OSX version supported is ${OSX_VERSION}"
;;
"*")
echo "Exiting.. unsupported operating system found"
exit 1;
esac
}
is_supported_arch() {
local supported
case ${UNAME##* } in
"x86_64" | "amd64")
supported=1
;;
"arm"*)
supported=1
assert_is_supported_arch() {
case "${ARCH}" in
x86_64 | amd64 | aarch64 | arm* )
return
;;
*)
supported=0
;;
echo "ERROR"
echo "Arch '${ARCH}' not supported."
echo "Supported Arch: [x86_64, amd64, aarch64, arm*]"
exit 1
esac
if [ $supported -eq 0 ]; then
echo "Invalid arch: ${UNAME} not supported, please use x86_64/amd64"
exit 1;
fi
}
check_deps() {
check_version "$(env go version 2>/dev/null | sed 's/^.* go\([0-9.]*\).*$/\1/')" "${GO_VERSION}"
if [ $? -ge 2 ]; then
MISSING="${MISSING} golang(${GO_VERSION})"
assert_is_supported_os() {
case "${KNAME}" in
Linux | FreeBSD )
return
;;
Darwin )
osx_host_version=$(env sw_vers -productVersion)
if ! check_minimum_version "${OSX_VERSION}" "${osx_host_version}"; then
echo "ERROR"
echo "OSX version '${osx_host_version}' not supported."
echo "Minimum supported version: ${OSX_VERSION}"
exit 1
fi
return
;;
*)
echo "ERROR"
echo "OS '${KNAME}' is not supported."
echo "Supported OS: [Linux, FreeBSD, Darwin]"
exit 1
esac
}
assert_check_golang_env() {
if ! which go >/dev/null 2>&1; then
echo "ERROR"
echo "Cannot find go binary in your PATH configuration, please refer to Go installation document at"
echo "https://docs.minio.io/docs/how-to-install-golang"
exit 1
fi
check_version "$(env git --version 2>/dev/null | sed -e 's/^.* \([0-9.\].*\).*$/\1/' -e 's/^\([0-9.\]*\).*/\1/g')" "${GIT_VERSION}"
if [ $? -ge 2 ]; then
MISSING="${MISSING} git"
installed_go_version=$(go version | sed 's/^.* go\([0-9.]*\).*$/\1/')
if ! check_minimum_version "${GO_VERSION}" "${installed_go_version}"; then
echo "ERROR"
echo "Go version '${installed_go_version}' not supported."
echo "Minimum supported version: ${GO_VERSION}"
exit 1
fi
if [ -z "${GOPATH}" ]; then
echo "ERROR"
echo "GOPATH environment variable missing, please refer to Go installation document"
echo "https://docs.minio.io/docs/how-to-install-golang"
exit 1
fi
}
assert_check_deps() {
installed_git_version=$(git version | awk '{print $NF}')
if ! check_minimum_version "${GIT_VERSION}" "${installed_git_version}"; then
echo "ERROR"
echo "Git version '${installed_git_version}' not supported."
echo "Minimum supported version: ${GIT_VERSION}"
exit 1
fi
}
main() {
echo -n "Check for supported arch.. "
is_supported_arch
## Check for supported arch
assert_is_supported_arch
echo -n "Check for supported os.. "
is_supported_os
## Check for supported os
assert_is_supported_os
echo -n "Checking if proper environment variables are set.. "
check_golang_env
## Check for Go environment
assert_check_golang_env
echo "Done"
echo "Using GOPATH=${GOPATH} and GOROOT=${GOROOT}"
echo -n "Checking dependencies for Minio.. "
check_deps
## If dependencies are missing, warn the user and abort
if [ "x${MISSING}" != "x" ]; then
echo "ERROR"
echo
echo "The following build tools are missing:"
echo
echo "** ${MISSING} **"
echo
echo "Please install them "
echo "${MISSING}"
echo
echo "Follow https://docs.minio.io/docs/how-to-install-golang for further instructions"
exit 1
fi
echo "Done"
## Check for dependencies
assert_check_deps
}
_init && main "$@"


@@ -15,26 +15,23 @@
# limitations under the License.
#
_init() {
shopt -s extglob
PWD=$(pwd -P)
GOPATH=$(cd "$(go env GOPATH)" ; env pwd -P)
}
main() {
echo "Checking if project is at ${GOPATH}"
for minio in $(echo ${GOPATH} | tr ':' ' '); do
if [ ! -d ${minio}/src/github.com/minio/minio ]; then
echo "Project not found in ${minio}, please follow instructions provided at https://github.com/minio/minio/blob/master/CONTRIBUTING.md#setup-your-minio-github-repository" \
&& exit 1
fi
if [ "x${minio}/src/github.com/minio/minio" != "x${PWD}" ]; then
echo "Build outside of ${minio}, two source checkouts found. Exiting." && exit 1
echo "Checking project is in GOPATH:"
IFS=':' read -r -a paths <<< "$GOPATH"
for path in "${paths[@]}"; do
minio_path="$path/src/github.com/minio/minio"
if [ -d "$minio_path" ]; then
if [ "$minio_path" -ef "$PWD" ]; then
exit 0
fi
fi
done
echo "ERROR"
echo "Project not found in ${GOPATH}."
echo "Follow instructions at https://github.com/minio/minio/blob/master/CONTRIBUTING.md#setup-your-minio-github-repository"
exit 1
}
_init && main
main


@@ -19,7 +19,6 @@ package cmd
import (
"crypto/rand"
"encoding/base64"
"regexp"
)
// credential container for access and secret keys.
@@ -29,15 +28,21 @@ type credential struct {
}
const (
minioAccessID = 20
minioSecretID = 40
accessKeyMinLen = 5
accessKeyMaxLen = 20
secretKeyMinLen = 8
secretKeyMaxLen = 40
)
// isValidSecretKey - validate secret key.
var isValidSecretKey = regexp.MustCompile(`^.{8,40}$`)
// isValidAccessKey - validate access key for right length.
func isValidAccessKey(accessKey string) bool {
return len(accessKey) >= accessKeyMinLen && len(accessKey) <= accessKeyMaxLen
}
// isValidAccessKey - validate access key.
var isValidAccessKey = regexp.MustCompile(`^[a-zA-Z0-9\\-\\.\\_\\~]{5,20}$`)
// isValidSecretKey - validate secret key for right length.
func isValidSecretKey(secretKey string) bool {
return len(secretKey) >= secretKeyMinLen && len(secretKey) <= secretKeyMaxLen
}
// mustGenAccessKeys - must generate access credentials.
func mustGenAccessKeys() (creds credential) {
@@ -66,11 +71,11 @@ func genAccessKeys() (credential, error) {
// genAccessKeyID - generate random alpha numeric value using only uppercase characters
// takes input as size in integer
func genAccessKeyID() ([]byte, error) {
alpha := make([]byte, minioAccessID)
alpha := make([]byte, accessKeyMaxLen)
if _, err := rand.Read(alpha); err != nil {
return nil, err
}
for i := 0; i < minioAccessID; i++ {
for i := 0; i < accessKeyMaxLen; i++ {
alpha[i] = alphaNumericTable[alpha[i]%byte(len(alphaNumericTable))]
}
return alpha, nil
@@ -78,9 +83,9 @@ func genAccessKeyID() ([]byte, error) {
// genSecretAccessKey - generate random base64 numeric value from a random seed.
func genSecretAccessKey() ([]byte, error) {
rb := make([]byte, minioSecretID)
rb := make([]byte, secretKeyMaxLen)
if _, err := rand.Read(rb); err != nil {
return nil, err
}
return []byte(base64.StdEncoding.EncodeToString(rb))[:minioSecretID], nil
return []byte(base64.StdEncoding.EncodeToString(rb))[:secretKeyMaxLen], nil
}


@@ -28,7 +28,7 @@ type ObjectIdentifier struct {
// createBucketConfiguration container for bucket configuration request from client.
// Used for parsing the location from the request body for MakeBucket.
type createBucketLocationConfiguration struct {
XMLName xml.Name `xml:"http://s3.amazonaws.com/doc/2006-03-01/ CreateBucketConfiguration" json:"-"`
XMLName xml.Name `xml:"CreateBucketConfiguration" json:"-"`
Location string `xml:"LocationConstraint"`
}


@@ -48,7 +48,6 @@ const (
ErrNone APIErrorCode = iota
ErrAccessDenied
ErrBadDigest
ErrBucketAlreadyExists
ErrEntityTooSmall
ErrEntityTooLarge
ErrIncompleteBody
@@ -65,6 +64,7 @@ const (
ErrInvalidCopySource
ErrInvalidCopyDest
ErrInvalidPolicyDocument
ErrInvalidObjectState
ErrMalformedXML
ErrMissingContentLength
ErrMissingContentMD5
@@ -134,6 +134,7 @@ const (
ErrObjectExistsAsDirectory
ErrPolicyNesting
ErrInvalidObjectName
ErrServerNotInitialized
// Add new extended error codes here.
// Please open a https://github.com/minio/minio/issues before adding
// new error codes here.
@@ -192,11 +193,6 @@ var errorCodeResponse = map[APIErrorCode]APIError{
Description: "The Content-Md5 you specified did not match what we received.",
HTTPStatusCode: http.StatusBadRequest,
},
ErrBucketAlreadyExists: {
Code: "BucketAlreadyExists",
Description: "The requested bucket name is not available.",
HTTPStatusCode: http.StatusConflict,
},
ErrEntityTooSmall: {
Code: "EntityTooSmall",
Description: "Your proposed upload is smaller than the minimum allowed object size.",
@@ -312,6 +308,11 @@ var errorCodeResponse = map[APIErrorCode]APIError{
Description: "The list of parts was not in ascending order. The parts list must be specified in order by part number.",
HTTPStatusCode: http.StatusBadRequest,
},
ErrInvalidObjectState: {
Code: "InvalidObjectState",
Description: "The operation is not valid for the current state of the object.",
HTTPStatusCode: http.StatusForbidden,
},
ErrAuthorizationHeaderMalformed: {
Code: "AuthorizationHeaderMalformed",
Description: "The authorization header is malformed; the region is wrong; expecting 'us-east-1'.",
@@ -454,7 +455,7 @@ var errorCodeResponse = map[APIErrorCode]APIError{
Description: "Request is not valid yet",
HTTPStatusCode: http.StatusForbidden,
},
// FIXME: Actual XML error response also contains the header which missed in lsit of signed header parameters.
// FIXME: Actual XML error response also contains the header which missed in list of signed header parameters.
ErrUnsignedHeaders: {
Code: "AccessDenied",
Description: "There were headers present in the request which were not signed",
@@ -548,7 +549,7 @@ var errorCodeResponse = map[APIErrorCode]APIError{
},
ErrPolicyNesting: {
Code: "XMinioPolicyNesting",
Description: "Policy nesting conflict has occurred.",
Description: "New bucket policy conflicts with an existing policy. Please try again with new prefix.",
HTTPStatusCode: http.StatusConflict,
},
ErrInvalidObjectName: {
@@ -556,6 +557,11 @@ var errorCodeResponse = map[APIErrorCode]APIError{
Description: "Object name contains unsupported characters. Unsupported characters are `^*|\\\"",
HTTPStatusCode: http.StatusBadRequest,
},
ErrServerNotInitialized: {
Code: "XMinioServerNotInitialized",
Description: "Server not initialized, please try again.",
HTTPStatusCode: http.StatusServiceUnavailable,
},
// Add your error structure here.
}
@@ -566,6 +572,8 @@ func toAPIErrorCode(err error) (apiErr APIErrorCode) {
if err == nil {
return ErrNone
}
err = errorCause(err)
// Verify if the underlying error is signature mismatch.
switch err {
case errSignatureMismatch:
@@ -573,10 +581,12 @@ func toAPIErrorCode(err error) (apiErr APIErrorCode) {
case errContentSHA256Mismatch:
apiErr = ErrContentSHA256Mismatch
}
if apiErr != ErrNone {
// If there was a match in the above switch case.
return apiErr
}
switch err.(type) {
case StorageFull:
apiErr = ErrStorageFull
@@ -586,6 +596,8 @@ func toAPIErrorCode(err error) (apiErr APIErrorCode) {
apiErr = ErrIncompleteBody
case ObjectExistsAsDirectory:
apiErr = ErrObjectExistsAsDirectory
case PrefixAccessDenied:
apiErr = ErrAccessDenied
case BucketNameInvalid:
apiErr = ErrInvalidBucketName
case BucketNotFound:
@@ -606,11 +618,26 @@ func toAPIErrorCode(err error) (apiErr APIErrorCode) {
apiErr = ErrWriteQuorum
case InsufficientReadQuorum:
apiErr = ErrReadQuorum
case UnsupportedDelimiter:
apiErr = ErrNotImplemented
case InvalidMarkerPrefixCombination:
apiErr = ErrNotImplemented
case InvalidUploadIDKeyCombination:
apiErr = ErrNotImplemented
case MalformedUploadID:
apiErr = ErrNoSuchUpload
case PartTooSmall:
apiErr = ErrEntityTooSmall
case SHA256Mismatch:
apiErr = ErrContentSHA256Mismatch
case ObjectTooLarge:
apiErr = ErrEntityTooLarge
case ObjectTooSmall:
apiErr = ErrEntityTooSmall
default:
apiErr = ErrInternalError
}
return apiErr
}
@@ -634,7 +661,3 @@ func getAPIErrorResponse(err APIError, resource string) APIErrorResponse {
return data
}
func getErrMalformedCredentialDate(malformedDateStr string) {
}
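
The `toAPIErrorCode` changes rest on two steps: unwrap the traced error first (`errorCause`), then map the concrete type to an API error code. A minimal sketch of that pattern, using placeholder error types rather than the package's own:

```go
package main

import (
	"errors"
	"fmt"
)

// Placeholder error types standing in for the object-layer errors.
type PartTooSmall struct{}

func (PartTooSmall) Error() string { return "part too small" }

type ObjectTooLarge struct{}

func (ObjectTooLarge) Error() string { return "object too large" }

// wrapped mimics a traced error that carries an underlying cause.
type wrapped struct{ cause error }

func (w wrapped) Error() string { return w.cause.Error() }

// errorCause unwraps a traced error to its root cause, if any.
func errorCause(err error) error {
	if w, ok := err.(wrapped); ok {
		return w.cause
	}
	return err
}

// toCode maps a (possibly wrapped) error to an API error code string.
func toCode(err error) string {
	err = errorCause(err) // unwrap before the type switch
	switch err.(type) {
	case PartTooSmall:
		return "EntityTooSmall"
	case ObjectTooLarge:
		return "EntityTooLarge"
	default:
		return "InternalError"
	}
}

func main() {
	fmt.Println(toCode(wrapped{cause: PartTooSmall{}})) // EntityTooSmall
	fmt.Println(toCode(ObjectTooLarge{}))               // EntityTooLarge
	fmt.Println(toCode(errors.New("unknown")))          // InternalError
}
```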

cmd/api-errors_test.go (new file, 135 lines)

@@ -0,0 +1,135 @@
/*
* Minio Cloud Storage, (C) 2016 Minio, Inc.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package cmd
import (
"errors"
"testing"
)
func TestAPIErrCode(t *testing.T) {
testCases := []struct {
err error
errCode APIErrorCode
}{
// Valid cases.
{
BadDigest{},
ErrBadDigest,
},
{
IncompleteBody{},
ErrIncompleteBody,
},
{
ObjectExistsAsDirectory{},
ErrObjectExistsAsDirectory,
},
{
BucketNameInvalid{},
ErrInvalidBucketName,
},
{
BucketExists{},
ErrBucketAlreadyOwnedByYou,
},
{
ObjectNotFound{},
ErrNoSuchKey,
},
{
ObjectNameInvalid{},
ErrInvalidObjectName,
},
{
InvalidUploadID{},
ErrNoSuchUpload,
},
{
InvalidPart{},
ErrInvalidPart,
},
{
InsufficientReadQuorum{},
ErrReadQuorum,
},
{
InsufficientWriteQuorum{},
ErrWriteQuorum,
},
{
UnsupportedDelimiter{},
ErrNotImplemented,
},
{
InvalidMarkerPrefixCombination{},
ErrNotImplemented,
},
{
InvalidUploadIDKeyCombination{},
ErrNotImplemented,
},
{
MalformedUploadID{},
ErrNoSuchUpload,
},
{
PartTooSmall{},
ErrEntityTooSmall,
},
{
BucketNotEmpty{},
ErrBucketNotEmpty,
},
{
BucketNotFound{},
ErrNoSuchBucket,
},
{
StorageFull{},
ErrStorageFull,
},
{
errSignatureMismatch,
ErrSignatureDoesNotMatch,
},
{
errContentSHA256Mismatch,
ErrContentSHA256Mismatch,
}, // End of all valid cases.
// Case where err is nil.
{
nil,
ErrNone,
},
// Case where err type is unknown.
{
errors.New("Custom error"),
ErrInternalError,
},
}
// Validate all the errors with their API error codes.
for i, testCase := range testCases {
errCode := toAPIErrorCode(testCase.err)
if errCode != testCase.errCode {
t.Errorf("Test %d: Expected error code %d, got %d", i+1, testCase.errCode, errCode)
}
}
}


@@ -1,5 +1,5 @@
/*
* Minio Cloud Storage, (C) 2015 Minio, Inc.
* Minio Cloud Storage, (C) 2015,2016 Minio, Inc.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
@@ -25,25 +25,23 @@ import (
"strconv"
)
//// helpers
// Static alphaNumeric table used for generating unique request ids
// Static alphanumeric table used for generating unique request ids
var alphaNumericTable = []byte("0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZ")
// generateRequestID - Generate request id
func generateRequestID() []byte {
// newRequestID generates and returns request ID string.
func newRequestID() string {
alpha := make([]byte, 16)
rand.Read(alpha)
for i := 0; i < 16; i++ {
alpha[i] = alphaNumericTable[alpha[i]%byte(len(alphaNumericTable))]
}
return alpha
return string(alpha)
}
// Write http common headers
func setCommonHeaders(w http.ResponseWriter) {
// Set unique request ID for each reply.
w.Header().Set("X-Amz-Request-Id", string(generateRequestID()))
w.Header().Set("X-Amz-Request-Id", newRequestID())
w.Header().Set("Server", ("Minio/" + ReleaseTag + " (" + runtime.GOOS + "; " + runtime.GOARCH + ")"))
w.Header().Set("Accept-Ranges", "bytes")
}
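
For reference, the `newRequestID` helper above can be exercised in isolation. This sketch copies the function from the diff (it ignores the `rand.Read` error, as in the diff) and adds only an illustrative `main`:

```go
package main

import (
	"crypto/rand"
	"fmt"
)

// Static alphanumeric table used for generating unique request ids.
var alphaNumericTable = []byte("0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZ")

// newRequestID generates and returns a 16-character request ID string.
func newRequestID() string {
	alpha := make([]byte, 16)
	rand.Read(alpha) // error ignored, as in the diff
	for i := 0; i < 16; i++ {
		alpha[i] = alphaNumericTable[alpha[i]%byte(len(alphaNumericTable))]
	}
	return string(alpha)
}

func main() {
	// Each call yields a fresh value suitable for an X-Amz-Request-Id header.
	fmt.Println(newRequestID())
}
```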


@@ -20,9 +20,9 @@ import (
"testing"
)
func TestGenerateRequestID(t *testing.T) {
func TestNewRequestID(t *testing.T) {
// Ensure that it returns an alphanumeric result of length 16.
var id = generateRequestID()
var id = newRequestID()
if len(id) != 16 {
t.Fail()


@@ -38,6 +38,7 @@ func getListObjectsV1Args(values url.Values) (prefix, marker, delimiter string,
// Parse bucket url queries for ListObjects V2.
func getListObjectsV2Args(values url.Values) (prefix, token, startAfter, delimiter string, fetchOwner bool, maxkeys int, encodingType string) {
prefix = values.Get("prefix")
token = values.Get("continuation-token")
startAfter = values.Get("start-after")
delimiter = values.Get("delimiter")
if values.Get("max-keys") != "" {
@@ -45,11 +46,8 @@ func getListObjectsV2Args(values url.Values) (prefix, token, startAfter, delimit
} else {
maxkeys = maxObjectList
}
fetchOwner = values.Get("fetch-owner") == "true"
encodingType = values.Get("encoding-type")
token = values.Get("continuation-token")
if values.Get("fetch-owner") == "true" {
fetchOwner = true
}
return
}
@@ -80,3 +78,21 @@ func getObjectResources(values url.Values) (uploadID string, partNumberMarker, m
encodingType = values.Get("encoding-type")
return
}
// Parse listen bucket notification resources.
func getListenBucketNotificationResources(values url.Values) (prefix []string, suffix []string, events []string) {
prefix = values["prefix"]
suffix = values["suffix"]
events = values["events"]
return prefix, suffix, events
}
// Validates filter values
func validateFilterValues(values []string) (err APIErrorCode) {
for _, value := range values {
if !IsValidObjectPrefix(value) {
return ErrFilterValueInvalid
}
}
return ErrNone
}
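
To see what `getListObjectsV2Args` pulls out of a request, here is a hedged sketch of the same `url.Values` lookups against a sample ListObjectsV2 query string; the default of 1000 mirrors `maxObjectList`, and the query string itself is invented for illustration.

```go
package main

import (
	"fmt"
	"net/url"
	"strconv"
)

func main() {
	// Sample ListObjectsV2 query string; values mirror the test cases below.
	raw := "prefix=photos%2F&continuation-token=token&start-after=start-after&delimiter=%2F&fetch-owner=true"
	values, err := url.ParseQuery(raw)
	if err != nil {
		panic(err)
	}
	prefix := values.Get("prefix")
	token := values.Get("continuation-token")
	startAfter := values.Get("start-after")
	delimiter := values.Get("delimiter")
	fetchOwner := values.Get("fetch-owner") == "true"
	maxKeys := 1000 // maxObjectList default when max-keys is absent
	if v := values.Get("max-keys"); v != "" {
		if n, err := strconv.Atoi(v); err == nil {
			maxKeys = n
		}
	}
	fmt.Println(prefix, token, startAfter, delimiter, fetchOwner, maxKeys)
	// Output: photos/ token start-after / true 1000
}
```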

cmd/api-resources_test.go (new file, 226 lines)

@@ -0,0 +1,226 @@
/*
* Minio Cloud Storage, (C) 2016 Minio, Inc.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package cmd
import (
"net/url"
"strings"
"testing"
)
// Test list objects resources V2.
func TestListObjectsV2Resources(t *testing.T) {
testCases := []struct {
values url.Values
prefix, token, startAfter, delimiter string
fetchOwner bool
maxKeys int
encodingType string
}{
{
values: url.Values{
"prefix": []string{"photos/"},
"continuation-token": []string{"token"},
"start-after": []string{"start-after"},
"delimiter": []string{"/"},
"fetch-owner": []string{"true"},
"max-keys": []string{"100"},
"encoding-type": []string{"gzip"},
},
prefix: "photos/",
token: "token",
startAfter: "start-after",
delimiter: "/",
fetchOwner: true,
maxKeys: 100,
encodingType: "gzip",
},
{
values: url.Values{
"prefix": []string{"photos/"},
"continuation-token": []string{"token"},
"start-after": []string{"start-after"},
"delimiter": []string{"/"},
"fetch-owner": []string{"true"},
"encoding-type": []string{"gzip"},
},
prefix: "photos/",
token: "token",
startAfter: "start-after",
delimiter: "/",
fetchOwner: true,
maxKeys: 1000,
encodingType: "gzip",
},
}
for i, testCase := range testCases {
prefix, token, startAfter, delimiter, fetchOwner, maxKeys, encodingType := getListObjectsV2Args(testCase.values)
if prefix != testCase.prefix {
t.Errorf("Test %d: Expected %s, got %s", i+1, testCase.prefix, prefix)
}
if token != testCase.token {
t.Errorf("Test %d: Expected %s, got %s", i+1, testCase.token, token)
}
if startAfter != testCase.startAfter {
t.Errorf("Test %d: Expected %s, got %s", i+1, testCase.startAfter, startAfter)
}
if delimiter != testCase.delimiter {
t.Errorf("Test %d: Expected %s, got %s", i+1, testCase.delimiter, delimiter)
}
if fetchOwner != testCase.fetchOwner {
t.Errorf("Test %d: Expected %t, got %t", i+1, testCase.fetchOwner, fetchOwner)
}
if maxKeys != testCase.maxKeys {
t.Errorf("Test %d: Expected %d, got %d", i+1, testCase.maxKeys, maxKeys)
}
if encodingType != testCase.encodingType {
t.Errorf("Test %d: Expected %s, got %s", i+1, testCase.encodingType, encodingType)
}
}
}
// Test list objects resources V1.
func TestListObjectsV1Resources(t *testing.T) {
testCases := []struct {
values url.Values
prefix, marker, delimiter string
maxKeys int
encodingType string
}{
{
values: url.Values{
"prefix": []string{"photos/"},
"marker": []string{"test"},
"delimiter": []string{"/"},
"max-keys": []string{"100"},
"encoding-type": []string{"gzip"},
},
prefix: "photos/",
marker: "test",
delimiter: "/",
maxKeys: 100,
encodingType: "gzip",
},
{
values: url.Values{
"prefix": []string{"photos/"},
"marker": []string{"test"},
"delimiter": []string{"/"},
"encoding-type": []string{"gzip"},
},
prefix: "photos/",
marker: "test",
delimiter: "/",
maxKeys: 1000,
encodingType: "gzip",
},
}
for i, testCase := range testCases {
prefix, marker, delimiter, maxKeys, encodingType := getListObjectsV1Args(testCase.values)
if prefix != testCase.prefix {
t.Errorf("Test %d: Expected %s, got %s", i+1, testCase.prefix, prefix)
}
if marker != testCase.marker {
t.Errorf("Test %d: Expected %s, got %s", i+1, testCase.marker, marker)
}
if delimiter != testCase.delimiter {
t.Errorf("Test %d: Expected %s, got %s", i+1, testCase.delimiter, delimiter)
}
if maxKeys != testCase.maxKeys {
t.Errorf("Test %d: Expected %d, got %d", i+1, testCase.maxKeys, maxKeys)
}
if encodingType != testCase.encodingType {
t.Errorf("Test %d: Expected %s, got %s", i+1, testCase.encodingType, encodingType)
}
}
}
// Validates extracting information for object resources.
func TestGetObjectsResources(t *testing.T) {
testCases := []struct {
values url.Values
uploadID string
partNumberMarker, maxParts int
encodingType string
}{
{
values: url.Values{
"uploadId": []string{"11123-11312312311231-12313"},
"part-number-marker": []string{"1"},
"max-parts": []string{"1000"},
"encoding-type": []string{"gzip"},
},
uploadID: "11123-11312312311231-12313",
partNumberMarker: 1,
maxParts: 1000,
encodingType: "gzip",
},
}
for i, testCase := range testCases {
uploadID, partNumberMarker, maxParts, encodingType := getObjectResources(testCase.values)
if uploadID != testCase.uploadID {
t.Errorf("Test %d: Expected %s, got %s", i+1, testCase.uploadID, uploadID)
}
if partNumberMarker != testCase.partNumberMarker {
t.Errorf("Test %d: Expected %d, got %d", i+1, testCase.partNumberMarker, partNumberMarker)
}
if maxParts != testCase.maxParts {
t.Errorf("Test %d: Expected %d, got %d", i+1, testCase.maxParts, maxParts)
}
if encodingType != testCase.encodingType {
t.Errorf("Test %d: Expected %s, got %s", i+1, testCase.encodingType, encodingType)
}
}
}
// Validates if filter values are correct
func TestValidateFilterValues(t *testing.T) {
testCases := []struct {
values []string
expectedError APIErrorCode
}{
{
values: []string{""},
expectedError: ErrNone,
},
{
values: []string{"", "prefix"},
expectedError: ErrNone,
},
{
values: []string{strings.Repeat("a", 1025)},
expectedError: ErrFilterValueInvalid,
},
{
values: []string{"a\\b"},
expectedError: ErrFilterValueInvalid,
},
{
values: []string{string([]byte{0xff, 0xfe, 0xfd})},
expectedError: ErrFilterValueInvalid,
},
}
for i, testCase := range testCases {
if actualError := validateFilterValues(testCase.values); actualError != testCase.expectedError {
t.Errorf("Test %d: Expected %d, got %d", i+1, testCase.expectedError, actualError)
}
}
}


@@ -14,12 +14,25 @@
* limitations under the License.
*/
// This file carries any specific responses constructed/necessary in
// multipart operations.
package cmd
import "net/http"
// Represents additional fields necessary for ErrPartTooSmall S3 error.
type completeMultipartAPIError struct {
// Proposed size represents uploaded size of the part.
ProposedSize int64
// Minimum size allowed represents the minimum size allowed per
// part. Defaults to 5MB.
MinSizeAllowed int64
// Part number of the part which is incorrect.
PartNumber int
// ETag of the part which is incorrect.
PartETag string
// Other default XML error responses.
APIErrorResponse
}
// writePartSmallErrorResponse - function is used specifically to
// construct a proper error response during CompleteMultipartUpload
// when one of the parts is < 5MB.
@@ -28,27 +41,16 @@ import "net/http"
// error. So we construct a new type which lies well within the scope
// of this function.
func writePartSmallErrorResponse(w http.ResponseWriter, r *http.Request, err PartTooSmall) {
// Represents additional fields necessary for ErrPartTooSmall S3 error.
type completeMultipartAPIError struct {
// Proposed size represents uploaded size of the part.
ProposedSize int64
// Minimum size allowed represents the minimum size allowed per
// part. Defaults to 5MB.
MinSizeAllowed int64
// Part number of the part which is incorrect.
PartNumber int
// ETag of the part which is incorrect.
PartETag string
// Other default XML error responses.
APIErrorResponse
}
apiError := getAPIError(toAPIErrorCode(err))
// Generate complete multipart error response.
errorResponse := getAPIErrorResponse(getAPIError(toAPIErrorCode(err)), r.URL.Path)
errorResponse := getAPIErrorResponse(apiError, r.URL.Path)
cmpErrResp := completeMultipartAPIError{err.PartSize, int64(5242880), err.PartNumber, err.PartETag, errorResponse}
encodedErrorResponse := encodeResponse(cmpErrResp)
// Write error body
// respond with 400 bad request.
w.WriteHeader(apiError.HTTPStatusCode)
// Write error body.
w.Write(encodedErrorResponse)
w.(http.Flusher).Flush()
}
// Add any other multipart specific responses here.
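
The `completeMultipartAPIError` type relies on plain struct embedding: the extra part fields are flattened alongside the embedded response fields in the encoded XML. A minimal illustration with a trimmed stand-in for `APIErrorResponse` (field names follow the diff, values are invented):

```go
package main

import (
	"encoding/xml"
	"fmt"
)

// Trimmed stand-in for the generic XML error response.
type APIErrorResponse struct {
	Code    string
	Message string
}

// Extra fields required by the PartTooSmall error, embedding the base response.
type completeMultipartAPIError struct {
	XMLName        xml.Name `xml:"Error"`
	ProposedSize   int64
	MinSizeAllowed int64
	PartNumber     int
	PartETag       string
	APIErrorResponse
}

func main() {
	resp := completeMultipartAPIError{
		ProposedSize:   1024,
		MinSizeAllowed: 5242880, // 5MB minimum part size
		PartNumber:     2,
		PartETag:       "abc123",
		APIErrorResponse: APIErrorResponse{
			Code:    "EntityTooSmall",
			Message: "Your proposed upload is smaller than the minimum allowed object size.",
		},
	}
	out, err := xml.MarshalIndent(resp, "", "  ")
	if err != nil {
		panic(err)
	}
	// The embedded fields are flattened into the same <Error> element.
	fmt.Println(string(out))
}
```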


@@ -24,10 +24,11 @@ import (
)
const (
timeFormatAMZ = "2006-01-02T15:04:05.000Z" // Reply date format
maxObjectList = 1000 // Limit number of objects in a listObjectsResponse.
maxUploadsList = 1000 // Limit number of uploads in a listUploadsResponse.
maxPartsList = 1000 // Limit number of parts in a listPartsResponse.
timeFormatAMZ = "2006-01-02T15:04:05Z" // Reply date format
timeFormatAMZLong = "2006-01-02T15:04:05.000Z" // Reply date format with millisecond precision.
maxObjectList = 1000 // Limit number of objects in a listObjectsResponse.
maxUploadsList = 1000 // Limit number of uploads in a listUploadsResponse.
maxPartsList = 1000 // Limit number of parts in a listPartsResponse.
)
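
A quick sketch of the two reply layouts above: the short form drops fractional seconds, the long form keeps millisecond precision. Only the sample timestamp is invented here.

```go
package main

import (
	"fmt"
	"time"
)

const (
	timeFormatAMZ     = "2006-01-02T15:04:05Z"     // short reply date format
	timeFormatAMZLong = "2006-01-02T15:04:05.000Z" // reply date format with millisecond precision
)

func main() {
	t := time.Date(2016, time.December, 12, 15, 43, 41, 123456789, time.UTC)
	fmt.Println(t.Format(timeFormatAMZ))     // 2016-12-12T15:43:41Z
	fmt.Println(t.Format(timeFormatAMZLong)) // 2016-12-12T15:43:41.123Z
}
```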
// LocationResponse - format for location response.
@@ -40,20 +41,9 @@ type LocationResponse struct {
type ListObjectsResponse struct {
XMLName xml.Name `xml:"http://s3.amazonaws.com/doc/2006-03-01/ ListBucketResult" json:"-"`
CommonPrefixes []CommonPrefix
Contents []Object
Delimiter string
// Encoding type used to encode object keys in the response.
EncodingType string
// A flag that indicates whether or not ListObjects returned all of the results
// that satisfied the search criteria.
IsTruncated bool
Marker string
MaxKeys int
Name string
Name string
Prefix string
Marker string
// When response is truncated (the IsTruncated element value in the response
// is true), you can use the key name in this field as marker in the subsequent
@@ -62,29 +52,28 @@ type ListObjectsResponse struct {
// specified. If response does not include the NextMarker and it is truncated,
// you can use the value of the last Key in the response as the marker in the
// subsequent request to get the next set of object keys.
NextMarker string
Prefix string
NextMarker string `xml:"NextMarker,omitempty"`
MaxKeys int
Delimiter string
// A flag that indicates whether or not ListObjects returned all of the results
// that satisfied the search criteria.
IsTruncated bool
Contents []Object
CommonPrefixes []CommonPrefix
// Encoding type used to encode object keys in the response.
EncodingType string `xml:"EncodingType,omitempty"`
}
// ListObjectsV2Response - format for list objects response.
type ListObjectsV2Response struct {
XMLName xml.Name `xml:"http://s3.amazonaws.com/doc/2006-03-01/ ListBucketResult" json:"-"`
CommonPrefixes []CommonPrefix
Contents []Object
Delimiter string
// Encoding type used to encode object keys in the response.
EncodingType string
// A flag that indicates whether or not ListObjects returned all of the results
// that satisfied the search criteria.
IsTruncated bool
StartAfter string
MaxKeys int
Name string
Name string
Prefix string
StartAfter string `xml:"StartAfter,omitempty"`
// When response is truncated (the IsTruncated element value in the response
// is true), you can use the key name in this field as marker in the subsequent
// request to get next set of objects. Server lists objects in alphabetical
@@ -92,16 +81,28 @@ type ListObjectsV2Response struct {
// specified. If response does not include the NextMarker and it is truncated,
// you can use the value of the last Key in the response as the marker in the
// subsequent request to get the next set of object keys.
ContinuationToken string
NextContinuationToken string
Prefix string
ContinuationToken string `xml:"ContinuationToken,omitempty"`
NextContinuationToken string `xml:"NextContinuationToken,omitempty"`
KeyCount int
MaxKeys int
Delimiter string
// A flag that indicates whether or not ListObjects returned all of the results
// that satisfied the search criteria.
IsTruncated bool
Contents []Object
CommonPrefixes []CommonPrefix
// Encoding type used to encode object keys in the response.
EncodingType string `xml:"EncodingType,omitempty"`
}
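
The `omitempty` options added to the response structs keep optional elements such as `NextContinuationToken` out of the XML when unset. A hedged demonstration with a trimmed stand-in struct:

```go
package main

import (
	"encoding/xml"
	"fmt"
)

// Trimmed stand-in for ListObjectsV2Response.
type listResult struct {
	XMLName               xml.Name `xml:"ListBucketResult"`
	Name                  string
	ContinuationToken     string `xml:"ContinuationToken,omitempty"`
	NextContinuationToken string `xml:"NextContinuationToken,omitempty"`
	IsTruncated           bool
}

func main() {
	// Unset optional fields are omitted from the encoded XML entirely.
	out, err := xml.MarshalIndent(listResult{Name: "testbucket"}, "", "  ")
	if err != nil {
		panic(err)
	}
	fmt.Println(string(out))
	// Output:
	// <ListBucketResult>
	//   <Name>testbucket</Name>
	//   <IsTruncated>false</IsTruncated>
	// </ListBucketResult>
}
```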
// Part container for part metadata.
type Part struct {
PartNumber int
ETag string
LastModified string
ETag string
Size int64
}
@@ -137,23 +138,29 @@ type ListMultipartUploadsResponse struct {
UploadIDMarker string `xml:"UploadIdMarker"`
NextKeyMarker string
NextUploadIDMarker string `xml:"NextUploadIdMarker"`
EncodingType string
Delimiter string
Prefix string
EncodingType string `xml:"EncodingType,omitempty"`
MaxUploads int
IsTruncated bool
Uploads []Upload `xml:"Upload"`
Prefix string
Delimiter string
CommonPrefixes []CommonPrefix
// List of pending uploads.
Uploads []Upload `xml:"Upload"`
// Delimited common prefixes.
CommonPrefixes []CommonPrefix
}
// ListBucketsResponse - format for list buckets response
type ListBucketsResponse struct {
XMLName xml.Name `xml:"http://s3.amazonaws.com/doc/2006-03-01/ ListAllMyBucketsResult" json:"-"`
Owner Owner
// Container for one or more buckets.
Buckets struct {
Buckets []Bucket `xml:"Bucket"`
} // Buckets are nested
Owner Owner
}
// Upload container for in progress multipart upload
@@ -179,23 +186,23 @@ type Bucket struct {
// Object container for object metadata
type Object struct {
ETag string
Key string
LastModified string // time string of format "2006-01-02T15:04:05.000Z"
ETag string
Size int64
// Owner of the object.
Owner Owner
// The class of storage used to store the object.
StorageClass string
}
// CopyObjectResponse container returns ETag and LastModified of the
// successfully copied object
// CopyObjectResponse container returns ETag and LastModified of the successfully copied object
type CopyObjectResponse struct {
XMLName xml.Name `xml:"http://s3.amazonaws.com/doc/2006-03-01/ CopyObjectResult" json:"-"`
ETag string
LastModified string // time string of format "2006-01-02T15:04:05.000Z"
LastModified string // time string of format "2006-01-02T15:04:05.000Z"
ETag string // md5sum of the copied object.
}
// Initiator inherit from Owner struct, fields are same
@@ -254,12 +261,8 @@ func getObjectLocation(bucketName string, key string) string {
return "/" + bucketName + "/" + key
}
// takes an array of Bucketmetadata information for serialization
// input:
// array of bucket metadata
//
// output:
// populated struct that can be serialized to match xml and json api spec output
// generates ListBucketsResponse from array of BucketInfo which can be
// serialized to match XML and JSON API spec output.
func generateListBucketsResponse(buckets []BucketInfo) ListBucketsResponse {
var listbuckets []Bucket
var data = ListBucketsResponse{}
@@ -271,7 +274,7 @@ func generateListBucketsResponse(buckets []BucketInfo) ListBucketsResponse {
for _, bucket := range buckets {
var listbucket = Bucket{}
listbucket.Name = bucket.Name
listbucket.CreationDate = bucket.Created.Format(timeFormatAMZ)
listbucket.CreationDate = bucket.Created.Format(timeFormatAMZLong)
listbuckets = append(listbuckets, listbucket)
}
@@ -297,7 +300,7 @@ func generateListObjectsV1Response(bucket, prefix, marker, delimiter string, max
continue
}
content.Key = object.Name
content.LastModified = object.ModTime.UTC().Format(timeFormatAMZ)
content.LastModified = object.ModTime.UTC().Format(timeFormatAMZLong)
if object.MD5Sum != "" {
content.ETag = "\"" + object.MD5Sum + "\""
}
@@ -344,7 +347,7 @@ func generateListObjectsV2Response(bucket, prefix, token, startAfter, delimiter
continue
}
content.Key = object.Name
content.LastModified = object.ModTime.UTC().Format(timeFormatAMZ)
content.LastModified = object.ModTime.UTC().Format(timeFormatAMZLong)
if object.MD5Sum != "" {
content.ETag = "\"" + object.MD5Sum + "\""
}
@@ -370,18 +373,19 @@ func generateListObjectsV2Response(bucket, prefix, token, startAfter, delimiter
prefixes = append(prefixes, prefixItem)
}
data.CommonPrefixes = prefixes
data.KeyCount = len(data.Contents) + len(data.CommonPrefixes)
return data
}
// generateCopyObjectResponse
// generates CopyObjectResponse from etag and lastModified time.
func generateCopyObjectResponse(etag string, lastModified time.Time) CopyObjectResponse {
return CopyObjectResponse{
ETag: "\"" + etag + "\"",
LastModified: lastModified.UTC().Format(timeFormatAMZ),
LastModified: lastModified.UTC().Format(timeFormatAMZLong),
}
}
// generateInitiateMultipartUploadResponse
// generates InitiateMultipartUploadResponse for given bucket, key and uploadID.
func generateInitiateMultipartUploadResponse(bucket, key, uploadID string) InitiateMultipartUploadResponse {
return InitiateMultipartUploadResponse{
Bucket: bucket,
@@ -390,7 +394,7 @@ func generateInitiateMultipartUploadResponse(bucket, key, uploadID string) Initi
}
}
// generateCompleteMultipartUploadResponse
// generates CompleteMultipartUploadResponse for given bucket, key, location and ETag.
func generateCompleteMultpartUploadResponse(bucket, key, location, etag string) CompleteMultipartUploadResponse {
return CompleteMultipartUploadResponse{
Location: location,
@@ -400,7 +404,7 @@ func generateCompleteMultpartUploadResponse(bucket, key, location, etag string)
}
}
// generateListPartsResult
// generates ListPartsResponse from ListPartsInfo.
func generateListPartsResponse(partsInfo ListPartsInfo) ListPartsResponse {
// TODO - support EncodingType in xml decoding
listPartsResponse := ListPartsResponse{}
@@ -424,13 +428,13 @@ func generateListPartsResponse(partsInfo ListPartsInfo) ListPartsResponse {
newPart.PartNumber = part.PartNumber
newPart.ETag = "\"" + part.ETag + "\""
newPart.Size = part.Size
newPart.LastModified = part.LastModified.UTC().Format(timeFormatAMZ)
newPart.LastModified = part.LastModified.UTC().Format(timeFormatAMZLong)
listPartsResponse.Parts[index] = newPart
}
return listPartsResponse
}
// generateListMultipartUploadsResponse
// generates ListMultipartUploadsResponse for given bucket and ListMultipartsInfo.
func generateListMultipartUploadsResponse(bucket string, multipartsInfo ListMultipartsInfo) ListMultipartUploadsResponse {
listMultipartUploadsResponse := ListMultipartUploadsResponse{}
listMultipartUploadsResponse.Bucket = bucket
@@ -454,7 +458,7 @@ func generateListMultipartUploadsResponse(bucket string, multipartsInfo ListMult
newUpload := Upload{}
newUpload.UploadID = upload.UploadID
newUpload.Key = upload.Object
newUpload.Initiated = upload.Initiated.UTC().Format(timeFormatAMZ)
newUpload.Initiated = upload.Initiated.UTC().Format(timeFormatAMZLong)
listMultipartUploadsResponse.Uploads[index] = newUpload
}
return listMultipartUploadsResponse
@@ -489,18 +493,18 @@ func writeSuccessNoContent(w http.ResponseWriter) {
// writeErrorResponse writes error headers
func writeErrorResponse(w http.ResponseWriter, req *http.Request, errorCode APIErrorCode, resource string) {
error := getAPIError(errorCode)
apiError := getAPIError(errorCode)
// set common headers
setCommonHeaders(w)
// write Header
w.WriteHeader(error.HTTPStatusCode)
w.WriteHeader(apiError.HTTPStatusCode)
writeErrorResponseNoHeader(w, req, errorCode, resource)
}
func writeErrorResponseNoHeader(w http.ResponseWriter, req *http.Request, errorCode APIErrorCode, resource string) {
error := getAPIError(errorCode)
apiError := getAPIError(errorCode)
// Generate error response.
errorResponse := getAPIErrorResponse(error, resource)
errorResponse := getAPIErrorResponse(apiError, resource)
encodedErrorResponse := encodeResponse(errorResponse)
// HEAD should have no body, do not attempt to write to it
if req.Method != "HEAD" {


@@ -20,11 +20,16 @@ import router "github.com/gorilla/mux"
// objectAPIHandlers implements and provides http handlers for S3 API.
type objectAPIHandlers struct {
ObjectAPI ObjectLayer
ObjectAPI func() ObjectLayer
}
// registerAPIRouter - registers S3 compatible APIs.
func registerAPIRouter(mux *router.Router, api objectAPIHandlers) {
func registerAPIRouter(mux *router.Router) {
// Initialize API.
api := objectAPIHandlers{
ObjectAPI: newObjectLayerFn,
}
// API Router
apiRouter := mux.NewRoute().PathPrefix("/").Subrouter()
@@ -63,7 +68,7 @@ func registerAPIRouter(mux *router.Router, api objectAPIHandlers) {
// GetBucketNotification
bucket.Methods("GET").HandlerFunc(api.GetBucketNotificationHandler).Queries("notification", "")
// ListenBucketNotification
bucket.Methods("GET").HandlerFunc(api.ListenBucketNotificationHandler).Queries("notificationARN", "{notificationARN:.*}")
bucket.Methods("GET").HandlerFunc(api.ListenBucketNotificationHandler).Queries("events", "{events:.*}")
// ListMultipartUploads
bucket.Methods("GET").HandlerFunc(api.ListMultipartUploadsHandler).Queries("uploads", "")
// ListObjectsV2
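
The router now holds a getter (`func() ObjectLayer`) instead of a concrete layer, so each handler fetches the layer at request time and can answer with `ErrServerNotInitialized` until the backend is ready. A sketch of that pattern with stand-in types; `globalObjectAPI` and the mutex are assumptions here, only `newObjectLayerFn` appears in the diff.

```go
package main

import (
	"fmt"
	"sync"
)

// Stand-in for the object layer interface.
type ObjectLayer interface{ Name() string }

type fsObjects struct{}

func (fsObjects) Name() string { return "fs" }

var (
	globalObjLayerMutex sync.RWMutex // assumption: guards the global layer
	globalObjectAPI     ObjectLayer
)

// newObjectLayerFn returns the current object layer, or nil before initialization.
func newObjectLayerFn() ObjectLayer {
	globalObjLayerMutex.RLock()
	defer globalObjLayerMutex.RUnlock()
	return globalObjectAPI
}

type objectAPIHandlers struct {
	ObjectAPI func() ObjectLayer
}

// handle fetches the layer at request time instead of capturing it at router setup.
func (h objectAPIHandlers) handle() {
	objAPI := h.ObjectAPI()
	if objAPI == nil {
		fmt.Println("reply: XMinioServerNotInitialized")
		return
	}
	fmt.Println("serving via", objAPI.Name())
}

func main() {
	api := objectAPIHandlers{ObjectAPI: newObjectLayerFn}
	api.handle() // reply: XMinioServerNotInitialized

	globalObjLayerMutex.Lock()
	globalObjectAPI = fsObjects{}
	globalObjLayerMutex.Unlock()

	api.handle() // serving via fs
}
```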


@@ -18,10 +18,6 @@ package cmd
import (
"bytes"
"crypto/md5"
"crypto/sha256"
"encoding/base64"
"encoding/hex"
"io/ioutil"
"net/http"
"strings"
@@ -42,12 +38,24 @@ func isRequestSignatureV4(r *http.Request) bool {
return strings.HasPrefix(r.Header.Get("Authorization"), signV4Algorithm)
}
// Verify if request has AWS Signature Version '2'.
func isRequestSignatureV2(r *http.Request) bool {
return (!strings.HasPrefix(r.Header.Get("Authorization"), signV4Algorithm) &&
strings.HasPrefix(r.Header.Get("Authorization"), signV2Algorithm))
}
// Verify if request has AWS PreSign Version '4'.
func isRequestPresignedSignatureV4(r *http.Request) bool {
_, ok := r.URL.Query()["X-Amz-Credential"]
return ok
}
// Verify request has AWS PreSign Version '2'.
func isRequestPresignedSignatureV2(r *http.Request) bool {
_, ok := r.URL.Query()["AWSAccessKeyId"]
return ok
}
// Verify if request has AWS Post policy Signature Version '4'.
func isRequestPostPolicySignatureV4(r *http.Request) bool {
return strings.Contains(r.Header.Get("Content-Type"), "multipart/form-data") && r.Method == "POST"
@@ -66,15 +74,21 @@ const (
authTypeUnknown authType = iota
authTypeAnonymous
authTypePresigned
authTypePresignedV2
authTypePostPolicy
authTypeStreamingSigned
authTypeSigned
authTypeSignedV2
authTypeJWT
)
// Get request authentication type.
func getRequestAuthType(r *http.Request) authType {
if isRequestSignStreamingV4(r) {
if isRequestSignatureV2(r) {
return authTypeSignedV2
} else if isRequestPresignedSignatureV2(r) {
return authTypePresignedV2
} else if isRequestSignStreamingV4(r) {
return authTypeStreamingSigned
} else if isRequestSignatureV4(r) {
return authTypeSigned
@@ -90,70 +104,90 @@ func getRequestAuthType(r *http.Request) authType {
return authTypeUnknown
}
// sum256 calculate sha256 sum for an input byte array
func sum256(data []byte) []byte {
hash := sha256.New()
hash.Write(data)
return hash.Sum(nil)
func checkRequestAuthType(r *http.Request, bucket, policyAction, region string) APIErrorCode {
reqAuthType := getRequestAuthType(r)
switch reqAuthType {
case authTypePresignedV2, authTypeSignedV2:
// Signature V2 validation.
s3Error := isReqAuthenticatedV2(r)
if s3Error != ErrNone {
errorIf(errSignatureMismatch, dumpRequest(r))
}
return s3Error
case authTypeSigned, authTypePresigned:
s3Error := isReqAuthenticated(r, region)
if s3Error != ErrNone {
errorIf(errSignatureMismatch, dumpRequest(r))
}
return s3Error
}
if reqAuthType == authTypeAnonymous && policyAction != "" {
// http://docs.aws.amazon.com/AmazonS3/latest/dev/using-with-s3-actions.html
return enforceBucketPolicy(bucket, policyAction, r.URL)
}
// By default return ErrAccessDenied
return ErrAccessDenied
}
// sumMD5 calculate md5 sum for an input byte array
func sumMD5(data []byte) []byte {
hash := md5.New()
hash.Write(data)
return hash.Sum(nil)
// Verify if request has valid AWS Signature Version '2'.
func isReqAuthenticatedV2(r *http.Request) (s3Error APIErrorCode) {
if isRequestSignatureV2(r) {
return doesSignV2Match(r)
}
return doesPresignV2SignatureMatch(r)
}
func reqSignatureV4Verify(r *http.Request) (s3Error APIErrorCode) {
sha256sum := r.Header.Get("X-Amz-Content-Sha256")
// Skips calculating sha256 on the payload on the server,
// if the client requested it.
if skipContentSha256Cksum(r) {
sha256sum = unsignedPayload
}
if isRequestSignatureV4(r) {
return doesSignatureMatch(sha256sum, r, serverConfig.GetRegion())
} else if isRequestPresignedSignatureV4(r) {
return doesPresignedSignatureMatch(sha256sum, r, serverConfig.GetRegion())
}
return ErrAccessDenied
}
// Verify if request has valid AWS Signature Version '4'.
func isReqAuthenticated(r *http.Request) (s3Error APIErrorCode) {
func isReqAuthenticated(r *http.Request, region string) (s3Error APIErrorCode) {
if r == nil {
return ErrInternalError
}
payload, err := ioutil.ReadAll(r.Body)
if err != nil {
errorIf(err, "Unable to read request body for signature verification")
return ErrInternalError
}
// Verify Content-Md5, if payload is set.
if r.Header.Get("Content-Md5") != "" {
if r.Header.Get("Content-Md5") != base64.StdEncoding.EncodeToString(sumMD5(payload)) {
if r.Header.Get("Content-Md5") != getMD5HashBase64(payload) {
return ErrBadDigest
}
}
// Populate back the payload.
r.Body = ioutil.NopCloser(bytes.NewReader(payload))
validateRegion := true // Validate region.
// Skips calculating sha256 on the payload on server, if client requested for it.
var sha256sum string
// Skips calculating sha256 on the payload on the server,
// if the client requested it.
if skipContentSha256Cksum(r) {
sha256sum = unsignedPayload
} else {
sha256sum = hex.EncodeToString(sum256(payload))
sha256sum = getSHA256Hash(payload)
}
if isRequestSignatureV4(r) {
return doesSignatureMatch(sha256sum, r, validateRegion)
return doesSignatureMatch(sha256sum, r, region)
} else if isRequestPresignedSignatureV4(r) {
return doesPresignedSignatureMatch(sha256sum, r, validateRegion)
return doesPresignedSignatureMatch(sha256sum, r, region)
}
return ErrAccessDenied
}
// checkAuth - checks for conditions satisfying the authorization of
// the incoming request. Request should be either Presigned or Signed
// in accordance with AWS S3 Signature V4 requirements. ErrAccessDenied
// is returned for unhandled auth type. Once the auth type is identified
// request headers and body are used to calculate the signature validating
// the client signature present in request.
func checkAuth(r *http.Request) APIErrorCode {
aType := getRequestAuthType(r)
if aType != authTypePresigned && aType != authTypeSigned {
// For all unhandled auth types return error AccessDenied.
return ErrAccessDenied
}
// Validates the request for both Presigned and Signed.
return isReqAuthenticated(r)
}
// authHandler - handles all the incoming authorization headers and validates them if possible.
type authHandler struct {
handler http.Handler
@@ -168,7 +202,9 @@ func setAuthHandler(h http.Handler) http.Handler {
var supportedS3AuthTypes = map[authType]struct{}{
authTypeAnonymous: {},
authTypePresigned: {},
authTypePresignedV2: {},
authTypeSigned: {},
authTypeSignedV2: {},
authTypePostPolicy: {},
authTypeStreamingSigned: {},
}
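
A condensed sketch of the V2 detection added above. The algorithm prefix constants are assumptions here (`AWS` for Signature V2, `AWS4-HMAC-SHA256` for V4); note that the V4 prefix also begins with `AWS`, which is why the V2 check excludes V4 first.

```go
package main

import (
	"fmt"
	"net/http"
	"net/url"
	"strings"
)

const (
	signV2Algorithm = "AWS"              // assumption: V2 Authorization prefix
	signV4Algorithm = "AWS4-HMAC-SHA256" // V4 Authorization prefix
)

// isRequestSignatureV2 - true for "AWS ..." headers that are not V4;
// the V4 prefix also begins with "AWS", hence the exclusion.
func isRequestSignatureV2(r *http.Request) bool {
	auth := r.Header.Get("Authorization")
	return !strings.HasPrefix(auth, signV4Algorithm) && strings.HasPrefix(auth, signV2Algorithm)
}

// isRequestPresignedSignatureV2 - keys off the AWSAccessKeyId query parameter.
func isRequestPresignedSignatureV2(r *http.Request) bool {
	_, ok := r.URL.Query()["AWSAccessKeyId"]
	return ok
}

func main() {
	v2 := &http.Request{Header: http.Header{"Authorization": {"AWS AKIDEXAMPLE:signature"}}, URL: &url.URL{}}
	v4 := &http.Request{Header: http.Header{"Authorization": {"AWS4-HMAC-SHA256 Credential=AKIDEXAMPLE"}}, URL: &url.URL{}}
	presignedV2 := &http.Request{Header: http.Header{}, URL: &url.URL{RawQuery: "AWSAccessKeyId=AKIDEXAMPLE"}}

	fmt.Println(isRequestSignatureV2(v2))                   // true
	fmt.Println(isRequestSignatureV2(v4))                   // false: V4, not V2
	fmt.Println(isRequestPresignedSignatureV2(presignedV2)) // true
}
```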


@@ -20,9 +20,104 @@ import (
"bytes"
"io"
"net/http"
"net/url"
"testing"
)
// Test get request auth type.
func TestGetRequestAuthType(t *testing.T) {
type testCase struct {
req *http.Request
authT authType
}
testCases := []testCase{
// Test case - 1
// Check for generic signature v4 header.
{
req: &http.Request{
URL: &url.URL{
Host: "localhost:9000",
Scheme: "http",
Path: "/",
},
Header: http.Header{
"Authorization": []string{"AWS4-HMAC-SHA256 <cred_string>"},
"X-Amz-Content-Sha256": []string{streamingContentSHA256},
},
Method: "PUT",
},
authT: authTypeStreamingSigned,
},
// Test case - 2
// Check for JWT header.
{
req: &http.Request{
URL: &url.URL{
Host: "localhost:9000",
Scheme: "http",
Path: "/",
},
Header: http.Header{
"Authorization": []string{"Bearer 12313123"},
},
},
authT: authTypeJWT,
},
// Test case - 3
// Empty authorization header.
{
req: &http.Request{
URL: &url.URL{
Host: "localhost:9000",
Scheme: "http",
Path: "/",
},
Header: http.Header{
"Authorization": []string{""},
},
},
authT: authTypeUnknown,
},
// Test case - 4
// Check for presigned.
{
req: &http.Request{
URL: &url.URL{
Host: "localhost:9000",
Scheme: "http",
Path: "/",
RawQuery: "X-Amz-Credential=EXAMPLEINVALIDEXAMPL%2Fs3%2F20160314%2Fus-east-1",
},
},
authT: authTypePresigned,
},
// Test case - 5
// Check for post policy.
{
req: &http.Request{
URL: &url.URL{
Host: "localhost:9000",
Scheme: "http",
Path: "/",
},
Header: http.Header{
"Content-Type": []string{"multipart/form-data"},
},
Method: "POST",
},
authT: authTypePostPolicy,
},
}
// Tests all request auth types.
for i, testc := range testCases {
authT := getRequestAuthType(testc.req)
if authT != testc.authT {
t.Errorf("Test %d: Expected %d, got %d", i+1, testc.authT, authT)
}
}
}
// Test all s3 supported auth types.
func TestS3SupportedAuthType(t *testing.T) {
type testCase struct {
@@ -56,19 +151,29 @@ func TestS3SupportedAuthType(t *testing.T) {
authT: authTypeStreamingSigned,
pass: true,
},
// Test 6 - JWT is not supported s3 type.
// Test 6 - supported s3 type with signature v2.
{
authT: authTypeSignedV2,
pass: true,
},
// Test 7 - supported s3 type with presign v2.
{
authT: authTypePresignedV2,
pass: true,
},
// Test 8 - JWT is not supported s3 type.
{
authT: authTypeJWT,
pass: false,
},
// Test 7 - unknown auth header is not supported s3 type.
// Test 9 - unknown auth header is not supported s3 type.
{
authT: authTypeUnknown,
pass: false,
},
// Test 8 - some new auth type is not supported s3 type.
// Test 10 - some new auth type is not supported s3 type.
{
authT: authType(7),
authT: authType(9),
pass: false,
},
}
@@ -93,7 +198,7 @@ func TestIsRequestUnsignedPayload(t *testing.T) {
// Test case - 2.
// Test case with "X-Amz-Content-Sha256" header set to "UNSIGNED-PAYLOAD"
// The payload is flagged as unsigned when the "X-Amz-Content-Sha256" header is set to "UNSIGNED-PAYLOAD".
{"UNSIGNED-PAYLOAD", true},
{unsignedPayload, true},
// Test case - 3.
// set to a random value.
{"abcd", false},
@@ -115,6 +220,39 @@ func TestIsRequestUnsignedPayload(t *testing.T) {
}
}
func TestIsRequestPresignedSignatureV2(t *testing.T) {
testCases := []struct {
inputQueryKey string
inputQueryValue string
expectedResult bool
}{
// Test case - 1.
// Test case with no query parameters set.
{"", "", false},
// Test case - 2.
{"AWSAccessKeyId", "", true},
// Test case - 3.
{"X-Amz-Content-Sha256", "", false},
}
for i, testCase := range testCases {
// creating an input HTTP request.
// Only the query parameters are relevant for this particular test.
inputReq, err := http.NewRequest("GET", "http://example.com", nil)
if err != nil {
t.Fatalf("Error initializing input HTTP request: %v", err)
}
q := inputReq.URL.Query()
q.Add(testCase.inputQueryKey, testCase.inputQueryValue)
inputReq.URL.RawQuery = q.Encode()
actualResult := isRequestPresignedSignatureV2(inputReq)
if testCase.expectedResult != actualResult {
t.Errorf("Test %d: Expected the result to `%v`, but instead got `%v`", i+1, testCase.expectedResult, actualResult)
}
}
}
// TestIsRequestPresignedSignatureV4 - Test validates the logic for presign signature version v4 detection.
func TestIsRequestPresignedSignatureV4(t *testing.T) {
testCases := []struct {
@@ -163,7 +301,7 @@ func mustNewRequest(method string, urlStr string, contentLength int64, body io.R
func mustNewSignedRequest(method string, urlStr string, contentLength int64, body io.ReadSeeker, t *testing.T) *http.Request {
req := mustNewRequest(method, urlStr, contentLength, body, t)
cred := serverConfig.GetCredential()
if err := signRequest(req, cred.AccessKeyID, cred.SecretAccessKey); err != nil {
if err := signRequestV4(req, cred.AccessKeyID, cred.SecretAccessKey); err != nil {
t.Fatalf("Unable to inititalized new signed http request %s", err)
}
return req
@@ -199,7 +337,7 @@ func TestIsReqAuthenticated(t *testing.T) {
if testCase.s3Error == ErrBadDigest {
testCase.req.Header.Set("Content-Md5", "garbage")
}
if s3Error := isReqAuthenticated(testCase.req); s3Error != testCase.s3Error {
if s3Error := isReqAuthenticated(testCase.req, serverConfig.GetRegion()); s3Error != testCase.s3Error {
t.Fatalf("Unexpected s3error returned wanted %d, got %d", testCase.s3Error, s3Error)
}
}

cmd/auth-rpc-client.go (new file, 206 lines)

@@ -0,0 +1,206 @@
/*
* Minio Cloud Storage, (C) 2016 Minio, Inc.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package cmd
import (
"fmt"
"net/rpc"
"sync"
"time"
jwtgo "github.com/dgrijalva/jwt-go"
)
// GenericReply represents any generic RPC reply.
type GenericReply struct{}
// GenericArgs represents any generic RPC arguments.
type GenericArgs struct {
Token string // Used to authenticate every RPC call.
// Used to verify if the RPC call was issued between
// the same Login() and disconnect event pair.
Timestamp time.Time
// Indicates if args should be sent to remote peers as well.
Remote bool
}
// SetToken - sets the token to the supplied value.
func (ga *GenericArgs) SetToken(token string) {
ga.Token = token
}
// SetTimestamp - sets the timestamp to the supplied value.
func (ga *GenericArgs) SetTimestamp(tstamp time.Time) {
ga.Timestamp = tstamp
}
// RPCLoginArgs - login username and password for RPC.
type RPCLoginArgs struct {
Username string
Password string
}
// RPCLoginReply - login reply provides generated token to be used
// with subsequent requests.
type RPCLoginReply struct {
Token string
Timestamp time.Time
ServerVersion string
}
// Validates if incoming token is valid.
func isRPCTokenValid(tokenStr string) bool {
jwt, err := newJWT(defaultInterNodeJWTExpiry, serverConfig.GetCredential())
if err != nil {
errorIf(err, "Unable to initialize JWT")
return false
}
token, err := jwtgo.Parse(tokenStr, func(token *jwtgo.Token) (interface{}, error) {
if _, ok := token.Method.(*jwtgo.SigningMethodHMAC); !ok {
return nil, fmt.Errorf("Unexpected signing method: %v", token.Header["alg"])
}
return []byte(jwt.SecretAccessKey), nil
})
if err != nil {
errorIf(err, "Unable to parse JWT token string")
return false
}
// Return if token is valid.
return token.Valid
}
// Auth config represents authentication credentials and Login method name to be used
// for fetching JWT tokens from the RPC server.
type authConfig struct {
accessKey string // Username for the server.
secretKey string // Password for the server.
secureConn bool // Ask for a secured connection
address string // Network address path of RPC server.
path string // Network path for HTTP dial.
loginMethod string // RPC service name for authenticating using JWT
}
// AuthRPCClient is a wrapper type for RPCClient which provides JWT based authentication across reconnects.
type AuthRPCClient struct {
mu sync.Mutex
config *authConfig
rpc *RPCClient // reconnect'able rpc client built on top of net/rpc Client
isLoggedIn bool // Indicates if the auth client has been logged in and token is valid.
serverToken string // Disk rpc JWT based token.
serverVersion string // Server version exchanged by the RPC.
}
// newAuthClient - returns a jwt based authenticated (go) rpc client, which does automatic reconnect.
func newAuthClient(cfg *authConfig) *AuthRPCClient {
return &AuthRPCClient{
// Save the config.
config: cfg,
// Initialize a new reconnectable rpc client.
rpc: newClient(cfg.address, cfg.path, cfg.secureConn),
// Allocated auth client not logged in yet.
isLoggedIn: false,
}
}
// Close - closes underlying rpc connection.
func (authClient *AuthRPCClient) Close() error {
authClient.mu.Lock()
// reset token on closing a connection
authClient.isLoggedIn = false
authClient.mu.Unlock()
return authClient.rpc.Close()
}
// Login - a jwt based authentication is performed with rpc server.
func (authClient *AuthRPCClient) Login() (err error) {
authClient.mu.Lock()
// As soon as the function returns unlock,
defer authClient.mu.Unlock()
// Return if already logged in.
if authClient.isLoggedIn {
return nil
}
reply := RPCLoginReply{}
if err = authClient.rpc.Call(authClient.config.loginMethod, RPCLoginArgs{
Username: authClient.config.accessKey,
Password: authClient.config.secretKey,
}, &reply); err != nil {
return err
}
// Validate if versions do indeed match.
if reply.ServerVersion != Version {
return errServerVersionMismatch
}
// Validate if server timestamp is skewed.
curTime := time.Now().UTC()
if curTime.Sub(reply.Timestamp) > globalMaxSkewTime {
return errServerTimeMismatch
}
// Set token, time stamp as received from a successful login call.
authClient.serverToken = reply.Token
authClient.serverVersion = reply.ServerVersion
authClient.isLoggedIn = true
return nil
}
// Call - If rpc connection isn't established yet since previous disconnect,
// connection is established, a jwt authenticated login is performed and then
// the call is performed.
func (authClient *AuthRPCClient) Call(serviceMethod string, args interface {
SetToken(token string)
SetTimestamp(tstamp time.Time)
}, reply interface{}) (err error) {
// On successful login, attempt the call.
if err = authClient.Login(); err == nil {
// Set token and timestamp before the rpc call.
args.SetToken(authClient.serverToken)
args.SetTimestamp(time.Now().UTC())
// Call the underlying rpc.
err = authClient.rpc.Call(serviceMethod, args, reply)
// Invalidate token, and mark it for re-login on subsequent reconnect.
if err == rpc.ErrShutdown {
authClient.mu.Lock()
authClient.isLoggedIn = false
authClient.mu.Unlock()
}
}
return err
}
// Node returns the node (network address) of the connection
func (authClient *AuthRPCClient) Node() (node string) {
if authClient.rpc != nil {
node = authClient.rpc.node
}
return node
}
// RPCPath returns the RPC path of the connection
func (authClient *AuthRPCClient) RPCPath() (rpcPath string) {
if authClient.rpc != nil {
rpcPath = authClient.rpc.rpcPath
}
return rpcPath
}
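
Token checking in `isRPCTokenValid` above follows the usual jwt-go shape: parse with a key callback that rejects any non-HMAC signing method. A hedged stand-alone sketch of issuing and validating such a token; the secret and expiry are illustrative, not the server's actual values.

```go
package main

import (
	"fmt"
	"time"

	jwtgo "github.com/dgrijalva/jwt-go"
)

const secretKey = "minio-secret-key" // illustrative secret, not a real credential

// issueToken signs a short-lived HMAC token, as the RPC login handler would.
func issueToken() (string, error) {
	token := jwtgo.NewWithClaims(jwtgo.SigningMethodHS256, jwtgo.MapClaims{
		"exp": time.Now().UTC().Add(15 * time.Minute).Unix(),
	})
	return token.SignedString([]byte(secretKey))
}

// isTokenValid mirrors the parse-and-check-method shape of isRPCTokenValid.
func isTokenValid(tokenStr string) bool {
	token, err := jwtgo.Parse(tokenStr, func(token *jwtgo.Token) (interface{}, error) {
		if _, ok := token.Method.(*jwtgo.SigningMethodHMAC); !ok {
			return nil, fmt.Errorf("Unexpected signing method: %v", token.Header["alg"])
		}
		return []byte(secretKey), nil
	})
	if err != nil {
		return false
	}
	return token.Valid
}

func main() {
	tokenStr, err := issueToken()
	if err != nil {
		panic(err)
	}
	fmt.Println(isTokenValid(tokenStr))          // true
	fmt.Println(isTokenValid(tokenStr + "junk")) // false: signature mismatch
}
```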


@@ -0,0 +1,51 @@
/*
* Minio Cloud Storage, (C) 2016 Minio, Inc.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package cmd
import "testing"
// Tests authorized RPC client.
func TestAuthRPCClient(t *testing.T) {
authCfg := &authConfig{
accessKey: "123",
secretKey: "123",
secureConn: false,
address: "localhost:9000",
path: "/rpc/disk",
loginMethod: "MyPackage.LoginHandler",
}
authRPC := newAuthClient(authCfg)
if authRPC.Node() != authCfg.address {
t.Fatalf("Unexpected node value %s, but expected %s", authRPC.Node(), authCfg.address)
}
if authRPC.RPCPath() != authCfg.path {
t.Fatalf("Unexpected node value %s, but expected %s", authRPC.RPCPath(), authCfg.path)
}
authCfg = &authConfig{
accessKey: "123",
secretKey: "123",
secureConn: false,
loginMethod: "MyPackage.LoginHandler",
}
authRPC = newAuthClient(authCfg)
if authRPC.Node() != authCfg.address {
t.Fatalf("Unexpected node value %s, but expected %s", authRPC.Node(), authCfg.address)
}
if authRPC.RPCPath() != authCfg.path {
t.Fatalf("Unexpected node value %s, but expected %s", authRPC.RPCPath(), authCfg.path)
}
}


@@ -18,16 +18,46 @@ package cmd
import (
"bytes"
"crypto/md5"
"encoding/hex"
"io/ioutil"
"math"
"math/rand"
"strconv"
"testing"
"time"
humanize "github.com/dustin/go-humanize"
)
// Prepare benchmark backend
func prepareBenchmarkBackend(instanceType string) (ObjectLayer, []string, error) {
var nDisks int
switch instanceType {
// Total number of disks for FS backend is set to 1.
case FSTestStr:
nDisks = 1
// Total number of disks for XL backend is set to 16.
case XLTestStr:
nDisks = 16
default:
nDisks = 1
}
// get `nDisks` random disks.
disks, err := getRandomDisks(nDisks)
if err != nil {
return nil, nil, err
}
endpoints, err := parseStorageEndpoints(disks)
if err != nil {
return nil, nil, err
}
// initialize object layer.
obj, _, err := initObjectLayer(endpoints)
if err != nil {
return nil, nil, err
}
return obj, disks, nil
}
// Benchmark utility functions for ObjectLayer.PutObject().
// Creates Object layer setup ( MakeBucket ) and then runs the PutObject benchmark.
func runPutObjectBenchmark(b *testing.B, obj ObjectLayer, objSize int) {
@@ -40,29 +70,25 @@ func runPutObjectBenchmark(b *testing.B, obj ObjectLayer, objSize int) {
b.Fatal(err)
}
// PutObject returns md5Sum of the object inserted.
// md5Sum variable is assigned with that value.
var md5Sum string
// get text data generated for number of bytes equal to object size.
textData := generateBytesData(objSize)
// generate md5sum for the generated data.
// md5sum of the data to be written is required as input for PutObject.
hasher := md5.New()
hasher.Write([]byte(textData))
metadata := make(map[string]string)
metadata["md5Sum"] = hex.EncodeToString(hasher.Sum(nil))
metadata["md5Sum"] = getMD5Hash(textData)
sha256sum := ""
// benchmark utility which helps obtain number of allocations and bytes allocated per ops.
b.ReportAllocs()
// the actual benchmark for PutObject starts here. Reset the benchmark timer.
b.ResetTimer()
for i := 0; i < b.N; i++ {
// insert the object.
md5Sum, err = obj.PutObject(bucket, "object"+strconv.Itoa(i), int64(len(textData)), bytes.NewBuffer(textData), metadata)
objInfo, err := obj.PutObject(bucket, "object"+strconv.Itoa(i), int64(len(textData)), bytes.NewBuffer(textData), metadata, sha256sum)
if err != nil {
b.Fatal(err)
}
if md5Sum != metadata["md5Sum"] {
b.Fatalf("Write no: %d: Md5Sum mismatch during object write into the bucket: Expected %s, got %s", i+1, md5Sum, metadata["md5Sum"])
if objInfo.MD5Sum != metadata["md5Sum"] {
b.Fatalf("Write no: %d: Md5Sum mismatch during object write into the bucket: Expected %s, got %s", i+1, objInfo.MD5Sum, metadata["md5Sum"])
}
}
// Benchmark ends here. Stop timer.
@@ -83,7 +109,7 @@ func runPutObjectPartBenchmark(b *testing.B, obj ObjectLayer, partSize int) {
b.Fatal(err)
}
objSize := 128 * 1024 * 1024
objSize := 128 * humanize.MiByte
// PutObjectPart returns md5Sum of the object inserted.
// md5Sum variable is assigned with that value.
@@ -92,10 +118,9 @@ func runPutObjectPartBenchmark(b *testing.B, obj ObjectLayer, partSize int) {
textData := generateBytesData(objSize)
// generate md5sum for the generated data.
// md5sum of the data to be written is required as input for NewMultipartUpload.
hasher := md5.New()
hasher.Write([]byte(textData))
metadata := make(map[string]string)
metadata["md5Sum"] = hex.EncodeToString(hasher.Sum(nil))
metadata["md5Sum"] = getMD5Hash(textData)
sha256sum := ""
uploadID, err = obj.NewMultipartUpload(bucket, object, metadata)
if err != nil {
b.Fatal(err)
@@ -110,16 +135,14 @@ func runPutObjectPartBenchmark(b *testing.B, obj ObjectLayer, partSize int) {
// insert the object.
totalPartsNR := int(math.Ceil(float64(objSize) / float64(partSize)))
for j := 0; j < totalPartsNR; j++ {
hasher.Reset()
if j < totalPartsNR-1 {
textPartData = textData[j*partSize : (j+1)*partSize-1]
} else {
textPartData = textData[j*partSize:]
}
hasher.Write([]byte(textPartData))
metadata := make(map[string]string)
metadata["md5Sum"] = hex.EncodeToString(hasher.Sum(nil))
md5Sum, err = obj.PutObjectPart(bucket, object, uploadID, j, int64(len(textPartData)), bytes.NewBuffer(textPartData), metadata["md5Sum"])
metadata["md5Sum"] = getMD5Hash([]byte(textPartData))
md5Sum, err = obj.PutObjectPart(bucket, object, uploadID, j, int64(len(textPartData)), bytes.NewBuffer(textPartData), metadata["md5Sum"], sha256sum)
if err != nil {
b.Fatal(err)
}
@@ -134,8 +157,14 @@ func runPutObjectPartBenchmark(b *testing.B, obj ObjectLayer, partSize int) {
// creates XL/FS backend setup, obtains the object layer and calls the runPutObjectPartBenchmark function.
func benchmarkPutObjectPart(b *testing.B, instanceType string, objSize int) {
rootPath, err := newTestConfig("us-east-1")
if err != nil {
b.Fatalf("Unable to initialize config. %s", err)
}
defer removeAll(rootPath)
// create a temp XL/FS backend.
objLayer, disks, err := makeTestBackend(instanceType)
objLayer, disks, err := prepareBenchmarkBackend(instanceType)
if err != nil {
b.Fatalf("Failed obtaining Temp Backend: <ERROR> %s", err)
}
@@ -147,8 +176,14 @@ func benchmarkPutObjectPart(b *testing.B, instanceType string, objSize int) {
// creates XL/FS backend setup, obtains the object layer and calls the runPutObjectBenchmark function.
func benchmarkPutObject(b *testing.B, instanceType string, objSize int) {
rootPath, err := newTestConfig("us-east-1")
if err != nil {
b.Fatalf("Unable to initialize config. %s", err)
}
defer removeAll(rootPath)
// create a temp XL/FS backend.
objLayer, disks, err := makeTestBackend(instanceType)
objLayer, disks, err := prepareBenchmarkBackend(instanceType)
if err != nil {
b.Fatalf("Failed obtaining Temp Backend: <ERROR> %s", err)
}
@@ -160,8 +195,14 @@ func benchmarkPutObject(b *testing.B, instanceType string, objSize int) {
// creates XL/FS backend setup, obtains the object layer and runs parallel benchmark for put object.
func benchmarkPutObjectParallel(b *testing.B, instanceType string, objSize int) {
rootPath, err := newTestConfig("us-east-1")
if err != nil {
b.Fatalf("Unable to initialize config. %s", err)
}
defer removeAll(rootPath)
// create a temp XL/FS backend.
objLayer, disks, err := makeTestBackend(instanceType)
objLayer, disks, err := prepareBenchmarkBackend(instanceType)
if err != nil {
b.Fatalf("Failed obtaining Temp Backend: <ERROR> %s", err)
}
@@ -174,7 +215,12 @@ func benchmarkPutObjectParallel(b *testing.B, instanceType string, objSize int)
// Benchmark utility functions for ObjectLayer.GetObject().
// Creates Object layer setup ( MakeBucket, PutObject) and then runs the benchmark.
func runGetObjectBenchmark(b *testing.B, obj ObjectLayer, objSize int) {
var err error
rootPath, err := newTestConfig("us-east-1")
if err != nil {
b.Fatalf("Unable to initialize config. %s", err)
}
defer removeAll(rootPath)
// obtains random bucket name.
bucket := getRandomBucketName()
// create bucket.
@@ -183,26 +229,23 @@ func runGetObjectBenchmark(b *testing.B, obj ObjectLayer, objSize int) {
b.Fatal(err)
}
// PutObject returns md5Sum of the object inserted.
// md5Sum variable is assigned with that value.
var md5Sum string
sha256sum := ""
for i := 0; i < 10; i++ {
// get text data generated for number of bytes equal to object size.
textData := generateBytesData(objSize)
// generate md5sum for the generated data.
// md5sum of the data to be written is required as input for PutObject.
// PutObject is the functions which writes the data onto the FS/XL backend.
hasher := md5.New()
hasher.Write([]byte(textData))
metadata := make(map[string]string)
metadata["md5Sum"] = hex.EncodeToString(hasher.Sum(nil))
metadata["md5Sum"] = getMD5Hash(textData)
// insert the object.
md5Sum, err = obj.PutObject(bucket, "object"+strconv.Itoa(i), int64(len(textData)), bytes.NewBuffer(textData), metadata)
var objInfo ObjectInfo
objInfo, err = obj.PutObject(bucket, "object"+strconv.Itoa(i), int64(len(textData)), bytes.NewBuffer(textData), metadata, sha256sum)
if err != nil {
b.Fatal(err)
}
if md5Sum != metadata["md5Sum"] {
b.Fatalf("Write no: %d: Md5Sum mismatch during object write into the bucket: Expected %s, got %s", i+1, md5Sum, metadata["md5Sum"])
if objInfo.MD5Sum != metadata["md5Sum"] {
b.Fatalf("Write no: %d: Md5Sum mismatch during object write into the bucket: Expected %s, got %s", i+1, objInfo.MD5Sum, metadata["md5Sum"])
}
}
@@ -226,7 +269,7 @@ func runGetObjectBenchmark(b *testing.B, obj ObjectLayer, objSize int) {
func getRandomByte() []byte {
const letterBytes = "abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ"
// seeding the random number generator.
- rand.Seed(time.Now().UnixNano())
+ rand.Seed(time.Now().UTC().UnixNano())
var b byte
// pick a character randomly.
b = letterBytes[rand.Intn(len(letterBytes))]
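The hunk below touches generateBytesData, whose body is not shown here. A minimal sketch consistent with getRandomByte above, assuming the helper simply repeats the randomly chosen character size times (the committed implementation may differ):

// Sketch only: generates a byte slice of the requested size by
// repeating the single random character picked by getRandomByte().
// Relies on the file's existing "bytes" import.
func generateBytesData(size int) []byte {
	return bytes.Repeat(getRandomByte(), size)
}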
@@ -241,8 +284,14 @@ func generateBytesData(size int) []byte {
// creates XL/FS backend setup, obtains the object layer and calls the runGetObjectBenchmark function.
func benchmarkGetObject(b *testing.B, instanceType string, objSize int) {
+ rootPath, err := newTestConfig("us-east-1")
+ if err != nil {
+ b.Fatalf("Unable to initialize config. %s", err)
+ }
+ defer removeAll(rootPath)
// create a temp XL/FS backend.
- objLayer, disks, err := makeTestBackend(instanceType)
+ objLayer, disks, err := prepareBenchmarkBackend(instanceType)
if err != nil {
b.Fatalf("Failed obtaining Temp Backend: <ERROR> %s", err)
}
@@ -254,8 +303,14 @@ func benchmarkGetObject(b *testing.B, instanceType string, objSize int) {
// creates XL/FS backend setup, obtains the object layer and runs parallel benchmark for ObjectLayer.GetObject() .
func benchmarkGetObjectParallel(b *testing.B, instanceType string, objSize int) {
+ rootPath, err := newTestConfig("us-east-1")
+ if err != nil {
+ b.Fatalf("Unable to initialize config. %s", err)
+ }
+ defer removeAll(rootPath)
// create a temp XL/FS backend.
- objLayer, disks, err := makeTestBackend(instanceType)
+ objLayer, disks, err := prepareBenchmarkBackend(instanceType)
if err != nil {
b.Fatalf("Failed obtaining Temp Backend: <ERROR> %s", err)
}
@@ -268,7 +323,12 @@ func benchmarkGetObjectParallel(b *testing.B, instanceType string, objSize int)
// Parallel benchmark utility functions for ObjectLayer.PutObject().
// Creates Object layer setup ( MakeBucket ) and then runs the PutObject benchmark.
func runPutObjectBenchmarkParallel(b *testing.B, obj ObjectLayer, objSize int) {
- var err error
+ rootPath, err := newTestConfig("us-east-1")
+ if err != nil {
+ b.Fatalf("Unable to initialize config. %s", err)
+ }
+ defer removeAll(rootPath)
// obtains random bucket name.
bucket := getRandomBucketName()
// create bucket.
@@ -277,17 +337,13 @@ func runPutObjectBenchmarkParallel(b *testing.B, obj ObjectLayer, objSize int) {
b.Fatal(err)
}
- // PutObject returns md5Sum of the object inserted.
- // md5Sum variable is assigned with that value.
- var md5Sum string
// get text data generated for number of bytes equal to object size.
textData := generateBytesData(objSize)
// generate md5sum for the generated data.
// md5sum of the data to be written is required as input for PutObject.
- hasher := md5.New()
- hasher.Write([]byte(textData))
metadata := make(map[string]string)
- metadata["md5Sum"] = hex.EncodeToString(hasher.Sum(nil))
+ metadata["md5Sum"] = getMD5Hash([]byte(textData))
+ sha256sum := ""
// benchmark utility which helps obtain number of allocations and bytes allocated per ops.
b.ReportAllocs()
// the actual benchmark for PutObject starts here. Reset the benchmark timer.
@@ -297,12 +353,12 @@ func runPutObjectBenchmarkParallel(b *testing.B, obj ObjectLayer, objSize int) {
i := 0
for pb.Next() {
// insert the object.
- md5Sum, err = obj.PutObject(bucket, "object"+strconv.Itoa(i), int64(len(textData)), bytes.NewBuffer(textData), metadata)
+ objInfo, err := obj.PutObject(bucket, "object"+strconv.Itoa(i), int64(len(textData)), bytes.NewBuffer(textData), metadata, sha256sum)
if err != nil {
b.Fatal(err)
}
- if md5Sum != metadata["md5Sum"] {
- b.Fatalf("Write no: Md5Sum mismatch during object write into the bucket: Expected %s, got %s", md5Sum, metadata["md5Sum"])
+ if objInfo.MD5Sum != metadata["md5Sum"] {
+ b.Fatalf("Write no: Md5Sum mismatch during object write into the bucket: Expected %s, got %s", objInfo.MD5Sum, metadata["md5Sum"])
}
i++
}
@@ -315,7 +371,12 @@ func runPutObjectBenchmarkParallel(b *testing.B, obj ObjectLayer, objSize int) {
// Parallel benchmark utility functions for ObjectLayer.GetObject().
// Creates Object layer setup ( MakeBucket, PutObject) and then runs the benchmark.
func runGetObjectBenchmarkParallel(b *testing.B, obj ObjectLayer, objSize int) {
- var err error
+ rootPath, err := newTestConfig("us-east-1")
+ if err != nil {
+ b.Fatalf("Unable to initialize config. %s", err)
+ }
+ defer removeAll(rootPath)
// obtains random bucket name.
bucket := getRandomBucketName()
// create bucket.
@@ -324,26 +385,23 @@ func runGetObjectBenchmarkParallel(b *testing.B, obj ObjectLayer, objSize int) {
b.Fatal(err)
}
- // PutObject returns md5Sum of the object inserted.
- // md5Sum variable is assigned with that value.
- var md5Sum string
for i := 0; i < 10; i++ {
// get text data generated for number of bytes equal to object size.
textData := generateBytesData(objSize)
// generate md5sum for the generated data.
// md5sum of the data to be written is required as input for PutObject.
// PutObject is the function which writes the data onto the FS/XL backend.
- hasher := md5.New()
- hasher.Write([]byte(textData))
metadata := make(map[string]string)
- metadata["md5Sum"] = hex.EncodeToString(hasher.Sum(nil))
+ metadata["md5Sum"] = getMD5Hash([]byte(textData))
+ sha256sum := ""
// insert the object.
- md5Sum, err = obj.PutObject(bucket, "object"+strconv.Itoa(i), int64(len(textData)), bytes.NewBuffer(textData), metadata)
+ var objInfo ObjectInfo
+ objInfo, err = obj.PutObject(bucket, "object"+strconv.Itoa(i), int64(len(textData)), bytes.NewBuffer(textData), metadata, sha256sum)
if err != nil {
b.Fatal(err)
}
- if md5Sum != metadata["md5Sum"] {
- b.Fatalf("Write no: %d: Md5Sum mismatch during object write into the bucket: Expected %s, got %s", i+1, md5Sum, metadata["md5Sum"])
+ if objInfo.MD5Sum != metadata["md5Sum"] {
+ b.Fatalf("Write no: %d: Md5Sum mismatch during object write into the bucket: Expected %s, got %s", i+1, objInfo.MD5Sum, metadata["md5Sum"])
}
}
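The hunks above replace the inline md5 hashing with a getMD5Hash helper. Judging purely from the removed lines, the helper is equivalent to the code it replaces; a minimal sketch:

// Equivalent to the removed hasher lines: hex-encoded MD5 of data.
// Relies on the existing "crypto/md5" and "encoding/hex" imports.
func getMD5Hash(data []byte) string {
	hasher := md5.New()
	hasher.Write(data)
	return hex.EncodeToString(hasher.Sum(nil))
}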

cmd/browser-peer-rpc.go

@@ -0,0 +1,150 @@
/*
* Minio Cloud Storage, (C) 2016 Minio, Inc.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package cmd
import (
"path"
"sync"
"time"
)
// LoginHandler - generates and replies a JWT login token for the
// username and password sent with the login request.
func (br *browserPeerAPIHandlers) LoginHandler(args *RPCLoginArgs, reply *RPCLoginReply) error {
jwt, err := newJWT(defaultInterNodeJWTExpiry, serverConfig.GetCredential())
if err != nil {
return err
}
if err = jwt.Authenticate(args.Username, args.Password); err != nil {
return err
}
token, err := jwt.GenerateToken(args.Username)
if err != nil {
return err
}
reply.Token = token
reply.ServerVersion = Version
reply.Timestamp = time.Now().UTC()
return nil
}
// SetAuthPeerArgs - Arguments collection for SetAuth RPC call
type SetAuthPeerArgs struct {
// For Auth
GenericArgs
// New credentials that receiving peer should update to.
Creds credential
}
// SetAuthPeer - Update to new credentials sent from a peer Minio
// server. Since credentials are already validated on the sending
// peer, here we just persist to file and update in-memory config. All
// subsequently running isRPCTokenValid() calls will fail, and clients
// will be forced to re-establish connections. Connections will be
// re-established only when the sending client has also updated its
// credentials.
func (br *browserPeerAPIHandlers) SetAuthPeer(args SetAuthPeerArgs, reply *GenericReply) error {
// Check auth
if !isRPCTokenValid(args.Token) {
return errInvalidToken
}
// Update credentials in memory
serverConfig.SetCredential(args.Creds)
// Save credentials to config file
if err := serverConfig.Save(); err != nil {
errorIf(err, "Error updating config file with new credentials sent from browser RPC.")
return err
}
return nil
}
// Sends SetAuthPeer RPCs to all peers in the Minio cluster
func updateCredsOnPeers(creds credential) map[string]error {
// Get list of peer addresses (from globalS3Peers)
peers := []string{}
for _, p := range globalS3Peers {
peers = append(peers, p.addr)
}
// Array of errors for each peer
errs := make([]error, len(peers))
var wg sync.WaitGroup
// Launch go routines to send request to each peer in parallel.
for ix := range peers {
wg.Add(1)
go func(ix int) {
defer wg.Done()
// Exclude self to avoid race with
// invalidating the RPC token.
if peers[ix] == globalMinioAddr {
errs[ix] = nil
return
}
// Initialize client
client := newAuthClient(&authConfig{
accessKey: serverConfig.GetCredential().AccessKeyID,
secretKey: serverConfig.GetCredential().SecretAccessKey,
address: peers[ix],
secureConn: isSSL(),
path: path.Join(reservedBucket, browserPeerPath),
loginMethod: "Browser.LoginHandler",
})
// Construct RPC call arguments.
args := SetAuthPeerArgs{Creds: creds}
// Make RPC call - we only care about error
// response and not the reply.
err := client.Call("Browser.SetAuthPeer", &args, &GenericReply{})
// We try a bit hard (3 attempts with 1 second delay)
// to set creds on peers in case of failure.
if err != nil {
for i := 0; i < 2; i++ {
time.Sleep(1 * time.Second) // 1 second delay.
err = client.Call("Browser.SetAuthPeer", &args, &GenericReply{})
if err == nil {
break
}
}
}
// Send result down the channel
errs[ix] = err
}(ix)
}
// Wait for requests to complete.
wg.Wait()
// Put errors into map.
errsMap := make(map[string]error)
for i, err := range errs {
if err != nil {
errsMap[peers[i]] = err
}
}
return errsMap
}
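For context, a hedged usage sketch; the calling site is not part of this diff, and the snippet below is hypothetical. A credentials-change path could fan out the new credentials and log every per-peer failure from the returned map:

// Hypothetical caller, not from this changeset.
newCreds := credential{AccessKeyID: "newaccesskey", SecretAccessKey: "newsecretkey123"}
if errsMap := updateCredsOnPeers(newCreds); len(errsMap) > 0 {
	for addr, err := range errsMap {
		errorIf(err, "Failed to update credentials on peer %s.", addr)
	}
}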


@@ -0,0 +1,123 @@
/*
* Minio Cloud Storage, (C) 2016 Minio, Inc.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package cmd
import (
"path"
"testing"
)
// API suite container common to both FS and XL.
type TestRPCBrowserPeerSuite struct {
serverType string
testServer TestServer
testAuthConf *authConfig
}
// Setting up the test suite and starting the Test server.
func (s *TestRPCBrowserPeerSuite) SetUpSuite(c *testing.T) {
s.testServer = StartTestBrowserPeerRPCServer(c, s.serverType)
s.testAuthConf = &authConfig{
address: s.testServer.Server.Listener.Addr().String(),
accessKey: s.testServer.AccessKey,
secretKey: s.testServer.SecretKey,
path: path.Join(reservedBucket, browserPeerPath),
loginMethod: "BrowserPeer.LoginHandler",
}
}
// No longer called implicitly by "gopkg.in/check.v1" after all tests
// are run; instead invoked from explicit teardown code in each test
// function.
func (s *TestRPCBrowserPeerSuite) TearDownSuite(c *testing.T) {
s.testServer.Stop()
}
func TestBrowserPeerRPC(t *testing.T) {
// setup code
s := &TestRPCBrowserPeerSuite{serverType: "XL"}
s.SetUpSuite(t)
// run test
s.testBrowserPeerRPC(t)
// teardown code
s.TearDownSuite(t)
}
// Tests for browser peer rpc.
func (s *TestRPCBrowserPeerSuite) testBrowserPeerRPC(t *testing.T) {
// Construct RPC call arguments.
creds := credential{
AccessKeyID: "abcd1",
SecretAccessKey: "abcd1234",
}
// Validate for invalid token.
args := SetAuthPeerArgs{Creds: creds}
args.Token = "garbage"
rclient := newClient(s.testAuthConf.address, s.testAuthConf.path, false)
defer rclient.Close()
err := rclient.Call("BrowserPeer.SetAuthPeer", &args, &GenericReply{})
if err != nil {
if err.Error() != errInvalidToken.Error() {
t.Fatal(err)
}
}
// Validate for successful Peer update.
args = SetAuthPeerArgs{Creds: creds}
client := newAuthClient(s.testAuthConf)
defer client.Close()
err = client.Call("BrowserPeer.SetAuthPeer", &args, &GenericReply{})
if err != nil {
t.Fatal(err)
}
// Validate for failure in login handler with previous credentials.
rclient = newClient(s.testAuthConf.address, s.testAuthConf.path, false)
defer rclient.Close()
rargs := &RPCLoginArgs{
Username: s.testAuthConf.accessKey,
Password: s.testAuthConf.secretKey,
}
rreply := &RPCLoginReply{}
err = rclient.Call("BrowserPeer.LoginHandler", rargs, rreply)
if err != nil {
if err.Error() != errInvalidAccessKeyID.Error() {
t.Fatal(err)
}
}
// Validate for success in login handler with valid credentials.
rargs = &RPCLoginArgs{
Username: creds.AccessKeyID,
Password: creds.SecretAccessKey,
}
rreply = &RPCLoginReply{}
err = rclient.Call("BrowserPeer.LoginHandler", rargs, rreply)
if err != nil {
t.Fatal(err)
}
// Validate all the replied fields after successful login.
if rreply.Token == "" {
t.Fatalf("Generated token cannot be empty %s", errInvalidToken)
}
if rreply.Timestamp.IsZero() {
t.Fatal("Time stamp returned cannot be zero")
}
}

cmd/browser-rpc-router.go

@@ -0,0 +1,49 @@
/*
* Minio Cloud Storage, (C) 2014-2016 Minio, Inc.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package cmd
import (
"net/rpc"
router "github.com/gorilla/mux"
)
// Set up an RPC endpoint that receives browser related calls. The
// original motivation is for propagating credentials change
// throughout Minio cluster, initiated from a Minio browser session.
const (
browserPeerPath = "/browser/setauth"
)
// The Type exporting methods exposed for RPC calls.
type browserPeerAPIHandlers struct{}
// Register RPC router
func registerBrowserPeerRPCRouter(mux *router.Router) error {
bpHandlers := &browserPeerAPIHandlers{}
bpRPCServer := rpc.NewServer()
err := bpRPCServer.RegisterName("BrowserPeer", bpHandlers)
if err != nil {
return traceError(err)
}
bpRouter := mux.NewRoute().PathPrefix(reservedBucket).Subrouter()
bpRouter.Path(browserPeerPath).Handler(bpRPCServer)
return nil
}
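A hedged wiring sketch; the actual server start-up code is outside this diff. Registering the browser peer RPC router on a fresh gorilla/mux router would look roughly like:

// Hypothetical start-up wiring, not part of this changeset.
func setupBrowserPeerRPC() (*router.Router, error) {
	mux := router.NewRouter()
	if err := registerBrowserPeerRPCRouter(mux); err != nil {
		// registerBrowserPeerRPCRouter already wraps the error with traceError.
		return nil, err
	}
	return mux, nil
}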


@@ -29,7 +29,7 @@ import (
// - delimiter if set should be equal to '/', otherwise the request is rejected.
// - marker if set should have a common prefix with 'prefix' param, otherwise
// the request is rejected.
- func listObjectsValidateArgs(prefix, marker, delimiter string, maxKeys int) APIErrorCode {
+ func validateListObjectsArgs(prefix, marker, delimiter string, maxKeys int) APIErrorCode {
// Max keys cannot be negative.
if maxKeys < 0 {
return ErrInvalidMaxKeys
@@ -64,23 +64,17 @@ func (api objectAPIHandlers) ListObjectsV2Handler(w http.ResponseWriter, r *http
vars := mux.Vars(r)
bucket := vars["bucket"]
- switch getRequestAuthType(r) {
- default:
- // For all unknown auth types return error.
- writeErrorResponse(w, r, ErrAccessDenied, r.URL.Path)
+ objectAPI := api.ObjectAPI()
+ if objectAPI == nil {
+ writeErrorResponse(w, r, ErrServerNotInitialized, r.URL.Path)
return
- case authTypeAnonymous:
- // http://docs.aws.amazon.com/AmazonS3/latest/dev/using-with-s3-actions.html
- if s3Error := enforceBucketPolicy(bucket, "s3:ListBucket", r.URL); s3Error != ErrNone {
- writeErrorResponse(w, r, s3Error, r.URL.Path)
- return
- }
- case authTypeSigned, authTypePresigned:
- if s3Error := isReqAuthenticated(r); s3Error != ErrNone {
- writeErrorResponse(w, r, s3Error, r.URL.Path)
- return
- }
}
+ if s3Error := checkRequestAuthType(r, bucket, "s3:ListBucket", serverConfig.GetRegion()); s3Error != ErrNone {
+ writeErrorResponse(w, r, s3Error, r.URL.Path)
+ return
+ }
// Extract all the listObjectsV2 query params to their native values.
prefix, token, startAfter, delimiter, fetchOwner, maxKeys, _ := getListObjectsV2Args(r.URL.Query())
@@ -93,14 +87,14 @@ func (api objectAPIHandlers) ListObjectsV2Handler(w http.ResponseWriter, r *http
}
// Validate the query params before beginning to serve the request.
// fetch-owner is not validated since it is a boolean
- if s3Error := listObjectsValidateArgs(prefix, marker, delimiter, maxKeys); s3Error != ErrNone {
+ if s3Error := validateListObjectsArgs(prefix, marker, delimiter, maxKeys); s3Error != ErrNone {
writeErrorResponse(w, r, s3Error, r.URL.Path)
return
}
// Initiate a list objects operation based on the input params.
// On success would return back ListObjectsInfo object to be
// marshalled into S3 compatible XML header.
- listObjectsInfo, err := api.ObjectAPI.ListObjects(bucket, prefix, marker, delimiter, maxKeys)
+ listObjectsInfo, err := objectAPI.ListObjects(bucket, prefix, marker, delimiter, maxKeys)
if err != nil {
errorIf(err, "Unable to list objects.")
writeErrorResponse(w, r, toAPIErrorCode(err), r.URL.Path)
@@ -124,29 +118,22 @@ func (api objectAPIHandlers) ListObjectsV1Handler(w http.ResponseWriter, r *http
vars := mux.Vars(r)
bucket := vars["bucket"]
- switch getRequestAuthType(r) {
- default:
- // For all unknown auth types return error.
- writeErrorResponse(w, r, ErrAccessDenied, r.URL.Path)
+ objectAPI := api.ObjectAPI()
+ if objectAPI == nil {
+ writeErrorResponse(w, r, ErrServerNotInitialized, r.URL.Path)
return
}
+ if s3Error := checkRequestAuthType(r, bucket, "s3:ListBucket", serverConfig.GetRegion()); s3Error != ErrNone {
+ writeErrorResponse(w, r, s3Error, r.URL.Path)
+ return
- case authTypeAnonymous:
- // http://docs.aws.amazon.com/AmazonS3/latest/dev/using-with-s3-actions.html
- if s3Error := enforceBucketPolicy(bucket, "s3:ListBucket", r.URL); s3Error != ErrNone {
- writeErrorResponse(w, r, s3Error, r.URL.Path)
- return
- }
- case authTypeSigned, authTypePresigned:
- if s3Error := isReqAuthenticated(r); s3Error != ErrNone {
- writeErrorResponse(w, r, s3Error, r.URL.Path)
- return
- }
}
// Extract all the listObjectsV1 query params to their native values.
prefix, marker, delimiter, maxKeys, _ := getListObjectsV1Args(r.URL.Query())
// Validate all the query params before beginning to serve the request.
- if s3Error := listObjectsValidateArgs(prefix, marker, delimiter, maxKeys); s3Error != ErrNone {
+ if s3Error := validateListObjectsArgs(prefix, marker, delimiter, maxKeys); s3Error != ErrNone {
writeErrorResponse(w, r, s3Error, r.URL.Path)
return
}
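Both list handlers above collapse their auth switch into a single checkRequestAuthType call. A sketch reconstructed only from the removed switch blocks; the real helper also takes the expected region into account, which is omitted here:

// Reconstructed sketch, not the committed implementation.
func checkRequestAuthTypeSketch(r *http.Request, bucket, action string) APIErrorCode {
	switch getRequestAuthType(r) {
	case authTypeAnonymous:
		// Anonymous requests are subject to the bucket policy.
		// http://docs.aws.amazon.com/AmazonS3/latest/dev/using-with-s3-actions.html
		return enforceBucketPolicy(bucket, action, r.URL)
	case authTypeSigned, authTypePresigned:
		// Signed and presigned requests must carry a valid signature.
		return isReqAuthenticated(r)
	default:
		// For all unknown auth types return error.
		return ErrAccessDenied
	}
}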
@@ -154,7 +141,7 @@ func (api objectAPIHandlers) ListObjectsV1Handler(w http.ResponseWriter, r *http
// Initiate a list objects operation based on the input params.
// On success would return back ListObjectsInfo object to be
// marshalled into S3 compatible XML header.
- listObjectsInfo, err := api.ObjectAPI.ListObjects(bucket, prefix, marker, delimiter, maxKeys)
+ listObjectsInfo, err := objectAPI.ListObjects(bucket, prefix, marker, delimiter, maxKeys)
if err != nil {
errorIf(err, "Unable to list objects.")
writeErrorResponse(w, r, toAPIErrorCode(err), r.URL.Path)


@@ -17,11 +17,11 @@
package cmd
import (
"encoding/base64"
"encoding/xml"
"io"
"net/http"
"net/url"
"path"
"strings"
"sync"
@@ -32,9 +32,22 @@ import (
// http://docs.aws.amazon.com/AmazonS3/latest/dev/using-with-s3-actions.html
// Enforces bucket policies for a bucket for a given action.
func enforceBucketPolicy(bucket string, action string, reqURL *url.URL) (s3Error APIErrorCode) {
- if !IsValidBucketName(bucket) {
- return ErrInvalidBucketName
+ // Verify if bucket actually exists
+ if err := checkBucketExist(bucket, newObjectLayerFn()); err != nil {
+ err = errorCause(err)
+ switch err.(type) {
+ case BucketNameInvalid:
+ // Return error for invalid bucket name.
+ return ErrInvalidBucketName
+ case BucketNotFound:
+ // For no bucket found we return NoSuchBucket instead.
+ return ErrNoSuchBucket
+ }
+ errorIf(err, "Unable to read bucket policy.")
+ // Return internal error for any other errors so that we can investigate.
+ return ErrInternalError
}
// Fetch bucket policy, if policy is not set return access denied.
policy := globalBucketPolicies.GetBucketPolicy(bucket)
if policy == nil {
@@ -64,25 +77,18 @@ func (api objectAPIHandlers) GetBucketLocationHandler(w http.ResponseWriter, r *
vars := mux.Vars(r)
bucket := vars["bucket"]
- switch getRequestAuthType(r) {
- default:
- // For all unknown auth types return error.
- writeErrorResponse(w, r, ErrAccessDenied, r.URL.Path)
+ objectAPI := api.ObjectAPI()
+ if objectAPI == nil {
+ writeErrorResponse(w, r, ErrServerNotInitialized, r.URL.Path)
return
- case authTypeAnonymous:
- // http://docs.aws.amazon.com/AmazonS3/latest/dev/using-with-s3-actions.html
- if s3Error := enforceBucketPolicy(bucket, "s3:GetBucketLocation", r.URL); s3Error != ErrNone {
- writeErrorResponse(w, r, s3Error, r.URL.Path)
- return
- }
- case authTypeSigned, authTypePresigned:
- if s3Error := isReqAuthenticated(r); s3Error != ErrNone {
- writeErrorResponse(w, r, s3Error, r.URL.Path)
- return
- }
}
- if _, err := api.ObjectAPI.GetBucketInfo(bucket); err != nil {
+ if s3Error := checkRequestAuthType(r, bucket, "s3:GetBucketLocation", "us-east-1"); s3Error != ErrNone {
+ writeErrorResponse(w, r, s3Error, r.URL.Path)
+ return
+ }
+ if _, err := objectAPI.GetBucketInfo(bucket); err != nil {
errorIf(err, "Unable to fetch bucket info.")
writeErrorResponse(w, r, toAPIErrorCode(err), r.URL.Path)
return
@@ -113,22 +119,15 @@ func (api objectAPIHandlers) ListMultipartUploadsHandler(w http.ResponseWriter,
vars := mux.Vars(r)
bucket := vars["bucket"]
- switch getRequestAuthType(r) {
- default:
- // For all unknown auth types return error.
- writeErrorResponse(w, r, ErrAccessDenied, r.URL.Path)
+ objectAPI := api.ObjectAPI()
+ if objectAPI == nil {
+ writeErrorResponse(w, r, ErrServerNotInitialized, r.URL.Path)
return
}
+ if s3Error := checkRequestAuthType(r, bucket, "s3:ListBucketMultipartUploads", serverConfig.GetRegion()); s3Error != ErrNone {
+ writeErrorResponse(w, r, s3Error, r.URL.Path)
+ return
- case authTypeAnonymous:
- // http://docs.aws.amazon.com/AmazonS3/latest/dev/mpuAndPermissions.html
- if s3Error := enforceBucketPolicy(bucket, "s3:ListBucketMultipartUploads", r.URL); s3Error != ErrNone {
- writeErrorResponse(w, r, s3Error, r.URL.Path)
- return
- }
- case authTypePresigned, authTypeSigned:
- if s3Error := isReqAuthenticated(r); s3Error != ErrNone {
- writeErrorResponse(w, r, s3Error, r.URL.Path)
- return
- }
}
prefix, keyMarker, uploadIDMarker, delimiter, maxUploads, _ := getBucketMultipartResources(r.URL.Query())
@@ -144,7 +143,7 @@ func (api objectAPIHandlers) ListMultipartUploadsHandler(w http.ResponseWriter,
}
}
- listMultipartsInfo, err := api.ObjectAPI.ListMultipartUploads(bucket, prefix, keyMarker, uploadIDMarker, delimiter, maxUploads)
+ listMultipartsInfo, err := objectAPI.ListMultipartUploads(bucket, prefix, keyMarker, uploadIDMarker, delimiter, maxUploads)
if err != nil {
errorIf(err, "Unable to list multipart uploads.")
writeErrorResponse(w, r, toAPIErrorCode(err), r.URL.Path)
@@ -159,18 +158,29 @@ func (api objectAPIHandlers) ListMultipartUploadsHandler(w http.ResponseWriter,
writeSuccessResponse(w, encodedSuccessResponse)
}
- // ListBucketsHandler - GET Service
+ // ListBucketsHandler - GET Service.
// -----------
// This implementation of the GET operation returns a list of all buckets
// owned by the authenticated sender of the request.
func (api objectAPIHandlers) ListBucketsHandler(w http.ResponseWriter, r *http.Request) {
- // List buckets does not support bucket policies, no need to enforce it.
- if s3Error := checkAuth(r); s3Error != ErrNone {
- writeErrorResponse(w, r, s3Error, r.URL.Path)
+ objectAPI := api.ObjectAPI()
+ if objectAPI == nil {
+ writeErrorResponse(w, r, ErrServerNotInitialized, r.URL.Path)
return
}
- bucketsInfo, err := api.ObjectAPI.ListBuckets()
+ // ListBuckets does not have any bucket action.
+ s3Error := checkRequestAuthType(r, "", "", "us-east-1")
+ if s3Error == ErrInvalidRegion {
+ // Clients like boto3 send listBuckets() call signed with region that is configured.
+ s3Error = checkRequestAuthType(r, "", "", serverConfig.GetRegion())
+ }
+ if s3Error != ErrNone {
+ writeErrorResponse(w, r, s3Error, r.URL.Path)
+ return
+ }
+ // Invoke the list buckets.
+ bucketsInfo, err := objectAPI.ListBuckets()
if err != nil {
errorIf(err, "Unable to list buckets.")
writeErrorResponse(w, r, toAPIErrorCode(err), r.URL.Path)
@@ -191,22 +201,15 @@ func (api objectAPIHandlers) DeleteMultipleObjectsHandler(w http.ResponseWriter,
vars := mux.Vars(r)
bucket := vars["bucket"]
- switch getRequestAuthType(r) {
- default:
- // For all unknown auth types return error.
- writeErrorResponse(w, r, ErrAccessDenied, r.URL.Path)
+ objectAPI := api.ObjectAPI()
+ if objectAPI == nil {
+ writeErrorResponse(w, r, ErrServerNotInitialized, r.URL.Path)
return
}
+ if s3Error := checkRequestAuthType(r, bucket, "s3:DeleteObject", serverConfig.GetRegion()); s3Error != ErrNone {
+ writeErrorResponse(w, r, s3Error, r.URL.Path)
+ return
- case authTypeAnonymous:
- // http://docs.aws.amazon.com/AmazonS3/latest/dev/using-with-s3-actions.html
- if s3Error := enforceBucketPolicy(bucket, "s3:DeleteObject", r.URL); s3Error != ErrNone {
- writeErrorResponse(w, r, s3Error, r.URL.Path)
- return
- }
- case authTypePresigned, authTypeSigned:
- if s3Error := isReqAuthenticated(r); s3Error != ErrNone {
- writeErrorResponse(w, r, s3Error, r.URL.Path)
- return
- }
}
// Content-Length is required and should be non-zero
@@ -249,7 +252,7 @@ func (api objectAPIHandlers) DeleteMultipleObjectsHandler(w http.ResponseWriter,
wg.Add(1)
go func(i int, obj ObjectIdentifier) {
defer wg.Done()
- dErr := api.ObjectAPI.DeleteObject(bucket, obj.ObjectName)
+ dErr := objectAPI.DeleteObject(bucket, obj.ObjectName)
if dErr != nil {
dErrs[i] = dErr
}
@@ -267,7 +270,7 @@ func (api objectAPIHandlers) DeleteMultipleObjectsHandler(w http.ResponseWriter,
deletedObjects = append(deletedObjects, object)
continue
}
- if _, ok := err.(ObjectNotFound); ok {
+ if _, ok := errorCause(err).(ObjectNotFound); ok {
// If the object is not found it should be
// accounted as deleted as per S3 spec.
deletedObjects = append(deletedObjects, object)
@@ -290,20 +293,18 @@ func (api objectAPIHandlers) DeleteMultipleObjectsHandler(w http.ResponseWriter,
// Write success response.
writeSuccessResponse(w, encodedSuccessResponse)
if globalEventNotifier.IsBucketNotificationSet(bucket) {
- // Notify deleted event for objects.
- for _, dobj := range deletedObjects {
- eventNotify(eventData{
- Type: ObjectRemovedDelete,
- Bucket: bucket,
- ObjInfo: ObjectInfo{
- Name: dobj.ObjectName,
- },
- ReqParams: map[string]string{
- "sourceIPAddress": r.RemoteAddr,
- },
- })
- }
+ // Notify deleted event for objects.
+ for _, dobj := range deletedObjects {
+ eventNotify(eventData{
+ Type: ObjectRemovedDelete,
+ Bucket: bucket,
+ ObjInfo: ObjectInfo{
+ Name: dobj.ObjectName,
+ },
+ ReqParams: map[string]string{
+ "sourceIPAddress": r.RemoteAddr,
+ },
+ })
+ }
}
@@ -311,8 +312,14 @@ func (api objectAPIHandlers) DeleteMultipleObjectsHandler(w http.ResponseWriter,
// ----------
// This implementation of the PUT operation creates a new bucket for authenticated request
func (api objectAPIHandlers) PutBucketHandler(w http.ResponseWriter, r *http.Request) {
- // PutBucket does not support policies, use checkAuth to validate signature.
- if s3Error := checkAuth(r); s3Error != ErrNone {
+ objectAPI := api.ObjectAPI()
+ if objectAPI == nil {
+ writeErrorResponse(w, r, ErrServerNotInitialized, r.URL.Path)
return
}
+ // PutBucket does not have any bucket action.
+ if s3Error := checkRequestAuthType(r, "", "", "us-east-1"); s3Error != ErrNone {
writeErrorResponse(w, r, s3Error, r.URL.Path)
return
}
@@ -327,8 +334,12 @@ func (api objectAPIHandlers) PutBucketHandler(w http.ResponseWriter, r *http.Req
return
}
+ bucketLock := globalNSMutex.NewNSLock(bucket, "")
+ bucketLock.Lock()
+ defer bucketLock.Unlock()
// Proceed to creating a bucket.
- err := api.ObjectAPI.MakeBucket(bucket)
+ err := objectAPI.MakeBucket(bucket)
if err != nil {
errorIf(err, "Unable to create a bucket.")
writeErrorResponse(w, r, toAPIErrorCode(err), r.URL.Path)
@@ -344,6 +355,12 @@ func (api objectAPIHandlers) PutBucketHandler(w http.ResponseWriter, r *http.Req
// This implementation of the POST operation handles object creation with a specified
// signature policy in multipart/form-data
func (api objectAPIHandlers) PostPolicyBucketHandler(w http.ResponseWriter, r *http.Request) {
+ objectAPI := api.ObjectAPI()
+ if objectAPI == nil {
+ writeErrorResponse(w, r, ErrServerNotInitialized, r.URL.Path)
+ return
+ }
// Here the parameter is the size of the form data that should
// be loaded in memory, the remaining being put in temporary files.
reader, err := r.MultipartReader()
@@ -361,13 +378,13 @@ func (api objectAPIHandlers) PostPolicyBucketHandler(w http.ResponseWriter, r *h
}
bucket := mux.Vars(r)["bucket"]
formValues["Bucket"] = bucket
- object := formValues["Key"]
- if fileName != "" && strings.Contains(object, "${filename}") {
+ if fileName != "" && strings.Contains(formValues["Key"], "${filename}") {
// S3 feature to replace ${filename} found in Key form field
// by the filename attribute passed in multipart
- object = strings.Replace(object, "${filename}", fileName, -1)
+ formValues["Key"] = strings.Replace(formValues["Key"], "${filename}", fileName, -1)
}
+ object := formValues["Key"]
// Verify policy signature.
apiErr := doesPolicySignatureMatch(formValues)
@@ -375,26 +392,60 @@ func (api objectAPIHandlers) PostPolicyBucketHandler(w http.ResponseWriter, r *h
writeErrorResponse(w, r, apiErr, r.URL.Path)
return
}
- if apiErr = checkPostPolicy(formValues); apiErr != ErrNone {
+ policyBytes, err := base64.StdEncoding.DecodeString(formValues["Policy"])
+ if err != nil {
+ writeErrorResponse(w, r, ErrMalformedPOSTRequest, r.URL.Path)
+ return
+ }
+ postPolicyForm, err := parsePostPolicyForm(string(policyBytes))
+ if err != nil {
+ writeErrorResponse(w, r, ErrMalformedPOSTRequest, r.URL.Path)
+ return
+ }
+ // Make sure formValues adhere to policy restrictions.
+ if apiErr = checkPostPolicy(formValues, postPolicyForm); apiErr != ErrNone {
writeErrorResponse(w, r, apiErr, r.URL.Path)
return
}
+ // Use rangeReader to ensure that object size is within expected range.
+ lengthRange := postPolicyForm.Conditions.ContentLengthRange
+ if lengthRange.Valid {
+ // If policy restricted the size of the object.
+ fileBody = &rangeReader{
+ Reader: fileBody,
+ Min: lengthRange.Min,
+ Max: lengthRange.Max,
+ }
+ } else {
+ // Default values of min/max size of the object.
+ fileBody = &rangeReader{
+ Reader: fileBody,
+ Min: 0,
+ Max: maxObjectSize,
+ }
+ }
// Save metadata.
metadata := make(map[string]string)
// Nothing to store right now.
- md5Sum, err := api.ObjectAPI.PutObject(bucket, object, -1, fileBody, metadata)
+ sha256sum := ""
+ objectLock := globalNSMutex.NewNSLock(bucket, object)
+ objectLock.Lock()
+ defer objectLock.Unlock()
+ objInfo, err := objectAPI.PutObject(bucket, object, -1, fileBody, metadata, sha256sum)
if err != nil {
errorIf(err, "Unable to create object.")
writeErrorResponse(w, r, toAPIErrorCode(err), r.URL.Path)
return
}
- if md5Sum != "" {
- w.Header().Set("ETag", "\""+md5Sum+"\"")
- }
+ // TODO full URL is preferred.
+ w.Header().Set("ETag", "\""+objInfo.MD5Sum+"\"")
w.Header().Set("Location", getObjectLocation(bucket, object))
// Set common headers.
@@ -403,24 +454,15 @@ func (api objectAPIHandlers) PostPolicyBucketHandler(w http.ResponseWriter, r *h
// Write successful response.
writeSuccessNoContent(w)
if globalEventNotifier.IsBucketNotificationSet(bucket) {
- // Fetch object info for notifications.
- objInfo, err := api.ObjectAPI.GetObjectInfo(bucket, object)
- if err != nil {
- errorIf(err, "Unable to fetch object info for \"%s\"", path.Join(bucket, object))
- return
- }
- // Notify object created event.
- eventNotify(eventData{
- Type: ObjectCreatedPost,
- Bucket: bucket,
- ObjInfo: objInfo,
- ReqParams: map[string]string{
- "sourceIPAddress": r.RemoteAddr,
- },
- })
- }
+ // Notify object created event.
+ eventNotify(eventData{
+ Type: ObjectCreatedPost,
+ Bucket: bucket,
+ ObjInfo: objInfo,
+ ReqParams: map[string]string{
+ "sourceIPAddress": r.RemoteAddr,
+ },
+ })
}
// HeadBucketHandler - HEAD Bucket
@@ -433,25 +475,22 @@ func (api objectAPIHandlers) HeadBucketHandler(w http.ResponseWriter, r *http.Re
vars := mux.Vars(r)
bucket := vars["bucket"]
- switch getRequestAuthType(r) {
- default:
- // For all unknown auth types return error.
- writeErrorResponse(w, r, ErrAccessDenied, r.URL.Path)
+ objectAPI := api.ObjectAPI()
+ if objectAPI == nil {
+ writeErrorResponse(w, r, ErrServerNotInitialized, r.URL.Path)
return
- case authTypeAnonymous:
- // http://docs.aws.amazon.com/AmazonS3/latest/dev/using-with-s3-actions.html
- if s3Error := enforceBucketPolicy(bucket, "s3:ListBucket", r.URL); s3Error != ErrNone {
- writeErrorResponse(w, r, s3Error, r.URL.Path)
- return
- }
- case authTypePresigned, authTypeSigned:
- if s3Error := isReqAuthenticated(r); s3Error != ErrNone {
- writeErrorResponse(w, r, s3Error, r.URL.Path)
- return
- }
}
- if _, err := api.ObjectAPI.GetBucketInfo(bucket); err != nil {
+ if s3Error := checkRequestAuthType(r, bucket, "s3:ListBucket", serverConfig.GetRegion()); s3Error != ErrNone {
+ writeErrorResponse(w, r, s3Error, r.URL.Path)
+ return
+ }
+ bucketLock := globalNSMutex.NewNSLock(bucket, "")
+ bucketLock.RLock()
+ defer bucketLock.RUnlock()
+ if _, err := objectAPI.GetBucketInfo(bucket); err != nil {
errorIf(err, "Unable to fetch bucket info.")
writeErrorResponse(w, r, toAPIErrorCode(err), r.URL.Path)
return
@@ -461,8 +500,14 @@ func (api objectAPIHandlers) HeadBucketHandler(w http.ResponseWriter, r *http.Re
// DeleteBucketHandler - Delete bucket
func (api objectAPIHandlers) DeleteBucketHandler(w http.ResponseWriter, r *http.Request) {
- // DeleteBucket does not support bucket policies, use checkAuth to validate signature.
- if s3Error := checkAuth(r); s3Error != ErrNone {
+ objectAPI := api.ObjectAPI()
+ if objectAPI == nil {
+ writeErrorResponse(w, r, ErrServerNotInitialized, r.URL.Path)
return
}
+ // DeleteBucket does not have any bucket action.
+ if s3Error := checkRequestAuthType(r, "", "", serverConfig.GetRegion()); s3Error != ErrNone {
writeErrorResponse(w, r, s3Error, r.URL.Path)
return
}
@@ -470,18 +515,25 @@ func (api objectAPIHandlers) DeleteBucketHandler(w http.ResponseWriter, r *http.
vars := mux.Vars(r)
bucket := vars["bucket"]
+ bucketLock := globalNSMutex.NewNSLock(bucket, "")
+ bucketLock.Lock()
+ defer bucketLock.Unlock()
// Attempt to delete bucket.
- if err := api.ObjectAPI.DeleteBucket(bucket); err != nil {
+ if err := objectAPI.DeleteBucket(bucket); err != nil {
errorIf(err, "Unable to delete a bucket.")
writeErrorResponse(w, r, toAPIErrorCode(err), r.URL.Path)
return
}
// Delete bucket access policy, if present - ignore any errors.
- removeBucketPolicy(bucket, api.ObjectAPI)
+ _ = removeBucketPolicy(bucket, objectAPI)
// Delete notification config, if present - ignore any errors.
- removeNotificationConfig(bucket, api.ObjectAPI)
+ _ = removeNotificationConfig(bucket, objectAPI)
+ // Delete listener config, if present - ignore any errors.
+ _ = removeListenerConfig(bucket, objectAPI)
// Write success response.
writeSuccessNoContent(w)
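The rangeReader used by PostPolicyBucketHandler above is not included in this diff. A minimal sketch consistent with its usage (Reader/Min/Max fields); the bookkeeping field and error text below are assumptions:

// Sketch only: an io.Reader wrapper enforcing the policy's
// content-length-range; needs "errors" and "io" imports.
type rangeReader struct {
	Reader io.Reader
	Min    int64
	Max    int64
	read   int64 // bytes consumed so far (assumed field)
}

func (r *rangeReader) Read(p []byte) (int, error) {
	n, err := r.Reader.Read(p)
	r.read += int64(n)
	if r.read > r.Max {
		// The stream grew past the allowed maximum.
		return n, errors.New("data size larger than allowed limit")
	}
	if err == io.EOF && r.read < r.Min {
		// The stream ended before reaching the allowed minimum.
		return n, errors.New("data size smaller than allowed limit")
	}
	return n, err
}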

cmd/bucket-handlers_test.go

@@ -0,0 +1,773 @@
/*
* Minio Cloud Storage, (C) 2016 Minio, Inc.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package cmd
import (
"bytes"
"encoding/xml"
"io/ioutil"
"net/http"
"net/http/httptest"
"strconv"
"testing"
)
// Wrapper for calling GetBucketPolicy HTTP handler tests for both XL multiple disks and single node setup.
func TestGetBucketLocationHandler(t *testing.T) {
ExecObjectLayerAPITest(t, testGetBucketLocationHandler, []string{"GetBucketLocation"})
}
func testGetBucketLocationHandler(obj ObjectLayer, instanceType, bucketName string, apiRouter http.Handler,
credentials credential, t *testing.T) {
initBucketPolicies(obj)
// test cases with sample input and expected output.
testCases := []struct {
bucketName string
accessKey string
secretKey string
// expected Response.
expectedRespStatus int
locationResponse []byte
errorResponse APIErrorResponse
shouldPass bool
}{
// Test case - 1.
// Tests for authenticated request and proper response.
{
bucketName: bucketName,
accessKey: credentials.AccessKeyID,
secretKey: credentials.SecretAccessKey,
expectedRespStatus: http.StatusOK,
locationResponse: []byte(`<?xml version="1.0" encoding="UTF-8"?>
<LocationConstraint xmlns="http://s3.amazonaws.com/doc/2006-03-01/"></LocationConstraint>`),
errorResponse: APIErrorResponse{},
shouldPass: true,
},
// Test case - 2.
// Tests for signature mismatch error.
{
bucketName: bucketName,
accessKey: "abcd",
secretKey: "abcd",
expectedRespStatus: http.StatusForbidden,
locationResponse: []byte(""),
errorResponse: APIErrorResponse{
Resource: "/" + bucketName + "/",
Code: "InvalidAccessKeyID",
Message: "The access key ID you provided does not exist in our records.",
},
shouldPass: false,
},
}
for i, testCase := range testCases {
// initialize httptest Recorder, this records any mutations to response writer inside the handler.
rec := httptest.NewRecorder()
// construct HTTP request for Get bucket location.
req, err := newTestSignedRequestV4("GET", getBucketLocationURL("", testCase.bucketName), 0, nil, testCase.accessKey, testCase.secretKey)
if err != nil {
t.Fatalf("Test %d: %s: Failed to create HTTP request for GetBucketLocationHandler: <ERROR> %v", i+1, instanceType, err)
}
// Since `apiRouter` satisfies `http.Handler` it has a ServeHTTP to execute the logic of the handler.
// Call the ServeHTTP to execute the handler.
apiRouter.ServeHTTP(rec, req)
if rec.Code != testCase.expectedRespStatus {
t.Errorf("Test %d: %s: Expected the response status to be `%d`, but instead found `%d`", i+1, instanceType, testCase.expectedRespStatus, rec.Code)
}
if !bytes.Equal(testCase.locationResponse, rec.Body.Bytes()) && testCase.shouldPass {
t.Errorf("Test %d: %s: Expected the response to be `%s`, but instead found `%s`", i+1, instanceType, string(testCase.locationResponse), string(rec.Body.Bytes()))
}
errorResponse := APIErrorResponse{}
err = xml.Unmarshal(rec.Body.Bytes(), &errorResponse)
if err != nil && !testCase.shouldPass {
t.Fatalf("Test %d: %s: Unable to marshal response body %s", i+1, instanceType, string(rec.Body.Bytes()))
}
if errorResponse.Resource != testCase.errorResponse.Resource {
t.Errorf("Test %d: %s: Expected the error resource to be `%s`, but instead found `%s`", i+1, instanceType, testCase.errorResponse.Resource, errorResponse.Resource)
}
if errorResponse.Message != testCase.errorResponse.Message {
t.Errorf("Test %d: %s: Expected the error message to be `%s`, but instead found `%s`", i+1, instanceType, testCase.errorResponse.Message, errorResponse.Message)
}
if errorResponse.Code != testCase.errorResponse.Code {
t.Errorf("Test %d: %s: Expected the error code to be `%s`, but instead found `%s`", i+1, instanceType, testCase.errorResponse.Code, errorResponse.Code)
}
// Verify response of the V2 signed HTTP request.
// initialize HTTP NewRecorder, this records any mutations to response writer inside the handler.
recV2 := httptest.NewRecorder()
// construct HTTP request with V2 signature for the GET bucket location endpoint.
reqV2, err := newTestSignedRequestV2("GET", getBucketLocationURL("", testCase.bucketName), 0, nil, testCase.accessKey, testCase.secretKey)
if err != nil {
t.Fatalf("Test %d: %s: Failed to create HTTP request for GetBucketLocationHandler: <ERROR> %v", i+1, instanceType, err)
}
// Since `apiRouter` satisfies `http.Handler` it has a ServeHTTP to execute the logic of the handler.
// Call the ServeHTTP to execute the handler.
apiRouter.ServeHTTP(recV2, reqV2)
if recV2.Code != testCase.expectedRespStatus {
t.Errorf("Test %d: %s: Expected the response status to be `%d`, but instead found `%d`", i+1, instanceType, testCase.expectedRespStatus, recV2.Code)
}
errorResponse = APIErrorResponse{}
err = xml.Unmarshal(recV2.Body.Bytes(), &errorResponse)
if err != nil && !testCase.shouldPass {
t.Fatalf("Test %d: %s: Unable to marshal response body %s", i+1, instanceType, string(recV2.Body.Bytes()))
}
if errorResponse.Resource != testCase.errorResponse.Resource {
t.Errorf("Test %d: %s: Expected the error resource to be `%s`, but instead found `%s`", i+1, instanceType, testCase.errorResponse.Resource, errorResponse.Resource)
}
if errorResponse.Message != testCase.errorResponse.Message {
t.Errorf("Test %d: %s: Expected the error message to be `%s`, but instead found `%s`", i+1, instanceType, testCase.errorResponse.Message, errorResponse.Message)
}
if errorResponse.Code != testCase.errorResponse.Code {
t.Errorf("Test %d: %s: Expected the error code to be `%s`, but instead found `%s`", i+1, instanceType, testCase.errorResponse.Code, errorResponse.Code)
}
}
// Test for Anonymous/unsigned http request.
// GetBucketLocationHandler doesn't support bucket policies, setting the policies shouldn't make any difference.
anonReq, err := newTestRequest("GET", getBucketLocationURL("", bucketName), 0, nil)
if err != nil {
t.Fatalf("Minio %s: Failed to create an anonymous request.", instanceType)
}
// ExecObjectLayerAPIAnonTest - Calls the HTTP API handler using the anonymous request, validates the ErrAccessDeniedResponse,
// sets the bucket policy using the policy statement generated from `getReadOnlyBucketStatement` so that the
// unsigned request goes through and it's validated again.
ExecObjectLayerAPIAnonTest(t, "TestGetBucketLocationHandler", bucketName, "", instanceType, apiRouter, anonReq, getReadOnlyBucketStatement)
// HTTP request for testing when `objectLayer` is set to `nil`.
// There is no need to use an existing bucket and valid input for creating the request
// since the `objectLayer==nil` check is performed before any other checks inside the handlers.
// The only aim is to generate an HTTP request in a way that the relevant/registered end point is evoked/called.
nilBucket := "dummy-bucket"
nilReq, err := newTestRequest("GET", getBucketLocationURL("", nilBucket), 0, nil)
if err != nil {
t.Errorf("Minio %s: Failed to create HTTP request for testing the response when object Layer is set to `nil`.", instanceType)
}
// Executes the object layer set to `nil` test.
// `ExecObjectLayerAPINilTest` manages the operation.
ExecObjectLayerAPINilTest(t, nilBucket, "", instanceType, apiRouter, nilReq)
}
// Wrapper for calling HeadBucket HTTP handler tests for both XL multiple disks and single node setup.
func TestHeadBucketHandler(t *testing.T) {
ExecObjectLayerAPITest(t, testHeadBucketHandler, []string{"HeadBucket"})
}
func testHeadBucketHandler(obj ObjectLayer, instanceType, bucketName string, apiRouter http.Handler,
credentials credential, t *testing.T) {
initBucketPolicies(obj)
// test cases with sample input and expected output.
testCases := []struct {
bucketName string
accessKey string
secretKey string
// expected Response.
expectedRespStatus int
}{
// Test case - 1.
// Bucket exists.
{
bucketName: bucketName,
accessKey: credentials.AccessKeyID,
secretKey: credentials.SecretAccessKey,
expectedRespStatus: http.StatusOK,
},
// Test case - 2.
// Non-existent bucket name.
{
bucketName: "2333",
accessKey: credentials.AccessKeyID,
secretKey: credentials.SecretAccessKey,
expectedRespStatus: http.StatusNotFound,
},
// Test case - 3.
// Testing for signature mismatch error.
// setting invalid access and secret key.
{
bucketName: bucketName,
accessKey: "abcd",
secretKey: "abcd",
expectedRespStatus: http.StatusForbidden,
},
}
for i, testCase := range testCases {
// initialize HTTP NewRecorder, this records any mutations to response writer inside the handler.
rec := httptest.NewRecorder()
// construct HTTP request for HEAD bucket.
req, err := newTestSignedRequestV4("HEAD", getHEADBucketURL("", testCase.bucketName), 0, nil, testCase.accessKey, testCase.secretKey)
if err != nil {
t.Fatalf("Test %d: %s: Failed to create HTTP request for HeadBucketHandler: <ERROR> %v", i+1, instanceType, err)
}
// Since `apiRouter` satisfies `http.Handler` it has a ServeHTTP to execute the logic of the handler.
// Call the ServeHTTP to execute the handler.
apiRouter.ServeHTTP(rec, req)
if rec.Code != testCase.expectedRespStatus {
t.Errorf("Test %d: %s: Expected the response status to be `%d`, but instead found `%d`", i+1, instanceType, testCase.expectedRespStatus, rec.Code)
}
// Verify response of the V2 signed HTTP request.
// initialize HTTP NewRecorder, this records any mutations to response writer inside the handler.
recV2 := httptest.NewRecorder()
// construct HTTP request with V2 signature for the HEAD bucket endpoint.
reqV2, err := newTestSignedRequestV2("HEAD", getHEADBucketURL("", testCase.bucketName), 0, nil, testCase.accessKey, testCase.secretKey)
if err != nil {
t.Fatalf("Test %d: %s: Failed to create HTTP request for HeadBucketHandler: <ERROR> %v", i+1, instanceType, err)
}
// Since `apiRouter` satisfies `http.Handler` it has a ServeHTTP to execute the logic of the handler.
// Call the ServeHTTP to execute the handler.
apiRouter.ServeHTTP(recV2, reqV2)
if recV2.Code != testCase.expectedRespStatus {
t.Errorf("Test %d: %s: Expected the response status to be `%d`, but instead found `%d`", i+1, instanceType, testCase.expectedRespStatus, recV2.Code)
}
}
// Test for Anonymous/unsigned http request.
anonReq, err := newTestRequest("HEAD", getHEADBucketURL("", bucketName), 0, nil)
if err != nil {
t.Fatalf("Minio %s: Failed to create an anonymous request for bucket \"%s\": <ERROR> %v",
instanceType, bucketName, err)
}
// ExecObjectLayerAPIAnonTest - Calls the HTTP API handler using the anonymous request, validates the ErrAccessDeniedResponse,
// sets the bucket policy using the policy statement generated from `getReadOnlyBucketStatement` so that the
// unsigned request goes through and it's validated again.
ExecObjectLayerAPIAnonTest(t, "TestHeadBucketHandler", bucketName, "", instanceType, apiRouter, anonReq, getReadOnlyBucketStatement)
// HTTP request for testing when `objectLayer` is set to `nil`.
// There is no need to use an existing bucket and valid input for creating the request
// since the `objectLayer==nil` check is performed before any other checks inside the handlers.
// The only aim is to generate an HTTP request in a way that the relevant/registered end point is evoked/called.
nilBucket := "dummy-bucket"
nilReq, err := newTestRequest("HEAD", getHEADBucketURL("", nilBucket), 0, nil)
if err != nil {
t.Errorf("Minio %s: Failed to create HTTP request for testing the response when object Layer is set to `nil`.", instanceType)
}
// execute the object layer set to `nil` test.
// `ExecObjectLayerAPINilTest` manages the operation.
ExecObjectLayerAPINilTest(t, nilBucket, "", instanceType, apiRouter, nilReq)
}
// Wrapper for calling TestListMultipartUploadsHandler tests for both XL multiple disks and single node setup.
func TestListMultipartUploadsHandler(t *testing.T) {
ExecObjectLayerAPITest(t, testListMultipartUploadsHandler, []string{"ListMultipartUploads"})
}
// testListMultipartUploadsHandler - Tests validate listing of multipart uploads.
func testListMultipartUploadsHandler(obj ObjectLayer, instanceType, bucketName string, apiRouter http.Handler,
credentials credential, t *testing.T) {
initBucketPolicies(obj)
// Collection of non-exhaustive ListMultipartUploads test cases, valid errors
// and success responses.
testCases := []struct {
// Inputs to ListMultipartUploads.
bucket string
prefix string
keyMarker string
uploadIDMarker string
delimiter string
maxUploads string
accessKey string
secretKey string
expectedRespStatus int
shouldPass bool
}{
// Test case - 1.
// Setting invalid bucket name.
{
bucket: ".test",
prefix: "",
keyMarker: "",
uploadIDMarker: "",
delimiter: "",
maxUploads: "0",
accessKey: credentials.AccessKeyID,
secretKey: credentials.SecretAccessKey,
expectedRespStatus: http.StatusBadRequest,
shouldPass: false,
},
// Test case - 2.
// Setting a non-existent bucket.
{
bucket: "volatile-bucket-1",
prefix: "",
keyMarker: "",
uploadIDMarker: "",
delimiter: "",
maxUploads: "0",
accessKey: credentials.AccessKeyID,
secretKey: credentials.SecretAccessKey,
expectedRespStatus: http.StatusNotFound,
shouldPass: false,
},
// Test case -3.
// Setting invalid delimiter, expecting the HTTP response status to be http.StatusNotImplemented.
{
bucket: bucketName,
prefix: "",
keyMarker: "",
uploadIDMarker: "",
delimiter: "-",
maxUploads: "0",
accessKey: credentials.AccessKeyID,
secretKey: credentials.SecretAccessKey,
expectedRespStatus: http.StatusNotImplemented,
shouldPass: false,
},
// Test case - 4.
// Setting Invalid prefix and marker combination.
{
bucket: bucketName,
prefix: "asia",
keyMarker: "europe-object",
uploadIDMarker: "",
delimiter: "",
maxUploads: "0",
accessKey: credentials.AccessKeyID,
secretKey: credentials.SecretAccessKey,
expectedRespStatus: http.StatusNotImplemented,
shouldPass: false,
},
// Test case - 5.
// Invalid upload id and marker combination.
{
bucket: bucketName,
prefix: "asia",
keyMarker: "asia/europe/",
uploadIDMarker: "abc",
delimiter: "",
maxUploads: "0",
accessKey: credentials.AccessKeyID,
secretKey: credentials.SecretAccessKey,
expectedRespStatus: http.StatusNotImplemented,
shouldPass: false,
},
// Test case - 6.
// Setting a negative value to max-uploads parameter, should result in http.StatusBadRequest.
{
bucket: bucketName,
prefix: "",
keyMarker: "",
uploadIDMarker: "",
delimiter: "",
maxUploads: "-1",
accessKey: credentials.AccessKeyID,
secretKey: credentials.SecretAccessKey,
expectedRespStatus: http.StatusBadRequest,
shouldPass: false,
},
// Test case - 7.
// Case with right set of parameters,
// should result in success 200OK.
{
bucket: bucketName,
prefix: "",
keyMarker: "",
uploadIDMarker: "",
delimiter: "/",
maxUploads: "100",
accessKey: credentials.AccessKeyID,
secretKey: credentials.SecretAccessKey,
expectedRespStatus: http.StatusOK,
shouldPass: true,
},
// Test case - 8.
// Good case without delimiter.
{
bucket: bucketName,
prefix: "",
keyMarker: "",
uploadIDMarker: "",
delimiter: "",
maxUploads: "100",
accessKey: credentials.AccessKeyID,
secretKey: credentials.SecretAccessKey,
expectedRespStatus: http.StatusOK,
shouldPass: true,
},
// Test case - 9.
// Setting Invalid AccessKey and SecretKey to induce and verify Signature Mismatch error.
{
bucket: bucketName,
prefix: "",
keyMarker: "",
uploadIDMarker: "",
delimiter: "",
maxUploads: "100",
accessKey: "abcd",
secretKey: "abcd",
expectedRespStatus: http.StatusForbidden,
shouldPass: true,
},
}
for i, testCase := range testCases {
// initialize HTTP NewRecorder, this records any mutations to response writer inside the handler.
rec := httptest.NewRecorder()
// construct HTTP request for List multipart uploads endpoint.
u := getListMultipartUploadsURLWithParams("", testCase.bucket, testCase.prefix, testCase.keyMarker, testCase.uploadIDMarker, testCase.delimiter, testCase.maxUploads)
req, gerr := newTestSignedRequestV4("GET", u, 0, nil, testCase.accessKey, testCase.secretKey)
if gerr != nil {
t.Fatalf("Test %d: %s: Failed to create HTTP request for ListMultipartUploadsHandler: <ERROR> %v", i+1, instanceType, gerr)
}
// Since `apiRouter` satisfies `http.Handler` it has a ServeHTTP to execute the logic of the handler.
// Call the ServeHTTP to execute the handler.
apiRouter.ServeHTTP(rec, req)
if rec.Code != testCase.expectedRespStatus {
t.Errorf("Test %d: %s: Expected the response status to be `%d`, but instead found `%d`", i+1, instanceType, testCase.expectedRespStatus, rec.Code)
}
// Verify response of the V2 signed HTTP request.
// initialize HTTP NewRecorder, this records any mutations to response writer inside the handler.
recV2 := httptest.NewRecorder()
// construct HTTP request with V2 signature for the list multipart uploads endpoint.
reqV2, err := newTestSignedRequestV2("GET", u, 0, nil, testCase.accessKey, testCase.secretKey)
if err != nil {
t.Fatalf("Test %d: %s: Failed to create HTTP request for ListMultipartUploadsHandler: <ERROR> %v", i+1, instanceType, err)
}
// Since `apiRouter` satisfies `http.Handler` it has a ServeHTTP to execute the logic of the handler.
// Call the ServeHTTP to execute the handler.
apiRouter.ServeHTTP(recV2, reqV2)
if recV2.Code != testCase.expectedRespStatus {
t.Errorf("Test %d: %s: Expected the response status to be `%d`, but instead found `%d`", i+1, instanceType, testCase.expectedRespStatus, recV2.Code)
}
}
// initialize HTTP NewRecorder, this records any mutations to response writer inside the handler.
rec := httptest.NewRecorder()
// construct HTTP request for List multipart uploads endpoint.
u := getListMultipartUploadsURLWithParams("", bucketName, "", "", "", "", "")
req, err := newTestSignedRequestV4("GET", u, 0, nil, "", "") // Generate an anonymous request.
if err != nil {
t.Fatalf("Test %s: Failed to create HTTP request for ListMultipartUploadsHandler: <ERROR> %v", instanceType, err)
}
// Since `apiRouter` satisfies `http.Handler` it has a ServeHTTP to execute the logic of the handler.
// Call the ServeHTTP to execute the handler.
apiRouter.ServeHTTP(rec, req)
if rec.Code != http.StatusForbidden {
t.Errorf("Test %s: Expected the response status to be `http.StatusForbidden`, but instead found `%d`", instanceType, rec.Code)
}
url := getListMultipartUploadsURLWithParams("", testCases[6].bucket, testCases[6].prefix, testCases[6].keyMarker,
testCases[6].uploadIDMarker, testCases[6].delimiter, testCases[6].maxUploads)
// Test for Anonymous/unsigned http request.
anonReq, err := newTestRequest("GET", url, 0, nil)
if err != nil {
t.Fatalf("Minio %s: Failed to create an anonymous request for bucket \"%s\": <ERROR> %v",
instanceType, bucketName, err)
}
// ExecObjectLayerAPIAnonTest - Calls the HTTP API handler using the anonymous request, validates the ErrAccessDeniedResponse,
// sets the bucket policy using the policy statement generated from `getWriteOnlyBucketStatement` so that the
// unsigned request goes through and it's validated again.
ExecObjectLayerAPIAnonTest(t, "TestListMultipartUploadsHandler", bucketName, "", instanceType, apiRouter, anonReq, getWriteOnlyBucketStatement)
// HTTP request for testing when `objectLayer` is set to `nil`.
// There is no need to use an existing bucket and valid input for creating the request
// since the `objectLayer==nil` check is performed before any other checks inside the handlers.
// The only aim is to generate an HTTP request in a way that the relevant/registered end point is evoked/called.
nilBucket := "dummy-bucket"
url = getListMultipartUploadsURLWithParams("", nilBucket, "dummy-prefix", testCases[6].keyMarker,
testCases[6].uploadIDMarker, testCases[6].delimiter, testCases[6].maxUploads)
nilReq, err := newTestRequest("GET", url, 0, nil)
if err != nil {
t.Errorf("Minio %s: Failed to create HTTP request for testing the response when object Layer is set to `nil`.", instanceType)
}
// execute the object layer set to `nil` test.
// `ExecObjectLayerAPINilTest` manages the operation.
ExecObjectLayerAPINilTest(t, nilBucket, "", instanceType, apiRouter, nilReq)
}
// Wrapper for calling TestListBucketsHandler tests for both XL multiple disks and single node setup.
func TestListBucketsHandler(t *testing.T) {
ExecObjectLayerAPITest(t, testListBucketsHandler, []string{"ListBuckets"})
}
// testListBucketsHandler - Tests validate listing of buckets.
func testListBucketsHandler(obj ObjectLayer, instanceType, bucketName string, apiRouter http.Handler,
credentials credential, t *testing.T) {
testCases := []struct {
bucketName string
accessKey string
secretKey string
expectedRespStatus int
}{
// Test case - 1.
// Validate a good case request succeeds.
{
bucketName: bucketName,
accessKey: credentials.AccessKeyID,
secretKey: credentials.SecretAccessKey,
expectedRespStatus: http.StatusOK,
},
// Test case - 2.
// Test case with invalid accessKey to produce and validate Signature MisMatch error.
{
bucketName: bucketName,
accessKey: "abcd",
secretKey: "abcd",
expectedRespStatus: http.StatusForbidden,
},
}
for i, testCase := range testCases {
// initialize HTTP NewRecorder, this records any mutations to response writer inside the handler.
rec := httptest.NewRecorder()
req, lerr := newTestSignedRequestV4("GET", getListBucketURL(""), 0, nil, testCase.accessKey, testCase.secretKey)
if lerr != nil {
t.Fatalf("Test %d: %s: Failed to create HTTP request for ListBucketsHandler: <ERROR> %v", i+1, instanceType, lerr)
}
// Since `apiRouter` satisfies `http.Handler` it has a ServeHTTP to execute the logic of the handler.
// Call the ServeHTTP to execute the handler.
apiRouter.ServeHTTP(rec, req)
if rec.Code != testCase.expectedRespStatus {
t.Errorf("Test %d: %s: Expected the response status to be `%d`, but instead found `%d`", i+1, instanceType, testCase.expectedRespStatus, rec.Code)
}
// Verify response of the V2 signed HTTP request.
// Initialize httptest.NewRecorder; this records any mutations to the response writer inside the handler.
recV2 := httptest.NewRecorder()
// Construct a V2 signed HTTP request for the ListBuckets endpoint.
reqV2, err := newTestSignedRequestV2("GET", getListBucketURL(""), 0, nil, testCase.accessKey, testCase.secretKey)
if err != nil {
t.Fatalf("Test %d: %s: Failed to create HTTP request for PutBucketPolicyHandler: <ERROR> %v", i+1, instanceType, err)
}
// Since `apiRouter` satisfies `http.Handler` it has a ServeHTTP to execute the logic of the handler.
// Call the ServeHTTP to execute the handler.
apiRouter.ServeHTTP(recV2, reqV2)
if recV2.Code != testCase.expectedRespStatus {
t.Errorf("Test %d: %s: Expected the response status to be `%d`, but instead found `%d`", i+1, instanceType, testCase.expectedRespStatus, recV2.Code)
}
}
// Test for Anonymous/unsigned http request.
// ListBucketsHandler doesn't support bucket policies; setting a policy shouldn't make a difference.
anonReq, err := newTestRequest("GET", getListBucketURL(""), 0, nil)
if err != nil {
t.Fatalf("Minio %s: Failed to create an anonymous request.", instanceType)
}
// ExecObjectLayerAPIAnonTest - Calls the HTTP API handler using the anonymous request, validates the `ErrAccessDenied` response,
// then sets the bucket policy using the policy statement generated from `getWriteOnlyObjectStatement` so that the
// unsigned request goes through and is validated again.
ExecObjectLayerAPIAnonTest(t, "ListBucketsHandler", "", "", instanceType, apiRouter, anonReq, getWriteOnlyObjectStatement)
// HTTP request for testing when `objectLayer` is set to `nil`.
// There is no need to use an existing bucket and valid input for creating the request
// since the `objectLayer==nil` check is performed before any other checks inside the handlers.
// The only aim is to generate an HTTP request in a way that the relevant registered endpoint is invoked.
nilReq, err := newTestRequest("GET", getListBucketURL(""), 0, nil)
if err != nil {
t.Errorf("Minio %s: Failed to create HTTP request for testing the response when object Layer is set to `nil`.", instanceType)
}
// execute the object layer set to `nil` test.
// `ExecObjectLayerAPINilTest` manages the operation.
ExecObjectLayerAPINilTest(t, "", "", instanceType, apiRouter, nilReq)
}
// Wrapper for calling DeleteMultipleObjects HTTP handler tests for both XL multiple disks and single node setup.
func TestAPIDeleteMultipleObjectsHandler(t *testing.T) {
ExecObjectLayerAPITest(t, testAPIDeleteMultipleObjectsHandler, []string{"DeleteMultipleObjects"})
}
func testAPIDeleteMultipleObjectsHandler(obj ObjectLayer, instanceType, bucketName string, apiRouter http.Handler,
credentials credential, t *testing.T) {
initBucketPolicies(obj)
var err error
// register event notifier.
err = initEventNotifier(obj)
if err != nil {
t.Fatal("Notifier initialization failed.")
}
contentBytes := []byte("hello")
sha256sum := ""
var objectNames []string
for i := 0; i < 10; i++ {
objectName := "test-object-" + strconv.Itoa(i)
// uploading the object.
_, err = obj.PutObject(bucketName, objectName, int64(len(contentBytes)), bytes.NewBuffer(contentBytes),
make(map[string]string), sha256sum)
// If the object upload fails, stop the test.
if err != nil {
t.Fatalf("Put Object %d: Error uploading object: <ERROR> %v", i, err)
}
// Save the object name for later use.
objectNames = append(objectNames, objectName)
}
getObjectIdentifierList := func(objectNames []string) (objectIdentifierList []ObjectIdentifier) {
for _, objectName := range objectNames {
objectIdentifierList = append(objectIdentifierList, ObjectIdentifier{objectName})
}
return objectIdentifierList
}
requestList := []DeleteObjectsRequest{
{Quiet: false, Objects: getObjectIdentifierList(objectNames[:5])},
{Quiet: true, Objects: getObjectIdentifierList(objectNames[5:])},
}
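// For reference, encodeResponse(requestList[0]) below produces the standard
// S3 multi-object delete XML body, roughly of this shape (a sketch, assuming
// the usual xml struct tags on DeleteObjectsRequest/ObjectIdentifier):
//
//	<Delete>
//	  <Quiet>false</Quiet>
//	  <Object><Key>test-object-0</Key></Object>
//	  ...
//	  <Object><Key>test-object-4</Key></Object>
//	</Delete>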
// Generate multi-object delete requests and their expected responses.
successRequest0 := encodeResponse(requestList[0])
successResponse0 := generateMultiDeleteResponse(requestList[0].Quiet, requestList[0].Objects, nil)
encodedSuccessResponse0 := encodeResponse(successResponse0)
successRequest1 := encodeResponse(requestList[1])
successResponse1 := generateMultiDeleteResponse(requestList[1].Quiet, requestList[1].Objects, nil)
encodedSuccessResponse1 := encodeResponse(successResponse1)
// Generate a multi-object delete response for deleting already-deleted objects;
// delete is idempotent, so this matches the success response.
// errorRequest := encodeResponse(requestList[1])
errorResponse := generateMultiDeleteResponse(requestList[1].Quiet, requestList[1].Objects, nil)
encodedErrorResponse := encodeResponse(errorResponse)
testCases := []struct {
bucket string
objects []byte
accessKey string
secretKey string
expectedContent []byte
expectedRespStatus int
}{
// Test case - 1.
// Delete objects with invalid access key.
{
bucket: bucketName,
objects: successRequest0,
accessKey: "Invalid-AccessID",
secretKey: credentials.SecretAccessKey,
expectedContent: nil,
expectedRespStatus: http.StatusForbidden,
},
// Test case - 2.
// Delete valid objects with quiet flag off.
{
bucket: bucketName,
objects: successRequest0,
accessKey: credentials.AccessKeyID,
secretKey: credentials.SecretAccessKey,
expectedContent: encodedSuccessResponse0,
expectedRespStatus: http.StatusOK,
},
// Test case - 3.
// Delete valid objects with quiet flag on.
{
bucket: bucketName,
objects: successRequest1,
accessKey: credentials.AccessKeyID,
secretKey: credentials.SecretAccessKey,
expectedContent: encodedSuccessResponse1,
expectedRespStatus: http.StatusOK,
},
// Test case - 4.
// Delete previously deleted objects.
{
bucket: bucketName,
objects: successRequest1,
accessKey: credentials.AccessKeyID,
secretKey: credentials.SecretAccessKey,
expectedContent: encodedErrorResponse,
expectedRespStatus: http.StatusOK,
},
}
for i, testCase := range testCases {
var req *http.Request
var actualContent []byte
// Construct a signed HTTP request for the DeleteMultipleObjects endpoint.
req, err = newTestSignedRequestV4("POST", getDeleteMultipleObjectsURL("", bucketName),
int64(len(testCase.objects)), bytes.NewReader(testCase.objects), testCase.accessKey, testCase.secretKey)
if err != nil {
t.Fatalf("Failed to create HTTP request for DeleteMultipleObjects: <ERROR> %v", err)
}
rec := httptest.NewRecorder()
// Since `apiRouter` satisfies `http.Handler` it has a ServeHTTP to execute the logic of the handler.
// Call the ServeHTTP to execute the registered handler.
apiRouter.ServeHTTP(rec, req)
// Assert the response code with the expected status.
if rec.Code != testCase.expectedRespStatus {
t.Errorf("Case %d: Minio %s: Expected the response status to be `%d`, but instead found `%d`", i+1, instanceType, testCase.expectedRespStatus, rec.Code)
}
// read the response body.
actualContent, err = ioutil.ReadAll(rec.Body)
if err != nil {
t.Fatalf("Test %d : Minio %s: Failed parsing response body: <ERROR> %v", i+1, instanceType, err)
}
// Verify whether the response body matches the expected content.
if testCase.expectedContent != nil && !bytes.Equal(testCase.expectedContent, actualContent) {
t.Errorf("Test %d : Minio %s: Object content differs from expected value.", i+1, instanceType)
}
}
// Currently an anonymous user cannot delete multiple objects on a Minio server, hence no test case is required.
// HTTP request to test the case of `objectLayer` being set to `nil`.
// There is no need to use an existing bucket or valid input for creating the request,
// since the `objectLayer==nil` check is performed before any other checks inside the handlers.
// The only aim is to generate an HTTP request in a way that the relevant registered endpoint is invoked.
nilBucket := "dummy-bucket"
nilObject := ""
nilReq, err := newTestSignedRequestV4("POST", getDeleteMultipleObjectsURL("", nilBucket), 0, nil, "", "")
if err != nil {
t.Errorf("Minio %s: Failed to create HTTP request for testing the response when object Layer is set to `nil`.", instanceType)
}
// execute the object layer set to `nil` test.
// `ExecObjectLayerAPINilTest` manages the operation.
ExecObjectLayerAPINilTest(t, nilBucket, nilObject, instanceType, apiRouter, nilReq)
}

cmd/bucket-metadata.go (new file, 173 lines)

@@ -0,0 +1,173 @@
/*
* Minio Cloud Storage, (C) 2014-2016 Minio, Inc.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package cmd
import (
"encoding/json"
"net/rpc"
)
// BucketMetaState - Interface to update bucket metadata in-memory
// state.
type BucketMetaState interface {
// Updates bucket notification
UpdateBucketNotification(args *SetBucketNotificationPeerArgs) error
// Updates bucket listener
UpdateBucketListener(args *SetBucketListenerPeerArgs) error
// Updates bucket policy
UpdateBucketPolicy(args *SetBucketPolicyPeerArgs) error
// Sends event
SendEvent(args *EventArgs) error
}
// BucketUpdater - an implementer applies its update via one of BucketMetaState's methods.
type BucketUpdater interface {
BucketUpdate(client BucketMetaState) error
}
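// Illustrative sketch (not part of this change): because both the local and
// remote implementations below satisfy BucketMetaState, a caller can fan a
// change out over a mixed set of nodes with a single loop. The `peers` slice
// here is hypothetical.
//
//	func updatePolicyOnAllPeers(peers []BucketMetaState, args *SetBucketPolicyPeerArgs) (errs []error) {
//		for _, peer := range peers {
//			// Same call works for localBucketMetaState and remoteBucketMetaState.
//			if err := peer.UpdateBucketPolicy(args); err != nil {
//				errs = append(errs, err)
//			}
//		}
//		return errs
//	}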
// Type that implements BucketMetaState for local node.
type localBucketMetaState struct {
ObjectAPI func() ObjectLayer
}
// localBucketMetaState.UpdateBucketNotification - updates in-memory global bucket
// notification info.
func (lc *localBucketMetaState) UpdateBucketNotification(args *SetBucketNotificationPeerArgs) error {
// check if object layer is available.
objAPI := lc.ObjectAPI()
if objAPI == nil {
return errServerNotInitialized
}
globalEventNotifier.SetBucketNotificationConfig(args.Bucket, args.NCfg)
return nil
}
// localBucketMetaState.UpdateBucketListener - updates in-memory global bucket
// listeners info.
func (lc *localBucketMetaState) UpdateBucketListener(args *SetBucketListenerPeerArgs) error {
// check if object layer is available.
objAPI := lc.ObjectAPI()
if objAPI == nil {
return errServerNotInitialized
}
// Update in-memory notification config.
return globalEventNotifier.SetBucketListenerConfig(args.Bucket, args.LCfg)
}
// localBucketMetaState.UpdateBucketPolicy - updates in-memory global bucket
// policy info.
func (lc *localBucketMetaState) UpdateBucketPolicy(args *SetBucketPolicyPeerArgs) error {
// check if object layer is available.
objAPI := lc.ObjectAPI()
if objAPI == nil {
return errServerNotInitialized
}
var pCh policyChange
if err := json.Unmarshal(args.PChBytes, &pCh); err != nil {
return err
}
return globalBucketPolicies.SetBucketPolicy(args.Bucket, pCh)
}
// localBucketMetaState.SendEvent - sends event to local event notifier via
// `globalEventNotifier`
func (lc *localBucketMetaState) SendEvent(args *EventArgs) error {
// check if object layer is available.
objAPI := lc.ObjectAPI()
if objAPI == nil {
return errServerNotInitialized
}
return globalEventNotifier.SendListenerEvent(args.Arn, args.Event)
}
// Type that implements BucketMetaState for remote node.
type remoteBucketMetaState struct {
*AuthRPCClient
}
// remoteBucketMetaState.UpdateBucketNotification - sends bucket notification
// change to remote peer via RPC call.
func (rc *remoteBucketMetaState) UpdateBucketNotification(args *SetBucketNotificationPeerArgs) error {
reply := GenericReply{}
err := rc.Call("S3.SetBucketNotificationPeer", args, &reply)
// Check for network error and retry once.
if err != nil && err == rpc.ErrShutdown {
// Close the underlying connection to attempt once more.
rc.Close()
// Attempt again and proceed.
err = rc.Call("S3.SetBucketNotificationPeer", args, &reply)
}
return err
}
// remoteBucketMetaState.UpdateBucketListener - sends bucket listener change to
// remote peer via RPC call.
func (rc *remoteBucketMetaState) UpdateBucketListener(args *SetBucketListenerPeerArgs) error {
reply := GenericReply{}
err := rc.Call("S3.SetBucketListenerPeer", args, &reply)
// Check for network error and retry once.
if err != nil && err == rpc.ErrShutdown {
// Close the underlying connection to attempt once more.
rc.Close()
// Attempt again and proceed.
err = rc.Call("S3.SetBucketListenerPeer", args, &reply)
}
return err
}
// remoteBucketMetaState.UpdateBucketPolicy - sends bucket policy change to remote
// peer via RPC call.
func (rc *remoteBucketMetaState) UpdateBucketPolicy(args *SetBucketPolicyPeerArgs) error {
reply := GenericReply{}
err := rc.Call("S3.SetBucketPolicyPeer", args, &reply)
// Check for network error and retry once.
if err != nil && err == rpc.ErrShutdown {
// Close the underlying connection to attempt once more.
rc.Close()
// Attempt again and proceed.
err = rc.Call("S3.SetBucketPolicyPeer", args, &reply)
}
return err
}
// remoteBucketMetaState.SendEvent - sends event for bucket listener to remote
// peer via RPC call.
func (rc *remoteBucketMetaState) SendEvent(args *EventArgs) error {
reply := GenericReply{}
err := rc.Call("S3.Event", args, &reply)
// Check for network error and retry once.
if err != nil && err == rpc.ErrShutdown {
// Close the underlying connection to attempt once more.
rc.Close()
// Attempt again and proceed.
err = rc.Call("S3.Event", args, &reply)
}
return err
}
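// The four remote methods above share the same retry-once-on-ErrShutdown
// shape; a hypothetical helper (not part of this change) could factor it out:
//
//	func (rc *remoteBucketMetaState) callWithRetry(serviceMethod string, args interface{}) error {
//		reply := GenericReply{}
//		err := rc.Call(serviceMethod, args, &reply)
//		if err == rpc.ErrShutdown {
//			// Close the underlying connection and attempt once more.
//			rc.Close()
//			err = rc.Call(serviceMethod, args, &reply)
//		}
//		return err
//	}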


@@ -32,30 +32,32 @@ type keyFilter struct {
FilterRules []filterRule `xml:"FilterRule,omitempty"`
}
-// Common elements of service notification.
-type serviceConfig struct {
-Events []string `xml:"Event"`
-Filter struct {
-Key keyFilter `xml:"S3Key,omitempty"`
-}
-ID string `xml:"Id"`
-}
+type filterStruct struct {
+Key keyFilter `xml:"S3Key,omitempty" json:"S3Key,omitempty"`
+}
+// ServiceConfig - Common elements of service notification.
+type ServiceConfig struct {
+Events []string `xml:"Event" json:"Event"`
+Filter filterStruct `xml:"Filter" json:"Filter"`
+ID string `xml:"Id" json:"Id"`
+}
// Queue SQS configuration.
type queueConfig struct {
-serviceConfig
+ServiceConfig
QueueARN string `xml:"Queue"`
}
// Topic SNS configuration, this is a compliance field not used by minio yet.
type topicConfig struct {
-serviceConfig
-TopicARN string `xml:"Topic"`
+ServiceConfig
+TopicARN string `xml:"Topic" json:"Topic"`
}
// Lambda function configuration, this is a compliance field not used by minio yet.
type lambdaConfig struct {
-serviceConfig
+ServiceConfig
LambdaARN string `xml:"CloudFunction"`
}
@@ -64,10 +66,16 @@ type lambdaConfig struct {
type notificationConfig struct {
XMLName xml.Name `xml:"NotificationConfiguration"`
QueueConfigs []queueConfig `xml:"QueueConfiguration"`
TopicConfigs []topicConfig `xml:"TopicConfiguration"`
LambdaConfigs []lambdaConfig `xml:"CloudFunctionConfiguration"`
}
// listenerConfig structure represents run-time notification
// configuration for live listeners
type listenerConfig struct {
TopicConfig topicConfig `json:"TopicConfiguration"`
TargetServer string `json:"TargetServer"`
}
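// For illustration, a persisted listener.json entry would look roughly like
// this (values hypothetical; field names follow the json tags above):
//
//	{
//	  "TopicConfiguration": {
//	    "Event": ["s3:ObjectCreated:*"],
//	    "Filter": {"S3Key": {}},
//	    "Id": "sns-1474332374",
//	    "Topic": "arn:minio:sns:us-east-1:1474332374:listen"
//	  },
//	  "TargetServer": "10.1.10.1:9000"
//	}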
// Internal error used to signal notifications not set.
var errNoSuchNotifications = errors.New("The specified bucket does not have bucket notifications")
@@ -152,17 +160,6 @@ type NotificationEvent struct {
S3 eventMeta `json:"s3"`
}
// Represents the minio topic type and account id's.
type arnTopic struct {
Type string
AccountID string
}
// Stringer for constructing AWS ARN compatible string.
func (m arnTopic) String() string {
return minioTopic + serverConfig.GetRegion() + ":" + m.AccountID + ":" + m.Type
}
// Represents the minio sqs type and account id's.
type arnSQS struct {
Type string


@@ -20,9 +20,9 @@ import (
"bytes"
"encoding/json"
"encoding/xml"
"fmt"
"io"
"net/http"
"path"
"time"
"github.com/gorilla/mux"
@@ -31,6 +31,7 @@ import (
const (
bucketConfigPrefix = "buckets"
bucketNotificationConfig = "notification.xml"
bucketListenerConfig = "listener.json"
)
// GetBucketNotificationHandler - This implementation of the GET
@@ -39,15 +40,29 @@ const (
// not enabled on the bucket, the operation returns an empty
// NotificationConfiguration element.
func (api objectAPIHandlers) GetBucketNotificationHandler(w http.ResponseWriter, r *http.Request) {
-// Validate request authorization.
-if s3Error := checkAuth(r); s3Error != ErrNone {
+objAPI := api.ObjectAPI()
+if objAPI == nil {
+writeErrorResponse(w, r, ErrServerNotInitialized, r.URL.Path)
+return
+}
+if s3Error := checkRequestAuthType(r, "", "", serverConfig.GetRegion()); s3Error != ErrNone {
writeErrorResponse(w, r, s3Error, r.URL.Path)
return
}
vars := mux.Vars(r)
bucket := vars["bucket"]
_, err := objAPI.GetBucketInfo(bucket)
if err != nil {
errorIf(err, "Unable to find bucket info.")
writeErrorResponse(w, r, toAPIErrorCode(err), r.URL.Path)
return
}
// Attempt to successfully load notification config.
-nConfig, err := loadNotificationConfig(bucket, api.ObjectAPI)
+nConfig, err := loadNotificationConfig(bucket, objAPI)
if err != nil && err != errNoSuchNotifications {
errorIf(err, "Unable to read notification configuration.")
writeErrorResponse(w, r, toAPIErrorCode(err), r.URL.Path)
@@ -78,15 +93,21 @@ func (api objectAPIHandlers) GetBucketNotificationHandler(w http.ResponseWriter,
// By default, your bucket has no event notifications configured. That is,
// the notification configuration will be an empty NotificationConfiguration.
func (api objectAPIHandlers) PutBucketNotificationHandler(w http.ResponseWriter, r *http.Request) {
-// Validate request authorization.
-if s3Error := checkAuth(r); s3Error != ErrNone {
+objectAPI := api.ObjectAPI()
+if objectAPI == nil {
+writeErrorResponse(w, r, ErrServerNotInitialized, r.URL.Path)
+return
+}
+if s3Error := checkRequestAuthType(r, "", "", serverConfig.GetRegion()); s3Error != ErrNone {
writeErrorResponse(w, r, s3Error, r.URL.Path)
return
}
vars := mux.Vars(r)
bucket := vars["bucket"]
-_, err := api.ObjectAPI.GetBucketInfo(bucket)
+_, err := objectAPI.GetBucketInfo(bucket)
if err != nil {
errorIf(err, "Unable to find bucket info.")
writeErrorResponse(w, r, toAPIErrorCode(err), r.URL.Path)
@@ -104,11 +125,10 @@ func (api objectAPIHandlers) PutBucketNotificationHandler(w http.ResponseWriter,
// Reads the incoming notification configuration.
var buffer bytes.Buffer
-var bufferSize int64
if r.ContentLength >= 0 {
-bufferSize, err = io.CopyN(&buffer, r.Body, r.ContentLength)
+_, err = io.CopyN(&buffer, r.Body, r.ContentLength)
} else {
-bufferSize, err = io.Copy(&buffer, r.Body)
+_, err = io.Copy(&buffer, r.Body)
}
if err != nil {
errorIf(err, "Unable to read incoming body.")
@@ -131,22 +151,45 @@ func (api objectAPIHandlers) PutBucketNotificationHandler(w http.ResponseWriter,
return
}
-// Proceed to save notification configuration.
-notificationConfigPath := path.Join(bucketConfigPrefix, bucket, bucketNotificationConfig)
-_, err = api.ObjectAPI.PutObject(minioMetaBucket, notificationConfigPath, bufferSize, bytes.NewReader(buffer.Bytes()), nil)
+// Put bucket notification config.
+err = PutBucketNotificationConfig(bucket, &notificationCfg, objectAPI)
if err != nil {
errorIf(err, "Unable to write bucket notification configuration.")
writeErrorResponse(w, r, toAPIErrorCode(err), r.URL.Path)
return
}
-// Set bucket notification config.
-globalEventNotifier.SetBucketNotificationConfig(bucket, &notificationCfg)
// Success.
writeSuccessResponse(w, nil)
}
// PutBucketNotificationConfig - Put a new notification config for a
// bucket (overwrites any previous config) persistently, updates
// global in-memory state, and notifies other nodes in the cluster (if
// any).
func PutBucketNotificationConfig(bucket string, ncfg *notificationConfig, objAPI ObjectLayer) error {
if ncfg == nil {
return errInvalidArgument
}
// Acquire a write lock on bucket before modifying its
// configuration.
bucketLock := globalNSMutex.NewNSLock(bucket, "")
bucketLock.Lock()
// Release lock after notifying peers
defer bucketLock.Unlock()
// persist config to disk
err := persistNotificationConfig(bucket, ncfg, objAPI)
if err != nil {
return fmt.Errorf("Unable to persist Bucket notification config to object layer - config=%v errMsg=%v", *ncfg, err)
}
// All servers (including local) are told to update in-memory config
S3PeersUpdateBucketNotification(bucket, ncfg)
return nil
}
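// Note the ordering above: the bucket lock is held across both the disk write
// and the peer notification (the deferred unlock runs only after
// S3PeersUpdateBucketNotification returns), so peers never observe an update
// that has not yet been persisted.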
// writeNotification marshals notification message before writing to client.
func writeNotification(w http.ResponseWriter, notification map[string][]NotificationEvent) error {
// Invalid response writer.
@@ -157,7 +200,7 @@ func writeNotification(w http.ResponseWriter, notification map[string][]Notifica
if notification == nil {
return errInvalidArgument
}
-// Marshal notification data into XML and write to client.
+// Marshal notification data into JSON and write to client.
notificationBytes, err := json.Marshal(&notification)
if err != nil {
return err
@@ -180,8 +223,6 @@ var crlf = []byte("\r\n")
// for each notification input, otherwise writes keep-alive messages periodically
// to keep the connection active. Each notification message is terminated by a CRLF
// character. Upon any error on the response writer, the loop exits.
-//
-// TODO - do not log for all errors.
func sendBucketNotification(w http.ResponseWriter, arnListenerCh <-chan []NotificationEvent) {
var dummyEvents = map[string][]NotificationEvent{"Records": nil}
// Continuously write to client either timely empty structures
@@ -193,8 +234,9 @@ func sendBucketNotification(w http.ResponseWriter, arnListenerCh <-chan []Notifi
errorIf(err, "Unable to write notification to client.")
return
}
-case <-time.After(5 * time.Second):
+case <-time.After(globalSNSConnAlive): // Wait for global conn active seconds.
if err := writeNotification(w, dummyEvents); err != nil {
+// FIXME - do not log for all errors.
errorIf(err, "Unable to write notification to client.")
return
}
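// On the client side, this stream can be consumed by scanning CRLF-terminated
// JSON records; a minimal sketch (transport and auth omitted, `resp` is a
// hypothetical *http.Response from the listen endpoint):
//
//	scanner := bufio.NewScanner(resp.Body)
//	for scanner.Scan() {
//		var rec map[string][]NotificationEvent
//		if err := json.Unmarshal(scanner.Bytes(), &rec); err != nil {
//			break
//		}
//		if len(rec["Records"]) == 0 {
//			continue // keep-alive message
//		}
//		// Process rec["Records"] here.
//	}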
@@ -204,66 +246,197 @@ func sendBucketNotification(w http.ResponseWriter, arnListenerCh <-chan []Notifi
// ListenBucketNotificationHandler - list bucket notifications.
func (api objectAPIHandlers) ListenBucketNotificationHandler(w http.ResponseWriter, r *http.Request) {
-// Validate request authorization.
-if s3Error := checkAuth(r); s3Error != ErrNone {
+// Validate if bucket exists.
+objAPI := api.ObjectAPI()
+if objAPI == nil {
+writeErrorResponse(w, r, ErrServerNotInitialized, r.URL.Path)
+return
+}
+if s3Error := checkRequestAuthType(r, "", "", serverConfig.GetRegion()); s3Error != ErrNone {
writeErrorResponse(w, r, s3Error, r.URL.Path)
return
}
vars := mux.Vars(r)
bucket := vars["bucket"]
-// Get notification ARN.
-topicARN := r.URL.Query().Get("notificationARN")
-if topicARN == "" {
-writeErrorResponse(w, r, ErrARNNotification, r.URL.Path)
-return
-}
-// Validate if bucket exists.
-_, err := api.ObjectAPI.GetBucketInfo(bucket)
-if err != nil {
-errorIf(err, "Unable to bucket info.")
-writeErrorResponse(w, r, toAPIErrorCode(err), r.URL.Path)
-return
-}
-notificationCfg := globalEventNotifier.GetBucketNotificationConfig(bucket)
-if notificationCfg == nil {
-writeErrorResponse(w, r, ErrARNNotification, r.URL.Path)
-return
-}
-// Set SNS notifications only if special "listen" sns is set in bucket
-// notification configs.
-if !isMinioSNSConfigured(topicARN, notificationCfg.TopicConfigs) {
-writeErrorResponse(w, r, ErrARNNotification, r.URL.Path)
-return
-}
+// Parse listen bucket notification resources.
+prefixes, suffixes, events := getListenBucketNotificationResources(r.URL.Query())
+if err := validateFilterValues(prefixes); err != ErrNone {
+writeErrorResponse(w, r, err, r.URL.Path)
+return
+}
+if err := validateFilterValues(suffixes); err != ErrNone {
+writeErrorResponse(w, r, err, r.URL.Path)
+return
+}
+// Validate all the resource events.
+for _, event := range events {
+if errCode := checkEvent(event); errCode != ErrNone {
+writeErrorResponse(w, r, errCode, r.URL.Path)
+return
+}
+}
+_, err := objAPI.GetBucketInfo(bucket)
+if err != nil {
+errorIf(err, "Unable to get bucket info.")
+writeErrorResponse(w, r, toAPIErrorCode(err), r.URL.Path)
+return
+}
+accountID := fmt.Sprintf("%d", time.Now().UTC().UnixNano())
+accountARN := fmt.Sprintf(
+"%s:%s:%s:%s-%s",
+minioTopic,
+serverConfig.GetRegion(),
+accountID,
+snsTypeMinio,
+globalMinioAddr,
+)
+var filterRules []filterRule
+for _, prefix := range prefixes {
+filterRules = append(filterRules, filterRule{
+Name: "prefix",
+Value: prefix,
+})
+}
+for _, suffix := range suffixes {
+filterRules = append(filterRules, filterRule{
+Name: "suffix",
+Value: suffix,
+})
+}
// Make topic configuration corresponding to this
// ListenBucketNotification request.
topicCfg := &topicConfig{
TopicARN: accountARN,
ServiceConfig: ServiceConfig{
Events: events,
Filter: struct {
Key keyFilter `xml:"S3Key,omitempty" json:"S3Key,omitempty"`
}{
Key: keyFilter{
FilterRules: filterRules,
},
},
ID: "sns-" + accountID,
},
}
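// For illustration: with region "us-east-1", the accountARN built above looks
// roughly like "arn:minio:sns:us-east-1:<unix-nano>:<snsTypeMinio>-<globalMinioAddr>"
// (assuming minioTopic is the "arn:minio:sns:" prefix), so every listener gets
// a unique, server-scoped topic ARN.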
// Setup a listening channel that will receive notifications
// from the RPC handler.
nEventCh := make(chan []NotificationEvent)
defer close(nEventCh)
// Add channel for listener events
if err = globalEventNotifier.AddListenerChan(accountARN, nEventCh); err != nil {
errorIf(err, "Error adding a listener!")
writeErrorResponse(w, r, toAPIErrorCode(err), r.URL.Path)
return
}
// Remove listener channel after the writer has closed or the
// client disconnected.
defer globalEventNotifier.RemoveListenerChan(accountARN)
// Update topic config to bucket config and persist - as soon
// as this call completes, events may start appearing in
// nEventCh.
lc := listenerConfig{
TopicConfig: *topicCfg,
TargetServer: globalMinioAddr,
}
err = AddBucketListenerConfig(bucket, &lc, objAPI)
if err != nil {
writeErrorResponse(w, r, toAPIErrorCode(err), r.URL.Path)
return
}
defer RemoveBucketListenerConfig(bucket, &lc, objAPI)
// Add all common headers.
setCommonHeaders(w)
-// Create a new notification event channel.
-nEventCh := make(chan []NotificationEvent)
-// Close the listener channel.
-defer close(nEventCh)
-// Set sns target.
-globalEventNotifier.SetSNSTarget(topicARN, nEventCh)
-// Remove sns listener after the writer has closed or the client disconnected.
-defer globalEventNotifier.RemoveSNSTarget(topicARN, nEventCh)
// Start sending bucket notifications.
sendBucketNotification(w, nEventCh)
}
-// Removes notification.xml for a given bucket, only used during DeleteBucket.
-func removeNotificationConfig(bucket string, objAPI ObjectLayer) error {
-// Verify bucket is valid.
-if !IsValidBucketName(bucket) {
-return BucketNameInvalid{Bucket: bucket}
-}
-notificationConfigPath := path.Join(bucketConfigPrefix, bucket, bucketNotificationConfig)
-return objAPI.DeleteObject(minioMetaBucket, notificationConfigPath)
-}
+// AddBucketListenerConfig - Updates on disk state of listeners, and
+// updates all peers with the change in listener config.
+func AddBucketListenerConfig(bucket string, lcfg *listenerConfig, objAPI ObjectLayer) error {
+if lcfg == nil {
+return errInvalidArgument
+}
+listenerCfgs := globalEventNotifier.GetBucketListenerConfig(bucket)
+// Add the new listener config and persist to object layer.
+listenerCfgs = append(listenerCfgs, *lcfg)
+// Acquire a write lock on bucket before modifying its
+// configuration.
+bucketLock := globalNSMutex.NewNSLock(bucket, "")
+bucketLock.Lock()
+// Release lock after notifying peers.
+defer bucketLock.Unlock()
+// Update persistent config if dist XL.
+if globalIsDistXL {
+err := persistListenerConfig(bucket, listenerCfgs, objAPI)
+if err != nil {
+errorIf(err, "Error persisting listener config when adding a listener.")
+return err
+}
+}
+// Persistence success - now update in-memory globals on all
+// peers (including local).
+S3PeersUpdateBucketListener(bucket, listenerCfgs)
+return nil
+}
// RemoveBucketListenerConfig - removes a given bucket listener config.
func RemoveBucketListenerConfig(bucket string, lcfg *listenerConfig, objAPI ObjectLayer) {
listenerCfgs := globalEventNotifier.GetBucketListenerConfig(bucket)
// Remove the listener with matching ARN - if not found, ignore and exit.
var updatedLcfgs []listenerConfig
found := false
for k, configuredLcfg := range listenerCfgs {
if configuredLcfg.TopicConfig.TopicARN == lcfg.TopicConfig.TopicARN {
updatedLcfgs = append(listenerCfgs[:k],
listenerCfgs[k+1:]...)
found = true
break
}
}
if !found {
return
}
// Acquire a write lock on bucket before modifying its
// configuration.
bucketLock := globalNSMutex.NewNSLock(bucket, "")
bucketLock.Lock()
// Release lock after notifying peers
defer bucketLock.Unlock()
// update persistent config if dist XL
if globalIsDistXL {
err := persistListenerConfig(bucket, updatedLcfgs, objAPI)
if err != nil {
errorIf(err, "Error persisting listener config when removing a listener.")
return
}
}
// persistence success - now update in-memory globals on all
// peers (including local)
S3PeersUpdateBucketListener(bucket, updatedLcfgs)
}


@@ -1,10 +1,31 @@
/*
* Minio Cloud Storage, (C) 2016 Minio, Inc.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package cmd
import (
"bufio"
"bytes"
"encoding/json"
"encoding/xml"
"io"
"io/ioutil"
"net/http"
"net/http/httptest"
"reflect"
"testing"
)
@@ -19,7 +40,7 @@ func (f *flushWriter) Write(b []byte) (n int, err error) { return f.Writer.Write
func (f *flushWriter) Header() http.Header { return http.Header{} }
func (f *flushWriter) WriteHeader(code int) {}
-func newFlushWriter(writer io.Writer) *flushWriter {
+func newFlushWriter(writer io.Writer) http.ResponseWriter {
return &flushWriter{writer}
}
@@ -35,7 +56,7 @@ func TestWriteNotification(t *testing.T) {
var buffer bytes.Buffer
// Collection of test cases for each event writer.
testCases := []struct {
-writer *flushWriter
+writer http.ResponseWriter
event map[string][]NotificationEvent
err error
}{
@@ -88,3 +109,215 @@ func TestWriteNotification(t *testing.T) {
}
}
}
func TestSendBucketNotification(t *testing.T) {
// Initialize a new test config.
root, err := newTestConfig("us-east-1")
if err != nil {
t.Fatalf("Unable to initialize test config %s", err)
}
defer removeAll(root)
eventCh := make(chan []NotificationEvent)
// Create a Pipe with flushWriter on the write-side and bufio.Scanner
// on the read-side to receive notifications over the listen channel in a
// synchronized manner.
pr, pw := io.Pipe()
fw := newFlushWriter(pw)
scanner := bufio.NewScanner(pr)
// Start a go-routine to wait for notification events.
go func(listenerCh <-chan []NotificationEvent) {
sendBucketNotification(fw, listenerCh)
}(eventCh)
// Construct notification events to be passed on the events channel.
var events []NotificationEvent
evTypes := []EventName{
ObjectCreatedPut,
ObjectCreatedPost,
ObjectCreatedCopy,
ObjectCreatedCompleteMultipartUpload,
}
for _, evType := range evTypes {
events = append(events, newNotificationEvent(eventData{
Type: evType,
}))
}
// Send notification events to the channel on which sendBucketNotification
// is waiting on.
eventCh <- events
// Read from the pipe connected to the ResponseWriter.
scanner.Scan()
notificationBytes := scanner.Bytes()
// Close the read-end and send an empty notification event on the channel
// to signal sendBucketNotification to terminate.
pr.Close()
eventCh <- []NotificationEvent{}
close(eventCh)
// Check if the notifications received are the same as those sent over the channel.
var notifications map[string][]NotificationEvent
err = json.Unmarshal(notificationBytes, &notifications)
if err != nil {
t.Fatal("Failed to Unmarshal notification")
}
records := notifications["Records"]
for i, rec := range records {
if rec.EventName == evTypes[i].String() {
continue
}
t.Errorf("Failed to receive %d event %s", i, evTypes[i].String())
}
}
func TestGetBucketNotificationHandler(t *testing.T) {
ExecObjectLayerAPITest(t, testGetBucketNotificationHandler, []string{
"GetBucketNotification",
})
}
func testGetBucketNotificationHandler(obj ObjectLayer, instanceType, bucketName string, apiRouter http.Handler,
credentials credential, t *testing.T) {
// declare sample configs
filterRules := []filterRule{
{
Name: "prefix",
Value: "minio",
},
{
Name: "suffix",
Value: "*.jpg",
},
}
sampleSvcCfg := ServiceConfig{
[]string{"s3:ObjectRemoved:*", "s3:ObjectCreated:*"},
filterStruct{
keyFilter{filterRules},
},
"1",
}
sampleNotifCfg := notificationConfig{
QueueConfigs: []queueConfig{
{
ServiceConfig: sampleSvcCfg,
QueueARN: "testqARN",
},
},
}
rec := httptest.NewRecorder()
req, err := newTestSignedRequestV4("GET", getGetBucketNotificationURL("", bucketName),
0, nil, credentials.AccessKeyID, credentials.SecretAccessKey)
if err != nil {
t.Fatalf("%s: Failed to create HTTP testRequest for ListenBucketNotification: <ERROR> %v", instanceType, err)
}
apiRouter.ServeHTTP(rec, req)
if rec.Code != http.StatusOK {
t.Fatalf("Unexpected http response %d", rec.Code)
}
if err = persistNotificationConfig(bucketName, &sampleNotifCfg, obj); err != nil {
t.Fatalf("Unable to save notification config %s", err)
}
rec = httptest.NewRecorder()
req, err = newTestSignedRequestV4("GET", getGetBucketNotificationURL("", bucketName),
0, nil, credentials.AccessKeyID, credentials.SecretAccessKey)
if err != nil {
t.Fatalf("%s: Failed to create HTTP testRequest for ListenBucketNotification: <ERROR> %v", instanceType, err)
}
apiRouter.ServeHTTP(rec, req)
if rec.Code != http.StatusOK {
t.Fatalf("Unexpected http response %d", rec.Code)
}
notificationBytes, err := ioutil.ReadAll(rec.Body)
if err != nil {
t.Fatalf("Unexpected error %s", err)
}
nConfig := notificationConfig{}
if err = xml.Unmarshal(notificationBytes, &nConfig); err != nil {
t.Fatalf("Unexpected XML received %s", err)
}
if sampleNotifCfg.QueueConfigs[0].QueueARN != nConfig.QueueConfigs[0].QueueARN {
t.Fatalf("Uexpected notification configs expected %#v, got %#v", sampleNotifCfg, nConfig)
}
if !reflect.DeepEqual(sampleNotifCfg.QueueConfigs[0].Events, nConfig.QueueConfigs[0].Events) {
t.Fatalf("Uexpected notification configs expected %#v, got %#v", sampleNotifCfg, nConfig)
}
}
func TestListenBucketNotificationNilHandler(t *testing.T) {
ExecObjectLayerAPITest(t, testListenBucketNotificationNilHandler, []string{
"ListenBucketNotification",
"PutObject",
})
}
func testListenBucketNotificationNilHandler(obj ObjectLayer, instanceType, bucketName string, apiRouter http.Handler,
credentials credential, t *testing.T) {
// get random bucket name.
randBucket := getRandomBucketName()
// Nil Object layer
nilAPIRouter := initTestAPIEndPoints(nil, []string{
"ListenBucketNotification",
})
testRec := httptest.NewRecorder()
testReq, tErr := newTestSignedRequestV4("GET",
getListenBucketNotificationURL("", randBucket, []string{},
[]string{"*.jpg"}, []string{
"s3:ObjectCreated:*",
"s3:ObjectRemoved:*",
}), 0, nil, credentials.AccessKeyID, credentials.SecretAccessKey)
if tErr != nil {
t.Fatalf("%s: Failed to create HTTP testRequest for ListenBucketNotification: <ERROR> %v", instanceType, tErr)
}
nilAPIRouter.ServeHTTP(testRec, testReq)
if testRec.Code != http.StatusServiceUnavailable {
t.Fatalf("Test 1: %s: expected HTTP code %d, but received %d: <ERROR> %v",
instanceType, http.StatusServiceUnavailable, testRec.Code, tErr)
}
}
func testRemoveNotificationConfig(obj ObjectLayer, instanceType, bucketName string, apiRouter http.Handler,
credentials credential, t *testing.T) {
invalidBucket := "Invalid\\Bucket"
// Use the bucket created for this test.
randBucket := bucketName
sampleNotificationBytes := []byte("<NotificationConfiguration><TopicConfiguration>" +
"<Event>s3:ObjectCreated:*</Event><Event>s3:ObjectRemoved:*</Event><Filter>" +
"<S3Key></S3Key></Filter><Id></Id><Topic>arn:minio:sns:us-east-1:1474332374:listen</Topic>" +
"</TopicConfiguration></NotificationConfiguration>")
// Set sample bucket notification on randBucket.
testRec := httptest.NewRecorder()
testReq, tErr := newTestSignedRequestV4("PUT", getPutBucketNotificationURL("", randBucket),
int64(len(sampleNotificationBytes)), bytes.NewReader(sampleNotificationBytes),
credentials.AccessKeyID, credentials.SecretAccessKey)
if tErr != nil {
t.Fatalf("%s: Failed to create HTTP testRequest for PutBucketNotification: <ERROR> %v", instanceType, tErr)
}
apiRouter.ServeHTTP(testRec, testReq)
testCases := []struct {
bucketName string
expectedErr error
}{
{invalidBucket, BucketNameInvalid{Bucket: invalidBucket}},
{randBucket, nil},
}
for i, test := range testCases {
tErr := removeNotificationConfig(test.bucketName, obj)
if tErr != test.expectedErr {
t.Errorf("Test %d: %s expected error %v, but received %v", i+1, instanceType, test.expectedErr, tErr)
}
}
}
func TestRemoveNotificationConfig(t *testing.T) {
ExecObjectLayerAPITest(t, testRemoveNotificationConfig, []string{
"PutBucketNotification",
"ListenBucketNotification",
})
}


@@ -126,55 +126,27 @@ func checkQueueARN(queueARN string) APIErrorCode {
return checkARN(queueARN, minioSqs)
}
// checkTopicARN - check if the topic arn is valid.
func checkTopicARN(topicARN string) APIErrorCode {
return checkARN(topicARN, minioTopic)
}
// Returns true if the topicARN is for an Minio sns listen type.
func isMinioSNS(topicARN arnTopic) bool {
return strings.HasSuffix(topicARN.Type, snsTypeMinio)
}
// isMinioSNSConfigured - verifies if one topic ARN is valid and is enabled.
func isMinioSNSConfigured(topicARN string, topicConfigs []topicConfig) bool {
for _, topicConfig := range topicConfigs {
// Validate if topic ARN is already enabled.
if topicARN == topicConfig.TopicARN {
return true
}
}
return false
}
// Validate if we recognize the queue type.
func isValidQueue(sqsARN arnSQS) bool {
amqpQ := isAMQPQueue(sqsARN) // Is amqp queue?.
elasticQ := isElasticQueue(sqsARN) // Is elastic queue?.
redisQ := isRedisQueue(sqsARN) // Is redis queue?.
return amqpQ || elasticQ || redisQ
}
// Validate if we recognize the topic type.
func isValidTopic(topicARN arnTopic) bool {
return isMinioSNS(topicARN) // Is minio topic?.
}
// Validates account id for input queue ARN.
func isValidQueueID(queueARN string) bool {
// Unmarshals QueueARN into structured object.
sqsARN := unmarshalSqsARN(queueARN)
-// AMQP queue.
-if isAMQPQueue(sqsARN) {
+// Is Queue identifier valid?
+if isAMQPQueue(sqsARN) { // AMQP queue.
amqpN := serverConfig.GetAMQPNotifyByID(sqsARN.AccountID)
return amqpN.Enable && amqpN.URL != ""
} else if isNATSQueue(sqsARN) {
natsN := serverConfig.GetNATSNotifyByID(sqsARN.AccountID)
return natsN.Enable && natsN.Address != ""
} else if isElasticQueue(sqsARN) { // Elastic queue.
elasticN := serverConfig.GetElasticSearchNotifyByID(sqsARN.AccountID)
return elasticN.Enable && elasticN.URL != ""
} else if isRedisQueue(sqsARN) { // Redis queue.
redisN := serverConfig.GetRedisNotifyByID(sqsARN.AccountID)
return redisN.Enable && redisN.Addr != ""
} else if isPostgreSQLQueue(sqsARN) {
pgN := serverConfig.GetPostgreSQLNotifyByID(sqsARN.AccountID)
// Postgres can work with only default conn. info.
return pgN.Enable
}
return false
}
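// For example, a queueARN of "arn:minio:sqs:us-east-1:1:redis" is valid only
// when the server config has a redis notify target with ID "1" that is
// enabled and carries a non-empty address (values here are illustrative).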
@@ -186,13 +158,6 @@ func checkQueueConfig(qConfig queueConfig) APIErrorCode {
return s3Error
}
// Unmarshals QueueARN into structured object.
sqsARN := unmarshalSqsARN(qConfig.QueueARN)
// Validate if sqsARN requested any of the known supported queues.
if !isValidQueue(sqsARN) {
return ErrARNNotification
}
// Validate if the account ID is correct.
if !isValidQueueID(qConfig.QueueARN) {
return ErrARNNotification
@@ -212,34 +177,6 @@ func checkQueueConfig(qConfig queueConfig) APIErrorCode {
return ErrNone
}
// Check - validates queue configuration and returns error if any.
func checkTopicConfig(tConfig topicConfig) APIErrorCode {
// Check queue arn is valid.
if s3Error := checkTopicARN(tConfig.TopicARN); s3Error != ErrNone {
return s3Error
}
// Unmarshals QueueARN into structured object.
topicARN := unmarshalTopicARN(tConfig.TopicARN)
// Validate if topicARN requested any of the known supported queues.
if !isValidTopic(topicARN) {
return ErrARNNotification
}
// Check if valid events are set in queue config.
if s3Error := checkEvents(tConfig.Events); s3Error != ErrNone {
return s3Error
}
// Check if valid filters are set in queue config.
if s3Error := checkFilterRules(tConfig.Filter.Key.FilterRules); s3Error != ErrNone {
return s3Error
}
// Success.
return ErrNone
}
// Validates all incoming queue configs, checkQueueConfig validates if the
// input fields for each queues is not malformed and has valid configuration
// information. If validation fails bucket notifications are not enabled.
@@ -253,53 +190,19 @@ func validateQueueConfigs(queueConfigs []queueConfig) APIErrorCode {
return ErrNone
}
// Validates all incoming topic configs, checkTopicConfig validates if the
// input fields for each queues is not malformed and has valid configuration
// information. If validation fails bucket notifications are not enabled.
func validateTopicConfigs(topicConfigs []topicConfig) APIErrorCode {
for _, tConfig := range topicConfigs {
if s3Error := checkTopicConfig(tConfig); s3Error != ErrNone {
return s3Error
}
}
// Success.
return ErrNone
}
// Check all the queue configs for any duplicates.
func checkDuplicateQueueConfigs(configs []queueConfig) APIErrorCode {
-configMaps := make(map[string]int)
+var queueConfigARNS []string
// Navigate through each configs and count the entries.
for _, config := range configs {
-configMaps[config.QueueARN]++
+queueConfigARNS = append(queueConfigARNS, config.QueueARN)
}
-// Validate if there are any duplicate counts.
-for _, count := range configMaps {
-if count != 1 {
-return ErrOverlappingConfigs
-}
-}
-// Success.
-return ErrNone
-}
-// Check all the topic configs for any duplicates.
-func checkDuplicateTopicConfigs(configs []topicConfig) APIErrorCode {
-configMaps := make(map[string]int)
-// Navigate through each configs and count the entries.
-for _, config := range configs {
-configMaps[config.TopicARN]++
-}
-// Validate if there are any duplicate counts.
-for _, count := range configMaps {
-if count != 1 {
-return ErrOverlappingConfigs
-}
-}
+// Check if there are any duplicate counts.
+if err := checkDuplicateStrings(queueConfigARNS); err != nil {
+errorIf(err, "Invalid queue configs found.")
+return ErrOverlappingConfigs
+}
// Success.
@@ -314,46 +217,25 @@ func validateNotificationConfig(nConfig notificationConfig) APIErrorCode {
if s3Error := validateQueueConfigs(nConfig.QueueConfigs); s3Error != ErrNone {
return s3Error
}
-// Validate all topic configs.
-if s3Error := validateTopicConfigs(nConfig.TopicConfigs); s3Error != ErrNone {
-return s3Error
-}
// Check for duplicate queue configs.
-if s3Error := checkDuplicateQueueConfigs(nConfig.QueueConfigs); s3Error != ErrNone {
-return s3Error
-}
-// Check for duplicate topic configs.
-if s3Error := checkDuplicateTopicConfigs(nConfig.TopicConfigs); s3Error != ErrNone {
-return s3Error
-}
+if len(nConfig.QueueConfigs) > 1 {
+if s3Error := checkDuplicateQueueConfigs(nConfig.QueueConfigs); s3Error != ErrNone {
+return s3Error
+}
+}
// Add validation for other configurations.
return ErrNone
}
// Unmarshals input value of AWS ARN format into minioTopic object.
// Returned value represents minio topic type, currently supported are
// - listen
func unmarshalTopicARN(topicARN string) arnTopic {
topic := arnTopic{}
if !strings.HasPrefix(topicARN, minioTopic+serverConfig.GetRegion()+":") {
return topic
}
topicType := strings.TrimPrefix(topicARN, minioTopic+serverConfig.GetRegion()+":")
switch {
case strings.HasSuffix(topicType, snsTypeMinio):
topic.Type = snsTypeMinio
} // Add more topic here.
topic.AccountID = strings.TrimSuffix(topicType, ":"+topic.Type)
return topic
}
// Unmarshals input value of AWS ARN format into minioSqs object.
// Returned value represents minio sqs types, currently supported are
// - amqp
// - nats
// - elasticsearch
// - redis
// - postgresql
func unmarshalSqsARN(queueARN string) (mSqs arnSQS) {
mSqs = arnSQS{}
if !strings.HasPrefix(queueARN, minioSqs+serverConfig.GetRegion()+":") {
@@ -363,10 +245,14 @@ func unmarshalSqsARN(queueARN string) (mSqs arnSQS) {
switch {
case strings.HasSuffix(sqsType, queueTypeAMQP):
mSqs.Type = queueTypeAMQP
case strings.HasSuffix(sqsType, queueTypeNATS):
mSqs.Type = queueTypeNATS
case strings.HasSuffix(sqsType, queueTypeElastic):
mSqs.Type = queueTypeElastic
case strings.HasSuffix(sqsType, queueTypeRedis):
mSqs.Type = queueTypeRedis
case strings.HasSuffix(sqsType, queueTypePostgreSQL):
mSqs.Type = queueTypePostgreSQL
} // Add more queues here.
mSqs.AccountID = strings.TrimSuffix(sqsType, ":"+mSqs.Type)
return mSqs
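// Usage sketch: with the configured region "us-east-1",
// unmarshalSqsARN("arn:minio:sqs:us-east-1:1:redis") yields
// arnSQS{Type: queueTypeRedis, AccountID: "1"}, while an ARN with a
// mismatched region prefix yields a zero-value arnSQS.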


@@ -16,7 +16,126 @@
package cmd
import "testing"
import (
"strings"
"testing"
)
// Test validates for duplicate configs.
func TestCheckDuplicateConfigs(t *testing.T) {
testCases := []struct {
qConfigs []queueConfig
expectedErrCode APIErrorCode
}{
// Error for duplicate queue configs.
{
qConfigs: []queueConfig{
{
QueueARN: "arn:minio:sqs:us-east-1:1:redis",
},
{
QueueARN: "arn:minio:sqs:us-east-1:1:redis",
},
},
expectedErrCode: ErrOverlappingConfigs,
},
// Valid queue configs.
{
qConfigs: []queueConfig{
{
QueueARN: "arn:minio:sqs:us-east-1:1:redis",
},
},
expectedErrCode: ErrNone,
},
}
// Validate for duplicate queue configs.
for i, testCase := range testCases {
errCode := checkDuplicateQueueConfigs(testCase.qConfigs)
if errCode != testCase.expectedErrCode {
t.Errorf("Test %d: Expected %d, got %d", i+1, testCase.expectedErrCode, errCode)
}
}
}
// Tests for validating filter rules.
func TestCheckFilterRules(t *testing.T) {
testCases := []struct {
rules []filterRule
expectedErrCode APIErrorCode
}{
// Valid prefix and suffix values.
{
rules: []filterRule{
{
Name: "prefix",
Value: "test/test1",
},
{
Name: "suffix",
Value: ".jpg",
},
},
expectedErrCode: ErrNone,
},
// Invalid filter name.
{
rules: []filterRule{
{
Name: "unknown",
Value: "test/test1",
},
},
expectedErrCode: ErrFilterNameInvalid,
},
// Cannot have duplicate prefixes.
{
rules: []filterRule{
{
Name: "prefix",
Value: "test/test1",
},
{
Name: "prefix",
Value: "test/test1",
},
},
expectedErrCode: ErrFilterNamePrefix,
},
// Cannot have duplicate suffixes.
{
rules: []filterRule{
{
Name: "suffix",
Value: ".jpg",
},
{
Name: "suffix",
Value: ".txt",
},
},
expectedErrCode: ErrFilterNameSuffix,
},
// Filter value cannot be longer than 1024 characters.
{
rules: []filterRule{
{
Name: "prefix",
Value: strings.Repeat("a", 1025),
},
},
expectedErrCode: ErrFilterValueInvalid,
},
}
for i, testCase := range testCases {
errCode := checkFilterRules(testCase.rules)
if errCode != testCase.expectedErrCode {
t.Errorf("Test %d: Expected %d, got %d", i+1, testCase.expectedErrCode, errCode)
}
}
}
// Tests filter name validation.
func TestIsValidFilterName(t *testing.T) {
@@ -97,79 +216,6 @@ func TestValidEvents(t *testing.T) {
}
}
// Tests topic arn validation.
func TestTopicARN(t *testing.T) {
rootPath, err := newTestConfig("us-east-1")
if err != nil {
t.Fatalf("unable initialize config file, %s", err)
}
defer removeAll(rootPath)
testCases := []struct {
topicARN string
errCode APIErrorCode
}{
// Valid minio topic with '1' account id.
{
topicARN: "arn:minio:sns:us-east-1:1:minio",
errCode: ErrNone,
},
// Valid minio topic with '10' account id.
{
topicARN: "arn:minio:sns:us-east-1:10:minio",
errCode: ErrNone,
},
// Invalid empty topic arn.
{
topicARN: "",
errCode: ErrARNNotification,
},
// Invalid notification service type.
{
topicARN: "arn:minio:sqs:us-east-1:1:listen",
errCode: ErrARNNotification,
},
// Invalid region 'us-west-1' in queue arn.
{
topicARN: "arn:minio:sns:us-west-1:1:listen",
errCode: ErrRegionNotification,
},
// Empty topic account id is invalid.
{
topicARN: "arn:minio:sns:us-east-1::listen",
errCode: ErrARNNotification,
},
// Empty topic account name is invalid.
{
topicARN: "arn:minio:sns:us-east-1:10:",
errCode: ErrARNNotification,
},
// Empty topic account id and account name is invalid.
{
topicARN: "arn:minio:sns:us-east-1::",
errCode: ErrARNNotification,
},
// Missing topic id and separator missing at the end in topic arn.
{
topicARN: "arn:minio:sns:us-east-1:listen",
errCode: ErrARNNotification,
},
// Missing topic id and empty string at the end in topic arn.
{
topicARN: "arn:minio:sns:us-east-1:",
errCode: ErrARNNotification,
},
}
// Validate all topics.
for i, testCase := range testCases {
errCode := checkTopicARN(testCase.topicARN)
if testCase.errCode != errCode {
t.Errorf("Test %d: Expected \"%d\", got \"%d\"", i+1, testCase.errCode, errCode)
}
}
}
// Tests queue arn validation.
func TestQueueARN(t *testing.T) {
rootPath, err := newTestConfig("us-east-1")
@@ -248,55 +294,8 @@ func TestQueueARN(t *testing.T) {
}
}
// Test unmarshal topic arn.
func TestUnmarshalTopicARN(t *testing.T) {
rootPath, err := newTestConfig("us-east-1")
if err != nil {
t.Fatalf("unable initialize config file, %s", err)
}
defer removeAll(rootPath)
testCases := []struct {
topicARN string
Type string
}{
// Valid minio topic arn.
{
topicARN: "arn:minio:sns:us-east-1:1:listen",
Type: "listen",
},
// Invalid empty topic arn.
{
topicARN: "",
Type: "",
},
// Invalid region 'us-west-1' in topic arn.
{
topicARN: "arn:minio:sns:us-west-1:1:listen",
Type: "",
},
// Partial topic arn.
{
topicARN: "arn:minio:sns:",
Type: "",
},
// Invalid topic service value.
{
topicARN: "arn:minio:sns:us-east-1:1:*",
Type: "",
},
}
for i, testCase := range testCases {
topic := unmarshalTopicARN(testCase.topicARN)
if testCase.Type != topic.Type {
t.Errorf("Test %d: Expected \"%s\", got \"%s\"", i+1, testCase.Type, topic.Type)
}
}
}
// Test unmarshal queue arn.
func TestUnmarshalSqsARN(t *testing.T) {
func TestUnmarshalSQSARN(t *testing.T) {
rootPath, err := newTestConfig("us-east-1")
if err != nil {
t.Fatalf("unable initialize config file, %s", err)


@@ -17,19 +17,19 @@
package cmd
import (
"bytes"
"fmt"
"io"
"io/ioutil"
"net/http"
humanize "github.com/dustin/go-humanize"
mux "github.com/gorilla/mux"
"github.com/minio/minio-go/pkg/set"
"github.com/minio/minio/pkg/wildcard"
)
// maximum supported access policy size.
-const maxAccessPolicySize = 20 * 1024 // 20KiB.
+const maxAccessPolicySize = 20 * humanize.KiByte
// Verify if a given action is valid for the url path based on the
// existing bucket access policy.
@@ -50,17 +50,10 @@ func bucketPolicyEvalStatements(action string, resource string, conditions map[s
// Verify if action, resource and conditions match input policy statement.
func bucketPolicyMatchStatement(action string, resource string, conditions map[string]set.StringSet, statement policyStatement) bool {
-// Verify if action matches.
-if bucketPolicyActionMatch(action, statement) {
-// Verify if resource matches.
-if bucketPolicyResourceMatch(resource, statement) {
-// Verify if condition matches.
-if bucketPolicyConditionMatch(conditions, statement) {
-return true
-}
-}
-}
-return false
+// Verify if action, resource and condition match in given statement.
+return (bucketPolicyActionMatch(action, statement) &&
+bucketPolicyResourceMatch(resource, statement) &&
+bucketPolicyConditionMatch(conditions, statement))
}
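// Note: the combined boolean above preserves the behavior of the earlier
// nested ifs, since && evaluates left to right and short-circuits on the
// first false.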
// Verify if given action matches with policy statement.
@@ -126,18 +119,26 @@ func bucketPolicyConditionMatch(conditions map[string]set.StringSet, statement p
// This implementation of the PUT operation uses the policy
// subresource to add to or replace a policy on a bucket
func (api objectAPIHandlers) PutBucketPolicyHandler(w http.ResponseWriter, r *http.Request) {
objAPI := api.ObjectAPI()
if objAPI == nil {
writeErrorResponse(w, r, ErrServerNotInitialized, r.URL.Path)
return
}
if s3Error := checkRequestAuthType(r, "", "", serverConfig.GetRegion()); s3Error != ErrNone {
writeErrorResponse(w, r, s3Error, r.URL.Path)
return
}
vars := mux.Vars(r)
bucket := vars["bucket"]
-switch getRequestAuthType(r) {
-default:
-// For all unknown auth types return error.
-writeErrorResponse(w, r, ErrAccessDenied, r.URL.Path)
-return
-case authTypePresigned, authTypeSigned:
-if s3Error := isReqAuthenticated(r); s3Error != ErrNone {
-writeErrorResponse(w, r, s3Error, r.URL.Path)
-return
-}
-}
+// Before proceeding validate if bucket exists.
+_, err := objAPI.GetBucketInfo(bucket)
+if err != nil {
+errorIf(err, "Unable to find bucket info.")
+writeErrorResponse(w, r, toAPIErrorCode(err), r.URL.Path)
+return
+}
// If Content-Length is unknown or zero, deny the
@@ -164,36 +165,13 @@ func (api objectAPIHandlers) PutBucketPolicyHandler(w http.ResponseWriter, r *ht
writeErrorResponse(w, r, toAPIErrorCode(err), r.URL.Path)
return
}
-// Parse bucket policy.
-var policy = &bucketPolicy{}
-err = parseBucketPolicy(bytes.NewReader(policyBytes), policy)
-if err != nil {
-errorIf(err, "Unable to parse bucket policy.")
-writeErrorResponse(w, r, ErrInvalidPolicyDocument, r.URL.Path)
-return
-}
-// Parse check bucket policy.
-if s3Error := checkBucketPolicyResources(bucket, policy); s3Error != ErrNone {
+// Parse validate and save bucket policy.
+if s3Error := parseAndPersistBucketPolicy(bucket, policyBytes, objAPI); s3Error != ErrNone {
writeErrorResponse(w, r, s3Error, r.URL.Path)
return
}
-// Save bucket policy.
-if err = writeBucketPolicy(bucket, api.ObjectAPI, bytes.NewReader(policyBytes), int64(len(policyBytes))); err != nil {
-errorIf(err, "Unable to write bucket policy.")
-switch err.(type) {
-case BucketNameInvalid:
-writeErrorResponse(w, r, ErrInvalidBucketName, r.URL.Path)
-default:
-writeErrorResponse(w, r, ErrInternalError, r.URL.Path)
-}
-return
-}
-// Set the bucket policy in memory.
-globalBucketPolicies.SetBucketPolicy(bucket, policy)
// Success.
writeSuccessNoContent(w)
}
@@ -203,27 +181,32 @@ func (api objectAPIHandlers) PutBucketPolicyHandler(w http.ResponseWriter, r *ht
// This implementation of the DELETE operation uses the policy
// subresource to add to remove a policy on a bucket.
func (api objectAPIHandlers) DeleteBucketPolicyHandler(w http.ResponseWriter, r *http.Request) {
objAPI := api.ObjectAPI()
if objAPI == nil {
writeErrorResponse(w, r, ErrServerNotInitialized, r.URL.Path)
return
}
if s3Error := checkRequestAuthType(r, "", "", serverConfig.GetRegion()); s3Error != ErrNone {
writeErrorResponse(w, r, s3Error, r.URL.Path)
return
}
vars := mux.Vars(r)
bucket := vars["bucket"]
-switch getRequestAuthType(r) {
-default:
-// For all unknown auth types return error.
-writeErrorResponse(w, r, ErrAccessDenied, r.URL.Path)
-return
-case authTypePresigned, authTypeSigned:
-if s3Error := isReqAuthenticated(r); s3Error != ErrNone {
-writeErrorResponse(w, r, s3Error, r.URL.Path)
-return
-}
-}
+// Before proceeding validate if bucket exists.
+_, err := objAPI.GetBucketInfo(bucket)
+if err != nil {
+errorIf(err, "Unable to find bucket info.")
+writeErrorResponse(w, r, toAPIErrorCode(err), r.URL.Path)
+return
+}
-// Delete bucket access policy.
-if err := removeBucketPolicy(bucket, api.ObjectAPI); err != nil {
-errorIf(err, "Unable to remove bucket policy.")
+// Delete bucket access policy, by passing an empty policy
+// struct.
+if err := persistAndNotifyBucketPolicyChange(bucket, policyChange{true, nil}, objAPI); err != nil {
switch err.(type) {
case BucketNameInvalid:
writeErrorResponse(w, r, ErrInvalidBucketName, r.URL.Path)
case BucketPolicyNotFound:
writeErrorResponse(w, r, ErrNoSuchBucketPolicy, r.URL.Path)
default:
@@ -232,9 +215,6 @@ func (api objectAPIHandlers) DeleteBucketPolicyHandler(w http.ResponseWriter, r
return
}
-// Remove bucket policy.
-globalBucketPolicies.RemoveBucketPolicy(bucket)
// Success.
writeSuccessNoContent(w)
}
@@ -244,28 +224,33 @@ func (api objectAPIHandlers) DeleteBucketPolicyHandler(w http.ResponseWriter, r
// This operation uses the policy
// subresource to return the policy of a specified bucket.
func (api objectAPIHandlers) GetBucketPolicyHandler(w http.ResponseWriter, r *http.Request) {
objAPI := api.ObjectAPI()
if objAPI == nil {
writeErrorResponse(w, r, ErrServerNotInitialized, r.URL.Path)
return
}
if s3Error := checkRequestAuthType(r, "", "", serverConfig.GetRegion()); s3Error != ErrNone {
writeErrorResponse(w, r, s3Error, r.URL.Path)
return
}
vars := mux.Vars(r)
bucket := vars["bucket"]
-switch getRequestAuthType(r) {
-default:
-// For all unknown auth types return error.
-writeErrorResponse(w, r, ErrAccessDenied, r.URL.Path)
-return
-case authTypePresigned, authTypeSigned:
-if s3Error := isReqAuthenticated(r); s3Error != ErrNone {
-writeErrorResponse(w, r, s3Error, r.URL.Path)
-return
-}
-}
+// Before proceeding validate if bucket exists.
+_, err := objAPI.GetBucketInfo(bucket)
+if err != nil {
+errorIf(err, "Unable to find bucket info.")
+writeErrorResponse(w, r, toAPIErrorCode(err), r.URL.Path)
+return
+}
// Read bucket access policy.
-policy, err := readBucketPolicy(bucket, api.ObjectAPI)
+policy, err := readBucketPolicy(bucket, objAPI)
if err != nil {
errorIf(err, "Unable to read bucket policy.")
switch err.(type) {
case BucketNameInvalid:
writeErrorResponse(w, r, ErrInvalidBucketName, r.URL.Path)
case BucketPolicyNotFound:
writeErrorResponse(w, r, ErrNoSuchBucketPolicy, r.URL.Path)
default:


@@ -1,5 +1,5 @@
/*
* Minio Cloud Storage, (C) 2015, 2016 Minio, Inc.
* Minio Cloud Storage, (C) 2016 Minio, Inc.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
@@ -19,6 +19,7 @@ package cmd
import (
"bytes"
"fmt"
"io"
"io/ioutil"
"net/http"
"net/http/httptest"
@@ -60,7 +61,7 @@ func TestBucketPolicyResourceMatch(t *testing.T) {
// Policy with resource ending with bucket/oo* should allow access to bucket/ootput.txt.
{generateResource("minio-bucket", "ootput.txt"), generateStatement(fmt.Sprintf("%s%s", AWSResourcePrefix, "minio-bucket"+"/oo*")), true},
// Test case - 7.
// Policy with resource ending with bucket/oo* allows access to all subfolders starting with "oo" inside given bucket.
// Policy with resource ending with bucket/oo* allows access to all sub-dirs starting with "oo" inside given bucket.
{generateResource("minio-bucket", "oop-bucket/my-file"), generateStatement(fmt.Sprintf("%s%s", AWSResourcePrefix, "minio-bucket"+"/oo*")), true},
// Test case - 8.
{generateResource("minio-bucket", "Asia/India/1.pjg"), generateStatement(fmt.Sprintf("%s%s", AWSResourcePrefix, "minio-bucket"+"/Asia/Japan/*")), false},
@@ -241,34 +242,18 @@ func TestBucketPolicyActionMatch(t *testing.T) {
// Wrapper for calling Put Bucket Policy HTTP handler tests for both XL multiple disks and single node setup.
func TestPutBucketPolicyHandler(t *testing.T) {
ExecObjectLayerTest(t, testPutBucketPolicyHandler)
ExecObjectLayerAPITest(t, testPutBucketPolicyHandler, []string{"PutBucketPolicy"})
}
// testPutBucketPolicyHandler - Test for Bucket policy end point.
// TODO: Add exhaustive cases with various combinations of statement fields.
func testPutBucketPolicyHandler(obj ObjectLayer, instanceType string, t TestErrHandler) {
func testPutBucketPolicyHandler(obj ObjectLayer, instanceType, bucketName string, apiRouter http.Handler,
credentials credential, t *testing.T) {
initBucketPolicies(obj)
// get random bucket name.
bucketName := getRandomBucketName()
// Create bucket.
err := obj.MakeBucket(bucketName)
if err != nil {
// failed to create new bucket, abort.
t.Fatalf("%s : %s", instanceType, err)
bucketName1 := fmt.Sprintf("%s-1", bucketName)
if err := obj.MakeBucket(bucketName1); err != nil {
t.Fatal(err)
}
// Register the API end points with XL/FS object layer.
apiRouter := initTestAPIEndPoints(obj, []string{"PutBucketPolicy"})
// initialize the server and obtain the credentials and root.
// credentials are necessary to sign the HTTP request.
rootPath, err := newTestConfig("us-east-1")
if err != nil {
t.Fatalf("Init Test config failed")
}
// remove the root folder after the test ends.
defer removeAll(rootPath)
credentials := serverConfig.GetCredential()
// template for constructing HTTP request body for PUT bucket policy.
bucketPolicyTemplate := `{"Version":"2012-10-17","Statement":[{"Sid":"","Effect":"Allow","Principal":{"AWS":["*"]},"Action":["s3:GetBucketLocation","s3:ListBucket"],"Resource":["arn:aws:s3:::%s"]},{"Sid":"","Effect":"Allow","Principal":{"AWS":["*"]},"Action":["s3:GetObject"],"Resource":["arn:aws:s3:::%s/this*"]}]}`
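For illustration, a minimal standalone sketch of how this template expands; the bucket name "example-bucket" is hypothetical and not part of the test:

```
package main

import "fmt"

// Copy of the test template above; the two %s verbs take the bucket name
// for the bucket-level and object-level statements respectively.
const bucketPolicyTemplate = `{"Version":"2012-10-17","Statement":[{"Sid":"","Effect":"Allow","Principal":{"AWS":["*"]},"Action":["s3:GetBucketLocation","s3:ListBucket"],"Resource":["arn:aws:s3:::%s"]},{"Sid":"","Effect":"Allow","Principal":{"AWS":["*"]},"Action":["s3:GetObject"],"Resource":["arn:aws:s3:::%s/this*"]}]}`

func main() {
	// Expands to a policy allowing bucket listing plus read-only access
	// to objects under the "this" prefix of the hypothetical bucket.
	fmt.Printf(bucketPolicyTemplate+"\n", "example-bucket", "example-bucket")
}
```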
@@ -276,67 +261,203 @@ func testPutBucketPolicyHandler(obj ObjectLayer, instanceType string, t TestErrH
// test cases with sample input and expected output.
testCases := []struct {
bucketName string
accessKey string
secretKey string
// bucket policy to be set,
// set as request body.
bucketPolicyReader io.ReadSeeker
// length in bytes of the bucket policy being set.
policyLen int
accessKey string
secretKey string
// expected Response.
expectedRespStatus int
}{
{bucketName, credentials.AccessKeyID, credentials.SecretAccessKey, http.StatusNoContent},
// Test case - 1.
{
bucketName: bucketName,
bucketPolicyReader: bytes.NewReader([]byte(fmt.Sprintf(bucketPolicyTemplate, bucketName, bucketName))),
policyLen: len(fmt.Sprintf(bucketPolicyTemplate, bucketName, bucketName)),
accessKey: credentials.AccessKeyID,
secretKey: credentials.SecretAccessKey,
expectedRespStatus: http.StatusNoContent,
},
// Test case - 2.
// Setting the content length to be more than max allowed size.
// Expecting StatusBadRequest (400).
{
bucketName: bucketName,
bucketPolicyReader: bytes.NewReader([]byte(fmt.Sprintf(bucketPolicyTemplate, bucketName, bucketName))),
policyLen: maxAccessPolicySize + 1,
accessKey: credentials.AccessKeyID,
secretKey: credentials.SecretAccessKey,
expectedRespStatus: http.StatusBadRequest,
},
// Test case - 3.
// Case with content-length of the HTTP request set to 0.
// Expecting the HTTP response status to be StatusLengthRequired (411).
{
bucketName: bucketName,
bucketPolicyReader: bytes.NewReader([]byte(fmt.Sprintf(bucketPolicyTemplate, bucketName, bucketName))),
policyLen: 0,
accessKey: credentials.AccessKeyID,
secretKey: credentials.SecretAccessKey,
expectedRespStatus: http.StatusLengthRequired,
},
// Test case - 4.
// setting the readSeeker to `nil`, bucket policy parser will fail.
{
bucketName: bucketName,
bucketPolicyReader: nil,
policyLen: 10,
accessKey: credentials.AccessKeyID,
secretKey: credentials.SecretAccessKey,
expectedRespStatus: http.StatusBadRequest,
},
// Test case - 5.
// setting the keys to be empty.
// Expecting statusForbidden.
{
bucketName: bucketName,
bucketPolicyReader: nil,
policyLen: 10,
accessKey: "",
secretKey: "",
expectedRespStatus: http.StatusForbidden,
},
// Test case - 6.
// setting an invalid bucket policy.
// the bucket policy parser will fail.
{
bucketName: bucketName,
bucketPolicyReader: bytes.NewReader([]byte("dummy-policy")),
policyLen: len([]byte("dummy-policy")),
accessKey: credentials.AccessKeyID,
secretKey: credentials.SecretAccessKey,
expectedRespStatus: http.StatusBadRequest,
},
// Test case - 7.
// Different bucket name used in the HTTP request and the policy string.
// checkBucketPolicyResources should fail.
{
bucketName: bucketName1,
bucketPolicyReader: bytes.NewReader([]byte(fmt.Sprintf(bucketPolicyTemplate, bucketName, bucketName))),
policyLen: len(fmt.Sprintf(bucketPolicyTemplate, bucketName, bucketName)),
accessKey: credentials.AccessKeyID,
secretKey: credentials.SecretAccessKey,
expectedRespStatus: http.StatusBadRequest,
},
// Test case - 8.
// non-existent bucket is used.
// the bucket-existence check should fail,
// resulting in 404 StatusNotFound.
{
bucketName: "non-existent-bucket",
bucketPolicyReader: bytes.NewReader([]byte(fmt.Sprintf(bucketPolicyTemplate, "non-existent-bucket", "non-existent-bucket"))),
policyLen: len(fmt.Sprintf(bucketPolicyTemplate, bucketName, bucketName)),
accessKey: credentials.AccessKeyID,
secretKey: credentials.SecretAccessKey,
expectedRespStatus: http.StatusNotFound,
},
// Test case - 9.
// invalid bucket name is used.
// writing BucketPolicy should fail.
// should result in 400 StatusBadRequest.
{
bucketName: ".invalid-bucket",
bucketPolicyReader: bytes.NewReader([]byte(fmt.Sprintf(bucketPolicyTemplate, ".invalid-bucket", ".invalid-bucket"))),
policyLen: len(fmt.Sprintf(bucketPolicyTemplate, bucketName, bucketName)),
accessKey: credentials.AccessKeyID,
secretKey: credentials.SecretAccessKey,
expectedRespStatus: http.StatusBadRequest,
},
}
// Iterating over the test cases, calling the function under test and asserting the response.
for i, testCase := range testCases {
// obtain the put bucket policy request body.
bucketPolicyStr := fmt.Sprintf(bucketPolicyTemplate, testCase.bucketName, testCase.bucketName)
// initialize HTTP NewRecorder, this records any mutations to response writer inside the handler.
rec := httptest.NewRecorder()
recV4 := httptest.NewRecorder()
// construct HTTP request for PUT bucket policy endpoint.
req, err := newTestSignedRequest("PUT", getPutPolicyURL("", testCase.bucketName),
int64(len(bucketPolicyStr)), bytes.NewReader([]byte(bucketPolicyStr)), testCase.accessKey, testCase.secretKey)
reqV4, err := newTestSignedRequestV4("PUT", getPutPolicyURL("", testCase.bucketName),
int64(testCase.policyLen), testCase.bucketPolicyReader, testCase.accessKey, testCase.secretKey)
if err != nil {
t.Fatalf("Test %d: Failed to create HTTP request for PutBucketPolicyHandler: <ERROR> %v", i+1, err)
t.Fatalf("Test %d: %s: Failed to create HTTP request for PutBucketPolicyHandler: <ERROR> %v", i+1, instanceType, err)
}
// Since `apiRouter` satisfies `http.Handler` it has a ServeHTTP to execute the logic of the handler.
// Call the ServeHTTP to execute the handler.
apiRouter.ServeHTTP(rec, req)
if rec.Code != testCase.expectedRespStatus {
t.Errorf("Test %d: Expected the response status to be `%d`, but instead found `%d`", i+1, testCase.expectedRespStatus, rec.Code)
apiRouter.ServeHTTP(recV4, reqV4)
if recV4.Code != testCase.expectedRespStatus {
t.Errorf("Test %d: %s: Expected the response status to be `%d`, but instead found `%d`", i+1, instanceType, testCase.expectedRespStatus, recV4.Code)
}
// initialize HTTP NewRecorder, this records any mutations to response writer inside the handler.
recV2 := httptest.NewRecorder()
// construct HTTP request for PUT bucket policy endpoint.
reqV2, err := newTestSignedRequestV2("PUT", getPutPolicyURL("", testCase.bucketName),
int64(testCase.policyLen), testCase.bucketPolicyReader, testCase.accessKey, testCase.secretKey)
if err != nil {
t.Fatalf("Test %d: %s: Failed to create HTTP request for PutBucketPolicyHandler: <ERROR> %v", i+1, instanceType, err)
}
// Since `apiRouter` satisfies `http.Handler` it has a ServeHTTP to execute the logic of the handler.
// Call the ServeHTTP to execute the handler.
apiRouter.ServeHTTP(recV2, reqV2)
if recV2.Code != testCase.expectedRespStatus {
t.Errorf("Test %d: %s: Expected the response status to be `%d`, but instead found `%d`", i+1, instanceType, testCase.expectedRespStatus, recV2.Code)
}
}
// Test for Anonymous/unsigned http request.
// Bucket policy related functions don't support anonymous requests, setting policies shouldn't make a difference.
bucketPolicyStr := fmt.Sprintf(bucketPolicyTemplate, bucketName, bucketName)
// create unsigned HTTP request for PutBucketPolicyHandler.
anonReq, err := newTestRequest("PUT", getPutPolicyURL("", bucketName),
int64(len(bucketPolicyStr)), bytes.NewReader([]byte(bucketPolicyStr)))
if err != nil {
t.Fatalf("Minio %s: Failed to create an anonymous request for bucket \"%s\": <ERROR> %v",
instanceType, bucketName, err)
}
// ExecObjectLayerAPIAnonTest - Calls the HTTP API handler using the anonymous request, validates the ErrAccessDeniedResponse,
// sets the bucket policy using the policy statement generated from `getWriteOnlyObjectStatement` so that the
// unsigned request goes through and it's validated again.
ExecObjectLayerAPIAnonTest(t, "PutBucketPolicyHandler", bucketName, "", instanceType, apiRouter, anonReq, getWriteOnlyObjectStatement)
// HTTP request for testing when `objectLayer` is set to `nil`.
// There is no need to use an existing bucket and valid input for creating the request
// since the `objectLayer==nil` check is performed before any other checks inside the handlers.
// The only aim is to generate an HTTP request in a way that the relevant/registered end point is invoked.
nilBucket := "dummy-bucket"
nilReq, err := newTestSignedRequestV4("PUT", getPutPolicyURL("", nilBucket),
0, nil, "", "")
if err != nil {
t.Errorf("Minio %s: Failed to create HTTP request for testing the response when object Layer is set to `nil`.", instanceType)
}
// execute the object layer set to `nil` test.
// `ExecObjectLayerAPINilTest` manages the operation.
ExecObjectLayerAPINilTest(t, nilBucket, "", instanceType, apiRouter, nilReq)
}
// Wrapper for calling Get Bucket Policy HTTP handler tests for both XL multiple disks and single node setup.
func TestGetBucketPolicyHandler(t *testing.T) {
ExecObjectLayerTest(t, testGetBucketPolicyHandler)
ExecObjectLayerAPITest(t, testGetBucketPolicyHandler, []string{"PutBucketPolicy", "GetBucketPolicy"})
}
// testGetBucketPolicyHandler - Test for end point which fetches the access policy json of the given bucket.
// TODO: Add exhaustive cases with various combinations of statement fields.
func testGetBucketPolicyHandler(obj ObjectLayer, instanceType string, t TestErrHandler) {
func testGetBucketPolicyHandler(obj ObjectLayer, instanceType, bucketName string, apiRouter http.Handler,
credentials credential, t *testing.T) {
// initialize bucket policy.
initBucketPolicies(obj)
// get random bucket name.
bucketName := getRandomBucketName()
// Create bucket.
err := obj.MakeBucket(bucketName)
if err != nil {
// failed to create new bucket, abort.
t.Fatalf("%s : %s", instanceType, err)
}
// Register the API end points with XL/FS object layer.
// Registering only the PutBucketPolicy and GetBucketPolicy handlers.
apiRouter := initTestAPIEndPoints(obj, []string{"PutBucketPolicy", "GetBucketPolicy"})
// initialize the server and obtain the credentials and root.
// credentials are necessary to sign the HTTP request.
rootPath, err := newTestConfig("us-east-1")
if err != nil {
t.Fatalf("Init Test config failed")
}
// remove the root folder after the test ends.
defer removeAll(rootPath)
credentials := serverConfig.GetCredential()
// template for constructing HTTP request body for PUT bucket policy.
bucketPolicyTemplate := `{"Version":"2012-10-17","Statement":[{"Action":["s3:GetBucketLocation","s3:ListBucket"],"Effect":"Allow","Principal":{"AWS":["*"]},"Resource":["arn:aws:s3:::%s"],"Sid":""},{"Action":["s3:GetObject"],"Effect":"Allow","Principal":{"AWS":["*"]},"Resource":["arn:aws:s3:::%s/this*"],"Sid":""}]}`
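Note the field order here differs from the PUT template earlier: presumably because the GET handler returns the policy after re-marshalling, and encoding/json emits struct fields in declaration order. A minimal sketch with a trimmed, hypothetical mirror of the policyStatement type declared later in this diff:

```
package main

import (
	"encoding/json"
	"fmt"
)

// statement is a trimmed, hypothetical mirror of policyStatement;
// json.Marshal preserves this declaration order in its output.
type statement struct {
	Actions   []string    `json:"Action"`
	Effect    string      `json:"Effect"`
	Principal interface{} `json:"Principal"`
	Resources []string    `json:"Resource"`
	Sid       string      `json:"Sid"`
}

func main() {
	out, err := json.Marshal(statement{
		Actions:   []string{"s3:GetObject"},
		Effect:    "Allow",
		Principal: map[string][]string{"AWS": {"*"}},
		Resources: []string{"arn:aws:s3:::example-bucket/this*"},
	})
	if err != nil {
		panic(err)
	}
	// Keys appear as Action, Effect, Principal, Resource, Sid, matching
	// the GET template's order above.
	fmt.Println(string(out))
}
```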
@@ -357,18 +478,32 @@ func testGetBucketPolicyHandler(obj ObjectLayer, instanceType string, t TestErrH
// obtain the put bucket policy request body.
bucketPolicyStr := fmt.Sprintf(bucketPolicyTemplate, testPolicy.bucketName, testPolicy.bucketName)
// initialize HTTP NewRecorder, this records any mutations to response writer inside the handler.
rec := httptest.NewRecorder()
recV4 := httptest.NewRecorder()
// construct HTTP request for PUT bucket policy endpoint.
req, err := newTestSignedRequest("PUT", getPutPolicyURL("", testPolicy.bucketName),
reqV4, err := newTestSignedRequestV4("PUT", getPutPolicyURL("", testPolicy.bucketName),
int64(len(bucketPolicyStr)), bytes.NewReader([]byte(bucketPolicyStr)), testPolicy.accessKey, testPolicy.secretKey)
if err != nil {
t.Fatalf("Test %d: Failed to create HTTP request for PutBucketPolicyHandler: <ERROR> %v", i+1, err)
}
// Since `apiRouter` satisfies `http.Handler` it has a ServeHTTP to execute the logic of the handler.
// Call the ServeHTTP to execute the handler.
apiRouter.ServeHTTP(rec, req)
if rec.Code != testPolicy.expectedRespStatus {
t.Fatalf("Case %d: Expected the response status to be `%d`, but instead found `%d`", i+1, testPolicy.expectedRespStatus, rec.Code)
apiRouter.ServeHTTP(recV4, reqV4)
if recV4.Code != testPolicy.expectedRespStatus {
t.Fatalf("Case %d: Expected the response status to be `%d`, but instead found `%d`", i+1, testPolicy.expectedRespStatus, recV4.Code)
}
// initialize HTTP NewRecorder, this records any mutations to response writer inside the handler.
recV2 := httptest.NewRecorder()
// construct HTTP request for PUT bucket policy endpoint.
reqV2, err := newTestSignedRequestV2("PUT", getPutPolicyURL("", testPolicy.bucketName),
int64(len(bucketPolicyStr)), bytes.NewReader([]byte(bucketPolicyStr)), testPolicy.accessKey, testPolicy.secretKey)
if err != nil {
t.Fatalf("Test %d: Failed to create HTTP request for PutBucketPolicyHandler: <ERROR> %v", i+1, err)
}
// Since `apiRouter` satisfies `http.Handler` it has a ServeHTTP to execute the logic of the handler.
// Call the ServeHTTP to execute the handler.
apiRouter.ServeHTTP(recV2, reqV2)
if recV2.Code != testPolicy.expectedRespStatus {
t.Fatalf("Case %d: Expected the response status to be `%d`, but instead found `%d`", i+1, testPolicy.expectedRespStatus, recV2.Code)
}
}
@@ -381,16 +516,42 @@ func testGetBucketPolicyHandler(obj ObjectLayer, instanceType string, t TestErrH
expectedBucketPolicy string
expectedRespStatus int
}{
{bucketName, credentials.AccessKeyID, credentials.SecretAccessKey, bucketPolicyTemplate, http.StatusOK},
// Test case - 1.
// Case with valid inputs, expected to return success status of 200 OK.
{
bucketName: bucketName,
accessKey: credentials.AccessKeyID,
secretKey: credentials.SecretAccessKey,
expectedBucketPolicy: bucketPolicyTemplate,
expectedRespStatus: http.StatusOK,
},
// Test case - 2.
// Case with non-existent bucket name.
{
bucketName: "non-existent-bucket",
accessKey: credentials.AccessKeyID,
secretKey: credentials.SecretAccessKey,
expectedBucketPolicy: bucketPolicyTemplate,
expectedRespStatus: http.StatusNotFound,
},
// Test case - 3.
// Case with invalid bucket name.
{
bucketName: ".invalid-bucket-name",
accessKey: credentials.AccessKeyID,
secretKey: credentials.SecretAccessKey,
expectedBucketPolicy: "",
expectedRespStatus: http.StatusBadRequest,
},
}
// Iterating over the cases, fetching the policy and validating the response.
for i, testCase := range testCases {
// expected bucket policy json string.
expectedBucketPolicyStr := fmt.Sprintf(testCase.expectedBucketPolicy, testCase.bucketName, testCase.bucketName)
// initialize HTTP NewRecorder, this records any mutations to response writer inside the handler.
rec := httptest.NewRecorder()
recV4 := httptest.NewRecorder()
// construct HTTP request for PUT bucket policy endpoint.
req, err := newTestSignedRequest("GET", getGetPolicyURL("", testCase.bucketName),
reqV4, err := newTestSignedRequestV4("GET", getGetPolicyURL("", testCase.bucketName),
0, nil, testCase.accessKey, testCase.secretKey)
if err != nil {
@@ -398,57 +559,94 @@ func testGetBucketPolicyHandler(obj ObjectLayer, instanceType string, t TestErrH
}
// Since `apiRouter` satisfies `http.Handler` it has a ServeHTTP to execute the logic of the handler.
// Call the ServeHTTP to execute the handler, GetBucketPolicyHandler handles the request.
apiRouter.ServeHTTP(rec, req)
apiRouter.ServeHTTP(recV4, reqV4)
// Assert the response code with the expected status.
if rec.Code != testCase.expectedRespStatus {
t.Fatalf("Case %d: Expected the response status to be `%d`, but instead found `%d`", i+1, testCase.expectedRespStatus, rec.Code)
if recV4.Code != testCase.expectedRespStatus {
t.Fatalf("Case %d: Expected the response status to be `%d`, but instead found `%d`", i+1, testCase.expectedRespStatus, recV4.Code)
}
// read the response body.
bucketPolicyReadBuf, err := ioutil.ReadAll(rec.Body)
bucketPolicyReadBuf, err := ioutil.ReadAll(recV4.Body)
if err != nil {
t.Fatalf("Test %d: %s: Failed parsing response body: <ERROR> %v", i+1, instanceType, err)
}
// Verify whether the bucket policy fetched is same as the one inserted.
if expectedBucketPolicyStr != string(bucketPolicyReadBuf) {
t.Errorf("Test %d: %s: Bucket policy differs from expected value.", i+1, instanceType)
if recV4.Code == http.StatusOK {
// Verify whether the bucket policy fetched is same as the one inserted.
if expectedBucketPolicyStr != string(bucketPolicyReadBuf) {
t.Errorf("Test %d: %s: Bucket policy differs from expected value.", i+1, instanceType)
}
}
// initialize HTTP NewRecorder, this records any mutations to response writer inside the handler.
recV2 := httptest.NewRecorder()
// construct HTTP request for PUT bucket policy endpoint.
reqV2, err := newTestSignedRequestV2("GET", getGetPolicyURL("", testCase.bucketName),
0, nil, testCase.accessKey, testCase.secretKey)
if err != nil {
t.Fatalf("Test %d: Failed to create HTTP request for GetBucketPolicyHandler: <ERROR> %v", i+1, err)
}
// Since `apiRouter` satisfies `http.Handler` it has a ServeHTTP to execute the logic of the handler.
// Call the ServeHTTP to execute the handler, GetBucketPolicyHandler handles the request.
apiRouter.ServeHTTP(recV2, reqV2)
// Assert the response code with the expected status.
if recV2.Code != testCase.expectedRespStatus {
t.Fatalf("Case %d: Expected the response status to be `%d`, but instead found `%d`", i+1, testCase.expectedRespStatus, recV2.Code)
}
// read the response body.
bucketPolicyReadBuf, err = ioutil.ReadAll(recV2.Body)
if err != nil {
t.Fatalf("Test %d: %s: Failed parsing response body: <ERROR> %v", i+1, instanceType, err)
}
if recV2.Code == http.StatusOK {
// Verify whether the bucket policy fetched is same as the one inserted.
if expectedBucketPolicyStr != string(bucketPolicyReadBuf) {
t.Errorf("Test %d: %s: Bucket policy differs from expected value.", i+1, instanceType)
}
}
}
// Test for Anonymous/unsigned http request.
// Bucket policy related functions don't support anonymous requests, setting policies shouldn't make a difference.
// create unsigned HTTP request for GetBucketPolicyHandler.
anonReq, err := newTestRequest("GET", getPutPolicyURL("", bucketName), 0, nil)
if err != nil {
t.Fatalf("Minio %s: Failed to create an anonymous request for bucket \"%s\": <ERROR> %v",
instanceType, bucketName, err)
}
// ExecObjectLayerAPIAnonTest - Calls the HTTP API handler using the anonymous request, validates the ErrAccessDeniedResponse,
// sets the bucket policy using the policy statement generated from `getReadOnlyObjectStatement` so that the
// unsigned request goes through and it's validated again.
ExecObjectLayerAPIAnonTest(t, "GetBucketPolicyHandler", bucketName, "", instanceType, apiRouter, anonReq, getReadOnlyObjectStatement)
// HTTP request for testing when `objectLayer` is set to `nil`.
// There is no need to use an existing bucket and valid input for creating the request
// since the `objectLayer==nil` check is performed before any other checks inside the handlers.
// The only aim is to generate an HTTP request in a way that the relevant/registered end point is invoked.
nilBucket := "dummy-bucket"
nilReq, err := newTestSignedRequestV4("GET", getGetPolicyURL("", nilBucket),
0, nil, "", "")
if err != nil {
t.Errorf("Minio %s: Failed to create HTTP request for testing the response when object Layer is set to `nil`.", instanceType)
}
// execute the object layer set to `nil` test.
// `ExecObjectLayerAPINilTest` manages the operation.
ExecObjectLayerAPINilTest(t, nilBucket, "", instanceType, apiRouter, nilReq)
}
// Wrapper for calling Delete Bucket Policy HTTP handler tests for both XL multiple disks and single node setup.
func TestDeleteBucketPolicyHandler(t *testing.T) {
ExecObjectLayerTest(t, testDeleteBucketPolicyHandler)
ExecObjectLayerAPITest(t, testDeleteBucketPolicyHandler, []string{"PutBucketPolicy", "DeleteBucketPolicy"})
}
// testDeleteBucketPolicyHandler - Test for Delete bucket policy end point.
// TODO: Add exhaustive cases with various combinations of statement fields.
func testDeleteBucketPolicyHandler(obj ObjectLayer, instanceType string, t TestErrHandler) {
func testDeleteBucketPolicyHandler(obj ObjectLayer, instanceType, bucketName string, apiRouter http.Handler,
credentials credential, t *testing.T) {
// initialize bucket policy.
initBucketPolicies(obj)
// get random bucket name.
bucketName := getRandomBucketName()
// Create bucket.
err := obj.MakeBucket(bucketName)
if err != nil {
// failed to create new bucket, abort.
t.Fatalf("%s : %s", instanceType, err)
}
// Register the API end points with XL/FS object layer.
// Registering PutBucketPolicy and DeleteBucketPolicy handlers.
apiRouter := initTestAPIEndPoints(obj, []string{"PutBucketPolicy", "DeleteBucketPolicy"})
// initialize the server and obtain the credentials and root.
// credentials are necessary to sign the HTTP request.
rootPath, err := newTestConfig("us-east-1")
if err != nil {
t.Fatalf("Init Test config failed")
}
// remove the root folder after the test ends.
defer removeAll(rootPath)
credentials := serverConfig.GetCredential()
// template for constructing HTTP request body for PUT bucket policy.
bucketPolicyTemplate := `{
"Version": "2012-10-17",
@@ -493,7 +691,12 @@ func testDeleteBucketPolicyHandler(obj ObjectLayer, instanceType string, t TestE
// expected Response.
expectedRespStatus int
}{
{bucketName, credentials.AccessKeyID, credentials.SecretAccessKey, http.StatusNoContent},
{
bucketName: bucketName,
accessKey: credentials.AccessKeyID,
secretKey: credentials.SecretAccessKey,
expectedRespStatus: http.StatusNoContent,
},
}
// Iterating over the cases and writing the bucket policy.
@@ -502,48 +705,252 @@ func testDeleteBucketPolicyHandler(obj ObjectLayer, instanceType string, t TestE
// obtain the put bucket policy request body.
bucketPolicyStr := fmt.Sprintf(bucketPolicyTemplate, testPolicy.bucketName, testPolicy.bucketName)
// initialize HTTP NewRecorder, this records any mutations to response writer inside the handler.
rec := httptest.NewRecorder()
recV4 := httptest.NewRecorder()
// construct HTTP request for PUT bucket policy endpoint.
req, err := newTestSignedRequest("PUT", getPutPolicyURL("", testPolicy.bucketName),
reqV4, err := newTestSignedRequestV4("PUT", getPutPolicyURL("", testPolicy.bucketName),
int64(len(bucketPolicyStr)), bytes.NewReader([]byte(bucketPolicyStr)), testPolicy.accessKey, testPolicy.secretKey)
if err != nil {
t.Fatalf("Test %d: Failed to create HTTP request for PutBucketPolicyHandler: <ERROR> %v", i+1, err)
}
// Since `apiRouter` satisfies `http.Handler` it has a ServeHTTP to execute the logic of the handler.
// Call the ServeHTTP to execute the handler.
apiRouter.ServeHTTP(rec, req)
if rec.Code != testPolicy.expectedRespStatus {
t.Fatalf("Case %d: Expected the response status to be `%d`, but instead found `%d`", i+1, testPolicy.expectedRespStatus, rec.Code)
apiRouter.ServeHTTP(recV4, reqV4)
if recV4.Code != testPolicy.expectedRespStatus {
t.Fatalf("Case %d: Expected the response status to be `%d`, but instead found `%d`", i+1, testPolicy.expectedRespStatus, recV4.Code)
}
}
// testcases with input and expected output for DeleteBucketPolicyHandler.
testCases := []struct {
bucketName string
accessKey string
secretKey string
// expected Response.
// expected response.
expectedRespStatus int
}{
{bucketName, credentials.AccessKeyID, credentials.SecretAccessKey, http.StatusNoContent},
// Test case - 1.
{
bucketName: bucketName,
accessKey: credentials.AccessKeyID,
secretKey: credentials.SecretAccessKey,
expectedRespStatus: http.StatusNoContent,
},
// Test case - 2.
// Case with non-existent-bucket.
{
bucketName: "non-existent-bucket",
accessKey: credentials.AccessKeyID,
secretKey: credentials.SecretAccessKey,
expectedRespStatus: http.StatusNotFound,
},
// Test case - 3.
// Case with invalid bucket name.
{
bucketName: ".invalid-bucket-name",
accessKey: credentials.AccessKeyID,
secretKey: credentials.SecretAccessKey,
expectedRespStatus: http.StatusBadRequest,
},
}
// Iterating over the cases and deleting the bucket policy and then asserting response.
for i, testCase := range testCases {
// initialize HTTP NewRecorder, this records any mutations to response writer inside the handler.
rec := httptest.NewRecorder()
recV4 := httptest.NewRecorder()
// construct HTTP request for Delete bucket policy endpoint.
req, err := newTestSignedRequest("DELETE", getDeletePolicyURL("", testCase.bucketName),
reqV4, err := newTestSignedRequestV4("DELETE", getDeletePolicyURL("", testCase.bucketName),
0, nil, testCase.accessKey, testCase.secretKey)
if err != nil {
t.Fatalf("Test %d: Failed to create HTTP request for GetBucketPolicyHandler: <ERROR> %v", i+1, err)
}
// Since `apiRouter` satisfies `http.Handler` it has a ServeHTTP to execute the logic of the handler.
// Call the ServeHTTP to execute the handler, DeleteBucketPolicyHandler handles the request.
apiRouter.ServeHTTP(rec, req)
apiRouter.ServeHTTP(recV4, reqV4)
// Assert the response code with the expected status.
if rec.Code != testCase.expectedRespStatus {
t.Fatalf("Case %d: Expected the response status to be `%d`, but instead found `%d`", i+1, testCase.expectedRespStatus, rec.Code)
if recV4.Code != testCase.expectedRespStatus {
t.Fatalf("Case %d: Expected the response status to be `%d`, but instead found `%d`", i+1, testCase.expectedRespStatus, recV4.Code)
}
}
// Iterating over the cases and writing the bucket policy.
// it's required to write the policies first before running tests on DeleteBucketPolicy.
for i, testPolicy := range putTestPolicies {
// obtain the put bucket policy request body.
bucketPolicyStr := fmt.Sprintf(bucketPolicyTemplate, testPolicy.bucketName, testPolicy.bucketName)
// initialize HTTP NewRecorder, this records any mutations to response writer inside the handler.
recV2 := httptest.NewRecorder()
// construct HTTP request for PUT bucket policy endpoint.
reqV2, err := newTestSignedRequestV2("PUT", getPutPolicyURL("", testPolicy.bucketName),
int64(len(bucketPolicyStr)), bytes.NewReader([]byte(bucketPolicyStr)), testPolicy.accessKey, testPolicy.secretKey)
if err != nil {
t.Fatalf("Test %d: Failed to create HTTP request for PutBucketPolicyHandler: <ERROR> %v", i+1, err)
}
// Since `apiRouter` satisfies `http.Handler` it has a ServeHTTP to execute the logic of the handler.
// Call the ServeHTTP to execute the handler.
apiRouter.ServeHTTP(recV2, reqV2)
if recV2.Code != testPolicy.expectedRespStatus {
t.Fatalf("Case %d: Expected the response status to be `%d`, but instead found `%d`", i+1, testPolicy.expectedRespStatus, recV2.Code)
}
}
for i, testCase := range testCases {
// initialize HTTP NewRecorder, this records any mutations to response writer inside the handler.
recV2 := httptest.NewRecorder()
// construct HTTP request for Delete bucket policy endpoint.
reqV2, err := newTestSignedRequestV2("DELETE", getDeletePolicyURL("", testCase.bucketName),
0, nil, testCase.accessKey, testCase.secretKey)
if err != nil {
t.Fatalf("Test %d: Failed to create HTTP request for GetBucketPolicyHandler: <ERROR> %v", i+1, err)
}
// Since `apiRouter` satisfies `http.Handler` it has a ServeHTTP to execute the logic of the handler.
// Call the ServeHTTP to execute the handler, DeleteBucketPolicyHandler handles the request.
apiRouter.ServeHTTP(recV2, reqV2)
// Assert the response code with the expected status.
if recV2.Code != testCase.expectedRespStatus {
t.Fatalf("Case %d: Expected the response status to be `%d`, but instead found `%d`", i+1, testCase.expectedRespStatus, recV2.Code)
}
}
// Test for Anonymous/unsigned http request.
// Bucket policy related functions don't support anonymous requests, setting policies shouldn't make a difference.
// create unsigned HTTP request for DeleteBucketPolicyHandler.
anonReq, err := newTestRequest("DELETE", getPutPolicyURL("", bucketName), 0, nil)
if err != nil {
t.Fatalf("Minio %s: Failed to create an anonymous request for bucket \"%s\": <ERROR> %v",
instanceType, bucketName, err)
}
// ExecObjectLayerAPIAnonTest - Calls the HTTP API handler using the anonymous request, validates the ErrAccessDeniedResponse,
// sets the bucket policy using the policy statement generated from `getReadOnlyObjectStatement` so that the
// unsigned request goes through and it's validated again.
ExecObjectLayerAPIAnonTest(t, "DeleteBucketPolicyHandler", bucketName, "", instanceType, apiRouter, anonReq, getReadOnlyObjectStatement)
// HTTP request for testing when `objectLayer` is set to `nil`.
// There is no need to use an existing bucket and valid input for creating the request
// since the `objectLayer==nil` check is performed before any other checks inside the handlers.
// The only aim is to generate an HTTP request in a way that the relevant/registered end point is invoked.
nilBucket := "dummy-bucket"
nilReq, err := newTestSignedRequestV4("DELETE", getDeletePolicyURL("", nilBucket),
0, nil, "", "")
if err != nil {
t.Errorf("Minio %s: Failed to create HTTP request for testing the response when object Layer is set to `nil`.", instanceType)
}
// execute the object layer set to `nil` test.
// `ExecObjectLayerAPINilTest` manages the operation.
ExecObjectLayerAPINilTest(t, nilBucket, "", instanceType, apiRouter, nilReq)
}
// TestBucketPolicyConditionMatch - Tests to validate whether bucket policy conditions match.
func TestBucketPolicyConditionMatch(t *testing.T) {
// obtain the inner map[string]set.StringSet for policyStatement.Conditions.
getInnerMap := func(key2, value string) map[string]set.StringSet {
innerMap := make(map[string]set.StringSet)
innerMap[key2] = set.CreateStringSet(value)
return innerMap
}
// obtain policyStatement with Conditions set.
getStatementWithCondition := func(key1, key2, value string) policyStatement {
innerMap := getInnerMap(key2, value)
// to set policyStatement.Conditions.
conditions := make(map[string]map[string]set.StringSet)
conditions[key1] = innerMap
// new policy statement.
statement := policyStatement{}
// set the condition.
statement.Conditions = conditions
return statement
}
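For reference, a minimal sketch of the JSON shape this nested map corresponds to, using plain slices in place of set.StringSet:

```
package main

import (
	"encoding/json"
	"fmt"
)

func main() {
	// Plain-map mirror of the Conditions value built by the helpers above,
	// with []string standing in for set.StringSet.
	conditions := map[string]map[string][]string{
		"StringEquals": {"s3:prefix": {"Asia/"}},
	}
	out, err := json.Marshal(conditions)
	if err != nil {
		panic(err)
	}
	// Prints: {"StringEquals":{"s3:prefix":["Asia/"]}}
	fmt.Println(string(out))
}
```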
testCases := []struct {
statementCondition policyStatement
condition map[string]set.StringSet
expectedMatch bool
}{
// Test case - 1.
// StringEquals condition matches.
{
statementCondition: getStatementWithCondition("StringEquals", "s3:prefix", "Asia/"),
condition: getInnerMap("prefix", "Asia/"),
expectedMatch: true,
},
// Test case - 2.
// StringEquals condition doesn't match.
{
statementCondition: getStatementWithCondition("StringEquals", "s3:prefix", "Asia/"),
condition: getInnerMap("prefix", "Africa/"),
expectedMatch: false,
},
// Test case - 3.
// StringEquals condition matches.
{
statementCondition: getStatementWithCondition("StringEquals", "s3:max-keys", "Asia/"),
condition: getInnerMap("max-keys", "Asia/"),
expectedMatch: true,
},
// Test case - 4.
// StringEquals condition doesn't match.
{
statementCondition: getStatementWithCondition("StringEquals", "s3:max-keys", "Asia/"),
condition: getInnerMap("max-keys", "Africa/"),
expectedMatch: false,
},
// Test case - 5.
// StringNotEquals condition matches.
{
statementCondition: getStatementWithCondition("StringNotEquals", "s3:prefix", "Asia/"),
condition: getInnerMap("prefix", "Asia/"),
expectedMatch: true,
},
// Test case - 6.
// StringNotEquals condition doesn't match.
{
statementCondition: getStatementWithCondition("StringNotEquals", "s3:prefix", "Asia/"),
condition: getInnerMap("prefix", "Africa/"),
expectedMatch: false,
},
// Test case - 7.
// StringNotEquals condition matches.
{
statementCondition: getStatementWithCondition("StringNotEquals", "s3:max-keys", "Asia/"),
condition: getInnerMap("max-keys", "Asia/"),
expectedMatch: true,
},
// Test case - 8.
// StringNotEquals condition doesn't match.
{
statementCondition: getStatementWithCondition("StringNotEquals", "s3:max-keys", "Asia/"),
condition: getInnerMap("max-keys", "Africa/"),
expectedMatch: false,
},
}
for i, tc := range testCases {
t.Run(fmt.Sprintf("Test case %d: Failed.", i+1), func(t *testing.T) {
// call the function under test and assert the result with the expected result.
doesMatch := bucketPolicyConditionMatch(tc.condition, tc.statementCondition)
if tc.expectedMatch != doesMatch {
t.Errorf("Expected the match to be `%v`; got `%v`.", tc.expectedMatch, doesMatch)
}
})
}
}


@@ -74,8 +74,10 @@ func migrateBucketPolicyConfig(objAPI ObjectLayer) error {
policyBytes, err := ioutil.ReadFile(policyPath)
fatalIf(err, "Unable to read bucket policy to migrate bucket policy", policyPath)
newPolicyPath := retainSlash(bucketConfigPrefix) + retainSlash(bucketName) + policyJSON
var metadata map[string]string
sha256sum := ""
// Erasure code the policy config to all the disks.
_, err = objAPI.PutObject(minioMetaBucket, newPolicyPath, int64(len(policyBytes)), bytes.NewReader(policyBytes), nil)
_, err = objAPI.PutObject(minioMetaBucket, newPolicyPath, int64(len(policyBytes)), bytes.NewReader(policyBytes), metadata, sha256sum)
fatalIf(err, "Unable to write bucket policy during migration.", newPolicyPath)
return nil
}


@@ -23,7 +23,6 @@ import (
"errors"
"fmt"
"io"
"path"
"sort"
"strings"
@@ -36,7 +35,7 @@ const (
)
// supportedActionMap - lists all the actions supported by minio.
var supportedActionMap = set.CreateStringSet("*", "*", "s3:*", "s3:GetObject",
var supportedActionMap = set.CreateStringSet("*", "s3:*", "s3:GetObject",
"s3:ListBucket", "s3:PutObject", "s3:GetBucketLocation", "s3:DeleteObject",
"s3:AbortMultipartUpload", "s3:ListBucketMultipartUploads", "s3:ListMultipartUploadParts")
@@ -50,18 +49,12 @@ var supportedConditionsKey = set.CreateStringSet("s3:prefix", "s3:max-keys")
// supportedEffectMap - supported effects.
var supportedEffectMap = set.CreateStringSet("Allow", "Deny")
// policyUser - canonical users list.
type policyUser struct {
AWS set.StringSet `json:"AWS,omitempty"`
CanonicalUser set.StringSet `json:"CanonicalUser,omitempty"`
}
// Statement - minio policy statement
type policyStatement struct {
Actions set.StringSet `json:"Action"`
Conditions map[string]map[string]set.StringSet `json:"Condition,omitempty"`
Effect string
Principal policyUser `json:"Principal"`
Principal interface{} `json:"Principal"`
Resources set.StringSet `json:"Resource"`
Sid string
}
@@ -76,7 +69,7 @@ type bucketPolicy struct {
func (b bucketPolicy) String() string {
bbytes, err := json.Marshal(&b)
if err != nil {
errorIf(err, "Unable to unmarshal bucket policy into JSON %#v", b)
errorIf(err, "Unable to marshal bucket policy into JSON %#v", b)
return ""
}
return string(bbytes)
@@ -86,11 +79,11 @@ func (b bucketPolicy) String() string {
func isValidActions(actions set.StringSet) (err error) {
// Statement actions cannot be empty.
if len(actions) == 0 {
err = errors.New("Action list cannot be empty.")
err = errors.New("Action list cannot be empty")
return err
}
if unsupportedActions := actions.Difference(supportedActionMap); !unsupportedActions.IsEmpty() {
err = fmt.Errorf("Unsupported actions found: %#v, please validate your policy document.", unsupportedActions)
err = fmt.Errorf("Unsupported actions found: %#v, please validate your policy document", unsupportedActions)
return err
}
return nil
@@ -100,11 +93,11 @@ func isValidActions(actions set.StringSet) (err error) {
func isValidEffect(effect string) (err error) {
// Statement effect cannot be empty.
if effect == "" {
err = errors.New("Policy effect cannot be empty.")
err = errors.New("Policy effect cannot be empty")
return err
}
if !supportedEffectMap.Contains(effect) {
err = errors.New("Unsupported Effect found: " + effect + ", please validate your policy document.")
err = errors.New("Unsupported Effect found: " + effect + ", please validate your policy document")
return err
}
return nil
@@ -114,34 +107,77 @@ func isValidEffect(effect string) (err error) {
func isValidResources(resources set.StringSet) (err error) {
// Statement resources cannot be empty.
if len(resources) == 0 {
err = errors.New("Resource list cannot be empty.")
err = errors.New("Resource list cannot be empty")
return err
}
for resource := range resources {
if !strings.HasPrefix(resource, AWSResourcePrefix) {
err = errors.New("Unsupported resource style found: " + resource + ", please validate your policy document.")
err = errors.New("Unsupported resource style found: " + resource + ", please validate your policy document")
return err
}
resourceSuffix := strings.SplitAfter(resource, AWSResourcePrefix)[1]
if len(resourceSuffix) == 0 || strings.HasPrefix(resourceSuffix, "/") {
err = errors.New("Invalid resource style found: " + resource + ", please validate your policy document.")
err = errors.New("Invalid resource style found: " + resource + ", please validate your policy document")
return err
}
}
return nil
}
// parsePrincipals parses an incoming JSON principal. Handles cases for
// these three combinations.
// - "Principal": "*",
// - "Principal": { "AWS" : "*" }
// - "Principal": { "AWS" : [ "*" ]}
func parsePrincipals(principal interface{}) set.StringSet {
principals, ok := principal.(map[string]interface{})
if !ok {
var principalStr string
principalStr, ok = principal.(string)
if ok {
return set.CreateStringSet(principalStr)
}
}
var principalStrs []string
for _, p := range principals {
principalStr, isStr := p.(string)
if !isStr {
principalsAdd, isInterface := p.([]interface{})
if !isInterface {
principalStrsAddr, isStrs := p.([]string)
if !isStrs {
continue
}
principalStrs = append(principalStrs, principalStrsAddr...)
} else {
for _, pa := range principalsAdd {
var pstr string
pstr, isStr = pa.(string)
if !isStr {
continue
}
principalStrs = append(principalStrs, pstr)
}
}
continue
}
principalStrs = append(principalStrs, principalStr)
}
return set.CreateStringSet(principalStrs...)
}
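A minimal standalone sketch of why the parser must branch on type: the three Principal encodings decode to different Go types under json.Unmarshal (the example statements are illustrative):

```
package main

import (
	"encoding/json"
	"fmt"
)

func main() {
	// The three Principal encodings listed above. Unmarshalled into an
	// interface{}, the first decodes to a string and the other two to
	// map[string]interface{}, hence the type assertions in parsePrincipals.
	for _, doc := range []string{
		`{"Principal": "*"}`,
		`{"Principal": {"AWS": "*"}}`,
		`{"Principal": {"AWS": ["*"]}}`,
	} {
		var stmt struct{ Principal interface{} }
		if err := json.Unmarshal([]byte(doc), &stmt); err != nil {
			panic(err)
		}
		fmt.Printf("%T\n", stmt.Principal)
	}
}
```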
// isValidPrincipals - are valid principals.
func isValidPrincipals(principals set.StringSet) (err error) {
func isValidPrincipals(principal interface{}) (err error) {
principals := parsePrincipals(principal)
// Statement principal should have a value.
if len(principals) == 0 {
err = errors.New("Principal cannot be empty.")
err = errors.New("Principal cannot be empty")
return err
}
if unsuppPrincipals := principals.Difference(set.CreateStringSet([]string{"*"}...)); !unsuppPrincipals.IsEmpty() {
// Minio does not support or implement IAM, "*" is the only valid value.
// Amazon s3 doc on principals: http://docs.aws.amazon.com/IAM/latest/UserGuide/reference_policies_elements.html#Principal
err = fmt.Errorf("Unsupported principals found: %#v, please validate your policy document.", unsuppPrincipals)
err = fmt.Errorf("Unsupported principals found: %#v, please validate your policy document", unsuppPrincipals)
return err
}
return nil
@@ -155,17 +191,17 @@ func isValidConditions(conditions map[string]map[string]set.StringSet) (err erro
conditionKeyVal := make(map[string]set.StringSet)
for conditionType := range conditions {
if !supportedConditionsType.Contains(conditionType) {
err = fmt.Errorf("Unsupported condition type '%s', please validate your policy document.", conditionType)
err = fmt.Errorf("Unsupported condition type '%s', please validate your policy document", conditionType)
return err
}
for key, value := range conditions[conditionType] {
if !supportedConditionsKey.Contains(key) {
err = fmt.Errorf("Unsupported condition key '%s', please validate your policy document.", conditionType)
err = fmt.Errorf("Unsupported condition key '%s', please validate your policy document", conditionType)
return err
}
conditionVal, ok := conditionKeyVal[key]
if ok && !value.Intersection(conditionVal).IsEmpty() {
err = fmt.Errorf("Ambigious condition values for key '%s', please validate your policy document.", key)
err = fmt.Errorf("Ambigious condition values for key '%s', please validate your policy document", key)
return err
}
conditionKeyVal[key] = value
@@ -187,7 +223,7 @@ func resourcePrefix(resource string) string {
if strings.HasSuffix(resource, "*") {
resource = strings.TrimSuffix(resource, "*")
}
return path.Clean(resource)
return resource
}
// checkBucketPolicyResources validates Resources in unmarshalled bucket policy structure.
@@ -257,13 +293,13 @@ func parseBucketPolicy(bucketPolicyReader io.Reader, policy *bucketPolicy) (err
// Policy version cannot be empty.
if len(policy.Version) == 0 {
err = errors.New("Policy version cannot be empty.")
err = errors.New("Policy version cannot be empty")
return err
}
// Policy statements cannot be empty.
if len(policy.Statements) == 0 {
err = errors.New("Policy statement cannot be empty.")
err = errors.New("Policy statement cannot be empty")
return err
}
@@ -274,7 +310,7 @@ func parseBucketPolicy(bucketPolicyReader io.Reader, policy *bucketPolicy) (err
return err
}
// Statement principal should be supported format.
if err := isValidPrincipals(statement.Principal.AWS); err != nil {
if err := isValidPrincipals(statement.Principal); err != nil {
return err
}
// Statement actions should be valid.


@@ -76,7 +76,9 @@ var (
func getReadWriteObjectStatement(bucketName, objectPrefix string) policyStatement {
objectResourceStatement := policyStatement{}
objectResourceStatement.Effect = "Allow"
objectResourceStatement.Principal.AWS = set.CreateStringSet([]string{"*"}...)
objectResourceStatement.Principal = map[string]interface{}{
"AWS": "*",
}
objectResourceStatement.Resources = set.CreateStringSet([]string{fmt.Sprintf("%s%s", AWSResourcePrefix, bucketName+"/"+objectPrefix+"*")}...)
objectResourceStatement.Actions = set.CreateStringSet(readWriteObjectActions...)
return objectResourceStatement
@@ -86,7 +88,9 @@ func getReadWriteObjectStatement(bucketName, objectPrefix string) policyStatemen
func getReadWriteBucketStatement(bucketName, objectPrefix string) policyStatement {
bucketResourceStatement := policyStatement{}
bucketResourceStatement.Effect = "Allow"
bucketResourceStatement.Principal.AWS = set.CreateStringSet([]string{"*"}...)
bucketResourceStatement.Principal = map[string]interface{}{
"AWS": "*",
}
bucketResourceStatement.Resources = set.CreateStringSet([]string{fmt.Sprintf("%s%s", AWSResourcePrefix, bucketName)}...)
bucketResourceStatement.Actions = set.CreateStringSet(readWriteBucketActions...)
return bucketResourceStatement
@@ -104,7 +108,9 @@ func getReadWriteStatement(bucketName, objectPrefix string) []policyStatement {
func getReadOnlyBucketStatement(bucketName, objectPrefix string) policyStatement {
bucketResourceStatement := policyStatement{}
bucketResourceStatement.Effect = "Allow"
bucketResourceStatement.Principal.AWS = set.CreateStringSet([]string{"*"}...)
bucketResourceStatement.Principal = map[string]interface{}{
"AWS": "*",
}
bucketResourceStatement.Resources = set.CreateStringSet([]string{fmt.Sprintf("%s%s", AWSResourcePrefix, bucketName)}...)
bucketResourceStatement.Actions = set.CreateStringSet(readOnlyBucketActions...)
return bucketResourceStatement
@@ -114,7 +120,9 @@ func getReadOnlyBucketStatement(bucketName, objectPrefix string) policyStatement
func getReadOnlyObjectStatement(bucketName, objectPrefix string) policyStatement {
objectResourceStatement := policyStatement{}
objectResourceStatement.Effect = "Allow"
objectResourceStatement.Principal.AWS = set.CreateStringSet([]string{"*"}...)
objectResourceStatement.Principal = map[string]interface{}{
"AWS": "*",
}
objectResourceStatement.Resources = set.CreateStringSet([]string{fmt.Sprintf("%s%s", AWSResourcePrefix, bucketName+"/"+objectPrefix+"*")}...)
objectResourceStatement.Actions = set.CreateStringSet(readOnlyObjectActions...)
return objectResourceStatement
@@ -133,7 +141,9 @@ func getWriteOnlyBucketStatement(bucketName, objectPrefix string) policyStatemen
bucketResourceStatement := policyStatement{}
bucketResourceStatement.Effect = "Allow"
bucketResourceStatement.Principal.AWS = set.CreateStringSet([]string{"*"}...)
bucketResourceStatement.Principal = map[string]interface{}{
"AWS": "*",
}
bucketResourceStatement.Resources = set.CreateStringSet([]string{fmt.Sprintf("%s%s", AWSResourcePrefix, bucketName)}...)
bucketResourceStatement.Actions = set.CreateStringSet(writeOnlyBucketActions...)
return bucketResourceStatement
@@ -143,7 +153,9 @@ func getWriteOnlyBucketStatement(bucketName, objectPrefix string) policyStatemen
func getWriteOnlyObjectStatement(bucketName, objectPrefix string) policyStatement {
objectResourceStatement := policyStatement{}
objectResourceStatement.Effect = "Allow"
objectResourceStatement.Principal.AWS = set.CreateStringSet([]string{"*"}...)
objectResourceStatement.Principal = map[string]interface{}{
"AWS": "*",
}
objectResourceStatement.Resources = set.CreateStringSet([]string{fmt.Sprintf("%s%s", AWSResourcePrefix, bucketName+"/"+objectPrefix+"*")}...)
objectResourceStatement.Actions = set.CreateStringSet(writeOnlyObjectActions...)
return objectResourceStatement
@@ -172,14 +184,14 @@ func TestIsValidActions(t *testing.T) {
// Test case - 1.
// "s3:ListObject" is an invalid Action.
{set.CreateStringSet([]string{"s3:GetObject", "s3:ListObject", "s3:RemoveObject"}...),
errors.New("Unsupported actions found: set.StringSet{\"s3:RemoveObject\":struct {}{}, \"s3:ListObject\":struct {}{}}, please validate your policy document."), false},
errors.New("Unsupported actions found: set.StringSet{\"s3:RemoveObject\":struct {}{}, \"s3:ListObject\":struct {}{}}, please validate your policy document"), false},
// Test case - 2.
// Empty Actions.
{set.CreateStringSet([]string{}...), errors.New("Action list cannot be empty."), false},
{set.CreateStringSet([]string{}...), errors.New("Action list cannot be empty"), false},
// Test case - 3.
// "s3:DeleteEverything"" is an invalid Action.
{set.CreateStringSet([]string{"s3:GetObject", "s3:ListBucket", "s3:PutObject", "s3:DeleteEverything"}...),
errors.New("Unsupported actions found: set.StringSet{\"s3:DeleteEverything\":struct {}{}}, please validate your policy document."), false},
errors.New("Unsupported actions found: set.StringSet{\"s3:DeleteEverything\":struct {}{}}, please validate your policy document"), false},
// Inputs with valid Action.
// Test Case - 4.
{set.CreateStringSet([]string{
@@ -211,13 +223,13 @@ func TestIsValidEffect(t *testing.T) {
}{
// Inputs with unsupported Effect.
// Test case - 1.
{"", errors.New("Policy effect cannot be empty."), false},
{"", errors.New("Policy effect cannot be empty"), false},
// Test case - 2.
{"DontAllow", errors.New("Unsupported Effect found: DontAllow, please validate your policy document."), false},
{"DontAllow", errors.New("Unsupported Effect found: DontAllow, please validate your policy document"), false},
// Test case - 3.
{"NeverAllow", errors.New("Unsupported Effect found: NeverAllow, please validate your policy document."), false},
{"NeverAllow", errors.New("Unsupported Effect found: NeverAllow, please validate your policy document"), false},
// Test case - 4.
{"AllowAlways", errors.New("Unsupported Effect found: AllowAlways, please validate your policy document."), false},
{"AllowAlways", errors.New("Unsupported Effect found: AllowAlways, please validate your policy document"), false},
// Inputs with valid Effect.
// Test Case - 5.
@@ -255,16 +267,16 @@ func TestIsValidResources(t *testing.T) {
// Inputs with unsupported Action.
// Test case - 1.
// Empty Resources.
{[]string{}, errors.New("Resource list cannot be empty."), false},
{[]string{}, errors.New("Resource list cannot be empty"), false},
// Test case - 2.
// A valid resource should have prefix "arn:aws:s3:::".
{[]string{"my-resource"}, errors.New("Unsupported resource style found: my-resource, please validate your policy document."), false},
{[]string{"my-resource"}, errors.New("Unsupported resource style found: my-resource, please validate your policy document"), false},
// Test case - 3.
// A Valid resource should have bucket name followed by "arn:aws:s3:::".
{[]string{"arn:aws:s3:::"}, errors.New("Invalid resource style found: arn:aws:s3:::, please validate your policy document."), false},
{[]string{"arn:aws:s3:::"}, errors.New("Invalid resource style found: arn:aws:s3:::, please validate your policy document"), false},
// Test Case - 4.
// Valid resource shouldn't have slash('/') followed by "arn:aws:s3:::".
{[]string{"arn:aws:s3:::/"}, errors.New("Invalid resource style found: arn:aws:s3:::/, please validate your policy document."), false},
{[]string{"arn:aws:s3:::/"}, errors.New("Invalid resource style found: arn:aws:s3:::/, please validate your policy document"), false},
// Test cases with valid Resources.
{[]string{"arn:aws:s3:::my-bucket"}, nil, true},
@@ -301,18 +313,20 @@ func TestIsValidPrincipals(t *testing.T) {
// Inputs with unsupported Principals.
// Test case - 1.
// Empty Principals list.
{[]string{}, errors.New("Principal cannot be empty."), false},
{[]string{}, errors.New("Principal cannot be empty"), false},
// Test case - 2.
// "*" is the only valid principal.
{[]string{"my-principal"}, errors.New("Unsupported principals found: set.StringSet{\"my-principal\":struct {}{}}, please validate your policy document."), false},
{[]string{"my-principal"}, errors.New("Unsupported principals found: set.StringSet{\"my-principal\":struct {}{}}, please validate your policy document"), false},
// Test case - 3.
{[]string{"*", "111122233"}, errors.New("Unsupported principals found: set.StringSet{\"111122233\":struct {}{}}, please validate your policy document."), false},
{[]string{"*", "111122233"}, errors.New("Unsupported principals found: set.StringSet{\"111122233\":struct {}{}}, please validate your policy document"), false},
// Test case - 4.
// Test case with valid principal value.
{[]string{"*"}, nil, true},
}
for i, testCase := range testCases {
err := isValidPrincipals(set.CreateStringSet(testCase.principals...))
err := isValidPrincipals(map[string]interface{}{
"AWS": testCase.principals,
})
if err != nil && testCase.shouldPass {
t.Errorf("Test %d: Expected to pass, but failed with: <ERROR> %s", i+1, err.Error())
}
@@ -420,23 +434,23 @@ func TestIsValidConditions(t *testing.T) {
// Test case - 1.
// "StringValues" is an invalid type.
{testConditions[0], fmt.Errorf("Unsupported condition type 'StringValues', " +
"please validate your policy document."), false},
"please validate your policy document"), false},
// Test case - 2.
// "s3:Object" is an invalid key.
{testConditions[1], fmt.Errorf("Unsupported condition key " +
"'StringEquals', please validate your policy document."), false},
"'StringEquals', please validate your policy document"), false},
// Test case - 3.
// Test case with ambiguous conditions set.
{testConditions[2], fmt.Errorf("Ambiguous condition values for key 's3:prefix', " +
"please validate your policy document."), false},
"please validate your policy document"), false},
// Test case - 4.
// Test case with valid and invalid condition types.
{testConditions[3], fmt.Errorf("Unsupported condition type 'InvalidType', " +
"please validate your policy document."), false},
"please validate your policy document"), false},
// Test case - 5.
// Test case with valid and invalid condition keys.
{testConditions[4], fmt.Errorf("Unsupported condition key 'StringEquals', " +
"please validate your policy document."), false},
"please validate your policy document"), false},
// Test cases with valid conditions.
// Test case - 6.
{testConditions[5], nil, true},
@@ -587,7 +601,9 @@ func TestParseBucketPolicy(t *testing.T) {
// set unsupported principals.
setUnsupportedPrincipals := func(statements []policyStatement) []policyStatement {
// "User1111"" is an Unsupported Principal.
statements[0].Principal.AWS = set.CreateStringSet([]string{"*", "User1111"}...)
statements[0].Principal = map[string]interface{}{
"AWS": []string{"*", "User1111"},
}
return statements
}
// set unsupported Resources.
@@ -637,10 +653,10 @@ func TestParseBucketPolicy(t *testing.T) {
}{
// Test case - 1.
// bucketPolicy statement empty.
{bucketAccesPolicies[0], bucketPolicy{}, errors.New("Policy statement cannot be empty."), false},
{bucketAccesPolicies[0], bucketPolicy{}, errors.New("Policy statement cannot be empty"), false},
// Test case - 2.
// bucketPolicy version empty.
{bucketAccesPolicies[1], bucketPolicy{}, errors.New("Policy version cannot be empty."), false},
{bucketAccesPolicies[1], bucketPolicy{}, errors.New("Policy version cannot be empty"), false},
// Test case - 3.
// Readonly bucketPolicy.
{bucketAccesPolicies[2], bucketAccesPolicies[2], nil, true},
@@ -652,16 +668,16 @@ func TestParseBucketPolicy(t *testing.T) {
{bucketAccesPolicies[4], bucketAccesPolicies[4], nil, true},
// Test case - 6.
// bucketPolicy statement contains unsupported action.
{bucketAccesPolicies[5], bucketAccesPolicies[5], fmt.Errorf("Unsupported actions found: set.StringSet{\"s3:DeleteEverything\":struct {}{}}, please validate your policy document."), false},
{bucketAccesPolicies[5], bucketAccesPolicies[5], fmt.Errorf("Unsupported actions found: set.StringSet{\"s3:DeleteEverything\":struct {}{}}, please validate your policy document"), false},
// Test case - 7.
// bucketPolicy statement contains unsupported Effect.
{bucketAccesPolicies[6], bucketAccesPolicies[6], fmt.Errorf("Unsupported Effect found: DontAllow, please validate your policy document."), false},
{bucketAccesPolicies[6], bucketAccesPolicies[6], fmt.Errorf("Unsupported Effect found: DontAllow, please validate your policy document"), false},
// Test case - 8.
// bucketPolicy statement contains unsupported Principal.
{bucketAccesPolicies[7], bucketAccesPolicies[7], fmt.Errorf("Unsupported principals found: set.StringSet{\"User1111\":struct {}{}}, please validate your policy document."), false},
{bucketAccesPolicies[7], bucketAccesPolicies[7], fmt.Errorf("Unsupported principals found: set.StringSet{\"User1111\":struct {}{}}, please validate your policy document"), false},
// Test case - 9.
// bucketPolicy statement contains unsupported Resource.
{bucketAccesPolicies[8], bucketAccesPolicies[8], fmt.Errorf("Unsupported resource style found: my-resource, please validate your policy document."), false},
{bucketAccesPolicies[8], bucketAccesPolicies[8], fmt.Errorf("Unsupported resource style found: my-resource, please validate your policy document"), false},
}
for i, testCase := range testCases {
var buffer bytes.Buffer


@@ -18,7 +18,7 @@ package cmd
import (
"bytes"
"errors"
"encoding/json"
"io"
"path"
"sync"
@@ -36,6 +36,16 @@ type bucketPolicies struct {
bucketPolicyConfigs map[string]*bucketPolicy
}
// Represents a policy change.
type policyChange struct {
// IsRemove is true if the policy change is to delete the
// policy on a bucket.
IsRemove bool
// BktPolicy is the new policy for the bucket.
BktPolicy *bucketPolicy
}
// Fetch bucket policy for a given bucket.
func (bp bucketPolicies) GetBucketPolicy(bucket string) *bucketPolicy {
bp.rwMutex.RLock()
@@ -44,49 +54,59 @@ func (bp bucketPolicies) GetBucketPolicy(bucket string) *bucketPolicy {
}
// Set a new bucket policy for a bucket, this operation will overwrite
// any previous bucketpolicies for the bucket.
func (bp *bucketPolicies) SetBucketPolicy(bucket string, policy *bucketPolicy) error {
// any previous bucket policies for the bucket.
func (bp *bucketPolicies) SetBucketPolicy(bucket string, pCh policyChange) error {
bp.rwMutex.Lock()
defer bp.rwMutex.Unlock()
if policy == nil {
return errors.New("invalid argument")
}
bp.bucketPolicyConfigs[bucket] = policy
return nil
}
// Remove bucket policy for a bucket, from in-memory map.
func (bp *bucketPolicies) RemoveBucketPolicy(bucket string) {
bp.rwMutex.Lock()
defer bp.rwMutex.Unlock()
delete(bp.bucketPolicyConfigs, bucket)
if pCh.IsRemove {
delete(bp.bucketPolicyConfigs, bucket)
} else {
if pCh.BktPolicy == nil {
return errInvalidArgument
}
bp.bucketPolicyConfigs[bucket] = pCh.BktPolicy
}
return nil
}
// Loads all bucket policies from persistent layer.
func loadAllBucketPolicies(objAPI ObjectLayer) (policies map[string]*bucketPolicy, err error) {
// List buckets to load all their bucket policies.
buckets, err := objAPI.ListBuckets()
errorIf(err, "Unable to list buckets.")
if err != nil {
return nil, err
return nil, errorCause(err)
}
policies = make(map[string]*bucketPolicy)
var pErrs []error
// Loads bucket policy.
for _, bucket := range buckets {
var policy *bucketPolicy
policy, err = readBucketPolicy(bucket.Name, objAPI)
if err != nil {
switch err.(type) {
case BucketPolicyNotFound:
continue
policy, pErr := readBucketPolicy(bucket.Name, objAPI)
if pErr != nil {
// Skip expected errors such as errDiskNotFound (e.g. when
// net.Dial fails for an rpc client); collect anything else.
if !isErrIgnored(pErr, errDiskNotFound) {
if !isErrBucketPolicyNotFound(pErr) {
pErrs = append(pErrs, pErr)
}
}
return nil, err
// Continue to load other bucket policies if possible.
continue
}
policies[bucket.Name] = policy
}
// Look for any errors that occurred while reading bucket policies.
for _, pErr := range pErrs {
if pErr != nil {
return policies, pErr
}
}
// Success.
return policies, nil
}
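The loop above tolerates per-bucket failures instead of aborting on the first one. A minimal, self-contained sketch of that error policy, using stand-in names (errPolicyNotFound, loadAll) rather than the real helpers:

```
package main

import (
	"errors"
	"fmt"
)

// errPolicyNotFound stands in for BucketPolicyNotFound in this sketch.
var errPolicyNotFound = errors.New("bucket policy not found")

// loadAll mirrors the loop's error policy: skip missing policies,
// remember the first unexpected error, and keep loading the rest.
func loadAll(buckets []string, read func(string) (string, error)) (map[string]string, error) {
	policies := make(map[string]string)
	var firstErr error
	for _, b := range buckets {
		p, err := read(b)
		if err != nil {
			if !errors.Is(err, errPolicyNotFound) && firstErr == nil {
				firstErr = err
			}
			continue // continue to load other bucket policies if possible
		}
		policies[b] = p
	}
	return policies, firstErr
}

func main() {
	read := func(b string) (string, error) {
		if b == "empty" {
			return "", errPolicyNotFound
		}
		return "{}", nil
	}
	got, err := loadAll([]string{"photos", "empty"}, read)
	fmt.Println(len(got), err) // 1 <nil>
}
```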
// Initialize all bucket policies.
@@ -94,16 +114,20 @@ func initBucketPolicies(objAPI ObjectLayer) error {
if objAPI == nil {
return errInvalidArgument
}
// Read all bucket policies.
policies, err := loadAllBucketPolicies(objAPI)
if err != nil {
return err
}
// Populate global bucket collection.
globalBucketPolicies = &bucketPolicies{
rwMutex: &sync.RWMutex{},
bucketPolicyConfigs: policies,
}
// Success.
return nil
}
@@ -119,25 +143,29 @@ func getOldBucketsConfigPath() (string, error) {
// readBucketPolicyJSON - reads bucket policy for an input bucket, returns BucketPolicyNotFound
// if bucket policy is not found.
func readBucketPolicyJSON(bucket string, objAPI ObjectLayer) (bucketPolicyReader io.Reader, err error) {
// Verify bucket is valid.
if !IsValidBucketName(bucket) {
return nil, BucketNameInvalid{Bucket: bucket}
}
policyPath := pathJoin(bucketConfigPrefix, bucket, policyJSON)
// Acquire a read lock on policy config before reading.
objLock := globalNSMutex.NewNSLock(minioMetaBucket, policyPath)
objLock.RLock()
defer objLock.RUnlock()
objInfo, err := objAPI.GetObjectInfo(minioMetaBucket, policyPath)
if err != nil {
if _, ok := err.(ObjectNotFound); ok {
if isErrObjectNotFound(err) || isErrIncompleteBody(err) {
return nil, BucketPolicyNotFound{Bucket: bucket}
}
return nil, err
errorIf(err, "Unable to load policy for the bucket %s.", bucket)
return nil, errorCause(err)
}
var buffer bytes.Buffer
err = objAPI.GetObject(minioMetaBucket, policyPath, 0, objInfo.Size, &buffer)
if err != nil {
if _, ok := err.(ObjectNotFound); ok {
if isErrObjectNotFound(err) || isErrIncompleteBody(err) {
return nil, BucketPolicyNotFound{Bucket: bucket}
}
return nil, err
errorIf(err, "Unable to load policy for the bucket %s.", bucket)
return nil, errorCause(err)
}
return &buffer, nil
@@ -165,13 +193,14 @@ func readBucketPolicy(bucket string, objAPI ObjectLayer) (*bucketPolicy, error)
// removeBucketPolicy - removes any previously written bucket policy. Returns BucketPolicyNotFound
// if no policies are found.
func removeBucketPolicy(bucket string, objAPI ObjectLayer) error {
// Verify bucket is valid.
if !IsValidBucketName(bucket) {
return BucketNameInvalid{Bucket: bucket}
}
policyPath := pathJoin(bucketConfigPrefix, bucket, policyJSON)
// Acquire a write lock on policy config before modifying.
objLock := globalNSMutex.NewNSLock(minioMetaBucket, policyPath)
objLock.Lock()
defer objLock.Unlock()
if err := objAPI.DeleteObject(minioMetaBucket, policyPath); err != nil {
errorIf(err, "Unable to remove bucket-policy on bucket %s.", bucket)
err = errorCause(err)
if _, ok := err.(ObjectNotFound); ok {
return BucketPolicyNotFound{Bucket: bucket}
}
@@ -180,14 +209,78 @@ func removeBucketPolicy(bucket string, objAPI ObjectLayer) error {
return nil
}
// writeBucketPolicy - save all bucket policies.
func writeBucketPolicy(bucket string, objAPI ObjectLayer, reader io.Reader, size int64) error {
// Verify if bucket path legal
if !IsValidBucketName(bucket) {
return BucketNameInvalid{Bucket: bucket}
// writeBucketPolicy - save a bucket policy that is assumed to be validated.
func writeBucketPolicy(bucket string, objAPI ObjectLayer, bpy *bucketPolicy) error {
buf, err := json.Marshal(bpy)
if err != nil {
errorIf(err, "Unable to marshal bucket policy '%v' to JSON", *bpy)
return err
}
policyPath := pathJoin(bucketConfigPrefix, bucket, policyJSON)
// Acquire a write lock on policy config before modifying.
objLock := globalNSMutex.NewNSLock(minioMetaBucket, policyPath)
objLock.Lock()
defer objLock.Unlock()
if _, err := objAPI.PutObject(minioMetaBucket, policyPath, int64(len(buf)), bytes.NewReader(buf), nil, ""); err != nil {
errorIf(err, "Unable to set policy for the bucket %s", bucket)
return errorCause(err)
}
return nil
}
func parseAndPersistBucketPolicy(bucket string, policyBytes []byte, objAPI ObjectLayer) APIErrorCode {
// Parse bucket policy.
var policy = &bucketPolicy{}
err := parseBucketPolicy(bytes.NewReader(policyBytes), policy)
if err != nil {
errorIf(err, "Unable to parse bucket policy.")
return ErrInvalidPolicyDocument
}
policyPath := pathJoin(bucketConfigPrefix, bucket, policyJSON)
_, err := objAPI.PutObject(minioMetaBucket, policyPath, size, reader, nil)
return err
// Validate resources in the parsed bucket policy.
if s3Error := checkBucketPolicyResources(bucket, policy); s3Error != ErrNone {
return s3Error
}
// Acquire a write lock on bucket before modifying its configuration.
bucketLock := globalNSMutex.NewNSLock(bucket, "")
bucketLock.Lock()
// Release lock after notifying peers
defer bucketLock.Unlock()
// Save bucket policy.
if err = persistAndNotifyBucketPolicyChange(bucket, policyChange{false, policy}, objAPI); err != nil {
switch err.(type) {
case BucketNameInvalid:
return ErrInvalidBucketName
case BucketNotFound:
return ErrNoSuchBucket
default:
errorIf(err, "Unable to save bucket policy.")
return ErrInternalError
}
}
return ErrNone
}
// persistAndNotifyBucketPolicyChange - takes a policyChange argument,
// persists it to storage, and notifies nodes in the cluster about the
// change. In-memory state is updated in response to the notification.
func persistAndNotifyBucketPolicyChange(bucket string, pCh policyChange, objAPI ObjectLayer) error {
if pCh.IsRemove {
if err := removeBucketPolicy(bucket, objAPI); err != nil {
return err
}
} else {
if pCh.BktPolicy == nil {
return errInvalidArgument
}
if err := writeBucketPolicy(bucket, objAPI, pCh.BktPolicy); err != nil {
return err
}
}
// Notify all peers (including self) to update in-memory state
S3PeersUpdateBucketPolicy(bucket, pCh)
return nil
}
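For reference, a compact sketch of how the new policyChange value drives both set and remove through a single entry point. The types here are simplified stand-ins for the ones in this diff (no persistence or peer notification):

```
package main

import (
	"errors"
	"fmt"
	"sync"
)

// Simplified stand-ins mirroring the diff; the real types live in cmd/.
type bucketPolicy struct{ Version string }

type policyChange struct {
	IsRemove  bool
	BktPolicy *bucketPolicy
}

type bucketPolicies struct {
	rwMutex *sync.RWMutex
	configs map[string]*bucketPolicy
}

// SetBucketPolicy applies a policyChange: delete on IsRemove, else store.
func (bp *bucketPolicies) SetBucketPolicy(bucket string, pCh policyChange) error {
	bp.rwMutex.Lock()
	defer bp.rwMutex.Unlock()
	if pCh.IsRemove {
		delete(bp.configs, bucket)
		return nil
	}
	if pCh.BktPolicy == nil {
		return errors.New("invalid argument")
	}
	bp.configs[bucket] = pCh.BktPolicy
	return nil
}

func main() {
	bp := &bucketPolicies{rwMutex: &sync.RWMutex{}, configs: map[string]*bucketPolicy{}}
	// Set, then remove, through the single policyChange entry point.
	_ = bp.SetBucketPolicy("photos", policyChange{IsRemove: false, BktPolicy: &bucketPolicy{Version: "2012-10-17"}})
	_ = bp.SetBucketPolicy("photos", policyChange{IsRemove: true})
	fmt.Println(len(bp.configs)) // 0
}
```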


@@ -17,6 +17,10 @@
package cmd
import (
"crypto/x509"
"encoding/pem"
"errors"
"io/ioutil"
"os"
"path/filepath"
)
@@ -27,7 +31,11 @@ func createCertsPath() error {
if err != nil {
return err
}
return os.MkdirAll(certsPath, 0700)
if err := os.MkdirAll(certsPath, 0700); err != nil {
return err
}
rootCAsPath := filepath.Join(certsPath, globalMinioCertsCADir)
return os.MkdirAll(rootCAsPath, 0700)
}
// getCertsPath get certs path.
@@ -58,6 +66,25 @@ func mustGetKeyFile() string {
return filepath.Join(mustGetCertsPath(), globalMinioKeyFile)
}
// mustGetCAFiles must get the list of the CA certificates stored in minio config dir
func mustGetCAFiles() (caCerts []string) {
CAsDir := filepath.Join(mustGetCertsPath(), globalMinioCertsCADir)
caFiles, _ := ioutil.ReadDir(CAsDir)
for _, cert := range caFiles {
caCerts = append(caCerts, filepath.Join(CAsDir, cert.Name()))
}
return
}
// mustGetSystemCertPool returns an empty cert pool in case of error (e.g. on Windows).
func mustGetSystemCertPool() *x509.CertPool {
pool, err := x509.SystemCertPool()
if err != nil {
return x509.NewCertPool()
}
return pool
}
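Taken together, mustGetCAFiles and mustGetSystemCertPool support the usual pattern of extending the system trust store with custom CAs. A hedged sketch of that pattern using only stdlib calls; the CA path in main is hypothetical:

```
package main

import (
	"crypto/tls"
	"crypto/x509"
	"fmt"
	"io/ioutil"
)

// loadRootCAs sketches the pattern above: start from the system pool,
// fall back to an empty pool where it is unavailable, then append any
// custom CA files.
func loadRootCAs(caFiles []string) *x509.CertPool {
	pool, err := x509.SystemCertPool()
	if err != nil {
		pool = x509.NewCertPool() // e.g. on Windows, where no system pool is exposed
	}
	for _, f := range caFiles {
		data, err := ioutil.ReadFile(f)
		if err != nil {
			continue // skip unreadable files in this sketch
		}
		pool.AppendCertsFromPEM(data)
	}
	return pool
}

func main() {
	// Hypothetical CA path, for illustration only.
	cfg := &tls.Config{RootCAs: loadRootCAs([]string{"/path/to/certs/CAs/ca.crt"})}
	fmt.Println(cfg.RootCAs != nil) // true
}
```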
// isCertFileExists verifies if cert file exists, returns true if
// found, false otherwise.
func isCertFileExists() bool {
@@ -82,8 +109,39 @@ func isKeyFileExists() bool {
// isSSL - returns true with both cert and key exists.
func isSSL() bool {
if isCertFileExists() && isKeyFileExists() {
return true
}
return false
return isCertFileExists() && isKeyFileExists()
}
// Reads the certificate file and returns a list of parsed certificates.
func readCertificateChain() ([]*x509.Certificate, error) {
bytes, err := ioutil.ReadFile(mustGetCertFile())
if err != nil {
return nil, err
}
// Proceed to parse the certificates.
return parseCertificateChain(bytes)
}
// Parses certificate chain, returns a list of parsed certificates.
func parseCertificateChain(bytes []byte) ([]*x509.Certificate, error) {
var certs []*x509.Certificate
var block *pem.Block
current := bytes
// Parse all certs in the chain.
for len(current) > 0 {
block, current = pem.Decode(current)
if block == nil {
return nil, errors.New("Could not PEM block")
}
// Parse the decoded certificate.
cert, err := x509.ParseCertificate(block.Bytes)
if err != nil {
return nil, err
}
certs = append(certs, cert)
}
return certs, nil
}


@@ -58,3 +58,54 @@ func TestGetFiles(t *testing.T) {
t.Errorf("KeyFile does not contain %s", globalMinioKeyFile)
}
}
// Parses .crt file contents
func TestParseCertificateChain(t *testing.T) {
// given
cert := `-----BEGIN CERTIFICATE-----
MIICdTCCAd4CCQCO5G/W1xcE9TANBgkqhkiG9w0BAQUFADB/MQswCQYDVQQGEwJa
WTEOMAwGA1UECBMFTWluaW8xETAPBgNVBAcTCEludGVybmV0MQ4wDAYDVQQKEwVN
aW5pbzEOMAwGA1UECxMFTWluaW8xDjAMBgNVBAMTBU1pbmlvMR0wGwYJKoZIhvcN
AQkBFg50ZXN0c0BtaW5pby5pbzAeFw0xNjEwMTQxMTM0MjJaFw0xNzEwMTQxMTM0
MjJaMH8xCzAJBgNVBAYTAlpZMQ4wDAYDVQQIEwVNaW5pbzERMA8GA1UEBxMISW50
ZXJuZXQxDjAMBgNVBAoTBU1pbmlvMQ4wDAYDVQQLEwVNaW5pbzEOMAwGA1UEAxMF
TWluaW8xHTAbBgkqhkiG9w0BCQEWDnRlc3RzQG1pbmlvLmlvMIGfMA0GCSqGSIb3
DQEBAQUAA4GNADCBiQKBgQDwNUYB/Sj79WsUE8qnXzzh2glSzWxUE79sCOpQYK83
HWkrl5WxlG8ZxDR1IQV9Ex/lzigJu8G+KXahon6a+3n5GhNrYRe5kIXHQHz0qvv4
aMulqlnYpvSfC83aaO9GVBtwXS/O4Nykd7QBg4nZlazVmsGk7POOjhpjGShRsqpU
JwIDAQABMA0GCSqGSIb3DQEBBQUAA4GBALqjOA6bD8BEl7hkQ8XwX/owSAL0URDe
nUfCOsXgIIAqgw4uTCLOfCJVZNKmRT+KguvPAQ6Z80vau2UxPX5Q2Q+OHXDRrEnK
FjqSBgLP06Qw7a++bshlWGTt5bHWOneW3EQikedckVuIKPkOCib9yGi4VmBBjdFE
M9ofSEt/bdRD
-----END CERTIFICATE-----`
// when
certs, err := parseCertificateChain([]byte(cert))
// then
if err != nil {
t.Fatalf("Could not parse certificate: %s", err)
}
if len(certs) != 1 {
t.Fatalf("Expected number of certificates in chain was 1, actual: %d", len(certs))
}
if certs[0].Subject.CommonName != "Minio" {
t.Fatalf("Expected Subject.CommonName was Minio, actual: %s", certs[0].Subject.CommonName)
}
}
// Parses invalid .crt file contents and returns error
func TestParseInvalidCertificateChain(t *testing.T) {
// given
cert := `This is not a valid certificate`
// when
_, err := parseCertificateChain([]byte(cert))
// then
if err == nil {
t.Fatalf("Expected error but none occurred")
}
}


@@ -17,7 +17,6 @@
package cmd
import (
"fmt"
"net"
"os"
"syscall"
@@ -32,10 +31,16 @@ import (
// This causes confusion on Mac OSX that minio server is not reachable
// on 127.0.0.1 even though minio server is running. So before we start
// the minio server we make sure that the port is free on each tcp network.
func checkPortAvailability(port int) error {
//
// Port is string on purpose here.
// https://github.com/golang/go/issues/16142#issuecomment-245912773
//
// "Keep in mind that ports in Go are strings: https://play.golang.org/p/zk2WEri_E9"
// - @bradfitz
func checkPortAvailability(portStr string) error {
network := [3]string{"tcp", "tcp4", "tcp6"}
for _, n := range network {
l, err := net.Listen(n, fmt.Sprintf(":%d", port))
l, err := net.Listen(n, net.JoinHostPort("", portStr))
if err != nil {
if isAddrInUse(err) {
// Return error if another process is listening on the


@@ -17,7 +17,6 @@
package cmd
import (
"fmt"
"net"
"runtime"
"testing"
@@ -26,7 +25,7 @@ import (
// Tests for port availability logic written for server startup sequence.
func TestCheckPortAvailability(t *testing.T) {
tests := []struct {
port int
port string
}{
{getFreePort()},
{getFreePort()},
@@ -35,11 +34,11 @@ func TestCheckPortAvailability(t *testing.T) {
// This test should pass if the ports are available
err := checkPortAvailability(test.port)
if err != nil {
t.Fatalf("checkPortAvailability test failed for port: %d. Error: %v", test.port, err)
t.Fatalf("checkPortAvailability test failed for port: %s. Error: %v", test.port, err)
}
// Now use the ports and check again
ln, err := net.Listen("tcp", fmt.Sprintf(":%d", test.port))
ln, err := net.Listen("tcp", net.JoinHostPort("", test.port))
if err != nil {
t.Fail()
}
@@ -49,7 +48,7 @@ func TestCheckPortAvailability(t *testing.T) {
// Skip if the os is windows due to https://github.com/golang/go/issues/7598
if err == nil && runtime.GOOS != "windows" {
t.Fatalf("checkPortAvailability should fail for port: %d. Error: %v", test.port, err)
t.Fatalf("checkPortAvailability should fail for port: %s. Error: %v", test.port, err)
}
}
}
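The hunks above switch the port from int to string and build the listen address with net.JoinHostPort. A standalone sketch of that convention (port 9000 is just an example):

```
package main

import (
	"fmt"
	"net"
)

// Ports travel as strings in the net package; JoinHostPort with an
// empty host yields ":9000", which Listen binds on all interfaces.
func main() {
	addr := net.JoinHostPort("", "9000") // port value is illustrative
	fmt.Println(addr)                    // ":9000"
	l, err := net.Listen("tcp", addr)
	if err != nil {
		fmt.Println("port unavailable:", err)
		return
	}
	defer l.Close()
	fmt.Println("listening on", l.Addr())
}
```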


@@ -24,19 +24,6 @@ var commands = []cli.Command{}
// Collection of minio commands currently supported in a trie tree.
var commandsTree = newTrie()
// Collection of minio flags currently supported.
var globalFlags = []cli.Flag{
cli.StringFlag{
Name: "config-dir, C",
Value: mustGetConfigPath(),
Usage: "Path to configuration folder.",
},
cli.BoolFlag{
Name: "quiet",
Usage: "Suppress chatty output.",
},
}
// registerCommand registers a cli command.
func registerCommand(command cli.Command) {
commands = append(commands, command)


@@ -17,7 +17,7 @@
package cmd
import (
"errors"
"fmt"
"os"
"path/filepath"
@@ -25,51 +25,84 @@ import (
"github.com/minio/minio/pkg/quick"
)
func migrateConfig() {
func migrateConfig() error {
// Purge all configs with version '1'.
purgeV1()
if err := purgeV1(); err != nil {
return err
}
// Migrate version '2' to '3'.
migrateV2ToV3()
if err := migrateV2ToV3(); err != nil {
return err
}
// Migrate version '3' to '4'.
migrateV3ToV4()
if err := migrateV3ToV4(); err != nil {
return err
}
// Migrate version '4' to '5'.
migrateV4ToV5()
if err := migrateV4ToV5(); err != nil {
return err
}
// Migrate version '5' to '6'.
migrateV5ToV6()
if err := migrateV5ToV6(); err != nil {
return err
}
// Migrate version '6' to '7'.
migrateV6ToV7()
if err := migrateV6ToV7(); err != nil {
return err
}
// Migrate version '7' to '8'.
if err := migrateV7ToV8(); err != nil {
return err
}
// Migrate version '8' to '9'.
if err := migrateV8ToV9(); err != nil {
return err
}
// Migrate version '9' to '10'.
if err := migrateV9ToV10(); err != nil {
return err
}
return nil
}
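The chain above is the same load-check-convert-save step repeated once per version, aborting on the first error. A condensed, stand-alone sketch of that control flow; the step names in the comments are illustrative:

```
package main

import (
	"errors"
	"fmt"
)

// runMigrations condenses the chain above: run each step in order and
// surface the first failure.
func runMigrations(steps []func() error) error {
	for _, step := range steps {
		if err := step(); err != nil {
			return err
		}
	}
	return nil
}

func main() {
	steps := []func() error{
		func() error { return nil },                         // e.g. purgeV1 succeeds
		func() error { return errors.New("bad v2 config") }, // e.g. migrateV2ToV3 fails
	}
	if err := runMigrations(steps); err != nil {
		fmt.Println("migration aborted:", err) // later steps never run
	}
}
```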
// Version '1' is not supported anymore and deprecated, safe to delete.
func purgeV1() {
func purgeV1() error {
cv1, err := loadConfigV1()
if err != nil && os.IsNotExist(err) {
return
if err != nil {
if os.IsNotExist(err) {
return nil
}
return fmt.Errorf("Unable to load config version 1. %v", err)
}
fatalIf(err, "Unable to load config version 1.")
if cv1.Version == "1" {
console.Println("Removed unsupported config version 1.")
/// Purge old fsUsers.json file
configPath, err := getConfigPath()
fatalIf(err, "Unable to retrieve config path.")
if err != nil {
return fmt.Errorf("Unable to retrieve config path. %v", err)
}
configFile := filepath.Join(configPath, "fsUsers.json")
removeAll(configFile)
return nil
}
fatalIf(errors.New(""), "Failed to migrate unrecognized config version "+cv1.Version+".")
return fmt.Errorf("Failed to migrate unrecognized config version " + cv1.Version + ".")
}
// Version '2' to '3' config migration adds new fields and re-orders
// previous fields. Simplifies config for future additions.
func migrateV2ToV3() {
func migrateV2ToV3() error {
cv2, err := loadConfigV2()
if err != nil && os.IsNotExist(err) {
return
if err != nil {
if os.IsNotExist(err) {
return nil
}
return fmt.Errorf("Unable to load config version 2. %v", err)
}
fatalIf(err, "Unable to load config version 2.")
if cv2.Version != "2" {
return
return nil
}
srvConfig := &configV3{}
srvConfig.Version = "3"
@@ -95,7 +128,7 @@ func migrateV2ToV3() {
}
srvConfig.Logger.File = flogger
slogger := syslogLogger{}
slogger := syslogLoggerV3{}
slogger.Level = "debug"
if cv2.SyslogLogger.Addr != "" {
slogger.Enable = true
@@ -104,29 +137,38 @@ func migrateV2ToV3() {
srvConfig.Logger.Syslog = slogger
qc, err := quick.New(srvConfig)
fatalIf(err, "Unable to initialize config.")
if err != nil {
return fmt.Errorf("Unable to initialize config. %v", err)
}
configFile, err := getConfigFile()
fatalIf(err, "Unable to get config file.")
if err != nil {
return fmt.Errorf("Unable to get config file. %v", err)
}
// Migrate the config.
err = qc.Save(configFile)
fatalIf(err, "Failed to migrate config from "+cv2.Version+" to "+srvConfig.Version+" failed.")
if err != nil {
return fmt.Errorf("Failed to migrate config from "+cv2.Version+" to "+srvConfig.Version+" failed. %v", err)
}
console.Println("Migration from version " + cv2.Version + " to " + srvConfig.Version + " completed successfully.")
return nil
}
// Version '3' to '4' migrates config, removes previous fields related
// to backend types and server address. This change further simplifies
// the config for future additions.
func migrateV3ToV4() {
func migrateV3ToV4() error {
cv3, err := loadConfigV3()
if err != nil && os.IsNotExist(err) {
return
if err != nil {
if os.IsNotExist(err) {
return nil
}
return fmt.Errorf("Unable to load config version 3. %v", err)
}
fatalIf(err, "Unable to load config version 3.")
if cv3.Version != "3" {
return
return nil
}
// Save only the new fields, ignore the rest.
@@ -143,27 +185,36 @@ func migrateV3ToV4() {
srvConfig.Logger.Syslog = cv3.Logger.Syslog
qc, err := quick.New(srvConfig)
fatalIf(err, "Unable to initialize the quick config.")
if err != nil {
return fmt.Errorf("Unable to initialize the quick config. %v", err)
}
configFile, err := getConfigFile()
fatalIf(err, "Unable to get config file.")
if err != nil {
return fmt.Errorf("Unable to get config file. %v", err)
}
err = qc.Save(configFile)
fatalIf(err, "Failed to migrate config from "+cv3.Version+" to "+srvConfig.Version+" failed.")
if err != nil {
return fmt.Errorf("Failed to migrate config from "+cv3.Version+" to "+srvConfig.Version+" failed. %v", err)
}
console.Println("Migration from version " + cv3.Version + " to " + srvConfig.Version + " completed successfully.")
return nil
}
// Version '4' to '5' migrates config, removes previous fields related
// to backend types and server address. This change further simplifies
// the config for future additions.
func migrateV4ToV5() {
func migrateV4ToV5() error {
cv4, err := loadConfigV4()
if err != nil && os.IsNotExist(err) {
return
if err != nil {
if os.IsNotExist(err) {
return nil
}
return fmt.Errorf("Unable to load config version 4. %v", err)
}
fatalIf(err, "Unable to load config version 4.")
if cv4.Version != "4" {
return
return nil
}
// Save only the new fields, ignore the rest.
@@ -183,27 +234,36 @@ func migrateV4ToV5() {
srvConfig.Logger.Redis.Enable = false
qc, err := quick.New(srvConfig)
fatalIf(err, "Unable to initialize the quick config.")
if err != nil {
return fmt.Errorf("Unable to initialize the quick config. %v", err)
}
configFile, err := getConfigFile()
fatalIf(err, "Unable to get config file.")
if err != nil {
return fmt.Errorf("Unable to get config file. %v", err)
}
err = qc.Save(configFile)
fatalIf(err, "Failed to migrate config from "+cv4.Version+" to "+srvConfig.Version+" failed.")
if err != nil {
return fmt.Errorf("Failed to migrate config from "+cv4.Version+" to "+srvConfig.Version+" failed. %v", err)
}
console.Println("Migration from version " + cv4.Version + " to " + srvConfig.Version + " completed successfully.")
return nil
}
// Version '5' to '6' migrates config, removes previous fields related
// to backend types and server address. This change further simplifies
// the config for future additions.
func migrateV5ToV6() {
func migrateV5ToV6() error {
cv5, err := loadConfigV5()
if err != nil && os.IsNotExist(err) {
return
if err != nil {
if os.IsNotExist(err) {
return nil
}
return fmt.Errorf("Unable to load config version 5. %v", err)
}
fatalIf(err, "Unable to load config version 5.")
if cv5.Version != "5" {
return
return nil
}
// Save only the new fields, ignore the rest.
@@ -250,32 +310,41 @@ func migrateV5ToV6() {
}
qc, err := quick.New(srvConfig)
fatalIf(err, "Unable to initialize the quick config.")
if err != nil {
return fmt.Errorf("Unable to initialize the quick config. %v", err)
}
configFile, err := getConfigFile()
fatalIf(err, "Unable to get config file.")
if err != nil {
return fmt.Errorf("Unable to get config file. %v", err)
}
err = qc.Save(configFile)
fatalIf(err, "Failed to migrate config from "+cv5.Version+" to "+srvConfig.Version+" failed.")
if err != nil {
return fmt.Errorf("Failed to migrate config from "+cv5.Version+" to "+srvConfig.Version+" failed. %v", err)
}
console.Println("Migration from version " + cv5.Version + " to " + srvConfig.Version + " completed successfully.")
return nil
}
// Version '6' to '7' migrates config, removes previous fields related
// to backend types and server address. This change further simplifies
// the config for future additions.
func migrateV6ToV7() {
func migrateV6ToV7() error {
cv6, err := loadConfigV6()
if err != nil && os.IsNotExist(err) {
return
if err != nil {
if os.IsNotExist(err) {
return nil
}
return fmt.Errorf("Unable to load config version 6. %v", err)
}
fatalIf(err, "Unable to load config version 6.")
if cv6.Version != "6" {
return
return nil
}
// Save only the new fields, ignore the rest.
srvConfig := &serverConfigV7{}
srvConfig.Version = globalMinioConfigVersion
srvConfig.Version = "7"
srvConfig.Credential = cv6.Credential
srvConfig.Region = cv6.Region
if srvConfig.Region == "" {
@@ -305,12 +374,262 @@ func migrateV6ToV7() {
}
qc, err := quick.New(srvConfig)
fatalIf(err, "Unable to initialize the quick config.")
if err != nil {
return fmt.Errorf("Unable to initialize the quick config. %v", err)
}
configFile, err := getConfigFile()
fatalIf(err, "Unable to get config file.")
if err != nil {
return fmt.Errorf("Unable to get config file. %v", err)
}
err = qc.Save(configFile)
fatalIf(err, "Failed to migrate config from "+cv6.Version+" to "+srvConfig.Version+" failed.")
if err != nil {
return fmt.Errorf("Failed to migrate config from "+cv6.Version+" to "+srvConfig.Version+" failed. %v", err)
}
console.Println("Migration from version " + cv6.Version + " to " + srvConfig.Version + " completed successfully.")
return nil
}
// Version '7' to '8' migrates config, removes previous fields related
// to backend types and server address. This change further simplifies
// the config for future additions.
func migrateV7ToV8() error {
cv7, err := loadConfigV7()
if err != nil {
if os.IsNotExist(err) {
return nil
}
return fmt.Errorf("Unable to load config version 7. %v", err)
}
if cv7.Version != "7" {
return nil
}
// Save only the new fields, ignore the rest.
srvConfig := &serverConfigV8{}
srvConfig.Version = "8"
srvConfig.Credential = cv7.Credential
srvConfig.Region = cv7.Region
if srvConfig.Region == "" {
// Region needs to be set for AWS Signature Version 4.
srvConfig.Region = "us-east-1"
}
srvConfig.Logger.Console = cv7.Logger.Console
srvConfig.Logger.File = cv7.Logger.File
srvConfig.Logger.Syslog = cv7.Logger.Syslog
srvConfig.Notify.AMQP = make(map[string]amqpNotify)
srvConfig.Notify.NATS = make(map[string]natsNotify)
srvConfig.Notify.ElasticSearch = make(map[string]elasticSearchNotify)
srvConfig.Notify.Redis = make(map[string]redisNotify)
srvConfig.Notify.PostgreSQL = make(map[string]postgreSQLNotify)
if len(cv7.Notify.AMQP) == 0 {
srvConfig.Notify.AMQP["1"] = amqpNotify{}
} else {
srvConfig.Notify.AMQP = cv7.Notify.AMQP
}
if len(cv7.Notify.NATS) == 0 {
srvConfig.Notify.NATS["1"] = natsNotify{}
} else {
srvConfig.Notify.NATS = cv7.Notify.NATS
}
if len(cv7.Notify.ElasticSearch) == 0 {
srvConfig.Notify.ElasticSearch["1"] = elasticSearchNotify{}
} else {
srvConfig.Notify.ElasticSearch = cv7.Notify.ElasticSearch
}
if len(cv7.Notify.Redis) == 0 {
srvConfig.Notify.Redis["1"] = redisNotify{}
} else {
srvConfig.Notify.Redis = cv7.Notify.Redis
}
qc, err := quick.New(srvConfig)
if err != nil {
return fmt.Errorf("Unable to initialize the quick config. %v", err)
}
configFile, err := getConfigFile()
if err != nil {
return fmt.Errorf("Unable to get config file. %v", err)
}
err = qc.Save(configFile)
if err != nil {
return fmt.Errorf("Failed to migrate config from "+cv7.Version+" to "+srvConfig.Version+" failed. %v", err)
}
console.Println("Migration from version " + cv7.Version + " to " + srvConfig.Version + " completed successfully.")
return nil
}
// Version '8' to '9' migration. Adds postgresql notifier
// configuration, but it's otherwise the same as V8.
func migrateV8ToV9() error {
cv8, err := loadConfigV8()
if err != nil {
if os.IsNotExist(err) {
return nil
}
return fmt.Errorf("Unable to load config version 8. %v", err)
}
if cv8.Version != "8" {
return nil
}
// Copy over fields from V8 into V9 config struct
srvConfig := &serverConfigV9{}
srvConfig.Version = "9"
srvConfig.Credential = cv8.Credential
srvConfig.Region = cv8.Region
if srvConfig.Region == "" {
// Region needs to be set for AWS Signature Version 4.
srvConfig.Region = "us-east-1"
}
srvConfig.Logger.Console = cv8.Logger.Console
srvConfig.Logger.Console.Level = "error"
srvConfig.Logger.File = cv8.Logger.File
srvConfig.Logger.Syslog = cv8.Logger.Syslog
// check and set notifiers config
if len(cv8.Notify.AMQP) == 0 {
srvConfig.Notify.AMQP = make(map[string]amqpNotify)
srvConfig.Notify.AMQP["1"] = amqpNotify{}
} else {
srvConfig.Notify.AMQP = cv8.Notify.AMQP
}
if len(cv8.Notify.NATS) == 0 {
srvConfig.Notify.NATS = make(map[string]natsNotify)
srvConfig.Notify.NATS["1"] = natsNotify{}
} else {
srvConfig.Notify.NATS = cv8.Notify.NATS
}
if len(cv8.Notify.ElasticSearch) == 0 {
srvConfig.Notify.ElasticSearch = make(map[string]elasticSearchNotify)
srvConfig.Notify.ElasticSearch["1"] = elasticSearchNotify{}
} else {
srvConfig.Notify.ElasticSearch = cv8.Notify.ElasticSearch
}
if len(cv8.Notify.Redis) == 0 {
srvConfig.Notify.Redis = make(map[string]redisNotify)
srvConfig.Notify.Redis["1"] = redisNotify{}
} else {
srvConfig.Notify.Redis = cv8.Notify.Redis
}
if len(cv8.Notify.PostgreSQL) == 0 {
srvConfig.Notify.PostgreSQL = make(map[string]postgreSQLNotify)
srvConfig.Notify.PostgreSQL["1"] = postgreSQLNotify{}
} else {
srvConfig.Notify.PostgreSQL = cv8.Notify.PostgreSQL
}
qc, err := quick.New(srvConfig)
if err != nil {
return fmt.Errorf("Unable to initialize the quick config. %v",
err)
}
configFile, err := getConfigFile()
if err != nil {
return fmt.Errorf("Unable to get config file. %v", err)
}
err = qc.Save(configFile)
if err != nil {
return fmt.Errorf(
"Failed to migrate config from "+
cv8.Version+" to "+srvConfig.Version+
" failed. %v", err,
)
}
console.Println(
"Migration from version " +
cv8.Version + " to " + srvConfig.Version +
" completed successfully.",
)
return nil
}
// Version '9' to '10' migration. Removes the syslog config,
// but is otherwise the same as V9.
func migrateV9ToV10() error {
cv9, err := loadConfigV9()
if err != nil {
if os.IsNotExist(err) {
return nil
}
return fmt.Errorf("Unable to load config version 9. %v", err)
}
if cv9.Version != "9" {
return nil
}
// Copy over fields from V9 into V10 config struct
srvConfig := &serverConfigV10{}
srvConfig.Version = "10"
srvConfig.Credential = cv9.Credential
srvConfig.Region = cv9.Region
if srvConfig.Region == "" {
// Region needs to be set for AWS Signature Version 4.
srvConfig.Region = "us-east-1"
}
srvConfig.Logger.Console = cv9.Logger.Console
srvConfig.Logger.File = cv9.Logger.File
// check and set notifiers config
if len(cv9.Notify.AMQP) == 0 {
srvConfig.Notify.AMQP = make(map[string]amqpNotify)
srvConfig.Notify.AMQP["1"] = amqpNotify{}
} else {
srvConfig.Notify.AMQP = cv9.Notify.AMQP
}
if len(cv9.Notify.NATS) == 0 {
srvConfig.Notify.NATS = make(map[string]natsNotify)
srvConfig.Notify.NATS["1"] = natsNotify{}
} else {
srvConfig.Notify.NATS = cv9.Notify.NATS
}
if len(cv9.Notify.ElasticSearch) == 0 {
srvConfig.Notify.ElasticSearch = make(map[string]elasticSearchNotify)
srvConfig.Notify.ElasticSearch["1"] = elasticSearchNotify{}
} else {
srvConfig.Notify.ElasticSearch = cv9.Notify.ElasticSearch
}
if len(cv9.Notify.Redis) == 0 {
srvConfig.Notify.Redis = make(map[string]redisNotify)
srvConfig.Notify.Redis["1"] = redisNotify{}
} else {
srvConfig.Notify.Redis = cv9.Notify.Redis
}
if len(cv9.Notify.PostgreSQL) == 0 {
srvConfig.Notify.PostgreSQL = make(map[string]postgreSQLNotify)
srvConfig.Notify.PostgreSQL["1"] = postgreSQLNotify{}
} else {
srvConfig.Notify.PostgreSQL = cv9.Notify.PostgreSQL
}
qc, err := quick.New(srvConfig)
if err != nil {
return fmt.Errorf("Unable to initialize the quick config. %v",
err)
}
configFile, err := getConfigFile()
if err != nil {
return fmt.Errorf("Unable to get config file. %v", err)
}
err = qc.Save(configFile)
if err != nil {
return fmt.Errorf(
"Failed to migrate config from "+
cv9.Version+" to "+srvConfig.Version+
" failed. %v", err,
)
}
console.Println(
"Migration from version " +
cv9.Version + " to " + srvConfig.Version +
" completed successfully.",
)
return nil
}
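Every migration from V7 onward repeats one idiom per notifier map: carry existing entries forward, or seed a single disabled default under key "1". A minimal sketch of that idiom for one notifier type (amqpNotify here is a stand-in struct):

```
package main

import "fmt"

// amqpNotify is a stand-in struct for this sketch.
type amqpNotify struct{ Enable bool }

// ensureDefaultAMQP captures the idiom repeated per notifier above:
// carry old entries forward, or seed one disabled default under "1".
func ensureDefaultAMQP(old map[string]amqpNotify) map[string]amqpNotify {
	if len(old) == 0 {
		return map[string]amqpNotify{"1": {}}
	}
	return old
}

func main() {
	fmt.Println(ensureDefaultAMQP(nil))                                           // map[1:{false}]
	fmt.Println(ensureDefaultAMQP(map[string]amqpNotify{"prod": {Enable: true}})) // carried forward
}
```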

cmd/config-migrate_test.go

@@ -0,0 +1,203 @@
/*
* Minio Cloud Storage, (C) 2016 Minio, Inc.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package cmd
import (
"io/ioutil"
"os"
"testing"
)
// Test if config v1 is purged
func TestServerConfigMigrateV1(t *testing.T) {
rootPath, err := newTestConfig("us-east-1")
if err != nil {
t.Fatalf("Init Test config failed")
}
// remove the root directory after the test ends.
defer removeAll(rootPath)
setGlobalConfigPath(rootPath)
// Create a V1 config json file and store it
configJSON := "{ \"version\":\"1\", \"accessKeyId\":\"abcde\", \"secretAccessKey\":\"abcdefgh\"}"
configPath := rootPath + "/fsUsers.json"
if err := ioutil.WriteFile(configPath, []byte(configJSON), 0644); err != nil {
t.Fatal("Unexpected error: ", err)
}
// Fire a migrateConfig()
if err := migrateConfig(); err != nil {
t.Fatal("Unexpected error: ", err)
}
// Check if config v1 is removed from filesystem
if _, err := os.Stat(configPath); err == nil || !os.IsNotExist(err) {
t.Fatal("Config V1 file is not purged")
}
// Initialize server config and check again if everything is fine
if _, err := initConfig(); err != nil {
t.Fatalf("Unable to initialize from updated config file %s", err)
}
}
// Test if all migrate code returns nil when config file does not
// exist
func TestServerConfigMigrateInexistentConfig(t *testing.T) {
rootPath, err := newTestConfig("us-east-1")
if err != nil {
t.Fatalf("Init Test config failed")
}
// remove the root directory after the test ends.
defer removeAll(rootPath)
setGlobalConfigPath(rootPath)
configPath := rootPath + "/" + globalMinioConfigFile
// Remove config file
if err := os.Remove(configPath); err != nil {
t.Fatal("Unexpected error: ", err)
}
if err := migrateV2ToV3(); err != nil {
t.Fatal("migrate v2 to v3 should succeed when no config file is found")
}
if err := migrateV3ToV4(); err != nil {
t.Fatal("migrate v3 to v4 should succeed when no config file is found")
}
if err := migrateV4ToV5(); err != nil {
t.Fatal("migrate v4 to v5 should succeed when no config file is found")
}
if err := migrateV5ToV6(); err != nil {
t.Fatal("migrate v5 to v6 should succeed when no config file is found")
}
if err := migrateV6ToV7(); err != nil {
t.Fatal("migrate v6 to v7 should succeed when no config file is found")
}
if err := migrateV7ToV8(); err != nil {
t.Fatal("migrate v7 to v8 should succeed when no config file is found")
}
if err := migrateV8ToV9(); err != nil {
t.Fatal("migrate v8 to v9 should succeed when no config file is found")
}
if err := migrateV9ToV10(); err != nil {
t.Fatal("migrate v9 to v10 should succeed when no config file is found")
}
}
// Test if a config migration from v2 to v10 is successfully done
func TestServerConfigMigrateV2toV10(t *testing.T) {
rootPath, err := newTestConfig("us-east-1")
if err != nil {
t.Fatalf("Init Test config failed")
}
// remove the root directory after the test ends.
defer removeAll(rootPath)
setGlobalConfigPath(rootPath)
configPath := rootPath + "/" + globalMinioConfigFile
// Create a corrupted config file
if err := ioutil.WriteFile(configPath, []byte("{ \"version\":\"2\","), 0644); err != nil {
t.Fatal("Unexpected error: ", err)
}
// Fire a migrateConfig()
if err := migrateConfig(); err == nil {
t.Fatal("migration should fail with corrupted config file")
}
accessKey := "accessfoo"
secretKey := "secretfoo"
// Create a V2 config json file and store it
configJSON := "{ \"version\":\"2\", \"credentials\": {\"accessKeyId\":\"" + accessKey + "\", \"secretAccessKey\":\"" + secretKey + "\", \"region\":\"us-east-1\"}, \"mongoLogger\":{\"addr\":\"127.0.0.1:3543\", \"db\":\"foodb\", \"collection\":\"foo\"}, \"syslogLogger\":{\"network\":\"127.0.0.1:543\", \"addr\":\"addr\"}, \"fileLogger\":{\"filename\":\"log.out\"}}"
if err := ioutil.WriteFile(configPath, []byte(configJSON), 0644); err != nil {
t.Fatal("Unexpected error: ", err)
}
// Fire a migrateConfig()
if err := migrateConfig(); err != nil {
t.Fatal("Unexpected error: ", err)
}
// Initialize server config and check again if everything is fine
if _, err := initConfig(); err != nil {
t.Fatalf("Unable to initialize from updated config file %s", err)
}
// Check the version number in the upgraded config file
expectedVersion := globalMinioConfigVersion
if serverConfig.Version != expectedVersion {
t.Fatalf("Expect version "+expectedVersion+", found: %v", serverConfig.Version)
}
// Check if accessKey and secretKey are not altered during migration
if serverConfig.Credential.AccessKeyID != accessKey {
t.Fatalf("Access key lost during migration, expected: %v, found:%v", accessKey, serverConfig.Credential.AccessKeyID)
}
if serverConfig.Credential.SecretAccessKey != secretKey {
t.Fatalf("Secret key lost during migration, expected: %v, found: %v", secretKey, serverConfig.Credential.SecretAccessKey)
}
// Initialize server config and check again if everything is fine
if _, err := initConfig(); err != nil {
t.Fatalf("Unable to initialize from updated config file %s", err)
}
}
// Test if all migrate code returns error with corrupted config files
func TestServerConfigMigrateFaultyConfig(t *testing.T) {
rootPath, err := newTestConfig("us-east-1")
if err != nil {
t.Fatalf("Init Test config failed")
}
// remove the root directory after the test ends.
defer removeAll(rootPath)
setGlobalConfigPath(rootPath)
configPath := rootPath + "/" + globalMinioConfigFile
// Create a corrupted config file
if err := ioutil.WriteFile(configPath, []byte("{ \"version\":\""), 0644); err != nil {
t.Fatal("Unexpected error: ", err)
}
// Test the different migrate versions and make sure each returns an error
if err := migrateV2ToV3(); err == nil {
t.Fatal("migrateConfigV2ToV3() should fail with a corrupted json")
}
if err := migrateV3ToV4(); err == nil {
t.Fatal("migrateConfigV3ToV4() should fail with a corrupted json")
}
if err := migrateV4ToV5(); err == nil {
t.Fatal("migrateConfigV4ToV5() should fail with a corrupted json")
}
if err := migrateV5ToV6(); err == nil {
t.Fatal("migrateConfigV5ToV6() should fail with a corrupted json")
}
if err := migrateV6ToV7(); err == nil {
t.Fatal("migrateConfigV6ToV7() should fail with a corrupted json")
}
if err := migrateV7ToV8(); err == nil {
t.Fatal("migrateConfigV7ToV8() should fail with a corrupted json")
}
if err := migrateV8ToV9(); err == nil {
t.Fatal("migrateConfigV8ToV9() should fail with a corrupted json")
}
if err := migrateV9ToV10(); err == nil {
t.Fatal("migrateConfigV9ToV10() should fail with a corrupted json")
}
}


@@ -3,6 +3,7 @@ package cmd
import (
"os"
"path/filepath"
"sync"
"github.com/minio/minio/pkg/quick"
)
@@ -88,6 +89,13 @@ type backendV3 struct {
Disks []string `json:"disks,omitempty"`
}
// syslogLogger v3
type syslogLoggerV3 struct {
Enable bool `json:"enable"`
Addr string `json:"address"`
Level string `json:"level"`
}
// loggerV3 type.
type loggerV3 struct {
Console struct {
@@ -275,6 +283,12 @@ func loadConfigV5() (*configV5, error) {
return c, nil
}
type loggerV6 struct {
Console consoleLogger `json:"console"`
File fileLogger `json:"file"`
Syslog syslogLoggerV3 `json:"syslog"`
}
// configV6 server configuration version '6'.
type configV6 struct {
Version string `json:"version"`
@@ -284,7 +298,7 @@ type configV6 struct {
Region string `json:"region"`
// Additional error logging configuration.
Logger logger `json:"logger"`
Logger loggerV6 `json:"logger"`
// Notification queue configuration.
Notify notifier `json:"notify"`
@@ -310,3 +324,122 @@ func loadConfigV6() (*configV6, error) {
}
return c, nil
}
// configV7 server configuration version '7'.
type serverConfigV7 struct {
Version string `json:"version"`
// S3 API configuration.
Credential credential `json:"credential"`
Region string `json:"region"`
// Additional error logging configuration.
Logger loggerV6 `json:"logger"`
// Notification queue configuration.
Notify notifier `json:"notify"`
// Read Write mutex.
rwMutex *sync.RWMutex
}
// loadConfigV7 load config version '7'.
func loadConfigV7() (*serverConfigV7, error) {
configFile, err := getConfigFile()
if err != nil {
return nil, err
}
if _, err = os.Stat(configFile); err != nil {
return nil, err
}
c := &serverConfigV7{}
c.Version = "7"
qc, err := quick.New(c)
if err != nil {
return nil, err
}
if err := qc.Load(configFile); err != nil {
return nil, err
}
return c, nil
}
// serverConfigV8 server configuration version '8'. Adds NATS notifier
// configuration.
type serverConfigV8 struct {
Version string `json:"version"`
// S3 API configuration.
Credential credential `json:"credential"`
Region string `json:"region"`
// Additional error logging configuration.
Logger loggerV6 `json:"logger"`
// Notification queue configuration.
Notify notifier `json:"notify"`
// Read Write mutex.
rwMutex *sync.RWMutex
}
// loadConfigV8 load config version '8'.
func loadConfigV8() (*serverConfigV8, error) {
configFile, err := getConfigFile()
if err != nil {
return nil, err
}
if _, err = os.Stat(configFile); err != nil {
return nil, err
}
c := &serverConfigV8{}
c.Version = "8"
qc, err := quick.New(c)
if err != nil {
return nil, err
}
if err := qc.Load(configFile); err != nil {
return nil, err
}
return c, nil
}
// serverConfigV9 server configuration version '9'. Adds PostgreSQL
// notifier configuration.
type serverConfigV9 struct {
Version string `json:"version"`
// S3 API configuration.
Credential credential `json:"credential"`
Region string `json:"region"`
// Additional error logging configuration.
Logger loggerV6 `json:"logger"`
// Notification queue configuration.
Notify notifier `json:"notify"`
// Read Write mutex.
rwMutex *sync.RWMutex
}
func loadConfigV9() (*serverConfigV9, error) {
configFile, err := getConfigFile()
if err != nil {
return nil, err
}
if _, err = os.Stat(configFile); err != nil {
return nil, err
}
srvCfg := &serverConfigV9{}
srvCfg.Version = "9"
srvCfg.rwMutex = &sync.RWMutex{}
qc, err := quick.New(srvCfg)
if err != nil {
return nil, err
}
if err := qc.Load(configFile); err != nil {
return nil, err
}
return srvCfg, nil
}
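Each loadConfigVN helper follows one shape: stat the file, seed the struct with the expected version, then unmarshal on top of the defaults so absent fields keep their values. A sketch of that shape using encoding/json in place of the quick package; "config.json" is a hypothetical path:

```
package main

import (
	"encoding/json"
	"fmt"
	"io/ioutil"
	"os"
)

// configV9 is trimmed to two fields for this sketch.
type configV9 struct {
	Version string `json:"version"`
	Region  string `json:"region"`
}

// loadV9 follows the same shape as the helpers above: stat the file,
// seed the expected version, then unmarshal on top of the defaults.
func loadV9(path string) (*configV9, error) {
	if _, err := os.Stat(path); err != nil {
		return nil, err // callers map os.IsNotExist to "nothing to migrate"
	}
	data, err := ioutil.ReadFile(path)
	if err != nil {
		return nil, err
	}
	c := &configV9{Version: "9"}
	if err := json.Unmarshal(data, c); err != nil {
		return nil, err
	}
	return c, nil
}

func main() {
	c, err := loadV9("config.json") // hypothetical path
	if err != nil {
		fmt.Println("load failed:", err)
		return
	}
	fmt.Println(c.Version, c.Region)
}
```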

cmd/config-v10.go

@@ -0,0 +1,320 @@
/*
* Minio Cloud Storage, (C) 2016 Minio, Inc.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package cmd
import (
"os"
"sync"
"github.com/minio/minio/pkg/quick"
)
// Read Write mutex for safe access to ServerConfig.
var serverConfigMu sync.RWMutex
// serverConfigV10 server configuration version '10' which is like version '9'
// except it drops support of syslog config.
type serverConfigV10 struct {
Version string `json:"version"`
// S3 API configuration.
Credential credential `json:"credential"`
Region string `json:"region"`
// Additional error logging configuration.
Logger logger `json:"logger"`
// Notification queue configuration.
Notify notifier `json:"notify"`
}
// initConfig - initialize server config and indicate whether a new config file was created or an existing one was loaded
func initConfig() (bool, error) {
if !isConfigFileExists() {
// Initialize server config.
srvCfg := &serverConfigV10{}
srvCfg.Version = globalMinioConfigVersion
srvCfg.Region = "us-east-1"
srvCfg.Credential = mustGenAccessKeys()
// Enable console logger by default on a fresh run.
srvCfg.Logger.Console = consoleLogger{
Enable: true,
Level: "error",
}
// Make sure to initialize notification configs.
srvCfg.Notify.AMQP = make(map[string]amqpNotify)
srvCfg.Notify.AMQP["1"] = amqpNotify{}
srvCfg.Notify.ElasticSearch = make(map[string]elasticSearchNotify)
srvCfg.Notify.ElasticSearch["1"] = elasticSearchNotify{}
srvCfg.Notify.Redis = make(map[string]redisNotify)
srvCfg.Notify.Redis["1"] = redisNotify{}
srvCfg.Notify.NATS = make(map[string]natsNotify)
srvCfg.Notify.NATS["1"] = natsNotify{}
srvCfg.Notify.PostgreSQL = make(map[string]postgreSQLNotify)
srvCfg.Notify.PostgreSQL["1"] = postgreSQLNotify{}
// Create config path.
err := createConfigPath()
if err != nil {
return false, err
}
// hold the mutex lock before a new config is assigned.
// Save the new config globally.
// unlock the mutex.
serverConfigMu.Lock()
serverConfig = srvCfg
serverConfigMu.Unlock()
// Save config into file.
return true, serverConfig.Save()
}
configFile, err := getConfigFile()
if err != nil {
return false, err
}
if _, err = os.Stat(configFile); err != nil {
return false, err
}
srvCfg := &serverConfigV10{}
srvCfg.Version = globalMinioConfigVersion
qc, err := quick.New(srvCfg)
if err != nil {
return false, err
}
if err := qc.Load(configFile); err != nil {
return false, err
}
// hold the mutex lock before a new config is assigned.
serverConfigMu.Lock()
// Save the loaded config globally.
serverConfig = srvCfg
serverConfigMu.Unlock()
// Set the version properly after the unmarshalled json is loaded.
serverConfig.Version = globalMinioConfigVersion
return false, nil
}
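The new (bool, error) return distinguishes a first run (fresh config written) from loading an existing file. A sketch of how a caller reads it; the stub below merely stands in for the real initConfig:

```
package main

import "log"

// initConfig here is a stub with the same shape as the real one: the
// bool reports whether a fresh config file was created on this run.
func initConfig() (bool, error) { return true, nil }

func main() {
	isNew, err := initConfig()
	if err != nil {
		log.Fatalln("unable to initialize config:", err)
	}
	if isNew {
		log.Println("created fresh config with generated credentials")
		return
	}
	log.Println("loaded existing config")
}
```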
// serverConfig server config.
var serverConfig *serverConfigV10
// GetVersion get current config version.
func (s serverConfigV10) GetVersion() string {
serverConfigMu.RLock()
defer serverConfigMu.RUnlock()
return s.Version
}
/// Logger related.
func (s *serverConfigV10) SetAMQPNotifyByID(accountID string, amqpn amqpNotify) {
serverConfigMu.Lock()
defer serverConfigMu.Unlock()
s.Notify.AMQP[accountID] = amqpn
}
func (s serverConfigV10) GetAMQP() map[string]amqpNotify {
serverConfigMu.RLock()
defer serverConfigMu.RUnlock()
return s.Notify.AMQP
}
// GetAMQPNotifyByID get current AMQP notifier.
func (s serverConfigV10) GetAMQPNotifyByID(accountID string) amqpNotify {
serverConfigMu.RLock()
defer serverConfigMu.RUnlock()
return s.Notify.AMQP[accountID]
}
//
func (s *serverConfigV10) SetNATSNotifyByID(accountID string, natsn natsNotify) {
serverConfigMu.Lock()
defer serverConfigMu.Unlock()
s.Notify.NATS[accountID] = natsn
}
func (s serverConfigV10) GetNATS() map[string]natsNotify {
serverConfigMu.RLock()
defer serverConfigMu.RUnlock()
return s.Notify.NATS
}
// GetNATSNotifyByID get current NATS notifier.
func (s serverConfigV10) GetNATSNotifyByID(accountID string) natsNotify {
serverConfigMu.RLock()
defer serverConfigMu.RUnlock()
return s.Notify.NATS[accountID]
}
func (s *serverConfigV10) SetElasticSearchNotifyByID(accountID string, esNotify elasticSearchNotify) {
serverConfigMu.Lock()
defer serverConfigMu.Unlock()
s.Notify.ElasticSearch[accountID] = esNotify
}
func (s serverConfigV10) GetElasticSearch() map[string]elasticSearchNotify {
serverConfigMu.RLock()
defer serverConfigMu.RUnlock()
return s.Notify.ElasticSearch
}
// GetElasticSearchNotifyByID get current ElasticSearch notifier.
func (s serverConfigV10) GetElasticSearchNotifyByID(accountID string) elasticSearchNotify {
serverConfigMu.RLock()
defer serverConfigMu.RUnlock()
return s.Notify.ElasticSearch[accountID]
}
func (s *serverConfigV10) SetRedisNotifyByID(accountID string, rNotify redisNotify) {
serverConfigMu.Lock()
defer serverConfigMu.Unlock()
s.Notify.Redis[accountID] = rNotify
}
func (s serverConfigV10) GetRedis() map[string]redisNotify {
serverConfigMu.RLock()
defer serverConfigMu.RUnlock()
return s.Notify.Redis
}
// GetRedisNotifyByID get current Redis notifier.
func (s serverConfigV10) GetRedisNotifyByID(accountID string) redisNotify {
serverConfigMu.RLock()
defer serverConfigMu.RUnlock()
return s.Notify.Redis[accountID]
}
func (s *serverConfigV10) SetPostgreSQLNotifyByID(accountID string, pgn postgreSQLNotify) {
serverConfigMu.Lock()
defer serverConfigMu.Unlock()
s.Notify.PostgreSQL[accountID] = pgn
}
func (s serverConfigV10) GetPostgreSQL() map[string]postgreSQLNotify {
serverConfigMu.RLock()
defer serverConfigMu.RUnlock()
return s.Notify.PostgreSQL
}
func (s serverConfigV10) GetPostgreSQLNotifyByID(accountID string) postgreSQLNotify {
serverConfigMu.RLock()
defer serverConfigMu.RUnlock()
return s.Notify.PostgreSQL[accountID]
}
// SetFileLogger set new file logger.
func (s *serverConfigV10) SetFileLogger(flogger fileLogger) {
serverConfigMu.Lock()
defer serverConfigMu.Unlock()
s.Logger.File = flogger
}
// GetFileLogger get current file logger.
func (s serverConfigV10) GetFileLogger() fileLogger {
serverConfigMu.RLock()
defer serverConfigMu.RUnlock()
return s.Logger.File
}
// SetConsoleLogger set new console logger.
func (s *serverConfigV10) SetConsoleLogger(clogger consoleLogger) {
serverConfigMu.Lock()
defer serverConfigMu.Unlock()
s.Logger.Console = clogger
}
// GetConsoleLogger get current console logger.
func (s serverConfigV10) GetConsoleLogger() consoleLogger {
serverConfigMu.RLock()
defer serverConfigMu.RUnlock()
return s.Logger.Console
}
// SetRegion set new region.
func (s *serverConfigV10) SetRegion(region string) {
serverConfigMu.Lock()
defer serverConfigMu.Unlock()
s.Region = region
}
// GetRegion get current region.
func (s serverConfigV10) GetRegion() string {
serverConfigMu.RLock()
defer serverConfigMu.RUnlock()
return s.Region
}
// SetCredentials set new credentials.
func (s *serverConfigV10) SetCredential(creds credential) {
serverConfigMu.Lock()
defer serverConfigMu.Unlock()
s.Credential = creds
}
// GetCredentials get current credentials.
func (s serverConfigV10) GetCredential() credential {
serverConfigMu.RLock()
defer serverConfigMu.RUnlock()
return s.Credential
}
// Save config.
func (s serverConfigV10) Save() error {
serverConfigMu.RLock()
defer serverConfigMu.RUnlock()
// get config file.
configFile, err := getConfigFile()
if err != nil {
return err
}
// initialize quick.
qc, err := quick.New(&s)
if err != nil {
return err
}
// Save config file.
return qc.Save(configFile)
}
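Unlike the removed V7 config further below, which embedded rwMutex in the struct itself, V10 guards every accessor with one package-level RWMutex and leaves the config a plain value. A minimal model of that pattern:

```
package main

import (
	"fmt"
	"sync"
)

// One package-level RWMutex guards every accessor, so the config value
// itself stays a plain struct; a minimal model of the pattern above.
var (
	mu     sync.RWMutex
	region = "us-east-1"
)

func setRegion(r string) { mu.Lock(); region = r; mu.Unlock() }
func getRegion() string  { mu.RLock(); defer mu.RUnlock(); return region }

func main() {
	var wg sync.WaitGroup
	for i := 0; i < 4; i++ {
		wg.Add(1)
		go func() { defer wg.Done(); _ = getRegion() }() // concurrent readers are safe
	}
	setRegion("eu-west-1") // the single writer takes the exclusive lock
	wg.Wait()
	fmt.Println(getRegion())
}
```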


@@ -26,7 +26,7 @@ func TestServerConfig(t *testing.T) {
if err != nil {
t.Fatalf("Init Test config failed")
}
// remove the root folder after the test ends.
// remove the root directory after the test ends.
defer removeAll(rootPath)
if serverConfig.GetRegion() != "us-east-1" {
@@ -78,15 +78,6 @@ func TestServerConfig(t *testing.T) {
t.Errorf("Expecting file logger config %#v found %#v", fileLogger{Enable: true}, consoleCfg)
}
// Set new syslog logger.
serverConfig.SetSyslogLogger(syslogLogger{
Enable: true,
})
sysLogCfg := serverConfig.GetSyslogLogger()
if !reflect.DeepEqual(sysLogCfg, syslogLogger{Enable: true}) {
t.Errorf("Expecting syslog logger config %#v found %#v", syslogLogger{Enable: true}, sysLogCfg)
}
// Match version.
if serverConfig.GetVersion() != globalMinioConfigVersion {
t.Errorf("Expecting version %s found %s", serverConfig.GetVersion(), globalMinioConfigVersion)
@@ -101,7 +92,7 @@ func TestServerConfig(t *testing.T) {
setGlobalConfigPath(rootPath)
// Initialize server config.
if err := initConfig(); err != nil {
if _, err := initConfig(); err != nil {
t.Fatalf("Unable to initialize from updated config file %s", err)
}
}


@@ -1,263 +0,0 @@
/*
* Minio Cloud Storage, (C) 2016 Minio, Inc.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package cmd
import (
"os"
"sync"
"github.com/minio/minio/pkg/quick"
)
// serverConfigV7 server configuration version '7'.
type serverConfigV7 struct {
Version string `json:"version"`
// S3 API configuration.
Credential credential `json:"credential"`
Region string `json:"region"`
// Additional error logging configuration.
Logger logger `json:"logger"`
// Notification queue configuration.
Notify notifier `json:"notify"`
// Read Write mutex.
rwMutex *sync.RWMutex
}
// initConfig - initialize server config. config version (called only once).
func initConfig() error {
if !isConfigFileExists() {
// Initialize server config.
srvCfg := &serverConfigV7{}
srvCfg.Version = globalMinioConfigVersion
srvCfg.Region = "us-east-1"
srvCfg.Credential = mustGenAccessKeys()
// Enable console logger by default on a fresh run.
srvCfg.Logger.Console = consoleLogger{
Enable: true,
Level: "fatal",
}
// Make sure to initialize notification configs.
srvCfg.Notify.AMQP = make(map[string]amqpNotify)
srvCfg.Notify.AMQP["1"] = amqpNotify{}
srvCfg.Notify.ElasticSearch = make(map[string]elasticSearchNotify)
srvCfg.Notify.ElasticSearch["1"] = elasticSearchNotify{}
srvCfg.Notify.Redis = make(map[string]redisNotify)
srvCfg.Notify.Redis["1"] = redisNotify{}
srvCfg.rwMutex = &sync.RWMutex{}
// Create config path.
err := createConfigPath()
if err != nil {
return err
}
// Save the new config globally.
serverConfig = srvCfg
// Save config into file.
return serverConfig.Save()
}
configFile, err := getConfigFile()
if err != nil {
return err
}
if _, err = os.Stat(configFile); err != nil {
return err
}
srvCfg := &serverConfigV7{}
srvCfg.Version = globalMinioConfigVersion
srvCfg.rwMutex = &sync.RWMutex{}
qc, err := quick.New(srvCfg)
if err != nil {
return err
}
if err := qc.Load(configFile); err != nil {
return err
}
// Save the loaded config globally.
serverConfig = srvCfg
// Set the version properly after the unmarshalled json is loaded.
serverConfig.Version = globalMinioConfigVersion
return nil
}
// serverConfig server config.
var serverConfig *serverConfigV7
// GetVersion get current config version.
func (s serverConfigV7) GetVersion() string {
s.rwMutex.RLock()
defer s.rwMutex.RUnlock()
return s.Version
}
/// Logger related.
func (s *serverConfigV7) SetAMQPNotifyByID(accountID string, amqpn amqpNotify) {
s.rwMutex.Lock()
defer s.rwMutex.Unlock()
s.Notify.AMQP[accountID] = amqpn
}
func (s serverConfigV7) GetAMQP() map[string]amqpNotify {
s.rwMutex.RLock()
defer s.rwMutex.RUnlock()
return s.Notify.AMQP
}
// GetAMQPNotify get current AMQP logger.
func (s serverConfigV7) GetAMQPNotifyByID(accountID string) amqpNotify {
s.rwMutex.RLock()
defer s.rwMutex.RUnlock()
return s.Notify.AMQP[accountID]
}
func (s *serverConfigV7) SetElasticSearchNotifyByID(accountID string, esNotify elasticSearchNotify) {
s.rwMutex.Lock()
defer s.rwMutex.Unlock()
s.Notify.ElasticSearch[accountID] = esNotify
}
func (s serverConfigV7) GetElasticSearch() map[string]elasticSearchNotify {
s.rwMutex.RLock()
defer s.rwMutex.RUnlock()
return s.Notify.ElasticSearch
}
// GetElasticSearchNotify get current ElasticSearch logger.
func (s serverConfigV7) GetElasticSearchNotifyByID(accountID string) elasticSearchNotify {
s.rwMutex.RLock()
defer s.rwMutex.RUnlock()
return s.Notify.ElasticSearch[accountID]
}
func (s *serverConfigV7) SetRedisNotifyByID(accountID string, rNotify redisNotify) {
s.rwMutex.Lock()
defer s.rwMutex.Unlock()
s.Notify.Redis[accountID] = rNotify
}
func (s serverConfigV7) GetRedis() map[string]redisNotify {
s.rwMutex.RLock()
defer s.rwMutex.RUnlock()
return s.Notify.Redis
}
// GetRedisNotify get current Redis logger.
func (s serverConfigV7) GetRedisNotifyByID(accountID string) redisNotify {
s.rwMutex.RLock()
defer s.rwMutex.RUnlock()
return s.Notify.Redis[accountID]
}
// SetFileLogger set new file logger.
func (s *serverConfigV7) SetFileLogger(flogger fileLogger) {
s.rwMutex.Lock()
defer s.rwMutex.Unlock()
s.Logger.File = flogger
}
// GetFileLogger get current file logger.
func (s serverConfigV7) GetFileLogger() fileLogger {
s.rwMutex.RLock()
defer s.rwMutex.RUnlock()
return s.Logger.File
}
// SetConsoleLogger set new console logger.
func (s *serverConfigV7) SetConsoleLogger(clogger consoleLogger) {
s.rwMutex.Lock()
defer s.rwMutex.Unlock()
s.Logger.Console = clogger
}
// GetConsoleLogger get current console logger.
func (s serverConfigV7) GetConsoleLogger() consoleLogger {
s.rwMutex.RLock()
defer s.rwMutex.RUnlock()
return s.Logger.Console
}
// SetSyslogLogger set new syslog logger.
func (s *serverConfigV7) SetSyslogLogger(slogger syslogLogger) {
s.rwMutex.Lock()
defer s.rwMutex.Unlock()
s.Logger.Syslog = slogger
}
// GetSyslogLogger get current syslog logger.
func (s *serverConfigV7) GetSyslogLogger() syslogLogger {
s.rwMutex.RLock()
defer s.rwMutex.RUnlock()
return s.Logger.Syslog
}
// SetRegion set new region.
func (s *serverConfigV7) SetRegion(region string) {
s.rwMutex.Lock()
defer s.rwMutex.Unlock()
s.Region = region
}
// GetRegion get current region.
func (s serverConfigV7) GetRegion() string {
s.rwMutex.RLock()
defer s.rwMutex.RUnlock()
return s.Region
}
// SetCredentials set new credentials.
func (s *serverConfigV7) SetCredential(creds credential) {
s.rwMutex.Lock()
defer s.rwMutex.Unlock()
s.Credential = creds
}
// GetCredentials get current credentials.
func (s serverConfigV7) GetCredential() credential {
s.rwMutex.RLock()
defer s.rwMutex.RUnlock()
return s.Credential
}
// Save config.
func (s serverConfigV7) Save() error {
s.rwMutex.RLock()
defer s.rwMutex.RUnlock()
// get config file.
configFile, err := getConfigFile()
if err != nil {
return err
}
// initialize quick.
qc, err := quick.New(&s)
if err != nil {
return err
}
// Save config file.
return qc.Save(configFile)
}


@@ -54,7 +54,9 @@ func getConfigPath() (string, error) {
// mustGetConfigPath must get server config path.
func mustGetConfigPath() string {
configPath, err := getConfigPath()
fatalIf(err, "Unable to get config path.")
if err != nil {
return ""
}
return configPath
}
@@ -69,7 +71,11 @@ func createConfigPath() error {
// isConfigFileExists - returns true if config file exists.
func isConfigFileExists() bool {
st, err := os.Stat(mustGetConfigFile())
path, err := getConfigFile()
if err != nil {
return false
}
st, err := os.Stat(path)
// If file exists and is regular return true.
if err == nil && st.Mode().IsRegular() {
return true
@@ -77,14 +83,6 @@ func isConfigFileExists() bool {
return false
}
// mustGetConfigFile must get server config file.
func mustGetConfigFile() string {
configFile, err := getConfigFile()
fatalIf(err, "Unable to get config file.")
return configFile
}
// getConfigFile get server config file.
func getConfigFile() (string, error) {
configPath, err := getConfigPath()

View File

@@ -1,120 +0,0 @@
/*
* Minio Cloud Storage, (C) 2016 Minio, Inc.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package cmd
import (
"fmt"
"net/rpc"
"net/url"
"path"
"strings"
"github.com/minio/cli"
)
var healCmd = cli.Command{
Name: "heal",
Usage: "To heal objects.",
Action: healControl,
CustomHelpTemplate: `NAME:
minio control {{.Name}} - {{.Usage}}
USAGE:
minio control {{.Name}}
EXAMPLES:
1. Heal an object.
$ minio control {{.Name}} http://localhost:9000/songs/classical/western/piano.mp3
2. Heal all objects in a bucket recursively.
$ minio control {{.Name}} http://localhost:9000/songs
3. Heal all objects with a given prefix recursively.
$ minio control {{.Name}} http://localhost:9000/songs/classical/
`,
}
// "minio control heal" entry point.
func healControl(ctx *cli.Context) {
// Parse bucket and object from url.URL.Path
parseBucketObject := func(path string) (bucketName string, objectName string) {
splits := strings.SplitN(path, string(slashSeparator), 3)
switch len(splits) {
case 0, 1:
bucketName = ""
objectName = ""
case 2:
bucketName = splits[1]
objectName = ""
case 3:
bucketName = splits[1]
objectName = splits[2]
}
return bucketName, objectName
}
if len(ctx.Args()) != 1 {
cli.ShowCommandHelpAndExit(ctx, "heal", 1)
}
parsedURL, err := url.Parse(ctx.Args()[0])
fatalIf(err, "Unable to parse URL")
bucketName, objectName := parseBucketObject(parsedURL.Path)
if bucketName == "" {
cli.ShowCommandHelpAndExit(ctx, "heal", 1)
}
client, err := rpc.DialHTTPPath("tcp", parsedURL.Host, path.Join(reservedBucket, controlPath))
fatalIf(err, "Unable to connect to %s", parsedURL.Host)
// If object does not have trailing "/" then it's an object, hence heal it.
if objectName != "" && !strings.HasSuffix(objectName, slashSeparator) {
fmt.Printf("Healing : /%s/%s", bucketName, objectName)
args := &HealObjectArgs{bucketName, objectName}
reply := &HealObjectReply{}
err = client.Call("Control.HealObject", args, reply)
fatalIf(err, "RPC Control.HealObject call failed")
fmt.Println()
return
}
// Recursively list and heal the objects.
prefix := objectName
marker := ""
for {
args := HealListArgs{bucketName, prefix, marker, "", 1000}
reply := &HealListReply{}
err = client.Call("Control.ListObjectsHeal", args, reply)
fatalIf(err, "RPC Heal.ListObjects call failed")
// Heal the objects returned in the ListObjects reply.
for _, obj := range reply.Objects {
fmt.Printf("Healing : /%s/%s", bucketName, obj)
reply := &HealObjectReply{}
err = client.Call("Control.HealObject", HealObjectArgs{bucketName, obj}, reply)
fatalIf(err, "RPC Heal.HealObject call failed")
fmt.Println()
}
if !reply.IsTruncated {
// End of listing.
break
}
marker = reply.NextMarker
}
}

View File

@@ -1,52 +0,0 @@
/*
* Minio Cloud Storage, (C) 2016 Minio, Inc.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package cmd
import "github.com/minio/cli"
// "minio control" command.
var controlCmd = cli.Command{
Name: "control",
Usage: "Control and manage minio server.",
Action: mainControl,
Subcommands: []cli.Command{
healCmd,
shutdownCmd,
},
CustomHelpTemplate: `NAME:
{{.Name}} - {{.Usage}}
USAGE:
{{.Name}} [FLAGS] COMMAND
FLAGS:
{{range .Flags}}{{.}}
{{end}}
COMMANDS:
{{range .Commands}}{{join .Names ", "}}{{ "\t" }}{{.Usage}}
{{end}}
`,
}
func mainControl(ctx *cli.Context) {
if ctx.Args().First() != "" { // command help.
cli.ShowCommandHelp(ctx, ctx.Args().First())
} else {
// command with Subcommands is an App.
cli.ShowAppHelp(ctx)
}
}

View File

@@ -1,68 +0,0 @@
/*
* Minio Cloud Storage, (C) 2016 Minio, Inc.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package cmd
import (
"net/rpc"
"net/url"
"path"
"github.com/minio/cli"
)
var shutdownCmd = cli.Command{
Name: "shutdown",
Usage: "Shutdown or restart the server.",
Action: shutdownControl,
Flags: []cli.Flag{
cli.BoolFlag{
Name: "restart",
Usage: "Restart the server.",
},
},
CustomHelpTemplate: `NAME:
minio control {{.Name}} - {{.Usage}}
USAGE:
minio control {{.Name}} http://localhost:9000/
EXAMPLES:
1. Shutdown the server:
$ minio control shutdown http://localhost:9000/
2. Reboot the server:
$ minio control shutdown --restart http://localhost:9000/
`,
}
// "minio control shutdown" entry point.
func shutdownControl(c *cli.Context) {
if len(c.Args()) != 1 {
cli.ShowCommandHelpAndExit(c, "shutdown", 1)
}
parsedURL, err := url.ParseRequestURI(c.Args()[0])
fatalIf(err, "Unable to parse URL")
client, err := rpc.DialHTTPPath("tcp", parsedURL.Host, path.Join(reservedBucket, controlPath))
fatalIf(err, "Unable to connect to %s", parsedURL.Host)
args := &ShutdownArgs{Reboot: c.Bool("restart")}
reply := &ShutdownReply{}
err = client.Call("Control.Shutdown", args, reply)
fatalIf(err, "RPC Control.Shutdown call failed")
}

View File

@@ -1,88 +0,0 @@
/*
* Minio Cloud Storage, (C) 2016 Minio, Inc.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package cmd
// HealListArgs - argument for ListObjects RPC.
type HealListArgs struct {
Bucket string
Prefix string
Marker string
Delimiter string
MaxKeys int
}
// HealListReply - reply by ListObjects RPC.
type HealListReply struct {
IsTruncated bool
NextMarker string
Objects []string
}
// ListObjects - list all objects that needs healing.
func (c *controllerAPIHandlers) ListObjectsHeal(arg *HealListArgs, reply *HealListReply) error {
objAPI := c.ObjectAPI
if objAPI == nil {
return errInvalidArgument
}
info, err := objAPI.ListObjectsHeal(arg.Bucket, arg.Prefix, arg.Marker, arg.Delimiter, arg.MaxKeys)
if err != nil {
return err
}
reply.IsTruncated = info.IsTruncated
reply.NextMarker = info.NextMarker
for _, obj := range info.Objects {
reply.Objects = append(reply.Objects, obj.Name)
}
return nil
}
// HealObjectArgs - argument for HealObject RPC.
type HealObjectArgs struct {
Bucket string
Object string
}
// HealObjectReply - reply by HealObject RPC.
type HealObjectReply struct{}
// HealObject - heal the object.
func (c *controllerAPIHandlers) HealObject(arg *HealObjectArgs, reply *HealObjectReply) error {
objAPI := c.ObjectAPI
if objAPI == nil {
return errInvalidArgument
}
return objAPI.HealObject(arg.Bucket, arg.Object)
}
// ShutdownArgs - argument for Shutdown RPC.
type ShutdownArgs struct {
Reboot bool
}
// ShutdownReply - reply by Shutdown RPC.
type ShutdownReply struct{}
// Shutdown - Shutdown the server.
func (c *controllerAPIHandlers) Shutdown(arg *ShutdownArgs, reply *ShutdownReply) error {
if arg.Reboot {
globalShutdownSignalCh <- shutdownRestart
} else {
globalShutdownSignalCh <- shutdownHalt
}
return nil
}
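For context, a hedged sketch of the receiving side of this signal channel; the actual server wiring is not part of this diff, so the loop below is purely illustrative.

// Hypothetical receiver loop for globalShutdownSignalCh.
go func() {
	for signal := range globalShutdownSignalCh {
		switch signal {
		case shutdownHalt:
			// Run cleanup callbacks, then exit the process.
		case shutdownRestart:
			// Run cleanup callbacks, then re-exec the server binary.
		}
	}
}()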

View File

@@ -19,7 +19,7 @@ package cmd
import "net/http"
// Standard cross domain policy information located at https://s3.amazonaws.com/crossdomain.xml
var crossDomainXML = `<?xml version="1.0"?><!DOCTYPE cross-domain-policy SYSTEM "http://www.adobe.com/xml/dtds/cross-domain-policy.dtd"><cross-domain-policy><allow-access-from domain="*" secure="false" /></cross-domain-policy>`
const crossDomainXML = `<?xml version="1.0"?><!DOCTYPE cross-domain-policy SYSTEM "http://www.adobe.com/xml/dtds/cross-domain-policy.dtd"><cross-domain-policy><allow-access-from domain="*" secure="false" /></cross-domain-policy>`
// Cross domain policy implements http.Handler interface, implementing a custom ServerHTTP.
type crossDomainPolicy struct {

View File

@@ -44,9 +44,9 @@ func DamerauLevenshteinDistance(a string, b string) int {
for j := 0; j <= len(b); j++ {
d[0][j] = j
}
var cost int
for i := 1; i <= len(a); i++ {
for j := 1; j <= len(b); j++ {
cost := 0
if a[i-1] == b[j-1] {
cost = 0
} else {

View File

@@ -29,7 +29,7 @@ import (
// all the disks, writes also calculate individual block's checksum
// for future bit-rot protection.
func erasureCreateFile(disks []StorageAPI, volume, path string, reader io.Reader, blockSize int64, dataBlocks int, parityBlocks int, algo string, writeQuorum int) (bytesWritten int64, checkSums []string, err error) {
// Allocated blockSized buffer for reading.
// Allocated blockSized buffer for reading from incoming stream.
buf := make([]byte, blockSize)
hashWriters := newHashWriters(len(disks), algo)
@@ -41,7 +41,7 @@ func erasureCreateFile(disks []StorageAPI, volume, path string, reader io.Reader
// FIXME: this is a bug in Golang, n == 0 and err ==
// io.ErrUnexpectedEOF for io.ReadFull function.
if n == 0 && rErr == io.ErrUnexpectedEOF {
return 0, nil, rErr
return 0, nil, traceError(rErr)
}
if rErr == io.EOF {
// We have reached EOF on the first byte read, io.Reader
@@ -58,7 +58,7 @@ func erasureCreateFile(disks []StorageAPI, volume, path string, reader io.Reader
break
}
if rErr != nil && rErr != io.ErrUnexpectedEOF {
return 0, nil, rErr
return 0, nil, traceError(rErr)
}
if n > 0 {
// Returns encoded blocks.
@@ -88,19 +88,19 @@ func erasureCreateFile(disks []StorageAPI, volume, path string, reader io.Reader
func encodeData(dataBuffer []byte, dataBlocks, parityBlocks int) ([][]byte, error) {
rs, err := reedsolomon.New(dataBlocks, parityBlocks)
if err != nil {
return nil, err
return nil, traceError(err)
}
// Split the input buffer into data and parity blocks.
var blocks [][]byte
blocks, err = rs.Split(dataBuffer)
if err != nil {
return nil, err
return nil, traceError(err)
}
// Encode parity blocks using data blocks.
err = rs.Encode(blocks)
if err != nil {
return nil, err
return nil, traceError(err)
}
// Return encoded blocks.
@@ -122,7 +122,7 @@ func appendFile(disks []StorageAPI, volume, path string, enBlocks [][]byte, hash
defer wg.Done()
wErr := disk.AppendFile(volume, path, enBlocks[index])
if wErr != nil {
wErrs[index] = wErr
wErrs[index] = traceError(wErr)
return
}
@@ -139,7 +139,7 @@ func appendFile(disks []StorageAPI, volume, path string, enBlocks [][]byte, hash
// Do we have write quorum?.
if !isDiskQuorum(wErrs, writeQuorum) {
return errXLWriteQuorum
return traceError(errXLWriteQuorum)
}
return nil
return reduceWriteQuorumErrs(wErrs, objectOpIgnoredErrs, writeQuorum)
}
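For reference, a minimal sketch of the klauspost/reedsolomon calls that encodeData wraps, without the traceError wrapping or quorum handling:

import "github.com/klauspost/reedsolomon"

// encodeSketch splits buf into dataBlocks shards, then fills in
// parityBlocks parity shards. len(result) == dataBlocks+parityBlocks.
func encodeSketch(buf []byte, dataBlocks, parityBlocks int) ([][]byte, error) {
	rs, err := reedsolomon.New(dataBlocks, parityBlocks)
	if err != nil {
		return nil, err
	}
	// Split pads buf if needed and slices it into data shards.
	shards, err := rs.Split(buf)
	if err != nil {
		return nil, err
	}
	// Encode computes the parity shards from the data shards.
	if err = rs.Encode(shards); err != nil {
		return nil, err
	}
	return shards, nil
}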

View File

@@ -21,6 +21,7 @@ import (
"crypto/rand"
"testing"
humanize "github.com/dustin/go-humanize"
"github.com/klauspost/reedsolomon"
)
@@ -48,8 +49,8 @@ func TestErasureCreateFile(t *testing.T) {
disks := setup.disks
// Prepare a slice of 1MB with random data.
data := make([]byte, 1*1024*1024)
// Prepare a slice of 1MiB with random data.
data := make([]byte, 1*humanize.MiByte)
_, err = rand.Read(data)
if err != nil {
t.Fatal(err)
@@ -93,8 +94,8 @@ func TestErasureCreateFile(t *testing.T) {
// 1 more disk down. 7 disks down in total. Should return quorum error.
disks[10] = AppendDiskDown{disks[10].(*posix)}
_, _, err = erasureCreateFile(disks, "testbucket", "testobject4", bytes.NewReader(data), blockSize, dataBlocks, parityBlocks, bitRotAlgo, dataBlocks+1)
if err != errXLWriteQuorum {
t.Errorf("erasureCreateFile returned expected errXLWriteQuorum error, got %s", err)
if errorCause(err) != errXLWriteQuorum {
t.Errorf("erasureCreateFile return value: expected errXLWriteQuorum, got %s", err)
}
}
@@ -195,7 +196,7 @@ func TestErasureEncode(t *testing.T) {
}
// Failed as expected, but does it fail for the expected reason.
if actualErr != nil && !testCase.shouldPass {
if testCase.expectedErr != actualErr {
if errorCause(actualErr) != testCase.expectedErr {
t.Errorf("Test %d: Expected Error to be \"%v\", but instead found \"%v\" ", i+1, testCase.expectedErr, actualErr)
}
}

View File

@@ -64,7 +64,7 @@ func erasureHealFile(latestDisks []StorageAPI, outDatedDisks []StorageAPI, volum
}
err := disk.AppendFile(healBucket, healPath, enBlocks[index])
if err != nil {
return nil, err
return nil, traceError(err)
}
hashWriters[index].Write(enBlocks[index])
}

View File

@@ -22,6 +22,8 @@ import (
"os"
"path"
"testing"
humanize "github.com/dustin/go-humanize"
)
// Test erasureHealFile()
@@ -39,8 +41,8 @@ func TestErasureHealFile(t *testing.T) {
disks := setup.disks
// Prepare a slice of 1MB with random data.
data := make([]byte, 1*1024*1024)
// Prepare a slice of 1MiB with random data.
data := make([]byte, 1*humanize.MiByte)
_, err = rand.Read(data)
if err != nil {
t.Fatal(err)
@@ -66,7 +68,11 @@ func TestErasureHealFile(t *testing.T) {
copy(latest, disks)
latest[0] = nil
outDated[0] = disks[0]
healCheckSums, err := erasureHealFile(latest, outDated, "testbucket", "testobject1", "testbucket", "testobject1", 1*1024*1024, blockSize, dataBlocks, parityBlocks, bitRotAlgo)
healCheckSums, err := erasureHealFile(latest, outDated, "testbucket", "testobject1", "testbucket", "testobject1", 1*humanize.MiByte, blockSize, dataBlocks, parityBlocks, bitRotAlgo)
if err != nil {
t.Fatal(err)
}
// Checksum of the healed file should match.
if checkSums[0] != healCheckSums[0] {
t.Error("Healing failed, data does not match.")
@@ -86,7 +92,7 @@ func TestErasureHealFile(t *testing.T) {
outDated[index] = disks[index]
}
healCheckSums, err = erasureHealFile(latest, outDated, "testbucket", "testobject1", "testbucket", "testobject1", 1*1024*1024, blockSize, dataBlocks, parityBlocks, bitRotAlgo)
healCheckSums, err = erasureHealFile(latest, outDated, "testbucket", "testobject1", "testbucket", "testobject1", 1*humanize.MiByte, blockSize, dataBlocks, parityBlocks, bitRotAlgo)
if err != nil {
t.Fatal(err)
}
@@ -116,7 +122,7 @@ func TestErasureHealFile(t *testing.T) {
latest[index] = nil
outDated[index] = disks[index]
}
healCheckSums, err = erasureHealFile(latest, outDated, "testbucket", "testobject1", "testbucket", "testobject1", 1*1024*1024, blockSize, dataBlocks, parityBlocks, bitRotAlgo)
_, err = erasureHealFile(latest, outDated, "testbucket", "testobject1", "testbucket", "testobject1", 1*humanize.MiByte, blockSize, dataBlocks, parityBlocks, bitRotAlgo)
if err == nil {
t.Error("Expected erasureHealFile() to fail when the number of available disks <= parityBlocks")
}

View File

@@ -84,10 +84,10 @@ func getReadDisks(orderedDisks []StorageAPI, index int, dataBlocks int) (readDis
// Sanity checks - we should never have this situation.
if dataDisks == dataBlocks {
return nil, 0, errUnexpected
return nil, 0, traceError(errUnexpected)
}
if dataDisks+parityDisks >= dataBlocks {
return nil, 0, errUnexpected
return nil, 0, traceError(errUnexpected)
}
// Find the disks from which next set of parallel reads should happen.
@@ -107,7 +107,7 @@ func getReadDisks(orderedDisks []StorageAPI, index int, dataBlocks int) (readDis
return readDisks, i + 1, nil
}
}
return nil, 0, errXLReadQuorum
return nil, 0, traceError(errXLReadQuorum)
}
// parallelRead - reads chunks in parallel from the disks specified in []readDisks.
@@ -161,12 +161,12 @@ func parallelRead(volume, path string, readDisks []StorageAPI, orderedDisks []St
func erasureReadFile(writer io.Writer, disks []StorageAPI, volume string, path string, offset int64, length int64, totalLength int64, blockSize int64, dataBlocks int, parityBlocks int, checkSums []string, algo string, pool *bpool.BytePool) (int64, error) {
// Offset and length cannot be negative.
if offset < 0 || length < 0 {
return 0, errUnexpected
return 0, traceError(errUnexpected)
}
// Can't request more data than what is available.
if offset+length > totalLength {
return 0, errUnexpected
return 0, traceError(errUnexpected)
}
// chunkSize is the amount of data that needs to be read from each disk at a time.
@@ -248,7 +248,7 @@ func erasureReadFile(writer io.Writer, disks []StorageAPI, volume string, path s
}
if nextIndex == len(disks) {
// No more disks to read from.
return bytesWritten, errXLReadQuorum
return bytesWritten, traceError(errXLReadQuorum)
}
// We do not have enough data blocks to reconstruct the data
// hence continue the for-loop till we have enough data blocks.
@@ -325,24 +325,24 @@ func decodeData(enBlocks [][]byte, dataBlocks, parityBlocks int) error {
// Initialized reedsolomon.
rs, err := reedsolomon.New(dataBlocks, parityBlocks)
if err != nil {
return err
return traceError(err)
}
// Reconstruct encoded blocks.
err = rs.Reconstruct(enBlocks)
if err != nil {
return err
return traceError(err)
}
// Verify reconstructed blocks (parity).
ok, err := rs.Verify(enBlocks)
if err != nil {
return err
return traceError(err)
}
if !ok {
// Blocks cannot be reconstructed, corrupted data.
err = errors.New("Verification failed after reconstruction, data likely corrupted.")
return err
err = errors.New("Verification failed after reconstruction, data likely corrupted")
return traceError(err)
}
// Success.
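The decode path mirrors the encode path. A hedged sketch of the Reconstruct/Verify sequence that decodeData wraps, with unreadable shards marked nil:

import (
	"errors"

	"github.com/klauspost/reedsolomon"
)

// decodeSketch rebuilds missing shards and verifies parity; shards that
// could not be read must be set to nil before calling Reconstruct.
func decodeSketch(shards [][]byte, dataBlocks, parityBlocks int) error {
	rs, err := reedsolomon.New(dataBlocks, parityBlocks)
	if err != nil {
		return err
	}
	if err = rs.Reconstruct(shards); err != nil {
		return err
	}
	ok, err := rs.Verify(shards)
	if err != nil {
		return err
	}
	if !ok {
		return errors.New("verification failed after reconstruction")
	}
	return nil
}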

View File

@@ -22,13 +22,15 @@ import (
"testing"
"time"
"reflect"
humanize "github.com/dustin/go-humanize"
"github.com/minio/minio/pkg/bpool"
)
import "reflect"
// Tests getReadDisks which returns readable disks slice from which we can
// read parallelly.
func testGetReadDisks(t *testing.T, xl xlObjects) {
func testGetReadDisks(t *testing.T, xl *xlObjects) {
d := xl.storageDisks
testCases := []struct {
index int // index argument for getReadDisks
@@ -104,7 +106,7 @@ func testGetReadDisks(t *testing.T, xl xlObjects) {
for i, test := range testCases {
disks, nextIndex, err := getReadDisks(test.argDisks, test.index, xl.dataBlocks)
if err != test.err {
if errorCause(err) != test.err {
t.Errorf("test-case %d - expected error : %s, got : %s", i+1, test.err, err)
continue
}
@@ -121,7 +123,7 @@ func testGetReadDisks(t *testing.T, xl xlObjects) {
// Test getOrderedDisks which returns ordered slice of disks from their
// actual distribution.
func testGetOrderedDisks(t *testing.T, xl xlObjects) {
func testGetOrderedDisks(t *testing.T, xl *xlObjects) {
disks := xl.storageDisks
distribution := []int{16, 14, 12, 10, 8, 6, 4, 2, 1, 3, 5, 7, 9, 11, 13, 15}
orderedDisks := getOrderedDisks(distribution, disks)
@@ -217,12 +219,22 @@ func TestIsSuccessBlocks(t *testing.T) {
// Wrapper function for testGetReadDisks, testGetOrderedDisks.
func TestErasureReadUtils(t *testing.T) {
objLayer, dirs, err := getXLObjectLayer()
nDisks := 16
disks, err := getRandomDisks(nDisks)
if err != nil {
t.Fatal(err)
}
defer removeRoots(dirs)
xl := objLayer.(xlObjects)
endpoints, err := parseStorageEndpoints(disks)
if err != nil {
t.Fatal(err)
}
objLayer, _, err := initObjectLayer(endpoints)
if err != nil {
removeRoots(disks)
t.Fatal(err)
}
defer removeRoots(disks)
xl := objLayer.(*xlObjects)
testGetReadDisks(t, xl)
testGetOrderedDisks(t, xl)
}
@@ -250,8 +262,8 @@ func TestErasureReadFileDiskFail(t *testing.T) {
disks := setup.disks
// Prepare a slice of 1MB with random data.
data := make([]byte, 1*1024*1024)
// Prepare a slice of 1MiB with random data.
data := make([]byte, 1*humanize.MiByte)
length := int64(len(data))
_, err = rand.Read(data)
if err != nil {
@@ -314,7 +326,7 @@ func TestErasureReadFileDiskFail(t *testing.T) {
disks[13] = ReadDiskDown{disks[13].(*posix)}
buf.Reset()
_, err = erasureReadFile(buf, disks, "testbucket", "testobject", 0, length, length, blockSize, dataBlocks, parityBlocks, checkSums, bitRotAlgo, pool)
if err != errXLReadQuorum {
if errorCause(err) != errXLReadQuorum {
t.Fatal("expected errXLReadQuorum error")
}
}
@@ -323,7 +335,7 @@ func TestErasureReadFileOffsetLength(t *testing.T) {
// Initialize environment needed for the test.
dataBlocks := 7
parityBlocks := 7
blockSize := int64(1 * 1024 * 1024)
blockSize := int64(1 * humanize.MiByte)
setup, err := newErasureTestSetup(dataBlocks, parityBlocks, blockSize)
if err != nil {
t.Error(err)
@@ -333,8 +345,8 @@ func TestErasureReadFileOffsetLength(t *testing.T) {
disks := setup.disks
// Prepare a slice of 5MB with random data.
data := make([]byte, 5*1024*1024)
// Prepare a slice of 5MiB with random data.
data := make([]byte, 5*humanize.MiByte)
length := int64(len(data))
_, err = rand.Read(data)
if err != nil {
@@ -399,7 +411,7 @@ func TestErasureReadFileRandomOffsetLength(t *testing.T) {
// Initialize environment needed for the test.
dataBlocks := 7
parityBlocks := 7
blockSize := int64(1 * 1024 * 1024)
blockSize := int64(1 * humanize.MiByte)
setup, err := newErasureTestSetup(dataBlocks, parityBlocks, blockSize)
if err != nil {
t.Error(err)
@@ -409,8 +421,8 @@ func TestErasureReadFileRandomOffsetLength(t *testing.T) {
disks := setup.disks
// Prepare a slice of 5MB with random data.
data := make([]byte, 5*1024*1024)
// Prepare a slice of 5MiB with random data.
data := make([]byte, 5*humanize.MiByte)
length := int64(len(data))
_, err = rand.Read(data)
if err != nil {
@@ -430,7 +442,7 @@ func TestErasureReadFileRandomOffsetLength(t *testing.T) {
}
// To generate random offset/length.
r := rand.New(rand.NewSource(time.Now().UnixNano()))
r := rand.New(rand.NewSource(time.Now().UTC().UnixNano()))
// create pool buffer which will be used by erasureReadFile for
// reading from disks and erasure decoding.

View File

@@ -21,6 +21,7 @@ import (
"errors"
"hash"
"io"
"sync"
"github.com/klauspost/reedsolomon"
"github.com/minio/blake2b-simd"
@@ -47,13 +48,23 @@ func newHash(algo string) hash.Hash {
}
}
// Hash buffer pool is a pool of reusable
// buffers used while checksumming a stream.
var hashBufferPool = sync.Pool{
New: func() interface{} {
b := make([]byte, readSizeV1)
return &b
},
}
// hashSum calculates the hash of the entire path and returns.
func hashSum(disk StorageAPI, volume, path string, writer hash.Hash) ([]byte, error) {
// Allocate staging buffer of 128KiB for copyBuffer.
buf := make([]byte, readSizeV1)
// Fetch a new staging buffer from the pool.
bufp := hashBufferPool.Get().(*[]byte)
defer hashBufferPool.Put(bufp)
// Copy entire buffer to writer.
if err := copyBuffer(writer, disk, volume, path, buf); err != nil {
if err := copyBuffer(writer, disk, volume, path, *bufp); err != nil {
return nil, err
}
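A self-contained sketch of the same pooling pattern (illustrative names, not from this diff). Storing *[]byte rather than []byte matters: boxing a plain slice into an interface{} on every Put would itself allocate.

import (
	"hash"
	"io"
	"sync"
)

var bufPool = sync.Pool{
	New: func() interface{} {
		b := make([]byte, 128*1024) // staging size comparable to readSizeV1
		return &b
	},
}

// checksumStream copies r through h using a pooled staging buffer.
func checksumStream(r io.Reader, h hash.Hash) ([]byte, error) {
	bufp := bufPool.Get().(*[]byte)
	defer bufPool.Put(bufp)
	if _, err := io.CopyBuffer(h, r, *bufp); err != nil {
		return nil, err
	}
	return h.Sum(nil), nil
}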
@@ -76,17 +87,17 @@ func getDataBlockLen(enBlocks [][]byte, dataBlocks int) int {
func writeDataBlocks(dst io.Writer, enBlocks [][]byte, dataBlocks int, offset int64, length int64) (int64, error) {
// Offset and out size cannot be negative.
if offset < 0 || length < 0 {
return 0, errUnexpected
return 0, traceError(errUnexpected)
}
// Do we have enough blocks?
if len(enBlocks) < dataBlocks {
return 0, reedsolomon.ErrTooFewShards
return 0, traceError(reedsolomon.ErrTooFewShards)
}
// Do we have enough data?
if int64(getDataBlockLen(enBlocks, dataBlocks)) < length {
return 0, reedsolomon.ErrShortData
return 0, traceError(reedsolomon.ErrShortData)
}
// Counter to decrement total left to write.
@@ -114,7 +125,7 @@ func writeDataBlocks(dst io.Writer, enBlocks [][]byte, dataBlocks int, offset in
if write < int64(len(block)) {
n, err := io.Copy(dst, bytes.NewReader(block[:write]))
if err != nil {
return 0, err
return 0, traceError(err)
}
totalWritten += n
break
@@ -122,7 +133,7 @@ func writeDataBlocks(dst io.Writer, enBlocks [][]byte, dataBlocks int, offset in
// Copy the block.
n, err := io.Copy(dst, bytes.NewReader(block))
if err != nil {
return 0, err
return 0, traceError(err)
}
// Decrement output size.

cmd/errors.go (new file, 139 lines)
View File

@@ -0,0 +1,139 @@
/*
* Minio Cloud Storage, (C) 2016 Minio, Inc.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package cmd
import (
"fmt"
"os"
"path/filepath"
"runtime"
"strings"
)
// Holds the current directory path. Used for trimming path in traceError()
var rootPath string
// Figure out the rootPath
func initError() {
// Root path is automatically determined from the calling function's source file location.
// Catch the calling function's source file path.
_, file, _, _ := runtime.Caller(1)
// Save the directory alone.
rootPath = filepath.Dir(file)
}
// Represents a stack frame in the stack trace.
type traceInfo struct {
file string // File where error occurred
line int // Line where error occurred
name string // Name of the function where error occurred
}
// Error - error type containing cause and the stack trace.
type Error struct {
e error // Holds the cause error
trace []traceInfo // stack trace
errs []error // Useful for XL to hold errors from all disks
}
// Implement error interface.
func (e Error) Error() string {
return e.e.Error()
}
// Trace - returns stack trace.
func (e Error) Trace() []string {
var traceArr []string
for _, info := range e.trace {
traceArr = append(traceArr, fmt.Sprintf("%s:%d:%s",
info.file, info.line, info.name))
}
return traceArr
}
// traceError - wraps the given error into Error, capturing the stack trace.
func traceError(e error, errs ...error) error {
if e == nil {
return nil
}
err := &Error{}
err.e = e
err.errs = errs
stack := make([]uintptr, 40)
length := runtime.Callers(2, stack)
if length > len(stack) {
length = len(stack)
}
stack = stack[:length]
for _, pc := range stack {
pc = pc - 1
fn := runtime.FuncForPC(pc)
file, line := fn.FileLine(pc)
name := fn.Name()
if strings.HasSuffix(name, "ServeHTTP") {
break
}
if strings.HasSuffix(name, "runtime.") {
break
}
file = strings.TrimPrefix(file, rootPath+string(os.PathSeparator))
name = strings.TrimPrefix(name, "github.com/minio/minio/cmd.")
err.trace = append(err.trace, traceInfo{file, line, name})
}
return err
}
// Returns the underlying cause error.
func errorCause(err error) error {
if e, ok := err.(*Error); ok {
err = e.e
}
return err
}
// Returns slice of underlying cause errors.
func errorsCause(errs []error) []error {
cerrs := make([]error, len(errs))
for i, err := range errs {
if err == nil {
continue
}
cerrs[i] = errorCause(err)
}
return cerrs
}
var baseIgnoredErrs = []error{
errDiskNotFound,
errFaultyDisk,
errFaultyRemoteDisk,
}
// isErrIgnored returns whether given error is ignored or not.
func isErrIgnored(err error, ignoredErrs ...error) bool {
err = errorCause(err)
for _, ignoredErr := range ignoredErrs {
if ignoredErr == err {
return true
}
}
return false
}
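Putting the helpers together, a hypothetical caller (loadConfig and doLoad are illustrative names, not functions from this diff): wrap at the failure site with traceError, compare sentinels through errorCause, and read frames off the *Error wrapper.

import "fmt"

func doLoad() error { return errDiskNotFound } // stand-in failure

func loadConfig() error {
	if err := doLoad(); err != nil {
		return traceError(err) // record the stack at the failure site
	}
	return nil
}

func handle() {
	if err := loadConfig(); err != nil {
		// Compare against sentinel errors via the cause, never the wrapper.
		if errorCause(err) == errDiskNotFound {
			return
		}
		// The wrapper still exposes the captured frames.
		if tErr, ok := err.(*Error); ok {
			for _, frame := range tErr.Trace() {
				fmt.Println(frame)
			}
		}
	}
}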

View File

@@ -18,8 +18,10 @@ package cmd
import (
"bytes"
"encoding/json"
"encoding/xml"
"fmt"
"net"
"net/url"
"path"
"sync"
@@ -28,14 +30,51 @@ import (
"github.com/Sirupsen/logrus"
)
// Global event notification queue. This is the queue that would be used to send all notifications.
type eventNotifier struct {
rwMutex *sync.RWMutex
// Collection of 'bucket' and notification config.
type externalNotifier struct {
// Per-bucket notification config. This is updated via
// PutBucketNotification API.
notificationConfigs map[string]*notificationConfig
snsTargets map[string][]chan []NotificationEvent
queueTargets map[string]*logrus.Logger
// An external target keeps a connection to an external
// service to which events are to be sent. It is a mapping
// from an ARN to a log object
targets map[string]*logrus.Logger
rwMutex *sync.RWMutex
}
type internalNotifier struct {
// per-bucket listener configuration. This is updated
// when listeners connect or disconnect.
listenerConfigs map[string][]listenerConfig
// An internal target is a peer Minio server, that is
// connected to a listening client. Here, targets is a map of
// listener ARN to log object.
targets map[string]*listenerLogger
// Connected listeners is a map of listener ARNs to channels
// on which the ListenBucket API handler go routine is waiting
// for events to send to a client.
connectedListeners map[string]chan []NotificationEvent
rwMutex *sync.RWMutex
}
// Global event notification configuration. This structure has state
// about configured external notifications, and run-time configuration
// for listener notifications.
type eventNotifier struct {
// `external` here refers to notification configuration to
// send events to supported external systems
external externalNotifier
// `internal` refers to notification configuration for live
// listening clients. Events for a client are sent from all
// servers, internally to a particular server that is
// connected to the client.
internal internalNotifier
}
// Represents data to be sent with notification event.
@@ -53,7 +92,8 @@ func newNotificationEvent(event eventData) NotificationEvent {
region := serverConfig.GetRegion()
tnow := time.Now().UTC()
sequencer := fmt.Sprintf("%X", tnow.UnixNano())
// Following blocks fills in all the necessary details of s3 event message structure.
// The following block fills in all the necessary details of the s3
// event message structure.
// http://docs.aws.amazon.com/AmazonS3/latest/dev/notification-content-structure.html
nEvent := NotificationEvent{
EventVersion: "2.0",
@@ -95,79 +135,143 @@ func newNotificationEvent(event eventData) NotificationEvent {
return nEvent
}
// Fetch the saved queue target.
func (en eventNotifier) GetQueueTarget(queueARN string) *logrus.Logger {
return en.queueTargets[queueARN]
// Fetch the external target. No locking needed here since this map is
// never written after initial startup.
func (en eventNotifier) GetExternalTarget(queueARN string) *logrus.Logger {
return en.external.targets[queueARN]
}
func (en eventNotifier) GetSNSTarget(snsARN string) []chan []NotificationEvent {
en.rwMutex.RLock()
defer en.rwMutex.RUnlock()
return en.snsTargets[snsARN]
func (en eventNotifier) GetInternalTarget(arn string) *listenerLogger {
en.internal.rwMutex.RLock()
defer en.internal.rwMutex.RUnlock()
return en.internal.targets[arn]
}
// Set a new sns target for an input sns ARN.
func (en *eventNotifier) SetSNSTarget(snsARN string, listenerCh chan []NotificationEvent) error {
en.rwMutex.Lock()
defer en.rwMutex.Unlock()
func (en *eventNotifier) AddListenerChan(snsARN string, listenerCh chan []NotificationEvent) error {
if listenerCh == nil {
return errInvalidArgument
}
en.snsTargets[snsARN] = append(en.snsTargets[snsARN], listenerCh)
en.internal.rwMutex.Lock()
defer en.internal.rwMutex.Unlock()
en.internal.connectedListeners[snsARN] = listenerCh
return nil
}
// Remove sns target for an input sns ARN.
func (en *eventNotifier) RemoveSNSTarget(snsARN string, listenerCh chan []NotificationEvent) {
en.rwMutex.Lock()
defer en.rwMutex.Unlock()
snsTarget, ok := en.snsTargets[snsARN]
func (en *eventNotifier) RemoveListenerChan(snsARN string) {
en.internal.rwMutex.Lock()
defer en.internal.rwMutex.Unlock()
if en.internal.connectedListeners != nil {
delete(en.internal.connectedListeners, snsARN)
}
}
func (en *eventNotifier) SendListenerEvent(arn string, event []NotificationEvent) error {
en.internal.rwMutex.Lock()
defer en.internal.rwMutex.Unlock()
ch, ok := en.internal.connectedListeners[arn]
if ok {
for i, savedListenerCh := range snsTarget {
if listenerCh == savedListenerCh {
snsTarget = append(snsTarget[:i], snsTarget[i+1:]...)
if len(snsTarget) == 0 {
delete(en.snsTargets, snsARN)
break
}
en.snsTargets[snsARN] = snsTarget
ch <- event
}
// If the channel is not present we ignore the event.
return nil
}
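A hedged sketch of the listener lifecycle built from these methods (the ARN is illustrative):

// listenSketch registers a channel for an ARN, drains events, and
// unregisters on return.
func listenSketch() error {
	arn := "arn:minio:sns:us-east-1:1:listen" // illustrative ARN
	ch := make(chan []NotificationEvent)
	if err := globalEventNotifier.AddListenerChan(arn, ch); err != nil {
		return err
	}
	defer globalEventNotifier.RemoveListenerChan(arn)
	for events := range ch {
		// Forward to the connected client. Note a slow consumer here
		// blocks SendListenerEvent, which sends while holding the lock.
		_ = events
	}
	return nil
}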
// Fetch bucket notification config for an input bucket.
func (en eventNotifier) GetBucketNotificationConfig(bucket string) *notificationConfig {
en.external.rwMutex.RLock()
defer en.external.rwMutex.RUnlock()
return en.external.notificationConfigs[bucket]
}
func (en *eventNotifier) SetBucketNotificationConfig(bucket string, ncfg *notificationConfig) {
en.external.rwMutex.Lock()
if ncfg == nil {
delete(en.external.notificationConfigs, bucket)
} else {
en.external.notificationConfigs[bucket] = ncfg
}
en.external.rwMutex.Unlock()
}
func (en *eventNotifier) GetBucketListenerConfig(bucket string) []listenerConfig {
en.internal.rwMutex.RLock()
defer en.internal.rwMutex.RUnlock()
return en.internal.listenerConfigs[bucket]
}
func (en *eventNotifier) SetBucketListenerConfig(bucket string, lcfg []listenerConfig) error {
en.internal.rwMutex.Lock()
defer en.internal.rwMutex.Unlock()
if len(lcfg) == 0 {
delete(en.internal.listenerConfigs, bucket)
} else {
en.internal.listenerConfigs[bucket] = lcfg
}
for _, elcArr := range en.internal.listenerConfigs {
for _, elcElem := range elcArr {
currArn := elcElem.TopicConfig.TopicARN
logger, err := newListenerLogger(currArn, elcElem.TargetServer)
if err != nil {
return err
}
en.internal.targets[currArn] = logger
}
}
return nil
}
func eventNotifyForBucketNotifications(eventType, objectName, bucketName string, nEvent []NotificationEvent) {
nConfig := globalEventNotifier.GetBucketNotificationConfig(bucketName)
if nConfig == nil {
return
}
// Validate if the event and object match the queue configs.
for _, qConfig := range nConfig.QueueConfigs {
eventMatch := eventMatch(eventType, qConfig.Events)
ruleMatch := filterRuleMatch(objectName, qConfig.Filter.Key.FilterRules)
if eventMatch && ruleMatch {
targetLog := globalEventNotifier.GetExternalTarget(qConfig.QueueARN)
if targetLog != nil {
targetLog.WithFields(logrus.Fields{
"Key": path.Join(bucketName, objectName),
"EventType": eventType,
"Records": nEvent,
}).Info()
}
}
}
}
// Returns true if bucket notification is set for the bucket, false otherwise.
func (en *eventNotifier) IsBucketNotificationSet(bucket string) bool {
if en == nil {
return false
func eventNotifyForBucketListeners(eventType, objectName, bucketName string,
nEvent []NotificationEvent) {
lCfgs := globalEventNotifier.GetBucketListenerConfig(bucketName)
if lCfgs == nil {
return
}
en.rwMutex.RLock()
defer en.rwMutex.RUnlock()
_, ok := en.notificationConfigs[bucket]
return ok
}
// Fetch bucket notification config for an input bucket.
func (en eventNotifier) GetBucketNotificationConfig(bucket string) *notificationConfig {
en.rwMutex.RLock()
defer en.rwMutex.RUnlock()
return en.notificationConfigs[bucket]
}
// Set a new notification config for a bucket, this operation will overwrite any previous
// notification configs for the bucket.
func (en *eventNotifier) SetBucketNotificationConfig(bucket string, notificationCfg *notificationConfig) error {
en.rwMutex.Lock()
defer en.rwMutex.Unlock()
if notificationCfg == nil {
return errInvalidArgument
// Validate if the event and object match listener configs
for _, lcfg := range lCfgs {
ruleMatch := filterRuleMatch(objectName, lcfg.TopicConfig.Filter.Key.FilterRules)
eventMatch := eventMatch(eventType, lcfg.TopicConfig.Events)
if eventMatch && ruleMatch {
targetLog := globalEventNotifier.GetInternalTarget(
lcfg.TopicConfig.TopicARN)
if targetLog != nil && targetLog.log != nil {
targetLog.log.WithFields(logrus.Fields{
"Key": path.Join(bucketName, objectName),
"EventType": eventType,
"Records": nEvent,
}).Info()
}
}
}
en.notificationConfigs[bucket] = notificationCfg
return nil
}
// eventNotify notifies an event to relevant targets based on their
// bucket notification configs.
// bucket configuration (notifications and listeners).
func eventNotify(event eventData) {
// Notifies a new event.
// List of events reported through this function are
@@ -177,15 +281,6 @@ func eventNotify(event eventData) {
// - s3:ObjectCreated:CompleteMultipartUpload
// - s3:ObjectRemoved:Delete
nConfig := globalEventNotifier.GetBucketNotificationConfig(event.Bucket)
// No bucket notifications enabled, drop the event notification.
if nConfig == nil {
return
}
if len(nConfig.QueueConfigs) == 0 && len(nConfig.TopicConfigs) == 0 && len(nConfig.LambdaConfigs) == 0 {
return
}
// Event type.
eventType := event.Type.String()
@@ -195,56 +290,46 @@ func eventNotify(event eventData) {
// Save the notification event to be sent.
notificationEvent := []NotificationEvent{newNotificationEvent(event)}
// Validate if the event and object match the queue configs.
for _, qConfig := range nConfig.QueueConfigs {
eventMatch := eventMatch(eventType, qConfig.Events)
ruleMatch := filterRuleMatch(objectName, qConfig.Filter.Key.FilterRules)
if eventMatch && ruleMatch {
targetLog := globalEventNotifier.GetQueueTarget(qConfig.QueueARN)
if targetLog != nil {
targetLog.WithFields(logrus.Fields{
"Records": notificationEvent,
}).Info()
}
}
}
// Validate if the event and object match the sns configs.
for _, topicConfig := range nConfig.TopicConfigs {
ruleMatch := filterRuleMatch(objectName, topicConfig.Filter.Key.FilterRules)
eventMatch := eventMatch(eventType, topicConfig.Events)
if eventMatch && ruleMatch {
targetListeners := globalEventNotifier.GetSNSTarget(topicConfig.TopicARN)
for _, listener := range targetListeners {
listener <- notificationEvent
}
}
}
// Notify external targets.
eventNotifyForBucketNotifications(eventType, objectName, event.Bucket, notificationEvent)
// Notify internal targets.
eventNotifyForBucketListeners(eventType, objectName, event.Bucket, notificationEvent)
}
// loads notifcation config if any for a given bucket, returns back structured notification config.
// loads notification config if any for a given bucket, returns
// structured notification config.
func loadNotificationConfig(bucket string, objAPI ObjectLayer) (*notificationConfig, error) {
// Construct the notification config path.
notificationConfigPath := path.Join(bucketConfigPrefix, bucket, bucketNotificationConfig)
objInfo, err := objAPI.GetObjectInfo(minioMetaBucket, notificationConfigPath)
ncPath := path.Join(bucketConfigPrefix, bucket, bucketNotificationConfig)
// Acquire a read lock on notification config before reading.
objLock := globalNSMutex.NewNSLock(minioMetaBucket, ncPath)
objLock.RLock()
defer objLock.RUnlock()
objInfo, err := objAPI.GetObjectInfo(minioMetaBucket, ncPath)
if err != nil {
// 'notification.xml' not found return 'errNoSuchNotifications'.
// This is default when no bucket notifications are found on the bucket.
switch err.(type) {
case ObjectNotFound:
// 'notification.xml' not found return
// 'errNoSuchNotifications'. This is default when no
// bucket notifications are found on the bucket.
if isErrObjectNotFound(err) || isErrIncompleteBody(err) {
return nil, errNoSuchNotifications
}
errorIf(err, "Unable to load bucket-notification for bucket %s", bucket)
// Returns error for other errors.
return nil, err
}
var buffer bytes.Buffer
err = objAPI.GetObject(minioMetaBucket, notificationConfigPath, 0, objInfo.Size, &buffer)
err = objAPI.GetObject(minioMetaBucket, ncPath, 0, objInfo.Size, &buffer)
if err != nil {
// 'notification.xml' not found return 'errNoSuchNotifications'.
// This is default when no bucket notifications are found on the bucket.
switch err.(type) {
case ObjectNotFound:
// 'notification.xml' not found return
// 'errNoSuchNotifications'. This is default when no
// bucket notifications are found on the bucket.
if isErrObjectNotFound(err) || isErrIncompleteBody(err) {
return nil, errNoSuchNotifications
}
errorIf(err, "Unable to load bucket-notification for bucket %s", bucket)
// Returns error for other errors.
return nil, err
}
@@ -254,37 +339,186 @@ func loadNotificationConfig(bucket string, objAPI ObjectLayer) (*notificationCon
notificationCfg := &notificationConfig{}
if err = xml.Unmarshal(notificationConfigBytes, &notificationCfg); err != nil {
return nil, err
} // Successfully marshalled notification configuration.
}
// Return success.
return notificationCfg, nil
}
// loads all bucket notifications if present.
func loadAllBucketNotifications(objAPI ObjectLayer) (map[string]*notificationConfig, error) {
// List buckets to proceed loading all notification configuration.
buckets, err := objAPI.ListBuckets()
// loads notification config if any for a given bucket, returns
// structured notification config.
func loadListenerConfig(bucket string, objAPI ObjectLayer) ([]listenerConfig, error) {
// In single node mode there are no peers, so there is no
// configuration to load; any previously connected listening
// clients have been disconnected.
if !globalIsDistXL {
return nil, nil
}
// Construct the notification config path.
lcPath := path.Join(bucketConfigPrefix, bucket, bucketListenerConfig)
// Acquire a read lock on listener config before reading.
objLock := globalNSMutex.NewNSLock(minioMetaBucket, lcPath)
objLock.RLock()
defer objLock.RUnlock()
objInfo, err := objAPI.GetObjectInfo(minioMetaBucket, lcPath)
if err != nil {
// 'listener.json' not found return
// 'errNoSuchNotifications'. This is default when no
// bucket notifications are found on the bucket.
if isErrObjectNotFound(err) {
return nil, errNoSuchNotifications
}
errorIf(err, "Unable to load bucket-listeners for bucket %s", bucket)
// Returns error for other errors.
return nil, err
}
var buffer bytes.Buffer
err = objAPI.GetObject(minioMetaBucket, lcPath, 0, objInfo.Size, &buffer)
if err != nil {
// 'listener.json' not found return
// 'errNoSuchNotifications'. This is default when no
// bucket listeners are found on the bucket.
if isErrObjectNotFound(err) {
return nil, errNoSuchNotifications
}
errorIf(err, "Unable to load bucket-listeners for bucket %s", bucket)
// Returns error for other errors.
return nil, err
}
configs := make(map[string]*notificationConfig)
// Unmarshal notification bytes.
var lCfg []listenerConfig
lConfigBytes := buffer.Bytes()
if err = json.Unmarshal(lConfigBytes, &lCfg); err != nil {
errorIf(err, "Unable to unmarshal listener config from JSON.")
return nil, err
}
// Return success.
return lCfg, nil
}
func persistNotificationConfig(bucket string, ncfg *notificationConfig, obj ObjectLayer) error {
// marshal to xml
buf, err := xml.Marshal(ncfg)
if err != nil {
errorIf(err, "Unable to marshal notification configuration into XML")
return err
}
// build path
ncPath := path.Join(bucketConfigPrefix, bucket, bucketNotificationConfig)
// Acquire a write lock on notification config before modifying.
objLock := globalNSMutex.NewNSLock(minioMetaBucket, ncPath)
objLock.Lock()
defer objLock.Unlock()
// write object to path
sha256Sum := getSHA256Hash(buf)
_, err = obj.PutObject(minioMetaBucket, ncPath, int64(len(buf)), bytes.NewReader(buf), nil, sha256Sum)
if err != nil {
errorIf(err, "Unable to write bucket notification configuration.")
return err
}
return nil
}
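A hypothetical handler-side flow combining the helpers above (updateBucketNotification is an illustrative name): persist to the object layer first, then swap the in-memory config, so readers never observe a config that is not on disk.

func updateBucketNotification(bucket string, ncfg *notificationConfig, obj ObjectLayer) error {
	if err := persistNotificationConfig(bucket, ncfg, obj); err != nil {
		return err
	}
	globalEventNotifier.SetBucketNotificationConfig(bucket, ncfg)
	return nil
}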
// Persists validated listener config to object layer.
func persistListenerConfig(bucket string, lcfg []listenerConfig, obj ObjectLayer) error {
buf, err := json.Marshal(lcfg)
if err != nil {
errorIf(err, "Unable to marshal listener config to JSON.")
return err
}
// build path
lcPath := path.Join(bucketConfigPrefix, bucket, bucketListenerConfig)
// Acquire a write lock on notification config before modifying.
objLock := globalNSMutex.NewNSLock(minioMetaBucket, lcPath)
objLock.Lock()
defer objLock.Unlock()
// write object to path
sha256Sum := getSHA256Hash(buf)
_, err = obj.PutObject(minioMetaBucket, lcPath, int64(len(buf)), bytes.NewReader(buf), nil, sha256Sum)
if err != nil {
errorIf(err, "Unable to write bucket listener configuration to object layer.")
}
return err
}
// Removes notification.xml for a given bucket, only used during DeleteBucket.
func removeNotificationConfig(bucket string, objAPI ObjectLayer) error {
// Verify bucket is valid.
if !IsValidBucketName(bucket) {
return BucketNameInvalid{Bucket: bucket}
}
ncPath := path.Join(bucketConfigPrefix, bucket, bucketNotificationConfig)
// Acquire a write lock on notification config before modifying.
objLock := globalNSMutex.NewNSLock(minioMetaBucket, ncPath)
objLock.Lock()
err := objAPI.DeleteObject(minioMetaBucket, ncPath)
objLock.Unlock()
return err
}
// Remove listener configuration from storage layer. Used when a bucket is deleted.
func removeListenerConfig(bucket string, objAPI ObjectLayer) error {
// make the path
lcPath := path.Join(bucketConfigPrefix, bucket, bucketListenerConfig)
// Acquire a write lock on notification config before modifying.
objLock := globalNSMutex.NewNSLock(minioMetaBucket, lcPath)
objLock.Lock()
err := objAPI.DeleteObject(minioMetaBucket, lcPath)
objLock.Unlock()
return err
}
// Loads both notification and listener config.
func loadNotificationAndListenerConfig(bucketName string, objAPI ObjectLayer) (nCfg *notificationConfig, lCfg []listenerConfig, err error) {
// Loads notification config if any.
nCfg, err = loadNotificationConfig(bucketName, objAPI)
if err != nil && !isErrIgnored(err, errDiskNotFound, errNoSuchNotifications) {
return nil, nil, err
}
// Loads listener config if any.
lCfg, err = loadListenerConfig(bucketName, objAPI)
if err != nil && !isErrIgnored(err, errDiskNotFound, errNoSuchNotifications) {
return nil, nil, err
}
return nCfg, lCfg, nil
}
// loads all bucket notifications if present.
func loadAllBucketNotifications(objAPI ObjectLayer) (map[string]*notificationConfig, map[string][]listenerConfig, error) {
// List buckets to proceed loading all notification configuration.
buckets, err := objAPI.ListBuckets()
if err != nil {
return nil, nil, err
}
nConfigs := make(map[string]*notificationConfig)
lConfigs := make(map[string][]listenerConfig)
// Loads all bucket notifications.
for _, bucket := range buckets {
var nCfg *notificationConfig
nCfg, err = loadNotificationConfig(bucket.Name, objAPI)
// Load persistent notification and listener configurations
// for a given bucket name.
nConfigs[bucket.Name], lConfigs[bucket.Name], err = loadNotificationAndListenerConfig(bucket.Name, objAPI)
if err != nil {
if err == errNoSuchNotifications {
continue
}
return nil, err
return nil, nil, err
}
configs[bucket.Name] = nCfg
}
// Success.
return configs, nil
return nConfigs, lConfigs, nil
}
// Loads all queue targets, initializes each queue ARN depending on its config.
@@ -308,10 +542,46 @@ func loadAllQueueTargets() (map[string]*logrus.Logger, error) {
// Using accountID we can now initialize a new AMQP logrus instance.
amqpLog, err := newAMQPNotify(accountID)
if err != nil {
// Encapsulate network error to be more informative.
if _, ok := err.(net.Error); ok {
return nil, &net.OpError{
Op: "Connecting to " + queueARN,
Net: "tcp",
Err: err,
}
}
return nil, err
}
queueTargets[queueARN] = amqpLog
}
// Load all nats targets, initialize their respective loggers.
for accountID, natsN := range serverConfig.GetNATS() {
if !natsN.Enable {
continue
}
// Construct the queue ARN for NATS.
queueARN := minioSqs + serverConfig.GetRegion() + ":" + accountID + ":" + queueTypeNATS
// If the queue target is already initialized we move to the next ARN.
_, ok := queueTargets[queueARN]
if ok {
continue
}
// Using accountID we can now initialize a new NATS logrus instance.
natsLog, err := newNATSNotify(accountID)
if err != nil {
// Encapsulate network error to be more informative.
if _, ok := err.(net.Error); ok {
return nil, &net.OpError{
Op: "Connecting to " + queueARN,
Net: "tcp",
Err: err,
}
}
return nil, err
}
queueTargets[queueARN] = natsLog
}
// Load redis targets, initialize their respective loggers.
for accountID, redisN := range serverConfig.GetRedis() {
if !redisN.Enable {
@@ -327,6 +597,14 @@ func loadAllQueueTargets() (map[string]*logrus.Logger, error) {
// Using accountID we can now initialize a new Redis logrus instance.
redisLog, err := newRedisNotify(accountID)
if err != nil {
// Encapsulate network error to be more informative.
if _, ok := err.(net.Error); ok {
return nil, &net.OpError{
Op: "Connecting to " + queueARN,
Net: "tcp",
Err: err,
}
}
return nil, err
}
queueTargets[queueARN] = redisLog
@@ -345,10 +623,43 @@ func loadAllQueueTargets() (map[string]*logrus.Logger, error) {
// Using accountID we can now initialize a new ElasticSearch logrus instance.
elasticLog, err := newElasticNotify(accountID)
if err != nil {
// Encapsulate network error to be more informative.
if _, ok := err.(net.Error); ok {
return nil, &net.OpError{
Op: "Connecting to " + queueARN, Net: "tcp",
Err: err,
}
}
return nil, err
}
queueTargets[queueARN] = elasticLog
}
// Load PostgreSQL targets, initialize their respective loggers.
for accountID, pgN := range serverConfig.GetPostgreSQL() {
if !pgN.Enable {
continue
}
// Construct the queue ARN for Postgres.
queueARN := minioSqs + serverConfig.GetRegion() + ":" + accountID + ":" + queueTypePostgreSQL
_, ok := queueTargets[queueARN]
if ok {
continue
}
// Using accountID we can now initialize a new PostgreSQL logrus instance.
pgLog, err := newPostgreSQLNotify(accountID)
if err != nil {
// Encapsulate network error to be more informative.
if _, ok := err.(net.Error); ok {
return nil, &net.OpError{
Op: "Connecting to " + queueARN, Net: "tcp",
Err: err,
}
}
return nil, err
}
queueTargets[queueARN] = pgLog
}
// Successfully initialized queue targets.
return queueTargets, nil
}
@@ -363,8 +674,9 @@ func initEventNotifier(objAPI ObjectLayer) error {
}
// Read all saved bucket notifications.
configs, err := loadAllBucketNotifications(objAPI)
nConfigs, lConfigs, err := loadAllBucketNotifications(objAPI)
if err != nil {
errorIf(err, "Error loading bucket notifications - %v", err)
return err
}
@@ -374,12 +686,36 @@ func initEventNotifier(objAPI ObjectLayer) error {
return err
}
// Inititalize event notifier queue.
// Initialize internal listener targets
listenTargets := make(map[string]*listenerLogger)
for _, listeners := range lConfigs {
for _, listener := range listeners {
ln, err := newListenerLogger(
listener.TopicConfig.TopicARN,
listener.TargetServer,
)
if err != nil {
errorIf(err, "Unable to initialize listener target logger.")
// TODO: improve error
return fmt.Errorf("Error initializing listener target logger - %v", err)
}
listenTargets[listener.TopicConfig.TopicARN] = ln
}
}
// Initialize event notifier queue.
globalEventNotifier = &eventNotifier{
rwMutex: &sync.RWMutex{},
notificationConfigs: configs,
queueTargets: queueTargets,
snsTargets: make(map[string][]chan []NotificationEvent),
external: externalNotifier{
notificationConfigs: nConfigs,
targets: queueTargets,
rwMutex: &sync.RWMutex{},
},
internal: internalNotifier{
rwMutex: &sync.RWMutex{},
targets: listenTargets,
listenerConfigs: lConfigs,
connectedListeners: make(map[string]chan []NotificationEvent),
},
}
return nil

View File

@@ -17,117 +17,497 @@
package cmd
import (
"fmt"
"net"
"reflect"
"testing"
"time"
)
// Tests event notify.
func TestEventNotify(t *testing.T) {
ExecObjectLayerTest(t, testEventNotify)
// Test InitEventNotifier with faulty disks
func TestInitEventNotifierFaultyDisks(t *testing.T) {
// Prepare for tests
rootPath, err := newTestConfig("us-east-1")
if err != nil {
t.Fatalf("Init Test config failed")
}
// remove the root directory after the test ends.
defer removeAll(rootPath)
disks, err := getRandomDisks(1)
if err != nil {
t.Fatal("Unable to create directories for FS backend. ", err)
}
defer removeAll(disks[0])
endpoints, err := parseStorageEndpoints(disks)
if err != nil {
t.Fatal(err)
}
obj, _, err := initObjectLayer(endpoints)
if err != nil {
t.Fatal("Unable to initialize FS backend.", err)
}
bucketName := "bucket"
if err := obj.MakeBucket(bucketName); err != nil {
t.Fatal("Unexpected error:", err)
}
fs := obj.(fsObjects)
fsstorage := fs.storage.(*retryStorage)
listenARN := "arn:minio:sns:us-east-1:1:listen"
queueARN := "arn:minio:sqs:us-east-1:1:redis"
// Write a notification.xml in the disk
notificationXML := "<NotificationConfiguration>"
notificationXML += "<TopicConfiguration><Event>s3:ObjectRemoved:*</Event><Event>s3:ObjectRemoved:*</Event><Topic>" + listenARN + "</Topic></TopicConfiguration>"
notificationXML += "<QueueConfiguration><Event>s3:ObjectRemoved:*</Event><Event>s3:ObjectRemoved:*</Event><Queue>" + queueARN + "</Queue></QueueConfiguration>"
notificationXML += "</NotificationConfiguration>"
if err := fsstorage.AppendFile(minioMetaBucket, bucketConfigPrefix+"/"+bucketName+"/"+bucketNotificationConfig, []byte(notificationXML)); err != nil {
t.Fatal("Unexpected error:", err)
}
// Test initEventNotifier() with faulty disks
for i := 1; i <= 5; i++ {
fs.storage = newNaughtyDisk(fsstorage, map[int]error{i: errFaultyDisk}, nil)
if err := initEventNotifier(fs); errorCause(err) != errFaultyDisk {
t.Fatal("Unexpected error:", err)
}
}
}
func testEventNotify(obj ObjectLayer, instanceType string, t TestErrHandler) {
bucketName := getRandomBucketName()
// InitEventNotifierWithAMQP - tests InitEventNotifier when AMQP is not prepared
func TestInitEventNotifierWithAMQP(t *testing.T) {
// initialize the server and obtain the credentials and root.
// credentials are necessary to sign the HTTP request.
rootPath, err := newTestConfig("us-east-1")
if err != nil {
t.Fatalf("Init Test config failed")
}
// remove the root folder after the test ends.
// remove the root directory after the test ends.
defer removeAll(rootPath)
initEventNotifier(obj)
// Notify object created event.
eventNotify(eventData{
Type: ObjectCreatedPost,
Bucket: bucketName,
ObjInfo: ObjectInfo{
Bucket: bucketName,
Name: "object1",
},
ReqParams: map[string]string{
"sourceIPAddress": "localhost:1337",
},
})
if err := globalEventNotifier.SetBucketNotificationConfig(bucketName, nil); err != errInvalidArgument {
t.Errorf("Expected error %s, got %s", errInvalidArgument, err)
disks, err := getRandomDisks(1)
if err != nil {
t.Fatal("Unable to create directories for FS backend. ", err)
}
defer removeAll(disks[0])
if err := globalEventNotifier.SetBucketNotificationConfig(bucketName, &notificationConfig{}); err != nil {
t.Errorf("Expected error to be nil, got %s", err)
endpoints, err := parseStorageEndpoints(disks)
if err != nil {
t.Fatal(err)
}
if !globalEventNotifier.IsBucketNotificationSet(bucketName) {
t.Errorf("Notification expected to be set, but notification not set.")
}
nConfig := globalEventNotifier.GetBucketNotificationConfig(bucketName)
if !reflect.DeepEqual(nConfig, &notificationConfig{}) {
t.Errorf("Mismatching notification configs.")
}
// Notify object created event.
eventNotify(eventData{
Type: ObjectCreatedPost,
Bucket: bucketName,
ObjInfo: ObjectInfo{
Bucket: bucketName,
Name: "object1",
},
ReqParams: map[string]string{
"sourceIPAddress": "localhost:1337",
},
})
}
// Tests various forms of inititalization of event notifier.
func TestInitEventNotifier(t *testing.T) {
fs, disk, err := getSingleNodeObjectLayer()
fs, _, err := initObjectLayer(endpoints)
if err != nil {
t.Fatal("Unable to initialize FS backend.", err)
}
xl, disks, err := getXLObjectLayer()
if err != nil {
t.Fatal("Unable to initialize XL backend.", err)
}
disks = append(disks, disk)
for _, d := range disks {
defer removeAll(d)
}
// Collection of test cases for inititalizing event notifier.
testCases := []struct {
objAPI ObjectLayer
configs map[string]*notificationConfig
err error
}{
// Test 1 - invalid arguments.
{
objAPI: nil,
err: errInvalidArgument,
},
// Test 2 - valid FS object layer but no bucket notifications.
{
objAPI: fs,
err: nil,
},
// Test 3 - valid XL object layer but no bucket notifications.
{
objAPI: xl,
err: nil,
},
}
// Validate if event notifier is properly initialized.
for i, testCase := range testCases {
err = initEventNotifier(testCase.objAPI)
if err != testCase.err {
t.Errorf("Test %d: Expected %s, but got: %s", i+1, testCase.err, err)
}
serverConfig.SetAMQPNotifyByID("1", amqpNotify{Enable: true})
if err := initEventNotifier(fs); err == nil {
t.Fatal("AMQP config didn't fail.")
}
}
// InitEventNotifierWithElasticSearch - test InitEventNotifier when ElasticSearch is not ready
func TestInitEventNotifierWithElasticSearch(t *testing.T) {
// initialize the server and obtain the credentials and root.
// credentials are necessary to sign the HTTP request.
rootPath, err := newTestConfig("us-east-1")
if err != nil {
t.Fatalf("Init Test config failed")
}
// remove the root directory after the test ends.
defer removeAll(rootPath)
disks, err := getRandomDisks(1)
defer removeAll(disks[0])
if err != nil {
t.Fatal("Unable to create directories for FS backend. ", err)
}
endpoints, err := parseStorageEndpoints(disks)
if err != nil {
t.Fatal(err)
}
fs, _, err := initObjectLayer(endpoints)
if err != nil {
t.Fatal("Unable to initialize FS backend.", err)
}
serverConfig.SetElasticSearchNotifyByID("1", elasticSearchNotify{Enable: true})
if err := initEventNotifier(fs); err == nil {
t.Fatal("ElasticSearch config didn't fail.")
}
}
// InitEventNotifierWithRedis - test InitEventNotifier when Redis is not ready
func TestInitEventNotifierWithRedis(t *testing.T) {
// initialize the server and obtain the credentials and root.
// credentials are necessary to sign the HTTP request.
rootPath, err := newTestConfig("us-east-1")
if err != nil {
t.Fatalf("Init Test config failed")
}
// remove the root directory after the test ends.
defer removeAll(rootPath)
disks, err := getRandomDisks(1)
defer removeAll(disks[0])
if err != nil {
t.Fatal("Unable to create directories for FS backend. ", err)
}
endpoints, err := parseStorageEndpoints(disks)
if err != nil {
t.Fatal(err)
}
fs, _, err := initObjectLayer(endpoints)
if err != nil {
t.Fatal("Unable to initialize FS backend.", err)
}
serverConfig.SetRedisNotifyByID("1", redisNotify{Enable: true})
if err := initEventNotifier(fs); err == nil {
t.Fatal("Redis config didn't fail.")
}
}
type TestPeerRPCServerData struct {
serverType string
testServer TestServer
}
func (s *TestPeerRPCServerData) Setup(t *testing.T) {
s.testServer = StartTestPeersRPCServer(t, s.serverType)
// setup port and minio addr
host, port, err := net.SplitHostPort(s.testServer.Server.Listener.Addr().String())
if err != nil {
t.Fatalf("Initialisation error: %v", err)
}
globalMinioHost = host
globalMinioPort = port
globalMinioAddr = getLocalAddress(
s.testServer.SrvCmdCfg,
)
// initialize the peer client(s)
initGlobalS3Peers(s.testServer.Disks)
}
func (s *TestPeerRPCServerData) TearDown() {
s.testServer.Stop()
_ = removeAll(s.testServer.Root)
for _, d := range s.testServer.Disks {
_ = removeAll(d.Path)
}
}
func TestSetNGetBucketNotification(t *testing.T) {
s := TestPeerRPCServerData{serverType: "XL"}
// setup and teardown
s.Setup(t)
defer s.TearDown()
bucketName := getRandomBucketName()
obj := s.testServer.Obj
if err := initEventNotifier(obj); err != nil {
t.Fatal("Unexpected error:", err)
}
globalEventNotifier.SetBucketNotificationConfig(bucketName, &notificationConfig{})
nConfig := globalEventNotifier.GetBucketNotificationConfig(bucketName)
if nConfig == nil {
t.Errorf("Notification expected to be set, but notification not set.")
}
if !reflect.DeepEqual(nConfig, &notificationConfig{}) {
t.Errorf("Mismatching notification configs.")
}
}
func TestInitEventNotifier(t *testing.T) {
s := TestPeerRPCServerData{serverType: "XL"}
// setup and teardown
s.Setup(t)
defer s.TearDown()
// test if empty object layer arg. returns expected error.
if err := initEventNotifier(nil); err == nil || err != errInvalidArgument {
t.Fatalf("initEventNotifier returned unexpected error value - %v", err)
}
obj := s.testServer.Obj
bucketName := getRandomBucketName()
// declare sample configs
filterRules := []filterRule{
{
Name: "prefix",
Value: "minio",
},
{
Name: "suffix",
Value: "*.jpg",
},
}
sampleSvcCfg := ServiceConfig{
[]string{"s3:ObjectRemoved:*", "s3:ObjectCreated:*"},
filterStruct{
keyFilter{filterRules},
},
"1",
}
sampleNotifCfg := notificationConfig{
QueueConfigs: []queueConfig{
{
ServiceConfig: sampleSvcCfg,
QueueARN: "testqARN",
},
},
}
sampleListenCfg := []listenerConfig{
{
TopicConfig: topicConfig{ServiceConfig: sampleSvcCfg,
TopicARN: "testlARN"},
TargetServer: globalMinioAddr,
},
}
// create bucket
if err := obj.MakeBucket(bucketName); err != nil {
t.Fatal("Unexpected error:", err)
}
// bucket is created, now writing should not give errors.
if err := persistNotificationConfig(bucketName, &sampleNotifCfg, obj); err != nil {
t.Fatal("Unexpected error:", err)
}
if err := persistListenerConfig(bucketName, sampleListenCfg, obj); err != nil {
t.Fatal("Unexpected error:", err)
}
// needed to load listener config from disk for testing (in
// single peer mode, the listener config is ignored, but here
// we want to test the loading from disk too.)
globalIsDistXL = true
// test event notifier init
if err := initEventNotifier(obj); err != nil {
t.Fatal("Unexpected error:", err)
}
// fetch bucket configs and verify
ncfg := globalEventNotifier.GetBucketNotificationConfig(bucketName)
if ncfg == nil {
t.Error("Bucket notification was not present for ", bucketName)
}
if len(ncfg.QueueConfigs) != 1 || ncfg.QueueConfigs[0].QueueARN != "testqARN" {
t.Error("Unexpected bucket notification found - ", *ncfg)
}
if globalEventNotifier.GetExternalTarget("testqARN") != nil {
t.Error("A logger was not expected to be found as it was not enabled in the config.")
}
lcfg := globalEventNotifier.GetBucketListenerConfig(bucketName)
if lcfg == nil {
t.Error("Bucket listener was not present for ", bucketName)
}
if len(lcfg) != 1 || lcfg[0].TargetServer != globalMinioAddr || lcfg[0].TopicConfig.TopicARN != "testlARN" {
t.Error("Unexpected listener config found - ", lcfg[0])
}
if globalEventNotifier.GetInternalTarget("testlARN") == nil {
t.Error("A listen logger was not found.")
}
}
func TestListenBucketNotification(t *testing.T) {
s := TestPeerRPCServerData{serverType: "XL"}
// setup and teardown
s.Setup(t)
defer s.TearDown()
// test initialisation
obj := s.testServer.Obj
bucketName := "bucket"
objectName := "object"
// Create the bucket to listen on
if err := obj.MakeBucket(bucketName); err != nil {
t.Fatal("Unexpected error:", err)
}
listenARN := fmt.Sprintf("%s:%s:1:%s-%s",
minioTopic,
serverConfig.GetRegion(),
snsTypeMinio,
s.testServer.Server.Listener.Addr(),
)
lcfg := listenerConfig{
TopicConfig: topicConfig{
ServiceConfig{
[]string{"s3:ObjectRemoved:*", "s3:ObjectCreated:*"},
filterStruct{},
"0",
},
listenARN,
},
TargetServer: globalMinioAddr,
}
// write listener config to storage layer
lcfgs := []listenerConfig{lcfg}
if err := persistListenerConfig(bucketName, lcfgs, obj); err != nil {
t.Fatalf("Test Setup error: %v", err)
}
// needed to load listener config from disk for testing (in
// single peer mode, the listener config is ignored, but here
// we want to test the loading from disk too.)
globalIsDistXL = true
// Init event notifier
if err := initEventNotifier(obj); err != nil {
t.Fatal("Unexpected error:", err)
}
// Check if the config is loaded
listenerCfg := globalEventNotifier.GetBucketListenerConfig(bucketName)
if listenerCfg == nil {
t.Fatal("Cannot load bucket listener config")
}
if len(listenerCfg) != 1 {
t.Fatal("Listener config is not correctly loaded. Exactly one listener config is expected")
}
// Check if topic ARN is correct
if listenerCfg[0].TopicConfig.TopicARN != listenARN {
t.Fatal("Configured topic ARN is incorrect.")
}
// Create a new notification event channel.
nEventCh := make(chan []NotificationEvent)
// Close the listener channel.
defer close(nEventCh)
// Add events channel for listener.
if err := globalEventNotifier.AddListenerChan(listenARN, nEventCh); err != nil {
t.Fatalf("Test Setup error: %v", err)
}
// Remove listen channel after the writer has closed or the
// client disconnected.
defer globalEventNotifier.RemoveListenerChan(listenARN)
// Fire an event notification
go eventNotify(eventData{
Type: ObjectRemovedDelete,
Bucket: bucketName,
ObjInfo: ObjectInfo{
Bucket: bucketName,
Name: objectName,
},
ReqParams: map[string]string{
"sourceIPAddress": "localhost:1337",
},
})
// Wait for the event notification here; if nothing is received within 3 seconds,
// the test fails.
select {
case n := <-nEventCh:
// Check the received event.
if len(n) == 0 {
t.Fatal("Unexpected error occurred")
}
if n[0].S3.Object.Key != objectName {
t.Fatalf("Received wrong object name in notification, expected %s, received %s", objectName, n[0].S3.Object.Key)
}
case <-time.After(3 * time.Second):
t.Fatal("Timed out waiting for the notification event")
}
}
func TestAddRemoveBucketListenerConfig(t *testing.T) {
s := TestPeerRPCServerData{serverType: "XL"}
// setup and teardown
s.Setup(t)
defer s.TearDown()
// test code
obj := s.testServer.Obj
if err := initEventNotifier(obj); err != nil {
t.Fatalf("Failed to initialize event notifier: %v", err)
}
// Make a bucket to store topicConfigs.
randBucket := getRandomBucketName()
if err := obj.MakeBucket(randBucket); err != nil {
t.Fatalf("Failed to make bucket %s", randBucket)
}
// Add a topicConfig to an empty notificationConfig.
accountID := fmt.Sprintf("%d", time.Now().UTC().UnixNano())
accountARN := fmt.Sprintf(
"arn:minio:sqs:%s:%s:listen-%s",
serverConfig.GetRegion(),
accountID,
globalMinioAddr,
)
// Make topic configuration
filterRules := []filterRule{
{
Name: "prefix",
Value: "minio",
},
{
Name: "suffix",
Value: "*.jpg",
},
}
sampleTopicCfg := topicConfig{
TopicARN: accountARN,
ServiceConfig: ServiceConfig{
[]string{"s3:ObjectRemoved:*", "s3:ObjectCreated:*"},
filterStruct{
keyFilter{filterRules},
},
"sns-" + accountID,
},
}
sampleListenerCfg := &listenerConfig{
TopicConfig: sampleTopicCfg,
TargetServer: globalMinioAddr,
}
testCases := []struct {
lCfg *listenerConfig
expectedErr error
}{
{sampleListenerCfg, nil},
{nil, errInvalidArgument},
}
for i, test := range testCases {
err := AddBucketListenerConfig(randBucket, test.lCfg, obj)
if err != test.expectedErr {
t.Errorf(
"Test %d: Failed with error %v, expected to fail with %v",
i+1, err, test.expectedErr,
)
}
}
// test remove listener actually removes a listener
RemoveBucketListenerConfig(randBucket, sampleListenerCfg, obj)
// since it does not return errors we fetch the config and
// check
lcSlice := globalEventNotifier.GetBucketListenerConfig(randBucket)
if len(lcSlice) != 0 {
t.Errorf("Remove Listener Config Test: did not remove listener config - %v",
lcSlice)
}
}
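For reference, the listener ARN assembled above has the shape arn:minio:sqs:&lt;region&gt;:&lt;accountID&gt;:listen-&lt;address&gt;. A minimal standalone sketch of that construction (the region, account ID, and address below are made-up example values):

package main

import "fmt"

func main() {
	region, accountID, addr := "us-east-1", "1481851086000000000", "10.0.0.5:9000"
	// Mirrors the fmt.Sprintf pattern used by AddBucketListenerConfig's caller above.
	arn := fmt.Sprintf("arn:minio:sqs:%s:%s:listen-%s", region, accountID, addr)
	fmt.Println(arn) // arn:minio:sqs:us-east-1:1481851086000000000:listen-10.0.0.5:9000
}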


@@ -1,4 +1,4 @@
// +build windows
// +build !linux
/*
* Minio Cloud Storage, (C) 2016 Minio, Inc.
@@ -18,13 +18,8 @@
package cmd
type syslogLogger struct {
Enable bool `json:"enable"`
Addr string `json:"address"`
Level string `json:"level"`
}
// enableSyslogLogger - unsupported on windows.
func enableSyslogLogger(raddr string) {
fatalIf(errSyslogNotSupported, "Unable to enable syslog.")
}
// Fallocate is not POSIX and not supported under Windows
// Always return successful
func Fallocate(fd int, offset int64, len int64) error {
return nil
}

cmd/fallocate_linux.go Normal file

@@ -0,0 +1,31 @@
// +build linux
/*
* Minio Cloud Storage, (C) 2016 Minio, Inc.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package cmd
import "syscall"
// Fallocate uses the linux Fallocate syscall, which helps us to be
// sure that subsequent writes on a file just created will not fail,
// in addition, file allocation will be contiguous on the disk
func Fallocate(fd int, offset int64, len int64) error {
return syscall.Fallocate(fd,
1, // FALLOC_FL_KEEP_SIZE
offset,
len)
}
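A minimal usage sketch of the Fallocate helper above (assumptions: Linux only, a descriptor obtained from os.Create, and a made-up path):

package main

import (
	"log"
	"os"
	"syscall"
)

func main() {
	f, err := os.Create("/tmp/prealloc.bin") // hypothetical path
	if err != nil {
		log.Fatal(err)
	}
	defer f.Close()
	// Reserve 1 MiB up front. Mode 1 is FALLOC_FL_KEEP_SIZE: the blocks are
	// reserved but the reported file size stays unchanged, matching the
	// helper above.
	if err := syscall.Fallocate(int(f.Fd()), 1, 0, 1<<20); err != nil {
		log.Fatal(err)
	}
}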


@@ -149,6 +149,20 @@ func reduceFormatErrs(errs []error, diskCount int) (err error) {
return nil
}
// creates format.json, the FS format info in minioMetaBucket.
func initFormatFS(storageDisk StorageAPI) error {
// Initialize meta volume, if volume already exists ignores it.
if err := initMetaVolume([]StorageAPI{storageDisk}); err != nil {
return fmt.Errorf("Unable to initialize '.minio.sys' meta volume, %s", err)
}
return saveFSFormatData(storageDisk, newFSFormatV1())
}
// loads format.json from minioMetaBucket if it exists.
func loadFormatFS(storageDisk StorageAPI) (format *formatConfigV1, err error) {
return loadFormat(storageDisk)
}
// loadAllFormats - load all format config from all input disks in parallel.
func loadAllFormats(bootstrapDisks []StorageAPI) ([]*formatConfigV1, []error) {
// Initialize sync waitgroup.
@@ -189,15 +203,27 @@ func loadAllFormats(bootstrapDisks []StorageAPI) ([]*formatConfigV1, []error) {
}
}
// Return all formats and nil
return formatConfigs, nil
return formatConfigs, sErrs
}
// genericFormatCheck - validates and returns error.
// genericFormatCheckFS - validates format config and returns an error if any.
func genericFormatCheckFS(formatConfig *formatConfigV1, sErr error) (err error) {
if sErr != nil {
return sErr
}
// Successfully read, validate if FS.
if !isFSFormat(formatConfig) {
return errFSDiskFormat
}
return nil
}
// genericFormatCheckXL - validates and returns error.
// if (no quorum) return error
// if (any disk is corrupt) return error // phase2
// if (jbod inconsistent) return error // phase2
// if (disks not recognized) // Always error.
func genericFormatCheck(formatConfigs []*formatConfigV1, sErrs []error) (err error) {
func genericFormatCheckXL(formatConfigs []*formatConfigV1, sErrs []error) (err error) {
// Calculate the errors.
var (
errCorruptFormatCount = 0
@@ -221,14 +247,14 @@ func genericFormatCheck(formatConfigs []*formatConfigV1, sErrs []error) (err err
}
// Calculate read quorum.
readQuorum := len(formatConfigs)/2 + 1
readQuorum := len(formatConfigs) / 2
// Validate the err count under tolerant limit.
// Validate the err count under read quorum.
if errCount > len(formatConfigs)-readQuorum {
return errXLReadQuorum
}
// Check if number of corrupted format under quorum
// Check if number of corrupted format under read quorum
if errCorruptFormatCount > len(formatConfigs)-readQuorum {
return errCorruptedFormat
}
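To make the new quorum arithmetic concrete (a worked example, not part of the diff): with 8 disks the read quorum drops from 8/2+1 = 5 to 8/2 = 4, so one more failed or corrupted format is tolerated before errXLReadQuorum or errCorruptedFormat is returned.

package main

import "fmt"

func main() {
	disks := 8
	oldQuorum := disks/2 + 1 // 5, the previous calculation
	newQuorum := disks / 2   // 4, per this change
	fmt.Println("old tolerated failures:", disks-oldQuorum) // 3
	fmt.Println("new tolerated failures:", disks-newQuorum) // 4
}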
@@ -317,7 +343,7 @@ func checkJBODConsistency(formatConfigs []*formatConfigV1) error {
}
currentJBOD := format.XL.JBOD
if !reflect.DeepEqual(sentinelJBOD, currentJBOD) {
return errors.New("Inconsistent JBOD found.")
return errors.New("Inconsistent JBOD found")
}
}
return nil
@@ -390,8 +416,7 @@ func loadFormat(disk StorageAPI) (format *formatConfigV1, err error) {
return format, nil
}
// isFormatNotFound - returns true if all `format.json` are not
// found on all disks.
// isFormatNotFound - returns true if all `format.json` are not found on all disks.
func isFormatNotFound(formats []*formatConfigV1) bool {
for _, format := range formats {
// One of the `format.json` is found.
@@ -403,8 +428,7 @@ func isFormatNotFound(formats []*formatConfigV1) bool {
return true
}
// isFormatFound - returns true if all input formats are found on
// all disks.
// isFormatFound - returns true if all input formats are found on all disks.
func isFormatFound(formats []*formatConfigV1) bool {
for _, format := range formats {
// One of `format.json` is not found.
@@ -483,7 +507,7 @@ func healFormatXLFreshDisks(storageDisks []StorageAPI) error {
// From ordered disks fill the UUID position.
for index, disk := range orderedDisks {
if disk == nil {
newJBOD[index] = getUUID()
newJBOD[index] = mustGetUUID()
}
}
@@ -524,6 +548,11 @@ func healFormatXLFreshDisks(storageDisks []StorageAPI) error {
}
}
// Initialize meta volume, if volume already exists ignores it.
if err := initMetaVolume(orderedDisks); err != nil {
return fmt.Errorf("Unable to initialize '.minio.sys' meta volume, %s", err)
}
// Save new `format.json` across all disks, in JBOD order.
return saveFormatXL(orderedDisks, newFormatConfigs)
}
@@ -668,7 +697,7 @@ func healFormatXLCorruptedDisks(storageDisks []StorageAPI) error {
// From ordered disks fill the UUID position.
for index, disk := range orderedDisks {
if disk == nil {
newJBOD[index] = getUUID()
newJBOD[index] = mustGetUUID()
}
}
@@ -715,9 +744,10 @@ func healFormatXLCorruptedDisks(storageDisks []StorageAPI) error {
// loadFormatXL - loads XL `format.json` and returns back properly
// ordered storage slice based on `format.json`.
func loadFormatXL(bootstrapDisks []StorageAPI) (disks []StorageAPI, err error) {
func loadFormatXL(bootstrapDisks []StorageAPI, readQuorum int) (disks []StorageAPI, err error) {
var unformattedDisksFoundCnt = 0
var diskNotFoundCount = 0
var corruptedDisksFoundCnt = 0
formatConfigs := make([]*formatConfigV1, len(bootstrapDisks))
// Try to load `format.json` bootstrap disks.
@@ -735,6 +765,9 @@ func loadFormatXL(bootstrapDisks []StorageAPI) (disks []StorageAPI, err error) {
} else if err == errDiskNotFound {
diskNotFoundCount++
continue
} else if err == errCorruptedFormat {
corruptedDisksFoundCnt++
continue
}
return nil, err
}
@@ -743,11 +776,13 @@ func loadFormatXL(bootstrapDisks []StorageAPI) (disks []StorageAPI, err error) {
}
// If all disks indicate that 'format.json' is not available return 'errUnformattedDisk'.
if unformattedDisksFoundCnt > len(bootstrapDisks)-(len(bootstrapDisks)/2+1) {
if unformattedDisksFoundCnt > len(bootstrapDisks)-readQuorum {
return nil, errUnformattedDisk
} else if corruptedDisksFoundCnt > len(bootstrapDisks)-readQuorum {
return nil, errCorruptedFormat
} else if diskNotFoundCount == len(bootstrapDisks) {
return nil, errDiskNotFound
} else if diskNotFoundCount > len(bootstrapDisks)-(len(bootstrapDisks)/2+1) {
} else if diskNotFoundCount > len(bootstrapDisks)-readQuorum {
return nil, errXLReadQuorum
}
@@ -759,18 +794,17 @@ func loadFormatXL(bootstrapDisks []StorageAPI) (disks []StorageAPI, err error) {
return reorderDisks(bootstrapDisks, formatConfigs)
}
// checkFormatXL - verifies if format.json format is intact.
func checkFormatXL(formatConfigs []*formatConfigV1) error {
func checkFormatXLValues(formatConfigs []*formatConfigV1) error {
for _, formatXL := range formatConfigs {
if formatXL == nil {
continue
}
// Validate format version and format type.
if formatXL.Version != "1" {
return fmt.Errorf("Unsupported version of backend format [%s] found.", formatXL.Version)
return fmt.Errorf("Unsupported version of backend format [%s] found", formatXL.Version)
}
if formatXL.Format != "xl" {
return fmt.Errorf("Unsupported backend format [%s] found.", formatXL.Format)
return fmt.Errorf("Unsupported backend format [%s] found", formatXL.Format)
}
if formatXL.XL.Version != "1" {
return fmt.Errorf("Unsupported XL backend format found [%s]", formatXL.XL.Version)
@@ -779,6 +813,14 @@ func checkFormatXL(formatConfigs []*formatConfigV1) error {
return fmt.Errorf("Number of disks %d did not match the backend format %d", len(formatConfigs), len(formatXL.XL.JBOD))
}
}
return nil
}
// checkFormatXL - verifies if format.json format is intact.
func checkFormatXL(formatConfigs []*formatConfigV1) error {
if err := checkFormatXLValues(formatConfigs); err != nil {
return err
}
if err := checkJBODConsistency(formatConfigs); err != nil {
return err
}
@@ -855,7 +897,7 @@ func initFormatXL(storageDisks []StorageAPI) (err error) {
Format: "xl",
XL: &xlFormat{
Version: "1",
Disk: getUUID(),
Disk: mustGetUUID(),
},
}
jbod[index] = formats[index].XL.Disk
@@ -870,6 +912,11 @@ func initFormatXL(storageDisks []StorageAPI) (err error) {
formats[index].XL.JBOD = jbod
}
// Initialize meta volume, if volume already exists ignores it.
if err := initMetaVolume(storageDisks); err != nil {
return fmt.Errorf("Unable to initialize '.minio.sys' meta volume, %s", err)
}
// Save formats `format.json` across all disks.
return saveFormatXL(storageDisks, formats)
}


@@ -26,7 +26,7 @@ func genFormatXLValid() []*formatConfigV1 {
jbod := make([]string, 8)
formatConfigs := make([]*formatConfigV1, 8)
for index := range jbod {
jbod[index] = getUUID()
jbod[index] = mustGetUUID()
}
for index := range jbod {
formatConfigs[index] = &formatConfigV1{
@@ -47,7 +47,7 @@ func genFormatXLInvalidVersion() []*formatConfigV1 {
jbod := make([]string, 8)
formatConfigs := make([]*formatConfigV1, 8)
for index := range jbod {
jbod[index] = getUUID()
jbod[index] = mustGetUUID()
}
for index := range jbod {
formatConfigs[index] = &formatConfigV1{
@@ -71,7 +71,7 @@ func genFormatXLInvalidFormat() []*formatConfigV1 {
jbod := make([]string, 8)
formatConfigs := make([]*formatConfigV1, 8)
for index := range jbod {
jbod[index] = getUUID()
jbod[index] = mustGetUUID()
}
for index := range jbod {
formatConfigs[index] = &formatConfigV1{
@@ -95,7 +95,7 @@ func genFormatXLInvalidXLVersion() []*formatConfigV1 {
jbod := make([]string, 8)
formatConfigs := make([]*formatConfigV1, 8)
for index := range jbod {
jbod[index] = getUUID()
jbod[index] = mustGetUUID()
}
for index := range jbod {
formatConfigs[index] = &formatConfigV1{
@@ -114,12 +114,19 @@ func genFormatXLInvalidXLVersion() []*formatConfigV1 {
return formatConfigs
}
func genFormatFS() *formatConfigV1 {
return &formatConfigV1{
Version: "1",
Format: "fs",
}
}
// generates format.json with an invalid JBOD disk count for the XL backend.
func genFormatXLInvalidJBODCount() []*formatConfigV1 {
jbod := make([]string, 7)
formatConfigs := make([]*formatConfigV1, 8)
for index := range jbod {
jbod[index] = getUUID()
jbod[index] = mustGetUUID()
}
for index := range jbod {
formatConfigs[index] = &formatConfigV1{
@@ -140,7 +147,7 @@ func genFormatXLInvalidJBOD() []*formatConfigV1 {
jbod := make([]string, 8)
formatConfigs := make([]*formatConfigV1, 8)
for index := range jbod {
jbod[index] = getUUID()
jbod[index] = mustGetUUID()
}
for index := range jbod {
formatConfigs[index] = &formatConfigV1{
@@ -154,7 +161,7 @@ func genFormatXLInvalidJBOD() []*formatConfigV1 {
}
}
for index := range jbod {
jbod[index] = getUUID()
jbod[index] = mustGetUUID()
}
// Corrupt JBOD entries on disk 6 and disk 8.
formatConfigs[5].XL.JBOD = jbod
@@ -167,7 +174,7 @@ func genFormatXLInvalidDisks() []*formatConfigV1 {
jbod := make([]string, 8)
formatConfigs := make([]*formatConfigV1, 8)
for index := range jbod {
jbod[index] = getUUID()
jbod[index] = mustGetUUID()
}
for index := range jbod {
formatConfigs[index] = &formatConfigV1{
@@ -181,8 +188,8 @@ func genFormatXLInvalidDisks() []*formatConfigV1 {
}
}
// Make disk 5 and disk 8 have inconsistent disk uuid's.
formatConfigs[4].XL.Disk = getUUID()
formatConfigs[7].XL.Disk = getUUID()
formatConfigs[4].XL.Disk = mustGetUUID()
formatConfigs[7].XL.Disk = mustGetUUID()
return formatConfigs
}
@@ -191,7 +198,7 @@ func genFormatXLInvalidDisksOrder() []*formatConfigV1 {
jbod := make([]string, 8)
formatConfigs := make([]*formatConfigV1, 8)
for index := range jbod {
jbod[index] = getUUID()
jbod[index] = mustGetUUID()
}
for index := range jbod {
formatConfigs[index] = &formatConfigV1{
@@ -215,9 +222,8 @@ func genFormatXLInvalidDisksOrder() []*formatConfigV1 {
}
func prepareFormatXLHealFreshDisks(obj ObjectLayer) ([]StorageAPI, error) {
var err error
xl := obj.(xlObjects)
xl := obj.(*xlObjects)
err = obj.MakeBucket("bucket")
if err != nil {
@@ -226,8 +232,9 @@ func prepareFormatXLHealFreshDisks(obj ObjectLayer) ([]StorageAPI, error) {
bucket := "bucket"
object := "object"
sha256sum := ""
_, err = obj.PutObject(bucket, object, int64(len("abcd")), bytes.NewReader([]byte("abcd")), nil)
_, err = obj.PutObject(bucket, object, int64(len("abcd")), bytes.NewReader([]byte("abcd")), nil, sha256sum)
if err != nil {
return []StorageAPI{}, err
}
@@ -263,8 +270,17 @@ func prepareFormatXLHealFreshDisks(obj ObjectLayer) ([]StorageAPI, error) {
}
func TestFormatXLHealFreshDisks(t *testing.T) {
nDisks := 16
fsDirs, err := getRandomDisks(nDisks)
if err != nil {
t.Fatal(err)
}
endpoints, err := parseStorageEndpoints(fsDirs)
if err != nil {
t.Fatal(err)
}
// Create an instance of xl backend.
obj, fsDirs, err := getXLObjectLayer()
obj, _, err := initObjectLayer(endpoints)
if err != nil {
t.Error(err)
}
@@ -280,7 +296,7 @@ func TestFormatXLHealFreshDisks(t *testing.T) {
}
// Load again XL format.json to validate it
_, err = loadFormatXL(storageDisks)
_, err = loadFormatXL(storageDisks, 8)
if err != nil {
t.Fatal("loading healed disk failed: ", err)
}
@@ -290,8 +306,17 @@ func TestFormatXLHealFreshDisks(t *testing.T) {
}
func TestFormatXLHealFreshDisksErrorExpected(t *testing.T) {
nDisks := 16
fsDirs, err := getRandomDisks(nDisks)
if err != nil {
t.Fatal(err)
}
endpoints, err := parseStorageEndpoints(fsDirs)
if err != nil {
t.Fatal(err)
}
// Create an instance of xl backend.
obj, fsDirs, err := getXLObjectLayer()
obj, _, err := initObjectLayer(endpoints)
if err != nil {
t.Error(err)
}
@@ -301,13 +326,11 @@ func TestFormatXLHealFreshDisksErrorExpected(t *testing.T) {
t.Fatal(err)
}
for i := 0; i < 16; i++ {
d := storageDisks[i].(*posix)
storageDisks[i] = &naughtyDisk{disk: d, defaultErr: errDiskNotFound}
}
// Prepare all disks to be offline.
prepareNOfflineDisks(storageDisks, 16, t)
// Load again XL format.json to validate it
_, err = loadFormatXL(storageDisks)
_, err = loadFormatXL(storageDisks, 8)
if err == nil {
t.Fatal("loading format disk error")
}
@@ -326,12 +349,12 @@ func TestFormatXLHealFreshDisksErrorExpected(t *testing.T) {
// a given disk to test healing a corrupted disk
func TestFormatXLHealCorruptedDisks(t *testing.T) {
// Create an instance of xl backend.
obj, fsDirs, err := getXLObjectLayer()
obj, fsDirs, err := prepareXL()
if err != nil {
t.Fatal(err)
}
xl := obj.(xlObjects)
xl := obj.(*xlObjects)
err = obj.MakeBucket("bucket")
if err != nil {
@@ -340,8 +363,9 @@ func TestFormatXLHealCorruptedDisks(t *testing.T) {
bucket := "bucket"
object := "object"
sha256sum := ""
_, err = obj.PutObject(bucket, object, int64(len("abcd")), bytes.NewReader([]byte("abcd")), nil)
_, err = obj.PutObject(bucket, object, int64(len("abcd")), bytes.NewReader([]byte("abcd")), nil, sha256sum)
if err != nil {
t.Fatal(err)
}
@@ -385,7 +409,7 @@ func TestFormatXLHealCorruptedDisks(t *testing.T) {
}
// Load again XL format.json to validate it
_, err = loadFormatXL(permutedStorageDisks)
_, err = loadFormatXL(permutedStorageDisks, 8)
if err != nil {
t.Fatal("loading healed disk failed: ", err)
}
@@ -398,12 +422,12 @@ func TestFormatXLHealCorruptedDisks(t *testing.T) {
// some of format.json
func TestFormatXLReorderByInspection(t *testing.T) {
// Create an instance of xl backend.
obj, fsDirs, err := getXLObjectLayer()
obj, fsDirs, err := prepareXL()
if err != nil {
t.Fatal(err)
}
xl := obj.(xlObjects)
xl := obj.(*xlObjects)
err = obj.MakeBucket("bucket")
if err != nil {
@@ -412,8 +436,9 @@ func TestFormatXLReorderByInspection(t *testing.T) {
bucket := "bucket"
object := "object"
sha256sum := ""
_, err = obj.PutObject(bucket, object, int64(len("abcd")), bytes.NewReader([]byte("abcd")), nil)
_, err = obj.PutObject(bucket, object, int64(len("abcd")), bytes.NewReader([]byte("abcd")), nil, sha256sum)
if err != nil {
t.Fatal(err)
}
@@ -450,7 +475,7 @@ func TestFormatXLReorderByInspection(t *testing.T) {
t.Fatal("should not be nil")
}
if orderedDisks[i] != nil && orderedDisks[i] != xl.storageDisks[i] {
t.Fatal("Disks were not ordered correctly.")
t.Fatal("Disks were not ordered correctly")
}
}
@@ -534,7 +559,7 @@ func TestSavedUUIDOrder(t *testing.T) {
jbod := make([]string, 8)
formatConfigs := make([]*formatConfigV1, 8)
for index := range jbod {
jbod[index] = getUUID()
jbod[index] = mustGetUUID()
uuidTestCases[index].uuid = jbod[index]
uuidTestCases[index].shouldPass = true
}
@@ -569,18 +594,28 @@ func TestSavedUUIDOrder(t *testing.T) {
// Test initFormatXL() when disks are expected to return errors
func TestInitFormatXLErrors(t *testing.T) {
// Create an instance of xl backend.
obj, fsDirs, err := getXLObjectLayer()
nDisks := 16
fsDirs, err := getRandomDisks(nDisks)
if err != nil {
t.Fatal(err)
}
xl := obj.(xlObjects)
defer removeRoots(fsDirs)
endpoints, err := parseStorageEndpoints(fsDirs)
if err != nil {
t.Fatal(err)
}
// Create an instance of xl backend.
obj, _, err := initObjectLayer(endpoints)
if err != nil {
t.Fatal(err)
}
xl := obj.(*xlObjects)
testStorageDisks := make([]StorageAPI, 16)
// All disks API return disk not found
for i := 0; i < 16; i++ {
d := xl.storageDisks[i].(*posix)
d := xl.storageDisks[i].(*retryStorage)
testStorageDisks[i] = &naughtyDisk{disk: d, defaultErr: errDiskNotFound}
}
if err := initFormatXL(testStorageDisks); err != errDiskNotFound {
@@ -589,7 +624,7 @@ func TestInitFormatXLErrors(t *testing.T) {
// All disks returns disk not found in the fourth call
for i := 0; i < 15; i++ {
d := xl.storageDisks[i].(*posix)
d := xl.storageDisks[i].(*retryStorage)
testStorageDisks[i] = &naughtyDisk{disk: d, defaultErr: errDiskNotFound, errors: map[int]error{0: nil, 1: nil, 2: nil}}
}
if err := initFormatXL(testStorageDisks); err != errDiskNotFound {
@@ -603,8 +638,6 @@ func TestInitFormatXLErrors(t *testing.T) {
if err := initFormatXL(testStorageDisks); err != errDiskNotFound {
t.Fatal("Got a different error: ", err)
}
removeRoots(fsDirs)
}
// Test for reduceFormatErrs()
@@ -615,96 +648,154 @@ func TestReduceFormatErrs(t *testing.T) {
}
// One corrupted format
if err := reduceFormatErrs([]error{nil, nil, errCorruptedFormat, nil}, 4); err != errCorruptedFormat {
t.Fatal("Got a differnt error: ", err)
t.Fatal("Got a different error: ", err)
}
// All disks unformatted
if err := reduceFormatErrs([]error{errUnformattedDisk, errUnformattedDisk, errUnformattedDisk, errUnformattedDisk}, 4); err != errUnformattedDisk {
t.Fatal("Got a differnt error: ", err)
t.Fatal("Got a different error: ", err)
}
// Some disks unformatted
if err := reduceFormatErrs([]error{nil, nil, errUnformattedDisk, errUnformattedDisk}, 4); err != errSomeDiskUnformatted {
t.Fatal("Got a differnt error: ", err)
t.Fatal("Got a different error: ", err)
}
// Some disks offline
if err := reduceFormatErrs([]error{nil, nil, errDiskNotFound, errUnformattedDisk}, 4); err != errSomeDiskOffline {
t.Fatal("Got a differnt error: ", err)
t.Fatal("Got a different error: ", err)
}
}
// Tests for genericFormatCheck()
func TestGenericFormatCheck(t *testing.T) {
// Tests for genericFormatCheckFS()
func TestGenericFormatCheckFS(t *testing.T) {
// Generate format configs for XL.
formatConfigs := genFormatXLInvalidJBOD()
// Validate disk format is fs, should fail.
if err := genericFormatCheckFS(formatConfigs[0], nil); err != errFSDiskFormat {
t.Fatalf("Unexpected error, expected %s, got %s", errFSDiskFormat, err)
}
// Validate disk is unformatted, should fail.
if err := genericFormatCheckFS(nil, errUnformattedDisk); err != errUnformattedDisk {
t.Fatalf("Unexpected error, expected %s, got %s", errUnformattedDisk, err)
}
// Validate when disk is in FS format.
format := newFSFormatV1()
if err := genericFormatCheckFS(format, nil); err != nil {
t.Fatalf("Unexpected error should pass, failed with %s", err)
}
}
// Tests for genericFormatCheckXL()
func TestGenericFormatCheckXL(t *testing.T) {
var errs []error
formatConfigs := genFormatXLInvalidJBOD()
// Some disks has corrupted formats, one faulty disk
errs = []error{nil, nil, errCorruptedFormat, errCorruptedFormat, errCorruptedFormat, errCorruptedFormat,
errCorruptedFormat, errFaultyDisk}
if err := genericFormatCheck(formatConfigs, errs); err != errCorruptedFormat {
if err := genericFormatCheckXL(formatConfigs, errs); err != errCorruptedFormat {
t.Fatal("Got unexpected err: ", err)
}
// Many faulty disks
errs = []error{nil, nil, errFaultyDisk, errFaultyDisk, errFaultyDisk, errFaultyDisk,
errCorruptedFormat, errFaultyDisk}
if err := genericFormatCheck(formatConfigs, errs); err != errXLReadQuorum {
if err := genericFormatCheckXL(formatConfigs, errs); err != errXLReadQuorum {
t.Fatal("Got unexpected err: ", err)
}
// All formats successfully loaded
errs = []error{nil, nil, nil, nil, nil, nil, nil, nil}
if err := genericFormatCheck(formatConfigs, errs); err == nil {
if err := genericFormatCheckXL(formatConfigs, errs); err == nil {
t.Fatalf("Should fail here")
}
errs = []error{nil}
if err := genericFormatCheckXL([]*formatConfigV1{genFormatFS()}, errs); err == nil {
t.Fatalf("Should fail here")
}
errs = []error{errFaultyDisk}
if err := genericFormatCheckXL([]*formatConfigV1{genFormatFS()}, errs); err == nil {
t.Fatalf("Should fail here")
}
}
func TestLoadFormatXLErrs(t *testing.T) {
// Create an instance of xl backend.
obj, fsDirs, err := getXLObjectLayer()
nDisks := 16
fsDirs, err := getRandomDisks(nDisks)
if err != nil {
t.Fatal(err)
}
xl := obj.(xlObjects)
defer removeRoots(fsDirs)
endpoints, err := parseStorageEndpoints(fsDirs)
if err != nil {
t.Fatal(err)
}
// Create an instance of xl backend.
obj, _, err := initObjectLayer(endpoints)
if err != nil {
t.Fatal(err)
}
xl := obj.(*xlObjects)
xl.storageDisks[11] = nil
// disk 12 returns faulty disk
posixDisk, ok := xl.storageDisks[12].(*posix)
posixDisk, ok := xl.storageDisks[12].(*retryStorage)
if !ok {
t.Fatal("storage disk is not *posix type")
t.Fatal("storage disk is not *retryStorage type")
}
xl.storageDisks[10] = newNaughtyDisk(posixDisk, nil, errFaultyDisk)
if _, err = loadFormatXL(xl.storageDisks); err != errFaultyDisk {
if _, err = loadFormatXL(xl.storageDisks, 8); err != errFaultyDisk {
t.Fatal("Got an unexpected error: ", err)
}
removeRoots(fsDirs)
obj, fsDirs, err = getXLObjectLayer()
fsDirs, err = getRandomDisks(nDisks)
if err != nil {
t.Fatal(err)
}
xl = obj.(xlObjects)
defer removeRoots(fsDirs)
endpoints, err = parseStorageEndpoints(fsDirs)
if err != nil {
t.Fatal(err)
}
obj, _, err = initObjectLayer(endpoints)
if err != nil {
t.Fatal(err)
}
xl = obj.(*xlObjects)
// disks 0..10 returns disk not found
for i := 0; i <= 10; i++ {
posixDisk, ok := xl.storageDisks[i].(*posix)
posixDisk, ok := xl.storageDisks[i].(*retryStorage)
if !ok {
t.Fatal("storage disk is not *posix type")
t.Fatal("storage disk is not *retryStorage type")
}
xl.storageDisks[i] = newNaughtyDisk(posixDisk, nil, errDiskNotFound)
}
if _, err = loadFormatXL(xl.storageDisks); err != errXLReadQuorum {
if _, err = loadFormatXL(xl.storageDisks, 8); err != errXLReadQuorum {
t.Fatal("Got an unexpected error: ", err)
}
removeRoots(fsDirs)
obj, fsDirs, err = getXLObjectLayer()
fsDirs, err = getRandomDisks(nDisks)
if err != nil {
t.Fatal(err)
}
xl = obj.(xlObjects)
defer removeRoots(fsDirs)
endpoints, err = parseStorageEndpoints(fsDirs)
if err != nil {
t.Fatal(err)
}
obj, _, err = initObjectLayer(endpoints)
if err != nil {
t.Fatal(err)
}
xl = obj.(*xlObjects)
// disks 0..10 returns unformatted disk
for i := 0; i <= 10; i++ {
@@ -712,48 +803,82 @@ func TestLoadFormatXLErrs(t *testing.T) {
t.Fatal(err)
}
}
if _, err = loadFormatXL(xl.storageDisks); err != errUnformattedDisk {
if _, err = loadFormatXL(xl.storageDisks, 8); err != errUnformattedDisk {
t.Fatal("Got an unexpected error: ", err)
}
removeRoots(fsDirs)
obj, fsDirs, err = getXLObjectLayer()
fsDirs, err = getRandomDisks(nDisks)
if err != nil {
t.Fatal(err)
}
xl = obj.(xlObjects)
defer removeRoots(fsDirs)
endpoints, err = parseStorageEndpoints(fsDirs)
if err != nil {
t.Fatal(err)
}
obj, _, err = initObjectLayer(endpoints)
if err != nil {
t.Fatal(err)
}
xl = obj.(*xlObjects)
// disks 0..15 are nil (disk not found)
for i := 0; i < 16; i++ {
xl.storageDisks[i] = nil
}
if _, err := loadFormatXL(xl.storageDisks); err != errDiskNotFound {
if _, err := loadFormatXL(xl.storageDisks, 8); err != errDiskNotFound {
t.Fatal("Got an unexpected error: ", err)
}
removeRoots(fsDirs)
}
// Tests for healFormatXLCorruptedDisks() with cases which lead to errors
func TestHealFormatXLCorruptedDisksErrs(t *testing.T) {
// Everything is fine, should return nil
obj, fsDirs, err := getXLObjectLayer()
root, err := newTestConfig("us-east-1")
if err != nil {
t.Fatal(err)
}
xl := obj.(xlObjects)
defer removeAll(root)
nDisks := 16
fsDirs, err := getRandomDisks(nDisks)
if err != nil {
t.Fatal(err)
}
endpoints, err := parseStorageEndpoints(fsDirs)
if err != nil {
t.Fatal(err)
}
// Everything is fine, should return nil
obj, _, err := initObjectLayer(endpoints)
if err != nil {
t.Fatal(err)
}
xl := obj.(*xlObjects)
if err = healFormatXLCorruptedDisks(xl.storageDisks); err != nil {
t.Fatal("Got an unexpected error: ", err)
}
removeRoots(fsDirs)
// Disks 0..15 are nil
obj, fsDirs, err = getXLObjectLayer()
fsDirs, err = getRandomDisks(nDisks)
if err != nil {
t.Fatal(err)
}
xl = obj.(xlObjects)
endpoints, err = parseStorageEndpoints(fsDirs)
if err != nil {
t.Fatal(err)
}
// Disks 0..15 are nil
obj, _, err = initObjectLayer(endpoints)
if err != nil {
t.Fatal(err)
}
xl = obj.(*xlObjects)
for i := 0; i <= 15; i++ {
xl.storageDisks[i] = nil
}
@@ -762,15 +887,25 @@ func TestHealFormatXLCorruptedDisksErrs(t *testing.T) {
}
removeRoots(fsDirs)
// One disk returns Faulty Disk
obj, fsDirs, err = getXLObjectLayer()
fsDirs, err = getRandomDisks(nDisks)
if err != nil {
t.Fatal(err)
}
xl = obj.(xlObjects)
posixDisk, ok := xl.storageDisks[0].(*posix)
endpoints, err = parseStorageEndpoints(fsDirs)
if err != nil {
t.Fatal(err)
}
// One disk returns Faulty Disk
obj, _, err = initObjectLayer(endpoints)
if err != nil {
t.Fatal(err)
}
xl = obj.(*xlObjects)
posixDisk, ok := xl.storageDisks[0].(*retryStorage)
if !ok {
t.Fatal("storage disk is not *posix type")
t.Fatal("storage disk is not *retryStorage type")
}
xl.storageDisks[0] = newNaughtyDisk(posixDisk, nil, errFaultyDisk)
if err = healFormatXLCorruptedDisks(xl.storageDisks); err != errFaultyDisk {
@@ -778,24 +913,44 @@ func TestHealFormatXLCorruptedDisksErrs(t *testing.T) {
}
removeRoots(fsDirs)
// One disk is not found, heal corrupted disks should return nil
obj, fsDirs, err = getXLObjectLayer()
fsDirs, err = getRandomDisks(nDisks)
if err != nil {
t.Fatal(err)
}
xl = obj.(xlObjects)
endpoints, err = parseStorageEndpoints(fsDirs)
if err != nil {
t.Fatal(err)
}
// One disk is not found, heal corrupted disks should return nil
obj, _, err = initObjectLayer(endpoints)
if err != nil {
t.Fatal(err)
}
xl = obj.(*xlObjects)
xl.storageDisks[0] = nil
if err = healFormatXLCorruptedDisks(xl.storageDisks); err != nil {
t.Fatal("Got an unexpected error: ", err)
}
removeRoots(fsDirs)
// Remove format.json of all disks
obj, fsDirs, err = getXLObjectLayer()
fsDirs, err = getRandomDisks(nDisks)
if err != nil {
t.Fatal(err)
}
xl = obj.(xlObjects)
endpoints, err = parseStorageEndpoints(fsDirs)
if err != nil {
t.Fatal(err)
}
// Remove format.json of all disks
obj, _, err = initObjectLayer(endpoints)
if err != nil {
t.Fatal(err)
}
xl = obj.(*xlObjects)
for i := 0; i <= 15; i++ {
if err = xl.storageDisks[i].DeleteFile(".minio.sys", "format.json"); err != nil {
t.Fatal(err)
@@ -806,12 +961,22 @@ func TestHealFormatXLCorruptedDisksErrs(t *testing.T) {
}
removeRoots(fsDirs)
// Corrupted format json in one disk
obj, fsDirs, err = getXLObjectLayer()
fsDirs, err = getRandomDisks(nDisks)
if err != nil {
t.Fatal(err)
}
xl = obj.(xlObjects)
endpoints, err = parseStorageEndpoints(fsDirs)
if err != nil {
t.Fatal(err)
}
// Corrupted format json in one disk
obj, _, err = initObjectLayer(endpoints)
if err != nil {
t.Fatal(err)
}
xl = obj.(*xlObjects)
for i := 0; i <= 15; i++ {
if err = xl.storageDisks[i].AppendFile(".minio.sys", "format.json", []byte("corrupted data")); err != nil {
t.Fatal(err)
@@ -825,23 +990,50 @@ func TestHealFormatXLCorruptedDisksErrs(t *testing.T) {
// Tests for healFormatXLFreshDisks() with cases which lead to errors
func TestHealFormatXLFreshDisksErrs(t *testing.T) {
// Everything is fine, should return nil
obj, fsDirs, err := getXLObjectLayer()
root, err := newTestConfig("us-east-1")
if err != nil {
t.Fatal(err)
}
xl := obj.(xlObjects)
defer removeAll(root)
nDisks := 16
fsDirs, err := getRandomDisks(nDisks)
if err != nil {
t.Fatal(err)
}
endpoints, err := parseStorageEndpoints(fsDirs)
if err != nil {
t.Fatal(err)
}
// Everything is fine, should return nil
obj, _, err := initObjectLayer(endpoints)
if err != nil {
t.Fatal(err)
}
xl := obj.(*xlObjects)
if err = healFormatXLFreshDisks(xl.storageDisks); err != nil {
t.Fatal("Got an unexpected error: ", err)
}
removeRoots(fsDirs)
// Disks 0..15 are nil
obj, fsDirs, err = getXLObjectLayer()
fsDirs, err = getRandomDisks(nDisks)
if err != nil {
t.Fatal(err)
}
xl = obj.(xlObjects)
endpoints, err = parseStorageEndpoints(fsDirs)
if err != nil {
t.Fatal(err)
}
// Disks 0..15 are nil
obj, _, err = initObjectLayer(endpoints)
if err != nil {
t.Fatal(err)
}
xl = obj.(*xlObjects)
for i := 0; i <= 15; i++ {
xl.storageDisks[i] = nil
}
@@ -850,15 +1042,25 @@ func TestHealFormatXLFreshDisksErrs(t *testing.T) {
}
removeRoots(fsDirs)
// One disk returns Faulty Disk
obj, fsDirs, err = getXLObjectLayer()
fsDirs, err = getRandomDisks(nDisks)
if err != nil {
t.Fatal(err)
}
xl = obj.(xlObjects)
posixDisk, ok := xl.storageDisks[0].(*posix)
endpoints, err = parseStorageEndpoints(fsDirs)
if err != nil {
t.Fatal(err)
}
// One disk returns Faulty Disk
obj, _, err = initObjectLayer(endpoints)
if err != nil {
t.Fatal(err)
}
xl = obj.(*xlObjects)
posixDisk, ok := xl.storageDisks[0].(*retryStorage)
if !ok {
t.Fatal("storage disk is not *posix type")
t.Fatal("storage disk is not *retryStorage type")
}
xl.storageDisks[0] = newNaughtyDisk(posixDisk, nil, errFaultyDisk)
if err = healFormatXLFreshDisks(xl.storageDisks); err != errFaultyDisk {
@@ -866,24 +1068,44 @@ func TestHealFormatXLFreshDisksErrs(t *testing.T) {
}
removeRoots(fsDirs)
// One disk is not found, heal corrupted disks should return nil
obj, fsDirs, err = getXLObjectLayer()
fsDirs, err = getRandomDisks(nDisks)
if err != nil {
t.Fatal(err)
}
xl = obj.(xlObjects)
endpoints, err = parseStorageEndpoints(fsDirs)
if err != nil {
t.Fatal(err)
}
// One disk is not found, heal corrupted disks should return nil
obj, _, err = initObjectLayer(endpoints)
if err != nil {
t.Fatal(err)
}
xl = obj.(*xlObjects)
xl.storageDisks[0] = nil
if err = healFormatXLFreshDisks(xl.storageDisks); err != nil {
t.Fatal("Got an unexpected error: ", err)
}
removeRoots(fsDirs)
// Remove format.json of all disks
obj, fsDirs, err = getXLObjectLayer()
fsDirs, err = getRandomDisks(nDisks)
if err != nil {
t.Fatal(err)
}
xl = obj.(xlObjects)
endpoints, err = parseStorageEndpoints(fsDirs)
if err != nil {
t.Fatal(err)
}
// Remove format.json of all disks
obj, _, err = initObjectLayer(endpoints)
if err != nil {
t.Fatal(err)
}
xl = obj.(*xlObjects)
for i := 0; i <= 15; i++ {
if err = xl.storageDisks[i].DeleteFile(".minio.sys", "format.json"); err != nil {
t.Fatal(err)
@@ -893,23 +1115,6 @@ func TestHealFormatXLFreshDisksErrs(t *testing.T) {
t.Fatal("Got an unexpected error: ", err)
}
removeRoots(fsDirs)
// Remove format.json of all disks
obj, fsDirs, err = getXLObjectLayer()
if err != nil {
t.Fatal(err)
}
xl = obj.(xlObjects)
for i := 0; i <= 15; i++ {
if err = xl.storageDisks[i].DeleteFile(".minio.sys", "format.json"); err != nil {
t.Fatal(err)
}
}
if err = healFormatXLFreshDisks(xl.storageDisks); err != nil {
t.Fatal("Got an unexpected error: ", err)
}
removeRoots(fsDirs)
}
// Tests for isFormatFound()


@@ -24,14 +24,12 @@ func fsCreateFile(disk StorageAPI, reader io.Reader, buf []byte, tmpBucket, temp
for {
n, rErr := reader.Read(buf)
if rErr != nil && rErr != io.EOF {
return 0, rErr
return 0, traceError(rErr)
}
bytesWritten += int64(n)
if n > 0 {
wErr := disk.AppendFile(tmpBucket, tempObj, buf[0:n])
if wErr != nil {
return 0, wErr
}
wErr := disk.AppendFile(tmpBucket, tempObj, buf[0:n])
if wErr != nil {
return 0, traceError(wErr)
}
if rErr == io.EOF {
break


@@ -0,0 +1,214 @@
/*
* Minio Cloud Storage, (C) 2016 Minio, Inc.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package cmd
import (
"errors"
"reflect"
"sync"
"time"
)
// Error sent by the appendParts go-routine when there are holes in parts.
// For example, if a client uploads part-2 before part-1, we cannot append
// and have to wait till part-1 is uploaded. Hence we return this error.
// Currently this error is not used by the caller.
var errPartsMissing = errors.New("required parts missing")
// Error sent when the appendParts go-routine has waited long enough and timed out.
var errAppendPartsTimeout = errors.New("appendParts goroutine timeout")
// Timeout value for the appendParts go-routine.
var appendPartsTimeout = 24 * time.Hour
// Holds a map of uploadID->appendParts go-routine
type backgroundAppend struct {
infoMap map[string]bgAppendPartsInfo
sync.Mutex
}
// Input to the appendParts go-routine
type bgAppendPartsInput struct {
meta fsMetaV1 // list of parts that need to be appended
errCh chan error // error sent by appendParts go-routine
}
// Identifies an appendParts go-routine.
type bgAppendPartsInfo struct {
inputCh chan bgAppendPartsInput
timeoutCh chan struct{} // closed by appendParts go-routine when it times out
abortCh chan struct{} // closed after abort of upload to end the appendParts go-routine
completeCh chan struct{} // closed after complete of upload to end the appendParts go-routine
}
// Called after a part is uploaded so that it can be appended in the background.
func (b *backgroundAppend) append(disk StorageAPI, bucket, object, uploadID string, meta fsMetaV1) chan error {
b.Lock()
info, ok := b.infoMap[uploadID]
if !ok {
// Corresponding appendParts go-routine was not found, create a new one. Would happen when the first
// part of a multipart upload is uploaded.
inputCh := make(chan bgAppendPartsInput)
timeoutCh := make(chan struct{})
abortCh := make(chan struct{})
completeCh := make(chan struct{})
info = bgAppendPartsInfo{inputCh, timeoutCh, abortCh, completeCh}
b.infoMap[uploadID] = info
go b.appendParts(disk, bucket, object, uploadID, info)
}
b.Unlock()
errCh := make(chan error)
go func() {
// send input in a goroutine as send on the inputCh can block if appendParts go-routine
// is busy appending a part.
select {
case <-info.timeoutCh:
// This is to handle a rare race condition where we found info in b.infoMap
// but soon after that appendParts go-routine timed out.
errCh <- errAppendPartsTimeout
case info.inputCh <- bgAppendPartsInput{meta, errCh}:
}
}()
return errCh
}
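The select against timeoutCh above is what keeps a racing PutObjectPart from blocking forever on the unbuffered inputCh. A standalone sketch of that handoff pattern (names simplified, timings made up):

package main

import (
	"fmt"
	"time"
)

func main() {
	inputCh := make(chan string)    // unbuffered, like bgAppendPartsInfo's inputCh
	timeoutCh := make(chan struct{}) // closed when the worker gives up

	go func() {
		time.Sleep(50 * time.Millisecond)
		close(timeoutCh) // the worker timed out instead of receiving
	}()

	select {
	case inputCh <- "part metadata": // would block: nobody is receiving
		fmt.Println("delivered")
	case <-timeoutCh:
		fmt.Println("worker timed out; sender is not left hanging") // printed
	}
}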
// Called on complete-multipart-upload. Returns nil if the required parts have been appended.
func (b *backgroundAppend) complete(disk StorageAPI, bucket, object, uploadID string, meta fsMetaV1) error {
b.Lock()
defer b.Unlock()
info, ok := b.infoMap[uploadID]
delete(b.infoMap, uploadID)
if !ok {
return errPartsMissing
}
errCh := make(chan error)
select {
case <-info.timeoutCh:
// This is to handle a rare race condition where we found info in b.infoMap
// but soon after that appendParts go-routine timed out.
return errAppendPartsTimeout
case info.inputCh <- bgAppendPartsInput{meta, errCh}:
}
err := <-errCh
close(info.completeCh)
return err
}
// Called after complete-multipart-upload or abort-multipart-upload so that the appendParts go-routine is not left dangling.
func (b *backgroundAppend) abort(uploadID string) {
b.Lock()
defer b.Unlock()
info, ok := b.infoMap[uploadID]
if !ok {
return
}
delete(b.infoMap, uploadID)
info.abortCh <- struct{}{}
}
// This is run as a go-routine that appends the parts in the background.
func (b *backgroundAppend) appendParts(disk StorageAPI, bucket, object, uploadID string, info bgAppendPartsInfo) {
// Holds the list of parts that is already appended to the "append" file.
appendMeta := fsMetaV1{}
// Allocate staging read buffer.
buf := make([]byte, readSizeV1)
for {
select {
case input := <-info.inputCh:
// We receive on this channel when new part gets uploaded or when complete-multipart sends
// a value on this channel to confirm if all the required parts are appended.
meta := input.meta
for {
// Append is done in such a way that if part-3 and part-2 are uploaded before part-1, we
// wait till part-1 is uploaded, after which part-2 and part-3 are appended as well in this for-loop.
part, appendNeeded := partToAppend(meta, appendMeta)
if !appendNeeded {
if reflect.DeepEqual(meta.Parts, appendMeta.Parts) {
// Sending nil is useful so that the complete-multipart-upload knows that
// all the required parts have been appended.
input.errCh <- nil
} else {
// Sending error is useful so that complete-multipart-upload can fall-back to
// its own append process.
input.errCh <- errPartsMissing
}
break
}
if err := appendPart(disk, bucket, object, uploadID, part, buf); err != nil {
disk.DeleteFile(minioMetaTmpBucket, uploadID)
appendMeta.Parts = nil
input.errCh <- err
break
}
appendMeta.AddObjectPart(part.Number, part.Name, part.ETag, part.Size)
}
case <-info.abortCh:
// abort-multipart-upload closed abortCh to end the appendParts go-routine.
disk.DeleteFile(minioMetaTmpBucket, uploadID)
close(info.timeoutCh) // So that any racing PutObjectPart does not leave a dangling go-routine.
return
case <-info.completeCh:
// complete-multipart-upload closed completeCh to end the appendParts go-routine.
close(info.timeoutCh) // So that any racing PutObjectPart does not leave a dangling go-routine.
return
case <-time.After(appendPartsTimeout):
// Timeout the goroutine to garbage collect its resources. This would happen if the client initiates
// a multipart upload and does not complete/abort it.
b.Lock()
delete(b.infoMap, uploadID)
b.Unlock()
// Delete the temporary append file as well.
disk.DeleteFile(minioMetaTmpBucket, uploadID)
close(info.timeoutCh)
return
}
}
}
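partToAppend is referenced but not shown in this diff; a plausible sketch of its contract (hypothetical stand-in types, for illustration only):

package main

import "fmt"

// Minimal stand-ins for the fsMetaV1/objectPartInfo shapes used above.
type part struct{ Number int }
type meta struct{ Parts []part }

// nextPartToAppend mirrors the contract the comments above ascribe to
// partToAppend: parts are appended strictly in order, so a hole (part-2
// arriving before part-1) yields ok=false and the loop waits.
func nextPartToAppend(m, appended meta) (p part, ok bool) {
	next := len(appended.Parts) + 1
	for _, c := range m.Parts {
		if c.Number == next {
			return c, true
		}
	}
	return part{}, false
}

func main() {
	uploaded := meta{Parts: []part{{2}, {3}}} // part-1 missing
	if _, ok := nextPartToAppend(uploaded, meta{}); !ok {
		fmt.Println("hole detected: waiting for part-1") // printed
	}
}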
// Appends the "part" to the append-file inside "tmp/" that finally gets moved to the actual location
// upon complete-multipart-upload.
func appendPart(disk StorageAPI, bucket, object, uploadID string, part objectPartInfo, buf []byte) error {
partPath := pathJoin(bucket, object, uploadID, part.Name)
offset := int64(0)
totalLeft := part.Size
for totalLeft > 0 {
curLeft := int64(readSizeV1)
if totalLeft < readSizeV1 {
curLeft = totalLeft
}
n, err := disk.ReadFile(minioMetaMultipartBucket, partPath, offset, buf[:curLeft])
if err != nil {
// No need to check for io.EOF/io.ErrUnexpectedEOF here: we know the
// exact size of the file and hence the size of buf[].
// EOF/ErrUnexpectedEOF would indicate the file was shorter than
// part.Size and is therefore treated as an error condition.
return err
}
if err = disk.AppendFile(minioMetaTmpBucket, uploadID, buf[:n]); err != nil {
return err
}
offset += n
totalLeft -= n
}
return nil
}


@@ -19,4 +19,4 @@ package cmd
import "errors"
// errFSDiskFormat - returned when given disk format is other than FS format.
var errFSDiskFormat = errors.New("Disk is not in FS format.")
var errFSDiskFormat = errors.New("Disk is not in FS format")


@@ -18,10 +18,7 @@ package cmd
import (
"encoding/json"
"os"
"path"
"sort"
"strings"
)
const (
@@ -81,12 +78,12 @@ func readFSMetadata(disk StorageAPI, bucket, filePath string) (fsMeta fsMetaV1,
// Read all `fs.json`.
buf, err := disk.ReadAll(bucket, filePath)
if err != nil {
return fsMetaV1{}, err
return fsMetaV1{}, traceError(err)
}
// Decode `fs.json` into fsMeta structure.
if err = json.Unmarshal(buf, &fsMeta); err != nil {
return fsMetaV1{}, err
return fsMetaV1{}, traceError(err)
}
// Success.
@@ -94,16 +91,23 @@ func readFSMetadata(disk StorageAPI, bucket, filePath string) (fsMeta fsMetaV1,
}
// Write fsMeta to fs.json or fs-append.json.
func writeFSMetadata(disk StorageAPI, bucket, filePath string, fsMeta fsMetaV1) (err error) {
tmpPath := path.Join(tmpMetaPrefix, getUUID())
func writeFSMetadata(disk StorageAPI, bucket, filePath string, fsMeta fsMetaV1) error {
tmpPath := mustGetUUID()
metadataBytes, err := json.Marshal(fsMeta)
if err != nil {
return err
return traceError(err)
}
if err = disk.AppendFile(minioMetaBucket, tmpPath, metadataBytes); err != nil {
return err
if err = disk.AppendFile(minioMetaTmpBucket, tmpPath, metadataBytes); err != nil {
return traceError(err)
}
return disk.RenameFile(minioMetaBucket, tmpPath, bucket, filePath)
err = disk.RenameFile(minioMetaTmpBucket, tmpPath, bucket, filePath)
if err != nil {
// Clean up the temporary file, but still surface the rename failure.
if dErr := disk.DeleteFile(minioMetaTmpBucket, tmpPath); dErr != nil {
return traceError(dErr)
}
return traceError(err)
}
return nil
}
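writeFSMetadata follows the classic write-to-temp-then-rename pattern so readers never observe a partially written fs.json. The same idea with plain file APIs (an illustrative sketch; the real code goes through StorageAPI):

package main

import (
	"encoding/json"
	"io/ioutil"
	"log"
	"os"
	"path/filepath"
)

func atomicWriteJSON(path string, v interface{}) error {
	buf, err := json.Marshal(v)
	if err != nil {
		return err
	}
	tmp := path + ".tmp"
	if err := ioutil.WriteFile(tmp, buf, 0644); err != nil {
		return err
	}
	// Rename is atomic on POSIX filesystems, so a reader sees either the
	// old file or the complete new one, never a partial write.
	if err := os.Rename(tmp, path); err != nil {
		os.Remove(tmp) // best-effort cleanup, still surface the rename error
		return err
	}
	return nil
}

func main() {
	path := filepath.Join(os.TempDir(), "fs.json")
	if err := atomicWriteJSON(path, map[string]string{"version": "1"}); err != nil {
		log.Fatal(err)
	}
}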
// newFSMetaV1 - initializes new fsMetaV1.
@@ -116,8 +120,8 @@ func newFSMetaV1() (fsMeta fsMetaV1) {
}
// newFSFormatV1 - initializes new formatConfigV1 with FS format info.
func newFSFormatV1() (format formatConfigV1) {
return formatConfigV1{
func newFSFormatV1() (format *formatConfigV1) {
return &formatConfigV1{
Version: "1",
Format: "fs",
FS: &fsFormat{
@@ -127,12 +131,12 @@ func newFSFormatV1() (format formatConfigV1) {
}
// isFSFormat - returns whether given formatConfigV1 is FS type or not.
func isFSFormat(format formatConfigV1) bool {
func isFSFormat(format *formatConfigV1) bool {
return format.Format == "fs"
}
// writes FS format (format.json) into minioMetaBucket.
func writeFSFormatData(storage StorageAPI, fsFormat formatConfigV1) error {
func saveFSFormatData(storage StorageAPI, fsFormat *formatConfigV1) error {
metadataBytes, err := json.Marshal(fsFormat)
if err != nil {
return err
@@ -157,32 +161,3 @@ func isPartsSame(uploadedParts []objectPartInfo, completeParts []completePart) b
}
return true
}
var extendedHeaders = []string{
"X-Amz-Meta-",
"X-Minio-Meta-",
// Add new extended headers.
}
// isExtendedHeader validates if input string matches extended headers.
func isExtendedHeader(header string) bool {
for _, extendedHeader := range extendedHeaders {
if strings.HasPrefix(header, extendedHeader) {
return true
}
}
return false
}
// Return true if extended HTTP headers are set, false otherwise.
func hasExtendedHeader(metadata map[string]string) bool {
if os.Getenv("MINIO_ENABLE_FSMETA") == "1" {
return true
}
for k := range metadata {
if isExtendedHeader(k) {
return true
}
}
return false
}


@@ -17,50 +17,108 @@
package cmd
import (
"bytes"
"os"
"path/filepath"
"testing"
)
// Tests scenarios which can occur for hasExtendedHeader function.
func TestHasExtendedHeader(t *testing.T) {
// All test cases concerning hasExtendedHeader function.
testCases := []struct {
metadata map[string]string
has bool
}{
// Verifies if X-Amz-Meta is present.
{
metadata: map[string]string{
"X-Amz-Meta-1": "value",
},
has: true || os.Getenv("MINIO_ENABLE_FSMETA") == "1",
},
// Verifies if X-Minio-Meta is present.
{
metadata: map[string]string{
"X-Minio-Meta-1": "value",
},
has: true || os.Getenv("MINIO_ENABLE_FSMETA") == "1",
},
// Verifies if extended header is not present.
{
metadata: map[string]string{
"md5Sum": "value",
},
has: false || os.Getenv("MINIO_ENABLE_FSMETA") == "1",
},
// Verifies if extended header is not present, but with an empty input.
{
metadata: nil,
has: false || os.Getenv("MINIO_ENABLE_FSMETA") == "1",
},
}
// Validate all test cases.
for i, testCase := range testCases {
has := hasExtendedHeader(testCase.metadata)
if has != testCase.has {
t.Fatalf("Test case %d: Expected \"%#v\", but got \"%#v\"", i+1, testCase.has, has)
}
}
}

func initFSObjects(disk string, t *testing.T) (obj ObjectLayer) {
endpoints, err := parseStorageEndpoints([]string{disk})
if err != nil {
t.Fatal(err)
}
obj, _, err = initObjectLayer(endpoints)
if err != nil {
t.Fatal("Unexpected err: ", err)
}
return obj
}

// TestReadFsMetadata - readFSMetadata testing with a healthy and faulty disk
func TestReadFSMetadata(t *testing.T) {
disk := filepath.Join(os.TempDir(), "minio-"+nextSuffix())
defer removeAll(disk)
obj := initFSObjects(disk, t)
fs := obj.(fsObjects)
bucketName := "bucket"
objectName := "object"
if err := obj.MakeBucket(bucketName); err != nil {
t.Fatal("Unexpected err: ", err)
}
sha256sum := ""
if _, err := obj.PutObject(bucketName, objectName, int64(len("abcd")), bytes.NewReader([]byte("abcd")),
map[string]string{"X-Amz-Meta-AppId": "a"}, sha256sum); err != nil {
t.Fatal("Unexpected err: ", err)
}
// Construct the full path of fs.json
fsPath := "buckets/" + bucketName + "/" + objectName + "/fs.json"
// Regular fs metadata reading, no errors expected
if _, err := readFSMetadata(fs.storage, ".minio.sys", fsPath); err != nil {
t.Fatal("Unexpected error ", err)
}
// Corrupted fs.json
if err := fs.storage.AppendFile(".minio.sys", fsPath, []byte{'a'}); err != nil {
t.Fatal("Unexpected error ", err)
}
if _, err := readFSMetadata(fs.storage, ".minio.sys", fsPath); err == nil {
t.Fatal("Should fail", err)
}
// Test with corrupted disk
fsStorage := fs.storage.(*retryStorage)
naughty := newNaughtyDisk(fsStorage, nil, errFaultyDisk)
fs.storage = naughty
if _, err := readFSMetadata(fs.storage, ".minio.sys", fsPath); errorCause(err) != errFaultyDisk {
t.Fatal("Should fail", err)
}
}
// TestWriteFsMetadata - tests of writeFSMetadata with healthy and faulty disks
func TestWriteFSMetadata(t *testing.T) {
disk := filepath.Join(os.TempDir(), "minio-"+nextSuffix())
defer removeAll(disk)
obj := initFSObjects(disk, t)
fs := obj.(fsObjects)
bucketName := "bucket"
objectName := "object"
if err := obj.MakeBucket(bucketName); err != nil {
t.Fatal("Unexpected err: ", err)
}
sha256sum := ""
if _, err := obj.PutObject(bucketName, objectName, int64(len("abcd")), bytes.NewReader([]byte("abcd")),
map[string]string{"X-Amz-Meta-AppId": "a"}, sha256sum); err != nil {
t.Fatal("Unexpected err: ", err)
}
// Construct the complete path of fs.json
fsPath := "buckets/" + bucketName + "/" + objectName + "/fs.json"
// Fs metadata reading, no errors expected (healthy disk)
fsMeta, err := readFSMetadata(fs.storage, ".minio.sys", fsPath)
if err != nil {
t.Fatal("Unexpected error ", err)
}
// Reading metadata with a corrupted disk
fsStorage := fs.storage.(*retryStorage)
for i := 1; i <= 2; i++ {
naughty := newNaughtyDisk(fsStorage, map[int]error{i: errFaultyDisk, i + 1: errFaultyDisk}, nil)
fs.storage = naughty
if err = writeFSMetadata(fs.storage, ".minio.sys", fsPath, fsMeta); errorCause(err) != errFaultyDisk {
t.Fatal("Unexpected error", i, err)
}
}
}
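The naughtyDisk wrapper used above injects a chosen error at the N-th storage call, which is how these tests walk a failure through every I/O step of a code path. A minimal self-contained sketch of the same fault-injection pattern, over a hypothetical one-method storage interface (not MinIO's StorageAPI):

package main

import (
	"errors"
	"fmt"
)

var errFaulty = errors.New("faulty disk")

// store is a stand-in for a storage backend.
type store interface {
	Read(path string) (string, error)
}

type memStore struct{}

func (memStore) Read(path string) (string, error) { return "data@" + path, nil }

// naughty wraps a store and fails on a chosen call number.
type naughty struct {
	inner  store
	calls  int
	failAt int
}

func (n *naughty) Read(path string) (string, error) {
	n.calls++
	if n.calls == n.failAt {
		return "", errFaulty
	}
	return n.inner.Read(path)
}

func main() {
	// Sweep the failure position, as the tests above do with their 'i' loops.
	for failAt := 1; failAt <= 3; failAt++ {
		s := &naughty{inner: memStore{}, failAt: failAt}
		for call := 1; call <= 3; call++ {
			_, err := s.Read("fs.json")
			fmt.Printf("failAt=%d call=%d err=%v\n", failAt, call, err)
		}
	}
}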


@@ -17,7 +17,6 @@
package cmd
import (
"encoding/json"
"path"
"time"
)
@@ -28,85 +27,62 @@ func (fs fsObjects) isMultipartUpload(bucket, prefix string) bool {
return err == nil
}
// Checks whether bucket exists.
func (fs fsObjects) isBucketExist(bucket string) bool {
// Check whether bucket exists.
_, err := fs.storage.StatVol(bucket)
if err != nil {
if err == errVolumeNotFound {
return false
}
errorIf(err, "Stat failed on bucket "+bucket+".")
return false
}
return true
}
// isUploadIDExists - verify if a given uploadID exists and is valid.
func (fs fsObjects) isUploadIDExists(bucket, object, uploadID string) bool {
uploadIDPath := path.Join(mpartMetaPrefix, bucket, object, uploadID)
_, err := fs.storage.StatFile(minioMetaBucket, path.Join(uploadIDPath, fsMetaJSONFile))
uploadIDPath := path.Join(bucket, object, uploadID)
_, err := fs.storage.StatFile(minioMetaMultipartBucket, path.Join(uploadIDPath, fsMetaJSONFile))
if err != nil {
if err == errFileNotFound {
return false
}
errorIf(err, "Unable to access upload id"+uploadIDPath)
errorIf(err, "Unable to access upload id "+pathJoin(minioMetaMultipartBucket, uploadIDPath))
return false
}
return true
}
// writeUploadJSON - create `uploads.json` or update it with new uploadID.
func (fs fsObjects) writeUploadJSON(bucket, object, uploadID string, initiated time.Time) (err error) {
uploadsPath := path.Join(mpartMetaPrefix, bucket, object, uploadsJSONFile)
uniqueID := getUUID()
tmpUploadsPath := path.Join(tmpMetaPrefix, uniqueID)
var uploadsJSON uploadsV1
uploadsJSON, err = readUploadsJSON(bucket, object, fs.storage)
// updateUploadJSON - add or remove upload ID info in all `uploads.json`.
func (fs fsObjects) updateUploadJSON(bucket, object, uploadID string, initiated time.Time, isRemove bool) error {
uploadsPath := path.Join(bucket, object, uploadsJSONFile)
tmpUploadsPath := mustGetUUID()
uploadsJSON, err := readUploadsJSON(bucket, object, fs.storage)
if errorCause(err) == errFileNotFound {
// If file is not found, we assume a default (empty)
// upload info.
uploadsJSON, err = newUploadsV1("fs"), nil
}
if err != nil {
// For any other errors.
if err != errFileNotFound {
return err
}
// Set uploads format to `fs`.
uploadsJSON = newUploadsV1("fs")
return err
}
// Add a new upload id.
uploadsJSON.AddUploadID(uploadID, initiated)
// Update `uploads.json` on all disks.
uploadsJSONBytes, wErr := json.Marshal(&uploadsJSON)
if wErr != nil {
return wErr
// update the uploadsJSON struct
if !isRemove {
// Add the uploadID
uploadsJSON.AddUploadID(uploadID, initiated)
} else {
// Remove the upload ID
uploadsJSON.RemoveUploadID(uploadID)
}
// Write `uploads.json` to disk.
if wErr = fs.storage.AppendFile(minioMetaBucket, tmpUploadsPath, uploadsJSONBytes); wErr != nil {
return wErr
}
wErr = fs.storage.RenameFile(minioMetaBucket, tmpUploadsPath, minioMetaBucket, uploadsPath)
if wErr != nil {
if dErr := fs.storage.DeleteFile(minioMetaBucket, tmpUploadsPath); dErr != nil {
return dErr
// update the file or delete it?
if len(uploadsJSON.Uploads) > 0 {
err = writeUploadJSON(&uploadsJSON, uploadsPath, tmpUploadsPath, fs.storage)
} else {
// no uploads, so we delete the file.
if err = fs.storage.DeleteFile(minioMetaMultipartBucket, uploadsPath); err != nil {
return toObjectErr(traceError(err), minioMetaMultipartBucket, uploadsPath)
}
return wErr
}
return nil
return err
}
// updateUploadsJSON - update `uploads.json` with new uploadsJSON for all disks.
func (fs fsObjects) updateUploadsJSON(bucket, object string, uploadsJSON uploadsV1) (err error) {
uploadsPath := path.Join(mpartMetaPrefix, bucket, object, uploadsJSONFile)
uniqueID := getUUID()
tmpUploadsPath := path.Join(tmpMetaPrefix, uniqueID)
uploadsBytes, wErr := json.Marshal(uploadsJSON)
if wErr != nil {
return wErr
}
if wErr = fs.storage.AppendFile(minioMetaBucket, tmpUploadsPath, uploadsBytes); wErr != nil {
return wErr
}
if wErr = fs.storage.RenameFile(minioMetaBucket, tmpUploadsPath, minioMetaBucket, uploadsPath); wErr != nil {
return wErr
}
return nil
// addUploadID - add upload ID and its initiated time to 'uploads.json'.
func (fs fsObjects) addUploadID(bucket, object string, uploadID string, initiated time.Time) error {
return fs.updateUploadJSON(bucket, object, uploadID, initiated, false)
}
// removeUploadID - remove upload ID in 'uploads.json'.
func (fs fsObjects) removeUploadID(bucket, object string, uploadID string) error {
return fs.updateUploadJSON(bucket, object, uploadID, time.Time{}, true)
}
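updateUploadJSON above maintains one invariant: `uploads.json` exists only while at least one upload is pending, so removing the last upload ID deletes the file instead of writing an empty list. A toy model of that invariant (the map shape here is an assumption for illustration; the real uploadsV1 type is defined elsewhere in the tree):

package main

import "fmt"

// uploads models the uploads.json payload: uploadID -> initiated time (as a string here).
type uploads map[string]string

// update adds or removes an ID and reports whether the file should still exist.
func update(u uploads, id, initiated string, remove bool) (uploads, bool) {
	if remove {
		delete(u, id)
	} else {
		u[id] = initiated
	}
	return u, len(u) > 0 // false means: delete uploads.json from disk
}

func main() {
	u := uploads{}
	u, keep := update(u, "id-1", "t0", false)
	fmt.Println(keep) // true: write the file
	u, keep = update(u, "id-1", "", true)
	fmt.Println(keep) // false: delete the file instead
}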


@@ -0,0 +1,105 @@
/*
* Minio Cloud Storage, (C) 2016 Minio, Inc.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package cmd
import (
"os"
"path/filepath"
"testing"
"time"
)
// TestFSIsUploadExists - complete test with valid and invalid cases
func TestFSIsUploadExists(t *testing.T) {
// Prepare for testing
disk := filepath.Join(os.TempDir(), "minio-"+nextSuffix())
defer removeAll(disk)
obj := initFSObjects(disk, t)
fs := obj.(fsObjects)
bucketName := "bucket"
objectName := "object"
if err := obj.MakeBucket(bucketName); err != nil {
t.Fatal("Unexpected err: ", err)
}
uploadID, err := obj.NewMultipartUpload(bucketName, objectName, nil)
if err != nil {
t.Fatal("Unexpected err: ", err)
}
// Test with valid upload id
if exists := fs.isUploadIDExists(bucketName, objectName, uploadID); !exists {
t.Fatal("Wrong result, expected: ", exists)
}
// Test with nonexistent bucket/object names
if exists := fs.isUploadIDExists("bucketfoo", "objectfoo", uploadID); exists {
t.Fatal("Wrong result, expected: ", !exists)
}
// Test with nonexistent upload ID
if exists := fs.isUploadIDExists(bucketName, objectName, uploadID+"-ff"); exists {
t.Fatal("Wrong result, expected: ", !exists)
}
// isUploadIDExists with a faulty disk should return false
fsStorage := fs.storage.(*retryStorage)
naughty := newNaughtyDisk(fsStorage, nil, errFaultyDisk)
fs.storage = naughty
if exists := fs.isUploadIDExists(bucketName, objectName, uploadID); exists {
t.Fatal("Wrong result, expected: ", !exists)
}
}
// TestFSWriteUploadJSON - tests for writeUploadJSON for FS
func TestFSWriteUploadJSON(t *testing.T) {
// Prepare for tests
disk := filepath.Join(os.TempDir(), "minio-"+nextSuffix())
defer removeAll(disk)
obj := initFSObjects(disk, t)
fs := obj.(fsObjects)
bucketName := "bucket"
objectName := "object"
obj.MakeBucket(bucketName)
uploadID, err := obj.NewMultipartUpload(bucketName, objectName, nil)
if err != nil {
t.Fatal("Unexpected err: ", err)
}
if err := fs.addUploadID(bucketName, objectName, uploadID, time.Now().UTC()); err != nil {
t.Fatal("Unexpected err: ", err)
}
// addUploadID with a faulty disk should fail with errFaultyDisk
fsStorage := fs.storage.(*retryStorage)
for i := 1; i <= 3; i++ {
naughty := newNaughtyDisk(fsStorage, map[int]error{i: errFaultyDisk}, nil)
fs.storage = naughty
if err := fs.addUploadID(bucketName, objectName, uploadID, time.Now().UTC()); errorCause(err) != errFaultyDisk {
t.Fatal("Unexpected err: ", err)
}
}
}


@@ -18,15 +18,14 @@ package cmd
import (
"crypto/md5"
"crypto/sha256"
"encoding/hex"
"fmt"
"hash"
"io"
"path"
"strconv"
"strings"
"time"
"github.com/skyrings/skyring-common/tools/uuid"
)
// listMultipartUploads - lists all multipart uploads.
@@ -44,7 +43,7 @@ func (fs fsObjects) listMultipartUploads(bucket, prefix, keyMarker, uploadIDMark
result.Delimiter = delimiter
// Not using path.Join() as it strips off the trailing '/'.
multipartPrefixPath := pathJoin(mpartMetaPrefix, bucket, prefix)
multipartPrefixPath := pathJoin(bucket, prefix)
if prefix == "" {
// Should have a trailing "/" if prefix is ""
// For ex. multipartPrefixPath should be "multipart/bucket/" if prefix is ""
@@ -52,15 +51,17 @@ func (fs fsObjects) listMultipartUploads(bucket, prefix, keyMarker, uploadIDMark
}
multipartMarkerPath := ""
if keyMarker != "" {
multipartMarkerPath = pathJoin(mpartMetaPrefix, bucket, keyMarker)
multipartMarkerPath = pathJoin(bucket, keyMarker)
}
var uploads []uploadMetadata
var err error
var eof bool
if uploadIDMarker != "" {
nsMutex.RLock(minioMetaBucket, pathJoin(mpartMetaPrefix, bucket, keyMarker))
keyMarkerLock := globalNSMutex.NewNSLock(minioMetaMultipartBucket,
pathJoin(bucket, keyMarker))
keyMarkerLock.RLock()
uploads, _, err = listMultipartUploadIDs(bucket, keyMarker, uploadIDMarker, maxUploads, fs.storage)
nsMutex.RUnlock(minioMetaBucket, pathJoin(mpartMetaPrefix, bucket, keyMarker))
keyMarkerLock.RUnlock()
if err != nil {
return ListMultipartsInfo{}, err
}
@@ -70,12 +71,12 @@ func (fs fsObjects) listMultipartUploads(bucket, prefix, keyMarker, uploadIDMark
var endWalkCh chan struct{}
heal := false // true only for xl.ListObjectsHeal()
if maxUploads > 0 {
walkResultCh, endWalkCh = fs.listPool.Release(listParams{minioMetaBucket, recursive, multipartMarkerPath, multipartPrefixPath, heal})
walkResultCh, endWalkCh = fs.listPool.Release(listParams{minioMetaMultipartBucket, recursive, multipartMarkerPath, multipartPrefixPath, heal})
if walkResultCh == nil {
endWalkCh = make(chan struct{})
isLeaf := fs.isMultipartUpload
listDir := listDirFactory(isLeaf, fs.storage)
walkResultCh = startTreeWalk(minioMetaBucket, multipartPrefixPath, multipartMarkerPath, recursive, listDir, isLeaf, endWalkCh)
listDir := listDirFactory(isLeaf, fsTreeWalkIgnoredErrs, fs.storage)
walkResultCh = startTreeWalk(minioMetaMultipartBucket, multipartPrefixPath, multipartMarkerPath, recursive, listDir, isLeaf, endWalkCh)
}
for maxUploads > 0 {
walkResult, ok := <-walkResultCh
@@ -87,13 +88,13 @@ func (fs fsObjects) listMultipartUploads(bucket, prefix, keyMarker, uploadIDMark
// For any walk error return right away.
if walkResult.err != nil {
// File not found or Disk not found is a valid case.
if isErrIgnored(walkResult.err, walkResultIgnoredErrs) {
if isErrIgnored(walkResult.err, fsTreeWalkIgnoredErrs...) {
eof = true
break
}
return ListMultipartsInfo{}, err
return ListMultipartsInfo{}, walkResult.err
}
entry := strings.TrimPrefix(walkResult.entry, retainSlash(pathJoin(mpartMetaPrefix, bucket)))
entry := strings.TrimPrefix(walkResult.entry, retainSlash(bucket))
if strings.HasSuffix(walkResult.entry, slashSeparator) {
uploads = append(uploads, uploadMetadata{
Object: entry,
@@ -110,9 +111,12 @@ func (fs fsObjects) listMultipartUploads(bucket, prefix, keyMarker, uploadIDMark
var tmpUploads []uploadMetadata
var end bool
uploadIDMarker = ""
nsMutex.RLock(minioMetaBucket, pathJoin(mpartMetaPrefix, bucket, entry))
entryLock := globalNSMutex.NewNSLock(minioMetaMultipartBucket,
pathJoin(bucket, entry))
entryLock.RLock()
tmpUploads, end, err = listMultipartUploadIDs(bucket, entry, uploadIDMarker, maxUploads, fs.storage)
nsMutex.RUnlock(minioMetaBucket, pathJoin(mpartMetaPrefix, bucket, entry))
entryLock.RUnlock()
if err != nil {
return ListMultipartsInfo{}, err
}
@@ -166,45 +170,8 @@ func (fs fsObjects) listMultipartUploads(bucket, prefix, keyMarker, uploadIDMark
// ListMultipartsInfo structure is unmarshalled directly into XML and
// replied back to the client.
func (fs fsObjects) ListMultipartUploads(bucket, prefix, keyMarker, uploadIDMarker, delimiter string, maxUploads int) (ListMultipartsInfo, error) {
// Validate input arguments.
if !IsValidBucketName(bucket) {
return ListMultipartsInfo{}, BucketNameInvalid{Bucket: bucket}
}
if !fs.isBucketExist(bucket) {
return ListMultipartsInfo{}, BucketNotFound{Bucket: bucket}
}
if !IsValidObjectPrefix(prefix) {
return ListMultipartsInfo{}, ObjectNameInvalid{Bucket: bucket, Object: prefix}
}
// Verify if delimiter is anything other than '/', which we do not support.
if delimiter != "" && delimiter != slashSeparator {
return ListMultipartsInfo{}, UnsupportedDelimiter{
Delimiter: delimiter,
}
}
// Verify if marker has prefix.
if keyMarker != "" && !strings.HasPrefix(keyMarker, prefix) {
return ListMultipartsInfo{}, InvalidMarkerPrefixCombination{
Marker: keyMarker,
Prefix: prefix,
}
}
if uploadIDMarker != "" {
if strings.HasSuffix(keyMarker, slashSeparator) {
return ListMultipartsInfo{}, InvalidUploadIDKeyCombination{
UploadIDMarker: uploadIDMarker,
KeyMarker: keyMarker,
}
}
id, err := uuid.Parse(uploadIDMarker)
if err != nil {
return ListMultipartsInfo{}, err
}
if id.IsZero() {
return ListMultipartsInfo{}, MalformedUploadID{
UploadID: uploadIDMarker,
}
}
if err := checkListMultipartArgs(bucket, prefix, keyMarker, uploadIDMarker, delimiter, fs); err != nil {
return ListMultipartsInfo{}, err
}
return fs.listMultipartUploads(bucket, prefix, keyMarker, uploadIDMarker, delimiter, maxUploads)
}
@@ -213,31 +180,32 @@ func (fs fsObjects) ListMultipartUploads(bucket, prefix, keyMarker, uploadIDMark
// request, returns back a unique upload id.
//
// Internally this function creates 'uploads.json' associated with the
// incoming object at '.minio/multipart/bucket/object/uploads.json' on
// incoming object at '.minio.sys/multipart/bucket/object/uploads.json' on
// all the disks. `uploads.json` carries metadata regarding ongoing
// multipart operation on the object.
func (fs fsObjects) newMultipartUpload(bucket string, object string, meta map[string]string) (uploadID string, err error) {
// Initialize `fs.json` values.
fsMeta := newFSMetaV1()
// Save additional metadata only if extended headers such as "X-Amz-Meta-" are set.
if hasExtendedHeader(meta) {
fsMeta.Meta = meta
}
// Save additional metadata.
fsMeta.Meta = meta
// This lock needs to be held for any changes to the directory contents of ".minio/multipart/object/"
nsMutex.Lock(minioMetaBucket, pathJoin(mpartMetaPrefix, bucket, object))
defer nsMutex.Unlock(minioMetaBucket, pathJoin(mpartMetaPrefix, bucket, object))
// This lock needs to be held for any changes to the directory
// contents of ".minio.sys/multipart/object/"
objectMPartPathLock := globalNSMutex.NewNSLock(minioMetaMultipartBucket,
pathJoin(bucket, object))
objectMPartPathLock.Lock()
defer objectMPartPathLock.Unlock()
uploadID = getUUID()
uploadID = mustGetUUID()
initiated := time.Now().UTC()
// Create 'uploads.json'
if err = fs.writeUploadJSON(bucket, object, uploadID, initiated); err != nil {
// Add upload ID to uploads.json
if err = fs.addUploadID(bucket, object, uploadID, initiated); err != nil {
return "", err
}
fsMetaPath := path.Join(mpartMetaPrefix, bucket, object, uploadID, fsMetaJSONFile)
if err = writeFSMetadata(fs.storage, minioMetaBucket, fsMetaPath, fsMeta); err != nil {
return "", toObjectErr(err, minioMetaBucket, fsMetaPath)
uploadIDPath := path.Join(bucket, object, uploadID)
if err = writeFSMetadata(fs.storage, minioMetaMultipartBucket, path.Join(uploadIDPath, fsMetaJSONFile), fsMeta); err != nil {
return "", toObjectErr(err, minioMetaMultipartBucket, uploadIDPath)
}
// Return success.
return uploadID, nil
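The globalNSMutex.NewNSLock(volume, path) calls introduced throughout this change serialize mutations on a per-path basis, as in the critical section just above. A minimal sketch of such a path-keyed lock registry, under a simplified model (the real implementation presumably also reference-counts entries and supports distributed locking):

package main

import (
	"fmt"
	"sync"
)

// nsLock is a simplified namespace lock registry: one RWMutex per path.
type nsLock struct {
	mu    sync.Mutex
	locks map[string]*sync.RWMutex
}

func newNSLock() *nsLock {
	return &nsLock{locks: make(map[string]*sync.RWMutex)}
}

// get returns the mutex guarding a volume/path pair, creating it on first use.
func (n *nsLock) get(volume, path string) *sync.RWMutex {
	n.mu.Lock()
	defer n.mu.Unlock()
	key := volume + "/" + path
	if _, ok := n.locks[key]; !ok {
		n.locks[key] = &sync.RWMutex{}
	}
	return n.locks[key]
}

func main() {
	reg := newNSLock()
	l := reg.get("multipart", "bucket/object")
	l.Lock() // held across the uploads.json mutation
	fmt.Println("critical section: update uploads.json")
	l.Unlock()
}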
@@ -249,17 +217,8 @@ func (fs fsObjects) newMultipartUpload(bucket string, object string, meta map[st
//
// Implements S3 compatible initiate multipart API.
func (fs fsObjects) NewMultipartUpload(bucket, object string, meta map[string]string) (string, error) {
// Verify if bucket name is valid.
if !IsValidBucketName(bucket) {
return "", BucketNameInvalid{Bucket: bucket}
}
// Verify whether the bucket exists.
if !fs.isBucketExist(bucket) {
return "", BucketNotFound{Bucket: bucket}
}
// Verify if object name is valid.
if !IsValidObjectName(object) {
return "", ObjectNameInvalid{Bucket: bucket, Object: object}
if err := checkNewMultipartArgs(bucket, object, fs); err != nil {
return "", err
}
return fs.newMultipartUpload(bucket, object, meta)
}
@@ -278,146 +237,40 @@ func partToAppend(fsMeta fsMetaV1, fsAppendMeta fsMetaV1) (part objectPartInfo,
return fsMeta.Parts[nextPartIndex], true
}
// Returns metadata path for the file holding info about the parts that
// have been appended to the "append-file"
func getFSAppendMetaPath(uploadID string) string {
return path.Join(tmpMetaPrefix, uploadID+".json")
}
// Returns path for the append-file.
func getFSAppendDataPath(uploadID string) string {
return path.Join(tmpMetaPrefix, uploadID+".data")
}
// Append parts to fsAppendDataFile.
func appendParts(disk StorageAPI, bucket, object, uploadID string) {
uploadIDPath := path.Join(mpartMetaPrefix, bucket, object, uploadID)
// fs-append.json path
fsAppendMetaPath := getFSAppendMetaPath(uploadID)
// fs.json path
fsMetaPath := path.Join(mpartMetaPrefix, bucket, object, uploadID, fsMetaJSONFile)
// Lock the uploadID so that no one modifies fs.json
nsMutex.RLock(minioMetaBucket, uploadIDPath)
fsMeta, err := readFSMetadata(disk, minioMetaBucket, fsMetaPath)
nsMutex.RUnlock(minioMetaBucket, uploadIDPath)
if err != nil {
return
}
// Lock fs-append.json so that there is no parallel append to the file.
nsMutex.Lock(minioMetaBucket, fsAppendMetaPath)
defer nsMutex.Unlock(minioMetaBucket, fsAppendMetaPath)
fsAppendMeta, err := readFSMetadata(disk, minioMetaBucket, fsAppendMetaPath)
if err != nil {
if err != errFileNotFound {
return
}
fsAppendMeta = fsMeta
fsAppendMeta.Parts = nil
}
// Check if a part needs to be appended to
part, appendNeeded := partToAppend(fsMeta, fsAppendMeta)
if !appendNeeded {
return
}
// Hold write lock on the part so that there is no parallel upload on the part.
nsMutex.Lock(minioMetaBucket, pathJoin(mpartMetaPrefix, bucket, object, uploadID, strconv.Itoa(part.Number)))
defer nsMutex.Unlock(minioMetaBucket, pathJoin(mpartMetaPrefix, bucket, object, uploadID, strconv.Itoa(part.Number)))
// Proceed to append "part"
fsAppendDataPath := getFSAppendDataPath(uploadID)
tmpDataPath := path.Join(tmpMetaPrefix, getUUID())
if part.Number != 1 {
// Move it to tmp location before appending so that we don't leave inconsistent data
// if server crashes during append operation.
err = disk.RenameFile(minioMetaBucket, fsAppendDataPath, minioMetaBucket, tmpDataPath)
if err != nil {
return
}
// Delete fs-append.json so that we don't leave a stale file if server crashes
// when the part is being appended to the tmp file.
err = disk.DeleteFile(minioMetaBucket, fsAppendMetaPath)
if err != nil {
return
}
}
// Path to the part that needs to be appended.
partPath := path.Join(mpartMetaPrefix, bucket, object, uploadID, part.Name)
offset := int64(0)
totalLeft := part.Size
buf := make([]byte, readSizeV1)
for totalLeft > 0 {
curLeft := int64(readSizeV1)
if totalLeft < readSizeV1 {
curLeft = totalLeft
}
var n int64
n, err = disk.ReadFile(minioMetaBucket, partPath, offset, buf[:curLeft])
if n > 0 {
if err = disk.AppendFile(minioMetaBucket, tmpDataPath, buf[:n]); err != nil {
return
}
}
if err != nil {
if err == io.EOF || err == io.ErrUnexpectedEOF {
break
}
return
}
offset += n
totalLeft -= n
}
// All good, the part has been appended to the tmp file, rename it back.
if err = disk.RenameFile(minioMetaBucket, tmpDataPath, minioMetaBucket, fsAppendDataPath); err != nil {
return
}
fsAppendMeta.AddObjectPart(part.Number, part.Name, part.ETag, part.Size)
if err = writeFSMetadata(disk, minioMetaBucket, fsAppendMetaPath, fsAppendMeta); err != nil {
return
}
// If there are more parts that need to be appended to fsAppendDataFile
_, appendNeeded = partToAppend(fsMeta, fsAppendMeta)
if appendNeeded {
go appendParts(disk, bucket, object, uploadID)
}
}
// PutObjectPart - reads incoming data until EOF for the part file on
// an ongoing multipart transaction. Internally incoming data is
// written to '.minio/tmp' location and safely renamed to
// '.minio/multipart' for each part.
func (fs fsObjects) PutObjectPart(bucket, object, uploadID string, partID int, size int64, data io.Reader, md5Hex string) (string, error) {
// Verify if bucket is valid.
if !IsValidBucketName(bucket) {
return "", BucketNameInvalid{Bucket: bucket}
}
// Verify whether the bucket exists.
if !fs.isBucketExist(bucket) {
return "", BucketNotFound{Bucket: bucket}
}
if !IsValidObjectName(object) {
return "", ObjectNameInvalid{Bucket: bucket, Object: object}
// written to '.minio.sys/tmp' location and safely renamed to
// '.minio.sys/multipart' for each part.
func (fs fsObjects) PutObjectPart(bucket, object, uploadID string, partID int, size int64, data io.Reader, md5Hex string, sha256sum string) (string, error) {
if err := checkPutObjectPartArgs(bucket, object, fs); err != nil {
return "", err
}
uploadIDPath := path.Join(mpartMetaPrefix, bucket, object, uploadID)
uploadIDPath := path.Join(bucket, object, uploadID)
nsMutex.RLock(minioMetaBucket, uploadIDPath)
preUploadIDLock := globalNSMutex.NewNSLock(minioMetaMultipartBucket, uploadIDPath)
preUploadIDLock.RLock()
// Just check if the uploadID exists to avoid copy if it doesn't.
uploadIDExists := fs.isUploadIDExists(bucket, object, uploadID)
nsMutex.RUnlock(minioMetaBucket, uploadIDPath)
preUploadIDLock.RUnlock()
if !uploadIDExists {
return "", InvalidUploadID{UploadID: uploadID}
return "", traceError(InvalidUploadID{UploadID: uploadID})
}
partSuffix := fmt.Sprintf("object%d", partID)
tmpPartPath := path.Join(tmpMetaPrefix, uploadID+"."+getUUID()+"."+partSuffix)
tmpPartPath := uploadID + "." + mustGetUUID() + "." + partSuffix
// Initialize md5 writer.
md5Writer := md5.New()
hashWriters := []io.Writer{md5Writer}
var sha256Writer hash.Hash
if sha256sum != "" {
sha256Writer = sha256.New()
hashWriters = append(hashWriters, sha256Writer)
}
multiWriter := io.MultiWriter(hashWriters...)
// Limit the reader to its provided size if specified.
var limitDataReader io.Reader
if size > 0 {
@@ -428,84 +281,107 @@ func (fs fsObjects) PutObjectPart(bucket, object, uploadID string, partID int, s
limitDataReader = data
}
teeReader := io.TeeReader(limitDataReader, md5Writer)
teeReader := io.TeeReader(limitDataReader, multiWriter)
bufSize := int64(readSizeV1)
if size > 0 && bufSize > size {
bufSize = size
}
buf := make([]byte, int(bufSize))
bytesWritten, cErr := fsCreateFile(fs.storage, teeReader, buf, minioMetaBucket, tmpPartPath)
if cErr != nil {
fs.storage.DeleteFile(minioMetaBucket, tmpPartPath)
return "", toObjectErr(cErr, minioMetaBucket, tmpPartPath)
if size > 0 {
// Prepare file to avoid disk fragmentation
err := fs.storage.PrepareFile(minioMetaTmpBucket, tmpPartPath, size)
if err != nil {
return "", toObjectErr(err, minioMetaTmpBucket, tmpPartPath)
}
}
bytesWritten, cErr := fsCreateFile(fs.storage, teeReader, buf, minioMetaTmpBucket, tmpPartPath)
if cErr != nil {
fs.storage.DeleteFile(minioMetaTmpBucket, tmpPartPath)
return "", toObjectErr(cErr, minioMetaTmpBucket, tmpPartPath)
}
// Should return IncompleteBody{} error when reader has fewer
// bytes than specified in request header.
if bytesWritten < size {
fs.storage.DeleteFile(minioMetaBucket, tmpPartPath)
return "", IncompleteBody{}
fs.storage.DeleteFile(minioMetaTmpBucket, tmpPartPath)
return "", traceError(IncompleteBody{})
}
// Validate if payload is valid.
if isSignVerify(data) {
if err := data.(*signVerifyReader).Verify(); err != nil {
// Incoming payload wrong, delete the temporary object.
fs.storage.DeleteFile(minioMetaBucket, tmpPartPath)
// Error return.
return "", toObjectErr(err, bucket, object)
}
}
// Delete temporary part in case of failure. If
// PutObjectPart succeeds then there would be nothing to
// delete.
defer fs.storage.DeleteFile(minioMetaTmpBucket, tmpPartPath)
newMD5Hex := hex.EncodeToString(md5Writer.Sum(nil))
if md5Hex != "" {
if newMD5Hex != md5Hex {
// MD5 mismatch, delete the temporary object.
fs.storage.DeleteFile(minioMetaBucket, tmpPartPath)
// Returns md5 mismatch.
return "", BadDigest{md5Hex, newMD5Hex}
return "", traceError(BadDigest{md5Hex, newMD5Hex})
}
}
if sha256sum != "" {
newSHA256sum := hex.EncodeToString(sha256Writer.Sum(nil))
if newSHA256sum != sha256sum {
return "", traceError(SHA256Mismatch{})
}
}
// Hold write lock as we are updating fs.json
nsMutex.Lock(minioMetaBucket, uploadIDPath)
defer nsMutex.Unlock(minioMetaBucket, uploadIDPath)
postUploadIDLock := globalNSMutex.NewNSLock(minioMetaMultipartBucket, uploadIDPath)
postUploadIDLock.Lock()
defer postUploadIDLock.Unlock()
// Just check if the uploadID exists to avoid copy if it doesn't.
if !fs.isUploadIDExists(bucket, object, uploadID) {
return "", InvalidUploadID{UploadID: uploadID}
return "", traceError(InvalidUploadID{UploadID: uploadID})
}
fsMetaPath := pathJoin(uploadIDPath, fsMetaJSONFile)
fsMeta, err := readFSMetadata(fs.storage, minioMetaBucket, fsMetaPath)
fsMeta, err := readFSMetadata(fs.storage, minioMetaMultipartBucket, fsMetaPath)
if err != nil {
return "", toObjectErr(err, minioMetaBucket, fsMetaPath)
return "", toObjectErr(err, minioMetaMultipartBucket, fsMetaPath)
}
fsMeta.AddObjectPart(partID, partSuffix, newMD5Hex, size)
partPath := path.Join(mpartMetaPrefix, bucket, object, uploadID, partSuffix)
err = fs.storage.RenameFile(minioMetaBucket, tmpPartPath, minioMetaBucket, partPath)
partPath := path.Join(bucket, object, uploadID, partSuffix)
// Lock the part so that another part upload with same part-number gets blocked
// while the part is getting appended in the background.
partLock := globalNSMutex.NewNSLock(minioMetaMultipartBucket, partPath)
partLock.Lock()
err = fs.storage.RenameFile(minioMetaTmpBucket, tmpPartPath, minioMetaMultipartBucket, partPath)
if err != nil {
if dErr := fs.storage.DeleteFile(minioMetaBucket, tmpPartPath); dErr != nil {
return "", toObjectErr(dErr, minioMetaBucket, tmpPartPath)
}
return "", toObjectErr(err, minioMetaBucket, partPath)
partLock.Unlock()
return "", toObjectErr(traceError(err), minioMetaMultipartBucket, partPath)
}
uploadIDPath = path.Join(bucket, object, uploadID)
if err = writeFSMetadata(fs.storage, minioMetaMultipartBucket, path.Join(uploadIDPath, fsMetaJSONFile), fsMeta); err != nil {
partLock.Unlock()
return "", toObjectErr(err, minioMetaMultipartBucket, uploadIDPath)
}
if err = writeFSMetadata(fs.storage, minioMetaBucket, fsMetaPath, fsMeta); err != nil {
return "", toObjectErr(err, minioMetaBucket, fsMetaPath)
}
go appendParts(fs.storage, bucket, object, uploadID)
// Append the part in background.
errCh := fs.bgAppend.append(fs.storage, bucket, object, uploadID, fsMeta)
go func() {
// Also receive the error so that the appendParts go-routine does not block on send.
// But the error received is ignored as fs.PutObjectPart() would have already
// returned success to the client.
<-errCh
partLock.Unlock()
}()
return newMD5Hex, nil
}
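Note how the part lock above is released from a goroutine: PutObjectPart returns success immediately while the background appender finishes, and only then may a competing upload of the same part number proceed. A self-contained sketch of that handoff, with a stand-in for the bgAppend machinery:

package main

import (
	"fmt"
	"sync"
	"time"
)

func main() {
	var partLock sync.Mutex

	// Simulate the background appender: it reports completion on a channel.
	appendDone := func() <-chan error {
		ch := make(chan error, 1)
		go func() {
			time.Sleep(10 * time.Millisecond) // pretend to append part data
			ch <- nil
		}()
		return ch
	}

	partLock.Lock()
	errCh := appendDone()
	go func() {
		// Receive so the appender never blocks on send; the result is
		// ignored because success was already returned to the client.
		<-errCh
		partLock.Unlock()
	}()

	fmt.Println("PutObjectPart returned") // the caller is unblocked here

	partLock.Lock() // a competing upload of the same part waits for the append
	fmt.Println("next upload of the same part may proceed")
	partLock.Unlock()
}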
// listObjectParts - wrapper scanning through
// '.minio/multipart/bucket/object/UPLOADID'. Lists all the parts
// saved inside '.minio/multipart/bucket/object/UPLOADID'.
// '.minio.sys/multipart/bucket/object/UPLOADID'. Lists all the parts
// saved inside '.minio.sys/multipart/bucket/object/UPLOADID'.
func (fs fsObjects) listObjectParts(bucket, object, uploadID string, partNumberMarker, maxParts int) (ListPartsInfo, error) {
result := ListPartsInfo{}
fsMetaPath := path.Join(mpartMetaPrefix, bucket, object, uploadID, fsMetaJSONFile)
fsMeta, err := readFSMetadata(fs.storage, minioMetaBucket, fsMetaPath)
fsMetaPath := path.Join(bucket, object, uploadID, fsMetaJSONFile)
fsMeta, err := readFSMetadata(fs.storage, minioMetaMultipartBucket, fsMetaPath)
if err != nil {
return ListPartsInfo{}, toObjectErr(err, minioMetaBucket, fsMetaPath)
}
@@ -518,10 +394,10 @@ func (fs fsObjects) listObjectParts(bucket, object, uploadID string, partNumberM
count := maxParts
for _, part := range parts {
var fi FileInfo
partNamePath := path.Join(mpartMetaPrefix, bucket, object, uploadID, part.Name)
fi, err = fs.storage.StatFile(minioMetaBucket, partNamePath)
partNamePath := path.Join(bucket, object, uploadID, part.Name)
fi, err = fs.storage.StatFile(minioMetaMultipartBucket, partNamePath)
if err != nil {
return ListPartsInfo{}, toObjectErr(err, minioMetaBucket, partNamePath)
return ListPartsInfo{}, toObjectErr(traceError(err), minioMetaMultipartBucket, partNamePath)
}
result.Parts = append(result.Parts, partInfo{
PartNumber: part.Number,
@@ -557,27 +433,35 @@ func (fs fsObjects) listObjectParts(bucket, object, uploadID string, partNumberM
// ListPartsInfo structure is unmarshalled directly into XML and
// replied back to the client.
func (fs fsObjects) ListObjectParts(bucket, object, uploadID string, partNumberMarker, maxParts int) (ListPartsInfo, error) {
// Verify if bucket is valid.
if !IsValidBucketName(bucket) {
return ListPartsInfo{}, BucketNameInvalid{Bucket: bucket}
if err := checkListPartsArgs(bucket, object, fs); err != nil {
return ListPartsInfo{}, err
}
// Verify whether the bucket exists.
if !fs.isBucketExist(bucket) {
return ListPartsInfo{}, BucketNotFound{Bucket: bucket}
}
if !IsValidObjectName(object) {
return ListPartsInfo{}, ObjectNameInvalid{Bucket: bucket, Object: object}
}
// Hold lock so that there is no competing abort-multipart-upload or complete-multipart-upload.
nsMutex.Lock(minioMetaBucket, pathJoin(mpartMetaPrefix, bucket, object, uploadID))
defer nsMutex.Unlock(minioMetaBucket, pathJoin(mpartMetaPrefix, bucket, object, uploadID))
// Hold lock so that there is no competing
// abort-multipart-upload or complete-multipart-upload.
uploadIDLock := globalNSMutex.NewNSLock(minioMetaMultipartBucket,
pathJoin(bucket, object, uploadID))
uploadIDLock.Lock()
defer uploadIDLock.Unlock()
if !fs.isUploadIDExists(bucket, object, uploadID) {
return ListPartsInfo{}, InvalidUploadID{UploadID: uploadID}
return ListPartsInfo{}, traceError(InvalidUploadID{UploadID: uploadID})
}
return fs.listObjectParts(bucket, object, uploadID, partNumberMarker, maxParts)
}
func (fs fsObjects) totalObjectSize(fsMeta fsMetaV1, parts []completePart) (int64, error) {
objSize := int64(0)
for _, part := range parts {
partIdx := fsMeta.ObjectPartIndex(part.PartNumber)
if partIdx == -1 {
return 0, InvalidPart{}
}
objSize += fsMeta.Parts[partIdx].Size
}
return objSize, nil
}
// CompleteMultipartUpload - completes an ongoing multipart
// transaction after receiving all the parts indicated by the client.
// Returns an md5sum calculated by concatenating all the individual
@@ -585,86 +469,90 @@ func (fs fsObjects) ListObjectParts(bucket, object, uploadID string, partNumberM
//
// Implements S3 compatible Complete multipart API.
func (fs fsObjects) CompleteMultipartUpload(bucket string, object string, uploadID string, parts []completePart) (string, error) {
// Verify if bucket is valid.
if !IsValidBucketName(bucket) {
return "", BucketNameInvalid{Bucket: bucket}
}
// Verify whether the bucket exists.
if !fs.isBucketExist(bucket) {
return "", BucketNotFound{Bucket: bucket}
}
if !IsValidObjectName(object) {
return "", ObjectNameInvalid{
Bucket: bucket,
Object: object,
}
if err := checkCompleteMultipartArgs(bucket, object, fs); err != nil {
return "", err
}
uploadIDPath := path.Join(mpartMetaPrefix, bucket, object, uploadID)
uploadIDPath := path.Join(bucket, object, uploadID)
// Hold lock so that
// 1) no one aborts this multipart upload
// 2) no one does a parallel complete-multipart-upload on this
// multipart upload
nsMutex.Lock(minioMetaBucket, uploadIDPath)
defer nsMutex.Unlock(minioMetaBucket, uploadIDPath)
uploadIDLock := globalNSMutex.NewNSLock(minioMetaMultipartBucket, uploadIDPath)
uploadIDLock.Lock()
defer uploadIDLock.Unlock()
if !fs.isUploadIDExists(bucket, object, uploadID) {
return "", InvalidUploadID{UploadID: uploadID}
return "", traceError(InvalidUploadID{UploadID: uploadID})
}
// fs-append.json path
fsAppendMetaPath := getFSAppendMetaPath(uploadID)
// Lock fs-append.json so that no parallel appendParts() is being done.
nsMutex.Lock(minioMetaBucket, fsAppendMetaPath)
defer nsMutex.Unlock(minioMetaBucket, fsAppendMetaPath)
// Calculate s3 compatible md5sum for complete multipart.
s3MD5, err := completeMultipartMD5(parts...)
s3MD5, err := getCompleteMultipartMD5(parts)
if err != nil {
return "", err
}
// Read saved fs metadata for ongoing multipart.
fsMetaPath := pathJoin(uploadIDPath, fsMetaJSONFile)
fsMeta, err := readFSMetadata(fs.storage, minioMetaBucket, fsMetaPath)
fsMeta, err := readFSMetadata(fs.storage, minioMetaMultipartBucket, fsMetaPath)
if err != nil {
return "", toObjectErr(err, minioMetaBucket, fsMetaPath)
return "", toObjectErr(err, minioMetaMultipartBucket, fsMetaPath)
}
fsAppendMeta, err := readFSMetadata(fs.storage, minioMetaBucket, fsAppendMetaPath)
if err == nil && isPartsSame(fsAppendMeta.Parts, parts) {
fsAppendDataPath := getFSAppendDataPath(uploadID)
if err = fs.storage.RenameFile(minioMetaBucket, fsAppendDataPath, bucket, object); err != nil {
return "", toObjectErr(err, minioMetaBucket, fsAppendDataPath)
// This lock is held during rename of the appended tmp file to the actual
// location so that any competing GetObject/PutObject/DeleteObject do not race.
appendFallback := true // In case background-append did not append the required parts.
if isPartsSame(fsMeta.Parts, parts) {
err = fs.bgAppend.complete(fs.storage, bucket, object, uploadID, fsMeta)
if err == nil {
appendFallback = false
if err = fs.storage.RenameFile(minioMetaTmpBucket, uploadID, bucket, object); err != nil {
return "", toObjectErr(traceError(err), minioMetaTmpBucket, uploadID)
}
}
// Remove the append-file metadata file in tmp location as we no longer need it.
fs.storage.DeleteFile(minioMetaBucket, fsAppendMetaPath)
} else {
tempObj := path.Join(tmpMetaPrefix, uploadID+"-"+"part.1")
}
if appendFallback {
// background append could not do append all the required parts, hence we do it here.
tempObj := uploadID + "-" + "part.1"
// Allocate staging buffer.
var buf = make([]byte, readSizeV1)
var objSize int64
objSize, err = fs.totalObjectSize(fsMeta, parts)
if err != nil {
return "", traceError(err)
}
if objSize > 0 {
// Prepare file to avoid disk fragmentation
err = fs.storage.PrepareFile(minioMetaTmpBucket, tempObj, objSize)
if err != nil {
return "", traceError(err)
}
}
// Loop through all parts, validate them and then commit to disk.
for i, part := range parts {
partIdx := fsMeta.ObjectPartIndex(part.PartNumber)
if partIdx == -1 {
return "", InvalidPart{}
return "", traceError(InvalidPart{})
}
if fsMeta.Parts[partIdx].ETag != part.ETag {
return "", BadDigest{}
return "", traceError(BadDigest{})
}
// All parts except the last part have to be at least 5MB.
if (i < len(parts)-1) && !isMinAllowedPartSize(fsMeta.Parts[partIdx].Size) {
return "", PartTooSmall{
return "", traceError(PartTooSmall{
PartNumber: part.PartNumber,
PartSize: fsMeta.Parts[partIdx].Size,
PartETag: part.ETag,
}
})
}
// Construct part suffix.
partSuffix := fmt.Sprintf("object%d", part.PartNumber)
multipartPartFile := path.Join(mpartMetaPrefix, bucket, object, uploadID, partSuffix)
multipartPartFile := path.Join(bucket, object, uploadID, partSuffix)
offset := int64(0)
totalLeft := fsMeta.Parts[partIdx].Size
for totalLeft > 0 {
@@ -673,10 +561,10 @@ func (fs fsObjects) CompleteMultipartUpload(bucket string, object string, upload
curLeft = totalLeft
}
var n int64
n, err = fs.storage.ReadFile(minioMetaBucket, multipartPartFile, offset, buf[:curLeft])
n, err = fs.storage.ReadFile(minioMetaMultipartBucket, multipartPartFile, offset, buf[:curLeft])
if n > 0 {
if err = fs.storage.AppendFile(minioMetaBucket, tempObj, buf[:n]); err != nil {
return "", toObjectErr(err, minioMetaBucket, tempObj)
if err = fs.storage.AppendFile(minioMetaTmpBucket, tempObj, buf[:n]); err != nil {
return "", toObjectErr(traceError(err), minioMetaTmpBucket, tempObj)
}
}
if err != nil {
@@ -684,9 +572,9 @@ func (fs fsObjects) CompleteMultipartUpload(bucket string, object string, upload
break
}
if err == errFileNotFound {
return "", InvalidPart{}
return "", traceError(InvalidPart{})
}
return "", toObjectErr(err, minioMetaBucket, multipartPartFile)
return "", toObjectErr(traceError(err), minioMetaMultipartBucket, multipartPartFile)
}
offset += n
totalLeft -= n
@@ -694,29 +582,28 @@ func (fs fsObjects) CompleteMultipartUpload(bucket string, object string, upload
}
// Rename the file back to original location, if not delete the temporary object.
err = fs.storage.RenameFile(minioMetaBucket, tempObj, bucket, object)
err = fs.storage.RenameFile(minioMetaTmpBucket, tempObj, bucket, object)
if err != nil {
if dErr := fs.storage.DeleteFile(minioMetaBucket, tempObj); dErr != nil {
return "", toObjectErr(dErr, minioMetaBucket, tempObj)
if dErr := fs.storage.DeleteFile(minioMetaTmpBucket, tempObj); dErr != nil {
return "", toObjectErr(traceError(dErr), minioMetaTmpBucket, tempObj)
}
return "", toObjectErr(err, bucket, object)
return "", toObjectErr(traceError(err), bucket, object)
}
}
// No need to save part info, since we have concatenated all parts.
fsMeta.Parts = nil
// Save additional metadata only if extended headers such as "X-Amz-Meta-" are set.
if hasExtendedHeader(fsMeta.Meta) {
if len(fsMeta.Meta) == 0 {
fsMeta.Meta = make(map[string]string)
}
fsMeta.Meta["md5Sum"] = s3MD5
// Save additional metadata.
if len(fsMeta.Meta) == 0 {
fsMeta.Meta = make(map[string]string)
}
fsMeta.Meta["md5Sum"] = s3MD5
fsMetaPath = path.Join(bucketMetaPrefix, bucket, object, fsMetaJSONFile)
if err = writeFSMetadata(fs.storage, minioMetaBucket, fsMetaPath, fsMeta); err != nil {
return "", toObjectErr(err, bucket, object)
}
fsMetaPath = path.Join(bucketMetaPrefix, bucket, object, fsMetaJSONFile)
// Write the metadata to a temp file and rename it to the actual location.
if err = writeFSMetadata(fs.storage, minioMetaBucket, fsMetaPath, fsMeta); err != nil {
return "", toObjectErr(err, bucket, object)
}
// Cleanup all the parts if everything else has been safely committed.
@@ -724,33 +611,17 @@ func (fs fsObjects) CompleteMultipartUpload(bucket string, object string, upload
return "", toObjectErr(err, bucket, object)
}
// Hold the lock so that two parallel complete-multipart-uploads do not
// leave a stale uploads.json behind.
nsMutex.Lock(minioMetaBucket, pathJoin(mpartMetaPrefix, bucket, object))
defer nsMutex.Unlock(minioMetaBucket, pathJoin(mpartMetaPrefix, bucket, object))
// Hold the lock so that two parallel
// complete-multipart-uploads do not leave a stale
// uploads.json behind.
objectMPartPathLock := globalNSMutex.NewNSLock(minioMetaMultipartBucket,
pathJoin(bucket, object))
objectMPartPathLock.Lock()
defer objectMPartPathLock.Unlock()
// Validate if there are other incomplete upload-id's present for
// the object, if yes do not attempt to delete 'uploads.json'.
uploadsJSON, err := readUploadsJSON(bucket, object, fs.storage)
if err != nil {
return "", toObjectErr(err, minioMetaBucket, object)
}
// If we have successfully read `uploads.json`, then we proceed to
// purge or update `uploads.json`.
uploadIDIdx := uploadsJSON.Index(uploadID)
if uploadIDIdx != -1 {
uploadsJSON.Uploads = append(uploadsJSON.Uploads[:uploadIDIdx], uploadsJSON.Uploads[uploadIDIdx+1:]...)
}
if len(uploadsJSON.Uploads) > 0 {
if err = fs.updateUploadsJSON(bucket, object, uploadsJSON); err != nil {
return "", toObjectErr(err, minioMetaBucket, path.Join(mpartMetaPrefix, bucket, object))
}
// Return success.
return s3MD5, nil
}
if err = fs.storage.DeleteFile(minioMetaBucket, path.Join(mpartMetaPrefix, bucket, object, uploadsJSONFile)); err != nil {
return "", toObjectErr(err, minioMetaBucket, path.Join(mpartMetaPrefix, bucket, object))
// remove entry from uploads.json
if err := fs.removeUploadID(bucket, object, uploadID); err != nil {
return "", toObjectErr(err, minioMetaMultipartBucket, path.Join(bucket, object))
}
// Return md5sum.
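For the s3MD5 returned here: S3-compatible servers conventionally compute the completed object's ETag as the MD5 of the concatenated binary part digests, suffixed with the part count. A sketch of that convention, which getCompleteMultipartMD5 is assumed to follow:

package main

import (
	"crypto/md5"
	"encoding/hex"
	"fmt"
)

// completeMD5 mimics the usual S3 convention:
// hex(md5(md5(part1) || md5(part2) || ...)) + "-" + <number of parts>.
func completeMD5(partETags []string) (string, error) {
	var all []byte
	for _, etag := range partETags {
		b, err := hex.DecodeString(etag)
		if err != nil {
			return "", err
		}
		all = append(all, b...)
	}
	sum := md5.Sum(all)
	return fmt.Sprintf("%s-%d", hex.EncodeToString(sum[:]), len(partETags)), nil
}

func main() {
	p1 := md5.Sum([]byte("part one"))
	p2 := md5.Sum([]byte("part two"))
	etags := []string{hex.EncodeToString(p1[:]), hex.EncodeToString(p2[:])}
	s3MD5, _ := completeMD5(etags)
	fmt.Println(s3MD5) // a hex digest ending in "-2"
}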
@@ -759,36 +630,21 @@ func (fs fsObjects) CompleteMultipartUpload(bucket string, object string, upload
// abortMultipartUpload - wrapper for purging an ongoing multipart
// transaction, deletes uploadID entry from `uploads.json` and purges
// the directory at '.minio/multipart/bucket/object/uploadID' holding
// the directory at '.minio.sys/multipart/bucket/object/uploadID' holding
// all the upload parts.
func (fs fsObjects) abortMultipartUpload(bucket, object, uploadID string) error {
// Signal appendParts routine to stop waiting for new parts to arrive.
fs.bgAppend.abort(uploadID)
// Cleanup all uploaded parts.
if err := cleanupUploadedParts(bucket, object, uploadID, fs.storage); err != nil {
return err
}
// Validate if there are other incomplete upload-id's present for
// the object, if yes do not attempt to delete 'uploads.json'.
uploadsJSON, err := readUploadsJSON(bucket, object, fs.storage)
if err == nil {
uploadIDIdx := uploadsJSON.Index(uploadID)
if uploadIDIdx != -1 {
uploadsJSON.Uploads = append(uploadsJSON.Uploads[:uploadIDIdx], uploadsJSON.Uploads[uploadIDIdx+1:]...)
}
// There are pending uploads for the same object; preserve
// them and update 'uploads.json' in-place.
if len(uploadsJSON.Uploads) > 0 {
err = fs.updateUploadsJSON(bucket, object, uploadsJSON)
if err != nil {
return toObjectErr(err, bucket, object)
}
return nil
}
}
// No more pending uploads for the object, we purge the entire
// entry at '.minio/multipart/bucket/object'.
if err = fs.storage.DeleteFile(minioMetaBucket, path.Join(mpartMetaPrefix, bucket, object, uploadsJSONFile)); err != nil {
return toObjectErr(err, minioMetaBucket, path.Join(mpartMetaPrefix, bucket, object))
// remove entry from uploads.json with quorum
if err := fs.removeUploadID(bucket, object, uploadID); err != nil {
return toObjectErr(err, bucket, object)
}
// success
return nil
}
@@ -805,30 +661,21 @@ func (fs fsObjects) abortMultipartUpload(bucket, object, uploadID string) error
// no effect and further requests to the same uploadID would not be
// honored.
func (fs fsObjects) AbortMultipartUpload(bucket, object, uploadID string) error {
// Verify if bucket is valid.
if !IsValidBucketName(bucket) {
return BucketNameInvalid{Bucket: bucket}
}
if !fs.isBucketExist(bucket) {
return BucketNotFound{Bucket: bucket}
}
if !IsValidObjectName(object) {
return ObjectNameInvalid{Bucket: bucket, Object: object}
if err := checkAbortMultipartArgs(bucket, object, fs); err != nil {
return err
}
// Hold lock so that there is no competing complete-multipart-upload or put-object-part.
nsMutex.Lock(minioMetaBucket, pathJoin(mpartMetaPrefix, bucket, object, uploadID))
defer nsMutex.Unlock(minioMetaBucket, pathJoin(mpartMetaPrefix, bucket, object, uploadID))
// Hold lock so that there is no competing
// complete-multipart-upload or put-object-part.
uploadIDLock := globalNSMutex.NewNSLock(minioMetaMultipartBucket,
pathJoin(bucket, object, uploadID))
uploadIDLock.Lock()
defer uploadIDLock.Unlock()
if !fs.isUploadIDExists(bucket, object, uploadID) {
return InvalidUploadID{UploadID: uploadID}
return traceError(InvalidUploadID{UploadID: uploadID})
}
fsAppendMetaPath := getFSAppendMetaPath(uploadID)
// Lock fs-append.json so that no parallel appendParts() is being done.
nsMutex.Lock(minioMetaBucket, fsAppendMetaPath)
defer nsMutex.Unlock(minioMetaBucket, fsAppendMetaPath)
err := fs.abortMultipartUpload(bucket, object, uploadID)
return err
}

cmd/fs-v1-multipart_test.go (new file, 218 lines)

@@ -0,0 +1,218 @@
/*
* Minio Cloud Storage, (C) 2016 Minio, Inc.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package cmd
import (
"bytes"
"os"
"path/filepath"
"reflect"
"testing"
)
// TestNewMultipartUploadFaultyDisk - test NewMultipartUpload with faulty disks
func TestNewMultipartUploadFaultyDisk(t *testing.T) {
// Prepare for tests
disk := filepath.Join(os.TempDir(), "minio-"+nextSuffix())
defer removeAll(disk)
obj := initFSObjects(disk, t)
fs := obj.(fsObjects)
bucketName := "bucket"
objectName := "object"
if err := obj.MakeBucket(bucketName); err != nil {
t.Fatal("Cannot create bucket, err: ", err)
}
// Test with faulty disk
fsStorage := fs.storage.(*retryStorage)
for i := 1; i <= 5; i++ {
// Faulty disk generates errFaultyDisk at 'i' storage api call number
fs.storage = newNaughtyDisk(fsStorage, map[int]error{i: errFaultyDisk}, nil)
if _, err := fs.NewMultipartUpload(bucketName, objectName, map[string]string{"X-Amz-Meta-xid": "3f"}); errorCause(err) != errFaultyDisk {
switch i {
case 1:
if !isSameType(errorCause(err), BucketNotFound{}) {
t.Fatal("Unexpected error ", err)
}
default:
t.Fatal("Unexpected error ", err)
}
}
}
}
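These tests compare errorCause(err) against sentinel errors because this change wraps failures in a trace layer at the point they occur. A small sketch of that wrap/unwrap idea, as a simplified model of traceError/errorCause rather than the actual implementation:

package main

import (
	"errors"
	"fmt"
	"runtime"
)

// traced pairs an underlying error with the call site that wrapped it.
type traced struct {
	cause error
	site  string
}

func (t *traced) Error() string { return t.site + ": " + t.cause.Error() }

// trace wraps err with its caller's location.
func trace(err error) error {
	_, file, line, _ := runtime.Caller(1)
	return &traced{cause: err, site: fmt.Sprintf("%s:%d", file, line)}
}

// cause unwraps to the original error so callers can compare sentinels.
func cause(err error) error {
	if t, ok := err.(*traced); ok {
		return t.cause
	}
	return err
}

var errFaultyDisk = errors.New("faulty disk")

func main() {
	err := trace(errFaultyDisk)
	fmt.Println(err)                         // includes file:line context
	fmt.Println(cause(err) == errFaultyDisk) // true: tests assert on the cause
}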
// TestPutObjectPartFaultyDisk - test PutObjectPart with faulty disks
func TestPutObjectPartFaultyDisk(t *testing.T) {
root, err := newTestConfig("us-east-1")
if err != nil {
t.Fatal(err)
}
defer removeAll(root)
// Prepare for tests
disk := filepath.Join(os.TempDir(), "minio-"+nextSuffix())
defer removeAll(disk)
obj := initFSObjects(disk, t)
fs := obj.(fsObjects)
bucketName := "bucket"
objectName := "object"
data := []byte("12345")
dataLen := int64(len(data))
if err = obj.MakeBucket(bucketName); err != nil {
t.Fatal("Cannot create bucket, err: ", err)
}
uploadID, err := fs.NewMultipartUpload(bucketName, objectName, map[string]string{"X-Amz-Meta-xid": "3f"})
if err != nil {
t.Fatal("Unexpected error ", err)
}
md5Hex := getMD5Hash(data)
sha256sum := ""
// Test with faulty disk
fsStorage := fs.storage.(*retryStorage)
for i := 1; i <= 7; i++ {
// Faulty disk generates errFaultyDisk at 'i' storage api call number
fs.storage = newNaughtyDisk(fsStorage, map[int]error{i: errFaultyDisk}, nil)
md5sum, err := fs.PutObjectPart(bucketName, objectName, uploadID, 1, dataLen, bytes.NewReader(data), md5Hex, sha256sum)
if errorCause(err) != errFaultyDisk {
if errorCause(err) == nil {
t.Fatalf("Test %d shouldn't succeed, md5sum = %s\n", i, md5sum)
}
switch i {
case 1:
if !isSameType(errorCause(err), BucketNotFound{}) {
t.Fatal("Unexpected error ", err)
}
case 3:
case 2, 4, 5, 6:
if !isSameType(errorCause(err), InvalidUploadID{}) {
t.Fatal("Unexpected error ", err)
}
default:
t.Fatal("Unexpected error ", i, err, reflect.TypeOf(errorCause(err)), reflect.TypeOf(errFaultyDisk))
}
}
}
}
// TestCompleteMultipartUploadFaultyDisk - test CompleteMultipartUpload with faulty disks
func TestCompleteMultipartUploadFaultyDisk(t *testing.T) {
// Prepare for tests
disk := filepath.Join(os.TempDir(), "minio-"+nextSuffix())
defer removeAll(disk)
obj := initFSObjects(disk, t)
fs := obj.(fsObjects)
bucketName := "bucket"
objectName := "object"
data := []byte("12345")
if err := obj.MakeBucket(bucketName); err != nil {
t.Fatal("Cannot create bucket, err: ", err)
}
uploadID, err := fs.NewMultipartUpload(bucketName, objectName, map[string]string{"X-Amz-Meta-xid": "3f"})
if err != nil {
t.Fatal("Unexpected error ", err)
}
md5Hex := getMD5Hash(data)
sha256sum := ""
if _, err := fs.PutObjectPart(bucketName, objectName, uploadID, 1, 5, bytes.NewReader(data), md5Hex, sha256sum); err != nil {
t.Fatal("Unexpected error ", err)
}
parts := []completePart{{PartNumber: 1, ETag: md5Hex}}
fsStorage := fs.storage.(*retryStorage)
for i := 1; i <= 3; i++ {
// Faulty disk generates errFaultyDisk at 'i' storage api call number
fs.storage = newNaughtyDisk(fsStorage, map[int]error{i: errFaultyDisk}, nil)
if _, err := fs.CompleteMultipartUpload(bucketName, objectName, uploadID, parts); errorCause(err) != errFaultyDisk {
switch i {
case 1:
if !isSameType(errorCause(err), BucketNotFound{}) {
t.Fatal("Unexpected error ", err)
}
case 2:
if !isSameType(errorCause(err), InvalidUploadID{}) {
t.Fatal("Unexpected error ", err)
}
default:
t.Fatal("Unexpected error ", i, err, reflect.TypeOf(errorCause(err)), reflect.TypeOf(errFaultyDisk))
}
}
}
}
// TestListMultipartUploadsFaultyDisk - test ListMultipartUploads with faulty disks
func TestListMultipartUploadsFaultyDisk(t *testing.T) {
// Prepare for tests
disk := filepath.Join(os.TempDir(), "minio-"+nextSuffix())
defer removeAll(disk)
obj := initFSObjects(disk, t)
fs := obj.(fsObjects)
bucketName := "bucket"
objectName := "object"
data := []byte("12345")
if err := obj.MakeBucket(bucketName); err != nil {
t.Fatal("Cannot create bucket, err: ", err)
}
uploadID, err := fs.NewMultipartUpload(bucketName, objectName, map[string]string{"X-Amz-Meta-xid": "3f"})
if err != nil {
t.Fatal("Unexpected error ", err)
}
md5Hex := getMD5Hash(data)
sha256sum := ""
if _, err := fs.PutObjectPart(bucketName, objectName, uploadID, 1, 5, bytes.NewReader(data), md5Hex, sha256sum); err != nil {
t.Fatal("Unexpected error ", err)
}
fsStorage := fs.storage.(*retryStorage)
for i := 1; i <= 4; i++ {
// Faulty disk generates errFaultyDisk at 'i' storage api call number
fs.storage = newNaughtyDisk(fsStorage, map[int]error{i: errFaultyDisk}, nil)
if _, err := fs.ListMultipartUploads(bucketName, objectName, "", "", "", 1000); errorCause(err) != errFaultyDisk {
switch i {
case 1:
if !isSameType(errorCause(err), BucketNotFound{}) {
t.Fatal("Unexpected error ", err)
}
case 2:
if !isSameType(errorCause(err), InvalidUploadID{}) {
t.Fatal("Unexpected error ", err)
}
case 3:
if errorCause(err) != errFileNotFound {
t.Fatal("Unexpected error ", err)
}
default:
t.Fatal("Unexpected error ", i, err, reflect.TypeOf(errorCause(err)), reflect.TypeOf(errFaultyDisk))
}
}
}
}


@@ -18,94 +18,60 @@ package cmd
import (
"crypto/md5"
"crypto/sha256"
"encoding/hex"
"encoding/json"
"errors"
"fmt"
"hash"
"io"
"os"
"path"
"sort"
"strings"
"github.com/minio/minio/pkg/disk"
"github.com/minio/minio/pkg/mimedb"
)
// fsObjects - Implements fs object layer.
type fsObjects struct {
storage StorageAPI
physicalDisk string
storage StorageAPI
// List pool management.
listPool *treeWalkPool
// To manage the appendRoutine go-routines
bgAppend *backgroundAppend
}
// creates format.json, the FS format info in minioMetaBucket.
func initFormatFS(storageDisk StorageAPI) error {
return writeFSFormatData(storageDisk, newFSFormatV1())
}
// loads format.json from minioMetaBucket if it exists.
func loadFormatFS(storageDisk StorageAPI) (format formatConfigV1, err error) {
// Reads entire `format.json`.
buf, err := storageDisk.ReadAll(minioMetaBucket, fsFormatJSONFile)
if err != nil {
return formatConfigV1{}, err
}
// Unmarshal format config.
if err = json.Unmarshal(buf, &format); err != nil {
return formatConfigV1{}, err
}
// Return structured `format.json`.
return format, nil
// list of all errors that can be ignored in tree walk operation in FS
var fsTreeWalkIgnoredErrs = []error{
errFileNotFound,
errVolumeNotFound,
}
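newFSObjects now fails fast unless format.json loads and identifies an FS backend. A minimal sketch of that load-and-validate step; the struct fields below are assumptions for illustration, not the exact formatConfigV1 definition:

package main

import (
	"encoding/json"
	"fmt"
)

// formatConfig is a hypothetical stand-in for the on-disk format.json.
type formatConfig struct {
	Version string `json:"version"`
	Format  string `json:"format"` // e.g. "fs" or "xl"
}

func loadFormat(buf []byte) (formatConfig, error) {
	var f formatConfig
	if err := json.Unmarshal(buf, &f); err != nil {
		return formatConfig{}, err
	}
	if f.Format != "fs" {
		return formatConfig{}, fmt.Errorf("unable to recognize backend format: %q", f.Format)
	}
	return f, nil
}

func main() {
	f, err := loadFormat([]byte(`{"version":"1","format":"fs"}`))
	fmt.Println(f, err) // valid FS format, nil error
	_, err = loadFormat([]byte(`{"version":"1","format":"xl"}`))
	fmt.Println(err) // startup would fail with this error
}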
// newFSObjects - initialize new fs object layer.
func newFSObjects(disk string) (ObjectLayer, error) {
storage, err := newStorageAPI(disk)
if err != nil && err != errDiskNotFound {
return nil, err
func newFSObjects(storage StorageAPI) (ObjectLayer, error) {
if storage == nil {
return nil, errInvalidArgument
}
// Attempt to create `.minio`.
err = storage.MakeVol(minioMetaBucket)
// Load format and validate.
_, err := loadFormatFS(storage)
if err != nil {
switch err {
// Ignore the errors.
case errVolumeExists, errDiskNotFound, errFaultyDisk:
default:
return nil, toObjectErr(err, minioMetaBucket)
}
return nil, fmt.Errorf("Unable to recognize backend format, %s", err)
}
// Runs house keeping code, like creating minioMetaBucket, cleaning up tmp files etc.
if err = fsHouseKeeping(storage); err != nil {
return nil, err
}
// loading format.json from minioMetaBucket.
// Note: The format.json content is ignored, reserved for future use.
format, err := loadFormatFS(storage)
if err != nil {
if err == errFileNotFound {
// format.json doesn't exist, create it inside minioMetaBucket.
err = initFormatFS(storage)
if err != nil {
return nil, err
}
} else {
return nil, err
}
} else if !isFSFormat(format) {
return nil, errFSDiskFormat
// Initialize meta volume, if volume already exists ignores it.
if err = initMetaVolume([]StorageAPI{storage}); err != nil {
return nil, fmt.Errorf("Unable to initialize '.minio.sys' meta volume, %s", err)
}
// Initialize fs objects.
fs := fsObjects{
storage: storage,
physicalDisk: disk,
listPool: newTreeWalkPool(globalLookupTimeout),
storage: storage,
listPool: newTreeWalkPool(globalLookupTimeout),
bgAppend: &backgroundAppend{
infoMap: make(map[string]bgAppendPartsInfo),
},
}
// Return successfully initialized object layer.
@@ -115,27 +81,42 @@ func newFSObjects(disk string) (ObjectLayer, error) {
// Should be called when process shuts down.
func (fs fsObjects) Shutdown() error {
// List if there are any multipart entries.
_, err := fs.storage.ListDir(minioMetaBucket, mpartMetaPrefix)
if err != errFileNotFound {
// Multipart directory is not empty hence do not remove '.minio.sys' volume.
prefix := ""
entries, err := fs.storage.ListDir(minioMetaMultipartBucket, prefix)
if err != nil {
// A non-nil err means that an unexpected error occurred
return toObjectErr(traceError(err))
}
if len(entries) > 0 {
// Should not remove .minio.sys if any multipart
// uploads were found.
return nil
}
if err = fs.storage.DeleteVol(minioMetaMultipartBucket); err != nil {
return toObjectErr(traceError(err))
}
// List if there are any bucket configuration entries.
_, err = fs.storage.ListDir(minioMetaBucket, bucketConfigPrefix)
if err != errFileNotFound {
// Bucket config directory is not empty hence do not remove '.minio.sys' volume.
return nil
// A nil err means that bucket config directory is not empty hence do not remove '.minio.sys' volume.
// A non-nil err means that an unexpected error occurred
return toObjectErr(traceError(err))
}
// Cleanup everything else.
prefix := ""
if err = cleanupDir(fs.storage, minioMetaBucket, prefix); err != nil {
errorIf(err, "Unable to cleanup minio meta bucket")
// Cleanup and delete tmp bucket.
if err = cleanupDir(fs.storage, minioMetaTmpBucket, prefix); err != nil {
return err
}
if err = fs.storage.DeleteVol(minioMetaTmpBucket); err != nil {
return toObjectErr(traceError(err))
}
// Remove format.json and delete .minio.sys bucket
if err = fs.storage.DeleteFile(minioMetaBucket, fsFormatJSONFile); err != nil {
return toObjectErr(traceError(err))
}
if err = fs.storage.DeleteVol(minioMetaBucket); err != nil {
if err != errVolumeNotEmpty {
errorIf(err, "Unable to delete minio meta bucket %s", minioMetaBucket)
return err
return toObjectErr(traceError(err))
}
}
// Successful.
@@ -144,12 +125,14 @@ func (fs fsObjects) Shutdown() error {
// StorageInfo - returns underlying storage statistics.
func (fs fsObjects) StorageInfo() StorageInfo {
info, err := disk.GetInfo(fs.physicalDisk)
fatalIf(err, "Unable to get disk info "+fs.physicalDisk)
return StorageInfo{
info, err := fs.storage.DiskInfo()
errorIf(err, "Unable to get disk info %#v", fs.storage)
storageInfo := StorageInfo{
Total: info.Total,
Free: info.Free,
}
storageInfo.Backend.Type = FS
return storageInfo
}
/// Bucket operations
@@ -158,10 +141,10 @@ func (fs fsObjects) StorageInfo() StorageInfo {
func (fs fsObjects) MakeBucket(bucket string) error {
// Verify if bucket is valid.
if !IsValidBucketName(bucket) {
return BucketNameInvalid{Bucket: bucket}
return traceError(BucketNameInvalid{Bucket: bucket})
}
if err := fs.storage.MakeVol(bucket); err != nil {
return toObjectErr(err, bucket)
return toObjectErr(traceError(err), bucket)
}
return nil
}
@@ -170,11 +153,11 @@ func (fs fsObjects) MakeBucket(bucket string) error {
func (fs fsObjects) GetBucketInfo(bucket string) (BucketInfo, error) {
// Verify if bucket is valid.
if !IsValidBucketName(bucket) {
return BucketInfo{}, BucketNameInvalid{Bucket: bucket}
return BucketInfo{}, traceError(BucketNameInvalid{Bucket: bucket})
}
vi, err := fs.storage.StatVol(bucket)
if err != nil {
return BucketInfo{}, toObjectErr(err, bucket)
return BucketInfo{}, toObjectErr(traceError(err), bucket)
}
return BucketInfo{
Name: bucket,
@@ -187,12 +170,14 @@ func (fs fsObjects) ListBuckets() ([]BucketInfo, error) {
var bucketInfos []BucketInfo
vols, err := fs.storage.ListVols()
if err != nil {
return nil, toObjectErr(err)
return nil, toObjectErr(traceError(err))
}
var invalidBucketNames []string
for _, vol := range vols {
// StorageAPI can send volume names which are incompatible
// with buckets; handle and skip them.
if !IsValidBucketName(vol.Name) {
invalidBucketNames = append(invalidBucketNames, vol.Name)
continue
}
// Ignore the volume special bucket.
@@ -204,6 +189,11 @@ func (fs fsObjects) ListBuckets() ([]BucketInfo, error) {
Created: vol.Created,
})
}
// Print a user-friendly message if we indeed skipped certain directories which are
// incompatible with S3's bucket name restrictions.
if len(invalidBucketNames) > 0 {
errorIf(errors.New("One or more invalid bucket names found"), "Skipping %s", invalidBucketNames)
}
sort.Sort(byBucketName(bucketInfos))
return bucketInfos, nil
}
@@ -212,14 +202,14 @@ func (fs fsObjects) ListBuckets() ([]BucketInfo, error) {
func (fs fsObjects) DeleteBucket(bucket string) error {
// Verify if bucket is valid.
if !IsValidBucketName(bucket) {
return BucketNameInvalid{Bucket: bucket}
return traceError(BucketNameInvalid{Bucket: bucket})
}
// Attempt to delete regular bucket.
if err := fs.storage.DeleteVol(bucket); err != nil {
return toObjectErr(err, bucket)
return toObjectErr(traceError(err), bucket)
}
// Cleanup all the previously incomplete multiparts.
if err := cleanupDir(fs.storage, path.Join(minioMetaBucket, mpartMetaPrefix), bucket); err != nil && err != errVolumeNotFound {
if err := cleanupDir(fs.storage, minioMetaMultipartBucket, bucket); err != nil && errorCause(err) != errVolumeNotFound {
return toObjectErr(err, bucket)
}
return nil
@@ -229,36 +219,31 @@ func (fs fsObjects) DeleteBucket(bucket string) error {
// GetObject - get an object.
func (fs fsObjects) GetObject(bucket, object string, offset int64, length int64, writer io.Writer) (err error) {
// Verify if bucket is valid.
if !IsValidBucketName(bucket) {
return BucketNameInvalid{Bucket: bucket}
}
// Verify if object is valid.
if !IsValidObjectName(object) {
return ObjectNameInvalid{Bucket: bucket, Object: object}
if err = checkGetObjArgs(bucket, object); err != nil {
return err
}
// Offset and length cannot be negative.
if offset < 0 || length < 0 {
return toObjectErr(errUnexpected, bucket, object)
return toObjectErr(traceError(errUnexpected), bucket, object)
}
// Writer cannot be nil.
if writer == nil {
return toObjectErr(errUnexpected, bucket, object)
return toObjectErr(traceError(errUnexpected), bucket, object)
}
// Stat the file to get file size.
fi, err := fs.storage.StatFile(bucket, object)
if err != nil {
return toObjectErr(err, bucket, object)
return toObjectErr(traceError(err), bucket, object)
}
// Reply back invalid range if the input offset and length fall out of range.
if offset > fi.Size || length > fi.Size {
return InvalidRange{offset, length, fi.Size}
return traceError(InvalidRange{offset, length, fi.Size})
}
// Reply if we have inputs with offset and length falling out of file size range.
if offset+length > fi.Size {
return InvalidRange{offset, length, fi.Size}
return traceError(InvalidRange{offset, length, fi.Size})
}
var totalLeft = length
@@ -287,11 +272,11 @@ func (fs fsObjects) GetObject(bucket, object string, offset int64, length int64,
offset += int64(nw)
}
if ew != nil {
err = ew
err = traceError(ew)
break
}
if nr != int64(nw) {
err = io.ErrShortWrite
err = traceError(io.ErrShortWrite)
break
}
}
@@ -299,7 +284,7 @@ func (fs fsObjects) GetObject(bucket, object string, offset int64, length int64,
break
}
if er != nil {
err = er
err = traceError(er)
break
}
if totalLeft == 0 {
@@ -310,22 +295,15 @@ func (fs fsObjects) GetObject(bucket, object string, offset int64, length int64,
return toObjectErr(err, bucket, object)
}
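For reference, the buffered ranged-read loop above boils down to the following minimal sketch; readAt is a hypothetical callback standing in for the storage layer's ReadFile, not the actual StorageAPI signature.

package cmd

import "io"

// copyRange copies 'length' bytes starting at 'offset' into w using a
// fixed staging buffer, mirroring the GetObject loop above.
func copyRange(readAt func(offset int64, buf []byte) (int64, error), w io.Writer, offset, length int64) error {
	buf := make([]byte, 32*1024) // fixed-size staging buffer
	totalLeft := length
	for totalLeft > 0 {
		curLen := int64(len(buf))
		if curLen > totalLeft {
			curLen = totalLeft
		}
		nr, er := readAt(offset, buf[:curLen])
		if nr > 0 {
			nw, ew := w.Write(buf[:nr])
			if ew != nil {
				return ew // write-side error wins, as in GetObject
			}
			if int64(nw) != nr {
				return io.ErrShortWrite
			}
			totalLeft -= nr
			offset += nr
		}
		if er == io.EOF {
			break
		}
		if er != nil {
			return er
		}
	}
	return nil
}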
// GetObjectInfo - get object info.
func (fs fsObjects) GetObjectInfo(bucket, object string) (ObjectInfo, error) {
// Verify if bucket is valid.
if !IsValidBucketName(bucket) {
return ObjectInfo{}, (BucketNameInvalid{Bucket: bucket})
}
// Verify if object is valid.
if !IsValidObjectName(object) {
return ObjectInfo{}, (ObjectNameInvalid{Bucket: bucket, Object: object})
}
// getObjectInfo - get object info.
func (fs fsObjects) getObjectInfo(bucket, object string) (ObjectInfo, error) {
fi, err := fs.storage.StatFile(bucket, object)
if err != nil {
return ObjectInfo{}, toObjectErr(err, bucket, object)
return ObjectInfo{}, toObjectErr(traceError(err), bucket, object)
}
fsMeta, err := readFSMetadata(fs.storage, minioMetaBucket, path.Join(bucketMetaPrefix, bucket, object, fsMetaJSONFile))
if err != nil && err != errFileNotFound {
// Ignore error if the metadata file is not found, other errors must be returned.
if err != nil && errorCause(err) != errFileNotFound {
return ObjectInfo{}, toObjectErr(err, bucket, object)
}
@@ -342,8 +320,7 @@ func (fs fsObjects) GetObjectInfo(bucket, object string) (ObjectInfo, error) {
}
}
// Guess content-type from the extension if possible.
return ObjectInfo{
objInfo := ObjectInfo{
Bucket: bucket,
Name: object,
ModTime: fi.ModTime,
@@ -352,37 +329,54 @@ func (fs fsObjects) GetObjectInfo(bucket, object string) (ObjectInfo, error) {
MD5Sum: fsMeta.Meta["md5Sum"],
ContentType: fsMeta.Meta["content-type"],
ContentEncoding: fsMeta.Meta["content-encoding"],
UserDefined: fsMeta.Meta,
}, nil
}
// md5Sum has already been extracted into objInfo.MD5Sum. We
// need to remove it from fsMeta.Meta to avoid it from appearing as
// part of response headers. e.g, X-Minio-* or X-Amz-*.
delete(fsMeta.Meta, "md5Sum")
objInfo.UserDefined = fsMeta.Meta
return objInfo, nil
}
// GetObjectInfo - get object info.
func (fs fsObjects) GetObjectInfo(bucket, object string) (ObjectInfo, error) {
if err := checkGetObjArgs(bucket, object); err != nil {
return ObjectInfo{}, err
}
return fs.getObjectInfo(bucket, object)
}
// PutObject - create an object.
func (fs fsObjects) PutObject(bucket string, object string, size int64, data io.Reader, metadata map[string]string) (string, error) {
// Verify if bucket is valid.
if !IsValidBucketName(bucket) {
return "", BucketNameInvalid{Bucket: bucket}
}
if !IsValidObjectName(object) {
return "", ObjectNameInvalid{
Bucket: bucket,
Object: object,
}
func (fs fsObjects) PutObject(bucket string, object string, size int64, data io.Reader, metadata map[string]string, sha256sum string) (objInfo ObjectInfo, err error) {
if err = checkPutObjectArgs(bucket, object, fs); err != nil {
return ObjectInfo{}, err
}
// No metadata is set, allocate a new one.
if metadata == nil {
metadata = make(map[string]string)
}
uniqueID := getUUID()
uniqueID := mustGetUUID()
// Uploaded object will first be written to the temporary location which will eventually
// be renamed to the actual location. It is first written to the temporary location
// so that cleaning it up will be easy if the server goes down.
tempObj := path.Join(tmpMetaPrefix, uniqueID)
tempObj := uniqueID
// Initialize md5 writer.
md5Writer := md5.New()
hashWriters := []io.Writer{md5Writer}
var sha256Writer hash.Hash
if sha256sum != "" {
sha256Writer = sha256.New()
hashWriters = append(hashWriters, sha256Writer)
}
multiWriter := io.MultiWriter(hashWriters...)
// Limit the reader to its provided size if specified.
var limitDataReader io.Reader
if size > 0 {
@@ -393,169 +387,126 @@ func (fs fsObjects) PutObject(bucket string, object string, size int64, data io.
limitDataReader = data
}
if size == 0 {
// For size 0 we write a 0byte file.
err := fs.storage.AppendFile(minioMetaBucket, tempObj, []byte(""))
// Prepare file to avoid disk fragmentation
if size > 0 {
err = fs.storage.PrepareFile(minioMetaTmpBucket, tempObj, size)
if err != nil {
return "", toObjectErr(err, bucket, object)
}
} else {
// Allocate a buffer to Read() from request body
bufSize := int64(readSizeV1)
if size > 0 && bufSize > size {
bufSize = size
}
buf := make([]byte, int(bufSize))
teeReader := io.TeeReader(limitDataReader, md5Writer)
bytesWritten, err := fsCreateFile(fs.storage, teeReader, buf, minioMetaBucket, tempObj)
if err != nil {
fs.storage.DeleteFile(minioMetaBucket, tempObj)
return "", toObjectErr(err, bucket, object)
}
// Should return IncompleteBody{} error when reader has fewer
// bytes than specified in request header.
if bytesWritten < size {
fs.storage.DeleteFile(minioMetaBucket, tempObj)
return "", IncompleteBody{}
return ObjectInfo{}, toObjectErr(err, bucket, object)
}
}
// Allocate a buffer to Read() from request body
bufSize := int64(readSizeV1)
if size > 0 && bufSize > size {
bufSize = size
}
buf := make([]byte, int(bufSize))
teeReader := io.TeeReader(limitDataReader, multiWriter)
var bytesWritten int64
bytesWritten, err = fsCreateFile(fs.storage, teeReader, buf, minioMetaTmpBucket, tempObj)
if err != nil {
fs.storage.DeleteFile(minioMetaTmpBucket, tempObj)
errorIf(err, "Failed to create object %s/%s", bucket, object)
return ObjectInfo{}, toObjectErr(err, bucket, object)
}
// Should return IncompleteBody{} error when reader has fewer
// bytes than specified in request header.
if bytesWritten < size {
fs.storage.DeleteFile(minioMetaTmpBucket, tempObj)
return ObjectInfo{}, traceError(IncompleteBody{})
}
// Delete the temporary object in the case of a
// failure. If PutObject succeeds, then there would be
// nothing to delete.
defer fs.storage.DeleteFile(minioMetaTmpBucket, tempObj)
newMD5Hex := hex.EncodeToString(md5Writer.Sum(nil))
// Update the md5sum, if not already set, with the newly calculated one.
if len(metadata["md5Sum"]) == 0 {
metadata["md5Sum"] = newMD5Hex
}
// Validate if payload is valid.
if isSignVerify(data) {
if vErr := data.(*signVerifyReader).Verify(); vErr != nil {
// Incoming payload wrong, delete the temporary object.
fs.storage.DeleteFile(minioMetaBucket, tempObj)
// Error return.
return "", toObjectErr(vErr, bucket, object)
}
}
// md5Hex representation.
md5Hex := metadata["md5Sum"]
if md5Hex != "" {
if newMD5Hex != md5Hex {
// MD5 mismatch, delete the temporary object.
fs.storage.DeleteFile(minioMetaBucket, tempObj)
// Returns md5 mismatch.
return "", BadDigest{md5Hex, newMD5Hex}
return ObjectInfo{}, traceError(BadDigest{md5Hex, newMD5Hex})
}
}
if sha256sum != "" {
newSHA256sum := hex.EncodeToString(sha256Writer.Sum(nil))
if newSHA256sum != sha256sum {
return ObjectInfo{}, traceError(SHA256Mismatch{})
}
}
// Entire object was written to the temp location, now it's safe to rename it to the actual location.
err := fs.storage.RenameFile(minioMetaBucket, tempObj, bucket, object)
err = fs.storage.RenameFile(minioMetaTmpBucket, tempObj, bucket, object)
if err != nil {
return "", toObjectErr(err, bucket, object)
return ObjectInfo{}, toObjectErr(traceError(err), bucket, object)
}
// Save additional metadata only if extended headers such as "X-Amz-Meta-" are set.
if hasExtendedHeader(metadata) {
// Initialize `fs.json` values.
if bucket != minioMetaBucket {
// Save objects' metadata in `fs.json`.
// Skip creating fs.json if bucket is .minio.sys as the object would have been created
// by minio's S3 layer (ex. policy.json)
fsMeta := newFSMetaV1()
fsMeta.Meta = metadata
fsMetaPath := path.Join(bucketMetaPrefix, bucket, object, fsMetaJSONFile)
if err = writeFSMetadata(fs.storage, minioMetaBucket, fsMetaPath, fsMeta); err != nil {
return "", toObjectErr(err, bucket, object)
return ObjectInfo{}, toObjectErr(traceError(err), bucket, object)
}
}
// Return md5sum, successfully wrote object.
return newMD5Hex, nil
return fs.getObjectInfo(bucket, object)
}
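The hashing wiring in PutObject above is a standard Go pattern: tee the request body through an io.MultiWriter of hash writers while it streams to its destination, so the payload is read exactly once. A minimal sketch of the same pattern, independent of the fsObjects types; hashWhileStreaming is illustrative, not a function in this diff.

package cmd

import (
	"crypto/md5"
	"crypto/sha256"
	"encoding/hex"
	"io"
)

// hashWhileStreaming copies r to dst while computing MD5 and SHA-256
// digests on the same pass, mirroring the TeeReader/MultiWriter setup
// in PutObject above.
func hashWhileStreaming(dst io.Writer, r io.Reader) (md5Hex, sha256Hex string, err error) {
	md5Writer := md5.New()
	sha256Writer := sha256.New()
	// Every byte read through tee is also fed to both hash writers.
	tee := io.TeeReader(r, io.MultiWriter(md5Writer, sha256Writer))
	if _, err = io.Copy(dst, tee); err != nil {
		return "", "", err
	}
	return hex.EncodeToString(md5Writer.Sum(nil)), hex.EncodeToString(sha256Writer.Sum(nil)), nil
}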
// DeleteObject - deletes an object from a bucket, this operation is destructive
// and there are no rollbacks supported.
func (fs fsObjects) DeleteObject(bucket, object string) error {
// Verify if bucket is valid.
if !IsValidBucketName(bucket) {
return BucketNameInvalid{Bucket: bucket}
if err := checkDelObjArgs(bucket, object); err != nil {
return err
}
if !IsValidObjectName(object) {
return ObjectNameInvalid{Bucket: bucket, Object: object}
if bucket != minioMetaBucket {
// We don't store fs.json for minio-S3-layer created files like policy.json,
// hence we don't try to delete fs.json for such files.
err := fs.storage.DeleteFile(minioMetaBucket, path.Join(bucketMetaPrefix, bucket, object, fsMetaJSONFile))
if err != nil && err != errFileNotFound {
return toObjectErr(traceError(err), bucket, object)
}
}
err := fs.storage.DeleteFile(minioMetaBucket, path.Join(bucketMetaPrefix, bucket, object, fsMetaJSONFile))
if err != nil && err != errFileNotFound {
return toObjectErr(err, bucket, object)
}
if err = fs.storage.DeleteFile(bucket, object); err != nil {
return toObjectErr(err, bucket, object)
if err := fs.storage.DeleteFile(bucket, object); err != nil {
return toObjectErr(traceError(err), bucket, object)
}
return nil
}
// Checks whether bucket exists.
func isBucketExist(storage StorageAPI, bucketName string) bool {
// Check whether bucket exists.
_, err := storage.StatVol(bucketName)
if err != nil {
if err == errVolumeNotFound {
return false
}
errorIf(err, "Stat failed on bucket "+bucketName+".")
return false
}
return true
}
// ListObjects - list all objects at prefix up to maxKeys, optionally delimited by '/'. Maintains the list pool
// state for future re-entrant list requests.
func (fs fsObjects) ListObjects(bucket, prefix, marker, delimiter string, maxKeys int) (ListObjectsInfo, error) {
// Convert entry to FileInfo
entryToFileInfo := func(entry string) (fileInfo FileInfo, err error) {
// Convert entry to ObjectInfo
entryToObjectInfo := func(entry string) (objInfo ObjectInfo, err error) {
if strings.HasSuffix(entry, slashSeparator) {
// Object name needs to be full path.
fileInfo.Name = entry
fileInfo.Mode = os.ModeDir
objInfo.Name = entry
objInfo.IsDir = true
return
}
if fileInfo, err = fs.storage.StatFile(bucket, entry); err != nil {
return
if objInfo, err = fs.getObjectInfo(bucket, entry); err != nil {
return ObjectInfo{}, err
}
fsMeta, mErr := readFSMetadata(fs.storage, minioMetaBucket, path.Join(bucketMetaPrefix, bucket, entry, fsMetaJSONFile))
if mErr != nil && mErr != errFileNotFound {
return FileInfo{}, mErr
}
if len(fsMeta.Meta) == 0 {
fsMeta.Meta = make(map[string]string)
}
// Object name needs to be full path.
fileInfo.Name = entry
fileInfo.MD5Sum = fsMeta.Meta["md5Sum"]
return
}
// Verify if bucket is valid.
if !IsValidBucketName(bucket) {
return ListObjectsInfo{}, BucketNameInvalid{Bucket: bucket}
}
// Verify if bucket exists.
if !isBucketExist(fs.storage, bucket) {
return ListObjectsInfo{}, BucketNotFound{Bucket: bucket}
}
if !IsValidObjectPrefix(prefix) {
return ListObjectsInfo{}, ObjectNameInvalid{Bucket: bucket, Object: prefix}
}
// Verify if delimiter is anything other than '/', which we do not support.
if delimiter != "" && delimiter != slashSeparator {
return ListObjectsInfo{}, UnsupportedDelimiter{
Delimiter: delimiter,
}
}
// Verify if marker has prefix.
if marker != "" {
if !strings.HasPrefix(marker, prefix) {
return ListObjectsInfo{}, InvalidMarkerPrefixCombination{
Marker: marker,
Prefix: prefix,
}
}
if err := checkListObjsArgs(bucket, prefix, marker, delimiter, fs); err != nil {
return ListObjectsInfo{}, err
}
// With max keys of zero we have reached eof, return right here.
@@ -593,10 +544,10 @@ func (fs fsObjects) ListObjects(bucket, prefix, marker, delimiter string, maxKey
// object string does not end with "/".
return !strings.HasSuffix(object, slashSeparator)
}
listDir := listDirFactory(isLeaf, fs.storage)
listDir := listDirFactory(isLeaf, fsTreeWalkIgnoredErrs, fs.storage)
walkResultCh = startTreeWalk(bucket, prefix, marker, recursive, listDir, isLeaf, endWalkCh)
}
var fileInfos []FileInfo
var objInfos []ObjectInfo
var eof bool
var nextMarker string
for i := 0; i < maxKeys; {
@@ -609,17 +560,17 @@ func (fs fsObjects) ListObjects(bucket, prefix, marker, delimiter string, maxKey
// For any walk error return right away.
if walkResult.err != nil {
// File not found is a valid case.
if walkResult.err == errFileNotFound {
if errorCause(walkResult.err) == errFileNotFound {
return ListObjectsInfo{}, nil
}
return ListObjectsInfo{}, toObjectErr(walkResult.err, bucket, prefix)
}
fileInfo, err := entryToFileInfo(walkResult.entry)
objInfo, err := entryToObjectInfo(walkResult.entry)
if err != nil {
return ListObjectsInfo{}, nil
}
nextMarker = fileInfo.Name
fileInfos = append(fileInfos, fileInfo)
nextMarker = objInfo.Name
objInfos = append(objInfos, objInfo)
if walkResult.end {
eof = true
break
@@ -632,29 +583,28 @@ func (fs fsObjects) ListObjects(bucket, prefix, marker, delimiter string, maxKey
}
result := ListObjectsInfo{IsTruncated: !eof}
for _, fileInfo := range fileInfos {
result.NextMarker = fileInfo.Name
if fileInfo.Mode.IsDir() {
result.Prefixes = append(result.Prefixes, fileInfo.Name)
for _, objInfo := range objInfos {
result.NextMarker = objInfo.Name
if objInfo.IsDir {
result.Prefixes = append(result.Prefixes, objInfo.Name)
continue
}
result.Objects = append(result.Objects, ObjectInfo{
Name: fileInfo.Name,
ModTime: fileInfo.ModTime,
Size: fileInfo.Size,
MD5Sum: fileInfo.MD5Sum,
IsDir: false,
})
result.Objects = append(result.Objects, objInfo)
}
return result, nil
}
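ListObjects returns one truncated page at a time plus a NextMarker; a caller drains the listing by looping while IsTruncated is set. A usage sketch against the ObjectLayer API above; listAllObjects is illustrative, not part of this diff, and the 1000-key page size mirrors the usual S3 maximum.

// listAllObjects pages through ListObjects until the result is no
// longer truncated, feeding NextMarker back in as the next marker.
func listAllObjects(objAPI ObjectLayer, bucket, prefix string) ([]ObjectInfo, error) {
	var all []ObjectInfo
	marker := ""
	for {
		result, err := objAPI.ListObjects(bucket, prefix, marker, slashSeparator, 1000)
		if err != nil {
			return nil, err
		}
		all = append(all, result.Objects...)
		if !result.IsTruncated {
			return all, nil
		}
		marker = result.NextMarker
	}
}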
// HealObject - no-op for fs. Valid only for XL.
func (fs fsObjects) HealObject(bucket, object string) error {
return NotImplemented{}
return traceError(NotImplemented{})
}
// HealListObjects - list objects for healing. Valid only for XL
func (fs fsObjects) ListObjectsHeal(bucket, prefix, marker, delimiter string, maxKeys int) (ListObjectsInfo, error) {
return ListObjectsInfo{}, NotImplemented{}
// HealBucket - no-op for fs, Valid only for XL.
func (fs fsObjects) HealBucket(bucket string) error {
return traceError(NotImplemented{})
}
// ListObjectsHeal - list all objects to be healed. Valid only for XL
func (fs fsObjects) ListObjectsHeal(bucket, prefix, marker, delimiter string, maxKeys int) (ListObjectsInfo, error) {
return ListObjectsInfo{}, traceError(NotImplemented{})
}


@@ -17,6 +17,7 @@
package cmd
import (
"bytes"
"os"
"path/filepath"
"testing"
@@ -39,23 +40,312 @@ func TestNewFS(t *testing.T) {
disks = append(disks, xlDisk)
}
endpoints, err := parseStorageEndpoints([]string{disk})
if err != nil {
t.Fatal("Uexpected error: ", err)
}
fsStorageDisks, err := initStorageDisks(endpoints)
if err != nil {
t.Fatal("Uexpected error: ", err)
}
endpoints, err = parseStorageEndpoints(disks)
if err != nil {
t.Fatal("Uexpected error: ", err)
}
xlStorageDisks, err := initStorageDisks(endpoints)
if err != nil {
t.Fatal("Uexpected error: ", err)
}
// Initializes all disks with XL
_, err := newXLObjects(disks, nil)
formattedDisks, err := waitForFormatDisks(true, endpoints, xlStorageDisks)
if err != nil {
t.Fatalf("Unable to format XL %s", err)
}
_, err = newXLObjects(formattedDisks)
if err != nil {
t.Fatalf("Unable to initialize XL object, %s", err)
}
testCases := []struct {
disk string
disk StorageAPI
expectedErr error
}{
{disk, nil},
{disks[0], errFSDiskFormat},
{fsStorageDisks[0], nil},
{xlStorageDisks[0], errFSDiskFormat},
}
for _, testCase := range testCases {
if _, err := newFSObjects(testCase.disk); err != testCase.expectedErr {
t.Fatalf("expected: %s, got: %s", testCase.expectedErr, err)
if _, err = waitForFormatDisks(true, endpoints, []StorageAPI{testCase.disk}); err != testCase.expectedErr {
t.Errorf("expected: %s, got :%s", testCase.expectedErr, err)
}
}
_, err = newFSObjects(nil)
if err != errInvalidArgument {
t.Errorf("Expecting error invalid argument, got %s", err)
}
_, err = newFSObjects(&retryStorage{xlStorageDisks[0]})
if err != nil {
errMsg := "Unable to recognize backend format, Disk is not in FS format."
if err.Error() != errMsg {
t.Errorf("Expecting %s, got %s", errMsg, err)
}
}
}
// TestFSShutdown - initialize a new FS object layer then calls
// Shutdown to check returned results
func TestFSShutdown(t *testing.T) {
rootPath, err := newTestConfig("us-east-1")
if err != nil {
t.Fatal(err)
}
defer removeAll(rootPath)
bucketName := "testbucket"
objectName := "object"
// Create and return an fsObject with its path in the disk
prepareTest := func() (fsObjects, string) {
disk := filepath.Join(os.TempDir(), "minio-"+nextSuffix())
obj := initFSObjects(disk, t)
fs := obj.(fsObjects)
objectContent := "12345"
obj.MakeBucket(bucketName)
sha256sum := ""
obj.PutObject(bucketName, objectName, int64(len(objectContent)), bytes.NewReader([]byte(objectContent)), nil, sha256sum)
return fs, disk
}
// Test Shutdown with regular conditions
fs, disk := prepareTest()
if err := fs.Shutdown(); err != nil {
t.Fatal("Cannot shutdown the FS object: ", err)
}
removeAll(disk)
// Test Shutdown with faulty disk
for i := 1; i <= 5; i++ {
fs, disk := prepareTest()
fs.DeleteObject(bucketName, objectName)
fsStorage := fs.storage.(*retryStorage)
fs.storage = newNaughtyDisk(fsStorage, map[int]error{i: errFaultyDisk}, nil)
if err := fs.Shutdown(); errorCause(err) != errFaultyDisk {
t.Fatal(i, ", Got unexpected fs shutdown error: ", err)
}
removeAll(disk)
}
}
// TestFSLoadFormatFS - test loadFormatFS with healthy and faulty disks
func TestFSLoadFormatFS(t *testing.T) {
// Prepare for testing
disk := filepath.Join(os.TempDir(), "minio-"+nextSuffix())
defer removeAll(disk)
obj := initFSObjects(disk, t)
fs := obj.(fsObjects)
// Regular format loading
_, err := loadFormatFS(fs.storage)
if err != nil {
t.Fatal("Should not fail here", err)
}
// Loading corrupted format file
fs.storage.AppendFile(minioMetaBucket, fsFormatJSONFile, []byte{'b'})
_, err = loadFormatFS(fs.storage)
if err == nil {
t.Fatal("Should return an error here")
}
// Loading format file from faulty disk
fsStorage := fs.storage.(*retryStorage)
fs.storage = newNaughtyDisk(fsStorage, nil, errFaultyDisk)
_, err = loadFormatFS(fs.storage)
if err != errFaultyDisk {
t.Fatal("Should return faulty disk error")
}
}
// TestFSGetBucketInfo - test GetBucketInfo with healthy and faulty disks
func TestFSGetBucketInfo(t *testing.T) {
// Prepare for testing
disk := filepath.Join(os.TempDir(), "minio-"+nextSuffix())
defer removeAll(disk)
obj := initFSObjects(disk, t)
fs := obj.(fsObjects)
bucketName := "bucket"
obj.MakeBucket(bucketName)
// Test with valid parameters
info, err := fs.GetBucketInfo(bucketName)
if err != nil {
t.Fatal(err)
}
if info.Name != bucketName {
t.Fatalf("wrong bucket name, expected: %s, found: %s", bucketName, info.Name)
}
// Test with a nonexistent bucket
_, err = fs.GetBucketInfo("a")
if !isSameType(errorCause(err), BucketNameInvalid{}) {
t.Fatal("BucketNameInvalid error not returned")
}
// Loading format file from faulty disk
fsStorage := fs.storage.(*retryStorage)
fs.storage = newNaughtyDisk(fsStorage, nil, errFaultyDisk)
_, err = fs.GetBucketInfo(bucketName)
if errorCause(err) != errFaultyDisk {
t.Fatal("errFaultyDisk error not returned")
}
}
// TestFSDeleteObject - test fs.DeleteObject() with healthy and corrupted disks
func TestFSDeleteObject(t *testing.T) {
// Prepare for tests
disk := filepath.Join(os.TempDir(), "minio-"+nextSuffix())
defer removeAll(disk)
obj := initFSObjects(disk, t)
fs := obj.(fsObjects)
bucketName := "bucket"
objectName := "object"
obj.MakeBucket(bucketName)
sha256sum := ""
obj.PutObject(bucketName, objectName, int64(len("abcd")), bytes.NewReader([]byte("abcd")), nil, sha256sum)
// Test with invalid bucket name
if err := fs.DeleteObject("fo", objectName); !isSameType(errorCause(err), BucketNameInvalid{}) {
t.Fatal("Unexpected error: ", err)
}
// Test with invalid object name
if err := fs.DeleteObject(bucketName, "\\"); !isSameType(errorCause(err), ObjectNameInvalid{}) {
t.Fatal("Unexpected error: ", err)
}
// Test with a nonexistent bucket/object
if err := fs.DeleteObject("foobucket", "fooobject"); !isSameType(errorCause(err), BucketNotFound{}) {
t.Fatal("Unexpected error: ", err)
}
// Test with valid condition
if err := fs.DeleteObject(bucketName, objectName); err != nil {
t.Fatal("Unexpected error: ", err)
}
// Loading format file from faulty disk
fsStorage := fs.storage.(*retryStorage)
fs.storage = newNaughtyDisk(fsStorage, nil, errFaultyDisk)
if err := fs.DeleteObject(bucketName, objectName); errorCause(err) != errFaultyDisk {
t.Fatal("Unexpected error: ", err)
}
}
// TestFSDeleteBucket - tests for fs DeleteBucket
func TestFSDeleteBucket(t *testing.T) {
// Prepare for testing
disk := filepath.Join(os.TempDir(), "minio-"+nextSuffix())
defer removeAll(disk)
obj := initFSObjects(disk, t)
fs := obj.(fsObjects)
bucketName := "bucket"
err := obj.MakeBucket(bucketName)
if err != nil {
t.Fatal("Unexpected error: ", err)
}
// Test with an invalid bucket name
if err := fs.DeleteBucket("fo"); !isSameType(errorCause(err), BucketNameInvalid{}) {
t.Fatal("Unexpected error: ", err)
}
// Test with a nonexistent bucket
if err := fs.DeleteBucket("foobucket"); !isSameType(errorCause(err), BucketNotFound{}) {
t.Fatal("Unexpected error: ", err)
}
// Test with a valid case
if err := fs.DeleteBucket(bucketName); err != nil {
t.Fatal("Unexpected error: ", err)
}
obj.MakeBucket(bucketName)
// Loading format file from faulty disk
fsStorage := fs.storage.(*retryStorage)
for i := 1; i <= 2; i++ {
fs.storage = newNaughtyDisk(fsStorage, map[int]error{i: errFaultyDisk}, nil)
if err := fs.DeleteBucket(bucketName); errorCause(err) != errFaultyDisk {
t.Fatal("Unexpected error: ", err)
}
}
}
// TestFSListBuckets - tests for fs ListBuckets
func TestFSListBuckets(t *testing.T) {
// Prepare for tests
disk := filepath.Join(os.TempDir(), "minio-"+nextSuffix())
defer removeAll(disk)
obj := initFSObjects(disk, t)
fs := obj.(fsObjects)
bucketName := "bucket"
if err := obj.MakeBucket(bucketName); err != nil {
t.Fatal("Unexpected error: ", err)
}
// Create a bucket with invalid name
if err := fs.storage.MakeVol("vo^"); err != nil {
t.Fatal("Unexpected error: ", err)
}
// Test
buckets, err := fs.ListBuckets()
if err != nil {
t.Fatal("Unexpected error: ", err)
}
if len(buckets) != 1 {
t.Fatal("ListBuckets not working properly")
}
// Test ListBuckets with faulty disks
fsStorage := fs.storage.(*retryStorage)
for i := 1; i <= 2; i++ {
fs.storage = newNaughtyDisk(fsStorage, nil, errFaultyDisk)
if _, err := fs.ListBuckets(); errorCause(err) != errFaultyDisk {
t.Fatal("Unexpected error: ", err)
}
}
}
// TestFSHealObject - tests for fs HealObject
func TestFSHealObject(t *testing.T) {
disk := filepath.Join(os.TempDir(), "minio-"+nextSuffix())
defer removeAll(disk)
obj := initFSObjects(disk, t)
err := obj.HealObject("bucket", "object")
if err == nil || !isSameType(errorCause(err), NotImplemented{}) {
t.Fatalf("Heal Object should return NotImplemented error ")
}
}
// TestFSListObjectsHeal - tests for fs ListObjectsHeal
func TestFSListObjectsHeal(t *testing.T) {
disk := filepath.Join(os.TempDir(), "minio-"+nextSuffix())
defer removeAll(disk)
obj := initFSObjects(disk, t)
_, err := obj.ListObjectsHeal("bucket", "prefix", "marker", "delimiter", 1000)
if err == nil || !isSameType(errorCause(err), NotImplemented{}) {
t.Fatalf("Heal Object should return NotImplemented error ")
}
}


@@ -1,5 +1,5 @@
/*
* Minio Cloud Storage, (C) 2015 Minio, Inc.
* Minio Cloud Storage, (C) 2015, 2016 Minio, Inc.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
@@ -23,6 +23,7 @@ import (
"strings"
"time"
humanize "github.com/dustin/go-humanize"
router "github.com/gorilla/mux"
"github.com/rs/cors"
)
@@ -41,9 +42,13 @@ func registerHandlers(mux *router.Router, handlerFns ...HandlerFunc) http.Handle
// Adds limiting body size middleware
// Set the body size limit to 6 Gb = Maximum object size + other possible data
// in the same request
const requestMaxBodySize = 1024 * 1024 * 1024 * (5 + 1)
// Maximum allowed form data field values. 64MiB is a guessed practical value
// which is more than enough to accommodate any form data fields and headers.
const requestFormDataSize = 64 * humanize.MiByte
// For any HTTP request, request body should be not more than 5GiB + requestFormDataSize
// where, 5GiB is the maximum allowed object size for object upload.
const requestMaxBodySize = 5*humanize.GiByte + requestFormDataSize
type requestSizeLimitHandler struct {
handler http.Handler
@@ -60,33 +65,66 @@ func (h requestSizeLimitHandler) ServeHTTP(w http.ResponseWriter, r *http.Reques
h.handler.ServeHTTP(w, r)
}
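The body of requestSizeLimitHandler.ServeHTTP is elided in this hunk. In plain Go, the same effect is commonly achieved with http.MaxBytesReader; the sketch below shows that general technique under the constants above, not Minio's exact implementation.

// limitRequestBody caps r.Body at maxSize bytes; reads past the cap
// fail with a "request body too large" error. A sketch only.
func limitRequestBody(next http.Handler, maxSize int64) http.Handler {
	return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		r.Body = http.MaxBytesReader(w, r.Body, maxSize)
		next.ServeHTTP(w, r)
	})
}

Wired as limitRequestBody(mux, requestMaxBodySize), this would enforce the 5GiB + 64MiB cap computed above.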
// Adds redirect rules for incoming requests.
type redirectHandler struct {
handler http.Handler
locationPrefix string
}
// Reserved bucket.
const (
reservedBucket = "/minio"
)
// Adds redirect rules for incoming requests.
type redirectHandler struct {
handler http.Handler
}
func setBrowserRedirectHandler(h http.Handler) http.Handler {
return redirectHandler{handler: h, locationPrefix: reservedBucket}
return redirectHandler{handler: h}
}
// Fetch the redirect location if urlPath satisfies certain
// criteria. Some special names are considered to be
// redirectable; this is a purely internal function that
// serves only a limited purpose in the redirect handler for
// browser requests.
func getRedirectLocation(urlPath string) (rLocation string) {
if urlPath == reservedBucket {
rLocation = reservedBucket + "/"
}
if contains([]string{
"/",
"/webrpc",
"/login",
"/favicon.ico",
}, urlPath) {
rLocation = reservedBucket + urlPath
}
return rLocation
}
// guessIsBrowserReq - returns true if the request likely came from a browser.
// This implementation just inspects the User-Agent header and
// looks for the "Mozilla" string. It is by no means a reliable
// way to know whether the request really came from a browser,
// since User-Agent values can be arbitrary; it is only a
// best-effort function.
func guessIsBrowserReq(req *http.Request) bool {
if req == nil {
return false
}
return strings.Contains(req.Header.Get("User-Agent"), "Mozilla")
}
func (h redirectHandler) ServeHTTP(w http.ResponseWriter, r *http.Request) {
// Re-direction handled specifically for browsers.
if strings.Contains(r.Header.Get("User-Agent"), "Mozilla") && !isRequestSignatureV4(r) {
// '/' is redirected to 'locationPrefix/'
// '/webrpc' is redirected to 'locationPrefix/webrpc'
// '/login' is redirected to 'locationPrefix/login'
switch r.URL.Path {
case "/", "/webrpc", "/login", "/favicon.ico":
location := h.locationPrefix + r.URL.Path
// Redirect to new location.
http.Redirect(w, r, location, http.StatusTemporaryRedirect)
return
aType := getRequestAuthType(r)
// Re-direct only for JWT and anonymous requests from browser.
if aType == authTypeJWT || aType == authTypeAnonymous {
// Re-direction is handled specifically for browser requests.
if guessIsBrowserReq(r) && globalIsBrowserEnabled {
// Fetch the redirect location if any.
redirectLocation := getRedirectLocation(r.URL.Path)
if redirectLocation != "" {
// Employ a temporary re-direct.
http.Redirect(w, r, redirectLocation, http.StatusTemporaryRedirect)
return
}
}
}
h.handler.ServeHTTP(w, r)
@@ -102,10 +140,11 @@ func setBrowserCacheControlHandler(h http.Handler) http.Handler {
}
func (h cacheControlHandler) ServeHTTP(w http.ResponseWriter, r *http.Request) {
if r.Method == "GET" && strings.Contains(r.Header.Get("User-Agent"), "Mozilla") {
if r.Method == "GET" && guessIsBrowserReq(r) && globalIsBrowserEnabled {
// For all browser requests set appropriate Cache-Control policies
match, e := regexp.MatchString(reservedBucket+`/([^/]+\.js|favicon.ico)`, r.URL.Path)
if e != nil {
match, err := regexp.Match(reservedBucket+`/([^/]+\.js|favicon.ico)`, []byte(r.URL.Path))
if err != nil {
errorIf(err, "Unable to match incoming URL %s", r.URL)
writeErrorResponse(w, r, ErrInternalError, r.URL.Path)
return
}
@@ -133,13 +172,22 @@ func setPrivateBucketHandler(h http.Handler) http.Handler {
func (h minioPrivateBucketHandler) ServeHTTP(w http.ResponseWriter, r *http.Request) {
// For all non-browser requests, reject access to 'reservedBucket'.
if !strings.Contains(r.Header.Get("User-Agent"), "Mozilla") && path.Clean(r.URL.Path) == reservedBucket {
if !guessIsBrowserReq(r) && path.Clean(r.URL.Path) == reservedBucket {
writeErrorResponse(w, r, ErrAllAccessDisabled, r.URL.Path)
return
}
h.handler.ServeHTTP(w, r)
}
type timeValidityHandler struct {
handler http.Handler
}
// setTimeValidityHandler to validate parsable time over http header
func setTimeValidityHandler(h http.Handler) http.Handler {
return timeValidityHandler{h}
}
// Supported Amz date formats.
var amzDateFormats = []string{
time.RFC1123,
@@ -148,23 +196,23 @@ var amzDateFormats = []string{
// Add new AMZ date formats here.
}
// parseAmzDate - parses date string into supported amz date formats.
func parseAmzDate(amzDateStr string) (amzDate time.Time, apiErr APIErrorCode) {
for _, dateFormat := range amzDateFormats {
amzDate, e := time.Parse(dateFormat, amzDateStr)
if e == nil {
return amzDate, ErrNone
}
}
return time.Time{}, ErrMalformedDate
}
// Supported Amz date headers.
var amzDateHeaders = []string{
"x-amz-date",
"date",
}
// parseAmzDate - parses date string into supported amz date formats.
func parseAmzDate(amzDateStr string) (amzDate time.Time, apiErr APIErrorCode) {
for _, dateFormat := range amzDateFormats {
amzDate, err := time.Parse(dateFormat, amzDateStr)
if err == nil {
return amzDate, ErrNone
}
}
return time.Time{}, ErrMalformedDate
}
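Composed with the skew check that timeValidityHandler enforces below, request-time validation reduces to the following sketch; checkDateSkew is an illustrative helper, not a function in this diff.

// checkDateSkew parses an amz-style date string and rejects it when it
// deviates from the server clock by more than maxSkew in either direction.
func checkDateSkew(amzDateStr string, maxSkew time.Duration) APIErrorCode {
	amzDate, apiErr := parseAmzDate(amzDateStr)
	if apiErr != ErrNone {
		return apiErr
	}
	curTime := time.Now().UTC()
	if curTime.Sub(amzDate) > maxSkew || amzDate.Sub(curTime) > maxSkew {
		return ErrRequestTimeTooSkewed
	}
	return ErrNone
}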
// parseAmzDateHeader - parses supported amz date headers, in
// supported amz date formats.
func parseAmzDateHeader(req *http.Request) (time.Time, APIErrorCode) {
@@ -178,18 +226,10 @@ func parseAmzDateHeader(req *http.Request) (time.Time, APIErrorCode) {
return time.Time{}, ErrMissingDateHeader
}
type timeHandler struct {
handler http.Handler
}
// setTimeValidityHandler to validate parsable time over http header
func setTimeValidityHandler(h http.Handler) http.Handler {
return timeHandler{h}
}
func (h timeHandler) ServeHTTP(w http.ResponseWriter, r *http.Request) {
// Verify if date headers are set, if not reject the request
if _, ok := r.Header["Authorization"]; ok {
func (h timeValidityHandler) ServeHTTP(w http.ResponseWriter, r *http.Request) {
aType := getRequestAuthType(r)
if aType == authTypeSigned || aType == authTypeSignedV2 || aType == authTypeStreamingSigned {
// Verify if date headers are set, if not reject the request
amzDate, apiErr := parseAmzDateHeader(r)
if apiErr != ErrNone {
// All our internal APIs are sensitive towards Date
@@ -198,10 +238,10 @@ func (h timeHandler) ServeHTTP(w http.ResponseWriter, r *http.Request) {
writeErrorResponse(w, r, apiErr, r.URL.Path)
return
}
// Verify if the request date header is shifted by less than maxSkewTime parameter in the past
// Verify if the request date header is shifted by less than globalMaxSkewTime parameter in the past
// or in the future, reject request otherwise.
curTime := time.Now().UTC()
if curTime.Sub(amzDate) > maxSkewTime || amzDate.Sub(curTime) > maxSkewTime {
if curTime.Sub(amzDate) > globalMaxSkewTime || amzDate.Sub(curTime) > globalMaxSkewTime {
writeErrorResponse(w, r, ErrRequestTimeTooSkewed, r.URL.Path)
return
}
@@ -232,47 +272,6 @@ func setIgnoreResourcesHandler(h http.Handler) http.Handler {
return resourceHandler{h}
}
// Resource handler ServeHTTP() wrapper
func (h resourceHandler) ServeHTTP(w http.ResponseWriter, r *http.Request) {
// Skip the first element which is usually '/' and split the rest.
splits := strings.SplitN(r.URL.Path[1:], "/", 2)
// Save bucketName and objectName extracted from url Path.
var bucketName, objectName string
if len(splits) == 1 {
bucketName = splits[0]
}
if len(splits) == 2 {
bucketName = splits[0]
objectName = splits[1]
}
// If bucketName is present and not objectName check for bucket level resource queries.
if bucketName != "" && objectName == "" {
if ignoreNotImplementedBucketResources(r) {
writeErrorResponse(w, r, ErrNotImplemented, r.URL.Path)
return
}
}
// If bucketName and objectName are present check for its resource queries.
if bucketName != "" && objectName != "" {
if ignoreNotImplementedObjectResources(r) {
writeErrorResponse(w, r, ErrNotImplemented, r.URL.Path)
return
}
}
// A put method on path "/" doesn't make sense, ignore it.
if r.Method == "PUT" && r.URL.Path == "/" {
writeErrorResponse(w, r, ErrNotImplemented, r.URL.Path)
return
}
// Serve HTTP.
h.handler.ServeHTTP(w, r)
}
//// helpers
// Checks requests for not implemented Bucket resources
func ignoreNotImplementedBucketResources(req *http.Request) bool {
for name := range req.URL.Query() {
@@ -313,3 +312,42 @@ var notimplementedObjectResourceNames = map[string]bool{
"acl": true,
"policy": true,
}
// Resource handler ServeHTTP() wrapper
func (h resourceHandler) ServeHTTP(w http.ResponseWriter, r *http.Request) {
// Skip the first element which is usually '/' and split the rest.
splits := strings.SplitN(r.URL.Path[1:], "/", 2)
// Save bucketName and objectName extracted from url Path.
var bucketName, objectName string
if len(splits) == 1 {
bucketName = splits[0]
}
if len(splits) == 2 {
bucketName = splits[0]
objectName = splits[1]
}
// If bucketName is present and not objectName check for bucket level resource queries.
if bucketName != "" && objectName == "" {
if ignoreNotImplementedBucketResources(r) {
writeErrorResponse(w, r, ErrNotImplemented, r.URL.Path)
return
}
}
// If bucketName and objectName are present check for its resource queries.
if bucketName != "" && objectName != "" {
if ignoreNotImplementedObjectResources(r) {
writeErrorResponse(w, r, ErrNotImplemented, r.URL.Path)
return
}
}
// A put method on path "/" doesn't make sense, ignore it.
if r.Method == "PUT" && r.URL.Path == "/" {
writeErrorResponse(w, r, ErrNotImplemented, r.URL.Path)
return
}
// Serve HTTP.
h.handler.ServeHTTP(w, r)
}
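All of the wrappers in this file compose through registerHandlers (its signature appears in a hunk header near the top of the file): each constructor wraps the handler produced so far, so the last constructor applied ends up outermost. A sketch of that folding pattern with the constructor type spelled out; chainHandlers is illustrative, not the actual registerHandlers body.

// chainHandlers folds middleware constructors over a base handler; the
// last constructor in the list becomes the outermost wrapper.
func chainHandlers(h http.Handler, fns ...func(http.Handler) http.Handler) http.Handler {
	for _, fn := range fns {
		h = fn(h)
	}
	return h
}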


@@ -0,0 +1,90 @@
/*
* Minio Cloud Storage, (C) 2016 Minio, Inc.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package cmd
import (
"net/http"
"testing"
)
// Tests getRedirectLocation function for all its criteria.
func TestRedirectLocation(t *testing.T) {
testCases := []struct {
urlPath string
location string
}{
{
// 1. When urlPath is '/minio'
urlPath: reservedBucket,
location: reservedBucket + "/",
},
{
// 2. When urlPath is '/'
urlPath: "/",
location: reservedBucket + "/",
},
{
// 3. When urlPath is '/webrpc'
urlPath: "/webrpc",
location: reservedBucket + "/webrpc",
},
{
// 4. When urlPath is '/login'
urlPath: "/login",
location: reservedBucket + "/login",
},
{
// 5. When urlPath is '/favicon.ico'
urlPath: "/favicon.ico",
location: reservedBucket + "/favicon.ico",
},
{
// 6. When urlPath is '/unknown'
urlPath: "/unknown",
location: "",
},
}
// Validate all conditions.
for i, testCase := range testCases {
loc := getRedirectLocation(testCase.urlPath)
if testCase.location != loc {
t.Errorf("Test %d: Unexpected location expected %s, got %s", i+1, testCase.location, loc)
}
}
}
// Tests browser request guess function.
func TestGuessIsBrowser(t *testing.T) {
if guessIsBrowserReq(nil) {
t.Fatal("Unexpected return for nil request")
}
r := &http.Request{
Header: http.Header{},
}
r.Header.Set("User-Agent", "Mozilla")
if !guessIsBrowserReq(r) {
t.Fatal("Test shouldn't fail for a possible browser request.")
}
r = &http.Request{
Header: http.Header{},
}
r.Header.Set("User-Agent", "mc")
if guessIsBrowserReq(r) {
t.Fatal("Test shouldn't report as browser for a non browser request.")
}
}


@@ -17,56 +17,103 @@
package cmd
import (
"crypto/x509"
"os"
"strings"
"time"
humanize "github.com/dustin/go-humanize"
"github.com/fatih/color"
"github.com/minio/cli"
"github.com/minio/mc/pkg/console"
"github.com/minio/minio/pkg/objcache"
)
// Global constants for Minio.
const (
minGoVersion = ">= 1.6" // Minio requires at least Go v1.6
minGoVersion = ">= 1.7" // Minio requires at least Go v1.7
)
// minio configuration related constants.
const (
globalMinioConfigVersion = "7"
globalMinioConfigDir = ".minio"
globalMinioCertsDir = "certs"
globalMinioCertFile = "public.crt"
globalMinioKeyFile = "private.key"
globalMinioConfigFile = "config.json"
globalMinioConfigVersion = "10"
globalMinioConfigDir = ".minio"
globalMinioCertsDir = "certs"
globalMinioCertsCADir = "CAs"
globalMinioCertFile = "public.crt"
globalMinioKeyFile = "private.key"
globalMinioConfigFile = "config.json"
globalMinioCertExpireWarnDays = time.Hour * 24 * 30 // 30 days.
// Add new global values here.
)
const (
// Limit fields size (except file) to 1Mib since Policy document
// can reach that size according to https://aws.amazon.com/articles/1434
maxFormFieldSize = int64(1 * humanize.MiByte)
// The maximum allowed difference between the request generation time and the server processing time
globalMaxSkewTime = 15 * time.Minute
)
var (
globalQuiet = false // Quiet flag set via command line
globalTrace = false // Trace flag set via environment setting.
globalQuiet = false // quiet flag set via command line.
globalConfigDir = mustGetConfigPath() // config-dir flag set via command line
// Add new global flags here.
// Maximum connections handled per
// server, defaults to 0 (unlimited).
globalMaxConn = 0
// Maximum cache size.
globalMaxCacheSize = uint64(maxCacheSize)
globalIsDistXL = false // "Is Distributed?" flag.
// This flag is set to 'true' by default, it is set to `false`
// when MINIO_BROWSER env is set to 'off'.
globalIsBrowserEnabled = !strings.EqualFold(os.Getenv("MINIO_BROWSER"), "off")
// Maximum cache size. Defaults to disabled.
// Caching is enabled only for RAM size > 8GiB.
globalMaxCacheSize = uint64(0)
// Cache expiry.
globalCacheExpiry = objcache.DefaultExpiry
// Minio local server address (in `host:port` format)
globalMinioAddr = ""
// Minio default port, can be changed through command line.
globalMinioPort = "9000"
// Holds the host that was passed using --address
globalMinioHost = ""
// Peer communication struct
globalS3Peers = s3Peers{}
// CA root certificates, a nil value means system certs pool will be used
globalRootCAs *x509.CertPool
// Add new variable global values here.
)
var (
// Limit fields size (except file) to 1Mib since Policy document
// can reach that size according to https://aws.amazon.com/articles/1434
maxFormFieldSize = int64(1024 * 1024)
)
var (
// The maximum allowed difference between the request generation time and the server processing time
maxSkewTime = 15 * time.Minute
// Keeps the connection active by waiting for following amount of time.
// Primarily used in ListenBucketNotification.
globalSNSConnAlive = 5 * time.Second
)
// global colors.
var (
colorBlue = color.New(color.FgBlue).SprintfFunc()
colorBold = color.New(color.Bold).SprintFunc()
colorRed = color.New(color.FgRed).SprintFunc()
colorBold = color.New(color.Bold).SprintFunc()
colorBlue = color.New(color.FgBlue).SprintfFunc()
colorGreen = color.New(color.FgGreen).SprintfFunc()
)
// Parse command arguments and set global variables accordingly
func setGlobalsFromContext(c *cli.Context) {
// Set config dir
switch {
case c.IsSet("config-dir"):
globalConfigDir = c.String("config-dir")
case c.GlobalIsSet("config-dir"):
globalConfigDir = c.GlobalString("config-dir")
}
if globalConfigDir == "" {
console.Fatalf("Unable to get config file. Config directory is empty.")
}
// Set global quiet flag.
globalQuiet = c.Bool("quiet") || c.GlobalBool("quiet")
}


@@ -22,7 +22,6 @@ import (
"mime/multipart"
"net/http"
"strings"
"time"
)
// Validates location constraint in PutBucket request body.
@@ -33,34 +32,28 @@ func isValidLocationConstraint(r *http.Request) (s3Error APIErrorCode) {
// If the request has no body with content-length set to 0,
// we do not have to validate location constraint. Bucket will
// be created at default region.
if r.ContentLength == 0 {
return ErrNone
}
locationConstraint := createBucketLocationConfiguration{}
if err := xmlDecoder(r.Body, &locationConstraint, r.ContentLength); err != nil {
if err == io.EOF && r.ContentLength == -1 {
// EOF is a valid condition here when ContentLength is -1.
return ErrNone
err := xmlDecoder(r.Body, &locationConstraint, r.ContentLength)
if err == nil || err == io.EOF {
// Successfully decoded, proceed to verify the region.
// Once region has been obtained we proceed to verify it.
incomingRegion := locationConstraint.Location
if incomingRegion == "" {
// Location constraint is empty for region "us-east-1",
// in accordance with protocol.
incomingRegion = "us-east-1"
}
errorIf(err, "Unable to xml decode location constraint")
// Treat all other failures as XML parsing errors.
return ErrMalformedXML
} // Successfully decoded, proceed to verify the region.
// Once region has been obtained we proceed to verify it.
incomingRegion := locationConstraint.Location
if incomingRegion == "" {
// Location constraint is empty for region "us-east-1",
// in accordance with protocol.
incomingRegion = "us-east-1"
// Return errInvalidRegion if location constraint does not match
// with configured region.
s3Error = ErrNone
if serverRegion != incomingRegion {
s3Error = ErrInvalidRegion
}
return s3Error
}
// Return errInvalidRegion if location constraint does not match
// with configured region.
s3Error = ErrNone
if serverRegion != incomingRegion {
s3Error = ErrInvalidRegion
}
return s3Error
errorIf(err, "Unable to xml decode location constraint")
// Treat all other failures as XML parsing errors.
return ErrMalformedXML
}
// Supported headers that need to be extracted.
@@ -121,7 +114,7 @@ func extractPostPolicyFormValues(reader *multipart.Reader) (filePart io.Reader,
}
formValues[canonicalFormName] = string(buffer)
} else {
filePart = io.LimitReader(part, maxObjectSize)
filePart = part
fileName = part.FileName()
// As described in S3 spec, we expect file to be the last form field
break
@@ -130,16 +123,3 @@ func extractPostPolicyFormValues(reader *multipart.Reader) (filePart io.Reader,
}
return filePart, fileName, formValues, nil
}
// Send whitespace character, once every 5secs, until CompleteMultipartUpload is done.
// CompleteMultipartUpload method of the object layer indicates that it's done via doneCh
func sendWhiteSpaceChars(w http.ResponseWriter, doneCh <-chan struct{}) {
for {
select {
case <-time.After(5 * time.Second):
w.Write([]byte(" "))
case <-doneCh:
return
}
}
}


@@ -33,6 +33,15 @@ func TestIsValidLocationContraint(t *testing.T) {
}
defer removeAll(path)
// Test with corrupted XML
malformedReq := &http.Request{
Body: ioutil.NopCloser(bytes.NewBuffer([]byte("<>"))),
ContentLength: int64(len("<>")),
}
if err := isValidLocationConstraint(malformedReq); err != ErrMalformedXML {
t.Fatal("Unexpected error: ", err)
}
// generates the input request with XML bucket configuration set to the request body.
createExpectedRequest := func(req *http.Request, location string) (*http.Request, error) {
createBucketConfig := createBucketLocationConfiguration{}
@@ -96,6 +105,16 @@ func TestExtractMetadataHeaders(t *testing.T) {
},
metadata: map[string]string{},
},
// Validate if there are no keys to extract.
{
header: http.Header{
"X-Amz-Meta-Appid": []string{"amz-meta"},
"X-Minio-Meta-Appid": []string{"minio-meta"},
},
metadata: map[string]string{
"X-Amz-Meta-Appid": "amz-meta",
"X-Minio-Meta-Appid": "minio-meta"},
},
}
// Validate extraction of the headers.

cmd/hasher.go (new file)

@@ -0,0 +1,48 @@
/*
* Minio Cloud Storage, (C) 2016 Minio, Inc.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package cmd
import (
"crypto/md5"
"crypto/sha256"
"encoding/base64"
"encoding/hex"
)
// getSHA256Hash returns SHA-256 hash of given data.
func getSHA256Hash(data []byte) string {
hash := sha256.New()
hash.Write(data)
return hex.EncodeToString(hash.Sum(nil))
}
// getMD5Sum returns MD5 sum of given data.
func getMD5Sum(data []byte) []byte {
hash := md5.New()
hash.Write(data)
return hash.Sum(nil)
}
// getMD5Hash returns MD5 hash of given data.
func getMD5Hash(data []byte) string {
return hex.EncodeToString(getMD5Sum(data))
}
// getMD5HashBase64 returns MD5 hash in base64 encoding of given data.
func getMD5HashBase64(data []byte) string {
return base64.StdEncoding.EncodeToString(getMD5Sum(data))
}
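A short usage sketch of the helpers above; the input value is arbitrary, and printing assumes the fmt package is imported.

// exampleHashes demonstrates the hash helpers on an arbitrary input.
func exampleHashes() {
	data := []byte("hello, world")
	fmt.Println(getSHA256Hash(data))    // hex-encoded SHA-256 digest
	fmt.Println(getMD5Hash(data))       // hex-encoded MD5 digest
	fmt.Println(getMD5HashBase64(data)) // base64 MD5, the Content-MD5 style encoding
}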


@@ -1,42 +0,0 @@
/*
* Minio Cloud Storage, (C) 2016 Minio, Inc.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package cmd
import (
"net"
"sort"
)
// byLastOctet implements sort.Interface used in sorting a list
// of ip address by their last octet value.
type byLastOctet []net.IP
func (n byLastOctet) Len() int { return len(n) }
func (n byLastOctet) Swap(i, j int) { n[i], n[j] = n[j], n[i] }
func (n byLastOctet) Less(i, j int) bool {
return []byte(n[i].To4())[3] < []byte(n[j].To4())[3]
}
// getIPsFromHosts - returns a reverse sorted list of ips based on the last octet value.
func getIPsFromHosts(hosts []string) (ips []net.IP) {
for _, host := range hosts {
ips = append(ips, net.ParseIP(host))
}
// Reverse sort ips by their last octet.
sort.Sort(sort.Reverse(byLastOctet(ips)))
return ips
}


@@ -1,61 +0,0 @@
/*
* Minio Cloud Storage, (C) 2016 Minio, Inc.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package cmd
import "testing"
// Tests sorted list generated from hosts to ip.
func TestHostToIP(t *testing.T) {
// Collection of test cases to validate last octet sorting.
testCases := []struct {
hosts []string
sortedHosts []string
}{
{
// List of ip addresses that need to be sorted.
[]string{
"129.95.30.40",
"5.24.69.2",
"19.20.203.5",
"1.2.3.4",
"127.0.0.1",
"19.20.21.22",
"5.220.100.50",
},
// Numerical sorting result based on the last octet.
[]string{
"5.220.100.50",
"129.95.30.40",
"19.20.21.22",
"19.20.203.5",
"1.2.3.4",
"5.24.69.2",
"127.0.0.1",
},
},
}
// Tests the correct sorting behavior of getIPsFromHosts.
for j, testCase := range testCases {
ips := getIPsFromHosts(testCase.hosts)
for i, ip := range ips {
if ip.String() != testCase.sortedHosts[i] {
t.Fatalf("Test %d expected to pass but failed. Wanted ip %s, but got %s", j+1, testCase.sortedHosts[i], ip.String())
}
}
}
}

cmd/humanized-duration.go (new file)

@@ -0,0 +1,88 @@
/*
* Minio Cloud Storage, (C) 2016 Minio, Inc.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package cmd
import (
"fmt"
"math"
"time"
)
// humanizedDuration container to capture humanized time.
type humanizedDuration struct {
Days int64 `json:"days,omitempty"`
Hours int64 `json:"hours,omitempty"`
Minutes int64 `json:"minutes,omitempty"`
Seconds int64 `json:"seconds,omitempty"`
}
// StringShort() humanizes humanizedDuration to a human-readable short format.
// This does not print seconds.
func (r humanizedDuration) StringShort() string {
if r.Days == 0 && r.Hours == 0 {
return fmt.Sprintf("%d minutes", r.Minutes)
}
if r.Days == 0 {
return fmt.Sprintf("%d hours %d minutes", r.Hours, r.Minutes)
}
return fmt.Sprintf("%d days %d hours %d minutes", r.Days, r.Hours, r.Minutes)
}
// String() humanizes humanizedDuration to a human-readable format.
func (r humanizedDuration) String() string {
if r.Days == 0 && r.Hours == 0 && r.Minutes == 0 {
return fmt.Sprintf("%d seconds", r.Seconds)
}
if r.Days == 0 && r.Hours == 0 {
return fmt.Sprintf("%d minutes %d seconds", r.Minutes, r.Seconds)
}
if r.Days == 0 {
return fmt.Sprintf("%d hours %d minutes %d seconds", r.Hours, r.Minutes, r.Seconds)
}
return fmt.Sprintf("%d days %d hours %d minutes %d seconds", r.Days, r.Hours, r.Minutes, r.Seconds)
}
// timeDurationToHumanizedDuration converts a golang time.Duration to a custom, more readable humanizedDuration.
func timeDurationToHumanizedDuration(duration time.Duration) humanizedDuration {
r := humanizedDuration{}
if duration.Seconds() < 60.0 {
r.Seconds = int64(duration.Seconds())
return r
}
if duration.Minutes() < 60.0 {
remainingSeconds := math.Mod(duration.Seconds(), 60)
r.Seconds = int64(remainingSeconds)
r.Minutes = int64(duration.Minutes())
return r
}
if duration.Hours() < 24.0 {
remainingMinutes := math.Mod(duration.Minutes(), 60)
remainingSeconds := math.Mod(duration.Seconds(), 60)
r.Seconds = int64(remainingSeconds)
r.Minutes = int64(remainingMinutes)
r.Hours = int64(duration.Hours())
return r
}
remainingHours := math.Mod(duration.Hours(), 24)
remainingMinutes := math.Mod(duration.Minutes(), 60)
remainingSeconds := math.Mod(duration.Seconds(), 60)
r.Hours = int64(remainingHours)
r.Minutes = int64(remainingMinutes)
r.Seconds = int64(remainingSeconds)
r.Days = int64(duration.Hours() / 24)
return r
}
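As a worked example, the 90487000000000ns (90487s) duration used in the unit test below decomposes as 90487 = 86400 + 3600 + 8*60 + 7, i.e. 1 day, 1 hour, 8 minutes and 7 seconds.

// exampleHumanized shows the conversion on the test duration; fmt is
// assumed to be imported.
func exampleHumanized() {
	d := time.Duration(90487000000000) // 90487 seconds
	fmt.Println(timeDurationToHumanizedDuration(d).String())
	// Output: 1 days 1 hours 8 minutes 7 seconds
}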


@@ -0,0 +1,76 @@
/*
* Minio Cloud Storage, (C) 2016 Minio, Inc.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package cmd
import (
"strings"
"testing"
"time"
)
// Test humanized duration.
func TestHumanizedDuration(t *testing.T) {
duration := time.Duration(90487000000000)
humanDuration := timeDurationToHumanizedDuration(duration)
if !strings.HasSuffix(humanDuration.String(), "seconds") {
t.Fatal("Stringer method for humanized duration should have seconds.", humanDuration.String())
}
if strings.HasSuffix(humanDuration.StringShort(), "seconds") {
t.Fatal("StringShorter method for humanized duration should not have seconds.", humanDuration.StringShort())
}
// Test humanized duration for seconds.
humanSecDuration := timeDurationToHumanizedDuration(time.Duration(5 * time.Second))
expectedHumanSecDuration := humanizedDuration{
Seconds: 5,
}
if humanSecDuration != expectedHumanSecDuration {
t.Fatalf("Expected %#v, got %#v incorrect conversion of duration to humanized form",
expectedHumanSecDuration, humanSecDuration)
}
if strings.HasSuffix(humanSecDuration.String(), "days") ||
strings.HasSuffix(humanSecDuration.String(), "hours") ||
strings.HasSuffix(humanSecDuration.String(), "minutes") {
t.Fatal("Stringer method for humanized duration should have only seconds.", humanSecDuration.String())
}
// Test humanized duration for minutes.
humanMinDuration := timeDurationToHumanizedDuration(10 * time.Minute)
expectedHumanMinDuration := humanizedDuration{
Minutes: 10,
}
if humanMinDuration != expectedHumanMinDuration {
t.Fatalf("Expected %#v, got %#v incorrect conversion of duration to humanized form",
expectedHumanMinDuration, humanMinDuration)
}
if strings.HasSuffix(humanMinDuration.String(), "hours") {
t.Fatal("Stringer method for humanized duration should have only minutes.", humanMinDuration.String())
}
// Test humanized duration for hours.
humanHourDuration := timeDurationToHumanizedDuration(10 * time.Hour)
expectedHumanHourDuration := humanizedDuration{
Hours: 10,
}
if humanHourDuration != expectedHumanHourDuration {
t.Fatalf("Expected %#v, got %#v incorrect conversion of duration to humanized form",
expectedHumanHourDuration, humanHourDuration)
}
if strings.HasSuffix(humanHourDuration.String(), "days") {
t.Fatal("Stringer method for humanized duration should have hours.", humanHourDuration.String())
}
}

cmd/interface-ips.go (new file)

@@ -0,0 +1,69 @@
/*
* Minio Cloud Storage, (C) 2016 Minio, Inc.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package cmd
import (
"fmt"
"net"
"sort"
)
// byLastOctetValue implements sort.Interface used in sorting a list
// of ip address by their last octet value.
type byLastOctetValue []net.IP
func (n byLastOctetValue) Len() int { return len(n) }
func (n byLastOctetValue) Swap(i, j int) { n[i], n[j] = n[j], n[i] }
func (n byLastOctetValue) Less(i, j int) bool {
return []byte(n[i].To4())[3] < []byte(n[j].To4())[3]
}
// getInterfaceIPv4s is synonymous with net.InterfaceAddrs(), but
// returns only the IPv4 net.IP representation of each net.Addr.
// Additionally the returned list is sorted by their last
// octet value.
//
// [The logic to sort by last octet is implemented to
// prefer CIDRs with higher octets; this in turn keeps the
// localhost/loopback address from being preferred as the
// first IP on the list. This list then helps us print
// a user-friendly message with appropriate values.]
func getInterfaceIPv4s() ([]net.IP, error) {
addrs, err := net.InterfaceAddrs()
if err != nil {
return nil, fmt.Errorf("Unable to determine network interface address. %s", err)
}
// Go through each return network address and collate IPv4 addresses.
var nips []net.IP
for _, addr := range addrs {
if addr.Network() == "ip+net" {
var nip net.IP
// Attempt to parse the addr through CIDR.
nip, _, err = net.ParseCIDR(addr.String())
if err != nil {
return nil, fmt.Errorf("Unable to parse addrss %s, error %s", addr, err)
}
// Collect only IPv4 addrs.
if nip.To4() != nil {
nips = append(nips, nip)
}
}
}
// Sort the list of IPs by their last octet value.
sort.Sort(sort.Reverse(byLastOctetValue(nips)))
return nips, nil
}
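A usage sketch showing what the reverse sort buys: the first entry has the highest last octet, which pushes the loopback address toward the end of the list and yields a friendlier default endpoint. exampleEndpoint is illustrative, not part of this diff.

// exampleEndpoint builds a printable endpoint from the preferred
// (first) IPv4 address returned by getInterfaceIPv4s.
func exampleEndpoint(port string) (string, error) {
	ips, err := getInterfaceIPv4s()
	if err != nil || len(ips) == 0 {
		return "", err
	}
	return "http://" + ips[0].String() + ":" + port, nil
}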

Some files were not shown because too many files have changed in this diff Show More