Compare commits

...

367 Commits

Author SHA1 Message Date
Nitish Tiwari
4a4d1d1b82 Add Minio TLS configuration doc for Kubernetes deployment (#5617) 2018-03-12 14:22:23 -07:00
Harshavardhana
29ef7d29e4 Fix deadlock in in-place CopyObject decryption/encryption (#5637)
In-place decryption/encryption already holds write
locks on the object; attempting to also acquire a
read lock would deadlock.
2018-03-12 13:52:38 -07:00
Nitish Tiwari
574b667c56 Remove madmin docs from top level docs directory (#5636)
The madmin package is well documented in its source directory here:
https://github.com/minio/minio/tree/master/pkg/madmin.

Hence, keeping another copy is not required, as it makes the docs
difficult to maintain.
2018-03-12 11:51:58 -07:00
Nitish Tiwari
10b01ac836 Add healthcheck endpoints (#5543)
This PR adds readiness and liveness endpoints to probe a Minio server
instance's health. The endpoints can be accessed without authentication,
and the paths are /minio/health/live and /minio/health/ready for
liveness and readiness respectively.

The new healthcheck liveness endpoint is used for Docker healthcheck
now.
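
A minimal sketch of probing the new liveness endpoint (the
localhost:9000 address is illustrative):

```
package main

import (
	"fmt"
	"net/http"
)

func main() {
	// Probe the liveness endpoint; any 2xx status means the
	// Minio server process is alive.
	resp, err := http.Get("http://localhost:9000/minio/health/live")
	if err != nil {
		fmt.Println("server unreachable:", err)
		return
	}
	defer resp.Body.Close()
	fmt.Println("liveness status:", resp.StatusCode)
}
```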

Fixes #5357
Fixes #5514
2018-03-12 11:46:53 +05:30
Harshavardhana
d90985b6d8 Return authHeaderMalformed for an incorrect region in signature (#5618) 2018-03-09 18:18:57 -08:00
Harshavardhana
7aaf01eb74 Save ETag when updating metadata (#5626)
Fixes #5622
2018-03-09 10:50:39 -08:00
Nitish Tiwari
ba0c7544ea Cleanup orchestration documents (#5623)
- Remove hostPort from Kubernetes deployment example docs. Initially
hostPort was added to ensure Minio pods are allocated to separate
machines, but per the latest Kubernetes documentation this is not
a recommended approach
(ref: https://kubernetes.io/docs/concepts/configuration/overview/#services).
To define pod allocations, the Affinity and Anti-Affinity concepts
are the recommended approach.
(ref: https://kubernetes.io/docs/concepts/configuration/assign-pod-node)

- Add Minio release tag to Docker-Compose example file.
2018-03-09 15:21:41 +05:30
kannappanr
380e0ddb57 Remove unwanted errorIf calls (#5621)
Remove the errorIf call with an errSignatureMismatch error
2018-03-09 00:51:05 -08:00
Andreas Auernhammer
889dd387f1 [doc] fix openssl command for ECDSA key generation (#5616)
This change fixes the command for generating ECDSA private keys.
The current command produces private key files which cannot be parsed
by the server.

Fixes #5614
2018-03-08 15:06:42 -08:00
Dee Koder
973ff2fabd Fix mqtt example py code which was not working (#5619) 2018-03-08 16:46:52 +05:30
poornas
247c1bb5ef Pass location in MakeBucketWithLocation call (#5605)
fixes regression for gateways
2018-03-08 12:56:20 +05:30
Harshavardhana
27258b9c54 Ensure to load only regular files for CAs (#5612)
In Kubernetes StatefulSet-like environments, when secrets
are mounted to pods they contain sub-directories; we should
ideally be looking only for regular files here and skip
all others.
2018-03-07 22:16:28 +05:30
Harshavardhana
b325593b47 SSE-C CopyObject key-rotation doesn't need metadata REPLACE value (#5611)
Fix a compatibility issue with AWS S3, where to do key rotation
we need to replace an existing object's metadata. In such a
scenario the "REPLACE" metadata directive is not necessary.
2018-03-06 16:04:48 -08:00
Dee Koder
a34901af77 Add missing wget command for easy cut copy paste. (#5609)
We should keep this for easy one-click copy-paste on remote servers,
rather than giving users two separate things to deal with.
2018-03-06 13:36:48 +05:30
Anis Elleuch
cac10bcbf7 SSE-C: Add support in Bucket Post Policy (#5607)
* SSE-C: Add support in Bucket Post Policy

* Rename isSSECustomerRequest & isSSECopyCustomerRequest to hasSSECustomerHeader & hasSSECopyCustomerHeader
2018-03-05 08:02:56 -08:00
Aditya Manthramurthy
ea8973b7d7 Return bit-rot verified data instead of re-reading from disk (#5568)
- Data from disk was being read after bitrot verification to return
  data for GetObject. Strictly speaking this does not guarantee bitrot
  protection, as disks may return bad data even temporarily.

- This fix reads data from disk, verifies data for bitrot and then
  returns data to the client directly.
2018-03-04 14:16:45 -08:00
Harshavardhana
52eea7b9c1 Support SSE-C multipart source objects in CopyObject (#5603)
The current code didn't implement the logic to support
decrypting multiple encrypted parts; this PR fixes that
by supporting copying of encrypted multipart objects.
2018-03-02 17:24:02 -08:00
Harshavardhana
e4f6877c8b Handle incoming proxy requests ip, scheme (#5591)
This PR implements functions to get the right IP and scheme
from incoming proxied requests.
2018-03-02 15:23:04 -08:00
Harshavardhana
d71b1d25f8 Make sure to filter out internal metadata (#5601)
Currently we send `X-Minio-Internal` values
back to the client for an encrypted object; we should
filter these out and only return AWS-compatible headers.
2018-03-01 16:15:53 -08:00
Anis Elleuch
5f37988db5 Use toAPIErrorCode in HeadObject handler when decrypting request fails (#5600) 2018-03-01 16:01:56 -08:00
Andreas Auernhammer
4e5237b02a vendor update github.com/minio/sio (#5599)
This change updates the sio library and adds DARE 2.0 support
to the server.
2018-03-01 15:28:34 -08:00
Harshavardhana
1b7b8f14c9 Set appropriate encryption headers in HEAD object response (#5596)
Currently we don't set two SSE-C specific headers; fix this
for AWS S3 compatibility.
2018-03-01 14:16:40 -08:00
Anis Elleuch
120b061966 Add multipart support in SSE-C encryption (#5576)
*) Add Put/Get support of multipart in encryption
*) Add GET Range support for encryption
*) Add CopyPart encrypted support
*) Support decrypting of large single PUT object
2018-03-01 11:37:57 -08:00
kannappanr
d32f90fe95 Honor global flags irrespective of the position. (#5486)
Flags like `json, config-dir, quiet` are now honored even if they are
between minio and gateway in the cli, like, `minio --json gateway s3`.

Fixes #5403
2018-02-28 20:13:33 -08:00
Harshavardhana
6faa1ef11a Fix shadowing issue reported by go vet (#5590) 2018-02-28 14:30:00 -08:00
Harshavardhana
9af254a82f Remove stable sort usage when not needed (#5586)
Stable sort is needed when we are sorting based on two or more
distinct keys. When equal elements are indistinguishable,
such as with integers, or more generally any data where the
entire element is the key, like `PartNumber`, stability is not
an issue.
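
For example, sorting parts by `PartNumber` needs no stability
(a sketch):

```
package main

import (
	"fmt"
	"sort"
)

type part struct{ PartNumber int }

func main() {
	parts := []part{{3}, {1}, {2}}
	// PartNumber is the entire key, so equal elements are
	// indistinguishable and sort.Slice suffices; sort.SliceStable
	// would only add overhead here.
	sort.Slice(parts, func(i, j int) bool {
		return parts[i].PartNumber < parts[j].PartNumber
	})
	fmt.Println(parts) // [{1} {2} {3}]
}
```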
2018-02-28 08:33:00 +05:30
Harshavardhana
6b3db7556a Fix gofmt issues reported for simplification (#5581)
Added a Travis check to catch this issue.
2018-02-26 23:52:03 +05:30
Harshavardhana
5204a754db Move rpc version to 2.0.0 to align with backend migration (#5575)
Fixes #5574
2018-02-23 16:58:37 -08:00
Harshavardhana
7cc678c653 Support encryption for CopyObject, GET-Range requests (#5544)
- Implement CopyObject encryption support
- Handle Range GETs for encrypted objects

Fixes #5193
2018-02-23 15:07:21 -08:00
poornas
b7536570f8 update docs for NAS gateway (#5569) 2018-02-22 09:15:19 +05:30
Harshavardhana
e09d97abaf Fix docs in admin API (#5559) 2018-02-21 12:00:46 -08:00
Krishnan Parthasarathi
e5e3d17216 Do not close *lock.LockedFile on failure (#5565) 2018-02-21 11:28:24 -08:00
Harshavardhana
0ea54c9858 Change CopyObject{Part} to single srcInfo argument (#5553)
Refactor such that metadata and etag are
combined into a single argument `srcInfo`.

This is a precursor change for #5544 making
it easier for us to provide encryption/decryption
functions.
2018-02-21 14:18:47 +05:30
Krishna Srinivas
a00e052606 Provide more descriptive error during erasure init (#5282)
fixes #5239
2018-02-20 18:42:09 -08:00
Anis Elleuch
d2d49f6c6c xl: Avoid removing directory content in Delete API (#5548)
The Delete & Multi Delete APIs should not try to remove directory content.
The only permitted case is a zero-size object with a trailing slash
in its name.
2018-02-20 15:33:26 -08:00
Harshavardhana
db9e83de62 Avoid significant connections in TIME_WAIT (#5555)
MaxIdleConns limits the total number of connections
kept in the pool for re-use. In addition, MaxIdleConnsPerHost
limits the number for a single host. Since minio gateways
usually connect to the same host, setting `MaxIdleConns = 100`
won't really have much of an impact, since the per-host idle
connection pool is limited to 2 by default.

Now, with the per-host pool limited to 2, when using the client
heavily from 2+ goroutines, the `http.Transport` will open a
connection, use it, then try to return it to the idle pool, which
often fails since there's a limit of 2. So it closes the connection,
and new ones are opened on demand again, many of which get closed
soon after being used. Since those connections/sockets don't
disappear from the OS immediately, use `MaxIdleConnsPerHost = 100`,
which fixes this problem.
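
A sketch of this transport tuning (field values as described
above; the idle timeout is illustrative):

```
package main

import (
	"net/http"
	"time"
)

func newGatewayTransport() *http.Transport {
	return &http.Transport{
		// Without this, Go's default of 2 idle connections per
		// host causes heavy churn of sockets left in TIME_WAIT.
		MaxIdleConnsPerHost: 100,
		MaxIdleConns:        100,
		IdleConnTimeout:     90 * time.Second,
	}
}

func main() {
	client := &http.Client{Transport: newGatewayTransport()}
	_ = client
}
```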
2018-02-20 12:23:37 -08:00
poornas
25107c2e11 Add NAS gateway support (#5516) 2018-02-20 12:21:12 -08:00
Anis Elleuch
926e480156 posix.RenameFile(): Allow overwriting an empty directory (#5551)
Overwriting files is allowed, but since the introduction of
the object directory, we will also need to allow overwriting
an empty directory. Putting the same object directory twice
won't fail with a 403 error anymore.
2018-02-20 12:20:18 -08:00
Harshavardhana
b2b5056163 gateway/gcs: Remove unused storageEndpoint (#5556) 2018-02-20 15:07:31 +05:30
A. Elleuch
1e7e41fada tests: Fix failed notify webhook test (#5528)
TestNewWebHookNotify wasn't passing on my local machine. The reason is
that the test expects the POST handler (the webhook endpoint) to always
be running on port 80, which is not always the case.
2018-02-17 19:06:43 -08:00
Harshavardhana
03923947c4 Fix delete bucket policies properly (#5540)
There was a bug in the previous PR where deleteBucketMetadata()
was never called; fix it correctly.
2018-02-16 20:16:48 -08:00
Harshavardhana
d12bdd50ee Rename minio-limitations.md to minio-limits.md (#5541) 2018-02-16 09:35:02 +05:30
Harshavardhana
fb96779a8a Add large bucket support for erasure coded backend (#5160)
This PR implements an object layer which
combines input erasure sets of XL layers
into a unified namespace.

This object layer extends the existing
erasure coded implementation. It is assumed
in this design that providing > 16 disks is
a static configuration as well, i.e. if you started
the setup with 32 disks as 4 sets of 8 disks per
pack, then you would always need to provide 4 sets.

Some design details and restrictions:

- Objects are distributed using consistent ordering
  to a unique erasure coded layer.
- Each pack has its own dsync, so locks are synchronized
  properly at the pack (erasure) layer.
- Each pack still has a maximum requirement of
  16 disks; you can start with multiple
  such sets statically.
- Sets of disks are static and cannot be
  changed; there is no elastic expansion allowed.
- Sets of disks are static and cannot be
  changed; there is no elastic removal allowed.
- ListObjects() across sets can be noticeably
  slower since List happens on all servers,
  and is merged at this sets layer.

Fixes #5465
Fixes #5464
Fixes #5461
Fixes #5460
Fixes #5459
Fixes #5458
Fixes #5488
Fixes #5489
Fixes #5497
Fixes #5496
2018-02-15 17:45:57 -08:00
Harshavardhana
dd80256151 Directory HEADs with encryption headers shouldn't return errors (#5539)
Since we do not encrypt directories we don't need to send
errors with encryption headers when the directory doesn't
have encryption metadata.

Continuation PR from 4ca10479b5
2018-02-15 14:18:28 -08:00
Harshavardhana
22897de4c7 fail when endpoints point to same path locally (#5523) 2018-02-15 14:38:17 +05:30
Harshavardhana
e22438c8cd Cleanup banners and remove fossa/snap (#5530) 2018-02-15 09:45:42 +05:30
Harshavardhana
c59f1e3a80 revamp minio build messages (#5519) 2018-02-14 10:29:19 +05:30
Harshavardhana
994fe53669 Avoid shadowing ignored errors listAllBuckets() (#5524)
It can happen that one of the disks that was down returns
'errDiskNotFound', but the error is preserved due to
loop shadowing, which leads to issues when healing the bucket.
2018-02-13 17:03:50 -08:00
Andreas Auernhammer
4ca10479b5 [SSE-C]: avoid encrypting empty objects. (#5525)
This change adds an object size check such that the server does not
encrypt empty objects (typically folders) for SSE-C. The server still
returns SSE-C headers, but the object is not encrypted since there is
no point in encrypting such objects.

Fixes #5493
2018-02-13 15:43:46 -08:00
Harshavardhana
91101b11bb Converge repeated code to common deleteBucketMetadata() (#5508) 2018-02-12 18:34:30 -08:00
Harshavardhana
8de6cf4124 update dsync implementation to fix a regression (#5513)
Currently minio master requires 4 servers; we
have decided to run on a minimum of 2 servers
instead. This fixes a regression from previous
releases where 3-server setups were supported.
2018-02-12 15:16:12 +05:30
poornas
4f73fd9487 Unify gateway and object layer. (#5487)
* Unify gateway and object layer. Bring bucket policies into
object layer.
2018-02-09 15:19:30 -08:00
Minio Trusted
a7f6e14370 Update yaml files to latest version RELEASE.2018-02-09T22-40-05Z 2018-02-09 22:43:57 +00:00
dingjs
289457568c Translate #5365 to Chinese. (#5439) 2018-02-08 17:00:03 -08:00
Harshavardhana
fd3897d0c3 Move to go1.9.4 with recent security release (#5502) 2018-02-08 14:33:22 +05:30
Krishna Srinivas
047b7aff0c Seek to offset 0 after Truncate() (#5375) 2018-02-06 15:37:48 -08:00
Harshavardhana
1164fc60f3 Bring semantic versioning to provide for rolling upgrades (#5495)
This PR brings semver capabilities to our RPC layer to
ensure that we can upgrade the servers in rolling fashion
while keeping I/O in progress. This is only a framework change;
the functionality remains the same, and we do not have any
special API changes for now. But in the future, when we bring
in API changes, we will be able to upgrade servers without
downtime.

An additional change in this PR is to not abort when serverVersions
mismatch in a distributed cluster, but instead wait for the quorum
and treat the situation as if the server is down. This allows
the administrator to properly upgrade all the servers in the cluster.

Fixes #5393
2018-02-06 15:07:17 -08:00
kannappanr
48218272cc Document object name limitations on Windows (#5491)
Fixes #5161
2018-02-03 19:57:40 +05:30
Harshavardhana
0c880bb852 Deprecate and remove in-memory object caching (#5481)
In-memory caching cannot be cleanly implemented
without access to the GC, which Go doesn't naturally
provide. At times we have seen that object caching
is more of a hindrance than a boon for
our use cases.

Removing it completely from our implementation;
  related to #5160 and #5182
2018-02-02 10:17:13 -08:00
Harshavardhana
1ebbc2ce88 Make sure to convert the disk errors to object errors (#5480)
Fixes a bug introduced in the directory support PR; with
this fix, s3fs works properly.
2018-02-02 14:04:15 +05:30
A. Elleuch
da2faa19a1 Reduce Minio access key minimum length to 3 (#5478)
This is a generic minimum value. The current reason is to support
Azure blob storage accounts whose name length is less than 5; 3 is the
minimum length for Azure.
2018-02-02 09:13:30 +05:30
Krishna Srinivas
2afd196c83 Quorum based listing for XL (#5475)
fixes #5380
2018-02-01 10:47:49 -08:00
Krishna Srinivas
b606ba3f81 fs.json file should be closed in CompleteMultipartUpload (#5482) 2018-02-01 15:27:12 +05:30
Harshavardhana
3316dbc037 simplify storage class validation (#5470)
Check if the storage class is set in a
non-XL setup instead of relying on the `globalEndpoints`
value. Also converge the checks for both SS
and RRS parity configuration.

This PR also removes the redundant `tt.name` in all
test cases, since each test case doesn't need to
be numbered explicitly; they are numbered implicitly.
2018-02-01 13:00:07 +05:30
Harshavardhana
033cfb5cef Remove stale code from minio server (#5479) 2018-01-31 18:28:28 -08:00
Krishna Srinivas
3b2486ebaf Lock free multipart backend implementation for FS (#5401) 2018-01-31 13:17:24 -08:00
Aditya Manthramurthy
018813b98f Fix configuration handling bugs: (#5473)
* Update the GetConfig admin API to use the latest version of
  configuration, along with fixes to the corresponding RPCs.
* Remove mutex inside the configuration struct, and inside
  notification struct.
* Use global config mutex where needed.
* Add `serverConfig.ConfigDiff()` that provides a more granular diff
  of what is different between two configurations.
2018-01-31 08:15:54 -08:00
ebozduman
e608e05cda Removes capitalization of error causes (#5468) 2018-01-30 21:42:15 -08:00
Harshavardhana
3ea28e9771 Support creating directories on erasure coded backend (#5443)
This PR continues from #5049 where we started supporting
directories for erasure coded backend
2018-01-30 08:13:13 +05:30
Krishna Srinivas
45c35b3544 Autocorrect user provided Azure endpoint (#5417)
fixes #5373
2018-01-29 10:30:08 -08:00
Andreas Auernhammer
09a9002f12 add documentation about PKCS-8 encrypted RSA keys (#5454)
This change adds documentation about PKCS-8 vs PKCS-1 pitfalls. It 
also provides a command to convert encrypted PKCS-8 RSA keys to 
encrypted PKCS-1 RSA keys.

Fixes #5453
2018-01-27 09:30:02 +05:30
Aditya Manthramurthy
5cdcc73bd5 Admin API auth and heal related fixes (#5445)
- Fetch region for auth from global state
- Fix SHA256 handling for empty body in heal API
2018-01-25 19:24:00 +05:30
poornas
2dd117f647 fix testcases to init nslock properly (#5429) 2018-01-24 09:04:09 +05:30
Harshavardhana
2d19663fef Update dockerfile go version to 1.9.2 (#5441) 2018-01-23 17:19:19 +05:30
Aditya Manthramurthy
254b05e314 Fix locking in some admin APIs: (#5438)
- read lock for get config
- write lock for update creds
- write lock for format file
2018-01-22 18:09:12 -08:00
Aditya Manthramurthy
a003de72bf Update madmin doc (fixes #5432) (#5433) 2018-01-22 16:10:43 -08:00
Aditya Manthramurthy
a337ea4d11 Move admin APIs to new path and add redesigned heal APIs (#5351)
- Changes related to moving admin APIs
   - admin APIs now have an endpoint under /minio/admin
   - admin APIs are now versioned - a new API to serve the version is
     added at "GET /minio/admin/version" and all API operations have the
     path prefix /minio/admin/v1/<operation>
   - new service stop API added
   - credentials change API is moved to /minio/admin/v1/config/credential
   - credentials change API and configuration get/set API now require TLS
     so that credentials are protected
   - all API requests now receive JSON
   - heal APIs are disabled as they will be changed substantially

- Heal API changes
   Heal API is now provided at a single endpoint with the ability for a
   client to start a heal sequence on all the data in the server, a
   single bucket, or under a prefix within a bucket.

   When a heal sequence is started, the server returns a unique token
   that needs to be used for subsequent 'status' requests to fetch heal
   results.

   On each status request from the client, the server returns heal result
   records that it has accumulated since the previous status request. The
   server accumulates up to 1000 records and pauses healing further
   objects until the client requests status. If the client does not
   request any further records for a long time, the server aborts the
   heal sequence automatically.

   A heal result record is returned for each entity healed on the server,
   such as system metadata, object metadata, buckets and objects, and has
   information about the before and after states on each disk.

   A client may request to force restart a heal sequence - this causes
   the running heal sequence to be aborted at the next safe spot and
   starts a new heal sequence.
2018-01-22 14:54:55 -08:00
Harshavardhana
f3f09ed14e Fix a bug in dsync initialization and communication (#5428)
In the current implementation we used as many dsync clients
as endpoints (along with path), which is not the expected
implementation. The implementation of Dsync was expected to
be for the endpoint Host alone, such that if you have 4 servers,
each with 4 disks, we need only 4 dsync clients and 4 dsync
servers. But we currently had 8 clients and servers, which is
unexpected and should be avoided.

This PR brings the implementation back to its original
intention. This issue was found in #5160.
2018-01-22 10:25:10 -08:00
Harshavardhana
bb73c84b10 Add notification structure link (#5426)
Fixes #4545
2018-01-20 09:23:09 +05:30
Harshavardhana
e19eddd759 Remove requirement for custom RPCClient (#5405)
This change is a simplification over the existing
code, since a separate RPCClient structure is not
required; keeping authRPCClient can do the same job.

There is no code which directly uses netRPCClient();
keeping authRPCClient is better and simpler. This
simplification also allows for removal of multiple
levels of locking code per object.

Observed in #5160
2018-01-19 16:38:47 -08:00
Andreas Auernhammer
7f99cc9768 add HighwayHash256 support (#5359)
This change adds the HighwayHash256 PRF as a bitrot protection / detection
algorithm. Since HighwayHash256 requires a 256-bit key, we generate the
key from the first 100 decimals of π - see nothing-up-my-sleeve numbers.
This key is fixed forever and tied to the HighwayHash256 bitrot algorithm.
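
A sketch of computing such a keyed bitrot checksum with the
minio/highwayhash package (the all-zero key below is an
illustrative placeholder, not the π-derived key):

```
package main

import (
	"encoding/hex"
	"fmt"

	"github.com/minio/highwayhash"
)

func main() {
	// HighwayHash256 requires a fixed 256-bit (32-byte) key.
	key := make([]byte, 32) // illustrative all-zero key
	h, err := highwayhash.New(key)
	if err != nil {
		panic(err)
	}
	h.Write([]byte("block of object data"))
	fmt.Println(hex.EncodeToString(h.Sum(nil)))
}
```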

Fixes #5358
2018-01-19 10:18:21 -08:00
Aditya Manthramurthy
2760409656 Remove dead code and associated dead code warning (#5424) 2018-01-19 10:16:21 -08:00
fossabot
1f13235cbd Add license scan report and status (#5430) 2018-01-19 13:16:59 +05:30
poornas
dd5a3289dd fix: listobjects return empty response for invalid prefix/marker (#5425)
Currently minio server returns a NotImplemented error when the marker
does not share the prefix. Instead, return an empty ListObjectsResponse.
2018-01-18 14:39:39 -08:00
Harshavardhana
b6e4f053a3 Fix lock rpc server maintenance loop go-routine leak (#5423)
The problem was that after the globalServiceDoneCh receives a
message, we cleanly stop the ticker as expected, but the
go-routine running the `select` loop is never returned from.
The stage at which this may occur, i.e. while the server is
being restarted, doesn't seriously affect server usage, but any
build-up like this on the server has consequences as new
functionality comes in the future.
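
The general shape of the fix is to return from the goroutine
rather than only stopping the ticker (a sketch, not the exact
server code):

```
package main

import "time"

func startMaintenanceLoop(done <-chan struct{}) {
	go func() {
		ticker := time.NewTicker(time.Minute)
		defer ticker.Stop()
		for {
			select {
			case <-ticker.C:
				// periodic lock maintenance work here
			case <-done:
				// Stopping the ticker alone would leak this
				// goroutine; we must also return from the loop.
				return
			}
		}
	}()
}

func main() {
	done := make(chan struct{})
	startMaintenanceLoop(done)
	close(done)
	time.Sleep(10 * time.Millisecond)
}
```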
2018-01-18 14:39:24 -08:00
Minio Trusted
1c3f55ff64 Update yaml files to latest version RELEASE.2018-01-18T20-33-21Z 2018-01-18 20:41:53 +00:00
Nitish Tiwari
e2d5a87b26 Fix free and total space reported in startup banner (#5419)
With storage class support, the free and total space
reported in Minio XL startup banner should be based on
totalDisks - standardClassParityDisks, instead of totalDisks/2.

fixes #5416
2018-01-17 11:25:51 -08:00
Andreas Auernhammer
d0a43af616 replace all "crypto/sha256" with "github.com/minio/sha256-simd" (#5391)
This change replaces all imports of "crypto/sha256" with
"github.com/minio/sha256-simd". The sha256-simd package
is faster on ARM64 (NEON instructions) and can take advantage
of AVX-512 in certain scenarios.
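
The replacement is a drop-in import swap, since sha256-simd
mirrors the standard library API (sketch):

```
package main

import (
	"encoding/hex"
	"fmt"

	// Previously: "crypto/sha256"
	sha256 "github.com/minio/sha256-simd"
)

func main() {
	// Same Sum256/New API as crypto/sha256, accelerated on
	// ARM64 (NEON) and AVX-512 capable CPUs.
	sum := sha256.Sum256([]byte("hello"))
	fmt.Println(hex.EncodeToString(sum[:]))
}
```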

Fixes #5374
2018-01-17 10:54:31 -08:00
Paul Stack
a020a70484 gateway/manta: Bump manta dependencies (#5414)
Internally, triton-go, which manta minio is built on, changed its internal
error handling. This means we no longer need to unwrap specific error types.

This doesn't change any manta minio functionality - it just changes how errors
are handled internally, and adds a wrapper for a 404 error.
2018-01-17 10:38:39 -08:00
Andreas Auernhammer
3f09c17bfe fix authentication bypass against Admin-API (#5412)
This change fixes an authentication bypass attack against the
minio Admin-API. The Admin-API now rejects all types of
requests except valid signature V2 and signature V4 requests; this
includes signature V2/V4 pre-signed requests.

Fixes #5411
2018-01-17 10:36:25 -08:00
ebozduman
24d9d7e5fa Removes logrus package and refactors logging messages (#5293)
This fix removes the logrus package dependency and refactors console
logging as the only logging mechanism by removing file logging support.
It rearranges the log message format and adds stack trace information
whenever trace information is not available in the error structure.
It also adds `--json` flag support for server logging.
When minio server is started with `--json` flag, all log messages are
displayed in json format, with no start-up and informational log
messages.
Fixes #5265 #5220 #5197
2018-01-17 07:24:46 -08:00
kannappanr
85580fe0d6 Document pre-existing data in fs mode (#5365)
Document that pre-existing data in fs mode is safe and 
accessible

Fixes #5050
2018-01-17 20:44:31 +05:30
Nitish Tiwari
8a1dc10c60 Update storage class related documents (#5399)
- Add storage usage info in storage class doc
- Update distributed & erasure code doc with info on storage class
2018-01-17 14:52:42 +05:30
Krishnan Parthasarathi
17301fe45d Don't delete lock ops entry during state change (#5388)
The lock ops entry is removed in deleteLockEntryForOps; it shouldn't be removed
in the status*To* functions.
2018-01-16 12:00:12 -08:00
Aditya Manthramurthy
aa7e5c71e9 Remove upload healing related dead code (#5404) 2018-01-15 18:20:39 -08:00
Kaan Kabalak
78a641fc6a Fix multi-file dropzone upload issue causing bucket listing duplication (#5392)
This commit fixes the order of the functions inside the selectPrefix
function, because, as multiple files were being uploaded, the
resetObjects function (which clears the object list) ran repeatedly
for each of these objects, right before the appendObjects function
(which appends the objects being uploaded to the current list of
objects) also ran for all of these objects. This caused every object
in the bucket to be repeated in the list once for each object dragged
into the dropzone.
2018-01-13 22:45:20 +05:30
Harshavardhana
12f67d47f1 Fix a possible race during PutObject() (#5376)
Any concurrent removeObject in progress
might have removed the parents of the same prefix
for which there is an ongoing putObject request.
An inconsistent situation may arise, as explained
below, even under sufficient locking.

PutObject is almost successful at the last stage when
a temporary file is renamed to its actual namespace
at `a/b/c/object1`. Concurrently a RemoveObject is
also in progress at the same prefix for an `a/b/c/object2`.

To create the object1 at location `a/b/c` PutObject has
to create all the parents recursively.

```
a/b/c - os.MkdirAll loops through has now created
        'a/' and 'b/' about to create 'c/'
a/b/c/object2 - at this point 'c/' and 'object2'
        are deleted about to delete b/
```

Now, for the os.MkdirAll loop, the expected situation is
that the top-level parent 'a/b/' it created still exists,
so that it can create 'c/'. Since removeObject and putObject
do not compete for a lock (they hold locks on different
resources), removeObject proceeds to delete the parent 'b/'
since 'c/' is not yet present; once it is deleted,
'os.MkdirAll' receives syscall.ENOENT, which fails the
putObject request.

This PR tries to address this issue by implementing
a safer/guarded approach where we retry operations
such as `os.MkdirAll` and `os.Rename` when they
observe syscall.ENOENT.
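
A minimal sketch of the guarded-retry idea (the helper name is
hypothetical):

```
package main

import (
	"errors"
	"os"
	"syscall"
)

// reliableMkdirAll retries once when a concurrent delete removes
// a freshly-created parent out from under os.MkdirAll, which
// surfaces as syscall.ENOENT.
func reliableMkdirAll(dirPath string, mode os.FileMode) error {
	err := os.MkdirAll(dirPath, mode)
	if errors.Is(err, syscall.ENOENT) {
		// Parent vanished mid-walk; retry the creation once.
		err = os.MkdirAll(dirPath, mode)
	}
	return err
}

func main() {
	_ = reliableMkdirAll("/tmp/a/b/c", 0o755)
}
```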

Fixes #5254
2018-01-13 22:43:02 +05:30
poornas
0bb6247056 Move nslocking from s3 layer to object layer (#5382)
Fixes #5350
2018-01-13 10:04:52 +05:30
Andreas Auernhammer
dd202a1a5f restrict TLS cipher suites of the server (#5245)
This change restricts the supported cipher suites of the minio server.
The server only supports AEAD ciphers (ChaCha20-Poly1305 and
AES-GCM).

The supported cipher suites are:
 - tls.TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305
 - tls.TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305
 - tls.TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256
 - tls.TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256
 - tls.TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384
 - tls.TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384
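
A sketch of restricting a Go tls.Config to these AEAD suites (the
constants are standard crypto/tls identifiers):

```
package main

import "crypto/tls"

func restrictedTLSConfig() *tls.Config {
	return &tls.Config{
		// Only AEAD suites: ChaCha20-Poly1305 and AES-GCM.
		CipherSuites: []uint16{
			tls.TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,
			tls.TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,
			tls.TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,
			tls.TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,
			tls.TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,
			tls.TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,
		},
		PreferServerCipherSuites: true,
	}
}

func main() { _ = restrictedTLSConfig() }
```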

Fixes #5244 and #5291
2018-01-13 09:12:11 +05:30
Nitish Tiwari
ede504400f Add validation of xlMeta ErasureInfo field (#5389) 2018-01-12 18:16:30 +05:30
Harshavardhana
4b2d04c86f Add chroot environment doc for minio (#5366)
Fixes #4659
2018-01-12 07:55:40 +05:30
Nitish Tiwari
42633748db Update madmin package to return storage class parity (#5387)
After the addition of Storage Class support, readQuorum
and writeQuorum are decided on a per object basis, instead
of deployment wide static quorums.

This PR updates the madmin api to remove readQuorum/writeQuorum
and add the Standard storage class and Reduced Redundancy storage
class parity as return values, since these parity values are
used to decide the quorum for each object.

Fixes #5378
2018-01-12 07:52:52 +05:30
Aditya Manthramurthy
cd22feecf8 Remove healing of incomplete multipart uploads (#5390)
Since the server performs automatic clean-up of multipart uploads that
have not been resumed for more than a couple of weeks, it was decided
to remove functionality to heal multipart uploads.
2018-01-11 15:07:43 -08:00
kannappanr
20584dc08f Remove unnecessary errors printed on the console (#5386)
Some of the errors printed on the server console can be
removed, as those error messages are unnecessary.

Fixes #5385
2018-01-11 11:42:05 -08:00
Aditya Manthramurthy
8e4eb591c1 Update error response when heal is not implemented (#5383) 2018-01-11 10:21:41 -08:00
Nitish Tiwari
1b721d76b1 Assume standard storage class if not set in metadata (#5370)
If STANDARD storage class is set before starting up Minio server, 
but x-amz-storage-class metadata field is not set in a PutObject 
request, Minio server defaults to N/2 data and N/2 parity disks.

This PR changes the behaviour to use data and parity disks set in
STANDARD storage class, even if x-amz-storage-class metadata 
field is not present in PutObject requests.
2018-01-11 14:58:12 +05:30
Aditya Manthramurthy
f413224b24 Fix config set handler (#5384)
- Return error when the config JSON has duplicate keys (fixes #5286)

- Limit size of configuration file provided to 256KiB - this prevents
  another form of DoS
2018-01-11 12:36:36 +05:30
Harshavardhana
b526cd7e55 Remove requirement for issued at JWT claims (#5364)
Remove the requirement for IssuedAt claims from JWT
for now, since we do not currently have a way to provide
a leeway window for validating the claims. Expiry does
the same checks as IssuedAt with an expiry window.

We do not need it right now since we have a clock skew check
in our RPC layer to handle this correctly.

rpc-common.go
```
func isRequestTimeAllowed(requestTime time.Time) bool {
        // Check whether request time is within acceptable skew time.
        utcNow := UTCNow()
        return !(requestTime.Sub(utcNow) > rpcSkewTimeAllowed ||
                utcNow.Sub(requestTime) > rpcSkewTimeAllowed)
}
```

Once the PR upstream is merged https://github.com/dgrijalva/jwt-go/pull/139
We can bring in support for leeway later.

Fixes #5237
2018-01-10 10:34:00 -08:00
Aditya Manthramurthy
3f8379d07d Update Elasticsearch documentation with authentication information (#5381)
- Add documentation to show how to supply credential to access a
  secured elasticsearch server.

Fixes #5329
2018-01-10 09:20:42 +05:30
Harshavardhana
7350543f24 Allow x-amz-content-sha256 to be optional for PutObject() (#5340)
x-amz-content-sha256 can be optional for any AWS signature v4
request; make sure to skip sha256 calculation when the payload
checksum is not set.

Here is the overall expected behavior:

** Signed request **
- X-Amz-Content-Sha256 is set to 'empty' or some 'value' that is
  not 'UNSIGNED-PAYLOAD': use it to validate the incoming payload.
- X-Amz-Content-Sha256 is set to 'UNSIGNED-PAYLOAD': skip checksum verification.
- X-Amz-Content-Sha256 is not set: we use emptySHA256.

** Presigned request **
- X-Amz-Content-Sha256 is set to 'empty' or some 'value' that is
  not 'UNSIGNED-PAYLOAD': use it to validate the incoming payload.
- X-Amz-Content-Sha256 is set to 'UNSIGNED-PAYLOAD': skip checksum verification.
- X-Amz-Content-Sha256 is not set: we use 'UNSIGNED-PAYLOAD'.
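
A sketch of this decision table (names are hypothetical;
emptySHA256 is the sha256 of a zero-length body):

```
package main

import "fmt"

const (
	unsignedPayload = "UNSIGNED-PAYLOAD"
	emptySHA256     = "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855"
)

// payloadChecksum returns the sha256 value to verify against,
// or "" when verification should be skipped.
func payloadChecksum(headerValue string, presigned bool) string {
	switch {
	case headerValue == unsignedPayload:
		return "" // skip checksum verification
	case headerValue != "":
		return headerValue // validate against the provided value
	case presigned:
		return "" // absent on presigned => UNSIGNED-PAYLOAD
	default:
		return emptySHA256 // absent on signed => empty-body sha256
	}
}

func main() {
	fmt.Println(payloadChecksum("", false) == emptySHA256) // true
}
```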

Fixes #5339
2018-01-09 12:49:50 +05:30
Nitish Tiwari
56bde5df31 Refactor storage class parsing for Gateway mode (#5331)
This PR updates the behaviour to print a relevant error message
if a storage class is set in config.json for gateway mode.

This PR also fixes the case where a storage class set via
environment variables was not parsed properly into config.json.
2018-01-08 22:26:13 -08:00
Paul Trunk
bd9cdcf379 Add custom secret names for Docker (#5355) 2018-01-09 10:46:25 +05:30
Krishna Srinivas
7c72d14027 Separate the codebase for XL and FS format.json related code (#5317) 2018-01-08 14:30:55 -08:00
Harshavardhana
ccd9767b7a Remove incomplete older chinese translation doc (#5368) 2018-01-08 12:48:59 +05:30
Harshavardhana
dae8193bd4 Remove duplicate http constants (#5367) 2018-01-08 10:17:48 +05:30
kannappanr
1de3bd6911 Save http trace to a file (#5300)
Save the http trace to a file instead of displaying it on the console.
The environment variable MINIO_HTTP_TRACE is now a filepath instead
of a boolean.

This handles the scenario where both json and http tracing are
turned on. In that case, both the http trace and the json output were
displayed on the screen, making the json unparsable. Logging this
trace to a file helps us avoid that scenario.

Fixes #5263
2018-01-05 11:24:31 -08:00
Kaan Kabalak
de2ce5acb4 Correct color code of Excel file icon (#5361)
Fixes #5352
2018-01-05 16:19:41 +05:30
Paul Stack
a1a98617ca gateway/manta: Add support for RBAC (#5332)
Manta has the ability to allow users to authenticate with a
username other than the main account. We want to expose
this functionality to the minio manta gateway.
2018-01-05 13:30:29 +05:30
Andreas Auernhammer
b85c75996d add support for encrypted TLS private keys (#5308)
This change adds support for password-protected private keys.
If the private key is encrypted the server tries to decrypt
the key with the password provided by the env variable 
MINIO_CERT_PASSWD.
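
A sketch of the decryption step using the then-current x509 PEM
decryption APIs, with the password read from MINIO_CERT_PASSWD:

```
package main

import (
	"crypto/x509"
	"encoding/pem"
	"errors"
	"os"
)

// decryptKeyPEM decrypts an encrypted PEM private key using the
// password from the MINIO_CERT_PASSWD environment variable.
func decryptKeyPEM(keyPEM []byte) ([]byte, error) {
	block, _ := pem.Decode(keyPEM)
	if block == nil {
		return nil, errors.New("no PEM block found")
	}
	if !x509.IsEncryptedPEMBlock(block) {
		return keyPEM, nil // not encrypted, use as-is
	}
	der, err := x509.DecryptPEMBlock(block, []byte(os.Getenv("MINIO_CERT_PASSWD")))
	if err != nil {
		return nil, err
	}
	return pem.EncodeToMemory(&pem.Block{Type: block.Type, Bytes: der}), nil
}

func main() {}
```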

Fixes #5302
2018-01-05 13:18:08 +05:30
Harshavardhana
cc2497f52f Exitcode with '1' when update is available (#5354)
--quiet should simply update the binary without any prompt.

Fixes #5347
2018-01-04 21:26:43 +05:30
Nitish Tiwari
1e5fb4b79a Fix storage class related issues (#5338)
- Update startup banner to print storage class in capitals. This
makes it easier to identify different storage classes available.

- Update response metadata to not send STANDARD storage class.
This is in accordance with AWS S3 behaviour.

- Update minio-go library to bring in storage class related
changes. This is needed to make transparent translation of
storage class headers for Minio S3 Gateway.
2018-01-04 11:44:45 +05:30
kannappanr
6f7c6fc560 Honor browser enabled config value in startup message (#5313)
Currently, browser access information is displayed without checking
if the browser-enabled flag is turned off in config.json. Fix it to
hide the information when the flag is turned off.

Fixes #5312
2018-01-04 11:00:52 +05:30
Harshavardhana
c0721164be Automatically set goroutines based on shardSize (#5346)
Update the reedsolomon library to enable the feature that automatically
sets the number of go-routines based on the input shard size,
since the shard size is effectively a constant in Minio for
objects > 10MiB (the default blocksize).

klauspost reported around a 15-20% improvement in performance
numbers on older systems such as AVX and SSE3.

```
name                  old speed      new speed      delta
Encode10x2x10000-8    5.45GB/s ± 1%  6.22GB/s ± 1%  +14.20%    (p=0.000 n=9+9)
Encode100x20x10000-8  1.44GB/s ± 1%  1.64GB/s ± 1%  +13.77%  (p=0.000 n=10+10)
Encode17x3x1M-8       10.0GB/s ± 5%  12.0GB/s ± 1%  +19.88%  (p=0.000 n=10+10)
Encode10x4x16M-8      7.81GB/s ± 5%  8.56GB/s ± 5%   +9.58%   (p=0.000 n=10+9)
Encode5x2x1M-8        15.3GB/s ± 2%  19.6GB/s ± 2%  +28.57%   (p=0.000 n=9+10)
Encode10x2x1M-8       12.2GB/s ± 5%  15.0GB/s ± 5%  +22.45%  (p=0.000 n=10+10)
Encode10x4x1M-8       7.84GB/s ± 1%  9.03GB/s ± 1%  +15.19%    (p=0.000 n=9+9)
Encode50x20x1M-8      1.73GB/s ± 4%  2.09GB/s ± 4%  +20.59%   (p=0.000 n=10+9)
Encode17x3x16M-8      10.6GB/s ± 1%  11.7GB/s ± 4%  +10.12%   (p=0.000 n=8+10)
```
2018-01-03 13:47:22 -08:00
Minio Trusted
b1fb550d5c Update yaml files to latest version RELEASE.2018-01-02T23-07-00Z 2018-01-02 23:11:15 +00:00
Andreas Auernhammer
a6318dbdaf fix timing oracle attack against signature V2/V4 verification (#5335)
This change replaces the non-constant time comparison of
request signatures with a constant time implementation. This
prevents a timing attack which can be used to learn a valid 
signature for a request without knowing the secret key.
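
For context, the standard constant-time comparison pattern in Go
looks like this (a sketch, not the exact server code):

```
package main

import (
	"crypto/subtle"
	"fmt"
)

// signaturesMatch compares two signatures in constant time, so the
// comparison duration leaks nothing about how many bytes matched.
func signaturesMatch(expected, got []byte) bool {
	return subtle.ConstantTimeCompare(expected, got) == 1
}

func main() {
	fmt.Println(signaturesMatch([]byte("abc"), []byte("abc"))) // true
	fmt.Println(signaturesMatch([]byte("abc"), []byte("abd"))) // false
}
```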

Fixes #5334
2018-01-02 12:00:02 +05:30
Harshavardhana
e39d7ddb0f Fix PostPolicy form tests without hardcoded dates (#5337)
Fixes #5336
2018-01-01 07:28:10 +05:30
Kaan Kabalak
659f724f4c Integrate existing remove bucket functionality from newux to current UI (#5289)
This commit takes the existing remove bucket functionality written by
brendanashworth, integrates it into the current UI with a dropdown for
each bucket, and fixes small issues that were present, like the dropdown
not disappearing after the user clicks on 'Delete' for certain buckets.
This feature only deletes a bucket that is empty (one that has no objects).

Fixes #4166
2017-12-29 18:45:44 +05:30
Nitish Tiwari
baaf67d82e Update config.json guide with details of version 22 (#5328)
Fixes #5296
2017-12-28 23:04:44 +05:30
A. Elleuch
2244adff07 fix: Better printing of XL config init error (#5284) 2017-12-28 23:02:48 +05:30
Nitish Tiwari
e3d841ffd1 Fix config.json parsing to fetch correct storage class (#5327) 2017-12-28 14:19:45 +05:30
Minio Trusted
751632d79e Update yaml files to latest version RELEASE.2017-12-28T01-21-00Z 2017-12-28 01:26:09 +00:00
Nitish Tiwari
545a9e4a82 Fix storage class related issues (#5322)
- Add storage class metadata validation for request header
- Change storage class header values to be consistent with AWS S3
- Refactor internal method to take only the required argument
2017-12-27 10:06:16 +05:30
Harshavardhana
f25ec31565 Set maxResources appropriately for gateway like server (#5321) 2017-12-24 20:09:30 +05:30
Matthieu Paret
374feda237 add HTTPStats to madmin (#5299) 2017-12-22 17:47:30 -08:00
A. Elleuch
6ef0161835 fix: Restore empty files when healing (#5257)
HealFile() does not process the case when an empty file is lost on
some disks. Since Reedsolomon erasure doesn't handle restoring empty
data, HealFile will create empty files similarly to CreateFile().
2017-12-22 14:57:57 -08:00
Nitish Tiwari
1a3dbbc9dd Add x-amz-storage-class support (#5295)
This adds configurable data and parity options on a per-object
basis. To use variable parity:

- Users can set environment variables to configure variable
parity

- Then add the header x-amz-storage-class to PutObject requests
with relevant storage class values

Fixes #4997
2017-12-22 16:58:13 +05:30
Aditya Manthramurthy
f1355da72e Add base64 encoded MD5 output for Hash Reader (#5315)
- Use it to send the Content-MD5 header correctly encoded to S3
  Gateway

- Fixes a bug in PutObject (including anonymous PutObject) and
  PutObjectPart with S3 Gateway found when testing with Mint.
2017-12-21 17:27:33 -08:00
Krishnan Parthasarathi
bbe521ffec ReInitDisk RPC handler should use retryStorage (#5310) 2017-12-21 12:28:01 +05:30
Paul Stack
7d75d61621 Add Support for Manta Object Storage as a Gateway (#5025)
Manta is an Object Storage by [Joyent](https://www.joyent.com/)

This PR adds initial support for Manta. It is intended as
non-production-ready so that feedback can be obtained.
2017-12-20 13:37:56 +05:30
Harshavardhana
1f77708a30 Limit number of connections up to the system max limit (#5109) 2017-12-20 13:30:14 +05:30
Timon Wong
84fc78d60f Implement Alibaba Cloud OSS gateway support (#5103) 2017-12-19 13:55:17 +05:30
poornas
a182fe8c15 update steps to make changes to config.json (#5292) 2017-12-17 21:00:12 -08:00
Harshavardhana
819d1e80c6 Add more delays on distributed startup for slow network (#5240)
Refer #5237
2017-12-16 08:25:29 -08:00
Kaan Kabalak
ffdf115bf2 Fix mobile issue where checkbox was seen over sidebar on list item hover (#5288)
Fixes #5287
2017-12-15 12:36:05 -08:00
Harshavardhana
eb7c690ea9 Support in-place upgrades of new minio binary and releases. (#4961)
This PR allows 'minio update' to not only show the update banner
but also perform in-place upgrades.

Updates are done safely by validating the downloaded
sha256 of the binary.

Fixes #4781
2017-12-15 12:33:42 -08:00
Nitish Tiwari
8c08571cd9 Update Kubernetes example yaml files (#5278)
Removed the non production ready Kubernetes constructs that are not needed
for standard Minio deployment. General cleanup of the documents.
2017-12-12 10:29:00 +05:30
kannappanr
2853fa1882 Remove logger field info from docs (#5281)
Logger field is removed from the docs,
as it has been removed from the config file.
2017-12-11 13:20:05 -08:00
Harshavardhana
1672c73988 Fix snap docs with instructions to run (#5266) 2017-12-07 08:38:58 +05:30
Harshavardhana
5476415016 Remove unused deps for cloudresourcemanager (#5274) 2017-12-06 16:41:56 -08:00
Kaan Kabalak
67ac74471d Fix error message on browser window resize when user has no buckets (#5275)
This commit handles the case where the list of buckets is empty in the
listObjects function of actions.js.

Fixes #5267
2017-12-06 15:52:23 -08:00
kannappanr
a1c1a18dc5 Remove "logger" field from config.json (#5268)
File logging removed as part of improvement to server logging.

config.json format updated to version 21.

Fixes #5176
2017-12-06 12:48:29 +05:30
Harshavardhana
eb2894233c Convert gateways into respective packages (#5200)
- Make azure gateway a package
- Make b2 gateway a package
- Make gcs gateway a package
- Make s3 gateway a package
- Make sia gateway a package
2017-12-05 17:58:09 -08:00
Nitish Tiwari
52e382b697 Update go version in snap file (#5256) 2017-12-04 15:49:31 -08:00
techknowlogick
0d435e11b1 Change container name in b2 docs (#5259) 2017-12-04 22:34:00 +05:30
Harshavardhana
2755a0b763 Check if SSL is configured to validate input arguments (#5252)
This PR handles the following situations:

- secure endpoints provided: the server should fail to start
  if TLS is not configured

- insecure endpoints provided: the server starts regardless of
  whether TLS is configured.

Fixes #5251
2017-12-04 12:17:12 +05:30
Aditya Manthramurthy
043e030a4a Add CopyObjectPart support to gateway S3 (#5213)
- Adds a metadata argument to the CopyObjectPart API to facilitate
  implementing encryption for copying APIs too.

- Update vendored minio-go - this version implements the
  CopyObjectPart client API for use with the S3 gateway.

Fixes #4885
2017-12-02 08:33:59 +05:30
Harshavardhana
490c30f853 erasure: Support cleaning up of stale multipart objects (#5250)
Just like our single directory/disk setup, this PR brings
the functionality to clean up stale multipart objects
older than 2 weeks.
2017-11-30 18:11:42 -08:00
Harshavardhana
59749a2b85 erasure: Remove prefix based listing support on ListMultipartUploads (#5248)
We previously removed this support under FS in #4996;
deprecate it on the erasure coded backend as well to simplify
our multipart support.
2017-11-30 15:58:46 -08:00
Michael Lynch
fc3cf97b81 Removing isValidObjectName from Sia gateway (#5243)
This check incorrectly rejects most valid filenames. The only filenames Sia
forbids are leading forward slashes and path traversal characters, but it's
better to simply allow Sia to reject invalid names on its own rather than try
to anticipate errors from Sia:

https://github.com/NebulousLabs/Sia/blob/master/doc/api/Renter.md#path-parameters-4
2017-11-30 14:43:21 -08:00
Harshavardhana
d45a8784fc Fix hash order to generate more even distribution (#5247)
The problem in the existing code was the following line:

```
start := int(keyCrc%uint32(cardinality)) | 1
```

Given a cardinality of N, because of the bitwise '|' the
resulting start index always has higher affinity to odd
sequences.

As can be seen from the test cases, this can lead to many
objects being allocated the same set of disks, or at least
the first disk always being an odd one. This introduces a
performance problem for the majority of objects under
concurrent load.

Remove `| 1` to provide a cleaner distribution;
the new code is:
```
start := int(keyCrc % uint32(cardinality))
```

Thanks to Krishna Srinivas for pointing out the bitwise
situation here.
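
A tiny runnable demonstration of the skew (keys illustrative; with
`| 1` every start index is odd):

```
package main

import (
	"fmt"
	"hash/crc32"
)

func main() {
	const cardinality = 4
	for _, key := range []string{"bucket/a", "bucket/b", "bucket/c", "bucket/d"} {
		keyCrc := crc32.ChecksumIEEE([]byte(key))
		old := int(keyCrc%uint32(cardinality)) | 1 // always 1 or 3: odd-disk affinity
		fixed := int(keyCrc % uint32(cardinality)) // full range 0..3
		fmt.Printf("%s old=%d fixed=%d\n", key, old, fixed)
	}
}
```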
2017-11-30 12:57:03 -08:00
Nitish Tiwari
6d7319380c Add Transparent Hugepage information (#5246)
Fixes #5242
2017-11-30 12:18:00 -08:00
Krishna Srinivas
14e6c5ec08 Simplify the steps to make changes to config.json (#5186)
This change introduces the following simplified steps to follow
during config migration.

```
 // Steps to move from version N to version N+1
 // 1. Add new struct serverConfigVN+1 in config-versions.go
 // 2. Set configCurrentVersion to "N+1"
 // 3. Set serverConfigCurrent to serverConfigVN+1
 // 4. Add new migration function (ex. func migrateVNToVN+1()) in config-migrate.go
 // 5. Call migrateVNToVN+1() from migrateConfig() in config-migrate.go
 // 6. Make changes in config-current_test.go for any test change
```
2017-11-29 13:12:47 -08:00
A. Elleuch
98d07210e7 fix: Ignore logging some tcp routine errors (#5097) 2017-11-28 13:51:17 -08:00
Nitish Tiwari
6923630389 Update bucket notification docs to mention events supported (#5235)
Fixes #4898
2017-11-28 15:41:26 +05:30
Nitish Tiwari
0c73c81919 Cleanup TLS setup document (#5231)
Fixes #4959
2017-11-28 15:15:50 +05:30
Harshavardhana
a46b640da3 gateway/sia: Support proper {make,get}Bucket operations (#5229)
In the current implementation we faked the makeBucket operations
to allow s3 clients to behave properly. Instead, we can create
a placeholder zero-byte file whose name is a hexadecimal
representation of the bucket name itself.
2017-11-28 13:40:44 +05:30
Krishna Srinivas
71f9d2beff Increase maximum size of PUT request to 5TB (#5241)
fixes #5148
2017-11-28 12:59:02 +05:30
Michael Lynch
cf414a6053 Fixing Sia file uploads (#5233)
The Sia gateway had a bug with uploading that prevented the user's uploads
from reaching the Sia backend. The PutObject function called fsRemoveFile at
the end of the function, which didn't give the Sia backend enough time to
upload the file to the Sia network.

This adds a goroutine that watches the file upload progress and doesn't delete
the file until the upload reaches 100% complete.

Note that this solution has the limitation that if the minio process dies
in the middle of an upload, it will leave orphaned files in the SIA_TEMP
directory that the user will need to remove manually.
2017-11-28 12:25:15 +05:30
Harshavardhana
05b395e81d Add more unit tests for azure/gcs/b2 gateway (#5236)
Also adds a blazer SDK update exposing
error response headers.
2017-11-27 18:29:22 -08:00
Paul Nicholls
6a2d7ae808 gateway/azure: ListParts return an empty list if no parts uploaded yet (#5230)
This makes azure ListParts implementation behave more like S3 
by returning an empty list rather than an error when no parts have
been uploaded yet.
2017-11-27 17:42:27 -08:00
Eduardo Reyes
685afb6749 Fixes last item in the list menu being blocked. (#5234)
This commit fixes an issue where the last item's menu on a list of files
that scrolls gets blocked by the floating add button.

The fix is to simply add the same padding that we use for the responsive
view, since it works just fine in responsive mode.
2017-11-27 14:02:57 -08:00
Harshavardhana
8efa82126b Convert errors tracer into a separate package (#5221) 2017-11-25 11:58:29 -08:00
Frank Wessels
6e6aeb6a9e Updated version of klauspost/reedsolomon using proper AVX2 instructions as well as providing support for Cauchy matrices. (#5215) 2017-11-24 14:23:20 -08:00
Arnaud de Mouhy
c9e00ae0a5 Add support for dynamic address in docker healthcheck (#5228) 2017-11-24 14:11:26 +05:30
Nitish Tiwari
08e0698b7e Update docs to latest Minio server release (#5227) 2017-11-23 10:19:53 +05:30
Harshavardhana
135a6a7bb4 Add chinese translation docs. (#5224) 2017-11-22 15:15:40 -08:00
Dee Koder
8b4d7048f8 fixed typo variable under domain heading. (#5223) 2017-11-22 12:37:00 -08:00
David G
f4d4ea5c36 Implement Sia Gateway (#5114) 2017-11-22 12:12:10 -08:00
Aditya Manthramurthy
d1a6c32d80 Improve make and make install messages (#5207) 2017-11-21 16:22:01 -08:00
Krishna Srinivas
bbd05a8f1c gateway-gcs: Close the writer with error in case of any errors. (#5217)
fixes #5216
2017-11-21 14:45:37 -08:00
Krishna Srinivas
4393afb7e2 Remove checkGCSProjectID() as it needs extra permission setting (#5210)
fixes #5209
2017-11-21 10:43:39 -08:00
Krishna Srinivas
1a53734477 Rename UserDefined to UserMetadata for events (#5206)
fixes #5165
2017-11-20 15:32:25 -08:00
Andreas Auernhammer
e95c0bb913 return AWS compliant error if SSE-C key is wrong (#5203)
This PR changes the behavior of DecryptRequest.
Instead of returning `object-tampered` if the client-provided
key is wrong, DecryptRequest will return `access-denied`.

This is AWS S3 behavior.

Fixes #5202
2017-11-20 14:04:10 -08:00
Krishna Srinivas
fce556b8a0 Support for ListObjectParts in azure-gateway (#5198)
fixes #5169
2017-11-20 14:03:20 -08:00
Andreas Auernhammer
b97f99766f add benchmarks for erasure backend (#5084)
This change adds benchmarks for erasure read/write in different setups.
2017-11-17 14:57:04 -08:00
Nitish Tiwari
f7b6f7b22f Update getObjectInfo to stat for objects with trailing / (#5179)
Apache Spark sends getObject requests with a trailing "/".
This PR updates getObjectInfo to stat files
even if they are sent with a trailing "/".

Fixes #2965
2017-11-16 16:00:27 -08:00
Krishnan Parthasarathi
2a0a62b78d Return ErrContentSHA256Mismatch when sha256sum is invalid (#5188) 2017-11-16 11:13:04 -08:00
Krishnan Parthasarathi
67f66c40c1 Fix ListenBucketNotification deadlock (#5028)
Previously ListenBucketNotificationHandler could deadlock with
PutObjectHandler's eventNotify call when a client closes its
connection. This change removes the cyclic dependency between the
channel and map of ARN to channels by using a separate done channel to
signal that the client has quit.
2017-11-16 10:56:06 -08:00
Krishna Srinivas
5a2bdf6959 Handle Path validation inside the PostPolicy handler (#5192) 2017-11-15 14:10:45 -08:00
silenceshell
51e78a3e20 fix a typo (#5187) 2017-11-15 15:12:14 +05:30
Harshavardhana
9eb52ec7c7 Remove release scripts for minio. (#5181)
Use `GOOS=<osname> go build`  to build minio for any platform of choice.
2017-11-14 17:05:40 -08:00
Harshavardhana
0827a2747b api: CopyObject should return NotImplemented for now (#5183)
Commit ca6b4773ed introduces SSE-C
support for HEAD, GET, PUT operations but since we do not
implement CopyObject() we should return NotImplemented.
2017-11-14 16:57:19 -08:00
Krishna Srinivas
e7a724de0d Virtual host style S3 requests (#5095) 2017-11-14 16:56:24 -08:00
Krishna Srinivas
d57d57ddf5 Update ui-assets.go (#5185) 2017-11-14 16:22:35 -08:00
Harshavardhana
a4d6195244 Add public data-types for easier external loading (#5170)
This change brings public data-types such that
we can ask projects to implement gateways
externally rather than maintaining them in our repo.

All publicly exported structs are maintained in object-api-datatypes.go

completePart --> CompletePart
uploadMetadata --> MultipartInfo

All other exported errors are at object-api-errors.go
2017-11-14 13:55:10 +05:30
Krishna Srinivas
7d3eaf79ff Set Minio user-agent for GCS calls (#5154) 2017-11-13 19:06:51 -08:00
kannappanr
b63c37b28e Return MethodNotAllowed error in PostPolicyBucketHandler if URL contains object name (#5142)
The S3 spec requires that a MethodNotAllowed error be returned if the
object name is part of the URL.

Fix postpolicy related unit tests to not set object name as part of target URL.

Fixes #5141
2017-11-13 16:30:20 -08:00
Harshavardhana
8d59f35523 Add GetInfo() support for solaris (#5174)
Fixes #5173
2017-11-13 12:54:38 -08:00
kannappanr
f460eceb6d Check for value > 7 days in X-Amz-Expires header. (#5163)
Add a check to ensure that the X-Amz-Expires header in the presigned URL does not exceed 7 days.
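
A minimal sketch of such a bound check (the helper name is
hypothetical; 7 days is 604800 seconds):

```
package main

import (
	"errors"
	"fmt"
	"time"
)

// maxPresignedExpiry mirrors the AWS limit of 7 days.
const maxPresignedExpiry = 7 * 24 * time.Hour

// checkExpires validates an X-Amz-Expires value given in seconds.
func checkExpires(seconds int64) error {
	d := time.Duration(seconds) * time.Second
	if d <= 0 || d > maxPresignedExpiry {
		return errors.New("X-Amz-Expires must be between 1 and 604800 seconds")
	}
	return nil
}

func main() {
	fmt.Println(checkExpires(604800)) // <nil>
	fmt.Println(checkExpires(604801)) // error
}
```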

Fixes #5162
2017-11-13 12:54:03 -08:00
Harshavardhana
d10679866c Fix minio distributed setup to properly work on windows (#5152)
On Windows, having a preceding "/" will cause problems if the
command line already has C:/<export-folder>/ in it. The final
resulting path on Windows might become C:/C:/, which will prevent
minio server from starting properly in distributed mode on Windows.
As a special case, make sure to trim off the separator.

NOTE: It is also perfectly fine for Windows users to have a path
without C:/, since at that point we treat it as a relative path
and obtain the full filesystem path as well. Providing the C:/
style is necessary to provide paths other than C:/,
such as F:/, D:/ etc.

Another additional benefit here is that this style also
supports providing UNC paths as well.

Fixes #5136
2017-11-12 08:09:53 +05:30
Andreas Auernhammer
a79a7e570c replace SSE-C key derivation scheme (#5168)
This change replaces the current SSE-C key derivation scheme. The 'old'
scheme derives a unique object encryption key from the client-provided key.
This key derivation was not invertible. That means that a client cannot change
its key without changing the object encryption key.
AWS S3 allows users to update their SSE-C keys by executing a SSE-C COPY with
source == destination. AWS probably updates just the metadata (which is a very
cheap operation). The old key derivation scheme would require a complete copy
of the object, because the minio server would not be able to derive the same
object encryption key from a different client-provided key (without breaking
the cryptographic hash function).

This change makes the key derivation invertible.
2017-11-10 17:21:23 -08:00
Harshavardhana
16ecaac4fc Help message should prioritize gateway after server (#5153)
Currently gateway is listed as a command after {version, update},
which is incorrect; fix it.
2017-11-08 13:38:53 -08:00
Andreas Auernhammer
ca6b4773ed add SSE-C support for HEAD, GET, PUT (#4894)
This change adds server-side-encryption support for HEAD, GET and PUT
operations. This PR only addresses single-part PUTs and GETs without
HTTP ranges.

Further this change adds the concept of reserved object metadata which is required
to make encrypted objects tamper-proof and provide API compatibility to AWS S3.
This PR adds the following reserved metadata entries:
- X-Minio-Internal-Server-Side-Encryption-Iv          ('guarantees' tamper-proof property)
- X-Minio-Internal-Server-Side-Encryption-Kdf         (makes Key-MAC computation negotiable in future)
- X-Minio-Internal-Server-Side-Encryption-Key-Mac     (provides AWS S3 API compatibility)

The prefix `X-Minio-Internal` specifies an internal metadata entry which must not
be sent to clients. All client requests containing a metadata key starting with
`X-Minio-Internal` must also be rejected. This is implemented by a generic handler.
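
A sketch of such a generic filter (the handler shape is
illustrative):

```
package main

import (
	"net/http"
	"strings"
)

// rejectInternalHeaders refuses any client request that tries to
// set reserved X-Minio-Internal-* metadata directly.
func rejectInternalHeaders(next http.Handler) http.Handler {
	return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		for key := range r.Header {
			// Header keys are kept in canonical form by net/http.
			if strings.HasPrefix(key, "X-Minio-Internal") {
				http.Error(w, "reserved metadata prefix", http.StatusBadRequest)
				return
			}
		}
		next.ServeHTTP(w, r)
	})
}

func main() {
	_ = rejectInternalHeaders(http.NewServeMux())
}
```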

This PR implements SSE-C separately from client-side encryption (CSE);
SSE-C objects cannot be decrypted on the client side. However, clients
can encrypt the same object with both CSE and SSE-C.

This PR does not address:
 - SSE-C Copy and Copy part
 - SSE-C GET with HTTP ranges
 - SSE-C multipart PUT
 - SSE-C Gateway

Each point must be addressed in a separate PR.

Added to vendor dir:
 - x/crypto/chacha20poly1305
 - x/crypto/poly1305
 - github.com/minio/sio
2017-11-07 15:18:59 -08:00
Krishna Srinivas
7e7ae29d89 browser: Remove hardcoding of minioBrowserPrefix=/minio (#5048)
This enables reverse proxying of minio-browser. Fixes #5040
2017-11-06 15:59:37 -08:00
Krishnan Parthasarathi
7d18f00116 Make GCS multipart upload failures S3-compatible (#5138) 2017-11-06 10:09:21 -08:00
Harshavardhana
719f8c258a fix content-sha256 verification for presigned PUT (#5137)
It is possible that x-amz-content-sha256 is set through
the query params in the case of presigned PUT calls; make sure
that we validate the incoming x-amz-content-sha256 properly.

The current code simply allows this without honoring the
set x-amz-content-sha256; fix it.
2017-11-05 16:32:19 +05:30
Harshavardhana
dcdb07433a Support conditions for ListMultipartUploads and ListParts (#5130) 2017-11-02 11:39:48 -07:00
kannappanr
26e9f78a86 Display help when access/secret key is not set (#5132)
Display a help message when access and secret keys are not set in
any of the gateways.

Fixes #5124
2017-11-01 11:45:27 -07:00
Nitish Tiwari
34a1b58a75 Remove redirectHeaders method (#5120)
As of go 1.8, headers are copied on redirect, so we no longer need
to do this manually.

See https://github.com/golang/go/issues/4800 and
https://go-review.googlesource.com/c/go/+/28930 for more context on go
behaviour.

Fixes #5042
2017-11-01 12:43:13 +05:30
Bala FA
32c6b62932 move credentials as separate package (#5115) 2017-10-31 11:54:32 -07:00
Harshavardhana
8d584bd819 Remove unused value from retry-storage (#5129) 2017-10-29 15:43:16 +05:30
Nitish Tiwari
031767791d Update GitHub PR template to include mint tests (#4970)
Mint is the functional testing platform for the Minio server, so it is
important that each PR to the Minio server repository is checked for
added or updated Mint test cases.
2017-10-28 07:14:19 -07:00
Nitish Tiwari
3b917067d9 Update sample yaml files to latest release RELEASE.2017-10-27T18-59-02Z (#5127) 2017-10-28 07:11:53 -07:00
Harshavardhana
203ac8edaa Bucket policies should use minio-go/pkg/policy instead. (#5090) 2017-10-27 16:14:06 -07:00
Harshavardhana
8bbfb1b714 Allow event notifications to work without region (#5119)
Fixes #5101
2017-10-27 15:09:55 -07:00
Harshavardhana
b4ddccc2f7 browser: Return a more descriptive error for HTTP 500 (#5112) 2017-10-27 15:09:14 -07:00
Timon Wong
6400f506da Simplify gateway backend registration (#5111) 2017-10-27 15:07:46 -07:00
Frank Wessels
7195ac7f14 Add space to error message (#5108) 2017-10-27 15:07:14 -07:00
Krishna Srinivas
03df692ae2 Support for bosh/pcf user-agent when querying for updates. (#5116) 2017-10-26 18:53:45 -07:00
kannappanr
a011fe8450 "0" offset is ignored in GetObject method in Azure Gateway code (#5118)
In the GetObject method, check that startOffset is a non-negative value;
ignore the check for startOffset > 0 and check only for length > 0.

Fixes minio/mint#191
2017-10-26 18:01:46 -07:00
kannappanr
95d97c2d6d GCS gateway to return error in getBucketPolicy, when no policy is set (#5117)
Return NoSuchBucketPolicy error when there is no policy set.
Fixes minio/mint#199
2017-10-26 18:01:00 -07:00
Bala FA
bc8b936d4b convert ETag properly for all gateways (#5099)
Previously the ID/ETag from the backend service was used as-is, which causes
failures on s3cmd-like tools, since those tools use the ETag as a checksum to
validate data.  This is fixed by prepending "-1".

Refer minio/mint#193 minio/mint#201
2017-10-26 10:17:07 -07:00
Aditya Manthramurthy
d23ded0d83 Use retryableStorage after healing format.json (#5105)
- Previously networkStorage was being used and this led to errors
  when listing with a down server/disk

Fixes #5089
2017-10-26 09:52:23 -07:00
Julien Maitrehenry
db3fed2279 Fix s3MetaToAzureProperties Content-Md5 key (#5068) 2017-10-25 11:00:07 -07:00
A. Elleuch
866dffcd62 log: Store http request/responses in a log file (#4804)
When MINIO_TRACE_DIR is provided, create a new log file and store all
HTTP request and response data; bodies are excluded to reduce memory
consumption. MINIO_HTTP_TRACE=1 enables logging. Uses non-memory-consuming
HTTP request/response recorders; the maximum is about 32k per request.
This logs to STDOUT; body logging is disabled for PutObject, PutObjectPart
and GetObject.
2017-10-25 10:59:53 -07:00
Harshavardhana
5eb210dd2e Set etag properly to calculated value if available (#5106)
Fixes #5100
2017-10-24 12:25:42 -07:00
kula
758d5458f0 Update documentation to reflect the correct default region of '' (#5101)
Ever since commit 5db1e9f3dd the
default region has been '', instead of 'us-east-1'. Update
the documentation to reflect this, in particular the documentation about
notifications.
2017-10-23 11:57:40 -07:00
asubmani
8a40da3fd0 Update azure.md (#5086) 2017-10-23 14:38:38 +05:30
Harshavardhana
1d8a8c63db Simplify data verification with HashReader. (#5071)
Verify() was being called by the caller after the data
had been successfully read, after io.EOF. This disconnection
opens a race under concurrent access to such an object.
Verification is not necessary outside of the Read() call;
we can simply do checksum verification right inside the
Read() call at io.EOF.

This approach simplifies the usage.
2017-10-22 11:00:34 +05:30
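A minimal sketch of the idea, assuming a simplified MD5-only reader (the real HashReader in minio also handles SHA256 and S3 error types): verification happens inside Read() the moment io.EOF is seen, so no separate Verify() call can race with concurrent readers.

```go
package main

import (
	"bytes"
	"crypto/md5"
	"encoding/hex"
	"errors"
	"fmt"
	"hash"
	"io"
)

type hashReader struct {
	src    io.Reader
	md5sum string // expected hex-encoded MD5
	h      hash.Hash
}

func newHashReader(src io.Reader, md5sum string) *hashReader {
	return &hashReader{src: src, md5sum: md5sum, h: md5.New()}
}

func (r *hashReader) Read(p []byte) (int, error) {
	n, err := r.src.Read(p)
	r.h.Write(p[:n]) // hash.Hash writes never fail
	if err == io.EOF {
		// Checksum verification happens right here, at end-of-stream.
		if hex.EncodeToString(r.h.Sum(nil)) != r.md5sum {
			return n, errors.New("checksum mismatch")
		}
	}
	return n, err
}

func main() {
	data := []byte("hello")
	sum := md5.Sum(data)
	r := newHashReader(bytes.NewReader(data), hex.EncodeToString(sum[:]))
	if _, err := io.ReadAll(r); err != nil {
		fmt.Println("verify failed:", err)
		return
	}
	fmt.Println("verified OK")
}
```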
Krishna Srinivas
65a817fe8c Move packages between dependencies and devDependencies (#5094)
This change is needed to generate the right Open Source Disclosure File.
2017-10-21 12:21:32 +05:30
Julien Maitrehenry
1256b0b818 Fix multipart upload etag on azure gateway (#5055) 2017-10-20 14:00:18 -07:00
Krishna Srinivas
7e05b826fa Figure out projectID for GCS automatically from credentials.json (#5029)
fixes #5027
2017-10-20 13:59:12 -07:00
Harshavardhana
d82a1da511 Fix notification unmarshalling, unmarshal only when size is > 0 (#5087)
Fixes #5085
2017-10-20 13:57:57 -07:00
Frank Wessels
f598f4fd1b Fix typo in comment (#5088) 2017-10-20 15:08:15 +05:30
Krishna Srinivas
e65d8fdc7d Remove unused packages from vendor.json (#5079) 2017-10-19 15:14:10 +05:30
A. Elleuch
b919462610 fix: Avoid teeing data into a null cache buffer (#5070)
In some cases, the cache manager returns an ErrCacheFull error when creating a
new cache buffer, but the code still sends object data to a nil cache buffer.
2017-10-18 14:42:10 -07:00
Nitish Tiwari
8287ab091c Ignore file not found error for multipart-uploads (#5065)
Don't print the errFileNotFound error, as it is expected that concurrent
complete-multipart-upload or abort-multipart-upload calls would have deleted
the file, and the file may not be found

Fixes: https://github.com/minio/minio/issues/5056
2017-10-18 14:26:20 -07:00
Harshavardhana
f25bec6bf1 Do not attempt to generate URLToken for anonymous downloads (#5078)
This is a regression since last release - fixes #5076
2017-10-18 11:14:27 +05:30
Krishna Srinivas
75865efb0e fs: All parts except the last part should be of the same size (#5045)
fixes #4881
2017-10-17 12:01:28 -07:00
Harshavardhana
3d2d63f71e Fix gateway docs remove redundant files (#5072) 2017-10-17 08:12:44 +05:30
Harshavardhana
53e133e844 Remove NotSupported error redundant with NotImplemented (#5074) 2017-10-17 08:11:06 +05:30
Harshavardhana
b2cbade477 Support creating empty directories. (#5049)
Every so often we get requirements for creating
directories/prefixes, and we end up rejecting
such requirements. This PR implements this and
allows empty directories to be created without any
new file addition to the backend.

Existing lower-level APIs themselves are leveraged to provide
this behavior. Only the FS backend supports this for
the time being, as desired.
2017-10-16 17:20:54 -07:00
Harshavardhana
0c0d1e4150 Implement backblaze-b2 gateway support (#5002)
Fixes https://github.com/minio/minio/issues/4072
2017-10-13 16:26:16 +05:30
Harshavardhana
3d0dced23c Remove go1.9 specific code for windows (#5033)
The following fix, https://go-review.googlesource.com/#/c/41834/, has
been merged upstream and released with go1.9.
2017-10-13 15:31:15 +05:30
Nitish Tiwari
ad53c5d859 Remove body from POST request in webhook (#5067)
When webhook notification is configured, the Minio server tries to look up the
webhook endpoint by making a POST request with the body set to the release tag.
We can remove the body from the POST request, as the POST body does not
add any specific value.

This discussion on IETF group says empty POSTs are okay
http://lists.w3.org/Archives/Public/ietf-http-wg/2010JulSep/0272.html

Fixes: https://github.com/minio/minio/issues/5066
2017-10-13 13:29:01 +05:30
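A tiny sketch of the change described above (endpoint URL and function name are hypothetical): probe the endpoint with a body-less POST, which the linked IETF thread confirms is legal HTTP.

```go
package main

import "net/http"

// lookupEndpoint probes the webhook endpoint with an empty POST body
// instead of sending the release tag as payload.
func lookupEndpoint(url string) error {
	resp, err := http.Post(url, "application/json", nil) // nil body => Content-Length: 0
	if err != nil {
		return err
	}
	return resp.Body.Close()
}

func main() {
	lookupEndpoint("http://localhost:8080/webhook") // hypothetical endpoint
}
```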
Harshavardhana
45463b1d6b Fix azure metadata handling for all incoming PUT requests (#5038)
s3cmd cli fails when trying to upload a file to the azure gateway.
Previous fixes in azure to handle client-side encryption alone
did not completely address the problem.

We need to possibly convert all the x-amz-meta-<name>
headers, i.e. specifically <name> should be converted into a
C# identifier as mentioned in the docs for `put-blob`.

https://docs.microsoft.com/en-us/rest/api/storageservices/put-blob

```
s3cmd put README.md s3://myanis/
upload: 'README.md' -> 's3://myanis/README.md'  [1 of 1]
 4598 of 4598   100% in    0s    47.24 kB/s  done
upload: 'README.md' -> 's3://myanis/README.md'  [1 of 1]
 4598 of 4598   100% in    0s    50.47 kB/s  done
ERROR: S3 error: 400 (InvalidArgument): Your metadata headers are not supported.
```

There is a separate issue with s3cmd after this fix is applied, where
the ETag is wrongly validated: https://github.com/s3tools/s3cmd/issues/880.
But that is an upstream s3cmd problem which wrongly interprets the ETag
to be the md5sum of the content that was uploaded.
2017-10-12 12:16:24 -07:00
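To illustrate the conversion the commit above describes (a hedged sketch; minio's azure gateway may apply slightly different rules), the `<name>` portion of each `x-amz-meta-<name>` key can be rewritten into a valid C# identifier by replacing every rune that is not a letter, digit or underscore, and by not letting the key start with a digit:

```go
package main

import (
	"fmt"
	"unicode"
)

// toAzureMetadataKey massages a metadata key name into something that is
// a valid C# identifier, which Azure's put-blob API requires.
func toAzureMetadataKey(name string) string {
	out := make([]rune, 0, len(name))
	for i, r := range name {
		valid := r == '_' || unicode.IsLetter(r) || (i > 0 && unicode.IsDigit(r))
		if !valid {
			r = '_' // replace dashes and other invalid runes
		}
		out = append(out, r)
	}
	return string(out)
}

func main() {
	fmt.Println(toAzureMetadataKey("x-amz-key")) // x_amz_key
	fmt.Println(toAzureMetadataKey("1st-value")) // _st_value (no leading digit)
}
```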
Nitish Tiwari
d5895d3243 Update docker swarm doc to use docker volume prune (#5053)
The docker prune command is an improvement over the previous two-step process
of removing unused volumes in Swarm.
2017-10-11 10:17:57 -07:00
Krishna Srinivas
db1edfe487 Fix data race bug in the testcase TestHTTPListenerAcceptParallel (#5043) 2017-10-11 10:17:37 -07:00
Bala FA
9c16f73334 pkg/http: use port 65432 rather than 9000 for unit tests (#5021)
Fixes #5014
2017-10-11 10:16:38 -07:00
Nitish Tiwari
f50bec2987 Update Minio version in docs and example yaml files (#5054) 2017-10-10 17:35:00 -07:00
Harshavardhana
e2bff5cdc0 Move Dockerfile to go 1.9.1 (#5047)
Additionally add Dockerfile.dev to build local development images.
2017-10-10 14:17:16 -07:00
Julien Maitrehenry
02a5f1e96a Add b2s method on pkg/disk/type_bsd.go (#5036) 2017-10-10 02:27:28 -07:00
Harshavardhana
4deefa3695 tests: Remove dependency on check.v1 (#5034)
This PR addresses a long-standing dependency on the
`gopkg.in/check.v1` project used for our tests.
All tests are re-written to use the Go default
testing framework instead.

There was no reason for us to use an external
package where the Go tools are sufficient for this.
2017-10-10 02:14:42 -07:00
Bala FA
d28b3d8801 Move to go1.9.1 as default environment. (#5041) 2017-10-09 22:23:59 -07:00
Harshavardhana
099b5293a9 Move gateway unsupported functions into a common struct. (#5009)
This is done to avoid repeated declaration of not-implemented
functions for each gateway. It also avoids a possible bug in Go,
https://github.com/golang/go/issues/18468, which has already been
triggered on multiple of our PRs.
2017-10-09 16:41:35 -07:00
Aditya Manthramurthy
4a0a491ca1 Refactor update check code (#5020)
- Add release-time conversion helpers
- Split GetCurrentReleaseTime() into two simpler functions.
- Avoid appending strings when assembling user-agent string.
- Reorder release info URLs to check the newer URLs earlier.
- Remove trivial low-level functions created solely for the purpose of
  writing tests.
- Remove some unnecessary tests.
2017-10-09 16:12:13 -07:00
Harshavardhana
db6b6e9518 S3 peers should be initialized properly (#5024)
Fixes #4991
2017-10-08 20:23:42 -07:00
Bala FA
d8a11c8f4b fix build failure for go1.9 (#4872) 2017-10-06 17:00:15 -07:00
Harshavardhana
0b546ddfd4 Return errors in PutObject()/PutObjectPart() if input size is -1. (#5015)
The Amazon S3 API expects every incoming stream to have a content-length
set; it was superfluous for us to support an object layer that accepts
streams of unknown size as well. This PR removes such requirements
and explicitly errors out if the input size is less than zero.
2017-10-06 09:38:01 -07:00
Krishna Srinivas
d1712a46a7 Enable ListMultipartUploads and ListObjectParts for FS (#4996)
* Enable ListMultipartUploads and ListObjectParts for FS.
Previously we had disabled ListMultipartUploads and ListObjectParts
to see if any clients break. Docker registry broke. This patch
enables ListMultipartUploads and ListObjectParts; however,
ListMultipartUploads with prefix-based listing is not
supported (which is not used by docker registry anyway),
i.e. ListMultipartUploads will need the exact object name.
2017-10-05 16:08:25 -07:00
Harshavardhana
c46c2a6dd6 Fix healthcheck issue with Docker containers (#5011)
Fixes #5010
2017-10-05 13:52:57 -07:00
Bala FA
88938340b3 remove all dead code (#5019)
Fixes #5012
2017-10-05 12:25:45 -07:00
Krishnan Parthasarathi
13a7033505 Translate s3 gateway errors at object layer (#5006) 2017-10-05 12:24:45 -07:00
Bala FA
670e3e0c8b verify: add minio gateway s3 setup for verification tests (#4976) 2017-10-04 11:16:39 -07:00
Harshavardhana
89d528a4ed Allow CopyObject() in S3 gateway to support metadata (#5000)
Fixes #4924
2017-10-03 10:38:25 -07:00
A. Elleuch
53f3d2fd65 Push max threads to little less than kernel limit (#5001)
Let the Minio server use more threads than allowed by the golang runtime.
This is important to better deal with high load.
2017-10-03 10:37:45 -07:00
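The mechanism behind the commit above is Go's `runtime/debug.SetMaxThreads`, whose default cap is 10000 OS threads. A sketch of the approach, with the proc path and the 90% margin as assumptions rather than minio's exact numbers:

```go
package main

import (
	"fmt"
	"os"
	"runtime/debug"
	"strconv"
	"strings"
)

func main() {
	b, err := os.ReadFile("/proc/sys/kernel/threads-max") // Linux only
	if err != nil {
		return // keep the runtime default on other platforms
	}
	kernelMax, err := strconv.Atoi(strings.TrimSpace(string(b)))
	if err != nil {
		return
	}
	// Stay a little below the kernel limit to leave headroom for
	// other processes on the machine.
	maxThreads := kernelMax * 90 / 100
	debug.SetMaxThreads(maxThreads)
	fmt.Println("max threads set to", maxThreads)
}
```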
A. Elleuch
a4f26aec00 fix: List buckets response should return UTC modtime (#5004) 2017-10-03 10:34:51 -07:00
Bala FA
60cc6184d2 azure: handle list objects properly (#4953)
When removing `minio.sys.tmp` prefixed entries leaves the list of
objects/prefixes empty, keep populating until we get all valid entries.
2017-09-29 12:08:23 -07:00
poornas
ce2d185211 Add maxKeys validation for azure and gcs gateway (#4999)
The gateway implementation of ListObjectsV1 does not validate the maxKeys range.
Raise an InvalidArgument error when maxKeys is negative so that the ListObjects
call is compatible with S3 on all gateways.
2017-09-29 12:07:44 -07:00
Aditya Manthramurthy
b05351c420 Fix CopyObject with metadata for Azure gateway (#4986) 2017-09-29 10:58:40 -07:00
Harshavardhana
b415c600e1 Add bucketName checks for azure and s3 gateway in GetBucketInfo. (#4992)
The gateway interface implementations of GetBucketInfo() under
the azure and s3 gateways did not perform any bucket-name input
validation, resulting in incorrect responses when the tests
are expecting InvalidBucketName.

Fixes #4983
2017-09-28 19:37:09 -07:00
Aditya Manthramurthy
4c9fae90ff Optimize healObject by eliminating extra data passes (#4949) 2017-09-28 15:57:19 -07:00
Aditya Manthramurthy
94670a387e Update Azure SDK (#4985) 2017-09-28 15:23:46 -07:00
Nitish Tiwari
789270af3c Vendorize latest minio-go (#4989)
As minio-go behavior is fixed to treat empty byte arrays and nil byte
arrays in the same manner, these changes are needed in minio to
address the PutObject failure for the S3 Gateway.

Fixes: https://github.com/minio/minio/issues/4974,
https://github.com/minio/minio-java/issues/615
2017-09-28 08:10:38 -07:00
fangyuxiang
a5fbe1e16c fs: optimize multipart clean work (#4944) 2017-09-28 08:09:28 -07:00
Bala FA
3c836b5f34 tests: remove test cases not applicable for docker. (#4951)
When running `make test` in docker, two test cases cause hangs.
This patch fixes the problem by removing those test cases.

Thanks to @ws141 for identifying the problem.
2017-09-27 13:51:26 -07:00
Andreas Auernhammer
02af37a394 optimize memory allocs during reconstruct (#4964)
The reedsolomon library now avoids allocations during reconstruction.
This change exploits that to reduce memory allocations and GC pressure during
healing and reading.
2017-09-27 10:29:42 -07:00
Harshavardhana
4879cd73f8 api: MakeBucket() should honor regions properly. (#4969)
Fixes #4967
2017-09-26 20:13:06 -07:00
Aditya Manthramurthy
b5dc4b5873 Fix CopyObject with metadata for GCS Gateway (#4971) 2017-09-26 11:04:42 -07:00
Harshavardhana
6dcfaa877c Fix signature v2 handling for resource names (#4965)
Previously we were wrongly adding `?` as part
of the resource name; add a test case to check
that this is handled properly.

Thanks to @kannappanr for reproducing this.

Without this change presigned URL generated with following
command would fail with signature mismatch.
```
aws s3 presign s3://testbucket/functional-tests.sh
```
2017-09-26 11:00:07 -07:00
Tamer Fahmy
0bf981278e Provide the correct free block size volume/disk information (#4943)
On *NIX platforms the statfs(2) system call returns a struct containing both the
free blocks in the filesystem (Statfs_t.Bfree) and the free blocks available to
the unprivileged or non-superuser (Statfs_t.Bavail).

The `Bfree` and `Bavail` fields (with `Bfree >= Bavail`) will be set to
different values on e.g. filesystems such as ext4 that reserve a certain
percentage of the filesystem blocks which may only be allocated by administratively
privileged processes.

The calculations for the `Total` disk space need to subtract the difference
between the `Bfree` and `Bavail` fields for it to correctly show the total
available storage space available for unprivileged users.

This implicitly fixes a bug where the `Used = Total - Free` calculation yielded
different (and also incorrect) results for identical contents stored, when only
the sizes of the disks or backing volumes differed (as can be witnessed in the
`Used:` value displayed in the Minio browser).

See:
- https://wiki.archlinux.org/index.php/ext4#Reserved_blocks
- http://man7.org/linux/man-pages/man2/statfs.2.html
- https://man.openbsd.org/statfs
- http://lingrok.org/xref/coreutils/src/df.c#893
2017-09-25 18:46:19 -07:00
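The arithmetic described above, as a small Linux-only sketch using `syscall.Statfs` (field widths per linux/amd64; minio's `pkg/disk` handles this per platform): reserved blocks are excluded from Total so that `Used = Total - Free` is consistent for unprivileged users.

```go
package main

import (
	"fmt"
	"syscall"
)

func main() {
	var s syscall.Statfs_t
	if err := syscall.Statfs("/", &s); err != nil {
		panic(err)
	}
	reserved := s.Bfree - s.Bavail // root-only blocks, e.g. ext4's reserved 5%
	total := (s.Blocks - reserved) * uint64(s.Bsize)
	free := s.Bavail * uint64(s.Bsize)
	used := total - free
	fmt.Printf("total=%d free=%d used=%d bytes\n", total, free, used)
}
```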
Harshavardhana
d3eb5815d9 Avoid DDOS in PutObject() when objectName is '/' and size '0' (#4962)
It can happen that an incoming PutObject() request
has inputs of the following form, e.g.:

 - bucketName is 'testbucket'
 - objectName is '/'

The bucketName exists and was previously created, but there
are no other objects in this bucket. In a situation like
this, parentDirIsObject() goes into an infinite loop.

Verifying whether '/' is an object fails on both backends,
but the resulting `path.Dir('/')` returns `'/'`; this causes
the closure to loop onto itself.

Fixes #4940
2017-09-25 14:47:58 -07:00
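The failure mode is easy to reproduce: `path.Dir("/")` returns `"/"`, so any walk-up loop must treat the root as a terminating state. A minimal demonstration (the helper name is invented; minio's actual closure lives inside parentDirIsObject()):

```go
package main

import (
	"fmt"
	"path"
)

// parentDirs walks up from an object name, stopping explicitly at both
// "." and "/" so that an objectName of "/" cannot spin forever.
func parentDirs(object string) []string {
	var dirs []string
	for dir := path.Dir(object); dir != "." && dir != "/"; dir = path.Dir(dir) {
		dirs = append(dirs, dir)
	}
	return dirs
}

func main() {
	fmt.Println(parentDirs("a/b/c")) // [a/b a]
	fmt.Println(parentDirs("/"))     // [] — terminates instead of looping
}
```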
Andreas Auernhammer
7e6b5bdbb7 remove ReadFileWithVerify from StorageAPI (#4947)
This change removes the ReadFileWithVerify function from the
StorageAPI. ReadFile was basically a redirection to ReadFileWithVerify.
This change removes the redirection and moves the logic of
ReadFileWithVerify directly into ReadFile.
This removes a lot of unnecessary code in all StorageAPI implementations.

Fixes #4946

* review: fix doc and typos
2017-09-25 11:32:56 -07:00
Harshavardhana
4cadb33da2 api/PostPolicy: Allow location header fully qualified URL (#4926)
req.Host is used to construct the final object location.

Fixes #4910
2017-09-24 16:43:21 -07:00
Harshavardhana
c3ff402fcb Fix signature v2 and presigned query unescaping. (#4936)
Simplifies the testing code by using s3signer
package from minio-go library.

Fixes #4927
2017-09-24 14:20:12 -07:00
Davor Kapsa
aceaa1ec48 travis: update go versions (#4945) 2017-09-22 14:05:07 -07:00
Harshavardhana
330f79b40e Remove pre go1.8 code and cleanup (#4933)
We don't need certain go1.7.x custom code anymore, since
we have migrated to go1.8.
2017-09-22 14:03:31 -07:00
Bala FA
6d7df1d1cb Use mc functional tests for verifying build (#4934) 2017-09-20 13:22:05 -07:00
Aditya Manthramurthy
3c0d3f7510 Fix bug in ErasureStorage.HealFile (#4913) 2017-09-20 09:50:27 -07:00
Krishnan Parthasarathi
f7ae3be586 Removing 100MB part and 1TB limitation (#4939)
With https://github.com/minio/minio/pull/4869 the maximum size of a single
multipart upload part is no longer restricted to 100MB. The 1TB maximum
object size limitation is no longer applicable either.
2017-09-20 09:48:48 -07:00
Bala FA
70fec0a53f azure: add stateless gateway support (#4874)
Previously, init multipart upload stored the metadata of an object, which is
used for complete multipart.  This patch makes the azure gateway store the
metadata information of the init multipart object in azure under the name
'minio.sys.tmp/multipart/v1/<UPLOAD-ID>/meta.json' and use this
information on complete multipart.
2017-09-19 16:08:08 -07:00
Nitish Tiwari
fba1669966 Update docker-compose sample file to use the local volume driver so data (#4937)
persists even after the Minio container is stopped/deleted
2017-09-19 12:43:45 -07:00
Harshavardhana
71201273e2 Remove reference to go1.8 issue from homebrew (#4931) 2017-09-19 12:41:19 -07:00
Andreas Auernhammer
79ba4d3f33 refactor ObjectLayer PutObject and PutObjectPart (#4925)
This change refactors the ObjectLayer PutObject and PutObjectPart
functions. Instead of passing an io.Reader and a size to PUT operations,
ObjectLayer expects a HashReader.
A HashReader verifies the MD5 sum (and SHA256 sum if required) of the object.
This change updates all PutObject(Part) calls and removes unnecessary code
in all ObjectLayer implementations.

Fixes #4923
2017-09-19 12:40:27 -07:00
Harshavardhana
f8024cadbb [security] rpc: Do not transfer access/secret key. (#4857)
This is an improvement upon the existing implementation,
avoiding transfer of access and secret keys over
the network. This change only exchanges JWT tokens
generated by an rpc client. Even if the JWT can be
traced over the network on a non-TLS connection, this
change makes sure that we never really expose the
secret key over the network.
2017-09-19 12:37:56 -07:00
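A hedged sketch of the token exchange (the claims, expiry and signing method here are illustrative, not minio's exact choices), using the github.com/dgrijalva/jwt-go package that minio vendored at the time: the secret key stays inside the process and only the signed token crosses the wire.

```go
package main

import (
	"fmt"
	"time"

	jwt "github.com/dgrijalva/jwt-go"
)

// newAuthToken signs a short-lived JWT with the secret key; only the
// token is sent to the peer, never the credentials themselves.
func newAuthToken(accessKey, secretKey string) (string, error) {
	claims := jwt.StandardClaims{
		Subject:   accessKey,
		ExpiresAt: time.Now().UTC().Add(15 * time.Minute).Unix(),
	}
	token := jwt.NewWithClaims(jwt.SigningMethodHS512, claims)
	return token.SignedString([]byte(secretKey))
}

func main() {
	tok, err := newAuthToken("minio", "miniostorage")
	if err != nil {
		panic(err)
	}
	fmt.Println(tok)
}
```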
Evan
f680b8482f Add a snapcraft build badge. (#4805) 2017-09-19 12:36:35 -07:00
Nitish Tiwari
a56f7e2f23 Refactor CONTRIBUTING.md to make it more reader friendly (#4918) 2017-09-18 11:02:45 -07:00
Nitish Tiwari
6d5d49bfb1 Update CLI examples to be in sync with examples used on Minio website (#4920) 2017-09-14 19:17:42 -07:00
fangyuxiang
8e4842b665 fs: multipart clean only trigger once (#4915) 2017-09-14 19:17:26 -07:00
ebozduman
b74ef6d5f4 Fixes the if condition when uploads.json file cannot be found (#4883) 2017-09-14 16:09:12 -07:00
fangyuxiang
9925640da8 fs: multipart clean doesn't work when object name has '/' (#4919) 2017-09-14 16:00:57 -07:00
Andrej Pregl
f45e0a44b8 Change average from int to int64 in order to support 32-bit systems. (#4921) 2017-09-14 10:23:23 -07:00
Krishna Srinivas
3e632a49ee In gateway mode "continuation-token" will not contain "prefix" (#4911)
fixes #4900
2017-09-13 17:27:19 -07:00
Bala FA
1bb3a03099 build: add verify check on make test (#4901)
This patch adds basic tests for FS/XL/distributed setups.
2017-09-12 16:56:33 -07:00
Krishna Srinivas
42b3795304 Set NextContinuationToken in ListObjectsV2 response for gateway (#4908)
fixes #4900
2017-09-12 16:19:58 -07:00
Bala FA
302fcb3b17 azure: handle encryption headers and azure InvalidMetadata error (#4893)
Previously the minio gateway returned an invalid bucket name error for invalid
metadata.  This is fixed by returning BadRequest with 'Unsupported
metadata' in the response.

Fixes #4891
2017-09-12 16:14:41 -07:00
Harshavardhana
b9fc4150f6 Fix preInit logic when mixed disk situations exist. (#4904)
When servers are started simultaneously across multiple
nodes, or when simulating a local setup, one of the servers
in the setup can reach a situation where it observes

 - Some servers are formatted
 - Some servers are unformatted
 - Some servers are offline

The current state machine doesn't handle this correctly. In
this situation, where we have unformatted disks, formatted
disks and offline disks, we do not decisively know the course
of action, so we wait for the offline disks to change their state.

Once the offline disks change their state to one of the
following, we can decisively move forward:

  - nil (formatted disk)
  - errUnformattedDisk
  - or any other error such as errCorruptedDisk.

Fixes #4903
2017-09-12 12:17:44 -07:00
Krishna Srinivas
f66239e82f Expose common S3 headers in CORS setting (#4839)
fixes #4838
2017-09-11 08:15:51 -07:00
Harshavardhana
6c2bc0568b Increase default read/write timeouts from 30sec to 15minutes (#4888)
The default timeout of 30secs is not enough for high-latency
environments; change these values to use 15 minutes instead.

With 30secs, I/O timeouts seem to be quite common; this leads
to pretty much most SDKs and clients reconnecting. This in turn
causes significant performance problems. On a low-latency
interconnect this can make it quite challenging to transfer
large amounts of data. Setting this value to 15 minutes covers
pretty much all known cases.

This PR was tested with `wondershaper <NIC> 20000 20000` by
limiting the network bandwidth to 20Mbit/sec. The default timeout
caused a significant amount of I/O timeouts, leading to
constant retries from the client. This seems to be more common
with tools like rclone and restic, which have high concurrency set
by default. Once the value was fixed to 15 minutes, I/O timeouts
stopped and the client could steadily upload data to the server
even while saturating the network.

Fixes #4670
2017-09-07 11:16:45 -07:00
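In plain net/http terms the change amounts to the following (minio wires up its listeners differently, so treat this as a sketch of the setting, not the actual server code):

```go
package main

import (
	"net/http"
	"time"
)

func main() {
	srv := &http.Server{
		Addr:         ":9000",
		ReadTimeout:  15 * time.Minute, // was 30 * time.Second
		WriteTimeout: 15 * time.Minute, // was 30 * time.Second
		Handler:      http.DefaultServeMux,
	}
	srv.ListenAndServe()
}
```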
poornas
0d154871d5 Admin: Raise error if config and env credentials mismatch (#4870) 2017-09-07 11:16:13 -07:00
Bala FA
189b6682d6 azure: allow parts > 100MiB size to work properly (#4869)
Previously, if any multipart part of size > 100MiB was uploaded, the azure
gateway returned an error.

This patch fixes the issue by splitting a given multipart part into
sub-parts of 100MiB each.  On complete multipart, it fetches all uploaded
azure block ids for each part and performs completion.

Fixes #4868
2017-09-05 16:56:23 -07:00
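The splitting itself is simple ceiling arithmetic; a sketch under the assumption of a 100 MiB Azure block limit (the function name is illustrative):

```go
package main

import "fmt"

const azureBlockSize = 100 * 1024 * 1024 // 100 MiB per Azure block

// subPartCount returns how many sub-blocks a multipart part of the
// given size must be split into: ceil(partSize / azureBlockSize).
func subPartCount(partSize int64) int64 {
	return (partSize + azureBlockSize - 1) / azureBlockSize
}

func main() {
	fmt.Println(subPartCount(250 * 1024 * 1024)) // 3 sub-blocks
}
```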
Harshavardhana
cf479eb401 Move to latest release of minio-go (#4886)
- Region handling can now use region endpoints directly.
- All uploads are streaming; no more large buffers needed.
- Major API overhaul for CopyObject(dst, src)
- Fixes bugs present in existing code for copying
  - metadata replace directive CopyObject
  - PutObjectPart doesn't require md5Sum and sha256
2017-09-05 14:45:22 -07:00
Harshavardhana
72490bf8db Implement proper reConnect logic for amqp notification target. (#4867)
Fixes #4597
2017-09-04 17:45:30 -07:00
Justin Clift
5a73aecb5c fix: Trivial typo in error message (#4878) 2017-09-03 13:53:03 -07:00
Harshavardhana
e26a706dff Ignore reservedBucket checks for net/rpc requests (#4884)
All `net/rpc` requests go to `/minio`, so the existing
generic handler for the reserved bucket check would essentially
send errors erroneously, leading distributed setups to
wait infinitely.

For `net/rpc` requests alone we should skip this check and
allow resource bucket names under `/minio`.
2017-09-01 12:16:54 -07:00
Andrej Pregl
9e9c7b4f22 Lower object name length when running in docker to support aufs. (#4879) 2017-09-01 11:00:47 -07:00
Krishna Srinivas
ff8e2b5b4f Init HTTP client and transport for azure sdk (#4871)
Fixes segfault
2017-08-31 17:19:03 -07:00
Frank Wessels
61e0b1454a Add support for timeouts for locks (#4377) 2017-08-31 14:43:59 -07:00
Frank Wessels
93f126364e Updated version of klauspost/reedsolomon with NEON support for ARM (#4865) 2017-08-30 09:49:00 -07:00
Harshavardhana
6dca044ea8 fs: Convert repeated code in rwpool.Open() into a single function. (#4864)
Refer https://github.com/minio/minio/issues/4658 for more information.
2017-08-30 09:48:19 -07:00
Harshavardhana
9e9faf7b53 Support source compilation on s390 and ppc64le (#4859)
NOTE: This doesn't validate that minio will work fine
on these platforms, nor is it tested. Since we do not validate
on these architectures, this is to be treated as just a build fix.

Fixes #4858
2017-08-30 09:45:36 -07:00
Nitish Tiwari
2bca51ab2c Add steps to run GCS gateway on Kubernetes via YAML files (#4819) 2017-08-28 12:58:52 -07:00
Leo Arias
34e780e690 add the mount-observe plug to the snap (#4853) 2017-08-28 11:40:26 -07:00
Harshavardhana
6cab6d802d api: Fix the conditional to check for reserved buckets. (#4856)
The current code used a logical `and`; instead we should do an `or`.

Fixes https://github.com/minio/mc/issues/2231
2017-08-28 11:39:48 -07:00
Andreas Auernhammer
ea65350308 add vscode config files to gitignore (#4855) 2017-08-27 13:44:39 -07:00
Harshavardhana
1bb9d49eaa fs: ListObjects() was reading ETag at wrong offsets (#4846)
The current code was just using io.ReadAll() on an fd()
which might have moved underneath due to a concurrent
read operation; a subsequent read will result in EOF.
We should always seek back and read again. pread()
is allowed on all platforms; use io.SectionReader to
read from the beginning of the file.

Fixes #4842
2017-08-23 17:59:14 -07:00
Harshavardhana
db5af1b126 fix: tests error conditions should be used properly. (#4833) 2017-08-23 17:58:52 -07:00
Andreas Auernhammer
b233345f19 remove bcrypt code from code-base (#4844) (#4845)
Bcrypt is not necessary and was not used properly. This change
replaces the whole bcrypt hash computation with a constant-time
compare and removes bcrypt from the code base.
2017-08-23 15:59:37 -07:00
Leo Arias
07ccf4c0d0 Add the install instructions for the snap (#4843) 2017-08-23 15:59:13 -07:00
Aditya Manthramurthy
77d2870f5b Fix validation in PutBucketNotification handler (#4841)
Fixes #4813

If a TopicConfiguration element or CloudFunction element is found in
the configuration submitted to the PutBucketNotification API, a BadRequest
error is returned.
2017-08-23 15:58:02 -07:00
Andreas Auernhammer
3a73c675a6 restrict max size of http header and user metadata (#4634) (#4680)
S3 only allows http headers with a size of 8 KB and user-defined metadata
with a size of 2 KB. This change adds a new API error and returns this
error to clients which send too-large http requests.

Fixes #4634
2017-08-22 16:53:35 -07:00
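A rough sketch of such a check (the constant names and exact byte accounting are assumptions; the real handler also maps the failure to a proper S3 error code):

```go
package main

import (
	"fmt"
	"net/http"
	"strings"
)

const (
	maxHeaderSize       = 8 * 1024 // 8 KB for all headers
	maxUserMetadataSize = 2 * 1024 // 2 KB for x-amz-meta-* headers
)

// checkHeaderSizes reports whether a request stays within the S3 limits
// for total header size and user-defined metadata size.
func checkHeaderSizes(h http.Header) bool {
	headerSize, metadataSize := 0, 0
	for key, values := range h {
		for _, v := range values {
			n := len(key) + len(v)
			headerSize += n
			if strings.HasPrefix(strings.ToLower(key), "x-amz-meta-") {
				metadataSize += n
			}
		}
	}
	return headerSize <= maxHeaderSize && metadataSize <= maxUserMetadataSize
}

func main() {
	h := http.Header{}
	h.Set("X-Amz-Meta-Note", strings.Repeat("a", 3*1024))
	fmt.Println(checkHeaderSizes(h)) // false — metadata exceeds 2 KB
}
```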
Bala FA
b694c1a4d7 fix: bufconn and listener tests for megacheck (#4827)
Fixes #4824
2017-08-20 12:25:08 -07:00
Harshavardhana
2e6ee68409 fix: [minor] Avoid unnecessary typecasting. (#4828)
We don't need to typecast identifiers from
their base type to the same type again. This
is not a bug, and the compiler is fine to skip
it, but it is better to avoid if not needed.
2017-08-18 11:45:16 -07:00
Bala FA
7505bac037 tests: create temporary dir/files rather than in /usr directory. (#4820)
Fixes #4816
2017-08-18 11:44:54 -07:00
Harshavardhana
9dca0c1889 fix: [minor] functions should take inputs with required functionality. (#4823) 2017-08-17 13:49:57 -07:00
Nitish Tiwari
69555f1224 Update Docker commands to use /data as example directory (#4825)
/data as the default makes it easy to understand and shortens
the example Minio command for Docker.
2017-08-17 10:56:25 -07:00
Harshavardhana
879cef37a1 Fail to start server if detected cross-device mounts. (#4807)
Fixes #4764
2017-08-15 15:10:50 -07:00
wd256
3d21119ec8 Provide 200 response with per object error listing on access denied for delete multiple object request (#4817) 2017-08-15 12:49:31 -07:00
Frank Wessels
a2f2044528 Minor corrections in comments for xl utils (#4815) 2017-08-14 18:09:29 -07:00
Andreas Auernhammer
85fcee1919 erasure: simplify XL backend operations (#4649) (#4758)
This change provides new implementations of the XL backend operations:
 - create file
 - read   file
 - heal   file
Further this change adds table based tests for all three operations.

This affects also the bitrot algorithm integration. Algorithms are now
integrated in an idiomatic way (like crypto.Hash).
Fixes #4696
Fixes #4649
Fixes #4359
2017-08-14 18:08:42 -07:00
Leo Arias
617f2394fb Hardcode snap version while a store bug is fixed (#4806)
* Hardcode snap version while a store bug is fixed
This is a workaround for a failure in the store, which limits the version to 32 characters.
https://forum.snapcraft.io/t/versions-can-be-at-most-32-characters/1642
Capitalize the summary
* add the snapcraft dirs to gitignore; bring back settings.json to ignore
2017-08-14 11:12:46 -07:00
Bala FA
1729e82361 tests: use port '0' for auto-detecting free port. (#4803)
Fixes #4774
2017-08-14 11:11:38 -07:00
Harshavardhana
58633a383a Deprecate intel-32, arm64, arm support for minio builds. (#4811)
It was decided that we will be deprecating ARM support
for minio builds. ARM users should simply compile from source.

Additionally, the 32-bit versions for Linux and Windows, and the
FreeBSD (64-bit) build, are deprecated.
2017-08-14 11:08:58 -07:00
Nitish Tiwari
d4b107adf4 Retry name lookup for kubernetes and docker swarm environment (#4800)
Wait for remote hosts to resolve, instead of failing on the first host
resolution error, when running in a Kubernetes or Docker environment.

Note that

- waiting is based on an exponential back-off mechanism
- if run as a binary, the server fails if a remote host is not resolvable

This is needed because in orchestration platforms like Kubernetes, remote
hosts are started sequentially and all the hosts are not up initially,
though they are expected to come up in a short time frame.
It is difficult to identify a cap on the waiting time due to the
non-deterministic nature of infrastructure platforms, so the server waits
infinitely for the hosts to come up, while logging the error messages to
the console.

Fixes: https://github.com/minio/minio/issues/4669
2017-08-13 13:34:10 -07:00
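A sketch of the wait loop described above, under assumptions (the host name is hypothetical, and the cap and multiplier are illustrative): retry DNS resolution with exponential back-off, looping forever as the commit says the server does inside orchestration platforms.

```go
package main

import (
	"fmt"
	"net"
	"time"
)

// waitForHost blocks until the host resolves, backing off exponentially
// between attempts (capped), and logging each failure to the console.
func waitForHost(host string) {
	delay := time.Second
	for {
		if _, err := net.LookupHost(host); err == nil {
			return // host is resolvable, proceed with startup
		}
		fmt.Printf("waiting for %s to resolve, retrying in %v\n", host, delay)
		time.Sleep(delay)
		if delay < 30*time.Second { // cap the back-off
			delay *= 2
		}
	}
}

func main() {
	waitForHost("minio-1.minio.default.svc.cluster.local") // hypothetical peer
}
```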
Nitish Tiwari
53f84d6084 Report healthy status for initial 120s, then switch to healthcheck (#4799)
Swarm routes traffic only to containers that report healthy status,
while Minio in distributed mode needs to talk to other peers before
it can respond to the healthcheck probe. As the Minio containers are not
able to talk to each other, distributed Minio does not get started
on Docker Swarm.

With this PR, the Minio healthcheck reports healthy status for the initial
120s, enough for distributed Minio to start. After that the normal
healthcheck resumes. Also changed the healthcheck method name
in accordance with the Google shell style guide.

Fixes: https://github.com/minio/minio/issues/4761
2017-08-12 19:26:35 -07:00
Harshavardhana
d864e00e24 posix: Deprecate custom removeAll/mkdirAll implementations. (#4808)
Since go1.8, os.RemoveAll and os.MkdirAll both support long
path names, i.e. UNC paths on windows. The code we were carrying
was directly borrowed from the `pkg/os` package and doesn't need
to be in our repo anymore. As a side effect, this also
addresses our code-coverage issue.

Refer #4658
2017-08-12 19:25:43 -07:00
Harshavardhana
b69aa9c4d0 fs: Return errVolumeNotEmpty properly if path not empty. (#4794)
Refer #4770
2017-08-12 19:24:20 -07:00
Harshavardhana
d20cdac304 Move all Dockerfiles to alpine-3.6 (#4797)
Refer #4759.

Fix this to avoid issues like the one below

```
docker run --rm minio/minio:edge-armhf version
minio: <ERROR> .... is not compiled by Go >= 1.8.3. ... recompile...
```
2017-08-12 19:17:37 -07:00
Frank Wessels
fffe4ac7e6 Prevent unnecessary verification of parity blocks while reading (#4683)
* Prevent unnecessary verification of parity blocks while reading an
  erasure-coded file.
* Update klauspost/reedsolomon and only reconstruct data blocks while
  reading (prevent unnecessary parity block reconstruction).
* Remove verification of (all) reconstructed data and parity blocks, since
  in our case we are protected by bit-rot protection. And even if the
  verification were to fail (essentially impossible), there is no way to
  definitively say whether the data is still correct or not, so this call
  makes no sense for our use case.
2017-08-11 18:25:46 -07:00
Frank Wessels
98b62cbec8 Implement an offline mode for a distributed node (#4646)
Implement an offline mode for remote storage to cache the
offline status of a node in order to prevent network calls
that are bound to fail. After a time interval an attempt
will be made to restore the connection and mark the node
as online if successful.

Fixes #4183
2017-08-11 11:49:35 -07:00
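A rough sketch of the caching idea in the commit above (the types and interval are invented for illustration): remember when a node last failed and fail fast until the retry interval elapses, instead of issuing network calls that are bound to fail.

```go
package main

import (
	"errors"
	"fmt"
	"sync"
	"time"
)

var errNodeOffline = errors.New("node is offline")

type remoteNode struct {
	mu         sync.Mutex
	offlineAt  time.Time     // zero value means the node is considered online
	retryAfter time.Duration // how long to fail fast before probing again
	call       func() error  // the real network operation
}

func (n *remoteNode) Do() error {
	n.mu.Lock()
	if !n.offlineAt.IsZero() && time.Since(n.offlineAt) < n.retryAfter {
		n.mu.Unlock()
		return errNodeOffline // fail fast, skip the network call
	}
	n.mu.Unlock()

	err := n.call()
	n.mu.Lock()
	defer n.mu.Unlock()
	if err != nil {
		n.offlineAt = time.Now() // cache the offline status
	} else {
		n.offlineAt = time.Time{} // back online
	}
	return err
}

func main() {
	n := &remoteNode{
		retryAfter: 5 * time.Second,
		call:       func() error { return errors.New("dial tcp: timeout") },
	}
	fmt.Println(n.Do()) // dial tcp: timeout (real call attempted)
	fmt.Println(n.Do()) // node is offline (cached, no network call)
}
```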
Dee Koder
1978b9d8f9 Prevent minio server starting in standalone erasure mode for wrong inputs. (#4700)
It is possible at times, due to a typo, that when distributed mode was intended
a user might end up starting standalone erasure mode, causing confusion.
Add code to check this based on some standard heuristic guesswork and
report an error to the user.

Fixes #4686
2017-08-11 11:47:28 -07:00
Harshavardhana
3544e5ad01 fs: Fix Shutdown() behavior and handle tests properly. (#4796)
Fixes #4795
2017-08-10 14:11:57 -07:00
Harshavardhana
e7cdd8f02c fs: Avoid non-idempotent code flow in ListBuckets() (#4798)
Under the call flow

```
Readdir
   +
   |
   |
   | path-entry
   |
   |
   v
StatDir
```

Existing code was written in a manner where, say,
if a bucket/top-level directory was indeed deleted
between Readdir() and StatDir(), we would
ignore certain errors. This is not a plausible
situation and might not happen in almost all
practical cases. We do not have to look for
or interpret these errors returned by StatDir();
instead we can just collect the successful
values and return them to the client. We do not
need to prematurely decide on bucket access;
we just let the filesystem decide subsequently for
real I/O operations.

Refer #4658
2017-08-10 13:36:11 -07:00
Leo Arias
3ff09b9b72 Add the packaging metadata to build the minio snap (#4749)
* Add the packaging metadata to build the minio snap.
2017-08-10 09:39:30 -07:00
Nitish
b3b42c72a9 Update orchestration examples to latest release 2017-08-08 17:15:46 -07:00
Aditya Manthramurthy
32da1aa9d6 XL: Simplify heal-format operations
This is in preparation for updated admin heal API.

* Improve case analysis of healFormatXL() - fixes a case where disks
  could have unhandled errors.

* Simplify healFormatXLFreshDisks() and healFormatXLCorruptedDisks()
  to share more code and handle fewer cases for improved simplicity
  and reduced code repetition.

* Fix test cases.
2017-08-08 17:14:24 -07:00
Andreas Auernhammer
b10fa507b2 set http transport config for gateway (#4765)
This change sets the http config for the minio client used by the
minio server in gateway mode.

Fixes #4765
2017-08-08 16:23:52 -07:00
Andreas Auernhammer
1e8925cea0 add settings.json to .gitignore (#4786)
settings.json is the VS code workspace settings file
2017-08-08 13:06:57 -07:00
Harshavardhana
f346ca44f0 config: Avoid stale credentials in memory. (#4466) 2017-08-08 12:14:32 -07:00
A. Elleuch
6f7ace3d3e Honor overriding response headers for HEAD (#4784)
Though not clearly mentioned in the S3 specification, we should override
response headers for presigned HEAD requests as we do for GET.
2017-08-08 11:04:04 -07:00
Harshavardhana
812142f007 Fix typo in webhook docs (#4787) 2017-08-07 21:31:17 -07:00
Andrej Pregl
fa52d491c5 Added support for macOS in TestNewHTTPListener (#4782) 2017-08-07 16:02:34 -07:00
Harshavardhana
283ad7e5e5 browser: update ui-assets for new changes. (#4780) 2017-08-07 12:48:51 -07:00
poornas
748b1d6495 azure: For container access type private treat as no policy set. (#4729) 2017-08-06 22:24:40 -07:00
A. Elleuch
b4dc6df35c go1.8: Changes to support golang 1.8 (#4759)
QuirkConn is added to replace net.Conn as a workaround to a golang bug:
https://github.com/golang/go/issues/21133
2017-08-06 11:27:33 -07:00
Aditya Manthramurthy
218049300c Fix testcase to not overflow int type (#4739)
The int type is only 32 bits wide on 32-bit CPUs.

Set the type in the tests to int32 to avoid setting problematic
maxKeys values.

Fixes #4738
2017-08-05 02:36:47 -07:00
Aaron Walker
5db533c024 bucket-policy: Add IPAddress/NotIPAddress conditions support (#4736) 2017-08-05 01:00:05 -07:00
985 changed files with 142898 additions and 81292 deletions


@@ -24,5 +24,6 @@
- [ ] My code follows the code style of this project.
- [ ] My change requires a change to the documentation.
- [ ] I have updated the documentation accordingly.
- [ ] I have added tests to cover my changes.
- [ ] I have added unit tests to cover my changes.
- [ ] I have added/updated functional tests in [mint](https://github.com/minio/mint). (If yes, add `mint` PR # here: )
- [ ] All new and existing tests passed.

.gitignore (vendored)

@@ -9,10 +9,19 @@ site/
/.idea/
/Minio.iml
**/access.log
build
build/
vendor/**/*.js
vendor/**/*.json
release
.DS_Store
*.syso
coverage.txt
.vscode/
.snap
*.tar.bz2
parts/
prime/
snap/.snapcraft/
stage/
.sia_temp/


@@ -13,30 +13,17 @@ os:
env:
- ARCH=x86_64
- ARCH=i686
script:
## Run all the tests
- make
- diff -au <(gofmt -d cmd) <(printf "")
- diff -au <(gofmt -d pkg) <(printf "")
- diff -au <(gofmt -s -d cmd) <(printf "")
- diff -au <(gofmt -s -d pkg) <(printf "")
- make test GOFLAGS="-timeout 15m -race -v"
- make coverage
# Refer https://blog.hypriot.com/post/setup-simple-ci-pipeline-for-arm-images/
# push image
- >
if [ "$TRAVIS_BRANCH" == "master" ] && [ "$TRAVIS_PULL_REQUEST" == "false" ] && [ "$ARCH" == "x86_64" ]; then
docker run --rm --privileged multiarch/qemu-user-static:register --reset
docker build -t minio/minio:edge-armhf . -f Dockerfile.armhf
docker build -t minio/minio:edge-aarch64 . -f Dockerfile.aarch64
docker login -u="$DOCKER_USER" -p="$DOCKER_PASS"
docker push minio/minio:edge-armhf
docker push minio/minio:edge-aarch64
fi
after_success:
- bash <(curl -s https://codecov.io/bash)
go:
- 1.7.5
- 1.9.4


@@ -1,9 +1,14 @@
### Install Golang
# Minio Contribution Guide [![Slack](https://slack.minio.io/slack?type=svg)](https://slack.minio.io) [![Go Report Card](https://goreportcard.com/badge/minio/minio)](https://goreportcard.com/report/minio/minio) [![Docker Pulls](https://img.shields.io/docker/pulls/minio/minio.svg?maxAge=604800)](https://hub.docker.com/r/minio/minio/) [![codecov](https://codecov.io/gh/minio/minio/branch/master/graph/badge.svg)](https://codecov.io/gh/minio/minio)
If you do not have a working Golang environment setup please follow [Golang Installation Guide](https://docs.minio.io/docs/how-to-install-golang).
``Minio`` community welcomes your contribution. To make the process as seamless as possible, we recommend you read this contribution guide.
## Development Workflow
Start by forking the Minio GitHub repository, make changes in a branch and then send a pull request. We encourage pull requests to discuss code changes. Here are the steps in details:
### Setup your Minio Github Repository
Fork [Minio upstream](https://github.com/minio/minio/fork) source repository to your own personal repository. Copy the URL for minio from your personal github repo (you will need it for the `git clone` command below).
Fork [Minio upstream](https://github.com/minio/minio/fork) source repository to your own personal repository. Copy the URL of your Minio fork (you will need it for the `git clone` command below).
```sh
$ mkdir -p $GOPATH/src/github.com/minio
$ cd $GOPATH/src/github.com/minio
@@ -11,61 +16,56 @@ $ git clone <paste saved URL for personal forked minio repo>
$ cd minio
```
### Compiling Minio from source
Minio uses a ``Makefile`` to wrap around some of the redundant checks done through the command line.
```sh
$ make
Checking if proper environment variables are set.. Done
...
Checking dependencies for Minio.. Done
Installed govet
Building Libraries
...
...
```
### Setting up git remote as ``upstream``
### Set up git remote as ``upstream``
```sh
$ cd $GOPATH/src/github.com/minio/minio
$ git remote add upstream https://github.com/minio/minio
$ git fetch upstream
$ git merge upstream/master
...
...
$ make
Checking if proper environment variables are set.. Done
...
Checking dependencies for Minio.. Done
Installed govet
Building Libraries
...
```
### Developer Guidelines
``Minio`` community welcomes your contribution. To make the process as seamless as possible, we ask for the following:
* Go ahead and fork the project and make your changes. We encourage pull requests to discuss code changes.
- Fork it
- Create your feature branch (git checkout -b my-new-feature)
- Commit your changes (git commit -am 'Add some feature')
- Push to the branch (git push origin my-new-feature)
- Create new Pull Request
### Create your feature branch
Before making code changes, make sure you create a separate branch for these changes
* If you have additional dependencies for ``Minio``, ``Minio`` manages its dependencies using [govendor](https://github.com/kardianos/govendor)
- Run `go get foo/bar`
- Edit your code to import foo/bar
- Run `make pkg-add PKG=foo/bar` from top-level directory
```
$ git checkout -b my-new-feature
```
* If you have dependencies for ``Minio`` which needs to be removed
- Edit your code to not import foo/bar
- Run `make pkg-remove PKG=foo/bar` from top-level directory
### Test Minio server changes
After your code changes, make sure
* When you're ready to create a pull request, be sure to:
- Have test cases for the new code. If you have questions about how to do it, please ask in your pull request.
- Run `make verifiers`
- Squash your commits into a single commit. `git rebase -i`. It's okay to force update your pull request.
- Make sure `go test -race ./...` and `go build` completes.
- To add test cases for the new code. If you have questions about how to do it, please ask on our [Slack](slack.minio.io) channel.
- To run `make verifiers`
- To squash your commits into a single commit. `git rebase -i`. It's okay to force update your pull request.
- To run `go test -race ./...` and `go build` completes.
* Read [Effective Go](https://github.com/golang/go/wiki/CodeReviewComments) article from Golang project
- `Minio` project is fully conformant with Golang style
- if you happen to observe offending code, please feel free to send a pull request
### Commit changes
After verification, commit your changes. This is a [great post](https://chris.beams.io/posts/git-commit/) on how to write useful commit messages
```
$ git commit -am 'Add some feature'
```
### Push to the branch
Push your locally committed changes to the remote origin (your fork)
```
$ git push origin my-new-feature
```
### Create a Pull Request
Pull requests can be created via GitHub. Refer to [this document](https://help.github.com/articles/creating-a-pull-request/) for detailed steps on how to create a pull request. After a Pull Request gets peer reviewed and approved, it will be merged.
## FAQs
### How does ``Minio`` manage dependencies?
``Minio`` manages its dependencies using [govendor](https://github.com/kardianos/govendor). To add a dependency
- Run `go get foo/bar`
- Edit your code to import foo/bar
- Run `make pkg-add PKG=foo/bar` from top-level directory
To remove a dependency
- Edit your code to not import foo/bar
- Run `make pkg-remove PKG=foo/bar` from top-level directory
### What are the coding guidelines for Minio?
``Minio`` is fully conformant with Golang style. Refer: [Effective Go](https://github.com/golang/go/wiki/CodeReviewComments) article from Golang project. If you observe offending code, please feel free to send a pull request or ping us on [Slack](slack.minio.io).


@@ -1,16 +1,21 @@
FROM alpine:3.5
FROM golang:1.9.4-alpine3.6
MAINTAINER Minio Inc <dev@minio.io>
ENV GOPATH /go
ENV PATH $PATH:$GOPATH/bin
ENV CGO_ENABLED 0
ENV MINIO_UPDATE off
ENV MINIO_ACCESS_KEY_FILE=access_key \
MINIO_SECRET_KEY_FILE=secret_key
WORKDIR /go/src/github.com/minio/
COPY dockerscripts/docker-entrypoint.sh dockerscripts/healthcheck.sh /usr/bin/
RUN \
apk add --no-cache ca-certificates && \
apk add --no-cache --virtual .build-deps git go musl-dev && \
apk add --no-cache ca-certificates curl && \
apk add --no-cache --virtual .build-deps git && \
echo 'hosts: files mdns4_minimal [NOTFOUND=return] dns mdns4' >> /etc/nsswitch.conf && \
go get -v -d github.com/minio/minio && \
cd /go/src/github.com/minio/minio && \
@@ -19,12 +24,11 @@ RUN \
EXPOSE 9000
COPY buildscripts/docker-entrypoint.sh /usr/bin/
RUN chmod +x /usr/bin/docker-entrypoint.sh
ENTRYPOINT ["/usr/bin/docker-entrypoint.sh"]
VOLUME ["/export"]
HEALTHCHECK --interval=30s --timeout=5s \
CMD /usr/bin/healthcheck.sh
CMD ["minio"]


@@ -1,30 +0,0 @@
FROM resin/aarch64-alpine:3.5
MAINTAINER Minio Inc <dev@minio.io>
ENV GOPATH /go
ENV PATH $PATH:$GOPATH/bin
ENV CGO_ENABLED 0
WORKDIR /go/src/github.com/minio/
RUN \
apk add --no-cache ca-certificates && \
apk add --no-cache --virtual .build-deps git go musl-dev && \
echo 'hosts: files mdns4_minimal [NOTFOUND=return] dns mdns4' >> /etc/nsswitch.conf && \
go get -v -d github.com/minio/minio && \
cd /go/src/github.com/minio/minio && \
go install -v -ldflags "$(go run buildscripts/gen-ldflags.go)" && \
rm -rf /go/pkg /go/src /usr/local/go && apk del .build-deps
EXPOSE 9000
COPY buildscripts/docker-entrypoint.sh /usr/bin/
RUN chmod +x /usr/bin/docker-entrypoint.sh
ENTRYPOINT ["/usr/bin/docker-entrypoint.sh"]
VOLUME ["/export"]
CMD ["minio"]


@@ -1,30 +1,35 @@
FROM resin/armhf-alpine:3.5
FROM golang:1.9.4-alpine3.6
MAINTAINER Minio Inc <dev@minio.io>
ENV GOPATH /go
ENV PATH $PATH:$GOPATH/bin
ENV CGO_ENABLED 0
ENV MINIO_UPDATE off
ENV MINIO_ACCESS_KEY_FILE=access_key \
MINIO_SECRET_KEY_FILE=secret_key
WORKDIR /go/src/github.com/minio/
COPY dockerscripts/docker-entrypoint.sh dockerscripts/healthcheck.sh /usr/bin/
COPY . /go/src/github.com/minio/minio
RUN \
apk add --no-cache ca-certificates && \
apk add --no-cache --virtual .build-deps git go musl-dev && \
apk add --no-cache ca-certificates curl && \
apk add --no-cache --virtual .build-deps git && \
echo 'hosts: files mdns4_minimal [NOTFOUND=return] dns mdns4' >> /etc/nsswitch.conf && \
go get -v -d github.com/minio/minio && \
cd /go/src/github.com/minio/minio && \
go install -v -ldflags "$(go run buildscripts/gen-ldflags.go)" && \
rm -rf /go/pkg /go/src /usr/local/go && apk del .build-deps
EXPOSE 9000
COPY buildscripts/docker-entrypoint.sh /usr/bin/
RUN chmod +x /usr/bin/docker-entrypoint.sh
ENTRYPOINT ["/usr/bin/docker-entrypoint.sh"]
VOLUME ["/export"]
HEALTHCHECK --interval=30s --timeout=5s \
CMD /usr/bin/healthcheck.sh
CMD ["minio"]


@@ -1,8 +1,12 @@
FROM alpine:3.5
FROM alpine:3.6
MAINTAINER Minio Inc <dev@minio.io>
COPY buildscripts/docker-entrypoint.sh buildscripts/healthcheck.sh /usr/bin/
COPY dockerscripts/docker-entrypoint.sh dockerscripts/healthcheck.sh /usr/bin/
ENV MINIO_UPDATE off
ENV MINIO_ACCESS_KEY_FILE=access_key \
MINIO_SECRET_KEY_FILE=secret_key
RUN \
apk add --no-cache ca-certificates && \


@@ -1,25 +0,0 @@
FROM resin/aarch64-alpine:3.5
MAINTAINER Minio Inc <dev@minio.io>
COPY buildscripts/docker-entrypoint.sh buildscripts/healthcheck.sh /usr/bin/
RUN \
apk add --no-cache ca-certificates && \
apk add --no-cache --virtual .build-deps curl && \
echo 'hosts: files mdns4_minimal [NOTFOUND=return] dns mdns4' >> /etc/nsswitch.conf && \
curl https://dl.minio.io/server/minio/release/linux-arm64/minio > /usr/bin/minio && \
chmod +x /usr/bin/minio && \
chmod +x /usr/bin/docker-entrypoint.sh && \
chmod +x /usr/bin/healthcheck.sh
EXPOSE 9000
ENTRYPOINT ["/usr/bin/docker-entrypoint.sh"]
VOLUME ["/export"]
HEALTHCHECK --interval=30s --timeout=5s \
CMD /usr/bin/healthcheck.sh
CMD ["minio"]


@@ -1,25 +0,0 @@
FROM resin/armhf-alpine:3.5
MAINTAINER Minio Inc <dev@minio.io>
COPY buildscripts/docker-entrypoint.sh buildscripts/healthcheck.sh /usr/bin/
RUN \
apk add --no-cache ca-certificates && \
apk add --no-cache --virtual .build-deps curl && \
echo 'hosts: files mdns4_minimal [NOTFOUND=return] dns mdns4' >> /etc/nsswitch.conf && \
curl https://dl.minio.io/server/minio/release/linux-arm/minio > /usr/bin/minio && \
chmod +x /usr/bin/minio && \
chmod +x /usr/bin/docker-entrypoint.sh && \
chmod +x /usr/bin/healthcheck.sh
EXPOSE 9000
ENTRYPOINT ["/usr/bin/docker-entrypoint.sh"]
VOLUME ["/export"]
HEALTHCHECK --interval=30s --timeout=5s \
CMD /usr/bin/healthcheck.sh
CMD ["minio"]


@@ -7,19 +7,19 @@ BUILD_LDFLAGS := '$(LDFLAGS)'
all: build
checks:
@echo "Check deps"
@echo "Checking dependencies"
@(env bash $(PWD)/buildscripts/checkdeps.sh)
@echo "Checking project is in GOPATH"
@echo "Checking for project in GOPATH"
@(env bash $(PWD)/buildscripts/checkgopath.sh)
getdeps: checks
getdeps:
@echo "Installing golint" && go get -u github.com/golang/lint/golint
@echo "Installing gocyclo" && go get -u github.com/fzipp/gocyclo
@echo "Installing deadcode" && go get -u github.com/remyoudompheng/go-misc/deadcode
@echo "Installing misspell" && go get -u github.com/client9/misspell/cmd/misspell
@echo "Installing ineffassign" && go get -u github.com/gordonklaus/ineffassign
verifiers: getdeps vet fmt lint cyclo spelling
verifiers: getdeps vet fmt lint cyclo deadcode spelling
vet:
@echo "Running $@"
@@ -46,7 +46,8 @@ cyclo:
@${GOPATH}/bin/gocyclo -over 100 pkg
deadcode:
@${GOPATH}/bin/deadcode
@echo "Running $@"
@${GOPATH}/bin/deadcode -test $(shell go list ./...) || true
spelling:
@${GOPATH}/bin/misspell -error `find cmd/`
@@ -56,18 +57,18 @@ spelling:
# Builds minio, runs the verifiers then runs the tests.
check: test
test: verifiers build
@echo "Running all minio testing"
@go test $(GOFLAGS) .
@go test $(GOFLAGS) github.com/minio/minio/cmd...
@go test $(GOFLAGS) github.com/minio/minio/pkg...
@echo "Running unit tests"
@go test $(GOFLAGS) ./...
@echo "Verifying build"
@(env bash $(PWD)/buildscripts/verify-build.sh)
coverage: build
@echo "Running all coverage for minio"
@./buildscripts/go-coverage.sh
@(env bash $(PWD)/buildscripts/go-coverage.sh)
# Builds minio locally.
build:
@echo "Building minio to $(PWD)/minio ..."
build: checks
@echo "Building minio binary to './minio'"
@CGO_ENABLED=0 go build --ldflags $(BUILD_LDFLAGS) -o $(PWD)/minio
pkg-add:
@@ -87,18 +88,13 @@ pkg-list:
# Builds minio and installs it to $GOPATH/bin.
install: build
@echo "Installing minio at $(GOPATH)/bin/minio ..."
@echo "Installing minio binary to '$(GOPATH)/bin/minio'"
@cp $(PWD)/minio $(GOPATH)/bin/minio
@echo "Check 'minio -h' for help."
release: verifiers
@MINIO_RELEASE=RELEASE ./buildscripts/build.sh
experimental: verifiers
@MINIO_RELEASE=EXPERIMENTAL ./buildscripts/build.sh
@echo "Installation successful. To learn more, try \"minio --help\"."
clean:
@echo "Cleaning up all the generated files"
@find . -name '*.test' | xargs rm -fv
@rm -rf build
@rm -rf release
@rm -rvf minio
@rm -rvf build
@rm -rvf release


@@ -1,4 +1,5 @@
# Minio Quickstart Guide [![Slack](https://slack.minio.io/slack?type=svg)](https://slack.minio.io) [![Go Report Card](https://goreportcard.com/badge/minio/minio)](https://goreportcard.com/report/minio/minio) [![Docker Pulls](https://img.shields.io/docker/pulls/minio/minio.svg?maxAge=604800)](https://hub.docker.com/r/minio/minio/) [![codecov](https://codecov.io/gh/minio/minio/branch/master/graph/badge.svg)](https://codecov.io/gh/minio/minio)
# Minio Quickstart Guide
[![Slack](https://slack.minio.io/slack?type=svg)](https://slack.minio.io) [![Go Report Card](https://goreportcard.com/badge/minio/minio)](https://goreportcard.com/report/minio/minio) [![Docker Pulls](https://img.shields.io/docker/pulls/minio/minio.svg?maxAge=604800)](https://hub.docker.com/r/minio/minio/) [![codecov](https://codecov.io/gh/minio/minio/branch/master/graph/badge.svg)](https://codecov.io/gh/minio/minio)
Minio is an object storage server released under Apache License v2.0. It is compatible with Amazon S3 cloud storage service. It is best suited for storing unstructured data such as photos, videos, log files, backups and container / VM images. Size of an object can range from a few KBs to a maximum of 5TB.
@@ -8,31 +9,29 @@ Minio server is light enough to be bundled with the application stack, similar t
### Stable
```
docker pull minio/minio
docker run -p 9000:9000 minio/minio server /export
docker run -p 9000:9000 minio/minio server /data
```
### Edge
```
docker pull minio/minio:edge
docker run -p 9000:9000 minio/minio:edge server /export
docker run -p 9000:9000 minio/minio:edge server /data
```
Please visit Minio Docker quickstart guide for more [here](https://docs.minio.io/docs/minio-docker-quickstart-guide)
## macOS
### Homebrew
Install minio packages using [Homebrew](http://brew.sh/)
```sh
brew install minio/stable/minio
minio server ~/Photos
minio server /data
```
#### Note
If you previously installed minio using `brew install minio` then reinstall minio from `minio/stable/minio` official repo. Homebrew builds are unstable due to golang 1.8 bugs.
```
brew uninstall minio
> NOTE: If you previously installed minio using `brew install minio` then it is recommended that you reinstall minio from `minio/stable/minio` official repo instead.
```sh
brew uninstall minio
brew install minio/stable/minio
```
```
### Binary Download
| Platform| Architecture | URL|
@@ -40,7 +39,7 @@ brew install minio/stable/minio
|Apple macOS|64-bit Intel|https://dl.minio.io/server/minio/release/darwin-amd64/minio |
```sh
chmod 755 minio
./minio server ~/Photos
./minio server /data
```
## GNU/Linux
@@ -48,13 +47,22 @@ chmod 755 minio
| Platform| Architecture | URL|
| ----------| -------- | ------|
|GNU/Linux|64-bit Intel|https://dl.minio.io/server/minio/release/linux-amd64/minio |
| |32-bit Intel|https://dl.minio.io/server/minio/release/linux-386/minio |
| |32-bit ARM|https://dl.minio.io/server/minio/release/linux-arm/minio |
| |64-bit ARM|https://dl.minio.io/server/minio/release/linux-arm64/minio |
| |32-bit ARMv6|https://dl.minio.io/server/minio/release/linux-arm6vl/minio |
```sh
wget https://dl.minio.io/server/minio/release/linux-amd64/minio
chmod +x minio
./minio server ~/Photos
./minio server /data
```
### Snap
Install minio using [Snap](https://snapcraft.io)
```sh
sudo snap install minio --edge
```
Start minio using `snap run` command
```sh
sudo snap connect minio:mount-observe
sudo snap run minio server /data
```
## Microsoft Windows
@@ -62,7 +70,6 @@ chmod +x minio
| Platform| Architecture | URL|
| ----------| -------- | ------|
|Microsoft Windows|64-bit|https://dl.minio.io/server/minio/release/windows-amd64/minio.exe |
| |32-bit|https://dl.minio.io/server/minio/release/windows-386/minio.exe |
```sh
minio.exe server D:\Photos
```
@@ -78,17 +85,7 @@ sysrc minio_disks=/home/user/Photos
service minio start
```
### Binary Download
| Platform| Architecture | URL|
| ----------| -------- | ------|
|FreeBSD|64-bit|https://dl.minio.io/server/minio/release/freebsd-amd64/minio |
```sh
chmod 755 minio
./minio server ~/Photos
```
## Install from Source
Source installation is only intended for developers and advanced users. If you do not have a working Golang environment, please follow [How to install Golang](https://docs.minio.io/docs/how-to-install-golang).
```sh
@@ -103,6 +100,11 @@ Minio Server comes with an embedded web based object browser. Point your web bro
## Test using Minio Client `mc`
`mc` provides a modern alternative to UNIX commands like ls, cat, cp, mirror, diff etc. It supports filesystems and Amazon S3 compatible cloud storage services. Follow the Minio Client [Quickstart Guide](https://docs.minio.io/docs/minio-client-quickstart-guide) for further instructions.
## Pre-existing data
When deployed on a single drive, Minio server lets clients access any pre-existing data in the data directory. For example, if Minio is started with the command `minio server /mnt/data`, any pre-existing data in the `/mnt/data` directory would be accessible to the clients.
The above statement is also valid for all gateway backends.
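To illustrate, a directory created under the export path before the server starts shows up as a bucket, and the files inside it as objects. A minimal sketch, assuming an empty `/mnt/data` and a local `backup.tar.gz` (both placeholders):
```sh
# Stage data first, then start the server on top of it.
mkdir -p /mnt/data/mybucket
cp backup.tar.gz /mnt/data/mybucket/
minio server /mnt/data
# Clients now see the bucket 'mybucket' containing the object 'backup.tar.gz'.
```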
## Explore Further
- [Minio Erasure Code QuickStart Guide](https://docs.minio.io/docs/minio-erasure-code-quickstart-guide)
- [Use `mc` with Minio Server](https://docs.minio.io/docs/minio-client-quickstart-guide)
@@ -113,3 +115,7 @@ Minio Server comes with an embedded web based object browser. Point your web bro
## Contribute to Minio Project
Please follow Minio [Contributor's Guide](https://github.com/minio/minio/blob/master/CONTRIBUTING.md)
## License
[![FOSSA Status](https://app.fossa.io/api/projects/git%2Bgithub.com%2Fminio%2Fminio.svg?type=large)](https://app.fossa.io/projects/git%2Bgithub.com%2Fminio%2Fminio?ref=badge_large)


@@ -1,202 +0,0 @@
# Minio Quickstart Guide [![Slack](https://slack.minio.io/slack?type=svg)](https://slack.minio.io) [![Go Report Card](https://goreportcard.com/badge/minio/minio)](https://goreportcard.com/report/minio/minio) [![codecov](https://codecov.io/gh/minio/minio/branch/master/graph/badge.svg)](https://codecov.io/gh/minio/minio)
Minio is an object storage server released under Apache License v2.0. It is fully compatible with Amazon's S3 cloud storage service and is well suited for storing large amounts of unstructured data such as photos, videos, log files, backups and container/VM images. An object can be of any size, from a few KBs to a maximum of 5TB.
## 1. Download
Minio is a very lightweight server that integrates easily with other applications, similar to NodeJS, Redis or MySQL.
| Platform| Architecture | URL|
| ----------| -------- | ------|
|GNU/Linux|64-bit Intel|https://dl.minio.io/server/minio/release/linux-amd64/minio|
||32-bit Intel|https://dl.minio.io/server/minio/release/linux-386/minio|
||32-bit ARM|https://dl.minio.io/server/minio/release/linux-arm/minio|
||64-bit ARM|https://dl.minio.io/server/minio/release/linux-arm64/minio|
||32-bit ARMv6|https://dl.minio.io/server/minio/release/linux-arm6vl/minio|
|Apple OS X|64-bit Intel|https://dl.minio.io/server/minio/release/darwin-amd64/minio|
|Microsoft Windows|64-bit|https://dl.minio.io/server/minio/release/windows-amd64/minio.exe|
||32-bit|https://dl.minio.io/server/minio/release/windows-386/minio.exe|
|FreeBSD|64-bit|https://dl.minio.io/server/minio/release/freebsd-amd64/minio|
### Install with Homebrew
Install minio using [Homebrew](http://brew.sh/)
```sh
$ brew install minio
$ minio --help
```
### Install from Source
Source installation is intended only for developers and advanced users. If you do not yet have a working Golang environment, please follow the official guide [How to install Golang](https://docs.minio.io/docs/zh-CN/how-to-install-golang).
```sh
$ go get -u github.com/minio/minio
```
## 2. Run the Minio Server
### GNU/Linux
```sh
$ chmod +x minio
$ ./minio --help
$ ./minio server ~/Photos
Endpoint: http://10.0.0.10:9000 http://127.0.0.1:9000 http://172.17.0.1:9000
AccessKey: USWUXHGYZQYFYFFIT3RE
SecretKey: MOJRH0mkL1IPauahWITSVvyDrQbEEIwljvmxdq03
Region: us-east-1
Browser Access:
http://10.0.0.10:9000 http://127.0.0.1:9000 http://172.17.0.1:9000
Command-line Access: https://docs.minio.io/docs/zh-CN/minio-client-quickstart-guide
$ mc config host add myminio http://10.0.0.10:9000 USWUXHGYZQYFYFFIT3RE MOJRH0mkL1IPauahWITSVvyDrQbEEIwljvmxdq03
Object API (Amazon S3 compatible):
Go: https://docs.minio.io/docs/zh-CN/golang-client-quickstart-guide
Java: https://docs.minio.io/docs/zh-CN/java-client-quickstart-guide
Python: https://docs.minio.io/docs/zh-CN/python-client-quickstart-guide
JavaScript: https://docs.minio.io/docs/zh-CN/javascript-client-quickstart-guide
```
### OS X
```sh
$ chmod 755 minio
$ ./minio --help
$ ./minio server ~/Photos
Endpoint: http://10.0.0.10:9000 http://127.0.0.1:9000 http://172.17.0.1:9000
AccessKey: USWUXHGYZQYFYFFIT3RE
SecretKey: MOJRH0mkL1IPauahWITSVvyDrQbEEIwljvmxdq03
Region: us-east-1
Browser Access:
http://10.0.0.10:9000 http://127.0.0.1:9000 http://172.17.0.1:9000
Command-line Access: https://docs.minio.io/docs/zh-CN/minio-client-quickstart-guide
$ mc config host add myminio http://10.0.0.10:9000 USWUXHGYZQYFYFFIT3RE MOJRH0mkL1IPauahWITSVvyDrQbEEIwljvmxdq03
Object API (Amazon S3 compatible):
Go: https://docs.minio.io/docs/zh-CN/golang-client-quickstart-guide
Java: https://docs.minio.io/docs/zh-CN/java-client-quickstart-guide
Python: https://docs.minio.io/docs/zh-CN/python-client-quickstart-guide
JavaScript: https://docs.minio.io/docs/zh-CN/javascript-client-quickstart-guide
```
### Microsoft Windows
```sh
C:\Users\Username\Downloads> minio.exe --help
C:\Users\Username\Downloads> minio.exe server D:\Photos
Endpoint: http://10.0.0.10:9000 http://127.0.0.1:9000 http://172.17.0.1:9000
AccessKey: USWUXHGYZQYFYFFIT3RE
SecretKey: MOJRH0mkL1IPauahWITSVvyDrQbEEIwljvmxdq03
Region: us-east-1
Browser Access:
http://10.0.0.10:9000 http://127.0.0.1:9000 http://172.17.0.1:9000
Command-line Access: https://docs.minio.io/docs/zh-CN/minio-client-quickstart-guide
$ mc.exe config host add myminio http://10.0.0.10:9000 USWUXHGYZQYFYFFIT3RE MOJRH0mkL1IPauahWITSVvyDrQbEEIwljvmxdq03
Object API (Amazon S3 compatible):
Go: https://docs.minio.io/docs/zh-CN/golang-client-quickstart-guide
Java: https://docs.minio.io/docs/zh-CN/java-client-quickstart-guide
Python: https://docs.minio.io/docs/zh-CN/python-client-quickstart-guide
JavaScript: https://docs.minio.io/docs/zh-CN/javascript-client-quickstart-guide
```
### Docker
```sh
$ docker pull minio/minio
$ docker run -p 9000:9000 minio/minio
```
Please visit the Minio Docker quickstart guide for more details [here](https://docs.minio.io/docs/zh-CN/minio-docker-quickstart-guide)
### FreeBSD
```sh
$ chmod 755 minio
$ ./minio --help
$ ./minio server ~/Photos
Endpoint: http://10.0.0.10:9000 http://127.0.0.1:9000 http://172.17.0.1:9000
AccessKey: USWUXHGYZQYFYFFIT3RE
SecretKey: MOJRH0mkL1IPauahWITSVvyDrQbEEIwljvmxdq03
Region: us-east-1
Browser Access:
http://10.0.0.10:9000 http://127.0.0.1:9000 http://172.17.0.1:9000
Command-line Access: https://docs.minio.io/docs/zh-CN/minio-client-quickstart-guide
$ mc config host add myminio http://10.0.0.10:9000 USWUXHGYZQYFYFFIT3RE MOJRH0mkL1IPauahWITSVvyDrQbEEIwljvmxdq03
Object API (Amazon S3 compatible):
Go: https://docs.minio.io/docs/zh-CN/golang-client-quickstart-guide
Java: https://docs.minio.io/docs/zh-CN/java-client-quickstart-guide
Python: https://docs.minio.io/docs/zh-CN/python-client-quickstart-guide
JavaScript: https://docs.minio.io/docs/zh-CN/javascript-client-quickstart-guide
```
Please see the official FreeBSD handbook for more details [here](https://www.freebsd.org/doc/handbook/zfs-quickstart.html)
## 3. Test the Minio Server using a Browser
Open a browser and go to http://127.0.0.1:9000 to view all the buckets on the Minio server.
![Screenshot](https://github.com/minio/minio/blob/master/docs/screenshots/minio-browser.jpg?raw=true)
## 4. Test the Minio Server using `mc`
Follow [this guide](https://docs.minio.io/docs/minio-client-quickstart-guide) to install mc. Use the `mc ls` command to list all the buckets on the Minio server.
```sh
$ mc ls myminio/
[2015-08-05 08:13:22 IST] 0B andoria/
[2015-08-05 06:14:26 IST] 0B deflector/
[2015-08-05 08:13:11 IST] 0B ferenginar/
[2016-03-08 14:56:35 IST] 0B jarjarbing/
[2016-01-20 16:07:41 IST] 0B my.minio.io/
```
For more examples, please visit the [Minio Client Complete Guide](https://docs.minio.io/docs/zh-CN/minio-client-complete-guide).
## 5. Explore Further
- [Minio Erasure Code QuickStart Guide](https://docs.minio.io/docs/zh-CN/minio-erasure-code-quickstart-guide)
- [Minio Docker Quickstart Guide](https://docs.minio.io/docs/zh-CN/minio-docker-quickstart-guide)
- [Use `mc` with Minio Server](https://docs.minio.io/docs/zh-CN/minio-client-quickstart-guide)
- [Use `aws-cli` with Minio Server](https://docs.minio.io/docs/zh-CN/aws-cli-with-minio)
- [Use `s3cmd` with Minio Server](https://docs.minio.io/docs/zh-CN/s3cmd-with-minio)
- [Use `minio-go` SDK with Minio Server](https://docs.minio.io/docs/zh-CN/golang-client-quickstart-guide)
## 6. Contribute to the Minio Project
Please follow the Minio [Contributor's Guide](https://github.com/minio/minio/blob/master/CONTRIBUTING.md)

README_zh_CN.md Normal file

@@ -0,0 +1,121 @@
# Minio Quickstart Guide [![Slack](https://slack.minio.io/slack?type=svg)](https://slack.minio.io) [![Go Report Card](https://goreportcard.com/badge/minio/minio)](https://goreportcard.com/report/minio/minio) [![Docker Pulls](https://img.shields.io/docker/pulls/minio/minio.svg?maxAge=604800)](https://hub.docker.com/r/minio/minio/) [![codecov](https://codecov.io/gh/minio/minio/branch/master/graph/badge.svg)](https://codecov.io/gh/minio/minio)
Minio is an object storage server released under the Apache License v2.0 open-source license. It is compatible with the Amazon S3 cloud storage service interface and is well suited for storing large volumes of unstructured data such as photos, videos, log files, backups and container/VM images. An object can be of any size, from a few KBs to a maximum of 5TB.
Minio is light enough to be bundled with the application stack, similar to NodeJS, Redis or MySQL.
## Docker Container
### Stable
```
docker pull minio/minio
docker run -p 9000:9000 minio/minio server /data
```
### Edge
```
docker pull minio/minio:edge
docker run -p 9000:9000 minio/minio:edge server /data
```
For more on deploying with Docker, please visit [here](https://docs.minio.io/docs/minio-docker-quickstart-guide)
## macOS
### Homebrew
Install minio using [Homebrew](http://brew.sh/)
```sh
brew install minio/stable/minio
minio server /data
```
#### Note
If you previously installed minio using `brew install minio`, reinstall it from the official `minio/stable/minio` repo. Homebrew builds are unstable due to golang 1.8 bugs.
```
brew uninstall minio
brew install minio/stable/minio
```
### Binary Download
| Platform| Architecture | URL|
| ----------| -------- | ------|
|Apple macOS|64-bit Intel|https://dl.minio.io/server/minio/release/darwin-amd64/minio |
```sh
chmod 755 minio
./minio server /data
```
## GNU/Linux
### Binary Download
| Platform| Architecture | URL|
| ----------| -------- | ------|
|GNU/Linux|64-bit Intel|https://dl.minio.io/server/minio/release/linux-amd64/minio |
```sh
chmod +x minio
./minio server /data
```
### Snap
You can download the latest `minio` [snap](https://snapcraft.io) and help us validate the latest changes on the master branch across [all supported Linux distros](https://snapcraft.io/docs/core/install):
```sh
sudo snap install minio --edge
```
Every time a new `minio` build is pushed to the store, your installation is updated automatically.
You need to allow the minio snap to observe its mounts:
```sh
sudo snap connect minio:mount-observe
```
## Microsoft Windows
### Binary Download
| Platform| Architecture | URL|
| ----------| -------- | ------|
|Microsoft Windows|64-bit|https://dl.minio.io/server/minio/release/windows-amd64/minio.exe |
```sh
minio.exe server D:\Photos
```
## FreeBSD
### Port
Install minio using [pkg](https://github.com/freebsd/pkg).
```sh
pkg install minio
sysrc minio_enable=yes
sysrc minio_disks=/home/user/Photos
service minio start
```
## Install from Source
Source installation is intended only for developers and advanced users. If you do not have a working Golang environment, please follow [How to install Golang](https://docs.minio.io/docs/how-to-install-golang).
```sh
go get -u github.com/minio/minio
```
## Test using Minio Browser
After installation, point your browser to [http://127.0.0.1:9000](http://127.0.0.1:9000); if the page loads, minio has been installed successfully.
![Screenshot](https://github.com/minio/minio/blob/master/docs/screenshots/minio-browser.jpg?raw=true)
## Test using Minio Client `mc`
`mc` provides alternatives to common UNIX commands such as ls, cat, cp, mirror and diff. It supports filesystems and Amazon S3 compatible cloud storage services. See the [mc Quickstart Guide](https://docs.minio.io/docs/minio-client-quickstart-guide) for more information.
## Pre-existing data
When deployed on a single drive, Minio server lets clients access any pre-existing data in the data directory. For example, if Minio is started with `minio server /mnt/data`, any pre-existing data in the `/mnt/data` directory would be accessible to the clients.
The above statement is also valid for all gateway backends.
## Explore Further
- [Minio Erasure Code QuickStart Guide](https://docs.minio.io/docs/minio-erasure-code-quickstart-guide)
- [`mc` Quickstart Guide](https://docs.minio.io/docs/minio-client-quickstart-guide)
- [Use `aws-cli` with Minio](https://docs.minio.io/docs/aws-cli-with-minio)
- [Use `s3cmd` with Minio](https://docs.minio.io/docs/s3cmd-with-minio)
- [Use the `minio-go` SDK](https://docs.minio.io/docs/golang-client-quickstart-guide)
- [Minio Documentation](https://docs.minio.io)
## Contribute to the Minio Project
Please follow the [Contributor's Guide](https://github.com/minio/minio/blob/master/CONTRIBUTING.md). Chinese developers are welcome to join the Minio project.


@@ -11,12 +11,12 @@ clone_folder: c:\gopath\src\github.com\minio\minio
# Environment variables
environment:
GOROOT: c:\go17
GOPATH: c:\gopath
GOROOT: c:\go19
# scripts that run after cloning repository
install:
- set PATH=%GOPATH%\bin;c:\go17\bin;%PATH%
- set PATH=%GOPATH%\bin;%GOROOT%\bin;%PATH%
- go version
- go env
- python --version
@@ -38,9 +38,8 @@ test_script:
# Unit tests
- ps: Add-AppveyorTest "Unit Tests" -Outcome Running
- mkdir build\coverage
- go test -v -timeout 17m -race github.com/minio/minio/cmd...
- go test -v -race github.com/minio/minio/pkg...
- go test -v -coverprofile=build\coverage\coverage.txt -covermode=atomic github.com/minio/minio/cmd
- for /f "" %%G in ('go list github.com/minio/minio/... ^| find /i /v "browser/"') do ( go test -v -timeout 20m -race %%G )
- go test -v -timeout 20m -coverprofile=build\coverage\coverage.txt -covermode=atomic github.com/minio/minio/cmd
- ps: Update-AppveyorTest "Unit Tests" -Outcome Passed
after_test:


@@ -5,13 +5,13 @@
<meta charset="UTF-8">
<meta name="viewport" content="width=device-width, initial-scale=1">
<title>Minio Browser</title>
<link rel="stylesheet" href="/minio/loader.css" type="text/css">
<link rel="stylesheet" href="loader.css" type="text/css">
</head>
<body>
<div class="page-load">
<div class="pl-inner">
<img src="/minio/logo.svg" alt="">
<img src="logo.svg" alt="">
</div>
</div>
<div id="root"></div>
@@ -27,19 +27,19 @@
<ul>
<li>
<a href="http://www.google.com/chrome/">
<img src="/minio/chrome.png" alt="">
<img src="chrome.png" alt="">
<div>Chrome</div>
</a>
</li>
<li>
<a href="https://www.mozilla.org/en-US/firefox/new/">
<img src="/minio/firefox.png" alt="">
<img src="firefox.png" alt="">
<div>Firefox</div>
</a>
</li>
<li>
<a href="https://www.apple.com/safari/">
<img src="/minio/safari.png" alt="">
<img src="safari.png" alt="">
<div>Safari</div>
</a>
</li>
@@ -51,6 +51,6 @@
<![endif]-->
<script>currentUiVersion = 'MINIO_UI_VERSION'</script>
<script src="/minio/index_bundle.js"></script>
<script src="index_bundle.js"></script>
</body>
</html>


@@ -41,6 +41,8 @@ import _Login from './js/components/Login.js'
import _Browse from './js/components/Browse.js'
import fontAwesome from 'font-awesome/css/font-awesome.css'
import MaterialDesignIconicFonts from 'material-design-iconic-font/dist/css/material-design-iconic-font.min.css'
import Web from './js/web'
window.Web = Web
@@ -81,7 +83,7 @@ ReactDOM.render((
<Provider store={ store } web={ web }>
<Router history={ browserHistory }>
<Route path='/' component={ App }>
<Route path='minio' component={ App }>
<Route path={ minioBrowserPrefix } component={ App }>
<IndexRoute component={ Browse } onEnter={ authNeeded } />
<Route path='login' component={ Login } onEnter={ authNotNeeded } />
<Route path=':bucket' component={ Browse } onEnter={ authNeeded } />


@@ -24,6 +24,8 @@ export const SET_CURRENT_BUCKET = 'SET_CURRENT_BUCKET'
export const SET_CURRENT_PATH = 'SET_CURRENT_PATH'
export const SET_BUCKETS = 'SET_BUCKETS'
export const ADD_BUCKET = 'ADD_BUCKET'
export const REMOVE_BUCKET = 'REMOVE_BUCKET'
export const SHOW_BUCKET_DROPDOWN = 'SHOW_BUCKET_DROPDOWN'
export const SET_VISIBLE_BUCKETS = 'SET_VISIBLE_BUCKETS'
export const SET_OBJECTS = 'SET_OBJECTS'
export const APPEND_OBJECTS = 'APPEND_OBJECTS'
@@ -173,6 +175,27 @@ export const addBucket = bucket => {
}
}
export const removeBucket = bucket => {
return {
type: REMOVE_BUCKET,
bucket
}
}
export const showBucketDropdown = bucket => {
return {
type: SHOW_BUCKET_DROPDOWN,
showBucketDropdown: true
}
}
export const hideBucketDropdown = bucket => {
return {
type: SHOW_BUCKET_DROPDOWN,
showBucketDropdown: false
}
}
export const showMakeBucketModal = () => {
return {
type: SHOW_MAKEBUCKET_MODAL,
@@ -314,10 +337,35 @@ export const selectBucket = (newCurrentBucket, prefix) => {
}
}
export const deleteBucket = (bucket) => {
return (dispatch, getState) => {
// DeleteBucket() RPC call will ONLY delete a bucket if it is empty of
// objects. This means a call can just be sent, as it is entirely reversible
// and won't do any permanent damage.
web.DeleteBucket({
bucketName: bucket
})
.then(() => {
dispatch(showAlert({
type: 'info',
message: `Bucket '${bucket}' has been deleted.`
}))
dispatch(removeBucket(bucket))
})
.catch(err => {
let message = err.message
dispatch(showAlert({
type: 'danger',
message: message
}))
})
}
}
export const listObjects = () => {
return (dispatch, getState) => {
const {currentBucket, currentPath, marker, objects, istruncated, web} = getState()
if (!istruncated) return
const {buckets, currentBucket, currentPath, marker, objects, istruncated, web} = getState()
if (!istruncated || buckets.length === 0) return
web.ListObjects({
bucketName: currentBucket,
prefix: currentPath,
@@ -352,7 +400,6 @@ export const listObjects = () => {
export const selectPrefix = prefix => {
return (dispatch, getState) => {
const {currentBucket, web} = getState()
dispatch(resetObjects())
dispatch(setLoadPath(prefix))
web.ListObjects({
bucketName: currentBucket,
@@ -367,6 +414,7 @@ export const selectPrefix = prefix => {
object.name = object.name.replace(`${prefix}`, '');
return object
})
dispatch(resetObjects())
dispatch(appendObjects(
objects,
res.nextmarker,
@@ -465,7 +513,7 @@ export const uploadFile = (file, xhr) => {
return (dispatch, getState) => {
const {currentBucket, currentPath} = getState()
const objectName = `${currentPath}${file.name}`
const uploadUrl = `${window.location.origin}/minio/upload/${currentBucket}/${objectName}`
const uploadUrl = `${window.location.origin}${minioBrowserPrefix}/upload/${currentBucket}/${objectName}`
// The slug is a unique identifier for the file upload.
const slug = `${currentBucket}-${currentPath}-${file.name}`
@@ -482,7 +530,7 @@ export const uploadFile = (file, xhr) => {
}))
xhr.onload = function(event) {
if (xhr.status == 401 || xhr.status == 403 || xhr.status == 500) {
if (xhr.status == 401 || xhr.status == 403) {
setShowAbortModal(false)
dispatch(stopUpload({
slug
@@ -492,6 +540,16 @@ export const uploadFile = (file, xhr) => {
message: 'Unauthorized request.'
}))
}
if (xhr.status == 500) {
setShowAbortModal(false)
dispatch(stopUpload({
slug
}))
dispatch(showAlert({
type: 'danger',
message: xhr.responseText
}))
}
if (xhr.status == 200) {
setShowAbortModal(false)
dispatch(stopUpload({


@@ -150,16 +150,21 @@ export default class Browse extends React.Component {
if (prefix === currentPath) return
browserHistory.push(utils.pathJoin(currentBucket, encPrefix))
} else {
// Download the selected file.
web.CreateURLToken()
.then(res => {
let url = `${window.location.origin}/minio/download/${currentBucket}/${encPrefix}?token=${res.token}`
window.location = url
})
.catch(err => dispatch(actions.showAlert({
type: 'danger',
message: err.message
})))
if (!web.LoggedIn()) {
let url = `${window.location.origin}/minio/download/${currentBucket}/${encPrefix}?token=''`
window.location = url
} else {
// Download the selected file.
web.CreateURLToken()
.then(res => {
let url = `${window.location.origin}${minioBrowserPrefix}/download/${currentBucket}/${encPrefix}?token=${res.token}`
window.location = url
})
.catch(err => dispatch(actions.showAlert({
type: 'danger',
message: err.message
})))
}
}
}
@@ -205,9 +210,19 @@ export default class Browse extends React.Component {
dispatch(actions.hideAbout())
}
toggleBucketDropdown(e) {
const {dispatch, showBucketDropdown} = this.props
if (showBucketDropdown) {
dispatch(actions.hideBucketDropdown())
} else {
dispatch(actions.showBucketDropdown())
}
}
showBucketPolicy(e) {
e.preventDefault()
const {dispatch} = this.props
this.toggleBucketDropdown(e)
dispatch(actions.showBucketPolicy())
}
@@ -217,10 +232,17 @@ export default class Browse extends React.Component {
dispatch(actions.hideBucketPolicy())
}
deleteBucket(e, bucket) {
e.preventDefault()
const {dispatch} = this.props
this.toggleBucketDropdown(e)
dispatch(actions.deleteBucket(bucket))
browserHistory.push(`${minioBrowserPrefix}/`)
}
uploadFile(e) {
e.preventDefault()
const {dispatch, buckets} = this.props
const {dispatch, buckets, currentBucket} = this.props
if (buckets.length === 0) {
dispatch(actions.showAlert({
type: 'danger',
@@ -228,6 +250,13 @@ export default class Browse extends React.Component {
}))
return
}
if (currentBucket === '') {
dispatch(actions.showAlert({
type: 'danger',
message: "Please choose a bucket before trying to upload files."
}))
return
}
let file = e.target.files[0]
e.target.value = null
this.xhr = new XMLHttpRequest()
@@ -421,18 +450,22 @@ export default class Browse extends React.Component {
objects: this.props.checkedObjects,
prefix: this.props.currentPath
}
web.CreateURLToken()
.then(res => {
let requestUrl = location.origin + "/minio/zip?token=" + res.token
this.xhr = new XMLHttpRequest()
dispatch(actions.downloadSelected(requestUrl, req, this.xhr))
})
.catch(err => dispatch(actions.showAlert({
type: 'danger',
message: err.message
})))
if (!web.LoggedIn()) {
let requestUrl = location.origin + "/minio/zip?token=''"
this.xhr = new XMLHttpRequest()
dispatch(actions.downloadSelected(requestUrl, req, this.xhr))
} else {
web.CreateURLToken()
.then(res => {
let requestUrl = location.origin + minioBrowserPrefix + "/zip?token=" + res.token
this.xhr = new XMLHttpRequest()
dispatch(actions.downloadSelected(requestUrl, req, this.xhr))
})
.catch(err => dispatch(actions.showAlert({
type: 'danger',
message: err.message
})))
}
}
clearSelected() {
@@ -493,7 +526,7 @@ export default class Browse extends React.Component {
settingsFunc={ this.showSettings.bind(this) }
logoutFunc={ this.logout.bind(this) } />
} else {
loginButton = <a className='btn btn-danger' href='/minio/login'>Login</a>
loginButton = <a className='btn btn-danger' href={minioBrowserPrefix+'/login'}>Login</a>
}
if (web.LoggedIn()) {
@@ -568,7 +601,9 @@ export default class Browse extends React.Component {
<SideBar searchBuckets={ this.searchBuckets.bind(this) }
selectBucket={ this.selectBucket.bind(this) }
clickOutside={ this.hideSidebar.bind(this) }
showPolicy={ this.showBucketPolicy.bind(this) } />
showPolicy={ this.showBucketPolicy.bind(this) }
deleteBucket={ this.deleteBucket.bind(this) }
toggleBucketDropdown={ this.toggleBucketDropdown.bind(this) } />
<div className="fe-body">
<div className={ 'list-actions' + (classNames({
' list-actions-toggled': checkedObjects.length > 0


@@ -21,8 +21,9 @@ import Scrollbars from 'react-custom-scrollbars/lib/Scrollbars'
import connect from 'react-redux/lib/components/connect'
import logo from '../../img/logo.svg'
import Dropdown from 'react-bootstrap/lib/Dropdown'
let SideBar = ({visibleBuckets, loadBucket, currentBucket, selectBucket, searchBuckets, sidebarStatus, clickOutside, showPolicy}) => {
let SideBar = ({visibleBuckets, loadBucket, currentBucket, selectBucket, searchBuckets, sidebarStatus, clickOutside, showPolicy, deleteBucket, toggleBucketDropdown, showBucketDropdown}) => {
const list = visibleBuckets.map((bucket, i) => {
return <li className={ classNames({
@@ -33,7 +34,19 @@ let SideBar = ({visibleBuckets, loadBucket, currentBucket, selectBucket, searchB
}) }>
{ bucket }
</a>
<i className="fesli-trigger" onClick={ showPolicy }></i>
<Dropdown open={bucket === currentBucket && showBucketDropdown} onToggle={toggleBucketDropdown} className="bucket-dropdown" id="bucket-dropdown">
<Dropdown.Toggle noCaret>
<i className="zmdi zmdi-more-vert" />
</Dropdown.Toggle>
<Dropdown.Menu className="dropdown-menu-right">
<li>
<a onClick={ showPolicy }>Edit policy</a>
</li>
<li>
<a onClick={ (e) => deleteBucket(e, bucket) }>Delete</a>
</li>
</Dropdown.Menu>
</Dropdown>
</li>
})
@@ -80,6 +93,7 @@ export default connect(state => {
visibleBuckets: state.visibleBuckets,
loadBucket: state.loadBucket,
currentBucket: state.currentBucket,
showBucketDropdown: state.showBucketDropdown,
sidebarStatus: state.sidebarStatus
}
})(SideBar)


@@ -17,7 +17,9 @@
// File for all the browser constants.
// minioBrowserPrefix absolute path.
export const minioBrowserPrefix = '/minio'
var p = window.location.pathname
export const minioBrowserPrefix = p.slice(0, p.indexOf("/", 1))
export const READ_ONLY = 'readonly'
export const WRITE_ONLY = 'writeonly'
export const READ_WRITE = 'readwrite'
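The change above derives `minioBrowserPrefix` from the current URL path instead of hard-coding `/minio`, so the embedded browser keeps working when served under a different prefix. A quick way to sanity-check the slice logic outside the browser, assuming `node` is installed:
```sh
# The first path segment of '/minio/login' is '/minio'.
node -e "var p='/minio/login'; console.log(p.slice(0, p.indexOf('/', 1)))"
```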


@@ -25,6 +25,7 @@ export default (state = {
storageInfo: {},
serverInfo: {},
currentBucket: '',
showBucketDropdown: false,
currentPath: '',
showMakeBucketModal: false,
uploads: {},
@@ -71,6 +72,14 @@ export default (state = {
newState.buckets = [action.bucket, ...newState.buckets]
newState.visibleBuckets = [action.bucket, ...newState.visibleBuckets]
break
case actions.REMOVE_BUCKET:
newState.buckets = newState.buckets.filter(bucket => bucket != action.bucket)
newState.visibleBuckets = newState.visibleBuckets.filter(bucket => bucket != action.bucket)
newState.currentBucket = ""
break
case actions.SHOW_BUCKET_DROPDOWN:
newState.showBucketDropdown = action.showBucketDropdown
break
case actions.SET_VISIBLE_BUCKETS:
newState.visibleBuckets = action.visibleBuckets
break


@@ -87,6 +87,9 @@ export default class Web {
MakeBucket(args) {
return this.makeCall('MakeBucket', args)
}
DeleteBucket(args) {
return this.makeCall('DeleteBucket', args)
}
ListObjects(args) {
return this.makeCall('ListObjects', args)
}


@@ -14,7 +14,7 @@
.fe-body {
@media(min-width: @screen-md-min) {
padding: 0 0 40px @fe-sidebar-width;
padding: 0 0 80px @fe-sidebar-width;
}
@media(max-width: @screen-sm-max) {
@@ -79,7 +79,7 @@
text-align: center;
border: 0;
padding: 0;
span {
display: inline-block;
height: 100%;


@@ -108,7 +108,7 @@ div.fesl-row {
&[data-type=zip] { .list-type(#427089, '\f1c6'); }
&[data-type=audio] { .list-type(#009688, '\f1c7'); }
&[data-type=code] { .list-type(#997867, "\f1c9"); }
&[data-type=excel] { .list-type(#f1c3, '\f1c3'); }
&[data-type=excel] { .list-type(#64c866, '\f1c3'); }
&[data-type=image] { .list-type(#f06292, '\f1c5'); }
&[data-type=video] { .list-type(#f8c363, '\f1c8'); }
&[data-type=other] { .list-type(#afafaf, '\f016'); }
@@ -159,7 +159,7 @@ div.fesl-row {
top: 0;
width: 35px;
height: 35px;
z-index: 20;
z-index: 8;
opacity: 0;
cursor: pointer;
@@ -223,7 +223,7 @@ div.fesl-row {
width: 15px;
height: 15px;
border: 2px solid @white;
z-index: 10;
z-index: 7;
border-radius: 2px;
top: 10px;
left: 10px;


@@ -111,7 +111,7 @@
}
&:hover {
.fesli-trigger {
.bucket-dropdown .dropdown-toggle {
.opacity(0.6);
&:hover {
@@ -132,17 +132,40 @@
}
}
.fesli-trigger {
.opacity(0);
.transition(all);
.transition-duration(200ms);
/* Dropdown */
.bucket-dropdown {
position: absolute;
top: 0;
right: 0;
top: 0px;
right: 0px;
width: 35px;
height: 100%;
cursor: pointer;
background: url(../../img/more-h-light.svg) no-repeat top 20px left;
color: @white;
.dropdown-toggle {
.opacity(0);
.transition(all);
.transition-duration(200ms);
font-size: 20px;
background-color: transparent;
}
.dropdown-menu-right {
padding: 15px 0;
margin-top: -1px;
& li {
a:before {
content: none;
-webkit-box-shadow: @dropdown-shadow;
box-shadow: @dropdown-shadow;
}
&:not(.active):hover {
& > a {
color: @dropdown-link-hover-color;
}
}
}
}
}
/* Scrollbar */


@@ -35,8 +35,6 @@
"esformatter": "^0.10.0",
"esformatter-jsx": "^7.4.1",
"esformatter-jsx-ignore": "^1.0.6",
"expect": "^1.20.2",
"history": "^1.17.0",
"html-webpack-plugin": "^2.22.0",
"json-loader": "^0.5.4",
"karma": "^0.13.22",
@@ -48,38 +46,39 @@
"less": "^2.7.1",
"less-loader": "^2.2.3",
"mocha": "^2.5.3",
"moment": "^2.15.1",
"purifycss-webpack-plugin": "^2.0.3",
"react": "^0.14.8",
"react-addons-test-utils": "^0.14.8",
"react-bootstrap": "^0.28.5",
"react-custom-scrollbars": "^2.3.0",
"react-redux": "^4.4.5",
"react-router": "^2.8.1",
"redux": "^3.6.0",
"redux-thunk": "^1.0.3",
"style-loader": "^0.13.1",
"superagent": "^1.8.4",
"superagent-es6-promise": "^1.0.0",
"url-loader": "^0.5.7",
"webpack": "^1.12.11",
"webpack-dev-server": "^1.14.1"
},
"dependencies": {
"bootstrap": "^3.3.6",
"classnames": "^2.2.3",
"expect": "^1.20.2",
"font-awesome": "^4.7.0",
"history": "^1.17.0",
"humanize": "0.0.9",
"json-loader": "^0.5.4",
"local-storage-fallback": "^1.3.0",
"material-design-iconic-font": "^2.2.0",
"mime-db": "^1.25.0",
"mime-types": "^2.1.13",
"moment": "^2.15.1",
"react": "^0.14.8",
"react-addons-test-utils": "^0.14.8",
"react-bootstrap": "^0.28.5",
"react-copy-to-clipboard": "^4.2.3",
"react-custom-scrollbars": "^2.2.2",
"react-dom": "^0.14.6",
"react-dropzone": "^3.5.3",
"react-infinite-scroller": "^1.0.6",
"react-onclickout": "2.0.4"
"react-onclickout": "2.0.4",
"react-redux": "^4.4.5",
"react-router": "^2.8.1",
"redux": "^3.6.0",
"redux-thunk": "^1.0.3",
"superagent": "^1.8.4",
"superagent-es6-promise": "^1.0.0",
"webpack": "^1.12.11"
}
}

File diff suppressed because one or more lines are too long


@@ -1,110 +0,0 @@
#!/bin/bash
_init() {
# Save release LDFLAGS
LDFLAGS=$(go run buildscripts/gen-ldflags.go)
# Extract release tag
release_tag=$(echo $LDFLAGS | awk {'print $6'} | cut -f2 -d=)
# Verify release tag.
if [ -z "$release_tag" ]; then
echo "Release tag cannot be empty. Please check return value of \`go run buildscripts/gen-ldflags.go\`"
exit 1;
fi
# Extract release string.
release_str=$(echo $MINIO_RELEASE | tr '[:upper:]' '[:lower:]')
# Verify release string.
if [ -z "$release_str" ]; then
echo "Release string cannot be empty. Please set \`MINIO_RELEASE\` env variable."
exit 1;
fi
# List of supported architectures
SUPPORTED_OSARCH='linux/386 linux/amd64 linux/arm linux/arm64 windows/386 windows/amd64 darwin/amd64 freebsd/amd64'
## System binaries
CP=`which cp`
SHASUM=`which shasum`
SHA256SUM="${SHASUM} -a 256"
SED=`which sed`
}
go_build() {
local osarch=$1
os=$(echo $osarch | cut -f1 -d'/')
arch=$(echo $osarch | cut -f2 -d'/')
package=$(go list -f '{{.ImportPath}}')
echo -n "-->"
printf "%15s:%s\n" "${osarch}" "${package}"
# Release binary name
release_bin="$release_str/$os-$arch/$(basename $package).$release_tag"
# Release binary downloadable name
release_real_bin="$release_str/$os-$arch/$(basename $package)"
# Release sha1sum name
release_shasum="$release_str/$os-$arch/$(basename $package).${release_tag}.shasum"
# Release sha1sum default
release_shasum_default="$release_str/$os-$arch/$(basename $package).shasum"
# Release sha256sum name
release_sha256sum="$release_str/$os-$arch/$(basename $package).${release_tag}.sha256sum"
# Release sha256sum default
release_sha256sum_default="$release_str/$os-$arch/$(basename $package).sha256sum"
# Go build to build the binary.
CGO_ENABLED=0 GOOS=$os GOARCH=$arch go build --ldflags "${LDFLAGS}" -o $release_bin
# Create copy
if [ $os == "windows" ]; then
$CP -p $release_bin ${release_real_bin}.exe
else
$CP -p $release_bin $release_real_bin
fi
# Calculate sha1sum
shasum_str=$(${SHASUM} ${release_bin})
echo ${shasum_str} | $SED "s/$release_str\/$os-$arch\///g" > $release_shasum
$CP -p $release_shasum $release_shasum_default
# Calculate sha256sum
sha256sum_str=$(${SHA256SUM} ${release_bin})
echo ${sha256sum_str} | $SED "s/$release_str\/$os-$arch\///g" > $release_sha256sum
$CP -p $release_sha256sum $release_sha256sum_default
}
main() {
# Build releases.
echo "Executing $release_str builds for OS: ${SUPPORTED_OSARCH}"
echo "Choose an OS Arch from the below"
for osarch in ${SUPPORTED_OSARCH}; do
echo ${osarch}
done
read -p "If you want to build for all, Just press Enter: " chosen_osarch
if [ "$chosen_osarch" = "" ] || [ "$chosen_osarch" = "all" ]; then
for each_osarch in ${SUPPORTED_OSARCH}; do
go_build ${each_osarch}
done
else
local found=0
for each_osarch in ${SUPPORTED_OSARCH}; do
if [ "$chosen_osarch" = "$each_osarch" ]; then
found=1
fi
done
if [ ${found} -eq 1 ]; then
go_build ${chosen_osarch}
else
echo "Unknown architecture \"${chosen_osarch}\""
exit 1
fi
fi
}
# Run main.
_init && main


@@ -1,6 +1,6 @@
#!/usr/bin/env bash
#
# Minio Cloud Storage, (C) 2014-2016 Minio, Inc.
# Minio Cloud Storage, (C) 2014-2018 Minio, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
@@ -21,7 +21,7 @@ _init() {
## Minimum required versions for build dependencies
GIT_VERSION="1.0"
GO_VERSION="1.7.1"
GO_VERSION="1.9.4"
OSX_VERSION="10.8"
KNAME=$(uname -s)
ARCH=$(uname -m)
@@ -93,9 +93,7 @@ assert_is_supported_arch() {
return
;;
*)
echo "ERROR"
echo "Arch '${ARCH}' not supported."
echo "Supported Arch: [x86_64, amd64, aarch64, arm*]"
echo "Arch '${ARCH}' is not supported. Supported Arch: [x86_64, amd64, aarch64, arm*]"
exit 1
esac
}
@@ -108,41 +106,31 @@ assert_is_supported_os() {
Darwin )
osx_host_version=$(env sw_vers -productVersion)
if ! check_minimum_version "${OSX_VERSION}" "${osx_host_version}"; then
echo "ERROR"
echo "OSX version '${osx_host_version}' not supported."
echo "Minimum supported version: ${OSX_VERSION}"
echo "OSX version '${osx_host_version}' is not supported. Minimum supported version: ${OSX_VERSION}"
exit 1
fi
return
;;
*)
echo "ERROR"
echo "OS '${KNAME}' is not supported."
echo "Supported OS: [Linux, FreeBSD, OpenBSD, NetBSD, Darwin, DragonFly]"
echo "OS '${KNAME}' is not supported. Supported OS: [Linux, FreeBSD, OpenBSD, NetBSD, Darwin, DragonFly]"
exit 1
esac
}
assert_check_golang_env() {
if ! which go >/dev/null 2>&1; then
echo "ERROR"
echo "Cannot find go binary in your PATH configuration, please refer to Go installation document at"
echo "https://docs.minio.io/docs/how-to-install-golang"
echo "Cannot find go binary in your PATH configuration, please refer to Go installation document at https://docs.minio.io/docs/how-to-install-golang"
exit 1
fi
installed_go_version=$(go version | sed 's/^.* go\([0-9.]*\).*$/\1/')
if ! check_minimum_version "${GO_VERSION}" "${installed_go_version}"; then
echo "ERROR"
echo "Go version '${installed_go_version}' not supported."
echo "Minimum supported version: ${GO_VERSION}"
echo "Go runtime version '${installed_go_version}' is unsupported. Minimum supported version: ${GO_VERSION} to compile."
exit 1
fi
if [ -z "${GOPATH}" ]; then
echo "ERROR"
echo "GOPATH environment variable missing, please refer to Go installation document"
echo "https://docs.minio.io/docs/how-to-install-golang"
echo "GOPATH environment variable missing, please refer to Go installation document at https://docs.minio.io/docs/how-to-install-golang"
exit 1
fi
@@ -152,9 +140,7 @@ assert_check_deps() {
# support unusual Git versions such as: 2.7.4 (Apple Git-66)
installed_git_version=$(git version | perl -ne '$_ =~ m/git version (.*?)( |$)/; print "$1\n";')
if ! check_minimum_version "${GIT_VERSION}" "${installed_git_version}"; then
echo "ERROR"
echo "Git version '${installed_git_version}' not supported."
echo "Minimum supported version: ${GIT_VERSION}"
echo "Git version '${installed_git_version}' is not supported. Minimum supported version: ${GIT_VERSION}"
exit 1
fi
}


@@ -1,81 +0,0 @@
#!/bin/sh
#
# Minio Cloud Storage, (C) 2017 Minio, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
# If command starts with an option, prepend minio.
if [ "${1}" != "minio" ]; then
if [ -n "${1}" ]; then
set -- minio "$@"
fi
fi
# Wait for all the hosts to come online and have
# their DNS entries populated properly.
docker_wait_hosts() {
hosts="$@"
num_hosts=0
# Count number of hosts in arguments.
for host in $hosts; do
[ $(echo "$host" | grep -E "^http") ] || continue
num_hosts=$((num_hosts+1))
done
if [ $num_hosts -gt 0 ]; then
echo -n "Waiting for all hosts to resolve..."
while true; do
x=0
for host in $hosts; do
[ $(echo "$host" | grep -E "^http") ] || continue
# Extract the domain.
host=$(echo $host | sed -e 's/^http[s]\?:\/\/\([^\/]\+\).*/\1/')
echo -n .
val=$(ping -c 1 $host 2>/dev/null)
if [ $? != 0 ]; then
echo "Failed to lookup $host"
continue
fi
x=$((x+1))
done
# If all the provided hosts resolved successfully, break out.
test $x -eq $num_hosts && break
echo "Failed to resolve hosts, retrying after 1 second."
sleep 1
done
echo "All hosts are resolving, proceeding to initialize Minio."
fi
}
## Look for docker secrets in default documented location.
docker_secrets_env() {
local MINIO_ACCESS_KEY_FILE="/run/secrets/access_key"
local MINIO_SECRET_KEY_FILE="/run/secrets/secret_key"
if [ -f $MINIO_ACCESS_KEY_FILE -a -f $MINIO_SECRET_KEY_FILE ]; then
if [ -f $MINIO_ACCESS_KEY_FILE ]; then
export MINIO_ACCESS_KEY="$(cat "$MINIO_ACCESS_KEY_FILE")"
fi
if [ -f $MINIO_SECRET_KEY_FILE ]; then
export MINIO_SECRET_KEY="$(cat "$MINIO_SECRET_KEY_FILE")"
fi
fi
}
## Set access env from secrets if necessary.
docker_secrets_env
## Wait for all the hosts to come online.
docker_wait_hosts "$@"
exec "$@"


@@ -3,8 +3,8 @@
set -e
echo "" > coverage.txt
for d in $(go list ./... | grep -v vendor); do
go test -coverprofile=profile.out -covermode=atomic $d
for d in $(go list ./... | grep -v browser); do
go test -coverprofile=profile.out -covermode=atomic "$d"
if [ -f profile.out ]; then
cat profile.out >> coverage.txt
rm profile.out


@@ -1,43 +0,0 @@
#!/bin/sh
#
# Minio Cloud Storage, (C) 2017 Minio, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
_init () {
scheme="http://"
address="127.0.0.1:9000"
resource="/minio/index.html"
}
HealthCheckMain () {
# Get the http response code
http_response=$(curl -s -k -o /dev/null -I -w "%{http_code}" ${scheme}${address}${resource})
# Get the http response body
http_response_body=$(curl -k -s ${scheme}${address}${resource})
# server returns response 403 and body "SSL required" if a non-TLS connection is attempted on a TLS-configured server.
# change the scheme and try again
if [ "$http_response" = "403" ] && [ "$http_response_body" = "SSL required" ]; then
scheme="https://"
http_response=$(curl -s -k -o /dev/null -I -w "%{http_code}" ${scheme}${address}${resource})
fi
# If http_response is 200 - server is up.
# When MINIO_BROWSER is set to off, curl responds with 404. We assume that the server is up
[ "$http_response" = "200" ] || [ "$http_response" = "404" ]
}
_init && HealthCheckMain

buildscripts/verify-build.sh Executable file

@@ -0,0 +1,331 @@
#!/bin/bash
#
# Minio Cloud Storage, (C) 2017, 2018 Minio, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
set -e
set -E
set -o pipefail
if [ ! -x "$PWD/minio" ]; then
echo "minio executable binary not found in current directory"
exit 1
fi
WORK_DIR="$PWD/.verify-$RANDOM"
export MINT_MODE=core
export MINT_DATA_DIR="$WORK_DIR/data"
export SERVER_ENDPOINT="127.0.0.1:9000"
export ACCESS_KEY="minio"
export SECRET_KEY="minio123"
export ENABLE_HTTPS=0
MINIO_CONFIG_DIR="$WORK_DIR/.minio"
MINIO=( "$PWD/minio" --config-dir "$MINIO_CONFIG_DIR" )
FILE_1_MB="$MINT_DATA_DIR/datafile-1-MB"
FILE_65_MB="$MINT_DATA_DIR/datafile-65-MB"
FUNCTIONAL_TESTS="$WORK_DIR/functional-tests.sh"
function start_minio_fs()
{
"${MINIO[@]}" server "${WORK_DIR}/fs-disk" >"$WORK_DIR/fs-minio.log" 2>&1 &
minio_pid=$!
sleep 10
echo "$minio_pid"
}
function start_minio_erasure()
{
"${MINIO[@]}" server "${WORK_DIR}/erasure-disk1" "${WORK_DIR}/erasure-disk2" "${WORK_DIR}/erasure-disk3" "${WORK_DIR}/erasure-disk4" >"$WORK_DIR/erasure-minio.log" 2>&1 &
minio_pid=$!
sleep 15
echo "$minio_pid"
}
function start_minio_erasure_sets()
{
"${MINIO[@]}" server "${WORK_DIR}/erasure-disk-sets{1...32}" >"$WORK_DIR/erasure-minio-sets.log" 2>&1 &
minio_pid=$!
sleep 15
echo "$minio_pid"
}
function start_minio_dist_erasure_sets()
{
declare -a minio_pids
"${MINIO[@]}" server --address=:9000 "http://127.0.0.1:9000${WORK_DIR}/dist-disk-sets1" "http://127.0.0.1:9001${WORK_DIR}/dist-disk-sets2" "http://127.0.0.1:9002${WORK_DIR}/dist-disk-sets3" "http://127.0.0.1:9003${WORK_DIR}/dist-disk-sets4" "http://127.0.0.1:9004${WORK_DIR}/dist-disk-sets5" "http://127.0.0.1:9005${WORK_DIR}/dist-disk-sets6" "http://127.0.0.1:9006${WORK_DIR}/dist-disk-sets7" "http://127.0.0.1:9007${WORK_DIR}/dist-disk-sets8" "http://127.0.0.1:9008${WORK_DIR}/dist-disk-sets9" "http://127.0.0.1:9009${WORK_DIR}/dist-disk-sets10" "http://127.0.0.1:9000${WORK_DIR}/dist-disk-sets11" "http://127.0.0.1:9001${WORK_DIR}/dist-disk-sets12" "http://127.0.0.1:9002${WORK_DIR}/dist-disk-sets13" "http://127.0.0.1:9003${WORK_DIR}/dist-disk-sets14" "http://127.0.0.1:9004${WORK_DIR}/dist-disk-sets15" "http://127.0.0.1:9005${WORK_DIR}/dist-disk-sets16" "http://127.0.0.1:9006${WORK_DIR}/dist-disk-sets17" "http://127.0.0.1:9007${WORK_DIR}/dist-disk-sets18" "http://127.0.0.1:9008${WORK_DIR}/dist-disk-sets19" "http://127.0.0.1:9009${WORK_DIR}/dist-disk-sets20" >"$WORK_DIR/dist-minio-9000.log" 2>&1 &
minio_pids[0]=$!
"${MINIO[@]}" server --address=:9001 "http://127.0.0.1:9000${WORK_DIR}/dist-disk-sets1" "http://127.0.0.1:9001${WORK_DIR}/dist-disk-sets2" "http://127.0.0.1:9002${WORK_DIR}/dist-disk-sets3" "http://127.0.0.1:9003${WORK_DIR}/dist-disk-sets4" "http://127.0.0.1:9004${WORK_DIR}/dist-disk-sets5" "http://127.0.0.1:9005${WORK_DIR}/dist-disk-sets6" "http://127.0.0.1:9006${WORK_DIR}/dist-disk-sets7" "http://127.0.0.1:9007${WORK_DIR}/dist-disk-sets8" "http://127.0.0.1:9008${WORK_DIR}/dist-disk-sets9" "http://127.0.0.1:9009${WORK_DIR}/dist-disk-sets10" "http://127.0.0.1:9000${WORK_DIR}/dist-disk-sets11" "http://127.0.0.1:9001${WORK_DIR}/dist-disk-sets12" "http://127.0.0.1:9002${WORK_DIR}/dist-disk-sets13" "http://127.0.0.1:9003${WORK_DIR}/dist-disk-sets14" "http://127.0.0.1:9004${WORK_DIR}/dist-disk-sets15" "http://127.0.0.1:9005${WORK_DIR}/dist-disk-sets16" "http://127.0.0.1:9006${WORK_DIR}/dist-disk-sets17" "http://127.0.0.1:9007${WORK_DIR}/dist-disk-sets18" "http://127.0.0.1:9008${WORK_DIR}/dist-disk-sets19" "http://127.0.0.1:9009${WORK_DIR}/dist-disk-sets20" >"$WORK_DIR/dist-minio-9001.log" 2>&1 &
minio_pids[1]=$!
"${MINIO[@]}" server --address=:9002 "http://127.0.0.1:9000${WORK_DIR}/dist-disk-sets1" "http://127.0.0.1:9001${WORK_DIR}/dist-disk-sets2" "http://127.0.0.1:9002${WORK_DIR}/dist-disk-sets3" "http://127.0.0.1:9003${WORK_DIR}/dist-disk-sets4" "http://127.0.0.1:9004${WORK_DIR}/dist-disk-sets5" "http://127.0.0.1:9005${WORK_DIR}/dist-disk-sets6" "http://127.0.0.1:9006${WORK_DIR}/dist-disk-sets7" "http://127.0.0.1:9007${WORK_DIR}/dist-disk-sets8" "http://127.0.0.1:9008${WORK_DIR}/dist-disk-sets9" "http://127.0.0.1:9009${WORK_DIR}/dist-disk-sets10" "http://127.0.0.1:9000${WORK_DIR}/dist-disk-sets11" "http://127.0.0.1:9001${WORK_DIR}/dist-disk-sets12" "http://127.0.0.1:9002${WORK_DIR}/dist-disk-sets13" "http://127.0.0.1:9003${WORK_DIR}/dist-disk-sets14" "http://127.0.0.1:9004${WORK_DIR}/dist-disk-sets15" "http://127.0.0.1:9005${WORK_DIR}/dist-disk-sets16" "http://127.0.0.1:9006${WORK_DIR}/dist-disk-sets17" "http://127.0.0.1:9007${WORK_DIR}/dist-disk-sets18" "http://127.0.0.1:9008${WORK_DIR}/dist-disk-sets19" "http://127.0.0.1:9009${WORK_DIR}/dist-disk-sets20" >"$WORK_DIR/dist-minio-9002.log" 2>&1 &
minio_pids[2]=$!
"${MINIO[@]}" server --address=:9003 "http://127.0.0.1:9000${WORK_DIR}/dist-disk-sets1" "http://127.0.0.1:9001${WORK_DIR}/dist-disk-sets2" "http://127.0.0.1:9002${WORK_DIR}/dist-disk-sets3" "http://127.0.0.1:9003${WORK_DIR}/dist-disk-sets4" "http://127.0.0.1:9004${WORK_DIR}/dist-disk-sets5" "http://127.0.0.1:9005${WORK_DIR}/dist-disk-sets6" "http://127.0.0.1:9006${WORK_DIR}/dist-disk-sets7" "http://127.0.0.1:9007${WORK_DIR}/dist-disk-sets8" "http://127.0.0.1:9008${WORK_DIR}/dist-disk-sets9" "http://127.0.0.1:9009${WORK_DIR}/dist-disk-sets10" "http://127.0.0.1:9000${WORK_DIR}/dist-disk-sets11" "http://127.0.0.1:9001${WORK_DIR}/dist-disk-sets12" "http://127.0.0.1:9002${WORK_DIR}/dist-disk-sets13" "http://127.0.0.1:9003${WORK_DIR}/dist-disk-sets14" "http://127.0.0.1:9004${WORK_DIR}/dist-disk-sets15" "http://127.0.0.1:9005${WORK_DIR}/dist-disk-sets16" "http://127.0.0.1:9006${WORK_DIR}/dist-disk-sets17" "http://127.0.0.1:9007${WORK_DIR}/dist-disk-sets18" "http://127.0.0.1:9008${WORK_DIR}/dist-disk-sets19" "http://127.0.0.1:9009${WORK_DIR}/dist-disk-sets20" >"$WORK_DIR/dist-minio-9003.log" 2>&1 &
minio_pids[3]=$!
"${MINIO[@]}" server --address=:9004 "http://127.0.0.1:9000${WORK_DIR}/dist-disk-sets1" "http://127.0.0.1:9001${WORK_DIR}/dist-disk-sets2" "http://127.0.0.1:9002${WORK_DIR}/dist-disk-sets3" "http://127.0.0.1:9003${WORK_DIR}/dist-disk-sets4" "http://127.0.0.1:9004${WORK_DIR}/dist-disk-sets5" "http://127.0.0.1:9005${WORK_DIR}/dist-disk-sets6" "http://127.0.0.1:9006${WORK_DIR}/dist-disk-sets7" "http://127.0.0.1:9007${WORK_DIR}/dist-disk-sets8" "http://127.0.0.1:9008${WORK_DIR}/dist-disk-sets9" "http://127.0.0.1:9009${WORK_DIR}/dist-disk-sets10" "http://127.0.0.1:9000${WORK_DIR}/dist-disk-sets11" "http://127.0.0.1:9001${WORK_DIR}/dist-disk-sets12" "http://127.0.0.1:9002${WORK_DIR}/dist-disk-sets13" "http://127.0.0.1:9003${WORK_DIR}/dist-disk-sets14" "http://127.0.0.1:9004${WORK_DIR}/dist-disk-sets15" "http://127.0.0.1:9005${WORK_DIR}/dist-disk-sets16" "http://127.0.0.1:9006${WORK_DIR}/dist-disk-sets17" "http://127.0.0.1:9007${WORK_DIR}/dist-disk-sets18" "http://127.0.0.1:9008${WORK_DIR}/dist-disk-sets19" "http://127.0.0.1:9009${WORK_DIR}/dist-disk-sets20" >"$WORK_DIR/dist-minio-9004.log" 2>&1 &
minio_pids[4]=$!
"${MINIO[@]}" server --address=:9005 "http://127.0.0.1:9000${WORK_DIR}/dist-disk-sets1" "http://127.0.0.1:9001${WORK_DIR}/dist-disk-sets2" "http://127.0.0.1:9002${WORK_DIR}/dist-disk-sets3" "http://127.0.0.1:9003${WORK_DIR}/dist-disk-sets4" "http://127.0.0.1:9004${WORK_DIR}/dist-disk-sets5" "http://127.0.0.1:9005${WORK_DIR}/dist-disk-sets6" "http://127.0.0.1:9006${WORK_DIR}/dist-disk-sets7" "http://127.0.0.1:9007${WORK_DIR}/dist-disk-sets8" "http://127.0.0.1:9008${WORK_DIR}/dist-disk-sets9" "http://127.0.0.1:9009${WORK_DIR}/dist-disk-sets10" "http://127.0.0.1:9000${WORK_DIR}/dist-disk-sets11" "http://127.0.0.1:9001${WORK_DIR}/dist-disk-sets12" "http://127.0.0.1:9002${WORK_DIR}/dist-disk-sets13" "http://127.0.0.1:9003${WORK_DIR}/dist-disk-sets14" "http://127.0.0.1:9004${WORK_DIR}/dist-disk-sets15" "http://127.0.0.1:9005${WORK_DIR}/dist-disk-sets16" "http://127.0.0.1:9006${WORK_DIR}/dist-disk-sets17" "http://127.0.0.1:9007${WORK_DIR}/dist-disk-sets18" "http://127.0.0.1:9008${WORK_DIR}/dist-disk-sets19" "http://127.0.0.1:9009${WORK_DIR}/dist-disk-sets20" >"$WORK_DIR/dist-minio-9005.log" 2>&1 &
minio_pids[5]=$!
"${MINIO[@]}" server --address=:9006 "http://127.0.0.1:9000${WORK_DIR}/dist-disk-sets1" "http://127.0.0.1:9001${WORK_DIR}/dist-disk-sets2" "http://127.0.0.1:9002${WORK_DIR}/dist-disk-sets3" "http://127.0.0.1:9003${WORK_DIR}/dist-disk-sets4" "http://127.0.0.1:9004${WORK_DIR}/dist-disk-sets5" "http://127.0.0.1:9005${WORK_DIR}/dist-disk-sets6" "http://127.0.0.1:9006${WORK_DIR}/dist-disk-sets7" "http://127.0.0.1:9007${WORK_DIR}/dist-disk-sets8" "http://127.0.0.1:9008${WORK_DIR}/dist-disk-sets9" "http://127.0.0.1:9009${WORK_DIR}/dist-disk-sets10" "http://127.0.0.1:9000${WORK_DIR}/dist-disk-sets11" "http://127.0.0.1:9001${WORK_DIR}/dist-disk-sets12" "http://127.0.0.1:9002${WORK_DIR}/dist-disk-sets13" "http://127.0.0.1:9003${WORK_DIR}/dist-disk-sets14" "http://127.0.0.1:9004${WORK_DIR}/dist-disk-sets15" "http://127.0.0.1:9005${WORK_DIR}/dist-disk-sets16" "http://127.0.0.1:9006${WORK_DIR}/dist-disk-sets17" "http://127.0.0.1:9007${WORK_DIR}/dist-disk-sets18" "http://127.0.0.1:9008${WORK_DIR}/dist-disk-sets19" "http://127.0.0.1:9009${WORK_DIR}/dist-disk-sets20" >"$WORK_DIR/dist-minio-9006.log" 2>&1 &
minio_pids[6]=$!
"${MINIO[@]}" server --address=:9007 "http://127.0.0.1:9000${WORK_DIR}/dist-disk-sets1" "http://127.0.0.1:9001${WORK_DIR}/dist-disk-sets2" "http://127.0.0.1:9002${WORK_DIR}/dist-disk-sets3" "http://127.0.0.1:9003${WORK_DIR}/dist-disk-sets4" "http://127.0.0.1:9004${WORK_DIR}/dist-disk-sets5" "http://127.0.0.1:9005${WORK_DIR}/dist-disk-sets6" "http://127.0.0.1:9006${WORK_DIR}/dist-disk-sets7" "http://127.0.0.1:9007${WORK_DIR}/dist-disk-sets8" "http://127.0.0.1:9008${WORK_DIR}/dist-disk-sets9" "http://127.0.0.1:9009${WORK_DIR}/dist-disk-sets10" "http://127.0.0.1:9000${WORK_DIR}/dist-disk-sets11" "http://127.0.0.1:9001${WORK_DIR}/dist-disk-sets12" "http://127.0.0.1:9002${WORK_DIR}/dist-disk-sets13" "http://127.0.0.1:9003${WORK_DIR}/dist-disk-sets14" "http://127.0.0.1:9004${WORK_DIR}/dist-disk-sets15" "http://127.0.0.1:9005${WORK_DIR}/dist-disk-sets16" "http://127.0.0.1:9006${WORK_DIR}/dist-disk-sets17" "http://127.0.0.1:9007${WORK_DIR}/dist-disk-sets18" "http://127.0.0.1:9008${WORK_DIR}/dist-disk-sets19" "http://127.0.0.1:9009${WORK_DIR}/dist-disk-sets20" >"$WORK_DIR/dist-minio-9007.log" 2>&1 &
minio_pids[7]=$!
"${MINIO[@]}" server --address=:9008 "http://127.0.0.1:9000${WORK_DIR}/dist-disk-sets1" "http://127.0.0.1:9001${WORK_DIR}/dist-disk-sets2" "http://127.0.0.1:9002${WORK_DIR}/dist-disk-sets3" "http://127.0.0.1:9003${WORK_DIR}/dist-disk-sets4" "http://127.0.0.1:9004${WORK_DIR}/dist-disk-sets5" "http://127.0.0.1:9005${WORK_DIR}/dist-disk-sets6" "http://127.0.0.1:9006${WORK_DIR}/dist-disk-sets7" "http://127.0.0.1:9007${WORK_DIR}/dist-disk-sets8" "http://127.0.0.1:9008${WORK_DIR}/dist-disk-sets9" "http://127.0.0.1:9009${WORK_DIR}/dist-disk-sets10" "http://127.0.0.1:9000${WORK_DIR}/dist-disk-sets11" "http://127.0.0.1:9001${WORK_DIR}/dist-disk-sets12" "http://127.0.0.1:9002${WORK_DIR}/dist-disk-sets13" "http://127.0.0.1:9003${WORK_DIR}/dist-disk-sets14" "http://127.0.0.1:9004${WORK_DIR}/dist-disk-sets15" "http://127.0.0.1:9005${WORK_DIR}/dist-disk-sets16" "http://127.0.0.1:9006${WORK_DIR}/dist-disk-sets17" "http://127.0.0.1:9007${WORK_DIR}/dist-disk-sets18" "http://127.0.0.1:9008${WORK_DIR}/dist-disk-sets19" "http://127.0.0.1:9009${WORK_DIR}/dist-disk-sets20" >"$WORK_DIR/dist-minio-9008.log" 2>&1 &
minio_pids[8]=$!
"${MINIO[@]}" server --address=:9009 "http://127.0.0.1:9000${WORK_DIR}/dist-disk-sets1" "http://127.0.0.1:9001${WORK_DIR}/dist-disk-sets2" "http://127.0.0.1:9002${WORK_DIR}/dist-disk-sets3" "http://127.0.0.1:9003${WORK_DIR}/dist-disk-sets4" "http://127.0.0.1:9004${WORK_DIR}/dist-disk-sets5" "http://127.0.0.1:9005${WORK_DIR}/dist-disk-sets6" "http://127.0.0.1:9006${WORK_DIR}/dist-disk-sets7" "http://127.0.0.1:9007${WORK_DIR}/dist-disk-sets8" "http://127.0.0.1:9008${WORK_DIR}/dist-disk-sets9" "http://127.0.0.1:9009${WORK_DIR}/dist-disk-sets10" "http://127.0.0.1:9000${WORK_DIR}/dist-disk-sets11" "http://127.0.0.1:9001${WORK_DIR}/dist-disk-sets12" "http://127.0.0.1:9002${WORK_DIR}/dist-disk-sets13" "http://127.0.0.1:9003${WORK_DIR}/dist-disk-sets14" "http://127.0.0.1:9004${WORK_DIR}/dist-disk-sets15" "http://127.0.0.1:9005${WORK_DIR}/dist-disk-sets16" "http://127.0.0.1:9006${WORK_DIR}/dist-disk-sets17" "http://127.0.0.1:9007${WORK_DIR}/dist-disk-sets18" "http://127.0.0.1:9008${WORK_DIR}/dist-disk-sets19" "http://127.0.0.1:9009${WORK_DIR}/dist-disk-sets20" >"$WORK_DIR/dist-minio-9009.log" 2>&1 &
minio_pids[9]=$!
sleep 30
echo "${minio_pids[@]}"
}
function start_minio_dist_erasure()
{
declare -a minio_pids
"${MINIO[@]}" server --address=:9000 "http://127.0.0.1:9000${WORK_DIR}/dist-disk1" "http://127.0.0.1:9001${WORK_DIR}/dist-disk2" "http://127.0.0.1:9002${WORK_DIR}/dist-disk3" "http://127.0.0.1:9003${WORK_DIR}/dist-disk4" >"$WORK_DIR/dist-minio-9000.log" 2>&1 &
minio_pids[0]=$!
"${MINIO[@]}" server --address=:9001 "http://127.0.0.1:9000${WORK_DIR}/dist-disk1" "http://127.0.0.1:9001${WORK_DIR}/dist-disk2" "http://127.0.0.1:9002${WORK_DIR}/dist-disk3" "http://127.0.0.1:9003${WORK_DIR}/dist-disk4" >"$WORK_DIR/dist-minio-9001.log" 2>&1 &
minio_pids[1]=$!
"${MINIO[@]}" server --address=:9002 "http://127.0.0.1:9000${WORK_DIR}/dist-disk1" "http://127.0.0.1:9001${WORK_DIR}/dist-disk2" "http://127.0.0.1:9002${WORK_DIR}/dist-disk3" "http://127.0.0.1:9003${WORK_DIR}/dist-disk4" >"$WORK_DIR/dist-minio-9002.log" 2>&1 &
minio_pids[2]=$!
"${MINIO[@]}" server --address=:9003 "http://127.0.0.1:9000${WORK_DIR}/dist-disk1" "http://127.0.0.1:9001${WORK_DIR}/dist-disk2" "http://127.0.0.1:9002${WORK_DIR}/dist-disk3" "http://127.0.0.1:9003${WORK_DIR}/dist-disk4" >"$WORK_DIR/dist-minio-9003.log" 2>&1 &
minio_pids[3]=$!
sleep 30
echo "${minio_pids[@]}"
}
function start_minio_gateway_s3()
{
MINIO_ACCESS_KEY=Q3AM3UQ867SPQQA43P2F MINIO_SECRET_KEY=zuf+tfteSlswRu7BJ86wekitnifILbZam1KYY3TG \
"${MINIO[@]}" gateway s3 https://play.minio.io:9000 >"$WORK_DIR/minio-gateway-s3.log" 2>&1 &
minio_pid=$!
sleep 3
echo "$minio_pid"
}
function run_test_fs()
{
minio_pid="$(start_minio_fs)"
(cd "$WORK_DIR" && "$FUNCTIONAL_TESTS")
rv=$?
kill "$minio_pid"
sleep 3
if [ "$rv" -ne 0 ]; then
cat "$WORK_DIR/fs-minio.log"
fi
rm -f "$WORK_DIR/fs-minio.log"
return "$rv"
}
function run_test_erasure_sets() {
minio_pid="$(start_minio_erasure_sets)"
(cd "$WORK_DIR" && "$FUNCTIONAL_TESTS")
rv=$?
kill "$minio_pid"
sleep 3
if [ "$rv" -ne 0 ]; then
cat "$WORK_DIR/erasure-minio-sets.log"
fi
rm -f "$WORK_DIR/erasure-minio-sets.log"
return "$rv"
}
function run_test_dist_erasure_sets()
{
minio_pids=( $(start_minio_dist_erasure_sets) )
(cd "$WORK_DIR" && "$FUNCTIONAL_TESTS")
rv=$?
for pid in "${minio_pids[@]}"; do
kill "$pid"
done
sleep 3
if [ "$rv" -ne 0 ]; then
for i in $(seq 0 9); do
echo "server$i log:"
cat "$WORK_DIR/dist-minio-900$i.log"
done
fi
for i in $(seq 0 9); do
rm -f "$WORK_DIR/dist-minio-900$i.log"
done
return "$rv"
}
function run_test_erasure()
{
minio_pid="$(start_minio_erasure)"
(cd "$WORK_DIR" && "$FUNCTIONAL_TESTS")
rv=$?
kill "$minio_pid"
sleep 3
if [ "$rv" -ne 0 ]; then
cat "$WORK_DIR/erasure-minio.log"
fi
rm -f "$WORK_DIR/erasure-minio.log"
return "$rv"
}
function run_test_dist_erasure()
{
minio_pids=( $(start_minio_dist_erasure) )
(cd "$WORK_DIR" && "$FUNCTIONAL_TESTS")
rv=$?
for pid in "${minio_pids[@]}"; do
kill "$pid"
done
sleep 3
if [ "$rv" -ne 0 ]; then
echo "server1 log:"
cat "$WORK_DIR/dist-minio-9000.log"
echo "server2 log:"
cat "$WORK_DIR/dist-minio-9001.log"
echo "server3 log:"
cat "$WORK_DIR/dist-minio-9002.log"
echo "server4 log:"
cat "$WORK_DIR/dist-minio-9003.log"
fi
rm -f "$WORK_DIR/dist-minio-9000.log" "$WORK_DIR/dist-minio-9001.log" "$WORK_DIR/dist-minio-9002.log" "$WORK_DIR/dist-minio-9003.log"
return "$rv"
}
function run_test_gateway_s3()
{
minio_pid="$(start_minio_gateway_s3)"
export ACCESS_KEY=Q3AM3UQ867SPQQA43P2F
export SECRET_KEY=zuf+tfteSlswRu7BJ86wekitnifILbZam1KYY3TG
(cd "$WORK_DIR" && "$FUNCTIONAL_TESTS")
rv=$?
kill "$minio_pid"
sleep 3
if [ "$rv" -ne 0 ]; then
cat "$WORK_DIR/minio-gateway-s3.log"
fi
rm -f "$WORK_DIR/minio-gateway-s3.log"
return "$rv"
}
function __init__()
{
echo "Initializing environment"
mkdir -p "$WORK_DIR"
mkdir -p "$MINIO_CONFIG_DIR"
mkdir -p "$MINT_DATA_DIR"
if ! go get -u github.com/minio/mc; then
echo "failed to download https://github.com/minio/mc"
exit 1
fi
/bin/cp -a "$(go env GOPATH)"/bin/mc "$WORK_DIR/mc"
chmod a+x "$WORK_DIR/mc"
shred -n 1 -s 1M - 1>"$FILE_1_MB" 2>/dev/null
shred -n 1 -s 65M - 1>"$FILE_65_MB" 2>/dev/null
## version is purposefully set to '3' for minio to migrate the configuration file
echo '{"version": "3", "credential": {"accessKey": "minio", "secretKey": "minio123"}, "region": "us-east-1"}' > "$MINIO_CONFIG_DIR/config.json"
if ! wget -q -O "$FUNCTIONAL_TESTS" https://raw.githubusercontent.com/minio/mc/master/functional-tests.sh; then
echo "failed to download https://raw.githubusercontent.com/minio/mc/master/functional-tests.sh"
exit 1
fi
chmod a+x "$FUNCTIONAL_TESTS"
}
function main()
{
echo "Testing in FS setup"
if ! run_test_fs; then
echo "FAILED"
rm -fr "$WORK_DIR"
exit 1
fi
echo "Testing in Erasure setup"
if ! run_test_erasure; then
echo "FAILED"
rm -fr "$WORK_DIR"
exit 1
fi
echo "Testing in Distributed Erasure setup"
if ! run_test_dist_erasure; then
echo "FAILED"
rm -fr "$WORK_DIR"
exit 1
fi
echo "Testing in Erasure setup as sets"
if ! run_test_erasure_sets; then
echo "FAILED"
rm -fr "$WORK_DIR"
exit 1
fi
echo "Testing in Distributed Erasure setup as sets"
if ! run_test_dist_erasure_sets; then
echo "FAILED"
rm -fr "$WORK_DIR"
exit 1
fi
echo "Testing in Gateway S3 setup"
if ! run_test_gateway_s3; then
echo "FAILED"
rm -fr "$WORK_DIR"
exit 1
fi
rm -fr "$WORK_DIR"
}
( __init__ "$@" && main "$@" )
rv=$?
rm -fr "$WORK_DIR"
exit "$rv"

File diff suppressed because it is too large

File diff suppressed because it is too large

cmd/admin-heal-ops.go Normal file

@@ -0,0 +1,654 @@
/*
* Minio Cloud Storage, (C) 2017 Minio, Inc.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package cmd
import (
"encoding/json"
"fmt"
"strings"
"sync"
"time"
"github.com/minio/minio/pkg/madmin"
)
// healStatusSummary - overall short summary of a healing sequence
type healStatusSummary string
// healStatusSummary constants
const (
healNotStartedStatus healStatusSummary = "not started"
healRunningStatus = "running"
healStoppedStatus = "stopped"
healFinishedStatus = "finished"
)
const (
// a heal sequence with this many un-consumed heal result
// items blocks until heal-status consumption resumes or is
// aborted due to timeout.
maxUnconsumedHealResultItems = 1000
// if no heal-results are consumed (via the heal-status API)
// for this timeout duration, the heal sequence is aborted.
healUnconsumedTimeout = 24 * time.Hour
// time-duration to keep heal sequence state after it
// completes.
keepHealSeqStateDuration = time.Minute * 10
)
var (
errHealIdleTimeout = fmt.Errorf("healing results were not consumed for too long")
errHealPushStopNDiscard = fmt.Errorf("heal push stopped due to heal stop signal")
errHealStopSignalled = fmt.Errorf("heal stop signalled")
errFnHealFromAPIErr = func(err error) error {
errCode := toAPIErrorCode(err)
apiErr := getAPIError(errCode)
return fmt.Errorf("Heal internal error: %s: %s",
apiErr.Code, apiErr.Description)
}
)
// healSequenceStatus - accumulated status of the heal sequence
type healSequenceStatus struct {
// lock to update this structure as it is concurrently
// accessed
updateLock *sync.RWMutex
// summary and detail for failures
Summary healStatusSummary `json:"Summary"`
FailureDetail string `json:"Detail,omitempty"`
StartTime time.Time `json:"StartTime"`
// disk information
NumDisks int `json:"NumDisks"`
// settings for the heal sequence
HealSettings madmin.HealOpts `json:"Settings"`
// slice of available heal result records
Items []madmin.HealResultItem `json:"Items"`
}
// structure to hold state of all heal sequences in server memory
type allHealState struct {
sync.Mutex
// map of heal path to heal sequence
healSeqMap map[string]*healSequence
}
var (
// global server heal state
globalAllHealState allHealState
)
// initAllHealState - initialize healing apparatus
func initAllHealState(isErasureMode bool) {
if !isErasureMode {
return
}
globalAllHealState = allHealState{
healSeqMap: make(map[string]*healSequence),
}
}
// getHealSequence - Retrieve a heal sequence by path. The second
// return value indicates whether a heal sequence exists for the path.
func (ahs *allHealState) getHealSequence(path string) (h *healSequence, exists bool) {
ahs.Lock()
defer ahs.Unlock()
h, exists = ahs.healSeqMap[path]
return h, exists
}
// LaunchNewHealSequence - launches a background routine that performs
// healing according to the healSequence argument. For each heal
// sequence, state is stored in `globalAllHealState`, a map from the
// heal path to the `healSequence` object that tracks the state of
// that heal sequence.
//
// Heal results are persisted in server memory for
// `keepHealSeqStateDuration`. This function also launches a
// background routine to clean up heal results after the
// aforementioned duration.
func (ahs *allHealState) LaunchNewHealSequence(h *healSequence) (
respBytes []byte, errCode APIErrorCode, errMsg string) {
existsAndLive := false
he, exists := ahs.getHealSequence(h.path)
if exists {
if !he.hasEnded() || len(he.currentStatus.Items) > 0 {
existsAndLive = true
}
}
if existsAndLive {
// A heal sequence exists on the given path.
if h.forceStarted {
// stop the running heal sequence - wait for
// it to finish.
he.stop()
for !he.hasEnded() {
time.Sleep(10 * time.Second)
}
} else {
errMsg = "Heal is already running on the given path " +
"(use force-start option to stop and start afresh). " +
fmt.Sprintf("The heal was started by IP %s at %s",
h.clientAddress, h.startTime)
return nil, ErrHealAlreadyRunning, errMsg
}
}
ahs.Lock()
defer ahs.Unlock()
// Check if new heal sequence to be started overlaps with any
// existing, running sequence
for k, hSeq := range ahs.healSeqMap {
if !hSeq.hasEnded() && (strings.HasPrefix(k, h.path) ||
strings.HasPrefix(h.path, k)) {
errMsg = "The provided heal sequence path overlaps with an existing " +
fmt.Sprintf("heal path: %s", k)
return nil, ErrHealOverlappingPaths, errMsg
}
}
// Add heal state and start sequence
ahs.healSeqMap[h.path] = h
// Launch top-level background heal go-routine
go h.healSequenceStart()
// Launch clean-up routine to remove this heal sequence (after
// it ends) from the global state after timeout has elapsed.
go func() {
var keepStateTimeout <-chan time.Time
ticker := time.NewTicker(time.Minute)
defer ticker.Stop()
everyMinute := ticker.C
for {
select {
// Check every minute if heal sequence has ended.
case <-everyMinute:
if h.hasEnded() {
keepStateTimeout = time.After(keepHealSeqStateDuration)
everyMinute = nil
}
// This case does not fire until the heal
// sequence completes.
case <-keepStateTimeout:
// Heal sequence has ended, keep
// results state duration has elapsed,
// so purge state.
ahs.Lock()
defer ahs.Unlock()
delete(ahs.healSeqMap, h.path)
return
case <-globalServiceDoneCh:
// server could be restarting - need
// to exit immediately
return
}
}
}()
b, err := json.Marshal(madmin.HealStartSuccess{
h.clientToken,
h.clientAddress,
h.startTime,
})
if err != nil {
errorIf(err, "Failed to marshal heal result into json.")
return nil, ErrInternalError, ""
}
return b, ErrNone, ""
}
// PopHealStatusJSON - Called by heal-status API. It fetches the heal
// status results from global state and returns its JSON
// representation. The clientToken helps ensure there aren't
// conflicting clients fetching status.
func (ahs *allHealState) PopHealStatusJSON(path string,
clientToken string) ([]byte, APIErrorCode) {
// fetch heal state for given path
h, exists := ahs.getHealSequence(path)
if !exists {
// If there is no such heal sequence, return error.
return nil, ErrHealNoSuchProcess
}
// Check if client-token is valid
if clientToken != h.clientToken {
return nil, ErrHealInvalidClientToken
}
// Take lock to access and update the heal-sequence
h.currentStatus.updateLock.Lock()
defer h.currentStatus.updateLock.Unlock()
numItems := len(h.currentStatus.Items)
// calculate index of most recently available heal result
// record.
lastResultIndex := h.lastSentResultIndex
if numItems > 0 {
lastResultIndex = h.currentStatus.Items[numItems-1].ResultIndex
}
// After sending status to client, and before relinquishing
// the updateLock, reset Items to nil and record the result
// index sent to the client.
defer func(i int64) {
h.lastSentResultIndex = i
h.currentStatus.Items = nil
}(lastResultIndex)
jbytes, err := json.Marshal(h.currentStatus)
if err != nil {
errorIf(err, "Failed to marshal heal result into json.")
return nil, ErrInternalError
}
return jbytes, ErrNone
}
// healSequence - state for each heal sequence initiated on the
// server.
type healSequence struct {
// bucket, and prefix on which heal seq. was initiated
bucket, objPrefix string
// path is just bucket + "/" + objPrefix
path string
// time at which heal sequence was started
startTime time.Time
// Heal client info
clientToken, clientAddress string
// was this heal sequence force started?
forceStarted bool
// heal settings applied to this heal sequence
settings madmin.HealOpts
// current accumulated status of the heal sequence
currentStatus healSequenceStatus
// channel signalled by background routine when traversal has
// completed
traverseAndHealDoneCh chan error
// channel to signal heal sequence to stop (e.g. from the
// heal-stop API)
stopSignalCh chan struct{}
// the last result index sent to client
lastSentResultIndex int64
}
// newHealSequence - creates a new heal sequence; assumes bucket and
// objPrefix are already validated.
func newHealSequence(bucket, objPrefix, clientAddr string,
numDisks int, hs madmin.HealOpts, forceStart bool) *healSequence {
return &healSequence{
bucket: bucket,
objPrefix: objPrefix,
path: bucket + "/" + objPrefix,
startTime: UTCNow(),
clientToken: mustGetUUID(),
clientAddress: clientAddr,
forceStarted: forceStart,
settings: hs,
currentStatus: healSequenceStatus{
Summary: healNotStartedStatus,
HealSettings: hs,
NumDisks: numDisks,
updateLock: &sync.RWMutex{},
},
traverseAndHealDoneCh: make(chan error),
stopSignalCh: make(chan struct{}),
}
}
// isQuitting - determines if the heal sequence is quitting (due to an
// external signal)
func (h *healSequence) isQuitting() bool {
select {
case <-h.stopSignalCh:
return true
default:
return false
}
}
// check if the heal sequence has ended
func (h *healSequence) hasEnded() bool {
h.currentStatus.updateLock.RLock()
summary := h.currentStatus.Summary
h.currentStatus.updateLock.RUnlock()
return summary == healStoppedStatus || summary == healFinishedStatus
}
// stops the heal sequence - safe to call multiple times.
func (h *healSequence) stop() {
select {
case <-h.stopSignalCh:
default:
close(h.stopSignalCh)
}
}
// pushHealResultItem - pushes a heal result item for consumption in
// the heal-status API. It blocks once maxUnconsumedHealResultItems
// results are pending. When it blocks, the heal sequence
// routine is effectively paused - this happens when the server has
// accumulated the maximum number of heal records per heal
// sequence. When the client consumes further records, the heal
// sequence automatically resumes. The return value indicates if the
// operation succeeded.
func (h *healSequence) pushHealResultItem(r madmin.HealResultItem) error {
// start a timer to keep an upper time limit to find an empty
// slot to add the given heal result - if no slot is found it
// means that the server is holding the maximum amount of
// heal-results in memory and the client has not consumed them
// for too long.
unconsumedTimer := time.NewTimer(healUnconsumedTimeout)
defer func() {
// stop the timeout timer so it is garbage collected.
if !unconsumedTimer.Stop() {
<-unconsumedTimer.C
}
}()
var itemsLen int
for {
h.currentStatus.updateLock.Lock()
itemsLen = len(h.currentStatus.Items)
if itemsLen == maxUnconsumedHealResultItems {
// unlock and wait to check again if we can push
h.currentStatus.updateLock.Unlock()
// wait for a second, or quit if an external
// stop signal is received or the
// unconsumedTimer fires.
select {
// Check after a second
case <-time.After(time.Second):
continue
case <-h.stopSignalCh:
// discard result and return.
return errHealPushStopNDiscard
// Timeout if no results consumed for too
// long.
case <-unconsumedTimer.C:
return errHealIdleTimeout
}
}
break
}
// Set the correct result index for the new result item
if itemsLen > 0 {
r.ResultIndex = 1 + h.currentStatus.Items[itemsLen-1].ResultIndex
} else {
r.ResultIndex = 1 + h.lastSentResultIndex
}
// append to results
h.currentStatus.Items = append(h.currentStatus.Items, r)
// release lock
h.currentStatus.updateLock.Unlock()
// This is a "safe" point for the heal sequence to quit if
// signalled externally.
if h.isQuitting() {
return errHealStopSignalled
}
return nil
}
// healSequenceStart - this is the top-level background heal
// routine. It launches another go-routine that actually traverses
// on-disk data, checks and heals according to the selected
// settings. This go-routine itself, (1) monitors the traversal
// routine for completion, and (2) listens for external stop
// signals. When either event happens, it sets the finish status for
// the heal-sequence.
func (h *healSequence) healSequenceStart() {
// Set status as running
h.currentStatus.updateLock.Lock()
h.currentStatus.Summary = healRunningStatus
h.currentStatus.StartTime = UTCNow()
h.currentStatus.updateLock.Unlock()
go h.traverseAndHeal()
select {
case err, ok := <-h.traverseAndHealDoneCh:
h.currentStatus.updateLock.Lock()
defer h.currentStatus.updateLock.Unlock()
// Heal traversal is complete.
if ok {
// heal traversal had an error.
h.currentStatus.Summary = healStoppedStatus
h.currentStatus.FailureDetail = err.Error()
} else {
// heal traversal succeeded.
h.currentStatus.Summary = healFinishedStatus
}
case <-h.stopSignalCh:
h.currentStatus.updateLock.Lock()
h.currentStatus.Summary = healStoppedStatus
h.currentStatus.FailureDetail = errHealStopSignalled.Error()
h.currentStatus.updateLock.Unlock()
// drain traverse channel so the traversal
// go-routine does not leak.
go func() {
// Eventually the traversal go-routine closes
// the channel and returns, so this go-routine
// itself will not leak.
<-h.traverseAndHealDoneCh
}()
}
}
// traverseAndHeal - traverses on-disk data and performs healing
// according to settings. At each "safe" point it also checks if an
// external quit signal has been received and quits if so. Since the
// healing traversal may be mutating on-disk data when an external
// quit signal is received, this routine cannot quit immediately and
// has to wait until a safe point is reached, such as between scanning
// two objects.
func (h *healSequence) traverseAndHeal() {
var err error
checkErr := func(f func() error) {
switch {
case err != nil:
return
case h.isQuitting():
err = errHealStopSignalled
return
}
err = f()
}
// Start with format healing
checkErr(h.healDiskFormat)
// Heal buckets and objects
checkErr(h.healBuckets)
if err != nil {
h.traverseAndHealDoneCh <- err
}
close(h.traverseAndHealDoneCh)
}
// healDiskFormat - heals format.json; the returned error indicates
// whether healing failed.
func (h *healSequence) healDiskFormat() error {
// Get current object layer instance.
objectAPI := newObjectLayerFn()
if objectAPI == nil {
return errServerNotInitialized
}
res, err := objectAPI.HealFormat(h.settings.DryRun)
if err != nil {
return errFnHealFromAPIErr(err)
}
peersReInitFormat(globalAdminPeers, h.settings.DryRun)
// Push format heal result
return h.pushHealResultItem(res)
}
// healBuckets - heals all buckets, or only the given bucket if one
// was specified.
func (h *healSequence) healBuckets() error {
// 1. If a bucket was specified, heal only the bucket.
if h.bucket != "" {
return h.healBucket(h.bucket)
}
// Get current object layer instance.
objectAPI := newObjectLayerFn()
if objectAPI == nil {
return errServerNotInitialized
}
buckets, err := objectAPI.ListBucketsHeal()
if err != nil {
return errFnHealFromAPIErr(err)
}
for _, bucket := range buckets {
if err = h.healBucket(bucket.Name); err != nil {
return err
}
}
return nil
}
// healBucket - traverses and heals given bucket
func (h *healSequence) healBucket(bucket string) error {
if h.isQuitting() {
return errHealStopSignalled
}
// Get current object layer instance.
objectAPI := newObjectLayerFn()
if objectAPI == nil {
return errServerNotInitialized
}
bucketLock := globalNSMutex.NewNSLock(bucket, "")
if err := bucketLock.GetLock(globalHealingTimeout); err != nil {
return err
}
results, err := objectAPI.HealBucket(bucket, h.settings.DryRun)
// push any available results before checking for error
for _, result := range results {
if perr := h.pushHealResultItem(result); perr != nil {
bucketLock.Unlock()
return perr
}
}
bucketLock.Unlock()
// handle heal-bucket error
if err != nil {
return err
}
if !h.settings.Recursive {
if h.objPrefix != "" {
// Check if an object named as the objPrefix exists,
// and if so heal it.
_, err = objectAPI.GetObjectInfo(bucket, h.objPrefix)
if err == nil {
err = h.healObject(bucket, h.objPrefix)
if err != nil {
return err
}
}
}
return nil
}
marker := ""
isTruncated := true
for isTruncated {
objectInfos, err := objectAPI.ListObjectsHeal(bucket,
h.objPrefix, marker, "", 1000)
if err != nil {
return errFnHealFromAPIErr(err)
}
for _, o := range objectInfos.Objects {
if err := h.healObject(o.Bucket, o.Name); err != nil {
return err
}
}
isTruncated = objectInfos.IsTruncated
marker = objectInfos.NextMarker
}
return nil
}
// healObject - heal the given object and record result
func (h *healSequence) healObject(bucket, object string) error {
if h.isQuitting() {
return errHealStopSignalled
}
// Get current object layer instance.
objectAPI := newObjectLayerFn()
if objectAPI == nil {
return errServerNotInitialized
}
hri, err := objectAPI.HealObject(bucket, object, h.settings.DryRun)
if err != nil {
hri.Detail = err.Error()
}
return h.pushHealResultItem(hri)
}
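
The comments on pushHealResultItem and PopHealStatusJSON above describe a bounded producer/consumer handshake: the heal traversal blocks once the in-memory result buffer is full, and each heal-status poll drains the buffer so traversal can resume. Below is a minimal, self-contained sketch of that pattern only - the type names, buffer size and timings are illustrative stand-ins, not the server's values.

package main

import (
	"errors"
	"fmt"
	"sync"
	"time"
)

var errIdleTimeout = errors.New("results not consumed for too long")

// resultBuffer mimics healSequenceStatus.Items: a mutex-guarded slice
// with a hard cap that producers must respect.
type resultBuffer struct {
	mu    sync.Mutex
	items []int
	max   int
}

// push blocks while the buffer is full, re-checking periodically, and
// gives up after the idle timeout - the same shape as pushHealResultItem.
func (b *resultBuffer) push(v int, idle time.Duration) error {
	timeout := time.After(idle)
	for {
		b.mu.Lock()
		if len(b.items) < b.max {
			b.items = append(b.items, v)
			b.mu.Unlock()
			return nil
		}
		b.mu.Unlock()
		select {
		case <-time.After(10 * time.Millisecond): // retry check for space
		case <-timeout:
			return errIdleTimeout
		}
	}
}

// pop hands off everything accumulated so far and resets the buffer,
// like PopHealStatusJSON resetting Items to nil after each status call.
func (b *resultBuffer) pop() []int {
	b.mu.Lock()
	defer b.mu.Unlock()
	out := b.items
	b.items = nil
	return out
}

func main() {
	buf := &resultBuffer{max: 3}
	go func() {
		for i := 0; i < 10; i++ {
			if err := buf.push(i, time.Second); err != nil {
				fmt.Println("producer:", err)
				return
			}
		}
	}()
	for drained := 0; drained < 10; {
		time.Sleep(20 * time.Millisecond)
		batch := buf.pop()
		drained += len(batch)
		fmt.Println("consumed:", batch)
	}
}

Run as-is, the producer stalls whenever three results are pending and resumes after each consume; remove the consumer loop and the idle-timeout path fires instead.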


@@ -16,7 +16,15 @@
package cmd
import router "github.com/gorilla/mux"
import (
"net/http"
router "github.com/gorilla/mux"
)
const (
adminAPIPathPrefix = "/minio/admin"
)
// adminAPIHandlers provides HTTP handlers for Minio admin API.
type adminAPIHandlers struct {
@@ -27,50 +35,44 @@ func registerAdminRouter(mux *router.Router) {
adminAPI := adminAPIHandlers{}
// Admin router
adminRouter := mux.NewRoute().PathPrefix("/").Subrouter()
adminRouter := mux.NewRoute().PathPrefix(adminAPIPathPrefix).Subrouter()
// Version handler
adminRouter.Methods(http.MethodGet).Path("/version").HandlerFunc(adminAPI.VersionHandler)
adminV1Router := adminRouter.PathPrefix("/v1").Subrouter()
/// Service operations
// Service status
adminRouter.Methods("GET").Queries("service", "").Headers(minioAdminOpHeader, "status").HandlerFunc(adminAPI.ServiceStatusHandler)
adminV1Router.Methods(http.MethodGet).Path("/service").HandlerFunc(adminAPI.ServiceStatusHandler)
// Service restart
adminRouter.Methods("POST").Queries("service", "").Headers(minioAdminOpHeader, "restart").HandlerFunc(adminAPI.ServiceRestartHandler)
// Service update credentials
adminRouter.Methods("POST").Queries("service", "").Headers(minioAdminOpHeader, "set-credentials").HandlerFunc(adminAPI.ServiceCredentialsHandler)
// Service restart and stop - TODO
adminV1Router.Methods(http.MethodPost).Path("/service").HandlerFunc(adminAPI.ServiceStopNRestartHandler)
// Info operations
adminRouter.Methods("GET").Queries("info", "").HandlerFunc(adminAPI.ServerInfoHandler)
adminV1Router.Methods(http.MethodGet).Path("/info").HandlerFunc(adminAPI.ServerInfoHandler)
/// Lock operations
// List Locks
adminRouter.Methods("GET").Queries("lock", "").Headers(minioAdminOpHeader, "list").HandlerFunc(adminAPI.ListLocksHandler)
adminV1Router.Methods(http.MethodGet).Path("/locks").HandlerFunc(adminAPI.ListLocksHandler)
// Clear locks
adminRouter.Methods("POST").Queries("lock", "").Headers(minioAdminOpHeader, "clear").HandlerFunc(adminAPI.ClearLocksHandler)
adminV1Router.Methods(http.MethodDelete).Path("/locks").HandlerFunc(adminAPI.ClearLocksHandler)
/// Heal operations
// List Objects needing heal.
adminRouter.Methods("GET").Queries("heal", "").Headers(minioAdminOpHeader, "list-objects").HandlerFunc(adminAPI.ListObjectsHealHandler)
// List Uploads needing heal.
adminRouter.Methods("GET").Queries("heal", "").Headers(minioAdminOpHeader, "list-uploads").HandlerFunc(adminAPI.ListUploadsHealHandler)
// List Buckets needing heal.
adminRouter.Methods("GET").Queries("heal", "").Headers(minioAdminOpHeader, "list-buckets").HandlerFunc(adminAPI.ListBucketsHealHandler)
// Heal Buckets.
adminRouter.Methods("POST").Queries("heal", "").Headers(minioAdminOpHeader, "bucket").HandlerFunc(adminAPI.HealBucketHandler)
// Heal Objects.
adminRouter.Methods("POST").Queries("heal", "").Headers(minioAdminOpHeader, "object").HandlerFunc(adminAPI.HealObjectHandler)
// Heal Format.
adminRouter.Methods("POST").Queries("heal", "").Headers(minioAdminOpHeader, "format").HandlerFunc(adminAPI.HealFormatHandler)
// Heal Uploads.
adminRouter.Methods("POST").Queries("heal", "").Headers(minioAdminOpHeader, "upload").HandlerFunc(adminAPI.HealUploadHandler)
// Heal processing endpoint.
adminV1Router.Methods(http.MethodPost).Path("/heal/").HandlerFunc(adminAPI.HealHandler)
adminV1Router.Methods(http.MethodPost).Path("/heal/{bucket}").HandlerFunc(adminAPI.HealHandler)
adminV1Router.Methods(http.MethodPost).Path("/heal/{bucket}/{prefix:.*}").HandlerFunc(adminAPI.HealHandler)
/// Config operations
// Update credentials
adminV1Router.Methods(http.MethodPut).Path("/config/credential").HandlerFunc(adminAPI.UpdateCredentialsHandler)
// Get config
adminRouter.Methods("GET").Queries("config", "").Headers(minioAdminOpHeader, "get").HandlerFunc(adminAPI.GetConfigHandler)
// Set Config
adminRouter.Methods("PUT").Queries("config", "").Headers(minioAdminOpHeader, "set").HandlerFunc(adminAPI.SetConfigHandler)
adminV1Router.Methods(http.MethodGet).Path("/config").HandlerFunc(adminAPI.GetConfigHandler)
// Set config
adminV1Router.Methods(http.MethodPut).Path("/config").HandlerFunc(adminAPI.SetConfigHandler)
}
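
The rewritten registerAdminRouter above replaces header-dispatched routes (Queries plus minioAdminOpHeader) with plain versioned paths under /minio/admin/v1. The following is a runnable gorilla/mux sketch of just that layout; the stub handlers and the port are placeholders, and the real handlers additionally authenticate every request, which this omits.

package main

import (
	"fmt"
	"log"
	"net/http"

	"github.com/gorilla/mux"
)

func main() {
	r := mux.NewRouter()

	// Fixed admin prefix, mirroring adminAPIPathPrefix.
	admin := r.PathPrefix("/minio/admin").Subrouter()
	admin.Methods(http.MethodGet).Path("/version").
		HandlerFunc(func(w http.ResponseWriter, _ *http.Request) {
			fmt.Fprintln(w, "v1") // stub for VersionHandler
		})

	// Versioned subrouter, mirroring the "/v1" group.
	v1 := admin.PathPrefix("/v1").Subrouter()
	v1.Methods(http.MethodGet).Path("/service").
		HandlerFunc(func(w http.ResponseWriter, _ *http.Request) {
			fmt.Fprintln(w, "status: ok") // stub for ServiceStatusHandler
		})
	v1.Methods(http.MethodPost).Path("/heal/{bucket}/{prefix:.*}").
		HandlerFunc(func(w http.ResponseWriter, req *http.Request) {
			vars := mux.Vars(req)
			fmt.Fprintf(w, "heal %s/%s\n", vars["bucket"], vars["prefix"])
		})

	log.Fatal(http.ListenAndServe(":8080", r))
}

Path-based routes keep every operation addressable by method and URL alone, which is what makes per-path handlers such as HealHandler possible.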


@@ -18,25 +18,24 @@ package cmd
import (
"encoding/json"
"errors"
"fmt"
"net"
"os"
"path"
"path/filepath"
"reflect"
"sort"
"sync"
"time"
"github.com/minio/minio-go/pkg/set"
"github.com/minio/minio/pkg/errors"
)
const (
// Admin service names
serviceRestartRPC = "Admin.Restart"
signalServiceRPC = "Admin.SignalService"
reInitFormatRPC = "Admin.ReInitFormat"
listLocksRPC = "Admin.ListLocks"
reInitDisksRPC = "Admin.ReInitDisks"
serverInfoDataRPC = "Admin.ServerInfoData"
getConfigRPC = "Admin.GetConfig"
writeTmpConfigRPC = "Admin.WriteTmpConfig"
@@ -56,60 +55,75 @@ type remoteAdminClient struct {
// adminCmdRunner - abstracts local and remote execution of admin
// commands like service stop and service restart.
type adminCmdRunner interface {
Restart() error
SignalService(s serviceSignal) error
ReInitFormat(dryRun bool) error
ListLocks(bucket, prefix string, duration time.Duration) ([]VolumeLockInfo, error)
ReInitDisks() error
ServerInfoData() (ServerInfoData, error)
GetConfig() ([]byte, error)
WriteTmpConfig(tmpFileName string, configBytes []byte) error
CommitConfig(tmpFileName string) error
}
// Restart - Sends a message over channel to the go-routine
// responsible for restarting the process.
func (lc localAdminClient) Restart() error {
globalServiceSignalCh <- serviceRestart
var errUnsupportedSignal = fmt.Errorf("unsupported signal: only restart and stop signals are supported")
// SignalService - sends a restart or stop signal to the local server
func (lc localAdminClient) SignalService(s serviceSignal) error {
switch s {
case serviceRestart, serviceStop:
globalServiceSignalCh <- s
default:
return errUnsupportedSignal
}
return nil
}
// ReInitFormat - re-initialize disk format.
func (lc localAdminClient) ReInitFormat(dryRun bool) error {
objectAPI := newObjectLayerFn()
if objectAPI == nil {
return errServerNotInitialized
}
_, err := objectAPI.HealFormat(dryRun)
return err
}
// ListLocks - Fetches lock information from local lock instrumentation.
func (lc localAdminClient) ListLocks(bucket, prefix string, duration time.Duration) ([]VolumeLockInfo, error) {
return listLocksInfo(bucket, prefix, duration), nil
}
// Restart - Sends restart command to remote server via RPC.
func (rc remoteAdminClient) Restart() error {
args := AuthRPCArgs{}
func (rc remoteAdminClient) SignalService(s serviceSignal) (err error) {
switch s {
case serviceRestart, serviceStop:
reply := AuthRPCReply{}
err = rc.Call(signalServiceRPC, &SignalServiceArgs{Sig: s},
&reply)
default:
err = errUnsupportedSignal
}
return err
}
// ReInitFormat - re-initialize disk format, remotely.
func (rc remoteAdminClient) ReInitFormat(dryRun bool) error {
reply := AuthRPCReply{}
return rc.Call(serviceRestartRPC, &args, &reply)
return rc.Call(reInitFormatRPC, &ReInitFormatArgs{
DryRun: dryRun,
}, &reply)
}
// ListLocks - Sends list locks command to remote server via RPC.
func (rc remoteAdminClient) ListLocks(bucket, prefix string, duration time.Duration) ([]VolumeLockInfo, error) {
listArgs := ListLocksQuery{
bucket: bucket,
prefix: prefix,
duration: duration,
Bucket: bucket,
Prefix: prefix,
Duration: duration,
}
var reply ListLocksReply
if err := rc.Call(listLocksRPC, &listArgs, &reply); err != nil {
return nil, err
}
return reply.volLocks, nil
}
// ReInitDisks - There is nothing to do here, heal format REST API
// handler has already formatted and reinitialized the local disks.
func (lc localAdminClient) ReInitDisks() error {
return nil
}
// ReInitDisks - Signals peers via RPC to reinitialize their disks and
// object layer.
func (rc remoteAdminClient) ReInitDisks() error {
args := AuthRPCArgs{}
reply := AuthRPCReply{}
return rc.Call(reInitDisksRPC, &args, &reply)
return reply.VolLocks, nil
}
// ServerInfoData - Returns the server info of this server.
@@ -139,7 +153,7 @@ func (lc localAdminClient) ServerInfoData() (sid ServerInfoData, e error) {
Version: Version,
CommitID: CommitID,
SQSARN: arns,
Region: serverConfig.GetRegion(),
Region: globalServerConfig.GetRegion(),
},
}, nil
}
@@ -158,11 +172,11 @@ func (rc remoteAdminClient) ServerInfoData() (sid ServerInfoData, e error) {
// GetConfig - returns config.json of the local server.
func (lc localAdminClient) GetConfig() ([]byte, error) {
if serverConfig == nil {
return nil, errors.New("config not present")
if globalServerConfig == nil {
return nil, fmt.Errorf("config not present")
}
return json.Marshal(serverConfig)
return json.Marshal(globalServerConfig)
}
// GetConfig - returns config.json of the remote server.
@@ -225,10 +239,11 @@ func (rc remoteAdminClient) CommitConfig(tmpFileName string) error {
return nil
}
// adminPeer - represents an entity that implements Restart methods.
// adminPeer - represents an entity that implements admin API RPCs.
type adminPeer struct {
addr string
cmdRunner adminCmdRunner
isLocal bool
}
// type alias for a collection of adminPeer.
@@ -243,10 +258,11 @@ func makeAdminPeers(endpoints EndpointList) (adminPeerList adminPeers) {
adminPeerList = append(adminPeerList, adminPeer{
thisPeer,
localAdminClient{},
true,
})
hostSet := set.CreateStringSet(globalMinioAddr)
cred := serverConfig.GetCredential()
cred := globalServerConfig.GetCredential()
serviceEndpoint := path.Join(minioReservedBucketPath, adminPath)
for _, host := range GetRemotePeers(endpoints) {
if hostSet.Contains(host) {
@@ -269,16 +285,36 @@ func makeAdminPeers(endpoints EndpointList) (adminPeerList adminPeers) {
return adminPeerList
}
// peersReInitFormat - reinitialize remote object layers to new format.
func peersReInitFormat(peers adminPeers, dryRun bool) error {
errs := make([]error, len(peers))
// Send ReInitFormat RPC call to all nodes.
// for local adminPeer this is a no-op.
wg := sync.WaitGroup{}
for i, peer := range peers {
wg.Add(1)
go func(idx int, peer adminPeer) {
defer wg.Done()
if !peer.isLocal {
errs[idx] = peer.cmdRunner.ReInitFormat(dryRun)
}
}(i, peer)
}
wg.Wait()
return nil
}
// Initialize global adminPeer collection.
func initGlobalAdminPeers(endpoints EndpointList) {
globalAdminPeers = makeAdminPeers(endpoints)
}
// invokeServiceCmd - Invoke Restart command.
// invokeServiceCmd - Invoke Restart/Stop command.
func invokeServiceCmd(cp adminPeer, cmd serviceSignal) (err error) {
switch cmd {
case serviceRestart:
err = cp.cmdRunner.Restart()
case serviceRestart, serviceStop:
err = cp.cmdRunner.SignalService(cmd)
}
return err
}
@@ -352,24 +388,6 @@ func listPeerLocksInfo(peers adminPeers, bucket, prefix string, duration time.Du
return groupedLockInfos, nil
}
// reInitPeerDisks - reinitialize disks and object layer on peer servers to use the new format.
func reInitPeerDisks(peers adminPeers) error {
errs := make([]error, len(peers))
// Send ReInitDisks RPC call to all nodes.
// for local adminPeer this is a no-op.
wg := sync.WaitGroup{}
for i, peer := range peers {
wg.Add(1)
go func(idx int, peer adminPeer) {
defer wg.Done()
errs[idx] = peer.cmdRunner.ReInitDisks()
}(i, peer)
}
wg.Wait()
return nil
}
// uptimeSlice - used to sort uptimes in chronological order.
type uptimeSlice []struct {
err error
@@ -466,7 +484,7 @@ func getPeerConfig(peers adminPeers) ([]byte, error) {
// Find the maximally occurring config among peers in a
// distributed setup.
serverConfigs := make([]serverConfigV13, len(peers))
serverConfigs := make([]serverConfig, len(peers))
for i, configBytes := range configs {
if errs[i] != nil {
continue
@@ -483,7 +501,7 @@ func getPeerConfig(peers adminPeers) ([]byte, error) {
configJSON, err := getValidServerConfig(serverConfigs, errs)
if err != nil {
errorIf(err, "Unable to find a valid server config")
return nil, traceError(err)
return nil, errors.Trace(err)
}
// Return the config.json that was present quorum or more
@@ -493,7 +511,7 @@ func getPeerConfig(peers adminPeers) ([]byte, error) {
// getValidServerConfig - finds the server config that is present in
// quorum or more number of servers.
func getValidServerConfig(serverConfigs []serverConfigV13, errs []error) (scv serverConfigV13, e error) {
func getValidServerConfig(serverConfigs []serverConfig, errs []error) (scv serverConfig, e error) {
// majority-based quorum
quorum := len(serverConfigs)/2 + 1
@@ -536,7 +554,7 @@ func getValidServerConfig(serverConfigs []serverConfigV13, errs []error) (scv se
// seen. See example above for
// clarity.
continue
} else if j < i && reflect.DeepEqual(serverConfigs[i], serverConfigs[j]) {
} else if j < i && serverConfigs[i].ConfigDiff(&serverConfigs[j]) == "" {
// serverConfigs[i] is equal to
// serverConfigs[j], update
// serverConfigs[j]'s counter since it
@@ -555,7 +573,7 @@ func getValidServerConfig(serverConfigs []serverConfigV13, errs []error) (scv se
// We find the maximally occurring server config and check if
// there is quorum.
var configJSON serverConfigV13
var configJSON serverConfig
maxOccurrence := 0
for i, count := range configCounter {
if maxOccurrence < count {
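
getValidServerConfig above settles on the configuration held by a majority of peers. The quorum rule itself is small; here is a sketch with plain strings standing in for server configs (the real code compares configs with ConfigDiff and also tracks per-peer errors).

package main

import "fmt"

// quorumValue returns the value present on a strict majority of peers,
// or an error when no value reaches quorum - the same rule
// getValidServerConfig applies to peer configs.
func quorumValue(vals []string) (string, error) {
	quorum := len(vals)/2 + 1
	counts := make(map[string]int)
	for _, v := range vals {
		counts[v]++
	}
	for v, n := range counts {
		if n >= quorum {
			return v, nil
		}
	}
	return "", fmt.Errorf("no value reached quorum of %d", quorum)
}

func main() {
	fmt.Println(quorumValue([]string{"a", "b", "a", "a"})) // "a" wins 3 of 4
	fmt.Println(quorumValue([]string{"a", "b", "b", "a"})) // 2-2 split: error
}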


@@ -220,7 +220,7 @@ var (
// TestGetValidServerConfig - test for getValidServerConfig.
func TestGetValidServerConfig(t *testing.T) {
var c1, c2 serverConfigV13
var c1, c2 serverConfig
err := json.Unmarshal(config1, &c1)
if err != nil {
t.Fatalf("json unmarshal of %s failed: %v", string(config1), err)
@@ -233,7 +233,7 @@ func TestGetValidServerConfig(t *testing.T) {
// Valid config.
noErrs := []error{nil, nil, nil, nil}
serverConfigs := []serverConfigV13{c1, c2, c1, c1}
serverConfigs := []serverConfig{c1, c2, c1, c1}
validConfig, err := getValidServerConfig(serverConfigs, noErrs)
if err != nil {
t.Errorf("Expected a valid config but received %v instead", err)
@@ -244,7 +244,7 @@ func TestGetValidServerConfig(t *testing.T) {
}
// Invalid config - no quorum.
serverConfigs = []serverConfigV13{c1, c2, c2, c1}
serverConfigs = []serverConfig{c1, c2, c2, c1}
_, err = getValidServerConfig(serverConfigs, noErrs)
if err != errXLWriteQuorum {
t.Errorf("Expected to fail due to lack of quorum but received %v", err)
@@ -252,7 +252,7 @@ func TestGetValidServerConfig(t *testing.T) {
// All errors
allErrs := []error{errDiskNotFound, errDiskNotFound, errDiskNotFound, errDiskNotFound}
serverConfigs = []serverConfigV13{{}, {}, {}, {}}
serverConfigs = []serverConfig{{}, {}, {}, {}}
_, err = getValidServerConfig(serverConfigs, allErrs)
if err != errXLWriteQuorum {
t.Errorf("Expected to fail due to lack of quorum but received %v", err)


@@ -18,7 +18,6 @@ package cmd
import (
"encoding/json"
"errors"
"fmt"
"io/ioutil"
"os"
@@ -26,30 +25,35 @@ import (
"time"
router "github.com/gorilla/mux"
"github.com/minio/minio/pkg/errors"
)
const adminPath = "/admin"
var errUnsupportedBackend = errors.New("not supported for non erasure-code backend")
// adminCmd - exports RPC methods for service status, stop and
// restart commands.
type adminCmd struct {
AuthRPCServer
}
// SignalServiceArgs - provides the signal argument to SignalService RPC
type SignalServiceArgs struct {
AuthRPCArgs
Sig serviceSignal
}
// ListLocksQuery - wraps ListLocks API's query values to send over RPC.
type ListLocksQuery struct {
AuthRPCArgs
bucket string
prefix string
duration time.Duration
Bucket string
Prefix string
Duration time.Duration
}
// ListLocksReply - wraps ListLocks response over RPC.
type ListLocksReply struct {
AuthRPCReply
volLocks []VolumeLockInfo
VolLocks []VolumeLockInfo
}
// ServerInfoDataReply - wraps the server info response over RPC.
@@ -64,60 +68,38 @@ type ConfigReply struct {
Config []byte // json-marshalled bytes of serverConfigV13
}
// Restart - Restart this instance of minio server.
func (s *adminCmd) Restart(args *AuthRPCArgs, reply *AuthRPCReply) error {
// SignalService - Send a restart or stop signal to the service
func (s *adminCmd) SignalService(args *SignalServiceArgs, reply *AuthRPCReply) error {
if err := args.IsAuthenticated(); err != nil {
return err
}
globalServiceSignalCh <- serviceRestart
globalServiceSignalCh <- args.Sig
return nil
}
// ReInitFormatArgs - provides dry-run information to re-initialize format.json
type ReInitFormatArgs struct {
AuthRPCArgs
DryRun bool
}
// ReInitFormat - re-init 'format.json'
func (s *adminCmd) ReInitFormat(args *ReInitFormatArgs, reply *AuthRPCReply) error {
if err := args.IsAuthenticated(); err != nil {
return err
}
_, err := newObjectLayerFn().HealFormat(args.DryRun)
return err
}
// ListLocks - lists locks held by requests handled by this server instance.
func (s *adminCmd) ListLocks(query *ListLocksQuery, reply *ListLocksReply) error {
if err := query.IsAuthenticated(); err != nil {
return err
}
volLocks := listLocksInfo(query.bucket, query.prefix, query.duration)
*reply = ListLocksReply{volLocks: volLocks}
return nil
}
// ReInitDisk - reinitialize storage disks and object layer to use the
// new format.
func (s *adminCmd) ReInitDisks(args *AuthRPCArgs, reply *AuthRPCReply) error {
if err := args.IsAuthenticated(); err != nil {
return err
}
if !globalIsXL {
return errUnsupportedBackend
}
// Get the current object layer instance.
objLayer := newObjectLayerFn()
// Initialize new disks to include the newly formatted disks.
bootstrapDisks, err := initStorageDisks(globalEndpoints)
if err != nil {
return err
}
// Initialize new object layer with newly formatted disks.
newObjectAPI, err := newXLObjects(bootstrapDisks)
if err != nil {
return err
}
// Replace object layer with newly formatted storage.
globalObjLayerMutex.Lock()
globalObjectAPI = newObjectAPI
globalObjLayerMutex.Unlock()
// Shutdown storage belonging to old object layer instance.
objLayer.Shutdown()
volLocks := listLocksInfo(query.Bucket, query.Prefix, query.Duration)
*reply = ListLocksReply{VolLocks: volLocks}
return nil
}
@@ -148,7 +130,7 @@ func (s *adminCmd) ServerInfoData(args *AuthRPCArgs, reply *ServerInfoDataReply)
Uptime: UTCNow().Sub(globalBootTime),
Version: Version,
CommitID: CommitID,
Region: serverConfig.GetRegion(),
Region: globalServerConfig.GetRegion(),
SQSARN: arns,
},
StorageInfo: storageInfo,
@@ -165,11 +147,11 @@ func (s *adminCmd) GetConfig(args *AuthRPCArgs, reply *ConfigReply) error {
return err
}
if serverConfig == nil {
return errors.New("config not present")
if globalServerConfig == nil {
return fmt.Errorf("config not present")
}
jsonBytes, err := json.Marshal(serverConfig)
jsonBytes, err := json.Marshal(globalServerConfig)
if err != nil {
return err
}
@@ -238,7 +220,7 @@ func registerAdminRPCRouter(mux *router.Router) error {
adminRPCServer := newRPCServer()
err := adminRPCServer.RegisterName("Admin", adminRPCHandler)
if err != nil {
return traceError(err)
return errors.Trace(err)
}
adminRouter := mux.NewRoute().PathPrefix(minioReservedBucketPath).Subrouter()
adminRouter.Path(adminPath).Handler(adminRPCServer)
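
The admin RPC server above follows Go's standard net/rpc conventions: exported methods on a receiver registered under the name "Admin" become callable as "Admin.Method". A self-contained sketch of the SignalService shape follows, with simplified argument types and the AuthRPC authentication checks omitted.

package main

import (
	"fmt"
	"net"
	"net/rpc"
)

type SignalArgs struct{ Sig string }
type Reply struct{}

type Admin struct{}

// SignalService mirrors the RPC in the diff: it accepts a restart or
// stop signal and rejects anything else. Auth checks are omitted.
func (a *Admin) SignalService(args *SignalArgs, reply *Reply) error {
	switch args.Sig {
	case "restart", "stop":
		return nil
	}
	return fmt.Errorf("unsupported signal: %s", args.Sig)
}

func main() {
	srv := rpc.NewServer()
	if err := srv.RegisterName("Admin", new(Admin)); err != nil {
		panic(err)
	}
	l, err := net.Listen("tcp", "127.0.0.1:0")
	if err != nil {
		panic(err)
	}
	go srv.Accept(l)

	c, err := rpc.Dial("tcp", l.Addr().String())
	if err != nil {
		panic(err)
	}
	fmt.Println(c.Call("Admin.SignalService", &SignalArgs{Sig: "restart"}, &Reply{})) // <nil>
	fmt.Println(c.Call("Admin.SignalService", &SignalArgs{Sig: "bogus"}, &Reply{}))   // unsupported signal
}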


@@ -18,30 +18,34 @@ package cmd
import (
"encoding/json"
"os"
"testing"
)
func testAdminCmd(cmd cmdType, t *testing.T) {
// reset globals.
// this is to make sure that the tests are not affected by modified globals.
// reset globals. this is to make sure that the tests are not
// affected by modified globals.
resetTestGlobals()
rootPath, err := newTestConfig(globalMinioDefaultRegion)
if err != nil {
t.Fatalf("Failed to create test config - %v", err)
}
defer removeAll(rootPath)
defer os.RemoveAll(rootPath)
creds := globalServerConfig.GetCredential()
token, err := authenticateNode(creds.AccessKey, creds.SecretKey)
if err != nil {
t.Fatal(err)
}
adminServer := adminCmd{}
creds := serverConfig.GetCredential()
args := LoginRPCArgs{
Username: creds.AccessKey,
Password: creds.SecretKey,
Version: Version,
AuthToken: token,
Version: globalRPCAPIVersion,
RequestTime: UTCNow(),
}
reply := LoginRPCReply{}
err = adminServer.Login(&args, &reply)
err = adminServer.Login(&args, &LoginRPCReply{})
if err != nil {
t.Fatalf("Failed to login to admin server - %v", err)
}
@@ -51,12 +55,23 @@ func testAdminCmd(cmd cmdType, t *testing.T) {
<-globalServiceSignalCh
}()
ga := AuthRPCArgs{AuthToken: reply.AuthToken}
sa := SignalServiceArgs{
AuthRPCArgs: AuthRPCArgs{AuthToken: token, Version: globalRPCAPIVersion},
Sig: cmd.toServiceSignal(),
}
genReply := AuthRPCReply{}
switch cmd {
case restartCmd:
if err = adminServer.Restart(&ga, &genReply); err != nil {
t.Errorf("restartCmd: Expected: <nil>, got: %v", err)
case restartCmd, stopCmd:
if err = adminServer.SignalService(&sa, &genReply); err != nil {
t.Errorf("restartCmd/stopCmd: Expected: <nil>, got: %v",
err)
}
default:
err = adminServer.SignalService(&sa, &genReply)
if err != nil && err.Error() != errUnsupportedSignal.Error() {
t.Errorf("invalidSignal %s: unexpected error got: %v",
cmd, err)
}
}
}
@@ -66,8 +81,18 @@ func TestAdminRestart(t *testing.T) {
testAdminCmd(restartCmd, t)
}
// TestReInitDisks - test for Admin.ReInitDisks RPC service.
func TestReInitDisks(t *testing.T) {
// TestAdminStop - test for Admin.Stop RPC service.
func TestAdminStop(t *testing.T) {
testAdminCmd(stopCmd, t)
}
// TestAdminStatus - test for Admin.Status RPC service (error case)
func TestAdminStatus(t *testing.T) {
testAdminCmd(statusCmd, t)
}
// TestReInitFormat - test for Admin.ReInitFormat RPC service.
func TestReInitFormat(t *testing.T) {
// Reset global variables to start afresh.
resetTestGlobals()
@@ -75,7 +100,7 @@ func TestReInitDisks(t *testing.T) {
if err != nil {
t.Fatalf("Unable to initialize server config. %s", err)
}
defer removeAll(rootPath)
defer os.RemoveAll(rootPath)
// Initializing objectLayer for HealFormatHandler.
_, xlDirs, xlErr := initTestXLObjLayer()
@@ -90,54 +115,36 @@ func TestReInitDisks(t *testing.T) {
// Setup admin rpc server for an XL backend.
globalIsXL = true
adminServer := adminCmd{}
creds := serverConfig.GetCredential()
creds := globalServerConfig.GetCredential()
token, err := authenticateNode(creds.AccessKey, creds.SecretKey)
if err != nil {
t.Fatal(err)
}
args := LoginRPCArgs{
Username: creds.AccessKey,
Password: creds.SecretKey,
Version: Version,
AuthToken: token,
Version: globalRPCAPIVersion,
RequestTime: UTCNow(),
}
reply := LoginRPCReply{}
err = adminServer.Login(&args, &reply)
err = adminServer.Login(&args, &LoginRPCReply{})
if err != nil {
t.Fatalf("Failed to login to admin server - %v", err)
}
authArgs := AuthRPCArgs{
AuthToken: reply.AuthToken,
AuthToken: token,
Version: globalRPCAPIVersion,
}
authReply := AuthRPCReply{}
err = adminServer.ReInitDisks(&authArgs, &authReply)
err = adminServer.ReInitFormat(&ReInitFormatArgs{
AuthRPCArgs: authArgs,
DryRun: false,
}, &authReply)
if err != nil {
t.Errorf("Expected to pass, but failed with %v", err)
}
// Negative test case with admin rpc server setup for FS.
globalIsXL = false
fsAdminServer := adminCmd{}
fsArgs := LoginRPCArgs{
Username: creds.AccessKey,
Password: creds.SecretKey,
Version: Version,
RequestTime: UTCNow(),
}
fsReply := LoginRPCReply{}
err = fsAdminServer.Login(&fsArgs, &fsReply)
if err != nil {
t.Fatalf("Failed to login to fs admin server - %v", err)
}
authArgs = AuthRPCArgs{
AuthToken: fsReply.AuthToken,
}
authReply = AuthRPCReply{}
// Attempt ReInitDisks service on a FS backend.
err = fsAdminServer.ReInitDisks(&authArgs, &authReply)
if err != errUnsupportedBackend {
t.Errorf("Expected to fail with %v, but received %v",
errUnsupportedBackend, err)
}
}
// TestGetConfig - Test for GetConfig admin RPC.
@@ -149,14 +156,19 @@ func TestGetConfig(t *testing.T) {
if err != nil {
t.Fatalf("Unable to initialize server config. %s", err)
}
defer removeAll(rootPath)
defer os.RemoveAll(rootPath)
adminServer := adminCmd{}
creds := serverConfig.GetCredential()
creds := globalServerConfig.GetCredential()
token, err := authenticateNode(creds.AccessKey, creds.SecretKey)
if err != nil {
t.Fatal(err)
}
args := LoginRPCArgs{
Username: creds.AccessKey,
Password: creds.SecretKey,
Version: Version,
AuthToken: token,
Version: globalRPCAPIVersion,
RequestTime: UTCNow(),
}
reply := LoginRPCReply{}
@@ -166,7 +178,8 @@ func TestGetConfig(t *testing.T) {
}
authArgs := AuthRPCArgs{
AuthToken: reply.AuthToken,
AuthToken: token,
Version: globalRPCAPIVersion,
}
configReply := ConfigReply{}
@@ -193,14 +206,17 @@ func TestWriteAndCommitConfig(t *testing.T) {
if err != nil {
t.Fatalf("Unable to initialize server config. %s", err)
}
defer removeAll(rootPath)
defer os.RemoveAll(rootPath)
adminServer := adminCmd{}
creds := serverConfig.GetCredential()
creds := globalServerConfig.GetCredential()
token, err := authenticateNode(creds.AccessKey, creds.SecretKey)
if err != nil {
t.Fatal(err)
}
args := LoginRPCArgs{
Username: creds.AccessKey,
Password: creds.SecretKey,
Version: Version,
AuthToken: token,
Version: globalRPCAPIVersion,
RequestTime: UTCNow(),
}
reply := LoginRPCReply{}
@@ -214,7 +230,8 @@ func TestWriteAndCommitConfig(t *testing.T) {
tmpFileName := mustGetUUID()
wArgs := WriteConfigArgs{
AuthRPCArgs: AuthRPCArgs{
AuthToken: reply.AuthToken,
AuthToken: token,
Version: globalRPCAPIVersion,
},
TmpFileName: tmpFileName,
Buf: buf,
@@ -231,7 +248,8 @@ func TestWriteAndCommitConfig(t *testing.T) {
cArgs := CommitConfigArgs{
AuthRPCArgs: AuthRPCArgs{
AuthToken: reply.AuthToken,
AuthToken: token,
Version: globalRPCAPIVersion,
},
FileName: tmpFileName,
}


@@ -18,7 +18,12 @@ package cmd
import (
"encoding/xml"
"fmt"
"net/http"
"github.com/minio/minio/pkg/auth"
"github.com/minio/minio/pkg/errors"
"github.com/minio/minio/pkg/hash"
)
// APIError structure
@@ -36,8 +41,8 @@ type APIErrorResponse struct {
Key string
BucketName string
Resource string
RequestID string `xml:"RequestId"`
HostID string `xml:"HostId"`
RequestID string `xml:"RequestId" json:"RequestId"`
HostID string `xml:"HostId" json:"HostId"`
}
// APIErrorCode type of error status.
@@ -114,10 +119,25 @@ const (
ErrInvalidQueryParams
ErrBucketAlreadyOwnedByYou
ErrInvalidDuration
ErrNotSupported
ErrBucketAlreadyExists
ErrMetadataTooLarge
ErrUnsupportedMetadata
ErrMaximumExpires
ErrSlowDown
ErrInvalidPrefixMarker
// Add new error codes here.
// Server-Side-Encryption (with Customer provided key) related API errors.
ErrInsecureSSECustomerRequest
ErrSSEMultipartEncrypted
ErrSSEEncryptedObject
ErrInvalidEncryptionParameters
ErrInvalidSSECustomerAlgorithm
ErrInvalidSSECustomerKey
ErrMissingSSECustomerKey
ErrMissingSSECustomerKeyMD5
ErrSSECustomerKeyMD5Mismatch
// Bucket notification related errors.
ErrEventNotification
ErrARNNotification
@@ -128,6 +148,7 @@ const (
ErrFilterNameSuffix
ErrFilterValueInvalid
ErrOverlappingConfigs
ErrUnsupportedNotification
// S3 extended errors.
ErrContentSHA256Mismatch
@@ -138,19 +159,38 @@ const (
ErrReadQuorum
ErrWriteQuorum
ErrStorageFull
ErrRequestBodyParse
ErrObjectExistsAsDirectory
ErrPolicyNesting
ErrInvalidObjectName
ErrInvalidResourceName
ErrServerNotInitialized
ErrOperationTimedOut
ErrPartsSizeUnequal
ErrInvalidRequest
// Minio storage class error codes
ErrInvalidStorageClass
// Add new extended error codes here.
// Please open a https://github.com/minio/minio/issues before adding
// new error codes here.
ErrMalformedJSON
ErrAdminInvalidAccessKey
ErrAdminInvalidSecretKey
ErrAdminConfigNoQuorum
ErrAdminConfigTooLarge
ErrAdminConfigBadJSON
ErrAdminCredentialsMismatch
ErrInsecureClientRequest
ErrObjectTampered
ErrHealNotImplemented
ErrHealNoSuchProcess
ErrHealInvalidClientToken
ErrHealMissingBucket
ErrHealAlreadyRunning
ErrHealOverlappingPaths
)
// error code to APIError structure, these fields carry respective
@@ -171,6 +211,11 @@ var errorCodeResponse = map[APIErrorCode]APIError{
Description: "Unknown metadata directive.",
HTTPStatusCode: http.StatusBadRequest,
},
ErrInvalidStorageClass: {
Code: "InvalidStorageClass",
Description: "Invalid storage class.",
HTTPStatusCode: http.StatusBadRequest,
},
ErrInvalidRequestBody: {
Code: "InvalidArgument",
Description: "Body shouldn't be set for this request.",
@@ -483,6 +528,17 @@ var errorCodeResponse = map[APIErrorCode]APIError{
Description: "Request is not valid yet",
HTTPStatusCode: http.StatusForbidden,
},
ErrSlowDown: {
Code: "SlowDown",
Description: "Please reduce your request",
HTTPStatusCode: http.StatusServiceUnavailable,
},
ErrInvalidPrefixMarker: {
Code: "InvalidPrefixMarker",
Description: "Invalid marker prefix combination",
HTTPStatusCode: http.StatusBadRequest,
},
// FIXME: Actual XML error response also contains the header which missed in list of signed header parameters.
ErrUnsignedHeaders: {
Code: "AccessDenied",
@@ -551,6 +607,11 @@ var errorCodeResponse = map[APIErrorCode]APIError{
Description: "Configurations overlap. Configurations on the same bucket cannot share a common event type.",
HTTPStatusCode: http.StatusBadRequest,
},
ErrUnsupportedNotification: {
Code: "UnsupportedNotification",
Description: "Minio server does not support Topic or Cloud Function based notifications.",
HTTPStatusCode: http.StatusBadRequest,
},
ErrInvalidCopyPartRange: {
Code: "InvalidArgument",
Description: "The x-amz-copy-source-range value must be of the form bytes=first-last where first and last are the zero-based offsets of the first and last bytes to copy",
@@ -561,6 +622,56 @@ var errorCodeResponse = map[APIErrorCode]APIError{
Description: "Range specified is not valid for source object",
HTTPStatusCode: http.StatusBadRequest,
},
ErrMetadataTooLarge: {
Code: "InvalidArgument",
Description: "Your metadata headers exceed the maximum allowed metadata size.",
HTTPStatusCode: http.StatusBadRequest,
},
ErrInsecureSSECustomerRequest: {
Code: "InvalidRequest",
Description: errInsecureSSERequest.Error(),
HTTPStatusCode: http.StatusBadRequest,
},
ErrSSEMultipartEncrypted: {
Code: "InvalidRequest",
Description: "The multipart upload initiate requested encryption. Subsequent part requests must include the appropriate encryption parameters.",
HTTPStatusCode: http.StatusBadRequest,
},
ErrSSEEncryptedObject: {
Code: "InvalidRequest",
Description: errEncryptedObject.Error(),
HTTPStatusCode: http.StatusBadRequest,
},
ErrInvalidEncryptionParameters: {
Code: "InvalidRequest",
Description: "The encryption parameters are not applicable to this object.",
HTTPStatusCode: http.StatusBadRequest,
},
ErrInvalidSSECustomerAlgorithm: {
Code: "InvalidArgument",
Description: errInvalidSSEAlgorithm.Error(),
HTTPStatusCode: http.StatusBadRequest,
},
ErrInvalidSSECustomerKey: {
Code: "InvalidArgument",
Description: errInvalidSSEKey.Error(),
HTTPStatusCode: http.StatusBadRequest,
},
ErrMissingSSECustomerKey: {
Code: "InvalidArgument",
Description: errMissingSSEKey.Error(),
HTTPStatusCode: http.StatusBadRequest,
},
ErrMissingSSECustomerKeyMD5: {
Code: "InvalidArgument",
Description: errMissingSSEKeyMD5.Error(),
HTTPStatusCode: http.StatusBadRequest,
},
ErrSSECustomerKeyMD5Mismatch: {
Code: "InvalidArgument",
Description: errSSEKeyMD5Mismatch.Error(),
HTTPStatusCode: http.StatusBadRequest,
},
/// S3 extensions.
ErrContentSHA256Mismatch: {
@@ -572,9 +683,14 @@ var errorCodeResponse = map[APIErrorCode]APIError{
/// Minio extensions.
ErrStorageFull: {
Code: "XMinioStorageFull",
Description: "Storage backend has reached its minimum free disk threshold. Please delete few objects to proceed.",
Description: "Storage backend has reached its minimum free disk threshold. Please delete a few objects to proceed.",
HTTPStatusCode: http.StatusInternalServerError,
},
ErrRequestBodyParse: {
Code: "XMinioRequestBodyParse",
Description: "The request body failed to parse.",
HTTPStatusCode: http.StatusBadRequest,
},
ErrObjectExistsAsDirectory: {
Code: "XMinioObjectExistsAsDirectory",
Description: "Object name already exists as a directory.",
@@ -610,6 +726,11 @@ var errorCodeResponse = map[APIErrorCode]APIError{
Description: "Server not initialized, please try again.",
HTTPStatusCode: http.StatusServiceUnavailable,
},
ErrMalformedJSON: {
Code: "XMinioMalformedJSON",
Description: "The JSON you provided was not well-formed or did not validate against our published format.",
HTTPStatusCode: http.StatusBadRequest,
},
ErrAdminInvalidAccessKey: {
Code: "XMinioAdminInvalidAccessKey",
Description: "The access key is invalid.",
@@ -625,11 +746,90 @@ var errorCodeResponse = map[APIErrorCode]APIError{
Description: "Configuration update failed because server quorum was not met",
HTTPStatusCode: http.StatusServiceUnavailable,
},
ErrAdminConfigTooLarge: {
Code: "XMinioAdminConfigTooLarge",
Description: fmt.Sprintf("Configuration data provided exceeds the allowed maximum of %d bytes",
maxConfigJSONSize),
HTTPStatusCode: http.StatusBadRequest,
},
ErrAdminConfigBadJSON: {
Code: "XMinioAdminConfigBadJSON",
Description: "JSON configuration provided has objects with duplicate keys",
HTTPStatusCode: http.StatusBadRequest,
},
ErrAdminCredentialsMismatch: {
Code: "XMinioAdminCredentialsMismatch",
Description: "Credentials in config mismatch with server environment variables",
HTTPStatusCode: http.StatusServiceUnavailable,
},
ErrInsecureClientRequest: {
Code: "XMinioInsecureClientRequest",
Description: "Cannot respond to plain-text request from TLS-encrypted server",
HTTPStatusCode: http.StatusBadRequest,
},
ErrOperationTimedOut: {
Code: "XMinioServerTimedOut",
Description: "A timeout occurred while trying to lock a resource",
HTTPStatusCode: http.StatusRequestTimeout,
},
ErrUnsupportedMetadata: {
Code: "InvalidArgument",
Description: "Your metadata headers are not supported.",
HTTPStatusCode: http.StatusBadRequest,
},
ErrPartsSizeUnequal: {
Code: "XMinioPartsSizeUnequal",
Description: "All parts except the last part should be of the same size.",
HTTPStatusCode: http.StatusBadRequest,
},
ErrObjectTampered: {
Code: "XMinioObjectTampered",
Description: errObjectTampered.Error(),
HTTPStatusCode: http.StatusPartialContent,
},
ErrMaximumExpires: {
Code: "AuthorizationQueryParametersError",
Description: "X-Amz-Expires must be less than a week (in seconds); that is, the given X-Amz-Expires must be less than 604800 seconds",
HTTPStatusCode: http.StatusBadRequest,
},
// Generic Invalid-Request error. Should be used only for unlikely
// corner-case errors for which introducing a new APIErrorCode is not
// worth it. errorIf() should be used to log the error at its source
// for debugging purposes.
ErrInvalidRequest: {
Code: "InvalidRequest",
Description: "Invalid Request",
HTTPStatusCode: http.StatusBadRequest,
},
ErrHealNotImplemented: {
Code: "XMinioHealNotImplemented",
Description: "This server does not implement heal functionality.",
HTTPStatusCode: http.StatusBadRequest,
},
ErrHealNoSuchProcess: {
Code: "XMinioHealNoSuchProcess",
Description: "No such heal process is running on the server",
HTTPStatusCode: http.StatusBadRequest,
},
ErrHealInvalidClientToken: {
Code: "XMinioHealInvalidClientToken",
Description: "Client token mismatch",
HTTPStatusCode: http.StatusBadRequest,
},
ErrHealMissingBucket: {
Code: "XMinioHealMissingBucket",
Description: "A heal start request with a non-empty object-prefix parameter requires a bucket to be specified.",
HTTPStatusCode: http.StatusBadRequest,
},
ErrHealAlreadyRunning: {
Code: "XMinioHealAlreadyRunning",
Description: "",
HTTPStatusCode: http.StatusBadRequest,
},
ErrHealOverlappingPaths: {
Code: "XMinioHealOverlappingPaths",
Description: "",
HTTPStatusCode: http.StatusBadRequest,
},
// Add your error structure here.
}
@@ -642,20 +842,18 @@ func toAPIErrorCode(err error) (apiErr APIErrorCode) {
return ErrNone
}
err = errorCause(err)
err = errors.Cause(err)
// Verify if the underlying error is signature mismatch.
switch err {
case errSignatureMismatch:
apiErr = ErrSignatureDoesNotMatch
case errContentSHA256Mismatch:
apiErr = ErrContentSHA256Mismatch
case errDataTooLarge:
apiErr = ErrEntityTooLarge
case errDataTooSmall:
apiErr = ErrEntityTooSmall
case errInvalidAccessKeyLength:
case auth.ErrInvalidAccessKeyLength:
apiErr = ErrAdminInvalidAccessKey
case errInvalidSecretKeyLength:
case auth.ErrInvalidSecretKeyLength:
apiErr = ErrAdminInvalidSecretKey
}
@@ -664,10 +862,31 @@ func toAPIErrorCode(err error) (apiErr APIErrorCode) {
return apiErr
}
switch err { // SSE errors
case errInsecureSSERequest:
return ErrInsecureSSECustomerRequest
case errInvalidSSEAlgorithm:
return ErrInvalidSSECustomerAlgorithm
case errInvalidSSEKey:
return ErrInvalidSSECustomerKey
case errMissingSSEKey:
return ErrMissingSSECustomerKey
case errMissingSSEKeyMD5:
return ErrMissingSSECustomerKeyMD5
case errSSEKeyMD5Mismatch:
return ErrSSECustomerKeyMD5Mismatch
case errObjectTampered:
return ErrObjectTampered
case errEncryptedObject:
return ErrSSEEncryptedObject
case errSSEKeyMismatch:
return ErrAccessDenied // no access without correct key
}
switch err.(type) {
case StorageFull:
apiErr = ErrStorageFull
case BadDigest:
case hash.BadDigest:
apiErr = ErrBadDigest
case AllAccessDisabled:
apiErr = ErrAllAccessDisabled
@@ -713,20 +932,24 @@ func toAPIErrorCode(err error) (apiErr APIErrorCode) {
apiErr = ErrEntityTooSmall
case SignatureDoesNotMatch:
apiErr = ErrSignatureDoesNotMatch
case SHA256Mismatch:
case hash.SHA256Mismatch:
apiErr = ErrContentSHA256Mismatch
case ObjectTooLarge:
apiErr = ErrEntityTooLarge
case ObjectTooSmall:
apiErr = ErrEntityTooSmall
case NotSupported:
apiErr = ErrNotSupported
case NotImplemented:
apiErr = ErrNotImplemented
case PolicyNotFound:
apiErr = ErrNoSuchBucketPolicy
case PartTooBig:
apiErr = ErrEntityTooLarge
case UnsupportedMetadata:
apiErr = ErrUnsupportedMetadata
case PartsSizeUnequal:
apiErr = ErrPartsSizeUnequal
case BucketPolicyNotFound:
apiErr = ErrNoSuchBucketPolicy
default:
apiErr = ErrInternalError
}
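
toAPIErrorCode above maps internal errors to API error codes in two passes: sentinel-value comparisons first, then a switch on concrete error types, with ErrInternalError as the fallback. A minimal sketch of that dispatch style, using illustrative stand-in error types and code names rather than the server's:

package main

import (
	"errors"
	"fmt"
)

// bucketNotFound stands in for a typed object-layer error.
type bucketNotFound struct{ bucket string }

func (e bucketNotFound) Error() string { return "bucket not found: " + e.bucket }

// errSignatureMismatch stands in for a sentinel error value.
var errSignatureMismatch = errors.New("signature does not match")

func toCode(err error) string {
	// Pass 1: match well-known sentinel values.
	switch err {
	case nil:
		return "None"
	case errSignatureMismatch:
		return "SignatureDoesNotMatch"
	}
	// Pass 2: match concrete error types; default to internal error.
	switch err.(type) {
	case bucketNotFound:
		return "NoSuchBucket"
	default:
		return "InternalError"
	}
}

func main() {
	fmt.Println(toCode(nil))                       // None
	fmt.Println(toCode(errSignatureMismatch))      // SignatureDoesNotMatch
	fmt.Println(toCode(bucketNotFound{"photos"}))  // NoSuchBucket
	fmt.Println(toCode(errors.New("boom")))        // InternalError
}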


@@ -19,122 +19,52 @@ package cmd
import (
"errors"
"testing"
"github.com/minio/minio/pkg/hash"
)
var toAPIErrorCodeTests = []struct {
err error
errCode APIErrorCode
}{
{err: hash.BadDigest{}, errCode: ErrBadDigest},
{err: hash.SHA256Mismatch{}, errCode: ErrContentSHA256Mismatch},
{err: IncompleteBody{}, errCode: ErrIncompleteBody},
{err: ObjectExistsAsDirectory{}, errCode: ErrObjectExistsAsDirectory},
{err: BucketNameInvalid{}, errCode: ErrInvalidBucketName},
{err: BucketExists{}, errCode: ErrBucketAlreadyOwnedByYou},
{err: ObjectNotFound{}, errCode: ErrNoSuchKey},
{err: ObjectNameInvalid{}, errCode: ErrInvalidObjectName},
{err: InvalidUploadID{}, errCode: ErrNoSuchUpload},
{err: InvalidPart{}, errCode: ErrInvalidPart},
{err: InsufficientReadQuorum{}, errCode: ErrReadQuorum},
{err: InsufficientWriteQuorum{}, errCode: ErrWriteQuorum},
{err: UnsupportedDelimiter{}, errCode: ErrNotImplemented},
{err: InvalidMarkerPrefixCombination{}, errCode: ErrNotImplemented},
{err: InvalidUploadIDKeyCombination{}, errCode: ErrNotImplemented},
{err: MalformedUploadID{}, errCode: ErrNoSuchUpload},
{err: PartTooSmall{}, errCode: ErrEntityTooSmall},
{err: BucketNotEmpty{}, errCode: ErrBucketNotEmpty},
{err: BucketNotFound{}, errCode: ErrNoSuchBucket},
{err: StorageFull{}, errCode: ErrStorageFull},
{err: NotImplemented{}, errCode: ErrNotImplemented},
{err: errSignatureMismatch, errCode: ErrSignatureDoesNotMatch},
// SSE-C errors
{err: errInsecureSSERequest, errCode: ErrInsecureSSECustomerRequest},
{err: errInvalidSSEAlgorithm, errCode: ErrInvalidSSECustomerAlgorithm},
{err: errMissingSSEKey, errCode: ErrMissingSSECustomerKey},
{err: errInvalidSSEKey, errCode: ErrInvalidSSECustomerKey},
{err: errMissingSSEKeyMD5, errCode: ErrMissingSSECustomerKeyMD5},
{err: errSSEKeyMD5Mismatch, errCode: ErrSSECustomerKeyMD5Mismatch},
{err: errObjectTampered, errCode: ErrObjectTampered},
{err: nil, errCode: ErrNone},
{err: errors.New("Custom error"), errCode: ErrInternalError}, // Case where err type is unknown.
}
func TestAPIErrCode(t *testing.T) {
testCases := []struct {
err error
errCode APIErrorCode
}{
// Valid cases.
{
BadDigest{},
ErrBadDigest,
},
{
IncompleteBody{},
ErrIncompleteBody,
},
{
ObjectExistsAsDirectory{},
ErrObjectExistsAsDirectory,
},
{
BucketNameInvalid{},
ErrInvalidBucketName,
},
{
BucketExists{},
ErrBucketAlreadyOwnedByYou,
},
{
ObjectNotFound{},
ErrNoSuchKey,
},
{
ObjectNameInvalid{},
ErrInvalidObjectName,
},
{
InvalidUploadID{},
ErrNoSuchUpload,
},
{
InvalidPart{},
ErrInvalidPart,
},
{
InsufficientReadQuorum{},
ErrReadQuorum,
},
{
InsufficientWriteQuorum{},
ErrWriteQuorum,
},
{
UnsupportedDelimiter{},
ErrNotImplemented,
},
{
InvalidMarkerPrefixCombination{},
ErrNotImplemented,
},
{
InvalidUploadIDKeyCombination{},
ErrNotImplemented,
},
{
MalformedUploadID{},
ErrNoSuchUpload,
},
{
PartTooSmall{},
ErrEntityTooSmall,
},
{
BucketNotEmpty{},
ErrBucketNotEmpty,
},
{
BucketNotFound{},
ErrNoSuchBucket,
},
{
StorageFull{},
ErrStorageFull,
},
{
NotSupported{},
ErrNotSupported,
},
{
NotImplemented{},
ErrNotImplemented,
},
{
errSignatureMismatch,
ErrSignatureDoesNotMatch,
},
{
errContentSHA256Mismatch,
ErrContentSHA256Mismatch,
}, // End of all valid cases.
// Case where err is nil.
{
nil,
ErrNone,
},
// Case where err type is unknown.
{
errors.New("Custom error"),
ErrInternalError,
},
}
// Validate all the errors with their API error codes.
for i, testCase := range testCases {
for i, testCase := range toAPIErrorCodeTests {
errCode := toAPIErrorCode(testCase.err)
if errCode != testCase.errCode {
t.Errorf("Test %d: Expected error code %d, got %d", i+1, testCase.errCode, errCode)


@@ -18,6 +18,7 @@ package cmd
import (
"bytes"
"encoding/json"
"encoding/xml"
"fmt"
"net/http"
@@ -38,7 +39,7 @@ func setCommonHeaders(w http.ResponseWriter) {
w.Header().Set("Server", globalServerUserAgent)
// Set `x-amz-bucket-region` only if region is set on the server
// by default minio uses an empty region.
if region := serverConfig.GetRegion(); region != "" {
if region := globalServerConfig.GetRegion(); region != "" {
w.Header().Set("X-Amz-Bucket-Region", region)
}
w.Header().Set("Accept-Ranges", "bytes")
@@ -53,6 +54,14 @@ func encodeResponse(response interface{}) []byte {
return bytesBuffer.Bytes()
}
// Encodes the response headers into JSON format.
func encodeResponseJSON(response interface{}) []byte {
var bytesBuffer bytes.Buffer
e := json.NewEncoder(&bytesBuffer)
e.Encode(response)
return bytesBuffer.Bytes()
}
// Write object header
func setObjectHeaders(w http.ResponseWriter, objInfo ObjectInfo, contentRange *httpRange) {
// set common headers
@@ -70,8 +79,21 @@ func setObjectHeaders(w http.ResponseWriter, objInfo ObjectInfo, contentRange *h
w.Header().Set("ETag", "\""+objInfo.ETag+"\"")
}
if objInfo.ContentType != "" {
w.Header().Set("Content-Type", objInfo.ContentType)
}
if objInfo.ContentEncoding != "" {
w.Header().Set("Content-Encoding", objInfo.ContentEncoding)
}
// Set all other user defined metadata.
for k, v := range objInfo.UserDefined {
if hasPrefix(k, ReservedMetadataPrefix) {
// Do not need to send any internal metadata
// values to client.
continue
}
w.Header().Set(k, v)
}


@@ -22,6 +22,8 @@ import (
"net/url"
"path"
"time"
"github.com/minio/minio/pkg/handlers"
)
const (
@@ -166,13 +168,12 @@ type ListBucketsResponse struct {
// Upload container for in progress multipart upload
type Upload struct {
Key string
UploadID string `xml:"UploadId"`
Initiator Initiator
Owner Owner
StorageClass string
Initiated string
HealUploadInfo *HealObjectInfo `xml:"HealObjectInfo,omitempty"`
Key string
UploadID string `xml:"UploadId"`
Initiator Initiator
Owner Owner
StorageClass string
Initiated string
}
// CommonPrefix container for prefix response in ListObjectsResponse
@@ -182,9 +183,8 @@ type CommonPrefix struct {
// Bucket container for bucket metadata
type Bucket struct {
Name string
CreationDate string // time string of format "2006-01-02T15:04:05.000Z"
HealBucketInfo *HealBucketInfo `xml:"HealBucketInfo,omitempty"`
Name string
CreationDate string // time string of format "2006-01-02T15:04:05.000Z"
}
// Object container for object metadata
@@ -198,8 +198,7 @@ type Object struct {
Owner Owner
// The class of storage used to store the object.
StorageClass string
HealObjectInfo *HealObjectInfo `xml:"HealObjectInfo,omitempty"`
StorageClass string
}
// CopyObjectResponse container returns ETag and LastModified of the successfully copied object
@@ -275,9 +274,31 @@ func getLocation(r *http.Request) string {
return path.Clean(r.URL.Path) // Clean any trailing slashes.
}
// getObjectLocation gets the relative URL for an object
func getObjectLocation(bucketName string, key string) string {
return "/" + bucketName + "/" + key
// returns "https" if the tls boolean is true, "http" otherwise.
func getURLScheme(tls bool) string {
if tls {
return httpsScheme
}
return httpScheme
}
// getObjectLocation gets the fully qualified URL of an object.
func getObjectLocation(r *http.Request, domain, bucket, object string) string {
proto := handlers.GetSourceScheme(r)
if proto == "" {
proto = getURLScheme(globalIsSSL)
}
u := &url.URL{
Host: r.Host,
Path: path.Join(slashSeparator, bucket, object),
Scheme: proto,
}
// If domain is set then we need to use bucket DNS style.
if domain != "" {
u.Host = bucket + "." + domain
u.Path = path.Join(slashSeparator, object)
}
return u.String()
}
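For reference, a minimal standalone sketch of the same URL construction, with the request and the package globals replaced by plain parameters (all names here are illustrative, not the server's):

package main

import (
	"fmt"
	"net/url"
	"path"
)

// objectLocation mirrors the logic of getObjectLocation above in standalone
// form: path-style URLs by default, bucket-DNS (virtual-host) style when a
// domain is configured.
func objectLocation(scheme, host, domain, bucket, object string) string {
	u := &url.URL{
		Scheme: scheme,
		Host:   host,
		Path:   path.Join("/", bucket, object),
	}
	if domain != "" {
		u.Host = bucket + "." + domain
		u.Path = path.Join("/", object)
	}
	return u.String()
}

func main() {
	// Path style, as used when no domain is set:
	fmt.Println(objectLocation("http", "127.0.0.1:9000", "", "mybucket", "test/1.txt"))
	// http://127.0.0.1:9000/mybucket/test/1.txt

	// Virtual-host style, matching the cases in the new test file below:
	fmt.Println(objectLocation("https", "mys3.bucket.org", "mys3.bucket.org", "mybucket", "test/1.txt"))
	// https://mybucket.mys3.bucket.org/test/1.txt
}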
// generates ListBucketsResponse from array of BucketInfo which can be
@@ -291,8 +312,7 @@ func generateListBucketsResponse(buckets []BucketInfo) ListBucketsResponse {
for _, bucket := range buckets {
var listbucket = Bucket{}
listbucket.Name = bucket.Name
listbucket.CreationDate = bucket.Created.Format(timeFormatAMZLong)
listbucket.HealBucketInfo = bucket.HealBucketInfo
listbucket.CreationDate = bucket.Created.UTC().Format(timeFormatAMZLong)
listbuckets = append(listbuckets, listbucket)
}
@@ -323,8 +343,6 @@ func generateListObjectsV1Response(bucket, prefix, marker, delimiter string, max
content.Size = object.Size
content.StorageClass = globalMinioDefaultStorageClass
content.Owner = owner
// object.HealObjectInfo is non-empty only when resp is constructed in ListObjectsHeal.
content.HealObjectInfo = object.HealObjectInfo
contents = append(contents, content)
}
// TODO - support EncodingType in xml decoding
@@ -482,7 +500,6 @@ func generateListMultipartUploadsResponse(bucket string, multipartsInfo ListMult
newUpload.UploadID = upload.UploadID
newUpload.Key = upload.Object
newUpload.Initiated = upload.Initiated.UTC().Format(timeFormatAMZLong)
newUpload.HealUploadInfo = upload.HealUploadInfo
listMultipartUploadsResponse.Uploads[index] = newUpload
}
return listMultipartUploadsResponse
@@ -551,6 +568,12 @@ func writeSuccessResponseHeadersOnly(w http.ResponseWriter) {
// writeErrorResponse writes error headers
func writeErrorResponse(w http.ResponseWriter, errorCode APIErrorCode, reqURL *url.URL) {
switch errorCode {
case ErrSlowDown, ErrServerNotInitialized, ErrReadQuorum, ErrWriteQuorum:
// Set the Retry-After header to indicate to user agents to retry the request after 120 seconds.
// https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Retry-After
w.Header().Set("Retry-After", "120")
}
apiError := getAPIError(errorCode)
// Generate error response.
errorResponse := getAPIErrorResponse(apiError, reqURL.Path)
@@ -562,3 +585,31 @@ func writeErrorResponseHeadersOnly(w http.ResponseWriter, errorCode APIErrorCode
apiError := getAPIError(errorCode)
writeResponse(w, apiError.HTTPStatusCode, nil, mimeNone)
}
// writeErrorResponseJSON - writes error response in JSON format;
// useful for admin APIs.
func writeErrorResponseJSON(w http.ResponseWriter, errorCode APIErrorCode, reqURL *url.URL) {
apiError := getAPIError(errorCode)
// Generate error response.
errorResponse := getAPIErrorResponse(apiError, reqURL.Path)
encodedErrorResponse := encodeResponseJSON(errorResponse)
writeResponse(w, apiError.HTTPStatusCode, encodedErrorResponse, mimeJSON)
}
// writeCustomErrorResponseJSON - similar to writeErrorResponseJSON,
// but accepts the error message directly (this allows messages to be
// dynamically generated).
func writeCustomErrorResponseJSON(w http.ResponseWriter, errorCode APIErrorCode,
errBody string, reqURL *url.URL) {
apiError := getAPIError(errorCode)
errorResponse := APIErrorResponse{
Code: apiError.Code,
Message: errBody,
Resource: reqURL.Path,
RequestID: "3L137",
HostID: "3L137",
}
encodedErrorResponse := encodeResponseJSON(errorResponse)
writeResponse(w, apiError.HTTPStatusCode, encodedErrorResponse, mimeJSON)
}
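writeCustomErrorResponseJSON serializes an APIErrorResponse with a caller-supplied message; note the hard-coded "3L137" request/host IDs above. A standalone sketch of the resulting JSON shape (the struct name, error code, and admin path below are illustrative stand-ins):

package main

import (
	"encoding/json"
	"fmt"
)

// apiErrorResponse mirrors the fields serialized by the handler above.
type apiErrorResponse struct {
	Code      string
	Message   string
	Resource  string
	RequestID string
	HostID    string
}

func main() {
	body, err := json.Marshal(apiErrorResponse{
		Code:      "InternalError",               // example error code
		Message:   "a dynamically generated message",
		Resource:  "/minio/admin/v1/example",     // hypothetical admin path
		RequestID: "3L137",
		HostID:    "3L137",
	})
	if err != nil {
		panic(err)
	}
	fmt.Println(string(body))
}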

cmd/api-response_test.go (new file, 121 lines)

@@ -0,0 +1,121 @@
/*
* Minio Cloud Storage, (C) 2017 Minio, Inc.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package cmd
import (
"net/http"
"testing"
)
// Tests object location.
func TestObjectLocation(t *testing.T) {
testCases := []struct {
request *http.Request
bucket, object string
domain string
expectedLocation string
}{
// Server binding to localhost IP with https.
{
request: &http.Request{
Host: "127.0.0.1:9000",
Header: map[string][]string{
"X-Forwarded-Scheme": {httpScheme},
},
},
bucket: "testbucket1",
object: "test/1.txt",
expectedLocation: "http://127.0.0.1:9000/testbucket1/test/1.txt",
},
{
request: &http.Request{
Host: "127.0.0.1:9000",
Header: map[string][]string{
"X-Forwarded-Scheme": {httpsScheme},
},
},
bucket: "testbucket1",
object: "test/1.txt",
expectedLocation: "https://127.0.0.1:9000/testbucket1/test/1.txt",
},
// Server binding to fqdn.
{
request: &http.Request{
Host: "s3.mybucket.org",
Header: map[string][]string{
"X-Forwarded-Scheme": {httpScheme},
},
},
bucket: "mybucket",
object: "test/1.txt",
expectedLocation: "http://s3.mybucket.org/mybucket/test/1.txt",
},
// Server binding to fqdn.
{
request: &http.Request{
Host: "mys3.mybucket.org",
Header: map[string][]string{},
},
bucket: "mybucket",
object: "test/1.txt",
expectedLocation: "http://mys3.mybucket.org/mybucket/test/1.txt",
},
// Server with virtual domain name.
{
request: &http.Request{
Host: "mys3.bucket.org",
Header: map[string][]string{},
},
domain: "mys3.bucket.org",
bucket: "mybucket",
object: "test/1.txt",
expectedLocation: "http://mybucket.mys3.bucket.org/test/1.txt",
},
{
request: &http.Request{
Host: "mys3.bucket.org",
Header: map[string][]string{
"X-Forwarded-Scheme": {httpsScheme},
},
},
domain: "mys3.bucket.org",
bucket: "mybucket",
object: "test/1.txt",
expectedLocation: "https://mybucket.mys3.bucket.org/test/1.txt",
},
}
for i, testCase := range testCases {
gotLocation := getObjectLocation(testCase.request, testCase.domain, testCase.bucket, testCase.object)
if testCase.expectedLocation != gotLocation {
t.Errorf("Test %d: expected %s, got %s", i+1, testCase.expectedLocation, gotLocation)
}
}
}
// Tests getURLScheme function behavior.
func TestGetURLScheme(t *testing.T) {
tls := false
gotScheme := getURLScheme(tls)
if gotScheme != httpScheme {
t.Errorf("Expected %s, got %s", httpScheme, gotScheme)
}
tls = true
gotScheme = getURLScheme(tls)
if gotScheme != httpsScheme {
t.Errorf("Expected %s, got %s", httpsScheme, gotScheme)
}
}


@@ -17,6 +17,7 @@
package cmd
import router "github.com/gorilla/mux"
import "net/http"
// objectAPIHandler implements and provides http handlers for S3 API.
type objectAPIHandlers struct {
@@ -32,70 +33,75 @@ func registerAPIRouter(mux *router.Router) {
// API Router
apiRouter := mux.NewRoute().PathPrefix("/").Subrouter()
var routers []*router.Router
if globalDomainName != "" {
routers = append(routers, apiRouter.Host("{bucket:.+}."+globalDomainName).Subrouter())
}
routers = append(routers, apiRouter.PathPrefix("/{bucket}").Subrouter())
// Bucket router
bucket := apiRouter.PathPrefix("/{bucket}").Subrouter()
for _, bucket := range routers {
// Object operations
// HeadObject
bucket.Methods("HEAD").Path("/{object:.+}").HandlerFunc(httpTraceAll(api.HeadObjectHandler))
// CopyObjectPart
bucket.Methods("PUT").Path("/{object:.+}").HeadersRegexp("X-Amz-Copy-Source", ".*?(\\/|%2F).*?").HandlerFunc(httpTraceAll(api.CopyObjectPartHandler)).Queries("partNumber", "{partNumber:[0-9]+}", "uploadId", "{uploadId:.*}")
// PutObjectPart
bucket.Methods("PUT").Path("/{object:.+}").HandlerFunc(httpTraceHdrs(api.PutObjectPartHandler)).Queries("partNumber", "{partNumber:[0-9]+}", "uploadId", "{uploadId:.*}")
// ListObjectParts
bucket.Methods("GET").Path("/{object:.+}").HandlerFunc(httpTraceAll(api.ListObjectPartsHandler)).Queries("uploadId", "{uploadId:.*}")
// CompleteMultipartUpload
bucket.Methods("POST").Path("/{object:.+}").HandlerFunc(httpTraceAll(api.CompleteMultipartUploadHandler)).Queries("uploadId", "{uploadId:.*}")
// NewMultipartUpload
bucket.Methods("POST").Path("/{object:.+}").HandlerFunc(httpTraceAll(api.NewMultipartUploadHandler)).Queries("uploads", "")
// AbortMultipartUpload
bucket.Methods("DELETE").Path("/{object:.+}").HandlerFunc(httpTraceAll(api.AbortMultipartUploadHandler)).Queries("uploadId", "{uploadId:.*}")
// GetObject
bucket.Methods("GET").Path("/{object:.+}").HandlerFunc(httpTraceHdrs(api.GetObjectHandler))
// CopyObject
bucket.Methods("PUT").Path("/{object:.+}").HeadersRegexp("X-Amz-Copy-Source", ".*?(\\/|%2F).*?").HandlerFunc(httpTraceAll(api.CopyObjectHandler))
// PutObject
bucket.Methods("PUT").Path("/{object:.+}").HandlerFunc(httpTraceHdrs(api.PutObjectHandler))
// DeleteObject
bucket.Methods("DELETE").Path("/{object:.+}").HandlerFunc(httpTraceAll(api.DeleteObjectHandler))
/// Object operations
// HeadObject
bucket.Methods("HEAD").Path("/{object:.+}").HandlerFunc(api.HeadObjectHandler)
// CopyObjectPart
bucket.Methods("PUT").Path("/{object:.+}").HeadersRegexp("X-Amz-Copy-Source", ".*?(\\/|%2F).*?").HandlerFunc(api.CopyObjectPartHandler).Queries("partNumber", "{partNumber:[0-9]+}", "uploadId", "{uploadId:.*}")
// PutObjectPart
bucket.Methods("PUT").Path("/{object:.+}").HandlerFunc(api.PutObjectPartHandler).Queries("partNumber", "{partNumber:[0-9]+}", "uploadId", "{uploadId:.*}")
// ListObjectParts
bucket.Methods("GET").Path("/{object:.+}").HandlerFunc(api.ListObjectPartsHandler).Queries("uploadId", "{uploadId:.*}")
// CompleteMultipartUpload
bucket.Methods("POST").Path("/{object:.+}").HandlerFunc(api.CompleteMultipartUploadHandler).Queries("uploadId", "{uploadId:.*}")
// NewMultipartUpload
bucket.Methods("POST").Path("/{object:.+}").HandlerFunc(api.NewMultipartUploadHandler).Queries("uploads", "")
// AbortMultipartUpload
bucket.Methods("DELETE").Path("/{object:.+}").HandlerFunc(api.AbortMultipartUploadHandler).Queries("uploadId", "{uploadId:.*}")
// GetObject
bucket.Methods("GET").Path("/{object:.+}").HandlerFunc(api.GetObjectHandler)
// CopyObject
bucket.Methods("PUT").Path("/{object:.+}").HeadersRegexp("X-Amz-Copy-Source", ".*?(\\/|%2F).*?").HandlerFunc(api.CopyObjectHandler)
// PutObject
bucket.Methods("PUT").Path("/{object:.+}").HandlerFunc(api.PutObjectHandler)
// DeleteObject
bucket.Methods("DELETE").Path("/{object:.+}").HandlerFunc(api.DeleteObjectHandler)
/// Bucket operations
// GetBucketLocation
bucket.Methods("GET").HandlerFunc(api.GetBucketLocationHandler).Queries("location", "")
// GetBucketPolicy
bucket.Methods("GET").HandlerFunc(api.GetBucketPolicyHandler).Queries("policy", "")
// GetBucketNotification
bucket.Methods("GET").HandlerFunc(api.GetBucketNotificationHandler).Queries("notification", "")
// ListenBucketNotification
bucket.Methods("GET").HandlerFunc(api.ListenBucketNotificationHandler).Queries("events", "{events:.*}")
// ListMultipartUploads
bucket.Methods("GET").HandlerFunc(api.ListMultipartUploadsHandler).Queries("uploads", "")
// ListObjectsV2
bucket.Methods("GET").HandlerFunc(api.ListObjectsV2Handler).Queries("list-type", "2")
// ListObjectsV1 (Legacy)
bucket.Methods("GET").HandlerFunc(api.ListObjectsV1Handler)
// PutBucketPolicy
bucket.Methods("PUT").HandlerFunc(api.PutBucketPolicyHandler).Queries("policy", "")
// PutBucketNotification
bucket.Methods("PUT").HandlerFunc(api.PutBucketNotificationHandler).Queries("notification", "")
// PutBucket
bucket.Methods("PUT").HandlerFunc(api.PutBucketHandler)
// HeadBucket
bucket.Methods("HEAD").HandlerFunc(api.HeadBucketHandler)
// PostPolicy
bucket.Methods("POST").HeadersRegexp("Content-Type", "multipart/form-data*").HandlerFunc(api.PostPolicyBucketHandler)
// DeleteMultipleObjects
bucket.Methods("POST").HandlerFunc(api.DeleteMultipleObjectsHandler)
// DeleteBucketPolicy
bucket.Methods("DELETE").HandlerFunc(api.DeleteBucketPolicyHandler).Queries("policy", "")
// DeleteBucket
bucket.Methods("DELETE").HandlerFunc(api.DeleteBucketHandler)
/// Bucket operations
// GetBucketLocation
bucket.Methods("GET").HandlerFunc(httpTraceAll(api.GetBucketLocationHandler)).Queries("location", "")
// GetBucketPolicy
bucket.Methods("GET").HandlerFunc(httpTraceAll(api.GetBucketPolicyHandler)).Queries("policy", "")
// GetBucketNotification
bucket.Methods("GET").HandlerFunc(httpTraceAll(api.GetBucketNotificationHandler)).Queries("notification", "")
// ListenBucketNotification
bucket.Methods("GET").HandlerFunc(httpTraceAll(api.ListenBucketNotificationHandler)).Queries("events", "{events:.*}")
// ListMultipartUploads
bucket.Methods("GET").HandlerFunc(httpTraceAll(api.ListMultipartUploadsHandler)).Queries("uploads", "")
// ListObjectsV2
bucket.Methods("GET").HandlerFunc(httpTraceAll(api.ListObjectsV2Handler)).Queries("list-type", "2")
// ListObjectsV1 (Legacy)
bucket.Methods("GET").HandlerFunc(httpTraceAll(api.ListObjectsV1Handler))
// PutBucketPolicy
bucket.Methods("PUT").HandlerFunc(httpTraceAll(api.PutBucketPolicyHandler)).Queries("policy", "")
// PutBucketNotification
bucket.Methods("PUT").HandlerFunc(httpTraceAll(api.PutBucketNotificationHandler)).Queries("notification", "")
// PutBucket
bucket.Methods("PUT").HandlerFunc(httpTraceAll(api.PutBucketHandler))
// HeadBucket
bucket.Methods("HEAD").HandlerFunc(httpTraceAll(api.HeadBucketHandler))
// PostPolicy
bucket.Methods("POST").HeadersRegexp("Content-Type", "multipart/form-data*").HandlerFunc(httpTraceAll(api.PostPolicyBucketHandler))
// DeleteMultipleObjects
bucket.Methods("POST").HandlerFunc(httpTraceAll(api.DeleteMultipleObjectsHandler)).Queries("delete", "")
// DeleteBucketPolicy
bucket.Methods("DELETE").HandlerFunc(httpTraceAll(api.DeleteBucketPolicyHandler)).Queries("policy", "")
// DeleteBucket
bucket.Methods("DELETE").HandlerFunc(httpTraceAll(api.DeleteBucketHandler))
}
/// Root operation
// ListBuckets
apiRouter.Methods("GET").HandlerFunc(api.ListBucketsHandler)
apiRouter.Methods("GET").Path("/").HandlerFunc(httpTraceAll(api.ListBucketsHandler))
// If none of the routes match.
apiRouter.NotFoundHandler = http.HandlerFunc(httpTraceAll(notFoundHandler))
}
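The router change above registers every bucket/object route twice when globalDomainName is set: once on a host pattern ({bucket}.domain) for DNS-style access and once under the /{bucket} path prefix. A minimal gorilla/mux sketch of that dual registration, with example.com standing in for the configured domain:

package main

import (
	"fmt"
	"log"
	"net/http"

	"github.com/gorilla/mux"
)

func main() {
	r := mux.NewRouter()
	handler := func(w http.ResponseWriter, req *http.Request) {
		vars := mux.Vars(req)
		fmt.Fprintf(w, "bucket=%s object=%s\n", vars["bucket"], vars["object"])
	}
	// DNS-style route: the bucket name comes from the Host header.
	r.Host("{bucket:.+}.example.com").Path("/{object:.+}").Methods("GET").HandlerFunc(handler)
	// Path-style fallback, like the /{bucket} subrouter above.
	r.Path("/{bucket}/{object:.+}").Methods("GET").HandlerFunc(handler)
	log.Fatal(http.ListenAndServe(":9000", r))
}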


@@ -18,15 +18,13 @@ package cmd
import (
"bytes"
"errors"
"io/ioutil"
"net/http"
"strings"
)
// Verify if the request http Header "x-amz-content-sha256" == "UNSIGNED-PAYLOAD"
func isRequestUnsignedPayload(r *http.Request) bool {
return r.Header.Get("x-amz-content-sha256") == unsignedPayload
}
"github.com/minio/minio/pkg/handlers"
)
// Verify if request has JWT.
func isRequestJWT(r *http.Request) bool {
@@ -58,13 +56,14 @@ func isRequestPresignedSignatureV2(r *http.Request) bool {
// Verify if request has AWS Post policy Signature Version '4'.
func isRequestPostPolicySignatureV4(r *http.Request) bool {
return strings.Contains(r.Header.Get("Content-Type"), "multipart/form-data") && r.Method == httpPOST
return strings.Contains(r.Header.Get("Content-Type"), "multipart/form-data") &&
r.Method == http.MethodPost
}
// Verify if the request has AWS Streaming Signature Version '4'. This is only valid for 'PUT' operation.
func isRequestSignStreamingV4(r *http.Request) bool {
return r.Header.Get("x-amz-content-sha256") == streamingContentSHA256 &&
r.Method == httpPUT
r.Method == http.MethodPut
}
// Authorization type.
@@ -105,29 +104,38 @@ func getRequestAuthType(r *http.Request) authType {
return authTypeUnknown
}
// checkAdminRequestAuthType checks whether the request is a valid signature V2 or V4 request.
// It does not accept presigned or JWT or anonymous requests.
func checkAdminRequestAuthType(r *http.Request, region string) APIErrorCode {
s3Err := ErrAccessDenied
if getRequestAuthType(r) == authTypeSigned { // we only support V4 (no presign)
s3Err = isReqAuthenticated(r, region)
}
if s3Err != ErrNone {
errorIf(errors.New(getAPIError(s3Err).Description), "%s", dumpRequest(r))
}
return s3Err
}
func checkRequestAuthType(r *http.Request, bucket, policyAction, region string) APIErrorCode {
reqAuthType := getRequestAuthType(r)
switch reqAuthType {
case authTypePresignedV2, authTypeSignedV2:
// Signature V2 validation.
s3Error := isReqAuthenticatedV2(r)
if s3Error != ErrNone {
errorIf(errSignatureMismatch, "%s", dumpRequest(r))
}
return s3Error
return isReqAuthenticatedV2(r)
case authTypeSigned, authTypePresigned:
s3Error := isReqAuthenticated(r, region)
if s3Error != ErrNone {
errorIf(errSignatureMismatch, "%s", dumpRequest(r))
}
return s3Error
return isReqAuthenticated(r, region)
}
if reqAuthType == authTypeAnonymous && policyAction != "" {
// http://docs.aws.amazon.com/AmazonS3/latest/dev/using-with-s3-actions.html
return enforceBucketPolicy(bucket, policyAction, r.URL.Path,
r.Referer(), r.URL.Query())
resource, err := getResource(r.URL.Path, r.Host, globalDomainName)
if err != nil {
return ErrInternalError
}
return enforceBucketPolicy(bucket, policyAction, resource,
r.Referer(), handlers.GetSourceIP(r), r.URL.Query())
}
// By default return ErrAccessDenied


@@ -21,7 +21,11 @@ import (
"io"
"net/http"
"net/url"
"os"
"testing"
"time"
"github.com/minio/minio/pkg/auth"
)
// Test get request auth type.
@@ -187,40 +191,6 @@ func TestS3SupportedAuthType(t *testing.T) {
}
}
// TestIsRequestUnsignedPayload - Test validates the Unsigned payload detection logic.
func TestIsRequestUnsignedPayload(t *testing.T) {
testCases := []struct {
inputAmzContentHeader string
expectedResult bool
}{
// Test case - 1.
// Test case with "X-Amz-Content-Sha256" header set to empty value.
{"", false},
// Test case - 2.
// Test case with "X-Amz-Content-Sha256" header set to "UNSIGNED-PAYLOAD"
// The payload is flagged as unsigned When "X-Amz-Content-Sha256" header is set to "UNSIGNED-PAYLOAD".
{unsignedPayload, true},
// Test case - 3.
// set to a random value.
{"abcd", false},
}
// creating an input HTTP request.
// Only the headers are relevant for this particular test.
inputReq, err := http.NewRequest("GET", "http://example.com", nil)
if err != nil {
t.Fatalf("Error initializing input HTTP request: %v", err)
}
for i, testCase := range testCases {
inputReq.Header.Set("X-Amz-Content-Sha256", testCase.inputAmzContentHeader)
actualResult := isRequestUnsignedPayload(inputReq)
if testCase.expectedResult != actualResult {
t.Errorf("Test %d: Expected the result to `%v`, but instead got `%v`", i+1, testCase.expectedResult, actualResult)
}
}
}
func TestIsRequestPresignedSignatureV2(t *testing.T) {
testCases := []struct {
inputQueryKey string
@@ -301,17 +271,50 @@ func mustNewRequest(method string, urlStr string, contentLength int64, body io.R
// is signed with AWS Signature V4, fails if not able to do so.
func mustNewSignedRequest(method string, urlStr string, contentLength int64, body io.ReadSeeker, t *testing.T) *http.Request {
req := mustNewRequest(method, urlStr, contentLength, body, t)
cred := serverConfig.GetCredential()
cred := globalServerConfig.GetCredential()
if err := signRequestV4(req, cred.AccessKey, cred.SecretKey); err != nil {
t.Fatalf("Unable to inititalized new signed http request %s", err)
}
return req
}
// This is similar to mustNewRequest but additionally the request
// is signed with AWS Signature V2, fails if not able to do so.
func mustNewSignedV2Request(method string, urlStr string, contentLength int64, body io.ReadSeeker, t *testing.T) *http.Request {
req := mustNewRequest(method, urlStr, contentLength, body, t)
cred := globalServerConfig.GetCredential()
if err := signRequestV2(req, cred.AccessKey, cred.SecretKey); err != nil {
t.Fatalf("Unable to inititalized new signed http request %s", err)
}
return req
}
// This is similar to mustNewRequest but additionally the request
// is presigned with AWS Signature V2, fails if not able to do so.
func mustNewPresignedV2Request(method string, urlStr string, contentLength int64, body io.ReadSeeker, t *testing.T) *http.Request {
req := mustNewRequest(method, urlStr, contentLength, body, t)
cred := globalServerConfig.GetCredential()
if err := preSignV2(req, cred.AccessKey, cred.SecretKey, time.Now().Add(10*time.Minute).Unix()); err != nil {
t.Fatalf("Unable to inititalized new signed http request %s", err)
}
return req
}
// This is similar to mustNewRequest but additionally the request
// is presigned with AWS Signature V4, fails if not able to do so.
func mustNewPresignedRequest(method string, urlStr string, contentLength int64, body io.ReadSeeker, t *testing.T) *http.Request {
req := mustNewRequest(method, urlStr, contentLength, body, t)
cred := globalServerConfig.GetCredential()
if err := preSignV4(req, cred.AccessKey, cred.SecretKey, time.Now().Add(10*time.Minute).Unix()); err != nil {
t.Fatalf("Unable to inititalized new signed http request %s", err)
}
return req
}
func mustNewSignedBadMD5Request(method string, urlStr string, contentLength int64, body io.ReadSeeker, t *testing.T) *http.Request {
req := mustNewRequest(method, urlStr, contentLength, body, t)
req.Header.Set("Content-Md5", "YWFhYWFhYWFhYWFhYWFhCg==")
cred := serverConfig.GetCredential()
cred := globalServerConfig.GetCredential()
if err := signRequestV4(req, cred.AccessKey, cred.SecretKey); err != nil {
t.Fatalf("Unable to initialized new signed http request %s", err)
}
@@ -324,14 +327,14 @@ func TestIsReqAuthenticated(t *testing.T) {
if err != nil {
t.Fatalf("unable initialize config file, %s", err)
}
defer removeAll(path)
defer os.RemoveAll(path)
creds, err := createCredential("myuser", "mypassword")
creds, err := auth.CreateCredentials("myuser", "mypassword")
if err != nil {
t.Fatalf("unable create credential, %s", err)
}
serverConfig.SetCredential(creds)
globalServerConfig.SetCredential(creds)
// List of test cases for validating http request authentication.
testCases := []struct {
@@ -350,8 +353,37 @@ func TestIsReqAuthenticated(t *testing.T) {
// Validates all testcases.
for _, testCase := range testCases {
if s3Error := isReqAuthenticated(testCase.req, serverConfig.GetRegion()); s3Error != testCase.s3Error {
if s3Error := isReqAuthenticated(testCase.req, globalServerConfig.GetRegion()); s3Error != testCase.s3Error {
t.Fatalf("Unexpected s3error returned wanted %d, got %d", testCase.s3Error, s3Error)
}
}
}
func TestCheckAdminRequestAuthType(t *testing.T) {
path, err := newTestConfig(globalMinioDefaultRegion)
if err != nil {
t.Fatalf("unable initialize config file, %s", err)
}
defer os.RemoveAll(path)
creds, err := auth.CreateCredentials("myuser", "mypassword")
if err != nil {
t.Fatalf("unable create credential, %s", err)
}
globalServerConfig.SetCredential(creds)
testCases := []struct {
Request *http.Request
ErrCode APIErrorCode
}{
{Request: mustNewRequest("GET", "http://127.0.0.1:9000", 0, nil, t), ErrCode: ErrAccessDenied},
{Request: mustNewSignedRequest("GET", "http://127.0.0.1:9000", 0, nil, t), ErrCode: ErrNone},
{Request: mustNewSignedV2Request("GET", "http://127.0.0.1:9000", 0, nil, t), ErrCode: ErrAccessDenied},
{Request: mustNewPresignedV2Request("GET", "http://127.0.0.1:9000", 0, nil, t), ErrCode: ErrAccessDenied},
{Request: mustNewPresignedRequest("GET", "http://127.0.0.1:9000", 0, nil, t), ErrCode: ErrAccessDenied},
}
for i, testCase := range testCases {
if s3Error := checkAdminRequestAuthType(testCase.Request, globalServerConfig.GetRegion()); s3Error != testCase.ErrCode {
t.Errorf("Test %d: Unexpected s3error returned wanted %d, got %d", i, testCase.ErrCode, s3Error)
}
}
}


@@ -1,5 +1,5 @@
/*
* Minio Cloud Storage, (C) 2016, 2017 Minio, Inc.
* Minio Cloud Storage, (C) 2016, 2017, 2018 Minio, Inc.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
@@ -17,7 +17,16 @@
package cmd
import (
"bufio"
"crypto/tls"
"crypto/x509"
"errors"
"fmt"
"io"
"net"
"net/http"
"net/rpc"
"strings"
"sync"
"time"
)
@@ -52,10 +61,11 @@ type authConfig struct {
// AuthRPCClient is a authenticated RPC client which does authentication before doing Call().
type AuthRPCClient struct {
sync.RWMutex // Mutex to lock this object.
rpcClient *RPCClient // Reconnectable RPC client to make any RPC call.
config authConfig // Authentication configuration information.
authToken string // Authentication token.
sync.RWMutex // Mutex to lock this object.
rpcClient *rpc.Client // RPC client to make any RPC call.
config authConfig // Authentication configuration information.
authToken string // Authentication token.
version semVersion // RPC version.
}
// newAuthRPCClient - returns a JWT-based authenticated (Go) RPC client, which does automatic reconnect.
@@ -73,8 +83,8 @@ func newAuthRPCClient(config authConfig) *AuthRPCClient {
}
return &AuthRPCClient{
rpcClient: newRPCClient(config.serverAddr, config.serviceEndpoint, config.secureConn),
config: config,
config: config,
version: globalRPCAPIVersion,
}
}
@@ -99,37 +109,60 @@ func (authClient *AuthRPCClient) Login() (err error) {
// Attempt to login if not logged in already.
if authClient.authToken == "" {
// Login to authenticate and acquire a new auth token.
var authToken string
authToken, err = authenticateNode(authClient.config.accessKey, authClient.config.secretKey)
if err != nil {
return err
}
// Login to authenticate your token.
var (
loginMethod = authClient.config.serviceName + loginMethodName
loginArgs = LoginRPCArgs{
Username: authClient.config.accessKey,
Password: authClient.config.secretKey,
Version: Version,
AuthToken: authToken,
Version: globalRPCAPIVersion,
RequestTime: UTCNow(),
}
loginReply = LoginRPCReply{}
)
if err = authClient.rpcClient.Call(loginMethod, &loginArgs, &loginReply); err != nil {
// Re-dial after we have disconnected or if its a fresh run.
var rpcClient *rpc.Client
rpcClient, err = rpcDial(authClient.config.serverAddr, authClient.config.serviceEndpoint, authClient.config.secureConn)
if err != nil {
return err
}
authClient.authToken = loginReply.AuthToken
if err = rpcClient.Call(loginMethod, &loginArgs, &LoginRPCReply{}); err != nil {
// gob doesn't provide any typed errors for us to reflect
// upon; this is the only way to return a proper error.
if strings.Contains(err.Error(), "gob: wrong type") {
return errRPCAPIVersionUnsupported
}
return err
}
// Initialize rpc client and auth token after a successful login.
authClient.authToken = authToken
authClient.rpcClient = rpcClient
}
return nil
}
// call makes an RPC call after logging into the server.
func (authClient *AuthRPCClient) call(serviceMethod string, args interface {
SetAuthToken(authToken string)
SetRPCAPIVersion(version semVersion)
}, reply interface{}) (err error) {
if err = authClient.Login(); err != nil {
return err
}
// On successful login, execute RPC call.
authClient.RLock()
// Set token before the rpc call.
authClient.RLock()
defer authClient.RUnlock()
args.SetAuthToken(authClient.authToken)
authClient.RUnlock()
args.SetRPCAPIVersion(authClient.version)
// Do an RPC call.
return authClient.rpcClient.Call(serviceMethod, args, reply)
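call and Call accept any args value that exposes SetAuthToken and SetRPCAPIVersion; in the server this interface is satisfied by embedding AuthRPCArgs. A minimal standalone sketch of such a type (the semVersion field layout here is an assumption for illustration):

package main

import "fmt"

// semVersion stands in for the server's RPC version type.
type semVersion struct{ Major, Minor, Patch uint64 }

// exampleRPCArgs is a minimal args type satisfying the interface that
// call/Call expect above.
type exampleRPCArgs struct {
	AuthToken string
	Version   semVersion
}

func (a *exampleRPCArgs) SetAuthToken(token string)     { a.AuthToken = token }
func (a *exampleRPCArgs) SetRPCAPIVersion(v semVersion) { a.Version = v }

func main() {
	args := &exampleRPCArgs{}
	args.SetAuthToken("jwt-token")
	args.SetRPCAPIVersion(semVersion{1, 0, 0})
	fmt.Printf("%+v\n", *args)
}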
@@ -138,6 +171,7 @@ func (authClient *AuthRPCClient) call(serviceMethod string, args interface {
// Call executes RPC call till success or globalAuthRPCRetryThreshold on ErrShutdown.
func (authClient *AuthRPCClient) Call(serviceMethod string, args interface {
SetAuthToken(authToken string)
SetRPCAPIVersion(version semVersion)
}, reply interface{}) (err error) {
// Done channel is used to close any lingering retry routine, as soon
@@ -158,6 +192,11 @@ func (authClient *AuthRPCClient) Call(serviceMethod string, args interface {
}
}
}
// gob doesn't provide any typed errors for us to reflect
// upon; this is the only way to return a proper error.
if err != nil && strings.Contains(err.Error(), "gob: wrong type") {
err = errRPCAPIVersionUnsupported
}
break
}
return err
@@ -168,6 +207,10 @@ func (authClient *AuthRPCClient) Close() error {
authClient.Lock()
defer authClient.Unlock()
if authClient.rpcClient == nil {
return nil
}
authClient.authToken = ""
return authClient.rpcClient.Close()
}
@@ -181,3 +224,87 @@ func (authClient *AuthRPCClient) ServerAddr() string {
func (authClient *AuthRPCClient) ServiceEndpoint() string {
return authClient.config.serviceEndpoint
}
// default Dial timeout for RPC connections.
const defaultDialTimeout = 3 * time.Second
// Connect success message required from rpc server.
const connectSuccessMessage = "200 Connected to Go RPC"
// rpcDial tries to establish a connection to serverAddr in a safe manner.
// If there is a valid rpc.Client, it returns that, else it creates a new one.
func rpcDial(serverAddr, serviceEndpoint string, secureConn bool) (netRPCClient *rpc.Client, err error) {
if serverAddr == "" || serviceEndpoint == "" {
return nil, errInvalidArgument
}
d := &net.Dialer{
Timeout: defaultDialTimeout,
}
var conn net.Conn
if secureConn {
var hostname string
if hostname, _, err = net.SplitHostPort(serverAddr); err != nil {
return nil, &net.OpError{
Op: "dial-http",
Net: serverAddr + serviceEndpoint,
Addr: nil,
Err: fmt.Errorf("Unable to parse server address <%s>: %s", serverAddr, err),
}
}
// ServerName in tls.Config needs to be specified to support SNI certificates.
conn, err = tls.DialWithDialer(d, "tcp", serverAddr, &tls.Config{
ServerName: hostname,
RootCAs: globalRootCAs,
})
} else {
conn, err = d.Dial("tcp", serverAddr)
}
if err != nil {
// Print RPC connection errors that are worthy to display in log.
switch err.(type) {
case x509.HostnameError:
errorIf(err, "Unable to establish secure connection to %s", serverAddr)
}
return nil, &net.OpError{
Op: "dial-http",
Net: serverAddr + serviceEndpoint,
Addr: nil,
Err: err,
}
}
// Check for network errors writing over the dialed conn.
if _, err = io.WriteString(conn, "CONNECT "+serviceEndpoint+" HTTP/1.0\n\n"); err != nil {
conn.Close()
return nil, &net.OpError{
Op: "dial-http",
Net: serverAddr + serviceEndpoint,
Addr: nil,
Err: err,
}
}
// Attempt to read the HTTP response for the HTTP method CONNECT, upon
// success return the RPC connection instance.
resp, err := http.ReadResponse(bufio.NewReader(conn), &http.Request{
Method: http.MethodConnect,
})
if err != nil {
conn.Close()
return nil, &net.OpError{
Op: "dial-http",
Net: serverAddr + serviceEndpoint,
Addr: nil,
Err: err,
}
}
if resp.Status != connectSuccessMessage {
conn.Close()
return nil, errors.New("unexpected HTTP response: " + resp.Status)
}
// Initialize rpc client.
return rpc.NewClient(conn), nil
}
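rpcDial hand-rolls the same CONNECT handshake that the standard library's rpc.DialHTTPPath performs, adding a dial timeout and TLS/SNI support on top. A self-contained sketch of that handshake using only the standard library:

package main

import (
	"fmt"
	"log"
	"net"
	"net/http"
	"net/rpc"
)

// Echo is a toy RPC service used only to demonstrate the handshake.
type Echo struct{}

func (Echo) Ping(arg string, reply *string) error {
	*reply = "pong: " + arg
	return nil
}

func main() {
	srv := rpc.NewServer()
	if err := srv.Register(Echo{}); err != nil {
		log.Fatal(err)
	}
	// rpc.Server answers CONNECT requests with "200 Connected to Go RPC",
	// the same connectSuccessMessage rpcDial checks for.
	http.Handle("/demo-rpc", srv)
	ln, err := net.Listen("tcp", "127.0.0.1:0")
	if err != nil {
		log.Fatal(err)
	}
	go http.Serve(ln, nil)

	// DialHTTPPath performs the "CONNECT <path> HTTP/1.0" exchange that
	// rpcDial reimplements by hand.
	client, err := rpc.DialHTTPPath("tcp", ln.Addr().String(), "/demo-rpc")
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	var reply string
	if err := client.Call("Echo.Ping", "hello", &reply); err != nil {
		log.Fatal(err)
	}
	fmt.Println(reply) // pong: hello
}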


@@ -16,7 +16,12 @@
package cmd
import "testing"
import (
"crypto/x509"
"os"
"path"
"testing"
)
// Tests authorized RPC client.
func TestAuthRPCClient(t *testing.T) {
@@ -53,3 +58,81 @@ func TestAuthRPCClient(t *testing.T) {
t.Fatalf("Unexpected node value %s, but expected %s", authRPC.ServiceEndpoint(), authCfg.serviceEndpoint)
}
}
// Test rpc dial test.
func TestRPCDial(t *testing.T) {
prevRootCAs := globalRootCAs
defer func() {
globalRootCAs = prevRootCAs
}()
rootPath, err := newTestConfig(globalMinioDefaultRegion)
if err != nil {
t.Fatal(err)
}
defer os.RemoveAll(rootPath)
testServer := StartTestServer(t, "")
defer testServer.Stop()
cert, key, err := generateTLSCertKey("127.0.0.1")
if err != nil {
t.Fatal(err)
}
// Set global root CAs.
globalRootCAs = x509.NewCertPool()
globalRootCAs.AppendCertsFromPEM(cert)
testServerTLS := StartTestTLSServer(t, "", cert, key)
defer testServerTLS.Stop()
adminEndpoint := path.Join(minioReservedBucketPath, adminPath)
testCases := []struct {
serverAddr string
serverEndpoint string
success bool
secure bool
}{
// Empty server addr should fail.
{
serverAddr: "",
serverEndpoint: adminEndpoint,
success: false,
},
// Unexpected server addr should fail.
{
serverAddr: "example.com",
serverEndpoint: adminEndpoint,
success: false,
},
// Server addr connects but fails for CONNECT call.
{
serverAddr: "example.com:80",
serverEndpoint: "/",
success: false,
},
// Successful connecting to insecure RPC server.
{
serverAddr: testServer.Server.Listener.Addr().String(),
serverEndpoint: adminEndpoint,
success: true,
},
// Successful connecting to secure RPC server.
{
serverAddr: testServerTLS.Server.Listener.Addr().String(),
serverEndpoint: adminEndpoint,
success: true,
secure: true,
},
}
for i, testCase := range testCases {
_, err = rpcDial(testCase.serverAddr, testCase.serverEndpoint, testCase.secure)
if err != nil && testCase.success {
t.Errorf("Test %d: Expected success but found failure instead %s", i+1, err)
}
if err == nil && !testCase.success {
t.Errorf("Test %d: Expected failure but found success instead", i+1)
}
}
}


@@ -20,8 +20,7 @@ package cmd
const loginMethodName = ".Login"
// AuthRPCServer RPC server authenticates using JWT.
type AuthRPCServer struct {
}
type AuthRPCServer struct{}
// Login - Handles JWT based RPC login.
func (b AuthRPCServer) Login(args *LoginRPCArgs, reply *LoginRPCReply) error {
@@ -30,14 +29,10 @@ func (b AuthRPCServer) Login(args *LoginRPCArgs, reply *LoginRPCReply) error {
return err
}
// Authenticate using JWT.
token, err := authenticateNode(args.Username, args.Password)
if err != nil {
return err
// Return an error if token is not valid.
if !isAuthTokenValid(args.AuthToken) {
return errAuthentication
}
// Return the token.
reply.AuthToken = token
return nil
}
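Login thus no longer verifies credentials itself: the client first obtains a token via authenticateNode, and the server only checks that the presented token is valid. A rough standalone sketch of such a mint-and-verify flow using an HMAC-signed JWT (the claim contents, expiry, and the jwt-go dependency are assumptions for illustration, not the server's exact implementation):

package main

import (
	"fmt"
	"time"

	jwt "github.com/dgrijalva/jwt-go"
)

// mintToken plays the role of authenticateNode: issue a JWT signed with the
// node's secret key.
func mintToken(accessKey, secretKey string) (string, error) {
	claims := jwt.StandardClaims{
		Subject:   accessKey,
		ExpiresAt: time.Now().UTC().Add(15 * time.Minute).Unix(),
	}
	return jwt.NewWithClaims(jwt.SigningMethodHS512, claims).SignedString([]byte(secretKey))
}

// isTokenValid plays the role of isAuthTokenValid: re-parse and verify the
// token. Production code should also verify t.Method in the key func.
func isTokenValid(tokenStr, secretKey string) bool {
	token, err := jwt.Parse(tokenStr, func(t *jwt.Token) (interface{}, error) {
		return []byte(secretKey), nil
	})
	return err == nil && token.Valid
}

func main() {
	tok, _ := mintToken("myuser", "mypassword")
	fmt.Println(isTokenValid(tok, "mypassword")) // true
	fmt.Println(isTokenValid(tok, "wrong-key"))  // false
}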


@@ -17,6 +17,7 @@
package cmd
import (
"os"
"testing"
"time"
)
@@ -26,8 +27,12 @@ func TestLogin(t *testing.T) {
if err != nil {
t.Fatalf("Failed to create test config - %v", err)
}
defer removeAll(rootPath)
creds := serverConfig.GetCredential()
defer os.RemoveAll(rootPath)
creds := globalServerConfig.GetCredential()
token, err := authenticateNode(creds.AccessKey, creds.SecretKey)
if err != nil {
t.Fatal(err)
}
ls := AuthRPCServer{}
testCases := []struct {
args LoginRPCArgs
@@ -37,9 +42,8 @@ func TestLogin(t *testing.T) {
// Valid case.
{
args: LoginRPCArgs{
Username: creds.AccessKey,
Password: creds.SecretKey,
Version: Version,
AuthToken: token,
Version: globalRPCAPIVersion,
},
skewTime: 0,
expectedErr: nil,
@@ -47,59 +51,26 @@ func TestLogin(t *testing.T) {
// Valid username, password and request time, not version.
{
args: LoginRPCArgs{
Username: creds.AccessKey,
Password: creds.SecretKey,
Version: "INVALID-" + Version,
AuthToken: token,
Version: semVersion{3, 0, 0},
},
skewTime: 0,
expectedErr: errServerVersionMismatch,
expectedErr: errRPCAPIVersionUnsupported,
},
// Valid username, password and version, not request time
{
args: LoginRPCArgs{
Username: creds.AccessKey,
Password: creds.SecretKey,
Version: Version,
AuthToken: token,
Version: globalRPCAPIVersion,
},
skewTime: 20 * time.Minute,
expectedErr: errServerTimeMismatch,
},
// Invalid username length
// Invalid token, fails with authentication error
{
args: LoginRPCArgs{
Username: "aaa",
Password: "minio123",
Version: Version,
},
skewTime: 0,
expectedErr: errInvalidAccessKeyLength,
},
// Invalid password length
{
args: LoginRPCArgs{
Username: "minio",
Password: "aaa",
Version: Version,
},
skewTime: 0,
expectedErr: errInvalidSecretKeyLength,
},
// Invalid username
{
args: LoginRPCArgs{
Username: "aaaaa",
Password: creds.SecretKey,
Version: Version,
},
skewTime: 0,
expectedErr: errInvalidAccessKeyID,
},
// Invalid password
{
args: LoginRPCArgs{
Username: creds.AccessKey,
Password: "aaaaaaaa",
Version: Version,
AuthToken: "",
Version: globalRPCAPIVersion,
},
skewTime: 0,
expectedErr: errAuthentication,
@@ -107,7 +78,7 @@ func TestLogin(t *testing.T) {
}
for i, test := range testCases {
reply := LoginRPCReply{}
test.args.RequestTime = time.Now().Add(test.skewTime).UTC()
test.args.RequestTime = UTCNow().Add(test.skewTime)
err := ls.Login(&test.args, &reply)
if err != test.expectedErr {
t.Errorf("Test %d: Expected error %v but received %v",


@@ -21,6 +21,7 @@ import (
"io/ioutil"
"math"
"math/rand"
"os"
"strconv"
"testing"
@@ -49,20 +50,23 @@ func runPutObjectBenchmark(b *testing.B, obj ObjectLayer, objSize int) {
// generate md5sum for the generated data.
// md5sum of the data to be written is required as input for PutObject.
metadata := make(map[string]string)
metadata["etag"] = getMD5Hash(textData)
sha256sum := ""
md5hex := getMD5Hash(textData)
sha256hex := ""
// benchmark utility which helps obtain number of allocations and bytes allocated per ops.
b.ReportAllocs()
// the actual benchmark for PutObject starts here. Reset the benchmark timer.
b.ResetTimer()
for i := 0; i < b.N; i++ {
// insert the object.
objInfo, err := obj.PutObject(bucket, "object"+strconv.Itoa(i), int64(len(textData)), bytes.NewBuffer(textData), metadata, sha256sum)
objInfo, err := obj.PutObject(bucket, "object"+strconv.Itoa(i),
mustGetHashReader(b, bytes.NewBuffer(textData), int64(len(textData)), md5hex, sha256hex), metadata)
if err != nil {
b.Fatal(err)
}
if objInfo.ETag != metadata["etag"] {
b.Fatalf("Write no: %d: Md5Sum mismatch during object write into the bucket: Expected %s, got %s", i+1, objInfo.ETag, metadata["etag"])
if objInfo.ETag != md5hex {
b.Fatalf("Write no: %d: Md5Sum mismatch during object write into the bucket: Expected %s, got %s", i+1, objInfo.ETag, md5hex)
}
}
// Benchmark ends here. Stop timer.
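The benchmarks now hand PutObject a single hash-verifying reader instead of separate size/reader/md5/sha256 arguments; mustGetHashReader is presumably a thin test helper around hash.NewReader from pkg/hash (already imported earlier in this change set). A small sketch of that pattern, assuming the NewReader signature used at the time of this change:

package main

import (
	"bytes"
	"fmt"
	"io/ioutil"

	"github.com/minio/minio/pkg/hash"
)

func main() {
	data := []byte("hello world")
	// Empty md5/sha256 hex strings skip verification of that digest;
	// passing real hex digests makes the reader fail on mismatch.
	r, err := hash.NewReader(bytes.NewReader(data), int64(len(data)), "", "")
	if err != nil {
		panic(err)
	}
	out, err := ioutil.ReadAll(r) // digests are checked as the data streams
	fmt.Println(len(out), err)
}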
@@ -93,13 +97,14 @@ func runPutObjectPartBenchmark(b *testing.B, obj ObjectLayer, partSize int) {
// generate md5sum for the generated data.
// md5sum of the data to be written is required as input for NewMultipartUpload.
metadata := make(map[string]string)
metadata["etag"] = getMD5Hash(textData)
sha256sum := ""
uploadID, err = obj.NewMultipartUpload(bucket, object, metadata)
if err != nil {
b.Fatal(err)
}
md5hex := getMD5Hash(textData)
sha256hex := ""
var textPartData []byte
// benchmark utility which helps obtain number of allocations and bytes allocated per ops.
b.ReportAllocs()
@@ -114,15 +119,15 @@ func runPutObjectPartBenchmark(b *testing.B, obj ObjectLayer, partSize int) {
} else {
textPartData = textData[j*partSize:]
}
metadata := make(map[string]string)
metadata["etag"] = getMD5Hash([]byte(textPartData))
md5hex = getMD5Hash([]byte(textPartData))
var partInfo PartInfo
partInfo, err = obj.PutObjectPart(bucket, object, uploadID, j, int64(len(textPartData)), bytes.NewBuffer(textPartData), metadata["etag"], sha256sum)
partInfo, err = obj.PutObjectPart(bucket, object, uploadID, j,
mustGetHashReader(b, bytes.NewBuffer(textPartData), int64(len(textPartData)), md5hex, sha256hex))
if err != nil {
b.Fatal(err)
}
if partInfo.ETag != metadata["etag"] {
b.Fatalf("Write no: %d: Md5Sum mismatch during object write into the bucket: Expected %s, got %s", i+1, etag, metadata["etag"])
if partInfo.ETag != md5hex {
b.Fatalf("Write no: %d: Md5Sum mismatch during object write into the bucket: Expected %s, got %s", i+1, etag, md5hex)
}
}
}
@@ -136,7 +141,7 @@ func benchmarkPutObjectPart(b *testing.B, instanceType string, objSize int) {
if err != nil {
b.Fatalf("Unable to initialize config. %s", err)
}
defer removeAll(rootPath)
defer os.RemoveAll(rootPath)
// create a temp XL/FS backend.
objLayer, disks, err := prepareBenchmarkBackend(instanceType)
@@ -155,7 +160,7 @@ func benchmarkPutObject(b *testing.B, instanceType string, objSize int) {
if err != nil {
b.Fatalf("Unable to initialize config. %s", err)
}
defer removeAll(rootPath)
defer os.RemoveAll(rootPath)
// create a temp XL/FS backend.
objLayer, disks, err := prepareBenchmarkBackend(instanceType)
@@ -174,7 +179,7 @@ func benchmarkPutObjectParallel(b *testing.B, instanceType string, objSize int)
if err != nil {
b.Fatalf("Unable to initialize config. %s", err)
}
defer removeAll(rootPath)
defer os.RemoveAll(rootPath)
// create a temp XL/FS backend.
objLayer, disks, err := prepareBenchmarkBackend(instanceType)
@@ -194,7 +199,7 @@ func runGetObjectBenchmark(b *testing.B, obj ObjectLayer, objSize int) {
if err != nil {
b.Fatalf("Unable to initialize config. %s", err)
}
defer removeAll(rootPath)
defer os.RemoveAll(rootPath)
// obtains random bucket name.
bucket := getRandomBucketName()
@@ -204,23 +209,27 @@ func runGetObjectBenchmark(b *testing.B, obj ObjectLayer, objSize int) {
b.Fatal(err)
}
sha256sum := ""
textData := generateBytesData(objSize)
// generate etag for the generated data.
// etag of the data to be written is required as input for PutObject.
// PutObject is the function which writes the data onto the FS/XL backend.
metadata := make(map[string]string)
// get text data generated for number of bytes equal to object size.
md5hex := getMD5Hash(textData)
sha256hex := ""
for i := 0; i < 10; i++ {
// get text data generated for number of bytes equal to object size.
textData := generateBytesData(objSize)
// generate etag for the generated data.
// etag of the data to be written is required as input for PutObject.
// PutObject is the function which writes the data onto the FS/XL backend.
metadata := make(map[string]string)
metadata["etag"] = getMD5Hash(textData)
// insert the object.
var objInfo ObjectInfo
objInfo, err = obj.PutObject(bucket, "object"+strconv.Itoa(i), int64(len(textData)), bytes.NewBuffer(textData), metadata, sha256sum)
objInfo, err = obj.PutObject(bucket, "object"+strconv.Itoa(i),
mustGetHashReader(b, bytes.NewBuffer(textData), int64(len(textData)), md5hex, sha256hex), metadata)
if err != nil {
b.Fatal(err)
}
if objInfo.ETag != metadata["etag"] {
b.Fatalf("Write no: %d: Md5Sum mismatch during object write into the bucket: Expected %s, got %s", i+1, objInfo.ETag, metadata["etag"])
if objInfo.ETag != md5hex {
b.Fatalf("Write no: %d: Md5Sum mismatch during object write into the bucket: Expected %s, got %s", i+1, objInfo.ETag, md5hex)
}
}
@@ -230,7 +239,7 @@ func runGetObjectBenchmark(b *testing.B, obj ObjectLayer, objSize int) {
b.ResetTimer()
for i := 0; i < b.N; i++ {
var buffer = new(bytes.Buffer)
err = obj.GetObject(bucket, "object"+strconv.Itoa(i%10), 0, int64(objSize), buffer)
err = obj.GetObject(bucket, "object"+strconv.Itoa(i%10), 0, int64(objSize), buffer, "")
if err != nil {
b.Error(err)
}
@@ -263,7 +272,7 @@ func benchmarkGetObject(b *testing.B, instanceType string, objSize int) {
if err != nil {
b.Fatalf("Unable to initialize config. %s", err)
}
defer removeAll(rootPath)
defer os.RemoveAll(rootPath)
// create a temp XL/FS backend.
objLayer, disks, err := prepareBenchmarkBackend(instanceType)
@@ -282,7 +291,7 @@ func benchmarkGetObjectParallel(b *testing.B, instanceType string, objSize int)
if err != nil {
b.Fatalf("Unable to initialize config. %s", err)
}
defer removeAll(rootPath)
defer os.RemoveAll(rootPath)
// create a temp XL/FS backend.
objLayer, disks, err := prepareBenchmarkBackend(instanceType)
@@ -302,7 +311,7 @@ func runPutObjectBenchmarkParallel(b *testing.B, obj ObjectLayer, objSize int) {
if err != nil {
b.Fatalf("Unable to initialize config. %s", err)
}
defer removeAll(rootPath)
defer os.RemoveAll(rootPath)
// obtains random bucket name.
bucket := getRandomBucketName()
@@ -317,8 +326,10 @@ func runPutObjectBenchmarkParallel(b *testing.B, obj ObjectLayer, objSize int) {
// generate md5sum for the generated data.
// md5sum of the data to be written is required as input for PutObject.
metadata := make(map[string]string)
metadata["etag"] = getMD5Hash([]byte(textData))
sha256sum := ""
md5hex := getMD5Hash([]byte(textData))
sha256hex := ""
// benchmark utility which helps obtain number of allocations and bytes allocated per ops.
b.ReportAllocs()
// the actual benchmark for PutObject starts here. Reset the benchmark timer.
@@ -328,12 +339,13 @@ func runPutObjectBenchmarkParallel(b *testing.B, obj ObjectLayer, objSize int) {
i := 0
for pb.Next() {
// insert the object.
objInfo, err := obj.PutObject(bucket, "object"+strconv.Itoa(i), int64(len(textData)), bytes.NewBuffer(textData), metadata, sha256sum)
objInfo, err := obj.PutObject(bucket, "object"+strconv.Itoa(i),
mustGetHashReader(b, bytes.NewBuffer(textData), int64(len(textData)), md5hex, sha256hex), metadata)
if err != nil {
b.Fatal(err)
}
if objInfo.ETag != metadata["etag"] {
b.Fatalf("Write no: Md5Sum mismatch during object write into the bucket: Expected %s, got %s", objInfo.ETag, metadata["etag"])
if objInfo.ETag != md5hex {
b.Fatalf("Write no: Md5Sum mismatch during object write into the bucket: Expected %s, got %s", objInfo.ETag, md5hex)
}
i++
}
@@ -350,7 +362,7 @@ func runGetObjectBenchmarkParallel(b *testing.B, obj ObjectLayer, objSize int) {
if err != nil {
b.Fatalf("Unable to initialize config. %s", err)
}
defer removeAll(rootPath)
defer os.RemoveAll(rootPath)
// obtains random bucket name.
bucket := getRandomBucketName()
@@ -360,23 +372,26 @@ func runGetObjectBenchmarkParallel(b *testing.B, obj ObjectLayer, objSize int) {
b.Fatal(err)
}
// get text data generated for number of bytes equal to object size.
textData := generateBytesData(objSize)
// generate md5sum for the generated data.
// md5sum of the data to be written is required as input for PutObject.
// PutObject is the function which writes the data onto the FS/XL backend.
metadata := make(map[string]string)
md5hex := getMD5Hash([]byte(textData))
sha256hex := ""
for i := 0; i < 10; i++ {
// get text data generated for number of bytes equal to object size.
textData := generateBytesData(objSize)
// generate md5sum for the generated data.
// md5sum of the data to be written is required as input for PutObject.
// PutObject is the function which writes the data onto the FS/XL backend.
metadata := make(map[string]string)
metadata["etag"] = getMD5Hash([]byte(textData))
sha256sum := ""
// insert the object.
var objInfo ObjectInfo
objInfo, err = obj.PutObject(bucket, "object"+strconv.Itoa(i), int64(len(textData)), bytes.NewBuffer(textData), metadata, sha256sum)
objInfo, err = obj.PutObject(bucket, "object"+strconv.Itoa(i),
mustGetHashReader(b, bytes.NewBuffer(textData), int64(len(textData)), md5hex, sha256hex), metadata)
if err != nil {
b.Fatal(err)
}
if objInfo.ETag != metadata["etag"] {
b.Fatalf("Write no: %d: Md5Sum mismatch during object write into the bucket: Expected %s, got %s", i+1, objInfo.ETag, metadata["etag"])
if objInfo.ETag != md5hex {
b.Fatalf("Write no: %d: Md5Sum mismatch during object write into the bucket: Expected %s, got %s", i+1, objInfo.ETag, md5hex)
}
}
@@ -387,7 +402,7 @@ func runGetObjectBenchmarkParallel(b *testing.B, obj ObjectLayer, objSize int) {
b.RunParallel(func(pb *testing.PB) {
i := 0
for pb.Next() {
err = obj.GetObject(bucket, "object"+strconv.Itoa(i), 0, int64(objSize), ioutil.Discard)
err = obj.GetObject(bucket, "object"+strconv.Itoa(i), 0, int64(objSize), ioutil.Discard, "")
if err != nil {
b.Error(err)
}


@@ -21,35 +21,17 @@ import (
"path"
"sync"
"time"
"github.com/minio/minio/pkg/auth"
)
// Login handler implements the JWT login token generator; a token is
// generated for login requests carrying a valid username and password.
func (br *browserPeerAPIHandlers) Login(args *LoginRPCArgs, reply *LoginRPCReply) error {
// Validate LoginRPCArgs
if err := args.IsValid(); err != nil {
return err
}
// Authenticate using JWT.
token, err := authenticateWeb(args.Username, args.Password)
if err != nil {
return err
}
// Return the token.
reply.AuthToken = token
return nil
}
// SetAuthPeerArgs - Arguments collection for SetAuth RPC call
type SetAuthPeerArgs struct {
// For Auth
AuthRPCArgs
// New credentials that receiving peer should update to.
Creds credential
Creds auth.Credentials
}
// SetAuthPeer - Update to new credentials sent from a peer Minio
@@ -68,12 +50,19 @@ func (br *browserPeerAPIHandlers) SetAuthPeer(args SetAuthPeerArgs, reply *AuthR
return fmt.Errorf("Invalid credential passed")
}
// Acquire lock before updating global configuration.
globalServerConfigMu.Lock()
defer globalServerConfigMu.Unlock()
// Update credentials in memory
serverConfig.SetCredential(args.Creds)
prevCred := globalServerConfig.SetCredential(args.Creds)
// Save credentials to config file
if err := serverConfig.Save(); err != nil {
errorIf(err, "Error updating config file with new credentials sent from browser RPC.")
if err := globalServerConfig.Save(); err != nil {
// Save the current creds when failed to update.
globalServerConfig.SetCredential(prevCred)
errorIf(err, "Unable to update the config with new credentials sent from browser RPC.")
return err
}
@@ -81,7 +70,7 @@ func (br *browserPeerAPIHandlers) SetAuthPeer(args SetAuthPeerArgs, reply *AuthR
}
// Sends SetAuthPeer RPCs to all peers in the Minio cluster
func updateCredsOnPeers(creds credential) map[string]error {
func updateCredsOnPeers(creds auth.Credentials) map[string]error {
// Get list of peer addresses (from globalS3Peers)
peers := []string{}
for _, p := range globalS3Peers {
@@ -92,7 +81,7 @@ func updateCredsOnPeers(creds credential) map[string]error {
errs := make([]error, len(peers))
var wg sync.WaitGroup
serverCred := serverConfig.GetCredential()
serverCred := globalServerConfig.GetCredential()
// Launch go routines to send request to each peer in parallel.
for ix := range peers {
wg.Add(1)


@@ -19,6 +19,8 @@ package cmd
import (
"path"
"testing"
"github.com/minio/minio/pkg/auth"
)
// API suite container common to both FS and XL.
@@ -29,8 +31,8 @@ type TestRPCBrowserPeerSuite struct {
}
// Setting up the test suite and starting the Test server.
func (s *TestRPCBrowserPeerSuite) SetUpSuite(c *testing.T) {
s.testServer = StartTestBrowserPeerRPCServer(c, s.serverType)
func (s *TestRPCBrowserPeerSuite) SetUpSuite(t *testing.T) {
s.testServer = StartTestBrowserPeerRPCServer(t, s.serverType)
s.testAuthConf = authConfig{
serverAddr: s.testServer.Server.Listener.Addr().String(),
accessKey: s.testServer.AccessKey,
@@ -40,10 +42,9 @@ func (s *TestRPCBrowserPeerSuite) SetUpSuite(c *testing.T) {
}
}
// No longer used with gocheck, but used in explicit teardown code in
// each test function. // Called implicitly by "gopkg.in/check.v1"
// after all tests are run.
func (s *TestRPCBrowserPeerSuite) TearDownSuite(c *testing.T) {
// TearDownSuite - called implicitly after all tests are run in
// browser peer rpc suite.
func (s *TestRPCBrowserPeerSuite) TearDownSuite(t *testing.T) {
s.testServer.Stop()
}
@@ -62,18 +63,20 @@ func TestBrowserPeerRPC(t *testing.T) {
// Tests for browser peer rpc.
func (s *TestRPCBrowserPeerSuite) testBrowserPeerRPC(t *testing.T) {
// Construct RPC call arguments.
creds, err := createCredential("abcd1", "abcd1234")
creds, err := auth.CreateCredentials("abcd1", "abcd1234")
if err != nil {
t.Fatalf("unable to create credential. %v", err)
}
// Validate for invalid token.
args := SetAuthPeerArgs{Creds: creds}
args.AuthToken = "garbage"
rclient := newRPCClient(s.testAuthConf.serverAddr, s.testAuthConf.serviceEndpoint, false)
rclient := newAuthRPCClient(s.testAuthConf)
defer rclient.Close()
err = rclient.Call("BrowserPeer.SetAuthPeer", &args, &AuthRPCReply{})
if err != nil {
if err = rclient.Login(); err != nil {
t.Fatal(err)
}
rclient.authToken = "garbage"
if err = rclient.Call("BrowserPeer.SetAuthPeer", &args, &AuthRPCReply{}); err != nil {
if err.Error() != errInvalidToken.Error() {
t.Fatal(err)
}
@@ -89,36 +92,25 @@ func (s *TestRPCBrowserPeerSuite) testBrowserPeerRPC(t *testing.T) {
}
// Validate for failure in login handler with previous credentials.
rclient = newRPCClient(s.testAuthConf.serverAddr, s.testAuthConf.serviceEndpoint, false)
rclient = newAuthRPCClient(s.testAuthConf)
defer rclient.Close()
rargs := &LoginRPCArgs{
Username: creds.AccessKey,
Password: creds.SecretKey,
Version: Version,
RequestTime: UTCNow(),
}
rreply := &LoginRPCReply{}
err = rclient.Call("BrowserPeer"+loginMethodName, rargs, rreply)
token, err := authenticateNode(creds.AccessKey, creds.SecretKey)
if err != nil {
t.Fatal(err)
}
rclient.authToken = token
if err = rclient.Login(); err != nil {
if err.Error() != errInvalidAccessKeyID.Error() {
t.Fatal(err)
}
}
// Validate for success in login handler with valid credentials.
rargs = &LoginRPCArgs{
Username: creds.AccessKey,
Password: creds.SecretKey,
Version: Version,
RequestTime: UTCNow(),
}
rreply = &LoginRPCReply{}
err = rclient.Call("BrowserPeer"+loginMethodName, rargs, rreply)
token, err = authenticateNode(creds.AccessKey, creds.SecretKey)
if err != nil {
t.Fatal(err)
}
// Validate all the replied fields after successful login.
if rreply.AuthToken == "" {
t.Fatalf("Generated token cannot be empty %s", errInvalidToken)
rclient.authToken = token
if err = rclient.Login(); err != nil {
t.Fatal(err)
}
}


@@ -18,6 +18,8 @@ package cmd
import (
router "github.com/gorilla/mux"
"github.com/minio/minio/pkg/errors"
)
// Set up an RPC endpoint that receives browser related calls. The
@@ -35,12 +37,12 @@ type browserPeerAPIHandlers struct {
// Register RPC router
func registerBrowserPeerRPCRouter(mux *router.Router) error {
bpHandlers := &browserPeerAPIHandlers{}
bpHandlers := &browserPeerAPIHandlers{AuthRPCServer{}}
bpRPCServer := newRPCServer()
err := bpRPCServer.RegisterName("BrowserPeer", bpHandlers)
if err != nil {
return traceError(err)
return errors.Trace(err)
}
bpRouter := mux.NewRoute().PathPrefix(minioReservedBucketPath).Subrouter()


@@ -40,13 +40,6 @@ func validateListObjectsArgs(prefix, marker, delimiter string, maxKeys int) APIE
if delimiter != "" && delimiter != "/" {
return ErrNotImplemented
}
// If marker is set, validate the pre-condition.
if marker != "" {
// Marker not common with prefix is not implemented.
if !hasPrefix(marker, prefix) {
return ErrNotImplemented
}
}
// Success.
return ErrNone
}
@@ -69,7 +62,7 @@ func (api objectAPIHandlers) ListObjectsV2Handler(w http.ResponseWriter, r *http
return
}
if s3Error := checkRequestAuthType(r, bucket, "s3:ListBucket", serverConfig.GetRegion()); s3Error != ErrNone {
if s3Error := checkRequestAuthType(r, bucket, "s3:ListBucket", globalServerConfig.GetRegion()); s3Error != ErrNone {
writeErrorResponse(w, s3Error, r.URL)
return
}
@@ -84,23 +77,35 @@ func (api objectAPIHandlers) ListObjectsV2Handler(w http.ResponseWriter, r *http
// Then we need to use 'start-after' as marker instead.
marker = startAfter
}
// Validate the query params before beginning to serve the request.
// fetch-owner is not validated since it is a boolean
if s3Error := validateListObjectsArgs(prefix, marker, delimiter, maxKeys); s3Error != ErrNone {
writeErrorResponse(w, s3Error, r.URL)
return
}
// Initiate a list objects operation based on the input params.
// On success it returns a ListObjectsInfo object to be
// marshalled into an S3-compatible XML response.
listObjectsInfo, err := objectAPI.ListObjects(bucket, prefix, marker, delimiter, maxKeys)
listObjectsV2Info, err := objectAPI.ListObjectsV2(bucket, prefix, marker, delimiter, maxKeys, fetchOwner, startAfter)
if err != nil {
errorIf(err, "Unable to list objects.")
writeErrorResponse(w, toAPIErrorCode(err), r.URL)
return
}
response := generateListObjectsV2Response(bucket, prefix, token, listObjectsInfo.NextMarker, startAfter, delimiter, fetchOwner, listObjectsInfo.IsTruncated, maxKeys, listObjectsInfo.Objects, listObjectsInfo.Prefixes)
for i := range listObjectsV2Info.Objects {
if listObjectsV2Info.Objects[i].IsEncrypted() {
listObjectsV2Info.Objects[i].Size, err = listObjectsV2Info.Objects[i].DecryptedSize()
if err != nil {
writeErrorResponse(w, toAPIErrorCode(err), r.URL)
return
}
}
}
response := generateListObjectsV2Response(bucket, prefix, token, listObjectsV2Info.NextContinuationToken, startAfter,
delimiter, fetchOwner, listObjectsV2Info.IsTruncated, maxKeys, listObjectsV2Info.Objects, listObjectsV2Info.Prefixes)
// Write success response.
writeSuccessResponseXML(w, encodeResponse(response))
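The loop above swaps the stored ciphertext size for the plaintext size before the listing response is marshalled. A runnable sketch of the same normalization pass, using a hypothetical objInfo type and a made-up fixed overhead in place of ObjectInfo.DecryptedSize:

package main

import "fmt"

// objInfo is a stand-in for the server's ObjectInfo; the Encrypted flag
// and the fixed overhead below are illustrative assumptions only.
type objInfo struct {
    Size      int64
    Encrypted bool
}

// decryptedSize subtracts a hypothetical fixed SSE-C overhead; the real
// computation lives in ObjectInfo.DecryptedSize.
func decryptedSize(encSize int64) int64 {
    const perObjectOverhead = 32
    return encSize - perObjectOverhead
}

// normalizeSizes mirrors the listing loop: encrypted entries report
// their plaintext size.
func normalizeSizes(objs []objInfo) {
    for i := range objs {
        if objs[i].Encrypted {
            objs[i].Size = decryptedSize(objs[i].Size)
        }
    }
}

func main() {
    objs := []objInfo{{Size: 1056, Encrypted: true}, {Size: 512}}
    normalizeSizes(objs)
    fmt.Println(objs) // [{1024 true} {512 false}]
}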
@@ -122,7 +127,7 @@ func (api objectAPIHandlers) ListObjectsV1Handler(w http.ResponseWriter, r *http
return
}
if s3Error := checkRequestAuthType(r, bucket, "s3:ListBucket", serverConfig.GetRegion()); s3Error != ErrNone {
if s3Error := checkRequestAuthType(r, bucket, "s3:ListBucket", globalServerConfig.GetRegion()); s3Error != ErrNone {
writeErrorResponse(w, s3Error, r.URL)
return
}
@@ -130,7 +135,12 @@ func (api objectAPIHandlers) ListObjectsV1Handler(w http.ResponseWriter, r *http
// Extract all the listObjectsV1 query params to their native values.
prefix, marker, delimiter, maxKeys, _ := getListObjectsV1Args(r.URL.Query())
// Validate all the query params before beginning to serve the request.
// Validate the maxKeys lower bound. When maxKeys > 1000, S3 returns 1000 but
// does not throw an error.
if maxKeys < 0 {
writeErrorResponse(w, ErrInvalidMaxKeys, r.URL)
return
}
// Validate all the query params before beginning to serve the request.
if s3Error := validateListObjectsArgs(prefix, marker, delimiter, maxKeys); s3Error != ErrNone {
writeErrorResponse(w, s3Error, r.URL)
return
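The max-keys comment above captures S3's semantics: negative values are rejected with ErrInvalidMaxKeys, while values beyond 1000 are silently capped rather than erroring. A minimal sketch of that rule (clampMaxKeys is illustrative, not a server helper):

package main

import (
    "errors"
    "fmt"
)

// clampMaxKeys sketches the rule described above: reject negative
// values, silently cap anything beyond 1000.
func clampMaxKeys(maxKeys int) (int, error) {
    switch {
    case maxKeys < 0:
        return 0, errors.New("invalid max-keys")
    case maxKeys > 1000:
        return 1000, nil
    default:
        return maxKeys, nil
    }
}

func main() {
    fmt.Println(clampMaxKeys(5000)) // 1000 <nil>
    fmt.Println(clampMaxKeys(-1))   // 0 invalid max-keys
}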
@@ -141,10 +151,20 @@ func (api objectAPIHandlers) ListObjectsV1Handler(w http.ResponseWriter, r *http
// marshalled into an S3-compatible XML response.
listObjectsInfo, err := objectAPI.ListObjects(bucket, prefix, marker, delimiter, maxKeys)
if err != nil {
errorIf(err, "Unable to list objects.")
writeErrorResponse(w, toAPIErrorCode(err), r.URL)
return
}
for i := range listObjectsInfo.Objects {
if listObjectsInfo.Objects[i].IsEncrypted() {
listObjectsInfo.Objects[i].Size, err = listObjectsInfo.Objects[i].DecryptedSize()
if err != nil {
writeErrorResponse(w, toAPIErrorCode(err), r.URL)
return
}
}
}
response := generateListObjectsV1Response(bucket, prefix, marker, delimiter, maxKeys, listObjectsInfo)
// Write success response.


@@ -24,19 +24,25 @@ import (
"net/http"
"net/url"
"path"
"path/filepath"
"reflect"
"strings"
"sync"
mux "github.com/gorilla/mux"
"github.com/minio/minio-go/pkg/policy"
"github.com/minio/minio-go/pkg/set"
"github.com/minio/minio/pkg/errors"
"github.com/minio/minio/pkg/hash"
)
// http://docs.aws.amazon.com/AmazonS3/latest/dev/using-with-s3-actions.html
// Enforces bucket policies for a bucket for a given action.
func enforceBucketPolicy(bucket, action, resource, referer string, queryParams url.Values) (s3Error APIErrorCode) {
func enforceBucketPolicy(bucket, action, resource, referer, sourceIP string, queryParams url.Values) (s3Error APIErrorCode) {
// Verify if bucket actually exists
if err := checkBucketExist(bucket, newObjectLayerFn()); err != nil {
err = errorCause(err)
objAPI := newObjectLayerFn()
if err := checkBucketExist(bucket, objAPI); err != nil {
err = errors.Cause(err)
switch err.(type) {
case BucketNameInvalid:
// Return error for invalid bucket name.
@@ -50,13 +56,12 @@ func enforceBucketPolicy(bucket, action, resource, referer string, queryParams u
return ErrInternalError
}
if globalBucketPolicies == nil {
// Fetch bucket policy, if policy is not set return access denied.
p, err := objAPI.GetBucketPolicy(bucket)
if err != nil {
return ErrAccessDenied
}
// Fetch bucket policy, if policy is not set return access denied.
policy := globalBucketPolicies.GetBucketPolicy(bucket)
if policy == nil {
if reflect.DeepEqual(p, emptyBucketPolicy) {
return ErrAccessDenied
}
@@ -64,7 +69,7 @@ func enforceBucketPolicy(bucket, action, resource, referer string, queryParams u
arn := bucketARNPrefix + strings.TrimSuffix(strings.TrimPrefix(resource, "/"), "/")
// Get conditions for policy verification.
conditionKeyMap := make(map[string]set.StringSet)
conditionKeyMap := make(policy.ConditionKeyMap)
for queryParam := range queryParams {
conditionKeyMap[queryParam] = set.CreateStringSet(queryParams.Get(queryParam))
}
@@ -73,28 +78,30 @@ func enforceBucketPolicy(bucket, action, resource, referer string, queryParams u
if referer != "" {
conditionKeyMap["referer"] = set.CreateStringSet(referer)
}
// Add the request source IP to conditionKeyMap.
conditionKeyMap["ip"] = set.CreateStringSet(sourceIP)
// Validate action, resource and conditions with current policy statements.
if !bucketPolicyEvalStatements(action, arn, conditionKeyMap, policy.Statements) {
if !bucketPolicyEvalStatements(action, arn, conditionKeyMap, p.Statements) {
return ErrAccessDenied
}
return ErrNone
}
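With the request source IP now added to conditionKeyMap, enforceBucketPolicy can evaluate IP-based statements. An illustrative bucket policy of the shape this code handles, following the AWS example the diff references (values are sample data only):

const samplePolicy = `{
  "Version": "2012-10-17",
  "Statement": [{
    "Effect": "Allow",
    "Principal": {"AWS": ["*"]},
    "Action": ["s3:GetObject"],
    "Resource": ["arn:aws:s3:::mybucket/*"],
    "Condition": {
      "IpAddress": {"aws:SourceIp": "54.240.143.0/24"}
    }
  }]
}`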
// Check if the action is allowed on the bucket/prefix.
func isBucketActionAllowed(action, bucket, prefix string) bool {
if globalBucketPolicies == nil {
func isBucketActionAllowed(action, bucket, prefix string, objectAPI ObjectLayer) bool {
bp, err := objectAPI.GetBucketPolicy(bucket)
if err != nil {
return false
}
policy := globalBucketPolicies.GetBucketPolicy(bucket)
if policy == nil {
if reflect.DeepEqual(bp, emptyBucketPolicy) {
return false
}
resource := bucketARNPrefix + path.Join(bucket, prefix)
var conditionKeyMap map[string]set.StringSet
// Validate action, resource and conditions with current policy statements.
return bucketPolicyEvalStatements(action, resource, conditionKeyMap, policy.Statements)
return bucketPolicyEvalStatements(action, resource, conditionKeyMap, bp.Statements)
}
// GetBucketLocationHandler - GET Bucket location.
@@ -113,15 +120,21 @@ func (api objectAPIHandlers) GetBucketLocationHandler(w http.ResponseWriter, r *
s3Error := checkRequestAuthType(r, bucket, "s3:GetBucketLocation", globalMinioDefaultRegion)
if s3Error == ErrInvalidRegion {
// Clients like boto3 send getBucketLocation() call signed with region that is configured.
s3Error = checkRequestAuthType(r, "", "s3:GetBucketLocation", serverConfig.GetRegion())
s3Error = checkRequestAuthType(r, "", "s3:GetBucketLocation", globalServerConfig.GetRegion())
}
if s3Error != ErrNone {
writeErrorResponse(w, s3Error, r.URL)
return
}
bucketLock := globalNSMutex.NewNSLock(bucket, "")
if err := bucketLock.GetRLock(globalObjectTimeout); err != nil {
writeErrorResponse(w, toAPIErrorCode(err), r.URL)
return
}
defer bucketLock.RUnlock()
if _, err := objectAPI.GetBucketInfo(bucket); err != nil {
errorIf(err, "Unable to fetch bucket info.")
writeErrorResponse(w, toAPIErrorCode(err), r.URL)
return
}
@@ -129,7 +142,7 @@ func (api objectAPIHandlers) GetBucketLocationHandler(w http.ResponseWriter, r *
// Generate response.
encodedSuccessResponse := encodeResponse(LocationResponse{})
// Get current region.
region := serverConfig.GetRegion()
region := globalServerConfig.GetRegion()
if region != globalMinioDefaultRegion {
encodedSuccessResponse = encodeResponse(LocationResponse{
Location: region,
@@ -158,7 +171,7 @@ func (api objectAPIHandlers) ListMultipartUploadsHandler(w http.ResponseWriter,
return
}
if s3Error := checkRequestAuthType(r, bucket, "s3:ListBucketMultipartUploads", serverConfig.GetRegion()); s3Error != ErrNone {
if s3Error := checkRequestAuthType(r, bucket, "s3:ListBucketMultipartUploads", globalServerConfig.GetRegion()); s3Error != ErrNone {
writeErrorResponse(w, s3Error, r.URL)
return
}
@@ -178,7 +191,6 @@ func (api objectAPIHandlers) ListMultipartUploadsHandler(w http.ResponseWriter,
listMultipartsInfo, err := objectAPI.ListMultipartUploads(bucket, prefix, keyMarker, uploadIDMarker, delimiter, maxUploads)
if err != nil {
errorIf(err, "Unable to list multipart uploads.")
writeErrorResponse(w, toAPIErrorCode(err), r.URL)
return
}
@@ -205,7 +217,7 @@ func (api objectAPIHandlers) ListBucketsHandler(w http.ResponseWriter, r *http.R
s3Error := checkRequestAuthType(r, "", "", globalMinioDefaultRegion)
if s3Error == ErrInvalidRegion {
// Clients like boto3 send listBuckets() call signed with region that is configured.
s3Error = checkRequestAuthType(r, "", "", serverConfig.GetRegion())
s3Error = checkRequestAuthType(r, "", "", globalServerConfig.GetRegion())
}
if s3Error != ErrNone {
writeErrorResponse(w, s3Error, r.URL)
@@ -214,7 +226,6 @@ func (api objectAPIHandlers) ListBucketsHandler(w http.ResponseWriter, r *http.R
// Invoke the list buckets.
bucketsInfo, err := objectAPI.ListBuckets()
if err != nil {
errorIf(err, "Unable to list buckets.")
writeErrorResponse(w, toAPIErrorCode(err), r.URL)
return
}
@@ -238,9 +249,14 @@ func (api objectAPIHandlers) DeleteMultipleObjectsHandler(w http.ResponseWriter,
return
}
if s3Error := checkRequestAuthType(r, bucket, "s3:DeleteObject", serverConfig.GetRegion()); s3Error != ErrNone {
writeErrorResponse(w, s3Error, r.URL)
return
var authError APIErrorCode
if authError = checkRequestAuthType(r, bucket, "s3:DeleteObject", globalServerConfig.GetRegion()); authError != ErrNone {
// In the event access is denied, a 200 response should still be returned
// http://docs.aws.amazon.com/AmazonS3/latest/API/multiobjectdeleteapi.html
if authError != ErrAccessDenied {
writeErrorResponse(w, authError, r.URL)
return
}
}
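As the comments above note, the S3 multi-object delete API answers a denied request with HTTP 200 and reports the failure per key in the body. An illustrative response document (sample key and values):

const sampleDeleteResult = `<?xml version="1.0" encoding="UTF-8"?>
<DeleteResult xmlns="http://s3.amazonaws.com/doc/2006-03-01/">
  <Error>
    <Key>photo.jpg</Key>
    <Code>AccessDenied</Code>
    <Message>Access Denied</Message>
  </Error>
</DeleteResult>`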
// Content-Length is required and should be non-zero
@@ -282,11 +298,16 @@ func (api objectAPIHandlers) DeleteMultipleObjectsHandler(w http.ResponseWriter,
for index, object := range deleteObjects.Objects {
wg.Add(1)
go func(i int, obj ObjectIdentifier) {
objectLock := globalNSMutex.NewNSLock(bucket, obj.ObjectName)
objectLock.Lock()
defer objectLock.Unlock()
defer wg.Done()
// If the request is denied access, each item
// should be marked as 'AccessDenied'
if authError == ErrAccessDenied {
dErrs[i] = PrefixAccessDenied{
Bucket: bucket,
Object: obj.ObjectName,
}
return
}
dErr := objectAPI.DeleteObject(bucket, obj.ObjectName)
if dErr != nil {
dErrs[i] = dErr
@@ -305,13 +326,12 @@ func (api objectAPIHandlers) DeleteMultipleObjectsHandler(w http.ResponseWriter,
deletedObjects = append(deletedObjects, object)
continue
}
if _, ok := errorCause(err).(ObjectNotFound); ok {
if _, ok := errors.Cause(err).(ObjectNotFound); ok {
// If the object is not found it should be
// accounted as deleted as per S3 spec.
deletedObjects = append(deletedObjects, object)
continue
}
errorIf(err, "Unable to delete object. %s", object.ObjectName)
// Error during delete should be collected separately.
deleteErrors = append(deleteErrors, DeleteError{
Code: errorCodeResponse[toAPIErrorCode(err)].Code,
@@ -361,11 +381,7 @@ func (api objectAPIHandlers) PutBucketHandler(w http.ResponseWriter, r *http.Req
}
// PutBucket does not have any bucket action.
s3Error := checkRequestAuthType(r, "", "", globalMinioDefaultRegion)
if s3Error == ErrInvalidRegion {
// Clients like boto3 send putBucket() call signed with region that is configured.
s3Error = checkRequestAuthType(r, "", "", serverConfig.GetRegion())
}
s3Error := checkRequestAuthType(r, "", "", globalServerConfig.GetRegion())
if s3Error != ErrNone {
writeErrorResponse(w, s3Error, r.URL)
return
@@ -389,13 +405,15 @@ func (api objectAPIHandlers) PutBucketHandler(w http.ResponseWriter, r *http.Req
}
bucketLock := globalNSMutex.NewNSLock(bucket, "")
bucketLock.Lock()
if err := bucketLock.GetLock(globalObjectTimeout); err != nil {
writeErrorResponse(w, toAPIErrorCode(err), r.URL)
return
}
defer bucketLock.Unlock()
// Proceed to creating a bucket.
err := objectAPI.MakeBucketWithLocation(bucket, "")
err := objectAPI.MakeBucketWithLocation(bucket, location)
if err != nil {
errorIf(err, "Unable to create a bucket.")
writeErrorResponse(w, toAPIErrorCode(err), r.URL)
return
}
@@ -417,12 +435,24 @@ func (api objectAPIHandlers) PostPolicyBucketHandler(w http.ResponseWriter, r *h
return
}
bucket := mux.Vars(r)["bucket"]
// Require Content-Length to be set in the request
size := r.ContentLength
if size < 0 {
writeErrorResponse(w, ErrMissingContentLength, r.URL)
return
}
resource, err := getResource(r.URL.Path, r.Host, globalDomainName)
if err != nil {
writeErrorResponse(w, ErrInvalidRequest, r.URL)
return
}
// Make sure that the URL does not contain an object name.
if bucket != filepath.Clean(resource[1:]) {
writeErrorResponse(w, ErrMethodNotAllowed, r.URL)
return
}
// Here the parameter is the size of the form data that should
// be loaded in memory, the remaining being put in temporary files.
@@ -461,7 +491,6 @@ func (api objectAPIHandlers) PostPolicyBucketHandler(w http.ResponseWriter, r *h
// Close multipart file
defer fileBody.Close()
bucket := mux.Vars(r)["bucket"]
formValues.Set("Bucket", bucket)
if fileName != "" && strings.Contains(formValues.Get("Key"), "${filename}") {
@@ -512,13 +541,11 @@ func (api objectAPIHandlers) PostPolicyBucketHandler(w http.ResponseWriter, r *h
lengthRange := postPolicyForm.Conditions.ContentLengthRange
if lengthRange.Valid {
if fileSize < lengthRange.Min {
errorIf(err, "Unable to create object.")
writeErrorResponse(w, toAPIErrorCode(errDataTooSmall), r.URL)
return
}
if fileSize > lengthRange.Max || isMaxObjectSize(fileSize) {
errorIf(err, "Unable to create object.")
writeErrorResponse(w, toAPIErrorCode(errDataTooLarge), r.URL)
return
}
@@ -531,21 +558,46 @@ func (api objectAPIHandlers) PostPolicyBucketHandler(w http.ResponseWriter, r *h
writeErrorResponse(w, ErrInternalError, r.URL)
return
}
sha256sum := ""
objectLock := globalNSMutex.NewNSLock(bucket, object)
objectLock.Lock()
defer objectLock.Unlock()
objInfo, err := objectAPI.PutObject(bucket, object, fileSize, fileBody, metadata, sha256sum)
hashReader, err := hash.NewReader(fileBody, fileSize, "", "")
if err != nil {
errorIf(err, "Unable to create object.")
errorIf(err, "Unable to initialize hashReader.")
writeErrorResponse(w, toAPIErrorCode(err), r.URL)
return
}
if objectAPI.IsEncryptionSupported() {
if hasSSECustomerHeader(formValues) && !hasSuffix(object, slashSeparator) { // handle SSE-C requests
var reader io.Reader
var key []byte
key, err = ParseSSECustomerHeader(formValues)
if err != nil {
writeErrorResponse(w, toAPIErrorCode(err), r.URL)
return
}
reader, err = newEncryptReader(hashReader, key, metadata)
if err != nil {
writeErrorResponse(w, toAPIErrorCode(err), r.URL)
return
}
info := ObjectInfo{Size: fileSize}
hashReader, err = hash.NewReader(reader, info.EncryptedSize(), "", "") // do not try to verify encrypted content
if err != nil {
writeErrorResponse(w, toAPIErrorCode(err), r.URL)
return
}
}
}
objInfo, err := objectAPI.PutObject(bucket, object, hashReader, metadata)
if err != nil {
writeErrorResponse(w, toAPIErrorCode(err), r.URL)
return
}
location := getObjectLocation(r, globalDomainName, bucket, object)
w.Header().Set("ETag", `"`+objInfo.ETag+`"`)
w.Header().Set("Location", getObjectLocation(bucket, object))
w.Header().Set("Location", location)
// Get host and port from Request.RemoteAddr.
host, port, err := net.SplitHostPort(r.RemoteAddr)
@@ -578,7 +630,7 @@ func (api objectAPIHandlers) PostPolicyBucketHandler(w http.ResponseWriter, r *h
Bucket: objInfo.Bucket,
Key: objInfo.Name,
ETag: `"` + objInfo.ETag + `"`,
Location: getObjectLocation(objInfo.Bucket, objInfo.Name),
Location: location,
})
writeResponse(w, http.StatusCreated, resp, "application/xml")
case "200":
@@ -604,17 +656,12 @@ func (api objectAPIHandlers) HeadBucketHandler(w http.ResponseWriter, r *http.Re
return
}
if s3Error := checkRequestAuthType(r, bucket, "s3:ListBucket", serverConfig.GetRegion()); s3Error != ErrNone {
if s3Error := checkRequestAuthType(r, bucket, "s3:ListBucket", globalServerConfig.GetRegion()); s3Error != ErrNone {
writeErrorResponseHeadersOnly(w, s3Error)
return
}
bucketLock := globalNSMutex.NewNSLock(bucket, "")
bucketLock.RLock()
defer bucketLock.RUnlock()
if _, err := objectAPI.GetBucketInfo(bucket); err != nil {
errorIf(err, "Unable to fetch bucket info.")
writeErrorResponseHeadersOnly(w, toAPIErrorCode(err))
return
}
@@ -631,7 +678,7 @@ func (api objectAPIHandlers) DeleteBucketHandler(w http.ResponseWriter, r *http.
}
// DeleteBucket does not have any bucket action.
if s3Error := checkRequestAuthType(r, "", "", serverConfig.GetRegion()); s3Error != ErrNone {
if s3Error := checkRequestAuthType(r, "", "", globalServerConfig.GetRegion()); s3Error != ErrNone {
writeErrorResponse(w, s3Error, r.URL)
return
}
@@ -639,35 +686,12 @@ func (api objectAPIHandlers) DeleteBucketHandler(w http.ResponseWriter, r *http.
vars := mux.Vars(r)
bucket := vars["bucket"]
bucketLock := globalNSMutex.NewNSLock(bucket, "")
bucketLock.Lock()
defer bucketLock.Unlock()
// Attempt to delete bucket.
if err := objectAPI.DeleteBucket(bucket); err != nil {
errorIf(err, "Unable to delete a bucket.")
writeErrorResponse(w, toAPIErrorCode(err), r.URL)
return
}
// Delete bucket access policy, if present - ignore any errors.
_ = removeBucketPolicy(bucket, objectAPI)
// Notify all peers (including self) to update in-memory state
S3PeersUpdateBucketPolicy(bucket, policyChange{true, nil})
// Delete notification config, if present - ignore any errors.
_ = removeNotificationConfig(bucket, objectAPI)
// Notify all peers (including self) to update in-memory state
S3PeersUpdateBucketNotification(bucket, nil)
// Delete listener config, if present - ignore any errors.
_ = removeListenerConfig(bucket, objectAPI)
// Notify all peers (including self) to update in-memory state
S3PeersUpdateBucketListener(bucket, []listenerConfig{})
// Write success response.
writeSuccessNoContent(w)
}


@@ -24,6 +24,8 @@ import (
"net/http/httptest"
"strconv"
"testing"
"github.com/minio/minio/pkg/auth"
)
// Wrapper for calling GetBucketLocation HTTP handler tests for both XL multiple disks and single node setup.
@@ -32,8 +34,7 @@ func TestGetBucketLocationHandler(t *testing.T) {
}
func testGetBucketLocationHandler(obj ObjectLayer, instanceType, bucketName string, apiRouter http.Handler,
credentials credential, t *testing.T) {
initBucketPolicies(obj)
credentials auth.Credentials, t *testing.T) {
// test cases with sample input and expected output.
testCases := []struct {
@@ -153,7 +154,7 @@ func testGetBucketLocationHandler(obj ObjectLayer, instanceType, bucketName stri
// ExecObjectLayerAPIAnonTest - Calls the HTTP API handler using the anonymous request, validates the ErrAccessDeniedResponse,
// sets the bucket policy using the policy statement generated from `getReadOnlyBucketStatement` so that the
// unsigned request goes through and is validated again.
ExecObjectLayerAPIAnonTest(t, "TestGetBucketLocationHandler", bucketName, "", instanceType, apiRouter, anonReq, getReadOnlyBucketStatement)
ExecObjectLayerAPIAnonTest(t, obj, "TestGetBucketLocationHandler", bucketName, "", instanceType, apiRouter, anonReq, getReadOnlyBucketStatement)
// HTTP request for testing when `objectLayer` is set to `nil`.
// There is no need to use an existing bucket and valid input for creating the request
@@ -177,8 +178,7 @@ func TestHeadBucketHandler(t *testing.T) {
}
func testHeadBucketHandler(obj ObjectLayer, instanceType, bucketName string, apiRouter http.Handler,
credentials credential, t *testing.T) {
initBucketPolicies(obj)
credentials auth.Credentials, t *testing.T) {
// test cases with sample input and expected output.
testCases := []struct {
@@ -259,7 +259,7 @@ func testHeadBucketHandler(obj ObjectLayer, instanceType, bucketName string, api
// ExecObjectLayerAPIAnonTest - Calls the HTTP API handler using the anonymous request, validates the ErrAccessDeniedResponse,
// sets the bucket policy using the policy statement generated from `getReadOnlyBucketStatement` so that the
// unsigned request goes through and is validated again.
ExecObjectLayerAPIAnonTest(t, "TestHeadBucketHandler", bucketName, "", instanceType, apiRouter, anonReq, getReadOnlyBucketStatement)
ExecObjectLayerAPIAnonTest(t, obj, "TestHeadBucketHandler", bucketName, "", instanceType, apiRouter, anonReq, getReadOnlyBucketStatement)
// HTTP request for testing when `objectLayer` is set to `nil`.
// There is no need to use an existing bucket and valid input for creating the request
@@ -284,8 +284,7 @@ func TestListMultipartUploadsHandler(t *testing.T) {
// testListMultipartUploadsHandler - Tests validate listing of multipart uploads.
func testListMultipartUploadsHandler(obj ObjectLayer, instanceType, bucketName string, apiRouter http.Handler,
credentials credential, t *testing.T) {
initBucketPolicies(obj)
credentials auth.Credentials, t *testing.T) {
// Collection of non-exhaustive ListMultipartUploads test cases, valid errors
// and success responses.
@@ -455,7 +454,6 @@ func testListMultipartUploadsHandler(obj ObjectLayer, instanceType, bucketName s
// verify response for V2 signed HTTP request.
reqV2, err := newTestSignedRequestV2("GET", u, 0, nil, testCase.accessKey, testCase.secretKey)
if err != nil {
t.Fatalf("Test %d: %s: Failed to create HTTP request for PutBucketPolicyHandler: <ERROR> %v", i+1, instanceType, err)
}
@@ -495,7 +493,7 @@ func testListMultipartUploadsHandler(obj ObjectLayer, instanceType, bucketName s
// ExecObjectLayerAPIAnonTest - Calls the HTTP API handler using the anonymous request, validates the ErrAccessDeniedResponse,
// sets the bucket policy using the policy statement generated from `getWriteOnlyBucketStatement` so that the
// unsigned request goes through and is validated again.
ExecObjectLayerAPIAnonTest(t, "TestListMultipartUploadsHandler", bucketName, "", instanceType, apiRouter, anonReq, getWriteOnlyBucketStatement)
ExecObjectLayerAPIAnonTest(t, obj, "TestListMultipartUploadsHandler", bucketName, "", instanceType, apiRouter, anonReq, getWriteOnlyBucketStatement)
// HTTP request for testing when `objectLayer` is set to `nil`.
// There is no need to use an existing bucket and valid input for creating the request
@@ -523,7 +521,7 @@ func TestListBucketsHandler(t *testing.T) {
// testListBucketsHandler - Tests validate listing of buckets.
func testListBucketsHandler(obj ObjectLayer, instanceType, bucketName string, apiRouter http.Handler,
credentials credential, t *testing.T) {
credentials auth.Credentials, t *testing.T) {
testCases := []struct {
bucketName string
@@ -593,7 +591,7 @@ func testListBucketsHandler(obj ObjectLayer, instanceType, bucketName string, ap
// ExecObjectLayerAPIAnonTest - Calls the HTTP API handler using the anonymous request, validates the ErrAccessDeniedResponse,
// sets the bucket policy using the policy statement generated from `getWriteOnlyObjectStatement` so that the
// unsigned request goes through and is validated again.
ExecObjectLayerAPIAnonTest(t, "ListBucketsHandler", "", "", instanceType, apiRouter, anonReq, getWriteOnlyObjectStatement)
ExecObjectLayerAPIAnonTest(t, obj, "ListBucketsHandler", "", "", instanceType, apiRouter, anonReq, getWriteOnlyObjectStatement)
// HTTP request for testing when `objectLayer` is set to `nil`.
// There is no need to use an existing bucket and valid input for creating the request
@@ -616,8 +614,7 @@ func TestAPIDeleteMultipleObjectsHandler(t *testing.T) {
}
func testAPIDeleteMultipleObjectsHandler(obj ObjectLayer, instanceType, bucketName string, apiRouter http.Handler,
credentials credential, t *testing.T) {
initBucketPolicies(obj)
credentials auth.Credentials, t *testing.T) {
var err error
// register event notifier.
@@ -632,8 +629,7 @@ func testAPIDeleteMultipleObjectsHandler(obj ObjectLayer, instanceType, bucketNa
for i := 0; i < 10; i++ {
objectName := "test-object-" + strconv.Itoa(i)
// uploading the object.
_, err = obj.PutObject(bucketName, objectName, int64(len(contentBytes)), bytes.NewBuffer(contentBytes),
make(map[string]string), sha256sum)
_, err = obj.PutObject(bucketName, objectName, mustGetHashReader(t, bytes.NewBuffer(contentBytes), int64(len(contentBytes)), "", sha256sum), nil)
// if object upload fails stop the test.
if err != nil {
t.Fatalf("Put Object %d: Error uploading object: <ERROR> %v", i, err)
@@ -650,6 +646,17 @@ func testAPIDeleteMultipleObjectsHandler(obj ObjectLayer, instanceType, bucketNa
return objectIdentifierList
}
getDeleteErrorList := func(objects []ObjectIdentifier) (deleteErrorList []DeleteError) {
for _, obj := range objects {
deleteErrorList = append(deleteErrorList, DeleteError{
Code: errorCodeResponse[ErrAccessDenied].Code,
Message: errorCodeResponse[ErrAccessDenied].Description,
Key: obj.ObjectName,
})
}
return deleteErrorList
}
requestList := []DeleteObjectsRequest{
{Quiet: false, Objects: getObjectIdentifierList(objectNames[:5])},
@@ -670,6 +677,10 @@ func testAPIDeleteMultipleObjectsHandler(obj ObjectLayer, instanceType, bucketNa
errorResponse := generateMultiDeleteResponse(requestList[1].Quiet, requestList[1].Objects, nil)
encodedErrorResponse := encodeResponse(errorResponse)
anonRequest := encodeResponse(requestList[0])
anonResponse := generateMultiDeleteResponse(requestList[0].Quiet, nil, getDeleteErrorList(requestList[0].Objects))
encodedAnonResponse := encodeResponse(anonResponse)
testCases := []struct {
bucket string
objects []byte
@@ -718,15 +729,32 @@ func testAPIDeleteMultipleObjectsHandler(obj ObjectLayer, instanceType, bucketNa
expectedContent: encodedErrorResponse,
expectedRespStatus: http.StatusOK,
},
// Test case - 5.
// Anonymous user access denied response
// Currently anonymous users cannot delete multiple objects in Minio server
{
bucket: bucketName,
objects: anonRequest,
accessKey: "",
secretKey: "",
expectedContent: encodedAnonResponse,
expectedRespStatus: http.StatusOK,
},
}
for i, testCase := range testCases {
var req *http.Request
var actualContent []byte
// Indicating that all parts are uploaded and initiating completeMultipartUpload.
req, err = newTestSignedRequestV4("POST", getDeleteMultipleObjectsURL("", bucketName),
int64(len(testCase.objects)), bytes.NewReader(testCase.objects), testCase.accessKey, testCase.secretKey)
// Generate a signed or anonymous request based on the testCase
if testCase.accessKey != "" {
req, err = newTestSignedRequestV4("POST", getDeleteMultipleObjectsURL("", bucketName),
int64(len(testCase.objects)), bytes.NewReader(testCase.objects), testCase.accessKey, testCase.secretKey)
} else {
req, err = newTestRequest("POST", getDeleteMultipleObjectsURL("", bucketName),
int64(len(testCase.objects)), bytes.NewReader(testCase.objects))
}
if err != nil {
t.Fatalf("Failed to create HTTP request for DeleteMultipleObjects: <ERROR> %v", err)
}
@@ -753,8 +781,6 @@ func testAPIDeleteMultipleObjectsHandler(obj ObjectLayer, instanceType, bucketNa
}
}
// Currently anonymous user cannot delete multiple objects in Minio server, hence no test case is required.
// HTTP request to test the case of `objectLayer` being set to `nil`.
// There is no need to use an existing bucket or valid input for creating the request,
// since the `objectLayer==nil` check is performed before any other checks inside the handlers.
@@ -777,7 +803,7 @@ func TestIsBucketActionAllowed(t *testing.T) {
}
func testIsBucketActionAllowedHandler(obj ObjectLayer, instanceType, bucketName string, apiRouter http.Handler,
credentials credential, t *testing.T) {
credentials auth.Credentials, t *testing.T) {
testCases := []struct {
// input.
@@ -792,12 +818,7 @@ func testIsBucketActionAllowedHandler(obj ObjectLayer, instanceType, bucketName
{"s3:ListObject", "mybucket", "abc", false, false},
}
for i, testCase := range testCases {
if testCase.isGlobalPoliciesNil {
globalBucketPolicies = nil
} else {
initBucketPolicies(obj)
}
isAllowed := isBucketActionAllowed(testCase.action, testCase.bucket, testCase.prefix)
isAllowed := isBucketActionAllowed(testCase.action, testCase.bucket, testCase.prefix, obj)
if isAllowed != testCase.shouldPass {
t.Errorf("Case %d: Expected the response status to be `%t`, but instead found `%t`", i+1, testCase.shouldPass, isAllowed)
}


@@ -16,8 +16,6 @@
package cmd
import "encoding/json"
// BucketMetaState - Interface to update bucket metadata in-memory
// state.
type BucketMetaState interface {
@@ -79,13 +77,7 @@ func (lc *localBucketMetaState) UpdateBucketPolicy(args *SetBucketPolicyPeerArgs
if objAPI == nil {
return errServerNotInitialized
}
var pCh policyChange
if err := json.Unmarshal(args.PChBytes, &pCh); err != nil {
return err
}
return globalBucketPolicies.SetBucketPolicy(args.Bucket, pCh)
return objAPI.RefreshBucketPolicy(args.Bucket)
}
// localBucketMetaState.SendEvent - sends event to local event notifier via


@@ -67,6 +67,7 @@ type notificationConfig struct {
XMLName xml.Name `xml:"NotificationConfiguration"`
QueueConfigs []queueConfig `xml:"QueueConfiguration"`
LambdaConfigs []lambdaConfig `xml:"CloudFunctionConfiguration"`
TopicConfigs []topicConfig `xml:"TopicConfiguration"`
}
// listenerConfig structure represents run-time notification
@@ -136,13 +137,13 @@ type bucketMeta struct {
// Notification event object metadata.
type objectMeta struct {
Key string `json:"key"`
Size int64 `json:"size,omitempty"`
ETag string `json:"eTag,omitempty"`
ContentType string `json:"contentType,omitempty"`
UserDefined map[string]string `json:"userDefined,omitempty"`
VersionID string `json:"versionId,omitempty"`
Sequencer string `json:"sequencer"`
Key string `json:"key"`
Size int64 `json:"size,omitempty"`
ETag string `json:"eTag,omitempty"`
ContentType string `json:"contentType,omitempty"`
UserMetadata map[string]string `json:"userMetadata,omitempty"`
VersionID string `json:"versionId,omitempty"`
Sequencer string `json:"sequencer"`
}
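For reference, the renamed userMetadata field would appear in an emitted event record roughly as follows (values are illustrative):

const sampleObjectMeta = `{
  "key": "photo.jpg",
  "size": 1024,
  "eTag": "d41d8cd98f00b204e9800998ecf8427e",
  "contentType": "image/jpeg",
  "userMetadata": {"X-Amz-Meta-Color": "blue"},
  "sequencer": "147DAFE8A4566068"
}`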
const (
@@ -208,5 +209,5 @@ type arnSQS struct {
// Stringer for constructing AWS ARN compatible string.
func (m arnSQS) String() string {
return minioSqs + serverConfig.GetRegion() + ":" + m.AccountID + ":" + m.Type
return minioSqs + globalServerConfig.GetRegion() + ":" + m.AccountID + ":" + m.Type
}


@@ -22,10 +22,13 @@ import (
"encoding/xml"
"fmt"
"io"
"net"
"net/http"
"syscall"
"time"
"github.com/gorilla/mux"
"github.com/minio/minio/pkg/errors"
)
const (
@@ -40,13 +43,18 @@ const (
// not enabled on the bucket, the operation returns an empty
// NotificationConfiguration element.
func (api objectAPIHandlers) GetBucketNotificationHandler(w http.ResponseWriter, r *http.Request) {
objAPI := api.ObjectAPI()
if objAPI == nil {
writeErrorResponse(w, ErrServerNotInitialized, r.URL)
return
}
if s3Error := checkRequestAuthType(r, "", "", serverConfig.GetRegion()); s3Error != ErrNone {
if !objAPI.IsNotificationSupported() {
writeErrorResponse(w, ErrNotImplemented, r.URL)
return
}
if s3Error := checkRequestAuthType(r, "", "", globalServerConfig.GetRegion()); s3Error != ErrNone {
writeErrorResponse(w, s3Error, r.URL)
return
}
@@ -56,20 +64,19 @@ func (api objectAPIHandlers) GetBucketNotificationHandler(w http.ResponseWriter,
_, err := objAPI.GetBucketInfo(bucket)
if err != nil {
errorIf(err, "Unable to find bucket info.")
writeErrorResponse(w, toAPIErrorCode(err), r.URL)
return
}
// Attempt to load the notification config.
nConfig, err := loadNotificationConfig(bucket, objAPI)
if err != nil && err != errNoSuchNotifications {
if err != nil && errors.Cause(err) != errNoSuchNotifications {
errorIf(err, "Unable to read notification configuration.")
writeErrorResponse(w, toAPIErrorCode(err), r.URL)
return
}
// For no notifications we write a dummy XML.
if err == errNoSuchNotifications {
if errors.Cause(err) == errNoSuchNotifications {
// Complies with the s3 behavior in this regard.
nConfig = &notificationConfig{}
}
@@ -94,13 +101,18 @@ func (api objectAPIHandlers) GetBucketNotificationHandler(w http.ResponseWriter,
// By default, your bucket has no event notifications configured. That is,
// the notification configuration will be an empty NotificationConfiguration.
func (api objectAPIHandlers) PutBucketNotificationHandler(w http.ResponseWriter, r *http.Request) {
objectAPI := api.ObjectAPI()
if objectAPI == nil {
writeErrorResponse(w, ErrServerNotInitialized, r.URL)
return
}
if s3Error := checkRequestAuthType(r, "", "", serverConfig.GetRegion()); s3Error != ErrNone {
if !objectAPI.IsNotificationSupported() {
writeErrorResponse(w, ErrNotImplemented, r.URL)
return
}
if s3Error := checkRequestAuthType(r, "", "", globalServerConfig.GetRegion()); s3Error != ErrNone {
writeErrorResponse(w, s3Error, r.URL)
return
}
@@ -110,7 +122,6 @@ func (api objectAPIHandlers) PutBucketNotificationHandler(w http.ResponseWriter,
_, err := objectAPI.GetBucketInfo(bucket)
if err != nil {
errorIf(err, "Unable to find bucket info.")
writeErrorResponse(w, toAPIErrorCode(err), r.URL)
return
}
@@ -150,6 +161,12 @@ func (api objectAPIHandlers) PutBucketNotificationHandler(w http.ResponseWriter,
return
}
// Normalize the incoming queue ARNs to the server's configured region.
for i, queueConfig := range notificationCfg.QueueConfigs {
queueConfig.QueueARN = unmarshalSqsARN(queueConfig.QueueARN).String()
notificationCfg.QueueConfigs[i] = queueConfig
}
// Put bucket notification config.
err = PutBucketNotificationConfig(bucket, &notificationCfg, objectAPI)
if err != nil {
@@ -173,7 +190,9 @@ func PutBucketNotificationConfig(bucket string, ncfg *notificationConfig, objAPI
// Acquire a write lock on bucket before modifying its
// configuration.
bucketLock := globalNSMutex.NewNSLock(bucket, "")
bucketLock.Lock()
if err := bucketLock.GetLock(globalOperationTimeout); err != nil {
return err
}
// Release lock after notifying peers
defer bucketLock.Unlock()
@@ -205,14 +224,6 @@ func writeNotification(w http.ResponseWriter, notification map[string][]Notifica
return err
}
// https://github.com/containous/traefik/issues/560
// https://developer.mozilla.org/en-US/docs/Web/API/Server-sent_events/Using_server-sent_events
//
// Proxies might buffer the connection to avoid this we
// need the proper MIME type before writing to client.
// This MIME header tells the proxies to avoid buffering
w.Header().Set("Content-Type", "text/event-stream")
// Add additional CRLF characters for client to
// differentiate the individual events properly.
_, err = w.Write(append(notificationBytes, crlf...))
@@ -224,25 +235,61 @@ func writeNotification(w http.ResponseWriter, notification map[string][]Notifica
// CRLF character used for chunked transfer in accordance with HTTP standards.
var crlf = []byte("\r\n")
// sendBucketNotification - writes notification back to client on the response writer
// for each notification input, otherwise writes whitespace characters periodically
// to keep the connection active. Each notification messages are terminated by CRLF
// character. Upon any error received on response writer the for loop exits.
func sendBucketNotification(w http.ResponseWriter, arnListenerCh <-chan []NotificationEvent) {
var dummyEvents = map[string][]NotificationEvent{"Records": nil}
// Continuously write to client either timely empty structures
// every 5 seconds, or return back the notifications.
// listenChan provides a data channel to send event
// notifications on, and a `doneCh` to signal that events are no longer
// being received. It also sends empty events (whitespace) to keep the
// underlying connection alive.
type listenChan struct {
doneCh chan struct{}
dataCh chan []NotificationEvent
}
// newListenChan returns a listenChan with properly initialized
// unbuffered channels.
func newListenChan() *listenChan {
return &listenChan{
doneCh: make(chan struct{}),
dataCh: make(chan []NotificationEvent),
}
}
// sendNotificationEvent sends notification events on the data channel
// until doneCh is closed
func (l *listenChan) sendNotificationEvent(events []NotificationEvent) {
select {
// Returns immediately if receiver has quit.
case <-l.doneCh:
// Blocks until receiver is available.
case l.dataCh <- events:
}
}
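The select above is the standard Go handshake between a sender and a receiver that may quit at any time: the send blocks until the receiver picks up the event, and returns immediately once doneCh is closed. A self-contained toy version of the same pattern:

package main

import "fmt"

// listener is a toy version of listenChan: dataCh carries events,
// doneCh signals that the receiver has quit.
type listener struct {
    doneCh chan struct{}
    dataCh chan string
}

// send blocks until the receiver takes the event, or returns
// immediately once doneCh has been closed.
func (l *listener) send(ev string) {
    select {
    case <-l.doneCh: // receiver already quit; drop the event
    case l.dataCh <- ev: // receiver picked the event up
    }
}

func main() {
    l := &listener{doneCh: make(chan struct{}), dataCh: make(chan string)}
    go l.send("s3:ObjectCreated:Put")
    fmt.Println(<-l.dataCh) // s3:ObjectCreated:Put
    close(l.doneCh)
    l.send("dropped") // returns immediately, nothing blocks
}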
// waitForListener writes event notifications or keep-alive whitespace to the
// ResponseWriter until the client closes the connection
func (l *listenChan) waitForListener(w http.ResponseWriter) {
// Logs errors other than EPIPE and ECONNRESET.
// EPIPE and ECONNRESET indicate that the client stopped
// listening to notification events.
logClientError := func(err error, msg string) {
if oe, ok := err.(*net.OpError); !ok || (oe.Err != syscall.EPIPE &&
oe.Err != syscall.ECONNRESET) {
errorIf(err, msg)
}
}
emptyEvent := map[string][]NotificationEvent{"Records": nil}
defer close(l.doneCh)
for {
select {
case events := <-arnListenerCh:
case events := <-l.dataCh:
if err := writeNotification(w, map[string][]NotificationEvent{"Records": events}); err != nil {
errorIf(err, "Unable to write notification to client.")
logClientError(err, "Unable to write notification")
return
}
case <-time.After(globalSNSConnAlive): // Wait for global conn active seconds.
if err := writeNotification(w, dummyEvents); err != nil {
// FIXME - do not log for all errors.
errorIf(err, "Unable to write notification to client.")
case <-time.After(globalSNSConnAlive):
if err := writeNotification(w, emptyEvent); err != nil {
logClientError(err, "Unable to write empty notification")
return
}
}
@@ -257,8 +304,11 @@ func (api objectAPIHandlers) ListenBucketNotificationHandler(w http.ResponseWrit
writeErrorResponse(w, ErrServerNotInitialized, r.URL)
return
}
if s3Error := checkRequestAuthType(r, "", "", serverConfig.GetRegion()); s3Error != ErrNone {
if !objAPI.IsNotificationSupported() {
writeErrorResponse(w, ErrNotImplemented, r.URL)
return
}
if s3Error := checkRequestAuthType(r, "", "", globalServerConfig.GetRegion()); s3Error != ErrNone {
writeErrorResponse(w, s3Error, r.URL)
return
}
@@ -289,20 +339,21 @@ func (api objectAPIHandlers) ListenBucketNotificationHandler(w http.ResponseWrit
_, err := objAPI.GetBucketInfo(bucket)
if err != nil {
errorIf(err, "Unable to get bucket info.")
writeErrorResponse(w, toAPIErrorCode(err), r.URL)
return
}
targetServer := GetLocalPeer(globalEndpoints)
accountID := fmt.Sprintf("%d", UTCNow().UnixNano())
accountARN := fmt.Sprintf(
"%s:%s:%s:%s-%s",
minioTopic,
serverConfig.GetRegion(),
globalServerConfig.GetRegion(),
accountID,
snsTypeMinio,
globalMinioAddr,
targetServer,
)
var filterRules []filterRule
for _, prefix := range prefixes {
@@ -336,12 +387,11 @@ func (api objectAPIHandlers) ListenBucketNotificationHandler(w http.ResponseWrit
},
}
// Setup a listening channel that will receive notifications
// from the RPC handler.
nEventCh := make(chan []NotificationEvent)
defer close(nEventCh)
// Setup a listen channel to receive notifications like
// s3:ObjectCreated, s3:ObjectDeleted etc.
nListenCh := newListenChan()
// Add channel for listener events
if err = globalEventNotifier.AddListenerChan(accountARN, nEventCh); err != nil {
if err = globalEventNotifier.AddListenerChan(accountARN, nListenCh); err != nil {
errorIf(err, "Error adding a listener!")
writeErrorResponse(w, toAPIErrorCode(err), r.URL)
return
@@ -351,11 +401,11 @@ func (api objectAPIHandlers) ListenBucketNotificationHandler(w http.ResponseWrit
defer globalEventNotifier.RemoveListenerChan(accountARN)
// Update topic config to bucket config and persist - as soon
// as this call compelets, events may start appearing in
// nEventCh
// as this call completes, events may start appearing in
// nListenCh
lc := listenerConfig{
TopicConfig: *topicCfg,
TargetServer: globalMinioAddr,
TargetServer: targetServer,
}
err = AddBucketListenerConfig(bucket, &lc, objAPI)
@@ -368,8 +418,16 @@ func (api objectAPIHandlers) ListenBucketNotificationHandler(w http.ResponseWrit
// Add all common headers.
setCommonHeaders(w)
// Start sending bucket notifications.
sendBucketNotification(w, nEventCh)
// https://github.com/containous/traefik/issues/560
// https://developer.mozilla.org/en-US/docs/Web/API/Server-sent_events/Using_server-sent_events
//
// Proxies might buffer the connection; to avoid this we
// need the proper MIME type before writing to the client.
// This MIME header tells the proxies to avoid buffering
w.Header().Set("Content-Type", "text/event-stream")
// Start writing bucket notifications to ResponseWriter.
nListenCh.waitForListener(w)
}
// AddBucketListenerConfig - Updates on disk state of listeners, and
@@ -386,7 +444,9 @@ func AddBucketListenerConfig(bucket string, lcfg *listenerConfig, objAPI ObjectL
// Acquire a write lock on bucket before modifying its
// configuration.
bucketLock := globalNSMutex.NewNSLock(bucket, "")
bucketLock.Lock()
if err := bucketLock.GetLock(globalOperationTimeout); err != nil {
return err
}
// Release lock after notifying peers
defer bucketLock.Unlock()
@@ -427,7 +487,9 @@ func RemoveBucketListenerConfig(bucket string, lcfg *listenerConfig, objAPI Obje
// Acquire a write lock on bucket before modifying its
// configuration.
bucketLock := globalNSMutex.NewNSLock(bucket, "")
bucketLock.Lock()
if bucketLock.GetLock(globalOperationTimeout) != nil {
return
}
// Release lock after notifying peers
defer bucketLock.Unlock()
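Several hunks in this file replace blocking Lock()/RLock() calls with GetLock(timeout) so a wedged lock cannot hang the handler forever. A toy semaphore-channel sketch of that acquire-with-timeout shape (nsLock here is illustrative, not the server's namespace lock):

package main

import (
    "errors"
    "fmt"
    "time"
)

// nsLock is a toy semaphore-channel lock showing the
// acquire-with-timeout pattern used above.
type nsLock struct{ ch chan struct{} }

func newNSLock() *nsLock { return &nsLock{ch: make(chan struct{}, 1)} }

// GetLock acquires the lock or gives up after the timeout.
func (l *nsLock) GetLock(timeout time.Duration) error {
    select {
    case l.ch <- struct{}{}:
        return nil
    case <-time.After(timeout):
        return errors.New("timed out waiting for lock")
    }
}

// Unlock releases the lock.
func (l *nsLock) Unlock() { <-l.ch }

func main() {
    l := newNSLock()
    if err := l.GetLock(time.Second); err != nil {
        fmt.Println(err)
        return
    }
    defer l.Unlock()
    fmt.Println("lock held")
}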


@@ -21,12 +21,19 @@ import (
"bytes"
"encoding/json"
"encoding/xml"
"errors"
"io"
"io/ioutil"
"net/http"
"net/http/httptest"
"os"
"reflect"
"sync"
"testing"
"time"
"github.com/minio/minio/pkg/auth"
)
// Implement a dummy flush writer.
@@ -51,7 +58,7 @@ func TestWriteNotification(t *testing.T) {
if err != nil {
t.Fatalf("Unable to initialize test config %s", err)
}
defer removeAll(root)
defer os.RemoveAll(root)
var buffer bytes.Buffer
// Collection of test cases for each event writer.
@@ -110,26 +117,56 @@ func TestWriteNotification(t *testing.T) {
}
}
func TestSendBucketNotification(t *testing.T) {
// testResponseWriter implements `http.ResponseWriter`; it buffers the
// response body in a `bytes.Buffer` and returns an error after `failCount`
// calls to the `Write` method
type testResponseWriter struct {
mu sync.Mutex
failCount int
buf *bytes.Buffer
m http.Header
}
func newTestResponseWriter(failAt int) *testResponseWriter {
return &testResponseWriter{
buf: new(bytes.Buffer),
m: make(http.Header),
failCount: failAt,
}
}
func (trw *testResponseWriter) Flush() {
}
func (trw *testResponseWriter) Write(p []byte) (int, error) {
trw.mu.Lock()
defer trw.mu.Unlock()
if trw.failCount == 0 {
return 0, errors.New("Custom error")
}
trw.failCount--
return trw.buf.Write(p)
}
func (trw *testResponseWriter) Header() http.Header {
return trw.m
}
func (trw *testResponseWriter) WriteHeader(i int) {
}
func TestListenChan(t *testing.T) {
// Initialize a new test config.
root, err := newTestConfig(globalMinioDefaultRegion)
if err != nil {
t.Fatalf("Unable to initialize test config %s", err)
}
defer removeAll(root)
defer os.RemoveAll(root)
eventCh := make(chan []NotificationEvent)
// Create a Pipe with FlushWriter on the write-side and bufio.Scanner
// on the reader-side to receive notification over the listen channel in a
// synchronized manner.
pr, pw := io.Pipe()
fw := newFlushWriter(pw)
scanner := bufio.NewScanner(pr)
// Start a go-routine to wait for notification events.
go func(listenerCh <-chan []NotificationEvent) {
sendBucketNotification(fw, listenerCh)
}(eventCh)
// Create a listen channel to manage notifications
nListenCh := newListenChan()
// Construct notification events to be passed on the events channel.
var events []NotificationEvent
@@ -139,37 +176,68 @@ func TestSendBucketNotification(t *testing.T) {
ObjectCreatedCopy,
ObjectCreatedCompleteMultipartUpload,
}
for _, evType := range evTypes {
events = append(events, newNotificationEvent(eventData{
Type: evType,
}))
}
// Send notification events to the channel on which sendBucketNotification
// is waiting on.
eventCh <- events
// Read from the pipe connected to the ResponseWriter.
scanner.Scan()
notificationBytes := scanner.Bytes()
// Close the read-end and send an empty notification event on the channel
// to signal sendBucketNotification to terminate.
pr.Close()
eventCh <- []NotificationEvent{}
close(eventCh)
// Checking if the notification are the same as those sent over the channel.
var notifications map[string][]NotificationEvent
err = json.Unmarshal(notificationBytes, &notifications)
if err != nil {
t.Fatal("Failed to Unmarshal notification")
}
records := notifications["Records"]
for i, rec := range records {
if rec.EventName == evTypes[i].String() {
continue
// Send notification events one-by-one
go func() {
for _, event := range events {
nListenCh.sendNotificationEvent([]NotificationEvent{event})
}
t.Errorf("Failed to receive %d event %s", i, evTypes[i].String())
}()
// Create a http.ResponseWriter that fails after len(events)
// number of times
trw := newTestResponseWriter(len(events))
// Wait for all (4) notification events to be received
nListenCh.waitForListener(trw)
// Used to read JSON-formatted event stream line-by-line
scanner := bufio.NewScanner(trw.buf)
var records map[string][]NotificationEvent
for i := 0; scanner.Scan(); i++ {
err = json.Unmarshal(scanner.Bytes(), &records)
if err != nil {
t.Fatalf("Failed to unmarshal json %v", err)
}
nEvent := records["Records"][0]
if nEvent.EventName != evTypes[i].String() {
t.Errorf("notification event name mismatch, expected %s but got %s", evTypes[i], nEvent.EventName)
}
}
}
func TestSendNotificationEvent(t *testing.T) {
// This test verifies that sendNotificationEvent function
// returns once listenChan.doneCh is closed
l := newListenChan()
testCh := make(chan struct{})
timeout := 5 * time.Second
go func() {
// Send one empty notification event on listenChan
events := []NotificationEvent{{}}
l.sendNotificationEvent(events)
testCh <- struct{}{}
}()
// close l.doneCh to signal client exiting from
// ListenBucketNotification API call
close(l.doneCh)
select {
case <-time.After(timeout):
t.Fatalf("sendNotificationEvent didn't return after %v", timeout)
case <-testCh:
// If we reach this case, sendNotificationEvent
// returned on closing l.doneCh
}
}
@@ -180,7 +248,7 @@ func TestGetBucketNotificationHandler(t *testing.T) {
}
func testGetBucketNotificationHandler(obj ObjectLayer, instanceType, bucketName string, apiRouter http.Handler,
credentials credential, t *testing.T) {
credentials auth.Credentials, t *testing.T) {
// declare sample configs
filterRules := []filterRule{
{
@@ -246,6 +314,95 @@ func testGetBucketNotificationHandler(obj ObjectLayer, instanceType, bucketName
}
}
func TestPutBucketNotificationHandler(t *testing.T) {
ExecObjectLayerAPITest(t, testPutBucketNotificationHandler, []string{
"PutBucketNotification",
})
}
func testPutBucketNotificationHandler(obj ObjectLayer, instanceType,
bucketName string, apiRouter http.Handler, credentials auth.Credentials,
t *testing.T) {
// declare sample configs
filterRules := []filterRule{
{
Name: "prefix",
Value: "minio",
},
{
Name: "suffix",
Value: "*.jpg",
},
}
sampleSvcCfg := ServiceConfig{
[]string{"s3:ObjectRemoved:*", "s3:ObjectCreated:*"},
filterStruct{
keyFilter{filterRules},
},
"1",
}
sampleNotifCfg := notificationConfig{
QueueConfigs: []queueConfig{
{
ServiceConfig: sampleSvcCfg,
QueueARN: "testqARN",
},
},
}
{
sampleNotifCfg.LambdaConfigs = []lambdaConfig{
{
sampleSvcCfg, "testLARN",
},
}
xmlBytes, err := xml.Marshal(sampleNotifCfg)
if err != nil {
t.Fatalf("%s: Unexpected err: %#v", instanceType, err)
}
rec := httptest.NewRecorder()
req, err := newTestSignedRequestV4("PUT",
getPutBucketNotificationURL("", bucketName),
int64(len(xmlBytes)), bytes.NewReader(xmlBytes),
credentials.AccessKey, credentials.SecretKey)
if err != nil {
t.Fatalf("%s: Failed to create HTTP testRequest for PutBucketNotification: <ERROR> %v",
instanceType, err)
}
apiRouter.ServeHTTP(rec, req)
if rec.Code != http.StatusBadRequest {
t.Fatalf("Unexpected http response %d", rec.Code)
}
}
{
sampleNotifCfg.LambdaConfigs = nil
sampleNotifCfg.TopicConfigs = []topicConfig{
{
sampleSvcCfg, "testTARN",
},
}
xmlBytes, err := xml.Marshal(sampleNotifCfg)
if err != nil {
t.Fatalf("%s: Unexpected err: %#v", instanceType, err)
}
rec := httptest.NewRecorder()
req, err := newTestSignedRequestV4("PUT",
getPutBucketNotificationURL("", bucketName),
int64(len(xmlBytes)), bytes.NewReader(xmlBytes),
credentials.AccessKey, credentials.SecretKey)
if err != nil {
t.Fatalf("%s: Failed to create HTTP testRequest for PutBucketNotification: <ERROR> %v",
instanceType, err)
}
apiRouter.ServeHTTP(rec, req)
if rec.Code != http.StatusBadRequest {
t.Fatalf("Unexpected http response %d", rec.Code)
}
}
}
func TestListenBucketNotificationNilHandler(t *testing.T) {
ExecObjectLayerAPITest(t, testListenBucketNotificationNilHandler, []string{
"ListenBucketNotification",
@@ -254,7 +411,7 @@ func TestListenBucketNotificationNilHandler(t *testing.T) {
}
func testListenBucketNotificationNilHandler(obj ObjectLayer, instanceType, bucketName string, apiRouter http.Handler,
credentials credential, t *testing.T) {
credentials auth.Credentials, t *testing.T) {
// get random bucket name.
randBucket := getRandomBucketName()
@@ -280,26 +437,28 @@ func testListenBucketNotificationNilHandler(obj ObjectLayer, instanceType, bucke
}
}
func testRemoveNotificationConfig(obj ObjectLayer, instanceType, bucketName string, apiRouter http.Handler,
credentials credential, t *testing.T) {
func testRemoveNotificationConfig(obj ObjectLayer, instanceType,
bucketName string, apiRouter http.Handler, credentials auth.Credentials,
t *testing.T) {
invalidBucket := "Invalid\\Bucket"
// Reuse the test bucket name.
randBucket := bucketName
sampleNotificationBytes := []byte("<NotificationConfiguration><TopicConfiguration>" +
"<Event>s3:ObjectCreated:*</Event><Event>s3:ObjectRemoved:*</Event><Filter>" +
"<S3Key></S3Key></Filter><Id></Id><Topic>arn:minio:sns:us-east-1:1474332374:listen</Topic>" +
"</TopicConfiguration></NotificationConfiguration>")
// Set sample bucket notification on randBucket.
testRec := httptest.NewRecorder()
testReq, tErr := newTestSignedRequestV4("PUT", getPutBucketNotificationURL("", randBucket),
int64(len(sampleNotificationBytes)), bytes.NewReader(sampleNotificationBytes),
credentials.AccessKey, credentials.SecretKey)
if tErr != nil {
t.Fatalf("%s: Failed to create HTTP testRequest for PutBucketNotification: <ERROR> %v", instanceType, tErr)
nCfg := notificationConfig{
QueueConfigs: []queueConfig{
{
ServiceConfig: ServiceConfig{
Events: []string{"s3:ObjectRemoved:*",
"s3:ObjectCreated:*"},
},
QueueARN: "testqARN",
},
},
}
if err := persistNotificationConfig(randBucket, &nCfg, obj); err != nil {
t.Fatalf("Unexpected error: %#v", err)
}
apiRouter.ServeHTTP(testRec, testReq)
testCases := []struct {
bucketName string


@@ -116,9 +116,13 @@ func checkARN(arn, arnType string) APIErrorCode {
if len(strs) != 6 {
return ErrARNNotification
}
if serverConfig.GetRegion() != "" {
// Server region is allowed to be empty by default;
// in such a scenario the ARN region is not validated,
// allowing all regions.
if sregion := globalServerConfig.GetRegion(); sregion != "" {
region := strs[3]
if region != serverConfig.GetRegion() {
if region != sregion {
return ErrRegionNotification
}
}
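checkARN splits the ARN on ':' and expects exactly six parts, with the region at index 3; the region is only compared when the server has one configured. A small sketch of that parse (regionOf is a hypothetical helper):

package main

import (
    "fmt"
    "strings"
)

// regionOf extracts the region field from a six-part ARN such as
// arn:minio:sqs:us-east-1:1:webhook.
func regionOf(arn string) (string, bool) {
    strs := strings.Split(arn, ":")
    if len(strs) != 6 {
        return "", false
    }
    return strs[3], true
}

func main() {
    fmt.Println(regionOf("arn:minio:sqs:us-east-1:1:webhook")) // us-east-1 true
    fmt.Println(regionOf("arn:minio:sqs"))                     // false
}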
@@ -142,34 +146,34 @@ func isValidQueueID(queueARN string) bool {
// Is the queue identifier valid?
if isAMQPQueue(sqsARN) { // AMQP queue.
amqpN := serverConfig.Notify.GetAMQPByID(sqsARN.AccountID)
amqpN := globalServerConfig.Notify.GetAMQPByID(sqsARN.AccountID)
return amqpN.Enable && amqpN.URL != ""
} else if isMQTTQueue(sqsARN) {
mqttN := serverConfig.Notify.GetMQTTByID(sqsARN.AccountID)
mqttN := globalServerConfig.Notify.GetMQTTByID(sqsARN.AccountID)
return mqttN.Enable && mqttN.Broker != ""
} else if isNATSQueue(sqsARN) {
natsN := serverConfig.Notify.GetNATSByID(sqsARN.AccountID)
natsN := globalServerConfig.Notify.GetNATSByID(sqsARN.AccountID)
return natsN.Enable && natsN.Address != ""
} else if isElasticQueue(sqsARN) { // Elastic queue.
elasticN := serverConfig.Notify.GetElasticSearchByID(sqsARN.AccountID)
elasticN := globalServerConfig.Notify.GetElasticSearchByID(sqsARN.AccountID)
return elasticN.Enable && elasticN.URL != ""
} else if isRedisQueue(sqsARN) { // Redis queue.
redisN := serverConfig.Notify.GetRedisByID(sqsARN.AccountID)
redisN := globalServerConfig.Notify.GetRedisByID(sqsARN.AccountID)
return redisN.Enable && redisN.Addr != ""
} else if isPostgreSQLQueue(sqsARN) {
pgN := serverConfig.Notify.GetPostgreSQLByID(sqsARN.AccountID)
pgN := globalServerConfig.Notify.GetPostgreSQLByID(sqsARN.AccountID)
// Postgres can work with only default conn. info.
return pgN.Enable
} else if isMySQLQueue(sqsARN) {
msqlN := serverConfig.Notify.GetMySQLByID(sqsARN.AccountID)
msqlN := globalServerConfig.Notify.GetMySQLByID(sqsARN.AccountID)
// Mysql can work with only default conn. info.
return msqlN.Enable
} else if isKafkaQueue(sqsARN) {
kafkaN := serverConfig.Notify.GetKafkaByID(sqsARN.AccountID)
kafkaN := globalServerConfig.Notify.GetKafkaByID(sqsARN.AccountID)
return (kafkaN.Enable && len(kafkaN.Brokers) > 0 &&
kafkaN.Topic != "")
} else if isWebhookQueue(sqsARN) {
webhookN := serverConfig.Notify.GetWebhookByID(sqsARN.AccountID)
webhookN := globalServerConfig.Notify.GetWebhookByID(sqsARN.AccountID)
return webhookN.Enable && webhookN.Endpoint != ""
}
return false
@@ -235,6 +239,12 @@ func checkDuplicateQueueConfigs(configs []queueConfig) APIErrorCode {
// if one of the configs is malformed or has invalid data, it is rejected.
// Configuration is never applied partially.
func validateNotificationConfig(nConfig notificationConfig) APIErrorCode {
// Minio server does not support lambda/topic configurations
// currently. Such configuration is rejected.
if len(nConfig.LambdaConfigs) > 0 || len(nConfig.TopicConfigs) > 0 {
return ErrUnsupportedNotification
}
// Validate all queue configs.
if s3Error := validateQueueConfigs(nConfig.QueueConfigs); s3Error != ErrNone {
return s3Error
@@ -267,9 +277,13 @@ func unmarshalSqsARN(queueARN string) (mSqs arnSQS) {
if len(strs) != 6 {
return
}
if serverConfig.GetRegion() != "" {
// Server region is allowed to be empty by default;
// in such a scenario the ARN region is not validated,
// allowing all regions.
if sregion := globalServerConfig.GetRegion(); sregion != "" {
region := strs[3]
if region != serverConfig.GetRegion() {
if region != sregion {
return
}
}


@@ -17,6 +17,7 @@
package cmd
import (
"os"
"strings"
"testing"
)
@@ -222,7 +223,7 @@ func TestQueueARN(t *testing.T) {
if err != nil {
t.Fatalf("unable to initialize config file, %s", err)
}
defer removeAll(rootPath)
defer os.RemoveAll(rootPath)
testCases := []struct {
queueARN string
@@ -299,7 +300,7 @@ func TestQueueARN(t *testing.T) {
if err != nil {
t.Fatalf("unable to initialize config file, %s", err)
}
defer removeAll(rootPath)
defer os.RemoveAll(rootPath)
testCases = []struct {
queueARN string
@@ -332,7 +333,7 @@ func TestUnmarshalSQSARN(t *testing.T) {
if err != nil {
t.Fatalf("unable to initialize config file, %s", err)
}
defer removeAll(rootPath)
defer os.RemoveAll(rootPath)
testCases := []struct {
queueARN string
@@ -392,7 +393,7 @@ func TestUnmarshalSQSARN(t *testing.T) {
if err != nil {
t.Fatalf("unable to initialize config file, %s", err)
}
defer removeAll(rootPath)
defer os.RemoveAll(rootPath)
testCases = []struct {
queueARN string


@@ -17,16 +17,18 @@
package cmd
import (
"fmt"
"encoding/json"
"io"
"io/ioutil"
"net"
"net/http"
"runtime"
"strings"
humanize "github.com/dustin/go-humanize"
mux "github.com/gorilla/mux"
"github.com/minio/minio-go/pkg/set"
"github.com/minio/minio-go/pkg/policy"
"github.com/minio/minio/pkg/errors"
"github.com/minio/minio/pkg/wildcard"
)
@@ -35,7 +37,8 @@ const maxAccessPolicySize = 20 * humanize.KiByte
// Verify if a given action is valid for the url path based on the
// existing bucket access policy.
func bucketPolicyEvalStatements(action string, resource string, conditions map[string]set.StringSet, statements []policyStatement) bool {
func bucketPolicyEvalStatements(action string, resource string, conditions policy.ConditionKeyMap,
statements []policy.Statement) bool {
for _, statement := range statements {
if bucketPolicyMatchStatement(action, resource, conditions, statement) {
if statement.Effect == "Allow" {
@@ -51,7 +54,8 @@ func bucketPolicyEvalStatements(action string, resource string, conditions map[s
}
// Verify if action, resource and conditions match input policy statement.
func bucketPolicyMatchStatement(action string, resource string, conditions map[string]set.StringSet, statement policyStatement) bool {
func bucketPolicyMatchStatement(action string, resource string, conditions policy.ConditionKeyMap,
statement policy.Statement) bool {
// Verify if action, resource and condition match in given statement.
return (bucketPolicyActionMatch(action, statement) &&
bucketPolicyResourceMatch(resource, statement) &&
@@ -59,7 +63,7 @@ func bucketPolicyMatchStatement(action string, resource string, conditions map[s
}
// Verify if given action matches with policy statement.
func bucketPolicyActionMatch(action string, statement policyStatement) bool {
func bucketPolicyActionMatch(action string, statement policy.Statement) bool {
return !statement.Actions.FuncMatch(actionMatch, action).IsEmpty()
}
@@ -81,8 +85,20 @@ func refererMatch(pattern, referer string) bool {
return wildcard.MatchSimple(pattern, referer)
}
// isIPInCIDR - checks if a given IP address is a member of the given subnet.
func isIPInCIDR(cidr, ip string) bool {
// AWS S3 spec says IPs must use standard CIDR notation.
// http://docs.aws.amazon.com/AmazonS3/latest/dev/example-bucket-policies.html#example-bucket-policies-use-case-3.
_, cidrNet, err := net.ParseCIDR(cidr)
if err != nil {
return false // If the provided CIDR can't be parsed, no IP will be in the subnet.
}
addr := net.ParseIP(ip)
return cidrNet.Contains(addr)
}
// Verify if given resource matches with policy statement.
func bucketPolicyResourceMatch(resource string, statement policyStatement) bool {
func bucketPolicyResourceMatch(resource string, statement policy.Statement) bool {
// the resource rule for object could contain "*" wild card.
// the requested object can be given access based on the already set bucket policy if
// the match is successful.
@@ -91,17 +107,20 @@ func bucketPolicyResourceMatch(resource string, statement policyStatement) bool
}
// Verify if given condition matches with policy statement.
func bucketPolicyConditionMatch(conditions map[string]set.StringSet, statement policyStatement) bool {
func bucketPolicyConditionMatch(conditions policy.ConditionKeyMap, statement policy.Statement) bool {
// Supports following conditions.
// - StringEquals
// - StringNotEquals
// - StringLike
// - StringNotLike
// - IpAddress
// - NotIpAddress
//
// Supported applicable condition keys for each conditions.
// - s3:prefix
// - s3:max-keys
// - aws:Referer
// - aws:SourceIp
// The following loop evaluates the logical AND of all the
// conditions in the statement. Note: we can break out of the
@@ -159,6 +178,37 @@ func bucketPolicyConditionMatch(conditions map[string]set.StringSet, statement p
return false
}
}
} else if condition == "IpAddress" {
awsIps := conditionKeyVal["aws:SourceIp"]
// Skip empty condition, it is trivially satisfied.
if awsIps.IsEmpty() {
continue
}
// Wildcard-match IPs when the statement is not empty.
// Find a matching IP.
ipFound := false
for ip := range conditions["ip"] {
if !awsIps.FuncMatch(isIPInCIDR, ip).IsEmpty() {
ipFound = true
break
}
}
if !ipFound {
return false
}
} else if condition == "NotIpAddress" {
awsIps := conditionKeyVal["aws:SourceIp"]
// Skip empty condition, it is trivially satisfied.
if awsIps.IsEmpty() {
continue
}
// Wildcard-match IPs when the statement is not empty.
// Fail if any IP matches.
for ip := range conditions["ip"] {
if !awsIps.FuncMatch(isIPInCIDR, ip).IsEmpty() {
return false
}
}
}
}
@@ -176,7 +226,7 @@ func (api objectAPIHandlers) PutBucketPolicyHandler(w http.ResponseWriter, r *ht
return
}
if s3Error := checkRequestAuthType(r, "", "", serverConfig.GetRegion()); s3Error != ErrNone {
if s3Error := checkRequestAuthType(r, "", "", globalServerConfig.GetRegion()); s3Error != ErrNone {
writeErrorResponse(w, s3Error, r.URL)
return
}
@@ -187,7 +237,6 @@ func (api objectAPIHandlers) PutBucketPolicyHandler(w http.ResponseWriter, r *ht
// Before proceeding validate if bucket exists.
_, err := objAPI.GetBucketInfo(bucket)
if err != nil {
errorIf(err, "Unable to find bucket info.")
writeErrorResponse(w, toAPIErrorCode(err), r.URL)
return
}
@@ -213,13 +262,29 @@ func (api objectAPIHandlers) PutBucketPolicyHandler(w http.ResponseWriter, r *ht
writeErrorResponse(w, toAPIErrorCode(err), r.URL)
return
}
// Parse, validate and save bucket policy.
if s3Error := parseAndPersistBucketPolicy(bucket, policyBytes, objAPI); s3Error != ErrNone {
policyInfo := policy.BucketAccessPolicy{}
if err = json.Unmarshal(policyBytes, &policyInfo); err != nil {
writeErrorResponse(w, ErrInvalidPolicyDocument, r.URL)
return
}
// Parse and check bucket policy.
if s3Error := checkBucketPolicyResources(bucket, policyInfo); s3Error != ErrNone {
writeErrorResponse(w, s3Error, r.URL)
return
}
if err = objAPI.SetBucketPolicy(bucket, policyInfo); err != nil {
err = errors.Cause(err)
switch err.(type) {
case NotImplemented:
// Return error for invalid bucket name.
writeErrorResponse(w, ErrNotImplemented, r.URL)
default:
writeErrorResponse(w, ErrMalformedPolicy, r.URL)
}
return
}
// Success.
writeSuccessNoContent(w)
}
@@ -235,7 +300,7 @@ func (api objectAPIHandlers) DeleteBucketPolicyHandler(w http.ResponseWriter, r
return
}
if s3Error := checkRequestAuthType(r, "", "", serverConfig.GetRegion()); s3Error != ErrNone {
if s3Error := checkRequestAuthType(r, "", "", globalServerConfig.GetRegion()); s3Error != ErrNone {
writeErrorResponse(w, s3Error, r.URL)
return
}
@@ -246,20 +311,14 @@ func (api objectAPIHandlers) DeleteBucketPolicyHandler(w http.ResponseWriter, r
// Before proceeding validate if bucket exists.
_, err := objAPI.GetBucketInfo(bucket)
if err != nil {
errorIf(err, "Unable to find bucket info.")
writeErrorResponse(w, toAPIErrorCode(err), r.URL)
return
}
// Delete bucket access policy, by passing an empty policy
// struct.
if err := persistAndNotifyBucketPolicyChange(bucket, policyChange{true, nil}, objAPI); err != nil {
switch err.(type) {
case BucketPolicyNotFound:
writeErrorResponse(w, ErrNoSuchBucketPolicy, r.URL)
default:
writeErrorResponse(w, ErrInternalError, r.URL)
}
if err := objAPI.DeleteBucketPolicy(bucket); err != nil {
writeErrorResponse(w, toAPIErrorCode(err), r.URL)
return
}
@@ -278,7 +337,7 @@ func (api objectAPIHandlers) GetBucketPolicyHandler(w http.ResponseWriter, r *ht
return
}
if s3Error := checkRequestAuthType(r, "", "", serverConfig.GetRegion()); s3Error != ErrNone {
if s3Error := checkRequestAuthType(r, "", "", globalServerConfig.GetRegion()); s3Error != ErrNone {
writeErrorResponse(w, s3Error, r.URL)
return
}
@@ -289,24 +348,24 @@ func (api objectAPIHandlers) GetBucketPolicyHandler(w http.ResponseWriter, r *ht
// Before proceeding validate if bucket exists.
_, err := objAPI.GetBucketInfo(bucket)
if err != nil {
errorIf(err, "Unable to find bucket info.")
writeErrorResponse(w, toAPIErrorCode(err), r.URL)
return
}
// Read bucket access policy.
policy, err := readBucketPolicy(bucket, objAPI)
policy, err := objAPI.GetBucketPolicy(bucket)
if err != nil {
errorIf(err, "Unable to read bucket policy.")
switch err.(type) {
case BucketPolicyNotFound:
writeErrorResponse(w, ErrNoSuchBucketPolicy, r.URL)
default:
writeErrorResponse(w, ErrInternalError, r.URL)
}
writeErrorResponse(w, toAPIErrorCode(err), r.URL)
return
}
policyBytes, err := json.Marshal(&policy)
if err != nil {
errorIf(err, "Unable to marshal bucket policy.")
writeErrorResponse(w, toAPIErrorCode(err), r.URL)
return
}
// Write to client.
fmt.Fprint(w, policy)
w.Write(policyBytes)
}
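
The IpAddress/NotIpAddress conditions added above ultimately reduce to CIDR membership tests. A standalone sketch of that matching, using the same net.ParseCIDR approach as the new isIPInCIDR helper (the sample addresses are illustrative only):

package main

import (
	"fmt"
	"net"
)

// ipInCIDR reports whether ip falls inside the CIDR subnet. An
// unparseable CIDR matches no IP, mirroring the helper in the diff.
func ipInCIDR(cidr, ip string) bool {
	_, cidrNet, err := net.ParseCIDR(cidr)
	if err != nil {
		return false
	}
	return cidrNet.Contains(net.ParseIP(ip))
}

func main() {
	// IpAddress: the request IP must fall inside one of the
	// aws:SourceIp subnets; NotIpAddress inverts the result.
	fmt.Println(ipInCIDR("54.240.143.0/24", "54.240.143.2"))    // true
	fmt.Println(ipInCIDR("54.240.143.0/24", "127.240.143.224")) // false
	// IPv6 subnets work through the same code path.
	fmt.Println(ipInCIDR("2001:DB8:1234:5678::/64", "2001:DB8:1234:5678::42")) // true
	fmt.Println(ipInCIDR("not-a-cidr", "54.240.143.2")) // false
}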

View File

@@ -25,15 +25,17 @@ import (
"net/http/httptest"
"testing"
"github.com/minio/minio-go/pkg/policy"
"github.com/minio/minio-go/pkg/set"
"github.com/minio/minio/pkg/auth"
)
// Tests validate Bucket policy resource matcher.
func TestBucketPolicyResourceMatch(t *testing.T) {
// generates a statement with the given resource.
generateStatement := func(resource string) policyStatement {
statement := policyStatement{}
generateStatement := func(resource string) policy.Statement {
statement := policy.Statement{}
statement.Resources = set.CreateStringSet([]string{resource}...)
return statement
}
@@ -45,7 +47,7 @@ func TestBucketPolicyResourceMatch(t *testing.T) {
testCases := []struct {
resourceToMatch string
statement policyStatement
statement policy.Statement
expectedResourceMatch bool
}{
// Test case 1-4.
@@ -85,7 +87,7 @@ func TestBucketPolicyResourceMatch(t *testing.T) {
}
// TestBucketPolicyActionMatch - Test validates whether given action on the
// bucket/object matches the allowed actions in policyStatement.
// bucket/object matches the allowed actions in policy.Statement.
// This test preserves the allowed actions for all 3 sets of policies, that is read-write, read-only and write-only.
// The intention of the test is to catch any changes made to the allowed actions for any one of the above 3 major policy groups.
func TestBucketPolicyActionMatch(t *testing.T) {
@@ -94,7 +96,7 @@ func TestBucketPolicyActionMatch(t *testing.T) {
testCases := []struct {
action string
statement policyStatement
statement policy.Statement
expectedResult bool
}{
// s3:GetBucketLocation is the action necessary to be present in the bucket policy to allow
@@ -247,8 +249,7 @@ func TestPutBucketPolicyHandler(t *testing.T) {
// testPutBucketPolicyHandler - Test for Bucket policy end point.
func testPutBucketPolicyHandler(obj ObjectLayer, instanceType, bucketName string, apiRouter http.Handler,
credentials credential, t *testing.T) {
initBucketPolicies(obj)
credentials auth.Credentials, t *testing.T) {
bucketName1 := fmt.Sprintf("%s-1", bucketName)
if err := obj.MakeBucketWithLocation(bucketName1, ""); err != nil {
@@ -428,7 +429,7 @@ func testPutBucketPolicyHandler(obj ObjectLayer, instanceType, bucketName string
// ExecObjectLayerAPIAnonTest - Calls the HTTP API handler using the anonymous request, validates the ErrAccessDeniedResponse,
// sets the bucket policy using the policy statement generated from `getWriteOnlyObjectStatement` so that the
// unsigned request goes through and it's validated again.
ExecObjectLayerAPIAnonTest(t, "PutBucketPolicyHandler", bucketName, "", instanceType, apiRouter, anonReq, getWriteOnlyObjectStatement)
ExecObjectLayerAPIAnonTest(t, obj, "PutBucketPolicyHandler", bucketName, "", instanceType, apiRouter, anonReq, getWriteOnlyObjectStatement)
// HTTP request for testing when `objectLayer` is set to `nil`.
// There is no need to use an existing bucket and valid input for creating the request
@@ -445,6 +446,7 @@ func testPutBucketPolicyHandler(obj ObjectLayer, instanceType, bucketName string
// execute the object layer set to `nil` test.
// `ExecObjectLayerAPINilTest` manages the operation.
ExecObjectLayerAPINilTest(t, nilBucket, "", instanceType, apiRouter, nilReq)
}
// Wrapper for calling Get Bucket Policy HTTP handler tests for both XL multiple disks and single node setup.
@@ -454,10 +456,7 @@ func TestGetBucketPolicyHandler(t *testing.T) {
// testGetBucketPolicyHandler - Test for end point which fetches the access policy json of the given bucket.
func testGetBucketPolicyHandler(obj ObjectLayer, instanceType, bucketName string, apiRouter http.Handler,
credentials credential, t *testing.T) {
// initialize bucket policy.
initBucketPolicies(obj)
credentials auth.Credentials, t *testing.T) {
// template for constructing HTTP request body for PUT bucket policy.
bucketPolicyTemplate := `{"Version":"2012-10-17","Statement":[{"Action":["s3:GetBucketLocation","s3:ListBucket"],"Effect":"Allow","Principal":{"AWS":["*"]},"Resource":["arn:aws:s3:::%s"],"Sid":""},{"Action":["s3:GetObject"],"Effect":"Allow","Principal":{"AWS":["*"]},"Resource":["arn:aws:s3:::%s/this*"],"Sid":""}]}`
@@ -617,7 +616,7 @@ func testGetBucketPolicyHandler(obj ObjectLayer, instanceType, bucketName string
// ExecObjectLayerAPIAnonTest - Calls the HTTP API handler using the anonymous request, validates the ErrAccessDeniedResponse,
// sets the bucket policy using the policy statement generated from `getWriteOnlyObjectStatement` so that the
// unsigned request goes through and it's validated again.
ExecObjectLayerAPIAnonTest(t, "GetBucketPolicyHandler", bucketName, "", instanceType, apiRouter, anonReq, getReadOnlyObjectStatement)
ExecObjectLayerAPIAnonTest(t, obj, "GetBucketPolicyHandler", bucketName, "", instanceType, apiRouter, anonReq, getReadOnlyObjectStatement)
// HTTP request for testing when `objectLayer` is set to `nil`.
// There is no need to use an existing bucket and valid input for creating the request
@@ -643,9 +642,7 @@ func TestDeleteBucketPolicyHandler(t *testing.T) {
// testDeleteBucketPolicyHandler - Test for Delete bucket policy end point.
func testDeleteBucketPolicyHandler(obj ObjectLayer, instanceType, bucketName string, apiRouter http.Handler,
credentials credential, t *testing.T) {
// initialize bucket policy.
initBucketPolicies(obj)
credentials auth.Credentials, t *testing.T) {
// template for constructing HTTP request body for PUT bucket policy.
bucketPolicyTemplate := `{
@@ -822,7 +819,7 @@ func testDeleteBucketPolicyHandler(obj ObjectLayer, instanceType, bucketName str
// ExecObjectLayerAPIAnonTest - Calls the HTTP API handler using the anonymous request, validates the ErrAccessDeniedResponse,
// sets the bucket policy using the policy statement generated from `getWriteOnlyObjectStatement` so that the
// unsigned request goes through and it's validated again.
ExecObjectLayerAPIAnonTest(t, "DeleteBucketPolicyHandler", bucketName, "", instanceType, apiRouter, anonReq, getReadOnlyObjectStatement)
ExecObjectLayerAPIAnonTest(t, obj, "DeleteBucketPolicyHandler", bucketName, "", instanceType, apiRouter, anonReq, getReadOnlyObjectStatement)
// HTTP request for testing when `objectLayer` is set to `nil`.
// There is no need to use an existing bucket and valid input for creating the request
@@ -843,28 +840,28 @@ func testDeleteBucketPolicyHandler(obj ObjectLayer, instanceType, bucketName str
// TestBucketPolicyConditionMatch - Tests to validate whether bucket policy conditions match.
func TestBucketPolicyConditionMatch(t *testing.T) {
// obtain the inner map[string]set.StringSet for policyStatement.Conditions .
// obtain the inner map[string]set.StringSet for policy.Statement.Conditions.
getInnerMap := func(key2, value string) map[string]set.StringSet {
innerMap := make(map[string]set.StringSet)
innerMap[key2] = set.CreateStringSet(value)
return innerMap
}
// obtain policyStatement with Conditions set.
getStatementWithCondition := func(key1, key2, value string) policyStatement {
// obtain policy.Statement with Conditions set.
getStatementWithCondition := func(key1, key2, value string) policy.Statement {
innerMap := getInnerMap(key2, value)
// to set policy.Statement.Conditions.
conditions := make(map[string]map[string]set.StringSet)
conditions := make(policy.ConditionMap)
conditions[key1] = innerMap
// new policy statement.
statement := policyStatement{}
statement := policy.Statement{}
// set the condition.
statement.Conditions = conditions
return statement
}
testCases := []struct {
statementCondition policyStatement
statementCondition policy.Statement
condition map[string]set.StringSet
expectedMatch bool
@@ -970,6 +967,34 @@ func TestBucketPolicyConditionMatch(t *testing.T) {
condition: getInnerMap("referer", "http://somethingelse.com/"),
expectedMatch: true,
},
// Test case 13.
// IpAddress condition evaluates to true.
{
statementCondition: getStatementWithCondition("IpAddress", "aws:SourceIp", "54.240.143.0/24"),
condition: getInnerMap("ip", "54.240.143.2"),
expectedMatch: true,
},
// Test case 14.
// IpAddress condition evaluates to false.
{
statementCondition: getStatementWithCondition("IpAddress", "aws:SourceIp", "54.240.143.0/24"),
condition: getInnerMap("ip", "127.240.143.224"),
expectedMatch: false,
},
// Test case 15.
// NotIpAddress condition evaluates to true.
{
statementCondition: getStatementWithCondition("NotIpAddress", "aws:SourceIp", "54.240.143.0/24"),
condition: getInnerMap("ip", "54.240.144.188"),
expectedMatch: true,
},
// Test case 16.
// NotIpAddress condition evaluates to false.
{
statementCondition: getStatementWithCondition("NotIpAddress", "aws:SourceIp", "54.240.143.0/24"),
condition: getInnerMap("ip", "54.240.143.243"),
expectedMatch: false,
},
}
for i, tc := range testCases {

View File

@@ -1,5 +1,5 @@
/*
* Minio Cloud Storage, (C) 2015, 2016 Minio, Inc.
* Minio Cloud Storage, (C) 2015, 2016, 2017 Minio, Inc.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
@@ -26,12 +26,16 @@ import (
"sort"
"strings"
"github.com/minio/minio-go/pkg/policy"
"github.com/minio/minio-go/pkg/set"
)
var conditionKeyActionMap = map[string]set.StringSet{
"s3:prefix": set.CreateStringSet("s3:ListBucket"),
"s3:max-keys": set.CreateStringSet("s3:ListBucket"),
var emptyBucketPolicy = policy.BucketAccessPolicy{}
var conditionKeyActionMap = policy.ConditionKeyMap{
"s3:prefix": set.CreateStringSet("s3:ListBucket", "s3:ListBucketMultipartUploads"),
"s3:max-keys": set.CreateStringSet("s3:ListBucket", "s3:ListBucketMultipartUploads",
"s3:ListMultipartUploadParts"),
}
// supportedActionMap - lists all the actions supported by minio.
@@ -40,50 +44,25 @@ var supportedActionMap = set.CreateStringSet("*", "s3:*", "s3:GetObject",
"s3:AbortMultipartUpload", "s3:ListBucketMultipartUploads", "s3:ListMultipartUploadParts")
// supported Conditions type.
var supportedConditionsType = set.CreateStringSet("StringEquals", "StringNotEquals", "StringLike", "StringNotLike")
var supportedConditionsType = set.CreateStringSet("StringEquals", "StringNotEquals", "StringLike", "StringNotLike", "IpAddress", "NotIpAddress")
// Supported keys for the policy conditions.
var supportedConditionsKey = set.CreateStringSet("s3:prefix", "s3:max-keys", "aws:Referer")
var supportedConditionsKey = set.CreateStringSet("s3:prefix", "s3:max-keys", "aws:Referer", "aws:SourceIp")
// supportedEffectMap - supported effects.
var supportedEffectMap = set.CreateStringSet("Allow", "Deny")
// Statement - minio policy statement
type policyStatement struct {
Actions set.StringSet `json:"Action"`
Conditions map[string]map[string]set.StringSet `json:"Condition,omitempty"`
Effect string
Principal interface{} `json:"Principal"`
Resources set.StringSet `json:"Resource"`
Sid string
}
// bucketPolicy - collection of various bucket policy statements.
type bucketPolicy struct {
Version string // date in YYYY-MM-DD format
Statements []policyStatement `json:"Statement"`
}
// Stringer implementation for the bucket policies.
func (b bucketPolicy) String() string {
bbytes, err := json.Marshal(&b)
if err != nil {
errorIf(err, "Unable to marshal bucket policy into JSON %#v", b)
return ""
}
return string(bbytes)
}
// isValidActions - are actions valid.
func isValidActions(actions set.StringSet) (err error) {
// Statement actions cannot be empty.
if len(actions) == 0 {
if actions.IsEmpty() {
err = errors.New("Action list cannot be empty")
return err
}
if unsupportedActions := actions.Difference(supportedActionMap); !unsupportedActions.IsEmpty() {
err = fmt.Errorf("Unsupported actions found: %#v, please validate your policy document", unsupportedActions)
err = fmt.Errorf("Unsupported actions found: %#v, please validate your policy document",
unsupportedActions)
return err
}
return nil
@@ -106,7 +85,7 @@ func isValidEffect(effect string) (err error) {
// isValidResources - are valid resources.
func isValidResources(resources set.StringSet) (err error) {
// Statement resources cannot be empty.
if len(resources) == 0 {
if resources.IsEmpty() {
err = errors.New("Resource list cannot be empty")
return err
}
@@ -124,60 +103,17 @@ func isValidResources(resources set.StringSet) (err error) {
return nil
}
// parsePrincipals parses an incoming JSON value. Handles cases for
// these three combinations.
// - "Principal": "*",
// - "Principal": { "AWS" : "*" }
// - "Principal": { "AWS" : [ "*" ]}
func parsePrincipals(principal interface{}) set.StringSet {
principals, ok := principal.(map[string]interface{})
if !ok {
var principalStr string
principalStr, ok = principal.(string)
if ok {
return set.CreateStringSet(principalStr)
}
} // else {
var principalStrs []string
for _, p := range principals {
principalStr, isStr := p.(string)
if !isStr {
principalsAdd, isInterface := p.([]interface{})
if !isInterface {
principalStrsAddr, isStrs := p.([]string)
if !isStrs {
continue
}
principalStrs = append(principalStrs, principalStrsAddr...)
} else {
for _, pa := range principalsAdd {
var pstr string
pstr, isStr = pa.(string)
if !isStr {
continue
}
principalStrs = append(principalStrs, pstr)
}
}
continue
} // else {
principalStrs = append(principalStrs, principalStr)
}
return set.CreateStringSet(principalStrs...)
}
// isValidPrincipals - are valid principals.
func isValidPrincipals(principal interface{}) (err error) {
principals := parsePrincipals(principal)
// Statement principal should have a value.
if len(principals) == 0 {
err = errors.New("Principal cannot be empty")
return err
func isValidPrincipals(principal policy.User) (err error) {
if principal.AWS.IsEmpty() {
return errors.New("Principal cannot be empty")
}
if unsuppPrincipals := principals.Difference(set.CreateStringSet([]string{"*"}...)); !unsuppPrincipals.IsEmpty() {
if diff := principal.AWS.Difference(set.CreateStringSet("*")); !diff.IsEmpty() {
// Minio does not support or implement IAM, "*" is the only valid value.
// Amazon s3 doc on principals: http://docs.aws.amazon.com/IAM/latest/UserGuide/reference_policies_elements.html#Principal
err = fmt.Errorf("Unsupported principals found: %#v, please validate your policy document", unsuppPrincipals)
// Amazon s3 doc on principal:
// http://docs.aws.amazon.com/IAM/latest/UserGuide/reference_policies_elements.html#Principal
err = fmt.Errorf("Unsupported principals found: %#v, please validate your policy document",
diff)
return err
}
return nil
@@ -185,7 +121,7 @@ func isValidPrincipals(principal interface{}) (err error) {
// isValidConditions - returns nil if the given conditions are valid and
// a corresponding error otherwise.
func isValidConditions(actions set.StringSet, conditions map[string]map[string]set.StringSet) (err error) {
func isValidConditions(actions set.StringSet, conditions policy.ConditionMap) (err error) {
// Verify conditions should be valid. Validate if only
// supported condition keys are present and return error
// otherwise.
@@ -239,7 +175,7 @@ func resourcePrefix(resource string) string {
// checkBucketPolicyResources validates Resources in unmarshalled bucket policy structure.
// - Resources are validated against the given set of Actions.
func checkBucketPolicyResources(bucket string, bucketPolicy *bucketPolicy) APIErrorCode {
func checkBucketPolicyResources(bucket string, bucketPolicy policy.BucketAccessPolicy) APIErrorCode {
// Validate statements for special actions and collect resources
// for others to validate nesting.
var resourceMap = set.NewStringSet()
@@ -294,27 +230,27 @@ func checkBucketPolicyResources(bucket string, bucketPolicy *bucketPolicy) APIEr
// parseBucketPolicy - parses and validates if bucket policy is of
// proper JSON and follows allowed restrictions with policy standards.
func parseBucketPolicy(bucketPolicyReader io.Reader, policy *bucketPolicy) (err error) {
func parseBucketPolicy(bucketPolicyReader io.Reader, bktPolicy *policy.BucketAccessPolicy) (err error) {
// Parse bucket policy reader.
decoder := json.NewDecoder(bucketPolicyReader)
if err = decoder.Decode(&policy); err != nil {
if err = decoder.Decode(bktPolicy); err != nil {
return err
}
// Policy version cannot be empty.
if len(policy.Version) == 0 {
if len(bktPolicy.Version) == 0 {
err = errors.New("Policy version cannot be empty")
return err
}
// Policy statements cannot be empty.
if len(policy.Statements) == 0 {
if len(bktPolicy.Statements) == 0 {
err = errors.New("Policy statement cannot be empty")
return err
}
// Loop through all policy statements and validate entries.
for _, statement := range policy.Statements {
for _, statement := range bktPolicy.Statements {
// Statement effect should be valid.
if err := isValidEffect(statement.Effect); err != nil {
return err
@@ -339,19 +275,20 @@ func parseBucketPolicy(bucketPolicyReader io.Reader, policy *bucketPolicy) (err
// Separate deny and allow statements, so that we can apply deny
// statements in the beginning followed by Allow statements.
var denyStatements []policyStatement
var allowStatements []policyStatement
for _, statement := range policy.Statements {
var denyStatements []policy.Statement
var allowStatements []policy.Statement
for _, statement := range bktPolicy.Statements {
if statement.Effect == "Deny" {
denyStatements = append(denyStatements, statement)
continue
}
// else if statement.Effect == "Allow"
allowStatements = append(allowStatements, statement)
}
// Deny statements are enforced first once matched.
policy.Statements = append(denyStatements, allowStatements...)
bktPolicy.Statements = append(denyStatements, allowStatements...)
// Return successfully parsed policy structure.
return nil
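
parseBucketPolicy finishes by reordering statements so Deny statements precede Allow statements; evaluation can then stop at the first match and a matching Deny always wins. A minimal sketch of that reordering over a simplified statement type (the actual evaluation lives in bucket-policy-handlers.go above):

package main

import "fmt"

// statement is a simplified stand-in for policy.Statement.
type statement struct {
	Sid    string
	Effect string // "Allow" or "Deny"
}

// orderDenyFirst moves Deny statements ahead of Allow statements,
// preserving relative order within each group, as parseBucketPolicy
// does at the end of parsing.
func orderDenyFirst(stmts []statement) []statement {
	var deny, allow []statement
	for _, s := range stmts {
		if s.Effect == "Deny" {
			deny = append(deny, s)
			continue
		}
		allow = append(allow, s)
	}
	return append(deny, allow...)
}

func main() {
	ordered := orderDenyFirst([]statement{
		{Sid: "allow-read", Effect: "Allow"},
		{Sid: "deny-secret-prefix", Effect: "Deny"},
	})
	for _, s := range ordered {
		fmt.Println(s.Sid) // deny-secret-prefix, then allow-read
	}
}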

View File

@@ -20,6 +20,7 @@ import (
"encoding/json"
"errors"
"fmt"
"reflect"
"testing"
"github.com/minio/minio-go/pkg/policy"
@@ -74,11 +75,11 @@ var (
)
// Obtain object statement for read-write bucketPolicy.
func getReadWriteObjectStatement(bucketName, objectPrefix string) policyStatement {
objectResourceStatement := policyStatement{}
func getReadWriteObjectStatement(bucketName, objectPrefix string) policy.Statement {
objectResourceStatement := policy.Statement{}
objectResourceStatement.Effect = "Allow"
objectResourceStatement.Principal = map[string]interface{}{
"AWS": "*",
objectResourceStatement.Principal = policy.User{
AWS: set.StringSet{"*": struct{}{}},
}
objectResourceStatement.Resources = set.CreateStringSet([]string{fmt.Sprintf("%s%s", bucketARNPrefix, bucketName+"/"+objectPrefix+"*")}...)
objectResourceStatement.Actions = set.CreateStringSet(readWriteObjectActions...)
@@ -86,11 +87,11 @@ func getReadWriteObjectStatement(bucketName, objectPrefix string) policyStatemen
}
// Obtain bucket statement for read-write bucketPolicy.
func getReadWriteBucketStatement(bucketName, objectPrefix string) policyStatement {
bucketResourceStatement := policyStatement{}
func getReadWriteBucketStatement(bucketName, objectPrefix string) policy.Statement {
bucketResourceStatement := policy.Statement{}
bucketResourceStatement.Effect = "Allow"
bucketResourceStatement.Principal = map[string]interface{}{
"AWS": "*",
bucketResourceStatement.Principal = policy.User{
AWS: set.StringSet{"*": struct{}{}},
}
bucketResourceStatement.Resources = set.CreateStringSet([]string{fmt.Sprintf("%s%s", bucketARNPrefix, bucketName)}...)
bucketResourceStatement.Actions = set.CreateStringSet(readWriteBucketActions...)
@@ -98,19 +99,19 @@ func getReadWriteBucketStatement(bucketName, objectPrefix string) policyStatemen
}
// Obtain statements for read-write bucketPolicy.
func getReadWriteStatement(bucketName, objectPrefix string) []policyStatement {
statements := []policyStatement{}
func getReadWriteStatement(bucketName, objectPrefix string) []policy.Statement {
statements := []policy.Statement{}
// Save the read write policy.
statements = append(statements, getReadWriteBucketStatement(bucketName, objectPrefix), getReadWriteObjectStatement(bucketName, objectPrefix))
return statements
}
// Obtain bucket statement for read only bucketPolicy.
func getReadOnlyBucketStatement(bucketName, objectPrefix string) policyStatement {
bucketResourceStatement := policyStatement{}
func getReadOnlyBucketStatement(bucketName, objectPrefix string) policy.Statement {
bucketResourceStatement := policy.Statement{}
bucketResourceStatement.Effect = "Allow"
bucketResourceStatement.Principal = map[string]interface{}{
"AWS": "*",
bucketResourceStatement.Principal = policy.User{
AWS: set.StringSet{"*": struct{}{}},
}
bucketResourceStatement.Resources = set.CreateStringSet([]string{fmt.Sprintf("%s%s", bucketARNPrefix, bucketName)}...)
bucketResourceStatement.Actions = set.CreateStringSet(readOnlyBucketActions...)
@@ -118,11 +119,11 @@ func getReadOnlyBucketStatement(bucketName, objectPrefix string) policyStatement
}
// Obtain object statement for read only bucketPolicy.
func getReadOnlyObjectStatement(bucketName, objectPrefix string) policyStatement {
objectResourceStatement := policyStatement{}
func getReadOnlyObjectStatement(bucketName, objectPrefix string) policy.Statement {
objectResourceStatement := policy.Statement{}
objectResourceStatement.Effect = "Allow"
objectResourceStatement.Principal = map[string]interface{}{
"AWS": "*",
objectResourceStatement.Principal = policy.User{
AWS: set.StringSet{"*": struct{}{}},
}
objectResourceStatement.Resources = set.CreateStringSet([]string{fmt.Sprintf("%s%s", bucketARNPrefix, bucketName+"/"+objectPrefix+"*")}...)
objectResourceStatement.Actions = set.CreateStringSet(readOnlyObjectActions...)
@@ -130,20 +131,20 @@ func getReadOnlyObjectStatement(bucketName, objectPrefix string) policyStatement
}
// Obtain statements for read only bucketPolicy.
func getReadOnlyStatement(bucketName, objectPrefix string) []policyStatement {
statements := []policyStatement{}
func getReadOnlyStatement(bucketName, objectPrefix string) []policy.Statement {
statements := []policy.Statement{}
// Save the read only policy.
statements = append(statements, getReadOnlyBucketStatement(bucketName, objectPrefix), getReadOnlyObjectStatement(bucketName, objectPrefix))
return statements
}
// Obtain bucket statements for write only bucketPolicy.
func getWriteOnlyBucketStatement(bucketName, objectPrefix string) policyStatement {
func getWriteOnlyBucketStatement(bucketName, objectPrefix string) policy.Statement {
bucketResourceStatement := policyStatement{}
bucketResourceStatement := policy.Statement{}
bucketResourceStatement.Effect = "Allow"
bucketResourceStatement.Principal = map[string]interface{}{
"AWS": "*",
bucketResourceStatement.Principal = policy.User{
AWS: set.StringSet{"*": struct{}{}},
}
bucketResourceStatement.Resources = set.CreateStringSet([]string{fmt.Sprintf("%s%s", bucketARNPrefix, bucketName)}...)
bucketResourceStatement.Actions = set.CreateStringSet(writeOnlyBucketActions...)
@@ -151,11 +152,11 @@ func getWriteOnlyBucketStatement(bucketName, objectPrefix string) policyStatemen
}
// Obtain object statements for write only bucketPolicy.
func getWriteOnlyObjectStatement(bucketName, objectPrefix string) policyStatement {
objectResourceStatement := policyStatement{}
func getWriteOnlyObjectStatement(bucketName, objectPrefix string) policy.Statement {
objectResourceStatement := policy.Statement{}
objectResourceStatement.Effect = "Allow"
objectResourceStatement.Principal = map[string]interface{}{
"AWS": "*",
objectResourceStatement.Principal = policy.User{
AWS: set.StringSet{"*": struct{}{}},
}
objectResourceStatement.Resources = set.CreateStringSet([]string{fmt.Sprintf("%s%s", bucketARNPrefix, bucketName+"/"+objectPrefix+"*")}...)
objectResourceStatement.Actions = set.CreateStringSet(writeOnlyObjectActions...)
@@ -163,8 +164,8 @@ func getWriteOnlyObjectStatement(bucketName, objectPrefix string) policyStatemen
}
// Obtain statements for write only bucketPolicy.
func getWriteOnlyStatement(bucketName, objectPrefix string) []policyStatement {
statements := []policyStatement{}
func getWriteOnlyStatement(bucketName, objectPrefix string) []policy.Statement {
statements := []policy.Statement{}
// Save the write only policy.
statements = append(statements, getWriteOnlyBucketStatement(bucketName, objectPrefix), getWriteOnlyObjectStatement(bucketName, objectPrefix))
@@ -325,9 +326,10 @@ func TestIsValidPrincipals(t *testing.T) {
{[]string{"*"}, nil, true},
}
for i, testCase := range testCases {
err := isValidPrincipals(map[string]interface{}{
"AWS": testCase.principals,
})
u := policy.User{
AWS: set.CreateStringSet(testCase.principals...),
}
err := isValidPrincipals(u)
if err != nil && testCase.shouldPass {
t.Errorf("Test %d: Expected to pass, but failed with: <ERROR> %s", i+1, err.Error())
}
@@ -343,80 +345,86 @@ func TestIsValidPrincipals(t *testing.T) {
}
}
// getEmptyConditionKeyMap - returns a function that generates a
// getEmptyConditionMap - returns a function that generates a
// condition key map for a given key.
func getEmptyConditionKeyMap(conditionKey string) func() map[string]map[string]set.StringSet {
emptyConditonGenerator := func() map[string]map[string]set.StringSet {
emptyMap := make(map[string]set.StringSet)
conditions := make(map[string]map[string]set.StringSet)
func getEmptyConditionMap(conditionKey string) func() policy.ConditionMap {
emptyConditonGenerator := func() policy.ConditionMap {
emptyMap := make(policy.ConditionKeyMap)
conditions := make(policy.ConditionMap)
conditions[conditionKey] = emptyMap
return conditions
}
return emptyConditonGenerator
}
// Tests validate policyStatement condition validator.
// Tests validate policy.Statement condition validator.
func TestIsValidConditions(t *testing.T) {
// returns empty conditions map.
setEmptyConditions := func() map[string]map[string]set.StringSet {
return make(map[string]map[string]set.StringSet)
setEmptyConditions := func() policy.ConditionMap {
return make(policy.ConditionMap)
}
// returns map with the "StringEquals" set to empty map.
setEmptyStringEquals := getEmptyConditionKeyMap("StringEquals")
setEmptyStringEquals := getEmptyConditionMap("StringEquals")
// returns map with the "StringNotEquals" set to empty map.
setEmptyStringNotEquals := getEmptyConditionKeyMap("StringNotEquals")
setEmptyStringNotEquals := getEmptyConditionMap("StringNotEquals")
// returns map with the "StringLike" set to empty map.
setEmptyStringLike := getEmptyConditionKeyMap("StringLike")
setEmptyStringLike := getEmptyConditionMap("StringLike")
// returns map with the "StringNotLike" set to empty map.
setEmptyStringNotLike := getEmptyConditionKeyMap("StringNotLike")
setEmptyStringNotLike := getEmptyConditionMap("StringNotLike")
// returns map with the "IpAddress" set to empty map.
setEmptyIPAddress := getEmptyConditionMap("IpAddress")
// returns map with "NotIpAddress" set to empty map.
setEmptyNotIPAddress := getEmptyConditionMap("NotIpAddress")
// Generate conditions.
generateConditions := func(key1, key2, value string) map[string]map[string]set.StringSet {
innerMap := make(map[string]set.StringSet)
generateConditions := func(key1, key2, value string) policy.ConditionMap {
innerMap := make(policy.ConditionKeyMap)
innerMap[key2] = set.CreateStringSet(value)
conditions := make(map[string]map[string]set.StringSet)
conditions := make(policy.ConditionMap)
conditions[key1] = innerMap
return conditions
}
// generate ambiguous conditions.
generateAmbigiousConditions := func() map[string]map[string]set.StringSet {
prefixMap := make(map[string]set.StringSet)
generateAmbigiousConditions := func() policy.ConditionMap {
prefixMap := make(policy.ConditionKeyMap)
prefixMap["s3:prefix"] = set.CreateStringSet("Asia/")
conditions := make(map[string]map[string]set.StringSet)
conditions := make(policy.ConditionMap)
conditions["StringEquals"] = prefixMap
conditions["StringNotEquals"] = prefixMap
return conditions
}
// generate valid and invalid types in the condition map.
generateValidInvalidConditions := func() map[string]map[string]set.StringSet {
innerMap := make(map[string]set.StringSet)
generateValidInvalidConditions := func() policy.ConditionMap {
innerMap := make(policy.ConditionKeyMap)
innerMap["s3:prefix"] = set.CreateStringSet("Asia/")
conditions := make(map[string]map[string]set.StringSet)
conditions := make(policy.ConditionMap)
conditions["StringEquals"] = innerMap
conditions["InvalidType"] = innerMap
return conditions
}
// generate valid and invalid keys for valid types in the same condition map.
generateValidInvalidConditionKeys := func() map[string]map[string]set.StringSet {
innerMapValid := make(map[string]set.StringSet)
generateValidInvalidConditionKeys := func() policy.ConditionMap {
innerMapValid := make(policy.ConditionKeyMap)
innerMapValid["s3:prefix"] = set.CreateStringSet("Asia/")
innerMapInValid := make(map[string]set.StringSet)
innerMapInValid["s3:invalid"] = set.CreateStringSet("Asia/")
conditions := make(map[string]map[string]set.StringSet)
conditions := make(policy.ConditionMap)
conditions["StringEquals"] = innerMapValid
conditions["StringEquals"] = innerMapInValid
return conditions
}
// List of Conditions used for test cases.
testConditions := []map[string]map[string]set.StringSet{
testConditions := []policy.ConditionMap{
generateConditions("StringValues", "s3:max-keys", "100"),
generateConditions("StringEquals", "s3:Object", "100"),
generateAmbigiousConditions(),
@@ -427,6 +435,8 @@ func TestIsValidConditions(t *testing.T) {
setEmptyStringNotEquals(),
setEmptyStringLike(),
setEmptyStringNotLike(),
setEmptyIPAddress(),
setEmptyNotIPAddress(),
generateConditions("StringEquals", "s3:prefix", "Asia/"),
generateConditions("StringEquals", "s3:max-keys", "100"),
generateConditions("StringNotEquals", "s3:prefix", "Asia/"),
@@ -439,7 +449,7 @@ func TestIsValidConditions(t *testing.T) {
"please validate your policy document", "s3:max-keys", getObjectActionSet)
testCases := []struct {
inputActions set.StringSet
inputCondition map[string]map[string]set.StringSet
inputCondition policy.ConditionMap
// expected result.
expectedErr error
// flag indicating whether test should pass.
@@ -482,7 +492,13 @@ func TestIsValidConditions(t *testing.T) {
// Test case - 12.
{roBucketActionSet, testConditions[11], nil, true},
// Test case - 13.
{getObjectActionSet, testConditions[11], maxKeysConditionErr, false},
{roBucketActionSet, testConditions[12], nil, true},
// Test case - 14.
{roBucketActionSet, testConditions[13], nil, true},
// Test case - 15.
{roBucketActionSet, testConditions[14], nil, true},
// Test case - 16.
{getObjectActionSet, testConditions[15], maxKeysConditionErr, false},
}
for i, testCase := range testCases {
actualErr := isValidConditions(testCase.inputActions, testCase.inputCondition)
@@ -504,26 +520,26 @@ func TestIsValidConditions(t *testing.T) {
// Tests validate Policy Action and Resource fields.
func TestCheckbucketPolicyResources(t *testing.T) {
// constructing policy statement without invalidPrefixActions (check bucket-policy-parser.go).
setValidPrefixActions := func(statements []policyStatement) []policyStatement {
setValidPrefixActions := func(statements []policy.Statement) []policy.Statement {
statements[0].Actions = set.CreateStringSet([]string{"s3:DeleteObject", "s3:PutObject"}...)
return statements
}
// constructing policy statement with recursive resources.
// should result in ErrMalformedPolicy
setRecurseResource := func(statements []policyStatement) []policyStatement {
setRecurseResource := func(statements []policy.Statement) []policy.Statement {
statements[0].Resources = set.CreateStringSet([]string{"arn:aws:s3:::minio-bucket/Asia/*", "arn:aws:s3:::minio-bucket/Asia/India/*"}...)
return statements
}
// constructing policy statement with lexically close characters.
// should not result in ErrMalformedPolicy
setResourceLexical := func(statements []policyStatement) []policyStatement {
setResourceLexical := func(statements []policy.Statement) []policy.Statement {
statements[0].Resources = set.CreateStringSet([]string{"arn:aws:s3:::minio-bucket/op*", "arn:aws:s3:::minio-bucket/oo*"}...)
return statements
}
// List of bucketPolicy used for tests.
bucketAccessPolicies := []bucketPolicy{
bucketAccessPolicies := []policy.BucketAccessPolicy{
// bucketPolicy - 1.
// Contains valid read only policy statement.
{Version: "1.0", Statements: getReadOnlyStatement("minio-bucket", "")},
@@ -554,7 +570,7 @@ func TestCheckbucketPolicyResources(t *testing.T) {
}
testCases := []struct {
inputPolicy bucketPolicy
inputPolicy policy.BucketAccessPolicy
// expected results.
apiErrCode APIErrorCode
// Flag indicating whether the test should pass.
@@ -585,7 +601,7 @@ func TestCheckbucketPolicyResources(t *testing.T) {
{bucketAccessPolicies[6], ErrNone, true},
}
for i, testCase := range testCases {
apiErrCode := checkBucketPolicyResources("minio-bucket", &testCase.inputPolicy)
apiErrCode := checkBucketPolicyResources("minio-bucket", testCase.inputPolicy)
if apiErrCode != ErrNone && testCase.shouldPass {
t.Errorf("Test %d: Expected to pass, but failed with Errocode %v", i+1, apiErrCode)
}
@@ -604,39 +620,39 @@ func TestCheckbucketPolicyResources(t *testing.T) {
// Tests validate parsing of BucketAccessPolicy.
func TestParseBucketPolicy(t *testing.T) {
// set Unsupported Actions.
setUnsupportedActions := func(statements []policyStatement) []policyStatement {
setUnsupportedActions := func(statements []policy.Statement) []policy.Statement {
// "s3:DeleteEverything"" is an Unsupported Action.
statements[0].Actions = set.CreateStringSet([]string{"s3:GetObject", "s3:ListBucket", "s3:PutObject", "s3:DeleteEverything"}...)
return statements
}
// set unsupported Effect.
setUnsupportedEffect := func(statements []policyStatement) []policyStatement {
setUnsupportedEffect := func(statements []policy.Statement) []policy.Statement {
// Effect "Don't allow" is Unsupported.
statements[0].Effect = "DontAllow"
return statements
}
// set unsupported principals.
setUnsupportedPrincipals := func(statements []policyStatement) []policyStatement {
setUnsupportedPrincipals := func(statements []policy.Statement) []policy.Statement {
// "User1111"" is an Unsupported Principal.
statements[0].Principal = map[string]interface{}{
"AWS": []string{"*", "User1111"},
statements[0].Principal = policy.User{
AWS: set.CreateStringSet([]string{"*", "User1111"}...),
}
return statements
}
// set unsupported Resources.
setUnsupportedResources := func(statements []policyStatement) []policyStatement {
setUnsupportedResources := func(statements []policy.Statement) []policy.Statement {
// "s3:DeleteEverything"" is an Unsupported Action.
statements[0].Resources = set.CreateStringSet([]string{"my-resource"}...)
return statements
}
// List of bucketPolicy used for test cases.
bucketAccesPolicies := []bucketPolicy{
bucketAccesPolicies := []policy.BucketAccessPolicy{
// bucketPolicy - 0.
// bucketPolicy statement empty.
{Version: "1.0"},
// bucketPolicy - 1.
// bucketPolicy version empty.
{Version: "", Statements: []policyStatement{}},
{Version: "", Statements: []policy.Statement{}},
// bucketPolicy - 2.
// Readonly bucketPolicy.
{Version: "1.0", Statements: getReadOnlyStatement("minio-bucket", "")},
@@ -661,19 +677,19 @@ func TestParseBucketPolicy(t *testing.T) {
}
testCases := []struct {
inputPolicy bucketPolicy
inputPolicy policy.BucketAccessPolicy
// expected results.
expectedPolicy bucketPolicy
expectedPolicy policy.BucketAccessPolicy
err error
// Flag indicating whether the test should pass.
shouldPass bool
}{
// Test case - 1.
// bucketPolicy statement empty.
{bucketAccesPolicies[0], bucketPolicy{}, errors.New("Policy statement cannot be empty"), false},
{bucketAccesPolicies[0], policy.BucketAccessPolicy{}, errors.New("Policy statement cannot be empty"), false},
// Test case - 2.
// bucketPolicy version empty.
{bucketAccesPolicies[1], bucketPolicy{}, errors.New("Policy version cannot be empty"), false},
{bucketAccesPolicies[1], policy.BucketAccessPolicy{}, errors.New("Policy version cannot be empty"), false},
// Test case - 3.
// Readonly bucketPolicy.
{bucketAccesPolicies[2], bucketAccesPolicies[2], nil, true},
@@ -704,8 +720,8 @@ func TestParseBucketPolicy(t *testing.T) {
t.Fatalf("Test %d: Couldn't Marshal bucket policy %s", i+1, err)
}
var actualAccessPolicy = &bucketPolicy{}
err = parseBucketPolicy(&buffer, actualAccessPolicy)
var actualAccessPolicy = policy.BucketAccessPolicy{}
err = parseBucketPolicy(&buffer, &actualAccessPolicy)
if err != nil && testCase.shouldPass {
t.Errorf("Test %d: Expected to pass, but failed with: <ERROR> %s", i+1, err.Error())
}
@@ -720,7 +736,7 @@ func TestParseBucketPolicy(t *testing.T) {
}
// Test passes as expected, but the output values are verified for correctness here.
if err == nil && testCase.shouldPass {
if testCase.expectedPolicy.String() != actualAccessPolicy.String() {
if !reflect.DeepEqual(testCase.expectedPolicy, actualAccessPolicy) {
t.Errorf("Test %d: The expected statements from resource statement generator doesn't match the actual statements", i+1)
}
}
@@ -737,8 +753,8 @@ func TestAWSRefererCondition(t *testing.T) {
set.CreateStringSet("www.example.com",
"http://www.example.com"))
requestConditionKeyMap := make(map[string]set.StringSet)
requestConditionKeyMap["referer"] = set.CreateStringSet("www.example.com")
requestConditionMap := make(policy.ConditionKeyMap)
requestConditionMap["referer"] = set.CreateStringSet("www.example.com")
testCases := []struct {
effect string
@@ -768,20 +784,82 @@ func TestAWSRefererCondition(t *testing.T) {
}
for i, test := range testCases {
conditions := make(map[string]map[string]set.StringSet)
conditions := make(policy.ConditionMap)
conditions[test.conditionKey] = conditionsKeyMap
allowStatement := policyStatement{
allowStatement := policy.Statement{
Sid: "Testing AWS referer condition",
Effect: test.effect,
Principal: map[string]interface{}{
"AWS": "*",
Principal: policy.User{
AWS: set.CreateStringSet("*"),
},
Resources: resource,
Conditions: conditions,
}
if result := bucketPolicyConditionMatch(requestConditionKeyMap, allowStatement); result != test.match {
if result := bucketPolicyConditionMatch(requestConditionMap, allowStatement); result != test.match {
t.Errorf("Test %d - Expected conditons to evaluate to %v but got %v",
i+1, test.match, result)
}
}
}
func TestAWSSourceIPCondition(t *testing.T) {
resource := set.CreateStringSet([]string{
fmt.Sprintf("%s%s", bucketARNPrefix, "minio-bucket"+"/"+"Asia"+"*"),
}...)
conditionsKeyMap := make(policy.ConditionKeyMap)
// Test both IPv4 and IPv6 addresses.
conditionsKeyMap.Add("aws:SourceIp",
set.CreateStringSet("54.240.143.0/24",
"2001:DB8:1234:5678::/64"))
requestConditionMap := make(policy.ConditionKeyMap)
requestConditionMap["ip"] = set.CreateStringSet("54.240.143.2")
testCases := []struct {
effect string
conditionKey string
match bool
}{
{
effect: "Allow",
conditionKey: "IpAddress",
match: true,
},
{
effect: "Allow",
conditionKey: "NotIpAddress",
match: false,
},
{
effect: "Deny",
conditionKey: "IpAddress",
match: true,
},
{
effect: "Deny",
conditionKey: "NotIpAddress",
match: false,
},
}
for i, test := range testCases {
conditions := make(policy.ConditionMap)
conditions[test.conditionKey] = conditionsKeyMap
allowStatement := policy.Statement{
Sid: "Testing AWS referer condition",
Effect: test.effect,
Principal: policy.User{
AWS: set.CreateStringSet("*"),
},
Resources: resource,
Conditions: conditions,
}
if result := bucketPolicyConditionMatch(requestConditionMap, allowStatement); result != test.match {
t.Errorf("Test %d - Expected conditons to evaluate to %v but got %v",
i+1, test.match, result)
}

View File

@@ -20,7 +20,12 @@ import (
"bytes"
"encoding/json"
"io"
"reflect"
"sync"
"github.com/minio/minio-go/pkg/policy"
"github.com/minio/minio/pkg/errors"
"github.com/minio/minio/pkg/hash"
)
const (
@@ -32,30 +37,17 @@ const (
bucketPolicyConfig = "policy.json"
)
// Variable represents bucket policies in memory.
var globalBucketPolicies *bucketPolicies
// Global bucket policies list; policies are enforced on each bucket
// by looking them up here.
type bucketPolicies struct {
rwMutex *sync.RWMutex
// Collection of 'bucket' policies.
bucketPolicyConfigs map[string]*bucketPolicy
}
// Represent a policy change
type policyChange struct {
// isRemove is true if the policy change is to delete the
// policy on a bucket.
IsRemove bool
// represents the new policy for the bucket
BktPolicy *bucketPolicy
bucketPolicyConfigs map[string]policy.BucketAccessPolicy
}
// Fetch bucket policy for a given bucket.
func (bp bucketPolicies) GetBucketPolicy(bucket string) *bucketPolicy {
func (bp bucketPolicies) GetBucketPolicy(bucket string) policy.BucketAccessPolicy {
bp.rwMutex.RLock()
defer bp.rwMutex.RUnlock()
return bp.bucketPolicyConfigs[bucket]
@@ -63,80 +55,61 @@ func (bp bucketPolicies) GetBucketPolicy(bucket string) *bucketPolicy {
// Set a new bucket policy for a bucket, this operation will overwrite
// any previous bucket policies for the bucket.
func (bp *bucketPolicies) SetBucketPolicy(bucket string, pCh policyChange) error {
func (bp *bucketPolicies) SetBucketPolicy(bucket string, newpolicy policy.BucketAccessPolicy) error {
bp.rwMutex.Lock()
defer bp.rwMutex.Unlock()
if pCh.IsRemove {
delete(bp.bucketPolicyConfigs, bucket)
} else {
if pCh.BktPolicy == nil {
return errInvalidArgument
}
bp.bucketPolicyConfigs[bucket] = pCh.BktPolicy
if reflect.DeepEqual(newpolicy, emptyBucketPolicy) {
return errInvalidArgument
}
bp.bucketPolicyConfigs[bucket] = newpolicy
return nil
}
// Loads all bucket policies from persistent layer.
func loadAllBucketPolicies(objAPI ObjectLayer) (policies map[string]*bucketPolicy, err error) {
// List buckets to proceed loading all bucket policies.
buckets, err := objAPI.ListBuckets()
errorIf(err, "Unable to list buckets.")
if err != nil {
return nil, errorCause(err)
// Delete bucket policy from struct for a given bucket.
func (bp *bucketPolicies) DeleteBucketPolicy(bucket string) error {
bp.rwMutex.Lock()
defer bp.rwMutex.Unlock()
delete(bp.bucketPolicyConfigs, bucket)
return nil
}
// Initialize all bucket policies.
func initBucketPolicies(objAPI ObjectLayer) (*bucketPolicies, error) {
if objAPI == nil {
return nil, errInvalidArgument
}
policies = make(map[string]*bucketPolicy)
var pErrs []error
// List buckets to proceed loading all bucket policies.
buckets, err := objAPI.ListBuckets()
if err != nil {
return nil, errors.Cause(err)
}
policies := make(map[string]policy.BucketAccessPolicy)
// Loads bucket policy.
for _, bucket := range buckets {
policy, pErr := readBucketPolicy(bucket.Name, objAPI)
bp, pErr := ReadBucketPolicy(bucket.Name, objAPI)
if pErr != nil {
// net.Dial fails for rpc client or any
// other unexpected errors during net.Dial.
if !isErrIgnored(pErr, errDiskNotFound) {
if !errors.IsErrIgnored(pErr, errDiskNotFound) {
if !isErrBucketPolicyNotFound(pErr) {
pErrs = append(pErrs, pErr)
return nil, errors.Cause(pErr)
}
}
// Continue to load other bucket policies if possible.
continue
}
policies[bucket.Name] = policy
policies[bucket.Name] = bp
}
// Look for any errors that occurred while reading bucket policies.
for _, pErr := range pErrs {
if pErr != nil {
return policies, pErr
}
}
// Success.
return policies, nil
}
// Initialize all bucket policies.
func initBucketPolicies(objAPI ObjectLayer) error {
if objAPI == nil {
return errInvalidArgument
}
// Read all bucket policies.
policies, err := loadAllBucketPolicies(objAPI)
if err != nil {
return err
}
// Populate global bucket collection.
globalBucketPolicies = &bucketPolicies{
// Return all bucket policies.
return &bucketPolicies{
rwMutex: &sync.RWMutex{},
bucketPolicyConfigs: policies,
}
// Success.
return nil
}, nil
}
// readBucketPolicyJSON - reads bucket policy for an input bucket, returns BucketPolicyNotFound
@@ -144,54 +117,44 @@ func initBucketPolicies(objAPI ObjectLayer) error {
func readBucketPolicyJSON(bucket string, objAPI ObjectLayer) (bucketPolicyReader io.Reader, err error) {
policyPath := pathJoin(bucketConfigPrefix, bucket, bucketPolicyConfig)
// Acquire a read lock on policy config before reading.
objLock := globalNSMutex.NewNSLock(minioMetaBucket, policyPath)
objLock.RLock()
defer objLock.RUnlock()
var buffer bytes.Buffer
err = objAPI.GetObject(minioMetaBucket, policyPath, 0, -1, &buffer)
err = objAPI.GetObject(minioMetaBucket, policyPath, 0, -1, &buffer, "")
if err != nil {
if isErrObjectNotFound(err) || isErrIncompleteBody(err) {
return nil, BucketPolicyNotFound{Bucket: bucket}
return nil, PolicyNotFound{Bucket: bucket}
}
errorIf(err, "Unable to load policy for the bucket %s.", bucket)
return nil, errorCause(err)
return nil, errors.Cause(err)
}
return &buffer, nil
}
// readBucketPolicy - reads bucket policy for an input bucket, returns BucketPolicyNotFound
// ReadBucketPolicy - reads bucket policy for an input bucket, returns BucketPolicyNotFound
// if bucket policy is not found. This function also parses the bucket policy into an object.
func readBucketPolicy(bucket string, objAPI ObjectLayer) (*bucketPolicy, error) {
func ReadBucketPolicy(bucket string, objAPI ObjectLayer) (policy.BucketAccessPolicy, error) {
// Read bucket policy JSON.
bucketPolicyReader, err := readBucketPolicyJSON(bucket, objAPI)
if err != nil {
return nil, err
return emptyBucketPolicy, err
}
// Parse the saved policy.
var policy = &bucketPolicy{}
err = parseBucketPolicy(bucketPolicyReader, policy)
if err != nil {
return nil, err
var bp policy.BucketAccessPolicy
if err = parseBucketPolicy(bucketPolicyReader, &bp); err != nil {
return emptyBucketPolicy, err
}
return policy, nil
return bp, nil
}
// removeBucketPolicy - removes any previously written bucket policy. Returns BucketPolicyNotFound
// if no policies are found.
func removeBucketPolicy(bucket string, objAPI ObjectLayer) error {
policyPath := pathJoin(bucketConfigPrefix, bucket, bucketPolicyConfig)
// Acquire a write lock on policy config before modifying.
objLock := globalNSMutex.NewNSLock(minioMetaBucket, policyPath)
objLock.Lock()
defer objLock.Unlock()
if err := objAPI.DeleteObject(minioMetaBucket, policyPath); err != nil {
errorIf(err, "Unable to remove bucket-policy on bucket %s.", bucket)
err = errorCause(err)
err := objAPI.DeleteObject(minioMetaBucket, policyPath)
if err != nil {
err = errors.Cause(err)
if _, ok := err.(ObjectNotFound); ok {
return BucketPolicyNotFound{Bucket: bucket}
}
@@ -201,77 +164,45 @@ func removeBucketPolicy(bucket string, objAPI ObjectLayer) error {
}
// writeBucketPolicy - save a bucket policy that is assumed to be validated.
func writeBucketPolicy(bucket string, objAPI ObjectLayer, bpy *bucketPolicy) error {
func writeBucketPolicy(bucket string, objAPI ObjectLayer, bpy policy.BucketAccessPolicy) error {
buf, err := json.Marshal(bpy)
if err != nil {
errorIf(err, "Unable to marshal bucket policy '%v' to JSON", *bpy)
errorIf(err, "Unable to marshal bucket policy '%#v' to JSON", bpy)
return err
}
policyPath := pathJoin(bucketConfigPrefix, bucket, bucketPolicyConfig)
// Acquire a write lock on policy config before modifying.
objLock := globalNSMutex.NewNSLock(minioMetaBucket, policyPath)
objLock.Lock()
defer objLock.Unlock()
if _, err := objAPI.PutObject(minioMetaBucket, policyPath, int64(len(buf)), bytes.NewReader(buf), nil, ""); err != nil {
hashReader, err := hash.NewReader(bytes.NewReader(buf), int64(len(buf)), "", getSHA256Hash(buf))
if err != nil {
errorIf(err, "Unable to set policy for the bucket %s", bucket)
return errorCause(err)
return errors.Cause(err)
}
if _, err = objAPI.PutObject(minioMetaBucket, policyPath, hashReader, nil); err != nil {
errorIf(err, "Unable to set policy for the bucket %s", bucket)
return errors.Cause(err)
}
return nil
}
func parseAndPersistBucketPolicy(bucket string, policyBytes []byte, objAPI ObjectLayer) APIErrorCode {
// Parse bucket policy.
var policy = &bucketPolicy{}
err := parseBucketPolicy(bytes.NewReader(policyBytes), policy)
if err != nil {
errorIf(err, "Unable to parse bucket policy.")
return ErrInvalidPolicyDocument
}
// Parse check bucket policy.
if s3Error := checkBucketPolicyResources(bucket, policy); s3Error != ErrNone {
return s3Error
}
// Acquire a write lock on bucket before modifying its configuration.
bucketLock := globalNSMutex.NewNSLock(bucket, "")
bucketLock.Lock()
// Release lock after notifying peers
defer bucketLock.Unlock()
// Save bucket policy.
if err = persistAndNotifyBucketPolicyChange(bucket, policyChange{false, policy}, objAPI); err != nil {
switch err.(type) {
case BucketNameInvalid:
return ErrInvalidBucketName
case BucketNotFound:
return ErrNoSuchBucket
default:
errorIf(err, "Unable to save bucket policy.")
return ErrInternalError
}
}
return ErrNone
}
// persistAndNotifyBucketPolicyChange - takes a policyChange argument,
// persists it to storage, and notify nodes in the cluster about the
// change. In-memory state is updated in response to the notification.
func persistAndNotifyBucketPolicyChange(bucket string, pCh policyChange, objAPI ObjectLayer) error {
if pCh.IsRemove {
if err := removeBucketPolicy(bucket, objAPI); err != nil {
func persistAndNotifyBucketPolicyChange(bucket string, isRemove bool, bktPolicy policy.BucketAccessPolicy, objAPI ObjectLayer) error {
if isRemove {
err := removeBucketPolicy(bucket, objAPI)
if err != nil {
return err
}
} else {
if pCh.BktPolicy == nil {
if reflect.DeepEqual(bktPolicy, emptyBucketPolicy) {
return errInvalidArgument
}
if err := writeBucketPolicy(bucket, objAPI, pCh.BktPolicy); err != nil {
if err := writeBucketPolicy(bucket, objAPI, bktPolicy); err != nil {
return err
}
}
// Notify all peers (including self) to update in-memory state
S3PeersUpdateBucketPolicy(bucket, pCh)
S3PeersUpdateBucketPolicy(bucket)
return nil
}
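A short sketch of how callers drive the flattened signature, assuming the policy.BucketAccessPolicy type and the emptyBucketPolicy sentinel shown above; the wrapper names are illustrative only.

// Illustrative wrappers, not part of the codebase: the old policyChange
// struct is replaced by an explicit isRemove flag plus a value-typed
// policy, with emptyBucketPolicy standing in for "no policy".
func applyBucketPolicy(bucket string, bktPolicy policy.BucketAccessPolicy, objAPI ObjectLayer) error {
	return persistAndNotifyBucketPolicyChange(bucket, false, bktPolicy, objAPI)
}

func removeBucketPolicyAndNotify(bucket string, objAPI ObjectLayer) error {
	return persistAndNotifyBucketPolicyChange(bucket, true, emptyBucketPolicy, objAPI)
}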

View File

@@ -16,11 +16,13 @@
package cmd
import "go/build"
// DO NOT EDIT THIS FILE DIRECTLY. These are build-time constants
// set through buildscripts/gen-ldflags.go.
var (
// GOPATH - GOPATH value at the time of build.
GOPATH = ""
GOPATH = build.Default.GOPATH
// Go get development tag.
goGetTag = "DEVELOPMENT.GOGET"

View File

@@ -22,9 +22,14 @@ import (
"encoding/pem"
"fmt"
"io/ioutil"
"path/filepath"
"os"
)
// TLSPrivateKeyPassword is the environment variable which contains the password used
// to decrypt the TLS private key. It must be set if the TLS private key is
// password protected.
const TLSPrivateKeyPassword = "MINIO_CERT_PASSWD"
func parsePublicCertFile(certFile string) (x509Certs []*x509.Certificate, err error) {
// Read certificate file.
var data []byte
@@ -58,14 +63,18 @@ func parsePublicCertFile(certFile string) (x509Certs []*x509.Certificate, err er
func getRootCAs(certsCAsDir string) (*x509.CertPool, error) {
// Get all CA file names.
var caFiles []string
fis, err := ioutil.ReadDir(certsCAsDir)
fis, err := readDir(certsCAsDir)
if err != nil {
return nil, err
}
for _, fi := range fis {
caFiles = append(caFiles, filepath.Join(certsCAsDir, fi.Name()))
// Skip all directories.
if hasSuffix(fi, slashSeparator) {
continue
}
// We are only interested in regular files here.
caFiles = append(caFiles, pathJoin(certsCAsDir, fi))
}
if len(caFiles) == 0 {
return nil, nil
}
@@ -90,6 +99,36 @@ func getRootCAs(certsCAsDir string) (*x509.CertPool, error) {
return rootCAs, nil
}
// loadX509KeyPair loads an X509 key pair (private key, certificate) from
// the provided paths. The private key may be encrypted; it is decrypted
// using the password from the MINIO_CERT_PASSWD environment variable.
func loadX509KeyPair(certFile, keyFile string) (tls.Certificate, error) {
certPEMBlock, err := ioutil.ReadFile(certFile)
if err != nil {
return tls.Certificate{}, fmt.Errorf("TLS: failed to read cert file: %v", err)
}
keyPEMBlock, err := ioutil.ReadFile(keyFile)
if err != nil {
return tls.Certificate{}, fmt.Errorf("TLS: failed to read private key: %v", err)
}
key, rest := pem.Decode(keyPEMBlock)
if len(rest) > 0 {
return tls.Certificate{}, fmt.Errorf("TLS: private key contains additional data")
}
if x509.IsEncryptedPEMBlock(key) {
password, ok := os.LookupEnv(TLSPrivateKeyPassword)
if !ok {
return tls.Certificate{}, fmt.Errorf("TLS: private key is encrypted but no password is present - set env var: %s", TLSPrivateKeyPassword)
}
decryptedKey, decErr := x509.DecryptPEMBlock(key, []byte(password))
if decErr != nil {
return tls.Certificate{}, fmt.Errorf("TLS: failed to decrypt private key: %v", decErr)
}
keyPEMBlock = pem.EncodeToMemory(&pem.Block{Type: key.Type, Bytes: decryptedKey})
}
return tls.X509KeyPair(certPEMBlock, keyPEMBlock)
}
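A usage sketch, assuming an operator-exported MINIO_CERT_PASSWD and placeholder certificate paths; in the server itself the paths come from getPublicCertFile() and getPrivateKeyFile().

// Illustrative only: building a tls.Config from a possibly encrypted key.
// The password travels via the environment, never on the command line.
func newTLSConfig() (*tls.Config, error) {
	cert, err := loadX509KeyPair("/etc/minio/certs/public.crt", "/etc/minio/certs/private.key")
	if err != nil {
		return nil, err
	}
	return &tls.Config{Certificates: []tls.Certificate{cert}}, nil
}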
func getSSLConfig() (x509Certs []*x509.Certificate, rootCAs *x509.CertPool, tlsCert *tls.Certificate, secureConn bool, err error) {
if !(isFile(getPublicCertFile()) && isFile(getPrivateKeyFile())) {
return nil, nil, nil, false, nil
@@ -100,7 +139,7 @@ func getSSLConfig() (x509Certs []*x509.Certificate, rootCAs *x509.CertPool, tlsC
}
var cert tls.Certificate
if cert, err = tls.LoadX509KeyPair(getPublicCertFile(), getPrivateKeyFile()); err != nil {
if cert, err = loadX509KeyPair(getPublicCertFile(), getPrivateKeyFile()); err != nil {
return nil, nil, nil, false, err
}

View File

@@ -219,27 +219,16 @@ func TestGetRootCAs(t *testing.T) {
t.Fatalf("Unable create test file. %v", err)
}
nonexistentErr := fmt.Errorf("open nonexistent-dir: no such file or directory")
if runtime.GOOS == "windows" {
// Below concatenation is done to get rid of goline error
// "error strings should not be capitalized or end with punctuation or a newline"
nonexistentErr = fmt.Errorf("open nonexistent-dir:" + " The system cannot find the file specified.")
}
err1 := fmt.Errorf("read %s: is a directory", filepath.Join(dir1, "empty-dir"))
if runtime.GOOS == "windows" {
// Below concatenation is done to get rid of goline error
// "error strings should not be capitalized or end with punctuation or a newline"
err1 = fmt.Errorf("read %s:"+" The handle is invalid.", filepath.Join(dir1, "empty-dir"))
}
testCases := []struct {
certCAsDir string
expectedErr error
}{
{"nonexistent-dir", nonexistentErr},
{dir1, err1},
{"nonexistent-dir", errFileNotFound},
// Ignores directories.
{dir1, nil},
// Ignore empty directory.
{emptydir, nil},
// Loads the cert properly.
{dir2, nil},
}
@@ -257,3 +246,252 @@ func TestGetRootCAs(t *testing.T) {
}
}
}
func TestLoadX509KeyPair(t *testing.T) {
for i, testCase := range loadX509KeyPairTests {
privateKey, err := createTempFile("private.key", testCase.privateKey)
if err != nil {
t.Fatalf("Test %d: failed to create tmp private key file: %v", i, err)
}
certificate, err := createTempFile("public.crt", testCase.certificate)
if err != nil {
os.Remove(privateKey)
t.Fatalf("Test %d: failed to create tmp certificate file: %v", i, err)
}
os.Unsetenv(TLSPrivateKeyPassword)
if testCase.password != "" {
os.Setenv(TLSPrivateKeyPassword, testCase.password)
}
_, err = loadX509KeyPair(certificate, privateKey)
if err != nil && !testCase.shouldFail {
t.Errorf("Test %d: test should succeed but it failed: %v", i, err)
}
if err == nil && testCase.shouldFail {
t.Errorf("Test %d: test should fail but it succeed", i)
}
os.Remove(privateKey)
os.Remove(certificate)
}
}
var loadX509KeyPairTests = []struct {
password string
privateKey, certificate string
shouldFail bool
}{
{
password: "foobar",
privateKey: `-----BEGIN RSA PRIVATE KEY-----
Proc-Type: 4,ENCRYPTED
DEK-Info: AES-128-CBC,CC483BF11678C35F9F02A1AD85DAE285
nMDFd+Qxk1f+S7LwMitmMofNXYNbCY4L1QEqPOOx5wnjNF1wSxmEkL7+h8W4Y/vb
AQt/7TCcUSuSqEMl45nUIcCbhBos5wz+ShvFiez3qKwmR5HSURvqyN6PIJeAbU+h
uw/cvAQsCH1Cq+gYkDJqjrizPhGqg7mSkqyeST3PbOl+ZXc0wynIjA34JSwO3c5j
cF7XKHETtNGj1+AiLruX4wYZAJwQnK375fCoNVMO992zC6K83d8kvGMUgmJjkiIj
q3s4ymFGfoo0S/XNDQXgE5A5QjAKRKUyW2i7pHIIhTyOpeJQeFHDi2/zaZRxoCog
lD2/HKLi5xJtRelZaaGyEJ20c05VzaSZ+EtRIN33foNdyQQL6iAUU3hJ6JlcmRIB
bRfX4XPH1w9UfFU5ZKwUciCoDcL65bsyv/y56ItljBp7Ok+UUKl0H4myFNOSfsuU
IIj4neslnAvwQ8SN4XUpug+7pGF+2m/5UDwRzSUN1H2RfgWN95kqR+tYqCq/E+KO
i0svzFrljSHswsFoPBqKngI7hHwc9QTt5q4frXwj9I4F6HHrTKZnC5M4ef26sbJ1
r7JRmkt0h/GfcS355b0uoBTtF1R8tSJo85Zh47wE+ucdjEvy9/pjnzKqIoJo9bNZ
ri+ue7GhH5EUca1Kd10bH8FqTF+8AHh4yW6xMxSkSgFGp7KtraAVpdp+6kosymqh
dz9VMjA8i28btfkS2isRaCpyumaFYJ3DJMFYhmeyt6gqYovmRLX0qrBf8nrkFTAA
ZmykWsc8ErsCudxlDmKVemuyFL7jtm9IRPq+Jh+IrmixLJFx8PKkNAM6g+A8irx8
piw+yhRsVy5Jk2QeIqvbpxN6BfCNcix4sWkusiCJrAqQFuSm26Mhh53Ig1DXG4d3
6QY1T8tW80Q6JHUtDR+iOPqW6EmrNiEopzirvhGv9FicXZ0Lo2yKJueeeihWhFLL
GmlnCjWVMO4hoo8lWCHv95JkPxGMcecCacKKUbHlXzCGyw3+eeTEHMWMEhziLeBy
HZJ1/GReI3Sx7XlUCkG4468Yz3PpmbNIk/U5XKE7TGuxKmfcWQpu022iF/9DrKTz
KVhKimCBXJX345bCFe1rN2z5CV6sv87FkMs5Y+OjPw6qYFZPVKO2TdUUBcpXbQMg
UW+Kuaax9W7214Stlil727MjRCiH1+0yODg4nWj4pTSocA5R3pn5cwqrjMu97OmL
ESx4DHmy4keeSy3+AIAehCZlwgeLb70/xCSRhJMIMS9Q6bz8CPkEWN8bBZt95oeo
37LqZ7lNmq61fs1x1tq0VUnI9HwLFEnsiubp6RG0Yu8l/uImjjjXa/ytW2GXrfUi
zM22dOntu6u23iBxRBJRWdFTVUz7qrdu+PHavr+Y7TbCeiBwiypmz5llf823UIVx
btamI6ziAq2gKZhObIhut7sjaLkAyTLlNVkNN1WNaplAXpW25UFVk93MHbvZ27bx
9iLGs/qB2kDTUjffSQoHTLY1GoLxv83RgVspUGQjslztEEpWfYvGfVLcgYLv933B
aRW9BRoNZ0czKx7Lhuwjreyb5IcWDarhC8q29ZkkWsQQonaPb0kTEFJul80Yqk0k
-----END RSA PRIVATE KEY-----`,
certificate: `-----BEGIN CERTIFICATE-----
MIIDiTCCAnGgAwIBAgIJAK5m5S7EE46kMA0GCSqGSIb3DQEBCwUAMFsxCzAJBgNV
BAYTAlVTMQ4wDAYDVQQIDAVzdGF0ZTERMA8GA1UEBwwIbG9jYXRpb24xFTATBgNV
BAoMDG9yZ2FuaXphdGlvbjESMBAGA1UEAwwJbG9jYWxob3N0MB4XDTE3MTIxODE4
MDUyOFoXDTI3MTIxNjE4MDUyOFowWzELMAkGA1UEBhMCVVMxDjAMBgNVBAgMBXN0
YXRlMREwDwYDVQQHDAhsb2NhdGlvbjEVMBMGA1UECgwMb3JnYW5pemF0aW9uMRIw
EAYDVQQDDAlsb2NhbGhvc3QwggEiMA0GCSqGSIb3DQEBAQUAA4IBDwAwggEKAoIB
AQDPJfYY5Dhsntrqwyu7ZgKM/zrlKEjCwGHhWJBdZdeZCHQlY8ISrtDxxp2XMmI6
HsszalEhNF9fk3vSXWclTuomG03fgGzP4R6QpcwGUCxhRF1J+0b64Yi8pw2uEGsR
GuMwLhGorcWalNoihgHc0BQ4vO8aaTNTX7iD06olesP6vGNu/S8h0VomE+0v9qYc
VF66Zaiv/6OmxAtDpElJjVd0mY7G85BlDlFrVwzd7zhRiuJZ4iDg749Xt9GuuKla
Dvr14glHhP4dQgUbhluJmIHMdx2ZPjk+5FxaDK6I9IUpxczFDe4agDE6lKzU1eLd
cCXRWFOf6q9lTB1hUZfmWfTxAgMBAAGjUDBOMB0GA1UdDgQWBBTQh7lDTq+8salD
0HBNILochiiNaDAfBgNVHSMEGDAWgBTQh7lDTq+8salD0HBNILochiiNaDAMBgNV
HRMEBTADAQH/MA0GCSqGSIb3DQEBCwUAA4IBAQAqi9LycxcXKNSDXaPkCKvw7RQy
iMBDGm1kIY++p3tzbUGuaeu85TsswKnqd50AullEU+aQxRRJGfR8eSKzQJMBXLMQ
b4ptYCc5OrZtRHT8NaZ/df2tc6I88kN8dBu6ybcNGsevXA/iNX3kKLW7naxdr5jj
KUudWSuqDCjCmQa5bYb9H6DreLH2lUItSWBa/YmeZ3VSezDCd+XYO53QKwZVj8Jb
bulZmoo7e7HO1qecEzWKL10UYyEbG3UDPtw+NZc142ZYeEhXQ0dsstGAO5hf3hEl
kQyKGUTpDbKLuyYMFsoH73YLjBqNe+UEhPwE+FWpcky1Sp9RTx/oMLpiZaPR
-----END CERTIFICATE-----`,
shouldFail: false,
},
{
password: "password",
privateKey: `-----BEGIN RSA PRIVATE KEY-----
Proc-Type: 4,ENCRYPTED
DEK-Info: AES-128-CBC,CC483BF11678C35F9F02A1AD85DAE285
nMDFd+Qxk1f+S7LwMitmMofNXYNbCY4L1QEqPOOx5wnjNF1wSxmEkL7+h8W4Y/vb
AQt/7TCcUSuSqEMl45nUIcCbhBos5wz+ShvFiez3qKwmR5HSURvqyN6PIJeAbU+h
uw/cvAQsCH1Cq+gYkDJqjrizPhGqg7mSkqyeST3PbOl+ZXc0wynIjA34JSwO3c5j
cF7XKHETtNGj1+AiLruX4wYZAJwQnK375fCoNVMO992zC6K83d8kvGMUgmJjkiIj
q3s4ymFGfoo0S/XNDQXgE5A5QjAKRKUyW2i7pHIIhTyOpeJQeFHDi2/zaZRxoCog
lD2/HKLi5xJtRelZaaGyEJ20c05VzaSZ+EtRIN33foNdyQQL6iAUU3hJ6JlcmRIB
bRfX4XPH1w9UfFU5ZKwUciCoDcL65bsyv/y56ItljBp7Ok+UUKl0H4myFNOSfsuU
IIj4neslnAvwQ8SN4XUpug+7pGF+2m/5UDwRzSUN1H2RfgWN95kqR+tYqCq/E+KO
i0svzFrljSHswsFoPBqKngI7hHwc9QTt5q4frXwj9I4F6HHrTKZnC5M4ef26sbJ1
r7JRmkt0h/GfcS355b0uoBTtF1R8tSJo85Zh47wE+ucdjEvy9/pjnzKqIoJo9bNZ
ri+ue7GhH5EUca1Kd10bH8FqTF+8AHh4yW6xMxSkSgFGp7KtraAVpdp+6kosymqh
dz9VMjA8i28btfkS2isRaCpyumaFYJ3DJMFYhmeyt6gqYovmRLX0qrBf8nrkFTAA
ZmykWsc8ErsCudxlDmKVemuyFL7jtm9IRPq+Jh+IrmixLJFx8PKkNAM6g+A8irx8
piw+yhRsVy5Jk2QeIqvbpxN6BfCNcix4sWkusiCJrAqQFuSm26Mhh53Ig1DXG4d3
6QY1T8tW80Q6JHUtDR+iOPqW6EmrNiEopzirvhGv9FicXZ0Lo2yKJueeeihWhFLL
GmlnCjWVMO4hoo8lWCHv95JkPxGMcecCacKKUbHlXzCGyw3+eeTEHMWMEhziLeBy
HZJ1/GReI3Sx7XlUCkG4468Yz3PpmbNIk/U5XKE7TGuxKmfcWQpu022iF/9DrKTz
KVhKimCBXJX345bCFe1rN2z5CV6sv87FkMs5Y+OjPw6qYFZPVKO2TdUUBcpXbQMg
UW+Kuaax9W7214Stlil727MjRCiH1+0yODg4nWj4pTSocA5R3pn5cwqrjMu97OmL
ESx4DHmy4keeSy3+AIAehCZlwgeLb70/xCSRhJMIMS9Q6bz8CPkEWN8bBZt95oeo
37LqZ7lNmq61fs1x1tq0VUnI9HwLFEnsiubp6RG0Yu8l/uImjjjXa/ytW2GXrfUi
zM22dOntu6u23iBxRBJRWdFTVUz7qrdu+PHavr+Y7TbCeiBwiypmz5llf823UIVx
btamI6ziAq2gKZhObIhut7sjaLkAyTLlNVkNN1WNaplAXpW25UFVk93MHbvZ27bx
9iLGs/qB2kDTUjffSQoHTLY1GoLxv83RgVspUGQjslztEEpWfYvGfVLcgYLv933B
aRW9BRoNZ0czKx7Lhuwjreyb5IcWDarhC8q29ZkkWsQQonaPb0kTEFJul80Yqk0k
-----END RSA PRIVATE KEY-----`,
certificate: `-----BEGIN CERTIFICATE-----
MIIDiTCCAnGgAwIBAgIJAK5m5S7EE46kMA0GCSqGSIb3DQEBCwUAMFsxCzAJBgNV
BAYTAlVTMQ4wDAYDVQQIDAVzdGF0ZTERMA8GA1UEBwwIbG9jYXRpb24xFTATBgNV
BAoMDG9yZ2FuaXphdGlvbjESMBAGA1UEAwwJbG9jYWxob3N0MB4XDTE3MTIxODE4
MDUyOFoXDTI3MTIxNjE4MDUyOFowWzELMAkGA1UEBhMCVVMxDjAMBgNVBAgMBXN0
YXRlMREwDwYDVQQHDAhsb2NhdGlvbjEVMBMGA1UECgwMb3JnYW5pemF0aW9uMRIw
EAYDVQQDDAlsb2NhbGhvc3QwggEiMA0GCSqGSIb3DQEBAQUAA4IBDwAwggEKAoIB
AQDPJfYY5Dhsntrqwyu7ZgKM/zrlKEjCwGHhWJBdZdeZCHQlY8ISrtDxxp2XMmI6
HsszalEhNF9fk3vSXWclTuomG03fgGzP4R6QpcwGUCxhRF1J+0b64Yi8pw2uEGsR
GuMwLhGorcWalNoihgHc0BQ4vO8aaTNTX7iD06olesP6vGNu/S8h0VomE+0v9qYc
VF66Zaiv/6OmxAtDpElJjVd0mY7G85BlDlFrVwzd7zhRiuJZ4iDg749Xt9GuuKla
Dvr14glHhP4dQgUbhluJmIHMdx2ZPjk+5FxaDK6I9IUpxczFDe4agDE6lKzU1eLd
cCXRWFOf6q9lTB1hUZfmWfTxAgMBAAGjUDBOMB0GA1UdDgQWBBTQh7lDTq+8salD
0HBNILochiiNaDAfBgNVHSMEGDAWgBTQh7lDTq+8salD0HBNILochiiNaDAMBgNV
HRMEBTADAQH/MA0GCSqGSIb3DQEBCwUAA4IBAQAqi9LycxcXKNSDXaPkCKvw7RQy
iMBDGm1kIY++p3tzbUGuaeu85TsswKnqd50AullEU+aQxRRJGfR8eSKzQJMBXLMQ
b4ptYCc5OrZtRHT8NaZ/df2tc6I88kN8dBu6ybcNGsevXA/iNX3kKLW7naxdr5jj
KUudWSuqDCjCmQa5bYb9H6DreLH2lUItSWBa/YmeZ3VSezDCd+XYO53QKwZVj8Jb
bulZmoo7e7HO1qecEzWKL10UYyEbG3UDPtw+NZc142ZYeEhXQ0dsstGAO5hf3hEl
kQyKGUTpDbKLuyYMFsoH73YLjBqNe+UEhPwE+FWpcky1Sp9RTx/oMLpiZaPR
-----END CERTIFICATE-----`,
shouldFail: true,
},
{
password: "",
privateKey: `-----BEGIN RSA PRIVATE KEY-----
MIIEpAIBAAKCAQEA4K9Qq7vMY2bGkrdFAYpBYNLlCgnnFU+0pi+N+3bjuWmfX/kw
WXBa3SDqKD08PWWzwvBSLPCCUV2IuUd7tBa1pJ2wXkdoDeI5InYHJKrXbSZonni6
Bex7sgnqV/9o8xFkSOleoQWZgyeKGxtt0J/Z+zhpH+zXahwM4wOL3yzLSQt+NCKM
6N96zXYi16DEa89fYwRxPwE1XTRc7Ddggqx+4iRHvYG0fyTNcPB/+UiFw59EE1Sg
QIyTVntVqpsb6s8XdkFxURoLxefhcMVf2kU0T04OWI3gmeavKfTcj8Z2/bjPSsqP
mgkADv9Ru6VnSK/96TW/NwxWJ32PBz6Sbl9LdwIDAQABAoIBABVh+d5uH/RxyoIZ
+PI9kx1A1NVQvfI0RK/wJKYC2YdCuw0qLOTGIY+b20z7DumU7TenIVrvhKdzrFhd
qjMoWh8RdsByMT/pAKD79JATxi64EgrK2IFJ0TfPY8L+JqHDTPT3aK8QVly5/ZW4
1YmePOOAqdiE9Lc/diaApuYVYD9SL/X7fYs1ezOB4oGXoz0rthX77zHMxcEurpK3
VgSnaq7FYTVY7GrFB+ASiAlDIyLwztz08Ijn8aG0QAZ8GFuPGSmPMXWjLwFhRZsa
Gfy5BYiA0bVSnQSPHzAnHu9HyGlsdouVPPvJB3SrvMl+BFhZiUuR8OGSob7z7hfI
hMyHbNECgYEA/gyG7sHAb5mPkhq9JkTv+LrMY5NDZKYcSlbvBlM3kd6Ib3Hxl+6T
FMq2TWIrh2+mT1C14htziHd05dF6St995Tby6CJxTj6a/2Odnfm+JcOou/ula4Sz
92nIGlGPTJXstDbHGnRCpk6AomXK02stydTyrCisOw1H+LyTG6aT0q8CgYEA4mkO
hfLJkgmJzWIhxHR901uWHz/LId0gC6FQCeaqWmRup6Bl97f0U6xokw4tw8DJOncF
yZpYRXUXhdv/FXCjtXvAhKIX5+e+3dlzPHIdekSfcY00ip/ifAS1OyVviJia+cna
eJgq8WLHxJZim9Ah93NlPyiqGPwtasub90qjZbkCgYEA35WK02o1wII3dvCNc7bM
M+3CoAglEdmXoF1uM/TdPUXKcbqoU3ymeXAGjYhOov3CMp/n0z0xqvLnMLPxmx+i
ny6DDYXyjlhO9WFogHYhwP636+mHJl8+PAsfDvqk0VRJZDmpdUDIv7DrSQGpRfRX
8f+2K4oIOlhv9RuRpI4wHwUCgYB8OjaMyn1NEsy4k2qBt4U+jhcdyEv1pbWqi/U1
qYm5FTgd44VvWVDHBGdQoMv9h28iFCJpzrU2Txv8B4y7v9Ujg+ZLIAFL7j0szt5K
wTZpWvO9Q0Qb98Q2VgL2lADRiyIlglrMJnoRfiisNfOfGKE6e+eGsxI5qUxmN5e5
JQvoiQKBgQCqgyuUBIu/Qsb3qUED/o0S5wCel43Yh/Rl+mxDinOUvJfKJSW2SyEk
+jDo0xw3Opg6ZC5Lj2V809LA/XteaIuyhRuqOopjhHIvIvrYGe+2O8q9/Mv40BYW
0BhJ/Gdseps0C6Z5mTT5Fee4YVlGZuyuNKmKTd4JmqInfBV3ncMWQg==
-----END RSA PRIVATE KEY-----`,
certificate: `-----BEGIN CERTIFICATE-----
MIIDiTCCAnGgAwIBAgIJAIb84Z5Mh31iMA0GCSqGSIb3DQEBCwUAMFsxCzAJBgNV
BAYTAlVTMQ4wDAYDVQQIDAVzdGF0ZTERMA8GA1UEBwwIbG9jYXRpb24xFTATBgNV
BAoMDG9yZ2FuaXphdGlvbjESMBAGA1UEAwwJbG9jYWxob3N0MB4XDTE3MTIxODE4
NTcyM1oXDTI3MTIxNjE4NTcyM1owWzELMAkGA1UEBhMCVVMxDjAMBgNVBAgMBXN0
YXRlMREwDwYDVQQHDAhsb2NhdGlvbjEVMBMGA1UECgwMb3JnYW5pemF0aW9uMRIw
EAYDVQQDDAlsb2NhbGhvc3QwggEiMA0GCSqGSIb3DQEBAQUAA4IBDwAwggEKAoIB
AQDgr1Cru8xjZsaSt0UBikFg0uUKCecVT7SmL437duO5aZ9f+TBZcFrdIOooPTw9
ZbPC8FIs8IJRXYi5R3u0FrWknbBeR2gN4jkidgckqtdtJmieeLoF7HuyCepX/2jz
EWRI6V6hBZmDJ4obG23Qn9n7OGkf7NdqHAzjA4vfLMtJC340Iozo33rNdiLXoMRr
z19jBHE/ATVdNFzsN2CCrH7iJEe9gbR/JM1w8H/5SIXDn0QTVKBAjJNWe1Wqmxvq
zxd2QXFRGgvF5+FwxV/aRTRPTg5YjeCZ5q8p9NyPxnb9uM9Kyo+aCQAO/1G7pWdI
r/3pNb83DFYnfY8HPpJuX0t3AgMBAAGjUDBOMB0GA1UdDgQWBBQ2/bSCHscnoV+0
d+YJxLu4XLSNIDAfBgNVHSMEGDAWgBQ2/bSCHscnoV+0d+YJxLu4XLSNIDAMBgNV
HRMEBTADAQH/MA0GCSqGSIb3DQEBCwUAA4IBAQC6p4gPwmkoDtRsP1c8IWgXFka+
Q59oe79ZK1RqDE6ZZu0rgw07rPzKr4ofW4hTxnx7PUgKOhWLq9VvwEC/9tDbD0Gw
SKknRZZOiEE3qUZbwNtHMd4UBzpzChTRC6RcwC5zT1/WICMUHxa4b8E2umJuf3Qd
5Y23sXEESx5evr49z6DLcVe2i70o2wJeWs2kaXqhCJt0X7z0rnYqjfFdvxd8dyzt
1DXmE45cLadpWHDg26DMsdchamgnqEo79YUxkH6G/Cb8ZX4igQ/CsxCDOKvccjHO
OncDtuIpK8O7OyfHP3+MBpUFG4P6Ctn7RVcZe9fQweTpfAy18G+loVzuUeOD
-----END CERTIFICATE-----`,
shouldFail: false,
},
{
password: "foobar",
privateKey: `-----BEGIN RSA PRIVATE KEY-----
MIIEpAIBAAKCAQEA4K9Qq7vMY2bGkrdFAYpBYNLlCgnnFU+0pi+N+3bjuWmfX/kw
WXBa3SDqKD08PWWzwvBSLPCCUV2IuUd7tBa1pJ2wXkdoDeI5InYHJKrXbSZonni6
Bex7sgnqV/9o8xFkSOleoQWZgyeKGxtt0J/Z+zhpH+zXahwM4wOL3yzLSQt+NCKM
6N96zXYi16DEa89fYwRxPwE1XTRc7Ddggqx+4iRHvYG0fyTNcPB/+UiFw59EE1Sg
QIyTVntVqpsb6s8XdkFxURoLxefhcMVf2kU0T04OWI3gmeavKfTcj8Z2/bjPSsqP
mgkADv9Ru6VnSK/96TW/NwxWJ32PBz6Sbl9LdwIDAQABAoIBABVh+d5uH/RxyoIZ
+PI9kx1A1NVQvfI0RK/wJKYC2YdCuw0qLOTGIY+b20z7DumU7TenIVrvhKdzrFhd
qjMoWh8RdsByMT/pAKD79JATxi64EgrK2IFJ0TfPY8L+JqHDTPT3aK8QVly5/ZW4
1YmePOOAqdiE9Lc/diaApuYVYD9SL/X7fYs1ezOB4oGXoz0rthX77zHMxcEurpK3
VgSnaq7FYTVY7GrFB+ASiAlDIyLwztz08Ijn8aG0QAZ8GFuPGSmPMXWjLwFhRZsa
Gfy5BYiA0bVSnQSPHzAnHu9HyGlsdouVPPvJB3SrvMl+BFhZiUuR8OGSob7z7hfI
hMyHbNECgYEA/gyG7sHAb5mPkhq9JkTv+LrMY5NDZKYcSlbvBlM3kd6Ib3Hxl+6T
FMq2TWIrh2+mT1C14htziHd05dF6St995Tby6CJxTj6a/2Odnfm+JcOou/ula4Sz
92nIGlGPTJXstDbHGnRCpk6AomXK02stydTyrCisOw1H+LyTG6aT0q8CgYEA4mkO
hfLJkgmJzWIhxHR901uWHz/LId0gC6FQCeaqWmRup6Bl97f0U6xokw4tw8DJOncF
yZpYRXUXhdv/FXCjtXvAhKIX5+e+3dlzPHIdekSfcY00ip/ifAS1OyVviJia+cna
eJgq8WLHxJZim9Ah93NlPyiqGPwtasub90qjZbkCgYEA35WK02o1wII3dvCNc7bM
M+3CoAglEdmXoF1uM/TdPUXKcbqoU3ymeXAGjYhOov3CMp/n0z0xqvLnMLPxmx+i
ny6DDYXyjlhO9WFogHYhwP636+mHJl8+PAsfDvqk0VRJZDmpdUDIv7DrSQGpRfRX
8f+2K4oIOlhv9RuRpI4wHwUCgYB8OjaMyn1NEsy4k2qBt4U+jhcdyEv1pbWqi/U1
qYm5FTgd44VvWVDHBGdQoMv9h28iFCJpzrU2Txv8B4y7v9Ujg+ZLIAFL7j0szt5K
wTZpWvO9Q0Qb98Q2VgL2lADRiyIlglrMJnoRfiisNfOfGKE6e+eGsxI5qUxmN5e5
JQvoiQKBgQCqgyuUBIu/Qsb3qUED/o0S5wCel43Yh/Rl+mxDinOUvJfKJSW2SyEk
+jDo0xw3Opg6ZC5Lj2V809LA/XteaIuyhRuqOopjhHIvIvrYGe+2O8q9/Mv40BYW
0BhJ/Gdseps0C6Z5mTT5Fee4YVlGZuyuNKmKTd4JmqInfBV3ncMWQg==
-----END RSA PRIVATE KEY-----`,
certificate: `-----BEGIN CERTIFICATE-----
MIIDiTCCAnGgAwIBAgIJAIb84Z5Mh31iMA0GCSqGSIb3DQEBCwUAMFsxCzAJBgNV
BAYTAlVTMQ4wDAYDVQQIDAVzdGF0ZTERMA8GA1UEBwwIbG9jYXRpb24xFTATBgNV
BAoMDG9yZ2FuaXphdGlvbjESMBAGA1UEAwwJbG9jYWxob3N0MB4XDTE3MTIxODE4
NTcyM1oXDTI3MTIxNjE4NTcyM1owWzELMAkGA1UEBhMCVVMxDjAMBgNVBAgMBXN0
YXRlMREwDwYDVQQHDAhsb2NhdGlvbjEVMBMGA1UECgwMb3JnYW5pemF0aW9uMRIw
EAYDVQQDDAlsb2NhbGhvc3QwggEiMA0GCSqGSIb3DQEBAQUAA4IBDwAwggEKAoIB
AQDgr1Cru8xjZsaSt0UBikFg0uUKCecVT7SmL437duO5aZ9f+TBZcFrdIOooPTw9
ZbPC8FIs8IJRXYi5R3u0FrWknbBeR2gN4jkidgckqtdtJmieeLoF7HuyCepX/2jz
EWRI6V6hBZmDJ4obG23Qn9n7OGkf7NdqHAzjA4vfLMtJC340Iozo33rNdiLXoMRr
z19jBHE/ATVdNFzsN2CCrH7iJEe9gbR/JM1w8H/5SIXDn0QTVKBAjJNWe1Wqmxvq
zxd2QXFRGgvF5+FwxV/aRTRPTg5YjeCZ5q8p9NyPxnb9uM9Kyo+aCQAO/1G7pWdI
r/3pNb83DFYnfY8HPpJuX0t3AgMBAAGjUDBOMB0GA1UdDgQWBBQ2/bSCHscnoV+0
d+YJxLu4XLSNIDAfBgNVHSMEGDAWgBQ2/bSCHscnoV+0d+YJxLu4XLSNIDAMBgNV
HRMEBTADAQH/MA0GCSqGSIb3DQEBCwUAA4IBAQC6p4gPwmkoDtRsP1c8IWgXFka+
Q59oe79ZK1RqDE6ZZu0rgw07rPzKr4ofW4hTxnx7PUgKOhWLq9VvwEC/9tDbD0Gw
SKknRZZOiEE3qUZbwNtHMd4UBzpzChTRC6RcwC5zT1/WICMUHxa4b8E2umJuf3Qd
5Y23sXEESx5evr49z6DLcVe2i70o2wJeWs2kaXqhCJt0X7z0rnYqjfFdvxd8dyzt
1DXmE45cLadpWHDg26DMsdchamgnqEo79YUxkH6G/Cb8ZX4igQ/CsxCDOKvccjHO
OncDtuIpK8O7OyfHP3+MBpUFG4P6Ctn7RVcZe9fQweTpfAy18G+loVzuUeOD
-----END CERTIFICATE-----`,
shouldFail: false,
},
}

View File

@@ -1,5 +1,5 @@
/*
* Minio Cloud Storage, (C) 2017 Minio, Inc.
* Minio Cloud Storage, (C) 2017, 2018 Minio, Inc.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
@@ -24,39 +24,26 @@ import (
"time"
"github.com/minio/cli"
"github.com/minio/minio/pkg/auth"
)
// Check for updates and print a notification message
func checkUpdate(mode string) {
// It's OK to ignore any errors during getUpdateInfo() here.
if older, downloadURL, err := getUpdateInfo(1*time.Second, mode); err == nil {
if updateMsg := computeUpdateMessage(downloadURL, older); updateMsg != "" {
// It's OK to ignore any errors during getUpdateInfo() here.
if updateMsg, _, currentReleaseTime, latestReleaseTime, err := getUpdateInfo(2*time.Second, mode); err == nil {
if globalInplaceUpdateDisabled {
log.Println(updateMsg)
} else {
log.Println(prepareUpdateMessage("Run `minio update`", latestReleaseTime.Sub(currentReleaseTime)))
}
}
}
func enableLoggers() {
fileLogTarget := serverConfig.Logger.GetFile()
if fileLogTarget.Enable {
err := InitFileLogger(&fileLogTarget)
fatalIf(err, "Unable to initialize file logger")
log.AddTarget(fileLogTarget)
}
consoleLogTarget := serverConfig.Logger.GetConsole()
if consoleLogTarget.Enable {
InitConsoleLogger(&consoleLogTarget)
}
log.SetConsoleTarget(consoleLogTarget)
}
func initConfig() {
// If the config file exists, migrate it to the latest version and load it.
// Otherwise create it fresh and return upon success.
if isFile(getConfigFile()) {
fatalIf(migrateConfig(), "Config migration failed.")
fatalIf(loadConfig(), "Unable to load config version: '%s'.", v19)
fatalIf(loadConfig(), "Unable to load config version: '%s'.", serverConfigVersion)
} else {
fatalIf(newConfig(), "Unable to initialize minio config for the first time.")
log.Println("Created minio configuration file successfully at " + getConfigDir())
@@ -64,23 +51,36 @@ func initConfig() {
}
func handleCommonCmdArgs(ctx *cli.Context) {
// Set configuration directory.
{
// Get configuration directory from command line argument.
configDir := ctx.String("config-dir")
if !ctx.IsSet("config-dir") && ctx.GlobalIsSet("config-dir") {
configDir = ctx.GlobalString("config-dir")
var configDir string
if ctx.IsSet("config-dir") {
configDir = ctx.String("config-dir")
} else if ctx.GlobalIsSet("config-dir") {
configDir = ctx.GlobalString("config-dir")
// The cli package does not expose the parent's "config-dir" option. The code below is a workaround.
if configDir == "" || configDir == getConfigDir() {
if ctx.Parent().GlobalIsSet("config-dir") {
configDir = ctx.Parent().GlobalString("config-dir")
}
}
} else {
// Neither the local nor the global config-dir option is provided. In this
// case, fall back to the default config directory.
configDir = getConfigDir()
if configDir == "" {
fatalIf(errors.New("empty directory"), "Configuration directory cannot be empty.")
fatalIf(errors.New("missing option"), "config-dir option must be provided.")
}
// Disallow relative paths, figure out absolute paths.
configDirAbs, err := filepath.Abs(configDir)
fatalIf(err, "Unable to fetch absolute path for config directory %s", configDir)
setConfigDir(configDirAbs)
}
if configDir == "" {
fatalIf(errors.New("empty directory"), "Configuration directory cannot be empty.")
}
// Disallow relative paths, figure out absolute paths.
configDirAbs, err := filepath.Abs(configDir)
fatalIf(err, "Unable to fetch absolute path for config directory %s", configDir)
setConfigDir(configDirAbs)
}
func handleCommonEnvVars() {
@@ -89,13 +89,10 @@ func handleCommonEnvVars() {
globalProfiler = startProfiler(profiler)
}
// Check if object cache is disabled.
globalXLObjCacheDisabled = strings.EqualFold(os.Getenv("_MINIO_CACHE"), "off")
accessKey := os.Getenv("MINIO_ACCESS_KEY")
secretKey := os.Getenv("MINIO_SECRET_KEY")
if accessKey != "" && secretKey != "" {
cred, err := createCredential(accessKey, secretKey)
cred, err := auth.CreateCredentials(accessKey, secretKey)
fatalIf(err, "Invalid access/secret Key set in environment.")
// credential Envs are set globally.
@@ -114,4 +111,51 @@ func handleCommonEnvVars() {
globalIsEnvBrowser = true
globalIsBrowserEnabled = bool(browserFlag)
}
traceFile := os.Getenv("MINIO_HTTP_TRACE")
if traceFile != "" {
var err error
globalHTTPTraceFile, err = os.OpenFile(traceFile, os.O_CREATE|os.O_WRONLY|os.O_APPEND, 0660)
fatalIf(err, "error opening file %s", traceFile)
}
globalDomainName = os.Getenv("MINIO_DOMAIN")
if globalDomainName != "" {
globalIsEnvDomainName = true
}
// In-place update is enabled by default. It is disabled only when
// MINIO_UPDATE is explicitly set to 'off'.
globalInplaceUpdateDisabled = strings.EqualFold(os.Getenv("MINIO_UPDATE"), "off")
// Validate and store the storage class env variables only for XL/Dist XL setups
if globalIsXL {
var err error
// Check for environment variables and parse into storageClass struct
if ssc := os.Getenv(standardStorageClassEnv); ssc != "" {
globalStandardStorageClass, err = parseStorageClass(ssc)
fatalIf(err, "Invalid value set in environment variable %s.", standardStorageClassEnv)
}
if rrsc := os.Getenv(reducedRedundancyStorageClassEnv); rrsc != "" {
globalRRStorageClass, err = parseStorageClass(rrsc)
fatalIf(err, "Invalid value set in environment variable %s.", reducedRedundancyStorageClassEnv)
}
// Validation is done after parsing both storage classes, since the value
// of one storage class is needed to validate the other.
if globalRRStorageClass.Scheme != "" {
err = validateParity(globalStandardStorageClass.Parity, globalRRStorageClass.Parity)
fatalIf(err, "Invalid value set in environment variable %s.", reducedRedundancyStorageClassEnv)
globalIsStorageClass = true
}
if globalStandardStorageClass.Scheme != "" {
err = validateParity(globalStandardStorageClass.Parity, globalRRStorageClass.Parity)
fatalIf(err, "Invalid value set in environment variable %s.", standardStorageClassEnv)
globalIsStorageClass = true
}
}
}
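For illustration, the storage-class variables above might be exercised in a test as below; the "EC:n" value format is an assumption drawn from Minio's storage class documentation, not from this diff.

// Illustrative only: "EC:n" (n parity disks) is an assumed value format.
func setStorageClassEnvForTest() {
	os.Setenv(standardStorageClassEnv, "EC:4")          // parity for STANDARD objects
	os.Setenv(reducedRedundancyStorageClassEnv, "EC:2") // parity for REDUCED_REDUNDANCY objects
}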

View File

@@ -20,119 +20,151 @@ import (
"errors"
"fmt"
"io/ioutil"
"reflect"
"sync"
"github.com/minio/minio/pkg/auth"
"github.com/minio/minio/pkg/quick"
"github.com/tidwall/gjson"
)
// Steps to move from version N to version N+1
// 1. Add new struct serverConfigVN+1 in config-versions.go
// 2. Set serverConfigVersion to "N+1"
// 3. Set serverConfig to serverConfigVN+1
// 4. Add new migration function (ex. func migrateVNToVN+1()) in config-migrate.go
// 5. Call migrateVNToVN+1() from migrateConfig() in config-migrate.go
// 6. Make changes in config-current_test.go for any test change
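Distilled from the migration functions later in this changeset, a skeleton of step 4 above; the version numbers and copied fields are placeholders.

// Skeleton only (placeholders throughout): each migration loads the old
// struct, returns early on a fresh setup or a non-matching on-disk
// version, copies fields forward, and saves the new version.
func migrateVNToVNPlus1() error {
	configFile := getConfigFile()
	cvN := &serverConfigV21{} // old version struct from config-versions.go
	_, err := quick.Load(configFile, cvN)
	if os.IsNotExist(err) {
		return nil // fresh setup, nothing to migrate
	} else if err != nil {
		return fmt.Errorf("Unable to load config version 21. %v", err)
	}
	if cvN.Version != "21" {
		return nil // another migration in the chain owns this version
	}
	srvConfig := &serverConfigV22{Notify: notifier{}} // new version struct
	srvConfig.Version = serverConfigVersion
	// ... copy remaining fields from cvN into srvConfig ...
	if err = quick.Save(configFile, srvConfig); err != nil {
		return fmt.Errorf("Failed to migrate config from %s to %s. %v", cvN.Version, srvConfig.Version, err)
	}
	log.Printf(configMigrateMSGTemplate, configFile, cvN.Version, srvConfig.Version)
	return nil
}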
// Config version
const v19 = "19"
const serverConfigVersion = "22"
type serverConfig = serverConfigV22
var (
// serverConfig server config.
serverConfig *serverConfigV19
serverConfigMu sync.RWMutex
// globalServerConfig server config.
globalServerConfig *serverConfig
globalServerConfigMu sync.RWMutex
)
// serverConfigV19 server configuration version '19' which is like
// version '18' except it adds support for MQTT notifications.
type serverConfigV19 struct {
sync.RWMutex
Version string `json:"version"`
// S3 API configuration.
Credential credential `json:"credential"`
Region string `json:"region"`
Browser BrowserFlag `json:"browser"`
// Additional error logging configuration.
Logger *loggers `json:"logger"`
// Notification queue configuration.
Notify *notifier `json:"notify"`
}
// GetVersion gets the current config version.
func (s *serverConfigV19) GetVersion() string {
s.RLock()
defer s.RUnlock()
func (s *serverConfig) GetVersion() string {
return s.Version
}
// SetRegion set new region.
func (s *serverConfigV19) SetRegion(region string) {
s.Lock()
defer s.Unlock()
// SetRegion sets a new region.
func (s *serverConfig) SetRegion(region string) {
// Save new region.
s.Region = region
}
// GetRegion gets the current region.
func (s *serverConfigV19) GetRegion() string {
s.RLock()
defer s.RUnlock()
func (s *serverConfig) GetRegion() string {
return s.Region
}
// SetCredentials set new credentials.
func (s *serverConfigV19) SetCredential(creds credential) {
s.Lock()
defer s.Unlock()
// SetCredential sets new credential and returns the previous credential.
func (s *serverConfig) SetCredential(creds auth.Credentials) (prevCred auth.Credentials) {
// Save previous credential.
prevCred = s.Credential
// Set updated credential.
s.Credential = creds
// Return previous credential.
return prevCred
}
// GetCredential gets the current credentials.
func (s *serverConfigV19) GetCredential() credential {
s.RLock()
defer s.RUnlock()
func (s *serverConfig) GetCredential() auth.Credentials {
return s.Credential
}
// SetBrowser sets whether the browser is enabled.
func (s *serverConfigV19) SetBrowser(b bool) {
s.Lock()
defer s.Unlock()
func (s *serverConfig) SetBrowser(b bool) {
// Set the new value.
s.Browser = BrowserFlag(b)
}
// GetCredentials get current credentials.
func (s *serverConfigV19) GetBrowser() bool {
s.RLock()
defer s.RUnlock()
func (s *serverConfig) SetStorageClass(standardClass, rrsClass storageClass) {
s.StorageClass.Standard = standardClass
s.StorageClass.RRS = rrsClass
}
// GetStorageClass reads storage class fields from current config.
// It returns the standard and reduced redundancy storage class struct
func (s *serverConfig) GetStorageClass() (storageClass, storageClass) {
return s.StorageClass.Standard, s.StorageClass.RRS
}
// GetBrowser gets whether the browser is enabled.
func (s *serverConfig) GetBrowser() bool {
return bool(s.Browser)
}
// Save config.
func (s *serverConfigV19) Save() error {
s.RLock()
defer s.RUnlock()
func (s *serverConfig) Save() error {
// Save config file.
return quick.Save(getConfigFile(), s)
}
func newServerConfigV19() *serverConfigV19 {
srvCfg := &serverConfigV19{
Version: v19,
Credential: mustGetNewCredential(),
// ConfigDiff returns a string describing how the given configuration
// object differs from this one. If the two configurations are identical,
// an empty string is returned.
func (s *serverConfig) ConfigDiff(t *serverConfig) string {
switch {
case t == nil:
return "Given configuration is empty"
case s.Credential != t.Credential:
return "Credential configuration differs"
case s.Region != t.Region:
return "Region configuration differs"
case s.Browser != t.Browser:
return "Browser configuration differs"
case s.Domain != t.Domain:
return "Domain configuration differs"
case s.StorageClass != t.StorageClass:
return "StorageClass configuration differs"
case !reflect.DeepEqual(s.Notify.AMQP, t.Notify.AMQP):
return "AMQP Notification configuration differs"
case !reflect.DeepEqual(s.Notify.NATS, t.Notify.NATS):
return "NATS Notification configuration differs"
case !reflect.DeepEqual(s.Notify.ElasticSearch, t.Notify.ElasticSearch):
return "ElasticSearch Notification configuration differs"
case !reflect.DeepEqual(s.Notify.Redis, t.Notify.Redis):
return "Redis Notification configuration differs"
case !reflect.DeepEqual(s.Notify.PostgreSQL, t.Notify.PostgreSQL):
return "PostgreSQL Notification configuration differs"
case !reflect.DeepEqual(s.Notify.Kafka, t.Notify.Kafka):
return "Kafka Notification configuration differs"
case !reflect.DeepEqual(s.Notify.Webhook, t.Notify.Webhook):
return "Webhook Notification configuration differs"
case !reflect.DeepEqual(s.Notify.MySQL, t.Notify.MySQL):
return "MySQL Notification configuration differs"
case !reflect.DeepEqual(s.Notify.MQTT, t.Notify.MQTT):
return "MQTT Notification configuration differs"
case reflect.DeepEqual(s, t):
return ""
default:
// This case will not happen unless this comparison
// function has become stale.
return "Configuration differs"
}
}
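A small usage sketch; describeConfigUpdate is a hypothetical helper, not part of the codebase.

// Illustrative only: ConfigDiff lends itself to admin-facing messages when
// a submitted configuration is applied, or to rejecting no-op updates.
func describeConfigUpdate(current, proposed *serverConfig) string {
	if diff := current.ConfigDiff(proposed); diff != "" {
		return diff
	}
	return "Submitted configuration is identical to the current one"
}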
func newServerConfig() *serverConfig {
srvCfg := &serverConfig{
Version: serverConfigVersion,
Credential: auth.MustGetNewCredentials(),
Region: globalMinioDefaultRegion,
Browser: true,
Logger: &loggers{},
Notify: &notifier{},
StorageClass: storageClassConfig{
Standard: storageClass{},
RRS: storageClass{},
},
Notify: notifier{},
}
// Enable console logger by default on a fresh run.
srvCfg.Logger.Console = NewConsoleLogger()
// Make sure to initialize notification configs.
srvCfg.Notify.AMQP = make(map[string]amqpNotify)
srvCfg.Notify.AMQP["1"] = amqpNotify{}
@@ -160,7 +192,7 @@ func newServerConfigV19() *serverConfigV19 {
// found, otherwise use default parameters
func newConfig() error {
// Initialize server config.
srvCfg := newServerConfigV19()
srvCfg := newServerConfig()
// If env is set override the credentials from config file.
if globalIsEnvCreds {
@@ -175,15 +207,23 @@ func newConfig() error {
srvCfg.SetRegion(globalServerRegion)
}
if globalIsEnvDomainName {
srvCfg.Domain = globalDomainName
}
if globalIsStorageClass {
srvCfg.SetStorageClass(globalStandardStorageClass, globalRRStorageClass)
}
// hold the mutex lock before a new config is assigned.
// Save the new config globally.
// unlock the mutex.
serverConfigMu.Lock()
serverConfig = srvCfg
serverConfigMu.Unlock()
globalServerConfigMu.Lock()
globalServerConfig = srvCfg
globalServerConfigMu.Unlock()
// Save config into file.
return serverConfig.Save()
return globalServerConfig.Save()
}
// doCheckDupJSONKeys recursively detects duplicate json keys
@@ -238,8 +278,8 @@ func checkDupJSONKeys(json string) error {
}
// getValidConfig - returns valid server configuration
func getValidConfig() (*serverConfigV19, error) {
srvCfg := &serverConfigV19{
func getValidConfig() (*serverConfig, error) {
srvCfg := &serverConfig{
Region: globalMinioDefaultRegion,
Browser: true,
}
@@ -249,8 +289,8 @@ func getValidConfig() (*serverConfigV19, error) {
return nil, err
}
if srvCfg.Version != v19 {
return nil, fmt.Errorf("configuration version mismatch. Expected: %s, Got: %s", v19, srvCfg.Version)
if srvCfg.Version != serverConfigVersion {
return nil, fmt.Errorf("configuration version mismatch. Expected: %s, Got: %s", serverConfigVersion, srvCfg.Version)
}
// Load config file json and check for duplication json keys
@@ -270,11 +310,6 @@ func getValidConfig() (*serverConfigV19, error) {
return nil, errors.New("invalid credential in config file " + configFile)
}
// Validate logger field
if err = srvCfg.Logger.Validate(); err != nil {
return nil, err
}
// Validate notify field
if err = srvCfg.Notify.Validate(); err != nil {
return nil, err
@@ -304,19 +339,33 @@ func loadConfig() error {
srvCfg.SetRegion(globalServerRegion)
}
if globalIsEnvDomainName {
srvCfg.Domain = globalDomainName
}
if globalIsStorageClass {
srvCfg.SetStorageClass(globalStandardStorageClass, globalRRStorageClass)
}
// hold the mutex lock before a new config is assigned.
serverConfigMu.Lock()
serverConfig = srvCfg
globalServerConfigMu.Lock()
globalServerConfig = srvCfg
if !globalIsEnvCreds {
globalActiveCred = serverConfig.GetCredential()
globalActiveCred = globalServerConfig.GetCredential()
}
if !globalIsEnvBrowser {
globalIsBrowserEnabled = serverConfig.GetBrowser()
globalIsBrowserEnabled = globalServerConfig.GetBrowser()
}
if !globalIsEnvRegion {
globalServerRegion = serverConfig.GetRegion()
globalServerRegion = globalServerConfig.GetRegion()
}
serverConfigMu.Unlock()
if !globalIsEnvDomainName {
globalDomainName = globalServerConfig.Domain
}
if !globalIsStorageClass {
globalStandardStorageClass, globalRRStorageClass = globalServerConfig.GetStorageClass()
}
globalServerConfigMu.Unlock()
return nil
}

View File

@@ -23,6 +23,7 @@ import (
"reflect"
"testing"
"github.com/minio/minio/pkg/auth"
"github.com/tidwall/gjson"
)
@@ -32,97 +33,74 @@ func TestServerConfig(t *testing.T) {
t.Fatalf("Init Test config failed")
}
// remove the root directory after the test ends.
defer removeAll(rootPath)
defer os.RemoveAll(rootPath)
if serverConfig.GetRegion() != globalMinioDefaultRegion {
t.Errorf("Expecting region `us-east-1` found %s", serverConfig.GetRegion())
if globalServerConfig.GetRegion() != globalMinioDefaultRegion {
t.Errorf("Expecting region `us-east-1` found %s", globalServerConfig.GetRegion())
}
// Set new region and verify.
serverConfig.SetRegion("us-west-1")
if serverConfig.GetRegion() != "us-west-1" {
t.Errorf("Expecting region `us-west-1` found %s", serverConfig.GetRegion())
globalServerConfig.SetRegion("us-west-1")
if globalServerConfig.GetRegion() != "us-west-1" {
t.Errorf("Expecting region `us-west-1` found %s", globalServerConfig.GetRegion())
}
// Set new amqp notification id.
serverConfig.Notify.SetAMQPByID("2", amqpNotify{})
savedNotifyCfg1 := serverConfig.Notify.GetAMQPByID("2")
globalServerConfig.Notify.SetAMQPByID("2", amqpNotify{})
savedNotifyCfg1 := globalServerConfig.Notify.GetAMQPByID("2")
if !reflect.DeepEqual(savedNotifyCfg1, amqpNotify{}) {
t.Errorf("Expecting AMQP config %#v found %#v", amqpNotify{}, savedNotifyCfg1)
}
// Set new elastic search notification id.
serverConfig.Notify.SetElasticSearchByID("2", elasticSearchNotify{})
savedNotifyCfg2 := serverConfig.Notify.GetElasticSearchByID("2")
globalServerConfig.Notify.SetElasticSearchByID("2", elasticSearchNotify{})
savedNotifyCfg2 := globalServerConfig.Notify.GetElasticSearchByID("2")
if !reflect.DeepEqual(savedNotifyCfg2, elasticSearchNotify{}) {
t.Errorf("Expecting Elasticsearch config %#v found %#v", elasticSearchNotify{}, savedNotifyCfg2)
}
// Set new redis notification id.
serverConfig.Notify.SetRedisByID("2", redisNotify{})
savedNotifyCfg3 := serverConfig.Notify.GetRedisByID("2")
globalServerConfig.Notify.SetRedisByID("2", redisNotify{})
savedNotifyCfg3 := globalServerConfig.Notify.GetRedisByID("2")
if !reflect.DeepEqual(savedNotifyCfg3, redisNotify{}) {
t.Errorf("Expecting Redis config %#v found %#v", redisNotify{}, savedNotifyCfg3)
}
// Set new kafka notification id.
serverConfig.Notify.SetKafkaByID("2", kafkaNotify{})
savedNotifyCfg4 := serverConfig.Notify.GetKafkaByID("2")
globalServerConfig.Notify.SetKafkaByID("2", kafkaNotify{})
savedNotifyCfg4 := globalServerConfig.Notify.GetKafkaByID("2")
if !reflect.DeepEqual(savedNotifyCfg4, kafkaNotify{}) {
t.Errorf("Expecting Kafka config %#v found %#v", kafkaNotify{}, savedNotifyCfg4)
}
// Set new Webhook notification id.
serverConfig.Notify.SetWebhookByID("2", webhookNotify{})
savedNotifyCfg5 := serverConfig.Notify.GetWebhookByID("2")
globalServerConfig.Notify.SetWebhookByID("2", webhookNotify{})
savedNotifyCfg5 := globalServerConfig.Notify.GetWebhookByID("2")
if !reflect.DeepEqual(savedNotifyCfg5, webhookNotify{}) {
t.Errorf("Expecting Webhook config %#v found %#v", webhookNotify{}, savedNotifyCfg5)
}
// Set new console logger.
// Set new MySQL notification id.
serverConfig.Notify.SetMySQLByID("2", mySQLNotify{})
savedNotifyCfg6 := serverConfig.Notify.GetMySQLByID("2")
globalServerConfig.Notify.SetMySQLByID("2", mySQLNotify{})
savedNotifyCfg6 := globalServerConfig.Notify.GetMySQLByID("2")
if !reflect.DeepEqual(savedNotifyCfg6, mySQLNotify{}) {
t.Errorf("Expecting Webhook config %#v found %#v", mySQLNotify{}, savedNotifyCfg6)
}
// Set new console logger.
// Set new MQTT notification id.
serverConfig.Notify.SetMQTTByID("2", mqttNotify{})
savedNotifyCfg7 := serverConfig.Notify.GetMQTTByID("2")
globalServerConfig.Notify.SetMQTTByID("2", mqttNotify{})
savedNotifyCfg7 := globalServerConfig.Notify.GetMQTTByID("2")
if !reflect.DeepEqual(savedNotifyCfg7, mqttNotify{}) {
t.Errorf("Expecting Webhook config %#v found %#v", mqttNotify{}, savedNotifyCfg7)
}
consoleLogger := NewConsoleLogger()
serverConfig.Logger.SetConsole(consoleLogger)
consoleCfg := serverConfig.Logger.GetConsole()
if !reflect.DeepEqual(consoleCfg, consoleLogger) {
t.Errorf("Expecting console logger config %#v found %#v", consoleLogger, consoleCfg)
}
// Set new console logger.
consoleLogger.Enable = false
serverConfig.Logger.SetConsole(consoleLogger)
// Set new file logger.
fileLogger := NewFileLogger("test-log-file")
serverConfig.Logger.SetFile(fileLogger)
fileCfg := serverConfig.Logger.GetFile()
if !reflect.DeepEqual(fileCfg, fileLogger) {
t.Errorf("Expecting file logger config %#v found %#v", fileLogger, fileCfg)
}
// Set new file logger.
fileLogger.Enable = false
serverConfig.Logger.SetFile(fileLogger)
// Match version.
if serverConfig.GetVersion() != v19 {
t.Errorf("Expecting version %s found %s", serverConfig.GetVersion(), v19)
if globalServerConfig.GetVersion() != serverConfigVersion {
t.Errorf("Expecting version %s found %s", globalServerConfig.GetVersion(), serverConfigVersion)
}
// Attempt to save.
if err := serverConfig.Save(); err != nil {
if err := globalServerConfig.Save(); err != nil {
t.Fatalf("Unable to save updated config file %s", err)
}
@@ -149,6 +127,9 @@ func TestServerConfigWithEnvs(t *testing.T) {
os.Setenv("MINIO_REGION", "us-west-1")
defer os.Unsetenv("MINIO_REGION")
os.Setenv("MINIO_DOMAIN", "domain.com")
defer os.Unsetenv("MINIO_DOMAIN")
defer resetGlobalIsEnvs()
// Get test root.
@@ -166,20 +147,20 @@ func TestServerConfigWithEnvs(t *testing.T) {
initConfig()
// remove the root directory after the test ends.
defer removeAll(rootPath)
defer os.RemoveAll(rootPath)
// Check if serverConfig has browser disabled via env.
if serverConfig.GetBrowser() {
t.Errorf("Expecting browser is set to false found %v", serverConfig.GetBrowser())
if globalServerConfig.GetBrowser() {
t.Errorf("Expecting browser is set to false found %v", globalServerConfig.GetBrowser())
}
// Check if serverConfig has the region from env.
if serverConfig.GetRegion() != "us-west-1" {
t.Errorf("Expecting region to be \"us-west-1\" found %v", serverConfig.GetRegion())
if globalServerConfig.GetRegion() != "us-west-1" {
t.Errorf("Expecting region to be \"us-west-1\" found %v", globalServerConfig.GetRegion())
}
// Check if serverConfig has the credentials from env.
cred := serverConfig.GetCredential()
cred := globalServerConfig.GetCredential()
if cred.AccessKey != "minio" {
t.Errorf("Expecting access key to be `minio` found %s", cred.AccessKey)
@@ -189,6 +170,9 @@ func TestServerConfigWithEnvs(t *testing.T) {
t.Errorf("Expecting access key to be `minio123` found %s", cred.SecretKey)
}
if globalServerConfig.Domain != "domain.com" {
t.Errorf("Expecting Domain to be `domain.com` found " + globalServerConfig.Domain)
}
}
func TestCheckDupJSONKeys(t *testing.T) {
@@ -227,11 +211,11 @@ func TestValidateConfig(t *testing.T) {
}
// remove the root directory after the test ends.
defer removeAll(rootPath)
defer os.RemoveAll(rootPath)
configPath := filepath.Join(rootPath, minioConfigFile)
v := v19
v := serverConfigVersion
testCases := []struct {
configData string
@@ -267,58 +251,55 @@ func TestValidateConfig(t *testing.T) {
// Test 10 - duplicated json keys
{`{"version": "` + v + `", "browser": "on", "browser": "on", "region":"us-east-1", "credential" : {"accessKey":"minio", "secretKey":"minio123"}}`, false},
// Test 11 - empty filename field in File
{`{"version": "` + v + `", "credential": { "accessKey": "minio", "secretKey": "minio123" }, "region": "us-east-1", "browser": "on", "logger": { "file": { "enable": true, "filename": "" } }}`, false},
// Test 12 - Test AMQP
// Test 11 - Test AMQP
{`{"version": "` + v + `", "credential": { "accessKey": "minio", "secretKey": "minio123" }, "region": "us-east-1", "browser": "on", "notify": { "amqp": { "1": { "enable": true, "url": "", "exchange": "", "routingKey": "", "exchangeType": "", "mandatory": false, "immediate": false, "durable": false, "internal": false, "noWait": false, "autoDeleted": false }}}}`, false},
// Test 13 - Test NATS
// Test 12 - Test NATS
{`{"version": "` + v + `", "credential": { "accessKey": "minio", "secretKey": "minio123" }, "region": "us-east-1", "browser": "on", "notify": { "nats": { "1": { "enable": true, "address": "", "subject": "", "username": "", "password": "", "token": "", "secure": false, "pingInterval": 0, "streaming": { "enable": false, "clusterID": "", "clientID": "", "async": false, "maxPubAcksInflight": 0 } } }}}`, false},
// Test 14 - Test ElasticSearch
// Test 13 - Test ElasticSearch
{`{"version": "` + v + `", "credential": { "accessKey": "minio", "secretKey": "minio123" }, "region": "us-east-1", "browser": "on", "notify": { "elasticsearch": { "1": { "enable": true, "url": "", "index": "" } }}}`, false},
// Test 15 - Test Redis
// Test 14 - Test Redis
{`{"version": "` + v + `", "credential": { "accessKey": "minio", "secretKey": "minio123" }, "region": "us-east-1", "browser": "on", "notify": { "redis": { "1": { "enable": true, "address": "", "password": "", "key": "" } }}}`, false},
// Test 16 - Test PostgreSQL
// Test 15 - Test PostgreSQL
{`{"version": "` + v + `", "credential": { "accessKey": "minio", "secretKey": "minio123" }, "region": "us-east-1", "browser": "on", "notify": { "postgresql": { "1": { "enable": true, "connectionString": "", "table": "", "host": "", "port": "", "user": "", "password": "", "database": "" }}}}`, false},
// Test 17 - Test Kafka
// Test 16 - Test Kafka
{`{"version": "` + v + `", "credential": { "accessKey": "minio", "secretKey": "minio123" }, "region": "us-east-1", "browser": "on", "notify": { "kafka": { "1": { "enable": true, "brokers": null, "topic": "" } }}}`, false},
// Test 18 - Test Webhook
// Test 17 - Test Webhook
{`{"version": "` + v + `", "credential": { "accessKey": "minio", "secretKey": "minio123" }, "region": "us-east-1", "browser": "on", "notify": { "webhook": { "1": { "enable": true, "endpoint": "" } }}}`, false},
// Test 20 - Test MySQL
// Test 18 - Test MySQL
{`{"version": "` + v + `", "credential": { "accessKey": "minio", "secretKey": "minio123" }, "region": "us-east-1", "browser": "on", "notify": { "mysql": { "1": { "enable": true, "dsnString": "", "table": "", "host": "", "port": "", "user": "", "password": "", "database": "" }}}}`, false},
// Test 21 - Test Format for MySQL
// Test 19 - Test Format for MySQL
{`{"version": "` + v + `", "credential": { "accessKey": "minio", "secretKey": "minio123" }, "region": "us-east-1", "browser": "on", "notify": { "mysql": { "1": { "enable": true, "dsnString": "", "format": "invalid", "table": "xxx", "host": "10.0.0.1", "port": "3306", "user": "abc", "password": "pqr", "database": "test1" }}}}`, false},
// Test 22 - Test valid Format for MySQL
// Test 20 - Test valid Format for MySQL
{`{"version": "` + v + `", "credential": { "accessKey": "minio", "secretKey": "minio123" }, "region": "us-east-1", "browser": "on", "notify": { "mysql": { "1": { "enable": true, "dsnString": "", "format": "namespace", "table": "xxx", "host": "10.0.0.1", "port": "3306", "user": "abc", "password": "pqr", "database": "test1" }}}}`, true},
// Test 23 - Test Format for PostgreSQL
// Test 21 - Test Format for PostgreSQL
{`{"version": "` + v + `", "credential": { "accessKey": "minio", "secretKey": "minio123" }, "region": "us-east-1", "browser": "on", "notify": { "postgresql": { "1": { "enable": true, "connectionString": "", "format": "invalid", "table": "xxx", "host": "myhost", "port": "5432", "user": "abc", "password": "pqr", "database": "test1" }}}}`, false},
// Test 24 - Test valid Format for PostgreSQL
// Test 22 - Test valid Format for PostgreSQL
{`{"version": "` + v + `", "credential": { "accessKey": "minio", "secretKey": "minio123" }, "region": "us-east-1", "browser": "on", "notify": { "postgresql": { "1": { "enable": true, "connectionString": "", "format": "namespace", "table": "xxx", "host": "myhost", "port": "5432", "user": "abc", "password": "pqr", "database": "test1" }}}}`, true},
// Test 25 - Test Format for ElasticSearch
// Test 23 - Test Format for ElasticSearch
{`{"version": "` + v + `", "credential": { "accessKey": "minio", "secretKey": "minio123" }, "region": "us-east-1", "browser": "on", "notify": { "elasticsearch": { "1": { "enable": true, "format": "invalid", "url": "example.com", "index": "myindex" } }}}`, false},
// Test 26 - Test valid Format for ElasticSearch
// Test 24 - Test valid Format for ElasticSearch
{`{"version": "` + v + `", "credential": { "accessKey": "minio", "secretKey": "minio123" }, "region": "us-east-1", "browser": "on", "notify": { "elasticsearch": { "1": { "enable": true, "format": "namespace", "url": "example.com", "index": "myindex" } }}}`, true},
// Test 27 - Test Format for Redis
// Test 25 - Test Format for Redis
{`{"version": "` + v + `", "credential": { "accessKey": "minio", "secretKey": "minio123" }, "region": "us-east-1", "browser": "on", "notify": { "redis": { "1": { "enable": true, "format": "invalid", "address": "example.com:80", "password": "xxx", "key": "key1" } }}}`, false},
// Test 28 - Test valid Format for Redis
// Test 26 - Test valid Format for Redis
{`{"version": "` + v + `", "credential": { "accessKey": "minio", "secretKey": "minio123" }, "region": "us-east-1", "browser": "on", "notify": { "redis": { "1": { "enable": true, "format": "namespace", "address": "example.com:80", "password": "xxx", "key": "key1" } }}}`, true},
// Test 29 - Test MQTT
// Test 27 - Test MQTT
{`{"version": "` + v + `", "credential": { "accessKey": "minio", "secretKey": "minio123" }, "region": "us-east-1", "browser": "on", "notify": { "mqtt": { "1": { "enable": true, "broker": "", "topic": "", "qos": 0, "clientId": "", "username": "", "password": ""}}}}`, false},
}
@@ -331,8 +312,97 @@ func TestValidateConfig(t *testing.T) {
t.Errorf("Test %d, should pass but it failed with err = %v", i+1, verr)
}
if !testCase.shouldPass && verr == nil {
t.Errorf("Test %d, should fail but it succeed.", i+1)
t.Errorf("Test %d, should fail but it succeeded.", i+1)
}
}
}
func TestConfigDiff(t *testing.T) {
testCases := []struct {
s, t *serverConfig
diff string
}{
// 1
{&serverConfig{}, nil, "Given configuration is empty"},
// 2
{
&serverConfig{Credential: auth.Credentials{"u1", "p1"}},
&serverConfig{Credential: auth.Credentials{"u1", "p2"}},
"Credential configuration differs",
},
// 3
{&serverConfig{Region: "us-east-1"}, &serverConfig{Region: "us-west-1"}, "Region configuration differs"},
// 4
{&serverConfig{Browser: false}, &serverConfig{Browser: true}, "Browser configuration differs"},
// 5
{&serverConfig{Domain: "domain1"}, &serverConfig{Domain: "domain2"}, "Domain configuration differs"},
// 6
{
&serverConfig{StorageClass: storageClassConfig{storageClass{"1", 8}, storageClass{"2", 6}}},
&serverConfig{StorageClass: storageClassConfig{storageClass{"1", 8}, storageClass{"2", 4}}},
"StorageClass configuration differs",
},
// 7
{
&serverConfig{Notify: notifier{AMQP: map[string]amqpNotify{"1": {Enable: true}}}},
&serverConfig{Notify: notifier{AMQP: map[string]amqpNotify{"1": {Enable: false}}}},
"AMQP Notification configuration differs",
},
// 8
{
&serverConfig{Notify: notifier{NATS: map[string]natsNotify{"1": {Enable: true}}}},
&serverConfig{Notify: notifier{NATS: map[string]natsNotify{"1": {Enable: false}}}},
"NATS Notification configuration differs",
},
// 9
{
&serverConfig{Notify: notifier{ElasticSearch: map[string]elasticSearchNotify{"1": {Enable: true}}}},
&serverConfig{Notify: notifier{ElasticSearch: map[string]elasticSearchNotify{"1": {Enable: false}}}},
"ElasticSearch Notification configuration differs",
},
// 10
{
&serverConfig{Notify: notifier{Redis: map[string]redisNotify{"1": {Enable: true}}}},
&serverConfig{Notify: notifier{Redis: map[string]redisNotify{"1": {Enable: false}}}},
"Redis Notification configuration differs",
},
// 11
{
&serverConfig{Notify: notifier{PostgreSQL: map[string]postgreSQLNotify{"1": {Enable: true}}}},
&serverConfig{Notify: notifier{PostgreSQL: map[string]postgreSQLNotify{"1": {Enable: false}}}},
"PostgreSQL Notification configuration differs",
},
// 12
{
&serverConfig{Notify: notifier{Kafka: map[string]kafkaNotify{"1": {Enable: true}}}},
&serverConfig{Notify: notifier{Kafka: map[string]kafkaNotify{"1": {Enable: false}}}},
"Kafka Notification configuration differs",
},
// 13
{
&serverConfig{Notify: notifier{Webhook: map[string]webhookNotify{"1": {Enable: true}}}},
&serverConfig{Notify: notifier{Webhook: map[string]webhookNotify{"1": {Enable: false}}}},
"Webhook Notification configuration differs",
},
// 14
{
&serverConfig{Notify: notifier{MySQL: map[string]mySQLNotify{"1": {Enable: true}}}},
&serverConfig{Notify: notifier{MySQL: map[string]mySQLNotify{"1": {Enable: false}}}},
"MySQL Notification configuration differs",
},
// 15
{
&serverConfig{Notify: notifier{MQTT: map[string]mqttNotify{"1": {Enable: true}}}},
&serverConfig{Notify: notifier{MQTT: map[string]mqttNotify{"1": {Enable: false}}}},
"MQTT Notification configuration differs",
},
}
for i, testCase := range testCases {
got := testCase.s.ConfigDiff(testCase.t)
if got != testCase.diff {
t.Errorf("Test %d: got %s expected %s", i+1, got, testCase.diff)
}
}
}

View File

@@ -1,5 +1,5 @@
/*
* Minio Cloud Storage, (C) 2015, 2016, 2017 Minio, Inc.
* Minio Cloud Storage, (C) 2015, 2016, 2017, 2018 Minio, Inc.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
@@ -17,6 +17,7 @@
package cmd
import (
"os"
"path/filepath"
"sync"
@@ -76,7 +77,7 @@ func (config *ConfigDir) GetCADir() string {
// Create - creates configuration directory tree.
func (config *ConfigDir) Create() error {
return mkdirAll(config.GetCADir(), 0700)
return os.MkdirAll(config.GetCADir(), 0700)
}
// GetMinioConfigFile - returns absolute path of config.json file.
@@ -94,14 +95,16 @@ func (config *ConfigDir) GetPrivateKeyFile() string {
return filepath.Join(config.getCertsDir(), privateKeyFile)
}
func mustGetDefaultConfigDir() string {
func getDefaultConfigDir() string {
homeDir, err := homedir.Dir()
fatalIf(err, "Unable to get home directory.")
if err != nil {
return ""
}
return filepath.Join(homeDir, defaultMinioConfigDir)
}
var configDir = &ConfigDir{dir: mustGetDefaultConfigDir()}
var configDir = &ConfigDir{dir: getDefaultConfigDir()}
func setConfigDir(dir string) {
configDir.Set(dir)

View File

@@ -21,6 +21,7 @@ import (
"os"
"path/filepath"
"github.com/minio/minio/pkg/auth"
"github.com/minio/minio/pkg/quick"
)
@@ -142,12 +143,26 @@ func migrateConfig() error {
}
fallthrough
case "18":
// Migrate version '17' to '18'.
// Migrate version '18' to '19'.
if err = migrateV18ToV19(); err != nil {
return err
}
fallthrough
case v19:
case "19":
if err = migrateV19ToV20(); err != nil {
return err
}
fallthrough
case "20":
if err = migrateV20ToV21(); err != nil {
return err
}
fallthrough
case "21":
if err = migrateV21ToV22(); err != nil {
return err
}
case serverConfigVersion:
// No migration needed. This always points to the current version.
err = nil
}
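A minimal model of the fallthrough chain above, with print statements standing in for the real migrations.

// Illustrative only: Go's fallthrough makes a config at any older version
// flow through every intermediate migration in a single pass.
func migrateFrom(version string) {
	switch version {
	case "20":
		fmt.Println("migrate 20 -> 21")
		fallthrough
	case "21":
		fmt.Println("migrate 21 -> 22")
	case "22":
		// Already at the current version, nothing to do.
	}
}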
@@ -169,7 +184,7 @@ func purgeV1() error {
return fmt.Errorf("unrecognized config version %s", cv1.Version)
}
removeAll(configFile)
os.RemoveAll(configFile)
log.Println("Removed unsupported config version 1.")
return nil
}
@@ -190,7 +205,7 @@ func migrateV2ToV3() error {
return nil
}
cred, err := createCredential(cv2.Credentials.AccessKey, cv2.Credentials.SecretKey)
cred, err := auth.CreateCredentials(cv2.Credentials.AccessKey, cv2.Credentials.SecretKey)
if err != nil {
return fmt.Errorf("Invalid credential in V2 configuration file. %v", err)
}
@@ -1132,14 +1147,6 @@ func migrateV15ToV16() error {
// Load browser config from existing config in the file.
srvConfig.Browser = cv15.Browser
// Migrate console and file fields
if cv15.Logger.Console.Enable {
srvConfig.Logger.Console = NewConsoleLogger()
}
if cv15.Logger.File.Enable {
srvConfig.Logger.File = NewFileLogger(cv15.Logger.File.Filename)
}
if err = quick.Save(configFile, srvConfig); err != nil {
return fmt.Errorf("Failed to migrate config from %s to %s. %v", cv15.Version, srvConfig.Version, err)
}
@@ -1478,3 +1485,325 @@ func migrateV18ToV19() error {
log.Printf(configMigrateMSGTemplate, configFile, cv18.Version, srvConfig.Version)
return nil
}
func migrateV19ToV20() error {
configFile := getConfigFile()
cv19 := &serverConfigV19{}
_, err := quick.Load(configFile, cv19)
if os.IsNotExist(err) {
return nil
} else if err != nil {
return fmt.Errorf("Unable to load config version 18. %v", err)
}
if cv19.Version != "19" {
return nil
}
// Copy over fields from V19 into V20 config struct
srvConfig := &serverConfigV20{
Logger: &loggers{},
Notify: &notifier{},
}
srvConfig.Version = "20"
srvConfig.Credential = cv19.Credential
srvConfig.Region = cv19.Region
if srvConfig.Region == "" {
// Region needs to be set for AWS Signature Version 4.
srvConfig.Region = globalMinioDefaultRegion
}
srvConfig.Logger.Console = cv19.Logger.Console
srvConfig.Logger.File = cv19.Logger.File
// check and set notifiers config
if len(cv19.Notify.AMQP) == 0 {
srvConfig.Notify.AMQP = make(map[string]amqpNotify)
srvConfig.Notify.AMQP["1"] = amqpNotify{}
} else {
// New deliveryMode parameter is added for AMQP,
// default value is already 0, so nothing to
// explicitly migrate here.
srvConfig.Notify.AMQP = cv19.Notify.AMQP
}
if len(cv19.Notify.ElasticSearch) == 0 {
srvConfig.Notify.ElasticSearch = make(map[string]elasticSearchNotify)
srvConfig.Notify.ElasticSearch["1"] = elasticSearchNotify{
Format: formatNamespace,
}
} else {
srvConfig.Notify.ElasticSearch = cv19.Notify.ElasticSearch
}
if len(cv19.Notify.Redis) == 0 {
srvConfig.Notify.Redis = make(map[string]redisNotify)
srvConfig.Notify.Redis["1"] = redisNotify{
Format: formatNamespace,
}
} else {
srvConfig.Notify.Redis = cv19.Notify.Redis
}
if len(cv19.Notify.PostgreSQL) == 0 {
srvConfig.Notify.PostgreSQL = make(map[string]postgreSQLNotify)
srvConfig.Notify.PostgreSQL["1"] = postgreSQLNotify{
Format: formatNamespace,
}
} else {
srvConfig.Notify.PostgreSQL = cv19.Notify.PostgreSQL
}
if len(cv19.Notify.Kafka) == 0 {
srvConfig.Notify.Kafka = make(map[string]kafkaNotify)
srvConfig.Notify.Kafka["1"] = kafkaNotify{}
} else {
srvConfig.Notify.Kafka = cv19.Notify.Kafka
}
if len(cv19.Notify.NATS) == 0 {
srvConfig.Notify.NATS = make(map[string]natsNotify)
srvConfig.Notify.NATS["1"] = natsNotify{}
} else {
srvConfig.Notify.NATS = cv19.Notify.NATS
}
if len(cv19.Notify.Webhook) == 0 {
srvConfig.Notify.Webhook = make(map[string]webhookNotify)
srvConfig.Notify.Webhook["1"] = webhookNotify{}
} else {
srvConfig.Notify.Webhook = cv19.Notify.Webhook
}
if len(cv19.Notify.MySQL) == 0 {
srvConfig.Notify.MySQL = make(map[string]mySQLNotify)
srvConfig.Notify.MySQL["1"] = mySQLNotify{
Format: formatNamespace,
}
} else {
srvConfig.Notify.MySQL = cv19.Notify.MySQL
}
if len(cv19.Notify.MQTT) == 0 {
srvConfig.Notify.MQTT = make(map[string]mqttNotify)
srvConfig.Notify.MQTT["1"] = mqttNotify{}
} else {
srvConfig.Notify.MQTT = cv19.Notify.MQTT
}
// Load browser config from existing config in the file.
srvConfig.Browser = cv19.Browser
if err = quick.Save(configFile, srvConfig); err != nil {
return fmt.Errorf("Failed to migrate config from %s to %s. %v", cv19.Version, srvConfig.Version, err)
}
log.Printf(configMigrateMSGTemplate, configFile, cv19.Version, srvConfig.Version)
return nil
}
func migrateV20ToV21() error {
configFile := getConfigFile()
cv20 := &serverConfigV20{}
_, err := quick.Load(configFile, cv20)
if os.IsNotExist(err) {
return nil
} else if err != nil {
return fmt.Errorf("Unable to load config version 20. %v", err)
}
if cv20.Version != "20" {
return nil
}
// Copy over fields from V20 into V21 config struct
srvConfig := &serverConfigV21{
Notify: &notifier{},
}
srvConfig.Version = "21"
srvConfig.Credential = cv20.Credential
srvConfig.Region = cv20.Region
if srvConfig.Region == "" {
// Region needs to be set for AWS Signature Version 4.
srvConfig.Region = globalMinioDefaultRegion
}
// check and set notifiers config
if len(cv20.Notify.AMQP) == 0 {
srvConfig.Notify.AMQP = make(map[string]amqpNotify)
srvConfig.Notify.AMQP["1"] = amqpNotify{}
} else {
// New deliveryMode parameter is added for AMQP,
// default value is already 0, so nothing to
// explicitly migrate here.
srvConfig.Notify.AMQP = cv20.Notify.AMQP
}
if len(cv20.Notify.ElasticSearch) == 0 {
srvConfig.Notify.ElasticSearch = make(map[string]elasticSearchNotify)
srvConfig.Notify.ElasticSearch["1"] = elasticSearchNotify{
Format: formatNamespace,
}
} else {
srvConfig.Notify.ElasticSearch = cv20.Notify.ElasticSearch
}
if len(cv20.Notify.Redis) == 0 {
srvConfig.Notify.Redis = make(map[string]redisNotify)
srvConfig.Notify.Redis["1"] = redisNotify{
Format: formatNamespace,
}
} else {
srvConfig.Notify.Redis = cv20.Notify.Redis
}
if len(cv20.Notify.PostgreSQL) == 0 {
srvConfig.Notify.PostgreSQL = make(map[string]postgreSQLNotify)
srvConfig.Notify.PostgreSQL["1"] = postgreSQLNotify{
Format: formatNamespace,
}
} else {
srvConfig.Notify.PostgreSQL = cv20.Notify.PostgreSQL
}
if len(cv20.Notify.Kafka) == 0 {
srvConfig.Notify.Kafka = make(map[string]kafkaNotify)
srvConfig.Notify.Kafka["1"] = kafkaNotify{}
} else {
srvConfig.Notify.Kafka = cv20.Notify.Kafka
}
if len(cv20.Notify.NATS) == 0 {
srvConfig.Notify.NATS = make(map[string]natsNotify)
srvConfig.Notify.NATS["1"] = natsNotify{}
} else {
srvConfig.Notify.NATS = cv20.Notify.NATS
}
if len(cv20.Notify.Webhook) == 0 {
srvConfig.Notify.Webhook = make(map[string]webhookNotify)
srvConfig.Notify.Webhook["1"] = webhookNotify{}
} else {
srvConfig.Notify.Webhook = cv20.Notify.Webhook
}
if len(cv20.Notify.MySQL) == 0 {
srvConfig.Notify.MySQL = make(map[string]mySQLNotify)
srvConfig.Notify.MySQL["1"] = mySQLNotify{
Format: formatNamespace,
}
} else {
srvConfig.Notify.MySQL = cv20.Notify.MySQL
}
if len(cv20.Notify.MQTT) == 0 {
srvConfig.Notify.MQTT = make(map[string]mqttNotify)
srvConfig.Notify.MQTT["1"] = mqttNotify{}
} else {
srvConfig.Notify.MQTT = cv20.Notify.MQTT
}
// Load browser config from existing config in the file.
srvConfig.Browser = cv20.Browser
// Load domain config from existing config in the file.
srvConfig.Domain = cv20.Domain
if err = quick.Save(configFile, srvConfig); err != nil {
return fmt.Errorf("Failed to migrate config from %s to %s. %v", cv20.Version, srvConfig.Version, err)
}
log.Printf(configMigrateMSGTemplate, configFile, cv20.Version, srvConfig.Version)
return nil
}
func migrateV21ToV22() error {
configFile := getConfigFile()
cv21 := &serverConfigV21{}
_, err := quick.Load(configFile, cv21)
if os.IsNotExist(err) {
return nil
} else if err != nil {
return fmt.Errorf("Unable to load config version 21. %v", err)
}
if cv21.Version != "21" {
return nil
}
// Copy over fields from V21 into V22 config struct
srvConfig := &serverConfigV22{
Notify: notifier{},
}
srvConfig.Version = serverConfigVersion
srvConfig.Credential = cv21.Credential
srvConfig.Region = cv21.Region
if srvConfig.Region == "" {
// Region needs to be set for AWS Signature Version 4.
srvConfig.Region = globalMinioDefaultRegion
}
// check and set notifiers config
if len(cv21.Notify.AMQP) == 0 {
srvConfig.Notify.AMQP = make(map[string]amqpNotify)
srvConfig.Notify.AMQP["1"] = amqpNotify{}
} else {
// New deliveryMode parameter is added for AMQP,
// default value is already 0, so nothing to
// explicitly migrate here.
srvConfig.Notify.AMQP = cv21.Notify.AMQP
}
if len(cv21.Notify.ElasticSearch) == 0 {
srvConfig.Notify.ElasticSearch = make(map[string]elasticSearchNotify)
srvConfig.Notify.ElasticSearch["1"] = elasticSearchNotify{
Format: formatNamespace,
}
} else {
srvConfig.Notify.ElasticSearch = cv21.Notify.ElasticSearch
}
if len(cv21.Notify.Redis) == 0 {
srvConfig.Notify.Redis = make(map[string]redisNotify)
srvConfig.Notify.Redis["1"] = redisNotify{
Format: formatNamespace,
}
} else {
srvConfig.Notify.Redis = cv21.Notify.Redis
}
if len(cv21.Notify.PostgreSQL) == 0 {
srvConfig.Notify.PostgreSQL = make(map[string]postgreSQLNotify)
srvConfig.Notify.PostgreSQL["1"] = postgreSQLNotify{
Format: formatNamespace,
}
} else {
srvConfig.Notify.PostgreSQL = cv21.Notify.PostgreSQL
}
if len(cv21.Notify.Kafka) == 0 {
srvConfig.Notify.Kafka = make(map[string]kafkaNotify)
srvConfig.Notify.Kafka["1"] = kafkaNotify{}
} else {
srvConfig.Notify.Kafka = cv21.Notify.Kafka
}
if len(cv21.Notify.NATS) == 0 {
srvConfig.Notify.NATS = make(map[string]natsNotify)
srvConfig.Notify.NATS["1"] = natsNotify{}
} else {
srvConfig.Notify.NATS = cv21.Notify.NATS
}
if len(cv21.Notify.Webhook) == 0 {
srvConfig.Notify.Webhook = make(map[string]webhookNotify)
srvConfig.Notify.Webhook["1"] = webhookNotify{}
} else {
srvConfig.Notify.Webhook = cv21.Notify.Webhook
}
if len(cv21.Notify.MySQL) == 0 {
srvConfig.Notify.MySQL = make(map[string]mySQLNotify)
srvConfig.Notify.MySQL["1"] = mySQLNotify{
Format: formatNamespace,
}
} else {
srvConfig.Notify.MySQL = cv21.Notify.MySQL
}
if len(cv21.Notify.MQTT) == 0 {
srvConfig.Notify.MQTT = make(map[string]mqttNotify)
srvConfig.Notify.MQTT["1"] = mqttNotify{}
} else {
srvConfig.Notify.MQTT = cv21.Notify.MQTT
}
// Load browser config from existing config in the file.
srvConfig.Browser = cv21.Browser
// Load domain config from existing config in the file.
srvConfig.Domain = cv21.Domain
if err = quick.Save(configFile, srvConfig); err != nil {
return fmt.Errorf("Failed to migrate config from %s to %s. %v", cv21.Version, srvConfig.Version, err)
}
log.Printf(configMigrateMSGTemplate, configFile, cv21.Version, srvConfig.Version)
return nil
}
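Each of the migration steps above follows the same load/guard/copy/save pattern and is a no-op when the on-disk version does not match its source version, so the steps compose into a single upgrade chain. A minimal sketch of such a chain (the runner name below is illustrative, not part of this diff):
func migrateConfigToLatest() error {
	// Run each step in order; a step whose source version does not
	// match the on-disk config returns nil immediately, so any older
	// config is walked forward one version at a time.
	for _, step := range []func() error{
		migrateV19ToV20,
		migrateV20ToV21,
		migrateV21ToV22,
	} {
		if err := step(); err != nil {
			return err
		}
	}
	return nil
}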

View File

@@ -30,7 +30,7 @@ func TestServerConfigMigrateV1(t *testing.T) {
t.Fatalf("Init Test config failed")
}
// remove the root directory after the test ends.
- defer removeAll(rootPath)
+ defer os.RemoveAll(rootPath)
setConfigDir(rootPath)
@@ -46,7 +46,7 @@ func TestServerConfigMigrateV1(t *testing.T) {
t.Fatal("Unexpected error: ", err)
}
// Check if config v1 is removed from filesystem
- if _, err := osStat(configPath); err == nil || !os.IsNotExist(err) {
+ if _, err := os.Stat(configPath); err == nil || !os.IsNotExist(err) {
t.Fatal("Config V1 file is not purged")
}
@@ -64,7 +64,7 @@ func TestServerConfigMigrateInexistentConfig(t *testing.T) {
t.Fatalf("Init Test config failed")
}
// remove the root directory after the test ends.
- defer removeAll(rootPath)
+ defer os.RemoveAll(rootPath)
setConfigDir(rootPath)
configPath := rootPath + "/" + minioConfigFile
@@ -125,17 +125,22 @@ func TestServerConfigMigrateInexistentConfig(t *testing.T) {
if err := migrateV18ToV19(); err != nil {
t.Fatal("migrate v18 to v19 should succeed when no config file is found")
}
if err := migrateV19ToV20(); err != nil {
t.Fatal("migrate v19 to v20 should succeed when no config file is found")
}
if err := migrateV20ToV21(); err != nil {
t.Fatal("migrate v20 to v21 should succeed when no config file is found")
}
}
- // Test if a config migration from v2 to v19 is successfully done
- func TestServerConfigMigrateV2toV19(t *testing.T) {
+ // Test if a config migration from v2 to v21 is successfully done
+ func TestServerConfigMigrateV2toV21(t *testing.T) {
rootPath, err := newTestConfig(globalMinioDefaultRegion)
if err != nil {
t.Fatalf("Init Test config failed")
}
// remove the root directory after the test ends.
- defer removeAll(rootPath)
+ defer os.RemoveAll(rootPath)
setConfigDir(rootPath)
configPath := rootPath + "/" + minioConfigFile
@@ -169,17 +174,17 @@ func TestServerConfigMigrateV2toV19(t *testing.T) {
}
// Check the version number in the upgraded config file
- expectedVersion := v19
- if serverConfig.Version != expectedVersion {
- t.Fatalf("Expect version "+expectedVersion+", found: %v", serverConfig.Version)
+ expectedVersion := serverConfigVersion
+ if globalServerConfig.Version != expectedVersion {
+ t.Fatalf("Expect version "+expectedVersion+", found: %v", globalServerConfig.Version)
}
// Check if accessKey and secretKey are not altered during migration
- if serverConfig.Credential.AccessKey != accessKey {
- t.Fatalf("Access key lost during migration, expected: %v, found:%v", accessKey, serverConfig.Credential.AccessKey)
+ if globalServerConfig.Credential.AccessKey != accessKey {
+ t.Fatalf("Access key lost during migration, expected: %v, found:%v", accessKey, globalServerConfig.Credential.AccessKey)
}
- if serverConfig.Credential.SecretKey != secretKey {
- t.Fatalf("Secret key lost during migration, expected: %v, found: %v", secretKey, serverConfig.Credential.SecretKey)
+ if globalServerConfig.Credential.SecretKey != secretKey {
+ t.Fatalf("Secret key lost during migration, expected: %v, found: %v", secretKey, globalServerConfig.Credential.SecretKey)
}
}
@@ -190,7 +195,7 @@ func TestServerConfigMigrateFaultyConfig(t *testing.T) {
t.Fatalf("Init Test config failed")
}
// remove the root directory after the test ends.
- defer removeAll(rootPath)
+ defer os.RemoveAll(rootPath)
setConfigDir(rootPath)
configPath := rootPath + "/" + minioConfigFile
@@ -252,6 +257,12 @@ func TestServerConfigMigrateFaultyConfig(t *testing.T) {
if err := migrateV18ToV19(); err == nil {
t.Fatal("migrateConfigV18ToV19() should fail with a corrupted json")
}
if err := migrateV19ToV20(); err == nil {
t.Fatal("migrateConfigV19ToV20() should fail with a corrupted json")
}
if err := migrateV20ToV21(); err == nil {
t.Fatal("migrateConfigV20ToV21() should fail with a corrupted json")
}
}
// Test if all migrate code returns error with corrupted config files
@@ -261,7 +272,7 @@ func TestServerConfigMigrateCorruptedConfig(t *testing.T) {
t.Fatalf("Init Test config failed")
}
// remove the root directory after the test ends.
- defer removeAll(rootPath)
+ defer os.RemoveAll(rootPath)
setConfigDir(rootPath)
configPath := rootPath + "/" + minioConfigFile

View File

@@ -16,7 +16,11 @@
package cmd
- import "sync"
+ import (
+ "sync"
+ "github.com/minio/minio/pkg/auth"
+ )
/////////////////// Config V1 ///////////////////
type configV1 struct {
@@ -92,8 +96,8 @@ type configV3 struct {
Addr string `json:"address"`
// S3 API configuration.
- Credential credential `json:"credential"`
- Region string `json:"region"`
+ Credential auth.Credentials `json:"credential"`
+ Region string `json:"region"`
// Additional error logging configuration.
Logger loggerV3 `json:"logger"`
@@ -122,8 +126,8 @@ type configV4 struct {
Version string `json:"version"`
// S3 API configuration.
- Credential credential `json:"credential"`
- Region string `json:"region"`
+ Credential auth.Credentials `json:"credential"`
+ Region string `json:"region"`
// Additional error logging configuration.
Logger loggerV4 `json:"logger"`
@@ -179,8 +183,8 @@ type configV5 struct {
Version string `json:"version"`
// S3 API configuration.
- Credential credential `json:"credential"`
- Region string `json:"region"`
+ Credential auth.Credentials `json:"credential"`
+ Region string `json:"region"`
// Additional error logging configuration.
Logger loggerV5 `json:"logger"`
@@ -209,8 +213,8 @@ type configV6 struct {
Version string `json:"version"`
// S3 API configuration.
- Credential credential `json:"credential"`
- Region string `json:"region"`
+ Credential auth.Credentials `json:"credential"`
+ Region string `json:"region"`
// Additional error logging configuration.
Logger loggerV6 `json:"logger"`
@@ -246,8 +250,8 @@ type serverConfigV7 struct {
Version string `json:"version"`
// S3 API configuration.
- Credential credential `json:"credential"`
- Region string `json:"region"`
+ Credential auth.Credentials `json:"credential"`
+ Region string `json:"region"`
// Additional error logging configuration.
Logger loggerV6 `json:"logger"`
@@ -265,8 +269,8 @@ type serverConfigV8 struct {
Version string `json:"version"`
// S3 API configuration.
- Credential credential `json:"credential"`
- Region string `json:"region"`
+ Credential auth.Credentials `json:"credential"`
+ Region string `json:"region"`
// Additional error logging configuration.
Logger loggerV6 `json:"logger"`
@@ -284,8 +288,8 @@ type serverConfigV9 struct {
Version string `json:"version"`
// S3 API configuration.
- Credential credential `json:"credential"`
- Region string `json:"region"`
+ Credential auth.Credentials `json:"credential"`
+ Region string `json:"region"`
// Additional error logging configuration.
Logger loggerV6 `json:"logger"`
@@ -310,8 +314,8 @@ type serverConfigV10 struct {
Version string `json:"version"`
// S3 API configuration.
- Credential credential `json:"credential"`
- Region string `json:"region"`
+ Credential auth.Credentials `json:"credential"`
+ Region string `json:"region"`
// Additional error logging configuration.
Logger loggerV7 `json:"logger"`
@@ -338,8 +342,8 @@ type serverConfigV11 struct {
Version string `json:"version"`
// S3 API configuration.
- Credential credential `json:"credential"`
- Region string `json:"region"`
+ Credential auth.Credentials `json:"credential"`
+ Region string `json:"region"`
// Additional error logging configuration.
Logger loggerV7 `json:"logger"`
@@ -354,8 +358,8 @@ type serverConfigV12 struct {
Version string `json:"version"`
// S3 API configuration.
- Credential credential `json:"credential"`
- Region string `json:"region"`
+ Credential auth.Credentials `json:"credential"`
+ Region string `json:"region"`
// Additional error logging configuration.
Logger loggerV7 `json:"logger"`
@@ -370,8 +374,8 @@ type serverConfigV13 struct {
Version string `json:"version"`
// S3 API configuration.
- Credential credential `json:"credential"`
- Region string `json:"region"`
+ Credential auth.Credentials `json:"credential"`
+ Region string `json:"region"`
// Additional error logging configuration.
Logger *loggerV7 `json:"logger"`
@@ -386,9 +390,9 @@ type serverConfigV14 struct {
Version string `json:"version"`
// S3 API configuration.
- Credential credential `json:"credential"`
- Region string `json:"region"`
- Browser BrowserFlag `json:"browser"`
+ Credential auth.Credentials `json:"credential"`
+ Region string `json:"region"`
+ Browser BrowserFlag `json:"browser"`
// Additional error logging configuration.
Logger *loggerV7 `json:"logger"`
@@ -403,9 +407,9 @@ type serverConfigV15 struct {
Version string `json:"version"`
// S3 API configuration.
- Credential credential `json:"credential"`
- Region string `json:"region"`
- Browser BrowserFlag `json:"browser"`
+ Credential auth.Credentials `json:"credential"`
+ Region string `json:"region"`
+ Browser BrowserFlag `json:"browser"`
// Additional error logging configuration.
Logger *loggerV7 `json:"logger"`
@@ -414,15 +418,36 @@ type serverConfigV15 struct {
Notify *notifier `json:"notify"`
}
// FileLogger is introduced to work around the dependency on logrus
type FileLogger struct {
Enable bool `json:"enable"`
Filename string `json:"filename"`
}
// ConsoleLogger is introduced to work around the dependency on logrus
type ConsoleLogger struct {
Enable bool `json:"enable"`
}
// Loggers struct is defined with FileLogger and ConsoleLogger
// although they have been removed from the logging logic. They are
// kept here only to work around the dependency the migration
// code/logic still has on them.
type loggers struct {
sync.RWMutex
Console ConsoleLogger `json:"console"`
File FileLogger `json:"file"`
}
// serverConfigV16 server configuration version '16' which is like
// version '15' except it makes a change to logging configuration.
type serverConfigV16 struct {
Version string `json:"version"`
// S3 API configuration.
- Credential credential `json:"credential"`
- Region string `json:"region"`
- Browser BrowserFlag `json:"browser"`
+ Credential auth.Credentials `json:"credential"`
+ Region string `json:"region"`
+ Browser BrowserFlag `json:"browser"`
// Additional error logging configuration.
Logger *loggers `json:"logger"`
@@ -439,9 +464,9 @@ type serverConfigV17 struct {
Version string `json:"version"`
// S3 API configuration.
- Credential credential `json:"credential"`
- Region string `json:"region"`
- Browser BrowserFlag `json:"browser"`
+ Credential auth.Credentials `json:"credential"`
+ Region string `json:"region"`
+ Browser BrowserFlag `json:"browser"`
// Additional error logging configuration.
Logger *loggers `json:"logger"`
@@ -458,9 +483,9 @@ type serverConfigV18 struct {
Version string `json:"version"`
// S3 API configuration.
- Credential credential `json:"credential"`
- Region string `json:"region"`
- Browser BrowserFlag `json:"browser"`
+ Credential auth.Credentials `json:"credential"`
+ Region string `json:"region"`
+ Browser BrowserFlag `json:"browser"`
// Additional error logging configuration.
Logger *loggers `json:"logger"`
@@ -468,3 +493,76 @@ type serverConfigV18 struct {
// Notification queue configuration.
Notify *notifier `json:"notify"`
}
// serverConfigV19 server configuration version '19' which is like
// version '18' except it adds support for MQTT notifications.
type serverConfigV19 struct {
sync.RWMutex
Version string `json:"version"`
// S3 API configuration.
Credential auth.Credentials `json:"credential"`
Region string `json:"region"`
Browser BrowserFlag `json:"browser"`
// Additional error logging configuration.
Logger *loggers `json:"logger"`
// Notification queue configuration.
Notify *notifier `json:"notify"`
}
// serverConfigV20 server configuration version '20' which is like
// version '19' except it adds support for VirtualHostDomain
type serverConfigV20 struct {
sync.RWMutex
Version string `json:"version"`
// S3 API configuration.
Credential auth.Credentials `json:"credential"`
Region string `json:"region"`
Browser BrowserFlag `json:"browser"`
Domain string `json:"domain"`
// Additional error logging configuration.
Logger *loggers `json:"logger"`
// Notification queue configuration.
Notify *notifier `json:"notify"`
}
// serverConfigV21 is just like version '20' without logger field
type serverConfigV21 struct {
sync.RWMutex
Version string `json:"version"`
// S3 API configuration.
Credential auth.Credentials `json:"credential"`
Region string `json:"region"`
Browser BrowserFlag `json:"browser"`
Domain string `json:"domain"`
// Notification queue configuration.
Notify *notifier `json:"notify"`
}
// serverConfigV22 is just like version '21' with added support
// for StorageClass.
//
// IMPORTANT NOTE: When updating this struct make sure that
// serverConfig.ConfigDiff() is updated as necessary.
type serverConfigV22 struct {
Version string `json:"version"`
// S3 API configuration.
Credential auth.Credentials `json:"credential"`
Region string `json:"region"`
Browser BrowserFlag `json:"browser"`
Domain string `json:"domain"`
// Storage class configuration
StorageClass storageClassConfig `json:"storageclass"`
// Notification queue configuration.
Notify notifier `json:"notify"`
}

View File

@@ -1,74 +0,0 @@
/*
* Minio Cloud Storage, (C) 2017 Minio, Inc.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package cmd
import (
"fmt"
"os"
"github.com/Sirupsen/logrus"
)
// ConsoleLogger - console logger which logs into stderr.
type ConsoleLogger struct {
BaseLogTarget
}
// Fire - log entry handler.
func (logger ConsoleLogger) Fire(entry *logrus.Entry) error {
if !logger.Enable {
return nil
}
msgBytes, err := logger.formatter.Format(entry)
if err == nil {
fmt.Fprintf(os.Stderr, string(msgBytes))
}
return err
}
// String - represents ConsoleLogger as string.
func (logger ConsoleLogger) String() string {
enableStr := "disabled"
if logger.Enable {
enableStr = "enabled"
}
return fmt.Sprintf("console:%s", enableStr)
}
// NewConsoleLogger - return new console logger object.
func NewConsoleLogger() (logger ConsoleLogger) {
logger.Enable = true
logger.formatter = new(logrus.TextFormatter)
return logger
}
// InitConsoleLogger - initializes console logger.
func InitConsoleLogger(logger *ConsoleLogger) {
if !logger.Enable {
return
}
if logger.formatter == nil {
logger.formatter = new(logrus.TextFormatter)
}
return
}

View File

@@ -1,148 +0,0 @@
/*
* Minio Cloud Storage, (C) 2015, 2016, 2017 Minio, Inc.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package cmd
import (
"crypto/rand"
"encoding/base64"
"errors"
"golang.org/x/crypto/bcrypt"
)
const (
// Minimum length for Minio access key.
accessKeyMinLen = 5
// Maximum length for Minio access key.
// There is no max length enforcement for access keys
accessKeyMaxLen = 20
// Minimum length for Minio secret key for both server and gateway mode.
secretKeyMinLen = 8
// Maximum secret key length for Minio, this
// is used when autogenerating new credentials.
// There is no max length enforcement for secret keys
secretKeyMaxLen = 40
// Alpha numeric table used for generating access keys.
alphaNumericTable = "0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZ"
// Total length of the alpha numeric table.
alphaNumericTableLen = byte(len(alphaNumericTable))
)
// Common errors generated for access and secret key validation.
var (
errInvalidAccessKeyLength = errors.New("Invalid access key, access key should be minimum 5 characters in length")
errInvalidSecretKeyLength = errors.New("Invalid secret key, secret key should be minimum 8 characters in length")
)
// isAccessKeyValid - validate access key for right length.
func isAccessKeyValid(accessKey string) bool {
return len(accessKey) >= accessKeyMinLen
}
// isSecretKeyValid - validate secret key for right length.
func isSecretKeyValid(secretKey string) bool {
return len(secretKey) >= secretKeyMinLen
}
// credential container for access and secret keys.
type credential struct {
AccessKey string `json:"accessKey,omitempty"`
SecretKey string `json:"secretKey,omitempty"`
secretKeyHash []byte
}
// IsValid - returns whether credential is valid or not.
func (cred credential) IsValid() bool {
return isAccessKeyValid(cred.AccessKey) && isSecretKeyValid(cred.SecretKey)
}
// Equals - returns whether two credentials are equal or not.
func (cred credential) Equal(ccred credential) bool {
if !ccred.IsValid() {
return false
}
if cred.secretKeyHash == nil {
secretKeyHash, err := bcrypt.GenerateFromPassword([]byte(cred.SecretKey), bcrypt.DefaultCost)
if err != nil {
errorIf(err, "Unable to generate hash of given password")
return false
}
cred.secretKeyHash = secretKeyHash
}
return (cred.AccessKey == ccred.AccessKey &&
bcrypt.CompareHashAndPassword(cred.secretKeyHash, []byte(ccred.SecretKey)) == nil)
}
func createCredential(accessKey, secretKey string) (cred credential, err error) {
if !isAccessKeyValid(accessKey) {
err = errInvalidAccessKeyLength
} else if !isSecretKeyValid(secretKey) {
err = errInvalidSecretKeyLength
} else {
var secretKeyHash []byte
secretKeyHash, err = bcrypt.GenerateFromPassword([]byte(secretKey), bcrypt.DefaultCost)
if err == nil {
cred.AccessKey = accessKey
cred.SecretKey = secretKey
cred.secretKeyHash = secretKeyHash
}
}
return cred, err
}
// Initialize a new credential object
func getNewCredential(accessKeyLen, secretKeyLen int) (cred credential, err error) {
// Generate access key.
keyBytes := make([]byte, accessKeyLen)
_, err = rand.Read(keyBytes)
if err != nil {
return cred, err
}
for i := 0; i < accessKeyLen; i++ {
keyBytes[i] = alphaNumericTable[keyBytes[i]%alphaNumericTableLen]
}
accessKey := string(keyBytes)
// Generate secret key.
keyBytes = make([]byte, secretKeyLen)
_, err = rand.Read(keyBytes)
if err != nil {
return cred, err
}
secretKey := string([]byte(base64.StdEncoding.EncodeToString(keyBytes))[:secretKeyLen])
cred, err = createCredential(accessKey, secretKey)
return cred, err
}
func mustGetNewCredential() credential {
// Generate Minio credentials with Minio key max lengths.
cred, err := getNewCredential(accessKeyMaxLen, secretKeyMaxLen)
fatalIf(err, "Unable to generate new credentials.")
return cred
}

View File

@@ -1,100 +0,0 @@
/*
* Minio Cloud Storage, (C) 2017 Minio, Inc.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package cmd
import "testing"
func TestMustGetNewCredential(t *testing.T) {
cred := mustGetNewCredential()
if !cred.IsValid() {
t.Fatalf("Failed to get new valid credential")
}
if len(cred.SecretKey) != secretKeyMaxLen {
t.Fatalf("Invalid length %d of the secretKey credential generated, expected %d", len(cred.SecretKey), secretKeyMaxLen)
}
}
func TestCreateCredential(t *testing.T) {
cred := mustGetNewCredential()
testCases := []struct {
accessKey string
secretKey string
expectedResult bool
expectedErr error
}{
// Access key too small (min 5 chars).
{"user", "pass", false, errInvalidAccessKeyLength},
// Long access key is ok.
{"user123456789012345678901234567890", "password", true, nil},
// Secret key too small (min 8 chars).
{"myuser", "pass", false, errInvalidSecretKeyLength},
// Long secret key is ok.
{"myuser", "pass1234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890", true, nil},
{"myuser", "mypassword", true, nil},
{cred.AccessKey, cred.SecretKey, true, nil},
}
for _, testCase := range testCases {
cred, err := createCredential(testCase.accessKey, testCase.secretKey)
if testCase.expectedErr == nil {
if err != nil {
t.Fatalf("error: expected = <nil>, got = %v", err)
}
} else if err == nil {
t.Fatalf("error: expected = %v, got = <nil>", testCase.expectedErr)
} else if testCase.expectedErr.Error() != err.Error() {
t.Fatalf("error: expected = %v, got = %v", testCase.expectedErr, err)
}
if testCase.expectedResult != cred.IsValid() {
t.Fatalf("cred: expected: %v, got: %v", testCase.expectedResult, cred.IsValid())
}
}
}
func TestCredentialEqual(t *testing.T) {
cred := mustGetNewCredential()
testCases := []struct {
cred credential
ccred credential
expectedResult bool
}{
// Empty compare credential
{cred, credential{}, false},
// Empty credential
{credential{}, cred, false},
// Two different credentials
{cred, mustGetNewCredential(), false},
// Access key is different in compare credential.
{cred, credential{AccessKey: "myuser", SecretKey: cred.SecretKey}, false},
// Secret key is different in compare credential.
{cred, credential{AccessKey: cred.AccessKey, SecretKey: "mypassword"}, false},
// secretHashKey is missing in compare credential.
{cred, credential{AccessKey: cred.AccessKey, SecretKey: cred.SecretKey}, true},
// secretHashKey is missing in credential.
{credential{AccessKey: cred.AccessKey, SecretKey: cred.SecretKey}, cred, true},
// Same credentials.
{cred, cred, true},
}
for _, testCase := range testCases {
result := testCase.cred.Equal(testCase.ccred)
if result != testCase.expectedResult {
t.Fatalf("cred: expected: %v, got: %v", testCase.expectedResult, result)
}
}
}

cmd/dynamic-timeouts.go Normal file
View File

@@ -0,0 +1,120 @@
/*
* Minio Cloud Storage, (C) 2017 Minio, Inc.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package cmd
import (
"sync"
"sync/atomic"
"time"
)
const (
dynamicTimeoutIncreaseThresholdPct = 0.33 // Upper threshold for failures in order to increase timeout
dynamicTimeoutDecreaseThresholdPct = 0.10 // Lower threshold for failures in order to decrease timeout
dynamicTimeoutLogSize = 16
maxDuration = time.Duration(1<<63 - 1)
)
// timeouts that are dynamically adapted based on actual usage results
type dynamicTimeout struct {
timeout int64
minimum int64
entries int64
log [dynamicTimeoutLogSize]time.Duration
mutex sync.Mutex
}
// newDynamicTimeout returns a new dynamic timeout initialized with timeout value
func newDynamicTimeout(timeout, minimum time.Duration) *dynamicTimeout {
return &dynamicTimeout{timeout: int64(timeout), minimum: int64(minimum)}
}
// Timeout returns the current timeout value
func (dt *dynamicTimeout) Timeout() time.Duration {
return time.Duration(atomic.LoadInt64(&dt.timeout))
}
// LogSuccess logs the duration of a successful action that
// did not hit the timeout
func (dt *dynamicTimeout) LogSuccess(duration time.Duration) {
dt.logEntry(duration)
}
// LogFailure logs an action that hit the timeout
func (dt *dynamicTimeout) LogFailure() {
dt.logEntry(maxDuration)
}
// logEntry stores a log entry
func (dt *dynamicTimeout) logEntry(duration time.Duration) {
entries := int(atomic.AddInt64(&dt.entries, 1))
index := entries - 1
if index < dynamicTimeoutLogSize {
dt.mutex.Lock()
dt.log[index] = duration
dt.mutex.Unlock()
}
if entries == dynamicTimeoutLogSize {
dt.mutex.Lock()
// Make copy on stack in order to call adjust()
logCopy := [dynamicTimeoutLogSize]time.Duration{}
copy(logCopy[:], dt.log[:])
// reset log entries
atomic.StoreInt64(&dt.entries, 0)
dt.mutex.Unlock()
dt.adjust(logCopy)
}
}
// adjust changes the value of the dynamic timeout based on the
// previous results
func (dt *dynamicTimeout) adjust(entries [dynamicTimeoutLogSize]time.Duration) {
failures, average := 0, int64(0)
for i := 0; i < len(entries); i++ {
if entries[i] == maxDuration {
failures++
} else {
average += int64(entries[i])
}
}
if failures < len(entries) {
average /= int64(len(entries) - failures)
}
timeOutHitPct := float64(failures) / float64(len(entries))
if timeOutHitPct > dynamicTimeoutIncreaseThresholdPct {
// We are hitting the timeout too often, so increase the timeout by 25%
timeout := atomic.LoadInt64(&dt.timeout) * 125 / 100
atomic.StoreInt64(&dt.timeout, timeout)
} else if timeOutHitPct < dynamicTimeoutDecreaseThresholdPct {
// We are hitting the timeout relatively few times, so decrease the timeout
average = average * 125 / 100 // Add buffer of 25% on top of average
timeout := (atomic.LoadInt64(&dt.timeout) + int64(average)) / 2 // Middle between current timeout and average success
if timeout < dt.minimum {
timeout = dt.minimum
}
atomic.StoreInt64(&dt.timeout, timeout)
}
}
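With dynamicTimeoutLogSize = 16, adjust() runs once per 16 logged outcomes: six or more failures (> 33%) raise the timeout by 25%, while at most one failure (< 10%) moves it to the midpoint of the current timeout and 1.25x the average success duration, never below the configured minimum. A minimal usage sketch, assuming this package plus an extra "errors" import; the wrapper below is hypothetical and not part of this diff:
func runWithTimeout(dt *dynamicTimeout, op func() error) error {
	start := time.Now()
	done := make(chan error, 1)
	go func() { done <- op() }()
	select {
	case err := <-done:
		// Finished in time: log the duration so adjust() can
		// shrink the timeout toward observed behavior.
		dt.LogSuccess(time.Since(start))
		return err
	case <-time.After(dt.Timeout()):
		// Hit the timeout: log a failure so adjust() can grow it.
		dt.LogFailure()
		return errors.New("operation timed out")
	}
}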

View File

@@ -0,0 +1,207 @@
/*
* Minio Cloud Storage, (C) 2017 Minio, Inc.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package cmd
import (
"math/rand"
"testing"
"time"
)
func TestDynamicTimeoutSingleIncrease(t *testing.T) {
timeout := newDynamicTimeout(time.Minute, time.Second)
initial := timeout.Timeout()
for i := 0; i < dynamicTimeoutLogSize; i++ {
timeout.LogFailure()
}
adjusted := timeout.Timeout()
if initial >= adjusted {
t.Errorf("Failure to increase timeout, expected %v to be more than %v", adjusted, initial)
}
}
func TestDynamicTimeoutDualIncrease(t *testing.T) {
timeout := newDynamicTimeout(time.Minute, time.Second)
initial := timeout.Timeout()
for i := 0; i < dynamicTimeoutLogSize; i++ {
timeout.LogFailure()
}
adjusted := timeout.Timeout()
for i := 0; i < dynamicTimeoutLogSize; i++ {
timeout.LogFailure()
}
adjustedAgain := timeout.Timeout()
if initial >= adjusted || adjusted >= adjustedAgain {
t.Errorf("Failure to increase timeout multiple times")
}
}
func TestDynamicTimeoutSingleDecrease(t *testing.T) {
timeout := newDynamicTimeout(time.Minute, time.Second)
initial := timeout.Timeout()
for i := 0; i < dynamicTimeoutLogSize; i++ {
timeout.LogSuccess(20 * time.Second)
}
adjusted := timeout.Timeout()
if initial <= adjusted {
t.Errorf("Failure to decrease timeout, expected %v to be less than %v", adjusted, initial)
}
}
func TestDynamicTimeoutDualDecrease(t *testing.T) {
timeout := newDynamicTimeout(time.Minute, time.Second)
initial := timeout.Timeout()
for i := 0; i < dynamicTimeoutLogSize; i++ {
timeout.LogSuccess(20 * time.Second)
}
adjusted := timeout.Timeout()
for i := 0; i < dynamicTimeoutLogSize; i++ {
timeout.LogSuccess(20 * time.Second)
}
adjustedAgain := timeout.Timeout()
if initial <= adjusted || adjusted <= adjustedAgain {
t.Errorf("Failure to decrease timeout multiple times")
}
}
func TestDynamicTimeoutManyDecreases(t *testing.T) {
timeout := newDynamicTimeout(time.Minute, time.Second)
initial := timeout.Timeout()
const successTimeout = 20 * time.Second
for l := 0; l < 100; l++ {
for i := 0; i < dynamicTimeoutLogSize; i++ {
timeout.LogSuccess(successTimeout)
}
}
adjusted := timeout.Timeout()
// Check whether eventual timeout is between initial value and success timeout
if initial <= adjusted || adjusted <= successTimeout {
t.Errorf("Failure to decrease timeout appropriately")
}
}
func TestDynamicTimeoutHitMinimum(t *testing.T) {
const minimum = 30 * time.Second
timeout := newDynamicTimeout(time.Minute, minimum)
initial := timeout.Timeout()
const successTimeout = 20 * time.Second
for l := 0; l < 100; l++ {
for i := 0; i < dynamicTimeoutLogSize; i++ {
timeout.LogSuccess(successTimeout)
}
}
adjusted := timeout.Timeout()
// Check whether eventual timeout has hit the minimum value
if initial <= adjusted || adjusted != minimum {
t.Errorf("Failure to decrease timeout appropriately")
}
}
func testDynamicTimeoutAdjust(t *testing.T, timeout *dynamicTimeout, f func() float64) {
const successTimeout = 20 * time.Second
for i := 0; i < dynamicTimeoutLogSize; i++ {
rnd := f()
duration := time.Duration(float64(successTimeout) * rnd)
if duration < 100*time.Millisecond {
duration = 100 * time.Millisecond
}
if duration >= time.Minute {
timeout.LogFailure()
} else {
timeout.LogSuccess(duration)
}
}
}
func TestDynamicTimeoutAdjustExponential(t *testing.T) {
timeout := newDynamicTimeout(time.Minute, time.Second)
rand.Seed(time.Now().UTC().UnixNano())
initial := timeout.Timeout()
for try := 0; try < 10; try++ {
testDynamicTimeoutAdjust(t, timeout, rand.ExpFloat64)
}
adjusted := timeout.Timeout()
if initial <= adjusted {
t.Errorf("Failure to decrease timeout, expected %v to be less than %v", adjusted, initial)
}
}
func TestDynamicTimeoutAdjustNormalized(t *testing.T) {
timeout := newDynamicTimeout(time.Minute, time.Second)
rand.Seed(time.Now().UTC().UnixNano())
initial := timeout.Timeout()
for try := 0; try < 10; try++ {
testDynamicTimeoutAdjust(t, timeout, func() float64 {
return 1.0 + rand.NormFloat64()
})
}
adjusted := timeout.Timeout()
if initial <= adjusted {
t.Errorf("Failure to decrease timeout, expected %v to be less than %v", adjusted, initial)
}
}

cmd/encryption-v1.go Normal file
View File

@@ -0,0 +1,826 @@
/*
* Minio Cloud Storage, (C) 2017, 2018 Minio, Inc.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package cmd
import (
"bytes"
"crypto/hmac"
"crypto/md5"
"crypto/rand"
"crypto/subtle"
"encoding/base64"
"encoding/binary"
"errors"
"io"
"net/http"
"github.com/minio/minio/pkg/ioutil"
sha256 "github.com/minio/sha256-simd"
"github.com/minio/sio"
)
var (
// AWS errors for invalid SSE-C requests.
errInsecureSSERequest = errors.New("Requests specifying Server Side Encryption with Customer provided keys must be made over a secure connection")
errEncryptedObject = errors.New("The object was stored using a form of Server Side Encryption. The correct parameters must be provided to retrieve the object")
errInvalidSSEAlgorithm = errors.New("Requests specifying Server Side Encryption with Customer provided keys must provide a valid encryption algorithm")
errMissingSSEKey = errors.New("Requests specifying Server Side Encryption with Customer provided keys must provide an appropriate secret key")
errInvalidSSEKey = errors.New("The secret key was invalid for the specified algorithm")
errMissingSSEKeyMD5 = errors.New("Requests specifying Server Side Encryption with Customer provided keys must provide the client calculated MD5 of the secret key")
errSSEKeyMD5Mismatch = errors.New("The calculated MD5 hash of the key did not match the hash that was provided")
errSSEKeyMismatch = errors.New("The client provided key does not match the key provided when the object was encrypted") // this msg is not shown to the client
// Additional Minio errors for SSE-C requests.
errObjectTampered = errors.New("The requested object was modified and may be compromised")
)
const (
// SSECustomerAlgorithm is the AWS SSE-C algorithm HTTP header key.
SSECustomerAlgorithm = "X-Amz-Server-Side-Encryption-Customer-Algorithm"
// SSECustomerKey is the AWS SSE-C encryption key HTTP header key.
SSECustomerKey = "X-Amz-Server-Side-Encryption-Customer-Key"
// SSECustomerKeyMD5 is the AWS SSE-C encryption key MD5 HTTP header key.
SSECustomerKeyMD5 = "X-Amz-Server-Side-Encryption-Customer-Key-MD5"
// SSECopyCustomerAlgorithm is the AWS SSE-C algorithm HTTP header key for CopyObject API.
SSECopyCustomerAlgorithm = "X-Amz-Copy-Source-Server-Side-Encryption-Customer-Algorithm"
// SSECopyCustomerKey is the AWS SSE-C encryption key HTTP header key for CopyObject API.
SSECopyCustomerKey = "X-Amz-Copy-Source-Server-Side-Encryption-Customer-Key"
// SSECopyCustomerKeyMD5 is the AWS SSE-C encryption key MD5 HTTP header key for CopyObject API.
SSECopyCustomerKeyMD5 = "X-Amz-Copy-Source-Server-Side-Encryption-Customer-Key-MD5"
)
const (
// SSECustomerKeySize is the size of valid client provided encryption keys in bytes.
// Currently AWS supports only AES256. So the SSE-C key size is fixed to 32 bytes.
SSECustomerKeySize = 32
// SSEIVSize is the size of the IV data
SSEIVSize = 32 // 32 bytes
// SSECustomerAlgorithmAES256 the only valid S3 SSE-C encryption algorithm identifier.
SSECustomerAlgorithmAES256 = "AES256"
// SSE dare package block size.
sseDAREPackageBlockSize = 64 * 1024 // 64KiB bytes
// SSE dare package meta padding bytes.
sseDAREPackageMetaSize = 32 // 32 bytes
)
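To make these sizes concrete: DARE encrypts the object as a sequence of 64 KiB packages, each carrying 32 bytes of authentication metadata, so a 1 MiB object (16 full packages) occupies 1 MiB + 512 bytes in encrypted form. This per-package overhead is what EncryptedSize() and the offset arithmetic later in this file account for.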
// SSE-C key derivation, key verification and key update:
// H: Hash function [32 = |H(m)|]
// AE: authenticated encryption scheme, AD: authenticated decryption scheme [m = AD(k, AE(k, m))]
//
// Key derivation:
// Input:
// key := 32 bytes # client provided key
// Re, Rm := 32 bytes, 32 bytes # uniformly random
//
// Seal:
// k := H(key || Re) # object encryption key
// r := H(Rm) # save as object metadata [ServerSideEncryptionIV]
// KeK := H(key || r) # key encryption key
// K := AE(KeK, k) # save as object metadata [ServerSideEncryptionSealedKey]
// ------------------------------------------------------------------------------------------------
// Key verification:
// Input:
// key := 32 bytes # client provided key
// r := 32 bytes # object metadata [ServerSideEncryptionIV]
// K := 32 bytes # object metadata [ServerSideEncryptionSealedKey]
//
// Open:
// KeK := H(key || r) # key encryption key
// k := AD(Kek, K) # object encryption key
// -------------------------------------------------------------------------------------------------
// Key update:
// Input:
// key := 32 bytes # old client provided key
// key' := 32 bytes # new client provided key
// Rm := 32 bytes # uniformly random
// r := 32 bytes # object metadata [ServerSideEncryptionIV]
// K := 32 bytes # object metadata [ServerSideEncryptionSealedKey]
//
// Update:
// 1. open:
// KeK := H(key || r) # key encryption key
// k := AD(Kek, K) # object encryption key
// 2. seal:
// r' := H(Rm) # save as object metadata [ServerSideEncryptionIV]
// KeK' := H(key' || r') # new key encryption key
// K' := AE(KeK', k) # save as object metadata [ServerSideEncryptionSealedKey]
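A minimal sketch of the "Open" (key verification) step described above, reusing the bytes, sha256, and sio imports of this file; it mirrors decryptObjectInfo further below and is shown only to make the notation concrete:
func openObjectKey(clientKey, iv, sealedKey []byte) ([]byte, error) {
	sha := sha256.New() // KeK := H(key || r)
	sha.Write(clientKey)
	sha.Write(iv)
	keyEncryptionKey := sha.Sum(nil)
	objectKey := bytes.NewBuffer(nil) // k := AD(KeK, K)
	n, err := sio.Decrypt(objectKey, bytes.NewReader(sealedKey), sio.Config{Key: keyEncryptionKey})
	if n != 32 || err != nil {
		return nil, errSSEKeyMismatch // wrong key or tampered metadata
	}
	return objectKey.Bytes(), nil
}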
const (
// ServerSideEncryptionIV is a 32 byte randomly generated IV used to derive a
// unique key encryption key from the client provided key. The combination of this value
// and the client-provided key MUST be unique.
ServerSideEncryptionIV = ReservedMetadataPrefix + "Server-Side-Encryption-Iv"
// ServerSideEncryptionSealAlgorithm identifies a combination of a cryptographic hash function and
// an authenticated en/decryption scheme to seal the object encryption key.
ServerSideEncryptionSealAlgorithm = ReservedMetadataPrefix + "Server-Side-Encryption-Seal-Algorithm"
// ServerSideEncryptionSealedKey is the sealed object encryption key. The sealed key can be decrypted
// by the key encryption key derived from the client provided key and the server-side-encryption IV.
ServerSideEncryptionSealedKey = ReservedMetadataPrefix + "Server-Side-Encryption-Sealed-Key"
)
// SSESealAlgorithmDareSha256 specifies DARE as authenticated en/decryption scheme and SHA256 as cryptographic
// hash function.
const SSESealAlgorithmDareSha256 = "DARE-SHA256"
// hasSSECustomerHeader returns true if the given HTTP header
// contains server-side-encryption with customer provided key fields.
func hasSSECustomerHeader(header http.Header) bool {
return header.Get(SSECustomerAlgorithm) != "" || header.Get(SSECustomerKey) != "" || header.Get(SSECustomerKeyMD5) != ""
}
// hasSSECopyCustomerHeader returns true if the given HTTP header
// contains copy source server-side-encryption with customer provided key fields.
func hasSSECopyCustomerHeader(header http.Header) bool {
return header.Get(SSECopyCustomerAlgorithm) != "" || header.Get(SSECopyCustomerKey) != "" || header.Get(SSECopyCustomerKeyMD5) != ""
}
// ParseSSECopyCustomerRequest parses the SSE-C header fields of the provided request.
// It returns the client provided key on success.
func ParseSSECopyCustomerRequest(r *http.Request) (key []byte, err error) {
if !globalIsSSL { // Minio serves either HTTP or HTTPS requests, never both at the same time
// we cannot use r.TLS == nil here because Go's http implementation reflects on
// the net.Conn and sets the TLS field of http.Request only if it's a tls.Conn.
// Minio uses a BufConn (wrapping a tls.Conn), so the type check within the http package
// always fails -> r.TLS is always nil, even for TLS requests.
return nil, errInsecureSSERequest
}
header := r.Header
if algorithm := header.Get(SSECopyCustomerAlgorithm); algorithm != SSECustomerAlgorithmAES256 {
return nil, errInvalidSSEAlgorithm
}
if header.Get(SSECopyCustomerKey) == "" {
return nil, errMissingSSEKey
}
if header.Get(SSECopyCustomerKeyMD5) == "" {
return nil, errMissingSSEKeyMD5
}
key, err = base64.StdEncoding.DecodeString(header.Get(SSECopyCustomerKey))
if err != nil {
return nil, errInvalidSSEKey
}
if len(key) != SSECustomerKeySize {
return nil, errInvalidSSEKey
}
// Make sure we purged the keys from http headers by now.
header.Del(SSECopyCustomerKey)
keyMD5, err := base64.StdEncoding.DecodeString(header.Get(SSECopyCustomerKeyMD5))
if err != nil {
return nil, errSSEKeyMD5Mismatch
}
if md5Sum := md5.Sum(key); !bytes.Equal(md5Sum[:], keyMD5) {
return nil, errSSEKeyMD5Mismatch
}
return key, nil
}
// ParseSSECustomerRequest parses the SSE-C header fields of the provided request.
// It returns the client provided key on success.
func ParseSSECustomerRequest(r *http.Request) (key []byte, err error) {
return ParseSSECustomerHeader(r.Header)
}
// ParseSSECustomerHeader parses the SSE-C header fields and returns
// the client provided key on success.
func ParseSSECustomerHeader(header http.Header) (key []byte, err error) {
if !globalIsSSL { // Minio serves either HTTP or HTTPS requests, never both at the same time
// we cannot use r.TLS == nil here because Go's http implementation reflects on
// the net.Conn and sets the TLS field of http.Request only if it's a tls.Conn.
// Minio uses a BufConn (wrapping a tls.Conn), so the type check within the http package
// always fails -> r.TLS is always nil, even for TLS requests.
return nil, errInsecureSSERequest
}
if algorithm := header.Get(SSECustomerAlgorithm); algorithm != SSECustomerAlgorithmAES256 {
return nil, errInvalidSSEAlgorithm
}
if header.Get(SSECustomerKey) == "" {
return nil, errMissingSSEKey
}
if header.Get(SSECustomerKeyMD5) == "" {
return nil, errMissingSSEKeyMD5
}
key, err = base64.StdEncoding.DecodeString(header.Get(SSECustomerKey))
if err != nil {
return nil, errInvalidSSEKey
}
if len(key) != SSECustomerKeySize {
return nil, errInvalidSSEKey
}
// Make sure we purged the keys from http headers by now.
header.Del(SSECustomerKey)
keyMD5, err := base64.StdEncoding.DecodeString(header.Get(SSECustomerKeyMD5))
if err != nil {
return nil, errSSEKeyMD5Mismatch
}
if md5Sum := md5.Sum(key); !bytes.Equal(md5Sum[:], keyMD5) {
return nil, errSSEKeyMD5Mismatch
}
return key, nil
}
// This function rotates old to new key.
func rotateKey(oldKey []byte, newKey []byte, metadata map[string]string) error {
if subtle.ConstantTimeCompare(oldKey, newKey) == 1 {
return nil
}
delete(metadata, SSECustomerKey) // make sure we do not save the key by accident
if metadata[ServerSideEncryptionSealAlgorithm] != SSESealAlgorithmDareSha256 { // currently DARE-SHA256 is the only option
return errObjectTampered
}
iv, err := base64.StdEncoding.DecodeString(metadata[ServerSideEncryptionIV])
if err != nil || len(iv) != SSEIVSize {
return errObjectTampered
}
sealedKey, err := base64.StdEncoding.DecodeString(metadata[ServerSideEncryptionSealedKey])
if err != nil || len(sealedKey) != 64 {
return errObjectTampered
}
sha := sha256.New() // derive key encryption key
sha.Write(oldKey)
sha.Write(iv)
keyEncryptionKey := sha.Sum(nil)
objectEncryptionKey := bytes.NewBuffer(nil) // decrypt object encryption key
n, err := sio.Decrypt(objectEncryptionKey, bytes.NewReader(sealedKey), sio.Config{
Key: keyEncryptionKey,
})
if n != 32 || err != nil {
// Either the provided key does not match or the object was tampered.
// To provide strict AWS S3 compatibility we return: access denied.
return errSSEKeyMismatch
}
nonce := make([]byte, 32) // generate random values for key derivation
if _, err = io.ReadFull(rand.Reader, nonce); err != nil {
return err
}
niv := sha256.Sum256(nonce[:]) // derive key encryption key
sha = sha256.New()
sha.Write(newKey)
sha.Write(niv[:])
keyEncryptionKey = sha.Sum(nil)
sealedKeyW := bytes.NewBuffer(nil) // sealedKey := 16 byte header + 32 byte payload + 16 byte tag
n, err = sio.Encrypt(sealedKeyW, bytes.NewReader(objectEncryptionKey.Bytes()), sio.Config{
Key: keyEncryptionKey,
})
if n != 64 || err != nil {
return errors.New("failed to seal object encryption key") // if this happens there's a bug in the code (may panic ?)
}
metadata[ServerSideEncryptionIV] = base64.StdEncoding.EncodeToString(niv[:])
metadata[ServerSideEncryptionSealAlgorithm] = SSESealAlgorithmDareSha256
metadata[ServerSideEncryptionSealedKey] = base64.StdEncoding.EncodeToString(sealedKeyW.Bytes())
return nil
}
func newEncryptMetadata(key []byte, metadata map[string]string) ([]byte, error) {
delete(metadata, SSECustomerKey) // make sure we do not save the key by accident
// security notice:
// - If the first 32 bytes of the random value are ever repeated under the same client-provided
// key the encrypted object will not be tamper-proof. [ P(coll) ~= 1 / 2^(256 / 2)]
// - If the last 32 bytes of the random value are ever repeated under the same client-provided
// key an adversary may be able to extract the object encryption key. This depends on the
// authenticated en/decryption scheme. The DARE format will generate an 8 byte nonce which must
// be repeated in addition to reveal the object encryption key.
// [ P(coll) ~= 1 / 2^((256 + 64) / 2) ]
nonce := make([]byte, 32+SSEIVSize) // generate random values for key derivation
if _, err := io.ReadFull(rand.Reader, nonce); err != nil {
return nil, err
}
sha := sha256.New() // derive object encryption key
sha.Write(key)
sha.Write(nonce[:32])
objectEncryptionKey := sha.Sum(nil)
iv := sha256.Sum256(nonce[32:]) // derive key encryption key
sha = sha256.New()
sha.Write(key)
sha.Write(iv[:])
keyEncryptionKey := sha.Sum(nil)
sealedKey := bytes.NewBuffer(nil) // sealedKey := 16 byte header + 32 byte payload + 16 byte tag
n, err := sio.Encrypt(sealedKey, bytes.NewReader(objectEncryptionKey), sio.Config{
Key: keyEncryptionKey,
})
if n != 64 || err != nil {
return nil, errors.New("failed to seal object encryption key") // if this happens there's a bug in the code (may panic ?)
}
metadata[ServerSideEncryptionIV] = base64.StdEncoding.EncodeToString(iv[:])
metadata[ServerSideEncryptionSealAlgorithm] = SSESealAlgorithmDareSha256
metadata[ServerSideEncryptionSealedKey] = base64.StdEncoding.EncodeToString(sealedKey.Bytes())
return objectEncryptionKey, nil
}
func newEncryptReader(content io.Reader, key []byte, metadata map[string]string) (io.Reader, error) {
objectEncryptionKey, err := newEncryptMetadata(key, metadata)
if err != nil {
return nil, err
}
reader, err := sio.EncryptReader(content, sio.Config{Key: objectEncryptionKey})
if err != nil {
return nil, errInvalidSSEKey
}
return reader, nil
}
// EncryptRequest takes the client provided content and encrypts the data
// with the client provided key. It also marks the object as client-side-encrypted
// and sets the correct headers.
func EncryptRequest(content io.Reader, r *http.Request, metadata map[string]string) (io.Reader, error) {
key, err := ParseSSECustomerRequest(r)
if err != nil {
return nil, err
}
return newEncryptReader(content, key, metadata)
}
// DecryptCopyRequest decrypts the object with the client provided key. It also removes
// the client-side-encryption metadata from the object and sets the correct headers.
func DecryptCopyRequest(client io.Writer, r *http.Request, metadata map[string]string) (io.WriteCloser, error) {
key, err := ParseSSECopyCustomerRequest(r)
if err != nil {
return nil, err
}
delete(metadata, SSECopyCustomerKey) // make sure we do not save the key by accident
return newDecryptWriter(client, key, 0, metadata)
}
func decryptObjectInfo(key []byte, metadata map[string]string) ([]byte, error) {
if metadata[ServerSideEncryptionSealAlgorithm] != SSESealAlgorithmDareSha256 { // currently DARE-SHA256 is the only option
return nil, errObjectTampered
}
iv, err := base64.StdEncoding.DecodeString(metadata[ServerSideEncryptionIV])
if err != nil || len(iv) != SSEIVSize {
return nil, errObjectTampered
}
sealedKey, err := base64.StdEncoding.DecodeString(metadata[ServerSideEncryptionSealedKey])
if err != nil || len(sealedKey) != 64 {
return nil, errObjectTampered
}
sha := sha256.New() // derive key encryption key
sha.Write(key)
sha.Write(iv)
keyEncryptionKey := sha.Sum(nil)
objectEncryptionKey := bytes.NewBuffer(nil) // decrypt object encryption key
n, err := sio.Decrypt(objectEncryptionKey, bytes.NewReader(sealedKey), sio.Config{
Key: keyEncryptionKey,
})
if n != 32 || err != nil {
// Either the provided key does not match or the object was tampered.
// To provide strict AWS S3 compatibility we return: access denied.
return nil, errSSEKeyMismatch
}
return objectEncryptionKey.Bytes(), nil
}
func newDecryptWriter(client io.Writer, key []byte, seqNumber uint32, metadata map[string]string) (io.WriteCloser, error) {
objectEncryptionKey, err := decryptObjectInfo(key, metadata)
if err != nil {
return nil, err
}
return newDecryptWriterWithObjectKey(client, objectEncryptionKey, seqNumber, metadata)
}
func newDecryptWriterWithObjectKey(client io.Writer, objectEncryptionKey []byte, seqNumber uint32, metadata map[string]string) (io.WriteCloser, error) {
writer, err := sio.DecryptWriter(client, sio.Config{
Key: objectEncryptionKey,
SequenceNumber: seqNumber,
})
if err != nil {
return nil, errInvalidSSEKey
}
delete(metadata, ServerSideEncryptionIV)
delete(metadata, ServerSideEncryptionSealAlgorithm)
delete(metadata, ServerSideEncryptionSealedKey)
delete(metadata, ReservedMetadataPrefix+"Encrypted-Multipart")
return writer, nil
}
// DecryptRequestWithSequenceNumber decrypts the object with the client provided key. It also removes
// the client-side-encryption metadata from the object and sets the correct headers.
func DecryptRequestWithSequenceNumber(client io.Writer, r *http.Request, seqNumber uint32, metadata map[string]string) (io.WriteCloser, error) {
key, err := ParseSSECustomerRequest(r)
if err != nil {
return nil, err
}
delete(metadata, SSECustomerKey) // make sure we do not save the key by accident
return newDecryptWriter(client, key, seqNumber, metadata)
}
// DecryptRequest decrypts the object with the client provided key. It also removes
// the client-side-encryption metadata from the object and sets the correct headers.
func DecryptRequest(client io.Writer, r *http.Request, metadata map[string]string) (io.WriteCloser, error) {
return DecryptRequestWithSequenceNumber(client, r, 0, metadata)
}
// DecryptBlocksWriter - decrypts multipart parts, while implementing a io.Writer compatible interface.
type DecryptBlocksWriter struct {
// Original writer where the plain data will be written
writer io.Writer
// Current decrypter for the current encrypted data block
decrypter io.WriteCloser
// Start sequence number
startSeqNum uint32
// Current part index
partIndex int
// Parts information
parts []objectPartInfo
req *http.Request
metadata map[string]string
partEncRelOffset int64
copySource bool
// Customer Key
customerKeyHeader string
}
func (w *DecryptBlocksWriter) buildDecrypter(partID int) error {
m := make(map[string]string)
for k, v := range w.metadata {
m[k] = v
}
// Initialize the first decrypter, new decrypters will be initialized in Write() operation as needed.
var key []byte
var err error
if w.copySource {
w.req.Header.Set(SSECopyCustomerKey, w.customerKeyHeader)
key, err = ParseSSECopyCustomerRequest(w.req)
} else {
w.req.Header.Set(SSECustomerKey, w.customerKeyHeader)
key, err = ParseSSECustomerRequest(w.req)
}
if err != nil {
return err
}
objectEncryptionKey, err := decryptObjectInfo(key, m)
if err != nil {
return err
}
var partIDbin [4]byte
binary.LittleEndian.PutUint32(partIDbin[:], uint32(partID)) // marshal part ID
mac := hmac.New(sha256.New, objectEncryptionKey) // derive part encryption key from part ID and object key
mac.Write(partIDbin[:])
partEncryptionKey := mac.Sum(nil)
// make sure we do not save the key by accident
if w.copySource {
delete(m, SSECopyCustomerKey)
} else {
delete(m, SSECustomerKey)
}
// Make sure to provide a NopCloser so that a Close on the
// sio decrypt writer doesn't propagate to the underlying
// writer, which could close the stream prematurely.
decrypter, err := newDecryptWriterWithObjectKey(ioutil.NopCloser(w.writer), partEncryptionKey, w.startSeqNum, m)
if err != nil {
return err
}
if w.decrypter != nil {
// Pro-actively close the writer such that any pending buffers
// are flushed already before we allocate a new decrypter.
err = w.decrypter.Close()
if err != nil {
return err
}
}
w.decrypter = decrypter
return nil
}
func (w *DecryptBlocksWriter) Write(p []byte) (int, error) {
var err error
var n1 int
if int64(len(p)) < w.parts[w.partIndex].Size-w.partEncRelOffset {
n1, err = w.decrypter.Write(p)
if err != nil {
return 0, err
}
w.partEncRelOffset += int64(n1)
} else {
n1, err = w.decrypter.Write(p[:w.parts[w.partIndex].Size-w.partEncRelOffset])
if err != nil {
return 0, err
}
// We should now proceed to next part, reset all values appropriately.
w.partEncRelOffset = 0
w.startSeqNum = 0
w.partIndex++
err = w.buildDecrypter(w.partIndex + 1)
if err != nil {
return 0, err
}
n1, err = w.decrypter.Write(p[n1:])
if err != nil {
return 0, err
}
w.partEncRelOffset += int64(n1)
}
return len(p), nil
}
// Close closes the DecryptBlocksWriter. It behaves like io.Closer.
func (w *DecryptBlocksWriter) Close() error {
if w.decrypter != nil {
err := w.decrypter.Close()
if err != nil {
return err
}
}
if closer, ok := w.writer.(io.Closer); ok {
return closer.Close()
}
return nil
}
// DecryptAllBlocksCopyRequest - sets up a writer which can decrypt many concatenated encrypted data
// blocks. Parts information is used to find the boundaries of each encrypted data block; this function
// decrypts all parts starting from part-1.
func DecryptAllBlocksCopyRequest(client io.Writer, r *http.Request, objInfo ObjectInfo) (io.WriteCloser, int64, error) {
w, _, size, err := DecryptBlocksRequest(client, r, 0, objInfo.Size, objInfo, true)
return w, size, err
}
// DecryptBlocksRequest - sets up a writer which can decrypt many concatenated encrypted data
// blocks. Parts information is used to find the boundaries of each encrypted data block.
func DecryptBlocksRequest(client io.Writer, r *http.Request, startOffset, length int64, objInfo ObjectInfo, copySource bool) (io.WriteCloser, int64, int64, error) {
seqNumber, encStartOffset, encLength := getEncryptedStartOffset(startOffset, length)
// The encrypted length cannot exceed the encrypted file size. If it does
// (AWS S3 allows over-long ranges), we simply clamp it to EncryptedSize().
if encLength+encStartOffset > objInfo.EncryptedSize() {
encLength = objInfo.EncryptedSize() - encStartOffset
}
if len(objInfo.Parts) == 0 || !objInfo.IsEncryptedMultipart() {
var writer io.WriteCloser
var err error
if copySource {
writer, err = DecryptCopyRequest(client, r, objInfo.UserDefined)
} else {
writer, err = DecryptRequestWithSequenceNumber(client, r, seqNumber, objInfo.UserDefined)
}
if err != nil {
return nil, 0, 0, err
}
return writer, encStartOffset, encLength, nil
}
var partStartIndex int
var partStartOffset = startOffset
// Skip parts until final offset maps to a particular part offset.
for i, part := range objInfo.Parts {
decryptedSize, err := decryptedSize(part.Size)
if err != nil {
return nil, -1, -1, err
}
partStartIndex = i
// If the remaining offset is smaller than this part's decrypted
// size, we have found the starting part; break out and start
// from this part index.
if partStartOffset < decryptedSize {
break
}
// Continue to look for next part.
partStartOffset -= decryptedSize
}
startSeqNum := partStartOffset / sseDAREPackageBlockSize
partEncRelOffset := int64(startSeqNum) * (sseDAREPackageBlockSize + sseDAREPackageMetaSize)
w := &DecryptBlocksWriter{
writer: client,
startSeqNum: uint32(startSeqNum),
partEncRelOffset: partEncRelOffset,
parts: objInfo.Parts,
partIndex: partStartIndex,
req: r,
customerKeyHeader: r.Header.Get(SSECustomerKey),
copySource: copySource,
}
w.metadata = map[string]string{}
// Copy encryption metadata for internal use.
for k, v := range objInfo.UserDefined {
w.metadata[k] = v
}
// Purge all the encryption headers.
delete(objInfo.UserDefined, ServerSideEncryptionIV)
delete(objInfo.UserDefined, ServerSideEncryptionSealAlgorithm)
delete(objInfo.UserDefined, ServerSideEncryptionSealedKey)
delete(objInfo.UserDefined, ReservedMetadataPrefix+"Encrypted-Multipart")
if w.copySource {
w.customerKeyHeader = r.Header.Get(SSECopyCustomerKey)
}
if err := w.buildDecrypter(partStartIndex + 1); err != nil {
return nil, 0, 0, err
}
return w, encStartOffset, encLength, nil
}
// getEncryptedStartOffset - fetch sequence number, encrypted start offset and encrypted length.
func getEncryptedStartOffset(offset, length int64) (seqNumber uint32, encOffset int64, encLength int64) {
onePkgSize := int64(sseDAREPackageBlockSize + sseDAREPackageMetaSize)
seqNumber = uint32(offset / sseDAREPackageBlockSize)
encOffset = int64(seqNumber) * onePkgSize
// The encrypted length is computed by dividing the end offset
// (offset+length) into 64KiB payload blocks, multiplying by the full
// package size (64KiB payload + 32 bytes of metadata), and then
// subtracting the encrypted offset to obtain the encrypted length
// on disk.
encLength = ((offset+length)/sseDAREPackageBlockSize)*onePkgSize - encOffset
// Check for a remainder, to figure out if we need one extra package to read from.
if (offset+length)%sseDAREPackageBlockSize > 0 {
encLength += onePkgSize
}
return seqNumber, encOffset, encLength
}
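// Worked example for getEncryptedStartOffset, assuming (as the comments
// above imply) sseDAREPackageBlockSize = 64*1024 and
// sseDAREPackageMetaSize = 32: for a plaintext range offset=70000,
// length=1000:
//
//	onePkgSize = 65568
//	seqNumber  = 70000 / 65536 = 1
//	encOffset  = 1 * 65568 = 65568
//	encLength  = (71000/65536)*65568 - 65568 = 0
//	71000 % 65536 = 5464 > 0, so encLength += 65568 => 65568
//
// i.e. the requested range is served by reading exactly one encrypted
// package starting at ciphertext byte 65568.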
// IsEncryptedMultipart - returns true if the encrypted content is multipart.
func (o *ObjectInfo) IsEncryptedMultipart() bool {
_, ok := o.UserDefined[ReservedMetadataPrefix+"Encrypted-Multipart"]
return ok
}
// IsEncrypted returns true if the object is marked as encrypted.
func (o *ObjectInfo) IsEncrypted() bool {
if _, ok := o.UserDefined[ServerSideEncryptionIV]; ok {
return true
}
if _, ok := o.UserDefined[ServerSideEncryptionSealAlgorithm]; ok {
return true
}
if _, ok := o.UserDefined[ServerSideEncryptionSealedKey]; ok {
return true
}
return false
}
// IsEncrypted returns true if the object is marked as encrypted.
func (li *ListPartsInfo) IsEncrypted() bool {
if _, ok := li.UserDefined[ServerSideEncryptionIV]; ok {
return true
}
if _, ok := li.UserDefined[ServerSideEncryptionSealAlgorithm]; ok {
return true
}
if _, ok := li.UserDefined[ServerSideEncryptionSealedKey]; ok {
return true
}
return false
}
func decryptedSize(encryptedSize int64) (int64, error) {
if encryptedSize == 0 {
return encryptedSize, nil
}
size := (encryptedSize / (sseDAREPackageBlockSize + sseDAREPackageMetaSize)) * sseDAREPackageBlockSize
if mod := encryptedSize % (sseDAREPackageBlockSize + sseDAREPackageMetaSize); mod > 0 {
if mod < sseDAREPackageMetaSize+1 {
return -1, errObjectTampered // object is not 0 size but smaller than the smallest valid encrypted object
}
size += mod - sseDAREPackageMetaSize
}
return size, nil
}
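// Worked example for decryptedSize, under the same assumed constants
// (64KiB payload blocks, 32 bytes of package metadata): an encrypted
// size of 2*65568 + 33 = 131169 bytes gives
//
//	size = (131169 / 65568) * 65536 = 131072
//	mod  = 131169 % 65568 = 33, which is >= 33, so size += 33 - 32 = 1
//
// i.e. a decrypted size of 131073 = 2*65536 + 1: two full blocks plus
// one trailing byte. Any non-zero mod below 33 means a final package
// smaller than its own metadata, hence errObjectTampered.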
// DecryptedSize returns the size of the object after decryption in bytes.
// It returns an error if the object is marked as encrypted but has an
// invalid size. DecryptedSize panics if the referred object is not encrypted.
func (o *ObjectInfo) DecryptedSize() (int64, error) {
if !o.IsEncrypted() {
panic("cannot compute decrypted size of an object which is not encrypted")
}
return decryptedSize(o.Size)
}
// EncryptedSize returns the size of the object after encryption.
// An encrypted object is always larger than a plain object
// except for zero size objects.
func (o *ObjectInfo) EncryptedSize() int64 {
size := (o.Size / sseDAREPackageBlockSize) * (sseDAREPackageBlockSize + sseDAREPackageMetaSize)
if mod := o.Size % (sseDAREPackageBlockSize); mod > 0 {
size += mod + sseDAREPackageMetaSize
}
return size
}
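// Worked example for EncryptedSize, under the same assumed constants: a
// plain object of 65537 bytes (one full block plus one byte) gives
//
//	size = (65537/65536) * 65568 = 65568
//	mod  = 65537 % 65536 = 1 > 0, so size += 1 + 32 = 33
//
// i.e. an encrypted size of 65601, reflecting 32 bytes of metadata
// added per package. This matches the table in the accompanying tests.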
// DecryptCopyObjectInfo tries to decrypt the provided object if it is encrypted.
// It fails if the object is encrypted and the HTTP headers don't contain
// SSE-C headers or the object is not encrypted but SSE-C headers are provided. (AWS behavior)
// DecryptCopyObjectInfo returns 'ErrNone' if the object is not encrypted or the
// decryption succeeded.
//
// DecryptCopyObjectInfo also returns whether the object is encrypted or not.
func DecryptCopyObjectInfo(info *ObjectInfo, headers http.Header) (apiErr APIErrorCode, encrypted bool) {
// Directories are never encrypted.
if info.IsDir {
return ErrNone, false
}
if apiErr, encrypted = ErrNone, info.IsEncrypted(); !encrypted && hasSSECopyCustomerHeader(headers) {
apiErr = ErrInvalidEncryptionParameters
} else if encrypted {
if !hasSSECopyCustomerHeader(headers) {
apiErr = ErrSSEEncryptedObject
return
}
var err error
if info.Size, err = info.DecryptedSize(); err != nil {
apiErr = toAPIErrorCode(err)
}
}
return
}
// DecryptObjectInfo tries to decrypt the provided object if it is encrypted.
// It fails if the object is encrypted and the HTTP headers don't contain
// SSE-C headers or the object is not encrypted but SSE-C headers are provided. (AWS behavior)
// DecryptObjectInfo returns 'ErrNone' if the object is not encrypted or the
// decryption succeeded.
//
// DecryptObjectInfo also returns whether the object is encrypted or not.
func DecryptObjectInfo(info *ObjectInfo, headers http.Header) (apiErr APIErrorCode, encrypted bool) {
// Directories are never encrypted.
if info.IsDir {
return ErrNone, false
}
if apiErr, encrypted = ErrNone, info.IsEncrypted(); !encrypted && hasSSECustomerHeader(headers) {
apiErr = ErrInvalidEncryptionParameters
} else if encrypted {
if !hasSSECustomerHeader(headers) {
apiErr = ErrSSEEncryptedObject
return
}
var err error
if info.Size, err = info.DecryptedSize(); err != nil {
apiErr = toAPIErrorCode(err)
}
}
return
}

517
cmd/encryption-v1_test.go Normal file

@@ -0,0 +1,517 @@
/*
* Minio Cloud Storage, (C) 2017, 2018 Minio, Inc.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package cmd
import (
"bytes"
"net/http"
"testing"
)
var hasSSECopyCustomerHeaderTests = []struct {
headers map[string]string
sseRequest bool
}{
{headers: map[string]string{SSECopyCustomerAlgorithm: "AES256", SSECopyCustomerKey: "key", SSECopyCustomerKeyMD5: "md5"}, sseRequest: true}, // 0
{headers: map[string]string{SSECopyCustomerAlgorithm: "AES256"}, sseRequest: true}, // 1
{headers: map[string]string{SSECopyCustomerKey: "key"}, sseRequest: true}, // 2
{headers: map[string]string{SSECopyCustomerKeyMD5: "md5"}, sseRequest: true}, // 3
{headers: map[string]string{}, sseRequest: false}, // 4
{headers: map[string]string{SSECopyCustomerAlgorithm + " ": "AES256", " " + SSECopyCustomerKey: "key", SSECopyCustomerKeyMD5 + " ": "md5"}, sseRequest: false}, // 5
{headers: map[string]string{SSECopyCustomerAlgorithm: "", SSECopyCustomerKey: "", SSECopyCustomerKeyMD5: ""}, sseRequest: false}, // 6
}
func TestIsSSECopyCustomerRequest(t *testing.T) {
for i, test := range hasSSECopyCustomerHeaderTests {
headers := http.Header{}
for k, v := range test.headers {
headers.Set(k, v)
}
if hasSSECopyCustomerHeader(headers) != test.sseRequest {
t.Errorf("Test %d: Expected hasSSECopyCustomerHeader to return %v", i, test.sseRequest)
}
}
}
var hasSSECustomerHeaderTests = []struct {
headers map[string]string
sseRequest bool
}{
{headers: map[string]string{SSECustomerAlgorithm: "AES256", SSECustomerKey: "key", SSECustomerKeyMD5: "md5"}, sseRequest: true}, // 0
{headers: map[string]string{SSECustomerAlgorithm: "AES256"}, sseRequest: true}, // 1
{headers: map[string]string{SSECustomerKey: "key"}, sseRequest: true}, // 2
{headers: map[string]string{SSECustomerKeyMD5: "md5"}, sseRequest: true}, // 3
{headers: map[string]string{}, sseRequest: false}, // 4
{headers: map[string]string{SSECustomerAlgorithm + " ": "AES256", " " + SSECustomerKey: "key", SSECustomerKeyMD5 + " ": "md5"}, sseRequest: false}, // 5
{headers: map[string]string{SSECustomerAlgorithm: "", SSECustomerKey: "", SSECustomerKeyMD5: ""}, sseRequest: false}, // 6
}
func TestHasSSECustomerHeader(t *testing.T) {
for i, test := range hasSSECustomerHeaderTests {
headers := http.Header{}
for k, v := range test.headers {
headers.Set(k, v)
}
if hasSSECustomerHeader(headers) != test.sseRequest {
t.Errorf("Test %d: Expected hasSSECustomerHeader to return %v", i, test.sseRequest)
}
}
}
var parseSSECustomerRequestTests = []struct {
headers map[string]string
useTLS bool
err error
}{
{
headers: map[string]string{
SSECustomerAlgorithm: "AES256",
SSECustomerKey: "XAm0dRrJsEsyPb1UuFNezv1bl9hxuYsgUVC/MUctE2k=", // 0
SSECustomerKeyMD5: "bY4wkxQejw9mUJfo72k53A==",
},
useTLS: true, err: nil,
},
{
headers: map[string]string{
SSECustomerAlgorithm: "AES256",
SSECustomerKey: "XAm0dRrJsEsyPb1UuFNezv1bl9hxuYsgUVC/MUctE2k=", // 1
SSECustomerKeyMD5: "bY4wkxQejw9mUJfo72k53A==",
},
useTLS: false, err: errInsecureSSERequest,
},
{
headers: map[string]string{
SSECustomerAlgorithm: "AES 256",
SSECustomerKey: "XAm0dRrJsEsyPb1UuFNezv1bl9hxuYsgUVC/MUctE2k=", // 2
SSECustomerKeyMD5: "bY4wkxQejw9mUJfo72k53A==",
},
useTLS: true, err: errInvalidSSEAlgorithm,
},
{
headers: map[string]string{
SSECustomerAlgorithm: "AES256",
SSECustomerKey: "NjE0SL87s+ZhYtaTrg5eI5cjhCQLGPVMKenPG2bCJFw=", // 3
SSECustomerKeyMD5: "H+jq/LwEOEO90YtiTuNFVw==",
},
useTLS: true, err: errSSEKeyMD5Mismatch,
},
{
headers: map[string]string{
SSECustomerAlgorithm: "AES256",
SSECustomerKey: " jE0SL87s+ZhYtaTrg5eI5cjhCQLGPVMKenPG2bCJFw=", // 4
SSECustomerKeyMD5: "H+jq/LwEOEO90YtiTuNFVw==",
},
useTLS: true, err: errInvalidSSEKey,
},
{
headers: map[string]string{
SSECustomerAlgorithm: "AES256",
SSECustomerKey: "NjE0SL87s+ZhYtaTrg5eI5cjhCQLGPVMKenPG2bCJFw=", // 5
SSECustomerKeyMD5: " +jq/LwEOEO90YtiTuNFVw==",
},
useTLS: true, err: errSSEKeyMD5Mismatch,
},
{
headers: map[string]string{
SSECustomerAlgorithm: "AES256",
SSECustomerKey: "vFQ9ScFOF6Tu/BfzMS+rVMvlZGJHi5HmGJenJfrfKI45", // 6
SSECustomerKeyMD5: "9KPgDdZNTHimuYCwnJTp5g==",
},
useTLS: true, err: errInvalidSSEKey,
},
{
headers: map[string]string{
SSECustomerAlgorithm: "AES256",
SSECustomerKey: "", // 7
SSECustomerKeyMD5: "9KPgDdZNTHimuYCwnJTp5g==",
},
useTLS: true, err: errMissingSSEKey,
},
{
headers: map[string]string{
SSECustomerAlgorithm: "AES256",
SSECustomerKey: "vFQ9ScFOF6Tu/BfzMS+rVMvlZGJHi5HmGJenJfrfKI45", // 8
SSECustomerKeyMD5: "",
},
useTLS: true, err: errMissingSSEKeyMD5,
},
}
func TestParseSSECustomerRequest(t *testing.T) {
defer func(flag bool) { globalIsSSL = flag }(globalIsSSL)
for i, test := range parseSSECustomerRequestTests {
headers := http.Header{}
for k, v := range test.headers {
headers.Set(k, v)
}
request := &http.Request{}
request.Header = headers
globalIsSSL = test.useTLS
_, err := ParseSSECustomerRequest(request)
if err != test.err {
t.Errorf("Test %d: Parse returned: %v want: %v", i, err, test.err)
}
key := request.Header.Get(SSECustomerKey)
if (err == nil || err == errSSEKeyMD5Mismatch) && key != "" {
t.Errorf("Test %d: Client key survived parsing - found key: %v", i, key)
}
}
}
var parseSSECopyCustomerRequestTests = []struct {
headers map[string]string
useTLS bool
err error
}{
{
headers: map[string]string{
SSECopyCustomerAlgorithm: "AES256",
SSECopyCustomerKey: "XAm0dRrJsEsyPb1UuFNezv1bl9hxuYsgUVC/MUctE2k=", // 0
SSECopyCustomerKeyMD5: "bY4wkxQejw9mUJfo72k53A==",
},
useTLS: true, err: nil,
},
{
headers: map[string]string{
SSECopyCustomerAlgorithm: "AES256",
SSECopyCustomerKey: "XAm0dRrJsEsyPb1UuFNezv1bl9hxuYsgUVC/MUctE2k=", // 1
SSECopyCustomerKeyMD5: "bY4wkxQejw9mUJfo72k53A==",
},
useTLS: false, err: errInsecureSSERequest,
},
{
headers: map[string]string{
SSECopyCustomerAlgorithm: "AES 256",
SSECopyCustomerKey: "XAm0dRrJsEsyPb1UuFNezv1bl9hxuYsgUVC/MUctE2k=", // 2
SSECopyCustomerKeyMD5: "bY4wkxQejw9mUJfo72k53A==",
},
useTLS: true, err: errInvalidSSEAlgorithm,
},
{
headers: map[string]string{
SSECopyCustomerAlgorithm: "AES256",
SSECopyCustomerKey: "NjE0SL87s+ZhYtaTrg5eI5cjhCQLGPVMKenPG2bCJFw=", // 3
SSECopyCustomerKeyMD5: "H+jq/LwEOEO90YtiTuNFVw==",
},
useTLS: true, err: errSSEKeyMD5Mismatch,
},
{
headers: map[string]string{
SSECopyCustomerAlgorithm: "AES256",
SSECopyCustomerKey: " jE0SL87s+ZhYtaTrg5eI5cjhCQLGPVMKenPG2bCJFw=", // 4
SSECopyCustomerKeyMD5: "H+jq/LwEOEO90YtiTuNFVw==",
},
useTLS: true, err: errInvalidSSEKey,
},
{
headers: map[string]string{
SSECopyCustomerAlgorithm: "AES256",
SSECopyCustomerKey: "NjE0SL87s+ZhYtaTrg5eI5cjhCQLGPVMKenPG2bCJFw=", // 5
SSECopyCustomerKeyMD5: " +jq/LwEOEO90YtiTuNFVw==",
},
useTLS: true, err: errSSEKeyMD5Mismatch,
},
{
headers: map[string]string{
SSECopyCustomerAlgorithm: "AES256",
SSECopyCustomerKey: "vFQ9ScFOF6Tu/BfzMS+rVMvlZGJHi5HmGJenJfrfKI45", // 6
SSECopyCustomerKeyMD5: "9KPgDdZNTHimuYCwnJTp5g==",
},
useTLS: true, err: errInvalidSSEKey,
},
{
headers: map[string]string{
SSECopyCustomerAlgorithm: "AES256",
SSECopyCustomerKey: "", // 7
SSECopyCustomerKeyMD5: "9KPgDdZNTHimuYCwnJTp5g==",
},
useTLS: true, err: errMissingSSEKey,
},
{
headers: map[string]string{
SSECopyCustomerAlgorithm: "AES256",
SSECopyCustomerKey: "vFQ9ScFOF6Tu/BfzMS+rVMvlZGJHi5HmGJenJfrfKI45", // 8
SSECopyCustomerKeyMD5: "",
},
useTLS: true, err: errMissingSSEKeyMD5,
},
}
func TestParseSSECopyCustomerRequest(t *testing.T) {
defer func(flag bool) { globalIsSSL = flag }(globalIsSSL)
for i, test := range parseSSECopyCustomerRequestTests {
headers := http.Header{}
for k, v := range test.headers {
headers.Set(k, v)
}
request := &http.Request{}
request.Header = headers
globalIsSSL = test.useTLS
_, err := ParseSSECopyCustomerRequest(request)
if err != test.err {
t.Errorf("Test %d: Parse returned: %v want: %v", i, err, test.err)
}
key := request.Header.Get(SSECopyCustomerKey)
if (err == nil || err == errSSEKeyMD5Mismatch) && key != "" {
t.Errorf("Test %d: Client key survived parsing - found key: %v", i, key)
}
}
}
var encryptedSizeTests = []struct {
size, encsize int64
}{
{size: 0, encsize: 0}, // 0
{size: 1, encsize: 33}, // 1
{size: 1024, encsize: 1024 + 32}, // 2
{size: 2 * sseDAREPackageBlockSize, encsize: 2 * (sseDAREPackageBlockSize + 32)}, // 3
{size: 100*sseDAREPackageBlockSize + 1, encsize: 100*(sseDAREPackageBlockSize+32) + 33}, // 4
{size: sseDAREPackageBlockSize + 1, encsize: (sseDAREPackageBlockSize + 32) + 33}, // 5
{size: 5 * 1024 * 1024 * 1024, encsize: 81920 * (sseDAREPackageBlockSize + 32)}, // 6
}
func TestEncryptedSize(t *testing.T) {
for i, test := range encryptedSizeTests {
objInfo := ObjectInfo{Size: test.size}
if size := objInfo.EncryptedSize(); test.encsize != size {
t.Errorf("Test %d: got encrypted size: #%d want: #%d", i, size, test.encsize)
}
}
}
var decryptSSECustomerObjectInfoTests = []struct {
encsize, size int64
err error
}{
{encsize: 0, size: 0, err: nil}, // 0
{encsize: 33, size: 1, err: nil}, // 1
{encsize: 1024 + 32, size: 1024, err: nil}, // 2
{encsize: 2 * (sseDAREPackageBlockSize + 32), size: 2 * sseDAREPackageBlockSize, err: nil}, // 3
{encsize: 100*(sseDAREPackageBlockSize+32) + 33, size: 100*sseDAREPackageBlockSize + 1, err: nil}, // 4
{encsize: (sseDAREPackageBlockSize + 32) + 33, size: sseDAREPackageBlockSize + 1, err: nil}, // 5
{encsize: 81920 * (sseDAREPackageBlockSize + 32), size: 5 * 1024 * 1024 * 1024, err: nil}, // 6
{encsize: 0, size: 0, err: nil}, // 7
{encsize: sseDAREPackageBlockSize + 32 + 31, size: 0, err: errObjectTampered}, // 8
}
func TestDecryptedSize(t *testing.T) {
for i, test := range decryptSSECustomerObjectInfoTests {
objInfo := ObjectInfo{Size: test.encsize}
objInfo.UserDefined = map[string]string{
ServerSideEncryptionSealAlgorithm: SSESealAlgorithmDareSha256,
}
size, err := objInfo.DecryptedSize()
if err != test.err || (size != test.size && err == nil) {
t.Errorf("Test %d: decryption returned: %v want: %v", i, err, test.err)
}
if err == nil && size != test.size {
t.Errorf("Test %d: got decrypted size: #%d want: #%d", i, size, test.size)
}
}
}
var encryptRequestTests = []struct {
header map[string]string
metadata map[string]string
}{
{
header: map[string]string{
SSECustomerAlgorithm: "AES256",
SSECustomerKey: "XAm0dRrJsEsyPb1UuFNezv1bl9hxuYsgUVC/MUctE2k=",
SSECustomerKeyMD5: "bY4wkxQejw9mUJfo72k53A==",
},
metadata: map[string]string{},
},
{
header: map[string]string{
SSECustomerAlgorithm: "AES256",
SSECustomerKey: "XAm0dRrJsEsyPb1UuFNezv1bl9hxuYsgUVC/MUctE2k=",
SSECustomerKeyMD5: "bY4wkxQejw9mUJfo72k53A==",
},
metadata: map[string]string{
SSECustomerKey: "XAm0dRrJsEsyPb1UuFNezv1bl9hxuYsgUVC/MUctE2k=",
},
},
}
func TestEncryptRequest(t *testing.T) {
defer func(flag bool) { globalIsSSL = flag }(globalIsSSL)
globalIsSSL = true
for i, test := range encryptRequestTests {
content := bytes.NewReader(make([]byte, 64))
req := &http.Request{Header: http.Header{}}
for k, v := range test.header {
req.Header.Set(k, v)
}
_, err := EncryptRequest(content, req, test.metadata)
if err != nil {
t.Fatalf("Test %d: Failed to encrypt request: %v", i, err)
}
if key, ok := test.metadata[SSECustomerKey]; ok {
t.Errorf("Test %d: Client provided key survived in metadata - key: %s", i, key)
}
if kdf, ok := test.metadata[ServerSideEncryptionSealAlgorithm]; !ok {
t.Errorf("Test %d: ServerSideEncryptionKDF must be part of metadata: %v", i, kdf)
}
if iv, ok := test.metadata[ServerSideEncryptionIV]; !ok {
t.Errorf("Test %d: ServerSideEncryptionIV must be part of metadata: %v", i, iv)
}
if mac, ok := test.metadata[ServerSideEncryptionSealedKey]; !ok {
t.Errorf("Test %d: ServerSideEncryptionKeyMAC must be part of metadata: %v", i, mac)
}
}
}
var decryptRequestTests = []struct {
header map[string]string
metadata map[string]string
shouldFail bool
}{
{
header: map[string]string{
SSECustomerAlgorithm: "AES256",
SSECustomerKey: "MzJieXRlc2xvbmdzZWNyZXRrZXltdXN0cHJvdmlkZWQ=",
SSECustomerKeyMD5: "7PpPLAK26ONlVUGOWlusfg==",
},
metadata: map[string]string{
ServerSideEncryptionSealAlgorithm: SSESealAlgorithmDareSha256,
ServerSideEncryptionIV: "7nQqotA8xgrPx6QK7Ap3GCfjKitqJSrGP7xzgErSJlw=",
ServerSideEncryptionSealedKey: "EAAfAAAAAAD7v1hQq3PFRUHsItalxmrJqrOq6FwnbXNarxOOpb8jTWONPPKyM3Gfjkjyj6NCf+aB/VpHCLCTBA==",
},
shouldFail: false,
},
{
header: map[string]string{
SSECustomerAlgorithm: "AES256",
SSECustomerKey: "XAm0dRrJsEsyPb1UuFNezv1bl9hxuYsgUVC/MUctE2k=",
SSECustomerKeyMD5: "bY4wkxQejw9mUJfo72k53A==",
},
metadata: map[string]string{
ServerSideEncryptionSealAlgorithm: "HMAC-SHA3",
ServerSideEncryptionIV: "XAm0dRrJsEsyPb1UuFNezv1bl9hxuYsgUVC/MUctE2k=",
ServerSideEncryptionSealedKey: "SY5E9AvI2tI7/nUrUAssIGE32Hcs4rR9z/CUuPqu5N4=",
},
shouldFail: true,
},
{
header: map[string]string{
SSECustomerAlgorithm: "AES256",
SSECustomerKey: "XAm0dRrJsEsyPb1UuFNezv1bl9hxuYsgUVC/MUctE2k=",
SSECustomerKeyMD5: "bY4wkxQejw9mUJfo72k53A==",
},
metadata: map[string]string{
ServerSideEncryptionSealAlgorithm: SSESealAlgorithmDareSha256,
ServerSideEncryptionIV: "RrJsEsyPb1UuFNezv1bl9hxuYsgUVC/MUctE2k=",
ServerSideEncryptionSealedKey: "SY5E9AvI2tI7/nUrUAssIGE32Hcs4rR9z/CUuPqu5N4=",
},
shouldFail: true,
},
{
header: map[string]string{
SSECustomerAlgorithm: "AES256",
SSECustomerKey: "XAm0dRrJsEsyPb1UuFNezv1bl9hxuYsgUVC/MUctE2k=",
SSECustomerKeyMD5: "bY4wkxQejw9mUJfo72k53A==",
},
metadata: map[string]string{
ServerSideEncryptionSealAlgorithm: SSESealAlgorithmDareSha256,
ServerSideEncryptionIV: "XAm0dRrJsEsyPb1UuFNezv1bl9ehxuYsgUVC/MUctE2k=",
ServerSideEncryptionSealedKey: "SY5E9AvI2tI7/nUrUAssIGE32Hds4rR9z/CUuPqu5N4=",
},
shouldFail: true,
},
}
func TestDecryptRequest(t *testing.T) {
defer func(flag bool) { globalIsSSL = flag }(globalIsSSL)
globalIsSSL = true
for i, test := range decryptRequestTests {
client := bytes.NewBuffer(nil)
req := &http.Request{Header: http.Header{}}
for k, v := range test.header {
req.Header.Set(k, v)
}
_, err := DecryptRequest(client, req, test.metadata)
if err != nil && !test.shouldFail {
t.Fatalf("Test %d: Failed to encrypt request: %v", i, err)
}
if key, ok := test.metadata[SSECustomerKey]; ok {
t.Errorf("Test %d: Client provided key survived in metadata - key: %s", i, key)
}
if kdf, ok := test.metadata[ServerSideEncryptionSealAlgorithm]; ok && !test.shouldFail {
t.Errorf("Test %d: ServerSideEncryptionKDF should not be part of metadata: %v", i, kdf)
}
if iv, ok := test.metadata[ServerSideEncryptionIV]; ok && !test.shouldFail {
t.Errorf("Test %d: ServerSideEncryptionIV should not be part of metadata: %v", i, iv)
}
if mac, ok := test.metadata[ServerSideEncryptionSealedKey]; ok && !test.shouldFail {
t.Errorf("Test %d: ServerSideEncryptionKeyMAC should not be part of metadata: %v", i, mac)
}
}
}
var decryptObjectInfoTests = []struct {
info ObjectInfo
headers http.Header
expErr APIErrorCode
}{
{
info: ObjectInfo{Size: 100},
headers: http.Header{},
expErr: ErrNone,
},
{
info: ObjectInfo{Size: 100, UserDefined: map[string]string{ServerSideEncryptionSealAlgorithm: SSESealAlgorithmDareSha256}},
headers: http.Header{SSECustomerAlgorithm: []string{SSECustomerAlgorithmAES256}},
expErr: ErrNone,
},
{
info: ObjectInfo{Size: 0, UserDefined: map[string]string{ServerSideEncryptionSealAlgorithm: SSESealAlgorithmDareSha256}},
headers: http.Header{SSECustomerAlgorithm: []string{SSECustomerAlgorithmAES256}},
expErr: ErrNone,
},
{
info: ObjectInfo{Size: 100, UserDefined: map[string]string{ServerSideEncryptionSealAlgorithm: SSESealAlgorithmDareSha256}},
headers: http.Header{},
expErr: ErrSSEEncryptedObject,
},
{
info: ObjectInfo{Size: 100, UserDefined: map[string]string{}},
headers: http.Header{SSECustomerAlgorithm: []string{SSECustomerAlgorithmAES256}},
expErr: ErrInvalidEncryptionParameters,
},
{
info: ObjectInfo{Size: 31, UserDefined: map[string]string{ServerSideEncryptionSealAlgorithm: SSESealAlgorithmDareSha256}},
headers: http.Header{SSECustomerAlgorithm: []string{SSECustomerAlgorithmAES256}},
expErr: ErrObjectTampered,
},
}
func TestDecryptObjectInfo(t *testing.T) {
for i, test := range decryptObjectInfoTests {
if err, encrypted := DecryptObjectInfo(&test.info, test.headers); err != test.expErr {
t.Errorf("Test %d: Decryption returned wrong error code: got %d , want %d", i, err, test.expErr)
} else if enc := test.info.IsEncrypted(); encrypted && enc != encrypted {
t.Errorf("Test %d: Decryption thinks object is encrypted but it is not", i)
} else if !encrypted && enc != encrypted {
t.Errorf("Test %d: Decryption thinks object is not encrypted but it is", i)
}
}
}

251
cmd/endpoint-ellipses.go Normal file

@@ -0,0 +1,251 @@
/*
* Minio Cloud Storage, (C) 2018 Minio, Inc.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package cmd
import (
"fmt"
"strings"
"github.com/minio/minio-go/pkg/set"
"github.com/minio/minio/pkg/ellipses"
)
// This file implements and supports ellipses pattern for
// `minio server` command line arguments.
// Maximum number of unique args supported on the command line.
const (
serverCommandLineArgsMax = 32
)
// endpointSet represents parsed ellipses values, and also provides
// methods to get the sets of endpoints.
type endpointSet struct {
argPatterns []ellipses.ArgPattern
endpoints []string // Endpoints saved from previous GetEndpoints().
setIndexes [][]uint64 // All the sets.
}
// Supported set sizes; used to find the optimal single set size.
var setSizes = []uint64{4, 6, 8, 10, 12, 14, 16}
// getDivisibleSize - returns the greatest common divisor of
// all the ellipses sizes.
func getDivisibleSize(totalSizes []uint64) (result uint64) {
gcd := func(x, y uint64) uint64 {
for y != 0 {
x, y = y, x%y
}
return x
}
result = totalSizes[0]
for i := 1; i < len(totalSizes); i++ {
result = gcd(result, totalSizes[i])
}
return result
}
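// Example for getDivisibleSize: for three ellipses arguments expanding
// to 24, 32 and 16 endpoints respectively,
//
//	gcd(24, 32) = 8 and gcd(8, 16) = 8
//
// so the largest candidate set size shared by all arguments is 8.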
// getSetIndexes returns a list of indexes which provides the set size
// for each index. This function also determines the final set size;
// the selection has an affinity towards fewer, larger sets.
func getSetIndexes(args []string, totalSizes []uint64) (setIndexes [][]uint64, err error) {
if len(totalSizes) == 0 || len(args) == 0 {
return nil, errInvalidArgument
}
setIndexes = make([][]uint64, len(totalSizes))
for i, totalSize := range totalSizes {
// Check that totalSize is at least the smallest supported set size.
if totalSize < setSizes[0] {
return nil, fmt.Errorf("Invalid inputs (%s). Ellipses range or number of args %d should be atleast divisible by least possible set size %d",
args[i], totalSize, setSizes[0])
}
}
var setSize uint64
commonSize := getDivisibleSize(totalSizes)
if commonSize > setSizes[len(setSizes)-1] {
prevD := commonSize / setSizes[0]
for _, i := range setSizes {
if commonSize%i == 0 {
d := commonSize / i
if d <= prevD {
prevD = d
setSize = i
}
}
}
} else {
setSize = commonSize
}
// isValidSetSize - checks whether given count is a valid set size for erasure coding.
isValidSetSize := func(count uint64) bool {
return (count >= setSizes[0] && count <= setSizes[len(setSizes)-1] && count%2 == 0)
}
// Check whether setSize is within the supported range.
if !isValidSetSize(setSize) {
return nil, fmt.Errorf("Invalid inputs (%s). Ellipses range or number of args %d should be atleast divisible by least possible set size %d",
args, setSize, setSizes[0])
}
for i := range totalSizes {
for j := uint64(0); j < totalSizes[i]/setSize; j++ {
setIndexes[i] = append(setIndexes[i], setSize)
}
}
return setIndexes, nil
}
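// Worked example of the set size selection above: for args
// ["data{1...64}"] and totalSizes [64], commonSize is 64, which exceeds
// the largest supported set size (16). The supported sizes dividing 64
// are 4, 8 and 16; 16 yields the fewest sets (64/16 = 4), so setSize
// becomes 16 and the result is [][]uint64{{16, 16, 16, 16}}.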
// Returns all the expanded endpoints, each argument is expanded separately.
func (s endpointSet) getEndpoints() (endpoints []string) {
if len(s.endpoints) != 0 {
return s.endpoints
}
for _, argPattern := range s.argPatterns {
for _, lbls := range argPattern.Expand() {
endpoints = append(endpoints, strings.Join(lbls, ""))
}
}
s.endpoints = endpoints
return endpoints
}
// Get returns the sets representation of the endpoints; this function
// also intelligently decides what the right set size will be.
func (s endpointSet) Get() (sets [][]string) {
var k = uint64(0)
endpoints := s.getEndpoints()
for i := range s.setIndexes {
for j := range s.setIndexes[i] {
sets = append(sets, endpoints[k:s.setIndexes[i][j]+k])
k = s.setIndexes[i][j] + k
}
}
return sets
}
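// Sketch of how Get chunks endpoints: with eight expanded endpoints
// e0..e7 and setIndexes [][]uint64{{4, 4}}, the loop slices the flat
// list into two sets, [e0 e1 e2 e3] and [e4 e5 e6 e7].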
// Returns the total size for each argument pattern.
func getTotalSizes(argPatterns []ellipses.ArgPattern) []uint64 {
var totalSizes []uint64
for _, argPattern := range argPatterns {
var totalSize uint64 = 1
for _, p := range argPattern {
totalSize = totalSize * uint64(len(p.Seq))
}
totalSizes = append(totalSizes, totalSize)
}
return totalSizes
}
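// Example: "data/controller{1...11}/export{1...8}" expands into two
// sequences of lengths 11 and 8, so its total size is 11*8 = 88
// endpoints, matching the test case for this pattern in the
// accompanying tests.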
// Parses all arguments and returns an endpointSet which is a collection
// of endpoints following the ellipses pattern. This is what is used
// by the object layer for initializing itself.
func parseEndpointSet(args ...string) (ep endpointSet, err error) {
var argPatterns = make([]ellipses.ArgPattern, len(args))
for i, arg := range args {
patterns, perr := ellipses.FindEllipsesPatterns(arg)
if perr != nil {
return endpointSet{}, perr
}
argPatterns[i] = patterns
}
ep.setIndexes, err = getSetIndexes(args, getTotalSizes(argPatterns))
if err != nil {
return endpointSet{}, err
}
ep.argPatterns = argPatterns
return ep, nil
}
// Parses all ellipses input arguments, expands them into a corresponding
// list of endpoints chunked evenly in accordance with a specific
// set size.
// For example: {1...64} is divided into 4 sets each of size 16.
// This applies to distributed setup syntax as well.
func getAllSets(args ...string) ([][]string, error) {
if len(args) == 0 {
return nil, errInvalidArgument
}
var setArgs [][]string
if !ellipses.HasEllipses(args...) {
var setIndexes [][]uint64
// Check if we have more than one arg.
if len(args) > 1 {
var err error
setIndexes, err = getSetIndexes(args, []uint64{uint64(len(args))})
if err != nil {
return nil, err
}
} else {
// We are in FS setup, proceed forward.
setIndexes = [][]uint64{{uint64(len(args))}}
}
s := endpointSet{
endpoints: args,
setIndexes: setIndexes,
}
setArgs = s.Get()
} else {
s, err := parseEndpointSet(args...)
if err != nil {
return nil, err
}
setArgs = s.Get()
}
uniqueArgs := set.NewStringSet()
for _, sargs := range setArgs {
for _, arg := range sargs {
if uniqueArgs.Contains(arg) {
return nil, fmt.Errorf("Input args (%s) has duplicate ellipses", args)
}
uniqueArgs.Add(arg)
}
}
return setArgs, nil
}
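// Usage sketch for getAllSets: getAllSets("/export{1...64}") expands to
// 64 unique paths and, per the set size selection above, returns 4 sets
// of 16 endpoints each. Passing overlapping ellipses such as
// "/export{1...32}" twice trips the duplicate check and returns an
// error instead.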
// createServerEndpoints - validates and creates new endpoints from input args, supports
// both ellipses and non-ellipses arguments transparently.
func createServerEndpoints(serverAddr string, args ...string) (string, EndpointList, SetupType, int, int, error) {
setArgs, err := getAllSets(args...)
if err != nil {
return serverAddr, nil, -1, 0, 0, err
}
var endpoints EndpointList
var setupType SetupType
serverAddr, endpoints, setupType, err = CreateEndpoints(serverAddr, setArgs...)
if err != nil {
return serverAddr, nil, -1, 0, 0, err
}
return serverAddr, endpoints, setupType, len(setArgs), len(setArgs[0]), nil
}

388
cmd/endpoint-ellipses_test.go Normal file

@@ -0,0 +1,388 @@
/*
* Minio Cloud Storage, (C) 2018 Minio, Inc.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package cmd
import (
"fmt"
"reflect"
"testing"
"github.com/minio/minio/pkg/ellipses"
)
// Tests creating endpoints with and without ellipses.
func TestCreateServerEndpoints(t *testing.T) {
testCases := []struct {
serverAddr string
args []string
success bool
}{
// Invalid input.
{"", []string{}, false},
// Range cannot be negative.
{":9000", []string{"/export1{-1...1}"}, false},
// Range cannot start bigger than end.
{":9000", []string{"/export1{64...1}"}, false},
// Range can only be numeric.
{":9000", []string{"/export1{a...z}"}, false},
// Duplicate disks not allowed.
{":9000", []string{"/export1{1...32}", "/export1{1...32}"}, false},
// Same host cannot export same disk on two ports - special case localhost.
{":9001", []string{"http://localhost:900{1...2}/export{1...64}"}, false},
// Valid inputs.
{":9000", []string{"/export1"}, true},
{":9000", []string{"/export1", "/export2", "/export3", "/export4"}, true},
{":9000", []string{"/export1{1...64}"}, true},
{":9000", []string{"/export1{01...64}"}, true},
{":9000", []string{"/export1{1...32}", "/export1{33...64}"}, true},
{":9001", []string{"http://localhost:9001/export{1...64}"}, true},
{":9001", []string{"http://localhost:9001/export{01...64}"}, true},
}
for i, testCase := range testCases {
_, _, _, _, _, err := createServerEndpoints(testCase.serverAddr, testCase.args...)
if err != nil && testCase.success {
t.Errorf("Test %d: Expected success but failed instead %s", i+1, err)
}
if err == nil && !testCase.success {
t.Errorf("Test %d: Expected failure but passed instead", i+1)
}
}
}
// Tests calculating set indexes.
func TestGetSetIndexes(t *testing.T) {
testCases := []struct {
args []string
totalSizes []uint64
indexes [][]uint64
success bool
}{
// Invalid inputs.
{
[]string{"data{1...27}"},
[]uint64{27},
nil,
false,
},
// Valid inputs.
{
[]string{"data{1...64}"},
[]uint64{64},
[][]uint64{{16, 16, 16, 16}},
true,
},
{
[]string{"data{1...24}"},
[]uint64{24},
[][]uint64{{12, 12}},
true,
},
{
[]string{"data/controller{1...11}/export{1...8}"},
[]uint64{88},
[][]uint64{{8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8}},
true,
},
{
[]string{"data{1...4}"},
[]uint64{4},
[][]uint64{{4}},
true,
},
}
for i, testCase := range testCases {
t.Run(fmt.Sprintf("Test%d", i+1), func(t *testing.T) {
gotIndexes, err := getSetIndexes(testCase.args, testCase.totalSizes)
if err != nil && testCase.success {
t.Errorf("Expected success but failed instead %s", err)
}
if err == nil && !testCase.success {
t.Errorf("Expected failure but passed instead")
}
if !reflect.DeepEqual(testCase.indexes, gotIndexes) {
t.Errorf("Expected %v, got %v", testCase.indexes, gotIndexes)
}
})
}
}
func getSequences(start int, number int, paddinglen int) (seq []string) {
for i := start; i <= number; i++ {
if paddinglen == 0 {
seq = append(seq, fmt.Sprintf("%d", i))
} else {
seq = append(seq, fmt.Sprintf(fmt.Sprintf("%%0%dd", paddinglen), i))
}
}
return seq
}
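// Example: getSequences(1, 4, 0) returns ["1" "2" "3" "4"], while
// getSequences(8, 10, 2) zero-pads to two digits and returns
// ["08" "09" "10"].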
// Tests parsing of endpoint ellipses input patterns.
func TestParseEndpointSet(t *testing.T) {
testCases := []struct {
arg string
es endpointSet
success bool
}{
// Tests invalid inputs.
{
"...",
endpointSet{},
false,
},
// Indivisible range.
{
"{1...27}",
endpointSet{},
false,
},
// No range specified.
{
"{...}",
endpointSet{},
false,
},
// Invalid range.
{
"http://minio{2...3}/export/set{1...0}",
endpointSet{},
false,
},
// Range cannot be smaller than 4 minimum.
{
"/export{1..2}",
endpointSet{},
false,
},
// Unsupported characters.
{
"/export/test{1...2O}",
endpointSet{},
false,
},
// Tests valid inputs.
{
"/export/set{1...64}",
endpointSet{
[]ellipses.ArgPattern{
[]ellipses.Pattern{
{
"/export/set",
"",
getSequences(1, 64, 0),
},
},
},
nil,
[][]uint64{{16, 16, 16, 16}},
},
true,
},
// Valid input for distributed setup.
{
"http://minio{2...3}/export/set{1...64}",
endpointSet{
[]ellipses.ArgPattern{
[]ellipses.Pattern{
{
"",
"",
getSequences(1, 64, 0),
},
{
"http://minio",
"/export/set",
getSequences(2, 3, 0),
},
},
},
nil,
[][]uint64{{16, 16, 16, 16, 16, 16, 16, 16}},
},
true,
},
// Supporting some advanced cases.
{
"http://minio{1...64}.mydomain.net/data",
endpointSet{
[]ellipses.ArgPattern{
[]ellipses.Pattern{
{
"http://minio",
".mydomain.net/data",
getSequences(1, 64, 0),
},
},
},
nil,
[][]uint64{{16, 16, 16, 16}},
},
true,
},
{
"http://rack{1...4}.mydomain.minio{1...16}/data",
endpointSet{
[]ellipses.ArgPattern{
[]ellipses.Pattern{
{
"",
"/data",
getSequences(1, 16, 0),
},
{
"http://rack",
".mydomain.minio",
getSequences(1, 4, 0),
},
},
},
nil,
[][]uint64{{16, 16, 16, 16}},
},
true,
},
// Supporting kubernetes cases.
{
"http://minio{0...15}.mydomain.net/data{0...1}",
endpointSet{
[]ellipses.ArgPattern{
[]ellipses.Pattern{
{
"",
"",
getSequences(0, 1, 0),
},
{
"http://minio",
".mydomain.net/data",
getSequences(0, 15, 0),
},
},
},
nil,
[][]uint64{{16, 16}},
},
true,
},
// No host ellipses, just disks.
{
"http://server1/data{1...32}",
endpointSet{
[]ellipses.ArgPattern{
[]ellipses.Pattern{
{
"http://server1/data",
"",
getSequences(1, 32, 0),
},
},
},
nil,
[][]uint64{{16, 16}},
},
true,
},
// No host ellipses, just disks with two-digit zero-padded numerics.
{
"http://server1/data{01...32}",
endpointSet{
[]ellipses.ArgPattern{
[]ellipses.Pattern{
{
"http://server1/data",
"",
getSequences(1, 32, 2),
},
},
},
nil,
[][]uint64{{16, 16}},
},
true,
},
// More than 2 ellipses are supported as well.
{
"http://minio{2...3}/export/set{1...64}/test{1...2}",
endpointSet{
[]ellipses.ArgPattern{
[]ellipses.Pattern{
{
"",
"",
getSequences(1, 2, 0),
},
{
"",
"/test",
getSequences(1, 64, 0),
},
{
"http://minio",
"/export/set",
getSequences(2, 3, 0),
},
},
},
nil,
[][]uint64{{16, 16, 16, 16, 16, 16, 16, 16,
16, 16, 16, 16, 16, 16, 16, 16}},
},
true,
},
// More than 1 ellipses per argument for standalone setup.
{
"/export{1...10}/disk{1...10}",
endpointSet{
[]ellipses.ArgPattern{
[]ellipses.Pattern{
{
"",
"",
getSequences(1, 10, 0),
},
{
"/export",
"/disk",
getSequences(1, 10, 0),
},
},
},
nil,
[][]uint64{{10, 10, 10, 10, 10, 10, 10, 10, 10, 10}},
},
true,
},
}
for i, testCase := range testCases {
t.Run(fmt.Sprintf("Test%d", i+1), func(t *testing.T) {
gotEs, err := parseEndpointSet(testCase.arg)
if err != nil && testCase.success {
t.Errorf("Expected success but failed instead %s", err)
}
if err == nil && !testCase.success {
t.Errorf("Expected failure but passed instead")
}
if !reflect.DeepEqual(testCase.es, gotEs) {
t.Errorf("Expected %v, got %v", testCase.es, gotEs)
}
})
}
}

cmd/endpoint.go

@@ -21,11 +21,13 @@ import (
"net"
"net/url"
"path"
"sort"
"path/filepath"
"runtime"
"strconv"
"strings"
"github.com/minio/minio-go/pkg/set"
"github.com/minio/minio/pkg/mountinfo"
)
// EndpointType - enum for endpoint type.
@@ -42,7 +44,8 @@ const (
// Endpoint - any type of endpoint.
type Endpoint struct {
*url.URL
IsLocal bool
IsLocal bool
SetIndex int
}
func (endpoint Endpoint) String() string {
@@ -62,18 +65,9 @@ func (endpoint Endpoint) Type() EndpointType {
return URLEndpointType
}
// SetHTTPS - sets secure http for URLEndpointType.
func (endpoint Endpoint) SetHTTPS() {
if endpoint.Host != "" {
endpoint.Scheme = "https"
}
}
// SetHTTP - sets insecure http for URLEndpointType.
func (endpoint Endpoint) SetHTTP() {
if endpoint.Host != "" {
endpoint.Scheme = "http"
}
// IsHTTPS - returns true if secure for URLEndpointType.
func (endpoint Endpoint) IsHTTPS() bool {
return endpoint.Scheme == "https"
}
// NewEndpoint - returns new endpoint based on given arguments.
@@ -99,7 +93,8 @@ func NewEndpoint(arg string) (ep Endpoint, e error) {
return ep, fmt.Errorf("invalid URL endpoint format")
}
host, port, err := net.SplitHostPort(u.Host)
var host, port string
host, port, err = net.SplitHostPort(u.Host)
if err != nil {
if !strings.Contains(err.Error(), "missing port in address") {
return ep, fmt.Errorf("invalid URL endpoint format: %s", err)
@@ -127,11 +122,37 @@ func NewEndpoint(arg string) (ep Endpoint, e error) {
return ep, fmt.Errorf("empty or root path is not supported in URL endpoint")
}
// On windows having a preceding "/" will cause problems, if the
// command line already has C:/<export-folder>/ in it. The final resulting
// path on windows might become C:/C:/, which will prevent the minio
// server from starting properly in distributed mode on windows.
// As a special case make sure to trim the separator.
// NOTE: It is also perfectly fine for windows users to have a path
// without C:/ since at that point we treat it as relative path
// and obtain the full filesystem path as well. Providing C:/
// style is necessary to provide paths other than C:/,
// such as F:/, D:/ etc.
//
// An additional benefit is that this style also supports
// \\host\share paths.
if runtime.GOOS == globalWindowsOSName {
if filepath.VolumeName(u.Path[1:]) != "" {
u.Path = u.Path[1:]
}
}
isLocal, err = isLocalHost(host)
if err != nil {
return ep, err
}
} else {
// Only check if the arg is an IP address and ask for a scheme since it is absent.
// localhost, example.com, or any FQDN cannot be disambiguated from a regular file path such as
// /mnt/export1. So we go ahead and start the minio server in FS mode in these cases.
if isHostIPv4(arg) {
return ep, fmt.Errorf("invalid URL endpoint format: missing scheme http or https")
}
u = &url.URL{Path: path.Clean(arg)}
isLocal = true
}
@@ -145,47 +166,22 @@ func NewEndpoint(arg string) (ep Endpoint, e error) {
// EndpointList - list of same type of endpoint.
type EndpointList []Endpoint
// Swap - helper method for sorting.
func (endpoints EndpointList) Swap(i, j int) {
endpoints[i], endpoints[j] = endpoints[j], endpoints[i]
// IsHTTPS - returns true if secure for URLEndpointType.
func (endpoints EndpointList) IsHTTPS() bool {
return endpoints[0].IsHTTPS()
}
// Len - helper method for sorting.
func (endpoints EndpointList) Len() int {
return len(endpoints)
}
// Less - helper method for sorting.
func (endpoints EndpointList) Less(i, j int) bool {
return endpoints[i].String() < endpoints[j].String()
}
// SetHTTPS - sets secure http for URLEndpointType.
func (endpoints EndpointList) SetHTTPS() {
for i := range endpoints {
endpoints[i].SetHTTPS()
}
}
// SetHTTP - sets insecure http for URLEndpointType.
func (endpoints EndpointList) SetHTTP() {
for i := range endpoints {
endpoints[i].SetHTTP()
// GetString - returns endpoint string of i-th endpoint (0-based),
// and empty string for invalid indexes.
func (endpoints EndpointList) GetString(i int) string {
if i < 0 || i >= len(endpoints) {
return ""
}
return endpoints[i].String()
}
// NewEndpointList - returns new endpoint list based on input args.
func NewEndpointList(args ...string) (endpoints EndpointList, err error) {
// isValidDistribution - checks whether given count is a valid distribution for erasure coding.
isValidDistribution := func(count int) bool {
return (count >= minErasureBlocks && count <= maxErasureBlocks && count%2 == 0)
}
// Check whether no. of args are valid for XL distribution.
if !isValidDistribution(len(args)) {
return nil, fmt.Errorf("A total of %d endpoints were found. For erasure mode it should be an even number between %d and %d", len(args), minErasureBlocks, maxErasureBlocks)
}
var endpointType EndpointType
var scheme string
@@ -212,17 +208,30 @@ func NewEndpointList(args ...string) (endpoints EndpointList, err error) {
return nil, fmt.Errorf("duplicate endpoints found")
}
uniqueArgs.Add(arg)
endpoints = append(endpoints, endpoint)
}
sort.Sort(endpoints)
return endpoints, nil
}
// Checks if there are any cross device mounts.
func checkCrossDeviceMounts(endpoints EndpointList) (err error) {
var absPaths []string
for _, endpoint := range endpoints {
if endpoint.IsLocal {
var absPath string
absPath, err = filepath.Abs(endpoint.Path)
if err != nil {
return err
}
absPaths = append(absPaths, absPath)
}
}
return mountinfo.CheckCrossDevice(absPaths)
}
// CreateEndpoints - validates and creates new endpoints for given args.
func CreateEndpoints(serverAddr string, args ...string) (string, EndpointList, SetupType, error) {
func CreateEndpoints(serverAddr string, args ...[]string) (string, EndpointList, SetupType, error) {
var endpoints EndpointList
var setupType SetupType
var err error
@@ -235,25 +244,44 @@ func CreateEndpoints(serverAddr string, args ...string) (string, EndpointList, S
_, serverAddrPort := mustSplitHostPort(serverAddr)
// For single arg, return FS setup.
if len(args) == 1 {
if len(args) == 1 && len(args[0]) == 1 {
var endpoint Endpoint
endpoint, err = NewEndpoint(args[0])
endpoint, err = NewEndpoint(args[0][0])
if err != nil {
return serverAddr, endpoints, setupType, err
}
if endpoint.Type() != PathEndpointType {
return serverAddr, endpoints, setupType, fmt.Errorf("use path style endpoint for FS setup")
}
endpoints = append(endpoints, endpoint)
setupType = FSSetupType
// Check for cross device mounts if any.
if err = checkCrossDeviceMounts(endpoints); err != nil {
return serverAddr, endpoints, setupType, err
}
return serverAddr, endpoints, setupType, nil
}
for i, iargs := range args {
var newEndpoints EndpointList
// Convert args to endpoints
var eps EndpointList
eps, err = NewEndpointList(iargs...)
if err != nil {
return serverAddr, endpoints, setupType, err
}
if endpoint.Type() != PathEndpointType {
return serverAddr, endpoints, setupType, fmt.Errorf("use path style endpoint for FS setup")
// Check for cross device mounts if any.
if err = checkCrossDeviceMounts(eps); err != nil {
return serverAddr, endpoints, setupType, err
}
endpoints = append(endpoints, endpoint)
setupType = FSSetupType
return serverAddr, endpoints, setupType, nil
}
// Convert args to endpoints
if endpoints, err = NewEndpointList(args...); err != nil {
return serverAddr, endpoints, setupType, err
for _, ep := range eps {
ep.SetIndex = i
newEndpoints = append(newEndpoints, ep)
}
endpoints = append(endpoints, newEndpoints...)
}
// Return XL setup when all endpoints are path style.
@@ -267,6 +295,7 @@ func CreateEndpoints(serverAddr string, args ...string) (string, EndpointList, S
localEndpointCount := 0
localServerAddrSet := set.NewStringSet()
localPortSet := set.NewStringSet()
for _, endpoint := range endpoints {
endpointPathSet.Add(endpoint.Path)
if endpoint.IsLocal {
@@ -289,7 +318,7 @@ func CreateEndpoints(serverAddr string, args ...string) (string, EndpointList, S
return serverAddr, endpoints, setupType, fmt.Errorf("no endpoint found for this host")
}
// Check whether same path is not used in endpoints of a host.
// Check whether same path is not used in endpoints of a host on different port.
{
pathIPMap := make(map[string]set.StringSet)
for _, endpoint := range endpoints {
@@ -304,7 +333,6 @@ func CreateEndpoints(serverAddr string, args ...string) (string, EndpointList, S
err = fmt.Errorf("path '%s' can not be served by different port on same address", endpoint.Path)
return serverAddr, endpoints, setupType, err
}
pathIPMap[endpoint.Path] = IPSet.Union(hostIPSet)
} else {
pathIPMap[endpoint.Path] = hostIPSet
@@ -312,6 +340,21 @@ func CreateEndpoints(serverAddr string, args ...string) (string, EndpointList, S
}
}
// Check whether same path is used for more than 1 local endpoints.
{
localPathSet := set.CreateStringSet()
for _, endpoint := range endpoints {
if !endpoint.IsLocal {
continue
}
if localPathSet.Contains(endpoint.Path) {
err = fmt.Errorf("path '%s' cannot be served by different address on same server", endpoint.Path)
return serverAddr, endpoints, setupType, err
}
localPathSet.Add(endpoint.Path)
}
}
// Check whether serverAddrPort matches at least in one of port used in local endpoints.
{
if !localPortSet.Contains(serverAddrPort) {
@@ -382,10 +425,50 @@ func CreateEndpoints(serverAddr string, args ...string) (string, EndpointList, S
}
}
uniqueArgs := set.NewStringSet()
for _, endpoint := range endpoints {
uniqueArgs.Add(endpoint.Host)
}
// Error out if we have more than serverCommandLineArgsMax unique servers.
if len(uniqueArgs.ToSlice()) > serverCommandLineArgsMax {
err := fmt.Errorf("Unsupported number of endpoints (%s), total number of servers cannot be more than %d", endpoints, serverCommandLineArgsMax)
return serverAddr, endpoints, setupType, err
}
// Error out if we have less than 2 unique servers.
if len(uniqueArgs.ToSlice()) < 2 && setupType == DistXLSetupType {
err := fmt.Errorf("Unsupported number of endpoints (%s), minimum number of servers cannot be less than 2 in distributed setup", endpoints)
return serverAddr, endpoints, setupType, err
}
setupType = DistXLSetupType
return serverAddr, endpoints, setupType, nil
}
// GetLocalPeer - returns the local peer value; returns globalMinioAddr
// for FS and Erasure mode. In case of a distributed server it returns
// the first element from the set of peers which indicate that
// they are local. There is always one entry that is local
// even with repeated server endpoints.
func GetLocalPeer(endpoints EndpointList) (localPeer string) {
peerSet := set.NewStringSet()
for _, endpoint := range endpoints {
if endpoint.Type() != URLEndpointType {
continue
}
if endpoint.IsLocal && endpoint.Host != "" {
peerSet.Add(endpoint.Host)
}
}
if peerSet.IsEmpty() {
// An empty local peer can happen in FS or Erasure coded mode;
// set the value to globalMinioAddr instead.
return globalMinioAddr
}
return peerSet.ToSlice()[0]
}
// GetRemotePeers - get hosts information other than this minio service.
func GetRemotePeers(endpoints EndpointList) []string {
peerSet := set.NewStringSet()

cmd/endpoint_test.go

@@ -20,8 +20,6 @@ import (
"fmt"
"net/url"
"reflect"
"runtime"
"sort"
"strings"
"testing"
)
@@ -32,11 +30,6 @@ func TestNewEndpoint(t *testing.T) {
u3, _ := url.Parse("http://127.0.0.1:8080/path")
u4, _ := url.Parse("http://192.168.253.200/path")
errMsg := ": no such host"
if runtime.GOOS == "windows" {
errMsg = ": No such host is known."
}
testCases := []struct {
arg string
expectedEndpoint Endpoint
@@ -73,7 +66,7 @@ func TestNewEndpoint(t *testing.T) {
{"https://93.184.216.34:808080/path", Endpoint{}, -1, fmt.Errorf("invalid URL endpoint format: port number must be between 1 to 65535")},
{"http://server:8080//", Endpoint{}, -1, fmt.Errorf("empty or root path is not supported in URL endpoint")},
{"http://server:8080/", Endpoint{}, -1, fmt.Errorf("empty or root path is not supported in URL endpoint")},
{"http://server/path", Endpoint{}, -1, fmt.Errorf("lookup server" + errMsg)},
{"192.168.1.210:9000", Endpoint{}, -1, fmt.Errorf("invalid URL endpoint format: missing scheme http or https")},
}
for _, testCase := range testCases {
@@ -84,16 +77,8 @@ func TestNewEndpoint(t *testing.T) {
}
} else if err == nil {
t.Fatalf("error: expected = %v, got = <nil>", testCase.expectedErr)
} else {
var match bool
if strings.HasSuffix(testCase.expectedErr.Error(), errMsg) {
match = strings.HasSuffix(err.Error(), errMsg)
} else {
match = (testCase.expectedErr.Error() == err.Error())
}
if !match {
t.Fatalf("error: expected = %v, got = %v", testCase.expectedErr, err)
}
} else if testCase.expectedErr.Error() != err.Error() {
t.Fatalf("error: expected = %v, got = %v", testCase.expectedErr, err)
}
if err == nil && !reflect.DeepEqual(testCase.expectedEndpoint, endpoint) {
@@ -122,10 +107,10 @@ func TestNewEndpointList(t *testing.T) {
{[]string{"d1", "d2", "d3", "d1"}, fmt.Errorf("duplicate endpoints found")},
{[]string{"d1", "d2", "d3", "./d1"}, fmt.Errorf("duplicate endpoints found")},
{[]string{"http://localhost/d1", "http://localhost/d2", "http://localhost/d1", "http://localhost/d4"}, fmt.Errorf("duplicate endpoints found")},
{[]string{"d1", "d2", "d3", "d4", "d5"}, fmt.Errorf("A total of 5 endpoints were found. For erasure mode it should be an even number between 4 and 16")},
{[]string{"ftp://server/d1", "http://server/d2", "http://server/d3", "http://server/d4"}, fmt.Errorf("'ftp://server/d1': invalid URL endpoint format")},
{[]string{"d1", "http://localhost/d2", "d3", "d4"}, fmt.Errorf("mixed style endpoints are not supported")},
{[]string{"http://example.org/d1", "https://example.com/d1", "http://example.net/d1", "https://example.edut/d1"}, fmt.Errorf("mixed scheme is not supported")},
{[]string{"192.168.1.210:9000/tmp/dir0", "192.168.1.210:9000/tmp/dir1", "192.168.1.210:9000/tmp/dir2", "192.168.110:9000/tmp/dir3"}, fmt.Errorf("'192.168.1.210:9000/tmp/dir0': invalid URL endpoint format: missing scheme http or https")},
}
for _, testCase := range testCases {
@@ -155,7 +140,6 @@ func TestCreateEndpoints(t *testing.T) {
getExpectedEndpoints := func(args []string, prefix string) ([]*url.URL, []bool) {
var URLs []*url.URL
var localFlags []bool
sort.Strings(args)
for _, arg := range args {
u, _ := url.Parse(arg)
URLs = append(URLs, u)
@@ -170,8 +154,8 @@ func TestCreateEndpoints(t *testing.T) {
args := []string{
"http://" + nonLoopBackIP + ":10000/d1",
"http://" + nonLoopBackIP + ":10000/d2",
"http://example.com:10000/d4",
"http://example.org:10000/d3",
"http://example.com:10000/d4",
}
case1URLs, case1LocalFlags := getExpectedEndpoints(args, "http://"+nonLoopBackIP+":10000/")
@@ -180,26 +164,26 @@ func TestCreateEndpoints(t *testing.T) {
args = []string{
"http://" + nonLoopBackIP + ":10000/d1",
"http://" + nonLoopBackIP + ":9000/d2",
"http://example.com:10000/d4",
"http://example.org:10000/d3",
"http://example.com:10000/d4",
}
case2URLs, case2LocalFlags := getExpectedEndpoints(args, "http://"+nonLoopBackIP+":10000/")
case3Endpoint1 := "http://" + nonLoopBackIP + "/d1"
args = []string{
"http://" + nonLoopBackIP + ":80/d1",
"http://example.org:9000/d2",
"http://example.com:80/d3",
"http://example.net:80/d4",
"http://example.org:9000/d2",
}
case3URLs, case3LocalFlags := getExpectedEndpoints(args, "http://"+nonLoopBackIP+":80/")
case4Endpoint1 := "http://" + nonLoopBackIP + "/d1"
args = []string{
"http://" + nonLoopBackIP + ":9000/d1",
"http://example.org:9000/d2",
"http://example.com:9000/d3",
"http://example.net:9000/d4",
"http://example.org:9000/d2",
}
case4URLs, case4LocalFlags := getExpectedEndpoints(args, "http://"+nonLoopBackIP+":9000/")
@@ -226,29 +210,29 @@ func TestCreateEndpoints(t *testing.T) {
testCases := []struct {
serverAddr string
args []string
args [][]string
expectedServerAddr string
expectedEndpoints EndpointList
expectedSetupType SetupType
expectedErr error
}{
{"localhost", []string{}, "", EndpointList{}, -1, fmt.Errorf("missing port in address localhost")},
{"localhost", [][]string{}, "", EndpointList{}, -1, fmt.Errorf("address localhost: missing port in address")},
// FS Setup
{"localhost:9000", []string{"http://localhost/d1"}, "", EndpointList{}, -1, fmt.Errorf("use path style endpoint for FS setup")},
{":443", []string{"d1"}, ":443", EndpointList{Endpoint{URL: &url.URL{Path: "d1"}, IsLocal: true}}, FSSetupType, nil},
{"localhost:10000", []string{"/d1"}, "localhost:10000", EndpointList{Endpoint{URL: &url.URL{Path: "/d1"}, IsLocal: true}}, FSSetupType, nil},
{"localhost:10000", []string{"./d1"}, "localhost:10000", EndpointList{Endpoint{URL: &url.URL{Path: "d1"}, IsLocal: true}}, FSSetupType, nil},
{"localhost:10000", []string{`\d1`}, "localhost:10000", EndpointList{Endpoint{URL: &url.URL{Path: `\d1`}, IsLocal: true}}, FSSetupType, nil},
{"localhost:10000", []string{`.\d1`}, "localhost:10000", EndpointList{Endpoint{URL: &url.URL{Path: `.\d1`}, IsLocal: true}}, FSSetupType, nil},
{":8080", []string{"https://example.org/d1", "https://example.org/d2", "https://example.org/d3", "https://example.org/d4"}, "", EndpointList{}, -1, fmt.Errorf("no endpoint found for this host")},
{":8080", []string{"https://example.org/d1", "https://example.com/d2", "https://example.net:8000/d3", "https://example.edu/d1"}, "", EndpointList{}, -1, fmt.Errorf("no endpoint found for this host")},
{"localhost:9000", []string{"https://127.0.0.1:9000/d1", "https://localhost:9001/d1", "https://example.com/d1", "https://example.com/d2"}, "", EndpointList{}, -1, fmt.Errorf("path '/d1' can not be served by different port on same address")},
{"localhost:9000", []string{"https://127.0.0.1:8000/d1", "https://localhost:9001/d2", "https://example.com/d1", "https://example.com/d2"}, "", EndpointList{}, -1, fmt.Errorf("port number in server address must match with one of the port in local endpoints")},
{"localhost:10000", []string{"https://127.0.0.1:8000/d1", "https://localhost:8000/d2", "https://example.com/d1", "https://example.com/d2"}, "", EndpointList{}, -1, fmt.Errorf("server address and local endpoint have different ports")},
{"localhost:9000", [][]string{{"http://localhost/d1"}}, "", EndpointList{}, -1, fmt.Errorf("use path style endpoint for FS setup")},
{":443", [][]string{{"d1"}}, ":443", EndpointList{Endpoint{URL: &url.URL{Path: "d1"}, IsLocal: true}}, FSSetupType, nil},
{"localhost:10000", [][]string{{"/d1"}}, "localhost:10000", EndpointList{Endpoint{URL: &url.URL{Path: "/d1"}, IsLocal: true}}, FSSetupType, nil},
{"localhost:10000", [][]string{{"./d1"}}, "localhost:10000", EndpointList{Endpoint{URL: &url.URL{Path: "d1"}, IsLocal: true}}, FSSetupType, nil},
{"localhost:10000", [][]string{{`\d1`}}, "localhost:10000", EndpointList{Endpoint{URL: &url.URL{Path: `\d1`}, IsLocal: true}}, FSSetupType, nil},
{"localhost:10000", [][]string{{`.\d1`}}, "localhost:10000", EndpointList{Endpoint{URL: &url.URL{Path: `.\d1`}, IsLocal: true}}, FSSetupType, nil},
{":8080", [][]string{{"https://example.org/d1", "https://example.org/d2", "https://example.org/d3", "https://example.org/d4"}}, "", EndpointList{}, -1, fmt.Errorf("no endpoint found for this host")},
{":8080", [][]string{{"https://example.org/d1", "https://example.com/d2", "https://example.net:8000/d3", "https://example.edu/d1"}}, "", EndpointList{}, -1, fmt.Errorf("no endpoint found for this host")},
{"localhost:9000", [][]string{{"https://127.0.0.1:9000/d1", "https://localhost:9001/d1", "https://example.com/d1", "https://example.com/d2"}}, "", EndpointList{}, -1, fmt.Errorf("path '/d1' can not be served by different port on same address")},
{"localhost:9000", [][]string{{"https://127.0.0.1:8000/d1", "https://localhost:9001/d2", "https://example.com/d1", "https://example.com/d2"}}, "", EndpointList{}, -1, fmt.Errorf("port number in server address must match with one of the port in local endpoints")},
{"localhost:10000", [][]string{{"https://127.0.0.1:8000/d1", "https://localhost:8000/d2", "https://example.com/d1", "https://example.com/d2"}}, "", EndpointList{}, -1, fmt.Errorf("server address and local endpoint have different ports")},
// XL Setup with PathEndpointType
{":1234", []string{"/d1", "/d2", "d3", "d4"}, ":1234",
{":1234", [][]string{{"/d1", "/d2", "d3", "d4"}}, ":1234",
EndpointList{
Endpoint{URL: &url.URL{Path: "/d1"}, IsLocal: true},
Endpoint{URL: &url.URL{Path: "/d2"}, IsLocal: true},
@@ -256,53 +240,55 @@ func TestCreateEndpoints(t *testing.T) {
Endpoint{URL: &url.URL{Path: "d4"}, IsLocal: true},
}, XLSetupType, nil},
// XL Setup with URLEndpointType
{":9000", []string{"http://localhost/d1", "http://localhost/d2", "http://localhost/d3", "http://localhost/d4"}, ":9000", EndpointList{
{":9000", [][]string{{"http://localhost/d1", "http://localhost/d2", "http://localhost/d3", "http://localhost/d4"}}, ":9000", EndpointList{
Endpoint{URL: &url.URL{Path: "/d1"}, IsLocal: true},
Endpoint{URL: &url.URL{Path: "/d2"}, IsLocal: true},
Endpoint{URL: &url.URL{Path: "/d3"}, IsLocal: true},
Endpoint{URL: &url.URL{Path: "/d4"}, IsLocal: true},
}, XLSetupType, nil},
// XL Setup with URLEndpointType having mixed naming to local host.
{"127.0.0.1:10000", []string{"http://localhost/d1", "http://localhost/d2", "http://127.0.0.1/d3", "http://127.0.0.1/d4"}, ":10000", EndpointList{
{"127.0.0.1:10000", [][]string{{"http://localhost/d1", "http://localhost/d2", "http://127.0.0.1/d3", "http://127.0.0.1/d4"}}, ":10000", EndpointList{
Endpoint{URL: &url.URL{Path: "/d1"}, IsLocal: true},
Endpoint{URL: &url.URL{Path: "/d2"}, IsLocal: true},
Endpoint{URL: &url.URL{Path: "/d3"}, IsLocal: true},
Endpoint{URL: &url.URL{Path: "/d4"}, IsLocal: true},
}, XLSetupType, nil},
{":9001", []string{"http://10.0.0.1:9000/export", "http://10.0.0.2:9000/export", "http://" + nonLoopBackIP + ":9001/export", "http://10.0.0.2:9001/export"}, "", EndpointList{}, -1, fmt.Errorf("path '/export' can not be served by different port on same address")},
{":9001", [][]string{{"http://10.0.0.1:9000/export", "http://10.0.0.2:9000/export", "http://" + nonLoopBackIP + ":9001/export", "http://10.0.0.2:9001/export"}}, "", EndpointList{}, -1, fmt.Errorf("path '/export' can not be served by different port on same address")},
{":9000", []string{"http://localhost/d1", "http://localhost/d2", "http://example.org/d3", "http://example.com/d4"}, "", EndpointList{}, -1, fmt.Errorf("'localhost' resolves to loopback address is not allowed for distributed XL")},
{":9000", [][]string{{"http://127.0.0.1:9000/export", "http://" + nonLoopBackIP + ":9000/export", "http://10.0.0.1:9000/export", "http://10.0.0.2:9000/export"}}, "", EndpointList{}, -1, fmt.Errorf("path '/export' cannot be served by different address on same server")},
{":9000", [][]string{{"http://localhost/d1", "http://localhost/d2", "http://example.org/d3", "http://example.com/d4"}}, "", EndpointList{}, -1, fmt.Errorf("'localhost' resolves to loopback address is not allowed for distributed XL")},
// DistXL type
{"127.0.0.1:10000", []string{case1Endpoint1, case1Endpoint2, "http://example.org/d3", "http://example.com/d4"}, "127.0.0.1:10000", EndpointList{
{"127.0.0.1:10000", [][]string{{case1Endpoint1, case1Endpoint2, "http://example.org/d3", "http://example.com/d4"}}, "127.0.0.1:10000", EndpointList{
Endpoint{URL: case1URLs[0], IsLocal: case1LocalFlags[0]},
Endpoint{URL: case1URLs[1], IsLocal: case1LocalFlags[1]},
Endpoint{URL: case1URLs[2], IsLocal: case1LocalFlags[2]},
Endpoint{URL: case1URLs[3], IsLocal: case1LocalFlags[3]},
}, DistXLSetupType, nil},
{"127.0.0.1:10000", []string{case2Endpoint1, case2Endpoint2, "http://example.org/d3", "http://example.com/d4"}, "127.0.0.1:10000", EndpointList{
{"127.0.0.1:10000", [][]string{{case2Endpoint1, case2Endpoint2, "http://example.org/d3", "http://example.com/d4"}}, "127.0.0.1:10000", EndpointList{
Endpoint{URL: case2URLs[0], IsLocal: case2LocalFlags[0]},
Endpoint{URL: case2URLs[1], IsLocal: case2LocalFlags[1]},
Endpoint{URL: case2URLs[2], IsLocal: case2LocalFlags[2]},
Endpoint{URL: case2URLs[3], IsLocal: case2LocalFlags[3]},
}, DistXLSetupType, nil},
{":80", []string{case3Endpoint1, "http://example.org:9000/d2", "http://example.com/d3", "http://example.net/d4"}, ":80", EndpointList{
{":80", [][]string{{case3Endpoint1, "http://example.org:9000/d2", "http://example.com/d3", "http://example.net/d4"}}, ":80", EndpointList{
Endpoint{URL: case3URLs[0], IsLocal: case3LocalFlags[0]},
Endpoint{URL: case3URLs[1], IsLocal: case3LocalFlags[1]},
Endpoint{URL: case3URLs[2], IsLocal: case3LocalFlags[2]},
Endpoint{URL: case3URLs[3], IsLocal: case3LocalFlags[3]},
}, DistXLSetupType, nil},
{":9000", []string{case4Endpoint1, "http://example.org/d2", "http://example.com/d3", "http://example.net/d4"}, ":9000", EndpointList{
{":9000", [][]string{{case4Endpoint1, "http://example.org/d2", "http://example.com/d3", "http://example.net/d4"}}, ":9000", EndpointList{
Endpoint{URL: case4URLs[0], IsLocal: case4LocalFlags[0]},
Endpoint{URL: case4URLs[1], IsLocal: case4LocalFlags[1]},
Endpoint{URL: case4URLs[2], IsLocal: case4LocalFlags[2]},
Endpoint{URL: case4URLs[3], IsLocal: case4LocalFlags[3]},
}, DistXLSetupType, nil},
{":9000", []string{case5Endpoint1, case5Endpoint2, case5Endpoint3, case5Endpoint4}, ":9000", EndpointList{
{":9000", [][]string{{case5Endpoint1, case5Endpoint2, case5Endpoint3, case5Endpoint4}}, ":9000", EndpointList{
Endpoint{URL: case5URLs[0], IsLocal: case5LocalFlags[0]},
Endpoint{URL: case5URLs[1], IsLocal: case5LocalFlags[1]},
Endpoint{URL: case5URLs[2], IsLocal: case5LocalFlags[2]},
@@ -310,7 +296,7 @@ func TestCreateEndpoints(t *testing.T) {
}, DistXLSetupType, nil},
// DistXL Setup using only local host.
{":9003", []string{"http://localhost:9000/d1", "http://localhost:9001/d2", "http://127.0.0.1:9002/d3", case6Endpoint}, ":9003", EndpointList{
{":9003", [][]string{{"http://localhost:9000/d1", "http://localhost:9001/d2", "http://127.0.0.1:9002/d3", case6Endpoint}}, ":9003", EndpointList{
Endpoint{URL: case6URLs[0], IsLocal: case6LocalFlags[0]},
Endpoint{URL: case6URLs[1], IsLocal: case6LocalFlags[1]},
Endpoint{URL: case6URLs[2], IsLocal: case6LocalFlags[2]},
@@ -343,6 +329,38 @@ func TestCreateEndpoints(t *testing.T) {
}
}
// Tests the get-local-peer functionality. GetLocalPeer is supposed to return only one entry per minio service:
// if you have, say, localhost:9000 and localhost:9001 as endpointArgs, then localhost:9001
// is considered a remote service from localhost:9000's perspective.
func TestGetLocalPeer(t *testing.T) {
tempGlobalMinioAddr := globalMinioAddr
defer func() {
globalMinioAddr = tempGlobalMinioAddr
}()
globalMinioAddr = ":9000"
testCases := []struct {
endpointArgs []string
expectedResult string
}{
{[]string{"/d1", "/d2", "d3", "d4"}, ":9000"},
{[]string{"http://localhost:9000/d1", "http://localhost:9000/d2", "http://example.org:9000/d3", "http://example.com:9000/d4"},
"localhost:9000"},
{[]string{"http://localhost:9000/d1", "http://example.org:9000/d2", "http://example.com:9000/d3", "http://example.net:9000/d4"},
"localhost:9000"},
{[]string{"http://localhost:9000/d1", "http://localhost:9001/d2", "http://localhost:9002/d3", "http://localhost:9003/d4"},
"localhost:9000"},
}
for i, testCase := range testCases {
endpoints, _ := NewEndpointList(testCase.endpointArgs...)
remotePeer := GetLocalPeer(endpoints)
if remotePeer != testCase.expectedResult {
t.Fatalf("Test %d: expected: %v, got: %v", i+1, testCase.expectedResult, remotePeer)
}
}
}
func TestGetRemotePeers(t *testing.T) {
tempGlobalMinioPort := globalMinioPort
defer func() {


@@ -17,128 +17,82 @@
package cmd
import (
"encoding/hex"
"hash"
"io"
"sync"
"github.com/klauspost/reedsolomon"
"github.com/minio/minio/pkg/errors"
)
// erasureCreateFile - writes an entire stream by erasure coding to
// all the disks; the writes also calculate each block's checksum
// for future bit-rot protection.
func erasureCreateFile(disks []StorageAPI, volume, path string, reader io.Reader, allowEmpty bool, blockSize int64,
dataBlocks, parityBlocks int, algo HashAlgo, writeQuorum int) (newDisks []StorageAPI, bytesWritten int64, checkSums []string, err error) {
// Allocated blockSized buffer for reading from incoming stream.
buf := make([]byte, blockSize)
hashWriters := newHashWriters(len(disks), algo)
// Read until io.EOF, erasure codes data and writes to all disks.
for {
var blocks [][]byte
n, rErr := io.ReadFull(reader, buf)
// FIXME: this is a bug in Golang, n == 0 and err ==
// io.ErrUnexpectedEOF for io.ReadFull function.
if n == 0 && rErr == io.ErrUnexpectedEOF {
return nil, 0, nil, traceError(rErr)
}
if rErr == io.EOF {
// We have reached EOF on the first byte read, so the io.Reader
// must be 0 bytes; we don't need to erasure code
// data and will create a 0-byte file instead.
if bytesWritten == 0 && allowEmpty {
blocks = make([][]byte, len(disks))
newDisks, rErr = appendFile(disks, volume, path, blocks, hashWriters, writeQuorum)
if rErr != nil {
return nil, 0, nil, rErr
}
} // else we have reached EOF after a few reads, no need to
// add an additional 0 bytes at the end.
break
}
if rErr != nil && rErr != io.ErrUnexpectedEOF {
return nil, 0, nil, traceError(rErr)
}
if n > 0 {
// Returns encoded blocks.
var enErr error
blocks, enErr = encodeData(buf[0:n], dataBlocks, parityBlocks)
if enErr != nil {
return nil, 0, nil, enErr
}
// Write to all disks.
if newDisks, err = appendFile(disks, volume, path, blocks, hashWriters, writeQuorum); err != nil {
return nil, 0, nil, err
}
bytesWritten += int64(n)
}
// CreateFile creates a new bitrot encoded file spread over all available disks. CreateFile will create
// the file at the given volume and path. It will read from src until an io.EOF occurs. The given algorithm will
// be used to protect the erasure encoded file.
func (s *ErasureStorage) CreateFile(src io.Reader, volume, path string, buffer []byte, algorithm BitrotAlgorithm, writeQuorum int) (f ErasureFileInfo, err error) {
if !algorithm.Available() {
return f, errors.Trace(errBitrotHashAlgoInvalid)
}
f.Checksums = make([][]byte, len(s.disks))
hashers := make([]hash.Hash, len(s.disks))
for i := range hashers {
hashers[i] = algorithm.New()
}
errChans, errs := make([]chan error, len(s.disks)), make([]error, len(s.disks))
for i := range errChans {
errChans[i] = make(chan error, 1) // create buffered channel to let finished go-routines die early
}
checkSums = make([]string, len(disks))
for i := range checkSums {
checkSums[i] = hex.EncodeToString(hashWriters[i].Sum(nil))
}
return newDisks, bytesWritten, checkSums, nil
}
// encodeData - encodes incoming data buffer into
// dataBlocks+parityBlocks returns a 2 dimensional byte array.
func encodeData(dataBuffer []byte, dataBlocks, parityBlocks int) ([][]byte, error) {
rs, err := reedsolomon.New(dataBlocks, parityBlocks)
if err != nil {
return nil, traceError(err)
}
// Split the input buffer into data and parity blocks.
var blocks [][]byte
blocks, err = rs.Split(dataBuffer)
if err != nil {
return nil, traceError(err)
var n = len(buffer)
for n == len(buffer) {
n, err = io.ReadFull(src, buffer)
if n == 0 && err == io.EOF {
if f.Size != 0 { // don't write empty block if we have written to the disks
break
}
blocks = make([][]byte, len(s.disks)) // write empty block
} else if err == nil || (n > 0 && err == io.ErrUnexpectedEOF) {
blocks, err = s.ErasureEncode(buffer[:n])
if err != nil {
return f, err
}
} else {
return f, errors.Trace(err)
}
for i := range errChans { // spawn workers
go erasureAppendFile(s.disks[i], volume, path, hashers[i], blocks[i], errChans[i])
}
for i := range errChans { // wait until all workers are finished
errs[i] = <-errChans[i]
}
if err = reduceWriteQuorumErrs(errs, objectOpIgnoredErrs, writeQuorum); err != nil {
return f, err
}
s.disks = evalDisks(s.disks, errs)
f.Size += int64(n)
}
// Encode parity blocks using data blocks.
err = rs.Encode(blocks)
if err != nil {
return nil, traceError(err)
}
// Return encoded blocks.
return blocks, nil
}
// appendFile - append data buffer at path.
func appendFile(disks []StorageAPI, volume, path string, enBlocks [][]byte, hashWriters []hash.Hash, writeQuorum int) ([]StorageAPI, error) {
var wg = &sync.WaitGroup{}
var wErrs = make([]error, len(disks))
// Write encoded data to quorum disks in parallel.
for index, disk := range disks {
if disk == nil {
wErrs[index] = traceError(errDiskNotFound)
f.Algorithm = algorithm
for i, disk := range s.disks {
if disk == OfflineDisk {
continue
}
wg.Add(1)
// Write encoded data in routine.
go func(index int, disk StorageAPI) {
defer wg.Done()
wErr := disk.AppendFile(volume, path, enBlocks[index])
if wErr != nil {
wErrs[index] = traceError(wErr)
return
}
// Calculate hash for each blocks.
hashWriters[index].Write(enBlocks[index])
// Successfully wrote.
wErrs[index] = nil
}(index, disk)
f.Checksums[i] = hashers[i].Sum(nil)
}
// Wait for all the appends to finish.
wg.Wait()
return evalDisks(disks, wErrs), reduceWriteQuorumErrs(wErrs, objectOpIgnoredErrs, writeQuorum)
return f, nil
}
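
For orientation, here is a minimal usage sketch of the new CreateFile API, following the pattern the tests below use; the disks, reader, and bucket/object names are illustrative assumptions, not part of this change:

// Sketch only: `disks` and `reader` are assumed for illustration.
storage, err := NewErasureStorage(disks, dataBlocks, parityBlocks, blockSizeV1)
if err != nil {
	return err
}
// CreateFile needs a blocksize-capacity scratch buffer, as in the tests.
buffer := make([]byte, blockSizeV1, 2*blockSizeV1)
file, err := storage.CreateFile(reader, "mybucket", "myobject", buffer,
	DefaultBitrotAlgorithm, dataBlocks+1)
if err != nil {
	return err
}
// file.Size is the number of bytes written and file.Checksums holds the
// per-disk bitrot checksums used later for verification.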
// erasureAppendFile appends the content of buf to the file on the given disk and computes
// the hash of the written data. It sends the write error (or nil) over the error channel.
func erasureAppendFile(disk StorageAPI, volume, path string, hash hash.Hash, buf []byte, errChan chan<- error) {
if disk == OfflineDisk {
errChan <- errors.Trace(errDiskNotFound)
return
}
err := disk.AppendFile(volume, path, buf)
if err != nil {
errChan <- err
return
}
hash.Write(buf)
errChan <- err
}


@@ -19,186 +19,174 @@ package cmd
import (
"bytes"
"crypto/rand"
"io"
"testing"
humanize "github.com/dustin/go-humanize"
"github.com/klauspost/reedsolomon"
)
// Simulates a faulty disk for AppendFile()
type AppendDiskDown struct {
*posix
}
type badDisk struct{ StorageAPI }
func (a AppendDiskDown) AppendFile(volume string, path string, buf []byte) error {
func (a badDisk) AppendFile(volume string, path string, buf []byte) error {
return errFaultyDisk
}
// Test erasureCreateFile()
func TestErasureCreateFile(t *testing.T) {
// Initialize environment needed for the test.
dataBlocks := 7
parityBlocks := 7
blockSize := int64(blockSizeV1)
setup, err := newErasureTestSetup(dataBlocks, parityBlocks, blockSize)
if err != nil {
t.Error(err)
return
}
defer setup.Remove()
const oneMiByte = 1 * humanize.MiByte
disks := setup.disks
// Prepare a slice of 1MiB with random data.
data := make([]byte, 1*humanize.MiByte)
_, err = rand.Read(data)
if err != nil {
t.Fatal(err)
}
// Test when all disks are up.
_, size, _, err := erasureCreateFile(disks, "testbucket", "testobject1", bytes.NewReader(data), true, blockSize, dataBlocks, parityBlocks, bitRotAlgo, dataBlocks+1)
if err != nil {
t.Fatal(err)
}
if size != int64(len(data)) {
t.Errorf("erasureCreateFile returned %d, expected %d", size, len(data))
}
// 2 disks down.
disks[4] = AppendDiskDown{disks[4].(*posix)}
disks[5] = AppendDiskDown{disks[5].(*posix)}
// Test when two disks are down.
_, size, _, err = erasureCreateFile(disks, "testbucket", "testobject2", bytes.NewReader(data), true, blockSize, dataBlocks, parityBlocks, bitRotAlgo, dataBlocks+1)
if err != nil {
t.Fatal(err)
}
if size != int64(len(data)) {
t.Errorf("erasureCreateFile returned %d, expected %d", size, len(data))
}
// 4 more disks down. 6 disks down in total.
disks[6] = AppendDiskDown{disks[6].(*posix)}
disks[7] = AppendDiskDown{disks[7].(*posix)}
disks[8] = AppendDiskDown{disks[8].(*posix)}
disks[9] = AppendDiskDown{disks[9].(*posix)}
_, size, _, err = erasureCreateFile(disks, "testbucket", "testobject3", bytes.NewReader(data), true, blockSize, dataBlocks, parityBlocks, bitRotAlgo, dataBlocks+1)
if err != nil {
t.Fatal(err)
}
if size != int64(len(data)) {
t.Errorf("erasureCreateFile returned %d, expected %d", size, len(data))
}
// 1 more disk down. 7 disk down in total. Should return quorum error.
disks[10] = AppendDiskDown{disks[10].(*posix)}
_, _, _, err = erasureCreateFile(disks, "testbucket", "testobject4", bytes.NewReader(data), true, blockSize, dataBlocks, parityBlocks, bitRotAlgo, dataBlocks+1)
if errorCause(err) != errXLWriteQuorum {
t.Errorf("erasureCreateFile return value: expected errXLWriteQuorum, got %s", err)
}
var erasureCreateFileTests = []struct {
dataBlocks int
onDisks, offDisks int
blocksize, data int64
offset int
algorithm BitrotAlgorithm
shouldFail, shouldFailQuorum bool
}{
{dataBlocks: 2, onDisks: 4, offDisks: 0, blocksize: int64(blockSizeV1), data: oneMiByte, offset: 0, algorithm: BLAKE2b512, shouldFail: false, shouldFailQuorum: false}, // 0
{dataBlocks: 3, onDisks: 6, offDisks: 0, blocksize: int64(blockSizeV1), data: oneMiByte, offset: 1, algorithm: SHA256, shouldFail: false, shouldFailQuorum: false}, // 1
{dataBlocks: 4, onDisks: 8, offDisks: 2, blocksize: int64(blockSizeV1), data: oneMiByte, offset: 2, algorithm: DefaultBitrotAlgorithm, shouldFail: false, shouldFailQuorum: false}, // 2
{dataBlocks: 5, onDisks: 10, offDisks: 3, blocksize: int64(blockSizeV1), data: oneMiByte, offset: oneMiByte, algorithm: BLAKE2b512, shouldFail: false, shouldFailQuorum: false}, // 3
{dataBlocks: 6, onDisks: 12, offDisks: 4, blocksize: int64(blockSizeV1), data: oneMiByte, offset: oneMiByte, algorithm: BLAKE2b512, shouldFail: false, shouldFailQuorum: false}, // 4
{dataBlocks: 7, onDisks: 14, offDisks: 5, blocksize: int64(blockSizeV1), data: 0, offset: 0, shouldFail: false, algorithm: SHA256, shouldFailQuorum: false}, // 5
{dataBlocks: 8, onDisks: 16, offDisks: 7, blocksize: int64(blockSizeV1), data: 0, offset: 0, shouldFail: false, algorithm: DefaultBitrotAlgorithm, shouldFailQuorum: false}, // 6
{dataBlocks: 2, onDisks: 4, offDisks: 2, blocksize: int64(blockSizeV1), data: oneMiByte, offset: 0, algorithm: BLAKE2b512, shouldFail: false, shouldFailQuorum: true}, // 7
{dataBlocks: 4, onDisks: 8, offDisks: 4, blocksize: int64(blockSizeV1), data: oneMiByte, offset: 0, algorithm: SHA256, shouldFail: false, shouldFailQuorum: true}, // 8
{dataBlocks: 7, onDisks: 14, offDisks: 7, blocksize: int64(blockSizeV1), data: oneMiByte, offset: 0, algorithm: DefaultBitrotAlgorithm, shouldFail: false, shouldFailQuorum: true}, // 9
{dataBlocks: 8, onDisks: 16, offDisks: 8, blocksize: int64(blockSizeV1), data: oneMiByte, offset: 0, algorithm: DefaultBitrotAlgorithm, shouldFail: false, shouldFailQuorum: true}, // 10
{dataBlocks: 5, onDisks: 10, offDisks: 3, blocksize: int64(oneMiByte), data: oneMiByte, offset: 0, algorithm: DefaultBitrotAlgorithm, shouldFail: false, shouldFailQuorum: false}, // 11
{dataBlocks: 6, onDisks: 12, offDisks: 5, blocksize: int64(blockSizeV1), data: oneMiByte, offset: 102, algorithm: 0, shouldFail: true, shouldFailQuorum: false}, // 12
{dataBlocks: 3, onDisks: 6, offDisks: 1, blocksize: int64(blockSizeV1), data: oneMiByte, offset: oneMiByte / 2, algorithm: DefaultBitrotAlgorithm, shouldFail: false, shouldFailQuorum: false}, // 13
{dataBlocks: 2, onDisks: 4, offDisks: 0, blocksize: int64(oneMiByte / 2), data: oneMiByte, offset: oneMiByte/2 + 1, algorithm: DefaultBitrotAlgorithm, shouldFail: false, shouldFailQuorum: false}, // 14
{dataBlocks: 4, onDisks: 8, offDisks: 0, blocksize: int64(oneMiByte - 1), data: oneMiByte, offset: oneMiByte - 1, algorithm: BLAKE2b512, shouldFail: false, shouldFailQuorum: false}, // 15
{dataBlocks: 8, onDisks: 12, offDisks: 2, blocksize: int64(blockSizeV1), data: oneMiByte, offset: 2, algorithm: DefaultBitrotAlgorithm, shouldFail: false, shouldFailQuorum: false}, // 16
{dataBlocks: 8, onDisks: 10, offDisks: 1, blocksize: int64(blockSizeV1), data: oneMiByte, offset: 0, algorithm: DefaultBitrotAlgorithm, shouldFail: false, shouldFailQuorum: false}, // 17
{dataBlocks: 10, onDisks: 14, offDisks: 0, blocksize: int64(blockSizeV1), data: oneMiByte, offset: 17, algorithm: DefaultBitrotAlgorithm, shouldFail: false, shouldFailQuorum: false}, // 18
{dataBlocks: 2, onDisks: 6, offDisks: 2, blocksize: int64(oneMiByte), data: oneMiByte, offset: oneMiByte / 2, algorithm: DefaultBitrotAlgorithm, shouldFail: false, shouldFailQuorum: false}, // 19
{dataBlocks: 10, onDisks: 16, offDisks: 8, blocksize: int64(blockSizeV1), data: oneMiByte, offset: 0, algorithm: DefaultBitrotAlgorithm, shouldFail: false, shouldFailQuorum: true}, // 20
}
// TestErasureEncode checks encoding for different data sets.
func TestErasureEncode(t *testing.T) {
// Collection of cases for encode cases.
testEncodeCases := []struct {
inputData []byte
inputDataBlocks int
inputParityBlocks int
shouldPass bool
expectedErr error
}{
// TestCase - 1.
// Regular data encoded.
{
[]byte("Lorem Ipsum is simply dummy text of the printing and typesetting industry. Lorem Ipsum has been the industry's standard dummy text ever since the 1500s, when an unknown printer took a galley of type and scrambled it to make a type specimen book. It has survived not only five centuries, but also the leap into electronic typesetting, remaining essentially unchanged. It was popularised in the 1960s with the release of Letraset sheets containing Lorem Ipsum passages, and more recently with desktop publishing software like Aldus PageMaker including versions of Lorem Ipsum."),
8,
8,
true,
nil,
},
// TestCase - 2.
// Empty data errors out.
{
[]byte(""),
8,
8,
false,
reedsolomon.ErrShortData,
},
// TestCase - 3.
// Single byte encoded.
{
[]byte("1"),
4,
4,
true,
nil,
},
// TestCase - 4.
// test case with negative data block.
{
[]byte("1"),
-1,
8,
false,
reedsolomon.ErrInvShardNum,
},
// TestCase - 5.
// test case with negative parity block.
{
[]byte("1"),
8,
-1,
false,
reedsolomon.ErrInvShardNum,
},
// TestCase - 6.
// test case with zero data block.
{
[]byte("1"),
0,
8,
false,
reedsolomon.ErrInvShardNum,
},
// TestCase - 7.
// test case with zero parity block.
{
[]byte("1"),
8,
0,
false,
reedsolomon.ErrInvShardNum,
},
// TestCase - 8.
// test case with data + parity blocks > 255.
// expected to fail with Error Max Shard number.
{
[]byte("1"),
128,
128,
false,
reedsolomon.ErrMaxShardNum,
},
}
// Test encode cases.
for i, testCase := range testEncodeCases {
_, actualErr := encodeData(testCase.inputData, testCase.inputDataBlocks, testCase.inputParityBlocks)
if actualErr != nil && testCase.shouldPass {
t.Errorf("Test %d: Expected to pass but failed instead with \"%s\"", i+1, actualErr)
func TestErasureCreateFile(t *testing.T) {
for i, test := range erasureCreateFileTests {
setup, err := newErasureTestSetup(test.dataBlocks, test.onDisks-test.dataBlocks, test.blocksize)
if err != nil {
t.Fatalf("Test %d: failed to create test setup: %v", i, err)
}
if actualErr == nil && !testCase.shouldPass {
t.Errorf("Test %d: Expected to fail with error <Error> \"%v\", but instead passed", i+1, testCase.expectedErr)
storage, err := NewErasureStorage(setup.disks, test.dataBlocks, test.onDisks-test.dataBlocks, test.blocksize)
if err != nil {
setup.Remove()
t.Fatalf("Test %d: failed to create ErasureStorage: %v", i, err)
}
// Failed as expected, but does it fail for the expected reason?
if actualErr != nil && !testCase.shouldPass {
if errorCause(actualErr) != testCase.expectedErr {
t.Errorf("Test %d: Expected Error to be \"%v\", but instead found \"%v\" ", i+1, testCase.expectedErr, actualErr)
buffer := make([]byte, test.blocksize, 2*test.blocksize)
data := make([]byte, test.data)
if _, err = io.ReadFull(rand.Reader, data); err != nil {
setup.Remove()
t.Fatalf("Test %d: failed to generate random test data: %v", i, err)
}
file, err := storage.CreateFile(bytes.NewReader(data[test.offset:]), "testbucket", "object", buffer, test.algorithm, test.dataBlocks+1)
if err != nil && !test.shouldFail {
t.Errorf("Test %d: should pass but failed with: %v", i, err)
}
if err == nil && test.shouldFail {
t.Errorf("Test %d: should fail but it passed", i)
}
if err == nil {
if length := int64(len(data[test.offset:])); file.Size != length {
t.Errorf("Test %d: invalid number of bytes written: got: #%d want #%d", i, file.Size, length)
}
for j := range storage.disks[:test.offDisks] {
storage.disks[j] = badDisk{nil}
}
if test.offDisks > 0 {
storage.disks[0] = OfflineDisk
}
file, err = storage.CreateFile(bytes.NewReader(data[test.offset:]), "testbucket", "object2", buffer, test.algorithm, test.dataBlocks+1)
if err != nil && !test.shouldFailQuorum {
t.Errorf("Test %d: should pass but failed with: %v", i, err)
}
if err == nil && test.shouldFailQuorum {
t.Errorf("Test %d: should fail but it passed", i)
}
if err == nil {
if length := int64(len(data[test.offset:])); file.Size != length {
t.Errorf("Test %d: invalid number of bytes written: got: #%d want #%d", i, file.Size, length)
}
}
}
setup.Remove()
}
}
// Benchmarks
func benchmarkErasureWrite(data, parity, dataDown, parityDown int, size int64, b *testing.B) {
setup, err := newErasureTestSetup(data, parity, blockSizeV1)
if err != nil {
b.Fatalf("failed to create test setup: %v", err)
}
defer setup.Remove()
storage, err := NewErasureStorage(setup.disks, data, parity, blockSizeV1)
if err != nil {
b.Fatalf("failed to create ErasureStorage: %v", err)
}
buffer := make([]byte, blockSizeV1, 2*blockSizeV1)
content := make([]byte, size)
for i := 0; i < dataDown; i++ {
storage.disks[i] = OfflineDisk
}
for i := data; i < data+parityDown; i++ {
storage.disks[i] = OfflineDisk
}
b.ResetTimer()
b.SetBytes(size)
b.ReportAllocs()
for i := 0; i < b.N; i++ {
_, err := storage.CreateFile(bytes.NewReader(content), "testbucket", "object", buffer, DefaultBitrotAlgorithm, data+1)
if err != nil {
panic(err)
}
}
}
func BenchmarkErasureWriteQuick(b *testing.B) {
const size = 12 * 1024 * 1024
b.Run(" 00|00 ", func(b *testing.B) { benchmarkErasureWrite(2, 2, 0, 0, size, b) })
b.Run(" 00|X0 ", func(b *testing.B) { benchmarkErasureWrite(2, 2, 0, 1, size, b) })
b.Run(" X0|00 ", func(b *testing.B) { benchmarkErasureWrite(2, 2, 1, 0, size, b) })
}
func BenchmarkErasureWrite_4_64KB(b *testing.B) {
const size = 64 * 1024
b.Run(" 00|00 ", func(b *testing.B) { benchmarkErasureWrite(2, 2, 0, 0, size, b) })
b.Run(" 00|X0 ", func(b *testing.B) { benchmarkErasureWrite(2, 2, 0, 1, size, b) })
b.Run(" X0|00 ", func(b *testing.B) { benchmarkErasureWrite(2, 2, 1, 0, size, b) })
}
func BenchmarkErasureWrite_8_20MB(b *testing.B) {
const size = 20 * 1024 * 1024
b.Run(" 0000|0000 ", func(b *testing.B) { benchmarkErasureWrite(4, 4, 0, 0, size, b) })
b.Run(" 0000|X000 ", func(b *testing.B) { benchmarkErasureWrite(4, 4, 0, 1, size, b) })
b.Run(" X000|0000 ", func(b *testing.B) { benchmarkErasureWrite(4, 4, 1, 0, size, b) })
b.Run(" 0000|XXX0 ", func(b *testing.B) { benchmarkErasureWrite(4, 4, 0, 3, size, b) })
b.Run(" XXX0|0000 ", func(b *testing.B) { benchmarkErasureWrite(4, 4, 3, 0, size, b) })
}
func BenchmarkErasureWrite_12_30MB(b *testing.B) {
const size = 30 * 1024 * 1024
b.Run(" 000000|000000 ", func(b *testing.B) { benchmarkErasureWrite(6, 6, 0, 0, size, b) })
b.Run(" 000000|X00000 ", func(b *testing.B) { benchmarkErasureWrite(6, 6, 0, 1, size, b) })
b.Run(" X00000|000000 ", func(b *testing.B) { benchmarkErasureWrite(6, 6, 1, 0, size, b) })
b.Run(" 000000|XXXXX0 ", func(b *testing.B) { benchmarkErasureWrite(6, 6, 0, 5, size, b) })
b.Run(" XXXXX0|000000 ", func(b *testing.B) { benchmarkErasureWrite(6, 6, 5, 0, size, b) })
}
func BenchmarkErasureWrite_16_40MB(b *testing.B) {
const size = 40 * 1024 * 1024
b.Run(" 00000000|00000000 ", func(b *testing.B) { benchmarkErasureWrite(8, 8, 0, 0, size, b) })
b.Run(" 00000000|X0000000 ", func(b *testing.B) { benchmarkErasureWrite(8, 8, 0, 1, size, b) })
b.Run(" X0000000|00000000 ", func(b *testing.B) { benchmarkErasureWrite(8, 8, 1, 0, size, b) })
b.Run(" 00000000|XXXXXXX0 ", func(b *testing.B) { benchmarkErasureWrite(8, 8, 0, 7, size, b) })
b.Run(" XXXXXXX0|00000000 ", func(b *testing.B) { benchmarkErasureWrite(8, 8, 7, 0, size, b) })
}


@@ -16,71 +16,169 @@
package cmd
import "encoding/hex"
import (
"fmt"
"hash"
"strings"
// Heals the erasure coded file. reedsolomon.Reconstruct() is used to reconstruct the missing parts.
func erasureHealFile(latestDisks []StorageAPI, outDatedDisks []StorageAPI, volume, path, healBucket, healPath string,
size, blockSize int64, dataBlocks, parityBlocks int, algo HashAlgo) (checkSums []string, err error) {
"github.com/minio/minio/pkg/errors"
)
var offset int64
remainingSize := size
// HealFile tries to reconstruct an erasure-coded file spread over all
// available disks. HealFile will read the valid parts of the file,
// reconstruct the missing data and write the reconstructed parts back
// to `staleDisks` at the destination `dstVol/dstPath/`. Parts are
// verified against the given BitrotAlgorithm and checksums.
//
// `staleDisks` is a slice of disks where each non-nil entry has stale
// or no data, and so will be healed.
//
// It is required that `s.disks` have a (read-quorum) majority of
// disks with valid data for healing to work.
//
// In addition, `staleDisks` and `s.disks` must have the same ordering
// of disks w.r.t. erasure coding of the object.
//
// Errors when writing to `staleDisks` are not propagated as long as
// writes succeed for at least one disk. This allows partial healing
// despite stale disks being faulty.
//
// It returns bitrot checksums for the non-nil staleDisks on which
// healing succeeded.
func (s ErasureStorage) HealFile(staleDisks []StorageAPI, volume, path string, blocksize int64,
dstVol, dstPath string, size int64, alg BitrotAlgorithm, checksums [][]byte) (
f ErasureFileInfo, err error) {
// Hash for bitrot protection.
hashWriters := newHashWriters(len(outDatedDisks), bitRotAlgo)
for remainingSize > 0 {
curBlockSize := blockSize
if remainingSize < curBlockSize {
curBlockSize = remainingSize
}
// Calculate the block size that needs to be read from each disk.
curEncBlockSize := getChunkSize(curBlockSize, dataBlocks)
// Memory for reading data from disks and reconstructing missing data using erasure coding.
enBlocks := make([][]byte, len(latestDisks))
// Read data from the latest disks.
// FIXME: no need to read from all the disks. dataBlocks+1 is enough.
for index, disk := range latestDisks {
if disk == nil {
continue
}
enBlocks[index] = make([]byte, curEncBlockSize)
_, err := disk.ReadFile(volume, path, offset, enBlocks[index])
if err != nil {
enBlocks[index] = nil
}
}
// Reconstruct missing data.
err := decodeData(enBlocks, dataBlocks, parityBlocks)
if err != nil {
return nil, err
}
// Write to the healPath file.
for index, disk := range outDatedDisks {
if disk == nil {
continue
}
err := disk.AppendFile(healBucket, healPath, enBlocks[index])
if err != nil {
return nil, traceError(err)
}
hashWriters[index].Write(enBlocks[index])
}
remainingSize -= curBlockSize
offset += curEncBlockSize
if !alg.Available() {
return f, errors.Trace(errBitrotHashAlgoInvalid)
}
// Checksums for the bit rot.
checkSums = make([]string, len(outDatedDisks))
for index, disk := range outDatedDisks {
if disk == nil {
// Initialization
f.Checksums = make([][]byte, len(s.disks))
hashers := make([]hash.Hash, len(s.disks))
verifiers := make([]*BitrotVerifier, len(s.disks))
for i, disk := range s.disks {
switch {
case staleDisks[i] != nil:
hashers[i] = alg.New()
case disk == nil:
// disregard unavailable disk
continue
default:
verifiers[i] = NewBitrotVerifier(alg, checksums[i])
}
}
writeErrors := make([]error, len(s.disks))
// Read part file data on each disk
chunksize := ceilFrac(blocksize, int64(s.dataBlocks))
numBlocks := ceilFrac(size, blocksize)
readLen := chunksize * (numBlocks - 1)
lastChunkSize := chunksize
hasSmallerLastBlock := size%blocksize != 0
if hasSmallerLastBlock {
lastBlockLen := size % blocksize
lastChunkSize = ceilFrac(lastBlockLen, int64(s.dataBlocks))
}
readLen += lastChunkSize
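// Worked example (illustrative numbers, not from this change): with
// blocksize = 10 MiB, dataBlocks = 5 and size = 15 MiB, chunksize =
// ceilFrac(10 MiB, 5) = 2 MiB and numBlocks = 2; the smaller last block
// holds 5 MiB, so lastChunkSize = ceilFrac(5 MiB, 5) = 1 MiB and
// readLen = 2 MiB + 1 MiB = 3 MiB read from each available disk.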
var buffers [][]byte
buffers, _, err = s.readConcurrent(volume, path, 0, readLen, verifiers)
if err != nil {
return f, err
}
// Scan part files on disk, block-by-block reconstruct it and
// write to stale disks.
blocks := make([][]byte, len(s.disks))
if numBlocks > 1 {
// Allocate once for all the equal length blocks. The
// last block may have a different length - allocation
// for this happens inside the for loop below.
for i := range blocks {
if len(buffers[i]) == 0 {
blocks[i] = make([]byte, chunksize)
}
}
}
var buffOffset int64
for blockNumber := int64(0); blockNumber < numBlocks; blockNumber++ {
if blockNumber == numBlocks-1 && lastChunkSize != chunksize {
for i := range blocks {
if len(buffers[i]) == 0 {
blocks[i] = make([]byte, lastChunkSize)
}
}
}
for i := range blocks {
if len(buffers[i]) == 0 {
blocks[i] = blocks[i][0:0]
}
}
csize := chunksize
if blockNumber == numBlocks-1 {
csize = lastChunkSize
}
for i := range blocks {
if len(buffers[i]) != 0 {
blocks[i] = buffers[i][buffOffset : buffOffset+csize]
}
}
buffOffset += csize
if err = s.ErasureDecodeDataAndParityBlocks(blocks); err != nil {
return f, err
}
// write computed shards as chunks on file in each
// stale disk
writeSucceeded := false
for i, disk := range staleDisks {
// skip nil disk or disk that had error on
// previous write
if disk == nil || writeErrors[i] != nil {
continue
}
writeErrors[i] = disk.AppendFile(dstVol, dstPath, blocks[i])
if writeErrors[i] == nil {
hashers[i].Write(blocks[i])
writeSucceeded = true
}
}
// If all disks had write errors we quit.
if !writeSucceeded {
// build error from all write errors
return f, errors.Trace(joinWriteErrors(writeErrors))
}
}
// copy computed file hashes into output variable
f.Size = size
f.Algorithm = alg
for i, disk := range staleDisks {
if disk == nil || writeErrors[i] != nil {
continue
}
checkSums[index] = hex.EncodeToString(hashWriters[index].Sum(nil))
f.Checksums[i] = hashers[i].Sum(nil)
}
return checkSums, nil
return f, nil
}
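
A minimal sketch of driving HealFile, mirroring the test further below; which disk is healed and the bucket/object names are illustrative assumptions:

// Heal the first disk back in place: non-nil staleDisks entries are the
// healing targets and must line up with storage.disks.
staleDisks := make([]StorageAPI, len(storage.disks))
staleDisks[0] = storage.disks[0] // write the reconstructed part here
storage.disks[0] = OfflineDisk   // don't read its stale data
info, err := storage.HealFile(staleDisks, "mybucket", "myobject", blockSizeV1,
	"mybucket", "myobject", size, DefaultBitrotAlgorithm, checksums)
if err != nil {
	return err
}
// info.Checksums[0] now holds the bitrot checksum of the healed part.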
func joinWriteErrors(errs []error) error {
msgs := []string{}
for i, err := range errs {
if err == nil {
continue
}
msgs = append(msgs, fmt.Sprintf("disk %d: %v", i+1, err))
}
return fmt.Errorf("all stale disks had write errors during healing: %s",
strings.Join(msgs, ", "))
}


@@ -19,111 +19,125 @@ package cmd
import (
"bytes"
"crypto/rand"
"os"
"path"
"io"
"reflect"
"testing"
humanize "github.com/dustin/go-humanize"
)
// Test erasureHealFile()
var erasureHealFileTests = []struct {
dataBlocks, disks int
// number of offline disks is also number of staleDisks for
// erasure reconstruction in this test
offDisks int
// bad disks are online disks which return errors
badDisks, badStaleDisks int
blocksize, size int64
algorithm BitrotAlgorithm
shouldFail bool
}{
{dataBlocks: 2, disks: 4, offDisks: 1, badDisks: 0, badStaleDisks: 0, blocksize: int64(blockSizeV1), size: oneMiByte, algorithm: SHA256, shouldFail: false}, // 0
{dataBlocks: 3, disks: 6, offDisks: 2, badDisks: 0, badStaleDisks: 0, blocksize: int64(blockSizeV1), size: oneMiByte, algorithm: BLAKE2b512, shouldFail: false}, // 1
{dataBlocks: 4, disks: 8, offDisks: 2, badDisks: 1, badStaleDisks: 0, blocksize: int64(blockSizeV1), size: oneMiByte, algorithm: BLAKE2b512, shouldFail: false}, // 2
{dataBlocks: 5, disks: 10, offDisks: 3, badDisks: 1, badStaleDisks: 0, blocksize: int64(blockSizeV1), size: oneMiByte, algorithm: DefaultBitrotAlgorithm, shouldFail: false}, // 3
{dataBlocks: 6, disks: 12, offDisks: 2, badDisks: 3, badStaleDisks: 0, blocksize: int64(blockSizeV1), size: oneMiByte, algorithm: SHA256, shouldFail: false}, // 4
{dataBlocks: 7, disks: 14, offDisks: 4, badDisks: 1, badStaleDisks: 0, blocksize: int64(blockSizeV1), size: oneMiByte, algorithm: DefaultBitrotAlgorithm, shouldFail: false}, // 5
{dataBlocks: 8, disks: 16, offDisks: 6, badDisks: 1, badStaleDisks: 1, blocksize: int64(blockSizeV1), size: oneMiByte, algorithm: DefaultBitrotAlgorithm, shouldFail: false}, // 6
{dataBlocks: 7, disks: 14, offDisks: 2, badDisks: 3, badStaleDisks: 0, blocksize: int64(oneMiByte / 2), size: oneMiByte, algorithm: BLAKE2b512, shouldFail: false}, // 7
{dataBlocks: 6, disks: 12, offDisks: 1, badDisks: 0, badStaleDisks: 1, blocksize: int64(oneMiByte - 1), size: oneMiByte, algorithm: DefaultBitrotAlgorithm, shouldFail: true}, // 8
{dataBlocks: 5, disks: 10, offDisks: 3, badDisks: 0, badStaleDisks: 3, blocksize: int64(oneMiByte / 2), size: oneMiByte, algorithm: SHA256, shouldFail: true}, // 9
{dataBlocks: 4, disks: 8, offDisks: 1, badDisks: 1, badStaleDisks: 0, blocksize: int64(blockSizeV1), size: oneMiByte, algorithm: DefaultBitrotAlgorithm, shouldFail: false}, // 10
{dataBlocks: 2, disks: 4, offDisks: 1, badDisks: 0, badStaleDisks: 1, blocksize: int64(blockSizeV1), size: oneMiByte, algorithm: DefaultBitrotAlgorithm, shouldFail: true}, // 11
{dataBlocks: 6, disks: 12, offDisks: 8, badDisks: 3, badStaleDisks: 0, blocksize: int64(blockSizeV1), size: oneMiByte, algorithm: DefaultBitrotAlgorithm, shouldFail: true}, // 12
{dataBlocks: 7, disks: 14, offDisks: 3, badDisks: 4, badStaleDisks: 0, blocksize: int64(blockSizeV1), size: oneMiByte, algorithm: BLAKE2b512, shouldFail: false}, // 13
{dataBlocks: 7, disks: 14, offDisks: 6, badDisks: 1, badStaleDisks: 0, blocksize: int64(blockSizeV1), size: oneMiByte, algorithm: DefaultBitrotAlgorithm, shouldFail: false}, // 14
{dataBlocks: 8, disks: 16, offDisks: 4, badDisks: 5, badStaleDisks: 0, blocksize: int64(blockSizeV1), size: oneMiByte, algorithm: DefaultBitrotAlgorithm, shouldFail: true}, // 15
{dataBlocks: 2, disks: 4, offDisks: 1, badDisks: 0, badStaleDisks: 0, blocksize: int64(blockSizeV1), size: oneMiByte, algorithm: DefaultBitrotAlgorithm, shouldFail: false}, // 16
{dataBlocks: 2, disks: 4, offDisks: 0, badDisks: 0, badStaleDisks: 0, blocksize: int64(blockSizeV1), size: oneMiByte, algorithm: 0, shouldFail: true}, // 17
{dataBlocks: 12, disks: 16, offDisks: 2, badDisks: 1, badStaleDisks: 0, blocksize: int64(blockSizeV1), size: oneMiByte, algorithm: DefaultBitrotAlgorithm, shouldFail: false}, // 18
{dataBlocks: 6, disks: 8, offDisks: 1, badDisks: 0, badStaleDisks: 0, blocksize: int64(blockSizeV1), size: oneMiByte, algorithm: BLAKE2b512, shouldFail: false}, // 19
{dataBlocks: 7, disks: 10, offDisks: 1, badDisks: 0, badStaleDisks: 0, blocksize: int64(blockSizeV1), size: oneMiByte, algorithm: 0, shouldFail: true}, // 20
{dataBlocks: 2, disks: 4, offDisks: 1, badDisks: 0, badStaleDisks: 0, blocksize: int64(blockSizeV1), size: oneMiByte * 64, algorithm: SHA256, shouldFail: false}, // 21
}
func TestErasureHealFile(t *testing.T) {
// Initialize environment needed for the test.
dataBlocks := 7
parityBlocks := 7
blockSize := int64(blockSizeV1)
setup, err := newErasureTestSetup(dataBlocks, parityBlocks, blockSize)
if err != nil {
t.Error(err)
return
}
defer setup.Remove()
for i, test := range erasureHealFileTests {
if test.offDisks < test.badStaleDisks {
// test case sanity check
t.Fatalf("Test %d: Bad test case - number of stale disks cannot be less than number of badstale disks", i)
}
disks := setup.disks
// Prepare a slice of 1MiB with random data.
data := make([]byte, 1*humanize.MiByte)
_, err = rand.Read(data)
if err != nil {
t.Fatal(err)
}
// Create a test file.
_, size, checkSums, err := erasureCreateFile(disks, "testbucket", "testobject1", bytes.NewReader(data), true, blockSize, dataBlocks, parityBlocks, bitRotAlgo, dataBlocks+1)
if err != nil {
t.Fatal(err)
}
if size != int64(len(data)) {
t.Errorf("erasureCreateFile returned %d, expected %d", size, len(data))
}
latest := make([]StorageAPI, len(disks)) // Slice of latest disks
outDated := make([]StorageAPI, len(disks)) // Slice of outdated disks
// Test case when one part needs to be healed.
dataPath := path.Join(setup.diskPaths[0], "testbucket", "testobject1")
err = os.Remove(dataPath)
if err != nil {
t.Fatal(err)
}
copy(latest, disks)
latest[0] = nil
outDated[0] = disks[0]
healCheckSums, err := erasureHealFile(latest, outDated, "testbucket", "testobject1", "testbucket", "testobject1", 1*humanize.MiByte, blockSize, dataBlocks, parityBlocks, bitRotAlgo)
if err != nil {
t.Fatal(err)
}
// Checksum of the healed file should match.
if checkSums[0] != healCheckSums[0] {
t.Error("Healing failed, data does not match.")
}
// Test case when parityBlocks number of disks need to be healed.
// Should succeed.
copy(latest, disks)
for index := 0; index < parityBlocks; index++ {
dataPath := path.Join(setup.diskPaths[index], "testbucket", "testobject1")
err = os.Remove(dataPath)
// create some test data
setup, err := newErasureTestSetup(test.dataBlocks, test.disks-test.dataBlocks, test.blocksize)
if err != nil {
t.Fatal(err)
t.Fatalf("Test %d: failed to setup XL environment: %v", i, err)
}
latest[index] = nil
outDated[index] = disks[index]
}
healCheckSums, err = erasureHealFile(latest, outDated, "testbucket", "testobject1", "testbucket", "testobject1", 1*humanize.MiByte, blockSize, dataBlocks, parityBlocks, bitRotAlgo)
if err != nil {
t.Fatal(err)
}
// Checksums of the healed files should match.
for index := 0; index < parityBlocks; index++ {
if checkSums[index] != healCheckSums[index] {
t.Error("Healing failed, data does not match.")
}
}
for index := dataBlocks; index < len(disks); index++ {
if healCheckSums[index] != "" {
t.Errorf("expected healCheckSums[%d] to be empty", index)
}
}
// Test case when parityBlocks+1 number of disks need to be healed.
// Should fail.
copy(latest, disks)
for index := 0; index < parityBlocks+1; index++ {
dataPath := path.Join(setup.diskPaths[index], "testbucket", "testobject1")
err = os.Remove(dataPath)
storage, err := NewErasureStorage(setup.disks, test.dataBlocks, test.disks-test.dataBlocks, test.blocksize)
if err != nil {
t.Fatal(err)
setup.Remove()
t.Fatalf("Test %d: failed to create ErasureStorage: %v", i, err)
}
data := make([]byte, test.size)
if _, err = io.ReadFull(rand.Reader, data); err != nil {
setup.Remove()
t.Fatalf("Test %d: failed to create random test data: %v", i, err)
}
algorithm := test.algorithm
if !algorithm.Available() {
algorithm = DefaultBitrotAlgorithm
}
buffer := make([]byte, test.blocksize, 2*test.blocksize)
file, err := storage.CreateFile(bytes.NewReader(data), "testbucket", "testobject", buffer, algorithm, test.dataBlocks+1)
if err != nil {
setup.Remove()
t.Fatalf("Test %d: failed to create random test data: %v", i, err)
}
latest[index] = nil
outDated[index] = disks[index]
}
_, err = erasureHealFile(latest, outDated, "testbucket", "testobject1", "testbucket", "testobject1", 1*humanize.MiByte, blockSize, dataBlocks, parityBlocks, bitRotAlgo)
if err == nil {
t.Error("Expected erasureHealFile() to fail when the number of available disks <= parityBlocks")
// setup stale disks for the test case
staleDisks := make([]StorageAPI, len(storage.disks))
copy(staleDisks, storage.disks)
for j := 0; j < len(storage.disks); j++ {
if j < test.offDisks {
storage.disks[j] = OfflineDisk
} else {
staleDisks[j] = nil
}
}
for j := 0; j < test.badDisks; j++ {
storage.disks[test.offDisks+j] = badDisk{nil}
}
for j := 0; j < test.badStaleDisks; j++ {
staleDisks[j] = badDisk{nil}
}
// test case setup is complete - now call HealFile()
info, err := storage.HealFile(staleDisks, "testbucket", "testobject", test.blocksize, "testbucket", "healedobject", test.size, test.algorithm, file.Checksums)
if err != nil && !test.shouldFail {
t.Errorf("Test %d: should pass but it failed with: %v", i, err)
}
if err == nil && test.shouldFail {
t.Errorf("Test %d: should fail but it passed", i)
}
if err == nil {
if info.Size != test.size {
t.Errorf("Test %d: healed wrong number of bytes: got: #%d want: #%d", i, info.Size, test.size)
}
if info.Algorithm != test.algorithm {
t.Errorf("Test %d: healed with wrong algorithm: got: %v want: %v", i, info.Algorithm, test.algorithm)
}
// Verify that checksums of staleDisks
// match expected values
for i, disk := range staleDisks {
if disk == nil || info.Checksums[i] == nil {
continue
}
if !reflect.DeepEqual(info.Checksums[i], file.Checksums[i]) {
t.Errorf("Test %d: heal returned different bitrot checksums", i)
}
}
}
setup.Remove()
}
}


@@ -17,328 +17,213 @@
package cmd
import (
"errors"
"io"
"sync"
"github.com/klauspost/reedsolomon"
"github.com/minio/minio/pkg/bpool"
"github.com/minio/minio/pkg/errors"
)
// isSuccessDecodeBlocks - do we have enough blocks to be
// successfully decoded? Input is an ordered matrix of encoded blocks.
func isSuccessDecodeBlocks(enBlocks [][]byte, dataBlocks int) bool {
// Count number of data and parity blocks that were read.
var successDataBlocksCount = 0
var successParityBlocksCount = 0
for index := range enBlocks {
if enBlocks[index] == nil {
continue
}
// block index lesser than data blocks, update data block count.
if index < dataBlocks {
successDataBlocksCount++
continue
} // else { // update parity block count.
successParityBlocksCount++
}
// Returns true if we have at least dataBlocks blocks available (data + parity).
return successDataBlocksCount == dataBlocks || successDataBlocksCount+successParityBlocksCount >= dataBlocks
type errIdx struct {
idx int
err error
}
// isSuccessDataBlocks - do we have all the data blocks?
// Input is an ordered matrix of encoded blocks.
func isSuccessDataBlocks(enBlocks [][]byte, dataBlocks int) bool {
// Count number of data blocks that were read.
var successDataBlocksCount = 0
for index := range enBlocks[:dataBlocks] {
if enBlocks[index] == nil {
continue
func (s ErasureStorage) readConcurrent(volume, path string, offset, length int64,
verifiers []*BitrotVerifier) (buffers [][]byte, needsReconstruction bool,
err error) {
errChan := make(chan errIdx)
stageBuffers := make([][]byte, len(s.disks))
buffers = make([][]byte, len(s.disks))
readDisk := func(i int) {
stageBuffers[i] = make([]byte, length)
disk := s.disks[i]
if disk == OfflineDisk {
errChan <- errIdx{i, errors.Trace(errDiskNotFound)}
return
}
// block index lesser than data blocks, update data block count.
if index < dataBlocks {
successDataBlocksCount++
_, rerr := disk.ReadFile(volume, path, offset, stageBuffers[i], verifiers[i])
errChan <- errIdx{i, rerr}
}
var finishedCount, successCount, launchIndex int
for ; launchIndex < s.dataBlocks; launchIndex++ {
go readDisk(launchIndex)
}
for finishedCount < launchIndex {
select {
case errVal := <-errChan:
finishedCount++
if errVal.err != nil {
// TODO: meaningfully log the disk read error
// A disk failed to return data, so we
// request an additional disk if possible
if launchIndex < s.dataBlocks+s.parityBlocks {
needsReconstruction = true
// requiredBlocks++
go readDisk(launchIndex)
launchIndex++
}
} else {
successCount++
buffers[errVal.idx] = stageBuffers[errVal.idx]
stageBuffers[errVal.idx] = nil
}
}
}
// Returns true if we have at least dataBlocks data blocks.
return successDataBlocksCount >= dataBlocks
if successCount != s.dataBlocks {
// Not enough disks returned data.
err = errors.Trace(errXLReadQuorum)
}
return
}
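
The staged fan-out that readConcurrent implements — start with the minimum number of reads and launch one replacement per failure — reduced to a self-contained sketch with generic names (an illustration, not code from this change):

// launchStaged runs up to n tasks, starting k at once and launching one
// replacement per failure; it reports whether k tasks succeeded.
func launchStaged(n, k int, task func(i int) error) bool {
	errCh := make(chan error)
	launched, finished, succeeded := 0, 0, 0
	for ; launched < k; launched++ {
		go func(i int) { errCh <- task(i) }(launched)
	}
	for finished < launched {
		if err := <-errCh; err != nil {
			if launched < n { // escalate to the next spare task
				go func(i int) { errCh <- task(i) }(launched)
				launched++
			}
		} else {
			succeeded++
		}
		finished++
	}
	return succeeded == k
}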
// Return a slice of readable disks from which we can read in parallel.
func getReadDisks(orderedDisks []StorageAPI, index int, dataBlocks int) (readDisks []StorageAPI, nextIndex int, err error) {
readDisks = make([]StorageAPI, len(orderedDisks))
dataDisks := 0
parityDisks := 0
// Count already read data and parity chunks.
for i := 0; i < index; i++ {
if orderedDisks[i] == nil {
// ReadFile reads as much data as requested from the file under the
// given volume and path and writes the data to the provided writer.
// The algorithm and the keys/checksums are used to verify the
// integrity of the given file. ReadFile will read data from the given
// offset up to the given length. If parts of the file are corrupted
// ReadFile tries to reconstruct the data.
func (s ErasureStorage) ReadFile(writer io.Writer, volume, path string, offset,
length, totalLength int64, checksums [][]byte, algorithm BitrotAlgorithm,
blocksize int64) (f ErasureFileInfo, err error) {
if offset < 0 || length < 0 {
return f, errors.Trace(errUnexpected)
}
if offset+length > totalLength {
return f, errors.Trace(errUnexpected)
}
if !algorithm.Available() {
return f, errors.Trace(errBitrotHashAlgoInvalid)
}
f.Checksums = make([][]byte, len(s.disks))
verifiers := make([]*BitrotVerifier, len(s.disks))
for i, disk := range s.disks {
if disk == OfflineDisk {
continue
}
if i < dataBlocks {
dataDisks++
} else {
parityDisks++
verifiers[i] = NewBitrotVerifier(algorithm, checksums[i])
}
chunksize := ceilFrac(blocksize, int64(s.dataBlocks))
// We read all whole-blocks of erasure coded data containing
// the requested data range.
//
// The start index of the erasure coded block containing the
// `offset` byte of data is:
partDataStartIndex := (offset / blocksize) * chunksize
// The start index of the erasure coded block containing the
// (last) byte of data at the index `offset + length - 1` is:
blockStartIndex := ((offset + length - 1) / blocksize) * chunksize
// However, we need the end index of the e.c. block containing
// the last byte - we need to check if that block is the last
// block in the part (in that case, it may have a different
// chunk size)
isLastBlock := (totalLength-1)/blocksize == (offset+length-1)/blocksize
var partDataEndIndex int64
if isLastBlock {
lastBlockChunkSize := chunksize
if totalLength%blocksize != 0 {
lastBlockChunkSize = ceilFrac(totalLength%blocksize, int64(s.dataBlocks))
}
partDataEndIndex = blockStartIndex + lastBlockChunkSize - 1
} else {
partDataEndIndex = blockStartIndex + chunksize - 1
}
// Thus, the length of data to be read from the part file(s) is:
partDataLength := partDataEndIndex - partDataStartIndex + 1
// The calculation above does not apply when length == 0:
if length == 0 {
partDataLength = 0
}
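// Worked example (illustrative numbers): blocksize = 10 MiB, dataBlocks = 5,
// so chunksize = 2 MiB. For offset = 15 MiB, length = 6 MiB, totalLength =
// 25 MiB: partDataStartIndex = (15/10)*2 = 2 MiB; the last requested byte
// lies in the third block, so blockStartIndex = 2*2 = 4 MiB and, since that
// is also the part's last block with only 5 MiB of data, lastBlockChunkSize
// = 1 MiB, giving partDataEndIndex = 5 MiB - 1 and partDataLength = 3 MiB.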
var buffers [][]byte
var needsReconstruction bool
buffers, needsReconstruction, err = s.readConcurrent(volume, path,
partDataStartIndex, partDataLength, verifiers)
if err != nil {
// Could not read enough disks.
return
}
numChunks := ceilFrac(partDataLength, chunksize)
blocks := make([][]byte, len(s.disks))
if needsReconstruction && numChunks > 1 {
// Allocate once for all the equal length blocks. The
// last block may have a different length - allocation
// for this happens inside the for loop below.
for i := range blocks {
if len(buffers[i]) == 0 {
blocks[i] = make([]byte, chunksize)
}
}
}
// Sanity checks - we should never have this situation.
if dataDisks == dataBlocks {
return nil, 0, traceError(errUnexpected)
}
if dataDisks+parityDisks >= dataBlocks {
return nil, 0, traceError(errUnexpected)
}
// Find the disks from which next set of parallel reads should happen.
for i := index; i < len(orderedDisks); i++ {
if orderedDisks[i] == nil {
continue
}
if i < dataBlocks {
dataDisks++
} else {
parityDisks++
}
readDisks[i] = orderedDisks[i]
if dataDisks == dataBlocks {
return readDisks, i + 1, nil
} else if dataDisks+parityDisks == dataBlocks {
return readDisks, i + 1, nil
}
}
return nil, 0, traceError(errXLReadQuorum)
}
// parallelRead - reads chunks in parallel from the disks specified in []readDisks.
func parallelRead(volume, path string, readDisks, orderedDisks []StorageAPI, enBlocks [][]byte,
blockOffset, curChunkSize int64, brVerifiers []bitRotVerifier, pool *bpool.BytePool) {
// WaitGroup to synchronise the read go-routines.
wg := &sync.WaitGroup{}
// Read disks in parallel.
for index := range readDisks {
if readDisks[index] == nil {
continue
}
wg.Add(1)
// Reads chunk from readDisk[index] in routine.
go func(index int) {
defer wg.Done()
// evaluate if we need to perform bit-rot checking
needBitRotVerification := true
if brVerifiers[index].isVerified {
needBitRotVerification = false
// if file has bit-rot, do not reuse disk
if brVerifiers[index].hasBitRot {
orderedDisks[index] = nil
return
var buffOffset int64
for chunkNumber := int64(0); chunkNumber < numChunks; chunkNumber++ {
if chunkNumber == numChunks-1 && partDataLength%chunksize != 0 {
chunksize = partDataLength % chunksize
// We allocate again as the last chunk has a
// different size.
for i := range blocks {
if len(buffers[i]) == 0 {
blocks[i] = make([]byte, chunksize)
}
}
buf, err := pool.Get()
if err != nil {
errorIf(err, "unable to get buffer from byte pool")
orderedDisks[index] = nil
return
}
buf = buf[:curChunkSize]
if needBitRotVerification {
_, err = readDisks[index].ReadFileWithVerify(
volume, path, blockOffset, buf,
brVerifiers[index].algo,
brVerifiers[index].checkSum)
} else {
_, err = readDisks[index].ReadFile(volume, path,
blockOffset, buf)
}
// if bit-rot verification was done, store the
// result of verification so we can skip
// re-doing it next time
if needBitRotVerification {
brVerifiers[index].isVerified = true
_, ok := err.(hashMismatchError)
brVerifiers[index].hasBitRot = ok
}
if err != nil {
orderedDisks[index] = nil
return
}
enBlocks[index] = buf
}(index)
}
// Wait for the current batch of read routines to finish.
wg.Wait()
}
// erasureReadFile - read bytes from erasure coded files and writes to
// given writer. Erasure coded files are read block by block as per
// given erasureInfo and data chunks are decoded into a data
// block. Data block is trimmed for given offset and length, then
// written to given writer. This function also supports bit-rot
// detection by verifying checksum of individual block's checksum.
func erasureReadFile(writer io.Writer, disks []StorageAPI, volume, path string,
offset, length, totalLength, blockSize int64, dataBlocks, parityBlocks int,
checkSums []string, algo HashAlgo, pool *bpool.BytePool) (int64, error) {
// Offset and length cannot be negative.
if offset < 0 || length < 0 {
return 0, traceError(errUnexpected)
}
// Can't request more data than what is available.
if offset+length > totalLength {
return 0, traceError(errUnexpected)
}
// chunkSize is the amount of data that needs to be read from
// each disk at a time.
chunkSize := getChunkSize(blockSize, dataBlocks)
brVerifiers := make([]bitRotVerifier, len(disks))
for i := range brVerifiers {
brVerifiers[i].algo = algo
brVerifiers[i].checkSum = checkSums[i]
}
// Total bytes written to writer
var bytesWritten int64
startBlock := offset / blockSize
endBlock := (offset + length) / blockSize
// curChunkSize = chunk size for the current block in the for loop below.
// curBlockSize = block size for the current block in the for loop below.
// curChunkSize and curBlockSize can change for the last block if totalLength%blockSize != 0
curChunkSize := chunkSize
curBlockSize := blockSize
// For each block, read a chunk from each disk. If we are able to read from all the data disks then we don't
// need to read parity disks. If one of the data disks is missing we need to read from dataBlocks+1
// disks. Once read, we Reconstruct() missing data if needed and write it to the given writer.
for block := startBlock; block <= endBlock; block++ {
// Mark all buffers as unused at the start of the loop so that the buffers
// can be reused.
pool.Reset()
// Each element of enBlocks holds curChunkSize'd amount of data read from its corresponding disk.
enBlocks := make([][]byte, len(disks))
if ((offset + bytesWritten) / blockSize) == (totalLength / blockSize) {
// This is the last block for which curBlockSize and curChunkSize can change.
// For ex. if totalLength is 15M and blockSize is 10MB, curBlockSize for
// the last block should be 5MB.
curBlockSize = totalLength % blockSize
curChunkSize = getChunkSize(curBlockSize, dataBlocks)
}
// NOTE: for the offset calculation we have to use chunkSize and
// not curChunkSize. Using curChunkSize for the offset calculation
// would result in a wrong offset for the last block.
blockOffset := block * chunkSize
// nextIndex - index from which next set of parallel reads
// should happen.
nextIndex := 0
for {
// readDisks - disks from which we need to read in parallel.
var readDisks []StorageAPI
var err error
// Get the slice of readable disks from which we can read in parallel.
readDisks, nextIndex, err = getReadDisks(disks, nextIndex, dataBlocks)
if err != nil {
return bytesWritten, err
}
// Issue a parallel read across the disks specified in readDisks.
parallelRead(volume, path, readDisks, disks, enBlocks, blockOffset, curChunkSize, brVerifiers, pool)
if isSuccessDecodeBlocks(enBlocks, dataBlocks) {
// If enough blocks are available to do rs.Reconstruct()
break
}
if nextIndex == len(disks) {
// No more disks to read from.
return bytesWritten, traceError(errXLReadQuorum)
}
// We do not have enough data blocks to reconstruct the data,
// hence continue the for-loop until we have enough data blocks.
}
// If we have all the data blocks no need to decode, continue to write.
if !isSuccessDataBlocks(enBlocks, dataBlocks) {
// Reconstruct the missing data blocks.
if err := decodeData(enBlocks, dataBlocks, parityBlocks); err != nil {
return bytesWritten, err
}
}
// Offset in enBlocks from where data should be read from.
var enBlocksOffset int64
// Total data to be read from enBlocks.
enBlocksLength := curBlockSize
// If this is the start block then enBlocksOffset might not be 0.
if block == startBlock {
enBlocksOffset = offset % blockSize
enBlocksLength -= enBlocksOffset
}
remaining := length - bytesWritten
if remaining < enBlocksLength {
// We should not send more data than what was requested.
enBlocksLength = remaining
}
// Write data blocks.
n, err := writeDataBlocks(writer, enBlocks, dataBlocks, enBlocksOffset, enBlocksLength)
if err != nil {
return bytesWritten, err
}
// Update total bytes written.
bytesWritten += n
if bytesWritten == length {
// Done writing all the requested data.
break
}
}
// Success.
return bytesWritten, nil
}
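The offset/length bookkeeping above (startBlock, endBlock, enBlocksOffset, and the short final block) can be checked in isolation. A standalone sketch with illustrative numbers, mirroring the in-code comment about a 15 MiB file with a 10 MiB block size:

package main

import "fmt"

func main() {
	const MiB = int64(1024 * 1024)
	// Illustrative values only: a 15 MiB object, 10 MiB block size,
	// reading 6 MiB starting at offset 9 MiB.
	blockSize, totalLength := 10*MiB, 15*MiB
	offset, length := 9*MiB, 6*MiB

	startBlock := offset / blockSize          // 0: the read begins inside block 0
	endBlock := (offset + length) / blockSize // 1: the read ends inside block 1
	enBlocksOffset := offset % blockSize      // 9 MiB skipped within the start block
	lastBlockSize := totalLength % blockSize  // 5 MiB: the short final block
	fmt.Println(startBlock, endBlock, enBlocksOffset/MiB, lastBlockSize/MiB) // 0 1 9 5
}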
// decodeData - decode encoded blocks.
func decodeData(enBlocks [][]byte, dataBlocks, parityBlocks int) error {
// Initialize reedsolomon.
rs, err := reedsolomon.New(dataBlocks, parityBlocks)
if err != nil {
return traceError(err)
}
// Reconstruct encoded blocks.
err = rs.Reconstruct(enBlocks)
if err != nil {
return traceError(err)
}
// Verify reconstructed blocks (parity).
ok, err := rs.Verify(enBlocks)
if err != nil {
return traceError(err)
}
if !ok {
// Blocks cannot be reconstructed, corrupted data.
err = errors.New("Verification failed after reconstruction, data likely corrupted")
return traceError(err)
}
// Success.
return nil
}
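decodeData is a thin wrapper over github.com/klauspost/reedsolomon, which this diff already imports. For reference, a minimal, self-contained sketch of the same Reconstruct-then-Verify sequence (shard counts and payload are illustrative, not from the Minio code):

package main

import (
	"fmt"

	"github.com/klauspost/reedsolomon"
)

func main() {
	rs, err := reedsolomon.New(4, 2) // 4 data + 2 parity shards, illustrative
	if err != nil {
		panic(err)
	}
	shards, err := rs.Split([]byte("some content to protect against disk loss"))
	if err != nil {
		panic(err)
	}
	if err = rs.Encode(shards); err != nil { // compute the parity shards
		panic(err)
	}
	shards[0], shards[5] = nil, nil // simulate two lost disks
	if err = rs.Reconstruct(shards); err != nil {
		panic(err)
	}
	ok, err := rs.Verify(shards) // same post-reconstruction check as decodeData
	fmt.Println(ok, err)         // true <nil>
}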


@@ -18,358 +18,142 @@ package cmd
import (
"bytes"
crand "crypto/rand"
"io"
"math/rand"
"reflect"
"testing"
humanize "github.com/dustin/go-humanize"
"github.com/minio/minio/pkg/bpool"
)
// Tests getReadDisks which returns the slice of readable disks from which
// we can read in parallel.
func testGetReadDisks(t *testing.T, xl *xlObjects) {
d := xl.storageDisks
testCases := []struct {
index int // index argument for getReadDisks
argDisks []StorageAPI // disks argument for getReadDisks
retDisks []StorageAPI // disks return value from getReadDisks
nextIndex int // return value from getReadDisks
err error // error return value from getReadDisks
}{
// Test case - 1.
// When all disks are available, should return data disks.
{
0,
[]StorageAPI{d[0], d[1], d[2], d[3], d[4], d[5], d[6], d[7], d[8], d[9], d[10], d[11], d[12], d[13], d[14], d[15]},
[]StorageAPI{d[0], d[1], d[2], d[3], d[4], d[5], d[6], d[7], nil, nil, nil, nil, nil, nil, nil, nil},
8,
nil,
},
// Test case - 2.
// If a parity disk is down, should return all data disks.
{
0,
[]StorageAPI{d[0], d[1], d[2], d[3], d[4], d[5], d[6], d[7], d[8], nil, d[10], d[11], d[12], d[13], d[14], d[15]},
[]StorageAPI{d[0], d[1], d[2], d[3], d[4], d[5], d[6], d[7], nil, nil, nil, nil, nil, nil, nil, nil},
8,
nil,
},
// Test case - 3.
// If a data disk is down, should return 7 data and 1 parity.
{
0,
[]StorageAPI{nil, d[1], d[2], d[3], d[4], d[5], d[6], d[7], d[8], d[9], d[10], d[11], d[12], d[13], d[14], d[15]},
[]StorageAPI{nil, d[1], d[2], d[3], d[4], d[5], d[6], d[7], d[8], nil, nil, nil, nil, nil, nil, nil},
9,
nil,
},
// Test case - 4.
// If 7 data disks are down, should return 1 data and 7 parity.
{
0,
[]StorageAPI{nil, nil, nil, nil, nil, nil, nil, d[7], d[8], d[9], d[10], d[11], d[12], d[13], d[14], d[15]},
[]StorageAPI{nil, nil, nil, nil, nil, nil, nil, d[7], d[8], d[9], d[10], d[11], d[12], d[13], d[14], nil},
15,
nil,
},
// Test case - 5.
// When 2 disks fail during parallelRead, next call to getReadDisks should return 3 disks
{
8,
[]StorageAPI{nil, nil, d[2], d[3], d[4], d[5], d[6], d[7], d[8], d[9], d[10], d[11], d[12], d[13], d[14], d[15]},
[]StorageAPI{nil, nil, nil, nil, nil, nil, nil, nil, d[8], d[9], nil, nil, nil, nil, nil, nil},
10,
nil,
},
// Test case - 6.
// If 2 disks again fail from the 3 disks returned previously, return next 2 disks
{
11,
[]StorageAPI{nil, nil, d[2], d[3], d[4], d[5], d[6], d[7], nil, nil, d[10], d[11], d[12], d[13], d[14], d[15]},
[]StorageAPI{nil, nil, nil, nil, nil, nil, nil, nil, nil, nil, nil, d[11], nil, nil, nil, nil},
12,
nil,
},
// Test case - 7.
// No more disks are available for read, return error
{
13,
[]StorageAPI{nil, nil, d[2], d[3], d[4], d[5], d[6], d[7], nil, nil, d[10], nil, nil, nil, nil, nil},
nil,
0,
errXLReadQuorum,
},
}
for i, test := range testCases {
disks, nextIndex, err := getReadDisks(test.argDisks, test.index, xl.dataBlocks)
if errorCause(err) != test.err {
t.Errorf("test-case %d - expected error : %s, got : %s", i+1, test.err, err)
continue
}
if test.nextIndex != nextIndex {
t.Errorf("test-case %d - expected nextIndex: %d, got : %d", i+1, test.nextIndex, nextIndex)
continue
}
if !reflect.DeepEqual(test.retDisks, disks) {
t.Errorf("test-case %d : incorrect disks returned. expected %+v, got %+v", i+1, test.retDisks, disks)
continue
}
}
}
// Test for isSuccessDataBlocks and isSuccessDecodeBlocks.
func TestIsSuccessBlocks(t *testing.T) {
dataBlocks := 8
testCases := []struct {
enBlocks [][]byte // data and parity blocks.
successData bool // expected return value of isSuccessDataBlocks()
successDecode bool // expected return value of isSuccessDecodeBlocks()
}{
{
// When all data and parity blocks are available.
[][]byte{
{'a'}, {'a'}, {'a'}, {'a'}, {'a'}, {'a'}, {'a'}, {'a'},
{'a'}, {'a'}, {'a'}, {'a'}, {'a'}, {'a'}, {'a'}, {'a'},
},
true,
true,
},
{
// When one data block is not available.
[][]byte{
nil, {'a'}, {'a'}, {'a'}, {'a'}, {'a'}, {'a'}, {'a'},
{'a'}, {'a'}, {'a'}, {'a'}, {'a'}, {'a'}, {'a'}, {'a'},
},
false,
true,
},
{
// When one data and all parity are available, enough for reedsolomon.Reconstruct()
[][]byte{
nil, nil, nil, nil, nil, nil, nil, {'a'},
{'a'}, {'a'}, {'a'}, {'a'}, {'a'}, {'a'}, {'a'}, {'a'},
},
false,
true,
},
{
// When none of the data blocks are available, parity alone is enough for reedsolomon.Reconstruct()
[][]byte{
nil, nil, nil, nil, nil, nil, nil, nil,
{'a'}, {'a'}, {'a'}, {'a'}, {'a'}, {'a'}, {'a'}, {'a'},
},
false,
true,
},
{
// Not enough disks for reedsolomon.Reconstruct()
[][]byte{
nil, nil, nil, nil, nil, nil, nil, nil,
nil, {'a'}, {'a'}, {'a'}, {'a'}, {'a'}, {'a'}, {'a'},
},
false,
false,
},
}
for i, test := range testCases {
got := isSuccessDataBlocks(test.enBlocks, dataBlocks)
if test.successData != got {
t.Errorf("test-case %d : expected %v got %v", i+1, test.successData, got)
}
got = isSuccessDecodeBlocks(test.enBlocks, dataBlocks)
if test.successDecode != got {
t.Errorf("test-case %d : expected %v got %v", i+1, test.successDecode, got)
}
}
}
// Wrapper function for testGetReadDisks, testShuffleDisks.
func TestErasureReadUtils(t *testing.T) {
nDisks := 16
disks, err := getRandomDisks(nDisks)
if err != nil {
t.Fatal(err)
}
objLayer, _, err := initObjectLayer(mustGetNewEndpointList(disks...))
if err != nil {
removeRoots(disks)
t.Fatal(err)
}
defer removeRoots(disks)
xl := objLayer.(*xlObjects)
testGetReadDisks(t, xl)
}
// Simulates a faulty disk for ReadFile()
type ReadDiskDown struct {
*posix
}
func (r ReadDiskDown) ReadFile(volume string, path string, offset int64, buf []byte) (n int64, err error) {
return 0, errFaultyDisk
}
func (r ReadDiskDown) ReadFileWithVerify(volume string, path string, offset int64, buf []byte,
algo HashAlgo, expectedHash string) (n int64, err error) {
return 0, errFaultyDisk
}
func (d badDisk) ReadFile(volume string, path string, offset int64, buf []byte, verifier *BitrotVerifier) (n int64, err error) {
return 0, errFaultyDisk
}
var erasureReadFileTests = []struct {
dataBlocks int
onDisks, offDisks int
blocksize, data int64
offset int64
length int64
algorithm BitrotAlgorithm
shouldFail, shouldFailQuorum bool
}{
{dataBlocks: 2, onDisks: 4, offDisks: 0, blocksize: int64(blockSizeV1), data: oneMiByte, offset: 0, length: oneMiByte, algorithm: BLAKE2b512, shouldFail: false, shouldFailQuorum: false}, // 0
{dataBlocks: 3, onDisks: 6, offDisks: 0, blocksize: int64(blockSizeV1), data: oneMiByte, offset: 0, length: oneMiByte, algorithm: SHA256, shouldFail: false, shouldFailQuorum: false}, // 1
{dataBlocks: 4, onDisks: 8, offDisks: 0, blocksize: int64(blockSizeV1), data: oneMiByte, offset: 0, length: oneMiByte, algorithm: DefaultBitrotAlgorithm, shouldFail: false, shouldFailQuorum: false}, // 2
{dataBlocks: 5, onDisks: 10, offDisks: 0, blocksize: int64(blockSizeV1), data: oneMiByte, offset: 1, length: oneMiByte - 1, algorithm: BLAKE2b512, shouldFail: false, shouldFailQuorum: false}, // 3
{dataBlocks: 6, onDisks: 12, offDisks: 0, blocksize: int64(oneMiByte), data: oneMiByte, offset: oneMiByte, length: 0, algorithm: BLAKE2b512, shouldFail: false, shouldFailQuorum: false}, // 4
{dataBlocks: 7, onDisks: 14, offDisks: 0, blocksize: int64(oneMiByte), data: oneMiByte, offset: 3, length: 1024, algorithm: DefaultBitrotAlgorithm, shouldFail: false, shouldFailQuorum: false}, // 5
{dataBlocks: 8, onDisks: 16, offDisks: 0, blocksize: int64(oneMiByte), data: oneMiByte, offset: 4, length: 8 * 1024, algorithm: DefaultBitrotAlgorithm, shouldFail: false, shouldFailQuorum: false}, // 6
{dataBlocks: 7, onDisks: 14, offDisks: 7, blocksize: int64(blockSizeV1), data: oneMiByte, offset: oneMiByte, length: 1, algorithm: DefaultBitrotAlgorithm, shouldFail: true, shouldFailQuorum: false}, // 7
{dataBlocks: 6, onDisks: 12, offDisks: 6, blocksize: int64(blockSizeV1), data: oneMiByte, offset: 0, length: oneMiByte, algorithm: DefaultBitrotAlgorithm, shouldFail: false, shouldFailQuorum: false}, // 8
{dataBlocks: 5, onDisks: 10, offDisks: 5, blocksize: int64(oneMiByte), data: oneMiByte, offset: 0, length: oneMiByte, algorithm: BLAKE2b512, shouldFail: false, shouldFailQuorum: false}, // 9
{dataBlocks: 4, onDisks: 8, offDisks: 4, blocksize: int64(blockSizeV1), data: oneMiByte, offset: 0, length: oneMiByte, algorithm: SHA256, shouldFail: false, shouldFailQuorum: false}, // 10
{dataBlocks: 3, onDisks: 6, offDisks: 3, blocksize: int64(oneMiByte), data: oneMiByte, offset: 0, length: oneMiByte, algorithm: DefaultBitrotAlgorithm, shouldFail: false, shouldFailQuorum: false}, // 11
{dataBlocks: 2, onDisks: 4, offDisks: 2, blocksize: int64(blockSizeV1), data: oneMiByte, offset: 0, length: oneMiByte, algorithm: DefaultBitrotAlgorithm, shouldFail: false, shouldFailQuorum: false}, // 12
{dataBlocks: 2, onDisks: 4, offDisks: 1, blocksize: int64(oneMiByte), data: oneMiByte, offset: 0, length: oneMiByte, algorithm: DefaultBitrotAlgorithm, shouldFail: false, shouldFailQuorum: false}, // 13
{dataBlocks: 3, onDisks: 6, offDisks: 2, blocksize: int64(oneMiByte), data: oneMiByte, offset: 0, length: oneMiByte, algorithm: DefaultBitrotAlgorithm, shouldFail: false, shouldFailQuorum: false}, // 14
{dataBlocks: 4, onDisks: 8, offDisks: 3, blocksize: int64(2 * oneMiByte), data: oneMiByte, offset: 0, length: oneMiByte, algorithm: DefaultBitrotAlgorithm, shouldFail: false, shouldFailQuorum: false}, // 15
{dataBlocks: 5, onDisks: 10, offDisks: 6, blocksize: int64(oneMiByte), data: oneMiByte, offset: 0, length: oneMiByte, algorithm: DefaultBitrotAlgorithm, shouldFail: false, shouldFailQuorum: true}, // 16
{dataBlocks: 5, onDisks: 10, offDisks: 2, blocksize: int64(blockSizeV1), data: 2 * oneMiByte, offset: oneMiByte, length: oneMiByte, algorithm: DefaultBitrotAlgorithm, shouldFail: false, shouldFailQuorum: false}, // 17
{dataBlocks: 5, onDisks: 10, offDisks: 1, blocksize: int64(blockSizeV1), data: oneMiByte, offset: 0, length: oneMiByte, algorithm: BLAKE2b512, shouldFail: false, shouldFailQuorum: false}, // 18
{dataBlocks: 6, onDisks: 12, offDisks: 3, blocksize: int64(blockSizeV1), data: oneMiByte, offset: 0, length: oneMiByte, algorithm: SHA256, shouldFail: false, shouldFailQuorum: false}, // 19
{dataBlocks: 6, onDisks: 12, offDisks: 7, blocksize: int64(blockSizeV1), data: oneMiByte, offset: 0, length: oneMiByte, algorithm: DefaultBitrotAlgorithm, shouldFail: false, shouldFailQuorum: true}, // 20
{dataBlocks: 8, onDisks: 16, offDisks: 8, blocksize: int64(blockSizeV1), data: oneMiByte, offset: 0, length: oneMiByte, algorithm: DefaultBitrotAlgorithm, shouldFail: false, shouldFailQuorum: false}, // 21
{dataBlocks: 8, onDisks: 16, offDisks: 9, blocksize: int64(oneMiByte), data: oneMiByte, offset: 0, length: oneMiByte, algorithm: DefaultBitrotAlgorithm, shouldFail: false, shouldFailQuorum: true}, // 22
{dataBlocks: 8, onDisks: 16, offDisks: 7, blocksize: int64(blockSizeV1), data: oneMiByte, offset: 0, length: oneMiByte, algorithm: DefaultBitrotAlgorithm, shouldFail: false, shouldFailQuorum: false}, // 23
{dataBlocks: 2, onDisks: 4, offDisks: 1, blocksize: int64(blockSizeV1), data: oneMiByte, offset: 0, length: oneMiByte, algorithm: DefaultBitrotAlgorithm, shouldFail: false, shouldFailQuorum: false}, // 24
{dataBlocks: 2, onDisks: 4, offDisks: 0, blocksize: int64(blockSizeV1), data: oneMiByte, offset: 0, length: oneMiByte, algorithm: DefaultBitrotAlgorithm, shouldFail: false, shouldFailQuorum: false}, // 25
{dataBlocks: 2, onDisks: 4, offDisks: 0, blocksize: int64(blockSizeV1), data: oneMiByte, offset: 0, length: oneMiByte, algorithm: 0, shouldFail: true, shouldFailQuorum: false}, // 26
{dataBlocks: 2, onDisks: 4, offDisks: 0, blocksize: int64(blockSizeV1), data: int64(blockSizeV1) + 1, offset: 0, length: int64(blockSizeV1) + 1, algorithm: BLAKE2b512, shouldFail: false, shouldFailQuorum: false}, // 27
{dataBlocks: 2, onDisks: 4, offDisks: 0, blocksize: int64(blockSizeV1), data: int64(2 * blockSizeV1), offset: 12, length: int64(blockSizeV1) + 17, algorithm: BLAKE2b512, shouldFail: false, shouldFailQuorum: false}, // 28
{dataBlocks: 3, onDisks: 6, offDisks: 0, blocksize: int64(blockSizeV1), data: int64(2 * blockSizeV1), offset: 1023, length: int64(blockSizeV1) + 1024, algorithm: DefaultBitrotAlgorithm, shouldFail: false, shouldFailQuorum: false}, // 29
{dataBlocks: 4, onDisks: 8, offDisks: 0, blocksize: int64(blockSizeV1), data: int64(2 * blockSizeV1), offset: 11, length: int64(blockSizeV1) + 2*1024, algorithm: DefaultBitrotAlgorithm, shouldFail: false, shouldFailQuorum: false}, // 30
{dataBlocks: 6, onDisks: 12, offDisks: 0, blocksize: int64(blockSizeV1), data: int64(2 * blockSizeV1), offset: 512, length: int64(blockSizeV1) + 8*1024, algorithm: DefaultBitrotAlgorithm, shouldFail: false, shouldFailQuorum: false}, // 31
{dataBlocks: 8, onDisks: 16, offDisks: 0, blocksize: int64(blockSizeV1), data: int64(2 * blockSizeV1), offset: int64(blockSizeV1), length: int64(blockSizeV1) - 1, algorithm: DefaultBitrotAlgorithm, shouldFail: false, shouldFailQuorum: false}, // 32
{dataBlocks: 2, onDisks: 4, offDisks: 0, blocksize: int64(blockSizeV1), data: int64(oneMiByte), offset: -1, length: 3, algorithm: DefaultBitrotAlgorithm, shouldFail: true, shouldFailQuorum: false}, // 33
{dataBlocks: 2, onDisks: 4, offDisks: 0, blocksize: int64(blockSizeV1), data: int64(oneMiByte), offset: 1024, length: -1, algorithm: DefaultBitrotAlgorithm, shouldFail: true, shouldFailQuorum: false}, // 34
{dataBlocks: 4, onDisks: 6, offDisks: 0, blocksize: int64(blockSizeV1), data: int64(blockSizeV1), offset: 0, length: int64(blockSizeV1), algorithm: BLAKE2b512, shouldFail: false, shouldFailQuorum: false}, // 35
{dataBlocks: 4, onDisks: 6, offDisks: 1, blocksize: int64(blockSizeV1), data: int64(2 * blockSizeV1), offset: 12, length: int64(blockSizeV1) + 17, algorithm: BLAKE2b512, shouldFail: false, shouldFailQuorum: false}, // 36
{dataBlocks: 4, onDisks: 6, offDisks: 3, blocksize: int64(blockSizeV1), data: int64(2 * blockSizeV1), offset: 1023, length: int64(blockSizeV1) + 1024, algorithm: DefaultBitrotAlgorithm, shouldFail: false, shouldFailQuorum: true}, // 37
{dataBlocks: 8, onDisks: 12, offDisks: 4, blocksize: int64(blockSizeV1), data: int64(2 * blockSizeV1), offset: 11, length: int64(blockSizeV1) + 2*1024, algorithm: DefaultBitrotAlgorithm, shouldFail: false, shouldFailQuorum: false}, // 38
}
func TestErasureReadFileDiskFail(t *testing.T) {
// Initialize environment needed for the test.
dataBlocks := 7
parityBlocks := 7
blockSize := int64(blockSizeV1)
setup, err := newErasureTestSetup(dataBlocks, parityBlocks, blockSize)
if err != nil {
t.Error(err)
return
}
defer setup.Remove()
disks := setup.disks
// Prepare a slice of 1 MiB with random data.
data := make([]byte, 1*humanize.MiByte)
length := int64(len(data))
_, err = rand.Read(data)
if err != nil {
t.Fatal(err)
}
// Create a test file to read from.
_, size, checkSums, err := erasureCreateFile(disks, "testbucket", "testobject", bytes.NewReader(data), true, blockSize, dataBlocks, parityBlocks, bitRotAlgo, dataBlocks+1)
if err != nil {
t.Fatal(err)
}
if size != length {
t.Errorf("erasureCreateFile returned %d, expected %d", size, length)
}
// create byte pool which will be used by erasureReadFile for
// reading from disks and erasure decoding.
chunkSize := getChunkSize(blockSize, dataBlocks)
pool := bpool.NewBytePool(chunkSize, len(disks))
buf := &bytes.Buffer{}
_, err = erasureReadFile(buf, disks, "testbucket", "testobject", 0, length, length, blockSize, dataBlocks, parityBlocks, checkSums, bitRotAlgo, pool)
if err != nil {
t.Error(err)
}
if !bytes.Equal(buf.Bytes(), data) {
t.Error("Contents of the erasure coded file differs")
}
// 2 disks down. Read should succeed.
disks[4] = ReadDiskDown{disks[4].(*posix)}
disks[5] = ReadDiskDown{disks[5].(*posix)}
buf.Reset()
_, err = erasureReadFile(buf, disks, "testbucket", "testobject", 0, length, length, blockSize, dataBlocks, parityBlocks, checkSums, bitRotAlgo, pool)
if err != nil {
t.Error(err)
}
if !bytes.Equal(buf.Bytes(), data) {
t.Error("Contents of the erasure coded file differs")
}
// 4 more disks down. 6 disks down in total. Read should succeed.
disks[6] = ReadDiskDown{disks[6].(*posix)}
disks[8] = ReadDiskDown{disks[8].(*posix)}
disks[9] = ReadDiskDown{disks[9].(*posix)}
disks[11] = ReadDiskDown{disks[11].(*posix)}
buf.Reset()
_, err = erasureReadFile(buf, disks, "testbucket", "testobject", 0, length, length, blockSize, dataBlocks, parityBlocks, checkSums, bitRotAlgo, pool)
if err != nil {
t.Error(err)
}
if !bytes.Equal(buf.Bytes(), data) {
t.Error("Contents of the erasure coded file differs")
}
// 2 more disks down. 8 disks down in total. Read should fail.
disks[12] = ReadDiskDown{disks[12].(*posix)}
disks[13] = ReadDiskDown{disks[13].(*posix)}
buf.Reset()
_, err = erasureReadFile(buf, disks, "testbucket", "testobject", 0, length, length, blockSize, dataBlocks, parityBlocks, checkSums, bitRotAlgo, pool)
if errorCause(err) != errXLReadQuorum {
t.Fatal("expected errXLReadQuorum error")
}
}
func TestErasureReadFileOffsetLength(t *testing.T) {
// Initialize environment needed for the test.
dataBlocks := 7
parityBlocks := 7
blockSize := int64(1 * humanize.MiByte)
setup, err := newErasureTestSetup(dataBlocks, parityBlocks, blockSize)
if err != nil {
t.Error(err)
return
}
defer setup.Remove()
disks := setup.disks
// Prepare a slice of 5 MiB with random data.
data := make([]byte, 5*humanize.MiByte)
length := int64(len(data))
_, err = rand.Read(data)
if err != nil {
t.Fatal(err)
}
// Create a test file to read from.
_, size, checkSums, err := erasureCreateFile(disks, "testbucket", "testobject", bytes.NewReader(data), true, blockSize, dataBlocks, parityBlocks, bitRotAlgo, dataBlocks+1)
if err != nil {
t.Fatal(err)
}
if size != length {
t.Errorf("erasureCreateFile returned %d, expected %d", size, length)
}
testCases := []struct {
offset, length int64
}{
// Full file.
{0, length},
// Read nothing.
{length, 0},
// 2nd block.
{blockSize, blockSize},
// Test cases for random offsets and lengths.
{blockSize - 1, 2},
{blockSize - 1, blockSize + 1},
{blockSize + 1, blockSize - 1},
{blockSize + 1, blockSize},
{blockSize + 1, blockSize + 1},
{blockSize*2 - 1, blockSize + 1},
{length - 1, 1},
{length - blockSize, blockSize},
{length - blockSize - 1, blockSize},
{length - blockSize - 1, blockSize + 1},
}
chunkSize := getChunkSize(blockSize, dataBlocks)
pool := bpool.NewBytePool(chunkSize, len(disks))
// Compare the data read from file with "data" byte array.
for i, testCase := range testCases {
expected := data[testCase.offset:(testCase.offset + testCase.length)]
buf := &bytes.Buffer{}
_, err = erasureReadFile(buf, disks, "testbucket", "testobject", testCase.offset, testCase.length, length, blockSize, dataBlocks, parityBlocks, checkSums, bitRotAlgo, pool)
if err != nil {
t.Error(err)
continue
}
got := buf.Bytes()
if !bytes.Equal(expected, got) {
t.Errorf("Test %d : read data is different from what was expected", i+1)
}
}
}
func TestErasureReadFile(t *testing.T) {
for i, test := range erasureReadFileTests {
setup, err := newErasureTestSetup(test.dataBlocks, test.onDisks-test.dataBlocks, test.blocksize)
if err != nil {
t.Fatalf("Test %d: failed to create test setup: %v", i, err)
}
storage, err := NewErasureStorage(setup.disks, test.dataBlocks, test.onDisks-test.dataBlocks, test.blocksize)
if err != nil {
setup.Remove()
t.Fatalf("Test %d: failed to create ErasureStorage: %v", i, err)
}
data := make([]byte, test.data)
if _, err = io.ReadFull(crand.Reader, data); err != nil {
setup.Remove()
t.Fatalf("Test %d: failed to generate random test data: %v", i, err)
}
writeAlgorithm := test.algorithm
if !test.algorithm.Available() {
writeAlgorithm = DefaultBitrotAlgorithm
}
buffer := make([]byte, test.blocksize, 2*test.blocksize)
file, err := storage.CreateFile(bytes.NewReader(data[:]), "testbucket", "object", buffer, writeAlgorithm, test.dataBlocks+1)
if err != nil {
setup.Remove()
t.Fatalf("Test %d: failed to create erasure test file: %v", i, err)
}
writer := bytes.NewBuffer(nil)
readInfo, err := storage.ReadFile(writer, "testbucket", "object", test.offset, test.length, test.data, file.Checksums, test.algorithm, test.blocksize)
if err != nil && !test.shouldFail {
t.Errorf("Test %d: should pass but failed with: %v", i, err)
}
if err == nil && test.shouldFail {
t.Errorf("Test %d: should fail but it passed", i)
}
if err == nil {
if readInfo.Size != test.length {
t.Errorf("Test %d: read returns wrong number of bytes: got: #%d want: #%d", i, readInfo.Size, test.length)
}
if readInfo.Algorithm != test.algorithm {
t.Errorf("Test %d: read returns wrong algorithm: got: %v want: %v", i, readInfo.Algorithm, test.algorithm)
}
if content := writer.Bytes(); !bytes.Equal(content, data[test.offset:test.offset+test.length]) {
t.Errorf("Test %d: read retruns wrong file content", i)
}
}
if err == nil && !test.shouldFail {
writer.Reset()
for j := range storage.disks[:test.offDisks] {
storage.disks[j] = badDisk{nil}
}
if test.offDisks > 0 {
storage.disks[0] = OfflineDisk
}
readInfo, err = storage.ReadFile(writer, "testbucket", "object", test.offset, test.length, test.data, file.Checksums, test.algorithm, test.blocksize)
if err != nil && !test.shouldFailQuorum {
t.Errorf("Test %d: should pass but failed with: %v", i, err)
}
if err == nil && test.shouldFailQuorum {
t.Errorf("Test %d: should fail but it passed", i)
}
if !test.shouldFailQuorum {
if readInfo.Size != test.length {
t.Errorf("Test %d: read returns wrong number of bytes: got: #%d want: #%d", i, readInfo.Size, test.length)
}
if readInfo.Algorithm != test.algorithm {
t.Errorf("Test %d: read returns wrong algorithm: got: %v want: %v", i, readInfo.Algorithm, test.algorithm)
}
if content := writer.Bytes(); !bytes.Equal(content, data[test.offset:test.offset+test.length]) {
t.Errorf("Test %d: read retruns wrong file content", i)
}
}
}
setup.Remove()
}
}
@@ -390,8 +174,10 @@ func TestErasureReadFileRandomOffsetLength(t *testing.T) {
}
defer setup.Remove()
disks := setup.disks
storage, err := NewErasureStorage(setup.disks, dataBlocks, parityBlocks, blockSize)
if err != nil {
t.Fatalf("failed to create ErasureStorage: %v", err)
}
// Prepare a slice of 5MiB with random data.
data := make([]byte, 5*humanize.MiByte)
length := int64(len(data))
@@ -404,22 +190,18 @@ func TestErasureReadFileRandomOffsetLength(t *testing.T) {
iterations := 10000
// Create a test file to read from.
_, size, checkSums, err := erasureCreateFile(disks, "testbucket", "testobject", bytes.NewReader(data), true, blockSize, dataBlocks, parityBlocks, bitRotAlgo, dataBlocks+1)
buffer := make([]byte, blockSize, 2*blockSize)
file, err := storage.CreateFile(bytes.NewReader(data), "testbucket", "testobject", buffer, DefaultBitrotAlgorithm, dataBlocks+1)
if err != nil {
t.Fatal(err)
}
if size != length {
t.Errorf("erasureCreateFile returned %d, expected %d", size, length)
if file.Size != length {
t.Errorf("erasureCreateFile returned %d, expected %d", file.Size, length)
}
// To generate random offset/length.
r := rand.New(rand.NewSource(UTCNow().UnixNano()))
// create pool buffer which will be used by erasureReadFile for
// reading from disks and erasure decoding.
chunkSize := getChunkSize(blockSize, dataBlocks)
pool := bpool.NewBytePool(chunkSize, len(disks))
buf := &bytes.Buffer{}
// Verify erasureReadFile() for random offsets and lengths.
@@ -429,7 +211,7 @@ func TestErasureReadFileRandomOffsetLength(t *testing.T) {
expected := data[offset : offset+readLen]
_, err = erasureReadFile(buf, disks, "testbucket", "testobject", offset, readLen, length, blockSize, dataBlocks, parityBlocks, checkSums, bitRotAlgo, pool)
_, err = storage.ReadFile(buf, "testbucket", "testobject", offset, readLen, length, file.Checksums, DefaultBitrotAlgorithm, blockSize)
if err != nil {
t.Fatal(err, offset, readLen)
}
@@ -440,3 +222,92 @@ func TestErasureReadFileRandomOffsetLength(t *testing.T) {
buf.Reset()
}
}
// Benchmarks
func benchmarkErasureRead(data, parity, dataDown, parityDown int, size int64, b *testing.B) {
setup, err := newErasureTestSetup(data, parity, blockSizeV1)
if err != nil {
b.Fatalf("failed to create test setup: %v", err)
}
defer setup.Remove()
storage, err := NewErasureStorage(setup.disks, data, parity, blockSizeV1)
if err != nil {
b.Fatalf("failed to create ErasureStorage: %v", err)
}
content := make([]byte, size)
buffer := make([]byte, blockSizeV1, 2*blockSizeV1)
file, err := storage.CreateFile(bytes.NewReader(content), "testbucket", "object", buffer, DefaultBitrotAlgorithm, data+1)
if err != nil {
b.Fatalf("failed to create erasure test file: %v", err)
}
checksums := file.Checksums
for i := 0; i < dataDown; i++ {
storage.disks[i] = OfflineDisk
}
for i := data; i < data+parityDown; i++ {
storage.disks[i] = OfflineDisk
}
b.ResetTimer()
b.SetBytes(size)
b.ReportAllocs()
for i := 0; i < b.N; i++ {
if file, err = storage.ReadFile(bytes.NewBuffer(content[:0]), "testbucket", "object", 0, size, size, checksums, DefaultBitrotAlgorithm, blockSizeV1); err != nil {
panic(err)
}
}
}
func BenchmarkErasureReadQuick(b *testing.B) {
const size = 12 * 1024 * 1024
b.Run(" 00|00 ", func(b *testing.B) { benchmarkErasureRead(2, 2, 0, 0, size, b) })
b.Run(" 00|X0 ", func(b *testing.B) { benchmarkErasureRead(2, 2, 0, 1, size, b) })
b.Run(" X0|00 ", func(b *testing.B) { benchmarkErasureRead(2, 2, 1, 0, size, b) })
b.Run(" X0|X0 ", func(b *testing.B) { benchmarkErasureRead(2, 2, 1, 1, size, b) })
}
func BenchmarkErasureRead_4_64KB(b *testing.B) {
const size = 64 * 1024
b.Run(" 00|00 ", func(b *testing.B) { benchmarkErasureRead(2, 2, 0, 0, size, b) })
b.Run(" 00|X0 ", func(b *testing.B) { benchmarkErasureRead(2, 2, 0, 1, size, b) })
b.Run(" X0|00 ", func(b *testing.B) { benchmarkErasureRead(2, 2, 1, 0, size, b) })
b.Run(" X0|X0 ", func(b *testing.B) { benchmarkErasureRead(2, 2, 1, 1, size, b) })
b.Run(" 00|XX ", func(b *testing.B) { benchmarkErasureRead(2, 2, 0, 2, size, b) })
b.Run(" XX|00 ", func(b *testing.B) { benchmarkErasureRead(2, 2, 2, 0, size, b) })
}
func BenchmarkErasureRead_8_20MB(b *testing.B) {
const size = 20 * 1024 * 1024
b.Run(" 0000|0000 ", func(b *testing.B) { benchmarkErasureRead(4, 4, 0, 0, size, b) })
b.Run(" 0000|X000 ", func(b *testing.B) { benchmarkErasureRead(4, 4, 0, 1, size, b) })
b.Run(" X000|0000 ", func(b *testing.B) { benchmarkErasureRead(4, 4, 1, 0, size, b) })
b.Run(" X000|X000 ", func(b *testing.B) { benchmarkErasureRead(4, 4, 1, 1, size, b) })
b.Run(" 0000|XXXX ", func(b *testing.B) { benchmarkErasureRead(4, 4, 0, 4, size, b) })
b.Run(" XX00|XX00 ", func(b *testing.B) { benchmarkErasureRead(4, 4, 2, 2, size, b) })
b.Run(" XXXX|0000 ", func(b *testing.B) { benchmarkErasureRead(4, 4, 4, 0, size, b) })
}
func BenchmarkErasureRead_12_30MB(b *testing.B) {
const size = 30 * 1024 * 1024
b.Run(" 000000|000000 ", func(b *testing.B) { benchmarkErasureRead(6, 6, 0, 0, size, b) })
b.Run(" 000000|X00000 ", func(b *testing.B) { benchmarkErasureRead(6, 6, 0, 1, size, b) })
b.Run(" X00000|000000 ", func(b *testing.B) { benchmarkErasureRead(6, 6, 1, 0, size, b) })
b.Run(" X00000|X00000 ", func(b *testing.B) { benchmarkErasureRead(6, 6, 1, 1, size, b) })
b.Run(" 000000|XXXXXX ", func(b *testing.B) { benchmarkErasureRead(6, 6, 0, 6, size, b) })
b.Run(" XXX000|XXX000 ", func(b *testing.B) { benchmarkErasureRead(6, 6, 3, 3, size, b) })
b.Run(" XXXXXX|000000 ", func(b *testing.B) { benchmarkErasureRead(6, 6, 6, 0, size, b) })
}
func BenchmarkErasureRead_16_40MB(b *testing.B) {
const size = 40 * 1024 * 1024
b.Run(" 00000000|00000000 ", func(b *testing.B) { benchmarkErasureRead(8, 8, 0, 0, size, b) })
b.Run(" 00000000|X0000000 ", func(b *testing.B) { benchmarkErasureRead(8, 8, 0, 1, size, b) })
b.Run(" X0000000|00000000 ", func(b *testing.B) { benchmarkErasureRead(8, 8, 1, 0, size, b) })
b.Run(" X0000000|X0000000 ", func(b *testing.B) { benchmarkErasureRead(8, 8, 1, 1, size, b) })
b.Run(" 00000000|XXXXXXXX ", func(b *testing.B) { benchmarkErasureRead(8, 8, 0, 8, size, b) })
b.Run(" XXXX0000|XXXX0000 ", func(b *testing.B) { benchmarkErasureRead(8, 8, 4, 4, size, b) })
b.Run(" XXXXXXXX|00000000 ", func(b *testing.B) { benchmarkErasureRead(8, 8, 8, 0, size, b) })
}


@@ -18,72 +18,12 @@ package cmd
import (
"bytes"
"errors"
"hash"
"io"
"sync"
"github.com/klauspost/reedsolomon"
"github.com/minio/sha256-simd"
"golang.org/x/crypto/blake2b"
"github.com/minio/minio/pkg/errors"
)
// newHashWriters - initializes a slice of hashes for the disk count.
func newHashWriters(diskCount int, algo HashAlgo) []hash.Hash {
hashWriters := make([]hash.Hash, diskCount)
for index := range hashWriters {
hashWriters[index] = newHash(algo)
}
return hashWriters
}
// newHash - gives you a newly allocated hash depending on the input algorithm.
func newHash(algo HashAlgo) (h hash.Hash) {
switch algo {
case HashSha256:
// sha256 checksum, used especially on ARM64 platforms or whenever
// requested as dictated by the `xl.json` entry.
h = sha256.New()
case HashBlake2b:
// Ignore the error: New512 without a key never fails. New512 only
// returns a non-nil error if the length of the passed key exceeds
// 64 bytes - but we use blake2b as a plain hash function (no key).
h, _ = blake2b.New512(nil)
// Add new hashes here.
default:
// Default to blake2b.
// Ignore the error: New512 without a key never fails. New512 only
// returns a non-nil error if the length of the passed key exceeds
// 64 bytes - but we use blake2b as a plain hash function (no key).
h, _ = blake2b.New512(nil)
}
return h
}
// Hash buffer pool is a pool of reusable
// buffers used while checksumming a stream.
var hashBufferPool = sync.Pool{
New: func() interface{} {
b := make([]byte, readSizeV1)
return &b
},
}
// hashSum calculates the hash of the entire file at the given path and returns it.
func hashSum(disk StorageAPI, volume, path string, writer hash.Hash) ([]byte, error) {
// Fetch a new staging buffer from the pool.
bufp := hashBufferPool.Get().(*[]byte)
defer hashBufferPool.Put(bufp)
// Copy the entire file into the hash writer.
if err := copyBuffer(writer, disk, volume, path, *bufp); err != nil {
return nil, err
}
// Return the final hash sum.
return writer.Sum(nil), nil
}
// getDataBlockLen - get length of data blocks from encoded blocks.
func getDataBlockLen(enBlocks [][]byte, dataBlocks int) int {
size := 0
@@ -99,17 +39,17 @@ func getDataBlockLen(enBlocks [][]byte, dataBlocks int) int {
func writeDataBlocks(dst io.Writer, enBlocks [][]byte, dataBlocks int, offset int64, length int64) (int64, error) {
// Offset and out size cannot be negative.
if offset < 0 || length < 0 {
return 0, traceError(errUnexpected)
return 0, errors.Trace(errUnexpected)
}
// Do we have enough blocks?
if len(enBlocks) < dataBlocks {
return 0, traceError(reedsolomon.ErrTooFewShards)
return 0, errors.Trace(reedsolomon.ErrTooFewShards)
}
// Do we have enough data?
if int64(getDataBlockLen(enBlocks, dataBlocks)) < length {
return 0, traceError(reedsolomon.ErrShortData)
return 0, errors.Trace(reedsolomon.ErrShortData)
}
// Counter to decrement total left to write.
@@ -137,7 +77,7 @@ func writeDataBlocks(dst io.Writer, enBlocks [][]byte, dataBlocks int, offset in
if write < int64(len(block)) {
n, err := io.Copy(dst, bytes.NewReader(block[:write]))
if err != nil {
return 0, traceError(err)
return 0, errors.Trace(err)
}
totalWritten += n
break
@@ -145,7 +85,7 @@ func writeDataBlocks(dst io.Writer, enBlocks [][]byte, dataBlocks int, offset in
// Copy the block.
n, err := io.Copy(dst, bytes.NewReader(block))
if err != nil {
return 0, traceError(err)
return 0, errors.Trace(err)
}
// Decrement output size.
@@ -158,65 +98,3 @@ func writeDataBlocks(dst io.Writer, enBlocks [][]byte, dataBlocks int, offset in
// Success.
return totalWritten, nil
}
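writeDataBlocks treats the data shards as one logical byte stream and copies a window of it. The following standalone sketch is a simplified reimplementation for illustration (not the streaming code above) showing the offset/length windowing over concatenated blocks:

package main

import (
	"bytes"
	"fmt"
)

// window copies length bytes starting at offset from the logical stream
// formed by concatenating the first dataBlocks shards. Simplified sketch
// of the windowing that writeDataBlocks performs while streaming.
func window(enBlocks [][]byte, dataBlocks int, offset, length int64) []byte {
	joined := bytes.Join(enBlocks[:dataBlocks], nil)
	return joined[offset : offset+length]
}

func main() {
	shards := [][]byte{[]byte("abcd"), []byte("efgh"), []byte("par1"), []byte("par2")}
	// Read 5 bytes starting at logical offset 2 across the 2 data shards.
	fmt.Printf("%s\n", window(shards, 2, 2, 5)) // cdefg
}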
// chunkSize is roughly BlockSize/DataBlocks.
// chunkSize is calculated such that chunkSize*DataBlocks accommodates BlockSize bytes.
// So chunkSize*DataBlocks can be slightly larger than BlockSize if BlockSize is not divisible by
// DataBlocks. The extra space will have 0-padding.
func getChunkSize(blockSize int64, dataBlocks int) int64 {
return (blockSize + int64(dataBlocks) - 1) / int64(dataBlocks)
}
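The zero-padding described above is easiest to see with concrete numbers. A standalone check (values illustrative): a 10 MiB block spread over 3 data disks needs chunks of ceil(10485760/3) bytes, so the three chunks together carry 2 bytes of padding:

package main

import "fmt"

func main() {
	// Values are illustrative: a 10 MiB block over 3 data disks.
	blockSize, dataBlocks := int64(10*1024*1024), int64(3)
	chunk := (blockSize + dataBlocks - 1) / dataBlocks // same formula as getChunkSize
	fmt.Println(chunk)                        // 3495254
	fmt.Println(chunk*dataBlocks - blockSize) // 2 bytes of zero padding
}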
// copyBuffer - copies from disk, volume, path to the given writer until either
// EOF is reached at volume, path or an error occurs. A successful copyBuffer
// returns err == nil, not err == io.EOF: copyBuffer is defined to read from
// path until EOF, so it does not treat an EOF from ReadFile as an error to be
// reported. Additionally, copyBuffer stages through the provided buffer; if
// the buffer has zero length, an error is returned.
func copyBuffer(writer io.Writer, disk StorageAPI, volume string, path string, buf []byte) error {
// Error condition of zero length buffer.
if buf != nil && len(buf) == 0 {
return errors.New("empty buffer in readBuffer")
}
// Starting offset for Reading the file.
var startOffset int64
// Read until io.EOF.
for {
n, err := disk.ReadFile(volume, path, startOffset, buf)
if n > 0 {
m, wErr := writer.Write(buf[:n])
if wErr != nil {
return wErr
}
if int64(m) != n {
return io.ErrShortWrite
}
}
if err != nil {
if err == io.EOF || err == io.ErrUnexpectedEOF {
break
}
return err
}
// Progress the offset.
startOffset += n
}
// Success.
return nil
}
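The read-until-EOF loop above can be exercised over any io.Reader. A self-contained analogue (illustrative, not the StorageAPI-based function above) showing the same staging and EOF handling:

package main

import (
	"bytes"
	"fmt"
	"io"
	"os"
)

// copyStaged mirrors copyBuffer's loop over a plain io.Reader: read until
// EOF, write every chunk fully, and treat EOF as success rather than error.
func copyStaged(w io.Writer, r io.Reader, buf []byte) error {
	for {
		n, err := r.Read(buf)
		if n > 0 {
			m, wErr := w.Write(buf[:n])
			if wErr != nil {
				return wErr
			}
			if m != n {
				return io.ErrShortWrite
			}
		}
		if err != nil {
			if err == io.EOF || err == io.ErrUnexpectedEOF {
				return nil // EOF terminates the copy successfully
			}
			return err
		}
	}
}

func main() {
	src := bytes.NewReader([]byte("staged copy example"))
	if err := copyStaged(os.Stdout, src, make([]byte, 8)); err != nil {
		fmt.Println("copy failed:", err)
	}
}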
// bitRotVerifier - type representing the bit-rot verification process for
// a single underlying object (currently whole files)
type bitRotVerifier struct {
// has the bit-rot verification been done?
isVerified bool
// is the data free of bit-rot?
hasBitRot bool
// hashing algorithm
algo HashAlgo
// hex-encoded expected raw-hash value
checkSum string
}
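The two booleans above form a verify-once cache, which parallelRead consults before deciding between ReadFile and ReadFileWithVerify. A hypothetical helper (illustration only, not part of this diff) states the contract:

// usable reports whether a disk's data may be served: either verification
// has not happened yet (the read path will perform it now), or the data has
// already been verified free of bit-rot. Hypothetical helper for illustration.
func usable(v bitRotVerifier) bool {
	return !v.isVerified || !v.hasBitRot
}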

Some files were not shown because too many files have changed in this diff Show More