Compare commits


500 Commits

Author SHA1 Message Date
Harshavardhana
cfc5681b36 fix: policy_test to use minio-go/v7 2020-07-14 10:30:00 -07:00
Harshavardhana
369a876ebe fix: handle array policies in JWT claim (#10041)
PR #10014 was not complete as only handled
policy claims partially.
2020-07-14 10:26:47 -07:00
Anis Elleuch
778e9c864f Move dependency from minio-go v6 to v7 (#10042) 2020-07-14 09:38:05 -07:00
Harshavardhana
a2616b8227 allow turning off secure ciphers (#10038)
this PR allows legacy support for big-data
applications which run older Java versions
that do not support the secure ciphers
currently defaulted in MinIO. This option
allows optionally turning them off so
that client and server can negotiate the
best ciphers themselves.

This env is purposefully not documented,
meant as a last resort when client
application cannot be changed easily.
2020-07-13 14:20:21 -07:00
Minio Trusted
8d8f28eae4 Update yaml files to latest version RELEASE.2020-07-13T18-09-56Z 2020-07-13 18:29:35 +00:00
Harshavardhana
e7d7d5232c fix: admin info output and improve overall performance (#10015)
- admin info node offline check is now quicker
- admin info now doesn't duplicate the code
  across doing the same checks for disks
- rely on StorageInfo to return appropriate errors
  instead of calling locally.
- diskID checks now return proper errors for
  disk not found vs. format.json missing.
- add more disk states for more clarity on the
  underlying disk errors.
2020-07-13 09:51:07 -07:00
Harshavardhana
1d65ef3201 fix: deletes on older format properly (#10029)
while we handle all situations for writes and reads
on the older format, what we didn't cater for properly
yet was delete, where we only ended up deleting
just `xl.meta` - instead we should allow all the
deletes to go through for older-format buckets
without versioning enabled.
2020-07-13 09:01:17 -07:00
Andreas Auernhammer
8d425e3372 fix etcd module dependency (#10032)
This commit fixes a dependency resolution problem w.r.t. minio => etcd.
Building any project depending on minio (e.g. mc) currently fails with
latest master since the module replace directive is not honored for
transitive / indirect dependencies.

This commit fixes this by adding the etcd module directly instead of
using a module replace instruction.
2020-07-13 07:46:34 -07:00
Minio Trusted
3939c6f6e7 Update yaml files to latest version RELEASE.2020-07-12T19-14-17Z 2020-07-12 19:48:42 +00:00
Harshavardhana
60d91234b9 fix: avoid broken link when preview image (#10021)
closes #9589
2020-07-12 12:09:22 -07:00
Harshavardhana
37c14207d6 fix: cors handling again for not just OPTIONS request (#10025)
CORS notoriously requires specific headers to be
handled appropriately in the request and response;
using the cors package as part of handlerFunc() for
the OPTIONS method lacks the necessary control this
package needs to add headers.
2020-07-12 10:56:57 -07:00
Harshavardhana
3b9fbf80ad fix: make sure to use new restClient for healthcheck (#10026)
Without instantiating a new rest client we can
run into a recursive error which can lead to the
healthcheck always returning offline; this can
prematurely take the servers offline.
2020-07-11 22:19:38 -07:00
Minio Trusted
c2fdf73491 Update yaml files to latest version RELEASE.2020-07-11T21-14-23Z 2020-07-11 21:30:53 +00:00
Harshavardhana
143f9371c6 fix: loading users regression
additionally also move to latest gorilla/mux master
to fix the DNS style bucket routing regression

resolves #10022
resolves #10023
2020-07-11 14:03:27 -07:00
Harshavardhana
3f1902face fix: cors should be available on all paths (#10020) 2020-07-11 13:49:24 -07:00
Harshavardhana
c0adb52213 sync to disk only upon successful legacy metadata rename (#10018) 2020-07-11 09:37:34 -07:00
Harshavardhana
3520e946a2 fix: versioning docs add more examples 2020-07-11 00:57:46 -07:00
Harshavardhana
f38adc1865 cleanup security overview guide 2020-07-11 00:34:56 -07:00
Harshavardhana
d5ff1c8e3b fix docs image urls to be absolute path 2020-07-11 00:27:30 -07:00
Minio Trusted
ad7417bc50 Update yaml files to latest version RELEASE.2020-07-11T06-07-16Z 2020-07-11 06:28:08 +00:00
Harshavardhana
2d17c16d93 fix: make sure to honor versioning from browser UI deletes (#10016) 2020-07-10 22:21:04 -07:00
Harshavardhana
aded0bc81a Update dockerfiles for the release 2020-07-10 18:43:35 -07:00
Harshavardhana
36d36fab0b fix: add virtual host style workaround for gorilla/mux issue (#10010)
gorilla/mux broke their recent release 1.7.4 which we
upgraded to; we need the current workaround to ensure
that our regex matches appropriately.

An upstream PR has been sent; we should remove the
workaround once we have a new release.
2020-07-10 15:21:32 -07:00
Harshavardhana
ba756cf366 fix: extract array type for policy claim if present (#10014) 2020-07-10 14:48:44 -07:00
Benjamin Sodenkamp
c00d410e61 Added bucket name param to ToJSONError call (#9961)
when called with InvalidBucketName error.
The user is shown a more specific error
when the param is present.
2020-07-10 12:10:39 -07:00
Klaus Post
968342c732 Remove usage of go-ieproxy for windows (#10009)
There is a potential for deadlock on Windows 10,
refer to https://github.com/mattn/go-ieproxy/issues/17

Remove this dependency for now.
2020-07-10 12:08:14 -07:00
Harshavardhana
5c15656c55 support bootstrap client to use healthcheck restClient (#10004)
- reduce the locker timeout for the early transaction lock,
  for more eagerness to time out
- reduce the leader lock timeout to range from 30sec to 1min
- add additional log message during bootstrap phase
2020-07-10 09:26:21 -07:00
Harshavardhana
2e8fc6ebfe cleanup STS docs (#10003) 2020-07-10 09:07:12 -07:00
kannappanr
efe9fe6124 azure: Return success when deleting non-existent object (#9981) 2020-07-10 08:30:23 -07:00
Nitish Tiwari
30c251efd3 Add Grafana dashboard (#10000) 2020-07-09 12:01:58 -07:00
Klaus Post
c850905e43 fix: threadwalk lockup under high load (#9992)
Main issue is that `t.pool[params]` should be `t.pool[oldest]`.

We add a bit more safety features for the code.

* Make writes to the endTimerCh non-blocking in all cases
   so multiple releases cannot lock up.
* Double check expectations.
* Shift down deletes with copy instead of truncating slice.
* Actually delete the oldest if we are above total limit.
* Actually delete the oldest found and not the current.
* Unexport the mutex so nobody from the outside can meddle with it.
2020-07-09 07:02:18 -07:00
Andreas Auernhammer
a317a2531c admin: new API for creating KMS master keys (#9982)
This commit adds a new admin API for creating master keys.
An admin client can send a POST request to:
```
/minio/admin/v3/kms/key/create?key-id=<keyID>
```

The name / ID of the new key is specified as request
query parameter `key-id=<ID>`.

Creating new master keys requires KES - it does not work with
the native Vault KMS (deprecated) nor with a static master key
(deprecated).

Further, this commit removes the `UpdateKey` method from the `KMS`
interface. This method is not needed and not used anymore.
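
As a usage sketch (the `mc` alias `myminio` and the exact `mc` subcommand
shape are assumptions, not part of this commit; the admin API path above
is the authoritative interface):
```
# hypothetical invocation of the new key-creation admin API via mc
mc admin kms key create myminio my-minio-key
```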
2020-07-08 18:50:43 -07:00
Ravind Kumar
ee20ebe07a Remove dead link related to DC/OS Deployment Guide (#9996) 2020-07-08 17:38:20 -07:00
Harshavardhana
2743d4ca87 fix: Add support for preserving mtime for replication (#9995)
This PR is needed for bucket replication support
2020-07-08 17:36:56 -07:00
Harshavardhana
6136a963c8 fix: bump the response header timeout for forwarder as well (#9994)
continuation of #9986, add more places where the lower timeout
comes into effect.
2020-07-08 10:55:24 -07:00
Harshavardhana
60417950c7 fix: the versioning/object lock documentation appropriately (#9988)
- Move the bucket level features into `docs/bucket` directory
- fix issue template and simplify some of them
2020-07-08 08:44:43 -07:00
Anis Elleuch
fa211f6a10 heal: Fix healing delete markers (#9989) 2020-07-07 20:54:09 -07:00
Harshavardhana
72e0745e2f fix: migrate to go.etcd.io import path (#9987)
with the merge of https://github.com/etcd-io/etcd/pull/11823
etcd v3.5.0 will now have a properly imported versioned path

this fixes our pending migration to newer repo
2020-07-07 19:04:29 -07:00
Klaus Post
aa4d1021eb Remove timeout from putobject and listobjects (#9986)
Use a separate client for these calls that can take a long time.

Add request context to these so they are canceled when the client
disconnects, except for ListObjects which doesn't have any equivalent.
2020-07-07 12:19:57 -07:00
Harshavardhana
93e7e4a0e5 fix: cors handling after gorilla mux update (#9980)
fixes #9979
2020-07-06 20:55:19 -07:00
Anis Elleuch
c2f7cd1104 Consider errFileVersionNotFound during healing assessment (#9977)
Healing an object which has multiple versions was not working because
the healing code forgot to consider the errFileVersionNotFound error as a
case that needs healing.
2020-07-06 08:09:48 -07:00
Harshavardhana
38eef5ce4c fix: documentation fixes for docker ENV settings (#9975)
- update CREDITS file
- fix markdown links
- talk a bit more about upgrades
2020-07-06 06:42:34 -07:00
Anis Elleuch
4cf80f96ad fix: lifecycle XML parsing errors with Versioning (#9974) 2020-07-05 09:08:42 -07:00
Anis Elleuch
d4af132fc4 lifecycle: Expiry should not delete versions (#9972)
Currently, lifecycle expiry is deleting all object versions which is not
correct, unless noncurrent versions field is specified.

Also, only delete the delete marker if it is the only version of the
given object.
2020-07-04 20:56:02 -07:00
Harshavardhana
c087a05b43 fix: simplify data structure before release (#9968)
- additionally upgrade to msgp@v1.1.2
- change StatModTime, StatSize fields to
  simple Size/ModTime
- reduce 50000 entries per List batch to 10000,
  as the client sometimes needs to wait too long
  to see the first batch, which is not desired;
  it is worth writing the data as soon as
  we have it.
2020-07-04 12:25:53 -07:00
Harshavardhana
cdb0e6ffed support proper values for listMultipartUploads/listParts (#9970)
When object KMS is configured with auto-encryption,
there were issues when using docker registry -
these had gone unnoticed for a while.

This PR fixes an issue with compatibility.

Additionally also fix the continuation-token implementation
infinite loop issue which was missed as part of #9939

Also fix the heal token to be generated as a client
facing value instead of what is remembered by the
server; this allows the server to be stateless
regarding the token's behavior.
2020-07-03 19:27:13 -07:00
Harshavardhana
03b84091fc auto enable versioning with object locking (#9967)
this is to preserve versioning for object-locked
buckets from current release code.
2020-07-03 15:30:06 -07:00
Anis Elleuch
2be20588bf Reroute requests based token heal/listing (#9939)
When manual healing is triggered, one node in a cluster will
become the authority to heal. mc regularly sends new requests
to fetch the status of the ongoing healing process, but a load
balancer could land the status request on a node that is not
doing the healing.

This PR will redirect a request to the right node based on the
node index described as part of the client token. A similar
technique is also used to proxy ListObjectsV2 requests
by encoding this information in the continuation-token
2020-07-03 11:53:03 -07:00
Harshavardhana
e59ee14f40 Tune tcp keep-alives with new kernel timeout options (#9963)
For a deeper understanding see https://blog.cloudflare.com/when-tcp-sockets-refuse-to-die/
2020-07-03 10:03:41 -07:00
Anis Elleuch
21a37e3393 fix: ListObjectVersions should return ordered Version & DeleteMarker (#9959)
The S3 specification says that versions are ordered in the response of
list object versions.

mc snapshot needs this to know which version comes first especially when
two versions have the same exact last-modified field.
2020-07-03 09:15:44 -07:00
Harshavardhana
810a4f0723 fix: return proper errors Get/HeadObject for deleteMarkers (#9957) 2020-07-02 16:17:27 -07:00
Krishna Srinivas
4c266df863 fix: proxy ListObjects request to one of the server based on hash(bucket) (#9881) 2020-07-02 10:56:22 -07:00
Klaus Post
abd999f64a fix: list object versions in distributed setup (#9958)
Remove calls to `WalkVersions`, which was calling the wrong endpoint,
so unless quorum could be reached with local disks no results
would ever be returned.
2020-07-02 10:29:50 -07:00
Minio Trusted
04de19c870 Update yaml files to latest version RELEASE.2020-07-02T00-15-09Z 2020-07-01 17:36:52 -07:00
Benjamin Sodenkamp
648cb13e02 Added 'close' to results channel in Walk() (#9956) 2020-07-01 14:29:45 -07:00
Harshavardhana
174f428571 add additional fdatasync before close() on writes (#9947) 2020-07-01 10:57:23 -07:00
Harshavardhana
5388ae4acb make sure to delete data-usage cache upon bucket deletes (#9952) 2020-07-01 10:55:28 -07:00
Klaus Post
2e338e84cb fix hanging crashes in parquet S3 select (#9921) 2020-07-01 08:15:41 -07:00
kannappanr
5089a7167d Handle empty retention in get/put object retention (#9948)
Fixes #9943
2020-06-30 16:44:24 -07:00
Harshavardhana
c0ac25bfff fix: readiness needs to be like liveness (#9941)
Readiness has no reason to be cluster scoped
because that is not how k8s networking works
for pods; the pods of a deployment do not
share the network as a singleton. Instead they
run as local scopes to themselves; with
readiness failures the pod is potentially taken
out of the network to be resolvable - this
affects the distributed setup in myriad
different ways.

Instead readiness should behave like liveness,
with local scope alone, and should be a dummy
implementation.

With this PR all the startup times and overall k8s
startup time improve dramatically.

Added another handler called `/minio/health/cluster`
to understand the cluster-scope health.
2020-06-30 11:28:27 -07:00
Klaus Post
27a1f3ed2b fs: Check if cache root was added (#9945)
Fixes #9942
2020-06-30 09:32:36 -07:00
Anis Elleuch
90f36c1389 Update documentation for Accounting API (#9909) 2020-06-30 08:34:08 -07:00
Harshavardhana
91817d0d1a fix: implement generic Walk for gateway (#9938)
Walk() functionality was missing on gateway
implementations, leading to missing functionality
for the browser UI such as removing multiple objects,
download as zip file, etc.

This PR brings a generic implementation across
all gateways; it is not required to repeat the
same code in every gateway.
2020-06-29 17:07:23 -07:00
poornas
55a3b071ea Allow optionally to disable range caching. (#9908)
The default behavior is to cache each range requested
to cache drive. Add an environment variable
`MINIO_RANGE_CACHE` - when set to off, it disables
range caching and instead downloads entire object
in the background.

Fixes #9870
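
A minimal sketch of turning the feature off (only the env name and the
`off` value come from this change; everything else runs as usual):
```
# disable range caching; entire objects are downloaded in the background instead
export MINIO_RANGE_CACHE=off
```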
2020-06-29 13:25:29 -07:00
Harshavardhana
a38ce29137 fix: simplify background heal and trigger heal items early (#9928)
Bonus fix: during the versioning merge one of the PRs was missing
the offline/online disk count fix from #9801; port it correctly
over to the master branch from release.

Additionally, add versionID support for MRF

Fixes #9910
Fixes #9931
2020-06-29 13:07:26 -07:00
Anis Elleuch
7cea3f7da4 madmin: Strip 80/443 from the endpoint when http/https (#9937)
Users having endpoints with the format http://url:80 or http://url:443
will face a signature mismatch error.

The reason is that the S3 spec ignores the :80 or :443 port in the endpoint
when calculating the signature, so this PR will just strip them.
2020-06-29 12:31:07 -07:00
Harshavardhana
dcffd87e08 upgrade docker images to alpine 3.12 (#9934) 2020-06-29 09:36:29 +05:30
Harshavardhana
3b813148b3 update gorilla deps for query parsing performance improvements (#9929) 2020-06-28 11:37:56 -07:00
Harshavardhana
4bba2cd034 fix: disallow versioning to be suspended with object lock (#9930) 2020-06-28 08:15:15 -07:00
Harshavardhana
28a1a17187 docs update versioning images (#9925) 2020-06-27 10:27:23 -07:00
Eco
a5e879fbe4 doc: typo fix in versioning README (#9926)
requets -> requests
2020-06-27 05:06:31 -07:00
Harshavardhana
f7f12b8604 fix: crash in storage rest client due to spurious query params (#9924)
regression got introduced in dee3cf2d7f
when the DeleteVersion API was changed, but the corresponding query
params were left intact.
2020-06-26 16:49:49 -07:00
Praveen raj Mani
cf5d051afc update notification rulesMap when reloading bucketMetadata (#9917) 2020-06-26 13:17:31 -07:00
Harshavardhana
2f681bed57 fix: pop entries from each drives in parallel (#9918) 2020-06-25 23:20:12 -07:00
Klaus Post
2d0f65a5e3 Add archived parquet as int. package (#9912)
Since github.com/minio/parquet-go is archived, add it as an internal package.
2020-06-25 07:31:16 -07:00
Praveen raj Mani
b1705599e1 Fix config leaks and deprecate file-based config setters in NAS gateway (#9884)
This PR has the following changes

- Removing duplicate lookupConfigs() calls.
- Deprecate admin config APIs for NAS gateways. This will avoid repeated reloads of the config from the disk.
- WatchConfigNASDisk will be removed
- Migration guide for NAS gateway users to migrate to ENV settings.

NOTE: THIS PR HAS A BREAKING CHANGE

Fixes #9875

Co-authored-by: Harshavardhana <harsha@minio.io>
2020-06-25 15:59:28 +05:30
Ivan Martinez-Ortiz
969b2d2110 Updates the usage documentation of OpenID custom scopes (#9902) 2020-06-24 07:49:09 -07:00
Harshavardhana
f4b2ed2a92 fix: filter list buckets operation with ListObjects perm (#9907)
fix regression introduced in #9305
2020-06-23 23:21:11 -07:00
Harshavardhana
dee3cf2d7f fix: preserve modTime for DeleteMarker on remote disks (#9905) 2020-06-23 10:20:31 -07:00
kannappanr
b460b5967f fix IAM policy action name for Get/PutBucketLifeCycle (#9893) 2020-06-23 10:18:32 -07:00
Harshavardhana
21058c34d0 add some description of xl.meta (#9901) 2020-06-22 17:27:54 -07:00
Harshavardhana
5b1e6c7dbc Add check for object statTime non-negative (#9899) 2020-06-22 14:33:58 -07:00
Klaus Post
691dc04fac Add neater metadata printer (#9898)
* Just read files from args (more than 1 now supported)
* Pretty print by default. `-ndjson` will disable.
* Check header.
* Support stdin as '-'
* Don't just ignore errors.
2020-06-22 13:20:22 -07:00
Harshavardhana
e92434c2e7 fix: support client customized scopes for OpenID (#9880)
Fixes #9238
2020-06-22 12:08:50 -07:00
Klaus Post
cae09d8b84 crawler: Wait max 1 second (#9894)
Add 1-second timeout to crawler wait.

This will make the crawler able to run, albeit very, 
very slowly on high load servers.
2020-06-22 11:57:22 -07:00
Kaan Kabalak
f706a5b4c8 Select object if user clicks anywhere on object name for browser (#9649)
Fixes #9647
2020-06-22 11:03:24 -07:00
Harshavardhana
c54e3b4ea3 Add support for minioreleaser a fork for goreleaser (#9890)
This is to support building containers for multiple
platforms, rpms and debs all in a single build process

https://github.com/harshavardhana/minioreleaser
2020-06-22 08:26:40 -07:00
Minio Trusted
0fff9f9fa6 Update yaml files to latest version RELEASE.2020-06-22T03-12-50Z 2020-06-21 20:26:38 -07:00
Benjamin Sodenkamp
0becf7f03f Added note to help differentiate MINIO creds from S3 rotating creds (#9887) 2020-06-21 12:39:03 -07:00
Klaus Post
972d876ca9 Do not select zones with <5% free after upload (#9877)
Looking into full disk errors on zoned setup. We don't take the
5% space requirement into account when selecting a zone.

The interesting part is that even considering this we don't
know the size of the object the user wants to upload when
they do multipart uploads.

It seems quite defensive to always upload multiparts to
the zone where there is the most space since all load will
be directed to a part of the cluster.

In these cases we make sure it can at least hold a 1GiB file
and we disadvantage fuller zones more by subtracting the
expected size before weighing.
2020-06-20 06:36:44 -07:00
Harshavardhana
b8cb21c954 allow more than N number of locks in TopLocks (#9883) 2020-06-20 06:33:01 -07:00
Harshavardhana
67062840c1 fix: perform CopyObject under more conditions (#9879)
- x-amz-storage-class specified: CopyObject
  should proceed regardless, it's not a precondition
- sourceVersionID specified: CopyObject should
  proceed regardless, it's not a precondition
2020-06-19 13:53:45 -07:00
Michael Mayr
53f0cc1340 Implement CLIENT SETNAME for Redis connections (#9876)
Add note about CLIENT SETNAME needing auth
2020-06-19 13:28:28 -07:00
Harshavardhana
9626a981bc fix: Preserve old data appropriately (#9873)
This PR fixes all the below scenarios
and handles them correctly.

- existing data/bucket is replaced with
  new content: with no versioning enabled,
  the old structure vanishes.

- existing data/bucket: enable versioning
  before uploading any data; once versioning
  is enabled, upload new content and the old
  content is preserved.

- suspend versioning on the bucket again, then
  upload content again: the old content is purged
  since that is the default "null" version.

Additionally sync data after the xl.json -> xl.meta
rename(), to avoid any surprises if there is a
crash during this rename operation.
2020-06-19 10:58:17 -07:00
Harshavardhana
b912c8f035 fix: generate new version when replacing metadata in CopyObject (#9871) 2020-06-19 08:44:51 -07:00
Harshavardhana
fa13fe2184 allow loading some from config and some values from ENVs (#9872)
A regression perhaps introduced in #9851
2020-06-18 17:31:56 -07:00
Harshavardhana
85a1956e5c Avoid duplicate object holding locks (#9867)
Fixes #9866
2020-06-18 10:25:07 -07:00
Minio Trusted
72743d1590 Update yaml files to latest version RELEASE.2020-06-18T02-23-35Z 2020-06-17 19:33:28 -07:00
Harshavardhana
7ed1077879 Add a custom healthcheck function for online status (#9858)
- Add changes to ensure remote disks are not
  incorrectly taken online if their order has
  changed or they are incorrect disks.
- Bring changes to peers to detect disconnection
  with a separate Health handler, to avoid a
  rather expensive GetLocalDiskIDs() call
- Follow up on the same changes for Lockers
  as well
2020-06-17 14:49:26 -07:00
poornas
16d7b90adf Remove nuget from docker mint build (#9864) 2020-06-17 14:47:57 -07:00
Harshavardhana
94424e14d7 fix: rename legacy xl.json to xl.meta properly in ListDir() (#9863) 2020-06-17 13:58:38 -07:00
Harshavardhana
e79874f58e [feat] Preserve version supplied by client (#9854)
Just like the GET/DELETE APIs it is possible to preserve
client-supplied versionIds; of course the versionIds
have to be UUIDs. If an existing versionId is found
it is overwritten, provided no object locking policies
are in place.

- PUT /bucketname/objectname?versionId=<id>
- POST /bucketname/objectname?uploads=&versionId=<id>
- PUT /bucketname/objectname?versionId=<id> (with x-amz-copy-source)
2020-06-17 11:13:41 -07:00
Klaus Post
8aae8b1d27 Put an upper limit on walk pool sizes (#9848)
Fixes potentially infinite allocations, especially in FS mode, 
since lookups live up to 30 minutes. Limit walk pool sizes to 50 
max parameter entries and 4 concurrent operations with the same
parameters.

Fixes #9835
2020-06-17 09:52:07 -07:00
Klaus Post
1813ff9dfa Re-add missing bucket bloom filters (#9861) 2020-06-17 08:54:41 -07:00
Harshavardhana
4ac31ea82b fix: find current location of object multi-zones (#9840)
PutObject on multiple zones with versioning would not
overwrite the correct location of the object if the
object has a delete marker, leading to duplicate objects
on two zones.

This PR fixes this by adding affinity towards the delete
marker: when GetObjectInfo() returns an error, use the
zone index which has the delete marker.
2020-06-17 08:33:14 -07:00
Harshavardhana
67ca157329 fix: content-md5 is not mandatory for PutBucketVersioning (#9852) 2020-06-17 07:59:08 -07:00
Harshavardhana
7c061fa3b6 on darwin fallback to maximum possible rlimit (#9859)
fixes #9857
2020-06-17 07:47:42 -07:00
Harshavardhana
f5e1b3d09e fix: initialize config once per startup (#9851) 2020-06-16 20:15:21 -07:00
Klaus Post
3ba4804d6c Move online status to REST client (#9808) 2020-06-16 18:59:32 -07:00
Harshavardhana
216de230e2 remove unnecessary log for setMaxResources (#9856)
fixes #9855
2020-06-16 18:57:29 -07:00
ebozduman
a91cfa03e7 extend the HINT on backend ownership and its contents (#9846) 2020-06-16 15:32:29 -07:00
Harshavardhana
087aaaf894 fix: save deleteMarker properly, precision upto UnixNano() (#9843) 2020-06-16 07:54:27 -07:00
Harshavardhana
cbb7a09376 Allow etcd, cache setup to exit when starting gateway mode (#9842)
- Initialize etcd once per call
- Fail etcd, cache setup pro-actively for gateway setups
- Support deleting/updating bucket notification,
  tagging, lifecycle, sse-encryption
2020-06-15 22:09:39 -07:00
Harshavardhana
1a956424e0 Add logs when quorum is lost during readiness checks (#9839) 2020-06-15 13:11:22 -07:00
Harshavardhana
f9aa239973 fix: export prometheus metrics for cache GC triggers (#9815)
Bonus change: use a channel to serialize triggers
instead of atomic variables - a more efficient
mechanism for synchronization.

Co-authored-by: Nitish Tiwari <nitish@minio.io>
2020-06-15 09:05:35 -07:00
Anis Elleuch
2073b79633 fix: Remove unnecessary debug log line (#9834) 2020-06-15 08:55:33 -07:00
Harshavardhana
6a6a30c33c Build docker edge from master 2020-06-14 22:45:36 -07:00
Minio Trusted
1cd5d7942f Update yaml files to latest version RELEASE.2020-06-14T18-32-17Z 2020-06-14 11:41:18 -07:00
Anis Elleuch
63e9005f01 fix: Avoid updating object tags on failed disks (#9819) 2020-06-14 10:53:07 -07:00
Harshavardhana
d55f4336ae preserve context per request for local locks (#9828)
In the current bug we were re-using the context
from previously granted lockers; this would
lead to lock timeouts for existing valid
read or write locks, causing premature
timeout of locks.

This bug affects only local lockers in FS
or standalone erasure coded mode. The issue
is rather historical as well and was present
in lsync for some time, but we were lucky
not to see it.

Similar changes are done in dsync as well
to keep the code more familiar

Fixes #9827
2020-06-14 07:43:10 -07:00
ethan ho
535efd34a0 Fix peer server update failure (#9824)
When updating all servers following the instructions of mc update,
only the endpoint server will be updated successfully.
All the other peer servers' updates failed due to the error below:
--------------------------------------------------------------------------
parsing time "2006-01-02T15:04:05Z07:00" as "<release version>": cannot parse "-01-02T15:04:05Z07:00" as "0-" 
--------------------------------------------------------------------------
2020-06-13 07:12:49 -07:00
Harshavardhana
e11e4bcbc7 stick to go1.13.x for mint 2020-06-12 22:47:47 -07:00
Harshavardhana
4915433bd2 Support bucket versioning (#9377)
- Implement a new xl.json 2.0.0 format to support,
  this moves the entire marshaling logic to POSIX
  layer, top layer always consumes a common FileInfo
  construct which simplifies the metadata reads.
- Implement list object versions
- Migrate to siphash from crchash for new deployments
  for object placements.

Fixes #2111
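
Since versioning follows the standard S3 API, a quick sketch of
exercising it against a local server (endpoint and bucket name are
placeholders):
```
# enable versioning on a bucket using the standard S3 API via the aws CLI
aws --endpoint-url http://localhost:9000 s3api put-bucket-versioning \
    --bucket mybucket --versioning-configuration Status=Enabled

# list object versions (implemented by this change)
aws --endpoint-url http://localhost:9000 s3api list-object-versions --bucket mybucket
```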
2020-06-12 20:04:01 -07:00
Klaus Post
43d6e3ae06 merge object lifecycle checks into usage crawler (#9579) 2020-06-12 10:28:21 -07:00
kannappanr
225b812b5e Update minio-go library to latest (#9813) 2020-06-12 10:18:42 -07:00
Dan Garrick
a53cb236bb Add enable="on" to set compression command (#9811) 2020-06-11 18:14:13 -07:00
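A usage sketch for the compression command above (the alias and the
extension list are illustrative assumptions):
```
mc admin config set myminio compression enable="on" extensions=".txt,.log"
```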
Minio Trusted
761992ca89 Update yaml files to latest version RELEASE.2020-06-12T00-06-19Z 2020-06-12 00:14:09 +00:00
Harshavardhana
96ed0991b5 fix: optimize IAM users load, add fallback (#9809)
Bonus fix, load service accounts properly
when service accounts were generated with
LDAP
2020-06-11 14:11:30 -07:00
Harshavardhana
a42df3d364 Allow idiomatic usage of middlewares in gorilla/mux (#9802)
Historically, due to lack of support for middlewares,
we ended up writing wrapped handlers for all
middlewares on top of gorilla/mux; this causes
multiple issues when we want to, let's say,

- Overload r.Body with some custom implementation
  to track the incoming Reads()
- Add other sorts of top level checks to avoid
  DDOSing the server with large incoming HTTP
  bodies.

Since the 1.7.x release gorilla/mux provides proper
support for middlewares, which are honored by the muxer
directly. This makes sure that Go can honor its
own internal ServeHTTP(w, r) implementation where
Go net/http can wrap in its own custom readers.

This PR as a side-effect fixes rare issues of client
hangs which were reported in the wild but never really
understood or fixed in our codebase.

Fixes #9759
Fixes #7266
Fixes #6540
Fixes #5455
Fixes #5150

Refer https://github.com/boto/botocore/pull/1328 for
one variation of the same issue in #9759
2020-06-11 08:19:55 -07:00
Harshavardhana
ff94b1b0a9 isEndpointConnected should take local disk inputs (#9803)
While PR #9801 is correct, the loop in isEndpointConnected()
was changed to rely on endpoint.String(), which has the host
information as well - not the correct input value to
detect if the disk is down or up. If the endpoint is local, use
its local path value instead.
2020-06-11 08:05:25 -07:00
Andreas Auernhammer
b1845c6c83 kes: try to auto. create master key if not present (#9790)
This commit changes the data key generation such that
if a MinIO server/node tries to generate a new DEK
but the particular master key does not exist, then
MinIO asks KES to create a new master key and then
requests the DEK again.

From now on, an SSE-S3 master key no longer has to be created
explicitly via: `kes key create <key-name>`.
Instead, it is sufficient to just set the env. var.
```
export MINIO_KMS_KES_KEY_NAME=<key-name>
```

However, the MinIO identity (mTLS client certificate)
must have the permission to access the `/v1/key/create/`
API. Therefore, KES policy for MinIO must look similar to:
```
[
  /v1/key/create/<key-name-pattern>
  /v1/key/generate/<key-name-pattern>
  /v1/key/decrypt/<key-name-pattern>
]
```
However, in our guides we already suggest that.
See e.g.: https://github.com/minio/kes/wiki/MinIO-Object-Storage#kes-server-setup

***

The ability to create master keys on request may also be
necessary / useful in case of SSE-KMS.
2020-06-11 02:00:47 -07:00
Harshavardhana
62b1da3e2c fix offline disk calculation (#9801)
The current code was relying on globalEndpoints as
a secondary source of truth to obtain
the missing endpoints list when a disk
is offline; this is problematic:

- there is no way to know if the getDisks()
  returned endpoints total is the same as the
  globalEndpoints list and belongs to a
  particular set.
- there is no order guarantee: getDisks()
  is ordered as per format.json, globalEndpoints
  may not be, so we potentially end up including
  incorrect endpoints.

To fix this, bring in getEndpoints() just like getDisks()
to ensure that consistently ordered endpoints are
always available, and that returned values
are consistent with what each erasure set would observe.
2020-06-10 17:10:31 -07:00
poornas
d26b24f670 avoid storing X-Amz-Tagging-Directive in metadata (#9800) 2020-06-10 14:29:24 -07:00
Klaus Post
8ba40e492c upgrade to latest reed/solomon (#9797)
- faster AVX2
- big ARM64 improvement
2020-06-10 09:28:15 -07:00
kannappanr
2c372a9894 Send Partscount only when partnumber is specified (#9793)
Fixes #9789
2020-06-10 09:22:15 -07:00
poornas
3d3b75fb8d Avoid overwriting object tags when changing lock (#9794) 2020-06-10 08:16:30 -07:00
Klaus Post
142b057be8 Check object names on windows (#9798)
Uploading files with names that could not be written to disk
would result in "reduce your request" errors being returned.

Instead, check explicitly for disallowed characters and reject
files with `Object name contains unsupported characters.`
2020-06-10 08:14:22 -07:00
Harshavardhana
4790868878 allow background IAM load to speed up startup (#9796)
Also fix the healthcheck handler to report success
only if the object layer has fully initialized
for S3 API access.
2020-06-09 19:19:03 -07:00
Harshavardhana
342ade03f6 deprecate listDir usage for healing (#9792)
listDir was incorrectly used for healing, which
is slower; instead use Walk() to heal the entire
set.
2020-06-09 17:09:19 -07:00
Harshavardhana
1a0b7f58f9 update browser assets with latest changes 2020-06-09 15:30:39 -07:00
P R
9407dbf387 display proper used space based on disk usage (#9551)
Fixes #9346
2020-06-09 15:05:39 -07:00
Harshavardhana
423aeb0d81 allow large buffer to list more entries per directory (#9785) 2020-06-09 09:44:50 -07:00
Anis Elleuch
790323ac37 lifecycle: Fix object expiration date (#9791)
re-use PredictExpiryTime() in ComputeAction()
2020-06-09 09:40:53 -07:00
Ritesh H Shukla
920b863955 Fix formatting and wording in distributed docs (#9784) 2020-06-08 18:03:15 -07:00
Harshavardhana
c1382c09d9 add reference to distributed doc for clarity (#9783) 2020-06-08 12:30:42 -07:00
Harshavardhana
febe9cc26a fix: avoid timer leaks in dsync/lsync (#9781)
At a customer setup with lots of concurrent calls
it can be observed that in newRetryTimer there
were lots of tiny allocations which are not
relinquished upon retries; in this codepath
we are only interested in re-using the timer
and using it wisely for each locker.

```
(pprof) top
Showing nodes accounting for 8.68TB, 97.02% of 8.95TB total
Dropped 1198 nodes (cum <= 0.04TB)
Showing top 10 nodes out of 79
      flat  flat%   sum%        cum   cum%
    5.95TB 66.50% 66.50%     5.95TB 66.50%  time.NewTimer
    1.16TB 13.02% 79.51%     1.16TB 13.02%  github.com/ncw/directio.AlignedBlock
    0.67TB  7.53% 87.04%     0.70TB  7.78%  github.com/minio/minio/cmd.xlObjects.putObject
    0.21TB  2.36% 89.40%     0.21TB  2.36%  github.com/minio/minio/cmd.(*posix).Walk
    0.19TB  2.08% 91.49%     0.27TB  2.99%  os.statNolog
    0.14TB  1.59% 93.08%     0.14TB  1.60%  os.(*File).readdirnames
    0.10TB  1.09% 94.17%     0.11TB  1.25%  github.com/minio/minio/cmd.readDirN
    0.10TB  1.07% 95.23%     0.10TB  1.07%  syscall.ByteSliceFromString
    0.09TB  1.03% 96.27%     0.09TB  1.03%  strings.(*Builder).grow
    0.07TB  0.75% 97.02%     0.07TB  0.75%  path.(*lazybuf).append
```
2020-06-08 11:28:40 -07:00
Praveen raj Mani
2ce2e88adf Support mTLS Authentication in Webhooks (#9777) 2020-06-08 05:55:44 -07:00
Harshavardhana
c7599d323b fix: throw error if symmetry cannot be obtained (#9780)
For example a `{1...17}/{1...52}` symmetrical
distribution of drives cannot be obtained:

- because 17 is a prime number
- it is not divisible by any pre-defined setCount, i.e.
  from 1 to 16
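
A hypothetical contrast under this rule (hostnames are placeholders,
and the 16-node variant is assumed to be a valid layout):
```
# rejected: 17 nodes cannot yield a symmetrical erasure-set distribution
minio server http://host{1...17}/mnt/disk{1...52}

# by contrast, 16 nodes divide evenly into the pre-defined setCounts
minio server http://host{1...16}/mnt/disk{1...52}
```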
2020-06-06 22:13:48 -07:00
Anis Elleuch
e906b511e9 lifecycle: Consider multiple tags filtering (#9775)
* lifecycle: Consider multiple tags filtering
* lifecycle: Disallow duplicated Key/Value xml in <Tag>
2020-06-05 10:30:10 -07:00
Harshavardhana
4dd07e5763 update browser assets 2020-06-04 22:09:20 -07:00
Harshavardhana
d93bdea433 fix remove LDAPPassword from audit logs (#9773)
the previous fix for #9707 was not correct;
fix this properly by passing the right filter
keys to be filtered from the audit
log output.

Fixes #9767
2020-06-04 22:07:55 -07:00
P R
26cfd52e7e allow direct access to bucket from browser with limited permissions (#9580) 2020-06-04 20:23:14 -07:00
darkdragon-001
2d7a96342c Allow uploading folders on supported browsers. (#9627) 2020-06-04 17:24:18 -07:00
Praveen raj Mani
cdd6c9f52e Update go-sql-driver pkg to support checkConnLiveness (#9766)
Fixes #9679
2020-06-04 16:11:42 -07:00
Harshavardhana
5e529a1c96 simplify context timeout for readiness (#9772)
additionally add CORS support to restrict
to a specific origin; adds a new config and
updates the documentation as well
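
A sketch of restricting the origin (the exact config key is an
assumption here; the message does not spell it out):
```
# hypothetical: restrict CORS to one origin via the api subsystem
mc admin config set myminio api cors_allow_origin="https://example.com"
```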
2020-06-04 14:58:34 -07:00
Minio Trusted
7fee96e9de Update yaml files to latest version RELEASE.2020-06-03T22-13-49Z 2020-06-03 22:22:13 +00:00
Harshavardhana
5686a7e273 fix NAS gateway support for policy/notification (#9765)
Fixes #9764
2020-06-03 13:18:54 -07:00
Justin Hutchings
b91040f7fb Add CodeQL security scanning (#9763)
Manually specify go and javascript for analysis
2020-06-03 11:14:01 -07:00
Harshavardhana
566e0e2048 allow deleting of dropped multiparts (#9753)
bonus change: trigger an MRF heal when a single
offline disk is found, and break out early.
2020-06-02 15:27:03 -07:00
Anis Elleuch
3aad09be28 heal: Fix passing healing opts (#9756)
Manual healing (as background healing) creates a heal task with a
possibility to override healing options, such as deep or normal mode.

Use a pointer type in heal opts so nil would mean use the default
healing options.
2020-06-02 09:07:16 -07:00
Harshavardhana
f0358acb32 concurrently load bucket metadata (#9749) 2020-06-01 22:32:53 -07:00
Anis Elleuch
fd0de4ab32 azure: Show better message when credentials are wrong (#9748) 2020-06-01 18:23:48 -07:00
Anis Elleuch
73a308502f Relax content-md5 requirement in set encryption handler (#9750)
aws cli fails to set a bucket encryption configuration on a MinIO server.
The reason is that aws cli does not send the Content-MD5 header. It seems
that Content-MD5 is not required anymore.

This commit also returns the Not Implemented header early to help mint tests
ignore testing this API in gateway modes.
2020-06-01 18:08:19 -07:00
Anis Elleuch
bd59f150b8 azure: Implement CopyPart API (#9747) 2020-06-01 11:12:18 -07:00
Minio Trusted
4b68d69188 Update yaml files to latest version RELEASE.2020-06-01T17-28-03Z 2020-06-01 17:36:17 +00:00
Harshavardhana
f90422a890 fix prometheus calculation of offline disks per instance (#9744)
This was a regression introduced in 9baeda7 for prometheus
calculation of offline disks which should be local to
an instance.

fixes #9742
2020-06-01 07:35:40 -07:00
Harshavardhana
8befedef14 simplify FS multipart cleanup (#9740)
fixes #9671
2020-05-30 13:56:31 -07:00
Nathan Brown
2af3004409 Use registry to check Atime support on Windows (#9741) 2020-05-30 09:47:42 -07:00
Harshavardhana
38ee40d59c move to upstream code colinmarc/hdfs (#9738)
- supports SASL based authentication now
- upgrades to new changes in gokrb library
- implement force delete feature

Fixes #8206
2020-05-29 18:38:50 -07:00
kannappanr
d583f1ac0e check if container is empty before invoking DeleteContainer (#9733) 2020-05-29 13:24:39 -07:00
Harshavardhana
2bcb02f628 Avoid '\n' from constant strings (#9737)
Fixes #9736
2020-05-29 11:40:57 -07:00
Harshavardhana
4301a33371 update docs to mention quota not supported on FS 2020-05-29 07:57:45 -07:00
Minio Trusted
f1a03a4ee8 Update yaml files to latest version RELEASE.2020-05-29T14-08-49Z 2020-05-29 14:17:50 +00:00
Klaus Post
167ddf9c9c Workaround for Windows Docker Engine 19.03.8 (#9735)
Add workaround for issue preventing servers from starting on 
Windows Docker Engine 19.03.8

Fixes #9726
2020-05-29 07:05:19 -07:00
Anton Huck
f833e41e69 IAM: Fix nil panic due to uninit. iamGroupPolicyMap. Fixes #9730 (#9734) 2020-05-29 06:13:54 -07:00
Minio Trusted
3780ec699f Update yaml files to latest version RELEASE.2020-05-28T23-29-21Z 2020-05-28 23:38:19 +00:00
Harshavardhana
41688a936b fix: CopyObject behavior on expanded zones (#9729)
CopyObject was not correctly figuring out the correct
destination object location and would end up creating
duplicate objects on two different zones, reproduced
by doing encryption based key rotation.
2020-05-28 14:36:38 -07:00
Harshavardhana
b2db8123ec Preserve errors returned by diskInfo to detect disk errors (#9727)
This PR basically reverts #9720 and re-implements it differently
2020-05-28 13:03:04 -07:00
Harshavardhana
b330c2c57e Introduce simpler GetMultipartInfo call for performance (#9722)
Advantages: avoids 100's of stat calls which are needed for each
upload operation in FS/NAS gateway mode when uploading a large
multipart object; dramatically increases performance for
multipart uploads by avoiding recursive calls.

For other gateways this simplifies the approach, since
azure, gcs, hdfs gateways don't capture any specific
metadata during upload which needs handler validation
for encryption/compression.

Erasure coding was already optimized; additionally this
just avoids small allocations of a large data structure.

Fixes #7206
2020-05-28 12:36:20 -07:00
Anis Elleuch
231c5cf6de lifecycle: Iterate over all rules until actionable expiry is found (#9725) 2020-05-28 10:55:48 -07:00
kannappanr
7214a0160a allow bucket policy to set/removed in NAS gateway (#9706) 2020-05-28 08:31:16 -07:00
Anis Elleuch
375b79f11b storage: Implement GetDiskID request in REST server side (#9720)
GetDiskID() in the storage rest client does not really issue a REST request
to the remote disk, but returns an in-memory value instead.

However, GetDiskID() should return an error when format.json is not
found or for other similar issues (unmounted disks, etc..)

GetDiskID() is only called when formatting disks and getting storage
information, hence this commit should not cause a performance degradation.
2020-05-28 08:17:42 -07:00
Harshavardhana
661068d1a2 update browser assets after dependency updates (#9721) 2020-05-27 17:36:22 -07:00
Harshavardhana
3da1869d5e Avoid double reads on metadata during GetObject() (#9719)
Overall TTFB can see a dramatic improvement with
this change - did not do any benchmark as such
but the change itself is self-explanatory
2020-05-27 16:14:26 -07:00
darkdragon-001
000a7aa094 Update dependencies (#9673) 2020-05-27 14:25:06 -07:00
Harshavardhana
7cedc5369d fix: send valid claims in AuditLogs for browser requests (#9713)
Additionally also fix STS logs to filter out the LDAP
password from being sent out in audit logs.

Bonus fix: handle the reload of users properly by
making sure to preserve the newer users during the
reload so they are not invalidated.

Fixes #9707
Fixes #9644
Fixes #9651
2020-05-27 12:38:44 -07:00
poornas
e5ecd20d44 add docs for caching drives with docker (#9712)
fixes: #9688
2020-05-27 09:49:09 -07:00
Harshavardhana
53aaa5d2a5 Export bucket usage counts as part of bucket metrics (#9710)
Bonus fixes in quota enforcement to use the
new datastructure and use timedValue to cache
a value and reload it automatically - avoids one
more global variable.
2020-05-27 06:45:43 -07:00
Anis Elleuch
cccf2de129 madmin: Little error description when parsing server error msg (#9709) 2020-05-26 18:03:00 -07:00
P R
9d39fb3604 add copyobject tagging replace directive for gateway (#9711) 2020-05-26 17:32:53 -07:00
Klaus Post
4a007e3767 Prefer local disks when fetching data blocks (#9563)
If the requested server is part of the set this will always read 
from the local disk, even if the disk contains a parity shard. 
In default setup there is a 50% chance that at least 
one shard that otherwise would have been fetched remotely 
will be read locally instead.

It basically trades RPC call overhead for reed-solomon. 
On distributed localhost this seems to be fairly break-even, 
with a very small gain in throughput and latency. 
However on networked servers this should be a bigger gain.

1MB objects, before:

```
Operation: GET. Concurrency: 32. Hosts: 4.

Requests considered: 76257:
 * Avg: 25ms 50%: 24ms 90%: 32ms 99%: 42ms Fastest: 7ms Slowest: 67ms
 * First Byte: Average: 23ms, Median: 22ms, Best: 5ms, Worst: 65ms

Throughput:
* Average: 1213.68 MiB/s, 1272.63 obj/s (59.948s, starting 14:45:44 CEST)
```

After:
```
Operation: GET. Concurrency: 32. Hosts: 4.

Requests considered: 78845:
 * Avg: 24ms 50%: 24ms 90%: 31ms 99%: 39ms Fastest: 8ms Slowest: 62ms
 * First Byte: Average: 22ms, Median: 21ms, Best: 6ms, Worst: 57ms

Throughput:
* Average: 1255.11 MiB/s, 1316.08 obj/s (59.938s, starting 14:43:58 CEST)
```

Bonus fix: Only ask for heal once on an object.
2020-05-26 16:47:23 -07:00
Klaus Post
95814359bd cache disk info to avoid repeated calls (#9682)
This value is requested on every upload when there are multiple zones.

Since this will result in an RPC call to every remote disk, this scales
quite badly in a distributed setup. Load it at a 1-second interval instead.

2 servers, localhost only. In large distributed setups much bigger 
gains can be expected.

```
Operations: 21743 -> 22454
* Average: +3.28% (+0.0 MiB/s) throughput, +3.28% (+11.9) obj/s
* Fastest: +3.37% (+0.0 MiB/s) throughput, +3.37% (+13.0) obj/s
* 50% Median: +3.03% (+0.0 MiB/s) throughput, +3.03% (+11.2) obj/s
* Slowest: +8.03% (+0.0 MiB/s) throughput, +8.03% (+22.8) obj/s
```

For easy management of this a generic helper has been added.
2020-05-26 12:52:24 -07:00
Klaus Post
de6c286258 Allocate more buffer (#9683)
The documentation states that `nVolumeNameSize` and `nFileSystemNameSize` are:

> The length of a volume name buffer, in TCHARs. The maximum buffer size is MAX_PATH+1.

It seems like we allocated too little for them before, so expand it to 260 wchars.
2020-05-26 12:35:40 -07:00
Harshavardhana
d0ae69087c fix: add proper errors for disks with preexisting content (#9703) 2020-05-26 09:32:33 -07:00
Justin Clift
5e15b0b844 fix misspelling of Certbot (#9705) 2020-05-26 08:56:50 -07:00
Harshavardhana
7ea026ff1d fix: reply back user-metadata in lower case form (#9697)
some clients such as Veeam expect x-amz-meta to
be sent in lowercased form; while this does indeed
defeat the HTTP protocol contract, it is harder to
change these applications until they
get fixed appropriately in the future.

x-amz-meta is usually sent in lowercased form
by AWS S3, and some applications like Veeam
incorrectly end up relying on the case sensitivity
of the HTTP headers.

Bonus fixes

 - Fix the iso8601 time format to keep it the same as
   the AWS S3 response
 - Increase maxObjectList to 50,000 and use
   maxDeleteList as 10,000 whenever multi-object
   deletes are needed.
2020-05-25 16:51:32 -07:00
Harshavardhana
6e0575a53d Revert "Disable crawler in FS/NAS gateway mode (#9695)" (#9702)
This reverts commit eba423bb9d.

Additionally also address the FS crawler to properly
calculate the sizes for encrypted/compressed content.
2020-05-25 11:32:53 -07:00
darkdragon-001
fb9be81fab Add Docker development workflow for browser (#9664) 2020-05-25 10:09:12 -07:00
darkdragon-001
60791d6dd1 Web UI: Improve "..." menu (#9631) 2020-05-25 10:08:19 -07:00
Harshavardhana
eba423bb9d Disable crawler in FS/NAS gateway mode (#9695)
No one really uses FS for large scale accounting
usage, nor do we crawl in NAS gateway mode. It is
worthwhile to simply disable this feature as it's
not useful for anyone.

Bonus: disable bucket quota ops as well in FS
and gateway mode
2020-05-25 00:17:52 -07:00
Erkki Eilonen
301de169e9 in cache build ranges metadata as needed (#9698) 2020-05-25 00:17:03 -07:00
Kianoosh Hooshmand
1868c7016d remove additional back-ticks in compression docs (#9685) 2020-05-24 18:34:47 -07:00
Harshavardhana
0c71ce3398 fix size accounting for encrypted/compressed objects (#9690)
size calculation in the crawler was using the real size
of the object instead of its actual size, i.e. either
the decrypted or uncompressed size.

this is needed to make sure all other accounting
such as bucket quota and the mcs UI display the
correct values.
2020-05-24 11:19:17 -07:00
谭九鼎
bc285cf0dd use https link for brew.sh (#9692) 2020-05-24 09:28:10 -07:00
谭九鼎
209a3ee7af fix backtick markdown in chinese docs (#9691) 2020-05-24 09:27:47 -07:00
Krishna Srinivas
7d19ab9f62 readiness returns error quickly if any of the set is down (#9662)
This PR adds a new configuration parameter which allows the readiness
check to respond within 10secs; this can be reduced to a lower value
if necessary using

```
mc admin config set api ready_deadline=5s
```

 or

```
export MINIO_API_READY_DEADLINE=5s
```
2020-05-23 17:38:39 -07:00
P R
3f6d624c7b add gateway object tagging support (#9124) 2020-05-23 11:09:35 -07:00
Harshavardhana
c138272d63 reject object lock requests on existing buckets (#9684)
a regression was introduced; fix it to ensure that we
do not allow object locking settings on existing buckets
without object locking
2020-05-23 10:01:01 -07:00
Harshavardhana
7dbfea1353 avoid net/http ErrorLog for consistent logging experience (#9672)
net/http exposes ErrorLog but it is a log.Logger
instance, not an interface which can be overridden;
because of this the logging is sometimes interleaved
with TLS messages like this on the
server

```
http: TLS handshake error from 139.178.70.188:63760: EOF
```

This is a bit problematic for us as we need to have
a consistent logging view to honor the --json or --quiet
flags.

With this PR we ensure that this format is adhered to.
2020-05-22 21:59:18 -07:00
Sidhartha Mani
c121d27f31 progressively report obd results (#9639) 2020-05-22 17:56:45 -07:00
Anis Elleuch
43c19a6b82 nas: ensure loading of bucket notifications during startup (#9681) 2020-05-22 11:55:30 -07:00
Anis Elleuch
6542bc4a03 sql, csv: Cache some values between Read() calls to gain performance (#9645)
Below is the benchmark enhancement after this commit:

benchmark                                            old ns/op     new ns/op     delta
BenchmarkRead-8                                      2807          2189          -22.02%
BenchmarkReadWithFieldsPerRecord-8                   2802          2179          -22.23%
BenchmarkReadWithoutFieldsPerRecord-8                2824          2181          -22.77%
BenchmarkReadLargeFields-8                           3584          3371          -5.94%
BenchmarkReadReuseRecord-8                           2044          1480          -27.59%
BenchmarkReadReuseRecordWithFieldsPerRecord-8        2056          1483          -27.87%
BenchmarkReadReuseRecordWithoutFieldsPerRecord-8     2047          1482          -27.60%
BenchmarkReadReuseRecordLargeFields-8                2777          2594          -6.59%

benchmark                                            old allocs     new allocs     delta
BenchmarkRead-8                                      26             16             -38.46%
BenchmarkReadWithFieldsPerRecord-8                   26             16             -38.46%
BenchmarkReadWithoutFieldsPerRecord-8                26             16             -38.46%
BenchmarkReadLargeFields-8                           36             24             -33.33%
BenchmarkReadReuseRecord-8                           16             6              -62.50%
BenchmarkReadReuseRecordWithFieldsPerRecord-8        16             6              -62.50%
BenchmarkReadReuseRecordWithoutFieldsPerRecord-8     16             6              -62.50%
BenchmarkReadReuseRecordLargeFields-8                24             12             -50.00%

benchmark                                            old bytes     new bytes     delta
BenchmarkRead-8                                      672           664           -1.19%
BenchmarkReadWithFieldsPerRecord-8                   672           664           -1.19%
BenchmarkReadWithoutFieldsPerRecord-8                672           664           -1.19%
BenchmarkReadLargeFields-8                           3948          3936          -0.30%
BenchmarkReadReuseRecord-8                           32            24            -25.00%
BenchmarkReadReuseRecordWithFieldsPerRecord-8        32            24            -25.00%
BenchmarkReadReuseRecordWithoutFieldsPerRecord-8     32            24            -25.00%
BenchmarkReadReuseRecordLargeFields-8                2988          2976          -0.40%
2020-05-22 10:15:08 -07:00
Ashish Kumar Sinha
bede525dc9 bypass flag: Allow object retention removal (#9677) 2020-05-22 09:38:58 -07:00
Harshavardhana
e45c90060f remove references for deprecated dockerfiles and deployment styles (#9675) 2020-05-22 08:40:59 -07:00
Klaus Post
7d79c723e5 Fix Windows memory leak (#9680)
When running a zoned setup simply uploading will run
the system out of memory very fast.

Root cause: nFileSystemNameSize is a DWORD and 
not a pointer. No idea how this didn't crash hard.

Furthermore replace the poor man's utf16 -> string
conversion to support arbitrary output.

Fixes #9630
2020-05-22 08:26:43 -07:00
darkdragon-001
5f6d6c3b70 Facilitate selective running of single CI stages (#9663)
Pass '--target core' to docker build in order to only test core.
Pass '--target frontend' to docker build in order to only test frontend.
Pass '--target gateway' to docker build in order to only test gateway.

Numerical references might be supported in the future: https://github.com/moby/buildkit/issues/1503
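
For example, to run only one stage (the image tag is an arbitrary choice):
```
# build and test only the core stage of the CI Dockerfile
docker build --target core -t minio-ci-core .
```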
2020-05-21 23:10:07 -07:00
Harshavardhana
d15042470e add missing signature v2 query params (#9670) 2020-05-21 18:51:23 -07:00
poornas
f1f414ca59 fix madmin SetBucketQuota API signature (#9669) 2020-05-21 14:45:12 -07:00
Anis Elleuch
cdf4815a6b Add x-amz-expiration header in some S3 responses (#9667)
x-amz-expiration is described in the S3 specification as a header which
indicates if the object in question will expire any time in the future.
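
The header carries the expiry date and the matching lifecycle rule,
e.g. (illustrative values):
```
x-amz-expiration: expiry-date="Sat, 01 Aug 2020 00:00:00 GMT", rule-id="rule-1"
```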
2020-05-21 14:12:52 -07:00
kannappanr
fade056244 filter all encryption headers in gateway (#9661)
fixes #9655
2020-05-21 11:07:50 -07:00
Harshavardhana
a546047c95 keep bucket metadata fields to be consistent (#9660)
added bonus: always reload bucket metadata after
a successful MakeBucket; currently we were only
doing it with object locking enabled.
2020-05-21 11:03:59 -07:00
darkdragon-001
ea210319ce fix whitespace in browser/webpack.config.js (#9665) 2020-05-21 09:21:14 -07:00
ebozduman
2896e780ae fixes misleading assume role error msgs (#9642) 2020-05-21 09:09:18 -07:00
Nicholas Bishop
eaafb23535 fix a typo in README.md (#9666) 2020-05-21 08:45:27 -07:00
Harshavardhana
baa30f4289 reload bucket metadata outside the locker (#9659) 2020-05-20 14:11:13 -07:00
Andreas Auernhammer
2164984d2b fix KMS quickstart guide layout (#9658)
This commit fixes a layout issue w.r.t. the KMS
Quickstart guide. The problem seems to be caused
by docs server not converting the markdown into html
as expected.

This commit fixes this by converting the ordered list
into subsections.
2020-05-20 13:55:54 -07:00
Harshavardhana
189c861835 fix: remove LDAP groups claim and store them on server (#9637)
Group information shall now be stored as part of the
credential data structure; this is a more idiomatic
way to support large LDAP groups.

This avoids the complication of setups where LDAP groups
can be in the range of 150+, which may lead to an excess
HTTP header size > 8KiB; to reduce such an occurrence
we shall save the group information on the server as
part of the credential data structure.

Bonus change: support multiple mapped policies across
all types of users.
2020-05-20 11:33:35 -07:00
Harshavardhana
6656fa3066 simplify further bucket configuration properly (#9650)
This PR is a continuation from #9586, now the
entire parsing logic is fully merged into
bucket metadata sub-system, simplify the
quota API further by reducing the remove
quota handler implementation.
2020-05-20 10:18:15 -07:00
Praveen raj Mani
0cc2ed04f5 humanize timeToFirstByte and timeToResponse upto nanoseconds (#9641) 2020-05-19 18:34:02 -07:00
Andreas Auernhammer
b11adfa5cd simplify the KMS guide and remove unnecessary sections (#9629)
This commit simplifies the KMS configuration guide by
adding a get started section that uses our KES play instance
at `https://play.min.io:7373`.

Further, it removes sections that we don't recommend for production
anyways (MASTER_KEY).
2020-05-19 18:33:11 -07:00
Anis Elleuch
9baeda781a fix storage info output with unordered endpoints arguments (#9610)
Shuffling the arguments that we pass to MinIO server is supported. However,
when that happens, Prometheus returns wrong information about disk usage
and online/offline status.

The commit fixes the issue by avoiding reliance on xl.endpoints since
it is not ordered.
2020-05-19 14:27:20 -07:00
Harshavardhana
bd032d13ff migrate all bucket metadata into a single file (#9586)
this is a major overhaul, migrating all
bucket metadata related configs into a single
object '.metadata.bin'; this allows for faster
bootups across 1000's of buckets as well
as keeping the code simple enough for future
work and additions.

Additionally also fixes #9396, #9394
2020-05-19 13:53:54 -07:00
Harshavardhana
730775bb4e update all the outdated browser deps (#9640) 2020-05-19 09:32:14 -07:00
Harshavardhana
d31eaddba3 fix: avoid double body reads in SelectObject call (#9638)
Bonus fix handle encryption headers in response
properly for both notification and response to
the client.
2020-05-19 02:01:08 -07:00
poornas
3202f78f0f Fix cache metadata update for range GET (#9636)
This was inadvertently deleting cached ranges
because HTTPRangeSpec was not being passed down

fixes #9597
2020-05-18 18:33:43 -07:00
Harshavardhana
6de410a0aa fix: possiblity of double write lockers on same resource (#9616)
To avoid this issue with refCounter refactor the code
such that

- locker() always increases refCount upon success
- unlocker() always decrements refCount upon success
  (as a special case removes the resource if the
  refCount is zero)

By these two assumptions we are able to see that we
are never granted two write lockers in any situation.

Thanks to @vcabbage for writing a nice reproducer.
2020-05-18 17:33:35 -07:00
darkdragon-001
a3f41c7049 Improve browser README (#9628)
- Add instructions to download node dependencies
- Add instructions to allow development from IPs other than localhost
2020-05-18 11:35:57 -07:00
Klaus Post
1847f17f50 Set Deployment ID before starting handlers (#9635)
Global handler ID is added to response headers, so initialize it before the server starts.

Fixes #9634
2020-05-18 11:35:05 -07:00
Harshavardhana
1bc32215b9 enable full linter across the codebase (#9620)
enable linting using golangci-lint across the
codebase to run a bunch of linters together;
we shall enable new linters as we fix more
things in the codebase.

This PR fixes the first stage of this
cleanup.
2020-05-18 09:59:45 -07:00
Anis Elleuch
96009975d6 relax validation when loading lifecycle document from the backend (#9612) 2020-05-18 08:33:43 -07:00
Harshavardhana
de9b391db3 fix: Disable presigned without appropriate policy (#9621)
Fixes #9590
2020-05-17 23:38:52 -07:00
kannappanr
a62572fb86 Check for address flags in all positions (#9615)
Fixes #9599
2020-05-17 08:46:23 -07:00
poornas
011a2c0b78 Add docs for bucket quota feature (#9503)
This PR also adds a check to not enforce
bucket quota for server-side metadata copy
of an object onto itself.
2020-05-16 19:27:33 -07:00
Harshavardhana
daf4418cbb add 50m for windows tests (#9614) 2020-05-15 21:07:27 -07:00
Minio Trusted
0f8df3d340 Update yaml files to latest version RELEASE.2020-05-16T01-33-21Z 2020-05-16 01:40:31 +00:00
Harshavardhana
9cac385aec add comment on exported error 2020-05-15 18:17:54 -07:00
Harshavardhana
814ddc0923 add missing admin actions, enhance AccountUsageInfo (#9607) 2020-05-15 18:16:45 -07:00
Harshavardhana
247795dd36 add github workflow for windows (#9611)
bye, bye travis
2020-05-15 15:54:39 -07:00
Anis Elleuch
dfadf70a7f mint: Add more SQL tests (#9540) 2020-05-15 11:20:57 -07:00
Harshavardhana
d348ec0f6c avoid double listObjectParts calls improves performance (#9606)
this PR is to avoid double listObjectParts calls
in multiple APIs

- CopyObjectPart
- PutObjectPart
2020-05-15 08:06:45 -07:00
Harshavardhana
b730bd1396 fix: possible race in FS local lockMap (#9598) 2020-05-14 23:59:07 -07:00
Klaus Post
56e0c6adf8 Track if bloom filter is dirty (#9601)
Only save bloom filter on cycles and updates.

Fixes #9600
2020-05-14 21:46:36 -07:00
Anis Elleuch
f44a960dcd tests: Fix one multi-delete test failure in Windows CI (#9602)
There is a disparity in behavior between Linux & Windows regarding
the returned error when trying to rename a non-existent path.

err := os.Rename("/path/does/not/exist", "/tmp/copy")

Linux:
  isSysErrNotDir(err) = false
  os.IsNotExist(err) = true

Windows:
  isSysErrNotDir(err) = true
  os.IsNotExist(err) = true

ENOTDIR in Linux is returned when the destination path
of the rename call contains a file in one of the middle
segments of the path (e.g. /tmp/file/dst, where /tmp/file
is an actual file not a directory)

However, as shown above, Windows has more scenarios in which
it returns ENOTDIR, for example when the source path contains
a non-existent directory in its path.

In that case, we want errFileNotFound returned and not
errFileAccessDenied, so this commit adds a further check to close
the disparity between Windows & Linux.
2020-05-14 18:09:30 -07:00
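A sketch of the kind of check described above, with stand-in helpers; the point is the ordering: the not-exist case must be tested before trusting ENOTDIR, since Windows returns ENOTDIR in more scenarios than Linux:
```
package storage

import (
	"errors"
	"os"
	"syscall"
)

// Stand-ins for the storage-layer errors named in the commit message.
var (
	errFileNotFound     = errors.New("file not found")
	errFileAccessDenied = errors.New("file access denied")
)

func isSysErrNotDir(err error) bool {
	var errno syscall.Errno
	return errors.As(err, &errno) && errno == syscall.ENOTDIR
}

// osErrToFileErr normalizes os.Rename errors across Linux and Windows:
// check os.IsNotExist first, so a missing source path maps to
// errFileNotFound on both platforms even when Windows also reports ENOTDIR.
func osErrToFileErr(err error) error {
	switch {
	case err == nil:
		return nil
	case os.IsNotExist(err):
		return errFileNotFound
	case isSysErrNotDir(err):
		return errFileAccessDenied
	default:
		return err
	}
}
```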
kannappanr
6c1bbf918d do not add quotes around etag, if already present (#9603) 2020-05-14 17:43:54 -07:00
Anis Elleuch
48e614b167 honor lifecycle expiration with tag rule (#9604) 2020-05-14 16:21:03 -07:00
poornas
fe8d33452b Allow writes for bucket exceeding FIFO quota (#9575)
the quota will be enforced by deleting the
oldest entries in FIFO manner.
2020-05-14 15:18:24 -07:00
Andreas Auernhammer
c19ece6921 update KMS guide to reflect latest KES changes (#9591)
This commit updates the two client env. variables:
```
KES_CLIENT_TLS_KEY_FILE
KES_CLIENT_TLS_CERT_FILE
```

The KES CLI client expects the client key and certificate
as `KES_CLIENT_KEY` resp. `KES_CLIENT_CERT`.
2020-05-14 14:02:43 -07:00
Klaus Post
216fa57b88 merge nested hash readers (#9582)
The `ioutil.NopCloser(reader)` was hiding nested hash readers.

We make it an `io.Closer` so it can be attached without wrapping,
which allows for nesting by merging the requests.
2020-05-14 14:01:31 -07:00
Dzmitry Pasiukevich
a9558ae248 Simplify cast of string to rune slice in wildcard matching (#9577) 2020-05-14 08:20:13 -07:00
Klaus Post
ee9077db7d fix: windows tests for all cases (#9594)
Replaces #9299
2020-05-13 23:55:38 -07:00
Harshavardhana
9c85928740 add formatting message for zones in ordinals (#9596)
Unlike the message
> Formatting 2 zone, 1 set(s), 6 drives per set.

It is more readable as ordinal
> Formatting 2nd zone, 1 set(s), 6 drives per set.
2020-05-13 20:25:29 -07:00
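A self-contained sketch of ordinal formatting in Go (the go-humanize library ships a similar humanize.Ordinal helper; this stand-alone version is illustrative):
```
package main

import "fmt"

// ordinal renders 1 as "1st", 2 as "2nd", 13 as "13th", 22 as "22nd".
func ordinal(n int) string {
	suffix := "th"
	switch {
	case n%100 >= 11 && n%100 <= 13:
		// 11th, 12th and 13th are exceptions to the last-digit rule
	case n%10 == 1:
		suffix = "st"
	case n%10 == 2:
		suffix = "nd"
	case n%10 == 3:
		suffix = "rd"
	}
	return fmt.Sprintf("%d%s", n, suffix)
}

func main() {
	fmt.Printf("Formatting %s zone, 1 set(s), 6 drives per set.\n", ordinal(2))
}
```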
kannappanr
af0309371e mint tests s3cmd remove copy object test (#9595)
the test is failing in the Azure gateway as the mapping from
x-amz-storage-class to an Azure storage tier is failing,
and there is no way to not send a storage class in s3cmd.
2020-05-13 19:55:34 -07:00
ebozduman
ead3c186a6 Fixes broken image preview for anon user (#9584) 2020-05-13 13:26:12 -07:00
Harshavardhana
6ac48a65cb fix: use unused cacheMetrics code in prometheus (#9588)
remove all other unused/dead code
2020-05-13 08:15:26 -07:00
poornas
2ecf5ba1de fix panic when checking if es/nats event target is active (#9587) 2020-05-13 06:37:22 -07:00
Krishna Srinivas
94f1a1dea3 add option for O_SYNC writes for standalone FS backend (#9581) 2020-05-12 19:24:59 -07:00
Anis Elleuch
c045ae15e7 fix: avoid undoing bucket creation and return the first err instead (#9578) 2020-05-12 15:20:42 -07:00
Harshavardhana
1756b7c6ff fix: LDAP derivative accounts parentUser validation is not needed (#9573)
* fix: LDAP derivative accounts parentUser validation is not needed

fixes #9435

* Update cmd/iam.go

Co-authored-by: Lenin Alevski <alevsk.8772@gmail.com>

Co-authored-by: Lenin Alevski <alevsk.8772@gmail.com>
2020-05-12 09:21:08 -07:00
Klaus Post
e25ace2151 Forward RPC errors from crawler (#9569)
The `keepHTTPResponseAlive` would cause errors to be 
returned with status OK.

- Add '32' as a filler byte until a response is ready
- '0' to indicate the response is ready to be consumed
- '1' to indicate response has an error which needs
to be returned to the caller

Clear out 'file not found' errors from dir walker, since it may be 
in a folder that has been deleted since it was scanned.
2020-05-11 20:41:38 -07:00
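A sketch of the framing described in the list above: a filler byte keeps the connection alive while the work runs, then one marker byte tells the caller whether a payload or an error follows. The function shape is an illustrative assumption, not the actual handler:
```
package rest

import (
	"io"
	"time"
)

const (
	fillerByte  = 32 // written periodically until the response is ready
	successByte = 0  // the real response body follows
	errorByte   = 1  // an error message follows
)

// keepResponseAlive streams filler bytes until done reports the outcome;
// the reader skips filler until it sees the success or error marker.
func keepResponseAlive(w io.Writer, done <-chan error) {
	tick := time.NewTicker(time.Second)
	defer tick.Stop()
	for {
		select {
		case <-tick.C:
			w.Write([]byte{fillerByte})
		case err := <-done:
			if err != nil {
				w.Write([]byte{errorByte})
				io.WriteString(w, err.Error()) // error travels in-band
			} else {
				w.Write([]byte{successByte}) // caller now writes the payload
			}
			return
		}
	}
}
```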
poornas
a8e5a86fa0 Remove brittle tests for cache (#9570) 2020-05-11 15:41:10 -07:00
Harshavardhana
f8edc233ab support multiple policies for temporary users (#9550) 2020-05-11 13:04:11 -07:00
Harshavardhana
337c2a7cb4 add audit logging for all admin calls (#9568)
- add ServiceRestart/ServiceStop actions
- audit log appropriately in all admin handlers

fixes #9522
2020-05-11 10:34:08 -07:00
Harshavardhana
2d735144b9 fix: distributed docs image path 2020-05-11 09:33:55 -07:00
Harshavardhana
0113035237 update distributed setup guide (#9566) 2020-05-11 09:19:10 -07:00
Anis Elleuch
52a1d248b2 policy: Do not return an error for invalid value during parsing (#9442)
s3:HardwareInfo was removed recently. Users having that admin action
stored in the backend will have an issue starting the server.

To fix this, we need to avoid returning an error in Marshal/Unmarshal
when they encounter an invalid action and validate only in specific
location.

Currently the validation is done and in ParseConfig().
2020-05-10 10:55:28 -07:00
Harshavardhana
b5ed42c845 ignore policy/group missing errors appropriately (#9559) 2020-05-09 13:59:12 -07:00
Klaus Post
d9e7cadacf Update reed+solomon (#9562)
Only create encoder when strictly needed.
2020-05-09 09:54:20 -07:00
Harshavardhana
36e88cbd50 fix mint tests in awscli to ignore NotImplemented properly (#9561) 2020-05-08 22:32:10 -07:00
Anis Elleuch
6d76efb9bb Add support of TCP fast open in internode calls (#9486) 2020-05-08 14:33:23 -07:00
Harshavardhana
a1de9cec58 cleanup object-lock/bucket tagging for gateways (#9548)
This PR is to ensure that we call the relevant object
layer APIs for necessary S3 API level functionalities
allowing gateway implementations to return proper
errors as NotImplemented{}

This allows all our tests in mint to behave
appropriately and be handled properly as
well.
2020-05-08 13:44:44 -07:00
Anis Elleuch
6885c72f32 disable check for DirectIO in standalone FS mode (#9558) 2020-05-08 12:07:51 -07:00
poornas
0f1389e992 Fix azure gateway handling of ETag for CopyObject (#9544)
fixes #9428
2020-05-08 11:30:35 -07:00
Minio Trusted
5bf3eeaa77 Update yaml files to latest version RELEASE.2020-05-08T02-40-49Z 2020-05-08 02:49:15 +00:00
Harshavardhana
9dda1fd624 Remove B2 gateway implementation (#9547)
S3 is now natively supported by the B2 cloud storage provider;
there is no reason to use a specialized gateway for B2 anymore,
as our current S3 gateway with caching works with B2.

Resolves #8584
2020-05-07 19:00:30 -07:00
Harshavardhana
2dc46cb153 Report correct error when O_DIRECT is not supported (#9545)
fixes #9537
2020-05-07 16:12:16 -07:00
remche
0674c0075e add LDAP StartTLS support (#9472) 2020-05-07 15:08:33 -07:00
Harshavardhana
518ef670da update browser assets with new changes (#9543)
- f7c91eff5 - Share button for public objects (#9162) (5 days ago) <Egor Rudinsky>
- 60d415bb8 - deprecate/remove global WORM mode (#9436) (13 days ago) <Harshavardhana>
2020-05-07 13:21:05 -07:00
Harshavardhana
0dd626ec67 fix: requests without bucket should route to the original router (#9541)
requests in federated setups for STS-type calls, which are
performed at the '/' resource, should be routed by the muxer;
the assumption is simply that requests without a bucket
in a federated setup cannot be proxied, so serve them at the
current server.
2020-05-07 11:49:04 -07:00
Harshavardhana
53f4c0fdc0 fix: deprecate old settings and elaborate on tuning (#9542)
fixes #9538
2020-05-07 11:08:57 -07:00
P R
7e3ea77fdf Checking for access denied in web browser request. (#9523)
Fixes #9485
2020-05-06 21:31:44 -07:00
Harshavardhana
7290d23b26 Apply partNumber checks only on multipart objects (#9528) 2020-05-06 16:58:09 -07:00
Minio Trusted
24f20eb1bd Update yaml files to latest version RELEASE.2020-05-06T23-23-25Z 2020-05-06 23:32:41 +00:00
Harshavardhana
4c9de098b0 heal buckets during init and make sure to wait on quorum (#9526)
heal buckets properly during expansion, and make sure
to wait for the quorum properly such that healing can
be retried.
2020-05-06 14:25:05 -07:00
Harshavardhana
a2ccba69e5 add kes retries up to two times with jitter backoff (#9527)
KES calls were not retried; under certain situations,
when KES is under high load, the request should be
retried automatically.
2020-05-06 11:44:06 -07:00
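A sketch of a bounded retry with jitter backoff in the spirit described above (two retries, randomized delay so concurrent clients don't retry in lockstep); the helper and its parameters are illustrative:
```
package kesclient

import (
	"math/rand"
	"time"
)

// withRetries runs fn up to 1+retries times, sleeping a jittered,
// growing delay between attempts.
func withRetries(retries int, base time.Duration, fn func() error) error {
	var err error
	for i := 0; i <= retries; i++ {
		if err = fn(); err == nil {
			return nil
		}
		if i < retries {
			jitter := time.Duration(rand.Int63n(int64(base)))
			time.Sleep(base*time.Duration(i+1) + jitter)
		}
	}
	return err // last error after exhausting all attempts
}
```
Usage would look like withRetries(2, 100*time.Millisecond, callKES), with callKES being a hypothetical closure around the KES request.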
Harshavardhana
8eb99d3a87 fix: complete multipart upload respond with ETag quoted (#9525)
Fixes #9517
2020-05-05 17:47:54 -07:00
Bala FA
3773874cd3 add bucket tagging support (#9389)
This patch also simplifies object tagging support
2020-05-05 14:18:13 -07:00
Harshavardhana
6c62b1a2ea fix broken retry tests 2020-05-04 22:01:39 -07:00
Harshavardhana
b768645fde fix: unexpected logging with bucket metadata conversions (#9519) 2020-05-04 20:04:06 -07:00
Harshavardhana
7b58dcb28c fix: return context error from context reader (#9507) 2020-05-04 14:33:49 -07:00
Harshavardhana
fea4a1e68e fix logical error in path length handling for windows (#9520)
fixes #9515
2020-05-04 13:11:56 -07:00
Andreas Auernhammer
a9e83dd42c crypto: remove dead code (#9516)
This commit removes some crypto-related code
that is not used anywhere anymore.
2020-05-04 11:41:18 -07:00
Andreas Auernhammer
145f501a21 use HTTP/2 when connecting to KES (#9514)
This commit makes the KES client use HTTP/2
when establishing a connection to the KES server.

This is necessary since the next KES server release
will require HTTP/2.
2020-05-04 10:17:13 -07:00
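A sketch of opting into HTTP/2 on a Go client transport; since Go 1.14 a single http.Transport field requests h2 negotiation via ALPN (the actual KES client wiring may differ):
```
package kesclient

import (
	"crypto/tls"
	"net/http"
	"time"
)

// newHTTP2Client returns a client that will negotiate HTTP/2 via ALPN,
// which a server that requires HTTP/2 (like newer KES releases) expects.
func newHTTP2Client() *http.Client {
	return &http.Client{
		Transport: &http.Transport{
			ForceAttemptHTTP2:   true,
			TLSClientConfig:     &tls.Config{MinVersion: tls.VersionTLS12},
			TLSHandshakeTimeout: 10 * time.Second,
		},
	}
}
```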
Harshavardhana
9b3b04ecec allow retries for bucket encryption/policy quorum reloads (#9513)
We should allow quorum errors to be sent upwards
so that the caller can retry while reading bucket
encryption/policy configs when the server is starting
up; this allows distributed setups to load the
configuration properly.

The current code didn't facilitate this and would
never have loaded the actual configs during rolling
server restarts.
2020-05-04 09:42:58 -07:00
Anis Elleuch
3e063cca5c Show the cause error in startup when directio is not supported (#9497)
This commit tries to create a file using direct I/O at startup
so the server returns quickly and avoids other cryptic errors.
2020-05-04 08:48:03 -07:00
Harshavardhana
27d716c663 simplify usage of mutexes and atomic constants (#9501) 2020-05-03 22:35:40 -07:00
ebozduman
fbd15cb7b7 Fixes browser delete issue for anon and authorized users (#9440) 2020-05-03 14:01:28 -07:00
Egor Rudinsky
f7c91eff54 Share button for public objects (#9162) 2020-05-01 23:55:53 -07:00
Dmitry Gadeev
a6bdc086a2 fix: use source scheme retrieved from X-Forwarded headers (#9483) 2020-05-01 23:53:01 -07:00
Minio Trusted
1242dd951a Update yaml files to latest version RELEASE.2020-05-01T22-19-14Z 2020-05-01 22:28:00 +00:00
Andreas Auernhammer
d1c8e9f31b update KMS guide to work with latest KES changes (#9498)
This commit updates the KMS guide to reflect the
latest changes in KES. Based on internal design
meetings we made some adjustments to the overall
KES configuration.
This commit ensures that the KMS guide contains
a working KES demo-setup with Vault.
2020-05-01 12:36:30 -07:00
Bala FA
83ccae6c8b Store bucket creation time as metadata (#9465)
Fixes #9459
2020-05-01 09:53:14 -07:00
Frank Wessels
086be07bf5 Fix ndjson unsupported (#9500) 2020-05-01 08:06:29 -07:00
Harshavardhana
28f9c477a8 fix: assume parentUser correctly for serviceAccounts (#9504)
ListServiceAccounts/DeleteServiceAccount didn't yet work properly
with STS credentials due to an incorrect parent user.
2020-05-01 08:05:14 -07:00
Harshavardhana
09571d03a5 avoid unnecessary logging in IAM (#9502) 2020-05-01 18:11:17 +05:30
Harshavardhana
71ce63f79c fix: background heal to call HealFormat only if needed (#9491)
In large setups this avoids unnecessary data transfer
across nodes and potential locks.

This PR also optimizes the heal result channel, which should
be avoided for each queueHealTask as it's expensive
to create/close channels for a large number of objects.
2020-04-30 20:23:00 -07:00
Harshavardhana
5205c9591f print proper certinfo on console when starting up (#9479)
also potentially fix a race in certs.go implementation
while accessing tls.Certificate concurrently.
2020-04-30 16:15:29 -07:00
poornas
9a547dcbfb Add API's for managing bucket quota (#9379)
This PR allows setting a "hard" or "fifo" quota
restriction at the bucket level. Buckets that
have reached the configured FIFO quota will
automatically be cleaned up in FIFO manner until
bucket usage drops to the configured quota.
If a bucket is configured with a "hard" quota
ceiling, all further writes are disallowed.
2020-04-30 15:55:54 -07:00
Anis Elleuch
27632ca6ec audit: Merge ResponseWriter with RecordAPIStats (#9496)
ResponseWriter & RecordAPIStats have similar roles, so merge them.

This commit also fixes wrong auditing for STS, Web, and others,
since they were using ResponseWriter instead of RecordAPIStats.
2020-04-30 11:27:19 -07:00
Harshavardhana
c7470e6e6e fix: go mod tidy 2020-04-29 22:31:34 -07:00
Anis Elleuch
d090a17ed0 fix: Audit tests on the correct response writer type (#9445) 2020-04-29 22:17:36 -07:00
Harshavardhana
c2529260e7 fix: crash observed when position of drives different (#9490)
allocate the disk slice properly before populating
each disk by its ID and position.

Fixes #9416
2020-04-29 13:42:37 -07:00
Arthur Lutz
da87188ff8 fix: tls doc markdown title (#9487) 2020-04-29 12:28:45 -07:00
Harshavardhana
d099039f5d Add new github workflow (#9480) 2020-04-29 09:17:32 -07:00
P R
5dd9cf4398 fix: CopyObject with REPLACE directive deletes existing tags (#9478)
Fixes #9477
2020-04-29 10:26:37 +05:30
Harshavardhana
ab77b216d1 fix: remove restrictions on windows for NAME_MAX (#9469)
Fixes #9393
2020-04-28 17:32:46 -07:00
Minio Trusted
b37a02cddf Update yaml files to latest version RELEASE.2020-04-28T23-56-56Z 2020-04-29 00:05:12 +00:00
Anis Elleuch
c3c3e9087b config: More fixes in parsing Audit & Logger env variables (#9474)
- Add support for the missed legacy Logger webhook
- Do not enable Audit or Logger if _ENABLE
  is not explicitly set to "on".
2020-04-28 15:20:40 -07:00
Anis Elleuch
7ad6bc955f show a notice when mixed rootfs & mounted disks is detected (#9471)
A user can incorrectly mount a fresh disk. MinIO will detect
that it is writing to a rootfs disk and will mark it down. However,
it is hard for the user to understand what's going on.

This commit just prints a notice so it will be easy to spot
such a use case.
2020-04-28 14:55:01 -07:00
Harshavardhana
7a5271ad96 fix: re-use connections in webhook/elasticsearch (#9461)
- elasticsearch client should rely on the SDK helpers
  instead of pure HTTP calls.
- webhook shouldn't need to check IsActive() for
  all notifications; failure should be delayed.
- Remove DialHTTP as it's never used properly

Fixes #9460
2020-04-28 13:57:56 -07:00
Harshavardhana
1b122526aa fix: add service account support for AssumeRole/LDAPIdentity creds (#9451)
allow generating service accounts for temporary credentials
which have a designated parent; currently OpenID is not yet
supported.

Added checks to ensure that a service account cannot generate
further service accounts for itself; service accounts can
never be a parent to any credential.
2020-04-28 12:49:56 -07:00
Anis Elleuch
a3b266761e Fix audit loading from the env and consider enable env variable (#9467)
Audit was not working properly when enabled from the environment,
caused by a typo in the code.

This commit fixes that but also considers the following variables:
`MINIO_LOGGER_WEBHOOK_ENABLE_*` and `MINIO_AUDIT_WEBHOOK_ENABLE_*`,
so the user can use the latter to temporarily disable a logger or
audit configuration.
2020-04-28 16:10:51 +05:30
Harshavardhana
498389123e avoid unnecessary logging on fresh/newly replaced drives (#9470)
the data usage tracker and crawler seem to be logging
non-actionable information on the console, which is not
useful and is fixed on its own in almost all deployments;
let's keep this logging to a minimum.
2020-04-28 01:16:57 -07:00
Harshavardhana
bc61417284 calculate automatic node based symmetry (#9446)
it is possible in many scenarios that even
if the divisible value is optimal, we may
end up with an uneven distribution due to the
number of nodes present in the configuration.

added code to allow for affinity towards various
ellipses, to figure out the optimal value across
ellipses such that we can always reach a
symmetric value automatically.

Fixes #9416
2020-04-27 14:39:57 -07:00
Harshavardhana
97d952e61c fix: ensure buckets are preserved if one set returns error (#9468)
the bucket should be deleted only if it can be successfully
deleted on all sets; if not, we should ensure those
buckets are restored properly.
2020-04-27 14:18:02 -07:00
Klaus Post
073aac3d92 add data update tracking using bloom filter (#9208)
By monitoring PUT/DELETE and heal operations it is possible
to track changed paths and keep a bloom filter for this data. 

This can help prioritize paths to scan. The bloom filter can identify
paths that have not changed, and the few collisions will only result
in a marginal extra workload. This can be implemented at a
bucket + (1 prefix level) granularity with reasonable performance.

The bloom filter is set to have a false positive rate of 1% at 1M
entries. A bloom table of this size is about ~2500 bytes when serialized.

To avoid forcing a full scan of all paths that have changed, cycle bloom
filters need to be kept, so we guarantee that dirty paths have
been scanned within the cycle runs. Until the cycle bloom filters have
been collected, all paths are considered dirty.
2020-04-27 10:06:21 -07:00
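A toy sketch of the core idea: hash each changed path into a bit set so that "definitely unchanged" paths can be skipped, while rare collisions merely cause harmless extra scanning. Real sizing for a 1% false-positive rate at 1M entries needs more bits and hash functions than this illustration:
```
package bloom

import "hash/fnv"

// filter is a toy bloom filter using two indexes derived from one hash.
type filter struct {
	bits []uint64
}

func newFilter(nbits uint) *filter {
	return &filter{bits: make([]uint64, (nbits+63)/64)}
}

func (f *filter) positions(path string) (uint, uint) {
	h := fnv.New64a()
	h.Write([]byte(path))
	sum := h.Sum64()
	n := uint(len(f.bits)) * 64
	// split the 64-bit hash into two 32-bit indexes
	return uint(sum>>32) % n, uint(sum&0xffffffff) % n
}

// Add records a path as changed after a PUT/DELETE/heal operation.
func (f *filter) Add(path string) {
	a, b := f.positions(path)
	f.bits[a/64] |= 1 << (a % 64)
	f.bits[b/64] |= 1 << (b % 64)
}

// MayHaveChanged returns false only if the path definitely did not change.
func (f *filter) MayHaveChanged(path string) bool {
	a, b := f.positions(path)
	return f.bits[a/64]&(1<<(a%64)) != 0 && f.bits[b/64]&(1<<(b%64)) != 0
}
```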
Harshavardhana
eff4127efd Revert "Write files in O_SYNC for fs backend to protect against machine crashes (#9434)"
This reverts commit 4843affd0e.
2020-04-27 09:22:05 -07:00
Harshavardhana
b1c0c32ba6 fix: ignore symlinks in backend filesystems (#9457)
fixes #9419
2020-04-27 06:30:12 -07:00
Harshavardhana
f14bf25cb9 optimize Listen bucket notification implementation (#9444)
this commit avoids lots of tiny allocations and repeated
channel creations which were performed when filtering
the incoming events, and unescaping a key just for matching.

also remove deprecated code which is not needed
anymore; this avoids unexpected data structure transformations
from the map to slice.
2020-04-27 06:25:05 -07:00
Harshavardhana
f216670814 use context specific to the etcd call (#9458) 2020-04-26 21:42:41 -07:00
Harshavardhana
6ecc98fddb fix: crash in metrics handler when some disks are offline (#9450)
Fixes #9449
2020-04-25 19:48:07 -07:00
Krishna Srinivas
4843affd0e Write files in O_SYNC for fs backend to protect against machine crashes (#9434) 2020-04-25 01:18:54 -07:00
Harshavardhana
558785a4bb fix: config Set/Get decrypt/encrypt using authenticated credentials (#9447)
we have a policy available for sub-admin users to set/get/delete
config, but we incorrectly decrypt the content using the admin
secret key, when in fact it should be the credential
authenticating the request.
2020-04-24 22:36:48 -07:00
Harshavardhana
60d415bb8a deprecate/remove global WORM mode (#9436)
global WORM mode is a complex piece whose
time has passed; with the advent of the S3-compatible
object locking and retention implementation, global
WORM is effectively deprecated. This has been mentioned
in our documentation for some time; now the time
has come for this to go.
2020-04-24 16:37:05 -07:00
BigUstad
45e22cf8aa fix: selectObject to return error when object does not exist (#9423) 2020-04-24 13:51:48 -07:00
Klaus Post
e4900b99d7 s3 select: Infer types for comparison (#9438) 2020-04-24 13:02:59 -07:00
Anis Elleuch
20766069a8 add list/delete API service accounts admin API (#9402) 2020-04-24 12:10:09 -07:00
Tim Hughes
e8160c9fae fix: same endpoint for NewLDAPIdentity & NewWithCredentials (#9433)
Also enables use of https endpoints

Fixes  #9431
2020-04-24 10:44:45 +05:30
Harshavardhana
957ecb1b64 use optimal memory while purging cache (#9426)
re-implement the cache purging routine to
avoid using ioutil.ReadDir, which can lead
to high allocations when there are cache
directories with lots of content, or
when the cache is installed in memory-constrained
environments.

Instead rely on a callback function where we
use no more than 8KiB of memory per
cycle.

Precursor for this change refer #9425, original
issue pointed by Caleb Case <caleb@storj.io>
2020-04-23 12:26:13 -07:00
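A sketch of the callback pattern: read directory names in small batches instead of ioutil.ReadDir's all-at-once slice, so memory stays bounded no matter how large the cache directory is. The batch size here is an illustrative stand-in for the 8KiB budget mentioned above:
```
package diskcache

import (
	"io"
	"os"
)

// readDirFn invokes fn for every entry name without ever holding the
// whole directory listing in memory.
func readDirFn(dirPath string, fn func(name string) error) error {
	d, err := os.Open(dirPath)
	if err != nil {
		return err
	}
	defer d.Close()
	for {
		names, err := d.Readdirnames(256) // small batches bound allocations
		for _, name := range names {
			if ferr := fn(name); ferr != nil {
				return ferr
			}
		}
		if err == io.EOF {
			return nil
		}
		if err != nil {
			return err
		}
	}
}
```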
Boaz
ac5061df2c fix: make azure gateway chunk size configurable (#9292) 2020-04-23 02:04:13 -07:00
Tim Hughes
cddb2714ef documentation: fix group search filter (#9420) 2020-04-22 22:29:17 -07:00
Minio Trusted
d7d9cac20b Update yaml files to latest version RELEASE.2020-04-23T00-58-49Z 2020-04-23 01:07:52 +00:00
Harshavardhana
6817c5ea58 migrate mint tests to latest versions (#9424) 2020-04-22 16:06:58 -07:00
Anis Elleuch
4cd6ca02c7 fix: Add missing return in admin requests auth (#9422) 2020-04-22 13:42:01 -07:00
Egon Elbre
a5efcbab51 fix: cacheReader.Close in all paths that don't return it. (#9418) 2020-04-22 12:13:57 -07:00
Egon Elbre
85be7b39ac Call cleanup funcs when skip fails (#9417) 2020-04-22 10:06:56 -07:00
Nitish Tiwari
ebf3dda449 Update server startup example to showcase local erasure code (#9407) 2020-04-21 23:59:13 -07:00
poornas
582953260b Increase response header timeout for gateway (#9400)
fixes: #9295
2020-04-21 19:21:27 -07:00
Minio Trusted
2d1ea86fc6 Update yaml files to latest version RELEASE.2020-04-22T00-11-12Z 2020-04-22 00:19:12 +00:00
Praveen raj Mani
322385f1b6 fix: only show active/available ARNs in server startup banner (#9392) 2020-04-21 09:38:32 -07:00
Anis Elleuch
1b38aed05f fix: Correct typo when registering peer Delete User API (#9403) 2020-04-21 09:31:51 -07:00
Anis Elleuch
a69c98e394 fix: Correct typo when registering peer Delete User API (#9403) 2020-04-21 08:35:19 -07:00
Harshavardhana
282c9f790a fix: validate partNumber in queryParam as part of preConditions (#9386) 2020-04-20 22:01:59 -07:00
Anis Elleuch
2eeb0e6a0b heal: Fix heal buckets result reporting (#9397)
healBucket() was not properly collecting results after healing
buckets. This commit adds After drives information correctly.
2020-04-20 13:48:54 -07:00
Harshavardhana
3ff5bf2369 fix: convert storage class into azure tiers (#9381) 2020-04-19 13:42:56 -07:00
Harshavardhana
69ee28a082 remove OSS gateway due to lack of licensing (#9390)
The OSS Go SDK lacks licensing terms in its
repository, and there has been no activity
on the issue here:
https://github.com/aliyun/aliyun-oss-go-sdk/issues/245

This PR is to ensure we remove any dependency code which
lacks an explicit license file in its repo.
2020-04-18 22:12:51 -07:00
sreenivas alapati
d02deff3d7 fixed typo in KMS documentation (#9384) 2020-04-18 18:09:25 -07:00
Sidhartha Mani
3e78ea8acc improve obd tests and optimize network (#9378)
- keep long running obd network tests alive
- fix error - wrong number of parents in process OBD info
- ensure that osinfo does not error out when inside containers
- remove limit on max number of connections per client transport

The generic client transport uses a default limit of 64 conns per transport.
This could end up limiting and throttling usage, and artificially slowing
down the performance of MinIO even on hardware capable of doing better.
2020-04-18 11:06:11 -07:00
Harshavardhana
b54c0f0ef3 Add stale/lock bot for issues (#9387) 2020-04-18 11:03:03 -07:00
Praveen raj Mani
c79358c67e notification queue limit has no maxLimit (#9380)
The new value defaults to 100K events,
but users can tune this value up to any value
they deem necessary.

* increase the limit to maxint64 while validating
2020-04-18 01:20:56 -07:00
Harshavardhana
75107d7698 fix: remove any duplicate statements in policy input (#9385)
Add support for removing duplicate statements automatically
2020-04-17 21:26:42 -07:00
Klaus Post
c4464e36c8 fix: limit HTTP transport tunables to affordable values (#9383)
Close connections pro-actively in transient calls
2020-04-17 11:20:56 -07:00
Harshavardhana
d92db198d1 Add target parsing code for config (#9375)
This code is helper for mcs project
2020-04-16 17:43:14 -07:00
Harshavardhana
8bae956df6 allow copyObject to rotate storageClass of objects (#9362)
Added additional mint tests as well to verify
this functionality.

Fixes #9357
2020-04-16 17:42:44 -07:00
Eco
7758524703 Add documentation for using MinIO with Veeam (#9355) 2020-04-16 17:36:14 -07:00
Harshavardhana
c82fa2c829 fix: load LDAP users appropriately (#9360)
This PR also fixes the following issue:

deletePolicy and deleteUser are idempotent, so issues can arise
when the client prematurely times out; the error response for a
retried call should therefore be ignored when the call returns
http.StatusNotFound

Fixes #9347
2020-04-16 16:22:34 -07:00
Harshavardhana
a51280fd20 allow config help in gateway mode (#9356)
allow `mc admin config set mygateway/ audit_webhook --env`
to fetch the documentation as needed; this is just to
ensure that our users can still access the relevant
ENV docs while running in gateway mode.
2020-04-16 14:49:12 -07:00
Klaus Post
bd437c1c17 set server base context on gateway http server (#9365) 2020-04-16 11:54:12 -07:00
Harshavardhana
69fb68ef0b fix simplify code to start using context (#9350) 2020-04-16 10:56:18 -07:00
Nitish Tiwari
787dbaff36 Enforce issue templates for new issues (#9364) 2020-04-16 10:54:59 -07:00
Minio Trusted
c50ae1fdbe Update yaml files to latest version RELEASE.2020-04-15T19-42-18Z 2020-04-15 20:00:16 +00:00
Harshavardhana
bde0f444db fix support OBDAdminAction is valid action (#9354) 2020-04-15 12:16:40 -07:00
Klaus Post
6a8298b137 Reduce Mutex test runs (#9345)
Some tests take a long time on CI:

* `--- PASS: TestRWMutex (226.49s)`
* ` --- PASS: TestRWMutex (7.13s)`

Reduce the number of runs.

Before/after locally:

```
--- PASS: TestRWMutex (20.95s)
--- PASS: TestRWMutex (7.13s)

--- PASS: TestMutex (3.01s)
--- PASS: TestMutex (1.65s)
```
2020-04-14 18:39:03 -07:00
Klaus Post
f19cbfad5c fix: use per test context (#9343)
Instead of GlobalContext, use a local context for tests.
Most notably this allows things created in a test to be shut
down when the tests using them are done. After PRs #9345 and
#9331, CI is often running out of memory/time.
2020-04-14 17:52:38 -07:00
Minio Trusted
78f2183e70 Update yaml files to latest version RELEASE.2020-04-15T00-39-01Z 2020-04-15 00:46:50 +00:00
Harshavardhana
5c11a46412 update minio-go/parquet-go to latest 2020-04-14 16:53:29 -07:00
Anis Elleuch
8a94aebdb8 config: Add api requests max & deadline configs (#9273)
Add two new configuration entries, api.requests-max and
api.requests-deadline, which have the same role as
MINIO_API_REQUESTS_MAX and MINIO_API_REQUESTS_DEADLINE.
2020-04-14 12:46:37 -07:00
Sidhartha Mani
ec11e99667 implement configurable timeout for OBD tests (#9324) 2020-04-14 11:48:32 -07:00
Harshavardhana
37d066b563 fix: deprecate requirement of session token for service accounts (#9320)
This PR fixes a couple of behaviors with service accounts

- no need to have a session token for service accounts
- service accounts can be generated by any user for themselves
  implicitly, with a valid signature.
- policy input for the AddNewServiceAccount API is now fully typed,
  allowing for validation before it is sent to the server.
- also bring in additional context for admin API errors, if any,
  when replying back to the client.
- deprecate the GetServiceAccount API as we do not need to reply
  back session tokens
2020-04-14 11:28:56 -07:00
Praveen raj Mani
bfec5fe200 fix: fetchLambdaInfo should return consistent results (#9332)
- Introduced a function `FetchRegisteredTargets` which will return
  a complete set of registered targets irrespective of their states,
  if the `returnOnTargetError` flag is set to `False`
- Refactor NewTarget functions to return non-nil targets
- Refactor GetARNList() to return a complete list of configured targets
2020-04-14 11:19:25 -07:00
Bala FA
525287f4b6 remove queue only if index is within the range (#9341)
Fixes minio/mc#3155
2020-04-14 11:06:23 -07:00
Harshavardhana
9054ce73b2 fix: deprecate skyring/uuid and use maintained google/uuid (#9340) 2020-04-14 02:40:05 -07:00
Harshavardhana
d079adc167 fix: remove initGlobalContext writes in tests (#9331)
since we do not close GlobalContext, we do not
need to reinitialize it inside test code
2020-04-13 23:21:01 -07:00
Harshavardhana
a9d401ac10 fix: update docs to mention erasure guide (#9339) 2020-04-14 11:38:14 +05:30
kannappanr
1fa65c7f2f fix: object lock behavior when default lock config is enabled (#9305) 2020-04-13 14:03:23 -07:00
Harshavardhana
cc9b63eb51 add deprecation docs for PostgresSQL/MySQL targets (#9333) 2020-04-13 12:13:33 -07:00
Harshavardhana
7e12eab3ad fix: cleanup madmin docs (#9330) 2020-04-13 10:30:41 +05:30
Roland Huß
fa685d7d9c Make multistage Dockerfile self-contained (#9323)
Picking up all support files from the builder image has the advantage
that the Dockerfile is now fully self-contained and can also be
run standalone.

This allows also cross-compilation and pushing with the proper manifests
with Docker Buildkit:

```
docker buildx create --name xbuilder
docker buildx use xbuilder

docker buildx build -f Dockerfile.minio --platform linux/arm/v7,linux/amd64 --progress plain --push -t minio/minio .
```

which also has the advantage that the Dockerfile is the same
for all platforms.

Co-authored-by: Harshavardhana <harsha@minio.io>
2020-04-12 20:03:02 -07:00
Harshavardhana
4314ee1670 fix: remove unusued PerfInfoHandler code (#9328)
- Removes the PerfInfo admin API as it's not OBDInfo
- Keep the drive path without the metaBucket in the OBD
  global latency map.
- Remove all the unused code related to the PerfInfo API
- Do not redefine global mib/gib constants; always use
  humanize.MiByte and humanize.GiByte instead
2020-04-12 19:37:09 -07:00
Harshavardhana
7d636a7c13 enable --compat flag by default (#9326)
if needed, use --no-compat to disable md5sum while
verifying any performance numbers.

bring back --compat behavior as the default to avoid
additional documentation and confusing behavior;
as we are working towards making md5sum
faster on AVX instructions, enabling this
should hardly be a problem in future versions
of MinIO.

fixes #8012
fixes #7859
fixes #7642
2020-04-12 18:08:27 -07:00
Harshavardhana
bf9d51cf14 fix: add missing copyright headers in some files (#9321) 2020-04-12 13:55:22 -07:00
Harshavardhana
29e0727b58 fix: regression in CopyObject not preserving ETag in --compat (#9322)
issue found after `git bisect` to commit db41953618
2020-04-11 20:20:30 -07:00
Anis Elleuch
c434dff0a4 posix: Add missing error return in RenameFile() (#9319)
Although it should not happen in most cases.
2020-04-11 11:15:30 -07:00
Taras Parkhomenko
b2a8cb4aba Add SHA-3 support (#9308) 2020-04-10 14:59:52 -07:00
Harshavardhana
b412a222ae Add missing comment key from key list (#9313)
Continuing from the previous PR #9304: comment
is a special key that is not present in the
default KV list. Add it explicitly when
tokenizing fields, as it is possible that
some clients might try to set comments.
2020-04-10 11:44:28 -07:00
Harshavardhana
79bcb705bf update CREDITS file to reflect new deps (#9311) 2020-04-10 00:16:38 -07:00
Sidhartha Mani
9f81d014f1 fix: drive names in output of parallel obd test (#9312) 2020-04-09 22:44:17 -07:00
Harshavardhana
3184205519 fix: config to support keys with special values (#9304)
This PR adds context-based `k=v` splits based
on the sub-system that was obtained; if
keys are not provided, an error is thrown
during parsing, and if keys are provided with
wrong values, an error is thrown. Keys can now
have values which are of a much more complex
form, such as `k="v=v"` or `k=" v = v"`
and other variations.

additionally, deprecate unnecessary postgres/mysql
configuration styles, support only

- connection_string for Postgres
- dsn_string for MySQL

All other parameters are removed.
2020-04-09 21:45:17 -07:00
Minio Trusted
7c919329e8 Update yaml files to latest version RELEASE.2020-04-10T03-34-42Z 2020-04-10 03:47:00 +00:00
Andreas Auernhammer
db41953618 avoid unnecessary KMS requests during single-part PUT (#9220)
This commit fixes a performance issue caused
by too many calls to the external KMS - i.e.
for single-part PUT requests.

In general, the issue is caused by a sub-optimal
code structure. In particular, when the server
encrypts an object it requests a new data encryption
key from the KMS. With this key it does some key
derivation and encrypts the object content and
ETag.

However, to be S3-compatible the MinIO server
has to return the plaintext ETag to the client
in the case of SSE-S3.
Therefore, the server code used to decrypt the
(previously encrypted) ETag again by requesting
the data encryption key (KMS decrypt API) from
the KMS.

This leads to 2 KMS API calls (1 generate key and
1 decrypt key) per PUT operation - while only
one KMS call is necessary.

This commit fixes this by fetching a data key only
once from the KMS and keeping the derived object
encryption key around (for the lifetime of the request).

This leads to a significant performance improvement
w.r.t. to PUT workloads:
```
Operation: PUT
Operations: 161 -> 239
Duration: 28s -> 29s
* Average: +47.56% (+25.8 MiB/s) throughput, +47.56% (+2.6) obj/s
* Fastest: +55.49% (+34.5 MiB/s) throughput, +55.49% (+3.5) obj/s
* 50% Median: +58.24% (+32.8 MiB/s) throughput, +58.24% (+3.3) obj/s
* Slowest: +1.83% (+0.6 MiB/s) throughput, +1.83% (+0.1) obj/s
```
2020-04-09 17:01:45 -07:00
Harshavardhana
cea078a593 update browser assets for image-preview feature 2020-04-09 14:34:37 -07:00
Harshavardhana
f44cfb2863 use GlobalContext whenever possible (#9280)
This change is throughout the codebase to
ensure that all codepaths honor GlobalContext
2020-04-09 09:30:02 -07:00
Anis Elleuch
1b45be0d60 lifecycle: Disallow delete when the object is locked (#9272) 2020-04-09 09:28:57 -07:00
Aditya Manthramurthy
6bb693488c Fix policy setting error in LDAP setups (#9303)
Fixes #8667

In addition to the above, if the user is mapped to a policy or 
belongs in a group, the user-info API returns this information, 
but otherwise, the API will now return a non-existent user error.
2020-04-09 01:04:08 -07:00
Harshavardhana
e20e08d700 fix: remove the sleep from listing operations (#9287)
make the rest of the Walk() function more predictable.
It was observed that in nominal deployments, even
without much workload, the drives are generally
slow to respond to readdir operations; with the
sleepDuration factor of 10 this can cause
unexpected slowness in the Listing calls. While
the sleep is good for all other I/O, it may simply slow
down Listing immensely, which is not useful.

fixes #9261
2020-04-08 19:42:57 -07:00
Harshavardhana
ac07df2985 start watcher after all creds have been loaded (#9301)
start watcher after all creds have been loaded
to avoid any conflicting locks that might get
deadlocked.

Deprecate unused peer calls for LoadUsers()
2020-04-08 19:00:39 -07:00
Praveen raj Mani
2054ca5c9a fix: honor token based authentication in NATS streaming (#9296)
fixes #9148
2020-04-08 12:45:24 -07:00
Anis Elleuch
e51e465543 delete: Use physical Dir() for proper prefix cleanup in Windows (#9297)
In FS mode under Windows, removing an object will not automatically
remove empty parent prefixes.

The reason is that path.Dir() was used; however, filepath.Dir() is
more appropriate, since filepath is physical (meaning it operates
on OS filesystem paths).

This was not caught because Windows CI failures are not caught.
2020-04-08 11:32:58 -07:00
tweigel-dev
2bbc6a83e8 feature preview of image-objects (#9239) 2020-04-08 10:47:47 -07:00
ebozduman
a78731a3ba Adds info on policy for STS authentication using web-id (#9289) 2020-04-08 10:34:43 -07:00
kumy
f4e779c964 Fix typo in LDAP STS guide (#9294) 2020-04-08 08:58:03 -07:00
Pontus Leitzler
a973402821 add object api check in fs-v1 before returning ready (#9285)
fs-v1 in server mode only checks to see if the path exists, so it
reports ready before it is indeed ready.

This change adds a check to ensure that the global object API is
also available before reporting ready.

Fixes #9283
2020-04-08 08:53:20 -07:00
Sidhartha Mani
44decbeae0 increase drive OBD blocksize to 4MB (#9258) 2020-04-08 06:04:27 -07:00
César Nieto
3ea1be3c52 allow delete of a group with no policy set (#9288) 2020-04-08 06:03:57 -07:00
Harshavardhana
2642e12d14 fix: change policies API to return and take struct (#9181)
This allows for order guarantees; the returned values
can be consumed safely by the caller to avoid any
additional parsing and validation.

Fixes #9171
2020-04-07 19:30:59 -07:00
Harshavardhana
e7276b7b9b fix: make single locks for both IAM and object-store (#9279)
Additionally add context support for IAM sub-system
2020-04-07 14:26:39 -07:00
Harshavardhana
e375341c33 fix: allow any 127.0.0.x as bind IPs (#9281)
It is sometimes common and convenient to use
just local IPs for testing purposes; 127.0.0.x
are special IPs that, regardless of being available
on an interface, can be bound to on all operating
systems.

Allow this behavior to work for the minio server

fixes #9274
2020-04-07 09:40:20 -07:00
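A quick illustration of why this works: the whole 127.0.0.0/8 block is loopback, which Go's net package can verify directly:
```
package main

import (
	"fmt"
	"net"
)

func main() {
	for _, addr := range []string{"127.0.0.1", "127.0.0.23", "192.168.1.10"} {
		ip := net.ParseIP(addr)
		// Any 127.0.0.x is loopback and can be bound on all operating
		// systems, whether or not it is configured on an interface.
		fmt.Printf("%-12s loopback=%v\n", addr, ip.IsLoopback())
	}
}
```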
Harshavardhana
2c20716f37 fix: Avoid force delete in compliance/worm mode (#9276)
also, bring in an additional policy to ensure that
force-deleting a bucket is only allowed with the right
policy for the user; the DeleteBucketAction
policy action alone is not enough.
2020-04-06 17:51:05 -07:00
Harshavardhana
928f5b0564 fix: Quit when the context is canceled in madmin (#9264) 2020-04-06 17:50:14 -07:00
Harshavardhana
91f21ddc47 fix: ignore lost+found properly while reading disks (#9278)
Fixes #9277
2020-04-06 16:51:18 -07:00
Harshavardhana
43a3778b45 fix: support object-remaining-retention-days policy condition (#9259)
This PR also tries to simplify the approach taken in
the object-locking implementation by giving preferential
treatment to full validation.

This in turn has fixed a couple of bugs related to
how policy should have been honored when ByPassGovernance
is provided.

It simplifies the code a bit, but also duplicates code intentionally
for clarity, due to the complex nature of the object-locking
implementation.
2020-04-06 13:44:16 -07:00
Bitworks LLC
b9b1bfefe7 Added a function which allows passing the UID/GID for suexec from the outside. (#9251) 2020-04-06 13:28:23 -07:00
Minio Trusted
05cda35b14 Update yaml files to latest version RELEASE.2020-04-04T05-39-31Z 2020-04-04 05:48:22 +00:00
Harshavardhana
2155e74951 update minio-go to latest v6.0.52 2020-04-03 18:06:50 -07:00
Harshavardhana
4714958e99 fix: possible connection leaks in sets init, heal (#9263) 2020-04-03 18:06:31 -07:00
Minio Trusted
c6e62b9175 Update yaml files to latest version RELEASE.2020-04-02T21-34-49Z 2020-04-02 21:44:04 +00:00
Harshavardhana
ab66b23194 fix: allow listBuckets with listBuckets permission (#9253) 2020-04-02 12:35:22 -07:00
Harshavardhana
73f9d8a636 set default storage class always (#9250)
gateway implementations might not respond
back with the right storage class, which is
an AWS S3 concept; add the default storage
class if it's empty.
2020-04-02 00:23:09 -07:00
Krishna Srinivas
541a778d7b fix: do not exit on bootstrap Verify() to allow for rolling upgrades (#9235) 2020-04-01 21:40:03 -07:00
Harshavardhana
d49f2ec19c fix: use specified authToken for audit/logger HTTP targets (#9249)
We were not using the auth token specified
even when the config supports it.
2020-04-01 20:53:07 -07:00
ebozduman
8dd63a462f fix: ETag returned by OSS endpoint (#9243) 2020-04-01 19:51:12 -07:00
Anis Elleuch
9902c9baaa sql: Add support of escape quote in CSV (#9231)
This commit modifies csv parser, a fork of golang csv
parser to support a custom quote escape character.

The quote escape character is used to escape the quote
character when a csv field contains a quote character
as part of data.
2020-04-01 15:39:34 -07:00
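To make the feature concrete: with `\` as the quote escape, `\"` inside a quoted field stands for a literal quote. A naive way to illustrate this with the standard library (not the forked parser itself) is to rewrite the custom escape into the doubled-quote form RFC 4180 already allows; this simple rewrite would misfire on literal backslashes, which is why a real parser change is needed:
```
package main

import (
	"encoding/csv"
	"fmt"
	"strings"
)

func main() {
	// One field contains a quote character, escaped with a custom '\'.
	raw := "id,comment\n" + `1,"he said \"hi\""`

	// Rewrite the custom escape+quote into the standard doubled quote.
	normalized := strings.ReplaceAll(raw, `\"`, `""`)

	records, err := csv.NewReader(strings.NewReader(normalized)).ReadAll()
	if err != nil {
		panic(err)
	}
	fmt.Println(records[1][1]) // he said "hi"
}
```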
Harshavardhana
7de29e6e6b Add rotating token support for admin API (#9244)
Use the *credentials.Credentials implementation method *Get*

```
func (c *Credentials) Get() (Value, error) {
```

which also handles auto-refresh; this allows for chaining
various implementations together if necessary, or simply
initializing with credentials.NewStaticV4(access, secret, token)

Co-authored-by: Klaus Post <klauspost@gmail.com>
2020-04-01 13:34:20 -07:00
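A sketch of using that interface from minio-go's credentials package; static credentials never expire, while rotating providers (STS, IAM) implement the same Get() and refresh transparently (import path as of the v6 era this log covers):
```
package main

import (
	"fmt"

	"github.com/minio/minio-go/v6/pkg/credentials"
)

func main() {
	creds := credentials.NewStaticV4("ACCESSKEY", "SECRETKEY", "")
	// Get returns the current Value, refreshing first if the provider
	// reports the credentials as expired.
	v, err := creds.Get()
	if err != nil {
		panic(err)
	}
	fmt.Println(v.AccessKeyID, v.SignerType)
}
```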
poornas
336460f67e fix: gateway_s3_bytes_sent metric for all API methods (#9242)
Co-authored-by: Harshavardhana <harsha@minio.io>
2020-04-01 12:52:31 -07:00
Bala FA
95e89f1712 proactive deep heal object when a bitrot is detected (#9192) 2020-04-01 12:14:00 -07:00
Harshavardhana
886ae15464 trimpaths when building minio binaries (#9246) 2020-04-01 10:45:11 -07:00
Harshavardhana
d8af244708 Add numeric/date policy conditions (#9233)
add new policy conditions

- NumericEquals
- NumericNotEquals
- NumericLessThan
- NumericLessThanEquals
- NumericGreaterThan
- NumericGreaterThanEquals
- DateEquals
- DateNotEquals
- DateLessThan
- DateLessThanEquals
- DateGreaterThan
- DateGreaterThanEquals
2020-04-01 00:04:25 -07:00
Sidhartha Mani
c8243706b4 Add Parallel NetOBD tests to saturate all nodes at once (#9241) 2020-03-31 17:08:28 -07:00
Harshavardhana
30707659b5 [feature] allow for an odd number of erasure packs (#9221)
Too many deployments come up with an odd number
of hosts or drives; to facilitate even distribution
among those setups, allow for packs based on odd
and prime numbers.
2020-03-31 09:32:16 -07:00
poornas
90c365a174 fix: allow overwriting objects under lock after retention period (#9232)
fixes #9230
2020-03-31 09:15:42 -07:00
Sidhartha Mani
7b732b566f [Bugfix] Fix Net tests being omitted (#9234) 2020-03-31 01:15:21 -07:00
Harshavardhana
ba52a925f9 fix: delete dangling directories properly (#9222) 2020-03-30 09:48:24 -07:00
ebozduman
fdda5f98c6 Makes mandatory dsn_string parameter optional (#8931) 2020-03-28 22:20:02 -07:00
Ingmar Runge
fa4d627b57 B2 gateway S3 compat: return MD5 hash as ETag from PutObject (#9183)
- B2 does actually return an MD5 hash for newly uploaded objects
  so we can use it to provide better compatibility with S3 client
  libraries that assume the ETag is the MD5 hash such as boto.
- depends on change in blazer library.
- new behaviour is only enabled if MinIO's --compat mode is active.
- behaviour for multipart uploads is unchanged (works fine as is).
2020-03-28 13:59:55 -07:00
Bala FA
2c3e34f001 add force delete option of non-empty bucket (#9166)
passing HTTP header `x-minio-force-delete: true` would 
allow standard S3 API DeleteBucket to delete a non-empty
bucket forcefully.
2020-03-27 21:52:59 -07:00
Anis Elleuch
7f8f1ad4e3 fix: cleanup lifecycle unused code (#9219) 2020-03-27 18:57:50 -07:00
Harshavardhana
6f992134a2 fix: startup load time by reusing storageDisks (#9210) 2020-03-27 14:48:30 -07:00
Sidhartha Mani
0c80bf45d0 Implement onboard diagnostics admin API (#9024)
- Implement a graph algorithm to test network bandwidth from every 
  node to every other node
- Saturate any network bandwidth adaptively, accounting for slow 
  and fast network capacity
- Implement parallel drive OBD tests
- Implement a paging mechanism for OBD test to provide periodic updates to client
- Implement Sys, Process, Host, Mem OBD Infos
2020-03-26 21:07:39 -07:00
Robert Thomas
2777956581 Improve YAML download links listed in K8s doc (#9213) 2020-03-26 11:17:00 -07:00
Anis Elleuch
b207520d98 Fix lifecycle GET: AWS SDK complaints on empty config (#9201) 2020-03-25 21:06:03 -07:00
Minio Trusted
2196fd9cd5 Update yaml files to latest version RELEASE.2020-03-25T07-03-04Z 2020-03-25 07:11:33 +00:00
Krishna Srinivas
ef6304c5c2 Improve connectDisks() performance (#9203) 2020-03-24 23:26:13 -07:00
Nitish Tiwari
6b984410d5 Add support for self-healing related metrics in Prometheus (#9079)
Fixes #8988

Co-authored-by: Anis Elleuch <vadmeste@users.noreply.github.com>
Co-authored-by: Harshavardhana <harsha@minio.io>
2020-03-24 22:40:45 -07:00
Harshavardhana
813e0fc1a8 fix: optimize isConnected to avoid url.String() conversions (#9202)
Stringifying in a loop can tax the system; avoid this
by converting the endpoints to strings early on and
remembering them for the lifetime of the server.
2020-03-24 18:53:24 -07:00
Harshavardhana
38cf263409 fix: docs remove goreportcard, its deprecated 2020-03-24 14:51:06 -07:00
Harshavardhana
6f6a2214fc Add rate limiter for S3 API layer (#9196)
- total number of S3 API calls per server
- maximum wait duration for any S3 API call

This implementation is primarily meant for situations
where HDDs are not capable enough to handle the incoming
workload and there is no way to throttle the client.

This feature allows MinIO server to throttle itself
such that we do not overwhelm the HDDs.
2020-03-24 12:43:40 -07:00
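A sketch of self-throttling with a bounded semaphore and a wait deadline, matching the two knobs above (maximum in-flight calls, maximum wait); the middleware shape and the rejection status are illustrative:
```
package throttle

import (
	"net/http"
	"time"
)

// maxClients admits at most n concurrent requests; a request that cannot
// acquire a slot within deadline is rejected, so a queue can never grow
// faster than slow HDDs can drain it.
func maxClients(next http.Handler, n int, deadline time.Duration) http.Handler {
	sem := make(chan struct{}, n)
	return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		timer := time.NewTimer(deadline)
		defer timer.Stop()
		select {
		case sem <- struct{}{}:
			defer func() { <-sem }()
			next.ServeHTTP(w, r)
		case <-timer.C:
			http.Error(w, "server busy", http.StatusServiceUnavailable)
		}
	})
}
```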
Anis Elleuch
791821d590 sa: Allow empty policy to indicate parent user's policy is inherited (#9185) 2020-03-23 14:17:18 -07:00
Harshavardhana
9a951da881 honor the credentials of user admin for encrypt/decrypt (#9194)
Fixes #9193
2020-03-23 14:06:00 -07:00
Praveen raj Mani
e7a0be5bd3 fix: throttling of events during their replay (#9188) 2020-03-23 12:34:39 -07:00
Harshavardhana
ff932ca2a0 fix: log only catastrophic errors in prepare storage (#9189) 2020-03-23 07:32:18 -07:00
poornas
818d3bcaf5 fix: deprecate TestDiskCache test from unit tests (#9187) 2020-03-22 23:46:36 -07:00
Krishna Srinivas
45b1c66195 fix: implement splunk specific listObjects when delimiter=guidSplunk (#9186) 2020-03-22 19:23:47 -07:00
Harshavardhana
da04cb91ce optimize listObjects to list only from 3 random disks (#9184) 2020-03-22 16:33:49 -07:00
Harshavardhana
cfc9cfd84a fix: various optimizations, idiomatic changes (#9179)
- acquire a single leader lock for all background operations
  - healing, crawling and applying lifecycle policies.

- simplify lifecycle to avoid network calls, which was a
  bug in the implementation - we should hold a leader and
  do everything from there, as we have access to the entire
  name space.

- make listing, walking not interfere by slowing itself
  down like the crawler.

- effectively use global context everywhere to ensure
  proper shutdown, in cache, lifecycle, healing

- don't read `format.json` for prometheus metrics in
  StorageInfo() call.
2020-03-22 12:16:36 -07:00
Harshavardhana
ea18e51f4d Support multiple LDAP OU's, sAMAccountName support (#9139)
Fixes #8532
2020-03-21 22:47:26 -07:00
Harshavardhana
3d3beb6a9d Add response header timeouts (#9170)
- Add conservative timeouts of up to 3 minutes
  for internode communication
- Add aggressive timeouts of 30 seconds
  for gateway communication

Fixes #9105
Fixes #8732
Fixes #8881
Fixes #8376
Fixes #9028
2020-03-21 22:10:13 -07:00
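A sketch of the two transport profiles implied above; ResponseHeaderTimeout is the knob that bounds how long a client waits for the peer to start responding:
```
package transports

import (
	"net/http"
	"time"
)

// newTransport gives up on a request if the peer has not begun
// responding within headerTimeout.
func newTransport(headerTimeout time.Duration) *http.Transport {
	return &http.Transport{ResponseHeaderTimeout: headerTimeout}
}

var (
	internodeTransport = newTransport(3 * time.Minute)  // conservative
	gatewayTransport   = newTransport(30 * time.Second) // aggressive
)
```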
poornas
27b8f18cce Fix storage info message on startup (#9177) 2020-03-21 10:02:20 -07:00
Harshavardhana
bf545dc320 migrate to new minio-go with latest changes (#9176)
- extract userTags from Get/Head request (#1249)
- fix: Context cancellation not handled (#1250)
- Check for correct http status in remove object tagging (#1248)
- simplify extracting metadata in Head/Get object (#1245)
- fix: close and remove .minio.part file on errors (#1243)
2020-03-20 17:28:36 -07:00
stefan-work
f001e99fcd create the final file with mode 0666 for multipart-uploads (#9173)
The NAS gateway creates non-multipart uploads with mode 0666,
but multipart uploads are created with a differing mode of 0644.

Both modes should be equal! Otherwise it leads to files with
different permissions based on file size. This patch solves that by
using 0666 for both cases.
2020-03-20 15:32:15 -07:00
Harshavardhana
b4bfdc92cc fix: admin console logger changes to log.Info 2020-03-20 15:14:14 -07:00
Harshavardhana
ae654831aa Add madmin package context support (#9172)
This is to improve responsiveness for all
admin API operations, allowing callers
to cancel any ongoing admin operations
if they happen to be waiting too long.
2020-03-20 15:00:44 -07:00
Stephen N
1ffa983a9d added support for SASL/SCRAM on Kafka bucket notifications. (#9168)
fixes #9167
2020-03-20 11:10:27 -07:00
Nitish Tiwari
ecf1566266 Add an option to allow plaintext connection to LDAP/AD Server (#9151) 2020-03-19 19:20:51 -07:00
Minio Trusted
c5b87f93dd Update yaml files to latest version RELEASE.2020-03-19T21-49-00Z 2020-03-19 21:57:16 +00:00
Harshavardhana
b1a2169dcc fix: data usage crawler env handling, usage-cache.bin location (#9163)
canonicalize the ENVs such that we can bring these ENVs 
as part of the config values, as a subsequent change.

- fix location of per bucket usage to `.minio.sys/buckets/<bucket_name>/usage-cache.bin`
- fix location of the overall usage in `json` at `.minio.sys/buckets/.usage.json`
  (avoids conflicts with a bucket named `usage.json`)
- fix location of the overall usage in `msgp` at `.minio.sys/buckets/.usage.bin`
  (avoids conflicts with a bucket named `usage.bin`)
2020-03-19 09:47:47 -07:00
Harshavardhana
d45a1808f2 fix: Walk() should require quorum number of disks only (#9164) 2020-03-18 20:56:07 -07:00
Anis Elleuch
db2155551a heal: Pass scan mode to HealObjects to deep scan full quorum objects (#9159)
As an optimization of healing, HealObjects() avoids sending an
object to the background healing subsystem when the object is
present in all disks.

However, HealObjects() should have checked the scan type: if it is
deep, always pass the object to the healing subsystem.
2020-03-18 17:50:00 -07:00
Harshavardhana
09d35d3b4c fix: sts to return appropriate errors (#9161) 2020-03-18 17:25:45 -07:00
Anis Elleuch
5b9342d35c xl: Tree walking should not quit when one disk returns empty (#9160)
Currently, a tree walk, needed to list objects in a specific
set, quits listing as soon as it finds no entries in a disk, which
is wrong.

This affected background healing, because the latter uses the
tree walk directly. If one object does not exist in the first
disk for example, it will seem as if the object does not
exist at all and no healing work is needed.

This commit fixes the behavior.
2020-03-18 16:58:05 -07:00
Klaus Post
8d98662633 re-implement data usage crawler to be more efficient (#9075)
Implementation overview: 

https://gist.github.com/klauspost/1801c858d5e0df391114436fdad6987b
2020-03-18 16:19:29 -07:00
Anis Elleuch
7fdeb44372 info: Initialize boot time early so uptime will always be correct (#9154) 2020-03-17 16:37:28 -07:00
poornas
59dced8237 Print node status even in --quiet mode (#9149) 2020-03-17 15:25:00 -07:00
Anis Elleuch
496f4a7dc7 Add service account type in IAM (#9029) 2020-03-17 10:36:13 -07:00
kannappanr
8b880a246a fix: deleteObjectTagging should 204 on success (#9150) 2020-03-16 23:21:24 -07:00
Klaus Post
eeb5942b6b fix: remote profile names and extension (#9145)
Remote profiles are not formatted correctly:

```
profile-172.31.91.126_9000-cpu.pprof
profile-172.31.91.126_9000-goroutines-before.txt
profile-172.31.91.126_9000-goroutines.txt
profiling-172.31.80.49_9000-cpu.pprof.pprof
profiling-172.31.80.49_9000-goroutines-before.txt.pprof
profiling-172.31.80.49_9000-goroutines.txt.pprof
profiling-172.31.86.101_9000-cpu.pprof.pprof
profiling-172.31.86.101_9000-goroutines-before.txt.pprof
profiling-172.31.86.101_9000-goroutines.txt.pprof
profiling-172.31.91.191_9000-cpu.pprof.pprof
profiling-172.31.91.191_9000-goroutines-before.txt.pprof
profiling-172.31.91.191_9000-goroutines.txt.pprof
```

`profiling` -> `profile`, remove extra extension.
2020-03-16 11:39:53 -07:00
yeungc
7ec904d67b fix: wording and update content of chinese docs (#9140) 2020-03-16 10:04:16 -07:00
Harshavardhana
c9212819af fix: lock maintenance should honor quorum (#9138)
The staleness of a lock should be determined by
the quorum number of entries returning stale;
this allows for situations when locks are held
while nodes are down, so we don't accidentally
clear locks when they are valid
and correct.

Also, lock maintenance should be run by all servers,
not one server; stale-lock cleanup needs to run outside
the requirement of holding distributed locks.

Thanks @klauspost for reproducing this issue
2020-03-15 11:55:52 -07:00
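A sketch of the quorum rule, with hypothetical names: staleness claims from fewer than a quorum of nodes are ignored, since those nodes may simply have missed the lock grant while down:
```
package lockmaint

// isLockStale treats a lock as stale only when at least a quorum of
// lockers report it stale; a minority report may just reflect nodes
// that were down when the lock was granted.
func isLockStale(staleReports, totalNodes int) bool {
	quorum := totalNodes/2 + 1
	return staleReports >= quorum
}
```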
poornas
10fd53d6bb Fix: admin config set API for notifications (#9085)
Filter out targets set via env when
validating incoming config change against
configured notification targets

Fixes #9066
2020-03-14 00:01:15 -07:00
gzur
3fea1d5e35 Align STS web-identity code snippet to documentation (minio#9114) (#9130) 2020-03-13 22:58:53 -07:00
Anis Elleuch
35ecc04223 Support configurable quote character parameter in Select (#8955) 2020-03-13 22:09:34 -07:00
Harshavardhana
3ca9f5ffa3 Update yaml files to latest version RELEASE.2020-03-14T02-21-58Z 2020-03-13 20:05:27 -07:00
776 changed files with 78697 additions and 40644 deletions


@@ -1,3 +1,12 @@
---
name: Bug report
about: Create a report to help us improve
title: ''
labels: community, triage
assignees: ''
---
<!--- Provide a general summary of the issue in the Title above -->
## Expected Behavior
@@ -15,6 +24,8 @@
## Steps to Reproduce (for bugs)
<!--- Provide a link to a live example, or an unambiguous set of steps to -->
<!--- reproduce this bug. Include code to reproduce, if relevant -->
<!--- and make sure you have followed https://github.com/minio/minio/tree/release/docs/debugging to capture relevant logs -->
1.
2.
3.
@@ -30,8 +41,6 @@
## Your Environment
<!--- Include as many relevant details about the environment you experienced the bug in -->
* Version used (`minio version`):
* Environment name and version (e.g. nginx 1.9.1):
* Server type and version:
* Version used (`minio --version`):
* Server setup and configuration:
* Operating System and version (`uname -a`):
* Link to your project:


@@ -24,6 +24,8 @@ assignees: ''
## Steps to Reproduce (for bugs)
<!--- Provide a link to a live example, or an unambiguous set of steps to -->
<!--- reproduce this bug. Include code to reproduce, if relevant -->
<!--- and make sure you have followed https://github.com/minio/minio/tree/release/docs/debugging to capture relevant logs -->
1.
2.
3.
@@ -39,8 +41,6 @@ assignees: ''
## Your Environment
<!--- Include as many relevant details about the environment you experienced the bug in -->
* Version used (`minio version`):
* Environment name and version (e.g. nginx 1.9.1):
* Server type and version:
* Version used (`minio --version`):
* Server setup and configuration:
* Operating System and version (`uname -a`):
* Link to your project:

.github/ISSUE_TEMPLATE/config.yml vendored Normal file

@@ -0,0 +1,8 @@
blank_issues_enabled: false
contact_links:
  - name: MinIO Community Support
    url: https://slack.min.io
    about: Join here for Community Support
  - name: MinIO SUBNET Support
    url: https://min.io/pricing
    about: Join here for Enterprise Support


@@ -16,4 +16,3 @@
- [ ] Fixes a regression (If yes, please add `commit-id` or `PR #` here)
- [ ] Documentation needed
- [ ] Unit tests needed
- [ ] Functional tests needed (If yes, add [mint](https://github.com/minio/mint) PR # here: )

.github/lock.yml (new file, 39 lines)
View File

@@ -0,0 +1,39 @@
# Configuration for Lock Threads - https://github.com/dessant/lock-threads-app
# Number of days of inactivity before a closed issue or pull request is locked
daysUntilLock: 365
# Skip issues and pull requests created before a given timestamp. Timestamp must
# follow ISO 8601 (`YYYY-MM-DD`). Set to `false` to disable
skipCreatedBefore: false
# Issues and pull requests with these labels will be ignored. Set to `[]` to disable
exemptLabels: []
# Label to add before locking, such as `outdated`. Set to `false` to disable
lockLabel: true
# Comment to post before locking. Set to `false` to disable
lockComment: >-
This thread has been automatically locked since there has not been
any recent activity after it was closed. Please open a new issue for
related bugs.
# Assign `resolved` as the reason for locking. Set to `false` to disable
setLockReason: true
# Limit to only `issues` or `pulls`
only: issues
# Optionally, specify configuration settings just for `issues` or `pulls`
# issues:
# exemptLabels:
# - help-wanted
# lockLabel: outdated
# pulls:
# daysUntilLock: 30
# Repository to extend settings from
# _extends: repo

.github/stale.yml (new file, 59 lines)
View File

@@ -0,0 +1,59 @@
# Configuration for probot-stale - https://github.com/probot/stale
# Number of days of inactivity before an Issue or Pull Request becomes stale
daysUntilStale: 30
# Number of days of inactivity before an Issue or Pull Request with the stale label is closed.
# Set to false to disable. If disabled, issues still need to be closed manually, but will remain marked as stale.
daysUntilClose: 15
# Only issues or pull requests with all of these labels are checked for staleness. Defaults to `[]` (disabled)
onlyLabels: []
# Issues or Pull Requests with these labels will never be considered stale. Set to `[]` to disable
exemptLabels:
- "security"
- "pending discussion"
# Set to true to ignore issues in a project (defaults to false)
exemptProjects: false
# Set to true to ignore issues in a milestone (defaults to false)
exemptMilestones: false
# Set to true to ignore issues with an assignee (defaults to false)
exemptAssignees: false
# Label to use when marking as stale
staleLabel: stale
# Comment to post when marking as stale. Set to `false` to disable
markComment: >-
This issue has been automatically marked as stale because it has not had
recent activity. It will be closed after 15 days if no further activity
occurs. Thank you for your contributions.
# Comment to post when removing the stale label.
# unmarkComment: >
# Your comment here.
# Comment to post when closing a stale Issue or Pull Request.
# closeComment: >
# Your comment here.
# Limit the number of actions per hour, from 1-30. Default is 30
limitPerRun: 1
# Limit to only `issues` or `pulls`
# only: issues
# Optionally, specify configuration settings that are specific to just 'issues' or 'pulls':
# pulls:
# daysUntilStale: 30
# markComment: >
# This pull request has been automatically marked as stale because it has not had
# recent activity. It will be closed if no further activity occurs. Thank you
# for your contributions.
# issues:
# exemptLabels:
# - confirmed

.github/workflows/codeql.yml (new file, 51 lines)
View File

@@ -0,0 +1,51 @@
name: "Code scanning - action"
on:
push:
pull_request:
schedule:
- cron: '0 19 * * 0'
jobs:
CodeQL-Build:
# CodeQL runs on ubuntu-latest and windows-latest
runs-on: ubuntu-latest
steps:
- name: Checkout repository
uses: actions/checkout@v2
with:
# We must fetch at least the immediate parents so that if this is
# a pull request then we can checkout the head.
fetch-depth: 2
# If this run was triggered by a pull request event, then checkout
# the head of the pull request instead of the merge commit.
- run: git checkout HEAD^2
if: ${{ github.event_name == 'pull_request' }}
# Initializes the CodeQL tools for scanning.
- name: Initialize CodeQL
uses: github/codeql-action/init@v1
with:
languages: go, javascript
# Autobuild attempts to build any compiled languages (C/C++, C#, or Java).
# If this step fails, then you should remove it and run the build manually (see below)
- name: Autobuild
uses: github/codeql-action/autobuild@v1
# Command-line programs to run using the OS shell.
# 📚 https://git.io/JvXDl
# ✏️ If the Autobuild fails above, remove it and uncomment the following three lines
# and modify them (or add more) to build your code if your project
# uses a compiled language
#- run: |
# make bootstrap
# make release
- name: Perform CodeQL Analysis
uses: github/codeql-action/analyze@v1

.github/workflows/go.yml (new file, 50 lines)
View File

@@ -0,0 +1,50 @@
name: Go
on:
pull_request:
branches:
- master
- release
jobs:
build:
name: Test on Go ${{ matrix.go-version }} and ${{ matrix.os }}
runs-on: ${{ matrix.os }}
strategy:
matrix:
go-version: [1.13.x]
os: [ubuntu-latest, windows-latest]
steps:
- uses: actions/checkout@v2
- uses: actions/setup-node@v1
with:
node-version: '12'
- uses: actions/setup-go@v2
with:
go-version: ${{ matrix.go-version }}
- name: Build on ${{ matrix.os }}
if: matrix.os == 'windows-latest'
env:
CGO_ENABLED: 0
GO111MODULE: on
MINIO_CI_CD: 1
run: |
go build --ldflags="-s -w" -o %GOPATH%\bin\minio.exe
go test -v --timeout 50m ./...
- name: Build on ${{ matrix.os }}
if: matrix.os == 'ubuntu-latest'
env:
CGO_ENABLED: 0
GO111MODULE: on
MINIO_CI_CD: 1
run: |
sudo apt-get install devscripts shellcheck
make
diff -au <(gofmt -s -d cmd) <(printf "")
diff -au <(gofmt -s -d pkg) <(printf "")
make test-race
make crosscompile
make verify
make verify-healing
cd browser && npm install && npm run test && cd ..
bash -c 'shopt -s globstar; shellcheck mint/**/*.sh'

.golangci.yml (new file, 27 lines)
View File

@@ -0,0 +1,27 @@
linters-settings:
golint:
min-confidence: 0
misspell:
locale: US
linters:
disable-all: true
enable:
- typecheck
- goimports
- misspell
- govet
- golint
- ineffassign
- gosimple
- deadcode
- structcheck
issues:
exclude-use-default: false
exclude:
- should have a package comment
- error strings should not be capitalized or end with punctuation or a newline
service:
golangci-lint-version: 1.20.0 # use the fixed version to not introduce new linters unexpectedly

.goreleaser.yml (new file, 167 lines)
View File

@@ -0,0 +1,167 @@
project_name: minio
release:
name_template: "Version {{.MinIO.Version}}"
disable: true
github:
owner: minio
name: minio
env:
- CGO_ENABLED=0
- GO111MODULE=on
before:
hooks:
- make clean
- go generate ./...
- go mod tidy
- go mod download
builds:
-
goos:
- linux
- darwin
- windows
- freebsd
goarch:
- amd64
- arm64
- arm
- ppc64le
- s390x
goarm:
- 7
ignore:
- goos: darwin
goarch: arm64
- goos: darwin
goarch: arm
- goos: darwin
goarch: ppc64le
- goos: darwin
goarch: s390x
- goos: windows
goarch: arm64
- goos: windows
goarch: arm
- goos: windows
goarch: ppc64le
- goos: windows
goarch: s390x
- goos: freebsd
goarch: arm
- goos: freebsd
goarch: arm64
- goos: freebsd
goarch: ppc64le
- goos: freebsd
goarch: s390x
flags:
- -tags=kqueue
- -trimpath
ldflags:
- "-s -w -X github.com/minio/minio/cmd.Version={{.Version}} -X github.com/minio/minio/cmd.ReleaseTag={{.Tag}} -X github.com/minio/minio/cmd.CommitID={{.FullCommit}} -X github.com/minio/minio/cmd.ShortCommitID={{.ShortCommit}}"
archives:
-
format: binary
name_template: "{{ .Binary }}-release/{{ .Os }}-{{ .Arch }}/{{ .Binary }}.{{ .Version }}"
nfpms:
-
id: minio
package_name: minio
vendor: MinIO, Inc.
homepage: https://min.io/
maintainer: dev@min.io
description: MinIO is a High Performance Object Storage released under Apache License v2.0. It is API compatible with Amazon S3 cloud storage service. Use MinIO to build high performance infrastructure for machine learning, analytics and application data workloads.
license: Apache 2.0
bindir: /usr/bin
formats:
- deb
- rpm
overrides:
deb:
file_name_template: "{{ .Binary }}-release/debs/{{ .ProjectName }}-{{ .Version }}_{{ .Arch }}"
replacements:
arm: armv7
files:
"NOTICE": "/usr/share/minio/NOTICE"
"CREDITS": "/usr/share/minio/CREDITS"
"LICENSE": "/usr/share/minio/LICENSE"
"README.md": "/usr/share/minio/README.md"
rpm:
file_name_template: "{{ .Binary }}-release/rpms/{{ .ProjectName }}-{{ .Version }}.{{ .Arch }}"
replacements:
amd64: x86_64
arm64: aarch64
arm: armv7
files:
"NOTICE": "/usr/share/minio/NOTICE"
"CREDITS": "/usr/share/minio/CREDITS"
"LICENSE": "/usr/share/minio/LICENSE"
"README.md": "/usr/share/minio/README.md"
checksum:
algorithm: sha256
signs:
-
signature: "${artifact}.asc"
cmd: "sh"
args:
- '-c'
- 'gpg --quiet --detach-sign -a ${artifact}'
artifacts: all
changelog:
sort: asc
filters:
exclude:
- '^Update yaml files'
dockers:
-
goos: linux
goarch: amd64
dockerfile: Dockerfile.release
image_templates:
- minio/minio:{{ .Tag }}
- minio/minio:latest
-
goos: linux
goarch: ppc64le
dockerfile: Dockerfile.ppc64le.release
image_templates:
- minio/minio:{{ .Tag }}-ppc64le
-
goos: linux
goarch: s390x
dockerfile: Dockerfile.s390x.release
image_templates:
- minio/minio:{{ .Tag }}-s390x
-
goos: linux
goarch: arm64
goarm: ''
dockerfile: Dockerfile.arm64.release
image_templates:
- minio/minio:{{ .Tag }}-arm64
-
goos: linux
goarch: arm
goarm: '7'
dockerfile: Dockerfile.arm.release
image_templates:
- minio/minio:{{ .Tag }}-arm

View File

@@ -1,58 +0,0 @@
go_import_path: github.com/minio/minio
language: go
addons:
apt:
packages:
- shellcheck
services:
- docker
# this ensures PRs based on a local branch are not built twice
# the downside is that a PR targeting a different branch is not built
# but as a workaround you can add the branch to this list
branches:
only:
- master
matrix:
include:
- os: linux
dist: trusty
sudo: required
env:
- ARCH=x86_64
- CGO_ENABLED=0
- GO111MODULE=on
- SIMPLE_CI=1
go: 1.13.x
script:
- make
- diff -au <(gofmt -s -d cmd) <(printf "")
- diff -au <(gofmt -s -d pkg) <(printf "")
- make test-race
- make crosscompile
- make verify
- cd browser && npm install && npm run test && cd ..
- bash -c 'shopt -s globstar; shellcheck mint/**/*.sh'
- os: windows
env:
- ARCH=x86_64
- CGO_ENABLED=0
- GO111MODULE=on
- SIMPLE_CI=1
go: 1.13.x
script:
- go build --ldflags="$(go run buildscripts/gen-ldflags.go)" -o %GOPATH%\bin\minio.exe
- for d in $(go list ./... | grep -v browser); do go test -v --timeout 20m "$d"; done
before_script:
# Add an IPv6 config - see the corresponding Travis issue
# https://github.com/travis-ci/travis-ci/issues/8361
- if [[ "${TRAVIS_OS_NAME}" == "linux" ]]; then sudo sh -c 'echo 0 > /proc/sys/net/ipv6/conf/all/disable_ipv6'; fi
before_install:
- if [[ "$TRAVIS_OS_NAME" == "linux" ]]; then nvm install stable ; fi

View File

@@ -1,4 +1,4 @@
# MinIO Contribution Guide [![Slack](https://slack.min.io/slack?type=svg)](https://slack.min.io) [![Go Report Card](https://goreportcard.com/badge/minio/minio)](https://goreportcard.com/report/minio/minio) [![Docker Pulls](https://img.shields.io/docker/pulls/minio/minio.svg?maxAge=604800)](https://hub.docker.com/r/minio/minio/)
# MinIO Contribution Guide [![Slack](https://slack.min.io/slack?type=svg)](https://slack.min.io) [![Docker Pulls](https://img.shields.io/docker/pulls/minio/minio.svg?maxAge=604800)](https://hub.docker.com/r/minio/minio/)
The ``MinIO`` community welcomes your contribution. To make the process as seamless as possible, we recommend that you read this contribution guide.

CREDITS (6619 changed lines)

File diff suppressed because it is too large

View File

@@ -1,4 +1,4 @@
FROM golang:1.13-alpine
FROM golang:1.13-alpine as builder
LABEL maintainer="MinIO Inc <dev@min.io>"
@@ -9,9 +9,9 @@ ENV GO111MODULE on
RUN \
apk add --no-cache git && \
git clone https://github.com/minio/minio && cd minio && \
go install -v -ldflags "$(go run buildscripts/gen-ldflags.go)"
git checkout master && go install -v -ldflags "$(go run buildscripts/gen-ldflags.go)"
FROM alpine:3.10
FROM alpine:3.12
ENV MINIO_UPDATE off
ENV MINIO_ACCESS_KEY_FILE=access_key \
@@ -21,9 +21,9 @@ ENV MINIO_ACCESS_KEY_FILE=access_key \
EXPOSE 9000
COPY --from=0 /go/bin/minio /usr/bin/minio
COPY CREDITS /third_party/
COPY dockerscripts/docker-entrypoint.sh /usr/bin/
COPY --from=builder /go/bin/minio /usr/bin/minio
COPY --from=builder /go/minio/CREDITS /third_party/
COPY --from=builder /go/minio/dockerscripts/docker-entrypoint.sh /usr/bin/
RUN \
apk add --no-cache ca-certificates 'curl>7.61.0' 'su-exec>=0.2' && \

View File

@@ -1,23 +1,9 @@
FROM golang:1.13-alpine as builder
WORKDIR /home
ENV GOPATH /go
ENV CGO_ENABLED 0
ENV GO111MODULE on
RUN \
apk add --no-cache git 'curl>7.61.0' && \
git clone https://github.com/minio/minio && \
curl -L https://github.com/balena-io/qemu/releases/download/v3.0.0%2Bresin/qemu-3.0.0+resin-arm.tar.gz | tar zxvf - -C . && mv qemu-3.0.0+resin-arm/qemu-arm-static .
FROM arm32v7/alpine:3.10
FROM multiarch/qemu-user-static:x86_64-arm as qemu
FROM arm32v7/alpine:3.12
LABEL maintainer="MinIO Inc <dev@min.io>"
COPY dockerscripts/docker-entrypoint.sh /usr/bin/
COPY CREDITS /third_party/
COPY --from=builder /home/qemu-arm-static /usr/bin/qemu-arm-static
COPY --from=qemu /usr/bin/qemu-arm-static /usr/bin
ENV MINIO_UPDATE off
ENV MINIO_ACCESS_KEY_FILE=access_key \
@@ -28,9 +14,11 @@ ENV MINIO_ACCESS_KEY_FILE=access_key \
RUN \
apk add --no-cache ca-certificates 'curl>7.61.0' 'su-exec>=0.2' && \
echo 'hosts: files mdns4_minimal [NOTFOUND=return] dns mdns4' >> /etc/nsswitch.conf && \
curl https://dl.min.io/server/minio/release/linux-arm/minio > /usr/bin/minio && \
chmod +x /usr/bin/minio && \
chmod +x /usr/bin/docker-entrypoint.sh
curl -s -q https://dl.min.io/server/minio/release/linux-arm/minio -o /usr/bin/minio && \
curl -s -q https://raw.githubusercontent.com/minio/minio/release/dockerscripts/docker-entrypoint.sh -o /usr/bin/docker-entrypoint.sh && \
chmod +x /usr/bin/minio && \
chmod +x /usr/bin/docker-entrypoint.sh && \
curl -s -q -O https://raw.githubusercontent.com/minio/minio/release/CREDITS
EXPOSE 9000

View File

@@ -1,23 +1,9 @@
FROM golang:1.13-alpine as builder
WORKDIR /home
ENV GOPATH /go
ENV CGO_ENABLED 0
ENV GO111MODULE on
RUN \
apk add --no-cache git 'curl>7.61.0' && \
git clone https://github.com/minio/minio && \
curl -L https://github.com/balena-io/qemu/releases/download/v3.0.0%2Bresin/qemu-3.0.0+resin-arm.tar.gz | tar zxvf - -C . && mv qemu-3.0.0+resin-arm/qemu-arm-static .
FROM arm64v8/alpine:3.10
FROM multiarch/qemu-user-static:x86_64-aarch64 as qemu
FROM arm64v8/alpine:3.12
LABEL maintainer="MinIO Inc <dev@min.io>"
COPY dockerscripts/docker-entrypoint.sh /usr/bin/
COPY CREDITS /third_party/
COPY --from=builder /home/qemu-arm-static /usr/bin/qemu-arm-static
COPY --from=qemu /usr/bin/qemu-aarch64-static /usr/bin
ENV MINIO_UPDATE off
ENV MINIO_ACCESS_KEY_FILE=access_key \
@@ -28,9 +14,11 @@ ENV MINIO_ACCESS_KEY_FILE=access_key \
RUN \
apk add --no-cache ca-certificates 'curl>7.61.0' 'su-exec>=0.2' && \
echo 'hosts: files mdns4_minimal [NOTFOUND=return] dns mdns4' >> /etc/nsswitch.conf && \
curl https://dl.min.io/server/minio/release/linux-arm64/minio > /usr/bin/minio && \
chmod +x /usr/bin/minio && \
chmod +x /usr/bin/docker-entrypoint.sh
curl -s -q https://dl.min.io/server/minio/release/linux-arm64/minio -o /usr/bin/minio && \
curl -s -q https://raw.githubusercontent.com/minio/minio/release/dockerscripts/docker-entrypoint.sh -o /usr/bin/docker-entrypoint.sh && \
chmod +x /usr/bin/minio && \
chmod +x /usr/bin/docker-entrypoint.sh && \
curl -s -q -O https://raw.githubusercontent.com/minio/minio/release/CREDITS
EXPOSE 9000

View File

@@ -1,4 +1,4 @@
FROM alpine:3.10
FROM alpine:3.12
LABEL maintainer="MinIO Inc <dev@min.io>"

Dockerfile.dev.browser (new file, 12 lines)
View File

@@ -0,0 +1,12 @@
FROM ubuntu
LABEL maintainer="MinIO Inc <dev@min.io>"
RUN apt-get update && DEBIAN_FRONTEND=noninteractive apt-get install -y --no-install-recommends --no-install-suggests \
git golang make npm && \
apt-get clean && rm -rf /var/lib/apt/lists/*
ENV PATH=$PATH:/root/go/bin
RUN go get github.com/go-bindata/go-bindata/go-bindata && \
go get github.com/elazarl/go-bindata-assetfs/go-bindata-assetfs

View File

@@ -1,17 +1,11 @@
FROM ubuntu:16.04
FROM ubuntu:18.04
ENV DEBIAN_FRONTEND noninteractive
ENV LANG C.UTF-8
ENV GOROOT /usr/local/go
ENV GOPATH /usr/local
ENV GOPATH /usr/local/gopath
ENV PATH $GOPATH/bin:$GOROOT/bin:$PATH
ENV MINT_ROOT_DIR /mint
COPY mint /mint
RUN apt-get --yes update && apt-get --yes upgrade && \

View File

@@ -0,0 +1,29 @@
FROM multiarch/qemu-user-static:x86_64-ppc64le as qemu
FROM ppc64le/alpine:3.12
LABEL maintainer="MinIO Inc <dev@min.io>"
COPY --from=qemu /usr/bin/qemu-ppc64le-static /usr/bin
ENV MINIO_UPDATE off
ENV MINIO_ACCESS_KEY_FILE=access_key \
MINIO_SECRET_KEY_FILE=secret_key \
MINIO_KMS_MASTER_KEY_FILE=kms_master_key \
MINIO_SSE_MASTER_KEY_FILE=sse_master_key
RUN \
apk add --no-cache ca-certificates 'curl>7.61.0' 'su-exec>=0.2' && \
echo 'hosts: files mdns4_minimal [NOTFOUND=return] dns mdns4' >> /etc/nsswitch.conf && \
curl -s -q https://dl.min.io/server/minio/release/linux-ppc64le/minio -o /usr/bin/minio && \
curl -s -q https://raw.githubusercontent.com/minio/minio/release/dockerscripts/docker-entrypoint.sh -o /usr/bin/docker-entrypoint.sh && \
chmod +x /usr/bin/minio && \
chmod +x /usr/bin/docker-entrypoint.sh && \
curl -s -q -O https://raw.githubusercontent.com/minio/minio/release/CREDITS
EXPOSE 9000
ENTRYPOINT ["/usr/bin/docker-entrypoint.sh"]
VOLUME ["/data"]
CMD ["minio"]

View File

@@ -1,20 +1,7 @@
FROM golang:1.13-alpine
ENV GOPATH /go
ENV CGO_ENABLED 0
ENV GO111MODULE on
RUN \
apk add --no-cache git && \
git clone https://github.com/minio/minio
FROM alpine:3.10
FROM alpine:3.12
LABEL maintainer="MinIO Inc <dev@min.io>"
COPY dockerscripts/docker-entrypoint.sh /usr/bin/
COPY CREDITS /third_party/
ENV MINIO_UPDATE off
ENV MINIO_ACCESS_KEY_FILE=access_key \
MINIO_SECRET_KEY_FILE=secret_key \
@@ -24,9 +11,11 @@ ENV MINIO_ACCESS_KEY_FILE=access_key \
RUN \
apk add --no-cache ca-certificates 'curl>7.61.0' 'su-exec>=0.2' && \
echo 'hosts: files mdns4_minimal [NOTFOUND=return] dns mdns4' >> /etc/nsswitch.conf && \
curl https://dl.min.io/server/minio/release/linux-amd64/minio > /usr/bin/minio && \
chmod +x /usr/bin/minio && \
chmod +x /usr/bin/docker-entrypoint.sh
curl -s -q https://dl.min.io/server/minio/release/linux-amd64/minio -o /usr/bin/minio && \
curl -s -q https://raw.githubusercontent.com/minio/minio/release/dockerscripts/docker-entrypoint.sh -o /usr/bin/docker-entrypoint.sh && \
chmod +x /usr/bin/minio && \
chmod +x /usr/bin/docker-entrypoint.sh && \
curl -s -q -O https://raw.githubusercontent.com/minio/minio/release/CREDITS
EXPOSE 9000

Dockerfile.s390x.release (new file, 29 lines)
View File

@@ -0,0 +1,29 @@
FROM multiarch/qemu-user-static:x86_64-s390x as qemu
FROM s390x/alpine:3.12
LABEL maintainer="MinIO Inc <dev@min.io>"
COPY --from=qemu /usr/bin/qemu-s390x-static /usr/bin
ENV MINIO_UPDATE off
ENV MINIO_ACCESS_KEY_FILE=access_key \
MINIO_SECRET_KEY_FILE=secret_key \
MINIO_KMS_MASTER_KEY_FILE=kms_master_key \
MINIO_SSE_MASTER_KEY_FILE=sse_master_key
RUN \
apk add --no-cache ca-certificates 'curl>7.61.0' 'su-exec>=0.2' && \
echo 'hosts: files mdns4_minimal [NOTFOUND=return] dns mdns4' >> /etc/nsswitch.conf && \
curl -s -q https://dl.min.io/server/minio/release/linux-s390x/minio -o /usr/bin/minio && \
curl -s -q https://raw.githubusercontent.com/minio/minio/release/dockerscripts/docker-entrypoint.sh -o /usr/bin/docker-entrypoint.sh && \
chmod +x /usr/bin/minio && \
chmod +x /usr/bin/docker-entrypoint.sh && \
curl -s -q -O https://raw.githubusercontent.com/minio/minio/release/CREDITS
EXPOSE 9000
ENTRYPOINT ["/usr/bin/docker-entrypoint.sh"]
VOLUME ["/data"]
CMD ["minio"]

View File

@@ -1,80 +0,0 @@
#-------------------------------------------------------------
# Stage 1: Build and Unit tests
#-------------------------------------------------------------
FROM golang:1.13
COPY . /go/src/github.com/minio/minio
WORKDIR /go/src/github.com/minio/minio
RUN apt-get update && apt-get install -y jq
ENV GO111MODULE=on
ENV SIMPLE_CI 1
RUN git config --global http.cookiefile /gitcookie/.gitcookie
RUN apt-get update && \
apt-get -y install sudo
RUN touch /etc/sudoers
RUN echo "root ALL=(ALL) ALL" >> /etc/sudoers
RUN echo "ci ALL=(ALL) NOPASSWD: ALL" >> /etc/sudoers
RUN echo "Defaults env_reset" >> /etc/sudoers
RUN echo 'Defaults secure_path="/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/local/go:/usr/local/go/bin"' >> /etc/sudoers
RUN mkdir -p /home/ci/.cache
RUN groupadd -g 999 ci && \
useradd -r -u 999 -g ci ci && \
chown -R ci:ci /go /home/ci && \
chmod -R a+rw /go
USER ci
# -- tests --
RUN make
RUN bash -c 'diff -au <(gofmt -s -d cmd) <(printf "")'
RUN bash -c 'diff -au <(gofmt -s -d pkg) <(printf "")'
RUN make test-race
RUN make crosscompile
RUN make verify
## -- add healing tests
RUN make verify-healing
#-------------------------------------------------------------
# Stage 2: Test Frontend
#-------------------------------------------------------------
FROM node:10.15-stretch-slim
ENV SIMPLE_CI 1
COPY browser /minio/browser
WORKDIR /minio/browser
RUN yarn
RUN yarn test
#-------------------------------------------------------------
# Stage 3: Run Gateway Tests
#-------------------------------------------------------------
FROM ubuntu:16.04
COPY --from=0 /go/src/github.com/minio/minio/minio /usr/bin/minio
COPY buildscripts/gateway-tests.sh /usr/bin/gateway-tests.sh
COPY mint /mint
ENV DEBIAN_FRONTEND noninteractive
ENV LANG C.UTF-8
ENV GOROOT /usr/local/go
ENV GOPATH /usr/local
ENV PATH $GOPATH/bin:$GOROOT/bin:$PATH
ENV SIMPLE_CI 1
ENV MINT_ROOT_DIR /mint
RUN apt-get --yes update && apt-get --yes upgrade && \
apt-get --yes --quiet install wget jq curl git dnsmasq && \
cd /mint && /mint/release.sh
WORKDIR /mint
RUN /usr/bin/gateway-tests.sh

View File

@@ -8,8 +8,6 @@ GOOS := $(shell go env GOOS)
VERSION ?= $(shell git describe --tags)
TAG ?= "minio/minio:$(VERSION)"
BUILD_LDFLAGS := '$(LDFLAGS)'
all: build
checks:
@@ -18,22 +16,12 @@ checks:
getdeps:
@mkdir -p ${GOPATH}/bin
@which golint 1>/dev/null || (echo "Installing golint" && GO111MODULE=off go get -u golang.org/x/lint/golint)
ifeq ($(GOARCH),s390x)
@which staticcheck 1>/dev/null || (echo "Installing staticcheck" && GO111MODULE=off go get honnef.co/go/tools/cmd/staticcheck)
else
@which staticcheck 1>/dev/null || (echo "Installing staticcheck" && wget --quiet https://github.com/dominikh/go-tools/releases/download/2019.2.3/staticcheck_${GOOS}_${GOARCH}.tar.gz && tar xf staticcheck_${GOOS}_${GOARCH}.tar.gz && mv staticcheck/staticcheck ${GOPATH}/bin/staticcheck && chmod +x ${GOPATH}/bin/staticcheck && rm -f staticcheck_${GOOS}_${GOARCH}.tar.gz && rm -rf staticcheck)
endif
@which misspell 1>/dev/null || (echo "Installing misspell" && GO111MODULE=off go get -u github.com/client9/misspell/cmd/misspell)
@which golangci-lint 1>/dev/null || (echo "Installing golangci-lint" && curl -sSfL https://raw.githubusercontent.com/golangci/golangci-lint/master/install.sh | sh -s -- -b $(GOPATH)/bin v1.27.0)
crosscompile:
@(env bash $(PWD)/buildscripts/cross-compile.sh)
verifiers: getdeps vet fmt lint staticcheck spelling
vet:
@echo "Running $@ check"
@GO111MODULE=on go vet github.com/minio/minio/...
verifiers: getdeps fmt lint
fmt:
@echo "Running $@ check"
@@ -42,21 +30,8 @@ fmt:
lint:
@echo "Running $@ check"
@GO111MODULE=on ${GOPATH}/bin/golint -set_exit_status github.com/minio/minio/cmd/...
@GO111MODULE=on ${GOPATH}/bin/golint -set_exit_status github.com/minio/minio/pkg/...
staticcheck:
@echo "Running $@ check"
@GO111MODULE=on ${GOPATH}/bin/staticcheck github.com/minio/minio/cmd/...
@GO111MODULE=on ${GOPATH}/bin/staticcheck github.com/minio/minio/pkg/...
spelling:
@echo "Running $@ check"
@GO111MODULE=on ${GOPATH}/bin/misspell -locale US -error `find cmd/`
@GO111MODULE=on ${GOPATH}/bin/misspell -locale US -error `find pkg/`
@GO111MODULE=on ${GOPATH}/bin/misspell -locale US -error `find docs/`
@GO111MODULE=on ${GOPATH}/bin/misspell -locale US -error `find buildscripts/`
@GO111MODULE=on ${GOPATH}/bin/misspell -locale US -error `find dockerscripts/`
@GO111MODULE=on ${GOPATH}/bin/golangci-lint cache clean
@GO111MODULE=on ${GOPATH}/bin/golangci-lint run --timeout=5m --config ./.golangci.yml
# Builds minio, runs the verifiers then runs the tests.
check: test
@@ -71,21 +46,23 @@ test-race: verifiers build
# Verify minio binary
verify:
@echo "Verifying build with race"
@GO111MODULE=on CGO_ENABLED=1 go build -race -tags kqueue --ldflags $(BUILD_LDFLAGS) -o $(PWD)/minio 1>/dev/null
@GO111MODULE=on CGO_ENABLED=1 go build -race -tags kqueue -trimpath --ldflags "$(LDFLAGS)" -o $(PWD)/minio 1>/dev/null
@(env bash $(PWD)/buildscripts/verify-build.sh)
# Verify healing of disks with minio binary
verify-healing:
@echo "Verify healing build with race"
@GO111MODULE=on CGO_ENABLED=1 go build -race -tags kqueue --ldflags $(BUILD_LDFLAGS) -o $(PWD)/minio 1>/dev/null
@GO111MODULE=on CGO_ENABLED=1 go build -race -tags kqueue -trimpath --ldflags "$(LDFLAGS)" -o $(PWD)/minio 1>/dev/null
@(env bash $(PWD)/buildscripts/verify-healing.sh)
# Builds minio locally.
build: checks
@echo "Building minio binary to './minio'"
@GO111MODULE=on CGO_ENABLED=0 go build -tags kqueue --ldflags $(BUILD_LDFLAGS) -o $(PWD)/minio 1>/dev/null
@GO111MODULE=on CGO_ENABLED=0 go build -tags kqueue -trimpath --ldflags "$(LDFLAGS)" -o $(PWD)/minio 1>/dev/null
docker: build
docker: checks
@echo "Building minio docker image '$(TAG)'"
@GOOS=linux GO111MODULE=on CGO_ENABLED=0 go build -tags kqueue -trimpath --ldflags "$(LDFLAGS)" -o $(PWD)/minio 1>/dev/null
@docker build -t $(TAG) . -f Dockerfile.dev
# Builds minio and installs it to $GOPATH/bin.
@@ -101,3 +78,4 @@ clean:
@rm -rvf minio
@rm -rvf build
@rm -rvf release
@rm -rvf .verify*

View File

@@ -1,28 +1,32 @@
# MinIO Quickstart Guide
[![Slack](https://slack.min.io/slack?type=svg)](https://slack.min.io) [![Go Report Card](https://goreportcard.com/badge/minio/minio)](https://goreportcard.com/report/minio/minio) [![Docker Pulls](https://img.shields.io/docker/pulls/minio/minio.svg?maxAge=604800)](https://hub.docker.com/r/minio/minio/)
[![Slack](https://slack.min.io/slack?type=svg)](https://slack.min.io) [![Docker Pulls](https://img.shields.io/docker/pulls/minio/minio.svg?maxAge=604800)](https://hub.docker.com/r/minio/minio/)
[![MinIO](https://raw.githubusercontent.com/minio/minio/master/.github/logo.svg?sanitize=true)](https://min.io)
MinIO is High Performance Object Storage released under Apache License v2.0. It is API compatible with Amazon S3 cloud storage service. Using MinIO build high performance infrastructure for machine learning, analytics and application data workloads.
MinIO is a High Performance Object Storage released under Apache License v2.0. It is API compatible with Amazon S3 cloud storage service. Use MinIO to build high performance infrastructure for machine learning, analytics and application data workloads.
## Docker Container
### Stable
```
docker pull minio/minio
docker run -p 9000:9000 minio/minio server /data
docker run -p 9000:9000 \
-e "MINIO_ACCESS_KEY=AKIAIOSFODNN7EXAMPLE" \
-e "MINIO_SECRET_KEY=wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY" \
minio/minio server /data
```
### Edge
```
docker pull minio/minio:edge
docker run -p 9000:9000 minio/minio:edge server /data
docker run -p 9000:9000 \
-e "MINIO_ACCESS_KEY=AKIAIOSFODNN7EXAMPLE" \
-e "MINIO_SECRET_KEY=wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY" \
minio/minio:edge server /data
```
> NOTE: Docker will not display the default keys unless you start the container with the `-it` (interactive TTY) argument. Generally, it is not recommended to use default keys with containers. Please visit the MinIO Docker quickstart guide for more information [here](https://docs.min.io/docs/minio-docker-quickstart-guide)
## macOS
### Homebrew (recommended)
Install minio packages using [Homebrew](http://brew.sh/)
Install minio packages using [Homebrew](https://brew.sh/)
```sh
brew install minio/stable/minio
minio server /data
@@ -94,23 +98,6 @@ GO111MODULE=on go get github.com/minio/minio
By default MinIO uses the port 9000 to listen for incoming connections. If your platform blocks the port by default, you may need to enable access to the port.
### iptables
For hosts with iptables enabled (RHEL, CentOS, etc), you can use the `iptables` command to enable all traffic to specific ports. Use the command below to allow
access to port 9000
```sh
iptables -A INPUT -p tcp --dport 9000 -j ACCEPT
service iptables restart
```
The command below enables all incoming traffic to ports ranging from 9000 to 9010.
```sh
iptables -A INPUT -p tcp --dport 9000:9010 -j ACCEPT
service iptables restart
```
### ufw
For hosts with ufw enabled (Debian based distros), you can use `ufw` command to allow traffic to specific ports. Use below command to allow access to port 9000
@@ -145,6 +132,23 @@ Note that `permanent` makes sure the rules are persistent across firewall start,
firewall-cmd --reload
```
### iptables
For hosts with iptables enabled (RHEL, CentOS, etc), you can use the `iptables` command to enable all traffic to specific ports. Use the command below to allow
access to port 9000
```sh
iptables -A INPUT -p tcp --dport 9000 -j ACCEPT
service iptables restart
```
The command below enables all incoming traffic to ports ranging from 9000 to 9010.
```sh
iptables -A INPUT -p tcp --dport 9000:9010 -j ACCEPT
service iptables restart
```
## Test using MinIO Browser
MinIO Server comes with an embedded web based object browser. Point your web browser to http://127.0.0.1:9000 to ensure your server has started successfully.
@@ -159,20 +163,23 @@ When deployed on a single drive, MinIO server lets clients access any pre-existi
The above statement is also valid for all gateway backends.
## Upgrading MinIO
MinIO server supports rolling upgrades, i.e. you can update one MinIO instance at a time in a distributed cluster. This allows upgrades with no downtime. Upgrades can be done manually by replacing the binary with the latest release and restarting all servers in a rolling fashion. However, we recommend all our users to use [`mc admin update`](https://docs.min.io/docs/minio-admin-complete-guide.html#update) from the client. This will update all the nodes in the cluster and restart them, as shown in the following command from the MinIO client (mc):
MinIO server supports rolling upgrades, i.e. you can update one MinIO instance at a time in a distributed cluster. This allows upgrades with no downtime. Upgrades can be done manually by replacing the binary with the latest release and restarting all servers in a rolling fashion. However, we recommend all our users to use [`mc admin update`](https://docs.min.io/docs/minio-admin-complete-guide.html#update) from the client. This will update all the nodes in the cluster simultaneously and restart them, as shown in the following command from the MinIO client (mc):
```
mc admin update <minio alias, e.g., myminio>
```
**Important things to remember during upgrades**:
> NOTE: some releases might not allow rolling upgrades; this is always called out in the release notes, and it is generally advised to read them before upgrading. In such a situation `mc admin update` is the recommended mechanism to upgrade all servers at once.
### Important things to remember during MinIO upgrades
- `mc admin update` will only work if the user running MinIO has write access to the parent directory where the binary is located; for example, if the current binary is at `/usr/local/bin/minio`, you would need write access to `/usr/local/bin`.
- In the case of federated setups `mc admin update` should be run against each cluster individually. Avoid updating `mc` until all clusters have been updated.
- If you are updating the server, it is always recommended (unless explicitly mentioned in MinIO server release notes) to update `mc` using `mc update` once all the servers have been upgraded.
- `mc admin update` is disabled in docker/container environments; container environments provide their own mechanisms for updating running containers.
- If you are using Vault as KMS with MinIO, ensure you have followed the Vault upgrade procedure outlined here: https://www.vaultproject.io/docs/upgrading/index.html
- If you are using etcd with MinIO for the federation, ensure you have followed the etcd upgrade procedure outlined here: https://github.com/etcd-io/etcd/blob/master/Documentation/upgrades/upgrading-etcd.md
- `mc admin update` updates and restarts all servers simultaneously; applications will retry and continue their respective operations upon upgrade.
- `mc admin update` is disabled in kubernetes/container environments; container environments provide their own mechanisms to roll out updates.
- In the case of federated setups `mc admin update` should be run against each cluster individually. Avoid updating `mc` to any new releases until all clusters have been successfully updated.
- If using `kes` as KMS with MinIO, just replace the binary and restart `kes`; more information about `kes` can be found [here](https://github.com/minio/kes/wiki)
- If using Vault as KMS with MinIO, ensure you have followed the Vault upgrade procedure outlined here: https://www.vaultproject.io/docs/upgrading/index.html
- If using etcd with MinIO for the federation, ensure you have followed the etcd upgrade procedure outlined here: https://github.com/etcd-io/etcd/blob/master/Documentation/upgrades/upgrading-etcd.md
## Explore Further
- [MinIO Erasure Code QuickStart Guide](https://docs.min.io/docs/minio-erasure-code-quickstart-guide)
@@ -185,11 +192,5 @@ mc admin update <minio alias, e.g., myminio>
## Contribute to MinIO Project
Please follow MinIO [Contributor's Guide](https://github.com/minio/minio/blob/master/CONTRIBUTING.md)
## Caveats
MinIO in its default mode doesn't use MD5Sum checksums of incoming streams unless requested by the client in the `Content-Md5` header for validation. This may lead to incompatibility with rare S3 clients like `s3ql` which unfortunately do not set `Content-Md5` but depend on the hex MD5Sum of the stream being calculated by the server. MinIO considers this a bug in `s3ql` that should be fixed on the client side, because MD5Sum is a poor way to checksum and validate the authenticity of objects. MinIO nevertheless provides a workaround until client applications are fixed: use the `--compat` option to start the server.
```sh
./minio --compat server /data
```
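If the client application can be changed instead, a minimal minio-go/v7 sketch shows the preferred alternative to `--compat`. This assumes the `SendContentMd5` field of `PutObjectOptions` (verify against the minio-go release you use) and an existing bucket named `mybucket`:

```go
package main

import (
	"context"
	"log"
	"strings"

	"github.com/minio/minio-go/v7"
	"github.com/minio/minio-go/v7/pkg/credentials"
)

func main() {
	client, err := minio.New("localhost:9000", &minio.Options{
		Creds: credentials.NewStaticV4("minioadmin", "minioadmin", ""),
	})
	if err != nil {
		log.Fatalln(err)
	}

	data := strings.NewReader("hello, world")
	// SendContentMd5 makes the client compute and attach the Content-Md5
	// header, so the server validates the stream without --compat.
	_, err = client.PutObject(context.Background(), "mybucket", "hello.txt",
		data, int64(data.Len()), minio.PutObjectOptions{SendContentMd5: true})
	if err != nil {
		log.Fatalln(err)
	}
}
```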
## License
[![FOSSA Status](https://app.fossa.io/api/projects/git%2Bgithub.com%2Fminio%2Fminio.svg?type=large)](https://app.fossa.io/projects/git%2Bgithub.com%2Fminio%2Fminio?ref=badge_large)

View File

@@ -1,4 +1,4 @@
# MinIO Quickstart Guide [![Slack](https://slack.min.io/slack?type=svg)](https://slack.min.io) [![Go Report Card](https://goreportcard.com/badge/minio/minio)](https://goreportcard.com/report/minio/minio) [![Docker Pulls](https://img.shields.io/docker/pulls/minio/minio.svg?maxAge=604800)](https://hub.docker.com/r/minio/minio/)
# MinIO Quickstart Guide [![Slack](https://slack.min.io/slack?type=svg)](https://slack.min.io) [![Docker Pulls](https://img.shields.io/docker/pulls/minio/minio.svg?maxAge=604800)](https://hub.docker.com/r/minio/minio/)
MinIO is an object storage service released under the Apache License v2.0 open-source license. It is compatible with the Amazon S3 cloud storage service API and is ideally suited for storing large volumes of unstructured data such as pictures, videos, log files, backup data, and container/VM images; an object can be of any size, from a few KB up to a maximum of 5 TB.

View File

@@ -2,6 +2,7 @@
``MinIO Browser`` provides a minimal set of UI to manage buckets and objects on a ``minio`` server. ``MinIO Browser`` is written in JavaScript and released under the [Apache 2.0 License](./LICENSE).
## Installation
### Install node
@@ -11,6 +12,11 @@ exec -l $SHELL
nvm install stable
```
### Install node dependencies
```sh
npm install
```
### Install `go-bindata` and `go-bindata-assetfs`
If you do not have a working Golang environment, please follow [Install Golang](https://golang.org/doc/install)
@@ -30,13 +36,16 @@ npm run release
This generates ui-assets.go in the current directory. Now do `make` in the parent directory to build the minio binary with the newly generated ``ui-assets.go``
## Run MinIO Browser with live reload
### Run MinIO Browser with live reload
```sh
npm run dev
```
Open [http://localhost:8080/minio/](http://localhost:8080/minio/) in your browser to play with the application
Open [http://localhost:8080/minio/](http://localhost:8080/minio/) in your browser to play with the application.
### Run MinIO Browser with live reload on custom port
@@ -54,7 +63,7 @@ index 3ccdaba..9496c56 100644
+ port: 8888,
proxy: {
'/minio/webrpc': {
target: 'http://localhost:9000',
target: 'http://localhost:9000',
@@ -97,7 +98,7 @@ var exports = {
if (process.env.NODE_ENV === 'dev') {
exports.entry = [
@@ -70,4 +79,102 @@ index 3ccdaba..9496c56 100644
npm run dev
```
Open [http://localhost:8888/minio/](http://localhost:8888/minio/) in your browser to play with the application
Open [http://localhost:8888/minio/](http://localhost:8888/minio/) in your browser to play with the application.
### Run MinIO Browser with live reload on any IP
Edit `browser/webpack.config.js`
```diff
diff --git a/browser/webpack.config.js b/browser/webpack.config.js
index 8bdbba53..139f6049 100644
--- a/browser/webpack.config.js
+++ b/browser/webpack.config.js
@@ -71,6 +71,7 @@ var exports = {
historyApiFallback: {
index: '/minio/'
},
+ host: '0.0.0.0',
proxy: {
'/minio/webrpc': {
target: 'http://localhost:9000',
```
```sh
npm run dev
```
Open [http://IP:8080/minio/](http://IP:8080/minio/) in your browser to play with the application.
## Run tests
npm run test
## Docker development environment
This approach will download the sources on your machine such that you are able to use your IDE or editor of choice.
A Docker container will be used in order to provide a controlled build environment without messing with your host system.
### Prepare host system
Install [Git](https://git-scm.com/book/en/v2/Getting-Started-Installing-Git) and [Docker](https://docs.docker.com/get-docker/).
### Development within container
Prepare and build container
```
git clone git@github.com:minio/minio.git
cd minio
docker build -t minio-dev -f Dockerfile.dev.browser .
```
Run container, build and run core
```sh
docker run -it --rm --name minio-dev -v "$PWD":/minio minio-dev
cd /minio/browser
npm install
npm run release
cd /minio
make
./minio server /data
```
Note the `Endpoint` IP (the one that is _not_ `127.0.0.1`) and the `AccessKey` and `SecretKey` (both default to `minioadmin`), so you can enter them in the browser later.
Open another terminal.
Connect to container
```sh
docker exec -it minio-dev bash
```
Apply patch to allow access from outside container
```sh
cd /minio
git apply --ignore-whitespace <<EOF
diff --git a/browser/webpack.config.js b/browser/webpack.config.js
index 8bdbba53..139f6049 100644
--- a/browser/webpack.config.js
+++ b/browser/webpack.config.js
@@ -71,6 +71,7 @@ var exports = {
historyApiFallback: {
index: '/minio/'
},
+ host: '0.0.0.0',
proxy: {
'/minio/webrpc': {
target: 'http://localhost:9000',
EOF
```
Build and run frontend with auto-reload
```sh
cd /minio/browser
npm install
npm run dev
```
Open [http://IP:8080/minio/](http://IP:8080/minio/) in your browser to play with the application.

View File

@@ -88,11 +88,6 @@ export class ChangePasswordModal extends React.Component {
canChangePassword() {
const { serverInfo } = this.props
// Password change is not allowed in WORM mode
if (serverInfo.info.isWorm) {
return false
}
// Password change is not allowed for temporary users(STS)
if(serverInfo.userInfo.isTempUser) {
return false

View File

@@ -26,13 +26,10 @@ export class StorageInfo extends React.Component {
}
render() {
const { used } = this.props.storageInfo
if (!used) {
if (!used || used == 0) {
return <noscript />
}
const totalUsed = used.reduce((v1, v2) => v1 + v2, 0)
return (
<div className="feh-used">
<div className="fehu-chart">
@@ -41,7 +38,7 @@ export class StorageInfo extends React.Component {
<ul>
<li>
<span>Used: </span>
{humanize.filesize(totalUsed)}
{humanize.filesize(used)}
</li>
</ul>
</div>

View File

@@ -64,17 +64,6 @@ describe("ChangePasswordModal", () => {
shallow(<ChangePasswordModal serverInfo={serverInfo} />)
})
it("should not allow changing password when isWorm is true", () => {
const newServerInfo = { ...serverInfo, info: { isWorm: true } }
const wrapper = shallow(<ChangePasswordModal serverInfo={newServerInfo} />)
expect(
wrapper
.find("ModalBody")
.childAt(0)
.text()
).toBe("Credentials of this user cannot be updated through MinIO Browser.")
})
it("should not allow changing password when not IAM user", () => {
const newServerInfo = {
...serverInfo,

View File

@@ -21,7 +21,7 @@ import { StorageInfo } from "../StorageInfo"
describe("StorageInfo", () => {
it("should render without crashing", () => {
shallow(
<StorageInfo storageInfo={{ used: [60] }} fetchStorageInfo={jest.fn()} />
<StorageInfo storageInfo={ {used: 60} } fetchStorageInfo={jest.fn()} />
)
})
@@ -29,7 +29,7 @@ describe("StorageInfo", () => {
const fetchStorageInfo = jest.fn()
shallow(
<StorageInfo
storageInfo={{ used: [60] }}
storageInfo={ {used: 60} }
fetchStorageInfo={fetchStorageInfo}
/>
)
@@ -40,7 +40,7 @@ describe("StorageInfo", () => {
const fetchStorageInfo = jest.fn()
const wrapper = shallow(
<StorageInfo
storageInfo={{ used: null }}
storageInfo={ {used: 0} }
fetchStorageInfo={fetchStorageInfo}
/>
)

View File

@@ -20,7 +20,9 @@ import * as actionsCommon from "../actions"
jest.mock("../../web", () => ({
StorageInfo: jest.fn(() => {
return Promise.resolve({ storageInfo: { Used: [60] } })
return Promise.resolve({
used: 60
})
}),
ServerInfo: jest.fn(() => {
return Promise.resolve({
@@ -39,7 +41,7 @@ describe("Common actions", () => {
it("creates common/SET_STORAGE_INFO after fetching the storage details ", () => {
const store = mockStore()
const expectedActions = [
{ type: "common/SET_STORAGE_INFO", storageInfo: { used: [60] } }
{ type: "common/SET_STORAGE_INFO", storageInfo: { used: 60 } }
]
return store.dispatch(actionsCommon.fetchStorageInfo()).then(() => {
const actions = store.getActions()

View File

@@ -21,11 +21,7 @@ describe("common reducer", () => {
it("should return the initial state", () => {
expect(reducer(undefined, {})).toEqual({
sidebarOpen: false,
storageInfo: {
total: [0],
free: [0],
used: [0]
},
storageInfo: {used: 0},
serverInfo: {}
})
})
@@ -62,11 +58,11 @@ describe("common reducer", () => {
{},
{
type: actionsCommon.SET_STORAGE_INFO,
storageInfo: { total: [100], free: [40] }
storageInfo: { }
}
)
).toEqual({
storageInfo: { total: [100], free: [40] }
storageInfo: { }
})
})

View File

@@ -33,8 +33,7 @@ export const fetchStorageInfo = () => {
return function(dispatch) {
return web.StorageInfo().then(res => {
const storageInfo = {
total: res.storageInfo.Total,
used: res.storageInfo.Used
used: res.used
}
dispatch(setStorageInfo(storageInfo))
})

View File

@@ -19,7 +19,7 @@ import * as actionsCommon from "./actions"
export default (
state = {
sidebarOpen: false,
storageInfo: { total: [0], free: [0], used: [0] },
storageInfo: {used: 0},
serverInfo: {}
},
action

View File

@@ -0,0 +1,24 @@
/*
* MinIO Cloud Storage (C) 2020 MinIO, Inc.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
import { createSelector } from "reselect"
export const getServerInfo = state => state.browser.serverInfo
export const hasServerPublicDomain = createSelector(
getServerInfo,
serverInfo => Boolean(serverInfo.info && serverInfo.info.domains && serverInfo.info.domains.length),
)

View File

@@ -31,19 +31,38 @@ export const SET_POLICIES = "buckets/SET_POLICIES"
export const fetchBuckets = () => {
return function(dispatch) {
const { bucket, prefix } = pathSlice(history.location.pathname)
return web.ListBuckets().then(res => {
const buckets = res.buckets ? res.buckets.map(bucket => bucket.name) : []
dispatch(setList(buckets))
if (buckets.length > 0) {
const { bucket, prefix } = pathSlice(history.location.pathname)
dispatch(setList(buckets))
if (bucket && buckets.indexOf(bucket) > -1) {
dispatch(selectBucket(bucket, prefix))
} else {
dispatch(selectBucket(buckets[0]))
}
} else {
dispatch(selectBucket(""))
history.replace("/")
if (bucket) {
dispatch(setList([bucket]))
dispatch(selectBucket(bucket, prefix))
} else {
dispatch(selectBucket(""))
history.replace("/")
}
}
})
.catch(err => {
if (bucket && err.message === "Access Denied." || err.message.indexOf('Prefix access is denied') > -1 ) {
dispatch(setList([bucket]))
dispatch(selectBucket(bucket, prefix))
} else {
dispatch(
alertActions.set({
type: "danger",
message: err.message,
autoClear: true,
})
)
}
})
}

View File

@@ -14,30 +14,67 @@
* limitations under the License.
*/
import mimedb from 'mime-types'
import mimedb from "mime-types"
const isFolder = (name, contentType) => {
if (name.endsWith('/')) return true
if (name.endsWith("/")) return true
return false
}
const isPdf = (name, contentType) => {
if (contentType === 'application/pdf') return true
if (contentType === "application/pdf") return true
return false
}
const isImage = (name, contentType) => {
if (
contentType === "image/jpeg" ||
contentType === "image/gif" ||
contentType === "image/x-icon" ||
contentType === "image/png" ||
contentType === "image/svg+xml" ||
contentType === "image/tiff" ||
contentType === "image/webp"
)
return true
return false
}
const isZip = (name, contentType) => {
if (!contentType || !contentType.includes('/')) return false
if (contentType.split('/')[1].includes('zip')) return true
if (!contentType || !contentType.includes("/")) return false
if (contentType.split("/")[1].includes("zip")) return true
return false
}
const isCode = (name, contentType) => {
const codeExt = ['c', 'cpp', 'go', 'py', 'java', 'rb', 'js', 'pl', 'fs',
'php', 'css', 'less', 'scss', 'coffee', 'net', 'html',
'rs', 'exs', 'scala', 'hs', 'clj', 'el', 'scm', 'lisp',
'asp', 'aspx']
const ext = name.split('.').reverse()[0]
const codeExt = [
"c",
"cpp",
"go",
"py",
"java",
"rb",
"js",
"pl",
"fs",
"php",
"css",
"less",
"scss",
"coffee",
"net",
"html",
"rs",
"exs",
"scala",
"hs",
"clj",
"el",
"scm",
"lisp",
"asp",
"aspx",
]
const ext = name.split(".").reverse()[0]
for (var i in codeExt) {
if (ext === codeExt[i]) return true
}
@@ -45,9 +82,9 @@ const isCode = (name, contentType) => {
}
const isExcel = (name, contentType) => {
if (!contentType || !contentType.includes('/')) return false
const types = ['excel', 'spreadsheet']
const subType = contentType.split('/')[1]
if (!contentType || !contentType.includes("/")) return false
const types = ["excel", "spreadsheet"]
const subType = contentType.split("/")[1]
for (var i in types) {
if (subType.includes(types[i])) return true
}
@@ -55,9 +92,9 @@ const isExcel = (name, contentType) => {
}
const isDoc = (name, contentType) => {
if (!contentType || !contentType.includes('/')) return false
const types = ['word', '.document']
const subType = contentType.split('/')[1]
if (!contentType || !contentType.includes("/")) return false
const types = ["word", ".document"]
const subType = contentType.split("/")[1]
for (var i in types) {
if (subType.includes(types[i])) return true
}
@@ -65,9 +102,9 @@ const isDoc = (name, contentType) => {
}
const isPresentation = (name, contentType) => {
if (!contentType || !contentType.includes('/')) return false
var types = ['powerpoint', 'presentation']
const subType = contentType.split('/')[1]
if (!contentType || !contentType.includes("/")) return false
var types = ["powerpoint", "presentation"]
const subType = contentType.split("/")[1]
for (var i in types) {
if (subType.includes(types[i])) return true
}
@@ -76,31 +113,32 @@ const isPresentation = (name, contentType) => {
const typeToIcon = (type) => {
return (name, contentType) => {
if (!contentType || !contentType.includes('/')) return false
if (contentType.split('/')[0] === type) return true
if (!contentType || !contentType.includes("/")) return false
if (contentType.split("/")[0] === type) return true
return false
}
}
export const getDataType = (name, contentType) => {
if (contentType === "") {
contentType = mimedb.lookup(name) || 'application/octet-stream'
contentType = mimedb.lookup(name) || "application/octet-stream"
}
const check = [
['folder', isFolder],
['code', isCode],
['audio', typeToIcon('audio')],
['image', typeToIcon('image')],
['video', typeToIcon('video')],
['text', typeToIcon('text')],
['pdf', isPdf],
['zip', isZip],
['excel', isExcel],
['doc', isDoc],
['presentation', isPresentation]
["folder", isFolder],
["code", isCode],
["audio", typeToIcon("audio")],
["image", typeToIcon("image")],
["video", typeToIcon("video")],
["text", typeToIcon("text")],
["pdf", isPdf],
["image", isImage],
["zip", isZip],
["excel", isExcel],
["doc", isDoc],
["presentation", isPresentation],
]
for (var i in check) {
if (check[i][1](name, contentType)) return check[i][0]
}
return 'other'
return "other"
}

View File

@@ -19,18 +19,22 @@ import { connect } from "react-redux"
import { Dropdown } from "react-bootstrap"
import ShareObjectModal from "./ShareObjectModal"
import DeleteObjectConfirmModal from "./DeleteObjectConfirmModal"
import PreviewObjectModal from "./PreviewObjectModal"
import * as objectsActions from "./actions"
import { getDataType } from "../mime.js"
import {
SHARE_OBJECT_EXPIRY_DAYS,
SHARE_OBJECT_EXPIRY_HOURS,
SHARE_OBJECT_EXPIRY_MINUTES
SHARE_OBJECT_EXPIRY_MINUTES,
} from "../constants"
export class ObjectActions extends React.Component {
constructor(props) {
super(props)
this.state = {
showDeleteConfirmation: false
showDeleteConfirmation: false,
showPreview: false,
}
}
shareObject(e) {
@@ -43,6 +47,11 @@ export class ObjectActions extends React.Component {
SHARE_OBJECT_EXPIRY_MINUTES
)
}
handleDownload(e) {
e.preventDefault()
const { object, downloadObject } = this.props
downloadObject(object.name)
}
deleteObject() {
const { object, deleteObject } = this.props
deleteObject(object.name)
@@ -53,7 +62,20 @@ export class ObjectActions extends React.Component {
}
hideDeleteConfirmModal() {
this.setState({
showDeleteConfirmation: false
showDeleteConfirmation: false,
})
}
getObjectURL(objectname, callback) {
const { getObjectURL } = this.props
getObjectURL(objectname, callback)
}
showPreviewModal(e) {
e.preventDefault()
this.setState({ showPreview: true })
}
hidePreviewModal() {
this.setState({
showPreview: false,
})
}
render() {
@@ -65,26 +87,54 @@ export class ObjectActions extends React.Component {
<a
href=""
className="fiad-action"
title="Share"
onClick={this.shareObject.bind(this)}
>
<i className="fas fa-share-alt" />
</a>
{getDataType(object.name, object.contentType) == "image" && (
<a
href=""
className="fiad-action"
title="Preview"
onClick={this.showPreviewModal.bind(this)}
>
<i className="far fa-file-image" />
</a>
)}
<a
href=""
className="fiad-action"
title="Download"
onClick={this.handleDownload.bind(this)}
>
<i className="fas fa-cloud-download-alt" />
</a>
<a
href=""
className="fiad-action"
title="Delete"
onClick={this.showDeleteConfirmModal.bind(this)}
>
<i className="fas fa-trash-alt" />
</a>
</Dropdown.Menu>
{(showShareObjectModal && shareObjectName === object.name) &&
<ShareObjectModal object={object} />}
{showShareObjectModal && shareObjectName === object.name && (
<ShareObjectModal object={object} />
)}
{this.state.showDeleteConfirmation && (
<DeleteObjectConfirmModal
deleteObject={this.deleteObject.bind(this)}
hideDeleteConfirmModal={this.hideDeleteConfirmModal.bind(this)}
/>
)}
{this.state.showPreview && (
<PreviewObjectModal
object={object}
hidePreviewModal={this.hidePreviewModal.bind(this)}
getObjectURL={this.getObjectURL.bind(this)}
/>
)}
</Dropdown>
)
}
@@ -94,15 +144,18 @@ const mapStateToProps = (state, ownProps) => {
return {
object: ownProps.object,
showShareObjectModal: state.objects.shareObject.show,
shareObjectName: state.objects.shareObject.object
shareObjectName: state.objects.shareObject.object,
}
}
const mapDispatchToProps = dispatch => {
const mapDispatchToProps = (dispatch) => {
return {
downloadObject: object => dispatch(objectsActions.downloadObject(object)),
shareObject: (object, days, hours, minutes) =>
dispatch(objectsActions.shareObject(object, days, hours, minutes)),
deleteObject: object => dispatch(objectsActions.deleteObject(object))
deleteObject: (object) => dispatch(objectsActions.deleteObject(object)),
getObjectURL: (object, callback) =>
dispatch(objectsActions.getObjectURL(object, callback)),
}
}

View File

@@ -54,8 +54,11 @@ export const ObjectItem = ({
href={getDataType(name, contentType) === "folder" ? name : "#"}
onClick={e => {
e.preventDefault()
// onclick function is passed only when we have a prefix
if (onClick) {
onClick()
} else {
checked ? uncheckObject(name) : checkObject(name)
}
}}
>

View File

@@ -0,0 +1,95 @@
/*
* MinIO Cloud Storage (C) 2020 MinIO, Inc.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
import React from "react"
import { connect } from "react-redux"
import { Dropdown } from "react-bootstrap"
import DeleteObjectConfirmModal from "./DeleteObjectConfirmModal"
import * as actions from "./actions"
export class PrefixActions extends React.Component {
constructor(props) {
super(props)
this.state = {
showDeleteConfirmation: false,
}
}
handleDownload(e) {
e.preventDefault()
const { object, downloadPrefix } = this.props
downloadPrefix(object.name)
}
deleteObject() {
const { object, deleteObject } = this.props
deleteObject(object.name)
}
showDeleteConfirmModal(e) {
e.preventDefault()
this.setState({ showDeleteConfirmation: true })
}
hideDeleteConfirmModal() {
this.setState({
showDeleteConfirmation: false,
})
}
render() {
const { object, showShareObjectModal, shareObjectName } = this.props
return (
<Dropdown id={`obj-actions-${object.name}`}>
<Dropdown.Toggle noCaret className="fia-toggle" />
<Dropdown.Menu>
<a
href=""
className="fiad-action"
title="Download as zip"
onClick={this.handleDownload.bind(this)}
>
<i className="fas fa-cloud-download-alt" />
</a>
<a
href=""
className="fiad-action"
title="Delete"
onClick={this.showDeleteConfirmModal.bind(this)}
>
<i className="fas fa-trash-alt" />
</a>
</Dropdown.Menu>
{this.state.showDeleteConfirmation && (
<DeleteObjectConfirmModal
deleteObject={this.deleteObject.bind(this)}
hideDeleteConfirmModal={this.hideDeleteConfirmModal.bind(this)}
/>
)}
</Dropdown>
)
}
}
const mapStateToProps = (state, ownProps) => {
return {
object: ownProps.object,
}
}
const mapDispatchToProps = (dispatch) => {
return {
downloadPrefix: object => dispatch(actions.downloadPrefix(object)),
deleteObject: (object) => dispatch(actions.deleteObject(object)),
}
}
export default connect(mapStateToProps, mapDispatchToProps)(PrefixActions)

View File

@@ -17,22 +17,32 @@
import React from "react"
import { connect } from "react-redux"
import ObjectItem from "./ObjectItem"
import PrefixActions from "./PrefixActions"
import * as actionsObjects from "./actions"
import { getCheckedList } from "./selectors"
export const PrefixContainer = ({ object, currentPrefix, selectPrefix }) => {
export const PrefixContainer = ({
object,
currentPrefix,
checkedObjectsCount,
selectPrefix
}) => {
const props = {
name: object.name,
contentType: object.contentType,
onClick: () => selectPrefix(`${currentPrefix}${object.name}`)
}
if (checkedObjectsCount == 0) {
props.actionButtons = <PrefixActions object={object} />
}
return <ObjectItem {...props} />
}
const mapStateToProps = (state, ownProps) => {
return {
object: ownProps.object,
currentPrefix: state.objects.currentPrefix
currentPrefix: state.objects.currentPrefix,
checkedObjectsCount: getCheckedList(state).length
}
}

View File

@@ -0,0 +1,68 @@
/*
* MinIO Cloud Storage (C) 2020 MinIO, Inc.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
import React from "react"
import { Modal, ModalHeader, ModalBody } from "react-bootstrap"
class PreviewObjectModal extends React.Component {
constructor(props) {
super(props)
this.state = {
url: "",
}
}
componentDidMount() {
this.props.getObjectURL(this.props.object.name, (url) => {
this.setState({
url: url,
})
})
}
render() {
const { hidePreviewModal } = this.props
return (
<Modal
show={true}
animation={false}
onHide={hidePreviewModal}
bsSize="large"
>
<ModalHeader>Preview</ModalHeader>
<ModalBody>
<div className="input-group">
{this.state.url && (
<object data={this.state.url}>
<h3 style={{ textAlign: "center", display: "block", width: "100%" }}>
You do not have read permission to preview "{this.props.object.name}"
</h3>
</object>
)}
</div>
</ModalBody>
<div className="modal-footer">
{
<button className="btn btn-link" onClick={hidePreviewModal}>
Cancel
</button>
}
</div>
</Modal>
)
}
}
export default PreviewObjectModal
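Worth noting about the modal above: it stays storage-agnostic by asking its parent for a URL through the getObjectURL callback, and the <h3> inside <object> is the browser's built-in fallback, shown when the URL fails to load (for example, on a 403). A minimal sketch of how a parent could mount it, assuming a showPreview flag and a connected getObjectURL action (the wiring here is illustrative, inferred from the tests elsewhere in this diff):

    {this.state.showPreview && (
      <PreviewObjectModal
        object={object}
        hidePreviewModal={this.hidePreviewModal.bind(this)}
        getObjectURL={this.props.getObjectURL}
      />
    )}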

View File

@@ -77,7 +77,8 @@ export class ShareObjectModal extends React.Component {
hideShareObject()
}
render() {
const { shareObjectDetails, shareObject, hideShareObject } = this.props
const { shareObjectDetails, hideShareObject } = this.props
const url = `${window.location.protocol}//${shareObjectDetails.url}`
return (
<Modal
show={true}
@@ -93,11 +94,12 @@ export class ShareObjectModal extends React.Component {
type="text"
ref={node => (this.copyTextInput = node)}
readOnly="readOnly"
value={window.location.protocol + "//" + shareObjectDetails.url}
value={url}
onClick={() => this.copyTextInput.select()}
/>
</div>
<div
{shareObjectDetails.showExpiryDate && (
<div
className="input-group"
style={{ display: web.LoggedIn() ? "block" : "none" }}
>
@@ -174,10 +176,11 @@ export class ShareObjectModal extends React.Component {
</div>
</div>
</div>
)}
</ModalBody>
<div className="modal-footer">
<CopyToClipboard
text={window.location.protocol + "//" + shareObjectDetails.url}
text={url}
onCopy={this.onUrlCopied.bind(this)}
>
<button className="btn btn-success">Copy Link</button>

View File

@@ -66,6 +66,63 @@ describe("ObjectActions", () => {
expect(deleteObject).toHaveBeenCalledWith("obj1")
})
it("should call downloadObject when single object is selected and download button is clicked", () => {
const downloadObject = jest.fn()
const wrapper = shallow(
<ObjectActions
object={{ name: "obj1" }}
currentPrefix={"pre1/"}
downloadObject={downloadObject} />
)
wrapper
.find("a")
.at(1)
.simulate("click", { preventDefault: jest.fn() })
expect(downloadObject).toHaveBeenCalled()
})
it("should show PreviewObjectModal when preview action is clicked", () => {
const wrapper = shallow(
<ObjectActions
object={{ name: "obj1", contentType: "image/jpeg"}}
currentPrefix={"pre1/"} />
)
wrapper
.find("a")
.at(1)
.simulate("click", { preventDefault: jest.fn() })
expect(wrapper.state("showPreview")).toBeTruthy()
expect(wrapper.find("PreviewObjectModal").length).toBe(1)
})
it("should hide PreviewObjectModal when cancel button is clicked", () => {
const wrapper = shallow(
<ObjectActions
object={{ name: "obj1" , contentType: "image/jpeg"}}
currentPrefix={"pre1/"} />
)
wrapper
.find("a")
.at(1)
.simulate("click", { preventDefault: jest.fn() })
wrapper.find("PreviewObjectModal").prop("hidePreviewModal")()
wrapper.update()
expect(wrapper.state("showPreview")).toBeFalsy()
expect(wrapper.find("PreviewObjectModal").length).toBe(0)
})
it("should not show PreviewObjectModal when preview action is clicked if object is not an image", () => {
const wrapper = shallow(
<ObjectActions
object={{ name: "obj1"}}
currentPrefix={"pre1/"} />
)
expect(wrapper
.find("a")
.length).toBe(3) // only the 3 non-preview actions are rendered
})
it("should call shareObject with object and expiry", () => {
const shareObject = jest.fn()
const wrapper = shallow(

View File

@@ -30,7 +30,10 @@ describe("ObjectItem", () => {
it("shouldn't call onClick when the object isclicked", () => {
const onClick = jest.fn()
const wrapper = shallow(<ObjectItem name={"test"} />)
const checkObject = jest.fn()
const wrapper = shallow(
<ObjectItem name={"test"} checkObject={checkObject} />
)
wrapper.find("a").simulate("click", { preventDefault: jest.fn() })
expect(onClick).not.toHaveBeenCalled()
})
@@ -57,9 +60,15 @@ describe("ObjectItem", () => {
})
it("should call uncheckObject when the object/prefix is unchecked", () => {
const checkObject = jest.fn()
const uncheckObject = jest.fn()
const wrapper = shallow(
<ObjectItem name={"test"} checked={true} uncheckObject={uncheckObject} />
<ObjectItem
name={"test"}
checked={true}
checkObject={checkObject}
uncheckObject={uncheckObject}
/>
)
wrapper.find("input[type='checkbox']").simulate("change")
expect(uncheckObject).toHaveBeenCalledWith("test")

View File

@@ -0,0 +1,84 @@
/*
* MinIO Cloud Storage (C) 2018 MinIO, Inc.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
import React from "react"
import { shallow } from "enzyme"
import { PrefixActions } from "../PrefixActions"
describe("PrefixActions", () => {
it("should render without crashing", () => {
shallow(<PrefixActions object={{ name: "abc/" }} currentPrefix={"pre1/"} />)
})
it("should show DeleteObjectConfirmModal when delete action is clicked", () => {
const wrapper = shallow(
<PrefixActions object={{ name: "abc/" }} currentPrefix={"pre1/"} />
)
wrapper
.find("a")
.last()
.simulate("click", { preventDefault: jest.fn() })
expect(wrapper.state("showDeleteConfirmation")).toBeTruthy()
expect(wrapper.find("DeleteObjectConfirmModal").length).toBe(1)
})
it("should hide DeleteObjectConfirmModal when Cancel button is clicked", () => {
const wrapper = shallow(
<PrefixActions object={{ name: "abc/" }} currentPrefix={"pre1/"} />
)
wrapper
.find("a")
.last()
.simulate("click", { preventDefault: jest.fn() })
wrapper.find("DeleteObjectConfirmModal").prop("hideDeleteConfirmModal")()
wrapper.update()
expect(wrapper.state("showDeleteConfirmation")).toBeFalsy()
expect(wrapper.find("DeleteObjectConfirmModal").length).toBe(0)
})
it("should call deleteObject with object name", () => {
const deleteObject = jest.fn()
const wrapper = shallow(
<PrefixActions
object={{ name: "abc/" }}
currentPrefix={"pre1/"}
deleteObject={deleteObject}
/>
)
wrapper
.find("a")
.last()
.simulate("click", { preventDefault: jest.fn() })
wrapper.find("DeleteObjectConfirmModal").prop("deleteObject")()
expect(deleteObject).toHaveBeenCalledWith("abc/")
})
it("should call downloadPrefix when single object is selected and download button is clicked", () => {
const downloadPrefix = jest.fn()
const wrapper = shallow(
<PrefixActions
object={{ name: "abc/" }}
currentPrefix={"pre1/"}
downloadPrefix={downloadPrefix} />
)
wrapper
.find("a")
.first()
.simulate("click", { preventDefault: jest.fn() })
expect(downloadPrefix).toHaveBeenCalled()
})
})

View File

@@ -41,4 +41,22 @@ describe("PrefixContainer", () => {
wrapper.find("Connect(ObjectItem)").prop("onClick")()
expect(selectPrefix).toHaveBeenCalledWith("xyz/abc/")
})
it("should pass actions to ObjectItem", () => {
const wrapper = shallow(
<PrefixContainer object={{ name: "abc/" }} checkedObjectsCount={0} />
)
expect(wrapper.find("Connect(ObjectItem)").prop("actionButtons")).not.toBe(
undefined
)
})
it("should pass empty actions to ObjectItem when checkedObjectCount is more than 0", () => {
const wrapper = shallow(
<PrefixContainer object={{ name: "abc/" }} checkedObjectsCount={1} />
)
expect(wrapper.find("Connect(ObjectItem)").prop("actionButtons")).toBe(
undefined
)
})
})

View File

@@ -34,7 +34,7 @@ describe("ShareObjectModal", () => {
shallow(
<ShareObjectModal
object={{ name: "obj1" }}
shareObjectDetails={{ show: true, object: "obj1", url: "test" }}
shareObjectDetails={{ show: true, object: "obj1", url: "test", showExpiryDate: true }}
/>
)
})
@@ -44,7 +44,7 @@ describe("ShareObjectModal", () => {
const wrapper = shallow(
<ShareObjectModal
object={{ name: "obj1" }}
shareObjectDetails={{ show: true, object: "obj1", url: "test" }}
shareObjectDetails={{ show: true, object: "obj1", url: "test", showExpiryDate: true }}
hideShareObject={hideShareObject}
/>
)
@@ -59,7 +59,7 @@ describe("ShareObjectModal", () => {
const wrapper = shallow(
<ShareObjectModal
object={{ name: "obj1" }}
shareObjectDetails={{ show: true, object: "obj1", url: "test" }}
shareObjectDetails={{ show: true, object: "obj1", url: "test", showExpiryDate: true }}
/>
)
expect(
@@ -76,7 +76,7 @@ describe("ShareObjectModal", () => {
const wrapper = shallow(
<ShareObjectModal
object={{ name: "obj1" }}
shareObjectDetails={{ show: true, object: "obj1", url: "test" }}
shareObjectDetails={{ show: true, object: "obj1", url: "test", showExpiryDate: true }}
hideShareObject={hideShareObject}
showCopyAlert={showCopyAlert}
/>
@@ -89,8 +89,15 @@ describe("ShareObjectModal", () => {
describe("Update expiry values", () => {
const props = {
object: { name: "obj1" },
shareObjectDetails: { show: true, object: "obj1", url: "test" }
shareObjectDetails: { show: true, object: "obj1", url: "test", showExpiryDate: true }
}
it("should not show expiry values if shared with public link", () => {
const shareObjectDetails = { show: true, object: "obj1", url: "test", showExpiryDate: false }
const wrapper = shallow(<ShareObjectModal {...props} shareObjectDetails={shareObjectDetails} />)
expect(wrapper.find('.set-expire').exists()).toEqual(false)
})
it("should have default expiry values", () => {
const wrapper = shallow(<ShareObjectModal {...props} />)
expect(wrapper.state("expiry")).toEqual({

View File

@@ -34,6 +34,7 @@ jest.mock("../../web", () => ({
.mockReturnValueOnce(false)
.mockReturnValueOnce(true)
.mockReturnValueOnce(true)
.mockReturnValueOnce(true)
.mockReturnValueOnce(false),
ListObjects: jest.fn(({ bucketName }) => {
if (bucketName === "test-deny") {
@@ -70,6 +71,16 @@ jest.mock("../../web", () => ({
.mockImplementationOnce(() => {
return Promise.resolve({ token: "test" })
})
.mockImplementationOnce(() => {
return Promise.resolve({ token: "test" })
}),
GetBucketPolicy: jest.fn(({ bucketName, prefix }) => {
if (!bucketName) {
return Promise.reject({ message: "Invalid bucket" })
}
if (bucketName === 'test-public') return Promise.resolve({ policy: 'readonly' })
return Promise.resolve({})
})
}))
const middlewares = [thunk]
@@ -295,7 +306,8 @@ describe("Objects actions", () => {
type: "objects/SET_SHARE_OBJECT",
show: true,
object: "b.txt",
url: "test"
url: "test",
showExpiryDate: true
}
]
store.dispatch(actionsObjects.showShareObject("b.txt", "test"))
@@ -321,14 +333,16 @@ describe("Objects actions", () => {
it("creates objects/SET_SHARE_OBJECT when object is shared", () => {
const store = mockStore({
buckets: { currentBucket: "bk1" },
objects: { currentPrefix: "pre1/" }
objects: { currentPrefix: "pre1/" },
browser: { serverInfo: {} },
})
const expectedActions = [
{
type: "objects/SET_SHARE_OBJECT",
show: true,
object: "a.txt",
url: "https://test.com/bk1/pre1/b.txt"
url: "https://test.com/bk1/pre1/b.txt",
showExpiryDate: true
},
{
type: "alert/SET",
@@ -347,10 +361,42 @@ describe("Objects actions", () => {
})
})
it("creates objects/SET_SHARE_OBJECT when object is shared with public link", () => {
const store = mockStore({
buckets: { currentBucket: "test-public" },
objects: { currentPrefix: "pre1/" },
browser: { serverInfo: { info: { domains: ['public.com'] }} },
})
const expectedActions = [
{
type: "objects/SET_SHARE_OBJECT",
show: true,
object: "a.txt",
url: "public.com/test-public/pre1/a.txt",
showExpiryDate: false
},
{
type: "alert/SET",
alert: {
type: "success",
message: "Object shared.",
id: alertActions.alertId
}
}
]
return store
.dispatch(actionsObjects.shareObject("a.txt", 1, 0, 0))
.then(() => {
const actions = store.getActions()
expect(actions).toEqual(expectedActions)
})
})
it("creates alert/SET when shareObject is failed", () => {
const store = mockStore({
buckets: { currentBucket: "" },
objects: { currentPrefix: "pre1/" }
objects: { currentPrefix: "pre1/" },
browser: { serverInfo: {} },
})
const expectedActions = [
{
@@ -442,6 +488,34 @@ describe("Objects actions", () => {
})
})
it("should download prefix", () => {
const open = jest.fn()
const send = jest.fn()
const xhrMockClass = () => ({
open: open,
send: send
})
window.XMLHttpRequest = jest.fn().mockImplementation(xhrMockClass)
const store = mockStore({
buckets: { currentBucket: "bk1" },
objects: { currentPrefix: "pre1/" }
})
return store.dispatch(actionsObjects.downloadPrefix("pre2/")).then(() => {
const requestUrl = `${
location.origin
}${minioBrowserPrefix}/zip?token=test`
expect(open).toHaveBeenCalledWith("POST", requestUrl, true)
expect(send).toHaveBeenCalledWith(
JSON.stringify({
bucketName: "bk1",
prefix: "pre1/",
objects: ["pre2/"]
})
)
})
})
it("creates objects/CHECKED_LIST_ADD action", () => {
const store = mockStore()
const expectedActions = [

View File

@@ -19,20 +19,20 @@ import history from "../history"
import {
sortObjectsByName,
sortObjectsBySize,
sortObjectsByDate
sortObjectsByDate,
} from "../utils"
import { getCurrentBucket } from "../buckets/selectors"
import { getCurrentPrefix, getCheckedList } from "./selectors"
import * as alertActions from "../alert/actions"
import * as bucketActions from "../buckets/actions"
import {
minioBrowserPrefix,
SORT_BY_NAME,
SORT_BY_SIZE,
SORT_BY_LAST_MODIFIED,
SORT_ORDER_ASC,
SORT_ORDER_DESC
SORT_ORDER_DESC,
} from "../constants"
import { getServerInfo, hasServerPublicDomain } from '../browser/selectors'
export const SET_LIST = "objects/SET_LIST"
export const RESET_LIST = "objects/RESET_LIST"
@@ -48,35 +48,35 @@ export const CHECKED_LIST_REMOVE = "objects/CHECKED_LIST_REMOVE"
export const CHECKED_LIST_RESET = "objects/CHECKED_LIST_RESET"
export const SET_LIST_LOADING = "objects/SET_LIST_LOADING"
export const setList = objects => ({
export const setList = (objects) => ({
type: SET_LIST,
objects
objects,
})
export const resetList = () => ({
type: RESET_LIST
type: RESET_LIST,
})
export const setListLoading = listLoading => ({
export const setListLoading = (listLoading) => ({
type: SET_LIST_LOADING,
listLoading
listLoading,
})
export const fetchObjects = () => {
return function(dispatch, getState) {
return function (dispatch, getState) {
dispatch(resetList())
const {
buckets: { currentBucket },
objects: { currentPrefix }
objects: { currentPrefix },
} = getState()
if (currentBucket) {
dispatch(setListLoading(true))
return web
.ListObjects({
bucketName: currentBucket,
prefix: currentPrefix
prefix: currentPrefix,
})
.then(res => {
.then((res) => {
// we need to check if the bucket name and prefix are the same as
// when the request was made before updating the displayed objects
if (
@@ -85,10 +85,10 @@ export const fetchObjects = () => {
) {
let objects = []
if (res.objects) {
objects = res.objects.map(object => {
objects = res.objects.map((object) => {
return {
...object,
name: object.name.replace(currentPrefix, "")
name: object.name.replace(currentPrefix, ""),
}
})
}
@@ -104,13 +104,13 @@ export const fetchObjects = () => {
dispatch(setListLoading(false))
}
})
.catch(err => {
.catch((err) => {
if (web.LoggedIn()) {
dispatch(
alertActions.set({
type: "danger",
message: err.message,
autoClear: true
autoClear: true,
})
)
dispatch(resetList())
@@ -123,8 +123,8 @@ export const fetchObjects = () => {
}
}
export const sortObjects = sortBy => {
return function(dispatch, getState) {
export const sortObjects = (sortBy) => {
return function (dispatch, getState) {
const { objects } = getState()
let sortOrder = SORT_ORDER_ASC
// Reverse sort order if the list is already sorted on same field
@@ -149,18 +149,18 @@ const sortObjectsList = (list, sortBy, sortOrder) => {
}
}
export const setSortBy = sortBy => ({
export const setSortBy = (sortBy) => ({
type: SET_SORT_BY,
sortBy
sortBy,
})
export const setSortOrder = sortOrder => ({
export const setSortOrder = (sortOrder) => ({
type: SET_SORT_ORDER,
sortOrder
sortOrder,
})
export const selectPrefix = prefix => {
return function(dispatch, getState) {
export const selectPrefix = (prefix) => {
return function (dispatch, getState) {
dispatch(setCurrentPrefix(prefix))
dispatch(fetchObjects())
dispatch(resetCheckedList())
@@ -169,49 +169,49 @@ export const selectPrefix = prefix => {
}
}
export const setCurrentPrefix = prefix => {
export const setCurrentPrefix = (prefix) => {
return {
type: SET_CURRENT_PREFIX,
prefix
prefix,
}
}
export const setPrefixWritable = prefixWritable => ({
export const setPrefixWritable = (prefixWritable) => ({
type: SET_PREFIX_WRITABLE,
prefixWritable
prefixWritable,
})
export const deleteObject = object => {
return function(dispatch, getState) {
export const deleteObject = (object) => {
return function (dispatch, getState) {
const currentBucket = getCurrentBucket(getState())
const currentPrefix = getCurrentPrefix(getState())
const objectName = `${currentPrefix}${object}`
return web
.RemoveObject({
bucketName: currentBucket,
objects: [objectName]
objects: [objectName],
})
.then(() => {
dispatch(removeObject(object))
})
.catch(e => {
.catch((e) => {
dispatch(
alertActions.set({
type: "danger",
message: e.message
message: e.message,
})
)
})
}
}
export const removeObject = object => ({
export const removeObject = (object) => ({
type: REMOVE,
object
object,
})
export const deleteCheckedObjects = () => {
return function(dispatch, getState) {
return function (dispatch, getState) {
const checkedObjects = getCheckedList(getState())
for (let i = 0; i < checkedObjects.length; i++) {
dispatch(deleteObject(checkedObjects[i]))
@@ -221,33 +221,52 @@ export const deleteCheckedObjects = () => {
}
export const shareObject = (object, days, hours, minutes) => {
return function(dispatch, getState) {
return function (dispatch, getState) {
const hasServerDomain = hasServerPublicDomain(getState())
const currentBucket = getCurrentBucket(getState())
const currentPrefix = getCurrentPrefix(getState())
const objectName = `${currentPrefix}${object}`
const expiry = days * 24 * 60 * 60 + hours * 60 * 60 + minutes * 60
if (web.LoggedIn()) {
return web
.PresignedGet({
host: location.host,
bucket: currentBucket,
object: objectName,
expiry: expiry
.GetBucketPolicy({ bucketName: currentBucket, prefix: currentPrefix })
.catch(() => ({ policy: null }))
.then(({ policy }) => {
if (hasServerDomain && ['readonly', 'readwrite'].includes(policy)) {
const domain = getServerInfo(getState()).info.domains[0]
const url = `${domain}/${currentBucket}/${encodeURI(objectName)}`
dispatch(showShareObject(object, url, false))
dispatch(
alertActions.set({
type: "success",
message: "Object shared."
})
)
} else {
return web
.PresignedGet({
host: location.host,
bucket: currentBucket,
object: objectName,
expiry: expiry
})
}
})
.then(obj => {
.then((obj) => {
if (!obj) return
dispatch(showShareObject(object, obj.url))
dispatch(
alertActions.set({
type: "success",
message: `Object shared. Expires in ${days} days ${hours} hours ${minutes} minutes`
message: `Object shared. Expires in ${days} days ${hours} hours ${minutes} minutes`,
})
)
})
.catch(err => {
.catch((err) => {
dispatch(
alertActions.set({
type: "danger",
message: err.message
message: err.message,
})
)
})
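The net effect of the reworked thunk: if the server advertises a public domain and GetBucketPolicy reports the bucket as readonly or readwrite, the share dialog gets a stable public URL with the expiry controls hidden; any other policy, or a failed policy lookup (swallowed by the catch above), falls through to the presigned path. Using the fixtures from the action tests earlier in this diff:

    // bucket "test-public" (policy "readonly"), domain "public.com":
    //   showShareObject("a.txt", "public.com/test-public/pre1/a.txt", false)
    // any other bucket:
    //   showShareObject("a.txt", obj.url) // presigned; showExpiryDate defaults to true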
@@ -265,29 +284,29 @@ export const shareObject = (object, days, hours, minutes) => {
dispatch(
alertActions.set({
type: "success",
message: `Object shared.`
message: `Object shared.`,
})
)
}
}
}
export const showShareObject = (object, url) => ({
export const showShareObject = (object, url, showExpiryDate = true) => ({
type: SET_SHARE_OBJECT,
show: true,
object,
url
url,
showExpiryDate,
})
export const hideShareObject = (object, url) => ({
type: SET_SHARE_OBJECT,
show: false,
object: "",
url: ""
url: "",
})
export const downloadObject = object => {
return function(dispatch, getState) {
export const getObjectURL = (object, callback) => {
return function (dispatch, getState) {
const currentBucket = getCurrentBucket(getState())
const currentPrefix = getCurrentPrefix(getState())
const objectName = `${currentPrefix}${object}`
@@ -295,78 +314,119 @@ export const downloadObject = object => {
if (web.LoggedIn()) {
return web
.CreateURLToken()
.then(res => {
const url = `${
window.location.origin
}${minioBrowserPrefix}/download/${currentBucket}/${encObjectName}?token=${
res.token
}`
window.location = url
.then((res) => {
const url = `${window.location.origin}${minioBrowserPrefix}/download/${currentBucket}/${encObjectName}?token=${res.token}`
callback(url)
})
.catch(err => {
.catch((err) => {
dispatch(
alertActions.set({
type: "danger",
message: err.message
message: err.message,
})
)
})
} else {
const url = `${
window.location.origin
}${minioBrowserPrefix}/download/${currentBucket}/${encObjectName}?token=`
const url = `${window.location.origin}${minioBrowserPrefix}/download/${currentBucket}/${encObjectName}?token=`
callback(url)
}
}
}
export const downloadObject = (object) => {
return function (dispatch, getState) {
const currentBucket = getCurrentBucket(getState())
const currentPrefix = getCurrentPrefix(getState())
const objectName = `${currentPrefix}${object}`
const encObjectName = encodeURI(objectName)
if (web.LoggedIn()) {
return web
.CreateURLToken()
.then((res) => {
const url = `${window.location.origin}${minioBrowserPrefix}/download/${currentBucket}/${encObjectName}?token=${res.token}`
window.location = url
})
.catch((err) => {
dispatch(
alertActions.set({
type: "danger",
message: err.message,
})
)
})
} else {
const url = `${window.location.origin}${minioBrowserPrefix}/download/${currentBucket}/${encObjectName}?token=`
window.location = url
}
}
}
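The old download logic is now split in two: getObjectURL resolves a tokenized URL and hands it to a callback, which is what lets PreviewObjectModal embed the object in-page, while downloadObject keeps the original navigate-and-save behaviour. From a caller's point of view (illustrative):

    // preview: obtain the URL without leaving the page
    dispatch(getObjectURL("a.txt", (url) => this.setState({ url })))
    // download: navigate so the browser saves the file
    dispatch(downloadObject("a.txt"))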
export const checkObject = object => ({
type: CHECKED_LIST_ADD,
object
})
export const uncheckObject = object => ({
type: CHECKED_LIST_REMOVE,
object
})
export const resetCheckedList = () => ({
type: CHECKED_LIST_RESET
})
export const downloadCheckedObjects = () => {
return function(dispatch, getState) {
const state = getState()
const req = {
bucketName: getCurrentBucket(state),
prefix: getCurrentPrefix(state),
objects: getCheckedList(state)
}
if (!web.LoggedIn()) {
const requestUrl = location.origin + "/minio/zip?token="
downloadZip(requestUrl, req, dispatch)
} else {
return web
.CreateURLToken()
.then(res => {
const requestUrl = `${
location.origin
}${minioBrowserPrefix}/zip?token=${res.token}`
downloadZip(requestUrl, req, dispatch)
})
.catch(err =>
dispatch(
alertActions.set({
type: "danger",
message: err.message
})
)
)
}
export const downloadPrefix = (object) => {
return function (dispatch, getState) {
return downloadObjects(
getCurrentBucket(getState()),
getCurrentPrefix(getState()),
[object],
`${object.slice(0, -1)}.zip`,
dispatch
)
}
}
const downloadZip = (url, req, dispatch) => {
export const checkObject = (object) => ({
type: CHECKED_LIST_ADD,
object,
})
export const uncheckObject = (object) => ({
type: CHECKED_LIST_REMOVE,
object,
})
export const resetCheckedList = () => ({
type: CHECKED_LIST_RESET,
})
export const downloadCheckedObjects = () => {
return function (dispatch, getState) {
return downloadObjects(
getCurrentBucket(getState()),
getCurrentPrefix(getState()),
getCheckedList(getState()),
null,
dispatch
)
}
}
const downloadObjects = (bucketName, prefix, objects, filename, dispatch) => {
const req = {
bucketName: bucketName,
prefix: prefix,
objects: objects,
}
if (web.LoggedIn()) {
return web
.CreateURLToken()
.then((res) => {
const requestUrl = `${location.origin}${minioBrowserPrefix}/zip?token=${res.token}`
downloadZip(requestUrl, req, filename, dispatch)
})
.catch((err) =>
dispatch(
alertActions.set({
type: "danger",
message: err.message,
})
)
)
} else {
const requestUrl = `${location.origin}${minioBrowserPrefix}/zip?token=`
downloadZip(requestUrl, req, filename, dispatch)
}
}
const downloadZip = (url, req, filename, dispatch) => {
var anchor = document.createElement("a")
document.body.appendChild(anchor)
@@ -374,17 +434,17 @@ const downloadZip = (url, req, dispatch) => {
xhr.open("POST", url, true)
xhr.responseType = "blob"
xhr.onload = function(e) {
xhr.onload = function (e) {
if (this.status == 200) {
dispatch(resetCheckedList())
var blob = new Blob([this.response], {
type: "octet/stream"
type: "octet/stream",
})
var blobUrl = window.URL.createObjectURL(blob)
var separator = req.prefix.length > 1 ? "-" : ""
anchor.href = blobUrl
anchor.download =
anchor.download = filename ||
req.bucketName + separator + req.prefix.slice(0, -1) + ".zip"
anchor.click()
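With filename threaded through downloadObjects into downloadZip, prefix downloads get a predictable archive name, while bulk downloads keep the old bucket-plus-prefix fallback. Tracing the code above (inputs illustrative):

    // downloadPrefix("pre2/")                    -> anchor.download === "pre2.zip"
    // downloadCheckedObjects() in bk1 at "pre1/" -> filename is null, so "bk1-pre1.zip"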

View File

@@ -89,7 +89,8 @@ export default (
shareObject: {
show: action.show,
object: action.object,
url: action.url
url: action.url,
showExpiryDate: action.showExpiryDate
}
}
case actionsObjects.CHECKED_LIST_ADD:

View File

@@ -48,18 +48,29 @@ export class Dropzone extends React.Component {
const rejectStyle = {
backgroundColor: "#ffdddd"
}
const getStyle = (isDragActive, isDragAccept, isDragReject) => ({
...style,
...(isDragActive ? activeStyle : {}),
...(isDragReject ? rejectStyle : {})
})
// Clicks inside the dropzone shouldn't open the file picker; the old
// disableClick prop is gone, so click propagation is stopped on the
// root props instead (see onClick below).
return (
<ReactDropzone
style={style}
activeStyle={activeStyle}
rejectStyle={rejectStyle}
disableClick={true}
onDrop={this.onDrop.bind(this)}
>
{this.props.children}
{({getRootProps, getInputProps, isDragActive, isDragAccept, isDragReject}) => (
<div
{...getRootProps({
onClick: event => event.stopPropagation()
})}
style={getStyle(isDragActive, isDragAccept, isDragReject)}
>
<input {...getInputProps()} />
{this.props.children}
</div>
)}
</ReactDropzone>
)
}
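This rewrite tracks the react-dropzone upgrade in package.json below (^4.2.3 to ^11.0.1): the style/activeStyle/rejectStyle and disableClick props are gone in favour of a render prop exposing getRootProps/getInputProps plus drag-state booleans, which getStyle merges back into one inline style. A stripped-down sketch of the same pattern (style objects assumed defined):

    <ReactDropzone onDrop={(files) => console.log(files)}>
      {({ getRootProps, getInputProps, isDragActive }) => (
        <div {...getRootProps()} style={isDragActive ? activeStyle : style}>
          <input {...getInputProps()} />
          Drop files here
        </div>
      )}
    </ReactDropzone>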

View File

@@ -89,11 +89,16 @@ export const uploadFile = file => {
return
}
const currentPrefix = getCurrentPrefix(state)
const objectName = `${currentPrefix}${file.name}`
var _filePath = file.path || file.name
if (_filePath.charAt(0) == '/') {
_filePath = _filePath.substring(1)
}
const filePath = _filePath
const objectName = `${currentPrefix}${filePath}`
const uploadUrl = `${
window.location.origin
}${minioBrowserPrefix}/upload/${currentBucket}/${objectName}`
const slug = `${currentBucket}-${currentPrefix}-${file.name}`
const slug = `${currentBucket}-${currentPrefix}-${filePath}`
let xhr = new XMLHttpRequest()
xhr.open("PUT", uploadUrl, true)
@@ -141,7 +146,7 @@ export const uploadFile = file => {
dispatch(
alertActions.set({
type: "success",
message: "File '" + file.name + "' uploaded successfully."
message: "File '" + filePath + "' uploaded successfully."
})
)
dispatch(objectsActions.selectPrefix(currentPrefix))
@@ -153,7 +158,7 @@ export const uploadFile = file => {
dispatch(
alertActions.set({
type: "danger",
message: "Error occurred uploading '" + file.name + "'."
message: "Error occurred uploading '" + filePath + "'."
})
)
})
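The switch from file.name to file.path matters for folder drag-and-drop: react-dropzone reports directory entries with a path like "/dir1/a.txt", and stripping the leading slash keeps the object key relative to the current prefix. With currentPrefix "pre1/" (values illustrative):

    // dropped folder entry: file.path === "/dir1/a.txt" -> objectName "pre1/dir1/a.txt"
    // plain file picker:    file.name === "b.txt"       -> objectName "pre1/b.txt"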

View File

@@ -105,7 +105,7 @@ div.fesl-row {
.fesl-item-name {
a {
cursor: default;
cursor: pointer;
}
}
@@ -114,12 +114,6 @@ div.fesl-row {
----------------------------*/
&[data-type=folder] {
.list-type(#a1d6dd, '\f07b');
.fesl-item-name {
a {
cursor: pointer;
}
}
}
&[data-type=pdf] {.list-type(#fa7775, '\f1c1'); }
&[data-type=zip] { .list-type(#427089, '\f1c6'); }

browser/package-lock.json generated (10,690 changed lines)

File diff suppressed because it is too large

View File

@@ -28,71 +28,70 @@
},
"homepage": "https://github.com/minio/minio",
"devDependencies": {
"async": "^1.5.2",
"async": "^3.2.0",
"babel-cli": "^6.26.0",
"babel-core": "^6.26.3",
"babel-jest": "^22.1.0",
"babel-jest": "^23.6.0",
"babel-loader": "^7.1.2",
"babel-plugin-syntax-object-rest-spread": "^6.13.0",
"babel-plugin-transform-object-rest-spread": "^6.8.0",
"babel-polyfill": "^6.23.0",
"babel-plugin-transform-object-rest-spread": "^6.26.0",
"babel-polyfill": "^6.26.0",
"babel-preset-es2015": "^6.14.0",
"babel-preset-react": "^6.11.1",
"babel-register": "^6.26.0",
"copy-webpack-plugin": "^4.6.0",
"css-loader": "^0.23.1",
"enzyme": "^3.10.0",
"enzyme-adapter-react-16": "^1.1.1",
"esformatter": "^0.10.0",
"esformatter-jsx": "^7.4.1",
"copy-webpack-plugin": "^6.0.1",
"css-loader": "^3.5.3",
"enzyme": "^3.11.0",
"enzyme-adapter-react-16": "^1.15.2",
"esformatter": "^0.11.3",
"esformatter-jsx": "^8.0.1",
"esformatter-jsx-ignore": "^1.0.6",
"html-webpack-plugin": "^3.2.0",
"jest": "^22.1.4",
"jest-enzyme": "^4.0.2",
"json-loader": "^0.5.4",
"less": "^3.9.0",
"less-loader": "^4.1.0",
"purgecss-webpack-plugin": "^1.4.0",
"style-loader": "^0.13.1",
"url-loader": "^0.5.7",
"webpack-cli": "^3.2.0",
"webpack-dev-server": "^3.1.14"
"html-webpack-plugin": "^4.3.0",
"jest": "^23.6.0",
"jest-enzyme": "^7.1.2",
"json-loader": "^0.5.7",
"less": "^3.11.1",
"less-loader": "^6.1.0",
"purgecss-webpack-plugin": "^2.2.0",
"style-loader": "^1.2.1",
"url-loader": "^4.1.0",
"webpack-cli": "^3.3.11",
"webpack-dev-server": "^3.11.0"
},
"dependencies": {
"@fortawesome/fontawesome-free": "^5.10.0",
"@fortawesome/fontawesome-free": "^5.13.0",
"bootstrap": "^3.4.1",
"classnames": "^2.2.3",
"core-js": "^3.2.1",
"expect": "^1.20.2",
"glob-all": "^3.1.0",
"history": "^4.7.2",
"classnames": "^2.2.6",
"core-js": "^3.6.5",
"expect": "^26.0.1",
"glob-all": "^3.2.1",
"history": "^4.10.1",
"humanize": "0.0.9",
"identity-obj-proxy": "^3.0.0",
"json-loader": "^0.5.4",
"jwt-decode": "^2.2.0",
"local-storage-fallback": "^4.0.2",
"local-storage-fallback": "^4.1.1",
"material-design-iconic-font": "^2.2.0",
"mime-db": "^1.25.0",
"mime-types": "^2.1.13",
"moment": "^2.24.0",
"query-string": "^6.8.2",
"react": "^16.2.0",
"react-addons-test-utils": "^0.14.8",
"react-bootstrap": "^0.32.0",
"react-copy-to-clipboard": "^5.0.1",
"mime-db": "^1.44.0",
"mime-types": "^2.1.27",
"moment": "^2.26.0",
"query-string": "^6.12.1",
"react": "^16.13.1",
"react-addons-test-utils": "^15.6.2",
"react-bootstrap": "^0.32.4",
"react-copy-to-clipboard": "^5.0.2",
"react-custom-scrollbars": "^4.2.1",
"react-dom": "^16.2.0",
"react-dropzone": "^4.2.3",
"react-infinite-scroller": "^1.0.6",
"react-dom": "^16.13.1",
"react-dropzone": "^11.0.1",
"react-infinite-scroller": "^1.2.4",
"react-onclickout": "^2.0.8",
"react-redux": "^5.0.6",
"react-router-dom": "^4.2.0",
"redux": "^3.7.2",
"redux-mock-store": "^1.5.1",
"redux-thunk": "^2.2.0",
"reselect": "^3.0.1",
"superagent": "^3.8.2",
"react-redux": "^5.1.2",
"react-router-dom": "^5.2.0",
"redux": "^4.0.5",
"redux-mock-store": "^1.5.4",
"redux-thunk": "^2.3.0",
"reselect": "^4.0.0",
"superagent": "^5.2.2",
"superagent-es6-promise": "^1.0.0",
"webpack": "^4.28.3"
"webpack": "^4.43.0"
}
}

browser/staticcheck.conf (new file, 1 line)
View File

@@ -0,0 +1 @@
checks = ["all", "-ST1005", "-ST1000", "-SA4000", "-SA9004", "-SA1019", "-SA1008", "-U1000", "-ST1003", "-ST1018"]

File diff suppressed because one or more lines are too long

View File

@@ -73,17 +73,17 @@ var exports = {
},
proxy: {
'/minio/webrpc': {
target: 'http://localhost:9000',
secure: false,
headers: {'Host': "localhost:9000"}
target: 'http://localhost:9000',
secure: false,
headers: {'Host': "localhost:9000"}
},
'/minio/upload/*': {
target: 'http://localhost:9000',
secure: false
target: 'http://localhost:9000',
secure: false
},
'/minio/download/*': {
target: 'http://localhost:9000',
secure: false
target: 'http://localhost:9000',
secure: false
},
'/minio/zip': {
target: 'http://localhost:9000',
@@ -92,7 +92,7 @@ var exports = {
}
},
plugins: [
new CopyWebpackPlugin([
new CopyWebpackPlugin({patterns: [
{from: 'app/css/loader.css'},
{from: 'app/img/browsers/chrome.png'},
{from: 'app/img/browsers/firefox.png'},
@@ -102,7 +102,7 @@ var exports = {
{from: 'app/img/favicon/favicon-32x32.png'},
{from: 'app/img/favicon/favicon-96x96.png'},
{from: 'app/index.html'}
]),
]}),
new webpack.ContextReplacementPlugin(/moment[\\\/]locale$/, /^\.\/(en)$/),
new PurgecssPlugin({
paths: glob.sync([

View File

@@ -67,7 +67,7 @@ var exports = {
fs:'empty'
},
plugins: [
new CopyWebpackPlugin([
new CopyWebpackPlugin({patterns: [
{from: 'app/css/loader.css'},
{from: 'app/img/browsers/chrome.png'},
{from: 'app/img/browsers/firefox.png'},
@@ -77,7 +77,7 @@ var exports = {
{from: 'app/img/favicon/favicon-32x32.png'},
{from: 'app/img/favicon/favicon-96x96.png'},
{from: 'app/index.html'}
]),
]}),
new webpack.ContextReplacementPlugin(/moment[\\\/]locale$/, /^\.\/(en)$/),
new PurgecssPlugin({
paths: glob.sync([
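Both webpack configs pick up the copy-webpack-plugin 6.x breaking change: the plugin now takes an options object with a patterns array rather than a bare array. Side by side (sketch):

    // copy-webpack-plugin >= 6
    new CopyWebpackPlugin({ patterns: [{ from: 'app/index.html' }] })
    // copy-webpack-plugin <= 5 (previous form)
    new CopyWebpackPlugin([{ from: 'app/index.html' }])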

View File

@@ -9,7 +9,7 @@ function _init() {
export CGO_ENABLED=0
## List of architectures and OS to test cross compilation.
SUPPORTED_OSARCH="linux/ppc64le linux/arm64 linux/s390x darwin/amd64 freebsd/amd64 windows/amd64 linux/arm linux/386"
SUPPORTED_OSARCH="linux/ppc64le linux/arm64 linux/s390x darwin/amd64 freebsd/amd64 windows/amd64 linux/arm linux/386 netbsd/amd64"
}
function _build() {
@@ -20,11 +20,11 @@ function _build() {
package=$(go list -f '{{.ImportPath}}')
printf -- "--> %15s:%s\n" "${osarch}" "${package}"
# Go build to build the binary.
# go build -trimpath to build the binary.
export GOOS=$os
export GOARCH=$arch
export GO111MODULE=on
go build -tags kqueue -o /dev/null
go build -trimpath -tags kqueue -o /dev/null
}
function main() {

View File

@@ -46,7 +46,10 @@ function main()
gw_pid="$(start_minio_gateway_s3)"
SERVER_ENDPOINT=127.0.0.1:24240 ENABLE_HTTPS=0 ACCESS_KEY=minio \
SECRET_KEY=minio123 MINT_MODE="full" /mint/entrypoint.sh
SECRET_KEY=minio123 MINT_MODE="full" /mint/entrypoint.sh \
aws-sdk-go aws-sdk-java aws-sdk-php aws-sdk-ruby awscli \
healthcheck mc minio-dotnet minio-js \
minio-py s3cmd s3select security
rv=$?
kill "$sr_pid"

View File

@@ -3,5 +3,5 @@
set -e
for d in $(go list ./... | grep -v browser); do
CGO_ENABLED=1 go test -v -race --timeout 20m "$d"
CGO_ENABLED=1 go test -v -race --timeout 50m "$d"
done

View File

@@ -154,34 +154,6 @@ function run_test_erasure_sets() {
return "$rv"
}
function run_test_dist_erasure_sets_ipv6()
{
minio_pids=( $(start_minio_dist_erasure_sets_ipv6) )
export SERVER_ENDPOINT="[::1]:9000"
(cd "$WORK_DIR" && "$FUNCTIONAL_TESTS")
rv=$?
for pid in "${minio_pids[@]}"; do
kill "$pid"
done
sleep 3
if [ "$rv" -ne 0 ]; then
for i in $(seq 0 9); do
echo "server$i log:"
cat "$WORK_DIR/dist-minio-v6-900$i.log"
done
fi
for i in $(seq 0 9); do
rm -f "$WORK_DIR/dist-minio-v6-900$i.log"
done
return "$rv"
}
function run_test_zone_erasure_sets()
{
minio_pids=( $(start_minio_zone_erasure_sets) )

View File

@@ -30,33 +30,50 @@ MINIO=( "$PWD/minio" --config-dir "$MINIO_CONFIG_DIR" server )
function start_minio_3_node() {
declare -a minio_pids
declare -a ARGS
export MINIO_ACCESS_KEY=minio
export MINIO_SECRET_KEY=minio123
export MINIO_ERASURE_SET_DRIVE_COUNT=6
start_port=$(shuf -i 10000-65000 -n 1)
for i in $(seq 1 3); do
ARGS+=("http://127.0.0.1:$[8000+$i]${WORK_DIR}/$i/1/ http://127.0.0.1:$[8000+$i]${WORK_DIR}/$i/2/ http://127.0.0.1:$[8000+$i]${WORK_DIR}/$i/3/ http://127.0.0.1:$[8000+$i]${WORK_DIR}/$i/4/ http://127.0.0.1:$[8000+$i]${WORK_DIR}/$i/5/ http://127.0.0.1:$[8000+$i]${WORK_DIR}/$i/6/")
ARGS+=("http://127.0.0.1:$[$start_port+$i]${WORK_DIR}/$i/1/ http://127.0.0.1:$[$start_port+$i]${WORK_DIR}/$i/2/ http://127.0.0.1:$[$start_port+$i]${WORK_DIR}/$i/3/ http://127.0.0.1:$[$start_port+$i]${WORK_DIR}/$i/4/ http://127.0.0.1:$[$start_port+$i]${WORK_DIR}/$i/5/ http://127.0.0.1:$[$start_port+$i]${WORK_DIR}/$i/6/")
done
"${MINIO[@]}" --address ":8001" ${ARGS[@]} > "${WORK_DIR}/dist-minio-8001.log" 2>&1 &
"${MINIO[@]}" --address ":$[$start_port+1]" ${ARGS[@]} > "${WORK_DIR}/dist-minio-server1.log" 2>&1 &
minio_pids[0]=$!
disown "${minio_pids[0]}"
"${MINIO[@]}" --address ":8002" ${ARGS[@]} > "${WORK_DIR}/dist-minio-8002.log" 2>&1 &
"${MINIO[@]}" --address ":$[$start_port+2]" ${ARGS[@]} > "${WORK_DIR}/dist-minio-server2.log" 2>&1 &
minio_pids[1]=$!
disown "${minio_pids[1]}"
"${MINIO[@]}" --address ":8003" ${ARGS[@]} > "${WORK_DIR}/dist-minio-8003.log" 2>&1 &
"${MINIO[@]}" --address ":$[$start_port+3]" ${ARGS[@]} > "${WORK_DIR}/dist-minio-server3.log" 2>&1 &
minio_pids[2]=$!
disown "${minio_pids[2]}"
sleep "$1"
echo "${minio_pids[@]}"
for pid in "${minio_pids[@]}"; do
if ! kill "$pid"; then
for i in $(seq 1 3); do
echo "server$i log:"
cat "${WORK_DIR}/dist-minio-server$i.log"
done
echo "FAILED"
purge "$WORK_DIR"
exit 1
fi
# force-kill so the test run can proceed cleanly.
kill -9 "$pid"
sleep 1 # wait 1sec per pid
done
}
function check_online() {
for i in $(seq 1 3); do
if grep -q 'Server switching to safe mode' ${WORK_DIR}/dist-minio-$[8000+$i].log; then
echo "1"
fi
done
if grep -q 'Server switching to safe mode' ${WORK_DIR}/dist-minio-*.log; then
echo "1"
fi
}
function purge()
@@ -75,50 +92,21 @@ function __init__()
}
function perform_test() {
minio_pids=( $(start_minio_3_node 60) )
for pid in "${minio_pids[@]}"; do
if ! kill "$pid"; then
for i in $(seq 1 3); do
echo "server$i log:"
cat "${WORK_DIR}/dist-minio-$[8000+$i].log"
done
echo "FAILED"
purge "$WORK_DIR"
exit 1
fi
# forcibly killing, to proceed further properly.
kill -9 "$pid"
done
start_minio_3_node 60
echo "Testing Distributed Erasure setup healing of drives"
echo "Remove the contents of the disks belonging to '${1}' erasure set"
rm -rf ${WORK_DIR}/${1}/*/
minio_pids=( $(start_minio_3_node 60) )
for pid in "${minio_pids[@]}"; do
if ! kill "$pid"; then
for i in $(seq 1 3); do
echo "server$i log:"
cat "${WORK_DIR}/dist-minio-$[8000+$i].log"
done
echo "FAILED"
purge "$WORK_DIR"
exit 1
fi
# forcibly killing, to proceed further properly.
# if the previous kill is taking time.
kill -9 "$pid"
done
start_minio_3_node 60
rv=$(check_online)
if [ "$rv" == "1" ]; then
for pid in "${minio_pids[@]}"; do
kill -9 "$pid"
done
pkill -9 minio
for i in $(seq 1 3); do
echo "server$i log:"
cat "${WORK_DIR}/dist-minio-$[8000+$i].log"
cat "${WORK_DIR}/dist-minio-server$i.log"
done
echo "FAILED"
purge "$WORK_DIR"

View File

@@ -100,18 +100,18 @@ func (api objectAPIHandlers) PutBucketACLHandler(w http.ResponseWriter, r *http.
}
if len(acl.AccessControlList.Grants) == 0 {
writeErrorResponse(ctx, w, toAPIError(ctx, errInvalidArgument), r.URL, guessIsBrowserReq(r))
writeErrorResponse(ctx, w, toAPIError(ctx, NotImplemented{}), r.URL, guessIsBrowserReq(r))
return
}
if acl.AccessControlList.Grants[0].Permission != "FULL_CONTROL" {
writeErrorResponse(ctx, w, toAPIError(ctx, errInvalidArgument), r.URL, guessIsBrowserReq(r))
writeErrorResponse(ctx, w, toAPIError(ctx, NotImplemented{}), r.URL, guessIsBrowserReq(r))
return
}
}
if aclHeader != "" && aclHeader != "private" {
writeErrorResponse(ctx, w, toAPIError(ctx, errInvalidArgument), r.URL, guessIsBrowserReq(r))
writeErrorResponse(ctx, w, toAPIError(ctx, NotImplemented{}), r.URL, guessIsBrowserReq(r))
return
}
@@ -159,6 +159,7 @@ func (api objectAPIHandlers) GetBucketACLHandler(w http.ResponseWriter, r *http.
},
Permission: "FULL_CONTROL",
})
if err := xml.NewEncoder(w).Encode(acl); err != nil {
writeErrorResponse(ctx, w, toAPIError(ctx, err), r.URL, guessIsBrowserReq(r))
return
@@ -214,18 +215,18 @@ func (api objectAPIHandlers) PutObjectACLHandler(w http.ResponseWriter, r *http.
}
if len(acl.AccessControlList.Grants) == 0 {
writeErrorResponse(ctx, w, toAPIError(ctx, errInvalidArgument), r.URL, guessIsBrowserReq(r))
writeErrorResponse(ctx, w, toAPIError(ctx, NotImplemented{}), r.URL, guessIsBrowserReq(r))
return
}
if acl.AccessControlList.Grants[0].Permission != "FULL_CONTROL" {
writeErrorResponse(ctx, w, toAPIError(ctx, errInvalidArgument), r.URL, guessIsBrowserReq(r))
writeErrorResponse(ctx, w, toAPIError(ctx, NotImplemented{}), r.URL, guessIsBrowserReq(r))
return
}
}
if aclHeader != "" && aclHeader != "private" {
writeErrorResponse(ctx, w, toAPIError(ctx, errInvalidArgument), r.URL, guessIsBrowserReq(r))
writeErrorResponse(ctx, w, toAPIError(ctx, NotImplemented{}), r.URL, guessIsBrowserReq(r))
return
}

View File

@@ -35,50 +35,47 @@ import (
"github.com/minio/minio/cmd/config/storageclass"
"github.com/minio/minio/cmd/crypto"
"github.com/minio/minio/cmd/logger"
"github.com/minio/minio/pkg/auth"
iampolicy "github.com/minio/minio/pkg/iam/policy"
"github.com/minio/minio/pkg/madmin"
)
func validateAdminReqConfigKV(ctx context.Context, w http.ResponseWriter, r *http.Request) ObjectLayer {
func validateAdminReqConfigKV(ctx context.Context, w http.ResponseWriter, r *http.Request) (auth.Credentials, ObjectLayer) {
// Get current object layer instance.
objectAPI := newObjectLayerWithoutSafeModeFn()
if objectAPI == nil {
writeErrorResponseJSON(ctx, w, errorCodes.ToAPIErr(ErrServerNotInitialized), r.URL)
return nil
return auth.Credentials{}, nil
}
// Validate request signature.
_, adminAPIErr := checkAdminRequestAuthType(ctx, r, iampolicy.ConfigUpdateAdminAction, "")
cred, adminAPIErr := checkAdminRequestAuthType(ctx, r, iampolicy.ConfigUpdateAdminAction, "")
if adminAPIErr != ErrNone {
writeErrorResponseJSON(ctx, w, errorCodes.ToAPIErr(adminAPIErr), r.URL)
return nil
return cred, nil
}
return objectAPI
return cred, objectAPI
}
// DelConfigKVHandler - DELETE /minio/admin/v2/del-config-kv
// DelConfigKVHandler - DELETE /minio/admin/v3/del-config-kv
func (a adminAPIHandlers) DelConfigKVHandler(w http.ResponseWriter, r *http.Request) {
ctx := newContext(r, w, "DelConfigKVHandler")
ctx := newContext(r, w, "DeleteConfigKV")
objectAPI := validateAdminReqConfigKV(ctx, w, r)
defer logger.AuditLog(w, r, "DeleteConfigKV", mustGetClaimsFromToken(r))
cred, objectAPI := validateAdminReqConfigKV(ctx, w, r)
if objectAPI == nil {
return
}
// Deny if WORM is enabled
if globalWORMEnabled {
writeErrorResponseJSON(ctx, w, errorCodes.ToAPIErr(ErrMethodNotAllowed), r.URL)
return
}
if r.ContentLength > maxEConfigJSONSize || r.ContentLength == -1 {
// More than maxConfigSize bytes were available
writeErrorResponseJSON(ctx, w, errorCodes.ToAPIErr(ErrAdminConfigTooLarge), r.URL)
return
}
password := globalActiveCred.SecretKey
password := cred.SecretKey
kvBytes, err := madmin.DecryptData(password, io.LimitReader(r.Body, r.ContentLength))
if err != nil {
logger.LogIf(ctx, err, logger.Application)
@@ -103,28 +100,24 @@ func (a adminAPIHandlers) DelConfigKVHandler(w http.ResponseWriter, r *http.Requ
}
}
// SetConfigKVHandler - PUT /minio/admin/v2/set-config-kv
// SetConfigKVHandler - PUT /minio/admin/v3/set-config-kv
func (a adminAPIHandlers) SetConfigKVHandler(w http.ResponseWriter, r *http.Request) {
ctx := newContext(r, w, "SetConfigKVHandler")
ctx := newContext(r, w, "SetConfigKV")
objectAPI := validateAdminReqConfigKV(ctx, w, r)
defer logger.AuditLog(w, r, "SetConfigKV", mustGetClaimsFromToken(r))
cred, objectAPI := validateAdminReqConfigKV(ctx, w, r)
if objectAPI == nil {
return
}
// Deny if WORM is enabled
if globalWORMEnabled {
writeErrorResponseJSON(ctx, w, errorCodes.ToAPIErr(ErrMethodNotAllowed), r.URL)
return
}
if r.ContentLength > maxEConfigJSONSize || r.ContentLength == -1 {
// More than maxConfigSize bytes were available
writeErrorResponseJSON(ctx, w, errorCodes.ToAPIErr(ErrAdminConfigTooLarge), r.URL)
return
}
password := globalActiveCred.SecretKey
password := cred.SecretKey
kvBytes, err := madmin.DecryptData(password, io.LimitReader(r.Body, r.ContentLength))
if err != nil {
logger.LogIf(ctx, err, logger.Application)
@@ -162,23 +155,25 @@ func (a adminAPIHandlers) SetConfigKVHandler(w http.ResponseWriter, r *http.Requ
// Make sure to write backend is encrypted
if globalConfigEncrypted {
saveConfig(context.Background(), objectAPI, backendEncryptedFile, backendEncryptedMigrationComplete)
saveConfig(GlobalContext, objectAPI, backendEncryptedFile, backendEncryptedMigrationComplete)
}
writeSuccessResponseHeadersOnly(w)
}
// GetConfigKVHandler - GET /minio/admin/v2/get-config-kv?key={key}
// GetConfigKVHandler - GET /minio/admin/v3/get-config-kv?key={key}
func (a adminAPIHandlers) GetConfigKVHandler(w http.ResponseWriter, r *http.Request) {
ctx := newContext(r, w, "GetConfigKVHandler")
ctx := newContext(r, w, "GetConfigKV")
objectAPI := validateAdminReqConfigKV(ctx, w, r)
defer logger.AuditLog(w, r, "GetConfigKV", mustGetClaimsFromToken(r))
cred, objectAPI := validateAdminReqConfigKV(ctx, w, r)
if objectAPI == nil {
return
}
cfg := globalServerConfig
if globalSafeMode {
if newObjectLayerFn() == nil {
var err error
cfg, err = getValidConfig(objectAPI)
if err != nil {
@@ -195,7 +190,7 @@ func (a adminAPIHandlers) GetConfigKVHandler(w http.ResponseWriter, r *http.Requ
return
}
password := globalActiveCred.SecretKey
password := cred.SecretKey
econfigData, err := madmin.EncryptData(password, buf.Bytes())
if err != nil {
writeErrorResponseJSON(ctx, w, toAdminAPIErr(ctx, err), r.URL)
@@ -206,9 +201,11 @@ func (a adminAPIHandlers) GetConfigKVHandler(w http.ResponseWriter, r *http.Requ
}
func (a adminAPIHandlers) ClearConfigHistoryKVHandler(w http.ResponseWriter, r *http.Request) {
ctx := newContext(r, w, "ClearConfigHistoryKVHandler")
ctx := newContext(r, w, "ClearConfigHistoryKV")
objectAPI := validateAdminReqConfigKV(ctx, w, r)
defer logger.AuditLog(w, r, "ClearConfigHistoryKV", mustGetClaimsFromToken(r))
_, objectAPI := validateAdminReqConfigKV(ctx, w, r)
if objectAPI == nil {
return
}
@@ -241,9 +238,11 @@ func (a adminAPIHandlers) ClearConfigHistoryKVHandler(w http.ResponseWriter, r *
// RestoreConfigHistoryKVHandler - restores a config with KV settings for the given KV id.
func (a adminAPIHandlers) RestoreConfigHistoryKVHandler(w http.ResponseWriter, r *http.Request) {
ctx := newContext(r, w, "RestoreConfigHistoryKVHandler")
ctx := newContext(r, w, "RestoreConfigHistoryKV")
objectAPI := validateAdminReqConfigKV(ctx, w, r)
defer logger.AuditLog(w, r, "RestoreConfigHistoryKV", mustGetClaimsFromToken(r))
_, objectAPI := validateAdminReqConfigKV(ctx, w, r)
if objectAPI == nil {
return
}
@@ -287,9 +286,11 @@ func (a adminAPIHandlers) RestoreConfigHistoryKVHandler(w http.ResponseWriter, r
// ListConfigHistoryKVHandler - lists all the KV ids.
func (a adminAPIHandlers) ListConfigHistoryKVHandler(w http.ResponseWriter, r *http.Request) {
ctx := newContext(r, w, "ListConfigHistoryKVHandler")
ctx := newContext(r, w, "ListConfigHistoryKV")
objectAPI := validateAdminReqConfigKV(ctx, w, r)
defer logger.AuditLog(w, r, "ListConfigHistoryKV", mustGetClaimsFromToken(r))
cred, objectAPI := validateAdminReqConfigKV(ctx, w, r)
if objectAPI == nil {
return
}
@@ -313,7 +314,7 @@ func (a adminAPIHandlers) ListConfigHistoryKVHandler(w http.ResponseWriter, r *h
return
}
password := globalActiveCred.SecretKey
password := cred.SecretKey
econfigData, err := madmin.EncryptData(password, data)
if err != nil {
writeErrorResponseJSON(ctx, w, toAdminAPIErr(ctx, err), r.URL)
@@ -323,11 +324,13 @@ func (a adminAPIHandlers) ListConfigHistoryKVHandler(w http.ResponseWriter, r *h
writeSuccessResponseJSON(w, econfigData)
}
// HelpConfigKVHandler - GET /minio/admin/v2/help-config-kv?subSys={subSys}&key={key}
// HelpConfigKVHandler - GET /minio/admin/v3/help-config-kv?subSys={subSys}&key={key}
func (a adminAPIHandlers) HelpConfigKVHandler(w http.ResponseWriter, r *http.Request) {
ctx := newContext(r, w, "HelpConfigKVHandler")
ctx := newContext(r, w, "HelpConfigKV")
objectAPI := validateAdminReqConfigKV(ctx, w, r)
defer logger.AuditLog(w, r, "HelpHistoryKV", mustGetClaimsFromToken(r))
_, objectAPI := validateAdminReqConfigKV(ctx, w, r)
if objectAPI == nil {
return
}
@@ -349,28 +352,24 @@ func (a adminAPIHandlers) HelpConfigKVHandler(w http.ResponseWriter, r *http.Req
w.(http.Flusher).Flush()
}
// SetConfigHandler - PUT /minio/admin/v2/config
// SetConfigHandler - PUT /minio/admin/v3/config
func (a adminAPIHandlers) SetConfigHandler(w http.ResponseWriter, r *http.Request) {
ctx := newContext(r, w, "SetConfigHandler")
ctx := newContext(r, w, "SetConfig")
objectAPI := validateAdminReqConfigKV(ctx, w, r)
defer logger.AuditLog(w, r, "SetConfig", mustGetClaimsFromToken(r))
cred, objectAPI := validateAdminReqConfigKV(ctx, w, r)
if objectAPI == nil {
return
}
// Deny if WORM is enabled
if globalWORMEnabled {
writeErrorResponseJSON(ctx, w, errorCodes.ToAPIErr(ErrMethodNotAllowed), r.URL)
return
}
if r.ContentLength > maxEConfigJSONSize || r.ContentLength == -1 {
// More than maxConfigSize bytes were available
writeErrorResponseJSON(ctx, w, errorCodes.ToAPIErr(ErrAdminConfigTooLarge), r.URL)
return
}
password := globalActiveCred.SecretKey
password := cred.SecretKey
kvBytes, err := madmin.DecryptData(password, io.LimitReader(r.Body, r.ContentLength))
if err != nil {
logger.LogIf(ctx, err, logger.Application)
@@ -403,18 +402,20 @@ func (a adminAPIHandlers) SetConfigHandler(w http.ResponseWriter, r *http.Reques
// Make sure to write backend is encrypted
if globalConfigEncrypted {
saveConfig(context.Background(), objectAPI, backendEncryptedFile, backendEncryptedMigrationComplete)
saveConfig(GlobalContext, objectAPI, backendEncryptedFile, backendEncryptedMigrationComplete)
}
writeSuccessResponseHeadersOnly(w)
}
// GetConfigHandler - GET /minio/admin/v2/config
// GetConfigHandler - GET /minio/admin/v3/config
// Get config.json of this minio setup.
func (a adminAPIHandlers) GetConfigHandler(w http.ResponseWriter, r *http.Request) {
ctx := newContext(r, w, "GetConfigHandler")
ctx := newContext(r, w, "GetConfig")
objectAPI := validateAdminReqConfigKV(ctx, w, r)
defer logger.AuditLog(w, r, "GetConfig", mustGetClaimsFromToken(r))
cred, objectAPI := validateAdminReqConfigKV(ctx, w, r)
if objectAPI == nil {
return
}
@@ -471,7 +472,7 @@ func (a adminAPIHandlers) GetConfigHandler(w http.ResponseWriter, r *http.Reques
}
}
password := globalActiveCred.SecretKey
password := cred.SecretKey
econfigData, err := madmin.EncryptData(password, []byte(s.String()))
if err != nil {
writeErrorResponseJSON(ctx, w, toAdminAPIErr(ctx, err), r.URL)

View File

@@ -1,5 +1,5 @@
/*
* MinIO Cloud Storage, (C) 2019 MinIO, Inc.
* MinIO Cloud Storage, (C) 2019-2020 MinIO, Inc.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
@@ -51,21 +51,17 @@ func validateAdminUsersReq(ctx context.Context, w http.ResponseWriter, r *http.R
return objectAPI, cred
}
// RemoveUser - DELETE /minio/admin/v2/remove-user?accessKey=<access_key>
// RemoveUser - DELETE /minio/admin/v3/remove-user?accessKey=<access_key>
func (a adminAPIHandlers) RemoveUser(w http.ResponseWriter, r *http.Request) {
ctx := newContext(r, w, "RemoveUser")
defer logger.AuditLog(w, r, "RemoveUser", mustGetClaimsFromToken(r))
objectAPI, _ := validateAdminUsersReq(ctx, w, r, iampolicy.DeleteUserAdminAction)
if objectAPI == nil {
return
}
// Deny if WORM is enabled
if globalWORMEnabled {
writeErrorResponseJSON(ctx, w, errorCodes.ToAPIErr(ErrMethodNotAllowed), r.URL)
return
}
vars := mux.Vars(r)
accessKey := vars["accessKey"]
@@ -93,15 +89,19 @@ func (a adminAPIHandlers) RemoveUser(w http.ResponseWriter, r *http.Request) {
}
}
// ListUsers - GET /minio/admin/v2/list-users
// ListUsers - GET /minio/admin/v3/list-users
func (a adminAPIHandlers) ListUsers(w http.ResponseWriter, r *http.Request) {
ctx := newContext(r, w, "ListUsers")
objectAPI, _ := validateAdminUsersReq(ctx, w, r, iampolicy.ListUsersAdminAction)
defer logger.AuditLog(w, r, "ListUsers", mustGetClaimsFromToken(r))
objectAPI, cred := validateAdminUsersReq(ctx, w, r, iampolicy.ListUsersAdminAction)
if objectAPI == nil {
return
}
password := cred.SecretKey
allCredentials, err := globalIAMSys.ListUsers()
if err != nil {
writeErrorResponseJSON(ctx, w, toAdminAPIErr(ctx, err), r.URL)
@@ -114,7 +114,6 @@ func (a adminAPIHandlers) ListUsers(w http.ResponseWriter, r *http.Request) {
return
}
password := globalActiveCred.SecretKey
econfigData, err := madmin.EncryptData(password, data)
if err != nil {
writeErrorResponseJSON(ctx, w, toAdminAPIErr(ctx, err), r.URL)
@@ -124,10 +123,12 @@ func (a adminAPIHandlers) ListUsers(w http.ResponseWriter, r *http.Request) {
writeSuccessResponseJSON(w, econfigData)
}
// GetUserInfo - GET /minio/admin/v2/user-info
// GetUserInfo - GET /minio/admin/v3/user-info
func (a adminAPIHandlers) GetUserInfo(w http.ResponseWriter, r *http.Request) {
ctx := newContext(r, w, "GetUserInfo")
defer logger.AuditLog(w, r, "GetUserInfo", mustGetClaimsFromToken(r))
objectAPI, _ := validateAdminUsersReq(ctx, w, r, iampolicy.GetUserAdminAction)
if objectAPI == nil {
return
@@ -151,10 +152,12 @@ func (a adminAPIHandlers) GetUserInfo(w http.ResponseWriter, r *http.Request) {
writeSuccessResponseJSON(w, data)
}
// UpdateGroupMembers - PUT /minio/admin/v2/update-group-members
// UpdateGroupMembers - PUT /minio/admin/v3/update-group-members
func (a adminAPIHandlers) UpdateGroupMembers(w http.ResponseWriter, r *http.Request) {
ctx := newContext(r, w, "UpdateGroupMembers")
defer logger.AuditLog(w, r, "UpdateGroupMembers", mustGetClaimsFromToken(r))
objectAPI, _ := validateAdminUsersReq(ctx, w, r, iampolicy.AddUserToGroupAdminAction)
if objectAPI == nil {
return
@@ -194,10 +197,12 @@ func (a adminAPIHandlers) UpdateGroupMembers(w http.ResponseWriter, r *http.Requ
}
}
// GetGroup - /minio/admin/v2/group?group=mygroup1
// GetGroup - /minio/admin/v3/group?group=mygroup1
func (a adminAPIHandlers) GetGroup(w http.ResponseWriter, r *http.Request) {
ctx := newContext(r, w, "GetGroup")
defer logger.AuditLog(w, r, "GetGroup", mustGetClaimsFromToken(r))
objectAPI, _ := validateAdminUsersReq(ctx, w, r, iampolicy.GetGroupAdminAction)
if objectAPI == nil {
return
@@ -221,10 +226,12 @@ func (a adminAPIHandlers) GetGroup(w http.ResponseWriter, r *http.Request) {
writeSuccessResponseJSON(w, body)
}
// ListGroups - GET /minio/admin/v2/groups
// ListGroups - GET /minio/admin/v3/groups
func (a adminAPIHandlers) ListGroups(w http.ResponseWriter, r *http.Request) {
ctx := newContext(r, w, "ListGroups")
defer logger.AuditLog(w, r, "ListGroups", mustGetClaimsFromToken(r))
objectAPI, _ := validateAdminUsersReq(ctx, w, r, iampolicy.ListGroupsAdminAction)
if objectAPI == nil {
return
@@ -245,10 +252,12 @@ func (a adminAPIHandlers) ListGroups(w http.ResponseWriter, r *http.Request) {
writeSuccessResponseJSON(w, body)
}
// SetGroupStatus - PUT /minio/admin/v2/set-group-status?group=mygroup1&status=enabled
// SetGroupStatus - PUT /minio/admin/v3/set-group-status?group=mygroup1&status=enabled
func (a adminAPIHandlers) SetGroupStatus(w http.ResponseWriter, r *http.Request) {
ctx := newContext(r, w, "SetGroupStatus")
defer logger.AuditLog(w, r, "SetGroupStatus", mustGetClaimsFromToken(r))
objectAPI, _ := validateAdminUsersReq(ctx, w, r, iampolicy.EnableGroupAdminAction)
if objectAPI == nil {
return
@@ -280,21 +289,17 @@ func (a adminAPIHandlers) SetGroupStatus(w http.ResponseWriter, r *http.Request)
}
}
// SetUserStatus - PUT /minio/admin/v2/set-user-status?accessKey=<access_key>&status=[enabled|disabled]
// SetUserStatus - PUT /minio/admin/v3/set-user-status?accessKey=<access_key>&status=[enabled|disabled]
func (a adminAPIHandlers) SetUserStatus(w http.ResponseWriter, r *http.Request) {
ctx := newContext(r, w, "SetUserStatus")
defer logger.AuditLog(w, r, "SetUserStatus", mustGetClaimsFromToken(r))
objectAPI, _ := validateAdminUsersReq(ctx, w, r, iampolicy.EnableUserAdminAction)
if objectAPI == nil {
return
}
// Deny if WORM is enabled
if globalWORMEnabled {
writeErrorResponseJSON(ctx, w, errorCodes.ToAPIErr(ErrMethodNotAllowed), r.URL)
return
}
vars := mux.Vars(r)
accessKey := vars["accessKey"]
status := vars["status"]
@@ -319,21 +324,17 @@ func (a adminAPIHandlers) SetUserStatus(w http.ResponseWriter, r *http.Request)
}
}
// AddUser - PUT /minio/admin/v2/add-user?accessKey=<access_key>
// AddUser - PUT /minio/admin/v3/add-user?accessKey=<access_key>
func (a adminAPIHandlers) AddUser(w http.ResponseWriter, r *http.Request) {
ctx := newContext(r, w, "AddUser")
defer logger.AuditLog(w, r, "AddUser", mustGetClaimsFromToken(r))
objectAPI, cred := validateAdminUsersReq(ctx, w, r, iampolicy.CreateUserAdminAction)
if objectAPI == nil {
return
}
// Deny if WORM is enabled
if globalWORMEnabled {
writeErrorResponseJSON(ctx, w, errorCodes.ToAPIErr(ErrMethodNotAllowed), r.URL)
return
}
vars := mux.Vars(r)
accessKey := vars["accessKey"]
@@ -378,16 +379,323 @@ func (a adminAPIHandlers) AddUser(w http.ResponseWriter, r *http.Request) {
}
}
// InfoCannedPolicy - GET /minio/admin/v2/info-canned-policy?name={policyName}
func (a adminAPIHandlers) InfoCannedPolicy(w http.ResponseWriter, r *http.Request) {
ctx := newContext(r, w, "InfoCannedPolicy")
// AddServiceAccount - PUT /minio/admin/v3/add-service-account
func (a adminAPIHandlers) AddServiceAccount(w http.ResponseWriter, r *http.Request) {
ctx := newContext(r, w, "AddServiceAccount")
defer logger.AuditLog(w, r, "AddServiceAccount", mustGetClaimsFromToken(r))
// Get current object layer instance.
objectAPI := newObjectLayerWithoutSafeModeFn()
if objectAPI == nil || globalNotificationSys == nil || globalIAMSys == nil {
writeErrorResponseJSON(ctx, w, errorCodes.ToAPIErr(ErrServerNotInitialized), r.URL)
return
}
cred, _, owner, s3Err := validateAdminSignature(ctx, r, "")
if s3Err != ErrNone {
writeErrorResponseJSON(ctx, w, errorCodes.ToAPIErr(s3Err), r.URL)
return
}
password := cred.SecretKey
reqBytes, err := madmin.DecryptData(password, io.LimitReader(r.Body, r.ContentLength))
if err != nil {
writeErrorResponseJSON(ctx, w, errorCodes.ToAPIErrWithErr(ErrAdminConfigBadJSON, err), r.URL)
return
}
var createReq madmin.AddServiceAccountReq
if err = json.Unmarshal(reqBytes, &createReq); err != nil {
writeErrorResponseJSON(ctx, w, errorCodes.ToAPIErrWithErr(ErrAdminConfigBadJSON, err), r.URL)
return
}
// Disallow creating service accounts by root user.
if owner {
writeErrorResponseJSON(ctx, w, errorCodes.ToAPIErr(ErrAdminAccountNotEligible), r.URL)
return
}
parentUser := cred.AccessKey
if cred.ParentUser != "" {
parentUser = cred.ParentUser
}
newCred, err := globalIAMSys.NewServiceAccount(ctx, parentUser, createReq.Policy)
if err != nil {
writeErrorResponseJSON(ctx, w, toAdminAPIErr(ctx, err), r.URL)
return
}
// Notify all other MinIO peers to reload the new service account
for _, nerr := range globalNotificationSys.LoadServiceAccount(newCred.AccessKey) {
if nerr.Err != nil {
logger.GetReqInfo(ctx).SetTags("peerAddress", nerr.Host.String())
logger.LogIf(ctx, nerr.Err)
}
}
var createResp = madmin.AddServiceAccountResp{
Credentials: auth.Credentials{
AccessKey: newCred.AccessKey,
SecretKey: newCred.SecretKey,
},
}
data, err := json.Marshal(createResp)
if err != nil {
writeErrorResponseJSON(ctx, w, toAdminAPIErr(ctx, err), r.URL)
return
}
encryptedData, err := madmin.EncryptData(password, data)
if err != nil {
writeErrorResponseJSON(ctx, w, toAdminAPIErr(ctx, err), r.URL)
return
}
writeSuccessResponseJSON(w, encryptedData)
}
// ListServiceAccounts - GET /minio/admin/v3/list-service-accounts
func (a adminAPIHandlers) ListServiceAccounts(w http.ResponseWriter, r *http.Request) {
ctx := newContext(r, w, "ListServiceAccounts")
defer logger.AuditLog(w, r, "ListServiceAccounts", mustGetClaimsFromToken(r))
// Get current object layer instance.
objectAPI := newObjectLayerWithoutSafeModeFn()
if objectAPI == nil || globalNotificationSys == nil || globalIAMSys == nil {
writeErrorResponseJSON(ctx, w, errorCodes.ToAPIErr(ErrServerNotInitialized), r.URL)
return
}
cred, _, owner, s3Err := validateAdminSignature(ctx, r, "")
if s3Err != ErrNone {
writeErrorResponseJSON(ctx, w, errorCodes.ToAPIErr(s3Err), r.URL)
return
}
// Disallow listing service accounts by the root user.
if owner {
writeErrorResponseJSON(ctx, w, errorCodes.ToAPIErr(ErrAdminAccountNotEligible), r.URL)
return
}
parentUser := cred.AccessKey
if cred.ParentUser != "" {
parentUser = cred.ParentUser
}
serviceAccounts, err := globalIAMSys.ListServiceAccounts(ctx, parentUser)
if err != nil {
writeErrorResponseJSON(ctx, w, toAdminAPIErr(ctx, err), r.URL)
return
}
var listResp = madmin.ListServiceAccountsResp{
Accounts: serviceAccounts,
}
data, err := json.Marshal(listResp)
if err != nil {
writeErrorResponseJSON(ctx, w, toAdminAPIErr(ctx, err), r.URL)
return
}
encryptedData, err := madmin.EncryptData(cred.SecretKey, data)
if err != nil {
writeErrorResponseJSON(ctx, w, toAdminAPIErr(ctx, err), r.URL)
return
}
writeSuccessResponseJSON(w, encryptedData)
}
// DeleteServiceAccount - DELETE /minio/admin/v3/delete-service-account
func (a adminAPIHandlers) DeleteServiceAccount(w http.ResponseWriter, r *http.Request) {
ctx := newContext(r, w, "DeleteServiceAccount")
defer logger.AuditLog(w, r, "DeleteServiceAccount", mustGetClaimsFromToken(r))
// Get current object layer instance.
objectAPI := newObjectLayerWithoutSafeModeFn()
if objectAPI == nil || globalNotificationSys == nil || globalIAMSys == nil {
writeErrorResponseJSON(ctx, w, errorCodes.ToAPIErr(ErrServerNotInitialized), r.URL)
return
}
cred, _, owner, s3Err := validateAdminSignature(ctx, r, "")
if s3Err != ErrNone {
writeErrorResponseJSON(ctx, w, errorCodes.ToAPIErr(s3Err), r.URL)
return
}
// Disallow deleting service accounts by the root user.
if owner {
writeErrorResponseJSON(ctx, w, errorCodes.ToAPIErr(ErrAdminAccountNotEligible), r.URL)
return
}
serviceAccount := mux.Vars(r)["accessKey"]
if serviceAccount == "" {
writeErrorResponseJSON(ctx, w, errorCodes.ToAPIErr(ErrAdminInvalidArgument), r.URL)
return
}
user, err := globalIAMSys.GetServiceAccountParent(ctx, serviceAccount)
if err != nil {
writeErrorResponseJSON(ctx, w, toAdminAPIErr(ctx, err), r.URL)
return
}
parentUser := cred.AccessKey
if cred.ParentUser != "" {
parentUser = cred.ParentUser
}
if parentUser != user || user == "" {
// The service account either belongs to another user or does not
// exist; return a not-found error in both cases to mitigate
// brute-force enumeration of service accounts.
writeErrorResponseJSON(ctx, w, errorCodes.ToAPIErr(ErrServiceAccountNotFound), r.URL)
return
}
err = globalIAMSys.DeleteServiceAccount(ctx, serviceAccount)
if err != nil {
writeErrorResponseJSON(ctx, w, toAdminAPIErr(ctx, err), r.URL)
return
}
writeSuccessNoContent(w)
}
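Note that accessKey arrives as a query parameter, yet the handler reads it via mux.Vars: gorilla/mux exposes Queries patterns of the form {name:...} through Vars exactly like path variables. A runnable sketch of that routing detail:
package main
import (
"fmt"
"log"
"net/http"
"github.com/gorilla/mux"
)
func main() {
r := mux.NewRouter()
r.Methods(http.MethodDelete).
Path("/minio/admin/v3/delete-service-account").
Queries("accessKey", "{accessKey:.*}").
HandlerFunc(func(w http.ResponseWriter, req *http.Request) {
// The query-pattern variable is available via mux.Vars,
// which is how DeleteServiceAccount reads it above.
fmt.Fprintln(w, mux.Vars(req)["accessKey"])
})
log.Fatal(http.ListenAndServe(":8080", r))
}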
// AccountUsageInfoHandler returns usage
func (a adminAPIHandlers) AccountUsageInfoHandler(w http.ResponseWriter, r *http.Request) {
ctx := newContext(r, w, "AccountUsageInfo")
defer logger.AuditLog(w, r, "AccountUsageInfo", mustGetClaimsFromToken(r))
// Get current object layer instance.
objectAPI := newObjectLayerWithoutSafeModeFn()
if objectAPI == nil || globalNotificationSys == nil || globalIAMSys == nil {
writeErrorResponseJSON(ctx, w, errorCodes.ToAPIErr(ErrServerNotInitialized), r.URL)
return
}
cred, claims, owner, s3Err := validateAdminSignature(ctx, r, "")
if s3Err != ErrNone {
writeErrorResponseJSON(ctx, w, errorCodes.ToAPIErr(s3Err), r.URL)
return
}
// Set prefix value for "s3:prefix" policy conditionals.
r.Header.Set("prefix", "")
// Set delimiter value for "s3:delimiter" policy conditionals.
r.Header.Set("delimiter", SlashSeparator)
isAllowedAccess := func(bucketName string) (rd, wr bool) {
// isAllowedAccess reports whether the requester's policies grant
// read (ListBucket) and/or write (PutObject) access to the bucket.
if globalIAMSys.IsAllowed(iampolicy.Args{
AccountName: cred.AccessKey,
Action: iampolicy.ListBucketAction,
BucketName: bucketName,
ConditionValues: getConditionValues(r, "", cred.AccessKey, claims),
IsOwner: owner,
ObjectName: "",
Claims: claims,
}) {
rd = true
}
if globalIAMSys.IsAllowed(iampolicy.Args{
AccountName: cred.AccessKey,
Action: iampolicy.PutObjectAction,
BucketName: bucketName,
ConditionValues: getConditionValues(r, "", cred.AccessKey, claims),
IsOwner: owner,
ObjectName: "",
Claims: claims,
}) {
wr = true
}
return rd, wr
}
buckets, err := objectAPI.ListBuckets(ctx)
if err != nil {
writeErrorResponseJSON(ctx, w, toAdminAPIErr(ctx, err), r.URL)
return
}
// Load the latest calculated data usage
dataUsageInfo, err := loadDataUsageFromBackend(ctx, objectAPI)
if err != nil {
// log the error, continue with the accounting response
logger.LogIf(ctx, err)
}
accountName := cred.AccessKey
if cred.ParentUser != "" {
accountName = cred.ParentUser
}
acctInfo := madmin.AccountUsageInfo{
AccountName: accountName,
}
for _, bucket := range buckets {
rd, wr := isAllowedAccess(bucket.Name)
if rd || wr {
var size uint64
// Fetch the data usage of the current bucket
if !dataUsageInfo.LastUpdate.IsZero() {
size = dataUsageInfo.BucketsUsage[bucket.Name].Size
}
acctInfo.Buckets = append(acctInfo.Buckets, madmin.BucketUsageInfo{
Name: bucket.Name,
Created: bucket.Created,
Size: size,
Access: madmin.AccountAccess{
Read: rd,
Write: wr,
},
})
}
}
usageInfoJSON, err := json.Marshal(acctInfo)
if err != nil {
writeErrorResponseJSON(ctx, w, toAdminAPIErr(ctx, err), r.URL)
return
}
writeSuccessResponseJSON(w, usageInfoJSON)
}
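The handler reduces policy evaluation to a read/write pair per bucket and reports only buckets with at least one capability. The same logic in isolation, with an injected policy predicate standing in for globalIAMSys.IsAllowed:
// access is the per-bucket capability pair the handler reports.
type access struct{ Read, Write bool }
// bucketAccess evaluates the two actions checked above.
func bucketAccess(isAllowed func(action, bucket string) bool, buckets []string) map[string]access {
out := make(map[string]access, len(buckets))
for _, b := range buckets {
a := access{
Read:  isAllowed("s3:ListBucket", b),
Write: isAllowed("s3:PutObject", b),
}
if a.Read || a.Write { // inaccessible buckets are omitted
out[b] = a
}
}
return out
}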
// InfoCannedPolicyV2 - GET /minio/admin/v2/info-canned-policy?name={policyName}
func (a adminAPIHandlers) InfoCannedPolicyV2(w http.ResponseWriter, r *http.Request) {
ctx := newContext(r, w, "InfoCannedPolicyV2")
defer logger.AuditLog(w, r, "InfoCannedPolicyV2", mustGetClaimsFromToken(r))
objectAPI, _ := validateAdminUsersReq(ctx, w, r, iampolicy.GetPolicyAdminAction)
if objectAPI == nil {
return
}
data, err := globalIAMSys.InfoPolicy(mux.Vars(r)["name"])
policy, err := globalIAMSys.InfoPolicy(mux.Vars(r)["name"])
if err != nil {
writeErrorResponseJSON(ctx, w, toAdminAPIErr(ctx, err), r.URL)
return
}
data, err := json.Marshal(policy)
if err != nil {
writeErrorResponseJSON(ctx, w, toAdminAPIErr(ctx, err), r.URL)
return
@@ -397,9 +705,32 @@ func (a adminAPIHandlers) InfoCannedPolicy(w http.ResponseWriter, r *http.Reques
w.(http.Flusher).Flush()
}
// ListCannedPolicies - GET /minio/admin/v2/list-canned-policies
func (a adminAPIHandlers) ListCannedPolicies(w http.ResponseWriter, r *http.Request) {
ctx := newContext(r, w, "ListCannedPolicies")
// InfoCannedPolicy - GET /minio/admin/v3/info-canned-policy?name={policyName}
func (a adminAPIHandlers) InfoCannedPolicy(w http.ResponseWriter, r *http.Request) {
ctx := newContext(r, w, "InfoCannedPolicy")
defer logger.AuditLog(w, r, "InfoCannedPolicy", mustGetClaimsFromToken(r))
objectAPI, _ := validateAdminUsersReq(ctx, w, r, iampolicy.GetPolicyAdminAction)
if objectAPI == nil {
return
}
policy, err := globalIAMSys.InfoPolicy(mux.Vars(r)["name"])
if err != nil {
writeErrorResponseJSON(ctx, w, toAdminAPIErr(ctx, err), r.URL)
return
}
json.NewEncoder(w).Encode(policy)
w.(http.Flusher).Flush()
}
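Unlike the v2 variant, the v3 handler streams the policy with json.NewEncoder and flushes, instead of buffering a marshaled byte slice. The shape as a small helper (hypothetical name and package; note the guarded type assertion, since not every ResponseWriter implements http.Flusher):
package handlers // hypothetical
import (
"encoding/json"
"net/http"
)
// writeJSONStream encodes v straight onto the wire and flushes, so the
// client sees the body without waiting for the handler to return.
func writeJSONStream(w http.ResponseWriter, v interface{}) error {
if err := json.NewEncoder(w).Encode(v); err != nil {
return err
}
if f, ok := w.(http.Flusher); ok {
f.Flush()
}
return nil
}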
// ListCannedPoliciesV2 - GET /minio/admin/v2/list-canned-policies
func (a adminAPIHandlers) ListCannedPoliciesV2(w http.ResponseWriter, r *http.Request) {
ctx := newContext(r, w, "ListCannedPoliciesV2")
defer logger.AuditLog(w, r, "ListCannedPoliciesV2", mustGetClaimsFromToken(r))
objectAPI, _ := validateAdminUsersReq(ctx, w, r, iampolicy.ListUserPoliciesAdminAction)
if objectAPI == nil {
@@ -412,7 +743,16 @@ func (a adminAPIHandlers) ListCannedPolicies(w http.ResponseWriter, r *http.Requ
return
}
if err = json.NewEncoder(w).Encode(policies); err != nil {
policyMap := make(map[string][]byte, len(policies))
for k, p := range policies {
var err error
policyMap[k], err = json.Marshal(p)
if err != nil {
logger.LogIf(ctx, err)
continue
}
}
if err = json.NewEncoder(w).Encode(policyMap); err != nil {
writeErrorResponseJSON(ctx, w, toAdminAPIErr(ctx, err), r.URL)
return
}
@@ -420,10 +760,46 @@ func (a adminAPIHandlers) ListCannedPolicies(w http.ResponseWriter, r *http.Requ
w.(http.Flusher).Flush()
}
// RemoveCannedPolicy - DELETE /minio/admin/v2/remove-canned-policy?name=<policy_name>
// ListCannedPolicies - GET /minio/admin/v3/list-canned-policies
func (a adminAPIHandlers) ListCannedPolicies(w http.ResponseWriter, r *http.Request) {
ctx := newContext(r, w, "ListCannedPolicies")
defer logger.AuditLog(w, r, "ListCannedPolicies", mustGetClaimsFromToken(r))
objectAPI, _ := validateAdminUsersReq(ctx, w, r, iampolicy.ListUserPoliciesAdminAction)
if objectAPI == nil {
return
}
policies, err := globalIAMSys.ListPolicies()
if err != nil {
writeErrorResponseJSON(ctx, w, toAdminAPIErr(ctx, err), r.URL)
return
}
var newPolicies = make(map[string]iampolicy.Policy)
for name, p := range policies {
_, err = json.Marshal(p)
if err != nil {
logger.LogIf(ctx, err)
continue
}
newPolicies[name] = p
}
if err = json.NewEncoder(w).Encode(newPolicies); err != nil {
writeErrorResponseJSON(ctx, w, toAdminAPIErr(ctx, err), r.URL)
return
}
w.(http.Flusher).Flush()
}
// RemoveCannedPolicy - DELETE /minio/admin/v3/remove-canned-policy?name=<policy_name>
func (a adminAPIHandlers) RemoveCannedPolicy(w http.ResponseWriter, r *http.Request) {
ctx := newContext(r, w, "RemoveCannedPolicy")
defer logger.AuditLog(w, r, "RemoveCannedPolicy", mustGetClaimsFromToken(r))
objectAPI, _ := validateAdminUsersReq(ctx, w, r, iampolicy.DeletePolicyAdminAction)
if objectAPI == nil {
return
@@ -432,12 +808,6 @@ func (a adminAPIHandlers) RemoveCannedPolicy(w http.ResponseWriter, r *http.Requ
vars := mux.Vars(r)
policyName := vars["name"]
// Deny if WORM is enabled
if globalWORMEnabled {
writeErrorResponseJSON(ctx, w, errorCodes.ToAPIErr(ErrMethodNotAllowed), r.URL)
return
}
if err := globalIAMSys.DeletePolicy(policyName); err != nil {
writeErrorResponseJSON(ctx, w, toAdminAPIErr(ctx, err), r.URL)
return
@@ -452,10 +822,12 @@ func (a adminAPIHandlers) RemoveCannedPolicy(w http.ResponseWriter, r *http.Requ
}
}
// AddCannedPolicy - PUT /minio/admin/v2/add-canned-policy?name=<policy_name>
// AddCannedPolicy - PUT /minio/admin/v3/add-canned-policy?name=<policy_name>
func (a adminAPIHandlers) AddCannedPolicy(w http.ResponseWriter, r *http.Request) {
ctx := newContext(r, w, "AddCannedPolicy")
defer logger.AuditLog(w, r, "AddCannedPolicy", mustGetClaimsFromToken(r))
objectAPI, _ := validateAdminUsersReq(ctx, w, r, iampolicy.CreatePolicyAdminAction)
if objectAPI == nil {
return
@@ -464,12 +836,6 @@ func (a adminAPIHandlers) AddCannedPolicy(w http.ResponseWriter, r *http.Request
vars := mux.Vars(r)
policyName := vars["name"]
// Deny if WORM is enabled
if globalWORMEnabled {
writeErrorResponseJSON(ctx, w, errorCodes.ToAPIErr(ErrMethodNotAllowed), r.URL)
return
}
// Error out if Content-Length is missing.
if r.ContentLength <= 0 {
writeErrorResponseJSON(ctx, w, errorCodes.ToAPIErr(ErrMissingContentLength), r.URL)
@@ -508,10 +874,12 @@ func (a adminAPIHandlers) AddCannedPolicy(w http.ResponseWriter, r *http.Request
}
}
// SetPolicyForUserOrGroup - PUT /minio/admin/v2/set-policy?policy=xxx&user-or-group=?[&is-group]
// SetPolicyForUserOrGroup - PUT /minio/admin/v3/set-policy?policy=xxx&user-or-group=?[&is-group]
func (a adminAPIHandlers) SetPolicyForUserOrGroup(w http.ResponseWriter, r *http.Request) {
ctx := newContext(r, w, "SetPolicyForUserOrGroup")
defer logger.AuditLog(w, r, "SetPolicyForUserOrGroup", mustGetClaimsFromToken(r))
objectAPI, _ := validateAdminUsersReq(ctx, w, r, iampolicy.AttachPolicyAdminAction)
if objectAPI == nil {
return
@@ -522,15 +890,9 @@ func (a adminAPIHandlers) SetPolicyForUserOrGroup(w http.ResponseWriter, r *http
entityName := vars["userOrGroup"]
isGroup := vars["isGroup"] == "true"
// Deny if WORM is enabled
if globalWORMEnabled {
writeErrorResponseJSON(ctx, w, errorCodes.ToAPIErr(ErrMethodNotAllowed), r.URL)
return
}
if !isGroup {
ok, err := globalIAMSys.IsTempUser(entityName)
if err != nil {
if err != nil && err != errNoSuchUser {
writeErrorResponseJSON(ctx, w, toAdminAPIErr(ctx, err), r.URL)
return
}

File diff suppressed because it is too large


@@ -33,26 +33,27 @@ import (
"github.com/minio/minio/pkg/madmin"
)
// adminXLTestBed - encapsulates subsystems that need to be setup for
// adminErasureTestBed - encapsulates subsystems that need to be setup for
// admin-handler unit tests.
type adminXLTestBed struct {
xlDirs []string
objLayer ObjectLayer
router *mux.Router
type adminErasureTestBed struct {
erasureDirs []string
objLayer ObjectLayer
router *mux.Router
}
// prepareAdminXLTestBed - helper function that sets up a single-node
// XL backend for admin-handler tests.
func prepareAdminXLTestBed() (*adminXLTestBed, error) {
// prepareAdminErasureTestBed - helper function that sets up a single-node
// Erasure backend for admin-handler tests.
func prepareAdminErasureTestBed(ctx context.Context) (*adminErasureTestBed, error) {
// reset global variables to start afresh.
resetTestGlobals()
// Set globalIsXL to indicate that the setup uses an erasure
// Set globalIsErasure to indicate that the setup uses an erasure
// code backend.
globalIsXL = true
globalIsErasure = true
// Initializing objectLayer for HealFormatHandler.
objLayer, xlDirs, xlErr := initTestXLObjLayer()
objLayer, erasureDirs, xlErr := initTestErasureObjLayer(ctx)
if xlErr != nil {
return nil, xlErr
}
@@ -65,58 +66,47 @@ func prepareAdminXLTestBed() (*adminXLTestBed, error) {
// Initialize boot time
globalBootTime = UTCNow()
globalEndpoints = mustGetZoneEndpoints(xlDirs...)
globalEndpoints = mustGetZoneEndpoints(erasureDirs...)
globalConfigSys = NewConfigSys()
newAllSubsystems()
globalIAMSys = NewIAMSys()
globalIAMSys.Init(objLayer)
buckets, err := objLayer.ListBuckets(context.Background())
if err != nil {
return nil, err
}
globalPolicySys = NewPolicySys()
globalPolicySys.Init(buckets, objLayer)
globalNotificationSys = NewNotificationSys(globalEndpoints)
globalNotificationSys.Init(buckets, objLayer)
initAllSubsystems(ctx, objLayer)
// Setup admin mgmt REST API handlers.
adminRouter := mux.NewRouter()
registerAdminRouter(adminRouter, true, true)
return &adminXLTestBed{
xlDirs: xlDirs,
objLayer: objLayer,
router: adminRouter,
return &adminErasureTestBed{
erasureDirs: erasureDirs,
objLayer: objLayer,
router: adminRouter,
}, nil
}
// TearDown - method that resets the test bed for subsequent unit
// tests to start afresh.
func (atb *adminXLTestBed) TearDown() {
removeRoots(atb.xlDirs)
func (atb *adminErasureTestBed) TearDown() {
removeRoots(atb.erasureDirs)
resetTestGlobals()
}
// initTestObjLayer - Helper function to initialize an XL-based object
// initTestErasureObjLayer - Helper function to initialize an Erasure-based object
// layer and set globalObjectAPI.
func initTestXLObjLayer() (ObjectLayer, []string, error) {
xlDirs, err := getRandomDisks(16)
func initTestErasureObjLayer(ctx context.Context) (ObjectLayer, []string, error) {
erasureDirs, err := getRandomDisks(16)
if err != nil {
return nil, nil, err
}
endpoints := mustGetNewEndpoints(xlDirs...)
format, err := waitForFormatXL(true, endpoints, 1, 1, 16, "")
endpoints := mustGetNewEndpoints(erasureDirs...)
storageDisks, format, err := waitForFormatErasure(true, endpoints, 1, 1, 16, "")
if err != nil {
removeRoots(xlDirs)
removeRoots(erasureDirs)
return nil, nil, err
}
globalPolicySys = NewPolicySys()
objLayer, err := newXLSets(endpoints, format, 1, 16)
objLayer := &erasureZones{zones: make([]*erasureSets, 1)}
objLayer.zones[0], err = newErasureSets(ctx, endpoints, storageDisks, format)
if err != nil {
return nil, nil, err
}
@@ -125,7 +115,7 @@ func initTestXLObjLayer() (ObjectLayer, []string, error) {
globalObjLayerMutex.Lock()
globalObjectAPI = objLayer
globalObjLayerMutex.Unlock()
return objLayer, xlDirs, nil
return objLayer, erasureDirs, nil
}
// cmdType - Represents different service subcommands like status, stop
@@ -191,9 +181,12 @@ func getServiceCmdRequest(cmd cmdType, cred auth.Credentials) (*http.Request, er
// testServicesCmdHandler - parametrizes service subcommand tests on
// cmdType value.
func testServicesCmdHandler(cmd cmdType, t *testing.T) {
adminTestBed, err := prepareAdminXLTestBed()
ctx, cancel := context.WithCancel(context.Background())
defer cancel()
adminTestBed, err := prepareAdminErasureTestBed(ctx)
if err != nil {
t.Fatal("Failed to initialize a single node XL backend for admin handler tests.")
t.Fatal("Failed to initialize a single node Erasure backend for admin handler tests.")
}
defer adminTestBed.TearDown()
@@ -259,9 +252,12 @@ func buildAdminRequest(queryVal url.Values, method, path string,
}
func TestAdminServerInfo(t *testing.T) {
adminTestBed, err := prepareAdminXLTestBed()
ctx, cancel := context.WithCancel(context.Background())
defer cancel()
adminTestBed, err := prepareAdminErasureTestBed(ctx)
if err != nil {
t.Fatal("Failed to initialize a single node XL backend for admin handler tests.")
t.Fatal("Failed to initialize a single node Erasure backend for admin handler tests.")
}
defer adminTestBed.TearDown()
@@ -303,7 +299,7 @@ func TestToAdminAPIErrCode(t *testing.T) {
}{
// 1. Server not in quorum.
{
err: errXLWriteQuorum,
err: errErasureWriteQuorum,
expectedAPIErr: ErrAdminConfigNoQuorum,
},
// 2. No error.
@@ -314,12 +310,12 @@ func TestToAdminAPIErrCode(t *testing.T) {
// 3. Non-admin API specific error.
{
err: errDiskNotFound,
expectedAPIErr: toAPIErrorCode(context.Background(), errDiskNotFound),
expectedAPIErr: toAPIErrorCode(GlobalContext, errDiskNotFound),
},
}
for i, test := range testCases {
actualErr := toAdminAPIErrCode(context.Background(), test.err)
actualErr := toAdminAPIErrCode(GlobalContext, test.err)
if actualErr != test.expectedAPIErr {
t.Errorf("Test %d: Expected %v but received %v",
i+1, test.expectedAPIErr, actualErr)


@@ -21,7 +21,6 @@ import (
"encoding/json"
"fmt"
"net/http"
"strings"
"sync"
"time"
@@ -60,9 +59,8 @@ const (
)
var (
errHealIdleTimeout = fmt.Errorf("healing results were not consumed for too long")
errHealPushStopNDiscard = fmt.Errorf("heal push stopped due to heal stop signal")
errHealStopSignalled = fmt.Errorf("heal stop signaled")
errHealIdleTimeout = fmt.Errorf("healing results were not consumed for too long")
errHealStopSignalled = fmt.Errorf("heal stop signaled")
errFnHealFromAPIErr = func(ctx context.Context, err error) error {
apiErr := toAPIError(ctx, err)
@@ -73,18 +71,11 @@ var (
// healSequenceStatus - accumulated status of the heal sequence
type healSequenceStatus struct {
// lock to update this structure as it is concurrently
// accessed
updateLock *sync.RWMutex
// summary and detail for failures
Summary healStatusSummary `json:"Summary"`
FailureDetail string `json:"Detail,omitempty"`
StartTime time.Time `json:"StartTime"`
// disk information
NumDisks int `json:"NumDisks"`
// settings for the heal sequence
HealSettings madmin.HealOpts `json:"Settings"`
@@ -100,25 +91,23 @@ type allHealState struct {
healSeqMap map[string]*healSequence
}
// initHealState - initialize healing apparatus
func initHealState() *allHealState {
// newHealState - initialize global heal state management
func newHealState() *allHealState {
healState := &allHealState{
healSeqMap: make(map[string]*healSequence),
}
go healState.periodicHealSeqsClean()
go healState.periodicHealSeqsClean(GlobalContext)
return healState
}
func (ahs *allHealState) periodicHealSeqsClean() {
func (ahs *allHealState) periodicHealSeqsClean(ctx context.Context) {
// Launch clean-up routine to remove this heal sequence (after
// it ends) from the global state after timeout has elapsed.
ticker := time.NewTicker(time.Minute * 5)
defer ticker.Stop()
for {
select {
case <-ticker.C:
case <-time.After(time.Minute * 5):
now := UTCNow()
ahs.Lock()
for path, h := range ahs.healSeqMap {
@@ -127,7 +116,7 @@ func (ahs *allHealState) periodicHealSeqsClean() {
}
}
ahs.Unlock()
case <-GlobalServiceDoneCh:
case <-ctx.Done():
// server could be restarting - need
// to exit immediately
return
@@ -166,8 +155,13 @@ func (ahs *allHealState) stopHealSequence(path string) ([]byte, APIError) {
StartTime: UTCNow(),
}
} else {
clientToken := he.clientToken
if globalIsDistErasure {
clientToken = fmt.Sprintf("%s@%d", he.clientToken, GetProxyEndpointLocalIndex(globalProxyEndpoints))
}
hsp = madmin.HealStopSuccess{
ClientToken: he.clientToken,
ClientToken: clientToken,
ClientAddress: he.clientAddress,
StartTime: he.startTime,
}
@@ -183,7 +177,7 @@ func (ahs *allHealState) stopHealSequence(path string) ([]byte, APIError) {
}
b, err := json.Marshal(&hsp)
return b, toAdminAPIErr(context.Background(), err)
return b, toAdminAPIErr(GlobalContext, err)
}
// LaunchNewHealSequence - launches a background routine that performs
@@ -200,11 +194,9 @@ func (ahs *allHealState) LaunchNewHealSequence(h *healSequence) (
respBytes []byte, apiErr APIError, errMsg string) {
existsAndLive := false
he, exists := ahs.getHealSequence(h.path)
he, exists := ahs.getHealSequence(pathJoin(h.bucket, h.object))
if exists {
if !he.hasEnded() || len(he.currentStatus.Items) > 0 {
existsAndLive = true
}
existsAndLive = !he.hasEnded()
}
if existsAndLive {
@@ -229,9 +221,9 @@ func (ahs *allHealState) LaunchNewHealSequence(h *healSequence) (
// Check if new heal sequence to be started overlaps with any
// existing, running sequence
hpath := pathJoin(h.bucket, h.object)
for k, hSeq := range ahs.healSeqMap {
if !hSeq.hasEnded() && (strings.HasPrefix(k, h.path) ||
strings.HasPrefix(h.path, k)) {
if !hSeq.hasEnded() && (HasPrefix(k, hpath) || HasPrefix(hpath, k)) {
errMsg = "The provided heal sequence path overlaps with an existing " +
fmt.Sprintf("heal path: %s", k)
@@ -240,13 +232,18 @@ func (ahs *allHealState) LaunchNewHealSequence(h *healSequence) (
}
// Add heal state and start sequence
ahs.healSeqMap[h.path] = h
ahs.healSeqMap[hpath] = h
// Launch top-level background heal go-routine
go h.healSequenceStart()
clientToken := h.clientToken
if globalIsDistErasure {
clientToken = fmt.Sprintf("%s@%d", h.clientToken, GetProxyEndpointLocalIndex(globalProxyEndpoints))
}
b, err := json.Marshal(madmin.HealStartSuccess{
ClientToken: h.clientToken,
ClientToken: clientToken,
ClientAddress: h.clientAddress,
StartTime: h.startTime,
})
@@ -261,11 +258,11 @@ func (ahs *allHealState) LaunchNewHealSequence(h *healSequence) (
// status results from global state and returns its JSON
// representation. The clientToken helps ensure there aren't
// conflicting clients fetching status.
func (ahs *allHealState) PopHealStatusJSON(path string,
func (ahs *allHealState) PopHealStatusJSON(hpath string,
clientToken string) ([]byte, APIErrorCode) {
// fetch heal state for given path
h, exists := ahs.getHealSequence(path)
h, exists := ahs.getHealSequence(hpath)
if !exists {
// If there is no such heal sequence, return error.
return nil, ErrHealNoSuchProcess
@@ -277,8 +274,8 @@ func (ahs *allHealState) PopHealStatusJSON(path string,
}
// Take lock to access and update the heal-sequence
h.currentStatus.updateLock.Lock()
defer h.currentStatus.updateLock.Unlock()
h.mutex.Lock()
defer h.mutex.Unlock()
numItems := len(h.currentStatus.Items)
@@ -289,38 +286,42 @@ func (ahs *allHealState) PopHealStatusJSON(path string,
lastResultIndex = h.currentStatus.Items[numItems-1].ResultIndex
}
// After sending status to client, and before relinquishing
// the updateLock, reset Item to nil and record the result
// index sent to the client.
defer func(i int64) {
h.lastSentResultIndex = i
h.currentStatus.Items = nil
}(lastResultIndex)
h.lastSentResultIndex = lastResultIndex
jbytes, err := json.Marshal(h.currentStatus)
if err != nil {
h.currentStatus.Items = nil
logger.LogIf(h.ctx, err)
return nil, ErrInternalError
}
h.currentStatus.Items = nil
return jbytes, ErrNone
}
// healSource denotes single entity and heal option.
type healSource struct {
bucket string
object string
versionID string
opts *madmin.HealOpts // optional heal option overrides default setting
}
// healSequence - state for each heal sequence initiated on the
// server.
type healSequence struct {
// bucket, and prefix on which heal seq. was initiated
bucket, objPrefix string
// bucket, and object on which heal seq. was initiated
bucket, object string
// path is just pathJoin(bucket, objPrefix)
path string
// A channel of entities (format, buckets, objects) to heal
sourceCh chan healSource
// List of entities (format, buckets, objects) to heal
sourceCh chan string
// A channel of entities with heal result
respCh chan healResult
// Report healing progress, false if this is a background
// healing since currently there is no entity which will
// receive realtime healing status
// Report healing progress
reportProgress bool
// time at which heal sequence was started
@@ -345,59 +346,141 @@ type healSequence struct {
// completed
traverseAndHealDoneCh chan error
// channel to signal heal sequence to stop (e.g. from the
// heal-stop API)
stopSignalCh chan struct{}
// canceler to cancel heal sequence.
cancelCtx context.CancelFunc
// the last result index sent to client
lastSentResultIndex int64
// Number of total items scanned
scannedItemsCount int64
// Number of total items scanned against item type
scannedItemsMap map[madmin.HealItemType]int64
// Number of total items healed against item type
healedItemsMap map[madmin.HealItemType]int64
// Number of total items where healing failed against endpoint and drive state
healFailedItemsMap map[string]int64
// The time of the last scan/heal activity
lastHealActivity time.Time
// Holds the request-info for logging
ctx context.Context
// used to lock this structure as it is concurrently accessed
mutex sync.RWMutex
}
// newHealSequence - creates a heal sequence object, assuming bucket and
// objPrefix are already validated.
func newHealSequence(bucket, objPrefix, clientAddr string,
numDisks int, hs madmin.HealOpts, forceStart bool) *healSequence {
func newHealSequence(ctx context.Context, bucket, objPrefix, clientAddr string,
hs madmin.HealOpts, forceStart bool) *healSequence {
reqInfo := &logger.ReqInfo{RemoteHost: clientAddr, API: "Heal", BucketName: bucket}
reqInfo.AppendTags("prefix", objPrefix)
ctx := logger.SetReqInfo(context.Background(), reqInfo)
ctx, cancel := context.WithCancel(logger.SetReqInfo(ctx, reqInfo))
clientToken := mustGetUUID()
return &healSequence{
respCh: make(chan healResult),
bucket: bucket,
objPrefix: objPrefix,
path: pathJoin(bucket, objPrefix),
object: objPrefix,
reportProgress: true,
startTime: UTCNow(),
clientToken: mustGetUUID(),
clientToken: clientToken,
clientAddress: clientAddr,
forceStarted: forceStart,
settings: hs,
currentStatus: healSequenceStatus{
Summary: healNotStartedStatus,
HealSettings: hs,
NumDisks: numDisks,
updateLock: &sync.RWMutex{},
},
traverseAndHealDoneCh: make(chan error),
stopSignalCh: make(chan struct{}),
cancelCtx: cancel,
ctx: ctx,
scannedItemsMap: make(map[madmin.HealItemType]int64),
healedItemsMap: make(map[madmin.HealItemType]int64),
healFailedItemsMap: make(map[string]int64),
}
}
// resetHealStatusCounters - reset the healSequence status counters between
// each monthly background heal scanning activity.
// This is used only in the background healing scenario, where a
// single long-running healSequence reactively heals objects passed
// to its sourceCh.
func (h *healSequence) resetHealStatusCounters() {
h.mutex.Lock()
defer h.mutex.Unlock()
h.currentStatus.Items = []madmin.HealResultItem{}
h.lastSentResultIndex = 0
h.scannedItemsMap = make(map[madmin.HealItemType]int64)
h.healedItemsMap = make(map[madmin.HealItemType]int64)
h.healFailedItemsMap = make(map[string]int64)
}
// getScannedItemsCount - returns a count of all scanned items
func (h *healSequence) getScannedItemsCount() int64 {
var count int64
h.mutex.RLock()
defer h.mutex.RUnlock()
for _, v := range h.scannedItemsMap {
count = count + v
}
return count
}
// getScannedItemsMap - returns map of all scanned items against type
func (h *healSequence) getScannedItemsMap() map[madmin.HealItemType]int64 {
h.mutex.RLock()
defer h.mutex.RUnlock()
// Make a copy before returning the value
retMap := make(map[madmin.HealItemType]int64, len(h.scannedItemsMap))
for k, v := range h.scannedItemsMap {
retMap[k] = v
}
return retMap
}
// getHealedItemsMap - returns the map of all healed items against type
func (h *healSequence) getHealedItemsMap() map[madmin.HealItemType]int64 {
h.mutex.RLock()
defer h.mutex.RUnlock()
// Make a copy before returning the value
retMap := make(map[madmin.HealItemType]int64, len(h.healedItemsMap))
for k, v := range h.healedItemsMap {
retMap[k] = v
}
return retMap
}
// gethealFailedItemsMap - returns map of all items where heal failed against
// drive endpoint and status
func (h *healSequence) gethealFailedItemsMap() map[string]int64 {
h.mutex.RLock()
defer h.mutex.RUnlock()
// Make a copy before returning the value
retMap := make(map[string]int64, len(h.healFailedItemsMap))
for k, v := range h.healFailedItemsMap {
retMap[k] = v
}
return retMap
}
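All three accessors repeat the same copy-under-read-lock pattern so callers never alias the live map. As a sketch only (not in the source, and it requires Go 1.18+ generics, later than this codebase), the pattern distills to:
package healops // hypothetical
import "sync"
// snapshot returns a copy of m taken under mu's read lock, so callers
// can iterate without racing concurrent writers.
func snapshot[K comparable, V any](mu *sync.RWMutex, m map[K]V) map[K]V {
mu.RLock()
defer mu.RUnlock()
out := make(map[K]V, len(m))
for k, v := range m {
out[k] = v
}
return out
}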
// isQuitting - determines if the heal sequence is quitting (due to an
// external signal)
func (h *healSequence) isQuitting() bool {
select {
case <-h.stopSignalCh:
case <-h.ctx.Done():
return true
default:
return false
@@ -406,19 +489,15 @@ func (h *healSequence) isQuitting() bool {
// check if the heal sequence has ended
func (h *healSequence) hasEnded() bool {
h.currentStatus.updateLock.RLock()
summary := h.currentStatus.Summary
h.currentStatus.updateLock.RUnlock()
return summary == healStoppedStatus || summary == healFinishedStatus
h.mutex.RLock()
ended := len(h.currentStatus.Items) == 0 || h.currentStatus.Summary == healStoppedStatus || h.currentStatus.Summary == healFinishedStatus
h.mutex.RUnlock()
return ended
}
// stops the heal sequence - safe to call multiple times.
func (h *healSequence) stop() {
select {
case <-h.stopSignalCh:
default:
close(h.stopSignalCh)
}
h.cancelCtx()
}
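stop() collapses to a single cancelCtx() call because a context.CancelFunc is idempotent, whereas a second close() on a channel panics; that panic is exactly what the old select-then-close dance guarded against. For instance:
package main
import "context"
func main() {
ctx, cancel := context.WithCancel(context.Background())
cancel()
cancel()     // safe: a CancelFunc may be called any number of times
<-ctx.Done() // already closed, returns immediately
}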
// pushHealResultItem - pushes a heal result item for consumption in
@@ -445,29 +524,27 @@ func (h *healSequence) pushHealResultItem(r madmin.HealResultItem) error {
var itemsLen int
for {
h.currentStatus.updateLock.Lock()
h.mutex.Lock()
itemsLen = len(h.currentStatus.Items)
if itemsLen == maxUnconsumedHealResultItems {
// unlock and wait to check again if we can push
h.currentStatus.updateLock.Unlock()
// wait for a second, or quit if an external
// stop signal is received or the
// unconsumedTimer fires.
select {
// Check after a second
case <-time.After(time.Second):
h.mutex.Unlock()
continue
case <-h.stopSignalCh:
case <-h.ctx.Done():
h.mutex.Unlock()
// discard result and return.
return errHealPushStopNDiscard
return errHealStopSignalled
// Timeout if no results consumed for too
// long.
// Timeout if no results consumed for too long.
case <-unconsumedTimer.C:
h.mutex.Unlock()
return errHealIdleTimeout
}
}
break
@@ -484,13 +561,7 @@ func (h *healSequence) pushHealResultItem(r madmin.HealResultItem) error {
h.currentStatus.Items = append(h.currentStatus.Items, r)
// release lock
h.currentStatus.updateLock.Unlock()
// This is a "safe" point for the heal sequence to quit if
// signaled externally.
if h.isQuitting() {
return errHealStopSignalled
}
h.mutex.Unlock()
return nil
}
@@ -504,10 +575,10 @@ func (h *healSequence) pushHealResultItem(r madmin.HealResultItem) error {
// the heal-sequence.
func (h *healSequence) healSequenceStart() {
// Set status as running
h.currentStatus.updateLock.Lock()
h.mutex.Lock()
h.currentStatus.Summary = healRunningStatus
h.currentStatus.StartTime = UTCNow()
h.currentStatus.updateLock.Unlock()
h.mutex.Unlock()
if h.sourceCh == nil {
go h.traverseAndHeal()
@@ -517,25 +588,27 @@ func (h *healSequence) healSequenceStart() {
select {
case err, ok := <-h.traverseAndHealDoneCh:
if !ok {
return
}
h.mutex.Lock()
h.endTime = UTCNow()
h.currentStatus.updateLock.Lock()
defer h.currentStatus.updateLock.Unlock()
// Heal traversal is complete.
if ok {
if err == nil {
// heal traversal succeeded.
h.currentStatus.Summary = healFinishedStatus
} else {
// heal traversal had an error.
h.currentStatus.Summary = healStoppedStatus
h.currentStatus.FailureDetail = err.Error()
} else {
// heal traversal succeeded.
h.currentStatus.Summary = healFinishedStatus
}
case <-h.stopSignalCh:
h.mutex.Unlock()
case <-h.ctx.Done():
h.mutex.Lock()
h.endTime = UTCNow()
h.currentStatus.updateLock.Lock()
h.currentStatus.Summary = healStoppedStatus
h.currentStatus.FailureDetail = errHealStopSignalled.Error()
h.currentStatus.updateLock.Unlock()
h.mutex.Unlock()
// drain traverse channel so the traversal
// go-routine does not leak.
@@ -548,62 +621,95 @@ func (h *healSequence) healSequenceStart() {
}
}
func (h *healSequence) queueHealTask(path string, healType madmin.HealItemType) error {
var respCh = make(chan healResult)
defer close(respCh)
func (h *healSequence) queueHealTask(source healSource, healType madmin.HealItemType) error {
// Send heal request
globalBackgroundHealRoutine.queueHealTask(healTask{path: path, responseCh: respCh, opts: h.settings})
// Wait for answer and push result to the client
res := <-respCh
if !h.reportProgress {
return nil
task := healTask{
bucket: source.bucket,
object: source.object,
versionID: source.versionID,
opts: h.settings,
responseCh: h.respCh,
}
res.result.Type = healType
if res.err != nil {
// The object might have been deleted by the time heal was
// attempted; ignore it and return success.
if isErrObjectNotFound(res.err) {
return nil
}
// Only report object error
if healType != madmin.HealItemObject {
if source.opts != nil {
task.opts = *source.opts
}
globalBackgroundHealRoutine.queueHealTask(task)
select {
case res := <-h.respCh:
if !h.reportProgress {
// The object might have been deleted by the time heal was
// attempted; ignore it and return success.
if isErrObjectNotFound(res.err) || isErrVersionNotFound(res.err) {
return nil
}
h.mutex.Lock()
defer h.mutex.Unlock()
// Progress is not reported in case of background heal processing.
// Instead we increment relevant counter based on the heal result
// for prometheus reporting.
if res.err != nil {
for _, d := range res.result.After.Drives {
// For failed items we report the endpoint and drive state
// This will help users take corrective actions for drives
h.healFailedItemsMap[d.Endpoint+","+d.State]++
}
} else {
// Only object type reported for successful healing
h.healedItemsMap[res.result.Type]++
}
// Report caller of any failure
return res.err
}
res.result.Detail = res.err.Error()
res.result.Type = healType
if res.err != nil {
// The object might have been deleted by the time heal was
// attempted; ignore it and return success.
if isErrObjectNotFound(res.err) || isErrVersionNotFound(res.err) {
return nil
}
// Only report object error
if healType != madmin.HealItemObject {
return res.err
}
res.result.Detail = res.err.Error()
}
return h.pushHealResultItem(res.result)
case <-h.ctx.Done():
return nil
}
return h.pushHealResultItem(res.result)
}
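Instead of allocating a channel per call, queueHealTask now hands the sequence's shared respCh to the background worker and races the reply against cancellation. The queue-and-wait shape, with hypothetical types:
package healops // hypothetical
import "context"
type healReq struct {
payload string
respCh  chan<- error
}
// enqueueAndWait submits work to a shared background worker and blocks
// until the worker replies on respCh or the sequence is cancelled.
func enqueueAndWait(ctx context.Context, queue chan<- healReq, payload string, respCh chan error) error {
queue <- healReq{payload: payload, respCh: respCh}
select {
case err := <-respCh:
return err
case <-ctx.Done():
return nil // cancelled; any late result is dropped
}
}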
func (h *healSequence) healItemsFromSourceCh() error {
bucketsOnly := true // heal buckets only, not objects.
if err := h.healItems(bucketsOnly); err != nil {
logger.LogIf(h.ctx, err)
}
for {
select {
case path := <-h.sourceCh:
case source, ok := <-h.sourceCh:
if !ok {
return nil
}
var itemType madmin.HealItemType
switch {
case path == nopHeal:
case source.bucket == nopHeal:
continue
case path == SlashSeparator:
case source.bucket == SlashSeparator:
itemType = madmin.HealItemMetadata
case !strings.Contains(path, SlashSeparator):
case source.bucket != "" && source.object == "":
itemType = madmin.HealItemBucket
default:
itemType = madmin.HealItemObject
}
if err := h.queueHealTask(path, itemType); err != nil {
if err := h.queueHealTask(source, itemType); err != nil {
logger.LogIf(h.ctx, err)
}
h.scannedItemsCount++
h.scannedItemsMap[itemType]++
h.lastHealActivity = UTCNow()
case <-h.traverseAndHealDoneCh:
return nil
case <-GlobalServiceDoneCh:
case <-h.ctx.Done():
return nil
}
}
@@ -611,10 +717,9 @@ func (h *healSequence) healItemsFromSourceCh() error {
func (h *healSequence) healFromSourceCh() {
h.healItemsFromSourceCh()
close(h.traverseAndHealDoneCh)
}
func (h *healSequence) healItems(bucketsOnly bool) error {
func (h *healSequence) healDiskMeta() error {
// Start with format healing
if err := h.healDiskFormat(); err != nil {
return err
@@ -626,13 +731,12 @@ func (h *healSequence) healItems(bucketsOnly bool) error {
}
// Start healing the bucket config prefix.
if err := h.healMinioSysMeta(bucketConfigPrefix)(); err != nil {
return err
}
return h.healMinioSysMeta(bucketConfigPrefix)()
}
// Start healing the background ops prefix.
if err := h.healMinioSysMeta(backgroundOpsMetaPrefix)(); err != nil {
logger.LogIf(h.ctx, err)
func (h *healSequence) healItems(bucketsOnly bool) error {
if err := h.healDiskMeta(); err != nil {
return err
}
// Heal buckets and objects
@@ -648,13 +752,7 @@ func (h *healSequence) healItems(bucketsOnly bool) error {
// two objects.
func (h *healSequence) traverseAndHeal() {
bucketsOnly := false // Heals buckets and objects also.
if err := h.healItems(bucketsOnly); err != nil {
if h.isQuitting() {
err = errHealStopSignalled
}
h.traverseAndHealDoneCh <- err
}
h.traverseAndHealDoneCh <- h.healItems(bucketsOnly)
close(h.traverseAndHealDoneCh)
}
@@ -671,15 +769,19 @@ func (h *healSequence) healMinioSysMeta(metaPrefix string) func() error {
// NOTE: Healing on meta is run regardless of any bucket being
// selected; this ensures that meta entries are always up to date and
// correct.
return objectAPI.HealObjects(h.ctx, minioMetaBucket, metaPrefix, func(bucket string, object string) error {
return objectAPI.HealObjects(h.ctx, minioMetaBucket, metaPrefix, h.settings, func(bucket, object, versionID string) error {
if h.isQuitting() {
return errHealStopSignalled
}
herr := h.queueHealTask(pathJoin(bucket, object), madmin.HealItemBucketMetadata)
herr := h.queueHealTask(healSource{
bucket: bucket,
object: object,
versionID: versionID,
}, madmin.HealItemBucketMetadata)
// The object might have been deleted by the time heal was
// attempted; ignore it and move on.
if isErrObjectNotFound(herr) {
if isErrObjectNotFound(herr) || isErrVersionNotFound(herr) {
return nil
}
return herr
@@ -700,7 +802,7 @@ func (h *healSequence) healDiskFormat() error {
return errServerNotInitialized
}
return h.queueHealTask(SlashSeparator, madmin.HealItemMetadata)
return h.queueHealTask(healSource{bucket: SlashSeparator}, madmin.HealItemMetadata)
}
// healBuckets - check for all buckets heal or just particular bucket.
@@ -742,7 +844,7 @@ func (h *healSequence) healBucket(bucket string, bucketsOnly bool) error {
return errServerNotInitialized
}
if err := h.queueHealTask(bucket, madmin.HealItemBucket); err != nil {
if err := h.queueHealTask(healSource{bucket: bucket}, madmin.HealItemBucket); err != nil {
return err
}
@@ -751,12 +853,12 @@ func (h *healSequence) healBucket(bucket string, bucketsOnly bool) error {
}
if !h.settings.Recursive {
if h.objPrefix != "" {
if h.object != "" {
// Check if an object named as the objPrefix exists,
// and if so heal it.
_, err := objectAPI.GetObjectInfo(h.ctx, bucket, h.objPrefix, ObjectOptions{})
_, err := objectAPI.GetObjectInfo(h.ctx, bucket, h.object, ObjectOptions{})
if err == nil {
if err = h.healObject(bucket, h.objPrefix); err != nil {
if err = h.healObject(bucket, h.object, ""); err != nil {
return err
}
}
@@ -765,14 +867,14 @@ func (h *healSequence) healBucket(bucket string, bucketsOnly bool) error {
return nil
}
if err := objectAPI.HealObjects(h.ctx, bucket, h.objPrefix, h.healObject); err != nil {
if err := objectAPI.HealObjects(h.ctx, bucket, h.object, h.settings, h.healObject); err != nil {
return errFnHealFromAPIErr(h.ctx, err)
}
return nil
}
// healObject - heal the given object and record result
func (h *healSequence) healObject(bucket, object string) error {
func (h *healSequence) healObject(bucket, object, versionID string) error {
// Get current object layer instance.
objectAPI := newObjectLayerWithoutSafeModeFn()
if objectAPI == nil {
@@ -783,5 +885,9 @@ func (h *healSequence) healObject(bucket, object string) error {
return errHealStopSignalled
}
return h.queueHealTask(pathJoin(bucket, object), madmin.HealItemObject)
return h.queueHealTask(healSource{
bucket: bucket,
object: object,
versionID: versionID,
}, madmin.HealItemObject)
}

cmd/admin-quota-handlers.go (new file, 118 lines)

@@ -0,0 +1,118 @@
/*
* MinIO Cloud Storage, (C) 2020 MinIO, Inc.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package cmd
import (
"encoding/json"
"io/ioutil"
"net/http"
"github.com/gorilla/mux"
"github.com/minio/minio/cmd/config"
"github.com/minio/minio/cmd/logger"
"github.com/minio/minio/pkg/env"
iampolicy "github.com/minio/minio/pkg/iam/policy"
)
const (
bucketQuotaConfigFile = "quota.json"
)
// PutBucketQuotaConfigHandler - PUT Bucket quota configuration.
// ----------
// Places a quota configuration on the specified bucket. The quota in
// the configuration is applied by default to enforce a total-size
// quota on the bucket.
func (a adminAPIHandlers) PutBucketQuotaConfigHandler(w http.ResponseWriter, r *http.Request) {
ctx := newContext(r, w, "PutBucketQuotaConfig")
defer logger.AuditLog(w, r, "PutBucketQuotaConfig", mustGetClaimsFromToken(r))
objectAPI, _ := validateAdminReq(ctx, w, r, iampolicy.SetBucketQuotaAdminAction)
if objectAPI == nil {
writeErrorResponseJSON(ctx, w, errorCodes.ToAPIErr(ErrServerNotInitialized), r.URL)
return
}
vars := mux.Vars(r)
bucket := vars["bucket"]
// Turn off quota commands if data usage info is unavailable.
if env.Get(envDataUsageCrawlConf, config.EnableOn) == config.EnableOff {
writeErrorResponseJSON(ctx, w, errorCodes.ToAPIErr(ErrAdminBucketQuotaDisabled), r.URL)
return
}
if _, err := objectAPI.GetBucketInfo(ctx, bucket); err != nil {
writeErrorResponseJSON(ctx, w, toAPIError(ctx, err), r.URL)
return
}
data, err := ioutil.ReadAll(r.Body)
if err != nil {
writeErrorResponseJSON(ctx, w, errorCodes.ToAPIErr(ErrInvalidRequest), r.URL)
return
}
if _, err = parseBucketQuota(bucket, data); err != nil {
writeErrorResponse(ctx, w, toAPIError(ctx, err), r.URL, guessIsBrowserReq(r))
return
}
if err = globalBucketMetadataSys.Update(bucket, bucketQuotaConfigFile, data); err != nil {
writeErrorResponse(ctx, w, toAPIError(ctx, err), r.URL, guessIsBrowserReq(r))
return
}
// Write success response.
writeSuccessResponseHeadersOnly(w)
}
// GetBucketQuotaConfigHandler - gets bucket quota configuration
func (a adminAPIHandlers) GetBucketQuotaConfigHandler(w http.ResponseWriter, r *http.Request) {
ctx := newContext(r, w, "GetBucketQuotaConfig")
defer logger.AuditLog(w, r, "GetBucketQuotaConfig", mustGetClaimsFromToken(r))
objectAPI, _ := validateAdminUsersReq(ctx, w, r, iampolicy.GetBucketQuotaAdminAction)
if objectAPI == nil {
writeErrorResponseJSON(ctx, w, errorCodes.ToAPIErr(ErrServerNotInitialized), r.URL)
return
}
vars := mux.Vars(r)
bucket := vars["bucket"]
if _, err := objectAPI.GetBucketInfo(ctx, bucket); err != nil {
writeErrorResponseJSON(ctx, w, toAPIError(ctx, err), r.URL)
return
}
config, err := globalBucketMetadataSys.GetQuotaConfig(bucket)
if err != nil {
writeErrorResponseJSON(ctx, w, toAdminAPIErr(ctx, err), r.URL)
return
}
configData, err := json.Marshal(config)
if err != nil {
writeErrorResponseJSON(ctx, w, toAdminAPIErr(ctx, err), r.URL)
return
}
// Write success response.
writeSuccessResponseJSON(w, configData)
}


@@ -20,13 +20,17 @@ import (
"net/http"
"github.com/gorilla/mux"
"github.com/minio/minio/cmd/config"
"github.com/minio/minio/pkg/env"
"github.com/minio/minio/pkg/madmin"
)
const (
adminPathPrefix = minioReservedBucketPath + "/admin"
adminAPIVersion = madmin.AdminAPIVersion
adminAPIVersionPrefix = SlashSeparator + madmin.AdminAPIVersion
adminPathPrefix = minioReservedBucketPath + "/admin"
adminAPIVersionV2 = madmin.AdminAPIVersionV2
adminAPIVersion = madmin.AdminAPIVersion
adminAPIVersionPrefix = SlashSeparator + adminAPIVersion
adminAPIVersionV2Prefix = SlashSeparator + adminAPIVersionV2
)
// adminAPIHandlers provides HTTP handlers for MinIO admin API.
@@ -41,127 +45,178 @@ func registerAdminRouter(router *mux.Router, enableConfigOps, enableIAMOps bool)
/// Service operations
// Restart and stop MinIO service.
adminRouter.Methods(http.MethodPost).Path(adminAPIVersionPrefix+"/service").HandlerFunc(httpTraceAll(adminAPI.ServiceActionHandler)).Queries("action", "{action:.*}")
// Update MinIO servers.
adminRouter.Methods(http.MethodPost).Path(adminAPIVersionPrefix+"/update").HandlerFunc(httpTraceAll(adminAPI.ServerUpdateHandler)).Queries("updateURL", "{updateURL:.*}")
// Info operations
adminRouter.Methods(http.MethodGet).Path(adminAPIVersionPrefix + "/info").HandlerFunc(httpTraceAll(adminAPI.ServerInfoHandler))
// Hardware Info operations
adminRouter.Methods(http.MethodGet).Path(adminAPIVersionPrefix+"/hardware").HandlerFunc(httpTraceAll(adminAPI.ServerHardwareInfoHandler)).Queries("hwType", "{hwType:.*}")
// StorageInfo operations
adminRouter.Methods(http.MethodGet).Path(adminAPIVersionPrefix + "/storageinfo").HandlerFunc(httpTraceAll(adminAPI.StorageInfoHandler))
// DataUsageInfo operations
adminRouter.Methods(http.MethodGet).Path(adminAPIVersionPrefix + "/datausageinfo").HandlerFunc(httpTraceAll(adminAPI.DataUsageInfoHandler))
adminRouter.Methods(http.MethodGet).Path(adminAPIVersionPrefix + "/accountingusageinfo").HandlerFunc(httpTraceAll(adminAPI.AccountingUsageInfoHandler))
if globalIsDistXL || globalIsXL {
/// Heal operations
// Heal processing endpoint.
adminRouter.Methods(http.MethodPost).Path(adminAPIVersionPrefix + "/heal/").HandlerFunc(httpTraceAll(adminAPI.HealHandler))
adminRouter.Methods(http.MethodPost).Path(adminAPIVersionPrefix + "/heal/{bucket}").HandlerFunc(httpTraceAll(adminAPI.HealHandler))
adminRouter.Methods(http.MethodPost).Path(adminAPIVersionPrefix + "/heal/{bucket}/{prefix:.*}").HandlerFunc(httpTraceAll(adminAPI.HealHandler))
adminRouter.Methods(http.MethodPost).Path(adminAPIVersionPrefix + "/background-heal/status").HandlerFunc(httpTraceAll(adminAPI.BackgroundHealStatusHandler))
/// Health operations
}
// Performance command - return performance details based on input type
adminRouter.Methods(http.MethodGet).Path(adminAPIVersionPrefix+"/performance").HandlerFunc(httpTraceAll(adminAPI.PerfInfoHandler)).Queries("perfType", "{perfType:.*}")
// Profiling operations
adminRouter.Methods(http.MethodPost).Path(adminAPIVersionPrefix+"/profiling/start").HandlerFunc(httpTraceAll(adminAPI.StartProfilingHandler)).
Queries("profilerType", "{profilerType:.*}")
adminRouter.Methods(http.MethodGet).Path(adminAPIVersionPrefix + "/profiling/download").HandlerFunc(httpTraceAll(adminAPI.DownloadProfilingHandler))
// Config KV operations.
if enableConfigOps {
adminRouter.Methods(http.MethodGet).Path(adminAPIVersionPrefix+"/get-config-kv").HandlerFunc(httpTraceHdrs(adminAPI.GetConfigKVHandler)).Queries("key", "{key:.*}")
adminRouter.Methods(http.MethodPut).Path(adminAPIVersionPrefix + "/set-config-kv").HandlerFunc(httpTraceHdrs(adminAPI.SetConfigKVHandler))
adminRouter.Methods(http.MethodDelete).Path(adminAPIVersionPrefix + "/del-config-kv").HandlerFunc(httpTraceHdrs(adminAPI.DelConfigKVHandler))
adminRouter.Methods(http.MethodGet).Path(adminAPIVersionPrefix+"/help-config-kv").HandlerFunc(httpTraceAll(adminAPI.HelpConfigKVHandler)).Queries("subSys", "{subSys:.*}", "key", "{key:.*}")
adminRouter.Methods(http.MethodGet).Path(adminAPIVersionPrefix+"/list-config-history-kv").HandlerFunc(httpTraceAll(adminAPI.ListConfigHistoryKVHandler)).Queries("count", "{count:[0-9]+}")
adminRouter.Methods(http.MethodDelete).Path(adminAPIVersionPrefix+"/clear-config-history-kv").HandlerFunc(httpTraceHdrs(adminAPI.ClearConfigHistoryKVHandler)).Queries("restoreId", "{restoreId:.*}")
adminRouter.Methods(http.MethodPut).Path(adminAPIVersionPrefix+"/restore-config-history-kv").HandlerFunc(httpTraceHdrs(adminAPI.RestoreConfigHistoryKVHandler)).Queries("restoreId", "{restoreId:.*}")
adminVersions := []string{
adminAPIVersionPrefix,
adminAPIVersionV2Prefix,
}
/// Config operations
if enableConfigOps {
// Get config
adminRouter.Methods(http.MethodGet).Path(adminAPIVersionPrefix + "/config").HandlerFunc(httpTraceHdrs(adminAPI.GetConfigHandler))
// Set config
adminRouter.Methods(http.MethodPut).Path(adminAPIVersionPrefix + "/config").HandlerFunc(httpTraceHdrs(adminAPI.SetConfigHandler))
for _, adminVersion := range adminVersions {
// Restart and stop MinIO service.
adminRouter.Methods(http.MethodPost).Path(adminVersion+"/service").HandlerFunc(httpTraceAll(adminAPI.ServiceHandler)).Queries("action", "{action:.*}")
// Update MinIO servers.
adminRouter.Methods(http.MethodPost).Path(adminVersion+"/update").HandlerFunc(httpTraceAll(adminAPI.ServerUpdateHandler)).Queries("updateURL", "{updateURL:.*}")
// Info operations
adminRouter.Methods(http.MethodGet).Path(adminVersion + "/info").HandlerFunc(httpTraceAll(adminAPI.ServerInfoHandler))
// StorageInfo operations
adminRouter.Methods(http.MethodGet).Path(adminVersion + "/storageinfo").HandlerFunc(httpTraceAll(adminAPI.StorageInfoHandler))
// DataUsageInfo operations
adminRouter.Methods(http.MethodGet).Path(adminVersion + "/datausageinfo").HandlerFunc(httpTraceAll(adminAPI.DataUsageInfoHandler))
if globalIsDistErasure || globalIsErasure {
/// Heal operations
// Heal processing endpoint.
adminRouter.Methods(http.MethodPost).Path(adminVersion + "/heal/").HandlerFunc(httpTraceAll(adminAPI.HealHandler))
adminRouter.Methods(http.MethodPost).Path(adminVersion + "/heal/{bucket}").HandlerFunc(httpTraceAll(adminAPI.HealHandler))
adminRouter.Methods(http.MethodPost).Path(adminVersion + "/heal/{bucket}/{prefix:.*}").HandlerFunc(httpTraceAll(adminAPI.HealHandler))
adminRouter.Methods(http.MethodPost).Path(adminVersion + "/background-heal/status").HandlerFunc(httpTraceAll(adminAPI.BackgroundHealStatusHandler))
/// Health operations
}
// Profiling operations
adminRouter.Methods(http.MethodPost).Path(adminVersion+"/profiling/start").HandlerFunc(httpTraceAll(adminAPI.StartProfilingHandler)).
Queries("profilerType", "{profilerType:.*}")
adminRouter.Methods(http.MethodGet).Path(adminVersion + "/profiling/download").HandlerFunc(httpTraceAll(adminAPI.DownloadProfilingHandler))
// Config KV operations.
if enableConfigOps {
adminRouter.Methods(http.MethodGet).Path(adminVersion+"/get-config-kv").HandlerFunc(httpTraceHdrs(adminAPI.GetConfigKVHandler)).Queries("key", "{key:.*}")
adminRouter.Methods(http.MethodPut).Path(adminVersion + "/set-config-kv").HandlerFunc(httpTraceHdrs(adminAPI.SetConfigKVHandler))
adminRouter.Methods(http.MethodDelete).Path(adminVersion + "/del-config-kv").HandlerFunc(httpTraceHdrs(adminAPI.DelConfigKVHandler))
}
// Enable config help in all modes.
adminRouter.Methods(http.MethodGet).Path(adminVersion+"/help-config-kv").HandlerFunc(httpTraceAll(adminAPI.HelpConfigKVHandler)).Queries("subSys", "{subSys:.*}", "key", "{key:.*}")
// Config KV history operations.
if enableConfigOps {
adminRouter.Methods(http.MethodGet).Path(adminVersion+"/list-config-history-kv").HandlerFunc(httpTraceAll(adminAPI.ListConfigHistoryKVHandler)).Queries("count", "{count:[0-9]+}")
adminRouter.Methods(http.MethodDelete).Path(adminVersion+"/clear-config-history-kv").HandlerFunc(httpTraceHdrs(adminAPI.ClearConfigHistoryKVHandler)).Queries("restoreId", "{restoreId:.*}")
adminRouter.Methods(http.MethodPut).Path(adminVersion+"/restore-config-history-kv").HandlerFunc(httpTraceHdrs(adminAPI.RestoreConfigHistoryKVHandler)).Queries("restoreId", "{restoreId:.*}")
}
/// Config import/export bulk operations
if enableConfigOps {
// Get config
adminRouter.Methods(http.MethodGet).Path(adminVersion + "/config").HandlerFunc(httpTraceHdrs(adminAPI.GetConfigHandler))
// Set config
adminRouter.Methods(http.MethodPut).Path(adminVersion + "/config").HandlerFunc(httpTraceHdrs(adminAPI.SetConfigHandler))
}
if enableIAMOps {
// -- IAM APIs --
// Add policy IAM
adminRouter.Methods(http.MethodPut).Path(adminVersion+"/add-canned-policy").HandlerFunc(httpTraceHdrs(adminAPI.AddCannedPolicy)).Queries("name", "{name:.*}")
// Add user IAM
adminRouter.Methods(http.MethodGet).Path(adminVersion + "/accountusageinfo").HandlerFunc(httpTraceAll(adminAPI.AccountUsageInfoHandler))
adminRouter.Methods(http.MethodPut).Path(adminVersion+"/add-user").HandlerFunc(httpTraceHdrs(adminAPI.AddUser)).Queries("accessKey", "{accessKey:.*}")
adminRouter.Methods(http.MethodPut).Path(adminVersion+"/set-user-status").HandlerFunc(httpTraceHdrs(adminAPI.SetUserStatus)).Queries("accessKey", "{accessKey:.*}").Queries("status", "{status:.*}")
// Service accounts ops
adminRouter.Methods(http.MethodPut).Path(adminVersion + "/add-service-account").HandlerFunc(httpTraceHdrs(adminAPI.AddServiceAccount))
adminRouter.Methods(http.MethodGet).Path(adminVersion + "/list-service-accounts").HandlerFunc(httpTraceHdrs(adminAPI.ListServiceAccounts))
adminRouter.Methods(http.MethodDelete).Path(adminVersion+"/delete-service-account").HandlerFunc(httpTraceHdrs(adminAPI.DeleteServiceAccount)).Queries("accessKey", "{accessKey:.*}")
if adminVersion == adminAPIVersionV2Prefix {
// Info policy IAM v2
adminRouter.Methods(http.MethodGet).Path(adminVersion+"/info-canned-policy").HandlerFunc(httpTraceHdrs(adminAPI.InfoCannedPolicyV2)).Queries("name", "{name:.*}")
// List policies v2
adminRouter.Methods(http.MethodGet).Path(adminVersion + "/list-canned-policies").HandlerFunc(httpTraceHdrs(adminAPI.ListCannedPoliciesV2))
} else {
// Info policy IAM latest
adminRouter.Methods(http.MethodGet).Path(adminVersion+"/info-canned-policy").HandlerFunc(httpTraceHdrs(adminAPI.InfoCannedPolicy)).Queries("name", "{name:.*}")
// List policies latest
adminRouter.Methods(http.MethodGet).Path(adminVersion + "/list-canned-policies").HandlerFunc(httpTraceHdrs(adminAPI.ListCannedPolicies))
}
// Remove policy IAM
adminRouter.Methods(http.MethodDelete).Path(adminVersion+"/remove-canned-policy").HandlerFunc(httpTraceHdrs(adminAPI.RemoveCannedPolicy)).Queries("name", "{name:.*}")
// Set user or group policy
adminRouter.Methods(http.MethodPut).Path(adminVersion+"/set-user-or-group-policy").
HandlerFunc(httpTraceHdrs(adminAPI.SetPolicyForUserOrGroup)).
Queries("policyName", "{policyName:.*}", "userOrGroup", "{userOrGroup:.*}", "isGroup", "{isGroup:true|false}")
// Remove user IAM
adminRouter.Methods(http.MethodDelete).Path(adminVersion+"/remove-user").HandlerFunc(httpTraceHdrs(adminAPI.RemoveUser)).Queries("accessKey", "{accessKey:.*}")
// List users
adminRouter.Methods(http.MethodGet).Path(adminVersion + "/list-users").HandlerFunc(httpTraceHdrs(adminAPI.ListUsers))
// User info
adminRouter.Methods(http.MethodGet).Path(adminVersion+"/user-info").HandlerFunc(httpTraceHdrs(adminAPI.GetUserInfo)).Queries("accessKey", "{accessKey:.*}")
// Add/Remove members from group
adminRouter.Methods(http.MethodPut).Path(adminVersion + "/update-group-members").HandlerFunc(httpTraceHdrs(adminAPI.UpdateGroupMembers))
// Get Group
adminRouter.Methods(http.MethodGet).Path(adminVersion+"/group").HandlerFunc(httpTraceHdrs(adminAPI.GetGroup)).Queries("group", "{group:.*}")
// List Groups
adminRouter.Methods(http.MethodGet).Path(adminVersion + "/groups").HandlerFunc(httpTraceHdrs(adminAPI.ListGroups))
// Set Group Status
adminRouter.Methods(http.MethodPut).Path(adminVersion+"/set-group-status").HandlerFunc(httpTraceHdrs(adminAPI.SetGroupStatus)).Queries("group", "{group:.*}").Queries("status", "{status:.*}")
}
// Quota operations
if globalIsDistErasure || globalIsErasure {
if env.Get(envDataUsageCrawlConf, config.EnableOn) == config.EnableOn {
// GetBucketQuotaConfig
adminRouter.Methods(http.MethodGet).Path(adminVersion+"/get-bucket-quota").HandlerFunc(
httpTraceHdrs(adminAPI.GetBucketQuotaConfigHandler)).Queries("bucket", "{bucket:.*}")
// PutBucketQuotaConfig
adminRouter.Methods(http.MethodPut).Path(adminVersion+"/set-bucket-quota").HandlerFunc(
httpTraceHdrs(adminAPI.PutBucketQuotaConfigHandler)).Queries("bucket", "{bucket:.*}")
}
}
// -- Top APIs --
// Top locks
if globalIsDistErasure {
adminRouter.Methods(http.MethodGet).Path(adminVersion + "/top/locks").HandlerFunc(httpTraceHdrs(adminAPI.TopLocksHandler))
}
// HTTP Trace
adminRouter.Methods(http.MethodGet).Path(adminVersion + "/trace").HandlerFunc(adminAPI.TraceHandler)
// Console Logs
adminRouter.Methods(http.MethodGet).Path(adminVersion + "/log").HandlerFunc(httpTraceAll(adminAPI.ConsoleLogHandler))
// -- KMS APIs --
//
adminRouter.Methods(http.MethodPost).Path(adminVersion+"/kms/key/create").HandlerFunc(httpTraceAll(adminAPI.KMSCreateKeyHandler)).Queries("key-id", "{key-id:.*}")
adminRouter.Methods(http.MethodGet).Path(adminVersion + "/kms/key/status").HandlerFunc(httpTraceAll(adminAPI.KMSKeyStatusHandler))
if !globalIsGateway {
// -- OBD API --
adminRouter.Methods(http.MethodGet).Path(adminVersion+"/obdinfo").
HandlerFunc(httpTraceHdrs(adminAPI.OBDInfoHandler)).
Queries("perfdrive", "{perfdrive:true|false}",
"perfnet", "{perfnet:true|false}",
"minioinfo", "{minioinfo:true|false}",
"minioconfig", "{minioconfig:true|false}",
"syscpu", "{syscpu:true|false}",
"sysdiskhw", "{sysdiskhw:true|false}",
"sysosinfo", "{sysosinfo:true|false}",
"sysmem", "{sysmem:true|false}",
"sysprocess", "{sysprocess:true|false}",
)
}
}
if enableIAMOps {
// -- IAM APIs --
// Add policy IAM
adminRouter.Methods(http.MethodPut).Path(adminAPIVersionPrefix+"/add-canned-policy").HandlerFunc(httpTraceHdrs(adminAPI.AddCannedPolicy)).Queries("name",
"{name:.*}")
// Add user IAM
adminRouter.Methods(http.MethodPut).Path(adminAPIVersionPrefix+"/add-user").HandlerFunc(httpTraceHdrs(adminAPI.AddUser)).Queries("accessKey", "{accessKey:.*}")
adminRouter.Methods(http.MethodPut).Path(adminAPIVersionPrefix+"/set-user-status").HandlerFunc(httpTraceHdrs(adminAPI.SetUserStatus)).
Queries("accessKey", "{accessKey:.*}").Queries("status", "{status:.*}")
// Info policy IAM
adminRouter.Methods(http.MethodGet).Path(adminAPIVersionPrefix+"/info-canned-policy").HandlerFunc(httpTraceHdrs(adminAPI.InfoCannedPolicy)).Queries("name", "{name:.*}")
// Remove policy IAM
adminRouter.Methods(http.MethodDelete).Path(adminAPIVersionPrefix+"/remove-canned-policy").HandlerFunc(httpTraceHdrs(adminAPI.RemoveCannedPolicy)).Queries("name", "{name:.*}")
// Set user or group policy
adminRouter.Methods(http.MethodPut).Path(adminAPIVersionPrefix+"/set-user-or-group-policy").
HandlerFunc(httpTraceHdrs(adminAPI.SetPolicyForUserOrGroup)).
Queries("policyName", "{policyName:.*}", "userOrGroup", "{userOrGroup:.*}", "isGroup", "{isGroup:true|false}")
// Remove user IAM
adminRouter.Methods(http.MethodDelete).Path(adminAPIVersionPrefix+"/remove-user").HandlerFunc(httpTraceHdrs(adminAPI.RemoveUser)).Queries("accessKey", "{accessKey:.*}")
// List users
adminRouter.Methods(http.MethodGet).Path(adminAPIVersionPrefix + "/list-users").HandlerFunc(httpTraceHdrs(adminAPI.ListUsers))
// User info
adminRouter.Methods(http.MethodGet).Path(adminAPIVersionPrefix+"/user-info").HandlerFunc(httpTraceHdrs(adminAPI.GetUserInfo)).Queries("accessKey", "{accessKey:.*}")
// Add/Remove members from group
adminRouter.Methods(http.MethodPut).Path(adminAPIVersionPrefix + "/update-group-members").HandlerFunc(httpTraceHdrs(adminAPI.UpdateGroupMembers))
// Get Group
adminRouter.Methods(http.MethodGet).Path(adminAPIVersionPrefix+"/group").HandlerFunc(httpTraceHdrs(adminAPI.GetGroup)).Queries("group", "{group:.*}")
// List Groups
adminRouter.Methods(http.MethodGet).Path(adminAPIVersionPrefix + "/groups").HandlerFunc(httpTraceHdrs(adminAPI.ListGroups))
// Set Group Status
adminRouter.Methods(http.MethodPut).Path(adminAPIVersionPrefix+"/set-group-status").HandlerFunc(httpTraceHdrs(adminAPI.SetGroupStatus)).Queries("group", "{group:.*}").Queries("status", "{status:.*}")
// List policies
adminRouter.Methods(http.MethodGet).Path(adminAPIVersionPrefix + "/list-canned-policies").HandlerFunc(httpTraceHdrs(adminAPI.ListCannedPolicies))
}
// -- Top APIs --
// Top locks
if globalIsDistXL {
adminRouter.Methods(http.MethodGet).Path(adminAPIVersionPrefix + "/top/locks").HandlerFunc(httpTraceHdrs(adminAPI.TopLocksHandler))
}
// HTTP Trace
adminRouter.Methods(http.MethodGet).Path(adminAPIVersionPrefix + "/trace").HandlerFunc(adminAPI.TraceHandler)
// Console Logs
adminRouter.Methods(http.MethodGet).Path(adminAPIVersionPrefix + "/log").HandlerFunc(httpTraceAll(adminAPI.ConsoleLogHandler))
// -- KMS APIs --
//
adminRouter.Methods(http.MethodGet).Path(adminAPIVersionPrefix + "/kms/key/status").HandlerFunc(httpTraceAll(adminAPI.KMSKeyStatusHandler))
// If none of the routes match add default error handler routes
adminRouter.NotFoundHandler = http.HandlerFunc(httpTraceAll(errorResponseHandler))
adminRouter.MethodNotAllowedHandler = http.HandlerFunc(httpTraceAll(errorResponseHandler))
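
For readers unfamiliar with the registration pattern above: gorilla/mux only matches a route when every query parameter given to Queries is present, and it exposes the captured values through mux.Vars alongside path variables. A minimal, self-contained sketch of that behavior (hypothetical port and handler body, not MinIO's actual wiring):

package main

import (
	"fmt"
	"log"
	"net/http"

	"github.com/gorilla/mux"
)

func main() {
	r := mux.NewRouter()
	// Matches PUT /minio/admin/v3/add-user?accessKey=<value> only; without
	// the accessKey query parameter the route does not match at all.
	r.Methods(http.MethodPut).
		Path("/minio/admin/v3/add-user").
		Queries("accessKey", "{accessKey:.*}").
		HandlerFunc(func(w http.ResponseWriter, req *http.Request) {
			// Query-captured variables arrive through mux.Vars, same as path variables.
			fmt.Fprintf(w, "add-user for %s\n", mux.Vars(req)["accessKey"])
		})
	log.Fatal(http.ListenAndServe(":9000", r))
}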


@@ -17,181 +17,16 @@
package cmd
import (
"net"
"net/http"
"os"
"github.com/minio/minio-go/v6/pkg/set"
"github.com/minio/minio/pkg/cpu"
"github.com/minio/minio/pkg/disk"
"github.com/minio/minio/pkg/madmin"
"github.com/minio/minio/pkg/mem"
cpuhw "github.com/shirou/gopsutil/cpu"
)
// getLocalMemUsage - returns ServerMemUsageInfo for all zones, endpoints.
func getLocalMemUsage(endpointZones EndpointZones, r *http.Request) ServerMemUsageInfo {
var memUsages []mem.Usage
var historicUsages []mem.Usage
seenHosts := set.NewStringSet()
for _, ep := range endpointZones {
for _, endpoint := range ep.Endpoints {
if seenHosts.Contains(endpoint.Host) {
continue
}
seenHosts.Add(endpoint.Host)
// Only proceed for local endpoints
if endpoint.IsLocal {
memUsages = append(memUsages, mem.GetUsage())
historicUsages = append(historicUsages, mem.GetHistoricUsage())
}
}
}
addr := r.Host
if globalIsDistXL {
addr = GetLocalPeer(endpointZones)
}
return ServerMemUsageInfo{
Addr: addr,
Usage: memUsages,
HistoricUsage: historicUsages,
}
}
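
The zone/endpoint walks in this file all share one dedup idiom: skip hosts already visited so each node is sampled exactly once, even when it exposes several endpoints. A stripped-down sketch of that idiom, using a plain map in place of minio-go's set.StringSet:

package main

import "fmt"

func main() {
	// Two endpoints on node1, one on node2 - node1 must be sampled only once.
	hosts := []string{"node1:9000", "node1:9000", "node2:9000"}
	seen := make(map[string]struct{})
	for _, host := range hosts {
		if _, ok := seen[host]; ok {
			continue // already collected metrics for this host
		}
		seen[host] = struct{}{}
		fmt.Println("collect usage from", host)
	}
}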
// getLocalCPULoad - returns ServerCPULoadInfo for all zones, endpoints.
func getLocalCPULoad(endpointZones EndpointZones, r *http.Request) ServerCPULoadInfo {
var cpuLoads []cpu.Load
var historicLoads []cpu.Load
seenHosts := set.NewStringSet()
for _, ep := range endpointZones {
for _, endpoint := range ep.Endpoints {
if seenHosts.Contains(endpoint.Host) {
continue
}
seenHosts.Add(endpoint.Host)
// Only proceed for local endpoints
if endpoint.IsLocal {
cpuLoads = append(cpuLoads, cpu.GetLoad())
historicLoads = append(historicLoads, cpu.GetHistoricLoad())
}
}
}
addr := r.Host
if globalIsDistXL {
addr = GetLocalPeer(endpointZones)
}
return ServerCPULoadInfo{
Addr: addr,
Load: cpuLoads,
HistoricLoad: historicLoads,
}
}
// getLocalDrivesPerf - returns ServerDrivesPerfInfo for all zones, endpoints.
func getLocalDrivesPerf(endpointZones EndpointZones, size int64, r *http.Request) madmin.ServerDrivesPerfInfo {
var dps []disk.Performance
for _, ep := range endpointZones {
for _, endpoint := range ep.Endpoints {
// Only proceed for local endpoints
if endpoint.IsLocal {
if _, err := os.Stat(endpoint.Path); err != nil {
// Since this drive is not available, add relevant details and proceed
dps = append(dps, disk.Performance{Path: endpoint.Path, Error: err.Error()})
continue
}
dp := disk.GetPerformance(pathJoin(endpoint.Path, minioMetaTmpBucket, mustGetUUID()), size)
dp.Path = endpoint.Path
dps = append(dps, dp)
}
}
}
addr := r.Host
if globalIsDistXL {
addr = GetLocalPeer(endpointZones)
}
return madmin.ServerDrivesPerfInfo{
Addr: addr,
Perf: dps,
}
}
// getLocalCPUInfo - returns ServerCPUHardwareInfo for all zones, endpoints.
func getLocalCPUInfo(endpointZones EndpointZones, r *http.Request) madmin.ServerCPUHardwareInfo {
var cpuHardwares []cpuhw.InfoStat
seenHosts := set.NewStringSet()
for _, ep := range endpointZones {
for _, endpoint := range ep.Endpoints {
if seenHosts.Contains(endpoint.Host) {
continue
}
// Add to the list of visited hosts
seenHosts.Add(endpoint.Host)
// Only proceed for local endpoints
if endpoint.IsLocal {
cpuHardware, err := cpuhw.Info()
if err != nil {
return madmin.ServerCPUHardwareInfo{
Error: err.Error(),
}
}
cpuHardwares = append(cpuHardwares, cpuHardware...)
}
}
}
addr := r.Host
if globalIsDistXL {
addr = GetLocalPeer(endpointZones)
}
return madmin.ServerCPUHardwareInfo{
Addr: addr,
CPUInfo: cpuHardwares,
}
}
// getLocalNetworkInfo - returns ServerNetworkHardwareInfo for all zones, endpoints.
func getLocalNetworkInfo(endpointZones EndpointZones, r *http.Request) madmin.ServerNetworkHardwareInfo {
var networkHardwares []net.Interface
seenHosts := set.NewStringSet()
for _, ep := range endpointZones {
for _, endpoint := range ep.Endpoints {
if seenHosts.Contains(endpoint.Host) {
continue
}
// Add to the list of visited hosts
seenHosts.Add(endpoint.Host)
// Only proceed for local endpoints
if endpoint.IsLocal {
networkHardware, err := net.Interfaces()
if err != nil {
return madmin.ServerNetworkHardwareInfo{
Error: err.Error(),
}
}
networkHardwares = append(networkHardwares, networkHardware...)
}
}
}
addr := r.Host
if globalIsDistXL {
addr = GetLocalPeer(endpointZones)
}
return madmin.ServerNetworkHardwareInfo{
Addr: addr,
NetworkInfo: networkHardwares,
}
}
// getLocalServerProperty - returns ServerDrivesPerfInfo for only the
// getLocalServerProperty - returns madmin.ServerProperties for only the
// local endpoints from given list of endpoints
func getLocalServerProperty(endpointZones EndpointZones, r *http.Request) madmin.ServerProperties {
var disks []madmin.Disk
addr := r.Host
if globalIsDistXL {
if globalIsDistErasure {
addr = GetLocalPeer(endpointZones)
}
network := make(map[string]string)
@@ -204,33 +39,14 @@ func getLocalServerProperty(endpointZones EndpointZones, r *http.Request) madmin
if endpoint.IsLocal {
// Only proceed for local endpoints
network[nodeName] = "online"
var di = madmin.Disk{
DrivePath: endpoint.Path,
}
diInfo, err := disk.GetInfo(endpoint.Path)
if err != nil {
if os.IsNotExist(err) || isSysErrPathNotFound(err) {
di.State = madmin.DriveStateMissing
} else {
di.State = madmin.DriveStateCorrupt
}
continue
}
_, present := network[nodeName]
if !present {
if err := IsServerResolvable(endpoint); err == nil {
network[nodeName] = "online"
} else {
di.State = madmin.DriveStateOk
di.DrivePath = endpoint.Path
di.TotalSpace = diInfo.Total
di.UsedSpace = diInfo.Total - diInfo.Free
di.Utilization = float64((diInfo.Total - diInfo.Free) / diInfo.Total * 100)
}
disks = append(disks, di)
} else {
_, present := network[nodeName]
if !present {
err := IsServerResolvable(endpoint)
if err == nil {
network[nodeName] = "online"
} else {
network[nodeName] = "offline"
}
network[nodeName] = "offline"
}
}
}
@@ -243,6 +59,5 @@ func getLocalServerProperty(endpointZones EndpointZones, r *http.Request) madmin
Version: Version,
CommitID: CommitID,
Network: network,
Disks: disks,
}
}


@@ -20,9 +20,18 @@ import (
"encoding/xml"
)
// ObjectIdentifier carries key name for the object to delete.
type ObjectIdentifier struct {
// DeletedObject objects deleted
type DeletedObject struct {
DeleteMarker bool `xml:"DeleteMarker,omitempty"`
DeleteMarkerVersionID string `xml:"DeleteMarkerVersionId,omitempty"`
ObjectName string `xml:"Key,omitempty"`
VersionID string `xml:"VersionId,omitempty"`
}
// ObjectToDelete carries key name for the object to delete.
type ObjectToDelete struct {
ObjectName string `xml:"Key"`
VersionID string `xml:"VersionId"`
}
// createBucketConfiguration container for bucket configuration request from client.
@@ -37,5 +46,5 @@ type DeleteObjectsRequest struct {
// Element to enable quiet mode for the request
Quiet bool
// List of objects to be deleted
Objects []ObjectIdentifier `xml:"Object"`
Objects []ObjectToDelete `xml:"Object"`
}
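
A small sketch of how these structs decode an S3 multi-delete payload with encoding/xml (the sample XML is illustrative, not taken from the source):

package main

import (
	"encoding/xml"
	"fmt"
)

// ObjectToDelete and DeleteObjectsRequest mirror the structs above.
type ObjectToDelete struct {
	ObjectName string `xml:"Key"`
	VersionID  string `xml:"VersionId"`
}

type DeleteObjectsRequest struct {
	Quiet   bool
	Objects []ObjectToDelete `xml:"Object"`
}

func main() {
	payload := `<Delete>
	  <Quiet>true</Quiet>
	  <Object><Key>a.txt</Key><VersionId>v1</VersionId></Object>
	  <Object><Key>b.txt</Key></Object>
	</Delete>`
	var req DeleteObjectsRequest
	if err := xml.Unmarshal([]byte(payload), &req); err != nil {
		panic(err)
	}
	fmt.Printf("quiet=%v objects=%+v\n", req.Quiet, req.Objects)
}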


@@ -25,18 +25,18 @@ import (
"strings"
"github.com/Azure/azure-storage-blob-go/azblob"
"github.com/aliyun/aliyun-oss-go-sdk/oss"
"google.golang.org/api/googleapi"
minio "github.com/minio/minio-go/v6"
minio "github.com/minio/minio-go/v7"
"github.com/minio/minio-go/v7/pkg/tags"
"github.com/minio/minio/cmd/config/etcd/dns"
"github.com/minio/minio/cmd/crypto"
"github.com/minio/minio/cmd/logger"
"github.com/minio/minio/pkg/auth"
"github.com/minio/minio/pkg/bucket/lifecycle"
objectlock "github.com/minio/minio/pkg/bucket/object/lock"
"github.com/minio/minio/pkg/bucket/object/tagging"
"github.com/minio/minio/pkg/bucket/policy"
"github.com/minio/minio/pkg/bucket/versioning"
"github.com/minio/minio/pkg/event"
"github.com/minio/minio/pkg/hash"
)
@@ -101,6 +101,9 @@ const (
ErrNoSuchBucketLifecycle
ErrNoSuchLifecycleConfiguration
ErrNoSuchBucketSSEConfig
ErrNoSuchCORSConfiguration
ErrNoSuchWebsiteConfiguration
ErrReplicationConfigurationNotFoundError
ErrNoSuchKey
ErrNoSuchUpload
ErrNoSuchVersion
@@ -122,7 +125,8 @@ const (
ErrMissingCredTag
ErrCredMalformed
ErrInvalidRegion
ErrInvalidService
ErrInvalidServiceS3
ErrInvalidServiceSTS
ErrInvalidRequestVersion
ErrMissingSignTag
ErrMissingSignHeadersTag
@@ -150,12 +154,14 @@ const (
ErrBadRequest
ErrKeyTooLongError
ErrInvalidBucketObjectLockConfiguration
ErrObjectLockConfigurationNotFound
ErrObjectLockConfigurationNotAllowed
ErrNoSuchObjectLockConfiguration
ErrObjectLocked
ErrInvalidRetentionDate
ErrPastObjectLockRetainDate
ErrUnknownWORMModeDirective
ErrBucketTaggingNotFound
ErrObjectLockInvalidHeaders
ErrInvalidTagDirective
// Add new error codes here.
@@ -210,6 +216,7 @@ const (
ErrInvalidResourceName
ErrServerNotInitialized
ErrOperationTimedOut
ErrOperationMaxedOut
ErrInvalidRequest
// MinIO storage class error codes
ErrInvalidStorageClass
@@ -233,6 +240,10 @@ const (
ErrAdminCredentialsMismatch
ErrInsecureClientRequest
ErrObjectTampered
// Bucket Quota error codes
ErrAdminBucketQuotaExceeded
ErrAdminNoSuchQuotaConfiguration
ErrAdminBucketQuotaDisabled
ErrHealNotImplemented
ErrHealNoSuchProcess
@@ -332,19 +343,28 @@ const (
ErrAdminProfilerNotEnabled
ErrInvalidDecompressedSize
ErrAddUserInvalidArgument
ErrAdminAccountNotEligible
ErrServiceAccountNotFound
ErrPostPolicyConditionInvalidFormat
)
type errorCodeMap map[APIErrorCode]APIError
func (e errorCodeMap) ToAPIErr(errCode APIErrorCode) APIError {
func (e errorCodeMap) ToAPIErrWithErr(errCode APIErrorCode, err error) APIError {
apiErr, ok := e[errCode]
if !ok {
return e[ErrInternalError]
apiErr = e[ErrInternalError]
}
if err != nil {
apiErr.Description = fmt.Sprintf("%s (%s)", apiErr.Description, err)
}
return apiErr
}
func (e errorCodeMap) ToAPIErr(errCode APIErrorCode) APIError {
return e.ToAPIErrWithErr(errCode, nil)
}
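
The shape of ToAPIErrWithErr above is a map lookup with an internal-error fallback plus optional wrapping of the underlying error into the description. A simplified, self-contained sketch of the same pattern (stand-in types and codes, not MinIO's):

package main

import (
	"errors"
	"fmt"
)

type APIError struct {
	Code        string
	Description string
}

type errorCodeMap map[int]APIError

const errInternalError = 0

// ToAPIErrWithErr falls back to the internal error for unknown codes and
// appends the underlying error, when given, to the description.
func (e errorCodeMap) ToAPIErrWithErr(code int, err error) APIError {
	apiErr, ok := e[code]
	if !ok {
		apiErr = e[errInternalError]
	}
	if err != nil {
		apiErr.Description = fmt.Sprintf("%s (%s)", apiErr.Description, err)
	}
	return apiErr
}

func main() {
	codes := errorCodeMap{
		errInternalError: {Code: "InternalError", Description: "We encountered an internal error."},
	}
	fmt.Println(codes.ToAPIErrWithErr(42, errors.New("disk offline")))
}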
// error code to APIError structure, these fields carry respective
// descriptions for all the error responses.
var errorCodes = errorCodeMap{
@@ -440,7 +460,7 @@ var errorCodes = errorCodeMap{
},
ErrInvalidAccessKeyID: {
Code: "InvalidAccessKeyId",
Description: "The access key ID you provided does not exist in our records.",
Description: "The Access Key Id you provided does not exist in our records.",
HTTPStatusCode: http.StatusForbidden,
},
ErrInvalidBucketName: {
@@ -519,9 +539,9 @@ var errorCodes = errorCodeMap{
HTTPStatusCode: http.StatusNotFound,
},
ErrNoSuchVersion: {
Code: "NoSuchVersion",
Description: "Indicates that the version ID specified in the request does not match an existing version.",
HTTPStatusCode: http.StatusNotFound,
Code: "InvalidArgument",
Description: "Invalid version id specified",
HTTPStatusCode: http.StatusBadRequest,
},
ErrNotImplemented: {
Code: "NotImplemented",
@@ -652,9 +672,14 @@ var errorCodes = errorCodeMap{
// FIXME: Should contain the invalid param set as seen in https://github.com/minio/minio/issues/2385.
// right Description: "Error parsing the X-Amz-Credential parameter; incorrect service \"s4\". This endpoint belongs to \"s3\".".
// Need changes to make sure variable messages can be constructed.
ErrInvalidService: {
Code: "AuthorizationQueryParametersError",
Description: "Error parsing the X-Amz-Credential parameter; incorrect service. This endpoint belongs to \"s3\".",
ErrInvalidServiceS3: {
Code: "AuthorizationParametersError",
Description: "Error parsing the Credential/X-Amz-Credential parameter; incorrect service. This endpoint belongs to \"s3\".",
HTTPStatusCode: http.StatusBadRequest,
},
ErrInvalidServiceSTS: {
Code: "AuthorizationParametersError",
Description: "Error parsing the Credential parameter; incorrect service. This endpoint belongs to \"sts\".",
HTTPStatusCode: http.StatusBadRequest,
},
// FIXME: Should contain the invalid param set as seen in https://github.com/minio/minio/issues/2385.
@@ -757,11 +782,36 @@ var errorCodes = errorCodeMap{
Description: "Bucket is missing ObjectLockConfiguration",
HTTPStatusCode: http.StatusBadRequest,
},
ErrBucketTaggingNotFound: {
Code: "NoSuchTagSet",
Description: "The TagSet does not exist",
HTTPStatusCode: http.StatusNotFound,
},
ErrObjectLockConfigurationNotFound: {
Code: "ObjectLockConfigurationNotFoundError",
Description: "Object Lock configuration does not exist for this bucket",
HTTPStatusCode: http.StatusNotFound,
},
ErrObjectLockConfigurationNotAllowed: {
Code: "InvalidBucketState",
Description: "Object Lock configuration cannot be enabled on existing buckets.",
Description: "Object Lock configuration cannot be enabled on existing buckets",
HTTPStatusCode: http.StatusConflict,
},
ErrNoSuchCORSConfiguration: {
Code: "NoSuchCORSConfiguration",
Description: "The CORS configuration does not exist",
HTTPStatusCode: http.StatusNotFound,
},
ErrNoSuchWebsiteConfiguration: {
Code: "NoSuchWebsiteConfiguration",
Description: "The specified bucket does not have a website configuration",
HTTPStatusCode: http.StatusNotFound,
},
ErrReplicationConfigurationNotFoundError: {
Code: "ReplicationConfigurationNotFoundError",
Description: "The replication configuration was not found",
HTTPStatusCode: http.StatusNotFound,
},
ErrNoSuchObjectLockConfiguration: {
Code: "NoSuchObjectLockConfiguration",
Description: "The specified object does not have a ObjectLock configuration",
@@ -1068,15 +1118,35 @@ var errorCodes = errorCodeMap{
Description: "Credentials in config mismatch with server environment variables",
HTTPStatusCode: http.StatusServiceUnavailable,
},
ErrAdminBucketQuotaExceeded: {
Code: "XMinioAdminBucketQuotaExceeded",
Description: "Bucket quota exceeded",
HTTPStatusCode: http.StatusBadRequest,
},
ErrAdminNoSuchQuotaConfiguration: {
Code: "XMinioAdminNoSuchQuotaConfiguration",
Description: "The quota configuration does not exist",
HTTPStatusCode: http.StatusNotFound,
},
ErrAdminBucketQuotaDisabled: {
Code: "XMinioAdminBucketQuotaDisabled",
Description: "Quota specified but disk usage crawl is disabled on MinIO server",
HTTPStatusCode: http.StatusBadRequest,
},
ErrInsecureClientRequest: {
Code: "XMinioInsecureClientRequest",
Description: "Cannot respond to plain-text request from TLS-encrypted server",
HTTPStatusCode: http.StatusBadRequest,
},
ErrOperationTimedOut: {
Code: "XMinioServerTimedOut",
Description: "A timeout occurred while trying to lock a resource",
HTTPStatusCode: http.StatusRequestTimeout,
Code: "RequestTimeout",
Description: "A timeout occurred while trying to lock a resource, please reduce your request rate",
HTTPStatusCode: http.StatusServiceUnavailable,
},
ErrOperationMaxedOut: {
Code: "SlowDown",
Description: "A timeout exceeded while waiting to proceed with the request, please reduce your request rate",
HTTPStatusCode: http.StatusServiceUnavailable,
},
ErrUnsupportedMetadata: {
Code: "InvalidArgument",
@@ -1573,6 +1643,16 @@ var errorCodes = errorCodeMap{
Description: "User is not allowed to be same as admin access key",
HTTPStatusCode: http.StatusConflict,
},
ErrAdminAccountNotEligible: {
Code: "XMinioInvalidIAMCredentials",
Description: "The administrator key is not eligible for this operation",
HTTPStatusCode: http.StatusConflict,
},
ErrServiceAccountNotFound: {
Code: "XMinioInvalidIAMCredentials",
Description: "The specified service account is not found",
HTTPStatusCode: http.StatusNotFound,
},
ErrPostPolicyConditionInvalidFormat: {
Code: "PostPolicyInvalidKeyName",
Description: "Invalid according to Policy: Policy Condition failed",
@@ -1588,7 +1668,7 @@ func toAPIErrorCode(ctx context.Context, err error) (apiErr APIErrorCode) {
if err == nil {
return ErrNone
}
// Verify if the underlying error is signature mismatch.
switch err {
case errInvalidArgument:
apiErr = ErrAdminInvalidArgument
@@ -1703,6 +1783,10 @@ func toAPIErrorCode(ctx context.Context, err error) (apiErr APIErrorCode) {
apiErr = ErrBucketAlreadyOwnedByYou
case ObjectNotFound:
apiErr = ErrNoSuchKey
case MethodNotAllowed:
apiErr = ErrMethodNotAllowed
case VersionNotFound:
apiErr = ErrNoSuchVersion
case ObjectAlreadyExists:
apiErr = ErrMethodNotAllowed
case ObjectNameInvalid:
@@ -1747,6 +1831,14 @@ func toAPIErrorCode(ctx context.Context, err error) (apiErr APIErrorCode) {
apiErr = ErrNoSuchLifecycleConfiguration
case BucketSSEConfigNotFound:
apiErr = ErrNoSuchBucketSSEConfig
case BucketTaggingNotFound:
apiErr = ErrBucketTaggingNotFound
case BucketObjectLockConfigNotFound:
apiErr = ErrObjectLockConfigurationNotFound
case BucketQuotaConfigNotFound:
apiErr = ErrAdminNoSuchQuotaConfiguration
case BucketQuotaExceeded:
apiErr = ErrAdminBucketQuotaExceeded
case *event.ErrInvalidEventName:
apiErr = ErrEventNotification
case *event.ErrInvalidARN:
@@ -1817,6 +1909,12 @@ func toAPIError(ctx context.Context, err error) APIError {
// their internal error types. This code is only
// useful with gateway implementations.
switch e := err.(type) {
case InvalidArgument:
apiErr = APIError{
Code: "InvalidArgument",
Description: e.Error(),
HTTPStatusCode: errorCodes[ErrInvalidRequest].HTTPStatusCode,
}
case *xml.SyntaxError:
apiErr = APIError{
Code: "MalformedXML",
@@ -1831,13 +1929,19 @@ func toAPIError(ctx context.Context, err error) APIError {
e.Error()),
HTTPStatusCode: http.StatusBadRequest,
}
case versioning.Error:
apiErr = APIError{
Code: "IllegalVersioningConfigurationException",
Description: fmt.Sprintf("Versioning configuration specified in the request is invalid. (%s)", e.Error()),
HTTPStatusCode: http.StatusBadRequest,
}
case lifecycle.Error:
apiErr = APIError{
Code: "InvalidRequest",
Description: e.Error(),
HTTPStatusCode: http.StatusBadRequest,
}
case tagging.Error:
case tags.Error:
apiErr = APIError{
Code: e.Code(),
Description: e.Error(),
@@ -1879,12 +1983,6 @@ func toAPIError(ctx context.Context, err error) APIError {
Description: e.Error(),
HTTPStatusCode: e.Response().StatusCode,
}
case oss.ServiceError:
apiErr = APIError{
Code: e.Code,
Description: e.Message,
HTTPStatusCode: e.StatusCode,
}
// Add more Gateway SDKs here if any in future.
}
}


@@ -29,6 +29,7 @@ import (
"github.com/minio/minio/cmd/crypto"
xhttp "github.com/minio/minio/cmd/http"
"github.com/minio/minio/pkg/bucket/lifecycle"
)
// Returns a hexadecimal representation of time at the
@@ -68,6 +69,13 @@ func encodeResponseJSON(response interface{}) []byte {
return bytesBuffer.Bytes()
}
// Write parts count
func setPartsCountHeaders(w http.ResponseWriter, objInfo ObjectInfo) {
if strings.Contains(objInfo.ETag, "-") && len(objInfo.Parts) > 0 {
w.Header()[xhttp.AmzMpPartsCount] = []string{strconv.Itoa(len(objInfo.Parts))}
}
}
// Write object header
func setObjectHeaders(w http.ResponseWriter, objInfo ObjectInfo, rs *HTTPRangeSpec) (err error) {
// set common headers
@@ -82,10 +90,6 @@ func setObjectHeaders(w http.ResponseWriter, objInfo ObjectInfo, rs *HTTPRangeSp
w.Header()[xhttp.ETag] = []string{"\"" + objInfo.ETag + "\""}
}
if strings.Contains(objInfo.ETag, "-") && len(objInfo.Parts) > 0 {
w.Header().Set(xhttp.AmzMpPartsCount, strconv.Itoa(len(objInfo.Parts)))
}
if objInfo.ContentType != "" {
w.Header().Set(xhttp.ContentType, objInfo.ContentType)
}
@@ -97,6 +101,7 @@ func setObjectHeaders(w http.ResponseWriter, objInfo ObjectInfo, rs *HTTPRangeSp
if !objInfo.Expires.IsZero() {
w.Header().Set(xhttp.Expires, objInfo.Expires.UTC().Format(http.TimeFormat))
}
if globalCacheConfig.Enabled {
w.Header().Set(xhttp.XCache, objInfo.CacheStatus.String())
w.Header().Set(xhttp.XCacheLookup, objInfo.CacheLookupStatus.String())
@@ -105,34 +110,34 @@ func setObjectHeaders(w http.ResponseWriter, objInfo ObjectInfo, rs *HTTPRangeSp
// Set tag count if object has tags
tags, _ := url.ParseQuery(objInfo.UserTags)
tagCount := len(tags)
if tagCount != 0 {
w.Header().Set(xhttp.AmzTagCount, strconv.Itoa(tagCount))
if tagCount > 0 {
w.Header()[xhttp.AmzTagCount] = []string{strconv.Itoa(tagCount)}
}
// Set all other user defined metadata.
for k, v := range objInfo.UserDefined {
if HasPrefix(k, ReservedMetadataPrefix) {
if strings.HasPrefix(strings.ToLower(k), ReservedMetadataPrefixLower) {
// Do not need to send any internal metadata
// values to client.
continue
}
w.Header().Set(k, v)
var isSet bool
for _, userMetadataPrefix := range userMetadataKeyPrefixes {
if !strings.HasPrefix(k, userMetadataPrefix) {
continue
}
w.Header()[strings.ToLower(k)] = []string{v}
isSet = true
break
}
if !isSet {
w.Header().Set(k, v)
}
}
var totalObjectSize int64
switch {
case crypto.IsEncrypted(objInfo.UserDefined):
totalObjectSize, err = objInfo.DecryptedSize()
if err != nil {
return err
}
case objInfo.IsCompressed():
totalObjectSize = objInfo.GetActualSize()
if totalObjectSize < 0 {
return errInvalidDecompressedSize
}
default:
totalObjectSize = objInfo.Size
totalObjectSize, err := objInfo.GetActualSize()
if err != nil {
return err
}
// for providing ranged content
@@ -148,5 +153,26 @@ func setObjectHeaders(w http.ResponseWriter, objInfo ObjectInfo, rs *HTTPRangeSp
w.Header().Set(xhttp.ContentRange, contentRange)
}
// Set the relevant version ID as part of the response header.
if objInfo.VersionID != "" {
w.Header()[xhttp.AmzVersionID] = []string{objInfo.VersionID}
}
if lc, err := globalLifecycleSys.Get(objInfo.Bucket); err == nil {
ruleID, expiryTime := lc.PredictExpiryTime(lifecycle.ObjectOpts{
Name: objInfo.Name,
UserTags: objInfo.UserTags,
VersionID: objInfo.VersionID,
ModTime: objInfo.ModTime,
IsLatest: objInfo.IsLatest,
DeleteMarker: objInfo.DeleteMarker,
})
if !expiryTime.IsZero() {
w.Header()[xhttp.AmzExpiration] = []string{
fmt.Sprintf(`expiry-date="%s", rule-id="%s"`, expiryTime.Format(http.TimeFormat), ruleID),
}
}
}
return nil
}
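
A note on the w.Header()[key] = []string{value} style this hunk moves toward, as opposed to w.Header().Set: Set canonicalizes the key via net/textproto (so "ETag" becomes "Etag", and lowercase metadata keys get re-cased), while direct map assignment stores the key verbatim. A tiny demonstration:

package main

import (
	"fmt"
	"net/http"
)

func main() {
	h := http.Header{}
	h.Set("ETag", `"abc"`)                 // Set canonicalizes the key to "Etag"
	h["x-amz-meta-my-key"] = []string{"v"} // direct assignment keeps the key verbatim
	for k := range h {
		fmt.Println(k) // two keys: "Etag" and "x-amz-meta-my-key" (map order is random)
	}
}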


@@ -33,8 +33,10 @@ import (
)
const (
timeFormatAMZLong = "2006-01-02T15:04:05.000Z" // Reply date format with nanosecond precision.
maxObjectList = 10000 // Limit number of objects in a listObjectsResponse.
// RFC3339 a subset of the ISO8601 timestamp format. e.g 2014-04-29T18:30:38Z
iso8601TimeFormat = "2006-01-02T15:04:05.000Z" // Reply date format with nanosecond precision.
maxObjectList = 10000 // Limit number of objects in a listObjectsResponse/listObjectsVersionsResponse.
maxDeleteList = 10000 // Limit number of objects deleted in a delete call.
maxUploadsList = 10000 // Limit number of uploads in a listUploadsResponse.
maxPartsList = 10000 // Limit number of parts in a listPartsResponse.
)
@@ -233,10 +235,23 @@ type Bucket struct {
// ObjectVersion container for object version metadata
type ObjectVersion struct {
XMLName xml.Name `xml:"http://s3.amazonaws.com/doc/2006-03-01/ Version" json:"-"`
Object
VersionID string `xml:"VersionId"`
IsLatest bool
VersionID string `xml:"VersionId"`
isDeleteMarker bool
}
// MarshalXML - marshal ObjectVersion
func (o ObjectVersion) MarshalXML(e *xml.Encoder, start xml.StartElement) error {
if o.isDeleteMarker {
start.Name.Local = "DeleteMarker"
} else {
start.Name.Local = "Version"
}
type objectVersionWrapper ObjectVersion
return e.EncodeElement(objectVersionWrapper(o), start)
}
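
The MarshalXML above combines two standard encoding/xml techniques: renaming the start element per value, and converting to a local wrapper type so EncodeElement does not recurse back into the custom marshaler. A runnable sketch of the same technique with simplified fields:

package main

import (
	"encoding/xml"
	"log"
	"os"
)

type ObjectVersion struct {
	Key            string
	VersionID      string `xml:"VersionId"`
	isDeleteMarker bool   // unexported: skipped by encoding/xml itself
}

// MarshalXML picks the element name per entry; the wrapper type carries no
// custom marshaler, so EncodeElement does not call back into this method.
func (o ObjectVersion) MarshalXML(e *xml.Encoder, start xml.StartElement) error {
	if o.isDeleteMarker {
		start.Name.Local = "DeleteMarker"
	} else {
		start.Name.Local = "Version"
	}
	type objectVersionWrapper ObjectVersion
	return e.EncodeElement(objectVersionWrapper(o), start)
}

func main() {
	enc := xml.NewEncoder(os.Stdout)
	enc.Indent("", "  ")
	for _, v := range []ObjectVersion{
		{Key: "a.txt", VersionID: "v2"},
		{Key: "a.txt", VersionID: "v1", isDeleteMarker: true},
	} {
		if err := enc.Encode(v); err != nil { // emits <Version> or <DeleteMarker>
			log.Fatal(err)
		}
	}
}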
// StringMap is a map[string]string.
@@ -331,9 +346,10 @@ type CompleteMultipartUploadResponse struct {
// DeleteError structure.
type DeleteError struct {
Code string
Message string
Key string
Code string
Message string
Key string
VersionID string `xml:"VersionId"`
}
// DeleteObjectsResponse container for multiple object deletes.
@@ -341,7 +357,7 @@ type DeleteObjectsResponse struct {
XMLName xml.Name `xml:"http://s3.amazonaws.com/doc/2006-03-01/ DeleteResult" json:"-"`
// Collection of all deleted objects
DeletedObjects []ObjectIdentifier `xml:"Deleted,omitempty"`
DeletedObjects []DeletedObject `xml:"Deleted,omitempty"`
// Collection of errors deleting certain objects.
Errors []DeleteError `xml:"Error,omitempty"`
@@ -400,7 +416,7 @@ func generateListBucketsResponse(buckets []BucketInfo) ListBucketsResponse {
for _, bucket := range buckets {
var listbucket = Bucket{}
listbucket.Name = bucket.Name
listbucket.CreationDate = bucket.Created.UTC().Format(timeFormatAMZLong)
listbucket.CreationDate = bucket.Created.UTC().Format(iso8601TimeFormat)
listbuckets = append(listbuckets, listbucket)
}
@@ -411,7 +427,7 @@ func generateListBucketsResponse(buckets []BucketInfo) ListBucketsResponse {
}
// generates an ListBucketVersions response for the said bucket with other enumerated options.
func generateListVersionsResponse(bucket, prefix, marker, delimiter, encodingType string, maxKeys int, resp ListObjectsInfo) ListVersionsResponse {
func generateListVersionsResponse(bucket, prefix, marker, versionIDMarker, delimiter, encodingType string, maxKeys int, resp ListObjectVersionsInfo) ListVersionsResponse {
var versions []ObjectVersion
var prefixes []CommonPrefix
var owner = Owner{}
@@ -424,20 +440,28 @@ func generateListVersionsResponse(bucket, prefix, marker, delimiter, encodingTyp
continue
}
content.Key = s3EncodeName(object.Name, encodingType)
content.LastModified = object.ModTime.UTC().Format(timeFormatAMZLong)
content.LastModified = object.ModTime.UTC().Format(iso8601TimeFormat)
if object.ETag != "" {
content.ETag = "\"" + object.ETag + "\""
}
content.Size = object.Size
content.StorageClass = object.StorageClass
if object.StorageClass != "" {
content.StorageClass = object.StorageClass
} else {
content.StorageClass = globalMinioDefaultStorageClass
}
content.Owner = owner
content.VersionID = "null"
content.IsLatest = true
content.VersionID = object.VersionID
if content.VersionID == "" {
content.VersionID = nullVersionID
}
content.IsLatest = object.IsLatest
content.isDeleteMarker = object.DeleteMarker
versions = append(versions, content)
}
data.Name = bucket
data.Versions = versions
data.EncodingType = encodingType
data.Prefix = s3EncodeName(prefix, encodingType)
data.KeyMarker = s3EncodeName(marker, encodingType)
@@ -445,6 +469,8 @@ func generateListVersionsResponse(bucket, prefix, marker, delimiter, encodingTyp
data.MaxKeys = maxKeys
data.NextKeyMarker = s3EncodeName(resp.NextMarker, encodingType)
data.NextVersionIDMarker = resp.NextVersionIDMarker
data.VersionIDMarker = versionIDMarker
data.IsTruncated = resp.IsTruncated
for _, prefix := range resp.Prefixes {
@@ -470,12 +496,16 @@ func generateListObjectsV1Response(bucket, prefix, marker, delimiter, encodingTy
continue
}
content.Key = s3EncodeName(object.Name, encodingType)
content.LastModified = object.ModTime.UTC().Format(timeFormatAMZLong)
content.LastModified = object.ModTime.UTC().Format(iso8601TimeFormat)
if object.ETag != "" {
content.ETag = "\"" + object.ETag + "\""
}
content.Size = object.Size
content.StorageClass = object.StorageClass
if object.StorageClass != "" {
content.StorageClass = object.StorageClass
} else {
content.StorageClass = globalMinioDefaultStorageClass
}
content.Owner = owner
contents = append(contents, content)
}
@@ -516,17 +546,21 @@ func generateListObjectsV2Response(bucket, prefix, token, nextToken, startAfter,
continue
}
content.Key = s3EncodeName(object.Name, encodingType)
content.LastModified = object.ModTime.UTC().Format(timeFormatAMZLong)
content.LastModified = object.ModTime.UTC().Format(iso8601TimeFormat)
if object.ETag != "" {
content.ETag = "\"" + object.ETag + "\""
}
content.Size = object.Size
content.StorageClass = object.StorageClass
if object.StorageClass != "" {
content.StorageClass = object.StorageClass
} else {
content.StorageClass = globalMinioDefaultStorageClass
}
content.Owner = owner
if metadata {
content.UserMetadata = make(StringMap)
for k, v := range CleanMinioInternalMetadataKeys(object.UserDefined) {
if HasPrefix(k, ReservedMetadataPrefix) {
if strings.HasPrefix(strings.ToLower(k), ReservedMetadataPrefixLower) {
// Do not need to send any internal metadata
// values to client.
continue
@@ -561,7 +595,7 @@ func generateListObjectsV2Response(bucket, prefix, token, nextToken, startAfter,
func generateCopyObjectResponse(etag string, lastModified time.Time) CopyObjectResponse {
return CopyObjectResponse{
ETag: "\"" + etag + "\"",
LastModified: lastModified.UTC().Format(timeFormatAMZLong),
LastModified: lastModified.UTC().Format(iso8601TimeFormat),
}
}
@@ -569,7 +603,7 @@ func generateCopyObjectResponse(etag string, lastModified time.Time) CopyObjectR
func generateCopyObjectPartResponse(etag string, lastModified time.Time) CopyObjectPartResponse {
return CopyObjectPartResponse{
ETag: "\"" + etag + "\"",
LastModified: lastModified.UTC().Format(timeFormatAMZLong),
LastModified: lastModified.UTC().Format(iso8601TimeFormat),
}
}
@@ -588,7 +622,8 @@ func generateCompleteMultpartUploadResponse(bucket, key, location, etag string)
Location: location,
Bucket: bucket,
Key: key,
ETag: etag,
// AWS S3 quotes the ETag in XML, make sure we are compatible here.
ETag: "\"" + etag + "\"",
}
}
@@ -613,7 +648,7 @@ func generateListPartsResponse(partsInfo ListPartsInfo, encodingType string) Lis
newPart.PartNumber = part.PartNumber
newPart.ETag = "\"" + part.ETag + "\""
newPart.Size = part.Size
newPart.LastModified = part.LastModified.UTC().Format(timeFormatAMZLong)
newPart.LastModified = part.LastModified.UTC().Format(iso8601TimeFormat)
listPartsResponse.Parts[index] = newPart
}
return listPartsResponse
@@ -643,23 +678,30 @@ func generateListMultipartUploadsResponse(bucket string, multipartsInfo ListMult
newUpload := Upload{}
newUpload.UploadID = upload.UploadID
newUpload.Key = s3EncodeName(upload.Object, encodingType)
newUpload.Initiated = upload.Initiated.UTC().Format(timeFormatAMZLong)
newUpload.Initiated = upload.Initiated.UTC().Format(iso8601TimeFormat)
listMultipartUploadsResponse.Uploads[index] = newUpload
}
return listMultipartUploadsResponse
}
// generate multi objects delete response.
func generateMultiDeleteResponse(quiet bool, deletedObjects []ObjectIdentifier, errs []DeleteError) DeleteObjectsResponse {
func generateMultiDeleteResponse(quiet bool, deletedObjects []DeletedObject, errs []DeleteError) DeleteObjectsResponse {
deleteResp := DeleteObjectsResponse{}
if !quiet {
deleteResp.DeletedObjects = deletedObjects
}
if len(errs) == len(deletedObjects) {
deleteResp.DeletedObjects = nil
}
deleteResp.Errors = errs
return deleteResp
}
func writeResponse(w http.ResponseWriter, statusCode int, response []byte, mType mimeType) {
if newObjectLayerFn() == nil {
// Server still in safe mode.
w.Header().Set(xhttp.MinIOServerStatus, "safemode")
}
setCommonHeaders(w)
if mType != mimeNone {
w.Header().Set(xhttp.ContentType, string(mType))
@@ -722,6 +764,10 @@ func writeErrorResponse(ctx context.Context, w http.ResponseWriter, err APIError
// The request is from browser and also if browser
// is enabled we need to redirect.
if browser && globalBrowserEnabled {
if newObjectLayerFn() == nil {
// server still in safe mode.
w.Header().Set(xhttp.MinIOServerStatus, "safemode")
}
w.Header().Set(xhttp.Location, minioReservedBucketPath+reqURL.Path)
w.WriteHeader(http.StatusTemporaryRedirect)
return
@@ -753,17 +799,6 @@ func writeErrorResponseJSON(ctx context.Context, w http.ResponseWriter, err APIE
writeResponse(w, err.HTTPStatusCode, encodedErrorResponse, mimeJSON)
}
// writeVersionMismatchResponse - writes custom error responses for version mismatches.
func writeVersionMismatchResponse(ctx context.Context, w http.ResponseWriter, err APIError, reqURL *url.URL, isJSON bool) {
if isJSON {
// Generate error response.
errorResponse := getAPIErrorResponse(ctx, err, reqURL.String(), w.Header().Get(xhttp.AmzRequestID), globalDeploymentID)
writeResponse(w, err.HTTPStatusCode, encodeResponseJSON(errorResponse), mimeJSON)
} else {
writeResponse(w, err.HTTPStatusCode, []byte(err.Description), mimeNone)
}
}
// writeCustomErrorResponseJSON - similar to writeErrorResponseJSON,
// but accepts the error message directly (this allows messages to be
// dynamically generated.)


@@ -1,5 +1,5 @@
/*
* MinIO Cloud Storage, (C) 2016 MinIO, Inc.
* MinIO Cloud Storage, (C) 2016-2020 MinIO, Inc.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
@@ -21,6 +21,8 @@ import (
"github.com/gorilla/mux"
xhttp "github.com/minio/minio/cmd/http"
"github.com/minio/minio/pkg/wildcard"
"github.com/rs/cors"
)
func newHTTPServerFn() *xhttp.Server {
@@ -83,149 +85,255 @@ func registerAPIRouter(router *mux.Router, encryptionEnabled, allowSSEKMS bool)
var routers []*mux.Router
for _, domainName := range globalDomainNames {
routers = append(routers, apiRouter.Host("{bucket:.+}."+domainName).Subrouter())
routers = append(routers, apiRouter.Host("{bucket:.+}."+domainName+":{port:.*}").Subrouter())
}
routers = append(routers, apiRouter.PathPrefix("/{bucket}").Subrouter())
for _, bucket := range routers {
// Object operations
// HeadObject
bucket.Methods(http.MethodHead).Path("/{object:.+}").HandlerFunc(collectAPIStats("headobject", httpTraceAll(api.HeadObjectHandler)))
bucket.Methods(http.MethodHead).Path("/{object:.+}").HandlerFunc(
maxClients(collectAPIStats("headobject", httpTraceAll(api.HeadObjectHandler))))
// CopyObjectPart
bucket.Methods(http.MethodPut).Path("/{object:.+}").HeadersRegexp(xhttp.AmzCopySource, ".*?(\\/|%2F).*?").HandlerFunc(collectAPIStats("copyobjectpart", httpTraceAll(api.CopyObjectPartHandler))).Queries("partNumber", "{partNumber:[0-9]+}", "uploadId", "{uploadId:.*}")
bucket.Methods(http.MethodPut).Path("/{object:.+}").HeadersRegexp(xhttp.AmzCopySource, ".*?(\\/|%2F).*?").HandlerFunc(maxClients(collectAPIStats("copyobjectpart", httpTraceAll(api.CopyObjectPartHandler)))).Queries("partNumber", "{partNumber:[0-9]+}", "uploadId", "{uploadId:.*}")
// PutObjectPart
bucket.Methods(http.MethodPut).Path("/{object:.+}").HandlerFunc(collectAPIStats("putobjectpart", httpTraceHdrs(api.PutObjectPartHandler))).Queries("partNumber", "{partNumber:[0-9]+}", "uploadId", "{uploadId:.*}")
bucket.Methods(http.MethodPut).Path("/{object:.+}").HandlerFunc(
maxClients(collectAPIStats("putobjectpart", httpTraceHdrs(api.PutObjectPartHandler)))).Queries("partNumber", "{partNumber:[0-9]+}", "uploadId", "{uploadId:.*}")
// ListObjectParts
bucket.Methods(http.MethodGet).Path("/{object:.+}").HandlerFunc(collectAPIStats("listobjectparts", httpTraceAll(api.ListObjectPartsHandler))).Queries("uploadId", "{uploadId:.*}")
bucket.Methods(http.MethodGet).Path("/{object:.+}").HandlerFunc(
maxClients(collectAPIStats("listobjectparts", httpTraceAll(api.ListObjectPartsHandler)))).Queries("uploadId", "{uploadId:.*}")
// CompleteMultipartUpload
bucket.Methods(http.MethodPost).Path("/{object:.+}").HandlerFunc(collectAPIStats("completemutipartupload", httpTraceAll(api.CompleteMultipartUploadHandler))).Queries("uploadId", "{uploadId:.*}")
bucket.Methods(http.MethodPost).Path("/{object:.+}").HandlerFunc(
maxClients(collectAPIStats("completemutipartupload", httpTraceAll(api.CompleteMultipartUploadHandler)))).Queries("uploadId", "{uploadId:.*}")
// NewMultipartUpload
bucket.Methods(http.MethodPost).Path("/{object:.+}").HandlerFunc(collectAPIStats("newmultipartupload", httpTraceAll(api.NewMultipartUploadHandler))).Queries("uploads", "")
bucket.Methods(http.MethodPost).Path("/{object:.+}").HandlerFunc(
maxClients(collectAPIStats("newmultipartupload", httpTraceAll(api.NewMultipartUploadHandler)))).Queries("uploads", "")
// AbortMultipartUpload
bucket.Methods(http.MethodDelete).Path("/{object:.+}").HandlerFunc(collectAPIStats("abortmultipartupload", httpTraceAll(api.AbortMultipartUploadHandler))).Queries("uploadId", "{uploadId:.*}")
bucket.Methods(http.MethodDelete).Path("/{object:.+}").HandlerFunc(
maxClients(collectAPIStats("abortmultipartupload", httpTraceAll(api.AbortMultipartUploadHandler)))).Queries("uploadId", "{uploadId:.*}")
// GetObjectACL - this is a dummy call.
bucket.Methods(http.MethodGet).Path("/{object:.+}").HandlerFunc(collectAPIStats("getobjectacl", httpTraceHdrs(api.GetObjectACLHandler))).Queries("acl", "")
bucket.Methods(http.MethodGet).Path("/{object:.+}").HandlerFunc(
maxClients(collectAPIStats("getobjectacl", httpTraceHdrs(api.GetObjectACLHandler)))).Queries("acl", "")
// PutObjectACL - this is a dummy call.
bucket.Methods(http.MethodPut).Path("/{object:.+}").HandlerFunc(collectAPIStats("putobjectacl", httpTraceHdrs(api.PutObjectACLHandler))).Queries("acl", "")
bucket.Methods(http.MethodPut).Path("/{object:.+}").HandlerFunc(
maxClients(collectAPIStats("putobjectacl", httpTraceHdrs(api.PutObjectACLHandler)))).Queries("acl", "")
// GetObjectTagging
bucket.Methods(http.MethodGet).Path("/{object:.+}").HandlerFunc(collectAPIStats("getobjecttagging", httpTraceHdrs(api.GetObjectTaggingHandler))).Queries("tagging", "")
bucket.Methods(http.MethodGet).Path("/{object:.+}").HandlerFunc(
maxClients(collectAPIStats("getobjecttagging", httpTraceHdrs(api.GetObjectTaggingHandler)))).Queries("tagging", "")
// PutObjectTagging
bucket.Methods(http.MethodPut).Path("/{object:.+}").HandlerFunc(collectAPIStats("putobjecttagging", httpTraceHdrs(api.PutObjectTaggingHandler))).Queries("tagging", "")
bucket.Methods(http.MethodPut).Path("/{object:.+}").HandlerFunc(
maxClients(collectAPIStats("putobjecttagging", httpTraceHdrs(api.PutObjectTaggingHandler)))).Queries("tagging", "")
// DeleteObjectTagging
bucket.Methods(http.MethodDelete).Path("/{object:.+}").HandlerFunc(collectAPIStats("deleteobjecttagging", httpTraceHdrs(api.DeleteObjectTaggingHandler))).Queries("tagging", "")
bucket.Methods(http.MethodDelete).Path("/{object:.+}").HandlerFunc(
maxClients(collectAPIStats("deleteobjecttagging", httpTraceHdrs(api.DeleteObjectTaggingHandler)))).Queries("tagging", "")
// SelectObjectContent
bucket.Methods(http.MethodPost).Path("/{object:.+}").HandlerFunc(collectAPIStats("selectobjectcontent", httpTraceHdrs(api.SelectObjectContentHandler))).Queries("select", "").Queries("select-type", "2")
bucket.Methods(http.MethodPost).Path("/{object:.+}").HandlerFunc(
maxClients(collectAPIStats("selectobjectcontent", httpTraceHdrs(api.SelectObjectContentHandler)))).Queries("select", "").Queries("select-type", "2")
// GetObjectRetention
bucket.Methods(http.MethodGet).Path("/{object:.+}").HandlerFunc(collectAPIStats("getobjectretention", httpTraceAll(api.GetObjectRetentionHandler))).Queries("retention", "")
bucket.Methods(http.MethodGet).Path("/{object:.+}").HandlerFunc(
maxClients(collectAPIStats("getobjectretention", httpTraceAll(api.GetObjectRetentionHandler)))).Queries("retention", "")
// GetObjectLegalHold
bucket.Methods(http.MethodGet).Path("/{object:.+}").HandlerFunc(collectAPIStats("getobjectlegalhold", httpTraceAll(api.GetObjectLegalHoldHandler))).Queries("legal-hold", "")
bucket.Methods(http.MethodGet).Path("/{object:.+}").HandlerFunc(
maxClients(collectAPIStats("getobjectlegalhold", httpTraceAll(api.GetObjectLegalHoldHandler)))).Queries("legal-hold", "")
// GetObject
bucket.Methods(http.MethodGet).Path("/{object:.+}").HandlerFunc(collectAPIStats("getobject", httpTraceHdrs(api.GetObjectHandler)))
bucket.Methods(http.MethodGet).Path("/{object:.+}").HandlerFunc(
maxClients(collectAPIStats("getobject", httpTraceHdrs(api.GetObjectHandler))))
// CopyObject
bucket.Methods(http.MethodPut).Path("/{object:.+}").HeadersRegexp(xhttp.AmzCopySource, ".*?(\\/|%2F).*?").HandlerFunc(collectAPIStats("copyobject", httpTraceAll(api.CopyObjectHandler)))
bucket.Methods(http.MethodPut).Path("/{object:.+}").HeadersRegexp(xhttp.AmzCopySource, ".*?(\\/|%2F).*?").HandlerFunc(maxClients(collectAPIStats("copyobject", httpTraceAll(api.CopyObjectHandler))))
// PutObjectRetention
bucket.Methods(http.MethodPut).Path("/{object:.+}").HandlerFunc(collectAPIStats("putobjectretention", httpTraceAll(api.PutObjectRetentionHandler))).Queries("retention", "")
bucket.Methods(http.MethodPut).Path("/{object:.+}").HandlerFunc(
maxClients(collectAPIStats("putobjectretention", httpTraceAll(api.PutObjectRetentionHandler)))).Queries("retention", "")
// PutObjectLegalHold
bucket.Methods(http.MethodPut).Path("/{object:.+}").HandlerFunc(collectAPIStats("putobjectlegalhold", httpTraceAll(api.PutObjectLegalHoldHandler))).Queries("legal-hold", "")
bucket.Methods(http.MethodPut).Path("/{object:.+}").HandlerFunc(
maxClients(collectAPIStats("putobjectlegalhold", httpTraceAll(api.PutObjectLegalHoldHandler)))).Queries("legal-hold", "")
// PutObject
bucket.Methods(http.MethodPut).Path("/{object:.+}").HandlerFunc(collectAPIStats("putobject", httpTraceHdrs(api.PutObjectHandler)))
bucket.Methods(http.MethodPut).Path("/{object:.+}").HandlerFunc(
maxClients(collectAPIStats("putobject", httpTraceHdrs(api.PutObjectHandler))))
// DeleteObject
bucket.Methods(http.MethodDelete).Path("/{object:.+}").HandlerFunc(collectAPIStats("deleteobject", httpTraceAll(api.DeleteObjectHandler)))
bucket.Methods(http.MethodDelete).Path("/{object:.+}").HandlerFunc(
maxClients(collectAPIStats("deleteobject", httpTraceAll(api.DeleteObjectHandler))))
/// Bucket operations
// GetBucketLocation
bucket.Methods(http.MethodGet).HandlerFunc(collectAPIStats("getbucketlocation", httpTraceAll(api.GetBucketLocationHandler))).Queries("location", "")
bucket.Methods(http.MethodGet).HandlerFunc(
maxClients(collectAPIStats("getbucketlocation", httpTraceAll(api.GetBucketLocationHandler)))).Queries("location", "")
// GetBucketPolicy
bucket.Methods(http.MethodGet).HandlerFunc(collectAPIStats("getbucketpolicy", httpTraceAll(api.GetBucketPolicyHandler))).Queries("policy", "")
bucket.Methods(http.MethodGet).HandlerFunc(
maxClients(collectAPIStats("getbucketpolicy", httpTraceAll(api.GetBucketPolicyHandler)))).Queries("policy", "")
// GetBucketLifecycle
bucket.Methods(http.MethodGet).HandlerFunc(collectAPIStats("getbucketlifecycle", httpTraceAll(api.GetBucketLifecycleHandler))).Queries("lifecycle", "")
bucket.Methods(http.MethodGet).HandlerFunc(
maxClients(collectAPIStats("getbucketlifecycle", httpTraceAll(api.GetBucketLifecycleHandler)))).Queries("lifecycle", "")
// GetBucketEncryption
bucket.Methods(http.MethodGet).HandlerFunc(collectAPIStats("getbucketencryption", httpTraceAll(api.GetBucketEncryptionHandler))).Queries("encryption", "")
bucket.Methods(http.MethodGet).HandlerFunc(
maxClients(collectAPIStats("getbucketencryption", httpTraceAll(api.GetBucketEncryptionHandler)))).Queries("encryption", "")
// Dummy Bucket Calls
// GetBucketACL -- this is a dummy call.
bucket.Methods(http.MethodGet).HandlerFunc(collectAPIStats("getbucketacl", httpTraceAll(api.GetBucketACLHandler))).Queries("acl", "")
bucket.Methods(http.MethodGet).HandlerFunc(
maxClients(collectAPIStats("getbucketacl", httpTraceAll(api.GetBucketACLHandler)))).Queries("acl", "")
// PutBucketACL -- this is a dummy call.
bucket.Methods(http.MethodPut).HandlerFunc(collectAPIStats("putbucketacl", httpTraceAll(api.PutBucketACLHandler))).Queries("acl", "")
bucket.Methods(http.MethodPut).HandlerFunc(
maxClients(collectAPIStats("putbucketacl", httpTraceAll(api.PutBucketACLHandler)))).Queries("acl", "")
// GetBucketCors - this is a dummy call.
bucket.Methods(http.MethodGet).HandlerFunc(collectAPIStats("getbucketcors", httpTraceAll(api.GetBucketCorsHandler))).Queries("cors", "")
bucket.Methods(http.MethodGet).HandlerFunc(
maxClients(collectAPIStats("getbucketcors", httpTraceAll(api.GetBucketCorsHandler)))).Queries("cors", "")
// GetBucketWebsiteHandler - this is a dummy call.
bucket.Methods(http.MethodGet).HandlerFunc(collectAPIStats("getbucketwebsite", httpTraceAll(api.GetBucketWebsiteHandler))).Queries("website", "")
bucket.Methods(http.MethodGet).HandlerFunc(
maxClients(collectAPIStats("getbucketwebsite", httpTraceAll(api.GetBucketWebsiteHandler)))).Queries("website", "")
// GetBucketAccelerateHandler - this is a dummy call.
bucket.Methods(http.MethodGet).HandlerFunc(collectAPIStats("getbucketaccelerate", httpTraceAll(api.GetBucketAccelerateHandler))).Queries("accelerate", "")
bucket.Methods(http.MethodGet).HandlerFunc(
maxClients(collectAPIStats("getbucketaccelerate", httpTraceAll(api.GetBucketAccelerateHandler)))).Queries("accelerate", "")
// GetBucketRequestPaymentHandler - this is a dummy call.
bucket.Methods(http.MethodGet).HandlerFunc(collectAPIStats("getbucketrequestpayment", httpTraceAll(api.GetBucketRequestPaymentHandler))).Queries("requestPayment", "")
bucket.Methods(http.MethodGet).HandlerFunc(
maxClients(collectAPIStats("getbucketrequestpayment", httpTraceAll(api.GetBucketRequestPaymentHandler)))).Queries("requestPayment", "")
// GetBucketLoggingHandler - this is a dummy call.
bucket.Methods(http.MethodGet).HandlerFunc(collectAPIStats("getbucketlogging", httpTraceAll(api.GetBucketLoggingHandler))).Queries("logging", "")
bucket.Methods(http.MethodGet).HandlerFunc(
maxClients(collectAPIStats("getbucketlogging", httpTraceAll(api.GetBucketLoggingHandler)))).Queries("logging", "")
// GetBucketLifecycleHandler - this is a dummy call.
bucket.Methods(http.MethodGet).HandlerFunc(collectAPIStats("getbucketlifecycle", httpTraceAll(api.GetBucketLifecycleHandler))).Queries("lifecycle", "")
bucket.Methods(http.MethodGet).HandlerFunc(
maxClients(collectAPIStats("getbucketlifecycle", httpTraceAll(api.GetBucketLifecycleHandler)))).Queries("lifecycle", "")
// GetBucketReplicationHandler - this is a dummy call.
bucket.Methods(http.MethodGet).HandlerFunc(collectAPIStats("getbucketreplication", httpTraceAll(api.GetBucketReplicationHandler))).Queries("replication", "")
// GetBucketTaggingHandler - this is a dummy call.
bucket.Methods(http.MethodGet).HandlerFunc(collectAPIStats("getbuckettagging", httpTraceAll(api.GetBucketTaggingHandler))).Queries("tagging", "")
bucket.Methods(http.MethodGet).HandlerFunc(
maxClients(collectAPIStats("getbucketreplication", httpTraceAll(api.GetBucketReplicationHandler)))).Queries("replication", "")
// GetBucketTaggingHandler
bucket.Methods(http.MethodGet).HandlerFunc(
maxClients(collectAPIStats("getbuckettagging", httpTraceAll(api.GetBucketTaggingHandler)))).Queries("tagging", "")
//DeleteBucketWebsiteHandler
bucket.Methods(http.MethodDelete).HandlerFunc(collectAPIStats("deletebucketwebsite", httpTraceAll(api.DeleteBucketWebsiteHandler))).Queries("website", "")
bucket.Methods(http.MethodDelete).HandlerFunc(
maxClients(collectAPIStats("deletebucketwebsite", httpTraceAll(api.DeleteBucketWebsiteHandler)))).Queries("website", "")
// DeleteBucketTaggingHandler
bucket.Methods(http.MethodDelete).HandlerFunc(collectAPIStats("deletebuckettagging", httpTraceAll(api.DeleteBucketTaggingHandler))).Queries("tagging", "")
bucket.Methods(http.MethodDelete).HandlerFunc(
maxClients(collectAPIStats("deletebuckettagging", httpTraceAll(api.DeleteBucketTaggingHandler)))).Queries("tagging", "")
// GetBucketObjectLockConfig
bucket.Methods(http.MethodGet).HandlerFunc(collectAPIStats("getbucketobjectlockconfiguration", httpTraceAll(api.GetBucketObjectLockConfigHandler))).Queries("object-lock", "")
bucket.Methods(http.MethodGet).HandlerFunc(
maxClients(collectAPIStats("getbucketobjectlockconfiguration", httpTraceAll(api.GetBucketObjectLockConfigHandler)))).Queries("object-lock", "")
// GetBucketVersioning
bucket.Methods(http.MethodGet).HandlerFunc(collectAPIStats("getbucketversioning", httpTraceAll(api.GetBucketVersioningHandler))).Queries("versioning", "")
bucket.Methods(http.MethodGet).HandlerFunc(
maxClients(collectAPIStats("getbucketversioning", httpTraceAll(api.GetBucketVersioningHandler)))).Queries("versioning", "")
// GetBucketNotification
bucket.Methods(http.MethodGet).HandlerFunc(collectAPIStats("getbucketnotification", httpTraceAll(api.GetBucketNotificationHandler))).Queries("notification", "")
bucket.Methods(http.MethodGet).HandlerFunc(
maxClients(collectAPIStats("getbucketnotification", httpTraceAll(api.GetBucketNotificationHandler)))).Queries("notification", "")
// ListenBucketNotification
bucket.Methods(http.MethodGet).HandlerFunc(collectAPIStats("listenbucketnotification", httpTraceAll(api.ListenBucketNotificationHandler))).Queries("events", "{events:.*}")
// ListMultipartUploads
bucket.Methods(http.MethodGet).HandlerFunc(collectAPIStats("listmultipartuploads", httpTraceAll(api.ListMultipartUploadsHandler))).Queries("uploads", "")
bucket.Methods(http.MethodGet).HandlerFunc(
maxClients(collectAPIStats("listmultipartuploads", httpTraceAll(api.ListMultipartUploadsHandler)))).Queries("uploads", "")
// ListObjectsV2M
bucket.Methods(http.MethodGet).HandlerFunc(collectAPIStats("listobjectsv2M", httpTraceAll(api.ListObjectsV2MHandler))).Queries("list-type", "2", "metadata", "true")
bucket.Methods(http.MethodGet).HandlerFunc(
maxClients(collectAPIStats("listobjectsv2M", httpTraceAll(api.ListObjectsV2MHandler)))).Queries("list-type", "2", "metadata", "true")
// ListObjectsV2
bucket.Methods(http.MethodGet).HandlerFunc(collectAPIStats("listobjectsv2", httpTraceAll(api.ListObjectsV2Handler))).Queries("list-type", "2")
// ListBucketVersions
bucket.Methods(http.MethodGet).HandlerFunc(collectAPIStats("listbucketversions", httpTraceAll(api.ListBucketObjectVersionsHandler))).Queries("versions", "")
bucket.Methods(http.MethodGet).HandlerFunc(
maxClients(collectAPIStats("listobjectsv2", httpTraceAll(api.ListObjectsV2Handler)))).Queries("list-type", "2")
// ListObjectVersions
bucket.Methods(http.MethodGet).HandlerFunc(
maxClients(collectAPIStats("listobjectversions", httpTraceAll(api.ListObjectVersionsHandler)))).Queries("versions", "")
// ListObjectsV1 (Legacy)
bucket.Methods(http.MethodGet).HandlerFunc(collectAPIStats("listobjectsv1", httpTraceAll(api.ListObjectsV1Handler)))
bucket.Methods(http.MethodGet).HandlerFunc(
maxClients(collectAPIStats("listobjectsv1", httpTraceAll(api.ListObjectsV1Handler))))
// PutBucketLifecycle
bucket.Methods(http.MethodPut).HandlerFunc(collectAPIStats("putbucketlifecycle", httpTraceAll(api.PutBucketLifecycleHandler))).Queries("lifecycle", "")
bucket.Methods(http.MethodPut).HandlerFunc(
maxClients(collectAPIStats("putbucketlifecycle", httpTraceAll(api.PutBucketLifecycleHandler)))).Queries("lifecycle", "")
// PutBucketEncryption
bucket.Methods(http.MethodPut).HandlerFunc(collectAPIStats("putbucketencryption", httpTraceAll(api.PutBucketEncryptionHandler))).Queries("encryption", "")
bucket.Methods(http.MethodPut).HandlerFunc(
maxClients(collectAPIStats("putbucketencryption", httpTraceAll(api.PutBucketEncryptionHandler)))).Queries("encryption", "")
// PutBucketPolicy
bucket.Methods(http.MethodPut).HandlerFunc(collectAPIStats("putbucketpolicy", httpTraceAll(api.PutBucketPolicyHandler))).Queries("policy", "")
bucket.Methods(http.MethodPut).HandlerFunc(
maxClients(collectAPIStats("putbucketpolicy", httpTraceAll(api.PutBucketPolicyHandler)))).Queries("policy", "")
// PutBucketObjectLockConfig
bucket.Methods(http.MethodPut).HandlerFunc(collectAPIStats("putbucketobjectlockconfig", httpTraceAll(api.PutBucketObjectLockConfigHandler))).Queries("object-lock", "")
bucket.Methods(http.MethodPut).HandlerFunc(
maxClients(collectAPIStats("putbucketobjectlockconfig", httpTraceAll(api.PutBucketObjectLockConfigHandler)))).Queries("object-lock", "")
// PutBucketTaggingHandler
bucket.Methods(http.MethodPut).HandlerFunc(
maxClients(collectAPIStats("putbuckettagging", httpTraceAll(api.PutBucketTaggingHandler)))).Queries("tagging", "")
// PutBucketVersioning
bucket.Methods(http.MethodPut).HandlerFunc(collectAPIStats("putbucketversioning", httpTraceAll(api.PutBucketVersioningHandler))).Queries("versioning", "")
bucket.Methods(http.MethodPut).HandlerFunc(
maxClients(collectAPIStats("putbucketversioning", httpTraceAll(api.PutBucketVersioningHandler)))).Queries("versioning", "")
// PutBucketNotification
bucket.Methods(http.MethodPut).HandlerFunc(collectAPIStats("putbucketnotification", httpTraceAll(api.PutBucketNotificationHandler))).Queries("notification", "")
bucket.Methods(http.MethodPut).HandlerFunc(
maxClients(collectAPIStats("putbucketnotification", httpTraceAll(api.PutBucketNotificationHandler)))).Queries("notification", "")
// PutBucket
bucket.Methods(http.MethodPut).HandlerFunc(collectAPIStats("putbucket", httpTraceAll(api.PutBucketHandler)))
bucket.Methods(http.MethodPut).HandlerFunc(
maxClients(collectAPIStats("putbucket", httpTraceAll(api.PutBucketHandler))))
// HeadBucket
bucket.Methods(http.MethodHead).HandlerFunc(collectAPIStats("headbucket", httpTraceAll(api.HeadBucketHandler)))
bucket.Methods(http.MethodHead).HandlerFunc(
maxClients(collectAPIStats("headbucket", httpTraceAll(api.HeadBucketHandler))))
// PostPolicy
bucket.Methods(http.MethodPost).HeadersRegexp(xhttp.ContentType, "multipart/form-data*").HandlerFunc(collectAPIStats("postpolicybucket", httpTraceHdrs(api.PostPolicyBucketHandler)))
bucket.Methods(http.MethodPost).HeadersRegexp(xhttp.ContentType, "multipart/form-data*").HandlerFunc(
maxClients(collectAPIStats("postpolicybucket", httpTraceHdrs(api.PostPolicyBucketHandler))))
// DeleteMultipleObjects
bucket.Methods(http.MethodPost).HandlerFunc(collectAPIStats("deletemultipleobjects", httpTraceAll(api.DeleteMultipleObjectsHandler))).Queries("delete", "")
bucket.Methods(http.MethodPost).HandlerFunc(
maxClients(collectAPIStats("deletemultipleobjects", httpTraceAll(api.DeleteMultipleObjectsHandler)))).Queries("delete", "")
// DeleteBucketPolicy
bucket.Methods(http.MethodDelete).HandlerFunc(collectAPIStats("deletebucketpolicy", httpTraceAll(api.DeleteBucketPolicyHandler))).Queries("policy", "")
bucket.Methods(http.MethodDelete).HandlerFunc(
maxClients(collectAPIStats("deletebucketpolicy", httpTraceAll(api.DeleteBucketPolicyHandler)))).Queries("policy", "")
// DeleteBucketLifecycle
bucket.Methods(http.MethodDelete).HandlerFunc(collectAPIStats("deletebucketlifecycle", httpTraceAll(api.DeleteBucketLifecycleHandler))).Queries("lifecycle", "")
bucket.Methods(http.MethodDelete).HandlerFunc(
maxClients(collectAPIStats("deletebucketlifecycle", httpTraceAll(api.DeleteBucketLifecycleHandler)))).Queries("lifecycle", "")
// DeleteBucketEncryption
bucket.Methods(http.MethodDelete).HandlerFunc(collectAPIStats("deletebucketencryption", httpTraceAll(api.DeleteBucketEncryptionHandler))).Queries("encryption", "")
bucket.Methods(http.MethodDelete).HandlerFunc(
maxClients(collectAPIStats("deletebucketencryption", httpTraceAll(api.DeleteBucketEncryptionHandler)))).Queries("encryption", "")
// DeleteBucket
bucket.Methods(http.MethodDelete).HandlerFunc(collectAPIStats("deletebucket", httpTraceAll(api.DeleteBucketHandler)))
bucket.Methods(http.MethodDelete).HandlerFunc(
maxClients(collectAPIStats("deletebucket", httpTraceAll(api.DeleteBucketHandler))))
}
/// Root operation
// ListBuckets
apiRouter.Methods(http.MethodGet).Path(SlashSeparator).HandlerFunc(collectAPIStats("listbuckets", httpTraceAll(api.ListBucketsHandler)))
apiRouter.Methods(http.MethodGet).Path(SlashSeparator).HandlerFunc(
maxClients(collectAPIStats("listbuckets", httpTraceAll(api.ListBucketsHandler))))
// S3 browser with signature v4 adds '//' for ListBuckets request, so rather
// than failing with UnknownAPIRequest we simply handle it for now.
apiRouter.Methods(http.MethodGet).Path(SlashSeparator + SlashSeparator).HandlerFunc(
maxClients(collectAPIStats("listbuckets", httpTraceAll(api.ListBucketsHandler))))
// If none of the routes match add default error handler routes
apiRouter.NotFoundHandler = http.HandlerFunc(collectAPIStats("notfound", httpTraceAll(errorResponseHandler)))
apiRouter.MethodNotAllowedHandler = http.HandlerFunc(collectAPIStats("methodnotallowed", httpTraceAll(errorResponseHandler)))
}
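// Sketch (illustrative, not the exact MinIO implementation): every route
// above is wrapped in maxClients, which bounds the number of concurrent S3
// API requests. A minimal version of such a middleware, assuming a
// hypothetical global semaphore channel and a fixed limit, could look like:

var apiRequestsSem = make(chan struct{}, 512) // assumed limit, illustrative

func maxClientsSketch(next http.HandlerFunc) http.HandlerFunc {
	return func(w http.ResponseWriter, r *http.Request) {
		select {
		case apiRequestsSem <- struct{}{}:
			// Acquired a slot; release it when the handler returns.
			defer func() { <-apiRequestsSem }()
			next(w, r)
		default:
			// All slots busy; tell the client to back off and retry.
			w.Header().Set("Retry-After", "1")
			http.Error(w, "too many concurrent requests", http.StatusServiceUnavailable)
		}
	}
}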
// corsHandler - handler for CORS (Cross Origin Resource Sharing)
func corsHandler(handler http.Handler) http.Handler {
commonS3Headers := []string{
xhttp.Date,
xhttp.ETag,
xhttp.ServerInfo,
xhttp.Connection,
xhttp.AcceptRanges,
xhttp.ContentRange,
xhttp.ContentEncoding,
xhttp.ContentLength,
xhttp.ContentType,
"X-Amz*",
"x-amz*",
"*",
}
return cors.New(cors.Options{
AllowOriginFunc: func(origin string) bool {
for _, allowedOrigin := range globalAPIConfig.getCorsAllowOrigins() {
if wildcard.MatchSimple(allowedOrigin, origin) {
return true
}
}
return false
},
AllowedMethods: []string{
http.MethodGet,
http.MethodPut,
http.MethodHead,
http.MethodPost,
http.MethodDelete,
http.MethodOptions,
http.MethodPatch,
},
AllowedHeaders: commonS3Headers,
ExposedHeaders: commonS3Headers,
AllowCredentials: true,
}).Handler(handler)
}
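// Sketch (illustrative): corsHandler is meant to wrap the whole router so
// that OPTIONS preflight and regular requests receive the same CORS
// treatment. The address and wiring below are assumptions, not MinIO's
// actual startup code.
func exampleServeWithCORS() error {
	router := mux.NewRouter()
	// ... register API routes on router ...
	srv := &http.Server{
		Addr:    ":9000",
		Handler: corsHandler(router),
	}
	return srv.ListenAndServe()
}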


@@ -26,12 +26,15 @@ import (
"io"
"io/ioutil"
"net/http"
"strconv"
"strings"
"time"
xhttp "github.com/minio/minio/cmd/http"
xjwt "github.com/minio/minio/cmd/jwt"
"github.com/minio/minio/cmd/logger"
"github.com/minio/minio/pkg/auth"
objectlock "github.com/minio/minio/pkg/bucket/object/lock"
"github.com/minio/minio/pkg/bucket/policy"
"github.com/minio/minio/pkg/hash"
iampolicy "github.com/minio/minio/pkg/iam/policy"
@@ -118,9 +121,7 @@ func getRequestAuthType(r *http.Request) authType {
return authTypeUnknown
}
// checkAdminRequestAuthType checks whether the request is a valid signature V2 or V4 request.
// It does not accept presigned or JWT or anonymous requests.
func checkAdminRequestAuthType(ctx context.Context, r *http.Request, action iampolicy.AdminAction, region string) (auth.Credentials, APIErrorCode) {
func validateAdminSignature(ctx context.Context, r *http.Request, region string) (auth.Credentials, map[string]interface{}, bool, APIErrorCode) {
var cred auth.Credentials
var owner bool
s3Err := ErrAccessDenied
@@ -129,7 +130,7 @@ func checkAdminRequestAuthType(ctx context.Context, r *http.Request, action iamp
// We only support admin credentials to access admin APIs.
cred, owner, s3Err = getReqAccessKeyV4(r, region, serviceS3)
if s3Err != ErrNone {
return cred, s3Err
return cred, nil, owner, s3Err
}
// we only support V4 (no presign) with auth body
@@ -139,14 +140,24 @@ func checkAdminRequestAuthType(ctx context.Context, r *http.Request, action iamp
reqInfo := (&logger.ReqInfo{}).AppendTags("requestHeaders", dumpRequest(r))
ctx := logger.SetReqInfo(ctx, reqInfo)
logger.LogIf(ctx, errors.New(getAPIError(s3Err).Description), logger.Application)
return cred, nil, owner, s3Err
}
var claims map[string]interface{}
claims, s3Err = checkClaimsFromToken(r, cred)
claims, s3Err := checkClaimsFromToken(r, cred)
if s3Err != ErrNone {
return cred, nil, owner, s3Err
}
return cred, claims, owner, ErrNone
}
// checkAdminRequestAuthType checks whether the request is a valid signature V2 or V4 request.
// It does not accept presigned or JWT or anonymous requests.
func checkAdminRequestAuthType(ctx context.Context, r *http.Request, action iampolicy.AdminAction, region string) (auth.Credentials, APIErrorCode) {
cred, claims, owner, s3Err := validateAdminSignature(ctx, r, region)
if s3Err != ErrNone {
return cred, s3Err
}
if globalIAMSys.IsAllowed(iampolicy.Args{
AccountName: cred.AccessKey,
Action: iampolicy.Action(action),
@@ -173,15 +184,13 @@ func getSessionToken(r *http.Request) (token string) {
// Fetch claims in the security token returned by the client, doesn't return
// errors - upon errors the returned claims map will be empty.
func mustGetClaimsFromToken(r *http.Request) map[string]interface{} {
claims, _ := getClaimsFromToken(r)
claims, _ := getClaimsFromToken(r, getSessionToken(r))
return claims
}
// Fetch claims in the security token returned by the client.
func getClaimsFromToken(r *http.Request) (map[string]interface{}, error) {
func getClaimsFromToken(r *http.Request, token string) (map[string]interface{}, error) {
claims := xjwt.NewMapClaims()
token := getSessionToken(r)
if token == "" {
return claims.Map(), nil
}
@@ -204,15 +213,16 @@ func getClaimsFromToken(r *http.Request) (map[string]interface{}, error) {
if globalPolicyOPA == nil {
// If OPA is not set and if ldap claim key is set, allow the claim.
if _, ok := claims.Lookup(ldapUser); ok {
if _, ok := claims.MapClaims[ldapUser]; ok {
return claims.Map(), nil
}
// If OPA is not set, session token should
// have a policy and its mandatory, reject
// requests without policy claim.
_, pok := claims.Lookup(iamPolicyClaimName())
if !pok {
_, pokOpenID := claims.MapClaims[iamPolicyClaimNameOpenID()]
_, pokSA := claims.MapClaims[iamPolicyClaimNameSA()]
if !pokOpenID && !pokSA {
return nil, errAuthentication
}
@@ -226,7 +236,7 @@ func getClaimsFromToken(r *http.Request) (map[string]interface{}, error) {
if err != nil {
// Base64 decoding fails, we should log to indicate
// something is malforming the request sent by client.
logger.LogIf(context.Background(), err, logger.Application)
logger.LogIf(r.Context(), err, logger.Application)
return nil, errAuthentication
}
claims.MapClaims[iampolicy.SessionPolicyName] = string(spBytes)
@@ -241,12 +251,15 @@ func checkClaimsFromToken(r *http.Request, cred auth.Credentials) (map[string]in
if token != "" && cred.AccessKey == "" {
return nil, ErrNoAccessKey
}
if cred.IsServiceAccount() && token == "" {
token = cred.SessionToken
}
if subtle.ConstantTimeCompare([]byte(token), []byte(cred.SessionToken)) != 1 {
return nil, ErrInvalidToken
}
claims, err := getClaimsFromToken(r)
claims, err := getClaimsFromToken(r, token)
if err != nil {
return nil, toAPIErrorCode(context.Background(), err)
return nil, toAPIErrorCode(r.Context(), err)
}
return claims, ErrNone
}
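// Note on the subtle.ConstantTimeCompare call above: a plain
// token == cred.SessionToken comparison can return as soon as the strings
// diverge, leaking timing information about the secret token; the
// constant-time variant inspects every byte. Minimal illustration:
func tokensEqual(a, b string) bool {
	return subtle.ConstantTimeCompare([]byte(a), []byte(b)) == 1
}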
@@ -271,7 +284,7 @@ func checkRequestAuthTypeToAccessKey(ctx context.Context, r *http.Request, actio
var cred auth.Credentials
switch getRequestAuthType(r) {
case authTypeUnknown, authTypeStreamingSigned:
return accessKey, owner, ErrAccessDenied
return accessKey, owner, ErrSignatureVersionNotSupported
case authTypePresignedV2, authTypeSignedV2:
if s3Err = isReqAuthenticatedV2(r); s3Err != ErrNone {
return accessKey, owner, s3Err
@@ -333,7 +346,7 @@ func checkRequestAuthTypeToAccessKey(ctx context.Context, r *http.Request, actio
// Request is allowed return the appropriate access key.
return cred.AccessKey, owner, ErrNone
}
return accessKey, owner, ErrAccessDenied
return cred.AccessKey, owner, ErrAccessDenied
}
if globalIAMSys.IsAllowed(iampolicy.Args{
AccountName: cred.AccessKey,
@@ -347,7 +360,8 @@ func checkRequestAuthTypeToAccessKey(ctx context.Context, r *http.Request, actio
// Request is allowed return the appropriate access key.
return cred.AccessKey, owner, ErrNone
}
return accessKey, owner, ErrAccessDenied
return cred.AccessKey, owner, ErrAccessDenied
}
// Verify if request has valid AWS Signature Version '2'.
@@ -410,7 +424,7 @@ func isReqAuthenticated(ctx context.Context, r *http.Request, region string, sty
if err != nil {
return toAPIErrorCode(ctx, err)
}
r.Body = ioutil.NopCloser(reader)
r.Body = reader
return ErrNone
}
@@ -460,7 +474,107 @@ func (a authHandler) ServeHTTP(w http.ResponseWriter, r *http.Request) {
a.handler.ServeHTTP(w, r)
return
}
writeErrorResponse(context.Background(), w, errorCodes.ToAPIErr(ErrSignatureVersionNotSupported), r.URL, guessIsBrowserReq(r))
writeErrorResponse(r.Context(), w, errorCodes.ToAPIErr(ErrSignatureVersionNotSupported), r.URL, guessIsBrowserReq(r))
}
func validateSignature(atype authType, r *http.Request) (auth.Credentials, bool, map[string]interface{}, APIErrorCode) {
var cred auth.Credentials
var owner bool
var s3Err APIErrorCode
switch atype {
case authTypeUnknown, authTypeStreamingSigned:
return cred, owner, nil, ErrSignatureVersionNotSupported
case authTypeSignedV2, authTypePresignedV2:
if s3Err = isReqAuthenticatedV2(r); s3Err != ErrNone {
return cred, owner, nil, s3Err
}
cred, owner, s3Err = getReqAccessKeyV2(r)
case authTypePresigned, authTypeSigned:
region := globalServerRegion
if s3Err = isReqAuthenticated(GlobalContext, r, region, serviceS3); s3Err != ErrNone {
return cred, owner, nil, s3Err
}
cred, owner, s3Err = getReqAccessKeyV4(r, region, serviceS3)
}
if s3Err != ErrNone {
return cred, owner, nil, s3Err
}
claims, s3Err := checkClaimsFromToken(r, cred)
if s3Err != ErrNone {
return cred, owner, nil, s3Err
}
return cred, owner, claims, ErrNone
}
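// Sketch (illustrative): how a handler could combine validateSignature with
// an IAM check. The action, bucket and object values are placeholders, not
// code from this change.
func exampleAuthorize(ctx context.Context, w http.ResponseWriter, r *http.Request, bucket, object string) {
	cred, owner, claims, s3Err := validateSignature(getRequestAuthType(r), r)
	if s3Err != ErrNone {
		writeErrorResponse(ctx, w, errorCodes.ToAPIErr(s3Err), r.URL, guessIsBrowserReq(r))
		return
	}
	if !globalIAMSys.IsAllowed(iampolicy.Args{
		AccountName:     cred.AccessKey,
		Action:          iampolicy.PutObjectAction,
		BucketName:      bucket,
		ConditionValues: getConditionValues(r, "", cred.AccessKey, claims),
		IsOwner:         owner,
		ObjectName:      object,
		Claims:          claims,
	}) {
		writeErrorResponse(ctx, w, errorCodes.ToAPIErr(ErrAccessDenied), r.URL, guessIsBrowserReq(r))
		return
	}
	// ... proceed with the operation ...
}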
func isPutRetentionAllowed(bucketName, objectName string, retDays int, retDate time.Time, retMode objectlock.RetMode, byPassSet bool, r *http.Request, cred auth.Credentials, owner bool, claims map[string]interface{}) (s3Err APIErrorCode) {
var retSet bool
if cred.AccessKey == "" {
conditions := getConditionValues(r, "", "", nil)
conditions["object-lock-mode"] = []string{string(retMode)}
conditions["object-lock-retain-until-date"] = []string{retDate.Format(time.RFC3339)}
if retDays > 0 {
conditions["object-lock-remaining-retention-days"] = []string{strconv.Itoa(retDays)}
}
if retMode == objectlock.RetGovernance && byPassSet {
byPassSet = globalPolicySys.IsAllowed(policy.Args{
AccountName: cred.AccessKey,
Action: policy.Action(policy.BypassGovernanceRetentionAction),
BucketName: bucketName,
ConditionValues: conditions,
IsOwner: false,
ObjectName: objectName,
})
}
if globalPolicySys.IsAllowed(policy.Args{
AccountName: cred.AccessKey,
Action: policy.Action(policy.PutObjectRetentionAction),
BucketName: bucketName,
ConditionValues: conditions,
IsOwner: false,
ObjectName: objectName,
}) {
retSet = true
}
if byPassSet || retSet {
return ErrNone
}
return ErrAccessDenied
}
conditions := getConditionValues(r, "", cred.AccessKey, claims)
conditions["object-lock-mode"] = []string{string(retMode)}
conditions["object-lock-retain-until-date"] = []string{retDate.Format(time.RFC3339)}
if retDays > 0 {
conditions["object-lock-remaining-retention-days"] = []string{strconv.Itoa(retDays)}
}
if retMode == objectlock.RetGovernance && byPassSet {
byPassSet = globalIAMSys.IsAllowed(iampolicy.Args{
AccountName: cred.AccessKey,
Action: policy.BypassGovernanceRetentionAction,
BucketName: bucketName,
ObjectName: objectName,
ConditionValues: conditions,
IsOwner: owner,
Claims: claims,
})
}
if globalIAMSys.IsAllowed(iampolicy.Args{
AccountName: cred.AccessKey,
Action: policy.PutObjectRetentionAction,
BucketName: bucketName,
ConditionValues: conditions,
ObjectName: objectName,
IsOwner: owner,
Claims: claims,
}) {
retSet = true
}
if byPassSet || retSet {
return ErrNone
}
return ErrAccessDenied
}
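// For reference, the condition values built above map directly onto the S3
// object-lock policy condition keys; an illustrative shape (values are made
// up):
var exampleRetentionConditions = map[string][]string{
	"object-lock-mode":                     {"GOVERNANCE"},
	"object-lock-retain-until-date":        {"2020-08-01T00:00:00Z"},
	"object-lock-remaining-retention-days": {"14"},
}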
// isPutActionAllowed - check if PUT operation is allowed on the resource, this
@@ -471,7 +585,7 @@ func isPutActionAllowed(atype authType, bucketName, objectName string, r *http.R
var owner bool
switch atype {
case authTypeUnknown:
return ErrAccessDenied
return ErrSignatureVersionNotSupported
case authTypeSignedV2, authTypePresignedV2:
cred, owner, s3Err = getReqAccessKeyV2(r)
case authTypeStreamingSigned, authTypePresigned, authTypeSigned:
@@ -487,10 +601,19 @@ func isPutActionAllowed(atype authType, bucketName, objectName string, r *http.R
return s3Err
}
// Do not check for PutObjectRetentionAction permission,
// if mode and retain until date are not set.
// Can happen when bucket has default lock config set
if action == iampolicy.PutObjectRetentionAction &&
r.Header.Get(xhttp.AmzObjectLockMode) == "" &&
r.Header.Get(xhttp.AmzObjectLockRetainUntilDate) == "" {
return ErrNone
}
if cred.AccessKey == "" {
if globalPolicySys.IsAllowed(policy.Args{
AccountName: cred.AccessKey,
Action: policy.PutObjectAction,
Action: policy.Action(action),
BucketName: bucketName,
ConditionValues: getConditionValues(r, "", "", nil),
IsOwner: false,


@@ -391,6 +391,7 @@ func TestIsReqAuthenticated(t *testing.T) {
}
}
}
func TestCheckAdminRequestAuthType(t *testing.T) {
objLayer, fsDir, err := prepareFS()
if err != nil {
@@ -425,3 +426,48 @@ func TestCheckAdminRequestAuthType(t *testing.T) {
}
}
}
func TestValidateAdminSignature(t *testing.T) {
ctx := context.Background()
objLayer, fsDir, err := prepareFS()
if err != nil {
t.Fatal(err)
}
defer os.RemoveAll(fsDir)
if err = newTestConfig(globalMinioDefaultRegion, objLayer); err != nil {
t.Fatalf("unable initialize config file, %s", err)
}
creds, err := auth.CreateCredentials("admin", "mypassword")
if err != nil {
t.Fatalf("unable create credential, %s", err)
}
globalActiveCred = creds
testCases := []struct {
AccessKey string
SecretKey string
ErrCode APIErrorCode
}{
{"", "", ErrInvalidAccessKeyID},
{"admin", "", ErrSignatureDoesNotMatch},
{"admin", "wrongpassword", ErrSignatureDoesNotMatch},
{"wronguser", "mypassword", ErrInvalidAccessKeyID},
{"", "mypassword", ErrInvalidAccessKeyID},
{"admin", "mypassword", ErrNone},
}
for i, testCase := range testCases {
req := mustNewRequest("GET", "http://localhost:9000/", 0, nil, t)
if err := signRequestV4(req, testCase.AccessKey, testCase.SecretKey); err != nil {
t.Fatalf("Unable to inititalized new signed http request %s", err)
}
_, _, _, s3Error := validateAdminSignature(ctx, req, globalMinioDefaultRegion)
if s3Error != testCase.ErrCode {
t.Errorf("Test %d: Unexpected s3error returned wanted %d, got %d", i+1, testCase.ErrCode, s3Error)
}
}
}


@@ -18,6 +18,7 @@ package cmd
import (
"context"
"path"
"time"
"github.com/minio/minio/cmd/logger"
@@ -29,8 +30,10 @@ import (
// path: 'bucket/' or '/bucket/' => Heal bucket
// path: 'bucket/object' => Heal object
type healTask struct {
path string
opts madmin.HealOpts
bucket string
object string
versionID string
opts madmin.HealOpts
// Healing response will be sent here
responseCh chan healResult
}
@@ -66,8 +69,7 @@ func waitForLowHTTPReq(tolerance int32) {
}
// Wait for heal requests and process them
func (h *healRoutine) run() {
ctx := context.Background()
func (h *healRoutine) run(ctx context.Context, objAPI ObjectLayer) {
for {
select {
case task, ok := <-h.tasks:
@@ -76,31 +78,33 @@ func (h *healRoutine) run() {
}
// Wait and proceed if there are active requests
waitForLowHTTPReq(int32(globalEndpoints.Nodes()))
waitForLowHTTPReq(int32(globalEndpoints.NEndpoints()))
var res madmin.HealResultItem
var err error
bucket, object := path2BucketObject(task.path)
switch {
case bucket == "" && object == "":
res, err = bgHealDiskFormat(ctx, task.opts)
case bucket != "" && object == "":
res, err = bgHealBucket(ctx, bucket, task.opts)
case bucket != "" && object != "":
res, err = bgHealObject(ctx, bucket, object, task.opts)
case task.bucket == nopHeal:
continue
case task.bucket == SlashSeparator:
res, err = healDiskFormat(ctx, objAPI, task.opts)
case task.bucket != "" && task.object == "":
res, err = objAPI.HealBucket(ctx, task.bucket, task.opts.DryRun, task.opts.Remove)
case task.bucket != "" && task.object != "":
res, err = objAPI.HealObject(ctx, task.bucket, task.object, task.versionID, task.opts)
}
if task.responseCh != nil {
task.responseCh <- healResult{result: res, err: err}
if task.bucket != "" && task.object != "" {
ObjectPathUpdated(path.Join(task.bucket, task.object))
}
task.responseCh <- healResult{result: res, err: err}
case <-h.doneCh:
return
case <-GlobalServiceDoneCh:
case <-ctx.Done():
return
}
}
}
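// Sketch (illustrative): callers enqueue a healTask on the routine's task
// channel and block on responseCh for the result. Bucket and object names
// here are placeholders.
func exampleQueueHeal(ctx context.Context) {
	task := healTask{
		bucket:     "mybucket",
		object:     "myobject",
		opts:       madmin.HealOpts{ScanMode: madmin.HealNormalScan},
		responseCh: make(chan healResult),
	}
	globalBackgroundHealRoutine.tasks <- task
	res := <-task.responseCh
	if res.err != nil {
		logger.LogIf(ctx, res.err)
	}
}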
func initHealRoutine() *healRoutine {
func newHealRoutine() *healRoutine {
return &healRoutine{
tasks: make(chan healTask),
doneCh: make(chan struct{}),
@@ -108,45 +112,28 @@ func initHealRoutine() *healRoutine {
}
func startBackgroundHealing() {
ctx := context.Background()
func initBackgroundHealing(ctx context.Context, objAPI ObjectLayer) {
// Run the background healer
globalBackgroundHealRoutine = newHealRoutine()
go globalBackgroundHealRoutine.run(ctx, objAPI)
var objAPI ObjectLayer
for {
objAPI = newObjectLayerWithoutSafeModeFn()
if objAPI == nil {
time.Sleep(time.Second)
continue
nh := newBgHealSequence()
// Heal any disk format and metadata early, if possible.
if err := nh.healDiskMeta(); err != nil {
if newObjectLayerFn() != nil {
// log only in situations, when object layer
// has fully initialized.
logger.LogIf(nh.ctx, err)
}
break
}
// Run the background healer
globalBackgroundHealRoutine = initHealRoutine()
go globalBackgroundHealRoutine.run()
// Launch the background healer sequence to track
// background healing operations
info := objAPI.StorageInfo(ctx, false)
numDisks := info.Backend.OnlineDisks.Sum() + info.Backend.OfflineDisks.Sum()
nh := newBgHealSequence(numDisks)
globalBackgroundHealState.LaunchNewHealSequence(nh)
}
func initBackgroundHealing() {
go startBackgroundHealing()
}
// bgHealDiskFormat - heals format.json, return value indicates if a
// healDiskFormat - heals format.json, return value indicates if a
// failure occurred.
func bgHealDiskFormat(ctx context.Context, opts madmin.HealOpts) (madmin.HealResultItem, error) {
// Get current object layer instance.
objectAPI := newObjectLayerWithoutSafeModeFn()
if objectAPI == nil {
return madmin.HealResultItem{}, errServerNotInitialized
}
res, err := objectAPI.HealFormat(ctx, opts.DryRun)
func healDiskFormat(ctx context.Context, objAPI ObjectLayer, opts madmin.HealOpts) (madmin.HealResultItem, error) {
res, err := objAPI.HealFormat(ctx, opts.DryRun)
// return any error, ignore error returned when disks have
// already healed.
@@ -157,34 +144,21 @@ func bgHealDiskFormat(ctx context.Context, opts madmin.HealOpts) (madmin.HealRes
// Healing succeeded notify the peers to reload format and re-initialize disks.
// We will not notify peers if healing is not required.
if err == nil {
for _, nerr := range globalNotificationSys.ReloadFormat(opts.DryRun) {
if nerr.Err != nil {
logger.GetReqInfo(ctx).SetTags("peerAddress", nerr.Host.String())
logger.LogIf(ctx, nerr.Err)
// Notify servers in background and retry if needed.
go func() {
retry:
for _, nerr := range globalNotificationSys.ReloadFormat(opts.DryRun) {
if nerr.Err != nil {
if nerr.Err.Error() == errServerNotInitialized.Error() {
time.Sleep(time.Second)
goto retry
}
logger.GetReqInfo(ctx).SetTags("peerAddress", nerr.Host.String())
logger.LogIf(ctx, nerr.Err)
}
}
}
}()
}
return res, nil
}
// bgHealBucket - traverses and heals the given bucket
func bgHealBucket(ctx context.Context, bucket string, opts madmin.HealOpts) (madmin.HealResultItem, error) {
// Get current object layer instance.
objectAPI := newObjectLayerWithoutSafeModeFn()
if objectAPI == nil {
return madmin.HealResultItem{}, errServerNotInitialized
}
return objectAPI.HealBucket(ctx, bucket, opts.DryRun, opts.Remove)
}
// bgHealObject - heal the given object and record result
func bgHealObject(ctx context.Context, bucket, object string, opts madmin.HealOpts) (madmin.HealResultItem, error) {
// Get current object layer instance.
objectAPI := newObjectLayerWithoutSafeModeFn()
if objectAPI == nil {
return madmin.HealResultItem{}, errServerNotInitialized
}
return objectAPI.HealObject(ctx, bucket, object, opts.DryRun, opts.Remove, opts.ScanMode)
}


@@ -25,32 +25,19 @@ import (
const defaultMonitorNewDiskInterval = time.Minute * 10
func initLocalDisksAutoHeal() {
go monitorLocalDisksAndHeal()
func initLocalDisksAutoHeal(ctx context.Context, objAPI ObjectLayer) {
go monitorLocalDisksAndHeal(ctx, objAPI)
}
// monitorLocalDisksAndHeal - ensures that detected new disks are healed
// 1. Only the concerned erasure set will be listed and healed
// 2. Only the node hosting the disk is responsible for performing the heal
func monitorLocalDisksAndHeal() {
// Wait until the object layer is ready
var objAPI ObjectLayer
for {
objAPI = newObjectLayerWithoutSafeModeFn()
if objAPI == nil {
time.Sleep(time.Second)
continue
}
break
}
z, ok := objAPI.(*xlZones)
func monitorLocalDisksAndHeal(ctx context.Context, objAPI ObjectLayer) {
z, ok := objAPI.(*erasureZones)
if !ok {
return
}
ctx := context.Background()
var bgSeq *healSequence
var found bool
@@ -64,64 +51,74 @@ func monitorLocalDisksAndHeal() {
// Perform automatic disk healing when a disk is replaced locally.
for {
time.Sleep(defaultMonitorNewDiskInterval)
// Attempt a heal as the server starts up first.
localDisksInZoneHeal := make([]Endpoints, len(z.zones))
for i, ep := range globalEndpoints {
localDisksToHeal := Endpoints{}
for _, endpoint := range ep.Endpoints {
if !endpoint.IsLocal {
select {
case <-ctx.Done():
return
case <-time.After(defaultMonitorNewDiskInterval):
// Attempt a heal as the server starts up first.
localDisksInZoneHeal := make([]Endpoints, len(z.zones))
var healNewDisks bool
for i, ep := range globalEndpoints {
localDisksToHeal := Endpoints{}
for _, endpoint := range ep.Endpoints {
if !endpoint.IsLocal {
continue
}
// Try to connect to the current endpoint
// and reformat if the current disk is not formatted
_, _, err := connectEndpoint(endpoint)
if err == errUnformattedDisk {
localDisksToHeal = append(localDisksToHeal, endpoint)
}
}
if len(localDisksToHeal) == 0 {
continue
}
// Try to connect to the current endpoint
// and reformat if the current disk is not formatted
_, _, err := connectEndpoint(endpoint)
if err == errUnformattedDisk {
localDisksToHeal = append(localDisksToHeal, endpoint)
}
localDisksInZoneHeal[i] = localDisksToHeal
healNewDisks = true
}
if len(localDisksToHeal) == 0 {
// Reformat disks only if needed.
if !healNewDisks {
continue
}
localDisksInZoneHeal[i] = localDisksToHeal
}
// Reformat disks
bgSeq.sourceCh <- SlashSeparator
// Reformat disks
bgSeq.sourceCh <- healSource{bucket: SlashSeparator}
// Ensure that reformatting disks is finished
bgSeq.sourceCh <- nopHeal
// Ensure that reformatting disks is finished
bgSeq.sourceCh <- healSource{bucket: nopHeal}
var erasureSetInZoneToHeal = make([][]int, len(localDisksInZoneHeal))
// Compute the list of erasure set to heal
for i, localDisksToHeal := range localDisksInZoneHeal {
var erasureSetToHeal []int
for _, endpoint := range localDisksToHeal {
// Load the new format of this passed endpoint
_, format, err := connectEndpoint(endpoint)
if err != nil {
logger.LogIf(ctx, err)
continue
var erasureSetInZoneToHeal = make([][]int, len(localDisksInZoneHeal))
// Compute the list of erasure set to heal
for i, localDisksToHeal := range localDisksInZoneHeal {
var erasureSetToHeal []int
for _, endpoint := range localDisksToHeal {
// Load the new format of this passed endpoint
_, format, err := connectEndpoint(endpoint)
if err != nil {
logger.LogIf(ctx, err)
continue
}
// Calculate the set index where the current endpoint belongs
setIndex, _, err := findDiskIndex(z.zones[i].format, format)
if err != nil {
logger.LogIf(ctx, err)
continue
}
erasureSetToHeal = append(erasureSetToHeal, setIndex)
}
// Calculate the set index where the current endpoint belongs
setIndex, _, err := findDiskIndex(z.zones[i].format, format)
if err != nil {
logger.LogIf(ctx, err)
continue
}
erasureSetToHeal = append(erasureSetToHeal, setIndex)
erasureSetInZoneToHeal[i] = erasureSetToHeal
}
erasureSetInZoneToHeal[i] = erasureSetToHeal
}
// Heal all erasure sets that need healing
for i, erasureSetToHeal := range erasureSetInZoneToHeal {
for _, setIndex := range erasureSetToHeal {
err := healErasureSet(ctx, setIndex, z.zones[i].sets[setIndex])
if err != nil {
logger.LogIf(ctx, err)
// Heal all erasure sets that need healing
for i, erasureSetToHeal := range erasureSetInZoneToHeal {
for _, setIndex := range erasureSetToHeal {
err := healErasureSet(ctx, setIndex, z.zones[i].sets[setIndex], z.zones[i].drivesPerSet)
if err != nil {
logger.LogIf(ctx, err)
}
}
}
}


@@ -28,11 +28,6 @@ import (
humanize "github.com/dustin/go-humanize"
)
// Prepare XL/FS backend for benchmark.
func prepareBenchmarkBackend(instanceType string) (ObjectLayer, []string, error) {
return prepareTestBackend(instanceType)
}
// Benchmark utility functions for ObjectLayer.PutObject().
// Creates Object layer setup ( MakeBucket ) and then runs the PutObject benchmark.
func runPutObjectBenchmark(b *testing.B, obj ObjectLayer, objSize int) {
@@ -40,7 +35,7 @@ func runPutObjectBenchmark(b *testing.B, obj ObjectLayer, objSize int) {
// obtains random bucket name.
bucket := getRandomBucketName()
// create bucket.
err = obj.MakeBucketWithLocation(context.Background(), bucket, "")
err = obj.MakeBucketWithLocation(context.Background(), bucket, BucketOptions{})
if err != nil {
b.Fatal(err)
}
@@ -81,7 +76,7 @@ func runPutObjectPartBenchmark(b *testing.B, obj ObjectLayer, partSize int) {
object := getRandomObjectName()
// create bucket.
err = obj.MakeBucketWithLocation(context.Background(), bucket, "")
err = obj.MakeBucketWithLocation(context.Background(), bucket, BucketOptions{})
if err != nil {
b.Fatal(err)
}
@@ -132,10 +127,12 @@ func runPutObjectPartBenchmark(b *testing.B, obj ObjectLayer, partSize int) {
b.StopTimer()
}
// creates XL/FS backend setup, obtains the object layer and calls the runPutObjectPartBenchmark function.
// creates Erasure/FS backend setup, obtains the object layer and calls the runPutObjectPartBenchmark function.
func benchmarkPutObjectPart(b *testing.B, instanceType string, objSize int) {
// create a temp XL/FS backend.
objLayer, disks, err := prepareBenchmarkBackend(instanceType)
// create a temp Erasure/FS backend.
ctx, cancel := context.WithCancel(context.Background())
defer cancel()
objLayer, disks, err := prepareTestBackend(ctx, instanceType)
if err != nil {
b.Fatalf("Failed obtaining Temp Backend: <ERROR> %s", err)
}
@@ -146,10 +143,12 @@ func benchmarkPutObjectPart(b *testing.B, instanceType string, objSize int) {
runPutObjectPartBenchmark(b, objLayer, objSize)
}
// creates XL/FS backend setup, obtains the object layer and calls the runPutObjectBenchmark function.
// creates Erasure/FS backend setup, obtains the object layer and calls the runPutObjectBenchmark function.
func benchmarkPutObject(b *testing.B, instanceType string, objSize int) {
// create a temp XL/FS backend.
objLayer, disks, err := prepareBenchmarkBackend(instanceType)
// create a temp Erasure/FS backend.
ctx, cancel := context.WithCancel(context.Background())
defer cancel()
objLayer, disks, err := prepareTestBackend(ctx, instanceType)
if err != nil {
b.Fatalf("Failed obtaining Temp Backend: <ERROR> %s", err)
}
@@ -160,10 +159,12 @@ func benchmarkPutObject(b *testing.B, instanceType string, objSize int) {
runPutObjectBenchmark(b, objLayer, objSize)
}
// creates XL/FS backend setup, obtains the object layer and runs parallel benchmark for put object.
// creates Erasure/FS backend setup, obtains the object layer and runs parallel benchmark for put object.
func benchmarkPutObjectParallel(b *testing.B, instanceType string, objSize int) {
// create a temp XL/FS backend.
objLayer, disks, err := prepareBenchmarkBackend(instanceType)
// create a temp Erasure/FS backend.
ctx, cancel := context.WithCancel(context.Background())
defer cancel()
objLayer, disks, err := prepareTestBackend(ctx, instanceType)
if err != nil {
b.Fatalf("Failed obtaining Temp Backend: <ERROR> %s", err)
}
@@ -180,7 +181,7 @@ func runGetObjectBenchmark(b *testing.B, obj ObjectLayer, objSize int) {
// obtains random bucket name.
bucket := getRandomBucketName()
// create bucket.
err := obj.MakeBucketWithLocation(context.Background(), bucket, "")
err := obj.MakeBucketWithLocation(context.Background(), bucket, BucketOptions{})
if err != nil {
b.Fatal(err)
}
@@ -189,7 +190,7 @@ func runGetObjectBenchmark(b *testing.B, obj ObjectLayer, objSize int) {
// generate etag for the generated data.
// etag of the data to be written is required as input for PutObject.
// PutObject is the function which writes the data onto the FS/XL backend.
// PutObject is the function which writes the data onto the FS/Erasure backend.
// get text data generated for number of bytes equal to object size.
md5hex := getMD5Hash(textData)
@@ -239,10 +240,12 @@ func generateBytesData(size int) []byte {
return bytes.Repeat(getRandomByte(), size)
}
// creates XL/FS backend setup, obtains the object layer and calls the runGetObjectBenchmark function.
// creates Erasure/FS backend setup, obtains the object layer and calls the runGetObjectBenchmark function.
func benchmarkGetObject(b *testing.B, instanceType string, objSize int) {
// create a temp XL/FS backend.
objLayer, disks, err := prepareBenchmarkBackend(instanceType)
// create a temp Erasure/FS backend.
ctx, cancel := context.WithCancel(context.Background())
defer cancel()
objLayer, disks, err := prepareTestBackend(ctx, instanceType)
if err != nil {
b.Fatalf("Failed obtaining Temp Backend: <ERROR> %s", err)
}
@@ -253,10 +256,12 @@ func benchmarkGetObject(b *testing.B, instanceType string, objSize int) {
runGetObjectBenchmark(b, objLayer, objSize)
}
// creates XL/FS backend setup, obtains the object layer and runs parallel benchmark for ObjectLayer.GetObject().
// creates Erasure/FS backend setup, obtains the object layer and runs parallel benchmark for ObjectLayer.GetObject().
func benchmarkGetObjectParallel(b *testing.B, instanceType string, objSize int) {
// create a temp XL/FS backend.
objLayer, disks, err := prepareBenchmarkBackend(instanceType)
// create a temp Erasure/FS backend.
ctx, cancel := context.WithCancel(context.Background())
defer cancel()
objLayer, disks, err := prepareTestBackend(ctx, instanceType)
if err != nil {
b.Fatalf("Failed obtaining Temp Backend: <ERROR> %s", err)
}
@@ -273,7 +278,7 @@ func runPutObjectBenchmarkParallel(b *testing.B, obj ObjectLayer, objSize int) {
// obtains random bucket name.
bucket := getRandomBucketName()
// create bucket.
err := obj.MakeBucketWithLocation(context.Background(), bucket, "")
err := obj.MakeBucketWithLocation(context.Background(), bucket, BucketOptions{})
if err != nil {
b.Fatal(err)
}
@@ -317,7 +322,7 @@ func runGetObjectBenchmarkParallel(b *testing.B, obj ObjectLayer, objSize int) {
// obtains random bucket name.
bucket := getRandomBucketName()
// create bucket.
err := obj.MakeBucketWithLocation(context.Background(), bucket, "")
err := obj.MakeBucketWithLocation(context.Background(), bucket, BucketOptions{})
if err != nil {
b.Fatal(err)
}
@@ -326,7 +331,7 @@ func runGetObjectBenchmarkParallel(b *testing.B, obj ObjectLayer, objSize int) {
textData := generateBytesData(objSize)
// generate md5sum for the generated data.
// md5sum of the data to be written is required as input for PutObject.
// PutObject is the function which writes the data onto the FS/XL backend.
// PutObject is the function which writes the data onto the FS/Erasure backend.
md5hex := getMD5Hash([]byte(textData))
sha256hex := ""


@@ -18,7 +18,6 @@ package cmd
import (
"bytes"
"context"
"encoding/hex"
"fmt"
"hash"
@@ -27,6 +26,14 @@ import (
"github.com/minio/minio/cmd/logger"
)
type errHashMismatch struct {
message string
}
func (err *errHashMismatch) Error() string {
return err.message
}
// Calculates bitrot in chunks and writes the hash into the stream.
type streamingBitrotWriter struct {
iow *io.PipeWriter
@@ -132,9 +139,9 @@ func (b *streamingBitrotReader) ReadAt(buf []byte, offset int64) (int, error) {
b.h.Write(buf)
if !bytes.Equal(b.h.Sum(nil), b.hashBytes) {
err = fmt.Errorf("hashes do not match expected %s, got %s",
hex.EncodeToString(b.hashBytes), hex.EncodeToString(b.h.Sum(nil)))
logger.LogIf(context.Background(), err)
err := &errHashMismatch{fmt.Sprintf("hashes do not match expected %s, got %s",
hex.EncodeToString(b.hashBytes), hex.EncodeToString(b.h.Sum(nil)))}
logger.LogIf(GlobalContext, err)
return 0, err
}
b.currOffset += int64(len(buf))


@@ -17,7 +17,6 @@
package cmd
import (
"context"
"hash"
"io"
@@ -36,12 +35,12 @@ type wholeBitrotWriter struct {
func (b *wholeBitrotWriter) Write(p []byte) (int, error) {
err := b.disk.AppendFile(b.volume, b.filePath, p)
if err != nil {
logger.LogIf(context.Background(), err)
logger.LogIf(GlobalContext, err)
return 0, err
}
_, err = b.Hash.Write(p)
if err != nil {
logger.LogIf(context.Background(), err)
logger.LogIf(GlobalContext, err)
return 0, err
}
return len(p), nil
@@ -70,14 +69,14 @@ func (b *wholeBitrotReader) ReadAt(buf []byte, offset int64) (n int, err error)
if b.buf == nil {
b.buf = make([]byte, b.tillOffset-offset)
if _, err := b.disk.ReadFile(b.volume, b.filePath, offset, b.buf, b.verifier); err != nil {
ctx := context.Background()
ctx := GlobalContext
logger.GetReqInfo(ctx).AppendTags("disk", b.disk.String())
logger.LogIf(ctx, err)
return 0, err
}
}
if len(b.buf) < len(buf) {
logger.LogIf(context.Background(), errLessData)
logger.LogIf(GlobalContext, errLessData)
return 0, errLessData
}
n = copy(buf, b.buf)


@@ -17,7 +17,6 @@
package cmd
import (
"context"
"errors"
"hash"
"io"
@@ -31,25 +30,6 @@ import (
// magic HH-256 key as HH-256 hash of the first 100 decimals of π as utf-8 string with a zero key.
var magicHighwayHash256Key = []byte("\x4b\xe7\x34\xfa\x8e\x23\x8a\xcd\x26\x3e\x83\xe6\xbb\x96\x85\x52\x04\x0f\x93\x5d\xa3\x9f\x44\x14\x97\xe0\x9d\x13\x22\xde\x36\xa0")
// BitrotAlgorithm specifies an algorithm used for bitrot protection.
type BitrotAlgorithm uint
const (
// SHA256 represents the SHA-256 hash function
SHA256 BitrotAlgorithm = 1 + iota
// HighwayHash256 represents the HighwayHash-256 hash function
HighwayHash256
// HighwayHash256S represents the Streaming HighwayHash-256 hash function
HighwayHash256S
// BLAKE2b512 represents the BLAKE2b-512 hash function
BLAKE2b512
)
// DefaultBitrotAlgorithm is the default algorithm used for bitrot protection.
const (
DefaultBitrotAlgorithm = HighwayHash256S
)
var bitrotAlgorithms = map[BitrotAlgorithm]string{
SHA256: "sha256",
BLAKE2b512: "blake2b",
@@ -72,7 +52,7 @@ func (a BitrotAlgorithm) New() hash.Hash {
hh, _ := highwayhash.New(magicHighwayHash256Key) // New will never return error since key is 256 bit
return hh
default:
logger.CriticalIf(context.Background(), errors.New("Unsupported bitrot algorithm"))
logger.CriticalIf(GlobalContext, errors.New("Unsupported bitrot algorithm"))
return nil
}
}
@@ -88,7 +68,7 @@ func (a BitrotAlgorithm) Available() bool {
func (a BitrotAlgorithm) String() string {
name, ok := bitrotAlgorithms[a]
if !ok {
logger.CriticalIf(context.Background(), errors.New("Unsupported bitrot algorithm"))
logger.CriticalIf(GlobalContext, errors.New("Unsupported bitrot algorithm"))
}
return name
}
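// Sketch (illustrative): every BitrotAlgorithm yields a standard hash.Hash,
// so checksumming an erasure shard reduces to:
func exampleBitrotSum(data []byte) []byte {
	h := HighwayHash256.New()
	h.Write(data)     // hash.Hash.Write never returns an error
	return h.Sum(nil) // 32-byte checksum stored alongside the shard
}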


@@ -34,7 +34,7 @@ func testBitrotReaderWriterAlgo(t *testing.T, bitrotAlgo BitrotAlgorithm) {
volume := "testvol"
filePath := "testfile"
disk, err := newPosix(tmpDir)
disk, err := newXLStorage(tmpDir, "")
if err != nil {
t.Fatal(err)
}


@@ -20,16 +20,16 @@ import (
"context"
"crypto/tls"
"encoding/json"
"errors"
"fmt"
"io"
"net/http"
"net/url"
"runtime"
"sync/atomic"
"time"
"github.com/gorilla/mux"
"github.com/minio/minio-go/v6/pkg/set"
"github.com/minio/minio-go/v7/pkg/set"
xhttp "github.com/minio/minio/cmd/http"
"github.com/minio/minio/cmd/logger"
"github.com/minio/minio/cmd/rest"
@@ -43,6 +43,7 @@ const (
)
const (
bootstrapRESTMethodHealth = "/health"
bootstrapRESTMethodVerify = "/verify"
)
@@ -62,9 +63,9 @@ func (s1 ServerSystemConfig) Diff(s2 ServerSystemConfig) error {
return fmt.Errorf("Expected platform '%s', found to be running '%s'",
s1.MinioPlatform, s2.MinioPlatform)
}
if s1.MinioEndpoints.Nodes() != s2.MinioEndpoints.Nodes() {
return fmt.Errorf("Expected number of endpoints %d, seen %d", s1.MinioEndpoints.Nodes(),
s2.MinioEndpoints.Nodes())
if s1.MinioEndpoints.NEndpoints() != s2.MinioEndpoints.NEndpoints() {
return fmt.Errorf("Expected number of endpoints %d, seen %d", s1.MinioEndpoints.NEndpoints(),
s2.MinioEndpoints.NEndpoints())
}
for i, ep := range s1.MinioEndpoints {
@@ -94,6 +95,9 @@ func getServerSystemCfg() ServerSystemConfig {
}
}
// HealthHandler returns success if request is valid
func (b *bootstrapRESTServer) HealthHandler(w http.ResponseWriter, r *http.Request) {}
func (b *bootstrapRESTServer) VerifyHandler(w http.ResponseWriter, r *http.Request) {
ctx := newContext(r, w, "VerifyHandler")
cfg := getServerSystemCfg()
@@ -106,37 +110,23 @@ func registerBootstrapRESTHandlers(router *mux.Router) {
server := &bootstrapRESTServer{}
subrouter := router.PathPrefix(bootstrapRESTPrefix).Subrouter()
subrouter.Methods(http.MethodPost).Path(bootstrapRESTVersionPrefix + bootstrapRESTMethodHealth).HandlerFunc(
httpTraceHdrs(server.HealthHandler))
subrouter.Methods(http.MethodPost).Path(bootstrapRESTVersionPrefix + bootstrapRESTMethodVerify).HandlerFunc(
httpTraceHdrs(server.VerifyHandler))
}
// client to talk to bootstrap Nodes.
// client to talk to bootstrap endpoints.
type bootstrapRESTClient struct {
endpoint Endpoint
restClient *rest.Client
connected int32
}
// Reconnect to a bootstrap rest server.
func (client *bootstrapRESTClient) reConnect() {
atomic.StoreInt32(&client.connected, 1)
}
// Wrapper to restClient.Call to handle network errors, in case of network error the connection is marked disconnected
// permanently. The only way to restore the connection is at the xl-sets layer by xlsets.monitorAndConnectEndpoints()
// after verifying format.json
func (client *bootstrapRESTClient) call(method string, values url.Values, body io.Reader, length int64) (respBody io.ReadCloser, err error) {
return client.callWithContext(context.Background(), method, values, body, length)
}
// Wrapper to restClient.Call to handle network errors, in case of network error the connection is marked disconnected
// permanently. The only way to restore the connection is at the xl-sets layer by xlsets.monitorAndConnectEndpoints()
// after verifying format.json
func (client *bootstrapRESTClient) callWithContext(ctx context.Context, method string, values url.Values, body io.Reader, length int64) (respBody io.ReadCloser, err error) {
if !client.IsOnline() {
client.reConnect()
}
if values == nil {
values = make(url.Values)
}
@@ -146,10 +136,6 @@ func (client *bootstrapRESTClient) callWithContext(ctx context.Context, method s
return respBody, nil
}
if isNetworkError(err) {
atomic.StoreInt32(&client.connected, 0)
}
return nil, err
}
@@ -158,24 +144,12 @@ func (client *bootstrapRESTClient) String() string {
return client.endpoint.String()
}
// IsOnline - returns whether RPC client failed to connect or not.
func (client *bootstrapRESTClient) IsOnline() bool {
return atomic.LoadInt32(&client.connected) == 1
}
// Close - marks the client as closed.
func (client *bootstrapRESTClient) Close() error {
atomic.StoreInt32(&client.connected, 0)
client.restClient.Close()
return nil
}
// Verify - fetches system server config.
func (client *bootstrapRESTClient) Verify(srcCfg ServerSystemConfig) (err error) {
func (client *bootstrapRESTClient) Verify(ctx context.Context, srcCfg ServerSystemConfig) (err error) {
if newObjectLayerFn() != nil {
return nil
}
respBody, err := client.call(bootstrapRESTMethodVerify, nil, nil, -1)
respBody, err := client.callWithContext(ctx, bootstrapRESTMethodVerify, nil, nil, -1)
if err != nil {
return
}
@@ -187,14 +161,17 @@ func (client *bootstrapRESTClient) Verify(srcCfg ServerSystemConfig) (err error)
return srcCfg.Diff(recvCfg)
}
func verifyServerSystemConfig(endpointZones EndpointZones) error {
func verifyServerSystemConfig(ctx context.Context, endpointZones EndpointZones) error {
srcCfg := getServerSystemCfg()
clnts := newBootstrapRESTClients(endpointZones)
var onlineServers int
var offlineEndpoints []string
var retries int
for onlineServers < len(clnts)/2 {
for _, clnt := range clnts {
if err := clnt.Verify(srcCfg); err != nil {
if err := clnt.Verify(ctx, srcCfg); err != nil {
if isNetworkError(err) {
offlineEndpoints = append(offlineEndpoints, clnt.String())
continue
}
return fmt.Errorf("%s as has incorrect configuration: %w", clnt.String(), err)
@@ -204,6 +181,14 @@ func verifyServerSystemConfig(endpointZones EndpointZones) error {
// Sleep for a while - so that we don't go into
// 100% CPU when half the endpoints are offline.
time.Sleep(500 * time.Millisecond)
retries++
// after 5 retries start logging that servers are not reachable yet
if retries >= 5 {
logger.Info(fmt.Sprintf("Waiting for at least %d servers to be online for bootstrap check", len(clnts)/2))
logger.Info(fmt.Sprintf("Following servers are currently offline or unreachable: %s", offlineEndpoints))
retries = 0 // reset to log again after 5 retries.
}
offlineEndpoints = nil
}
return nil
}
@@ -220,11 +205,7 @@ func newBootstrapRESTClients(endpointZones EndpointZones) []*bootstrapRESTClient
// Only proceed for remote endpoints.
if !endpoint.IsLocal {
clnt, err := newBootstrapRESTClient(endpoint)
if err != nil {
continue
}
clnts = append(clnts, clnt)
clnts = append(clnts, newBootstrapRESTClient(endpoint))
}
}
}
@@ -232,7 +213,7 @@ func newBootstrapRESTClients(endpointZones EndpointZones) []*bootstrapRESTClient
}
// Returns a new bootstrap client.
func newBootstrapRESTClient(endpoint Endpoint) (*bootstrapRESTClient, error) {
func newBootstrapRESTClient(endpoint Endpoint) *bootstrapRESTClient {
serverURL := &url.URL{
Scheme: endpoint.Scheme,
Host: endpoint.Host,
@@ -247,11 +228,18 @@ func newBootstrapRESTClient(endpoint Endpoint) (*bootstrapRESTClient, error) {
}
}
trFn := newCustomHTTPTransport(tlsConfig, rest.DefaultRESTTimeout, rest.DefaultRESTTimeout)
restClient, err := rest.NewClient(serverURL, trFn, newAuthToken)
if err != nil {
return nil, err
trFn := newCustomHTTPTransport(tlsConfig, rest.DefaultRESTTimeout)
restClient := rest.NewClient(serverURL, trFn, newAuthToken)
restClient.HealthCheckFn = func() bool {
ctx, cancel := context.WithTimeout(GlobalContext, restClient.HealthCheckTimeout)
// Instantiate a new rest client for healthcheck
// to avoid recursive healthCheckFn()
respBody, err := rest.NewClient(serverURL, trFn, newAuthToken).CallWithContext(ctx, bootstrapRESTMethodHealth, nil, nil, -1)
xhttp.DrainBody(respBody)
cancel()
var ne *rest.NetworkError
return !errors.Is(err, context.DeadlineExceeded) && !errors.As(err, &ne)
}
return &bootstrapRESTClient{endpoint: endpoint, restClient: restClient, connected: 1}, nil
return &bootstrapRESTClient{endpoint: endpoint, restClient: restClient}
}


@@ -18,13 +18,12 @@ package cmd
import (
"encoding/xml"
"fmt"
"io"
"net/http"
"github.com/gorilla/mux"
xhttp "github.com/minio/minio/cmd/http"
"github.com/minio/minio/cmd/logger"
bucketsse "github.com/minio/minio/pkg/bucket/encryption"
"github.com/minio/minio/pkg/bucket/policy"
)
@@ -46,15 +45,14 @@ func (api objectAPIHandlers) PutBucketEncryptionHandler(w http.ResponseWriter, r
return
}
vars := mux.Vars(r)
bucket := vars["bucket"]
// PutBucketEncryption API requires Content-Md5
if _, ok := r.Header[xhttp.ContentMD5]; !ok {
writeErrorResponse(ctx, w, errorCodes.ToAPIErr(ErrMissingContentMD5), r.URL, guessIsBrowserReq(r))
if !objAPI.IsEncryptionSupported() {
writeErrorResponse(ctx, w, errorCodes.ToAPIErr(ErrNotImplemented), r.URL, guessIsBrowserReq(r))
return
}
vars := mux.Vars(r)
bucket := vars["bucket"]
if s3Error := checkRequestAuthType(ctx, r, policy.PutBucketEncryptionAction, bucket, ""); s3Error != ErrNone {
writeErrorResponse(ctx, w, errorCodes.ToAPIErr(s3Error), r.URL, guessIsBrowserReq(r))
return
@@ -69,7 +67,12 @@ func (api objectAPIHandlers) PutBucketEncryptionHandler(w http.ResponseWriter, r
// Parse bucket encryption xml
encConfig, err := validateBucketSSEConfig(io.LimitReader(r.Body, maxBucketSSEConfigSize))
if err != nil {
writeErrorResponse(ctx, w, errorCodes.ToAPIErr(ErrMalformedXML), r.URL, guessIsBrowserReq(r))
apiErr := APIError{
Code: "MalformedXML",
Description: fmt.Sprintf("%s (%s)", errorCodes[ErrMalformedXML].Description, err),
HTTPStatusCode: errorCodes[ErrMalformedXML].HTTPStatusCode,
}
writeErrorResponse(ctx, w, apiErr, r.URL, guessIsBrowserReq(r))
return
}
@@ -79,17 +82,17 @@ func (api objectAPIHandlers) PutBucketEncryptionHandler(w http.ResponseWriter, r
return
}
// Store the bucket encryption configuration in the object layer
if err = objAPI.SetBucketSSEConfig(ctx, bucket, encConfig); err != nil {
configData, err := xml.Marshal(encConfig)
if err != nil {
writeErrorResponse(ctx, w, toAPIError(ctx, err), r.URL, guessIsBrowserReq(r))
return
}
// Update the in-memory bucket encryption cache
globalBucketSSEConfigSys.Set(bucket, *encConfig)
// Update peer MinIO servers of the updated bucket encryption config
globalNotificationSys.SetBucketSSEConfig(ctx, bucket, encConfig)
// Store the bucket encryption configuration in the object layer
if err = globalBucketMetadataSys.Update(bucket, bucketSSEConfig, configData); err != nil {
writeErrorResponse(ctx, w, toAPIError(ctx, err), r.URL, guessIsBrowserReq(r))
return
}
writeSuccessResponseHeadersOnly(w)
}
@@ -122,21 +125,20 @@ func (api objectAPIHandlers) GetBucketEncryptionHandler(w http.ResponseWriter, r
return
}
// Fetch bucket encryption configuration from object layer
var encConfig *bucketsse.BucketSSEConfig
if encConfig, err = objAPI.GetBucketSSEConfig(ctx, bucket); err != nil {
config, err := globalBucketMetadataSys.GetSSEConfig(bucket)
if err != nil {
writeErrorResponse(ctx, w, toAPIError(ctx, err), r.URL, guessIsBrowserReq(r))
return
}
var encConfigData []byte
if encConfigData, err = xml.Marshal(encConfig); err != nil {
configData, err := xml.Marshal(config)
if err != nil {
writeErrorResponse(ctx, w, toAPIError(ctx, err), r.URL, guessIsBrowserReq(r))
return
}
// Write bucket encryption configuration to client
writeSuccessResponseXML(w, encConfigData)
writeSuccessResponseXML(w, configData)
}
// DeleteBucketEncryptionHandler - Removes bucket encryption configuration
@@ -167,15 +169,10 @@ func (api objectAPIHandlers) DeleteBucketEncryptionHandler(w http.ResponseWriter
}
// Delete bucket encryption config from object layer
if err = objAPI.DeleteBucketSSEConfig(ctx, bucket); err != nil {
if err = globalBucketMetadataSys.Update(bucket, bucketSSEConfig, nil); err != nil {
writeErrorResponse(ctx, w, toAPIError(ctx, err), r.URL, guessIsBrowserReq(r))
return
}
// Remove entry from the in-memory bucket encryption cache
globalBucketSSEConfigSys.Remove(bucket)
// Update peer MinIO servers of the updated bucket encryption config
globalNotificationSys.RemoveBucketSSEConfig(ctx, bucket)
writeSuccessNoContent(w)
}


@@ -17,142 +17,32 @@
package cmd
import (
"bytes"
"context"
"encoding/xml"
"errors"
"io"
"path"
"sync"
bucketsse "github.com/minio/minio/pkg/bucket/encryption"
)
// BucketSSEConfigSys - in-memory cache of bucket encryption config
type BucketSSEConfigSys struct {
sync.RWMutex
bucketSSEConfigMap map[string]bucketsse.BucketSSEConfig
}
type BucketSSEConfigSys struct{}
// NewBucketSSEConfigSys - Creates an empty in-memory bucket encryption configuration cache
func NewBucketSSEConfigSys() *BucketSSEConfigSys {
return &BucketSSEConfigSys{
bucketSSEConfigMap: make(map[string]bucketsse.BucketSSEConfig),
}
}
// load - Loads the bucket encryption configuration for the given list of buckets
func (sys *BucketSSEConfigSys) load(buckets []BucketInfo, objAPI ObjectLayer) error {
for _, bucket := range buckets {
config, err := objAPI.GetBucketSSEConfig(context.Background(), bucket.Name)
if err != nil {
if _, ok := err.(BucketSSEConfigNotFound); ok {
sys.Remove(bucket.Name)
}
continue
}
sys.Set(bucket.Name, *config)
}
return nil
}
// Init - Initializes in-memory bucket encryption config cache for the given list of buckets
func (sys *BucketSSEConfigSys) Init(buckets []BucketInfo, objAPI ObjectLayer) error {
if objAPI == nil {
return errServerNotInitialized
}
// We don't cache bucket encryption config in gateway mode, nothing to do.
if globalIsGateway {
return nil
}
// Load bucket encryption config cache once during boot.
return sys.load(buckets, objAPI)
return &BucketSSEConfigSys{}
}
// Get - gets bucket encryption config for the given bucket.
func (sys *BucketSSEConfigSys) Get(bucket string) (config bucketsse.BucketSSEConfig, ok bool) {
// We don't cache bucket encryption config in gateway mode.
func (sys *BucketSSEConfigSys) Get(bucket string) (*bucketsse.BucketSSEConfig, error) {
if globalIsGateway {
objAPI := newObjectLayerWithoutSafeModeFn()
if objAPI == nil {
return
return nil, errServerNotInitialized
}
cfg, err := objAPI.GetBucketSSEConfig(context.Background(), bucket)
if err != nil {
return
}
return *cfg, true
return nil, BucketSSEConfigNotFound{Bucket: bucket}
}
sys.Lock()
defer sys.Unlock()
config, ok = sys.bucketSSEConfigMap[bucket]
return
}
// Set - sets bucket encryption config to given bucket name.
func (sys *BucketSSEConfigSys) Set(bucket string, config bucketsse.BucketSSEConfig) {
// We don't cache bucket encryption config in gateway mode.
if globalIsGateway {
return
}
sys.Lock()
defer sys.Unlock()
sys.bucketSSEConfigMap[bucket] = config
}
// Remove - removes bucket encryption config for given bucket.
func (sys *BucketSSEConfigSys) Remove(bucket string) {
sys.Lock()
defer sys.Unlock()
delete(sys.bucketSSEConfigMap, bucket)
}
// saveBucketSSEConfig - save bucket encryption config for given bucket.
func saveBucketSSEConfig(ctx context.Context, objAPI ObjectLayer, bucket string, config *bucketsse.BucketSSEConfig) error {
data, err := xml.Marshal(config)
if err != nil {
return err
}
// Path to store bucket encryption config for the given bucket.
configFile := path.Join(bucketConfigPrefix, bucket, bucketSSEConfig)
return saveConfig(ctx, objAPI, configFile, data)
}
// getBucketSSEConfig - get bucket encryption config for given bucket.
func getBucketSSEConfig(objAPI ObjectLayer, bucket string) (*bucketsse.BucketSSEConfig, error) {
// Path to bucket-encryption.xml for the given bucket.
configFile := path.Join(bucketConfigPrefix, bucket, bucketSSEConfig)
configData, err := readConfig(context.Background(), objAPI, configFile)
if err != nil {
if err == errConfigNotFound {
err = BucketSSEConfigNotFound{Bucket: bucket}
}
return nil, err
}
return bucketsse.ParseBucketSSEConfig(bytes.NewReader(configData))
}
// removeBucketSSEConfig - removes bucket encryption config for given bucket.
func removeBucketSSEConfig(ctx context.Context, objAPI ObjectLayer, bucket string) error {
// Path to bucket-encryption.xml for the given bucket.
configFile := path.Join(bucketConfigPrefix, bucket, bucketSSEConfig)
if err := objAPI.DeleteObject(ctx, minioMetaBucket, configFile); err != nil {
if _, ok := err.(ObjectNotFound); ok {
return BucketSSEConfigNotFound{Bucket: bucket}
}
return err
}
return nil
return globalBucketMetadataSys.GetSSEConfig(bucket)
}
// validateBucketSSEConfig parses bucket encryption configuration and validates if it is supported by MinIO.
@@ -165,5 +55,6 @@ func validateBucketSSEConfig(r io.Reader) (*bucketsse.BucketSSEConfig, error) {
if len(encConfig.Rules) == 1 && encConfig.Rules[0].DefaultEncryptionAction.Algorithm == bucketsse.AES256 {
return encConfig, nil
}
return nil, errors.New("Unsupported bucket encryption configuration")
}
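// Sketch (illustrative): the only configuration the validator above accepts
// is a single rule whose default action is SSE-S3 with AES256. Assuming the
// standard S3 XML shape (and that "strings" is imported):
func exampleValidSSEConfig() (*bucketsse.BucketSSEConfig, error) {
	const cfgXML = `<ServerSideEncryptionConfiguration xmlns="http://s3.amazonaws.com/doc/2006-03-01/">
  <Rule>
    <ApplyServerSideEncryptionByDefault>
      <SSEAlgorithm>AES256</SSEAlgorithm>
    </ApplyServerSideEncryptionByDefault>
  </Rule>
</ServerSideEncryptionConfiguration>`
	return validateBucketSSEConfig(strings.NewReader(cfgXML))
}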


@@ -17,7 +17,6 @@
package cmd
import (
"context"
"encoding/base64"
"encoding/xml"
"fmt"
@@ -26,11 +25,13 @@ import (
"net/url"
"path"
"path/filepath"
"strconv"
"strings"
"github.com/gorilla/mux"
"github.com/minio/minio-go/v6/pkg/set"
"github.com/minio/minio-go/v7/pkg/set"
"github.com/minio/minio-go/v7/pkg/tags"
"github.com/minio/minio/cmd/config/etcd/dns"
"github.com/minio/minio/cmd/crypto"
xhttp "github.com/minio/minio/cmd/http"
@@ -45,10 +46,8 @@ import (
)
const (
getBucketVersioningResponse = `<VersioningConfiguration xmlns="http://s3.amazonaws.com/doc/2006-03-01/"/>`
objectLockConfig = "object-lock.xml"
bucketObjectLockEnabledConfigFile = "object-lock-enabled.json"
bucketObjectLockEnabledConfig = `{"x-amz-bucket-object-lock-enabled":true}`
objectLockConfig = "object-lock.xml"
bucketTaggingConfig = "tagging.xml"
)
// Check if there are buckets on server without corresponding entry in etcd backend and
@@ -70,7 +69,7 @@ func initFederatorBackend(buckets []BucketInfo, objLayer ObjectLayer) {
// Get buckets in the DNS
dnsBuckets, err := globalDNSConfig.List()
if err != nil && err != dns.ErrNoEntriesFound {
logger.LogIf(context.Background(), err)
logger.LogIf(GlobalContext, err)
return
}
@@ -118,12 +117,12 @@ func initFederatorBackend(buckets []BucketInfo, objLayer ObjectLayer) {
for _, err := range g.Wait() {
if err != nil {
logger.LogIf(context.Background(), err)
logger.LogIf(GlobalContext, err)
}
}
for _, bucket := range bucketsInConflict.ToSlice() {
logger.LogIf(context.Background(), fmt.Errorf("Unable to add bucket DNS entry for bucket %s, an entry exists for the same bucket. Use one of these IP addresses %v to access the bucket", bucket, globalDomainIPs.ToSlice()))
logger.LogIf(GlobalContext, fmt.Errorf("Unable to add bucket DNS entry for bucket %s, an entry exists for the same bucket. Use one of these IP addresses %v to access the bucket", bucket, globalDomainIPs.ToSlice()))
}
// Remove buckets that are in DNS for this server, but aren't local
@@ -140,7 +139,7 @@ func initFederatorBackend(buckets []BucketInfo, objLayer ObjectLayer) {
// We go to here, so we know the bucket no longer exists,
// but is registered in DNS to this server
if err = globalDNSConfig.Delete(bucket); err != nil {
logger.LogIf(context.Background(), fmt.Errorf("Failed to remove DNS entry for %s due to %w",
logger.LogIf(GlobalContext, fmt.Errorf("Failed to remove DNS entry for %s due to %w",
bucket, err))
}
}
@@ -266,7 +265,7 @@ func (api objectAPIHandlers) ListBucketsHandler(w http.ResponseWriter, r *http.R
listBuckets := objectAPI.ListBuckets
accessKey, owner, s3Error := checkRequestAuthTypeToAccessKey(ctx, r, policy.ListAllMyBucketsAction, "", "")
if s3Error != ErrNone {
if s3Error != ErrNone && s3Error != ErrAccessDenied {
writeErrorResponse(ctx, w, errorCodes.ToAPIErr(s3Error), r.URL, guessIsBrowserReq(r))
return
}
@@ -295,32 +294,43 @@ func (api objectAPIHandlers) ListBucketsHandler(w http.ResponseWriter, r *http.R
}
}
// Set prefix value for "s3:prefix" policy conditionals.
r.Header.Set("prefix", "")
if s3Error == ErrAccessDenied {
// Set prefix value for "s3:prefix" policy conditionals.
r.Header.Set("prefix", "")
// Set delimiter value for "s3:delimiter" policy conditionals.
r.Header.Set("delimiter", SlashSeparator)
// Set delimiter value for "s3:delimiter" policy conditionals.
r.Header.Set("delimiter", SlashSeparator)
// err will be nil here as we already called this function
// earlier in this request.
claims, _ := getClaimsFromToken(r)
var newBucketsInfo []BucketInfo
for _, bucketInfo := range bucketsInfo {
if globalIAMSys.IsAllowed(iampolicy.Args{
AccountName: accessKey,
Action: iampolicy.ListBucketAction,
BucketName: bucketInfo.Name,
ConditionValues: getConditionValues(r, "", accessKey, claims),
IsOwner: owner,
ObjectName: "",
Claims: claims,
}) {
newBucketsInfo = append(newBucketsInfo, bucketInfo)
// err will be nil here as we already called this function
// earlier in this request.
claims, _ := getClaimsFromToken(r, getSessionToken(r))
n := 0
// Use the following trick to filter in place
// https://github.com/golang/go/wiki/SliceTricks#filter-in-place
for _, bucketInfo := range bucketsInfo {
if globalIAMSys.IsAllowed(iampolicy.Args{
AccountName: accessKey,
Action: iampolicy.ListBucketAction,
BucketName: bucketInfo.Name,
ConditionValues: getConditionValues(r, "", accessKey, claims),
IsOwner: owner,
ObjectName: "",
Claims: claims,
}) {
bucketsInfo[n] = bucketInfo
n++
}
}
bucketsInfo = bucketsInfo[:n]
// No buckets can be filtered return access denied error.
if len(bucketsInfo) == 0 {
writeErrorResponse(ctx, w, errorCodes.ToAPIErr(s3Error), r.URL, guessIsBrowserReq(r))
return
}
}
// Generate response.
response := generateListBucketsResponse(newBucketsInfo)
response := generateListBucketsResponse(bucketsInfo)
encodedSuccessResponse := encodeResponse(response)
// Write response.
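Aside: the loop above applies the Go filter-in-place slice idiom linked in the comment. A minimal standalone sketch of the idiom (the data and the keep predicate here are illustrative, not taken from the handler):

package main

import "fmt"

func main() {
    s := []int{1, 2, 3, 4, 5, 6}
    keep := func(v int) bool { return v%2 == 0 }
    n := 0
    for _, v := range s {
        if keep(v) {
            // Kept elements are compacted to the front of the same backing array.
            s[n] = v
            n++
        }
    }
    s = s[:n] // truncate; no new allocation
    fmt.Println(s) // [2 4 6]
}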
@@ -367,77 +377,98 @@ func (api objectAPIHandlers) DeleteMultipleObjectsHandler(w http.ResponseWriter,
return
}
// Before proceeding, validate that the bucket exists.
_, err := objectAPI.GetBucketInfo(ctx, bucket)
if err != nil {
writeErrorResponse(ctx, w, toAPIError(ctx, err), r.URL, guessIsBrowserReq(r))
return
}
deleteObjectsFn := objectAPI.DeleteObjects
if api.CacheAPI() != nil {
deleteObjectsFn = api.CacheAPI().DeleteObjects
}
var objectsToDelete = map[string]int{}
var objectsToDelete = map[ObjectToDelete]int{}
getObjectInfoFn := objectAPI.GetObjectInfo
if api.CacheAPI() != nil {
getObjectInfoFn = api.CacheAPI().GetObjectInfo
}
var dErrs = make([]APIErrorCode, len(deleteObjects.Objects))
dErrs := make([]DeleteError, len(deleteObjects.Objects))
for index, object := range deleteObjects.Objects {
if dErrs[index] = checkRequestAuthType(ctx, r, policy.DeleteObjectAction, bucket, object.ObjectName); dErrs[index] != ErrNone {
if dErrs[index] == ErrSignatureDoesNotMatch || dErrs[index] == ErrInvalidAccessKeyID {
writeErrorResponse(ctx, w, errorCodes.ToAPIErr(dErrs[index]), r.URL, guessIsBrowserReq(r))
if apiErrCode := checkRequestAuthType(ctx, r, policy.DeleteObjectAction, bucket, object.ObjectName); apiErrCode != ErrNone {
if apiErrCode == ErrSignatureDoesNotMatch || apiErrCode == ErrInvalidAccessKeyID {
writeErrorResponse(ctx, w, errorCodes.ToAPIErr(apiErrCode), r.URL, guessIsBrowserReq(r))
return
}
apiErr := errorCodes.ToAPIErr(apiErrCode)
dErrs[index] = DeleteError{
Code: apiErr.Code,
Message: apiErr.Description,
Key: object.ObjectName,
VersionID: object.VersionID,
}
continue
}
govBypassPerms := checkRequestAuthType(ctx, r, policy.BypassGovernanceRetentionAction, bucket, object.ObjectName)
if _, err := enforceRetentionBypassForDelete(ctx, r, bucket, object.ObjectName, getObjectInfoFn, govBypassPerms); err != ErrNone {
dErrs[index] = err
continue
if object.VersionID != "" {
if rcfg, _ := globalBucketObjectLockSys.Get(bucket); rcfg.LockEnabled {
if apiErrCode := enforceRetentionBypassForDelete(ctx, r, bucket, object, getObjectInfoFn); apiErrCode != ErrNone {
apiErr := errorCodes.ToAPIErr(apiErrCode)
dErrs[index] = DeleteError{
Code: apiErr.Code,
Message: apiErr.Description,
Key: object.ObjectName,
VersionID: object.VersionID,
}
continue
}
}
}
// Avoid duplicate objects; we use a map to filter them out.
if _, ok := objectsToDelete[object.ObjectName]; !ok {
objectsToDelete[object.ObjectName] = index
if _, ok := objectsToDelete[object]; !ok {
objectsToDelete[object] = index
}
}
toNames := func(input map[string]int) (output []string) {
output = make([]string, len(input))
toNames := func(input map[ObjectToDelete]int) (output []ObjectToDelete) {
output = make([]ObjectToDelete, len(input))
idx := 0
for name := range input {
output[idx] = name
for obj := range input {
output[idx] = obj
idx++
}
return
}
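Aside: the dedup above works because ObjectToDelete is a comparable struct and can therefore key a Go map. A small sketch of the same pattern with a hypothetical key type:

package main

import "fmt"

type objKey struct{ Name, VersionID string }

func main() {
    in := []objKey{{"a", ""}, {"b", "v1"}, {"a", ""}}
    seen := make(map[objKey]int, len(in))
    for i, o := range in {
        if _, ok := seen[o]; !ok {
            seen[o] = i // remember the first occurrence only
        }
    }
    fmt.Println(len(seen)) // 2: the duplicate {"a", ""} collapsed
}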
deleteList := toNames(objectsToDelete)
errs, err := deleteObjectsFn(ctx, bucket, deleteList)
if err != nil {
writeErrorResponse(ctx, w, toAPIError(ctx, err), r.URL, guessIsBrowserReq(r))
return
}
dObjects, errs := deleteObjectsFn(ctx, bucket, deleteList, ObjectOptions{
Versioned: globalBucketVersioningSys.Enabled(bucket),
})
for i, objName := range deleteList {
dIdx := objectsToDelete[objName]
dErrs[dIdx] = toAPIErrorCode(ctx, errs[i])
}
// Collect deleted objects and errors if any.
var deletedObjects []ObjectIdentifier
var deleteErrors []DeleteError
for index, errCode := range dErrs {
object := deleteObjects.Objects[index]
// Success deleted objects are collected separately.
if errCode == ErrNone || errCode == ErrNoSuchKey {
deletedObjects = append(deletedObjects, object)
deletedObjects := make([]DeletedObject, len(deleteObjects.Objects))
for i := range errs {
dindex := objectsToDelete[deleteList[i]]
apiErr := toAPIError(ctx, errs[i])
if apiErr.Code == "" || apiErr.Code == "NoSuchKey" {
deletedObjects[dindex] = dObjects[i]
continue
}
apiErr := getAPIError(errCode)
// Error during delete should be collected separately.
deleteErrors = append(deleteErrors, DeleteError{
Code: apiErr.Code,
Message: apiErr.Description,
Key: object.ObjectName,
})
dErrs[dindex] = DeleteError{
Code: apiErr.Code,
Message: apiErr.Description,
Key: deleteList[i].ObjectName,
VersionID: deleteList[i].VersionID,
}
}
var deleteErrors []DeleteError
for _, dErr := range dErrs {
if dErr.Code != "" {
deleteErrors = append(deleteErrors, dErr)
}
}
// Generate response
@@ -449,12 +480,21 @@ func (api objectAPIHandlers) DeleteMultipleObjectsHandler(w http.ResponseWriter,
// Notify deleted event for objects.
for _, dobj := range deletedObjects {
objInfo := ObjectInfo{
Name: dobj.ObjectName,
VersionID: dobj.VersionID,
}
if dobj.DeleteMarker {
objInfo = ObjectInfo{
Name: dobj.ObjectName,
DeleteMarker: dobj.DeleteMarker,
VersionID: dobj.DeleteMarkerVersionID,
}
}
sendEvent(eventArgs{
EventName: event.ObjectRemovedDelete,
BucketName: bucket,
Object: ObjectInfo{
Name: dobj.ObjectName,
},
EventName: event.ObjectRemovedDelete,
BucketName: bucket,
Object: objInfo,
ReqParams: extractReqParams(r),
RespElements: extractRespElements(w),
UserAgent: r.UserAgent(),
@@ -509,30 +549,30 @@ func (api objectAPIHandlers) PutBucketHandler(w http.ResponseWriter, r *http.Req
return
}
opts := BucketOptions{
Location: location,
LockEnabled: objectLockEnabled,
}
if globalDNSConfig != nil {
sr, err := globalDNSConfig.Get(bucket)
if err != nil {
if err == dns.ErrNoEntriesFound {
// Proceed to creating a bucket.
if err = objectAPI.MakeBucketWithLocation(ctx, bucket, location); err != nil {
if err = objectAPI.MakeBucketWithLocation(ctx, bucket, opts); err != nil {
writeErrorResponse(ctx, w, toAPIError(ctx, err), r.URL, guessIsBrowserReq(r))
return
}
if objectLockEnabled {
configFile := path.Join(bucketConfigPrefix, bucket, bucketObjectLockEnabledConfigFile)
if err = saveConfig(ctx, objectAPI, configFile, []byte(bucketObjectLockEnabledConfig)); err != nil {
writeErrorResponse(ctx, w, toAPIError(ctx, err), r.URL, guessIsBrowserReq(r))
return
}
}
if err = globalDNSConfig.Put(bucket); err != nil {
objectAPI.DeleteBucket(ctx, bucket)
objectAPI.DeleteBucket(ctx, bucket, false)
writeErrorResponse(ctx, w, toAPIError(ctx, err), r.URL, guessIsBrowserReq(r))
return
}
// Load updated bucket metadata into memory.
globalNotificationSys.LoadBucketMetadata(GlobalContext, bucket)
// Make sure to add Location information here only for bucket
w.Header().Set(xhttp.Location,
getObjectLocation(r, globalDomainNames, bucket, ""))
@@ -557,21 +597,14 @@ func (api objectAPIHandlers) PutBucketHandler(w http.ResponseWriter, r *http.Req
}
// Proceed to creating a bucket.
err := objectAPI.MakeBucketWithLocation(ctx, bucket, location)
err := objectAPI.MakeBucketWithLocation(ctx, bucket, opts)
if err != nil {
writeErrorResponse(ctx, w, toAPIError(ctx, err), r.URL, guessIsBrowserReq(r))
return
}
if objectLockEnabled && !globalIsGateway {
configFile := path.Join(bucketConfigPrefix, bucket, bucketObjectLockEnabledConfigFile)
if err = saveConfig(ctx, objectAPI, configFile, []byte(bucketObjectLockEnabledConfig)); err != nil {
writeErrorResponse(ctx, w, toAPIError(ctx, err), r.URL, guessIsBrowserReq(r))
return
}
globalBucketObjectLockConfig.Set(bucket, objectlock.Retention{})
globalNotificationSys.PutBucketObjectLockConfig(ctx, bucket, objectlock.Retention{})
}
// Load updated bucket metadata into memory.
globalNotificationSys.LoadBucketMetadata(GlobalContext, bucket)
// Make sure to add Location information here only for bucket
w.Header().Set(xhttp.Location, path.Clean(r.URL.Path)) // Clean any trailing slashes.
@@ -606,7 +639,7 @@ func (api objectAPIHandlers) PostPolicyBucketHandler(w http.ResponseWriter, r *h
bucket := mux.Vars(r)["bucket"]
// To detect if the client has disconnected.
r.Body = &detectDisconnect{r.Body, r.Context().Done()}
r.Body = &contextReader{r.Body, r.Context()}
// Require Content-Length to be set in the request
size := r.ContentLength
@@ -742,14 +775,16 @@ func (api objectAPIHandlers) PostPolicyBucketHandler(w http.ResponseWriter, r *h
}
rawReader := hashReader
pReader := NewPutObjReader(rawReader, nil, nil)
var objectEncryptionKey []byte
var objectEncryptionKey crypto.ObjectKey
// Check if bucket encryption is enabled
_, encEnabled := globalBucketSSEConfigSys.Get(bucket)
// This request header needs to be set prior to setting ObjectOptions
if (globalAutoEncryption || encEnabled) && !crypto.SSEC.IsRequested(r.Header) {
r.Header.Add(crypto.SSEHeader, crypto.SSEAlgorithmAES256)
if _, err = globalBucketSSEConfigSys.Get(bucket); err == nil || globalAutoEncryption {
// This request header needs to be set prior to setting ObjectOptions
if !crypto.SSEC.IsRequested(r.Header) {
r.Header.Add(crypto.SSEHeader, crypto.SSEAlgorithmAES256)
}
}
// get gateway encryption options
var opts ObjectOptions
opts, err = putOpts(ctx, r, bucket, object, metadata)
@@ -784,7 +819,7 @@ func (api objectAPIHandlers) PostPolicyBucketHandler(w http.ResponseWriter, r *h
writeErrorResponse(ctx, w, toAPIError(ctx, err), r.URL, guessIsBrowserReq(r))
return
}
pReader = NewPutObjReader(rawReader, hashReader, objectEncryptionKey)
pReader = NewPutObjReader(rawReader, hashReader, &objectEncryptionKey)
}
}
@@ -794,9 +829,17 @@ func (api objectAPIHandlers) PostPolicyBucketHandler(w http.ResponseWriter, r *h
return
}
location := getObjectLocation(r, globalDomainNames, bucket, object)
// We must not use the http.Header().Set method here because some (broken)
// clients expect the ETag header key to be literally "ETag" - not "Etag" (case-sensitive).
// Therefore, we have to set the ETag directly as map entry.
w.Header()[xhttp.ETag] = []string{`"` + objInfo.ETag + `"`}
w.Header().Set(xhttp.Location, location)
// Set the relevant version ID as part of the response header.
if objInfo.VersionID != "" {
w.Header()[xhttp.AmzVersionID] = []string{objInfo.VersionID}
}
w.Header().Set(xhttp.Location, getObjectLocation(r, globalDomainNames, bucket, object))
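Aside: the direct map write above matters because net/http canonicalizes header keys passed through Set. A minimal sketch of the difference:

package main

import (
    "fmt"
    "net/http"
)

func main() {
    h := http.Header{}
    h.Set("ETag", `"abc"`)        // canonicalized: stored under "Etag"
    h["ETag"] = []string{`"abc"`} // direct map entry: key stays "ETag"
    for k := range h {
        fmt.Println(k) // prints both Etag and ETag
    }
}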
// Notify object created event.
defer sendEvent(eventArgs{
@@ -823,9 +866,9 @@ func (api objectAPIHandlers) PostPolicyBucketHandler(w http.ResponseWriter, r *h
Bucket: objInfo.Bucket,
Key: objInfo.Name,
ETag: `"` + objInfo.ETag + `"`,
Location: location,
Location: w.Header().Get(xhttp.Location),
})
writeResponse(w, http.StatusCreated, resp, "application/xml")
writeResponse(w, http.StatusCreated, resp, mimeXML)
case "200":
writeSuccessResponseHeadersOnly(w)
default:
@@ -883,88 +926,66 @@ func (api objectAPIHandlers) DeleteBucketHandler(w http.ResponseWriter, r *http.
return
}
// Verify if the caller has sufficient permissions.
if s3Error := checkRequestAuthType(ctx, r, policy.DeleteBucketAction, bucket, ""); s3Error != ErrNone {
writeErrorResponse(ctx, w, errorCodes.ToAPIErr(s3Error), r.URL, guessIsBrowserReq(r))
return
}
forceDelete := false
if value := r.Header.Get(xhttp.MinIOForceDelete); value != "" {
var err error
forceDelete, err = strconv.ParseBool(value)
if err != nil {
apiErr := errorCodes.ToAPIErr(ErrInvalidRequest)
apiErr.Description = err.Error()
writeErrorResponse(ctx, w, apiErr, r.URL, guessIsBrowserReq(r))
return
}
// if the force delete header is set, we need to evaluate the policy anyway,
// regardless of whether its value is true or false.
if s3Error := checkRequestAuthType(ctx, r, policy.ForceDeleteBucketAction, bucket, ""); s3Error != ErrNone {
writeErrorResponse(ctx, w, errorCodes.ToAPIErr(s3Error), r.URL, guessIsBrowserReq(r))
return
}
if forceDelete {
if rcfg, _ := globalBucketObjectLockSys.Get(bucket); rcfg.LockEnabled {
writeErrorResponse(ctx, w, errorCodes.ToAPIErr(ErrMethodNotAllowed), r.URL, guessIsBrowserReq(r))
return
}
}
}
deleteBucket := objectAPI.DeleteBucket
// Attempt to delete bucket.
if err := deleteBucket(ctx, bucket); err != nil {
writeErrorResponse(ctx, w, toAPIError(ctx, err), r.URL, guessIsBrowserReq(r))
if err := deleteBucket(ctx, bucket, forceDelete); err != nil {
if _, ok := err.(BucketNotEmpty); ok && (globalBucketVersioningSys.Enabled(bucket) || globalBucketVersioningSys.Suspended(bucket)) {
apiErr := toAPIError(ctx, err)
apiErr.Description = "The bucket you tried to delete is not empty. You must delete all versions in the bucket."
writeErrorResponse(ctx, w, apiErr, r.URL, guessIsBrowserReq(r))
} else {
writeErrorResponse(ctx, w, toAPIError(ctx, err), r.URL, guessIsBrowserReq(r))
}
return
}
globalNotificationSys.DeleteBucketMetadata(ctx, bucket)
if globalDNSConfig != nil {
if err := globalDNSConfig.Delete(bucket); err != nil {
// Deleting DNS entry failed, attempt to create the bucket again.
objectAPI.MakeBucketWithLocation(ctx, bucket, "")
logger.LogIf(ctx, fmt.Errorf("Unable to delete bucket DNS entry %w, please delete it manually using etcdctl", err))
writeErrorResponse(ctx, w, toAPIError(ctx, err), r.URL, guessIsBrowserReq(r))
return
}
}
globalNotificationSys.DeleteBucket(ctx, bucket)
// Write success response.
writeSuccessNoContent(w)
}
// PutBucketVersioningHandler - PUT Bucket Versioning.
// ----------
// No-op. Available for API compatibility.
func (api objectAPIHandlers) PutBucketVersioningHandler(w http.ResponseWriter, r *http.Request) {
ctx := newContext(r, w, "PutBucketVersioning")
defer logger.AuditLog(w, r, "PutBucketVersioning", mustGetClaimsFromToken(r))
vars := mux.Vars(r)
bucket := vars["bucket"]
objectAPI := api.ObjectAPI()
if objectAPI == nil {
writeErrorResponse(ctx, w, errorCodes.ToAPIErr(ErrServerNotInitialized), r.URL, guessIsBrowserReq(r))
return
}
getBucketInfo := objectAPI.GetBucketInfo
if _, err := getBucketInfo(ctx, bucket); err != nil {
writeErrorResponse(ctx, w, toAPIError(ctx, err), r.URL, guessIsBrowserReq(r))
return
}
// Write success response.
writeSuccessResponseHeadersOnly(w)
}
// GetBucketVersioningHandler - GET Bucket Versioning.
// ----------
// No-op. Available for API compatibility.
func (api objectAPIHandlers) GetBucketVersioningHandler(w http.ResponseWriter, r *http.Request) {
ctx := newContext(r, w, "GetBucketVersioning")
defer logger.AuditLog(w, r, "GetBucketVersioning", mustGetClaimsFromToken(r))
vars := mux.Vars(r)
bucket := vars["bucket"]
objectAPI := api.ObjectAPI()
if objectAPI == nil {
writeErrorResponse(ctx, w, errorCodes.ToAPIErr(ErrServerNotInitialized), r.URL, guessIsBrowserReq(r))
return
}
getBucketInfo := objectAPI.GetBucketInfo
if _, err := getBucketInfo(ctx, bucket); err != nil {
writeErrorResponse(ctx, w, toAPIError(ctx, err), r.URL, guessIsBrowserReq(r))
return
}
// Write success response.
writeSuccessResponseXML(w, []byte(getBucketVersioningResponse))
}
// PutBucketObjectLockConfigHandler - PUT Bucket object lock configuration.
// ----------
// Places an Object Lock configuration on the specified bucket. The rule
@@ -984,15 +1005,11 @@ func (api objectAPIHandlers) PutBucketObjectLockConfigHandler(w http.ResponseWri
return
}
// Deny if WORM is enabled
if globalWORMEnabled {
writeErrorResponse(ctx, w, errorCodes.ToAPIErr(ErrMethodNotAllowed), r.URL, guessIsBrowserReq(r))
return
}
if s3Error := checkRequestAuthType(ctx, r, policy.PutBucketObjectLockConfigurationAction, bucket, ""); s3Error != ErrNone {
writeErrorResponse(ctx, w, errorCodes.ToAPIErr(s3Error), r.URL, guessIsBrowserReq(r))
return
}
config, err := objectlock.ParseObjectLockConfig(r.Body)
if err != nil {
apiErr := errorCodes.ToAPIErr(ErrMalformedXML)
@@ -1000,39 +1017,23 @@ func (api objectAPIHandlers) PutBucketObjectLockConfigHandler(w http.ResponseWri
writeErrorResponse(ctx, w, apiErr, r.URL, guessIsBrowserReq(r))
return
}
configFile := path.Join(bucketConfigPrefix, bucket, bucketObjectLockEnabledConfigFile)
configData, err := readConfig(ctx, objectAPI, configFile)
configData, err := xml.Marshal(config)
if err != nil {
aerr := toAPIError(ctx, err)
if err == errConfigNotFound {
aerr = errorCodes.ToAPIErr(ErrObjectLockConfigurationNotAllowed)
}
writeErrorResponse(ctx, w, aerr, r.URL, guessIsBrowserReq(r))
writeErrorResponse(ctx, w, toAPIError(ctx, err), r.URL, guessIsBrowserReq(r))
return
}
if string(configData) != bucketObjectLockEnabledConfig {
writeErrorResponse(ctx, w, errorCodes.ToAPIErr(ErrInternalError), r.URL, guessIsBrowserReq(r))
return
}
data, err := xml.Marshal(config)
if err != nil {
// Deny object locking configuration settings on existing buckets without object lock enabled.
if _, err = globalBucketMetadataSys.GetObjectLockConfig(bucket); err != nil {
writeErrorResponse(ctx, w, toAPIError(ctx, err), r.URL, guessIsBrowserReq(r))
return
}
configFile = path.Join(bucketConfigPrefix, bucket, objectLockConfig)
if err = saveConfig(ctx, objectAPI, configFile, data); err != nil {
if err = globalBucketMetadataSys.Update(bucket, objectLockConfig, configData); err != nil {
writeErrorResponse(ctx, w, toAPIError(ctx, err), r.URL, guessIsBrowserReq(r))
return
}
if config.Rule != nil {
retention := config.ToRetention()
globalBucketObjectLockConfig.Set(bucket, retention)
globalNotificationSys.PutBucketObjectLockConfig(ctx, bucket, retention)
} else {
globalBucketObjectLockConfig.Remove(bucket)
globalNotificationSys.RemoveBucketObjectLockConfig(ctx, bucket)
}
// Write success response.
writeSuccessResponseHeadersOnly(w)
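For reference, a sketch of a request body this handler accepts, parsed with the same objectlock.ParseObjectLockConfig used above (the 30-day GOVERNANCE values are illustrative, and the package API is assumed only as far as this diff shows it):

package main

import (
    "fmt"
    "strings"

    objectlock "github.com/minio/minio/pkg/bucket/object/lock"
)

func main() {
    // Illustrative S3 ObjectLockConfiguration document.
    body := `<ObjectLockConfiguration xmlns="http://s3.amazonaws.com/doc/2006-03-01/">
  <ObjectLockEnabled>Enabled</ObjectLockEnabled>
  <Rule><DefaultRetention><Mode>GOVERNANCE</Mode><Days>30</Days></DefaultRetention></Rule>
</ObjectLockConfiguration>`
    cfg, err := objectlock.ParseObjectLockConfig(strings.NewReader(body))
    if err != nil {
        fmt.Println("parse error:", err)
        return
    }
    fmt.Printf("%+v\n", cfg.ToRetention()) // default retention derived from the rule
}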
@@ -1056,43 +1057,137 @@ func (api objectAPIHandlers) GetBucketObjectLockConfigHandler(w http.ResponseWri
writeErrorResponse(ctx, w, errorCodes.ToAPIErr(ErrServerNotInitialized), r.URL, guessIsBrowserReq(r))
return
}
// check if user has permissions to perform this operation
if s3Error := checkRequestAuthType(ctx, r, policy.GetBucketObjectLockConfigurationAction, bucket, ""); s3Error != ErrNone {
writeErrorResponse(ctx, w, errorCodes.ToAPIErr(s3Error), r.URL, guessIsBrowserReq(r))
return
}
configFile := path.Join(bucketConfigPrefix, bucket, bucketObjectLockEnabledConfigFile)
configData, err := readConfig(ctx, objectAPI, configFile)
config, err := globalBucketMetadataSys.GetObjectLockConfig(bucket)
if err != nil {
var aerr APIError
if err == errConfigNotFound {
aerr = errorCodes.ToAPIErr(ErrMethodNotAllowed)
} else {
aerr = toAPIError(ctx, err)
}
writeErrorResponse(ctx, w, aerr, r.URL, guessIsBrowserReq(r))
writeErrorResponse(ctx, w, toAPIError(ctx, err), r.URL, guessIsBrowserReq(r))
return
}
if string(configData) != bucketObjectLockEnabledConfig {
writeErrorResponse(ctx, w, errorCodes.ToAPIErr(ErrInternalError), r.URL, guessIsBrowserReq(r))
return
}
configFile = path.Join(bucketConfigPrefix, bucket, objectLockConfig)
configData, err = readConfig(ctx, objectAPI, configFile)
configData, err := xml.Marshal(config)
if err != nil {
if err != errConfigNotFound {
writeErrorResponse(ctx, w, toAPIError(ctx, err), r.URL, guessIsBrowserReq(r))
return
}
if configData, err = xml.Marshal(objectlock.NewObjectLockConfig()); err != nil {
writeErrorResponse(ctx, w, toAPIError(ctx, err), r.URL, guessIsBrowserReq(r))
return
}
writeErrorResponse(ctx, w, toAPIError(ctx, err), r.URL, guessIsBrowserReq(r))
return
}
// Write success response.
writeSuccessResponseXML(w, configData)
}
// PutBucketTaggingHandler - PUT Bucket tagging.
// ----------
func (api objectAPIHandlers) PutBucketTaggingHandler(w http.ResponseWriter, r *http.Request) {
ctx := newContext(r, w, "PutBucketTagging")
defer logger.AuditLog(w, r, "PutBucketTagging", mustGetClaimsFromToken(r))
vars := mux.Vars(r)
bucket := vars["bucket"]
objectAPI := api.ObjectAPI()
if objectAPI == nil {
writeErrorResponse(ctx, w, errorCodes.ToAPIErr(ErrServerNotInitialized), r.URL, guessIsBrowserReq(r))
return
}
if s3Error := checkRequestAuthType(ctx, r, policy.PutBucketTaggingAction, bucket, ""); s3Error != ErrNone {
writeErrorResponse(ctx, w, errorCodes.ToAPIErr(s3Error), r.URL, guessIsBrowserReq(r))
return
}
tags, err := tags.ParseBucketXML(io.LimitReader(r.Body, r.ContentLength))
if err != nil {
apiErr := errorCodes.ToAPIErr(ErrMalformedXML)
apiErr.Description = err.Error()
writeErrorResponse(ctx, w, apiErr, r.URL, guessIsBrowserReq(r))
return
}
configData, err := xml.Marshal(tags)
if err != nil {
writeErrorResponse(ctx, w, toAPIError(ctx, err), r.URL, guessIsBrowserReq(r))
return
}
if err = globalBucketMetadataSys.Update(bucket, bucketTaggingConfig, configData); err != nil {
writeErrorResponse(ctx, w, toAPIError(ctx, err), r.URL, guessIsBrowserReq(r))
return
}
// Write success response.
writeSuccessResponseHeadersOnly(w)
}
// GetBucketTaggingHandler - GET Bucket tagging.
// ----------
func (api objectAPIHandlers) GetBucketTaggingHandler(w http.ResponseWriter, r *http.Request) {
ctx := newContext(r, w, "GetBucketTagging")
defer logger.AuditLog(w, r, "GetBucketTagging", mustGetClaimsFromToken(r))
vars := mux.Vars(r)
bucket := vars["bucket"]
objectAPI := api.ObjectAPI()
if objectAPI == nil {
writeErrorResponse(ctx, w, errorCodes.ToAPIErr(ErrServerNotInitialized), r.URL, guessIsBrowserReq(r))
return
}
// check if user has permissions to perform this operation
if s3Error := checkRequestAuthType(ctx, r, policy.GetBucketTaggingAction, bucket, ""); s3Error != ErrNone {
writeErrorResponse(ctx, w, errorCodes.ToAPIErr(s3Error), r.URL, guessIsBrowserReq(r))
return
}
config, err := globalBucketMetadataSys.GetTaggingConfig(bucket)
if err != nil {
writeErrorResponse(ctx, w, toAPIError(ctx, err), r.URL, guessIsBrowserReq(r))
return
}
configData, err := xml.Marshal(config)
if err != nil {
writeErrorResponse(ctx, w, toAPIError(ctx, err), r.URL, guessIsBrowserReq(r))
return
}
// Write success response.
writeSuccessResponseXML(w, configData)
}
// DeleteBucketTaggingHandler - DELETE Bucket tagging.
// ----------
func (api objectAPIHandlers) DeleteBucketTaggingHandler(w http.ResponseWriter, r *http.Request) {
ctx := newContext(r, w, "DeleteBucketTagging")
defer logger.AuditLog(w, r, "DeleteBucketTagging", mustGetClaimsFromToken(r))
vars := mux.Vars(r)
bucket := vars["bucket"]
objectAPI := api.ObjectAPI()
if objectAPI == nil {
writeErrorResponse(ctx, w, errorCodes.ToAPIErr(ErrServerNotInitialized), r.URL, guessIsBrowserReq(r))
return
}
if s3Error := checkRequestAuthType(ctx, r, policy.PutBucketTaggingAction, bucket, ""); s3Error != ErrNone {
writeErrorResponse(ctx, w, errorCodes.ToAPIErr(s3Error), r.URL, guessIsBrowserReq(r))
return
}
if err := globalBucketMetadataSys.Update(bucket, bucketTaggingConfig, nil); err != nil {
writeErrorResponse(ctx, w, toAPIError(ctx, err), r.URL, guessIsBrowserReq(r))
return
}
// Write success response.
writeSuccessResponseHeadersOnly(w)
}
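For reference, a sketch of a tagging document round-trip through the same minio-go tags package imported above (the tag key/value are illustrative; ToMap is assumed from minio-go v7):

package main

import (
    "fmt"
    "strings"

    "github.com/minio/minio-go/v7/pkg/tags"
)

func main() {
    // Illustrative S3 bucket Tagging document with a single tag.
    body := `<Tagging><TagSet><Tag><Key>env</Key><Value>prod</Value></Tag></TagSet></Tagging>`
    t, err := tags.ParseBucketXML(strings.NewReader(body))
    if err != nil {
        fmt.Println("parse error:", err)
        return
    }
    fmt.Println(t.ToMap()) // map[env:prod]
}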


@@ -18,8 +18,8 @@ package cmd
import (
"bytes"
"context"
"encoding/xml"
"fmt"
"io/ioutil"
"net/http"
"net/http/httptest"
@@ -29,7 +29,52 @@ import (
"github.com/minio/minio/pkg/auth"
)
// Wrapper for calling GetBucketPolicy HTTP handler tests for both XL multiple disks and single node setup.
// Wrapper for calling RemoveBucket HTTP handler tests for both Erasure multiple disks and single node setup.
func TestRemoveBucketHandler(t *testing.T) {
ExecObjectLayerAPITest(t, testRemoveBucketHandler, []string{"RemoveBucket"})
}
func testRemoveBucketHandler(obj ObjectLayer, instanceType, bucketName string, apiRouter http.Handler,
credentials auth.Credentials, t *testing.T) {
_, err := obj.PutObject(GlobalContext, bucketName, "test-object", mustGetPutObjReader(t, bytes.NewBuffer([]byte{}), int64(0), "", "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855"), ObjectOptions{})
// if object upload fails stop the test.
if err != nil {
t.Fatalf("Error uploading object: <ERROR> %v", err)
}
// initialize httptest Recorder, this records any mutations to response writer inside the handler.
rec := httptest.NewRecorder()
// construct HTTP request for DELETE bucket.
req, err := newTestSignedRequestV4("DELETE", getBucketLocationURL("", bucketName), 0, nil, credentials.AccessKey, credentials.SecretKey, nil)
if err != nil {
t.Fatalf("Test %s: Failed to create HTTP request for RemoveBucketHandler: <ERROR> %v", instanceType, err)
}
// Since `apiRouter` satisfies `http.Handler` it has a ServeHTTP to execute the logic of the handler.
// Call the ServeHTTP to execute the handler.
apiRouter.ServeHTTP(rec, req)
switch rec.Code {
case http.StatusOK, http.StatusCreated, http.StatusAccepted, http.StatusNoContent:
t.Fatalf("Test %v: expected failure, but succeeded with %v", instanceType, rec.Code)
}
// Verify response of the V2 signed HTTP request.
// initialize HTTP NewRecorder, this records any mutations to response writer inside the handler.
recV2 := httptest.NewRecorder()
// construct HTTP request for DELETE bucket.
reqV2, err := newTestSignedRequestV2("DELETE", getBucketLocationURL("", bucketName), 0, nil, credentials.AccessKey, credentials.SecretKey, nil)
if err != nil {
t.Fatalf("Test %s: Failed to create HTTP request for RemoveBucketHandler: <ERROR> %v", instanceType, err)
}
// Since `apiRouter` satisfies `http.Handler` it has a ServeHTTP to execute the logic of the handler.
// Call the ServeHTTP to execute the handler.
apiRouter.ServeHTTP(recV2, reqV2)
switch recV2.Code {
case http.StatusOK, http.StatusCreated, http.StatusAccepted, http.StatusNoContent:
t.Fatalf("Test %v: expected failure, but succeeded with %v", instanceType, recV2.Code)
}
}
// Wrapper for calling GetBucketPolicy HTTP handler tests for both Erasure multiple disks and single node setup.
func TestGetBucketLocationHandler(t *testing.T) {
ExecObjectLayerAPITest(t, testGetBucketLocationHandler, []string{"GetBucketLocation"})
}
@@ -71,7 +116,7 @@ func testGetBucketLocationHandler(obj ObjectLayer, instanceType, bucketName stri
errorResponse: APIErrorResponse{
Resource: SlashSeparator + bucketName + SlashSeparator,
Code: "InvalidAccessKeyId",
Message: "The access key ID you provided does not exist in our records.",
Message: "The Access Key Id you provided does not exist in our records.",
},
shouldPass: false,
},
@@ -173,7 +218,7 @@ func testGetBucketLocationHandler(obj ObjectLayer, instanceType, bucketName stri
ExecObjectLayerAPINilTest(t, nilBucket, "", instanceType, apiRouter, nilReq)
}
// Wrapper for calling HeadBucket HTTP handler tests for both XL multiple disks and single node setup.
// Wrapper for calling HeadBucket HTTP handler tests for both Erasure multiple disks and single node setup.
func TestHeadBucketHandler(t *testing.T) {
ExecObjectLayerAPITest(t, testHeadBucketHandler, []string{"HeadBucket"})
}
@@ -278,7 +323,7 @@ func testHeadBucketHandler(obj ObjectLayer, instanceType, bucketName string, api
ExecObjectLayerAPINilTest(t, nilBucket, "", instanceType, apiRouter, nilReq)
}
// Wrapper for calling TestListMultipartUploadsHandler tests for both XL multiple disks and single node setup.
// Wrapper for calling TestListMultipartUploadsHandler tests for both Erasure multiple disks and single node setup.
func TestListMultipartUploadsHandler(t *testing.T) {
ExecObjectLayerAPITest(t, testListMultipartUploadsHandler, []string{"ListMultipartUploads"})
}
@@ -515,7 +560,7 @@ func testListMultipartUploadsHandler(obj ObjectLayer, instanceType, bucketName s
ExecObjectLayerAPINilTest(t, nilBucket, "", instanceType, apiRouter, nilReq)
}
// Wrapper for calling TestListBucketsHandler tests for both XL multiple disks and single node setup.
// Wrapper for calling TestListBucketsHandler tests for both Erasure multiple disks and single node setup.
func TestListBucketsHandler(t *testing.T) {
ExecObjectLayerAPITest(t, testListBucketsHandler, []string{"ListBuckets"})
}
@@ -609,7 +654,7 @@ func testListBucketsHandler(obj ObjectLayer, instanceType, bucketName string, ap
ExecObjectLayerAPINilTest(t, "", "", instanceType, apiRouter, nilReq)
}
// Wrapper for calling DeleteMultipleObjects HTTP handler tests for both XL multiple disks and single node setup.
// Wrapper for calling DeleteMultipleObjects HTTP handler tests for both Erasure multiple disks and single node setup.
func TestAPIDeleteMultipleObjectsHandler(t *testing.T) {
ExecObjectLayerAPITest(t, testAPIDeleteMultipleObjectsHandler, []string{"DeleteMultipleObjects"})
}
@@ -625,7 +670,7 @@ func testAPIDeleteMultipleObjectsHandler(obj ObjectLayer, instanceType, bucketNa
for i := 0; i < 10; i++ {
objectName := "test-object-" + strconv.Itoa(i)
// uploading the object.
_, err = obj.PutObject(context.Background(), bucketName, objectName, mustGetPutObjReader(t, bytes.NewBuffer(contentBytes), int64(len(contentBytes)), "", sha256sum), ObjectOptions{})
_, err = obj.PutObject(GlobalContext, bucketName, objectName, mustGetPutObjReader(t, bytes.NewBuffer(contentBytes), int64(len(contentBytes)), "", sha256sum), ObjectOptions{})
// if object upload fails stop the test.
if err != nil {
t.Fatalf("Put Object %d: Error uploading object: <ERROR> %v", i, err)
@@ -635,14 +680,17 @@ func testAPIDeleteMultipleObjectsHandler(obj ObjectLayer, instanceType, bucketNa
objectNames = append(objectNames, objectName)
}
getObjectIdentifierList := func(objectNames []string) (objectIdentifierList []ObjectIdentifier) {
getObjectToDeleteList := func(objectNames []string) (objectList []ObjectToDelete) {
for _, objectName := range objectNames {
objectIdentifierList = append(objectIdentifierList, ObjectIdentifier{objectName})
objectList = append(objectList, ObjectToDelete{
ObjectName: objectName,
})
}
return objectIdentifierList
return objectList
}
getDeleteErrorList := func(objects []ObjectIdentifier) (deleteErrorList []DeleteError) {
getDeleteErrorList := func(objects []ObjectToDelete) (deleteErrorList []DeleteError) {
for _, obj := range objects {
deleteErrorList = append(deleteErrorList, DeleteError{
Code: errorCodes[ErrAccessDenied].Code,
@@ -655,22 +703,38 @@ func testAPIDeleteMultipleObjectsHandler(obj ObjectLayer, instanceType, bucketNa
}
requestList := []DeleteObjectsRequest{
{Quiet: false, Objects: getObjectIdentifierList(objectNames[:5])},
{Quiet: true, Objects: getObjectIdentifierList(objectNames[5:])},
{Quiet: false, Objects: getObjectToDeleteList(objectNames[:5])},
{Quiet: true, Objects: getObjectToDeleteList(objectNames[5:])},
}
// generate multi objects delete response.
successRequest0 := encodeResponse(requestList[0])
successResponse0 := generateMultiDeleteResponse(requestList[0].Quiet, requestList[0].Objects, nil)
deletedObjects := make([]DeletedObject, len(requestList[0].Objects))
for i := range requestList[0].Objects {
deletedObjects[i] = DeletedObject{
ObjectName: requestList[0].Objects[i].ObjectName,
}
}
successResponse0 := generateMultiDeleteResponse(requestList[0].Quiet, deletedObjects, nil)
encodedSuccessResponse0 := encodeResponse(successResponse0)
successRequest1 := encodeResponse(requestList[1])
successResponse1 := generateMultiDeleteResponse(requestList[1].Quiet, requestList[1].Objects, nil)
deletedObjects = make([]DeletedObject, len(requestList[1].Objects))
for i := range requestList[1].Objects {
deletedObjects[i] = DeletedObject{
ObjectName: requestList[1].Objects[i].ObjectName,
}
}
successResponse1 := generateMultiDeleteResponse(requestList[1].Quiet, deletedObjects, nil)
encodedSuccessResponse1 := encodeResponse(successResponse1)
// generate multi objects delete response for errors.
// errorRequest := encodeResponse(requestList[1])
errorResponse := generateMultiDeleteResponse(requestList[1].Quiet, requestList[1].Objects, nil)
errorResponse := generateMultiDeleteResponse(requestList[1].Quiet, deletedObjects, nil)
encodedErrorResponse := encodeResponse(errorResponse)
anonRequest := encodeResponse(requestList[0])
@@ -773,6 +837,7 @@ func testAPIDeleteMultipleObjectsHandler(obj ObjectLayer, instanceType, bucketNa
// Verify whether the bucket obtained object is same as the one created.
if testCase.expectedContent != nil && !bytes.Equal(testCase.expectedContent, actualContent) {
fmt.Println(string(testCase.expectedContent), string(actualContent))
t.Errorf("Test %d : MinIO %s: Object content differs from expected value.", i+1, instanceType)
}
}


@@ -72,13 +72,22 @@ func (api objectAPIHandlers) PutBucketLifecycleHandler(w http.ResponseWriter, r
return
}
if err = objAPI.SetBucketLifecycle(ctx, bucket, bucketLifecycle); err != nil {
// Validate the received bucket lifecycle configuration
if err = bucketLifecycle.Validate(); err != nil {
writeErrorResponse(ctx, w, toAPIError(ctx, err), r.URL, guessIsBrowserReq(r))
return
}
globalLifecycleSys.Set(bucket, *bucketLifecycle)
globalNotificationSys.SetBucketLifecycle(ctx, bucket, bucketLifecycle)
configData, err := xml.Marshal(bucketLifecycle)
if err != nil {
writeErrorResponse(ctx, w, toAPIError(ctx, err), r.URL, guessIsBrowserReq(r))
return
}
if err = globalBucketMetadataSys.Update(bucket, bucketLifecycleConfig, configData); err != nil {
writeErrorResponse(ctx, w, toAPIError(ctx, err), r.URL, guessIsBrowserReq(r))
return
}
// Success.
writeSuccessResponseHeadersOnly(w)
@@ -110,21 +119,20 @@ func (api objectAPIHandlers) GetBucketLifecycleHandler(w http.ResponseWriter, r
return
}
// Read bucket access lifecycle.
bucketLifecycle, err := objAPI.GetBucketLifecycle(ctx, bucket)
config, err := globalBucketMetadataSys.GetLifecycleConfig(bucket)
if err != nil {
writeErrorResponse(ctx, w, toAPIError(ctx, err), r.URL, guessIsBrowserReq(r))
return
}
lifecycleData, err := xml.Marshal(bucketLifecycle)
configData, err := xml.Marshal(config)
if err != nil {
writeErrorResponse(ctx, w, toAPIError(ctx, err), r.URL, guessIsBrowserReq(r))
return
}
// Write lifecycle configuration to client.
writeSuccessResponseXML(w, lifecycleData)
writeSuccessResponseXML(w, configData)
}
// DeleteBucketLifecycleHandler - This HTTP handler removes bucket lifecycle configuration.
@@ -153,14 +161,11 @@ func (api objectAPIHandlers) DeleteBucketLifecycleHandler(w http.ResponseWriter,
return
}
if err := objAPI.DeleteBucketLifecycle(ctx, bucket); err != nil {
if err := globalBucketMetadataSys.Update(bucket, bucketLifecycleConfig, nil); err != nil {
writeErrorResponse(ctx, w, toAPIError(ctx, err), r.URL, guessIsBrowserReq(r))
return
}
globalLifecycleSys.Remove(bucket)
globalNotificationSys.RemoveBucketLifecycle(ctx, bucket)
// Success.
writeSuccessNoContent(w)
}


@@ -0,0 +1,303 @@
/*
* MinIO Cloud Storage, (C) 2020 MinIO, Inc.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package cmd
import (
"bytes"
"encoding/xml"
"net/http"
"net/http/httptest"
"testing"
"github.com/minio/minio/pkg/auth"
)
// Test S3 Bucket lifecycle APIs with wrong credentials
func TestBucketLifecycleWrongCredentials(t *testing.T) {
ExecObjectLayerAPITest(t, testBucketLifecycleHandlersWrongCredentials, []string{"GetBucketLifecycle", "PutBucketLifecycle", "DeleteBucketLifecycle"})
}
// Test for authentication
func testBucketLifecycleHandlersWrongCredentials(obj ObjectLayer, instanceType, bucketName string, apiRouter http.Handler,
credentials auth.Credentials, t *testing.T) {
// test cases with sample input and expected output.
testCases := []struct {
method string
bucketName string
accessKey string
secretKey string
// Sent body
body []byte
// Expected response
expectedRespStatus int
lifecycleResponse []byte
errorResponse APIErrorResponse
shouldPass bool
}{
// GET empty credentials
{
method: "GET", bucketName: bucketName,
accessKey: "",
secretKey: "",
expectedRespStatus: http.StatusForbidden,
lifecycleResponse: []byte(""),
errorResponse: APIErrorResponse{
Resource: SlashSeparator + bucketName + SlashSeparator,
Code: "AccessDenied",
Message: "Access Denied.",
},
shouldPass: false,
},
// GET wrong credentials
{
method: "GET", bucketName: bucketName,
accessKey: "abcd",
secretKey: "abcd",
expectedRespStatus: http.StatusForbidden,
lifecycleResponse: []byte(""),
errorResponse: APIErrorResponse{
Resource: SlashSeparator + bucketName + SlashSeparator,
Code: "InvalidAccessKeyId",
Message: "The Access Key Id you provided does not exist in our records.",
},
shouldPass: false,
},
// PUT empty credentials
{
method: "PUT",
bucketName: bucketName,
accessKey: "",
secretKey: "",
expectedRespStatus: http.StatusForbidden,
lifecycleResponse: []byte(""),
errorResponse: APIErrorResponse{
Resource: SlashSeparator + bucketName + SlashSeparator,
Code: "AccessDenied",
Message: "Access Denied.",
},
shouldPass: false,
},
// PUT wrong credentials
{
method: "PUT",
bucketName: bucketName,
accessKey: "abcd",
secretKey: "abcd",
expectedRespStatus: http.StatusForbidden,
lifecycleResponse: []byte(""),
errorResponse: APIErrorResponse{
Resource: SlashSeparator + bucketName + SlashSeparator,
Code: "InvalidAccessKeyId",
Message: "The Access Key Id you provided does not exist in our records.",
},
shouldPass: false,
},
// DELETE empty credentials
{
method: "DELETE",
bucketName: bucketName,
accessKey: "",
secretKey: "",
expectedRespStatus: http.StatusForbidden,
lifecycleResponse: []byte(""),
errorResponse: APIErrorResponse{
Resource: SlashSeparator + bucketName + SlashSeparator,
Code: "AccessDenied",
Message: "Access Denied.",
},
shouldPass: false,
},
// DELETE wrong credentials
{
method: "DELETE",
bucketName: bucketName,
accessKey: "abcd",
secretKey: "abcd",
expectedRespStatus: http.StatusForbidden,
lifecycleResponse: []byte(""),
errorResponse: APIErrorResponse{
Resource: SlashSeparator + bucketName + SlashSeparator,
Code: "InvalidAccessKeyId",
Message: "The Access Key Id you provided does not exist in our records.",
},
shouldPass: false,
},
}
testBucketLifecycle(obj, instanceType, bucketName, apiRouter, t, testCases)
}
// Test S3 Bucket lifecycle APIs
func TestBucketLifecycle(t *testing.T) {
ExecObjectLayerAPITest(t, testBucketLifecycleHandlers, []string{"GetBucketLifecycle", "PutBucketLifecycle", "DeleteBucketLifecycle"})
}
// Simple tests of bucket lifecycle: PUT, GET, DELETE.
// Tests are related and the order is important.
func testBucketLifecycleHandlers(obj ObjectLayer, instanceType, bucketName string, apiRouter http.Handler,
creds auth.Credentials, t *testing.T) {
// test cases with sample input and expected output.
testCases := []struct {
method string
bucketName string
accessKey string
secretKey string
// Sent body
body []byte
// Expected response
expectedRespStatus int
lifecycleResponse []byte
errorResponse APIErrorResponse
shouldPass bool
}{
// Test case - 1.
// Filter contains more than (Prefix,Tag,And) rule
{
method: "PUT",
bucketName: bucketName,
accessKey: creds.AccessKey,
secretKey: creds.SecretKey,
body: []byte(`<LifecycleConfiguration><Rule><ID>id</ID><Filter><Prefix>logs/</Prefix><Tag><Key>Key1</Key><Value>Value1</Value></Tag></Filter><Status>Enabled</Status><Expiration><Days>365</Days></Expiration></Rule></LifecycleConfiguration>`),
expectedRespStatus: http.StatusBadRequest,
lifecycleResponse: []byte(``),
errorResponse: APIErrorResponse{
Resource: SlashSeparator + bucketName + SlashSeparator,
Code: "InvalidRequest",
Message: "Filter must have exactly one of Prefix, Tag, or And specified",
},
shouldPass: false,
},
// Date contains wrong format
{
method: "PUT",
bucketName: bucketName,
accessKey: creds.AccessKey,
secretKey: creds.SecretKey,
body: []byte(`<LifecycleConfiguration><Rule><ID>id</ID><Filter><Prefix>logs/</Prefix><Tag><Key>Key1</Key><Value>Value1</Value></Tag></Filter><Status>Enabled</Status><Expiration><Date>365</Date></Expiration></Rule></LifecycleConfiguration>`),
expectedRespStatus: http.StatusBadRequest,
lifecycleResponse: []byte(``),
errorResponse: APIErrorResponse{
Resource: SlashSeparator + bucketName + SlashSeparator,
Code: "InvalidRequest",
Message: "Date must be provided in ISO 8601 format",
},
shouldPass: false,
},
{
method: "PUT",
bucketName: bucketName,
accessKey: creds.AccessKey,
secretKey: creds.SecretKey,
body: []byte(`<?xml version="1.0" encoding="UTF-8"?><LifecycleConfiguration><Rule><ID>id</ID><Filter><Prefix>logs/</Prefix></Filter><Status>Enabled</Status><Expiration><Days>365</Days></Expiration></Rule></LifecycleConfiguration>`),
expectedRespStatus: http.StatusOK,
lifecycleResponse: []byte(``),
errorResponse: APIErrorResponse{},
shouldPass: true,
},
{
method: "GET",
accessKey: creds.AccessKey,
secretKey: creds.SecretKey,
bucketName: bucketName,
body: []byte(``),
expectedRespStatus: http.StatusOK,
lifecycleResponse: []byte(`<LifecycleConfiguration><Rule><ID>id</ID><Status>Enabled</Status><Filter><Prefix>logs/</Prefix></Filter><Expiration><Days>365</Days></Expiration></Rule></LifecycleConfiguration>`),
errorResponse: APIErrorResponse{},
shouldPass: true,
},
{
method: "DELETE",
accessKey: creds.AccessKey,
secretKey: creds.SecretKey,
bucketName: bucketName,
body: []byte(``),
expectedRespStatus: http.StatusNoContent,
lifecycleResponse: []byte(``),
errorResponse: APIErrorResponse{},
shouldPass: true,
},
{
method: "GET",
accessKey: creds.AccessKey,
secretKey: creds.SecretKey,
bucketName: bucketName,
body: []byte(``),
expectedRespStatus: http.StatusNotFound,
lifecycleResponse: []byte(``),
errorResponse: APIErrorResponse{
Resource: SlashSeparator + bucketName + SlashSeparator,
Code: "NoSuchLifecycleConfiguration",
Message: "The lifecycle configuration does not exist",
},
shouldPass: false,
},
}
testBucketLifecycle(obj, instanceType, bucketName, apiRouter, t, testCases)
}
// testBucketLifecycle is a generic test of lifecycle requests
func testBucketLifecycle(obj ObjectLayer, instanceType, bucketName string, apiRouter http.Handler,
t *testing.T, testCases []struct {
method string
bucketName string
accessKey string
secretKey string
body []byte
expectedRespStatus int
lifecycleResponse []byte
errorResponse APIErrorResponse
shouldPass bool
}) {
for i, testCase := range testCases {
// initialize httptest Recorder, this records any mutations to response writer inside the handler.
rec := httptest.NewRecorder()
// construct HTTP request
req, err := newTestSignedRequestV4(testCase.method, getBucketLifecycleURL("", testCase.bucketName),
int64(len(testCase.body)), bytes.NewReader(testCase.body), testCase.accessKey, testCase.secretKey, nil)
if err != nil {
t.Fatalf("Test %d: %s: Failed to create HTTP request for GetBucketLocationHandler: <ERROR> %v", i+1, instanceType, err)
}
// Since `apiRouter` satisfies `http.Handler` it has a ServeHTTP to execute the logic of the handler.
// Call the ServeHTTP to execute the handler.
apiRouter.ServeHTTP(rec, req)
if rec.Code != testCase.expectedRespStatus {
t.Errorf("Test %d: %s: Expected the response status to be `%d`, but instead found `%d`", i+1, instanceType, testCase.expectedRespStatus, rec.Code)
}
if testCase.shouldPass && !bytes.Equal(testCase.lifecycleResponse, rec.Body.Bytes()) {
t.Errorf("Test %d: %s: Expected the response to be `%s`, but instead found `%s`", i+1, instanceType, string(testCase.lifecycleResponse), rec.Body.String())
}
errorResponse := APIErrorResponse{}
err = xml.Unmarshal(rec.Body.Bytes(), &errorResponse)
if err != nil && !testCase.shouldPass {
t.Fatalf("Test %d: %s: Unable to marshal response body %s", i+1, instanceType, rec.Body.String())
}
if errorResponse.Resource != testCase.errorResponse.Resource {
t.Errorf("Test %d: %s: Expected the error resource to be `%s`, but instead found `%s`", i+1, instanceType, testCase.errorResponse.Resource, errorResponse.Resource)
}
if errorResponse.Message != testCase.errorResponse.Message {
t.Errorf("Test %d: %s: Expected the error message to be `%s`, but instead found `%s`", i+1, instanceType, testCase.errorResponse.Message, errorResponse.Message)
}
if errorResponse.Code != testCase.errorResponse.Code {
t.Errorf("Test %d: %s: Expected the error code to be `%s`, but instead found `%s`", i+1, instanceType, testCase.errorResponse.Code, errorResponse.Code)
}
}
}

cmd/bucket-lifecycle.go (new file, 48 lines)

@@ -0,0 +1,48 @@
/*
* MinIO Cloud Storage, (C) 2019 MinIO, Inc.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package cmd
import (
"github.com/minio/minio/pkg/bucket/lifecycle"
)
const (
// Disabled means the lifecycle rule is inactive
Disabled = "Disabled"
)
// LifecycleSys - Bucket lifecycle subsystem.
type LifecycleSys struct{}
// Get - gets the lifecycle config associated with a given bucket name.
func (sys *LifecycleSys) Get(bucketName string) (lc *lifecycle.Lifecycle, err error) {
if globalIsGateway {
objAPI := newObjectLayerWithoutSafeModeFn()
if objAPI == nil {
return nil, errServerNotInitialized
}
return nil, BucketLifecycleNotFound{Bucket: bucketName}
}
return globalBucketMetadataSys.GetLifecycleConfig(bucketName)
}
// NewLifecycleSys - creates new lifecycle system.
func NewLifecycleSys() *LifecycleSys {
return &LifecycleSys{}
}


@@ -17,7 +17,10 @@
package cmd
import (
"context"
"fmt"
"net/http"
"strconv"
"strings"
"github.com/gorilla/mux"
@@ -49,13 +52,13 @@ func validateListObjectsArgs(marker, delimiter, encodingType string, maxKeys int
return ErrNone
}
// ListBucketObjectVersions - GET Bucket Object versions
// ListObjectVersions - GET Bucket Object versions
// You can use the versions subresource to list metadata about all
// of the versions of objects in a bucket.
func (api objectAPIHandlers) ListBucketObjectVersionsHandler(w http.ResponseWriter, r *http.Request) {
ctx := newContext(r, w, "ListBucketObjectVersions")
func (api objectAPIHandlers) ListObjectVersionsHandler(w http.ResponseWriter, r *http.Request) {
ctx := newContext(r, w, "ListObjectVersions")
defer logger.AuditLog(w, r, "ListBucketObjectVersions", mustGetClaimsFromToken(r))
defer logger.AuditLog(w, r, "ListObjectVersions", mustGetClaimsFromToken(r))
vars := mux.Vars(r)
bucket := vars["bucket"]
@@ -74,8 +77,7 @@ func (api objectAPIHandlers) ListBucketObjectVersionsHandler(w http.ResponseWrit
urlValues := r.URL.Query()
// Extract all the listBucketVersions query params to their native values.
// versionIDMarker is ignored here.
prefix, marker, delimiter, maxkeys, encodingType, _, errCode := getListBucketObjectVersionsArgs(urlValues)
prefix, marker, delimiter, maxkeys, encodingType, versionIDMarker, errCode := getListBucketObjectVersionsArgs(urlValues)
if errCode != ErrNone {
writeErrorResponse(ctx, w, errorCodes.ToAPIErr(errCode), r.URL, guessIsBrowserReq(r))
return
@@ -87,40 +89,29 @@ func (api objectAPIHandlers) ListBucketObjectVersionsHandler(w http.ResponseWrit
return
}
listObjects := objectAPI.ListObjects
listObjectVersions := objectAPI.ListObjectVersions
// Initiate a list objects operation based on the input params.
// Initiate a list object versions operation based on the input params.
// On success would return back ListObjectsInfo object to be
// marshaled into S3 compatible XML header.
listObjectsInfo, err := listObjects(ctx, bucket, prefix, marker, delimiter, maxkeys)
listObjectVersionsInfo, err := listObjectVersions(ctx, bucket, prefix, marker, versionIDMarker, delimiter, maxkeys)
if err != nil {
writeErrorResponse(ctx, w, toAPIError(ctx, err), r.URL, guessIsBrowserReq(r))
return
}
for i := range listObjectsInfo.Objects {
var actualSize int64
if listObjectsInfo.Objects[i].IsCompressed() {
// Read the decompressed size from the meta.json.
actualSize = listObjectsInfo.Objects[i].GetActualSize()
if actualSize < 0 {
writeErrorResponse(ctx, w, errorCodes.ToAPIErr(ErrInvalidDecompressedSize),
r.URL, guessIsBrowserReq(r))
return
}
// Set the info.Size to the actualSize.
listObjectsInfo.Objects[i].Size = actualSize
} else if crypto.IsEncrypted(listObjectsInfo.Objects[i].UserDefined) {
listObjectsInfo.Objects[i].ETag = getDecryptedETag(r.Header, listObjectsInfo.Objects[i], false)
listObjectsInfo.Objects[i].Size, err = listObjectsInfo.Objects[i].DecryptedSize()
if err != nil {
writeErrorResponse(ctx, w, toAPIError(ctx, err), r.URL, guessIsBrowserReq(r))
return
}
for i := range listObjectVersionsInfo.Objects {
if crypto.IsEncrypted(listObjectVersionsInfo.Objects[i].UserDefined) {
listObjectVersionsInfo.Objects[i].ETag = getDecryptedETag(r.Header, listObjectVersionsInfo.Objects[i], false)
}
listObjectVersionsInfo.Objects[i].Size, err = listObjectVersionsInfo.Objects[i].GetActualSize()
if err != nil {
writeErrorResponse(ctx, w, toAPIError(ctx, err), r.URL, guessIsBrowserReq(r))
return
}
}
response := generateListVersionsResponse(bucket, prefix, marker, delimiter, encodingType, maxkeys, listObjectsInfo)
response := generateListVersionsResponse(bucket, prefix, marker, versionIDMarker, delimiter, encodingType, maxkeys, listObjectVersionsInfo)
// Write success response.
writeSuccessResponseXML(w, encodeResponse(response))
@@ -169,6 +160,13 @@ func (api objectAPIHandlers) ListObjectsV2MHandler(w http.ResponseWriter, r *htt
return
}
// Analyze continuation token and route the request accordingly
var success bool
token, success = proxyRequestByToken(ctx, w, r, token)
if success {
return
}
listObjectsV2 := objectAPI.ListObjectsV2
// Initiate a list objects operation based on the input params.
@@ -181,28 +179,23 @@ func (api objectAPIHandlers) ListObjectsV2MHandler(w http.ResponseWriter, r *htt
}
for i := range listObjectsV2Info.Objects {
var actualSize int64
if listObjectsV2Info.Objects[i].IsCompressed() {
// Read the decompressed size from the meta.json.
actualSize = listObjectsV2Info.Objects[i].GetActualSize()
if actualSize < 0 {
writeErrorResponse(ctx, w, errorCodes.ToAPIErr(ErrInvalidDecompressedSize), r.URL, guessIsBrowserReq(r))
return
}
// Set the info.Size to the actualSize.
listObjectsV2Info.Objects[i].Size = actualSize
} else if crypto.IsEncrypted(listObjectsV2Info.Objects[i].UserDefined) {
if crypto.IsEncrypted(listObjectsV2Info.Objects[i].UserDefined) {
listObjectsV2Info.Objects[i].ETag = getDecryptedETag(r.Header, listObjectsV2Info.Objects[i], false)
listObjectsV2Info.Objects[i].Size, err = listObjectsV2Info.Objects[i].DecryptedSize()
if err != nil {
writeErrorResponse(ctx, w, toAPIError(ctx, err), r.URL, guessIsBrowserReq(r))
return
}
}
listObjectsV2Info.Objects[i].Size, err = listObjectsV2Info.Objects[i].GetActualSize()
if err != nil {
writeErrorResponse(ctx, w, toAPIError(ctx, err), r.URL, guessIsBrowserReq(r))
return
}
}
response := generateListObjectsV2Response(bucket, prefix, token,
listObjectsV2Info.NextContinuationToken, startAfter,
// The next continuation token has id@node_index format to optimize paginated listing
nextContinuationToken := listObjectsV2Info.NextContinuationToken
if nextContinuationToken != "" && listObjectsV2Info.IsTruncated {
nextContinuationToken = fmt.Sprintf("%s@%d", listObjectsV2Info.NextContinuationToken, getLocalNodeIndex())
}
response := generateListObjectsV2Response(bucket, prefix, token, nextContinuationToken, startAfter,
delimiter, encodingType, fetchOwner, listObjectsV2Info.IsTruncated,
maxKeys, listObjectsV2Info.Objects, listObjectsV2Info.Prefixes, true)
@@ -253,6 +246,13 @@ func (api objectAPIHandlers) ListObjectsV2Handler(w http.ResponseWriter, r *http
return
}
// Analyze continuation token and route the request accordingly
var success bool
token, success = proxyRequestByToken(ctx, w, r, token)
if success {
return
}
listObjectsV2 := objectAPI.ListObjectsV2
// Initiate a list objects operation based on the input params.
@@ -265,28 +265,23 @@ func (api objectAPIHandlers) ListObjectsV2Handler(w http.ResponseWriter, r *http
}
for i := range listObjectsV2Info.Objects {
var actualSize int64
if listObjectsV2Info.Objects[i].IsCompressed() {
// Read the decompressed size from the meta.json.
actualSize = listObjectsV2Info.Objects[i].GetActualSize()
if actualSize < 0 {
writeErrorResponse(ctx, w, errorCodes.ToAPIErr(ErrInvalidDecompressedSize), r.URL, guessIsBrowserReq(r))
return
}
// Set the info.Size to the actualSize.
listObjectsV2Info.Objects[i].Size = actualSize
} else if crypto.IsEncrypted(listObjectsV2Info.Objects[i].UserDefined) {
if crypto.IsEncrypted(listObjectsV2Info.Objects[i].UserDefined) {
listObjectsV2Info.Objects[i].ETag = getDecryptedETag(r.Header, listObjectsV2Info.Objects[i], false)
listObjectsV2Info.Objects[i].Size, err = listObjectsV2Info.Objects[i].DecryptedSize()
if err != nil {
writeErrorResponse(ctx, w, toAPIError(ctx, err), r.URL, guessIsBrowserReq(r))
return
}
}
listObjectsV2Info.Objects[i].Size, err = listObjectsV2Info.Objects[i].GetActualSize()
if err != nil {
writeErrorResponse(ctx, w, toAPIError(ctx, err), r.URL, guessIsBrowserReq(r))
return
}
}
response := generateListObjectsV2Response(bucket, prefix, token,
listObjectsV2Info.NextContinuationToken, startAfter,
// The next continuation token has id@node_index format to optimize paginated listing
nextContinuationToken := listObjectsV2Info.NextContinuationToken
if nextContinuationToken != "" && listObjectsV2Info.IsTruncated {
nextContinuationToken = fmt.Sprintf("%s@%d", listObjectsV2Info.NextContinuationToken, getLocalNodeIndex())
}
response := generateListObjectsV2Response(bucket, prefix, token, nextContinuationToken, startAfter,
delimiter, encodingType, fetchOwner, listObjectsV2Info.IsTruncated,
maxKeys, listObjectsV2Info.Objects, listObjectsV2Info.Prefixes, false)
@@ -294,6 +289,60 @@ func (api objectAPIHandlers) ListObjectsV2Handler(w http.ResponseWriter, r *http
writeSuccessResponseXML(w, encodeResponse(response))
}
func getLocalNodeIndex() int {
if len(globalProxyEndpoints) == 0 {
return -1
}
for i, ep := range globalProxyEndpoints {
if ep.IsLocal {
return i
}
}
return -1
}
func parseRequestToken(token string) (subToken string, nodeIndex int) {
if token == "" {
return token, -1
}
i := strings.Index(token, "@")
if i < 0 {
return token, -1
}
nodeIndex, err := strconv.Atoi(token[i+1:])
if err != nil {
return token, -1
}
subToken = token[:i]
return subToken, nodeIndex
}
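Aside: a self-contained round-trip of the token format handled above; parseToken mirrors parseRequestToken for illustration:

package main

import (
    "fmt"
    "strconv"
    "strings"
)

// parseToken mirrors parseRequestToken: a continuation token of the form
// "<opaque>@<nodeIndex>"; anything malformed falls back to index -1.
func parseToken(token string) (string, int) {
    i := strings.Index(token, "@")
    if i < 0 {
        return token, -1
    }
    n, err := strconv.Atoi(token[i+1:])
    if err != nil {
        return token, -1
    }
    return token[:i], n
}

func main() {
    fmt.Println(parseToken("opaque@2")) // opaque 2
    fmt.Println(parseToken("opaque"))   // opaque -1
    fmt.Println(parseToken("opaque@x")) // opaque@x -1
}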
func proxyRequestByToken(ctx context.Context, w http.ResponseWriter, r *http.Request, token string) (string, bool) {
subToken, nodeIndex := parseRequestToken(token)
if nodeIndex > 0 {
return subToken, proxyRequestByNodeIndex(ctx, w, r, nodeIndex)
}
return subToken, false
}
func proxyRequestByNodeIndex(ctx context.Context, w http.ResponseWriter, r *http.Request, index int) (success bool) {
if len(globalProxyEndpoints) == 0 {
return false
}
if index < 0 || index >= len(globalProxyEndpoints) {
return false
}
ep := globalProxyEndpoints[index]
if ep.IsLocal {
return false
}
return proxyRequest(ctx, w, r, ep)
}
func proxyRequestByBucket(ctx context.Context, w http.ResponseWriter, r *http.Request, bucket string) (success bool) {
return proxyRequestByNodeIndex(ctx, w, r, crcHashMod(bucket, len(globalProxyEndpoints)))
}
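crcHashMod is not shown in this diff; the following is a hypothetical stand-in illustrating the intent only: a deterministic bucket-name to node-index mapping, so every node proxies a given bucket's listing to the same peer.

package main

import (
    "fmt"
    "hash/crc32"
)

// crcMod is an assumed sketch, not MinIO's actual crcHashMod: hash the key
// with CRC-32 and reduce modulo the endpoint count.
func crcMod(key string, n int) int {
    if n <= 0 {
        return -1
    }
    return int(crc32.Checksum([]byte(key), crc32.MakeTable(crc32.Castagnoli)) % uint32(n))
}

func main() {
    // The same bucket always maps to the same index for a fixed endpoint count.
    fmt.Println(crcMod("mybucket", 4))
    fmt.Println(crcMod("mybucket", 4))
}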
// ListObjectsV1Handler - GET Bucket (List Objects) Version 1.
// --------------------------
// This implementation of the GET operation returns some or all (up to 10000)
@@ -332,6 +381,10 @@ func (api objectAPIHandlers) ListObjectsV1Handler(w http.ResponseWriter, r *http
return
}
if proxyRequestByBucket(ctx, w, r, bucket) {
return
}
listObjects := objectAPI.ListObjects
// Initiate a list objects operation based on the input params.
@@ -344,25 +397,16 @@ func (api objectAPIHandlers) ListObjectsV1Handler(w http.ResponseWriter, r *http
}
for i := range listObjectsInfo.Objects {
- var actualSize int64
- if listObjectsInfo.Objects[i].IsCompressed() {
- // Read the decompressed size from the meta.json.
- actualSize = listObjectsInfo.Objects[i].GetActualSize()
- if actualSize < 0 {
- writeErrorResponse(ctx, w, errorCodes.ToAPIErr(ErrInvalidDecompressedSize), r.URL, guessIsBrowserReq(r))
- return
- }
- // Set the info.Size to the actualSize.
- listObjectsInfo.Objects[i].Size = actualSize
- } else if crypto.IsEncrypted(listObjectsInfo.Objects[i].UserDefined) {
+ if crypto.IsEncrypted(listObjectsInfo.Objects[i].UserDefined) {
listObjectsInfo.Objects[i].ETag = getDecryptedETag(r.Header, listObjectsInfo.Objects[i], false)
- listObjectsInfo.Objects[i].Size, err = listObjectsInfo.Objects[i].DecryptedSize()
- if err != nil {
- writeErrorResponse(ctx, w, toAPIError(ctx, err), r.URL, guessIsBrowserReq(r))
- return
- }
}
+ listObjectsInfo.Objects[i].Size, err = listObjectsInfo.Objects[i].GetActualSize()
+ if err != nil {
+ writeErrorResponse(ctx, w, toAPIError(ctx, err), r.URL, guessIsBrowserReq(r))
+ return
+ }
}
response := generateListObjectsV1Response(bucket, prefix, marker, delimiter, encodingType, maxKeys, listObjectsInfo)
// Write success response.
response := generateListObjectsV1Response(bucket, prefix, marker, delimiter, encodingType, maxKeys, listObjectsInfo)
// Write success response.

cmd/bucket-metadata-sys.go (new file, 412 lines added)

@@ -0,0 +1,412 @@
/*
* MinIO Cloud Storage, (C) 2020 MinIO, Inc.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package cmd
import (
"bytes"
"context"
"errors"
"fmt"
"sync"
"github.com/minio/minio-go/v7/pkg/tags"
bucketsse "github.com/minio/minio/pkg/bucket/encryption"
"github.com/minio/minio/pkg/bucket/lifecycle"
objectlock "github.com/minio/minio/pkg/bucket/object/lock"
"github.com/minio/minio/pkg/bucket/policy"
"github.com/minio/minio/pkg/bucket/versioning"
"github.com/minio/minio/pkg/event"
"github.com/minio/minio/pkg/madmin"
"github.com/minio/minio/pkg/sync/errgroup"
)
// BucketMetadataSys captures all bucket metadata for a given cluster.
type BucketMetadataSys struct {
sync.RWMutex
metadataMap map[string]BucketMetadata
}
// Remove bucket metadata from memory.
func (sys *BucketMetadataSys) Remove(bucket string) {
if globalIsGateway {
return
}
sys.Lock()
delete(sys.metadataMap, bucket)
sys.Unlock()
}
// Set - sets a new metadata in-memory.
// Only a shallow copy is saved; fields holding references
// must not be mutated in place (that would race with concurrent
// readers), so replace them atomically rather than appending to them.
// Data is not persisted to disk.
func (sys *BucketMetadataSys) Set(bucket string, meta BucketMetadata) {
if globalIsGateway {
return
}
if bucket != minioMetaBucket {
sys.Lock()
sys.metadataMap[bucket] = meta
sys.Unlock()
}
}
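Because only a shallow copy is stored, callers must swap reference-typed fields wholesale instead of mutating them; a hypothetical caller (globalBucketMetadataSys is assumed to be the process-wide instance):

// Hypothetical update pattern honoring the shallow-copy rule.
meta, err := globalBucketMetadataSys.Get(bucket)
if err != nil && !errors.Is(err, errConfigNotFound) {
	return err
}
// Do not append to meta.PolicyConfigJSON: its backing array may be shared
// with concurrent readers. Assign a freshly built slice instead.
meta.PolicyConfigJSON = newPolicyJSON // newPolicyJSON: hypothetical new bytes
globalBucketMetadataSys.Set(bucket, meta)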
// Update updates bucket metadata for the specified config file.
// The config data should not be modified after being passed here.
func (sys *BucketMetadataSys) Update(bucket string, configFile string, configData []byte) error {
objAPI := newObjectLayerWithoutSafeModeFn()
if objAPI == nil {
return errServerNotInitialized
}
if globalIsGateway {
// This code is needed only for gateway implementations.
switch configFile {
case bucketSSEConfig:
if globalGatewayName == "nas" {
meta, err := loadBucketMetadata(GlobalContext, objAPI, bucket)
if err != nil {
return err
}
meta.EncryptionConfigXML = configData
return meta.Save(GlobalContext, objAPI)
}
case bucketLifecycleConfig:
if globalGatewayName == "nas" {
meta, err := loadBucketMetadata(GlobalContext, objAPI, bucket)
if err != nil {
return err
}
meta.LifecycleConfigXML = configData
return meta.Save(GlobalContext, objAPI)
}
case bucketTaggingConfig:
if globalGatewayName == "nas" {
meta, err := loadBucketMetadata(GlobalContext, objAPI, bucket)
if err != nil {
return err
}
meta.TaggingConfigXML = configData
return meta.Save(GlobalContext, objAPI)
}
case bucketNotificationConfig:
if globalGatewayName == "nas" {
meta, err := loadBucketMetadata(GlobalContext, objAPI, bucket)
if err != nil {
return err
}
meta.NotificationConfigXML = configData
return meta.Save(GlobalContext, objAPI)
}
case bucketPolicyConfig:
if configData == nil {
return objAPI.DeleteBucketPolicy(GlobalContext, bucket)
}
config, err := policy.ParseConfig(bytes.NewReader(configData), bucket)
if err != nil {
return err
}
return objAPI.SetBucketPolicy(GlobalContext, bucket, config)
}
return NotImplemented{}
}
if bucket == minioMetaBucket {
return errInvalidArgument
}
meta, err := loadBucketMetadata(GlobalContext, objAPI, bucket)
if err != nil {
return err
}
switch configFile {
case bucketPolicyConfig:
meta.PolicyConfigJSON = configData
case bucketNotificationConfig:
meta.NotificationConfigXML = configData
case bucketLifecycleConfig:
meta.LifecycleConfigXML = configData
case bucketSSEConfig:
meta.EncryptionConfigXML = configData
case bucketTaggingConfig:
meta.TaggingConfigXML = configData
case objectLockConfig:
meta.ObjectLockConfigXML = configData
case bucketVersioningConfig:
meta.VersioningConfigXML = configData
case bucketQuotaConfigFile:
meta.QuotaConfigJSON = configData
default:
return fmt.Errorf("Unknown bucket %s metadata update requested %s", bucket, configFile)
}
if err := meta.Save(GlobalContext, objAPI); err != nil {
return err
}
sys.Set(bucket, meta)
globalNotificationSys.LoadBucketMetadata(GlobalContext, bucket)
return nil
}
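A typical caller persists a config change through Update, which saves to the backend, refreshes the in-memory map, and asks peers to reload. A hedged usage sketch (a tagging update chosen arbitrarily; tagging is assumed to be a *tags.Tags built from the request body):

// Hypothetical handler fragment persisting a bucket tagging config.
configData, err := xml.Marshal(tagging)
if err != nil {
	return err
}
if err = globalBucketMetadataSys.Update(bucket, bucketTaggingConfig, configData); err != nil {
	return err
}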
// Get returns metadata for a bucket.
// If no metadata exists, errConfigNotFound is returned along with a newly initialized metadata object.
// Only a shallow copy is returned, so referenced data should not be modified,
// but can be replaced atomically.
func (sys *BucketMetadataSys) Get(bucket string) (BucketMetadata, error) {
if globalIsGateway || bucket == minioMetaBucket {
return newBucketMetadata(bucket), errConfigNotFound
}
sys.RLock()
defer sys.RUnlock()
meta, ok := sys.metadataMap[bucket]
if !ok {
return newBucketMetadata(bucket), errConfigNotFound
}
return meta, nil
}
// GetVersioningConfig returns configured versioning config
// The returned object may not be modified.
func (sys *BucketMetadataSys) GetVersioningConfig(bucket string) (*versioning.Versioning, error) {
meta, err := sys.GetConfig(bucket)
if err != nil {
return nil, err
}
return meta.versioningConfig, nil
}
// GetTaggingConfig returns configured tagging config
// The returned object may not be modified.
func (sys *BucketMetadataSys) GetTaggingConfig(bucket string) (*tags.Tags, error) {
meta, err := sys.GetConfig(bucket)
if err != nil {
if errors.Is(err, errConfigNotFound) {
return nil, BucketTaggingNotFound{Bucket: bucket}
}
return nil, err
}
if meta.taggingConfig == nil {
return nil, BucketTaggingNotFound{Bucket: bucket}
}
return meta.taggingConfig, nil
}
// GetObjectLockConfig returns configured object lock config
// The returned object may not be modified.
func (sys *BucketMetadataSys) GetObjectLockConfig(bucket string) (*objectlock.Config, error) {
meta, err := sys.GetConfig(bucket)
if err != nil {
if errors.Is(err, errConfigNotFound) {
return nil, BucketObjectLockConfigNotFound{Bucket: bucket}
}
return nil, err
}
if meta.objectLockConfig == nil {
return nil, BucketObjectLockConfigNotFound{Bucket: bucket}
}
return meta.objectLockConfig, nil
}
// GetLifecycleConfig returns configured lifecycle config
// The returned object may not be modified.
func (sys *BucketMetadataSys) GetLifecycleConfig(bucket string) (*lifecycle.Lifecycle, error) {
meta, err := sys.GetConfig(bucket)
if err != nil {
if errors.Is(err, errConfigNotFound) {
return nil, BucketLifecycleNotFound{Bucket: bucket}
}
return nil, err
}
if meta.lifecycleConfig == nil {
return nil, BucketLifecycleNotFound{Bucket: bucket}
}
return meta.lifecycleConfig, nil
}
// GetNotificationConfig returns configured notification config
// The returned object may not be modified.
func (sys *BucketMetadataSys) GetNotificationConfig(bucket string) (*event.Config, error) {
if globalIsGateway && globalGatewayName == "nas" {
// Only needed in case of NAS gateway.
objAPI := newObjectLayerWithoutSafeModeFn()
if objAPI == nil {
return nil, errServerNotInitialized
}
meta, err := loadBucketMetadata(GlobalContext, objAPI, bucket)
if err != nil {
return nil, err
}
return meta.notificationConfig, nil
}
meta, err := sys.GetConfig(bucket)
if err != nil {
return nil, err
}
return meta.notificationConfig, nil
}
// GetSSEConfig returns configured SSE config
// The returned object may not be modified.
func (sys *BucketMetadataSys) GetSSEConfig(bucket string) (*bucketsse.BucketSSEConfig, error) {
meta, err := sys.GetConfig(bucket)
if err != nil {
if errors.Is(err, errConfigNotFound) {
return nil, BucketSSEConfigNotFound{Bucket: bucket}
}
return nil, err
}
if meta.sseConfig == nil {
return nil, BucketSSEConfigNotFound{Bucket: bucket}
}
return meta.sseConfig, nil
}
// GetPolicyConfig returns configured bucket policy
// The returned object may not be modified.
func (sys *BucketMetadataSys) GetPolicyConfig(bucket string) (*policy.Policy, error) {
if globalIsGateway {
objAPI := newObjectLayerWithoutSafeModeFn()
if objAPI == nil {
return nil, errServerNotInitialized
}
return objAPI.GetBucketPolicy(GlobalContext, bucket)
}
meta, err := sys.GetConfig(bucket)
if err != nil {
if errors.Is(err, errConfigNotFound) {
return nil, BucketPolicyNotFound{Bucket: bucket}
}
return nil, err
}
if meta.policyConfig == nil {
return nil, BucketPolicyNotFound{Bucket: bucket}
}
return meta.policyConfig, nil
}
// GetQuotaConfig returns configured bucket quota
// The returned object may not be modified.
func (sys *BucketMetadataSys) GetQuotaConfig(bucket string) (*madmin.BucketQuota, error) {
meta, err := sys.GetConfig(bucket)
if err != nil {
return nil, err
}
return meta.quotaConfig, nil
}
// GetConfig returns the current bucket metadata
// The returned object may not be modified.
func (sys *BucketMetadataSys) GetConfig(bucket string) (BucketMetadata, error) {
objAPI := newObjectLayerWithoutSafeModeFn()
if objAPI == nil {
return newBucketMetadata(bucket), errServerNotInitialized
}
if globalIsGateway {
return newBucketMetadata(bucket), NotImplemented{}
}
if bucket == minioMetaBucket {
return newBucketMetadata(bucket), errInvalidArgument
}
sys.RLock()
meta, ok := sys.metadataMap[bucket]
sys.RUnlock()
if ok {
return meta, nil
}
meta, err := loadBucketMetadata(GlobalContext, objAPI, bucket)
if err != nil {
return meta, err
}
sys.Lock()
sys.metadataMap[bucket] = meta
sys.Unlock()
return meta, nil
}
// Init - initializes bucket metadata system for all buckets.
func (sys *BucketMetadataSys) Init(ctx context.Context, buckets []BucketInfo, objAPI ObjectLayer) error {
if objAPI == nil {
return errServerNotInitialized
}
// In gateway mode, we don't need to load bucket metadata
// from the backend.
if globalIsGateway {
return nil
}
// Load bucket metadata once during boot.
return sys.load(ctx, buckets, objAPI)
}
// concurrentLoad loads metadata for the given buckets concurrently to speed up startup.
func (sys *BucketMetadataSys) concurrentLoad(ctx context.Context, buckets []BucketInfo, objAPI ObjectLayer) error {
g := errgroup.WithNErrs(len(buckets))
for index := range buckets {
index := index
g.Go(func() error {
meta, err := loadBucketMetadata(ctx, objAPI, buckets[index].Name)
if err != nil {
return err
}
sys.Lock()
sys.metadataMap[buckets[index].Name] = meta
sys.Unlock()
return nil
}, index)
}
for _, err := range g.Wait() {
if err != nil {
return err
}
}
return nil
}
// Loads bucket metadata for all buckets into BucketMetadataSys.
func (sys *BucketMetadataSys) load(ctx context.Context, buckets []BucketInfo, objAPI ObjectLayer) error {
count := 100 // load 100 bucket metadata at a time (e.g. 250 buckets load in batches of 100, 100 and 50).
for {
if len(buckets) < count {
return sys.concurrentLoad(ctx, buckets, objAPI)
}
if err := sys.concurrentLoad(ctx, buckets[:count], objAPI); err != nil {
return err
}
buckets = buckets[count:]
}
}
// NewBucketMetadataSys - creates a new bucket metadata system.
func NewBucketMetadataSys() *BucketMetadataSys {
return &BucketMetadataSys{
metadataMap: make(map[string]BucketMetadata),
}
}
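Boot-time wiring would look roughly like this (a sketch; objAPI and the ListBuckets call reflect how the server is assumed to obtain its bucket list):

// Hypothetical startup fragment.
sys := NewBucketMetadataSys()
buckets, err := objAPI.ListBuckets(GlobalContext)
if err != nil {
	return err
}
if err = sys.Init(GlobalContext, buckets, objAPI); err != nil {
	return err
}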

cmd/bucket-metadata.go (new file, 337 lines added)

@@ -0,0 +1,337 @@
/*
* MinIO Cloud Storage, (C) 2020 MinIO, Inc.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package cmd
import (
"bytes"
"context"
"encoding/binary"
"encoding/xml"
"errors"
"fmt"
"path"
"time"
"github.com/minio/minio-go/v7/pkg/tags"
"github.com/minio/minio/cmd/logger"
bucketsse "github.com/minio/minio/pkg/bucket/encryption"
"github.com/minio/minio/pkg/bucket/lifecycle"
objectlock "github.com/minio/minio/pkg/bucket/object/lock"
"github.com/minio/minio/pkg/bucket/policy"
"github.com/minio/minio/pkg/bucket/versioning"
"github.com/minio/minio/pkg/event"
"github.com/minio/minio/pkg/madmin"
)
const (
legacyBucketObjectLockEnabledConfigFile = "object-lock-enabled.json"
legacyBucketObjectLockEnabledConfig = `{"x-amz-bucket-object-lock-enabled":true}`
bucketMetadataFile = ".metadata.bin"
bucketMetadataFormat = 1
bucketMetadataVersion = 1
)
var (
enabledBucketObjectLockConfig = []byte(`<ObjectLockConfiguration xmlns="http://s3.amazonaws.com/doc/2006-03-01/"><ObjectLockEnabled>Enabled</ObjectLockEnabled></ObjectLockConfiguration>`)
enabledBucketVersioningConfig = []byte(`<VersioningConfiguration xmlns="http://s3.amazonaws.com/doc/2006-03-01/"><Status>Enabled</Status></VersioningConfiguration>`)
)
//go:generate msgp -file $GOFILE
// BucketMetadata contains bucket metadata.
// When adding/removing fields, regenerate the marshal code using the go generate above.
// Only changing meaning of fields requires a version bump.
// bucketMetadataFormat refers to the format.
// bucketMetadataVersion can be used to track a rolling upgrade of a field.
type BucketMetadata struct {
Name string
Created time.Time
LockEnabled bool // legacy, no longer used; retained only for migrating old metadata.
PolicyConfigJSON []byte
NotificationConfigXML []byte
LifecycleConfigXML []byte
ObjectLockConfigXML []byte
VersioningConfigXML []byte
EncryptionConfigXML []byte
TaggingConfigXML []byte
QuotaConfigJSON []byte
// Unexported fields. Must be updated atomically.
policyConfig *policy.Policy
notificationConfig *event.Config
lifecycleConfig *lifecycle.Lifecycle
objectLockConfig *objectlock.Config
versioningConfig *versioning.Versioning
sseConfig *bucketsse.BucketSSEConfig
taggingConfig *tags.Tags
quotaConfig *madmin.BucketQuota
}
// newBucketMetadata creates BucketMetadata with the supplied name and Created set to the current time.
func newBucketMetadata(name string) BucketMetadata {
return BucketMetadata{
Name: name,
Created: UTCNow(),
notificationConfig: &event.Config{
XMLNS: "http://s3.amazonaws.com/doc/2006-03-01/",
},
quotaConfig: &madmin.BucketQuota{},
versioningConfig: &versioning.Versioning{
XMLNS: "http://s3.amazonaws.com/doc/2006-03-01/",
},
}
}
// Load - loads the metadata of a bucket by name from the ObjectLayer API.
// If an error is returned, the metadata remains default-initialized.
func (b *BucketMetadata) Load(ctx context.Context, api ObjectLayer, name string) error {
configFile := path.Join(bucketConfigPrefix, name, bucketMetadataFile)
data, err := readConfig(ctx, api, configFile)
if err != nil {
return err
}
if len(data) <= 4 {
return fmt.Errorf("loadBucketMetadata: no data")
}
// Read header
switch binary.LittleEndian.Uint16(data[0:2]) {
case bucketMetadataFormat:
default:
return fmt.Errorf("loadBucketMetadata: unknown format: %d", binary.LittleEndian.Uint16(data[0:2]))
}
switch binary.LittleEndian.Uint16(data[2:4]) {
case bucketMetadataVersion:
default:
return fmt.Errorf("loadBucketMetadata: unknown version: %d", binary.LittleEndian.Uint16(data[2:4]))
}
// OK, parse data.
_, err = b.UnmarshalMsg(data[4:])
return err
}
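The 4-byte header checked above holds two little-endian uint16 values, so with the current constants (format 1, version 1) every .metadata.bin begins with the bytes 01 00 01 00, followed by the msgp payload. A worked example mirroring the Save method below:

// Header layout: bytes 0-1 format, bytes 2-3 version, both little-endian.
hdr := make([]byte, 4)
binary.LittleEndian.PutUint16(hdr[0:2], bucketMetadataFormat)  // format = 1
binary.LittleEndian.PutUint16(hdr[2:4], bucketMetadataVersion) // version = 1
// hdr == []byte{0x01, 0x00, 0x01, 0x00}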
// loadBucketMetadata loads bucket metadata, migrating legacy configs to the new format when necessary.
func loadBucketMetadata(ctx context.Context, objectAPI ObjectLayer, bucket string) (BucketMetadata, error) {
b := newBucketMetadata(bucket)
err := b.Load(ctx, objectAPI, bucket)
if err == nil {
return b, b.convertLegacyConfigs(ctx, objectAPI)
}
if err != errConfigNotFound {
return b, err
}
// Old bucket without bucket metadata. Hence we migrate existing settings.
return b, b.convertLegacyConfigs(ctx, objectAPI)
}
// parseAllConfigs will parse all configs and populate the private fields.
// The first error encountered is returned.
func (b *BucketMetadata) parseAllConfigs(ctx context.Context, objectAPI ObjectLayer) (err error) {
if len(b.PolicyConfigJSON) != 0 {
b.policyConfig, err = policy.ParseConfig(bytes.NewReader(b.PolicyConfigJSON), b.Name)
if err != nil {
return err
}
} else {
b.policyConfig = nil
}
if len(b.NotificationConfigXML) != 0 {
if err = xml.Unmarshal(b.NotificationConfigXML, b.notificationConfig); err != nil {
return err
}
}
if len(b.LifecycleConfigXML) != 0 {
b.lifecycleConfig, err = lifecycle.ParseLifecycleConfig(bytes.NewReader(b.LifecycleConfigXML))
if err != nil {
return err
}
} else {
b.lifecycleConfig = nil
}
if len(b.EncryptionConfigXML) != 0 {
b.sseConfig, err = bucketsse.ParseBucketSSEConfig(bytes.NewReader(b.EncryptionConfigXML))
if err != nil {
return err
}
} else {
b.sseConfig = nil
}
if len(b.TaggingConfigXML) != 0 {
b.taggingConfig, err = tags.ParseBucketXML(bytes.NewReader(b.TaggingConfigXML))
if err != nil {
return err
}
} else {
b.taggingConfig = nil
}
if bytes.Equal(b.ObjectLockConfigXML, enabledBucketObjectLockConfig) {
b.VersioningConfigXML = enabledBucketVersioningConfig
}
if len(b.ObjectLockConfigXML) != 0 {
b.objectLockConfig, err = objectlock.ParseObjectLockConfig(bytes.NewReader(b.ObjectLockConfigXML))
if err != nil {
return err
}
} else {
b.objectLockConfig = nil
}
if len(b.VersioningConfigXML) != 0 {
b.versioningConfig, err = versioning.ParseConfig(bytes.NewReader(b.VersioningConfigXML))
if err != nil {
return err
}
}
if len(b.QuotaConfigJSON) != 0 {
b.quotaConfig, err = parseBucketQuota(b.Name, b.QuotaConfigJSON)
if err != nil {
return err
}
}
return nil
}
func (b *BucketMetadata) convertLegacyConfigs(ctx context.Context, objectAPI ObjectLayer) error {
legacyConfigs := []string{
legacyBucketObjectLockEnabledConfigFile,
bucketPolicyConfig,
bucketNotificationConfig,
bucketLifecycleConfig,
bucketQuotaConfigFile,
bucketSSEConfig,
bucketTaggingConfig,
objectLockConfig,
}
configs := make(map[string][]byte)
// Handle migration from lockEnabled to newer format.
if b.LockEnabled {
configs[objectLockConfig] = enabledBucketObjectLockConfig
b.LockEnabled = false // legacy value unset it
// we are only interested in b.ObjectLockConfigXML or objectLockConfig value
}
for _, legacyFile := range legacyConfigs {
configFile := path.Join(bucketConfigPrefix, b.Name, legacyFile)
configData, err := readConfig(ctx, objectAPI, configFile)
if err != nil {
if errors.Is(err, errConfigNotFound) {
// legacy file config not found, proceed to look for new metadata.
continue
}
return err
}
configs[legacyFile] = configData
}
if len(configs) == 0 {
// nothing to update, return right away.
return b.parseAllConfigs(ctx, objectAPI)
}
for legacyFile, configData := range configs {
switch legacyFile {
case legacyBucketObjectLockEnabledConfigFile:
if string(configData) == legacyBucketObjectLockEnabledConfig {
b.ObjectLockConfigXML = enabledBucketObjectLockConfig
b.VersioningConfigXML = enabledBucketVersioningConfig
b.LockEnabled = false // legacy value unset it
// we are only interested in b.ObjectLockConfigXML
}
case bucketPolicyConfig:
b.PolicyConfigJSON = configData
case bucketNotificationConfig:
b.NotificationConfigXML = configData
case bucketLifecycleConfig:
b.LifecycleConfigXML = configData
case bucketSSEConfig:
b.EncryptionConfigXML = configData
case bucketTaggingConfig:
b.TaggingConfigXML = configData
case objectLockConfig:
b.ObjectLockConfigXML = configData
b.VersioningConfigXML = enabledBucketVersioningConfig
case bucketQuotaConfigFile:
b.QuotaConfigJSON = configData
}
}
if err := b.Save(ctx, objectAPI); err != nil {
return err
}
for legacyFile := range configs {
configFile := path.Join(bucketConfigPrefix, b.Name, legacyFile)
if err := deleteConfig(ctx, objectAPI, configFile); err != nil && !errors.Is(err, errConfigNotFound) {
logger.LogIf(ctx, err)
}
}
return nil
}
// Save config to supplied ObjectLayer api.
func (b *BucketMetadata) Save(ctx context.Context, api ObjectLayer) error {
if err := b.parseAllConfigs(ctx, api); err != nil {
return err
}
data := make([]byte, 4, b.Msgsize()+4)
// Initialize the header.
binary.LittleEndian.PutUint16(data[0:2], bucketMetadataFormat)
binary.LittleEndian.PutUint16(data[2:4], bucketMetadataVersion)
// Marshal the bucket metadata
data, err := b.MarshalMsg(data)
if err != nil {
return err
}
configFile := path.Join(bucketConfigPrefix, b.Name, bucketMetadataFile)
return saveConfig(ctx, api, configFile, data)
}
// deleteBucketMetadata deletes bucket metadata.
// If the config does not exist, no error is returned.
func deleteBucketMetadata(ctx context.Context, obj ObjectLayer, bucket string) error {
metadataFiles := []string{
dataUsageCacheName,
bucketMetadataFile,
}
for _, metaFile := range metadataFiles {
configFile := path.Join(bucketConfigPrefix, bucket, metaFile)
if err := deleteConfig(ctx, obj, configFile); err != nil && err != errConfigNotFound {
return err
}
}
return nil
}

Some files were not shown because too many files have changed in this diff.