Compare commits

726 Commits

Author SHA1 Message Date
Klaus Post
9885a0a6af Fix hasSpaceFor in SNSD setup (#17630)
If a drive is offline or full, we divide by 0.

Fixes #17629

Bonus: Reject when any valid disk exceeds minimum inode threshold.
2023-07-11 14:29:34 -07:00
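
A hedged sketch of the class of guard this fix implies: averaging free space across drives must skip offline or full drives before dividing, or the divisor can be zero. `diskInfo` and `hasSpaceFor` below are simplified stand-ins, not MinIO's actual types.

```
package main

import "fmt"

// diskInfo is a simplified stand-in for a drive's capacity report.
type diskInfo struct {
	Total, Free uint64
	Offline     bool
}

// hasSpaceFor reports whether the usable drives can absorb size more bytes.
// Counting only usable drives avoids the divide-by-zero that occurs when
// every drive is offline or completely full.
func hasSpaceFor(disks []diskInfo, size uint64) bool {
	var free, usable uint64
	for _, d := range disks {
		if d.Offline || d.Free == 0 {
			continue // skip drives that contribute nothing
		}
		free += d.Free
		usable++
	}
	if usable == 0 {
		return false // nothing usable; previously this path divided by zero
	}
	return free/usable > size
}

func main() {
	disks := []diskInfo{{Total: 100, Free: 0}, {Offline: true}}
	fmt.Println(hasSpaceFor(disks, 10)) // false, no panic
}
```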
Kaan Kabalak
f64d62b01d Fix style of logOnceIf calls w/unique identifiers (#17631) 2023-07-11 13:17:45 -07:00
Harshavardhana
82075e8e3a use strconv variants to improve on performance per 'op' (#17626)
```
BenchmarkItoa
BenchmarkItoa-8         	673628088	         1.946 ns/op	       0 B/op	       0 allocs/op
BenchmarkFormatInt
BenchmarkFormatInt-8    	592919769	         2.012 ns/op	       0 B/op	       0 allocs/op
BenchmarkSprint
BenchmarkSprint-8       	26149144	        49.06 ns/op	       2 B/op	       1 allocs/op
BenchmarkSprintBool
BenchmarkSprintBool-8   	26440180	        45.92 ns/op	       4 B/op	       1 allocs/op
BenchmarkFormatBool
BenchmarkFormatBool-8   	1000000000	         0.2558 ns/op	       0 B/op	       0 allocs/op
```
2023-07-11 07:46:58 -07:00
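
A minimal sketch of benchmarks that produce numbers like those above (run with `go test -bench .`); the PR's actual benchmark bodies may differ.

```
package bench

import (
	"fmt"
	"strconv"
	"testing"
)

// strconv.Itoa avoids the reflection and interface boxing fmt.Sprint incurs.
func BenchmarkItoa(b *testing.B) {
	for i := 0; i < b.N; i++ {
		_ = strconv.Itoa(i)
	}
}

func BenchmarkSprint(b *testing.B) {
	for i := 0; i < b.N; i++ {
		_ = fmt.Sprint(i)
	}
}

func BenchmarkFormatBool(b *testing.B) {
	for i := 0; i < b.N; i++ {
		_ = strconv.FormatBool(i&1 == 0)
	}
}
```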
Harshavardhana
5b7c83341b move per bucket metrics to peer location (#17627) 2023-07-11 07:46:24 -07:00
Harshavardhana
8522905d97 update helm linting workflow 2023-07-10 20:11:51 -07:00
Harshavardhana
524ed7ccd0 update go.mod pointing to wrong repo 2023-07-10 20:10:39 -07:00
Poorna
fb49aead9b replication: add validation API (#17520)
To check if replication is set up properly on a bucket.
2023-07-10 20:09:20 -07:00
Aditya Manthramurthy
85f5700e4e fix: missing audit logger call for some admin APIs (#17623) 2023-07-10 16:59:44 -07:00
Aditya Manthramurthy
43b3c093ef Fix: set request id in trace context properly (#17622) 2023-07-10 15:40:44 -07:00
Kaan Kabalak
bd6842d917 Further print log messages once per error (#17618) 2023-07-10 07:59:57 -07:00
Poorna
e8c98c3246 Avoid extra GetObjectInfo call in DeleteObject API (#17599)
Optimize the DeleteObject API to avoid an extra
GetObjectInfo call on the replicating side.

For the receiving side, it is just a regular
DeleteObject call.

Bonus: Fix a corner case where the purged version is
absent on the target (either because replication is not
yet complete, the target version was already deleted in
one-way replication, or replication was disabled).

In such cases, mark the version purge complete.
2023-07-10 07:57:56 -07:00
Harshavardhana
dfd7cca0d2 fix: allow cancel of decom only when it's in progress (#17607) 2023-07-10 07:55:38 -07:00
Harshavardhana
af3d99e35f helm release v5.0.13
Signed-off-by: Harshavardhana <harsha@minio.io>
2023-07-09 00:13:05 -07:00
Harshavardhana
f6186965c3 honor DeleteAllVersions in list(), head() calls (#17604) 2023-07-08 15:42:10 -07:00
Ian Martin
90c2129f44 Add helm chart linting to CI workflow (#17606) 2023-07-08 15:41:12 -07:00
yaohwu
69e131ee69 Fix helm templating syntax in post-job (#17605) 2023-07-08 12:18:31 -07:00
Harshavardhana
28a01f0320 update missing license header in files (#17603) 2023-07-08 10:42:05 -07:00
Anis Eleuch
6d0bc5ab1e prometheus: Fix internode stats (#17594)
Internode calculation was done inside S3 handlers; fix it by moving it
to internode handlers.

Remove admin stats since they are not used.
2023-07-08 07:35:11 -07:00
Aditya Manthramurthy
7af78af1f0 fix: set request ID in tracing context key (#17602)
Since `addCustomerHeaders` middleware was after the `httpTracer`
middleware, the request ID was not set in the http tracing context. By
reordering these middleware functions, the request ID header becomes
available. We also avoid setting the tracing context key again in
`newContext`.

Bonus: All middleware functions are renamed with a "Middleware" suffix
to avoid confusion with http Handler functions.
2023-07-08 07:31:42 -07:00
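
A minimal sketch of why middleware ordering matters here, using illustrative names (`requestIDMiddleware`, `tracerMiddleware`), not MinIO's actual handlers: the middleware that sets the request ID must wrap (run before) the tracer, or the tracer never sees the ID.

```
package main

import (
	"fmt"
	"net/http"
	"net/http/httptest"
)

// requestIDMiddleware sets a request ID before inner handlers run.
func requestIDMiddleware(next http.Handler) http.Handler {
	return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		r.Header.Set("X-Request-ID", "req-123") // illustrative static ID
		next.ServeHTTP(w, r)
	})
}

// tracerMiddleware only observes the ID if it runs inside requestIDMiddleware.
func tracerMiddleware(next http.Handler) http.Handler {
	return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		fmt.Println("trace id:", r.Header.Get("X-Request-ID"))
		next.ServeHTTP(w, r)
	})
}

func main() {
	final := http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {})
	// Correct order: the ID-setting middleware wraps (runs before) the tracer.
	h := requestIDMiddleware(tracerMiddleware(final))
	h.ServeHTTP(httptest.NewRecorder(), httptest.NewRequest("GET", "/", nil))
}
```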
Klaus Post
45a717a142 Avoid per request URL parsing (#17593)
Every request does a `url.Parse(c.url.String())` to clone a URL.

The host will also be static, so we rewrite that on creation.
2023-07-07 22:07:30 -07:00
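
The usual pattern for this optimization is a shallow struct copy instead of a serialize-and-reparse round trip; a sketch, assuming the caller only mutates top-level fields of the clone:

```
package main

import (
	"fmt"
	"net/url"
)

func main() {
	base, _ := url.Parse("https://minio.example.com:9000/bucket")

	// Expensive per-request clone: serialize and re-parse.
	slow, _ := url.Parse(base.String())

	// Cheap clone: url.URL is a value type, so a shallow copy suffices
	// as long as the caller only mutates top-level fields like Path.
	fast := *base
	fast.Path = "/bucket/object"

	fmt.Println(slow.Host, fast.Path)
}
```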
Ian Martin
cb1ec0a0d9 fix: helm templating syntax in post-job (#17600) 2023-07-07 22:06:40 -07:00
Harshavardhana
abb1f22057 Revert "change ttfb_distribution metrics to histogramMetric (#17115)"
This reverts commit 9112ca4e29.
2023-07-07 13:57:37 -07:00
Harshavardhana
73efe436a5 update helm v5.0.12
Signed-off-by: Harshavardhana <harsha@minio.io>
2023-07-07 09:44:16 -07:00
Harshavardhana
f41edb23e2 add variadic delays in peer notification retries (#17592)
Just adds more `jitter` to our retries to avoid
burst flooding of peer calls.
2023-07-07 07:47:38 -07:00
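
A sketch of jittered retries in the spirit of this change; `retryWithJitter` is an illustrative helper, not MinIO's API. Randomizing the delay keeps many peers from retrying in lock-step bursts.

```
package main

import (
	"fmt"
	"math/rand"
	"time"
)

// retryWithJitter spaces attempts by a base delay plus random jitter so
// that many peers retrying at once do not flood the target in bursts.
func retryWithJitter(attempts int, base time.Duration, op func() error) error {
	var err error
	for i := 0; i < attempts; i++ {
		if err = op(); err == nil {
			return nil
		}
		jitter := time.Duration(rand.Int63n(int64(base))) // 0..base extra delay
		time.Sleep(base + jitter)
	}
	return err
}

func main() {
	calls := 0
	err := retryWithJitter(3, 10*time.Millisecond, func() error {
		calls++
		if calls < 3 {
			return fmt.Errorf("peer unreachable")
		}
		return nil
	})
	fmt.Println(calls, err)
}
```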
Minio Trusted
6335a48a53 Update yaml files to latest version RELEASE.2023-07-07T07-13-57Z 2023-07-07 07:53:18 +00:00
Klaus Post
e20aab25ec Check for progress before we reach the limit (#17552) 2023-07-07 00:13:57 -07:00
Anis Eleuch
66bea3942a CI/CD to stop one node per pool in the two pools mint test (#17518)
This is to make sure that all S3 ops work when there is enough quorum.
2023-07-07 00:10:13 -07:00
Harshavardhana
08acd9c43d update all our deps (#17590) 2023-07-06 21:47:46 -07:00
Klaus Post
ff5988f4e0 Reduce allocations (#17584)
* Reduce allocations

* Add stringsHasPrefixFold which can compare string prefixes, while ignoring case and not allocating.
* Reuse all msgp.Readers
* Reuse metadata buffers when not reading data.

* Make type safe. Make buffer 4K instead of 8.

* Unslice
2023-07-06 16:02:08 -07:00
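
A plausible shape for the allocation-free, case-insensitive prefix check named above; the PR's actual implementation may differ. Slicing a string and `strings.EqualFold` both avoid the allocation a `strings.ToLower` round trip would cause.

```
package main

import (
	"fmt"
	"strings"
)

// stringsHasPrefixFold reports whether s begins with prefix, ignoring case,
// without allocating the way strings.ToLower(s) would.
func stringsHasPrefixFold(s, prefix string) bool {
	return len(s) >= len(prefix) && strings.EqualFold(s[:len(prefix)], prefix)
}

func main() {
	fmt.Println(stringsHasPrefixFold("X-Amz-Meta-Key", "x-amz-meta-")) // true
}
```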
Harshavardhana
1bf23374a3 do not need to gzip 'mc' in our container (#17586)
fixes #17581
2023-07-06 15:13:17 -07:00
Alex
899b429094 Update Console to v0.32.0 (#17587)
Signed-off-by: Benjamin Perez <benjamin@bexsoft.net>
2023-07-06 15:13:07 -07:00
jiuker
c47ff44f5e fix: disable site network test if site replication is disabled (#17579) 2023-07-06 09:19:14 -07:00
Harshavardhana
8af0773baf remove deprecated Content-Security-Policy (#17580)
https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Content-Security-Policy/block-all-mixed-content
2023-07-06 09:18:38 -07:00
Alex
37cbd114de Updated console to v0.31.0 (#17578) 2023-07-06 00:37:56 -07:00
jiuker
2dbb1cff4a feat: support perf site replication (#17477) 2023-07-05 22:28:26 -07:00
Klaus Post
6efcf9c982 Do lockless last minute latency metrics (#17576)
Collect metrics in one second and accumulate lockless before sending upstream.
2023-07-05 10:40:45 -07:00
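
A hedged sketch of lock-free accumulation with a periodic swap, the pattern this change describes; the type below is illustrative, not MinIO's actual collector.

```
package main

import (
	"fmt"
	"sync/atomic"
	"time"
)

// lastMinuteLatency accumulates counts lock-free; a single goroutine swaps
// the totals out once per second and forwards them upstream.
type lastMinuteLatency struct {
	total atomic.Int64 // nanoseconds observed in the current second
	count atomic.Int64
}

func (l *lastMinuteLatency) add(d time.Duration) {
	l.total.Add(int64(d))
	l.count.Add(1)
}

// flush atomically takes the current totals, leaving zeroed counters behind.
func (l *lastMinuteLatency) flush() (total time.Duration, count int64) {
	return time.Duration(l.total.Swap(0)), l.count.Swap(0)
}

func main() {
	var m lastMinuteLatency
	m.add(3 * time.Millisecond)
	m.add(5 * time.Millisecond)
	t, n := m.flush()
	fmt.Println(t, n) // 8ms 2
}
```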
Harshavardhana
0bc34952eb fix: under FanOut API avoid repeated md5sum calculation (#17572)
md5sum calculation has a high CPU overhead; avoid calculating
it repeatedly for similar fanOut calls.

This fixes the following CPU profiler result:
```
(pprof) top10
Showing nodes accounting for 678.68s, 84.67% of 801.54s total
Dropped 1072 nodes (cum <= 4.01s)
Showing top 10 nodes out of 156
      flat  flat%   sum%        cum   cum%
   332.54s 41.49% 41.49%    332.54s 41.49%  runtime/internal/syscall.Syscall6
   228.39s 28.49% 69.98%    228.39s 28.49%  crypto/md5.block
    48.07s  6.00% 75.98%     48.07s  6.00%  runtime.memmove
    28.91s  3.61% 79.59%     28.91s  3.61%  github.com/minio/highwayhash.updateAVX2
     8.25s  1.03% 80.61%      8.25s  1.03%  runtime.futex
     8.25s  1.03% 81.64%     10.81s  1.35%  runtime.step
     6.99s  0.87% 82.52%     22.35s  2.79%  runtime.pcvalue
     6.67s  0.83% 83.35%     38.90s  4.85%  runtime.mallocgc
     5.77s  0.72% 84.07%     32.61s  4.07%  runtime.gentraceback
     4.84s   0.6% 84.67%     10.49s  1.31%  runtime.lock2
```
2023-07-05 03:16:05 -07:00
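
A sketch of the optimization: hash the payload once and reuse the digest across fan-out targets, amortizing the `crypto/md5.block` cost that dominates the profile above. `fanOutPut` and `put` are illustrative stand-ins.

```
package main

import (
	"crypto/md5"
	"encoding/hex"
	"fmt"
)

// fanOutPut uploads the same payload to many keys; computing the md5 once
// amortizes the dominant CPU cost shown in the profile.
func fanOutPut(keys []string, payload []byte) {
	sum := md5.Sum(payload) // one pass over the data
	etag := hex.EncodeToString(sum[:])
	for _, k := range keys {
		put(k, payload, etag) // each call reuses the precomputed digest
	}
}

func put(key string, payload []byte, etag string) {
	fmt.Printf("PUT %s etag=%s (%d bytes)\n", key, etag, len(payload))
}

func main() {
	fanOutPut([]string{"a", "b", "c"}, []byte("hello"))
}
```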
jiuker
f6b48ed02a fix: create user without policy that is required (#17554)
fixes #17492
2023-07-04 07:39:29 -07:00
Harshavardhana
e37c4efc6e fix: upon DNS refresh() failure use previous values (#17561)
In the case of MinIO, DNS refresh() can safely
reuse the previous values on bare-metal setups,
since bare-metal arrangements rarely change DNS.

This PR simplifies that: we only ever need DNS caching
on bare-metal setups.

- On containerized setups, do not enable DNS
  caching at all, as it may have adverse effects on
  the overall effectiveness of k8s DNS systems.

  k8s DNS systems are dynamic and expect applications
  to avoid managing DNS caching themselves; instead they
  provide cleaner, container-native caching
  implementations that should be used.

- update IsDocker() detection, including the podman runtime

- move to the minio/dnscache fork for a simpler package
2023-07-03 12:30:51 -07:00
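
One common way to detect container runtimes, including podman, is probing well-known marker files; a hedged sketch, not necessarily MinIO's exact IsDocker() logic.

```
package main

import (
	"fmt"
	"os"
)

// isContainerized reports whether we appear to run inside Docker or podman,
// based on the marker files each runtime creates.
func isContainerized() bool {
	for _, marker := range []string{
		"/.dockerenv",        // created by Docker
		"/run/.containerenv", // created by podman
	} {
		if _, err := os.Stat(marker); err == nil {
			return true
		}
	}
	return false
}

func main() {
	fmt.Println("containerized:", isContainerized())
}
```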
Harshavardhana
22f5bc643c fix: honor older scanner settings only if newer has not changed (#17564) 2023-07-03 12:28:36 -07:00
Anis Eleuch
15fd5ce2fa fix: A typo in per pool make/delete bucket errs calculation (#17553) 2023-07-03 09:47:40 -07:00
Harshavardhana
7f782983ca fix: for FTP server driver allow implicit trust of TLS (#17541)
fixes #17535
2023-06-30 08:04:13 -07:00
Aditya Manthramurthy
9d628346eb fix: service account list for root user (#17547)
Fixes https://github.com/minio/minio/issues/17545
2023-06-30 08:02:12 -07:00
Aditya Manthramurthy
bde533a9c7 fix: OpenID config initialization (#17544)
This is due to a regression in the handling of the `enable` key in the
OpenID configuration.
2023-06-29 23:38:26 -07:00
Minio Trusted
2fcb75d86d Update yaml files to latest version RELEASE.2023-06-29T05-12-28Z 2023-06-29 06:37:32 +00:00
Harshavardhana
aae6846413 feat: allow expiration of all versions via ILM Expiration action (#17521)
The following extension allows users to specify immediate purge of
all versions as soon as the latest version of the object has
expired.

```
<LifecycleConfiguration>
    <Rule>
        <ID>ClassADocRule</ID>
        <Filter>
           <Prefix>classA/</Prefix>
        </Filter>
        <Status>Enabled</Status>
        <Expiration>
             <Days>3650</Days>
             <ExpiredObjectAllVersions>true</ExpiredObjectAllVersions>
        </Expiration>
    </Rule>
    ...
```
2023-06-28 22:12:28 -07:00
Harshavardhana
5317a0b755 fix: support LDAP settings properly in ftp/sftp (#17536)
Bonus: this PR adds support for creating
buckets via FTP `mkdir`.

fixes #17526
2023-06-28 13:15:21 -07:00
Harshavardhana
73de721a63 fix: handle copyObjectPart encryption properly (#17530)
- look for requested encryption while compressing
  not just via HTTP Headers, but also via multipart
  metadata

- look for SSE-S3 etag decryption not just via HTTP
  Headers, but also via multipart metadata

fixes #17519
2023-06-28 09:43:50 -07:00
Harshavardhana
d2f5c3621f fix: add additional decommission traces for ILM expired content (#17522)
Current decommission traces were missing for:

- Skipped ILM expired versions
- Skipped single DELETE marked version
- A success or failure in decommissioning DELETE marker
- allow additional info to be shared in DecomStatus() API
2023-06-27 11:59:40 -07:00
Harshavardhana
1818764840 fix: bug in passing Versioned field set for getHealReplicationInfo() (#17498)
Bonus: rejects prefix deletes on object-locked buckets earlier
2023-06-27 09:45:50 -07:00
Harshavardhana
d3e5e607a7 allow site-replication checks to work on non-distributed setups (#17524)
fixes #17523
2023-06-27 09:23:50 -07:00
Shireesh Anjal
c1943ea3af Capture realtime metrics in health report (#17516) 2023-06-27 01:39:18 -07:00
Harshavardhana
2a82c15bf1 update all our deps (#17497) 2023-06-26 15:36:56 -07:00
guangwu
87b6fb37d6 chore: pkg imported more than once (#17444) 2023-06-26 09:21:29 -07:00
Kaan Kabalak
21fbe88e1f Print certain log messages once per error (#17484) 2023-06-24 20:29:13 -07:00
Harshavardhana
1f8b9b4bd5 fix: do not listAndHeal() inline with PutObject() (#17499)
Slow drives can add latency to the overall call,
leading to a large spike in latency.

This can happen when there are other parallel listObjects()
calls to the same drive, in turn causing each other to
effectively serialize.

This change potentially improves performance and makes
PutObject() non-blocking.
2023-06-24 19:31:04 -07:00
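
A sketch of moving heal work off the write path via a bounded queue, in the spirit of this change; names are illustrative, not MinIO's actual code.

```
package main

import (
	"fmt"
	"time"
)

// putObject writes the object and queues healing instead of running it
// inline, so slow drives doing heal listing cannot add latency to the write.
func putObject(healQueue chan<- string, object string) {
	fmt.Println("wrote", object) // stand-in for the actual write
	select {
	case healQueue <- object: // hand off to the background healer
	default: // queue full: skip rather than block the write path
	}
}

func main() {
	healQueue := make(chan string, 16)
	go func() { // background healer
		for obj := range healQueue {
			fmt.Println("healing", obj)
		}
	}()
	putObject(healQueue, "bucket/key")
	time.Sleep(10 * time.Millisecond) // let the healer drain (demo only)
}
```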
Minio Trusted
fcbed41cc3 Update yaml files to latest version RELEASE.2023-06-23T20-26-00Z 2023-06-24 07:25:49 +00:00
Klaus Post
216069d0da Remove 'null' version ID from directory object response (#17495)
Fixes #17494

Regression from #17132
2023-06-23 13:26:00 -07:00
Harshavardhana
eefa047974 fix: keep decommission in a go-routine (#17496)
This was removed by mistake in #17491
2023-06-23 12:29:32 -07:00
Anis Eleuch
d8dad5c9ea s3: Make/Delete buckets to use error quorum per pool (#17467) 2023-06-23 11:48:23 -07:00
Klaus Post
bf8a68879c fix: Time ILM Actions for scanner info (#17493)
ILM actions were not timed; fix that.
2023-06-23 07:48:36 -07:00
Aditya Manthramurthy
f3248a4b37 Redact all secrets from config viewing APIs (#17380)
This change adds a `Secret` property to `HelpKV` to identify secrets
like passwords and auth tokens that should not be revealed by the server
in its configuration fetching APIs. Configuration reporting APIs now do
not return secrets.
2023-06-23 07:45:27 -07:00
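
A minimal sketch of the redaction pattern described, assuming a simplified `helpKV` type in place of MinIO's actual `HelpKV`:

```
package main

import "fmt"

// helpKV is HelpKV-style metadata marking which keys are secrets.
type helpKV struct {
	Key    string
	Secret bool
}

// redact replaces secret values before a config is returned to a client.
func redact(cfg map[string]string, help []helpKV) map[string]string {
	secrets := make(map[string]bool, len(help))
	for _, h := range help {
		secrets[h.Key] = h.Secret
	}
	out := make(map[string]string, len(cfg))
	for k, v := range cfg {
		if secrets[k] {
			out[k] = "*REDACTED*"
			continue
		}
		out[k] = v
	}
	return out
}

func main() {
	help := []helpKV{{Key: "endpoint"}, {Key: "auth_token", Secret: true}}
	cfg := map[string]string{"endpoint": "https://kms:7373", "auth_token": "s3cr3t"}
	fmt.Println(redact(cfg, help))
}
```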
Harshavardhana
d315d012a4 decom: during multiple pool decom preserve current pool status (#17491)
Removal of completed pools must retain the pool status of other
pools that are draining, to resume any remaining draining operations.
2023-06-23 07:44:18 -07:00
Harshavardhana
bd9bf3693f lambda: negative duration for presigned URL default to 1H (#17489)
Fixes a bug where users created with Expiration set to
timeSentinel were not rejected while generating the
presigned URL for lambda processing.
2023-06-23 00:17:24 -07:00
Klaus Post
15daa2e74a tooling: Add xlmeta --combine switch that will combine inline data (#17488)
Will combine or write partial data of each version found in the inspect data.

Example:

```
> xl-meta -export -combine inspect-data.1228fb52.zip

(... metadata json...)
}
Attempting to combine version "994f1113-da94-4be1-8551-9dbc54b204bc".
Read shard 1 Data shards 9 Parity 4 (994f1113-da94-4be1-8551-9dbc54b204bc/shard-01-of-13.data)
Read shard 2 Data shards 9 Parity 4 (994f1113-da94-4be1-8551-9dbc54b204bc/shard-02-of-13.data)
Read shard 3 Data shards 9 Parity 4 (994f1113-da94-4be1-8551-9dbc54b204bc/shard-03-of-13.data)
Read shard 4 Data shards 9 Parity 4 (994f1113-da94-4be1-8551-9dbc54b204bc/shard-04-of-13.data)
Read shard 6 Data shards 9 Parity 4 (994f1113-da94-4be1-8551-9dbc54b204bc/shard-06-of-13.data)
Read shard 7 Data shards 9 Parity 4 (994f1113-da94-4be1-8551-9dbc54b204bc/shard-07-of-13.data)
Read shard 8 Data shards 8 Parity 5 (994f1113-da94-4be1-8551-9dbc54b204bc/shard-08-of-13.data)
Read shard 9 Data shards 8 Parity 5 (994f1113-da94-4be1-8551-9dbc54b204bc/shard-09-of-13.data)
Read shard 10 Data shards 8 Parity 5 (994f1113-da94-4be1-8551-9dbc54b204bc/shard-10-of-13.data)
Read shard 11 Data shards 8 Parity 5 (994f1113-da94-4be1-8551-9dbc54b204bc/shard-11-of-13.data)
Read shard 13 Data shards 8 Parity 5 (994f1113-da94-4be1-8551-9dbc54b204bc/shard-13-of-13.data)
Attempting to reconstruct using parity sets:
* Setup: Data shards: 9 - Parity blocks: 6
Have 6 complete remapped data shards and 6 complete parity shards. Could NOT reconstruct: too few shards given
* Setup: Data shards: 8 - Parity blocks: 5
Have 5 complete remapped data shards and 5 complete parity shards. Could reconstruct completely
0 bytes missing. Truncating 0 from the end.
Wrote output to 994f1113-da94-4be1-8551-9dbc54b204bc.complete
```

So far only inline data, but no real reason that external data can't also be included with some handling of blocks.

Supports only unencrypted data.
2023-06-22 12:41:24 -07:00
Harshavardhana
74759b05a5 make sure to set relevant config entries correctly (#17485)
Bonus: also allow skipping keys properly.
2023-06-22 10:04:02 -07:00
Aditya Manthramurthy
82ce78a17c Fix locking in policy attach API (#17426)
For policy attach/detach API to work correctly the server should hold a
lock before reading existing policy mapping and until after writing the
updated policy mapping. This is fixed in this change.

A site replication bug, where LDAP policy attach/detach were not
correctly propagated is also fixed in this change.

Bonus: the server now responds with the actual (or net)
changes performed in the attach/detach API call. For example, if a user
already has policy A applied, and a call to attach policies A and B is
performed, the server will respond that B was attached successfully.
2023-06-21 22:44:50 -07:00
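
A sketch of the locked read-modify-write plus net-change reporting this change describes; the types are illustrative, not MinIO's IAM code.

```
package main

import (
	"fmt"
	"sync"
)

var (
	policyMu sync.Mutex
	attached = map[string]map[string]bool{} // user -> set of policies
)

// attachPolicies holds the lock across the read-modify-write so concurrent
// attach/detach calls cannot lose updates, and reports the net changes.
func attachPolicies(user string, policies ...string) (added []string) {
	policyMu.Lock()
	defer policyMu.Unlock()
	set := attached[user]
	if set == nil {
		set = map[string]bool{}
		attached[user] = set
	}
	for _, p := range policies {
		if !set[p] { // already-attached policies are not reported
			set[p] = true
			added = append(added, p)
		}
	}
	return added
}

func main() {
	attachPolicies("alice", "A")
	fmt.Println(attachPolicies("alice", "A", "B")) // [B]
}
```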
Harshavardhana
9af6c6ceef under rebalance look for expired versions v/s remaining versions (#17482)
A continuation of PR #17479: rebalance behavior must
also match the decommission behavior.

Fixes a bug where rebalance would ignore remaining object
versions after one of the versions returned "ObjectNotFound".
2023-06-21 13:23:20 -07:00
Harshavardhana
021372cc4c update helm release v5.0.11
Signed-off-by: Harshavardhana <harsha@minio.io>
2023-06-21 12:29:09 -07:00
Praveen raj Mani
b94ab07c2f Honor global root CAs for kafka audit tls (#17481)
2023-06-21 10:50:40 -07:00
Harshavardhana
7605d07bb2 add support for bucket level request count per API (#17468)
New metrics added to calculate API request count
per bucket, per API.  Captures errors, including
4xx, 5xx HTTP status codes separately.
2023-06-21 09:41:59 -07:00
Harshavardhana
ccc5801112 always look for expired versions v/s remaining versions (#17479)
While decommissioning it can happen that the non-current
versions are all expired but there is a DEL marker as the
latest version.

Such objects should not be decommissioned; instead,
calculate the remaining versions, and if only one version
remains and it is a DEL marker, do not schedule the
object for decommissioning.
2023-06-21 08:49:28 -07:00
Praveen raj Mani
7c72b25ef0 Add an option to make bucket notifications synchronous (#17406)
With the current asynchronous behaviour of sending notification events
to the targets, we can't provide guaranteed delivery, as the systems
might restart.

For such event-driven use-cases, we can provide an option to enable
synchronous events where the APIs wait until the event is successfully
sent or persisted.

This commit adds 'MINIO_API_SYNC_EVENTS' env which when set to 'on'
will enable sending/persisting events to targets synchronously.
2023-06-20 17:38:59 -07:00
Harshavardhana
02c2ec3027 skip onlineDisks with parity mismatch (#17478) 2023-06-20 13:18:24 -07:00
Harshavardhana
65c31fab12 fix: do not crash rebalance code instead set the object layer (#17465)
fixes #17421
2023-06-20 09:28:23 -07:00
jiuker
b6b68be052 fix: replication check for duplicate endpoints detection with wrong route (#17474) 2023-06-20 09:27:54 -07:00
Harshavardhana
15911c85f6 safely ignore out of band deletions while decommissioning (#17473) 2023-06-20 08:31:42 -07:00
Aditya Manthramurthy
5a1612fe32 Bump up madmin-go and pkg deps (#17469) 2023-06-19 17:53:08 -07:00
Minio Trusted
bbb7ae156c Update yaml files to latest version RELEASE.2023-06-19T19-52-50Z 2023-06-19 22:53:03 +00:00
Harshavardhana
f9b8d1c699 fix: sio-error test to fail if commands fail (#17466) 2023-06-19 12:52:50 -07:00
Harshavardhana
1443b5927a allow quorum fileInfo to pick same parityBlocks (#17454)
Bonus: allow replication to proceed for 503 errors such as
with error code SlowDownRead
2023-06-18 18:20:15 -07:00
Anis Eleuch
35ef35b5c1 fix a integer divide by zero crash during rebalance (#17455)
A state is updated with a delete marker, which does not have parity or
data blocks defined, which can cause an integer divide-by-zero panic.

This commit avoids the panic.
2023-06-18 11:14:53 -07:00
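
A hedged sketch of the guard such a fix implies; `fileInfo` and `shardSize` are simplified stand-ins for the erasure-coding metadata involved.

```
package main

import "fmt"

type fileInfo struct {
	DataBlocks, ParityBlocks int
	DeleteMarker             bool
}

// shardSize derives a per-shard size; delete markers carry no erasure layout,
// so dividing by DataBlocks without this guard panics with divide-by-zero.
func shardSize(fi fileInfo, objSize int) int {
	if fi.DeleteMarker || fi.DataBlocks == 0 {
		return 0 // nothing to rebalance for a delete marker
	}
	return (objSize + fi.DataBlocks - 1) / fi.DataBlocks
}

func main() {
	fmt.Println(shardSize(fileInfo{DeleteMarker: true}, 1024)) // 0, no panic
	fmt.Println(shardSize(fileInfo{DataBlocks: 4, ParityBlocks: 2}, 1024))
}
```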
Harshavardhana
6806537eb3 event args list for fanOut notification must be sized same (#17450)
Without this, the fan-out API can crash if the client cancels
the ongoing request.
2023-06-18 07:09:20 -07:00
Harshavardhana
64de61d15d fallback on etags if they match when mtime is not same (#17424)
on "unversioned" buckets there are situations
when successive concurrent I/O can lead to
an inconsistent state() with mtime while the
etag might be the same for the object on disk.

in such a scenario it is possible for us to
allow reading of the object since etag matches
and if etag matches we are guaranteed that we
have enough copies the object will be readable
and same.

This PR allows fallback in such scenarios.
2023-06-17 19:18:20 -07:00
Harshavardhana
22b7c8cd8a upgrade pkg and dperf to latest packages (#17448)
- dperf improvements in benchmarking read and write tests
- upgrade mimedb to use latest content-types
2023-06-17 07:31:36 -07:00
Poorna
c4d0c49a5f ensure metadata updates go to same pool where version exists (#17451)
This PR also returns the replication status in
proxy calls and defers the replication attempt if
HEAD on the object version returned an error other
than NoSuchKey.
2023-06-17 07:30:53 -07:00
Minio Trusted
142a5b0dcd Update yaml files to latest version RELEASE.2023-06-16T02-41-06Z 2023-06-16 05:36:19 +00:00
mungo312
25db1e4eca helm: fix permission denied errors in post-job when running as non-root (#17175)
Fixes #17174
2023-06-15 19:41:06 -07:00
Harshavardhana
47a48b6832 do not save any metadata from the headers in tar extract (#17436)
Only preserve the storage-class from the incoming
request; everything else must be deduced.
2023-06-15 17:44:07 -07:00
Harshavardhana
e98309eb75 update minio-go/v7 v7.0.57 (#17439) 2023-06-15 16:29:19 -07:00
Alex
87051872a7 Update console to v0.30.0 (#17438) 2023-06-15 15:30:56 -07:00
Anis Eleuch
a2aed12dcd decom: Fix a typo in routing decommissioning requests (#17435)
A specific node should perform the decommissioning task; however,
routing the start-decommissioning request to that node was not working properly.

Co-authored-by: Anis Elleuch <anis@min.io>
2023-06-15 14:54:29 -07:00
Anis Eleuch
d8e6e76e89 site-repl: Better error msg when setting sync in a local cluster (#17407) 2023-06-15 12:44:22 -07:00
Anis Eleuch
8c33fdf5f4 s3-check-md5: Add --modified-since flag to skip some objects (#17410) 2023-06-15 12:44:09 -07:00
Harshavardhana
ad4e511026 do not save plain-text ETag when encryption is requested (#17427)
Fixes an issue where bucket replication could cause
ETags for replicated SSE-S3 single-part PUT objects
to fail, as we would attempt decryption during listing
or stat() operations.
2023-06-15 12:43:26 -07:00
Klaus Post
4a562d6732 fix: fanout error response - error must be string for marshaling (#17433)
Uses https://github.com/minio/minio-go/pull/1839
2023-06-15 09:21:53 -07:00
Poorna
a9082e4f79 site replication: cancel ongoing op properly (#17428) 2023-06-15 08:05:08 -07:00
dependabot[bot]
6278679ffd build(deps): bump github.com/lestrrat-go/jwx from 1.2.25 to 1.2.26 (#17425)
Bumps [github.com/lestrrat-go/jwx](https://github.com/lestrrat-go/jwx) from 1.2.25 to 1.2.26.
- [Release notes](https://github.com/lestrrat-go/jwx/releases)
- [Changelog](https://github.com/lestrrat-go/jwx/blob/v1.2.26/Changes)
- [Commits](https://github.com/lestrrat-go/jwx/compare/v1.2.25...v1.2.26)

---
updated-dependencies:
- dependency-name: github.com/lestrrat-go/jwx
  dependency-type: indirect
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-06-14 12:17:22 -07:00
jiuker
0474791cf8 fix: set time format right (#17402) 2023-06-14 07:49:13 -07:00
Harshavardhana
69f819e199 move to pkg@v1.7.3 to support aws:kms string in policy conditions (#17414) 2023-06-14 07:42:03 -07:00
Harshavardhana
f32efd5429 more compliance related fixes (#17408)
- lifecycle must return InvalidArgument for rule errors
- do not return `null` versionId in HTTP header
- reject mixed SSE uploads with correct error message
2023-06-13 13:52:33 -07:00
jiuker
22c247a988 fix: preserve multiple values for query params (#17392) 2023-06-13 11:38:46 -07:00
Shubhendu
35d71682f6 fix: do not allow removal of inbuilt policies unless they are already persisted (#17264)
Don't allow removal of inbuilt policies such as `readwrite`, `readonly`, `writeonly` and `diagnostics`.
2023-06-13 11:06:17 -07:00
drivebyer
3d6b88a60e fix: syscall to record time on non-linux (#17383) 2023-06-13 11:04:50 -07:00
Harshavardhana
26a0803388 various compliance related fixes (#17401)
- getObjectTagging to be allowed for anonymous policies
- return correct errors for invalid retention period
- return sorted list of tags for an object
- putObjectTagging must return 200 OK, not 204
- return 409 ErrObjectLockConfigurationNotAllowed for existing buckets
2023-06-12 13:22:07 -07:00
Anis Eleuch
ae95384dd8 Revert "heal: Update object parity with the latest configured SC (#17187)" (#17404) 2023-06-12 11:54:51 -07:00
Anis Eleuch
0f0dcf0c5e tar: Avoid storing snowball extraction header in extract objects (#17389) 2023-06-12 09:42:06 -07:00
Klaus Post
6f2406b0b6 fix: protect ReplicationStats against concurrent map iteration and write crash (#17403) 2023-06-12 09:17:11 -07:00
Anis Eleuch
bb24346e04 listen: Only error out if not able to bind any interface (#17353) 2023-06-12 09:09:28 -07:00
Harshavardhana
be45ffd8a4 return 204 status code for DeleteBucketTagging (#17400) 2023-06-11 20:49:02 -07:00
Poorna Krishnamoorthy
f986b0c493 replication: perform bucket resync in parallel (#16707)
Default the number of parallel resync operations for a bucket to 10
to speed up resync.
2023-06-11 16:09:55 -07:00
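
A minimal worker-pool sketch of parallel resync with a default of 10 workers; names are illustrative, not MinIO's resync implementation.

```
package main

import (
	"fmt"
	"sync"
)

// resyncObjects replicates objects with a fixed number of parallel workers,
// the same shape as defaulting bucket-resync parallelism to 10.
func resyncObjects(objects []string, workers int) {
	jobs := make(chan string)
	var wg sync.WaitGroup
	for i := 0; i < workers; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			for obj := range jobs {
				fmt.Println("resyncing", obj) // stand-in for the replication call
			}
		}()
	}
	for _, o := range objects {
		jobs <- o
	}
	close(jobs)
	wg.Wait()
}

func main() {
	resyncObjects([]string{"a", "b", "c", "d"}, 10)
}
```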
Harshavardhana
c9e87f0548 service accounts are allowed to have no expiration (#17397) 2023-06-11 10:34:59 -07:00
Harshavardhana
43468f4d47 return InvalidRequest when no parts are provided (#17395) 2023-06-10 21:59:51 -07:00
Minio Trusted
91987d6f7a Update yaml files to latest version RELEASE.2023-06-09T07-32-12Z 2023-06-10 05:38:08 +00:00
Harshavardhana
6b7c98bd0f make sure we pick up the right Go version in vulncheck (#17388) 2023-06-09 00:32:12 -07:00
Harshavardhana
b829e80ecb do not disable root for invalid API config values (#17386) 2023-06-08 15:50:06 -07:00
Klaus Post
6e38d0f3ab Add more bootstrap info in debug mode (#17362) 2023-06-08 08:39:47 -07:00
Anis Eleuch
38342b1df5 decom: Parallelize decommissining (#17364) 2023-06-07 14:27:51 -07:00
Harshavardhana
dbd4c2425e fix: kafka broker pings must not be greater than 1sec (#17376) 2023-06-07 11:47:00 -07:00
Harshavardhana
49ce85ee3d allow prefix/markers to have '/' in the beginning to throw an empty (#17373) 2023-06-07 11:25:26 -07:00
Anis Eleuch
eba378e4a1 vrf: Fix testing for loopback coming from the address (#17372) 2023-06-07 09:53:05 -07:00
Harshavardhana
442c50ff00 remove delimiter if not set by client, also fetchOwner is optional (#17366) 2023-06-06 21:31:47 -07:00
Harshavardhana
d1448adbda use slices package and remove some helpers (#17342) 2023-06-06 10:12:52 -07:00
jiuker
5a21b1f353 fix: Delete dir failed when .DS_Store in it (#17352) 2023-06-06 10:12:06 -07:00
Bartosz Marczyński
123a2fb3a8 helm: add /tmp mount to post jobs (#17301) 2023-06-06 06:50:33 -07:00
Harshavardhana
2f9e2147f5 allow quota enforcement to rely on older values (#17351)
PUT calls cannot afford large latency build-ups due
to a contentious usage.json, or worse, failing with
an unexpected error; this can happen when the file is
concurrently being updated via the scanner or is being
healed during a disk-replacement heal.

However, while these updates are fairly quick in theory,
stressed clusters can quickly show visible latency; this
can add up, leading to invalid errors returned during PUT.

It is perhaps okay for us to relax this error-return
requirement and instead make sure that we log that we are
proceeding to take in requests while quota enforcement is
using an older value. These things will reconcile themselves
eventually, with the scanner making sure to overwrite usage.json.

Bonus: make sure that storage-rest-client sets ExpectTimeouts to
'true', so that a DiskInfo() call with a context timeout does
not prematurely disconnect the servers, leading to a longer
healthCheck back-off routine. This can easily pile up while also
causing active callers to disconnect, leading to quorum loss.

DiskInfo is actively used in the PUT and Multipart call paths for
upgrading parity when disks are down; it in turn shouldn't cause
more disks to go down.
2023-06-05 16:56:35 -07:00
Harshavardhana
75c6fc4f02 only allow decryption of etag for only sse-s3 (#17335) 2023-06-05 13:08:51 -07:00
Anis Eleuch
f9e07d6143 goroutines parser: Add --less flag to filter goroutines (#17339) 2023-06-04 14:20:46 -07:00
Anis Eleuch
1436858347 log: Add a log when saving pool.bin fails (#17338)
Co-authored-by: Anis Elleuch <anis@min.io>
2023-06-04 14:20:21 -07:00
jiuker
8030e12ba5 fix: expMovingAvg is too small when startTime is zero (#17346) 2023-06-03 13:41:51 -07:00
Minio Trusted
a485b923bf Update yaml files to latest version RELEASE.2023-06-02T23-17-26Z 2023-06-03 05:07:30 +00:00
Kaan Kabalak
0649aca219 Add expiration to ListServiceAccounts function (#17249) 2023-06-02 16:17:26 -07:00
Harshavardhana
b210ea79bc do not save MTime in newMultipartUpload() to avoid side-affects (#17340) 2023-06-02 14:38:09 -07:00
Poorna
68f80b5fe7 replication: ignore retention mode validation for replica (#17332) 2023-06-01 18:53:12 -07:00
Poorna
e95825a42e replication: use latest object info for metrics update (#17333) 2023-06-01 18:52:55 -07:00
Anis Eleuch
931712dc46 fix: converting 'server closed idle connection' to errDiskNotFound (#17330) 2023-06-01 15:40:28 -07:00
Harshavardhana
54e544e03e allow lookup()/head() operations on Veeam SOS objects (#17331) 2023-06-01 15:26:26 -07:00
Poorna
f86b9abf32 site removal: update site config and reload targets after update (#17327) 2023-06-01 10:19:56 -07:00
Anis Eleuch
9ef7eda33a heal: Avoid objects created after the heal disk start time (#17323) 2023-05-31 13:10:45 -07:00
Klaus Post
c9e26401fa Fix GetObject encrypted etag (#17302)
Co-authored-by: Harshavardhana <harsha@minio.io>
2023-05-31 13:10:25 -07:00
Harshavardhana
e53f49e9a9 add additional tools that help in debugging (#17325) 2023-05-31 13:06:08 -07:00
jiuker
14f6ac9222 fix: fail large content in DeleteMultipleObjects() early (#17321) 2023-05-31 10:58:14 -07:00
drivebyer
b8474295af fix: time() returned function not being called as expected in globalSync() (#17319) 2023-05-31 09:40:23 -07:00
Shireesh Anjal
817e85a3e0 fix: proxy not set on subnet logger webhook sometimes (#17320) 2023-05-31 08:09:09 -07:00
jiuker
fb5ce3b87a record err time when remote node is offline (#17262) 2023-05-30 10:07:26 -07:00
Klaus Post
6fe028b7c5 Revert s3 select simdjson reuse (#17310) 2023-05-30 10:02:22 -07:00
Harshavardhana
1cd7f1e38d fix: cleanup empty multipart folders upon stale upload cleanup (#17312) 2023-05-30 09:56:50 -07:00
jiuker
043fd8b536 fix: on windows use FindClose close handler (#17306) 2023-05-30 02:15:57 -07:00
Klaus Post
669acbb032 Fix Test LDAP for automatic site replication (#17305) 2023-05-29 08:13:58 -07:00
Minio Trusted
086d8f036e Update yaml files to latest version RELEASE.2023-05-27T05-56-19Z 2023-05-28 06:53:17 +00:00
Harshavardhana
394690dcfb check for up to 50%+ data disks to be offline (#17294) 2023-05-26 22:56:19 -07:00
Harshavardhana
398bca92ff update helm to v5.0.10
Signed-off-by: Harshavardhana <harsha@minio.io>
2023-05-26 17:05:49 -07:00
Harshavardhana
fb328b1a64 upgrade all dependencies (#17276) 2023-05-26 16:31:28 -07:00
Anis Eleuch
563f667e30 reorder-disks: Fix UID to UUID and add better error messages (#17292) 2023-05-26 15:03:31 -07:00
Klaus Post
c839b64f6a fix: compressed+encrypted block overhead (#17289) 2023-05-26 10:57:07 -07:00
Anis Eleuch
6425fec366 s3: Add x-minio-error-code header for S3 HEAD requests (#17283) 2023-05-26 10:13:18 -07:00
Harshavardhana
d5059840ef fix: for delete marked objects choose appropriate parity (#17287) 2023-05-26 09:57:44 -07:00
Aditya Manthramurthy
65cba212e8 Remove older policy attach behavior for LDAP (#17240) 2023-05-26 06:31:24 -07:00
Aditya Manthramurthy
7a69c9c75a Update builtin policy entities command (#17241) 2023-05-25 22:31:05 -07:00
Harshavardhana
4a425cbac1 cleanup scripts and apply shfmt (#17284) 2023-05-25 22:07:25 -07:00
Harshavardhana
615169c4ec update docker ubi image to 8.8 (#17281) 2023-05-25 18:19:09 -07:00
Harshavardhana
5cd9dcb844 rebalance 'null' delete markers properly (#17282) 2023-05-25 16:12:53 -07:00
Anis Eleuch
54c5c88fe6 Add number of offline disks in quorum errors (#16822) 2023-05-25 09:39:06 -07:00
jiuker
443250d135 fix: for Target isActive use net.Dial instead (#17251) 2023-05-25 09:24:11 -07:00
Harshavardhana
9b5829c16e avoid decommissioning DEL markers with single versions (#17274) 2023-05-25 09:18:49 -07:00
jiuker
d749aaab69 fix: ignore existing target status when adding new targets (#17250) 2023-05-24 22:57:37 -07:00
Krishnan Parthasarathi
62df731006 Add updatedAt for GetBucketLifecycleConfig (#17271) 2023-05-24 22:52:39 -07:00
Harshavardhana
d0a0eb9738 support fan-out objects via PostUpload() (#17233) 2023-05-24 22:51:07 -07:00
Klaus Post
66156b8230 Stricter partNumber checks (#17270)
Fixes #17269
2023-05-24 08:00:47 -07:00
Klaus Post
5677f73794 Add PostObject Checksum (#17244) 2023-05-23 07:58:33 -07:00
Harshavardhana
ef54200db7 offline drives more than 50% of total drives return error (#17252) 2023-05-23 07:57:57 -07:00
Harshavardhana
7875efbf61 update minio/dperf to latest release v0.4.4 (#17267) 2023-05-22 19:17:17 -07:00
Krishnan Parthasarathi
3e128c116e Add lifecycle event source to audit log tags (#17248) 2023-05-22 15:28:56 -07:00
Krishnan Parthasarathi
55a3310446 logger-http: Don't retry after a successful send (#17266) 2023-05-22 14:53:18 -07:00
Harshavardhana
fc03be7891 simplify bucket metadata lookups for versioning/object locking (#17253) 2023-05-22 12:05:14 -07:00
jiuker
b1b00a5055 fix: Avoid Income globalStats twice upon error (#17263) 2023-05-22 07:42:27 -07:00
Poorna
2920b0fc6d allow specification of path/virtual style bucket lookup in batch replication (#17201) 2023-05-21 15:16:31 -07:00
Anis Eleuch
a30a55f3b1 Add object parity in listing V2M and listing versions M (#17238) 2023-05-19 09:42:45 -07:00
Praveen raj Mani
ecfb18b26a Freeze the s3 APIs until the notification sub-system initializes completely (#17182) 2023-05-19 08:44:48 -07:00
jiuker
41fa8fa2d2 fix: increment counter when entry be skipped (#17237) 2023-05-19 08:36:52 -07:00
jiuker
e94e6adf91 fix: return proper error if OIDC Discoverydoc fails to respond (#17242) 2023-05-19 02:13:33 -07:00
jiuker
7d433f16c4 before return make globalScannerMetrics.incTime call (#17230) 2023-05-18 13:45:05 -07:00
Klaus Post
b06d7bf834 fix: leaking connections in JSON SQL with limited return (#17239) 2023-05-18 11:26:46 -07:00
Minio Trusted
b784e458cb Update yaml files to latest version RELEASE.2023-05-18T00-05-36Z 2023-05-18 17:56:05 +00:00
Aditya Manthramurthy
9d96b18df0 Add "name" and "description" params to service acc (#17172) 2023-05-17 17:05:36 -07:00
drivebyer
ad2ab6eb3e fix: Give accurate cap to slice (#17224) 2023-05-17 15:14:09 -07:00
jiuker
f037c9b286 Protect the read index from going out of bounds (#17226) 2023-05-17 12:09:41 -07:00
Praveen raj Mani
85912985b6 Check for only network errors in audit webhook for reachability (#17228) 2023-05-17 11:10:33 -07:00
Harshavardhana
876f51a708 remove minio-js from mint tests until next minio-js release 2023-05-17 09:09:54 -07:00
Harshavardhana
f7d29b4a53 cleanup of multipart per disk must cleanup itself only (#17223) 2023-05-17 01:45:58 -07:00
Harshavardhana
06557fe8be allow decommissioned pools to be removed while others are finishing (#17221) 2023-05-16 16:00:57 -07:00
Poorna
2131046427 replication: fix audit log reporting (#17222) 2023-05-16 15:35:08 -07:00
Klaus Post
aaf1abc993 simplify HardLimitReader by using LimitReader for internal usage (#17218) 2023-05-16 13:14:37 -07:00
jiuker
413549bcf5 fix: loadStatsFromDisk() should return nil for configNotFound (#17217) 2023-05-16 12:23:38 -07:00
jiuker
9a799065b3 fix: make slice cap of right size (#17192) 2023-05-16 08:10:07 -07:00
jiuker
fd2959fa3a fix: workers.New err must be returned (#17208) 2023-05-16 08:08:00 -07:00
Anis Eleuch
07927e032a Add a script to filter goroutines waiting for a given number of minutes (#17204) 2023-05-16 08:05:49 -07:00
jiuker
15bec32bb4 fix: tier handlers must write error only once (#17205) 2023-05-15 23:56:52 -07:00
Anis Eleuch
e2b7a08c10 heal: Update object parity with the latest configured SC (#17187) 2023-05-15 21:32:13 -07:00
Harshavardhana
ef2fc0f99e fix: reduce using memory and temporary files. (#17206) 2023-05-15 14:08:54 -07:00
Harshavardhana
d063596430 fix: veeam SOS API 'system.xml' strings (#17202) 2023-05-15 12:06:42 -07:00
jiuker
bd2dc6c670 fix: in healing tracker printTo when err (#17207) 2023-05-15 10:14:48 -07:00
Anis Eleuch
684399433b lock: Retry locking with an increasing random interval (#17200) 2023-05-13 08:42:21 -07:00
Harshavardhana
b62791617c fix: notify systemd as soon as we wait on the OS signal (#17199) 2023-05-12 16:42:17 -07:00
Poorna
e07c2ab868 Use hash.NewLimitReader for internal multipart calls (#17191) 2023-05-12 11:19:08 -07:00
jiuker
203755793c fix: in printEndpointError count error once per init() (#17193) 2023-05-12 10:41:54 -07:00
Anis Eleuch
883c98e26f fix: remove objects when there are skipped versions due to ILM in decom (#17198) 2023-05-12 10:37:38 -07:00
Harshavardhana
f5a20a5d06 allow nodes offline in k8s setups when expanding pools (#17183) 2023-05-11 17:41:33 -07:00
Poorna
ef7177ebbd disallow bucket replication setup with site replication (#17189) 2023-05-11 15:48:40 -07:00
Harshavardhana
3637aad36e do not count ILM expired objects and other skipped objects (#17184) 2023-05-11 13:35:16 -07:00
Aditya Manthramurthy
77db9686fb Update console to v0.27.0 (#17188) 2023-05-11 12:18:17 -07:00
Shireesh Anjal
c326e5a34e Add metrics for webhook endpoint stats (#17179) 2023-05-11 11:24:37 -07:00
jiuker
c23c982593 xmlDecoder err use ErrMalformedXML when PutBucketACLHandler (#17185) 2023-05-11 11:11:15 -07:00
Shireesh Anjal
a3d666356c fix: error in capturing XFS error config in health report (#17176) 2023-05-10 15:20:48 -07:00
jiuker
3cdbc2f414 add validationErr to validateConfig When DeleteIdentityProviderCfg (#17173) 2023-05-10 09:37:30 -07:00
Harshavardhana
b92cdea578 fix: start using pkg/workers to spawn parallel workers (#17170) 2023-05-09 16:37:31 -07:00
jiuker
5e629a99af fix: for profiling duration parsing error reply use ErrInvalidRequest (#17169) 2023-05-09 14:27:49 -07:00
Klaus Post
99c4ffa34f fix: avoid audit log race protection deadlocks (#17168) 2023-05-09 08:11:32 -07:00
Harshavardhana
a7f266c907 allow JWT parsing on large session policy based tokens (#17167) 2023-05-09 00:53:08 -07:00
Praveen raj Mani
57acacd5a7 Support persistent queue store for loggers (#17121) 2023-05-08 21:20:31 -07:00
Han Cen
42fb3cd95e helm: fix pod annotations indentation (#17130) 2023-05-08 07:59:56 -07:00
mstein11
855ed642c3 helm: allow postjob to run without user 1000 (#17160) 2023-05-08 07:59:25 -07:00
jiuker
629503ff73 add Err to BucketExists when NoSuchBucket (#17155) 2023-05-08 07:51:59 -07:00
jiuker
e3a070e3de put *msgp.Reader back to pool (#17156) 2023-05-08 07:51:39 -07:00
Anton Lindholm
5b364bca1f helm-chart: Use minio service account for post-deploy job if available (#17077) 2023-05-06 23:12:56 -07:00
Poorna
c5c1426262 Validate if replication config being added is self referential (#17142) 2023-05-06 13:35:43 -07:00
Denis Krivenko
be18d435a2 helm: declare missing properties in values.yaml (#17153) 2023-05-06 13:34:58 -07:00
mstein11
7eea6cdb12 add etc-path to post-job.yaml in helm chart (#17148) 2023-05-06 13:34:38 -07:00
Harshavardhana
824c55b3a4 fix: few typos and wordings in minio-limits.md 2023-05-05 20:04:52 -07:00
Klaus Post
76913a9fd5 Signed trailers for signature v4 (#16484) 2023-05-05 19:53:12 -07:00
Minio Trusted
2f44dac14f Update yaml files to latest version RELEASE.2023-05-04T21-44-30Z 2023-05-05 06:21:31 +00:00
Harshavardhana
5569acd95c disallow EC:0 if not set during server startup (#17141) 2023-05-04 14:44:30 -07:00
Harshavardhana
1d0211d395 allow deletes on directory objects to perform permanent deletes (#17132) 2023-05-04 14:43:52 -07:00
Anis Eleuch
06cd0a636e Avoid calling KES Status when peers ping each other (#17140) 2023-05-04 11:28:33 -07:00
Klaus Post
7f7b489a3d snowball: use latest time when mtime is missing (#17133) 2023-05-04 07:29:33 -07:00
Klaus Post
bb6f4d7633 Remove redundant checkFormatJSON logging (#17134) 2023-05-04 07:28:37 -07:00
Alex
6e24dff26a Added MINIO_BROWSER_LOGIN_ANIMATION env support for WebUI console (#17123)
Signed-off-by: Benjamin Perez <benjamin@bexsoft.net>
2023-05-03 15:32:50 -07:00
Harshavardhana
e372e4e592 add nodeName to the log while taking drive offline (#17124) 2023-05-03 15:05:45 -07:00
Harshavardhana
9571b0825e add configurable VRF interface and user-timeout (#17108) 2023-05-03 14:12:25 -07:00
Poorna
90e2cc3d4c Add audit logging of site replication multipart proxying (#17122) 2023-05-03 11:19:45 -07:00
Alex
0c0820caef Updated Console version to v0.26.4 (#17119)
Signed-off-by: Benjamin Perez <benjamin@bexsoft.net>
2023-05-03 11:00:23 -07:00
Harshavardhana
9112ca4e29 change ttfb_distribution metrics to histogramMetric (#17115) 2023-05-03 07:31:00 -07:00
Dylan Piergies
8203cb9990 helm: apply templating to Ingress host and environment variable values (#17073) 2023-05-02 23:27:17 -07:00
Harshavardhana
d5bce978a8 upgrade helm to v5.0.9
Signed-off-by: Harshavardhana <harsha@minio.io>
2023-05-02 23:23:26 -07:00
Poorna
ec84bad882 batch replication now supports arbitrary S3 targets (#17113) 2023-05-02 22:52:35 -07:00
Harshavardhana
b53376a3a4 change directory objects to never create new versions (#17109) 2023-05-02 16:09:33 -07:00
Krishnan Parthasarathi
0ec722bc54 Add tags to NewerNoncurrentVersions audit event (#17110) 2023-05-02 12:56:33 -07:00
Anis Eleuch
4640b13c66 Use exponential backoff algo for internode reconnections (#17052) 2023-05-02 12:35:52 -07:00
Praveen raj Mani
1704abaf6b fix: store notification events immediately for persistent queues (#17112) 2023-05-02 07:53:13 -07:00
WGH
ab34f0065c Support systemd notify protocol (#17062) 2023-05-01 23:15:08 -07:00
Klaus Post
e8c0a50862 optimization use small blocks up to 64KB (#17107) 2023-05-01 09:47:49 -07:00
Denis Krivenko
b963f69f34 helm: align chart properties with naming convention (#17065) 2023-04-29 23:59:40 -07:00
Harshavardhana
02d8f3cdc8 fix: remove active healing on .minio.sys/ during startup (#17072) 2023-04-29 02:05:28 -07:00
Harshavardhana
7ae69accc0 allow root user to be disabled via config settings (#17089) 2023-04-28 12:24:14 -07:00
Minio Trusted
701b89f377 Update yaml files to latest version RELEASE.2023-04-28T18-11-17Z 2023-04-28 18:48:25 +00:00
Anis Eleuch
46d45a6923 grafana: Add TCP dial errors panel (#17101) 2023-04-28 11:11:17 -07:00
Klaus Post
7fad0c8b41 Remove checksums from HTTP range request, add part checksums (#17105) 2023-04-28 08:26:32 -07:00
jiuker
6e27264c6b update cleanupRoutine comment (#17102) 2023-04-28 01:11:51 -07:00
Anis Eleuch
5c83c9724f audit: Add request path and host to audit event (#17099) 2023-04-27 22:18:24 -07:00
Anis Eleuch
d5aff735be info: Add drives per set and sets count per pool information (#17100) 2023-04-27 15:24:03 -07:00
Poorna
98c26df53e fix: allow past retention headers to be copied in batch replication (#17095) 2023-04-27 13:43:18 -07:00
Anis Eleuch
2448a9e047 grafana: Remove minio_s3_requests_errors_total metric (#17094) 2023-04-27 10:55:30 -07:00
jiuker
b28d391a22 fix: add correct worker count before startHTTPLogger() (#17091) 2023-04-27 10:51:16 -07:00
jiuker
c8b92f6067 protect wg.Done from being called twice (#17075) 2023-04-27 07:55:36 -07:00
Krishnan Parthasarathi
e7cac8acef Add tags to auditLogLifecycle (#17081) 2023-04-26 17:49:00 -07:00
Anis Eleuch
31b5acc245 tcp: Increase user timeout to 10 minutes (#17087) 2023-04-26 17:48:31 -07:00
Harshavardhana
6105997299 remove unnecessary log when listing resume fails (#17086) 2023-04-26 14:53:25 -07:00
Harshavardhana
8c874884fc fix: do not copy context in DiskInfo cache (#17085) 2023-04-26 12:13:54 -07:00
Aditya Manthramurthy
ebfe81e5fd Fix put bucket policy error code (#17084) 2023-04-26 11:21:27 -07:00
Anis Eleuch
0b7ca094e4 Remove Expect 100-continue in internode communications (#17061) 2023-04-26 09:33:45 -07:00
Praveen raj Mani
72802a5972 Use 'minio/pkg/sync/errgroup' and 'minio/pkg/workers' (#17069) 2023-04-25 22:57:40 -07:00
Harshavardhana
b1f3935c5b allow ListObjects() when a prefix is an object (#17074) 2023-04-25 22:41:54 -07:00
Harshavardhana
dbd53af369 fix: initialize reverse proxy forwarder with right public certs (#17080) 2023-04-25 15:50:32 -07:00
Harshavardhana
b09fe0e50e fix: DeleteBucket for peers() must recreate bucket upon errors (#17079) 2023-04-25 14:16:35 -07:00
Krishnan Parthasarathi
fae9000304 heal: Pick maximally occurring modTime in quorum (#17071) 2023-04-25 10:13:57 -07:00
Harshavardhana
8fd07bcd51 simplify sort.Sort by using sort.Slice (#17066) 2023-04-24 13:28:18 -07:00
Anis Eleuch
6addc7a35d server-info: Return initializing state properly (#17070) 2023-04-24 09:10:02 -07:00
Harshavardhana
477230c82e avoid attempting to migrate old configs (#17004) 2023-04-21 13:56:08 -07:00
Harshavardhana
d1737199ed fix: delete DNS upon success, update failure message (#17059) 2023-04-21 12:12:31 -07:00
Jiffs Maverick
61101d82d9 Rename inodes metric in grafana dashboards (#17030) 2023-04-21 11:07:30 -07:00
Minio Trusted
cebb948da2 Update yaml files to latest version RELEASE.2023-04-20T17-56-55Z 2023-04-21 08:00:05 +00:00
Shireesh Anjal
c61c4b71b2 Upgrade madmin-go to 2.0.20 (#17054) 2023-04-20 10:56:55 -07:00
Harshavardhana
84f31ed45d simplify MRF, converge it to regular healing (#17026) 2023-04-19 07:47:42 -07:00
jiuker
8a81e317d6 verify maxPartID in object options helpers (#17015) 2023-04-18 22:34:30 -07:00
Anis Eleuch
224d9a752f fix: the race in healing tracker code (#17048) 2023-04-18 14:49:56 -07:00
Anis Eleuch
0db34e4b85 Listen bucket events to send empty events with new line (#17037) 2023-04-18 08:11:30 -07:00
Harshavardhana
8a9b9832fd add Dial timeout for Kafka broker pings (#17044) 2023-04-17 15:45:01 -07:00
Klaus Post
f66625be67 Snowball: Extract headers for metadata (#17042) 2023-04-17 12:16:54 -07:00
Harshavardhana
6825bd7e75 fix: inlined objects don't need to honor long locks (#17039) 2023-04-17 12:16:37 -07:00
Denis Krivenko
18515a4e3b Reformat helm chart templates (#16947) 2023-04-16 23:04:15 -07:00
Klaus Post
839b9c9271 Reduce allocations in Walkdir (#17036) 2023-04-15 10:25:25 -07:00
Harshavardhana
dd9ed85e22 implement support for FTP/SFTP server (#16952) 2023-04-15 07:34:02 -07:00
jiuker
e96c88e914 fix: DeleteBucketThrottle must delete ARN (#17034) 2023-04-15 02:14:26 -07:00
jiuker
6c1410f7f5 fix: Type of rejection for FIFO quota input (#17016) 2023-04-15 01:22:18 -07:00
Krishnan Parthasarathi
f92450d8b3 commonParity should pick readable FileInfo (#17032) 2023-04-14 16:23:28 -07:00
Klaus Post
c133979b8e Add part count to checksum (#17035) 2023-04-14 09:44:45 -07:00
Poorna
a9269cee29 heal: avoid logging version not found (#17031) 2023-04-13 19:45:52 -07:00
Harshavardhana
cf42ede92c upgrade helm to v5.0.8
Signed-off-by: Harshavardhana <harsha@minio.io>
2023-04-13 14:49:51 -07:00
Klaus Post
958a480e53 fix: lambda function expiration when cred.Expiration is set (#17029) 2023-04-13 08:10:57 -07:00
吴小白
62151a751d build: support loong64 (#17027) 2023-04-13 08:10:23 -07:00
Minio Trusted
f1ab9df2ee Update yaml files to latest version RELEASE.2023-04-13T03-08-07Z 2023-04-13 07:28:46 +00:00
Anis Eleuch
a42650c065 Add minio_bucket_usage_version_total metric to Prometheus (#17023) 2023-04-12 20:08:07 -07:00
Harshavardhana
bdad3730f7 fix: do not error out if the local bucket is missing (#17025) 2023-04-12 15:44:16 -07:00
Harshavardhana
a5835cecbf fix: regression in counting total requests (#17024) 2023-04-12 14:37:19 -07:00
Denis Krivenko
b19620b324 Remove redundant namespace from the helm chart (#16968) 2023-04-11 21:09:29 -07:00
Poorna
cd6dec49c0 Add trace support for ilm activity (#16993) 2023-04-11 19:22:32 -07:00
Poorna
d350654aee config: fix duplication of replication priority key (#17014) 2023-04-11 19:22:10 -07:00
Krishnan Parthasarathi
6877578bbc Update minio_node_bucket_scans_finished metrics (#17006) 2023-04-11 19:21:34 -07:00
Kunal Singh
10693fddfa docs: fix repetition in wordings (#16999) 2023-04-11 13:23:14 -07:00
Harshavardhana
f3682b6149 allow writes to pools with inconsistent xl.meta (#17008) 2023-04-11 11:17:46 -07:00
ferhat elmas
056ca0c68e refactor: reuse open file in storage interface (#16970) 2023-04-09 23:09:28 -07:00
Krishnan Parthasarathi
25f7a8e406 Indicate RenameData is called by healObject (#16997) 2023-04-09 10:25:37 -07:00
Anis Eleuch
1f1c267b6c Add used inodes to disk info (#16994) 2023-04-07 07:52:14 -07:00
ferhat elmas
eab1dc927b chore: fix automatic issue handling from linter in ci (#16969) 2023-04-07 07:51:31 -07:00
Anis Eleuch
fc94ea1ced trace: Fix <unknown> func name of requests rejected by max clients (#16977) 2023-04-07 07:51:12 -07:00
jiuker
67d2cf8f30 add Err to Get getRemoteTargetClient. (#16982) 2023-04-07 07:50:46 -07:00
Minio Trusted
2c85b84cbc Update yaml files to latest version RELEASE.2023-04-07T05-28-58Z 2023-04-07 06:07:27 +00:00
Harshavardhana
09a25ea7b7 lint: fix some lint issues on files 2023-04-06 22:42:10 -07:00
Klaus Post
260a63ca73 os/windows: do not return errFileNotFound in readDir's (#16988) 2023-04-06 22:28:58 -07:00
Shubhendu
d3f70ea340 Enable audit log for global handlers (#16964)
Signed-off-by: Shubhendu Ram Tripathi <shubhendu@minio.io>
2023-04-06 21:03:39 -07:00
Aditya Manthramurthy
ceebd35ef7 Bump up console to 0.26.3 (#16995) 2023-04-06 20:18:14 -07:00
Anis Eleuch
91b6fe1af3 trace: Bootstrap to show the correct source line number (#16989) 2023-04-06 17:51:53 -07:00
Klaus Post
9803f68522 snowball: Restrict zstd window size (#16987) 2023-04-06 17:47:38 -07:00
Harshavardhana
47b7469a60 add buffer pool for proxy forwarder (#16942) 2023-04-06 15:54:12 -07:00
Shubhendu
4c204707fd Correctly remove null version while applying ILM rules (#16971)
Signed-off-by: Shubhendu Ram Tripathi <shubhendu@minio.io>
Co-authored-by: Harshavardhana <harsha@minio.io>
2023-04-06 14:10:01 -07:00
Harshavardhana
8fd6be0827 avoid mknod in GitHub actions (#16991) 2023-04-06 12:02:41 -07:00
Harshavardhana
c06e0bfef9 set correct Host: value for replication event notification (#16984) 2023-04-06 10:20:53 -07:00
Klaus Post
8625a9dbb3 Make listing metadata permissions stricter (#16974) 2023-04-06 07:52:35 -07:00
Alex
2b71b659e0 Update Console to v0.26.2 (#16981)
Signed-off-by: Benjamin Perez <benjamin@bexsoft.net>
2023-04-06 01:31:09 -07:00
jiuker
0320ac43cb simplify bucketInfo return in GetBucketInfo peer call (#16983) 2023-04-06 01:30:50 -07:00
Anis Eleuch
111c7d4026 assumeRole return the correct http code for auth errors (#16967) 2023-04-05 22:19:31 -07:00
Klaus Post
62c3df0ca3 fix: directory listing on Go 1.20 windows (#16976) 2023-04-05 14:36:49 -07:00
Klaus Post
ae011663e8 mint: Ignore teardown errors (#16979) 2023-04-05 11:10:24 -07:00
Shireesh Anjal
12591cd241 Upgrade madmin-go to 2.0.18 (#16961) 2023-04-05 04:55:55 -07:00
Andrea Longo
0499e1c4b0 Remove ~ from Object Lambda curl example (#16966) 2023-04-04 12:15:28 -07:00
Poorna
3158f2d12e Add support for batch key rotation (#16844) 2023-04-04 10:56:54 -07:00
Praveen raj Mani
51f7f9aaa3 Generalize the event store using go generics (#16910) 2023-04-04 10:52:24 -07:00
jiuker
6e359c586e fix: close chan before return in scanner usage updates (#16960) 2023-04-04 10:51:05 -07:00
Poorna
699a24f7e5 batch: validate versioning on src/tgt buckets (#16955) 2023-04-04 10:50:11 -07:00
jiuker
5fa3665074 add isSysErrNotDir to openFileNoSync (#16958) 2023-04-04 08:00:08 -07:00
Harshavardhana
3b7781835e add lock metrics per node (#16943) 2023-04-03 21:23:24 -07:00
Shubhendu
5fe1b46bfd Enable sending audit logs during version deletion (#16954)
Signed-off-by: Shubhendu Ram Tripathi <shubhendu@minio.io>
2023-04-03 11:58:04 -07:00
Harshavardhana
f65cce4317 move mint tests to separate folders to not confuse GitHub (#16940) 2023-03-31 14:38:10 -07:00
Allan Roger Reid
27d0d22e5d docs: Specify correct port in docker-compose README.md (#16939) 2023-03-31 12:20:56 -07:00
Poorna
407c9ddcbf feat: add batch replicate from remote MinIO to local cluster (#16922) 2023-03-31 10:48:36 -07:00
Anis Eleuch
d90d0c8931 Use one http response recorder per external http call (#16938) 2023-03-31 09:37:29 -07:00
Harshavardhana
216a471bbb on quorum DeleteObject() errors attempt an MRF (#16932) 2023-03-31 08:15:41 -07:00
jiuker
a7b7860e0e fix: potential data conflicts save site-resync metadata (#16926) 2023-03-30 20:59:45 -07:00
Klaus Post
d703daa480 Add metadata extension to ListObjectVersions (#16930) 2023-03-30 12:20:42 -07:00
Poorna
dc8fdcb9c9 fix: error checking in DeleteBucket (#16929) 2023-03-30 11:54:08 -07:00
Harshavardhana
7a6c4e438e allow more workers for ILM expiration (#16924) 2023-03-30 10:47:15 -07:00
jiuker
c468b4e2a8 fix: avoid out of slice index (#16925) 2023-03-30 08:03:48 -07:00
jiuker
b04956a676 fix: put *msgp.Reader back in pool (#16927) 2023-03-30 08:03:12 -07:00
Aditya Manthramurthy
518f6e4d39 fix: missing return after error response (#16920) 2023-03-29 16:21:13 -07:00
Allan Roger Reid
483b226cc1 fix: avoid logging when object/version not found in replication (#16919) 2023-03-29 15:02:45 -07:00
Harshavardhana
13151cbb2b [testing] add mint runner test (#16868) 2023-03-29 11:38:43 -07:00
Harshavardhana
8e02660a0d update all our deps (#16899) 2023-03-28 03:45:24 -07:00
jiuker
16feef2a2c fix: time.Parse RFC3339Nano (#16892) 2023-03-27 11:51:54 -07:00
jiuker
66ff17e452 fix: Avoid multiple write responses (#16894) 2023-03-27 09:15:23 -07:00
Harshavardhana
4c5edacae2 ignore operation timedout errors (#16891) 2023-03-26 03:16:51 -07:00
Anis Eleuch
8b4d0255b7 Set Console global Root CAs early to trust provided certs (#16890) 2023-03-25 09:58:38 -07:00
Minio Trusted
58c129f94a Update yaml files to latest version RELEASE.2023-03-24T21-41-23Z 2023-03-24 23:03:23 +00:00
Poorna
74040b457b Allow setting sync mode for site replication (#16876) 2023-03-24 14:41:23 -07:00
Anis Eleuch
c259a8ea38 Set tcp user timeout to clean sockets with data in the buffer (#16887) 2023-03-24 08:10:58 -07:00
mstmdev
2d51e42305 Remove the redundant conditional in the validateParity function (#16866) 2023-03-23 14:06:22 -07:00
Klaus Post
5e3bfd2148 Add user tags when listing with metadata (#16883) 2023-03-23 10:27:19 -07:00
Klaus Post
8b0ab6ead6 Revert "Make localLocker lock attempts cancellable (#16510)" (#16884) 2023-03-23 10:26:21 -07:00
Shubhendu
b1b0aadabf Revert query parameter src from diag upload if callhome enabled (#16881) 2023-03-23 00:24:58 -07:00
Harshavardhana
ac7d9c449a add missing expiration information from 'sts info' (#16878) 2023-03-22 16:47:02 -07:00
Anis Eleuch
1346561b9d return quorum error instead of insufficient storage error (#16874) 2023-03-22 16:22:37 -07:00
Minio Trusted
4bc52897b2 Update yaml files to latest version RELEASE.2023-03-22T06-36-24Z 2023-03-22 21:16:15 +00:00
dependabot[bot]
035791669e build(deps): bump github.com/nats-io/nats-streaming-server from 0.24.1 to 0.24.3 (#16752)
Signed-off-by: dependabot[bot] <support@github.com>
2023-03-21 23:36:24 -07:00
Krishnan Parthasarathi
6017b63a06 top-locks: Group by lock request ID (#16860) 2023-03-21 18:35:29 -07:00
Klaus Post
11d04279c8 Add lazy init of audit logger (#16842) 2023-03-21 10:50:40 -07:00
Harshavardhana
0448728228 fix: add deadline conns and dnsCache for remote transports (#16865) 2023-03-21 08:49:20 -07:00
Harshavardhana
12047702f5 fix: tweak the maintenance=true to satisfy baremetal first (#16864) 2023-03-21 08:48:38 -07:00
Harshavardhana
fb1492f531 check for quorum errors for DeleteBucket() (#16859) 2023-03-20 23:38:06 -07:00
Minio Trusted
d14ead7bec Update yaml files to latest version RELEASE.2023-03-20T20-16-18Z 2023-03-21 01:07:32 +00:00
Aditya Manthramurthy
05444a0f6a Use the official pub key to always verify binary (#16857) 2023-03-20 13:16:18 -07:00
Harshavardhana
b3c54ec81e reject object names with '\' on windows (#16856) 2023-03-20 13:16:00 -07:00
Harshavardhana
6c11dbffd5 add crash protection from backend modifications (#16846) 2023-03-20 09:08:42 -07:00
Harshavardhana
3b5dbf9046 allow bootstrapping to validate internode tokens (#16853) 2023-03-20 01:40:24 -07:00
Aditya Manthramurthy
09c733677a Add test for fixed post policy exploit (#16855) 2023-03-20 01:06:45 -07:00
Harshavardhana
8d6558b236 fix: convert '\' to '/' on windows (#16852) 2023-03-20 00:35:25 -07:00
Aditya Manthramurthy
67f4ba154a fix: post policy request security bypass (#16849) 2023-03-19 21:15:20 -07:00
Poorna
440ad20c1d Add support for batch job cancellation (#16843) 2023-03-17 23:42:43 -07:00
Krishnan Parthasarathi
31fba6f434 Save bootstrap trace events in a circular buffer (#16823) 2023-03-17 16:01:03 -07:00
Harshavardhana
280442e533 reduce 250ms to 50ms retry looking for metacache block (#16795) 2023-03-17 14:44:01 -07:00
Shubhendu
850a945a18 Added query parameter src to diag upload if callhome enabled (#16837)
Signed-off-by: Shubhendu Ram Tripathi <shubhendu@minio.io>
2023-03-17 08:30:16 -07:00
Harshavardhana
46f9049fb4 simplify error responses for KMS (#16793) 2023-03-16 11:59:42 -07:00
Aditya Manthramurthy
58266c9e2c Add enable flag for LDAP IDP config (#16805) 2023-03-16 11:58:59 -07:00
Poorna
d1e775313d support decommissioning of tiered objects (#16751) 2023-03-16 07:48:05 -07:00
Anis Eleuch
a65df1e67b debug: new tool to reorder local erasure disks (#16816) 2023-03-15 13:53:17 -07:00
Harshavardhana
e700be8cd6 fix: return appropriate Location header for MakeBucket() (#16820) 2023-03-15 13:40:40 -07:00
Harshavardhana
3fdd574f54 update go dependencies (#16798) 2023-03-15 11:59:17 -07:00
Harshavardhana
e0f4dd6027 remove unnecessary logs from WalkDir(), PutObject() (#16818) 2023-03-15 11:52:23 -07:00
Harshavardhana
de02eca467 restore rotating root credentials properly (#16812) 2023-03-15 08:07:42 -07:00
Nitish Tiwari
50dbd2cacc Update audit log flow to use new headers with unit (#16797) 2023-03-13 22:50:19 -07:00
Minio Trusted
c2f9cc5824 Update yaml files to latest version RELEASE.2023-03-13T19-46-17Z 2023-03-13 22:11:07 +00:00
Harshavardhana
c7f7e67a10 Do not allow adding root user to IAM subsystem (#16803) 2023-03-13 12:46:17 -07:00
Klaus Post
628042e65e tests: Protect globalLocalDrives against races (#16800) 2023-03-13 06:04:20 -07:00
ferhat elmas
cde7eeb660 chore: drop go versions in static analysis (#16790) 2023-03-11 22:10:33 -08:00
Aditya Manthramurthy
6305b206e1 fix: site-repl should heal STS with virtual parent (#16792) 2023-03-10 16:21:51 -08:00
Klaus Post
d85da9236e Add Object Version count histogram (#16739) 2023-03-10 08:53:59 -08:00
Minio Trusted
9800760cb3 Update yaml files to latest version RELEASE.2023-03-09T23-16-13Z 2023-03-10 00:59:46 +00:00
Anis Elleuch
5c087bdcad fix: a cosmetic error reporting with a lock timeout (#16788) 2023-03-09 15:16:13 -08:00
Klaus Post
a547bf517d Remove locks on usage cache (#16786) 2023-03-09 15:15:46 -08:00
Harshavardhana
b984bf8d1a allow expiration of all versions during Listing() (#16757) 2023-03-09 15:15:30 -08:00
Daniel Valdivia
18f9cccfa7 Upgrade Console to v0.25.0 (#16782)
Signed-off-by: Daniel Valdivia <18384552+dvaldivia@users.noreply.github.com>
2023-03-08 12:15:13 -08:00
Poorna
fb6ab1cca2 fix: allow replication of 'null' delete markers (#16773) 2023-03-08 07:03:29 -08:00
Krishnan Parthasarathi
56c57e2c53 tier-stats: Avoid repeated logs (#16774) 2023-03-07 16:43:05 -08:00
Poorna
a6057c35cc Avoid peer notification when peer is offline, tune retries (#16737) 2023-03-07 08:13:28 -08:00
Harshavardhana
901887e6bf feat: add lambda transformation functions target (#16507) 2023-03-07 08:12:41 -08:00
Poorna
ee54643004 Avoid unnecessary replication heal attempts (#16769) 2023-03-07 07:43:38 -08:00
Harshavardhana
0a17acdb34 return error if policy changes on disabled groups (#16766) 2023-03-06 10:46:24 -08:00
Harshavardhana
72e5212842 fix: handle syscall.EROFS also for osIsPermission() (#16765) 2023-03-06 08:56:29 -08:00
ferhat elmas
714283fae2 cleanup ignored static analysis (#16767) 2023-03-06 08:56:10 -08:00
ferhat elmas
3423028713 cleanup Go linter settings (#16736) 2023-03-04 20:57:35 -08:00
jiuker
9d062b37d7 return underlying error with BackendDown{} error (#16738) 2023-03-03 23:56:53 -08:00
Harshavardhana
4636d3a9c3 upgrade all dependencies (#16753) 2023-03-03 18:22:40 -08:00
Aditya Manthramurthy
c95ede35c1 Switch windows CI back to go 1.19.x (#16755) 2023-03-03 15:19:28 -08:00
Aditya Manthramurthy
7415e1aa56 Switch to go1.20 in CI (#16743) 2023-03-03 10:15:03 -08:00
jiuker
f350953a19 calculate disk cache usage percent accurately (#16740) 2023-03-02 20:32:22 -08:00
Daniel Valdivia
958bba5b42 Attach creds, owner and region to madmin calls (#16658)
Signed-off-by: Daniel Valdivia <18384552+dvaldivia@users.noreply.github.com>
2023-03-02 14:35:24 -08:00
Poorna
0f2b95b497 fix a data race in IAM loading (#16742) 2023-03-02 14:32:54 -08:00
Klaus Post
d07089ceac Fix scanner deadlock on lost global lock (#16726) 2023-02-28 21:34:45 -08:00
Aditya Manthramurthy
47dfa62384 Update LDAP doc for new policy attach|detach cmds (#16723) 2023-02-27 21:04:27 -08:00
Minio Trusted
3a3265cf88 Update yaml files to latest version RELEASE.2023-02-27T18-10-45Z 2023-02-28 00:30:32 +00:00
Harshavardhana
0ff931dc76 fix: allow CORS to work by default (#16713) 2023-02-27 10:10:45 -08:00
Praveen raj Mani
4d708cebe9 Support adding service accounts with expiration (#16430)
Co-authored-by: Harshavardhana <harsha@minio.io>
2023-02-27 10:10:22 -08:00
dorman
4d7c8e3bb8 Remove redundant log (#16710)
Co-authored-by: z30001483 <zekaifeng2@huawei.com>
2023-02-27 09:59:47 -08:00
Aditya Manthramurthy
8cde38404d Add metrics for custom auth plugin (#16701) 2023-02-27 09:55:18 -08:00
Krishnan Parthasarathi
fe7bf6cbbc Support tier-add if tier backend not empty (#16715) 2023-02-27 09:26:26 -08:00
Shubhendu
8b4eb2304b Set logger webhook proxy on subnet proxy change (#16665)
Signed-off-by: Shubhendu Ram Tripathi <shubhendu@minio.io>
2023-02-27 08:35:36 -08:00
Harshavardhana
ae029191a3 liveness returns "busy" if queued requests > available capacity (#16719) 2023-02-27 08:34:52 -08:00
Harshavardhana
bfedea9bad fix: disk healing should honor the right pool/set index (#16712) 2023-02-27 04:55:32 -08:00
Aditya Manthramurthy
7777d3b43a Remove globalSTSTLSConfig (#16709) 2023-02-26 23:37:00 -08:00
Aditya Manthramurthy
9ed4fc9687 Remove globalOpenIDConfig (#16708) 2023-02-25 21:01:37 -08:00
jiuker
b49b39e99d fix: errNoSuchPolicy should use errors.Is (#16656) 2023-02-25 00:01:37 -08:00
jiuker
cd3a2de5a3 add isValidLocation to common parseLocation (#16690) 2023-02-25 08:09:20 +05:30
jiuker
6e8960ccdd fix: delete globalProfiler should lock (#16697) 2023-02-25 08:07:44 +05:30
Aditya Manthramurthy
e05f3d5d84 Remove globalLDAPConfig (#16706) 2023-02-25 08:07:22 +05:30
Anis Elleuch
94c6cb1323 tests: Add test for S3 API error codes (#16705) 2023-02-25 08:06:29 +05:30
Aditya Manthramurthy
3f81cd1b22 Update OpenID doc with info on redirection params (#16704) 2023-02-24 12:13:00 -08:00
Anis Elleuch
8da0f4c5bb Better error message when TLS certs do not have proper permissions (#16703) 2023-02-24 06:34:55 -08:00
Klaus Post
9acf1024e4 Remove bloom filter (#16682)
Removes the bloom filter since it has limited usability, often gets saturated anyway, and adds a fair amount of complexity to the scanner.

Also shaves a small amount of CPU off each write operation.
2023-02-24 09:03:31 +05:30
Harshavardhana
b21d3f9b82 event target registration failures must be returned (#16700) 2023-02-23 21:59:14 +05:30
Minio Trusted
2bbf380262 Update yaml files to latest version RELEASE.2023-02-22T18-23-45Z 2023-02-22 21:50:11 +00:00
Harshavardhana
f678bcf7ba update dependencies to the latest releases (#16694) 2023-02-22 10:23:45 -08:00
Harshavardhana
a0f06eac2a add Veeam SOS API first implementation (#16688) 2023-02-22 19:54:57 +05:30
jiuker
83fe1a2732 log: add more info about BROWSER_URL (#16678) 2023-02-22 19:54:05 +05:30
Harshavardhana
5c98223c89 add correct HostId instead of deploymentId for error responses (#16686) 2023-02-22 15:41:09 +05:30
jiuker
663a0b7783 save correct bucketInfo on its indexes (#16685) 2023-02-22 14:08:34 +05:30
Anis Elleuch
6efe4d1df6 perf: Only remove generated data when no bucket name specified (#16610) 2023-02-21 21:21:40 -08:00
Daniel Valdivia
fb17f97cf3 move audit and logger message structure to minio/pkg (#16655)
Signed-off-by: Daniel Valdivia <18384552+dvaldivia@users.noreply.github.com>
2023-02-21 21:21:17 -08:00
Shubhendu
6b65ba1551 Added attribute proxy for mc admin config set ALIAS logger_webhook (#16657)
Signed-off-by: Shubhendu Ram Tripathi <shubhendu@minio.io>
2023-02-21 21:19:46 -08:00
Poorna
9202c6e26a Reload bucket targets when replication config is refreshed (#16684) 2023-02-21 21:18:49 -08:00
Poorna
59a5456091 fix: STS error translation to API error (#16683) 2023-02-22 09:57:48 +05:30
Allan Roger Reid
8bfe972bab Set meaningful message from minio with env variable KMS_SECRET_KEY (#16584) 2023-02-22 07:13:01 +05:30
Klaus Post
fd6622458b Add detailed scanner trace output and notifications (#16668) 2023-02-21 09:33:33 -08:00
Poorna
8a08861dd9 fix: healing of replication config for endpoint changes (#16648) 2023-02-20 16:06:13 +05:30
Harshavardhana
82dcfd4e10 update dependencies to latest releases (#16651) 2023-02-20 16:05:20 +05:30
Harshavardhana
b66d7dc708 add missing x-amz-id-2 to event notification data (#16646) 2023-02-20 15:41:47 +05:30
Harshavardhana
eebdd2b31d update NOTICE to 2015-2023 copyright 2023-02-19 00:03:50 +05:30
Harshavardhana
b94733ab31 avoid locks when unnecessary in SiteReplicationMetaInfo() (#16650) 2023-02-18 05:35:22 -08:00
Minio Trusted
7f2c90a0ed Update yaml files to latest version RELEASE.2023-02-17T17-52-43Z 2023-02-17 19:07:39 +00:00
Klaus Post
84bb7d05a9 fix: healing deadlocks and ordering (#16643) 2023-02-17 23:22:43 +05:30
Harshavardhana
98a84d88e2 fix: trim and ignore './' directories (#16642) 2023-02-17 07:15:03 -08:00
jiuker
e470268c7c fix: a possible closer leak in SelectObjectHandler (#16598) 2023-02-17 01:44:40 -08:00
jiuker
3a6cd4f73d fix: sync.pool won't Put reader back (#16638) 2023-02-17 01:42:43 -08:00
Harshavardhana
6ea150fd68 fix: avoid printing certain errors under few locations (#16631) 2023-02-17 01:40:31 -08:00
Anis Elleuch
a7188bc9d0 fix: evaluate BypassGov policy action in deletion correctly (#16635) 2023-02-17 07:53:34 +05:30
Harshavardhana
e1e9ddd4a4 use kes.Status() for Status() call (#16629) 2023-02-16 22:12:24 +05:30
Krishnan Parthasarathi
a1dd08f2e6 Include tier name in MinIO/S3 target user-agent (#16630) 2023-02-15 22:09:46 -08:00
Krishnan Parthasarathi
d136ac0596 Don't close transition task channel on server exit (#16627) 2023-02-15 22:09:25 -08:00
Poorna
c33a237067 fix: under site replication disallow remote target modification (#16628) 2023-02-15 20:22:13 -08:00
Poorna
eb7d3da994 Fix site replication status reporting of quota (#16626) 2023-02-16 08:08:35 +05:30
Harshavardhana
37134e42d4 ignore io.EOF, io.ErrUnexpectedEOF on xl.meta reads in WalkDir() (#16625) 2023-02-15 07:12:48 -08:00
Klaus Post
626a4efaad Update reedsolomon to v1.11.7 (#16624) 2023-02-15 05:07:35 -08:00
Harshavardhana
0c1f8b4e0f add user-agent for all minio.Client usage (#16619) 2023-02-14 13:19:30 -08:00
Anis Elleuch
857674c3a0 heal: Do not mark buckets as done when there is no online disks (#16621) 2023-02-14 12:50:13 -08:00
Harshavardhana
15a75bd79b ignore preconditionFailed error in batch replication (#16615) 2023-02-14 07:22:08 -08:00
Andreas Auernhammer
74887c7372 kms: add support for KES API keys and switch to KES Go SDK (#16617)
Signed-off-by: Andreas Auernhammer <hi@aead.dev>
2023-02-14 07:19:20 -08:00
Harshavardhana
31188e9327 add parallel workers in batch replication (#16609) 2023-02-13 12:07:58 -08:00
Harshavardhana
ee6d96eb46 Periodically remove stale buckets from memory (#16597) 2023-02-13 08:09:52 -08:00
Minio Trusted
11fe2fd79a update helm v5.0.7
Signed-off-by: Harshavardhana <harsha@minio.io>
2023-02-13 16:07:23 +05:30
atalakey4work
c2863cc6ef helm: remove parentheses from podDisruptionBudget (#16605) 2023-02-13 02:36:06 -08:00
Anis Elleuch
d6d01067a0 fix: allow global leader lock context merge to be canceled (#16603)
The global leader lock was originally designed to be acquired only once,
until the node is killed. However, the code currently acquires it
repeatedly during the lifetime of the server, which leaks a goroutine
on every acquisition.
2023-02-13 01:26:38 -08:00
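A minimal sketch of the pattern behind this fix, assuming a hypothetical `mergeContext` helper (not MinIO's actual implementation): merging two contexts requires a watcher goroutine, so the merge must also hand back a cancel function; if the caller never releases it and neither parent context ever fires, every lock acquisition leaks one goroutine.

```go
package main

import (
	"context"
	"fmt"
	"time"
)

// mergeContext derives a context that ends when either parent is done.
// The returned CancelFunc releases the watcher goroutine explicitly;
// without it, repeated merges whose parents never fire would each leak
// a goroutine.
func mergeContext(a, b context.Context) (context.Context, context.CancelFunc) {
	ctx, cancel := context.WithCancel(a)
	go func() {
		defer cancel()
		select {
		case <-a.Done():
		case <-b.Done():
		case <-ctx.Done(): // caller invoked cancel: exit promptly
		}
	}()
	return ctx, cancel
}

func main() {
	a, cancelA := context.WithCancel(context.Background())
	defer cancelA()
	b, cancelB := context.WithTimeout(context.Background(), 100*time.Millisecond)
	defer cancelB()

	ctx, cancel := mergeContext(a, b)
	defer cancel() // releases the watcher even if neither parent fires first

	<-ctx.Done()
	fmt.Println("merged context done:", ctx.Err())
}
```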
Minio Trusted
1d3b18c3f4 update helm v5.0.6
Signed-off-by: Harshavardhana <harsha@minio.io>
2023-02-13 12:23:06 +05:30
jiuker
a15b6f21b8 remove incorrect use of WaitGroup (#16596) 2023-02-12 20:59:45 -08:00
Anis Elleuch
689179bf18 ServerInfo: return per erasure set information (#16583) 2023-02-11 18:31:56 +05:30
Minio Trusted
bf749eec61 Update yaml files to latest version RELEASE.2023-02-10T18-48-39Z 2023-02-10 20:55:45 +00:00
Klaus Post
d0f4cc89a5 Re-add Veeam Listing workaround (#16593) 2023-02-10 10:48:39 -08:00
Harshavardhana
d65debb6bc fix: comply with RFC6750 UserInfo endpoint requirements (#16592) 2023-02-10 22:20:25 +05:30
Harshavardhana
72daccd468 fix: scanner in healing cycle must use actual size (#16589) 2023-02-10 06:53:03 -08:00
Harshavardhana
b363400587 fix: username replacements for aws:username must use parentUser (#16591) 2023-02-10 06:52:31 -08:00
k0i
6b41f941b6 fix: README.md in docs/config (#16564) 2023-02-10 03:02:11 -08:00
jiuker
b9f5f9ba3f fix: aclHandlers convert XML parse error to relevant client error (#16587) 2023-02-10 02:59:10 -08:00
Anis Elleuch
c8ffa59d28 Periodically refresh buckets metadata from the backend disks (#16561)
fixes #16553
2023-02-09 10:29:20 -08:00
Minio Trusted
1141187bf2 Update yaml files to latest version RELEASE.2023-02-09T05-16-53Z 2023-02-09 06:21:34 +00:00
Poorna
52aeebebea Fix site replication meta info call to be non-blocking (#16526)
Co-authored-by: Harshavardhana <harsha@minio.io>
2023-02-08 21:16:53 -08:00
Aditya Manthramurthy
e101384aa4 Update console to v0.23.1 (#16574) 2023-02-08 20:01:53 -08:00
Harshavardhana
71f02adfca Revert "Print golang http errors in MinIO log format (#16465)"
This reverts commit 1fd7946dce.
2023-02-09 09:27:27 +05:30
Krishnan Parthasarathi
9de26531e4 tiering: UpdateWorkers may be called before Init (#16573) 2023-02-08 19:13:34 -08:00
Anis Elleuch
fadc46b906 Add the access key and parent user in the audit log (#16572) 2023-02-08 11:05:26 -08:00
Anis Elleuch
b1d98febfd New disk healing goes through the healing workers (#16568) 2023-02-08 09:25:29 -08:00
jiuker
1828fb212a fix: avoid goroutine leak after timeouts in PeerMetrics (#16569) 2023-02-08 09:11:16 -08:00
Harshavardhana
c97f50e274 replication syncs happen in 60sec intervals (#16571) 2023-02-08 08:23:26 -08:00
jiuker
be92046dfd Anonymous pools in health info output are double counted (#16566) 2023-02-08 06:58:50 -08:00
Krishnan Parthasarathi
990fc415f7 Ensure safety of transitionState at startup (#16563) 2023-02-07 23:11:42 -08:00
Harshavardhana
d8daabae9b make site replication healing safer (#16560) 2023-02-07 21:44:42 -08:00
Harshavardhana
84fe4fd156 fix: multiObjectDelete by passing versionId for authorization (#16562) 2023-02-08 08:01:00 +05:30
Harshavardhana
747d475e76 initialize subsystems that are not dependent on buckets first (#16559) 2023-02-07 12:46:47 -08:00
Anis Elleuch
095b518802 Show a better error msg when internal data encryption key is incorrect (#16549) 2023-02-07 05:22:54 -08:00
Harshavardhana
0319ae756a fix: pass proper username (simple) string as expected (#16555) 2023-02-07 03:43:08 -08:00
Harshavardhana
11c7ecb5cf support if-match/if-none-match with s3 uploads (#16551) 2023-02-06 18:58:29 -08:00
Cesar Celis Hernandez
422c396d73 Removing old action that is no longer needed (#16550) 2023-02-07 07:06:29 +05:30
Anis Elleuch
ffd57fde90 Convert 'server closed idle connection' to errDiskNotFound (#16548) 2023-02-06 10:42:08 -08:00
jiuker
a451d1cb8d fix: close resp.body checking for kubernetes version (#16547) 2023-02-06 10:41:41 -08:00
Harshavardhana
14cf8f1b22 upgrade deps for minio/pkg v1.6.1 to include groups conditions (#16538) 2023-02-06 09:27:29 -08:00
Harshavardhana
5996c8c4d5 feat: allow offline disks on a fresh start (#16541) 2023-02-06 09:26:09 -08:00
Harshavardhana
21885f9457 fix: liveness/readiness must return errors if KMS is unreachable (#16540) 2023-02-06 08:55:56 -08:00
Christian Niessner
6ac48aff46 helm: shared secrets handling for user and svcacct's (#16379) 2023-02-05 21:51:10 -08:00
Mathieu Parent
85ff76e7b0 fix(helm): use PodDisruptionBudget v1 for newer k8s versions (#16466) 2023-02-05 21:50:45 -08:00
Harshavardhana
e47a31f9fc fix: object size distribution in metrics for all objects (#16539) 2023-02-04 21:10:10 -08:00
Harshavardhana
517fcd423d add necessary tools to our docker release (#16536) 2023-02-03 19:40:25 -08:00
Harshavardhana
b780359598 helm version v5.0.5 2023-02-04 02:24:02 +05:30
Harshavardhana
aa8b9572b9 remove double ENABLED help output (#16528) 2023-02-03 05:52:52 -08:00
Cesar Celis Hernandez
8ca14e6267 Updating enterprise action (#16518) 2023-02-02 19:23:31 +05:30
Poorna
876e1a91b2 replication: Fix typo checking PreconditionFailed status code (#16517) 2023-02-02 19:22:02 +05:30
Klaus Post
0b7989aa4b Fix Kafka initialization crash (#16523) 2023-02-02 19:21:19 +05:30
Anis Elleuch
2278fc8f47 Print original error when IAM load fails in some places (#16511) 2023-02-01 17:32:22 +05:30
Harshavardhana
a91f353621 support 'mc admin service restart' for windows (#16512) 2023-02-01 17:31:46 +05:30
Klaus Post
cdb1b48ad9 Make localLocker lock attempts cancellable (#16510) 2023-01-31 09:41:17 -08:00
Minio Trusted
a24037bfec Update yaml files to latest version RELEASE.2023-01-31T02-24-19Z 2023-01-31 07:49:15 +00:00
Kaan Kabalak
2d0f30f062 Fix typo in code comment (#16509) 2023-01-31 07:54:19 +05:30
Krishnan Parthasarathi
cea2ca8c8e Add restore-status header for multipart objects (#16508) 2023-01-31 07:53:45 +05:30
Klaus Post
f713436dd0 Fix truncated list response on deleted replicated objects (#16504) 2023-01-30 09:13:53 -08:00
Klaus Post
b923a62425 Check pool-index for invalid setups (#16501) 2023-01-30 18:33:07 +05:30
Harshavardhana
67fce4a5b3 fix: dangling delete() upon success should return 404 (#16494) 2023-01-27 12:43:45 -08:00
Poorna
eaa65b7ade fix replication healing on list to consider all versions (#16496) 2023-01-27 12:43:28 -08:00
Poorna
820d94447c replication: fix target bucket passed on GET proxy (#16495) 2023-01-27 10:24:51 -08:00
Poorna
ed20134a7b replication: detect proxy header presence correctly (#16489) 2023-01-27 01:29:32 -08:00
Harshavardhana
d19cbc81b5 fix: do not return IAM/Bucket metadata replication errors to client (#16486) 2023-01-26 11:11:54 -08:00
Anis Elleuch
1fd7946dce Print golang http errors in MinIO log format (#16465) 2023-01-26 22:46:16 +05:30
Klaus Post
027ff0f3a8 fix: set modTime to current in snowball if archive shows empty (#16482) 2023-01-26 22:20:35 +05:30
Jan Zhanal
8fa80874a6 doc: LDAP/AD - nested groups (#16483) 2023-01-26 22:17:59 +05:30
Harshavardhana
430669cfad do not fail groupadd if group already exists
fixes #16488
2023-01-26 21:32:02 +05:30
Harshavardhana
54b561898f fix: anonymize the x-amz-id-2 value from hostname (#16478) 2023-01-25 10:25:36 -08:00
Harshavardhana
65c104a589 add x-amz-id-2 to indicate the node that received the request (#16474) 2023-01-25 09:14:10 -08:00
Anis Elleuch
0a0416b6ea Better error when setting up replication with a service account alias (#16472) 2023-01-25 21:50:12 +05:30
Anis Elleuch
441babdc41 Rename peer S3 prefix to avoid collision in the future (#16473) 2023-01-25 06:46:30 -08:00
Minio Trusted
1bf1fafc86 Update yaml files to latest version RELEASE.2023-01-25T00-19-54Z 2023-01-25 02:09:27 +00:00
Aditya Manthramurthy
50d58e9b2d Bump up console to v0.23.0 (#16470) 2023-01-24 16:19:54 -08:00
Harshavardhana
e64b9f6751 fix: disallow SSE-C encrypted objects on replicated buckets (#16467) 2023-01-24 15:46:33 -08:00
Florian Schwab
d67a846ec4 allow restarting of decommissioning if completed, failed or canceled (#16464) 2023-01-24 07:07:59 -08:00
Poorna
ca2a1c3f60 replication: clone metrics while loading metrics cache (#16462) 2023-01-24 02:10:32 -08:00
Poorna
93fbb228bf Validate if parent user exists for service acct (#16443) 2023-01-24 08:17:18 +05:30
Harshavardhana
3683673fb0 add missing gorilla/mux migration, update credits (#16461) 2023-01-23 08:46:37 -08:00
Anis Elleuch
f37a5b6dae Add CPU info in the check update user-agent (#16447) 2023-01-23 08:07:55 -08:00
Harshavardhana
31b0decd46 migrate to minio/mux from gorilla/mux (#16456) 2023-01-23 16:42:47 +05:30
Harshavardhana
eb561e1c05 allow bootstrap platform checks to be pool specific (#16455) 2023-01-23 16:24:50 +05:30
Mathieu Parent
54c9ecff5b fix(helm): fsGroup is only on podSecurityContext (#16402) 2023-01-21 10:03:03 -08:00
Wessel Valkenburg (prevue.ch)
edcd72585d Update post-job.yaml in Helm chart (#16387) 2023-01-21 10:02:08 -08:00
Alex
1a17fc17bb GitHub Workflows security hardening (#15708) 2023-01-21 09:55:17 -08:00
Poorna
ddad231921 replication: Avoid logging PreConditionFailed error (#16450) 2023-01-21 07:33:04 +05:30
Anis Elleuch
e73894fa50 grafana: Show one metric for the total data growth (#16449) 2023-01-20 09:39:28 -08:00
Klaus Post
03b94f907f fix: deleted object names for directory objects (#16448) 2023-01-20 21:16:06 +05:30
Minio Trusted
3fa7218c44 Update yaml files to latest version RELEASE.2023-01-20T02-05-44Z 2023-01-20 08:11:58 +00:00
Shireesh Anjal
0f591d245d fix: incorrect anonymization of drive endpoint (#16442) 2023-01-20 07:35:44 +05:30
Poorna
1b02e046c2 Fix bandwidth monitoring to be per remote target (#16360) 2023-01-19 18:52:16 +05:30
Harshavardhana
d08e3cc895 add a way to avoid blocking queueHealTask() depending on caller (#16433) 2023-01-19 18:50:54 +05:30
Anis Elleuch
d98116559b Use async healing in PutObject call (#16431) 2023-01-19 00:54:22 -08:00
Krishnan Parthasarathi
71c95ad0d0 Signal stop-rebalance to all rebalancing pools (#16438) 2023-01-19 06:54:23 +05:30
Minio Trusted
5c1a4ba5f9 Update yaml files to latest version RELEASE.2023-01-18T04-36-38Z 2023-01-18 07:46:44 +00:00
Aditya Manthramurthy
698862ec5d Fix transports/timeouts related regressions (#16427) 2023-01-18 10:06:38 +05:30
Harshavardhana
b4ef5ff294 remove unnecessary code checking for supported features (#16423) 2023-01-17 19:37:47 +05:30
Harshavardhana
3db658e51e use correct xml package for custom MarshalXML() (#16421) 2023-01-17 05:08:33 +05:30
Shireesh Anjal
5a9f7516d6 Add monthly license update job (#16391) 2023-01-17 05:08:15 +05:30
Anis Elleuch
3039fd4519 Optimize background heal status to use LocalStorageInfo (#16414) 2023-01-17 05:02:00 +05:30
Harshavardhana
095fc0561d feat: allow decom of multiple pools (#16416) 2023-01-16 21:36:34 +05:30
Anis Elleuch
beb1924437 Properly restart fresh disk healing when failed in some places (#16413) 2023-01-14 05:06:46 +05:30
jiuker
c8e1154f1e fix: reading from erasureDisks must be protected via read lock() (#16407) 2023-01-13 04:16:23 -08:00
Poorna
b204c2dbec fix: enforce deny on DeleteVersionAction (#16409) 2023-01-13 04:16:00 -08:00
Poorna
b22b39de96 Avoid dangling deletes if disk not found (#16401) 2023-01-12 22:20:19 -08:00
Harshavardhana
c242e6c391 fix: calculate common parity properly (#16406) 2023-01-13 03:28:16 +05:30
Anis Elleuch
e05205756f metrics: Add more logs when unable to read bucket usage (#16405) 2023-01-13 02:32:00 +05:30
Daniel Valdivia
d03b244fcd Remove checks target from docker target (#16399)
Signed-off-by: Daniel Valdivia <18384552+dvaldivia@users.noreply.github.com>
2023-01-12 01:39:12 -08:00
Aditya Manthramurthy
5ef679d8f1 Simplify/speedup container image publishing script (#16403) 2023-01-12 01:35:47 -08:00
Minio Trusted
d33c527e39 Update yaml files to latest version RELEASE.2023-01-12T02-06-16Z 2023-01-12 07:27:07 +00:00
Aditya Manthramurthy
7bc95c47a3 Update console to 0.22.5 (#16400) 2023-01-11 18:06:16 -08:00
Anis Elleuch
475a88b555 fix: error out if an object is found after a full decom (#16277) 2023-01-12 05:52:51 +05:30
Allan Roger Reid
9815dac48f fix: allow bind on ipv6 loopback failures (#16388) 2023-01-11 08:47:39 +05:30
Anis Elleuch
1ece3d1dfe Add comment field to service accounts (#16380) 2023-01-10 21:57:52 +04:00
Anis Elleuch
2146ed4033 xl: Quit early when EC config is incorrect (#16390)
Co-authored-by: Anis Elleuch <anis@min.io>
2023-01-09 23:07:45 -08:00
Minio Trusted
52b88b52f0 Update yaml files to latest version RELEASE.2023-01-06T18-11-18Z 2023-01-08 07:51:31 +00:00
Anis Elleuch
ebd4388cca s3: Return XMinioInvalidObjectName if the object contains null char (#16372) 2023-01-06 10:11:18 -08:00
Harshavardhana
57fd02ee57 update console v0.22.4 (#16374)
Signed-off-by: Harshavardhana <harsha@minio.io>
2023-01-05 22:15:51 -08:00
Anis Elleuch
0333412148 fix: heal only once per disk per set among multiple disks (#16358) 2023-01-05 20:41:19 -08:00
Anis Elleuch
1c85652cff lint: Fix in darwin environment (#16368) 2023-01-05 10:12:01 -08:00
Harshavardhana
e0086c1be7 reduce startup delays on kubernetes (#16356) 2023-01-05 02:32:43 -08:00
Poorna
b29e159604 docs: Update replication setup commands (#16361) 2023-01-04 13:39:37 -08:00
Anis Elleuch
7883e55da2 Merge buckets list from different nodes in ListBuckets() call (#16357) 2023-01-04 08:53:58 -08:00
Harshavardhana
b197623ed2 remove unnecessary kernel-tuning docs (#16354) 2023-01-04 01:33:40 -08:00
Harshavardhana
a15a2556c3 converge listBuckets() as a peer call (#16346) 2023-01-03 23:39:40 -08:00
Harshavardhana
14d29b77ae update replication tests with latest 'mc' (#16348) 2023-01-03 22:54:39 -08:00
Harshavardhana
a2514ffeed update klauspost/compress dependency (#16343) 2023-01-03 10:41:14 -08:00
Harshavardhana
f1bbb7fef5 vectorize cluster-wide calls such as bucket operations (#16313) 2023-01-03 08:16:39 -08:00
Minio Trusted
72394a8319 Update yaml files to latest version RELEASE.2023-01-02T09-40-09Z 2023-01-03 10:16:34 +00:00
Harshavardhana
1cd8e1d8b6 remove the startup jitter before locks() (#16340) 2023-01-02 01:40:09 -08:00
jiuker
62cd918061 fix: close helmInfo file descriptor (#16319) 2023-01-01 23:26:59 -08:00
Klaus Post
6a04067514 fix: tweak read buffer size to reduce over-reading (#16338) 2023-01-01 08:14:20 -08:00
Taran Pelkey
49b3908635 fix: misplaced write response command in DetachPolicy() (#16333) 2022-12-30 20:04:03 -08:00
Harshavardhana
75faef888e disable builds for go1.18 (#16332) 2022-12-30 11:37:07 -08:00
Harshavardhana
b67d97b1ba add missing fields in audit logs for non-compressed handlers (#16328) 2022-12-30 10:20:19 -08:00
Anis Elleuch
b8943fdf19 doc: Update prometheus metrics list (#16329) 2022-12-29 15:08:22 -08:00
Harshavardhana
f93183f66e fix: a deadlock by refactoring listBuckets() under site replication (#16323) 2022-12-29 00:08:31 -08:00
Harshavardhana
2937711390 fix: DeleteObject() API with versionId under replication (#16325) 2022-12-28 22:48:33 -08:00
Wojtek Czekalski
aa56c6d51d helm: Make bucket existence check faster (#16321) 2022-12-27 10:32:39 -08:00
Anis Elleuch
27417459fb metrics: Show healing info for all nodes (#16315) 2022-12-26 08:35:32 -08:00
Harshavardhana
5b8fe2e89a allow locks with object affinity to spread across pools (#16312) 2022-12-23 20:55:45 -08:00
Anis Elleuch
acc9c033ed debug: Add X-Amz-Request-ID to lock/unlock calls (#16309) 2022-12-23 19:49:07 -08:00
Poorna
8528b265a9 Validate replication target update to avoid duplicate endpoints (#16311) 2022-12-23 15:44:48 -08:00
Han Cen
44250f1a52 helm: disallow empty containers in post job template (#16281) 2022-12-23 12:32:18 -08:00
Minio Trusted
f7560670d9 update helm v5.0.4
Signed-off-by: Harshavardhana <harsha@minio.io>
2022-12-23 12:29:40 -08:00
Russell Sim
3891885800 helm: fix creating users, via proper secretKey (#16310) 2022-12-23 12:28:43 -08:00
Harshavardhana
b882310e2b avoid locks for internal and invalid buckets in MakeBucket() (#16302) 2022-12-23 07:46:00 -08:00
Poorna
de0b43de32 persist replication stats with leader lock (#16282) 2022-12-22 14:25:13 -08:00
Harshavardhana
48152a56ac upgrade UBI image to 8.7 (#16301) 2022-12-22 10:56:05 -08:00
jiuker
29dd7f1d68 tier verification leaks an fd, which must be closed (#16296)
Co-authored-by: Harshavardhana <harsha@minio.io>
2022-12-22 10:35:54 -08:00
Poorna
6423e4c767 Remove site replication config if it succeeded locally (#16279) 2022-12-22 01:31:20 -08:00
Harshavardhana
1dd8f0e8f3 update console v0.22.3 (#16292)
Signed-off-by: Harshavardhana <harsha@minio.io>
2022-12-21 23:47:51 -08:00
Krishnan Parthasarathi
2fa35def2c Fix DeleteObject when only free versions remain (#16289) 2022-12-21 16:24:07 -08:00
Anis Elleuch
34167c51d5 trace: Add bootstrap tracing events (#16286) 2022-12-21 15:52:29 -08:00
Harshavardhana
a5f8af4efb serialize replication stats() only when needed (#16280) 2022-12-20 00:07:53 -08:00
Harshavardhana
5a218f38a1 allow retries for transaction lock on startup (#16273) 2022-12-19 22:00:00 -08:00
Anis Elleuch
e57e946206 Do not save credentials in config.json (#16275) 2022-12-19 12:27:06 -08:00
Klaus Post
b4f71362e9 Avoid config migration on every startup (#16278) 2022-12-19 11:10:14 -08:00
Taran Pelkey
ed37b7a9d5 Add API to fetch policy user/group associations (#16239) 2022-12-19 10:37:03 -08:00
Minio Trusted
6511021fbe update helm v5.0.3 2022-12-19 00:53:02 -08:00
mruzicka
6197ba851b helm: Fix post job template (#16236) 2022-12-18 08:01:22 -08:00
Minio Trusted
3ae1f9d852 update helm v5.0.2 2022-12-17 23:57:10 -08:00
orblazer
0db1930f48 helm: add policy to svcacct (#16272) 2022-12-17 22:50:37 -08:00
Anis Elleuch
89db3fdb5d Do not return an error when version disparity is detected (#16269) 2022-12-16 08:52:12 -08:00
Harshavardhana
80fc3a8a52 use newDynamicTimeoutWithOpts() when appropriate (#16266) 2022-12-15 13:11:37 -08:00
Klaus Post
988a2e8fed Faster startup of large distributed systems with latency (#16259) 2022-12-15 08:31:21 -08:00
Harshavardhana
2433698372 fix: remove unnecessary logs for client conn errors (#16261) 2022-12-15 08:25:05 -08:00
Harshavardhana
5d7e8f79ed fix: remove scanner healing with unnecessary logs (#16260) 2022-12-14 16:39:18 -08:00
Harshavardhana
bad229e16e fix: support event name s3:Restore:* (#16257) 2022-12-14 05:12:07 -08:00
Poorna
d37e514733 Cleanup remote targets automatically on replication config removal. (#16221) 2022-12-14 03:24:06 -08:00
Harshavardhana
c73ea27ed7 do not log checksum mismatch error, client received the error (#16246) 2022-12-14 01:57:40 -08:00
Krishnan Parthasarathi
0159b56717 fix: rebalance to account for object's on-disk size (#16240) 2022-12-14 00:15:14 -08:00
Aditya Manthramurthy
9e6cc847f8 Add HTTP2 config option for policy plugin (#16225) 2022-12-13 14:28:48 -08:00
Taran Pelkey
709eb283d9 Add endpoints for managing IAM policies (#15897)
Co-authored-by: Taran <taran@minio.io>
Co-authored-by: ¨taran-p¨ <¨taran@minio.io¨>
Co-authored-by: Aditya Manthramurthy <donatello@users.noreply.github.com>
2022-12-13 12:13:23 -08:00
Anis Elleuch
76dde82b41 Implement STS account info API (#16115) 2022-12-13 08:38:50 -08:00
Anis Elleuch
939c0100a6 log: Do not interpret verbs in object names in console output (#16233) 2022-12-13 08:27:40 -08:00
Aditya Manthramurthy
2d60bf8c50 Refactor HTTP transports (#16222) 2022-12-12 20:31:21 -08:00
Harshavardhana
37e20f6ef2 feat: allow listening specific addrs for API port (#16223) 2022-12-12 18:48:46 -08:00
Minio Trusted
76905b7a67 Update yaml files to latest version RELEASE.2022-12-12T19-27-27Z 2022-12-13 01:10:31 +00:00
Aditya Manthramurthy
a469e6768d Add LDAP DNS SRV record lookup support (#16201) 2022-12-12 11:27:27 -08:00
Harshavardhana
2fc182d8e6 fix: iso8601TimeFormat padding issue for certain nanoseconds (#16207) 2022-12-12 10:28:30 -08:00
Shireesh Anjal
a2cbeaa9e6 Use different subnet public key during dev/test (#16216) 2022-12-12 10:28:15 -08:00
Harshavardhana
444ff20bc5 do not rename multipart failed transactions back to tmp (#16204) 2022-12-12 01:40:29 -08:00
Harshavardhana
20ef5e7a6a avoid double deletes() when no more versions (#16206) 2022-12-12 01:40:04 -08:00
Minio Trusted
c233c8e329 update console to v0.22.2 2022-12-09 21:10:13 -08:00
Aditya Manthramurthy
e06127566d Add IAM API to attach/detach policies for LDAP (#16182) 2022-12-09 13:08:33 -08:00
Harshavardhana
dfe73629a3 fix: delete marker discrepancies via DeleteObject() API (#16195) 2022-12-08 18:15:16 -08:00
Harshavardhana
b03dd1af17 remove hard limit for number of buckets (#16194) 2022-12-08 12:24:03 -08:00
Harshavardhana
4bc367c490 fix: translate tier add errors properly (#16191) 2022-12-08 11:18:07 -08:00
Klaus Post
3eb2d086b2 Replace filepathx with fork (#16192) 2022-12-08 10:42:44 -08:00
Klaus Post
70986b6e6e Add version id to healresult (#16193) 2022-12-08 07:49:10 -08:00
jiuker
8edc2faaa9 reuse sha256 in config GetSettings (#16188) 2022-12-08 03:03:24 -08:00
Klaus Post
ebe395788b feat: Encrypt s3zip file index (#16179) 2022-12-07 14:56:07 -08:00
Klaus Post
12fd6678ee Encrypt checksums with KMS on CompleteMultipartUpload (#16177) 2022-12-07 10:18:18 -08:00
Harshavardhana
90d35b70b4 remove unnecessary logs for truncated XML inputs (#16184) 2022-12-07 08:30:52 -08:00
Minio Trusted
9f71369b67 Update yaml files to latest version RELEASE.2022-12-07T00-56-37Z 2022-12-07 01:30:51 +00:00
Javier Adriel
04ae9058ed Populate end_session_endpoint (#16183) 2022-12-06 16:56:37 -08:00
Aditya Manthramurthy
a30cfdd88f Bump up madmin-go to v2 (#16162) 2022-12-06 13:46:50 -08:00
Anis Elleuch
1bae32dc96 xl: Delete older data-dir when replacing an existing version-id (#16176) 2022-12-06 13:43:18 -08:00
Anis Elleuch
932d2c3c62 Add X-Amz-Request-Id to internode calls (#16146) 2022-12-06 09:27:26 -08:00
Anis Elleuch
52f4124678 Remove go1.18 from Github workflow tests (#16180) 2022-12-06 09:11:20 -08:00
jiuker
8d8d07ac5c use readlock instead of writelock to get heal information (#16175) 2022-12-06 08:08:22 -08:00
Anis Elleuch
44735be38e s3: Return correct error when Version is invalid in policy document (#16178) 2022-12-06 08:07:24 -08:00
dorman
1ef1b2ba50 helm: modify the job create order (#15696) 2022-12-05 19:22:31 -08:00
Timofei Bredov
6fdbd778d5 Add minimal setup command to helm chart's readme (#16165) 2022-12-05 13:22:02 -08:00
Harshavardhana
419f351df3 avoid logging gzipped body in trace output (#16172) 2022-12-05 13:21:27 -08:00
Klaus Post
180d6b30ca Avoid hot loop when lock is cancelled (#16169) 2022-12-05 13:21:14 -08:00
Klaus Post
3fd9059b4e opt: Only stream big data usage caches (#16168) 2022-12-05 13:01:11 -08:00
Klaus Post
a713aee3d5 Run staticcheck on CI (#16170) 2022-12-05 11:18:50 -08:00
Harshavardhana
a9f5b58a01 fix: update the JSON keys for latest 'mc' release (#16171) 2022-12-05 10:28:22 -08:00
Andreas Auernhammer
d882ba2cb4 kms: add support for KES enclaves (#16139)
Signed-off-by: Andreas Auernhammer <hi@aead.dev>
2022-12-04 02:34:24 -08:00
Cesar Celis Hernandez
90e37a8745 Start PR on Enterprise when there is new MinIO Version (#16121) 2022-12-04 02:29:25 -08:00
jiuker
6086f45d25 fix: in disk cache readCacheFileStream should closed upon return (#16138) 2022-12-04 02:28:10 -08:00
Minio Trusted
d6351879f3 Update yaml files to latest version RELEASE.2022-12-02T19-19-22Z 2022-12-02 20:15:36 +00:00
Harshavardhana
5655272f5a ship mc along with MinIO container image (#16156) 2022-12-02 11:19:22 -08:00
Harshavardhana
9b35c72349 fix: a crash in KMS cert reload function (#16158) 2022-12-02 11:19:05 -08:00
Klaus Post
98cffbce03 s3zip: Limit over-read for single file (#16161) 2022-12-02 08:53:24 -08:00
Klaus Post
1cd875de1e Persist updated metadata (#16160) 2022-12-02 08:35:04 -08:00
Harshavardhana
5a8df7efb3 re-implement StorageInfo to be a peer call (#16155) 2022-12-01 14:31:35 -08:00
Anis Elleuch
c84e2939e4 trace: Publish storage layer errors (#16153) 2022-12-01 12:10:54 -08:00
Anis Elleuch
641ab24aec repl: resync orchestrator to use global shared lock (#16154) 2022-12-01 12:10:09 -08:00
Harshavardhana
71133105d7 re-order the top-level config keys for priority (#16150) 2022-12-01 07:50:08 -08:00
Harshavardhana
625677b189 update reedsolomon v1.11.3 (#16149) 2022-11-30 13:39:03 -08:00
Minio Trusted
76943ac05e Update yaml files to latest version RELEASE.2022-11-29T23-40-49Z 2022-11-30 21:13:07 +00:00
Aditya Manthramurthy
87cbd41265 feat: Allow at most one claim based OpenID IDP (#16145) 2022-11-29 15:40:49 -08:00
Harshavardhana
be92cf5959 change dependency from amqp -> amqp091 (RabbitMQ) official (#16142) 2022-11-28 16:05:06 -08:00
Klaus Post
cc1d8f0057 Check for abandoned data when healing (#16122) 2022-11-28 10:20:55 -08:00
Anis Elleuch
1f1dcdce65 move HTTP recorder to an internal library (#16128) 2022-11-28 10:20:27 -08:00
Shireesh Anjal
98a67a3776 Improvements in logger and audit webhooks (#16102) 2022-11-28 08:03:26 -08:00
Andreas Auernhammer
9b1e70e4f9 kms: fix possible deadlock due to nested RLock calls. (#16136)
Signed-off-by: Andreas Auernhammer <hi@aead.dev>
2022-11-28 07:31:07 -08:00
Harshavardhana
09d4f8cd0f avoid serializing decryptKey() every 15mins (#16135)
If the certs are unchanged in an environment where the
cert files are symlinks (e.g. Kubernetes), we still resort
to reloading the certs every 15mins - but we can avoid
rebuilding the kes client instance. This ensures the price
of contending on the lock is paid only when necessary.
2022-11-28 01:14:33 -08:00
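A minimal sketch of the idea, assuming a hypothetical `certsEqual` helper and placeholder file names (this is not the actual MinIO/KES code): compare the reloaded certificate bytes against the one currently in use, and only rebuild the client - and pay for the lock contention - when they actually differ.

```go
package main

import (
	"bytes"
	"crypto/tls"
	"fmt"
)

// certsEqual reports whether two loaded certificates carry the same DER
// chain, letting a periodic reloader skip the expensive client rebuild
// (and the lock contention) when the symlinked cert files are unchanged.
func certsEqual(a, b tls.Certificate) bool {
	if len(a.Certificate) != len(b.Certificate) {
		return false
	}
	for i := range a.Certificate {
		if !bytes.Equal(a.Certificate[i], b.Certificate[i]) {
			return false
		}
	}
	return true
}

func main() {
	// Placeholder paths; error handling elided for brevity.
	current, _ := tls.LoadX509KeyPair("client.crt", "client.key")
	reloaded, _ := tls.LoadX509KeyPair("client.crt", "client.key")

	if certsEqual(current, reloaded) {
		fmt.Println("certs unchanged: keep the existing KES client")
		return
	}
	fmt.Println("certs rotated: rebuild the KES client")
}
```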
Minio Trusted
53cbc020b9 Update yaml files to latest version RELEASE.2022-11-26T22-43-32Z 2022-11-27 04:08:35 +00:00
Poorna
63fc6ba2cd preserve replicated ETag properly on target (#16129) 2022-11-26 14:43:32 -08:00
jiuker
ce53d7f6c2 add disk.Close() in healFreshDisk to indicate idiomatic flow of code (#16124) 2022-11-26 00:26:15 -08:00
jiuker
fe8eed963e fix: wrapped errors do not compare equal in decommissioning (#16113) 2022-11-24 08:00:42 -08:00
Anis Elleuch
97eb7dbf5f notify: Return detailed err msg when connecting to target fails (#16118) 2022-11-24 07:59:19 -08:00
Shireesh Anjal
59f877fc64 fix: Timestamp not added in diagnostics report (#16114) 2022-11-23 07:11:22 -08:00
Klaus Post
f96fe9773c fix: duplicated shared prefix with custom delimiter when listing (#16111) 2022-11-22 08:51:04 -08:00
Anis Elleuch
04948b4d55 fix: checking for stale STS account under site replication (#16109) 2022-11-22 07:26:33 -08:00
Klaus Post
98ba622679 Reduce temporary file clean-up waits (#16110) 2022-11-22 07:23:36 -08:00
Harshavardhana
08103870a5 update single drive setup error message (#16098) 2022-11-18 14:47:38 -08:00
Anis Elleuch
993e586855 config: return XMinioConfigNotFound code for non existing config (#16065) 2022-11-18 10:28:14 -08:00
Harshavardhana
58ec835af0 fix: skip free version ID and marker in metadata equality (#16093) 2022-11-18 05:48:22 -08:00
Harshavardhana
6aea950d74 avoid partID lock validating uploadID exists prematurely (#16086) 2022-11-18 03:09:35 -08:00
Poorna
7198be5be9 bucket resync: persist reset id to bucket metadata (#16088) 2022-11-18 01:39:05 -08:00
Minio Trusted
3661aaf8a1 Update yaml files to latest version RELEASE.2022-11-17T23-20-09Z 2022-11-18 08:51:38 +00:00
563 changed files with 37080 additions and 18353 deletions


@@ -1,5 +0,0 @@
# Config file for markdownlint-cli
MD033:
  allowed_elements:
    - details
    - summary


@@ -20,11 +20,11 @@ jobs:
     runs-on: ${{ matrix.os }}
     strategy:
       matrix:
-        go-version: [1.18.x, 1.19.x]
+        go-version: [1.20.x]
         os: [ubuntu-latest]
     steps:
-      - uses: actions/checkout@629c2de402a417ea7690ca6ce3f33229e27606a5 # v2
-      - uses: actions/setup-go@bfdd3570ce990073878bf10f6b2d79082de49492 # v2
+      - uses: actions/checkout@ac593985615ec2ede58e132d2e21d2b1cbd6127c # v3
+      - uses: actions/setup-go@6edd4406fa81c3da01a34fa6f6343087c207a568 # v3
         with:
           go-version: ${{ matrix.go-version }}
           check-latest: true


@@ -20,20 +20,28 @@ jobs:
     runs-on: ${{ matrix.os }}
     strategy:
       matrix:
-        go-version: [1.18.5b7]
+        go-version: [1.20.x]
         os: [ubuntu-latest]
     steps:
-      - uses: actions/checkout@v2
+      - uses: actions/checkout@v3
       - uses: actions/setup-go@v3
         with:
           go-version: ${{ matrix.go-version }}
       - name: Set up Docker Buildx
         uses: docker/setup-buildx-action@v2
       - name: Setup dockerfile for build test
         run: |
-          echo "FROM us-docker.pkg.dev/google.com/api-project-999119582588/go-boringcrypto/golang:${{ matrix.go-version }}" > Dockerfile.fips.test
-          echo "COPY . /minio" >> Dockerfile.fips.test
-          echo "WORKDIR /minio" >> Dockerfile.fips.test
-          echo "RUN make" >> Dockerfile.fips.test
+          GO_VERSION=$(go version | cut -d ' ' -f 3 | sed 's/go//')
+          echo Detected go version $GO_VERSION
+          cat > Dockerfile.fips.test <<EOF
+          FROM golang:${GO_VERSION}
+          COPY . /minio
+          WORKDIR /minio
+          ENV GOEXPERIMENT=boringcrypto
+          RUN make
+          EOF
       - name: Build
         uses: docker/build-push-action@v3
@@ -48,4 +56,4 @@ jobs:
       - name: Test binary
         run: |
           docker run --rm minio/fips-test:latest ./minio --version
-          docker run --rm -i minio/fips-test:latest /bin/bash -c 'go tool nm ./minio' | grep -q FIPS
+          docker run --rm -i minio/fips-test:latest /bin/bash -c 'go tool nm ./minio | grep FIPS | grep -q FIPS'


@@ -20,10 +20,10 @@ jobs:
     runs-on: ${{ matrix.os }}
     strategy:
       matrix:
-        go-version: [1.18.x, 1.19.x]
+        go-version: [1.20.x]
         os: [ubuntu-latest]
     steps:
-      - uses: actions/checkout@v2
+      - uses: actions/checkout@v3
       - uses: actions/setup-go@v3
         with:
           go-version: ${{ matrix.go-version }}


@@ -20,10 +20,10 @@ jobs:
     runs-on: ${{ matrix.os }}
     strategy:
       matrix:
-        go-version: [1.18.x, 1.19.x]
+        go-version: [1.20.x]
         os: [ubuntu-latest, windows-latest]
     steps:
-      - uses: actions/checkout@v2
+      - uses: actions/checkout@v3
       - uses: actions/setup-go@v3
         with:
           go-version: ${{ matrix.go-version }}


@@ -20,10 +20,10 @@ jobs:
     runs-on: ${{ matrix.os }}
     strategy:
       matrix:
-        go-version: [1.18.x, 1.19.x]
+        go-version: [1.20.x]
         os: [ubuntu-latest]
     steps:
-      - uses: actions/checkout@v2
+      - uses: actions/checkout@v3
       - uses: actions/setup-go@v3
         with:
           go-version: ${{ matrix.go-version }}

.github/workflows/helm-lint.yml (new file, 33 lines)

@@ -0,0 +1,33 @@
name: Helm Chart linting

on:
  pull_request:
    branches:
      - master

# This ensures that previous jobs for the PR are canceled when the PR is
# updated.
concurrency:
  group: ${{ github.workflow }}-${{ github.head_ref }}
  cancel-in-progress: true

permissions:
  contents: read

permissions:
  contents: read

jobs:
  release:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout
        uses: actions/checkout@v3
      - name: Install Helm
        uses: azure/setup-helm@v3
      - name: Run helm lint
        run: |
          cd helm/minio
          helm lint .


@@ -61,7 +61,7 @@ jobs:
       # are turned off - i.e. if ldap="", then ldap server is not enabled for
       # the tests.
       matrix:
-        go-version: [1.18.x]
+        go-version: [1.20.x]
         ldap: ["", "localhost:389"]
         etcd: ["", "http://localhost:2379"]
         openid: ["", "http://127.0.0.1:5556/dex"]
@@ -75,7 +75,7 @@
           openid: "http://127.0.0.1:5556/dex"
     steps:
-      - uses: actions/checkout@v2
+      - uses: actions/checkout@v3
       - uses: actions/setup-go@v3
         with:
           go-version: ${{ matrix.go-version }}


@@ -1,30 +0,0 @@
name: Markdown Linter

on:
  pull_request:
    branches:
      - master

# This ensures that previous jobs for the PR are canceled when the PR is
# updated.
concurrency:
  group: ${{ github.workflow }}-${{ github.head_ref }}
  cancel-in-progress: true

permissions:
  contents: read

jobs:
  lint:
    name: Lint all docs
    runs-on: ubuntu-latest
    steps:
      - name: Check out code
        uses: actions/checkout@v2
      - name: Lint all docs
        run: |
          npm install -g markdownlint-cli
          markdownlint --fix '**/*.md' \
            --config /home/runner/work/minio/minio/.github/markdown-lint-cfg.yaml \
            --disable MD013 MD040 MD051

.github/workflows/mint.yml (new file, 57 lines)

@@ -0,0 +1,57 @@
name: Mint Tests

on:
  pull_request:
    branches:
      - master

# This ensures that previous jobs for the PR are canceled when the PR is
# updated.
concurrency:
  group: ${{ github.workflow }}-${{ github.head_ref }}
  cancel-in-progress: true

permissions:
  contents: read

jobs:
  mint-test:
    runs-on: mint
    timeout-minutes: 120
    steps:
      - name: cleanup #https://github.com/actions/checkout/issues/273
        run: |
          sudo -S rm -rf ${GITHUB_WORKSPACE}
          mkdir ${GITHUB_WORKSPACE}
      - name: checkout-step
        uses: actions/checkout@v3
      - name: setup-go-step
        uses: actions/setup-go@v2
        with:
          go-version: 1.20.x
      - name: github sha short
        id: vars
        run: echo "sha_short=$(git rev-parse --short HEAD)" >> $GITHUB_OUTPUT
      - name: build-minio
        run: |
          make install
          docker build . -t "minio/minio:${{ steps.vars.outputs.sha_short }}"
      - name: compress and encrypt
        run: |
          ${GITHUB_WORKSPACE}/.github/workflows/run-mint.sh "compress-encrypt" "minio" "minio123" "${{ steps.vars.outputs.sha_short }}"
      - name: multiple pools
        run: |
          ${GITHUB_WORKSPACE}/.github/workflows/run-mint.sh "pools" "minio" "minio123" "${{ steps.vars.outputs.sha_short }}"
      - name: standalone erasure
        run: |
          ${GITHUB_WORKSPACE}/.github/workflows/run-mint.sh "erasure" "minio" "minio123" "${{ steps.vars.outputs.sha_short }}"
          docker rmi -f minio/minio:${{ steps.vars.outputs.sha_short }}


@@ -0,0 +1,80 @@
version: '3.7'

# Settings and configurations that are common for all containers
x-minio-common: &minio-common
  image: minio/minio:${JOB_NAME}
  command: server --console-address ":9001" http://minio{1...4}/cdata{1...2}
  expose:
    - "9000"
    - "9001"
  environment:
    MINIO_CI_CD: "on"
    MINIO_ROOT_USER: "minio"
    MINIO_ROOT_PASSWORD: "minio123"
    MINIO_COMPRESSION_ENABLE: "on"
    MINIO_COMPRESSION_MIME_TYPES: "*"
    MINIO_COMPRESSION_ALLOW_ENCRYPTION: "on"
    MINIO_KMS_SECRET_KEY: "my-minio-key:OSMM+vkKUTCvQs9YL/CVMIMt43HFhkUpqJxTmGl6rYw="
  healthcheck:
    test: ["CMD", "curl", "-f", "http://localhost:9000/minio/health/live"]
    interval: 30s
    timeout: 20s
    retries: 3

# starts 4 docker containers running minio server instances.
# using nginx reverse proxy, load balancing, you can access
# it through port 9000.
services:
  minio1:
    <<: *minio-common
    hostname: minio1
    volumes:
      - cdata1-1:/cdata1
      - cdata1-2:/cdata2
  minio2:
    <<: *minio-common
    hostname: minio2
    volumes:
      - cdata2-1:/cdata1
      - cdata2-2:/cdata2
  minio3:
    <<: *minio-common
    hostname: minio3
    volumes:
      - cdata3-1:/cdata1
      - cdata3-2:/cdata2
  minio4:
    <<: *minio-common
    hostname: minio4
    volumes:
      - cdata4-1:/cdata1
      - cdata4-2:/cdata2
  nginx:
    image: nginx:1.19.2-alpine
    hostname: nginx
    volumes:
      - ./nginx-4-node.conf:/etc/nginx/nginx.conf:ro
    ports:
      - "9000:9000"
      - "9001:9001"
    depends_on:
      - minio1
      - minio2
      - minio3
      - minio4

## By default this config uses default local driver,
## For custom volumes replace with volume driver configuration.
volumes:
  cdata1-1:
  cdata1-2:
  cdata2-1:
  cdata2-2:
  cdata3-1:
  cdata3-2:
  cdata4-1:
  cdata4-2:


@@ -0,0 +1,51 @@
version: '3.7'

# Settings and configurations that are common for all containers
x-minio-common: &minio-common
  image: minio/minio:${JOB_NAME}
  command: server --console-address ":9001" edata{1...4}
  expose:
    - "9000"
    - "9001"
  environment:
    MINIO_CI_CD: "on"
    MINIO_ROOT_USER: "minio"
    MINIO_ROOT_PASSWORD: "minio123"
    MINIO_KMS_SECRET_KEY: "my-minio-key:OSMM+vkKUTCvQs9YL/CVMIMt43HFhkUpqJxTmGl6rYw="
  healthcheck:
    test: ["CMD", "curl", "-f", "http://localhost:9000/minio/health/live"]
    interval: 30s
    timeout: 20s
    retries: 3

# starts a single docker container running a standalone erasure-coded
# minio server instance. using nginx reverse proxy, you can access
# it through port 9000.
services:
  minio1:
    <<: *minio-common
    hostname: minio1
    volumes:
      - edata1-1:/edata1
      - edata1-2:/edata2
      - edata1-3:/edata3
      - edata1-4:/edata4
  nginx:
    image: nginx:1.19.2-alpine
    hostname: nginx
    volumes:
      - ./nginx-1-node.conf:/etc/nginx/nginx.conf:ro
    ports:
      - "9000:9000"
      - "9001:9001"
    depends_on:
      - minio1

## By default this config uses default local driver,
## For custom volumes replace with volume driver configuration.
volumes:
  edata1-1:
  edata1-2:
  edata1-3:
  edata1-4:

.github/workflows/mint/minio-pools.yaml (new file, 117 lines)

@@ -0,0 +1,117 @@
version: '3.7'

# Settings and configurations that are common for all containers
x-minio-common: &minio-common
  image: minio/minio:${JOB_NAME}
  command: server --console-address ":9001" http://minio{1...4}/pdata{1...2} http://minio{5...8}/pdata{1...2}
  expose:
    - "9000"
    - "9001"
  environment:
    MINIO_CI_CD: "on"
    MINIO_ROOT_USER: "minio"
    MINIO_ROOT_PASSWORD: "minio123"
    MINIO_KMS_SECRET_KEY: "my-minio-key:OSMM+vkKUTCvQs9YL/CVMIMt43HFhkUpqJxTmGl6rYw="
  healthcheck:
    test: ["CMD", "curl", "-f", "http://localhost:9000/minio/health/live"]
    interval: 30s
    timeout: 20s
    retries: 3

# starts 8 docker containers running minio server instances.
# using nginx reverse proxy, load balancing, you can access
# it through port 9000.
services:
  minio1:
    <<: *minio-common
    hostname: minio1
    volumes:
      - pdata1-1:/pdata1
      - pdata1-2:/pdata2
  minio2:
    <<: *minio-common
    hostname: minio2
    volumes:
      - pdata2-1:/pdata1
      - pdata2-2:/pdata2
  minio3:
    <<: *minio-common
    hostname: minio3
    volumes:
      - pdata3-1:/pdata1
      - pdata3-2:/pdata2
  minio4:
    <<: *minio-common
    hostname: minio4
    volumes:
      - pdata4-1:/pdata1
      - pdata4-2:/pdata2
  minio5:
    <<: *minio-common
    hostname: minio5
    volumes:
      - pdata5-1:/pdata1
      - pdata5-2:/pdata2
  minio6:
    <<: *minio-common
    hostname: minio6
    volumes:
      - pdata6-1:/pdata1
      - pdata6-2:/pdata2
  minio7:
    <<: *minio-common
    hostname: minio7
    volumes:
      - pdata7-1:/pdata1
      - pdata7-2:/pdata2
  minio8:
    <<: *minio-common
    hostname: minio8
    volumes:
      - pdata8-1:/pdata1
      - pdata8-2:/pdata2
  nginx:
    image: nginx:1.19.2-alpine
    hostname: nginx
    volumes:
      - ./nginx-8-node.conf:/etc/nginx/nginx.conf:ro
    ports:
      - "9000:9000"
      - "9001:9001"
    depends_on:
      - minio1
      - minio2
      - minio3
      - minio4
      - minio5
      - minio6
      - minio7
      - minio8

## By default this config uses default local driver,
## For custom volumes replace with volume driver configuration.
volumes:
  pdata1-1:
  pdata1-2:
  pdata2-1:
  pdata2-2:
  pdata3-1:
  pdata3-2:
  pdata4-1:
  pdata4-2:
  pdata5-1:
  pdata5-2:
  pdata6-1:
  pdata6-2:
  pdata7-1:
  pdata7-2:
  pdata8-1:
  pdata8-2:

.github/workflows/mint/nginx-1-node.conf (new file, 100 lines)

@@ -0,0 +1,100 @@
user nginx;
worker_processes auto;

error_log /var/log/nginx/error.log warn;
pid /var/run/nginx.pid;

events {
    worker_connections 4096;
}

http {
    include /etc/nginx/mime.types;
    default_type application/octet-stream;

    log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                    '$status $body_bytes_sent "$http_referer" '
                    '"$http_user_agent" "$http_x_forwarded_for"';

    access_log /var/log/nginx/access.log main;
    sendfile on;
    keepalive_timeout 65;

    # include /etc/nginx/conf.d/*.conf;

    upstream minio {
        server minio1:9000;
    }

    upstream console {
        ip_hash;
        server minio1:9001;
    }

    server {
        listen 9000;
        listen [::]:9000;
        server_name localhost;

        # To allow special characters in headers
        ignore_invalid_headers off;
        # Allow any size file to be uploaded.
        # Set to a value such as 1000m; to restrict file size to a specific value
        client_max_body_size 0;
        # To disable buffering
        proxy_buffering off;
        proxy_request_buffering off;

        location / {
            proxy_set_header Host $http_host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header X-Forwarded-Proto $scheme;

            proxy_connect_timeout 300;
            # Default is HTTP/1, keepalive is only enabled in HTTP/1.1
            proxy_http_version 1.1;
            proxy_set_header Connection "";
            chunked_transfer_encoding off;

            proxy_pass http://minio;
        }
    }

    server {
        listen 9001;
        listen [::]:9001;
        server_name localhost;

        # To allow special characters in headers
        ignore_invalid_headers off;
        # Allow any size file to be uploaded.
        # Set to a value such as 1000m; to restrict file size to a specific value
        client_max_body_size 0;
        # To disable buffering
        proxy_buffering off;
        proxy_request_buffering off;

        location / {
            proxy_set_header Host $http_host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header X-Forwarded-Proto $scheme;
            proxy_set_header X-NginX-Proxy true;

            # This is necessary to pass the correct IP to be hashed
            real_ip_header X-Real-IP;

            proxy_connect_timeout 300;

            # To support websocket
            proxy_http_version 1.1;
            proxy_set_header Upgrade $http_upgrade;
            proxy_set_header Connection "upgrade";
            chunked_transfer_encoding off;

            proxy_pass http://console;
        }
    }
}

.github/workflows/mint/nginx-4-node.conf (new file, 106 lines)

@@ -0,0 +1,106 @@
user nginx;
worker_processes auto;

error_log /var/log/nginx/error.log warn;
pid /var/run/nginx.pid;

events {
    worker_connections 4096;
}

http {
    include /etc/nginx/mime.types;
    default_type application/octet-stream;

    log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                    '$status $body_bytes_sent "$http_referer" '
                    '"$http_user_agent" "$http_x_forwarded_for"';

    access_log /var/log/nginx/access.log main;
    sendfile on;
    keepalive_timeout 65;

    # include /etc/nginx/conf.d/*.conf;

    upstream minio {
        server minio1:9000;
        server minio2:9000;
        server minio3:9000;
        server minio4:9000;
    }

    upstream console {
        ip_hash;
        server minio1:9001;
        server minio2:9001;
        server minio3:9001;
        server minio4:9001;
    }

    server {
        listen 9000;
        listen [::]:9000;
        server_name localhost;

        # To allow special characters in headers
        ignore_invalid_headers off;
        # Allow any size file to be uploaded.
        # Set to a value such as 1000m; to restrict file size to a specific value
        client_max_body_size 0;
        # To disable buffering
        proxy_buffering off;
        proxy_request_buffering off;

        location / {
            proxy_set_header Host $http_host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header X-Forwarded-Proto $scheme;

            proxy_connect_timeout 300;
            # Default is HTTP/1, keepalive is only enabled in HTTP/1.1
            proxy_http_version 1.1;
            proxy_set_header Connection "";
            chunked_transfer_encoding off;

            proxy_pass http://minio;
        }
    }

    server {
        listen 9001;
        listen [::]:9001;
        server_name localhost;

        # To allow special characters in headers
        ignore_invalid_headers off;
        # Allow any size file to be uploaded.
        # Set to a value such as 1000m; to restrict file size to a specific value
        client_max_body_size 0;
        # To disable buffering
        proxy_buffering off;
        proxy_request_buffering off;

        location / {
            proxy_set_header Host $http_host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header X-Forwarded-Proto $scheme;
            proxy_set_header X-NginX-Proxy true;

            # This is necessary to pass the correct IP to be hashed
            real_ip_header X-Real-IP;

            proxy_connect_timeout 300;

            # To support websocket
            proxy_http_version 1.1;
            proxy_set_header Upgrade $http_upgrade;
            proxy_set_header Connection "upgrade";
            chunked_transfer_encoding off;

            proxy_pass http://console;
        }
    }
}

.github/workflows/mint/nginx-8-node.conf (new file, 114 lines)

@@ -0,0 +1,114 @@
user nginx;
worker_processes auto;

error_log /var/log/nginx/error.log warn;
pid /var/run/nginx.pid;

events {
    worker_connections 4096;
}

http {
    include /etc/nginx/mime.types;
    default_type application/octet-stream;

    log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                    '$status $body_bytes_sent "$http_referer" '
                    '"$http_user_agent" "$http_x_forwarded_for"';

    access_log /var/log/nginx/access.log main;
    sendfile on;
    keepalive_timeout 65;

    # include /etc/nginx/conf.d/*.conf;

    upstream minio {
        server minio1:9000;
        server minio2:9000;
        server minio3:9000;
        server minio4:9000;
        server minio5:9000;
        server minio6:9000;
        server minio7:9000;
        server minio8:9000;
    }

    upstream console {
        ip_hash;
        server minio1:9001;
        server minio2:9001;
        server minio3:9001;
        server minio4:9001;
        server minio5:9001;
        server minio6:9001;
        server minio7:9001;
        server minio8:9001;
    }

    server {
        listen 9000;
        listen [::]:9000;
        server_name localhost;

        # To allow special characters in headers
        ignore_invalid_headers off;
        # Allow any size file to be uploaded.
        # Set to a value such as 1000m; to restrict file size to a specific value
        client_max_body_size 0;
        # To disable buffering
        proxy_buffering off;
        proxy_request_buffering off;

        location / {
            proxy_set_header Host $http_host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header X-Forwarded-Proto $scheme;

            proxy_connect_timeout 300;
            # Default is HTTP/1, keepalive is only enabled in HTTP/1.1
            proxy_http_version 1.1;
            proxy_set_header Connection "";
            chunked_transfer_encoding off;

            proxy_pass http://minio;
        }
    }

    server {
        listen 9001;
        listen [::]:9001;
        server_name localhost;

        # To allow special characters in headers
        ignore_invalid_headers off;
        # Allow any size file to be uploaded.
        # Set to a value such as 1000m; to restrict file size to a specific value
        client_max_body_size 0;
        # To disable buffering
        proxy_buffering off;
        proxy_request_buffering off;

        location / {
            proxy_set_header Host $http_host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header X-Forwarded-Proto $scheme;
            proxy_set_header X-NginX-Proxy true;

            # This is necessary to pass the correct IP to be hashed
            real_ip_header X-Real-IP;

            proxy_connect_timeout 300;

            # To support websocket
            proxy_http_version 1.1;
            proxy_set_header Upgrade $http_upgrade;
            proxy_set_header Connection "upgrade";
            chunked_transfer_encoding off;

            proxy_pass http://console;
        }
    }
}

.github/workflows/mint/nginx.conf (new file, 106 lines)

@@ -0,0 +1,106 @@
user nginx;
worker_processes auto;

error_log /var/log/nginx/error.log warn;
pid /var/run/nginx.pid;

events {
    worker_connections 4096;
}

http {
    include /etc/nginx/mime.types;
    default_type application/octet-stream;

    log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                    '$status $body_bytes_sent "$http_referer" '
                    '"$http_user_agent" "$http_x_forwarded_for"';

    access_log /var/log/nginx/access.log main;
    sendfile on;
    keepalive_timeout 65;

    # include /etc/nginx/conf.d/*.conf;

    upstream minio {
        server minio1:9000;
        server minio2:9000;
        server minio3:9000;
        server minio4:9000;
    }

    upstream console {
        ip_hash;
        server minio1:9001;
        server minio2:9001;
        server minio3:9001;
        server minio4:9001;
    }

    server {
        listen 9000;
        listen [::]:9000;
        server_name localhost;

        # To allow special characters in headers
        ignore_invalid_headers off;
        # Allow any size file to be uploaded.
        # Set to a value such as 1000m; to restrict file size to a specific value
        client_max_body_size 0;
        # To disable buffering
        proxy_buffering off;
        proxy_request_buffering off;

        location / {
            proxy_set_header Host $http_host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header X-Forwarded-Proto $scheme;

            proxy_connect_timeout 300;
            # Default is HTTP/1, keepalive is only enabled in HTTP/1.1
            proxy_http_version 1.1;
            proxy_set_header Connection "";
            chunked_transfer_encoding off;

            proxy_pass http://minio;
        }
    }

    server {
        listen 9001;
        listen [::]:9001;
        server_name localhost;

        # To allow special characters in headers
        ignore_invalid_headers off;
        # Allow any size file to be uploaded.
        # Set to a value such as 1000m; to restrict file size to a specific value
        client_max_body_size 0;
        # To disable buffering
        proxy_buffering off;
        proxy_request_buffering off;

        location / {
            proxy_set_header Host $http_host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header X-Forwarded-Proto $scheme;
            proxy_set_header X-NginX-Proxy true;

            # This is necessary to pass the correct IP to be hashed
            real_ip_header X-Real-IP;

            proxy_connect_timeout 300;

            # To support websocket
            proxy_http_version 1.1;
            proxy_set_header Upgrade $http_upgrade;
            proxy_set_header Connection "upgrade";
            chunked_transfer_encoding off;

            proxy_pass http://console;
        }
    }
}


@@ -21,10 +21,10 @@ jobs:
strategy:
matrix:
go-version: [1.18.x, 1.19.x]
go-version: [1.20.x]
steps:
- uses: actions/checkout@v2
- uses: actions/checkout@v3
- uses: actions/setup-go@v3
with:
go-version: ${{ matrix.go-version }}

.github/workflows/root-disable.yml vendored Normal file

@@ -0,0 +1,34 @@
name: Root lockdown tests
on:
pull_request:
branches:
- master
# This ensures that previous jobs for the PR are canceled when the PR is
# updated.
concurrency:
group: ${{ github.workflow }}-${{ github.head_ref }}
cancel-in-progress: true
permissions:
contents: read
jobs:
build:
name: Go ${{ matrix.go-version }} on ${{ matrix.os }}
runs-on: ${{ matrix.os }}
strategy:
matrix:
go-version: [1.20.x]
os: [ubuntu-latest]
steps:
- uses: actions/checkout@v3
- uses: actions/setup-go@v3
with:
go-version: ${{ matrix.go-version }}
check-latest: true
- name: Start root lockdown tests
run: |
make test-root-disable

.github/workflows/run-mint.sh vendored Executable file

@@ -0,0 +1,41 @@
#!/bin/bash
set -ex
export MODE="$1"
export ACCESS_KEY="$2"
export SECRET_KEY="$3"
export JOB_NAME="$4"
export MINT_MODE="full"
docker system prune -f || true
docker volume prune -f || true
docker volume rm $(docker volume ls -f dangling=true) || true
## change working directory
cd .github/workflows/mint
docker-compose -f minio-${MODE}.yaml up -d
sleep 5m
# Stop two nodes, one of each pool, to check that all S3 calls work while quorum is still there
[ "${MODE}" == "pools" ] && docker-compose -f minio-${MODE}.yaml stop minio{2,6}
docker run --rm --net=mint_default \
--name="mint-${MODE}-${JOB_NAME}" \
-e SERVER_ENDPOINT="nginx:9000" \
-e ACCESS_KEY="${ACCESS_KEY}" \
-e SECRET_KEY="${SECRET_KEY}" \
-e ENABLE_HTTPS=0 \
-e MINT_MODE="${MINT_MODE}" \
docker.io/minio/mint:edge
docker-compose -f minio-${MODE}.yaml down || true
sleep 10s
docker system prune -f || true
docker volume prune -f || true
docker volume rm $(docker volume ls -f dangling=true) || true
## change working directory
cd ../../../

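The script takes its arguments positionally: MODE selects the compose file (minio-${MODE}.yaml), followed by the access key, secret key, and a job name used to label the mint container. A hypothetical invocation from the repository root (so the cd into .github/workflows/mint resolves), with placeholder credentials:

```sh
./.github/workflows/run-mint.sh pools minioadmin minioadmin local-run
```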
.github/workflows/shfmt.yml vendored Normal file

@@ -0,0 +1,22 @@
name: Shell formatting checks
on:
pull_request:
branches:
- master
permissions:
contents: read
jobs:
build:
name: runner / shfmt
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v3
- uses: luizm/action-sh-checker@master
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
SHFMT_OPTS: "-s"
with:
sh_checker_shellcheck_disable: true # disable for now

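The same formatting check can be reproduced locally; -s is the simplify option the workflow passes via SHFMT_OPTS, and -d prints a diff rather than rewriting files:

```sh
shfmt -s -d .   # show formatting differences
shfmt -s -w .   # rewrite scripts in place
```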

@@ -20,11 +20,11 @@ jobs:
runs-on: ${{ matrix.os }}
strategy:
matrix:
go-version: [1.18.x, 1.19.x]
go-version: [1.20.x]
os: [ubuntu-latest]
steps:
- uses: actions/checkout@v1
- uses: actions/checkout@v3
- uses: actions/setup-go@v3
with:
go-version: ${{ matrix.go-version }}


@@ -6,6 +6,10 @@ on:
push:
branches:
- master
permissions:
contents: read # to fetch code (actions/checkout)
jobs:
vulncheck:
name: Analysis
@@ -16,7 +20,7 @@ jobs:
- name: Set up Go
uses: actions/setup-go@v3
with:
go-version: 1.19.x
go-version: 1.19.10
check-latest: true
- name: Get official govulncheck
run: go install golang.org/x/vuln/cmd/govulncheck@latest

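The workflow pins Go to 1.19.10 and installs the official scanner; the scan itself is then typically run over all packages:

```sh
go install golang.org/x/vuln/cmd/govulncheck@latest
govulncheck ./...
```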
.gitignore vendored

@@ -38,3 +38,6 @@ docs/debugging/s3-check-md5/s3-check-md5
docs/debugging/hash-set/hash-set
docs/debugging/healing-bin/healing-bin
docs/debugging/inspect/inspect
docs/debugging/pprofgoparser/pprofgoparser
.bin/
*.gz


@@ -1,39 +1,34 @@
linters-settings:
gofumpt:
lang-version: "1.18"
simplify: true
misspell:
locale: US
staticcheck:
checks: ['all', '-ST1005', '-ST1000', '-SA4000', '-SA9004', '-SA1019', '-SA1008', '-U1000', '-ST1016']
linters:
disable-all: true
enable:
- typecheck
- goimports
- misspell
- govet
- revive
- ineffassign
- gomodguard
- durationcheck
- gocritic
- gofmt
- gofumpt
- goimports
- gomodguard
- govet
- ineffassign
- misspell
- revive
- staticcheck
- tenv
- typecheck
- unconvert
- unused
- gocritic
- gofumpt
- tenv
- durationcheck
issues:
exclude-use-default: false
exclude:
- should have a package comment
- error strings should not be capitalized or end with punctuation or a newline
# todo fix these when we get enough time.
- "singleCaseSwitch: should rewrite switch statement to if statement"
- "unlambda: replace"
- "captLocal:"
- "ifElseChain:"
- "elseif:"
service:
golangci-lint-version: 1.43.0 # use the fixed version to not introduce new linters unexpectedly
- should have a package comment
- error strings should not be capitalized or end with punctuation or a newline

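Whatever the final linter set, the config is consumed through the same flags the repository's Makefile pins:

```sh
golangci-lint run --build-tags kqueue --timeout=10m --config ./.golangci.yml
```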
CREDITS

File diff suppressed because it is too large


@@ -1,4 +1,4 @@
FROM registry.access.redhat.com/ubi8/ubi-minimal:8.6
FROM registry.access.redhat.com/ubi8/ubi-minimal:8.8
ARG RELEASE
@@ -27,9 +27,8 @@ COPY LICENSE /licenses/LICENSE
RUN \
microdnf clean all && \
microdnf update --nodocs && \
microdnf install curl ca-certificates shadow-utils util-linux --nodocs && \
rpm -Uvh https://dl.fedoraproject.org/pub/epel/epel-release-latest-8.noarch.rpm && \
microdnf install minisign --nodocs && \
microdnf install curl ca-certificates shadow-utils util-linux gzip lsof tar net-tools iproute iputils jq minisign --nodocs && \
mkdir -p /opt/bin && chmod -R 777 /opt/bin && \
curl -s -q https://dl.min.io/server/minio/hotfixes/linux-amd64/archive/minio.${RELEASE} -o /opt/bin/minio && \
curl -s -q https://dl.min.io/server/minio/hotfixes/linux-amd64/archive/minio.${RELEASE}.sha256sum -o /opt/bin/minio.sha256sum && \


@@ -1,4 +1,4 @@
FROM registry.access.redhat.com/ubi8/ubi-minimal:8.6
FROM registry.access.redhat.com/ubi8/ubi-minimal:8.8
ARG TARGETARCH
@@ -29,13 +29,13 @@ COPY LICENSE /licenses/LICENSE
RUN \
microdnf clean all && \
microdnf update --nodocs && \
microdnf install curl ca-certificates shadow-utils util-linux --nodocs && \
rpm -Uvh https://dl.fedoraproject.org/pub/epel/epel-release-latest-8.noarch.rpm && \
microdnf install minisign --nodocs && \
microdnf install curl ca-certificates shadow-utils util-linux gzip lsof tar net-tools iproute iputils jq minisign --nodocs && \
mkdir -p /opt/bin && chmod -R 777 /opt/bin && \
curl -s -q https://dl.min.io/server/minio/release/linux-${TARGETARCH}/archive/minio.${RELEASE} -o /opt/bin/minio && \
curl -s -q https://dl.min.io/server/minio/release/linux-${TARGETARCH}/archive/minio.${RELEASE}.sha256sum -o /opt/bin/minio.sha256sum && \
curl -s -q https://dl.min.io/server/minio/release/linux-${TARGETARCH}/archive/minio.${RELEASE}.minisig -o /opt/bin/minio.minisig && \
curl -s -q https://dl.min.io/client/mc/release/linux-${TARGETARCH}/mc -o /opt/bin/mc && \
microdnf clean all && \
chmod +x /opt/bin/minio && \
chmod +x /usr/bin/docker-entrypoint.sh && \


@@ -1,4 +1,4 @@
FROM registry.access.redhat.com/ubi8/ubi-minimal:8.6
FROM registry.access.redhat.com/ubi8/ubi-minimal:8.8
ARG TARGETARCH
@@ -29,9 +29,8 @@ COPY LICENSE /licenses/LICENSE
RUN \
microdnf clean all && \
microdnf update --nodocs && \
microdnf install curl ca-certificates shadow-utils util-linux --nodocs && \
rpm -Uvh https://dl.fedoraproject.org/pub/epel/epel-release-latest-8.noarch.rpm && \
microdnf install minisign --nodocs && \
microdnf install curl ca-certificates shadow-utils util-linux gzip lsof tar net-tools iproute iputils jq minisign --nodocs && \
mkdir -p /opt/bin && chmod -R 777 /opt/bin && \
curl -s -q https://dl.min.io/server/minio/release/linux-${TARGETARCH}/archive/minio.${RELEASE}.fips -o /opt/bin/minio && \
curl -s -q https://dl.min.io/server/minio/release/linux-${TARGETARCH}/archive/minio.${RELEASE}.fips.sha256sum -o /opt/bin/minio.sha256sum && \


@@ -8,6 +8,10 @@ GOOS := $(shell go env GOOS)
VERSION ?= $(shell git describe --tags)
TAG ?= "minio/minio:$(VERSION)"
GOLANGCI_VERSION = v1.51.2
GOLANGCI_DIR = .bin/golangci/$(GOLANGCI_VERSION)
GOLANGCI = $(GOLANGCI_DIR)/golangci-lint
all: build
checks: ## check dependencies
@@ -19,28 +23,36 @@ help: ## print this help
getdeps: ## fetch necessary dependencies
@mkdir -p ${GOPATH}/bin
@echo "Installing golangci-lint" && curl -sSfL https://raw.githubusercontent.com/golangci/golangci-lint/master/install.sh | sh -s -- -b $(GOPATH)/bin
@echo "Installing msgp" && go install -v github.com/tinylib/msgp@f3635b96e4838a6c773babb65ef35297fe5fe2f9
@echo "Installing golangci-lint" && curl -sSfL https://raw.githubusercontent.com/golangci/golangci-lint/master/install.sh | sh -s -- -b $(GOLANGCI_DIR) $(GOLANGCI_VERSION)
@echo "Installing msgp" && go install -v github.com/tinylib/msgp@v1.1.7
@echo "Installing stringer" && go install -v golang.org/x/tools/cmd/stringer@latest
crosscompile: ## cross compile minio
@(env bash $(PWD)/buildscripts/cross-compile.sh)
verifiers: getdeps lint check-gen
verifiers: lint check-gen
check-gen: ## check for updated autogenerated files
@go generate ./... >/dev/null
@(! git diff --name-only | grep '_gen.go$$') || (echo "Non-committed changes in auto-generated code is detected, please commit them to proceed." && false)
lint: ## runs golangci-lint suite of linters
lint: getdeps ## runs golangci-lint suite of linters
@echo "Running $@ check"
@${GOPATH}/bin/golangci-lint run --build-tags kqueue --timeout=10m --config ./.golangci.yml
@$(GOLANGCI) run --build-tags kqueue --timeout=10m --config ./.golangci.yml
lint-fix: getdeps ## runs golangci-lint suite of linters with automatic fixes
@echo "Running $@ check"
@$(GOLANGCI) run --build-tags kqueue --timeout=10m --config ./.golangci.yml --fix
check: test
test: verifiers build ## builds minio, runs linters, tests
@echo "Running unit tests"
@MINIO_API_REQUESTS_MAX=10000 CGO_ENABLED=0 go test -tags kqueue ./...
test-root-disable: install
@echo "Running minio root lockdown tests"
@env bash $(PWD)/buildscripts/disable-root.sh
test-decom: install
@echo "Running minio decom tests"
@env bash $(PWD)/docs/distributed/decom.sh
@@ -62,11 +74,21 @@ test-iam: build ## verify IAM (external IDP, etcd backends)
@echo "Running tests for IAM (external IDP, etcd backends) with -race"
@MINIO_API_REQUESTS_MAX=10000 GORACE=history_size=7 CGO_ENABLED=1 go test -race -tags kqueue -v -run TestIAM* ./cmd
test-replication: install ## verify multi site replication
@echo "Running tests for replicating three sites"
@(env bash $(PWD)/docs/bucket/replication/setup_3site_replication.sh)
test-sio-error:
@(env bash $(PWD)/docs/bucket/replication/sio-error.sh)
test-replication-2site:
@(env bash $(PWD)/docs/bucket/replication/setup_2site_existing_replication.sh)
test-replication-3site:
@(env bash $(PWD)/docs/bucket/replication/setup_3site_replication.sh)
test-delete-replication:
@(env bash $(PWD)/docs/bucket/replication/delete-replication.sh)
test-replication: install test-replication-2site test-replication-3site test-delete-replication test-sio-error ## verify multi site replication
@echo "Running tests for replicating three sites"
test-site-replication-ldap: install ## verify automatic site replication
@echo "Running tests for automatic site replication of IAM (with LDAP)"
@(env bash $(PWD)/docs/site-replication/run-multi-site-ldap.sh)
@@ -133,7 +155,7 @@ docker-hotfix: hotfix-push checks ## builds minio docker container with hotfix t
@echo "Building minio docker image '$(TAG)'"
@docker build -q --no-cache -t $(TAG) --build-arg RELEASE=$(VERSION) . -f Dockerfile.hotfix
docker: build checks ## builds minio docker container
docker: build ## builds minio docker container
@echo "Building minio docker image '$(TAG)'"
@docker build -q --no-cache -t $(TAG) . -f Dockerfile

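With golangci-lint now pinned (v1.51.2 under .bin/golangci/) and fetched on demand by getdeps, the day-to-day entry points defined above are:

```sh
make lint                 # fetch the pinned golangci-lint, run the full suite
make lint-fix             # same, with --fix applied
make test-root-disable    # install minio, then run buildscripts/disable-root.sh
```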
NOTICE

@@ -1,4 +1,4 @@
MinIO Project, (C) 2015-2021 MinIO, Inc.
MinIO Project, (C) 2015-2023 MinIO, Inc.
This product includes software developed at MinIO, Inc.
(https://min.io/).


@@ -86,8 +86,6 @@ chmod +x minio
./minio server /data
```
Replace ``/data`` with the path to the drive or directory in which you want MinIO to store data.
The following table lists supported architectures. Replace the `wget` URL with the architecture for your Linux host.
| Architecture | URL |
@@ -125,7 +123,7 @@ You can also connect using any S3-compatible tool, such as the MinIO Client `mc`
## Install from Source
Use the following commands to compile and run a standalone MinIO server from source. Source installation is only intended for developers and advanced users. If you do not have a working Golang environment, please follow [How to install Golang](https://golang.org/doc/install). Minimum version required is [go1.18](https://golang.org/dl/#stable)
Use the following commands to compile and run a standalone MinIO server from source. Source installation is only intended for developers and advanced users. If you do not have a working Golang environment, please follow [How to install Golang](https://golang.org/doc/install). Minimum version required is [go1.19](https://golang.org/dl/#stable)
```sh
go install github.com/minio/minio@latest


@@ -3,19 +3,19 @@
_init() {
shopt -s extglob
shopt -s extglob
## Minimum required versions for build dependencies
GIT_VERSION="1.0"
GO_VERSION="1.16"
OSX_VERSION="10.8"
KNAME=$(uname -s)
ARCH=$(uname -m)
case "${KNAME}" in
SunOS )
ARCH=$(isainfo -k)
;;
esac
## Minimum required versions for build dependencies
GIT_VERSION="1.0"
GO_VERSION="1.16"
OSX_VERSION="10.8"
KNAME=$(uname -s)
ARCH=$(uname -m)
case "${KNAME}" in
SunOS)
ARCH=$(isainfo -k)
;;
esac
}
## FIXME:
@@ -28,24 +28,23 @@ _init() {
## }
##
readlink() {
TARGET_FILE=$1
TARGET_FILE=$1
cd `dirname $TARGET_FILE`
TARGET_FILE=`basename $TARGET_FILE`
cd $(dirname $TARGET_FILE)
TARGET_FILE=$(basename $TARGET_FILE)
# Iterate down a (possible) chain of symlinks
while [ -L "$TARGET_FILE" ]
do
TARGET_FILE=$(env readlink $TARGET_FILE)
cd `dirname $TARGET_FILE`
TARGET_FILE=`basename $TARGET_FILE`
done
# Iterate down a (possible) chain of symlinks
while [ -L "$TARGET_FILE" ]; do
TARGET_FILE=$(env readlink $TARGET_FILE)
cd $(dirname $TARGET_FILE)
TARGET_FILE=$(basename $TARGET_FILE)
done
# Compute the canonicalized name by finding the physical path
# for the directory we're in and appending the target file.
PHYS_DIR=`pwd -P`
RESULT=$PHYS_DIR/$TARGET_FILE
echo $RESULT
# Compute the canonicalized name by finding the physical path
# for the directory we're in and appending the target file.
PHYS_DIR=$(pwd -P)
RESULT=$PHYS_DIR/$TARGET_FILE
echo $RESULT
}
## FIXME:
@@ -59,84 +58,86 @@ readlink() {
## }
##
check_minimum_version() {
IFS='.' read -r -a varray1 <<< "$1"
IFS='.' read -r -a varray2 <<< "$2"
IFS='.' read -r -a varray1 <<<"$1"
IFS='.' read -r -a varray2 <<<"$2"
for i in "${!varray1[@]}"; do
if [[ ${varray1[i]} -lt ${varray2[i]} ]]; then
return 0
elif [[ ${varray1[i]} -gt ${varray2[i]} ]]; then
return 1
fi
done
for i in "${!varray1[@]}"; do
if [[ ${varray1[i]} -lt ${varray2[i]} ]]; then
return 0
elif [[ ${varray1[i]} -gt ${varray2[i]} ]]; then
return 1
fi
done
return 0
return 0
}
assert_is_supported_arch() {
case "${ARCH}" in
x86_64 | amd64 | aarch64 | ppc64le | arm* | s390x )
return
;;
*)
echo "Arch '${ARCH}' is not supported. Supported Arch: [x86_64, amd64, aarch64, ppc64le, arm*, s390x]"
exit 1
esac
case "${ARCH}" in
x86_64 | amd64 | aarch64 | ppc64le | arm* | s390x | loong64 | loongarch64)
return
;;
*)
echo "Arch '${ARCH}' is not supported. Supported Arch: [x86_64, amd64, aarch64, ppc64le, arm*, s390x, loong64, loongarch64]"
exit 1
;;
esac
}
assert_is_supported_os() {
case "${KNAME}" in
Linux | FreeBSD | OpenBSD | NetBSD | DragonFly | SunOS )
return
;;
Darwin )
osx_host_version=$(env sw_vers -productVersion)
if ! check_minimum_version "${OSX_VERSION}" "${osx_host_version}"; then
echo "OSX version '${osx_host_version}' is not supported. Minimum supported version: ${OSX_VERSION}"
exit 1
fi
return
;;
*)
echo "OS '${KNAME}' is not supported. Supported OS: [Linux, FreeBSD, OpenBSD, NetBSD, Darwin, DragonFly]"
exit 1
esac
case "${KNAME}" in
Linux | FreeBSD | OpenBSD | NetBSD | DragonFly | SunOS)
return
;;
Darwin)
osx_host_version=$(env sw_vers -productVersion)
if ! check_minimum_version "${OSX_VERSION}" "${osx_host_version}"; then
echo "OSX version '${osx_host_version}' is not supported. Minimum supported version: ${OSX_VERSION}"
exit 1
fi
return
;;
*)
echo "OS '${KNAME}' is not supported. Supported OS: [Linux, FreeBSD, OpenBSD, NetBSD, Darwin, DragonFly]"
exit 1
;;
esac
}
assert_check_golang_env() {
if ! which go >/dev/null 2>&1; then
echo "Cannot find go binary in your PATH configuration, please refer to Go installation document at https://golang.org/doc/install"
exit 1
fi
if ! which go >/dev/null 2>&1; then
echo "Cannot find go binary in your PATH configuration, please refer to Go installation document at https://golang.org/doc/install"
exit 1
fi
installed_go_version=$(go version | sed 's/^.* go\([0-9.]*\).*$/\1/')
if ! check_minimum_version "${GO_VERSION}" "${installed_go_version}"; then
echo "Go runtime version '${installed_go_version}' is unsupported. Minimum supported version: ${GO_VERSION} to compile."
exit 1
fi
installed_go_version=$(go version | sed 's/^.* go\([0-9.]*\).*$/\1/')
if ! check_minimum_version "${GO_VERSION}" "${installed_go_version}"; then
echo "Go runtime version '${installed_go_version}' is unsupported. Minimum supported version: ${GO_VERSION} to compile."
exit 1
fi
}
assert_check_deps() {
# support unusual Git versions such as: 2.7.4 (Apple Git-66)
installed_git_version=$(git version | perl -ne '$_ =~ m/git version (.*?)( |$)/; print "$1\n";')
if ! check_minimum_version "${GIT_VERSION}" "${installed_git_version}"; then
echo "Git version '${installed_git_version}' is not supported. Minimum supported version: ${GIT_VERSION}"
exit 1
fi
# support unusual Git versions such as: 2.7.4 (Apple Git-66)
installed_git_version=$(git version | perl -ne '$_ =~ m/git version (.*?)( |$)/; print "$1\n";')
if ! check_minimum_version "${GIT_VERSION}" "${installed_git_version}"; then
echo "Git version '${installed_git_version}' is not supported. Minimum supported version: ${GIT_VERSION}"
exit 1
fi
}
main() {
## Check for supported arch
assert_is_supported_arch
## Check for supported arch
assert_is_supported_arch
## Check for supported os
assert_is_supported_os
## Check for supported os
assert_is_supported_os
## Check for Go environment
assert_check_golang_env
## Check for Go environment
assert_check_golang_env
## Check for dependencies
assert_check_deps
## Check for dependencies
assert_check_deps
}
_init && main "$@"

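For reference, check_minimum_version treats its first argument as the minimum and the second as the installed version, returning 0 (success) when the installed version meets the minimum. A quick sanity check of that contract:

```sh
check_minimum_version "1.16" "1.19" && echo "1.19 satisfies the 1.16 minimum"
check_minimum_version "1.16" "1.15" || echo "1.15 is below the 1.16 minimum"
```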

@@ -5,33 +5,33 @@ set -e
[ -n "$BASH_XTRACEFD" ] && set -x
function _init() {
## All binaries are static make sure to disable CGO.
export CGO_ENABLED=0
## All binaries are static make sure to disable CGO.
export CGO_ENABLED=0
## List of architectures and OS to test cross compilation.
SUPPORTED_OSARCH="linux/ppc64le linux/mips64 linux/amd64 linux/arm64 linux/s390x darwin/arm64 darwin/amd64 freebsd/amd64 windows/amd64 linux/arm linux/386 netbsd/amd64 linux/mips openbsd/amd64"
## List of architectures and OS to test cross compilation.
SUPPORTED_OSARCH="linux/ppc64le linux/mips64 linux/amd64 linux/arm64 linux/s390x darwin/arm64 darwin/amd64 freebsd/amd64 windows/amd64 linux/arm linux/386 netbsd/amd64 linux/mips"
}
function _build() {
local osarch=$1
IFS=/ read -r -a arr <<<"$osarch"
os="${arr[0]}"
arch="${arr[1]}"
package=$(go list -f '{{.ImportPath}}')
printf -- "--> %15s:%s\n" "${osarch}" "${package}"
local osarch=$1
IFS=/ read -r -a arr <<<"$osarch"
os="${arr[0]}"
arch="${arr[1]}"
package=$(go list -f '{{.ImportPath}}')
printf -- "--> %15s:%s\n" "${osarch}" "${package}"
# go build -trimpath to build the binary.
export GOOS=$os
export GOARCH=$arch
export GO111MODULE=on
go build -trimpath -tags kqueue -o /dev/null
# go build -trimpath to build the binary.
export GOOS=$os
export GOARCH=$arch
export GO111MODULE=on
go build -trimpath -tags kqueue -o /dev/null
}
function main() {
echo "Testing builds for OS/Arch: ${SUPPORTED_OSARCH}"
for each_osarch in ${SUPPORTED_OSARCH}; do
_build "${each_osarch}"
done
echo "Testing builds for OS/Arch: ${SUPPORTED_OSARCH}"
for each_osarch in ${SUPPORTED_OSARCH}; do
_build "${each_osarch}"
done
}
_init && main "$@"

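Each entry in SUPPORTED_OSARCH becomes a plain go build with GOOS/GOARCH overridden; one target can be checked in isolation under the same assumptions (static binary, kqueue tag, output discarded):

```sh
CGO_ENABLED=0 GOOS=freebsd GOARCH=amd64 GO111MODULE=on \
	go build -trimpath -tags kqueue -o /dev/null
```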
buildscripts/disable-root.sh Executable file

@@ -0,0 +1,119 @@
#!/bin/bash
set -x
export MINIO_CI_CD=1
killall -9 minio
rm -rf ${HOME}/tmp/dist
scheme="http"
nr_servers=4
addr="localhost"
args=""
for ((i = 0; i < $((nr_servers)); i++)); do
args="$args $scheme://$addr:$((9100 + i))/${HOME}/tmp/dist/path1/$i"
done
echo $args
for ((i = 0; i < $((nr_servers)); i++)); do
(minio server --address ":$((9100 + i))" $args 2>&1 >/tmp/log$i.txt) &
done
sleep 10s
if [ ! -f ./mc ]; then
wget --quiet -O ./mc https://dl.minio.io/client/mc/release/linux-amd64/./mc &&
chmod +x mc
fi
set +e
export MC_HOST_minioadm=http://minioadmin:minioadmin@localhost:9100/
./mc ls minioadm/
./mc admin config set minioadm/ api root_access=off
sleep 3s # let things settle a little
./mc ls minioadm/
if [ $? -eq 0 ]; then
echo "listing succeeded, 'minioadmin' was not disabled"
exit 1
fi
set -e
killall -9 minio
export MINIO_API_ROOT_ACCESS=on
for ((i = 0; i < $((nr_servers)); i++)); do
(minio server --address ":$((9100 + i))" $args 2>&1 >/tmp/log$i.txt) &
done
set +e
./mc ls minioadm/
if [ $? -ne 0 ]; then
echo "listing failed, 'minioadmin' should be enabled"
exit 1
fi
killall -9 minio
rm -rf /tmp/multisitea/
rm -rf /tmp/multisiteb/
echo "Setup site-replication and then disable root credentials"
minio server --address 127.0.0.1:9001 "http://127.0.0.1:9001/tmp/multisitea/data/disterasure/xl{1...4}" \
"http://127.0.0.1:9002/tmp/multisitea/data/disterasure/xl{5...8}" >/tmp/sitea_1.log 2>&1 &
minio server --address 127.0.0.1:9002 "http://127.0.0.1:9001/tmp/multisitea/data/disterasure/xl{1...4}" \
"http://127.0.0.1:9002/tmp/multisitea/data/disterasure/xl{5...8}" >/tmp/sitea_2.log 2>&1 &
minio server --address 127.0.0.1:9003 "http://127.0.0.1:9003/tmp/multisiteb/data/disterasure/xl{1...4}" \
"http://127.0.0.1:9004/tmp/multisiteb/data/disterasure/xl{5...8}" >/tmp/siteb_1.log 2>&1 &
minio server --address 127.0.0.1:9004 "http://127.0.0.1:9003/tmp/multisiteb/data/disterasure/xl{1...4}" \
"http://127.0.0.1:9004/tmp/multisiteb/data/disterasure/xl{5...8}" >/tmp/siteb_2.log 2>&1 &
sleep 20s
export MC_HOST_sitea=http://minioadmin:minioadmin@127.0.0.1:9001
export MC_HOST_siteb=http://minioadmin:minioadmin@127.0.0.1:9004
./mc admin replicate add sitea siteb
./mc admin user add sitea foobar foo12345
./mc admin policy attach sitea/ consoleAdmin --user=foobar
./mc admin user info siteb foobar
killall -9 minio
echo "turning off root access, however site replication must continue"
export MINIO_API_ROOT_ACCESS=off
minio server --address 127.0.0.1:9001 "http://127.0.0.1:9001/tmp/multisitea/data/disterasure/xl{1...4}" \
"http://127.0.0.1:9002/tmp/multisitea/data/disterasure/xl{5...8}" >/tmp/sitea_1.log 2>&1 &
minio server --address 127.0.0.1:9002 "http://127.0.0.1:9001/tmp/multisitea/data/disterasure/xl{1...4}" \
"http://127.0.0.1:9002/tmp/multisitea/data/disterasure/xl{5...8}" >/tmp/sitea_2.log 2>&1 &
minio server --address 127.0.0.1:9003 "http://127.0.0.1:9003/tmp/multisiteb/data/disterasure/xl{1...4}" \
"http://127.0.0.1:9004/tmp/multisiteb/data/disterasure/xl{5...8}" >/tmp/siteb_1.log 2>&1 &
minio server --address 127.0.0.1:9004 "http://127.0.0.1:9003/tmp/multisiteb/data/disterasure/xl{1...4}" \
"http://127.0.0.1:9004/tmp/multisiteb/data/disterasure/xl{5...8}" >/tmp/siteb_2.log 2>&1 &
sleep 20s
export MC_HOST_sitea=http://foobar:foo12345@127.0.0.1:9001
export MC_HOST_siteb=http://foobar:foo12345@127.0.0.1:9004
./mc admin user add sitea foobar-admin foo12345
sleep 2s
./mc admin user info siteb foobar-admin

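The test covers both root-lockdown mechanisms that appear above: a runtime config knob and an environment override applied at server start. In isolation (the alias name matches the script's minioadm):

```sh
./mc admin config set minioadm/ api root_access=off   # disable root credentials at runtime
export MINIO_API_ROOT_ACCESS=on                       # re-enable on the next server start
```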

@@ -6,88 +6,87 @@ set -x
WORK_DIR="$PWD/.verify-$RANDOM"
MINIO_CONFIG_DIR="$WORK_DIR/.minio"
MINIO=( "$PWD/minio" --config-dir "$MINIO_CONFIG_DIR" server )
MINIO=("$PWD/minio" --config-dir "$MINIO_CONFIG_DIR" server)
if [ ! -x "$PWD/minio" ]; then
echo "minio executable binary not found in current directory"
exit 1
echo "minio executable binary not found in current directory"
exit 1
fi
if [ ! -x "$PWD/minio" ]; then
echo "minio executable binary not found in current directory"
exit 1
echo "minio executable binary not found in current directory"
exit 1
fi
function start_minio_4drive() {
start_port=$1
start_port=$1
export MINIO_ROOT_USER=minio
export MINIO_ROOT_PASSWORD=minio123
export MC_HOST_minio="http://minio:minio123@127.0.0.1:${start_port}/"
unset MINIO_KMS_AUTO_ENCRYPTION # do not auto-encrypt objects
export MINIO_CI_CD=1
export MINIO_ROOT_USER=minio
export MINIO_ROOT_PASSWORD=minio123
export MC_HOST_minio="http://minio:minio123@127.0.0.1:${start_port}/"
unset MINIO_KMS_AUTO_ENCRYPTION # do not auto-encrypt objects
export MINIO_CI_CD=1
mkdir ${WORK_DIR}
C_PWD=${PWD}
if [ ! -x "$PWD/mc" ]; then
MC_BUILD_DIR="mc-$RANDOM"
if ! git clone --quiet https://github.com/minio/mc "$MC_BUILD_DIR"; then
echo "failed to download https://github.com/minio/mc"
purge "${MC_BUILD_DIR}"
exit 1
mkdir ${WORK_DIR}
C_PWD=${PWD}
if [ ! -x "$PWD/mc" ]; then
MC_BUILD_DIR="mc-$RANDOM"
if ! git clone --quiet https://github.com/minio/mc "$MC_BUILD_DIR"; then
echo "failed to download https://github.com/minio/mc"
purge "${MC_BUILD_DIR}"
exit 1
fi
(cd "${MC_BUILD_DIR}" && go build -o "$C_PWD/mc")
# remove mc source.
purge "${MC_BUILD_DIR}"
fi
(cd "${MC_BUILD_DIR}" && go build -o "$C_PWD/mc")
"${MINIO[@]}" --address ":$start_port" "${WORK_DIR}/disk{1...4}" >"${WORK_DIR}/server1.log" 2>&1 &
pid=$!
disown $pid
sleep 5
# remove mc source.
purge "${MC_BUILD_DIR}"
fi
if ! ps -p ${pid} 1>&2 >/dev/null; then
echo "server1 log:"
cat "${WORK_DIR}/server1.log"
echo "FAILED"
purge "$WORK_DIR"
exit 1
fi
"${MINIO[@]}" --address ":$start_port" "${WORK_DIR}/disk{1...4}" > "${WORK_DIR}/server1.log" 2>&1 &
pid=$!
disown $pid
sleep 5
"${PWD}/mc" mb --with-versioning minio/bucket
if ! ps -p ${pid} 1>&2 >/dev/null; then
echo "server1 log:"
cat "${WORK_DIR}/server1.log"
echo "FAILED"
purge "$WORK_DIR"
exit 1
fi
for i in $(seq 1 4); do
"${PWD}/mc" cp /etc/hosts minio/bucket/testobj
"${PWD}/mc" mb --with-versioning minio/bucket
sudo chown -R root. "${WORK_DIR}/disk${i}"
for i in $(seq 1 4); do
"${PWD}/mc" cp /etc/hosts minio/bucket/testobj
"${PWD}/mc" cp /etc/hosts minio/bucket/testobj
sudo chown -R root. "${WORK_DIR}/disk${i}"
sudo chown -R ${USER}. "${WORK_DIR}/disk${i}"
done
"${PWD}/mc" cp /etc/hosts minio/bucket/testobj
for vid in $("${PWD}/mc" ls --json --versions minio/bucket/testobj | jq -r .versionId); do
"${PWD}/mc" cat --vid "${vid}" minio/bucket/testobj | md5sum
done
sudo chown -R ${USER}. "${WORK_DIR}/disk${i}"
done
for vid in $("${PWD}/mc" ls --json --versions minio/bucket/testobj | jq -r .versionId); do
"${PWD}/mc" cat --vid "${vid}" minio/bucket/testobj | md5sum
done
pkill minio
sleep 3
pkill minio
sleep 3
}
function main() {
start_port=$(shuf -i 10000-65000 -n 1)
start_port=$(shuf -i 10000-65000 -n 1)
start_minio_4drive ${start_port}
start_minio_4drive ${start_port}
}
function purge()
{
rm -rf "$1"
function purge() {
rm -rf "$1"
}
( main "$@" )
(main "$@")
rv=$?
purge "$WORK_DIR"
exit "$rv"

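The verification loop enumerates versions with mc and reads each one back by ID; done by hand for a single object it looks like this (the version ID is a placeholder):

```sh
mc ls --json --versions minio/bucket/testobj | jq -r .versionId
mc cat --vid "<version-id>" minio/bucket/testobj | md5sum
```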

@@ -27,7 +27,7 @@ import (
"os"
"time"
"github.com/minio/madmin-go"
"github.com/minio/madmin-go/v3"
)
func main() {


@@ -4,89 +4,89 @@ trap 'cleanup $LINENO' ERR
# shellcheck disable=SC2120
cleanup() {
MINIO_VERSION=dev docker-compose \
-f "buildscripts/upgrade-tests/compose.yml" \
rm -s -f
docker volume prune -f
MINIO_VERSION=dev docker-compose \
-f "buildscripts/upgrade-tests/compose.yml" \
rm -s -f
docker volume prune -f
}
verify_checksum_after_heal() {
local sum1
sum1=$(curl -s "$2" | sha256sum);
mc admin heal --json -r "$1" >/dev/null; # test after healing
local sum1_heal
sum1_heal=$(curl -s "$2" | sha256sum);
local sum1
sum1=$(curl -s "$2" | sha256sum)
mc admin heal --json -r "$1" >/dev/null # test after healing
local sum1_heal
sum1_heal=$(curl -s "$2" | sha256sum)
if [ "${sum1_heal}" != "${sum1}" ]; then
echo "mismatch expected ${sum1_heal}, got ${sum1}"
exit 1;
fi
if [ "${sum1_heal}" != "${sum1}" ]; then
echo "mismatch expected ${sum1_heal}, got ${sum1}"
exit 1
fi
}
verify_checksum_mc() {
local expected
expected=$(mc cat "$1" | sha256sum)
local got
got=$(mc cat "$2" | sha256sum)
local expected
expected=$(mc cat "$1" | sha256sum)
local got
got=$(mc cat "$2" | sha256sum)
if [ "${expected}" != "${got}" ]; then
echo "mismatch - expected ${expected}, got ${got}"
exit 1;
fi
echo "matches - ${expected}, got ${got}"
if [ "${expected}" != "${got}" ]; then
echo "mismatch - expected ${expected}, got ${got}"
exit 1
fi
echo "matches - ${expected}, got ${got}"
}
add_alias() {
for i in $(seq 1 4); do
echo "... attempting to add alias $i"
until (mc alias set minio http://127.0.0.1:9000 minioadmin minioadmin); do
echo "...waiting... for 5secs" && sleep 5
done
done
for i in $(seq 1 4); do
echo "... attempting to add alias $i"
until (mc alias set minio http://127.0.0.1:9000 minioadmin minioadmin); do
echo "...waiting... for 5secs" && sleep 5
done
done
echo "Sleeping for nginx"
sleep 20
echo "Sleeping for nginx"
sleep 20
}
__init__() {
sudo apt install curl -y
export GOPATH=/tmp/gopath
export PATH=${PATH}:${GOPATH}/bin
sudo apt install curl -y
export GOPATH=/tmp/gopath
export PATH=${PATH}:${GOPATH}/bin
go install github.com/minio/mc@latest
go install github.com/minio/mc@latest
TAG=minio/minio:dev make docker
TAG=minio/minio:dev make docker
MINIO_VERSION=RELEASE.2019-12-19T22-52-26Z docker-compose \
-f "buildscripts/upgrade-tests/compose.yml" \
up -d --build
MINIO_VERSION=RELEASE.2019-12-19T22-52-26Z docker-compose \
-f "buildscripts/upgrade-tests/compose.yml" \
up -d --build
add_alias
add_alias
mc mb minio/minio-test/
mc cp ./minio minio/minio-test/to-read/
mc cp /etc/hosts minio/minio-test/to-read/hosts
mc anonymous set download minio/minio-test
mc mb minio/minio-test/
mc cp ./minio minio/minio-test/to-read/
mc cp /etc/hosts minio/minio-test/to-read/hosts
mc anonymous set download minio/minio-test
verify_checksum_mc ./minio minio/minio-test/to-read/minio
verify_checksum_mc ./minio minio/minio-test/to-read/minio
curl -s http://127.0.0.1:9000/minio-test/to-read/hosts | sha256sum
curl -s http://127.0.0.1:9000/minio-test/to-read/hosts | sha256sum
MINIO_VERSION=dev docker-compose -f "buildscripts/upgrade-tests/compose.yml" stop
MINIO_VERSION=dev docker-compose -f "buildscripts/upgrade-tests/compose.yml" stop
}
main() {
MINIO_VERSION=dev docker-compose -f "buildscripts/upgrade-tests/compose.yml" up -d --build
MINIO_VERSION=dev docker-compose -f "buildscripts/upgrade-tests/compose.yml" up -d --build
add_alias
add_alias
verify_checksum_after_heal minio/minio-test http://127.0.0.1:9000/minio-test/to-read/hosts
verify_checksum_after_heal minio/minio-test http://127.0.0.1:9000/minio-test/to-read/hosts
verify_checksum_mc ./minio minio/minio-test/to-read/minio
verify_checksum_mc ./minio minio/minio-test/to-read/minio
verify_checksum_mc /etc/hosts minio/minio-test/to-read/hosts
verify_checksum_mc /etc/hosts minio/minio-test/to-read/hosts
cleanup
cleanup
}
( __init__ "$@" && main "$@" )
(__init__ "$@" && main "$@")

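The upgrade test pins the starting server image through MINIO_VERSION, seeds and checksums data, then brings the same volumes up on the dev build and re-verifies. The two compose invocations, as the script issues them:

```sh
MINIO_VERSION=RELEASE.2019-12-19T22-52-26Z docker-compose \
	-f buildscripts/upgrade-tests/compose.yml up -d --build
MINIO_VERSION=dev docker-compose \
	-f buildscripts/upgrade-tests/compose.yml up -d --build
```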

@@ -5,7 +5,6 @@ set -e
export GORACE="history_size=7"
export MINIO_API_REQUESTS_MAX=10000
## TODO remove `dsync` from race detector once this is merged and released https://go-review.googlesource.com/c/go/+/333529/
for d in $(go list ./... | grep -v dsync); do
CGO_ENABLED=1 go test -v -race --timeout 100m "$d"
for d in $(go list ./...); do
CGO_ENABLED=1 go test -v -race --timeout 100m "$d"
done

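With the dsync exclusion gone, the loop is effectively a race run over every package; a single-shot equivalent under the same environment is:

```sh
GORACE="history_size=7" MINIO_API_REQUESTS_MAX=10000 CGO_ENABLED=1 \
	go test -race --timeout 100m ./...
```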

@@ -6,67 +6,66 @@ set -x
WORK_DIR="$PWD/.verify-$RANDOM"
MINIO_CONFIG_DIR="$WORK_DIR/.minio"
MINIO=( "$PWD/minio" --config-dir "$MINIO_CONFIG_DIR" server )
MINIO=("$PWD/minio" --config-dir "$MINIO_CONFIG_DIR" server)
if [ ! -x "$PWD/minio" ]; then
echo "minio executable binary not found in current directory"
exit 1
echo "minio executable binary not found in current directory"
exit 1
fi
function start_minio_5drive() {
start_port=$1
start_port=$1
export MINIO_ROOT_USER=minio
export MINIO_ROOT_PASSWORD=minio123
export MC_HOST_minio="http://minio:minio123@127.0.0.1:${start_port}/"
unset MINIO_KMS_AUTO_ENCRYPTION # do not auto-encrypt objects
export MINIO_CI_CD=1
export MINIO_ROOT_USER=minio
export MINIO_ROOT_PASSWORD=minio123
export MC_HOST_minio="http://minio:minio123@127.0.0.1:${start_port}/"
unset MINIO_KMS_AUTO_ENCRYPTION # do not auto-encrypt objects
export MINIO_CI_CD=1
MC_BUILD_DIR="mc-$RANDOM"
if ! git clone --quiet https://github.com/minio/mc "$MC_BUILD_DIR"; then
echo "failed to download https://github.com/minio/mc"
MC_BUILD_DIR="mc-$RANDOM"
if ! git clone --quiet https://github.com/minio/mc "$MC_BUILD_DIR"; then
echo "failed to download https://github.com/minio/mc"
purge "${MC_BUILD_DIR}"
exit 1
fi
(cd "${MC_BUILD_DIR}" && go build -o "$WORK_DIR/mc")
# remove mc source.
purge "${MC_BUILD_DIR}"
exit 1
fi
(cd "${MC_BUILD_DIR}" && go build -o "$WORK_DIR/mc")
"${WORK_DIR}/mc" cp --quiet -r "buildscripts/cicd-corpus/" "${WORK_DIR}/cicd-corpus/"
# remove mc source.
purge "${MC_BUILD_DIR}"
"${MINIO[@]}" --address ":$start_port" "${WORK_DIR}/cicd-corpus/disk{1...5}" >"${WORK_DIR}/server1.log" 2>&1 &
pid=$!
disown $pid
sleep 5
"${WORK_DIR}/mc" cp --quiet -r "buildscripts/cicd-corpus/" "${WORK_DIR}/cicd-corpus/"
if ! ps -p ${pid} 1>&2 >/dev/null; then
echo "server1 log:"
cat "${WORK_DIR}/server1.log"
echo "FAILED"
purge "$WORK_DIR"
exit 1
fi
"${MINIO[@]}" --address ":$start_port" "${WORK_DIR}/cicd-corpus/disk{1...5}" > "${WORK_DIR}/server1.log" 2>&1 &
pid=$!
disown $pid
sleep 5
"${WORK_DIR}/mc" stat minio/bucket/testobj
if ! ps -p ${pid} 1>&2 >/dev/null; then
echo "server1 log:"
cat "${WORK_DIR}/server1.log"
echo "FAILED"
purge "$WORK_DIR"
exit 1
fi
"${WORK_DIR}/mc" stat minio/bucket/testobj
pkill minio
sleep 3
pkill minio
sleep 3
}
function main() {
start_port=$(shuf -i 10000-65000 -n 1)
start_port=$(shuf -i 10000-65000 -n 1)
start_minio_5drive ${start_port}
start_minio_5drive ${start_port}
}
function purge()
{
rm -rf "$1"
function purge() {
rm -rf "$1"
}
( main "$@" )
(main "$@")
rv=$?
purge "$WORK_DIR"
exit "$rv"


@@ -6,146 +6,151 @@ set -x
WORK_DIR="$PWD/.verify-$RANDOM"
MINIO_CONFIG_DIR="$WORK_DIR/.minio"
MINIO_OLD=( "$PWD/minio.RELEASE.2020-10-28T08-16-50Z" --config-dir "$MINIO_CONFIG_DIR" server )
MINIO=( "$PWD/minio" --config-dir "$MINIO_CONFIG_DIR" server )
MINIO_OLD=("$PWD/minio.RELEASE.2020-10-28T08-16-50Z" --config-dir "$MINIO_CONFIG_DIR" server)
MINIO=("$PWD/minio" --config-dir "$MINIO_CONFIG_DIR" server)
if [ ! -x "$PWD/minio" ]; then
echo "minio executable binary not found in current directory"
exit 1
echo "minio executable binary not found in current directory"
exit 1
fi
function download_old_release() {
if [ ! -f minio.RELEASE.2020-10-28T08-16-50Z ]; then
curl --silent -O https://dl.minio.io/server/minio/release/linux-amd64/archive/minio.RELEASE.2020-10-28T08-16-50Z
chmod a+x minio.RELEASE.2020-10-28T08-16-50Z
fi
if [ ! -f minio.RELEASE.2020-10-28T08-16-50Z ]; then
curl --silent -O https://dl.minio.io/server/minio/release/linux-amd64/archive/minio.RELEASE.2020-10-28T08-16-50Z
chmod a+x minio.RELEASE.2020-10-28T08-16-50Z
fi
}
function verify_rewrite() {
start_port=$1
start_port=$1
export MINIO_ACCESS_KEY=minio
export MINIO_SECRET_KEY=minio123
export MC_HOST_minio="http://minio:minio123@127.0.0.1:${start_port}/"
unset MINIO_KMS_AUTO_ENCRYPTION # do not auto-encrypt objects
export MINIO_CI_CD=1
export MINIO_ACCESS_KEY=minio
export MINIO_SECRET_KEY=minio123
export MC_HOST_minio="http://minio:minio123@127.0.0.1:${start_port}/"
unset MINIO_KMS_AUTO_ENCRYPTION # do not auto-encrypt objects
export MINIO_CI_CD=1
MC_BUILD_DIR="mc-$RANDOM"
if ! git clone --quiet https://github.com/minio/mc "$MC_BUILD_DIR"; then
echo "failed to download https://github.com/minio/mc"
MC_BUILD_DIR="mc-$RANDOM"
if ! git clone --quiet https://github.com/minio/mc "$MC_BUILD_DIR"; then
echo "failed to download https://github.com/minio/mc"
purge "${MC_BUILD_DIR}"
exit 1
fi
(cd "${MC_BUILD_DIR}" && go build -o "$WORK_DIR/mc")
# remove mc source.
purge "${MC_BUILD_DIR}"
exit 1
fi
(cd "${MC_BUILD_DIR}" && go build -o "$WORK_DIR/mc")
"${MINIO_OLD[@]}" --address ":$start_port" "${WORK_DIR}/xl{1...16}" >"${WORK_DIR}/server1.log" 2>&1 &
pid=$!
disown $pid
sleep 10
# remove mc source.
purge "${MC_BUILD_DIR}"
if ! ps -p ${pid} 1>&2 >/dev/null; then
echo "server1 log:"
cat "${WORK_DIR}/server1.log"
echo "FAILED"
purge "$WORK_DIR"
exit 1
fi
"${MINIO_OLD[@]}" --address ":$start_port" "${WORK_DIR}/xl{1...16}" > "${WORK_DIR}/server1.log" 2>&1 &
pid=$!
disown $pid
sleep 10
"${WORK_DIR}/mc" mb minio/healing-rewrite-bucket --quiet --with-lock
"${WORK_DIR}/mc" cp \
buildscripts/verify-build.sh \
minio/healing-rewrite-bucket/ \
--disable-multipart --quiet
if ! ps -p ${pid} 1>&2 >/dev/null; then
echo "server1 log:"
cat "${WORK_DIR}/server1.log"
echo "FAILED"
purge "$WORK_DIR"
exit 1
fi
"${WORK_DIR}/mc" cp \
buildscripts/verify-build.sh \
minio/healing-rewrite-bucket/ \
--disable-multipart --quiet
"${WORK_DIR}/mc" mb minio/healing-rewrite-bucket --quiet --with-lock
"${WORK_DIR}/mc" cp \
buildscripts/verify-build.sh \
minio/healing-rewrite-bucket/ \
--disable-multipart --quiet
"${WORK_DIR}/mc" cp \
buildscripts/verify-build.sh \
minio/healing-rewrite-bucket/ \
--disable-multipart --quiet
"${WORK_DIR}/mc" cp \
buildscripts/verify-build.sh \
minio/healing-rewrite-bucket/ \
--disable-multipart --quiet
kill ${pid}
sleep 3
"${WORK_DIR}/mc" cp \
buildscripts/verify-build.sh \
minio/healing-rewrite-bucket/ \
--disable-multipart --quiet
"${MINIO[@]}" --address ":$start_port" "${WORK_DIR}/xl{1...16}" >"${WORK_DIR}/server1.log" 2>&1 &
pid=$!
disown $pid
sleep 10
kill ${pid}
sleep 3
if ! ps -p ${pid} 1>&2 >/dev/null; then
echo "server1 log:"
cat "${WORK_DIR}/server1.log"
echo "FAILED"
purge "$WORK_DIR"
exit 1
fi
"${MINIO[@]}" --address ":$start_port" "${WORK_DIR}/xl{1...16}" > "${WORK_DIR}/server1.log" 2>&1 &
pid=$!
disown $pid
sleep 10
go build ./docs/debugging/s3-check-md5/
if ! ./s3-check-md5 \
-debug \
-versions \
-access-key minio \
-secret-key minio123 \
-endpoint http://127.0.0.1:${start_port}/ 2>&1 | grep INTACT; then
echo "server1 log:"
cat "${WORK_DIR}/server1.log"
echo "FAILED"
mkdir -p inspects
(
cd inspects
"${WORK_DIR}/mc" admin inspect minio/healing-rewrite-bucket/verify-build.sh/**
)
if ! ps -p ${pid} 1>&2 >/dev/null; then
echo "server1 log:"
cat "${WORK_DIR}/server1.log"
echo "FAILED"
purge "$WORK_DIR"
exit 1
fi
"${WORK_DIR}/mc" mb play/inspects
"${WORK_DIR}/mc" mirror inspects play/inspects
go build ./docs/debugging/s3-check-md5/
if ! ./s3-check-md5 \
-debug \
-versions \
-access-key minio \
-secret-key minio123 \
-endpoint http://127.0.0.1:${start_port}/ 2>&1 | grep INTACT; then
echo "server1 log:"
cat "${WORK_DIR}/server1.log"
echo "FAILED"
mkdir -p inspects
(cd inspects; "${WORK_DIR}/mc" admin inspect minio/healing-rewrite-bucket/verify-build.sh/**)
purge "$WORK_DIR"
exit 1
fi
"${WORK_DIR}/mc" mb play/inspects
"${WORK_DIR}/mc" mirror inspects play/inspects
go run ./buildscripts/heal-manual.go "127.0.0.1:${start_port}" "minio" "minio123"
sleep 1
purge "$WORK_DIR"
exit 1
fi
if ! ./s3-check-md5 \
-debug \
-versions \
-access-key minio \
-secret-key minio123 \
-endpoint http://127.0.0.1:${start_port}/ 2>&1 | grep INTACT; then
echo "server1 log:"
cat "${WORK_DIR}/server1.log"
echo "FAILED"
mkdir -p inspects
(
cd inspects
"${WORK_DIR}/mc" admin inspect minio/healing-rewrite-bucket/verify-build.sh/**
)
go run ./buildscripts/heal-manual.go "127.0.0.1:${start_port}" "minio" "minio123"
sleep 1
"${WORK_DIR}/mc" mb play/inspects
"${WORK_DIR}/mc" mirror inspects play/inspects
if ! ./s3-check-md5 \
-debug \
-versions \
-access-key minio \
-secret-key minio123 \
-endpoint http://127.0.0.1:${start_port}/ 2>&1 | grep INTACT; then
echo "server1 log:"
cat "${WORK_DIR}/server1.log"
echo "FAILED"
mkdir -p inspects
(cd inspects; "${WORK_DIR}/mc" admin inspect minio/healing-rewrite-bucket/verify-build.sh/**)
purge "$WORK_DIR"
exit 1
fi
"${WORK_DIR}/mc" mb play/inspects
"${WORK_DIR}/mc" mirror inspects play/inspects
purge "$WORK_DIR"
exit 1
fi
kill ${pid}
kill ${pid}
}
function main() {
download_old_release
download_old_release
start_port=$(shuf -i 10000-65000 -n 1)
start_port=$(shuf -i 10000-65000 -n 1)
verify_rewrite ${start_port}
verify_rewrite ${start_port}
}
function purge()
{
rm -rf "$1"
function purge() {
rm -rf "$1"
}
( main "$@" )
(main "$@")
rv=$?
purge "$WORK_DIR"
exit "$rv"

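s3-check-md5 (built from docs/debugging) recomputes object MD5s over the S3 API; the scripts grep its output for INTACT or CORRUPTED to gate pass/fail. Standalone, against a local server with the same credentials the test uses:

```sh
go build ./docs/debugging/s3-check-md5/
./s3-check-md5 -versions -access-key minio -secret-key minio123 \
	-endpoint http://127.0.0.1:9000/
```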

@@ -6,167 +6,172 @@ set -o pipefail
set -x
if [ ! -x "$PWD/minio" ]; then
echo "minio executable binary not found in current directory"
exit 1
echo "minio executable binary not found in current directory"
exit 1
fi
WORK_DIR="$PWD/.verify-$RANDOM"
MINIO_CONFIG_DIR="$WORK_DIR/.minio"
MINIO_OLD=( "$PWD/minio.RELEASE.2021-11-24T23-19-33Z" --config-dir "$MINIO_CONFIG_DIR" server )
MINIO=( "$PWD/minio" --config-dir "$MINIO_CONFIG_DIR" server )
MINIO_OLD=("$PWD/minio.RELEASE.2021-11-24T23-19-33Z" --config-dir "$MINIO_CONFIG_DIR" server)
MINIO=("$PWD/minio" --config-dir "$MINIO_CONFIG_DIR" server)
function download_old_release() {
if [ ! -f minio.RELEASE.2021-11-24T23-19-33Z ]; then
curl --silent -O https://dl.minio.io/server/minio/release/linux-amd64/archive/minio.RELEASE.2021-11-24T23-19-33Z
chmod a+x minio.RELEASE.2021-11-24T23-19-33Z
fi
if [ ! -f minio.RELEASE.2021-11-24T23-19-33Z ]; then
curl --silent -O https://dl.minio.io/server/minio/release/linux-amd64/archive/minio.RELEASE.2021-11-24T23-19-33Z
chmod a+x minio.RELEASE.2021-11-24T23-19-33Z
fi
}
function start_minio_16drive() {
start_port=$1
start_port=$1
export MINIO_ROOT_USER=minio
export MINIO_ROOT_PASSWORD=minio123
export MC_HOST_minio="http://minio:minio123@127.0.0.1:${start_port}/"
unset MINIO_KMS_AUTO_ENCRYPTION # do not auto-encrypt objects
export _MINIO_SHARD_DISKTIME_DELTA="5s" # do not change this as its needed for tests
export MINIO_CI_CD=1
export MINIO_ROOT_USER=minio
export MINIO_ROOT_PASSWORD=minio123
export MC_HOST_minio="http://minio:minio123@127.0.0.1:${start_port}/"
unset MINIO_KMS_AUTO_ENCRYPTION # do not auto-encrypt objects
export _MINIO_SHARD_DISKTIME_DELTA="5s" # do not change this as its needed for tests
export MINIO_CI_CD=1
MC_BUILD_DIR="mc-$RANDOM"
if ! git clone --quiet https://github.com/minio/mc "$MC_BUILD_DIR"; then
echo "failed to download https://github.com/minio/mc"
MC_BUILD_DIR="mc-$RANDOM"
if ! git clone --quiet https://github.com/minio/mc "$MC_BUILD_DIR"; then
echo "failed to download https://github.com/minio/mc"
purge "${MC_BUILD_DIR}"
exit 1
fi
(cd "${MC_BUILD_DIR}" && go build -o "$WORK_DIR/mc")
# remove mc source.
purge "${MC_BUILD_DIR}"
exit 1
fi
(cd "${MC_BUILD_DIR}" && go build -o "$WORK_DIR/mc")
"${MINIO_OLD[@]}" --address ":$start_port" "${WORK_DIR}/xl{1...16}" >"${WORK_DIR}/server1.log" 2>&1 &
pid=$!
disown $pid
sleep 30
# remove mc source.
purge "${MC_BUILD_DIR}"
if ! ps -p ${pid} 1>&2 >/dev/null; then
echo "server1 log:"
cat "${WORK_DIR}/server1.log"
echo "FAILED"
purge "$WORK_DIR"
exit 1
fi
"${MINIO_OLD[@]}" --address ":$start_port" "${WORK_DIR}/xl{1...16}" > "${WORK_DIR}/server1.log" 2>&1 &
pid=$!
disown $pid
sleep 30
shred --iterations=1 --size=5241856 - 1>"${WORK_DIR}/unaligned" 2>/dev/null
"${WORK_DIR}/mc" mb minio/healing-shard-bucket --quiet
"${WORK_DIR}/mc" cp \
"${WORK_DIR}/unaligned" \
minio/healing-shard-bucket/unaligned \
--disable-multipart --quiet
if ! ps -p ${pid} 1>&2 >/dev/null; then
echo "server1 log:"
cat "${WORK_DIR}/server1.log"
echo "FAILED"
purge "$WORK_DIR"
exit 1
fi
## "unaligned" object name gets consistently distributed
## to disks in following distribution order
##
## NOTE: if you change the name make sure to change the
## distribution order present here
##
## [15, 16, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14]
shred --iterations=1 --size=5241856 - 1>"${WORK_DIR}/unaligned" 2>/dev/null
"${WORK_DIR}/mc" mb minio/healing-shard-bucket --quiet
"${WORK_DIR}/mc" cp \
"${WORK_DIR}/unaligned" \
minio/healing-shard-bucket/unaligned \
--disable-multipart --quiet
## make sure to remove the "last" data shard
rm -rf "${WORK_DIR}/xl14/healing-shard-bucket/unaligned"
sleep 10
## Heal the shard
"${WORK_DIR}/mc" admin heal --quiet --recursive minio/healing-shard-bucket
## then remove any other data shard let's pick first disk
## - 1st data shard.
rm -rf "${WORK_DIR}/xl3/healing-shard-bucket/unaligned"
sleep 10
## "unaligned" object name gets consistently distributed
## to disks in following distribution order
##
## NOTE: if you change the name make sure to change the
## distribution order present here
##
## [15, 16, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14]
go build ./docs/debugging/s3-check-md5/
if ! ./s3-check-md5 \
-debug \
-access-key minio \
-secret-key minio123 \
-endpoint http://127.0.0.1:${start_port}/ 2>&1 | grep CORRUPTED; then
echo "server1 log:"
cat "${WORK_DIR}/server1.log"
echo "FAILED"
purge "$WORK_DIR"
exit 1
fi
## make sure to remove the "last" data shard
rm -rf "${WORK_DIR}/xl14/healing-shard-bucket/unaligned"
sleep 10
## Heal the shard
"${WORK_DIR}/mc" admin heal --quiet --recursive minio/healing-shard-bucket
## then remove any other data shard let's pick first disk
## - 1st data shard.
rm -rf "${WORK_DIR}/xl3/healing-shard-bucket/unaligned"
sleep 10
pkill minio
sleep 3
go build ./docs/debugging/s3-check-md5/
if ! ./s3-check-md5 \
-debug \
-access-key minio \
-secret-key minio123 \
-endpoint http://127.0.0.1:${start_port}/ 2>&1 | grep CORRUPTED; then
echo "server1 log:"
cat "${WORK_DIR}/server1.log"
echo "FAILED"
purge "$WORK_DIR"
exit 1
fi
"${MINIO[@]}" --address ":$start_port" "${WORK_DIR}/xl{1...16}" >"${WORK_DIR}/server1.log" 2>&1 &
pid=$!
disown $pid
sleep 30
pkill minio
sleep 3
if ! ps -p ${pid} 1>&2 >/dev/null; then
echo "server1 log:"
cat "${WORK_DIR}/server1.log"
echo "FAILED"
purge "$WORK_DIR"
exit 1
fi
"${MINIO[@]}" --address ":$start_port" "${WORK_DIR}/xl{1...16}" > "${WORK_DIR}/server1.log" 2>&1 &
pid=$!
disown $pid
sleep 30
if ! ./s3-check-md5 \
-debug \
-access-key minio \
-secret-key minio123 \
-endpoint http://127.0.0.1:${start_port}/ 2>&1 | grep INTACT; then
echo "server1 log:"
cat "${WORK_DIR}/server1.log"
echo "FAILED"
mkdir -p inspects
(
cd inspects
"${WORK_DIR}/mc" support inspect minio/healing-shard-bucket/unaligned/**
)
if ! ps -p ${pid} 1>&2 >/dev/null; then
echo "server1 log:"
cat "${WORK_DIR}/server1.log"
echo "FAILED"
purge "$WORK_DIR"
exit 1
fi
"${WORK_DIR}/mc" mb play/inspects
"${WORK_DIR}/mc" mirror inspects play/inspects
if ! ./s3-check-md5 \
-debug \
-access-key minio \
-secret-key minio123 \
-endpoint http://127.0.0.1:${start_port}/ 2>&1 | grep INTACT; then
echo "server1 log:"
cat "${WORK_DIR}/server1.log"
echo "FAILED"
mkdir -p inspects
(cd inspects; "${WORK_DIR}/mc" support inspect minio/healing-shard-bucket/unaligned/**)
purge "$WORK_DIR"
exit 1
fi
"${WORK_DIR}/mc" mb play/inspects
"${WORK_DIR}/mc" mirror inspects play/inspects
"${WORK_DIR}/mc" admin heal --quiet --recursive minio/healing-shard-bucket
purge "$WORK_DIR"
exit 1
fi
if ! ./s3-check-md5 \
-debug \
-access-key minio \
-secret-key minio123 \
-endpoint http://127.0.0.1:${start_port}/ 2>&1 | grep INTACT; then
echo "server1 log:"
cat "${WORK_DIR}/server1.log"
echo "FAILED"
mkdir -p inspects
(
cd inspects
"${WORK_DIR}/mc" support inspect minio/healing-shard-bucket/unaligned/**
)
"${WORK_DIR}/mc" admin heal --quiet --recursive minio/healing-shard-bucket
"${WORK_DIR}/mc" mb play/inspects
"${WORK_DIR}/mc" mirror inspects play/inspects
if ! ./s3-check-md5 \
-debug \
-access-key minio \
-secret-key minio123 \
-endpoint http://127.0.0.1:${start_port}/ 2>&1 | grep INTACT; then
echo "server1 log:"
cat "${WORK_DIR}/server1.log"
echo "FAILED"
mkdir -p inspects
(cd inspects; "${WORK_DIR}/mc" support inspect minio/healing-shard-bucket/unaligned/**)
purge "$WORK_DIR"
exit 1
fi
"${WORK_DIR}/mc" mb play/inspects
"${WORK_DIR}/mc" mirror inspects play/inspects
purge "$WORK_DIR"
exit 1
fi
pkill minio
sleep 3
pkill minio
sleep 3
}
function main() {
download_old_release
download_old_release
start_port=$(shuf -i 10000-65000 -n 1)
start_port=$(shuf -i 10000-65000 -n 1)
start_minio_16drive ${start_port}
start_minio_16drive ${start_port}
}
function purge()
{
rm -rf "$1"
function purge() {
rm -rf "$1"
}
( main "$@" )
(main "$@")
rv=$?
purge "$WORK_DIR"
exit "$rv"

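The 16-drive variant deliberately deletes data shards on disk and expects s3-check-md5 to report CORRUPTED until healing runs, then INTACT afterwards; the recovery step itself is a single command:

```sh
mc admin heal --quiet --recursive minio/healing-shard-bucket
```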

@@ -6,8 +6,8 @@ set -E
set -o pipefail
if [ ! -x "$PWD/minio" ]; then
echo "minio executable binary not found in current directory"
exit 1
echo "minio executable binary not found in current directory"
exit 1
fi
WORK_DIR="$PWD/.verify-$RANDOM"
@@ -25,285 +25,270 @@ export ENABLE_ADMIN=1
export MINIO_CI_CD=1
MINIO_CONFIG_DIR="$WORK_DIR/.minio"
MINIO=( "$PWD/minio" --config-dir "$MINIO_CONFIG_DIR" )
MINIO=("$PWD/minio" --config-dir "$MINIO_CONFIG_DIR")
FILE_1_MB="$MINT_DATA_DIR/datafile-1-MB"
FILE_65_MB="$MINT_DATA_DIR/datafile-65-MB"
FUNCTIONAL_TESTS="$WORK_DIR/functional-tests.sh"
function start_minio_fs()
{
export MINIO_ROOT_USER=$ACCESS_KEY
export MINIO_ROOT_PASSWORD=$SECRET_KEY
"${MINIO[@]}" server "${WORK_DIR}/fs-disk" >"$WORK_DIR/fs-minio.log" 2>&1 &
sleep 10
function start_minio_fs() {
export MINIO_ROOT_USER=$ACCESS_KEY
export MINIO_ROOT_PASSWORD=$SECRET_KEY
"${MINIO[@]}" server "${WORK_DIR}/fs-disk" >"$WORK_DIR/fs-minio.log" 2>&1 &
sleep 10
}
function start_minio_erasure()
{
"${MINIO[@]}" server "${WORK_DIR}/erasure-disk1" "${WORK_DIR}/erasure-disk2" "${WORK_DIR}/erasure-disk3" "${WORK_DIR}/erasure-disk4" >"$WORK_DIR/erasure-minio.log" 2>&1 &
sleep 15
function start_minio_erasure() {
"${MINIO[@]}" server "${WORK_DIR}/erasure-disk1" "${WORK_DIR}/erasure-disk2" "${WORK_DIR}/erasure-disk3" "${WORK_DIR}/erasure-disk4" >"$WORK_DIR/erasure-minio.log" 2>&1 &
sleep 15
}
function start_minio_erasure_sets()
{
export MINIO_ENDPOINTS="${WORK_DIR}/erasure-disk-sets{1...32}"
"${MINIO[@]}" server > "$WORK_DIR/erasure-minio-sets.log" 2>&1 &
sleep 15
function start_minio_erasure_sets() {
export MINIO_ENDPOINTS="${WORK_DIR}/erasure-disk-sets{1...32}"
"${MINIO[@]}" server >"$WORK_DIR/erasure-minio-sets.log" 2>&1 &
sleep 15
}
function start_minio_pool_erasure_sets()
{
export MINIO_ROOT_USER=$ACCESS_KEY
export MINIO_ROOT_PASSWORD=$SECRET_KEY
export MINIO_ENDPOINTS="http://127.0.0.1:9000${WORK_DIR}/pool-disk-sets{1...4} http://127.0.0.1:9001${WORK_DIR}/pool-disk-sets{5...8}"
"${MINIO[@]}" server --address ":9000" > "$WORK_DIR/pool-minio-9000.log" 2>&1 &
"${MINIO[@]}" server --address ":9001" > "$WORK_DIR/pool-minio-9001.log" 2>&1 &
function start_minio_pool_erasure_sets() {
export MINIO_ROOT_USER=$ACCESS_KEY
export MINIO_ROOT_PASSWORD=$SECRET_KEY
export MINIO_ENDPOINTS="http://127.0.0.1:9000${WORK_DIR}/pool-disk-sets{1...4} http://127.0.0.1:9001${WORK_DIR}/pool-disk-sets{5...8}"
"${MINIO[@]}" server --address ":9000" >"$WORK_DIR/pool-minio-9000.log" 2>&1 &
"${MINIO[@]}" server --address ":9001" >"$WORK_DIR/pool-minio-9001.log" 2>&1 &
sleep 40
sleep 40
}
function start_minio_pool_erasure_sets_ipv6()
{
export MINIO_ROOT_USER=$ACCESS_KEY
export MINIO_ROOT_PASSWORD=$SECRET_KEY
export MINIO_ENDPOINTS="http://[::1]:9000${WORK_DIR}/pool-disk-sets-ipv6{1...4} http://[::1]:9001${WORK_DIR}/pool-disk-sets-ipv6{5...8}"
"${MINIO[@]}" server --address="[::1]:9000" > "$WORK_DIR/pool-minio-ipv6-9000.log" 2>&1 &
"${MINIO[@]}" server --address="[::1]:9001" > "$WORK_DIR/pool-minio-ipv6-9001.log" 2>&1 &
function start_minio_pool_erasure_sets_ipv6() {
export MINIO_ROOT_USER=$ACCESS_KEY
export MINIO_ROOT_PASSWORD=$SECRET_KEY
export MINIO_ENDPOINTS="http://[::1]:9000${WORK_DIR}/pool-disk-sets-ipv6{1...4} http://[::1]:9001${WORK_DIR}/pool-disk-sets-ipv6{5...8}"
"${MINIO[@]}" server --address="[::1]:9000" >"$WORK_DIR/pool-minio-ipv6-9000.log" 2>&1 &
"${MINIO[@]}" server --address="[::1]:9001" >"$WORK_DIR/pool-minio-ipv6-9001.log" 2>&1 &
sleep 40
sleep 40
}
function start_minio_dist_erasure()
{
export MINIO_ROOT_USER=$ACCESS_KEY
export MINIO_ROOT_PASSWORD=$SECRET_KEY
export MINIO_ENDPOINTS="http://127.0.0.1:9000${WORK_DIR}/dist-disk1 http://127.0.0.1:9001${WORK_DIR}/dist-disk2 http://127.0.0.1:9002${WORK_DIR}/dist-disk3 http://127.0.0.1:9003${WORK_DIR}/dist-disk4"
for i in $(seq 0 3); do
"${MINIO[@]}" server --address ":900${i}" > "$WORK_DIR/dist-minio-900${i}.log" 2>&1 &
done
function start_minio_dist_erasure() {
export MINIO_ROOT_USER=$ACCESS_KEY
export MINIO_ROOT_PASSWORD=$SECRET_KEY
export MINIO_ENDPOINTS="http://127.0.0.1:9000${WORK_DIR}/dist-disk1 http://127.0.0.1:9001${WORK_DIR}/dist-disk2 http://127.0.0.1:9002${WORK_DIR}/dist-disk3 http://127.0.0.1:9003${WORK_DIR}/dist-disk4"
for i in $(seq 0 3); do
"${MINIO[@]}" server --address ":900${i}" >"$WORK_DIR/dist-minio-900${i}.log" 2>&1 &
done
sleep 40
sleep 40
}
function run_test_fs()
{
start_minio_fs
function run_test_fs() {
start_minio_fs
(cd "$WORK_DIR" && "$FUNCTIONAL_TESTS")
rv=$?
(cd "$WORK_DIR" && "$FUNCTIONAL_TESTS")
rv=$?
pkill minio
sleep 3
pkill minio
sleep 3
if [ "$rv" -ne 0 ]; then
cat "$WORK_DIR/fs-minio.log"
fi
rm -f "$WORK_DIR/fs-minio.log"
if [ "$rv" -ne 0 ]; then
cat "$WORK_DIR/fs-minio.log"
fi
rm -f "$WORK_DIR/fs-minio.log"
return "$rv"
return "$rv"
}
function run_test_erasure_sets()
{
start_minio_erasure_sets
function run_test_erasure_sets() {
start_minio_erasure_sets
(cd "$WORK_DIR" && "$FUNCTIONAL_TESTS")
rv=$?
(cd "$WORK_DIR" && "$FUNCTIONAL_TESTS")
rv=$?
pkill minio
sleep 3
pkill minio
sleep 3
if [ "$rv" -ne 0 ]; then
cat "$WORK_DIR/erasure-minio-sets.log"
fi
rm -f "$WORK_DIR/erasure-minio-sets.log"
if [ "$rv" -ne 0 ]; then
cat "$WORK_DIR/erasure-minio-sets.log"
fi
rm -f "$WORK_DIR/erasure-minio-sets.log"
return "$rv"
return "$rv"
}
function run_test_pool_erasure_sets()
{
start_minio_pool_erasure_sets
function run_test_pool_erasure_sets() {
start_minio_pool_erasure_sets
(cd "$WORK_DIR" && "$FUNCTIONAL_TESTS")
rv=$?
(cd "$WORK_DIR" && "$FUNCTIONAL_TESTS")
rv=$?
pkill minio
sleep 3
pkill minio
sleep 3
if [ "$rv" -ne 0 ]; then
for i in $(seq 0 1); do
echo "server$i log:"
cat "$WORK_DIR/pool-minio-900$i.log"
done
fi
if [ "$rv" -ne 0 ]; then
for i in $(seq 0 1); do
echo "server$i log:"
cat "$WORK_DIR/pool-minio-900$i.log"
done
fi
for i in $(seq 0 1); do
rm -f "$WORK_DIR/pool-minio-900$i.log"
done
for i in $(seq 0 1); do
rm -f "$WORK_DIR/pool-minio-900$i.log"
done
return "$rv"
return "$rv"
}
function run_test_pool_erasure_sets_ipv6()
{
start_minio_pool_erasure_sets_ipv6
function run_test_pool_erasure_sets_ipv6() {
start_minio_pool_erasure_sets_ipv6
export SERVER_ENDPOINT="[::1]:9000"
export SERVER_ENDPOINT="[::1]:9000"
(cd "$WORK_DIR" && "$FUNCTIONAL_TESTS")
rv=$?
(cd "$WORK_DIR" && "$FUNCTIONAL_TESTS")
rv=$?
pkill minio
sleep 3
pkill minio
sleep 3
if [ "$rv" -ne 0 ]; then
for i in $(seq 0 1); do
echo "server$i log:"
cat "$WORK_DIR/pool-minio-ipv6-900$i.log"
done
fi
if [ "$rv" -ne 0 ]; then
for i in $(seq 0 1); do
echo "server$i log:"
cat "$WORK_DIR/pool-minio-ipv6-900$i.log"
done
fi
for i in $(seq 0 1); do
rm -f "$WORK_DIR/pool-minio-ipv6-900$i.log"
done
for i in $(seq 0 1); do
rm -f "$WORK_DIR/pool-minio-ipv6-900$i.log"
done
return "$rv"
return "$rv"
}
function run_test_erasure()
{
start_minio_erasure
function run_test_erasure() {
start_minio_erasure
(cd "$WORK_DIR" && "$FUNCTIONAL_TESTS")
rv=$?
(cd "$WORK_DIR" && "$FUNCTIONAL_TESTS")
rv=$?
pkill minio
sleep 3
pkill minio
sleep 3
if [ "$rv" -ne 0 ]; then
cat "$WORK_DIR/erasure-minio.log"
fi
rm -f "$WORK_DIR/erasure-minio.log"
if [ "$rv" -ne 0 ]; then
cat "$WORK_DIR/erasure-minio.log"
fi
rm -f "$WORK_DIR/erasure-minio.log"
return "$rv"
return "$rv"
}
function run_test_dist_erasure()
{
start_minio_dist_erasure
function run_test_dist_erasure() {
start_minio_dist_erasure
(cd "$WORK_DIR" && "$FUNCTIONAL_TESTS")
rv=$?
(cd "$WORK_DIR" && "$FUNCTIONAL_TESTS")
rv=$?
pkill minio
sleep 3
pkill minio
sleep 3
if [ "$rv" -ne 0 ]; then
echo "server1 log:"
cat "$WORK_DIR/dist-minio-9000.log"
echo "server2 log:"
cat "$WORK_DIR/dist-minio-9001.log"
echo "server3 log:"
cat "$WORK_DIR/dist-minio-9002.log"
echo "server4 log:"
cat "$WORK_DIR/dist-minio-9003.log"
fi
if [ "$rv" -ne 0 ]; then
echo "server1 log:"
cat "$WORK_DIR/dist-minio-9000.log"
echo "server2 log:"
cat "$WORK_DIR/dist-minio-9001.log"
echo "server3 log:"
cat "$WORK_DIR/dist-minio-9002.log"
echo "server4 log:"
cat "$WORK_DIR/dist-minio-9003.log"
fi
rm -f "$WORK_DIR/dist-minio-9000.log" "$WORK_DIR/dist-minio-9001.log" "$WORK_DIR/dist-minio-9002.log" "$WORK_DIR/dist-minio-9003.log"
rm -f "$WORK_DIR/dist-minio-9000.log" "$WORK_DIR/dist-minio-9001.log" "$WORK_DIR/dist-minio-9002.log" "$WORK_DIR/dist-minio-9003.log"
return "$rv"
return "$rv"
}
function purge() {
rm -rf "$1"
}
function __init__() {
echo "Initializing environment"
mkdir -p "$WORK_DIR"
mkdir -p "$MINIO_CONFIG_DIR"
mkdir -p "$MINT_DATA_DIR"
MC_BUILD_DIR="mc-$RANDOM"
if ! git clone --quiet https://github.com/minio/mc "$MC_BUILD_DIR"; then
echo "failed to download https://github.com/minio/mc"
purge "${MC_BUILD_DIR}"
exit 1
fi
MC_BUILD_DIR="mc-$RANDOM"
if ! git clone --quiet https://github.com/minio/mc "$MC_BUILD_DIR"; then
echo "failed to download https://github.com/minio/mc"
purge "${MC_BUILD_DIR}"
exit 1
fi
(cd "${MC_BUILD_DIR}" && go build -o "$WORK_DIR/mc")
(cd "${MC_BUILD_DIR}" && go build -o "$WORK_DIR/mc")
# remove mc source.
purge "${MC_BUILD_DIR}"
shred -n 1 -s 1M - 1>"$FILE_1_MB" 2>/dev/null
shred -n 1 -s 65M - 1>"$FILE_65_MB" 2>/dev/null
## version is purposefully set to '3' for minio to migrate configuration file
echo '{"version": "3", "credential": {"accessKey": "minio", "secretKey": "minio123"}, "region": "us-east-1"}' >"$MINIO_CONFIG_DIR/config.json"
if ! wget -q -O "$FUNCTIONAL_TESTS" https://raw.githubusercontent.com/minio/mc/master/functional-tests.sh; then
echo "failed to download https://raw.githubusercontent.com/minio/mc/master/functional-tests.sh"
exit 1
fi
sed -i 's|-sS|-sSg|g' "$FUNCTIONAL_TESTS"
chmod a+x "$FUNCTIONAL_TESTS"
}
function main() {
echo "Testing in FS setup"
if ! run_test_fs; then
echo "FAILED"
purge "$WORK_DIR"
exit 1
fi
echo "Testing in Erasure setup"
if ! run_test_erasure; then
echo "FAILED"
purge "$WORK_DIR"
exit 1
fi
echo "Testing in Erasure setup"
if ! run_test_erasure; then
echo "FAILED"
purge "$WORK_DIR"
exit 1
fi
echo "Testing in Distributed Erasure setup"
if ! run_test_dist_erasure; then
echo "FAILED"
purge "$WORK_DIR"
exit 1
fi
echo "Testing in Distributed Erasure setup"
if ! run_test_dist_erasure; then
echo "FAILED"
purge "$WORK_DIR"
exit 1
fi
echo "Testing in Erasure setup as sets"
if ! run_test_erasure_sets; then
echo "FAILED"
purge "$WORK_DIR"
exit 1
fi
echo "Testing in Erasure setup as sets"
if ! run_test_erasure_sets; then
echo "FAILED"
purge "$WORK_DIR"
exit 1
fi
echo "Testing in Distributed Eraure expanded setup"
if ! run_test_pool_erasure_sets; then
echo "FAILED"
purge "$WORK_DIR"
exit 1
fi
echo "Testing in Distributed Eraure expanded setup"
if ! run_test_pool_erasure_sets; then
echo "FAILED"
purge "$WORK_DIR"
exit 1
fi
echo "Testing in Distributed Erasure expanded setup with ipv6"
if ! run_test_pool_erasure_sets_ipv6; then
echo "FAILED"
purge "$WORK_DIR"
exit 1
fi
echo "Testing in Distributed Erasure expanded setup with ipv6"
if ! run_test_pool_erasure_sets_ipv6; then
echo "FAILED"
purge "$WORK_DIR"
exit 1
fi
purge "$WORK_DIR"
purge "$WORK_DIR"
}
(__init__ "$@" && main "$@")
rv=$?
purge "$WORK_DIR"
exit "$rv"


@@ -4,94 +4,93 @@ set -E
set -o pipefail
set -x
if [ ! -x "$PWD/minio" ]; then
echo "minio executable binary not found in current directory"
exit 1
echo "minio executable binary not found in current directory"
exit 1
fi
WORK_DIR="$(mktemp -d)"
MINIO_CONFIG_DIR="$WORK_DIR/.minio"
MINIO=( "$PWD/minio" --config-dir "$MINIO_CONFIG_DIR" server )
MINIO=("$PWD/minio" --config-dir "$MINIO_CONFIG_DIR" server)
function start_minio() {
start_port=$1
export MINIO_ROOT_USER=minio
export MINIO_ROOT_PASSWORD=minio123
unset MINIO_KMS_AUTO_ENCRYPTION # do not auto-encrypt objects
unset MINIO_CI_CD
unset CI
args=()
for i in $(seq 1 4); do
args+=("http://localhost:$((start_port + i))${WORK_DIR}/mnt/disk$i/ ")
done
for i in $(seq 1 4); do
"${MINIO[@]}" --address ":$((start_port + i))" ${args[@]} 2>&1 >"${WORK_DIR}/server$i.log" &
done
# Wait until all nodes return 403
for i in $(seq 1 4); do
while [ "$(curl -m 1 -s -o /dev/null -w "%{http_code}" http://localhost:$((start_port + i)))" -ne "403" ]; do
echo -n "."
sleep 1
done
done
}
# Prepare fake disks with losetup
function prepare_block_devices() {
set -e
mkdir -p ${WORK_DIR}/disks/ ${WORK_DIR}/mnt/
sudo modprobe loop
for i in 1 2 3 4; do
dd if=/dev/zero of=${WORK_DIR}/disks/img.${i} bs=1M count=2000
device=$(sudo losetup --find --show ${WORK_DIR}/disks/img.${i})
sudo mkfs.ext4 -F ${device}
mkdir -p ${WORK_DIR}/mnt/disk${i}/
sudo mount ${device} ${WORK_DIR}/mnt/disk${i}/
sudo chown "$(id -u):$(id -g)" ${device} ${WORK_DIR}/mnt/disk${i}/
done
set +e
}
# Start a distributed MinIO setup, unmount one disk and check if it is formatted
function main() {
start_port=$(shuf -i 10000-65000 -n 1)
start_minio ${start_port}
# Unmount the disk, after the unmount the device id
# /tmp/xxx/mnt/disk4 will be the same as '/' and it
# will be detected as root disk
while [ "$u" != "0" ]; do
sudo umount ${WORK_DIR}/mnt/disk4/
u=$?
sleep 1
done
# Wait until MinIO self heal kicks in
sleep 60
if [ -f ${WORK_DIR}/mnt/disk4/.minio.sys/format.json ]; then
echo "A root disk is formatted unexpectedly"
cat "${WORK_DIR}/server4.log"
exit -1
fi
}
function cleanup() {
pkill minio
sudo umount ${WORK_DIR}/mnt/disk{1..3}/
sudo rm /dev/minio-loopdisk*
rm -rf "$WORK_DIR"
}
(prepare_block_devices)
(main "$@")
rv=$?
cleanup
exit "$rv"


@@ -5,139 +5,135 @@ set -E
set -o pipefail
if [ ! -x "$PWD/minio" ]; then
echo "minio executable binary not found in current directory"
exit 1
echo "minio executable binary not found in current directory"
exit 1
fi
WORK_DIR="$PWD/.verify-$RANDOM"
MINIO_CONFIG_DIR="$WORK_DIR/.minio"
MINIO=( "$PWD/minio" --config-dir "$MINIO_CONFIG_DIR" server )
MINIO=("$PWD/minio" --config-dir "$MINIO_CONFIG_DIR" server)
function start_minio_3_node() {
export MINIO_ROOT_USER=minio
export MINIO_ROOT_PASSWORD=minio123
export MINIO_ERASURE_SET_DRIVE_COUNT=6
export MINIO_CI_CD=1
start_port=$2
args=""
for i in $(seq 1 3); do
args="$args http://127.0.0.1:$((start_port + i))${WORK_DIR}/$i/1/ http://127.0.0.1:$((start_port + i))${WORK_DIR}/$i/2/ http://127.0.0.1:$((start_port + i))${WORK_DIR}/$i/3/ http://127.0.0.1:$((start_port + i))${WORK_DIR}/$i/4/ http://127.0.0.1:$((start_port + i))${WORK_DIR}/$i/5/ http://127.0.0.1:$((start_port + i))${WORK_DIR}/$i/6/"
done
"${MINIO[@]}" --address ":$((start_port + 1))" $args >"${WORK_DIR}/dist-minio-server1.log" 2>&1 &
pid1=$!
disown ${pid1}
"${MINIO[@]}" --address ":$((start_port + 2))" $args >"${WORK_DIR}/dist-minio-server2.log" 2>&1 &
pid2=$!
disown $pid2
"${MINIO[@]}" --address ":$((start_port + 3))" $args >"${WORK_DIR}/dist-minio-server3.log" 2>&1 &
pid3=$!
disown $pid3
sleep "$1"
if ! ps -p $pid1 1>&2 >/dev/null; then
echo "server1 log:"
cat "${WORK_DIR}/dist-minio-server1.log"
echo "FAILED"
purge "$WORK_DIR"
exit 1
fi
if ! ps -p $pid2 1>&2 >/dev/null; then
echo "server2 log:"
cat "${WORK_DIR}/dist-minio-server2.log"
echo "FAILED"
purge "$WORK_DIR"
exit 1
fi
if ! ps -p $pid3 1>&2 >/dev/null; then
echo "server3 log:"
cat "${WORK_DIR}/dist-minio-server3.log"
echo "FAILED"
purge "$WORK_DIR"
exit 1
fi
if ! pkill minio; then
for i in $(seq 1 3); do
echo "server$i log:"
cat "${WORK_DIR}/dist-minio-server$i.log"
done
echo "FAILED"
purge "$WORK_DIR"
exit 1
fi
sleep 1
if pgrep minio; then
# forcibly killing, to proceed further properly.
if ! pkill -9 minio; then
echo "no minio process running anymore, proceed."
fi
fi
}
function check_online() {
if grep -q 'Unable to initialize sub-systems' ${WORK_DIR}/dist-minio-*.log; then
echo "1"
fi
}
function purge() {
rm -rf "$1"
}
function __init__() {
echo "Initializing environment"
mkdir -p "$WORK_DIR"
mkdir -p "$MINIO_CONFIG_DIR"
## version is purposefully set to '3' for minio to migrate configuration file
echo '{"version": "3", "credential": {"accessKey": "minio", "secretKey": "minio123"}, "region": "us-east-1"}' >"$MINIO_CONFIG_DIR/config.json"
}
function perform_test() {
start_minio_3_node 120 $2
echo "Testing Distributed Erasure setup healing of drives"
echo "Remove the contents of the disks belonging to '${1}' erasure set"
echo "Testing Distributed Erasure setup healing of drives"
echo "Remove the contents of the disks belonging to '${1}' erasure set"
rm -rf ${WORK_DIR}/${1}/*/
start_minio_3_node 120 $2
rv=$(check_online)
if [ "$rv" == "1" ]; then
for i in $(seq 1 3); do
echo "server$i log:"
cat "${WORK_DIR}/dist-minio-server$i.log"
done
pkill -9 minio
echo "FAILED"
purge "$WORK_DIR"
exit 1
fi
}
function main() {
# use same ports for all tests
start_port=$(shuf -i 10000-65000 -n 1)
perform_test "2" ${start_port}
perform_test "1" ${start_port}
perform_test "3" ${start_port}
perform_test "2" ${start_port}
perform_test "1" ${start_port}
perform_test "3" ${start_port}
}
(__init__ "$@" && main "$@")
rv=$?
purge "$WORK_DIR"
exit "$rv"


@@ -22,9 +22,9 @@ import (
"io"
"net/http"
"github.com/gorilla/mux"
xhttp "github.com/minio/minio/internal/http"
"github.com/minio/minio/internal/logger"
"github.com/minio/mux"
"github.com/minio/pkg/bucket/policy"
)
@@ -90,8 +90,8 @@ func (api objectAPIHandlers) PutBucketACLHandler(w http.ResponseWriter, r *http.
if aclHeader == "" {
acl := &accessControlPolicy{}
if err = xmlDecoder(r.Body, acl, r.ContentLength); err != nil {
if terr, ok := err.(*xml.SyntaxError); ok && terr.Msg == io.EOF.Error() {
writeErrorResponse(ctx, w, errorCodes.ToAPIErr(ErrMalformedXML),
r.URL)
return
}


@@ -29,11 +29,10 @@ import (
"strings"
"time"
"github.com/gorilla/mux"
jsoniter "github.com/json-iterator/go"
"github.com/klauspost/compress/zip"
"github.com/minio/kes"
"github.com/minio/madmin-go"
"github.com/minio/kes-go"
"github.com/minio/madmin-go/v3"
"github.com/minio/minio-go/v7/pkg/tags"
"github.com/minio/minio/internal/bucket/lifecycle"
objectlock "github.com/minio/minio/internal/bucket/object/lock"
@@ -41,6 +40,7 @@ import (
"github.com/minio/minio/internal/event"
"github.com/minio/minio/internal/kms"
"github.com/minio/minio/internal/logger"
"github.com/minio/mux"
"github.com/minio/pkg/bucket/policy"
iampolicy "github.com/minio/pkg/iam/policy"
)
@@ -85,11 +85,6 @@ func (a adminAPIHandlers) PutBucketQuotaConfigHandler(w http.ResponseWriter, r *
return
}
if quotaConfig.Type == "fifo" {
writeErrorResponseJSON(ctx, w, errorCodes.ToAPIErr(ErrInvalidRequest), r.URL)
return
}
updatedAt, err := globalBucketMetadataSys.Update(ctx, bucket, bucketQuotaConfigFile, data)
if err != nil {
writeErrorResponse(ctx, w, toAPIError(ctx, err), r.URL)
@@ -107,10 +102,7 @@ func (a adminAPIHandlers) PutBucketQuotaConfigHandler(w http.ResponseWriter, r *
}
// Call site replication hook.
logger.LogIf(ctx, globalSiteReplicationSys.BucketMetaHook(ctx, bucketMeta))
// Write success response.
writeSuccessResponseHeadersOnly(w)
@@ -172,7 +164,7 @@ func (a adminAPIHandlers) SetRemoteTargetHandler(w http.ResponseWriter, r *http.
return
}
cred, _, s3Err := validateAdminSignature(ctx, r, "")
if s3Err != ErrNone {
writeErrorResponseJSON(ctx, w, errorCodes.ToAPIErr(s3Err), r.URL)
return
@@ -201,12 +193,28 @@ func (a adminAPIHandlers) SetRemoteTargetHandler(w http.ResponseWriter, r *http.
if update {
ops = madmin.GetTargetUpdateOps(r.Form)
} else {
var exists bool // true if arn exists
target.Arn, exists = globalBucketTargetSys.getRemoteARN(bucket, &target, "")
if exists && target.Arn != "" { // return pre-existing ARN
data, err := json.Marshal(target.Arn)
if err != nil {
writeErrorResponseJSON(ctx, w, toAdminAPIErr(ctx, err), r.URL)
return
}
// Write success response.
writeSuccessResponseJSON(w, data)
return
}
}
if target.Arn == "" {
writeErrorResponseJSON(ctx, w, errorCodes.ToAPIErrWithErr(ErrAdminConfigBadJSON, err), r.URL)
return
}
if globalSiteReplicationSys.isEnabled() && !update {
writeErrorResponseJSON(ctx, w, errorCodes.ToAPIErrWithErr(ErrRemoteTargetDenyAddError, err), r.URL)
return
}
if update {
// overlay the updates on existing target
tgt := globalBucketTargetSys.GetRemoteBucketTargetByArn(ctx, bucket, target.Arn)
@@ -217,10 +225,14 @@ func (a adminAPIHandlers) SetRemoteTargetHandler(w http.ResponseWriter, r *http.
for _, op := range ops {
switch op {
case madmin.CredentialsUpdateType:
if !globalSiteReplicationSys.isEnabled() {
// credentials update is possible only in bucket replication. User will never
// know the site replicator creds.
tgt.Credentials = target.Credentials
tgt.TargetBucket = target.TargetBucket
tgt.Secure = target.Secure
tgt.Endpoint = target.Endpoint
}
case madmin.SyncUpdateType:
tgt.ReplicationSync = target.ReplicationSync
case madmin.ProxyUpdateType:
@@ -448,7 +460,7 @@ func (a adminAPIHandlers) ExportBucketMetadataHandler(w http.ResponseWriter, r *
return
}
case bucketLifecycleConfig:
config, _, err := globalBucketMetadataSys.GetLifecycleConfig(bucket)
if err != nil {
if errors.Is(err, BucketLifecycleNotFound{Bucket: bucket}) {
continue
@@ -674,8 +686,7 @@ func (a adminAPIHandlers) ImportBucketMetadataHandler(w http.ResponseWriter, r *
continue
}
bucket, fileName := slc[0], slc[1]
if fileName == objectLockConfig {
reader, err := file.Open()
if err != nil {
rpt.SetStatus(bucket, fileName, err)
@@ -694,9 +705,9 @@ func (a adminAPIHandlers) ImportBucketMetadataHandler(w http.ResponseWriter, r *
}
if _, ok := bucketMap[bucket]; !ok {
opts := MakeBucketOptions{
LockEnabled: config.Enabled(),
}
err = objectAPI.MakeBucket(ctx, bucket, opts)
if err != nil {
if _, ok := err.(BucketExists); !ok {
rpt.SetStatus(bucket, fileName, err)
@@ -744,8 +755,7 @@ func (a adminAPIHandlers) ImportBucketMetadataHandler(w http.ResponseWriter, r *
continue
}
bucket, fileName := slc[0], slc[1]
if fileName == bucketVersioningConfig {
reader, err := file.Open()
if err != nil {
rpt.SetStatus(bucket, fileName, err)
@@ -757,7 +767,7 @@ func (a adminAPIHandlers) ImportBucketMetadataHandler(w http.ResponseWriter, r *
continue
}
if _, ok := bucketMap[bucket]; !ok {
if err = objectAPI.MakeBucket(ctx, bucket, MakeBucketOptions{}); err != nil {
if _, ok := err.(BucketExists); !ok {
rpt.SetStatus(bucket, fileName, err)
continue
@@ -809,7 +819,7 @@ func (a adminAPIHandlers) ImportBucketMetadataHandler(w http.ResponseWriter, r *
bucket, fileName := slc[0], slc[1]
// create bucket if it does not exist yet.
if _, ok := bucketMap[bucket]; !ok {
err = objectAPI.MakeBucket(ctx, bucket, MakeBucketOptions{})
if err != nil {
if _, ok := err.(BucketExists); !ok {
rpt.SetStatus(bucket, "", err)
@@ -863,7 +873,7 @@ func (a adminAPIHandlers) ImportBucketMetadataHandler(w http.ResponseWriter, r *
// Version in policy must not be empty
if bucketPolicy.Version == "" {
rpt.SetStatus(bucket, fileName, fmt.Errorf(ErrPolicyInvalidVersion.String()))
continue
}
@@ -1022,11 +1032,6 @@ func (a adminAPIHandlers) ImportBucketMetadataHandler(w http.ResponseWriter, r *
continue
}
if quotaConfig.Type == "fifo" {
rpt.SetStatus(bucket, fileName, fmt.Errorf("Detected older 'fifo' quota config, 'fifo' feature is removed and not supported anymore"))
continue
}
updatedAt, err := globalBucketMetadataSys.Update(ctx, bucket, bucketQuotaConfigFile, data)
if err != nil {
rpt.SetStatus(bucket, fileName, err)


@@ -23,8 +23,8 @@ import (
"fmt"
"net/http"
"github.com/minio/kes"
"github.com/minio/madmin-go"
"github.com/minio/kes-go"
"github.com/minio/madmin-go/v3"
"github.com/minio/minio/internal/auth"
"github.com/minio/minio/internal/config"
iampolicy "github.com/minio/pkg/iam/policy"
@@ -84,7 +84,13 @@ func toAdminAPIErr(ctx context.Context, err error) APIError {
Description: e.Error(),
HTTPStatusCode: http.StatusBadRequest,
}
case config.ErrConfigNotFound:
apiErr = APIError{
Code: "XMinioConfigNotFoundError",
Description: e.Error(),
HTTPStatusCode: http.StatusNotFound,
}
case config.ErrConfigGeneric:
apiErr = APIError{
Code: "XMinioConfigError",
Description: e.Error(),
@@ -148,15 +154,9 @@ func toAdminAPIErr(ctx context.Context, err error) APIError {
Description: err.Error(),
HTTPStatusCode: http.StatusForbidden,
}
case errors.Is(err, errIAMServiceAccountNotAllowed):
apiErr = APIError{
Code: "XMinioIAMServiceAccount",
Description: err.Error(),
HTTPStatusCode: http.StatusBadRequest,
}
case errors.Is(err, errIAMServiceAccountUsed):
apiErr = APIError{
Code: "XMinioIAMServiceAccountUsed",
Code: "XMinioIAMServiceAccountNotAllowed",
Description: err.Error(),
HTTPStatusCode: http.StatusBadRequest,
}
@@ -168,10 +168,16 @@ func toAdminAPIErr(ctx context.Context, err error) APIError {
}
case errors.Is(err, errPolicyInUse):
apiErr = APIError{
Code: "XMinioAdminPolicyInUse",
Code: "XMinioIAMPolicyInUse",
Description: "The policy cannot be removed, as it is in use",
HTTPStatusCode: http.StatusBadRequest,
}
case errors.Is(err, errSessionPolicyTooLarge):
apiErr = APIError{
Code: "XMinioIAMServiceAccountSessionPolicyTooLarge",
Description: err.Error(),
HTTPStatusCode: http.StatusBadRequest,
}
case errors.Is(err, kes.ErrKeyExists):
apiErr = APIError{
Code: "XMinioKMSKeyExists",
@@ -204,24 +210,6 @@ func toAdminAPIErr(ctx context.Context, err error) APIError {
Description: err.Error(),
HTTPStatusCode: http.StatusBadRequest,
}
case errIsTierPermError(err):
apiErr = APIError{
Code: "XMinioAdminTierInsufficientPermissions",
@@ -238,12 +226,10 @@ func toAdminAPIErr(ctx context.Context, err error) APIError {
// toAdminAPIErrCode - converts errErasureWriteQuorum error to admin API
// specific error.
func toAdminAPIErrCode(ctx context.Context, err error) APIErrorCode {
if errors.Is(err, errErasureWriteQuorum) {
return ErrAdminConfigNoQuorum
}
return toAPIErrorCode(ctx, err)
}
// wraps export error for more context


@@ -1,4 +1,4 @@
// Copyright (c) 2015-2023 MinIO, Inc.
//
// This file is part of MinIO Object Storage stack
//
@@ -26,8 +26,7 @@ import (
"strconv"
"strings"
"github.com/gorilla/mux"
"github.com/minio/madmin-go"
"github.com/minio/madmin-go/v3"
"github.com/minio/minio/internal/config"
"github.com/minio/minio/internal/config/cache"
"github.com/minio/minio/internal/config/etcd"
@@ -36,7 +35,9 @@ import (
idplugin "github.com/minio/minio/internal/config/identity/plugin"
polplugin "github.com/minio/minio/internal/config/policy/plugin"
"github.com/minio/minio/internal/config/storageclass"
"github.com/minio/minio/internal/config/subnet"
"github.com/minio/minio/internal/logger"
"github.com/minio/mux"
iampolicy "github.com/minio/pkg/iam/policy"
)
@@ -71,7 +72,7 @@ func (a adminAPIHandlers) DelConfigKVHandler(w http.ResponseWriter, r *http.Requ
return
}
cfg, err := readServerConfig(ctx, objectAPI, nil)
if err != nil {
writeErrorResponseJSON(ctx, w, toAdminAPIErr(ctx, err), r.URL)
return
@@ -82,11 +83,15 @@ func (a adminAPIHandlers) DelConfigKVHandler(w http.ResponseWriter, r *http.Requ
return
}
if err = validateConfig(ctx, cfg, subSys); err != nil {
writeCustomErrorResponseJSON(ctx, w, errorCodes.ToAPIErr(ErrAdminConfigBadJSON), err.Error(), r.URL)
return
}
// Check if subnet proxy being deleted and if so the value of proxy of subnet
// target of logger webhook configuration also should be deleted
loggerWebhookProxyDeleted := setLoggerWebhookSubnetProxy(subSys, cfg)
if err = saveServerConfig(ctx, objectAPI, cfg); err != nil {
writeErrorResponseJSON(ctx, w, toAdminAPIErr(ctx, err), r.URL)
return
@@ -101,6 +106,10 @@ func (a adminAPIHandlers) DelConfigKVHandler(w http.ResponseWriter, r *http.Requ
dynamic := config.SubSystemsDynamic.Contains(subSys)
if dynamic {
applyDynamic(ctx, objectAPI, cfg, subSys, r, w)
if subSys == config.SubnetSubSys && loggerWebhookProxyDeleted {
// Logger webhook proxy deleted, apply the dynamic changes
applyDynamic(ctx, objectAPI, cfg, config.LoggerWebhookSubSys, r, w)
}
}
}
@@ -117,6 +126,27 @@ func applyDynamic(ctx context.Context, objectAPI ObjectLayer, cfg config.Config,
w.Header().Set(madmin.ConfigAppliedHeader, madmin.ConfigAppliedTrue)
}
type badConfigErr struct {
Err error
}
// Error - return the error message
func (bce badConfigErr) Error() string {
return bce.Err.Error()
}
// Unwrap the error to its underlying error.
func (bce badConfigErr) Unwrap() error {
return bce.Err
}
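Reviewer note: the Unwrap method on this wrapper is what keeps the standard errors helpers working through the new badConfigErr type. A minimal, self-contained sketch of the same pattern (wrapErr and errBadRegion are invented names for illustration, not part of this change):

```
package main

import (
	"errors"
	"fmt"
)

// wrapErr mirrors badConfigErr: it tags a validation failure while
// still exposing the underlying error through Unwrap.
type wrapErr struct{ Err error }

func (w wrapErr) Error() string { return w.Err.Error() }
func (w wrapErr) Unwrap() error { return w.Err }

var errBadRegion = errors.New("region is invalid")

func main() {
	err := error(wrapErr{Err: errBadRegion})

	// errors.As matches the wrapper type, which is how the handler
	// can decide to answer with a "bad config" API error...
	var we wrapErr
	fmt.Println(errors.As(err, &we)) // true

	// ...while errors.Is still sees the wrapped sentinel.
	fmt.Println(errors.Is(err, errBadRegion)) // true
}
```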
type setConfigResult struct {
Cfg config.Config
SubSys string
Dynamic bool
LoggerWebhookCfgUpdated bool
}
// SetConfigKVHandler - PUT /minio/admin/v3/set-config-kv
func (a adminAPIHandlers) SetConfigKVHandler(w http.ResponseWriter, r *http.Request) {
ctx := newContext(r, w, "SetConfigKV")
@@ -142,54 +172,77 @@ func (a adminAPIHandlers) SetConfigKVHandler(w http.ResponseWriter, r *http.Requ
return
}
result, err := setConfigKV(ctx, objectAPI, kvBytes)
if err != nil {
switch err.(type) {
case badConfigErr:
writeCustomErrorResponseJSON(ctx, w, errorCodes.ToAPIErr(ErrAdminConfigBadJSON), err.Error(), r.URL)
default:
writeErrorResponseJSON(ctx, w, toAdminAPIErr(ctx, err), r.URL)
}
return
}
if result.Dynamic {
applyDynamic(ctx, objectAPI, result.Cfg, result.SubSys, r, w)
// If logger webhook config updated (proxy due to callhome), explicitly dynamically
// apply the config
if result.LoggerWebhookCfgUpdated {
applyDynamic(ctx, objectAPI, result.Cfg, config.LoggerWebhookSubSys, r, w)
}
}
writeSuccessResponseHeadersOnly(w)
}
func setConfigKV(ctx context.Context, objectAPI ObjectLayer, kvBytes []byte) (result setConfigResult, err error) {
result.Cfg, err = readServerConfig(ctx, objectAPI, nil)
if err != nil {
return
}
result.Dynamic, err = result.Cfg.ReadConfig(bytes.NewReader(kvBytes))
if err != nil {
return
}
result.SubSys, _, _, err = config.GetSubSys(string(kvBytes))
if err != nil {
return
}
tgts, err := config.ParseConfigTargetID(bytes.NewReader(kvBytes))
if err != nil {
return
}
ctx = context.WithValue(ctx, config.ContextKeyForTargetFromConfig, tgts)
if verr := validateConfig(ctx, result.Cfg, result.SubSys); verr != nil {
err = badConfigErr{Err: verr}
return
}
// Check if subnet proxy being set and if so set the same value to proxy of subnet
// target of logger webhook configuration
result.LoggerWebhookCfgUpdated = setLoggerWebhookSubnetProxy(result.SubSys, result.Cfg)
// Update the actual server config on disk.
if err = saveServerConfig(ctx, objectAPI, result.Cfg); err != nil {
return
}
// Write the config input KV to history.
err = saveServerConfigHistory(ctx, objectAPI, kvBytes)
return
}
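Reviewer note: the ParseConfigTargetID / context.WithValue step above is the usual Go pattern for passing request-scoped data into validation code without changing its signature. A rough stand-alone sketch of that pattern (the key name and target value here are made up; only the typed-key idea matches config.ContextKeyForTargetFromConfig):

```
package main

import (
	"context"
	"fmt"
)

// A private key type prevents collisions with context values set by
// other packages, which is why the real code uses a package-level
// typed constant rather than a bare string.
type ctxKey string

const targetsKey ctxKey = "config-targets"

func validate(ctx context.Context) {
	if tgts, ok := ctx.Value(targetsKey).([]string); ok {
		fmt.Println("validating only targets:", tgts)
	}
}

func main() {
	ctx := context.WithValue(context.Background(), targetsKey, []string{"webhook:audit1"})
	validate(ctx)
}
```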
// GetConfigKVHandler - GET /minio/admin/v3/get-config-kv?key={key}
//
// `key` can be one of three forms:
// 1. `subsys:target` -> request for config of a single subsystem and target pair.
// 2. `subsys:` -> request for config of a single subsystem and the default target.
// 3. `subsys` -> request for config of all targets for the given subsystem.
//
// This is a reporting API and config secrets are redacted in the response.
func (a adminAPIHandlers) GetConfigKVHandler(w http.ResponseWriter, r *http.Request) {
ctx := newContext(r, w, "GetConfigKV")
@@ -217,7 +270,7 @@ func (a adminAPIHandlers) GetConfigKVHandler(w http.ResponseWriter, r *http.Requ
}
}
subSysConfigs, err := cfg.GetSubsysInfo(subSys, target, true)
if err != nil {
writeErrorResponseJSON(ctx, w, toAdminAPIErr(ctx, err), r.URL)
return
@@ -296,7 +349,7 @@ func (a adminAPIHandlers) RestoreConfigHistoryKVHandler(w http.ResponseWriter, r
return
}
cfg, err := readServerConfig(ctx, objectAPI, nil)
if err != nil {
writeErrorResponseJSON(ctx, w, toAdminAPIErr(ctx, err), r.URL)
return
@@ -307,7 +360,7 @@ func (a adminAPIHandlers) RestoreConfigHistoryKVHandler(w http.ResponseWriter, r
return
}
if err = validateConfig(cfg, ""); err != nil {
if err = validateConfig(ctx, cfg, ""); err != nil {
writeCustomErrorResponseJSON(ctx, w, errorCodes.ToAPIErr(ErrAdminConfigBadJSON), err.Error(), r.URL)
return
}
@@ -418,7 +471,7 @@ func (a adminAPIHandlers) SetConfigHandler(w http.ResponseWriter, r *http.Reques
return
}
if err = validateConfig(cfg, ""); err != nil {
if err = validateConfig(ctx, cfg, ""); err != nil {
writeCustomErrorResponseJSON(ctx, w, errorCodes.ToAPIErr(ErrAdminConfigBadJSON), err.Error(), r.URL)
return
}
@@ -439,7 +492,9 @@ func (a adminAPIHandlers) SetConfigHandler(w http.ResponseWriter, r *http.Reques
}
// GetConfigHandler - GET /minio/admin/v3/config
// Get config.json of this minio setup.
//
// This endpoint is mainly for exporting and backing up the configuration.
// Secrets are not redacted.
func (a adminAPIHandlers) GetConfigHandler(w http.ResponseWriter, r *http.Request) {
ctx := newContext(r, w, "GetConfig")
@@ -456,7 +511,7 @@ func (a adminAPIHandlers) GetConfigHandler(w http.ResponseWriter, r *http.Reques
hkvs := config.HelpSubSysMap[""]
for _, hkv := range hkvs {
// We ignore the error below, as we cannot get one.
cfgSubsysItems, _ := cfg.GetSubsysInfo(hkv.Key, "")
cfgSubsysItems, _ := cfg.GetSubsysInfo(hkv.Key, "", false)
for _, item := range cfgSubsysItems {
off := item.Config.Get(config.Enable) == config.EnableOff
@@ -474,7 +529,7 @@ func (a adminAPIHandlers) GetConfigHandler(w http.ResponseWriter, r *http.Reques
case config.IdentityLDAPSubSys:
off = !xldap.Enabled(item.Config)
case config.IdentityTLSSubSys:
off = !globalIAMSys.STSTLSConfig.Enabled
case config.IdentityPluginSubSys:
off = !idplugin.Enabled(item.Config)
}
@@ -491,3 +546,18 @@ func (a adminAPIHandlers) GetConfigHandler(w http.ResponseWriter, r *http.Reques
writeSuccessResponseJSON(w, econfigData)
}
// setLoggerWebhookSubnetProxy - Sets the logger webhook's subnet proxy value to
// one being set for subnet proxy
func setLoggerWebhookSubnetProxy(subSys string, cfg config.Config) bool {
if subSys == config.SubnetSubSys || subSys == config.LoggerWebhookSubSys {
subnetWebhookCfg := cfg[config.LoggerWebhookSubSys][subnet.LoggerWebhookName]
loggerWebhookSubnetProxy := subnetWebhookCfg.Get(logger.Proxy)
subnetProxy := cfg[config.SubnetSubSys][config.Default].Get(logger.Proxy)
if loggerWebhookSubnetProxy != subnetProxy {
subnetWebhookCfg.Set(logger.Proxy, subnetProxy)
return true
}
}
return false
}
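Reviewer note: the function above is a copy-if-different over the subsystem/target/key layout, returning true only when it actually rewrote the value, so callers know whether a dynamic re-apply is needed. A toy model of that contract (the nested-map type is a simplification of config.Config, and the subsystem/key names are placeholders):

```
package main

import "fmt"

// conf is a toy stand-in for config.Config: subsystem -> target -> key/value.
type conf map[string]map[string]map[string]string

// mirrorProxy copies the subnet proxy onto the logger webhook target and
// reports whether anything changed, matching the contract of
// setLoggerWebhookSubnetProxy above.
func mirrorProxy(c conf) bool {
	src := c["subnet"]["_"]["proxy"]
	if c["logger_webhook"]["subnet"]["proxy"] != src {
		c["logger_webhook"]["subnet"]["proxy"] = src
		return true
	}
	return false
}

func main() {
	c := conf{
		"subnet":         {"_": {"proxy": "http://proxy.local:8080"}},
		"logger_webhook": {"subnet": {"proxy": ""}},
	}
	fmt.Println(mirrorProxy(c)) // true: value copied, re-apply needed
	fmt.Println(mirrorProxy(c)) // false: already in sync
}
```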


@@ -26,18 +26,18 @@ import (
"net/http"
"strings"
"github.com/gorilla/mux"
"github.com/minio/madmin-go"
"github.com/minio/madmin-go/v3"
"github.com/minio/minio-go/v7/pkg/set"
"github.com/minio/minio/internal/config"
cfgldap "github.com/minio/minio/internal/config/identity/ldap"
"github.com/minio/minio/internal/config/identity/openid"
"github.com/minio/minio/internal/logger"
"github.com/minio/mux"
iampolicy "github.com/minio/pkg/iam/policy"
"github.com/minio/pkg/ldap"
)
func addOrUpdateIDPHandler(ctx context.Context, w http.ResponseWriter, r *http.Request, isUpdate bool) {
objectAPI, cred := validateAdminReq(ctx, w, r, iampolicy.ConfigUpdateAdminAction)
if objectAPI == nil {
return
@@ -86,7 +86,7 @@ func (a adminAPIHandlers) addOrUpdateIDPHandler(ctx context.Context, w http.Resp
if idpCfgType == madmin.LDAPIDPCfg && cfgName != madmin.Default {
// LDAP does not support multiple configurations. So cfgName must be
// empty or `madmin.Default`.
writeErrorResponseJSON(ctx, w, errorCodes.ToAPIErr(ErrAdminConfigLDAPNonDefaultConfigName), r.URL)
return
}
}
@@ -107,7 +107,7 @@ func (a adminAPIHandlers) addOrUpdateIDPHandler(ctx context.Context, w http.Resp
cfgData = subSys + tgtSuffix + config.KvSpaceSeparator + string(reqBytes)
}
cfg, err := readServerConfig(ctx, objectAPI, nil)
if err != nil {
writeErrorResponseJSON(ctx, w, toAdminAPIErr(ctx, err), r.URL)
return
@@ -125,7 +125,7 @@ func (a adminAPIHandlers) addOrUpdateIDPHandler(ctx context.Context, w http.Resp
return
}
if err = validateConfig(ctx, cfg, subSys); err != nil {
var validationErr ldap.Validation
if errors.As(err, &validationErr) {
@@ -178,9 +178,9 @@ func handleCreateUpdateValidation(s config.Config, subSys, cfgTarget string, isU
var cfgInfos []madmin.IDPCfgInfo
switch subSys {
case madmin.IdentityOpenIDSubSys:
cfgInfos, _ = globalIAMSys.OpenIDConfig.GetConfigInfo(s, cfgTarget)
case madmin.IdentityLDAPSubSys:
cfgInfos, _ = globalIAMSys.LDAPConfig.GetConfigInfo(s, cfgTarget)
}
if len(cfgInfos) > 0 && !isUpdate {
@@ -201,19 +201,19 @@ func (a adminAPIHandlers) AddIdentityProviderCfg(w http.ResponseWriter, r *http.
ctx := newContext(r, w, "AddIdentityProviderCfg")
defer logger.AuditLog(ctx, w, r, mustGetClaimsFromToken(r))
addOrUpdateIDPHandler(ctx, w, r, false)
}
// UpdateIdentityProviderCfg: updates an existing IDP config for openid/ldap.
//
// PATCH <admin-prefix>/idp-cfg/openid/dex1 -> update named config `dex1`
// POST <admin-prefix>/idp-cfg/openid/dex1 -> update named config `dex1`
//
// PATCH <admin-prefix>/idp-cfg/openid/_ -> update (default) named config `_`
// POST <admin-prefix>/idp-cfg/openid/_ -> update (default) named config `_`
func (a adminAPIHandlers) UpdateIdentityProviderCfg(w http.ResponseWriter, r *http.Request) {
ctx := newContext(r, w, "UpdateIdentityProviderCfg")
defer logger.AuditLog(ctx, w, r, mustGetClaimsFromToken(r))
addOrUpdateIDPHandler(ctx, w, r, true)
}
// ListIdentityProviderCfg:
@@ -240,10 +240,10 @@ func (a adminAPIHandlers) ListIdentityProviderCfg(w http.ResponseWriter, r *http
switch idpCfgType {
case madmin.OpenidIDPCfg:
cfg := globalServerConfig.Clone()
cfgList, err = globalIAMSys.OpenIDConfig.GetConfigList(cfg)
case madmin.LDAPIDPCfg:
cfg := globalServerConfig.Clone()
cfgList, err = globalIAMSys.LDAPConfig.GetConfigList(cfg)
default:
writeErrorResponseJSON(ctx, w, errorCodes.ToAPIErr(ErrNotImplemented), r.URL)
@@ -296,9 +296,9 @@ func (a adminAPIHandlers) GetIdentityProviderCfg(w http.ResponseWriter, r *http.
var err error
switch idpCfgType {
case madmin.OpenidIDPCfg:
cfgInfos, err = globalIAMSys.OpenIDConfig.GetConfigInfo(cfg, cfgName)
case madmin.LDAPIDPCfg:
cfgInfos, err = globalIAMSys.LDAPConfig.GetConfigInfo(cfg, cfgName)
}
if err != nil {
if errors.Is(err, openid.ErrProviderConfigNotFound) || errors.Is(err, cfgldap.ErrProviderConfigNotFound) {
@@ -355,7 +355,7 @@ func (a adminAPIHandlers) DeleteIdentityProviderCfg(w http.ResponseWriter, r *ht
switch idpCfgType {
case madmin.OpenidIDPCfg:
subSys = config.IdentityOpenIDSubSys
cfgInfos, err := globalIAMSys.OpenIDConfig.GetConfigInfo(cfgCopy, cfgName)
if err != nil {
if errors.Is(err, openid.ErrProviderConfigNotFound) {
writeErrorResponseJSON(ctx, w, errorCodes.ToAPIErr(ErrAdminNoSuchConfigTarget), r.URL)
@@ -380,7 +380,7 @@ func (a adminAPIHandlers) DeleteIdentityProviderCfg(w http.ResponseWriter, r *ht
}
case madmin.LDAPIDPCfg:
subSys = config.IdentityLDAPSubSys
cfgInfos, err := globalIAMSys.LDAPConfig.GetConfigInfo(cfgCopy, cfgName)
if err != nil {
if errors.Is(err, openid.ErrProviderConfigNotFound) {
writeErrorResponseJSON(ctx, w, errorCodes.ToAPIErr(ErrAdminNoSuchConfigTarget), r.URL)
@@ -408,7 +408,7 @@ func (a adminAPIHandlers) DeleteIdentityProviderCfg(w http.ResponseWriter, r *ht
return
}
cfg, err := readServerConfig(ctx, objectAPI, nil)
if err != nil {
writeErrorResponseJSON(ctx, w, toAdminAPIErr(ctx, err), r.URL)
return
@@ -422,7 +422,17 @@ func (a adminAPIHandlers) DeleteIdentityProviderCfg(w http.ResponseWriter, r *ht
writeErrorResponseJSON(ctx, w, toAdminAPIErr(ctx, err), r.URL)
return
}
if err = validateConfig(ctx, cfg, subSys); err != nil {
var validationErr ldap.Validation
if errors.As(err, &validationErr) {
// If we got an LDAP validation error, we need to send appropriate
// error message back to client (likely mc).
writeCustomErrorResponseJSON(ctx, w, errorCodes.ToAPIErr(ErrAdminConfigLDAPValidation),
validationErr.FormatError(), r.URL)
return
}
writeCustomErrorResponseJSON(ctx, w, errorCodes.ToAPIErr(ErrAdminConfigBadJSON), err.Error(), r.URL)
return
}


@@ -19,10 +19,12 @@ package cmd
import (
"encoding/json"
"io"
"net/http"
"github.com/minio/madmin-go"
"github.com/minio/madmin-go/v3"
"github.com/minio/minio/internal/logger"
"github.com/minio/mux"
iampolicy "github.com/minio/pkg/iam/policy"
)
@@ -86,3 +88,94 @@ func (a adminAPIHandlers) ListLDAPPolicyMappingEntities(w http.ResponseWriter, r
}
writeSuccessResponseJSON(w, econfigData)
}
// AttachDetachPolicyLDAP attaches or detaches policies from an LDAP entity
// (user or group).
//
// POST <admin-prefix>/idp/ldap/policy/{operation}
func (a adminAPIHandlers) AttachDetachPolicyLDAP(w http.ResponseWriter, r *http.Request) {
ctx := newContext(r, w, "AttachDetachPolicyLDAP")
defer logger.AuditLog(ctx, w, r, mustGetClaimsFromToken(r))
// Check authorization.
objectAPI, cred := validateAdminReq(ctx, w, r, iampolicy.UpdatePolicyAssociationAction)
if objectAPI == nil {
return
}
if r.ContentLength > maxEConfigJSONSize || r.ContentLength == -1 {
// More than maxConfigSize bytes were available
writeErrorResponseJSON(ctx, w, errorCodes.ToAPIErr(ErrAdminConfigTooLarge), r.URL)
return
}
// Ensure body content type is opaque to ensure that request body has not
// been interpreted as form data.
contentType := r.Header.Get("Content-Type")
if contentType != "application/octet-stream" {
writeErrorResponseJSON(ctx, w, errorCodes.ToAPIErr(ErrBadRequest), r.URL)
return
}
// Validate operation
operation := mux.Vars(r)["operation"]
if operation != "attach" && operation != "detach" {
writeErrorResponseJSON(ctx, w, errorCodes.ToAPIErr(ErrAdminInvalidArgument), r.URL)
return
}
isAttach := operation == "attach"
// Validate API arguments in body.
password := cred.SecretKey
reqBytes, err := madmin.DecryptData(password, io.LimitReader(r.Body, r.ContentLength))
if err != nil {
logger.LogIf(ctx, err, logger.Application)
writeErrorResponseJSON(ctx, w, errorCodes.ToAPIErr(ErrAdminConfigBadJSON), r.URL)
return
}
var par madmin.PolicyAssociationReq
err = json.Unmarshal(reqBytes, &par)
if err != nil {
writeErrorResponseJSON(ctx, w, errorCodes.ToAPIErr(ErrInvalidRequest), r.URL)
return
}
if err := par.IsValid(); err != nil {
writeErrorResponseJSON(ctx, w, errorCodes.ToAPIErr(ErrAdminConfigBadJSON), r.URL)
return
}
// Call IAM subsystem
updatedAt, addedOrRemoved, _, err := globalIAMSys.PolicyDBUpdateLDAP(ctx, isAttach, par)
if err != nil {
writeErrorResponseJSON(ctx, w, toAdminAPIErr(ctx, err), r.URL)
return
}
respBody := madmin.PolicyAssociationResp{
UpdatedAt: updatedAt,
}
if isAttach {
respBody.PoliciesAttached = addedOrRemoved
} else {
respBody.PoliciesDetached = addedOrRemoved
}
data, err := json.Marshal(respBody)
if err != nil {
writeErrorResponseJSON(ctx, w, toAdminAPIErr(ctx, err), r.URL)
return
}
encryptedData, err := madmin.EncryptData(password, data)
if err != nil {
writeErrorResponseJSON(ctx, w, toAdminAPIErr(ctx, err), r.URL)
return
}
writeSuccessResponseJSON(w, encryptedData)
}
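Reviewer note: from the client side this endpoint is driven through madmin-go. A sketch of what a caller might look like, assuming the v3 client exposes an AttachPolicyLDAP helper matching this handler; the endpoint, credentials, policy name, and user DN below are placeholders:

```
package main

import (
	"context"
	"log"

	"github.com/minio/madmin-go/v3"
	"github.com/minio/minio-go/v7/pkg/credentials"
)

func main() {
	// Placeholder endpoint and root credentials.
	adm, err := madmin.NewWithOptions("localhost:9000", &madmin.Options{
		Creds: credentials.NewStaticV4("minio", "minio123", ""),
	})
	if err != nil {
		log.Fatal(err)
	}

	// Attach a policy to an LDAP user DN; the request body is encrypted
	// with the caller's secret key, matching the handler's DecryptData.
	resp, err := adm.AttachPolicyLDAP(context.Background(), madmin.PolicyAssociationReq{
		Policies: []string{"readwrite"},
		User:     "uid=bob,ou=people,dc=example,dc=org",
	})
	if err != nil {
		log.Fatal(err)
	}
	log.Println("attached at:", resp.UpdatedAt)
}
```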


@@ -22,9 +22,10 @@ import (
"errors"
"fmt"
"net/http"
"strings"
"github.com/gorilla/mux"
"github.com/minio/minio/internal/logger"
"github.com/minio/mux"
iampolicy "github.com/minio/pkg/iam/policy"
)
@@ -49,28 +50,53 @@ func (a adminAPIHandlers) StartDecommission(w http.ResponseWriter, r *http.Reque
return
}
z, ok := objectAPI.(*erasureServerPools)
if !ok || len(z.serverPools) == 1 {
writeErrorResponseJSON(ctx, w, errorCodes.ToAPIErr(ErrNotImplemented), r.URL)
return
}
if z.IsDecommissionRunning() {
writeErrorResponseJSON(ctx, w, toAdminAPIErr(ctx, errDecommissionAlreadyRunning), r.URL)
return
}
if z.IsRebalanceStarted() {
writeErrorResponseJSON(ctx, w, errorCodes.ToAPIErr(ErrAdminRebalanceAlreadyStarted), r.URL)
return
}
vars := mux.Vars(r)
v := vars["pool"]
pools := strings.Split(v, ",")
poolIndices := make([]int, 0, len(pools))
for _, pool := range pools {
idx := globalEndpoints.GetPoolIdx(pool)
if idx == -1 {
// We didn't find any matching pools, invalid input
writeErrorResponseJSON(ctx, w, toAdminAPIErr(ctx, errInvalidArgument), r.URL)
return
}
var pool *erasureSets
for pidx := range z.serverPools {
if pidx == idx {
pool = z.serverPools[idx]
break
}
}
if pool == nil {
// We didn't find any matching pools, invalid input
writeErrorResponseJSON(ctx, w, toAdminAPIErr(ctx, errInvalidArgument), r.URL)
return
}
poolIndices = append(poolIndices, idx)
}
if len(poolIndices) > 0 && !globalEndpoints[poolIndices[0]].Endpoints[0].IsLocal {
ep := globalEndpoints[poolIndices[0]].Endpoints[0]
for nodeIdx, proxyEp := range globalProxyEndpoints {
if proxyEp.Endpoint.Host == ep.Host {
if proxyRequestByNodeIndex(ctx, w, r, nodeIdx) {
@@ -80,7 +106,7 @@ func (a adminAPIHandlers) StartDecommission(w http.ResponseWriter, r *http.Reque
}
}
if err := z.Decommission(r.Context(), poolIndices...); err != nil {
writeErrorResponseJSON(ctx, w, toAdminAPIErr(ctx, err), r.URL)
return
}
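Reviewer note: the handler now accepts a comma-separated list of pool endpoint arguments rather than a single pool. A compact illustration of the split-validate-collect loop, with a toy lookup standing in for globalEndpoints.GetPoolIdx (the pool strings are invented examples):

```
package main

import (
	"fmt"
	"strings"
)

// poolIdx is a toy stand-in for globalEndpoints.GetPoolIdx.
func poolIdx(known []string, arg string) int {
	for i, p := range known {
		if p == arg {
			return i
		}
	}
	return -1
}

func main() {
	known := []string{"http://srv{1...4}/disk{1...4}", "http://srv{5...8}/disk{1...4}"}
	arg := known[0] + "," + known[1] // e.g. the {pool} path variable

	indices := make([]int, 0)
	for _, p := range strings.Split(arg, ",") {
		idx := poolIdx(known, p)
		if idx == -1 {
			fmt.Println("invalid pool:", p) // the handler answers errInvalidArgument here
			return
		}
		indices = append(indices, idx)
	}
	fmt.Println(indices) // [0 1] -> handed to Decommission(ctx, indices...)
}
```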


@@ -20,16 +20,19 @@ package cmd
import (
"bytes"
"context"
"encoding/gob"
"encoding/json"
"errors"
"io"
"net/http"
"strings"
"sync/atomic"
"time"
"github.com/gorilla/mux"
"github.com/minio/madmin-go"
"github.com/dustin/go-humanize"
"github.com/minio/madmin-go/v3"
"github.com/minio/minio/internal/logger"
"github.com/minio/mux"
"github.com/minio/pkg/bucket/policy"
iampolicy "github.com/minio/pkg/iam/policy"
)
@@ -114,37 +117,23 @@ func (a adminAPIHandlers) SRPeerBucketOps(w http.ResponseWriter, r *http.Request
default:
err = errSRInvalidRequest(errInvalidArgument)
case madmin.MakeWithVersioningBktOp:
_, isLockEnabled := r.Form["lockEnabled"]
_, isVersioningEnabled := r.Form["versioningEnabled"]
_, isForceCreate := r.Form["forceCreate"]
createdAtStr := strings.TrimSpace(r.Form.Get("createdAt"))
createdAt, cerr := time.Parse(time.RFC3339Nano, createdAtStr)
createdAt, cerr := time.Parse(time.RFC3339Nano, strings.TrimSpace(r.Form.Get("createdAt")))
if cerr != nil {
createdAt = timeSentinel
}
opts := MakeBucketOptions{
Location: r.Form.Get("location"),
LockEnabled: r.Form.Get("lockEnabled") == "true",
VersioningEnabled: r.Form.Get("versioningEnabled") == "true",
ForceCreate: r.Form.Get("forceCreate") == "true",
CreatedAt: createdAt,
}
err = globalSiteReplicationSys.PeerBucketMakeWithVersioningHandler(ctx, bucket, opts)
case madmin.ConfigureReplBktOp:
err = globalSiteReplicationSys.PeerBucketConfigureReplHandler(ctx, bucket)
case madmin.DeleteBucketBktOp, madmin.ForceDeleteBucketBktOp:
err = globalSiteReplicationSys.PeerBucketDeleteHandler(ctx, bucket, DeleteBucketOptions{
Force: operation == madmin.ForceDeleteBucketBktOp,
SRDeleteOp: getSRBucketDeleteOp(true),
})
case madmin.PurgeDeletedBucketOp:
@@ -557,3 +546,49 @@ func (a adminAPIHandlers) SiteReplicationResyncOp(w http.ResponseWriter, r *http
}
writeSuccessResponseJSON(w, body)
}
// SiteReplicationDevNull - everything goes to io.Discard
// [POST] /minio/admin/v3/site-replication/devnull
func (a adminAPIHandlers) SiteReplicationDevNull(w http.ResponseWriter, r *http.Request) {
ctx := newContext(r, w, "SiteReplicationDevNull")
defer logger.AuditLog(ctx, w, r, mustGetClaimsFromToken(r))
globalSiteNetPerfRX.Connect()
defer globalSiteNetPerfRX.Disconnect()
connectTime := time.Now()
for {
n, err := io.CopyN(io.Discard, r.Body, 128*humanize.KiByte)
atomic.AddUint64(&globalSiteNetPerfRX.RX, uint64(n))
if err != nil && err != io.EOF && err != io.ErrUnexpectedEOF {
// If there is a disconnection before globalNetPerfMinDuration (we give a margin of error of 1 sec)
// would mean the network is not stable. Logging here will help in debugging network issues.
if time.Since(connectTime) < (globalNetPerfMinDuration - time.Second) {
logger.LogIf(ctx, err)
}
}
if err != nil {
if errors.Is(err, io.EOF) {
w.WriteHeader(http.StatusNoContent)
} else {
w.WriteHeader(http.StatusBadRequest)
}
break
}
}
}
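Reviewer note: the receive loop above is a plain count-and-discard pattern: copy fixed-size chunks into io.Discard, add whatever was actually copied to an atomic counter, and stop on EOF. The same loop in isolation, with a strings.Reader standing in for the request body:

```
package main

import (
	"fmt"
	"io"
	"strings"
	"sync/atomic"
)

func main() {
	var rx uint64
	body := strings.NewReader(strings.Repeat("x", 300)) // stand-in for r.Body

	for {
		// CopyN reports the bytes it managed to move even when it
		// returns io.EOF, so the counter stays exact on the last chunk.
		n, err := io.CopyN(io.Discard, body, 128)
		atomic.AddUint64(&rx, uint64(n))
		if err != nil {
			break // io.EOF once the stream is drained
		}
	}
	fmt.Println(atomic.LoadUint64(&rx)) // 300
}
```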
// SiteReplicationNetPerf - everything goes to io.Discard
// [POST] /minio/admin/v3/site-replication/netperf
func (a adminAPIHandlers) SiteReplicationNetPerf(w http.ResponseWriter, r *http.Request) {
ctx := newContext(r, w, "SiteReplicationNetPerf")
defer logger.AuditLog(ctx, w, r, mustGetClaimsFromToken(r))
durationStr := r.Form.Get(peerRESTDuration)
duration, _ := time.ParseDuration(durationStr)
if duration < globalNetPerfMinDuration {
duration = globalNetPerfMinDuration
}
result := siteNetperf(r.Context(), duration)
logger.LogIf(r.Context(), gob.NewEncoder(w).Encode(result))
}


@@ -27,12 +27,12 @@ import (
"context"
"fmt"
"runtime"
"sync"
"testing"
"time"
"github.com/minio/madmin-go"
"github.com/minio/madmin-go/v3"
minio "github.com/minio/minio-go/v7"
"github.com/minio/pkg/sync/errgroup"
)
func runAllIAMConcurrencyTests(suite *TestSuiteIAM, c *check) {
@@ -129,18 +129,21 @@ func (s *TestSuiteIAM) TestDeleteUserRace(c *check) {
secretKeys[i] = secretKey
}
g := errgroup.Group{}
for i := 0; i < userCount; i++ {
g.Go(func(i int) func() error {
return func() error {
uClient := s.getUserClient(c, accessKeys[i], secretKeys[i], "")
err := s.adm.RemoveUser(ctx, accessKeys[i])
if err != nil {
return err
}
c.mustNotListObjects(ctx, uClient, bucket)
return nil
}
}(i), i)
}
if errs := g.Wait(); len(errs) > 0 {
c.Fatalf("unable to remove users: %v", errs)
}
}
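Reviewer note: the switch from sync.WaitGroup to minio/pkg's errgroup is what lets the test collect per-goroutine errors instead of calling c.Fatalf inside a goroutine, which the testing package does not allow. A minimal sketch of the indexed variant, here using the WithNErrs constructor so each worker writes into a fixed slot:

```
package main

import (
	"fmt"

	"github.com/minio/pkg/sync/errgroup"
)

func main() {
	const jobs = 3
	g := errgroup.WithNErrs(jobs)
	for i := 0; i < jobs; i++ {
		i := i
		// The index ties each worker to one slot of the slice that
		// Wait returns, so results never race or interleave.
		g.Go(func() error {
			if i == 1 {
				return fmt.Errorf("job %d failed", i)
			}
			return nil
		}, i)
	}
	for idx, err := range g.Wait() {
		if err != nil {
			fmt.Printf("job %d: %v\n", idx, err)
		}
	}
}
```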


@@ -33,10 +33,9 @@ import (
"testing"
"time"
"github.com/minio/madmin-go"
"github.com/minio/madmin-go/v3"
"github.com/minio/minio-go/v7"
"github.com/minio/minio-go/v7/pkg/credentials"
cr "github.com/minio/minio-go/v7/pkg/credentials"
"github.com/minio/minio-go/v7/pkg/s3utils"
"github.com/minio/minio-go/v7/pkg/set"
"github.com/minio/minio-go/v7/pkg/signer"
@@ -825,7 +824,7 @@ func (s *TestSuiteIAM) TestGroupAddRemove(c *check) {
if set.CreateStringSet(groups...).Contains(group) {
c.Fatalf("created group still present!")
}
_, err = s.adm.GetGroupDescription(ctx, group)
if err == nil {
c.Fatalf("group appears to exist")
}
@@ -877,7 +876,7 @@ func (s *TestSuiteIAM) TestServiceAccountOpsByUser(c *check) {
// Create an madmin client with user creds
userAdmClient, err := madmin.NewWithOptions(s.endpoint, &madmin.Options{
Creds: credentials.NewStaticV4(accessKey, secretKey, ""),
Secure: s.secure,
})
if err != nil {
@@ -1072,7 +1071,7 @@ func (s *TestSuiteIAM) TestAccMgmtPlugin(c *check) {
// Create an madmin client with user creds
userAdmClient, err := madmin.NewWithOptions(s.endpoint, &madmin.Options{
Creds: credentials.NewStaticV4(accessKey, secretKey, ""),
Secure: s.secure,
})
if err != nil {
@@ -1136,7 +1135,7 @@ func (s *TestSuiteIAM) TestAccMgmtPlugin(c *check) {
c.assertSvcAccDeletion(ctx, s, userAdmClient, accessKey, bucket)
// 6. Check that service account **can** be created for some other user.
// This is possible because of the policy enforced in the plugin.
c.mustCreateSvcAccount(ctx, globalActiveCred.AccessKey, userAdmClient)
}
@@ -1261,6 +1260,52 @@ func (c *check) mustListBuckets(ctx context.Context, client *minio.Client) {
}
}
func (c *check) mustNotDelete(ctx context.Context, client *minio.Client, bucket string, vid string) {
c.Helper()
err := client.RemoveObject(ctx, bucket, "some-object", minio.RemoveObjectOptions{VersionID: vid})
if err == nil {
c.Fatalf("user must not be allowed to delete")
}
err = client.RemoveObject(ctx, bucket, "some-object", minio.RemoveObjectOptions{})
if err != nil {
c.Fatal("user must be able to create delete marker")
}
}
func (c *check) mustDownload(ctx context.Context, client *minio.Client, bucket string) {
c.Helper()
rd, err := client.GetObject(ctx, bucket, "some-object", minio.GetObjectOptions{})
if err != nil {
c.Fatalf("download did not succeed got %#v", err)
}
if _, err = io.Copy(io.Discard, rd); err != nil {
c.Fatalf("download did not succeed got %#v", err)
}
}
func (c *check) mustUploadReturnVersions(ctx context.Context, client *minio.Client, bucket string) []string {
c.Helper()
versions := []string{}
for i := 0; i < 5; i++ {
ui, err := client.PutObject(ctx, bucket, "some-object", bytes.NewBuffer([]byte("stuff")), 5, minio.PutObjectOptions{})
if err != nil {
c.Fatalf("upload did not succeed got %#v", err)
}
versions = append(versions, ui.VersionID)
}
return versions
}
func (c *check) mustUpload(ctx context.Context, client *minio.Client, bucket string) {
c.Helper()
_, err := client.PutObject(ctx, bucket, "some-object", bytes.NewBuffer([]byte("stuff")), 5, minio.PutObjectOptions{})
if err != nil {
c.Fatalf("upload did not succeed got %#v", err)
}
}
func (c *check) mustNotUpload(ctx context.Context, client *minio.Client, bucket string) {
c.Helper()
_, err := client.PutObject(ctx, bucket, "some-object", bytes.NewBuffer([]byte("stuff")), 5, minio.PutObjectOptions{})
@@ -1283,7 +1328,11 @@ func (c *check) assertSvcAccAppearsInListing(ctx context.Context, madmClient *ma
if err != nil {
c.Fatalf("unable to list svc accounts: %v", err)
}
var accessKeys []string
for _, item := range listResp.Accounts {
accessKeys = append(accessKeys, item.AccessKey)
}
if !set.CreateStringSet(accessKeys...).Contains(svcAK) {
c.Fatalf("service account did not appear in listing!")
}
}

View File

@@ -32,7 +32,6 @@ import (
"hash/crc32"
"io"
"math"
"math/rand"
"net/http"
"net/url"
"os"
@@ -45,17 +44,18 @@ import (
"time"
"github.com/dustin/go-humanize"
"github.com/gorilla/mux"
"github.com/klauspost/compress/zip"
"github.com/minio/madmin-go"
"github.com/minio/madmin-go/estream"
"github.com/minio/madmin-go/v3"
"github.com/minio/madmin-go/v3/estream"
"github.com/minio/minio-go/v7/pkg/set"
"github.com/minio/minio/internal/dsync"
"github.com/minio/minio/internal/handlers"
xhttp "github.com/minio/minio/internal/http"
"github.com/minio/minio/internal/kms"
"github.com/minio/minio/internal/logger"
"github.com/minio/minio/internal/logger/message/log"
"github.com/minio/mux"
iampolicy "github.com/minio/pkg/iam/policy"
"github.com/minio/pkg/logger/message/log"
xnet "github.com/minio/pkg/net"
"github.com/secure-io/sio-go"
)
@@ -297,15 +297,12 @@ type ServerProperties struct {
SQSARN []string `json:"sqsARN"`
}
// serverConnStats holds transferred bytes from/to the server
type serverConnStats struct {
internodeInputBytes uint64
internodeOutputBytes uint64
s3InputBytes uint64
s3OutputBytes uint64
}
// ServerHTTPAPIStats holds total number of HTTP operations from/to the server,
@@ -344,8 +341,7 @@ func (a adminAPIHandlers) StorageInfoHandler(w http.ResponseWriter, r *http.Requ
return
}
// ignores any errors here.
storageInfo := objectAPI.StorageInfo(ctx)
// Collect any disk healing.
healing, _ := getAggregatedBackgroundHealState(ctx, nil)
@@ -534,16 +530,19 @@ func lriToLockEntry(l lockRequesterInfo, now time.Time, resource, server string)
func topLockEntries(peerLocks []*PeerLocks, stale bool) madmin.LockEntries {
now := time.Now().UTC()
entryMap := make(map[string]*madmin.LockEntry)
toEntry := func(lri lockRequesterInfo) string {
return fmt.Sprintf("%s/%s", lri.Name, lri.UID)
}
for _, peerLock := range peerLocks {
if peerLock == nil {
continue
}
for k, v := range peerLock.Locks {
for _, lockReqInfo := range v {
if val, ok := entryMap[lockReqInfo.Name]; ok {
if val, ok := entryMap[toEntry(lockReqInfo)]; ok {
val.ServerList = append(val.ServerList, peerLock.Addr)
} else {
entryMap[lockReqInfo.Name] = lriToLockEntry(lockReqInfo, now, k, peerLock.Addr)
entryMap[toEntry(lockReqInfo)] = lriToLockEntry(lockReqInfo, now, k, peerLock.Addr)
}
}
}
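The key change here: lock entries are now aggregated under a `resource/UID` composite key rather than the resource name alone, so two independent lock requests on the same resource are no longer merged, while the same request seen from several peers still collapses into one entry. A minimal, self-contained sketch of that pattern (the types and names below are illustrative, not MinIO's):

```go
package main

import "fmt"

type lockInfo struct{ Resource, UID string }

type lockEntry struct {
	lockInfo
	ServerList []string
}

// aggregate merges the same lock request reported by several peers into one
// entry, while keeping distinct requests on the same resource separate by
// keying on a resource/UID composite (illustrative, not MinIO's code).
func aggregate(peerLocks map[string][]lockInfo) map[string]*lockEntry {
	entryMap := make(map[string]*lockEntry)
	for addr, locks := range peerLocks {
		for _, l := range locks {
			key := fmt.Sprintf("%s/%s", l.Resource, l.UID)
			if e, ok := entryMap[key]; ok {
				e.ServerList = append(e.ServerList, addr)
			} else {
				entryMap[key] = &lockEntry{lockInfo: l, ServerList: []string{addr}}
			}
		}
	}
	return entryMap
}

func main() {
	peers := map[string][]lockInfo{
		"node-1": {{"bucket/obj", "uid-a"}, {"bucket/obj", "uid-b"}},
		"node-2": {{"bucket/obj", "uid-a"}},
	}
	for key, e := range aggregate(peers) {
		fmt.Println(key, e.ServerList) // map iteration order is not deterministic
	}
}
```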
@@ -768,7 +767,7 @@ func (a adminAPIHandlers) ProfileHandler(w http.ResponseWriter, r *http.Request)
var err error
duration, err = time.ParseDuration(dstr)
if err != nil {
writeErrorResponseJSON(ctx, w, errorCodes.ToAPIErr(ErrServerNotInitialized), r.URL)
writeErrorResponseJSON(ctx, w, errorCodes.ToAPIErr(ErrInvalidRequest), r.URL)
return
}
}
@@ -804,6 +803,8 @@ func (a adminAPIHandlers) ProfileHandler(w http.ResponseWriter, r *http.Request)
for {
select {
case <-ctx.Done():
globalProfilerMu.Lock()
defer globalProfilerMu.Unlock()
for k, v := range globalProfiler {
v.Stop()
delete(globalProfiler, k)
@@ -1014,7 +1015,7 @@ func (a adminAPIHandlers) HealHandler(w http.ResponseWriter, r *http.Request) {
if hr.errBody == "" {
errorRespJSON = encodeResponseJSON(getAPIErrorResponse(ctx, hr.apiErr,
r.URL.Path, w.Header().Get(xhttp.AmzRequestID),
globalDeploymentID))
w.Header().Get(xhttp.AmzRequestHostID)))
} else {
errorRespJSON = encodeResponseJSON(APIErrorResponse{
Code: hr.apiErr.Code,
@@ -1103,7 +1104,7 @@ func (a adminAPIHandlers) HealHandler(w http.ResponseWriter, r *http.Request) {
// If no ObjectLayer is provided no set status is returned.
func getAggregatedBackgroundHealState(ctx context.Context, o ObjectLayer) (madmin.BgHealState, error) {
// Get local heal status first
bgHealStates, ok := getBackgroundHealStatus(ctx, o)
bgHealStates, ok := getLocalBackgroundHealStatus(ctx, o)
if !ok {
return bgHealStates, errServerNotInitialized
}
@@ -1149,6 +1150,54 @@ func (a adminAPIHandlers) BackgroundHealStatusHandler(w http.ResponseWriter, r *
}
}
// SitePerfHandler - measures network throughput between site replicated setups
func (a adminAPIHandlers) SitePerfHandler(w http.ResponseWriter, r *http.Request) {
ctx := newContext(r, w, "SitePerfHandler")
defer logger.AuditLog(ctx, w, r, mustGetClaimsFromToken(r))
objectAPI, _ := validateAdminReq(ctx, w, r, iampolicy.HealthInfoAdminAction)
if objectAPI == nil {
return
}
if !globalSiteReplicationSys.isEnabled() {
writeErrorResponseJSON(ctx, w, errorCodes.ToAPIErr(ErrNotImplemented), r.URL)
return
}
nsLock := objectAPI.NewNSLock(minioMetaBucket, "site-net-perf")
lkctx, err := nsLock.GetLock(ctx, globalOperationTimeout)
if err != nil {
writeErrorResponseJSON(ctx, w, errorCodes.ToAPIErr(toAPIErrorCode(ctx, err)), r.URL)
return
}
defer nsLock.Unlock(lkctx)
durationStr := r.Form.Get(peerRESTDuration)
duration, err := time.ParseDuration(durationStr)
if err != nil {
duration = globalNetPerfMinDuration
}
if duration < globalNetPerfMinDuration {
// We need a sample size of at least 10 seconds.
duration = globalNetPerfMinDuration
}
duration = duration.Round(time.Second)
results, err := globalSiteReplicationSys.Netperf(ctx, duration)
if err != nil {
writeErrorResponseJSON(ctx, w, errorCodes.ToAPIErr(toAPIErrorCode(ctx, err)), r.URL)
return
}
enc := json.NewEncoder(w)
if err := enc.Encode(results); err != nil {
return
}
}
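The duration handling above follows a simple clamp-and-round pattern: an unparseable value falls back to the minimum, anything shorter than the minimum is raised to it, and the result is rounded to whole seconds. A standalone version of that helper (the function name is made up for illustration):

```go
package main

import (
	"fmt"
	"time"
)

// parseDurationWithFloor is a hypothetical helper mirroring the handler's
// pattern: parse a duration string, fall back to a minimum on error or
// too-short input, and round the result to whole seconds.
func parseDurationWithFloor(raw string, floor time.Duration) time.Duration {
	d, err := time.ParseDuration(raw)
	if err != nil || d < floor {
		d = floor
	}
	return d.Round(time.Second)
}

func main() {
	fmt.Println(parseDurationWithFloor("3s", 10*time.Second))  // 10s
	fmt.Println(parseDurationWithFloor("30s", 10*time.Second)) // 30s
}
```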
// NetperfHandler - performs a mesh-style network throughput test
func (a adminAPIHandlers) NetperfHandler(w http.ResponseWriter, r *http.Request) {
ctx := newContext(r, w, "NetperfHandler")
@@ -1171,7 +1220,7 @@ func (a adminAPIHandlers) NetperfHandler(w http.ResponseWriter, r *http.Request)
writeErrorResponseJSON(ctx, w, errorCodes.ToAPIErr(toAPIErrorCode(ctx, err)), r.URL)
return
}
defer nsLock.Unlock(lkctx.Cancel)
defer nsLock.Unlock(lkctx)
durationStr := r.Form.Get(peerRESTDuration)
duration, err := time.ParseDuration(durationStr)
@@ -1193,11 +1242,6 @@ func (a adminAPIHandlers) NetperfHandler(w http.ResponseWriter, r *http.Request)
}
}
// SpeedTestHandler - Deprecated. See ObjectSpeedTestHandler
func (a adminAPIHandlers) SpeedTestHandler(w http.ResponseWriter, r *http.Request) {
a.ObjectSpeedTestHandler(w, r)
}
// ObjectSpeedTestHandler - reports maximum speed of a cluster by performing PUT and
// GET operations on the server, supports auto tuning by default by automatically
// increasing concurrency and stopping when we have reached the limits on the
@@ -1218,6 +1262,7 @@ func (a adminAPIHandlers) ObjectSpeedTestHandler(w http.ResponseWriter, r *http.
storageClass := strings.TrimSpace(r.Form.Get(peerRESTStorageClass))
customBucket := strings.TrimSpace(r.Form.Get(peerRESTBucket))
autotune := r.Form.Get("autotune") == "true"
noClear := r.Form.Get("noclear") == "true"
size, err := strconv.Atoi(sizeStr)
if err != nil {
@@ -1234,7 +1279,7 @@ func (a adminAPIHandlers) ObjectSpeedTestHandler(w http.ResponseWriter, r *http.
duration = time.Second * 10
}
storageInfo, _ := objectAPI.StorageInfo(ctx)
storageInfo := objectAPI.StorageInfo(ctx)
sufficientCapacity, canAutotune, capacityErrMsg := validateObjPerfOptions(ctx, storageInfo, concurrent, size, autotune)
if !sufficientCapacity {
@@ -1259,14 +1304,16 @@ func (a adminAPIHandlers) ObjectSpeedTestHandler(w http.ResponseWriter, r *http.
return
}
if !bucketExists {
if !noClear && !bucketExists {
defer deleteObjectPerfBucket(objectAPI)
}
}
defer objectAPI.DeleteObject(ctx, customBucket, speedTest+SlashSeparator, ObjectOptions{
DeletePrefix: true,
})
if !noClear {
defer objectAPI.DeleteObject(ctx, customBucket, speedTest+SlashSeparator, ObjectOptions{
DeletePrefix: true,
})
}
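With the new `noclear` form value, both cleanup paths are skipped so the benchmark bucket and objects can be inspected after the run. The flag-guarded `defer` pattern in isolation:

```go
package main

import "fmt"

// runWithOptionalCleanup sketches the noclear behaviour: the deferred
// cleanup is only registered when the caller did not ask to keep the data.
func runWithOptionalCleanup(noClear bool) {
	if !noClear {
		defer fmt.Println("cleanup: deleting benchmark objects")
	}
	fmt.Println("running benchmark")
}

func main() {
	runWithOptionalCleanup(false) // runs, then cleans up
	runWithOptionalCleanup(true)  // runs, leaves data behind for inspection
}
```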
// Freeze all incoming S3 API calls before running speedtest.
globalNotificationSys.ServiceFreeze(ctx, true)
@@ -1320,7 +1367,7 @@ func (a adminAPIHandlers) ObjectSpeedTestHandler(w http.ResponseWriter, r *http.
}
func makeObjectPerfBucket(ctx context.Context, objectAPI ObjectLayer, bucketName string) (bucketExists bool, err error) {
if err = objectAPI.MakeBucketWithLocation(ctx, bucketName, MakeBucketOptions{}); err != nil {
if err = objectAPI.MakeBucket(ctx, bucketName, MakeBucketOptions{}); err != nil {
if _, ok := err.(BucketExists); !ok {
// Only BucketExists error can be ignored.
return false, err
@@ -1333,7 +1380,6 @@ func makeObjectPerfBucket(ctx context.Context, objectAPI ObjectLayer, bucketName
func deleteObjectPerfBucket(objectAPI ObjectLayer) {
objectAPI.DeleteBucket(context.Background(), globalObjectPerfBucket, DeleteBucketOptions{
Force: true,
NoRecreate: true,
SRDeleteOp: getSRBucketDeleteOp(globalSiteReplicationSys.isEnabled()),
})
}
@@ -1365,6 +1411,7 @@ func validateObjPerfOptions(ctx context.Context, storageInfo madmin.StorageInfo,
// NetSpeedtestHandler - reports maximum network throughput
func (a adminAPIHandlers) NetSpeedtestHandler(w http.ResponseWriter, r *http.Request) {
ctx := newContext(r, w, "NetSpeedtestHandler")
defer logger.AuditLog(ctx, w, r, mustGetClaimsFromToken(r))
writeErrorResponseJSON(ctx, w, errorCodes.ToAPIErr(ErrNotImplemented), r.URL)
}
@@ -1494,6 +1541,7 @@ func extractTraceOptions(r *http.Request) (opts madmin.ServiceTraceOpts, err err
// The handler sends http trace to the connected HTTP client.
func (a adminAPIHandlers) TraceHandler(w http.ResponseWriter, r *http.Request) {
ctx := newContext(r, w, "HTTPTrace")
defer logger.AuditLog(ctx, w, r, mustGetClaimsFromToken(r))
// Validate request signature.
_, adminAPIErr := checkAdminRequestAuth(ctx, r, iampolicy.TraceAdminAction, "")
@@ -1519,10 +1567,15 @@ func (a adminAPIHandlers) TraceHandler(w http.ResponseWriter, r *http.Request) {
return shouldTrace(entry, traceOpts)
})
if err != nil {
writeErrorResponseJSON(ctx, w, errorCodes.ToAPIErr(ErrSlowDown), r.URL)
writeErrorResponseJSON(ctx, w, toAdminAPIErr(ctx, err), r.URL)
return
}
// Publish bootstrap events that have already occurred before client could subscribe.
if traceOpts.TraceTypes().Contains(madmin.TraceBootstrap) {
go globalBootstrapTracer.Publish(ctx, globalTrace)
}
for _, peer := range peers {
if peer == nil {
continue
@@ -1594,7 +1647,7 @@ func (a adminAPIHandlers) ConsoleLogHandler(w http.ResponseWriter, r *http.Reque
err = globalConsoleSys.Subscribe(logCh, ctx.Done(), node, limitLines, logKind, nil)
if err != nil {
writeErrorResponseJSON(ctx, w, errorCodes.ToAPIErr(ErrSlowDown), r.URL)
writeErrorResponseJSON(ctx, w, toAdminAPIErr(ctx, err), r.URL)
return
}
@@ -1778,12 +1831,48 @@ func (a adminAPIHandlers) KMSKeyStatusHandler(w http.ResponseWriter, r *http.Req
writeSuccessResponseJSON(w, resp)
}
func getServerInfo(ctx context.Context, r *http.Request) madmin.InfoMessage {
func getPoolsInfo(ctx context.Context, allDisks []madmin.Disk) (map[int]map[int]madmin.ErasureSetInfo, error) {
objectAPI := newObjectLayerFn()
if objectAPI == nil {
return nil, errServerNotInitialized
}
z, _ := objectAPI.(*erasureServerPools)
poolsInfo := make(map[int]map[int]madmin.ErasureSetInfo)
for _, d := range allDisks {
poolInfo, ok := poolsInfo[d.PoolIndex]
if !ok {
poolInfo = make(map[int]madmin.ErasureSetInfo)
}
erasureSet, ok := poolInfo[d.SetIndex]
if !ok {
erasureSet.ID = d.SetIndex
cache := dataUsageCache{}
if err := cache.load(ctx, z.serverPools[d.PoolIndex].sets[d.SetIndex], dataUsageCacheName); err == nil {
dataUsageInfo := cache.dui(dataUsageRoot, nil)
erasureSet.ObjectsCount = dataUsageInfo.ObjectsTotalCount
erasureSet.VersionsCount = dataUsageInfo.VersionsTotalCount
erasureSet.Usage = dataUsageInfo.ObjectsTotalSize
}
}
erasureSet.RawCapacity += d.TotalSpace
erasureSet.RawUsage += d.UsedSpace
if d.Healing {
erasureSet.HealDisks = 1
}
poolInfo[d.SetIndex] = erasureSet
poolsInfo[d.PoolIndex] = poolInfo
}
return poolsInfo, nil
}
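`getPoolsInfo` folds per-drive statistics into a nested pool, then set, map, creating each inner map on first use and writing the accumulated value back every iteration. The same accumulation shape, reduced to raw capacity and usage totals with illustrative types:

```go
package main

import "fmt"

type disk struct {
	Pool, Set   int
	Total, Used uint64
}

type setTotals struct{ RawCapacity, RawUsage uint64 }

// aggregate mirrors the accumulation shape of getPoolsInfo: pool index to
// set index to running totals, with inner maps created on first use.
func aggregate(disks []disk) map[int]map[int]setTotals {
	pools := make(map[int]map[int]setTotals)
	for _, d := range disks {
		sets, ok := pools[d.Pool]
		if !ok {
			sets = make(map[int]setTotals)
			pools[d.Pool] = sets
		}
		t := sets[d.Set] // zero value on first use
		t.RawCapacity += d.Total
		t.RawUsage += d.Used
		sets[d.Set] = t // write the accumulated value back
	}
	return pools
}

func main() {
	fmt.Println(aggregate([]disk{
		{Pool: 0, Set: 0, Total: 100, Used: 40},
		{Pool: 0, Set: 0, Total: 100, Used: 10},
		{Pool: 1, Set: 0, Total: 200, Used: 5},
	}))
}
```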
func getServerInfo(ctx context.Context, poolsInfoEnabled bool, r *http.Request) madmin.InfoMessage {
kmsStat := fetchKMSStatus()
ldap := madmin.LDAP{}
if globalLDAPConfig.Enabled() {
ldapConn, err := globalLDAPConfig.LDAP.Connect()
if globalIAMSys.LDAPConfig.Enabled() {
ldapConn, err := globalIAMSys.LDAPConfig.LDAP.Connect()
//nolint:gocritic
if err != nil {
ldap.Status = string(madmin.ItemOffline)
@@ -1807,7 +1896,9 @@ func getServerInfo(ctx context.Context, r *http.Request) madmin.InfoMessage {
assignPoolNumbers(servers)
var backend interface{}
var poolsInfo map[int]map[int]madmin.ErasureSetInfo
var backend madmin.ErasureBackend
mode := madmin.ItemInitializing
buckets := madmin.Buckets{}
@@ -1834,25 +1925,25 @@ func getServerInfo(ctx context.Context, r *http.Request) madmin.InfoMessage {
// Fetching the backend information
backendInfo := objectAPI.BackendInfo()
if backendInfo.Type == madmin.Erasure {
// Calculate the number of online/offline disks of all nodes
var allDisks []madmin.Disk
for _, s := range servers {
allDisks = append(allDisks, s.Disks...)
}
onlineDisks, offlineDisks := getOnlineOfflineDisksStats(allDisks)
// Calculate the number of online/offline disks of all nodes
var allDisks []madmin.Disk
for _, s := range servers {
allDisks = append(allDisks, s.Disks...)
}
onlineDisks, offlineDisks := getOnlineOfflineDisksStats(allDisks)
backend = madmin.ErasureBackend{
Type: madmin.ErasureType,
OnlineDisks: onlineDisks.Sum(),
OfflineDisks: offlineDisks.Sum(),
StandardSCParity: backendInfo.StandardSCParity,
RRSCParity: backendInfo.RRSCParity,
}
} else {
backend = madmin.FSBackend{
Type: madmin.FsType,
}
backend = madmin.ErasureBackend{
Type: madmin.ErasureType,
OnlineDisks: onlineDisks.Sum(),
OfflineDisks: offlineDisks.Sum(),
StandardSCParity: backendInfo.StandardSCParity,
RRSCParity: backendInfo.RRSCParity,
TotalSets: backendInfo.TotalSets,
DrivesPerSet: backendInfo.DrivesPerSet,
}
if poolsInfoEnabled {
poolsInfo, _ = getPoolsInfo(ctx, allDisks)
}
}
@@ -1878,6 +1969,7 @@ func getServerInfo(ctx context.Context, r *http.Request) madmin.InfoMessage {
Services: services,
Backend: backend,
Servers: servers,
Pools: poolsInfo,
}
}
@@ -1903,7 +1995,7 @@ func getKubernetesInfo(dctx context.Context) madmin.KubernetesInfo {
ki.Error = err.Error()
return ki
}
defer resp.Body.Close()
decoder := json.NewDecoder(resp.Body)
if err := decoder.Decode(&ki); err != nil {
ki.Error = err.Error()
@@ -2053,6 +2145,27 @@ func fetchHealthInfo(healthCtx context.Context, objectAPI ObjectLayer, query *ur
}
}
// collect all realtime metrics except disk
// disk metrics are already included under drive info of each server
getRealtimeMetrics := func() *madmin.RealtimeMetrics {
var m madmin.RealtimeMetrics
var types madmin.MetricType = madmin.MetricsAll &^ madmin.MetricsDisk
mLocal := collectLocalMetrics(types, collectMetricsOpts{})
m.Merge(&mLocal)
cctx, cancel := context.WithTimeout(healthCtx, time.Second/2)
mRemote := collectRemoteMetrics(cctx, types, collectMetricsOpts{})
cancel()
m.Merge(&mRemote)
for idx, host := range m.Hosts {
m.Hosts[idx] = anonAddr(host)
}
for host, metrics := range m.ByHost {
m.ByHost[anonAddr(host)] = metrics
delete(m.ByHost, host)
}
return &m
}
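The metric selection relies on Go's AND NOT operator: `MetricsAll &^ MetricsDisk` clears just the disk bit from the full set, since drive metrics are already reported per server. The mask arithmetic in miniature:

```go
package main

import "fmt"

type metricType uint32

const (
	metricsDisk metricType = 1 << iota
	metricsNet
	metricsCPU

	metricsAll = metricsDisk | metricsNet | metricsCPU
)

func main() {
	// AND NOT clears the disk bit from the full set, so disk metrics
	// (already reported per drive) are not collected a second time.
	types := metricsAll &^ metricsDisk
	fmt.Println(types&metricsDisk == 0) // true: disk excluded
	fmt.Println(types&metricsNet != 0)  // true: everything else kept
}
```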
anonymizeCmdLine := func(cmdLine string) string {
if !globalIsDistErasure {
// FS mode - single server - hard code to `server1`
@@ -2080,8 +2193,8 @@ func fetchHealthInfo(healthCtx context.Context, objectAPI ObjectLayer, query *ur
// No ellipses pattern. Anonymize host name from every pool arg
pools := strings.Fields(poolsArgs)
anonPools = make([]string, len(pools))
for _, arg := range pools {
anonPools = append(anonPools, anonAddr(arg))
for index, arg := range pools {
anonPools[index] = anonAddr(arg)
}
return cmdLineWithoutPools + strings.Join(anonPools, " ")
}
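This hunk fixes a classic Go slip: `make([]string, len(pools))` already creates `len(pools)` empty entries, so `append` placed the anonymized names after them, leaving empty leading strings in the output. Assigning by index fills the pre-allocated slots instead:

```go
package main

import "fmt"

func main() {
	src := []string{"host1:9000", "host2:9000"}

	// Buggy: the slice already has len(src) empty strings;
	// append grows it past them.
	bad := make([]string, len(src))
	for _, s := range src {
		bad = append(bad, "anon-"+s)
	}
	fmt.Println(bad) // ["" "" "anon-host1:9000" "anon-host2:9000"]

	// Fixed: write into the pre-allocated slots by index.
	good := make([]string, len(src))
	for i, s := range src {
		good[i] = "anon-" + s
	}
	fmt.Println(good) // ["anon-host1:9000" "anon-host2:9000"]
}
```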
@@ -2136,7 +2249,7 @@ func fetchHealthInfo(healthCtx context.Context, objectAPI ObjectLayer, query *ur
getAndWriteMinioConfig := func() {
if query.Get("minioconfig") == "true" {
config, err := readServerConfig(healthCtx, objectAPI)
config, err := readServerConfig(healthCtx, objectAPI, nil)
if err != nil {
healthInfo.Minio.Config = madmin.MinioConfig{
Error: err.Error(),
@@ -2184,7 +2297,7 @@ func fetchHealthInfo(healthCtx context.Context, objectAPI ObjectLayer, query *ur
getAndWriteSysConfig()
if query.Get("minioinfo") == "true" {
infoMessage := getServerInfo(healthCtx, nil)
infoMessage := getServerInfo(healthCtx, false, nil)
servers := make([]madmin.ServerInfo, 0, len(infoMessage.Servers))
for _, server := range infoMessage.Servers {
anonEndpoint := anonAddr(server.Endpoint)
@@ -2230,6 +2343,7 @@ func fetchHealthInfo(healthCtx context.Context, objectAPI ObjectLayer, query *ur
TLS: &tls,
IsKubernetes: &isK8s,
IsDocker: &isDocker,
Metrics: getRealtimeMetrics(),
}
partialWrite(healthInfo)
}
@@ -2253,7 +2367,8 @@ func (a adminAPIHandlers) HealthInfoHandler(w http.ResponseWriter, r *http.Reque
enc := json.NewEncoder(w)
healthInfo := madmin.HealthInfo{
Version: madmin.HealthInfoVersion,
TimeStamp: time.Now().UTC(),
Version: madmin.HealthInfoVersion,
Minio: madmin.MinioHealthInfo{
Info: madmin.MinioInfo{
DeploymentID: globalDeploymentID,
@@ -2263,7 +2378,7 @@ func (a adminAPIHandlers) HealthInfoHandler(w http.ResponseWriter, r *http.Reque
errResp := func(err error) {
errorResponse := getAPIErrorResponse(ctx, toAdminAPIErr(ctx, err), r.URL.String(),
w.Header().Get(xhttp.AmzRequestID), globalDeploymentID)
w.Header().Get(xhttp.AmzRequestID), w.Header().Get(xhttp.AmzRequestHostID))
encodedErrorResponse := encodeResponse(errorResponse)
healthInfo.Error = string(encodedErrorResponse)
logger.LogIf(ctx, enc.Encode(healthInfo))
@@ -2286,7 +2401,7 @@ func (a adminAPIHandlers) HealthInfoHandler(w http.ResponseWriter, r *http.Reque
return
}
defer nsLock.Unlock(lkctx.Cancel)
defer nsLock.Unlock(lkctx)
healthCtx, healthCancel := context.WithTimeout(lkctx.Context(), deadline)
defer healthCancel()
@@ -2342,66 +2457,6 @@ func getTLSInfo() madmin.TLSInfo {
return tlsInfo
}
// BandwidthMonitorHandler - GET /minio/admin/v3/bandwidth
// ----------
// Get bandwidth consumption information
func (a adminAPIHandlers) BandwidthMonitorHandler(w http.ResponseWriter, r *http.Request) {
ctx := newContext(r, w, "BandwidthMonitor")
defer logger.AuditLog(ctx, w, r, mustGetClaimsFromToken(r))
// Validate request signature.
_, adminAPIErr := checkAdminRequestAuth(ctx, r, iampolicy.BandwidthMonitorAction, "")
if adminAPIErr != ErrNone {
writeErrorResponseJSON(ctx, w, errorCodes.ToAPIErr(adminAPIErr), r.URL)
return
}
rnd := rand.New(rand.NewSource(time.Now().UnixNano()))
setEventStreamHeaders(w)
reportCh := make(chan madmin.BucketBandwidthReport)
keepAliveTicker := time.NewTicker(500 * time.Millisecond)
defer keepAliveTicker.Stop()
bucketsRequestedString := r.Form.Get("buckets")
bucketsRequested := strings.Split(bucketsRequestedString, ",")
go func() {
defer close(reportCh)
for {
select {
case <-ctx.Done():
return
case reportCh <- globalNotificationSys.GetBandwidthReports(ctx, bucketsRequested...):
time.Sleep(time.Duration(rnd.Float64() * float64(2*time.Second)))
}
}
}()
enc := json.NewEncoder(w)
for {
select {
case report, ok := <-reportCh:
if !ok {
return
}
if err := enc.Encode(report); err != nil {
return
}
if len(reportCh) == 0 {
// Flush if nothing is queued
w.(http.Flusher).Flush()
}
case <-keepAliveTicker.C:
if _, err := w.Write([]byte(" ")); err != nil {
return
}
w.(http.Flusher).Flush()
case <-ctx.Done():
return
}
}
}
// ServerInfoHandler - GET /minio/admin/v3/info
// ----------
// Get server information
@@ -2418,7 +2473,7 @@ func (a adminAPIHandlers) ServerInfoHandler(w http.ResponseWriter, r *http.Reque
}
// Marshal API response
jsonBytes, err := json.Marshal(getServerInfo(ctx, r))
jsonBytes, err := json.Marshal(getServerInfo(ctx, true, r))
if err != nil {
writeErrorResponseJSON(ctx, w, toAdminAPIErr(ctx, err), r.URL)
return
@@ -2448,7 +2503,7 @@ func assignPoolNumbers(servers []madmin.ServerProperties) {
func fetchLambdaInfo() []map[string][]madmin.TargetIDStatus {
lambdaMap := make(map[string][]madmin.TargetIDStatus)
for _, tgt := range globalConfigTargetList.Targets() {
for _, tgt := range globalNotifyTargetList.Targets() {
targetIDStatus := make(map[string]madmin.Status)
active, _ := tgt.IsActive()
targetID := tgt.ID()
@@ -2518,63 +2573,21 @@ func fetchKMSStatus() madmin.KMS {
func fetchLoggerInfo() ([]madmin.Logger, []madmin.Audit) {
var loggerInfo []madmin.Logger
var auditloggerInfo []madmin.Audit
for _, target := range logger.SystemTargets() {
if target.Endpoint() != "" {
tgt := target.String()
err := checkConnection(target.Endpoint(), 15*time.Second)
if err == nil {
mapLog := make(map[string]madmin.Status)
mapLog[tgt] = madmin.Status{Status: string(madmin.ItemOnline)}
loggerInfo = append(loggerInfo, mapLog)
} else {
mapLog := make(map[string]madmin.Status)
mapLog[tgt] = madmin.Status{Status: string(madmin.ItemOffline)}
loggerInfo = append(loggerInfo, mapLog)
}
for _, tgt := range logger.SystemTargets() {
if tgt.Endpoint() != "" {
loggerInfo = append(loggerInfo, madmin.Logger{tgt.String(): logger.TargetStatus(GlobalContext, tgt)})
}
}
for _, target := range logger.AuditTargets() {
if target.Endpoint() != "" {
tgt := target.String()
err := checkConnection(target.Endpoint(), 15*time.Second)
if err == nil {
mapAudit := make(map[string]madmin.Status)
mapAudit[tgt] = madmin.Status{Status: string(madmin.ItemOnline)}
auditloggerInfo = append(auditloggerInfo, mapAudit)
} else {
mapAudit := make(map[string]madmin.Status)
mapAudit[tgt] = madmin.Status{Status: string(madmin.ItemOffline)}
auditloggerInfo = append(auditloggerInfo, mapAudit)
}
for _, tgt := range logger.AuditTargets() {
if tgt.Endpoint() != "" {
auditloggerInfo = append(auditloggerInfo, madmin.Audit{tgt.String(): logger.TargetStatus(GlobalContext, tgt)})
}
}
return loggerInfo, auditloggerInfo
}
// checkConnection - ping an endpoint, return an error in case of no connection
func checkConnection(endpointStr string, timeout time.Duration) error {
ctx, cancel := context.WithTimeout(GlobalContext, timeout)
defer cancel()
client := &http.Client{
Transport: globalProxyTransport,
}
req, err := http.NewRequestWithContext(ctx, http.MethodHead, endpointStr, nil)
if err != nil {
return err
}
resp, err := client.Do(req)
if err != nil {
return err
}
defer xhttp.DrainBody(resp.Body)
return nil
}
func embedFileInZip(zipWriter *zip.Writer, name string, data []byte) error {
// Send profiling data to zip as file
header, zerr := zip.FileInfoHeader(dummyFileInfo{
@@ -2622,7 +2635,7 @@ func getClusterMetaInfo(ctx context.Context) []byte {
ci.Info.NoOfServers = len(globalEndpoints.Hostnames())
ci.Info.MinioVersion = Version
si, _ := objectAPI.StorageInfo(ctx)
si := objectAPI.StorageInfo(ctx)
ci.Info.NoOfDrives = len(si.Disks)
for _, disk := range si.Disks {
@@ -2744,7 +2757,7 @@ func (a adminAPIHandlers) InspectDataHandler(w http.ResponseWriter, r *http.Requ
stream := estream.NewWriter(w)
defer stream.Close()
clusterKey, err := bytesToPublicKey(subnetAdminPublicKey)
clusterKey, err := bytesToPublicKey(getSubnetAdminPublicKey())
if err != nil {
logger.LogIf(ctx, stream.AddError(err.Error()))
return
@@ -2879,6 +2892,13 @@ func (a adminAPIHandlers) InspectDataHandler(w http.ResponseWriter, r *http.Requ
logger.LogIf(ctx, embedFileInZip(inspectZipW, "inspect-input.txt", sb.Bytes()))
}
func getSubnetAdminPublicKey() []byte {
if globalIsCICD {
return subnetAdminPublicKeyDev
}
return subnetAdminPublicKey
}
func createHostAnonymizerForFSMode() map[string]string {
hostAnonymizer := map[string]string{
globalLocalNodeName: "server1",
@@ -2956,10 +2976,16 @@ func createHostAnonymizer() map[string]string {
}
hostAnonymizer := map[string]string{}
hosts := set.NewStringSet()
srvrIdx := 0
for poolIdx, pool := range globalEndpoints {
for srvrIdx, endpoint := range pool.Endpoints {
anonymizeHost(hostAnonymizer, endpoint, poolIdx+1, srvrIdx+1)
for _, endpoint := range pool.Endpoints {
if !hosts.Contains(endpoint.Host) {
hosts.Add(endpoint.Host)
srvrIdx++
}
anonymizeHost(hostAnonymizer, endpoint, poolIdx+1, srvrIdx)
}
}
return hostAnonymizer
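The rewritten loop numbers servers by distinct host across all pools instead of restarting the index inside each pool, so a host that serves endpoints in several pools keeps a single stable number. A reduced sketch of that numbering (illustrative types, not the MinIO helper):

```go
package main

import "fmt"

// numberHosts sketches the fixed numbering: one counter over all pools,
// incremented only when a host is seen for the first time.
func numberHosts(pools [][]string) map[string]int {
	hosts := map[string]bool{}
	index := map[string]int{}
	srvrIdx := 0
	for _, pool := range pools {
		for _, host := range pool {
			if !hosts[host] {
				hosts[host] = true
				srvrIdx++
				index[host] = srvrIdx
			}
		}
	}
	return index
}

func main() {
	// "b" appears in both pools but keeps a single number.
	fmt.Println(numberHosts([][]string{{"a", "b"}, {"b", "c"}})) // map[a:1 b:2 c:3]
}
```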

View File

@@ -21,17 +21,19 @@ import (
"bytes"
"context"
"encoding/json"
"fmt"
"io"
"net/http"
"net/http/httptest"
"net/url"
"sort"
"sync"
"testing"
"time"
"github.com/gorilla/mux"
"github.com/minio/madmin-go"
"github.com/minio/madmin-go/v3"
"github.com/minio/minio/internal/auth"
"github.com/minio/mux"
)
// adminErasureTestBed - encapsulates subsystems that need to be setup for
@@ -71,7 +73,7 @@ func prepareAdminErasureTestBed(ctx context.Context) (*adminErasureTestBed, erro
// Initialize boot time
globalBootTime = UTCNow()
globalEndpoints = mustGetPoolEndpoints(erasureDirs...)
globalEndpoints = mustGetPoolEndpoints(0, erasureDirs...)
initAllSubsystems(ctx)
@@ -106,7 +108,7 @@ func initTestErasureObjLayer(ctx context.Context) (ObjectLayer, []string, error)
if err != nil {
return nil, nil, err
}
endpoints := mustGetPoolEndpoints(erasureDirs...)
endpoints := mustGetPoolEndpoints(0, erasureDirs...)
globalPolicySys = NewPolicySys()
objLayer, err := newErasureServerPools(ctx, endpoints)
if err != nil {
@@ -380,3 +382,146 @@ func TestExtractHealInitParams(t *testing.T) {
}
}
}
type byResourceUID struct{ madmin.LockEntries }
func (b byResourceUID) Less(i, j int) bool {
toUniqLock := func(entry madmin.LockEntry) string {
return fmt.Sprintf("%s/%s", entry.Resource, entry.ID)
}
return toUniqLock(b.LockEntries[i]) < toUniqLock(b.LockEntries[j])
}
func TestTopLockEntries(t *testing.T) {
locksHeld := make(map[string][]lockRequesterInfo)
var owners []string
for i := 0; i < 4; i++ {
owners = append(owners, fmt.Sprintf("node-%d", i))
}
// Simulate DeleteObjects of 10 objects in a single request, i.e. the same lock
// request UID, but 10 different resource names associated with it.
var lris []lockRequesterInfo
uuid := mustGetUUID()
for i := 0; i < 10; i++ {
resource := fmt.Sprintf("bucket/delete-object-%d", i)
lri := lockRequesterInfo{
Name: resource,
Writer: true,
UID: uuid,
Owner: owners[i%len(owners)],
Group: true,
Quorum: 3,
}
lris = append(lris, lri)
locksHeld[resource] = []lockRequesterInfo{lri}
}
// Add a few concurrent read locks to the mix
for i := 0; i < 50; i++ {
resource := fmt.Sprintf("bucket/get-object-%d", i)
lri := lockRequesterInfo{
Name: resource,
UID: mustGetUUID(),
Owner: owners[i%len(owners)],
Quorum: 2,
}
lris = append(lris, lri)
locksHeld[resource] = append(locksHeld[resource], lri)
// Concurrent read lock: same resource, different UID.
lri.UID = mustGetUUID()
lris = append(lris, lri)
locksHeld[resource] = append(locksHeld[resource], lri)
}
var peerLocks []*PeerLocks
for _, owner := range owners {
peerLocks = append(peerLocks, &PeerLocks{
Addr: owner,
Locks: locksHeld,
})
}
var exp madmin.LockEntries
for _, lri := range lris {
lockType := func(lri lockRequesterInfo) string {
if lri.Writer {
return "WRITE"
}
return "READ"
}
exp = append(exp, madmin.LockEntry{
Resource: lri.Name,
Type: lockType(lri),
ServerList: owners,
Owner: lri.Owner,
ID: lri.UID,
Quorum: lri.Quorum,
})
}
testCases := []struct {
peerLocks []*PeerLocks
expected madmin.LockEntries
}{
{
peerLocks: peerLocks,
expected: exp,
},
}
// printEntries := func(entries madmin.LockEntries) {
// for i, entry := range entries {
// fmt.Printf("%d: %s %s %s %s %v %d\n", i, entry.Resource, entry.ID, entry.Owner, entry.Type, entry.ServerList, entry.Elapsed)
// }
// }
check := func(exp, got madmin.LockEntries) (int, bool) {
if len(exp) != len(got) {
return 0, false
}
sort.Slice(exp, byResourceUID{exp}.Less)
sort.Slice(got, byResourceUID{got}.Less)
// printEntries(exp)
// printEntries(got)
for i, e := range exp {
if !e.Timestamp.Equal(got[i].Timestamp) {
return i, false
}
// Skip checking elapsed since it's time sensitive.
// if e.Elapsed != got[i].Elapsed {
// return false
// }
if e.Resource != got[i].Resource {
return i, false
}
if e.Type != got[i].Type {
return i, false
}
if e.Source != got[i].Source {
return i, false
}
if e.Owner != got[i].Owner {
return i, false
}
if e.ID != got[i].ID {
return i, false
}
if len(e.ServerList) != len(got[i].ServerList) {
return i, false
}
for j := range e.ServerList {
if e.ServerList[j] != got[i].ServerList[j] {
return i, false
}
}
}
return 0, true
}
for i, tc := range testCases {
got := topLockEntries(tc.peerLocks, false)
if idx, ok := check(tc.expected, got); !ok {
t.Fatalf("%d: mismatch at %d \n expected %#v but got %#v", i, idx, tc.expected[idx], got[idx])
}
}
}

View File

@@ -27,7 +27,7 @@ import (
"sync"
"time"
"github.com/minio/madmin-go"
"github.com/minio/madmin-go/v3"
"github.com/minio/minio/internal/logger"
)
@@ -176,11 +176,12 @@ func (ahs *allHealState) getHealLocalDiskEndpoints() Endpoints {
return endpoints
}
func (ahs *allHealState) markDiskForHealing(ep Endpoint) {
// Set, in memory, whether the disk is currently healing or not
func (ahs *allHealState) setDiskHealingStatus(ep Endpoint, healing bool) {
ahs.Lock()
defer ahs.Unlock()
ahs.healLocalDisks[ep] = true
ahs.healLocalDisks[ep] = healing
}
func (ahs *allHealState) pushHealLocalDisks(healLocalDisks ...Endpoint) {
@@ -222,8 +223,8 @@ func (ahs *allHealState) periodicHealSeqsClean(ctx context.Context) {
// getHealSequenceByToken - Retrieve a heal sequence by token. The second
// argument returns if a heal sequence actually exists.
func (ahs *allHealState) getHealSequenceByToken(token string) (h *healSequence, exists bool) {
ahs.Lock()
defer ahs.Unlock()
ahs.RLock()
defer ahs.RUnlock()
for _, healSeq := range ahs.healSeqMap {
if healSeq.clientToken == token {
return healSeq, true
@@ -235,8 +236,8 @@ func (ahs *allHealState) getHealSequenceByToken(token string) (h *healSequence,
// getHealSequence - Retrieve a heal sequence by path. The second
// argument returns if a heal sequence actually exists.
func (ahs *allHealState) getHealSequence(path string) (h *healSequence, exists bool) {
ahs.Lock()
defer ahs.Unlock()
ahs.RLock()
defer ahs.RUnlock()
h, exists = ahs.healSeqMap[path]
return h, exists
}
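Both lookup helpers now take the read lock instead of the exclusive lock, so concurrent readers proceed in parallel while writers still serialize. The standard `sync.RWMutex` split, shown on a toy registry:

```go
package main

import (
	"fmt"
	"sync"
)

// registry demonstrates the locking change above: read-only lookups take
// RLock so readers don't serialize behind each other; only mutations take
// the exclusive Lock.
type registry struct {
	sync.RWMutex
	m map[string]string
}

func (r *registry) get(k string) (string, bool) {
	r.RLock()
	defer r.RUnlock()
	v, ok := r.m[k]
	return v, ok
}

func (r *registry) set(k, v string) {
	r.Lock()
	defer r.Unlock()
	r.m[k] = v
}

func main() {
	r := &registry{m: map[string]string{}}
	r.set("token", "abc")
	fmt.Println(r.get("token"))
}
```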
@@ -396,6 +397,7 @@ type healSource struct {
bucket string
object string
versionID string
noWait bool // a non-blocking call; if the task queue is full, return right away.
opts *madmin.HealOpts // optional heal option overrides default setting
}
@@ -405,9 +407,6 @@ type healSequence struct {
// bucket, and object on which heal seq. was initiated
bucket, object string
// A channel of entities with heal result
respCh chan healResult
// Report healing progress
reportProgress bool
@@ -470,7 +469,6 @@ func newHealSequence(ctx context.Context, bucket, objPrefix, clientAddr string,
clientToken := mustGetUUID()
return &healSequence{
respCh: make(chan healResult),
bucket: bucket,
object: objPrefix,
reportProgress: true,
@@ -699,7 +697,6 @@ func (h *healSequence) queueHealTask(source healSource, healType madmin.HealItem
object: source.object,
versionID: source.versionID,
opts: h.settings,
respCh: h.respCh,
}
if source.opts != nil {
task.opts = *source.opts
@@ -712,6 +709,24 @@ func (h *healSequence) queueHealTask(source healSource, healType madmin.HealItem
h.lastHealActivity = UTCNow()
h.mutex.Unlock()
if source.noWait {
select {
case globalBackgroundHealRoutine.tasks <- task:
if serverDebugLog {
logger.Info("Task in the queue: %#v", task)
}
default:
// task queue is full, no workers available; move on and heal later.
return nil
}
// Don't wait for result
return nil
}
// respCh must be set to wait for result.
// We make it size 1, so a result can always be written
// even if we aren't listening.
task.respCh = make(chan healResult, 1)
select {
case globalBackgroundHealRoutine.tasks <- task:
if serverDebugLog {
@@ -721,8 +736,9 @@ func (h *healSequence) queueHealTask(source healSource, healType madmin.HealItem
return nil
}
// task queued, now wait for the response.
select {
case res := <-h.respCh:
case res := <-task.respCh:
if !h.reportProgress {
if errors.Is(res.err, errSkipFile) { // this is usually only sent by nopHeal
return nil
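Two queueing details are worth noting: the `noWait` path probes the task queue with a `select`/`default` so a full queue is detected without blocking, and the blocking path gives each task its own response channel with capacity 1 so the worker can always deliver a result even if the submitter stopped listening. A condensed sketch of both paths (simplified types, not the heal code itself):

```go
package main

import "fmt"

type task struct {
	id     int
	respCh chan error
}

// submit sketches the pattern above: with noWait a full queue is detected
// via select/default and the task is dropped to be retried later; otherwise
// a buffered respCh of size 1 lets the worker always deliver its result.
func submit(queue chan task, t task, noWait bool) error {
	if noWait {
		select {
		case queue <- t:
		default:
			// Queue full: move on, heal later.
		}
		return nil
	}
	t.respCh = make(chan error, 1) // size 1: worker never blocks on reply
	queue <- t
	return <-t.respCh
}

func main() {
	queue := make(chan task, 1)
	go func() {
		for t := range queue {
			if t.respCh != nil {
				t.respCh <- nil
			}
		}
	}()
	fmt.Println(submit(queue, task{id: 1}, false)) // waits for result
	fmt.Println(submit(queue, task{id: 2}, true))  // fire and forget
}
```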
@@ -768,6 +784,11 @@ func (h *healSequence) healDiskMeta(objAPI ObjectLayer) error {
}
func (h *healSequence) healItems(objAPI ObjectLayer, bucketsOnly bool) error {
if h.clientToken == bgHealingUUID {
// For background heal do nothing.
return nil
}
if err := h.healDiskMeta(objAPI); err != nil {
return err
}
@@ -853,13 +874,8 @@ func (h *healSequence) healBucket(objAPI ObjectLayer, bucket string, bucketsOnly
if !h.settings.Recursive {
if h.object != "" {
// Check if an object named objPrefix exists and, if so, heal it.
oi, err := objAPI.GetObjectInfo(h.ctx, bucket, h.object, ObjectOptions{})
if err == nil {
if err = h.healObject(bucket, h.object, oi.VersionID); err != nil {
return err
}
if err := h.healObject(bucket, h.object, ""); err != nil {
return err
}
}

View File

@@ -20,17 +20,19 @@ package cmd
import (
"net/http"
"github.com/gorilla/mux"
"github.com/klauspost/compress/gzhttp"
"github.com/klauspost/compress/gzip"
"github.com/minio/madmin-go"
"github.com/minio/madmin-go/v3"
"github.com/minio/minio/internal/logger"
"github.com/minio/mux"
)
const (
adminPathPrefix = minioReservedBucketPath + "/admin"
adminAPIVersion = madmin.AdminAPIVersion
adminAPIVersionPrefix = SlashSeparator + adminAPIVersion
adminPathPrefix = minioReservedBucketPath + "/admin"
adminAPIVersion = madmin.AdminAPIVersion
adminAPIVersionPrefix = SlashSeparator + adminAPIVersion
adminAPISiteReplicationDevNull = "/site-replication/devnull"
adminAPISiteReplicationNetPerf = "/site-replication/netperf"
)
// adminAPIHandlers provides HTTP handlers for MinIO admin API.
@@ -142,12 +144,18 @@ func registerAdminRouter(router *mux.Router, enableConfigOps bool) {
adminRouter.Methods(http.MethodGet).Path(adminVersion + "/list-service-accounts").HandlerFunc(gz(httpTraceHdrs(adminAPI.ListServiceAccounts)))
adminRouter.Methods(http.MethodDelete).Path(adminVersion+"/delete-service-account").HandlerFunc(gz(httpTraceHdrs(adminAPI.DeleteServiceAccount))).Queries("accessKey", "{accessKey:.*}")
// STS accounts ops
adminRouter.Methods(http.MethodGet).Path(adminVersion+"/temporary-account-info").HandlerFunc(gz(httpTraceHdrs(adminAPI.TemporaryAccountInfo))).Queries("accessKey", "{accessKey:.*}")
// Info policy IAM latest
adminRouter.Methods(http.MethodGet).Path(adminVersion+"/info-canned-policy").HandlerFunc(gz(httpTraceHdrs(adminAPI.InfoCannedPolicy))).Queries("name", "{name:.*}")
// List policies latest
adminRouter.Methods(http.MethodGet).Path(adminVersion+"/list-canned-policies").HandlerFunc(gz(httpTraceHdrs(adminAPI.ListBucketPolicies))).Queries("bucket", "{bucket:.*}")
adminRouter.Methods(http.MethodGet).Path(adminVersion + "/list-canned-policies").HandlerFunc(gz(httpTraceHdrs(adminAPI.ListCannedPolicies)))
// Builtin IAM policy associations
adminRouter.Methods(http.MethodGet).Path(adminVersion + "/idp/builtin/policy-entities").HandlerFunc(gz(httpTraceHdrs(adminAPI.ListPolicyMappingEntities)))
// Remove policy IAM
adminRouter.Methods(http.MethodDelete).Path(adminVersion+"/remove-canned-policy").HandlerFunc(gz(httpTraceHdrs(adminAPI.RemoveCannedPolicy))).Queries("name", "{name:.*}")
@@ -156,6 +164,9 @@ func registerAdminRouter(router *mux.Router, enableConfigOps bool) {
HandlerFunc(gz(httpTraceHdrs(adminAPI.SetPolicyForUserOrGroup))).
Queries("policyName", "{policyName:.*}", "userOrGroup", "{userOrGroup:.*}", "isGroup", "{isGroup:true|false}")
// Attach/Detach policies to/from user or group
adminRouter.Methods(http.MethodPost).Path(adminVersion + "/idp/builtin/policy/{operation}").HandlerFunc(gz(httpTraceHdrs(adminAPI.AttachDetachPolicyBuiltin)))
// Remove user IAM
adminRouter.Methods(http.MethodDelete).Path(adminVersion+"/remove-user").HandlerFunc(gz(httpTraceHdrs(adminAPI.RemoveUser))).Queries("accessKey", "{accessKey:.*}")
@@ -192,7 +203,7 @@ func registerAdminRouter(router *mux.Router, enableConfigOps bool) {
// LDAP IAM operations
adminRouter.Methods(http.MethodGet).Path(adminVersion + "/idp/ldap/policy-entities").HandlerFunc(gz(httpTraceHdrs(adminAPI.ListLDAPPolicyMappingEntities)))
adminRouter.Methods(http.MethodPost).Path(adminVersion + "/idp/ldap/policy/{operation}").HandlerFunc(gz(httpTraceHdrs(adminAPI.AttachDetachPolicyLDAP)))
// -- END IAM APIs --
// GetBucketQuotaConfig
@@ -225,6 +236,8 @@ func registerAdminRouter(router *mux.Router, enableConfigOps bool) {
adminRouter.Methods(http.MethodGet).Path(adminVersion + "/describe-job").HandlerFunc(
gz(httpTraceHdrs(adminAPI.DescribeBatchJob)))
adminRouter.Methods(http.MethodDelete).Path(adminVersion + "/cancel-job").HandlerFunc(
gz(httpTraceHdrs(adminAPI.CancelBatchJob)))
// Bucket migration operations
// ExportBucketMetaHandler
@@ -249,6 +262,8 @@ func registerAdminRouter(router *mux.Router, enableConfigOps bool) {
adminRouter.Methods(http.MethodGet).Path(adminVersion + "/site-replication/info").HandlerFunc(gz(httpTraceHdrs(adminAPI.SiteReplicationInfo)))
adminRouter.Methods(http.MethodGet).Path(adminVersion + "/site-replication/metainfo").HandlerFunc(gz(httpTraceHdrs(adminAPI.SiteReplicationMetaInfo)))
adminRouter.Methods(http.MethodGet).Path(adminVersion + "/site-replication/status").HandlerFunc(gz(httpTraceHdrs(adminAPI.SiteReplicationStatus)))
adminRouter.Methods(http.MethodPost).Path(adminVersion + adminAPISiteReplicationDevNull).HandlerFunc(gz(httpTraceHdrs(adminAPI.SiteReplicationDevNull)))
adminRouter.Methods(http.MethodPost).Path(adminVersion + adminAPISiteReplicationNetPerf).HandlerFunc(gz(httpTraceHdrs(adminAPI.SiteReplicationNetPerf)))
adminRouter.Methods(http.MethodPut).Path(adminVersion + "/site-replication/peer/join").HandlerFunc(gz(httpTraceHdrs(adminAPI.SRPeerJoin)))
adminRouter.Methods(http.MethodPut).Path(adminVersion+"/site-replication/peer/bucket-ops").HandlerFunc(gz(httpTraceHdrs(adminAPI.SRPeerBucketOps))).Queries("bucket", "{bucket:.*}").Queries("operation", "{operation:.*}")
@@ -268,10 +283,11 @@ func registerAdminRouter(router *mux.Router, enableConfigOps bool) {
Queries("paths", "{paths:.*}").HandlerFunc(gz(httpTraceHdrs(adminAPI.ForceUnlockHandler)))
}
adminRouter.Methods(http.MethodPost).Path(adminVersion + "/speedtest").HandlerFunc(httpTraceHdrs(adminAPI.SpeedTestHandler))
adminRouter.Methods(http.MethodPost).Path(adminVersion + "/speedtest").HandlerFunc(httpTraceHdrs(adminAPI.ObjectSpeedTestHandler))
adminRouter.Methods(http.MethodPost).Path(adminVersion + "/speedtest/object").HandlerFunc(httpTraceHdrs(adminAPI.ObjectSpeedTestHandler))
adminRouter.Methods(http.MethodPost).Path(adminVersion + "/speedtest/drive").HandlerFunc(httpTraceHdrs(adminAPI.DriveSpeedtestHandler))
adminRouter.Methods(http.MethodPost).Path(adminVersion + "/speedtest/net").HandlerFunc(httpTraceHdrs(adminAPI.NetperfHandler))
adminRouter.Methods(http.MethodPost).Path(adminVersion + "/speedtest/site").HandlerFunc(httpTraceHdrs(adminAPI.SitePerfHandler))
// HTTP Trace
adminRouter.Methods(http.MethodGet).Path(adminVersion + "/trace").HandlerFunc(gz(http.HandlerFunc(adminAPI.TraceHandler)))
@@ -291,8 +307,6 @@ func registerAdminRouter(router *mux.Router, enableConfigOps bool) {
// -- Health API --
adminRouter.Methods(http.MethodGet).Path(adminVersion + "/healthinfo").
HandlerFunc(gz(httpTraceHdrs(adminAPI.HealthInfoHandler)))
adminRouter.Methods(http.MethodGet).Path(adminVersion + "/bandwidth").
HandlerFunc(gz(httpTraceHdrs(adminAPI.BandwidthMonitorHandler)))
}
// If none of the routes match add default error handler routes

View File

@@ -26,15 +26,15 @@ import (
"strings"
"time"
"github.com/minio/madmin-go"
"github.com/minio/madmin-go/v3"
"github.com/minio/minio/internal/config"
"github.com/minio/minio/internal/kms"
"github.com/minio/minio/internal/logger"
)
// getLocalServerProperty - returns madmin.ServerProperties for only the
// local endpoints from the given list of endpoints
func getLocalServerProperty(endpointServerPools EndpointServerPools, r *http.Request) madmin.ServerProperties {
var localEndpoints Endpoints
addr := globalLocalNodeName
if r != nil {
addr = r.Host
@@ -52,7 +52,6 @@ func getLocalServerProperty(endpointServerPools EndpointServerPools, r *http.Req
if endpoint.IsLocal {
// Only proceed for local endpoints
network[nodeName] = string(madmin.ItemOnline)
localEndpoints = append(localEndpoints, endpoint)
continue
}
_, present := network[nodeName]
@@ -88,7 +87,6 @@ func getLocalServerProperty(endpointServerPools EndpointServerPools, r *http.Req
}
props := madmin.ServerProperties{
State: string(madmin.ItemInitializing),
Endpoint: addr,
Uptime: UTCNow().Unix() - globalBootTime.Unix(),
Version: Version,
@@ -120,7 +118,7 @@ func getLocalServerProperty(endpointServerPools EndpointServerPools, r *http.Req
config.EnvRootUser: {},
config.EnvRootPassword: {},
config.EnvMinIOSubnetAPIKey: {},
config.EnvKMSSecretKey: {},
kms.EnvKMSSecretKey: {},
}
for _, v := range os.Environ() {
if !strings.HasPrefix(v, "MINIO") && !strings.HasPrefix(v, "_MINIO") {
@@ -143,10 +141,12 @@ func getLocalServerProperty(endpointServerPools EndpointServerPools, r *http.Req
objLayer := newObjectLayerFn()
if objLayer != nil {
// only need Disks information in server mode.
storageInfo, _ := objLayer.LocalStorageInfo(GlobalContext)
storageInfo := objLayer.LocalStorageInfo(GlobalContext)
props.State = string(madmin.ItemOnline)
props.Disks = storageInfo.Disks
} else {
props.State = string(madmin.ItemInitializing)
props.Disks = getOfflineDisks("", globalEndpoints)
}
return props

View File

@@ -1,4 +1,4 @@
// Copyright (c) 2015-2021 MinIO, Inc.
// Copyright (c) 2015-2023 MinIO, Inc.
//
// This file is part of MinIO Object Storage stack
//
@@ -28,9 +28,10 @@ import (
"strings"
"github.com/Azure/azure-storage-blob-go/azblob"
"github.com/minio/minio/internal/ioutil"
"google.golang.org/api/googleapi"
"github.com/minio/madmin-go"
"github.com/minio/madmin-go/v3"
"github.com/minio/minio-go/v7"
"github.com/minio/minio-go/v7/pkg/tags"
"github.com/minio/minio/internal/auth"
@@ -38,10 +39,12 @@ import (
"github.com/minio/minio/internal/bucket/replication"
"github.com/minio/minio/internal/config/dns"
"github.com/minio/minio/internal/crypto"
"github.com/minio/minio/internal/kms"
"github.com/minio/minio/internal/logger"
objectlock "github.com/minio/minio/internal/bucket/object/lock"
"github.com/minio/minio/internal/bucket/versioning"
levent "github.com/minio/minio/internal/config/lambda/event"
"github.com/minio/minio/internal/event"
"github.com/minio/minio/internal/hash"
"github.com/minio/pkg/bucket/policy"
@@ -84,6 +87,7 @@ const (
ErrInternalError
ErrInvalidAccessKeyID
ErrAccessKeyDisabled
ErrInvalidArgument
ErrInvalidBucketName
ErrInvalidDigest
ErrInvalidRange
@@ -132,7 +136,10 @@ const (
ErrReplicationNeedsVersioningError
ErrReplicationBucketNeedsVersioningError
ErrReplicationDenyEditError
ErrRemoteTargetDenyAddError
ErrReplicationNoExistingObjects
ErrReplicationValidationError
ErrReplicationPermissionCheckError
ErrObjectRestoreAlreadyInProgress
ErrNoSuchKey
ErrNoSuchUpload
@@ -145,13 +152,14 @@ const (
ErrMethodNotAllowed
ErrInvalidPart
ErrInvalidPartOrder
ErrMissingPart
ErrAuthorizationHeaderMalformed
ErrMalformedPOSTRequest
ErrPOSTFileRequired
ErrSignatureVersionNotSupported
ErrBucketNotEmpty
ErrAllAccessDisabled
ErrMalformedPolicy
ErrPolicyInvalidVersion
ErrMissingFields
ErrMissingCredTag
ErrCredMalformed
@@ -164,7 +172,6 @@ const (
ErrMalformedDate
ErrMalformedPresignedDate
ErrMalformedCredentialDate
ErrMalformedCredentialRegion
ErrMalformedExpires
ErrNegativeExpires
ErrAuthHeaderEmpty
@@ -177,11 +184,12 @@ const (
ErrBucketAlreadyOwnedByYou
ErrInvalidDuration
ErrBucketAlreadyExists
ErrTooManyBuckets
ErrMetadataTooLarge
ErrUnsupportedMetadata
ErrUnsupportedHostHeader
ErrMaximumExpires
ErrSlowDown
ErrSlowDownRead
ErrSlowDownWrite
ErrInvalidPrefixMarker
ErrBadRequest
ErrKeyTooLongError
@@ -196,6 +204,9 @@ const (
ErrBucketTaggingNotFound
ErrObjectLockInvalidHeaders
ErrInvalidTagDirective
ErrPolicyAlreadyAttached
ErrPolicyNotAttached
ErrExcessData
// Add new error codes here.
// SSE-S3/SSE-KMS related API errors
@@ -207,6 +218,8 @@ const (
ErrSSEMultipartEncrypted
ErrSSEEncryptedObject
ErrInvalidEncryptionParameters
ErrInvalidEncryptionParametersSSEC
ErrInvalidSSECustomerAlgorithm
ErrInvalidSSECustomerKey
ErrMissingSSECustomerKey
@@ -216,6 +229,7 @@ const (
ErrIncompatibleEncryptionMethod
ErrKMSNotConfigured
ErrKMSKeyNotFoundException
ErrKMSDefaultKeyAlreadyConfigured
ErrNoAccessKey
ErrInvalidToken
@@ -239,18 +253,17 @@ const (
// Add new extended error codes here.
// MinIO extended errors.
ErrReadQuorum
ErrWriteQuorum
ErrStorageFull
ErrRequestBodyParse
ErrObjectExistsAsDirectory
ErrInvalidObjectName
ErrInvalidObjectNamePrefixSlash
ErrInvalidResourceName
ErrInvalidLifecycleQueryParameter
ErrServerNotInitialized
ErrOperationTimedOut
ErrRequestTimedout
ErrClientDisconnected
ErrOperationMaxedOut
ErrTooManyRequests
ErrInvalidRequest
ErrTransitionStorageClassNotFoundError
// MinIO storage class error codes
@@ -262,10 +275,13 @@ const (
ErrMalformedJSON
ErrAdminNoSuchUser
ErrAdminNoSuchUserLDAPWarn
ErrAdminNoSuchGroup
ErrAdminGroupNotEmpty
ErrAdminGroupDisabled
ErrAdminNoSuchJob
ErrAdminNoSuchPolicy
ErrAdminPolicyChangeAlreadyApplied
ErrAdminInvalidArgument
ErrAdminInvalidAccessKey
ErrAdminInvalidSecretKey
@@ -276,6 +292,7 @@ const (
ErrAdminConfigEnvOverridden
ErrAdminConfigDuplicateKeys
ErrAdminConfigInvalidIDPType
ErrAdminConfigLDAPNonDefaultConfigName
ErrAdminConfigLDAPValidation
ErrAdminConfigIDPCfgNameAlreadyExists
ErrAdminConfigIDPCfgNameDoesNotExist
@@ -406,6 +423,12 @@ const (
ErrPostPolicyConditionInvalidFormat
ErrInvalidChecksum
// Lambda functions
ErrLambdaARNInvalid
ErrLambdaARNNotFound
apiErrCodeEnd // This is used only for the testing code
)
type errorCodeMap map[APIErrorCode]APIError
@@ -419,8 +442,7 @@ func (e errorCodeMap) ToAPIErrWithErr(errCode APIErrorCode, err error) APIError
apiErr.Description = fmt.Sprintf("%s (%s)", apiErr.Description, err)
}
if globalSite.Region != "" {
switch errCode {
case ErrAuthorizationHeaderMalformed:
if errCode == ErrAuthorizationHeaderMalformed {
apiErr.Description = fmt.Sprintf("The authorization header is malformed; the region is wrong; expecting '%s'.", globalSite.Region)
return apiErr
}
@@ -515,6 +537,11 @@ var errorCodes = errorCodeMap{
Description: "Your proposed upload exceeds the maximum allowed object size.",
HTTPStatusCode: http.StatusBadRequest,
},
ErrExcessData: {
Code: "ExcessData",
Description: "More data provided than indicated content length",
HTTPStatusCode: http.StatusBadRequest,
},
ErrPolicyTooLarge: {
Code: "PolicyTooLarge",
Description: "Policy exceeds the maximum allowed document size.",
@@ -540,6 +567,11 @@ var errorCodes = errorCodeMap{
Description: "Your account is disabled; please contact your administrator.",
HTTPStatusCode: http.StatusForbidden,
},
ErrInvalidArgument: {
Code: "InvalidArgument",
Description: "Invalid argument",
HTTPStatusCode: http.StatusBadRequest,
},
ErrInvalidBucketName: {
Code: "InvalidBucketName",
Description: "The specified bucket is not valid.",
@@ -665,6 +697,11 @@ var errorCodes = errorCodeMap{
Description: "One or more of the specified parts could not be found. The part may not have been uploaded, or the specified entity tag may not match the part's entity tag.",
HTTPStatusCode: http.StatusBadRequest,
},
ErrMissingPart: {
Code: "InvalidRequest",
Description: "You must specify at least one part",
HTTPStatusCode: http.StatusBadRequest,
},
ErrInvalidPartOrder: {
Code: "InvalidPartOrder",
Description: "The list of parts was not in ascending order. The parts list must be specified in order by part number.",
@@ -695,11 +732,6 @@ var errorCodes = errorCodeMap{
Description: "The authorization mechanism you have provided is not supported. Please use AWS4-HMAC-SHA256.",
HTTPStatusCode: http.StatusBadRequest,
},
ErrTooManyBuckets: {
Code: "TooManyBuckets",
Description: "You have attempted to create more buckets than allowed",
HTTPStatusCode: http.StatusBadRequest,
},
ErrBucketNotEmpty: {
Code: "BucketNotEmpty",
Description: "The bucket you tried to delete is not empty",
@@ -715,9 +747,9 @@ var errorCodes = errorCodeMap{
Description: "All access to this resource has been disabled.",
HTTPStatusCode: http.StatusForbidden,
},
ErrMalformedPolicy: {
ErrPolicyInvalidVersion: {
Code: "MalformedPolicy",
Description: "Policy has invalid resource.",
Description: "The policy must contain a valid version string",
HTTPStatusCode: http.StatusBadRequest,
},
ErrMissingFields: {
@@ -815,11 +847,16 @@ var errorCodes = errorCodeMap{
Description: "Request is not valid yet",
HTTPStatusCode: http.StatusForbidden,
},
ErrSlowDown: {
Code: "SlowDown",
ErrSlowDownRead: {
Code: "SlowDownRead",
Description: "Resource requested is unreadable, please reduce your request rate",
HTTPStatusCode: http.StatusServiceUnavailable,
},
ErrSlowDownWrite: {
Code: "SlowDownWrite",
Description: "Resource requested is unwritable, please reduce your request rate",
HTTPStatusCode: http.StatusServiceUnavailable,
},
ErrInvalidPrefixMarker: {
Code: "InvalidPrefixMarker",
Description: "Invalid marker prefix combination",
@@ -920,6 +957,11 @@ var errorCodes = errorCodeMap{
Description: "No matching ExistingsObjects rule enabled",
HTTPStatusCode: http.StatusBadRequest,
},
ErrRemoteTargetDenyAddError: {
Code: "XMinioAdminRemoteTargetDenyAdd",
Description: "Cannot add remote target endpoint since this server is in a cluster replication setup",
HTTPStatusCode: http.StatusBadRequest,
},
ErrReplicationDenyEditError: {
Code: "XMinioReplicationDenyEdit",
Description: "Cannot alter local replication config since this server is in a cluster replication setup",
@@ -975,6 +1017,16 @@ var errorCodes = errorCodeMap{
Description: "Versioning must be 'Enabled' on the bucket to add a replication target",
HTTPStatusCode: http.StatusBadRequest,
},
ErrReplicationValidationError: {
Code: "InvalidRequest",
Description: "Replication validation failed on target",
HTTPStatusCode: http.StatusBadRequest,
},
ErrReplicationPermissionCheckError: {
Code: "ReplicationPermissionCheck",
Description: "X-Minio-Source-Replication-Check cannot be specified in request. Request cannot be completed",
HTTPStatusCode: http.StatusBadRequest,
},
ErrNoSuchObjectLockConfiguration: {
Code: "NoSuchObjectLockConfiguration",
Description: "The specified object does not have a ObjectLock configuration",
@@ -1088,8 +1140,13 @@ var errorCodes = errorCodeMap{
HTTPStatusCode: http.StatusBadRequest,
},
ErrInvalidEncryptionMethod: {
Code: "InvalidRequest",
Description: "The encryption method specified is not supported",
Code: "InvalidArgument",
Description: "Server Side Encryption with AWS KMS managed key requires HTTP header x-amz-server-side-encryption : aws:kms",
HTTPStatusCode: http.StatusBadRequest,
},
ErrIncompatibleEncryptionMethod: {
Code: "InvalidArgument",
Description: "Server Side Encryption with Customer provided key is incompatible with the encryption method specified",
HTTPStatusCode: http.StatusBadRequest,
},
ErrInvalidEncryptionKeyID: {
@@ -1117,6 +1174,11 @@ var errorCodes = errorCodeMap{
Description: "The encryption parameters are not applicable to this object.",
HTTPStatusCode: http.StatusBadRequest,
},
ErrInvalidEncryptionParametersSSEC: {
Code: "InvalidRequest",
Description: "SSE-C encryption parameters are not supported on replicated bucket.",
HTTPStatusCode: http.StatusBadRequest,
},
ErrInvalidSSECustomerAlgorithm: {
Code: "InvalidArgument",
Description: "Requests specifying Server Side Encryption with Customer provided keys must provide a valid encryption algorithm.",
@@ -1147,11 +1209,6 @@ var errorCodes = errorCodeMap{
Description: "The provided encryption parameters did not match the ones used originally.",
HTTPStatusCode: http.StatusBadRequest,
},
ErrIncompatibleEncryptionMethod: {
Code: "InvalidArgument",
Description: "Server side encryption specified with both SSE-C and SSE-S3 headers",
HTTPStatusCode: http.StatusBadRequest,
},
ErrKMSNotConfigured: {
Code: "NotImplemented",
Description: "Server side encryption specified but KMS is not configured",
@@ -1162,6 +1219,11 @@ var errorCodes = errorCodeMap{
Description: "Invalid keyId",
HTTPStatusCode: http.StatusBadRequest,
},
ErrKMSDefaultKeyAlreadyConfigured: {
Code: "KMS.DefaultKeyAlreadyConfiguredException",
Description: "A default encryption already exists and cannot be changed on KMS",
HTTPStatusCode: http.StatusConflict,
},
ErrNoAccessKey: {
Code: "AccessDenied",
Description: "No AWSAccessKey was presented",
@@ -1226,11 +1288,21 @@ var errorCodes = errorCodeMap{
Description: "The JSON you provided was not well-formed or did not validate against our published format.",
HTTPStatusCode: http.StatusBadRequest,
},
ErrInvalidLifecycleQueryParameter: {
Code: "XMinioInvalidLifecycleParameter",
Description: "The boolean value provided for withUpdatedAt query parameter was invalid.",
HTTPStatusCode: http.StatusBadRequest,
},
ErrAdminNoSuchUser: {
Code: "XMinioAdminNoSuchUser",
Description: "The specified user does not exist.",
HTTPStatusCode: http.StatusNotFound,
},
ErrAdminNoSuchUserLDAPWarn: {
Code: "XMinioAdminNoSuchUser",
Description: "The specified user does not exist. If you meant a user in LDAP, use `mc idp ldap`",
HTTPStatusCode: http.StatusNotFound,
},
ErrAdminNoSuchGroup: {
Code: "XMinioAdminNoSuchGroup",
Description: "The specified group does not exist.",
@@ -1246,11 +1318,22 @@ var errorCodes = errorCodeMap{
Description: "The specified group is not empty - cannot remove it.",
HTTPStatusCode: http.StatusBadRequest,
},
ErrAdminGroupDisabled: {
Code: "XMinioAdminGroupDisabled",
Description: "The specified group is disabled.",
HTTPStatusCode: http.StatusBadRequest,
},
ErrAdminNoSuchPolicy: {
Code: "XMinioAdminNoSuchPolicy",
Description: "The canned policy does not exist.",
HTTPStatusCode: http.StatusNotFound,
},
ErrAdminPolicyChangeAlreadyApplied: {
Code: "XMinioAdminPolicyChangeAlreadyApplied",
Description: "The specified policy change is already in effect.",
HTTPStatusCode: http.StatusBadRequest,
},
ErrAdminInvalidArgument: {
Code: "XMinioAdminInvalidArgument",
Description: "Invalid arguments specified.",
@@ -1302,6 +1385,11 @@ var errorCodes = errorCodeMap{
Description: fmt.Sprintf("Invalid IDP configuration type - must be one of %v", madmin.ValidIDPConfigTypes),
HTTPStatusCode: http.StatusBadRequest,
},
ErrAdminConfigLDAPNonDefaultConfigName: {
Code: "XMinioAdminConfigLDAPNonDefaultConfigName",
Description: "Only a single LDAP configuration is supported - config name must be empty or `_`",
HTTPStatusCode: http.StatusBadRequest,
},
ErrAdminConfigLDAPValidation: {
Code: "XMinioAdminConfigLDAPValidation",
Description: "LDAP Configuration validation failed",
@@ -1347,7 +1435,7 @@ var errorCodes = errorCodeMap{
Description: "Cannot respond to plain-text request from TLS-encrypted server",
HTTPStatusCode: http.StatusBadRequest,
},
ErrOperationTimedOut: {
ErrRequestTimedout: {
Code: "RequestTimeout",
Description: "A timeout occurred while trying to lock a resource, please reduce your request rate",
HTTPStatusCode: http.StatusServiceUnavailable,
@@ -1357,9 +1445,9 @@ var errorCodes = errorCodeMap{
Description: "Client disconnected before response was ready",
HTTPStatusCode: 499, // No official code, use nginx value.
},
ErrOperationMaxedOut: {
Code: "SlowDown",
Description: "A timeout exceeded while waiting to proceed with the request, please reduce your request rate",
ErrTooManyRequests: {
Code: "TooManyRequests",
Description: "Deadline exceeded while waiting in incoming queue, please reduce your request rate",
HTTPStatusCode: http.StatusServiceUnavailable,
},
ErrUnsupportedMetadata: {
@@ -1367,6 +1455,11 @@ var errorCodes = errorCodeMap{
Description: "Your metadata headers are not supported.",
HTTPStatusCode: http.StatusBadRequest,
},
ErrUnsupportedHostHeader: {
Code: "InvalidArgument",
Description: "Your Host header is malformed.",
HTTPStatusCode: http.StatusBadRequest,
},
ErrObjectTampered: {
Code: "XMinioObjectTampered",
Description: errObjectTampered.Error(),
@@ -1381,7 +1474,7 @@ var errorCodes = errorCodeMap{
ErrSiteReplicationPeerResp: {
Code: "XMinioSiteReplicationPeerResp",
Description: "Error received when contacting a peer site",
HTTPStatusCode: http.StatusServiceUnavailable,
HTTPStatusCode: http.StatusBadRequest,
},
ErrSiteReplicationBackendIssue: {
Code: "XMinioSiteReplicationBackendIssue",
@@ -1499,7 +1592,7 @@ var errorCodes = errorCodeMap{
HTTPStatusCode: http.StatusBadRequest,
},
ErrBusy: {
Code: "Busy",
Code: "ServerBusy",
Description: "The service is unavailable. Please retry.",
HTTPStatusCode: http.StatusServiceUnavailable,
},
@@ -1938,6 +2031,26 @@ var errorCodes = errorCodeMap{
Description: "Invalid checksum provided.",
HTTPStatusCode: http.StatusBadRequest,
},
ErrLambdaARNInvalid: {
Code: "LambdaARNInvalid",
Description: "The specified lambda ARN is invalid",
HTTPStatusCode: http.StatusBadRequest,
},
ErrLambdaARNNotFound: {
Code: "LambdaARNNotFound",
Description: "The specified lambda ARN does not exist",
HTTPStatusCode: http.StatusNotFound,
},
ErrPolicyAlreadyAttached: {
Code: "XMinioPolicyAlreadyAttached",
Description: "The specified policy is already attached.",
HTTPStatusCode: http.StatusConflict,
},
ErrPolicyNotAttached: {
Code: "XMinioPolicyNotAttached",
Description: "The specified policy is not found.",
HTTPStatusCode: http.StatusNotFound,
},
// Add your error structure here.
}
@@ -1950,18 +2063,23 @@ func toAPIErrorCode(ctx context.Context, err error) (apiErr APIErrorCode) {
}
// Only return ErrClientDisconnected if the provided context is actually canceled.
// This way downstream context.Canceled will still report ErrOperationTimedOut
if contextCanceled(ctx) {
if ctx.Err() == context.Canceled {
return ErrClientDisconnected
}
// This way downstream context.Canceled will still report ErrRequestTimedout
if contextCanceled(ctx) && errors.Is(ctx.Err(), context.Canceled) {
return ErrClientDisconnected
}
// Unwrap the error first
err = unwrapAll(err)
switch err {
case errInvalidArgument:
apiErr = ErrAdminInvalidArgument
case errNoSuchPolicy:
apiErr = ErrAdminNoSuchPolicy
case errNoSuchUser:
apiErr = ErrAdminNoSuchUser
case errNoSuchUserLDAPWarn:
apiErr = ErrAdminNoSuchUserLDAPWarn
case errNoSuchServiceAccount:
apiErr = ErrAdminServiceAccountNotFound
case errNoSuchGroup:
@@ -1970,8 +2088,8 @@ func toAPIErrorCode(ctx context.Context, err error) (apiErr APIErrorCode) {
apiErr = ErrAdminGroupNotEmpty
case errNoSuchJob:
apiErr = ErrAdminNoSuchJob
case errNoSuchPolicy:
apiErr = ErrAdminNoSuchPolicy
case errNoPolicyToAttachOrDetach:
apiErr = ErrAdminPolicyChangeAlreadyApplied
case errSignatureMismatch:
apiErr = ErrSignatureDoesNotMatch
case errInvalidRange:
@@ -1988,9 +2106,15 @@ func toAPIErrorCode(ctx context.Context, err error) (apiErr APIErrorCode) {
apiErr = ErrAdminInvalidSecretKey
case errInvalidStorageClass:
apiErr = ErrInvalidStorageClass
case errErasureReadQuorum:
apiErr = ErrSlowDownRead
case errErasureWriteQuorum:
apiErr = ErrSlowDownWrite
// SSE errors
case errInvalidEncryptionParameters:
apiErr = ErrInvalidEncryptionParameters
case errInvalidEncryptionParametersSSEC:
apiErr = ErrInvalidEncryptionParametersSSEC
case crypto.ErrInvalidEncryptionMethod:
apiErr = ErrInvalidEncryptionMethod
case crypto.ErrInvalidEncryptionKeyID:
@@ -2017,11 +2141,12 @@ func toAPIErrorCode(ctx context.Context, err error) (apiErr APIErrorCode) {
apiErr = ErrKMSNotConfigured
case errKMSKeyNotFound:
apiErr = ErrKMSKeyNotFoundException
case context.Canceled, context.DeadlineExceeded:
apiErr = ErrOperationTimedOut
case errDiskNotFound:
apiErr = ErrSlowDown
case errKMSDefaultKeyAlreadyConfigured:
apiErr = ErrKMSDefaultKeyAlreadyConfigured
case context.Canceled:
apiErr = ErrClientDisconnected
case context.DeadlineExceeded:
apiErr = ErrRequestTimedout
case objectlock.ErrInvalidRetentionDate:
apiErr = ErrInvalidRetentionDate
case objectlock.ErrPastObjectLockRetainDate:
@@ -2032,11 +2157,14 @@ func toAPIErrorCode(ctx context.Context, err error) (apiErr APIErrorCode) {
apiErr = ErrObjectLockInvalidHeaders
case objectlock.ErrMalformedXML:
apiErr = ErrMalformedXML
case errInvalidMaxParts:
apiErr = ErrInvalidMaxParts
case ioutil.ErrOverread:
apiErr = ErrExcessData
}
// Compression errors
switch err {
case errInvalidDecompressedSize:
if err == errInvalidDecompressedSize {
apiErr = ErrInvalidDecompressedSize
}
@@ -2047,7 +2175,7 @@ func toAPIErrorCode(ctx context.Context, err error) (apiErr APIErrorCode) {
// etcd specific errors; a key is always a bucket for us, so return
// ErrNoSuchBucket in such a case.
if err == dns.ErrNoEntriesFound {
if errors.Is(err, dns.ErrNoEntriesFound) {
return ErrNoSuchBucket
}
@@ -2097,9 +2225,9 @@ func toAPIErrorCode(ctx context.Context, err error) (apiErr APIErrorCode) {
case InvalidPart:
apiErr = ErrInvalidPart
case InsufficientWriteQuorum:
apiErr = ErrSlowDown
apiErr = ErrSlowDownWrite
case InsufficientReadQuorum:
apiErr = ErrSlowDown
apiErr = ErrSlowDownRead
case InvalidMarkerPrefixCombination:
apiErr = ErrNotImplemented
case InvalidUploadIDKeyCombination:
@@ -2114,10 +2242,10 @@ func toAPIErrorCode(ctx context.Context, err error) (apiErr APIErrorCode) {
apiErr = ErrContentSHA256Mismatch
case hash.ChecksumMismatch:
apiErr = ErrContentChecksumMismatch
case ObjectTooLarge:
apiErr = ErrEntityTooLarge
case ObjectTooSmall:
case hash.SizeTooSmall:
apiErr = ErrEntityTooSmall
case hash.SizeTooLarge:
apiErr = ErrEntityTooLarge
case NotImplemented:
apiErr = ErrNotImplemented
case PartTooBig:
@@ -2162,7 +2290,8 @@ func toAPIErrorCode(ctx context.Context, err error) (apiErr APIErrorCode) {
apiErr = ErrTransitionStorageClassNotFoundError
case InvalidObjectState:
apiErr = ErrInvalidObjectState
case PreConditionFailed:
apiErr = ErrPreconditionFailed
case BucketQuotaExceeded:
apiErr = ErrAdminBucketQuotaExceeded
case *event.ErrInvalidEventName:
@@ -2171,6 +2300,10 @@ func toAPIErrorCode(ctx context.Context, err error) (apiErr APIErrorCode) {
apiErr = ErrARNNotification
case *event.ErrARNNotFound:
apiErr = ErrARNNotification
case *levent.ErrInvalidARN:
apiErr = ErrLambdaARNInvalid
case *levent.ErrARNNotFound:
apiErr = ErrLambdaARNNotFound
case *event.ErrUnknownRegion:
apiErr = ErrRegionNotification
case *event.ErrInvalidFilterName:
@@ -2188,7 +2321,7 @@ func toAPIErrorCode(ctx context.Context, err error) (apiErr APIErrorCode) {
case *event.ErrUnsupportedConfiguration:
apiErr = ErrUnsupportedNotification
case OperationTimedOut:
apiErr = ErrOperationTimedOut
apiErr = ErrRequestTimedout
case BackendDown:
apiErr = ErrBackendDown
case ObjectNameTooLong:
@@ -2234,44 +2367,29 @@ func toAPIError(ctx context.Context, err error) APIError {
}
apiErr := errorCodes.ToAPIErr(toAPIErrorCode(ctx, err))
e, ok := err.(dns.ErrInvalidBucketName)
if ok {
code := toAPIErrorCode(ctx, e)
apiErr = errorCodes.ToAPIErrWithErr(code, e)
}
if apiErr.Code == "NotImplemented" {
switch e := err.(type) {
case NotImplemented:
desc := e.Error()
if desc == "" {
desc = apiErr.Description
}
apiErr = APIError{
Code: apiErr.Code,
Description: desc,
HTTPStatusCode: apiErr.HTTPStatusCode,
}
return apiErr
switch apiErr.Code {
case "NotImplemented":
desc := fmt.Sprintf("%s (%v)", apiErr.Description, err)
apiErr = APIError{
Code: apiErr.Code,
Description: desc,
HTTPStatusCode: apiErr.HTTPStatusCode,
}
}
if apiErr.Code == "XMinioBackendDown" {
case "XMinioBackendDown":
apiErr.Description = fmt.Sprintf("%s (%v)", apiErr.Description, err)
return apiErr
}
if apiErr.Code == "InternalError" {
case "InternalError":
// If we see an internal error try to interpret
// any underlying errors if possible depending on
// their internal error types.
switch e := err.(type) {
case batchReplicationJobError:
case kms.Error:
apiErr = APIError{
Code: e.Code,
Description: e.Description,
Description: e.Err.Error(),
Code: e.APICode,
HTTPStatusCode: e.HTTPStatusCode,
}
case batchReplicationJobError:
apiErr = APIError(e)
case InvalidArgument:
apiErr = APIError{
Code: "InvalidArgument",
@@ -2280,27 +2398,25 @@ func toAPIError(ctx context.Context, err error) APIError {
}
case *xml.SyntaxError:
apiErr = APIError{
Code: "MalformedXML",
Description: fmt.Sprintf("%s (%s)", errorCodes[ErrMalformedXML].Description,
e.Error()),
Code: "MalformedXML",
Description: fmt.Sprintf("%s (%s)", errorCodes[ErrMalformedXML].Description, e),
HTTPStatusCode: errorCodes[ErrMalformedXML].HTTPStatusCode,
}
case url.EscapeError:
apiErr = APIError{
Code: "XMinioInvalidObjectName",
Description: fmt.Sprintf("%s (%s)", errorCodes[ErrInvalidObjectName].Description,
e.Error()),
Code: "XMinioInvalidObjectName",
Description: fmt.Sprintf("%s (%s)", errorCodes[ErrInvalidObjectName].Description, e),
HTTPStatusCode: http.StatusBadRequest,
}
case versioning.Error:
apiErr = APIError{
Code: "IllegalVersioningConfigurationException",
Description: fmt.Sprintf("Versioning configuration specified in the request is invalid. (%s)", e.Error()),
Description: fmt.Sprintf("Versioning configuration specified in the request is invalid. (%s)", e),
HTTPStatusCode: http.StatusBadRequest,
}
case lifecycle.Error:
apiErr = APIError{
Code: "InvalidRequest",
Code: "InvalidArgument",
Description: e.Error(),
HTTPStatusCode: http.StatusBadRequest,
}
@@ -2361,19 +2477,7 @@ func toAPIError(ctx context.Context, err error) APIError {
// Add more other SDK related errors here if any in future.
default:
//nolint:gocritic
if errors.Is(err, errMalformedEncoding) {
apiErr = APIError{
Code: "BadRequest",
Description: err.Error(),
HTTPStatusCode: http.StatusBadRequest,
}
} else if errors.Is(err, errChunkTooBig) {
apiErr = APIError{
Code: "BadRequest",
Description: err.Error(),
HTTPStatusCode: http.StatusBadRequest,
}
} else if errors.Is(err, strconv.ErrRange) {
if errors.Is(err, errMalformedEncoding) || errors.Is(err, errChunkTooBig) || errors.Is(err, strconv.ErrRange) {
apiErr = APIError{
Code: "BadRequest",
Description: err.Error(),


@@ -20,8 +20,6 @@ package cmd
import (
"context"
"errors"
"os"
"path/filepath"
"testing"
"github.com/minio/minio/internal/crypto"
@@ -42,8 +40,8 @@ var toAPIErrorTests = []struct {
{err: ObjectNameInvalid{}, errCode: ErrInvalidObjectName},
{err: InvalidUploadID{}, errCode: ErrNoSuchUpload},
{err: InvalidPart{}, errCode: ErrInvalidPart},
{err: InsufficientReadQuorum{}, errCode: ErrSlowDown},
{err: InsufficientWriteQuorum{}, errCode: ErrSlowDown},
{err: InsufficientReadQuorum{}, errCode: ErrSlowDownRead},
{err: InsufficientWriteQuorum{}, errCode: ErrSlowDownWrite},
{err: InvalidMarkerPrefixCombination{}, errCode: ErrNotImplemented},
{err: InvalidUploadIDKeyCombination{}, errCode: ErrNotImplemented},
{err: MalformedUploadID{}, errCode: ErrNoSuchUpload},
@@ -67,11 +65,6 @@ var toAPIErrorTests = []struct {
}
func TestAPIErrCode(t *testing.T) {
disk := filepath.Join(globalTestTmpDir, "minio-"+nextSuffix())
defer os.RemoveAll(disk)
initFSObjects(disk, t)
ctx := context.Background()
for i, testCase := range toAPIErrorTests {
errCode := toAPIErrorCode(ctx, testCase.err)
@@ -80,3 +73,19 @@ func TestAPIErrCode(t *testing.T) {
}
}
}
// Check if an API error is properly defined
func TestAPIErrCodeDefinition(t *testing.T) {
for errAPI := ErrNone + 1; errAPI < apiErrCodeEnd; errAPI++ {
errCode, ok := errorCodes[errAPI]
if !ok {
t.Fatal(errAPI, "error code is not defined in the API error code table")
}
if errCode.Code == "" {
t.Fatal(errAPI, "error code has an empty XML code")
}
if errCode.HTTPStatusCode == 0 {
t.Fatal(errAPI, "error code has a zero HTTP status code")
}
}
}


@@ -20,6 +20,7 @@ package cmd
import (
"bytes"
"encoding/json"
"encoding/xml"
"fmt"
"net/http"
"net/url"
@@ -64,15 +65,31 @@ func setCommonHeaders(w http.ResponseWriter) {
// Encodes the response into XML format.
func encodeResponse(response interface{}) []byte {
var bytesBuffer bytes.Buffer
bytesBuffer.WriteString(xxml.Header)
buf, err := xxml.Marshal(response)
if err != nil {
var buf bytes.Buffer
buf.WriteString(xml.Header)
if err := xml.NewEncoder(&buf).Encode(response); err != nil {
logger.LogIf(GlobalContext, err)
return nil
}
bytesBuffer.Write(buf)
return bytesBuffer.Bytes()
return buf.Bytes()
}
// Use encodeResponseList() to support control characters.
// This function must be used only by ListObjects() variants for objects
// with control characters; it is a specialized extension
// to support AWS S3 compatible behavior.
//
// Do not use this function for anything other than ListObjects()
// variants, please open a github discussion if you wish to use
// this in other places.
func encodeResponseList(response interface{}) []byte {
var buf bytes.Buffer
buf.WriteString(xxml.Header)
if err := xxml.NewEncoder(&buf).Encode(response); err != nil {
logger.LogIf(GlobalContext, err)
return nil
}
return buf.Bytes()
}
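
The new `encodeResponseList()` exists because object keys may legally contain bytes that are invalid in XML 1.0; to my understanding the standard `encoding/xml` encoder substitutes U+FFFD for characters outside the XML character range, which would corrupt such keys in listings, hence the switch to `minio/xxml` for ListObjects responses only. A runnable sketch of the stdlib behavior in question:

```go
// Demonstrates why a specialized encoder is needed for listings: the
// standard encoding/xml encoder replaces characters outside the XML
// character range (e.g. 0x01) with U+FFFD, so an object key containing
// control characters would not round-trip. Assumption: this substitution
// is the motivation for minio/xxml in encodeResponseList above.
package main

import (
	"bytes"
	"encoding/xml"
	"fmt"
)

type Contents struct {
	Key string
}

func main() {
	var buf bytes.Buffer
	if err := xml.NewEncoder(&buf).Encode(Contents{Key: "obj\x01name"}); err != nil {
		fmt.Println("encode error:", err)
		return
	}
	fmt.Printf("%q\n", buf.String()) // the 0x01 byte comes back as U+FFFD
}
```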
// Encodes the response into JSON format.
@@ -136,7 +153,7 @@ func setObjectHeaders(w http.ResponseWriter, objInfo ObjectInfo, rs *HTTPRangeSp
continue
}
if strings.HasPrefix(strings.ToLower(k), ReservedMetadataPrefixLower) {
if stringsHasPrefixFold(k, ReservedMetadataPrefixLower) {
// Do not need to send any internal metadata
// values to client.
continue
@@ -149,7 +166,7 @@ func setObjectHeaders(w http.ResponseWriter, objInfo ObjectInfo, rs *HTTPRangeSp
var isSet bool
for _, userMetadataPrefix := range userMetadataKeyPrefixes {
if !strings.HasPrefix(strings.ToLower(k), strings.ToLower(userMetadataPrefix)) {
if !stringsHasPrefixFold(k, userMetadataPrefix) {
continue
}
w.Header()[strings.ToLower(k)] = []string{v}
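
The hunks above replace `strings.HasPrefix(strings.ToLower(k), ...)` with `stringsHasPrefixFold`, avoiding a per-key allocation on the hot header path. A presumed shape of such a helper built on `strings.EqualFold`; the actual MinIO implementation may differ in detail:

```go
// Presumed shape of the stringsHasPrefixFold helper introduced above: a
// case-insensitive prefix test built on strings.EqualFold, which avoids
// the allocation that strings.ToLower(k) incurs on every header key.
package main

import (
	"fmt"
	"strings"
)

func stringsHasPrefixFold(s, prefix string) bool {
	return len(s) >= len(prefix) && strings.EqualFold(s[:len(prefix)], prefix)
}

func main() {
	fmt.Println(stringsHasPrefixFold("X-Minio-Internal-replication", "x-minio-internal-")) // true
	fmt.Println(stringsHasPrefixFold("X-Amz-Meta-Key", "x-minio-internal-"))               // false
}
```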
@@ -186,7 +203,7 @@ func setObjectHeaders(w http.ResponseWriter, objInfo ObjectInfo, rs *HTTPRangeSp
}
// Set the relevant version ID as part of the response header.
if objInfo.VersionID != "" {
if objInfo.VersionID != "" && objInfo.VersionID != nullVersionID {
w.Header()[xhttp.AmzVersionID] = []string{objInfo.VersionID}
}


@@ -37,8 +37,8 @@ func getListObjectsV1Args(values url.Values) (prefix, marker, delimiter string,
maxkeys = maxObjectList
}
prefix = trimLeadingSlash(values.Get("prefix"))
marker = trimLeadingSlash(values.Get("marker"))
prefix = values.Get("prefix")
marker = values.Get("marker")
delimiter = values.Get("delimiter")
encodingType = values.Get("encoding-type")
return
@@ -57,8 +57,8 @@ func getListBucketObjectVersionsArgs(values url.Values) (prefix, marker, delimit
maxkeys = maxObjectList
}
prefix = trimLeadingSlash(values.Get("prefix"))
marker = trimLeadingSlash(values.Get("key-marker"))
prefix = values.Get("prefix")
marker = values.Get("key-marker")
delimiter = values.Get("delimiter")
encodingType = values.Get("encoding-type")
versionIDMarker = values.Get("version-id-marker")
@@ -87,8 +87,8 @@ func getListObjectsV2Args(values url.Values) (prefix, token, startAfter, delimit
maxkeys = maxObjectList
}
prefix = trimLeadingSlash(values.Get("prefix"))
startAfter = trimLeadingSlash(values.Get("start-after"))
prefix = values.Get("prefix")
startAfter = values.Get("start-after")
delimiter = values.Get("delimiter")
fetchOwner = values.Get("fetch-owner") == "true"
encodingType = values.Get("encoding-type")
@@ -118,8 +118,8 @@ func getBucketMultipartResources(values url.Values) (prefix, keyMarker, uploadID
maxUploads = maxUploadsList
}
prefix = trimLeadingSlash(values.Get("prefix"))
keyMarker = trimLeadingSlash(values.Get("key-marker"))
prefix = values.Get("prefix")
keyMarker = values.Get("key-marker")
uploadIDMarker = values.Get("upload-id-marker")
delimiter = values.Get("delimiter")
encodingType = values.Get("encoding-type")


@@ -29,21 +29,21 @@ import (
"strings"
"time"
"github.com/minio/minio/internal/amztime"
"github.com/minio/minio/internal/crypto"
"github.com/minio/minio/internal/handlers"
"github.com/minio/minio/internal/hash"
xhttp "github.com/minio/minio/internal/http"
"github.com/minio/minio/internal/logger"
"github.com/minio/pkg/bucket/policy"
xxml "github.com/minio/xxml"
)
const (
// RFC3339 is a subset of the ISO8601 timestamp format. e.g. 2014-04-29T18:30:38Z
iso8601TimeFormat = "2006-01-02T15:04:05.000Z" // Reply date format with millisecond precision.
maxObjectList = 1000 // Limit number of objects in a listObjectsResponse/listObjectsVersionsResponse.
maxDeleteList = 1000 // Limit number of objects deleted in a delete call.
maxUploadsList = 10000 // Limit number of uploads in a listUploadsResponse.
maxPartsList = 10000 // Limit number of parts in a listPartsResponse.
maxObjectList = 1000 // Limit number of objects in a listObjectsResponse/listObjectsVersionsResponse.
maxDeleteList = 1000 // Limit number of objects deleted in a delete call.
maxUploadsList = 10000 // Limit number of uploads in a listUploadsResponse.
maxPartsList = 10000 // Limit number of parts in a listPartsResponse.
)
// LocationResponse - format for location response.
@@ -85,7 +85,7 @@ type ListVersionsResponse struct {
VersionIDMarker string `xml:"VersionIdMarker"`
MaxKeys int
Delimiter string
Delimiter string `xml:"Delimiter,omitempty"`
// A flag that indicates whether or not ListObjects returned all of the results
// that satisfied the search criteria.
IsTruncated bool
@@ -115,7 +115,7 @@ type ListObjectsResponse struct {
NextMarker string `xml:"NextMarker,omitempty"`
MaxKeys int
Delimiter string
Delimiter string `xml:"Delimiter,omitempty"`
// A flag that indicates whether or not ListObjects returned all of the results
// that satisfied the search criteria.
IsTruncated bool
@@ -146,7 +146,7 @@ type ListObjectsV2Response struct {
KeyCount int
MaxKeys int
Delimiter string
Delimiter string `xml:"Delimiter,omitempty"`
// A flag that indicates whether or not ListObjects returned all of the results
// that satisfied the search criteria.
IsTruncated bool
@@ -205,7 +205,7 @@ type ListMultipartUploadsResponse struct {
UploadIDMarker string `xml:"UploadIdMarker"`
NextKeyMarker string
NextUploadIDMarker string `xml:"NextUploadIdMarker"`
Delimiter string
Delimiter string `xml:"Delimiter,omitempty"`
Prefix string
EncodingType string `xml:"EncodingType,omitempty"`
MaxUploads int
@@ -315,12 +315,12 @@ func (s *Metadata) Set(k, v string) {
}
type xmlKeyEntry struct {
XMLName xml.Name
XMLName xxml.Name
Value string `xml:",chardata"`
}
// MarshalXML - StringMap marshals into XML.
func (s *Metadata) MarshalXML(e *xml.Encoder, start xml.StartElement) error {
func (s *Metadata) MarshalXML(e *xxml.Encoder, start xxml.StartElement) error {
if s == nil {
return nil
}
@@ -335,7 +335,7 @@ func (s *Metadata) MarshalXML(e *xml.Encoder, start xml.StartElement) error {
for _, item := range s.Items {
if err := e.Encode(xmlKeyEntry{
XMLName: xml.Name{Local: item.Key},
XMLName: xxml.Name{Local: item.Key},
Value: item.Value,
}); err != nil {
return err
@@ -345,6 +345,13 @@ func (s *Metadata) MarshalXML(e *xml.Encoder, start xml.StartElement) error {
return e.EncodeToken(start.End())
}
// ObjectInternalInfo contains some internal information about a given
// object; it is printed in listing calls when metadata is enabled.
type ObjectInternalInfo struct {
K int // Data blocks
M int // Parity blocks
}
// Object container for object metadata
type Object struct {
Key string
@@ -353,13 +360,16 @@ type Object struct {
Size int64
// Owner of the object.
Owner Owner
Owner *Owner `xml:"Owner,omitempty"`
// The class of storage used to store the object.
StorageClass string
// UserMetadata user-defined metadata
UserMetadata *Metadata `xml:"UserMetadata,omitempty"`
UserTags string `xml:"UserTags,omitempty"`
Internal *ObjectInternalInfo `xml:"Internal,omitempty"`
}
// CopyObjectResponse container returns ETag and LastModified of the successfully copied object
@@ -482,7 +492,7 @@ func generateListBucketsResponse(buckets []BucketInfo) ListBucketsResponse {
for _, bucket := range buckets {
listbuckets = append(listbuckets, Bucket{
Name: bucket.Name,
CreationDate: bucket.Created.UTC().Format(iso8601TimeFormat),
CreationDate: amztime.ISO8601Format(bucket.Created.UTC()),
})
}
@@ -493,22 +503,30 @@ func generateListBucketsResponse(buckets []BucketInfo) ListBucketsResponse {
}
// generates a ListBucketVersions response for the given bucket with other enumerated options.
func generateListVersionsResponse(bucket, prefix, marker, versionIDMarker, delimiter, encodingType string, maxKeys int, resp ListObjectVersionsInfo) ListVersionsResponse {
func generateListVersionsResponse(bucket, prefix, marker, versionIDMarker, delimiter, encodingType string, maxKeys int, resp ListObjectVersionsInfo, metadata metaCheckFn) ListVersionsResponse {
versions := make([]ObjectVersion, 0, len(resp.Objects))
owner := Owner{
owner := &Owner{
ID: globalMinioDefaultOwnerID,
DisplayName: "minio",
}
data := ListVersionsResponse{}
var lastObjMetaName string
var tagErr, metaErr APIErrorCode = -1, -1
for _, object := range resp.Objects {
if object.Name == "" {
continue
}
// Cache checks for the same object
if metadata != nil && lastObjMetaName != object.Name {
tagErr = metadata(object.Name, policy.GetObjectTaggingAction)
metaErr = metadata(object.Name, policy.GetObjectAction)
lastObjMetaName = object.Name
}
content := ObjectVersion{}
content.Key = s3EncodeName(object.Name, encodingType)
content.LastModified = object.ModTime.UTC().Format(iso8601TimeFormat)
content.LastModified = amztime.ISO8601Format(object.ModTime.UTC())
if object.ETag != "" {
content.ETag = "\"" + object.ETag + "\""
}
@@ -518,6 +536,36 @@ func generateListVersionsResponse(bucket, prefix, marker, versionIDMarker, delim
} else {
content.StorageClass = globalMinioDefaultStorageClass
}
if tagErr == ErrNone {
content.UserTags = object.UserTags
}
if metaErr == ErrNone {
content.UserMetadata = &Metadata{}
switch kind, _ := crypto.IsEncrypted(object.UserDefined); kind {
case crypto.S3:
content.UserMetadata.Set(xhttp.AmzServerSideEncryption, xhttp.AmzEncryptionAES)
case crypto.S3KMS:
content.UserMetadata.Set(xhttp.AmzServerSideEncryption, xhttp.AmzEncryptionKMS)
case crypto.SSEC:
content.UserMetadata.Set(xhttp.AmzServerSideEncryptionCustomerAlgorithm, xhttp.AmzEncryptionAES)
}
for k, v := range cleanMinioInternalMetadataKeys(object.UserDefined) {
if stringsHasPrefixFold(k, ReservedMetadataPrefixLower) {
// Do not need to send any internal metadata
// values to client.
continue
}
// https://github.com/google/security-research/security/advisories/GHSA-76wf-9vgp-pj7w
if equals(k, xhttp.AmzMetaUnencryptedContentLength, xhttp.AmzMetaUnencryptedContentMD5) {
continue
}
content.UserMetadata.Set(k, v)
}
content.Internal = &ObjectInternalInfo{
K: object.DataBlocks,
M: object.ParityBlocks,
}
}
content.Owner = owner
content.VersionID = object.VersionID
if content.VersionID == "" {
@@ -554,7 +602,7 @@ func generateListVersionsResponse(bucket, prefix, marker, versionIDMarker, delim
// generates a ListObjectsV1 response for the given bucket with other enumerated options.
func generateListObjectsV1Response(bucket, prefix, marker, delimiter, encodingType string, maxKeys int, resp ListObjectsInfo) ListObjectsResponse {
contents := make([]Object, 0, len(resp.Objects))
owner := Owner{
owner := &Owner{
ID: globalMinioDefaultOwnerID,
DisplayName: "minio",
}
@@ -566,7 +614,7 @@ func generateListObjectsV1Response(bucket, prefix, marker, delimiter, encodingTy
continue
}
content.Key = s3EncodeName(object.Name, encodingType)
content.LastModified = object.ModTime.UTC().Format(iso8601TimeFormat)
content.LastModified = amztime.ISO8601Format(object.ModTime.UTC())
if object.ETag != "" {
content.ETag = "\"" + object.ETag + "\""
}
@@ -601,12 +649,16 @@ func generateListObjectsV1Response(bucket, prefix, marker, delimiter, encodingTy
}
// generates a ListObjectsV2 response for the given bucket with other enumerated options.
func generateListObjectsV2Response(bucket, prefix, token, nextToken, startAfter, delimiter, encodingType string, fetchOwner, isTruncated bool, maxKeys int, objects []ObjectInfo, prefixes []string, metadata bool) ListObjectsV2Response {
func generateListObjectsV2Response(bucket, prefix, token, nextToken, startAfter, delimiter, encodingType string, fetchOwner, isTruncated bool, maxKeys int, objects []ObjectInfo, prefixes []string, metadata metaCheckFn) ListObjectsV2Response {
contents := make([]Object, 0, len(objects))
owner := Owner{
ID: globalMinioDefaultOwnerID,
DisplayName: "minio",
var owner *Owner
if fetchOwner {
owner = &Owner{
ID: globalMinioDefaultOwnerID,
DisplayName: "minio",
}
}
data := ListObjectsV2Response{}
for _, object := range objects {
@@ -615,7 +667,7 @@ func generateListObjectsV2Response(bucket, prefix, token, nextToken, startAfter,
continue
}
content.Key = s3EncodeName(object.Name, encodingType)
content.LastModified = object.ModTime.UTC().Format(iso8601TimeFormat)
content.LastModified = amztime.ISO8601Format(object.ModTime.UTC())
if object.ETag != "" {
content.ETag = "\"" + object.ETag + "\""
}
@@ -626,27 +678,36 @@ func generateListObjectsV2Response(bucket, prefix, token, nextToken, startAfter,
content.StorageClass = globalMinioDefaultStorageClass
}
content.Owner = owner
if metadata {
content.UserMetadata = &Metadata{}
switch kind, _ := crypto.IsEncrypted(object.UserDefined); kind {
case crypto.S3:
content.UserMetadata.Set(xhttp.AmzServerSideEncryption, xhttp.AmzEncryptionAES)
case crypto.S3KMS:
content.UserMetadata.Set(xhttp.AmzServerSideEncryption, xhttp.AmzEncryptionKMS)
case crypto.SSEC:
content.UserMetadata.Set(xhttp.AmzServerSideEncryptionCustomerAlgorithm, xhttp.AmzEncryptionAES)
if metadata != nil {
if metadata(object.Name, policy.GetObjectTaggingAction) == ErrNone {
content.UserTags = object.UserTags
}
for k, v := range cleanMinioInternalMetadataKeys(object.UserDefined) {
if strings.HasPrefix(strings.ToLower(k), ReservedMetadataPrefixLower) {
// Do not need to send any internal metadata
// values to client.
continue
if metadata(object.Name, policy.GetObjectAction) == ErrNone {
content.UserMetadata = &Metadata{}
switch kind, _ := crypto.IsEncrypted(object.UserDefined); kind {
case crypto.S3:
content.UserMetadata.Set(xhttp.AmzServerSideEncryption, xhttp.AmzEncryptionAES)
case crypto.S3KMS:
content.UserMetadata.Set(xhttp.AmzServerSideEncryption, xhttp.AmzEncryptionKMS)
case crypto.SSEC:
content.UserMetadata.Set(xhttp.AmzServerSideEncryptionCustomerAlgorithm, xhttp.AmzEncryptionAES)
}
// https://github.com/google/security-research/security/advisories/GHSA-76wf-9vgp-pj7w
if equals(k, xhttp.AmzMetaUnencryptedContentLength, xhttp.AmzMetaUnencryptedContentMD5) {
continue
for k, v := range cleanMinioInternalMetadataKeys(object.UserDefined) {
if stringsHasPrefixFold(k, ReservedMetadataPrefixLower) {
// Do not need to send any internal metadata
// values to client.
continue
}
// https://github.com/google/security-research/security/advisories/GHSA-76wf-9vgp-pj7w
if equals(k, xhttp.AmzMetaUnencryptedContentLength, xhttp.AmzMetaUnencryptedContentMD5) {
continue
}
content.UserMetadata.Set(k, v)
}
content.Internal = &ObjectInternalInfo{
K: object.DataBlocks,
M: object.ParityBlocks,
}
content.UserMetadata.Set(k, v)
}
}
contents = append(contents, content)
@@ -674,11 +735,13 @@ func generateListObjectsV2Response(bucket, prefix, token, nextToken, startAfter,
return data
}
type metaCheckFn = func(name string, action policy.Action) (s3Err APIErrorCode)
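
`metaCheckFn` lets the list-response generators ask, per object, whether the caller is authorized to see tags and metadata, keeping policy evaluation out of the generators themselves. A sketch of the closure shape; `allowedPrefix` and the inline `authorize` policy are hypothetical stand-ins, only the alias shape matches the diff:

```go
// Plausible shape of a metaCheckFn passed into the list generators above.
// The real callers evaluate IAM/bucket policy; here a prefix rule stands in.
package main

import (
	"fmt"
	"strings"
)

type (
	Action       string
	APIErrorCode int
)

const (
	ErrNone         APIErrorCode = 0
	ErrAccessDenied APIErrorCode = 1

	GetObjectAction        Action = "s3:GetObject"
	GetObjectTaggingAction Action = "s3:GetObjectTagging"
)

type metaCheckFn = func(name string, action Action) APIErrorCode

func main() {
	allowedPrefix := "public/" // hypothetical policy: metadata visible under public/ only
	var check metaCheckFn = func(name string, action Action) APIErrorCode {
		if strings.HasPrefix(name, allowedPrefix) {
			return ErrNone
		}
		return ErrAccessDenied
	}
	fmt.Println(check("public/a.txt", GetObjectAction) == ErrNone)         // true: metadata included
	fmt.Println(check("private/b.txt", GetObjectTaggingAction) == ErrNone) // false: tags omitted
}
```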
// generates CopyObjectResponse from etag and lastModified time.
func generateCopyObjectResponse(etag string, lastModified time.Time) CopyObjectResponse {
return CopyObjectResponse{
ETag: "\"" + etag + "\"",
LastModified: lastModified.UTC().Format(iso8601TimeFormat),
LastModified: amztime.ISO8601Format(lastModified.UTC()),
}
}
@@ -686,7 +749,7 @@ func generateCopyObjectResponse(etag string, lastModified time.Time) CopyObjectR
func generateCopyObjectPartResponse(etag string, lastModified time.Time) CopyObjectPartResponse {
return CopyObjectPartResponse{
ETag: "\"" + etag + "\"",
LastModified: lastModified.UTC().Format(iso8601TimeFormat),
LastModified: amztime.ISO8601Format(lastModified.UTC()),
}
}
@@ -701,7 +764,7 @@ func generateInitiateMultipartUploadResponse(bucket, key, uploadID string) Initi
// generates CompleteMultipartUploadResponse for given bucket, key, location and ETag.
func generateCompleteMultpartUploadResponse(bucket, key, location string, oi ObjectInfo) CompleteMultipartUploadResponse {
cs := oi.decryptChecksums()
cs := oi.decryptChecksums(0)
c := CompleteMultipartUploadResponse{
Location: location,
Bucket: bucket,
@@ -746,7 +809,7 @@ func generateListPartsResponse(partsInfo ListPartsInfo, encodingType string) Lis
newPart.PartNumber = part.PartNumber
newPart.ETag = "\"" + part.ETag + "\""
newPart.Size = part.Size
newPart.LastModified = part.LastModified.UTC().Format(iso8601TimeFormat)
newPart.LastModified = amztime.ISO8601Format(part.LastModified.UTC())
newPart.ChecksumCRC32 = part.ChecksumCRC32
newPart.ChecksumCRC32C = part.ChecksumCRC32C
newPart.ChecksumSHA1 = part.ChecksumSHA1
@@ -780,7 +843,7 @@ func generateListMultipartUploadsResponse(bucket string, multipartsInfo ListMult
newUpload := Upload{}
newUpload.UploadID = upload.UploadID
newUpload.Key = s3EncodeName(upload.Object, encodingType)
newUpload.Initiated = upload.Initiated.UTC().Format(iso8601TimeFormat)
newUpload.Initiated = amztime.ISO8601Format(upload.Initiated.UTC())
listMultipartUploadsResponse.Uploads[index] = newUpload
}
return listMultipartUploadsResponse
@@ -876,12 +939,14 @@ func writeErrorResponse(ctx context.Context, w http.ResponseWriter, err APIError
// Generate error response.
errorResponse := getAPIErrorResponse(ctx, err, reqURL.Path,
w.Header().Get(xhttp.AmzRequestID), globalDeploymentID)
w.Header().Get(xhttp.AmzRequestID), w.Header().Get(xhttp.AmzRequestHostID))
encodedErrorResponse := encodeResponse(errorResponse)
writeResponse(w, err.HTTPStatusCode, encodedErrorResponse, mimeXML)
}
func writeErrorResponseHeadersOnly(w http.ResponseWriter, err APIError) {
w.Header().Set(xMinIOErrCodeHeader, err.Code)
w.Header().Set(xMinIOErrDescHeader, "\""+err.Description+"\"")
writeResponse(w, err.HTTPStatusCode, nil, mimeNone)
}
@@ -894,7 +959,7 @@ func writeErrorResponseString(ctx context.Context, w http.ResponseWriter, err AP
// useful for admin APIs.
func writeErrorResponseJSON(ctx context.Context, w http.ResponseWriter, err APIError, reqURL *url.URL) {
// Generate error response.
errorResponse := getAPIErrorResponse(ctx, err, reqURL.Path, w.Header().Get(xhttp.AmzRequestID), globalDeploymentID)
errorResponse := getAPIErrorResponse(ctx, err, reqURL.Path, w.Header().Get(xhttp.AmzRequestID), w.Header().Get(xhttp.AmzRequestHostID))
encodedErrorResponse := encodeResponseJSON(errorResponse)
writeResponse(w, err.HTTPStatusCode, encodedErrorResponse, mimeJSON)
}


@@ -22,11 +22,11 @@ import (
"net"
"net/http"
"github.com/gorilla/mux"
"github.com/klauspost/compress/gzhttp"
"github.com/minio/console/restapi"
xhttp "github.com/minio/minio/internal/http"
"github.com/minio/minio/internal/logger"
"github.com/minio/mux"
"github.com/minio/pkg/wildcard"
"github.com/rs/cors"
)
@@ -245,10 +245,10 @@ func registerAPIRouter(router *mux.Router) {
router.Methods(http.MethodPut).Path("/{object:.+}").
HeadersRegexp(xhttp.AmzCopySource, ".*?(\\/|%2F).*?").
HandlerFunc(collectAPIStats("copyobjectpart", maxClients(gz(httpTraceAll(api.CopyObjectPartHandler))))).
Queries("partNumber", "{partNumber:[0-9]+}", "uploadId", "{uploadId:.*}")
Queries("partNumber", "{partNumber:.*}", "uploadId", "{uploadId:.*}")
// PutObjectPart
router.Methods(http.MethodPut).Path("/{object:.+}").HandlerFunc(
collectAPIStats("putobjectpart", maxClients(gz(httpTraceHdrs(api.PutObjectPartHandler))))).Queries("partNumber", "{partNumber:[0-9]+}", "uploadId", "{uploadId:.*}")
collectAPIStats("putobjectpart", maxClients(gz(httpTraceHdrs(api.PutObjectPartHandler))))).Queries("partNumber", "{partNumber:.*}", "uploadId", "{uploadId:.*}")
// ListObjectParts
router.Methods(http.MethodGet).Path("/{object:.+}").HandlerFunc(
collectAPIStats("listobjectparts", maxClients(gz(httpTraceAll(api.ListObjectPartsHandler))))).Queries("uploadId", "{uploadId:.*}")
@@ -285,7 +285,10 @@ func registerAPIRouter(router *mux.Router) {
// GetObjectLegalHold
router.Methods(http.MethodGet).Path("/{object:.+}").HandlerFunc(
collectAPIStats("getobjectlegalhold", maxClients(gz(httpTraceAll(api.GetObjectLegalHoldHandler))))).Queries("legal-hold", "")
// GetObject - note gzip compression is *not* added due to Range requests.
// GetObject with lambda ARNs
router.Methods(http.MethodGet).Path("/{object:.+}").HandlerFunc(
collectAPIStats("getobject", maxClients(gz(httpTraceHdrs(api.GetObjectLambdaHandler))))).Queries("lambdaArn", "{lambdaArn:.*}")
// GetObject
router.Methods(http.MethodGet).Path("/{object:.+}").HandlerFunc(
collectAPIStats("getobject", maxClients(gz(httpTraceHdrs(api.GetObjectHandler)))))
// CopyObject
@@ -389,6 +392,9 @@ func registerAPIRouter(router *mux.Router) {
router.Methods(http.MethodGet).HandlerFunc(
collectAPIStats("listobjectsv2", maxClients(gz(httpTraceAll(api.ListObjectsV2Handler))))).Queries("list-type", "2")
// ListObjectVersions
router.Methods(http.MethodGet).HandlerFunc(
collectAPIStats("listobjectversions", maxClients(gz(httpTraceAll(api.ListObjectVersionsMHandler))))).Queries("versions", "", "metadata", "true")
// ListObjectVersions
router.Methods(http.MethodGet).HandlerFunc(
collectAPIStats("listobjectversions", maxClients(gz(httpTraceAll(api.ListObjectVersionsHandler))))).Queries("versions", "")
// GetBucketPolicyStatus
@@ -431,8 +437,9 @@ func registerAPIRouter(router *mux.Router) {
router.Methods(http.MethodHead).HandlerFunc(
collectAPIStats("headbucket", maxClients(gz(httpTraceAll(api.HeadBucketHandler)))))
// PostPolicy
router.Methods(http.MethodPost).HeadersRegexp(xhttp.ContentType, "multipart/form-data*").HandlerFunc(
collectAPIStats("postpolicybucket", maxClients(gz(httpTraceHdrs(api.PostPolicyBucketHandler)))))
router.Methods(http.MethodPost).MatcherFunc(func(r *http.Request, _ *mux.RouteMatch) bool {
return isRequestPostPolicySignatureV4(r)
}).HandlerFunc(collectAPIStats("postpolicybucket", maxClients(gz(httpTraceHdrs(api.PostPolicyBucketHandler)))))
// DeleteMultipleObjects
router.Methods(http.MethodPost).HandlerFunc(
collectAPIStats("deletemultipleobjects", maxClients(gz(httpTraceAll(api.DeleteMultipleObjectsHandler))))).Queries("delete", "")
@@ -457,6 +464,9 @@ func registerAPIRouter(router *mux.Router) {
// GetBucketReplicationMetrics
router.Methods(http.MethodGet).HandlerFunc(
collectAPIStats("getbucketreplicationmetrics", maxClients(gz(httpTraceAll(api.GetBucketReplicationMetricsHandler))))).Queries("replication-metrics", "")
// ValidateBucketReplicationCreds
router.Methods(http.MethodGet).HandlerFunc(
collectAPIStats("checkbucketreplicationconfiguration", maxClients(gz(httpTraceAll(api.ValidateBucketReplicationCredsHandler))))).Queries("replication-check", "")
// Register rejected bucket APIs
for _, r := range rejectedBucketAPIs {
@@ -516,7 +526,11 @@ func corsHandler(handler http.Handler) http.Handler {
return cors.New(cors.Options{
AllowOriginFunc: func(origin string) bool {
for _, allowedOrigin := range globalAPIConfig.getCorsAllowOrigins() {
allowedOrigins := globalAPIConfig.getCorsAllowOrigins()
if len(allowedOrigins) == 0 {
allowedOrigins = []string{"*"}
}
for _, allowedOrigin := range allowedOrigins {
if wildcard.MatchSimple(allowedOrigin, origin) {
return true
}


@@ -94,14 +94,8 @@ func s3URLEncode(s string) string {
}
// s3EncodeName encodes string in response when encodingType is specified in AWS S3 requests.
func s3EncodeName(name string, encodingType string) (result string) {
// Quick path to exit
if encodingType == "" {
return name
}
encodingType = strings.ToLower(encodingType)
switch encodingType {
case "url":
func s3EncodeName(name, encodingType string) string {
if strings.ToLower(encodingType) == "url" {
return s3URLEncode(name)
}
return name

File diff suppressed because one or more lines are too long


@@ -25,6 +25,7 @@ import (
"encoding/hex"
"errors"
"io"
"mime"
"net/http"
"net/url"
"strconv"
@@ -39,6 +40,7 @@ import (
xhttp "github.com/minio/minio/internal/http"
xjwt "github.com/minio/minio/internal/jwt"
"github.com/minio/minio/internal/logger"
"github.com/minio/minio/internal/mcontext"
"github.com/minio/pkg/bucket/policy"
iampolicy "github.com/minio/pkg/iam/policy"
)
@@ -73,8 +75,11 @@ func isRequestPresignedSignatureV2(r *http.Request) bool {
// Verify if request has AWS Post policy Signature Version '4'.
func isRequestPostPolicySignatureV4(r *http.Request) bool {
return strings.Contains(r.Header.Get(xhttp.ContentType), "multipart/form-data") &&
r.Method == http.MethodPost
mediaType, _, err := mime.ParseMediaType(r.Header.Get(xhttp.ContentType))
if err != nil {
return false
}
return mediaType == "multipart/form-data" && r.Method == http.MethodPost
}
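
Replacing `strings.Contains` with `mime.ParseMediaType` makes the POST-policy detection stricter and more lenient in the right places: parameters such as `boundary` are parsed off, matching is case-insensitive, and look-alike Content-Types no longer match. A standalone comparison:

```go
// Why mime.ParseMediaType beats strings.Contains for the POST-policy
// check: it canonicalizes (lowercases) the media type, strips parameters,
// and rejects malformed types, so only genuine multipart/form-data matches.
package main

import (
	"fmt"
	"mime"
	"strings"
)

func main() {
	headers := []string{
		"multipart/form-data; boundary=9ab03134",    // genuine POST form
		"MULTIPART/FORM-DATA",                       // uppercase: Contains() misses it
		"application/json+multipart/form-data-fake", // Contains() false positive
	}
	for _, h := range headers {
		mediaType, _, err := mime.ParseMediaType(h)
		strict := err == nil && mediaType == "multipart/form-data"
		loose := strings.Contains(h, "multipart/form-data")
		fmt.Printf("%-46q strict=%v loose=%v\n", h, strict, loose)
	}
}
```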
// Verify if the request has AWS Streaming Signature Version '4'. This is only valid for 'PUT' operation.
@@ -83,6 +88,18 @@ func isRequestSignStreamingV4(r *http.Request) bool {
r.Method == http.MethodPut
}
// Verify if the request has AWS Streaming Signature Version '4' with signed trailers. This is only valid for 'PUT' operation.
func isRequestSignStreamingTrailerV4(r *http.Request) bool {
return r.Header.Get(xhttp.AmzContentSha256) == streamingContentSHA256Trailer &&
r.Method == http.MethodPut
}
// Verify if the request has AWS Streaming Signature Version '4', with unsigned content and trailer.
func isRequestUnsignedTrailerV4(r *http.Request) bool {
return r.Header.Get(xhttp.AmzContentSha256) == unsignedPayloadTrailer &&
r.Method == http.MethodPut && strings.Contains(r.Header.Get(xhttp.ContentEncoding), streamingContentEncoding)
}
// Authorization type.
//
//go:generate stringer -type=authType -trimprefix=authType $GOFILE
@@ -100,10 +117,12 @@ const (
authTypeSignedV2
authTypeJWT
authTypeSTS
authTypeStreamingSignedTrailer
authTypeStreamingUnsignedTrailer
)
// Get request authentication type.
func getRequestAuthType(r *http.Request) authType {
func getRequestAuthType(r *http.Request) (at authType) {
if r.URL != nil {
var err error
r.Form, err = url.ParseQuery(r.URL.RawQuery)
@@ -118,6 +137,10 @@ func getRequestAuthType(r *http.Request) authType {
return authTypePresignedV2
} else if isRequestSignStreamingV4(r) {
return authTypeStreamingSigned
} else if isRequestSignStreamingTrailerV4(r) {
return authTypeStreamingSignedTrailer
} else if isRequestUnsignedTrailerV4(r) {
return authTypeStreamingUnsignedTrailer
} else if isRequestSignatureV4(r) {
return authTypeSigned
} else if isRequestPresignedSignatureV4(r) {
@@ -134,7 +157,7 @@ func getRequestAuthType(r *http.Request) authType {
return authTypeUnknown
}
func validateAdminSignature(ctx context.Context, r *http.Request, region string) (auth.Credentials, map[string]interface{}, bool, APIErrorCode) {
func validateAdminSignature(ctx context.Context, r *http.Request, region string) (auth.Credentials, bool, APIErrorCode) {
var cred auth.Credentials
var owner bool
s3Err := ErrAccessDenied
@@ -143,27 +166,28 @@ func validateAdminSignature(ctx context.Context, r *http.Request, region string)
// We only support admin credentials to access admin APIs.
cred, owner, s3Err = getReqAccessKeyV4(r, region, serviceS3)
if s3Err != ErrNone {
return cred, nil, owner, s3Err
return cred, owner, s3Err
}
// we only support V4 (no presign) with auth body
s3Err = isReqAuthenticated(ctx, r, region, serviceS3)
}
if s3Err != ErrNone {
reqInfo := (&logger.ReqInfo{}).AppendTags("requestHeaders", dumpRequest(r))
ctx := logger.SetReqInfo(ctx, reqInfo)
logger.LogIf(ctx, errors.New(getAPIError(s3Err).Description), logger.Application)
return cred, nil, owner, s3Err
return cred, owner, s3Err
}
return cred, cred.Claims, owner, ErrNone
logger.GetReqInfo(ctx).Cred = cred
logger.GetReqInfo(ctx).Owner = owner
logger.GetReqInfo(ctx).Region = globalSite.Region
return cred, owner, ErrNone
}
// checkAdminRequestAuth checks for authentication and authorization for the incoming
// request. It only accepts V2 and V4 requests. Presigned, JWT and anonymous requests
// are automatically rejected.
func checkAdminRequestAuth(ctx context.Context, r *http.Request, action iampolicy.AdminAction, region string) (auth.Credentials, APIErrorCode) {
cred, claims, owner, s3Err := validateAdminSignature(ctx, r, region)
cred, owner, s3Err := validateAdminSignature(ctx, r, region)
if s3Err != ErrNone {
return cred, s3Err
}
@@ -171,9 +195,9 @@ func checkAdminRequestAuth(ctx context.Context, r *http.Request, action iampolic
AccountName: cred.AccessKey,
Groups: cred.Groups,
Action: iampolicy.Action(action),
ConditionValues: getConditionValues(r, "", cred.AccessKey, claims),
ConditionValues: getConditionValues(r, "", cred),
IsOwner: owner,
Claims: claims,
Claims: cred.Claims,
}) {
// Request is allowed return the appropriate access key.
return cred, ErrNone
@@ -205,7 +229,7 @@ func getClaimsFromTokenWithSecret(token, secret string) (map[string]interface{},
// that clients cannot decode the token using the temp
// secret keys and generate an entirely new claim by essentially
// hijacking the policies. We need to make sure that this is
// based an admin credential such that token cannot be decoded
// based on admin credential such that token cannot be decoded
// on the client side and is treated like an opaque value.
claims, err := auth.ExtractClaims(token, secret)
if err != nil {
@@ -255,7 +279,7 @@ func checkClaimsFromToken(r *http.Request, cred auth.Credentials) (map[string]in
return nil, ErrNoAccessKey
}
if token == "" && cred.IsTemp() {
if token == "" && cred.IsTemp() && !cred.IsServiceAccount() {
// Temporary credentials should always have x-amz-security-token
return nil, ErrInvalidToken
}
@@ -265,7 +289,7 @@ func checkClaimsFromToken(r *http.Request, cred auth.Credentials) (map[string]in
return nil, ErrInvalidToken
}
if cred.IsTemp() && subtle.ConstantTimeCompare([]byte(token), []byte(cred.SessionToken)) != 1 {
if !cred.IsServiceAccount() && cred.IsTemp() && subtle.ConstantTimeCompare([]byte(token), []byte(cred.SessionToken)) != 1 {
// validate token for temporary credentials only.
return nil, ErrInvalidToken
}
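
The session-token check keeps using `crypto/subtle` (now skipped for service accounts) so comparison time does not reveal how many leading bytes of a guessed token are correct. A minimal sketch of the primitive:

```go
// The token check above uses crypto/subtle so the comparison takes the
// same time regardless of how many leading bytes match, closing the timing
// side channel that a plain == with early exit could open.
package main

import (
	"crypto/subtle"
	"fmt"
)

func main() {
	presented := []byte("session-token-from-request")
	stored := []byte("session-token-from-request")
	// Returns 1 only when lengths and all bytes match; runs in constant
	// time for equal-length inputs.
	if subtle.ConstantTimeCompare(presented, stored) == 1 {
		fmt.Println("token accepted")
	} else {
		fmt.Println("token rejected")
	}
}
```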
@@ -302,6 +326,17 @@ func checkRequestAuthType(ctx context.Context, r *http.Request, action policy.Ac
return s3Err
}
// checkRequestAuthTypeWithVID is similar to checkRequestAuthType
// but additionally passes the versionID.
func checkRequestAuthTypeWithVID(ctx context.Context, r *http.Request, action policy.Action, bucketName, objectName, versionID string) (s3Err APIErrorCode) {
logger.GetReqInfo(ctx).BucketName = bucketName
logger.GetReqInfo(ctx).ObjectName = objectName
logger.GetReqInfo(ctx).VersionID = versionID
_, _, s3Err = checkRequestAuthTypeCredential(ctx, r, action)
return s3Err
}
func authenticateRequest(ctx context.Context, r *http.Request, action policy.Action) (s3Err APIErrorCode) {
if logger.GetReqInfo(ctx) == nil {
logger.LogIf(ctx, errors.New("unexpected context.Context does not have a logger.ReqInfo"), logger.Minio)
@@ -335,6 +370,7 @@ func authenticateRequest(ctx context.Context, r *http.Request, action policy.Act
logger.GetReqInfo(ctx).Cred = cred
logger.GetReqInfo(ctx).Owner = owner
logger.GetReqInfo(ctx).Region = globalSite.Region
// region is valid only for CreateBucketAction.
var region string
@@ -373,14 +409,16 @@ func authorizeRequest(ctx context.Context, r *http.Request, action policy.Action
region := reqInfo.Region
bucket := reqInfo.BucketName
object := reqInfo.ObjectName
versionID := reqInfo.VersionID
if action != policy.ListAllMyBucketsAction && cred.AccessKey == "" {
// Anonymous checks are not meant for ListAllBuckets action
if globalPolicySys.IsAllowed(policy.Args{
AccountName: cred.AccessKey,
Groups: cred.Groups,
Action: action,
BucketName: bucket,
ConditionValues: getConditionValues(r, region, "", nil),
ConditionValues: getConditionValues(r, region, auth.AnonymousCredentials),
IsOwner: false,
ObjectName: object,
}) {
@@ -393,9 +431,10 @@ func authorizeRequest(ctx context.Context, r *http.Request, action policy.Action
// verify as a fallback.
if globalPolicySys.IsAllowed(policy.Args{
AccountName: cred.AccessKey,
Groups: cred.Groups,
Action: policy.ListBucketAction,
BucketName: bucket,
ConditionValues: getConditionValues(r, region, "", nil),
ConditionValues: getConditionValues(r, region, auth.AnonymousCredentials),
IsOwner: false,
ObjectName: object,
}) {
@@ -406,13 +445,27 @@ func authorizeRequest(ctx context.Context, r *http.Request, action policy.Action
return ErrAccessDenied
}
if action == policy.DeleteObjectAction && versionID != "" {
if !globalIAMSys.IsAllowed(iampolicy.Args{
AccountName: cred.AccessKey,
Groups: cred.Groups,
Action: iampolicy.Action(policy.DeleteObjectVersionAction),
BucketName: bucket,
ConditionValues: getConditionValues(r, "", cred),
ObjectName: object,
IsOwner: owner,
Claims: cred.Claims,
DenyOnly: true,
}) { // Request is not allowed if there is a Deny on DeleteObjectVersionAction
return ErrAccessDenied
}
}
if globalIAMSys.IsAllowed(iampolicy.Args{
AccountName: cred.AccessKey,
Groups: cred.Groups,
Action: iampolicy.Action(action),
BucketName: bucket,
ConditionValues: getConditionValues(r, "", cred.AccessKey, cred.Claims),
ConditionValues: getConditionValues(r, "", cred),
ObjectName: object,
IsOwner: owner,
Claims: cred.Claims,
@@ -429,7 +482,7 @@ func authorizeRequest(ctx context.Context, r *http.Request, action policy.Action
Groups: cred.Groups,
Action: iampolicy.ListBucketAction,
BucketName: bucket,
ConditionValues: getConditionValues(r, "", cred.AccessKey, cred.Claims),
ConditionValues: getConditionValues(r, "", cred),
ObjectName: object,
IsOwner: owner,
Claims: cred.Claims,
@@ -525,13 +578,15 @@ func isReqAuthenticated(ctx context.Context, r *http.Request, region string, sty
// List of all supported S3 auth types.
var supportedS3AuthTypes = map[authType]struct{}{
authTypeAnonymous: {},
authTypePresigned: {},
authTypePresignedV2: {},
authTypeSigned: {},
authTypeSignedV2: {},
authTypePostPolicy: {},
authTypeStreamingSigned: {},
authTypeAnonymous: {},
authTypePresigned: {},
authTypePresignedV2: {},
authTypeSigned: {},
authTypeSignedV2: {},
authTypePostPolicy: {},
authTypeStreamingSigned: {},
authTypeStreamingSignedTrailer: {},
authTypeStreamingUnsignedTrailer: {},
}
// Validate if the authType is valid and supported.
@@ -540,25 +595,27 @@ func isSupportedS3AuthType(aType authType) bool {
return ok
}
// setAuthHandler to validate authorization header for the incoming request.
func setAuthHandler(h http.Handler) http.Handler {
// setAuthMiddleware to validate authorization header for the incoming request.
func setAuthMiddleware(h http.Handler) http.Handler {
// handler for validating incoming authorization headers.
return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
tc, ok := r.Context().Value(contextTraceReqKey).(*traceCtxt)
tc, ok := r.Context().Value(mcontext.ContextTraceKey).(*mcontext.TraceCtxt)
aType := getRequestAuthType(r)
if aType == authTypeSigned || aType == authTypeSignedV2 || aType == authTypeStreamingSigned {
switch aType {
case authTypeSigned, authTypeSignedV2, authTypeStreamingSigned, authTypeStreamingSignedTrailer:
// Verify if date headers are set, if not reject the request
amzDate, errCode := parseAmzDateHeader(r)
if errCode != ErrNone {
if ok {
tc.funcName = "handler.Auth"
tc.responseRecorder.LogErrBody = true
tc.FuncName = "handler.Auth"
tc.ResponseRecorder.LogErrBody = true
}
// All our internal APIs are sensitive to the Date
// header; requests where the Date header is not
// present will be rejected.
defer logger.AuditLog(r.Context(), w, r, mustGetClaimsFromToken(r))
writeErrorResponse(r.Context(), w, errorCodes.ToAPIErr(errCode), r.URL)
atomic.AddUint64(&globalHTTPStats.rejectedRequestsTime, 1)
return
@@ -568,25 +625,33 @@ func setAuthHandler(h http.Handler) http.Handler {
curTime := UTCNow()
if curTime.Sub(amzDate) > globalMaxSkewTime || amzDate.Sub(curTime) > globalMaxSkewTime {
if ok {
tc.funcName = "handler.Auth"
tc.responseRecorder.LogErrBody = true
tc.FuncName = "handler.Auth"
tc.ResponseRecorder.LogErrBody = true
}
defer logger.AuditLog(r.Context(), w, r, mustGetClaimsFromToken(r))
writeErrorResponse(r.Context(), w, errorCodes.ToAPIErr(ErrRequestTimeTooSkewed), r.URL)
atomic.AddUint64(&globalHTTPStats.rejectedRequestsTime, 1)
return
}
}
if isSupportedS3AuthType(aType) || aType == authTypeJWT || aType == authTypeSTS {
h.ServeHTTP(w, r)
return
case authTypeJWT, authTypeSTS:
h.ServeHTTP(w, r)
return
default:
if isSupportedS3AuthType(aType) {
h.ServeHTTP(w, r)
return
}
}
if ok {
tc.funcName = "handler.Auth"
tc.responseRecorder.LogErrBody = true
tc.FuncName = "handler.Auth"
tc.ResponseRecorder.LogErrBody = true
}
defer logger.AuditLog(r.Context(), w, r, mustGetClaimsFromToken(r))
writeErrorResponse(r.Context(), w, errorCodes.ToAPIErr(ErrSignatureVersionNotSupported), r.URL)
atomic.AddUint64(&globalHTTPStats.rejectedRequestsAuth, 1)
})
@@ -624,7 +689,7 @@ func isPutRetentionAllowed(bucketName, objectName string, retDays int, retDate t
return ErrAccessDenied
}
conditions := getConditionValues(r, "", cred.AccessKey, cred.Claims)
conditions := getConditionValues(r, "", cred)
conditions["object-lock-mode"] = []string{string(retMode)}
conditions["object-lock-retain-until-date"] = []string{retDate.UTC().Format(time.RFC3339)}
if retDays > 0 {
@@ -672,7 +737,7 @@ func isPutActionAllowed(ctx context.Context, atype authType, bucketName, objectN
return ErrSignatureVersionNotSupported
case authTypeSignedV2, authTypePresignedV2:
cred, owner, s3Err = getReqAccessKeyV2(r)
case authTypeStreamingSigned, authTypePresigned, authTypeSigned:
case authTypeStreamingSigned, authTypePresigned, authTypeSigned, authTypeStreamingSignedTrailer, authTypeStreamingUnsignedTrailer:
cred, owner, s3Err = getReqAccessKeyV4(r, region, serviceS3)
}
if s3Err != ErrNone {
@@ -698,7 +763,7 @@ func isPutActionAllowed(ctx context.Context, atype authType, bucketName, objectN
Groups: cred.Groups,
Action: policy.Action(action),
BucketName: bucketName,
ConditionValues: getConditionValues(r, "", "", nil),
ConditionValues: getConditionValues(r, "", auth.AnonymousCredentials),
IsOwner: false,
ObjectName: objectName,
}) {
@@ -712,7 +777,7 @@ func isPutActionAllowed(ctx context.Context, atype authType, bucketName, objectN
Groups: cred.Groups,
Action: action,
BucketName: bucketName,
ConditionValues: getConditionValues(r, "", cred.AccessKey, cred.Claims),
ConditionValues: getConditionValues(r, "", cred),
ObjectName: objectName,
IsOwner: owner,
Claims: cred.Claims,


@@ -375,8 +375,6 @@ func TestIsReqAuthenticated(t *testing.T) {
initConfigSubsystem(ctx, objLayer)
globalIAMSys.Init(ctx, objLayer, globalEtcdClient, 2*time.Second)
creds, err := auth.CreateCredentials("myuser", "mypassword")
if err != nil {
t.Fatalf("unable create credential, %s", err)
@@ -384,6 +382,8 @@ func TestIsReqAuthenticated(t *testing.T) {
globalActiveCred = creds
globalIAMSys.Init(ctx, objLayer, globalEtcdClient, 2*time.Second)
// List of test cases for validating http request authentication.
testCases := []struct {
req *http.Request
@@ -464,9 +464,8 @@ func TestValidateAdminSignature(t *testing.T) {
}
initAllSubsystems(ctx)
initConfigSubsystem(ctx, objLayer)
globalIAMSys.Init(ctx, objLayer, globalEtcdClient, 2*time.Second)
initConfigSubsystem(ctx, objLayer)
creds, err := auth.CreateCredentials("admin", "mypassword")
if err != nil {
@@ -474,6 +473,8 @@ func TestValidateAdminSignature(t *testing.T) {
}
globalActiveCred = creds
globalIAMSys.Init(ctx, objLayer, globalEtcdClient, 2*time.Second)
testCases := []struct {
AccessKey string
SecretKey string
@@ -492,7 +493,7 @@ func TestValidateAdminSignature(t *testing.T) {
if err := signRequestV4(req, testCase.AccessKey, testCase.SecretKey); err != nil {
t.Fatalf("Unable to inititalized new signed http request %s", err)
}
_, _, _, s3Error := validateAdminSignature(ctx, req, globalMinioDefaultRegion)
_, _, s3Error := validateAdminSignature(ctx, req, globalMinioDefaultRegion)
if s3Error != testCase.ErrCode {
t.Errorf("Test %d: Unexpected s3error returned wanted %d, got %d", i+1, testCase.ErrCode, s3Error)
}


@@ -18,11 +18,13 @@ func _() {
_ = x[authTypeSignedV2-7]
_ = x[authTypeJWT-8]
_ = x[authTypeSTS-9]
_ = x[authTypeStreamingSignedTrailer-10]
_ = x[authTypeStreamingUnsignedTrailer-11]
}
const _authType_name = "UnknownAnonymousPresignedPresignedV2PostPolicyStreamingSignedSignedSignedV2JWTSTS"
const _authType_name = "UnknownAnonymousPresignedPresignedV2PostPolicyStreamingSignedSignedSignedV2JWTSTSStreamingSignedTrailerStreamingUnsignedTrailer"
var _authType_index = [...]uint8{0, 7, 16, 25, 36, 46, 61, 67, 75, 78, 81}
var _authType_index = [...]uint8{0, 7, 16, 25, 36, 46, 61, 67, 75, 78, 81, 103, 127}
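
The regenerated stringer output packs every authType name into one string and slices it through `_authType_index`; the two appended offsets, 103 and 127, cover the new trailer auth types. A standalone sketch of the lookup:

```go
// How the regenerated stringer tables above work: all names live in one
// packed string and _authType_index holds the cut points, so String() is
// a two-index slice with no allocations. Values mirror the new table.
package main

import "fmt"

const _authType_name = "UnknownAnonymousPresignedPresignedV2PostPolicyStreamingSignedSignedSignedV2JWTSTSStreamingSignedTrailerStreamingUnsignedTrailer"

var _authType_index = [...]uint8{0, 7, 16, 25, 36, 46, 61, 67, 75, 78, 81, 103, 127}

func name(i int) string { return _authType_name[_authType_index[i]:_authType_index[i+1]] }

func main() {
	fmt.Println(name(10)) // StreamingSignedTrailer
	fmt.Println(name(11)) // StreamingUnsignedTrailer
}
```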
func (i authType) String() string {
if i < 0 || i >= authType(len(_authType_index)-1) {


@@ -19,9 +19,14 @@ package cmd
import (
"context"
"fmt"
"runtime"
"strconv"
"time"
"github.com/minio/madmin-go"
"github.com/minio/madmin-go/v3"
"github.com/minio/minio/internal/logger"
"github.com/minio/pkg/env"
)
// healTask represents what to heal along with options
@@ -56,13 +61,43 @@ func activeListeners() int {
return int(globalHTTPListen.Subscribers()) + int(globalTrace.Subscribers())
}
func waitForLowHTTPReq() {
var currentIO func() int
if httpServer := newHTTPServerFn(); httpServer != nil {
currentIO = httpServer.GetRequestCount
func waitForLowIO(maxIO int, maxWait time.Duration, currentIO func() int) {
// No need to wait, run at full speed.
if maxIO <= 0 {
return
}
globalHealConfig.Wait(currentIO, activeListeners)
const waitTick = 100 * time.Millisecond
tmpMaxWait := maxWait
for currentIO() >= maxIO {
if tmpMaxWait > 0 {
if tmpMaxWait < waitTick {
time.Sleep(tmpMaxWait)
} else {
time.Sleep(waitTick)
}
tmpMaxWait -= waitTick
}
if tmpMaxWait <= 0 {
return
}
}
}
func currentHTTPIO() int {
httpServer := newHTTPServerFn()
if httpServer == nil {
return 0
}
return httpServer.GetRequestCount() - activeListeners()
}
func waitForLowHTTPReq() {
maxIO, maxWait, _ := globalHealConfig.Clone()
waitForLowIO(maxIO, maxWait, currentHTTPIO)
}
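
Extracting `waitForLowIO` with an injected `currentIO func() int` makes the heal throttle exercisable without an HTTP server. A usage sketch with a fake in-flight counter; the function body is copied from the hunk above, the surrounding harness is illustrative:

```go
// Usage sketch for the refactored throttle: waitForLowIO polls an injected
// counter in 100ms ticks and gives up after maxWait.
package main

import (
	"fmt"
	"sync/atomic"
	"time"
)

func waitForLowIO(maxIO int, maxWait time.Duration, currentIO func() int) {
	if maxIO <= 0 {
		return // no limit configured, run at full speed
	}
	const waitTick = 100 * time.Millisecond
	tmpMaxWait := maxWait
	for currentIO() >= maxIO {
		if tmpMaxWait > 0 {
			if tmpMaxWait < waitTick {
				time.Sleep(tmpMaxWait)
			} else {
				time.Sleep(waitTick)
			}
			tmpMaxWait -= waitTick
		}
		if tmpMaxWait <= 0 {
			return // waited long enough, proceed anyway
		}
	}
}

func main() {
	var inflight atomic.Int64
	inflight.Store(20)
	// Simulate load draining while the healer waits.
	go func() {
		time.Sleep(250 * time.Millisecond)
		inflight.Store(0)
	}()
	start := time.Now()
	waitForLowIO(10, 2*time.Second, func() int { return int(inflight.Load()) })
	fmt.Printf("proceeded after ~%v\n", time.Since(start).Round(50*time.Millisecond))
}
```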
func initBackgroundHealing(ctx context.Context, objAPI ObjectLayer) {
@@ -88,8 +123,7 @@ func (h *healRoutine) AddWorker(ctx context.Context, objAPI ObjectLayer) {
var err error
switch task.bucket {
case nopHeal:
task.respCh <- healResult{err: errSkipFile}
continue
err = errSkipFile
case SlashSeparator:
res, err = healDiskFormat(ctx, objAPI, task.opts)
default:
@@ -100,7 +134,10 @@ func (h *healRoutine) AddWorker(ctx context.Context, objAPI ObjectLayer) {
}
}
task.respCh <- healResult{result: res, err: err}
if task.respCh != nil {
task.respCh <- healResult{result: res, err: err}
}
case <-ctx.Done():
return
}
@@ -109,9 +146,19 @@ func (h *healRoutine) AddWorker(ctx context.Context, objAPI ObjectLayer) {
func newHealRoutine() *healRoutine {
workers := runtime.GOMAXPROCS(0) / 2
if envHealWorkers := env.Get("_MINIO_HEAL_WORKERS", ""); envHealWorkers != "" {
if numHealers, err := strconv.Atoi(envHealWorkers); err != nil {
logger.LogIf(context.Background(), fmt.Errorf("invalid _MINIO_HEAL_WORKERS value: %w", err))
} else {
workers = numHealers
}
}
if workers == 0 {
workers = 4
}
return &healRoutine{
tasks: make(chan healTask),
workers: workers,


@@ -26,12 +26,15 @@ import (
"os"
"sort"
"strings"
"sync"
"time"
"github.com/dustin/go-humanize"
"github.com/minio/madmin-go"
"github.com/minio/madmin-go/v3"
"github.com/minio/minio-go/v7/pkg/set"
"github.com/minio/minio/internal/config"
"github.com/minio/minio/internal/logger"
"github.com/minio/pkg/env"
)
const (
@@ -43,7 +46,8 @@ const (
// healingTracker is used to persist healing information during a heal.
type healingTracker struct {
disk StorageAPI `msg:"-"`
disk StorageAPI `msg:"-"`
mu *sync.RWMutex `msg:"-"`
ID string
PoolIndex int
@@ -79,6 +83,10 @@ type healingTracker struct {
// Filled during heal.
HealedBuckets []string
// ID of the current healing operation
HealID string
// Add future tracking capabilities
// Be sure that they are included in toHealingDisk
}
@@ -108,21 +116,76 @@ func loadHealingTracker(ctx context.Context, disk StorageAPI) (*healingTracker,
}
h.disk = disk
h.ID = diskID
h.mu = &sync.RWMutex{}
return &h, nil
}
// newHealingTracker will create a new healing tracker for the disk.
func newHealingTracker(disk StorageAPI) *healingTracker {
diskID, _ := disk.GetDiskID()
h := healingTracker{
disk: disk,
ID: diskID,
Path: disk.String(),
Endpoint: disk.Endpoint().String(),
Started: time.Now().UTC(),
func newHealingTracker() *healingTracker {
return &healingTracker{
mu: &sync.RWMutex{},
}
}
func initHealingTracker(disk StorageAPI, healID string) *healingTracker {
h := newHealingTracker()
diskID, _ := disk.GetDiskID()
h.disk = disk
h.ID = diskID
h.HealID = healID
h.Path = disk.String()
h.Endpoint = disk.Endpoint().String()
h.Started = time.Now().UTC()
h.PoolIndex, h.SetIndex, h.DiskIndex = disk.GetDiskLoc()
return &h
return h
}
func (h healingTracker) getLastUpdate() time.Time {
h.mu.RLock()
defer h.mu.RUnlock()
return h.LastUpdate
}
func (h healingTracker) getBucket() string {
h.mu.RLock()
defer h.mu.RUnlock()
return h.Bucket
}
func (h *healingTracker) setBucket(bucket string) {
h.mu.Lock()
defer h.mu.Unlock()
h.Bucket = bucket
}
func (h healingTracker) getObject() string {
h.mu.RLock()
defer h.mu.RUnlock()
return h.Object
}
func (h *healingTracker) setObject(object string) {
h.mu.Lock()
defer h.mu.Unlock()
h.Object = object
}
func (h *healingTracker) updateProgress(success bool, bytes uint64) {
h.mu.Lock()
defer h.mu.Unlock()
if success {
h.ItemsHealed++
h.BytesDone += bytes
} else {
h.ItemsFailed++
h.BytesFailed += bytes
}
}
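
All healingTracker fields are now guarded by a shared `*sync.RWMutex`. Note the diff's read accessors use value receivers, which still lock correctly because the mutex is stored as a pointer; the condensed sketch below uses pointer receivers throughout, which additionally avoids copying the struct outside the lock:

```go
// The pattern used by healingTracker above: a pointer-typed mutex shared by
// all copies of the struct, read accessors under RLock, mutators under Lock.
package main

import (
	"fmt"
	"sync"
)

type tracker struct {
	mu    *sync.RWMutex
	count uint64
}

func newTracker() *tracker { return &tracker{mu: &sync.RWMutex{}} }

func (t *tracker) getCount() uint64 {
	t.mu.RLock()
	defer t.mu.RUnlock()
	return t.count
}

func (t *tracker) add(n uint64) {
	t.mu.Lock()
	defer t.mu.Unlock()
	t.count += n
}

func main() {
	t := newTracker()
	var wg sync.WaitGroup
	for i := 0; i < 8; i++ {
		wg.Add(1)
		go func() { defer wg.Done(); t.add(1) }()
	}
	wg.Wait()
	fmt.Println(t.getCount()) // 8
}
```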
// update will update the tracker on the disk.
@@ -131,15 +194,18 @@ func (h *healingTracker) update(ctx context.Context) error {
if h.disk.Healing() == nil {
return fmt.Errorf("healingTracker: drive %q is not marked as healing", h.ID)
}
h.mu.Lock()
if h.ID == "" || h.PoolIndex < 0 || h.SetIndex < 0 || h.DiskIndex < 0 {
h.ID, _ = h.disk.GetDiskID()
h.PoolIndex, h.SetIndex, h.DiskIndex = h.disk.GetDiskLoc()
}
h.mu.Unlock()
return h.save(ctx)
}
// save will unconditionally save the tracker and will be created if not existing.
func (h *healingTracker) save(ctx context.Context) error {
h.mu.Lock()
if h.PoolIndex < 0 || h.SetIndex < 0 || h.DiskIndex < 0 {
// Attempt to get location.
if api := newObjectLayerFn(); api != nil {
@@ -150,6 +216,7 @@ func (h *healingTracker) save(ctx context.Context) error {
}
h.LastUpdate = time.Now().UTC()
htrackerBytes, err := h.MarshalMsg(nil)
h.mu.Unlock()
if err != nil {
return err
}
@@ -171,6 +238,8 @@ func (h *healingTracker) delete(ctx context.Context) error {
}
func (h *healingTracker) isHealed(bucket string) bool {
h.mu.RLock()
defer h.mu.RUnlock()
for _, v := range h.HealedBuckets {
if v == bucket {
return true
@@ -181,6 +250,9 @@ func (h *healingTracker) isHealed(bucket string) bool {
// resume will reset progress to the numbers at the start of the bucket.
func (h *healingTracker) resume() {
h.mu.Lock()
defer h.mu.Unlock()
h.ItemsHealed = h.ResumeItemsHealed
h.ItemsFailed = h.ResumeItemsFailed
h.BytesDone = h.ResumeBytesDone
@@ -190,6 +262,9 @@ func (h *healingTracker) resume() {
// bucketDone should be called when a bucket is done healing.
// Adds the bucket to the list of healed buckets and updates resume numbers.
func (h *healingTracker) bucketDone(bucket string) {
h.mu.Lock()
defer h.mu.Unlock()
h.ResumeItemsHealed = h.ItemsHealed
h.ResumeItemsFailed = h.ItemsFailed
h.ResumeBytesDone = h.BytesDone
@@ -206,6 +281,9 @@ func (h *healingTracker) bucketDone(bucket string) {
// setQueuedBuckets will add buckets, but exclude any that is already in h.HealedBuckets.
// Order is preserved.
func (h *healingTracker) setQueuedBuckets(buckets []BucketInfo) {
h.mu.Lock()
defer h.mu.Unlock()
s := set.CreateStringSet(h.HealedBuckets...)
h.QueuedBuckets = make([]string, 0, len(buckets))
for _, b := range buckets {
@@ -216,17 +294,25 @@ func (h *healingTracker) setQueuedBuckets(buckets []BucketInfo) {
}
func (h *healingTracker) printTo(writer io.Writer) {
h.mu.RLock()
defer h.mu.RUnlock()
b, err := json.MarshalIndent(h, "", " ")
if err != nil {
writer.Write([]byte(err.Error()))
return
}
writer.Write(b)
}
// toHealingDisk converts the information to madmin.HealingDisk
func (h *healingTracker) toHealingDisk() madmin.HealingDisk {
h.mu.RLock()
defer h.mu.RUnlock()
return madmin.HealingDisk{
ID: h.ID,
HealID: h.HealID,
Endpoint: h.Endpoint,
PoolIndex: h.PoolIndex,
SetIndex: h.SetIndex,
@@ -261,10 +347,15 @@ func initAutoHeal(ctx context.Context, objAPI ObjectLayer) {
globalBackgroundHealState.pushHealLocalDisks(getLocalDisksToHeal()...)
go monitorLocalDisksAndHeal(ctx, z)
if env.Get("_MINIO_AUTO_DISK_HEALING", config.EnableOn) == config.EnableOn {
go monitorLocalDisksAndHeal(ctx, z)
}
}
func getLocalDisksToHeal() (disksToHeal Endpoints) {
globalLocalDrivesMu.RLock()
globalLocalDrives := globalLocalDrives
globalLocalDrivesMu.RUnlock()
for _, disk := range globalLocalDrives {
_, err := disk.GetDiskID()
if errors.Is(err, errUnformattedDisk) {
@@ -286,16 +377,14 @@ func getLocalDisksToHeal() (disksToHeal Endpoints) {
var newDiskHealingTimeout = newDynamicTimeout(30*time.Second, 10*time.Second)
func healFreshDisk(ctx context.Context, z *erasureServerPools, endpoint Endpoint) error {
logger.Info(fmt.Sprintf("Proceeding to heal '%s' - 'mc admin heal alias/ --verbose' to check the status.", endpoint))
disk, format, err := connectEndpoint(endpoint)
if err != nil {
return fmt.Errorf("Error: %w, %s", err, endpoint)
}
defer disk.Close()
poolIdx := globalEndpoints.GetLocalPoolIdx(disk.Endpoint())
if poolIdx < 0 {
return fmt.Errorf("unexpected pool index (%d) found in %s", poolIdx, disk.Endpoint())
return fmt.Errorf("unexpected pool index (%d) found for %s", poolIdx, disk.Endpoint())
}
// Calculate the set index where the current endpoint belongs
@@ -306,20 +395,36 @@ func healFreshDisk(ctx context.Context, z *erasureServerPools, endpoint Endpoint
return err
}
if setIdx < 0 {
return fmt.Errorf("unexpected set index (%d) found in %s", setIdx, disk.Endpoint())
return fmt.Errorf("unexpected set index (%d) found for %s", setIdx, disk.Endpoint())
}
// Prevent parallel erasure set healing
locker := z.NewNSLock(minioMetaBucket, fmt.Sprintf("new-drive-healing/%d/%d", poolIdx, setIdx))
lkctx, err := locker.GetLock(ctx, newDiskHealingTimeout)
if err != nil {
return err
return fmt.Errorf("Healing of drive '%v' on %s pool, belonging to %s erasure set already in progress: %w",
disk, humanize.Ordinal(poolIdx+1), humanize.Ordinal(setIdx+1), err)
}
ctx = lkctx.Context()
defer locker.Unlock(lkctx.Cancel)
defer locker.Unlock(lkctx)
// Load healing tracker in this disk
tracker, err := loadHealingTracker(ctx, disk)
if err != nil {
// A healing tracker may be deleted if another disk in the
// same erasure set with the same healing-id successfully
// finished healing.
if errors.Is(err, errFileNotFound) {
return nil
}
logger.LogIf(ctx, fmt.Errorf("Unable to load healing tracker on '%s': %w, re-initializing..", disk, err))
tracker = initHealingTracker(disk, mustGetUUID())
}
logger.Info(fmt.Sprintf("Healing drive '%s' - 'mc admin heal alias/ --verbose' to check the current status.", endpoint))
buckets, _ := z.ListBuckets(ctx, BucketOptions{})
// Buckets data are dispersed in multiple zones/sets, make
// Buckets data are dispersed in multiple pools/sets, make
// sure to heal all bucket metadata configuration.
buckets = append(buckets, BucketInfo{
Name: pathJoin(minioMetaBucket, minioConfigPrefix),
@@ -337,16 +442,7 @@ func healFreshDisk(ctx context.Context, z *erasureServerPools, endpoint Endpoint
})
if serverDebugLog {
logger.Info("Healing drive '%v' on %s pool", disk, humanize.Ordinal(poolIdx+1))
}
// Load healing tracker in this disk
tracker, err := loadHealingTracker(ctx, disk)
if err != nil {
// So someone changed the drives underneath, healing tracker missing.
logger.LogIf(ctx, fmt.Errorf("Healing tracker missing on '%s', drive was swapped again on %s pool: %w",
disk, humanize.Ordinal(poolIdx+1), err))
tracker = newHealingTracker(disk)
logger.Info("Healing drive '%v' on %s pool, belonging to %s erasure set", disk, humanize.Ordinal(poolIdx+1), humanize.Ordinal(setIdx+1))
}
// Load bucket totals
@@ -369,9 +465,13 @@ func healFreshDisk(ctx context.Context, z *erasureServerPools, endpoint Endpoint
}
if tracker.ItemsFailed > 0 {
logger.Info("Healing drive '%s' failed (healed: %d, failed: %d).", disk, tracker.ItemsHealed, tracker.ItemsFailed)
logger.Info("Healing of drive '%s' failed (healed: %d, failed: %d).", disk, tracker.ItemsHealed, tracker.ItemsFailed)
} else {
logger.Info("Healing drive '%s' complete (healed: %d, failed: %d).", disk, tracker.ItemsHealed, tracker.ItemsFailed)
logger.Info("Healing of drive '%s' complete (healed: %d, failed: %d).", disk, tracker.ItemsHealed, tracker.ItemsFailed)
}
if len(tracker.QueuedBuckets) > 0 {
return fmt.Errorf("not all buckets were healed: %v", tracker.QueuedBuckets)
}
if serverDebugLog {
@@ -379,7 +479,24 @@ func healFreshDisk(ctx context.Context, z *erasureServerPools, endpoint Endpoint
logger.Info("\n")
}
logger.LogIf(ctx, tracker.delete(ctx))
if tracker.HealID == "" { // HealID was empty only before Feb 2023
logger.LogIf(ctx, tracker.delete(ctx))
return nil
}
// Remove .healing.bin from all disks with the same heal-id
for _, disk := range z.serverPools[poolIdx].sets[setIdx].getDisks() {
t, err := loadHealingTracker(ctx, disk)
if err != nil {
if !errors.Is(err, errFileNotFound) {
logger.LogIf(ctx, err)
}
continue
}
if t.HealID == tracker.HealID {
t.delete(ctx)
}
}
return nil
}
@@ -415,10 +532,13 @@ func monitorLocalDisksAndHeal(ctx context.Context, z *erasureServerPools) {
for _, disk := range healDisks {
go func(disk Endpoint) {
globalBackgroundHealState.markDiskForHealing(disk)
err := healFreshDisk(ctx, z, disk)
if err != nil {
printEndpointError(disk, err, false)
globalBackgroundHealState.setDiskHealingStatus(disk, true)
if err := healFreshDisk(ctx, z, disk); err != nil {
globalBackgroundHealState.setDiskHealingStatus(disk, false)
timedout := OperationTimedOut{}
if !errors.Is(err, context.Canceled) && !errors.As(err, &timedout) {
printEndpointError(disk, err, false)
}
return
}
// Only upon success pop the healed disk.


@@ -182,6 +182,12 @@ func (z *healingTracker) DecodeMsg(dc *msgp.Reader) (err error) {
return
}
}
case "HealID":
z.HealID, err = dc.ReadString()
if err != nil {
err = msgp.WrapError(err, "HealID")
return
}
default:
err = dc.Skip()
if err != nil {
@@ -195,9 +201,9 @@ func (z *healingTracker) DecodeMsg(dc *msgp.Reader) (err error) {
// EncodeMsg implements msgp.Encodable
func (z *healingTracker) EncodeMsg(en *msgp.Writer) (err error) {
// map header, size 22
// map header, size 23
// write "ID"
err = en.Append(0xde, 0x0, 0x16, 0xa2, 0x49, 0x44)
err = en.Append(0xde, 0x0, 0x17, 0xa2, 0x49, 0x44)
if err != nil {
return
}
@@ -430,15 +436,25 @@ func (z *healingTracker) EncodeMsg(en *msgp.Writer) (err error) {
return
}
}
// write "HealID"
err = en.Append(0xa6, 0x48, 0x65, 0x61, 0x6c, 0x49, 0x44)
if err != nil {
return
}
err = en.WriteString(z.HealID)
if err != nil {
err = msgp.WrapError(err, "HealID")
return
}
return
}
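
A note on the generated byte literals above (a minimal standalone sketch, not part of the generated code): in MessagePack, 0xde marks a map16 header followed by a big-endian uint16 field count, and 0xa6 is a fixstr marker whose low five bits carry the string length. Adding the HealID field is therefore exactly a count bump from 0x16 (22) to 0x17 (23) plus the seven key bytes 0xa6 'H' 'e' 'a' 'l' 'I' 'D':

package main

import (
	"encoding/binary"
	"fmt"
)

func main() {
	header := []byte{0xde, 0x0, 0x17}
	fmt.Println(binary.BigEndian.Uint16(header[1:])) // 23 fields

	key := []byte{0xa6, 'H', 'e', 'a', 'l', 'I', 'D'}
	n := int(key[0] & 0x1f)           // fixstr length lives in the marker's low bits
	fmt.Println(string(key[1 : 1+n])) // HealID
}

The same arithmetic explains the Msgsize delta further down: the added 7 covers the one-byte fixstr marker plus the six bytes of the key "HealID".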
// MarshalMsg implements msgp.Marshaler
func (z *healingTracker) MarshalMsg(b []byte) (o []byte, err error) {
o = msgp.Require(b, z.Msgsize())
// map header, size 22
// map header, size 23
// string "ID"
o = append(o, 0xde, 0x0, 0x16, 0xa2, 0x49, 0x44)
o = append(o, 0xde, 0x0, 0x17, 0xa2, 0x49, 0x44)
o = msgp.AppendString(o, z.ID)
// string "PoolIndex"
o = append(o, 0xa9, 0x50, 0x6f, 0x6f, 0x6c, 0x49, 0x6e, 0x64, 0x65, 0x78)
@@ -509,6 +525,9 @@ func (z *healingTracker) MarshalMsg(b []byte) (o []byte, err error) {
for za0002 := range z.HealedBuckets {
o = msgp.AppendString(o, z.HealedBuckets[za0002])
}
// string "HealID"
o = append(o, 0xa6, 0x48, 0x65, 0x61, 0x6c, 0x49, 0x44)
o = msgp.AppendString(o, z.HealID)
return
}
@@ -688,6 +707,12 @@ func (z *healingTracker) UnmarshalMsg(bts []byte) (o []byte, err error) {
return
}
}
case "HealID":
z.HealID, bts, err = msgp.ReadStringBytes(bts)
if err != nil {
err = msgp.WrapError(err, "HealID")
return
}
default:
bts, err = msgp.Skip(bts)
if err != nil {
@@ -710,5 +735,6 @@ func (z *healingTracker) Msgsize() (s int) {
for za0002 := range z.HealedBuckets {
s += msgp.StringPrefixSize + len(z.HealedBuckets[za0002])
}
s += 7 + msgp.StringPrefixSize + len(z.HealID)
return
}

File diff suppressed because it is too large


@@ -702,6 +702,12 @@ func (z *BatchJobReplicateSource) DecodeMsg(dc *msgp.Reader) (err error) {
err = msgp.WrapError(err, "Endpoint")
return
}
case "Path":
z.Path, err = dc.ReadString()
if err != nil {
err = msgp.WrapError(err, "Path")
return
}
case "Creds":
var zb0003 uint32
zb0003, err = dc.ReadMapHeader()
@@ -756,9 +762,9 @@ func (z *BatchJobReplicateSource) DecodeMsg(dc *msgp.Reader) (err error) {
// EncodeMsg implements msgp.Encodable
func (z *BatchJobReplicateSource) EncodeMsg(en *msgp.Writer) (err error) {
// map header, size 5
// map header, size 6
// write "Type"
err = en.Append(0x85, 0xa4, 0x54, 0x79, 0x70, 0x65)
err = en.Append(0x86, 0xa4, 0x54, 0x79, 0x70, 0x65)
if err != nil {
return
}
@@ -797,6 +803,16 @@ func (z *BatchJobReplicateSource) EncodeMsg(en *msgp.Writer) (err error) {
err = msgp.WrapError(err, "Endpoint")
return
}
// write "Path"
err = en.Append(0xa4, 0x50, 0x61, 0x74, 0x68)
if err != nil {
return
}
err = en.WriteString(z.Path)
if err != nil {
err = msgp.WrapError(err, "Path")
return
}
// write "Creds"
err = en.Append(0xa5, 0x43, 0x72, 0x65, 0x64, 0x73)
if err != nil {
@@ -839,9 +855,9 @@ func (z *BatchJobReplicateSource) EncodeMsg(en *msgp.Writer) (err error) {
// MarshalMsg implements msgp.Marshaler
func (z *BatchJobReplicateSource) MarshalMsg(b []byte) (o []byte, err error) {
o = msgp.Require(b, z.Msgsize())
// map header, size 5
// map header, size 6
// string "Type"
o = append(o, 0x85, 0xa4, 0x54, 0x79, 0x70, 0x65)
o = append(o, 0x86, 0xa4, 0x54, 0x79, 0x70, 0x65)
o = msgp.AppendString(o, string(z.Type))
// string "Bucket"
o = append(o, 0xa6, 0x42, 0x75, 0x63, 0x6b, 0x65, 0x74)
@@ -852,6 +868,9 @@ func (z *BatchJobReplicateSource) MarshalMsg(b []byte) (o []byte, err error) {
// string "Endpoint"
o = append(o, 0xa8, 0x45, 0x6e, 0x64, 0x70, 0x6f, 0x69, 0x6e, 0x74)
o = msgp.AppendString(o, z.Endpoint)
// string "Path"
o = append(o, 0xa4, 0x50, 0x61, 0x74, 0x68)
o = msgp.AppendString(o, z.Path)
// string "Creds"
o = append(o, 0xa5, 0x43, 0x72, 0x65, 0x64, 0x73)
// map header, size 3
@@ -913,6 +932,12 @@ func (z *BatchJobReplicateSource) UnmarshalMsg(bts []byte) (o []byte, err error)
err = msgp.WrapError(err, "Endpoint")
return
}
case "Path":
z.Path, bts, err = msgp.ReadStringBytes(bts)
if err != nil {
err = msgp.WrapError(err, "Path")
return
}
case "Creds":
var zb0003 uint32
zb0003, bts, err = msgp.ReadMapHeaderBytes(bts)
@@ -968,7 +993,7 @@ func (z *BatchJobReplicateSource) UnmarshalMsg(bts []byte) (o []byte, err error)
// Msgsize returns an upper bound estimate of the number of bytes occupied by the serialized message
func (z *BatchJobReplicateSource) Msgsize() (s int) {
s = 1 + 5 + msgp.StringPrefixSize + len(string(z.Type)) + 7 + msgp.StringPrefixSize + len(z.Bucket) + 7 + msgp.StringPrefixSize + len(z.Prefix) + 9 + msgp.StringPrefixSize + len(z.Endpoint) + 6 + 1 + 10 + msgp.StringPrefixSize + len(z.Creds.AccessKey) + 10 + msgp.StringPrefixSize + len(z.Creds.SecretKey) + 13 + msgp.StringPrefixSize + len(z.Creds.SessionToken)
s = 1 + 5 + msgp.StringPrefixSize + len(string(z.Type)) + 7 + msgp.StringPrefixSize + len(z.Bucket) + 7 + msgp.StringPrefixSize + len(z.Prefix) + 9 + msgp.StringPrefixSize + len(z.Endpoint) + 5 + msgp.StringPrefixSize + len(z.Path) + 6 + 1 + 10 + msgp.StringPrefixSize + len(z.Creds.AccessKey) + 10 + msgp.StringPrefixSize + len(z.Creds.SecretKey) + 13 + msgp.StringPrefixSize + len(z.Creds.SessionToken)
return
}
@@ -1018,6 +1043,12 @@ func (z *BatchJobReplicateTarget) DecodeMsg(dc *msgp.Reader) (err error) {
err = msgp.WrapError(err, "Endpoint")
return
}
case "Path":
z.Path, err = dc.ReadString()
if err != nil {
err = msgp.WrapError(err, "Path")
return
}
case "Creds":
var zb0003 uint32
zb0003, err = dc.ReadMapHeader()
@@ -1072,9 +1103,9 @@ func (z *BatchJobReplicateTarget) DecodeMsg(dc *msgp.Reader) (err error) {
// EncodeMsg implements msgp.Encodable
func (z *BatchJobReplicateTarget) EncodeMsg(en *msgp.Writer) (err error) {
// map header, size 5
// map header, size 6
// write "Type"
err = en.Append(0x85, 0xa4, 0x54, 0x79, 0x70, 0x65)
err = en.Append(0x86, 0xa4, 0x54, 0x79, 0x70, 0x65)
if err != nil {
return
}
@@ -1113,6 +1144,16 @@ func (z *BatchJobReplicateTarget) EncodeMsg(en *msgp.Writer) (err error) {
err = msgp.WrapError(err, "Endpoint")
return
}
// write "Path"
err = en.Append(0xa4, 0x50, 0x61, 0x74, 0x68)
if err != nil {
return
}
err = en.WriteString(z.Path)
if err != nil {
err = msgp.WrapError(err, "Path")
return
}
// write "Creds"
err = en.Append(0xa5, 0x43, 0x72, 0x65, 0x64, 0x73)
if err != nil {
@@ -1155,9 +1196,9 @@ func (z *BatchJobReplicateTarget) EncodeMsg(en *msgp.Writer) (err error) {
// MarshalMsg implements msgp.Marshaler
func (z *BatchJobReplicateTarget) MarshalMsg(b []byte) (o []byte, err error) {
o = msgp.Require(b, z.Msgsize())
// map header, size 5
// map header, size 6
// string "Type"
o = append(o, 0x85, 0xa4, 0x54, 0x79, 0x70, 0x65)
o = append(o, 0x86, 0xa4, 0x54, 0x79, 0x70, 0x65)
o = msgp.AppendString(o, string(z.Type))
// string "Bucket"
o = append(o, 0xa6, 0x42, 0x75, 0x63, 0x6b, 0x65, 0x74)
@@ -1168,6 +1209,9 @@ func (z *BatchJobReplicateTarget) MarshalMsg(b []byte) (o []byte, err error) {
// string "Endpoint"
o = append(o, 0xa8, 0x45, 0x6e, 0x64, 0x70, 0x6f, 0x69, 0x6e, 0x74)
o = msgp.AppendString(o, z.Endpoint)
// string "Path"
o = append(o, 0xa4, 0x50, 0x61, 0x74, 0x68)
o = msgp.AppendString(o, z.Path)
// string "Creds"
o = append(o, 0xa5, 0x43, 0x72, 0x65, 0x64, 0x73)
// map header, size 3
@@ -1229,6 +1273,12 @@ func (z *BatchJobReplicateTarget) UnmarshalMsg(bts []byte) (o []byte, err error)
err = msgp.WrapError(err, "Endpoint")
return
}
case "Path":
z.Path, bts, err = msgp.ReadStringBytes(bts)
if err != nil {
err = msgp.WrapError(err, "Path")
return
}
case "Creds":
var zb0003 uint32
zb0003, bts, err = msgp.ReadMapHeaderBytes(bts)
@@ -1284,7 +1334,7 @@ func (z *BatchJobReplicateTarget) UnmarshalMsg(bts []byte) (o []byte, err error)
// Msgsize returns an upper bound estimate of the number of bytes occupied by the serialized message
func (z *BatchJobReplicateTarget) Msgsize() (s int) {
s = 1 + 5 + msgp.StringPrefixSize + len(string(z.Type)) + 7 + msgp.StringPrefixSize + len(z.Bucket) + 7 + msgp.StringPrefixSize + len(z.Prefix) + 9 + msgp.StringPrefixSize + len(z.Endpoint) + 6 + 1 + 10 + msgp.StringPrefixSize + len(z.Creds.AccessKey) + 10 + msgp.StringPrefixSize + len(z.Creds.SecretKey) + 13 + msgp.StringPrefixSize + len(z.Creds.SessionToken)
s = 1 + 5 + msgp.StringPrefixSize + len(string(z.Type)) + 7 + msgp.StringPrefixSize + len(z.Bucket) + 7 + msgp.StringPrefixSize + len(z.Prefix) + 9 + msgp.StringPrefixSize + len(z.Endpoint) + 5 + msgp.StringPrefixSize + len(z.Path) + 6 + 1 + 10 + msgp.StringPrefixSize + len(z.Creds.AccessKey) + 10 + msgp.StringPrefixSize + len(z.Creds.SecretKey) + 13 + msgp.StringPrefixSize + len(z.Creds.SessionToken)
return
}
@@ -1538,6 +1588,24 @@ func (z *BatchJobRequest) DecodeMsg(dc *msgp.Reader) (err error) {
return
}
}
case "KeyRotate":
if dc.IsNil() {
err = dc.ReadNil()
if err != nil {
err = msgp.WrapError(err, "KeyRotate")
return
}
z.KeyRotate = nil
} else {
if z.KeyRotate == nil {
z.KeyRotate = new(BatchJobKeyRotateV1)
}
err = z.KeyRotate.DecodeMsg(dc)
if err != nil {
err = msgp.WrapError(err, "KeyRotate")
return
}
}
default:
err = dc.Skip()
if err != nil {
@@ -1551,9 +1619,9 @@ func (z *BatchJobRequest) DecodeMsg(dc *msgp.Reader) (err error) {
// EncodeMsg implements msgp.Encodable
func (z *BatchJobRequest) EncodeMsg(en *msgp.Writer) (err error) {
// map header, size 5
// map header, size 6
// write "ID"
err = en.Append(0x85, 0xa2, 0x49, 0x44)
err = en.Append(0x86, 0xa2, 0x49, 0x44)
if err != nil {
return
}
@@ -1609,15 +1677,32 @@ func (z *BatchJobRequest) EncodeMsg(en *msgp.Writer) (err error) {
return
}
}
// write "KeyRotate"
err = en.Append(0xa9, 0x4b, 0x65, 0x79, 0x52, 0x6f, 0x74, 0x61, 0x74, 0x65)
if err != nil {
return
}
if z.KeyRotate == nil {
err = en.WriteNil()
if err != nil {
return
}
} else {
err = z.KeyRotate.EncodeMsg(en)
if err != nil {
err = msgp.WrapError(err, "KeyRotate")
return
}
}
return
}
// MarshalMsg implements msgp.Marshaler
func (z *BatchJobRequest) MarshalMsg(b []byte) (o []byte, err error) {
o = msgp.Require(b, z.Msgsize())
// map header, size 5
// map header, size 6
// string "ID"
o = append(o, 0x85, 0xa2, 0x49, 0x44)
o = append(o, 0x86, 0xa2, 0x49, 0x44)
o = msgp.AppendString(o, z.ID)
// string "User"
o = append(o, 0xa4, 0x55, 0x73, 0x65, 0x72)
@@ -1639,6 +1724,17 @@ func (z *BatchJobRequest) MarshalMsg(b []byte) (o []byte, err error) {
return
}
}
// string "KeyRotate"
o = append(o, 0xa9, 0x4b, 0x65, 0x79, 0x52, 0x6f, 0x74, 0x61, 0x74, 0x65)
if z.KeyRotate == nil {
o = msgp.AppendNil(o)
} else {
o, err = z.KeyRotate.MarshalMsg(o)
if err != nil {
err = msgp.WrapError(err, "KeyRotate")
return
}
}
return
}
@@ -1701,6 +1797,23 @@ func (z *BatchJobRequest) UnmarshalMsg(bts []byte) (o []byte, err error) {
return
}
}
case "KeyRotate":
if msgp.IsNil(bts) {
bts, err = msgp.ReadNilBytes(bts)
if err != nil {
return
}
z.KeyRotate = nil
} else {
if z.KeyRotate == nil {
z.KeyRotate = new(BatchJobKeyRotateV1)
}
bts, err = z.KeyRotate.UnmarshalMsg(bts)
if err != nil {
err = msgp.WrapError(err, "KeyRotate")
return
}
}
default:
bts, err = msgp.Skip(bts)
if err != nil {
@@ -1721,6 +1834,12 @@ func (z *BatchJobRequest) Msgsize() (s int) {
} else {
s += z.Replicate.Msgsize()
}
s += 10
if z.KeyRotate == nil {
s += msgp.NilSize
} else {
s += z.KeyRotate.Msgsize()
}
return
}

cmd/batch-rotate.go (new file, 564 lines)

@@ -0,0 +1,564 @@
// Copyright (c) 2015-2023 MinIO, Inc.
//
// This file is part of MinIO Object Storage stack
//
// This program is free software: you can redistribute it and/or modify
// it under the terms of the GNU Affero General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
//
// This program is distributed in the hope that it will be useful
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU Affero General Public License for more details.
//
// You should have received a copy of the GNU Affero General Public License
// along with this program. If not, see <http://www.gnu.org/licenses/>.
package cmd
import (
"bytes"
"context"
"encoding/base64"
"encoding/json"
"errors"
"fmt"
"io"
"math/rand"
"net/http"
"runtime"
"strconv"
"strings"
"time"
jsoniter "github.com/json-iterator/go"
"github.com/minio/minio-go/v7/pkg/tags"
"github.com/minio/minio/internal/crypto"
xhttp "github.com/minio/minio/internal/http"
"github.com/minio/minio/internal/kms"
"github.com/minio/minio/internal/logger"
"github.com/minio/pkg/env"
"github.com/minio/pkg/wildcard"
"github.com/minio/pkg/workers"
)
// keyrotate:
// apiVersion: v1
// bucket: BUCKET
// prefix: PREFIX
// encryption:
// type: sse-s3 # valid values are sse-s3 and sse-kms
// key: <new-kms-key> # valid only for sse-kms
// context: <new-kms-key-context> # valid only for sse-kms
// # optional flags based filtering criteria
// # for all objects
// flags:
// filter:
// newerThan: "7d" # match objects newer than this value (e.g. 7d10h31s)
// olderThan: "7d" # match objects older than this value (e.g. 7d10h31s)
// createdAfter: "date" # match objects created after "date"
// createdBefore: "date" # match objects created before "date"
// tags:
// - key: "name"
// value: "pick*" # match objects with tag 'name', with all values starting with 'pick'
// metadata:
// - key: "content-type"
// value: "image/*" # match objects with 'content-type', with all values starting with 'image/'
// kmskey: "key-id" # match objects with KMS key-id (applicable only for sse-kms)
// notify:
// endpoint: "https://notify.endpoint" # notification endpoint to receive job status events
// token: "Bearer xxxxx" # optional authentication token for the notification endpoint
// retry:
// attempts: 10 # number of retries for the job before giving up
// delay: "500ms" # least amount of delay between each retry
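// Example usage (illustrative alias and file names, assuming a matching mc
// release with batch keyrotate support):
//
//	mc batch generate myminio/ keyrotate > keyrotate.yaml
//	mc batch start myminio/ ./keyrotate.yaml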
//go:generate msgp -file $GOFILE -unexported
// BatchKeyRotateKV is a datatype that holds a key and value used for filtering objects,
// by the metadata filter as well as tags-based filtering.
type BatchKeyRotateKV struct {
Key string `yaml:"key" json:"key"`
Value string `yaml:"value" json:"value"`
}
// Validate returns an error if key is empty
func (kv BatchKeyRotateKV) Validate() error {
if kv.Key == "" {
return errInvalidArgument
}
return nil
}
// Empty indicates if kv is not set
func (kv BatchKeyRotateKV) Empty() bool {
return kv.Key == "" && kv.Value == ""
}
// Match matches the input ikv against kv; the value is wildcard-matched depending on the user input.
func (kv BatchKeyRotateKV) Match(ikv BatchKeyRotateKV) bool {
if kv.Empty() {
return true
}
if strings.EqualFold(kv.Key, ikv.Key) {
return wildcard.Match(kv.Value, ikv.Value)
}
return false
}
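
To make the matching semantics concrete, a hedged Example-style sketch (the values are invented):

func ExampleBatchKeyRotateKV_Match() {
	kv := BatchKeyRotateKV{Key: "name", Value: "pick*"}
	fmt.Println(kv.Match(BatchKeyRotateKV{Key: "Name", Value: "pickle"})) // keys compare case-insensitively
	fmt.Println(kv.Match(BatchKeyRotateKV{Key: "name", Value: "stack"}))  // value fails the wildcard match
	fmt.Println(BatchKeyRotateKV{}.Match(BatchKeyRotateKV{Key: "x", Value: "y"})) // an empty filter matches everything
	// Output:
	// true
	// false
	// true
}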
// BatchKeyRotateRetry datatype represents total retry attempts and the delay between retries.
type BatchKeyRotateRetry struct {
Attempts int `yaml:"attempts" json:"attempts"` // number of retry attempts
Delay time.Duration `yaml:"delay" json:"delay"` // delay between retries
}
// Validate validates the key rotation job's retry settings.
func (r BatchKeyRotateRetry) Validate() error {
if r.Attempts < 0 {
return errInvalidArgument
}
if r.Delay < 0 {
return errInvalidArgument
}
return nil
}
// BatchKeyRotationType defines key rotation type
type BatchKeyRotationType string
const (
sses3 BatchKeyRotationType = "sse-s3"
ssekms BatchKeyRotationType = "sse-kms"
)
// BatchJobKeyRotateEncryption defines key rotation encryption options passed
type BatchJobKeyRotateEncryption struct {
Type BatchKeyRotationType `yaml:"type" json:"type"`
Key string `yaml:"key" json:"key"`
Context string `yaml:"context" json:"context"`
kmsContext kms.Context `msg:"-"`
}
// Validate validates input key rotation encryption options.
func (e BatchJobKeyRotateEncryption) Validate() error {
if e.Type != sses3 && e.Type != ssekms {
return errInvalidArgument
}
spaces := strings.HasPrefix(e.Key, " ") || strings.HasSuffix(e.Key, " ")
if e.Type == ssekms && spaces {
return crypto.ErrInvalidEncryptionKeyID
}
if e.Type == ssekms && GlobalKMS != nil {
ctx := kms.Context{}
if e.Context != "" {
b, err := base64.StdEncoding.DecodeString(e.Context)
if err != nil {
return err
}
json := jsoniter.ConfigCompatibleWithStandardLibrary
if err := json.Unmarshal(b, &ctx); err != nil {
return err
}
}
e.kmsContext = kms.Context{}
for k, v := range ctx {
e.kmsContext[k] = v
}
ctx["MinIO batch API"] = "batchrotate" // Context for a test key operation
if _, err := GlobalKMS.GenerateKey(GlobalContext, e.Key, ctx); err != nil {
return err
}
}
return nil
}
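
Validate above expects Context to be a base64-encoded JSON object; a minimal sketch of producing such a value for the job's `context:` field (the key/value pair is invented):

package main

import (
	"encoding/base64"
	"encoding/json"
	"fmt"
)

func main() {
	raw, err := json.Marshal(map[string]string{"owner": "alice"}) // hypothetical KMS context
	if err != nil {
		panic(err)
	}
	fmt.Println(base64.StdEncoding.EncodeToString(raw)) // paste into `context:` in the job YAML
}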
// BatchKeyRotateFilter holds all the filters currently supported for batch key rotation
type BatchKeyRotateFilter struct {
NewerThan time.Duration `yaml:"newerThan,omitempty" json:"newerThan"`
OlderThan time.Duration `yaml:"olderThan,omitempty" json:"olderThan"`
CreatedAfter time.Time `yaml:"createdAfter,omitempty" json:"createdAfter"`
CreatedBefore time.Time `yaml:"createdBefore,omitempty" json:"createdBefore"`
Tags []BatchKeyRotateKV `yaml:"tags,omitempty" json:"tags"`
Metadata []BatchKeyRotateKV `yaml:"metadata,omitempty" json:"metadata"`
KMSKeyID string `yaml:"kmskeyid" json:"kmskey"`
}
// BatchKeyRotateNotification holds the success or failure notification endpoint for each job attempt
type BatchKeyRotateNotification struct {
Endpoint string `yaml:"endpoint" json:"endpoint"`
Token string `yaml:"token" json:"token"`
}
// BatchJobKeyRotateFlags holds the configuration of a key rotation job definition; currently includes
// - filter
// - notify
// - retry
type BatchJobKeyRotateFlags struct {
Filter BatchKeyRotateFilter `yaml:"filter" json:"filter"`
Notify BatchKeyRotateNotification `yaml:"notify" json:"notify"`
Retry BatchKeyRotateRetry `yaml:"retry" json:"retry"`
}
// BatchJobKeyRotateV1 v1 of batch key rotation job
type BatchJobKeyRotateV1 struct {
APIVersion string `yaml:"apiVersion" json:"apiVersion"`
Flags BatchJobKeyRotateFlags `yaml:"flags" json:"flags"`
Bucket string `yaml:"bucket" json:"bucket"`
Prefix string `yaml:"prefix" json:"prefix"`
Endpoint string `yaml:"endpoint" json:"endpoint"`
Encryption BatchJobKeyRotateEncryption `yaml:"encryption" json:"encryption"`
}
// Notify notifies the configured notification endpoint, if any, about job failure or success.
func (r BatchJobKeyRotateV1) Notify(ctx context.Context, body io.Reader) error {
if r.Flags.Notify.Endpoint == "" {
return nil
}
ctx, cancel := context.WithTimeout(ctx, 10*time.Second)
defer cancel()
req, err := http.NewRequestWithContext(ctx, http.MethodPost, r.Flags.Notify.Endpoint, body)
if err != nil {
return err
}
if r.Flags.Notify.Token != "" {
req.Header.Set("Authorization", r.Flags.Notify.Token)
}
clnt := http.Client{Transport: getRemoteInstanceTransport}
resp, err := clnt.Do(req)
if err != nil {
return err
}
xhttp.DrainBody(resp.Body)
if resp.StatusCode != http.StatusOK {
return errors.New(resp.Status)
}
return nil
}
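
On the receiving side any HTTP endpoint that answers 200 OK will do, since Notify treats every other status as failure; a minimal hypothetical receiver (the path and port are invented):

package main

import (
	"io"
	"log"
	"net/http"
)

func main() {
	http.HandleFunc("/batch-events", func(w http.ResponseWriter, r *http.Request) {
		body, _ := io.ReadAll(r.Body)
		log.Printf("job report: %s", body) // the JSON-encoded job info posted by Notify
		w.WriteHeader(http.StatusOK)
	})
	log.Fatal(http.ListenAndServe(":9099", nil))
}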
// KeyRotate rotates encryption key of an object
func (r *BatchJobKeyRotateV1) KeyRotate(ctx context.Context, api ObjectLayer, objInfo ObjectInfo) error {
srcBucket := r.Bucket
srcObject := objInfo.Name
if objInfo.DeleteMarker || !objInfo.VersionPurgeStatus.Empty() {
return nil
}
sseKMS := crypto.S3KMS.IsEncrypted(objInfo.UserDefined)
sseS3 := crypto.S3.IsEncrypted(objInfo.UserDefined)
if !sseKMS && !sseS3 { // object is neither sse-s3 nor sse-kms encrypted, which is not allowed
return errInvalidEncryptionParameters
}
if sseKMS && r.Encryption.Type == sses3 { // downgrading from sse-kms to sse-s3 is not allowed
return errInvalidEncryptionParameters
}
versioned := globalBucketVersioningSys.PrefixEnabled(srcBucket, srcObject)
versionSuspended := globalBucketVersioningSys.PrefixSuspended(srcBucket, srcObject)
lock := api.NewNSLock(r.Bucket, objInfo.Name)
lkctx, err := lock.GetLock(ctx, globalOperationTimeout)
if err != nil {
return err
}
ctx = lkctx.Context()
defer lock.Unlock(lkctx)
opts := ObjectOptions{
VersionID: objInfo.VersionID,
Versioned: versioned,
VersionSuspended: versionSuspended,
NoLock: true,
}
obj, err := api.GetObjectInfo(ctx, r.Bucket, objInfo.Name, opts)
if err != nil {
return err
}
oi := obj.Clone()
var (
newKeyID string
newKeyContext kms.Context
)
encMetadata := make(map[string]string)
for k, v := range oi.UserDefined {
if stringsHasPrefixFold(k, ReservedMetadataPrefixLower) {
encMetadata[k] = v
}
}
if (sseKMS || sseS3) && r.Encryption.Type == ssekms {
if err = r.Encryption.Validate(); err != nil {
return err
}
newKeyID = strings.TrimPrefix(r.Encryption.Key, crypto.ARNPrefix)
newKeyContext = r.Encryption.kmsContext
}
if err = rotateKey(ctx, []byte{}, newKeyID, []byte{}, r.Bucket, oi.Name, encMetadata, newKeyContext); err != nil {
return err
}
// Since we are rotating the keys, make sure to update the metadata.
oi.metadataOnly = true
oi.keyRotation = true
for k, v := range encMetadata {
oi.UserDefined[k] = v
}
if _, err := api.CopyObject(ctx, r.Bucket, oi.Name, r.Bucket, oi.Name, oi, ObjectOptions{
VersionID: oi.VersionID,
}, ObjectOptions{
VersionID: oi.VersionID,
NoLock: true,
}); err != nil {
return err
}
return nil
}
const (
batchKeyRotationName = "batch-rotate.bin"
batchKeyRotationFormat = 1
batchKeyRotateVersionV1 = 1
batchKeyRotateVersion = batchKeyRotateVersionV1
batchKeyRotateAPIVersion = "v1"
batchKeyRotateJobDefaultRetries = 3
batchKeyRotateJobDefaultRetryDelay = 250 * time.Millisecond
)
// Start starts the batch key rotation job, resuming a pending job if any via "job.ID"
func (r *BatchJobKeyRotateV1) Start(ctx context.Context, api ObjectLayer, job BatchJobRequest) error {
ri := &batchJobInfo{
JobID: job.ID,
JobType: string(job.Type()),
StartTime: job.Started,
}
if err := ri.load(ctx, api, job); err != nil {
return err
}
globalBatchJobsMetrics.save(job.ID, ri)
lastObject := ri.Object
delay := job.KeyRotate.Flags.Retry.Delay
if delay == 0 {
delay = batchKeyRotateJobDefaultRetryDelay
}
rnd := rand.New(rand.NewSource(time.Now().UnixNano()))
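// Note: despite its name, skip is used as the Walk filter below; returning
// true keeps the object for key rotation and returning false drops it.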
skip := func(info FileInfo) (ok bool) {
if r.Flags.Filter.OlderThan > 0 && time.Since(info.ModTime) < r.Flags.Filter.OlderThan {
// skip all objects that are newer than specified older duration
return false
}
if r.Flags.Filter.NewerThan > 0 && time.Since(info.ModTime) >= r.Flags.Filter.NewerThan {
// skip all objects that are older than specified newer duration
return false
}
if !r.Flags.Filter.CreatedAfter.IsZero() && r.Flags.Filter.CreatedAfter.After(info.ModTime) {
// skip all objects that are created before the specified time.
return false
}
if !r.Flags.Filter.CreatedBefore.IsZero() && r.Flags.Filter.CreatedBefore.Before(info.ModTime) {
// skip all objects that are created after the specified time.
return false
}
if len(r.Flags.Filter.Tags) > 0 {
// Only parse object tags if tags filter is specified.
tagMap := map[string]string{}
tagStr := info.Metadata[xhttp.AmzObjectTagging]
if len(tagStr) != 0 {
t, err := tags.ParseObjectTags(tagStr)
if err != nil {
return false
}
tagMap = t.ToMap()
}
for _, kv := range r.Flags.Filter.Tags {
for t, v := range tagMap {
if kv.Match(BatchKeyRotateKV{Key: t, Value: v}) {
return true
}
}
}
// None of the provided tag filters match; skip the object
return false
}
if len(r.Flags.Filter.Metadata) > 0 {
for _, kv := range r.Flags.Filter.Metadata {
for k, v := range info.Metadata {
if !stringsHasPrefixFold(k, "x-amz-meta-") && !isStandardHeader(k) {
continue
}
// We only need to match x-amz-meta or standardHeaders
if kv.Match(BatchKeyRotateKV{Key: k, Value: v}) {
return true
}
}
}
// None of the provided metadata filters match; skip the object.
return false
}
if r.Flags.Filter.KMSKeyID != "" {
if v, ok := info.Metadata[xhttp.AmzServerSideEncryptionKmsID]; ok && strings.TrimPrefix(v, crypto.ARNPrefix) != r.Flags.Filter.KMSKeyID {
return false
}
}
return true
}
workerSize, err := strconv.Atoi(env.Get("_MINIO_BATCH_KEYROTATION_WORKERS", strconv.Itoa(runtime.GOMAXPROCS(0)/2)))
if err != nil {
return err
}
wk, err := workers.New(workerSize)
if err != nil {
// invalid worker size.
return err
}
retryAttempts := ri.RetryAttempts
ctx, cancel := context.WithCancel(ctx)
results := make(chan ObjectInfo, 100)
if err := api.Walk(ctx, r.Bucket, r.Prefix, results, ObjectOptions{
WalkMarker: lastObject,
WalkFilter: skip,
}); err != nil {
cancel()
// No need to retry if we can't list objects on source.
return err
}
for result := range results {
result := result
sseKMS := crypto.S3KMS.IsEncrypted(result.UserDefined)
sseS3 := crypto.S3.IsEncrypted(result.UserDefined)
if !sseKMS && !sseS3 { // object is neither sse-s3 nor sse-kms encrypted; skip it
continue
}
wk.Take()
go func() {
defer wk.Give()
for attempts := 1; attempts <= retryAttempts; attempts++ {
attempts := attempts
stopFn := globalBatchJobsMetrics.trace(batchKeyRotationMetricObject, job.ID, attempts, result)
success := true
if err := r.KeyRotate(ctx, api, result); err != nil {
stopFn(err)
logger.LogIf(ctx, err)
success = false
} else {
stopFn(nil)
}
ri.trackCurrentBucketObject(r.Bucket, result, success)
ri.RetryAttempts = attempts
globalBatchJobsMetrics.save(job.ID, ri)
// persist in-memory state to disk after every 10 seconds.
logger.LogIf(ctx, ri.updateAfter(ctx, api, 10*time.Second, job))
if success {
break
}
}
}()
}
wk.Wait()
ri.Complete = ri.ObjectsFailed == 0
ri.Failed = ri.ObjectsFailed > 0
globalBatchJobsMetrics.save(job.ID, ri)
// persist in-memory state to disk.
logger.LogIf(ctx, ri.updateAfter(ctx, api, 0, job))
buf, _ := json.Marshal(ri)
if err := r.Notify(ctx, bytes.NewReader(buf)); err != nil {
logger.LogIf(ctx, fmt.Errorf("unable to notify %v", err))
}
cancel()
if ri.Failed {
ri.ObjectsFailed = 0
ri.Bucket = ""
ri.Object = ""
ri.Objects = 0
time.Sleep(delay + time.Duration(rnd.Float64()*float64(delay)))
}
return nil
}
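
The Take/Give/Wait pattern above comes from minio/pkg/workers; a minimal standalone sketch of the same bounded-concurrency idiom (the work body is invented):

package main

import (
	"fmt"

	"github.com/minio/pkg/workers"
)

func main() {
	wk, err := workers.New(4) // pool size, as _MINIO_BATCH_KEYROTATION_WORKERS sizes it above
	if err != nil {
		panic(err)
	}
	for i := 0; i < 10; i++ {
		i := i
		wk.Take() // blocks until a worker slot frees up
		go func() {
			defer wk.Give() // return the slot
			fmt.Println("rotating object", i)
		}()
	}
	wk.Wait() // block until all in-flight workers finish
}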
//msgp:ignore batchKeyRotationJobError
type batchKeyRotationJobError struct {
Code string
Description string
HTTPStatusCode int
}
func (e batchKeyRotationJobError) Error() string {
return e.Description
}
// Validate validates the job definition input
func (r *BatchJobKeyRotateV1) Validate(ctx context.Context, job BatchJobRequest, o ObjectLayer) error {
if r == nil {
return nil
}
if r.APIVersion != batchKeyRotateAPIVersion {
return errInvalidArgument
}
if r.Bucket == "" {
return errInvalidArgument
}
if _, err := o.GetBucketInfo(ctx, r.Bucket, BucketOptions{}); err != nil {
if isErrBucketNotFound(err) {
return batchKeyRotationJobError{
Code: "NoSuchSourceBucket",
Description: "The specified source bucket does not exist",
HTTPStatusCode: http.StatusNotFound,
}
}
return err
}
if GlobalKMS == nil {
return errKMSNotConfigured
}
if err := r.Encryption.Validate(); err != nil {
return err
}
for _, tag := range r.Flags.Filter.Tags {
if err := tag.Validate(); err != nil {
return err
}
}
for _, meta := range r.Flags.Filter.Metadata {
if err := meta.Validate(); err != nil {
return err
}
}
if err := r.Flags.Retry.Validate(); err != nil {
return err
}
return nil
}

cmd/batch-rotate_gen.go (new file, 1650 lines): diff suppressed because it is too large


@@ -0,0 +1,801 @@
package cmd
// Code generated by github.com/tinylib/msgp DO NOT EDIT.
import (
"bytes"
"testing"
"github.com/tinylib/msgp/msgp"
)
func TestMarshalUnmarshalBatchJobKeyRotateEncryption(t *testing.T) {
v := BatchJobKeyRotateEncryption{}
bts, err := v.MarshalMsg(nil)
if err != nil {
t.Fatal(err)
}
left, err := v.UnmarshalMsg(bts)
if err != nil {
t.Fatal(err)
}
if len(left) > 0 {
t.Errorf("%d bytes left over after UnmarshalMsg(): %q", len(left), left)
}
left, err = msgp.Skip(bts)
if err != nil {
t.Fatal(err)
}
if len(left) > 0 {
t.Errorf("%d bytes left over after Skip(): %q", len(left), left)
}
}
func BenchmarkMarshalMsgBatchJobKeyRotateEncryption(b *testing.B) {
v := BatchJobKeyRotateEncryption{}
b.ReportAllocs()
b.ResetTimer()
for i := 0; i < b.N; i++ {
v.MarshalMsg(nil)
}
}
func BenchmarkAppendMsgBatchJobKeyRotateEncryption(b *testing.B) {
v := BatchJobKeyRotateEncryption{}
bts := make([]byte, 0, v.Msgsize())
bts, _ = v.MarshalMsg(bts[0:0])
b.SetBytes(int64(len(bts)))
b.ReportAllocs()
b.ResetTimer()
for i := 0; i < b.N; i++ {
bts, _ = v.MarshalMsg(bts[0:0])
}
}
func BenchmarkUnmarshalBatchJobKeyRotateEncryption(b *testing.B) {
v := BatchJobKeyRotateEncryption{}
bts, _ := v.MarshalMsg(nil)
b.ReportAllocs()
b.SetBytes(int64(len(bts)))
b.ResetTimer()
for i := 0; i < b.N; i++ {
_, err := v.UnmarshalMsg(bts)
if err != nil {
b.Fatal(err)
}
}
}
func TestEncodeDecodeBatchJobKeyRotateEncryption(t *testing.T) {
v := BatchJobKeyRotateEncryption{}
var buf bytes.Buffer
msgp.Encode(&buf, &v)
m := v.Msgsize()
if buf.Len() > m {
t.Log("WARNING: TestEncodeDecodeBatchJobKeyRotateEncryption Msgsize() is inaccurate")
}
vn := BatchJobKeyRotateEncryption{}
err := msgp.Decode(&buf, &vn)
if err != nil {
t.Error(err)
}
buf.Reset()
msgp.Encode(&buf, &v)
err = msgp.NewReader(&buf).Skip()
if err != nil {
t.Error(err)
}
}
func BenchmarkEncodeBatchJobKeyRotateEncryption(b *testing.B) {
v := BatchJobKeyRotateEncryption{}
var buf bytes.Buffer
msgp.Encode(&buf, &v)
b.SetBytes(int64(buf.Len()))
en := msgp.NewWriter(msgp.Nowhere)
b.ReportAllocs()
b.ResetTimer()
for i := 0; i < b.N; i++ {
v.EncodeMsg(en)
}
en.Flush()
}
func BenchmarkDecodeBatchJobKeyRotateEncryption(b *testing.B) {
v := BatchJobKeyRotateEncryption{}
var buf bytes.Buffer
msgp.Encode(&buf, &v)
b.SetBytes(int64(buf.Len()))
rd := msgp.NewEndlessReader(buf.Bytes(), b)
dc := msgp.NewReader(rd)
b.ReportAllocs()
b.ResetTimer()
for i := 0; i < b.N; i++ {
err := v.DecodeMsg(dc)
if err != nil {
b.Fatal(err)
}
}
}
func TestMarshalUnmarshalBatchJobKeyRotateFlags(t *testing.T) {
v := BatchJobKeyRotateFlags{}
bts, err := v.MarshalMsg(nil)
if err != nil {
t.Fatal(err)
}
left, err := v.UnmarshalMsg(bts)
if err != nil {
t.Fatal(err)
}
if len(left) > 0 {
t.Errorf("%d bytes left over after UnmarshalMsg(): %q", len(left), left)
}
left, err = msgp.Skip(bts)
if err != nil {
t.Fatal(err)
}
if len(left) > 0 {
t.Errorf("%d bytes left over after Skip(): %q", len(left), left)
}
}
func BenchmarkMarshalMsgBatchJobKeyRotateFlags(b *testing.B) {
v := BatchJobKeyRotateFlags{}
b.ReportAllocs()
b.ResetTimer()
for i := 0; i < b.N; i++ {
v.MarshalMsg(nil)
}
}
func BenchmarkAppendMsgBatchJobKeyRotateFlags(b *testing.B) {
v := BatchJobKeyRotateFlags{}
bts := make([]byte, 0, v.Msgsize())
bts, _ = v.MarshalMsg(bts[0:0])
b.SetBytes(int64(len(bts)))
b.ReportAllocs()
b.ResetTimer()
for i := 0; i < b.N; i++ {
bts, _ = v.MarshalMsg(bts[0:0])
}
}
func BenchmarkUnmarshalBatchJobKeyRotateFlags(b *testing.B) {
v := BatchJobKeyRotateFlags{}
bts, _ := v.MarshalMsg(nil)
b.ReportAllocs()
b.SetBytes(int64(len(bts)))
b.ResetTimer()
for i := 0; i < b.N; i++ {
_, err := v.UnmarshalMsg(bts)
if err != nil {
b.Fatal(err)
}
}
}
func TestEncodeDecodeBatchJobKeyRotateFlags(t *testing.T) {
v := BatchJobKeyRotateFlags{}
var buf bytes.Buffer
msgp.Encode(&buf, &v)
m := v.Msgsize()
if buf.Len() > m {
t.Log("WARNING: TestEncodeDecodeBatchJobKeyRotateFlags Msgsize() is inaccurate")
}
vn := BatchJobKeyRotateFlags{}
err := msgp.Decode(&buf, &vn)
if err != nil {
t.Error(err)
}
buf.Reset()
msgp.Encode(&buf, &v)
err = msgp.NewReader(&buf).Skip()
if err != nil {
t.Error(err)
}
}
func BenchmarkEncodeBatchJobKeyRotateFlags(b *testing.B) {
v := BatchJobKeyRotateFlags{}
var buf bytes.Buffer
msgp.Encode(&buf, &v)
b.SetBytes(int64(buf.Len()))
en := msgp.NewWriter(msgp.Nowhere)
b.ReportAllocs()
b.ResetTimer()
for i := 0; i < b.N; i++ {
v.EncodeMsg(en)
}
en.Flush()
}
func BenchmarkDecodeBatchJobKeyRotateFlags(b *testing.B) {
v := BatchJobKeyRotateFlags{}
var buf bytes.Buffer
msgp.Encode(&buf, &v)
b.SetBytes(int64(buf.Len()))
rd := msgp.NewEndlessReader(buf.Bytes(), b)
dc := msgp.NewReader(rd)
b.ReportAllocs()
b.ResetTimer()
for i := 0; i < b.N; i++ {
err := v.DecodeMsg(dc)
if err != nil {
b.Fatal(err)
}
}
}
func TestMarshalUnmarshalBatchJobKeyRotateV1(t *testing.T) {
v := BatchJobKeyRotateV1{}
bts, err := v.MarshalMsg(nil)
if err != nil {
t.Fatal(err)
}
left, err := v.UnmarshalMsg(bts)
if err != nil {
t.Fatal(err)
}
if len(left) > 0 {
t.Errorf("%d bytes left over after UnmarshalMsg(): %q", len(left), left)
}
left, err = msgp.Skip(bts)
if err != nil {
t.Fatal(err)
}
if len(left) > 0 {
t.Errorf("%d bytes left over after Skip(): %q", len(left), left)
}
}
func BenchmarkMarshalMsgBatchJobKeyRotateV1(b *testing.B) {
v := BatchJobKeyRotateV1{}
b.ReportAllocs()
b.ResetTimer()
for i := 0; i < b.N; i++ {
v.MarshalMsg(nil)
}
}
func BenchmarkAppendMsgBatchJobKeyRotateV1(b *testing.B) {
v := BatchJobKeyRotateV1{}
bts := make([]byte, 0, v.Msgsize())
bts, _ = v.MarshalMsg(bts[0:0])
b.SetBytes(int64(len(bts)))
b.ReportAllocs()
b.ResetTimer()
for i := 0; i < b.N; i++ {
bts, _ = v.MarshalMsg(bts[0:0])
}
}
func BenchmarkUnmarshalBatchJobKeyRotateV1(b *testing.B) {
v := BatchJobKeyRotateV1{}
bts, _ := v.MarshalMsg(nil)
b.ReportAllocs()
b.SetBytes(int64(len(bts)))
b.ResetTimer()
for i := 0; i < b.N; i++ {
_, err := v.UnmarshalMsg(bts)
if err != nil {
b.Fatal(err)
}
}
}
func TestEncodeDecodeBatchJobKeyRotateV1(t *testing.T) {
v := BatchJobKeyRotateV1{}
var buf bytes.Buffer
msgp.Encode(&buf, &v)
m := v.Msgsize()
if buf.Len() > m {
t.Log("WARNING: TestEncodeDecodeBatchJobKeyRotateV1 Msgsize() is inaccurate")
}
vn := BatchJobKeyRotateV1{}
err := msgp.Decode(&buf, &vn)
if err != nil {
t.Error(err)
}
buf.Reset()
msgp.Encode(&buf, &v)
err = msgp.NewReader(&buf).Skip()
if err != nil {
t.Error(err)
}
}
func BenchmarkEncodeBatchJobKeyRotateV1(b *testing.B) {
v := BatchJobKeyRotateV1{}
var buf bytes.Buffer
msgp.Encode(&buf, &v)
b.SetBytes(int64(buf.Len()))
en := msgp.NewWriter(msgp.Nowhere)
b.ReportAllocs()
b.ResetTimer()
for i := 0; i < b.N; i++ {
v.EncodeMsg(en)
}
en.Flush()
}
func BenchmarkDecodeBatchJobKeyRotateV1(b *testing.B) {
v := BatchJobKeyRotateV1{}
var buf bytes.Buffer
msgp.Encode(&buf, &v)
b.SetBytes(int64(buf.Len()))
rd := msgp.NewEndlessReader(buf.Bytes(), b)
dc := msgp.NewReader(rd)
b.ReportAllocs()
b.ResetTimer()
for i := 0; i < b.N; i++ {
err := v.DecodeMsg(dc)
if err != nil {
b.Fatal(err)
}
}
}
func TestMarshalUnmarshalBatchKeyRotateFilter(t *testing.T) {
v := BatchKeyRotateFilter{}
bts, err := v.MarshalMsg(nil)
if err != nil {
t.Fatal(err)
}
left, err := v.UnmarshalMsg(bts)
if err != nil {
t.Fatal(err)
}
if len(left) > 0 {
t.Errorf("%d bytes left over after UnmarshalMsg(): %q", len(left), left)
}
left, err = msgp.Skip(bts)
if err != nil {
t.Fatal(err)
}
if len(left) > 0 {
t.Errorf("%d bytes left over after Skip(): %q", len(left), left)
}
}
func BenchmarkMarshalMsgBatchKeyRotateFilter(b *testing.B) {
v := BatchKeyRotateFilter{}
b.ReportAllocs()
b.ResetTimer()
for i := 0; i < b.N; i++ {
v.MarshalMsg(nil)
}
}
func BenchmarkAppendMsgBatchKeyRotateFilter(b *testing.B) {
v := BatchKeyRotateFilter{}
bts := make([]byte, 0, v.Msgsize())
bts, _ = v.MarshalMsg(bts[0:0])
b.SetBytes(int64(len(bts)))
b.ReportAllocs()
b.ResetTimer()
for i := 0; i < b.N; i++ {
bts, _ = v.MarshalMsg(bts[0:0])
}
}
func BenchmarkUnmarshalBatchKeyRotateFilter(b *testing.B) {
v := BatchKeyRotateFilter{}
bts, _ := v.MarshalMsg(nil)
b.ReportAllocs()
b.SetBytes(int64(len(bts)))
b.ResetTimer()
for i := 0; i < b.N; i++ {
_, err := v.UnmarshalMsg(bts)
if err != nil {
b.Fatal(err)
}
}
}
func TestEncodeDecodeBatchKeyRotateFilter(t *testing.T) {
v := BatchKeyRotateFilter{}
var buf bytes.Buffer
msgp.Encode(&buf, &v)
m := v.Msgsize()
if buf.Len() > m {
t.Log("WARNING: TestEncodeDecodeBatchKeyRotateFilter Msgsize() is inaccurate")
}
vn := BatchKeyRotateFilter{}
err := msgp.Decode(&buf, &vn)
if err != nil {
t.Error(err)
}
buf.Reset()
msgp.Encode(&buf, &v)
err = msgp.NewReader(&buf).Skip()
if err != nil {
t.Error(err)
}
}
func BenchmarkEncodeBatchKeyRotateFilter(b *testing.B) {
v := BatchKeyRotateFilter{}
var buf bytes.Buffer
msgp.Encode(&buf, &v)
b.SetBytes(int64(buf.Len()))
en := msgp.NewWriter(msgp.Nowhere)
b.ReportAllocs()
b.ResetTimer()
for i := 0; i < b.N; i++ {
v.EncodeMsg(en)
}
en.Flush()
}
func BenchmarkDecodeBatchKeyRotateFilter(b *testing.B) {
v := BatchKeyRotateFilter{}
var buf bytes.Buffer
msgp.Encode(&buf, &v)
b.SetBytes(int64(buf.Len()))
rd := msgp.NewEndlessReader(buf.Bytes(), b)
dc := msgp.NewReader(rd)
b.ReportAllocs()
b.ResetTimer()
for i := 0; i < b.N; i++ {
err := v.DecodeMsg(dc)
if err != nil {
b.Fatal(err)
}
}
}
func TestMarshalUnmarshalBatchKeyRotateKV(t *testing.T) {
v := BatchKeyRotateKV{}
bts, err := v.MarshalMsg(nil)
if err != nil {
t.Fatal(err)
}
left, err := v.UnmarshalMsg(bts)
if err != nil {
t.Fatal(err)
}
if len(left) > 0 {
t.Errorf("%d bytes left over after UnmarshalMsg(): %q", len(left), left)
}
left, err = msgp.Skip(bts)
if err != nil {
t.Fatal(err)
}
if len(left) > 0 {
t.Errorf("%d bytes left over after Skip(): %q", len(left), left)
}
}
func BenchmarkMarshalMsgBatchKeyRotateKV(b *testing.B) {
v := BatchKeyRotateKV{}
b.ReportAllocs()
b.ResetTimer()
for i := 0; i < b.N; i++ {
v.MarshalMsg(nil)
}
}
func BenchmarkAppendMsgBatchKeyRotateKV(b *testing.B) {
v := BatchKeyRotateKV{}
bts := make([]byte, 0, v.Msgsize())
bts, _ = v.MarshalMsg(bts[0:0])
b.SetBytes(int64(len(bts)))
b.ReportAllocs()
b.ResetTimer()
for i := 0; i < b.N; i++ {
bts, _ = v.MarshalMsg(bts[0:0])
}
}
func BenchmarkUnmarshalBatchKeyRotateKV(b *testing.B) {
v := BatchKeyRotateKV{}
bts, _ := v.MarshalMsg(nil)
b.ReportAllocs()
b.SetBytes(int64(len(bts)))
b.ResetTimer()
for i := 0; i < b.N; i++ {
_, err := v.UnmarshalMsg(bts)
if err != nil {
b.Fatal(err)
}
}
}
func TestEncodeDecodeBatchKeyRotateKV(t *testing.T) {
v := BatchKeyRotateKV{}
var buf bytes.Buffer
msgp.Encode(&buf, &v)
m := v.Msgsize()
if buf.Len() > m {
t.Log("WARNING: TestEncodeDecodeBatchKeyRotateKV Msgsize() is inaccurate")
}
vn := BatchKeyRotateKV{}
err := msgp.Decode(&buf, &vn)
if err != nil {
t.Error(err)
}
buf.Reset()
msgp.Encode(&buf, &v)
err = msgp.NewReader(&buf).Skip()
if err != nil {
t.Error(err)
}
}
func BenchmarkEncodeBatchKeyRotateKV(b *testing.B) {
v := BatchKeyRotateKV{}
var buf bytes.Buffer
msgp.Encode(&buf, &v)
b.SetBytes(int64(buf.Len()))
en := msgp.NewWriter(msgp.Nowhere)
b.ReportAllocs()
b.ResetTimer()
for i := 0; i < b.N; i++ {
v.EncodeMsg(en)
}
en.Flush()
}
func BenchmarkDecodeBatchKeyRotateKV(b *testing.B) {
v := BatchKeyRotateKV{}
var buf bytes.Buffer
msgp.Encode(&buf, &v)
b.SetBytes(int64(buf.Len()))
rd := msgp.NewEndlessReader(buf.Bytes(), b)
dc := msgp.NewReader(rd)
b.ReportAllocs()
b.ResetTimer()
for i := 0; i < b.N; i++ {
err := v.DecodeMsg(dc)
if err != nil {
b.Fatal(err)
}
}
}
func TestMarshalUnmarshalBatchKeyRotateNotification(t *testing.T) {
v := BatchKeyRotateNotification{}
bts, err := v.MarshalMsg(nil)
if err != nil {
t.Fatal(err)
}
left, err := v.UnmarshalMsg(bts)
if err != nil {
t.Fatal(err)
}
if len(left) > 0 {
t.Errorf("%d bytes left over after UnmarshalMsg(): %q", len(left), left)
}
left, err = msgp.Skip(bts)
if err != nil {
t.Fatal(err)
}
if len(left) > 0 {
t.Errorf("%d bytes left over after Skip(): %q", len(left), left)
}
}
func BenchmarkMarshalMsgBatchKeyRotateNotification(b *testing.B) {
v := BatchKeyRotateNotification{}
b.ReportAllocs()
b.ResetTimer()
for i := 0; i < b.N; i++ {
v.MarshalMsg(nil)
}
}
func BenchmarkAppendMsgBatchKeyRotateNotification(b *testing.B) {
v := BatchKeyRotateNotification{}
bts := make([]byte, 0, v.Msgsize())
bts, _ = v.MarshalMsg(bts[0:0])
b.SetBytes(int64(len(bts)))
b.ReportAllocs()
b.ResetTimer()
for i := 0; i < b.N; i++ {
bts, _ = v.MarshalMsg(bts[0:0])
}
}
func BenchmarkUnmarshalBatchKeyRotateNotification(b *testing.B) {
v := BatchKeyRotateNotification{}
bts, _ := v.MarshalMsg(nil)
b.ReportAllocs()
b.SetBytes(int64(len(bts)))
b.ResetTimer()
for i := 0; i < b.N; i++ {
_, err := v.UnmarshalMsg(bts)
if err != nil {
b.Fatal(err)
}
}
}
func TestEncodeDecodeBatchKeyRotateNotification(t *testing.T) {
v := BatchKeyRotateNotification{}
var buf bytes.Buffer
msgp.Encode(&buf, &v)
m := v.Msgsize()
if buf.Len() > m {
t.Log("WARNING: TestEncodeDecodeBatchKeyRotateNotification Msgsize() is inaccurate")
}
vn := BatchKeyRotateNotification{}
err := msgp.Decode(&buf, &vn)
if err != nil {
t.Error(err)
}
buf.Reset()
msgp.Encode(&buf, &v)
err = msgp.NewReader(&buf).Skip()
if err != nil {
t.Error(err)
}
}
func BenchmarkEncodeBatchKeyRotateNotification(b *testing.B) {
v := BatchKeyRotateNotification{}
var buf bytes.Buffer
msgp.Encode(&buf, &v)
b.SetBytes(int64(buf.Len()))
en := msgp.NewWriter(msgp.Nowhere)
b.ReportAllocs()
b.ResetTimer()
for i := 0; i < b.N; i++ {
v.EncodeMsg(en)
}
en.Flush()
}
func BenchmarkDecodeBatchKeyRotateNotification(b *testing.B) {
v := BatchKeyRotateNotification{}
var buf bytes.Buffer
msgp.Encode(&buf, &v)
b.SetBytes(int64(buf.Len()))
rd := msgp.NewEndlessReader(buf.Bytes(), b)
dc := msgp.NewReader(rd)
b.ReportAllocs()
b.ResetTimer()
for i := 0; i < b.N; i++ {
err := v.DecodeMsg(dc)
if err != nil {
b.Fatal(err)
}
}
}
func TestMarshalUnmarshalBatchKeyRotateRetry(t *testing.T) {
v := BatchKeyRotateRetry{}
bts, err := v.MarshalMsg(nil)
if err != nil {
t.Fatal(err)
}
left, err := v.UnmarshalMsg(bts)
if err != nil {
t.Fatal(err)
}
if len(left) > 0 {
t.Errorf("%d bytes left over after UnmarshalMsg(): %q", len(left), left)
}
left, err = msgp.Skip(bts)
if err != nil {
t.Fatal(err)
}
if len(left) > 0 {
t.Errorf("%d bytes left over after Skip(): %q", len(left), left)
}
}
func BenchmarkMarshalMsgBatchKeyRotateRetry(b *testing.B) {
v := BatchKeyRotateRetry{}
b.ReportAllocs()
b.ResetTimer()
for i := 0; i < b.N; i++ {
v.MarshalMsg(nil)
}
}
func BenchmarkAppendMsgBatchKeyRotateRetry(b *testing.B) {
v := BatchKeyRotateRetry{}
bts := make([]byte, 0, v.Msgsize())
bts, _ = v.MarshalMsg(bts[0:0])
b.SetBytes(int64(len(bts)))
b.ReportAllocs()
b.ResetTimer()
for i := 0; i < b.N; i++ {
bts, _ = v.MarshalMsg(bts[0:0])
}
}
func BenchmarkUnmarshalBatchKeyRotateRetry(b *testing.B) {
v := BatchKeyRotateRetry{}
bts, _ := v.MarshalMsg(nil)
b.ReportAllocs()
b.SetBytes(int64(len(bts)))
b.ResetTimer()
for i := 0; i < b.N; i++ {
_, err := v.UnmarshalMsg(bts)
if err != nil {
b.Fatal(err)
}
}
}
func TestEncodeDecodeBatchKeyRotateRetry(t *testing.T) {
v := BatchKeyRotateRetry{}
var buf bytes.Buffer
msgp.Encode(&buf, &v)
m := v.Msgsize()
if buf.Len() > m {
t.Log("WARNING: TestEncodeDecodeBatchKeyRotateRetry Msgsize() is inaccurate")
}
vn := BatchKeyRotateRetry{}
err := msgp.Decode(&buf, &vn)
if err != nil {
t.Error(err)
}
buf.Reset()
msgp.Encode(&buf, &v)
err = msgp.NewReader(&buf).Skip()
if err != nil {
t.Error(err)
}
}
func BenchmarkEncodeBatchKeyRotateRetry(b *testing.B) {
v := BatchKeyRotateRetry{}
var buf bytes.Buffer
msgp.Encode(&buf, &v)
b.SetBytes(int64(buf.Len()))
en := msgp.NewWriter(msgp.Nowhere)
b.ReportAllocs()
b.ResetTimer()
for i := 0; i < b.N; i++ {
v.EncodeMsg(en)
}
en.Flush()
}
func BenchmarkDecodeBatchKeyRotateRetry(b *testing.B) {
v := BatchKeyRotateRetry{}
var buf bytes.Buffer
msgp.Encode(&buf, &v)
b.SetBytes(int64(buf.Len()))
rd := msgp.NewEndlessReader(buf.Bytes(), b)
dc := msgp.NewReader(rd)
b.ReportAllocs()
b.ResetTimer()
for i := 0; i < b.N; i++ {
err := v.DecodeMsg(dc)
if err != nil {
b.Fatal(err)
}
}
}


@@ -0,0 +1,24 @@
// Code generated by "stringer -type=batchJobMetric -trimprefix=batchJobMetric batch-handlers.go"; DO NOT EDIT.
package cmd
import "strconv"
func _() {
// An "invalid array index" compiler error signifies that the constant values have changed.
// Re-run the stringer command to generate them again.
var x [1]struct{}
_ = x[batchReplicationMetricObject-0]
_ = x[batchKeyRotationMetricObject-1]
}
const _batchJobMetric_name = "batchReplicationMetricObjectbatchKeyRotationMetricObject"
var _batchJobMetric_index = [...]uint8{0, 28, 56}
func (i batchJobMetric) String() string {
if i >= batchJobMetric(len(_batchJobMetric_index)-1) {
return "batchJobMetric(" + strconv.FormatInt(int64(i), 10) + ")"
}
return _batchJobMetric_name[_batchJobMetric_index[i]:_batchJobMetric_index[i+1]]
}
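
For readers unfamiliar with stringer output: the single concatenated name string is sliced by the index table, so each constant's name is name[index[i]:index[i+1]]. A standalone sketch reproducing the lookup:

package main

import "fmt"

func main() {
	const name = "batchReplicationMetricObjectbatchKeyRotationMetricObject"
	index := [...]uint8{0, 28, 56}
	for i := 0; i+1 < len(index); i++ {
		fmt.Println(name[index[i]:index[i+1]])
	}
	// Output:
	// batchReplicationMetricObject
	// batchKeyRotationMetricObject
}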


@@ -1,23 +0,0 @@
// Code generated by "stringer -type=batchReplicationMetric -trimprefix=batchReplicationMetric batch-handlers.go"; DO NOT EDIT.
package cmd
import "strconv"
func _() {
// An "invalid array index" compiler error signifies that the constant values have changed.
// Re-run the stringer command to generate them again.
var x [1]struct{}
_ = x[batchReplicationMetricObject-0]
}
const _batchReplicationMetric_name = "Object"
var _batchReplicationMetric_index = [...]uint8{0, 6}
func (i batchReplicationMetric) String() string {
if i >= batchReplicationMetric(len(_batchReplicationMetric_index)-1) {
return "batchReplicationMetric(" + strconv.FormatInt(int64(i), 10) + ")"
}
return _batchReplicationMetric_name[_batchReplicationMetric_index[i]:_batchReplicationMetric_index[i+1]]
}


@@ -35,7 +35,7 @@ func runPutObjectBenchmark(b *testing.B, obj ObjectLayer, objSize int) {
// obtains random bucket name.
bucket := getRandomBucketName()
// create bucket.
err = obj.MakeBucketWithLocation(context.Background(), bucket, MakeBucketOptions{})
err = obj.MakeBucket(context.Background(), bucket, MakeBucketOptions{})
if err != nil {
b.Fatal(err)
}
@@ -76,7 +76,7 @@ func runPutObjectPartBenchmark(b *testing.B, obj ObjectLayer, partSize int) {
object := getRandomObjectName()
// create bucket.
err = obj.MakeBucketWithLocation(context.Background(), bucket, MakeBucketOptions{})
err = obj.MakeBucket(context.Background(), bucket, MakeBucketOptions{})
if err != nil {
b.Fatal(err)
}
@@ -196,7 +196,7 @@ func runPutObjectBenchmarkParallel(b *testing.B, obj ObjectLayer, objSize int) {
// obtains random bucket name.
bucket := getRandomBucketName()
// create bucket.
err := obj.MakeBucketWithLocation(context.Background(), bucket, MakeBucketOptions{})
err := obj.MakeBucket(context.Background(), bucket, MakeBucketOptions{})
if err != nil {
b.Fatal(err)
}


@@ -165,9 +165,9 @@ func (b *streamingBitrotReader) ReadAt(buf []byte, offset int64) (int, error) {
b.rc, err = b.disk.ReadFileStream(context.TODO(), b.volume, b.filePath, streamOffset, b.tillOffset-streamOffset)
if err != nil {
if !IsErr(err, ignoredErrs...) {
logger.LogIf(GlobalContext,
logger.LogOnceIf(GlobalContext,
fmt.Errorf("Reading erasure shards at (%s: %s/%s) returned '%w', will attempt to reconstruct if we have quorum",
b.disk, b.volume, b.filePath, err))
b.disk, b.volume, b.filePath, err), "bitrot-read-file-stream-"+b.volume+"-"+b.filePath)
}
}
} else {

cmd/bootstrap-messages.go (new file, 116 lines)

@@ -0,0 +1,116 @@
// Copyright (c) 2015-2023 MinIO, Inc.
//
// This file is part of MinIO Object Storage stack
//
// This program is free software: you can redistribute it and/or modify
// it under the terms of the GNU Affero General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
//
// This program is distributed in the hope that it will be useful
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU Affero General Public License for more details.
//
// You should have received a copy of the GNU Affero General Public License
// along with this program. If not, see <http://www.gnu.org/licenses/>.
package cmd
import (
"context"
"fmt"
"sync"
"time"
"github.com/minio/madmin-go/v3"
"github.com/minio/minio/internal/pubsub"
)
const bootstrapMsgsLimit = 4 << 10
type bootstrapInfo struct {
msg string
ts time.Time
source string
}
type bootstrapTracer struct {
mu sync.RWMutex
idx int
info [bootstrapMsgsLimit]bootstrapInfo
lastUpdate time.Time
}
var globalBootstrapTracer = &bootstrapTracer{}
func (bs *bootstrapTracer) DropEvents() {
bs.mu.Lock()
defer bs.mu.Unlock()
if time.Now().UTC().Sub(bs.lastUpdate) > 24*time.Hour {
bs.info = [bootstrapMsgsLimit]bootstrapInfo{}
bs.idx = 0
}
}
func (bs *bootstrapTracer) Empty() bool {
var empty bool
bs.mu.RLock()
empty = bs.info[0].msg == ""
bs.mu.RUnlock()
return empty
}
func (bs *bootstrapTracer) Record(msg string, skip int) {
source := getSource(skip + 1)
bs.mu.Lock()
now := time.Now().UTC()
bs.info[bs.idx] = bootstrapInfo{
msg: msg,
ts: now,
source: source,
}
bs.lastUpdate = now
bs.idx = (bs.idx + 1) % bootstrapMsgsLimit
bs.mu.Unlock()
}
func (bs *bootstrapTracer) Events() []madmin.TraceInfo {
traceInfo := make([]madmin.TraceInfo, 0, bootstrapMsgsLimit)
// Add all messages in order
addAll := func(info []bootstrapInfo) {
for _, msg := range info {
if msg.ts.IsZero() {
continue // skip empty events
}
traceInfo = append(traceInfo, madmin.TraceInfo{
TraceType: madmin.TraceBootstrap,
Time: msg.ts,
NodeName: globalLocalNodeName,
FuncName: "BOOTSTRAP",
Message: fmt.Sprintf("%s %s", msg.source, msg.msg),
})
}
}
bs.mu.RLock()
addAll(bs.info[bs.idx:])
addAll(bs.info[:bs.idx])
bs.mu.RUnlock()
return traceInfo
}
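
Record treats info as a fixed-size ring, so Events restores chronological order by reading from the current index to the end and then from the start; a minimal standalone sketch of the same trick:

package main

import "fmt"

func main() {
	const limit = 4
	var ring [limit]string
	idx := 0
	for i := 0; i < 6; i++ { // the two oldest entries get overwritten
		ring[idx] = fmt.Sprintf("msg-%d", i)
		idx = (idx + 1) % limit
	}
	ordered := append(append([]string{}, ring[idx:]...), ring[:idx]...)
	fmt.Println(ordered) // [msg-2 msg-3 msg-4 msg-5], oldest first
}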
func (bs *bootstrapTracer) Publish(ctx context.Context, trace *pubsub.PubSub[madmin.TraceInfo, madmin.TraceType]) {
if bs.Empty() {
return
}
for _, bsEvent := range bs.Events() {
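// When the context is canceled the first case fires and the event is simply
// skipped; publishing never blocks shutdown.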
select {
case <-ctx.Done():
default:
trace.Publish(bsEvent)
}
}
}


@@ -0,0 +1,60 @@
// Copyright (c) 2015-2023 MinIO, Inc.
//
// This file is part of MinIO Object Storage stack
//
// This program is free software: you can redistribute it and/or modify
// it under the terms of the GNU Affero General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
//
// This program is distributed in the hope that it will be useful
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU Affero General Public License for more details.
//
// You should have received a copy of the GNU Affero General Public License
// along with this program. If not, see <http://www.gnu.org/licenses/>.
package cmd
import (
"fmt"
"strings"
"testing"
"time"
)
func TestBootstrap(t *testing.T) {
// Bootstrap events exceed bootstrap messages limit
bsTracer := &bootstrapTracer{}
for i := 0; i < bootstrapMsgsLimit+10; i++ {
bsTracer.Record(fmt.Sprintf("msg-%d", i), 1)
}
traceInfos := bsTracer.Events()
if len(traceInfos) != bootstrapMsgsLimit {
t.Fatalf("Expected length of events %d but got %d", bootstrapMsgsLimit, len(traceInfos))
}
// Simulate the case where bootstrap events were updated a day ago
bsTracer.lastUpdate = time.Now().UTC().Add(-25 * time.Hour)
bsTracer.DropEvents()
if !bsTracer.Empty() {
t.Fatalf("Expected all bootstrap events to have been dropped, but found %d events", len(bsTracer.Events()))
}
// Fewer than 4K bootstrap events
for i := 0; i < 10; i++ {
bsTracer.Record(fmt.Sprintf("msg-%d", i), 1)
}
events := bsTracer.Events()
if len(events) != 10 {
t.Fatalf("Expected length of events %d but got %d", 10, len(events))
}
for i, traceInfo := range bsTracer.Events() {
msg := fmt.Sprintf("msg-%d", i)
if !strings.HasSuffix(traceInfo.Message, msg) {
t.Fatalf("Expected %s but got %s", msg, traceInfo.Message)
}
}
}


@@ -24,15 +24,15 @@ import (
"io"
"net/http"
"net/url"
"os"
"reflect"
"runtime"
"time"
"github.com/gorilla/mux"
"github.com/minio/minio-go/v7/pkg/set"
xhttp "github.com/minio/minio/internal/http"
"github.com/minio/minio/internal/logger"
"github.com/minio/minio/internal/rest"
"github.com/minio/mux"
"github.com/minio/pkg/env"
)
@@ -53,24 +53,22 @@ type bootstrapRESTServer struct{}
// ServerSystemConfig - captures information about server configuration.
type ServerSystemConfig struct {
MinioPlatform string
MinioEndpoints EndpointServerPools
MinioEnv map[string]string
}
// Diff - returns error on first difference found in two configs.
func (s1 ServerSystemConfig) Diff(s2 ServerSystemConfig) error {
if s1.MinioPlatform != s2.MinioPlatform {
return fmt.Errorf("Expected platform '%s', found to be running '%s'",
s1.MinioPlatform, s2.MinioPlatform)
}
if s1.MinioEndpoints.NEndpoints() != s2.MinioEndpoints.NEndpoints() {
return fmt.Errorf("Expected number of endpoints %d, seen %d", s1.MinioEndpoints.NEndpoints(),
s2.MinioEndpoints.NEndpoints())
}
for i, ep := range s1.MinioEndpoints {
if ep.CmdLine != s2.MinioEndpoints[i].CmdLine {
return fmt.Errorf("Expected command line argument %s, seen %s", ep.CmdLine,
s2.MinioEndpoints[i].CmdLine)
}
if ep.SetCount != s2.MinioEndpoints[i].SetCount {
return fmt.Errorf("Expected set count %d, seen %d", ep.SetCount,
s2.MinioEndpoints[i].SetCount)
@@ -79,11 +77,9 @@ func (s1 ServerSystemConfig) Diff(s2 ServerSystemConfig) error {
return fmt.Errorf("Expected drives pet set %d, seen %d", ep.DrivesPerSet,
s2.MinioEndpoints[i].DrivesPerSet)
}
for j, endpoint := range ep.Endpoints {
if endpoint.String() != s2.MinioEndpoints[i].Endpoints[j].String() {
return fmt.Errorf("Expected endpoint %s, seen %s", endpoint,
s2.MinioEndpoints[i].Endpoints[j])
}
if ep.Platform != s2.MinioEndpoints[i].Platform {
return fmt.Errorf("Expected platform '%s', found to be on '%s'",
ep.Platform, s2.MinioEndpoints[i].Platform)
}
}
if !reflect.DeepEqual(s1.MinioEnv, s2.MinioEnv) {
@@ -106,10 +102,14 @@ func (s1 ServerSystemConfig) Diff(s2 ServerSystemConfig) error {
}
var skipEnvs = map[string]struct{}{
"MINIO_OPTS": {},
"MINIO_CERT_PASSWD": {},
"MINIO_SERVER_DEBUG": {},
"MINIO_DSYNC_TRACE": {},
"MINIO_OPTS": {},
"MINIO_CERT_PASSWD": {},
"MINIO_SERVER_DEBUG": {},
"MINIO_DSYNC_TRACE": {},
"MINIO_ROOT_USER": {},
"MINIO_ROOT_PASSWORD": {},
"MINIO_ACCESS_KEY": {},
"MINIO_SECRET_KEY": {},
}
func getServerSystemCfg() ServerSystemConfig {
@@ -123,34 +123,48 @@ func getServerSystemCfg() ServerSystemConfig {
if _, ok := skipEnvs[envK]; ok {
continue
}
envValues[envK] = env.Get(envK, "")
envValues[envK] = logger.HashString(env.Get(envK, ""))
}
return ServerSystemConfig{
MinioPlatform: fmt.Sprintf("OS: %s | Arch: %s", runtime.GOOS, runtime.GOARCH),
MinioEndpoints: globalEndpoints,
MinioEnv: envValues,
}
}
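The change above stops shipping raw environment values between peers: each value is hashed via `logger.HashString` before being exchanged, so secrets never cross the wire but configuration drift is still detectable. A standalone sketch of the idea; `hashEnv` is a hypothetical stand-in and the real digest function may differ:

```go
package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
)

// hashEnv is a stand-in for logger.HashString (SHA-256 assumed here).
func hashEnv(v string) string {
	sum := sha256.Sum256([]byte(v))
	return hex.EncodeToString(sum[:])
}

// envDrift compares two digest maps and reports keys whose values differ;
// since only digests are compared, no secret material is exposed.
func envDrift(local, remote map[string]string) (drift []string) {
	for k, lv := range local {
		if rv, ok := remote[k]; !ok || rv != lv {
			drift = append(drift, k)
		}
	}
	return drift
}

func main() {
	local := map[string]string{"MINIO_API_REQUESTS_MAX": hashEnv("1000")}
	remote := map[string]string{"MINIO_API_REQUESTS_MAX": hashEnv("500")}
	fmt.Println(envDrift(local, remote)) // [MINIO_API_REQUESTS_MAX]
}
```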
func (b *bootstrapRESTServer) writeErrorResponse(w http.ResponseWriter, err error) {
w.WriteHeader(http.StatusForbidden)
w.Write([]byte(err.Error()))
}
// HealthHandler returns success if request is valid
func (b *bootstrapRESTServer) HealthHandler(w http.ResponseWriter, r *http.Request) {}
func (b *bootstrapRESTServer) VerifyHandler(w http.ResponseWriter, r *http.Request) {
ctx := newContext(r, w, "VerifyHandler")
if err := storageServerRequestValidate(r); err != nil {
b.writeErrorResponse(w, err)
return
}
cfg := getServerSystemCfg()
logger.LogIf(ctx, json.NewEncoder(w).Encode(&cfg))
}
// registerBootstrapRESTHandlers - register bootstrap rest router.
func registerBootstrapRESTHandlers(router *mux.Router) {
h := func(f http.HandlerFunc) http.HandlerFunc {
return collectInternodeStats(httpTraceHdrs(f))
}
server := &bootstrapRESTServer{}
subrouter := router.PathPrefix(bootstrapRESTPrefix).Subrouter()
subrouter.Methods(http.MethodPost).Path(bootstrapRESTVersionPrefix + bootstrapRESTMethodHealth).HandlerFunc(
httpTraceHdrs(server.HealthHandler))
h(server.HealthHandler))
subrouter.Methods(http.MethodPost).Path(bootstrapRESTVersionPrefix + bootstrapRESTMethodVerify).HandlerFunc(
httpTraceHdrs(server.VerifyHandler))
h(server.VerifyHandler))
}
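The `h` closure above is ordinary Go middleware composition: `collectInternodeStats` wraps `httpTraceHdrs`, which wraps the handler, so stats and tracing run on every internode route. A generic sketch of the same composition (names hypothetical):

```go
// chain applies middlewares so that the first one listed runs outermost.
func chain(h http.HandlerFunc, mws ...func(http.HandlerFunc) http.HandlerFunc) http.HandlerFunc {
	for i := len(mws) - 1; i >= 0; i-- {
		h = mws[i](h)
	}
	return h
}
```

With such a helper, the registration above would read `chain(server.VerifyHandler, collectInternodeStats, httpTraceHdrs)`.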
// client to talk to bootstrap endpoints.
@@ -207,6 +221,7 @@ func verifyServerSystemConfig(ctx context.Context, endpointServerPools EndpointS
for onlineServers < len(clnts)/2 {
for _, clnt := range clnts {
if err := clnt.Verify(ctx, srcCfg); err != nil {
bootstrapTrace(fmt.Sprintf("clnt.Verify: %v, endpoint: %v", err, clnt.endpoint))
if !isNetworkError(err) {
logger.LogOnceIf(ctx, fmt.Errorf("%s has incorrect configuration: %w", clnt.String(), err), clnt.String())
incorrectConfigs = append(incorrectConfigs, fmt.Errorf("%s has incorrect configuration: %w", clnt.String(), err))
@@ -232,7 +247,7 @@ func verifyServerSystemConfig(ctx context.Context, endpointServerPools EndpointS
logger.Info(fmt.Sprintf("Following servers are currently offline or unreachable %s", offlineEndpoints))
}
if len(incorrectConfigs) > 0 {
logger.Info(fmt.Sprintf("Following servers mismatch in their configuration %s", incorrectConfigs))
logger.Info(fmt.Sprintf("Following servers have mismatching configuration %s", incorrectConfigs))
}
retries = 0 // reset to log again after 5 retries.
}
@@ -255,7 +270,11 @@ func newBootstrapRESTClients(endpointServerPools EndpointServerPools) []*bootstr
// Only proceed for remote endpoints.
if !endpoint.IsLocal {
clnts = append(clnts, newBootstrapRESTClient(endpoint))
cl := newBootstrapRESTClient(endpoint)
if serverDebugLog {
cl.restClient.TraceOutput = os.Stdout
}
clnts = append(clnts, cl)
}
}
}
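`verifyServerSystemConfig` (partially shown above) keeps polling peers until at least half of them report a matching configuration, now tracing every failed `clnt.Verify`. A condensed sketch of that quorum loop, with `verify` as a stand-in for the bootstrap REST client call; the back-off interval is an assumption:

```go
func waitForQuorum(ctx context.Context, peers []string, verify func(context.Context, string) error) error {
	for {
		online := 0
		for _, p := range peers {
			if err := verify(ctx, p); err == nil {
				online++
			}
		}
		if online >= len(peers)/2 { // enough peers agree with the local config
			return nil
		}
		select {
		case <-ctx.Done():
			return ctx.Err()
		case <-time.After(500 * time.Millisecond): // back off, then retry
		}
	}
}
```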

View File

@@ -25,11 +25,11 @@ import (
"io"
"net/http"
"github.com/gorilla/mux"
"github.com/minio/kes"
"github.com/minio/madmin-go"
"github.com/minio/kes-go"
"github.com/minio/madmin-go/v3"
"github.com/minio/minio/internal/kms"
"github.com/minio/minio/internal/logger"
"github.com/minio/mux"
"github.com/minio/pkg/bucket/policy"
)
@@ -51,11 +51,6 @@ func (api objectAPIHandlers) PutBucketEncryptionHandler(w http.ResponseWriter, r
return
}
if !objAPI.IsEncryptionSupported() {
writeErrorResponse(ctx, w, errorCodes.ToAPIErr(ErrNotImplemented), r.URL)
return
}
vars := mux.Vars(r)
bucket := vars["bucket"]
@@ -119,15 +114,12 @@ func (api objectAPIHandlers) PutBucketEncryptionHandler(w http.ResponseWriter, r
// We encode the xml bytes as base64 to ensure there are no encoding
// errors.
cfgStr := base64.StdEncoding.EncodeToString(configData)
if err = globalSiteReplicationSys.BucketMetaHook(ctx, madmin.SRBucketMeta{
logger.LogIf(ctx, globalSiteReplicationSys.BucketMetaHook(ctx, madmin.SRBucketMeta{
Type: madmin.SRBucketMetaTypeSSEConfig,
Bucket: bucket,
SSEConfig: &cfgStr,
UpdatedAt: updatedAt,
}); err != nil {
writeErrorResponse(ctx, w, toAPIError(ctx, err), r.URL)
return
}
}))
writeSuccessResponseHeadersOnly(w)
}
@@ -209,16 +201,14 @@ func (api objectAPIHandlers) DeleteBucketEncryptionHandler(w http.ResponseWriter
writeErrorResponse(ctx, w, toAPIError(ctx, err), r.URL)
return
}
// Call site replication hook.
//
if err = globalSiteReplicationSys.BucketMetaHook(ctx, madmin.SRBucketMeta{
logger.LogIf(ctx, globalSiteReplicationSys.BucketMetaHook(ctx, madmin.SRBucketMeta{
Type: madmin.SRBucketMetaTypeSSEConfig,
Bucket: bucket,
SSEConfig: nil,
UpdatedAt: updatedAt,
}); err != nil {
writeErrorResponse(ctx, w, toAPIError(ctx, err), r.URL)
return
}
}))
writeSuccessNoContent(w)
}
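Both handlers in this file switch from failing the whole request when `BucketMetaHook` errors to merely logging via `logger.LogIf`: the bucket metadata is already persisted locally, so the site-replication hook becomes best-effort. A minimal standard-library sketch of that pattern; `persistThenNotify` is a hypothetical distillation, not the actual handler code:

```go
// persistThenNotify: local persistence failures abort the request, hook
// failures are only logged because site replication reconciles metadata later.
func persistThenNotify(persist, hook func() error) error {
	if err := persist(); err != nil {
		return err // the client must see persistence failures
	}
	if err := hook(); err != nil {
		log.Printf("site replication hook failed, deferring to reconciliation: %v", err)
	}
	return nil
}
```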

View File

@@ -1,4 +1,4 @@
// Copyright (c) 2015-2021 MinIO, Inc.
// Copyright (c) 2015-2022 MinIO, Inc.
//
// This file is part of MinIO Object Storage stack
//
@@ -20,25 +20,34 @@ package cmd
import (
"bytes"
"context"
"crypto/md5"
"encoding/base64"
"encoding/hex"
"encoding/json"
"encoding/xml"
"errors"
"fmt"
"io"
"mime/multipart"
"net/http"
"net/textproto"
"net/url"
"path"
"runtime"
"sort"
"strconv"
"strings"
"sync"
"github.com/google/uuid"
"github.com/gorilla/mux"
"github.com/minio/mux"
"github.com/valyala/bytebufferpool"
"github.com/minio/madmin-go"
"github.com/minio/madmin-go/v3"
"github.com/minio/minio-go/v7"
"github.com/minio/minio-go/v7/pkg/set"
"github.com/minio/minio-go/v7/pkg/tags"
"github.com/minio/minio/internal/auth"
sse "github.com/minio/minio/internal/bucket/encryption"
objectlock "github.com/minio/minio/internal/bucket/object/lock"
"github.com/minio/minio/internal/bucket/replication"
@@ -48,17 +57,21 @@ import (
"github.com/minio/minio/internal/handlers"
"github.com/minio/minio/internal/hash"
xhttp "github.com/minio/minio/internal/http"
"github.com/minio/minio/internal/ioutil"
"github.com/minio/minio/internal/kms"
"github.com/minio/minio/internal/logger"
"github.com/minio/minio/internal/sync/errgroup"
"github.com/minio/pkg/bucket/policy"
iampolicy "github.com/minio/pkg/iam/policy"
"github.com/minio/pkg/sync/errgroup"
)
const (
objectLockConfig = "object-lock.xml"
bucketTaggingConfig = "tagging.xml"
bucketReplicationConfig = "replication.xml"
xMinIOErrCodeHeader = "x-minio-error-code"
xMinIOErrDescHeader = "x-minio-error-desc"
)
// Check if there are buckets on server without corresponding entry in etcd backend and
@@ -359,7 +372,7 @@ func (api objectAPIHandlers) ListBucketsHandler(w http.ResponseWriter, r *http.R
Groups: cred.Groups,
Action: iampolicy.ListBucketAction,
BucketName: bucketInfo.Name,
ConditionValues: getConditionValues(r, "", cred.AccessKey, cred.Claims),
ConditionValues: getConditionValues(r, "", cred),
IsOwner: owner,
ObjectName: "",
Claims: cred.Claims,
@@ -371,7 +384,7 @@ func (api objectAPIHandlers) ListBucketsHandler(w http.ResponseWriter, r *http.R
Groups: cred.Groups,
Action: iampolicy.GetBucketLocationAction,
BucketName: bucketInfo.Name,
ConditionValues: getConditionValues(r, "", cred.AccessKey, cred.Claims),
ConditionValues: getConditionValues(r, "", cred),
IsOwner: owner,
ObjectName: "",
Claims: cred.Claims,
@@ -428,10 +441,14 @@ func (api objectAPIHandlers) DeleteMultipleObjectsHandler(w http.ResponseWriter,
// The max. XML contains 100000 object names (each at most 1024 bytes long) + XML overhead
const maxBodySize = 2 * 100000 * 1024
if r.ContentLength > maxBodySize {
writeErrorResponse(ctx, w, errorCodes.ToAPIErr(ErrEntityTooLarge), r.URL)
return
}
// Unmarshal list of keys to be deleted.
deleteObjectsReq := &DeleteObjectsRequest{}
if err := xmlDecoder(r.Body, deleteObjectsReq, maxBodySize); err != nil {
logger.LogIf(ctx, err, logger.Application)
writeErrorResponse(ctx, w, toAPIError(ctx, err), r.URL)
return
}
@@ -495,7 +512,7 @@ func (api objectAPIHandlers) DeleteMultipleObjectsHandler(w http.ResponseWriter,
vc, _ := globalBucketVersioningSys.Get(bucket)
oss := make([]*objSweeper, len(deleteObjectsReq.Objects))
for index, object := range deleteObjectsReq.Objects {
if apiErrCode := checkRequestAuthType(ctx, r, policy.DeleteObjectAction, bucket, object.ObjectName); apiErrCode != ErrNone {
if apiErrCode := checkRequestAuthTypeWithVID(ctx, r, policy.DeleteObjectAction, bucket, object.ObjectName, object.VersionID); apiErrCode != ErrNone {
if apiErrCode == ErrSignatureDoesNotMatch || apiErrCode == ErrInvalidAccessKeyID {
writeErrorResponse(ctx, w, errorCodes.ToAPIErr(apiErrCode), r.URL)
return
@@ -511,11 +528,10 @@ func (api objectAPIHandlers) DeleteMultipleObjectsHandler(w http.ResponseWriter,
}
if object.VersionID != "" && object.VersionID != nullVersionID {
if _, err := uuid.Parse(object.VersionID); err != nil {
logger.LogIf(ctx, fmt.Errorf("invalid version-id specified %w", err))
apiErr := errorCodes.ToAPIErr(ErrNoSuchVersion)
deleteResults[index].errInfo = DeleteError{
Code: apiErr.Code,
Message: apiErr.Description,
Message: fmt.Sprintf("%s (%s)", apiErr.Description, err),
Key: object.ObjectName,
VersionID: object.VersionID,
}
@@ -541,6 +557,11 @@ func (api objectAPIHandlers) DeleteMultipleObjectsHandler(w http.ResponseWriter,
oss[index].SetTransitionState(goi.TransitionedObject)
}
// All deletes on directory objects need to be for `nullVersionID`
if isDirObject(object.ObjectName) && object.VersionID == "" {
object.VersionID = nullVersionID
}
if replicateDeletes {
dsc = checkReplicateDelete(ctx, bucket, ObjectToDelete{
ObjectV: ObjectV{
@@ -559,8 +580,8 @@ func (api objectAPIHandlers) DeleteMultipleObjectsHandler(w http.ResponseWriter,
}
}
if object.VersionID != "" && hasLockEnabled {
if apiErrCode := enforceRetentionBypassForDelete(ctx, r, bucket, object, goi, gerr); apiErrCode != ErrNone {
apiErr := errorCodes.ToAPIErr(apiErrCode)
if err := enforceRetentionBypassForDelete(ctx, r, bucket, object, goi, gerr); err != nil {
apiErr := toAPIError(ctx, err)
deleteResults[index].errInfo = DeleteError{
Code: apiErr.Code,
Message: apiErr.Description,
@@ -635,6 +656,11 @@ func (api objectAPIHandlers) DeleteMultipleObjectsHandler(w http.ResponseWriter,
if deleteResult.errInfo.Code != "" {
deleteErrors = append(deleteErrors, deleteResult.errInfo)
} else {
// All deletes on directory objects were performed with `nullVersionID`.
// Remove it from the response.
if isDirObject(deleteResult.delInfo.ObjectName) && deleteResult.delInfo.VersionID == nullVersionID {
deleteResult.delInfo.VersionID = ""
}
deletedObjects = append(deletedObjects, deleteResult.delInfo)
}
}
@@ -650,6 +676,11 @@ func (api objectAPIHandlers) DeleteMultipleObjectsHandler(w http.ResponseWriter,
}
if replicateDeletes && (dobj.DeleteMarkerReplicationStatus() == replication.Pending || dobj.VersionPurgeStatus() == Pending) {
// copy so we can re-add null ID.
dobj := dobj
if isDirObject(dobj.ObjectName) && dobj.VersionID == "" {
dobj.VersionID = nullVersionID
}
dv := DeletedObjectReplicationInfo{
DeletedObject: dobj,
Bucket: bucket,
@@ -744,7 +775,7 @@ func (api objectAPIHandlers) PutBucketHandler(w http.ResponseWriter, r *http.Req
AccountName: cred.AccessKey,
Groups: cred.Groups,
Action: action,
ConditionValues: getConditionValues(r, "", cred.AccessKey, cred.Claims),
ConditionValues: getConditionValues(r, "", cred),
BucketName: bucket,
IsOwner: owner,
Claims: cred.Claims,
@@ -756,29 +787,18 @@ func (api objectAPIHandlers) PutBucketHandler(w http.ResponseWriter, r *http.Req
}
// Parse incoming location constraint.
location, s3Error := parseLocationConstraint(r)
_, s3Error = parseLocationConstraint(r)
if s3Error != ErrNone {
writeErrorResponse(ctx, w, errorCodes.ToAPIErr(s3Error), r.URL)
return
}
// Validate if location sent by the client is valid, reject
// requests which do not follow valid region requirements.
if !isValidLocation(location) {
writeErrorResponse(ctx, w, errorCodes.ToAPIErr(ErrInvalidRegion), r.URL)
return
}
// check if client is attempting to create more buckets than allowed maximum.
// check if the client is attempting to create more buckets than recommended, and complain about it.
if currBuckets := globalBucketMetadataSys.Count(); currBuckets+1 > maxBuckets {
apiErr := errorCodes.ToAPIErr(ErrTooManyBuckets)
apiErr.Description = fmt.Sprintf("You have attempted to create %d buckets than allowed %d", currBuckets+1, maxBuckets)
writeErrorResponse(ctx, w, apiErr, r.URL)
return
logger.LogIf(ctx, fmt.Errorf("An attempt to create %d buckets beyond recommended %d", currBuckets+1, maxBuckets))
}
opts := MakeBucketOptions{
Location: location,
LockEnabled: objectLockEnabled,
ForceCreate: forceCreate,
}
@@ -790,15 +810,14 @@ func (api objectAPIHandlers) PutBucketHandler(w http.ResponseWriter, r *http.Req
// exists elsewhere
if err == dns.ErrNoEntriesFound || err == dns.ErrNotImplemented {
// Proceed to creating a bucket.
if err = objectAPI.MakeBucketWithLocation(ctx, bucket, opts); err != nil {
if err = objectAPI.MakeBucket(ctx, bucket, opts); err != nil {
writeErrorResponse(ctx, w, toAPIError(ctx, err), r.URL)
return
}
if err = globalDNSConfig.Put(bucket); err != nil {
objectAPI.DeleteBucket(context.Background(), bucket, DeleteBucketOptions{
Force: false,
NoRecreate: true,
Force: true,
SRDeleteOp: getSRBucketDeleteOp(globalSiteReplicationSys.isEnabled()),
})
writeErrorResponse(ctx, w, toAPIError(ctx, err), r.URL)
@@ -809,8 +828,7 @@ func (api objectAPIHandlers) PutBucketHandler(w http.ResponseWriter, r *http.Req
globalNotificationSys.LoadBucketMetadata(GlobalContext, bucket)
// Make sure to add Location information here only for bucket
w.Header().Set(xhttp.Location,
getObjectLocation(r, globalDomainNames, bucket, ""))
w.Header().Set(xhttp.Location, pathJoin(SlashSeparator, bucket))
writeSuccessResponseHeadersOnly(w)
@@ -842,7 +860,7 @@ func (api objectAPIHandlers) PutBucketHandler(w http.ResponseWriter, r *http.Req
}
// Proceed to creating a bucket.
if err := objectAPI.MakeBucketWithLocation(ctx, bucket, opts); err != nil {
if err := objectAPI.MakeBucket(ctx, bucket, opts); err != nil {
if _, ok := err.(BucketExists); ok {
// Though bucket exists locally, we send the site-replication
// hook to ensure all sites have this bucket. If the hook
@@ -858,12 +876,10 @@ func (api objectAPIHandlers) PutBucketHandler(w http.ResponseWriter, r *http.Req
globalNotificationSys.LoadBucketMetadata(GlobalContext, bucket)
// Call site replication hook
globalSiteReplicationSys.MakeBucketHook(ctx, bucket, opts)
logger.LogIf(ctx, globalSiteReplicationSys.MakeBucketHook(ctx, bucket, opts))
// Make sure to add Location information here only for bucket
if cp := pathClean(r.URL.Path); cp != "" {
w.Header().Set(xhttp.Location, cp) // Clean any trailing slashes.
}
w.Header().Set(xhttp.Location, pathJoin(SlashSeparator, bucket))
writeSuccessResponseHeadersOnly(w)
@@ -897,20 +913,7 @@ func (api objectAPIHandlers) PostPolicyBucketHandler(w http.ResponseWriter, r *h
return
}
if crypto.Requested(r.Header) && !objectAPI.IsEncryptionSupported() {
writeErrorResponse(ctx, w, errorCodes.ToAPIErr(ErrNotImplemented), r.URL)
return
}
bucket := mux.Vars(r)["bucket"]
// Require Content-Length to be set in the request
size := r.ContentLength
if size < 0 {
writeErrorResponse(ctx, w, errorCodes.ToAPIErr(ErrMissingContentLength), r.URL)
return
}
resource, err := getResource(r.URL.Path, r.Host, globalDomainNames)
if err != nil {
writeErrorResponse(ctx, w, errorCodes.ToAPIErr(ErrInvalidRequest), r.URL)
@@ -925,41 +928,152 @@ func (api objectAPIHandlers) PostPolicyBucketHandler(w http.ResponseWriter, r *h
// Here the parameter is the size of the form data that should
// be loaded in memory, the remaining being put in temporary files.
reader, err := r.MultipartReader()
mp, err := r.MultipartReader()
if err != nil {
logger.LogIf(ctx, err)
writeErrorResponse(ctx, w, errorCodes.ToAPIErr(ErrMalformedPOSTRequest), r.URL)
apiErr := errorCodes.ToAPIErr(ErrMalformedPOSTRequest)
apiErr.Description = fmt.Sprintf("%s (%v)", apiErr.Description, err)
writeErrorResponse(ctx, w, apiErr, r.URL)
return
}
// Read multipart data and save in memory and in the disk if needed
form, err := reader.ReadForm(maxFormMemory)
const mapEntryOverhead = 200
var (
reader io.Reader
fileSize int64 = -1
fileName string
fanOutEntries = make([]minio.PutObjectFanOutEntry, 0, 100)
)
maxParts := 1000
// Canonicalize the form values into http.Header.
formValues := make(http.Header)
for {
part, err := mp.NextRawPart()
if errors.Is(err, io.EOF) {
break
}
if err != nil {
apiErr := errorCodes.ToAPIErr(ErrMalformedPOSTRequest)
apiErr.Description = fmt.Sprintf("%s (%v)", apiErr.Description, err)
writeErrorResponse(ctx, w, apiErr, r.URL)
return
}
if maxParts <= 0 {
apiErr := errorCodes.ToAPIErr(ErrMalformedPOSTRequest)
apiErr.Description = fmt.Sprintf("%s (%v)", apiErr.Description, multipart.ErrMessageTooLarge)
writeErrorResponse(ctx, w, apiErr, r.URL)
return
}
maxParts--
name := part.FormName()
if name == "" {
continue
}
fileName = part.FileName()
// Multiple values for the same key (one map entry, longer slice) are cheaper
// than the same number of values for different keys (many map entries), but
// using a consistent per-value cost for overhead is simpler.
maxMemoryBytes := 2 * int64(10<<20)
maxMemoryBytes -= int64(len(name))
maxMemoryBytes -= mapEntryOverhead
if maxMemoryBytes < 0 {
// We can't actually take this path, since nextPart would already have
// rejected the MIME headers for being too large. Check anyway.
apiErr := errorCodes.ToAPIErr(ErrMalformedPOSTRequest)
apiErr.Description = fmt.Sprintf("%s (%v)", apiErr.Description, multipart.ErrMessageTooLarge)
writeErrorResponse(ctx, w, apiErr, r.URL)
return
}
var b bytes.Buffer
if fileName == "" {
if http.CanonicalHeaderKey(name) == http.CanonicalHeaderKey("x-minio-fanout-list") {
dec := json.NewDecoder(part)
// while the array contains values
for dec.More() {
var m minio.PutObjectFanOutEntry
if err := dec.Decode(&m); err != nil {
part.Close()
apiErr := errorCodes.ToAPIErr(ErrMalformedPOSTRequest)
apiErr.Description = fmt.Sprintf("%s (%v)", apiErr.Description, multipart.ErrMessageTooLarge)
writeErrorResponse(ctx, w, apiErr, r.URL)
return
}
fanOutEntries = append(fanOutEntries, m)
}
part.Close()
continue
}
// value, store as string in memory
n, err := io.CopyN(&b, part, maxMemoryBytes+1)
part.Close()
if err != nil && err != io.EOF {
apiErr := errorCodes.ToAPIErr(ErrMalformedPOSTRequest)
apiErr.Description = fmt.Sprintf("%s (%v)", apiErr.Description, err)
writeErrorResponse(ctx, w, apiErr, r.URL)
return
}
maxMemoryBytes -= n
if maxMemoryBytes < 0 {
apiErr := errorCodes.ToAPIErr(ErrMalformedPOSTRequest)
apiErr.Description = fmt.Sprintf("%s (%v)", apiErr.Description, multipart.ErrMessageTooLarge)
writeErrorResponse(ctx, w, apiErr, r.URL)
return
}
if n > maxFormFieldSize {
apiErr := errorCodes.ToAPIErr(ErrMalformedPOSTRequest)
apiErr.Description = fmt.Sprintf("%s (%v)", apiErr.Description, multipart.ErrMessageTooLarge)
writeErrorResponse(ctx, w, apiErr, r.URL)
return
}
formValues[http.CanonicalHeaderKey(name)] = append(formValues[http.CanonicalHeaderKey(name)], b.String())
continue
}
// In accordance with https://docs.aws.amazon.com/AmazonS3/latest/API/RESTObjectPOST.html
// The file or text content.
// The file or text content must be the last field in the form.
// You cannot upload more than one file at a time.
reader = part
// we have found the File part of the request we are done processing multipart-form
break
}
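The loop above replaces the old `ReadForm`-based parsing: fields are consumed part-by-part under explicit memory budgets, and the file part is handed off unread so it can be streamed. A condensed, standard-library-only sketch of the same shape (the MinIO version additionally budgets per-entry map overhead and handles the fan-out field):

```go
func parsePostForm(r *http.Request, maxField int64) (url.Values, io.Reader, error) {
	mr, err := r.MultipartReader()
	if err != nil {
		return nil, nil, err
	}
	fields := make(url.Values)
	for {
		part, err := mr.NextPart()
		if errors.Is(err, io.EOF) {
			return fields, nil, nil // no file part present
		}
		if err != nil {
			return nil, nil, err
		}
		if part.FileName() != "" {
			// Per the S3 POST contract, the file must be the last field;
			// hand it back unread so the caller can stream it.
			return fields, part, nil
		}
		var buf bytes.Buffer
		n, cerr := io.CopyN(&buf, part, maxField+1) // read at most maxField+1 bytes
		part.Close()
		if cerr != nil && cerr != io.EOF {
			return nil, nil, cerr
		}
		if n > maxField {
			return nil, nil, multipart.ErrMessageTooLarge
		}
		fields.Add(part.FormName(), buf.String())
	}
}
```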
if _, ok := formValues["Key"]; !ok {
apiErr := errorCodes.ToAPIErr(ErrMalformedPOSTRequest)
apiErr.Description = fmt.Sprintf("%s (%v)", apiErr.Description, errors.New("The name of the uploaded key is missing"))
writeErrorResponse(ctx, w, apiErr, r.URL)
return
}
if fileName == "" {
apiErr := errorCodes.ToAPIErr(ErrMalformedPOSTRequest)
apiErr.Description = fmt.Sprintf("%s (%v)", apiErr.Description, errors.New("The file or text content is missing"))
writeErrorResponse(ctx, w, apiErr, r.URL)
return
}
checksum, err := hash.GetContentChecksum(formValues)
if err != nil {
logger.LogIf(ctx, err, logger.Application)
writeErrorResponse(ctx, w, errorCodes.ToAPIErr(ErrMalformedPOSTRequest), r.URL)
apiErr := errorCodes.ToAPIErr(ErrMalformedPOSTRequest)
apiErr.Description = fmt.Sprintf("%s (%v)", apiErr.Description, fmt.Errorf("Invalid checksum: %w", err))
writeErrorResponse(ctx, w, apiErr, r.URL)
return
}
// Remove all tmp files created during multipart upload
defer form.RemoveAll()
// Extract all form fields
fileBody, fileName, fileSize, formValues, err := extractPostPolicyFormValues(ctx, form)
if err != nil {
logger.LogIf(ctx, err, logger.Application)
writeErrorResponse(ctx, w, errorCodes.ToAPIErr(ErrMalformedPOSTRequest), r.URL)
if checksum != nil && checksum.Type.Trailing() {
// Not officially supported in POST requests.
apiErr := errorCodes.ToAPIErr(ErrMalformedPOSTRequest)
apiErr.Description = fmt.Sprintf("%s (%v)", apiErr.Description, errors.New("Trailing checksums not available for POST operations"))
writeErrorResponse(ctx, w, apiErr, r.URL)
return
}
// Check if file is provided, error out otherwise.
if fileBody == nil {
writeErrorResponse(ctx, w, errorCodes.ToAPIErr(ErrPOSTFileRequired), r.URL)
return
}
// Close multipart file
defer fileBody.Close()
formValues.Set("Bucket", bucket)
if fileName != "" && strings.Contains(formValues.Get("Key"), "${filename}") {
// S3 feature to replace ${filename} found in Key form field
@@ -986,20 +1100,38 @@ func (api objectAPIHandlers) PostPolicyBucketHandler(w http.ResponseWriter, r *h
return
}
// Once the signature is validated, check if the user has
// explicit permission to perform this action.
if !globalIAMSys.IsAllowed(iampolicy.Args{
AccountName: cred.AccessKey,
Groups: cred.Groups,
Action: iampolicy.PutObjectAction,
ConditionValues: getConditionValues(r, "", cred.AccessKey, cred.Claims),
BucketName: bucket,
ObjectName: object,
IsOwner: globalActiveCred.AccessKey == cred.AccessKey,
Claims: cred.Claims,
}) {
writeErrorResponse(ctx, w, errorCodes.ToAPIErr(ErrAccessDenied), r.URL)
return
if len(fanOutEntries) > 0 {
// Once the signature is validated, check if the user has
// explicit permission to perform this action.
if !globalIAMSys.IsAllowed(iampolicy.Args{
AccountName: cred.AccessKey,
Groups: cred.Groups,
Action: iampolicy.PutObjectFanOutAction,
ConditionValues: getConditionValues(r, "", cred),
BucketName: bucket,
ObjectName: object,
IsOwner: globalActiveCred.AccessKey == cred.AccessKey,
Claims: cred.Claims,
}) {
writeErrorResponse(ctx, w, errorCodes.ToAPIErr(ErrAccessDenied), r.URL)
return
}
} else {
// Once the signature is validated, check if the user has
// explicit permission to perform this action.
if !globalIAMSys.IsAllowed(iampolicy.Args{
AccountName: cred.AccessKey,
Groups: cred.Groups,
Action: iampolicy.PutObjectAction,
ConditionValues: getConditionValues(r, "", cred),
BucketName: bucket,
ObjectName: object,
IsOwner: globalActiveCred.AccessKey == cred.AccessKey,
Claims: cred.Claims,
}) {
writeErrorResponse(ctx, w, errorCodes.ToAPIErr(ErrAccessDenied), r.URL)
return
}
}
policyBytes, err := base64.StdEncoding.DecodeString(formValues.Get("Policy"))
@@ -1008,6 +1140,19 @@ func (api objectAPIHandlers) PostPolicyBucketHandler(w http.ResponseWriter, r *h
return
}
hashReader, err := hash.NewReader(reader, fileSize, "", "", fileSize)
if err != nil {
logger.LogIf(ctx, err)
writeErrorResponse(ctx, w, toAPIError(ctx, err), r.URL)
return
}
if checksum != nil && checksum.Valid() {
if err = hashReader.AddChecksumNoTrailer(formValues, false); err != nil {
writeErrorResponse(ctx, w, toAPIError(ctx, err), r.URL)
return
}
}
// Handle policy if it is set.
if len(policyBytes) > 0 {
postPolicyForm, err := parsePostPolicyForm(bytes.NewReader(policyBytes))
@@ -1028,15 +1173,8 @@ func (api objectAPIHandlers) PostPolicyBucketHandler(w http.ResponseWriter, r *h
// should not exceed the maximum single Put size (5 GiB)
lengthRange := postPolicyForm.Conditions.ContentLengthRange
if lengthRange.Valid {
if fileSize < lengthRange.Min {
writeErrorResponse(ctx, w, toAPIError(ctx, errDataTooSmall), r.URL)
return
}
if fileSize > lengthRange.Max || isMaxObjectSize(fileSize) {
writeErrorResponse(ctx, w, toAPIError(ctx, errDataTooLarge), r.URL)
return
}
hashReader.SetExpectedMin(lengthRange.Min)
hashReader.SetExpectedMax(lengthRange.Max)
}
}
@@ -1048,19 +1186,13 @@ func (api objectAPIHandlers) PostPolicyBucketHandler(w http.ResponseWriter, r *h
return
}
hashReader, err := hash.NewReader(fileBody, fileSize, "", "", fileSize)
if err != nil {
logger.LogIf(ctx, err)
writeErrorResponse(ctx, w, toAPIError(ctx, err), r.URL)
return
}
rawReader := hashReader
pReader := NewPutObjReader(rawReader)
var objectEncryptionKey crypto.ObjectKey
// Check if bucket encryption is enabled
sseConfig, _ := globalBucketSSEConfigSys.Get(bucket)
sseConfig.Apply(r.Header, sse.ApplyOptions{
sseConfig.Apply(formValues, sse.ApplyOptions{
AutoEncrypt: globalAutoEncryption,
})
@@ -1070,53 +1202,199 @@ func (api objectAPIHandlers) PostPolicyBucketHandler(w http.ResponseWriter, r *h
writeErrorResponseHeadersOnly(w, toAPIError(ctx, err))
return
}
if objectAPI.IsEncryptionSupported() {
if crypto.Requested(formValues) && !HasSuffix(object, SlashSeparator) { // handle SSE requests
if crypto.SSECopy.IsRequested(r.Header) {
writeErrorResponse(ctx, w, toAPIError(ctx, errInvalidEncryptionParameters), r.URL)
fanOutOpts := fanOutOptions{Checksum: checksum}
if crypto.Requested(formValues) {
if crypto.SSECopy.IsRequested(r.Header) {
writeErrorResponse(ctx, w, toAPIError(ctx, errInvalidEncryptionParameters), r.URL)
return
}
if crypto.SSEC.IsRequested(r.Header) && crypto.S3.IsRequested(r.Header) {
writeErrorResponse(ctx, w, toAPIError(ctx, crypto.ErrIncompatibleEncryptionMethod), r.URL)
return
}
if crypto.SSEC.IsRequested(r.Header) && crypto.S3KMS.IsRequested(r.Header) {
writeErrorResponse(ctx, w, toAPIError(ctx, crypto.ErrIncompatibleEncryptionMethod), r.URL)
return
}
if crypto.SSEC.IsRequested(r.Header) && isReplicationEnabled(ctx, bucket) {
writeErrorResponse(ctx, w, toAPIError(ctx, errInvalidEncryptionParametersSSEC), r.URL)
return
}
var (
reader io.Reader
keyID string
key []byte
kmsCtx kms.Context
)
kind, _ := crypto.IsRequested(formValues)
switch kind {
case crypto.SSEC:
key, err = ParseSSECustomerHeader(formValues)
if err != nil {
writeErrorResponse(ctx, w, toAPIError(ctx, err), r.URL)
return
}
var (
reader io.Reader
keyID string
key []byte
kmsCtx kms.Context
)
kind, _ := crypto.IsRequested(formValues)
switch kind {
case crypto.SSEC:
key, err = ParseSSECustomerHeader(formValues)
if err != nil {
writeErrorResponse(ctx, w, toAPIError(ctx, err), r.URL)
return
}
case crypto.S3KMS:
keyID, kmsCtx, err = crypto.S3KMS.ParseHTTP(formValues)
if err != nil {
writeErrorResponse(ctx, w, toAPIError(ctx, err), r.URL)
return
}
case crypto.S3KMS:
keyID, kmsCtx, err = crypto.S3KMS.ParseHTTP(formValues)
if err != nil {
writeErrorResponse(ctx, w, toAPIError(ctx, err), r.URL)
return
}
}
if len(fanOutEntries) == 0 {
reader, objectEncryptionKey, err = newEncryptReader(ctx, hashReader, kind, keyID, key, bucket, object, metadata, kmsCtx)
if err != nil {
writeErrorResponse(ctx, w, toAPIError(ctx, err), r.URL)
return
}
info := ObjectInfo{Size: fileSize}
// do not try to verify encrypted content
hashReader, err = hash.NewReader(reader, info.EncryptedSize(), "", "", fileSize)
// do not try to verify encrypted content
hashReader, err = hash.NewReader(reader, -1, "", "", -1)
if err != nil {
writeErrorResponse(ctx, w, toAPIError(ctx, err), r.URL)
return
}
if checksum != nil && checksum.Valid() {
if err = hashReader.AddChecksumNoTrailer(formValues, true); err != nil {
writeErrorResponse(ctx, w, toAPIError(ctx, err), r.URL)
return
}
}
pReader, err = pReader.WithEncryption(hashReader, &objectEncryptionKey)
if err != nil {
writeErrorResponse(ctx, w, toAPIError(ctx, err), r.URL)
return
}
} else {
fanOutOpts = fanOutOptions{
Key: key,
Kind: kind,
KeyID: keyID,
KmsCtx: kmsCtx,
Checksum: checksum,
}
}
}
if len(fanOutEntries) > 0 {
// Fan-out must carry the payload from the original source to every
// destination, so the incoming stream is buffered in memory: we cannot
// re-read what we already wrote to disk, since that would amount to
// "copying" from a "copy" instead of from the source. Buffering also
// keeps the stream seekable, which lets fan-out calls run concurrently.
buf := bytebufferpool.Get()
defer bytebufferpool.Put(buf)
md5w := md5.New()
// Maximum allowed fan-out object size.
const maxFanOutSize = 16 << 20
n, err := io.Copy(io.MultiWriter(buf, md5w), ioutil.HardLimitReader(pReader, maxFanOutSize))
if err != nil {
writeErrorResponse(ctx, w, toAPIError(ctx, err), r.URL)
return
}
// Set the correct hex md5sum for the fan-out stream.
fanOutOpts.MD5Hex = hex.EncodeToString(md5w.Sum(nil))
concurrentSize := 100
if runtime.GOMAXPROCS(0) < concurrentSize {
concurrentSize = runtime.GOMAXPROCS(0)
}
fanOutResp := make([]minio.PutObjectFanOutResponse, 0, len(fanOutEntries))
eventArgsList := make([]eventArgs, 0, len(fanOutEntries))
for {
var objInfos []ObjectInfo
var errs []error
var done bool
if len(fanOutEntries) < concurrentSize {
objInfos, errs = fanOutPutObject(ctx, bucket, objectAPI, fanOutEntries, buf.Bytes()[:n], fanOutOpts)
done = true
} else {
objInfos, errs = fanOutPutObject(ctx, bucket, objectAPI, fanOutEntries[:concurrentSize], buf.Bytes()[:n], fanOutOpts)
fanOutEntries = fanOutEntries[concurrentSize:]
}
for i, objInfo := range objInfos {
if errs[i] != nil {
fanOutResp = append(fanOutResp, minio.PutObjectFanOutResponse{
Key: objInfo.Name,
Error: errs[i].Error(),
})
eventArgsList = append(eventArgsList, eventArgs{
EventName: event.ObjectCreatedPost,
BucketName: objInfo.Bucket,
Object: ObjectInfo{Name: objInfo.Name},
ReqParams: extractReqParams(r),
RespElements: extractRespElements(w),
UserAgent: fmt.Sprintf("%s MinIO-Fan-Out (failed: %v)", r.UserAgent(), errs[i]),
Host: handlers.GetSourceIP(r),
})
continue
}
fanOutResp = append(fanOutResp, minio.PutObjectFanOutResponse{
Key: objInfo.Name,
ETag: getDecryptedETag(formValues, objInfo, false),
VersionID: objInfo.VersionID,
LastModified: &objInfo.ModTime,
})
eventArgsList = append(eventArgsList, eventArgs{
EventName: event.ObjectCreatedPost,
BucketName: objInfo.Bucket,
Object: objInfo,
ReqParams: extractReqParams(r),
RespElements: extractRespElements(w),
UserAgent: r.UserAgent() + " " + "MinIO-Fan-Out",
Host: handlers.GetSourceIP(r),
})
}
if done {
break
}
}
enc := json.NewEncoder(w)
for i, fanOutResp := range fanOutResp {
if err = enc.Encode(&fanOutResp); err != nil {
writeErrorResponse(ctx, w, toAPIError(ctx, err), r.URL)
return
}
// Notify object created events.
sendEvent(eventArgsList[i])
if eventArgsList[i].Object.NumVersions > dataScannerExcessiveVersionsThreshold {
// Send events for excessive versions.
sendEvent(eventArgs{
EventName: event.ObjectManyVersions,
BucketName: eventArgsList[i].Object.Bucket,
Object: eventArgsList[i].Object,
ReqParams: extractReqParams(r),
RespElements: extractRespElements(w),
UserAgent: r.UserAgent() + " " + "MinIO-Fan-Out",
Host: handlers.GetSourceIP(r),
})
}
}
return
}
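The fan-out loop above caps concurrency by slicing `fanOutEntries` into batches of at most `GOMAXPROCS` entries and waiting for each batch before starting the next. The same batching shape written generically (a sketch only, not the actual `fanOutPutObject` signature):

```go
// fanOut processes entries in groups of at most `width` concurrent calls
// (width must be > 0), collecting per-entry errors positionally.
func fanOut[T any](entries []T, width int, put func(T) error) []error {
	errs := make([]error, len(entries))
	var wg sync.WaitGroup
	for start := 0; start < len(entries); start += width {
		end := start + width
		if end > len(entries) {
			end = len(entries)
		}
		for i := start; i < end; i++ {
			wg.Add(1)
			go func(i int) {
				defer wg.Done()
				errs[i] = put(entries[i])
			}(i)
		}
		wg.Wait() // complete one batch before starting the next
	}
	return errs
}
```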
objInfo, err := objectAPI.PutObject(ctx, bucket, object, pReader, opts)
if err != nil {
writeErrorResponse(ctx, w, toAPIError(ctx, err), r.URL)
@@ -1129,11 +1407,13 @@ func (api objectAPIHandlers) PostPolicyBucketHandler(w http.ResponseWriter, r *h
w.Header()[xhttp.ETag] = []string{`"` + objInfo.ETag + `"`}
// Set the relevant version ID as part of the response header.
if objInfo.VersionID != "" {
if objInfo.VersionID != "" && objInfo.VersionID != nullVersionID {
w.Header()[xhttp.AmzVersionID] = []string{objInfo.VersionID}
}
w.Header().Set(xhttp.Location, getObjectLocation(r, globalDomainNames, bucket, object))
if obj := getObjectLocation(r, globalDomainNames, bucket, object); obj != "" {
w.Header().Set(xhttp.Location, obj)
}
// Notify object created event.
defer sendEvent(eventArgs{
@@ -1146,6 +1426,18 @@ func (api objectAPIHandlers) PostPolicyBucketHandler(w http.ResponseWriter, r *h
Host: handlers.GetSourceIP(r),
})
if objInfo.NumVersions > dataScannerExcessiveVersionsThreshold {
defer sendEvent(eventArgs{
EventName: event.ObjectManyVersions,
BucketName: objInfo.Bucket,
Object: objInfo,
ReqParams: extractReqParams(r),
RespElements: extractRespElements(w),
UserAgent: r.UserAgent(),
Host: handlers.GetSourceIP(r),
})
}
if redirectURL != nil { // success_action_redirect is valid and set.
v := redirectURL.Query()
v.Add("bucket", objInfo.Bucket)
@@ -1156,6 +1448,11 @@ func (api objectAPIHandlers) PostPolicyBucketHandler(w http.ResponseWriter, r *h
return
}
// Add checksum header.
if checksum != nil && checksum.Valid() {
hash.AddChecksumHeader(w, checksum.AsMap())
}
// Decide what http response to send depending on success_action_status parameter
switch successStatus {
case "201":
@@ -1204,7 +1501,7 @@ func (api objectAPIHandlers) GetBucketPolicyStatusHandler(w http.ResponseWriter,
readable := globalPolicySys.IsAllowed(policy.Args{
Action: policy.ListBucketAction,
BucketName: bucket,
ConditionValues: getConditionValues(r, "", "", nil),
ConditionValues: getConditionValues(r, "", auth.AnonymousCredentials),
IsOwner: false,
})
@@ -1212,7 +1509,7 @@ func (api objectAPIHandlers) GetBucketPolicyStatusHandler(w http.ResponseWriter,
writable := globalPolicySys.IsAllowed(policy.Args{
Action: policy.PutObjectAction,
BucketName: bucket,
ConditionValues: getConditionValues(r, "", "", nil),
ConditionValues: getConditionValues(r, "", auth.AnonymousCredentials),
IsOwner: false,
})
@@ -1324,14 +1621,6 @@ func (api objectAPIHandlers) DeleteBucketHandler(w http.ResponseWriter, r *http.
}
}
if globalDNSConfig != nil {
if err := globalDNSConfig.Delete(bucket); err != nil {
logger.LogIf(ctx, fmt.Errorf("Unable to delete bucket DNS entry %w, please delete it manually", err))
writeErrorResponse(ctx, w, toAPIError(ctx, err), r.URL)
return
}
}
deleteBucket := objectAPI.DeleteBucket
// Attempt to delete bucket.
@@ -1345,22 +1634,23 @@ func (api objectAPIHandlers) DeleteBucketHandler(w http.ResponseWriter, r *http.
apiErr.Description = "The bucket you tried to delete is not empty. You must delete all versions in the bucket."
}
}
if globalDNSConfig != nil {
if err2 := globalDNSConfig.Put(bucket); err2 != nil {
logger.LogIf(ctx, fmt.Errorf("Unable to restore bucket DNS entry %w, please fix it manually", err2))
}
}
writeErrorResponse(ctx, w, apiErr, r.URL)
return
}
if globalDNSConfig != nil {
if err := globalDNSConfig.Delete(bucket); err != nil {
logger.LogIf(ctx, fmt.Errorf("Unable to delete bucket DNS entry %w, please delete it manually, bucket on MinIO no longer exists", err))
writeErrorResponse(ctx, w, toAPIError(ctx, err), r.URL)
return
}
}
globalNotificationSys.DeleteBucketMetadata(ctx, bucket)
globalReplicationPool.deleteResyncMetadata(ctx, bucket)
// Call site replication hook.
if err := globalSiteReplicationSys.DeleteBucketHook(ctx, bucket, forceDelete); err != nil {
writeErrorResponse(ctx, w, toAPIError(ctx, err), r.URL)
return
}
logger.LogIf(ctx, globalSiteReplicationSys.DeleteBucketHook(ctx, bucket, forceDelete))
// Write success response.
writeSuccessNoContent(w)
@@ -1400,7 +1690,7 @@ func (api objectAPIHandlers) PutBucketObjectLockConfigHandler(w http.ResponseWri
config, err := objectlock.ParseObjectLockConfig(r.Body)
if err != nil {
apiErr := errorCodes.ToAPIErr(ErrMalformedXML)
apiErr := errorCodes.ToAPIErr(ErrInvalidArgument)
apiErr.Description = err.Error()
writeErrorResponse(ctx, w, apiErr, r.URL)
return
@@ -1414,7 +1704,11 @@ func (api objectAPIHandlers) PutBucketObjectLockConfigHandler(w http.ResponseWri
// Deny object locking configuration settings on existing buckets without object lock enabled.
if _, _, err = globalBucketMetadataSys.GetObjectLockConfig(bucket); err != nil {
writeErrorResponse(ctx, w, toAPIError(ctx, err), r.URL)
if _, ok := err.(BucketObjectLockConfigNotFound); ok {
writeErrorResponse(ctx, w, errorCodes.ToAPIErr(ErrObjectLockConfigurationNotAllowed), r.URL)
} else {
writeErrorResponse(ctx, w, toAPIError(ctx, err), r.URL)
}
return
}
@@ -1429,15 +1723,12 @@ func (api objectAPIHandlers) PutBucketObjectLockConfigHandler(w http.ResponseWri
// We encode the xml bytes as base64 to ensure there are no encoding
// errors.
cfgStr := base64.StdEncoding.EncodeToString(configData)
if err = globalSiteReplicationSys.BucketMetaHook(ctx, madmin.SRBucketMeta{
logger.LogIf(ctx, globalSiteReplicationSys.BucketMetaHook(ctx, madmin.SRBucketMeta{
Type: madmin.SRBucketMetaTypeObjectLockConfig,
Bucket: bucket,
ObjectLockConfig: &cfgStr,
UpdatedAt: updatedAt,
}); err != nil {
writeErrorResponse(ctx, w, toAPIError(ctx, err), r.URL)
return
}
}))
// Write success response.
writeSuccessResponseHeadersOnly(w)
@@ -1536,15 +1827,12 @@ func (api objectAPIHandlers) PutBucketTaggingHandler(w http.ResponseWriter, r *h
// We encode the xml bytes as base64 to ensure there are no encoding
// errors.
cfgStr := base64.StdEncoding.EncodeToString(configData)
if err = globalSiteReplicationSys.BucketMetaHook(ctx, madmin.SRBucketMeta{
logger.LogIf(ctx, globalSiteReplicationSys.BucketMetaHook(ctx, madmin.SRBucketMeta{
Type: madmin.SRBucketMetaTypeTags,
Bucket: bucket,
Tags: &cfgStr,
UpdatedAt: updatedAt,
}); err != nil {
writeErrorResponse(ctx, w, toAPIError(ctx, err), r.URL)
return
}
}))
// Write success response.
writeSuccessResponseHeadersOnly(w)
@@ -1615,15 +1903,12 @@ func (api objectAPIHandlers) DeleteBucketTaggingHandler(w http.ResponseWriter, r
return
}
if err := globalSiteReplicationSys.BucketMetaHook(ctx, madmin.SRBucketMeta{
logger.LogIf(ctx, globalSiteReplicationSys.BucketMetaHook(ctx, madmin.SRBucketMeta{
Type: madmin.SRBucketMetaTypeTags,
Bucket: bucket,
UpdatedAt: updatedAt,
}); err != nil {
writeErrorResponse(ctx, w, toAPIError(ctx, err), r.URL)
return
}
}))
// Write success response.
writeSuccessResponseHeadersOnly(w)
writeSuccessNoContent(w)
}

View File

@@ -657,11 +657,15 @@ func testAPIDeleteMultipleObjectsHandler(obj ObjectLayer, instanceType, bucketNa
) {
var err error
contentBytes := []byte("hello")
sha256sum := ""
var objectNames []string
for i := 0; i < 10; i++ {
contentBytes := []byte("hello")
objectName := "test-object-" + strconv.Itoa(i)
if i == 0 {
objectName += "/"
contentBytes = []byte{}
}
// uploading the object.
_, err = obj.PutObject(GlobalContext, bucketName, objectName, mustGetPutObjReader(t, bytes.NewReader(contentBytes), int64(len(contentBytes)), "", sha256sum), ObjectOptions{})
// if object upload fails stop the test.
@@ -673,6 +677,7 @@ func testAPIDeleteMultipleObjectsHandler(obj ObjectLayer, instanceType, bucketNa
objectNames = append(objectNames, objectName)
}
contentBytes := []byte("hello")
for _, name := range []string{"private/object", "public/object"} {
// Uploading the object with retention enabled
_, err = obj.PutObject(GlobalContext, bucketName, name, mustGetPutObjReader(t, bytes.NewReader(contentBytes), int64(len(contentBytes)), "", sha256sum), ObjectOptions{})
@@ -742,8 +747,13 @@ func testAPIDeleteMultipleObjectsHandler(obj ObjectLayer, instanceType, bucketNa
deletedObjects := make([]DeletedObject, len(requestList[0].Objects))
for i := range requestList[0].Objects {
var vid string
if isDirObject(requestList[0].Objects[i].ObjectName) {
vid = ""
}
deletedObjects[i] = DeletedObject{
ObjectName: requestList[0].Objects[i].ObjectName,
VersionID: vid,
}
}
@@ -753,9 +763,14 @@ func testAPIDeleteMultipleObjectsHandler(obj ObjectLayer, instanceType, bucketNa
successRequest1 := encodeResponse(requestList[1])
deletedObjects = make([]DeletedObject, len(requestList[1].Objects))
for i := range requestList[0].Objects {
for i := range requestList[1].Objects {
var vid string
if isDirObject(requestList[0].Objects[i].ObjectName) {
vid = ""
}
deletedObjects[i] = DeletedObject{
ObjectName: requestList[1].Objects[i].ObjectName,
VersionID: vid,
}
}
@@ -793,9 +808,9 @@ func testAPIDeleteMultipleObjectsHandler(obj ObjectLayer, instanceType, bucketNa
expectedContent []byte
expectedRespStatus int
}{
// Test case - 1.
// Test case - 0.
// Delete objects with invalid access key.
{
0: {
bucket: bucketName,
objects: successRequest0,
accessKey: "Invalid-AccessID",
@@ -803,9 +818,19 @@ func testAPIDeleteMultipleObjectsHandler(obj ObjectLayer, instanceType, bucketNa
expectedContent: nil,
expectedRespStatus: http.StatusForbidden,
},
// Test case - 2.
// Test case - 1.
// Delete valid objects with quiet flag off.
{
1: {
bucket: bucketName,
objects: successRequest0,
accessKey: credentials.AccessKey,
secretKey: credentials.SecretKey,
expectedContent: encodedSuccessResponse0,
expectedRespStatus: http.StatusOK,
},
// Test case - 2.
// Delete deleted objects with quiet flag off.
2: {
bucket: bucketName,
objects: successRequest0,
accessKey: credentials.AccessKey,
@@ -815,7 +840,7 @@ func testAPIDeleteMultipleObjectsHandler(obj ObjectLayer, instanceType, bucketNa
},
// Test case - 3.
// Delete valid objects with quiet flag on.
{
3: {
bucket: bucketName,
objects: successRequest1,
accessKey: credentials.AccessKey,
@@ -825,7 +850,7 @@ func testAPIDeleteMultipleObjectsHandler(obj ObjectLayer, instanceType, bucketNa
},
// Test case - 4.
// Delete previously deleted objects.
{
4: {
bucket: bucketName,
objects: successRequest1,
accessKey: credentials.AccessKey,
@@ -836,7 +861,7 @@ func testAPIDeleteMultipleObjectsHandler(obj ObjectLayer, instanceType, bucketNa
// Test case - 5.
// Anonymous user access denied response
// Currently anonymous users cannot delete multiple objects in MinIO server
{
5: {
bucket: bucketName,
objects: anonRequest,
accessKey: "",
@@ -847,7 +872,7 @@ func testAPIDeleteMultipleObjectsHandler(obj ObjectLayer, instanceType, bucketNa
// Test case - 6.
// Anonymous user has access to some public folder, issue removing with
// another private object as well
{
6: {
bucket: bucketName,
objects: anonRequestWithPartialPublicAccess,
accessKey: "",
@@ -881,19 +906,19 @@ func testAPIDeleteMultipleObjectsHandler(obj ObjectLayer, instanceType, bucketNa
apiRouter.ServeHTTP(rec, req)
// Assert the response code with the expected status.
if rec.Code != testCase.expectedRespStatus {
t.Errorf("Test %d: MinIO %s: Expected the response status to be `%d`, but instead found `%d`", i+1, instanceType, testCase.expectedRespStatus, rec.Code)
t.Errorf("Test %d: MinIO %s: Expected the response status to be `%d`, but instead found `%d`", i, instanceType, testCase.expectedRespStatus, rec.Code)
}
// read the response body.
actualContent, err = io.ReadAll(rec.Body)
if err != nil {
t.Fatalf("Test %d : MinIO %s: Failed parsing response body: <ERROR> %v", i+1, instanceType, err)
t.Fatalf("Test %d : MinIO %s: Failed parsing response body: <ERROR> %v", i, instanceType, err)
}
// Verify whether the object obtained from the bucket is the same as the one created.
if testCase.expectedContent != nil && !bytes.Equal(testCase.expectedContent, actualContent) {
t.Log(string(testCase.expectedContent), string(actualContent))
t.Errorf("Test %d : MinIO %s: Object content differs from expected value.", i+1, instanceType)
t.Errorf("Test %d : MinIO %s: Object content differs from expected value.", i, instanceType)
}
}

View File

@@ -0,0 +1,88 @@
// Copyright (c) 2023 MinIO, Inc.
//
// This file is part of MinIO Object Storage stack
//
// This program is free software: you can redistribute it and/or modify
// it under the terms of the GNU Affero General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
//
// This program is distributed in the hope that it will be useful
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU Affero General Public License for more details.
//
// You should have received a copy of the GNU Affero General Public License
// along with this program. If not, see <http://www.gnu.org/licenses/>.
package cmd
import "github.com/minio/minio/internal/bucket/lifecycle"
//go:generate stringer -type lcEventSrc -trimprefix lcEventSrc_ $GOFILE
type lcEventSrc uint8
//revive:disable:var-naming Underscores are used here to indicate where the common prefix ends and the enumeration name begins
const (
lcEventSrc_None lcEventSrc = iota
lcEventSrc_Scanner
lcEventSrc_Decom
lcEventSrc_Rebal
lcEventSrc_s3HeadObject
lcEventSrc_s3GetObject
lcEventSrc_s3ListObjects
lcEventSrc_s3PutObject
lcEventSrc_s3CopyObject
lcEventSrc_s3CompleteMultipartUpload
)
//revive:enable:var-naming
type lcAuditEvent struct {
lifecycle.Event
source lcEventSrc
}
func (lae lcAuditEvent) Tags() map[string]interface{} {
event := lae.Event
src := lae.source
const (
ilmSrc = "ilm-src"
ilmAction = "ilm-action"
ilmDue = "ilm-due"
ilmRuleID = "ilm-rule-id"
ilmTier = "ilm-tier"
ilmNewerNoncurrentVersions = "ilm-newer-noncurrent-versions"
ilmNoncurrentDays = "ilm-noncurrent-days"
)
tags := make(map[string]interface{}, 5)
if src > lcEventSrc_None {
tags[ilmSrc] = src.String()
}
tags[ilmAction] = event.Action.String()
tags[ilmRuleID] = event.RuleID
if !event.Due.IsZero() {
tags[ilmDue] = event.Due
}
// rule with Transition/NoncurrentVersionTransition in effect
if event.StorageClass != "" {
tags[ilmTier] = event.StorageClass
}
// rule with NewerNoncurrentVersions in effect
if event.NewerNoncurrentVersions > 0 {
tags[ilmNewerNoncurrentVersions] = event.NewerNoncurrentVersions
}
if event.NoncurrentDays > 0 {
tags[ilmNoncurrentDays] = event.NoncurrentDays
}
return tags
}
func newLifecycleAuditEvent(src lcEventSrc, event lifecycle.Event) lcAuditEvent {
return lcAuditEvent{
Event: event,
source: src,
}
}
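Putting the pieces together, an audit entry for a scanner-initiated expiry might be built as below; the rule ID and timestamp are hypothetical, and `lifecycle.DeleteAction` is assumed from the `internal/bucket/lifecycle` package:

```go
func exampleILMTags() {
	ev := lifecycle.Event{
		Action: lifecycle.DeleteAction,
		RuleID: "expire-logs-after-90d",
		Due:    time.Now().UTC(),
	}
	tags := newLifecycleAuditEvent(lcEventSrc_Scanner, ev).Tags()
	for k, v := range tags {
		fmt.Printf("%s=%v\n", k, v) // e.g. ilm-src=Scanner, ilm-action=..., ilm-rule-id=...
	}
}
```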

View File

@@ -21,11 +21,12 @@ import (
"encoding/xml"
"io"
"net/http"
"strconv"
"github.com/gorilla/mux"
"github.com/minio/minio/internal/bucket/lifecycle"
xhttp "github.com/minio/minio/internal/http"
"github.com/minio/minio/internal/logger"
"github.com/minio/mux"
"github.com/minio/pkg/bucket/policy"
)
@@ -115,6 +116,16 @@ func (api objectAPIHandlers) GetBucketLifecycleHandler(w http.ResponseWriter, r
vars := mux.Vars(r)
bucket := vars["bucket"]
var withUpdatedAt bool
if updatedAtStr := r.Form.Get("withUpdatedAt"); updatedAtStr != "" {
var err error
withUpdatedAt, err = strconv.ParseBool(updatedAtStr)
if err != nil {
writeErrorResponse(ctx, w, errorCodes.ToAPIErr(ErrInvalidLifecycleQueryParameter), r.URL)
return
}
}
if s3Error := checkRequestAuthType(ctx, r, policy.GetBucketLifecycleAction, bucket, ""); s3Error != ErrNone {
writeErrorResponse(ctx, w, errorCodes.ToAPIErr(s3Error), r.URL)
return
@@ -126,7 +137,7 @@ func (api objectAPIHandlers) GetBucketLifecycleHandler(w http.ResponseWriter, r
return
}
config, err := globalBucketMetadataSys.GetLifecycleConfig(bucket)
config, updatedAt, err := globalBucketMetadataSys.GetLifecycleConfig(bucket)
if err != nil {
writeErrorResponse(ctx, w, toAPIError(ctx, err), r.URL)
return
@@ -138,6 +149,9 @@ func (api objectAPIHandlers) GetBucketLifecycleHandler(w http.ResponseWriter, r
return
}
if withUpdatedAt {
w.Header().Set(xhttp.MinIOLifecycleCfgUpdatedAt, updatedAt.Format(iso8601Format))
}
// Write lifecycle configuration to client.
writeSuccessResponseXML(w, configData)
}
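The `withUpdatedAt` handling above is a small reusable idiom for optional boolean query parameters; a standard-library sketch (`parseOptionalBool` is a hypothetical helper):

```go
func parseOptionalBool(r *http.Request, name string) (bool, error) {
	s := r.Form.Get(name)
	if s == "" {
		return false, nil // absent parameter defaults to false, not an error
	}
	return strconv.ParseBool(s) // accepts "1", "t", "true", "0", "f", "false", ...
}
```

A client opting in would append `withUpdatedAt=true` to the GetBucketLifecycle request and read back the timestamp header exposed via `xhttp.MinIOLifecycleCfgUpdatedAt`.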

View File

@@ -179,7 +179,7 @@ func testBucketLifecycleHandlers(obj ObjectLayer, instanceType, bucketName strin
lifecycleResponse: []byte(``),
errorResponse: APIErrorResponse{
Resource: SlashSeparator + bucketName + SlashSeparator,
Code: "InvalidRequest",
Code: "InvalidArgument",
Message: "Filter must have exactly one of Prefix, Tag, or And specified",
},
@@ -196,7 +196,7 @@ func testBucketLifecycleHandlers(obj ObjectLayer, instanceType, bucketName strin
lifecycleResponse: []byte(``),
errorResponse: APIErrorResponse{
Resource: SlashSeparator + bucketName + SlashSeparator,
Code: "InvalidRequest",
Code: "InvalidArgument",
Message: "Date must be provided in ISO 8601 format",
},

View File

@@ -24,12 +24,15 @@ import (
"fmt"
"io"
"net/http"
"runtime"
"strconv"
"strings"
"sync"
"sync/atomic"
"time"
"github.com/google/uuid"
"github.com/minio/madmin-go/v3"
"github.com/minio/minio-go/v7/pkg/tags"
"github.com/minio/minio/internal/amztime"
sse "github.com/minio/minio/internal/bucket/encryption"
@@ -38,6 +41,8 @@ import (
xhttp "github.com/minio/minio/internal/http"
"github.com/minio/minio/internal/logger"
"github.com/minio/minio/internal/s3select"
"github.com/minio/pkg/env"
"github.com/minio/pkg/workers"
)
const (
@@ -58,7 +63,8 @@ type LifecycleSys struct{}
// Get - gets lifecycle config associated to a given bucket name.
func (sys *LifecycleSys) Get(bucketName string) (lc *lifecycle.Lifecycle, err error) {
return globalBucketMetadataSys.GetLifecycleConfig(bucketName)
lc, _, err = globalBucketMetadataSys.GetLifecycleConfig(bucketName)
return lc, err
}
// NewLifecycleSys - creates new lifecycle system.
@@ -66,10 +72,34 @@ func NewLifecycleSys() *LifecycleSys {
return &LifecycleSys{}
}
func ilmTrace(startTime time.Time, duration time.Duration, oi ObjectInfo, event string) madmin.TraceInfo {
return madmin.TraceInfo{
TraceType: madmin.TraceILM,
Time: startTime,
NodeName: globalLocalNodeName,
FuncName: event,
Duration: duration,
Path: pathJoin(oi.Bucket, oi.Name),
Error: "",
Message: getSource(4),
Custom: map[string]string{"version-id": oi.VersionID},
}
}
func (sys *LifecycleSys) trace(oi ObjectInfo) func(event string) {
startTime := time.Now()
return func(event string) {
duration := time.Since(startTime)
if globalTrace.NumSubscribers(madmin.TraceILM) > 0 {
globalTrace.Publish(ilmTrace(startTime, duration, oi, event))
}
}
}
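The trace helper above follows a small, reusable pattern: capture a start time, return a closure, and let the caller stamp the event name when the work finishes. A generic sketch of the same shape (names illustrative):

```go
// traceSpan captures a start time and returns a closure that reports the
// event name together with the elapsed duration when invoked.
func traceSpan(publish func(event string, took time.Duration)) func(string) {
	start := time.Now()
	return func(event string) {
		publish(event, time.Since(start))
	}
}

// Usage:
//   done := traceSpan(func(ev string, d time.Duration) { log.Printf("%s took %s", ev, d) })
//   defer done("ExpireObject")
```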
type expiryTask struct {
objInfo ObjectInfo
versionExpiry bool
restoredObject bool
objInfo ObjectInfo
event lifecycle.Event
src lcEventSrc
}
type expiryState struct {
@@ -92,22 +122,22 @@ func (es *expiryState) close() {
}
// enqueueByDays enqueues object versions expired by days for expiry.
func (es *expiryState) enqueueByDays(oi ObjectInfo, restoredObject bool, rmVersion bool) {
func (es *expiryState) enqueueByDays(oi ObjectInfo, event lifecycle.Event, src lcEventSrc) {
select {
case <-GlobalContext.Done():
es.close()
case es.byDaysCh <- expiryTask{objInfo: oi, versionExpiry: rmVersion, restoredObject: restoredObject}:
case es.byDaysCh <- expiryTask{objInfo: oi, event: event, src: src}:
default:
}
}
// enqueueByNewerNoncurrent enqueues object versions expired by
// NewerNoncurrentVersions limit for expiry.
func (es *expiryState) enqueueByNewerNoncurrent(bucket string, versions []ObjectToDelete) {
func (es *expiryState) enqueueByNewerNoncurrent(bucket string, versions []ObjectToDelete, lcEvent lifecycle.Event) {
select {
case <-GlobalContext.Done():
es.close()
case es.byNewerNoncurrentCh <- newerNoncurrentTask{bucket: bucket, versions: versions}:
case es.byNewerNoncurrentCh <- newerNoncurrentTask{bucket: bucket, versions: versions, event: lcEvent}:
default:
}
}
@@ -116,26 +146,51 @@ var globalExpiryState *expiryState
func newExpiryState() *expiryState {
return &expiryState{
byDaysCh: make(chan expiryTask, 10000),
byNewerNoncurrentCh: make(chan newerNoncurrentTask, 10000),
byDaysCh: make(chan expiryTask, 100000),
byNewerNoncurrentCh: make(chan newerNoncurrentTask, 100000),
}
}
func initBackgroundExpiry(ctx context.Context, objectAPI ObjectLayer) {
globalExpiryState = newExpiryState()
workerSize, _ := strconv.Atoi(env.Get("_MINIO_ILM_EXPIRY_WORKERS", strconv.Itoa((runtime.GOMAXPROCS(0)+1)/2)))
if workerSize == 0 {
workerSize = 4
}
ewk, err := workers.New(workerSize)
if err != nil {
logger.LogIf(ctx, err)
}
nwk, err := workers.New(workerSize)
if err != nil {
logger.LogIf(ctx, err)
}
go func() {
for t := range globalExpiryState.byDaysCh {
if t.objInfo.TransitionedObject.Status != "" {
applyExpiryOnTransitionedObject(ctx, objectAPI, t.objInfo, t.restoredObject)
} else {
applyExpiryOnNonTransitionedObjects(ctx, objectAPI, t.objInfo, t.versionExpiry)
}
ewk.Take()
go func(t expiryTask) {
defer ewk.Give()
if t.objInfo.TransitionedObject.Status != "" {
applyExpiryOnTransitionedObject(ctx, objectAPI, t.objInfo, t.event, t.src)
} else {
applyExpiryOnNonTransitionedObjects(ctx, objectAPI, t.objInfo, t.event, t.src)
}
}(t)
}
ewk.Wait()
}()
go func() {
for t := range globalExpiryState.byNewerNoncurrentCh {
deleteObjectVersions(ctx, objectAPI, t.bucket, t.versions)
nwk.Take()
go func(t newerNoncurrentTask) {
defer nwk.Give()
deleteObjectVersions(ctx, objectAPI, t.bucket, t.versions, t.event)
}(t)
}
nwk.Wait()
}()
}
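`initBackgroundExpiry` now fans work out to a bounded pool: `workers.New(n)` creates a semaphore-style pool, `Take` blocks until a slot is free, `Give` releases it, and `Wait` drains in-flight work. A sketch of the same shape with an illustrative task type, using the `github.com/minio/pkg/workers` calls exactly as the diff does:

```go
func drainQueue(tasks <-chan string, size int) error {
	wk, err := workers.New(size) // rejects invalid (non-positive) sizes
	if err != nil {
		return err
	}
	for t := range tasks {
		wk.Take() // blocks until one of `size` slots frees up
		go func(t string) {
			defer wk.Give()
			fmt.Println("processing", t) // stand-in for the applyExpiry... calls
		}(t)
	}
	wk.Wait() // wait for in-flight tasks once the channel closes
	return nil
}
```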
@@ -144,15 +199,16 @@ func initBackgroundExpiry(ctx context.Context, objectAPI ObjectLayer) {
type newerNoncurrentTask struct {
bucket string
versions []ObjectToDelete
event lifecycle.Event
}
type transitionTask struct {
tier string
objInfo ObjectInfo
src lcEventSrc
event lifecycle.Event
}
type transitionState struct {
once sync.Once
transitionCh chan transitionTask
ctx context.Context
@@ -167,33 +223,42 @@ type transitionState struct {
lastDayStats map[string]*lastDayTierStats
}
func (t *transitionState) queueTransitionTask(oi ObjectInfo, sc string) {
func (t *transitionState) queueTransitionTask(oi ObjectInfo, event lifecycle.Event, src lcEventSrc) {
select {
case <-GlobalContext.Done():
t.once.Do(func() {
close(t.transitionCh)
})
case t.transitionCh <- transitionTask{objInfo: oi, tier: sc}:
case <-t.ctx.Done():
case t.transitionCh <- transitionTask{objInfo: oi, event: event, src: src}:
default:
}
}
var globalTransitionState *transitionState
func newTransitionState(ctx context.Context, objAPI ObjectLayer) *transitionState {
// newTransitionState returns a transitionState object ready to be initialized
// via its Init method.
func newTransitionState(ctx context.Context) *transitionState {
return &transitionState{
transitionCh: make(chan transitionTask, 10000),
ctx: ctx,
objAPI: objAPI,
killCh: make(chan struct{}),
lastDayStats: make(map[string]*lastDayTierStats),
}
}
// Init initializes t with the given objAPI and instantiates the configured
// number of transition workers.
func (t *transitionState) Init(objAPI ObjectLayer) {
n := globalAPIConfig.getTransitionWorkers()
t.mu.Lock()
defer t.mu.Unlock()
t.objAPI = objAPI
t.updateWorkers(n)
}
// PendingTasks returns the number of ILM transition tasks waiting for a worker
// goroutine.
func (t *transitionState) PendingTasks() int {
return len(globalTransitionState.transitionCh)
return len(t.transitionCh)
}
// ActiveTasks returns the number of active (ongoing) ILM transition tasks.
@@ -202,21 +267,21 @@ func (t *transitionState) ActiveTasks() int {
}
// worker waits for transition tasks
func (t *transitionState) worker(ctx context.Context, objectAPI ObjectLayer) {
func (t *transitionState) worker(objectAPI ObjectLayer) {
for {
select {
case <-t.killCh:
return
case <-ctx.Done():
case <-t.ctx.Done():
return
case task, ok := <-t.transitionCh:
if !ok {
return
}
atomic.AddInt32(&t.activeTasks, 1)
if err := transitionObject(ctx, objectAPI, task.objInfo, task.tier); err != nil {
logger.LogIf(ctx, fmt.Errorf("Transition failed for %s/%s version:%s with %w",
task.objInfo.Bucket, task.objInfo.Name, task.objInfo.VersionID, err))
if err := transitionObject(t.ctx, objectAPI, task.objInfo, newLifecycleAuditEvent(task.src, task.event)); err != nil {
logger.LogIf(t.ctx, fmt.Errorf("Transition to %s failed for %s/%s version:%s with %w",
task.event.StorageClass, task.objInfo.Bucket, task.objInfo.Name, task.objInfo.VersionID, err))
} else {
ts := tierStats{
TotalSize: uint64(task.objInfo.Size),
@@ -225,7 +290,7 @@ func (t *transitionState) worker(ctx context.Context, objectAPI ObjectLayer) {
if task.objInfo.IsLatest {
ts.NumObjects = 1
}
t.addLastDayStats(task.tier, ts)
t.addLastDayStats(task.event.StorageClass, ts)
}
atomic.AddInt32(&t.activeTasks, -1)
}
@@ -258,9 +323,15 @@ func (t *transitionState) getDailyAllTierStats() DailyAllTierStats {
func (t *transitionState) UpdateWorkers(n int) {
t.mu.Lock()
defer t.mu.Unlock()
if t.objAPI == nil { // Init hasn't been called yet.
return
}
t.updateWorkers(n)
}
func (t *transitionState) updateWorkers(n int) {
for t.numWorkers < n {
go t.worker(t.ctx, t.objAPI)
go t.worker(t.objAPI)
t.numWorkers++
}
@@ -270,12 +341,6 @@ func (t *transitionState) UpdateWorkers(n int) {
}
}
func initBackgroundTransition(ctx context.Context, objectAPI ObjectLayer) {
globalTransitionState = newTransitionState(ctx, objectAPI)
n := globalAPIConfig.getTransitionWorkers()
globalTransitionState.UpdateWorkers(n)
}
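updateWorkers grows the pool one goroutine at a time until numWorkers reaches the target; the elided branch of this hunk presumably shrinks it by signalling killCh, which each worker selects on. A hedged sketch of scaling a worker pool up and down with a kill channel (types and names are illustrative, not the patch's):

package main

import (
	"fmt"
	"time"
)

type poolState struct {
	tasks      chan int
	killCh     chan struct{}
	numWorkers int
}

func (t *poolState) worker() {
	for {
		select {
		case <-t.killCh: // one signal retires one worker
			return
		case v, ok := <-t.tasks:
			if !ok {
				return
			}
			fmt.Println("transitioned", v)
		}
	}
}

// updateWorkers scales the pool to n goroutines in either direction.
func (t *poolState) updateWorkers(n int) {
	for t.numWorkers < n {
		go t.worker()
		t.numWorkers++
	}
	for t.numWorkers > n {
		t.killCh <- struct{}{}
		t.numWorkers--
	}
}

func main() {
	s := &poolState{tasks: make(chan int, 8), killCh: make(chan struct{})}
	s.updateWorkers(4)
	for i := 0; i < 8; i++ {
		s.tasks <- i
	}
	s.updateWorkers(1)
	time.Sleep(100 * time.Millisecond) // let remaining tasks drain (demo only)
}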
var errInvalidStorageClass = errors.New("invalid storage class")
func validateTransitionTier(lc *lifecycle.Lifecycle) error {
@@ -296,89 +361,78 @@ func validateTransitionTier(lc *lifecycle.Lifecycle) error {
// enqueueTransitionImmediate enqueues obj for transition if eligible.
// This is to be called after a successful upload of an object (version).
func enqueueTransitionImmediate(obj ObjectInfo) {
func enqueueTransitionImmediate(obj ObjectInfo, src lcEventSrc) {
if lc, err := globalLifecycleSys.Get(obj.Bucket); err == nil {
event := lc.Eval(obj.ToLifecycleOpts())
switch event.Action {
switch event := lc.Eval(obj.ToLifecycleOpts()); event.Action {
case lifecycle.TransitionAction, lifecycle.TransitionVersionAction:
globalTransitionState.queueTransitionTask(obj, event.StorageClass)
globalTransitionState.queueTransitionTask(obj, event, src)
}
}
}
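Note that queueTransitionTask enqueues without blocking: if the 10000-slot channel is full, the default case silently drops the task so an upload is never stalled by ILM backpressure. The same select/default pattern in isolation (illustrative types only):

package main

import "fmt"

// tryEnqueue sends if there is room, otherwise drops the task rather
// than blocking the caller.
func tryEnqueue(ch chan string, task string) bool {
	select {
	case ch <- task:
		return true
	default:
		return false
	}
}

func main() {
	ch := make(chan string, 2)
	for _, t := range []string{"a", "b", "c"} {
		fmt.Println(t, tryEnqueue(ch, t)) // a true, b true, c false
	}
}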
// expireAction represents different actions to be performed on expiry of a
// restored/transitioned object
type expireAction int
const (
// ignore the zero value
_ expireAction = iota
// expireObj indicates expiry of 'regular' transitioned objects.
expireObj
// expireRestoredObj indicates expiry of restored objects.
expireRestoredObj
)
// expireTransitionedObject handles expiry of transitioned/restored objects
// (versions) in one of the following situations:
//
// 1. when a restored (via PostRestoreObject API) object expires.
// 2. when a transitioned object expires (based on an ILM rule).
func expireTransitionedObject(ctx context.Context, objectAPI ObjectLayer, oi *ObjectInfo, lcOpts lifecycle.ObjectOpts, action expireAction) error {
func expireTransitionedObject(ctx context.Context, objectAPI ObjectLayer, oi *ObjectInfo, lcOpts lifecycle.ObjectOpts, lcEvent lifecycle.Event, src lcEventSrc) error {
traceFn := globalLifecycleSys.trace(*oi)
var opts ObjectOptions
opts.Versioned = globalBucketVersioningSys.PrefixEnabled(oi.Bucket, oi.Name)
opts.VersionID = lcOpts.VersionID
opts.Expiration = ExpirationOptions{Expire: true}
switch action {
case expireObj:
// When an object is past expiry or when a transitioned object is being
// deleted, 'mark' the data in the remote tier for delete.
entry := jentry{
ObjName: oi.TransitionedObject.Name,
VersionID: oi.TransitionedObject.VersionID,
TierName: oi.TransitionedObject.Tier,
}
if err := globalTierJournal.AddEntry(entry); err != nil {
logger.LogIf(ctx, err)
return err
}
// Delete metadata on source, now that data in remote tier has been
// marked for deletion.
if _, err := objectAPI.DeleteObject(ctx, oi.Bucket, oi.Name, opts); err != nil {
logger.LogIf(ctx, err)
return err
}
// Send audit for the lifecycle delete operation
auditLogLifecycle(ctx, *oi, ILMExpiry)
eventName := event.ObjectRemovedDelete
if lcOpts.DeleteMarker {
eventName = event.ObjectRemovedDeleteMarkerCreated
}
objInfo := ObjectInfo{
Name: oi.Name,
VersionID: lcOpts.VersionID,
DeleteMarker: lcOpts.DeleteMarker,
}
// Notify object deleted event.
sendEvent(eventArgs{
EventName: eventName,
BucketName: oi.Bucket,
Object: objInfo,
Host: "Internal: [ILM-EXPIRY]",
})
case expireRestoredObj:
tags := newLifecycleAuditEvent(src, lcEvent).Tags()
if lcEvent.Action.DeleteRestored() {
// delete locally restored copy of object or object version
// from the source, while leaving metadata behind. The data on
// transitioned tier lies untouched and still accessible
opts.Transition.ExpireRestored = true
_, err := objectAPI.DeleteObject(ctx, oi.Bucket, oi.Name, opts)
if err == nil {
// TODO: consider including expiry of restored objects in the events we
// notify.
auditLogLifecycle(ctx, *oi, ILMExpiry, tags, traceFn)
}
return err
default:
return fmt.Errorf("Unknown expire action %v", action)
}
// When an object is past expiry or when a transitioned object is being
// deleted, 'mark' the data in the remote tier for delete.
entry := jentry{
ObjName: oi.TransitionedObject.Name,
VersionID: oi.TransitionedObject.VersionID,
TierName: oi.TransitionedObject.Tier,
}
if err := globalTierJournal.AddEntry(entry); err != nil {
logger.LogIf(ctx, err)
return err
}
// Delete metadata on source, now that data in remote tier has been
// marked for deletion.
if _, err := objectAPI.DeleteObject(ctx, oi.Bucket, oi.Name, opts); err != nil {
logger.LogIf(ctx, err)
return err
}
// Send audit for the lifecycle delete operation
defer auditLogLifecycle(ctx, *oi, ILMExpiry, tags, traceFn)
eventName := event.ObjectRemovedDelete
if lcOpts.DeleteMarker {
eventName = event.ObjectRemovedDeleteMarkerCreated
}
objInfo := ObjectInfo{
Name: oi.Name,
VersionID: lcOpts.VersionID,
DeleteMarker: lcOpts.DeleteMarker,
}
// Notify object deleted event.
sendEvent(eventArgs{
EventName: eventName,
BucketName: oi.Bucket,
Object: objInfo,
UserAgent: "Internal: [ILM-Expiry]",
Host: globalLocalNodeName,
})
return nil
}
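The ordering in expireTransitionedObject is deliberate: the remote-tier copy is first recorded in the tier journal, and only then is the local metadata deleted, so a crash between the two steps leaves a retryable journal entry rather than an orphaned remote object. A compact sketch of that write-ahead ordering, under hypothetical types:

package main

import "fmt"

type journal struct{ entries []string }

func (j *journal) addEntry(name string) error {
	j.entries = append(j.entries, name) // persist the delete intent first
	return nil
}

func deleteMetadata(name string) error {
	fmt.Println("deleted metadata for", name)
	return nil
}

// expire marks the remote copy for deletion before removing local
// metadata; a crash between the steps leaves a journal entry that a
// background sweeper can retry, never an unreachable remote object.
func expire(j *journal, name string) error {
	if err := j.addEntry(name); err != nil {
		return err
	}
	return deleteMetadata(name)
}

func main() {
	j := &journal{}
	_ = expire(j, "bucket/obj")
	fmt.Println("journal:", j.entries)
}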
@@ -398,17 +452,18 @@ func genTransitionObjName(bucket string) (string, error) {
// storage specified by the transition ARN, the metadata is left behind on the source cluster and the original content
// is moved to the transition tier. Note that in the case of encrypted objects, the entire encrypted stream is moved
// to the transition tier without decrypting or re-encrypting.
func transitionObject(ctx context.Context, objectAPI ObjectLayer, oi ObjectInfo, tier string) error {
func transitionObject(ctx context.Context, objectAPI ObjectLayer, oi ObjectInfo, lae lcAuditEvent) error {
opts := ObjectOptions{
Transition: TransitionOptions{
Status: lifecycle.TransitionPending,
Tier: tier,
Tier: lae.StorageClass,
ETag: oi.ETag,
},
VersionID: oi.VersionID,
Versioned: globalBucketVersioningSys.PrefixEnabled(oi.Bucket, oi.Name),
VersionSuspended: globalBucketVersioningSys.PrefixSuspended(oi.Bucket, oi.Name),
MTime: oi.ModTime,
LifecycleAuditEvent: lae,
VersionID: oi.VersionID,
Versioned: globalBucketVersioningSys.PrefixEnabled(oi.Bucket, oi.Name),
VersionSuspended: globalBucketVersioningSys.PrefixSuspended(oi.Bucket, oi.Name),
MTime: oi.ModTime,
}
return objectAPI.TransitionObject(ctx, oi.Bucket, oi.Name, opts)
}
@@ -540,7 +595,7 @@ type RestoreObjectRequest struct {
XMLName xml.Name `xml:"http://s3.amazonaws.com/doc/2006-03-01/ RestoreRequest" json:"-"`
Days int `xml:"Days,omitempty"`
Type RestoreRequestType `xml:"Type,omitempty"`
Tier string `xml:"Tier,-"`
Tier string `xml:"Tier"`
Description string `xml:"Description,omitempty"`
SelectParameters *SelectParameters `xml:"SelectParameters,omitempty"`
OutputLocation OutputLocation `xml:"OutputLocation,omitempty"`
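On the Tier tag fix: in encoding/xml, the text before the first comma is the element name and the tokens after it are options (attr, omitempty, ...); "-" is not a recognized option, and a field is skipped only when the whole tag is xml:"-", so xml:"Tier,-" carried a meaningless option. A small demonstration with a hypothetical cut-down struct:

package main

import (
	"encoding/xml"
	"fmt"
)

type restoreReq struct { // hypothetical, not the patch's full struct
	Days int    `xml:"Days,omitempty"`
	Tier string `xml:"Tier"`
	Skip string `xml:"-"` // a whole-tag dash is how a field is actually omitted
}

func main() {
	out, _ := xml.Marshal(restoreReq{Days: 3, Tier: "GLACIER", Skip: "x"})
	fmt.Println(string(out))
	// <restoreReq><Days>3</Days><Tier>GLACIER</Tier></restoreReq>
}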
@@ -636,7 +691,7 @@ func putRestoreOpts(bucket, object string, rreq *RestoreObjectRequest, objInfo O
if rreq.Type == SelectRestoreRequest {
for _, v := range rreq.OutputLocation.S3.UserMetadata {
if !strings.HasPrefix(strings.ToLower(v.Name), "x-amz-meta") {
if !stringsHasPrefixFold(v.Name, "x-amz-meta") {
meta["x-amz-meta-"+v.Name] = v.Value
continue
}
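stringsHasPrefixFold is a MinIO helper not shown in this diff; a case-insensitive prefix test is typically written as a length check plus strings.EqualFold over the prefix-sized slice, avoiding the allocation that strings.ToLower incurs. Roughly (assuming an ASCII prefix, so byte slicing cannot split a rune):

package main

import (
	"fmt"
	"strings"
)

// hasPrefixFold reports whether s begins with prefix, ignoring case,
// without lowering either string (no allocations).
func hasPrefixFold(s, prefix string) bool {
	return len(s) >= len(prefix) && strings.EqualFold(s[:len(prefix)], prefix)
}

func main() {
	fmt.Println(hasPrefixFold("X-Amz-Meta-Foo", "x-amz-meta")) // true
	fmt.Println(hasPrefixFold("Content-Type", "x-amz-meta"))   // false
}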
@@ -660,7 +715,9 @@ func putRestoreOpts(bucket, object string, rreq *RestoreObjectRequest, objInfo O
if len(objInfo.UserTags) != 0 {
meta[xhttp.AmzObjectTagging] = objInfo.UserTags
}
// Set restore object status
restoreExpiry := lifecycle.ExpectedExpiryTime(time.Now().UTC(), rreq.Days)
meta[xhttp.AmzRestore] = completedRestoreObj(restoreExpiry).String()
return ObjectOptions{
Versioned: globalBucketVersioningSys.PrefixEnabled(bucket, object),
VersionSuspended: globalBucketVersioningSys.PrefixSuspended(bucket, object),


@@ -23,8 +23,8 @@ import (
"strconv"
"strings"
"github.com/gorilla/mux"
"github.com/minio/minio/internal/logger"
"github.com/minio/mux"
"github.com/minio/pkg/bucket/policy"
)
@@ -59,10 +59,18 @@ func validateListObjectsArgs(prefix, marker, delimiter, encodingType string, max
return ErrNone
}
// ListObjectVersions - GET Bucket Object versions
func (api objectAPIHandlers) ListObjectVersionsHandler(w http.ResponseWriter, r *http.Request) {
api.listObjectVersionsHandler(w, r, false)
}
func (api objectAPIHandlers) ListObjectVersionsMHandler(w http.ResponseWriter, r *http.Request) {
api.listObjectVersionsHandler(w, r, true)
}
// ListObjectVersionsHandler - GET Bucket Object versions
// You can use the versions subresource to list metadata about all
// of the versions of objects in a bucket.
func (api objectAPIHandlers) ListObjectVersionsHandler(w http.ResponseWriter, r *http.Request) {
func (api objectAPIHandlers) listObjectVersionsHandler(w http.ResponseWriter, r *http.Request, metadata bool) {
ctx := newContext(r, w, "ListObjectVersions")
defer logger.AuditLog(ctx, w, r, mustGetClaimsFromToken(r))
@@ -80,7 +88,12 @@ func (api objectAPIHandlers) ListObjectVersionsHandler(w http.ResponseWriter, r
writeErrorResponse(ctx, w, errorCodes.ToAPIErr(s3Error), r.URL)
return
}
var checkObjMeta metaCheckFn
if metadata {
checkObjMeta = func(name string, action policy.Action) (s3Err APIErrorCode) {
return checkRequestAuthType(ctx, r, action, bucket, name)
}
}
urlValues := r.Form
// Extract all the listBucketVersions query params to their native values.
@@ -111,11 +124,10 @@ func (api objectAPIHandlers) ListObjectVersionsHandler(w http.ResponseWriter, r
writeErrorResponse(ctx, w, toAPIError(ctx, err), r.URL)
return
}
response := generateListVersionsResponse(bucket, prefix, marker, versionIDMarker, delimiter, encodingType, maxkeys, listObjectVersionsInfo)
response := generateListVersionsResponse(bucket, prefix, marker, versionIDMarker, delimiter, encodingType, maxkeys, listObjectVersionsInfo, checkObjMeta)
// Write success response.
writeSuccessResponseXML(w, encodeResponse(response))
writeSuccessResponseXML(w, encodeResponseList(response))
}
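metaCheckFn (defined elsewhere in this patch) lets the response generator run a per-object authorization check only for the metadata variants of the listing handlers; when it is nil, the plain listing omits metadata entirely. A hedged sketch of that optional-callback pattern, with illustrative names:

package main

import "fmt"

type apiErrCode int

const errNone apiErrCode = 0

// metaCheckFn mirrors the callback shape above: given an object name
// and an action, report whether the caller may see its metadata.
type metaCheckFn func(name, action string) apiErrCode

func listEntry(name string, check metaCheckFn) {
	switch {
	case check == nil:
		fmt.Println(name) // plain listing: metadata never attached
	case check(name, "s3:GetObject") == errNone:
		fmt.Println(name, "+ metadata")
	default:
		fmt.Println(name) // caller unauthorized for this object's metadata
	}
}

func main() {
	allow := func(string, string) apiErrCode { return errNone }
	listEntry("a.txt", allow) // metadata variant of the handler
	listEntry("b.txt", nil)   // plain ListObjectsV2
}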
// ListObjectsV2MHandler - GET Bucket (List Objects) Version 2 with metadata.
@@ -128,64 +140,7 @@ func (api objectAPIHandlers) ListObjectVersionsHandler(w http.ResponseWriter, r
// MinIO continues to support ListObjectsV1 and V2 for supporting legacy tools.
func (api objectAPIHandlers) ListObjectsV2MHandler(w http.ResponseWriter, r *http.Request) {
ctx := newContext(r, w, "ListObjectsV2M")
defer logger.AuditLog(ctx, w, r, mustGetClaimsFromToken(r))
vars := mux.Vars(r)
bucket := vars["bucket"]
objectAPI := api.ObjectAPI()
if objectAPI == nil {
writeErrorResponse(ctx, w, errorCodes.ToAPIErr(ErrServerNotInitialized), r.URL)
return
}
if s3Error := checkRequestAuthType(ctx, r, policy.ListBucketAction, bucket, ""); s3Error != ErrNone {
writeErrorResponse(ctx, w, errorCodes.ToAPIErr(s3Error), r.URL)
return
}
urlValues := r.Form
// Extract all the listObjectsV2 query params to their native values.
prefix, token, startAfter, delimiter, fetchOwner, maxKeys, encodingType, errCode := getListObjectsV2Args(urlValues)
if errCode != ErrNone {
writeErrorResponse(ctx, w, errorCodes.ToAPIErr(errCode), r.URL)
return
}
// Validate the query params before beginning to serve the request.
// fetch-owner is not validated since it is a boolean
if s3Error := validateListObjectsArgs(prefix, token, delimiter, encodingType, maxKeys); s3Error != ErrNone {
writeErrorResponse(ctx, w, errorCodes.ToAPIErr(s3Error), r.URL)
return
}
listObjectsV2 := objectAPI.ListObjectsV2
// Initiate a list objects operation based on the input params.
// On success would return back ListObjectsInfo object to be
// marshaled into S3 compatible XML header.
listObjectsV2Info, err := listObjectsV2(ctx, bucket, prefix, token, delimiter, maxKeys, fetchOwner, startAfter)
if err != nil {
writeErrorResponse(ctx, w, toAPIError(ctx, err), r.URL)
return
}
if err = DecryptETags(ctx, GlobalKMS, listObjectsV2Info.Objects); err != nil {
writeErrorResponse(ctx, w, toAPIError(ctx, err), r.URL)
return
}
// The next continuation token has id@node_index format to optimize paginated listing
nextContinuationToken := listObjectsV2Info.NextContinuationToken
response := generateListObjectsV2Response(bucket, prefix, token, nextContinuationToken, startAfter,
delimiter, encodingType, fetchOwner, listObjectsV2Info.IsTruncated,
maxKeys, listObjectsV2Info.Objects, listObjectsV2Info.Prefixes, true)
// Write success response.
writeSuccessResponseXML(w, encodeResponse(response))
api.listObjectsV2Handler(ctx, w, r, true)
}
// ListObjectsV2Handler - GET Bucket (List Objects) Version 2.
@@ -198,7 +153,11 @@ func (api objectAPIHandlers) ListObjectsV2MHandler(w http.ResponseWriter, r *htt
// MinIO continues to support ListObjectsV1 for supporting legacy tools.
func (api objectAPIHandlers) ListObjectsV2Handler(w http.ResponseWriter, r *http.Request) {
ctx := newContext(r, w, "ListObjectsV2")
api.listObjectsV2Handler(ctx, w, r, false)
}
// listObjectsV2Handler performs listing either with or without extra metadata.
func (api objectAPIHandlers) listObjectsV2Handler(ctx context.Context, w http.ResponseWriter, r *http.Request, metadata bool) {
defer logger.AuditLog(ctx, w, r, mustGetClaimsFromToken(r))
vars := mux.Vars(r)
@@ -215,6 +174,12 @@ func (api objectAPIHandlers) ListObjectsV2Handler(w http.ResponseWriter, r *http
return
}
var checkObjMeta metaCheckFn
if metadata {
checkObjMeta = func(name string, action policy.Action) (s3Err APIErrorCode) {
return checkRequestAuthType(ctx, r, action, bucket, name)
}
}
urlValues := r.Form
// Extract all the listObjectsV2 query params to their native values.
@@ -225,7 +190,6 @@ func (api objectAPIHandlers) ListObjectsV2Handler(w http.ResponseWriter, r *http
}
// Validate the query params before beginning to serve the request.
// fetch-owner is not validated since it is a boolean
if s3Error := validateListObjectsArgs(prefix, token, delimiter, encodingType, maxKeys); s3Error != ErrNone {
writeErrorResponse(ctx, w, errorCodes.ToAPIErr(s3Error), r.URL)
return
@@ -257,10 +221,10 @@ func (api objectAPIHandlers) ListObjectsV2Handler(w http.ResponseWriter, r *http
response := generateListObjectsV2Response(bucket, prefix, token, listObjectsV2Info.NextContinuationToken, startAfter,
delimiter, encodingType, fetchOwner, listObjectsV2Info.IsTruncated,
maxKeys, listObjectsV2Info.Objects, listObjectsV2Info.Prefixes, false)
maxKeys, listObjectsV2Info.Objects, listObjectsV2Info.Prefixes, checkObjMeta)
// Write success response.
writeSuccessResponseXML(w, encodeResponse(response))
writeSuccessResponseXML(w, encodeResponseList(response))
}
func parseRequestToken(token string) (subToken string, nodeIndex int) {
@@ -357,5 +321,5 @@ func (api objectAPIHandlers) ListObjectsV1Handler(w http.ResponseWriter, r *http
response := generateListObjectsV1Response(bucket, prefix, marker, delimiter, encodingType, maxKeys, listObjectsInfo)
// Write success response.
writeSuccessResponseXML(w, encodeResponse(response))
writeSuccessResponseXML(w, encodeResponseList(response))
}


@@ -21,10 +21,12 @@ import (
"context"
"errors"
"fmt"
"runtime"
"sync"
"time"
"github.com/minio/madmin-go"
"github.com/minio/madmin-go/v3"
"github.com/minio/minio-go/v7/pkg/set"
"github.com/minio/minio-go/v7/pkg/tags"
bucketsse "github.com/minio/minio/internal/bucket/encryption"
"github.com/minio/minio/internal/bucket/lifecycle"
@@ -34,12 +36,14 @@ import (
"github.com/minio/minio/internal/event"
"github.com/minio/minio/internal/kms"
"github.com/minio/minio/internal/logger"
"github.com/minio/minio/internal/sync/errgroup"
"github.com/minio/pkg/bucket/policy"
"github.com/minio/pkg/sync/errgroup"
)
// BucketMetadataSys captures all bucket metadata for a given cluster.
type BucketMetadataSys struct {
objAPI ObjectLayer
sync.RWMutex
metadataMap map[string]BucketMetadata
}
@@ -53,20 +57,36 @@ func (sys *BucketMetadataSys) Count() int {
}
// Remove bucket metadata from memory.
func (sys *BucketMetadataSys) Remove(bucket string) {
func (sys *BucketMetadataSys) Remove(buckets ...string) {
sys.Lock()
delete(sys.metadataMap, bucket)
globalBucketMonitor.DeleteBucket(bucket)
for _, bucket := range buckets {
delete(sys.metadataMap, bucket)
globalBucketMonitor.DeleteBucket(bucket)
}
sys.Unlock()
}
// RemoveStaleBuckets removes all stale buckets in memory that are not on disk.
func (sys *BucketMetadataSys) RemoveStaleBuckets(diskBuckets set.StringSet) {
sys.Lock()
defer sys.Unlock()
for bucket := range sys.metadataMap {
if diskBuckets.Contains(bucket) {
continue
} // doesn't exist on disk; remove from memory.
delete(sys.metadataMap, bucket)
globalBucketMonitor.DeleteBucket(bucket)
}
}
// Set - sets new bucket metadata in-memory.
// Only a shallow copy is saved and fields with references
// cannot be modified without causing a race condition,
// so they should be replaced atomically and not appended to, etc.
// Data is not persisted to disk.
func (sys *BucketMetadataSys) Set(bucket string, meta BucketMetadata) {
if bucket != minioMetaBucket {
if !isMinioMetaBucketName(bucket) {
sys.Lock()
sys.metadataMap[bucket] = meta
sys.Unlock()
@@ -79,7 +99,7 @@ func (sys *BucketMetadataSys) updateAndParse(ctx context.Context, bucket string,
return updatedAt, errServerNotInitialized
}
if bucket == minioMetaBucket {
if isMinioMetaBucketName(bucket) {
return updatedAt, errInvalidArgument
}
@@ -101,6 +121,7 @@ func (sys *BucketMetadataSys) updateAndParse(ctx context.Context, bucket string,
meta.NotificationConfigXML = configData
case bucketLifecycleConfig:
meta.LifecycleConfigXML = configData
meta.LifecycleConfigUpdatedAt = updatedAt
case bucketSSEConfig:
meta.EncryptionConfigXML = configData
meta.EncryptionConfigUpdatedAt = updatedAt
@@ -164,7 +185,7 @@ func (sys *BucketMetadataSys) Update(ctx context.Context, bucket string, configF
// For all other bucket specific metadata, use the relevant
// calls implemented specifically for each of those features.
func (sys *BucketMetadataSys) Get(bucket string) (BucketMetadata, error) {
if bucket == minioMetaBucket {
if isMinioMetaBucketName(bucket) {
return newBucketMetadata(bucket), errConfigNotFound
}
@@ -182,7 +203,7 @@ func (sys *BucketMetadataSys) Get(bucket string) (BucketMetadata, error) {
// GetVersioningConfig returns configured versioning config
// The returned object may not be modified.
func (sys *BucketMetadataSys) GetVersioningConfig(bucket string) (*versioning.Versioning, time.Time, error) {
meta, err := sys.GetConfig(GlobalContext, bucket)
meta, _, err := sys.GetConfig(GlobalContext, bucket)
if err != nil {
if errors.Is(err, errConfigNotFound) {
return &versioning.Versioning{XMLNS: "http://s3.amazonaws.com/doc/2006-03-01/"}, meta.Created, nil
@@ -195,7 +216,7 @@ func (sys *BucketMetadataSys) GetVersioningConfig(bucket string) (*versioning.Ve
// GetTaggingConfig returns configured tagging config
// The returned object may not be modified.
func (sys *BucketMetadataSys) GetTaggingConfig(bucket string) (*tags.Tags, time.Time, error) {
meta, err := sys.GetConfig(GlobalContext, bucket)
meta, _, err := sys.GetConfig(GlobalContext, bucket)
if err != nil {
if errors.Is(err, errConfigNotFound) {
return nil, time.Time{}, BucketTaggingNotFound{Bucket: bucket}
@@ -211,7 +232,7 @@ func (sys *BucketMetadataSys) GetTaggingConfig(bucket string) (*tags.Tags, time.
// GetObjectLockConfig returns configured object lock config
// The returned object may not be modified.
func (sys *BucketMetadataSys) GetObjectLockConfig(bucket string) (*objectlock.Config, time.Time, error) {
meta, err := sys.GetConfig(GlobalContext, bucket)
meta, _, err := sys.GetConfig(GlobalContext, bucket)
if err != nil {
if errors.Is(err, errConfigNotFound) {
return nil, time.Time{}, BucketObjectLockConfigNotFound{Bucket: bucket}
@@ -226,24 +247,24 @@ func (sys *BucketMetadataSys) GetObjectLockConfig(bucket string) (*objectlock.Co
// GetLifecycleConfig returns configured lifecycle config
// The returned object may not be modified.
func (sys *BucketMetadataSys) GetLifecycleConfig(bucket string) (*lifecycle.Lifecycle, error) {
meta, err := sys.GetConfig(GlobalContext, bucket)
func (sys *BucketMetadataSys) GetLifecycleConfig(bucket string) (*lifecycle.Lifecycle, time.Time, error) {
meta, _, err := sys.GetConfig(GlobalContext, bucket)
if err != nil {
if errors.Is(err, errConfigNotFound) {
return nil, BucketLifecycleNotFound{Bucket: bucket}
return nil, time.Time{}, BucketLifecycleNotFound{Bucket: bucket}
}
return nil, err
return nil, time.Time{}, err
}
if meta.lifecycleConfig == nil {
return nil, BucketLifecycleNotFound{Bucket: bucket}
return nil, time.Time{}, BucketLifecycleNotFound{Bucket: bucket}
}
return meta.lifecycleConfig, nil
return meta.lifecycleConfig, meta.LifecycleConfigUpdatedAt, nil
}
// GetNotificationConfig returns configured notification config
// The returned object may not be modified.
func (sys *BucketMetadataSys) GetNotificationConfig(bucket string) (*event.Config, error) {
meta, err := sys.GetConfig(GlobalContext, bucket)
meta, _, err := sys.GetConfig(GlobalContext, bucket)
if err != nil {
return nil, err
}
@@ -253,7 +274,7 @@ func (sys *BucketMetadataSys) GetNotificationConfig(bucket string) (*event.Confi
// GetSSEConfig returns configured SSE config
// The returned object may not be modified.
func (sys *BucketMetadataSys) GetSSEConfig(bucket string) (*bucketsse.BucketSSEConfig, time.Time, error) {
meta, err := sys.GetConfig(GlobalContext, bucket)
meta, _, err := sys.GetConfig(GlobalContext, bucket)
if err != nil {
if errors.Is(err, errConfigNotFound) {
return nil, time.Time{}, BucketSSEConfigNotFound{Bucket: bucket}
@@ -268,7 +289,7 @@ func (sys *BucketMetadataSys) GetSSEConfig(bucket string) (*bucketsse.BucketSSEC
// CreatedAt returns the time of creation of bucket
func (sys *BucketMetadataSys) CreatedAt(bucket string) (time.Time, error) {
meta, err := sys.GetConfig(GlobalContext, bucket)
meta, _, err := sys.GetConfig(GlobalContext, bucket)
if err != nil {
return time.Time{}, err
}
@@ -278,7 +299,7 @@ func (sys *BucketMetadataSys) CreatedAt(bucket string) (time.Time, error) {
// GetPolicyConfig returns configured bucket policy
// The returned object may not be modified.
func (sys *BucketMetadataSys) GetPolicyConfig(bucket string) (*policy.Policy, time.Time, error) {
meta, err := sys.GetConfig(GlobalContext, bucket)
meta, _, err := sys.GetConfig(GlobalContext, bucket)
if err != nil {
if errors.Is(err, errConfigNotFound) {
return nil, time.Time{}, BucketPolicyNotFound{Bucket: bucket}
@@ -294,7 +315,7 @@ func (sys *BucketMetadataSys) GetPolicyConfig(bucket string) (*policy.Policy, ti
// GetQuotaConfig returns configured bucket quota
// The returned object may not be modified.
func (sys *BucketMetadataSys) GetQuotaConfig(ctx context.Context, bucket string) (*madmin.BucketQuota, time.Time, error) {
meta, err := sys.GetConfig(ctx, bucket)
meta, _, err := sys.GetConfig(ctx, bucket)
if err != nil {
if errors.Is(err, errConfigNotFound) {
return nil, time.Time{}, BucketQuotaConfigNotFound{Bucket: bucket}
@@ -307,7 +328,7 @@ func (sys *BucketMetadataSys) GetQuotaConfig(ctx context.Context, bucket string)
// GetReplicationConfig returns configured bucket replication config
// The returned object may not be modified.
func (sys *BucketMetadataSys) GetReplicationConfig(ctx context.Context, bucket string) (*replication.Config, time.Time, error) {
meta, err := sys.GetConfig(ctx, bucket)
meta, reloaded, err := sys.GetConfig(ctx, bucket)
if err != nil {
if errors.Is(err, errConfigNotFound) {
return nil, time.Time{}, BucketReplicationConfigNotFound{Bucket: bucket}
@@ -318,13 +339,18 @@ func (sys *BucketMetadataSys) GetReplicationConfig(ctx context.Context, bucket s
if meta.replicationConfig == nil {
return nil, time.Time{}, BucketReplicationConfigNotFound{Bucket: bucket}
}
if reloaded {
globalBucketTargetSys.set(BucketInfo{
Name: bucket,
}, meta)
}
return meta.replicationConfig, meta.ReplicationConfigUpdatedAt, nil
}
// GetBucketTargetsConfig returns configured bucket targets for this bucket
// The returned object may not be modified.
func (sys *BucketMetadataSys) GetBucketTargetsConfig(bucket string) (*madmin.BucketTargets, error) {
meta, err := sys.GetConfig(GlobalContext, bucket)
meta, reloaded, err := sys.GetConfig(GlobalContext, bucket)
if err != nil {
if errors.Is(err, errConfigNotFound) {
return nil, BucketRemoteTargetNotFound{Bucket: bucket}
@@ -334,36 +360,56 @@ func (sys *BucketMetadataSys) GetBucketTargetsConfig(bucket string) (*madmin.Buc
if meta.bucketTargetConfig == nil {
return nil, BucketRemoteTargetNotFound{Bucket: bucket}
}
if reloaded {
globalBucketTargetSys.set(BucketInfo{
Name: bucket,
}, meta)
}
return meta.bucketTargetConfig, nil
}
// GetConfig returns a specific configuration from the bucket metadata.
// The returned object may not be modified.
func (sys *BucketMetadataSys) GetConfig(ctx context.Context, bucket string) (BucketMetadata, error) {
// GetConfigFromDisk reads bucket metadata config from disk.
func (sys *BucketMetadataSys) GetConfigFromDisk(ctx context.Context, bucket string) (BucketMetadata, error) {
objAPI := newObjectLayerFn()
if objAPI == nil {
return newBucketMetadata(bucket), errServerNotInitialized
}
if bucket == minioMetaBucket {
if isMinioMetaBucketName(bucket) {
return newBucketMetadata(bucket), errInvalidArgument
}
return loadBucketMetadata(ctx, objAPI, bucket)
}
// GetConfig returns a specific configuration from the bucket metadata.
// The returned object may not be modified.
// reloaded will be true if the metadata was refreshed from disk
func (sys *BucketMetadataSys) GetConfig(ctx context.Context, bucket string) (meta BucketMetadata, reloaded bool, err error) {
objAPI := newObjectLayerFn()
if objAPI == nil {
return newBucketMetadata(bucket), reloaded, errServerNotInitialized
}
if isMinioMetaBucketName(bucket) {
return newBucketMetadata(bucket), reloaded, errInvalidArgument
}
sys.RLock()
meta, ok := sys.metadataMap[bucket]
sys.RUnlock()
if ok {
return meta, nil
return meta, reloaded, nil
}
meta, err := loadBucketMetadata(ctx, objAPI, bucket)
meta, err = loadBucketMetadata(ctx, objAPI, bucket)
if err != nil {
return meta, err
return meta, reloaded, err
}
sys.Lock()
sys.metadataMap[bucket] = meta
sys.Unlock()
return meta, nil
return meta, true, nil
}
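GetConfig above is a read-through cache: a fast path under RLock, and on miss a disk load followed by a write under the full lock, with reloaded telling callers the in-memory copy was just refreshed so dependent systems (replication targets, notification targets) can be re-primed. A minimal sketch of the pattern, with illustrative types:

package main

import (
	"fmt"
	"sync"
)

type cache struct {
	mu sync.RWMutex
	m  map[string]string
}

// get returns the cached value; reloaded is true when the value had to
// be fetched from the slow source and the cache was just refreshed.
func (c *cache) get(key string, load func(string) string) (val string, reloaded bool) {
	c.mu.RLock()
	val, ok := c.m[key]
	c.mu.RUnlock()
	if ok {
		return val, false
	}
	val = load(key) // slow path, e.g. read from disk
	c.mu.Lock()
	c.m[key] = val
	c.mu.Unlock()
	return val, true
}

func main() {
	c := &cache{m: map[string]string{}}
	load := func(k string) string { return "meta:" + k }
	fmt.Println(c.get("bucket1", load)) // meta:bucket1 true
	fmt.Println(c.get("bucket1", load)) // meta:bucket1 false
}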
// Init - initializes bucket metadata system for all buckets.
@@ -372,59 +418,129 @@ func (sys *BucketMetadataSys) Init(ctx context.Context, buckets []BucketInfo, ob
return errServerNotInitialized
}
// Load bucket metadata sys in background
go sys.load(ctx, buckets, objAPI)
sys.objAPI = objAPI
// Load bucket metadata sys.
sys.init(ctx, buckets)
return nil
}
func (sys *BucketMetadataSys) loadBucketMetadata(ctx context.Context, bucket BucketInfo) error {
meta, err := loadBucketMetadata(ctx, sys.objAPI, bucket.Name)
if err != nil {
return err
}
sys.Lock()
sys.metadataMap[bucket.Name] = meta
sys.Unlock()
return nil
}
// concurrentLoad loads bucket metadata concurrently to speed up startup.
func (sys *BucketMetadataSys) concurrentLoad(ctx context.Context, buckets []BucketInfo, objAPI ObjectLayer) {
func (sys *BucketMetadataSys) concurrentLoad(ctx context.Context, buckets []BucketInfo) {
g := errgroup.WithNErrs(len(buckets))
bucketMetas := make([]BucketMetadata, len(buckets))
for index := range buckets {
index := index
g.Go(func() error {
_, _ = objAPI.HealBucket(ctx, buckets[index].Name, madmin.HealOpts{
_, _ = sys.objAPI.HealBucket(ctx, buckets[index].Name, madmin.HealOpts{
// Ensure heal opts for bucket metadata be deep healed all the time.
ScanMode: madmin.HealDeepScan,
Recreate: true,
})
meta, err := loadBucketMetadata(ctx, objAPI, buckets[index].Name)
meta, err := loadBucketMetadata(ctx, sys.objAPI, buckets[index].Name)
if err != nil {
if !globalIsErasure && !globalIsDistErasure && errors.Is(err, errVolumeNotFound) {
meta = newBucketMetadata(buckets[index].Name)
} else {
return err
}
return err
}
sys.Lock()
sys.metadataMap[buckets[index].Name] = meta
sys.Unlock()
globalEventNotifier.set(buckets[index], meta) // set notification targets
globalBucketTargetSys.set(buckets[index], meta) // set remote replication targets
bucketMetas[index] = meta
return nil
}, index)
}
for _, err := range g.Wait() {
errs := g.Wait()
for _, err := range errs {
if err != nil {
logger.LogIf(ctx, err)
}
}
// Hold lock here to update the in-memory map at once,
// instead of serializing the goroutines.
sys.Lock()
for i, meta := range bucketMetas {
if errs[i] != nil {
continue
}
sys.metadataMap[buckets[i].Name] = meta
}
sys.Unlock()
for i, meta := range bucketMetas {
if errs[i] != nil {
continue
}
globalEventNotifier.set(buckets[i], meta) // set notification targets
globalBucketTargetSys.set(buckets[i], meta) // set remote replication targets
}
}
func (sys *BucketMetadataSys) refreshBucketsMetadataLoop(ctx context.Context) {
const bucketMetadataRefresh = 15 * time.Minute
t := time.NewTimer(bucketMetadataRefresh)
defer t.Stop()
for {
select {
case <-ctx.Done():
return
case <-t.C:
buckets, err := sys.objAPI.ListBuckets(ctx, BucketOptions{})
if err != nil {
logger.LogIf(ctx, err)
continue
}
// Remove any in-memory buckets that have become stale:
// first delete them, then replace them with the newer state
// from disk.
diskBuckets := set.CreateStringSet()
for _, bucket := range buckets {
diskBuckets.Add(bucket.Name)
}
sys.RemoveStaleBuckets(diskBuckets)
for _, bucket := range buckets {
err := sys.loadBucketMetadata(ctx, bucket)
if err != nil {
logger.LogIf(ctx, err)
continue
}
// If no spare procs are available, wait 100ms before loading the next bucket
waitForLowIO(runtime.GOMAXPROCS(0), 100*time.Millisecond, currentHTTPIO)
}
t.Reset(bucketMetadataRefresh)
}
}
}
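The loop uses time.NewTimer plus an explicit Reset after the refresh completes, rather than time.NewTicker, so a slow pass cannot overlap with the next tick; the interval is measured from the end of one refresh to the start of the next. The pattern in isolation (names illustrative):

package main

import (
	"context"
	"fmt"
	"time"
)

func refreshLoop(ctx context.Context, interval time.Duration, refresh func()) {
	t := time.NewTimer(interval)
	defer t.Stop()
	for {
		select {
		case <-ctx.Done():
			return
		case <-t.C:
			refresh()         // may take a while; no tick fires meanwhile
			t.Reset(interval) // next run starts interval after completion
		}
	}
}

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 250*time.Millisecond)
	defer cancel()
	refreshLoop(ctx, 100*time.Millisecond, func() { fmt.Println("refresh") })
}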
// Loads bucket metadata for all buckets into BucketMetadataSys.
func (sys *BucketMetadataSys) load(ctx context.Context, buckets []BucketInfo, objAPI ObjectLayer) {
func (sys *BucketMetadataSys) init(ctx context.Context, buckets []BucketInfo) {
count := 100 // load 100 bucket metadata at a time.
for {
if len(buckets) < count {
sys.concurrentLoad(ctx, buckets, objAPI)
return
sys.concurrentLoad(ctx, buckets)
break
}
sys.concurrentLoad(ctx, buckets[:count], objAPI)
sys.concurrentLoad(ctx, buckets[:count])
buckets = buckets[count:]
}
if globalIsDistErasure {
go sys.refreshBucketsMetadataLoop(ctx)
}
}
// Reset the state of the BucketMetadataSys.


@@ -29,7 +29,7 @@ import (
"path"
"time"
"github.com/minio/madmin-go"
"github.com/minio/madmin-go/v3"
"github.com/minio/minio-go/v7/pkg/tags"
bucketsse "github.com/minio/minio/internal/bucket/encryption"
"github.com/minio/minio/internal/bucket/lifecycle"
@@ -88,6 +88,7 @@ type BucketMetadata struct {
QuotaConfigUpdatedAt time.Time
ReplicationConfigUpdatedAt time.Time
VersioningConfigUpdatedAt time.Time
LifecycleConfigUpdatedAt time.Time
// Unexported fields. Must be updated atomically.
policyConfig *policy.Policy
@@ -119,6 +120,16 @@ func newBucketMetadata(name string) BucketMetadata {
}
}
// Versioning returns true if versioning is enabled
func (b BucketMetadata) Versioning() bool {
return b.LockEnabled || (b.versioningConfig != nil && b.versioningConfig.Enabled()) || (b.objectLockConfig != nil && b.objectLockConfig.Enabled())
}
// ObjectLocking returns true if object locking is enabled
func (b BucketMetadata) ObjectLocking() bool {
return b.LockEnabled || (b.objectLockConfig != nil && b.objectLockConfig.Enabled())
}
// SetCreatedAt preserves the CreatedAt time for bucket across sites in site replication. It defaults to
// creation time of bucket on this cluster in all other cases.
func (b *BucketMetadata) SetCreatedAt(createdAt time.Time) {
@@ -135,7 +146,7 @@ func (b *BucketMetadata) SetCreatedAt(createdAt time.Time) {
func (b *BucketMetadata) Load(ctx context.Context, api ObjectLayer, name string) error {
if name == "" {
logger.LogIf(ctx, errors.New("bucket name cannot be empty"))
return errors.New("bucket name cannot be empty")
return errInvalidArgument
}
configFile := path.Join(bucketMetaPrefix, name, bucketMetadataFile)
data, err := readConfig(ctx, api, configFile)
@@ -323,8 +334,7 @@ func (b *BucketMetadata) getAllLegacyConfigs(ctx context.Context, objectAPI Obje
configData, err := readConfig(ctx, objectAPI, configFile)
if err != nil {
switch err.(type) {
case ObjectExistsAsDirectory:
if _, ok := err.(ObjectExistsAsDirectory); ok {
// in FS mode it is possible that we have actual
// files in this folder with `.minio.sys/buckets/bucket/configFile`
continue
@@ -418,6 +428,10 @@ func (b *BucketMetadata) defaultTimestamps() {
if b.VersioningConfigUpdatedAt.IsZero() {
b.VersioningConfigUpdatedAt = b.Created
}
if b.LifecycleConfigUpdatedAt.IsZero() {
b.LifecycleConfigUpdatedAt = b.Created
}
}
// Save config to supplied ObjectLayer api.


@@ -150,6 +150,12 @@ func (z *BucketMetadata) DecodeMsg(dc *msgp.Reader) (err error) {
err = msgp.WrapError(err, "VersioningConfigUpdatedAt")
return
}
case "LifecycleConfigUpdatedAt":
z.LifecycleConfigUpdatedAt, err = dc.ReadTime()
if err != nil {
err = msgp.WrapError(err, "LifecycleConfigUpdatedAt")
return
}
default:
err = dc.Skip()
if err != nil {
@@ -163,9 +169,9 @@ func (z *BucketMetadata) DecodeMsg(dc *msgp.Reader) (err error) {
// EncodeMsg implements msgp.Encodable
func (z *BucketMetadata) EncodeMsg(en *msgp.Writer) (err error) {
// map header, size 21
// map header, size 22
// write "Name"
err = en.Append(0xde, 0x0, 0x15, 0xa4, 0x4e, 0x61, 0x6d, 0x65)
err = en.Append(0xde, 0x0, 0x16, 0xa4, 0x4e, 0x61, 0x6d, 0x65)
if err != nil {
return
}
@@ -374,15 +380,25 @@ func (z *BucketMetadata) EncodeMsg(en *msgp.Writer) (err error) {
err = msgp.WrapError(err, "VersioningConfigUpdatedAt")
return
}
// write "LifecycleConfigUpdatedAt"
err = en.Append(0xb8, 0x4c, 0x69, 0x66, 0x65, 0x63, 0x79, 0x63, 0x6c, 0x65, 0x43, 0x6f, 0x6e, 0x66, 0x69, 0x67, 0x55, 0x70, 0x64, 0x61, 0x74, 0x65, 0x64, 0x41, 0x74)
if err != nil {
return
}
err = en.WriteTime(z.LifecycleConfigUpdatedAt)
if err != nil {
err = msgp.WrapError(err, "LifecycleConfigUpdatedAt")
return
}
return
}
// MarshalMsg implements msgp.Marshaler
func (z *BucketMetadata) MarshalMsg(b []byte) (o []byte, err error) {
o = msgp.Require(b, z.Msgsize())
// map header, size 21
// map header, size 22
// string "Name"
o = append(o, 0xde, 0x0, 0x15, 0xa4, 0x4e, 0x61, 0x6d, 0x65)
o = append(o, 0xde, 0x0, 0x16, 0xa4, 0x4e, 0x61, 0x6d, 0x65)
o = msgp.AppendString(o, z.Name)
// string "Created"
o = append(o, 0xa7, 0x43, 0x72, 0x65, 0x61, 0x74, 0x65, 0x64)
@@ -444,6 +460,9 @@ func (z *BucketMetadata) MarshalMsg(b []byte) (o []byte, err error) {
// string "VersioningConfigUpdatedAt"
o = append(o, 0xb9, 0x56, 0x65, 0x72, 0x73, 0x69, 0x6f, 0x6e, 0x69, 0x6e, 0x67, 0x43, 0x6f, 0x6e, 0x66, 0x69, 0x67, 0x55, 0x70, 0x64, 0x61, 0x74, 0x65, 0x64, 0x41, 0x74)
o = msgp.AppendTime(o, z.VersioningConfigUpdatedAt)
// string "LifecycleConfigUpdatedAt"
o = append(o, 0xb8, 0x4c, 0x69, 0x66, 0x65, 0x63, 0x79, 0x63, 0x6c, 0x65, 0x43, 0x6f, 0x6e, 0x66, 0x69, 0x67, 0x55, 0x70, 0x64, 0x61, 0x74, 0x65, 0x64, 0x41, 0x74)
o = msgp.AppendTime(o, z.LifecycleConfigUpdatedAt)
return
}
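The byte-level edits above follow directly from the MessagePack spec: 0xde is the map16 marker followed by a big-endian uint16 entry count, so 0xde 0x00 0x15 (21 entries) becomes 0xde 0x00 0x16 (22) once the new field is added, and 0xb8 is a fixstr marker (0xa0 | length) for the 24-byte key "LifecycleConfigUpdatedAt". Decoding those prefixes by hand:

package main

import "fmt"

func main() {
	hdr := []byte{0xde, 0x00, 0x16}           // map16 marker + big-endian count
	fmt.Println(int(hdr[1])<<8 | int(hdr[2])) // 22 map entries

	key := byte(0xb8)          // fixstr marker: 0xa0 | length
	fmt.Println(key & 0x1f)    // 24 == len("LifecycleConfigUpdatedAt")
}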
@@ -591,6 +610,12 @@ func (z *BucketMetadata) UnmarshalMsg(bts []byte) (o []byte, err error) {
err = msgp.WrapError(err, "VersioningConfigUpdatedAt")
return
}
case "LifecycleConfigUpdatedAt":
z.LifecycleConfigUpdatedAt, bts, err = msgp.ReadTimeBytes(bts)
if err != nil {
err = msgp.WrapError(err, "LifecycleConfigUpdatedAt")
return
}
default:
bts, err = msgp.Skip(bts)
if err != nil {
@@ -605,6 +630,6 @@ func (z *BucketMetadata) UnmarshalMsg(bts []byte) (o []byte, err error) {
// Msgsize returns an upper bound estimate of the number of bytes occupied by the serialized message
func (z *BucketMetadata) Msgsize() (s int) {
s = 3 + 5 + msgp.StringPrefixSize + len(z.Name) + 8 + msgp.TimeSize + 12 + msgp.BoolSize + 17 + msgp.BytesPrefixSize + len(z.PolicyConfigJSON) + 22 + msgp.BytesPrefixSize + len(z.NotificationConfigXML) + 19 + msgp.BytesPrefixSize + len(z.LifecycleConfigXML) + 20 + msgp.BytesPrefixSize + len(z.ObjectLockConfigXML) + 20 + msgp.BytesPrefixSize + len(z.VersioningConfigXML) + 20 + msgp.BytesPrefixSize + len(z.EncryptionConfigXML) + 17 + msgp.BytesPrefixSize + len(z.TaggingConfigXML) + 16 + msgp.BytesPrefixSize + len(z.QuotaConfigJSON) + 21 + msgp.BytesPrefixSize + len(z.ReplicationConfigXML) + 24 + msgp.BytesPrefixSize + len(z.BucketTargetsConfigJSON) + 28 + msgp.BytesPrefixSize + len(z.BucketTargetsConfigMetaJSON) + 22 + msgp.TimeSize + 26 + msgp.TimeSize + 26 + msgp.TimeSize + 23 + msgp.TimeSize + 21 + msgp.TimeSize + 27 + msgp.TimeSize + 26 + msgp.TimeSize
s = 3 + 5 + msgp.StringPrefixSize + len(z.Name) + 8 + msgp.TimeSize + 12 + msgp.BoolSize + 17 + msgp.BytesPrefixSize + len(z.PolicyConfigJSON) + 22 + msgp.BytesPrefixSize + len(z.NotificationConfigXML) + 19 + msgp.BytesPrefixSize + len(z.LifecycleConfigXML) + 20 + msgp.BytesPrefixSize + len(z.ObjectLockConfigXML) + 20 + msgp.BytesPrefixSize + len(z.VersioningConfigXML) + 20 + msgp.BytesPrefixSize + len(z.EncryptionConfigXML) + 17 + msgp.BytesPrefixSize + len(z.TaggingConfigXML) + 16 + msgp.BytesPrefixSize + len(z.QuotaConfigJSON) + 21 + msgp.BytesPrefixSize + len(z.ReplicationConfigXML) + 24 + msgp.BytesPrefixSize + len(z.BucketTargetsConfigJSON) + 28 + msgp.BytesPrefixSize + len(z.BucketTargetsConfigMetaJSON) + 22 + msgp.TimeSize + 26 + msgp.TimeSize + 26 + msgp.TimeSize + 23 + msgp.TimeSize + 21 + msgp.TimeSize + 27 + msgp.TimeSize + 26 + msgp.TimeSize + 25 + msgp.TimeSize
return
}


@@ -23,9 +23,9 @@ import (
"net/http"
"reflect"
"github.com/gorilla/mux"
"github.com/minio/minio/internal/event"
"github.com/minio/minio/internal/logger"
"github.com/minio/mux"
"github.com/minio/pkg/bucket/policy"
)
@@ -50,11 +50,6 @@ func (api objectAPIHandlers) GetBucketNotificationHandler(w http.ResponseWriter,
return
}
if !objAPI.IsNotificationSupported() {
writeErrorResponse(ctx, w, errorCodes.ToAPIErr(ErrNotImplemented), r.URL)
return
}
if s3Error := checkRequestAuthType(ctx, r, policy.GetBucketNotificationAction, bucketName, ""); s3Error != ErrNone {
writeErrorResponse(ctx, w, errorCodes.ToAPIErr(s3Error), r.URL)
return
@@ -119,11 +114,6 @@ func (api objectAPIHandlers) PutBucketNotificationHandler(w http.ResponseWriter,
return
}
if !objectAPI.IsNotificationSupported() {
writeErrorResponse(ctx, w, errorCodes.ToAPIErr(ErrNotImplemented), r.URL)
return
}
vars := mux.Vars(r)
bucketName := vars["bucket"]


@@ -82,30 +82,24 @@ func enforceRetentionForDeletion(ctx context.Context, objInfo ObjectInfo) (locke
// For objects in "Governance" mode, overwrite is allowed if a) the object retention date is past, OR
// b) governance bypass headers are set and the user has governance bypass permissions.
// Objects in "Compliance" mode can be overwritten only if retention date is past.
func enforceRetentionBypassForDelete(ctx context.Context, r *http.Request, bucket string, object ObjectToDelete, oi ObjectInfo, gerr error) APIErrorCode {
opts, err := getOpts(ctx, r, bucket, object.ObjectName)
if err != nil {
return toAPIErrorCode(ctx, err)
}
opts.VersionID = object.VersionID
func enforceRetentionBypassForDelete(ctx context.Context, r *http.Request, bucket string, object ObjectToDelete, oi ObjectInfo, gerr error) error {
if gerr != nil { // error from GetObjectInfo
switch gerr.(type) {
case MethodNotAllowed: // This happens usually for a delete marker
if _, ok := gerr.(MethodNotAllowed); ok {
// This usually happens for a delete marker
if oi.DeleteMarker || !oi.VersionPurgeStatus.Empty() {
// Delete marker should be present and valid.
return ErrNone
return nil
}
}
if isErrObjectNotFound(gerr) || isErrVersionNotFound(gerr) {
return ErrNone
return nil
}
return toAPIErrorCode(ctx, gerr)
return gerr
}
lhold := objectlock.GetObjectLegalHoldMeta(oi.UserDefined)
if lhold.Status.Valid() && lhold.Status == objectlock.LegalHoldOn {
return ErrObjectLocked
return ObjectLocked{}
}
ret := objectlock.GetObjectRetentionMeta(oi.UserDefined)
@@ -121,13 +115,13 @@ func enforceRetentionBypassForDelete(ctx context.Context, r *http.Request, bucke
t, err := objectlock.UTCNowNTP()
if err != nil {
logger.LogIf(ctx, err)
return ErrObjectLocked
return ObjectLocked{}
}
if !ret.RetainUntilDate.Before(t) {
return ErrObjectLocked
return ObjectLocked{}
}
return ErrNone
return nil
case objectlock.RetGovernance:
// In governance mode, users can't overwrite or delete an object
// version or alter its lock settings unless they have special
@@ -147,25 +141,22 @@ func enforceRetentionBypassForDelete(ctx context.Context, r *http.Request, bucke
t, err := objectlock.UTCNowNTP()
if err != nil {
logger.LogIf(ctx, err)
return ErrObjectLocked
return ObjectLocked{}
}
if !ret.RetainUntilDate.Before(t) {
return ErrObjectLocked
return ObjectLocked{}
}
return ErrNone
return nil
}
// https://docs.aws.amazon.com/AmazonS3/latest/dev/object-lock-overview.html#object-lock-retention-modes
// If you try to delete objects protected by governance mode and have s3:BypassGovernanceRetention
// or s3:GetBucketObjectLockConfiguration permissions, the operation will succeed.
govBypassPerms1 := checkRequestAuthType(ctx, r, policy.BypassGovernanceRetentionAction, bucket, object.ObjectName)
govBypassPerms2 := checkRequestAuthType(ctx, r, policy.GetBucketObjectLockConfigurationAction, bucket, object.ObjectName)
if govBypassPerms1 != ErrNone && govBypassPerms2 != ErrNone {
return ErrAccessDenied
// If you try to delete objects protected by governance mode and have s3:BypassGovernanceRetention, the operation will succeed.
if checkRequestAuthType(ctx, r, policy.BypassGovernanceRetentionAction, bucket, object.ObjectName) != ErrNone {
return errAuthentication
}
}
}
return ErrNone
return nil
}
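The refactor above swaps APIErrorCode return values for typed Go errors (ObjectLocked{}, errAuthentication), which callers translate back to wire-level codes at the handler boundary. A hedged sketch of that translation step, with illustrative names standing in for MinIO's toAPIErrorCode:

package main

import (
	"errors"
	"fmt"
)

type objectLocked struct{}

func (objectLocked) Error() string { return "object is WORM protected" }

var errAuthentication = errors.New("authentication failed")

// toAPICode maps internal typed errors to API error codes, mirroring
// how a handler converts errors returned by enforcement helpers.
func toAPICode(err error) string {
	switch {
	case err == nil:
		return "None"
	case errors.Is(err, errAuthentication):
		return "AccessDenied"
	default:
		var ol objectLocked
		if errors.As(err, &ol) {
			return "ObjectLocked"
		}
		return "InternalError"
	}
}

func main() {
	fmt.Println(toAPICode(nil))               // None
	fmt.Println(toAPICode(objectLocked{}))    // ObjectLocked
	fmt.Println(toAPICode(errAuthentication)) // AccessDenied
}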
// enforceRetentionBypassForPut enforces whether an existing object under governance can be overwritten
@@ -195,8 +186,7 @@ func enforceRetentionBypassForPut(ctx context.Context, r *http.Request, oi Objec
days, objRetention.RetainUntilDate.Time,
objRetention.Mode, byPassSet, r, cred,
owner)
switch apiErr {
case ErrAccessDenied:
if apiErr == ErrAccessDenied {
return errAuthentication
}
return nil
@@ -213,8 +203,7 @@ func enforceRetentionBypassForPut(ctx context.Context, r *http.Request, oi Objec
return ObjectLocked{Bucket: oi.Bucket, Object: oi.Name, VersionID: oi.VersionID}
}
}
switch govPerm {
case ErrAccessDenied:
if govPerm == ErrAccessDenied {
return errAuthentication
}
return nil
@@ -227,8 +216,7 @@ func enforceRetentionBypassForPut(ctx context.Context, r *http.Request, oi Objec
apiErr := isPutRetentionAllowed(oi.Bucket, oi.Name,
days, objRetention.RetainUntilDate.Time, objRetention.Mode,
false, r, cred, owner)
switch apiErr {
case ErrAccessDenied:
if apiErr == ErrAccessDenied {
return errAuthentication
}
return nil
@@ -239,8 +227,7 @@ func enforceRetentionBypassForPut(ctx context.Context, r *http.Request, oi Objec
apiErr := isPutRetentionAllowed(oi.Bucket, oi.Name,
days, objRetention.RetainUntilDate.Time,
objRetention.Mode, byPassSet, r, cred, owner)
switch apiErr {
case ErrAccessDenied:
if apiErr == ErrAccessDenied {
return errAuthentication
}
return nil
@@ -319,7 +306,7 @@ func checkPutObjectLockAllowed(ctx context.Context, rq *http.Request, bucket, ob
return mode, retainDate, legalHold, toAPIErrorCode(ctx, err)
}
rMode, rDate, err := objectlock.ParseObjectLockRetentionHeaders(rq.Header)
if err != nil {
if err != nil && !(replica && rMode == "" && rDate.IsZero()) {
return mode, retainDate, legalHold, toAPIErrorCode(ctx, err)
}
if retentionPermErr != ErrNone {

Some files were not shown because too many files have changed in this diff.