Compare commits

959 Commits

Author SHA1 Message Date
Shubhendu
e47e625f73 Added replication graphs for site replication metrics (#17951)
This dashboard graphs the metrics when site replication is enabled
across MinIO instances.

Signed-off-by: Shubhendu Ram Tripathi <shubhendu@minio.io>
2023-08-31 08:31:16 -07:00
yangw
b13fcaf666 fix: read atomic variable in clientDevNull round trip time (#17955) 2023-08-31 08:31:01 -07:00
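The fix above points at a classic data race: a round-trip-time value written by one goroutine and read by another without synchronization. A minimal Go sketch of the corrected pattern, with illustrative names (rttNanos is not MinIO's actual field):

```
package main

import (
	"fmt"
	"sync/atomic"
	"time"
)

// rttNanos is a hypothetical shared round-trip-time value,
// written by one goroutine and read by others.
var rttNanos atomic.Int64

func recordRTT(d time.Duration) {
	rttNanos.Store(d.Nanoseconds()) // atomic write
}

func currentRTT() time.Duration {
	// The bug class fixed here: reading a concurrently written
	// variable without Load() is a data race.
	return time.Duration(rttNanos.Load())
}

func main() {
	recordRTT(42 * time.Millisecond)
	fmt.Println(currentRTT())
}
```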
Harshavardhana
9458485e43 avoid double logging from healing (#17950) 2023-08-30 18:46:04 -07:00
Shubhendu
0ce9e00ffa Added node scanner and node drives graphs (#17949)
Signed-off-by: Shubhendu Ram Tripathi <shubhendu@minio.io>
2023-08-30 14:01:51 -07:00
Shubhendu
c778c381b5 Added new bucket replication graphs (#17947)
This PR adds new bucket replication graphs for better and granular
monitoring of bucket replication. Also arranged all replication graphs
together.

Signed-off-by: Shubhendu Ram Tripathi <shubhendu@minio.io>
2023-08-30 11:57:41 -07:00
Harshavardhana
0d1fbef751 fix: a possible crash in event target Close() (#17948)
these are possible crashes when the configured
target is still in init() state and never finished,
but a config delete was already initiated.
2023-08-30 07:27:45 -07:00
Poorna
b48bbe08b2 Add additional info for replication metrics API (#17293)
to track the replication transfer rate across different nodes,
number of active workers in use and in-queue stats to get
an idea of the current workload.

This PR also adds replication metrics to the site replication
status API. For site replication, prometheus metrics are
no longer at the bucket level - but at the cluster level.

Add prometheus metric to track credential errors since uptime
2023-08-30 01:00:59 -07:00
Minio Trusted
cce90cb2b7 Update yaml files to latest version RELEASE.2023-08-29T23-07-35Z 2023-08-30 02:17:56 +00:00
Harshavardhana
07b1281046 add queue_dir to help message for logger/audit targets 2023-08-29 16:07:35 -07:00
Ravind Kumar
3515b99671 Clarify Community Helm Chart (#17944)
There is recurring confusion between the Community Helm Chart in this repo and the MinIO Kubernetes Operator Helm Chart.

This change seeks to clarify the differences between the two charts and which ones are community-maintained vs. MinIO-maintained.
2023-08-29 11:27:48 -07:00
Krishnan Parthasarathi
6a67c277eb Reuse types for key-value, notification and retry (#17936) 2023-08-29 11:27:23 -07:00
Harshavardhana
1067dd3011 update minio-go v7.0.63 (#17937)
Signed-off-by: Harshavardhana <harsha@minio.io>
2023-08-28 20:02:14 -07:00
Harshavardhana
7cafdc0512 fix: skip access checks further for known buckets (#17934) 2023-08-28 15:16:41 -07:00
Harshavardhana
8a57b6bced use renameat2 Linux extension syscall (#17757)
this is a faster and safer alternative
on newer kernel versions.
2023-08-27 09:57:11 -07:00
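A sketch of how renameat2(2) is reachable from Go via golang.org/x/sys/unix; the RENAME_NOREPLACE flag shown is one illustrative mode (it fails with EEXIST instead of clobbering the destination), not necessarily the flag MinIO uses. Linux-only; older kernels return ENOSYS, which callers must fall back from.

```
//go:build linux

package main

import (
	"fmt"

	"golang.org/x/sys/unix"
)

func main() {
	// Atomic rename that refuses to overwrite an existing destination.
	err := unix.Renameat2(unix.AT_FDCWD, "src.tmp",
		unix.AT_FDCWD, "dst", unix.RENAME_NOREPLACE)
	if err != nil {
		fmt.Println("renameat2:", err) // e.g. EEXIST, or ENOSYS on old kernels
	}
}
```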
Andrea Longo
6f0ed2a091 Add equivalent mc ilm rule commands for some JSON rule examples (#17923) 2023-08-26 00:33:25 -07:00
Krishnan Parthasarathi
53abd25116 Don't log when object to be tiered is not found (#17924) 2023-08-25 23:34:16 -07:00
Harshavardhana
1ea7826c0e do not have to consider replicationTimestamp for healing and quorum (#17922)
replicationTimestamp might differ if there were retries
in replication and the retried attempt overwrote, in
quorum, enough shards with a newer timestamp, making the
existing timestamps on xl.meta invalid; we do not rely
on this value for anything external.

this is purely a hint for debugging purposes, and there
is no real value in it considering the object itself
is intact, so we do not have to spend time healing this
situation.

we may consider healing this situation in the future, but
that needs to be decoupled to make sure that we do not
over-calculate how much we have to heal.
2023-08-25 15:31:15 -07:00
Harshavardhana
97f4cf48f8 update wording on PR template 2023-08-25 11:57:37 -07:00
Anis Eleuch
0cde37be50 Reduce the number of calls to import bucket metadata (#17899)
For each bucket, save the bucket metadata
once and call the site replication hook once
2023-08-25 07:59:16 -07:00
jiuker
6aeca54ece fix: replace context by timeout-context from parent-context when selfSpeedTest (#17906) 2023-08-25 07:58:38 -07:00
Harshavardhana
124e28578c remove strict persistence requirements for List() .metacache objects (#17917)
.metacache objects are transient in nature, and are better left
to the page cache, used effectively to avoid spending more IOPs
on the disks.

this keeps incoming calls from being taxed heavily by
multiple large batch listings.
2023-08-25 07:58:11 -07:00
Harshavardhana
62c9e500de remove mTime requirement from pre-condition checks (#17916)
given a versionId the mtime is always the same; it
can never differ from its original value.

versionIds also do not conflict, since they are UUIDs
and unique practically forever.
2023-08-24 14:33:58 -07:00
jiuker
02cc18ff29 refactor the perf client for TTFB and TotalResponseTime (#17901) 2023-08-24 10:21:08 -07:00
Harshavardhana
ba4566e86d add missing IAM node metrics to cluster and node endpoint (#17908) 2023-08-24 09:26:37 -07:00
Krishnan Parthasarathi
87cb0081ec Retain current and upto NewerNoncurrentVersions versions (#17909)
applyNewerNoncurrentVersionLimit method should pass along versions
unaffected by NewerNoncurrentVersions rule for further ILM evaluation.
2023-08-24 09:26:29 -07:00
Poorna
4a6af93c83 mark replication target offline if network timeouts seen (#17907)
The regular target liveness check every 5 seconds will toggle
the state back once the target comes back online.
2023-08-24 09:24:26 -07:00
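A rough sketch of the toggle described above, assuming a hypothetical target type with an online flag; a ticker probes the endpoint every 5 seconds and flips the state back once it responds:

```
package main

import (
	"net/http"
	"sync/atomic"
	"time"
)

// target is a hypothetical replication target with an online flag.
type target struct {
	online   atomic.Bool
	endpoint string
}

// monitor marks the target offline on probe failure and re-checks
// every 5 seconds, toggling the state back when it recovers.
func (t *target) monitor() {
	client := http.Client{Timeout: 3 * time.Second}
	ticker := time.NewTicker(5 * time.Second)
	defer ticker.Stop()
	for range ticker.C {
		resp, err := client.Head(t.endpoint)
		if err != nil {
			t.online.Store(false)
			continue
		}
		resp.Body.Close()
		t.online.Store(true)
	}
}

func main() {}
```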
Minio Trusted
a2f0771fd3 Update yaml files to latest version RELEASE.2023-08-23T10-07-06Z 2023-08-23 23:17:59 +00:00
Harshavardhana
af564b8ba0 allow bootstrap to capture time-spent for each initializers (#17900) 2023-08-23 03:07:06 -07:00
Harshavardhana
adb8be069e tune-kafka targets to ensure timeout triggers on hung brokers (#17898)
Hung brokers can cause slowness across the entire system
when many callers are hung, leading to a large goroutine
build-up.
2023-08-22 20:26:35 -07:00
Klaus Post
7c8746732b Return cancelled storage calls as 499 (#17895)
Make upstream cancels more visible - right now they are just reported as "forbidden".
2023-08-22 11:10:41 -07:00
Klaus Post
f506117edb Reduce memory profiling rate (#17894)
Change profiling from every 4KB to every 128K, reducing the lock contention by a factor of 32.
2023-08-22 07:21:49 -07:00
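The knob behind this change is Go's runtime.MemProfileRate, which sets how many bytes of allocation occur between profiling samples; a minimal sketch using the 128 KiB figure from the message:

```
package main

import "runtime"

func init() {
	// Sample one allocation record per ~128 KiB allocated instead of a
	// much finer granularity, cutting profiler lock contention.
	runtime.MemProfileRate = 128 << 10
}

func main() {}
```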
Harshavardhana
1c5af7c31a serialize queueMRFHeal(), add timeouts and avoid normal build-ups (#17886)
we expect a certain level of IOPs and latency, so this is okay.

Fixes other miscellaneous bugs:

- hanging on mrfCh <- when the context is canceled
- queuing an MRF heal when the context is canceled
- remove the unused saveStateCh channel
2023-08-21 16:44:50 -07:00
Harshavardhana
3a0125fa1f remove unexpected logging from peer calls (#17888)
also make sure RequestID is set for system logs
2023-08-21 14:25:24 -07:00
Daniel Valdivia
328cb0a076 Pass environment variable to control session length to console (#17885)
Signed-off-by: Daniel Valdivia <18384552+dvaldivia@users.noreply.github.com>
2023-08-21 11:55:43 -07:00
Shubhendu
c3c8441a1d Corrected the count of buckets and objects graphs (#17883)
In a distributed setup with a load balancer, any server may
randomly report the metrics `minio_cluster_bucket_total` and
`minio_cluster_usage_object_total`, so when graphing them we
should take the max of the reported values.

Signed-off-by: Shubhendu Ram Tripathi <shubhendu@minio.io>
2023-08-21 09:04:38 -07:00
jiuker
fa2a8d7209 fix: drain the req.body into io.Discard correctly (#17881) 2023-08-21 01:09:07 -07:00
jiuker
e3ea97c964 fix: replace req context by locker context (#17880) 2023-08-19 22:09:07 -07:00
Mathieu Parent
7219ae530e helm: allow to configure statement policy effect (#17700)
Signed-off-by: Mathieu Parent <mathieu.parent@insee.fr>
2023-08-19 07:39:11 -07:00
Andreas Auernhammer
8f8f8854f0 update minio/kes-go dep to v0.2.0 (#17850)
This commit updates the minio/kes-go dependency
to v0.2.0 and updates the existing code to work
with the new KES APIs.

The `SetPolicy` handler got removed since it
may not get implemented by KES at all and could
not have been used in the past since stateless KES
is read-only w.r.t. policies and identities.

Signed-off-by: Andreas Auernhammer <hi@aead.dev>
2023-08-19 07:37:53 -07:00
Anis Eleuch
4c6869cd9a ilm: Fix cleaning non current null versions (#17876) 2023-08-18 12:55:47 -07:00
Harshavardhana
bc7c0d8624 if object is a delete marker it must skip tags filter in ILM (#17861) 2023-08-18 09:36:23 -07:00
Harshavardhana
11dfc817f3 do not log client canceled events (#17838) 2023-08-17 14:53:43 -07:00
Harshavardhana
dde1a12819 fix: validate incoming uploadID to be base64 encoded (#17865)
Bonus fixes include

- no need to write the final xl.meta; renameData() does this
  already, saving some IOPs.

- make sure to purge the multipart directory properly using
  a recursive delete, otherwise entries can easily pile up
  and rely on the stale-uploads cleanup.

fixes #17863
2023-08-17 09:37:55 -07:00
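A sketch of the validation this implies, assuming uploadIDs are expected to decode as base64 before being used to build any path on disk (the exact encoding variant MinIO expects is an assumption here):

```
package main

import (
	"encoding/base64"
	"errors"
	"fmt"
)

var errInvalidUploadID = errors.New("malformed uploadId")

// checkUploadID rejects uploadIDs that are not valid base64 up front,
// before they can influence any filesystem path.
func checkUploadID(id string) error {
	if _, err := base64.RawURLEncoding.DecodeString(id); err != nil {
		return errInvalidUploadID
	}
	return nil
}

func main() {
	fmt.Println(checkUploadID("../../etc/passwd")) // malformed uploadId
}
```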
Minio Trusted
065fd094d1 Update yaml files to latest version RELEASE.2023-08-16T20-17-30Z 2023-08-16 21:22:41 +00:00
Harshavardhana
d09351bb10 update console release to v0.37.0
Signed-off-by: Harshavardhana <harsha@minio.io>
2023-08-16 13:17:30 -07:00
bh4t
25d38e030b Minor change to text under license page (#17864)
Made a minor change moving the text and link
regarding license details.
2023-08-16 11:28:34 -07:00
Harshavardhana
9ebd10d3f4 Revert "Include SuccessorModTime for FileInfo quorum (#17732)" (#17860)
This reverts commit bf3901342c.

This is to fix a regression caused when there are inconsistent
versions, but one version is in quorum. SuccessorModTime issue
must be fixed differently.
2023-08-16 07:51:33 -07:00
Harshavardhana
8a9b886011 update grafana dashboard with disk -> drive rename (#17857) 2023-08-15 16:04:20 -07:00
Cesar Celis Hernandez
21f0d6b549 Removing deprecated var from minio helm (#17856) 2023-08-15 13:33:55 -07:00
Harshavardhana
3ba927edae fix: batch status reporting after complete (#17852)
batch status can wait perpetually after completion
due to a race between MetricsHandler() returning
the active metrics at 1-second intervals and the
deletion of metrics after job completion.

this PR ensures that we keep the 'status' around
for a while, i.e., up to 24 hours for all batch jobs.
2023-08-15 12:22:30 -07:00
Harshavardhana
c4ca0a5a57 add two more drive metrics when metrics is available (#17854) 2023-08-15 10:55:47 -07:00
Klaus Post
406ea4f281 Fix distributed listing not able to resume (#17855)
Two fields in lifecycles made GOB encoding consistently fail with `gob: type lifecycle.Prefix has no exported fields`.

This meant that in distributed systems listings would never be able to continue and would restart on every call.

Fix issues and be sure to log these errors at least once per bucket. We may see some connectivity errors here, but we shouldn't hide them.
2023-08-15 07:45:25 -07:00
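The failure mode is easy to reproduce: encoding/gob refuses any type whose fields are all unexported. A minimal sketch with a struct shaped roughly like the lifecycle Prefix mentioned above (field names are illustrative):

```
package main

import (
	"bytes"
	"encoding/gob"
	"fmt"
)

// prefix mimics a struct with only unexported fields.
type prefix struct {
	set bool
	val string
}

func main() {
	var buf bytes.Buffer
	err := gob.NewEncoder(&buf).Encode(prefix{set: true, val: "logs/"})
	fmt.Println(err) // gob: type main.prefix has no exported fields
}
```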
Harshavardhana
64aa7feabd allow specifying lower disks for Walk() (#17829)
useful when you may want Walk() with
reduced quorum requirements.
2023-08-14 21:32:39 -07:00
Poorna
875f4076ec site replication: avoid retries when peer is offline (#17853) 2023-08-14 21:31:41 -07:00
Harshavardhana
4643efe6be fix: add deadline worker pattern for local disk removers (#17845) 2023-08-14 12:28:13 -07:00
Harshavardhana
b760137e1d fix: add proxyByNode for batch jobs as part of their jobId (#17844) 2023-08-11 13:12:35 -07:00
Harshavardhana
5f56f441bf fix: apply common notification code with content-type (#17843) 2023-08-11 11:34:43 -07:00
Klaus Post
96a22bfcbb fix: wrapped io.EOF during ListObjects() (#17842)
When listing, getObjectFileInfo can return `io.EOF` if a file is being written.

When we wrap the error it will *not* retry upstream, since `io.EOF` is a valid return value.

Allow one retry before returning errors and canceling the listing.
2023-08-11 09:47:16 -07:00
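A sketch of the retry shape described above, with getObjectFileInfo as a hypothetical stand-in; the point is that io.EOF is a legitimate transient result while a file is mid-write, so allow one retry before surfacing the error:

```
package main

import (
	"errors"
	"fmt"
	"io"
)

// getObjectFileInfo is a hypothetical stand-in that can return io.EOF
// while the object is still being written.
func getObjectFileInfo(name string) (string, error) {
	return "", io.EOF
}

func statWithRetry(name string) (string, error) {
	fi, err := getObjectFileInfo(name)
	if errors.Is(err, io.EOF) {
		// File may be mid-write; retry once before giving up.
		fi, err = getObjectFileInfo(name)
	}
	return fi, err
}

func main() {
	_, err := statWithRetry("bucket/object")
	fmt.Println(err)
}
```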
Harshavardhana
6c59b33fb1 add community contribution credits and update PR template (#17840) 2023-08-10 22:08:38 -07:00
Poorna
dfaf735073 replication: fix queuing of large uploads (#17831)
Fixes regression from #17687
2023-08-10 15:48:42 -07:00
Harshavardhana
0d2b7bf94d upgrade console dependency to v0.36.0 (#17839) 2023-08-10 15:48:11 -07:00
Anis Eleuch
7fcfde7f07 s3: Pick a pool with >85% if all other pools are in suspended state (#17826) 2023-08-10 11:06:31 -07:00
jiuker
b1391d1991 feat: support perf client to show TX from client to server (#17718) 2023-08-10 07:14:46 -07:00
Harshavardhana
49c8e16410 update CI/CD to go1.21 (#17828) 2023-08-10 07:13:58 -07:00
Minio Trusted
0e93681589 Update yaml files to latest version RELEASE.2023-08-09T23-30-22Z 2023-08-10 00:09:18 +00:00
Harshavardhana
eb55034dfe optimize deletePrefix, use direct set location via object name (#17827)
instead of fanning out the calls for an object force-delete,
we can assume the set location and avoid fan-out calls

Co-authored-by: Krishnan Parthasarathi <krisis@users.noreply.github.com>
2023-08-09 16:30:22 -07:00
Harshavardhana
c45bc32d98 skip disks under scanning when healing disks (#17822)
Bonus:

- avoid DiskInfo() calls when blocks are missing;
  instead heal the object using an MRF operation.

- change max_sleep to 250ms; beyond that we will
  not stop healing.
2023-08-09 12:51:47 -07:00
Harshavardhana
6e860b6dc5 count all versions as part of DeleteAllVersionsAction (#17821) 2023-08-09 08:55:19 -07:00
Harshavardhana
b732a673dc reduce logging in bucket replication in retry scenarios (#17820) 2023-08-08 13:27:40 -07:00
Shubhendu
b6b6d6e8d8 Removed replication dashboard (#17815)
As all replication metrics have moved to the bucket level, all
replication graphs are now added under minio-bucket.json. Removing
the independent replication dashboard.

Signed-off-by: Shubhendu Ram Tripathi <shubhendu@minio.io>
2023-08-08 08:13:45 -07:00
Yang Wu
23e4895dfc Create metrics slice when necessary (#17809) 2023-08-07 02:21:22 -07:00
Harshavardhana
8666c55ca6 fix: do not use PrefixEnabled() logic to ignore valid objects (#17677)
valid objects with valid replication metadata must still
honor the older metadata even after the Prefix was disabled;
ignoring them can lead to unexpected results.

Allow them always during the READ phase.
2023-08-05 13:56:01 -07:00
Anis Eleuch
a3f00c5d5e batch: Strict unmarshal yaml document to avoid user made typos (#17808)
// UnmarshalStrict is like Unmarshal except that any fields that are found
// in the data that do not have corresponding struct members, or mapping
// keys that are duplicates, will result in
// an error.
2023-08-05 13:51:48 -07:00
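A sketch with gopkg.in/yaml.v2, whose UnmarshalStrict behaves exactly as the quoted doc comment describes; the job struct is illustrative, not MinIO's actual batch schema:

```
package main

import (
	"fmt"

	"gopkg.in/yaml.v2"
)

// job is an illustrative config struct, not the real batch schema.
type job struct {
	Bucket string `yaml:"bucket"`
	Prefix string `yaml:"prefix"`
}

func main() {
	doc := []byte("bucket: mybucket\nprefx: logs/\n") // note the typo "prefx"
	var j job
	err := yaml.UnmarshalStrict(doc, &j)
	fmt.Println(err) // yaml: unmarshal errors: ... field prefx not found in type main.job
}
```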
Poorna
26c23b30f4 replication: set context timeout for NewMultipartUpload calls (#17807) 2023-08-05 12:27:07 -07:00
Anis Eleuch
a436fd513b track client disconnections properly for all ListObjects calls (#17804)
Currently ListObjects* calls were returning 200 OK for timed-out clients,
which makes debugging via `mc admin trace` very hard.
2023-08-04 15:57:27 -07:00
Minio Trusted
3bc34ffd94 Update yaml files to latest version RELEASE.2023-08-04T17-40-21Z 2023-08-04 19:12:46 +00:00
Harshavardhana
533cd8d6df fix: batch replication pull must preserve versionID (#17805)
batch replication pull must preserve versionID regardless
of destination bucket versioning configuration.

This is similar to the issue with decommissioning and rebalancing
2023-08-04 12:09:10 -07:00
Harshavardhana
cb089dcb52 error out by default beyond 10000 versions per object (#17803)
```
You've exceeded the limit on the number of versions you can create on this object
```
2023-08-04 10:40:21 -07:00
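A sketch of the guard this implies; the limit and error text come from the commit message, while the check's placement is an assumption:

```
package main

import (
	"errors"
	"fmt"
)

const maxVersionsPerObject = 10000

var errMaxVersionsExceeded = errors.New(
	"You've exceeded the limit on the number of versions you can create on this object")

// checkVersionLimit is an illustrative pre-write guard.
func checkVersionLimit(existingVersions int) error {
	if existingVersions >= maxVersionsPerObject {
		return errMaxVersionsExceeded
	}
	return nil
}

func main() {
	fmt.Println(checkVersionLimit(10000))
}
```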
Harshavardhana
e0329cfdbb update console v0.35.1 2023-08-03 20:25:21 -07:00
Harshavardhana
239ccc9c40 fix: crash in globalTierJournal when TierConfig is not initialized (#17791) 2023-08-03 14:16:15 -07:00
Poorna
b762fbaf21 sts: validate if iam subsystem initialized in handlers (#17796) 2023-08-03 13:24:25 -07:00
Praveen raj Mani
0285df5a02 fix: prioritize audit_webhook and logger_webhook ENVs over the config KVS (#17783) 2023-08-03 02:47:07 -07:00
Harshavardhana
45fb375c41 allow healing to prefer local disks over remote (#17788) 2023-08-03 02:18:18 -07:00
Harshavardhana
4a4950fe41 fix: honor requested allow origin settings properly (#17789)
fixes #17778
2023-08-02 20:41:21 -07:00
Anis Eleuch
1664fd8bb1 Avoid logging errors twice during transitioned objects expiration (#17782) 2023-08-02 09:06:03 -07:00
Harshavardhana
21cdd2bf5d avoid overwriting metrics on success, save it in defer (#17780) 2023-08-01 22:19:56 -07:00
Harshavardhana
0153f96a20 add deadlines for readMetadata() in listing (#17776)
Bonus: also skip spending time looking for xl.json

- Listing()
- Delete()
2023-08-01 21:52:31 -07:00
Harshavardhana
a7a7533190 add new errors for Disks with timeouts (#17770) 2023-08-01 12:47:50 -07:00
Poorna
311380f8cb replication resync: fix queueing (#17775)
Assign resync of all versions of object to the same worker to avoid locking
contention. Fixes parallel resync implementation in #16707
2023-08-01 11:51:15 -07:00
Harshavardhana
b0f0e53bba fix: make sure to correctly initialize health checks (#17765)
health checks were missing for drives replaced, since:

- HealFormat() would replace the drives without a health check
- disconnected drives, when they reconnect via connectEndpoint(),
  also lost health checks in that loop for local disks; merge
  these into a single code path.
- other than this, separate the cleanup and health-check variables
  to avoid overloading them with similar requirements.
- also ensure that we compete via a context selector for disk monitoring
  so that canceled disks don't linger around waiting for
  the ticker to trigger.
- allow disabling active monitoring.
2023-08-01 10:54:26 -07:00
Klaus Post
004f1e2f66 Fix trailing header signature mismatch (#17774)
Seems like clients may omit a newline at the end of the trailer chunk. Each header should end with a newline. Add that if missing.

Fixes #17662
2023-08-01 08:45:57 -07:00
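A sketch of the shape of the fix: tolerate a missing trailing newline on the trailer chunk by appending one before parsing (the buffer handling here is illustrative, not the actual signature-verification code):

```
package main

import (
	"bytes"
	"fmt"
)

// normalizeTrailer appends the newline some clients omit after the
// last trailer header, so every header line parses consistently.
func normalizeTrailer(b []byte) []byte {
	if len(b) > 0 && !bytes.HasSuffix(b, []byte("\n")) {
		b = append(b, '\n')
	}
	return b
}

func main() {
	fmt.Printf("%q\n", normalizeTrailer([]byte("x-amz-checksum-crc32c:AAAAAA==")))
}
```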
Harshavardhana
2fa561f22e do not crash on invalid metric values (#17764)
```
minio[1032735]: panic: label value "\xc0.\xc0." is not valid UTF-8
minio[1032735]: goroutine 1781101 [running]:
minio[1032735]: github.com/prometheus/client_golang/prometheus.MustNewConstMetric(...)
```

log such errors for investigation
2023-08-01 00:55:39 -07:00
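A sketch of the defensive check this implies: validate UTF-8 before handing a label value to the Prometheus client (whose MustNewConstMetric panics on invalid label values), and log instead of crashing:

```
package main

import (
	"fmt"
	"unicode/utf8"
)

// safeLabel reports whether v is usable as a Prometheus label value.
func safeLabel(v string) bool {
	return utf8.ValidString(v)
}

func main() {
	bad := "\xc0.\xc0." // the value from the panic above
	if !safeLabel(bad) {
		fmt.Printf("skipping metric: label %q is not valid UTF-8\n", bad)
	}
}
```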
Harshavardhana
81be718674 fix: optimize DiskInfo() call avoid metrics when not needed (#17763) 2023-07-31 15:20:48 -07:00
Rajat Shinde
8162fd1e20 fix: couple of typos in README.md (#17754) 2023-07-31 11:17:13 -07:00
Sho Ce
49a1e2f98e update-notifier.go: misleading version age message (#17750) 2023-07-31 08:36:19 -07:00
Klaus Post
684c46369c Send events for extracted objects (#17760)
Fixes #17759
2023-07-31 08:33:51 -07:00
Martin Zruban
715c9e3ca9 fix: allow mc executable inside minio containers (#17755) 2023-07-31 00:13:41 -07:00
Harshavardhana
73edd5b8fd introduce 'mc admin config set alias/ api odirect=on' (#17753)
change disable_odirect=off -> odirect=on to make it
easier to understand, instead of using a double
negative.
2023-07-31 00:12:53 -07:00
Harshavardhana
5e5bdf5432 capture total errors data availability and any timeout errors (#17748) 2023-07-29 23:26:26 -07:00
Harshavardhana
48a3e9bc82 update NTP package to fix ipv6 bug (#17752) 2023-07-29 17:43:50 -07:00
Harshavardhana
f13cfcb83e allow disabling O_DIRECT for write ops (#17751)
on really slow systems, O_DIRECT simply kills the drives;
allow for a way to disable it.
2023-07-29 15:17:56 -07:00
Anis Eleuch
9c0e8cd15b logger: Avoid slow calls in http logger Send() function (#17747)
Send() is synchronous and can affect the latency of S3 requests when the
logger buffer is full.

Avoid checking if the HTTP target is online or not and increase the
workers anyway since the buffer is already full.

Also, avoid log flooding when the audit target is down.
2023-07-29 12:49:18 -07:00
Harshavardhana
ad2a70ba06 update console v0.34.0
Signed-off-by: Harshavardhana <harsha@minio.io>
2023-07-28 17:24:05 -07:00
Harshavardhana
731e03fe5a add ReadFileStream deadline for disk call (#17745)
time out the reader side, if hung, via the disk max timeout
2023-07-28 15:37:53 -07:00
jiuker
f9d029c8fa fix: prevCertificate save not correct in loop (#17744)
fixes certificates not being saved correctly in the loop

Co-authored-by: guozhi.li <guozhi.li@daocloud.io>
2023-07-28 12:57:49 -07:00
Anis Eleuch
7057d00a28 s3: Return invalid bucket name the first thing in all S3 calls (#17742) 2023-07-28 10:49:20 -07:00
Harshavardhana
114fab4c70 export cluster health as prometheus metrics (#17741) 2023-07-28 01:16:53 -07:00
Harshavardhana
c2edbfae55 update all deps and add credits (#17740) 2023-07-27 12:43:25 -07:00
ruspaul013
a92cb66468 Get the signed headers in the order they were signed (#17690)
use pSignValues to get signed headers in order
2023-07-27 11:45:30 -07:00
ruspaul013
535f97ba61 check if metadata headers/url values are equal with signed headers (#17737) 2023-07-27 11:44:56 -07:00
drivebyer
14ebd82dbd fix: missing disk metrics when query metric api from peer (#17738) 2023-07-27 11:44:13 -07:00
Klaus Post
aea7b08a47 Update nats server dependency (#17734)
I don't see why we shouldn't.

Fixes #17730
2023-07-27 07:35:52 -07:00
Harshavardhana
47dcfcbdd4 introduce deadlines on READ operations (#17724) 2023-07-27 07:33:05 -07:00
Krishnan Parthasarathi
bf3901342c Include SuccessorModTime for FileInfo quorum (#17732) 2023-07-26 17:04:16 -07:00
Shubhendu
e1731d9403 Added bucket specific grafana dashboard (#17727)
Signed-off-by: Shubhendu Ram Tripathi <shubhendu@minio.io>
2023-07-26 15:10:11 -07:00
Harshavardhana
b28bcad11b avoid Access() calls on known bucket paths (#17719) 2023-07-26 11:31:40 -07:00
Harshavardhana
a7c71e4c6b protect disk monitoring to avoid busy loop configuration (#17723) 2023-07-25 20:02:22 -07:00
Poorna
1a42693d68 replication: limit larger uploads to a subset of workers (#17687)
Limit large uploads (> 128MiB) to a max of 10 workers; the intent is to keep
larger uploads from using all replication bandwidth, giving room for smaller
uploads to sync faster.
2023-07-25 20:02:02 -07:00
Harshavardhana
e7b60c4d65 Add slow drive timeouts to match with active disk monitoring (#17701)
allow active disk monitoring to be configurable, and use
it to add deadlines in various call layers for various
syscalls.
2023-07-25 16:58:31 -07:00
Poorna
f95129894d Use decrypted object size while computing object size summary (#17717)
Corrects an issue with encrypted versioned objects being reported under
the `unversioned` bin in the object version histogram
2023-07-24 17:13:25 -07:00
Harshavardhana
c32c71c836 allow DNS cache TTL to be configurable (#17709)
this is added for now as a hidden variable
2023-07-24 15:13:35 -07:00
Harshavardhana
14e1ace552 remove serializing WalkDir() across all buckets/prefixes on SSDs (#17707)
slower drives get knocked off via active monitoring because
they are too slow; we do not need to block calls arbitrarily.

Serializing adds latency to already-slow calls; remove
it for SSDs/NVMes

Also, add a selection with context when writing to `out <-`
channel, to avoid any potential blocks.
2023-07-24 09:30:19 -07:00
drivebyer
a7fb3a3853 fix: Create metrics slice when necessary in getCacheMetrics() (#17711) 2023-07-24 08:40:21 -07:00
Klaus Post
2da4bd5f1a Revert "don't error when asked for 0-based range on empty objects (#17708) (#17713)
Revert "don't error when asked for 0-based range on empty objects (#17708)"

This reverts commit 7e76d66184.

There is no valid way to specify offsets in a 0-byte file. Blame it on the [RFC](https://datatracker.ietf.org/doc/html/rfc7233#section-4.4)

> The 416 (Range Not Satisfiable) status code indicates that none of the ranges in the 
> request's Range header field (Section 3.1) overlap the current extent of the selected resource...

A request for "bytes=0-" is a request for the first byte of a resource. If the resource is 0-length, 
the range [0,0] does not overlap the resource content and the server responds with an error.
2023-07-24 07:56:28 -07:00
flisk
7e76d66184 don't error when asked for 0-based range on empty objects (#17708)
In a reverse proxying setup, a proxy in front of MinIO may attempt to
request objects in slices for enhanced cache efficiency. Since such
a proxy cannot have prior knowledge of how large a requested resource is,
it usually sends a header of the form:

        Range: 0-$slice_size

... and, depending on the size of the resource, expects either:

- an empty response, if $resource_size == 0
- a full response, if $resource_size <= $slice_size
- a partial response, if $resource_size > $slice_size

Prior to this change, MinIO would respond 416 Range Not Satisfiable if a
client tried to request a range on an empty resource. This behavior is
technically consistent with RFC9110[1]; however, it renders sliced
reverse proxying, such as that implemented in Nginx, broken in the case of
empty files. Nginx itself seems to break this convention to enable
"useful" responses in these cases, and MinIO should probably do that
too.

[1]: https://www.rfc-editor.org/rfc/rfc9110#byte.ranges
2023-07-23 00:10:03 -07:00
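A client-side sketch of the slicing pattern both of these commits discuss: issue an open-ended range and branch on the status code (206 for a partial response, 200 for a full one, and, for 0-byte objects, the contested 416). URL and slice size are placeholders:

```
package main

import (
	"fmt"
	"net/http"
)

func fetchSlice(url string, sliceSize int) error {
	req, err := http.NewRequest(http.MethodGet, url, nil)
	if err != nil {
		return err
	}
	req.Header.Set("Range", fmt.Sprintf("bytes=0-%d", sliceSize-1))
	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		return err
	}
	defer resp.Body.Close()

	switch resp.StatusCode {
	case http.StatusPartialContent: // resource larger than the slice
	case http.StatusOK: // resource fit in one response
	case http.StatusRequestedRangeNotSatisfiable:
		// Per RFC 7233/9110 this is what a 0-byte object yields; a
		// slicing proxy has to treat it as an empty response to work.
	}
	return nil
}

func main() {
	_ = fetchSlice("http://localhost:9000/bucket/object", 1<<20)
}
```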
Harshavardhana
7764f4a8e3 return tags as part of Head/Get calls (#17635)
AWS S3 only returns the tag count; along with
that, we must return the tags as well to avoid
another metadata call to the server.
2023-07-22 07:19:43 -07:00
Harshavardhana
e1094dde08 update MinIO replication dashboard with latest metrics 2023-07-21 17:30:04 -07:00
Minio Trusted
4894c67196 Update yaml files to latest version RELEASE.2023-07-21T21-12-44Z 2023-07-21 22:28:35 +00:00
Doom And Love
d004c45386 grafana-dashboard: Update scrape_jobs variable to be single select (#17696)
Set `includeAll` and `multi` to false since this dashboard only works with a single value being selected
2023-07-21 14:12:44 -07:00
Kaan Kabalak
6624f970c0 Fix spelling of 'already' across repository (#17703) 2023-07-21 08:45:08 -07:00
Harshavardhana
de684dc122 update console release v0.33.0
Signed-off-by: Harshavardhana <harsha@minio.io>
2023-07-21 01:20:13 -07:00
Harshavardhana
331bdc2245 fix: remove CompleteMultipartUpload() 200 OK response for blocking calls (#17699)
sending a whitespace character with CompleteMultipartUpload()
and 200 OK was an AWS S3-compatible implementation detail,
and it was expected that the client SDK look for both
successful XML as well as error XML with a 200 OK.

But this is not useful anymore on MinIO, since we do not
have any large delayed coalescing of parts anymore.
2023-07-20 22:14:38 -07:00
Harshavardhana
e12ab486a2 avoid using os.Getenv for internal code, use env.Get() instead (#17688) 2023-07-20 07:52:49 -07:00
Krishnan Parthasarathi
9eeee92d36 Add deletemarker_total metric (#17689) 2023-07-20 07:52:32 -07:00
Anis Eleuch
756d6aa729 fix: report correct pool/set/disk indexes for offline disks (#17695) 2023-07-20 07:48:21 -07:00
Harshavardhana
bddd53d6d2 fix: retry listing in decommissioning if it fails perpetually (#17682) 2023-07-19 13:09:37 -07:00
Harshavardhana
c0a5bdaed9 update grafana dashboard JSON with the new metrics (#17683) 2023-07-19 08:16:04 -07:00
jiuker
a99cd825ab fix: byHost realTime metrics API (#17681) 2023-07-18 23:50:30 -07:00
Harshavardhana
6426b74770 move bucket centric metrics to /minio/v2/metrics/bucket handlers (#17663)
users/customers no longer keep a reasonable number of buckets,
which is why we must avoid overpopulating cluster endpoints and instead
move the bucket monitoring to a separate endpoint.

Some of this is a breaking change for a couple of metrics, but
it is imperative that we do it to improve the responsiveness of
our Prometheus cluster endpoint.

Bonus: Added new cluster metrics for usage, objects and histograms
2023-07-18 22:25:12 -07:00
Harshavardhana
4f257bf1e6 pick internode interface properly via globalLocalNodeName (#17680)
current code will not pick the right interface name
if --address or --interface is not provided.
2023-07-18 19:18:11 -07:00
Minio Trusted
73a056999c Update yaml files to latest version RELEASE.2023-07-18T17-49-40Z 2023-07-18 21:44:57 +00:00
Krishnan Parthasarathi
0120ff93bc admin-info: add DeleteMarkers count (#17659) 2023-07-18 10:49:40 -07:00
Anis Eleuch
49638fa533 s3: Delete Bucket should not recreate bucket if it does not exist (#17676)
Also return Bucket Not Found error in the same use case.
2023-07-18 09:32:19 -07:00
Harshavardhana
76510dac8a upgrade all deps to their latest releases (#17671) 2023-07-17 21:12:48 -07:00
Shubhendu
7a3a7b19e5 Added a start script to inspect command output (#17591)
Using this script, after decryption we should be able to bring up the
MinIO instance with the same configuration.

Signed-off-by: Shubhendu Ram Tripathi <shubhendu@minio.io>
2023-07-17 14:15:28 -07:00
Harshavardhana
24e86d0c59 avoid passing around poolIdx, setIdx instead pass the relevant disks (#17660) 2023-07-17 09:52:05 -07:00
drivebyer
9b5c2c386a fix: return error when requested interface has no stats available (#17666) 2023-07-17 01:14:01 -07:00
jiuker
d118031ed6 fix: when Origin: null is set return back '*' for allow origins (#17651) 2023-07-15 12:15:06 -07:00
Anis Eleuch
341a89c00d return a descriptive error when loading any IAM item fails (#17654)
Sometimes IAM fails to load certain items, which could be a user,
a service account or a policy, but without enough information for
us to debug.

This commit will create a more descriptive error to make it easier to
debug in such situations.
2023-07-14 20:17:14 -07:00
Anis Eleuch
df29d25e6b return different status code for internode communication (#17655)
mc admin trace -a will be able to quickly show a
401 Unauthorized header to pinpoint trivial issues
between nodes, such as wrong root
credentials and skewed time.
2023-07-14 18:34:55 -07:00
Harshavardhana
3e196fa7b3 fix: ILM newer noncurrent version limit must return correct versions (#17652)
objects/versions that are not expired via NewerNoncurrentVersions
must be properly returned to be applied under further ILM actions.

Previously this caused legitimately expired objects to be missed
from expiration.
2023-07-14 16:42:35 -07:00
drivebyer
04c792476f fix: provide a possible slice cap for heal failed metrics items (#17647)
Signed-off-by: Wu <yang.wu@daocloud.io>
2023-07-14 11:02:45 -07:00
Harshavardhana
005a4a275a add more bootstrap messages to provide latency (#17650)
- simplify refreshing bucket metadata; wait() depends on
  how fast the bucket metadata can load.

- simplify resync to start in a single pass.
2023-07-14 04:00:29 -07:00
Harshavardhana
bdddf597f6 shuffle buckets randomly before being scanned (#17644)
this randomness is needed to avoid scanning
the same buckets across different erasure sets
in the same order.

allow random buckets to be scanned instead,
allowing a wider spread of ILM and replication
checks.

Additionally, do not loop over twice to fill
the channel; fill the channel regardless of
whether the bucket is new or old.
2023-07-14 02:25:40 -07:00
Aditya Manthramurthy
bb6921bf9c Send AuditLog via new middleware fn for admin APIs (#17632)
A new middleware function is added for admin handlers, including options
for modifying certain behaviors. This admin middleware:

- sets the handler context via reflection in the request and sends AuditLog
- checks for object API availability (skipping it if a flag is passed)
- enables gzip compression (skipping it if a flag is passed)
- enables header tracing (adding body tracing if a flag is passed)

While the new function is a middleware, due to the flags used for
conditional behavior modification it is invoked in each route
registration call.

To try to ensure that no regressions are introduced, the following
changes were done mechanically mostly with `sed` and regexp:

- Remove defer logger.AuditLog in admin handlers
- Replace newContext() calls with r.Context()
- Update admin routes registration calls

Bonus: remove unused NetSpeedtestHandler

Since the new adminMiddleware function checks for object layer presence
by default, we need to pass the `noObjLayerFlag` explicitly to admin
handlers that should work even when it is not available. The following
admin handlers do not require it:

- ServerInfoHandler
- StartProfilingHandler
- DownloadProfilingHandler
- ProfileHandler
- SiteReplicationDevNull
- SiteReplicationNetPerf
- TraceHandler

For these handlers adminMiddleware does not check for the object layer
presence (disabled by passing the `noObjLayerFlag`), and for all other
handlers, the pre-check ensures that the handler is not called when the
object layer is not available - the client would get a
ErrServerNotInitialized and can retry later.

This `noObjLayerFlag` is added based on existing behavior for these
handlers only.
2023-07-13 14:52:21 -07:00
Shireesh Anjal
bb63375f1b Do not consider subnet api key as secret (#17643)
As it is required by mc and console to communicate with subnet
2023-07-13 12:24:47 -07:00
Klaus Post
4f89e5bba9 Add active disk health checks (#17539)
Add check every 2 minutes to see if a write+read operation can complete.

If disk is unresponsive for 2 minutes or returns errFaultyDisk, take it offline.
2023-07-13 11:41:55 -07:00
jiuker
183428db03 feat: Implement 'mc support top net' (#17598) 2023-07-13 11:41:19 -07:00
Shireesh Anjal
fc6d873758 Use os.ReadFile instead of ioutil.ReadFile (#17649)
ioutil.ReadFile is deprecated and also doesn't work with certain kinds
of symlinks.
2023-07-13 09:07:10 -07:00
Poorna
5e2f8d7a42 replication: Simplify mrf requeueing and add backlog handler (#17171)
Simplify MRF queueing and add backlog handler

- Limit re-tries to 3 to avoid repeated re-queueing. Fall-offs
are re-tried when the scanner revisits this object or upon access.

- Change MRF to have each node process only its MRF entries.

- Collect MRF backlog by the node to allow for current backlog visibility
2023-07-12 23:51:33 -07:00
Shubhendu
9b9871cfbb Added endpoint and versions attributes to KMS details (#17350)
Now it would list details of all KMS instances with additional
attributes `endpoint` and `version`. In the case of a k8s-based
deployment, the list would consist of a single entry.

Signed-off-by: Shubhendu Ram Tripathi <shubhendu@minio.io>
2023-07-12 23:50:38 -07:00
guangwu
f80b6926d3 chore: fix minor issues reported via staticcheck (#17639) 2023-07-12 20:33:11 -07:00
Shubhendu
6dc55fe5ed Corrected the API name for audit logging purpose (#17642)
This records the correct API name so that any verification
around audit logs, to figure out whether required
APIs are called the required number of times, would be correct.
Here, in the case of policy attach, the API `AttachDetachPolicyBuiltin`
would be called with `requestPath` as `/minio/admin/v3/idp/builtin/policy/attach`,
and in the case of policy detach the value would be `/minio/admin/v3/idp/builtin/policy/detach`.

Signed-off-by: Shubhendu Ram Tripathi <shubhendu@minio.io>
2023-07-12 15:38:49 -07:00
Harshavardhana
2d1cda2061 fix: do not os.Exit(1) while writing goroutines during shutdown (#17640)
Also add jitter to the shutdown poll, to verify whether the shutdown
sequence can finish within 500ms; this reduces the overall
time taken during a "restart" of the service.

Provides a speedup for `mc admin service restart` during
active I/O, and also ensures that systemd doesn't treat the
returned 'error' as a failure; certain configurations in
systemd can cause it to 'auto-restart' the process by itself,
which can interfere with `mc admin service restart`.

It can be observed that restarting the service is
now much snappier.
2023-07-12 07:18:30 -07:00
Harshavardhana
a566bcf613 treat 0-byte objects to honor same quorum as delete marker (#17633)
on unversioned buckets it's possible that 0-byte objects
might lose quorum on flaky systems; allow them to be treated
the same as DELETE markers, since practically speaking they
have no content.
2023-07-11 21:53:49 -07:00
Minio Trusted
f6040dffaf Update yaml files to latest version RELEASE.2023-07-11T21-29-34Z 2023-07-11 23:25:30 +00:00
Klaus Post
9885a0a6af Fix hasSpaceFor in SNSD setup (#17630)
If a drive is offline or full, we divide by 0.

Fixes #17629

Bonus: Reject when any valid disk exceeds minimum inode threshold.
2023-07-11 14:29:34 -07:00
Kaan Kabalak
f64d62b01d Fix style of logOnceIf calls w/unique identifiers (#17631) 2023-07-11 13:17:45 -07:00
Harshavardhana
82075e8e3a use strconv variants to improve on performance per 'op' (#17626)
```
BenchmarkItoa
BenchmarkItoa-8         	673628088	         1.946 ns/op	       0 B/op	       0 allocs/op
BenchmarkFormatInt
BenchmarkFormatInt-8    	592919769	         2.012 ns/op	       0 B/op	       0 allocs/op
BenchmarkSprint
BenchmarkSprint-8       	26149144	        49.06 ns/op	       2 B/op	       1 allocs/op
BenchmarkSprintBool
BenchmarkSprintBool-8   	26440180	        45.92 ns/op	       4 B/op	       1 allocs/op
BenchmarkFormatBool
BenchmarkFormatBool-8   	1000000000	         0.2558 ns/op	       0 B/op	       0 allocs/op
```
2023-07-11 07:46:58 -07:00
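The takeaway from the numbers above, as a sketch: prefer the specialized strconv converters over fmt.Sprint on hot paths, since they avoid allocations entirely:

```
package main

import (
	"fmt"
	"strconv"
)

func main() {
	// Allocation-free conversions (see the benchmarks above):
	a := strconv.Itoa(42)             // vs fmt.Sprint(42): ~25x faster, 0 allocs
	b := strconv.FormatBool(true)     // vs fmt.Sprint(true)
	c := strconv.FormatInt(1<<40, 10) // for int64 values
	fmt.Println(a, b, c)
}
```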
Harshavardhana
5b7c83341b move per bucket metrics to peer location (#17627) 2023-07-11 07:46:24 -07:00
Harshavardhana
8522905d97 update helm linting workflow 2023-07-10 20:11:51 -07:00
Harshavardhana
524ed7ccd0 update go.mod pointing to wrong repo 2023-07-10 20:10:39 -07:00
Poorna
fb49aead9b replication: add validation API (#17520)
To check if replication is set up properly on a bucket.
2023-07-10 20:09:20 -07:00
Aditya Manthramurthy
85f5700e4e fix: missing audit logger call for some admin APIs (#17623) 2023-07-10 16:59:44 -07:00
Aditya Manthramurthy
43b3c093ef Fix: set request id in trace context properly (#17622) 2023-07-10 15:40:44 -07:00
Kaan Kabalak
bd6842d917 Further print log messages once per error (#17618) 2023-07-10 07:59:57 -07:00
Poorna
e8c98c3246 Avoid extra GetObjectInfo call in DeleteObject API (#17599)
Optimize the DeleteObject API to avoid an extra
GetObjectInfo call on the replicating side.

For the receiving side, it is just a regular
DeleteObject call.

Bonus: Fix a corner case where the purged version is
absent on the target (either due to replication not yet
being complete, or the target version already deleted in a
one-way replication, or replication having been disabled).

In such cases, mark the version purge complete.
2023-07-10 07:57:56 -07:00
Harshavardhana
dfd7cca0d2 fix: allow cancel of decom only when its in progress (#17607) 2023-07-10 07:55:38 -07:00
Harshavardhana
af3d99e35f helm release v5.0.13
Signed-off-by: Harshavardhana <harsha@minio.io>
2023-07-09 00:13:05 -07:00
Harshavardhana
f6186965c3 honor DeleteAllVersions in list(), head() calls (#17604) 2023-07-08 15:42:10 -07:00
Ian Martin
90c2129f44 Add helm chart linting to CI workflow (#17606) 2023-07-08 15:41:12 -07:00
yaohwu
69e131ee69 Fix helm templating syntax in post-job (#17605) 2023-07-08 12:18:31 -07:00
Harshavardhana
28a01f0320 update missing license header in files (#17603) 2023-07-08 10:42:05 -07:00
Anis Eleuch
6d0bc5ab1e prometheus: Fix internode stats (#17594)
Internode calculation was done inside S3 handlers; fix it by moving it
to internode handlers.

Remove admin stats since they are not used.
2023-07-08 07:35:11 -07:00
Aditya Manthramurthy
7af78af1f0 fix: set request ID in tracing context key (#17602)
Since `addCustomerHeaders` middleware was after the `httpTracer`
middleware, the request ID was not set in the http tracing context. By
reordering these middleware functions, the request ID header becomes
available. We also avoid setting the tracing context key again in
`newContext`.

Bonus: All middleware functions are renamed with a "Middleware" suffix
to avoid confusion with http Handler functions.
2023-07-08 07:31:42 -07:00
Klaus Post
45a717a142 Avoid per request URL parsing (#17593)
Every request does a `url.Parse(c.url.String())` to clone a URL.

The host will also be static, so we rewrite that on creation.
2023-07-07 22:07:30 -07:00
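The cheap alternative the commit points at, as a sketch: a url.URL can be cloned with a plain value copy, so there is no need to re-parse the string per request (the shallow copy is fine as long as the User field is not mutated):

```
package main

import (
	"fmt"
	"net/url"
)

func main() {
	base, err := url.Parse("http://node1:9000/minio/storage")
	if err != nil {
		panic(err)
	}

	// Per-request clone without re-parsing: copy the struct by value.
	u := *base
	u.Path = "/minio/storage/v1/readfile"
	fmt.Println(base.Path, "->", u.Path)
}
```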
Ian Martin
cb1ec0a0d9 fix: helm templating syntax in post-job (#17600) 2023-07-07 22:06:40 -07:00
Harshavardhana
abb1f22057 Revert "change ttfb_distribution metrics to histogramMetric (#17115)"
This reverts commit 9112ca4e29.
2023-07-07 13:57:37 -07:00
Harshavardhana
73efe436a5 update helm v5.0.12
Signed-off-by: Harshavardhana <harsha@minio.io>
2023-07-07 09:44:16 -07:00
Harshavardhana
f41edb23e2 add variadic delays in peer notification retries (#17592)
just adds more `jitter` in our retries to avoid
burst flooding for peer calls.
2023-07-07 07:47:38 -07:00
Minio Trusted
6335a48a53 Update yaml files to latest version RELEASE.2023-07-07T07-13-57Z 2023-07-07 07:53:18 +00:00
Klaus Post
e20aab25ec Check for progress before we reach the limit (#17552) 2023-07-07 00:13:57 -07:00
Anis Eleuch
66bea3942a CI/CD to stop one node per pool in the two pools mint test (#17518)
This is to make sure that all S3 ops work when there is enough quorum
2023-07-07 00:10:13 -07:00
Harshavardhana
08acd9c43d update all our deps (#17590) 2023-07-06 21:47:46 -07:00
Klaus Post
ff5988f4e0 Reduce allocations (#17584)
* Reduce allocations

* Add stringsHasPrefixFold which can compare string prefixes, while ignoring case and not allocating.
* Reuse all msgp.Readers
* Reuse metadata buffers when not reading data.

* Make type safe. Make buffer 4K instead of 8.

* Unslice
2023-07-06 16:02:08 -07:00
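A sketch of what a stringsHasPrefixFold helper can look like (the actual implementation may differ): a case-insensitive HasPrefix that avoids the allocation strings.ToLower would make:

```
package main

import (
	"fmt"
	"strings"
)

// hasPrefixFold reports whether s begins with prefix, ignoring case,
// without allocating: strings.EqualFold compares in place.
func hasPrefixFold(s, prefix string) bool {
	return len(s) >= len(prefix) && strings.EqualFold(s[:len(prefix)], prefix)
}

func main() {
	fmt.Println(hasPrefixFold("X-Amz-Meta-Foo", "x-amz-meta-")) // true
}
```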
Harshavardhana
1bf23374a3 do not need to gzip 'mc' in our container (#17586)
fixes #17581
2023-07-06 15:13:17 -07:00
Alex
899b429094 Update Console to v0.32.0 (#17587)
Signed-off-by: Benjamin Perez <benjamin@bexsoft.net>
2023-07-06 15:13:07 -07:00
jiuker
c47ff44f5e fix: disable site network test if site replication is disabled (#17579) 2023-07-06 09:19:14 -07:00
Harshavardhana
8af0773baf remove deprecated Content-Security-Policy (#17580)
https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Content-Security-Policy/block-all-mixed-content
2023-07-06 09:18:38 -07:00
Alex
37cbd114de Updated console to v0.31.0 (#17578) 2023-07-06 00:37:56 -07:00
jiuker
2dbb1cff4a feat: support perf site replication (#17477) 2023-07-05 22:28:26 -07:00
Klaus Post
6efcf9c982 Do lockless last minute latency metrics (#17576)
Collect metrics in one second and accumulate lockless before sending upstream.
2023-07-05 10:40:45 -07:00
Harshavardhana
0bc34952eb fix: under FanOut API avoid repeated md5sum calculation (#17572)
md5sum calculation has a high CPU overhead, avoid calculating
it repeatedly for similar fanOut calls.

To fix following CPU profiler result
```
(pprof) top10
Showing nodes accounting for 678.68s, 84.67% of 801.54s total
Dropped 1072 nodes (cum <= 4.01s)
Showing top 10 nodes out of 156
      flat  flat%   sum%        cum   cum%
   332.54s 41.49% 41.49%    332.54s 41.49%  runtime/internal/syscall.Syscall6
   228.39s 28.49% 69.98%    228.39s 28.49%  crypto/md5.block
    48.07s  6.00% 75.98%     48.07s  6.00%  runtime.memmove
    28.91s  3.61% 79.59%     28.91s  3.61%  github.com/minio/highwayhash.updateAVX2
     8.25s  1.03% 80.61%      8.25s  1.03%  runtime.futex
     8.25s  1.03% 81.64%     10.81s  1.35%  runtime.step
     6.99s  0.87% 82.52%     22.35s  2.79%  runtime.pcvalue
     6.67s  0.83% 83.35%     38.90s  4.85%  runtime.mallocgc
     5.77s  0.72% 84.07%     32.61s  4.07%  runtime.gentraceback
     4.84s   0.6% 84.67%     10.49s  1.31%  runtime.lock2
```
2023-07-05 03:16:05 -07:00
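A sketch of the optimization's shape: hash the shared payload once and reuse the digest for every fan-out destination instead of re-hashing per call (names are illustrative):

```
package main

import (
	"crypto/md5"
	"encoding/hex"
	"fmt"
)

func main() {
	payload := []byte("shared fan-out body")

	// Hash once up front...
	sum := md5.Sum(payload)
	etag := hex.EncodeToString(sum[:])

	// ...and reuse the digest for every fan-out destination.
	for _, dst := range []string{"a/obj", "b/obj", "c/obj"} {
		fmt.Printf("put %s etag=%s\n", dst, etag)
	}
}
```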
jiuker
f6b48ed02a fix: create user without policy that is required (#17554)
fixes #17492
2023-07-04 07:39:29 -07:00
Harshavardhana
e37c4efc6e fix: upon DNS refresh() failure use previous values (#17561)
DNS refresh() in the case of MinIO can safely re-use
the previous values on bare-metal setups, since
bare-metal arrangements do not commonly change DNS
in any manner.

This PR simplifies that, we only ever need DNS caching
on bare-metal setups.

- On containerized setups do not enable DNS
  caching at all, as it may have adverse effects on
  the overall effectiveness of k8s DNS systems.

  k8s DNS systems are dynamic and expect applications
  to avoid managing DNS caching themselves, instead
  provide a cleaner container native caching
  implementations that must be used.

- update IsDocker() detection, including podman runtime

- move to minio/dnscache fork for a simpler package
2023-07-03 12:30:51 -07:00
Harshavardhana
22f5bc643c fix: honor older scanner settings only if newer has not changed (#17564) 2023-07-03 12:28:36 -07:00
Anis Eleuch
15fd5ce2fa fix: A typo in per pool make/delete bucket errs calculation (#17553) 2023-07-03 09:47:40 -07:00
Harshavardhana
7f782983ca fix: for FTP server driver allow implicit trust of TLS (#17541)
fixes #17535
2023-06-30 08:04:13 -07:00
Aditya Manthramurthy
9d628346eb fix: service account list for root user (#17547)
Fixes https://github.com/minio/minio/issues/17545
2023-06-30 08:02:12 -07:00
Aditya Manthramurthy
bde533a9c7 fix: OpenID config initialization (#17544)
This is due to a regression in the handling of the enable key in OpenID
configuration.
2023-06-29 23:38:26 -07:00
Minio Trusted
2fcb75d86d Update yaml files to latest version RELEASE.2023-06-29T05-12-28Z 2023-06-29 06:37:32 +00:00
Harshavardhana
aae6846413 feat: allow expiration of all versions via ILM Expiration action (#17521)
The following extension allows users to specify an immediate purge
of all versions as soon as the latest version of the object has
expired.

```
<LifecycleConfiguration>
    <Rule>
        <ID>ClassADocRule</ID>
        <Filter>
           <Prefix>classA/</Prefix>
        </Filter>
        <Status>Enabled</Status>
        <Expiration>
             <Days>3650</Days>
	     <ExpiredObjectAllVersions>true</ExpiredObjectAllVersions>
        </Expiration>
    </Rule>
    ...
```
2023-06-28 22:12:28 -07:00
Harshavardhana
5317a0b755 fix: support LDAP settings properly in ftp/sftp (#17536)
Bonus: this PR also adds support for creating
buckets via ftp `mkdir`
fixes #17526
2023-06-28 13:15:21 -07:00
Harshavardhana
73de721a63 fix: handle copyObjectPart encryption properly (#17530)
- look for requested encryption while compressing
  not just via HTTP Headers, but also via multipart
  metadata

- look for SSE-S3 etag decryption not just via HTTP
  Headers, but also via multipart metadata

fixes #17519
2023-06-28 09:43:50 -07:00
Harshavardhana
d2f5c3621f fix: add additional decommission traces for ILM expired content (#17522)
current decommission traces were missing for

- Skipped ILM expired versions
- Skipped single DELETE marked version
- A success or failure in decommissioning DELETE marker
- allow additional info to be shared in DecomStatus() API
2023-06-27 11:59:40 -07:00
Harshavardhana
1818764840 fix: bug in passing Versioned field set for getHealReplicationInfo() (#17498)
Bonus: rejects prefix deletes on object-locked buckets earlier
2023-06-27 09:45:50 -07:00
Harshavardhana
d3e5e607a7 allow site-replication checks to work on non-distributed setups (#17524)
fixes #17523
2023-06-27 09:23:50 -07:00
Shireesh Anjal
c1943ea3af Capture realtime metrics in health report (#17516) 2023-06-27 01:39:18 -07:00
Harshavardhana
2a82c15bf1 update all our deps (#17497) 2023-06-26 15:36:56 -07:00
guangwu
87b6fb37d6 chore: pkg imported more than once (#17444) 2023-06-26 09:21:29 -07:00
Kaan Kabalak
21fbe88e1f Print certain log messages once per error (#17484) 2023-06-24 20:29:13 -07:00
Harshavardhana
1f8b9b4bd5 fix: do not listAndHeal() inline with PutObject() (#17499)
there is a possibility that slow drives can add latency
to the overall call, leading to a large latency spike.

this can happen if there are other parallel listObjects()
calls to the same drive, in turn causing each other to
more or less serialize.

this potentially improves performance and makes PutObject()
also non-blocking.
2023-06-24 19:31:04 -07:00
Minio Trusted
fcbed41cc3 Update yaml files to latest version RELEASE.2023-06-23T20-26-00Z 2023-06-24 07:25:49 +00:00
Klaus Post
216069d0da Remove 'null' version ID from directory object response (#17495)
Fixes #17494

Regression from #17132
2023-06-23 13:26:00 -07:00
Harshavardhana
eefa047974 fix: keep decommission in a go-routine (#17496)
This was removed by mistake in #17491
2023-06-23 12:29:32 -07:00
Anis Eleuch
d8dad5c9ea s3: Make/Delete buckets to use error quorum per pool (#17467) 2023-06-23 11:48:23 -07:00
Klaus Post
bf8a68879c fix: Time ILM Actions for scanner info (#17493)
ILM actions were not timed; this fixes it.
2023-06-23 07:48:36 -07:00
Aditya Manthramurthy
f3248a4b37 Redact all secrets from config viewing APIs (#17380)
This change adds a `Secret` property to `HelpKV` to identify secrets
like passwords and auth tokens that should not be revealed by the server
in its configuration fetching APIs. Configuration reporting APIs now do
not return secrets.
2023-06-23 07:45:27 -07:00
Harshavardhana
d315d012a4 decom: during multiple pool decom preserve current pool status (#17491)
removal of completed pools must retain the pool status of other
pools in draining, to resume any remaining draining operations.
2023-06-23 07:44:18 -07:00
Harshavardhana
bd9bf3693f lambda: negative duration for presigned URL default to 1H (#17489)
fixes a bug where users created with Expiration set to
timeSentinel were not rejected while generating the
presigned URL for lambda processing.
2023-06-23 00:17:24 -07:00
Klaus Post
15daa2e74a tooling: Add xlmeta --combine switch that will combine inline data (#17488)
Will combine or write partial data of each version found in the inspect data.

Example:

```
> xl-meta -export -combine inspect-data.1228fb52.zip

(... metadata json...)
}
Attempting to combine version "994f1113-da94-4be1-8551-9dbc54b204bc".
Read shard 1 Data shards 9 Parity 4 (994f1113-da94-4be1-8551-9dbc54b204bc/shard-01-of-13.data)
Read shard 2 Data shards 9 Parity 4 (994f1113-da94-4be1-8551-9dbc54b204bc/shard-02-of-13.data)
Read shard 3 Data shards 9 Parity 4 (994f1113-da94-4be1-8551-9dbc54b204bc/shard-03-of-13.data)
Read shard 4 Data shards 9 Parity 4 (994f1113-da94-4be1-8551-9dbc54b204bc/shard-04-of-13.data)
Read shard 6 Data shards 9 Parity 4 (994f1113-da94-4be1-8551-9dbc54b204bc/shard-06-of-13.data)
Read shard 7 Data shards 9 Parity 4 (994f1113-da94-4be1-8551-9dbc54b204bc/shard-07-of-13.data)
Read shard 8 Data shards 8 Parity 5 (994f1113-da94-4be1-8551-9dbc54b204bc/shard-08-of-13.data)
Read shard 9 Data shards 8 Parity 5 (994f1113-da94-4be1-8551-9dbc54b204bc/shard-09-of-13.data)
Read shard 10 Data shards 8 Parity 5 (994f1113-da94-4be1-8551-9dbc54b204bc/shard-10-of-13.data)
Read shard 11 Data shards 8 Parity 5 (994f1113-da94-4be1-8551-9dbc54b204bc/shard-11-of-13.data)
Read shard 13 Data shards 8 Parity 5 (994f1113-da94-4be1-8551-9dbc54b204bc/shard-13-of-13.data)
Attempting to reconstruct using parity sets:
* Setup: Data shards: 9 - Parity blocks: 6
Have 6 complete remapped data shards and 6 complete parity shards. Could NOT reconstruct: too few shards given
* Setup: Data shards: 8 - Parity blocks: 5
Have 5 complete remapped data shards and 5 complete parity shards. Could reconstruct completely
0 bytes missing. Truncating 0 from the end.
Wrote output to 994f1113-da94-4be1-8551-9dbc54b204bc.complete
```

So far only inline data, but no real reason that external data can't also be included with some handling of blocks.

Supports only unencrypted data.
2023-06-22 12:41:24 -07:00
Harshavardhana
74759b05a5 make sure to set relevant config entries correctly (#17485)
Bonus: also allow skipping keys properly.
2023-06-22 10:04:02 -07:00
Aditya Manthramurthy
82ce78a17c Fix locking in policy attach API (#17426)
For policy attach/detach API to work correctly the server should hold a
lock before reading existing policy mapping and until after writing the
updated policy mapping. This is fixed in this change.

A site replication bug, where LDAP policy attach/detach were not
correctly propagated is also fixed in this change.

Bonus: Additionally, the server responds with the actual (or net)
changes performed in the attach/detach API call. For e.g. if a user
already has policy A applied, and a call to attach policies A and B is
performed, the server will respond that B was attached successfully.
2023-06-21 22:44:50 -07:00
Harshavardhana
9af6c6ceef under rebalance look for expired versions v/s remaining versions (#17482)
A continuation of PR #17479: rebalance behavior must
also match the decommission behavior.

Fixes bug where rebalance would ignore rebalancing object
versions after one of the version returned "ObjectNotFound"
2023-06-21 13:23:20 -07:00
Harshavardhana
021372cc4c update helm release v5.0.11
Signed-off-by: Harshavardhana <harsha@minio.io>
2023-06-21 12:29:09 -07:00
Praveen raj Mani
b94ab07c2f Honor global root CAs for kafka audit tls (#17481)
2023-06-21 10:50:40 -07:00
Harshavardhana
7605d07bb2 add support for bucket level request count per API (#17468)
New metrics added to calculate API request count
per bucket, per API.  Captures errors, including
4xx, 5xx HTTP status codes separately.
2023-06-21 09:41:59 -07:00
Harshavardhana
ccc5801112 always look for expired versions v/s remaining versions (#17479)
while decommissioning, it can so happen that the non-current
versions are all expired but a DEL marker is the
latest version.

We should not decommission such objects; instead,
calculate the remaining versions, and if only one version
remains and it is a DEL marker, do not schedule the
object for decommissioning.
2023-06-21 08:49:28 -07:00
Praveen raj Mani
7c72b25ef0 Add an option to make bucket notifications synchronous (#17406)
With the current asynchronous behaviour in sending notification events
to the targets, we can't provide guaranteed delivery as the systems
might go for restarts.

For such event-driven use-cases, we can provide an option to enable
synchronous events where the APIs wait until the event is successfully
sent or persisted.

This commit adds 'MINIO_API_SYNC_EVENTS' env which when set to 'on'
will enable sending/persisting events to targets synchronously.
2023-06-20 17:38:59 -07:00
Harshavardhana
02c2ec3027 skip onlineDisks with parity mismatch (#17478) 2023-06-20 13:18:24 -07:00
Harshavardhana
65c31fab12 fix: do not crash rebalance code instead set the object layer (#17465)
fixes #17421
2023-06-20 09:28:23 -07:00
jiuker
b6b68be052 fix: replication check for duplicate endpoints detection with wrong route (#17474) 2023-06-20 09:27:54 -07:00
Harshavardhana
15911c85f6 safely ignore out of band deletions while decommissioning (#17473) 2023-06-20 08:31:42 -07:00
Aditya Manthramurthy
5a1612fe32 Bump up madmin-go and pkg deps (#17469) 2023-06-19 17:53:08 -07:00
Minio Trusted
bbb7ae156c Update yaml files to latest version RELEASE.2023-06-19T19-52-50Z 2023-06-19 22:53:03 +00:00
Harshavardhana
f9b8d1c699 fix: sio-error test to fail if commands fail (#17466) 2023-06-19 12:52:50 -07:00
Harshavardhana
1443b5927a allow quorum fileInfo to pick same parityBlocks (#17454)
Bonus: allow replication to proceed for 503 errors such as
with error code SlowDownRead
2023-06-18 18:20:15 -07:00
Anis Eleuch
35ef35b5c1 fix an integer divide by zero crash during rebalance (#17455)
A state is updated with a delete marker, which does not have parity or
data blocks defined, which can cause integer divide-by-zero panics.

This commit fixes the calculation to avoid the panics.
2023-06-18 11:14:53 -07:00
Harshavardhana
6806537eb3 event args list for fanOut notification must be sized same (#17450)
without this, the fan-out API can crash if the client
cancels the ongoing request.
2023-06-18 07:09:20 -07:00
Harshavardhana
64de61d15d fallback on etags if they match when mtime is not same (#17424)
on "unversioned" buckets there are situations
when successive concurrent I/O can lead to
an inconsistent state() with mtime while the
etag might be the same for the object on disk.

in such a scenario it is possible for us to
allow reading of the object since etag matches
and if etag matches we are guaranteed that we
have enough copies the object will be readable
and same.

This PR allows fallback in such scenarios.
2023-06-17 19:18:20 -07:00
Harshavardhana
22b7c8cd8a upgrade pkg and dperf to latest packages (#17448)
- dperf improvements in benchmarking read and write tests
- upgrade mimedb to use latest content-types
2023-06-17 07:31:36 -07:00
Poorna
c4d0c49a5f ensure metadata updates go to same pool where version exists (#17451)
This PR also returns the replication status in
proxy calls and defers the replication attempt if
a HEAD on the object version returned an error
other than NoSuchKey
2023-06-17 07:30:53 -07:00
Minio Trusted
142a5b0dcd Update yaml files to latest version RELEASE.2023-06-16T02-41-06Z 2023-06-16 05:36:19 +00:00
mungo312
25db1e4eca helm: fix permission denied errors in post-job when running as non-root (#17175)
Fixes #17174
2023-06-15 19:41:06 -07:00
Harshavardhana
47a48b6832 do not save any metadata from the headers in tar extract (#17436)
only preserve the same storage-class as the incoming
request; other than that, the rest of the metadata must be
deduced.
2023-06-15 17:44:07 -07:00
Harshavardhana
e98309eb75 update minio-go/v7 v7.0.57 (#17439) 2023-06-15 16:29:19 -07:00
Alex
87051872a7 Update console to v0.30.0 (#17438) 2023-06-15 15:30:56 -07:00
Anis Eleuch
a2aed12dcd decom: Fix a typo in routing decommissioning requests (#17435)
A specific node should do the decommissioning task; however, routing
the start-decommissioning request to that node was not working properly.

Co-authored-by: Anis Elleuch <anis@min.io>
2023-06-15 14:54:29 -07:00
Anis Eleuch
d8e6e76e89 site-repl: Better error msg when setting sync in a local cluster (#17407) 2023-06-15 12:44:22 -07:00
Anis Eleuch
8c33fdf5f4 s3-check-md5: Add --modified-since flag to skip some objects (#17410) 2023-06-15 12:44:09 -07:00
Harshavardhana
ad4e511026 do not save plain-text ETag when encryption is requested (#17427)
fixes an issue where bucket replication could cause
ETags for replicated SSE-S3 single-part PUT objects
to fail, as we would attempt a decryption during listing
or stat() operations.
2023-06-15 12:43:26 -07:00
Klaus Post
4a562d6732 fix: fanout error response - error must be string for marshaling (#17433)
Uses https://github.com/minio/minio-go/pull/1839
2023-06-15 09:21:53 -07:00
Poorna
a9082e4f79 site replication: cancel ongoing op properly (#17428) 2023-06-15 08:05:08 -07:00
dependabot[bot]
6278679ffd build(deps): bump github.com/lestrrat-go/jwx from 1.2.25 to 1.2.26 (#17425)
Bumps [github.com/lestrrat-go/jwx](https://github.com/lestrrat-go/jwx) from 1.2.25 to 1.2.26.
- [Release notes](https://github.com/lestrrat-go/jwx/releases)
- [Changelog](https://github.com/lestrrat-go/jwx/blob/v1.2.26/Changes)
- [Commits](https://github.com/lestrrat-go/jwx/compare/v1.2.25...v1.2.26)

---
updated-dependencies:
- dependency-name: github.com/lestrrat-go/jwx
  dependency-type: indirect
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-06-14 12:17:22 -07:00
jiuker
0474791cf8 fix: set time format right (#17402) 2023-06-14 07:49:13 -07:00
Harshavardhana
69f819e199 move to pkg@v1.7.3 to support aws:kms string in policy conditions (#17414) 2023-06-14 07:42:03 -07:00
Harshavardhana
f32efd5429 more compliance related fixes (#17408)
- lifecycle must return InvalidArgument for rule errors
- do not return `null` versionId in HTTP header
- reject mixed SSE uploads with correct error message
2023-06-13 13:52:33 -07:00
jiuker
22c247a988 fix: preserve multiple values for query params (#17392) 2023-06-13 11:38:46 -07:00
Shubhendu
35d71682f6 fix: do not allow removal of inbuilt policies unless they are already persisted (#17264)
Don't allow removal of inbuilt policies such as `readwrite, readonly, writeonly and diagnostics`
2023-06-13 11:06:17 -07:00
drivebyer
3d6b88a60e fix: syscall to record time on non-linux (#17383) 2023-06-13 11:04:50 -07:00
Harshavardhana
26a0803388 various compliance related fixes (#17401)
- getObjectTagging to be allowed for anonymous policies
- return correct errors for invalid retention period
- return sorted list of tags for an object
- putObjectTagging must return 200 OK not 204 OK
- return 409 ErrObjectLockConfigurationNotAllowed for existing buckets
2023-06-12 13:22:07 -07:00
Anis Eleuch
ae95384dd8 Revert "heal: Update object parity with the latest configured SC (#17187)" (#17404) 2023-06-12 11:54:51 -07:00
Anis Eleuch
0f0dcf0c5e tar: Avoid storing snowball extraction header in extract objects (#17389) 2023-06-12 09:42:06 -07:00
Klaus Post
6f2406b0b6 fix: protect ReplicationStats against concurrent map iteration and write crash (#17403) 2023-06-12 09:17:11 -07:00
Anis Eleuch
bb24346e04 listen: Only error out if not able to bind any interface (#17353) 2023-06-12 09:09:28 -07:00
Harshavardhana
be45ffd8a4 return 204 status code for DeleteBucketTagging (#17400) 2023-06-11 20:49:02 -07:00
Poorna Krishnamoorthy
f986b0c493 replication: perform bucket resync in parallel (#16707)
Default number of parallel resync operations for a bucket to 10
to speed up resync.
2023-06-11 16:09:55 -07:00
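A plain bounded worker pool captures the idea; the `resyncObject` helper is hypothetical, and only the default of 10 parallel operations comes from the commit message:

```go
package main

import (
	"fmt"
	"sync"
)

// resyncObject is a hypothetical per-object resync step.
func resyncObject(obj string) { fmt.Println("resynced:", obj) }

func main() {
	const parallel = 10 // default parallelism mentioned in the commit
	objs := make(chan string)
	var wg sync.WaitGroup

	// Spawn a fixed number of workers that drain the object channel.
	for i := 0; i < parallel; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			for obj := range objs {
				resyncObject(obj)
			}
		}()
	}
	for _, o := range []string{"a.txt", "b.txt", "c.txt"} {
		objs <- o
	}
	close(objs)
	wg.Wait()
}
```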
Harshavardhana
c9e87f0548 service accounts are allowed to have no expiration (#17397) 2023-06-11 10:34:59 -07:00
Harshavardhana
43468f4d47 return InvalidRequest when no parts are provided (#17395) 2023-06-10 21:59:51 -07:00
Minio Trusted
91987d6f7a Update yaml files to latest version RELEASE.2023-06-09T07-32-12Z 2023-06-10 05:38:08 +00:00
Harshavardhana
6b7c98bd0f make sure we pick up the right Go version in vulncheck (#17388) 2023-06-09 00:32:12 -07:00
Harshavardhana
b829e80ecb do not disable root for invalid API config values (#17386) 2023-06-08 15:50:06 -07:00
Klaus Post
6e38d0f3ab Add more bootstrap info in debug mode (#17362) 2023-06-08 08:39:47 -07:00
Anis Eleuch
38342b1df5 decom: Parallelize decommissioning (#17364) 2023-06-07 14:27:51 -07:00
Harshavardhana
dbd4c2425e fix: kafka broker pings must not be greater than 1sec (#17376) 2023-06-07 11:47:00 -07:00
Harshavardhana
49ce85ee3d allow prefix/markers beginning with '/' to return an empty listing (#17373) 2023-06-07 11:25:26 -07:00
Anis Eleuch
eba378e4a1 vrf: Fix testing for loopback coming from the address (#17372) 2023-06-07 09:53:05 -07:00
Harshavardhana
442c50ff00 remove delimiter if not set by client, also fetchOwner is optional (#17366) 2023-06-06 21:31:47 -07:00
Harshavardhana
d1448adbda use slices package and remove some helpers (#17342) 2023-06-06 10:12:52 -07:00
jiuker
5a21b1f353 fix: deleting dir failed when .DS_Store is in it (#17352) 2023-06-06 10:12:06 -07:00
Bartosz Marczyński
123a2fb3a8 helm: add /tmp mount to post jobs (#17301) 2023-06-06 06:50:33 -07:00
Harshavardhana
2f9e2147f5 allow quota enforcement to rely on older values (#17351)
PUT calls cannot afford large latency build-ups due to a
contended usage.json, or worse, failing with some unexpected
error; this can happen when the file is concurrently being
updated via the scanner or is being healed during a disk
replacement heal.

However, while these updates are fairly quick in theory, stressed
clusters can quickly show visible latency, and this can add up,
leading to invalid errors returned during PUT.

It is perhaps okay for us to relax this error-return requirement;
instead, make sure that we log that we are proceeding to take in
the requests while quota enforcement is using an older usage
value. These things will reconcile themselves eventually,
with the scanner making sure to overwrite usage.json.

Bonus: make sure that storage-rest-client sets ExpectTimeouts to
'true', so that a DiskInfo() call with a context timeout does
not prematurely disconnect the servers, leading to a longer
healthCheck back-off routine. This can easily pile up while also
causing active callers to disconnect, leading to quorum loss.

DiskInfo is actively used in the PUT and Multipart call paths for
upgrading parity when disks are down; it in turn shouldn't cause
more disks to go down.
2023-06-05 16:56:35 -07:00
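In the spirit of the commit's reasoning, a sketch of preferring fresh usage data but falling back to the last known value instead of failing the PUT (`readUsage` is a hypothetical stand-in for reading usage.json):

```go
package main

import (
	"errors"
	"fmt"
)

var lastKnownUsage int64 // refreshed by the scanner; may lag behind reality

// readUsage is a hypothetical stand-in for reading usage.json, which may
// be busy while the scanner updates it or a heal rewrites it.
func readUsage() (int64, error) { return 0, errors.New("usage.json busy") }

// usageForQuota prefers fresh data but, rather than failing the PUT,
// falls back to the older cached value and logs that it did so.
func usageForQuota() int64 {
	if u, err := readUsage(); err == nil {
		lastKnownUsage = u
		return u
	}
	fmt.Println("quota: proceeding with older usage value")
	return lastKnownUsage
}

func main() { fmt.Println("usage:", usageForQuota()) }
```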
Harshavardhana
75c6fc4f02 only allow decryption of etag for only sse-s3 (#17335) 2023-06-05 13:08:51 -07:00
Anis Eleuch
f9e07d6143 goroutines parser: Add --less flag to filter goroutines (#17339) 2023-06-04 14:20:46 -07:00
Anis Eleuch
1436858347 log: Add a log when saving pool.bin fails (#17338)
Co-authored-by: Anis Elleuch <anis@min.io>
2023-06-04 14:20:21 -07:00
jiuker
8030e12ba5 fix: expMovingAvg is too small when startTime is zero (#17346) 2023-06-03 13:41:51 -07:00
Minio Trusted
a485b923bf Update yaml files to latest version RELEASE.2023-06-02T23-17-26Z 2023-06-03 05:07:30 +00:00
Kaan Kabalak
0649aca219 Add expiration to ListServiceAccounts function (#17249) 2023-06-02 16:17:26 -07:00
Harshavardhana
b210ea79bc do not save MTime in newMultipartUpload() to avoid side-effects (#17340) 2023-06-02 14:38:09 -07:00
Poorna
68f80b5fe7 replication: ignore retention mode validation for replica (#17332) 2023-06-01 18:53:12 -07:00
Poorna
e95825a42e replication: use latest object info for metrics update (#17333) 2023-06-01 18:52:55 -07:00
Anis Eleuch
931712dc46 fix: converting 'server closed idle connection' to errDiskNotFound (#17330) 2023-06-01 15:40:28 -07:00
Harshavardhana
54e544e03e allow lookup()/head() operations on Veeam SOS objects (#17331) 2023-06-01 15:26:26 -07:00
Poorna
f86b9abf32 site removal: update site config and reload targets after update (#17327) 2023-06-01 10:19:56 -07:00
Anis Eleuch
9ef7eda33a heal: Avoid objects created after the heal disk start time (#17323) 2023-05-31 13:10:45 -07:00
Klaus Post
c9e26401fa Fix GetObject encrypted etag (#17302)
Co-authored-by: Harshavardhana <harsha@minio.io>
2023-05-31 13:10:25 -07:00
Harshavardhana
e53f49e9a9 add additional tools that help in debugging (#17325) 2023-05-31 13:06:08 -07:00
jiuker
14f6ac9222 fix: fail large content in DeleteMultipleObjects() early (#17321) 2023-05-31 10:58:14 -07:00
drivebyer
b8474295af fix: time() returned function not being called as expected in globalSync() (#17319) 2023-05-31 09:40:23 -07:00
Shireesh Anjal
817e85a3e0 fix: proxy not set on subnet logger webhook sometimes (#17320) 2023-05-31 08:09:09 -07:00
jiuker
fb5ce3b87a record err time when remote node is offline (#17262) 2023-05-30 10:07:26 -07:00
Klaus Post
6fe028b7c5 Revert s3 select simdjson reuse (#17310) 2023-05-30 10:02:22 -07:00
Harshavardhana
1cd7f1e38d fix: cleanup empty multipart folders upon stale upload cleanup (#17312) 2023-05-30 09:56:50 -07:00
jiuker
043fd8b536 fix: on windows use FindClose close handler (#17306) 2023-05-30 02:15:57 -07:00
Klaus Post
669acbb032 Fix Test LDAP for automatic site replication (#17305) 2023-05-29 08:13:58 -07:00
Minio Trusted
086d8f036e Update yaml files to latest version RELEASE.2023-05-27T05-56-19Z 2023-05-28 06:53:17 +00:00
Harshavardhana
394690dcfb check for up to 50%+ data disks to be offline (#17294) 2023-05-26 22:56:19 -07:00
Harshavardhana
398bca92ff update helm to v5.0.10
Signed-off-by: Harshavardhana <harsha@minio.io>
2023-05-26 17:05:49 -07:00
Harshavardhana
fb328b1a64 upgrade all dependencies (#17276) 2023-05-26 16:31:28 -07:00
Anis Eleuch
563f667e30 reorder-disks: Fix UID to UUID and add better error messages (#17292) 2023-05-26 15:03:31 -07:00
Klaus Post
c839b64f6a fix: compressed+encrypted block overhead (#17289) 2023-05-26 10:57:07 -07:00
Anis Eleuch
6425fec366 s3: Add x-minio-error-code header for S3 HEAD requests (#17283) 2023-05-26 10:13:18 -07:00
Harshavardhana
d5059840ef fix: for delete marked objects choose appropriate parity (#17287) 2023-05-26 09:57:44 -07:00
Aditya Manthramurthy
65cba212e8 Remove older policy attach behavior for LDAP (#17240) 2023-05-26 06:31:24 -07:00
Aditya Manthramurthy
7a69c9c75a Update builtin policy entities command (#17241) 2023-05-25 22:31:05 -07:00
Harshavardhana
4a425cbac1 cleanup scripts and apply shfmt (#17284) 2023-05-25 22:07:25 -07:00
Harshavardhana
615169c4ec update docker ubi image to 8.8 (#17281) 2023-05-25 18:19:09 -07:00
Harshavardhana
5cd9dcb844 rebalance 'null' delete markers properly (#17282) 2023-05-25 16:12:53 -07:00
Anis Eleuch
54c5c88fe6 Add number of offline disks in quorum errors (#16822) 2023-05-25 09:39:06 -07:00
jiuker
443250d135 fix: for Target isActive use net.Dial instead (#17251) 2023-05-25 09:24:11 -07:00
Harshavardhana
9b5829c16e avoid decommissioning DEL markers with single versions (#17274) 2023-05-25 09:18:49 -07:00
jiuker
d749aaab69 fix: ignore existing target status when adding new targets (#17250) 2023-05-24 22:57:37 -07:00
Krishnan Parthasarathi
62df731006 Add updatedAt for GetBucketLifecycleConfig (#17271) 2023-05-24 22:52:39 -07:00
Harshavardhana
d0a0eb9738 support fan-out objects via PostUpload() (#17233) 2023-05-24 22:51:07 -07:00
Klaus Post
66156b8230 Stricter partNumber checks (#17270)
Fixes #17269
2023-05-24 08:00:47 -07:00
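S3 caps multipart uploads at 10,000 parts, so a stricter check amounts to validating the range before use; a sketch (exact error shapes are MinIO-specific and omitted here):

```go
package main

import (
	"fmt"
	"strconv"
)

const maxPartID = 10000 // S3 caps multipart uploads at 10,000 parts

// validatePartNumber rejects non-numeric, zero, negative, or
// out-of-range part numbers before they reach the object layer.
func validatePartNumber(s string) (int, error) {
	n, err := strconv.Atoi(s)
	if err != nil || n < 1 || n > maxPartID {
		return 0, fmt.Errorf("invalid partNumber %q: must be 1..%d", s, maxPartID)
	}
	return n, nil
}

func main() {
	for _, s := range []string{"1", "0", "10001", "x"} {
		n, err := validatePartNumber(s)
		fmt.Println(n, err)
	}
}
```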
Klaus Post
5677f73794 Add PostObject Checksum (#17244) 2023-05-23 07:58:33 -07:00
Harshavardhana
ef54200db7 return an error when offline drives exceed 50% of total drives (#17252) 2023-05-23 07:57:57 -07:00
Harshavardhana
7875efbf61 update minio/dperf to latest release v0.4.4 (#17267) 2023-05-22 19:17:17 -07:00
Krishnan Parthasarathi
3e128c116e Add lifecycle event source to audit log tags (#17248) 2023-05-22 15:28:56 -07:00
Krishnan Parthasarathi
55a3310446 logger-http: Don't retry after a successful send (#17266) 2023-05-22 14:53:18 -07:00
Harshavardhana
fc03be7891 simplify bucket metadata lookups for versioning/object locking (#17253) 2023-05-22 12:05:14 -07:00
jiuker
b1b00a5055 fix: avoid incrementing globalStats twice upon error (#17263) 2023-05-22 07:42:27 -07:00
Poorna
2920b0fc6d allow specification of path/virtual style bucket lookup in batch replication (#17201) 2023-05-21 15:16:31 -07:00
Anis Eleuch
a30a55f3b1 Add object parity in listing V2M and listing versions M (#17238) 2023-05-19 09:42:45 -07:00
Praveen raj Mani
ecfb18b26a Freeze the s3 APIs until the notification sub-system initializes completely (#17182) 2023-05-19 08:44:48 -07:00
jiuker
41fa8fa2d2 fix: increment counter when entry is skipped (#17237) 2023-05-19 08:36:52 -07:00
jiuker
e94e6adf91 fix: return proper error if OIDC Discoverydoc fails to respond (#17242) 2023-05-19 02:13:33 -07:00
jiuker
7d433f16c4 make globalScannerMetrics.incTime call before returning (#17230) 2023-05-18 13:45:05 -07:00
Klaus Post
b06d7bf834 fix: leaking connections in JSON SQL with limited return (#17239) 2023-05-18 11:26:46 -07:00
Minio Trusted
b784e458cb Update yaml files to latest version RELEASE.2023-05-18T00-05-36Z 2023-05-18 17:56:05 +00:00
Aditya Manthramurthy
9d96b18df0 Add "name" and "description" params to service acc (#17172) 2023-05-17 17:05:36 -07:00
drivebyer
ad2ab6eb3e fix: Give accurate cap to slice (#17224) 2023-05-17 15:14:09 -07:00
jiuker
f037c9b286 Protect the read index from going out of bounds (#17226) 2023-05-17 12:09:41 -07:00
Praveen raj Mani
85912985b6 Check for only network errors in audit webhook for reachability (#17228) 2023-05-17 11:10:33 -07:00
Harshavardhana
876f51a708 remove minio-js from mint tests until next minio-js release 2023-05-17 09:09:54 -07:00
Harshavardhana
f7d29b4a53 cleanup of multipart per disk must cleanup itself only (#17223) 2023-05-17 01:45:58 -07:00
Harshavardhana
06557fe8be allow decommissioned pools to be removed while others are finishing (#17221) 2023-05-16 16:00:57 -07:00
Poorna
2131046427 replication: fix audit log reporting (#17222) 2023-05-16 15:35:08 -07:00
Klaus Post
aaf1abc993 simplify HardLimitReader by using LimitReader for internal usage (#17218) 2023-05-16 13:14:37 -07:00
jiuker
413549bcf5 fix: loadStatsFromDisk() should return nil for configNotFound (#17217) 2023-05-16 12:23:38 -07:00
jiuker
9a799065b3 fix: make slice cap the right size (#17192) 2023-05-16 08:10:07 -07:00
jiuker
fd2959fa3a fix: workers.New err must be returned (#17208) 2023-05-16 08:08:00 -07:00
Anis Eleuch
07927e032a Add a script to filter goroutines waiting for a given number of minutes (#17204) 2023-05-16 08:05:49 -07:00
jiuker
15bec32bb4 fix: tier handlers must write error only once (#17205) 2023-05-15 23:56:52 -07:00
Anis Eleuch
e2b7a08c10 heal: Update object parity with the latest configured SC (#17187) 2023-05-15 21:32:13 -07:00
Harshavardhana
ef2fc0f99e fix: reduce using memory and temporary files. (#17206) 2023-05-15 14:08:54 -07:00
Harshavardhana
d063596430 fix: veeam SOS API 'system.xml' strings (#17202) 2023-05-15 12:06:42 -07:00
jiuker
bd2dc6c670 fix: handle error in healing tracker printTo (#17207) 2023-05-15 10:14:48 -07:00
Anis Eleuch
684399433b lock: Retry locking with an increasing random interval (#17200) 2023-05-13 08:42:21 -07:00
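Retrying with an increasing random interval is a jittered-backoff pattern; a generic sketch under that assumption (`tryLock` is a hypothetical single attempt, not MinIO's locker API):

```go
package main

import (
	"fmt"
	"math/rand"
	"time"
)

// tryLock is a hypothetical stand-in for a single lock attempt.
func tryLock() bool { return rand.Intn(4) == 0 }

// lockWithRetry retries with an increasing, randomized interval:
// jitter avoids synchronized retry storms across contending nodes.
func lockWithRetry(maxWait time.Duration) bool {
	backoff := 10 * time.Millisecond
	deadline := time.Now().Add(maxWait)
	for time.Now().Before(deadline) {
		if tryLock() {
			return true
		}
		// Sleep a random duration below the current ceiling.
		time.Sleep(time.Duration(rand.Int63n(int64(backoff))))
		if backoff < time.Second {
			backoff *= 2 // grow the interval ceiling on each failure
		}
	}
	return false
}

func main() { fmt.Println("locked:", lockWithRetry(2*time.Second)) }
```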
Harshavardhana
b62791617c fix: notify systemd as soon as we wait on the OS signal (#17199) 2023-05-12 16:42:17 -07:00
Poorna
e07c2ab868 Use hash.NewLimitReader for internal multipart calls (#17191) 2023-05-12 11:19:08 -07:00
jiuker
203755793c fix: in printEndpointError count error once per init() (#17193) 2023-05-12 10:41:54 -07:00
Anis Eleuch
883c98e26f fix: remove objects when there are skipped versions due to ILM in decom (#17198) 2023-05-12 10:37:38 -07:00
Harshavardhana
f5a20a5d06 allow nodes offline in k8s setups when expanding pools (#17183) 2023-05-11 17:41:33 -07:00
Poorna
ef7177ebbd disallow bucket replication setup with site replication (#17189) 2023-05-11 15:48:40 -07:00
Harshavardhana
3637aad36e do not count ILM expired objects and other skipped objects (#17184) 2023-05-11 13:35:16 -07:00
Aditya Manthramurthy
77db9686fb Update console to v0.27.0 (#17188) 2023-05-11 12:18:17 -07:00
Shireesh Anjal
c326e5a34e Add metrics for webhook endpoint stats (#17179) 2023-05-11 11:24:37 -07:00
jiuker
c23c982593 xmlDecoder errors use ErrMalformedXML in PutBucketACLHandler (#17185) 2023-05-11 11:11:15 -07:00
Shireesh Anjal
a3d666356c fix: error in capturing XFS error config in health report (#17176) 2023-05-10 15:20:48 -07:00
jiuker
3cdbc2f414 add validationErr to validateConfig when DeleteIdentityProviderCfg (#17173) 2023-05-10 09:37:30 -07:00
Harshavardhana
b92cdea578 fix: start using pkg/workers to spawn parallel workers (#17170) 2023-05-09 16:37:31 -07:00
jiuker
5e629a99af fix: for profiling duration parsing error reply use ErrInvalidRequest (#17169) 2023-05-09 14:27:49 -07:00
Klaus Post
99c4ffa34f fix: avoid audit log race protection deadlocks (#17168) 2023-05-09 08:11:32 -07:00
Harshavardhana
a7f266c907 allow JWT parsing on large session policy based tokens (#17167) 2023-05-09 00:53:08 -07:00
Praveen raj Mani
57acacd5a7 Support persistent queue store for loggers (#17121) 2023-05-08 21:20:31 -07:00
Han Cen
42fb3cd95e helm: fix pod annotations indentation (#17130) 2023-05-08 07:59:56 -07:00
mstein11
855ed642c3 helm: allow postjob to run without user 1000 (#17160) 2023-05-08 07:59:25 -07:00
jiuker
629503ff73 add Err to BucketExists when NoSuchBucket (#17155) 2023-05-08 07:51:59 -07:00
jiuker
e3a070e3de put *msgp.Reader back to pool (#17156) 2023-05-08 07:51:39 -07:00
Anton Lindholm
5b364bca1f helm-chart: Use minio service account for post-deploy job if available (#17077) 2023-05-06 23:12:56 -07:00
Poorna
c5c1426262 Validate if replication config being added is self referential (#17142) 2023-05-06 13:35:43 -07:00
Denis Krivenko
be18d435a2 helm: declare missing properties in values.yaml (#17153) 2023-05-06 13:34:58 -07:00
mstein11
7eea6cdb12 add etc-path to post-job.yaml in helm chart (#17148) 2023-05-06 13:34:38 -07:00
Harshavardhana
824c55b3a4 fix: few typos and wordings in minio-limits.md 2023-05-05 20:04:52 -07:00
Klaus Post
76913a9fd5 Signed trailers for signature v4 (#16484) 2023-05-05 19:53:12 -07:00
Minio Trusted
2f44dac14f Update yaml files to latest version RELEASE.2023-05-04T21-44-30Z 2023-05-05 06:21:31 +00:00
Harshavardhana
5569acd95c disallow EC:0 if not set during server startup (#17141) 2023-05-04 14:44:30 -07:00
Harshavardhana
1d0211d395 allow deletes on directory objects to perform permanent deletes (#17132) 2023-05-04 14:43:52 -07:00
Anis Eleuch
06cd0a636e Avoid calling KES Status when peers ping each other (#17140) 2023-05-04 11:28:33 -07:00
Klaus Post
7f7b489a3d snowball: use latest time when mtime is missing (#17133) 2023-05-04 07:29:33 -07:00
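The behaviour the commit describes amounts to substituting the current time for a zero mtime on an archive entry; a minimal sketch:

```go
package main

import (
	"fmt"
	"time"
)

// pickModTime mirrors the commit's behaviour: when a snowball (tar)
// entry carries no mtime, stamp it with the current time instead of
// propagating the zero value.
func pickModTime(hdrModTime time.Time) time.Time {
	if hdrModTime.IsZero() {
		return time.Now()
	}
	return hdrModTime
}

func main() {
	fmt.Println(pickModTime(time.Time{}))
}
```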
Klaus Post
bb6f4d7633 Remove redundant checkFormatJSON logging (#17134) 2023-05-04 07:28:37 -07:00
Alex
6e24dff26a Added MINIO_BROWSER_LOGIN_ANIMATION env support for WebUI console (#17123)
Signed-off-by: Benjamin Perez <benjamin@bexsoft.net>
2023-05-03 15:32:50 -07:00
Harshavardhana
e372e4e592 add nodeName to the log while taking drive offline (#17124) 2023-05-03 15:05:45 -07:00
Harshavardhana
9571b0825e add configurable VRF interface and user-timeout (#17108) 2023-05-03 14:12:25 -07:00
Poorna
90e2cc3d4c Add audit logging of site replication multipart proxying (#17122) 2023-05-03 11:19:45 -07:00
Alex
0c0820caef Updated Console version to v0.26.4 (#17119)
Signed-off-by: Benjamin Perez <benjamin@bexsoft.net>
2023-05-03 11:00:23 -07:00
Harshavardhana
9112ca4e29 change ttfb_distribution metrics to histogramMetric (#17115) 2023-05-03 07:31:00 -07:00
Dylan Piergies
8203cb9990 helm: apply templating to Ingress host and environment variable values (#17073) 2023-05-02 23:27:17 -07:00
Harshavardhana
d5bce978a8 upgrade helm to v5.0.9
Signed-off-by: Harshavardhana <harsha@minio.io>
2023-05-02 23:23:26 -07:00
Poorna
ec84bad882 batch replication now supports arbitrary S3 targets (#17113) 2023-05-02 22:52:35 -07:00
Harshavardhana
b53376a3a4 change directory objects to never create new versions (#17109) 2023-05-02 16:09:33 -07:00
Krishnan Parthasarathi
0ec722bc54 Add tags to NewerNoncurrentVersions audit event (#17110) 2023-05-02 12:56:33 -07:00
Anis Eleuch
4640b13c66 Use exponential backoff algo for internode reconnections (#17052) 2023-05-02 12:35:52 -07:00
Praveen raj Mani
1704abaf6b fix: store notification events immediately for persistent queues (#17112) 2023-05-02 07:53:13 -07:00
WGH
ab34f0065c Support systemd notify protocol (#17062) 2023-05-01 23:15:08 -07:00
Klaus Post
e8c0a50862 optimization use small blocks up to 64KB (#17107) 2023-05-01 09:47:49 -07:00
Denis Krivenko
b963f69f34 helm: align chart properties with naming convention (#17065) 2023-04-29 23:59:40 -07:00
Harshavardhana
02d8f3cdc8 fix: remove active healing on .minio.sys/ during startup (#17072) 2023-04-29 02:05:28 -07:00
Harshavardhana
7ae69accc0 allow root user to be disabled via config settings (#17089) 2023-04-28 12:24:14 -07:00
Minio Trusted
701b89f377 Update yaml files to latest version RELEASE.2023-04-28T18-11-17Z 2023-04-28 18:48:25 +00:00
Anis Eleuch
46d45a6923 grafana: Add TCP dial errors panel (#17101) 2023-04-28 11:11:17 -07:00
Klaus Post
7fad0c8b41 Remove checksums from HTTP range request, add part checksums (#17105) 2023-04-28 08:26:32 -07:00
jiuker
6e27264c6b update cleanupRoutine comment (#17102) 2023-04-28 01:11:51 -07:00
Anis Eleuch
5c83c9724f audit: Add request path and host to audit event (#17099) 2023-04-27 22:18:24 -07:00
Anis Eleuch
d5aff735be info: Add drives per set and sets count per pool information (#17100) 2023-04-27 15:24:03 -07:00
Poorna
98c26df53e fix: allow past retention headers to be copied in batch replication (#17095) 2023-04-27 13:43:18 -07:00
Anis Eleuch
2448a9e047 grafana: Remove minio_s3_requests_errors_total metric (#17094) 2023-04-27 10:55:30 -07:00
jiuker
b28d391a22 fix: add correct worker count before startHTTPLogger() (#17091) 2023-04-27 10:51:16 -07:00
jiuker
c8b92f6067 protect wg.Done from being called twice (#17075) 2023-04-27 07:55:36 -07:00
Krishnan Parthasarathi
e7cac8acef Add tags to auditLogLifecycle (#17081) 2023-04-26 17:49:00 -07:00
Anis Eleuch
31b5acc245 tcp: Increase user timeout to 10 minutes (#17087) 2023-04-26 17:48:31 -07:00
Harshavardhana
6105997299 remove unnecessary log when listing resume fails (#17086) 2023-04-26 14:53:25 -07:00
Harshavardhana
8c874884fc fix: do not copy context in DiskInfo cache (#17085) 2023-04-26 12:13:54 -07:00
Aditya Manthramurthy
ebfe81e5fd Fix put bucket policy error code (#17084) 2023-04-26 11:21:27 -07:00
Anis Eleuch
0b7ca094e4 Remove Expect 100-continue in internode communications (#17061) 2023-04-26 09:33:45 -07:00
Praveen raj Mani
72802a5972 Use 'minio/pkg/sync/errgroup' and 'minio/pkg/workers' (#17069) 2023-04-25 22:57:40 -07:00
Harshavardhana
b1f3935c5b allow ListObjects() when a prefix is an object (#17074) 2023-04-25 22:41:54 -07:00
Harshavardhana
dbd53af369 fix: initialize reverse proxy forwarder with right public certs (#17080) 2023-04-25 15:50:32 -07:00
Harshavardhana
b09fe0e50e fix: DeleteBucket for peers() must recreate bucket upon errors (#17079) 2023-04-25 14:16:35 -07:00
Krishnan Parthasarathi
fae9000304 heal: Pick maximally occurring modTime in quorum (#17071) 2023-04-25 10:13:57 -07:00
Harshavardhana
8fd07bcd51 simplify sort.Sort by using sort.Slice (#17066) 2023-04-24 13:28:18 -07:00
Anis Eleuch
6addc7a35d server-info: Return initializing state properly (#17070) 2023-04-24 09:10:02 -07:00
Harshavardhana
477230c82e avoid attempting to migrate old configs (#17004) 2023-04-21 13:56:08 -07:00
Harshavardhana
d1737199ed fix: delete DNS upon success, update failure message (#17059) 2023-04-21 12:12:31 -07:00
Jiffs Maverick
61101d82d9 Rename inodes metric in grafana dashboards (#17030) 2023-04-21 11:07:30 -07:00
Minio Trusted
cebb948da2 Update yaml files to latest version RELEASE.2023-04-20T17-56-55Z 2023-04-21 08:00:05 +00:00
Shireesh Anjal
c61c4b71b2 Upgrade madmin-go to 2.0.20 (#17054) 2023-04-20 10:56:55 -07:00
Harshavardhana
84f31ed45d simplify MRF, converge it to regular healing (#17026) 2023-04-19 07:47:42 -07:00
jiuker
8a81e317d6 verify maxPartID in object options helpers (#17015) 2023-04-18 22:34:30 -07:00
Anis Eleuch
224d9a752f fix: the race in healing tracker code (#17048) 2023-04-18 14:49:56 -07:00
Anis Eleuch
0db34e4b85 Listen bucket events to send empty events with new line (#17037) 2023-04-18 08:11:30 -07:00
Harshavardhana
8a9b9832fd add Dial timeout for Kafka broker pings (#17044) 2023-04-17 15:45:01 -07:00
Klaus Post
f66625be67 Snowball: Extract headers for metadata (#17042) 2023-04-17 12:16:54 -07:00
Harshavardhana
6825bd7e75 fix: inlined objects don't need to honor long locks (#17039) 2023-04-17 12:16:37 -07:00
Denis Krivenko
18515a4e3b Reformat helm chart templates (#16947) 2023-04-16 23:04:15 -07:00
Klaus Post
839b9c9271 Reduce allocations in Walkdir (#17036) 2023-04-15 10:25:25 -07:00
Harshavardhana
dd9ed85e22 implement support for FTP/SFTP server (#16952) 2023-04-15 07:34:02 -07:00
jiuker
e96c88e914 fix: DeleteBucketThrottle must delete ARN (#17034) 2023-04-15 02:14:26 -07:00
jiuker
6c1410f7f5 fix: Type of rejection for FIFO quota input (#17016) 2023-04-15 01:22:18 -07:00
Krishnan Parthasarathi
f92450d8b3 commonParity should pick readable FileInfo (#17032) 2023-04-14 16:23:28 -07:00
Klaus Post
c133979b8e Add part count to checksum (#17035) 2023-04-14 09:44:45 -07:00
Poorna
a9269cee29 heal: avoid logging version not found (#17031) 2023-04-13 19:45:52 -07:00
Harshavardhana
cf42ede92c upgrade helm to v5.0.8
Signed-off-by: Harshavardhana <harsha@minio.io>
2023-04-13 14:49:51 -07:00
Klaus Post
958a480e53 fix: lambda function expiration when cred.Expiration is set (#17029) 2023-04-13 08:10:57 -07:00
吴小白
62151a751d build: support loong64 (#17027) 2023-04-13 08:10:23 -07:00
Minio Trusted
f1ab9df2ee Update yaml files to latest version RELEASE.2023-04-13T03-08-07Z 2023-04-13 07:28:46 +00:00
Anis Eleuch
a42650c065 Add minio_bucket_usage_version_total metric to Prometheus (#17023) 2023-04-12 20:08:07 -07:00
Harshavardhana
bdad3730f7 fix: do not error out if the local bucket is missing (#17025) 2023-04-12 15:44:16 -07:00
Harshavardhana
a5835cecbf fix: regression in counting total requests (#17024) 2023-04-12 14:37:19 -07:00
Denis Krivenko
b19620b324 Remove redundant namespace from the helm chart (#16968) 2023-04-11 21:09:29 -07:00
Poorna
cd6dec49c0 Add trace support for ilm activity (#16993) 2023-04-11 19:22:32 -07:00
Poorna
d350654aee config: fix duplication of replication priority key (#17014) 2023-04-11 19:22:10 -07:00
Krishnan Parthasarathi
6877578bbc Update minio_node_bucket_scans_finished metrics (#17006) 2023-04-11 19:21:34 -07:00
Kunal Singh
10693fddfa docs: fix repetition in wordings (#16999) 2023-04-11 13:23:14 -07:00
Harshavardhana
f3682b6149 allow writes to pools with inconsistent xl.meta (#17008) 2023-04-11 11:17:46 -07:00
ferhat elmas
056ca0c68e refactor: reuse open file in storage interface (#16970) 2023-04-09 23:09:28 -07:00
Krishnan Parthasarathi
25f7a8e406 Indicate RenameData is called by healObject (#16997) 2023-04-09 10:25:37 -07:00
Anis Eleuch
1f1c267b6c Add used inodes to disk info (#16994) 2023-04-07 07:52:14 -07:00
ferhat elmas
eab1dc927b chore: fix automatic issue handling from linter in ci (#16969) 2023-04-07 07:51:31 -07:00
Anis Eleuch
fc94ea1ced trace: Fix <unknown> func name of requests rejected by max clients (#16977) 2023-04-07 07:51:12 -07:00
jiuker
67d2cf8f30 add Err to getRemoteTargetClient (#16982) 2023-04-07 07:50:46 -07:00
Minio Trusted
2c85b84cbc Update yaml files to latest version RELEASE.2023-04-07T05-28-58Z 2023-04-07 06:07:27 +00:00
Harshavardhana
09a25ea7b7 lint: fix some lint issues on files 2023-04-06 22:42:10 -07:00
Klaus Post
260a63ca73 os/windows: do not return errFileNotFound in readDir's (#16988) 2023-04-06 22:28:58 -07:00
Shubhendu
d3f70ea340 Enable audit log for global handlers (#16964)
Signed-off-by: Shubhendu Ram Tripathi <shubhendu@minio.io>
2023-04-06 21:03:39 -07:00
Aditya Manthramurthy
ceebd35ef7 Bump up console to 0.26.3 (#16995) 2023-04-06 20:18:14 -07:00
Anis Eleuch
91b6fe1af3 trace: Bootstrap to show the correct source line number (#16989) 2023-04-06 17:51:53 -07:00
Klaus Post
9803f68522 snowball: Restrict zstd window size (#16987) 2023-04-06 17:47:38 -07:00
Harshavardhana
47b7469a60 add buffer pool for proxy forwarder (#16942) 2023-04-06 15:54:12 -07:00
Shubhendu
4c204707fd Correctly remove null version during ILM rule application (#16971)
Signed-off-by: Shubhendu Ram Tripathi <shubhendu@minio.io>
Co-authored-by: Harshavardhana <harsha@minio.io>
2023-04-06 14:10:01 -07:00
Harshavardhana
8fd6be0827 avoid mknod in GitHub actions (#16991) 2023-04-06 12:02:41 -07:00
Harshavardhana
c06e0bfef9 set correct Host: value for replication event notification (#16984) 2023-04-06 10:20:53 -07:00
Klaus Post
8625a9dbb3 Make listing metadata permissions stricter (#16974) 2023-04-06 07:52:35 -07:00
Alex
2b71b659e0 Update Console to v0.26.2 (#16981)
Signed-off-by: Benjamin Perez <benjamin@bexsoft.net>
2023-04-06 01:31:09 -07:00
jiuker
0320ac43cb simplify bucketInfo return in GetBucketInfo peer call (#16983) 2023-04-06 01:30:50 -07:00
Anis Eleuch
111c7d4026 assumeRole return the correct http code for auth errors (#16967) 2023-04-05 22:19:31 -07:00
Klaus Post
62c3df0ca3 fix: directory listing on Go 1.20 windows (#16976) 2023-04-05 14:36:49 -07:00
Klaus Post
ae011663e8 mint: Ignore teardown errors (#16979) 2023-04-05 11:10:24 -07:00
Shireesh Anjal
12591cd241 Upgrade madmin-go to 2.0.18 (#16961) 2023-04-05 04:55:55 -07:00
Andrea Longo
0499e1c4b0 Remove ~ from Object Lambda curl example (#16966) 2023-04-04 12:15:28 -07:00
Poorna
3158f2d12e Add support for batch key rotation (#16844) 2023-04-04 10:56:54 -07:00
Praveen raj Mani
51f7f9aaa3 Generalize the event store using go generics (#16910) 2023-04-04 10:52:24 -07:00
jiuker
6e359c586e fix: close chan before return in scanner usage updates (#16960) 2023-04-04 10:51:05 -07:00
Poorna
699a24f7e5 batch: validate versioning on src/tgt buckets (#16955) 2023-04-04 10:50:11 -07:00
jiuker
5fa3665074 add isSysErrNotDir to openFileNoSync (#16958) 2023-04-04 08:00:08 -07:00
Harshavardhana
3b7781835e add lock metrics per node (#16943) 2023-04-03 21:23:24 -07:00
Shubhendu
5fe1b46bfd Enable sending audit log during version deletion (#16954)
Signed-off-by: Shubhendu Ram Tripathi <shubhendu@minio.io>
2023-04-03 11:58:04 -07:00
Harshavardhana
f65cce4317 move mint tests to separate folders to not confuse GitHub (#16940) 2023-03-31 14:38:10 -07:00
Allan Roger Reid
27d0d22e5d docs: Specify correct port in docker-compose README.md (#16939) 2023-03-31 12:20:56 -07:00
Poorna
407c9ddcbf feat: add batch replicate from remote MinIO to local cluster (#16922) 2023-03-31 10:48:36 -07:00
Anis Eleuch
d90d0c8931 Use one http response recorder per external http call (#16938) 2023-03-31 09:37:29 -07:00
Harshavardhana
216a471bbb on quorum DeleteObject() errors attempt an MRF (#16932) 2023-03-31 08:15:41 -07:00
jiuker
a7b7860e0e fix: potential data conflicts when saving site-resync metadata (#16926) 2023-03-30 20:59:45 -07:00
Klaus Post
d703daa480 Add metadata extension to ListObjectVersions (#16930) 2023-03-30 12:20:42 -07:00
Poorna
dc8fdcb9c9 fix: error checking in DeleteBucket (#16929) 2023-03-30 11:54:08 -07:00
Harshavardhana
7a6c4e438e allow more workers for ILM expiration (#16924) 2023-03-30 10:47:15 -07:00
jiuker
c468b4e2a8 fix: avoid out-of-bounds slice index (#16925) 2023-03-30 08:03:48 -07:00
jiuker
b04956a676 fix: put *msgp.Reader back in pool (#16927) 2023-03-30 08:03:12 -07:00
Aditya Manthramurthy
518f6e4d39 fix: missing return after error response (#16920) 2023-03-29 16:21:13 -07:00
Allan Roger Reid
483b226cc1 fix: avoid logging when object/version not found in replication (#16919) 2023-03-29 15:02:45 -07:00
Harshavardhana
13151cbb2b [testing] add mint runner test (#16868) 2023-03-29 11:38:43 -07:00
Harshavardhana
8e02660a0d update all our deps (#16899) 2023-03-28 03:45:24 -07:00
jiuker
16feef2a2c fix: time.Parse RFC3339Nano (#16892) 2023-03-27 11:51:54 -07:00
jiuker
66ff17e452 fix: Avoid multiple write responses (#16894) 2023-03-27 09:15:23 -07:00
Harshavardhana
4c5edacae2 ignore operation timedout errors (#16891) 2023-03-26 03:16:51 -07:00
Anis Eleuch
8b4d0255b7 Set Console global Root CAs early to trust provided certs (#16890) 2023-03-25 09:58:38 -07:00
Minio Trusted
58c129f94a Update yaml files to latest version RELEASE.2023-03-24T21-41-23Z 2023-03-24 23:03:23 +00:00
Poorna
74040b457b Allow setting sync mode for site replication (#16876) 2023-03-24 14:41:23 -07:00
Anis Eleuch
c259a8ea38 Set tcp user timeout to clean sockets with data in the buffer (#16887) 2023-03-24 08:10:58 -07:00
mstmdev
2d51e42305 Remove the redundant conditional in the validateParity function (#16866) 2023-03-23 14:06:22 -07:00
Klaus Post
5e3bfd2148 Add user tags when listing with metadata (#16883) 2023-03-23 10:27:19 -07:00
Klaus Post
8b0ab6ead6 Revert "Make localLocker lock attempts cancellable (#16510)" (#16884) 2023-03-23 10:26:21 -07:00
Shubhendu
b1b0aadabf Revert query parameter src from diag upload if callhome enabled (#16881) 2023-03-23 00:24:58 -07:00
Harshavardhana
ac7d9c449a add missing expiration information from 'sts info' (#16878) 2023-03-22 16:47:02 -07:00
Anis Eleuch
1346561b9d return quorum error instead of insufficient storage error (#16874) 2023-03-22 16:22:37 -07:00
Minio Trusted
4bc52897b2 Update yaml files to latest version RELEASE.2023-03-22T06-36-24Z 2023-03-22 21:16:15 +00:00
dependabot[bot]
035791669e build(deps): bump github.com/nats-io/nats-streaming-server from 0.24.1 to 0.24.3 (#16752)
Signed-off-by: dependabot[bot] <support@github.com>
2023-03-21 23:36:24 -07:00
Krishnan Parthasarathi
6017b63a06 top-locks: Group by lock request ID (#16860) 2023-03-21 18:35:29 -07:00
Klaus Post
11d04279c8 Add lazy init of audit logger (#16842) 2023-03-21 10:50:40 -07:00
Harshavardhana
0448728228 fix: add deadline conns and dnsCache for remote transports (#16865) 2023-03-21 08:49:20 -07:00
Harshavardhana
12047702f5 fix: tweak the maintenance=true to satisfy baremetal first (#16864) 2023-03-21 08:48:38 -07:00
Harshavardhana
fb1492f531 check for quorum errors for DeleteBucket() (#16859) 2023-03-20 23:38:06 -07:00
Minio Trusted
d14ead7bec Update yaml files to latest version RELEASE.2023-03-20T20-16-18Z 2023-03-21 01:07:32 +00:00
Aditya Manthramurthy
05444a0f6a Use the official pub key to always verify binary (#16857) 2023-03-20 13:16:18 -07:00
Harshavardhana
b3c54ec81e reject object names with '\' on windows (#16856) 2023-03-20 13:16:00 -07:00
Harshavardhana
6c11dbffd5 add crash protection from backend modifications (#16846) 2023-03-20 09:08:42 -07:00
Harshavardhana
3b5dbf9046 allow bootstrapping to validate internode tokens (#16853) 2023-03-20 01:40:24 -07:00
Aditya Manthramurthy
09c733677a Add test for fixed post policy exploit (#16855) 2023-03-20 01:06:45 -07:00
Harshavardhana
8d6558b236 fix: convert '\' to '/' on windows (#16852) 2023-03-20 00:35:25 -07:00
Aditya Manthramurthy
67f4ba154a fix: post policy request security bypass (#16849) 2023-03-19 21:15:20 -07:00
Poorna
440ad20c1d Add support for batch job cancellation (#16843) 2023-03-17 23:42:43 -07:00
Krishnan Parthasarathi
31fba6f434 Save bootstrap trace events in a circular buffer (#16823) 2023-03-17 16:01:03 -07:00
Harshavardhana
280442e533 reduce 250ms to 50ms retry looking for metacache block (#16795) 2023-03-17 14:44:01 -07:00
Shubhendu
850a945a18 Added query parameter src to diag upload if callhome enabled (#16837)
Signed-off-by: Shubhendu Ram Tripathi <shubhendu@minio.io>
2023-03-17 08:30:16 -07:00
Harshavardhana
46f9049fb4 simplify error responses for KMS (#16793) 2023-03-16 11:59:42 -07:00
Aditya Manthramurthy
58266c9e2c Add enable flag for LDAP IDP config (#16805) 2023-03-16 11:58:59 -07:00
Poorna
d1e775313d support decommissioning of tiered objects (#16751) 2023-03-16 07:48:05 -07:00
Anis Eleuch
a65df1e67b debug: new tool to reorder local erasure disks (#16816) 2023-03-15 13:53:17 -07:00
Harshavardhana
e700be8cd6 fix: return appropriate Location header for MakeBucket() (#16820) 2023-03-15 13:40:40 -07:00
Harshavardhana
3fdd574f54 update go dependencies (#16798) 2023-03-15 11:59:17 -07:00
Harshavardhana
e0f4dd6027 remove unncessary logs from WalkDir(), PutObject() (#16818) 2023-03-15 11:52:23 -07:00
Harshavardhana
de02eca467 restore rotating root credentials properly (#16812) 2023-03-15 08:07:42 -07:00
Nitish Tiwari
50dbd2cacc Update audit log flow to use new headers with unit (#16797) 2023-03-13 22:50:19 -07:00
Minio Trusted
c2f9cc5824 Update yaml files to latest version RELEASE.2023-03-13T19-46-17Z 2023-03-13 22:11:07 +00:00
Harshavardhana
c7f7e67a10 Do not allow adding root user to IAM subsystem (#16803) 2023-03-13 12:46:17 -07:00
Klaus Post
628042e65e tests: Protect globalLocalDrives against races (#16800) 2023-03-13 06:04:20 -07:00
ferhat elmas
cde7eeb660 chore: drop go versions in static analysis (#16790) 2023-03-11 22:10:33 -08:00
Aditya Manthramurthy
6305b206e1 fix: site-repl should heal STS with virtual parent (#16792) 2023-03-10 16:21:51 -08:00
Klaus Post
d85da9236e Add Object Version count histogram (#16739) 2023-03-10 08:53:59 -08:00
Minio Trusted
9800760cb3 Update yaml files to latest version RELEASE.2023-03-09T23-16-13Z 2023-03-10 00:59:46 +00:00
Anis Elleuch
5c087bdcad fix: a cosmetic error reporting with a lock timeout (#16788) 2023-03-09 15:16:13 -08:00
Klaus Post
a547bf517d Remove locks on usage cache (#16786) 2023-03-09 15:15:46 -08:00
Harshavardhana
b984bf8d1a allow expiration of all versions during Listing() (#16757) 2023-03-09 15:15:30 -08:00
Daniel Valdivia
18f9cccfa7 Upgrade Console to v0.25.0 (#16782)
Signed-off-by: Daniel Valdivia <18384552+dvaldivia@users.noreply.github.com>
2023-03-08 12:15:13 -08:00
Poorna
fb6ab1cca2 fix: allow replication of 'null' delete markers (#16773) 2023-03-08 07:03:29 -08:00
Krishnan Parthasarathi
56c57e2c53 tier-stats: Avoid repeated logs (#16774) 2023-03-07 16:43:05 -08:00
Poorna
a6057c35cc Avoid peer notification when peer is offline, tune retries (#16737) 2023-03-07 08:13:28 -08:00
Harshavardhana
901887e6bf feat: add lambda transformation functions target (#16507) 2023-03-07 08:12:41 -08:00
Poorna
ee54643004 Avoid unnecessary replication heal attempts (#16769) 2023-03-07 07:43:38 -08:00
Harshavardhana
0a17acdb34 return error if policy changes on disabled groups (#16766) 2023-03-06 10:46:24 -08:00
Harshavardhana
72e5212842 fix: handle syscall.EROFS also for osIsPermission() (#16765) 2023-03-06 08:56:29 -08:00
ferhat elmas
714283fae2 cleanup ignored static analysis (#16767) 2023-03-06 08:56:10 -08:00
ferhat elmas
3423028713 cleanup Go linter settings (#16736) 2023-03-04 20:57:35 -08:00
jiuker
9d062b37d7 return underlying error with BackendDown{} error (#16738) 2023-03-03 23:56:53 -08:00
Harshavardhana
4636d3a9c3 upgrade all dependencies (#16753) 2023-03-03 18:22:40 -08:00
Aditya Manthramurthy
c95ede35c1 Switch windows CI back to go 1.19.x (#16755) 2023-03-03 15:19:28 -08:00
Aditya Manthramurthy
7415e1aa56 Switch to go1.20 in CI (#16743) 2023-03-03 10:15:03 -08:00
jiuker
f350953a19 calculate disk cache usage percent accurately (#16740) 2023-03-02 20:32:22 -08:00
Daniel Valdivia
958bba5b42 Attach creds, owner and region to madmin calls (#16658)
Signed-off-by: Daniel Valdivia <18384552+dvaldivia@users.noreply.github.com>
2023-03-02 14:35:24 -08:00
Poorna
0f2b95b497 fix a data race in IAM loading (#16742) 2023-03-02 14:32:54 -08:00
Klaus Post
d07089ceac Fix scanner deadlock on lost global lock (#16726) 2023-02-28 21:34:45 -08:00
Aditya Manthramurthy
47dfa62384 Update LDAP doc for new policy attach|detach cmds (#16723) 2023-02-27 21:04:27 -08:00
Minio Trusted
3a3265cf88 Update yaml files to latest version RELEASE.2023-02-27T18-10-45Z 2023-02-28 00:30:32 +00:00
Harshavardhana
0ff931dc76 fix: allow CORS to work by default (#16713) 2023-02-27 10:10:45 -08:00
Praveen raj Mani
4d708cebe9 Support adding service accounts with expiration (#16430)
Co-authored-by: Harshavardhana <harsha@minio.io>
2023-02-27 10:10:22 -08:00
dorman
4d7c8e3bb8 Remove redundant log (#16710)
Co-authored-by: z30001483 <zekaifeng2@huawei.com>
2023-02-27 09:59:47 -08:00
Aditya Manthramurthy
8cde38404d Add metrics for custom auth plugin (#16701) 2023-02-27 09:55:18 -08:00
Krishnan Parthasarathi
fe7bf6cbbc Support tier-add if tier backend not empty (#16715) 2023-02-27 09:26:26 -08:00
Shubhendu
8b4eb2304b Set logger webhook proxy on subnet proxy change (#16665)
Signed-off-by: Shubhendu Ram Tripathi <shubhendu@minio.io>
2023-02-27 08:35:36 -08:00
Harshavardhana
ae029191a3 liveness returns "busy" if queued requests > available capacity (#16719) 2023-02-27 08:34:52 -08:00
Harshavardhana
bfedea9bad fix: disk healing should honor the right pool/set index (#16712) 2023-02-27 04:55:32 -08:00
Aditya Manthramurthy
7777d3b43a Remove globalSTSTLSConfig (#16709) 2023-02-26 23:37:00 -08:00
Aditya Manthramurthy
9ed4fc9687 Remove globalOpenIDConfig (#16708) 2023-02-25 21:01:37 -08:00
jiuker
b49b39e99d fix: errNoSuchPolicy should use errors.Is (#16656) 2023-02-25 00:01:37 -08:00
jiuker
cd3a2de5a3 add isValidLocation to common parseLocation (#16690) 2023-02-25 08:09:20 +05:30
jiuker
6e8960ccdd fix: delete globalProfiler should lock (#16697) 2023-02-25 08:07:44 +05:30
Aditya Manthramurthy
e05f3d5d84 Remove globalLDAPConfig (#16706) 2023-02-25 08:07:22 +05:30
Anis Elleuch
94c6cb1323 tests: Add test for S3 API error codes (#16705) 2023-02-25 08:06:29 +05:30
Aditya Manthramurthy
3f81cd1b22 Update OpenID doc with info on redirection params (#16704) 2023-02-24 12:13:00 -08:00
Anis Elleuch
8da0f4c5bb Better error message when TLS certs do not have proper permissions (#16703) 2023-02-24 06:34:55 -08:00
Klaus Post
9acf1024e4 Remove bloom filter (#16682)
Removes the bloom filter since it has very limited usability, often gets saturated anyway, and adds a bunch of complexity to the scanner.

Also saves a tiny bit of CPU on each write operation.
2023-02-24 09:03:31 +05:30
Harshavardhana
b21d3f9b82 event target registration failures must be returned (#16700) 2023-02-23 21:59:14 +05:30
Minio Trusted
2bbf380262 Update yaml files to latest version RELEASE.2023-02-22T18-23-45Z 2023-02-22 21:50:11 +00:00
Harshavardhana
f678bcf7ba update dependencies to the latest releases (#16694) 2023-02-22 10:23:45 -08:00
Harshavardhana
a0f06eac2a add Veeam SOS API first implementation (#16688) 2023-02-22 19:54:57 +05:30
jiuker
83fe1a2732 log: add more info about BROWSER_URL (#16678) 2023-02-22 19:54:05 +05:30
Harshavardhana
5c98223c89 add correct HostId instead of deploymentId for error responses (#16686) 2023-02-22 15:41:09 +05:30
jiuker
663a0b7783 save correct bucketInfo on its indexes (#16685) 2023-02-22 14:08:34 +05:30
Anis Elleuch
6efe4d1df6 perf: Only remove generated data when no bucket name specified (#16610) 2023-02-21 21:21:40 -08:00
Daniel Valdivia
fb17f97cf3 move audit and logger message structure to minio/pkg (#16655)
Signed-off-by: Daniel Valdivia <18384552+dvaldivia@users.noreply.github.com>
2023-02-21 21:21:17 -08:00
Shubhendu
6b65ba1551 Added attribute proxy for mc admin config set ALIAS logger_webhook (#16657)
Signed-off-by: Shubhendu Ram Tripathi <shubhendu@minio.io>
2023-02-21 21:19:46 -08:00
Poorna
9202c6e26a Reload bucket targets when replication config is refreshed (#16684) 2023-02-21 21:18:49 -08:00
Poorna
59a5456091 fix: STS error translation to API error (#16683) 2023-02-22 09:57:48 +05:30
Allan Roger Reid
8bfe972bab Set meaningful message from minio with env variable KMS_SECRET_KEY (#16584) 2023-02-22 07:13:01 +05:30
Klaus Post
fd6622458b Add detailed scanner trace output and notifications (#16668) 2023-02-21 09:33:33 -08:00
Poorna
8a08861dd9 fix: healing of replication config for endpoint changes (#16648) 2023-02-20 16:06:13 +05:30
Harshavardhana
82dcfd4e10 update dependencies to latest releases (#16651) 2023-02-20 16:05:20 +05:30
Harshavardhana
b66d7dc708 add missing x-amz-id-2 to event notification data (#16646) 2023-02-20 15:41:47 +05:30
Harshavardhana
eebdd2b31d update NOTICE to 2015-2023 copyright 2023-02-19 00:03:50 +05:30
Harshavardhana
b94733ab31 avoid locks when unnecessary in SiteReplicationMetaInfo() (#16650) 2023-02-18 05:35:22 -08:00
Minio Trusted
7f2c90a0ed Update yaml files to latest version RELEASE.2023-02-17T17-52-43Z 2023-02-17 19:07:39 +00:00
Klaus Post
84bb7d05a9 fix: healing deadlocks and ordering (#16643) 2023-02-17 23:22:43 +05:30
Harshavardhana
98a84d88e2 fix: trim and ignore './' directories (#16642) 2023-02-17 07:15:03 -08:00
jiuker
e470268c7c fix: a possible closer leak in SelectObjectHandler (#16598) 2023-02-17 01:44:40 -08:00
jiuker
3a6cd4f73d fix: sync.Pool won't Put reader back (#16638) 2023-02-17 01:42:43 -08:00
Harshavardhana
6ea150fd68 fix: avoid printing certain errors under few locations (#16631) 2023-02-17 01:40:31 -08:00
Anis Elleuch
a7188bc9d0 fix: evaluate BypassGov policy action in deletion correctly (#16635) 2023-02-17 07:53:34 +05:30
Harshavardhana
e1e9ddd4a4 use kes.Status() for Status() call (#16629) 2023-02-16 22:12:24 +05:30
Krishnan Parthasarathi
a1dd08f2e6 Include tier name in MinIO/S3 target user-agent (#16630) 2023-02-15 22:09:46 -08:00
Krishnan Parthasarathi
d136ac0596 Don't close transition task channel on server exit (#16627) 2023-02-15 22:09:25 -08:00
Poorna
c33a237067 fix: under site replication disallow remote target modification (#16628) 2023-02-15 20:22:13 -08:00
Poorna
eb7d3da994 Fix site replication status reporting of quota (#16626) 2023-02-16 08:08:35 +05:30
Harshavardhana
37134e42d4 ignore io.EOF, io.ErrUnexpectedEOF on xl.meta reads in WalkDir() (#16625) 2023-02-15 07:12:48 -08:00
Klaus Post
626a4efaad Update reedsolomon to v1.11.7 (#16624) 2023-02-15 05:07:35 -08:00
Harshavardhana
0c1f8b4e0f add user-agent for all minio.Client usage (#16619) 2023-02-14 13:19:30 -08:00
Anis Elleuch
857674c3a0 heal: Do not mark buckets as done when there is no online disks (#16621) 2023-02-14 12:50:13 -08:00
Harshavardhana
15a75bd79b ignore preconditionFailed error in batch replication (#16615) 2023-02-14 07:22:08 -08:00
Andreas Auernhammer
74887c7372 kms: add support for KES API keys and switch to KES Go SDK (#16617)
Signed-off-by: Andreas Auernhammer <hi@aead.dev>
2023-02-14 07:19:20 -08:00
Harshavardhana
31188e9327 add parallel workers in batch replication (#16609) 2023-02-13 12:07:58 -08:00
Harshavardhana
ee6d96eb46 Periodically remove stale buckets from in-memory (#16597) 2023-02-13 08:09:52 -08:00
Minio Trusted
11fe2fd79a update helm v5.0.7
Signed-off-by: Harshavardhana <harsha@minio.io>
2023-02-13 16:07:23 +05:30
atalakey4work
c2863cc6ef helm: remove parentheses from podDisruptionBudget (#16605) 2023-02-13 02:36:06 -08:00
Anis Elleuch
d6d01067a0 fix: allow global leader lock context merge to be canceled (#16603)
The global leader lock was originally designed to be acquired only
once, until the node is killed. However, the code currently acquires
it repeatedly during the lifetime of the server, and now there is a
goroutine leak.
2023-02-13 01:26:38 -08:00
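One common way to guarantee a single acquisition for the lifetime of the process is `sync.Once`; a hedged sketch of that pattern (not MinIO's actual leader-lock code):

```go
package main

import (
	"context"
	"fmt"
	"sync"
)

var (
	leaderOnce sync.Once
	leaderCtx  context.Context
)

// leaderLockContext hands out one shared context for the leader lock
// instead of re-acquiring (and leaking a goroutine) on every call.
func leaderLockContext() context.Context {
	leaderOnce.Do(func() {
		// acquire the global leader lock exactly once per process
		leaderCtx = context.Background()
		fmt.Println("leader lock acquired")
	})
	return leaderCtx
}

func main() {
	_ = leaderLockContext()
	_ = leaderLockContext() // no second acquisition, no leaked goroutine
}
```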
Minio Trusted
1d3b18c3f4 update helm v5.0.6
Signed-off-by: Harshavardhana <harsha@minio.io>
2023-02-13 12:23:06 +05:30
jiuker
a15b6f21b8 remove incorrect use of WaitGroup (#16596) 2023-02-12 20:59:45 -08:00
Anis Elleuch
689179bf18 ServerInfo: return per erasure set information (#16583) 2023-02-11 18:31:56 +05:30
Minio Trusted
bf749eec61 Update yaml files to latest version RELEASE.2023-02-10T18-48-39Z 2023-02-10 20:55:45 +00:00
Klaus Post
d0f4cc89a5 Re-add Veeam Listing workaround (#16593) 2023-02-10 10:48:39 -08:00
Harshavardhana
d65debb6bc fix: comply with RFC6750 UserInfo endpoint requirements (#16592) 2023-02-10 22:20:25 +05:30
Harshavardhana
72daccd468 fix: scanner in healing cycle must use actual size (#16589) 2023-02-10 06:53:03 -08:00
Harshavardhana
b363400587 fix: username replacements for aws:username must use parentUser (#16591) 2023-02-10 06:52:31 -08:00
k0i
6b41f941b6 fix: README.md in docs/config (#16564) 2023-02-10 03:02:11 -08:00
jiuker
b9f5f9ba3f fix: aclHandlers convert XML parse error to relevant client error (#16587) 2023-02-10 02:59:10 -08:00
Anis Elleuch
c8ffa59d28 Periodically refresh buckets metadata from the backend disks (#16561)
fixes #16553
2023-02-09 10:29:20 -08:00
Minio Trusted
1141187bf2 Update yaml files to latest version RELEASE.2023-02-09T05-16-53Z 2023-02-09 06:21:34 +00:00
Poorna
52aeebebea Fix site replication meta info call to be non-blocking (#16526)
Co-authored-by: Harshavardhana <harsha@minio.io>
2023-02-08 21:16:53 -08:00
Aditya Manthramurthy
e101384aa4 Update console to v0.23.1 (#16574) 2023-02-08 20:01:53 -08:00
Harshavardhana
71f02adfca Revert "Print golang http errors in MinIO log format (#16465)"
This reverts commit 1fd7946dce.
2023-02-09 09:27:27 +05:30
Krishnan Parthasarathi
9de26531e4 tiering: UpdateWorkers may be called before Init (#16573) 2023-02-08 19:13:34 -08:00
Anis Elleuch
fadc46b906 Add the access key and parent user in the audit log (#16572) 2023-02-08 11:05:26 -08:00
Anis Elleuch
b1d98febfd New disk healing goes through the healing workers (#16568) 2023-02-08 09:25:29 -08:00
jiuker
1828fb212a fix: avoid goroutine leak after timeouts in PeerMetrics (#16569) 2023-02-08 09:11:16 -08:00
Harshavardhana
c97f50e274 replication syncs happen in 60sec intervals (#16571) 2023-02-08 08:23:26 -08:00
jiuker
be92046dfd Anonymous pools in health info output are double counted (#16566) 2023-02-08 06:58:50 -08:00
Krishnan Parthasarathi
990fc415f7 Ensure safety of transitionState at startup (#16563) 2023-02-07 23:11:42 -08:00
Harshavardhana
d8daabae9b make site replication healing safer (#16560) 2023-02-07 21:44:42 -08:00
Harshavardhana
84fe4fd156 fix: multiObjectDelete by passing versionId for authorization (#16562) 2023-02-08 08:01:00 +05:30
Harshavardhana
747d475e76 initialize subsystems that are not dependent on buckets first (#16559) 2023-02-07 12:46:47 -08:00
Anis Elleuch
095b518802 Show a better error msg when internal data encryption key is incorrect (#16549) 2023-02-07 05:22:54 -08:00
Harshavardhana
0319ae756a fix: pass proper username (simple) string as expected (#16555) 2023-02-07 03:43:08 -08:00
Harshavardhana
11c7ecb5cf support if-match/if-none-match with s3 uploads (#16551) 2023-02-06 18:58:29 -08:00
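Conditional uploads follow the usual HTTP precondition semantics; a simplified sketch of evaluating If-Match / If-None-Match against the current ETag (header parsing and `*` handling omitted):

```go
package main

import (
	"errors"
	"fmt"
)

var errPreconditionFailed = errors.New("412 precondition failed")

// checkUploadPreconditions evaluates If-Match / If-None-Match against
// the current object's ETag before accepting a PUT, in the spirit of
// the commit (header parsing simplified; "*" handling omitted).
func checkUploadPreconditions(ifMatch, ifNoneMatch, currentETag string) error {
	if ifMatch != "" && ifMatch != currentETag {
		return errPreconditionFailed
	}
	if ifNoneMatch != "" && ifNoneMatch == currentETag {
		return errPreconditionFailed
	}
	return nil
}

func main() {
	fmt.Println(checkUploadPreconditions("abc", "", "abc")) // <nil>
	fmt.Println(checkUploadPreconditions("", "abc", "abc")) // 412
}
```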
Cesar Celis Hernandez
422c396d73 Removing old action that is no longer needed (#16550) 2023-02-07 07:06:29 +05:30
Anis Elleuch
ffd57fde90 Convert 'server closed idle connection' to errDiskNotFound (#16548) 2023-02-06 10:42:08 -08:00
jiuker
a451d1cb8d fix: close resp.body checking for kubernetes version (#16547) 2023-02-06 10:41:41 -08:00
Harshavardhana
14cf8f1b22 upgrade deps for minio/pkg v1.6.1 to include groups conditions (#16538) 2023-02-06 09:27:29 -08:00
Harshavardhana
5996c8c4d5 feat: allow offline disks on a fresh start (#16541) 2023-02-06 09:26:09 -08:00
Harshavardhana
21885f9457 fix: liveness/readiness must return errors if KMS is unreachable (#16540) 2023-02-06 08:55:56 -08:00
Christian Niessner
6ac48aff46 helm: shared secrets handling for user and svcacct's (#16379) 2023-02-05 21:51:10 -08:00
Mathieu Parent
85ff76e7b0 fix(helm): use PodDisruptionBudget v1 for newer k8s versions (#16466) 2023-02-05 21:50:45 -08:00
Harshavardhana
e47a31f9fc fix: object size distribution in metrics for all objects (#16539) 2023-02-04 21:10:10 -08:00
Harshavardhana
517fcd423d add necessary tools to our docker release (#16536) 2023-02-03 19:40:25 -08:00
Harshavardhana
b780359598 helm version v5.0.5 2023-02-04 02:24:02 +05:30
Harshavardhana
aa8b9572b9 remove double ENABLED help output (#16528) 2023-02-03 05:52:52 -08:00
Cesar Celis Hernandez
8ca14e6267 Updating enterprise action (#16518) 2023-02-02 19:23:31 +05:30
Poorna
876e1a91b2 replication: Fix typo checking PreconditionFailed status code (#16517) 2023-02-02 19:22:02 +05:30
Klaus Post
0b7989aa4b Fix Kafka initialization crash (#16523) 2023-02-02 19:21:19 +05:30
Anis Elleuch
2278fc8f47 Print original error when IAM load is failed in some places (#16511) 2023-02-01 17:32:22 +05:30
Harshavardhana
a91f353621 support 'mc admin service restart' for windows (#16512) 2023-02-01 17:31:46 +05:30
Klaus Post
cdb1b48ad9 Make localLocker lock attempts cancellable (#16510) 2023-01-31 09:41:17 -08:00
Minio Trusted
a24037bfec Update yaml files to latest version RELEASE.2023-01-31T02-24-19Z 2023-01-31 07:49:15 +00:00
Kaan Kabalak
2d0f30f062 Fix typo in code comment (#16509) 2023-01-31 07:54:19 +05:30
Krishnan Parthasarathi
cea2ca8c8e Add restore-status header for multipart objects (#16508) 2023-01-31 07:53:45 +05:30
Klaus Post
f713436dd0 Fix truncated list response on deleted replicated objects (#16504) 2023-01-30 09:13:53 -08:00
Klaus Post
b923a62425 Check pool-index for invalid setups (#16501) 2023-01-30 18:33:07 +05:30
Harshavardhana
67fce4a5b3 fix: dangling delete() upon success should return 404 (#16494) 2023-01-27 12:43:45 -08:00
Poorna
eaa65b7ade fix replication healing on list to consider all versions (#16496) 2023-01-27 12:43:28 -08:00
Poorna
820d94447c replication: fix target bucket passed on GET proxy (#16495) 2023-01-27 10:24:51 -08:00
Poorna
ed20134a7b replication: detect proxy header presence correctly (#16489) 2023-01-27 01:29:32 -08:00
Harshavardhana
d19cbc81b5 fix: do not return IAM/Bucket metadata replication errors to client (#16486) 2023-01-26 11:11:54 -08:00
Anis Elleuch
1fd7946dce Print golang http errors in MinIO log format (#16465) 2023-01-26 22:46:16 +05:30
Klaus Post
027ff0f3a8 fix: set modTime to current in snowball if archive shows empty (#16482) 2023-01-26 22:20:35 +05:30
Jan Zhanal
8fa80874a6 doc: LDAP/AD - nested groups (#16483) 2023-01-26 22:17:59 +05:30
Harshavardhana
430669cfad do not fail groupadd if group already exists
fixes #16488
2023-01-26 21:32:02 +05:30
Harshavardhana
54b561898f fix: anonymize the x-amz-id-2 value from hostname (#16478) 2023-01-25 10:25:36 -08:00
Harshavardhana
65c104a589 add x-amz-id-2 to indicate the node that received the request (#16474) 2023-01-25 09:14:10 -08:00
Anis Elleuch
0a0416b6ea Better error when setting up replication with a service account alias (#16472) 2023-01-25 21:50:12 +05:30
Anis Elleuch
441babdc41 Rename peer S3 prefix to avoid collision in the future (#16473) 2023-01-25 06:46:30 -08:00
Minio Trusted
1bf1fafc86 Update yaml files to latest version RELEASE.2023-01-25T00-19-54Z 2023-01-25 02:09:27 +00:00
Aditya Manthramurthy
50d58e9b2d Bump up console to v0.23.0 (#16470) 2023-01-24 16:19:54 -08:00
Harshavardhana
e64b9f6751 fix: disallow SSE-C encrypted objects on replicated buckets (#16467) 2023-01-24 15:46:33 -08:00
Florian Schwab
d67a846ec4 allow restarting of decommissioning if completed, failed or canceled (#16464) 2023-01-24 07:07:59 -08:00
Poorna
ca2a1c3f60 replication: clone metrics while loading metrics cache (#16462) 2023-01-24 02:10:32 -08:00
Poorna
93fbb228bf Validate if parent user exists for service acct (#16443) 2023-01-24 08:17:18 +05:30
Harshavardhana
3683673fb0 add missing gorilla/mux migration, update credits (#16461) 2023-01-23 08:46:37 -08:00
Anis Elleuch
f37a5b6dae Add CPU info in the check update user-agent (#16447) 2023-01-23 08:07:55 -08:00
Harshavardhana
31b0decd46 migrate to minio/mux from gorilla/mux (#16456) 2023-01-23 16:42:47 +05:30
Harshavardhana
eb561e1c05 allow bootstrap platform checks to be pool specific (#16455) 2023-01-23 16:24:50 +05:30
Mathieu Parent
54c9ecff5b fix(helm): fsGroup is only on podSecurityContext (#16402) 2023-01-21 10:03:03 -08:00
Wessel Valkenburg (prevue.ch)
edcd72585d Update post-job.yaml in Helm chart (#16387) 2023-01-21 10:02:08 -08:00
Alex
1a17fc17bb GitHub Workflows security hardening (#15708) 2023-01-21 09:55:17 -08:00
Poorna
ddad231921 replication: Avoid logging PreConditionFailed error (#16450) 2023-01-21 07:33:04 +05:30
Anis Elleuch
e73894fa50 grafana: Show one metric for the total data growth (#16449) 2023-01-20 09:39:28 -08:00
Klaus Post
03b94f907f fix: deleted object names for directory objects (#16448) 2023-01-20 21:16:06 +05:30
Minio Trusted
3fa7218c44 Update yaml files to latest version RELEASE.2023-01-20T02-05-44Z 2023-01-20 08:11:58 +00:00
Shireesh Anjal
0f591d245d fix: incorrect anonymization of drive endpoint (#16442) 2023-01-20 07:35:44 +05:30
Poorna
1b02e046c2 Fix bandwidth monitoring to be per remote target (#16360) 2023-01-19 18:52:16 +05:30
Harshavardhana
d08e3cc895 add a way to avoid blocking queueHealTask() depending on caller (#16433) 2023-01-19 18:50:54 +05:30
Anis Elleuch
d98116559b Use async healing in PutObject call (#16431) 2023-01-19 00:54:22 -08:00
Krishnan Parthasarathi
71c95ad0d0 Signal stop-rebalance to all rebalancing pools (#16438) 2023-01-19 06:54:23 +05:30
Minio Trusted
5c1a4ba5f9 Update yaml files to latest version RELEASE.2023-01-18T04-36-38Z 2023-01-18 07:46:44 +00:00
Aditya Manthramurthy
698862ec5d Fix transports/timeouts related regressions (#16427) 2023-01-18 10:06:38 +05:30
Harshavardhana
b4ef5ff294 remove unnecessary code checking for supported features (#16423) 2023-01-17 19:37:47 +05:30
Harshavardhana
3db658e51e use correct xml package for custom MarshalXML() (#16421) 2023-01-17 05:08:33 +05:30
Shireesh Anjal
5a9f7516d6 Add monthly license update job (#16391) 2023-01-17 05:08:15 +05:30
Anis Elleuch
3039fd4519 Optimize background heal status to use LocalStorageInfo (#16414) 2023-01-17 05:02:00 +05:30
Harshavardhana
095fc0561d feat: allow decom of multiple pools (#16416) 2023-01-16 21:36:34 +05:30
Anis Elleuch
beb1924437 Properly restart fresh disk healing when failed in some places (#16413) 2023-01-14 05:06:46 +05:30
jiuker
c8e1154f1e fix: reading from erasureDisks must be protected via read lock() (#16407) 2023-01-13 04:16:23 -08:00
Poorna
b204c2dbec fix: enforce deny on DeleteVersionAction (#16409) 2023-01-13 04:16:00 -08:00
Poorna
b22b39de96 Avoid dangling deletes if disk not found (#16401) 2023-01-12 22:20:19 -08:00
Harshavardhana
c242e6c391 fix: calculate common parity properly (#16406) 2023-01-13 03:28:16 +05:30
Anis Elleuch
e05205756f metrics: Add more logs when unable to read bucket usage (#16405) 2023-01-13 02:32:00 +05:30
Daniel Valdivia
d03b244fcd Remove checks target from docker target (#16399)
Signed-off-by: Daniel Valdivia <18384552+dvaldivia@users.noreply.github.com>
2023-01-12 01:39:12 -08:00
Aditya Manthramurthy
5ef679d8f1 Simplify/speedup container image publishing script (#16403) 2023-01-12 01:35:47 -08:00
Minio Trusted
d33c527e39 Update yaml files to latest version RELEASE.2023-01-12T02-06-16Z 2023-01-12 07:27:07 +00:00
Aditya Manthramurthy
7bc95c47a3 Update console to 0.22.5 (#16400) 2023-01-11 18:06:16 -08:00
Anis Elleuch
475a88b555 fix: error out if an object is found after a full decom (#16277) 2023-01-12 05:52:51 +05:30
Allan Roger Reid
9815dac48f fix: allow bind on ipv6 loopback failures (#16388) 2023-01-11 08:47:39 +05:30
Anis Elleuch
1ece3d1dfe Add comment field to service accounts (#16380) 2023-01-10 21:57:52 +04:00
Anis Elleuch
2146ed4033 xl: Quit early when EC config is incorrect (#16390)
Co-authored-by: Anis Elleuch <anis@min.io>
2023-01-09 23:07:45 -08:00
Minio Trusted
52b88b52f0 Update yaml files to latest version RELEASE.2023-01-06T18-11-18Z 2023-01-08 07:51:31 +00:00
Anis Elleuch
ebd4388cca s3: Return XMinioInvalidObjectName if the object contains null char (#16372) 2023-01-06 10:11:18 -08:00
Harshavardhana
57fd02ee57 update console v0.22.4 (#16374)
Signed-off-by: Harshavardhana <harsha@minio.io>
2023-01-05 22:15:51 -08:00
Anis Elleuch
0333412148 fix: heal only once per disk per set among multiple disks (#16358) 2023-01-05 20:41:19 -08:00
Anis Elleuch
1c85652cff lint: Fix in darwin environment (#16368) 2023-01-05 10:12:01 -08:00
Harshavardhana
e0086c1be7 reduce startup delays on kubernetes (#16356) 2023-01-05 02:32:43 -08:00
Poorna
b29e159604 docs: Update replication setup commands (#16361) 2023-01-04 13:39:37 -08:00
Anis Elleuch
7883e55da2 Merge buckets list from different nodes in ListBuckets() call (#16357) 2023-01-04 08:53:58 -08:00
Harshavardhana
b197623ed2 remove unnecessary kernel-tuning docs (#16354) 2023-01-04 01:33:40 -08:00
Harshavardhana
a15a2556c3 converge listBuckets() as a peer call (#16346) 2023-01-03 23:39:40 -08:00
Harshavardhana
14d29b77ae update replication tests with latest 'mc' (#16348) 2023-01-03 22:54:39 -08:00
Harshavardhana
a2514ffeed update klauspost/compress dependency (#16343) 2023-01-03 10:41:14 -08:00
Harshavardhana
f1bbb7fef5 vectorize cluster-wide calls such as bucket operations (#16313) 2023-01-03 08:16:39 -08:00
Minio Trusted
72394a8319 Update yaml files to latest version RELEASE.2023-01-02T09-40-09Z 2023-01-03 10:16:34 +00:00
Harshavardhana
1cd8e1d8b6 remove the startup jitter before locks() (#16340) 2023-01-02 01:40:09 -08:00
jiuker
62cd918061 fix: close helmInfo file descriptor (#16319) 2023-01-01 23:26:59 -08:00
Klaus Post
6a04067514 fix: tweak read buffer size to reduce over-reading (#16338) 2023-01-01 08:14:20 -08:00
Taran Pelkey
49b3908635 fix: misplaced write response command in DetachPolicy() (#16333) 2022-12-30 20:04:03 -08:00
Harshavardhana
75faef888e disable builds for go1.18 (#16332) 2022-12-30 11:37:07 -08:00
Harshavardhana
b67d97b1ba add missing fields in audit logs for non-compressed handlers (#16328) 2022-12-30 10:20:19 -08:00
Anis Elleuch
b8943fdf19 doc: Update prometheus metrics list (#16329) 2022-12-29 15:08:22 -08:00
Harshavardhana
f93183f66e fix: a deadlock by refactoring listBuckets() under site replication (#16323) 2022-12-29 00:08:31 -08:00
Harshavardhana
2937711390 fix: DeleteObject() API with versionId under replication (#16325) 2022-12-28 22:48:33 -08:00
Wojtek Czekalski
aa56c6d51d helm: Make bucket existence check faster (#16321) 2022-12-27 10:32:39 -08:00
Anis Elleuch
27417459fb metrics: Show healing info for all nodes (#16315) 2022-12-26 08:35:32 -08:00
Harshavardhana
5b8fe2e89a allow locks with object affinity to spread across pools (#16312) 2022-12-23 20:55:45 -08:00
Anis Elleuch
acc9c033ed debug: Add X-Amz-Request-ID to lock/unlock calls (#16309) 2022-12-23 19:49:07 -08:00
Poorna
8528b265a9 Validate replication target update to avoid duplicate endpoints (#16311) 2022-12-23 15:44:48 -08:00
Han Cen
44250f1a52 helm: disallow empty containers in post job template (#16281) 2022-12-23 12:32:18 -08:00
Minio Trusted
f7560670d9 update helm v5.0.4
Signed-off-by: Harshavardhana <harsha@minio.io>
2022-12-23 12:29:40 -08:00
Russell Sim
3891885800 helm: fix creating users, via proper secretKey (#16310) 2022-12-23 12:28:43 -08:00
Harshavardhana
b882310e2b avoid locks for internal and invalid buckets in MakeBucket() (#16302) 2022-12-23 07:46:00 -08:00
Poorna
de0b43de32 persist replication stats with leader lock (#16282) 2022-12-22 14:25:13 -08:00
Harshavardhana
48152a56ac upgrade UBI image to 8.7 (#16301) 2022-12-22 10:56:05 -08:00
jiuker
29dd7f1d68 tier verification leaks fd, that must be closed (#16296)
Co-authored-by: Harshavardhana <harsha@minio.io>
2022-12-22 10:35:54 -08:00
Poorna
6423e4c767 Remove site replication config if it succeeded locally (#16279) 2022-12-22 01:31:20 -08:00
Harshavardhana
1dd8f0e8f3 update console v0.22.3 (#16292)
Signed-off-by: Harshavardhana <harsha@minio.io>
2022-12-21 23:47:51 -08:00
Krishnan Parthasarathi
2fa35def2c Fix DeleteObject when only free versions remain (#16289) 2022-12-21 16:24:07 -08:00
Anis Elleuch
34167c51d5 trace: Add bootstrap tracing events (#16286) 2022-12-21 15:52:29 -08:00
Harshavardhana
a5f8af4efb serialize replication stats() only when needed (#16280) 2022-12-20 00:07:53 -08:00
Harshavardhana
5a218f38a1 allow retries for transaction lock on startup (#16273) 2022-12-19 22:00:00 -08:00
Anis Elleuch
e57e946206 Do not save credentials in config.json (#16275) 2022-12-19 12:27:06 -08:00
Klaus Post
b4f71362e9 Avoid config migration on every startup (#16278) 2022-12-19 11:10:14 -08:00
Taran Pelkey
ed37b7a9d5 Add API to fetch policy user/group associations (#16239) 2022-12-19 10:37:03 -08:00
Minio Trusted
6511021fbe update helm v5.0.3 2022-12-19 00:53:02 -08:00
mruzicka
6197ba851b helm: Fix post job template (#16236) 2022-12-18 08:01:22 -08:00
Minio Trusted
3ae1f9d852 update helm v5.0.2 2022-12-17 23:57:10 -08:00
orblazer
0db1930f48 helm: add policy to svcacct (#16272) 2022-12-17 22:50:37 -08:00
Anis Elleuch
89db3fdb5d Do not return an error when version disparity is detected (#16269) 2022-12-16 08:52:12 -08:00
Harshavardhana
80fc3a8a52 use newDynamicTimeoutWithOpts() when appropriate (#16266) 2022-12-15 13:11:37 -08:00
Klaus Post
988a2e8fed Faster startup of large distributed systems with latency (#16259) 2022-12-15 08:31:21 -08:00
Harshavardhana
2433698372 fix: remove unnecessary logs for client conn errors (#16261) 2022-12-15 08:25:05 -08:00
Harshavardhana
5d7e8f79ed fix: remove scanner healing with unnecessary logs (#16260) 2022-12-14 16:39:18 -08:00
Harshavardhana
bad229e16e fix: support event name s3:Restore:* (#16257) 2022-12-14 05:12:07 -08:00
Poorna
d37e514733 Cleanup remote targets automatically on replication config removal. (#16221) 2022-12-14 03:24:06 -08:00
Harshavardhana
c73ea27ed7 do not log checksum mismatch error, client received the error (#16246) 2022-12-14 01:57:40 -08:00
Krishnan Parthasarathi
0159b56717 fix: rebalance to account for object's on-disk size (#16240) 2022-12-14 00:15:14 -08:00
Aditya Manthramurthy
9e6cc847f8 Add HTTP2 config option for policy plugin (#16225) 2022-12-13 14:28:48 -08:00
Taran Pelkey
709eb283d9 Add endpoints for managing IAM policies (#15897)
Co-authored-by: Taran <taran@minio.io>
Co-authored-by: ¨taran-p¨ <¨taran@minio.io¨>
Co-authored-by: Aditya Manthramurthy <donatello@users.noreply.github.com>
2022-12-13 12:13:23 -08:00
Anis Elleuch
76dde82b41 Implement STS account info API (#16115) 2022-12-13 08:38:50 -08:00
Anis Elleuch
939c0100a6 log: Do not interpret verbs in object names in console output (#16233) 2022-12-13 08:27:40 -08:00
Aditya Manthramurthy
2d60bf8c50 Refactor HTTP transports (#16222) 2022-12-12 20:31:21 -08:00
Harshavardhana
37e20f6ef2 feat: allow listening specific addrs for API port (#16223) 2022-12-12 18:48:46 -08:00
Minio Trusted
76905b7a67 Update yaml files to latest version RELEASE.2022-12-12T19-27-27Z 2022-12-13 01:10:31 +00:00
Aditya Manthramurthy
a469e6768d Add LDAP DNS SRV record lookup support (#16201) 2022-12-12 11:27:27 -08:00
Harshavardhana
2fc182d8e6 fix: iso8601TimeFormat padding issue for certain nanoseconds (#16207) 2022-12-12 10:28:30 -08:00
Shireesh Anjal
a2cbeaa9e6 Use different subnet public key during dev/test (#16216) 2022-12-12 10:28:15 -08:00
Harshavardhana
444ff20bc5 do not rename multipart failed transactions back to tmp (#16204) 2022-12-12 01:40:29 -08:00
Harshavardhana
20ef5e7a6a avoid double deletes() when no more versions (#16206) 2022-12-12 01:40:04 -08:00
Minio Trusted
c233c8e329 update console to v0.22.2 2022-12-09 21:10:13 -08:00
Aditya Manthramurthy
e06127566d Add IAM API to attach/detach policies for LDAP (#16182) 2022-12-09 13:08:33 -08:00
Harshavardhana
dfe73629a3 fix: delete marker discrepancies via DeleteObject() API (#16195) 2022-12-08 18:15:16 -08:00
Harshavardhana
b03dd1af17 remove hard limit for number of buckets (#16194) 2022-12-08 12:24:03 -08:00
Harshavardhana
4bc367c490 fix: translate tier add errors properly (#16191) 2022-12-08 11:18:07 -08:00
Klaus Post
3eb2d086b2 Replace filepathx with fork (#16192) 2022-12-08 10:42:44 -08:00
Klaus Post
70986b6e6e Add version id to healresult (#16193) 2022-12-08 07:49:10 -08:00
jiuker
8edc2faaa9 reuse sha256 in config GetSettings (#16188) 2022-12-08 03:03:24 -08:00
Klaus Post
ebe395788b feat: Encrypt s3zip file index (#16179) 2022-12-07 14:56:07 -08:00
Klaus Post
12fd6678ee Encrypt checksums with KMS on CompleteMultipartUpload (#16177) 2022-12-07 10:18:18 -08:00
Harshavardhana
90d35b70b4 remove unnecessary logs for truncated XML inputs (#16184) 2022-12-07 08:30:52 -08:00
Minio Trusted
9f71369b67 Update yaml files to latest version RELEASE.2022-12-07T00-56-37Z 2022-12-07 01:30:51 +00:00
Javier Adriel
04ae9058ed Populate end_session_endpoint (#16183) 2022-12-06 16:56:37 -08:00
Aditya Manthramurthy
a30cfdd88f Bump up madmin-go to v2 (#16162) 2022-12-06 13:46:50 -08:00
Anis Elleuch
1bae32dc96 xl: Delete older data-dir when replacing an existing version-id (#16176) 2022-12-06 13:43:18 -08:00
Anis Elleuch
932d2c3c62 Add X-Amz-Request-Id to internode calls (#16146) 2022-12-06 09:27:26 -08:00
Anis Elleuch
52f4124678 Remove go1.18 from Github workflow tests (#16180) 2022-12-06 09:11:20 -08:00
jiuker
8d8d07ac5c use readlock instead of writelock to get heal information (#16175) 2022-12-06 08:08:22 -08:00
Anis Elleuch
44735be38e s3: Return correct error when Version is invalid in policy document (#16178) 2022-12-06 08:07:24 -08:00
dorman
1ef1b2ba50 helm: modify the job create order (#15696) 2022-12-05 19:22:31 -08:00
Timofei Bredov
6fdbd778d5 Add minimal setup command to helm chart's readme (#16165) 2022-12-05 13:22:02 -08:00
Harshavardhana
419f351df3 avoid logging gzipped body in trace output (#16172) 2022-12-05 13:21:27 -08:00
Klaus Post
180d6b30ca Avoid hot loop when lock is cancelled (#16169) 2022-12-05 13:21:14 -08:00
Klaus Post
3fd9059b4e opt: Only stream big data usage caches (#16168) 2022-12-05 13:01:11 -08:00
Klaus Post
a713aee3d5 Run staticcheck on CI (#16170) 2022-12-05 11:18:50 -08:00
Harshavardhana
a9f5b58a01 fix: update the JSON keys for latest 'mc' release (#16171) 2022-12-05 10:28:22 -08:00
Andreas Auernhammer
d882ba2cb4 kms: add support for KES enclaves (#16139)
Signed-off-by: Andreas Auernhammer <hi@aead.dev>
2022-12-04 02:34:24 -08:00
Cesar Celis Hernandez
90e37a8745 Start PR on Enterprise when there is new MinIO Version (#16121) 2022-12-04 02:29:25 -08:00
jiuker
6086f45d25 fix: in disk cache readCacheFileStream should closed upon return (#16138) 2022-12-04 02:28:10 -08:00
Minio Trusted
d6351879f3 Update yaml files to latest version RELEASE.2022-12-02T19-19-22Z 2022-12-02 20:15:36 +00:00
Harshavardhana
5655272f5a ship mc along with MinIO container image (#16156) 2022-12-02 11:19:22 -08:00
Harshavardhana
9b35c72349 fix: a crash in KMS cert reload function (#16158) 2022-12-02 11:19:05 -08:00
Klaus Post
98cffbce03 s3zip: Limit over-read for single file (#16161) 2022-12-02 08:53:24 -08:00
Klaus Post
1cd875de1e Persist updated metadata (#16160) 2022-12-02 08:35:04 -08:00
Harshavardhana
5a8df7efb3 re-implement StorageInfo to be a peer call (#16155) 2022-12-01 14:31:35 -08:00
Anis Elleuch
c84e2939e4 trace: Publish storage layer errors (#16153) 2022-12-01 12:10:54 -08:00
Anis Elleuch
641ab24aec repl: resync orchestrator to use global shared lock (#16154) 2022-12-01 12:10:09 -08:00
Harshavardhana
71133105d7 re-order the top-level config keys for priority (#16150) 2022-12-01 07:50:08 -08:00
Harshavardhana
625677b189 update reedsolomon v1.11.3 (#16149) 2022-11-30 13:39:03 -08:00
Minio Trusted
76943ac05e Update yaml files to latest version RELEASE.2022-11-29T23-40-49Z 2022-11-30 21:13:07 +00:00
Aditya Manthramurthy
87cbd41265 feat: Allow at most one claim based OpenID IDP (#16145) 2022-11-29 15:40:49 -08:00
Harshavardhana
be92cf5959 change dependency from amqp -> amqp091 (RabbitMQ) official (#16142) 2022-11-28 16:05:06 -08:00
Klaus Post
cc1d8f0057 Check for abandoned data when healing (#16122) 2022-11-28 10:20:55 -08:00
Anis Elleuch
1f1dcdce65 move HTTP recorder to an internal library (#16128) 2022-11-28 10:20:27 -08:00
Shireesh Anjal
98a67a3776 Improvements in logger and audit webhooks (#16102) 2022-11-28 08:03:26 -08:00
Andreas Auernhammer
9b1e70e4f9 kms: fix possible deadlock due to nested RLock calls. (#16136)
Signed-off-by: Andreas Auernhammer <hi@aead.dev>
2022-11-28 07:31:07 -08:00
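Nested read locks are a classic Go pitfall worth spelling out: sync.RWMutex is not reentrant, so a nested RLock deadlocks as soon as a writer queues between the two acquisitions. A standalone sketch of the pattern (illustrative only, not the MinIO code):

```go
package main

import (
	"sync"
	"time"
)

type store struct {
	mu   sync.RWMutex
	data map[string]string
}

// get takes the read lock itself.
func (s *store) get(k string) string {
	s.mu.RLock()
	defer s.mu.RUnlock()
	return s.data[k]
}

// getBoth takes the read lock and then calls get, which takes it again.
// Once a writer queues between the two RLocks, the nested RLock blocks
// behind the writer, which blocks behind the outer RLock: deadlock.
func (s *store) getBoth(a, b string) (string, string) {
	s.mu.RLock()
	defer s.mu.RUnlock()
	x := s.data[a]
	time.Sleep(10 * time.Millisecond) // give the writer time to queue
	return x, s.get(b)                // nested RLock: deadlocks here
}

func main() {
	s := &store{data: map[string]string{"a": "1", "b": "2"}}
	go func() {
		time.Sleep(time.Millisecond)
		s.mu.Lock() // writer queues while getBoth holds the read lock
		defer s.mu.Unlock()
		s.data["c"] = "3"
	}()
	s.getBoth("a", "b")
}
```

Running this trips Go's runtime deadlock detector. The usual fix is the one the commit title implies: provide a lock-free variant for callers that already hold the lock.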
Harshavardhana
09d4f8cd0f avoid serializing decryptKey() every 15mins (#16135)
in an environment where the cert files are symlinks (e.g. Kubernetes),
we resort to reloading certs every 15mins even when they have not
changed - if the certs are the same we can avoid reloading the kes
client instance, ensuring the price of contending with the lock is
paid only when necessary.
2022-11-28 01:14:33 -08:00
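A minimal sketch of that idea (names assumed, not MinIO's actual code): fingerprint the PEM bytes on each periodic reload and rebuild the client only when the fingerprint changes.

```go
package main

import (
	"crypto/sha256"
	"os"
	"sync"
)

type certReloader struct {
	mu          sync.Mutex
	fingerprint [32]byte
}

// reloadIfChanged re-reads the cert file and reports whether the caller
// needs to rebuild its client. Symlinked files (e.g. Kubernetes secrets)
// often "change" on disk without changing content.
func (r *certReloader) reloadIfChanged(path string) (bool, error) {
	pem, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	sum := sha256.Sum256(pem)

	r.mu.Lock()
	defer r.mu.Unlock()
	if sum == r.fingerprint {
		return false, nil // same cert: skip the expensive client rebuild
	}
	r.fingerprint = sum
	return true, nil
}

func main() {
	r := &certReloader{}
	// hypothetical path, for illustration only
	changed, err := r.reloadIfChanged("/path/to/client.crt")
	_, _ = changed, err
}
```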
Minio Trusted
53cbc020b9 Update yaml files to latest version RELEASE.2022-11-26T22-43-32Z 2022-11-27 04:08:35 +00:00
Poorna
63fc6ba2cd preserve replicated ETag properly on target (#16129) 2022-11-26 14:43:32 -08:00
jiuker
ce53d7f6c2 add disk.Close() in healFreshDisk to indicate idiomatic flow of code (#16124) 2022-11-26 00:26:15 -08:00
jiuker
fe8eed963e fix: wrapped error will not equal in decommissioning (#16113) 2022-11-24 08:00:42 -08:00
Anis Elleuch
97eb7dbf5f notify: Return detailed err msg when connecting to target fails (#16118) 2022-11-24 07:59:19 -08:00
Shireesh Anjal
59f877fc64 fix: Timestamp not added in diagnostics report (#16114) 2022-11-23 07:11:22 -08:00
Klaus Post
f96fe9773c fix: duplicated shared prefix with custom delimiter when listing (#16111) 2022-11-22 08:51:04 -08:00
Anis Elleuch
04948b4d55 fix: checking for stale STS account under site replication (#16109) 2022-11-22 07:26:33 -08:00
Klaus Post
98ba622679 Reduce temporary file clean-up waits (#16110) 2022-11-22 07:23:36 -08:00
Harshavardhana
08103870a5 update single drive setup error message (#16098) 2022-11-18 14:47:38 -08:00
Anis Elleuch
993e586855 config: return XMinioConfigNotFound code for non existing config (#16065) 2022-11-18 10:28:14 -08:00
Harshavardhana
58ec835af0 fix: skip free version ID and marker in metadata equality (#16093) 2022-11-18 05:48:22 -08:00
Harshavardhana
6aea950d74 avoid partID lock validating uploadID exists prematurely (#16086) 2022-11-18 03:09:35 -08:00
Poorna
7198be5be9 bucket resync: persist reset id to bucket metadata (#16088) 2022-11-18 01:39:05 -08:00
Minio Trusted
3661aaf8a1 Update yaml files to latest version RELEASE.2022-11-17T23-20-09Z 2022-11-18 08:51:38 +00:00
Klaus Post
a22b4adf4c distribute replication ops based on names (#16083) 2022-11-17 15:20:09 -08:00
Klaus Post
b7bb122be8 fix: replication auto-scaling deadlock (#16084) 2022-11-17 07:35:02 -08:00
Krishnan Parthasarathi
8441a3bf5f fix: update metacache entry only once (#16072) 2022-11-16 11:25:00 -08:00
Harshavardhana
853c4de75a allow changing endpoints in distributed setups (#16071) 2022-11-16 07:59:10 -08:00
jiuker
3597af789e allow resultCh to be closed() after clusterMetaHealthInfo() (#16073) 2022-11-16 03:04:36 -08:00
Andreas Auernhammer
4c9cac0b47 update KES dependency to v0.22.0 (#16077)
Signed-off-by: Andreas Auernhammer <hi@aead.dev>
2022-11-16 03:03:04 -08:00
Minio Trusted
1a0b68498b update console release v0.21.3
Signed-off-by: Harshavardhana <harsha@minio.io>
2022-11-15 16:47:25 -08:00
Shireesh Anjal
5246e3be84 Send health diagnostics data as part of callhome (#16006) 2022-11-15 13:53:05 -08:00
Klaus Post
8a07000e58 fix: refactor getReplicationDiff for safe use (#16051) 2022-11-15 07:59:21 -08:00
Krishnan Parthasarathi
3bb82ef60d top-locks: Include lock-held duration (#16061) 2022-11-15 07:57:52 -08:00
Alexander Overvoorde
c8a221a9a7 Add missing argument for tpl in Helm chart (fix for bug in #16064) (#16068) 2022-11-15 07:56:58 -08:00
Harshavardhana
91f45c4aa6 avoid inconsistent versions healing when versions are large (#16066) 2022-11-14 18:35:26 -08:00
Alexander Overvoorde
7c5e4da90c helm: Allow tls.certSecret in chart to be template'd (#16064) 2022-11-14 09:47:59 -08:00
Poorna
d6bc141bd1 feat: Add support for site level resync (#15753) 2022-11-14 07:16:40 -08:00
jiuker
7ac64ad24a fix: use errors.Is for wrapped returns (#16062) 2022-11-14 07:15:46 -08:00
asoria-lf
14e52f29b0 helm: Add new job to create service accounts (#15939) 2022-11-13 09:28:07 -08:00
Philipp B
344ae9f84e helm: add extraContainer (#15660)
Signed-off-by: Philipp Born <git@pborn.eu>
2022-11-13 09:22:27 -08:00
Minio Trusted
f7db12c7ef helm release v5.0.1 2022-11-13 02:04:51 -08:00
Harshavardhana
962d1f1a71 choose default values upon incorrect storage_class value (#16058) 2022-11-12 10:18:21 -08:00
Harshavardhana
6d76db9d6c improve server startup error when pools are incorrect (#16056) 2022-11-11 19:40:45 -08:00
yanggang
00857f8f59 helm: fix positional parameter in template (#15983)
fixes #15901
2022-11-11 12:44:37 -08:00
Ray
66239f30ce configuring the nats target to reconnect forever (#16050) 2022-11-11 12:42:41 -08:00
jiuker
bf89f79694 save deploymentID to avoid mutating request entry in Audit (#16053) 2022-11-11 12:42:15 -08:00
elg0ch0
ce299b47ea helm: update bucket policy setting via 'mc anonymous' (#16055) 2022-11-11 11:34:01 -08:00
Minio Trusted
6dc7109a9f Update yaml files to latest version RELEASE.2022-11-11T03-44-20Z 2022-11-11 04:57:16 +00:00
jiuker
bdcb485740 netPerfRX Reset() should use write Lock() (#16043) 2022-11-10 19:44:20 -08:00
Poorna
e32b948a49 fix: parsing multipart uploadID under site replicated setup (#16048)
continues the fix from #16034
2022-11-10 16:17:45 -08:00
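Both this fix and #16034 revolve around keeping multipart upload IDs URL-safe. A hedged sketch of the general technique (names are illustrative, not MinIO's actual code): wrap the internal ID in an unpadded, URL-safe base64 alphabet so it round-trips through request paths and query strings.

```go
package main

import (
	"encoding/base64"
	"fmt"
)

// encodeUploadID makes an arbitrary internal ID safe to embed in a URL.
func encodeUploadID(internal string) string {
	return base64.RawURLEncoding.EncodeToString([]byte(internal))
}

// decodeUploadID restores the internal form; an error means a malformed ID.
func decodeUploadID(public string) (string, error) {
	b, err := base64.RawURLEncoding.DecodeString(public)
	if err != nil {
		return "", err
	}
	return string(b), nil
}

func main() {
	// '/' and '+' in the raw ID would be URL-hostile without the encoding
	id := encodeUploadID("dc47e58c-7f7e/uploads/2022/11/10+x")
	fmt.Println(id)
	orig, err := decodeUploadID(id)
	fmt.Println(orig, err)
}
```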
Minio Trusted
4fe9cbb973 Update yaml files to latest version RELEASE.2022-11-10T18-20-21Z 2022-11-10 19:36:54 +00:00
Klaus Post
5b242f1d11 Add Audit target metrics (#16044) 2022-11-10 10:20:21 -08:00
Poorna
34d28dd79f replication: Avoid blocking on mrf save (#16045) 2022-11-10 10:20:02 -08:00
Krishnan Parthasarathi
6eef9b4a23 lifecycle: simplify Eval and HasActiveRules (#16036) 2022-11-10 07:17:45 -08:00
Aditya Manthramurthy
5f1999cc71 fix: avoid URL unsafe chars in multipart upload ID (#16034) 2022-11-09 16:41:16 -08:00
Krishnan Parthasarathi
40a2c6b882 Return remote tier as StorageClass for transitioned objects (#16035) 2022-11-09 15:57:34 -08:00
Krishnan Parthasarathi
7ba281728f ilm: fix x-amz-expiration header evaluation (#16029) 2022-11-09 04:20:34 -08:00
jiuker
7b7356f04c close the reader under disk cache bitrot verification (#16024) 2022-11-09 04:20:11 -08:00
Klaus Post
bbc312fce6 Add notification queue metrics (#16026) 2022-11-08 16:36:47 -08:00
Harshavardhana
1b0dfb0f58 remove printing map() checksums (#16028) 2022-11-08 13:29:24 -08:00
Anis Elleuch
7260241511 Remove some logs caused by external apps (#16027) 2022-11-08 13:29:05 -08:00
Anis Elleuch
3b1a9b9fdf Use the same lock for the scanner and site replication healing (#15985) 2022-11-08 08:55:55 -08:00
yanggang
52769e1e71 remove io/util for advanced golang (#16011) 2022-11-08 07:58:02 -08:00
Harshavardhana
72afc2727a rebalance status must return appropriate error initially (#16022) 2022-11-08 07:56:45 -08:00
Minio Trusted
808739867c Update yaml files to latest version RELEASE.2022-11-08T05-27-07Z 2022-11-08 05:59:18 +00:00
Harshavardhana
752e18e795 upgrade console to v0.21.2 2022-11-07 21:27:07 -08:00
Aditya Manthramurthy
76d822bf1e Add LDAP policy entities API (#15908) 2022-11-07 14:35:09 -08:00
Klaus Post
ddeca9f12a fix: filter rest errors and logs returned (#16019) 2022-11-07 10:38:08 -08:00
Shireesh Anjal
19d0340ddf Update version of madmin-go to v1.7.3 (#15994) 2022-11-07 09:32:18 -08:00
Harshavardhana
21251d8c22 initialize streaming events without lazy initialization (#16016) 2022-11-07 08:01:24 -08:00
Harshavardhana
1f3db03bf0 allow changing argument for path for SNSD setup (#16013) 2022-11-07 00:11:58 -08:00
Harshavardhana
944c62daf4 skip flaky tests on windows OS (#16015) 2022-11-07 00:11:21 -08:00
Harshavardhana
9547b7d0e9 add deadlineConnections on remoteTransport (#16010) 2022-11-05 11:09:21 -07:00
Harshavardhana
76c4ea7682 force all internal MinIO operations to be under UTC (#16009) 2022-11-04 16:44:38 -07:00
Klaus Post
808ecfe0f2 merge versions across sets when listing (#16003) 2022-11-04 11:33:22 -07:00
Klaus Post
2894dd4d1a fix: hold lock while serializing replication stats (#16007) 2022-11-04 09:59:14 -07:00
yanggang
797fa7f97b update Elasticsearch dependency to 7.17.7 (#15992) 2022-11-04 08:23:33 -07:00
jiuker
fd8750e959 fix: http body must be drained in downloadBinary() (#16001) 2022-11-04 08:22:38 -07:00
Harshavardhana
7be65f66b8 support HS256 series of JWT signature for OpenID connect (#15993) 2022-11-03 16:41:53 -07:00
Poorna
4f5d38a4b1 site replication edit: validate endpoint belongs to deployment (#16000) 2022-11-03 16:23:45 -07:00
Anis Elleuch
7e73fc2870 Implement inspect data API v2 (#15474)
Co-authored-by: Klaus Post <klauspost@gmail.com>
2022-11-02 13:36:38 -07:00
yanggang
d2c9a9e395 add windows port allot by "netsh dynamicport" (#15986) 2022-11-02 09:10:26 -07:00
Harshavardhana
0d49b365ff converge SNSD deployments into single code (#15988) 2022-11-01 16:41:01 -07:00
Anis Elleuch
7721595aa9 config: Deprecated delay/max_wait/scanner and introduce speed (#15941) 2022-11-01 08:04:07 -07:00
Harshavardhana
fd6f6fc8df cleanup stale parent multipart directories (#15980) 2022-11-01 08:00:02 -07:00
Aditya Manthramurthy
4fb47cd568 fix: update admin IDP APIs to be more RESTful (#15896) 2022-10-31 14:52:26 -07:00
Klaus Post
ecc932d5dd Clean entire tmp-old on restart (#15979) 2022-10-31 07:27:50 -07:00
Harshavardhana
b57fbff7c1 ignore background healInfo in single drive setup (#15968) 2022-10-31 07:26:10 -07:00
Harshavardhana
4892a766a8 do not panic if webhook returns an error (#15970) 2022-10-30 16:45:53 -07:00
Minio Trusted
0303cd8625 Update yaml files to latest version RELEASE.2022-10-29T06-21-33Z 2022-10-29 09:29:37 +00:00
640 changed files with 64256 additions and 33033 deletions

View File

@@ -1,3 +1,9 @@
## Community Contribution License
All community contributions in this pull request are licensed to the project maintainers
under the terms of the [Apache 2 license](https://www.apache.org/licenses/LICENSE-2.0).
By creating this pull request I represent that I have the right to license the
contributions to the project maintainers under the Apache 2 license.
## Description

View File

@@ -1,5 +0,0 @@
# Config file for markdownlint-cli
MD033:
allowed_elements:
- details
- summary

View File

@@ -20,11 +20,11 @@ jobs:
runs-on: ${{ matrix.os }}
strategy:
matrix:
go-version: [1.18.x, 1.19.x]
go-version: [1.21.x]
os: [ubuntu-latest]
steps:
- uses: actions/checkout@629c2de402a417ea7690ca6ce3f33229e27606a5 # v2
- uses: actions/setup-go@bfdd3570ce990073878bf10f6b2d79082de49492 # v2
- uses: actions/checkout@ac593985615ec2ede58e132d2e21d2b1cbd6127c # v3
- uses: actions/setup-go@6edd4406fa81c3da01a34fa6f6343087c207a568 # v3
with:
go-version: ${{ matrix.go-version }}
check-latest: true

View File

@@ -20,20 +20,28 @@ jobs:
runs-on: ${{ matrix.os }}
strategy:
matrix:
go-version: [1.18.5b7]
go-version: [1.21.x]
os: [ubuntu-latest]
steps:
- uses: actions/checkout@v2
- uses: actions/checkout@v3
- uses: actions/setup-go@v3
with:
go-version: ${{ matrix.go-version }}
- name: Set up Docker Buildx
uses: docker/setup-buildx-action@v2
- name: Setup dockerfile for build test
run: |
echo "FROM us-docker.pkg.dev/google.com/api-project-999119582588/go-boringcrypto/golang:${{ matrix.go-version }}" > Dockerfile.fips.test
echo "COPY . /minio" >> Dockerfile.fips.test
echo "WORKDIR /minio" >> Dockerfile.fips.test
echo "RUN make" >> Dockerfile.fips.test
GO_VERSION=$(go version | cut -d ' ' -f 3 | sed 's/go//')
echo Detected go version $GO_VERSION
cat > Dockerfile.fips.test <<EOF
FROM golang:${GO_VERSION}
COPY . /minio
WORKDIR /minio
ENV GOEXPERIMENT=boringcrypto
RUN make
EOF
- name: Build
uses: docker/build-push-action@v3
@@ -48,4 +56,4 @@ jobs:
- name: Test binary
run: |
docker run --rm minio/fips-test:latest ./minio --version
docker run --rm -i minio/fips-test:latest /bin/bash -c 'go tool nm ./minio' | grep -q FIPS
docker run --rm -i minio/fips-test:latest /bin/bash -c 'go tool nm ./minio | grep FIPS | grep -q FIPS'
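For a runtime counterpart to this symbol-table grep, Go 1.19+ ships a crypto/boring package. A minimal sketch (assumption: it only compiles when the binary is built with GOEXPERIMENT=boringcrypto, hence the build tag):

```go
//go:build boringcrypto

package main

import (
	"crypto/boring"
	"fmt"
)

func main() {
	// Reports whether BoringCrypto is handling supported crypto operations.
	fmt.Println("boringcrypto enabled:", boring.Enabled())
}
```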

View File

@@ -20,10 +20,10 @@ jobs:
runs-on: ${{ matrix.os }}
strategy:
matrix:
go-version: [1.18.x, 1.19.x]
go-version: [1.21.x]
os: [ubuntu-latest]
steps:
- uses: actions/checkout@v2
- uses: actions/checkout@v3
- uses: actions/setup-go@v3
with:
go-version: ${{ matrix.go-version }}

View File

@@ -20,10 +20,10 @@ jobs:
runs-on: ${{ matrix.os }}
strategy:
matrix:
go-version: [1.18.x, 1.19.x]
go-version: [1.21.x]
os: [ubuntu-latest, windows-latest]
steps:
- uses: actions/checkout@v2
- uses: actions/checkout@v3
- uses: actions/setup-go@v3
with:
go-version: ${{ matrix.go-version }}
@@ -34,6 +34,7 @@ jobs:
CGO_ENABLED: 0
GO111MODULE: on
run: |
netsh int ipv4 set dynamicport tcp start=60000 num=61000
go build --ldflags="-s -w" -o %GOPATH%\bin\minio.exe
go test -v --timeout 50m ./...
- name: Build on ${{ matrix.os }}

View File

@@ -20,10 +20,10 @@ jobs:
runs-on: ${{ matrix.os }}
strategy:
matrix:
go-version: [1.18.x, 1.19.x]
go-version: [1.21.x]
os: [ubuntu-latest]
steps:
- uses: actions/checkout@v2
- uses: actions/checkout@v3
- uses: actions/setup-go@v3
with:
go-version: ${{ matrix.go-version }}

.github/workflows/helm-lint.yml
View File

@@ -0,0 +1,30 @@
name: Helm Chart linting
on:
pull_request:
branches:
- master
# This ensures that previous jobs for the PR are canceled when the PR is
# updated.
concurrency:
group: ${{ github.workflow }}-${{ github.head_ref }}
cancel-in-progress: true
permissions:
contents: read
jobs:
release:
runs-on: ubuntu-latest
steps:
- name: Checkout
uses: actions/checkout@v3
- name: Install Helm
uses: azure/setup-helm@v3
- name: Run helm lint
run: |
cd helm/minio
helm lint .

View File

@@ -61,7 +61,7 @@ jobs:
# are turned off - i.e. if ldap="", then ldap server is not enabled for
# the tests.
matrix:
go-version: [1.18.x]
go-version: [1.21.x]
ldap: ["", "localhost:389"]
etcd: ["", "http://localhost:2379"]
openid: ["", "http://127.0.0.1:5556/dex"]
@@ -75,16 +75,16 @@ jobs:
openid: "http://127.0.0.1:5556/dex"
steps:
- uses: actions/checkout@v2
- uses: actions/checkout@v3
- uses: actions/setup-go@v3
with:
go-version: ${{ matrix.go-version }}
check-latest: true
- name: Test LDAP/OpenID/Etcd combo
env:
LDAP_TEST_SERVER: ${{ matrix.ldap }}
ETCD_SERVER: ${{ matrix.etcd }}
OPENID_TEST_SERVER: ${{ matrix.openid }}
_MINIO_LDAP_TEST_SERVER: ${{ matrix.ldap }}
_MINIO_ETCD_TEST_SERVER: ${{ matrix.etcd }}
_MINIO_OPENID_TEST_SERVER: ${{ matrix.openid }}
run: |
sudo sysctl net.ipv6.conf.all.disable_ipv6=0
sudo sysctl net.ipv6.conf.default.disable_ipv6=0
@@ -92,20 +92,20 @@ jobs:
- name: Test with multiple OpenID providers
if: matrix.openid == 'http://127.0.0.1:5556/dex'
env:
LDAP_TEST_SERVER: ${{ matrix.ldap }}
ETCD_SERVER: ${{ matrix.etcd }}
OPENID_TEST_SERVER: ${{ matrix.openid }}
OPENID_TEST_SERVER_2: "http://127.0.0.1:5557/dex"
_MINIO_LDAP_TEST_SERVER: ${{ matrix.ldap }}
_MINIO_ETCD_TEST_SERVER: ${{ matrix.etcd }}
_MINIO_OPENID_TEST_SERVER: ${{ matrix.openid }}
_MINIO_OPENID_TEST_SERVER_2: "http://127.0.0.1:5557/dex"
run: |
sudo sysctl net.ipv6.conf.all.disable_ipv6=0
sudo sysctl net.ipv6.conf.default.disable_ipv6=0
make test-iam
- name: Test with Access Management Plugin enabled
env:
LDAP_TEST_SERVER: ${{ matrix.ldap }}
ETCD_SERVER: ${{ matrix.etcd }}
OPENID_TEST_SERVER: ${{ matrix.openid }}
POLICY_PLUGIN_ENDPOINT: "http://127.0.0.1:8080"
_MINIO_LDAP_TEST_SERVER: ${{ matrix.ldap }}
_MINIO_ETCD_TEST_SERVER: ${{ matrix.etcd }}
_MINIO_OPENID_TEST_SERVER: ${{ matrix.openid }}
_MINIO_POLICY_PLUGIN_TEST_ENDPOINT: "http://127.0.0.1:8080"
run: |
sudo sysctl net.ipv6.conf.all.disable_ipv6=0
sudo sysctl net.ipv6.conf.default.disable_ipv6=0

View File

@@ -1,30 +0,0 @@
name: Markdown Linter
on:
pull_request:
branches:
- master
# This ensures that previous jobs for the PR are canceled when the PR is
# updated.
concurrency:
group: ${{ github.workflow }}-${{ github.head_ref }}
cancel-in-progress: true
permissions:
contents: read
jobs:
lint:
name: Lint all docs
runs-on: ubuntu-latest
steps:
- name: Check out code
uses: actions/checkout@v2
- name: Lint all docs
run: |
npm install -g markdownlint-cli
markdownlint --fix '**/*.md' \
--config /home/runner/work/minio/minio/.github/markdown-lint-cfg.yaml \
--disable MD013 MD040 MD051

.github/workflows/mint.yml
View File

@@ -0,0 +1,65 @@
name: Mint Tests
on:
pull_request:
branches:
- master
# This ensures that previous jobs for the PR are canceled when the PR is
# updated.
concurrency:
group: ${{ github.workflow }}-${{ github.head_ref }}
cancel-in-progress: true
permissions:
contents: read
jobs:
mint-test:
runs-on: mint
timeout-minutes: 120
steps:
- name: cleanup #https://github.com/actions/checkout/issues/273
run: |
sudo -S rm -rf ${GITHUB_WORKSPACE}
mkdir ${GITHUB_WORKSPACE}
- name: checkout-step
uses: actions/checkout@v3
- name: setup-go-step
uses: actions/setup-go@v2
with:
go-version: 1.21.x
- name: github sha short
id: vars
run: echo "sha_short=$(git rev-parse --short HEAD)" >> $GITHUB_OUTPUT
- name: build-minio
run: |
TAG="minio/minio:${{ steps.vars.outputs.sha_short }}" make docker
- name: compress and encrypt
run: |
${GITHUB_WORKSPACE}/.github/workflows/run-mint.sh "compress-encrypt" "minio" "minio123" "${{ steps.vars.outputs.sha_short }}"
- name: multiple pools
run: |
${GITHUB_WORKSPACE}/.github/workflows/run-mint.sh "pools" "minio" "minio123" "${{ steps.vars.outputs.sha_short }}"
- name: standalone erasure
run: |
${GITHUB_WORKSPACE}/.github/workflows/run-mint.sh "erasure" "minio" "minio123" "${{ steps.vars.outputs.sha_short }}"
- name: The job must cleanup
if: ${{ always() }}
run: |
export JOB_NAME=${{ steps.vars.outputs.sha_short }}
for mode in $(echo compress-encrypt pools erasure); do
docker-compose -f ${GITHUB_WORKSPACE}/.github/workflows/mint/minio-${mode}.yaml down || true
docker-compose -f ${GITHUB_WORKSPACE}/.github/workflows/mint/minio-${mode}.yaml rm || true
done
docker rmi -f minio/minio:${{ steps.vars.outputs.sha_short }}
docker system prune -f || true
docker volume prune -f || true
docker volume rm $(docker volume ls -q -f dangling=true) || true

View File

@@ -0,0 +1,80 @@
version: '3.7'
# Settings and configurations that are common for all containers
x-minio-common: &minio-common
image: minio/minio:${JOB_NAME}
command: server --console-address ":9001" http://minio{1...4}/cdata{1...2}
expose:
- "9000"
- "9001"
environment:
MINIO_CI_CD: "on"
MINIO_ROOT_USER: "minio"
MINIO_ROOT_PASSWORD: "minio123"
MINIO_COMPRESSION_ENABLE: "on"
MINIO_COMPRESSION_MIME_TYPES: "*"
MINIO_COMPRESSION_ALLOW_ENCRYPTION: "on"
MINIO_KMS_SECRET_KEY: "my-minio-key:OSMM+vkKUTCvQs9YL/CVMIMt43HFhkUpqJxTmGl6rYw="
healthcheck:
test: ["CMD", "curl", "-f", "http://localhost:9000/minio/health/live"]
interval: 30s
timeout: 20s
retries: 3
# starts 4 docker containers running minio server instances.
# using nginx reverse proxy, load balancing, you can access
# it through port 9000.
services:
minio1:
<<: *minio-common
hostname: minio1
volumes:
- cdata1-1:/cdata1
- cdata1-2:/cdata2
minio2:
<<: *minio-common
hostname: minio2
volumes:
- cdata2-1:/cdata1
- cdata2-2:/cdata2
minio3:
<<: *minio-common
hostname: minio3
volumes:
- cdata3-1:/cdata1
- cdata3-2:/cdata2
minio4:
<<: *minio-common
hostname: minio4
volumes:
- cdata4-1:/cdata1
- cdata4-2:/cdata2
nginx:
image: nginx:1.19.2-alpine
hostname: nginx
volumes:
- ./nginx-4-node.conf:/etc/nginx/nginx.conf:ro
ports:
- "9000:9000"
- "9001:9001"
depends_on:
- minio1
- minio2
- minio3
- minio4
## By default this config uses default local driver,
## For custom volumes replace with volume driver configuration.
volumes:
cdata1-1:
cdata1-2:
cdata2-1:
cdata2-2:
cdata3-1:
cdata3-2:
cdata4-1:
cdata4-2:
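The MINIO_KMS_SECRET_KEY value in these compose files follows the `<key-name>:<base64 of 32 random bytes>` format. A small generator sketch (the key name "my-minio-key" is simply the example used above):

```go
package main

import (
	"crypto/rand"
	"encoding/base64"
	"fmt"
)

func main() {
	key := make([]byte, 32) // 256-bit root key
	if _, err := rand.Read(key); err != nil {
		panic(err)
	}
	fmt.Printf("MINIO_KMS_SECRET_KEY=my-minio-key:%s\n",
		base64.StdEncoding.EncodeToString(key))
}
```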

View File

@@ -0,0 +1,51 @@
version: '3.7'
# Settings and configurations that are common for all containers
x-minio-common: &minio-common
image: minio/minio:${JOB_NAME}
command: server --console-address ":9001" edata{1...4}
expose:
- "9000"
- "9001"
environment:
MINIO_CI_CD: "on"
MINIO_ROOT_USER: "minio"
MINIO_ROOT_PASSWORD: "minio123"
MINIO_KMS_SECRET_KEY: "my-minio-key:OSMM+vkKUTCvQs9YL/CVMIMt43HFhkUpqJxTmGl6rYw="
healthcheck:
test: ["CMD", "curl", "-f", "http://localhost:9000/minio/health/live"]
interval: 30s
timeout: 20s
retries: 3
# starts 4 docker containers running minio server instances.
# using nginx reverse proxy, load balancing, you can access
# it through port 9000.
services:
minio1:
<<: *minio-common
hostname: minio1
volumes:
- edata1-1:/edata1
- edata1-2:/edata2
- edata1-3:/edata3
- edata1-4:/edata4
nginx:
image: nginx:1.19.2-alpine
hostname: nginx
volumes:
- ./nginx-1-node.conf:/etc/nginx/nginx.conf:ro
ports:
- "9000:9000"
- "9001:9001"
depends_on:
- minio1
## By default this config uses default local driver,
## For custom volumes replace with volume driver configuration.
volumes:
edata1-1:
edata1-2:
edata1-3:
edata1-4:

.github/workflows/mint/minio-pools.yaml
View File

@@ -0,0 +1,117 @@
version: '3.7'
# Settings and configurations that are common for all containers
x-minio-common: &minio-common
image: minio/minio:${JOB_NAME}
command: server --console-address ":9001" http://minio{1...4}/pdata{1...2} http://minio{5...8}/pdata{1...2}
expose:
- "9000"
- "9001"
environment:
MINIO_CI_CD: "on"
MINIO_ROOT_USER: "minio"
MINIO_ROOT_PASSWORD: "minio123"
MINIO_KMS_SECRET_KEY: "my-minio-key:OSMM+vkKUTCvQs9YL/CVMIMt43HFhkUpqJxTmGl6rYw="
healthcheck:
test: ["CMD", "curl", "-f", "http://localhost:9000/minio/health/live"]
interval: 30s
timeout: 20s
retries: 3
# starts 4 docker containers running minio server instances.
# using nginx reverse proxy, load balancing, you can access
# it through port 9000.
services:
minio1:
<<: *minio-common
hostname: minio1
volumes:
- pdata1-1:/pdata1
- pdata1-2:/pdata2
minio2:
<<: *minio-common
hostname: minio2
volumes:
- pdata2-1:/pdata1
- pdata2-2:/pdata2
minio3:
<<: *minio-common
hostname: minio3
volumes:
- pdata3-1:/pdata1
- pdata3-2:/pdata2
minio4:
<<: *minio-common
hostname: minio4
volumes:
- pdata4-1:/pdata1
- pdata4-2:/pdata2
minio5:
<<: *minio-common
hostname: minio5
volumes:
- pdata5-1:/pdata1
- pdata5-2:/pdata2
minio6:
<<: *minio-common
hostname: minio6
volumes:
- pdata6-1:/pdata1
- pdata6-2:/pdata2
minio7:
<<: *minio-common
hostname: minio7
volumes:
- pdata7-1:/pdata1
- pdata7-2:/pdata2
minio8:
<<: *minio-common
hostname: minio8
volumes:
- pdata8-1:/pdata1
- pdata8-2:/pdata2
nginx:
image: nginx:1.19.2-alpine
hostname: nginx
volumes:
- ./nginx-8-node.conf:/etc/nginx/nginx.conf:ro
ports:
- "9000:9000"
- "9001:9001"
depends_on:
- minio1
- minio2
- minio3
- minio4
- minio5
- minio6
- minio7
- minio8
## By default this config uses default local driver,
## For custom volumes replace with volume driver configuration.
volumes:
pdata1-1:
pdata1-2:
pdata2-1:
pdata2-2:
pdata3-1:
pdata3-2:
pdata4-1:
pdata4-2:
pdata5-1:
pdata5-2:
pdata6-1:
pdata6-2:
pdata7-1:
pdata7-2:
pdata8-1:
pdata8-2:
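A readiness probe equivalent to the compose healthcheck above, sketched in Go against the nginx frontend (assumes the compose stack is up and port 9000 is published on localhost):

```go
package main

import (
	"fmt"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{Timeout: 10 * time.Second}
	for i := 0; i < 60; i++ {
		// same endpoint the compose healthcheck curls on each node
		resp, err := client.Get("http://localhost:9000/minio/health/live")
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				fmt.Println("cluster is live")
				return
			}
		}
		time.Sleep(5 * time.Second)
	}
	fmt.Println("cluster did not become live in time")
}
```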

.github/workflows/mint/nginx-1-node.conf
View File

@@ -0,0 +1,100 @@
user nginx;
worker_processes auto;
error_log /var/log/nginx/error.log warn;
pid /var/run/nginx.pid;
events {
worker_connections 4096;
}
http {
include /etc/nginx/mime.types;
default_type application/octet-stream;
log_format main '$remote_addr - $remote_user [$time_local] "$request" '
'$status $body_bytes_sent "$http_referer" '
'"$http_user_agent" "$http_x_forwarded_for"';
access_log /var/log/nginx/access.log main;
sendfile on;
keepalive_timeout 65;
# include /etc/nginx/conf.d/*.conf;
upstream minio {
server minio1:9000;
}
upstream console {
ip_hash;
server minio1:9001;
}
server {
listen 9000;
listen [::]:9000;
server_name localhost;
# To allow special characters in headers
ignore_invalid_headers off;
# Allow any size file to be uploaded.
# Set to a value such as 1000m; to restrict file size to a specific value
client_max_body_size 0;
# To disable buffering
proxy_buffering off;
proxy_request_buffering off;
location / {
proxy_set_header Host $http_host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
proxy_connect_timeout 300;
# Default is HTTP/1, keepalive is only enabled in HTTP/1.1
proxy_http_version 1.1;
proxy_set_header Connection "";
chunked_transfer_encoding off;
proxy_pass http://minio;
}
}
server {
listen 9001;
listen [::]:9001;
server_name localhost;
# To allow special characters in headers
ignore_invalid_headers off;
# Allow any size file to be uploaded.
# Set to a value such as 1000m; to restrict file size to a specific value
client_max_body_size 0;
# To disable buffering
proxy_buffering off;
proxy_request_buffering off;
location / {
proxy_set_header Host $http_host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
proxy_set_header X-NginX-Proxy true;
# This is necessary to pass the correct IP to be hashed
real_ip_header X-Real-IP;
proxy_connect_timeout 300;
# To support websocket
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection "upgrade";
chunked_transfer_encoding off;
proxy_pass http://console;
}
}
}

.github/workflows/mint/nginx-4-node.conf
View File

@@ -0,0 +1,106 @@
user nginx;
worker_processes auto;
error_log /var/log/nginx/error.log warn;
pid /var/run/nginx.pid;
events {
worker_connections 4096;
}
http {
include /etc/nginx/mime.types;
default_type application/octet-stream;
log_format main '$remote_addr - $remote_user [$time_local] "$request" '
'$status $body_bytes_sent "$http_referer" '
'"$http_user_agent" "$http_x_forwarded_for"';
access_log /var/log/nginx/access.log main;
sendfile on;
keepalive_timeout 65;
# include /etc/nginx/conf.d/*.conf;
upstream minio {
server minio1:9000;
server minio2:9000;
server minio3:9000;
server minio4:9000;
}
upstream console {
ip_hash;
server minio1:9001;
server minio2:9001;
server minio3:9001;
server minio4:9001;
}
server {
listen 9000;
listen [::]:9000;
server_name localhost;
# To allow special characters in headers
ignore_invalid_headers off;
# Allow any size file to be uploaded.
# Set to a value such as 1000m; to restrict file size to a specific value
client_max_body_size 0;
# To disable buffering
proxy_buffering off;
proxy_request_buffering off;
location / {
proxy_set_header Host $http_host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
proxy_connect_timeout 300;
# Default is HTTP/1, keepalive is only enabled in HTTP/1.1
proxy_http_version 1.1;
proxy_set_header Connection "";
chunked_transfer_encoding off;
proxy_pass http://minio;
}
}
server {
listen 9001;
listen [::]:9001;
server_name localhost;
# To allow special characters in headers
ignore_invalid_headers off;
# Allow any size file to be uploaded.
# Set to a value such as 1000m; to restrict file size to a specific value
client_max_body_size 0;
# To disable buffering
proxy_buffering off;
proxy_request_buffering off;
location / {
proxy_set_header Host $http_host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
proxy_set_header X-NginX-Proxy true;
# This is necessary to pass the correct IP to be hashed
real_ip_header X-Real-IP;
proxy_connect_timeout 300;
# To support websocket
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection "upgrade";
chunked_transfer_encoding off;
proxy_pass http://console;
}
}
}

.github/workflows/mint/nginx-8-node.conf
View File

@@ -0,0 +1,114 @@
user nginx;
worker_processes auto;
error_log /var/log/nginx/error.log warn;
pid /var/run/nginx.pid;
events {
worker_connections 4096;
}
http {
include /etc/nginx/mime.types;
default_type application/octet-stream;
log_format main '$remote_addr - $remote_user [$time_local] "$request" '
'$status $body_bytes_sent "$http_referer" '
'"$http_user_agent" "$http_x_forwarded_for"';
access_log /var/log/nginx/access.log main;
sendfile on;
keepalive_timeout 65;
# include /etc/nginx/conf.d/*.conf;
upstream minio {
server minio1:9000;
server minio2:9000;
server minio3:9000;
server minio4:9000;
server minio5:9000;
server minio6:9000;
server minio7:9000;
server minio8:9000;
}
upstream console {
ip_hash;
server minio1:9001;
server minio2:9001;
server minio3:9001;
server minio4:9001;
server minio5:9001;
server minio6:9001;
server minio7:9001;
server minio8:9001;
}
server {
listen 9000;
listen [::]:9000;
server_name localhost;
# To allow special characters in headers
ignore_invalid_headers off;
# Allow any size file to be uploaded.
# Set to a value such as 1000m; to restrict file size to a specific value
client_max_body_size 0;
# To disable buffering
proxy_buffering off;
proxy_request_buffering off;
location / {
proxy_set_header Host $http_host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
proxy_connect_timeout 300;
# Default is HTTP/1, keepalive is only enabled in HTTP/1.1
proxy_http_version 1.1;
proxy_set_header Connection "";
chunked_transfer_encoding off;
proxy_pass http://minio;
}
}
server {
listen 9001;
listen [::]:9001;
server_name localhost;
# To allow special characters in headers
ignore_invalid_headers off;
# Allow any size file to be uploaded.
# Set to a value such as 1000m; to restrict file size to a specific value
client_max_body_size 0;
# To disable buffering
proxy_buffering off;
proxy_request_buffering off;
location / {
proxy_set_header Host $http_host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
proxy_set_header X-NginX-Proxy true;
# This is necessary to pass the correct IP to be hashed
real_ip_header X-Real-IP;
proxy_connect_timeout 300;
# To support websocket
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection "upgrade";
chunked_transfer_encoding off;
proxy_pass http://console;
}
}
}

.github/workflows/mint/nginx.conf
View File

@@ -0,0 +1,106 @@
user nginx;
worker_processes auto;
error_log /var/log/nginx/error.log warn;
pid /var/run/nginx.pid;
events {
worker_connections 4096;
}
http {
include /etc/nginx/mime.types;
default_type application/octet-stream;
log_format main '$remote_addr - $remote_user [$time_local] "$request" '
'$status $body_bytes_sent "$http_referer" '
'"$http_user_agent" "$http_x_forwarded_for"';
access_log /var/log/nginx/access.log main;
sendfile on;
keepalive_timeout 65;
# include /etc/nginx/conf.d/*.conf;
upstream minio {
server minio1:9000;
server minio2:9000;
server minio3:9000;
server minio4:9000;
}
upstream console {
ip_hash;
server minio1:9001;
server minio2:9001;
server minio3:9001;
server minio4:9001;
}
server {
listen 9000;
listen [::]:9000;
server_name localhost;
# To allow special characters in headers
ignore_invalid_headers off;
# Allow any size file to be uploaded.
# Set to a value such as 1000m; to restrict file size to a specific value
client_max_body_size 0;
# To disable buffering
proxy_buffering off;
proxy_request_buffering off;
location / {
proxy_set_header Host $http_host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
proxy_connect_timeout 300;
# Default is HTTP/1, keepalive is only enabled in HTTP/1.1
proxy_http_version 1.1;
proxy_set_header Connection "";
chunked_transfer_encoding off;
proxy_pass http://minio;
}
}
server {
listen 9001;
listen [::]:9001;
server_name localhost;
# To allow special characters in headers
ignore_invalid_headers off;
# Allow any size file to be uploaded.
# Set to a value such as 1000m; to restrict file size to a specific value
client_max_body_size 0;
# To disable buffering
proxy_buffering off;
proxy_request_buffering off;
location / {
proxy_set_header Host $http_host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
proxy_set_header X-NginX-Proxy true;
# This is necessary to pass the correct IP to be hashed
real_ip_header X-Real-IP;
proxy_connect_timeout 300;
# To support websocket
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection "upgrade";
chunked_transfer_encoding off;
proxy_pass http://console;
}
}
}

View File

@@ -21,10 +21,10 @@ jobs:
strategy:
matrix:
go-version: [1.18.x, 1.19.x]
go-version: [1.21.x]
steps:
- uses: actions/checkout@v2
- uses: actions/checkout@v3
- uses: actions/setup-go@v3
with:
go-version: ${{ matrix.go-version }}

.github/workflows/root-disable.yml
View File

@@ -0,0 +1,34 @@
name: Root lockdown tests
on:
pull_request:
branches:
- master
# This ensures that previous jobs for the PR are canceled when the PR is
# updated.
concurrency:
group: ${{ github.workflow }}-${{ github.head_ref }}
cancel-in-progress: true
permissions:
contents: read
jobs:
build:
name: Go ${{ matrix.go-version }} on ${{ matrix.os }}
runs-on: ${{ matrix.os }}
strategy:
matrix:
go-version: [1.21.x]
os: [ubuntu-latest]
steps:
- uses: actions/checkout@v3
- uses: actions/setup-go@v3
with:
go-version: ${{ matrix.go-version }}
check-latest: true
- name: Start root lockdown tests
run: |
make test-root-disable

.github/workflows/run-mint.sh
View File

@@ -0,0 +1,46 @@
#!/bin/bash
set -ex
export MODE="$1"
export ACCESS_KEY="$2"
export SECRET_KEY="$3"
export JOB_NAME="$4"
export MINT_MODE="full"
docker system prune -f || true
docker volume prune -f || true
docker volume rm $(docker volume ls -f dangling=true) || true
## change working directory
cd .github/workflows/mint
docker-compose -f minio-${MODE}.yaml up -d
sleep 5m
docker system prune -f || true
docker volume prune -f || true
docker volume rm $(docker volume ls -q -f dangling=true) || true
# Stop two nodes, one of each pool, to check that all S3 calls work while quorum is still there
[ "${MODE}" == "pools" ] && docker-compose -f minio-${MODE}.yaml stop minio2
[ "${MODE}" == "pools" ] && docker-compose -f minio-${MODE}.yaml stop minio6
docker run --rm --net=mint_default \
--name="mint-${MODE}-${JOB_NAME}" \
-e SERVER_ENDPOINT="nginx:9000" \
-e ACCESS_KEY="${ACCESS_KEY}" \
-e SECRET_KEY="${SECRET_KEY}" \
-e ENABLE_HTTPS=0 \
-e MINT_MODE="${MINT_MODE}" \
docker.io/minio/mint:edge
docker-compose -f minio-${MODE}.yaml down || true
sleep 10s
docker system prune -f || true
docker volume prune -f || true
docker volume rm $(docker volume ls -q -f dangling=true) || true
## change working directory
cd ../../../

.github/workflows/shfmt.yml
View File

@@ -0,0 +1,22 @@
name: Shell formatting checks
on:
pull_request:
branches:
- master
permissions:
contents: read
jobs:
build:
name: runner / shfmt
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v3
- uses: luizm/action-sh-checker@master
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
SHFMT_OPTS: "-s"
with:
sh_checker_shellcheck_disable: true # disable for now

View File

@@ -20,11 +20,11 @@ jobs:
runs-on: ${{ matrix.os }}
strategy:
matrix:
go-version: [1.18.x, 1.19.x]
go-version: [1.21.x]
os: [ubuntu-latest]
steps:
- uses: actions/checkout@v1
- uses: actions/checkout@v3
- uses: actions/setup-go@v3
with:
go-version: ${{ matrix.go-version }}

View File

@@ -6,6 +6,10 @@ on:
push:
branches:
- master
permissions:
contents: read # to fetch code (actions/checkout)
jobs:
vulncheck:
name: Analysis
@@ -16,7 +20,7 @@ jobs:
- name: Set up Go
uses: actions/setup-go@v3
with:
go-version: 1.19.x
go-version: 1.21.0
check-latest: true
- name: Get official govulncheck
run: go install golang.org/x/vuln/cmd/govulncheck@latest

.gitignore
View File

@@ -25,7 +25,7 @@ mc.*
s3-check-md5*
xl-meta*
healing-*
inspect*
inspect*.zip
200M*
hash-set
minio.RELEASE*
@@ -38,3 +38,6 @@ docs/debugging/s3-check-md5/s3-check-md5
docs/debugging/hash-set/hash-set
docs/debugging/healing-bin/healing-bin
docs/debugging/inspect/inspect
docs/debugging/pprofgoparser/pprofgoparser
.bin/
*.gz

View File

@@ -1,39 +1,34 @@
linters-settings:
gofumpt:
lang-version: "1.18"
simplify: true
misspell:
locale: US
staticcheck:
checks: ['all', '-ST1005', '-ST1000', '-SA4000', '-SA9004', '-SA1019', '-SA1008', '-U1000', '-ST1016']
linters:
disable-all: true
enable:
- typecheck
- goimports
- misspell
- govet
- revive
- ineffassign
- gomodguard
- durationcheck
- gocritic
- gofmt
- gofumpt
- goimports
- gomodguard
- govet
- ineffassign
- misspell
- revive
- staticcheck
- tenv
- typecheck
- unconvert
- unused
- gocritic
- gofumpt
- tenv
- durationcheck
issues:
exclude-use-default: false
exclude:
- should have a package comment
- error strings should not be capitalized or end with punctuation or a newline
# todo fix these when we get enough time.
- "singleCaseSwitch: should rewrite switch statement to if statement"
- "unlambda: replace"
- "captLocal:"
- "ifElseChain:"
- "elseif:"
service:
golangci-lint-version: 1.43.0 # use the fixed version to not introduce new linters unexpectedly
- should have a package comment
- error strings should not be capitalized or end with punctuation or a newline

CREDITS

(diff suppressed: file too large)

View File

@@ -1,4 +1,4 @@
FROM registry.access.redhat.com/ubi8/ubi-minimal:8.6
FROM registry.access.redhat.com/ubi8/ubi-minimal:8.8
ARG RELEASE
@@ -27,9 +27,8 @@ COPY LICENSE /licenses/LICENSE
RUN \
microdnf clean all && \
microdnf update --nodocs && \
microdnf install curl ca-certificates shadow-utils util-linux --nodocs && \
rpm -Uvh https://dl.fedoraproject.org/pub/epel/epel-release-latest-8.noarch.rpm && \
microdnf install minisign --nodocs && \
microdnf install curl ca-certificates shadow-utils util-linux gzip lsof tar net-tools iproute iputils jq minisign --nodocs && \
mkdir -p /opt/bin && chmod -R 777 /opt/bin && \
curl -s -q https://dl.min.io/server/minio/hotfixes/linux-amd64/archive/minio.${RELEASE} -o /opt/bin/minio && \
curl -s -q https://dl.min.io/server/minio/hotfixes/linux-amd64/archive/minio.${RELEASE}.sha256sum -o /opt/bin/minio.sha256sum && \

View File

@@ -1,4 +1,4 @@
FROM registry.access.redhat.com/ubi8/ubi-minimal:8.6
FROM registry.access.redhat.com/ubi8/ubi-minimal:8.8
ARG TARGETARCH
@@ -29,15 +29,16 @@ COPY LICENSE /licenses/LICENSE
RUN \
microdnf clean all && \
microdnf update --nodocs && \
microdnf install curl ca-certificates shadow-utils util-linux --nodocs && \
rpm -Uvh https://dl.fedoraproject.org/pub/epel/epel-release-latest-8.noarch.rpm && \
microdnf install minisign --nodocs && \
microdnf install curl ca-certificates shadow-utils util-linux gzip lsof tar net-tools iproute iputils jq minisign --nodocs && \
mkdir -p /opt/bin && chmod -R 777 /opt/bin && \
curl -s -q https://dl.min.io/server/minio/release/linux-${TARGETARCH}/archive/minio.${RELEASE} -o /opt/bin/minio && \
curl -s -q https://dl.min.io/server/minio/release/linux-${TARGETARCH}/archive/minio.${RELEASE}.sha256sum -o /opt/bin/minio.sha256sum && \
curl -s -q https://dl.min.io/server/minio/release/linux-${TARGETARCH}/archive/minio.${RELEASE}.minisig -o /opt/bin/minio.minisig && \
curl -s -q https://dl.min.io/client/mc/release/linux-${TARGETARCH}/mc -o /opt/bin/mc && \
microdnf clean all && \
chmod +x /opt/bin/minio && \
chmod +x /opt/bin/mc && \
chmod +x /usr/bin/docker-entrypoint.sh && \
chmod +x /usr/bin/verify-minio.sh && \
/usr/bin/verify-minio.sh && \

View File

@@ -1,4 +1,4 @@
FROM registry.access.redhat.com/ubi8/ubi-minimal:8.6
FROM registry.access.redhat.com/ubi8/ubi-minimal:8.8
ARG TARGETARCH
@@ -29,9 +29,8 @@ COPY LICENSE /licenses/LICENSE
RUN \
microdnf clean all && \
microdnf update --nodocs && \
microdnf install curl ca-certificates shadow-utils util-linux --nodocs && \
rpm -Uvh https://dl.fedoraproject.org/pub/epel/epel-release-latest-8.noarch.rpm && \
microdnf install minisign --nodocs && \
microdnf install curl ca-certificates shadow-utils util-linux gzip lsof tar net-tools iproute iputils jq minisign --nodocs && \
mkdir -p /opt/bin && chmod -R 777 /opt/bin && \
curl -s -q https://dl.min.io/server/minio/release/linux-${TARGETARCH}/archive/minio.${RELEASE}.fips -o /opt/bin/minio && \
curl -s -q https://dl.min.io/server/minio/release/linux-${TARGETARCH}/archive/minio.${RELEASE}.fips.sha256sum -o /opt/bin/minio.sha256sum && \

View File

@@ -8,6 +8,10 @@ GOOS := $(shell go env GOOS)
VERSION ?= $(shell git describe --tags)
TAG ?= "minio/minio:$(VERSION)"
GOLANGCI_VERSION = v1.51.2
GOLANGCI_DIR = .bin/golangci/$(GOLANGCI_VERSION)
GOLANGCI = $(GOLANGCI_DIR)/golangci-lint
all: build
checks: ## check dependencies
@@ -19,28 +23,36 @@ help: ## print this help
getdeps: ## fetch necessary dependencies
@mkdir -p ${GOPATH}/bin
@echo "Installing golangci-lint" && curl -sSfL https://raw.githubusercontent.com/golangci/golangci-lint/master/install.sh | sh -s -- -b $(GOPATH)/bin
@echo "Installing msgp" && go install -v github.com/tinylib/msgp@f3635b96e4838a6c773babb65ef35297fe5fe2f9
@echo "Installing golangci-lint" && curl -sSfL https://raw.githubusercontent.com/golangci/golangci-lint/master/install.sh | sh -s -- -b $(GOLANGCI_DIR) $(GOLANGCI_VERSION)
@echo "Installing msgp" && go install -v github.com/tinylib/msgp@v1.1.7
@echo "Installing stringer" && go install -v golang.org/x/tools/cmd/stringer@latest
crosscompile: ## cross compile minio
@(env bash $(PWD)/buildscripts/cross-compile.sh)
verifiers: getdeps lint check-gen
verifiers: lint check-gen
check-gen: ## check for updated autogenerated files
@go generate ./... >/dev/null
@(! git diff --name-only | grep '_gen.go$$') || (echo "Non-committed changes in auto-generated code is detected, please commit them to proceed." && false)
lint: ## runs golangci-lint suite of linters
lint: getdeps ## runs golangci-lint suite of linters
@echo "Running $@ check"
@${GOPATH}/bin/golangci-lint run --build-tags kqueue --timeout=10m --config ./.golangci.yml
@$(GOLANGCI) run --build-tags kqueue --timeout=10m --config ./.golangci.yml
lint-fix: getdeps ## runs golangci-lint suite of linters with automatic fixes
@echo "Running $@ check"
@$(GOLANGCI) run --build-tags kqueue --timeout=10m --config ./.golangci.yml --fix
check: test
test: verifiers build ## builds minio, runs linters, tests
@echo "Running unit tests"
@MINIO_API_REQUESTS_MAX=10000 CGO_ENABLED=0 go test -tags kqueue ./...
test-root-disable: install
@echo "Running minio root lockdown tests"
@env bash $(PWD)/buildscripts/disable-root.sh
test-decom: install
@echo "Running minio decom tests"
@env bash $(PWD)/docs/distributed/decom.sh
@@ -62,11 +74,21 @@ test-iam: build ## verify IAM (external IDP, etcd backends)
@echo "Running tests for IAM (external IDP, etcd backends) with -race"
@MINIO_API_REQUESTS_MAX=10000 GORACE=history_size=7 CGO_ENABLED=1 go test -race -tags kqueue -v -run TestIAM* ./cmd
test-replication: install ## verify multi site replication
@echo "Running tests for replicating three sites"
@(env bash $(PWD)/docs/bucket/replication/setup_3site_replication.sh)
test-sio-error:
@(env bash $(PWD)/docs/bucket/replication/sio-error.sh)
test-replication-2site:
@(env bash $(PWD)/docs/bucket/replication/setup_2site_existing_replication.sh)
test-replication-3site:
@(env bash $(PWD)/docs/bucket/replication/setup_3site_replication.sh)
test-delete-replication:
@(env bash $(PWD)/docs/bucket/replication/delete-replication.sh)
test-replication: install test-replication-2site test-replication-3site test-delete-replication test-sio-error ## verify multi site replication
@echo "Running tests for replicating three sites"
test-site-replication-ldap: install ## verify automatic site replication
@echo "Running tests for automatic site replication of IAM (with LDAP)"
@(env bash $(PWD)/docs/site-replication/run-multi-site-ldap.sh)
@@ -133,7 +155,7 @@ docker-hotfix: hotfix-push checks ## builds minio docker container with hotfix t
@echo "Building minio docker image '$(TAG)'"
@docker build -q --no-cache -t $(TAG) --build-arg RELEASE=$(VERSION) . -f Dockerfile.hotfix
docker: build checks ## builds minio docker container
docker: build ## builds minio docker container
@echo "Building minio docker image '$(TAG)'"
@docker build -q --no-cache -t $(TAG) . -f Dockerfile
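With golangci-lint now pinned under a version-scoped directory (`GOLANGCI_DIR = .bin/golangci/$(GOLANGCI_VERSION)`) and `lint` depending on `getdeps`, upgrading the linter becomes a one-line bump of `GOLANGCI_VERSION`. Typical invocations of the targets defined above:

```sh
make lint       # installs golangci-lint v1.51.2 under .bin/golangci/ and runs the linter suite
make lint-fix   # same linters, with automatic fixes applied
make verifiers  # lint plus check-gen (fails on uncommitted auto-generated code)
```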

NOTICE

@@ -1,4 +1,4 @@
MinIO Project, (C) 2015-2021 MinIO, Inc.
MinIO Project, (C) 2015-2023 MinIO, Inc.
This product includes software developed at MinIO, Inc.
(https://min.io/).


@@ -86,8 +86,6 @@ chmod +x minio
./minio server /data
```
Replace ``/data`` with the path to the drive or directory in which you want MinIO to store data.
The following table lists supported architectures. Replace the `wget` URL with the architecture for your Linux host.
| Architecture | URL |
@@ -125,7 +123,7 @@ You can also connect using any S3-compatible tool, such as the MinIO Client `mc`
## Install from Source
Use the following commands to compile and run a standalone MinIO server from source. Source installation is only intended for developers and advanced users. If you do not have a working Golang environment, please follow [How to install Golang](https://golang.org/doc/install). Minimum version required is [go1.18](https://golang.org/dl/#stable)
Use the following commands to compile and run a standalone MinIO server from source. Source installation is only intended for developers and advanced users. If you do not have a working Golang environment, please follow [How to install Golang](https://golang.org/doc/install). Minimum version required is [go1.19](https://golang.org/dl/#stable)
```sh
go install github.com/minio/minio@latest
@@ -202,7 +200,7 @@ service iptables restart
MinIO Server comes with an embedded web based object browser. Point your web browser to <http://127.0.0.1:9000> to ensure your server has started successfully.
> NOTE: MinIO runs console on random port by default if you wish choose a specific port use `--console-address` to pick a specific interface and port.
> NOTE: MinIO runs the console on a random port by default; if you wish to choose a specific port, use `--console-address` to pick a specific interface and port.
### Things to consider
@@ -243,7 +241,7 @@ mc admin update <minio alias, e.g., myminio>
### Upgrade Checklist
- Test all upgrades in a lower environment (DEV, QA, UAT) before applying to production. Performing blind upgrades in production environments carries significant risk.
- Read the release notes for MinIO *before* performing any upgrade, there is no forced requirement to upgrade to latest releases upon every releases. Some releases may not be relevant to your setup, avoid upgrading production environments unnecessarily.
- Read the release notes for MinIO *before* performing any upgrade; there is no forced requirement to upgrade to the latest release upon every release. Some releases may not be relevant to your setup, so avoid upgrading production environments unnecessarily.
- If you plan to use `mc admin update`, the MinIO process must have write access to the parent directory where the binary is present on the host system.
- `mc admin update` is not supported and should be avoided in Kubernetes/container environments; upgrade containers by updating the relevant container images instead.
- **We do not recommend upgrading one MinIO server at a time; the product is designed to support parallel upgrades, so please follow our recommended guidelines.**
@@ -261,6 +259,6 @@ Please follow MinIO [Contributor's Guide](https://github.com/minio/minio/blob/ma
## License
- MinIO source is licensed under the GNU AGPLv3 license that can be found in the [LICENSE](https://github.com/minio/minio/blob/master/LICENSE) file.
- MinIO [Documentation](https://github.com/minio/minio/tree/master/docs) © 2021 by MinIO, Inc is licensed under [CC BY 4.0](https://creativecommons.org/licenses/by/4.0/).
- MinIO source is licensed under the [GNU AGPLv3](https://github.com/minio/minio/blob/master/LICENSE).
- MinIO [documentation](https://github.com/minio/minio/tree/master/docs) is licensed under [CC BY 4.0](https://creativecommons.org/licenses/by/4.0/).
- [License Compliance](https://github.com/minio/minio/blob/master/COMPLIANCE.md)


@@ -3,19 +3,19 @@
_init() {
shopt -s extglob
shopt -s extglob
## Minimum required versions for build dependencies
GIT_VERSION="1.0"
GO_VERSION="1.16"
OSX_VERSION="10.8"
KNAME=$(uname -s)
ARCH=$(uname -m)
case "${KNAME}" in
SunOS )
ARCH=$(isainfo -k)
;;
esac
## Minimum required versions for build dependencies
GIT_VERSION="1.0"
GO_VERSION="1.16"
OSX_VERSION="10.8"
KNAME=$(uname -s)
ARCH=$(uname -m)
case "${KNAME}" in
SunOS)
ARCH=$(isainfo -k)
;;
esac
}
## FIXME:
@@ -28,24 +28,23 @@ _init() {
## }
##
readlink() {
TARGET_FILE=$1
TARGET_FILE=$1
cd `dirname $TARGET_FILE`
TARGET_FILE=`basename $TARGET_FILE`
cd $(dirname $TARGET_FILE)
TARGET_FILE=$(basename $TARGET_FILE)
# Iterate down a (possible) chain of symlinks
while [ -L "$TARGET_FILE" ]
do
TARGET_FILE=$(env readlink $TARGET_FILE)
cd `dirname $TARGET_FILE`
TARGET_FILE=`basename $TARGET_FILE`
done
# Iterate down a (possible) chain of symlinks
while [ -L "$TARGET_FILE" ]; do
TARGET_FILE=$(env readlink $TARGET_FILE)
cd $(dirname $TARGET_FILE)
TARGET_FILE=$(basename $TARGET_FILE)
done
# Compute the canonicalized name by finding the physical path
# for the directory we're in and appending the target file.
PHYS_DIR=`pwd -P`
RESULT=$PHYS_DIR/$TARGET_FILE
echo $RESULT
# Compute the canonicalized name by finding the physical path
# for the directory we're in and appending the target file.
PHYS_DIR=$(pwd -P)
RESULT=$PHYS_DIR/$TARGET_FILE
echo $RESULT
}
## FIXME:
@@ -59,84 +58,86 @@ readlink() {
## }
##
check_minimum_version() {
IFS='.' read -r -a varray1 <<< "$1"
IFS='.' read -r -a varray2 <<< "$2"
IFS='.' read -r -a varray1 <<<"$1"
IFS='.' read -r -a varray2 <<<"$2"
for i in "${!varray1[@]}"; do
if [[ ${varray1[i]} -lt ${varray2[i]} ]]; then
return 0
elif [[ ${varray1[i]} -gt ${varray2[i]} ]]; then
return 1
fi
done
for i in "${!varray1[@]}"; do
if [[ ${varray1[i]} -lt ${varray2[i]} ]]; then
return 0
elif [[ ${varray1[i]} -gt ${varray2[i]} ]]; then
return 1
fi
done
return 0
return 0
}
assert_is_supported_arch() {
case "${ARCH}" in
x86_64 | amd64 | aarch64 | ppc64le | arm* | s390x )
return
;;
*)
echo "Arch '${ARCH}' is not supported. Supported Arch: [x86_64, amd64, aarch64, ppc64le, arm*, s390x]"
exit 1
esac
case "${ARCH}" in
x86_64 | amd64 | aarch64 | ppc64le | arm* | s390x | loong64 | loongarch64)
return
;;
*)
echo "Arch '${ARCH}' is not supported. Supported Arch: [x86_64, amd64, aarch64, ppc64le, arm*, s390x, loong64, loongarch64]"
exit 1
;;
esac
}
assert_is_supported_os() {
case "${KNAME}" in
Linux | FreeBSD | OpenBSD | NetBSD | DragonFly | SunOS )
return
;;
Darwin )
osx_host_version=$(env sw_vers -productVersion)
if ! check_minimum_version "${OSX_VERSION}" "${osx_host_version}"; then
echo "OSX version '${osx_host_version}' is not supported. Minimum supported version: ${OSX_VERSION}"
exit 1
fi
return
;;
*)
echo "OS '${KNAME}' is not supported. Supported OS: [Linux, FreeBSD, OpenBSD, NetBSD, Darwin, DragonFly]"
exit 1
esac
case "${KNAME}" in
Linux | FreeBSD | OpenBSD | NetBSD | DragonFly | SunOS)
return
;;
Darwin)
osx_host_version=$(env sw_vers -productVersion)
if ! check_minimum_version "${OSX_VERSION}" "${osx_host_version}"; then
echo "OSX version '${osx_host_version}' is not supported. Minimum supported version: ${OSX_VERSION}"
exit 1
fi
return
;;
*)
echo "OS '${KNAME}' is not supported. Supported OS: [Linux, FreeBSD, OpenBSD, NetBSD, Darwin, DragonFly]"
exit 1
;;
esac
}
assert_check_golang_env() {
if ! which go >/dev/null 2>&1; then
echo "Cannot find go binary in your PATH configuration, please refer to Go installation document at https://golang.org/doc/install"
exit 1
fi
if ! which go >/dev/null 2>&1; then
echo "Cannot find go binary in your PATH configuration, please refer to Go installation document at https://golang.org/doc/install"
exit 1
fi
installed_go_version=$(go version | sed 's/^.* go\([0-9.]*\).*$/\1/')
if ! check_minimum_version "${GO_VERSION}" "${installed_go_version}"; then
echo "Go runtime version '${installed_go_version}' is unsupported. Minimum supported version: ${GO_VERSION} to compile."
exit 1
fi
installed_go_version=$(go version | sed 's/^.* go\([0-9.]*\).*$/\1/')
if ! check_minimum_version "${GO_VERSION}" "${installed_go_version}"; then
echo "Go runtime version '${installed_go_version}' is unsupported. Minimum supported version: ${GO_VERSION} to compile."
exit 1
fi
}
assert_check_deps() {
# support unusual Git versions such as: 2.7.4 (Apple Git-66)
installed_git_version=$(git version | perl -ne '$_ =~ m/git version (.*?)( |$)/; print "$1\n";')
if ! check_minimum_version "${GIT_VERSION}" "${installed_git_version}"; then
echo "Git version '${installed_git_version}' is not supported. Minimum supported version: ${GIT_VERSION}"
exit 1
fi
# support unusual Git versions such as: 2.7.4 (Apple Git-66)
installed_git_version=$(git version | perl -ne '$_ =~ m/git version (.*?)( |$)/; print "$1\n";')
if ! check_minimum_version "${GIT_VERSION}" "${installed_git_version}"; then
echo "Git version '${installed_git_version}' is not supported. Minimum supported version: ${GIT_VERSION}"
exit 1
fi
}
main() {
## Check for supported arch
assert_is_supported_arch
## Check for supported arch
assert_is_supported_arch
## Check for supported os
assert_is_supported_os
## Check for supported os
assert_is_supported_os
## Check for Go environment
assert_check_golang_env
## Check for Go environment
assert_check_golang_env
## Check for dependencies
assert_check_deps
## Check for dependencies
assert_check_deps
}
_init && main "$@"
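Of the helpers reformatted above, `check_minimum_version` carries the one non-obvious contract: it succeeds (returns 0) when its second argument meets or exceeds the first, minimum, version, comparing components numerically. A hypothetical illustration, not part of the script:

```sh
check_minimum_version "1.16" "1.21.0" && echo "1.21.0 meets the 1.16 minimum"  # prints
check_minimum_version "1.16" "1.9" || echo "1.9 is below the 1.16 minimum"     # prints: 16 > 9 numerically
```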


@@ -5,33 +5,33 @@ set -e
[ -n "$BASH_XTRACEFD" ] && set -x
function _init() {
## All binaries are static; make sure to disable CGO.
export CGO_ENABLED=0
## All binaries are static; make sure to disable CGO.
export CGO_ENABLED=0
## List of architectures and OS to test cross compilation.
SUPPORTED_OSARCH="linux/ppc64le linux/mips64 linux/amd64 linux/arm64 linux/s390x darwin/arm64 darwin/amd64 freebsd/amd64 windows/amd64 linux/arm linux/386 netbsd/amd64 linux/mips openbsd/amd64"
## List of architectures and OS to test cross compilation.
SUPPORTED_OSARCH="linux/ppc64le linux/mips64 linux/amd64 linux/arm64 linux/s390x darwin/arm64 darwin/amd64 freebsd/amd64 windows/amd64 linux/arm linux/386 netbsd/amd64 linux/mips"
}
function _build() {
local osarch=$1
IFS=/ read -r -a arr <<<"$osarch"
os="${arr[0]}"
arch="${arr[1]}"
package=$(go list -f '{{.ImportPath}}')
printf -- "--> %15s:%s\n" "${osarch}" "${package}"
local osarch=$1
IFS=/ read -r -a arr <<<"$osarch"
os="${arr[0]}"
arch="${arr[1]}"
package=$(go list -f '{{.ImportPath}}')
printf -- "--> %15s:%s\n" "${osarch}" "${package}"
# go build -trimpath to build the binary.
export GOOS=$os
export GOARCH=$arch
export GO111MODULE=on
go build -trimpath -tags kqueue -o /dev/null
# go build -trimpath to build the binary.
export GOOS=$os
export GOARCH=$arch
export GO111MODULE=on
go build -trimpath -tags kqueue -o /dev/null
}
function main() {
echo "Testing builds for OS/Arch: ${SUPPORTED_OSARCH}"
for each_osarch in ${SUPPORTED_OSARCH}; do
_build "${each_osarch}"
done
echo "Testing builds for OS/Arch: ${SUPPORTED_OSARCH}"
for each_osarch in ${SUPPORTED_OSARCH}; do
_build "${each_osarch}"
done
}
_init && main "$@"

buildscripts/disable-root.sh Executable file

@@ -0,0 +1,119 @@
#!/bin/bash
set -x
export MINIO_CI_CD=1
killall -9 minio
rm -rf ${HOME}/tmp/dist
scheme="http"
nr_servers=4
addr="localhost"
args=""
for ((i = 0; i < $((nr_servers)); i++)); do
args="$args $scheme://$addr:$((9100 + i))/${HOME}/tmp/dist/path1/$i"
done
echo $args
for ((i = 0; i < $((nr_servers)); i++)); do
(minio server --address ":$((9100 + i))" $args 2>&1 >/tmp/log$i.txt) &
done
sleep 10s
if [ ! -f ./mc ]; then
wget --quiet -O ./mc https://dl.minio.io/client/mc/release/linux-amd64/./mc &&
chmod +x mc
fi
set +e
export MC_HOST_minioadm=http://minioadmin:minioadmin@localhost:9100/
./mc ls minioadm/
./mc admin config set minioadm/ api root_access=off
sleep 3s # let things settle a little
./mc ls minioadm/
if [ $? -eq 0 ]; then
echo "listing succeeded, 'minioadmin' was not disabled"
exit 1
fi
set -e
killall -9 minio
export MINIO_API_ROOT_ACCESS=on
for ((i = 0; i < $((nr_servers)); i++)); do
(minio server --address ":$((9100 + i))" $args 2>&1 >/tmp/log$i.txt) &
done
set +e
./mc ls minioadm/
if [ $? -ne 0 ]; then
echo "listing failed, 'minioadmin' should be enabled"
exit 1
fi
killall -9 minio
rm -rf /tmp/multisitea/
rm -rf /tmp/multisiteb/
echo "Setup site-replication and then disable root credentials"
minio server --address 127.0.0.1:9001 "http://127.0.0.1:9001/tmp/multisitea/data/disterasure/xl{1...4}" \
"http://127.0.0.1:9002/tmp/multisitea/data/disterasure/xl{5...8}" >/tmp/sitea_1.log 2>&1 &
minio server --address 127.0.0.1:9002 "http://127.0.0.1:9001/tmp/multisitea/data/disterasure/xl{1...4}" \
"http://127.0.0.1:9002/tmp/multisitea/data/disterasure/xl{5...8}" >/tmp/sitea_2.log 2>&1 &
minio server --address 127.0.0.1:9003 "http://127.0.0.1:9003/tmp/multisiteb/data/disterasure/xl{1...4}" \
"http://127.0.0.1:9004/tmp/multisiteb/data/disterasure/xl{5...8}" >/tmp/siteb_1.log 2>&1 &
minio server --address 127.0.0.1:9004 "http://127.0.0.1:9003/tmp/multisiteb/data/disterasure/xl{1...4}" \
"http://127.0.0.1:9004/tmp/multisiteb/data/disterasure/xl{5...8}" >/tmp/siteb_2.log 2>&1 &
sleep 20s
export MC_HOST_sitea=http://minioadmin:minioadmin@127.0.0.1:9001
export MC_HOST_siteb=http://minioadmin:minioadmin@127.0.0.1:9004
./mc admin replicate add sitea siteb
./mc admin user add sitea foobar foo12345
./mc admin policy attach sitea/ consoleAdmin --user=foobar
./mc admin user info siteb foobar
killall -9 minio
echo "turning off root access, however site replication must continue"
export MINIO_API_ROOT_ACCESS=off
minio server --address 127.0.0.1:9001 "http://127.0.0.1:9001/tmp/multisitea/data/disterasure/xl{1...4}" \
"http://127.0.0.1:9002/tmp/multisitea/data/disterasure/xl{5...8}" >/tmp/sitea_1.log 2>&1 &
minio server --address 127.0.0.1:9002 "http://127.0.0.1:9001/tmp/multisitea/data/disterasure/xl{1...4}" \
"http://127.0.0.1:9002/tmp/multisitea/data/disterasure/xl{5...8}" >/tmp/sitea_2.log 2>&1 &
minio server --address 127.0.0.1:9003 "http://127.0.0.1:9003/tmp/multisiteb/data/disterasure/xl{1...4}" \
"http://127.0.0.1:9004/tmp/multisiteb/data/disterasure/xl{5...8}" >/tmp/siteb_1.log 2>&1 &
minio server --address 127.0.0.1:9004 "http://127.0.0.1:9003/tmp/multisiteb/data/disterasure/xl{1...4}" \
"http://127.0.0.1:9004/tmp/multisiteb/data/disterasure/xl{5...8}" >/tmp/siteb_2.log 2>&1 &
sleep 20s
export MC_HOST_sitea=http://foobar:foo12345@127.0.0.1:9001
export MC_HOST_siteb=http://foobar:foo12345@127.0.0.1:9004
./mc admin user add sitea foobar-admin foo12345
sleep 2s
./mc admin user info siteb foobar-admin
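The tail of the script is the actual assertion: with `MINIO_API_ROOT_ACCESS=off`, IAM changes made on sitea through the `foobar` account must still replicate to siteb. The same property spelled out with an explicit failure check (an illustrative variant; `probe-user` is a hypothetical name):

```sh
./mc admin user add sitea probe-user probe12345
sleep 2
./mc admin user info siteb probe-user || {
	echo "site replication broke with root_access=off"
	exit 1
}
```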


@@ -6,88 +6,87 @@ set -x
WORK_DIR="$PWD/.verify-$RANDOM"
MINIO_CONFIG_DIR="$WORK_DIR/.minio"
MINIO=( "$PWD/minio" --config-dir "$MINIO_CONFIG_DIR" server )
MINIO=("$PWD/minio" --config-dir "$MINIO_CONFIG_DIR" server)
if [ ! -x "$PWD/minio" ]; then
echo "minio executable binary not found in current directory"
exit 1
echo "minio executable binary not found in current directory"
exit 1
fi
if [ ! -x "$PWD/minio" ]; then
echo "minio executable binary not found in current directory"
exit 1
echo "minio executable binary not found in current directory"
exit 1
fi
function start_minio_4drive() {
start_port=$1
start_port=$1
export MINIO_ROOT_USER=minio
export MINIO_ROOT_PASSWORD=minio123
export MC_HOST_minio="http://minio:minio123@127.0.0.1:${start_port}/"
unset MINIO_KMS_AUTO_ENCRYPTION # do not auto-encrypt objects
export MINIO_CI_CD=1
export MINIO_ROOT_USER=minio
export MINIO_ROOT_PASSWORD=minio123
export MC_HOST_minio="http://minio:minio123@127.0.0.1:${start_port}/"
unset MINIO_KMS_AUTO_ENCRYPTION # do not auto-encrypt objects
export MINIO_CI_CD=1
mkdir ${WORK_DIR}
C_PWD=${PWD}
if [ ! -x "$PWD/mc" ]; then
MC_BUILD_DIR="mc-$RANDOM"
if ! git clone --quiet https://github.com/minio/mc "$MC_BUILD_DIR"; then
echo "failed to download https://github.com/minio/mc"
purge "${MC_BUILD_DIR}"
exit 1
mkdir ${WORK_DIR}
C_PWD=${PWD}
if [ ! -x "$PWD/mc" ]; then
MC_BUILD_DIR="mc-$RANDOM"
if ! git clone --quiet https://github.com/minio/mc "$MC_BUILD_DIR"; then
echo "failed to download https://github.com/minio/mc"
purge "${MC_BUILD_DIR}"
exit 1
fi
(cd "${MC_BUILD_DIR}" && go build -o "$C_PWD/mc")
# remove mc source.
purge "${MC_BUILD_DIR}"
fi
(cd "${MC_BUILD_DIR}" && go build -o "$C_PWD/mc")
"${MINIO[@]}" --address ":$start_port" "${WORK_DIR}/disk{1...4}" >"${WORK_DIR}/server1.log" 2>&1 &
pid=$!
disown $pid
sleep 5
# remove mc source.
purge "${MC_BUILD_DIR}"
fi
if ! ps -p ${pid} 1>&2 >/dev/null; then
echo "server1 log:"
cat "${WORK_DIR}/server1.log"
echo "FAILED"
purge "$WORK_DIR"
exit 1
fi
"${MINIO[@]}" --address ":$start_port" "${WORK_DIR}/disk{1...4}" > "${WORK_DIR}/server1.log" 2>&1 &
pid=$!
disown $pid
sleep 5
"${PWD}/mc" mb --with-versioning minio/bucket
if ! ps -p ${pid} 1>&2 >/dev/null; then
echo "server1 log:"
cat "${WORK_DIR}/server1.log"
echo "FAILED"
purge "$WORK_DIR"
exit 1
fi
for i in $(seq 1 4); do
"${PWD}/mc" cp /etc/hosts minio/bucket/testobj
"${PWD}/mc" mb --with-versioning minio/bucket
sudo chown -R root. "${WORK_DIR}/disk${i}"
for i in $(seq 1 4); do
"${PWD}/mc" cp /etc/hosts minio/bucket/testobj
"${PWD}/mc" cp /etc/hosts minio/bucket/testobj
sudo chown -R root. "${WORK_DIR}/disk${i}"
sudo chown -R ${USER}. "${WORK_DIR}/disk${i}"
done
"${PWD}/mc" cp /etc/hosts minio/bucket/testobj
for vid in $("${PWD}/mc" ls --json --versions minio/bucket/testobj | jq -r .versionId); do
"${PWD}/mc" cat --vid "${vid}" minio/bucket/testobj | md5sum
done
sudo chown -R ${USER}. "${WORK_DIR}/disk${i}"
done
for vid in $("${PWD}/mc" ls --json --versions minio/bucket/testobj | jq -r .versionId); do
"${PWD}/mc" cat --vid "${vid}" minio/bucket/testobj | md5sum
done
pkill minio
sleep 3
pkill minio
sleep 3
}
function main() {
start_port=$(shuf -i 10000-65000 -n 1)
start_port=$(shuf -i 10000-65000 -n 1)
start_minio_4drive ${start_port}
start_minio_4drive ${start_port}
}
function purge()
{
rm -rf "$1"
function purge() {
rm -rf "$1"
}
( main "$@" )
(main "$@")
rv=$?
purge "$WORK_DIR"
exit "$rv"


@@ -27,7 +27,7 @@ import (
"os"
"time"
"github.com/minio/madmin-go"
"github.com/minio/madmin-go/v3"
)
func main() {


@@ -4,89 +4,89 @@ trap 'cleanup $LINENO' ERR
# shellcheck disable=SC2120
cleanup() {
MINIO_VERSION=dev docker-compose \
-f "buildscripts/upgrade-tests/compose.yml" \
rm -s -f
docker volume prune -f
MINIO_VERSION=dev docker-compose \
-f "buildscripts/upgrade-tests/compose.yml" \
rm -s -f
docker volume prune -f
}
verify_checksum_after_heal() {
local sum1
sum1=$(curl -s "$2" | sha256sum);
mc admin heal --json -r "$1" >/dev/null; # test after healing
local sum1_heal
sum1_heal=$(curl -s "$2" | sha256sum);
local sum1
sum1=$(curl -s "$2" | sha256sum)
mc admin heal --json -r "$1" >/dev/null # test after healing
local sum1_heal
sum1_heal=$(curl -s "$2" | sha256sum)
if [ "${sum1_heal}" != "${sum1}" ]; then
echo "mismatch expected ${sum1_heal}, got ${sum1}"
exit 1;
fi
if [ "${sum1_heal}" != "${sum1}" ]; then
echo "mismatch expected ${sum1_heal}, got ${sum1}"
exit 1
fi
}
verify_checksum_mc() {
local expected
expected=$(mc cat "$1" | sha256sum)
local got
got=$(mc cat "$2" | sha256sum)
local expected
expected=$(mc cat "$1" | sha256sum)
local got
got=$(mc cat "$2" | sha256sum)
if [ "${expected}" != "${got}" ]; then
echo "mismatch - expected ${expected}, got ${got}"
exit 1;
fi
echo "matches - ${expected}, got ${got}"
if [ "${expected}" != "${got}" ]; then
echo "mismatch - expected ${expected}, got ${got}"
exit 1
fi
echo "matches - ${expected}, got ${got}"
}
add_alias() {
for i in $(seq 1 4); do
echo "... attempting to add alias $i"
until (mc alias set minio http://127.0.0.1:9000 minioadmin minioadmin); do
echo "...waiting... for 5secs" && sleep 5
done
done
for i in $(seq 1 4); do
echo "... attempting to add alias $i"
until (mc alias set minio http://127.0.0.1:9000 minioadmin minioadmin); do
echo "...waiting... for 5secs" && sleep 5
done
done
echo "Sleeping for nginx"
sleep 20
echo "Sleeping for nginx"
sleep 20
}
__init__() {
sudo apt install curl -y
export GOPATH=/tmp/gopath
export PATH=${PATH}:${GOPATH}/bin
sudo apt install curl -y
export GOPATH=/tmp/gopath
export PATH=${PATH}:${GOPATH}/bin
go install github.com/minio/mc@latest
go install github.com/minio/mc@latest
TAG=minio/minio:dev make docker
TAG=minio/minio:dev make docker
MINIO_VERSION=RELEASE.2019-12-19T22-52-26Z docker-compose \
-f "buildscripts/upgrade-tests/compose.yml" \
up -d --build
MINIO_VERSION=RELEASE.2019-12-19T22-52-26Z docker-compose \
-f "buildscripts/upgrade-tests/compose.yml" \
up -d --build
add_alias
add_alias
mc mb minio/minio-test/
mc cp ./minio minio/minio-test/to-read/
mc cp /etc/hosts minio/minio-test/to-read/hosts
mc anonymous set download minio/minio-test
mc mb minio/minio-test/
mc cp ./minio minio/minio-test/to-read/
mc cp /etc/hosts minio/minio-test/to-read/hosts
mc anonymous set download minio/minio-test
verify_checksum_mc ./minio minio/minio-test/to-read/minio
verify_checksum_mc ./minio minio/minio-test/to-read/minio
curl -s http://127.0.0.1:9000/minio-test/to-read/hosts | sha256sum
curl -s http://127.0.0.1:9000/minio-test/to-read/hosts | sha256sum
MINIO_VERSION=dev docker-compose -f "buildscripts/upgrade-tests/compose.yml" stop
MINIO_VERSION=dev docker-compose -f "buildscripts/upgrade-tests/compose.yml" stop
}
main() {
MINIO_VERSION=dev docker-compose -f "buildscripts/upgrade-tests/compose.yml" up -d --build
MINIO_VERSION=dev docker-compose -f "buildscripts/upgrade-tests/compose.yml" up -d --build
add_alias
add_alias
verify_checksum_after_heal minio/minio-test http://127.0.0.1:9000/minio-test/to-read/hosts
verify_checksum_after_heal minio/minio-test http://127.0.0.1:9000/minio-test/to-read/hosts
verify_checksum_mc ./minio minio/minio-test/to-read/minio
verify_checksum_mc ./minio minio/minio-test/to-read/minio
verify_checksum_mc /etc/hosts minio/minio-test/to-read/hosts
verify_checksum_mc /etc/hosts minio/minio-test/to-read/hosts
cleanup
cleanup
}
( __init__ "$@" && main "$@" )
(__init__ "$@" && main "$@")


@@ -5,7 +5,6 @@ set -e
export GORACE="history_size=7"
export MINIO_API_REQUESTS_MAX=10000
## TODO remove `dsync` from race detector once this is merged and released https://go-review.googlesource.com/c/go/+/333529/
for d in $(go list ./... | grep -v dsync); do
CGO_ENABLED=1 go test -v -race --timeout 100m "$d"
for d in $(go list ./...); do
CGO_ENABLED=1 go test -v -race --timeout 100m "$d"
done


@@ -3,70 +3,70 @@
set -E
set -o pipefail
set -x
set -e
WORK_DIR="$PWD/.verify-$RANDOM"
MINIO_CONFIG_DIR="$WORK_DIR/.minio"
MINIO=( "$PWD/minio" --config-dir "$MINIO_CONFIG_DIR" server )
MINIO=("$PWD/minio" --config-dir "$MINIO_CONFIG_DIR" server)
if [ ! -x "$PWD/minio" ]; then
echo "minio executable binary not found in current directory"
exit 1
echo "minio executable binary not found in current directory"
exit 1
fi
function start_minio_5drive() {
start_port=$1
start_port=$1
export MINIO_ROOT_USER=minio
export MINIO_ROOT_PASSWORD=minio123
export MC_HOST_minio="http://minio:minio123@127.0.0.1:${start_port}/"
unset MINIO_KMS_AUTO_ENCRYPTION # do not auto-encrypt objects
export MINIO_CI_CD=1
export MINIO_ROOT_USER=minio
export MINIO_ROOT_PASSWORD=minio123
export MC_HOST_minio="http://minio:minio123@127.0.0.1:${start_port}/"
unset MINIO_KMS_AUTO_ENCRYPTION # do not auto-encrypt objects
export MINIO_CI_CD=1
MC_BUILD_DIR="mc-$RANDOM"
if ! git clone --quiet https://github.com/minio/mc "$MC_BUILD_DIR"; then
echo "failed to download https://github.com/minio/mc"
MC_BUILD_DIR="mc-$RANDOM"
if ! git clone --quiet https://github.com/minio/mc "$MC_BUILD_DIR"; then
echo "failed to download https://github.com/minio/mc"
purge "${MC_BUILD_DIR}"
exit 1
fi
(cd "${MC_BUILD_DIR}" && go build -o "$WORK_DIR/mc")
# remove mc source.
purge "${MC_BUILD_DIR}"
exit 1
fi
(cd "${MC_BUILD_DIR}" && go build -o "$WORK_DIR/mc")
"${WORK_DIR}/mc" cp --quiet -r "buildscripts/cicd-corpus/" "${WORK_DIR}/cicd-corpus/"
# remove mc source.
purge "${MC_BUILD_DIR}"
"${MINIO[@]}" --address ":$start_port" "${WORK_DIR}/cicd-corpus/disk{1...5}" >"${WORK_DIR}/server1.log" 2>&1 &
pid=$!
disown $pid
sleep 5
"${WORK_DIR}/mc" cp --quiet -r "buildscripts/cicd-corpus/" "${WORK_DIR}/cicd-corpus/"
if ! ps -p ${pid} 1>&2 >/dev/null; then
echo "server1 log:"
cat "${WORK_DIR}/server1.log"
echo "FAILED"
purge "$WORK_DIR"
exit 1
fi
"${MINIO[@]}" --address ":$start_port" "${WORK_DIR}/cicd-corpus/disk{1...5}" > "${WORK_DIR}/server1.log" 2>&1 &
pid=$!
disown $pid
sleep 5
"${WORK_DIR}/mc" stat minio/bucket/testobj
if ! ps -p ${pid} 1>&2 >/dev/null; then
echo "server1 log:"
cat "${WORK_DIR}/server1.log"
echo "FAILED"
purge "$WORK_DIR"
exit 1
fi
"${WORK_DIR}/mc" stat minio/bucket/testobj
pkill minio
sleep 3
pkill minio
sleep 3
}
function main() {
start_port=$(shuf -i 10000-65000 -n 1)
start_port=$(shuf -i 10000-65000 -n 1)
start_minio_5drive ${start_port}
start_minio_5drive ${start_port}
}
function purge()
{
rm -rf "$1"
function purge() {
rm -rf "$1"
}
( main "$@" )
(main "$@")
rv=$?
purge "$WORK_DIR"
exit "$rv"


@@ -6,146 +6,151 @@ set -x
WORK_DIR="$PWD/.verify-$RANDOM"
MINIO_CONFIG_DIR="$WORK_DIR/.minio"
MINIO_OLD=( "$PWD/minio.RELEASE.2020-10-28T08-16-50Z" --config-dir "$MINIO_CONFIG_DIR" server )
MINIO=( "$PWD/minio" --config-dir "$MINIO_CONFIG_DIR" server )
MINIO_OLD=("$PWD/minio.RELEASE.2020-10-28T08-16-50Z" --config-dir "$MINIO_CONFIG_DIR" server)
MINIO=("$PWD/minio" --config-dir "$MINIO_CONFIG_DIR" server)
if [ ! -x "$PWD/minio" ]; then
echo "minio executable binary not found in current directory"
exit 1
echo "minio executable binary not found in current directory"
exit 1
fi
function download_old_release() {
if [ ! -f minio.RELEASE.2020-10-28T08-16-50Z ]; then
curl --silent -O https://dl.minio.io/server/minio/release/linux-amd64/archive/minio.RELEASE.2020-10-28T08-16-50Z
chmod a+x minio.RELEASE.2020-10-28T08-16-50Z
fi
if [ ! -f minio.RELEASE.2020-10-28T08-16-50Z ]; then
curl --silent -O https://dl.minio.io/server/minio/release/linux-amd64/archive/minio.RELEASE.2020-10-28T08-16-50Z
chmod a+x minio.RELEASE.2020-10-28T08-16-50Z
fi
}
function verify_rewrite() {
start_port=$1
start_port=$1
export MINIO_ACCESS_KEY=minio
export MINIO_SECRET_KEY=minio123
export MC_HOST_minio="http://minio:minio123@127.0.0.1:${start_port}/"
unset MINIO_KMS_AUTO_ENCRYPTION # do not auto-encrypt objects
export MINIO_CI_CD=1
export MINIO_ACCESS_KEY=minio
export MINIO_SECRET_KEY=minio123
export MC_HOST_minio="http://minio:minio123@127.0.0.1:${start_port}/"
unset MINIO_KMS_AUTO_ENCRYPTION # do not auto-encrypt objects
export MINIO_CI_CD=1
MC_BUILD_DIR="mc-$RANDOM"
if ! git clone --quiet https://github.com/minio/mc "$MC_BUILD_DIR"; then
echo "failed to download https://github.com/minio/mc"
MC_BUILD_DIR="mc-$RANDOM"
if ! git clone --quiet https://github.com/minio/mc "$MC_BUILD_DIR"; then
echo "failed to download https://github.com/minio/mc"
purge "${MC_BUILD_DIR}"
exit 1
fi
(cd "${MC_BUILD_DIR}" && go build -o "$WORK_DIR/mc")
# remove mc source.
purge "${MC_BUILD_DIR}"
exit 1
fi
(cd "${MC_BUILD_DIR}" && go build -o "$WORK_DIR/mc")
"${MINIO_OLD[@]}" --address ":$start_port" "${WORK_DIR}/xl{1...16}" >"${WORK_DIR}/server1.log" 2>&1 &
pid=$!
disown $pid
sleep 10
# remove mc source.
purge "${MC_BUILD_DIR}"
if ! ps -p ${pid} 1>&2 >/dev/null; then
echo "server1 log:"
cat "${WORK_DIR}/server1.log"
echo "FAILED"
purge "$WORK_DIR"
exit 1
fi
"${MINIO_OLD[@]}" --address ":$start_port" "${WORK_DIR}/xl{1...16}" > "${WORK_DIR}/server1.log" 2>&1 &
pid=$!
disown $pid
sleep 10
"${WORK_DIR}/mc" mb minio/healing-rewrite-bucket --quiet --with-lock
"${WORK_DIR}/mc" cp \
buildscripts/verify-build.sh \
minio/healing-rewrite-bucket/ \
--disable-multipart --quiet
if ! ps -p ${pid} 1>&2 >/dev/null; then
echo "server1 log:"
cat "${WORK_DIR}/server1.log"
echo "FAILED"
purge "$WORK_DIR"
exit 1
fi
"${WORK_DIR}/mc" cp \
buildscripts/verify-build.sh \
minio/healing-rewrite-bucket/ \
--disable-multipart --quiet
"${WORK_DIR}/mc" mb minio/healing-rewrite-bucket --quiet --with-lock
"${WORK_DIR}/mc" cp \
buildscripts/verify-build.sh \
minio/healing-rewrite-bucket/ \
--disable-multipart --quiet
"${WORK_DIR}/mc" cp \
buildscripts/verify-build.sh \
minio/healing-rewrite-bucket/ \
--disable-multipart --quiet
"${WORK_DIR}/mc" cp \
buildscripts/verify-build.sh \
minio/healing-rewrite-bucket/ \
--disable-multipart --quiet
kill ${pid}
sleep 3
"${WORK_DIR}/mc" cp \
buildscripts/verify-build.sh \
minio/healing-rewrite-bucket/ \
--disable-multipart --quiet
"${MINIO[@]}" --address ":$start_port" "${WORK_DIR}/xl{1...16}" >"${WORK_DIR}/server1.log" 2>&1 &
pid=$!
disown $pid
sleep 10
kill ${pid}
sleep 3
if ! ps -p ${pid} 1>&2 >/dev/null; then
echo "server1 log:"
cat "${WORK_DIR}/server1.log"
echo "FAILED"
purge "$WORK_DIR"
exit 1
fi
"${MINIO[@]}" --address ":$start_port" "${WORK_DIR}/xl{1...16}" > "${WORK_DIR}/server1.log" 2>&1 &
pid=$!
disown $pid
sleep 10
go build ./docs/debugging/s3-check-md5/
if ! ./s3-check-md5 \
-debug \
-versions \
-access-key minio \
-secret-key minio123 \
-endpoint http://127.0.0.1:${start_port}/ 2>&1 | grep INTACT; then
echo "server1 log:"
cat "${WORK_DIR}/server1.log"
echo "FAILED"
mkdir -p inspects
(
cd inspects
"${WORK_DIR}/mc" admin inspect minio/healing-rewrite-bucket/verify-build.sh/**
)
if ! ps -p ${pid} 1>&2 >/dev/null; then
echo "server1 log:"
cat "${WORK_DIR}/server1.log"
echo "FAILED"
purge "$WORK_DIR"
exit 1
fi
"${WORK_DIR}/mc" mb play/inspects
"${WORK_DIR}/mc" mirror inspects play/inspects
go build ./docs/debugging/s3-check-md5/
if ! ./s3-check-md5 \
-debug \
-versions \
-access-key minio \
-secret-key minio123 \
-endpoint http://127.0.0.1:${start_port}/ 2>&1 | grep INTACT; then
echo "server1 log:"
cat "${WORK_DIR}/server1.log"
echo "FAILED"
mkdir -p inspects
(cd inspects; "${WORK_DIR}/mc" admin inspect minio/healing-rewrite-bucket/verify-build.sh/**)
purge "$WORK_DIR"
exit 1
fi
"${WORK_DIR}/mc" mb play/inspects
"${WORK_DIR}/mc" mirror inspects play/inspects
go run ./buildscripts/heal-manual.go "127.0.0.1:${start_port}" "minio" "minio123"
sleep 1
purge "$WORK_DIR"
exit 1
fi
if ! ./s3-check-md5 \
-debug \
-versions \
-access-key minio \
-secret-key minio123 \
-endpoint http://127.0.0.1:${start_port}/ 2>&1 | grep INTACT; then
echo "server1 log:"
cat "${WORK_DIR}/server1.log"
echo "FAILED"
mkdir -p inspects
(
cd inspects
"${WORK_DIR}/mc" admin inspect minio/healing-rewrite-bucket/verify-build.sh/**
)
go run ./buildscripts/heal-manual.go "127.0.0.1:${start_port}" "minio" "minio123"
sleep 1
"${WORK_DIR}/mc" mb play/inspects
"${WORK_DIR}/mc" mirror inspects play/inspects
if ! ./s3-check-md5 \
-debug \
-versions \
-access-key minio \
-secret-key minio123 \
-endpoint http://127.0.0.1:${start_port}/ 2>&1 | grep INTACT; then
echo "server1 log:"
cat "${WORK_DIR}/server1.log"
echo "FAILED"
mkdir -p inspects
(cd inspects; "${WORK_DIR}/mc" admin inspect minio/healing-rewrite-bucket/verify-build.sh/**)
purge "$WORK_DIR"
exit 1
fi
"${WORK_DIR}/mc" mb play/inspects
"${WORK_DIR}/mc" mirror inspects play/inspects
purge "$WORK_DIR"
exit 1
fi
kill ${pid}
kill ${pid}
}
function main() {
download_old_release
download_old_release
start_port=$(shuf -i 10000-65000 -n 1)
start_port=$(shuf -i 10000-65000 -n 1)
verify_rewrite ${start_port}
verify_rewrite ${start_port}
}
function purge()
{
rm -rf "$1"
function purge() {
rm -rf "$1"
}
( main "$@" )
(main "$@")
rv=$?
purge "$WORK_DIR"
exit "$rv"


@@ -6,167 +6,172 @@ set -o pipefail
set -x
if [ ! -x "$PWD/minio" ]; then
echo "minio executable binary not found in current directory"
exit 1
echo "minio executable binary not found in current directory"
exit 1
fi
WORK_DIR="$PWD/.verify-$RANDOM"
MINIO_CONFIG_DIR="$WORK_DIR/.minio"
MINIO_OLD=( "$PWD/minio.RELEASE.2021-11-24T23-19-33Z" --config-dir "$MINIO_CONFIG_DIR" server )
MINIO=( "$PWD/minio" --config-dir "$MINIO_CONFIG_DIR" server )
MINIO_OLD=("$PWD/minio.RELEASE.2021-11-24T23-19-33Z" --config-dir "$MINIO_CONFIG_DIR" server)
MINIO=("$PWD/minio" --config-dir "$MINIO_CONFIG_DIR" server)
function download_old_release() {
if [ ! -f minio.RELEASE.2021-11-24T23-19-33Z ]; then
curl --silent -O https://dl.minio.io/server/minio/release/linux-amd64/archive/minio.RELEASE.2021-11-24T23-19-33Z
chmod a+x minio.RELEASE.2021-11-24T23-19-33Z
fi
if [ ! -f minio.RELEASE.2021-11-24T23-19-33Z ]; then
curl --silent -O https://dl.minio.io/server/minio/release/linux-amd64/archive/minio.RELEASE.2021-11-24T23-19-33Z
chmod a+x minio.RELEASE.2021-11-24T23-19-33Z
fi
}
function start_minio_16drive() {
start_port=$1
start_port=$1
export MINIO_ROOT_USER=minio
export MINIO_ROOT_PASSWORD=minio123
export MC_HOST_minio="http://minio:minio123@127.0.0.1:${start_port}/"
unset MINIO_KMS_AUTO_ENCRYPTION # do not auto-encrypt objects
export _MINIO_SHARD_DISKTIME_DELTA="5s" # do not change this as it's needed for tests
export MINIO_CI_CD=1
export MINIO_ROOT_USER=minio
export MINIO_ROOT_PASSWORD=minio123
export MC_HOST_minio="http://minio:minio123@127.0.0.1:${start_port}/"
unset MINIO_KMS_AUTO_ENCRYPTION # do not auto-encrypt objects
export _MINIO_SHARD_DISKTIME_DELTA="5s" # do not change this as it's needed for tests
export MINIO_CI_CD=1
MC_BUILD_DIR="mc-$RANDOM"
if ! git clone --quiet https://github.com/minio/mc "$MC_BUILD_DIR"; then
echo "failed to download https://github.com/minio/mc"
MC_BUILD_DIR="mc-$RANDOM"
if ! git clone --quiet https://github.com/minio/mc "$MC_BUILD_DIR"; then
echo "failed to download https://github.com/minio/mc"
purge "${MC_BUILD_DIR}"
exit 1
fi
(cd "${MC_BUILD_DIR}" && go build -o "$WORK_DIR/mc")
# remove mc source.
purge "${MC_BUILD_DIR}"
exit 1
fi
(cd "${MC_BUILD_DIR}" && go build -o "$WORK_DIR/mc")
"${MINIO_OLD[@]}" --address ":$start_port" "${WORK_DIR}/xl{1...16}" >"${WORK_DIR}/server1.log" 2>&1 &
pid=$!
disown $pid
sleep 30
# remove mc source.
purge "${MC_BUILD_DIR}"
if ! ps -p ${pid} 1>&2 >/dev/null; then
echo "server1 log:"
cat "${WORK_DIR}/server1.log"
echo "FAILED"
purge "$WORK_DIR"
exit 1
fi
"${MINIO_OLD[@]}" --address ":$start_port" "${WORK_DIR}/xl{1...16}" > "${WORK_DIR}/server1.log" 2>&1 &
pid=$!
disown $pid
sleep 30
shred --iterations=1 --size=5241856 - 1>"${WORK_DIR}/unaligned" 2>/dev/null
"${WORK_DIR}/mc" mb minio/healing-shard-bucket --quiet
"${WORK_DIR}/mc" cp \
"${WORK_DIR}/unaligned" \
minio/healing-shard-bucket/unaligned \
--disable-multipart --quiet
if ! ps -p ${pid} 1>&2 >/dev/null; then
echo "server1 log:"
cat "${WORK_DIR}/server1.log"
echo "FAILED"
purge "$WORK_DIR"
exit 1
fi
## "unaligned" object name gets consistently distributed
## to disks in following distribution order
##
## NOTE: if you change the name make sure to change the
## distribution order present here
##
## [15, 16, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14]
shred --iterations=1 --size=5241856 - 1>"${WORK_DIR}/unaligned" 2>/dev/null
"${WORK_DIR}/mc" mb minio/healing-shard-bucket --quiet
"${WORK_DIR}/mc" cp \
"${WORK_DIR}/unaligned" \
minio/healing-shard-bucket/unaligned \
--disable-multipart --quiet
## make sure to remove the "last" data shard
rm -rf "${WORK_DIR}/xl14/healing-shard-bucket/unaligned"
sleep 10
## Heal the shard
"${WORK_DIR}/mc" admin heal --quiet --recursive minio/healing-shard-bucket
## then remove any other data shard let's pick first disk
## - 1st data shard.
rm -rf "${WORK_DIR}/xl3/healing-shard-bucket/unaligned"
sleep 10
## "unaligned" object name gets consistently distributed
## to disks in following distribution order
##
## NOTE: if you change the name make sure to change the
## distribution order present here
##
## [15, 16, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14]
go build ./docs/debugging/s3-check-md5/
if ! ./s3-check-md5 \
-debug \
-access-key minio \
-secret-key minio123 \
-endpoint http://127.0.0.1:${start_port}/ 2>&1 | grep CORRUPTED; then
echo "server1 log:"
cat "${WORK_DIR}/server1.log"
echo "FAILED"
purge "$WORK_DIR"
exit 1
fi
## make sure to remove the "last" data shard
rm -rf "${WORK_DIR}/xl14/healing-shard-bucket/unaligned"
sleep 10
## Heal the shard
"${WORK_DIR}/mc" admin heal --quiet --recursive minio/healing-shard-bucket
## then remove any other data shard let's pick first disk
## - 1st data shard.
rm -rf "${WORK_DIR}/xl3/healing-shard-bucket/unaligned"
sleep 10
pkill minio
sleep 3
go build ./docs/debugging/s3-check-md5/
if ! ./s3-check-md5 \
-debug \
-access-key minio \
-secret-key minio123 \
-endpoint http://127.0.0.1:${start_port}/ 2>&1 | grep CORRUPTED; then
echo "server1 log:"
cat "${WORK_DIR}/server1.log"
echo "FAILED"
purge "$WORK_DIR"
exit 1
fi
"${MINIO[@]}" --address ":$start_port" "${WORK_DIR}/xl{1...16}" >"${WORK_DIR}/server1.log" 2>&1 &
pid=$!
disown $pid
sleep 30
pkill minio
sleep 3
if ! ps -p ${pid} 1>&2 >/dev/null; then
echo "server1 log:"
cat "${WORK_DIR}/server1.log"
echo "FAILED"
purge "$WORK_DIR"
exit 1
fi
"${MINIO[@]}" --address ":$start_port" "${WORK_DIR}/xl{1...16}" > "${WORK_DIR}/server1.log" 2>&1 &
pid=$!
disown $pid
sleep 30
if ! ./s3-check-md5 \
-debug \
-access-key minio \
-secret-key minio123 \
-endpoint http://127.0.0.1:${start_port}/ 2>&1 | grep INTACT; then
echo "server1 log:"
cat "${WORK_DIR}/server1.log"
echo "FAILED"
mkdir -p inspects
(
cd inspects
"${WORK_DIR}/mc" support inspect minio/healing-shard-bucket/unaligned/**
)
if ! ps -p ${pid} 1>&2 >/dev/null; then
echo "server1 log:"
cat "${WORK_DIR}/server1.log"
echo "FAILED"
purge "$WORK_DIR"
exit 1
fi
"${WORK_DIR}/mc" mb play/inspects
"${WORK_DIR}/mc" mirror inspects play/inspects
if ! ./s3-check-md5 \
-debug \
-access-key minio \
-secret-key minio123 \
-endpoint http://127.0.0.1:${start_port}/ 2>&1 | grep INTACT; then
echo "server1 log:"
cat "${WORK_DIR}/server1.log"
echo "FAILED"
mkdir -p inspects
(cd inspects; "${WORK_DIR}/mc" support inspect minio/healing-shard-bucket/unaligned/**)
purge "$WORK_DIR"
exit 1
fi
"${WORK_DIR}/mc" mb play/inspects
"${WORK_DIR}/mc" mirror inspects play/inspects
"${WORK_DIR}/mc" admin heal --quiet --recursive minio/healing-shard-bucket
purge "$WORK_DIR"
exit 1
fi
if ! ./s3-check-md5 \
-debug \
-access-key minio \
-secret-key minio123 \
-endpoint http://127.0.0.1:${start_port}/ 2>&1 | grep INTACT; then
echo "server1 log:"
cat "${WORK_DIR}/server1.log"
echo "FAILED"
mkdir -p inspects
(
cd inspects
"${WORK_DIR}/mc" support inspect minio/healing-shard-bucket/unaligned/**
)
"${WORK_DIR}/mc" admin heal --quiet --recursive minio/healing-shard-bucket
"${WORK_DIR}/mc" mb play/inspects
"${WORK_DIR}/mc" mirror inspects play/inspects
if ! ./s3-check-md5 \
-debug \
-access-key minio \
-secret-key minio123 \
-endpoint http://127.0.0.1:${start_port}/ 2>&1 | grep INTACT; then
echo "server1 log:"
cat "${WORK_DIR}/server1.log"
echo "FAILED"
mkdir -p inspects
(cd inspects; "${WORK_DIR}/mc" support inspect minio/healing-shard-bucket/unaligned/**)
purge "$WORK_DIR"
exit 1
fi
"${WORK_DIR}/mc" mb play/inspects
"${WORK_DIR}/mc" mirror inspects play/inspects
purge "$WORK_DIR"
exit 1
fi
pkill minio
sleep 3
pkill minio
sleep 3
}
function main() {
download_old_release
download_old_release
start_port=$(shuf -i 10000-65000 -n 1)
start_port=$(shuf -i 10000-65000 -n 1)
start_minio_16drive ${start_port}
start_minio_16drive ${start_port}
}
function purge()
{
rm -rf "$1"
function purge() {
rm -rf "$1"
}
( main "$@" )
(main "$@")
rv=$?
purge "$WORK_DIR"
exit "$rv"


@@ -6,8 +6,8 @@ set -E
set -o pipefail
if [ ! -x "$PWD/minio" ]; then
echo "minio executable binary not found in current directory"
exit 1
echo "minio executable binary not found in current directory"
exit 1
fi
WORK_DIR="$PWD/.verify-$RANDOM"
@@ -25,285 +25,270 @@ export ENABLE_ADMIN=1
export MINIO_CI_CD=1
MINIO_CONFIG_DIR="$WORK_DIR/.minio"
MINIO=( "$PWD/minio" --config-dir "$MINIO_CONFIG_DIR" )
MINIO=("$PWD/minio" --config-dir "$MINIO_CONFIG_DIR")
FILE_1_MB="$MINT_DATA_DIR/datafile-1-MB"
FILE_65_MB="$MINT_DATA_DIR/datafile-65-MB"
FUNCTIONAL_TESTS="$WORK_DIR/functional-tests.sh"
function start_minio_fs()
{
export MINIO_ROOT_USER=$ACCESS_KEY
export MINIO_ROOT_PASSWORD=$SECRET_KEY
"${MINIO[@]}" server "${WORK_DIR}/fs-disk" >"$WORK_DIR/fs-minio.log" 2>&1 &
sleep 10
function start_minio_fs() {
export MINIO_ROOT_USER=$ACCESS_KEY
export MINIO_ROOT_PASSWORD=$SECRET_KEY
"${MINIO[@]}" server "${WORK_DIR}/fs-disk" >"$WORK_DIR/fs-minio.log" 2>&1 &
sleep 10
}
function start_minio_erasure()
{
"${MINIO[@]}" server "${WORK_DIR}/erasure-disk1" "${WORK_DIR}/erasure-disk2" "${WORK_DIR}/erasure-disk3" "${WORK_DIR}/erasure-disk4" >"$WORK_DIR/erasure-minio.log" 2>&1 &
sleep 15
function start_minio_erasure() {
"${MINIO[@]}" server "${WORK_DIR}/erasure-disk1" "${WORK_DIR}/erasure-disk2" "${WORK_DIR}/erasure-disk3" "${WORK_DIR}/erasure-disk4" >"$WORK_DIR/erasure-minio.log" 2>&1 &
sleep 15
}
function start_minio_erasure_sets()
{
export MINIO_ENDPOINTS="${WORK_DIR}/erasure-disk-sets{1...32}"
"${MINIO[@]}" server > "$WORK_DIR/erasure-minio-sets.log" 2>&1 &
sleep 15
function start_minio_erasure_sets() {
export MINIO_ENDPOINTS="${WORK_DIR}/erasure-disk-sets{1...32}"
"${MINIO[@]}" server >"$WORK_DIR/erasure-minio-sets.log" 2>&1 &
sleep 15
}
function start_minio_pool_erasure_sets()
{
export MINIO_ROOT_USER=$ACCESS_KEY
export MINIO_ROOT_PASSWORD=$SECRET_KEY
export MINIO_ENDPOINTS="http://127.0.0.1:9000${WORK_DIR}/pool-disk-sets{1...4} http://127.0.0.1:9001${WORK_DIR}/pool-disk-sets{5...8}"
"${MINIO[@]}" server --address ":9000" > "$WORK_DIR/pool-minio-9000.log" 2>&1 &
"${MINIO[@]}" server --address ":9001" > "$WORK_DIR/pool-minio-9001.log" 2>&1 &
function start_minio_pool_erasure_sets() {
export MINIO_ROOT_USER=$ACCESS_KEY
export MINIO_ROOT_PASSWORD=$SECRET_KEY
export MINIO_ENDPOINTS="http://127.0.0.1:9000${WORK_DIR}/pool-disk-sets{1...4} http://127.0.0.1:9001${WORK_DIR}/pool-disk-sets{5...8}"
"${MINIO[@]}" server --address ":9000" >"$WORK_DIR/pool-minio-9000.log" 2>&1 &
"${MINIO[@]}" server --address ":9001" >"$WORK_DIR/pool-minio-9001.log" 2>&1 &
sleep 40
sleep 40
}
function start_minio_pool_erasure_sets_ipv6()
{
export MINIO_ROOT_USER=$ACCESS_KEY
export MINIO_ROOT_PASSWORD=$SECRET_KEY
export MINIO_ENDPOINTS="http://[::1]:9000${WORK_DIR}/pool-disk-sets-ipv6{1...4} http://[::1]:9001${WORK_DIR}/pool-disk-sets-ipv6{5...8}"
"${MINIO[@]}" server --address="[::1]:9000" > "$WORK_DIR/pool-minio-ipv6-9000.log" 2>&1 &
"${MINIO[@]}" server --address="[::1]:9001" > "$WORK_DIR/pool-minio-ipv6-9001.log" 2>&1 &
function start_minio_pool_erasure_sets_ipv6() {
export MINIO_ROOT_USER=$ACCESS_KEY
export MINIO_ROOT_PASSWORD=$SECRET_KEY
export MINIO_ENDPOINTS="http://[::1]:9000${WORK_DIR}/pool-disk-sets-ipv6{1...4} http://[::1]:9001${WORK_DIR}/pool-disk-sets-ipv6{5...8}"
"${MINIO[@]}" server --address="[::1]:9000" >"$WORK_DIR/pool-minio-ipv6-9000.log" 2>&1 &
"${MINIO[@]}" server --address="[::1]:9001" >"$WORK_DIR/pool-minio-ipv6-9001.log" 2>&1 &
sleep 40
sleep 40
}
function start_minio_dist_erasure()
{
export MINIO_ROOT_USER=$ACCESS_KEY
export MINIO_ROOT_PASSWORD=$SECRET_KEY
export MINIO_ENDPOINTS="http://127.0.0.1:9000${WORK_DIR}/dist-disk1 http://127.0.0.1:9001${WORK_DIR}/dist-disk2 http://127.0.0.1:9002${WORK_DIR}/dist-disk3 http://127.0.0.1:9003${WORK_DIR}/dist-disk4"
for i in $(seq 0 3); do
"${MINIO[@]}" server --address ":900${i}" > "$WORK_DIR/dist-minio-900${i}.log" 2>&1 &
done
function start_minio_dist_erasure() {
export MINIO_ROOT_USER=$ACCESS_KEY
export MINIO_ROOT_PASSWORD=$SECRET_KEY
export MINIO_ENDPOINTS="http://127.0.0.1:9000${WORK_DIR}/dist-disk1 http://127.0.0.1:9001${WORK_DIR}/dist-disk2 http://127.0.0.1:9002${WORK_DIR}/dist-disk3 http://127.0.0.1:9003${WORK_DIR}/dist-disk4"
for i in $(seq 0 3); do
"${MINIO[@]}" server --address ":900${i}" >"$WORK_DIR/dist-minio-900${i}.log" 2>&1 &
done
sleep 40
sleep 40
}
function run_test_fs()
{
start_minio_fs
function run_test_fs() {
start_minio_fs
(cd "$WORK_DIR" && "$FUNCTIONAL_TESTS")
rv=$?
(cd "$WORK_DIR" && "$FUNCTIONAL_TESTS")
rv=$?
pkill minio
sleep 3
pkill minio
sleep 3
if [ "$rv" -ne 0 ]; then
cat "$WORK_DIR/fs-minio.log"
fi
rm -f "$WORK_DIR/fs-minio.log"
if [ "$rv" -ne 0 ]; then
cat "$WORK_DIR/fs-minio.log"
fi
rm -f "$WORK_DIR/fs-minio.log"
return "$rv"
return "$rv"
}
function run_test_erasure_sets()
{
start_minio_erasure_sets
function run_test_erasure_sets() {
start_minio_erasure_sets
(cd "$WORK_DIR" && "$FUNCTIONAL_TESTS")
rv=$?
(cd "$WORK_DIR" && "$FUNCTIONAL_TESTS")
rv=$?
pkill minio
sleep 3
pkill minio
sleep 3
if [ "$rv" -ne 0 ]; then
cat "$WORK_DIR/erasure-minio-sets.log"
fi
rm -f "$WORK_DIR/erasure-minio-sets.log"
if [ "$rv" -ne 0 ]; then
cat "$WORK_DIR/erasure-minio-sets.log"
fi
rm -f "$WORK_DIR/erasure-minio-sets.log"
return "$rv"
return "$rv"
}
function run_test_pool_erasure_sets()
{
start_minio_pool_erasure_sets
function run_test_pool_erasure_sets() {
start_minio_pool_erasure_sets
(cd "$WORK_DIR" && "$FUNCTIONAL_TESTS")
rv=$?
(cd "$WORK_DIR" && "$FUNCTIONAL_TESTS")
rv=$?
pkill minio
sleep 3
pkill minio
sleep 3
if [ "$rv" -ne 0 ]; then
for i in $(seq 0 1); do
echo "server$i log:"
cat "$WORK_DIR/pool-minio-900$i.log"
done
fi
if [ "$rv" -ne 0 ]; then
for i in $(seq 0 1); do
echo "server$i log:"
cat "$WORK_DIR/pool-minio-900$i.log"
done
fi
for i in $(seq 0 1); do
rm -f "$WORK_DIR/pool-minio-900$i.log"
done
for i in $(seq 0 1); do
rm -f "$WORK_DIR/pool-minio-900$i.log"
done
return "$rv"
return "$rv"
}
function run_test_pool_erasure_sets_ipv6()
{
start_minio_pool_erasure_sets_ipv6
function run_test_pool_erasure_sets_ipv6() {
start_minio_pool_erasure_sets_ipv6
export SERVER_ENDPOINT="[::1]:9000"
export SERVER_ENDPOINT="[::1]:9000"
(cd "$WORK_DIR" && "$FUNCTIONAL_TESTS")
rv=$?
(cd "$WORK_DIR" && "$FUNCTIONAL_TESTS")
rv=$?
pkill minio
sleep 3
pkill minio
sleep 3
if [ "$rv" -ne 0 ]; then
for i in $(seq 0 1); do
echo "server$i log:"
cat "$WORK_DIR/pool-minio-ipv6-900$i.log"
done
fi
if [ "$rv" -ne 0 ]; then
for i in $(seq 0 1); do
echo "server$i log:"
cat "$WORK_DIR/pool-minio-ipv6-900$i.log"
done
fi
for i in $(seq 0 1); do
rm -f "$WORK_DIR/pool-minio-ipv6-900$i.log"
done
for i in $(seq 0 1); do
rm -f "$WORK_DIR/pool-minio-ipv6-900$i.log"
done
return "$rv"
return "$rv"
}
function run_test_erasure()
{
start_minio_erasure
function run_test_erasure() {
start_minio_erasure
(cd "$WORK_DIR" && "$FUNCTIONAL_TESTS")
rv=$?
(cd "$WORK_DIR" && "$FUNCTIONAL_TESTS")
rv=$?
pkill minio
sleep 3
pkill minio
sleep 3
if [ "$rv" -ne 0 ]; then
cat "$WORK_DIR/erasure-minio.log"
fi
rm -f "$WORK_DIR/erasure-minio.log"
if [ "$rv" -ne 0 ]; then
cat "$WORK_DIR/erasure-minio.log"
fi
rm -f "$WORK_DIR/erasure-minio.log"
return "$rv"
return "$rv"
}
function run_test_dist_erasure()
{
start_minio_dist_erasure
function run_test_dist_erasure() {
start_minio_dist_erasure
(cd "$WORK_DIR" && "$FUNCTIONAL_TESTS")
rv=$?
(cd "$WORK_DIR" && "$FUNCTIONAL_TESTS")
rv=$?
pkill minio
sleep 3
pkill minio
sleep 3
if [ "$rv" -ne 0 ]; then
echo "server1 log:"
cat "$WORK_DIR/dist-minio-9000.log"
echo "server2 log:"
cat "$WORK_DIR/dist-minio-9001.log"
echo "server3 log:"
cat "$WORK_DIR/dist-minio-9002.log"
echo "server4 log:"
cat "$WORK_DIR/dist-minio-9003.log"
fi
if [ "$rv" -ne 0 ]; then
echo "server1 log:"
cat "$WORK_DIR/dist-minio-9000.log"
echo "server2 log:"
cat "$WORK_DIR/dist-minio-9001.log"
echo "server3 log:"
cat "$WORK_DIR/dist-minio-9002.log"
echo "server4 log:"
cat "$WORK_DIR/dist-minio-9003.log"
fi
rm -f "$WORK_DIR/dist-minio-9000.log" "$WORK_DIR/dist-minio-9001.log" "$WORK_DIR/dist-minio-9002.log" "$WORK_DIR/dist-minio-9003.log"
rm -f "$WORK_DIR/dist-minio-9000.log" "$WORK_DIR/dist-minio-9001.log" "$WORK_DIR/dist-minio-9002.log" "$WORK_DIR/dist-minio-9003.log"
return "$rv"
return "$rv"
}
function purge()
{
rm -rf "$1"
function purge() {
rm -rf "$1"
}
function __init__()
{
echo "Initializing environment"
mkdir -p "$WORK_DIR"
mkdir -p "$MINIO_CONFIG_DIR"
mkdir -p "$MINT_DATA_DIR"
function __init__() {
echo "Initializing environment"
mkdir -p "$WORK_DIR"
mkdir -p "$MINIO_CONFIG_DIR"
mkdir -p "$MINT_DATA_DIR"
MC_BUILD_DIR="mc-$RANDOM"
if ! git clone --quiet https://github.com/minio/mc "$MC_BUILD_DIR"; then
echo "failed to download https://github.com/minio/mc"
purge "${MC_BUILD_DIR}"
exit 1
fi
MC_BUILD_DIR="mc-$RANDOM"
if ! git clone --quiet https://github.com/minio/mc "$MC_BUILD_DIR"; then
echo "failed to download https://github.com/minio/mc"
purge "${MC_BUILD_DIR}"
exit 1
fi
(cd "${MC_BUILD_DIR}" && go build -o "$WORK_DIR/mc")
(cd "${MC_BUILD_DIR}" && go build -o "$WORK_DIR/mc")
# remove mc source.
purge "${MC_BUILD_DIR}"
# remove mc source.
purge "${MC_BUILD_DIR}"
shred -n 1 -s 1M - 1>"$FILE_1_MB" 2>/dev/null
shred -n 1 -s 65M - 1>"$FILE_65_MB" 2>/dev/null
shred -n 1 -s 1M - 1>"$FILE_1_MB" 2>/dev/null
shred -n 1 -s 65M - 1>"$FILE_65_MB" 2>/dev/null
## version is purposefully set to '3' for minio to migrate configuration file
echo '{"version": "3", "credential": {"accessKey": "minio", "secretKey": "minio123"}, "region": "us-east-1"}' > "$MINIO_CONFIG_DIR/config.json"
## version is purposefully set to '3' for minio to migrate configuration file
echo '{"version": "3", "credential": {"accessKey": "minio", "secretKey": "minio123"}, "region": "us-east-1"}' >"$MINIO_CONFIG_DIR/config.json"
if ! wget -q -O "$FUNCTIONAL_TESTS" https://raw.githubusercontent.com/minio/mc/master/functional-tests.sh; then
echo "failed to download https://raw.githubusercontent.com/minio/mc/master/functional-tests.sh"
exit 1
fi
if ! wget -q -O "$FUNCTIONAL_TESTS" https://raw.githubusercontent.com/minio/mc/master/functional-tests.sh; then
echo "failed to download https://raw.githubusercontent.com/minio/mc/master/functional-tests.sh"
exit 1
fi
sed -i 's|-sS|-sSg|g' "$FUNCTIONAL_TESTS"
chmod a+x "$FUNCTIONAL_TESTS"
sed -i 's|-sS|-sSg|g' "$FUNCTIONAL_TESTS"
chmod a+x "$FUNCTIONAL_TESTS"
}
function main()
{
echo "Testing in FS setup"
if ! run_test_fs; then
echo "FAILED"
purge "$WORK_DIR"
exit 1
fi
function main() {
echo "Testing in FS setup"
if ! run_test_fs; then
echo "FAILED"
purge "$WORK_DIR"
exit 1
fi
echo "Testing in Erasure setup"
if ! run_test_erasure; then
echo "FAILED"
purge "$WORK_DIR"
exit 1
fi
echo "Testing in Erasure setup"
if ! run_test_erasure; then
echo "FAILED"
purge "$WORK_DIR"
exit 1
fi
echo "Testing in Distributed Erasure setup"
if ! run_test_dist_erasure; then
echo "FAILED"
purge "$WORK_DIR"
exit 1
fi
echo "Testing in Distributed Erasure setup"
if ! run_test_dist_erasure; then
echo "FAILED"
purge "$WORK_DIR"
exit 1
fi
echo "Testing in Erasure setup as sets"
if ! run_test_erasure_sets; then
echo "FAILED"
purge "$WORK_DIR"
exit 1
fi
echo "Testing in Erasure setup as sets"
if ! run_test_erasure_sets; then
echo "FAILED"
purge "$WORK_DIR"
exit 1
fi
echo "Testing in Distributed Eraure expanded setup"
if ! run_test_pool_erasure_sets; then
echo "FAILED"
purge "$WORK_DIR"
exit 1
fi
echo "Testing in Distributed Eraure expanded setup"
if ! run_test_pool_erasure_sets; then
echo "FAILED"
purge "$WORK_DIR"
exit 1
fi
echo "Testing in Distributed Erasure expanded setup with ipv6"
if ! run_test_pool_erasure_sets_ipv6; then
echo "FAILED"
purge "$WORK_DIR"
exit 1
fi
echo "Testing in Distributed Erasure expanded setup with ipv6"
if ! run_test_pool_erasure_sets_ipv6; then
echo "FAILED"
purge "$WORK_DIR"
exit 1
fi
purge "$WORK_DIR"
purge "$WORK_DIR"
}
( __init__ "$@" && main "$@" )
(__init__ "$@" && main "$@")
rv=$?
purge "$WORK_DIR"
exit "$rv"


@@ -4,94 +4,93 @@ set -E
set -o pipefail
set -x
if [ ! -x "$PWD/minio" ]; then
echo "minio executable binary not found in current directory"
exit 1
echo "minio executable binary not found in current directory"
exit 1
fi
WORK_DIR="$(mktemp -d)"
MINIO_CONFIG_DIR="$WORK_DIR/.minio"
MINIO=( "$PWD/minio" --config-dir "$MINIO_CONFIG_DIR" server )
MINIO=("$PWD/minio" --config-dir "$MINIO_CONFIG_DIR" server)
function start_minio() {
start_port=$1
start_port=$1
export MINIO_ROOT_USER=minio
export MINIO_ROOT_PASSWORD=minio123
unset MINIO_KMS_AUTO_ENCRYPTION # do not auto-encrypt objects
unset MINIO_CI_CD
unset CI
export MINIO_ROOT_USER=minio
export MINIO_ROOT_PASSWORD=minio123
unset MINIO_KMS_AUTO_ENCRYPTION # do not auto-encrypt objects
unset MINIO_CI_CD
unset CI
args=()
for i in $(seq 1 4); do
args+=("http://localhost:$[${start_port}+$i]${WORK_DIR}/mnt/disk$i/ ")
done
args=()
for i in $(seq 1 4); do
args+=("http://localhost:$((start_port + i))${WORK_DIR}/mnt/disk$i/ ")
done
for i in $(seq 1 4); do
"${MINIO[@]}" --address ":$[$start_port+$i]" ${args[@]} 2>&1 >"${WORK_DIR}/server$i.log" &
done
for i in $(seq 1 4); do
"${MINIO[@]}" --address ":$((start_port + i))" ${args[@]} 2>&1 >"${WORK_DIR}/server$i.log" &
done
# Wait until all nodes return 403
for i in $(seq 1 4); do
while [ "$(curl -m 1 -s -o /dev/null -w "%{http_code}" http://localhost:$[$start_port+$i])" -ne "403" ]; do
echo -n ".";
sleep 1;
done
done
# Wait until all nodes return 403
for i in $(seq 1 4); do
while [ "$(curl -m 1 -s -o /dev/null -w "%{http_code}" http://localhost:$((start_port + i)))" -ne "403" ]; do
echo -n "."
sleep 1
done
done
}
}
# Prepare fake disks with losetup
function prepare_block_devices() {
mkdir -p ${WORK_DIR}/disks/ ${WORK_DIR}/mnt/
for i in 1 2 3 4; do
dd if=/dev/zero of=${WORK_DIR}/disks/img.$i bs=1M count=2048
mkfs.ext4 -F ${WORK_DIR}/disks/img.$i
sudo mknod /dev/minio-loopdisk$i b 7 $[256-$i]
sudo losetup /dev/minio-loopdisk$i ${WORK_DIR}/disks/img.$i
mkdir -p ${WORK_DIR}/mnt/disk$i/
sudo mount /dev/minio-loopdisk$i ${WORK_DIR}/mnt/disk$i/
sudo chown "$(id -u):$(id -g)" /dev/minio-loopdisk$i ${WORK_DIR}/mnt/disk$i/
done
set -e
mkdir -p ${WORK_DIR}/disks/ ${WORK_DIR}/mnt/
sudo modprobe loop
for i in 1 2 3 4; do
dd if=/dev/zero of=${WORK_DIR}/disks/img.${i} bs=1M count=2000
device=$(sudo losetup --find --show ${WORK_DIR}/disks/img.${i})
sudo mkfs.ext4 -F ${device}
mkdir -p ${WORK_DIR}/mnt/disk${i}/
sudo mount ${device} ${WORK_DIR}/mnt/disk${i}/
sudo chown "$(id -u):$(id -g)" ${device} ${WORK_DIR}/mnt/disk${i}/
done
set +e
}
# Start a distributed MinIO setup, unmount one disk and check if it is formatted
function main() {
start_port=$(shuf -i 10000-65000 -n 1)
start_minio ${start_port}
start_port=$(shuf -i 10000-65000 -n 1)
start_minio ${start_port}
# Unmount the disk; after the unmount, the device id of
# /tmp/xxx/mnt/disk4 will be the same as '/', so it
# will be detected as a root disk
while [ "$u" != "0" ]; do
sudo umount ${WORK_DIR}/mnt/disk4/
u=$?
sleep 1
done
# Unmount the disk; after the unmount, the device id of
# /tmp/xxx/mnt/disk4 will be the same as '/', so it
# will be detected as a root disk
while [ "$u" != "0" ]; do
sudo umount ${WORK_DIR}/mnt/disk4/
u=$?
sleep 1
done
# Wait until MinIO self heal kicks in
sleep 60
# Wait until MinIO self heal kicks in
sleep 60
if [ -f ${WORK_DIR}/mnt/disk4/.minio.sys/format.json ]; then
echo "A root disk is formatted unexpectedely"
cat "${WORK_DIR}/server4.log"
exit -1
fi
if [ -f ${WORK_DIR}/mnt/disk4/.minio.sys/format.json ]; then
echo "A root disk is formatted unexpectedely"
cat "${WORK_DIR}/server4.log"
exit -1
fi
}
function cleanup() {
pkill minio
sudo umount ${WORK_DIR}/mnt/disk{1..3}/
sudo rm /dev/minio-loopdisk*
rm -rf "$WORK_DIR"
}
(prepare_block_devices)
(main "$@")
rv=$?
cleanup
exit "$rv"

View File

@@ -5,139 +5,135 @@ set -E
set -o pipefail
if [ ! -x "$PWD/minio" ]; then
echo "minio executable binary not found in current directory"
exit 1
echo "minio executable binary not found in current directory"
exit 1
fi
WORK_DIR="$PWD/.verify-$RANDOM"
MINIO_CONFIG_DIR="$WORK_DIR/.minio"
MINIO=( "$PWD/minio" --config-dir "$MINIO_CONFIG_DIR" server )
MINIO=("$PWD/minio" --config-dir "$MINIO_CONFIG_DIR" server)
function start_minio_3_node() {
export MINIO_ROOT_USER=minio
export MINIO_ROOT_PASSWORD=minio123
export MINIO_ERASURE_SET_DRIVE_COUNT=6
export MINIO_CI_CD=1
start_port=$2
args=""
for i in $(seq 1 3); do
args="$args http://127.0.0.1:$((start_port + i))${WORK_DIR}/$i/1/ http://127.0.0.1:$((start_port + i))${WORK_DIR}/$i/2/ http://127.0.0.1:$((start_port + i))${WORK_DIR}/$i/3/ http://127.0.0.1:$((start_port + i))${WORK_DIR}/$i/4/ http://127.0.0.1:$((start_port + i))${WORK_DIR}/$i/5/ http://127.0.0.1:$((start_port + i))${WORK_DIR}/$i/6/"
done
"${MINIO[@]}" --address ":$((start_port + 1))" $args >"${WORK_DIR}/dist-minio-server1.log" 2>&1 &
pid1=$!
disown ${pid1}
"${MINIO[@]}" --address ":$((start_port + 2))" $args >"${WORK_DIR}/dist-minio-server2.log" 2>&1 &
pid2=$!
disown $pid2
"${MINIO[@]}" --address ":$((start_port + 3))" $args >"${WORK_DIR}/dist-minio-server3.log" 2>&1 &
pid3=$!
disown $pid3
sleep "$1"
if ! ps -p $pid1 1>&2 >/dev/null; then
echo "server1 log:"
cat "${WORK_DIR}/dist-minio-server1.log"
echo "FAILED"
purge "$WORK_DIR"
exit 1
fi
if ! ps -p $pid2 1>&2 >/dev/null; then
echo "server2 log:"
cat "${WORK_DIR}/dist-minio-server2.log"
echo "FAILED"
purge "$WORK_DIR"
exit 1
fi
if ! ps -p $pid3 1>&2 >/dev/null; then
echo "server3 log:"
cat "${WORK_DIR}/dist-minio-server3.log"
echo "FAILED"
purge "$WORK_DIR"
exit 1
fi
if ! pkill minio; then
for i in $(seq 1 3); do
echo "server$i log:"
cat "${WORK_DIR}/dist-minio-server$i.log"
done
echo "FAILED"
purge "$WORK_DIR"
exit 1
fi
sleep 1
if pgrep minio; then
# forcibly killing, to proceed further properly.
if ! pkill -9 minio; then
echo "no minio process running anymore, proceed."
fi
fi
}
function check_online() {
if grep -q 'Unable to initialize sub-systems' ${WORK_DIR}/dist-minio-*.log; then
echo "1"
fi
}
function purge() {
rm -rf "$1"
}
function __init__() {
echo "Initializing environment"
mkdir -p "$WORK_DIR"
mkdir -p "$MINIO_CONFIG_DIR"
## version is purposefully set to '3' for minio to migrate configuration file
echo '{"version": "3", "credential": {"accessKey": "minio", "secretKey": "minio123"}, "region": "us-east-1"}' >"$MINIO_CONFIG_DIR/config.json"
}
function perform_test() {
start_minio_3_node 120 $2
echo "Testing Distributed Erasure setup healing of drives"
echo "Remove the contents of the disks belonging to '${1}' erasure set"
echo "Testing Distributed Erasure setup healing of drives"
echo "Remove the contents of the disks belonging to '${1}' erasure set"
rm -rf ${WORK_DIR}/${1}/*/
start_minio_3_node 120 $2
rv=$(check_online)
if [ "$rv" == "1" ]; then
for i in $(seq 1 3); do
echo "server$i log:"
cat "${WORK_DIR}/dist-minio-server$i.log"
done
pkill -9 minio
echo "FAILED"
purge "$WORK_DIR"
exit 1
fi
}
function main() {
# use same ports for all tests
start_port=$(shuf -i 10000-65000 -n 1)
perform_test "2" ${start_port}
perform_test "1" ${start_port}
perform_test "3" ${start_port}
perform_test "2" ${start_port}
perform_test "1" ${start_port}
perform_test "3" ${start_port}
}
(__init__ "$@" && main "$@")
rv=$?
purge "$WORK_DIR"
exit "$rv"

View File

@@ -22,9 +22,9 @@ import (
"io"
"net/http"
"github.com/gorilla/mux"
xhttp "github.com/minio/minio/internal/http"
"github.com/minio/minio/internal/logger"
"github.com/minio/mux"
"github.com/minio/pkg/bucket/policy"
)
@@ -90,8 +90,8 @@ func (api objectAPIHandlers) PutBucketACLHandler(w http.ResponseWriter, r *http.
if aclHeader == "" {
acl := &accessControlPolicy{}
if err = xmlDecoder(r.Body, acl, r.ContentLength); err != nil {
if err == io.EOF {
writeErrorResponse(ctx, w, errorCodes.ToAPIErr(ErrMissingSecurityHeader),
if terr, ok := err.(*xml.SyntaxError); ok && terr.Msg == io.EOF.Error() {
writeErrorResponse(ctx, w, errorCodes.ToAPIErr(ErrMalformedXML),
r.URL)
return
}

View File

@@ -29,11 +29,10 @@ import (
"strings"
"time"
"github.com/gorilla/mux"
jsoniter "github.com/json-iterator/go"
"github.com/klauspost/compress/zip"
"github.com/minio/kes"
"github.com/minio/madmin-go"
"github.com/minio/kes-go"
"github.com/minio/madmin-go/v3"
"github.com/minio/minio-go/v7/pkg/tags"
"github.com/minio/minio/internal/bucket/lifecycle"
objectlock "github.com/minio/minio/internal/bucket/object/lock"
@@ -41,6 +40,7 @@ import (
"github.com/minio/minio/internal/event"
"github.com/minio/minio/internal/kms"
"github.com/minio/minio/internal/logger"
"github.com/minio/mux"
"github.com/minio/pkg/bucket/policy"
iampolicy "github.com/minio/pkg/iam/policy"
)
@@ -56,9 +56,7 @@ const (
// specified in the quota configuration will be applied by default
// to enforce total quota for the specified bucket.
func (a adminAPIHandlers) PutBucketQuotaConfigHandler(w http.ResponseWriter, r *http.Request) {
ctx := newContext(r, w, "PutBucketQuotaConfig")
defer logger.AuditLog(ctx, w, r, mustGetClaimsFromToken(r))
ctx := r.Context()
objectAPI, _ := validateAdminReq(ctx, w, r, iampolicy.SetBucketQuotaAdminAction)
if objectAPI == nil {
@@ -85,11 +83,6 @@ func (a adminAPIHandlers) PutBucketQuotaConfigHandler(w http.ResponseWriter, r *
return
}
if quotaConfig.Type == "fifo" {
writeErrorResponseJSON(ctx, w, errorCodes.ToAPIErr(ErrInvalidRequest), r.URL)
return
}
updatedAt, err := globalBucketMetadataSys.Update(ctx, bucket, bucketQuotaConfigFile, data)
if err != nil {
writeErrorResponse(ctx, w, toAPIError(ctx, err), r.URL)
@@ -107,10 +100,7 @@ func (a adminAPIHandlers) PutBucketQuotaConfigHandler(w http.ResponseWriter, r *
}
// Call site replication hook.
if err = globalSiteReplicationSys.BucketMetaHook(ctx, bucketMeta); err != nil {
writeErrorResponse(ctx, w, toAPIError(ctx, err), r.URL)
return
}
logger.LogIf(ctx, globalSiteReplicationSys.BucketMetaHook(ctx, bucketMeta))
// Write success response.
writeSuccessResponseHeadersOnly(w)
@@ -118,9 +108,7 @@ func (a adminAPIHandlers) PutBucketQuotaConfigHandler(w http.ResponseWriter, r *
// GetBucketQuotaConfigHandler - gets bucket quota configuration
func (a adminAPIHandlers) GetBucketQuotaConfigHandler(w http.ResponseWriter, r *http.Request) {
ctx := newContext(r, w, "GetBucketQuotaConfig")
defer logger.AuditLog(ctx, w, r, mustGetClaimsFromToken(r))
ctx := r.Context()
objectAPI, _ := validateAdminReq(ctx, w, r, iampolicy.GetBucketQuotaAdminAction)
if objectAPI == nil {
@@ -153,9 +141,8 @@ func (a adminAPIHandlers) GetBucketQuotaConfigHandler(w http.ResponseWriter, r *
// SetRemoteTargetHandler - sets a remote target for bucket
func (a adminAPIHandlers) SetRemoteTargetHandler(w http.ResponseWriter, r *http.Request) {
ctx := newContext(r, w, "SetBucketTarget")
ctx := r.Context()
defer logger.AuditLog(ctx, w, r, mustGetClaimsFromToken(r))
vars := mux.Vars(r)
bucket := pathClean(vars["bucket"])
update := r.Form.Get("update") == "true"
@@ -172,7 +159,7 @@ func (a adminAPIHandlers) SetRemoteTargetHandler(w http.ResponseWriter, r *http.
return
}
cred, _, _, s3Err := validateAdminSignature(ctx, r, "")
cred, _, s3Err := validateAdminSignature(ctx, r, "")
if s3Err != ErrNone {
writeErrorResponseJSON(ctx, w, errorCodes.ToAPIErr(s3Err), r.URL)
return
@@ -201,12 +188,28 @@ func (a adminAPIHandlers) SetRemoteTargetHandler(w http.ResponseWriter, r *http.
if update {
ops = madmin.GetTargetUpdateOps(r.Form)
} else {
target.Arn = globalBucketTargetSys.getRemoteARN(bucket, &target)
var exists bool // true if arn exists
target.Arn, exists = globalBucketTargetSys.getRemoteARN(bucket, &target, "")
if exists && target.Arn != "" { // return pre-existing ARN
data, err := json.Marshal(target.Arn)
if err != nil {
writeErrorResponseJSON(ctx, w, toAdminAPIErr(ctx, err), r.URL)
return
}
// Write success response.
writeSuccessResponseJSON(w, data)
return
}
}
if target.Arn == "" {
writeErrorResponseJSON(ctx, w, errorCodes.ToAPIErrWithErr(ErrAdminConfigBadJSON, err), r.URL)
return
}
if globalSiteReplicationSys.isEnabled() && !update {
writeErrorResponseJSON(ctx, w, errorCodes.ToAPIErrWithErr(ErrRemoteTargetDenyAddError, err), r.URL)
return
}
if update {
// overlay the updates on existing target
tgt := globalBucketTargetSys.GetRemoteBucketTargetByArn(ctx, bucket, target.Arn)
@@ -217,10 +220,14 @@ func (a adminAPIHandlers) SetRemoteTargetHandler(w http.ResponseWriter, r *http.
for _, op := range ops {
switch op {
case madmin.CredentialsUpdateType:
tgt.Credentials = target.Credentials
tgt.TargetBucket = target.TargetBucket
tgt.Secure = target.Secure
tgt.Endpoint = target.Endpoint
if !globalSiteReplicationSys.isEnabled() {
// credentials update is possible only in bucket replication. User will never
// know the site replicator creds.
tgt.Credentials = target.Credentials
tgt.TargetBucket = target.TargetBucket
tgt.Secure = target.Secure
tgt.Endpoint = target.Endpoint
}
case madmin.SyncUpdateType:
tgt.ReplicationSync = target.ReplicationSync
case madmin.ProxyUpdateType:
@@ -277,9 +284,8 @@ func (a adminAPIHandlers) SetRemoteTargetHandler(w http.ResponseWriter, r *http.
// ListRemoteTargetsHandler - lists remote target(s) for a bucket or gets a target
// for a particular ARN type
func (a adminAPIHandlers) ListRemoteTargetsHandler(w http.ResponseWriter, r *http.Request) {
ctx := newContext(r, w, "ListBucketTargets")
ctx := r.Context()
defer logger.AuditLog(ctx, w, r, mustGetClaimsFromToken(r))
vars := mux.Vars(r)
bucket := pathClean(vars["bucket"])
arnType := vars["type"]
@@ -312,9 +318,8 @@ func (a adminAPIHandlers) ListRemoteTargetsHandler(w http.ResponseWriter, r *htt
// RemoveRemoteTargetHandler - removes a remote target for bucket with specified ARN
func (a adminAPIHandlers) RemoveRemoteTargetHandler(w http.ResponseWriter, r *http.Request) {
ctx := newContext(r, w, "RemoveBucketTarget")
ctx := r.Context()
defer logger.AuditLog(ctx, w, r, mustGetClaimsFromToken(r))
vars := mux.Vars(r)
bucket := pathClean(vars["bucket"])
arn := vars["arn"]
@@ -356,8 +361,7 @@ func (a adminAPIHandlers) RemoveRemoteTargetHandler(w http.ResponseWriter, r *ht
// ExportBucketMetadataHandler - exports all bucket metadata as a zipped file
func (a adminAPIHandlers) ExportBucketMetadataHandler(w http.ResponseWriter, r *http.Request) {
ctx := newContext(r, w, "ExportBucketMetadata")
defer logger.AuditLog(ctx, w, r, mustGetClaimsFromToken(r))
ctx := r.Context()
bucket := pathClean(r.Form.Get("bucket"))
// Get current object layer instance.
@@ -448,7 +452,7 @@ func (a adminAPIHandlers) ExportBucketMetadataHandler(w http.ResponseWriter, r *
return
}
case bucketLifecycleConfig:
config, err := globalBucketMetadataSys.GetLifecycleConfig(bucket)
config, _, err := globalBucketMetadataSys.GetLifecycleConfig(bucket)
if err != nil {
if errors.Is(err, BucketLifecycleNotFound{Bucket: bucket}) {
continue
@@ -640,9 +644,7 @@ func (i *importMetaReport) SetStatus(bucket, fname string, err error) {
// 2. Replication config - is omitted from import as remote target credentials are not available from exported data for security reasons.
// 3. lifecycle config - if transition rules are present, tier name needs to have been defined.
func (a adminAPIHandlers) ImportBucketMetadataHandler(w http.ResponseWriter, r *http.Request) {
ctx := newContext(r, w, "ImportBucketMetadata")
defer logger.AuditLog(ctx, w, r, mustGetClaimsFromToken(r))
ctx := r.Context()
// Get current object layer instance.
objectAPI, _ := validateAdminReq(ctx, w, r, iampolicy.ImportBucketMetadataAction)
@@ -660,12 +662,31 @@ func (a adminAPIHandlers) ImportBucketMetadataHandler(w http.ResponseWriter, r *
writeErrorResponseJSON(ctx, w, errorCodes.ToAPIErr(ErrInvalidRequest), r.URL)
return
}
bucketMap := make(map[string]struct{}, 1)
rpt := importMetaReport{
madmin.BucketMetaImportErrs{
Buckets: make(map[string]madmin.BucketStatus, len(zr.File)),
},
}
bucketMap := make(map[string]*BucketMetadata, len(zr.File))
updatedAt := UTCNow()
for _, file := range zr.File {
slc := strings.Split(file.Name, slashSeparator)
if len(slc) != 2 { // expecting bucket/configfile in the zipfile
rpt.SetStatus(file.Name, "", fmt.Errorf("malformed zip - expecting format bucket/<config.json>"))
continue
}
bucket := slc[0]
meta, err := readBucketMetadata(ctx, objectAPI, bucket)
if err == nil {
bucketMap[bucket] = &meta
} else if err != errConfigNotFound {
rpt.SetStatus(bucket, "", err)
}
}
// import object lock config if any - order of import matters here.
for _, file := range zr.File {
slc := strings.Split(file.Name, slashSeparator)
@@ -674,8 +695,7 @@ func (a adminAPIHandlers) ImportBucketMetadataHandler(w http.ResponseWriter, r *
continue
}
bucket, fileName := slc[0], slc[1]
switch fileName {
case objectLockConfig:
if fileName == objectLockConfig {
reader, err := file.Open()
if err != nil {
rpt.SetStatus(bucket, fileName, err)
@@ -694,16 +714,17 @@ func (a adminAPIHandlers) ImportBucketMetadataHandler(w http.ResponseWriter, r *
}
if _, ok := bucketMap[bucket]; !ok {
opts := MakeBucketOptions{
LockEnabled: config.ObjectLockEnabled == "Enabled",
LockEnabled: config.Enabled(),
}
err = objectAPI.MakeBucketWithLocation(ctx, bucket, opts)
err = objectAPI.MakeBucket(ctx, bucket, opts)
if err != nil {
if _, ok := err.(BucketExists); !ok {
rpt.SetStatus(bucket, fileName, err)
continue
}
}
bucketMap[bucket] = struct{}{}
v := newBucketMetadata(bucket)
bucketMap[bucket] = &v
}
// Deny object locking configuration settings on existing buckets without object lock enabled.
@@ -712,27 +733,9 @@ func (a adminAPIHandlers) ImportBucketMetadataHandler(w http.ResponseWriter, r *
continue
}
updatedAt, err := globalBucketMetadataSys.Update(ctx, bucket, objectLockConfig, configData)
if err != nil {
rpt.SetStatus(bucket, fileName, err)
continue
}
bucketMap[bucket].ObjectLockConfigXML = configData
bucketMap[bucket].ObjectLockConfigUpdatedAt = updatedAt
rpt.SetStatus(bucket, fileName, nil)
// Call site replication hook.
//
// We encode the xml bytes as base64 to ensure there are no encoding
// errors.
cfgStr := base64.StdEncoding.EncodeToString(configData)
if err = globalSiteReplicationSys.BucketMetaHook(ctx, madmin.SRBucketMeta{
Type: madmin.SRBucketMetaTypeObjectLockConfig,
Bucket: bucket,
ObjectLockConfig: &cfgStr,
UpdatedAt: updatedAt,
}); err != nil {
rpt.SetStatus(bucket, fileName, err)
continue
}
}
}
@@ -744,8 +747,7 @@ func (a adminAPIHandlers) ImportBucketMetadataHandler(w http.ResponseWriter, r *
continue
}
bucket, fileName := slc[0], slc[1]
switch fileName {
case bucketVersioningConfig:
if fileName == bucketVersioningConfig {
reader, err := file.Open()
if err != nil {
rpt.SetStatus(bucket, fileName, err)
@@ -757,13 +759,14 @@ func (a adminAPIHandlers) ImportBucketMetadataHandler(w http.ResponseWriter, r *
continue
}
if _, ok := bucketMap[bucket]; !ok {
if err = objectAPI.MakeBucketWithLocation(ctx, bucket, MakeBucketOptions{}); err != nil {
if err = objectAPI.MakeBucket(ctx, bucket, MakeBucketOptions{}); err != nil {
if _, ok := err.(BucketExists); !ok {
rpt.SetStatus(bucket, fileName, err)
continue
}
}
bucketMap[bucket] = struct{}{}
v := newBucketMetadata(bucket)
bucketMap[bucket] = &v
}
if globalSiteReplicationSys.isEnabled() && v.Suspended() {
@@ -786,10 +789,8 @@ func (a adminAPIHandlers) ImportBucketMetadataHandler(w http.ResponseWriter, r *
continue
}
if _, err = globalBucketMetadataSys.Update(ctx, bucket, bucketVersioningConfig, configData); err != nil {
rpt.SetStatus(bucket, fileName, err)
continue
}
bucketMap[bucket].VersioningConfigXML = configData
bucketMap[bucket].VersioningConfigUpdatedAt = updatedAt
rpt.SetStatus(bucket, fileName, nil)
}
}
@@ -807,16 +808,18 @@ func (a adminAPIHandlers) ImportBucketMetadataHandler(w http.ResponseWriter, r *
continue
}
bucket, fileName := slc[0], slc[1]
// create bucket if it does not exist yet.
if _, ok := bucketMap[bucket]; !ok {
err = objectAPI.MakeBucketWithLocation(ctx, bucket, MakeBucketOptions{})
err = objectAPI.MakeBucket(ctx, bucket, MakeBucketOptions{})
if err != nil {
if _, ok := err.(BucketExists); !ok {
rpt.SetStatus(bucket, "", err)
continue
}
}
bucketMap[bucket] = struct{}{}
v := newBucketMetadata(bucket)
bucketMap[bucket] = &v
}
if _, ok := bucketMap[bucket]; !ok {
continue
@@ -835,12 +838,7 @@ func (a adminAPIHandlers) ImportBucketMetadataHandler(w http.ResponseWriter, r *
continue
}
if _, err = globalBucketMetadataSys.Update(ctx, bucket, bucketNotificationConfig, configData); err != nil {
rpt.SetStatus(bucket, fileName, err)
continue
}
rulesMap := config.ToRulesMap()
globalEventNotifier.AddRulesMap(bucket, rulesMap)
bucketMap[bucket].NotificationConfigXML = configData
rpt.SetStatus(bucket, fileName, nil)
case bucketPolicyConfig:
// Error out if Content-Length is beyond allowed size.
@@ -863,7 +861,7 @@ func (a adminAPIHandlers) ImportBucketMetadataHandler(w http.ResponseWriter, r *
// Version in policy must not be empty
if bucketPolicy.Version == "" {
rpt.SetStatus(bucket, fileName, fmt.Errorf(ErrMalformedPolicy.String()))
rpt.SetStatus(bucket, fileName, fmt.Errorf(ErrPolicyInvalidVersion.String()))
continue
}
@@ -873,22 +871,9 @@ func (a adminAPIHandlers) ImportBucketMetadataHandler(w http.ResponseWriter, r *
continue
}
updatedAt, err := globalBucketMetadataSys.Update(ctx, bucket, bucketPolicyConfig, configData)
if err != nil {
rpt.SetStatus(bucket, fileName, err)
continue
}
bucketMap[bucket].PolicyConfigJSON = configData
bucketMap[bucket].PolicyConfigUpdatedAt = updatedAt
rpt.SetStatus(bucket, fileName, nil)
// Call site replication hook.
if err = globalSiteReplicationSys.BucketMetaHook(ctx, madmin.SRBucketMeta{
Type: madmin.SRBucketMetaTypePolicy,
Bucket: bucket,
Policy: bucketPolicyBytes,
UpdatedAt: updatedAt,
}); err != nil {
rpt.SetStatus(bucket, fileName, err)
continue
}
case bucketLifecycleConfig:
bucketLifecycle, err := lifecycle.ParseLifecycleConfig(io.LimitReader(reader, sz))
if err != nil {
@@ -914,10 +899,8 @@ func (a adminAPIHandlers) ImportBucketMetadataHandler(w http.ResponseWriter, r *
continue
}
if _, err = globalBucketMetadataSys.Update(ctx, bucket, bucketLifecycleConfig, configData); err != nil {
rpt.SetStatus(bucket, fileName, err)
continue
}
bucketMap[bucket].LifecycleConfigXML = configData
bucketMap[bucket].LifecycleConfigUpdatedAt = updatedAt
rpt.SetStatus(bucket, fileName, nil)
case bucketSSEConfig:
// Parse bucket encryption xml
@@ -952,29 +935,9 @@ func (a adminAPIHandlers) ImportBucketMetadataHandler(w http.ResponseWriter, r *
continue
}
// Store the bucket encryption configuration in the object layer
updatedAt, err := globalBucketMetadataSys.Update(ctx, bucket, bucketSSEConfig, configData)
if err != nil {
rpt.SetStatus(bucket, fileName, err)
continue
}
bucketMap[bucket].EncryptionConfigXML = configData
bucketMap[bucket].EncryptionConfigUpdatedAt = updatedAt
rpt.SetStatus(bucket, fileName, nil)
// Call site replication hook.
//
// We encode the xml bytes as base64 to ensure there are no encoding
// errors.
cfgStr := base64.StdEncoding.EncodeToString(configData)
if err = globalSiteReplicationSys.BucketMetaHook(ctx, madmin.SRBucketMeta{
Type: madmin.SRBucketMetaTypeSSEConfig,
Bucket: bucket,
SSEConfig: &cfgStr,
UpdatedAt: updatedAt,
}); err != nil {
rpt.SetStatus(bucket, fileName, err)
continue
}
case bucketTaggingConfig:
tags, err := tags.ParseBucketXML(io.LimitReader(reader, sz))
if err != nil {
@@ -988,27 +951,9 @@ func (a adminAPIHandlers) ImportBucketMetadataHandler(w http.ResponseWriter, r *
continue
}
updatedAt, err := globalBucketMetadataSys.Update(ctx, bucket, bucketTaggingConfig, configData)
if err != nil {
rpt.SetStatus(bucket, fileName, err)
continue
}
bucketMap[bucket].TaggingConfigXML = configData
bucketMap[bucket].TaggingConfigUpdatedAt = updatedAt
rpt.SetStatus(bucket, fileName, nil)
// Call site replication hook.
//
// We encode the xml bytes as base64 to ensure there are no encoding
// errors.
cfgStr := base64.StdEncoding.EncodeToString(configData)
if err = globalSiteReplicationSys.BucketMetaHook(ctx, madmin.SRBucketMeta{
Type: madmin.SRBucketMetaTypeTags,
Bucket: bucket,
Tags: &cfgStr,
UpdatedAt: updatedAt,
}); err != nil {
rpt.SetStatus(bucket, fileName, err)
continue
}
case bucketQuotaConfigFile:
data, err := io.ReadAll(reader)
if err != nil {
@@ -1016,42 +961,49 @@ func (a adminAPIHandlers) ImportBucketMetadataHandler(w http.ResponseWriter, r *
continue
}
quotaConfig, err := parseBucketQuota(bucket, data)
_, err = parseBucketQuota(bucket, data)
if err != nil {
rpt.SetStatus(bucket, fileName, err)
continue
}
if quotaConfig.Type == "fifo" {
rpt.SetStatus(bucket, fileName, fmt.Errorf("Detected older 'fifo' quota config, 'fifo' feature is removed and not supported anymore"))
continue
}
updatedAt, err := globalBucketMetadataSys.Update(ctx, bucket, bucketQuotaConfigFile, data)
if err != nil {
rpt.SetStatus(bucket, fileName, err)
continue
}
bucketMap[bucket].QuotaConfigJSON = data
bucketMap[bucket].QuotaConfigUpdatedAt = updatedAt
rpt.SetStatus(bucket, fileName, nil)
bucketMeta := madmin.SRBucketMeta{
Type: madmin.SRBucketMetaTypeQuotaConfig,
Bucket: bucket,
Quota: data,
UpdatedAt: updatedAt,
}
if quotaConfig.Quota == 0 {
bucketMeta.Quota = nil
}
// Call site replication hook.
if err = globalSiteReplicationSys.BucketMetaHook(ctx, bucketMeta); err != nil {
rpt.SetStatus(bucket, fileName, err)
continue
}
}
}
enc := func(b []byte) *string {
if b == nil {
return nil
}
v := base64.StdEncoding.EncodeToString(b)
return &v
}
for bucket, meta := range bucketMap {
err := globalBucketMetadataSys.save(ctx, *meta)
if err != nil {
rpt.SetStatus(bucket, "", err)
continue
}
// Call site replication hook.
if err = globalSiteReplicationSys.BucketMetaHook(ctx, madmin.SRBucketMeta{
Bucket: bucket,
Quota: meta.QuotaConfigJSON,
Policy: meta.PolicyConfigJSON,
Versioning: enc(meta.VersioningConfigXML),
Tags: enc(meta.TaggingConfigXML),
ObjectLockConfig: enc(meta.ObjectLockConfigXML),
SSEConfig: enc(meta.EncryptionConfigXML),
UpdatedAt: updatedAt,
}); err != nil {
rpt.SetStatus(bucket, "", err)
continue
}
}
rptData, err := json.Marshal(rpt.BucketMetaImportErrs)
if err != nil {
writeErrorResponse(ctx, w, toAPIError(ctx, err), r.URL)
@@ -1064,8 +1016,7 @@ func (a adminAPIHandlers) ImportBucketMetadataHandler(w http.ResponseWriter, r *
// ReplicationDiffHandler - POST returns info on unreplicated versions for a remote target ARN
// to the connected HTTP client.
func (a adminAPIHandlers) ReplicationDiffHandler(w http.ResponseWriter, r *http.Request) {
ctx := newContext(r, w, "ReplicationDiff")
defer logger.AuditLog(ctx, w, r, mustGetClaimsFromToken(r))
ctx := r.Context()
vars := mux.Vars(r)
bucket := vars["bucket"]
@@ -1124,3 +1075,62 @@ func (a adminAPIHandlers) ReplicationDiffHandler(w http.ResponseWriter, r *http.
}
}
}
// ReplicationMRFHandler - POST returns info on entries in the MRF backlog for a node or all nodes
func (a adminAPIHandlers) ReplicationMRFHandler(w http.ResponseWriter, r *http.Request) {
ctx := r.Context()
vars := mux.Vars(r)
bucket := vars["bucket"]
objectAPI, _ := validateAdminReq(ctx, w, r, iampolicy.ReplicationDiff)
if objectAPI == nil {
return
}
// Check if bucket exists.
if bucket != "" {
if _, err := objectAPI.GetBucketInfo(ctx, bucket, BucketOptions{}); err != nil {
writeErrorResponse(ctx, w, toAPIError(ctx, err), r.URL)
return
}
}
q := r.Form
node := q.Get("node")
keepAliveTicker := time.NewTicker(500 * time.Millisecond)
defer keepAliveTicker.Stop()
mrfCh, err := globalNotificationSys.GetReplicationMRF(ctx, bucket, node)
if err != nil {
writeErrorResponse(ctx, w, toAPIError(ctx, err), r.URL)
return
}
enc := json.NewEncoder(w)
for {
select {
case entry, ok := <-mrfCh:
if !ok {
return
}
if err := enc.Encode(entry); err != nil {
return
}
if len(mrfCh) == 0 {
// Flush if nothing is queued
w.(http.Flusher).Flush()
}
case <-keepAliveTicker.C:
if len(mrfCh) > 0 {
continue
}
if _, err := w.Write([]byte(" ")); err != nil {
return
}
w.(http.Flusher).Flush()
case <-ctx.Done():
return
}
}
}
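The handler above demonstrates a long-poll streaming pattern: newline-delimited JSON entries, an explicit Flush when the channel drains, and a single-space keep-alive every 500ms while idle. A self-contained sketch of the same pattern, with a hypothetical entry type standing in for the MRF records:

package main

import (
	"encoding/json"
	"log"
	"net/http"
	"time"
)

// streamJSON encodes each value from ch as one JSON line, flushing whenever
// the channel drains and emitting a 1-byte keep-alive while it sits idle.
func streamJSON(w http.ResponseWriter, ch <-chan map[string]any) {
	flusher, _ := w.(http.Flusher)
	tick := time.NewTicker(500 * time.Millisecond)
	defer tick.Stop()
	enc := json.NewEncoder(w)
	for {
		select {
		case v, ok := <-ch:
			if !ok {
				return
			}
			if enc.Encode(v) != nil {
				return
			}
			if len(ch) == 0 && flusher != nil {
				flusher.Flush() // nothing queued; push buffered bytes now
			}
		case <-tick.C:
			if len(ch) > 0 {
				continue // real data is pending; skip the keep-alive
			}
			if _, err := w.Write([]byte(" ")); err != nil {
				return
			}
			if flusher != nil {
				flusher.Flush()
			}
		}
	}
}

func main() {
	http.HandleFunc("/stream", func(w http.ResponseWriter, r *http.Request) {
		ch := make(chan map[string]any, 8)
		go func() {
			defer close(ch)
			for i := 0; i < 3; i++ {
				ch <- map[string]any{"entry": i}
				time.Sleep(time.Second)
			}
		}()
		streamJSON(w, ch)
	})
	log.Fatal(http.ListenAndServe(":8080", nil))
}

The keep-alive bytes stop proxies and clients from timing out an otherwise quiet connection.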

View File

@@ -23,8 +23,8 @@ import (
"fmt"
"net/http"
"github.com/minio/kes"
"github.com/minio/madmin-go"
"github.com/minio/kes-go"
"github.com/minio/madmin-go/v3"
"github.com/minio/minio/internal/auth"
"github.com/minio/minio/internal/config"
iampolicy "github.com/minio/pkg/iam/policy"
@@ -84,7 +84,13 @@ func toAdminAPIErr(ctx context.Context, err error) APIError {
Description: e.Error(),
HTTPStatusCode: http.StatusBadRequest,
}
case config.Error:
case config.ErrConfigNotFound:
apiErr = APIError{
Code: "XMinioConfigNotFoundError",
Description: e.Error(),
HTTPStatusCode: http.StatusNotFound,
}
case config.ErrConfigGeneric:
apiErr = APIError{
Code: "XMinioConfigError",
Description: e.Error(),
@@ -148,15 +154,9 @@ func toAdminAPIErr(ctx context.Context, err error) APIError {
Description: err.Error(),
HTTPStatusCode: http.StatusForbidden,
}
case errors.Is(err, errIAMServiceAccount):
case errors.Is(err, errIAMServiceAccountNotAllowed):
apiErr = APIError{
Code: "XMinioIAMServiceAccount",
Description: err.Error(),
HTTPStatusCode: http.StatusBadRequest,
}
case errors.Is(err, errIAMServiceAccountUsed):
apiErr = APIError{
Code: "XMinioIAMServiceAccountUsed",
Code: "XMinioIAMServiceAccountNotAllowed",
Description: err.Error(),
HTTPStatusCode: http.StatusBadRequest,
}
@@ -168,10 +168,16 @@ func toAdminAPIErr(ctx context.Context, err error) APIError {
}
case errors.Is(err, errPolicyInUse):
apiErr = APIError{
Code: "XMinioAdminPolicyInUse",
Code: "XMinioIAMPolicyInUse",
Description: "The policy cannot be removed, as it is in use",
HTTPStatusCode: http.StatusBadRequest,
}
case errors.Is(err, errSessionPolicyTooLarge):
apiErr = APIError{
Code: "XMinioIAMServiceAccountSessionPolicyTooLarge",
Description: err.Error(),
HTTPStatusCode: http.StatusBadRequest,
}
case errors.Is(err, kes.ErrKeyExists):
apiErr = APIError{
Code: "XMinioKMSKeyExists",
@@ -204,24 +210,6 @@ func toAdminAPIErr(ctx context.Context, err error) APIError {
Description: err.Error(),
HTTPStatusCode: http.StatusBadRequest,
}
case errors.Is(err, errTierBackendInUse):
apiErr = APIError{
Code: "XMinioAdminTierBackendInUse",
Description: err.Error(),
HTTPStatusCode: http.StatusBadRequest,
}
case errors.Is(err, errTierBackendNotEmpty):
apiErr = APIError{
Code: "XMinioAdminTierBackendNotEmpty",
Description: err.Error(),
HTTPStatusCode: http.StatusBadRequest,
}
case errors.Is(err, errTierInsufficientCreds):
apiErr = APIError{
Code: "XMinioAdminTierInsufficientCreds",
Description: err.Error(),
HTTPStatusCode: http.StatusBadRequest,
}
case errIsTierPermError(err):
apiErr = APIError{
Code: "XMinioAdminTierInsufficientPermissions",
@@ -238,12 +226,10 @@ func toAdminAPIErr(ctx context.Context, err error) APIError {
// toAdminAPIErrCode - converts errErasureWriteQuorum error to admin API
// specific error.
func toAdminAPIErrCode(ctx context.Context, err error) APIErrorCode {
switch err {
case errErasureWriteQuorum:
if errors.Is(err, errErasureWriteQuorum) {
return ErrAdminConfigNoQuorum
default:
return toAPIErrorCode(ctx, err)
}
return toAPIErrorCode(ctx, err)
}
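The switch-to-errors.Is change above matters once errors arrive wrapped: a plain switch compares error values by identity and misses anything produced with fmt.Errorf's %w verb. A small demonstration (errQuorum is a stand-in, not MinIO's sentinel):

package main

import (
	"errors"
	"fmt"
)

var errQuorum = errors.New("write quorum not met")

func main() {
	wrapped := fmt.Errorf("config save failed: %w", errQuorum)

	// Identity comparison misses the wrapped sentinel...
	switch wrapped {
	case errQuorum:
		fmt.Println("switch: matched")
	default:
		fmt.Println("switch: missed") // this branch runs
	}

	// ...while errors.Is walks the Unwrap chain and finds it.
	fmt.Println("errors.Is:", errors.Is(wrapped, errQuorum)) // true
}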
// wraps export error for more context

View File

@@ -1,4 +1,4 @@
// Copyright (c) 2015-2021 MinIO, Inc.
// Copyright (c) 2015-2023 MinIO, Inc.
//
// This file is part of MinIO Object Storage stack
//
@@ -26,8 +26,7 @@ import (
"strconv"
"strings"
"github.com/gorilla/mux"
"github.com/minio/madmin-go"
"github.com/minio/madmin-go/v3"
"github.com/minio/minio/internal/config"
"github.com/minio/minio/internal/config/cache"
"github.com/minio/minio/internal/config/etcd"
@@ -36,15 +35,15 @@ import (
idplugin "github.com/minio/minio/internal/config/identity/plugin"
polplugin "github.com/minio/minio/internal/config/policy/plugin"
"github.com/minio/minio/internal/config/storageclass"
"github.com/minio/minio/internal/config/subnet"
"github.com/minio/minio/internal/logger"
"github.com/minio/mux"
iampolicy "github.com/minio/pkg/iam/policy"
)
// DelConfigKVHandler - DELETE /minio/admin/v3/del-config-kv
func (a adminAPIHandlers) DelConfigKVHandler(w http.ResponseWriter, r *http.Request) {
ctx := newContext(r, w, "DeleteConfigKV")
defer logger.AuditLog(ctx, w, r, mustGetClaimsFromToken(r))
ctx := r.Context()
objectAPI, cred := validateAdminReq(ctx, w, r, iampolicy.ConfigUpdateAdminAction)
if objectAPI == nil {
@@ -71,7 +70,7 @@ func (a adminAPIHandlers) DelConfigKVHandler(w http.ResponseWriter, r *http.Requ
return
}
cfg, err := readServerConfig(ctx, objectAPI)
cfg, err := readServerConfig(ctx, objectAPI, nil)
if err != nil {
writeErrorResponseJSON(ctx, w, toAdminAPIErr(ctx, err), r.URL)
return
@@ -82,11 +81,15 @@ func (a adminAPIHandlers) DelConfigKVHandler(w http.ResponseWriter, r *http.Requ
return
}
if err = validateConfig(cfg, subSys); err != nil {
if err = validateConfig(ctx, cfg, subSys); err != nil {
writeCustomErrorResponseJSON(ctx, w, errorCodes.ToAPIErr(ErrAdminConfigBadJSON), err.Error(), r.URL)
return
}
// Check if subnet proxy being deleted and if so the value of proxy of subnet
// target of logger webhook configuration also should be deleted
loggerWebhookProxyDeleted := setLoggerWebhookSubnetProxy(subSys, cfg)
if err = saveServerConfig(ctx, objectAPI, cfg); err != nil {
writeErrorResponseJSON(ctx, w, toAdminAPIErr(ctx, err), r.URL)
return
@@ -101,6 +104,10 @@ func (a adminAPIHandlers) DelConfigKVHandler(w http.ResponseWriter, r *http.Requ
dynamic := config.SubSystemsDynamic.Contains(subSys)
if dynamic {
applyDynamic(ctx, objectAPI, cfg, subSys, r, w)
if subSys == config.SubnetSubSys && loggerWebhookProxyDeleted {
// Logger webhook proxy deleted, apply the dynamic changes
applyDynamic(ctx, objectAPI, cfg, config.LoggerWebhookSubSys, r, w)
}
}
}
@@ -117,11 +124,30 @@ func applyDynamic(ctx context.Context, objectAPI ObjectLayer, cfg config.Config,
w.Header().Set(madmin.ConfigAppliedHeader, madmin.ConfigAppliedTrue)
}
type badConfigErr struct {
Err error
}
// Error - return the error message
func (bce badConfigErr) Error() string {
return bce.Err.Error()
}
// Unwrap the error to its underlying error.
func (bce badConfigErr) Unwrap() error {
return bce.Err
}
type setConfigResult struct {
Cfg config.Config
SubSys string
Dynamic bool
LoggerWebhookCfgUpdated bool
}
// SetConfigKVHandler - PUT /minio/admin/v3/set-config-kv
func (a adminAPIHandlers) SetConfigKVHandler(w http.ResponseWriter, r *http.Request) {
ctx := newContext(r, w, "SetConfigKV")
defer logger.AuditLog(ctx, w, r, mustGetClaimsFromToken(r))
ctx := r.Context()
objectAPI, cred := validateAdminReq(ctx, w, r, iampolicy.ConfigUpdateAdminAction)
if objectAPI == nil {
@@ -142,58 +168,79 @@ func (a adminAPIHandlers) SetConfigKVHandler(w http.ResponseWriter, r *http.Requ
return
}
cfg, err := readServerConfig(ctx, objectAPI)
result, err := setConfigKV(ctx, objectAPI, kvBytes)
if err != nil {
writeErrorResponseJSON(ctx, w, toAdminAPIErr(ctx, err), r.URL)
switch err.(type) {
case badConfigErr:
writeCustomErrorResponseJSON(ctx, w, errorCodes.ToAPIErr(ErrAdminConfigBadJSON), err.Error(), r.URL)
default:
writeErrorResponseJSON(ctx, w, toAdminAPIErr(ctx, err), r.URL)
}
return
}
dynamic, err := cfg.ReadConfig(bytes.NewReader(kvBytes))
if err != nil {
writeErrorResponseJSON(ctx, w, toAdminAPIErr(ctx, err), r.URL)
return
}
subSys, _, _, err := config.GetSubSys(string(kvBytes))
if err != nil {
writeErrorResponseJSON(ctx, w, toAdminAPIErr(ctx, err), r.URL)
return
}
if err = validateConfig(cfg, subSys); err != nil {
writeCustomErrorResponseJSON(ctx, w, errorCodes.ToAPIErr(ErrAdminConfigBadJSON), err.Error(), r.URL)
return
}
// Update the actual server config on disk.
if err = saveServerConfig(ctx, objectAPI, cfg); err != nil {
writeErrorResponseJSON(ctx, w, toAdminAPIErr(ctx, err), r.URL)
return
}
// Write to the config input KV to history.
if err = saveServerConfigHistory(ctx, objectAPI, kvBytes); err != nil {
writeErrorResponseJSON(ctx, w, toAdminAPIErr(ctx, err), r.URL)
return
}
if dynamic {
applyDynamic(ctx, objectAPI, cfg, subSys, r, w)
if result.Dynamic {
applyDynamic(ctx, objectAPI, result.Cfg, result.SubSys, r, w)
// If logger webhook config updated (proxy due to callhome), explicitly dynamically
// apply the config
if result.LoggerWebhookCfgUpdated {
applyDynamic(ctx, objectAPI, result.Cfg, config.LoggerWebhookSubSys, r, w)
}
}
writeSuccessResponseHeadersOnly(w)
}
func setConfigKV(ctx context.Context, objectAPI ObjectLayer, kvBytes []byte) (result setConfigResult, err error) {
result.Cfg, err = readServerConfig(ctx, objectAPI, nil)
if err != nil {
return
}
result.Dynamic, err = result.Cfg.ReadConfig(bytes.NewReader(kvBytes))
if err != nil {
return
}
result.SubSys, _, _, err = config.GetSubSys(string(kvBytes))
if err != nil {
return
}
tgts, err := config.ParseConfigTargetID(bytes.NewReader(kvBytes))
if err != nil {
return
}
ctx = context.WithValue(ctx, config.ContextKeyForTargetFromConfig, tgts)
if verr := validateConfig(ctx, result.Cfg, result.SubSys); verr != nil {
err = badConfigErr{Err: verr}
return
}
// Check if subnet proxy being set and if so set the same value to proxy of subnet
// target of logger webhook configuration
result.LoggerWebhookCfgUpdated = setLoggerWebhookSubnetProxy(result.SubSys, result.Cfg)
// Update the actual server config on disk.
if err = saveServerConfig(ctx, objectAPI, result.Cfg); err != nil {
return
}
// Write the config input KV to history.
err = saveServerConfigHistory(ctx, objectAPI, kvBytes)
return
}
// GetConfigKVHandler - GET /minio/admin/v3/get-config-kv?key={key}
//
// `key` can be one of three forms:
// 1. `subsys:target` -> request for config of a single subsystem and target pair.
// 2. `subsys:` -> request for config of a single subsystem and the default target.
// 3. `subsys` -> request for config of all targets for the given subsystem.
//
// This is a reporting API and config secrets are redacted in the response.
func (a adminAPIHandlers) GetConfigKVHandler(w http.ResponseWriter, r *http.Request) {
ctx := newContext(r, w, "GetConfigKV")
defer logger.AuditLog(ctx, w, r, mustGetClaimsFromToken(r))
ctx := r.Context()
objectAPI, cred := validateAdminReq(ctx, w, r, iampolicy.ConfigUpdateAdminAction)
if objectAPI == nil {
@@ -217,7 +264,7 @@ func (a adminAPIHandlers) GetConfigKVHandler(w http.ResponseWriter, r *http.Requ
}
}
subSysConfigs, err := cfg.GetSubsysInfo(subSys, target)
subSysConfigs, err := cfg.GetSubsysInfo(subSys, target, true)
if err != nil {
writeErrorResponseJSON(ctx, w, toAdminAPIErr(ctx, err), r.URL)
return
@@ -225,7 +272,7 @@ func (a adminAPIHandlers) GetConfigKVHandler(w http.ResponseWriter, r *http.Requ
var s strings.Builder
for _, subSysConfig := range subSysConfigs {
subSysConfig.AddString(&s, false)
subSysConfig.WriteTo(&s, false)
}
password := cred.SecretKey
@@ -239,9 +286,7 @@ func (a adminAPIHandlers) GetConfigKVHandler(w http.ResponseWriter, r *http.Requ
}
func (a adminAPIHandlers) ClearConfigHistoryKVHandler(w http.ResponseWriter, r *http.Request) {
ctx := newContext(r, w, "ClearConfigHistoryKV")
defer logger.AuditLog(ctx, w, r, mustGetClaimsFromToken(r))
ctx := r.Context()
objectAPI, _ := validateAdminReq(ctx, w, r, iampolicy.ConfigUpdateAdminAction)
if objectAPI == nil {
@@ -274,9 +319,7 @@ func (a adminAPIHandlers) ClearConfigHistoryKVHandler(w http.ResponseWriter, r *
// RestoreConfigHistoryKVHandler - restores a config with KV settings for the given KV id.
func (a adminAPIHandlers) RestoreConfigHistoryKVHandler(w http.ResponseWriter, r *http.Request) {
ctx := newContext(r, w, "RestoreConfigHistoryKV")
defer logger.AuditLog(ctx, w, r, mustGetClaimsFromToken(r))
ctx := r.Context()
objectAPI, _ := validateAdminReq(ctx, w, r, iampolicy.ConfigUpdateAdminAction)
if objectAPI == nil {
@@ -296,7 +339,7 @@ func (a adminAPIHandlers) RestoreConfigHistoryKVHandler(w http.ResponseWriter, r
return
}
cfg, err := readServerConfig(ctx, objectAPI)
cfg, err := readServerConfig(ctx, objectAPI, nil)
if err != nil {
writeErrorResponseJSON(ctx, w, toAdminAPIErr(ctx, err), r.URL)
return
@@ -307,7 +350,7 @@ func (a adminAPIHandlers) RestoreConfigHistoryKVHandler(w http.ResponseWriter, r
return
}
if err = validateConfig(cfg, ""); err != nil {
if err = validateConfig(ctx, cfg, ""); err != nil {
writeCustomErrorResponseJSON(ctx, w, errorCodes.ToAPIErr(ErrAdminConfigBadJSON), err.Error(), r.URL)
return
}
@@ -322,9 +365,7 @@ func (a adminAPIHandlers) RestoreConfigHistoryKVHandler(w http.ResponseWriter, r
// ListConfigHistoryKVHandler - lists all the KV ids.
func (a adminAPIHandlers) ListConfigHistoryKVHandler(w http.ResponseWriter, r *http.Request) {
ctx := newContext(r, w, "ListConfigHistoryKV")
defer logger.AuditLog(ctx, w, r, mustGetClaimsFromToken(r))
ctx := r.Context()
objectAPI, cred := validateAdminReq(ctx, w, r, iampolicy.ConfigUpdateAdminAction)
if objectAPI == nil {
@@ -362,9 +403,7 @@ func (a adminAPIHandlers) ListConfigHistoryKVHandler(w http.ResponseWriter, r *h
// HelpConfigKVHandler - GET /minio/admin/v3/help-config-kv?subSys={subSys}&key={key}
func (a adminAPIHandlers) HelpConfigKVHandler(w http.ResponseWriter, r *http.Request) {
ctx := newContext(r, w, "HelpConfigKV")
defer logger.AuditLog(ctx, w, r, mustGetClaimsFromToken(r))
ctx := r.Context()
objectAPI, _ := validateAdminReq(ctx, w, r, iampolicy.ConfigUpdateAdminAction)
if objectAPI == nil {
@@ -389,9 +428,7 @@ func (a adminAPIHandlers) HelpConfigKVHandler(w http.ResponseWriter, r *http.Req
// SetConfigHandler - PUT /minio/admin/v3/config
func (a adminAPIHandlers) SetConfigHandler(w http.ResponseWriter, r *http.Request) {
ctx := newContext(r, w, "SetConfig")
defer logger.AuditLog(ctx, w, r, mustGetClaimsFromToken(r))
ctx := r.Context()
objectAPI, cred := validateAdminReq(ctx, w, r, iampolicy.ConfigUpdateAdminAction)
if objectAPI == nil {
@@ -418,7 +455,7 @@ func (a adminAPIHandlers) SetConfigHandler(w http.ResponseWriter, r *http.Reques
return
}
if err = validateConfig(cfg, ""); err != nil {
if err = validateConfig(ctx, cfg, ""); err != nil {
writeCustomErrorResponseJSON(ctx, w, errorCodes.ToAPIErr(ErrAdminConfigBadJSON), err.Error(), r.URL)
return
}
@@ -439,11 +476,11 @@ func (a adminAPIHandlers) SetConfigHandler(w http.ResponseWriter, r *http.Reques
}
// GetConfigHandler - GET /minio/admin/v3/config
// Get config.json of this minio setup.
//
// This endpoint is mainly for exporting and backing up the configuration.
// Secrets are not redacted.
func (a adminAPIHandlers) GetConfigHandler(w http.ResponseWriter, r *http.Request) {
ctx := newContext(r, w, "GetConfig")
defer logger.AuditLog(ctx, w, r, mustGetClaimsFromToken(r))
ctx := r.Context()
objectAPI, cred := validateAdminReq(ctx, w, r, iampolicy.ConfigUpdateAdminAction)
if objectAPI == nil {
@@ -454,13 +491,9 @@ func (a adminAPIHandlers) GetConfigHandler(w http.ResponseWriter, r *http.Reques
var s strings.Builder
hkvs := config.HelpSubSysMap[""]
var count int
for _, hkv := range hkvs {
count += len(cfg[hkv.Key])
}
for _, hkv := range hkvs {
// We ignore the error below, as we cannot get one.
cfgSubsysItems, _ := cfg.GetSubsysInfo(hkv.Key, "")
cfgSubsysItems, _ := cfg.GetSubsysInfo(hkv.Key, "", false)
for _, item := range cfgSubsysItems {
off := item.Config.Get(config.Enable) == config.EnableOff
@@ -478,11 +511,11 @@ func (a adminAPIHandlers) GetConfigHandler(w http.ResponseWriter, r *http.Reques
case config.IdentityLDAPSubSys:
off = !xldap.Enabled(item.Config)
case config.IdentityTLSSubSys:
off = !globalSTSTLSConfig.Enabled
off = !globalIAMSys.STSTLSConfig.Enabled
case config.IdentityPluginSubSys:
off = !idplugin.Enabled(item.Config)
}
item.AddString(&s, off)
item.WriteTo(&s, off)
}
}
@@ -495,3 +528,18 @@ func (a adminAPIHandlers) GetConfigHandler(w http.ResponseWriter, r *http.Reques
writeSuccessResponseJSON(w, econfigData)
}
// setLoggerWebhookSubnetProxy - Sets the logger webhook's subnet proxy value to
// one being set for subnet proxy
func setLoggerWebhookSubnetProxy(subSys string, cfg config.Config) bool {
if subSys == config.SubnetSubSys || subSys == config.LoggerWebhookSubSys {
subnetWebhookCfg := cfg[config.LoggerWebhookSubSys][subnet.LoggerWebhookName]
loggerWebhookSubnetProxy := subnetWebhookCfg.Get(logger.Proxy)
subnetProxy := cfg[config.SubnetSubSys][config.Default].Get(logger.Proxy)
if loggerWebhookSubnetProxy != subnetProxy {
subnetWebhookCfg.Set(logger.Proxy, subnetProxy)
return true
}
}
return false
}
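The helper above keeps the logger webhook's proxy in lockstep with the subnet proxy whenever either subsystem is edited. A toy sketch of the same idea over a simplified map-of-maps config (the types and the "subnet" target name are illustrative stand-ins, not MinIO's actual config.Config):

package main

import "fmt"

// kv and cfg are simplified stand-ins for MinIO's config layout:
// subsystem -> target -> key/value pairs.
type kv map[string]string

type cfg map[string]map[string]kv

// syncProxy copies the subnet proxy onto the logger webhook target when the
// two values diverge, returning true if a change was made.
func syncProxy(c cfg) bool {
	subnetProxy := c["subnet"]["_"]["proxy"]
	if c["logger_webhook"]["subnet"]["proxy"] != subnetProxy {
		c["logger_webhook"]["subnet"]["proxy"] = subnetProxy
		return true
	}
	return false
}

func main() {
	c := cfg{
		"subnet":         {"_": kv{"proxy": "http://proxy.example.com:3128"}},
		"logger_webhook": {"subnet": kv{"proxy": ""}},
	}
	fmt.Println("updated:", syncProxy(c)) // updated: true
	fmt.Println("updated:", syncProxy(c)) // updated: false (already in sync)
}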

View File

@@ -26,23 +26,18 @@ import (
"net/http"
"strings"
"github.com/gorilla/mux"
"github.com/minio/madmin-go"
"github.com/minio/madmin-go/v3"
"github.com/minio/minio-go/v7/pkg/set"
"github.com/minio/minio/internal/config"
cfgldap "github.com/minio/minio/internal/config/identity/ldap"
"github.com/minio/minio/internal/config/identity/openid"
"github.com/minio/minio/internal/logger"
"github.com/minio/mux"
iampolicy "github.com/minio/pkg/iam/policy"
"github.com/minio/pkg/ldap"
)
// SetIdentityProviderCfg:
//
// PUT <admin-prefix>/id-cfg?type=openid&name=dex1
func (a adminAPIHandlers) SetIdentityProviderCfg(w http.ResponseWriter, r *http.Request) {
ctx := newContext(r, w, "SetIdentityCfg")
defer logger.AuditLog(ctx, w, r, mustGetClaimsFromToken(r))
func addOrUpdateIDPHandler(ctx context.Context, w http.ResponseWriter, r *http.Request, isUpdate bool) {
objectAPI, cred := validateAdminReq(ctx, w, r, iampolicy.ConfigUpdateAdminAction)
if objectAPI == nil {
return
@@ -54,6 +49,14 @@ func (a adminAPIHandlers) SetIdentityProviderCfg(w http.ResponseWriter, r *http.
return
}
// Ensure body content type is opaque to ensure that request body has not
// been interpreted as form data.
contentType := r.Header.Get("Content-Type")
if contentType != "application/octet-stream" {
writeErrorResponseJSON(ctx, w, errorCodes.ToAPIErr(ErrBadRequest), r.URL)
return
}
password := cred.SecretKey
reqBytes, err := madmin.DecryptData(password, io.LimitReader(r.Body, r.ContentLength))
if err != nil {
@@ -68,44 +71,43 @@ func (a adminAPIHandlers) SetIdentityProviderCfg(w http.ResponseWriter, r *http.
return
}
var cfgDataBuilder strings.Builder
var subSys string
switch idpCfgType {
case madmin.OpenidIDPCfg:
fmt.Fprintf(&cfgDataBuilder, "identity_openid")
subSys = madmin.IdentityOpenIDSubSys
case madmin.LDAPIDPCfg:
fmt.Fprintf(&cfgDataBuilder, "identity_ldap")
subSys = madmin.IdentityLDAPSubSys
}
// Ensure body content type is opaque.
contentType := r.Header.Get("Content-Type")
if contentType != "application/octet-stream" {
writeErrorResponseJSON(ctx, w, errorCodes.ToAPIErr(ErrBadRequest), r.URL)
return
}
// Subsystem configuration name could be empty.
cfgName := mux.Vars(r)["name"]
cfgTarget := madmin.Default
if cfgName != "" {
if idpCfgType == madmin.LDAPIDPCfg {
// LDAP does not support multiple configurations. So this must be
// empty.
writeErrorResponseJSON(ctx, w, errorCodes.ToAPIErr(ErrBadRequest), r.URL)
cfgTarget = cfgName
if idpCfgType == madmin.LDAPIDPCfg && cfgName != madmin.Default {
// LDAP does not support multiple configurations. So cfgName must be
// empty or `madmin.Default`.
writeErrorResponseJSON(ctx, w, errorCodes.ToAPIErr(ErrAdminConfigLDAPNonDefaultConfigName), r.URL)
return
}
fmt.Fprintf(&cfgDataBuilder, "%s%s", config.SubSystemSeparator, cfgName)
}
fmt.Fprintf(&cfgDataBuilder, "%s%s", config.KvSpaceSeparator, string(reqBytes))
cfgData := cfgDataBuilder.String()
subSys, _, _, err := config.GetSubSys(cfgData)
if err != nil {
writeErrorResponseJSON(ctx, w, toAdminAPIErr(ctx, err), r.URL)
// Check that this is a valid Create vs Update API call.
s := globalServerConfig.Clone()
if apiErrCode := handleCreateUpdateValidation(s, subSys, cfgTarget, isUpdate); apiErrCode != ErrNone {
writeErrorResponseJSON(ctx, w, errorCodes.ToAPIErr(apiErrCode), r.URL)
return
}
cfg, err := readServerConfig(ctx, objectAPI)
cfgData := ""
{
tgtSuffix := ""
if cfgTarget != madmin.Default {
tgtSuffix = config.SubSystemSeparator + cfgTarget
}
cfgData = subSys + tgtSuffix + config.KvSpaceSeparator + string(reqBytes)
}
cfg, err := readServerConfig(ctx, objectAPI, nil)
if err != nil {
writeErrorResponseJSON(ctx, w, toAdminAPIErr(ctx, err), r.URL)
return
@@ -123,7 +125,7 @@ func (a adminAPIHandlers) SetIdentityProviderCfg(w http.ResponseWriter, r *http.
return
}
if err = validateConfig(cfg, subSys); err != nil {
if err = validateConfig(ctx, cfg, subSys); err != nil {
var validationErr ldap.Validation
if errors.As(err, &validationErr) {
@@ -153,18 +155,123 @@ func (a adminAPIHandlers) SetIdentityProviderCfg(w http.ResponseWriter, r *http.
writeSuccessResponseHeadersOnly(w)
}
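Note the new precondition above: the body must be sent as opaque bytes or the request is rejected. A hedged client-side sketch, with a placeholder payload where a real call would send madmin-encrypted config bytes:

package main

import (
	"bytes"
	"fmt"
	"net/http"
)

func main() {
	// Placeholder payload; a real client encrypts the config with the
	// admin secret key before sending.
	body := bytes.NewReader([]byte("encrypted-config-bytes"))
	req, err := http.NewRequest(http.MethodPut,
		"http://localhost:9000/minio/admin/v3/idp-cfg/openid/dex1", body)
	if err != nil {
		panic(err)
	}
	// Without this header the handler returns ErrBadRequest.
	req.Header.Set("Content-Type", "application/octet-stream")
	fmt.Println(req.Method, req.URL, req.Header.Get("Content-Type"))
}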
func handleCreateUpdateValidation(s config.Config, subSys, cfgTarget string, isUpdate bool) APIErrorCode {
if cfgTarget != madmin.Default {
// This cannot give an error at this point.
subSysTargets, _ := s.GetAvailableTargets(subSys)
subSysTargetsSet := set.CreateStringSet(subSysTargets...)
if isUpdate && !subSysTargetsSet.Contains(cfgTarget) {
return ErrAdminConfigIDPCfgNameDoesNotExist
}
if !isUpdate && subSysTargetsSet.Contains(cfgTarget) {
return ErrAdminConfigIDPCfgNameAlreadyExists
}
return ErrNone
}
// For the default configuration name, since it will always be an available
// target, we need to check if a configuration value has been set previously
// to figure out if this is a valid create or update API call.
// This cannot really error (FIXME: improve the type for GetConfigInfo)
var cfgInfos []madmin.IDPCfgInfo
switch subSys {
case madmin.IdentityOpenIDSubSys:
cfgInfos, _ = globalIAMSys.OpenIDConfig.GetConfigInfo(s, cfgTarget)
case madmin.IdentityLDAPSubSys:
cfgInfos, _ = globalIAMSys.LDAPConfig.GetConfigInfo(s, cfgTarget)
}
if len(cfgInfos) > 0 && !isUpdate {
return ErrAdminConfigIDPCfgNameAlreadyExists
}
if len(cfgInfos) == 0 && isUpdate {
return ErrAdminConfigIDPCfgNameDoesNotExist
}
return ErrNone
}
// AddIdentityProviderCfg: adds a new IDP config for openid/ldap.
//
// PUT <admin-prefix>/idp-cfg/openid/dex1 -> create named config `dex1`
//
// PUT <admin-prefix>/idp-cfg/openid/_ -> create (default) named config `_`
func (a adminAPIHandlers) AddIdentityProviderCfg(w http.ResponseWriter, r *http.Request) {
ctx := r.Context()
addOrUpdateIDPHandler(ctx, w, r, false)
}
// UpdateIdentityProviderCfg: updates an existing IDP config for openid/ldap.
//
// POST <admin-prefix>/idp-cfg/openid/dex1 -> update named config `dex1`
//
// POST <admin-prefix>/idp-cfg/openid/_ -> update (default) named config `_`
func (a adminAPIHandlers) UpdateIdentityProviderCfg(w http.ResponseWriter, r *http.Request) {
ctx := r.Context()
addOrUpdateIDPHandler(ctx, w, r, true)
}
// ListIdentityProviderCfg:
//
// GET <admin-prefix>/idp-cfg/openid -> lists openid provider configs.
func (a adminAPIHandlers) ListIdentityProviderCfg(w http.ResponseWriter, r *http.Request) {
ctx := r.Context()
objectAPI, cred := validateAdminReq(ctx, w, r, iampolicy.ConfigUpdateAdminAction)
if objectAPI == nil {
return
}
password := cred.SecretKey
idpCfgType := mux.Vars(r)["type"]
if !madmin.ValidIDPConfigTypes.Contains(idpCfgType) {
writeErrorResponseJSON(ctx, w, errorCodes.ToAPIErr(ErrAdminConfigInvalidIDPType), r.URL)
return
}
var cfgList []madmin.IDPListItem
var err error
switch idpCfgType {
case madmin.OpenidIDPCfg:
cfg := globalServerConfig.Clone()
cfgList, err = globalIAMSys.OpenIDConfig.GetConfigList(cfg)
case madmin.LDAPIDPCfg:
cfg := globalServerConfig.Clone()
cfgList, err = globalIAMSys.LDAPConfig.GetConfigList(cfg)
default:
writeErrorResponseJSON(ctx, w, errorCodes.ToAPIErr(ErrNotImplemented), r.URL)
return
}
if err != nil {
writeErrorResponseJSON(ctx, w, toAdminAPIErr(ctx, err), r.URL)
return
}
data, err := json.Marshal(cfgList)
if err != nil {
writeErrorResponseJSON(ctx, w, toAdminAPIErr(ctx, err), r.URL)
return
}
econfigData, err := madmin.EncryptData(password, data)
if err != nil {
writeErrorResponseJSON(ctx, w, toAdminAPIErr(ctx, err), r.URL)
return
}
writeSuccessResponseJSON(w, econfigData)
}
// GetIdentityProviderCfg:
//
// GET <admin-prefix>/id-cfg?type=openid&name=dex_test
//
// GetIdentityProviderCfg returns a list of configured IDPs on the server if
// name is empty. If name is non-empty, returns the configuration details for
// the IDP of the given type and configuration name. The configuration name for
// the default ("un-named") configuration target is `_`.
// GET <admin-prefix>/idp-cfg/openid/dex_test
func (a adminAPIHandlers) GetIdentityProviderCfg(w http.ResponseWriter, r *http.Request) {
ctx := newContext(r, w, "GetIdentityProviderCfg")
defer logger.AuditLog(ctx, w, r, mustGetClaimsFromToken(r))
ctx := r.Context()
objectAPI, cred := validateAdminReq(ctx, w, r, iampolicy.ConfigUpdateAdminAction)
if objectAPI == nil {
@@ -172,7 +279,7 @@ func (a adminAPIHandlers) GetIdentityProviderCfg(w http.ResponseWriter, r *http.
}
idpCfgType := mux.Vars(r)["type"]
cfgName := r.Form.Get("name")
cfgName := mux.Vars(r)["name"]
password := cred.SecretKey
if !madmin.ValidIDPConfigTypes.Contains(idpCfgType) {
@@ -180,23 +287,17 @@ func (a adminAPIHandlers) GetIdentityProviderCfg(w http.ResponseWriter, r *http.
return
}
// If no cfgName is provided, we list.
if cfgName == "" {
a.listIdentityProviders(ctx, w, r, idpCfgType, password)
return
}
cfg := globalServerConfig.Clone()
var cfgInfos []madmin.IDPCfgInfo
var err error
switch idpCfgType {
case madmin.OpenidIDPCfg:
cfgInfos, err = globalOpenIDConfig.GetConfigInfo(cfg, cfgName)
cfgInfos, err = globalIAMSys.OpenIDConfig.GetConfigInfo(cfg, cfgName)
case madmin.LDAPIDPCfg:
cfgInfos, err = globalLDAPConfig.GetConfigInfo(cfg, cfgName)
cfgInfos, err = globalIAMSys.LDAPConfig.GetConfigInfo(cfg, cfgName)
}
if err != nil {
if errors.Is(err, openid.ErrProviderConfigNotFound) {
if errors.Is(err, openid.ErrProviderConfigNotFound) || errors.Is(err, cfgldap.ErrProviderConfigNotFound) {
writeErrorResponseJSON(ctx, w, errorCodes.ToAPIErr(ErrAdminNoSuchConfigTarget), r.URL)
return
}
@@ -225,49 +326,11 @@ func (a adminAPIHandlers) GetIdentityProviderCfg(w http.ResponseWriter, r *http.
writeSuccessResponseJSON(w, econfigData)
}
func (a adminAPIHandlers) listIdentityProviders(ctx context.Context, w http.ResponseWriter, r *http.Request, idpCfgType, password string) {
var cfgList []madmin.IDPListItem
var err error
switch idpCfgType {
case madmin.OpenidIDPCfg:
cfg := globalServerConfig.Clone()
cfgList, err = globalOpenIDConfig.GetConfigList(cfg)
case madmin.LDAPIDPCfg:
cfg := globalServerConfig.Clone()
cfgList, err = globalLDAPConfig.GetConfigList(cfg)
default:
writeErrorResponseJSON(ctx, w, errorCodes.ToAPIErr(ErrNotImplemented), r.URL)
return
}
if err != nil {
writeErrorResponseJSON(ctx, w, toAdminAPIErr(ctx, err), r.URL)
return
}
data, err := json.Marshal(cfgList)
if err != nil {
writeErrorResponseJSON(ctx, w, toAdminAPIErr(ctx, err), r.URL)
return
}
econfigData, err := madmin.EncryptData(password, data)
if err != nil {
writeErrorResponseJSON(ctx, w, toAdminAPIErr(ctx, err), r.URL)
return
}
writeSuccessResponseJSON(w, econfigData)
}
// DeleteIdentityProviderCfg:
//
// DELETE <admin-prefix>/id-cfg?type=openid&name=dex_test
// DELETE <admin-prefix>/idp-cfg/openid/dex_test
func (a adminAPIHandlers) DeleteIdentityProviderCfg(w http.ResponseWriter, r *http.Request) {
ctx := newContext(r, w, "DeleteIdentityProviderCfg")
defer logger.AuditLog(ctx, w, r, mustGetClaimsFromToken(r))
ctx := r.Context()
objectAPI, _ := validateAdminReq(ctx, w, r, iampolicy.ConfigUpdateAdminAction)
if objectAPI == nil {
@@ -286,7 +349,7 @@ func (a adminAPIHandlers) DeleteIdentityProviderCfg(w http.ResponseWriter, r *ht
switch idpCfgType {
case madmin.OpenidIDPCfg:
subSys = config.IdentityOpenIDSubSys
cfgInfos, err := globalOpenIDConfig.GetConfigInfo(cfgCopy, cfgName)
cfgInfos, err := globalIAMSys.OpenIDConfig.GetConfigInfo(cfgCopy, cfgName)
if err != nil {
if errors.Is(err, openid.ErrProviderConfigNotFound) {
writeErrorResponseJSON(ctx, w, errorCodes.ToAPIErr(ErrAdminNoSuchConfigTarget), r.URL)
@@ -311,7 +374,7 @@ func (a adminAPIHandlers) DeleteIdentityProviderCfg(w http.ResponseWriter, r *ht
}
case madmin.LDAPIDPCfg:
subSys = config.IdentityLDAPSubSys
cfgInfos, err := globalLDAPConfig.GetConfigInfo(cfgCopy, cfgName)
cfgInfos, err := globalIAMSys.LDAPConfig.GetConfigInfo(cfgCopy, cfgName)
if err != nil {
if errors.Is(err, openid.ErrProviderConfigNotFound) {
writeErrorResponseJSON(ctx, w, errorCodes.ToAPIErr(ErrAdminNoSuchConfigTarget), r.URL)
@@ -339,7 +402,7 @@ func (a adminAPIHandlers) DeleteIdentityProviderCfg(w http.ResponseWriter, r *ht
return
}
cfg, err := readServerConfig(ctx, objectAPI)
cfg, err := readServerConfig(ctx, objectAPI, nil)
if err != nil {
writeErrorResponseJSON(ctx, w, toAdminAPIErr(ctx, err), r.URL)
return
@@ -353,7 +416,17 @@ func (a adminAPIHandlers) DeleteIdentityProviderCfg(w http.ResponseWriter, r *ht
writeErrorResponseJSON(ctx, w, toAdminAPIErr(ctx, err), r.URL)
return
}
if err = validateConfig(cfg, subSys); err != nil {
if err = validateConfig(ctx, cfg, subSys); err != nil {
var validationErr ldap.Validation
if errors.As(err, &validationErr) {
// If we got an LDAP validation error, we need to send appropriate
// error message back to client (likely mc).
writeCustomErrorResponseJSON(ctx, w, errorCodes.ToAPIErr(ErrAdminConfigLDAPValidation),
validationErr.FormatError(), r.URL)
return
}
writeCustomErrorResponseJSON(ctx, w, errorCodes.ToAPIErr(ErrAdminConfigBadJSON), err.Error(), r.URL)
return
}

View File

@@ -0,0 +1,177 @@
// Copyright (c) 2015-2022 MinIO, Inc.
//
// This file is part of MinIO Object Storage stack
//
// This program is free software: you can redistribute it and/or modify
// it under the terms of the GNU Affero General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
//
// This program is distributed in the hope that it will be useful
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU Affero General Public License for more details.
//
// You should have received a copy of the GNU Affero General Public License
// along with this program. If not, see <http://www.gnu.org/licenses/>.
package cmd
import (
"encoding/json"
"io"
"net/http"
"github.com/minio/madmin-go/v3"
"github.com/minio/minio/internal/logger"
"github.com/minio/mux"
iampolicy "github.com/minio/pkg/iam/policy"
)
// ListLDAPPolicyMappingEntities lists users/groups mapped to given/all policies.
//
// GET <admin-prefix>/idp/ldap/policy-entities?[query-params]
//
// Query params:
//
// user=... -> repeatable query parameter, specifying users to query for
// policy mapping
//
// group=... -> repeatable query parameter, specifying groups to query for
// policy mapping
//
// policy=... -> repeatable query parameter, specifying policy to query for
// user/group mapping
//
// When all query parameters are omitted, returns mappings for all policies.
func (a adminAPIHandlers) ListLDAPPolicyMappingEntities(w http.ResponseWriter, r *http.Request) {
ctx := r.Context()
// Check authorization.
objectAPI, cred := validateAdminReq(ctx, w, r,
iampolicy.ListGroupsAdminAction, iampolicy.ListUsersAdminAction, iampolicy.ListUserPoliciesAdminAction)
if objectAPI == nil {
return
}
// Validate API arguments.
q := madmin.PolicyEntitiesQuery{
Users: r.Form["user"],
Groups: r.Form["group"],
Policy: r.Form["policy"],
}
// Query IAM
res, err := globalIAMSys.QueryLDAPPolicyEntities(r.Context(), q)
if err != nil {
writeErrorResponseJSON(ctx, w, toAdminAPIErr(ctx, err), r.URL)
return
}
// Encode result and send response.
data, err := json.Marshal(res)
if err != nil {
writeErrorResponseJSON(ctx, w, toAdminAPIErr(ctx, err), r.URL)
return
}
password := cred.SecretKey
econfigData, err := madmin.EncryptData(password, data)
if err != nil {
writeErrorResponseJSON(ctx, w, toAdminAPIErr(ctx, err), r.URL)
return
}
writeSuccessResponseJSON(w, econfigData)
}
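The repeatable user/group/policy parameters the handler parses map directly onto madmin.PolicyEntitiesQuery. A usage sketch, assuming madmin-go wraps this route with a GetLDAPPolicyEntities helper (the helper name is an assumption; the query type is the one used above, and the DNs are placeholders):

package main

import (
	"context"
	"fmt"
	"log"

	"github.com/minio/madmin-go/v3"
	"github.com/minio/minio-go/v7/pkg/credentials"
)

func main() {
	adm, err := madmin.NewWithOptions("localhost:9000", &madmin.Options{
		Creds: credentials.NewStaticV4("minioadmin", "minioadmin", ""),
	})
	if err != nil {
		log.Fatal(err)
	}
	// Query mappings for one user and one group; leaving all fields empty
	// would return mappings for all policies.
	q := madmin.PolicyEntitiesQuery{
		Users:  []string{"uid=dillon,ou=people,dc=min,dc=io"},
		Groups: []string{"cn=projecta,ou=groups,dc=min,dc=io"},
	}
	res, err := adm.GetLDAPPolicyEntities(context.Background(), q) // assumed helper
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("%+v\n", res)
}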
// AttachDetachPolicyLDAP attaches or detaches policies from an LDAP entity
// (user or group).
//
// POST <admin-prefix>/idp/ldap/policy/{operation}
func (a adminAPIHandlers) AttachDetachPolicyLDAP(w http.ResponseWriter, r *http.Request) {
ctx := r.Context()
// Check authorization.
objectAPI, cred := validateAdminReq(ctx, w, r, iampolicy.UpdatePolicyAssociationAction)
if objectAPI == nil {
return
}
if r.ContentLength > maxEConfigJSONSize || r.ContentLength == -1 {
// More than maxEConfigJSONSize bytes were available, or the content length is unknown
writeErrorResponseJSON(ctx, w, errorCodes.ToAPIErr(ErrAdminConfigTooLarge), r.URL)
return
}
// Ensure the body content type is opaque so that the request body is not
// interpreted as form data.
contentType := r.Header.Get("Content-Type")
if contentType != "application/octet-stream" {
writeErrorResponseJSON(ctx, w, errorCodes.ToAPIErr(ErrBadRequest), r.URL)
return
}
// Validate operation
operation := mux.Vars(r)["operation"]
if operation != "attach" && operation != "detach" {
writeErrorResponseJSON(ctx, w, errorCodes.ToAPIErr(ErrAdminInvalidArgument), r.URL)
return
}
isAttach := operation == "attach"
// Validate API arguments in body.
password := cred.SecretKey
reqBytes, err := madmin.DecryptData(password, io.LimitReader(r.Body, r.ContentLength))
if err != nil {
logger.LogIf(ctx, err, logger.Application)
writeErrorResponseJSON(ctx, w, errorCodes.ToAPIErr(ErrAdminConfigBadJSON), r.URL)
return
}
var par madmin.PolicyAssociationReq
err = json.Unmarshal(reqBytes, &par)
if err != nil {
writeErrorResponseJSON(ctx, w, errorCodes.ToAPIErr(ErrInvalidRequest), r.URL)
return
}
if err := par.IsValid(); err != nil {
writeErrorResponseJSON(ctx, w, errorCodes.ToAPIErr(ErrAdminConfigBadJSON), r.URL)
return
}
// Call IAM subsystem
updatedAt, addedOrRemoved, _, err := globalIAMSys.PolicyDBUpdateLDAP(ctx, isAttach, par)
if err != nil {
writeErrorResponseJSON(ctx, w, toAdminAPIErr(ctx, err), r.URL)
return
}
respBody := madmin.PolicyAssociationResp{
UpdatedAt: updatedAt,
}
if isAttach {
respBody.PoliciesAttached = addedOrRemoved
} else {
respBody.PoliciesDetached = addedOrRemoved
}
data, err := json.Marshal(respBody)
if err != nil {
writeErrorResponseJSON(ctx, w, toAdminAPIErr(ctx, err), r.URL)
return
}
encryptedData, err := madmin.EncryptData(password, data)
if err != nil {
writeErrorResponseJSON(ctx, w, toAdminAPIErr(ctx, err), r.URL)
return
}
writeSuccessResponseJSON(w, encryptedData)
}
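The request and response types marshalled above suggest how a client drives the attach/detach operations. A sketch assuming madmin-go exposes an AttachPolicyLDAP helper for the attach operation (the helper name is an assumption; the field names match the types used by the handler, and the DN is a placeholder):

package main

import (
	"context"
	"fmt"
	"log"

	"github.com/minio/madmin-go/v3"
	"github.com/minio/minio-go/v7/pkg/credentials"
)

func main() {
	adm, err := madmin.NewWithOptions("localhost:9000", &madmin.Options{
		Creds: credentials.NewStaticV4("minioadmin", "minioadmin", ""),
	})
	if err != nil {
		log.Fatal(err)
	}
	// Attach two policies to an LDAP user DN.
	req := madmin.PolicyAssociationReq{
		Policies: []string{"readwrite", "diagnostics"},
		User:     "uid=dillon,ou=people,dc=min,dc=io",
	}
	resp, err := adm.AttachPolicyLDAP(context.Background(), req) // assumed helper
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("attached:", resp.PoliciesAttached, "at", resp.UpdatedAt)
}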

View File

@@ -22,21 +22,20 @@ import (
"errors"
"fmt"
"net/http"
"strings"
"github.com/gorilla/mux"
"github.com/minio/minio/internal/logger"
"github.com/minio/mux"
iampolicy "github.com/minio/pkg/iam/policy"
)
var (
errRebalanceDecommissionAlreadyRunning = errors.New("Rebalance cannot be started, decommission is aleady in progress")
errRebalanceDecommissionAlreadyRunning = errors.New("Rebalance cannot be started, decommission is already in progress")
errDecommissionRebalanceAlreadyRunning = errors.New("Decommission cannot be started, rebalance is already in progress")
)
func (a adminAPIHandlers) StartDecommission(w http.ResponseWriter, r *http.Request) {
ctx := newContext(r, w, "StartDecommission")
defer logger.AuditLog(ctx, w, r, mustGetClaimsFromToken(r))
ctx := r.Context()
objectAPI, _ := validateAdminReq(ctx, w, r, iampolicy.DecommissionAdminAction)
if objectAPI == nil {
@@ -49,28 +48,53 @@ func (a adminAPIHandlers) StartDecommission(w http.ResponseWriter, r *http.Reque
return
}
pools, ok := objectAPI.(*erasureServerPools)
if !ok {
z, ok := objectAPI.(*erasureServerPools)
if !ok || len(z.serverPools) == 1 {
writeErrorResponseJSON(ctx, w, errorCodes.ToAPIErr(ErrNotImplemented), r.URL)
return
}
if pools.IsRebalanceStarted() {
writeErrorResponseJSON(ctx, w, toAdminAPIErr(ctx, errDecommissionRebalanceAlreadyRunning), r.URL)
if z.IsDecommissionRunning() {
writeErrorResponseJSON(ctx, w, toAdminAPIErr(ctx, errDecommissionAlreadyRunning), r.URL)
return
}
if z.IsRebalanceStarted() {
writeErrorResponseJSON(ctx, w, errorCodes.ToAPIErr(ErrAdminRebalanceAlreadyStarted), r.URL)
return
}
vars := mux.Vars(r)
v := vars["pool"]
idx := globalEndpoints.GetPoolIdx(v)
if idx == -1 {
// We didn't find any matching pools, invalid input
writeErrorResponseJSON(ctx, w, toAdminAPIErr(ctx, errInvalidArgument), r.URL)
return
pools := strings.Split(v, ",")
poolIndices := make([]int, 0, len(pools))
for _, pool := range pools {
idx := globalEndpoints.GetPoolIdx(pool)
if idx == -1 {
// We didn't find any matching pools, invalid input
writeErrorResponseJSON(ctx, w, toAdminAPIErr(ctx, errInvalidArgument), r.URL)
return
}
var pool *erasureSets
for pidx := range z.serverPools {
if pidx == idx {
pool = z.serverPools[idx]
break
}
}
if pool == nil {
// We didn't find any matching pools, invalid input
writeErrorResponseJSON(ctx, w, toAdminAPIErr(ctx, errInvalidArgument), r.URL)
return
}
poolIndices = append(poolIndices, idx)
}
if ep := globalEndpoints[idx].Endpoints[0]; !ep.IsLocal {
if len(poolIndices) > 0 && !globalEndpoints[poolIndices[0]].Endpoints[0].IsLocal {
ep := globalEndpoints[poolIndices[0]].Endpoints[0]
for nodeIdx, proxyEp := range globalProxyEndpoints {
if proxyEp.Endpoint.Host == ep.Host {
if proxyRequestByNodeIndex(ctx, w, r, nodeIdx) {
@@ -80,16 +104,14 @@ func (a adminAPIHandlers) StartDecommission(w http.ResponseWriter, r *http.Reque
}
}
if err := pools.Decommission(r.Context(), idx); err != nil {
if err := z.Decommission(r.Context(), poolIndices...); err != nil {
writeErrorResponseJSON(ctx, w, toAdminAPIErr(ctx, err), r.URL)
return
}
}
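With this change the pool query parameter accepts a comma-separated list, so a single request can decommission several pools at once. Illustratively (the pool values are placeholders for the endpoint arguments each pool was started with):

POST /minio/admin/v3/pools/decommission?pool=http://minio{1...4}/data{1...4},http://minio{5...8}/data{1...4}

Each entry is resolved through globalEndpoints.GetPoolIdx, and the whole request is rejected as invalid input if any entry does not name a known pool.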
func (a adminAPIHandlers) CancelDecommission(w http.ResponseWriter, r *http.Request) {
ctx := newContext(r, w, "CancelDecommission")
defer logger.AuditLog(ctx, w, r, mustGetClaimsFromToken(r))
ctx := r.Context()
objectAPI, _ := validateAdminReq(ctx, w, r, iampolicy.DecommissionAdminAction)
if objectAPI == nil {
@@ -135,9 +157,7 @@ func (a adminAPIHandlers) CancelDecommission(w http.ResponseWriter, r *http.Requ
}
func (a adminAPIHandlers) StatusPool(w http.ResponseWriter, r *http.Request) {
ctx := newContext(r, w, "StatusPool")
defer logger.AuditLog(ctx, w, r, mustGetClaimsFromToken(r))
ctx := r.Context()
objectAPI, _ := validateAdminReq(ctx, w, r, iampolicy.ServerInfoAdminAction, iampolicy.DecommissionAdminAction)
if objectAPI == nil {
@@ -178,9 +198,7 @@ func (a adminAPIHandlers) StatusPool(w http.ResponseWriter, r *http.Request) {
}
func (a adminAPIHandlers) ListPools(w http.ResponseWriter, r *http.Request) {
ctx := newContext(r, w, "ListPools")
defer logger.AuditLog(ctx, w, r, mustGetClaimsFromToken(r))
ctx := r.Context()
objectAPI, _ := validateAdminReq(ctx, w, r, iampolicy.ServerInfoAdminAction, iampolicy.DecommissionAdminAction)
if objectAPI == nil {
@@ -213,8 +231,7 @@ func (a adminAPIHandlers) ListPools(w http.ResponseWriter, r *http.Request) {
}
func (a adminAPIHandlers) RebalanceStart(w http.ResponseWriter, r *http.Request) {
ctx := newContext(r, w, "RebalanceStart")
defer logger.AuditLog(ctx, w, r, mustGetClaimsFromToken(r))
ctx := r.Context()
objectAPI, _ := validateAdminReq(ctx, w, r, iampolicy.RebalanceAdminAction)
if objectAPI == nil {
@@ -285,8 +302,7 @@ func (a adminAPIHandlers) RebalanceStart(w http.ResponseWriter, r *http.Request)
}
func (a adminAPIHandlers) RebalanceStatus(w http.ResponseWriter, r *http.Request) {
ctx := newContext(r, w, "RebalanceStatus")
defer logger.AuditLog(ctx, w, r, mustGetClaimsFromToken(r))
ctx := r.Context()
objectAPI, _ := validateAdminReq(ctx, w, r, iampolicy.RebalanceAdminAction)
if objectAPI == nil {
@@ -314,7 +330,7 @@ func (a adminAPIHandlers) RebalanceStatus(w http.ResponseWriter, r *http.Request
rs, err := rebalanceStatus(ctx, pools)
if err != nil {
if errors.Is(err, errRebalanceNotStarted) {
if errors.Is(err, errRebalanceNotStarted) || errors.Is(err, errConfigNotFound) {
writeErrorResponseJSON(ctx, w, errorCodes.ToAPIErr(ErrAdminRebalanceNotStarted), r.URL)
return
}
@@ -326,8 +342,7 @@ func (a adminAPIHandlers) RebalanceStatus(w http.ResponseWriter, r *http.Request
}
func (a adminAPIHandlers) RebalanceStop(w http.ResponseWriter, r *http.Request) {
ctx := newContext(r, w, "RebalanceStop")
defer logger.AuditLog(ctx, w, r, mustGetClaimsFromToken(r))
ctx := r.Context()
objectAPI, _ := validateAdminReq(ctx, w, r, iampolicy.RebalanceAdminAction)
if objectAPI == nil {

View File

@@ -20,25 +20,26 @@ package cmd
import (
"bytes"
"context"
"encoding/gob"
"encoding/json"
"errors"
"io"
"net/http"
"strings"
"sync/atomic"
"time"
"github.com/gorilla/mux"
"github.com/minio/madmin-go"
"github.com/dustin/go-humanize"
"github.com/minio/madmin-go/v3"
"github.com/minio/minio/internal/logger"
"github.com/minio/mux"
"github.com/minio/pkg/bucket/policy"
iampolicy "github.com/minio/pkg/iam/policy"
)
// SiteReplicationAdd - PUT /minio/admin/v3/site-replication/add
func (a adminAPIHandlers) SiteReplicationAdd(w http.ResponseWriter, r *http.Request) {
ctx := newContext(r, w, "SiteReplicationAdd")
defer logger.AuditLog(ctx, w, r, mustGetClaimsFromToken(r))
ctx := r.Context()
objectAPI, cred := validateAdminReq(ctx, w, r, iampolicy.SiteReplicationAddAction)
if objectAPI == nil {
@@ -72,9 +73,7 @@ func (a adminAPIHandlers) SiteReplicationAdd(w http.ResponseWriter, r *http.Requ
// used internally to tell the current cluster to enable SR with
// the provided peer clusters and service account.
func (a adminAPIHandlers) SRPeerJoin(w http.ResponseWriter, r *http.Request) {
ctx := newContext(r, w, "SRPeerJoin")
defer logger.AuditLog(ctx, w, r, mustGetClaimsFromToken(r))
ctx := r.Context()
objectAPI, cred := validateAdminReq(ctx, w, r, iampolicy.SiteReplicationAddAction)
if objectAPI == nil {
@@ -96,9 +95,7 @@ func (a adminAPIHandlers) SRPeerJoin(w http.ResponseWriter, r *http.Request) {
// SRPeerBucketOps - PUT /minio/admin/v3/site-replication/bucket-ops?bucket=x&operation=y
func (a adminAPIHandlers) SRPeerBucketOps(w http.ResponseWriter, r *http.Request) {
ctx := newContext(r, w, "SRPeerBucketOps")
defer logger.AuditLog(ctx, w, r, mustGetClaimsFromToken(r))
ctx := r.Context()
objectAPI, _ := validateAdminReq(ctx, w, r, iampolicy.SiteReplicationOperationAction)
if objectAPI == nil {
@@ -114,37 +111,23 @@ func (a adminAPIHandlers) SRPeerBucketOps(w http.ResponseWriter, r *http.Request
default:
err = errSRInvalidRequest(errInvalidArgument)
case madmin.MakeWithVersioningBktOp:
_, isLockEnabled := r.Form["lockEnabled"]
_, isVersioningEnabled := r.Form["versioningEnabled"]
_, isForceCreate := r.Form["forceCreate"]
createdAtStr := strings.TrimSpace(r.Form.Get("createdAt"))
createdAt, cerr := time.Parse(time.RFC3339Nano, createdAtStr)
createdAt, cerr := time.Parse(time.RFC3339Nano, strings.TrimSpace(r.Form.Get("createdAt")))
if cerr != nil {
createdAt = timeSentinel
}
opts := MakeBucketOptions{
Location: r.Form.Get("location"),
LockEnabled: isLockEnabled,
VersioningEnabled: isVersioningEnabled,
ForceCreate: isForceCreate,
LockEnabled: r.Form.Get("lockEnabled") == "true",
VersioningEnabled: r.Form.Get("versioningEnabled") == "true",
ForceCreate: r.Form.Get("forceCreate") == "true",
CreatedAt: createdAt,
}
err = globalSiteReplicationSys.PeerBucketMakeWithVersioningHandler(ctx, bucket, opts)
case madmin.ConfigureReplBktOp:
err = globalSiteReplicationSys.PeerBucketConfigureReplHandler(ctx, bucket)
case madmin.DeleteBucketBktOp:
_, noRecreate := r.Form["noRecreate"]
case madmin.DeleteBucketBktOp, madmin.ForceDeleteBucketBktOp:
err = globalSiteReplicationSys.PeerBucketDeleteHandler(ctx, bucket, DeleteBucketOptions{
Force: false,
NoRecreate: noRecreate,
SRDeleteOp: getSRBucketDeleteOp(true),
})
case madmin.ForceDeleteBucketBktOp:
_, noRecreate := r.Form["noRecreate"]
err = globalSiteReplicationSys.PeerBucketDeleteHandler(ctx, bucket, DeleteBucketOptions{
Force: true,
NoRecreate: noRecreate,
Force: operation == madmin.ForceDeleteBucketBktOp,
SRDeleteOp: getSRBucketDeleteOp(true),
})
case madmin.PurgeDeletedBucketOp:
@@ -159,9 +142,7 @@ func (a adminAPIHandlers) SRPeerBucketOps(w http.ResponseWriter, r *http.Request
// SRPeerReplicateIAMItem - PUT /minio/admin/v3/site-replication/iam-item
func (a adminAPIHandlers) SRPeerReplicateIAMItem(w http.ResponseWriter, r *http.Request) {
ctx := newContext(r, w, "SRPeerReplicateIAMItem")
defer logger.AuditLog(ctx, w, r, mustGetClaimsFromToken(r))
ctx := r.Context()
objectAPI, _ := validateAdminReq(ctx, w, r, iampolicy.SiteReplicationOperationAction)
if objectAPI == nil {
@@ -213,9 +194,7 @@ func (a adminAPIHandlers) SRPeerReplicateIAMItem(w http.ResponseWriter, r *http.
// SRPeerReplicateBucketItem - PUT /minio/admin/v3/site-replication/bucket-meta
func (a adminAPIHandlers) SRPeerReplicateBucketItem(w http.ResponseWriter, r *http.Request) {
ctx := newContext(r, w, "SRPeerReplicateBucketItem")
defer logger.AuditLog(ctx, w, r, mustGetClaimsFromToken(r))
ctx := r.Context()
objectAPI, _ := validateAdminReq(ctx, w, r, iampolicy.SiteReplicationOperationAction)
if objectAPI == nil {
@@ -228,10 +207,15 @@ func (a adminAPIHandlers) SRPeerReplicateBucketItem(w http.ResponseWriter, r *ht
return
}
if item.Bucket == "" {
writeErrorResponseJSON(ctx, w, toAdminAPIErr(ctx, errSRInvalidRequest(errInvalidArgument)), r.URL)
return
}
var err error
switch item.Type {
default:
err = errSRInvalidRequest(errInvalidArgument)
err = globalSiteReplicationSys.PeerBucketMetadataUpdateHandler(ctx, item)
case madmin.SRBucketMetaTypePolicy:
if item.Policy == nil {
err = globalSiteReplicationSys.PeerBucketPolicyHandler(ctx, item.Bucket, nil, item.UpdatedAt)
@@ -257,7 +241,7 @@ func (a adminAPIHandlers) SRPeerReplicateBucketItem(w http.ResponseWriter, r *ht
return
}
if err = globalSiteReplicationSys.PeerBucketQuotaConfigHandler(ctx, item.Bucket, quotaConfig, item.UpdatedAt); err != nil {
writeErrorResponse(ctx, w, toAPIError(ctx, err), r.URL)
writeErrorResponseJSON(ctx, w, toAdminAPIErr(ctx, err), r.URL)
return
}
}
@@ -279,9 +263,7 @@ func (a adminAPIHandlers) SRPeerReplicateBucketItem(w http.ResponseWriter, r *ht
// SiteReplicationInfo - GET /minio/admin/v3/site-replication/info
func (a adminAPIHandlers) SiteReplicationInfo(w http.ResponseWriter, r *http.Request) {
ctx := newContext(r, w, "SiteReplicationInfo")
defer logger.AuditLog(ctx, w, r, mustGetClaimsFromToken(r))
ctx := r.Context()
objectAPI, _ := validateAdminReq(ctx, w, r, iampolicy.SiteReplicationInfoAction)
if objectAPI == nil {
@@ -301,9 +283,7 @@ func (a adminAPIHandlers) SiteReplicationInfo(w http.ResponseWriter, r *http.Req
}
func (a adminAPIHandlers) SRPeerGetIDPSettings(w http.ResponseWriter, r *http.Request) {
ctx := newContext(r, w, "SiteReplicationGetIDPSettings")
defer logger.AuditLog(ctx, w, r, mustGetClaimsFromToken(r))
ctx := r.Context()
objectAPI, _ := validateAdminReq(ctx, w, r, iampolicy.SiteReplicationAddAction)
if objectAPI == nil {
@@ -340,9 +320,7 @@ func parseJSONBody(ctx context.Context, body io.Reader, v interface{}, encryptio
// SiteReplicationStatus - GET /minio/admin/v3/site-replication/status
func (a adminAPIHandlers) SiteReplicationStatus(w http.ResponseWriter, r *http.Request) {
ctx := newContext(r, w, "SiteReplicationStatus")
defer logger.AuditLog(ctx, w, r, mustGetClaimsFromToken(r))
ctx := r.Context()
objectAPI, _ := validateAdminReq(ctx, w, r, iampolicy.SiteReplicationInfoAction)
if objectAPI == nil {
@@ -371,9 +349,7 @@ func (a adminAPIHandlers) SiteReplicationStatus(w http.ResponseWriter, r *http.R
// SiteReplicationMetaInfo - GET /minio/admin/v3/site-replication/metainfo
func (a adminAPIHandlers) SiteReplicationMetaInfo(w http.ResponseWriter, r *http.Request) {
ctx := newContext(r, w, "SiteReplicationMetaInfo")
defer logger.AuditLog(ctx, w, r, mustGetClaimsFromToken(r))
ctx := r.Context()
objectAPI, _ := validateAdminReq(ctx, w, r, iampolicy.SiteReplicationInfoAction)
if objectAPI == nil {
@@ -395,8 +371,7 @@ func (a adminAPIHandlers) SiteReplicationMetaInfo(w http.ResponseWriter, r *http
// SiteReplicationEdit - PUT /minio/admin/v3/site-replication/edit
func (a adminAPIHandlers) SiteReplicationEdit(w http.ResponseWriter, r *http.Request) {
ctx := newContext(r, w, "SiteReplicationEdit")
defer logger.AuditLog(ctx, w, r, mustGetClaimsFromToken(r))
ctx := r.Context()
objectAPI, cred := validateAdminReq(ctx, w, r, iampolicy.SiteReplicationAddAction)
if objectAPI == nil {
@@ -427,8 +402,7 @@ func (a adminAPIHandlers) SiteReplicationEdit(w http.ResponseWriter, r *http.Req
//
// used internally to tell the current cluster to update the endpoint for a peer
func (a adminAPIHandlers) SRPeerEdit(w http.ResponseWriter, r *http.Request) {
ctx := newContext(r, w, "SRPeerEdit")
defer logger.AuditLog(ctx, w, r, mustGetClaimsFromToken(r))
ctx := r.Context()
objectAPI, _ := validateAdminReq(ctx, w, r, iampolicy.SiteReplicationAddAction)
if objectAPI == nil {
@@ -457,14 +431,13 @@ func getSRStatusOptions(r *http.Request) (opts madmin.SRStatusOptions) {
opts.Entity = madmin.GetSREntityType(q.Get("entity"))
opts.EntityValue = q.Get("entityvalue")
opts.ShowDeleted = q.Get("showDeleted") == "true"
opts.Metrics = q.Get("metrics") == "true"
return
}
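Taken together, the options parsed here correspond to a status request of the following shape (entity names follow madmin.GetSREntityType; the bucket value is illustrative):

GET /minio/admin/v3/site-replication/status?entity=bucket&entityvalue=mybucket&showDeleted=true&metrics=true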
// SiteReplicationRemove - PUT /minio/admin/v3/site-replication/remove
func (a adminAPIHandlers) SiteReplicationRemove(w http.ResponseWriter, r *http.Request) {
ctx := newContext(r, w, "SiteReplicationRemove")
defer logger.AuditLog(ctx, w, r, mustGetClaimsFromToken(r))
ctx := r.Context()
objectAPI, _ := validateAdminReq(ctx, w, r, iampolicy.SiteReplicationRemoveAction)
if objectAPI == nil {
@@ -495,8 +468,7 @@ func (a adminAPIHandlers) SiteReplicationRemove(w http.ResponseWriter, r *http.R
//
// used internally to tell the current cluster to update the endpoint for a peer
func (a adminAPIHandlers) SRPeerRemove(w http.ResponseWriter, r *http.Request) {
ctx := newContext(r, w, "SRPeerRemove")
defer logger.AuditLog(ctx, w, r, mustGetClaimsFromToken(r))
ctx := r.Context()
objectAPI, _ := validateAdminReq(ctx, w, r, iampolicy.SiteReplicationRemoveAction)
if objectAPI == nil {
@@ -515,3 +487,85 @@ func (a adminAPIHandlers) SRPeerRemove(w http.ResponseWriter, r *http.Request) {
return
}
}
// SiteReplicationResyncOp - PUT /minio/admin/v3/site-replication/resync/op
func (a adminAPIHandlers) SiteReplicationResyncOp(w http.ResponseWriter, r *http.Request) {
ctx := r.Context()
objectAPI, _ := validateAdminReq(ctx, w, r, iampolicy.SiteReplicationResyncAction)
if objectAPI == nil {
return
}
var peerSite madmin.PeerInfo
if err := parseJSONBody(ctx, r.Body, &peerSite, ""); err != nil {
writeErrorResponseJSON(ctx, w, toAdminAPIErr(ctx, err), r.URL)
return
}
vars := mux.Vars(r)
op := madmin.SiteResyncOp(vars["operation"])
var (
status madmin.SRResyncOpStatus
err error
)
switch op {
case madmin.SiteResyncStart:
status, err = globalSiteReplicationSys.startResync(ctx, objectAPI, peerSite)
case madmin.SiteResyncCancel:
status, err = globalSiteReplicationSys.cancelResync(ctx, objectAPI, peerSite)
default:
err = errSRInvalidRequest(errInvalidArgument)
}
if err != nil {
writeErrorResponseJSON(ctx, w, toAdminAPIErr(ctx, err), r.URL)
return
}
body, err := json.Marshal(status)
if err != nil {
writeErrorResponseJSON(ctx, w, toAdminAPIErr(ctx, err), r.URL)
return
}
writeSuccessResponseJSON(w, body)
}
// SiteReplicationDevNull - everything goes to io.Discard
// [POST] /minio/admin/v3/site-replication/devnull
func (a adminAPIHandlers) SiteReplicationDevNull(w http.ResponseWriter, r *http.Request) {
ctx := r.Context()
globalSiteNetPerfRX.Connect()
defer globalSiteNetPerfRX.Disconnect()
connectTime := time.Now()
for {
n, err := io.CopyN(io.Discard, r.Body, 128*humanize.KiByte)
atomic.AddUint64(&globalSiteNetPerfRX.RX, uint64(n))
if err != nil && err != io.EOF && err != io.ErrUnexpectedEOF {
// A disconnection before globalNetPerfMinDuration (with a margin of error of 1 sec)
// means the network is not stable. Logging here helps in debugging network issues.
if time.Since(connectTime) < (globalNetPerfMinDuration - time.Second) {
logger.LogIf(ctx, err)
}
}
if err != nil {
if errors.Is(err, io.EOF) {
w.WriteHeader(http.StatusNoContent)
} else {
w.WriteHeader(http.StatusBadRequest)
}
break
}
}
}
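The handler above only counts and discards whatever the peer streams at it. A client-side sketch of the measurement idea, streaming arbitrary bytes for a fixed window and computing throughput from what was sent (the path matches the route above, but request signing, which the real admin API requires, is omitted):

package main

import (
	"fmt"
	"io"
	"net/http"
	"time"
)

// timedReader hands out bytes until the deadline passes; the content of the
// buffer is irrelevant for a throughput test.
type timedReader struct {
	deadline time.Time
	sent     int64
}

func (t *timedReader) Read(p []byte) (int, error) {
	if time.Now().After(t.deadline) {
		return 0, io.EOF
	}
	t.sent += int64(len(p))
	return len(p), nil
}

func main() {
	const window = 10 * time.Second
	r := &timedReader{deadline: time.Now().Add(window)}
	resp, err := http.Post("http://localhost:9000/minio/admin/v3/site-replication/devnull",
		"application/octet-stream", r)
	if err != nil {
		fmt.Println("request failed:", err)
		return
	}
	resp.Body.Close()
	fmt.Printf("sent %d bytes in %v (%.1f MiB/s)\n",
		r.sent, window, float64(r.sent)/window.Seconds()/(1024*1024))
}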
// SiteReplicationNetPerf - runs a site replication network performance test and returns the gob-encoded result
// [POST] /minio/admin/v3/site-replication/netperf
func (a adminAPIHandlers) SiteReplicationNetPerf(w http.ResponseWriter, r *http.Request) {
durationStr := r.Form.Get(peerRESTDuration)
duration, _ := time.ParseDuration(durationStr)
if duration < globalNetPerfMinDuration {
duration = globalNetPerfMinDuration
}
result := siteNetperf(r.Context(), duration)
logger.LogIf(r.Context(), gob.NewEncoder(w).Encode(result))
}

View File

@@ -27,12 +27,12 @@ import (
"context"
"fmt"
"runtime"
"sync"
"testing"
"time"
"github.com/minio/madmin-go"
"github.com/minio/madmin-go/v3"
minio "github.com/minio/minio-go/v7"
"github.com/minio/pkg/sync/errgroup"
)
func runAllIAMConcurrencyTests(suite *TestSuiteIAM, c *check) {
@@ -129,18 +129,21 @@ func (s *TestSuiteIAM) TestDeleteUserRace(c *check) {
secretKeys[i] = secretKey
}
wg := sync.WaitGroup{}
g := errgroup.Group{}
for i := 0; i < userCount; i++ {
wg.Add(1)
go func(i int) {
defer wg.Done()
uClient := s.getUserClient(c, accessKeys[i], secretKeys[i], "")
err := s.adm.RemoveUser(ctx, accessKeys[i])
if err != nil {
c.Fatalf("unable to remove user: %v", err)
g.Go(func(i int) func() error {
return func() error {
uClient := s.getUserClient(c, accessKeys[i], secretKeys[i], "")
err := s.adm.RemoveUser(ctx, accessKeys[i])
if err != nil {
return err
}
c.mustNotListObjects(ctx, uClient, bucket)
return nil
}
c.mustNotListObjects(ctx, uClient, bucket)
}(i)
}(i), i)
}
if errs := g.Wait(); len(errs) > 0 {
c.Fatalf("unable to remove users: %v", errs)
}
wg.Wait()
}
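The rewritten test fans out through minio/pkg's errgroup, whose Wait returns a slice of per-task errors. The same capture-the-loop-variable pattern with the more common golang.org/x/sync/errgroup, which reports only the first error, looks like this:

package main

import (
	"fmt"

	"golang.org/x/sync/errgroup"
)

func main() {
	var g errgroup.Group
	for i := 0; i < 5; i++ {
		i := i // capture the loop variable (required before Go 1.22)
		g.Go(func() error {
			fmt.Println("worker", i)
			return nil
		})
	}
	if err := g.Wait(); err != nil {
		fmt.Println("first error:", err)
	}
}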

File diff suppressed because it is too large

View File

@@ -27,20 +27,19 @@ import (
"io"
"net/http"
"net/url"
"os"
"runtime"
"strings"
"testing"
"time"
"github.com/minio/madmin-go"
"github.com/minio/madmin-go/v3"
"github.com/minio/minio-go/v7"
"github.com/minio/minio-go/v7/pkg/credentials"
cr "github.com/minio/minio-go/v7/pkg/credentials"
"github.com/minio/minio-go/v7/pkg/s3utils"
"github.com/minio/minio-go/v7/pkg/set"
"github.com/minio/minio-go/v7/pkg/signer"
"github.com/minio/minio/internal/auth"
"github.com/minio/pkg/env"
)
const (
@@ -122,7 +121,7 @@ var iamTestSuites = func() []*TestSuiteIAM {
}()
const (
EnvTestEtcdBackend = "ETCD_SERVER"
EnvTestEtcdBackend = "_MINIO_ETCD_TEST_SERVER"
)
func (s *TestSuiteIAM) setUpEtcd(c *check, etcdServer string) {
@@ -145,7 +144,7 @@ func (s *TestSuiteIAM) setUpEtcd(c *check, etcdServer string) {
func (s *TestSuiteIAM) SetUpSuite(c *check) {
// If etcd backend is specified and etcd server is not present, the test
// is skipped.
etcdServer := os.Getenv(EnvTestEtcdBackend)
etcdServer := env.Get(EnvTestEtcdBackend, "")
if s.withEtcdBackend && etcdServer == "" {
c.Skip("Skipping etcd backend IAM test as no etcd server is configured.")
}
@@ -825,7 +824,7 @@ func (s *TestSuiteIAM) TestGroupAddRemove(c *check) {
if set.CreateStringSet(groups...).Contains(group) {
c.Fatalf("created group still present!")
}
groupInfo, err = s.adm.GetGroupDescription(ctx, group)
_, err = s.adm.GetGroupDescription(ctx, group)
if err == nil {
c.Fatalf("group appears to exist")
}
@@ -877,7 +876,7 @@ func (s *TestSuiteIAM) TestServiceAccountOpsByUser(c *check) {
// Create an madmin client with user creds
userAdmClient, err := madmin.NewWithOptions(s.endpoint, &madmin.Options{
Creds: cr.NewStaticV4(accessKey, secretKey, ""),
Creds: credentials.NewStaticV4(accessKey, secretKey, ""),
Secure: s.secure,
})
if err != nil {
@@ -984,9 +983,9 @@ func (s *TestSuiteIAM) SetUpAccMgmtPlugin(c *check) {
ctx, cancel := context.WithTimeout(context.Background(), testDefaultTimeout)
defer cancel()
pluginEndpoint := os.Getenv("POLICY_PLUGIN_ENDPOINT")
pluginEndpoint := env.Get("_MINIO_POLICY_PLUGIN_ENDPOINT", "")
if pluginEndpoint == "" {
c.Skip("POLICY_PLUGIN_ENDPOINT not given - skipping.")
c.Skip("_MINIO_POLICY_PLUGIN_ENDPOINT not given - skipping.")
}
configCmds := []string{
@@ -1072,7 +1071,7 @@ func (s *TestSuiteIAM) TestAccMgmtPlugin(c *check) {
// Create an madmin client with user creds
userAdmClient, err := madmin.NewWithOptions(s.endpoint, &madmin.Options{
Creds: cr.NewStaticV4(accessKey, secretKey, ""),
Creds: credentials.NewStaticV4(accessKey, secretKey, ""),
Secure: s.secure,
})
if err != nil {
@@ -1136,7 +1135,7 @@ func (s *TestSuiteIAM) TestAccMgmtPlugin(c *check) {
c.assertSvcAccDeletion(ctx, s, userAdmClient, accessKey, bucket)
// 6. Check that service account **can** be created for some other user.
// This is possible because of the policy enforced in the plugin.
// This is possible because of the policy enforced in the plugin.
c.mustCreateSvcAccount(ctx, globalActiveCred.AccessKey, userAdmClient)
}
@@ -1261,6 +1260,52 @@ func (c *check) mustListBuckets(ctx context.Context, client *minio.Client) {
}
}
func (c *check) mustNotDelete(ctx context.Context, client *minio.Client, bucket string, vid string) {
c.Helper()
err := client.RemoveObject(ctx, bucket, "some-object", minio.RemoveObjectOptions{VersionID: vid})
if err == nil {
c.Fatalf("user must not be allowed to delete")
}
err = client.RemoveObject(ctx, bucket, "some-object", minio.RemoveObjectOptions{})
if err != nil {
c.Fatal("user must be able to create delete marker")
}
}
func (c *check) mustDownload(ctx context.Context, client *minio.Client, bucket string) {
c.Helper()
rd, err := client.GetObject(ctx, bucket, "some-object", minio.GetObjectOptions{})
if err != nil {
c.Fatalf("download did not succeed got %#v", err)
}
if _, err = io.Copy(io.Discard, rd); err != nil {
c.Fatalf("download did not succeed got %#v", err)
}
}
func (c *check) mustUploadReturnVersions(ctx context.Context, client *minio.Client, bucket string) []string {
c.Helper()
versions := []string{}
for i := 0; i < 5; i++ {
ui, err := client.PutObject(ctx, bucket, "some-object", bytes.NewBuffer([]byte("stuff")), 5, minio.PutObjectOptions{})
if err != nil {
c.Fatalf("upload did not succeed got %#v", err)
}
versions = append(versions, ui.VersionID)
}
return versions
}
func (c *check) mustUpload(ctx context.Context, client *minio.Client, bucket string) {
c.Helper()
_, err := client.PutObject(ctx, bucket, "some-object", bytes.NewBuffer([]byte("stuff")), 5, minio.PutObjectOptions{})
if err != nil {
c.Fatalf("upload did not succeed got %#v", err)
}
}
func (c *check) mustNotUpload(ctx context.Context, client *minio.Client, bucket string) {
c.Helper()
_, err := client.PutObject(ctx, bucket, "some-object", bytes.NewBuffer([]byte("stuff")), 5, minio.PutObjectOptions{})
@@ -1283,7 +1328,11 @@ func (c *check) assertSvcAccAppearsInListing(ctx context.Context, madmClient *ma
if err != nil {
c.Fatalf("unable to list svc accounts: %v", err)
}
if !set.CreateStringSet(listResp.Accounts...).Contains(svcAK) {
var accessKeys []string
for _, item := range listResp.Accounts {
accessKeys = append(accessKeys, item.AccessKey)
}
if !set.CreateStringSet(accessKeys...).Contains(svcAK) {
c.Fatalf("service account did not appear in listing!")
}
}

File diff suppressed because it is too large

View File

@@ -21,17 +21,19 @@ import (
"bytes"
"context"
"encoding/json"
"fmt"
"io"
"net/http"
"net/http/httptest"
"net/url"
"sort"
"sync"
"testing"
"time"
"github.com/gorilla/mux"
"github.com/minio/madmin-go"
"github.com/minio/madmin-go/v3"
"github.com/minio/minio/internal/auth"
"github.com/minio/mux"
)
// adminErasureTestBed - encapsulates subsystems that need to be setup for
@@ -71,7 +73,7 @@ func prepareAdminErasureTestBed(ctx context.Context) (*adminErasureTestBed, erro
// Initialize boot time
globalBootTime = UTCNow()
globalEndpoints = mustGetPoolEndpoints(erasureDirs...)
globalEndpoints = mustGetPoolEndpoints(0, erasureDirs...)
initAllSubsystems(ctx)
@@ -106,7 +108,7 @@ func initTestErasureObjLayer(ctx context.Context) (ObjectLayer, []string, error)
if err != nil {
return nil, nil, err
}
endpoints := mustGetPoolEndpoints(erasureDirs...)
endpoints := mustGetPoolEndpoints(0, erasureDirs...)
globalPolicySys = NewPolicySys()
objLayer, err := newErasureServerPools(ctx, endpoints)
if err != nil {
@@ -380,3 +382,146 @@ func TestExtractHealInitParams(t *testing.T) {
}
}
}
type byResourceUID struct{ madmin.LockEntries }
func (b byResourceUID) Less(i, j int) bool {
toUniqLock := func(entry madmin.LockEntry) string {
return fmt.Sprintf("%s/%s", entry.Resource, entry.ID)
}
return toUniqLock(b.LockEntries[i]) < toUniqLock(b.LockEntries[j])
}
func TestTopLockEntries(t *testing.T) {
locksHeld := make(map[string][]lockRequesterInfo)
var owners []string
for i := 0; i < 4; i++ {
owners = append(owners, fmt.Sprintf("node-%d", i))
}
// Simulate DeleteObjects of 10 objects in a single request, i.e. the same lock
// request UID, but 10 different resource names associated with it.
var lris []lockRequesterInfo
uuid := mustGetUUID()
for i := 0; i < 10; i++ {
resource := fmt.Sprintf("bucket/delete-object-%d", i)
lri := lockRequesterInfo{
Name: resource,
Writer: true,
UID: uuid,
Owner: owners[i%len(owners)],
Group: true,
Quorum: 3,
}
lris = append(lris, lri)
locksHeld[resource] = []lockRequesterInfo{lri}
}
// Add a few concurrent read locks to the mix
for i := 0; i < 50; i++ {
resource := fmt.Sprintf("bucket/get-object-%d", i)
lri := lockRequesterInfo{
Name: resource,
UID: mustGetUUID(),
Owner: owners[i%len(owners)],
Quorum: 2,
}
lris = append(lris, lri)
locksHeld[resource] = append(locksHeld[resource], lri)
// concurrent read lock, same resource different uid
lri.UID = mustGetUUID()
lris = append(lris, lri)
locksHeld[resource] = append(locksHeld[resource], lri)
}
var peerLocks []*PeerLocks
for _, owner := range owners {
peerLocks = append(peerLocks, &PeerLocks{
Addr: owner,
Locks: locksHeld,
})
}
var exp madmin.LockEntries
for _, lri := range lris {
lockType := func(lri lockRequesterInfo) string {
if lri.Writer {
return "WRITE"
}
return "READ"
}
exp = append(exp, madmin.LockEntry{
Resource: lri.Name,
Type: lockType(lri),
ServerList: owners,
Owner: lri.Owner,
ID: lri.UID,
Quorum: lri.Quorum,
})
}
testCases := []struct {
peerLocks []*PeerLocks
expected madmin.LockEntries
}{
{
peerLocks: peerLocks,
expected: exp,
},
}
// printEntries := func(entries madmin.LockEntries) {
// for i, entry := range entries {
// fmt.Printf("%d: %s %s %s %s %v %d\n", i, entry.Resource, entry.ID, entry.Owner, entry.Type, entry.ServerList, entry.Elapsed)
// }
// }
check := func(exp, got madmin.LockEntries) (int, bool) {
if len(exp) != len(got) {
return 0, false
}
sort.Slice(exp, byResourceUID{exp}.Less)
sort.Slice(got, byResourceUID{got}.Less)
// printEntries(exp)
// printEntries(got)
for i, e := range exp {
if !e.Timestamp.Equal(got[i].Timestamp) {
return i, false
}
// Skip checking elapsed since it's time sensitive.
// if e.Elapsed != got[i].Elapsed {
// return false
// }
if e.Resource != got[i].Resource {
return i, false
}
if e.Type != got[i].Type {
return i, false
}
if e.Source != got[i].Source {
return i, false
}
if e.Owner != got[i].Owner {
return i, false
}
if e.ID != got[i].ID {
return i, false
}
if len(e.ServerList) != len(got[i].ServerList) {
return i, false
}
for j := range e.ServerList {
if e.ServerList[j] != got[i].ServerList[j] {
return i, false
}
}
}
return 0, true
}
for i, tc := range testCases {
got := topLockEntries(tc.peerLocks, false)
if idx, ok := check(tc.expected, got); !ok {
t.Fatalf("%d: mismatch at %d \n expected %#v but got %#v", i, idx, tc.expected[idx], got[idx])
}
}
}

View File

@@ -27,7 +27,7 @@ import (
"sync"
"time"
"github.com/minio/madmin-go"
"github.com/minio/madmin-go/v3"
"github.com/minio/minio/internal/logger"
)
@@ -176,11 +176,12 @@ func (ahs *allHealState) getHealLocalDiskEndpoints() Endpoints {
return endpoints
}
func (ahs *allHealState) markDiskForHealing(ep Endpoint) {
// Set the in-memory state of the disk as currently healing or not
func (ahs *allHealState) setDiskHealingStatus(ep Endpoint, healing bool) {
ahs.Lock()
defer ahs.Unlock()
ahs.healLocalDisks[ep] = true
ahs.healLocalDisks[ep] = healing
}
func (ahs *allHealState) pushHealLocalDisks(healLocalDisks ...Endpoint) {
@@ -222,8 +223,8 @@ func (ahs *allHealState) periodicHealSeqsClean(ctx context.Context) {
// getHealSequenceByToken - Retrieve a heal sequence by token. The second
// argument returns if a heal sequence actually exists.
func (ahs *allHealState) getHealSequenceByToken(token string) (h *healSequence, exists bool) {
ahs.Lock()
defer ahs.Unlock()
ahs.RLock()
defer ahs.RUnlock()
for _, healSeq := range ahs.healSeqMap {
if healSeq.clientToken == token {
return healSeq, true
@@ -235,8 +236,8 @@ func (ahs *allHealState) getHealSequenceByToken(token string) (h *healSequence,
// getHealSequence - Retrieve a heal sequence by path. The second
// argument returns if a heal sequence actually exists.
func (ahs *allHealState) getHealSequence(path string) (h *healSequence, exists bool) {
ahs.Lock()
defer ahs.Unlock()
ahs.RLock()
defer ahs.RUnlock()
h, exists = ahs.healSeqMap[path]
return h, exists
}
@@ -252,7 +253,7 @@ func (ahs *allHealState) stopHealSequence(path string) ([]byte, APIError) {
} else {
clientToken := he.clientToken
if globalIsDistErasure {
clientToken = fmt.Sprintf("%s@%d", he.clientToken, GetProxyEndpointLocalIndex(globalProxyEndpoints))
clientToken = fmt.Sprintf("%s:%d", he.clientToken, GetProxyEndpointLocalIndex(globalProxyEndpoints))
}
hsp = madmin.HealStopSuccess{
@@ -326,7 +327,7 @@ func (ahs *allHealState) LaunchNewHealSequence(h *healSequence, objAPI ObjectLay
clientToken := h.clientToken
if globalIsDistErasure {
clientToken = fmt.Sprintf("%s@%d", h.clientToken, GetProxyEndpointLocalIndex(globalProxyEndpoints))
clientToken = fmt.Sprintf("%s:%d", h.clientToken, GetProxyEndpointLocalIndex(globalProxyEndpoints))
}
b, err := json.Marshal(madmin.HealStartSuccess{
@@ -396,6 +397,7 @@ type healSource struct {
bucket string
object string
versionID string
noWait bool // a non-blocking call; if the task queue is full, return right away.
opts *madmin.HealOpts // optional heal option overrides default setting
}
@@ -405,9 +407,6 @@ type healSequence struct {
// bucket, and object on which heal seq. was initiated
bucket, object string
// A channel of entities with heal result
respCh chan healResult
// Report healing progress
reportProgress bool
@@ -470,7 +469,6 @@ func newHealSequence(ctx context.Context, bucket, objPrefix, clientAddr string,
clientToken := mustGetUUID()
return &healSequence{
respCh: make(chan healResult),
bucket: bucket,
object: objPrefix,
reportProgress: true,
@@ -699,7 +697,6 @@ func (h *healSequence) queueHealTask(source healSource, healType madmin.HealItem
object: source.object,
versionID: source.versionID,
opts: h.settings,
respCh: h.respCh,
}
if source.opts != nil {
task.opts = *source.opts
@@ -712,6 +709,24 @@ func (h *healSequence) queueHealTask(source healSource, healType madmin.HealItem
h.lastHealActivity = UTCNow()
h.mutex.Unlock()
if source.noWait {
select {
case globalBackgroundHealRoutine.tasks <- task:
if serverDebugLog {
logger.Info("Task in the queue: %#v", task)
}
default:
// task queue is full and no workers are free; move on and heal later.
return nil
}
// Don't wait for result
return nil
}
// respCh must be set to wait for result.
// We make it size 1, so a result can always be written
// even if we aren't listening.
task.respCh = make(chan healResult, 1)
select {
case globalBackgroundHealRoutine.tasks <- task:
if serverDebugLog {
@@ -721,8 +736,9 @@ func (h *healSequence) queueHealTask(source healSource, healType madmin.HealItem
return nil
}
// task queued, now wait for the response.
select {
case res := <-h.respCh:
case res := <-task.respCh:
if !h.reportProgress {
if errors.Is(res.err, errSkipFile) { // this is only sent usually by nopHeal
return nil
@@ -768,6 +784,11 @@ func (h *healSequence) healDiskMeta(objAPI ObjectLayer) error {
}
func (h *healSequence) healItems(objAPI ObjectLayer, bucketsOnly bool) error {
if h.clientToken == bgHealingUUID {
// For background heal do nothing.
return nil
}
if err := h.healDiskMeta(objAPI); err != nil {
return err
}
@@ -853,13 +874,8 @@ func (h *healSequence) healBucket(objAPI ObjectLayer, bucket string, bucketsOnly
if !h.settings.Recursive {
if h.object != "" {
// Check if an object named as the objPrefix exists,
// and if so heal it.
oi, err := objAPI.GetObjectInfo(h.ctx, bucket, h.object, ObjectOptions{})
if err == nil {
if err = h.healObject(bucket, h.object, oi.VersionID); err != nil {
return err
}
if err := h.healObject(bucket, h.object, ""); err != nil {
return err
}
}
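The queueHealTask change above is a compact channel pattern: a non-blocking send when the caller opts out of waiting, and a size-1 buffered response channel so the worker can always deliver a result even if the requester stopped listening. Distilled into a standalone sketch (names are illustrative, not MinIO's):

package main

import "fmt"

type task struct {
	respCh chan error
}

func enqueue(tasks chan task, noWait bool) error {
	if noWait {
		select {
		case tasks <- task{}:
		default: // queue full: drop silently and retry later
		}
		return nil
	}
	t := task{respCh: make(chan error, 1)} // buffered: the worker never blocks on the reply
	tasks <- t
	return <-t.respCh
}

func main() {
	tasks := make(chan task, 1)
	go func() {
		for t := range tasks {
			if t.respCh != nil {
				t.respCh <- nil // report success
			}
		}
	}()
	fmt.Println(enqueue(tasks, false)) // waits for the worker's reply
	fmt.Println(enqueue(tasks, true))  // returns immediately
}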

View File

@@ -19,20 +19,127 @@ package cmd
import (
"net/http"
"reflect"
"runtime"
"strings"
"github.com/gorilla/mux"
"github.com/klauspost/compress/gzhttp"
"github.com/klauspost/compress/gzip"
"github.com/minio/madmin-go"
"github.com/minio/madmin-go/v3"
"github.com/minio/minio/internal/logger"
"github.com/minio/mux"
)
const (
adminPathPrefix = minioReservedBucketPath + "/admin"
adminAPIVersion = madmin.AdminAPIVersion
adminAPIVersionPrefix = SlashSeparator + adminAPIVersion
adminPathPrefix = minioReservedBucketPath + "/admin"
adminAPIVersion = madmin.AdminAPIVersion
adminAPIVersionPrefix = SlashSeparator + adminAPIVersion
adminAPISiteReplicationDevNull = "/site-replication/devnull"
adminAPISiteReplicationNetPerf = "/site-replication/netperf"
adminAPIClientDevNull = "/speedtest/client/devnull"
adminAPIClientDevExtraTime = "/speedtest/client/devnull/extratime"
)
var gzipHandler = func() func(http.Handler) http.HandlerFunc {
gz, err := gzhttp.NewWrapper(gzhttp.MinSize(1000), gzhttp.CompressionLevel(gzip.BestSpeed))
if err != nil {
// Static params, so this is very unlikely.
logger.Fatal(err, "Unable to initialize server")
}
return gz
}()
// Set of handler options as bit flags
type hFlag uint8
const (
// this flag disables gzip compression of responses
noGZFlag = 1 << iota
// this flag enables tracing body and headers instead of just headers
traceAllFlag
// pass this flag to skip checking if object layer is available
noObjLayerFlag
)
// Has checks if the given flag is enabled in `h`.
func (h hFlag) Has(flag hFlag) bool {
// Use bitwise-AND and check if the result is non-zero.
return h&flag != 0
}
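The flags compose with bitwise OR at registration time and are tested with Has. A standalone illustration mirroring the constants above:

package main

import "fmt"

type hFlag uint8

const (
	noGZFlag hFlag = 1 << iota
	traceAllFlag
	noObjLayerFlag
)

// Has checks if the given flag is enabled in `h`.
func (h hFlag) Has(flag hFlag) bool { return h&flag != 0 }

func main() {
	flags := noGZFlag | traceAllFlag
	fmt.Println(flags.Has(traceAllFlag))   // true
	fmt.Println(flags.Has(noObjLayerFlag)) // false
}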
func getHandlerName(f http.HandlerFunc) string {
name := runtime.FuncForPC(reflect.ValueOf(f).Pointer()).Name()
name = strings.TrimPrefix(name, "github.com/minio/minio/cmd.adminAPIHandlers.")
name = strings.TrimSuffix(name, "Handler-fm")
name = strings.TrimSuffix(name, "-fm")
return name
}
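For a method value, the runtime function name carries the full package path, the receiver type, and a "-fm" suffix, which the trimming above strips. A standalone illustration (the package and type names here are placeholders):

package main

import (
	"fmt"
	"reflect"
	"runtime"
)

type handlers struct{}

func (handlers) ServerInfoHandler() {}

func main() {
	f := handlers{}.ServerInfoHandler
	name := runtime.FuncForPC(reflect.ValueOf(f).Pointer()).Name()
	fmt.Println(name) // prints something like: main.handlers.ServerInfoHandler-fm
}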
// adminMiddleware performs some common admin handler functionality for all
// handlers:
//
// - updates request context with `logger.ReqInfo` and api name based on the
// name of the function handler passed (this handler must be a method of
// `adminAPIHandlers`).
//
// - sets up call to send AuditLog
//
// Note that, while this is a middleware function (i.e. it takes a handler
// function and returns one), the flags vary per handler, so it is applied at
// each handler function registration in the router rather than router-wide.
//
// When no flags are passed, gzip compression, http tracing of headers and
// checking of object layer availability are all enabled. Use flags to modify
// this behavior.
func adminMiddleware(f http.HandlerFunc, flags ...hFlag) http.HandlerFunc {
// Collect all flags with bitwise-OR and assign operator
var handlerFlags hFlag
for _, flag := range flags {
handlerFlags |= flag
}
// Get name of the handler using reflection. NOTE: The passed in handler
// function must be a method of `adminAPIHandlers` for this extraction to
// work as expected.
handlerName := getHandlerName(f)
var handler http.HandlerFunc = func(w http.ResponseWriter, r *http.Request) {
// Update request context with `logger.ReqInfo`.
r = r.WithContext(newContext(r, w, handlerName))
defer logger.AuditLog(r.Context(), w, r, mustGetClaimsFromToken(r))
// Check if object layer is available, if not return error early.
if !handlerFlags.Has(noObjLayerFlag) {
objectAPI := newObjectLayerFn()
if objectAPI == nil || globalNotificationSys == nil {
writeErrorResponseJSON(r.Context(), w, errorCodes.ToAPIErr(ErrServerNotInitialized), r.URL)
return
}
}
// Apply http tracing "middleware" based on presence of flag.
var f2 http.HandlerFunc
if handlerFlags.Has(traceAllFlag) {
f2 = httpTraceAll(f)
} else {
f2 = httpTraceHdrs(f)
}
// call the final handler
f2(w, r)
}
// Enable compression of responses unless the noGZFlag is set.
if !handlerFlags.Has(noGZFlag) {
handler = gzipHandler(handler)
}
return handler
}
// adminAPIHandlers provides HTTP handlers for MinIO admin API.
type adminAPIHandlers struct{}
@@ -46,247 +153,264 @@ func registerAdminRouter(router *mux.Router, enableConfigOps bool) {
adminAPIVersionPrefix,
}
gz, err := gzhttp.NewWrapper(gzhttp.MinSize(1000), gzhttp.CompressionLevel(gzip.BestSpeed))
if err != nil {
// Static params, so this is very unlikely.
logger.Fatal(err, "Unable to initialize server")
}
for _, adminVersion := range adminVersions {
// Restart and stop MinIO service.
adminRouter.Methods(http.MethodPost).Path(adminVersion+"/service").HandlerFunc(gz(httpTraceAll(adminAPI.ServiceHandler))).Queries("action", "{action:.*}")
adminRouter.Methods(http.MethodPost).Path(adminVersion+"/service").HandlerFunc(adminMiddleware(adminAPI.ServiceHandler, traceAllFlag)).Queries("action", "{action:.*}")
// Update MinIO servers.
adminRouter.Methods(http.MethodPost).Path(adminVersion+"/update").HandlerFunc(gz(httpTraceAll(adminAPI.ServerUpdateHandler))).Queries("updateURL", "{updateURL:.*}")
adminRouter.Methods(http.MethodPost).Path(adminVersion+"/update").HandlerFunc(adminMiddleware(adminAPI.ServerUpdateHandler, traceAllFlag)).Queries("updateURL", "{updateURL:.*}")
// Info operations
adminRouter.Methods(http.MethodGet).Path(adminVersion + "/info").HandlerFunc(gz(httpTraceAll(adminAPI.ServerInfoHandler)))
adminRouter.Methods(http.MethodGet).Path(adminVersion+"/inspect-data").HandlerFunc(httpTraceHdrs(adminAPI.InspectDataHandler)).Queries("volume", "{volume:.*}", "file", "{file:.*}")
adminRouter.Methods(http.MethodGet).Path(adminVersion + "/info").HandlerFunc(adminMiddleware(adminAPI.ServerInfoHandler, traceAllFlag, noObjLayerFlag))
adminRouter.Methods(http.MethodGet, http.MethodPost).Path(adminVersion + "/inspect-data").HandlerFunc(adminMiddleware(adminAPI.InspectDataHandler, noGZFlag, traceAllFlag))
// StorageInfo operations
adminRouter.Methods(http.MethodGet).Path(adminVersion + "/storageinfo").HandlerFunc(gz(httpTraceAll(adminAPI.StorageInfoHandler)))
adminRouter.Methods(http.MethodGet).Path(adminVersion + "/storageinfo").HandlerFunc(adminMiddleware(adminAPI.StorageInfoHandler, traceAllFlag))
// DataUsageInfo operations
adminRouter.Methods(http.MethodGet).Path(adminVersion + "/datausageinfo").HandlerFunc(gz(httpTraceAll(adminAPI.DataUsageInfoHandler)))
adminRouter.Methods(http.MethodGet).Path(adminVersion + "/datausageinfo").HandlerFunc(adminMiddleware(adminAPI.DataUsageInfoHandler, traceAllFlag))
// Metrics operation
adminRouter.Methods(http.MethodGet).Path(adminVersion + "/metrics").HandlerFunc(gz(httpTraceAll(adminAPI.MetricsHandler)))
adminRouter.Methods(http.MethodGet).Path(adminVersion + "/metrics").HandlerFunc(adminMiddleware(adminAPI.MetricsHandler, traceAllFlag))
if globalIsDistErasure || globalIsErasure {
// Heal operations
// Heal processing endpoint.
adminRouter.Methods(http.MethodPost).Path(adminVersion + "/heal/").HandlerFunc(gz(httpTraceAll(adminAPI.HealHandler)))
adminRouter.Methods(http.MethodPost).Path(adminVersion + "/heal/{bucket}").HandlerFunc(gz(httpTraceAll(adminAPI.HealHandler)))
adminRouter.Methods(http.MethodPost).Path(adminVersion + "/heal/{bucket}/{prefix:.*}").HandlerFunc(gz(httpTraceAll(adminAPI.HealHandler)))
adminRouter.Methods(http.MethodPost).Path(adminVersion + "/background-heal/status").HandlerFunc(gz(httpTraceAll(adminAPI.BackgroundHealStatusHandler)))
adminRouter.Methods(http.MethodPost).Path(adminVersion + "/heal/").HandlerFunc(adminMiddleware(adminAPI.HealHandler, traceAllFlag))
adminRouter.Methods(http.MethodPost).Path(adminVersion + "/heal/{bucket}").HandlerFunc(adminMiddleware(adminAPI.HealHandler, traceAllFlag))
adminRouter.Methods(http.MethodPost).Path(adminVersion + "/heal/{bucket}/{prefix:.*}").HandlerFunc(adminMiddleware(adminAPI.HealHandler, traceAllFlag))
adminRouter.Methods(http.MethodPost).Path(adminVersion + "/background-heal/status").HandlerFunc(adminMiddleware(adminAPI.BackgroundHealStatusHandler, traceAllFlag))
// Pool operations
adminRouter.Methods(http.MethodGet).Path(adminVersion + "/pools/list").HandlerFunc(gz(httpTraceAll(adminAPI.ListPools)))
adminRouter.Methods(http.MethodGet).Path(adminVersion+"/pools/status").HandlerFunc(gz(httpTraceAll(adminAPI.StatusPool))).Queries("pool", "{pool:.*}")
adminRouter.Methods(http.MethodGet).Path(adminVersion + "/pools/list").HandlerFunc(adminMiddleware(adminAPI.ListPools, traceAllFlag))
adminRouter.Methods(http.MethodGet).Path(adminVersion+"/pools/status").HandlerFunc(adminMiddleware(adminAPI.StatusPool, traceAllFlag)).Queries("pool", "{pool:.*}")
adminRouter.Methods(http.MethodPost).Path(adminVersion+"/pools/decommission").HandlerFunc(gz(httpTraceAll(adminAPI.StartDecommission))).Queries("pool", "{pool:.*}")
adminRouter.Methods(http.MethodPost).Path(adminVersion+"/pools/cancel").HandlerFunc(gz(httpTraceAll(adminAPI.CancelDecommission))).Queries("pool", "{pool:.*}")
adminRouter.Methods(http.MethodPost).Path(adminVersion+"/pools/decommission").HandlerFunc(adminMiddleware(adminAPI.StartDecommission, traceAllFlag)).Queries("pool", "{pool:.*}")
adminRouter.Methods(http.MethodPost).Path(adminVersion+"/pools/cancel").HandlerFunc(adminMiddleware(adminAPI.CancelDecommission, traceAllFlag)).Queries("pool", "{pool:.*}")
// Rebalance operations
adminRouter.Methods(http.MethodPost).Path(adminVersion + "/rebalance/start").HandlerFunc(gz(httpTraceAll(adminAPI.RebalanceStart)))
adminRouter.Methods(http.MethodGet).Path(adminVersion + "/rebalance/status").HandlerFunc(gz(httpTraceAll(adminAPI.RebalanceStatus)))
adminRouter.Methods(http.MethodPost).Path(adminVersion + "/rebalance/stop").HandlerFunc(gz(httpTraceAll(adminAPI.RebalanceStop)))
adminRouter.Methods(http.MethodPost).Path(adminVersion + "/rebalance/start").HandlerFunc(adminMiddleware(adminAPI.RebalanceStart, traceAllFlag))
adminRouter.Methods(http.MethodGet).Path(adminVersion + "/rebalance/status").HandlerFunc(adminMiddleware(adminAPI.RebalanceStatus, traceAllFlag))
adminRouter.Methods(http.MethodPost).Path(adminVersion + "/rebalance/stop").HandlerFunc(adminMiddleware(adminAPI.RebalanceStop, traceAllFlag))
}
// Profiling operations - deprecated API
adminRouter.Methods(http.MethodPost).Path(adminVersion+"/profiling/start").HandlerFunc(gz(httpTraceAll(adminAPI.StartProfilingHandler))).
adminRouter.Methods(http.MethodPost).Path(adminVersion+"/profiling/start").HandlerFunc(adminMiddleware(adminAPI.StartProfilingHandler, traceAllFlag, noObjLayerFlag)).
Queries("profilerType", "{profilerType:.*}")
adminRouter.Methods(http.MethodGet).Path(adminVersion + "/profiling/download").HandlerFunc(gz(httpTraceAll(adminAPI.DownloadProfilingHandler)))
adminRouter.Methods(http.MethodGet).Path(adminVersion + "/profiling/download").HandlerFunc(adminMiddleware(adminAPI.DownloadProfilingHandler, traceAllFlag, noObjLayerFlag))
// Profiling operations
adminRouter.Methods(http.MethodPost).Path(adminVersion + "/profile").HandlerFunc(gz(httpTraceAll(adminAPI.ProfileHandler)))
adminRouter.Methods(http.MethodPost).Path(adminVersion + "/profile").HandlerFunc(adminMiddleware(adminAPI.ProfileHandler, traceAllFlag, noObjLayerFlag))
// Config KV operations.
if enableConfigOps {
adminRouter.Methods(http.MethodGet).Path(adminVersion+"/get-config-kv").HandlerFunc(gz(httpTraceHdrs(adminAPI.GetConfigKVHandler))).Queries("key", "{key:.*}")
adminRouter.Methods(http.MethodPut).Path(adminVersion + "/set-config-kv").HandlerFunc(gz(httpTraceHdrs(adminAPI.SetConfigKVHandler)))
adminRouter.Methods(http.MethodDelete).Path(adminVersion + "/del-config-kv").HandlerFunc(gz(httpTraceHdrs(adminAPI.DelConfigKVHandler)))
adminRouter.Methods(http.MethodGet).Path(adminVersion+"/get-config-kv").HandlerFunc(adminMiddleware(adminAPI.GetConfigKVHandler)).Queries("key", "{key:.*}")
adminRouter.Methods(http.MethodPut).Path(adminVersion + "/set-config-kv").HandlerFunc(adminMiddleware(adminAPI.SetConfigKVHandler))
adminRouter.Methods(http.MethodDelete).Path(adminVersion + "/del-config-kv").HandlerFunc(adminMiddleware(adminAPI.DelConfigKVHandler))
}
// Enable config help in all modes.
adminRouter.Methods(http.MethodGet).Path(adminVersion+"/help-config-kv").HandlerFunc(gz(httpTraceAll(adminAPI.HelpConfigKVHandler))).Queries("subSys", "{subSys:.*}", "key", "{key:.*}")
adminRouter.Methods(http.MethodGet).Path(adminVersion+"/help-config-kv").HandlerFunc(adminMiddleware(adminAPI.HelpConfigKVHandler, traceAllFlag)).Queries("subSys", "{subSys:.*}", "key", "{key:.*}")
// Config KV history operations.
if enableConfigOps {
adminRouter.Methods(http.MethodGet).Path(adminVersion+"/list-config-history-kv").HandlerFunc(gz(httpTraceAll(adminAPI.ListConfigHistoryKVHandler))).Queries("count", "{count:[0-9]+}")
adminRouter.Methods(http.MethodDelete).Path(adminVersion+"/clear-config-history-kv").HandlerFunc(gz(httpTraceHdrs(adminAPI.ClearConfigHistoryKVHandler))).Queries("restoreId", "{restoreId:.*}")
adminRouter.Methods(http.MethodPut).Path(adminVersion+"/restore-config-history-kv").HandlerFunc(gz(httpTraceHdrs(adminAPI.RestoreConfigHistoryKVHandler))).Queries("restoreId", "{restoreId:.*}")
adminRouter.Methods(http.MethodGet).Path(adminVersion+"/list-config-history-kv").HandlerFunc(adminMiddleware(adminAPI.ListConfigHistoryKVHandler, traceAllFlag)).Queries("count", "{count:[0-9]+}")
adminRouter.Methods(http.MethodDelete).Path(adminVersion+"/clear-config-history-kv").HandlerFunc(adminMiddleware(adminAPI.ClearConfigHistoryKVHandler)).Queries("restoreId", "{restoreId:.*}")
adminRouter.Methods(http.MethodPut).Path(adminVersion+"/restore-config-history-kv").HandlerFunc(adminMiddleware(adminAPI.RestoreConfigHistoryKVHandler)).Queries("restoreId", "{restoreId:.*}")
}
// Config import/export bulk operations
if enableConfigOps {
// Get config
adminRouter.Methods(http.MethodGet).Path(adminVersion + "/config").HandlerFunc(gz(httpTraceHdrs(adminAPI.GetConfigHandler)))
adminRouter.Methods(http.MethodGet).Path(adminVersion + "/config").HandlerFunc(adminMiddleware(adminAPI.GetConfigHandler))
// Set config
adminRouter.Methods(http.MethodPut).Path(adminVersion + "/config").HandlerFunc(gz(httpTraceHdrs(adminAPI.SetConfigHandler)))
adminRouter.Methods(http.MethodPut).Path(adminVersion + "/config").HandlerFunc(adminMiddleware(adminAPI.SetConfigHandler))
}
// -- IAM APIs --
// Add policy IAM
adminRouter.Methods(http.MethodPut).Path(adminVersion+"/add-canned-policy").HandlerFunc(gz(httpTraceAll(adminAPI.AddCannedPolicy))).Queries("name", "{name:.*}")
adminRouter.Methods(http.MethodPut).Path(adminVersion+"/add-canned-policy").HandlerFunc(adminMiddleware(adminAPI.AddCannedPolicy, traceAllFlag)).Queries("name", "{name:.*}")
// Add user IAM
adminRouter.Methods(http.MethodGet).Path(adminVersion + "/accountinfo").HandlerFunc(gz(httpTraceAll(adminAPI.AccountInfoHandler)))
adminRouter.Methods(http.MethodGet).Path(adminVersion + "/accountinfo").HandlerFunc(adminMiddleware(adminAPI.AccountInfoHandler, traceAllFlag))
adminRouter.Methods(http.MethodPut).Path(adminVersion+"/add-user").HandlerFunc(gz(httpTraceHdrs(adminAPI.AddUser))).Queries("accessKey", "{accessKey:.*}")
adminRouter.Methods(http.MethodPut).Path(adminVersion+"/add-user").HandlerFunc(adminMiddleware(adminAPI.AddUser)).Queries("accessKey", "{accessKey:.*}")
adminRouter.Methods(http.MethodPut).Path(adminVersion+"/set-user-status").HandlerFunc(gz(httpTraceHdrs(adminAPI.SetUserStatus))).Queries("accessKey", "{accessKey:.*}").Queries("status", "{status:.*}")
adminRouter.Methods(http.MethodPut).Path(adminVersion+"/set-user-status").HandlerFunc(adminMiddleware(adminAPI.SetUserStatus)).Queries("accessKey", "{accessKey:.*}").Queries("status", "{status:.*}")
// Service accounts ops
adminRouter.Methods(http.MethodPut).Path(adminVersion + "/add-service-account").HandlerFunc(gz(httpTraceHdrs(adminAPI.AddServiceAccount)))
adminRouter.Methods(http.MethodPost).Path(adminVersion+"/update-service-account").HandlerFunc(gz(httpTraceHdrs(adminAPI.UpdateServiceAccount))).Queries("accessKey", "{accessKey:.*}")
adminRouter.Methods(http.MethodGet).Path(adminVersion+"/info-service-account").HandlerFunc(gz(httpTraceHdrs(adminAPI.InfoServiceAccount))).Queries("accessKey", "{accessKey:.*}")
adminRouter.Methods(http.MethodGet).Path(adminVersion + "/list-service-accounts").HandlerFunc(gz(httpTraceHdrs(adminAPI.ListServiceAccounts)))
adminRouter.Methods(http.MethodDelete).Path(adminVersion+"/delete-service-account").HandlerFunc(gz(httpTraceHdrs(adminAPI.DeleteServiceAccount))).Queries("accessKey", "{accessKey:.*}")
adminRouter.Methods(http.MethodPut).Path(adminVersion + "/add-service-account").HandlerFunc(adminMiddleware(adminAPI.AddServiceAccount))
adminRouter.Methods(http.MethodPost).Path(adminVersion+"/update-service-account").HandlerFunc(adminMiddleware(adminAPI.UpdateServiceAccount)).Queries("accessKey", "{accessKey:.*}")
adminRouter.Methods(http.MethodGet).Path(adminVersion+"/info-service-account").HandlerFunc(adminMiddleware(adminAPI.InfoServiceAccount)).Queries("accessKey", "{accessKey:.*}")
adminRouter.Methods(http.MethodGet).Path(adminVersion + "/list-service-accounts").HandlerFunc(adminMiddleware(adminAPI.ListServiceAccounts))
adminRouter.Methods(http.MethodDelete).Path(adminVersion+"/delete-service-account").HandlerFunc(adminMiddleware(adminAPI.DeleteServiceAccount)).Queries("accessKey", "{accessKey:.*}")
// STS accounts ops
adminRouter.Methods(http.MethodGet).Path(adminVersion+"/temporary-account-info").HandlerFunc(adminMiddleware(adminAPI.TemporaryAccountInfo)).Queries("accessKey", "{accessKey:.*}")
// Info policy IAM latest
adminRouter.Methods(http.MethodGet).Path(adminVersion+"/info-canned-policy").HandlerFunc(gz(httpTraceHdrs(adminAPI.InfoCannedPolicy))).Queries("name", "{name:.*}")
adminRouter.Methods(http.MethodGet).Path(adminVersion+"/info-canned-policy").HandlerFunc(adminMiddleware(adminAPI.InfoCannedPolicy)).Queries("name", "{name:.*}")
// List policies latest
adminRouter.Methods(http.MethodGet).Path(adminVersion+"/list-canned-policies").HandlerFunc(gz(httpTraceHdrs(adminAPI.ListBucketPolicies))).Queries("bucket", "{bucket:.*}")
adminRouter.Methods(http.MethodGet).Path(adminVersion + "/list-canned-policies").HandlerFunc(gz(httpTraceHdrs(adminAPI.ListCannedPolicies)))
adminRouter.Methods(http.MethodGet).Path(adminVersion+"/list-canned-policies").HandlerFunc(adminMiddleware(adminAPI.ListBucketPolicies)).Queries("bucket", "{bucket:.*}")
adminRouter.Methods(http.MethodGet).Path(adminVersion + "/list-canned-policies").HandlerFunc(adminMiddleware(adminAPI.ListCannedPolicies))
// Builtin IAM policy associations
adminRouter.Methods(http.MethodGet).Path(adminVersion + "/idp/builtin/policy-entities").HandlerFunc(adminMiddleware(adminAPI.ListPolicyMappingEntities))
// Remove policy IAM
adminRouter.Methods(http.MethodDelete).Path(adminVersion+"/remove-canned-policy").HandlerFunc(gz(httpTraceHdrs(adminAPI.RemoveCannedPolicy))).Queries("name", "{name:.*}")
adminRouter.Methods(http.MethodDelete).Path(adminVersion+"/remove-canned-policy").HandlerFunc(adminMiddleware(adminAPI.RemoveCannedPolicy)).Queries("name", "{name:.*}")
// Set user or group policy
adminRouter.Methods(http.MethodPut).Path(adminVersion+"/set-user-or-group-policy").
HandlerFunc(gz(httpTraceHdrs(adminAPI.SetPolicyForUserOrGroup))).
HandlerFunc(adminMiddleware(adminAPI.SetPolicyForUserOrGroup)).
Queries("policyName", "{policyName:.*}", "userOrGroup", "{userOrGroup:.*}", "isGroup", "{isGroup:true|false}")
// Attach/Detach policies to/from user or group
adminRouter.Methods(http.MethodPost).Path(adminVersion + "/idp/builtin/policy/{operation}").HandlerFunc(adminMiddleware(adminAPI.AttachDetachPolicyBuiltin))
// Remove user IAM
adminRouter.Methods(http.MethodDelete).Path(adminVersion+"/remove-user").HandlerFunc(gz(httpTraceHdrs(adminAPI.RemoveUser))).Queries("accessKey", "{accessKey:.*}")
adminRouter.Methods(http.MethodDelete).Path(adminVersion+"/remove-user").HandlerFunc(adminMiddleware(adminAPI.RemoveUser)).Queries("accessKey", "{accessKey:.*}")
// List users
adminRouter.Methods(http.MethodGet).Path(adminVersion+"/list-users").HandlerFunc(gz(httpTraceHdrs(adminAPI.ListBucketUsers))).Queries("bucket", "{bucket:.*}")
adminRouter.Methods(http.MethodGet).Path(adminVersion + "/list-users").HandlerFunc(gz(httpTraceHdrs(adminAPI.ListUsers)))
adminRouter.Methods(http.MethodGet).Path(adminVersion+"/list-users").HandlerFunc(adminMiddleware(adminAPI.ListBucketUsers)).Queries("bucket", "{bucket:.*}")
adminRouter.Methods(http.MethodGet).Path(adminVersion + "/list-users").HandlerFunc(adminMiddleware(adminAPI.ListUsers))
// User info
adminRouter.Methods(http.MethodGet).Path(adminVersion+"/user-info").HandlerFunc(gz(httpTraceHdrs(adminAPI.GetUserInfo))).Queries("accessKey", "{accessKey:.*}")
adminRouter.Methods(http.MethodGet).Path(adminVersion+"/user-info").HandlerFunc(adminMiddleware(adminAPI.GetUserInfo)).Queries("accessKey", "{accessKey:.*}")
// Add/Remove members from group
adminRouter.Methods(http.MethodPut).Path(adminVersion + "/update-group-members").HandlerFunc(gz(httpTraceHdrs(adminAPI.UpdateGroupMembers)))
adminRouter.Methods(http.MethodPut).Path(adminVersion + "/update-group-members").HandlerFunc(adminMiddleware(adminAPI.UpdateGroupMembers))
// Get Group
adminRouter.Methods(http.MethodGet).Path(adminVersion+"/group").HandlerFunc(gz(httpTraceHdrs(adminAPI.GetGroup))).Queries("group", "{group:.*}")
adminRouter.Methods(http.MethodGet).Path(adminVersion+"/group").HandlerFunc(adminMiddleware(adminAPI.GetGroup)).Queries("group", "{group:.*}")
// List Groups
adminRouter.Methods(http.MethodGet).Path(adminVersion + "/groups").HandlerFunc(gz(httpTraceHdrs(adminAPI.ListGroups)))
adminRouter.Methods(http.MethodGet).Path(adminVersion + "/groups").HandlerFunc(adminMiddleware(adminAPI.ListGroups))
// Set Group Status
adminRouter.Methods(http.MethodPut).Path(adminVersion+"/set-group-status").HandlerFunc(gz(httpTraceHdrs(adminAPI.SetGroupStatus))).Queries("group", "{group:.*}").Queries("status", "{status:.*}")
adminRouter.Methods(http.MethodPut).Path(adminVersion+"/set-group-status").HandlerFunc(adminMiddleware(adminAPI.SetGroupStatus)).Queries("group", "{group:.*}").Queries("status", "{status:.*}")
// Export IAM info to zipped file
adminRouter.Methods(http.MethodGet).Path(adminVersion + "/export-iam").HandlerFunc(httpTraceHdrs(adminAPI.ExportIAM))
adminRouter.Methods(http.MethodGet).Path(adminVersion + "/export-iam").HandlerFunc(adminMiddleware(adminAPI.ExportIAM, noGZFlag))
// Import IAM info
adminRouter.Methods(http.MethodPut).Path(adminVersion + "/import-iam").HandlerFunc(httpTraceHdrs(adminAPI.ImportIAM))
adminRouter.Methods(http.MethodPut).Path(adminVersion + "/import-iam").HandlerFunc(adminMiddleware(adminAPI.ImportIAM, noGZFlag))
// IDentity Provider configuration APIs
adminRouter.Methods(http.MethodPut).Path(adminVersion+"/idp-config").HandlerFunc(gz(httpTraceHdrs(adminAPI.SetIdentityProviderCfg))).Queries("type", "{type:.*}").Queries("name", "{name:.*}")
adminRouter.Methods(http.MethodGet).Path(adminVersion+"/idp-config").HandlerFunc(gz(httpTraceHdrs(adminAPI.GetIdentityProviderCfg))).Queries("type", "{type:.*}")
adminRouter.Methods(http.MethodDelete).Path(adminVersion+"/idp-config").HandlerFunc(gz(httpTraceHdrs(adminAPI.DeleteIdentityProviderCfg))).Queries("type", "{type:.*}").Queries("name", "{name:.*}")
adminRouter.Methods(http.MethodPut).Path(adminVersion + "/idp-config/{type}/{name}").HandlerFunc(adminMiddleware(adminAPI.AddIdentityProviderCfg))
adminRouter.Methods(http.MethodPost).Path(adminVersion + "/idp-config/{type}/{name}").HandlerFunc(adminMiddleware(adminAPI.UpdateIdentityProviderCfg))
adminRouter.Methods(http.MethodGet).Path(adminVersion + "/idp-config/{type}").HandlerFunc(adminMiddleware(adminAPI.ListIdentityProviderCfg))
adminRouter.Methods(http.MethodGet).Path(adminVersion + "/idp-config/{type}/{name}").HandlerFunc(adminMiddleware(adminAPI.GetIdentityProviderCfg))
adminRouter.Methods(http.MethodDelete).Path(adminVersion + "/idp-config/{type}/{name}").HandlerFunc(adminMiddleware(adminAPI.DeleteIdentityProviderCfg))
// LDAP IAM operations
adminRouter.Methods(http.MethodGet).Path(adminVersion + "/idp/ldap/policy-entities").HandlerFunc(adminMiddleware(adminAPI.ListLDAPPolicyMappingEntities))
adminRouter.Methods(http.MethodPost).Path(adminVersion + "/idp/ldap/policy/{operation}").HandlerFunc(adminMiddleware(adminAPI.AttachDetachPolicyLDAP))
// -- END IAM APIs --
// GetBucketQuotaConfig
adminRouter.Methods(http.MethodGet).Path(adminVersion+"/get-bucket-quota").HandlerFunc(
gz(httpTraceHdrs(adminAPI.GetBucketQuotaConfigHandler))).Queries("bucket", "{bucket:.*}")
adminMiddleware(adminAPI.GetBucketQuotaConfigHandler)).Queries("bucket", "{bucket:.*}")
// PutBucketQuotaConfig
adminRouter.Methods(http.MethodPut).Path(adminVersion+"/set-bucket-quota").HandlerFunc(
gz(httpTraceHdrs(adminAPI.PutBucketQuotaConfigHandler))).Queries("bucket", "{bucket:.*}")
adminMiddleware(adminAPI.PutBucketQuotaConfigHandler)).Queries("bucket", "{bucket:.*}")
// Bucket replication operations
// GetBucketTargetHandler
adminRouter.Methods(http.MethodGet).Path(adminVersion+"/list-remote-targets").HandlerFunc(
gz(httpTraceHdrs(adminAPI.ListRemoteTargetsHandler))).Queries("bucket", "{bucket:.*}", "type", "{type:.*}")
adminMiddleware(adminAPI.ListRemoteTargetsHandler)).Queries("bucket", "{bucket:.*}", "type", "{type:.*}")
// SetRemoteTargetHandler
adminRouter.Methods(http.MethodPut).Path(adminVersion+"/set-remote-target").HandlerFunc(
gz(httpTraceHdrs(adminAPI.SetRemoteTargetHandler))).Queries("bucket", "{bucket:.*}")
adminMiddleware(adminAPI.SetRemoteTargetHandler)).Queries("bucket", "{bucket:.*}")
// RemoveRemoteTargetHandler
adminRouter.Methods(http.MethodDelete).Path(adminVersion+"/remove-remote-target").HandlerFunc(
gz(httpTraceHdrs(adminAPI.RemoveRemoteTargetHandler))).Queries("bucket", "{bucket:.*}", "arn", "{arn:.*}")
adminMiddleware(adminAPI.RemoveRemoteTargetHandler)).Queries("bucket", "{bucket:.*}", "arn", "{arn:.*}")
// ReplicationDiff - MinIO extension API
adminRouter.Methods(http.MethodPost).Path(adminVersion+"/replication/diff").HandlerFunc(
gz(httpTraceHdrs(adminAPI.ReplicationDiffHandler))).Queries("bucket", "{bucket:.*}")
adminMiddleware(adminAPI.ReplicationDiffHandler)).Queries("bucket", "{bucket:.*}")
// ReplicationMRFHandler - MinIO extension API
adminRouter.Methods(http.MethodGet).Path(adminVersion+"/replication/mrf").HandlerFunc(
adminMiddleware(adminAPI.ReplicationMRFHandler)).Queries("bucket", "{bucket:.*}")
// Batch job operations
adminRouter.Methods(http.MethodPost).Path(adminVersion + "/start-job").HandlerFunc(
gz(httpTraceHdrs(adminAPI.StartBatchJob)))
adminMiddleware(adminAPI.StartBatchJob))
adminRouter.Methods(http.MethodGet).Path(adminVersion + "/list-jobs").HandlerFunc(
gz(httpTraceHdrs(adminAPI.ListBatchJobs)))
adminMiddleware(adminAPI.ListBatchJobs))
adminRouter.Methods(http.MethodGet).Path(adminVersion + "/describe-job").HandlerFunc(
gz(httpTraceHdrs(adminAPI.DescribeBatchJob)))
adminMiddleware(adminAPI.DescribeBatchJob))
adminRouter.Methods(http.MethodDelete).Path(adminVersion + "/cancel-job").HandlerFunc(
adminMiddleware(adminAPI.CancelBatchJob))
// Bucket migration operations
// ExportBucketMetaHandler
adminRouter.Methods(http.MethodGet).Path(adminVersion + "/export-bucket-metadata").HandlerFunc(
gz(httpTraceHdrs(adminAPI.ExportBucketMetadataHandler)))
adminMiddleware(adminAPI.ExportBucketMetadataHandler))
// ImportBucketMetaHandler
adminRouter.Methods(http.MethodPut).Path(adminVersion + "/import-bucket-metadata").HandlerFunc(
gz(httpTraceHdrs(adminAPI.ImportBucketMetadataHandler)))
adminMiddleware(adminAPI.ImportBucketMetadataHandler))
// Remote Tier management operations
adminRouter.Methods(http.MethodPut).Path(adminVersion + "/tier").HandlerFunc(gz(httpTraceHdrs(adminAPI.AddTierHandler)))
adminRouter.Methods(http.MethodPost).Path(adminVersion + "/tier/{tier}").HandlerFunc(gz(httpTraceHdrs(adminAPI.EditTierHandler)))
adminRouter.Methods(http.MethodGet).Path(adminVersion + "/tier").HandlerFunc(gz(httpTraceHdrs(adminAPI.ListTierHandler)))
adminRouter.Methods(http.MethodDelete).Path(adminVersion + "/tier/{tier}").HandlerFunc(gz(httpTraceHdrs(adminAPI.RemoveTierHandler)))
adminRouter.Methods(http.MethodGet).Path(adminVersion + "/tier/{tier}").HandlerFunc(gz(httpTraceHdrs(adminAPI.VerifyTierHandler)))
adminRouter.Methods(http.MethodPut).Path(adminVersion + "/tier").HandlerFunc(adminMiddleware(adminAPI.AddTierHandler))
adminRouter.Methods(http.MethodPost).Path(adminVersion + "/tier/{tier}").HandlerFunc(adminMiddleware(adminAPI.EditTierHandler))
adminRouter.Methods(http.MethodGet).Path(adminVersion + "/tier").HandlerFunc(adminMiddleware(adminAPI.ListTierHandler))
adminRouter.Methods(http.MethodDelete).Path(adminVersion + "/tier/{tier}").HandlerFunc(adminMiddleware(adminAPI.RemoveTierHandler))
adminRouter.Methods(http.MethodGet).Path(adminVersion + "/tier/{tier}").HandlerFunc(adminMiddleware(adminAPI.VerifyTierHandler))
// Tier stats
adminRouter.Methods(http.MethodGet).Path(adminVersion + "/tier-stats").HandlerFunc(gz(httpTraceHdrs(adminAPI.TierStatsHandler)))
adminRouter.Methods(http.MethodGet).Path(adminVersion + "/tier-stats").HandlerFunc(adminMiddleware(adminAPI.TierStatsHandler))
// Cluster Replication APIs
adminRouter.Methods(http.MethodPut).Path(adminVersion + "/site-replication/add").HandlerFunc(gz(httpTraceHdrs(adminAPI.SiteReplicationAdd)))
adminRouter.Methods(http.MethodPut).Path(adminVersion + "/site-replication/remove").HandlerFunc(gz(httpTraceHdrs(adminAPI.SiteReplicationRemove)))
adminRouter.Methods(http.MethodGet).Path(adminVersion + "/site-replication/info").HandlerFunc(gz(httpTraceHdrs(adminAPI.SiteReplicationInfo)))
adminRouter.Methods(http.MethodGet).Path(adminVersion + "/site-replication/metainfo").HandlerFunc(gz(httpTraceHdrs(adminAPI.SiteReplicationMetaInfo)))
adminRouter.Methods(http.MethodGet).Path(adminVersion + "/site-replication/status").HandlerFunc(gz(httpTraceHdrs(adminAPI.SiteReplicationStatus)))
adminRouter.Methods(http.MethodPut).Path(adminVersion + "/site-replication/add").HandlerFunc(adminMiddleware(adminAPI.SiteReplicationAdd))
adminRouter.Methods(http.MethodPut).Path(adminVersion + "/site-replication/remove").HandlerFunc(adminMiddleware(adminAPI.SiteReplicationRemove))
adminRouter.Methods(http.MethodGet).Path(adminVersion + "/site-replication/info").HandlerFunc(adminMiddleware(adminAPI.SiteReplicationInfo))
adminRouter.Methods(http.MethodGet).Path(adminVersion + "/site-replication/metainfo").HandlerFunc(adminMiddleware(adminAPI.SiteReplicationMetaInfo))
adminRouter.Methods(http.MethodGet).Path(adminVersion + "/site-replication/status").HandlerFunc(adminMiddleware(adminAPI.SiteReplicationStatus))
adminRouter.Methods(http.MethodPost).Path(adminVersion + adminAPISiteReplicationDevNull).HandlerFunc(adminMiddleware(adminAPI.SiteReplicationDevNull, noObjLayerFlag))
adminRouter.Methods(http.MethodPost).Path(adminVersion + adminAPISiteReplicationNetPerf).HandlerFunc(adminMiddleware(adminAPI.SiteReplicationNetPerf, noObjLayerFlag))
adminRouter.Methods(http.MethodPut).Path(adminVersion + "/site-replication/peer/join").HandlerFunc(gz(httpTraceHdrs(adminAPI.SRPeerJoin)))
adminRouter.Methods(http.MethodPut).Path(adminVersion+"/site-replication/peer/bucket-ops").HandlerFunc(gz(httpTraceHdrs(adminAPI.SRPeerBucketOps))).Queries("bucket", "{bucket:.*}").Queries("operation", "{operation:.*}")
adminRouter.Methods(http.MethodPut).Path(adminVersion + "/site-replication/peer/iam-item").HandlerFunc(gz(httpTraceHdrs(adminAPI.SRPeerReplicateIAMItem)))
adminRouter.Methods(http.MethodPut).Path(adminVersion + "/site-replication/peer/bucket-meta").HandlerFunc(gz(httpTraceHdrs(adminAPI.SRPeerReplicateBucketItem)))
adminRouter.Methods(http.MethodGet).Path(adminVersion + "/site-replication/peer/idp-settings").HandlerFunc(gz(httpTraceHdrs(adminAPI.SRPeerGetIDPSettings)))
adminRouter.Methods(http.MethodPut).Path(adminVersion + "/site-replication/edit").HandlerFunc(gz(httpTraceHdrs(adminAPI.SiteReplicationEdit)))
adminRouter.Methods(http.MethodPut).Path(adminVersion + "/site-replication/peer/edit").HandlerFunc(gz(httpTraceHdrs(adminAPI.SRPeerEdit)))
adminRouter.Methods(http.MethodPut).Path(adminVersion + "/site-replication/peer/remove").HandlerFunc(gz(httpTraceHdrs(adminAPI.SRPeerRemove)))
adminRouter.Methods(http.MethodPut).Path(adminVersion + "/site-replication/peer/join").HandlerFunc(adminMiddleware(adminAPI.SRPeerJoin))
adminRouter.Methods(http.MethodPut).Path(adminVersion+"/site-replication/peer/bucket-ops").HandlerFunc(adminMiddleware(adminAPI.SRPeerBucketOps)).Queries("bucket", "{bucket:.*}").Queries("operation", "{operation:.*}")
adminRouter.Methods(http.MethodPut).Path(adminVersion + "/site-replication/peer/iam-item").HandlerFunc(adminMiddleware(adminAPI.SRPeerReplicateIAMItem))
adminRouter.Methods(http.MethodPut).Path(adminVersion + "/site-replication/peer/bucket-meta").HandlerFunc(adminMiddleware(adminAPI.SRPeerReplicateBucketItem))
adminRouter.Methods(http.MethodGet).Path(adminVersion + "/site-replication/peer/idp-settings").HandlerFunc(adminMiddleware(adminAPI.SRPeerGetIDPSettings))
adminRouter.Methods(http.MethodPut).Path(adminVersion + "/site-replication/edit").HandlerFunc(adminMiddleware(adminAPI.SiteReplicationEdit))
adminRouter.Methods(http.MethodPut).Path(adminVersion + "/site-replication/peer/edit").HandlerFunc(adminMiddleware(adminAPI.SRPeerEdit))
adminRouter.Methods(http.MethodPut).Path(adminVersion + "/site-replication/peer/remove").HandlerFunc(adminMiddleware(adminAPI.SRPeerRemove))
adminRouter.Methods(http.MethodPut).Path(adminVersion+"/site-replication/resync/op").HandlerFunc(adminMiddleware(adminAPI.SiteReplicationResyncOp)).Queries("operation", "{operation:.*}")
if globalIsDistErasure {
// Top locks
adminRouter.Methods(http.MethodGet).Path(adminVersion + "/top/locks").HandlerFunc(gz(httpTraceHdrs(adminAPI.TopLocksHandler)))
adminRouter.Methods(http.MethodGet).Path(adminVersion + "/top/locks").HandlerFunc(adminMiddleware(adminAPI.TopLocksHandler))
// Force unlocks paths
adminRouter.Methods(http.MethodPost).Path(adminVersion+"/force-unlock").
Queries("paths", "{paths:.*}").HandlerFunc(gz(httpTraceHdrs(adminAPI.ForceUnlockHandler)))
Queries("paths", "{paths:.*}").HandlerFunc(adminMiddleware(adminAPI.ForceUnlockHandler))
}
adminRouter.Methods(http.MethodPost).Path(adminVersion + "/speedtest").HandlerFunc(httpTraceHdrs(adminAPI.SpeedTestHandler))
adminRouter.Methods(http.MethodPost).Path(adminVersion + "/speedtest/object").HandlerFunc(httpTraceHdrs(adminAPI.ObjectSpeedTestHandler))
adminRouter.Methods(http.MethodPost).Path(adminVersion + "/speedtest/drive").HandlerFunc(httpTraceHdrs(adminAPI.DriveSpeedtestHandler))
adminRouter.Methods(http.MethodPost).Path(adminVersion + "/speedtest/net").HandlerFunc(httpTraceHdrs(adminAPI.NetperfHandler))
adminRouter.Methods(http.MethodPost).Path(adminVersion + "/speedtest").HandlerFunc(adminMiddleware(adminAPI.ObjectSpeedTestHandler, noGZFlag))
adminRouter.Methods(http.MethodPost).Path(adminVersion + "/speedtest/object").HandlerFunc(adminMiddleware(adminAPI.ObjectSpeedTestHandler, noGZFlag))
adminRouter.Methods(http.MethodPost).Path(adminVersion + "/speedtest/drive").HandlerFunc(adminMiddleware(adminAPI.DriveSpeedtestHandler, noGZFlag))
adminRouter.Methods(http.MethodPost).Path(adminVersion + "/speedtest/net").HandlerFunc(adminMiddleware(adminAPI.NetperfHandler, noGZFlag))
adminRouter.Methods(http.MethodPost).Path(adminVersion + "/speedtest/site").HandlerFunc(adminMiddleware(adminAPI.SitePerfHandler, noGZFlag))
adminRouter.Methods(http.MethodPost).Path(adminVersion + adminAPIClientDevNull).HandlerFunc(adminMiddleware(adminAPI.ClientDevNull, noGZFlag))
adminRouter.Methods(http.MethodPost).Path(adminVersion + adminAPIClientDevExtraTime).HandlerFunc(adminMiddleware(adminAPI.ClientDevNullExtraTime, noGZFlag))
// HTTP Trace
adminRouter.Methods(http.MethodGet).Path(adminVersion + "/trace").HandlerFunc(gz(http.HandlerFunc(adminAPI.TraceHandler)))
adminRouter.Methods(http.MethodGet).Path(adminVersion + "/trace").HandlerFunc(adminMiddleware(adminAPI.TraceHandler, noObjLayerFlag))
// Console Logs
adminRouter.Methods(http.MethodGet).Path(adminVersion + "/log").HandlerFunc(gz(httpTraceAll(adminAPI.ConsoleLogHandler)))
adminRouter.Methods(http.MethodGet).Path(adminVersion + "/log").HandlerFunc(adminMiddleware(adminAPI.ConsoleLogHandler, traceAllFlag))
// -- KMS APIs --
//
adminRouter.Methods(http.MethodPost).Path(adminVersion + "/kms/status").HandlerFunc(gz(httpTraceAll(adminAPI.KMSStatusHandler)))
adminRouter.Methods(http.MethodPost).Path(adminVersion+"/kms/key/create").HandlerFunc(gz(httpTraceAll(adminAPI.KMSCreateKeyHandler))).Queries("key-id", "{key-id:.*}")
adminRouter.Methods(http.MethodGet).Path(adminVersion + "/kms/key/status").HandlerFunc(gz(httpTraceAll(adminAPI.KMSKeyStatusHandler)))
adminRouter.Methods(http.MethodPost).Path(adminVersion + "/kms/status").HandlerFunc(adminMiddleware(adminAPI.KMSStatusHandler, traceAllFlag))
adminRouter.Methods(http.MethodPost).Path(adminVersion+"/kms/key/create").HandlerFunc(adminMiddleware(adminAPI.KMSCreateKeyHandler, traceAllFlag)).Queries("key-id", "{key-id:.*}")
adminRouter.Methods(http.MethodGet).Path(adminVersion + "/kms/key/status").HandlerFunc(adminMiddleware(adminAPI.KMSKeyStatusHandler, traceAllFlag))
// Keep obdinfo for backward compatibility with mc
adminRouter.Methods(http.MethodGet).Path(adminVersion + "/obdinfo").
HandlerFunc(gz(httpTraceHdrs(adminAPI.HealthInfoHandler)))
HandlerFunc(adminMiddleware(adminAPI.HealthInfoHandler))
// -- Health API --
adminRouter.Methods(http.MethodGet).Path(adminVersion + "/healthinfo").
HandlerFunc(gz(httpTraceHdrs(adminAPI.HealthInfoHandler)))
adminRouter.Methods(http.MethodGet).Path(adminVersion + "/bandwidth").
HandlerFunc(gz(httpTraceHdrs(adminAPI.BandwidthMonitorHandler)))
HandlerFunc(adminMiddleware(adminAPI.HealthInfoHandler))
}
// If none of the routes match add default error handler routes
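The routes above replace the hand-rolled gz(httpTraceHdrs(...)) chains with a single adminMiddleware(...) wrapper that takes option flags (noGZFlag, noObjLayerFlag, traceAllFlag). Below is a minimal sketch of what such a flag-driven wrapper can look like, assuming the flag semantics suggested by the route names; it is an illustration only, not MinIO's actual adminMiddleware:

package sketch

import (
	"compress/gzip"
	"io"
	"net/http"
	"strings"
)

type mwFlag int

const (
	noGZFlag     mwFlag = 1 << iota // skip gzip, e.g. for streaming speedtest endpoints
	traceAllFlag                    // trace request/response bodies, not just headers
)

type gzipWriter struct {
	http.ResponseWriter
	io.Writer
}

func (g gzipWriter) Write(b []byte) (int, error) { return g.Writer.Write(b) }

// adminMiddleware composes tracing and optional gzip around a handler,
// standing in for the per-route gz(httpTraceHdrs(...)) chains above.
func adminMiddleware(h http.HandlerFunc, flags ...mwFlag) http.HandlerFunc {
	var f mwFlag
	for _, fl := range flags {
		f |= fl
	}
	return func(w http.ResponseWriter, r *http.Request) {
		// A tracing hook would run here; traceAllFlag would widen it to bodies.
		if f&noGZFlag == 0 && strings.Contains(r.Header.Get("Accept-Encoding"), "gzip") {
			gz := gzip.NewWriter(w)
			defer gz.Close()
			w.Header().Set("Content-Encoding", "gzip")
			w = gzipWriter{ResponseWriter: w, Writer: gz}
		}
		h(w, r)
	}
}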

@@ -26,15 +26,15 @@ import (
"strings"
"time"
"github.com/minio/madmin-go"
"github.com/minio/madmin-go/v3"
"github.com/minio/minio/internal/config"
"github.com/minio/minio/internal/kms"
"github.com/minio/minio/internal/logger"
)
// getLocalServerProperty - returns madmin.ServerProperties for only the
// local endpoints from given list of endpoints
func getLocalServerProperty(endpointServerPools EndpointServerPools, r *http.Request) madmin.ServerProperties {
var localEndpoints Endpoints
addr := globalLocalNodeName
if r != nil {
addr = r.Host
@@ -52,7 +52,6 @@ func getLocalServerProperty(endpointServerPools EndpointServerPools, r *http.Req
if endpoint.IsLocal {
// Only proceed for local endpoints
network[nodeName] = string(madmin.ItemOnline)
localEndpoints = append(localEndpoints, endpoint)
continue
}
_, present := network[nodeName]
@@ -88,7 +87,6 @@ func getLocalServerProperty(endpointServerPools EndpointServerPools, r *http.Req
}
props := madmin.ServerProperties{
State: string(madmin.ItemInitializing),
Endpoint: addr,
Uptime: UTCNow().Unix() - globalBootTime.Unix(),
Version: Version,
@@ -120,7 +118,7 @@ func getLocalServerProperty(endpointServerPools EndpointServerPools, r *http.Req
config.EnvRootUser: {},
config.EnvRootPassword: {},
config.EnvMinIOSubnetAPIKey: {},
config.EnvKMSSecretKey: {},
kms.EnvKMSSecretKey: {},
}
for _, v := range os.Environ() {
if !strings.HasPrefix(v, "MINIO") && !strings.HasPrefix(v, "_MINIO") {
@@ -143,10 +141,12 @@ func getLocalServerProperty(endpointServerPools EndpointServerPools, r *http.Req
objLayer := newObjectLayerFn()
if objLayer != nil {
// only need Disks information in server mode.
storageInfo, _ := objLayer.LocalStorageInfo(GlobalContext)
storageInfo := objLayer.LocalStorageInfo(GlobalContext)
props.State = string(madmin.ItemOnline)
props.Disks = storageInfo.Disks
} else {
props.State = string(madmin.ItemInitializing)
props.Disks = getOfflineDisks("", globalEndpoints)
}
return props
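Besides dropping the error return from LocalStorageInfo, the hunk above re-keys the redaction set on kms.EnvKMSSecretKey. The surrounding loop keeps only MINIO-prefixed environment variables and masks names found in that set; here is a standalone sketch of the pattern, with literal names standing in for the config/kms constants:

package sketch

import (
	"os"
	"strings"
)

// redactedEnv keeps MINIO-related environment variables and masks the
// secret-bearing ones. The literal names below are placeholders for
// constants like config.EnvRootPassword and kms.EnvKMSSecretKey.
func redactedEnv() []string {
	secrets := map[string]struct{}{
		"MINIO_ROOT_USER":      {},
		"MINIO_ROOT_PASSWORD":  {},
		"MINIO_KMS_SECRET_KEY": {},
	}
	var out []string
	for _, v := range os.Environ() {
		if !strings.HasPrefix(v, "MINIO") && !strings.HasPrefix(v, "_MINIO") {
			continue
		}
		name, _, _ := strings.Cut(v, "=")
		if _, ok := secrets[name]; ok {
			out = append(out, name+"=<redacted>")
			continue
		}
		out = append(out, v)
	}
	return out
}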

@@ -1,4 +1,4 @@
// Copyright (c) 2015-2021 MinIO, Inc.
// Copyright (c) 2015-2023 MinIO, Inc.
//
// This file is part of MinIO Object Storage stack
//
@@ -28,9 +28,10 @@ import (
"strings"
"github.com/Azure/azure-storage-blob-go/azblob"
"github.com/minio/minio/internal/ioutil"
"google.golang.org/api/googleapi"
"github.com/minio/madmin-go"
"github.com/minio/madmin-go/v3"
"github.com/minio/minio-go/v7"
"github.com/minio/minio-go/v7/pkg/tags"
"github.com/minio/minio/internal/auth"
@@ -38,10 +39,12 @@ import (
"github.com/minio/minio/internal/bucket/replication"
"github.com/minio/minio/internal/config/dns"
"github.com/minio/minio/internal/crypto"
"github.com/minio/minio/internal/kms"
"github.com/minio/minio/internal/logger"
objectlock "github.com/minio/minio/internal/bucket/object/lock"
"github.com/minio/minio/internal/bucket/versioning"
levent "github.com/minio/minio/internal/config/lambda/event"
"github.com/minio/minio/internal/event"
"github.com/minio/minio/internal/hash"
"github.com/minio/pkg/bucket/policy"
@@ -84,6 +87,7 @@ const (
ErrInternalError
ErrInvalidAccessKeyID
ErrAccessKeyDisabled
ErrInvalidArgument
ErrInvalidBucketName
ErrInvalidDigest
ErrInvalidRange
@@ -132,7 +136,10 @@ const (
ErrReplicationNeedsVersioningError
ErrReplicationBucketNeedsVersioningError
ErrReplicationDenyEditError
ErrRemoteTargetDenyAddError
ErrReplicationNoExistingObjects
ErrReplicationValidationError
ErrReplicationPermissionCheckError
ErrObjectRestoreAlreadyInProgress
ErrNoSuchKey
ErrNoSuchUpload
@@ -145,13 +152,14 @@ const (
ErrMethodNotAllowed
ErrInvalidPart
ErrInvalidPartOrder
ErrMissingPart
ErrAuthorizationHeaderMalformed
ErrMalformedPOSTRequest
ErrPOSTFileRequired
ErrSignatureVersionNotSupported
ErrBucketNotEmpty
ErrAllAccessDisabled
ErrMalformedPolicy
ErrPolicyInvalidVersion
ErrMissingFields
ErrMissingCredTag
ErrCredMalformed
@@ -164,7 +172,6 @@ const (
ErrMalformedDate
ErrMalformedPresignedDate
ErrMalformedCredentialDate
ErrMalformedCredentialRegion
ErrMalformedExpires
ErrNegativeExpires
ErrAuthHeaderEmpty
@@ -177,11 +184,13 @@ const (
ErrBucketAlreadyOwnedByYou
ErrInvalidDuration
ErrBucketAlreadyExists
ErrTooManyBuckets
ErrMetadataTooLarge
ErrUnsupportedMetadata
ErrUnsupportedHostHeader
ErrMaximumExpires
ErrSlowDown
ErrSlowDownRead
ErrSlowDownWrite
ErrMaxVersionsExceeded
ErrInvalidPrefixMarker
ErrBadRequest
ErrKeyTooLongError
@@ -196,6 +205,9 @@ const (
ErrBucketTaggingNotFound
ErrObjectLockInvalidHeaders
ErrInvalidTagDirective
ErrPolicyAlreadyAttached
ErrPolicyNotAttached
ErrExcessData
// Add new error codes here.
// SSE-S3/SSE-KMS related API errors
@@ -207,6 +219,8 @@ const (
ErrSSEMultipartEncrypted
ErrSSEEncryptedObject
ErrInvalidEncryptionParameters
ErrInvalidEncryptionParametersSSEC
ErrInvalidSSECustomerAlgorithm
ErrInvalidSSECustomerKey
ErrMissingSSECustomerKey
@@ -216,6 +230,7 @@ const (
ErrIncompatibleEncryptionMethod
ErrKMSNotConfigured
ErrKMSKeyNotFoundException
ErrKMSDefaultKeyAlreadyConfigured
ErrNoAccessKey
ErrInvalidToken
@@ -239,18 +254,17 @@ const (
// Add new extended error codes here.
// MinIO extended errors.
ErrReadQuorum
ErrWriteQuorum
ErrStorageFull
ErrRequestBodyParse
ErrObjectExistsAsDirectory
ErrInvalidObjectName
ErrInvalidObjectNamePrefixSlash
ErrInvalidResourceName
ErrInvalidLifecycleQueryParameter
ErrServerNotInitialized
ErrOperationTimedOut
ErrRequestTimedout
ErrClientDisconnected
ErrOperationMaxedOut
ErrTooManyRequests
ErrInvalidRequest
ErrTransitionStorageClassNotFoundError
// MinIO storage class error codes
@@ -262,10 +276,13 @@ const (
ErrMalformedJSON
ErrAdminNoSuchUser
ErrAdminNoSuchUserLDAPWarn
ErrAdminNoSuchGroup
ErrAdminGroupNotEmpty
ErrAdminGroupDisabled
ErrAdminNoSuchJob
ErrAdminNoSuchPolicy
ErrAdminPolicyChangeAlreadyApplied
ErrAdminInvalidArgument
ErrAdminInvalidAccessKey
ErrAdminInvalidSecretKey
@@ -276,7 +293,10 @@ const (
ErrAdminConfigEnvOverridden
ErrAdminConfigDuplicateKeys
ErrAdminConfigInvalidIDPType
ErrAdminConfigLDAPNonDefaultConfigName
ErrAdminConfigLDAPValidation
ErrAdminConfigIDPCfgNameAlreadyExists
ErrAdminConfigIDPCfgNameDoesNotExist
ErrAdminCredentialsMismatch
ErrInsecureClientRequest
ErrObjectTampered
@@ -404,6 +424,12 @@ const (
ErrPostPolicyConditionInvalidFormat
ErrInvalidChecksum
// Lambda functions
ErrLambdaARNInvalid
ErrLambdaARNNotFound
apiErrCodeEnd // This is used only for the testing code
)
type errorCodeMap map[APIErrorCode]APIError
@@ -417,8 +443,7 @@ func (e errorCodeMap) ToAPIErrWithErr(errCode APIErrorCode, err error) APIError
apiErr.Description = fmt.Sprintf("%s (%s)", apiErr.Description, err)
}
if globalSite.Region != "" {
switch errCode {
case ErrAuthorizationHeaderMalformed:
if errCode == ErrAuthorizationHeaderMalformed {
apiErr.Description = fmt.Sprintf("The authorization header is malformed; the region is wrong; expecting '%s'.", globalSite.Region)
return apiErr
}
@@ -513,6 +538,11 @@ var errorCodes = errorCodeMap{
Description: "Your proposed upload exceeds the maximum allowed object size.",
HTTPStatusCode: http.StatusBadRequest,
},
ErrExcessData: {
Code: "ExcessData",
Description: "More data provided than indicated content length",
HTTPStatusCode: http.StatusBadRequest,
},
ErrPolicyTooLarge: {
Code: "PolicyTooLarge",
Description: "Policy exceeds the maximum allowed document size.",
@@ -538,6 +568,11 @@ var errorCodes = errorCodeMap{
Description: "Your account is disabled; please contact your administrator.",
HTTPStatusCode: http.StatusForbidden,
},
ErrInvalidArgument: {
Code: "InvalidArgument",
Description: "Invalid argument",
HTTPStatusCode: http.StatusBadRequest,
},
ErrInvalidBucketName: {
Code: "InvalidBucketName",
Description: "The specified bucket is not valid.",
@@ -663,6 +698,11 @@ var errorCodes = errorCodeMap{
Description: "One or more of the specified parts could not be found. The part may not have been uploaded, or the specified entity tag may not match the part's entity tag.",
HTTPStatusCode: http.StatusBadRequest,
},
ErrMissingPart: {
Code: "InvalidRequest",
Description: "You must specify at least one part",
HTTPStatusCode: http.StatusBadRequest,
},
ErrInvalidPartOrder: {
Code: "InvalidPartOrder",
Description: "The list of parts was not in ascending order. The parts list must be specified in order by part number.",
@@ -693,11 +733,6 @@ var errorCodes = errorCodeMap{
Description: "The authorization mechanism you have provided is not supported. Please use AWS4-HMAC-SHA256.",
HTTPStatusCode: http.StatusBadRequest,
},
ErrTooManyBuckets: {
Code: "TooManyBuckets",
Description: "You have attempted to create more buckets than allowed",
HTTPStatusCode: http.StatusBadRequest,
},
ErrBucketNotEmpty: {
Code: "BucketNotEmpty",
Description: "The bucket you tried to delete is not empty",
@@ -713,9 +748,9 @@ var errorCodes = errorCodeMap{
Description: "All access to this resource has been disabled.",
HTTPStatusCode: http.StatusForbidden,
},
ErrMalformedPolicy: {
ErrPolicyInvalidVersion: {
Code: "MalformedPolicy",
Description: "Policy has invalid resource.",
Description: "The policy must contain a valid version string",
HTTPStatusCode: http.StatusBadRequest,
},
ErrMissingFields: {
@@ -813,11 +848,21 @@ var errorCodes = errorCodeMap{
Description: "Request is not valid yet",
HTTPStatusCode: http.StatusForbidden,
},
ErrSlowDown: {
Code: "SlowDown",
ErrSlowDownRead: {
Code: "SlowDownRead",
Description: "Resource requested is unreadable, please reduce your request rate",
HTTPStatusCode: http.StatusServiceUnavailable,
},
ErrSlowDownWrite: {
Code: "SlowDownWrite",
Description: "Resource requested is unwritable, please reduce your request rate",
HTTPStatusCode: http.StatusServiceUnavailable,
},
ErrMaxVersionsExceeded: {
Code: "MaxVersionsExceeded",
Description: "You've exceeded the limit on the number of versions you can create on this object",
HTTPStatusCode: http.StatusBadRequest,
},
ErrInvalidPrefixMarker: {
Code: "InvalidPrefixMarker",
Description: "Invalid marker prefix combination",
@@ -918,6 +963,11 @@ var errorCodes = errorCodeMap{
Description: "No matching ExistingsObjects rule enabled",
HTTPStatusCode: http.StatusBadRequest,
},
ErrRemoteTargetDenyAddError: {
Code: "XMinioAdminRemoteTargetDenyAdd",
Description: "Cannot add remote target endpoint since this server is in a cluster replication setup",
HTTPStatusCode: http.StatusBadRequest,
},
ErrReplicationDenyEditError: {
Code: "XMinioReplicationDenyEdit",
Description: "Cannot alter local replication config since this server is in a cluster replication setup",
@@ -973,6 +1023,16 @@ var errorCodes = errorCodeMap{
Description: "Versioning must be 'Enabled' on the bucket to add a replication target",
HTTPStatusCode: http.StatusBadRequest,
},
ErrReplicationValidationError: {
Code: "InvalidRequest",
Description: "Replication validation failed on target",
HTTPStatusCode: http.StatusBadRequest,
},
ErrReplicationPermissionCheckError: {
Code: "ReplicationPermissionCheck",
Description: "X-Minio-Source-Replication-Check cannot be specified in request. Request cannot be completed",
HTTPStatusCode: http.StatusBadRequest,
},
ErrNoSuchObjectLockConfiguration: {
Code: "NoSuchObjectLockConfiguration",
Description: "The specified object does not have a ObjectLock configuration",
@@ -1086,8 +1146,13 @@ var errorCodes = errorCodeMap{
HTTPStatusCode: http.StatusBadRequest,
},
ErrInvalidEncryptionMethod: {
Code: "InvalidRequest",
Description: "The encryption method specified is not supported",
Code: "InvalidArgument",
Description: "Server Side Encryption with AWS KMS managed key requires HTTP header x-amz-server-side-encryption : aws:kms",
HTTPStatusCode: http.StatusBadRequest,
},
ErrIncompatibleEncryptionMethod: {
Code: "InvalidArgument",
Description: "Server Side Encryption with Customer provided key is incompatible with the encryption method specified",
HTTPStatusCode: http.StatusBadRequest,
},
ErrInvalidEncryptionKeyID: {
@@ -1115,6 +1180,11 @@ var errorCodes = errorCodeMap{
Description: "The encryption parameters are not applicable to this object.",
HTTPStatusCode: http.StatusBadRequest,
},
ErrInvalidEncryptionParametersSSEC: {
Code: "InvalidRequest",
Description: "SSE-C encryption parameters are not supported on replicated bucket.",
HTTPStatusCode: http.StatusBadRequest,
},
ErrInvalidSSECustomerAlgorithm: {
Code: "InvalidArgument",
Description: "Requests specifying Server Side Encryption with Customer provided keys must provide a valid encryption algorithm.",
@@ -1145,11 +1215,6 @@ var errorCodes = errorCodeMap{
Description: "The provided encryption parameters did not match the ones used originally.",
HTTPStatusCode: http.StatusBadRequest,
},
ErrIncompatibleEncryptionMethod: {
Code: "InvalidArgument",
Description: "Server side encryption specified with both SSE-C and SSE-S3 headers",
HTTPStatusCode: http.StatusBadRequest,
},
ErrKMSNotConfigured: {
Code: "NotImplemented",
Description: "Server side encryption specified but KMS is not configured",
@@ -1160,6 +1225,11 @@ var errorCodes = errorCodeMap{
Description: "Invalid keyId",
HTTPStatusCode: http.StatusBadRequest,
},
ErrKMSDefaultKeyAlreadyConfigured: {
Code: "KMS.DefaultKeyAlreadyConfiguredException",
Description: "A default encryption already exists and cannot be changed on KMS",
HTTPStatusCode: http.StatusConflict,
},
ErrNoAccessKey: {
Code: "AccessDenied",
Description: "No AWSAccessKey was presented",
@@ -1224,11 +1294,21 @@ var errorCodes = errorCodeMap{
Description: "The JSON you provided was not well-formed or did not validate against our published format.",
HTTPStatusCode: http.StatusBadRequest,
},
ErrInvalidLifecycleQueryParameter: {
Code: "XMinioInvalidLifecycleParameter",
Description: "The boolean value provided for withUpdatedAt query parameter was invalid.",
HTTPStatusCode: http.StatusBadRequest,
},
ErrAdminNoSuchUser: {
Code: "XMinioAdminNoSuchUser",
Description: "The specified user does not exist.",
HTTPStatusCode: http.StatusNotFound,
},
ErrAdminNoSuchUserLDAPWarn: {
Code: "XMinioAdminNoSuchUser",
Description: "The specified user does not exist. If you meant a user in LDAP, use `mc idp ldap`",
HTTPStatusCode: http.StatusNotFound,
},
ErrAdminNoSuchGroup: {
Code: "XMinioAdminNoSuchGroup",
Description: "The specified group does not exist.",
@@ -1244,11 +1324,22 @@ var errorCodes = errorCodeMap{
Description: "The specified group is not empty - cannot remove it.",
HTTPStatusCode: http.StatusBadRequest,
},
ErrAdminGroupDisabled: {
Code: "XMinioAdminGroupDisabled",
Description: "The specified group is disabled.",
HTTPStatusCode: http.StatusBadRequest,
},
ErrAdminNoSuchPolicy: {
Code: "XMinioAdminNoSuchPolicy",
Description: "The canned policy does not exist.",
HTTPStatusCode: http.StatusNotFound,
},
ErrAdminPolicyChangeAlreadyApplied: {
Code: "XMinioAdminPolicyChangeAlreadyApplied",
Description: "The specified policy change is already in effect.",
HTTPStatusCode: http.StatusBadRequest,
},
ErrAdminInvalidArgument: {
Code: "XMinioAdminInvalidArgument",
Description: "Invalid arguments specified.",
@@ -1300,11 +1391,26 @@ var errorCodes = errorCodeMap{
Description: fmt.Sprintf("Invalid IDP configuration type - must be one of %v", madmin.ValidIDPConfigTypes),
HTTPStatusCode: http.StatusBadRequest,
},
ErrAdminConfigLDAPNonDefaultConfigName: {
Code: "XMinioAdminConfigLDAPNonDefaultConfigName",
Description: "Only a single LDAP configuration is supported - config name must be empty or `_`",
HTTPStatusCode: http.StatusBadRequest,
},
ErrAdminConfigLDAPValidation: {
Code: "XMinioAdminConfigLDAPValidation",
Description: "LDAP Configuration validation failed",
HTTPStatusCode: http.StatusBadRequest,
},
ErrAdminConfigIDPCfgNameAlreadyExists: {
Code: "XMinioAdminConfigIDPCfgNameAlreadyExists",
Description: "An IDP configuration with the given name already exists",
HTTPStatusCode: http.StatusBadRequest,
},
ErrAdminConfigIDPCfgNameDoesNotExist: {
Code: "XMinioAdminConfigIDPCfgNameDoesNotExist",
Description: "No such IDP configuration exists",
HTTPStatusCode: http.StatusBadRequest,
},
ErrAdminConfigNotificationTargetsFailed: {
Code: "XMinioAdminNotificationTargetsTestFailed",
Description: "Configuration update failed due an unsuccessful attempt to connect to one or more notification servers",
@@ -1335,7 +1441,7 @@ var errorCodes = errorCodeMap{
Description: "Cannot respond to plain-text request from TLS-encrypted server",
HTTPStatusCode: http.StatusBadRequest,
},
ErrOperationTimedOut: {
ErrRequestTimedout: {
Code: "RequestTimeout",
Description: "A timeout occurred while trying to lock a resource, please reduce your request rate",
HTTPStatusCode: http.StatusServiceUnavailable,
@@ -1345,9 +1451,9 @@ var errorCodes = errorCodeMap{
Description: "Client disconnected before response was ready",
HTTPStatusCode: 499, // No official code, use nginx value.
},
ErrOperationMaxedOut: {
Code: "SlowDown",
Description: "A timeout exceeded while waiting to proceed with the request, please reduce your request rate",
ErrTooManyRequests: {
Code: "TooManyRequests",
Description: "Deadline exceeded while waiting in incoming queue, please reduce your request rate",
HTTPStatusCode: http.StatusServiceUnavailable,
},
ErrUnsupportedMetadata: {
@@ -1355,6 +1461,11 @@ var errorCodes = errorCodeMap{
Description: "Your metadata headers are not supported.",
HTTPStatusCode: http.StatusBadRequest,
},
ErrUnsupportedHostHeader: {
Code: "InvalidArgument",
Description: "Your Host header is malformed.",
HTTPStatusCode: http.StatusBadRequest,
},
ErrObjectTampered: {
Code: "XMinioObjectTampered",
Description: errObjectTampered.Error(),
@@ -1369,7 +1480,7 @@ var errorCodes = errorCodeMap{
ErrSiteReplicationPeerResp: {
Code: "XMinioSiteReplicationPeerResp",
Description: "Error received when contacting a peer site",
HTTPStatusCode: http.StatusServiceUnavailable,
HTTPStatusCode: http.StatusBadRequest,
},
ErrSiteReplicationBackendIssue: {
Code: "XMinioSiteReplicationBackendIssue",
@@ -1487,7 +1598,7 @@ var errorCodes = errorCodeMap{
HTTPStatusCode: http.StatusBadRequest,
},
ErrBusy: {
Code: "Busy",
Code: "ServerBusy",
Description: "The service is unavailable. Please retry.",
HTTPStatusCode: http.StatusServiceUnavailable,
},
@@ -1926,6 +2037,26 @@ var errorCodes = errorCodeMap{
Description: "Invalid checksum provided.",
HTTPStatusCode: http.StatusBadRequest,
},
ErrLambdaARNInvalid: {
Code: "LambdaARNInvalid",
Description: "The specified lambda ARN is invalid",
HTTPStatusCode: http.StatusBadRequest,
},
ErrLambdaARNNotFound: {
Code: "LambdaARNNotFound",
Description: "The specified lambda ARN does not exist",
HTTPStatusCode: http.StatusNotFound,
},
ErrPolicyAlreadyAttached: {
Code: "XMinioPolicyAlreadyAttached",
Description: "The specified policy is already attached.",
HTTPStatusCode: http.StatusConflict,
},
ErrPolicyNotAttached: {
Code: "XMinioPolicyNotAttached",
Description: "The specified policy is not found.",
HTTPStatusCode: http.StatusNotFound,
},
// Add your error structure here.
}
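errorCodes above is a plain map from APIErrorCode to APIError, so handlers resolve any code with a single lookup. The table-driven shape in miniature; the fallback for codes missing from the table is an assumption made for this sketch:

package sketch

import "net/http"

type APIErrorCode int

const (
	ErrNone APIErrorCode = iota
	ErrExcessData
)

type APIError struct {
	Code           string
	Description    string
	HTTPStatusCode int
}

type errorCodeMap map[APIErrorCode]APIError

var errorCodes = errorCodeMap{
	ErrExcessData: {
		Code:           "ExcessData",
		Description:    "More data provided than indicated content length",
		HTTPStatusCode: http.StatusBadRequest,
	},
}

// toAPIErr is the one-lookup resolution used throughout the handlers;
// unknown codes degrade to a generic internal error rather than panic.
func (e errorCodeMap) toAPIErr(code APIErrorCode) APIError {
	if apiErr, ok := e[code]; ok {
		return apiErr
	}
	return APIError{
		Code:           "InternalError",
		Description:    "We encountered an internal error, please try again.",
		HTTPStatusCode: http.StatusInternalServerError,
	}
}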
@@ -1938,18 +2069,23 @@ func toAPIErrorCode(ctx context.Context, err error) (apiErr APIErrorCode) {
}
// Only return ErrClientDisconnected if the provided context is actually canceled.
// This way downstream context.Canceled will still report ErrOperationTimedOut
if contextCanceled(ctx) {
if ctx.Err() == context.Canceled {
return ErrClientDisconnected
}
// This way downstream context.Canceled will still report ErrRequestTimedout
if contextCanceled(ctx) && errors.Is(ctx.Err(), context.Canceled) {
return ErrClientDisconnected
}
// Unwrap the error first
err = unwrapAll(err)
switch err {
case errInvalidArgument:
apiErr = ErrAdminInvalidArgument
case errNoSuchPolicy:
apiErr = ErrAdminNoSuchPolicy
case errNoSuchUser:
apiErr = ErrAdminNoSuchUser
case errNoSuchUserLDAPWarn:
apiErr = ErrAdminNoSuchUserLDAPWarn
case errNoSuchServiceAccount:
apiErr = ErrAdminServiceAccountNotFound
case errNoSuchGroup:
@@ -1958,8 +2094,8 @@ func toAPIErrorCode(ctx context.Context, err error) (apiErr APIErrorCode) {
apiErr = ErrAdminGroupNotEmpty
case errNoSuchJob:
apiErr = ErrAdminNoSuchJob
case errNoSuchPolicy:
apiErr = ErrAdminNoSuchPolicy
case errNoPolicyToAttachOrDetach:
apiErr = ErrAdminPolicyChangeAlreadyApplied
case errSignatureMismatch:
apiErr = ErrSignatureDoesNotMatch
case errInvalidRange:
@@ -1976,9 +2112,17 @@ func toAPIErrorCode(ctx context.Context, err error) (apiErr APIErrorCode) {
apiErr = ErrAdminInvalidSecretKey
case errInvalidStorageClass:
apiErr = ErrInvalidStorageClass
case errErasureReadQuorum:
apiErr = ErrSlowDownRead
case errErasureWriteQuorum:
apiErr = ErrSlowDownWrite
case errMaxVersionsExceeded:
apiErr = ErrMaxVersionsExceeded
// SSE errors
case errInvalidEncryptionParameters:
apiErr = ErrInvalidEncryptionParameters
case errInvalidEncryptionParametersSSEC:
apiErr = ErrInvalidEncryptionParametersSSEC
case crypto.ErrInvalidEncryptionMethod:
apiErr = ErrInvalidEncryptionMethod
case crypto.ErrInvalidEncryptionKeyID:
@@ -2005,11 +2149,12 @@ func toAPIErrorCode(ctx context.Context, err error) (apiErr APIErrorCode) {
apiErr = ErrKMSNotConfigured
case errKMSKeyNotFound:
apiErr = ErrKMSKeyNotFoundException
case context.Canceled, context.DeadlineExceeded:
apiErr = ErrOperationTimedOut
case errDiskNotFound:
apiErr = ErrSlowDown
case errKMSDefaultKeyAlreadyConfigured:
apiErr = ErrKMSDefaultKeyAlreadyConfigured
case context.Canceled:
apiErr = ErrClientDisconnected
case context.DeadlineExceeded:
apiErr = ErrRequestTimedout
case objectlock.ErrInvalidRetentionDate:
apiErr = ErrInvalidRetentionDate
case objectlock.ErrPastObjectLockRetainDate:
@@ -2020,11 +2165,14 @@ func toAPIErrorCode(ctx context.Context, err error) (apiErr APIErrorCode) {
apiErr = ErrObjectLockInvalidHeaders
case objectlock.ErrMalformedXML:
apiErr = ErrMalformedXML
case errInvalidMaxParts:
apiErr = ErrInvalidMaxParts
case ioutil.ErrOverread:
apiErr = ErrExcessData
}
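With the hunk above, a canceled context now maps to ErrClientDisconnected while an expired deadline maps to ErrRequestTimedout; the old code sent both through ErrOperationTimedOut. A self-contained illustration of that split:

package main

import (
	"context"
	"errors"
	"fmt"
	"time"
)

// classify mirrors the new mapping: Canceled means the client went away,
// DeadlineExceeded means the request ran out of time server-side.
func classify(err error) string {
	switch {
	case errors.Is(err, context.Canceled):
		return "ClientDisconnected"
	case errors.Is(err, context.DeadlineExceeded):
		return "RequestTimeout"
	default:
		return "InternalError"
	}
}

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), time.Millisecond)
	defer cancel()
	<-ctx.Done()
	fmt.Println(classify(ctx.Err())) // RequestTimeout
}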
// Compression errors
switch err {
case errInvalidDecompressedSize:
if err == errInvalidDecompressedSize {
apiErr = ErrInvalidDecompressedSize
}
@@ -2035,7 +2183,7 @@ func toAPIErrorCode(ctx context.Context, err error) (apiErr APIErrorCode) {
// etcd specific errors, a key is always a bucket for us return
// ErrNoSuchBucket in such a case.
if err == dns.ErrNoEntriesFound {
if errors.Is(err, dns.ErrNoEntriesFound) {
return ErrNoSuchBucket
}
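The switch from err == dns.ErrNoEntriesFound to errors.Is matters once the sentinel arrives wrapped; a quick demonstration with a stand-in error:

package main

import (
	"errors"
	"fmt"
)

var errNoEntriesFound = errors.New("no entries found")

func lookup() error {
	// Wrapping with %w defeats a plain == comparison against the
	// sentinel, while errors.Is still unwraps and matches.
	return fmt.Errorf("dns lookup failed: %w", errNoEntriesFound)
}

func main() {
	err := lookup()
	fmt.Println(err == errNoEntriesFound)          // false
	fmt.Println(errors.Is(err, errNoEntriesFound)) // true
}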
@@ -2085,9 +2233,9 @@ func toAPIErrorCode(ctx context.Context, err error) (apiErr APIErrorCode) {
case InvalidPart:
apiErr = ErrInvalidPart
case InsufficientWriteQuorum:
apiErr = ErrSlowDown
apiErr = ErrSlowDownWrite
case InsufficientReadQuorum:
apiErr = ErrSlowDown
apiErr = ErrSlowDownRead
case InvalidMarkerPrefixCombination:
apiErr = ErrNotImplemented
case InvalidUploadIDKeyCombination:
@@ -2102,10 +2250,10 @@ func toAPIErrorCode(ctx context.Context, err error) (apiErr APIErrorCode) {
apiErr = ErrContentSHA256Mismatch
case hash.ChecksumMismatch:
apiErr = ErrContentChecksumMismatch
case ObjectTooLarge:
apiErr = ErrEntityTooLarge
case ObjectTooSmall:
case hash.SizeTooSmall:
apiErr = ErrEntityTooSmall
case hash.SizeTooLarge:
apiErr = ErrEntityTooLarge
case NotImplemented:
apiErr = ErrNotImplemented
case PartTooBig:
@@ -2150,7 +2298,8 @@ func toAPIErrorCode(ctx context.Context, err error) (apiErr APIErrorCode) {
apiErr = ErrTransitionStorageClassNotFoundError
case InvalidObjectState:
apiErr = ErrInvalidObjectState
case PreConditionFailed:
apiErr = ErrPreconditionFailed
case BucketQuotaExceeded:
apiErr = ErrAdminBucketQuotaExceeded
case *event.ErrInvalidEventName:
@@ -2159,6 +2308,10 @@ func toAPIErrorCode(ctx context.Context, err error) (apiErr APIErrorCode) {
apiErr = ErrARNNotification
case *event.ErrARNNotFound:
apiErr = ErrARNNotification
case *levent.ErrInvalidARN:
apiErr = ErrLambdaARNInvalid
case *levent.ErrARNNotFound:
apiErr = ErrLambdaARNNotFound
case *event.ErrUnknownRegion:
apiErr = ErrRegionNotification
case *event.ErrInvalidFilterName:
@@ -2176,7 +2329,7 @@ func toAPIErrorCode(ctx context.Context, err error) (apiErr APIErrorCode) {
case *event.ErrUnsupportedConfiguration:
apiErr = ErrUnsupportedNotification
case OperationTimedOut:
apiErr = ErrOperationTimedOut
apiErr = ErrRequestTimedout
case BackendDown:
apiErr = ErrBackendDown
case ObjectNameTooLong:
@@ -2222,44 +2375,29 @@ func toAPIError(ctx context.Context, err error) APIError {
}
apiErr := errorCodes.ToAPIErr(toAPIErrorCode(ctx, err))
e, ok := err.(dns.ErrInvalidBucketName)
if ok {
code := toAPIErrorCode(ctx, e)
apiErr = errorCodes.ToAPIErrWithErr(code, e)
}
if apiErr.Code == "NotImplemented" {
switch e := err.(type) {
case NotImplemented:
desc := e.Error()
if desc == "" {
desc = apiErr.Description
}
apiErr = APIError{
Code: apiErr.Code,
Description: desc,
HTTPStatusCode: apiErr.HTTPStatusCode,
}
return apiErr
switch apiErr.Code {
case "NotImplemented":
desc := fmt.Sprintf("%s (%v)", apiErr.Description, err)
apiErr = APIError{
Code: apiErr.Code,
Description: desc,
HTTPStatusCode: apiErr.HTTPStatusCode,
}
}
if apiErr.Code == "XMinioBackendDown" {
case "XMinioBackendDown":
apiErr.Description = fmt.Sprintf("%s (%v)", apiErr.Description, err)
return apiErr
}
if apiErr.Code == "InternalError" {
case "InternalError":
// If we see an internal error try to interpret
// any underlying errors if possible depending on
// their internal error types.
switch e := err.(type) {
case batchReplicationJobError:
case kms.Error:
apiErr = APIError{
Code: e.Code,
Description: e.Description,
Description: e.Err.Error(),
Code: e.APICode,
HTTPStatusCode: e.HTTPStatusCode,
}
case batchReplicationJobError:
apiErr = APIError(e)
case InvalidArgument:
apiErr = APIError{
Code: "InvalidArgument",
@@ -2268,27 +2406,25 @@ func toAPIError(ctx context.Context, err error) APIError {
}
case *xml.SyntaxError:
apiErr = APIError{
Code: "MalformedXML",
Description: fmt.Sprintf("%s (%s)", errorCodes[ErrMalformedXML].Description,
e.Error()),
Code: "MalformedXML",
Description: fmt.Sprintf("%s (%s)", errorCodes[ErrMalformedXML].Description, e),
HTTPStatusCode: errorCodes[ErrMalformedXML].HTTPStatusCode,
}
case url.EscapeError:
apiErr = APIError{
Code: "XMinioInvalidObjectName",
Description: fmt.Sprintf("%s (%s)", errorCodes[ErrInvalidObjectName].Description,
e.Error()),
Code: "XMinioInvalidObjectName",
Description: fmt.Sprintf("%s (%s)", errorCodes[ErrInvalidObjectName].Description, e),
HTTPStatusCode: http.StatusBadRequest,
}
case versioning.Error:
apiErr = APIError{
Code: "IllegalVersioningConfigurationException",
Description: fmt.Sprintf("Versioning configuration specified in the request is invalid. (%s)", e.Error()),
Description: fmt.Sprintf("Versioning configuration specified in the request is invalid. (%s)", e),
HTTPStatusCode: http.StatusBadRequest,
}
case lifecycle.Error:
apiErr = APIError{
Code: "InvalidRequest",
Code: "InvalidArgument",
Description: e.Error(),
HTTPStatusCode: http.StatusBadRequest,
}
@@ -2349,19 +2485,7 @@ func toAPIError(ctx context.Context, err error) APIError {
// Add more other SDK related errors here if any in future.
default:
//nolint:gocritic
if errors.Is(err, errMalformedEncoding) {
apiErr = APIError{
Code: "BadRequest",
Description: err.Error(),
HTTPStatusCode: http.StatusBadRequest,
}
} else if errors.Is(err, errChunkTooBig) {
apiErr = APIError{
Code: "BadRequest",
Description: err.Error(),
HTTPStatusCode: http.StatusBadRequest,
}
} else if errors.Is(err, strconv.ErrRange) {
if errors.Is(err, errMalformedEncoding) || errors.Is(err, errChunkTooBig) || errors.Is(err, strconv.ErrRange) {
apiErr = APIError{
Code: "BadRequest",
Description: err.Error(),

@@ -20,8 +20,6 @@ package cmd
import (
"context"
"errors"
"os"
"path/filepath"
"testing"
"github.com/minio/minio/internal/crypto"
@@ -42,8 +40,8 @@ var toAPIErrorTests = []struct {
{err: ObjectNameInvalid{}, errCode: ErrInvalidObjectName},
{err: InvalidUploadID{}, errCode: ErrNoSuchUpload},
{err: InvalidPart{}, errCode: ErrInvalidPart},
{err: InsufficientReadQuorum{}, errCode: ErrSlowDown},
{err: InsufficientWriteQuorum{}, errCode: ErrSlowDown},
{err: InsufficientReadQuorum{}, errCode: ErrSlowDownRead},
{err: InsufficientWriteQuorum{}, errCode: ErrSlowDownWrite},
{err: InvalidMarkerPrefixCombination{}, errCode: ErrNotImplemented},
{err: InvalidUploadIDKeyCombination{}, errCode: ErrNotImplemented},
{err: MalformedUploadID{}, errCode: ErrNoSuchUpload},
@@ -67,11 +65,6 @@ var toAPIErrorTests = []struct {
}
func TestAPIErrCode(t *testing.T) {
disk := filepath.Join(globalTestTmpDir, "minio-"+nextSuffix())
defer os.RemoveAll(disk)
initFSObjects(disk, t)
ctx := context.Background()
for i, testCase := range toAPIErrorTests {
errCode := toAPIErrorCode(ctx, testCase.err)
@@ -80,3 +73,19 @@ func TestAPIErrCode(t *testing.T) {
}
}
}
// Check if an API error is properly defined
func TestAPIErrCodeDefinition(t *testing.T) {
for errAPI := ErrNone + 1; errAPI < apiErrCodeEnd; errAPI++ {
errCode, ok := errorCodes[errAPI]
if !ok {
t.Fatal(errAPI, "error code is not defined in the API error code table")
}
if errCode.Code == "" {
t.Fatal(errAPI, "error code has an empty XML code")
}
if errCode.HTTPStatusCode == 0 {
t.Fatal(errAPI, "error code has a zero HTTP status code")
}
}
}
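The new test depends on apiErrCodeEnd being declared last in the const block, so the loop from ErrNone+1 visits every real code. The iota-sentinel pattern it relies on, in miniature (all names here hypothetical):

package sketch

type code int

const (
	codeNone code = iota
	codeA
	codeB
	codeEnd // sentinel: must stay last so loops can range over every code
)

var names = map[code]string{codeA: "A", codeB: "B"}

// covered performs the same exhaustiveness check the test applies to
// errorCodes: every code between the bounds must have a table entry.
func covered() bool {
	for c := codeNone + 1; c < codeEnd; c++ {
		if _, ok := names[c]; !ok {
			return false
		}
	}
	return true
}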

@@ -20,13 +20,14 @@ package cmd
import (
"bytes"
"encoding/json"
"encoding/xml"
"fmt"
"net/http"
"net/url"
"strconv"
"strings"
"time"
"github.com/minio/minio-go/v7/pkg/tags"
"github.com/minio/minio/internal/crypto"
xhttp "github.com/minio/minio/internal/http"
"github.com/minio/minio/internal/logger"
@@ -64,15 +65,31 @@ func setCommonHeaders(w http.ResponseWriter) {
// Encodes the response headers into XML format.
func encodeResponse(response interface{}) []byte {
var bytesBuffer bytes.Buffer
bytesBuffer.WriteString(xxml.Header)
buf, err := xxml.Marshal(response)
if err != nil {
var buf bytes.Buffer
buf.WriteString(xml.Header)
if err := xml.NewEncoder(&buf).Encode(response); err != nil {
logger.LogIf(GlobalContext, err)
return nil
}
bytesBuffer.Write(buf)
return bytesBuffer.Bytes()
return buf.Bytes()
}
// Use this encodeResponseList() to support control characters
// this function must be used by only ListObjects() for objects
// with control characters, this is a specialized extension
// to support AWS S3 compatible behavior.
//
// Do not use this function for anything other than ListObjects()
// variants, please open a github discussion if you wish to use
// this in other places.
func encodeResponseList(response interface{}) []byte {
var buf bytes.Buffer
buf.WriteString(xxml.Header)
if err := xxml.NewEncoder(&buf).Encode(response); err != nil {
logger.LogIf(GlobalContext, err)
return nil
}
return buf.Bytes()
}
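Both encoders above share one shape: write the XML header, then stream the value through an encoder rather than Marshal-then-Write; only the package differs (stdlib encoding/xml vs the minio/xxml fork, which tolerates control characters in object names). The shape with the stdlib encoder only, using a made-up payload type:

package sketch

import (
	"bytes"
	"encoding/xml"
)

// listEntry is a hypothetical payload, standing in for a real S3 response type.
type listEntry struct {
	XMLName xml.Name `xml:"Contents"`
	Key     string   `xml:"Key"`
}

// encode writes the XML header and streams the value, returning nil on
// failure just as encodeResponse above does after logging.
func encode(v interface{}) []byte {
	var buf bytes.Buffer
	buf.WriteString(xml.Header)
	if err := xml.NewEncoder(&buf).Encode(v); err != nil {
		return nil
	}
	return buf.Bytes()
}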
// Encodes the response headers into JSON format.
@@ -123,9 +140,13 @@ func setObjectHeaders(w http.ResponseWriter, objInfo ObjectInfo, rs *HTTPRangeSp
// Set tag count if object has tags
if len(objInfo.UserTags) > 0 {
tags, _ := url.ParseQuery(objInfo.UserTags)
if len(tags) > 0 {
w.Header()[xhttp.AmzTagCount] = []string{strconv.Itoa(len(tags))}
tags, _ := tags.ParseObjectTags(objInfo.UserTags)
if tags.Count() > 0 {
w.Header()[xhttp.AmzTagCount] = []string{strconv.Itoa(tags.Count())}
if opts.Tagging {
// This is MinIO only extension to return back tags along with the count.
w.Header()[xhttp.AmzObjectTagging] = []string{objInfo.UserTags}
}
}
}
@@ -136,7 +157,7 @@ func setObjectHeaders(w http.ResponseWriter, objInfo ObjectInfo, rs *HTTPRangeSp
continue
}
if strings.HasPrefix(strings.ToLower(k), ReservedMetadataPrefixLower) {
if stringsHasPrefixFold(k, ReservedMetadataPrefixLower) {
// Do not need to send any internal metadata
// values to client.
continue
@@ -149,7 +170,7 @@ func setObjectHeaders(w http.ResponseWriter, objInfo ObjectInfo, rs *HTTPRangeSp
var isSet bool
for _, userMetadataPrefix := range userMetadataKeyPrefixes {
if !strings.HasPrefix(strings.ToLower(k), strings.ToLower(userMetadataPrefix)) {
if !stringsHasPrefixFold(k, userMetadataPrefix) {
continue
}
w.Header()[strings.ToLower(k)] = []string{v}
@@ -186,7 +207,7 @@ func setObjectHeaders(w http.ResponseWriter, objInfo ObjectInfo, rs *HTTPRangeSp
}
// Set the relevant version ID as part of the response header.
if objInfo.VersionID != "" {
if objInfo.VersionID != "" && objInfo.VersionID != nullVersionID {
w.Header()[xhttp.AmzVersionID] = []string{objInfo.VersionID}
}

@@ -37,8 +37,8 @@ func getListObjectsV1Args(values url.Values) (prefix, marker, delimiter string,
maxkeys = maxObjectList
}
prefix = trimLeadingSlash(values.Get("prefix"))
marker = trimLeadingSlash(values.Get("marker"))
prefix = values.Get("prefix")
marker = values.Get("marker")
delimiter = values.Get("delimiter")
encodingType = values.Get("encoding-type")
return
@@ -57,8 +57,8 @@ func getListBucketObjectVersionsArgs(values url.Values) (prefix, marker, delimit
maxkeys = maxObjectList
}
prefix = trimLeadingSlash(values.Get("prefix"))
marker = trimLeadingSlash(values.Get("key-marker"))
prefix = values.Get("prefix")
marker = values.Get("key-marker")
delimiter = values.Get("delimiter")
encodingType = values.Get("encoding-type")
versionIDMarker = values.Get("version-id-marker")
@@ -87,8 +87,8 @@ func getListObjectsV2Args(values url.Values) (prefix, token, startAfter, delimit
maxkeys = maxObjectList
}
prefix = trimLeadingSlash(values.Get("prefix"))
startAfter = trimLeadingSlash(values.Get("start-after"))
prefix = values.Get("prefix")
startAfter = values.Get("start-after")
delimiter = values.Get("delimiter")
fetchOwner = values.Get("fetch-owner") == "true"
encodingType = values.Get("encoding-type")
@@ -118,8 +118,8 @@ func getBucketMultipartResources(values url.Values) (prefix, keyMarker, uploadID
maxUploads = maxUploadsList
}
prefix = trimLeadingSlash(values.Get("prefix"))
keyMarker = trimLeadingSlash(values.Get("key-marker"))
prefix = values.Get("prefix")
keyMarker = values.Get("key-marker")
uploadIDMarker = values.Get("upload-id-marker")
delimiter = values.Get("delimiter")
encodingType = values.Get("encoding-type")
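Each helper above now passes prefix and marker values straight through instead of trimming a leading slash, so a prefix that genuinely begins with "/" is preserved. A small demonstration of the effect:

package main

import (
	"fmt"
	"net/url"
)

func main() {
	values, _ := url.ParseQuery("prefix=/photos/&key-marker=/photos/2023.jpg")
	// After the change above, the leading "/" survives, so this prefix
	// only matches keys that really start with a slash.
	fmt.Println(values.Get("prefix"))     // /photos/
	fmt.Println(values.Get("key-marker")) // /photos/2023.jpg
}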

@@ -29,21 +29,21 @@ import (
"strings"
"time"
"github.com/minio/minio/internal/amztime"
"github.com/minio/minio/internal/crypto"
"github.com/minio/minio/internal/handlers"
"github.com/minio/minio/internal/hash"
xhttp "github.com/minio/minio/internal/http"
"github.com/minio/minio/internal/logger"
"github.com/minio/pkg/bucket/policy"
xxml "github.com/minio/xxml"
)
const (
// RFC3339 a subset of the ISO8601 timestamp format. e.g 2014-04-29T18:30:38Z
iso8601TimeFormat = "2006-01-02T15:04:05.000Z" // Reply date format with nanosecond precision.
maxObjectList = 1000 // Limit number of objects in a listObjectsResponse/listObjectsVersionsResponse.
maxDeleteList = 1000 // Limit number of objects deleted in a delete call.
maxUploadsList = 10000 // Limit number of uploads in a listUploadsResponse.
maxPartsList = 10000 // Limit number of parts in a listPartsResponse.
maxObjectList = 1000 // Limit number of objects in a listObjectsResponse/listObjectsVersionsResponse.
maxDeleteList = 1000 // Limit number of objects deleted in a delete call.
maxUploadsList = 10000 // Limit number of uploads in a listUploadsResponse.
maxPartsList = 10000 // Limit number of parts in a listPartsResponse.
)
// LocationResponse - format for location response.
@@ -85,7 +85,7 @@ type ListVersionsResponse struct {
VersionIDMarker string `xml:"VersionIdMarker"`
MaxKeys int
Delimiter string
Delimiter string `xml:"Delimiter,omitempty"`
// A flag that indicates whether or not ListObjects returned all of the results
// that satisfied the search criteria.
IsTruncated bool
@@ -115,7 +115,7 @@ type ListObjectsResponse struct {
NextMarker string `xml:"NextMarker,omitempty"`
MaxKeys int
Delimiter string
Delimiter string `xml:"Delimiter,omitempty"`
// A flag that indicates whether or not ListObjects returned all of the results
// that satisfied the search criteria.
IsTruncated bool
@@ -146,7 +146,7 @@ type ListObjectsV2Response struct {
KeyCount int
MaxKeys int
Delimiter string
Delimiter string `xml:"Delimiter,omitempty"`
// A flag that indicates whether or not ListObjects returned all of the results
// that satisfied the search criteria.
IsTruncated bool
@@ -205,7 +205,7 @@ type ListMultipartUploadsResponse struct {
UploadIDMarker string `xml:"UploadIdMarker"`
NextKeyMarker string
NextUploadIDMarker string `xml:"NextUploadIdMarker"`
Delimiter string
Delimiter string `xml:"Delimiter,omitempty"`
Prefix string
EncodingType string `xml:"EncodingType,omitempty"`
MaxUploads int
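These hunks add omitempty to Delimiter so an unset delimiter no longer emits an empty <Delimiter/> element. The tag's effect in isolation:

package main

import (
	"encoding/xml"
	"fmt"
)

type resp struct {
	XMLName   xml.Name `xml:"ListBucketResult"`
	Delimiter string   `xml:"Delimiter,omitempty"`
}

func main() {
	out, _ := xml.Marshal(resp{})
	// No <Delimiter></Delimiter> appears when the field is empty.
	fmt.Println(string(out)) // <ListBucketResult></ListBucketResult>
}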
@@ -315,12 +315,12 @@ func (s *Metadata) Set(k, v string) {
}
type xmlKeyEntry struct {
XMLName xml.Name
XMLName xxml.Name
Value string `xml:",chardata"`
}
// MarshalXML - StringMap marshals into XML.
func (s *Metadata) MarshalXML(e *xml.Encoder, start xml.StartElement) error {
func (s *Metadata) MarshalXML(e *xxml.Encoder, start xxml.StartElement) error {
if s == nil {
return nil
}
@@ -335,7 +335,7 @@ func (s *Metadata) MarshalXML(e *xml.Encoder, start xml.StartElement) error {
for _, item := range s.Items {
if err := e.Encode(xmlKeyEntry{
XMLName: xml.Name{Local: item.Key},
XMLName: xxml.Name{Local: item.Key},
Value: item.Value,
}); err != nil {
return err
@@ -345,6 +345,13 @@ func (s *Metadata) MarshalXML(e *xml.Encoder, start xml.StartElement) error {
return e.EncodeToken(start.End())
}
// ObjectInternalInfo contains some internal information about a given
// object, it will be printed in listing calls with enabled metadata.
type ObjectInternalInfo struct {
K int // Data blocks
M int // Parity blocks
}
// Object container for object metadata
type Object struct {
Key string
@@ -353,13 +360,16 @@ type Object struct {
Size int64
// Owner of the object.
Owner Owner
Owner *Owner `xml:"Owner,omitempty"`
// The class of storage used to store the object.
StorageClass string
// UserMetadata user-defined metadata
UserMetadata *Metadata `xml:"UserMetadata,omitempty"`
UserTags string `xml:"UserTags,omitempty"`
Internal *ObjectInternalInfo `xml:"Internal,omitempty"`
}
// CopyObjectResponse container returns ETag and LastModified of the successfully copied object
@@ -482,7 +492,7 @@ func generateListBucketsResponse(buckets []BucketInfo) ListBucketsResponse {
for _, bucket := range buckets {
listbuckets = append(listbuckets, Bucket{
Name: bucket.Name,
CreationDate: bucket.Created.UTC().Format(iso8601TimeFormat),
CreationDate: amztime.ISO8601Format(bucket.Created.UTC()),
})
}
@@ -493,22 +503,30 @@ func generateListBucketsResponse(buckets []BucketInfo) ListBucketsResponse {
}
// generates an ListBucketVersions response for the said bucket with other enumerated options.
func generateListVersionsResponse(bucket, prefix, marker, versionIDMarker, delimiter, encodingType string, maxKeys int, resp ListObjectVersionsInfo) ListVersionsResponse {
func generateListVersionsResponse(bucket, prefix, marker, versionIDMarker, delimiter, encodingType string, maxKeys int, resp ListObjectVersionsInfo, metadata metaCheckFn) ListVersionsResponse {
versions := make([]ObjectVersion, 0, len(resp.Objects))
owner := Owner{
owner := &Owner{
ID: globalMinioDefaultOwnerID,
DisplayName: "minio",
}
data := ListVersionsResponse{}
var lastObjMetaName string
var tagErr, metaErr APIErrorCode = -1, -1
for _, object := range resp.Objects {
if object.Name == "" {
continue
}
// Cache checks for the same object
if metadata != nil && lastObjMetaName != object.Name {
tagErr = metadata(object.Name, policy.GetObjectTaggingAction)
metaErr = metadata(object.Name, policy.GetObjectAction)
lastObjMetaName = object.Name
}
content := ObjectVersion{}
content.Key = s3EncodeName(object.Name, encodingType)
content.LastModified = object.ModTime.UTC().Format(iso8601TimeFormat)
content.LastModified = amztime.ISO8601Format(object.ModTime.UTC())
if object.ETag != "" {
content.ETag = "\"" + object.ETag + "\""
}
@@ -518,6 +536,36 @@ func generateListVersionsResponse(bucket, prefix, marker, versionIDMarker, delim
} else {
content.StorageClass = globalMinioDefaultStorageClass
}
if tagErr == ErrNone {
content.UserTags = object.UserTags
}
if metaErr == ErrNone {
content.UserMetadata = &Metadata{}
switch kind, _ := crypto.IsEncrypted(object.UserDefined); kind {
case crypto.S3:
content.UserMetadata.Set(xhttp.AmzServerSideEncryption, xhttp.AmzEncryptionAES)
case crypto.S3KMS:
content.UserMetadata.Set(xhttp.AmzServerSideEncryption, xhttp.AmzEncryptionKMS)
case crypto.SSEC:
content.UserMetadata.Set(xhttp.AmzServerSideEncryptionCustomerAlgorithm, xhttp.AmzEncryptionAES)
}
for k, v := range cleanMinioInternalMetadataKeys(object.UserDefined) {
if stringsHasPrefixFold(k, ReservedMetadataPrefixLower) {
// Do not need to send any internal metadata
// values to client.
continue
}
// https://github.com/google/security-research/security/advisories/GHSA-76wf-9vgp-pj7w
if equals(k, xhttp.AmzMetaUnencryptedContentLength, xhttp.AmzMetaUnencryptedContentMD5) {
continue
}
content.UserMetadata.Set(k, v)
}
content.Internal = &ObjectInternalInfo{
K: object.DataBlocks,
M: object.ParityBlocks,
}
}
content.Owner = owner
content.VersionID = object.VersionID
if content.VersionID == "" {
@@ -554,7 +602,7 @@ func generateListVersionsResponse(bucket, prefix, marker, versionIDMarker, delim
// generates a ListObjectsV1 response for the said bucket with other enumerated options.
func generateListObjectsV1Response(bucket, prefix, marker, delimiter, encodingType string, maxKeys int, resp ListObjectsInfo) ListObjectsResponse {
contents := make([]Object, 0, len(resp.Objects))
owner := Owner{
owner := &Owner{
ID: globalMinioDefaultOwnerID,
DisplayName: "minio",
}
@@ -566,7 +614,7 @@ func generateListObjectsV1Response(bucket, prefix, marker, delimiter, encodingTy
continue
}
content.Key = s3EncodeName(object.Name, encodingType)
content.LastModified = object.ModTime.UTC().Format(iso8601TimeFormat)
content.LastModified = amztime.ISO8601Format(object.ModTime.UTC())
if object.ETag != "" {
content.ETag = "\"" + object.ETag + "\""
}
@@ -601,12 +649,16 @@ func generateListObjectsV1Response(bucket, prefix, marker, delimiter, encodingTy
}
// generates a ListObjectsV2 response for the said bucket with other enumerated options.
func generateListObjectsV2Response(bucket, prefix, token, nextToken, startAfter, delimiter, encodingType string, fetchOwner, isTruncated bool, maxKeys int, objects []ObjectInfo, prefixes []string, metadata bool) ListObjectsV2Response {
func generateListObjectsV2Response(bucket, prefix, token, nextToken, startAfter, delimiter, encodingType string, fetchOwner, isTruncated bool, maxKeys int, objects []ObjectInfo, prefixes []string, metadata metaCheckFn) ListObjectsV2Response {
contents := make([]Object, 0, len(objects))
owner := Owner{
ID: globalMinioDefaultOwnerID,
DisplayName: "minio",
var owner *Owner
if fetchOwner {
owner = &Owner{
ID: globalMinioDefaultOwnerID,
DisplayName: "minio",
}
}
data := ListObjectsV2Response{}
for _, object := range objects {
@@ -615,7 +667,7 @@ func generateListObjectsV2Response(bucket, prefix, token, nextToken, startAfter,
continue
}
content.Key = s3EncodeName(object.Name, encodingType)
content.LastModified = object.ModTime.UTC().Format(iso8601TimeFormat)
content.LastModified = amztime.ISO8601Format(object.ModTime.UTC())
if object.ETag != "" {
content.ETag = "\"" + object.ETag + "\""
}
@@ -626,27 +678,36 @@ func generateListObjectsV2Response(bucket, prefix, token, nextToken, startAfter,
content.StorageClass = globalMinioDefaultStorageClass
}
content.Owner = owner
if metadata {
content.UserMetadata = &Metadata{}
switch kind, _ := crypto.IsEncrypted(object.UserDefined); kind {
case crypto.S3:
content.UserMetadata.Set(xhttp.AmzServerSideEncryption, xhttp.AmzEncryptionAES)
case crypto.S3KMS:
content.UserMetadata.Set(xhttp.AmzServerSideEncryption, xhttp.AmzEncryptionKMS)
case crypto.SSEC:
content.UserMetadata.Set(xhttp.AmzServerSideEncryptionCustomerAlgorithm, xhttp.AmzEncryptionAES)
if metadata != nil {
if metadata(object.Name, policy.GetObjectTaggingAction) == ErrNone {
content.UserTags = object.UserTags
}
for k, v := range cleanMinioInternalMetadataKeys(object.UserDefined) {
if strings.HasPrefix(strings.ToLower(k), ReservedMetadataPrefixLower) {
// Do not need to send any internal metadata
// values to client.
continue
if metadata(object.Name, policy.GetObjectAction) == ErrNone {
content.UserMetadata = &Metadata{}
switch kind, _ := crypto.IsEncrypted(object.UserDefined); kind {
case crypto.S3:
content.UserMetadata.Set(xhttp.AmzServerSideEncryption, xhttp.AmzEncryptionAES)
case crypto.S3KMS:
content.UserMetadata.Set(xhttp.AmzServerSideEncryption, xhttp.AmzEncryptionKMS)
case crypto.SSEC:
content.UserMetadata.Set(xhttp.AmzServerSideEncryptionCustomerAlgorithm, xhttp.AmzEncryptionAES)
}
// https://github.com/google/security-research/security/advisories/GHSA-76wf-9vgp-pj7w
if equals(k, xhttp.AmzMetaUnencryptedContentLength, xhttp.AmzMetaUnencryptedContentMD5) {
continue
for k, v := range cleanMinioInternalMetadataKeys(object.UserDefined) {
if stringsHasPrefixFold(k, ReservedMetadataPrefixLower) {
// Do not need to send any internal metadata
// values to client.
continue
}
// https://github.com/google/security-research/security/advisories/GHSA-76wf-9vgp-pj7w
if equals(k, xhttp.AmzMetaUnencryptedContentLength, xhttp.AmzMetaUnencryptedContentMD5) {
continue
}
content.UserMetadata.Set(k, v)
}
content.Internal = &ObjectInternalInfo{
K: object.DataBlocks,
M: object.ParityBlocks,
}
content.UserMetadata.Set(k, v)
}
}
contents = append(contents, content)
@@ -674,11 +735,13 @@ func generateListObjectsV2Response(bucket, prefix, token, nextToken, startAfter,
return data
}
type metaCheckFn = func(name string, action policy.Action) (s3Err APIErrorCode)
// generates CopyObjectResponse from etag and lastModified time.
func generateCopyObjectResponse(etag string, lastModified time.Time) CopyObjectResponse {
return CopyObjectResponse{
ETag: "\"" + etag + "\"",
LastModified: lastModified.UTC().Format(iso8601TimeFormat),
LastModified: amztime.ISO8601Format(lastModified.UTC()),
}
}
@@ -686,7 +749,7 @@ func generateCopyObjectResponse(etag string, lastModified time.Time) CopyObjectR
func generateCopyObjectPartResponse(etag string, lastModified time.Time) CopyObjectPartResponse {
return CopyObjectPartResponse{
ETag: "\"" + etag + "\"",
LastModified: lastModified.UTC().Format(iso8601TimeFormat),
LastModified: amztime.ISO8601Format(lastModified.UTC()),
}
}
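The repeated change from lastModified.UTC().Format(iso8601TimeFormat) to amztime.ISO8601Format(...) centralizes timestamp formatting in one helper. A minimal sketch of an equivalent formatter, assuming the usual S3 millisecond layout; the real helper in internal/amztime may handle additional edge cases:
package main

import (
	"fmt"
	"time"
)

// Assumed layout: S3 responses carry millisecond-precision UTC timestamps.
const iso8601Layout = "2006-01-02T15:04:05.000Z"

func iso8601Format(t time.Time) string {
	return t.UTC().Format(iso8601Layout)
}

func main() {
	ts := time.Date(2023, 3, 1, 12, 30, 45, 123000000, time.UTC)
	fmt.Println(iso8601Format(ts)) // 2023-03-01T12:30:45.123Z
}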
@@ -701,7 +764,7 @@ func generateInitiateMultipartUploadResponse(bucket, key, uploadID string) Initi
// generates CompleteMultipartUploadResponse for given bucket, key, location and ETag.
func generateCompleteMultpartUploadResponse(bucket, key, location string, oi ObjectInfo) CompleteMultipartUploadResponse {
cs := oi.decryptChecksums()
cs := oi.decryptChecksums(0)
c := CompleteMultipartUploadResponse{
Location: location,
Bucket: bucket,
@@ -746,7 +809,7 @@ func generateListPartsResponse(partsInfo ListPartsInfo, encodingType string) Lis
newPart.PartNumber = part.PartNumber
newPart.ETag = "\"" + part.ETag + "\""
newPart.Size = part.Size
newPart.LastModified = part.LastModified.UTC().Format(iso8601TimeFormat)
newPart.LastModified = amztime.ISO8601Format(part.LastModified.UTC())
newPart.ChecksumCRC32 = part.ChecksumCRC32
newPart.ChecksumCRC32C = part.ChecksumCRC32C
newPart.ChecksumSHA1 = part.ChecksumSHA1
@@ -780,7 +843,7 @@ func generateListMultipartUploadsResponse(bucket string, multipartsInfo ListMult
newUpload := Upload{}
newUpload.UploadID = upload.UploadID
newUpload.Key = s3EncodeName(upload.Object, encodingType)
newUpload.Initiated = upload.Initiated.UTC().Format(iso8601TimeFormat)
newUpload.Initiated = amztime.ISO8601Format(upload.Initiated.UTC())
listMultipartUploadsResponse.Uploads[index] = newUpload
}
return listMultipartUploadsResponse
@@ -876,12 +939,14 @@ func writeErrorResponse(ctx context.Context, w http.ResponseWriter, err APIError
// Generate error response.
errorResponse := getAPIErrorResponse(ctx, err, reqURL.Path,
w.Header().Get(xhttp.AmzRequestID), globalDeploymentID)
w.Header().Get(xhttp.AmzRequestID), w.Header().Get(xhttp.AmzRequestHostID))
encodedErrorResponse := encodeResponse(errorResponse)
writeResponse(w, err.HTTPStatusCode, encodedErrorResponse, mimeXML)
}
func writeErrorResponseHeadersOnly(w http.ResponseWriter, err APIError) {
w.Header().Set(xMinIOErrCodeHeader, err.Code)
w.Header().Set(xMinIOErrDescHeader, "\""+err.Description+"\"")
writeResponse(w, err.HTTPStatusCode, nil, mimeNone)
}
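With the two new headers, clients of HEAD-style requests (which carry no response body) can still recover the failure. A hedged illustration, assuming the constants resolve to x-minio-error-code and x-minio-error-desc as their identifiers suggest:
package main

import (
	"fmt"
	"net/http"
)

func main() {
	// HEAD responses have no XML error body, so the error surfaces in headers.
	resp, err := http.Head("http://localhost:9000/bucket/missing-object")
	if err != nil {
		return
	}
	defer resp.Body.Close()
	fmt.Println(resp.Header.Get("x-minio-error-code")) // e.g. NoSuchKey
	fmt.Println(resp.Header.Get("x-minio-error-desc"))
}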
@@ -894,7 +959,7 @@ func writeErrorResponseString(ctx context.Context, w http.ResponseWriter, err AP
// useful for admin APIs.
func writeErrorResponseJSON(ctx context.Context, w http.ResponseWriter, err APIError, reqURL *url.URL) {
// Generate error response.
errorResponse := getAPIErrorResponse(ctx, err, reqURL.Path, w.Header().Get(xhttp.AmzRequestID), globalDeploymentID)
errorResponse := getAPIErrorResponse(ctx, err, reqURL.Path, w.Header().Get(xhttp.AmzRequestID), w.Header().Get(xhttp.AmzRequestHostID))
encodedErrorResponse := encodeResponseJSON(errorResponse)
writeResponse(w, err.HTTPStatusCode, encodedErrorResponse, mimeJSON)
}

View File

@@ -22,11 +22,11 @@ import (
"net"
"net/http"
"github.com/gorilla/mux"
"github.com/klauspost/compress/gzhttp"
"github.com/minio/console/restapi"
xhttp "github.com/minio/minio/internal/http"
"github.com/minio/minio/internal/logger"
"github.com/minio/mux"
"github.com/minio/pkg/wildcard"
"github.com/rs/cors"
)
@@ -245,10 +245,10 @@ func registerAPIRouter(router *mux.Router) {
router.Methods(http.MethodPut).Path("/{object:.+}").
HeadersRegexp(xhttp.AmzCopySource, ".*?(\\/|%2F).*?").
HandlerFunc(collectAPIStats("copyobjectpart", maxClients(gz(httpTraceAll(api.CopyObjectPartHandler))))).
Queries("partNumber", "{partNumber:[0-9]+}", "uploadId", "{uploadId:.*}")
Queries("partNumber", "{partNumber:.*}", "uploadId", "{uploadId:.*}")
// PutObjectPart
router.Methods(http.MethodPut).Path("/{object:.+}").HandlerFunc(
collectAPIStats("putobjectpart", maxClients(gz(httpTraceHdrs(api.PutObjectPartHandler))))).Queries("partNumber", "{partNumber:[0-9]+}", "uploadId", "{uploadId:.*}")
collectAPIStats("putobjectpart", maxClients(gz(httpTraceHdrs(api.PutObjectPartHandler))))).Queries("partNumber", "{partNumber:.*}", "uploadId", "{uploadId:.*}")
// ListObjectParts
router.Methods(http.MethodGet).Path("/{object:.+}").HandlerFunc(
collectAPIStats("listobjectparts", maxClients(gz(httpTraceAll(api.ListObjectPartsHandler))))).Queries("uploadId", "{uploadId:.*}")
@@ -285,7 +285,10 @@ func registerAPIRouter(router *mux.Router) {
// GetObjectLegalHold
router.Methods(http.MethodGet).Path("/{object:.+}").HandlerFunc(
collectAPIStats("getobjectlegalhold", maxClients(gz(httpTraceAll(api.GetObjectLegalHoldHandler))))).Queries("legal-hold", "")
// GetObject - note gzip compression is *not* added due to Range requests.
// GetObject with lambda ARNs
router.Methods(http.MethodGet).Path("/{object:.+}").HandlerFunc(
collectAPIStats("getobject", maxClients(gz(httpTraceHdrs(api.GetObjectLambdaHandler))))).Queries("lambdaArn", "{lambdaArn:.*}")
// GetObject
router.Methods(http.MethodGet).Path("/{object:.+}").HandlerFunc(
collectAPIStats("getobject", maxClients(gz(httpTraceHdrs(api.GetObjectHandler)))))
// CopyObject
@@ -389,6 +392,9 @@ func registerAPIRouter(router *mux.Router) {
router.Methods(http.MethodGet).HandlerFunc(
collectAPIStats("listobjectsv2", maxClients(gz(httpTraceAll(api.ListObjectsV2Handler))))).Queries("list-type", "2")
// ListObjectVersions
router.Methods(http.MethodGet).HandlerFunc(
collectAPIStats("listobjectversions", maxClients(gz(httpTraceAll(api.ListObjectVersionsMHandler))))).Queries("versions", "", "metadata", "true")
// ListObjectVersions
router.Methods(http.MethodGet).HandlerFunc(
collectAPIStats("listobjectversions", maxClients(gz(httpTraceAll(api.ListObjectVersionsHandler))))).Queries("versions", "")
// GetBucketPolicyStatus
@@ -431,8 +437,9 @@ func registerAPIRouter(router *mux.Router) {
router.Methods(http.MethodHead).HandlerFunc(
collectAPIStats("headbucket", maxClients(gz(httpTraceAll(api.HeadBucketHandler)))))
// PostPolicy
router.Methods(http.MethodPost).HeadersRegexp(xhttp.ContentType, "multipart/form-data*").HandlerFunc(
collectAPIStats("postpolicybucket", maxClients(gz(httpTraceHdrs(api.PostPolicyBucketHandler)))))
router.Methods(http.MethodPost).MatcherFunc(func(r *http.Request, _ *mux.RouteMatch) bool {
return isRequestPostPolicySignatureV4(r)
}).HandlerFunc(collectAPIStats("postpolicybucket", maxClients(gz(httpTraceHdrs(api.PostPolicyBucketHandler)))))
// DeleteMultipleObjects
router.Methods(http.MethodPost).HandlerFunc(
collectAPIStats("deletemultipleobjects", maxClients(gz(httpTraceAll(api.DeleteMultipleObjectsHandler))))).Queries("delete", "")
@@ -454,10 +461,16 @@ func registerAPIRouter(router *mux.Router) {
// MinIO extension API for replication.
//
// GetBucketReplicationMetrics
router.Methods(http.MethodGet).HandlerFunc(
collectAPIStats("getbucketreplicationmetrics", maxClients(gz(httpTraceAll(api.GetBucketReplicationMetricsV2Handler))))).Queries("replication-metrics", "2")
// deprecated handler
router.Methods(http.MethodGet).HandlerFunc(
collectAPIStats("getbucketreplicationmetrics", maxClients(gz(httpTraceAll(api.GetBucketReplicationMetricsHandler))))).Queries("replication-metrics", "")
// ValidateBucketReplicationCreds
router.Methods(http.MethodGet).HandlerFunc(
collectAPIStats("checkbucketreplicationconfiguration", maxClients(gz(httpTraceAll(api.ValidateBucketReplicationCredsHandler))))).Queries("replication-check", "")
// Register rejected bucket APIs
for _, r := range rejectedBucketAPIs {
router.Methods(r.methods...).
@@ -513,8 +526,7 @@ func corsHandler(handler http.Handler) http.Handler {
"x-amz*",
"*",
}
return cors.New(cors.Options{
opts := cors.Options{
AllowOriginFunc: func(origin string) bool {
for _, allowedOrigin := range globalAPIConfig.getCorsAllowOrigins() {
if wildcard.MatchSimple(allowedOrigin, origin) {
@@ -535,5 +547,6 @@ func corsHandler(handler http.Handler) http.Handler {
AllowedHeaders: commonS3Headers,
ExposedHeaders: commonS3Headers,
AllowCredentials: true,
}).Handler(handler)
}
return cors.New(opts).Handler(handler)
}

View File

@@ -94,14 +94,8 @@ func s3URLEncode(s string) string {
}
// s3EncodeName encodes string in response when encodingType is specified in AWS S3 requests.
func s3EncodeName(name string, encodingType string) (result string) {
// Quick path to exit
if encodingType == "" {
return name
}
encodingType = strings.ToLower(encodingType)
switch encodingType {
case "url":
func s3EncodeName(name, encodingType string) string {
if strings.ToLower(encodingType) == "url" {
return s3URLEncode(name)
}
return name
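A quick sanity sketch for the simplified branch (assuming it sits in package cmd next to s3URLEncode, with the usual testing import): any encoding type other than "url", in any letter case, must pass the name through unchanged.
func TestS3EncodeNamePassThrough(t *testing.T) {
	// Empty and unrecognized encoding types leave the key untouched.
	if got := s3EncodeName("key/with spaces", ""); got != "key/with spaces" {
		t.Fatalf("expected pass-through, got %q", got)
	}
	if got := s3EncodeName("key", "unknown"); got != "key" {
		t.Fatalf("expected pass-through, got %q", got)
	}
}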

File diff suppressed because one or more lines are too long

View File

@@ -25,6 +25,7 @@ import (
"encoding/hex"
"errors"
"io"
"mime"
"net/http"
"net/url"
"strconv"
@@ -39,6 +40,7 @@ import (
xhttp "github.com/minio/minio/internal/http"
xjwt "github.com/minio/minio/internal/jwt"
"github.com/minio/minio/internal/logger"
"github.com/minio/minio/internal/mcontext"
"github.com/minio/pkg/bucket/policy"
iampolicy "github.com/minio/pkg/iam/policy"
)
@@ -73,8 +75,11 @@ func isRequestPresignedSignatureV2(r *http.Request) bool {
// Verify if request has AWS Post policy Signature Version '4'.
func isRequestPostPolicySignatureV4(r *http.Request) bool {
return strings.Contains(r.Header.Get(xhttp.ContentType), "multipart/form-data") &&
r.Method == http.MethodPost
mediaType, _, err := mime.ParseMediaType(r.Header.Get(xhttp.ContentType))
if err != nil {
return false
}
return mediaType == "multipart/form-data" && r.Method == http.MethodPost
}
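mime.ParseMediaType is stricter than the old strings.Contains check: it canonicalizes the media type to lower case, strips parameters such as the boundary, and rejects malformed headers. A small standalone demonstration:
package main

import (
	"fmt"
	"mime"
)

func main() {
	// Parameters are stripped and the type is lower-cased, so the
	// comparison against "multipart/form-data" becomes exact.
	mediaType, params, err := mime.ParseMediaType("Multipart/Form-Data; boundary=9b0a")
	fmt.Println(mediaType, params["boundary"], err) // multipart/form-data 9b0a <nil>

	// A malformed header now fails outright instead of slipping past a
	// substring check.
	_, _, err = mime.ParseMediaType("multipart/form-data; boundary")
	fmt.Println(err != nil) // true
}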
// Verify if the request has AWS Streaming Signature Version '4'. This is only valid for 'PUT' operation.
@@ -83,6 +88,18 @@ func isRequestSignStreamingV4(r *http.Request) bool {
r.Method == http.MethodPut
}
// Verify if the request has AWS Streaming Signature Version '4' with signed trailer. This is only valid for 'PUT' operation.
func isRequestSignStreamingTrailerV4(r *http.Request) bool {
return r.Header.Get(xhttp.AmzContentSha256) == streamingContentSHA256Trailer &&
r.Method == http.MethodPut
}
// Verify if the request has AWS Streaming Signature Version '4', with unsigned content and trailer.
func isRequestUnsignedTrailerV4(r *http.Request) bool {
return r.Header.Get(xhttp.AmzContentSha256) == unsignedPayloadTrailer &&
r.Method == http.MethodPut && strings.Contains(r.Header.Get(xhttp.ContentEncoding), streamingContentEncoding)
}
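For reference, this is the shape of a request the new checks would classify as streaming-unsigned-with-trailer. The header values follow the AWS SigV4 streaming conventions; treat the exact constant strings here as assumptions:
package main

import (
	"net/http"
	"strings"
)

func main() {
	body := strings.NewReader("chunked payload...")
	req, _ := http.NewRequest(http.MethodPut, "http://localhost:9000/bucket/object", body)
	// Unsigned streaming payload with trailing checksum headers.
	req.Header.Set("X-Amz-Content-Sha256", "STREAMING-UNSIGNED-PAYLOAD-TRAILER")
	req.Header.Set("Content-Encoding", "aws-chunked")
	req.Header.Set("X-Amz-Trailer", "x-amz-checksum-crc32c")
	_ = req // handed to the router, getRequestAuthType would return authTypeStreamingUnsignedTrailer
}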
// Authorization type.
//
//go:generate stringer -type=authType -trimprefix=authType $GOFILE
@@ -100,10 +117,12 @@ const (
authTypeSignedV2
authTypeJWT
authTypeSTS
authTypeStreamingSignedTrailer
authTypeStreamingUnsignedTrailer
)
// Get request authentication type.
func getRequestAuthType(r *http.Request) authType {
func getRequestAuthType(r *http.Request) (at authType) {
if r.URL != nil {
var err error
r.Form, err = url.ParseQuery(r.URL.RawQuery)
@@ -118,6 +137,10 @@ func getRequestAuthType(r *http.Request) authType {
return authTypePresignedV2
} else if isRequestSignStreamingV4(r) {
return authTypeStreamingSigned
} else if isRequestSignStreamingTrailerV4(r) {
return authTypeStreamingSignedTrailer
} else if isRequestUnsignedTrailerV4(r) {
return authTypeStreamingUnsignedTrailer
} else if isRequestSignatureV4(r) {
return authTypeSigned
} else if isRequestPresignedSignatureV4(r) {
@@ -134,7 +157,7 @@ func getRequestAuthType(r *http.Request) authType {
return authTypeUnknown
}
func validateAdminSignature(ctx context.Context, r *http.Request, region string) (auth.Credentials, map[string]interface{}, bool, APIErrorCode) {
func validateAdminSignature(ctx context.Context, r *http.Request, region string) (auth.Credentials, bool, APIErrorCode) {
var cred auth.Credentials
var owner bool
s3Err := ErrAccessDenied
@@ -143,27 +166,28 @@ func validateAdminSignature(ctx context.Context, r *http.Request, region string)
// We only support admin credentials to access admin APIs.
cred, owner, s3Err = getReqAccessKeyV4(r, region, serviceS3)
if s3Err != ErrNone {
return cred, nil, owner, s3Err
return cred, owner, s3Err
}
// we only support V4 (no presign) with auth body
s3Err = isReqAuthenticated(ctx, r, region, serviceS3)
}
if s3Err != ErrNone {
reqInfo := (&logger.ReqInfo{}).AppendTags("requestHeaders", dumpRequest(r))
ctx := logger.SetReqInfo(ctx, reqInfo)
logger.LogIf(ctx, errors.New(getAPIError(s3Err).Description), logger.Application)
return cred, nil, owner, s3Err
return cred, owner, s3Err
}
return cred, cred.Claims, owner, ErrNone
logger.GetReqInfo(ctx).Cred = cred
logger.GetReqInfo(ctx).Owner = owner
logger.GetReqInfo(ctx).Region = globalSite.Region
return cred, owner, ErrNone
}
// checkAdminRequestAuth checks for authentication and authorization for the incoming
// request. It only accepts V2 and V4 requests. Presigned, JWT and anonymous requests
// are automatically rejected.
func checkAdminRequestAuth(ctx context.Context, r *http.Request, action iampolicy.AdminAction, region string) (auth.Credentials, APIErrorCode) {
cred, claims, owner, s3Err := validateAdminSignature(ctx, r, region)
cred, owner, s3Err := validateAdminSignature(ctx, r, region)
if s3Err != ErrNone {
return cred, s3Err
}
@@ -171,9 +195,9 @@ func checkAdminRequestAuth(ctx context.Context, r *http.Request, action iampolic
AccountName: cred.AccessKey,
Groups: cred.Groups,
Action: iampolicy.Action(action),
ConditionValues: getConditionValues(r, "", cred.AccessKey, claims),
ConditionValues: getConditionValues(r, "", cred),
IsOwner: owner,
Claims: claims,
Claims: cred.Claims,
}) {
// Request is allowed return the appropriate access key.
return cred, ErrNone
@@ -205,7 +229,7 @@ func getClaimsFromTokenWithSecret(token, secret string) (map[string]interface{},
// that clients cannot decode the token using the temp
// secret keys and generate an entirely new claim by essentially
// hijacking the policies. We need to make sure that this is
// based an admin credential such that token cannot be decoded
// based on admin credential such that token cannot be decoded
// on the client side and is treated like an opaque value.
claims, err := auth.ExtractClaims(token, secret)
if err != nil {
@@ -255,7 +279,7 @@ func checkClaimsFromToken(r *http.Request, cred auth.Credentials) (map[string]in
return nil, ErrNoAccessKey
}
if token == "" && cred.IsTemp() {
if token == "" && cred.IsTemp() && !cred.IsServiceAccount() {
// Temporary credentials should always have x-amz-security-token
return nil, ErrInvalidToken
}
@@ -265,7 +289,7 @@ func checkClaimsFromToken(r *http.Request, cred auth.Credentials) (map[string]in
return nil, ErrInvalidToken
}
if cred.IsTemp() && subtle.ConstantTimeCompare([]byte(token), []byte(cred.SessionToken)) != 1 {
if !cred.IsServiceAccount() && cred.IsTemp() && subtle.ConstantTimeCompare([]byte(token), []byte(cred.SessionToken)) != 1 {
// validate token for temporary credentials only.
return nil, ErrInvalidToken
}
@@ -302,6 +326,17 @@ func checkRequestAuthType(ctx context.Context, r *http.Request, action policy.Ac
return s3Err
}
// checkRequestAuthTypeWithVID is similar to checkRequestAuthType
// passes versionID additionally.
func checkRequestAuthTypeWithVID(ctx context.Context, r *http.Request, action policy.Action, bucketName, objectName, versionID string) (s3Err APIErrorCode) {
logger.GetReqInfo(ctx).BucketName = bucketName
logger.GetReqInfo(ctx).ObjectName = objectName
logger.GetReqInfo(ctx).VersionID = versionID
_, _, s3Err = checkRequestAuthTypeCredential(ctx, r, action)
return s3Err
}
func authenticateRequest(ctx context.Context, r *http.Request, action policy.Action) (s3Err APIErrorCode) {
if logger.GetReqInfo(ctx) == nil {
logger.LogIf(ctx, errors.New("unexpected context.Context does not have a logger.ReqInfo"), logger.Minio)
@@ -335,6 +370,7 @@ func authenticateRequest(ctx context.Context, r *http.Request, action policy.Act
logger.GetReqInfo(ctx).Cred = cred
logger.GetReqInfo(ctx).Owner = owner
logger.GetReqInfo(ctx).Region = globalSite.Region
// region is valid only for CreateBucketAction.
var region string
@@ -373,14 +409,16 @@ func authorizeRequest(ctx context.Context, r *http.Request, action policy.Action
region := reqInfo.Region
bucket := reqInfo.BucketName
object := reqInfo.ObjectName
versionID := reqInfo.VersionID
if action != policy.ListAllMyBucketsAction && cred.AccessKey == "" {
// Anonymous checks are not meant for ListAllBuckets action
if globalPolicySys.IsAllowed(policy.Args{
AccountName: cred.AccessKey,
Groups: cred.Groups,
Action: action,
BucketName: bucket,
ConditionValues: getConditionValues(r, region, "", nil),
ConditionValues: getConditionValues(r, region, auth.AnonymousCredentials),
IsOwner: false,
ObjectName: object,
}) {
@@ -393,9 +431,10 @@ func authorizeRequest(ctx context.Context, r *http.Request, action policy.Action
// verify as a fallback.
if globalPolicySys.IsAllowed(policy.Args{
AccountName: cred.AccessKey,
Groups: cred.Groups,
Action: policy.ListBucketAction,
BucketName: bucket,
ConditionValues: getConditionValues(r, region, "", nil),
ConditionValues: getConditionValues(r, region, auth.AnonymousCredentials),
IsOwner: false,
ObjectName: object,
}) {
@@ -406,13 +445,27 @@ func authorizeRequest(ctx context.Context, r *http.Request, action policy.Action
return ErrAccessDenied
}
if action == policy.DeleteObjectAction && versionID != "" {
if !globalIAMSys.IsAllowed(iampolicy.Args{
AccountName: cred.AccessKey,
Groups: cred.Groups,
Action: iampolicy.Action(policy.DeleteObjectVersionAction),
BucketName: bucket,
ConditionValues: getConditionValues(r, "", cred),
ObjectName: object,
IsOwner: owner,
Claims: cred.Claims,
DenyOnly: true,
}) { // Request is not allowed if Deny action on DeleteObjectVersionAction
return ErrAccessDenied
}
}
if globalIAMSys.IsAllowed(iampolicy.Args{
AccountName: cred.AccessKey,
Groups: cred.Groups,
Action: iampolicy.Action(action),
BucketName: bucket,
ConditionValues: getConditionValues(r, "", cred.AccessKey, cred.Claims),
ConditionValues: getConditionValues(r, "", cred),
ObjectName: object,
IsOwner: owner,
Claims: cred.Claims,
@@ -429,7 +482,7 @@ func authorizeRequest(ctx context.Context, r *http.Request, action policy.Action
Groups: cred.Groups,
Action: iampolicy.ListBucketAction,
BucketName: bucket,
ConditionValues: getConditionValues(r, "", cred.AccessKey, cred.Claims),
ConditionValues: getConditionValues(r, "", cred),
ObjectName: object,
IsOwner: owner,
Claims: cred.Claims,
@@ -525,13 +578,15 @@ func isReqAuthenticated(ctx context.Context, r *http.Request, region string, sty
// List of all support S3 auth types.
var supportedS3AuthTypes = map[authType]struct{}{
authTypeAnonymous: {},
authTypePresigned: {},
authTypePresignedV2: {},
authTypeSigned: {},
authTypeSignedV2: {},
authTypePostPolicy: {},
authTypeStreamingSigned: {},
authTypeAnonymous: {},
authTypePresigned: {},
authTypePresignedV2: {},
authTypeSigned: {},
authTypeSignedV2: {},
authTypePostPolicy: {},
authTypeStreamingSigned: {},
authTypeStreamingSignedTrailer: {},
authTypeStreamingUnsignedTrailer: {},
}
// Validate if the authType is valid and supported.
@@ -540,25 +595,27 @@ func isSupportedS3AuthType(aType authType) bool {
return ok
}
// setAuthHandler to validate authorization header for the incoming request.
func setAuthHandler(h http.Handler) http.Handler {
// setAuthMiddleware to validate authorization header for the incoming request.
func setAuthMiddleware(h http.Handler) http.Handler {
// handler for validating incoming authorization headers.
return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
tc, ok := r.Context().Value(contextTraceReqKey).(*traceCtxt)
tc, ok := r.Context().Value(mcontext.ContextTraceKey).(*mcontext.TraceCtxt)
aType := getRequestAuthType(r)
if aType == authTypeSigned || aType == authTypeSignedV2 || aType == authTypeStreamingSigned {
switch aType {
case authTypeSigned, authTypeSignedV2, authTypeStreamingSigned, authTypeStreamingSignedTrailer:
// Verify if date headers are set, if not reject the request
amzDate, errCode := parseAmzDateHeader(r)
if errCode != ErrNone {
if ok {
tc.funcName = "handler.Auth"
tc.responseRecorder.LogErrBody = true
tc.FuncName = "handler.Auth"
tc.ResponseRecorder.LogErrBody = true
}
// All our internal APIs are sensitive to the Date
// header; any request where the Date header is
// not present will be rejected.
defer logger.AuditLog(r.Context(), w, r, mustGetClaimsFromToken(r))
writeErrorResponse(r.Context(), w, errorCodes.ToAPIErr(errCode), r.URL)
atomic.AddUint64(&globalHTTPStats.rejectedRequestsTime, 1)
return
@@ -568,25 +625,33 @@ func setAuthHandler(h http.Handler) http.Handler {
curTime := UTCNow()
if curTime.Sub(amzDate) > globalMaxSkewTime || amzDate.Sub(curTime) > globalMaxSkewTime {
if ok {
tc.funcName = "handler.Auth"
tc.responseRecorder.LogErrBody = true
tc.FuncName = "handler.Auth"
tc.ResponseRecorder.LogErrBody = true
}
defer logger.AuditLog(r.Context(), w, r, mustGetClaimsFromToken(r))
writeErrorResponse(r.Context(), w, errorCodes.ToAPIErr(ErrRequestTimeTooSkewed), r.URL)
atomic.AddUint64(&globalHTTPStats.rejectedRequestsTime, 1)
return
}
}
if isSupportedS3AuthType(aType) || aType == authTypeJWT || aType == authTypeSTS {
h.ServeHTTP(w, r)
return
case authTypeJWT, authTypeSTS:
h.ServeHTTP(w, r)
return
default:
if isSupportedS3AuthType(aType) {
h.ServeHTTP(w, r)
return
}
}
if ok {
tc.funcName = "handler.Auth"
tc.responseRecorder.LogErrBody = true
tc.FuncName = "handler.Auth"
tc.ResponseRecorder.LogErrBody = true
}
defer logger.AuditLog(r.Context(), w, r, mustGetClaimsFromToken(r))
writeErrorResponse(r.Context(), w, errorCodes.ToAPIErr(ErrSignatureVersionNotSupported), r.URL)
atomic.AddUint64(&globalHTTPStats.rejectedRequestsAuth, 1)
})
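The two time checks above implement a symmetric clock-skew window around server time. A standalone restatement of the rule, treating the commonly used 15-minute value of globalMaxSkewTime as an assumption:
package main

import (
	"fmt"
	"time"
)

// withinSkew mirrors the middleware's rule: reject when the request's
// X-Amz-Date is more than maxSkew away from server time in either direction.
func withinSkew(amzDate, now time.Time, maxSkew time.Duration) bool {
	return now.Sub(amzDate) <= maxSkew && amzDate.Sub(now) <= maxSkew
}

func main() {
	now := time.Now().UTC()
	fmt.Println(withinSkew(now.Add(-10*time.Minute), now, 15*time.Minute)) // true
	fmt.Println(withinSkew(now.Add(20*time.Minute), now, 15*time.Minute))  // false
}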
@@ -624,7 +689,7 @@ func isPutRetentionAllowed(bucketName, objectName string, retDays int, retDate t
return ErrAccessDenied
}
conditions := getConditionValues(r, "", cred.AccessKey, cred.Claims)
conditions := getConditionValues(r, "", cred)
conditions["object-lock-mode"] = []string{string(retMode)}
conditions["object-lock-retain-until-date"] = []string{retDate.UTC().Format(time.RFC3339)}
if retDays > 0 {
@@ -672,7 +737,7 @@ func isPutActionAllowed(ctx context.Context, atype authType, bucketName, objectN
return ErrSignatureVersionNotSupported
case authTypeSignedV2, authTypePresignedV2:
cred, owner, s3Err = getReqAccessKeyV2(r)
case authTypeStreamingSigned, authTypePresigned, authTypeSigned:
case authTypeStreamingSigned, authTypePresigned, authTypeSigned, authTypeStreamingSignedTrailer, authTypeStreamingUnsignedTrailer:
cred, owner, s3Err = getReqAccessKeyV4(r, region, serviceS3)
}
if s3Err != ErrNone {
@@ -698,7 +763,7 @@ func isPutActionAllowed(ctx context.Context, atype authType, bucketName, objectN
Groups: cred.Groups,
Action: policy.Action(action),
BucketName: bucketName,
ConditionValues: getConditionValues(r, "", "", nil),
ConditionValues: getConditionValues(r, "", auth.AnonymousCredentials),
IsOwner: false,
ObjectName: objectName,
}) {
@@ -712,7 +777,7 @@ func isPutActionAllowed(ctx context.Context, atype authType, bucketName, objectN
Groups: cred.Groups,
Action: action,
BucketName: bucketName,
ConditionValues: getConditionValues(r, "", cred.AccessKey, cred.Claims),
ConditionValues: getConditionValues(r, "", cred),
ObjectName: objectName,
IsOwner: owner,
Claims: cred.Claims,

View File

@@ -375,8 +375,6 @@ func TestIsReqAuthenticated(t *testing.T) {
initConfigSubsystem(ctx, objLayer)
globalIAMSys.Init(ctx, objLayer, globalEtcdClient, 2*time.Second)
creds, err := auth.CreateCredentials("myuser", "mypassword")
if err != nil {
t.Fatalf("unable create credential, %s", err)
@@ -384,6 +382,8 @@ func TestIsReqAuthenticated(t *testing.T) {
globalActiveCred = creds
globalIAMSys.Init(ctx, objLayer, globalEtcdClient, 2*time.Second)
// List of test cases for validating http request authentication.
testCases := []struct {
req *http.Request
@@ -464,9 +464,8 @@ func TestValidateAdminSignature(t *testing.T) {
}
initAllSubsystems(ctx)
initConfigSubsystem(ctx, objLayer)
globalIAMSys.Init(ctx, objLayer, globalEtcdClient, 2*time.Second)
initConfigSubsystem(ctx, objLayer)
creds, err := auth.CreateCredentials("admin", "mypassword")
if err != nil {
@@ -474,6 +473,8 @@ func TestValidateAdminSignature(t *testing.T) {
}
globalActiveCred = creds
globalIAMSys.Init(ctx, objLayer, globalEtcdClient, 2*time.Second)
testCases := []struct {
AccessKey string
SecretKey string
@@ -492,7 +493,7 @@ func TestValidateAdminSignature(t *testing.T) {
if err := signRequestV4(req, testCase.AccessKey, testCase.SecretKey); err != nil {
t.Fatalf("Unable to inititalized new signed http request %s", err)
}
_, _, _, s3Error := validateAdminSignature(ctx, req, globalMinioDefaultRegion)
_, _, s3Error := validateAdminSignature(ctx, req, globalMinioDefaultRegion)
if s3Error != testCase.ErrCode {
t.Errorf("Test %d: Unexpected s3error returned wanted %d, got %d", i+1, testCase.ErrCode, s3Error)
}

View File

@@ -18,11 +18,13 @@ func _() {
_ = x[authTypeSignedV2-7]
_ = x[authTypeJWT-8]
_ = x[authTypeSTS-9]
_ = x[authTypeStreamingSignedTrailer-10]
_ = x[authTypeStreamingUnsignedTrailer-11]
}
const _authType_name = "UnknownAnonymousPresignedPresignedV2PostPolicyStreamingSignedSignedSignedV2JWTSTS"
const _authType_name = "UnknownAnonymousPresignedPresignedV2PostPolicyStreamingSignedSignedSignedV2JWTSTSStreamingSignedTrailerStreamingUnsignedTrailer"
var _authType_index = [...]uint8{0, 7, 16, 25, 36, 46, 61, 67, 75, 78, 81}
var _authType_index = [...]uint8{0, 7, 16, 25, 36, 46, 61, 67, 75, 78, 81, 103, 127}
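The generated stringer stores every name in one long string and slices it by cumulative offsets; the two appended index values (103, 127) are simply 81+22 and 103+24, the lengths of "StreamingSignedTrailer" and "StreamingUnsignedTrailer". A minimal illustration of the technique:
package main

import "fmt"

const names = "UnknownAnonymousPresigned"

var index = [...]uint8{0, 7, 16, 25}

func name(i int) string { return names[index[i]:index[i+1]] }

func main() {
	fmt.Println(name(0), name(1), name(2)) // Unknown Anonymous Presigned
}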
func (i authType) String() string {
if i < 0 || i >= authType(len(_authType_index)-1) {

View File

@@ -19,9 +19,14 @@ package cmd
import (
"context"
"fmt"
"runtime"
"strconv"
"time"
"github.com/minio/madmin-go"
"github.com/minio/madmin-go/v3"
"github.com/minio/minio/internal/logger"
"github.com/minio/pkg/env"
)
// healTask represents what to heal along with options
@@ -56,13 +61,43 @@ func activeListeners() int {
return int(globalHTTPListen.Subscribers()) + int(globalTrace.Subscribers())
}
func waitForLowHTTPReq() {
var currentIO func() int
if httpServer := newHTTPServerFn(); httpServer != nil {
currentIO = httpServer.GetRequestCount
func waitForLowIO(maxIO int, maxWait time.Duration, currentIO func() int) {
// No need to wait; run at full speed.
if maxIO <= 0 {
return
}
globalHealConfig.Wait(currentIO, activeListeners)
const waitTick = 100 * time.Millisecond
tmpMaxWait := maxWait
for currentIO() >= maxIO {
if tmpMaxWait > 0 {
if tmpMaxWait < waitTick {
time.Sleep(tmpMaxWait)
return
}
time.Sleep(waitTick)
tmpMaxWait -= waitTick
}
if tmpMaxWait <= 0 {
return
}
}
}
func currentHTTPIO() int {
httpServer := newHTTPServerFn()
if httpServer == nil {
return 0
}
return httpServer.GetRequestCount() - activeListeners()
}
func waitForLowHTTPReq() {
maxIO, maxWait, _ := globalHealConfig.Clone()
waitForLowIO(maxIO, maxWait, currentHTTPIO)
}
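A hypothetical caller illustrating the contract: waitForLowIO blocks while currentIO() stays at or above maxIO, polling every 100ms, and gives up after maxWait. A sketch assuming package cmd and Go 1.19+ for atomic.Int64; the inflight counter is illustrative:
// inflight would be maintained by request handlers elsewhere (hypothetical).
var inflight atomic.Int64

func throttledHealStep() {
	// Proceed once fewer than 10 requests are in flight, or after
	// waiting at most 5 seconds, whichever comes first.
	waitForLowIO(10, 5*time.Second, func() int { return int(inflight.Load()) })
	// ... perform the I/O-heavy step ...
}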
func initBackgroundHealing(ctx context.Context, objAPI ObjectLayer) {
@@ -88,8 +123,7 @@ func (h *healRoutine) AddWorker(ctx context.Context, objAPI ObjectLayer) {
var err error
switch task.bucket {
case nopHeal:
task.respCh <- healResult{err: errSkipFile}
continue
err = errSkipFile
case SlashSeparator:
res, err = healDiskFormat(ctx, objAPI, task.opts)
default:
@@ -100,7 +134,10 @@ func (h *healRoutine) AddWorker(ctx context.Context, objAPI ObjectLayer) {
}
}
task.respCh <- healResult{result: res, err: err}
if task.respCh != nil {
task.respCh <- healResult{result: res, err: err}
}
case <-ctx.Done():
return
}
@@ -109,9 +146,19 @@ func (h *healRoutine) AddWorker(ctx context.Context, objAPI ObjectLayer) {
func newHealRoutine() *healRoutine {
workers := runtime.GOMAXPROCS(0) / 2
if envHealWorkers := env.Get("_MINIO_HEAL_WORKERS", ""); envHealWorkers != "" {
if numHealers, err := strconv.Atoi(envHealWorkers); err != nil {
logger.LogIf(context.Background(), fmt.Errorf("invalid _MINIO_HEAL_WORKERS value: %w", err))
} else {
workers = numHealers
}
}
if workers == 0 {
workers = 4
}
return &healRoutine{
tasks: make(chan healTask),
workers: workers,

View File

@@ -26,12 +26,15 @@ import (
"os"
"sort"
"strings"
"sync"
"time"
"github.com/dustin/go-humanize"
"github.com/minio/madmin-go"
"github.com/minio/madmin-go/v3"
"github.com/minio/minio-go/v7/pkg/set"
"github.com/minio/minio/internal/config"
"github.com/minio/minio/internal/logger"
"github.com/minio/pkg/env"
)
const (
@@ -43,7 +46,8 @@ const (
// healingTracker is used to persist healing information during a heal.
type healingTracker struct {
disk StorageAPI `msg:"-"`
disk StorageAPI `msg:"-"`
mu *sync.RWMutex `msg:"-"`
ID string
PoolIndex int
@@ -79,6 +83,10 @@ type healingTracker struct {
// Filled during heal.
HealedBuckets []string
// ID of the current healing operation
HealID string
// Add future tracking capabilities
// Be sure that they are included in toHealingDisk
}
@@ -108,21 +116,76 @@ func loadHealingTracker(ctx context.Context, disk StorageAPI) (*healingTracker,
}
h.disk = disk
h.ID = diskID
h.mu = &sync.RWMutex{}
return &h, nil
}
// newHealingTracker will create a new healing tracker for the disk.
func newHealingTracker(disk StorageAPI) *healingTracker {
diskID, _ := disk.GetDiskID()
h := healingTracker{
disk: disk,
ID: diskID,
Path: disk.String(),
Endpoint: disk.Endpoint().String(),
Started: time.Now().UTC(),
func newHealingTracker() *healingTracker {
return &healingTracker{
mu: &sync.RWMutex{},
}
}
func initHealingTracker(disk StorageAPI, healID string) *healingTracker {
h := newHealingTracker()
diskID, _ := disk.GetDiskID()
h.disk = disk
h.ID = diskID
h.HealID = healID
h.Path = disk.String()
h.Endpoint = disk.Endpoint().String()
h.Started = time.Now().UTC()
h.PoolIndex, h.SetIndex, h.DiskIndex = disk.GetDiskLoc()
return &h
return h
}
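The split into newHealingTracker (bare, for decoding) and initHealingTracker (fully populated, with a caller-supplied heal-id) matters later in this changeset: drives in the same erasure set share one HealID so a finished heal can remove every peer's .healing.bin. A hedged usage sketch, assuming package cmd helpers such as mustGetUUID and a hypothetical setDisks slice:
// One heal-id for the whole erasure set, reused across its drives.
healID := mustGetUUID()
for _, disk := range setDisks { // setDisks: hypothetical []StorageAPI for one set
	tracker := initHealingTracker(disk, healID)
	if err := tracker.save(ctx); err != nil {
		logger.LogIf(ctx, err)
	}
}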
func (h healingTracker) getLastUpdate() time.Time {
h.mu.RLock()
defer h.mu.RUnlock()
return h.LastUpdate
}
func (h healingTracker) getBucket() string {
h.mu.RLock()
defer h.mu.RUnlock()
return h.Bucket
}
func (h *healingTracker) setBucket(bucket string) {
h.mu.Lock()
defer h.mu.Unlock()
h.Bucket = bucket
}
func (h healingTracker) getObject() string {
h.mu.RLock()
defer h.mu.RUnlock()
return h.Object
}
func (h *healingTracker) setObject(object string) {
h.mu.Lock()
defer h.mu.Unlock()
h.Object = object
}
func (h *healingTracker) updateProgress(success bool, bytes uint64) {
h.mu.Lock()
defer h.mu.Unlock()
if success {
h.ItemsHealed++
h.BytesDone += bytes
} else {
h.ItemsFailed++
h.BytesFailed += bytes
}
}
// update will update the tracker on the disk.
@@ -131,15 +194,18 @@ func (h *healingTracker) update(ctx context.Context) error {
if h.disk.Healing() == nil {
return fmt.Errorf("healingTracker: drive %q is not marked as healing", h.ID)
}
h.mu.Lock()
if h.ID == "" || h.PoolIndex < 0 || h.SetIndex < 0 || h.DiskIndex < 0 {
h.ID, _ = h.disk.GetDiskID()
h.PoolIndex, h.SetIndex, h.DiskIndex = h.disk.GetDiskLoc()
}
h.mu.Unlock()
return h.save(ctx)
}
// save will unconditionally save the tracker and will be created if not existing.
func (h *healingTracker) save(ctx context.Context) error {
h.mu.Lock()
if h.PoolIndex < 0 || h.SetIndex < 0 || h.DiskIndex < 0 {
// Attempt to get location.
if api := newObjectLayerFn(); api != nil {
@@ -150,6 +216,7 @@ func (h *healingTracker) save(ctx context.Context) error {
}
h.LastUpdate = time.Now().UTC()
htrackerBytes, err := h.MarshalMsg(nil)
h.mu.Unlock()
if err != nil {
return err
}
@@ -171,6 +238,8 @@ func (h *healingTracker) delete(ctx context.Context) error {
}
func (h *healingTracker) isHealed(bucket string) bool {
h.mu.RLock()
defer h.mu.RUnlock()
for _, v := range h.HealedBuckets {
if v == bucket {
return true
@@ -181,6 +250,9 @@ func (h *healingTracker) isHealed(bucket string) bool {
// resume will reset progress to the numbers at the start of the bucket.
func (h *healingTracker) resume() {
h.mu.Lock()
defer h.mu.Unlock()
h.ItemsHealed = h.ResumeItemsHealed
h.ItemsFailed = h.ResumeItemsFailed
h.BytesDone = h.ResumeBytesDone
@@ -190,6 +262,9 @@ func (h *healingTracker) resume() {
// bucketDone should be called when a bucket is done healing.
// Adds the bucket to the list of healed buckets and updates resume numbers.
func (h *healingTracker) bucketDone(bucket string) {
h.mu.Lock()
defer h.mu.Unlock()
h.ResumeItemsHealed = h.ItemsHealed
h.ResumeItemsFailed = h.ItemsFailed
h.ResumeBytesDone = h.BytesDone
@@ -206,6 +281,9 @@ func (h *healingTracker) bucketDone(bucket string) {
// setQueuedBuckets will add buckets, but exclude any that is already in h.HealedBuckets.
// Order is preserved.
func (h *healingTracker) setQueuedBuckets(buckets []BucketInfo) {
h.mu.Lock()
defer h.mu.Unlock()
s := set.CreateStringSet(h.HealedBuckets...)
h.QueuedBuckets = make([]string, 0, len(buckets))
for _, b := range buckets {
@@ -216,17 +294,25 @@ func (h *healingTracker) setQueuedBuckets(buckets []BucketInfo) {
}
func (h *healingTracker) printTo(writer io.Writer) {
h.mu.RLock()
defer h.mu.RUnlock()
b, err := json.MarshalIndent(h, "", " ")
if err != nil {
writer.Write([]byte(err.Error()))
return
}
writer.Write(b)
}
// toHealingDisk converts the information to madmin.HealingDisk
func (h *healingTracker) toHealingDisk() madmin.HealingDisk {
h.mu.RLock()
defer h.mu.RUnlock()
return madmin.HealingDisk{
ID: h.ID,
HealID: h.HealID,
Endpoint: h.Endpoint,
PoolIndex: h.PoolIndex,
SetIndex: h.SetIndex,
@@ -261,10 +347,15 @@ func initAutoHeal(ctx context.Context, objAPI ObjectLayer) {
globalBackgroundHealState.pushHealLocalDisks(getLocalDisksToHeal()...)
go monitorLocalDisksAndHeal(ctx, z)
if env.Get("_MINIO_AUTO_DISK_HEALING", config.EnableOn) == config.EnableOn {
go monitorLocalDisksAndHeal(ctx, z)
}
}
func getLocalDisksToHeal() (disksToHeal Endpoints) {
globalLocalDrivesMu.RLock()
globalLocalDrives := globalLocalDrives
globalLocalDrivesMu.RUnlock()
for _, disk := range globalLocalDrives {
_, err := disk.GetDiskID()
if errors.Is(err, errUnformattedDisk) {
@@ -286,16 +377,14 @@ func getLocalDisksToHeal() (disksToHeal Endpoints) {
var newDiskHealingTimeout = newDynamicTimeout(30*time.Second, 10*time.Second)
func healFreshDisk(ctx context.Context, z *erasureServerPools, endpoint Endpoint) error {
logger.Info(fmt.Sprintf("Proceeding to heal '%s' - 'mc admin heal alias/ --verbose' to check the status.", endpoint))
disk, format, err := connectEndpoint(endpoint)
if err != nil {
return fmt.Errorf("Error: %w, %s", err, endpoint)
}
defer disk.Close()
poolIdx := globalEndpoints.GetLocalPoolIdx(disk.Endpoint())
if poolIdx < 0 {
return fmt.Errorf("unexpected pool index (%d) found in %s", poolIdx, disk.Endpoint())
return fmt.Errorf("unexpected pool index (%d) found for %s", poolIdx, disk.Endpoint())
}
// Calculate the set index where the current endpoint belongs
@@ -306,20 +395,36 @@ func healFreshDisk(ctx context.Context, z *erasureServerPools, endpoint Endpoint
return err
}
if setIdx < 0 {
return fmt.Errorf("unexpected set index (%d) found in %s", setIdx, disk.Endpoint())
return fmt.Errorf("unexpected set index (%d) found for %s", setIdx, disk.Endpoint())
}
// Prevent parallel erasure set healing
locker := z.NewNSLock(minioMetaBucket, fmt.Sprintf("new-drive-healing/%d/%d", poolIdx, setIdx))
lkctx, err := locker.GetLock(ctx, newDiskHealingTimeout)
if err != nil {
return err
return fmt.Errorf("Healing of drive '%v' on %s pool, belonging to %s erasure set already in progress: %w",
disk, humanize.Ordinal(poolIdx+1), humanize.Ordinal(setIdx+1), err)
}
ctx = lkctx.Context()
defer locker.Unlock(lkctx.Cancel)
defer locker.Unlock(lkctx)
// Load healing tracker in this disk
tracker, err := loadHealingTracker(ctx, disk)
if err != nil {
// A healing tracker may be deleted if another disk in the
// same erasure set with same healing-id successfully finished
// healing.
if errors.Is(err, errFileNotFound) {
return nil
}
logger.LogIf(ctx, fmt.Errorf("Unable to load healing tracker on '%s': %w, re-initializing..", disk, err))
tracker = initHealingTracker(disk, mustGetUUID())
}
logger.Info(fmt.Sprintf("Healing drive '%s' - 'mc admin heal alias/ --verbose' to check the current status.", endpoint))
buckets, _ := z.ListBuckets(ctx, BucketOptions{})
// Buckets data are dispersed in multiple zones/sets, make
// Buckets data are dispersed in multiple pools/sets, make
// sure to heal all bucket metadata configuration.
buckets = append(buckets, BucketInfo{
Name: pathJoin(minioMetaBucket, minioConfigPrefix),
@@ -337,16 +442,7 @@ func healFreshDisk(ctx context.Context, z *erasureServerPools, endpoint Endpoint
})
if serverDebugLog {
logger.Info("Healing drive '%v' on %s pool", disk, humanize.Ordinal(poolIdx+1))
}
// Load healing tracker in this disk
tracker, err := loadHealingTracker(ctx, disk)
if err != nil {
// So someone changed the drives underneath, healing tracker missing.
logger.LogIf(ctx, fmt.Errorf("Healing tracker missing on '%s', drive was swapped again on %s pool: %w",
disk, humanize.Ordinal(poolIdx+1), err))
tracker = newHealingTracker(disk)
logger.Info("Healing drive '%v' on %s pool, belonging to %s erasure set", disk, humanize.Ordinal(poolIdx+1), humanize.Ordinal(setIdx+1))
}
// Load bucket totals
@@ -369,9 +465,13 @@ func healFreshDisk(ctx context.Context, z *erasureServerPools, endpoint Endpoint
}
if tracker.ItemsFailed > 0 {
logger.Info("Healing drive '%s' failed (healed: %d, failed: %d).", disk, tracker.ItemsHealed, tracker.ItemsFailed)
logger.Info("Healing of drive '%s' failed (healed: %d, failed: %d).", disk, tracker.ItemsHealed, tracker.ItemsFailed)
} else {
logger.Info("Healing drive '%s' complete (healed: %d, failed: %d).", disk, tracker.ItemsHealed, tracker.ItemsFailed)
logger.Info("Healing of drive '%s' complete (healed: %d, failed: %d).", disk, tracker.ItemsHealed, tracker.ItemsFailed)
}
if len(tracker.QueuedBuckets) > 0 {
return fmt.Errorf("not all buckets were healed: %v", tracker.QueuedBuckets)
}
if serverDebugLog {
@@ -379,7 +479,29 @@ func healFreshDisk(ctx context.Context, z *erasureServerPools, endpoint Endpoint
logger.Info("\n")
}
logger.LogIf(ctx, tracker.delete(ctx))
if tracker.HealID == "" { // HealID was empty only before Feb 2023
logger.LogIf(ctx, tracker.delete(ctx))
return nil
}
// Remove .healing.bin from all disks with similar heal-id
disks, err := z.GetDisks(poolIdx, setIdx)
if err != nil {
return err
}
for _, disk := range disks {
t, err := loadHealingTracker(ctx, disk)
if err != nil {
if !errors.Is(err, errFileNotFound) {
logger.LogIf(ctx, err)
}
continue
}
if t.HealID == tracker.HealID {
t.delete(ctx)
}
}
return nil
}
@@ -415,10 +537,13 @@ func monitorLocalDisksAndHeal(ctx context.Context, z *erasureServerPools) {
for _, disk := range healDisks {
go func(disk Endpoint) {
globalBackgroundHealState.markDiskForHealing(disk)
err := healFreshDisk(ctx, z, disk)
if err != nil {
printEndpointError(disk, err, false)
globalBackgroundHealState.setDiskHealingStatus(disk, true)
if err := healFreshDisk(ctx, z, disk); err != nil {
globalBackgroundHealState.setDiskHealingStatus(disk, false)
timedout := OperationTimedOut{}
if !errors.Is(err, context.Canceled) && !errors.As(err, &timedout) {
printEndpointError(disk, err, false)
}
return
}
// Only upon success pop the healed disk.

View File

@@ -182,6 +182,12 @@ func (z *healingTracker) DecodeMsg(dc *msgp.Reader) (err error) {
return
}
}
case "HealID":
z.HealID, err = dc.ReadString()
if err != nil {
err = msgp.WrapError(err, "HealID")
return
}
default:
err = dc.Skip()
if err != nil {
@@ -195,9 +201,9 @@ func (z *healingTracker) DecodeMsg(dc *msgp.Reader) (err error) {
// EncodeMsg implements msgp.Encodable
func (z *healingTracker) EncodeMsg(en *msgp.Writer) (err error) {
// map header, size 22
// map header, size 23
// write "ID"
err = en.Append(0xde, 0x0, 0x16, 0xa2, 0x49, 0x44)
err = en.Append(0xde, 0x0, 0x17, 0xa2, 0x49, 0x44)
if err != nil {
return
}
@@ -430,15 +436,25 @@ func (z *healingTracker) EncodeMsg(en *msgp.Writer) (err error) {
return
}
}
// write "HealID"
err = en.Append(0xa6, 0x48, 0x65, 0x61, 0x6c, 0x49, 0x44)
if err != nil {
return
}
err = en.WriteString(z.HealID)
if err != nil {
err = msgp.WrapError(err, "HealID")
return
}
return
}
// MarshalMsg implements msgp.Marshaler
func (z *healingTracker) MarshalMsg(b []byte) (o []byte, err error) {
o = msgp.Require(b, z.Msgsize())
// map header, size 22
// map header, size 23
// string "ID"
o = append(o, 0xde, 0x0, 0x16, 0xa2, 0x49, 0x44)
o = append(o, 0xde, 0x0, 0x17, 0xa2, 0x49, 0x44)
o = msgp.AppendString(o, z.ID)
// string "PoolIndex"
o = append(o, 0xa9, 0x50, 0x6f, 0x6f, 0x6c, 0x49, 0x6e, 0x64, 0x65, 0x78)
@@ -509,6 +525,9 @@ func (z *healingTracker) MarshalMsg(b []byte) (o []byte, err error) {
for za0002 := range z.HealedBuckets {
o = msgp.AppendString(o, z.HealedBuckets[za0002])
}
// string "HealID"
o = append(o, 0xa6, 0x48, 0x65, 0x61, 0x6c, 0x49, 0x44)
o = msgp.AppendString(o, z.HealID)
return
}
@@ -688,6 +707,12 @@ func (z *healingTracker) UnmarshalMsg(bts []byte) (o []byte, err error) {
return
}
}
case "HealID":
z.HealID, bts, err = msgp.ReadStringBytes(bts)
if err != nil {
err = msgp.WrapError(err, "HealID")
return
}
default:
bts, err = msgp.Skip(bts)
if err != nil {
@@ -710,5 +735,6 @@ func (z *healingTracker) Msgsize() (s int) {
for za0002 := range z.HealedBuckets {
s += msgp.StringPrefixSize + len(z.HealedBuckets[za0002])
}
s += 7 + msgp.StringPrefixSize + len(z.HealID)
return
}

File diff suppressed because it is too large

File diff suppressed because it is too large

File diff suppressed because it is too large

View File

@@ -0,0 +1,83 @@
// Copyright (c) 2015-2023 MinIO, Inc.
//
// This file is part of MinIO Object Storage stack
//
// This program is free software: you can redistribute it and/or modify
// it under the terms of the GNU Affero General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
//
// This program is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU Affero General Public License for more details.
//
// You should have received a copy of the GNU Affero General Public License
// along with this program. If not, see <http://www.gnu.org/licenses/>.
package cmd
import (
"strings"
"time"
"github.com/minio/pkg/wildcard"
)
//go:generate msgp -file $GOFILE
// BatchJobKV is a key-value data type which supports wildcard matching
type BatchJobKV struct {
Key string `yaml:"key" json:"key"`
Value string `yaml:"value" json:"value"`
}
// Validate returns an error if key is empty
func (kv BatchJobKV) Validate() error {
if kv.Key == "" {
return errInvalidArgument
}
return nil
}
// Empty indicates if kv is not set
func (kv BatchJobKV) Empty() bool {
return kv.Key == "" && kv.Value == ""
}
// Match matches the input ikv against kv; the value is wildcard-matched depending on the user input
func (kv BatchJobKV) Match(ikv BatchJobKV) bool {
if kv.Empty() {
return true
}
if strings.EqualFold(kv.Key, ikv.Key) {
return wildcard.Match(kv.Value, ikv.Value)
}
return false
}
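For example (a snippet sketch, fmt import assumed): keys compare case-insensitively, and the configured value acts as the wildcard pattern applied to the input value.
filter := BatchJobKV{Key: "x-amz-meta-app", Value: "photos-*"}

fmt.Println(filter.Match(BatchJobKV{Key: "X-Amz-Meta-App", Value: "photos-2023"})) // true
fmt.Println(filter.Match(BatchJobKV{Key: "x-amz-meta-app", Value: "videos-2023"})) // false
fmt.Println(BatchJobKV{}.Match(BatchJobKV{Key: "anything", Value: "matches"}))     // true: an empty filter matches all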
// BatchJobNotification stores notification endpoint and token information.
// Used by batch jobs to notify of their status.
type BatchJobNotification struct {
Endpoint string `yaml:"endpoint" json:"endpoint"`
Token string `yaml:"token" json:"token"`
}
// BatchJobRetry stores retry configuration used in the event of failures.
type BatchJobRetry struct {
Attempts int `yaml:"attempts" json:"attempts"` // number of retry attempts
Delay time.Duration `yaml:"delay" json:"delay"` // delay between each retry
}
// Validate validates the input retry configuration.
func (r BatchJobRetry) Validate() error {
if r.Attempts < 0 {
return errInvalidArgument
}
if r.Delay < 0 {
return errInvalidArgument
}
return nil
}
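One plausible consumer of these two fields (the actual batch-job retry loop lives elsewhere and may differ):
// withRetries runs op up to r.Attempts+1 times, sleeping r.Delay between tries.
func withRetries(r BatchJobRetry, op func() error) error {
	var err error
	for i := 0; i <= r.Attempts; i++ {
		if err = op(); err == nil {
			return nil
		}
		time.Sleep(r.Delay)
	}
	return err
}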

View File

@@ -0,0 +1,391 @@
package cmd
// Code generated by github.com/tinylib/msgp DO NOT EDIT.
import (
"github.com/tinylib/msgp/msgp"
)
// DecodeMsg implements msgp.Decodable
func (z *BatchJobKV) DecodeMsg(dc *msgp.Reader) (err error) {
var field []byte
_ = field
var zb0001 uint32
zb0001, err = dc.ReadMapHeader()
if err != nil {
err = msgp.WrapError(err)
return
}
for zb0001 > 0 {
zb0001--
field, err = dc.ReadMapKeyPtr()
if err != nil {
err = msgp.WrapError(err)
return
}
switch msgp.UnsafeString(field) {
case "Key":
z.Key, err = dc.ReadString()
if err != nil {
err = msgp.WrapError(err, "Key")
return
}
case "Value":
z.Value, err = dc.ReadString()
if err != nil {
err = msgp.WrapError(err, "Value")
return
}
default:
err = dc.Skip()
if err != nil {
err = msgp.WrapError(err)
return
}
}
}
return
}
// EncodeMsg implements msgp.Encodable
func (z BatchJobKV) EncodeMsg(en *msgp.Writer) (err error) {
// map header, size 2
// write "Key"
err = en.Append(0x82, 0xa3, 0x4b, 0x65, 0x79)
if err != nil {
return
}
err = en.WriteString(z.Key)
if err != nil {
err = msgp.WrapError(err, "Key")
return
}
// write "Value"
err = en.Append(0xa5, 0x56, 0x61, 0x6c, 0x75, 0x65)
if err != nil {
return
}
err = en.WriteString(z.Value)
if err != nil {
err = msgp.WrapError(err, "Value")
return
}
return
}
// MarshalMsg implements msgp.Marshaler
func (z BatchJobKV) MarshalMsg(b []byte) (o []byte, err error) {
o = msgp.Require(b, z.Msgsize())
// map header, size 2
// string "Key"
o = append(o, 0x82, 0xa3, 0x4b, 0x65, 0x79)
o = msgp.AppendString(o, z.Key)
// string "Value"
o = append(o, 0xa5, 0x56, 0x61, 0x6c, 0x75, 0x65)
o = msgp.AppendString(o, z.Value)
return
}
// UnmarshalMsg implements msgp.Unmarshaler
func (z *BatchJobKV) UnmarshalMsg(bts []byte) (o []byte, err error) {
var field []byte
_ = field
var zb0001 uint32
zb0001, bts, err = msgp.ReadMapHeaderBytes(bts)
if err != nil {
err = msgp.WrapError(err)
return
}
for zb0001 > 0 {
zb0001--
field, bts, err = msgp.ReadMapKeyZC(bts)
if err != nil {
err = msgp.WrapError(err)
return
}
switch msgp.UnsafeString(field) {
case "Key":
z.Key, bts, err = msgp.ReadStringBytes(bts)
if err != nil {
err = msgp.WrapError(err, "Key")
return
}
case "Value":
z.Value, bts, err = msgp.ReadStringBytes(bts)
if err != nil {
err = msgp.WrapError(err, "Value")
return
}
default:
bts, err = msgp.Skip(bts)
if err != nil {
err = msgp.WrapError(err)
return
}
}
}
o = bts
return
}
// Msgsize returns an upper bound estimate of the number of bytes occupied by the serialized message
func (z BatchJobKV) Msgsize() (s int) {
s = 1 + 4 + msgp.StringPrefixSize + len(z.Key) + 6 + msgp.StringPrefixSize + len(z.Value)
return
}
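Reading this bound: the leading 1 is the two-entry fixmap header; 4 covers the field name "Key" (a 1-byte fixstr header plus 3 characters) and 6 covers "Value" (1 plus 5); each string value then costs msgp.StringPrefixSize (5 bytes, the worst-case str32 header) plus its payload length.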
// DecodeMsg implements msgp.Decodable
func (z *BatchJobNotification) DecodeMsg(dc *msgp.Reader) (err error) {
var field []byte
_ = field
var zb0001 uint32
zb0001, err = dc.ReadMapHeader()
if err != nil {
err = msgp.WrapError(err)
return
}
for zb0001 > 0 {
zb0001--
field, err = dc.ReadMapKeyPtr()
if err != nil {
err = msgp.WrapError(err)
return
}
switch msgp.UnsafeString(field) {
case "Endpoint":
z.Endpoint, err = dc.ReadString()
if err != nil {
err = msgp.WrapError(err, "Endpoint")
return
}
case "Token":
z.Token, err = dc.ReadString()
if err != nil {
err = msgp.WrapError(err, "Token")
return
}
default:
err = dc.Skip()
if err != nil {
err = msgp.WrapError(err)
return
}
}
}
return
}
// EncodeMsg implements msgp.Encodable
func (z BatchJobNotification) EncodeMsg(en *msgp.Writer) (err error) {
// map header, size 2
// write "Endpoint"
err = en.Append(0x82, 0xa8, 0x45, 0x6e, 0x64, 0x70, 0x6f, 0x69, 0x6e, 0x74)
if err != nil {
return
}
err = en.WriteString(z.Endpoint)
if err != nil {
err = msgp.WrapError(err, "Endpoint")
return
}
// write "Token"
err = en.Append(0xa5, 0x54, 0x6f, 0x6b, 0x65, 0x6e)
if err != nil {
return
}
err = en.WriteString(z.Token)
if err != nil {
err = msgp.WrapError(err, "Token")
return
}
return
}
// MarshalMsg implements msgp.Marshaler
func (z BatchJobNotification) MarshalMsg(b []byte) (o []byte, err error) {
o = msgp.Require(b, z.Msgsize())
// map header, size 2
// string "Endpoint"
o = append(o, 0x82, 0xa8, 0x45, 0x6e, 0x64, 0x70, 0x6f, 0x69, 0x6e, 0x74)
o = msgp.AppendString(o, z.Endpoint)
// string "Token"
o = append(o, 0xa5, 0x54, 0x6f, 0x6b, 0x65, 0x6e)
o = msgp.AppendString(o, z.Token)
return
}
// UnmarshalMsg implements msgp.Unmarshaler
func (z *BatchJobNotification) UnmarshalMsg(bts []byte) (o []byte, err error) {
var field []byte
_ = field
var zb0001 uint32
zb0001, bts, err = msgp.ReadMapHeaderBytes(bts)
if err != nil {
err = msgp.WrapError(err)
return
}
for zb0001 > 0 {
zb0001--
field, bts, err = msgp.ReadMapKeyZC(bts)
if err != nil {
err = msgp.WrapError(err)
return
}
switch msgp.UnsafeString(field) {
case "Endpoint":
z.Endpoint, bts, err = msgp.ReadStringBytes(bts)
if err != nil {
err = msgp.WrapError(err, "Endpoint")
return
}
case "Token":
z.Token, bts, err = msgp.ReadStringBytes(bts)
if err != nil {
err = msgp.WrapError(err, "Token")
return
}
default:
bts, err = msgp.Skip(bts)
if err != nil {
err = msgp.WrapError(err)
return
}
}
}
o = bts
return
}
// Msgsize returns an upper bound estimate of the number of bytes occupied by the serialized message
func (z BatchJobNotification) Msgsize() (s int) {
s = 1 + 9 + msgp.StringPrefixSize + len(z.Endpoint) + 6 + msgp.StringPrefixSize + len(z.Token)
return
}
// DecodeMsg implements msgp.Decodable
func (z *BatchJobRetry) DecodeMsg(dc *msgp.Reader) (err error) {
var field []byte
_ = field
var zb0001 uint32
zb0001, err = dc.ReadMapHeader()
if err != nil {
err = msgp.WrapError(err)
return
}
for zb0001 > 0 {
zb0001--
field, err = dc.ReadMapKeyPtr()
if err != nil {
err = msgp.WrapError(err)
return
}
switch msgp.UnsafeString(field) {
case "Attempts":
z.Attempts, err = dc.ReadInt()
if err != nil {
err = msgp.WrapError(err, "Attempts")
return
}
case "Delay":
z.Delay, err = dc.ReadDuration()
if err != nil {
err = msgp.WrapError(err, "Delay")
return
}
default:
err = dc.Skip()
if err != nil {
err = msgp.WrapError(err)
return
}
}
}
return
}
// EncodeMsg implements msgp.Encodable
func (z BatchJobRetry) EncodeMsg(en *msgp.Writer) (err error) {
// map header, size 2
// write "Attempts"
err = en.Append(0x82, 0xa8, 0x41, 0x74, 0x74, 0x65, 0x6d, 0x70, 0x74, 0x73)
if err != nil {
return
}
err = en.WriteInt(z.Attempts)
if err != nil {
err = msgp.WrapError(err, "Attempts")
return
}
// write "Delay"
err = en.Append(0xa5, 0x44, 0x65, 0x6c, 0x61, 0x79)
if err != nil {
return
}
err = en.WriteDuration(z.Delay)
if err != nil {
err = msgp.WrapError(err, "Delay")
return
}
return
}
// MarshalMsg implements msgp.Marshaler
func (z BatchJobRetry) MarshalMsg(b []byte) (o []byte, err error) {
o = msgp.Require(b, z.Msgsize())
// map header, size 2
// string "Attempts"
o = append(o, 0x82, 0xa8, 0x41, 0x74, 0x74, 0x65, 0x6d, 0x70, 0x74, 0x73)
o = msgp.AppendInt(o, z.Attempts)
// string "Delay"
o = append(o, 0xa5, 0x44, 0x65, 0x6c, 0x61, 0x79)
o = msgp.AppendDuration(o, z.Delay)
return
}
// UnmarshalMsg implements msgp.Unmarshaler
func (z *BatchJobRetry) UnmarshalMsg(bts []byte) (o []byte, err error) {
var field []byte
_ = field
var zb0001 uint32
zb0001, bts, err = msgp.ReadMapHeaderBytes(bts)
if err != nil {
err = msgp.WrapError(err)
return
}
for zb0001 > 0 {
zb0001--
field, bts, err = msgp.ReadMapKeyZC(bts)
if err != nil {
err = msgp.WrapError(err)
return
}
switch msgp.UnsafeString(field) {
case "Attempts":
z.Attempts, bts, err = msgp.ReadIntBytes(bts)
if err != nil {
err = msgp.WrapError(err, "Attempts")
return
}
case "Delay":
z.Delay, bts, err = msgp.ReadDurationBytes(bts)
if err != nil {
err = msgp.WrapError(err, "Delay")
return
}
default:
bts, err = msgp.Skip(bts)
if err != nil {
err = msgp.WrapError(err)
return
}
}
}
o = bts
return
}
// Msgsize returns an upper bound estimate of the number of bytes occupied by the serialized message
func (z BatchJobRetry) Msgsize() (s int) {
s = 1 + 9 + msgp.IntSize + 6 + msgp.DurationSize
return
}


@@ -0,0 +1,349 @@
package cmd
// Code generated by github.com/tinylib/msgp DO NOT EDIT.
import (
"bytes"
"testing"
"github.com/tinylib/msgp/msgp"
)
func TestMarshalUnmarshalBatchJobKV(t *testing.T) {
v := BatchJobKV{}
bts, err := v.MarshalMsg(nil)
if err != nil {
t.Fatal(err)
}
left, err := v.UnmarshalMsg(bts)
if err != nil {
t.Fatal(err)
}
if len(left) > 0 {
t.Errorf("%d bytes left over after UnmarshalMsg(): %q", len(left), left)
}
left, err = msgp.Skip(bts)
if err != nil {
t.Fatal(err)
}
if len(left) > 0 {
t.Errorf("%d bytes left over after Skip(): %q", len(left), left)
}
}
func BenchmarkMarshalMsgBatchJobKV(b *testing.B) {
v := BatchJobKV{}
b.ReportAllocs()
b.ResetTimer()
for i := 0; i < b.N; i++ {
v.MarshalMsg(nil)
}
}
func BenchmarkAppendMsgBatchJobKV(b *testing.B) {
v := BatchJobKV{}
bts := make([]byte, 0, v.Msgsize())
bts, _ = v.MarshalMsg(bts[0:0])
b.SetBytes(int64(len(bts)))
b.ReportAllocs()
b.ResetTimer()
for i := 0; i < b.N; i++ {
bts, _ = v.MarshalMsg(bts[0:0])
}
}
func BenchmarkUnmarshalBatchJobKV(b *testing.B) {
v := BatchJobKV{}
bts, _ := v.MarshalMsg(nil)
b.ReportAllocs()
b.SetBytes(int64(len(bts)))
b.ResetTimer()
for i := 0; i < b.N; i++ {
_, err := v.UnmarshalMsg(bts)
if err != nil {
b.Fatal(err)
}
}
}
func TestEncodeDecodeBatchJobKV(t *testing.T) {
v := BatchJobKV{}
var buf bytes.Buffer
msgp.Encode(&buf, &v)
m := v.Msgsize()
if buf.Len() > m {
t.Log("WARNING: TestEncodeDecodeBatchJobKV Msgsize() is inaccurate")
}
vn := BatchJobKV{}
err := msgp.Decode(&buf, &vn)
if err != nil {
t.Error(err)
}
buf.Reset()
msgp.Encode(&buf, &v)
err = msgp.NewReader(&buf).Skip()
if err != nil {
t.Error(err)
}
}
func BenchmarkEncodeBatchJobKV(b *testing.B) {
v := BatchJobKV{}
var buf bytes.Buffer
msgp.Encode(&buf, &v)
b.SetBytes(int64(buf.Len()))
en := msgp.NewWriter(msgp.Nowhere)
b.ReportAllocs()
b.ResetTimer()
for i := 0; i < b.N; i++ {
v.EncodeMsg(en)
}
en.Flush()
}
func BenchmarkDecodeBatchJobKV(b *testing.B) {
v := BatchJobKV{}
var buf bytes.Buffer
msgp.Encode(&buf, &v)
b.SetBytes(int64(buf.Len()))
rd := msgp.NewEndlessReader(buf.Bytes(), b)
dc := msgp.NewReader(rd)
b.ReportAllocs()
b.ResetTimer()
for i := 0; i < b.N; i++ {
err := v.DecodeMsg(dc)
if err != nil {
b.Fatal(err)
}
}
}
func TestMarshalUnmarshalBatchJobNotification(t *testing.T) {
v := BatchJobNotification{}
bts, err := v.MarshalMsg(nil)
if err != nil {
t.Fatal(err)
}
left, err := v.UnmarshalMsg(bts)
if err != nil {
t.Fatal(err)
}
if len(left) > 0 {
t.Errorf("%d bytes left over after UnmarshalMsg(): %q", len(left), left)
}
left, err = msgp.Skip(bts)
if err != nil {
t.Fatal(err)
}
if len(left) > 0 {
t.Errorf("%d bytes left over after Skip(): %q", len(left), left)
}
}
func BenchmarkMarshalMsgBatchJobNotification(b *testing.B) {
v := BatchJobNotification{}
b.ReportAllocs()
b.ResetTimer()
for i := 0; i < b.N; i++ {
v.MarshalMsg(nil)
}
}
func BenchmarkAppendMsgBatchJobNotification(b *testing.B) {
v := BatchJobNotification{}
bts := make([]byte, 0, v.Msgsize())
bts, _ = v.MarshalMsg(bts[0:0])
b.SetBytes(int64(len(bts)))
b.ReportAllocs()
b.ResetTimer()
for i := 0; i < b.N; i++ {
bts, _ = v.MarshalMsg(bts[0:0])
}
}
func BenchmarkUnmarshalBatchJobNotification(b *testing.B) {
v := BatchJobNotification{}
bts, _ := v.MarshalMsg(nil)
b.ReportAllocs()
b.SetBytes(int64(len(bts)))
b.ResetTimer()
for i := 0; i < b.N; i++ {
_, err := v.UnmarshalMsg(bts)
if err != nil {
b.Fatal(err)
}
}
}
func TestEncodeDecodeBatchJobNotification(t *testing.T) {
v := BatchJobNotification{}
var buf bytes.Buffer
msgp.Encode(&buf, &v)
m := v.Msgsize()
if buf.Len() > m {
t.Log("WARNING: TestEncodeDecodeBatchJobNotification Msgsize() is inaccurate")
}
vn := BatchJobNotification{}
err := msgp.Decode(&buf, &vn)
if err != nil {
t.Error(err)
}
buf.Reset()
msgp.Encode(&buf, &v)
err = msgp.NewReader(&buf).Skip()
if err != nil {
t.Error(err)
}
}
func BenchmarkEncodeBatchJobNotification(b *testing.B) {
v := BatchJobNotification{}
var buf bytes.Buffer
msgp.Encode(&buf, &v)
b.SetBytes(int64(buf.Len()))
en := msgp.NewWriter(msgp.Nowhere)
b.ReportAllocs()
b.ResetTimer()
for i := 0; i < b.N; i++ {
v.EncodeMsg(en)
}
en.Flush()
}
func BenchmarkDecodeBatchJobNotification(b *testing.B) {
v := BatchJobNotification{}
var buf bytes.Buffer
msgp.Encode(&buf, &v)
b.SetBytes(int64(buf.Len()))
rd := msgp.NewEndlessReader(buf.Bytes(), b)
dc := msgp.NewReader(rd)
b.ReportAllocs()
b.ResetTimer()
for i := 0; i < b.N; i++ {
err := v.DecodeMsg(dc)
if err != nil {
b.Fatal(err)
}
}
}
func TestMarshalUnmarshalBatchJobRetry(t *testing.T) {
v := BatchJobRetry{}
bts, err := v.MarshalMsg(nil)
if err != nil {
t.Fatal(err)
}
left, err := v.UnmarshalMsg(bts)
if err != nil {
t.Fatal(err)
}
if len(left) > 0 {
t.Errorf("%d bytes left over after UnmarshalMsg(): %q", len(left), left)
}
left, err = msgp.Skip(bts)
if err != nil {
t.Fatal(err)
}
if len(left) > 0 {
t.Errorf("%d bytes left over after Skip(): %q", len(left), left)
}
}
func BenchmarkMarshalMsgBatchJobRetry(b *testing.B) {
v := BatchJobRetry{}
b.ReportAllocs()
b.ResetTimer()
for i := 0; i < b.N; i++ {
v.MarshalMsg(nil)
}
}
func BenchmarkAppendMsgBatchJobRetry(b *testing.B) {
v := BatchJobRetry{}
bts := make([]byte, 0, v.Msgsize())
bts, _ = v.MarshalMsg(bts[0:0])
b.SetBytes(int64(len(bts)))
b.ReportAllocs()
b.ResetTimer()
for i := 0; i < b.N; i++ {
bts, _ = v.MarshalMsg(bts[0:0])
}
}
func BenchmarkUnmarshalBatchJobRetry(b *testing.B) {
v := BatchJobRetry{}
bts, _ := v.MarshalMsg(nil)
b.ReportAllocs()
b.SetBytes(int64(len(bts)))
b.ResetTimer()
for i := 0; i < b.N; i++ {
_, err := v.UnmarshalMsg(bts)
if err != nil {
b.Fatal(err)
}
}
}
func TestEncodeDecodeBatchJobRetry(t *testing.T) {
v := BatchJobRetry{}
var buf bytes.Buffer
msgp.Encode(&buf, &v)
m := v.Msgsize()
if buf.Len() > m {
t.Log("WARNING: TestEncodeDecodeBatchJobRetry Msgsize() is inaccurate")
}
vn := BatchJobRetry{}
err := msgp.Decode(&buf, &vn)
if err != nil {
t.Error(err)
}
buf.Reset()
msgp.Encode(&buf, &v)
err = msgp.NewReader(&buf).Skip()
if err != nil {
t.Error(err)
}
}
func BenchmarkEncodeBatchJobRetry(b *testing.B) {
v := BatchJobRetry{}
var buf bytes.Buffer
msgp.Encode(&buf, &v)
b.SetBytes(int64(buf.Len()))
en := msgp.NewWriter(msgp.Nowhere)
b.ReportAllocs()
b.ResetTimer()
for i := 0; i < b.N; i++ {
v.EncodeMsg(en)
}
en.Flush()
}
func BenchmarkDecodeBatchJobRetry(b *testing.B) {
v := BatchJobRetry{}
var buf bytes.Buffer
msgp.Encode(&buf, &v)
b.SetBytes(int64(buf.Len()))
rd := msgp.NewEndlessReader(buf.Bytes(), b)
dc := msgp.NewReader(rd)
b.ReportAllocs()
b.ResetTimer()
for i := 0; i < b.N; i++ {
err := v.DecodeMsg(dc)
if err != nil {
b.Fatal(err)
}
}
}

cmd/batch-replicate.go (new file, 183 lines)

@@ -0,0 +1,183 @@
// Copyright (c) 2015-2023 MinIO, Inc.
//
// This file is part of MinIO Object Storage stack
//
// This program is free software: you can redistribute it and/or modify
// it under the terms of the GNU Affero General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
//
// This program is distributed in the hope that it will be useful
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU Affero General Public License for more details.
//
// You should have received a copy of the GNU Affero General Public License
// along with this program. If not, see <http://www.gnu.org/licenses/>.
package cmd
import (
"time"
miniogo "github.com/minio/minio-go/v7"
"github.com/minio/minio/internal/auth"
)
//go:generate msgp -file $GOFILE
// replicate:
// # source of the objects to be replicated
// source:
// type: "minio"
// bucket: "testbucket"
// prefix: "spark/"
//
// # optional flags based filtering criteria
// # for source objects
// flags:
// filter:
// newerThan: "7d"
// olderThan: "7d"
// createdAfter: "date"
// createdBefore: "date"
// tags:
// - key: "name"
// value: "value*"
// metadata:
// - key: "content-type"
// value: "image/*"
// notify:
// endpoint: "https://splunk-hec.dev.com"
// token: "Splunk ..." # e.g. "Bearer token"
//
// # target where the objects must be replicated
// target:
// type: "minio"
// bucket: "testbucket1"
// endpoint: "https://play.min.io"
// path: "on"
// credentials:
// accessKey: "minioadmin"
// secretKey: "minioadmin"
// sessionToken: ""
// BatchReplicateFilter holds all the filters currently supported for batch replication
type BatchReplicateFilter struct {
NewerThan time.Duration `yaml:"newerThan,omitempty" json:"newerThan"`
OlderThan time.Duration `yaml:"olderThan,omitempty" json:"olderThan"`
CreatedAfter time.Time `yaml:"createdAfter,omitempty" json:"createdAfter"`
CreatedBefore time.Time `yaml:"createdBefore,omitempty" json:"createdBefore"`
Tags []BatchJobKV `yaml:"tags,omitempty" json:"tags"`
Metadata []BatchJobKV `yaml:"metadata,omitempty" json:"metadata"`
}
// BatchJobReplicateFlags holds various configurations for the replication job definition; it currently includes:
// - filter
// - notify
// - retry
type BatchJobReplicateFlags struct {
Filter BatchReplicateFilter `yaml:"filter" json:"filter"`
Notify BatchJobNotification `yaml:"notify" json:"notify"`
Retry BatchJobRetry `yaml:"retry" json:"retry"`
}
// BatchJobReplicateResourceType defines the resource type of a batch job
type BatchJobReplicateResourceType string
// Validate validates if the replicate resource type is recognized and supported
func (t BatchJobReplicateResourceType) Validate() error {
switch t {
case BatchJobReplicateResourceMinIO:
case BatchJobReplicateResourceS3:
default:
return errInvalidArgument
}
return nil
}
func (t BatchJobReplicateResourceType) isMinio() bool {
return t == BatchJobReplicateResourceMinIO
}
// Different types of batch jobs.
const (
BatchJobReplicateResourceMinIO BatchJobReplicateResourceType = "minio"
BatchJobReplicateResourceS3 BatchJobReplicateResourceType = "s3"
// add future targets
)
// BatchJobReplicateCredentials holds access credentials for batch replication; they may
// belong to either the target or the source.
type BatchJobReplicateCredentials struct {
AccessKey string `xml:"AccessKeyId" json:"accessKey,omitempty" yaml:"accessKey"`
SecretKey string `xml:"SecretAccessKey" json:"secretKey,omitempty" yaml:"secretKey"`
SessionToken string `xml:"SessionToken" json:"sessionToken,omitempty" yaml:"sessionToken"`
}
// Empty indicates if credentials are not set
func (c BatchJobReplicateCredentials) Empty() bool {
return c.AccessKey == "" && c.SecretKey == "" && c.SessionToken == ""
}
// Validate checks that the credentials are valid
func (c BatchJobReplicateCredentials) Validate() error {
if !auth.IsAccessKeyValid(c.AccessKey) || !auth.IsSecretKeyValid(c.SecretKey) {
return errInvalidArgument
}
return nil
}
// BatchJobReplicateTarget describes the target element of the replication job, which receives
// the filtered data from the source
type BatchJobReplicateTarget struct {
Type BatchJobReplicateResourceType `yaml:"type" json:"type"`
Bucket string `yaml:"bucket" json:"bucket"`
Prefix string `yaml:"prefix" json:"prefix"`
Endpoint string `yaml:"endpoint" json:"endpoint"`
Path string `yaml:"path" json:"path"`
Creds BatchJobReplicateCredentials `yaml:"credentials" json:"credentials"`
}
// ValidPath returns true if path is valid
func (t BatchJobReplicateTarget) ValidPath() bool {
return t.Path == "on" || t.Path == "off" || t.Path == "auto" || t.Path == ""
}
// BatchJobReplicateSource describes the source element of the replication job, which supplies
// the data to be replicated to the target
type BatchJobReplicateSource struct {
Type BatchJobReplicateResourceType `yaml:"type" json:"type"`
Bucket string `yaml:"bucket" json:"bucket"`
Prefix string `yaml:"prefix" json:"prefix"`
Endpoint string `yaml:"endpoint" json:"endpoint"`
Path string `yaml:"path" json:"path"`
Creds BatchJobReplicateCredentials `yaml:"credentials" json:"credentials"`
}
// ValidPath returns true if path is valid
func (s BatchJobReplicateSource) ValidPath() bool {
switch s.Path {
case "on", "off", "auto", "":
return true
default:
return false
}
}
// BatchJobReplicateV1 is v1 of the batch replication job definition
type BatchJobReplicateV1 struct {
APIVersion string `yaml:"apiVersion" json:"apiVersion"`
Flags BatchJobReplicateFlags `yaml:"flags" json:"flags"`
Target BatchJobReplicateTarget `yaml:"target" json:"target"`
Source BatchJobReplicateSource `yaml:"source" json:"source"`
clnt *miniogo.Core `msg:"-"`
}
// RemoteToLocal returns true if source is remote and target is local
func (r BatchJobReplicateV1) RemoteToLocal() bool {
return !r.Source.Creds.Empty()
}

cmd/batch-replicate_gen.go (new file, 1677 lines)

File diff suppressed because it is too large.

@@ -0,0 +1,688 @@
package cmd
// Code generated by github.com/tinylib/msgp DO NOT EDIT.
import (
"bytes"
"testing"
"github.com/tinylib/msgp/msgp"
)
func TestMarshalUnmarshalBatchJobReplicateCredentials(t *testing.T) {
v := BatchJobReplicateCredentials{}
bts, err := v.MarshalMsg(nil)
if err != nil {
t.Fatal(err)
}
left, err := v.UnmarshalMsg(bts)
if err != nil {
t.Fatal(err)
}
if len(left) > 0 {
t.Errorf("%d bytes left over after UnmarshalMsg(): %q", len(left), left)
}
left, err = msgp.Skip(bts)
if err != nil {
t.Fatal(err)
}
if len(left) > 0 {
t.Errorf("%d bytes left over after Skip(): %q", len(left), left)
}
}
func BenchmarkMarshalMsgBatchJobReplicateCredentials(b *testing.B) {
v := BatchJobReplicateCredentials{}
b.ReportAllocs()
b.ResetTimer()
for i := 0; i < b.N; i++ {
v.MarshalMsg(nil)
}
}
func BenchmarkAppendMsgBatchJobReplicateCredentials(b *testing.B) {
v := BatchJobReplicateCredentials{}
bts := make([]byte, 0, v.Msgsize())
bts, _ = v.MarshalMsg(bts[0:0])
b.SetBytes(int64(len(bts)))
b.ReportAllocs()
b.ResetTimer()
for i := 0; i < b.N; i++ {
bts, _ = v.MarshalMsg(bts[0:0])
}
}
func BenchmarkUnmarshalBatchJobReplicateCredentials(b *testing.B) {
v := BatchJobReplicateCredentials{}
bts, _ := v.MarshalMsg(nil)
b.ReportAllocs()
b.SetBytes(int64(len(bts)))
b.ResetTimer()
for i := 0; i < b.N; i++ {
_, err := v.UnmarshalMsg(bts)
if err != nil {
b.Fatal(err)
}
}
}
func TestEncodeDecodeBatchJobReplicateCredentials(t *testing.T) {
v := BatchJobReplicateCredentials{}
var buf bytes.Buffer
msgp.Encode(&buf, &v)
m := v.Msgsize()
if buf.Len() > m {
t.Log("WARNING: TestEncodeDecodeBatchJobReplicateCredentials Msgsize() is inaccurate")
}
vn := BatchJobReplicateCredentials{}
err := msgp.Decode(&buf, &vn)
if err != nil {
t.Error(err)
}
buf.Reset()
msgp.Encode(&buf, &v)
err = msgp.NewReader(&buf).Skip()
if err != nil {
t.Error(err)
}
}
func BenchmarkEncodeBatchJobReplicateCredentials(b *testing.B) {
v := BatchJobReplicateCredentials{}
var buf bytes.Buffer
msgp.Encode(&buf, &v)
b.SetBytes(int64(buf.Len()))
en := msgp.NewWriter(msgp.Nowhere)
b.ReportAllocs()
b.ResetTimer()
for i := 0; i < b.N; i++ {
v.EncodeMsg(en)
}
en.Flush()
}
func BenchmarkDecodeBatchJobReplicateCredentials(b *testing.B) {
v := BatchJobReplicateCredentials{}
var buf bytes.Buffer
msgp.Encode(&buf, &v)
b.SetBytes(int64(buf.Len()))
rd := msgp.NewEndlessReader(buf.Bytes(), b)
dc := msgp.NewReader(rd)
b.ReportAllocs()
b.ResetTimer()
for i := 0; i < b.N; i++ {
err := v.DecodeMsg(dc)
if err != nil {
b.Fatal(err)
}
}
}
func TestMarshalUnmarshalBatchJobReplicateFlags(t *testing.T) {
v := BatchJobReplicateFlags{}
bts, err := v.MarshalMsg(nil)
if err != nil {
t.Fatal(err)
}
left, err := v.UnmarshalMsg(bts)
if err != nil {
t.Fatal(err)
}
if len(left) > 0 {
t.Errorf("%d bytes left over after UnmarshalMsg(): %q", len(left), left)
}
left, err = msgp.Skip(bts)
if err != nil {
t.Fatal(err)
}
if len(left) > 0 {
t.Errorf("%d bytes left over after Skip(): %q", len(left), left)
}
}
func BenchmarkMarshalMsgBatchJobReplicateFlags(b *testing.B) {
v := BatchJobReplicateFlags{}
b.ReportAllocs()
b.ResetTimer()
for i := 0; i < b.N; i++ {
v.MarshalMsg(nil)
}
}
func BenchmarkAppendMsgBatchJobReplicateFlags(b *testing.B) {
v := BatchJobReplicateFlags{}
bts := make([]byte, 0, v.Msgsize())
bts, _ = v.MarshalMsg(bts[0:0])
b.SetBytes(int64(len(bts)))
b.ReportAllocs()
b.ResetTimer()
for i := 0; i < b.N; i++ {
bts, _ = v.MarshalMsg(bts[0:0])
}
}
func BenchmarkUnmarshalBatchJobReplicateFlags(b *testing.B) {
v := BatchJobReplicateFlags{}
bts, _ := v.MarshalMsg(nil)
b.ReportAllocs()
b.SetBytes(int64(len(bts)))
b.ResetTimer()
for i := 0; i < b.N; i++ {
_, err := v.UnmarshalMsg(bts)
if err != nil {
b.Fatal(err)
}
}
}
func TestEncodeDecodeBatchJobReplicateFlags(t *testing.T) {
v := BatchJobReplicateFlags{}
var buf bytes.Buffer
msgp.Encode(&buf, &v)
m := v.Msgsize()
if buf.Len() > m {
t.Log("WARNING: TestEncodeDecodeBatchJobReplicateFlags Msgsize() is inaccurate")
}
vn := BatchJobReplicateFlags{}
err := msgp.Decode(&buf, &vn)
if err != nil {
t.Error(err)
}
buf.Reset()
msgp.Encode(&buf, &v)
err = msgp.NewReader(&buf).Skip()
if err != nil {
t.Error(err)
}
}
func BenchmarkEncodeBatchJobReplicateFlags(b *testing.B) {
v := BatchJobReplicateFlags{}
var buf bytes.Buffer
msgp.Encode(&buf, &v)
b.SetBytes(int64(buf.Len()))
en := msgp.NewWriter(msgp.Nowhere)
b.ReportAllocs()
b.ResetTimer()
for i := 0; i < b.N; i++ {
v.EncodeMsg(en)
}
en.Flush()
}
func BenchmarkDecodeBatchJobReplicateFlags(b *testing.B) {
v := BatchJobReplicateFlags{}
var buf bytes.Buffer
msgp.Encode(&buf, &v)
b.SetBytes(int64(buf.Len()))
rd := msgp.NewEndlessReader(buf.Bytes(), b)
dc := msgp.NewReader(rd)
b.ReportAllocs()
b.ResetTimer()
for i := 0; i < b.N; i++ {
err := v.DecodeMsg(dc)
if err != nil {
b.Fatal(err)
}
}
}
func TestMarshalUnmarshalBatchJobReplicateSource(t *testing.T) {
v := BatchJobReplicateSource{}
bts, err := v.MarshalMsg(nil)
if err != nil {
t.Fatal(err)
}
left, err := v.UnmarshalMsg(bts)
if err != nil {
t.Fatal(err)
}
if len(left) > 0 {
t.Errorf("%d bytes left over after UnmarshalMsg(): %q", len(left), left)
}
left, err = msgp.Skip(bts)
if err != nil {
t.Fatal(err)
}
if len(left) > 0 {
t.Errorf("%d bytes left over after Skip(): %q", len(left), left)
}
}
func BenchmarkMarshalMsgBatchJobReplicateSource(b *testing.B) {
v := BatchJobReplicateSource{}
b.ReportAllocs()
b.ResetTimer()
for i := 0; i < b.N; i++ {
v.MarshalMsg(nil)
}
}
func BenchmarkAppendMsgBatchJobReplicateSource(b *testing.B) {
v := BatchJobReplicateSource{}
bts := make([]byte, 0, v.Msgsize())
bts, _ = v.MarshalMsg(bts[0:0])
b.SetBytes(int64(len(bts)))
b.ReportAllocs()
b.ResetTimer()
for i := 0; i < b.N; i++ {
bts, _ = v.MarshalMsg(bts[0:0])
}
}
func BenchmarkUnmarshalBatchJobReplicateSource(b *testing.B) {
v := BatchJobReplicateSource{}
bts, _ := v.MarshalMsg(nil)
b.ReportAllocs()
b.SetBytes(int64(len(bts)))
b.ResetTimer()
for i := 0; i < b.N; i++ {
_, err := v.UnmarshalMsg(bts)
if err != nil {
b.Fatal(err)
}
}
}
func TestEncodeDecodeBatchJobReplicateSource(t *testing.T) {
v := BatchJobReplicateSource{}
var buf bytes.Buffer
msgp.Encode(&buf, &v)
m := v.Msgsize()
if buf.Len() > m {
t.Log("WARNING: TestEncodeDecodeBatchJobReplicateSource Msgsize() is inaccurate")
}
vn := BatchJobReplicateSource{}
err := msgp.Decode(&buf, &vn)
if err != nil {
t.Error(err)
}
buf.Reset()
msgp.Encode(&buf, &v)
err = msgp.NewReader(&buf).Skip()
if err != nil {
t.Error(err)
}
}
func BenchmarkEncodeBatchJobReplicateSource(b *testing.B) {
v := BatchJobReplicateSource{}
var buf bytes.Buffer
msgp.Encode(&buf, &v)
b.SetBytes(int64(buf.Len()))
en := msgp.NewWriter(msgp.Nowhere)
b.ReportAllocs()
b.ResetTimer()
for i := 0; i < b.N; i++ {
v.EncodeMsg(en)
}
en.Flush()
}
func BenchmarkDecodeBatchJobReplicateSource(b *testing.B) {
v := BatchJobReplicateSource{}
var buf bytes.Buffer
msgp.Encode(&buf, &v)
b.SetBytes(int64(buf.Len()))
rd := msgp.NewEndlessReader(buf.Bytes(), b)
dc := msgp.NewReader(rd)
b.ReportAllocs()
b.ResetTimer()
for i := 0; i < b.N; i++ {
err := v.DecodeMsg(dc)
if err != nil {
b.Fatal(err)
}
}
}
func TestMarshalUnmarshalBatchJobReplicateTarget(t *testing.T) {
v := BatchJobReplicateTarget{}
bts, err := v.MarshalMsg(nil)
if err != nil {
t.Fatal(err)
}
left, err := v.UnmarshalMsg(bts)
if err != nil {
t.Fatal(err)
}
if len(left) > 0 {
t.Errorf("%d bytes left over after UnmarshalMsg(): %q", len(left), left)
}
left, err = msgp.Skip(bts)
if err != nil {
t.Fatal(err)
}
if len(left) > 0 {
t.Errorf("%d bytes left over after Skip(): %q", len(left), left)
}
}
func BenchmarkMarshalMsgBatchJobReplicateTarget(b *testing.B) {
v := BatchJobReplicateTarget{}
b.ReportAllocs()
b.ResetTimer()
for i := 0; i < b.N; i++ {
v.MarshalMsg(nil)
}
}
func BenchmarkAppendMsgBatchJobReplicateTarget(b *testing.B) {
v := BatchJobReplicateTarget{}
bts := make([]byte, 0, v.Msgsize())
bts, _ = v.MarshalMsg(bts[0:0])
b.SetBytes(int64(len(bts)))
b.ReportAllocs()
b.ResetTimer()
for i := 0; i < b.N; i++ {
bts, _ = v.MarshalMsg(bts[0:0])
}
}
func BenchmarkUnmarshalBatchJobReplicateTarget(b *testing.B) {
v := BatchJobReplicateTarget{}
bts, _ := v.MarshalMsg(nil)
b.ReportAllocs()
b.SetBytes(int64(len(bts)))
b.ResetTimer()
for i := 0; i < b.N; i++ {
_, err := v.UnmarshalMsg(bts)
if err != nil {
b.Fatal(err)
}
}
}
func TestEncodeDecodeBatchJobReplicateTarget(t *testing.T) {
v := BatchJobReplicateTarget{}
var buf bytes.Buffer
msgp.Encode(&buf, &v)
m := v.Msgsize()
if buf.Len() > m {
t.Log("WARNING: TestEncodeDecodeBatchJobReplicateTarget Msgsize() is inaccurate")
}
vn := BatchJobReplicateTarget{}
err := msgp.Decode(&buf, &vn)
if err != nil {
t.Error(err)
}
buf.Reset()
msgp.Encode(&buf, &v)
err = msgp.NewReader(&buf).Skip()
if err != nil {
t.Error(err)
}
}
func BenchmarkEncodeBatchJobReplicateTarget(b *testing.B) {
v := BatchJobReplicateTarget{}
var buf bytes.Buffer
msgp.Encode(&buf, &v)
b.SetBytes(int64(buf.Len()))
en := msgp.NewWriter(msgp.Nowhere)
b.ReportAllocs()
b.ResetTimer()
for i := 0; i < b.N; i++ {
v.EncodeMsg(en)
}
en.Flush()
}
func BenchmarkDecodeBatchJobReplicateTarget(b *testing.B) {
v := BatchJobReplicateTarget{}
var buf bytes.Buffer
msgp.Encode(&buf, &v)
b.SetBytes(int64(buf.Len()))
rd := msgp.NewEndlessReader(buf.Bytes(), b)
dc := msgp.NewReader(rd)
b.ReportAllocs()
b.ResetTimer()
for i := 0; i < b.N; i++ {
err := v.DecodeMsg(dc)
if err != nil {
b.Fatal(err)
}
}
}
func TestMarshalUnmarshalBatchJobReplicateV1(t *testing.T) {
v := BatchJobReplicateV1{}
bts, err := v.MarshalMsg(nil)
if err != nil {
t.Fatal(err)
}
left, err := v.UnmarshalMsg(bts)
if err != nil {
t.Fatal(err)
}
if len(left) > 0 {
t.Errorf("%d bytes left over after UnmarshalMsg(): %q", len(left), left)
}
left, err = msgp.Skip(bts)
if err != nil {
t.Fatal(err)
}
if len(left) > 0 {
t.Errorf("%d bytes left over after Skip(): %q", len(left), left)
}
}
func BenchmarkMarshalMsgBatchJobReplicateV1(b *testing.B) {
v := BatchJobReplicateV1{}
b.ReportAllocs()
b.ResetTimer()
for i := 0; i < b.N; i++ {
v.MarshalMsg(nil)
}
}
func BenchmarkAppendMsgBatchJobReplicateV1(b *testing.B) {
v := BatchJobReplicateV1{}
bts := make([]byte, 0, v.Msgsize())
bts, _ = v.MarshalMsg(bts[0:0])
b.SetBytes(int64(len(bts)))
b.ReportAllocs()
b.ResetTimer()
for i := 0; i < b.N; i++ {
bts, _ = v.MarshalMsg(bts[0:0])
}
}
func BenchmarkUnmarshalBatchJobReplicateV1(b *testing.B) {
v := BatchJobReplicateV1{}
bts, _ := v.MarshalMsg(nil)
b.ReportAllocs()
b.SetBytes(int64(len(bts)))
b.ResetTimer()
for i := 0; i < b.N; i++ {
_, err := v.UnmarshalMsg(bts)
if err != nil {
b.Fatal(err)
}
}
}
func TestEncodeDecodeBatchJobReplicateV1(t *testing.T) {
v := BatchJobReplicateV1{}
var buf bytes.Buffer
msgp.Encode(&buf, &v)
m := v.Msgsize()
if buf.Len() > m {
t.Log("WARNING: TestEncodeDecodeBatchJobReplicateV1 Msgsize() is inaccurate")
}
vn := BatchJobReplicateV1{}
err := msgp.Decode(&buf, &vn)
if err != nil {
t.Error(err)
}
buf.Reset()
msgp.Encode(&buf, &v)
err = msgp.NewReader(&buf).Skip()
if err != nil {
t.Error(err)
}
}
func BenchmarkEncodeBatchJobReplicateV1(b *testing.B) {
v := BatchJobReplicateV1{}
var buf bytes.Buffer
msgp.Encode(&buf, &v)
b.SetBytes(int64(buf.Len()))
en := msgp.NewWriter(msgp.Nowhere)
b.ReportAllocs()
b.ResetTimer()
for i := 0; i < b.N; i++ {
v.EncodeMsg(en)
}
en.Flush()
}
func BenchmarkDecodeBatchJobReplicateV1(b *testing.B) {
v := BatchJobReplicateV1{}
var buf bytes.Buffer
msgp.Encode(&buf, &v)
b.SetBytes(int64(buf.Len()))
rd := msgp.NewEndlessReader(buf.Bytes(), b)
dc := msgp.NewReader(rd)
b.ReportAllocs()
b.ResetTimer()
for i := 0; i < b.N; i++ {
err := v.DecodeMsg(dc)
if err != nil {
b.Fatal(err)
}
}
}
func TestMarshalUnmarshalBatchReplicateFilter(t *testing.T) {
v := BatchReplicateFilter{}
bts, err := v.MarshalMsg(nil)
if err != nil {
t.Fatal(err)
}
left, err := v.UnmarshalMsg(bts)
if err != nil {
t.Fatal(err)
}
if len(left) > 0 {
t.Errorf("%d bytes left over after UnmarshalMsg(): %q", len(left), left)
}
left, err = msgp.Skip(bts)
if err != nil {
t.Fatal(err)
}
if len(left) > 0 {
t.Errorf("%d bytes left over after Skip(): %q", len(left), left)
}
}
func BenchmarkMarshalMsgBatchReplicateFilter(b *testing.B) {
v := BatchReplicateFilter{}
b.ReportAllocs()
b.ResetTimer()
for i := 0; i < b.N; i++ {
v.MarshalMsg(nil)
}
}
func BenchmarkAppendMsgBatchReplicateFilter(b *testing.B) {
v := BatchReplicateFilter{}
bts := make([]byte, 0, v.Msgsize())
bts, _ = v.MarshalMsg(bts[0:0])
b.SetBytes(int64(len(bts)))
b.ReportAllocs()
b.ResetTimer()
for i := 0; i < b.N; i++ {
bts, _ = v.MarshalMsg(bts[0:0])
}
}
func BenchmarkUnmarshalBatchReplicateFilter(b *testing.B) {
v := BatchReplicateFilter{}
bts, _ := v.MarshalMsg(nil)
b.ReportAllocs()
b.SetBytes(int64(len(bts)))
b.ResetTimer()
for i := 0; i < b.N; i++ {
_, err := v.UnmarshalMsg(bts)
if err != nil {
b.Fatal(err)
}
}
}
func TestEncodeDecodeBatchReplicateFilter(t *testing.T) {
v := BatchReplicateFilter{}
var buf bytes.Buffer
msgp.Encode(&buf, &v)
m := v.Msgsize()
if buf.Len() > m {
t.Log("WARNING: TestEncodeDecodeBatchReplicateFilter Msgsize() is inaccurate")
}
vn := BatchReplicateFilter{}
err := msgp.Decode(&buf, &vn)
if err != nil {
t.Error(err)
}
buf.Reset()
msgp.Encode(&buf, &v)
err = msgp.NewReader(&buf).Skip()
if err != nil {
t.Error(err)
}
}
func BenchmarkEncodeBatchReplicateFilter(b *testing.B) {
v := BatchReplicateFilter{}
var buf bytes.Buffer
msgp.Encode(&buf, &v)
b.SetBytes(int64(buf.Len()))
en := msgp.NewWriter(msgp.Nowhere)
b.ReportAllocs()
b.ResetTimer()
for i := 0; i < b.N; i++ {
v.EncodeMsg(en)
}
en.Flush()
}
func BenchmarkDecodeBatchReplicateFilter(b *testing.B) {
v := BatchReplicateFilter{}
var buf bytes.Buffer
msgp.Encode(&buf, &v)
b.SetBytes(int64(buf.Len()))
rd := msgp.NewEndlessReader(buf.Bytes(), b)
dc := msgp.NewReader(rd)
b.ReportAllocs()
b.ResetTimer()
for i := 0; i < b.N; i++ {
err := v.DecodeMsg(dc)
if err != nil {
b.Fatal(err)
}
}
}

cmd/batch-rotate.go (new file, 477 lines)

@@ -0,0 +1,477 @@
// Copyright (c) 2015-2023 MinIO, Inc.
//
// This file is part of MinIO Object Storage stack
//
// This program is free software: you can redistribute it and/or modify
// it under the terms of the GNU Affero General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
//
// This program is distributed in the hope that it will be useful
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU Affero General Public License for more details.
//
// You should have received a copy of the GNU Affero General Public License
// along with this program. If not, see <http://www.gnu.org/licenses/>.
package cmd
import (
"context"
"encoding/base64"
"fmt"
"math/rand"
"net/http"
"runtime"
"strconv"
"strings"
"time"
jsoniter "github.com/json-iterator/go"
"github.com/minio/minio-go/v7/pkg/tags"
"github.com/minio/minio/internal/crypto"
xhttp "github.com/minio/minio/internal/http"
"github.com/minio/minio/internal/kms"
"github.com/minio/minio/internal/logger"
"github.com/minio/pkg/env"
"github.com/minio/pkg/workers"
)
// keyrotate:
// apiVersion: v1
// bucket: BUCKET
// prefix: PREFIX
// encryption:
// type: sse-s3 # valid values are sse-s3 and sse-kms
// key: <new-kms-key> # valid only for sse-kms
// context: <new-kms-key-context> # valid only for sse-kms
// # optional flags based filtering criteria
// # for all objects
// flags:
// filter:
// newerThan: "7d" # match objects newer than this value (e.g. 7d10h31s)
// olderThan: "7d" # match objects older than this value (e.g. 7d10h31s)
// createdAfter: "date" # match objects created after "date"
// createdBefore: "date" # match objects created before "date"
// tags:
// - key: "name"
// value: "pick*" # match objects with tag 'name', with all values starting with 'pick'
// metadata:
// - key: "content-type"
// value: "image/*" # match objects with 'content-type', with all values starting with 'image/'
// kmskey: "key-id" # match objects with KMS key-id (applicable only for sse-kms)
// notify:
// endpoint: "https://notify.endpoint" # notification endpoint to receive job status events
// token: "Bearer xxxxx" # optional authentication token for the notification endpoint
// retry:
// attempts: 10 # number of retries for the job before giving up
// delay: "500ms" # least amount of delay between each retry
//go:generate msgp -file $GOFILE -unexported
// BatchKeyRotationType defines key rotation type
type BatchKeyRotationType string
const (
sses3 BatchKeyRotationType = "sse-s3"
ssekms BatchKeyRotationType = "sse-kms"
)
// BatchJobKeyRotateEncryption defines the encryption options passed for key rotation
type BatchJobKeyRotateEncryption struct {
Type BatchKeyRotationType `yaml:"type" json:"type"`
Key string `yaml:"key" json:"key"`
Context string `yaml:"context" json:"context"`
kmsContext kms.Context `msg:"-"`
}
// Validate validates input key rotation encryption options.
func (e BatchJobKeyRotateEncryption) Validate() error {
if e.Type != sses3 && e.Type != ssekms {
return errInvalidArgument
}
spaces := strings.HasPrefix(e.Key, " ") || strings.HasSuffix(e.Key, " ")
if e.Type == ssekms && spaces {
return crypto.ErrInvalidEncryptionKeyID
}
if e.Type == ssekms && GlobalKMS != nil {
ctx := kms.Context{}
if e.Context != "" {
b, err := base64.StdEncoding.DecodeString(e.Context)
if err != nil {
return err
}
json := jsoniter.ConfigCompatibleWithStandardLibrary
if err := json.Unmarshal(b, &ctx); err != nil {
return err
}
}
e.kmsContext = kms.Context{}
for k, v := range ctx {
e.kmsContext[k] = v
}
ctx["MinIO batch API"] = "batchrotate" // Context for a test key operation
if _, err := GlobalKMS.GenerateKey(GlobalContext, e.Key, ctx); err != nil {
return err
}
}
return nil
}
// BatchKeyRotateFilter holds all the filters currently supported for batch key rotation
type BatchKeyRotateFilter struct {
NewerThan time.Duration `yaml:"newerThan,omitempty" json:"newerThan"`
OlderThan time.Duration `yaml:"olderThan,omitempty" json:"olderThan"`
CreatedAfter time.Time `yaml:"createdAfter,omitempty" json:"createdAfter"`
CreatedBefore time.Time `yaml:"createdBefore,omitempty" json:"createdBefore"`
Tags []BatchJobKV `yaml:"tags,omitempty" json:"tags"`
Metadata []BatchJobKV `yaml:"metadata,omitempty" json:"metadata"`
KMSKeyID string `yaml:"kmskeyid" json:"kmskey"`
}
// BatchKeyRotateNotification defines the success or failure notification endpoint for each job attempt
type BatchKeyRotateNotification struct {
Endpoint string `yaml:"endpoint" json:"endpoint"`
Token string `yaml:"token" json:"token"`
}
// BatchJobKeyRotateFlags holds various configurations for the key rotation job definition; it currently includes:
// - filter
// - notify
// - retry
type BatchJobKeyRotateFlags struct {
Filter BatchKeyRotateFilter `yaml:"filter" json:"filter"`
Notify BatchJobNotification `yaml:"notify" json:"notify"`
Retry BatchJobRetry `yaml:"retry" json:"retry"`
}
// BatchJobKeyRotateV1 is v1 of the batch key rotation job definition
type BatchJobKeyRotateV1 struct {
APIVersion string `yaml:"apiVersion" json:"apiVersion"`
Flags BatchJobKeyRotateFlags `yaml:"flags" json:"flags"`
Bucket string `yaml:"bucket" json:"bucket"`
Prefix string `yaml:"prefix" json:"prefix"`
Endpoint string `yaml:"endpoint" json:"endpoint"`
Encryption BatchJobKeyRotateEncryption `yaml:"encryption" json:"encryption"`
}
// Notify sends a job success or failure notification to the configured endpoint, if any.
func (r BatchJobKeyRotateV1) Notify(ctx context.Context, ri *batchJobInfo) error {
return notifyEndpoint(ctx, ri, r.Flags.Notify.Endpoint, r.Flags.Notify.Token)
}
// KeyRotate rotates the encryption key of an object
func (r *BatchJobKeyRotateV1) KeyRotate(ctx context.Context, api ObjectLayer, objInfo ObjectInfo) error {
srcBucket := r.Bucket
srcObject := objInfo.Name
if objInfo.DeleteMarker || !objInfo.VersionPurgeStatus.Empty() {
return nil
}
sseKMS := crypto.S3KMS.IsEncrypted(objInfo.UserDefined)
sseS3 := crypto.S3.IsEncrypted(objInfo.UserDefined)
if !sseKMS && !sseS3 { // encrypted with neither sse-s3 nor sse-kms, disallowed
return errInvalidEncryptionParameters
}
if sseKMS && r.Encryption.Type == sses3 { // previously encrypted with sse-kms, downgrading to sse-s3 is disallowed
return errInvalidEncryptionParameters
}
versioned := globalBucketVersioningSys.PrefixEnabled(srcBucket, srcObject)
versionSuspended := globalBucketVersioningSys.PrefixSuspended(srcBucket, srcObject)
lock := api.NewNSLock(r.Bucket, objInfo.Name)
lkctx, err := lock.GetLock(ctx, globalOperationTimeout)
if err != nil {
return err
}
ctx = lkctx.Context()
defer lock.Unlock(lkctx)
opts := ObjectOptions{
VersionID: objInfo.VersionID,
Versioned: versioned,
VersionSuspended: versionSuspended,
NoLock: true,
}
obj, err := api.GetObjectInfo(ctx, r.Bucket, objInfo.Name, opts)
if err != nil {
return err
}
oi := obj.Clone()
var (
newKeyID string
newKeyContext kms.Context
)
encMetadata := make(map[string]string)
for k, v := range oi.UserDefined {
if stringsHasPrefixFold(k, ReservedMetadataPrefixLower) {
encMetadata[k] = v
}
}
if (sseKMS || sseS3) && r.Encryption.Type == ssekms {
if err = r.Encryption.Validate(); err != nil {
return err
}
newKeyID = strings.TrimPrefix(r.Encryption.Key, crypto.ARNPrefix)
newKeyContext = r.Encryption.kmsContext
}
if err = rotateKey(ctx, []byte{}, newKeyID, []byte{}, r.Bucket, oi.Name, encMetadata, newKeyContext); err != nil {
return err
}
// Since we are rotating the keys, make sure to update the metadata.
oi.metadataOnly = true
oi.keyRotation = true
for k, v := range encMetadata {
oi.UserDefined[k] = v
}
if _, err := api.CopyObject(ctx, r.Bucket, oi.Name, r.Bucket, oi.Name, oi, ObjectOptions{
VersionID: oi.VersionID,
}, ObjectOptions{
VersionID: oi.VersionID,
NoLock: true,
}); err != nil {
return err
}
return nil
}
const (
batchKeyRotationName = "batch-rotate.bin"
batchKeyRotationFormat = 1
batchKeyRotateVersionV1 = 1
batchKeyRotateVersion = batchKeyRotateVersionV1
batchKeyRotateAPIVersion = "v1"
batchKeyRotateJobDefaultRetries = 3
batchKeyRotateJobDefaultRetryDelay = 25 * time.Millisecond
)
// Start starts the batch key rotation job, resuming a pending job via "job.ID" if one exists
func (r *BatchJobKeyRotateV1) Start(ctx context.Context, api ObjectLayer, job BatchJobRequest) error {
ri := &batchJobInfo{
JobID: job.ID,
JobType: string(job.Type()),
StartTime: job.Started,
}
if err := ri.load(ctx, api, job); err != nil {
return err
}
globalBatchJobsMetrics.save(job.ID, ri)
lastObject := ri.Object
delay := job.KeyRotate.Flags.Retry.Delay
if delay == 0 {
delay = batchKeyRotateJobDefaultRetryDelay
}
rnd := rand.New(rand.NewSource(time.Now().UnixNano()))
skip := func(info FileInfo) (ok bool) {
if r.Flags.Filter.OlderThan > 0 && time.Since(info.ModTime) < r.Flags.Filter.OlderThan {
// skip objects newer than the specified olderThan duration
return false
}
if r.Flags.Filter.NewerThan > 0 && time.Since(info.ModTime) >= r.Flags.Filter.NewerThan {
// skip objects older than the specified newerThan duration
return false
}
if !r.Flags.Filter.CreatedAfter.IsZero() && r.Flags.Filter.CreatedAfter.Before(info.ModTime) {
// skip all objects that are created before the specified time.
return false
}
if !r.Flags.Filter.CreatedBefore.IsZero() && r.Flags.Filter.CreatedBefore.After(info.ModTime) {
// skip all objects that are created after the specified time.
return false
}
if len(r.Flags.Filter.Tags) > 0 {
// Only parse object tags if tags filter is specified.
tagMap := map[string]string{}
tagStr := info.Metadata[xhttp.AmzObjectTagging]
if len(tagStr) != 0 {
t, err := tags.ParseObjectTags(tagStr)
if err != nil {
return false
}
tagMap = t.ToMap()
}
for _, kv := range r.Flags.Filter.Tags {
for t, v := range tagMap {
if kv.Match(BatchJobKV{Key: t, Value: v}) {
return true
}
}
}
// None of the provided tag filters match; skip the object
return false
}
if len(r.Flags.Filter.Metadata) > 0 {
for _, kv := range r.Flags.Filter.Metadata {
for k, v := range info.Metadata {
if !stringsHasPrefixFold(k, "x-amz-meta-") && !isStandardHeader(k) {
continue
}
// We only need to match x-amz-meta-* or standard headers
if kv.Match(BatchJobKV{Key: k, Value: v}) {
return true
}
}
}
// None of the provided metadata filters match; skip the object.
return false
}
if r.Flags.Filter.KMSKeyID != "" {
if v, ok := info.Metadata[xhttp.AmzServerSideEncryptionKmsID]; ok && strings.TrimPrefix(v, crypto.ARNPrefix) != r.Flags.Filter.KMSKeyID {
return false
}
}
return true
}
workerSize, err := strconv.Atoi(env.Get("_MINIO_BATCH_KEYROTATION_WORKERS", strconv.Itoa(runtime.GOMAXPROCS(0)/2)))
if err != nil {
return err
}
wk, err := workers.New(workerSize)
if err != nil {
// invalid worker size.
return err
}
retryAttempts := ri.RetryAttempts
ctx, cancel := context.WithCancel(ctx)
results := make(chan ObjectInfo, 100)
if err := api.Walk(ctx, r.Bucket, r.Prefix, results, ObjectOptions{
WalkMarker: lastObject,
WalkFilter: skip,
}); err != nil {
cancel()
// Do not need to retry if we can't list objects on source.
return err
}
for result := range results {
result := result
sseKMS := crypto.S3KMS.IsEncrypted(result.UserDefined)
sseS3 := crypto.S3.IsEncrypted(result.UserDefined)
if !sseKMS && !sseS3 { // encrypted with neither sse-s3 nor sse-kms, skip
continue
}
wk.Take()
go func() {
defer wk.Give()
for attempts := 1; attempts <= retryAttempts; attempts++ {
attempts := attempts
stopFn := globalBatchJobsMetrics.trace(batchKeyRotationMetricObject, job.ID, attempts, result)
success := true
if err := r.KeyRotate(ctx, api, result); err != nil {
stopFn(err)
logger.LogIf(ctx, err)
success = false
} else {
stopFn(nil)
}
ri.trackCurrentBucketObject(r.Bucket, result, success)
ri.RetryAttempts = attempts
globalBatchJobsMetrics.save(job.ID, ri)
// persist in-memory state to disk every 10 seconds.
logger.LogIf(ctx, ri.updateAfter(ctx, api, 10*time.Second, job))
if success {
break
}
if delay > 0 {
time.Sleep(delay + time.Duration(rnd.Float64()*float64(delay)))
}
}
}()
}
wk.Wait()
ri.Complete = ri.ObjectsFailed == 0
ri.Failed = ri.ObjectsFailed > 0
globalBatchJobsMetrics.save(job.ID, ri)
// persist in-memory state to disk.
logger.LogIf(ctx, ri.updateAfter(ctx, api, 0, job))
if err := r.Notify(ctx, ri); err != nil {
logger.LogIf(ctx, fmt.Errorf("unable to notify %v", err))
}
cancel()
return nil
}
//msgp:ignore batchKeyRotationJobError
type batchKeyRotationJobError struct {
Code string
Description string
HTTPStatusCode int
}
func (e batchKeyRotationJobError) Error() string {
return e.Description
}
// Validate validates the job definition input
func (r *BatchJobKeyRotateV1) Validate(ctx context.Context, job BatchJobRequest, o ObjectLayer) error {
if r == nil {
return nil
}
if r.APIVersion != batchKeyRotateAPIVersion {
return errInvalidArgument
}
if r.Bucket == "" {
return errInvalidArgument
}
if _, err := o.GetBucketInfo(ctx, r.Bucket, BucketOptions{}); err != nil {
if isErrBucketNotFound(err) {
return batchKeyRotationJobError{
Code: "NoSuchSourceBucket",
Description: "The specified source bucket does not exist",
HTTPStatusCode: http.StatusNotFound,
}
}
return err
}
if GlobalKMS == nil {
return errKMSNotConfigured
}
if err := r.Encryption.Validate(); err != nil {
return err
}
for _, tag := range r.Flags.Filter.Tags {
if err := tag.Validate(); err != nil {
return err
}
}
for _, meta := range r.Flags.Filter.Metadata {
if err := meta.Validate(); err != nil {
return err
}
}
if err := r.Flags.Retry.Validate(); err != nil {
return err
}
return nil
}
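The age checks inside skip() above are easy to misread, so here is a minimal standalone sketch of the same semantics. It is not part of this diff; matchesAge is a hypothetical helper that mirrors the olderThan/newerThan logic: a zero duration disables a check, and an object qualifies only when its age is at least olderThan and strictly below newerThan.

package main

import (
	"fmt"
	"time"
)

// matchesAge mirrors the age checks in BatchJobKeyRotateV1.Start's skip
// function; a zero duration disables the corresponding check.
func matchesAge(modTime time.Time, olderThan, newerThan time.Duration) bool {
	age := time.Since(modTime)
	if olderThan > 0 && age < olderThan {
		return false // newer than the olderThan cutoff, filtered out
	}
	if newerThan > 0 && age >= newerThan {
		return false // older than the newerThan cutoff, filtered out
	}
	return true
}

func main() {
	day := 24 * time.Hour
	tenDaysOld := time.Now().Add(-10 * day)
	fmt.Println(matchesAge(tenDaysOld, 7*day, 0))      // true: at least 7d old
	fmt.Println(matchesAge(tenDaysOld, 0, 7*day))      // false: not newer than 7d
	fmt.Println(matchesAge(tenDaysOld, 7*day, 30*day)) // true: between 7d and 30d
}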

cmd/batch-rotate_gen.go (new file, 1203 lines)

File diff suppressed because it is too large.

@@ -0,0 +1,575 @@
package cmd
// Code generated by github.com/tinylib/msgp DO NOT EDIT.
import (
"bytes"
"testing"
"github.com/tinylib/msgp/msgp"
)
func TestMarshalUnmarshalBatchJobKeyRotateEncryption(t *testing.T) {
v := BatchJobKeyRotateEncryption{}
bts, err := v.MarshalMsg(nil)
if err != nil {
t.Fatal(err)
}
left, err := v.UnmarshalMsg(bts)
if err != nil {
t.Fatal(err)
}
if len(left) > 0 {
t.Errorf("%d bytes left over after UnmarshalMsg(): %q", len(left), left)
}
left, err = msgp.Skip(bts)
if err != nil {
t.Fatal(err)
}
if len(left) > 0 {
t.Errorf("%d bytes left over after Skip(): %q", len(left), left)
}
}
func BenchmarkMarshalMsgBatchJobKeyRotateEncryption(b *testing.B) {
v := BatchJobKeyRotateEncryption{}
b.ReportAllocs()
b.ResetTimer()
for i := 0; i < b.N; i++ {
v.MarshalMsg(nil)
}
}
func BenchmarkAppendMsgBatchJobKeyRotateEncryption(b *testing.B) {
v := BatchJobKeyRotateEncryption{}
bts := make([]byte, 0, v.Msgsize())
bts, _ = v.MarshalMsg(bts[0:0])
b.SetBytes(int64(len(bts)))
b.ReportAllocs()
b.ResetTimer()
for i := 0; i < b.N; i++ {
bts, _ = v.MarshalMsg(bts[0:0])
}
}
func BenchmarkUnmarshalBatchJobKeyRotateEncryption(b *testing.B) {
v := BatchJobKeyRotateEncryption{}
bts, _ := v.MarshalMsg(nil)
b.ReportAllocs()
b.SetBytes(int64(len(bts)))
b.ResetTimer()
for i := 0; i < b.N; i++ {
_, err := v.UnmarshalMsg(bts)
if err != nil {
b.Fatal(err)
}
}
}
func TestEncodeDecodeBatchJobKeyRotateEncryption(t *testing.T) {
v := BatchJobKeyRotateEncryption{}
var buf bytes.Buffer
msgp.Encode(&buf, &v)
m := v.Msgsize()
if buf.Len() > m {
t.Log("WARNING: TestEncodeDecodeBatchJobKeyRotateEncryption Msgsize() is inaccurate")
}
vn := BatchJobKeyRotateEncryption{}
err := msgp.Decode(&buf, &vn)
if err != nil {
t.Error(err)
}
buf.Reset()
msgp.Encode(&buf, &v)
err = msgp.NewReader(&buf).Skip()
if err != nil {
t.Error(err)
}
}
func BenchmarkEncodeBatchJobKeyRotateEncryption(b *testing.B) {
v := BatchJobKeyRotateEncryption{}
var buf bytes.Buffer
msgp.Encode(&buf, &v)
b.SetBytes(int64(buf.Len()))
en := msgp.NewWriter(msgp.Nowhere)
b.ReportAllocs()
b.ResetTimer()
for i := 0; i < b.N; i++ {
v.EncodeMsg(en)
}
en.Flush()
}
func BenchmarkDecodeBatchJobKeyRotateEncryption(b *testing.B) {
v := BatchJobKeyRotateEncryption{}
var buf bytes.Buffer
msgp.Encode(&buf, &v)
b.SetBytes(int64(buf.Len()))
rd := msgp.NewEndlessReader(buf.Bytes(), b)
dc := msgp.NewReader(rd)
b.ReportAllocs()
b.ResetTimer()
for i := 0; i < b.N; i++ {
err := v.DecodeMsg(dc)
if err != nil {
b.Fatal(err)
}
}
}
func TestMarshalUnmarshalBatchJobKeyRotateFlags(t *testing.T) {
v := BatchJobKeyRotateFlags{}
bts, err := v.MarshalMsg(nil)
if err != nil {
t.Fatal(err)
}
left, err := v.UnmarshalMsg(bts)
if err != nil {
t.Fatal(err)
}
if len(left) > 0 {
t.Errorf("%d bytes left over after UnmarshalMsg(): %q", len(left), left)
}
left, err = msgp.Skip(bts)
if err != nil {
t.Fatal(err)
}
if len(left) > 0 {
t.Errorf("%d bytes left over after Skip(): %q", len(left), left)
}
}
func BenchmarkMarshalMsgBatchJobKeyRotateFlags(b *testing.B) {
v := BatchJobKeyRotateFlags{}
b.ReportAllocs()
b.ResetTimer()
for i := 0; i < b.N; i++ {
v.MarshalMsg(nil)
}
}
func BenchmarkAppendMsgBatchJobKeyRotateFlags(b *testing.B) {
v := BatchJobKeyRotateFlags{}
bts := make([]byte, 0, v.Msgsize())
bts, _ = v.MarshalMsg(bts[0:0])
b.SetBytes(int64(len(bts)))
b.ReportAllocs()
b.ResetTimer()
for i := 0; i < b.N; i++ {
bts, _ = v.MarshalMsg(bts[0:0])
}
}
func BenchmarkUnmarshalBatchJobKeyRotateFlags(b *testing.B) {
v := BatchJobKeyRotateFlags{}
bts, _ := v.MarshalMsg(nil)
b.ReportAllocs()
b.SetBytes(int64(len(bts)))
b.ResetTimer()
for i := 0; i < b.N; i++ {
_, err := v.UnmarshalMsg(bts)
if err != nil {
b.Fatal(err)
}
}
}
func TestEncodeDecodeBatchJobKeyRotateFlags(t *testing.T) {
v := BatchJobKeyRotateFlags{}
var buf bytes.Buffer
msgp.Encode(&buf, &v)
m := v.Msgsize()
if buf.Len() > m {
t.Log("WARNING: TestEncodeDecodeBatchJobKeyRotateFlags Msgsize() is inaccurate")
}
vn := BatchJobKeyRotateFlags{}
err := msgp.Decode(&buf, &vn)
if err != nil {
t.Error(err)
}
buf.Reset()
msgp.Encode(&buf, &v)
err = msgp.NewReader(&buf).Skip()
if err != nil {
t.Error(err)
}
}
func BenchmarkEncodeBatchJobKeyRotateFlags(b *testing.B) {
v := BatchJobKeyRotateFlags{}
var buf bytes.Buffer
msgp.Encode(&buf, &v)
b.SetBytes(int64(buf.Len()))
en := msgp.NewWriter(msgp.Nowhere)
b.ReportAllocs()
b.ResetTimer()
for i := 0; i < b.N; i++ {
v.EncodeMsg(en)
}
en.Flush()
}
func BenchmarkDecodeBatchJobKeyRotateFlags(b *testing.B) {
v := BatchJobKeyRotateFlags{}
var buf bytes.Buffer
msgp.Encode(&buf, &v)
b.SetBytes(int64(buf.Len()))
rd := msgp.NewEndlessReader(buf.Bytes(), b)
dc := msgp.NewReader(rd)
b.ReportAllocs()
b.ResetTimer()
for i := 0; i < b.N; i++ {
err := v.DecodeMsg(dc)
if err != nil {
b.Fatal(err)
}
}
}
func TestMarshalUnmarshalBatchJobKeyRotateV1(t *testing.T) {
v := BatchJobKeyRotateV1{}
bts, err := v.MarshalMsg(nil)
if err != nil {
t.Fatal(err)
}
left, err := v.UnmarshalMsg(bts)
if err != nil {
t.Fatal(err)
}
if len(left) > 0 {
t.Errorf("%d bytes left over after UnmarshalMsg(): %q", len(left), left)
}
left, err = msgp.Skip(bts)
if err != nil {
t.Fatal(err)
}
if len(left) > 0 {
t.Errorf("%d bytes left over after Skip(): %q", len(left), left)
}
}
func BenchmarkMarshalMsgBatchJobKeyRotateV1(b *testing.B) {
v := BatchJobKeyRotateV1{}
b.ReportAllocs()
b.ResetTimer()
for i := 0; i < b.N; i++ {
v.MarshalMsg(nil)
}
}
func BenchmarkAppendMsgBatchJobKeyRotateV1(b *testing.B) {
v := BatchJobKeyRotateV1{}
bts := make([]byte, 0, v.Msgsize())
bts, _ = v.MarshalMsg(bts[0:0])
b.SetBytes(int64(len(bts)))
b.ReportAllocs()
b.ResetTimer()
for i := 0; i < b.N; i++ {
bts, _ = v.MarshalMsg(bts[0:0])
}
}
func BenchmarkUnmarshalBatchJobKeyRotateV1(b *testing.B) {
v := BatchJobKeyRotateV1{}
bts, _ := v.MarshalMsg(nil)
b.ReportAllocs()
b.SetBytes(int64(len(bts)))
b.ResetTimer()
for i := 0; i < b.N; i++ {
_, err := v.UnmarshalMsg(bts)
if err != nil {
b.Fatal(err)
}
}
}
func TestEncodeDecodeBatchJobKeyRotateV1(t *testing.T) {
v := BatchJobKeyRotateV1{}
var buf bytes.Buffer
msgp.Encode(&buf, &v)
m := v.Msgsize()
if buf.Len() > m {
t.Log("WARNING: TestEncodeDecodeBatchJobKeyRotateV1 Msgsize() is inaccurate")
}
vn := BatchJobKeyRotateV1{}
err := msgp.Decode(&buf, &vn)
if err != nil {
t.Error(err)
}
buf.Reset()
msgp.Encode(&buf, &v)
err = msgp.NewReader(&buf).Skip()
if err != nil {
t.Error(err)
}
}
func BenchmarkEncodeBatchJobKeyRotateV1(b *testing.B) {
v := BatchJobKeyRotateV1{}
var buf bytes.Buffer
msgp.Encode(&buf, &v)
b.SetBytes(int64(buf.Len()))
en := msgp.NewWriter(msgp.Nowhere)
b.ReportAllocs()
b.ResetTimer()
for i := 0; i < b.N; i++ {
v.EncodeMsg(en)
}
en.Flush()
}
func BenchmarkDecodeBatchJobKeyRotateV1(b *testing.B) {
v := BatchJobKeyRotateV1{}
var buf bytes.Buffer
msgp.Encode(&buf, &v)
b.SetBytes(int64(buf.Len()))
rd := msgp.NewEndlessReader(buf.Bytes(), b)
dc := msgp.NewReader(rd)
b.ReportAllocs()
b.ResetTimer()
for i := 0; i < b.N; i++ {
err := v.DecodeMsg(dc)
if err != nil {
b.Fatal(err)
}
}
}
func TestMarshalUnmarshalBatchKeyRotateFilter(t *testing.T) {
v := BatchKeyRotateFilter{}
bts, err := v.MarshalMsg(nil)
if err != nil {
t.Fatal(err)
}
left, err := v.UnmarshalMsg(bts)
if err != nil {
t.Fatal(err)
}
if len(left) > 0 {
t.Errorf("%d bytes left over after UnmarshalMsg(): %q", len(left), left)
}
left, err = msgp.Skip(bts)
if err != nil {
t.Fatal(err)
}
if len(left) > 0 {
t.Errorf("%d bytes left over after Skip(): %q", len(left), left)
}
}
func BenchmarkMarshalMsgBatchKeyRotateFilter(b *testing.B) {
v := BatchKeyRotateFilter{}
b.ReportAllocs()
b.ResetTimer()
for i := 0; i < b.N; i++ {
v.MarshalMsg(nil)
}
}
func BenchmarkAppendMsgBatchKeyRotateFilter(b *testing.B) {
v := BatchKeyRotateFilter{}
bts := make([]byte, 0, v.Msgsize())
bts, _ = v.MarshalMsg(bts[0:0])
b.SetBytes(int64(len(bts)))
b.ReportAllocs()
b.ResetTimer()
for i := 0; i < b.N; i++ {
bts, _ = v.MarshalMsg(bts[0:0])
}
}
func BenchmarkUnmarshalBatchKeyRotateFilter(b *testing.B) {
v := BatchKeyRotateFilter{}
bts, _ := v.MarshalMsg(nil)
b.ReportAllocs()
b.SetBytes(int64(len(bts)))
b.ResetTimer()
for i := 0; i < b.N; i++ {
_, err := v.UnmarshalMsg(bts)
if err != nil {
b.Fatal(err)
}
}
}
func TestEncodeDecodeBatchKeyRotateFilter(t *testing.T) {
v := BatchKeyRotateFilter{}
var buf bytes.Buffer
msgp.Encode(&buf, &v)
m := v.Msgsize()
if buf.Len() > m {
t.Log("WARNING: TestEncodeDecodeBatchKeyRotateFilter Msgsize() is inaccurate")
}
vn := BatchKeyRotateFilter{}
err := msgp.Decode(&buf, &vn)
if err != nil {
t.Error(err)
}
buf.Reset()
msgp.Encode(&buf, &v)
err = msgp.NewReader(&buf).Skip()
if err != nil {
t.Error(err)
}
}
func BenchmarkEncodeBatchKeyRotateFilter(b *testing.B) {
v := BatchKeyRotateFilter{}
var buf bytes.Buffer
msgp.Encode(&buf, &v)
b.SetBytes(int64(buf.Len()))
en := msgp.NewWriter(msgp.Nowhere)
b.ReportAllocs()
b.ResetTimer()
for i := 0; i < b.N; i++ {
v.EncodeMsg(en)
}
en.Flush()
}
func BenchmarkDecodeBatchKeyRotateFilter(b *testing.B) {
v := BatchKeyRotateFilter{}
var buf bytes.Buffer
msgp.Encode(&buf, &v)
b.SetBytes(int64(buf.Len()))
rd := msgp.NewEndlessReader(buf.Bytes(), b)
dc := msgp.NewReader(rd)
b.ReportAllocs()
b.ResetTimer()
for i := 0; i < b.N; i++ {
err := v.DecodeMsg(dc)
if err != nil {
b.Fatal(err)
}
}
}
func TestMarshalUnmarshalBatchKeyRotateNotification(t *testing.T) {
v := BatchKeyRotateNotification{}
bts, err := v.MarshalMsg(nil)
if err != nil {
t.Fatal(err)
}
left, err := v.UnmarshalMsg(bts)
if err != nil {
t.Fatal(err)
}
if len(left) > 0 {
t.Errorf("%d bytes left over after UnmarshalMsg(): %q", len(left), left)
}
left, err = msgp.Skip(bts)
if err != nil {
t.Fatal(err)
}
if len(left) > 0 {
t.Errorf("%d bytes left over after Skip(): %q", len(left), left)
}
}
func BenchmarkMarshalMsgBatchKeyRotateNotification(b *testing.B) {
v := BatchKeyRotateNotification{}
b.ReportAllocs()
b.ResetTimer()
for i := 0; i < b.N; i++ {
v.MarshalMsg(nil)
}
}
func BenchmarkAppendMsgBatchKeyRotateNotification(b *testing.B) {
v := BatchKeyRotateNotification{}
bts := make([]byte, 0, v.Msgsize())
bts, _ = v.MarshalMsg(bts[0:0])
b.SetBytes(int64(len(bts)))
b.ReportAllocs()
b.ResetTimer()
for i := 0; i < b.N; i++ {
bts, _ = v.MarshalMsg(bts[0:0])
}
}
func BenchmarkUnmarshalBatchKeyRotateNotification(b *testing.B) {
v := BatchKeyRotateNotification{}
bts, _ := v.MarshalMsg(nil)
b.ReportAllocs()
b.SetBytes(int64(len(bts)))
b.ResetTimer()
for i := 0; i < b.N; i++ {
_, err := v.UnmarshalMsg(bts)
if err != nil {
b.Fatal(err)
}
}
}
func TestEncodeDecodeBatchKeyRotateNotification(t *testing.T) {
v := BatchKeyRotateNotification{}
var buf bytes.Buffer
msgp.Encode(&buf, &v)
m := v.Msgsize()
if buf.Len() > m {
t.Log("WARNING: TestEncodeDecodeBatchKeyRotateNotification Msgsize() is inaccurate")
}
vn := BatchKeyRotateNotification{}
err := msgp.Decode(&buf, &vn)
if err != nil {
t.Error(err)
}
buf.Reset()
msgp.Encode(&buf, &v)
err = msgp.NewReader(&buf).Skip()
if err != nil {
t.Error(err)
}
}
func BenchmarkEncodeBatchKeyRotateNotification(b *testing.B) {
v := BatchKeyRotateNotification{}
var buf bytes.Buffer
msgp.Encode(&buf, &v)
b.SetBytes(int64(buf.Len()))
en := msgp.NewWriter(msgp.Nowhere)
b.ReportAllocs()
b.ResetTimer()
for i := 0; i < b.N; i++ {
v.EncodeMsg(en)
}
en.Flush()
}
func BenchmarkDecodeBatchKeyRotateNotification(b *testing.B) {
v := BatchKeyRotateNotification{}
var buf bytes.Buffer
msgp.Encode(&buf, &v)
b.SetBytes(int64(buf.Len()))
rd := msgp.NewEndlessReader(buf.Bytes(), b)
dc := msgp.NewReader(rd)
b.ReportAllocs()
b.ResetTimer()
for i := 0; i < b.N; i++ {
err := v.DecodeMsg(dc)
if err != nil {
b.Fatal(err)
}
}
}


@@ -0,0 +1,24 @@
// Code generated by "stringer -type=batchJobMetric -trimprefix=batchJobMetric batch-handlers.go"; DO NOT EDIT.
package cmd
import "strconv"
func _() {
// An "invalid array index" compiler error signifies that the constant values have changed.
// Re-run the stringer command to generate them again.
var x [1]struct{}
_ = x[batchReplicationMetricObject-0]
_ = x[batchKeyRotationMetricObject-1]
}
const _batchJobMetric_name = "batchReplicationMetricObjectbatchKeyRotationMetricObject"
var _batchJobMetric_index = [...]uint8{0, 28, 56}
func (i batchJobMetric) String() string {
if i >= batchJobMetric(len(_batchJobMetric_index)-1) {
return "batchJobMetric(" + strconv.FormatInt(int64(i), 10) + ")"
}
return _batchJobMetric_name[_batchJobMetric_index[i]:_batchJobMetric_index[i+1]]
}


@@ -1,23 +0,0 @@
// Code generated by "stringer -type=batchReplicationMetric -trimprefix=batchReplicationMetric batch-handlers.go"; DO NOT EDIT.
package cmd
import "strconv"
func _() {
// An "invalid array index" compiler error signifies that the constant values have changed.
// Re-run the stringer command to generate them again.
var x [1]struct{}
_ = x[batchReplicationMetricObject-0]
}
const _batchReplicationMetric_name = "Object"
var _batchReplicationMetric_index = [...]uint8{0, 6}
func (i batchReplicationMetric) String() string {
if i >= batchReplicationMetric(len(_batchReplicationMetric_index)-1) {
return "batchReplicationMetric(" + strconv.FormatInt(int64(i), 10) + ")"
}
return _batchReplicationMetric_name[_batchReplicationMetric_index[i]:_batchReplicationMetric_index[i+1]]
}


@@ -35,7 +35,7 @@ func runPutObjectBenchmark(b *testing.B, obj ObjectLayer, objSize int) {
// obtains random bucket name.
bucket := getRandomBucketName()
// create bucket.
-err = obj.MakeBucketWithLocation(context.Background(), bucket, MakeBucketOptions{})
+err = obj.MakeBucket(context.Background(), bucket, MakeBucketOptions{})
if err != nil {
b.Fatal(err)
}
@@ -76,7 +76,7 @@ func runPutObjectPartBenchmark(b *testing.B, obj ObjectLayer, partSize int) {
object := getRandomObjectName()
// create bucket.
-err = obj.MakeBucketWithLocation(context.Background(), bucket, MakeBucketOptions{})
+err = obj.MakeBucket(context.Background(), bucket, MakeBucketOptions{})
if err != nil {
b.Fatal(err)
}
@@ -196,7 +196,7 @@ func runPutObjectBenchmarkParallel(b *testing.B, obj ObjectLayer, objSize int) {
// obtains random bucket name.
bucket := getRandomBucketName()
// create bucket.
-err := obj.MakeBucketWithLocation(context.Background(), bucket, MakeBucketOptions{})
+err := obj.MakeBucket(context.Background(), bucket, MakeBucketOptions{})
if err != nil {
b.Fatal(err)
}


@@ -94,16 +94,23 @@ func newStreamingBitrotWriter(disk StorageAPI, volume, filePath string, length i
r, w := io.Pipe()
h := algo.New()
-bw := &streamingBitrotWriter{iow: w, closeWithErr: w.CloseWithError, h: h, shardSize: shardSize, canClose: &sync.WaitGroup{}}
+bw := &streamingBitrotWriter{
+iow: ioutil.NewDeadlineWriter(w, diskMaxTimeout),
+closeWithErr: w.CloseWithError,
+h: h,
+shardSize: shardSize,
+canClose: &sync.WaitGroup{},
+}
bw.canClose.Add(1)
go func() {
+defer bw.canClose.Done()
totalFileSize := int64(-1) // For compressed objects length will be unknown (represented by length=-1)
if length != -1 {
bitrotSumsTotalSize := ceilFrac(length, shardSize) * int64(h.Size()) // Size used for storing bitrot checksums.
totalFileSize = bitrotSumsTotalSize + length
}
r.CloseWithError(disk.CreateFile(context.TODO(), volume, filePath, totalFileSize, r))
-bw.canClose.Done()
}()
return bw
}
@@ -165,9 +172,9 @@ func (b *streamingBitrotReader) ReadAt(buf []byte, offset int64) (int, error) {
b.rc, err = b.disk.ReadFileStream(context.TODO(), b.volume, b.filePath, streamOffset, b.tillOffset-streamOffset)
if err != nil {
if !IsErr(err, ignoredErrs...) {
logger.LogIf(GlobalContext,
logger.LogOnceIf(GlobalContext,
fmt.Errorf("Reading erasure shards at (%s: %s/%s) returned '%w', will attempt to reconstruct if we have quorum",
b.disk, b.volume, b.filePath, err))
b.disk, b.volume, b.filePath, err), "bitrot-read-file-stream-"+b.volume+"-"+b.filePath)
}
}
} else {

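The change above wraps the pipe in a deadline writer and moves canClose.Done() into a defer; the goroutine still sizes the stream up front: one checksum of h.Size() bytes per shardSize-byte shard, plus the payload itself. A sketch of that size arithmetic, assuming ceilFrac is the ceiling-division helper its use implies (the 32-byte checksum size is illustrative):

```go
package main

import "fmt"

// Ceiling division, matching how newStreamingBitrotWriter appears to use ceilFrac.
func ceilFrac(numerator, denominator int64) int64 {
	if denominator == 0 {
		return 0
	}
	return (numerator + denominator - 1) / denominator
}

func main() {
	const (
		length    = int64(10 << 20) // 10 MiB object
		shardSize = int64(1 << 20)  // 1 MiB shards
		sumSize   = int64(32)       // e.g. a 256-bit bitrot checksum per shard
	)
	bitrotSumsTotalSize := ceilFrac(length, shardSize) * sumSize
	fmt.Println(bitrotSumsTotalSize + length) // bytes handed to CreateFile: sums + data
}
```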
cmd/bootstrap-messages.go (new file, 69 lines)

@@ -0,0 +1,69 @@
// Copyright (c) 2015-2023 MinIO, Inc.
//
// This file is part of MinIO Object Storage stack
//
// This program is free software: you can redistribute it and/or modify
// it under the terms of the GNU Affero General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
//
// This program is distributed in the hope that it will be useful
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU Affero General Public License for more details.
//
// You should have received a copy of the GNU Affero General Public License
// along with this program. If not, see <http://www.gnu.org/licenses/>.
package cmd
import (
"context"
"sync"
"github.com/minio/madmin-go/v3"
"github.com/minio/minio/internal/pubsub"
)
const bootstrapTraceLimit = 4 << 10
type bootstrapTracer struct {
mu sync.RWMutex
info []madmin.TraceInfo
}
var globalBootstrapTracer = &bootstrapTracer{}
func (bs *bootstrapTracer) Record(info madmin.TraceInfo) {
bs.mu.Lock()
defer bs.mu.Unlock()
if len(bs.info) > bootstrapTraceLimit {
return
}
bs.info = append(bs.info, info)
}
func (bs *bootstrapTracer) Events() []madmin.TraceInfo {
traceInfo := make([]madmin.TraceInfo, 0, bootstrapTraceLimit)
bs.mu.RLock()
for _, i := range bs.info {
traceInfo = append(traceInfo, i)
}
bs.mu.RUnlock()
return traceInfo
}
func (bs *bootstrapTracer) Publish(ctx context.Context, trace *pubsub.PubSub[madmin.TraceInfo, madmin.TraceType]) {
for _, bsEvent := range bs.Events() {
if bsEvent.Message != "" {
select {
case <-ctx.Done():
default:
trace.Publish(bsEvent)
}
}
}
}

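The tracer above is a bounded, mutex-guarded recorder: events past the cap are silently dropped, and Events() hands back a copy so callers never alias the guarded slice. A simplified stand-in (plain strings instead of madmin.TraceInfo; note the original's `>` check actually admits one entry beyond the limit):

```go
package main

import (
	"fmt"
	"sync"
)

const limit = 4

type recorder struct {
	mu   sync.RWMutex
	info []string
}

func (r *recorder) Record(msg string) {
	r.mu.Lock()
	defer r.mu.Unlock()
	if len(r.info) >= limit {
		return // bounded: drop anything past the cap
	}
	r.info = append(r.info, msg)
}

func (r *recorder) Events() []string {
	r.mu.RLock()
	defer r.mu.RUnlock()
	return append([]string(nil), r.info...) // snapshot, not the live slice
}

func main() {
	r := &recorder{}
	for i := 0; i < 6; i++ {
		r.Record(fmt.Sprintf("event-%d", i))
	}
	fmt.Println(r.Events()) // [event-0 event-1 event-2 event-3]
}
```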

@@ -1,4 +1,4 @@
// Copyright (c) 2015-2021 MinIO, Inc.
// Copyright (c) 2015-2022 MinIO, Inc.
//
// This file is part of MinIO Object Storage stack
//
@@ -24,15 +24,15 @@ import (
"io"
"net/http"
"net/url"
"os"
"reflect"
"runtime"
"time"
"github.com/gorilla/mux"
"github.com/minio/minio-go/v7/pkg/set"
xhttp "github.com/minio/minio/internal/http"
"github.com/minio/minio/internal/logger"
"github.com/minio/minio/internal/rest"
"github.com/minio/mux"
"github.com/minio/pkg/env"
)
@@ -53,23 +53,22 @@ type bootstrapRESTServer struct{}
// ServerSystemConfig - captures information about server configuration.
type ServerSystemConfig struct {
MinioPlatform string
MinioEndpoints EndpointServerPools
MinioEnv map[string]string
}
// Diff - returns error on first difference found in two configs.
func (s1 ServerSystemConfig) Diff(s2 ServerSystemConfig) error {
if s1.MinioPlatform != s2.MinioPlatform {
return fmt.Errorf("Expected platform '%s', found to be running '%s'",
s1.MinioPlatform, s2.MinioPlatform)
}
if s1.MinioEndpoints.NEndpoints() != s2.MinioEndpoints.NEndpoints() {
return fmt.Errorf("Expected number of endpoints %d, seen %d", s1.MinioEndpoints.NEndpoints(),
s2.MinioEndpoints.NEndpoints())
}
for i, ep := range s1.MinioEndpoints {
if ep.CmdLine != s2.MinioEndpoints[i].CmdLine {
return fmt.Errorf("Expected command line argument %s, seen %s", ep.CmdLine,
s2.MinioEndpoints[i].CmdLine)
}
if ep.SetCount != s2.MinioEndpoints[i].SetCount {
return fmt.Errorf("Expected set count %d, seen %d", ep.SetCount,
s2.MinioEndpoints[i].SetCount)
@@ -78,11 +77,9 @@ func (s1 ServerSystemConfig) Diff(s2 ServerSystemConfig) error {
return fmt.Errorf("Expected drives per set %d, seen %d", ep.DrivesPerSet,
s2.MinioEndpoints[i].DrivesPerSet)
}
for j, endpoint := range ep.Endpoints {
if endpoint.String() != s2.MinioEndpoints[i].Endpoints[j].String() {
return fmt.Errorf("Expected endpoint %s, seen %s", endpoint,
s2.MinioEndpoints[i].Endpoints[j])
}
if ep.Platform != s2.MinioEndpoints[i].Platform {
return fmt.Errorf("Expected platform '%s', found to be on '%s'",
ep.Platform, s2.MinioEndpoints[i].Platform)
}
}
if !reflect.DeepEqual(s1.MinioEnv, s2.MinioEnv) {
@@ -105,10 +102,14 @@ func (s1 ServerSystemConfig) Diff(s2 ServerSystemConfig) error {
}
var skipEnvs = map[string]struct{}{
"MINIO_OPTS": {},
"MINIO_CERT_PASSWD": {},
"MINIO_SERVER_DEBUG": {},
"MINIO_DSYNC_TRACE": {},
"MINIO_OPTS": {},
"MINIO_CERT_PASSWD": {},
"MINIO_SERVER_DEBUG": {},
"MINIO_DSYNC_TRACE": {},
"MINIO_ROOT_USER": {},
"MINIO_ROOT_PASSWORD": {},
"MINIO_ACCESS_KEY": {},
"MINIO_SECRET_KEY": {},
}
func getServerSystemCfg() ServerSystemConfig {
@@ -122,34 +123,48 @@ func getServerSystemCfg() ServerSystemConfig {
if _, ok := skipEnvs[envK]; ok {
continue
}
envValues[envK] = env.Get(envK, "")
envValues[envK] = logger.HashString(env.Get(envK, ""))
}
return ServerSystemConfig{
MinioPlatform: fmt.Sprintf("OS: %s | Arch: %s", runtime.GOOS, runtime.GOARCH),
MinioEndpoints: globalEndpoints,
MinioEnv: envValues,
}
}
func (b *bootstrapRESTServer) writeErrorResponse(w http.ResponseWriter, err error) {
w.WriteHeader(http.StatusForbidden)
w.Write([]byte(err.Error()))
}
// HealthHandler returns success if request is valid
func (b *bootstrapRESTServer) HealthHandler(w http.ResponseWriter, r *http.Request) {}
func (b *bootstrapRESTServer) VerifyHandler(w http.ResponseWriter, r *http.Request) {
ctx := newContext(r, w, "VerifyHandler")
if err := storageServerRequestValidate(r); err != nil {
b.writeErrorResponse(w, err)
return
}
cfg := getServerSystemCfg()
logger.LogIf(ctx, json.NewEncoder(w).Encode(&cfg))
}
// registerBootstrapRESTHandlers - register bootstrap rest router.
func registerBootstrapRESTHandlers(router *mux.Router) {
h := func(f http.HandlerFunc) http.HandlerFunc {
return collectInternodeStats(httpTraceHdrs(f))
}
server := &bootstrapRESTServer{}
subrouter := router.PathPrefix(bootstrapRESTPrefix).Subrouter()
subrouter.Methods(http.MethodPost).Path(bootstrapRESTVersionPrefix + bootstrapRESTMethodHealth).HandlerFunc(
httpTraceHdrs(server.HealthHandler))
h(server.HealthHandler))
subrouter.Methods(http.MethodPost).Path(bootstrapRESTVersionPrefix + bootstrapRESTMethodVerify).HandlerFunc(
httpTraceHdrs(server.VerifyHandler))
h(server.VerifyHandler))
}
// client to talk to bootstrap NEndpoints.
@@ -200,15 +215,19 @@ func verifyServerSystemConfig(ctx context.Context, endpointServerPools EndpointS
srcCfg := getServerSystemCfg()
clnts := newBootstrapRESTClients(endpointServerPools)
var onlineServers int
var offlineEndpoints []string
var offlineEndpoints []error
var incorrectConfigs []error
var retries int
for onlineServers < len(clnts)/2 {
for _, clnt := range clnts {
if err := clnt.Verify(ctx, srcCfg); err != nil {
bootstrapTraceMsg(fmt.Sprintf("clnt.Verify: %v, endpoint: %v", err, clnt.endpoint))
if !isNetworkError(err) {
logger.LogIf(ctx, fmt.Errorf("%s has incorrect configuration: %w", clnt.String(), err))
logger.LogOnceIf(ctx, fmt.Errorf("%s has incorrect configuration: %w", clnt.String(), err), clnt.String())
incorrectConfigs = append(incorrectConfigs, fmt.Errorf("%s has incorrect configuration: %w", clnt.String(), err))
} else {
offlineEndpoints = append(offlineEndpoints, fmt.Errorf("%s is unreachable: %w", clnt.String(), err))
}
offlineEndpoints = append(offlineEndpoints, clnt.String())
continue
}
onlineServers++
@@ -221,15 +240,19 @@ func verifyServerSystemConfig(ctx context.Context, endpointServerPools EndpointS
// 100% CPU when half the endpoints are offline.
time.Sleep(100 * time.Millisecond)
retries++
// after 5 retries start logging that servers are not reachable yet
if retries >= 5 {
logger.Info(fmt.Sprintf("Waiting for at least %d remote servers to be online for bootstrap check", len(clnts)/2))
// after 20 retries start logging that servers are not reachable yet
if retries >= 20 {
logger.Info(fmt.Sprintf("Waiting for at least %d remote servers with valid configuration to be online", len(clnts)/2))
if len(offlineEndpoints) > 0 {
logger.Info(fmt.Sprintf("Following servers are currently offline or unreachable %s", offlineEndpoints))
}
if len(incorrectConfigs) > 0 {
logger.Info(fmt.Sprintf("Following servers have mismatching configuration %s", incorrectConfigs))
}
retries = 0 // reset to log again after 20 retries.
}
offlineEndpoints = nil
incorrectConfigs = nil
}
}
return nil
@@ -247,7 +270,11 @@ func newBootstrapRESTClients(endpointServerPools EndpointServerPools) []*bootstr
// Only proceed for remote endpoints.
if !endpoint.IsLocal {
clnts = append(clnts, newBootstrapRESTClient(endpoint))
cl := newBootstrapRESTClient(endpoint)
if serverDebugLog {
cl.restClient.TraceOutput = os.Stdout
}
clnts = append(clnts, cl)
}
}
}

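Two details in this file are worth noting: the credential variables joined the skipEnvs list, and every remaining env value is passed through logger.HashString before the bootstrap exchange, so peers can verify that their MINIO_* settings agree without ever sending the raw values across the wire. A sketch of the idea (hashString is an illustrative stand-in; the exact construction of logger.HashString is not shown in this diff):

```go
package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
)

// Illustrative stand-in for logger.HashString: any fixed one-way hash works
// for the equality check performed by ServerSystemConfig.Diff.
func hashString(s string) string {
	sum := sha256.Sum256([]byte(s))
	return hex.EncodeToString(sum[:])
}

func main() {
	local, peer := hashString("region=us-east-1"), hashString("region=us-east-1")
	fmt.Println(local == peer) // true: equality survives hashing, the value never leaks
}
```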

@@ -25,11 +25,11 @@ import (
"io"
"net/http"
"github.com/gorilla/mux"
"github.com/minio/kes"
"github.com/minio/madmin-go"
"github.com/minio/kes-go"
"github.com/minio/madmin-go/v3"
"github.com/minio/minio/internal/kms"
"github.com/minio/minio/internal/logger"
"github.com/minio/mux"
"github.com/minio/pkg/bucket/policy"
)
@@ -51,11 +51,6 @@ func (api objectAPIHandlers) PutBucketEncryptionHandler(w http.ResponseWriter, r
return
}
if !objAPI.IsEncryptionSupported() {
writeErrorResponse(ctx, w, errorCodes.ToAPIErr(ErrNotImplemented), r.URL)
return
}
vars := mux.Vars(r)
bucket := vars["bucket"]
@@ -119,15 +114,12 @@ func (api objectAPIHandlers) PutBucketEncryptionHandler(w http.ResponseWriter, r
// We encode the xml bytes as base64 to ensure there are no encoding
// errors.
cfgStr := base64.StdEncoding.EncodeToString(configData)
if err = globalSiteReplicationSys.BucketMetaHook(ctx, madmin.SRBucketMeta{
logger.LogIf(ctx, globalSiteReplicationSys.BucketMetaHook(ctx, madmin.SRBucketMeta{
Type: madmin.SRBucketMetaTypeSSEConfig,
Bucket: bucket,
SSEConfig: &cfgStr,
UpdatedAt: updatedAt,
}); err != nil {
writeErrorResponse(ctx, w, toAPIError(ctx, err), r.URL)
return
}
}))
writeSuccessResponseHeadersOnly(w)
}
@@ -209,16 +201,14 @@ func (api objectAPIHandlers) DeleteBucketEncryptionHandler(w http.ResponseWriter
writeErrorResponse(ctx, w, toAPIError(ctx, err), r.URL)
return
}
// Call site replication hook.
//
if err = globalSiteReplicationSys.BucketMetaHook(ctx, madmin.SRBucketMeta{
logger.LogIf(ctx, globalSiteReplicationSys.BucketMetaHook(ctx, madmin.SRBucketMeta{
Type: madmin.SRBucketMetaTypeSSEConfig,
Bucket: bucket,
SSEConfig: nil,
UpdatedAt: updatedAt,
}); err != nil {
writeErrorResponse(ctx, w, toAPIError(ctx, err), r.URL)
return
}
}))
writeSuccessNoContent(w)
}

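The handlers above switch the site-replication hook from a hard failure to logger.LogIf: the bucket metadata update has already succeeded locally, so a lagging or offline peer no longer turns the client's request into an error. The pattern in isolation:

```go
package main

import (
	"errors"
	"log"
)

func replicationHook() error { return errors.New("peer site offline") }

func main() {
	// Before: a hook error failed the whole request. After: it is logged as
	// non-fatal and the success response is still sent to the client.
	if err := replicationHook(); err != nil {
		log.Printf("site replication hook (non-fatal): %v", err)
	}
	log.Println("204 No Content sent")
}
```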

@@ -1,4 +1,4 @@
// Copyright (c) 2015-2021 MinIO, Inc.
// Copyright (c) 2015-2022 MinIO, Inc.
//
// This file is part of MinIO Object Storage stack
//
@@ -20,25 +20,34 @@ package cmd
import (
"bytes"
"context"
"crypto/md5"
"encoding/base64"
"encoding/hex"
"encoding/json"
"encoding/xml"
"errors"
"fmt"
"io"
"mime/multipart"
"net/http"
"net/textproto"
"net/url"
"path"
"runtime"
"sort"
"strconv"
"strings"
"sync"
"github.com/google/uuid"
"github.com/gorilla/mux"
"github.com/minio/mux"
"github.com/valyala/bytebufferpool"
"github.com/minio/madmin-go"
"github.com/minio/madmin-go/v3"
"github.com/minio/minio-go/v7"
"github.com/minio/minio-go/v7/pkg/set"
"github.com/minio/minio-go/v7/pkg/tags"
"github.com/minio/minio/internal/auth"
sse "github.com/minio/minio/internal/bucket/encryption"
objectlock "github.com/minio/minio/internal/bucket/object/lock"
"github.com/minio/minio/internal/bucket/replication"
@@ -48,17 +57,21 @@ import (
"github.com/minio/minio/internal/handlers"
"github.com/minio/minio/internal/hash"
xhttp "github.com/minio/minio/internal/http"
"github.com/minio/minio/internal/ioutil"
"github.com/minio/minio/internal/kms"
"github.com/minio/minio/internal/logger"
"github.com/minio/minio/internal/sync/errgroup"
"github.com/minio/pkg/bucket/policy"
iampolicy "github.com/minio/pkg/iam/policy"
"github.com/minio/pkg/sync/errgroup"
)
const (
objectLockConfig = "object-lock.xml"
bucketTaggingConfig = "tagging.xml"
bucketReplicationConfig = "replication.xml"
xMinIOErrCodeHeader = "x-minio-error-code"
xMinIOErrDescHeader = "x-minio-error-desc"
)
// Check if there are buckets on server without corresponding entry in etcd backend and
@@ -359,7 +372,7 @@ func (api objectAPIHandlers) ListBucketsHandler(w http.ResponseWriter, r *http.R
Groups: cred.Groups,
Action: iampolicy.ListBucketAction,
BucketName: bucketInfo.Name,
ConditionValues: getConditionValues(r, "", cred.AccessKey, cred.Claims),
ConditionValues: getConditionValues(r, "", cred),
IsOwner: owner,
ObjectName: "",
Claims: cred.Claims,
@@ -371,7 +384,7 @@ func (api objectAPIHandlers) ListBucketsHandler(w http.ResponseWriter, r *http.R
Groups: cred.Groups,
Action: iampolicy.GetBucketLocationAction,
BucketName: bucketInfo.Name,
ConditionValues: getConditionValues(r, "", cred.AccessKey, cred.Claims),
ConditionValues: getConditionValues(r, "", cred),
IsOwner: owner,
ObjectName: "",
Claims: cred.Claims,
@@ -428,10 +441,14 @@ func (api objectAPIHandlers) DeleteMultipleObjectsHandler(w http.ResponseWriter,
// The max. XML contains 100000 object names (each at most 1024 bytes long) + XML overhead
const maxBodySize = 2 * 100000 * 1024
if r.ContentLength > maxBodySize {
writeErrorResponse(ctx, w, errorCodes.ToAPIErr(ErrEntityTooLarge), r.URL)
return
}
// Unmarshal list of keys to be deleted.
deleteObjectsReq := &DeleteObjectsRequest{}
if err := xmlDecoder(r.Body, deleteObjectsReq, maxBodySize); err != nil {
logger.LogIf(ctx, err, logger.Application)
writeErrorResponse(ctx, w, toAPIError(ctx, err), r.URL)
return
}
@@ -495,7 +512,7 @@ func (api objectAPIHandlers) DeleteMultipleObjectsHandler(w http.ResponseWriter,
vc, _ := globalBucketVersioningSys.Get(bucket)
oss := make([]*objSweeper, len(deleteObjectsReq.Objects))
for index, object := range deleteObjectsReq.Objects {
if apiErrCode := checkRequestAuthType(ctx, r, policy.DeleteObjectAction, bucket, object.ObjectName); apiErrCode != ErrNone {
if apiErrCode := checkRequestAuthTypeWithVID(ctx, r, policy.DeleteObjectAction, bucket, object.ObjectName, object.VersionID); apiErrCode != ErrNone {
if apiErrCode == ErrSignatureDoesNotMatch || apiErrCode == ErrInvalidAccessKeyID {
writeErrorResponse(ctx, w, errorCodes.ToAPIErr(apiErrCode), r.URL)
return
@@ -511,11 +528,10 @@ func (api objectAPIHandlers) DeleteMultipleObjectsHandler(w http.ResponseWriter,
}
if object.VersionID != "" && object.VersionID != nullVersionID {
if _, err := uuid.Parse(object.VersionID); err != nil {
logger.LogIf(ctx, fmt.Errorf("invalid version-id specified %w", err))
apiErr := errorCodes.ToAPIErr(ErrNoSuchVersion)
deleteResults[index].errInfo = DeleteError{
Code: apiErr.Code,
Message: apiErr.Description,
Message: fmt.Sprintf("%s (%s)", apiErr.Description, err),
Key: object.ObjectName,
VersionID: object.VersionID,
}
@@ -541,6 +557,11 @@ func (api objectAPIHandlers) DeleteMultipleObjectsHandler(w http.ResponseWriter,
oss[index].SetTransitionState(goi.TransitionedObject)
}
// All deletes on directory objects need to be for `nullVersionID`
if isDirObject(object.ObjectName) && object.VersionID == "" {
object.VersionID = nullVersionID
}
if replicateDeletes {
dsc = checkReplicateDelete(ctx, bucket, ObjectToDelete{
ObjectV: ObjectV{
@@ -559,8 +580,8 @@ func (api objectAPIHandlers) DeleteMultipleObjectsHandler(w http.ResponseWriter,
}
}
if object.VersionID != "" && hasLockEnabled {
if apiErrCode := enforceRetentionBypassForDelete(ctx, r, bucket, object, goi, gerr); apiErrCode != ErrNone {
apiErr := errorCodes.ToAPIErr(apiErrCode)
if err := enforceRetentionBypassForDelete(ctx, r, bucket, object, goi, gerr); err != nil {
apiErr := toAPIError(ctx, err)
deleteResults[index].errInfo = DeleteError{
Code: apiErr.Code,
Message: apiErr.Description,
@@ -635,6 +656,11 @@ func (api objectAPIHandlers) DeleteMultipleObjectsHandler(w http.ResponseWriter,
if deleteResult.errInfo.Code != "" {
deleteErrors = append(deleteErrors, deleteResult.errInfo)
} else {
// All deletes on directory objects were performed with `nullVersionID`.
// Remove it from response.
if isDirObject(deleteResult.delInfo.ObjectName) && deleteResult.delInfo.VersionID == nullVersionID {
deleteResult.delInfo.VersionID = ""
}
deletedObjects = append(deletedObjects, deleteResult.delInfo)
}
}
@@ -650,6 +676,11 @@ func (api objectAPIHandlers) DeleteMultipleObjectsHandler(w http.ResponseWriter,
}
if replicateDeletes && (dobj.DeleteMarkerReplicationStatus() == replication.Pending || dobj.VersionPurgeStatus() == Pending) {
// copy so we can re-add null ID.
dobj := dobj
if isDirObject(dobj.ObjectName) && dobj.VersionID == "" {
dobj.VersionID = nullVersionID
}
dv := DeletedObjectReplicationInfo{
DeletedObject: dobj,
Bucket: bucket,
@@ -744,7 +775,7 @@ func (api objectAPIHandlers) PutBucketHandler(w http.ResponseWriter, r *http.Req
AccountName: cred.AccessKey,
Groups: cred.Groups,
Action: action,
ConditionValues: getConditionValues(r, "", cred.AccessKey, cred.Claims),
ConditionValues: getConditionValues(r, "", cred),
BucketName: bucket,
IsOwner: owner,
Claims: cred.Claims,
@@ -756,29 +787,18 @@ func (api objectAPIHandlers) PutBucketHandler(w http.ResponseWriter, r *http.Req
}
// Parse incoming location constraint.
location, s3Error := parseLocationConstraint(r)
_, s3Error = parseLocationConstraint(r)
if s3Error != ErrNone {
writeErrorResponse(ctx, w, errorCodes.ToAPIErr(s3Error), r.URL)
return
}
// Validate if location sent by the client is valid, reject
// requests which do not follow valid region requirements.
if !isValidLocation(location) {
writeErrorResponse(ctx, w, errorCodes.ToAPIErr(ErrInvalidRegion), r.URL)
return
}
// check if client is attempting to create more buckets than allowed maximum.
// check if client is attempting to create more buckets than recommended, and complain about it.
if currBuckets := globalBucketMetadataSys.Count(); currBuckets+1 > maxBuckets {
apiErr := errorCodes.ToAPIErr(ErrTooManyBuckets)
apiErr.Description = fmt.Sprintf("You have attempted to create %d buckets, more than the allowed %d", currBuckets+1, maxBuckets)
writeErrorResponse(ctx, w, apiErr, r.URL)
return
logger.LogIf(ctx, fmt.Errorf("An attempt to create %d buckets beyond recommended %d", currBuckets+1, maxBuckets))
}
opts := MakeBucketOptions{
Location: location,
LockEnabled: objectLockEnabled,
ForceCreate: forceCreate,
}
@@ -790,15 +810,14 @@ func (api objectAPIHandlers) PutBucketHandler(w http.ResponseWriter, r *http.Req
// exists elsewhere
if err == dns.ErrNoEntriesFound || err == dns.ErrNotImplemented {
// Proceed to creating a bucket.
if err = objectAPI.MakeBucketWithLocation(ctx, bucket, opts); err != nil {
if err = objectAPI.MakeBucket(ctx, bucket, opts); err != nil {
writeErrorResponse(ctx, w, toAPIError(ctx, err), r.URL)
return
}
if err = globalDNSConfig.Put(bucket); err != nil {
objectAPI.DeleteBucket(context.Background(), bucket, DeleteBucketOptions{
Force: false,
NoRecreate: true,
Force: true,
SRDeleteOp: getSRBucketDeleteOp(globalSiteReplicationSys.isEnabled()),
})
writeErrorResponse(ctx, w, toAPIError(ctx, err), r.URL)
@@ -809,8 +828,7 @@ func (api objectAPIHandlers) PutBucketHandler(w http.ResponseWriter, r *http.Req
globalNotificationSys.LoadBucketMetadata(GlobalContext, bucket)
// Make sure to add Location information here only for bucket
w.Header().Set(xhttp.Location,
getObjectLocation(r, globalDomainNames, bucket, ""))
w.Header().Set(xhttp.Location, pathJoin(SlashSeparator, bucket))
writeSuccessResponseHeadersOnly(w)
@@ -842,7 +860,7 @@ func (api objectAPIHandlers) PutBucketHandler(w http.ResponseWriter, r *http.Req
}
// Proceed to creating a bucket.
if err := objectAPI.MakeBucketWithLocation(ctx, bucket, opts); err != nil {
if err := objectAPI.MakeBucket(ctx, bucket, opts); err != nil {
if _, ok := err.(BucketExists); ok {
// Though bucket exists locally, we send the site-replication
// hook to ensure all sites have this bucket. If the hook
@@ -858,12 +876,10 @@ func (api objectAPIHandlers) PutBucketHandler(w http.ResponseWriter, r *http.Req
globalNotificationSys.LoadBucketMetadata(GlobalContext, bucket)
// Call site replication hook
globalSiteReplicationSys.MakeBucketHook(ctx, bucket, opts)
logger.LogIf(ctx, globalSiteReplicationSys.MakeBucketHook(ctx, bucket, opts))
// Make sure to add Location information here only for bucket
if cp := pathClean(r.URL.Path); cp != "" {
w.Header().Set(xhttp.Location, cp) // Clean any trailing slashes.
}
w.Header().Set(xhttp.Location, pathJoin(SlashSeparator, bucket))
writeSuccessResponseHeadersOnly(w)
@@ -897,20 +913,7 @@ func (api objectAPIHandlers) PostPolicyBucketHandler(w http.ResponseWriter, r *h
return
}
if crypto.Requested(r.Header) && !objectAPI.IsEncryptionSupported() {
writeErrorResponse(ctx, w, errorCodes.ToAPIErr(ErrNotImplemented), r.URL)
return
}
bucket := mux.Vars(r)["bucket"]
// Require Content-Length to be set in the request
size := r.ContentLength
if size < 0 {
writeErrorResponse(ctx, w, errorCodes.ToAPIErr(ErrMissingContentLength), r.URL)
return
}
resource, err := getResource(r.URL.Path, r.Host, globalDomainNames)
if err != nil {
writeErrorResponse(ctx, w, errorCodes.ToAPIErr(ErrInvalidRequest), r.URL)
@@ -925,41 +928,152 @@ func (api objectAPIHandlers) PostPolicyBucketHandler(w http.ResponseWriter, r *h
// Here the parameter is the size of the form data that should
// be loaded in memory, the remaining being put in temporary files.
reader, err := r.MultipartReader()
mp, err := r.MultipartReader()
if err != nil {
logger.LogIf(ctx, err)
writeErrorResponse(ctx, w, errorCodes.ToAPIErr(ErrMalformedPOSTRequest), r.URL)
apiErr := errorCodes.ToAPIErr(ErrMalformedPOSTRequest)
apiErr.Description = fmt.Sprintf("%s (%v)", apiErr.Description, err)
writeErrorResponse(ctx, w, apiErr, r.URL)
return
}
// Read multipart data and save in memory and in the disk if needed
form, err := reader.ReadForm(maxFormMemory)
const mapEntryOverhead = 200
var (
reader io.Reader
fileSize int64 = -1
fileName string
fanOutEntries = make([]minio.PutObjectFanOutEntry, 0, 100)
)
maxParts := 1000
// Canonicalize the form values into http.Header.
formValues := make(http.Header)
for {
part, err := mp.NextRawPart()
if errors.Is(err, io.EOF) {
break
}
if err != nil {
apiErr := errorCodes.ToAPIErr(ErrMalformedPOSTRequest)
apiErr.Description = fmt.Sprintf("%s (%v)", apiErr.Description, err)
writeErrorResponse(ctx, w, apiErr, r.URL)
return
}
if maxParts <= 0 {
apiErr := errorCodes.ToAPIErr(ErrMalformedPOSTRequest)
apiErr.Description = fmt.Sprintf("%s (%v)", apiErr.Description, multipart.ErrMessageTooLarge)
writeErrorResponse(ctx, w, apiErr, r.URL)
return
}
maxParts--
name := part.FormName()
if name == "" {
continue
}
fileName = part.FileName()
// Multiple values for the same key (one map entry, longer slice) are cheaper
// than the same number of values for different keys (many map entries), but
// using a consistent per-value cost for overhead is simpler.
maxMemoryBytes := 2 * int64(10<<20)
maxMemoryBytes -= int64(len(name))
maxMemoryBytes -= mapEntryOverhead
if maxMemoryBytes < 0 {
// We can't actually take this path, since nextPart would already have
// rejected the MIME headers for being too large. Check anyway.
apiErr := errorCodes.ToAPIErr(ErrMalformedPOSTRequest)
apiErr.Description = fmt.Sprintf("%s (%v)", apiErr.Description, multipart.ErrMessageTooLarge)
writeErrorResponse(ctx, w, apiErr, r.URL)
return
}
var b bytes.Buffer
if fileName == "" {
if http.CanonicalHeaderKey(name) == http.CanonicalHeaderKey("x-minio-fanout-list") {
dec := json.NewDecoder(part)
// while the array contains values
for dec.More() {
var m minio.PutObjectFanOutEntry
if err := dec.Decode(&m); err != nil {
part.Close()
apiErr := errorCodes.ToAPIErr(ErrMalformedPOSTRequest)
apiErr.Description = fmt.Sprintf("%s (%v)", apiErr.Description, multipart.ErrMessageTooLarge)
writeErrorResponse(ctx, w, apiErr, r.URL)
return
}
fanOutEntries = append(fanOutEntries, m)
}
part.Close()
continue
}
// value, store as string in memory
n, err := io.CopyN(&b, part, maxMemoryBytes+1)
part.Close()
if err != nil && err != io.EOF {
apiErr := errorCodes.ToAPIErr(ErrMalformedPOSTRequest)
apiErr.Description = fmt.Sprintf("%s (%v)", apiErr.Description, err)
writeErrorResponse(ctx, w, apiErr, r.URL)
return
}
maxMemoryBytes -= n
if maxMemoryBytes < 0 {
apiErr := errorCodes.ToAPIErr(ErrMalformedPOSTRequest)
apiErr.Description = fmt.Sprintf("%s (%v)", apiErr.Description, multipart.ErrMessageTooLarge)
writeErrorResponse(ctx, w, apiErr, r.URL)
return
}
if n > maxFormFieldSize {
apiErr := errorCodes.ToAPIErr(ErrMalformedPOSTRequest)
apiErr.Description = fmt.Sprintf("%s (%v)", apiErr.Description, multipart.ErrMessageTooLarge)
writeErrorResponse(ctx, w, apiErr, r.URL)
return
}
formValues[http.CanonicalHeaderKey(name)] = append(formValues[http.CanonicalHeaderKey(name)], b.String())
continue
}
// In accordance with https://docs.aws.amazon.com/AmazonS3/latest/API/RESTObjectPOST.html
// The file or text content.
// The file or text content must be the last field in the form.
// You cannot upload more than one file at a time.
reader = part
// we have found the File part of the request; we are done processing the multipart form
break
}
if _, ok := formValues["Key"]; !ok {
apiErr := errorCodes.ToAPIErr(ErrMalformedPOSTRequest)
apiErr.Description = fmt.Sprintf("%s (%v)", apiErr.Description, errors.New("The name of the uploaded key is missing"))
writeErrorResponse(ctx, w, apiErr, r.URL)
return
}
if fileName == "" {
apiErr := errorCodes.ToAPIErr(ErrMalformedPOSTRequest)
apiErr.Description = fmt.Sprintf("%s (%v)", apiErr.Description, errors.New("The file or text content is missing"))
writeErrorResponse(ctx, w, apiErr, r.URL)
return
}
checksum, err := hash.GetContentChecksum(formValues)
if err != nil {
logger.LogIf(ctx, err, logger.Application)
writeErrorResponse(ctx, w, errorCodes.ToAPIErr(ErrMalformedPOSTRequest), r.URL)
apiErr := errorCodes.ToAPIErr(ErrMalformedPOSTRequest)
apiErr.Description = fmt.Sprintf("%s (%v)", apiErr.Description, fmt.Errorf("Invalid checksum: %w", err))
writeErrorResponse(ctx, w, apiErr, r.URL)
return
}
// Remove all tmp files created during multipart upload
defer form.RemoveAll()
// Extract all form fields
fileBody, fileName, fileSize, formValues, err := extractPostPolicyFormValues(ctx, form)
if err != nil {
logger.LogIf(ctx, err, logger.Application)
writeErrorResponse(ctx, w, errorCodes.ToAPIErr(ErrMalformedPOSTRequest), r.URL)
if checksum != nil && checksum.Type.Trailing() {
// Not officially supported in POST requests.
apiErr := errorCodes.ToAPIErr(ErrMalformedPOSTRequest)
apiErr.Description = fmt.Sprintf("%s (%v)", apiErr.Description, errors.New("Trailing checksums not available for POST operations"))
writeErrorResponse(ctx, w, apiErr, r.URL)
return
}
// Check if file is provided, error out otherwise.
if fileBody == nil {
writeErrorResponse(ctx, w, errorCodes.ToAPIErr(ErrPOSTFileRequired), r.URL)
return
}
// Close multipart file
defer fileBody.Close()
formValues.Set("Bucket", bucket)
if fileName != "" && strings.Contains(formValues.Get("Key"), "${filename}") {
// S3 feature to replace ${filename} found in Key form field
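The rewritten handler walks the POST form as a stream instead of buffering everything with ReadForm: each non-file field is copied into memory against a shrinking budget, the x-minio-fanout-list field is JSON-decoded part by part, and the walk stops at the first file part so the payload itself is never staged on disk. A condensed, runnable sketch of that walk (boundary, field names, and the 1 MiB budget are illustrative):

```go
package main

import (
	"bytes"
	"errors"
	"fmt"
	"io"
	"mime/multipart"
	"net/http"
	"strings"
)

func main() {
	body := "--b\r\n" +
		"Content-Disposition: form-data; name=\"Key\"\r\n\r\n" +
		"hello.txt\r\n" +
		"--b\r\n" +
		"Content-Disposition: form-data; name=\"file\"; filename=\"hello.txt\"\r\n\r\n" +
		"payload\r\n" +
		"--b--\r\n"
	mr := multipart.NewReader(strings.NewReader(body), "b")

	formValues := make(http.Header)
	budget := int64(1 << 20) // in-memory budget for non-file fields
	for {
		part, err := mr.NextRawPart()
		if errors.Is(err, io.EOF) {
			break
		}
		if err != nil {
			panic(err)
		}
		if part.FileName() != "" {
			// The real handler keeps this part as the streaming body and
			// stops the walk: the file must be the last field in the form.
			data, _ := io.ReadAll(part)
			fmt.Printf("file %q: %s\n", part.FileName(), data)
			break
		}
		var b bytes.Buffer
		n, _ := io.CopyN(&b, part, budget+1) // +1 so overflow is detectable
		budget -= n
		formValues.Add(part.FormName(), b.String())
	}
	fmt.Println("Key =", formValues.Get("Key"))
}
```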
@@ -986,20 +1100,38 @@ func (api objectAPIHandlers) PostPolicyBucketHandler(w http.ResponseWriter, r *h
return
}
// Once signature is validated, check if the user has
// explicit permissions for the user.
if !globalIAMSys.IsAllowed(iampolicy.Args{
AccountName: cred.AccessKey,
Groups: cred.Groups,
Action: iampolicy.PutObjectAction,
ConditionValues: getConditionValues(r, "", cred.AccessKey, cred.Claims),
BucketName: bucket,
ObjectName: object,
IsOwner: globalActiveCred.AccessKey == cred.AccessKey,
Claims: cred.Claims,
}) {
writeErrorResponse(ctx, w, errorCodes.ToAPIErr(ErrAccessDenied), r.URL)
return
if len(fanOutEntries) > 0 {
// Once signature is validated, check if the user has
// explicit permissions for the user.
if !globalIAMSys.IsAllowed(iampolicy.Args{
AccountName: cred.AccessKey,
Groups: cred.Groups,
Action: iampolicy.PutObjectFanOutAction,
ConditionValues: getConditionValues(r, "", cred),
BucketName: bucket,
ObjectName: object,
IsOwner: globalActiveCred.AccessKey == cred.AccessKey,
Claims: cred.Claims,
}) {
writeErrorResponse(ctx, w, errorCodes.ToAPIErr(ErrAccessDenied), r.URL)
return
}
} else {
// Once signature is validated, check if the user has
// explicit permissions for the user.
if !globalIAMSys.IsAllowed(iampolicy.Args{
AccountName: cred.AccessKey,
Groups: cred.Groups,
Action: iampolicy.PutObjectAction,
ConditionValues: getConditionValues(r, "", cred),
BucketName: bucket,
ObjectName: object,
IsOwner: globalActiveCred.AccessKey == cred.AccessKey,
Claims: cred.Claims,
}) {
writeErrorResponse(ctx, w, errorCodes.ToAPIErr(ErrAccessDenied), r.URL)
return
}
}
policyBytes, err := base64.StdEncoding.DecodeString(formValues.Get("Policy"))
@@ -1008,6 +1140,19 @@ func (api objectAPIHandlers) PostPolicyBucketHandler(w http.ResponseWriter, r *h
return
}
hashReader, err := hash.NewReader(reader, fileSize, "", "", fileSize)
if err != nil {
logger.LogIf(ctx, err)
writeErrorResponse(ctx, w, toAPIError(ctx, err), r.URL)
return
}
if checksum != nil && checksum.Valid() {
if err = hashReader.AddChecksumNoTrailer(formValues, false); err != nil {
writeErrorResponse(ctx, w, toAPIError(ctx, err), r.URL)
return
}
}
// Handle policy if it is set.
if len(policyBytes) > 0 {
postPolicyForm, err := parsePostPolicyForm(bytes.NewReader(policyBytes))
@@ -1028,15 +1173,8 @@ func (api objectAPIHandlers) PostPolicyBucketHandler(w http.ResponseWriter, r *h
// should not exceed the maximum single Put size (5 GiB)
lengthRange := postPolicyForm.Conditions.ContentLengthRange
if lengthRange.Valid {
if fileSize < lengthRange.Min {
writeErrorResponse(ctx, w, toAPIError(ctx, errDataTooSmall), r.URL)
return
}
if fileSize > lengthRange.Max || isMaxObjectSize(fileSize) {
writeErrorResponse(ctx, w, toAPIError(ctx, errDataTooLarge), r.URL)
return
}
hashReader.SetExpectedMin(lengthRange.Min)
hashReader.SetExpectedMax(lengthRange.Max)
}
}
@@ -1048,19 +1186,13 @@ func (api objectAPIHandlers) PostPolicyBucketHandler(w http.ResponseWriter, r *h
return
}
hashReader, err := hash.NewReader(fileBody, fileSize, "", "", fileSize)
if err != nil {
logger.LogIf(ctx, err)
writeErrorResponse(ctx, w, toAPIError(ctx, err), r.URL)
return
}
rawReader := hashReader
pReader := NewPutObjReader(rawReader)
var objectEncryptionKey crypto.ObjectKey
// Check if bucket encryption is enabled
sseConfig, _ := globalBucketSSEConfigSys.Get(bucket)
sseConfig.Apply(r.Header, sse.ApplyOptions{
sseConfig.Apply(formValues, sse.ApplyOptions{
AutoEncrypt: globalAutoEncryption,
})
@@ -1070,53 +1202,199 @@ func (api objectAPIHandlers) PostPolicyBucketHandler(w http.ResponseWriter, r *h
writeErrorResponseHeadersOnly(w, toAPIError(ctx, err))
return
}
if objectAPI.IsEncryptionSupported() {
if crypto.Requested(formValues) && !HasSuffix(object, SlashSeparator) { // handle SSE requests
if crypto.SSECopy.IsRequested(r.Header) {
writeErrorResponse(ctx, w, toAPIError(ctx, errInvalidEncryptionParameters), r.URL)
fanOutOpts := fanOutOptions{Checksum: checksum}
if crypto.Requested(formValues) {
if crypto.SSECopy.IsRequested(r.Header) {
writeErrorResponse(ctx, w, toAPIError(ctx, errInvalidEncryptionParameters), r.URL)
return
}
if crypto.SSEC.IsRequested(r.Header) && crypto.S3.IsRequested(r.Header) {
writeErrorResponse(ctx, w, toAPIError(ctx, crypto.ErrIncompatibleEncryptionMethod), r.URL)
return
}
if crypto.SSEC.IsRequested(r.Header) && crypto.S3KMS.IsRequested(r.Header) {
writeErrorResponse(ctx, w, toAPIError(ctx, crypto.ErrIncompatibleEncryptionMethod), r.URL)
return
}
if crypto.SSEC.IsRequested(r.Header) && isReplicationEnabled(ctx, bucket) {
writeErrorResponse(ctx, w, toAPIError(ctx, errInvalidEncryptionParametersSSEC), r.URL)
return
}
var (
reader io.Reader
keyID string
key []byte
kmsCtx kms.Context
)
kind, _ := crypto.IsRequested(formValues)
switch kind {
case crypto.SSEC:
key, err = ParseSSECustomerHeader(formValues)
if err != nil {
writeErrorResponse(ctx, w, toAPIError(ctx, err), r.URL)
return
}
var (
reader io.Reader
keyID string
key []byte
kmsCtx kms.Context
)
kind, _ := crypto.IsRequested(formValues)
switch kind {
case crypto.SSEC:
key, err = ParseSSECustomerHeader(formValues)
if err != nil {
writeErrorResponse(ctx, w, toAPIError(ctx, err), r.URL)
return
}
case crypto.S3KMS:
keyID, kmsCtx, err = crypto.S3KMS.ParseHTTP(formValues)
if err != nil {
writeErrorResponse(ctx, w, toAPIError(ctx, err), r.URL)
return
}
case crypto.S3KMS:
keyID, kmsCtx, err = crypto.S3KMS.ParseHTTP(formValues)
if err != nil {
writeErrorResponse(ctx, w, toAPIError(ctx, err), r.URL)
return
}
}
if len(fanOutEntries) == 0 {
reader, objectEncryptionKey, err = newEncryptReader(ctx, hashReader, kind, keyID, key, bucket, object, metadata, kmsCtx)
if err != nil {
writeErrorResponse(ctx, w, toAPIError(ctx, err), r.URL)
return
}
info := ObjectInfo{Size: fileSize}
// do not try to verify encrypted content
hashReader, err = hash.NewReader(reader, info.EncryptedSize(), "", "", fileSize)
// do not try to verify encrypted content
hashReader, err = hash.NewReader(reader, -1, "", "", -1)
if err != nil {
writeErrorResponse(ctx, w, toAPIError(ctx, err), r.URL)
return
}
if checksum != nil && checksum.Valid() {
if err = hashReader.AddChecksumNoTrailer(formValues, true); err != nil {
writeErrorResponse(ctx, w, toAPIError(ctx, err), r.URL)
return
}
}
pReader, err = pReader.WithEncryption(hashReader, &objectEncryptionKey)
if err != nil {
writeErrorResponse(ctx, w, toAPIError(ctx, err), r.URL)
return
}
} else {
fanOutOpts = fanOutOptions{
Key: key,
Kind: kind,
KeyID: keyID,
KmsCtx: kmsCtx,
Checksum: checksum,
}
}
}
if len(fanOutEntries) > 0 {
// Fan-out requires no copying, and must be carried from original source
// https://en.wikipedia.org/wiki/Copy_protection so the incoming stream
// is always going to be in-memory as we cannot re-read from what we
// wrote to disk - since that amounts to "copying" from a "copy"
// instead of "copying" from source, we need the stream to be seekable
// to ensure that we can make fan-out calls concurrently.
buf := bytebufferpool.Get()
defer bytebufferpool.Put(buf)
md5w := md5.New()
// Maximum allowed fan-out object size.
const maxFanOutSize = 16 << 20
n, err := io.Copy(io.MultiWriter(buf, md5w), ioutil.HardLimitReader(pReader, maxFanOutSize))
if err != nil {
writeErrorResponse(ctx, w, toAPIError(ctx, err), r.URL)
return
}
// Set the correct hex md5sum for the fan-out stream.
fanOutOpts.MD5Hex = hex.EncodeToString(md5w.Sum(nil))
concurrentSize := 100
if runtime.GOMAXPROCS(0) < concurrentSize {
concurrentSize = runtime.GOMAXPROCS(0)
}
fanOutResp := make([]minio.PutObjectFanOutResponse, 0, len(fanOutEntries))
eventArgsList := make([]eventArgs, 0, len(fanOutEntries))
for {
var objInfos []ObjectInfo
var errs []error
var done bool
if len(fanOutEntries) < concurrentSize {
objInfos, errs = fanOutPutObject(ctx, bucket, objectAPI, fanOutEntries, buf.Bytes()[:n], fanOutOpts)
done = true
} else {
objInfos, errs = fanOutPutObject(ctx, bucket, objectAPI, fanOutEntries[:concurrentSize], buf.Bytes()[:n], fanOutOpts)
fanOutEntries = fanOutEntries[concurrentSize:]
}
for i, objInfo := range objInfos {
if errs[i] != nil {
fanOutResp = append(fanOutResp, minio.PutObjectFanOutResponse{
Key: objInfo.Name,
Error: errs[i].Error(),
})
eventArgsList = append(eventArgsList, eventArgs{
EventName: event.ObjectCreatedPost,
BucketName: objInfo.Bucket,
Object: ObjectInfo{Name: objInfo.Name},
ReqParams: extractReqParams(r),
RespElements: extractRespElements(w),
UserAgent: fmt.Sprintf("%s MinIO-Fan-Out (failed: %v)", r.UserAgent(), errs[i]),
Host: handlers.GetSourceIP(r),
})
continue
}
fanOutResp = append(fanOutResp, minio.PutObjectFanOutResponse{
Key: objInfo.Name,
ETag: getDecryptedETag(formValues, objInfo, false),
VersionID: objInfo.VersionID,
LastModified: &objInfo.ModTime,
})
eventArgsList = append(eventArgsList, eventArgs{
EventName: event.ObjectCreatedPost,
BucketName: objInfo.Bucket,
Object: objInfo,
ReqParams: extractReqParams(r),
RespElements: extractRespElements(w),
UserAgent: r.UserAgent() + " " + "MinIO-Fan-Out",
Host: handlers.GetSourceIP(r),
})
}
if done {
break
}
}
enc := json.NewEncoder(w)
for i, fanOutResp := range fanOutResp {
if err = enc.Encode(&fanOutResp); err != nil {
writeErrorResponse(ctx, w, toAPIError(ctx, err), r.URL)
return
}
// Notify object created events.
sendEvent(eventArgsList[i])
if eventArgsList[i].Object.NumVersions > dataScannerExcessiveVersionsThreshold {
// Send events for excessive versions.
sendEvent(eventArgs{
EventName: event.ObjectManyVersions,
BucketName: eventArgsList[i].Object.Bucket,
Object: eventArgsList[i].Object,
ReqParams: extractReqParams(r),
RespElements: extractRespElements(w),
UserAgent: r.UserAgent() + " " + "MinIO-Fan-Out",
Host: handlers.GetSourceIP(r),
})
}
}
return
}
objInfo, err := objectAPI.PutObject(ctx, bucket, object, pReader, opts)
if err != nil {
writeErrorResponse(ctx, w, toAPIError(ctx, err), r.URL)
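The fan-out loop above caps in-flight uploads at GOMAXPROCS, slicing the entry list into batches that all reuse the single buffered payload. A toy version of just the batching (batch size forced down so the rounds are visible):

```go
package main

import (
	"fmt"
	"runtime"
)

func main() {
	entries := make([]int, 10)
	for i := range entries {
		entries[i] = i
	}
	// Bound concurrency the way the handler does: never more in flight
	// than GOMAXPROCS (clamped to 4 here so the output shows several rounds).
	batch := runtime.GOMAXPROCS(0)
	if batch > 4 {
		batch = 4
	}
	for len(entries) > 0 {
		n := batch
		if len(entries) < n {
			n = len(entries)
		}
		fmt.Println("dispatch batch:", entries[:n]) // one fanOutPutObject call per batch
		entries = entries[n:]
	}
}
```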
@@ -1129,11 +1407,13 @@ func (api objectAPIHandlers) PostPolicyBucketHandler(w http.ResponseWriter, r *h
w.Header()[xhttp.ETag] = []string{`"` + objInfo.ETag + `"`}
// Set the relevant version ID as part of the response header.
if objInfo.VersionID != "" {
if objInfo.VersionID != "" && objInfo.VersionID != nullVersionID {
w.Header()[xhttp.AmzVersionID] = []string{objInfo.VersionID}
}
w.Header().Set(xhttp.Location, getObjectLocation(r, globalDomainNames, bucket, object))
if obj := getObjectLocation(r, globalDomainNames, bucket, object); obj != "" {
w.Header().Set(xhttp.Location, obj)
}
// Notify object created event.
defer sendEvent(eventArgs{
@@ -1146,6 +1426,18 @@ func (api objectAPIHandlers) PostPolicyBucketHandler(w http.ResponseWriter, r *h
Host: handlers.GetSourceIP(r),
})
if objInfo.NumVersions > dataScannerExcessiveVersionsThreshold {
defer sendEvent(eventArgs{
EventName: event.ObjectManyVersions,
BucketName: objInfo.Bucket,
Object: objInfo,
ReqParams: extractReqParams(r),
RespElements: extractRespElements(w),
UserAgent: r.UserAgent(),
Host: handlers.GetSourceIP(r),
})
}
if redirectURL != nil { // success_action_redirect is valid and set.
v := redirectURL.Query()
v.Add("bucket", objInfo.Bucket)
@@ -1156,6 +1448,11 @@ func (api objectAPIHandlers) PostPolicyBucketHandler(w http.ResponseWriter, r *h
return
}
// Add checksum header.
if checksum != nil && checksum.Valid() {
hash.AddChecksumHeader(w, checksum.AsMap())
}
// Decide what http response to send depending on success_action_status parameter
switch successStatus {
case "201":
@@ -1204,7 +1501,7 @@ func (api objectAPIHandlers) GetBucketPolicyStatusHandler(w http.ResponseWriter,
readable := globalPolicySys.IsAllowed(policy.Args{
Action: policy.ListBucketAction,
BucketName: bucket,
ConditionValues: getConditionValues(r, "", "", nil),
ConditionValues: getConditionValues(r, "", auth.AnonymousCredentials),
IsOwner: false,
})
@@ -1212,7 +1509,7 @@ func (api objectAPIHandlers) GetBucketPolicyStatusHandler(w http.ResponseWriter,
writable := globalPolicySys.IsAllowed(policy.Args{
Action: policy.PutObjectAction,
BucketName: bucket,
ConditionValues: getConditionValues(r, "", "", nil),
ConditionValues: getConditionValues(r, "", auth.AnonymousCredentials),
IsOwner: false,
})
@@ -1324,18 +1621,14 @@ func (api objectAPIHandlers) DeleteBucketHandler(w http.ResponseWriter, r *http.
}
}
if globalDNSConfig != nil {
if err := globalDNSConfig.Delete(bucket); err != nil {
logger.LogIf(ctx, fmt.Errorf("Unable to delete bucket DNS entry %w, please delete it manually", err))
writeErrorResponse(ctx, w, toAPIError(ctx, err), r.URL)
return
}
// Return an error if the bucket does not exist
if _, err := objectAPI.GetBucketInfo(ctx, bucket, BucketOptions{}); err != nil && !forceDelete {
writeErrorResponse(ctx, w, toAPIError(ctx, err), r.URL)
return
}
deleteBucket := objectAPI.DeleteBucket
// Attempt to delete bucket.
if err := deleteBucket(ctx, bucket, DeleteBucketOptions{
if err := objectAPI.DeleteBucket(ctx, bucket, DeleteBucketOptions{
Force: forceDelete,
SRDeleteOp: getSRBucketDeleteOp(globalSiteReplicationSys.isEnabled()),
}); err != nil {
@@ -1345,22 +1638,23 @@ func (api objectAPIHandlers) DeleteBucketHandler(w http.ResponseWriter, r *http.
apiErr.Description = "The bucket you tried to delete is not empty. You must delete all versions in the bucket."
}
}
if globalDNSConfig != nil {
if err2 := globalDNSConfig.Put(bucket); err2 != nil {
logger.LogIf(ctx, fmt.Errorf("Unable to restore bucket DNS entry %w, please fix it manually", err2))
}
}
writeErrorResponse(ctx, w, apiErr, r.URL)
return
}
if globalDNSConfig != nil {
if err := globalDNSConfig.Delete(bucket); err != nil {
logger.LogIf(ctx, fmt.Errorf("Unable to delete bucket DNS entry %w, please delete it manually, bucket on MinIO no longer exists", err))
writeErrorResponse(ctx, w, toAPIError(ctx, err), r.URL)
return
}
}
globalNotificationSys.DeleteBucketMetadata(ctx, bucket)
globalReplicationPool.deleteResyncMetadata(ctx, bucket)
// Call site replication hook.
if err := globalSiteReplicationSys.DeleteBucketHook(ctx, bucket, forceDelete); err != nil {
writeErrorResponse(ctx, w, toAPIError(ctx, err), r.URL)
return
}
logger.LogIf(ctx, globalSiteReplicationSys.DeleteBucketHook(ctx, bucket, forceDelete))
// Write success response.
writeSuccessNoContent(w)
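Note the reordering in DeleteBucketHandler: the bucket is checked and deleted first, the DNS entry is removed only after the delete succeeds, and a failed delete re-Puts the DNS entry as a compensating action rather than leaving a still-existing bucket unpublished. A toy sketch of that ordering:

```go
package main

import (
	"errors"
	"fmt"
)

func main() {
	dns := map[string]bool{"mybucket": true} // stand-in for the DNS store
	deleteBucket := func(string) error { return errors.New("bucket not empty") }

	if err := deleteBucket("mybucket"); err != nil {
		// Compensating action: the bucket still exists, so make sure its
		// DNS entry does too before failing the request.
		dns["mybucket"] = true
		fmt.Println("delete failed, DNS entry preserved:", err)
		return
	}
	// Only unpublish DNS once the bucket is really gone.
	delete(dns, "mybucket")
}
```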
@@ -1400,7 +1694,7 @@ func (api objectAPIHandlers) PutBucketObjectLockConfigHandler(w http.ResponseWri
config, err := objectlock.ParseObjectLockConfig(r.Body)
if err != nil {
apiErr := errorCodes.ToAPIErr(ErrMalformedXML)
apiErr := errorCodes.ToAPIErr(ErrInvalidArgument)
apiErr.Description = err.Error()
writeErrorResponse(ctx, w, apiErr, r.URL)
return
@@ -1414,7 +1708,11 @@ func (api objectAPIHandlers) PutBucketObjectLockConfigHandler(w http.ResponseWri
// Deny object locking configuration settings on existing buckets without object lock enabled.
if _, _, err = globalBucketMetadataSys.GetObjectLockConfig(bucket); err != nil {
writeErrorResponse(ctx, w, toAPIError(ctx, err), r.URL)
if _, ok := err.(BucketObjectLockConfigNotFound); ok {
writeErrorResponse(ctx, w, errorCodes.ToAPIErr(ErrObjectLockConfigurationNotAllowed), r.URL)
} else {
writeErrorResponse(ctx, w, toAPIError(ctx, err), r.URL)
}
return
}
@@ -1429,15 +1727,12 @@ func (api objectAPIHandlers) PutBucketObjectLockConfigHandler(w http.ResponseWri
// We encode the xml bytes as base64 to ensure there are no encoding
// errors.
cfgStr := base64.StdEncoding.EncodeToString(configData)
if err = globalSiteReplicationSys.BucketMetaHook(ctx, madmin.SRBucketMeta{
logger.LogIf(ctx, globalSiteReplicationSys.BucketMetaHook(ctx, madmin.SRBucketMeta{
Type: madmin.SRBucketMetaTypeObjectLockConfig,
Bucket: bucket,
ObjectLockConfig: &cfgStr,
UpdatedAt: updatedAt,
}); err != nil {
writeErrorResponse(ctx, w, toAPIError(ctx, err), r.URL)
return
}
}))
// Write success response.
writeSuccessResponseHeadersOnly(w)
@@ -1536,15 +1831,12 @@ func (api objectAPIHandlers) PutBucketTaggingHandler(w http.ResponseWriter, r *h
// We encode the xml bytes as base64 to ensure there are no encoding
// errors.
cfgStr := base64.StdEncoding.EncodeToString(configData)
if err = globalSiteReplicationSys.BucketMetaHook(ctx, madmin.SRBucketMeta{
logger.LogIf(ctx, globalSiteReplicationSys.BucketMetaHook(ctx, madmin.SRBucketMeta{
Type: madmin.SRBucketMetaTypeTags,
Bucket: bucket,
Tags: &cfgStr,
UpdatedAt: updatedAt,
}); err != nil {
writeErrorResponse(ctx, w, toAPIError(ctx, err), r.URL)
return
}
}))
// Write success response.
writeSuccessResponseHeadersOnly(w)
@@ -1615,15 +1907,12 @@ func (api objectAPIHandlers) DeleteBucketTaggingHandler(w http.ResponseWriter, r
return
}
if err := globalSiteReplicationSys.BucketMetaHook(ctx, madmin.SRBucketMeta{
logger.LogIf(ctx, globalSiteReplicationSys.BucketMetaHook(ctx, madmin.SRBucketMeta{
Type: madmin.SRBucketMetaTypeTags,
Bucket: bucket,
UpdatedAt: updatedAt,
}); err != nil {
writeErrorResponse(ctx, w, toAPIError(ctx, err), r.URL)
return
}
}))
// Write success response.
writeSuccessResponseHeadersOnly(w)
writeSuccessNoContent(w)
}


@@ -355,7 +355,7 @@ func testListMultipartUploadsHandler(obj ObjectLayer, instanceType, bucketName s
maxUploads: "0",
accessKey: credentials.AccessKey,
secretKey: credentials.SecretKey,
expectedRespStatus: http.StatusNotFound,
expectedRespStatus: http.StatusBadRequest,
shouldPass: false,
},
// Test case - 2.
@@ -657,11 +657,15 @@ func testAPIDeleteMultipleObjectsHandler(obj ObjectLayer, instanceType, bucketNa
) {
var err error
contentBytes := []byte("hello")
sha256sum := ""
var objectNames []string
for i := 0; i < 10; i++ {
contentBytes := []byte("hello")
objectName := "test-object-" + strconv.Itoa(i)
if i == 0 {
objectName += "/"
contentBytes = []byte{}
}
// uploading the object.
_, err = obj.PutObject(GlobalContext, bucketName, objectName, mustGetPutObjReader(t, bytes.NewReader(contentBytes), int64(len(contentBytes)), "", sha256sum), ObjectOptions{})
// if object upload fails stop the test.
@@ -673,6 +677,7 @@ func testAPIDeleteMultipleObjectsHandler(obj ObjectLayer, instanceType, bucketNa
objectNames = append(objectNames, objectName)
}
contentBytes := []byte("hello")
for _, name := range []string{"private/object", "public/object"} {
// Uploading the object with retention enabled
_, err = obj.PutObject(GlobalContext, bucketName, name, mustGetPutObjReader(t, bytes.NewReader(contentBytes), int64(len(contentBytes)), "", sha256sum), ObjectOptions{})
@@ -742,8 +747,13 @@ func testAPIDeleteMultipleObjectsHandler(obj ObjectLayer, instanceType, bucketNa
deletedObjects := make([]DeletedObject, len(requestList[0].Objects))
for i := range requestList[0].Objects {
var vid string
if isDirObject(requestList[0].Objects[i].ObjectName) {
vid = ""
}
deletedObjects[i] = DeletedObject{
ObjectName: requestList[0].Objects[i].ObjectName,
VersionID: vid,
}
}
@@ -753,9 +763,14 @@ func testAPIDeleteMultipleObjectsHandler(obj ObjectLayer, instanceType, bucketNa
successRequest1 := encodeResponse(requestList[1])
deletedObjects = make([]DeletedObject, len(requestList[1].Objects))
for i := range requestList[0].Objects {
for i := range requestList[1].Objects {
var vid string
if isDirObject(requestList[0].Objects[i].ObjectName) {
vid = ""
}
deletedObjects[i] = DeletedObject{
ObjectName: requestList[1].Objects[i].ObjectName,
VersionID: vid,
}
}
@@ -793,9 +808,9 @@ func testAPIDeleteMultipleObjectsHandler(obj ObjectLayer, instanceType, bucketNa
expectedContent []byte
expectedRespStatus int
}{
// Test case - 1.
// Test case - 0.
// Delete objects with invalid access key.
{
0: {
bucket: bucketName,
objects: successRequest0,
accessKey: "Invalid-AccessID",
@@ -803,9 +818,19 @@ func testAPIDeleteMultipleObjectsHandler(obj ObjectLayer, instanceType, bucketNa
expectedContent: nil,
expectedRespStatus: http.StatusForbidden,
},
// Test case - 2.
// Test case - 1.
// Delete valid objects with quiet flag off.
{
1: {
bucket: bucketName,
objects: successRequest0,
accessKey: credentials.AccessKey,
secretKey: credentials.SecretKey,
expectedContent: encodedSuccessResponse0,
expectedRespStatus: http.StatusOK,
},
// Test case - 2.
// Delete deleted objects with quiet flag off.
2: {
bucket: bucketName,
objects: successRequest0,
accessKey: credentials.AccessKey,
@@ -815,7 +840,7 @@ func testAPIDeleteMultipleObjectsHandler(obj ObjectLayer, instanceType, bucketNa
},
// Test case - 3.
// Delete valid objects with quiet flag on.
{
3: {
bucket: bucketName,
objects: successRequest1,
accessKey: credentials.AccessKey,
@@ -825,7 +850,7 @@ func testAPIDeleteMultipleObjectsHandler(obj ObjectLayer, instanceType, bucketNa
},
// Test case - 4.
// Delete previously deleted objects.
{
4: {
bucket: bucketName,
objects: successRequest1,
accessKey: credentials.AccessKey,
@@ -836,7 +861,7 @@ func testAPIDeleteMultipleObjectsHandler(obj ObjectLayer, instanceType, bucketNa
// Test case - 5.
// Anonymous user access denied response
// Currently anonymous users cannot delete multiple objects in MinIO server
{
5: {
bucket: bucketName,
objects: anonRequest,
accessKey: "",
@@ -847,7 +872,7 @@ func testAPIDeleteMultipleObjectsHandler(obj ObjectLayer, instanceType, bucketNa
// Test case - 6.
// Anonymous user has access to some public folder, issue removing with
// another private object as well
{
6: {
bucket: bucketName,
objects: anonRequestWithPartialPublicAccess,
accessKey: "",
@@ -881,19 +906,19 @@ func testAPIDeleteMultipleObjectsHandler(obj ObjectLayer, instanceType, bucketNa
apiRouter.ServeHTTP(rec, req)
// Assert the response code with the expected status.
if rec.Code != testCase.expectedRespStatus {
t.Errorf("Test %d: MinIO %s: Expected the response status to be `%d`, but instead found `%d`", i+1, instanceType, testCase.expectedRespStatus, rec.Code)
t.Errorf("Test %d: MinIO %s: Expected the response status to be `%d`, but instead found `%d`", i, instanceType, testCase.expectedRespStatus, rec.Code)
}
// read the response body.
actualContent, err = io.ReadAll(rec.Body)
if err != nil {
t.Fatalf("Test %d : MinIO %s: Failed parsing response body: <ERROR> %v", i+1, instanceType, err)
t.Fatalf("Test %d : MinIO %s: Failed parsing response body: <ERROR> %v", i, instanceType, err)
}
// Verify whether the returned response content matches the expected value.
if testCase.expectedContent != nil && !bytes.Equal(testCase.expectedContent, actualContent) {
t.Log(string(testCase.expectedContent), string(actualContent))
t.Errorf("Test %d : MinIO %s: Object content differs from expected value.", i+1, instanceType)
t.Errorf("Test %d : MinIO %s: Object content differs from expected value.", i, instanceType)
}
}

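The test table above switches to keyed composite-literal indices (0:, 1:, ...) and drops the i+1 offset in the log lines, so a failure message's "Test %d" always matches the literal key in the source. The idiom in isolation:

```go
package main

import "fmt"

func main() {
	// Keyed indices pin each case to its slot; reordering the entries in
	// the source no longer silently shifts the numbering used in logs.
	cases := []string{
		0: "invalid access key",
		1: "quiet flag off",
		2: "already deleted",
	}
	for i, c := range cases {
		fmt.Printf("Test %d: %s\n", i, c)
	}
}
```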

@@ -0,0 +1,88 @@
// Copyright (c) 2023 MinIO, Inc.
//
// This file is part of MinIO Object Storage stack
//
// This program is free software: you can redistribute it and/or modify
// it under the terms of the GNU Affero General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
//
// This program is distributed in the hope that it will be useful
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU Affero General Public License for more details.
//
// You should have received a copy of the GNU Affero General Public License
// along with this program. If not, see <http://www.gnu.org/licenses/>.
package cmd
import "github.com/minio/minio/internal/bucket/lifecycle"
//go:generate stringer -type lcEventSrc -trimprefix lcEventSrc_ $GOFILE
type lcEventSrc uint8
//revive:disable:var-naming Underscores is used here to indicate where common prefix ends and the enumeration name begins
const (
lcEventSrc_None lcEventSrc = iota
lcEventSrc_Scanner
lcEventSrc_Decom
lcEventSrc_Rebal
lcEventSrc_s3HeadObject
lcEventSrc_s3GetObject
lcEventSrc_s3ListObjects
lcEventSrc_s3PutObject
lcEventSrc_s3CopyObject
lcEventSrc_s3CompleteMultipartUpload
)
//revive:enable:var-naming
type lcAuditEvent struct {
lifecycle.Event
source lcEventSrc
}
func (lae lcAuditEvent) Tags() map[string]interface{} {
event := lae.Event
src := lae.source
const (
ilmSrc = "ilm-src"
ilmAction = "ilm-action"
ilmDue = "ilm-due"
ilmRuleID = "ilm-rule-id"
ilmTier = "ilm-tier"
ilmNewerNoncurrentVersions = "ilm-newer-noncurrent-versions"
ilmNoncurrentDays = "ilm-noncurrent-days"
)
tags := make(map[string]interface{}, 5)
if src > lcEventSrc_None {
tags[ilmSrc] = src.String()
}
tags[ilmAction] = event.Action.String()
tags[ilmRuleID] = event.RuleID
if !event.Due.IsZero() {
tags[ilmDue] = event.Due
}
// rule with Transition/NoncurrentVersionTransition in effect
if event.StorageClass != "" {
tags[ilmTier] = event.StorageClass
}
// rule with NewerNoncurrentVersions in effect
if event.NewerNoncurrentVersions > 0 {
tags[ilmNewerNoncurrentVersions] = event.NewerNoncurrentVersions
}
if event.NoncurrentDays > 0 {
tags[ilmNoncurrentDays] = event.NoncurrentDays
}
return tags
}
func newLifecycleAuditEvent(src lcEventSrc, event lifecycle.Event) lcAuditEvent {
return lcAuditEvent{
Event: event,
source: src,
}
}

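A hedged usage sketch of the new type, as it might be invoked from the scanner. This would live inside package cmd next to the code above (with "fmt" and "time" imported) and assumes lifecycle.DeleteAction from internal/bucket/lifecycle; Tags() then yields the ilm-* fields attached to an audit entry:

```go
// Illustrative only: assumes package cmd scope plus "fmt" and "time" imports,
// and lifecycle.DeleteAction from internal/bucket/lifecycle.
func exampleLifecycleAuditTags() {
	evt := lifecycle.Event{
		Action: lifecycle.DeleteAction,
		RuleID: "expire-old-logs",
		Due:    time.Now(),
	}
	tags := newLifecycleAuditEvent(lcEventSrc_Scanner, evt).Tags()
	// tags now holds ilm-src ("Scanner" via the generated stringer),
	// ilm-action, ilm-rule-id and ilm-due for the audit entry.
	fmt.Println(tags["ilm-src"], tags["ilm-action"], tags["ilm-rule-id"])
}
```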

@@ -21,11 +21,12 @@ import (
"encoding/xml"
"io"
"net/http"
"strconv"
"github.com/gorilla/mux"
"github.com/minio/minio/internal/bucket/lifecycle"
xhttp "github.com/minio/minio/internal/http"
"github.com/minio/minio/internal/logger"
"github.com/minio/mux"
"github.com/minio/pkg/bucket/policy"
)
@@ -115,6 +116,16 @@ func (api objectAPIHandlers) GetBucketLifecycleHandler(w http.ResponseWriter, r
vars := mux.Vars(r)
bucket := vars["bucket"]
var withUpdatedAt bool
if updatedAtStr := r.Form.Get("withUpdatedAt"); updatedAtStr != "" {
var err error
withUpdatedAt, err = strconv.ParseBool(updatedAtStr)
if err != nil {
writeErrorResponse(ctx, w, errorCodes.ToAPIErr(ErrInvalidLifecycleQueryParameter), r.URL)
return
}
}
if s3Error := checkRequestAuthType(ctx, r, policy.GetBucketLifecycleAction, bucket, ""); s3Error != ErrNone {
writeErrorResponse(ctx, w, errorCodes.ToAPIErr(s3Error), r.URL)
return
@@ -126,7 +137,7 @@ func (api objectAPIHandlers) GetBucketLifecycleHandler(w http.ResponseWriter, r
return
}
config, err := globalBucketMetadataSys.GetLifecycleConfig(bucket)
config, updatedAt, err := globalBucketMetadataSys.GetLifecycleConfig(bucket)
if err != nil {
writeErrorResponse(ctx, w, toAPIError(ctx, err), r.URL)
return
@@ -138,6 +149,9 @@ func (api objectAPIHandlers) GetBucketLifecycleHandler(w http.ResponseWriter, r
return
}
if withUpdatedAt {
w.Header().Set(xhttp.MinIOLifecycleCfgUpdatedAt, updatedAt.Format(iso8601Format))
}
// Write lifecycle configuration to client.
writeSuccessResponseXML(w, configData)
}

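The new withUpdatedAt flag is parsed with strconv.ParseBool, so any of Go's accepted boolean spellings enable it and anything else surfaces as ErrInvalidLifecycleQueryParameter. For reference:

```go
package main

import (
	"fmt"
	"strconv"
)

func main() {
	for _, s := range []string{"true", "1", "T", "yes"} {
		v, err := strconv.ParseBool(s)
		fmt.Printf("%-4s -> %v (err: %v)\n", s, v, err) // "yes" is rejected
	}
}
```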

@@ -179,7 +179,7 @@ func testBucketLifecycleHandlers(obj ObjectLayer, instanceType, bucketName strin
lifecycleResponse: []byte(``),
errorResponse: APIErrorResponse{
Resource: SlashSeparator + bucketName + SlashSeparator,
Code: "InvalidRequest",
Code: "InvalidArgument",
Message: "Filter must have exactly one of Prefix, Tag, or And specified",
},
@@ -196,7 +196,7 @@ func testBucketLifecycleHandlers(obj ObjectLayer, instanceType, bucketName strin
lifecycleResponse: []byte(``),
errorResponse: APIErrorResponse{
Resource: SlashSeparator + bucketName + SlashSeparator,
Code: "InvalidRequest",
Code: "InvalidArgument",
Message: "Date must be provided in ISO 8601 format",
},

Some files were not shown because too many files have changed in this diff.