Compare commits

...

498 Commits

Author SHA1 Message Date
Harshavardhana
edfb310a59 fix: always load ENVs from files first as soon as server starts (#18247)
This is a regression from #18231; reading from ENV files
must happen well before any parsing logic is invoked.
2023-10-15 21:13:43 -07:00
Minio Trusted
a2312028b9 Update yaml files to latest version RELEASE.2023-10-14T05-17-22Z 2023-10-14 06:27:13 +00:00
Poorna
78f1f69d57 fix site replication resync status (#18245)
Persist status changes on disk upon completion.

Adds new tests to handle this functionality.
2023-10-13 22:17:22 -07:00
Harshavardhana
e1e33077e8 fix: tests and resync replication status (#18244) 2023-10-13 17:03:34 -07:00
Aditya Manthramurthy
b3e7de010d Remove usage of errors.Join for go1.19 compat (#18243) 2023-10-13 15:14:16 -07:00
Satish Michael
f5b04865f4 Helm Chart: Added "MINIO_IDENTITY_OPENID_REDIRECT_URI" Env Var (#18236)
Adds the redirect URI environment variable.

Signed-off-by: Satish Kumar Kadarkarai Main <michael.satish@gmail.com>
2023-10-13 07:46:54 -07:00
Shireesh Anjal
bf1c6edb76 Revert "Capture network device info in health report" (#18241)
Introducing a new version of healthinfo struct for adding this info is
not correct. It needs to be implemented differently without adding a new
version.

This reverts commit 8737025d940f80360ed4b3686b332db5156f6659.
2023-10-13 07:46:36 -07:00
jiuker
2ac7fee017 fix: missing fileName caused upload failures in PostPolicyBucketHandler (#18240) 2023-10-13 07:31:23 -07:00
Klaus Post
128256e3ab Add event counters (#18232)
Export metric for global events sent and skipped for the lifetime of the server.
2023-10-12 15:39:22 -07:00
Shireesh Anjal
a66a7f3e97 Capture network device info in health report (#18213) 2023-10-12 15:33:31 -07:00
jiuker
20b79f8945 fix: env depend on the flag (#18231) 2023-10-12 15:32:38 -07:00
Klaus Post
9a877734b2 Fix various poolmeta races (#18230)
There is a fundamental race condition in `newErasureServerPools`, where setObjectLayer is 
called before the poolMeta has been loaded/populated.

We add a placeholder value to this field but disable all saving of the value, so we don't risk 
overwriting the value on disk. Once the value has been loaded or created, it is replaced with 
the proper value, which will also be saved.

Also fixes various accesses of `poolMeta` that were done without locks.

We make `poolMeta.IsSuspended` return false for out-of-range pool indices, so we no longer 
risk out-of-bounds reads.
2023-10-12 15:30:42 -07:00
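The fix described above follows a common Go pattern: publish a safe placeholder under a lock, then swap in the real value once it is loaded. A minimal hypothetical sketch; poolMeta, serverPools, and dontSave are illustrative names, not MinIO's actual types:

```go
package main

import "sync"

type poolMeta struct {
	Pools    []string
	dontSave bool // while true, never persist this placeholder to disk
}

type serverPools struct {
	mu   sync.RWMutex
	meta poolMeta
}

// newServerPools installs a safe placeholder before the object layer is
// published, then swaps in the real metadata once it has been loaded.
func newServerPools(load func() (poolMeta, error)) (*serverPools, error) {
	p := &serverPools{meta: poolMeta{dontSave: true}}
	// ... the object layer may be published here; concurrent readers
	// observe the placeholder instead of racing on an unset field.
	meta, err := load()
	if err != nil {
		return nil, err
	}
	p.mu.Lock()
	p.meta = meta // real value; saving to disk is allowed again
	p.mu.Unlock()
	return p, nil
}

// IsSuspended takes the read lock and bounds-checks the pool index.
func (p *serverPools) IsSuspended(idx int) bool {
	p.mu.RLock()
	defer p.mu.RUnlock()
	if idx < 0 || idx >= len(p.meta.Pools) {
		return false // out-of-range index: no out-of-bounds read
	}
	return false // the actual suspension lookup is elided in this sketch
}

func main() {
	_, _ = newServerPools(func() (poolMeta, error) { return poolMeta{}, nil })
}
```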
Harshavardhana
409c391850 implement helpers to get relevant info instead of FileInfo() (#18228) 2023-10-12 15:29:59 -07:00
Klaus Post
763ff085a6 Add CI tests for next branch (#18224) 2023-10-12 06:15:10 -07:00
Shubhendu
5b9656374c Error if target went offline (#18221)
If the target went offline while MinIO was down, error once
while trying to send a message. If the target goes offline while the
MinIO server is running, it is already detected through the ping()
call, which errors out if the target is offline.

Signed-off-by: Shubhendu Ram Tripathi <shubhendu@minio.io>
2023-10-12 06:13:57 -07:00
dependabot[bot]
b32014549c build(deps): bump golang.org/x/net from 0.15.0 to 0.17.0 (#18219)
Bumps [golang.org/x/net](https://github.com/golang/net) from 0.15.0 to 0.17.0.
- [Commits](https://github.com/golang/net/compare/v0.15.0...v0.17.0)

---
updated-dependencies:
- dependency-name: golang.org/x/net
  dependency-type: indirect
...

Signed-off-by: dependabot[bot] <support@github.com>
2023-10-12 01:54:36 -07:00
dependabot[bot]
9476d212bc build(deps): bump golang.org/x/net from 0.14.0 to 0.17.0 in /docs/debugging/s3-verify (#18218)
build(deps): bump golang.org/x/net in /docs/debugging/s3-verify

Bumps [golang.org/x/net](https://github.com/golang/net) from 0.14.0 to 0.17.0.
- [Commits](https://github.com/golang/net/compare/v0.14.0...v0.17.0)

---
updated-dependencies:
- dependency-name: golang.org/x/net
  dependency-type: indirect
...

Signed-off-by: dependabot[bot] <support@github.com>
2023-10-12 00:39:12 -07:00
jiuker
000928d34e fix: call globalOSMetrics.time(s)() in updateOSMetrics (#18209) 2023-10-12 00:08:13 -07:00
Harshavardhana
6829ae5b13 completely remove drive caching layer from gateway days (#18217)
This has already been deprecated for close to a year now.
2023-10-11 21:18:17 -07:00
jiuker
f09756443d fix: panic in addOrUpdateIDP when applying a dynamic config (#18208) 2023-10-11 09:06:40 -07:00
jiuker
5512016885 fix: deadlock in siteResyncMetrics init when len(siteReplication) >= 3 (#18206) 2023-10-10 23:27:27 -07:00
Harshavardhana
21ecb941fe fix: avoid counting out of band deletes during disk heal (#18205) 2023-10-10 14:39:48 -07:00
Harshavardhana
77e94087cf fix: a statfs() call moves the disk head (#18203)
If an erasure upgrade is needed, rely on the in-memory
values instead of performing a "DiskInfo()" call.

https://brendangregg.com/blog/2016-09-03/sudden-disk-busy.html

For HDDs these calls are problematic; let's avoid them because
there is no value in being absolutely strict here
in terms of parity. We are okay increasing parity
as we see fit based on the in-memory online/offline ratio.
2023-10-10 13:47:35 -07:00
Klaus Post
9ab1f25a47 fix: PutObjectExtract data races (#18199)
Several callers to putObjectTar may be racing to set sc. Move the write out of the loop.

Use static resp and request elements.

Fixes tests with -race:

```
WARNING: DATA RACE
Read at 0x00c01cd680e0 by goroutine 691354:
  github.com/minio/minio/cmd.objectAPIHandlers.PutObjectExtractHandler.func1()
      e:/gopath/src/github.com/minio/minio/cmd/object-handlers.go:2130 +0x149
  github.com/minio/minio/cmd.untar.func1()
      e:/gopath/src/github.com/minio/minio/cmd/untar.go:250 +0x2b6
  github.com/minio/minio/cmd.untar.func8()
      e:/gopath/src/github.com/minio/minio/cmd/untar.go:261 +0xa4

Previous write at 0x00c01cd680e0 by goroutine 691352:
  github.com/minio/minio/cmd.objectAPIHandlers.PutObjectExtractHandler.func1()
      e:/gopath/src/github.com/minio/minio/cmd/object-handlers.go:2131 +0x15d
  github.com/minio/minio/cmd.untar.func1()
      e:/gopath/src/github.com/minio/minio/cmd/untar.go:250 +0x2b6
  github.com/minio/minio/cmd.untar.func8()
      e:/gopath/src/github.com/minio/minio/cmd/untar.go:261 +0xa4
```
2023-10-10 08:36:44 -07:00
jiuker
aaab7aefbe fix: avoid nil panic upon error in GetObjectNInfo via InnerGetObjectNInfoFn (#18198) 2023-10-10 08:35:33 -07:00
Klaus Post
5b8599e52d Do not log invalid tag errors (#18200)
Eliminate logging on invalid tags:

```
API: PutObjectTagging(bucket=aws-sdk-go-test-aupmzek4341ee2, object=sgehiqp24fwt4hafffmtwzkrqnq325)
Time: 07:40:33 UTC 10/10/2023
DeploymentID: f122cbfa-42b1-428f-9002-39c644cace71
RequestID: 178CAF0DE0A67480
RemoteHost: 127.0.0.1
Host: 127.0.0.1:9001
UserAgent: aws-sdk-go/1.44.257 (go1.21.0; linux; amd64)
Error: Tags cannot be more than 10 (*tags.errTag)
       5: internal\logger\logger.go:259:logger.LogIf()
       4: cmd\api-errors.go:2350:cmd.toAPIErrorCode()
       3: cmd\api-errors.go:2375:cmd.toAPIError()
       2: cmd\object-handlers.go:2912:cmd.objectAPIHandlers.PutObjectTaggingHandler()
       1: net\http\server.go:2136:http.HandlerFunc.ServeHTTP()

API: PutObjectTagging(bucket=aws-sdk-go-test-aupmzek4341ee2, object=sgehiqp24fwt4hafffmtwzkrqnq325)
Time: 07:40:33 UTC 10/10/2023
DeploymentID: f122cbfa-42b1-428f-9002-39c644cace71
RequestID: 178CAF0DE0BEA514
RemoteHost: 127.0.0.1
Host: 127.0.0.1:9001
UserAgent: aws-sdk-go/1.44.257 (go1.21.0; linux; amd64)
Error: Cannot provide multiple Tags with the same key (*tags.errTag)
       5: internal\logger\logger.go:259:logger.LogIf()
       4: cmd\api-errors.go:2350:cmd.toAPIErrorCode()
       3: cmd\api-errors.go:2375:cmd.toAPIError()
       2: cmd\object-handlers.go:2912:cmd.objectAPIHandlers.PutObjectTaggingHandler()
       1: net\http\server.go:2136:http.HandlerFunc.ServeHTTP()

API: PutObjectTagging(bucket=aws-sdk-go-test-aupmzek4341ee2, object=sgehiqp24fwt4hafffmtwzkrqnq325)
Time: 07:40:33 UTC 10/10/2023
DeploymentID: f122cbfa-42b1-428f-9002-39c644cace71
RequestID: 178CAF0DE0E78970
RemoteHost: 127.0.0.1
Host: 127.0.0.1:9001
UserAgent: aws-sdk-go/1.44.257 (go1.21.0; linux; amd64)
Error: The TagKey you have provided is invalid (*tags.errTag)
       5: internal\logger\logger.go:259:logger.LogIf()
       4: cmd\api-errors.go:2350:cmd.toAPIErrorCode()
       3: cmd\api-errors.go:2375:cmd.toAPIError()
       2: cmd\object-handlers.go:2912:cmd.objectAPIHandlers.PutObjectTaggingHandler()
       1: net\http\server.go:2136:http.HandlerFunc.ServeHTTP()

API: PutObjectTagging(bucket=aws-sdk-go-test-aupmzek4341ee2, object=sgehiqp24fwt4hafffmtwzkrqnq325)
Time: 07:40:33 UTC 10/10/2023
DeploymentID: f122cbfa-42b1-428f-9002-39c644cace71
RequestID: 178CAF0DE1002AE8
RemoteHost: 127.0.0.1
Host: 127.0.0.1:9001
UserAgent: aws-sdk-go/1.44.257 (go1.21.0; linux; amd64)
Error: The TagValue you have provided is invalid (*tags.errTag)
       5: internal\logger\logger.go:259:logger.LogIf()
       4: cmd\api-errors.go:2350:cmd.toAPIErrorCode()
       3: cmd\api-errors.go:2375:cmd.toAPIError()
       2: cmd\object-handlers.go:2912:cmd.objectAPIHandlers.PutObjectTaggingHandler()
       1: net\http\server.go:2136:http.HandlerFunc.ServeHTTP()
```
2023-10-10 08:35:03 -07:00
Harshavardhana
74e0c9ab9b reduce unnecessary logging, simplify certain error handling (#18196)
remove a bunch of unnecessary logs
2023-10-10 00:33:42 -07:00
Harshavardhana
dcce83b288 avoid rebalance state for getObjectTags if any (#18197)
fixes #18190
2023-10-09 23:56:26 -07:00
Matthew Toohey
f731e7ea36 Fix current_send_in_progress metric always being zero (#18160) 2023-10-09 17:28:17 -07:00
Maxim Tkachenko
ec30bb89a4 simplify channel send() in WalkDir() (#18186) 2023-10-09 17:27:55 -07:00
Klaus Post
7cd08594f6 Use better host names for metric errors (#18188)
Typically hosts would end up like this:

```
   "hosts": [
        ":9000",
        ":9000",
        ":9000",
...
```

Also add host name to errors.
2023-10-09 17:27:11 -07:00
Aditya Manthramurthy
2b4531f069 fix: O_DIRECT is on only for multi-disk setups (#18194)
Disable it for single disk/unsupported platforms
2023-10-09 17:08:40 -07:00
Harshavardhana
11544a62aa fix: upon write failure on disk journal close the file properly (#18183)
Close the file properly before dereferencing *os.File;
otherwise this can silently leak fds in rare cases.

This PR fixes this properly.
2023-10-08 12:17:08 -07:00
Taran Pelkey
18550387d5 fix: DeleteServiceAccount API behavior (#18163) 2023-10-08 12:13:18 -07:00
Minio Trusted
efb03e19e6 Update yaml files to latest version RELEASE.2023-10-07T15-07-38Z 2023-10-08 07:09:45 +00:00
Praveen raj Mani
c27d0583d4 Send kafka notification messages in batches when queue_dir is enabled (#18164)
Fixes #18124
2023-10-07 08:07:38 -07:00
Klaus Post
0de2b9a1b2 Fix panic on double unfreezeServices (#18177)
Calling unfreezeServices twice results in panic:

```
panic: "POST /minio/peer/v32/signalservice?signal=4&sub-sys=": close of nil channel
goroutine 14703 [running]:
runtime/debug.Stack()
	runtime/debug/stack.go:24 +0x65
github.com/minio/minio/cmd.setCriticalErrorHandler.func1.1()
	github.com/minio/minio/cmd/generic-handlers.go:549 +0x8e
panic({0x27c3020, 0x4c9b370})
	runtime/panic.go:884 +0x212
github.com/minio/minio/cmd.unfreezeServices()
	github.com/minio/minio/cmd/service.go:112 +0xc7
github.com/minio/minio/cmd.(*peerRESTServer).SignalServiceHandler(0x0?, {0x4cb6af0, 0xc010b96420}, 0xc01affab00)
	github.com/minio/minio/cmd/peer-rest-server.go:837 +0x13a
net/http.HandlerFunc.ServeHTTP(...)
```

If the function was called a second time `val` would not be nil, but the returned channel `ch` would be, causing the panic.

Check the channel isn't nil and also use Swap for an atomic swap instead of 2 separate operations (though we are in a mutex).
2023-10-06 07:51:50 -06:00
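A minimal sketch of the guard described in this commit, assuming a package-level atomic.Value holding the freeze channel (names are illustrative): a single atomic Swap replaces the separate Load/Store pair, and a nil-channel check prevents the double close.

```go
package main

import "sync/atomic"

var serviceFreeze atomic.Value // holds a chan struct{} while frozen

func freezeServices() {
	serviceFreeze.Store(make(chan struct{}))
}

func unfreezeServices() {
	// Swap out the stored channel atomically; a second call finds the
	// typed nil channel sentinel and returns instead of closing again.
	if ch, ok := serviceFreeze.Swap((chan struct{})(nil)).(chan struct{}); ok && ch != nil {
		close(ch) // wake all waiters exactly once
	}
}

func main() {
	freezeServices()
	unfreezeServices()
	unfreezeServices() // no panic: channel was already swapped to nil
}
```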
Poorna
9dc29d7687 Avoid ILM expiry on deleted versions that are yet to replicate (#18175)
Fixes #18167
2023-10-06 06:55:15 -06:00
Poorna
72871dbb9a delete replication: avoid overwriting replication decision (#18174)
from ObjectInfo unless a version purge status is present; otherwise
there is potential to make an incorrect replication decision if Stat
returned an error.
2023-10-05 21:09:45 -06:00
Aditya Manthramurthy
4bda4e4e2b fix: check for disk-level O_DIRECT support (#18173)
Disk level O_DIRECT support checking at xl storage initialization was
conditional on a config setting being enabled. (This never took effect
because config initialization happens after ObjectLayer is ready.) This
is not necessary as the config setting is dynamic - O_DIRECT should be
enabled via runtime config. So we need to do the disk level support
check regardless of the config setting.
2023-10-05 20:54:49 -06:00
Harshavardhana
1971c54a50 update buffer channels for both trace and listen events (#18171)
- Trace needs higher buffered channels than 4000 to ensure
  when we run `mc admin trace -a` it captures all information
  sufficiently.

- Listen event notification needs the event channel to be
  `apiRequestsMaxPerNode` * number of nodes
2023-10-05 18:16:04 -06:00
Cesar N
bb77b89da0 Update MinIO Console version (#18168)
Co-authored-by: cesnietor <>
2023-10-04 16:25:59 -07:00
Anis Eleuch
b336e9a79f fix: loading usage cache to not fail early when reading the backup fails (#18158)
Currently, the retry is not fully used when there is no backup copy of
the data usage; use 5 retry attempts when we don't have any valid data, 
new or backup, unless we have seen an unrecognized error.
2023-10-02 19:22:35 -07:00
Harshavardhana
a2ab21e91c add max-keys=2 optimization for spark workloads (#18154)
The comment in the code provides a more detailed explanation
of what this PR entails and its assumptions.

This PR reduces the amount of listing() by an order
of magnitude; however, there are other such calls that
still need further optimization, which shall be done
in subsequent PRs.
2023-10-02 07:52:59 -06:00
Sveinn
603437e70f Fix startup formatting (#18156)
Percent signs in root user names were being interpreted as formatting directives.

Before:
```
S3-API: http://192.168.50.21:9000  http://172.31.96.1:9000  http://127.0.0.1:9000
RootUser: "U4B6Zi!b75DXSPm%!!(MISSING)a(MISSING)vZb"
RootPass: "Q4#Q6y8G%!P(MISSING)x#npP4dudUobU#NBcGB7RMKV4ajYb"

Console: http://192.168.50.21:51915 http://172.31.96.1:51915 http://127.0.0.1:51915
RootUser: "U4B6Zi!b75DXSPm%!!(MISSING)a(MISSING)vZb"
RootPass: "Q4#Q6y8G%!P(MISSING)x#npP4dudUobU#NBcGB7RMKV4ajYb"

Command-line: https://min.io/docs/minio/linux/reference/minio-mc.html#quickstart
FORMAT: %117s MESSAGE: $ mc alias set myminio http://192.168.50.21:9000 "U4B6Zi!b75DXSPm%avZb" "Q4#Q6y8G%%Px#npP4dudUobU#NBcGB7RMKV4ajYb"
   $ mc alias set myminio http://192.168.50.21:9000 "U4B6Zi!b75DXSPm%!a(MISSING)vZb" "Q4#Q6y8G%Px#npP4dudUobU#NBcGB7RMKV4ajYb"
```

After:

```
Status:         1 Online, 0 Offline.
S3-API: http://192.168.50.21:9000  http://172.31.96.1:9000  http://127.0.0.1:9000
RootUser: "U4B6Zi!b75DXSPm%avZb"
RootPass: "Q4#Q6y8G%%Px#npP4dudUobU#NBcGB7RMKV4ajYb"

Console: http://192.168.50.21:52421 http://172.31.96.1:52421 http://127.0.0.1:52421
RootUser: "U4B6Zi!b75DXSPm%avZb"
RootPass: "Q4#Q6y8G%%Px#npP4dudUobU#NBcGB7RMKV4ajYb"

Command-line: https://min.io/docs/minio/linux/reference/minio-mc.html#quickstart
   $ mc alias set myminio http://192.168.50.21:9000 "U4B6Zi!b75DXSPm%avZb" "Q4#Q6y8G%%Px#npP4dudUobU#NBcGB7RMKV4ajYb"
```

No need for special Windows case. `mc` works just fine.
2023-10-02 07:39:47 -06:00
Harshavardhana
db3a9a5990 update missing mc command on multipart-tests 2023-09-30 20:29:45 -07:00
Harshavardhana
24c7e73b4e update helm chart images and release v5.0.14
Signed-off-by: Harshavardhana <harsha@minio.io>
2023-09-30 13:46:10 -07:00
Alik
c053e57068 Add parameters in Helm chart to load OIDC clientSecret from Secret Resource (#17784) 2023-09-30 13:44:38 -07:00
Shireesh Anjal
6d20ec3bea Add support for resource metrics (#18057)
Add a new endpoint for "resource" metrics `/v2/metrics/resource`

This should return system metrics related to drives, network, CPU and
memory. Except for drives, other metrics should have corresponding "avg"
and "max" values also.

Reuse the real-time feature to capture the required data,
introducing CPU and memory metrics in it.

Collect the data every minute and keep updating the average and max values
accordingly, returning the latest values when the API is called.
2023-09-30 13:40:20 -07:00
Harshavardhana
c50627ee3e Add tests for multipart upload overwrites on versioned buckets (#18142) 2023-09-30 03:13:56 -07:00
Minio Trusted
b3cd893f93 Update yaml files to latest version RELEASE.2023-09-30T07-02-29Z 2023-09-30 07:51:58 +00:00
Anis Eleuch
22d2dbc4e6 decom: Fix infinite retry when the decom is canceled (#18143)
Also, use rand.Float64() since it is thread-safe; otherwise go race
will complain.
2023-09-30 00:02:29 -07:00
Shireesh Anjal
2b5d9428b1 Use latest madmin-go (v3.0.21) (#18138)
This ensures that drive model is included in the partition data inside
the health diagnostics report.
2023-09-29 11:25:34 -07:00
Harshavardhana
d6446cb096 do not return an error in AbortMultipartUpload() (#18135)
Returning an error is somewhat undefined in AWS S3,
which may or may not return an error depending on the
time elapsed since AbortMultipartUpload().
2023-09-29 10:28:19 -07:00
Harshavardhana
c34bdc33fb make sure to set Versioned field to ensure rename2 is not called (#18141)
Without this, rename2() can rename the previous dataDir,
causing issues for different versions of the object; only the
latest version is preserved due to this bug.

Added healing code to ensure recovery of such content.
2023-09-29 09:08:24 -07:00
ferhat elmas
dd8547e51c chore: drop unnecessary linter (#18133) 2023-09-29 03:11:31 -07:00
Anis Eleuch
aec023f537 Avoid showing buckets without quorum in each pool (#18125) 2023-09-29 00:58:54 -07:00
Poorna
e101eeeda9 fix: tier addition validation (#18136) 2023-09-28 22:33:24 -07:00
Minio Trusted
f29522269d Update yaml files to latest version RELEASE.2023-09-27T15-22-50Z 2023-09-28 17:49:57 +00:00
Harshavardhana
3c470a6b8b fix: the inspect script to use scheme per deployment (#18118) 2023-09-27 08:22:50 -07:00
Poorna
6bc7d711b3 delete of a missing versionId return 204 (#18117) 2023-09-26 14:02:56 -07:00
Shubhendu
10d5dd3a67 fix: a regression with audit log sending (#18112)
Signed-off-by: Shubhendu Ram Tripathi <shubhendu@minio.io>
2023-09-26 12:23:02 -07:00
Harshavardhana
d9f1df01eb return an error in CopyAligned upon premature EOF (#18110)
add a unit-test to capture this corner case
2023-09-26 11:20:06 -07:00
Harshavardhana
cdeab19673 fix: always check error upon w.Close() in Write() (#18111)
Not checking w.Close() can prematurely make us
think that w.Write() actually succeeded; apparently
Write() may or may not return an error, and sometimes
the error from Write() only propagates during a
Close() call on the fd.

Since Fdatasync(w) on the FD would return an error,
Close() error handling is less of a concern; however, it may
happen that fdatasync() returns no error whereas
Close() does.
2023-09-26 11:04:00 -07:00
Anis Eleuch
22ee678136 tier: Avoid doing versioned operations since not required anymore (#18108)
Currently, setting a new tiering target returns an error when a bucket
is versioned and the tiering credentials do not have authorization to
specify a version-id when reading or removing a specific version;

Since tiering does not require versioning anymore; avoid doing versioned
operations when performing checklist ops while adding a new tiering
configuration.
2023-09-26 00:14:56 -07:00
Poorna
50a8f13e85 site replication: allow setting bandwidth default for bucket (#18062)
This can still be overridden at the bucket level
2023-09-25 15:50:52 -07:00
jiuker
6dec60b6e6 fix: check post policy like AWS S3 (#18074) 2023-09-25 12:35:25 -07:00
Harshavardhana
ac3a19138a fix: set scanning details locally to avoid cached values (#18092)
Atomic variable results such as scanning must not use
cached values; instead, rely on real-time information.
2023-09-25 08:26:29 -07:00
Klaus Post
21e8e071d7 Improve ListObject Compatibility (#18099)
Do not error out when a provided marker is before or after the prefix, but instead just ignore it if before and return an empty list when after.

Fixes #18093
2023-09-25 08:13:08 -07:00
Klaus Post
57f84a8b4c Add abandoned folder scanning to metrics (#18076)
Include object and versions heal scan times when checking non-empty abandoned folders.

Furthermore, don't add a delay between healing versions; instead, wait once per object.
2023-09-24 22:15:31 -07:00
Minio Trusted
8a672e70a7 Update yaml files to latest version RELEASE.2023-09-23T03-47-50Z 2023-09-25 01:55:31 +00:00
Aditya Manthramurthy
22041bbcc4 fix: Update policy mapping properly in notification (#18088)
This is fixing a regression from an earlier change where STS account
loading was made lazy.
2023-09-22 20:47:50 -07:00
Harshavardhana
5afb459113 upgrade all dependencies (#18085) 2023-09-22 14:45:19 -07:00
Harshavardhana
91ebac0a00 fix: move abandoned parts check after healing not in ILM path (#18087) 2023-09-22 12:07:52 -07:00
mundry
5fcb1cfd31 fix: broken bucket versioning support in community helm chart (#18003) 2023-09-22 11:32:24 -07:00
Harshavardhana
3a90fb108c only look for metadata if batch replication asks for metadata filters (#18082)
This PR changes StatObject() from a must-have for non-MinIO sources
to a conditional API call.

- Calls StatObject() when needed
- Calls GetObjectTagging() when needed

Without these conditionals, these calls can cause a lot of
delays, so we avoid them when not needed in the more common scenarios.
2023-09-22 11:31:57 -07:00
Anis Eleuch
4eeb48f8e0 Return cached online/offline status for audit/http loggers (#18083)
To avoid having delays in prometheus scrape and in 'mc admin info' command.
2023-09-21 16:58:24 -07:00
Harshavardhana
373d48c8a3 allow admin actions to have proper condition map (#18080)
upgrade minio/pkg to v2.0.2

fixes #18078
2023-09-21 13:22:09 -07:00
Harshavardhana
1472875670 fix: failed messages counting in audit_http metrics (#18075)
Retries must not all be counted as failed messages;
a failed message is a single counter, not one for
each retry. This PR fixes this.

Also, we do not need to retry 10 times; instead we should
retry at most 3 times with some jitter to deliver the
messages.
2023-09-21 11:24:56 -07:00
Shubhendu
74cfb207c1 Added check for mandatory MINIO_KMS_KES_KEY_NAME env var (#18077)
If MinIO is started with KMS enabled, MINIO_KMS_KES_KEY_NAME must
be set for the server to start.

Signed-off-by: Shubhendu Ram Tripathi <shubhendu@minio.io>
2023-09-21 10:37:37 -07:00
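A small sketch of such a startup check; the assumption that a configured KES endpoint means KMS is enabled, and the exact error text, are illustrative:

```go
package main

import (
	"log"
	"os"
)

func main() {
	// Assumption: presence of a KES endpoint means KMS is enabled.
	if os.Getenv("MINIO_KMS_KES_ENDPOINT") != "" {
		if os.Getenv("MINIO_KMS_KES_KEY_NAME") == "" {
			// Refuse to start instead of failing later on first use.
			log.Fatal("MINIO_KMS_KES_KEY_NAME must be set when KES is configured")
		}
	}
}
```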
Minio Trusted
6a096e7dc7 Update yaml files to latest version RELEASE.2023-09-20T22-49-55Z 2023-09-21 00:42:16 +00:00
Harshavardhana
9788d85ea3 remove logging for invalid metadata values (#18068) 2023-09-20 15:49:55 -07:00
Anis Eleuch
69c0e18685 perf net: Add the endpoint name related to the perf net error (#18063)
In a perf test, one node runs a speed test with all nodes. If there is
an error with a peer node, the peer node name is not included in the
error, confusing the user.

This commit adds the peer endpoint string to the netperf error.
2023-09-19 22:41:06 -07:00
Aditya Manthramurthy
3cac927348 Load STS policy mappings periodically (#18061)
To ensure that policy mappings are current for service accounts
belonging to (non-derived) STS accounts (like an LDAP user's service
account) we periodically reload such mappings.

This is primarily to handle a case where a policy mapping update
notification is missed by a minio node. Such a node would continue to
have the stale mapping in memory because STS creds/mappings were never
periodically scanned from storage.
2023-09-19 17:57:42 -07:00
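A minimal sketch of this kind of periodic reload, with loadSTSPolicyMappings and the interval as hypothetical stand-ins:

```go
package main

import (
	"context"
	"time"
)

func loadSTSPolicyMappings(ctx context.Context) error {
	// ... read mappings from storage and refresh the in-memory cache.
	return nil
}

// refreshSTSMappings reloads STS policy mappings on a timer, so a node
// that missed an update notification eventually converges.
func refreshSTSMappings(ctx context.Context, every time.Duration) {
	t := time.NewTicker(every)
	defer t.Stop()
	for {
		select {
		case <-ctx.Done():
			return
		case <-t.C:
			_ = loadSTSPolicyMappings(ctx) // errors retried on next tick
		}
	}
}

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), time.Second)
	defer cancel()
	go refreshSTSMappings(ctx, 250*time.Millisecond)
	<-ctx.Done()
}
```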
Harshavardhana
9081346c40 fix: more regressions listing policy mappings (#18060)
Also relax ListServiceAccounts() to not return an error when
no service accounts exist.
2023-09-19 15:23:18 -07:00
Harshavardhana
fcfadb0e51 fix: regression in loading LDAP users policy mappings (#18055)
LDAP users are stored as STS users; we need to load
their policy mappings appropriately.

Fixes a regression caused by #17994
2023-09-19 10:31:56 -07:00
Harshavardhana
2add57cfed apply healing per object at 1024 cycles (#18050)
- we already have MRF for most recent failures
- we trigger healing during HEAD/GET operation

These are enough; also change the default max wait
from 5s to 1s for the default scanner speed.
2023-09-19 09:24:22 -07:00
Anis Eleuch
c5279ec630 fix: building reorder-disks under darwin (#18053)
Also build debugging tools only in tests or with a specific target
2023-09-19 03:19:26 -07:00
Poorna
b73699fad8 replication: pass user tags while queueing (#18052)
Continues from #18032 - otherwise replication will fail on tag based rules.
2023-09-19 03:18:28 -07:00
Harshavardhana
b8ebe54e53 Revert "skip tiered objects to GLACIER in batch replication (#18044)"
This reverts commit fd421ddd6f.

MinIO already provides `filter` based on metadata that would work
in this scenario already.
2023-09-19 00:05:40 -07:00
Harshavardhana
c3d70e0795 cache usage, prefix-usage, and buckets for AccountInfo up to 10 secs (#18051)
AccountInfo is called quite frequently by Console UI
login attempts; when many users are logging in, it is important
that we provide them with better responsiveness.

- ListBuckets information is cached every second
- Bucket usage info is cached for up to 10 seconds
- Prefix usage (optional) info is cached for up to 10 seconds

Failure to update after cache expiration would still
allow login, which would end up serving the previously
cached information.

This allows for seamless responsiveness for Console UI
logins, and overall responsiveness on a heavily loaded
system.
2023-09-18 22:13:03 -07:00
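A minimal TTL-cache sketch illustrating the strategy described above (serve slightly stale usage data rather than blocking logins); the type, field names, and TTL are illustrative:

```go
package main

import (
	"sync"
	"time"
)

type cachedUsage struct {
	mu      sync.Mutex
	value   uint64
	fetched time.Time
	ttl     time.Duration
	fetch   func() (uint64, error)
}

// Get returns the cached value while it is fresh; on expiry it tries to
// refresh, but falls back to the stale value if the refresh fails.
func (c *cachedUsage) Get() uint64 {
	c.mu.Lock()
	defer c.mu.Unlock()
	if time.Since(c.fetched) < c.ttl {
		return c.value
	}
	if v, err := c.fetch(); err == nil {
		c.value, c.fetched = v, time.Now()
	}
	return c.value // possibly stale, but keeps logins responsive
}

func main() {
	c := &cachedUsage{ttl: 10 * time.Second, fetch: func() (uint64, error) { return 42, nil }}
	_ = c.Get()
}
```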
Harshavardhana
8c4561b8da add all missing go.mod for debugging tools (#18049) 2023-09-18 13:47:03 -07:00
Harshavardhana
fd421ddd6f skip tiered objects to GLACIER in batch replication (#18044)
Objects tiered to GLACIER are not readable until
they are restored; we skip these as unreadable.
2023-09-18 10:25:31 -07:00
jiuker
9947c01c8e feat: SSE-KMS: use a UUID instead of reading all data to compute an MD5 (#17958) 2023-09-18 10:00:54 -07:00
Eng Zer Jun
a00db4267c data-usage-cache: remove redundant nil check (#17970)
From the Go specification:

  "3. If the map is nil, the number of iterations is 0." [1]

Therefore, an additional nil check before the loop is unnecessary.

[1]: https://go.dev/ref/spec#For_range

Signed-off-by: Eng Zer Jun <engzerjun@gmail.com>
2023-09-16 19:09:29 -07:00
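The quoted guarantee in action; ranging over a nil map performs zero iterations:

```go
package main

import "fmt"

func main() {
	var m map[string]int // nil map
	n := 0
	for range m {
		n++ // never executed: a nil map yields 0 iterations
	}
	fmt.Println(n) // 0
}
```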
Harshavardhana
36385010f5 use optimized pathJoin instead of path.Join (#18042)
This avoids allocations in the scanner routine; they are tiny, but
they add up over many cycles of the scanner.
2023-09-16 19:08:59 -07:00
Harshavardhana
fa6d082bfd reduce all major allocations in replication path (#18032)
- remove targetClient from being passed around via replicationObjectInfo{}
- remove cloning of object info unnecessarily
- remove objectInfo from replicationObjectInfo{} (only keep the necessary fields)
2023-09-16 02:28:06 -07:00
Minio Trusted
9fab91852a Update yaml files to latest version RELEASE.2023-09-16T01-01-47Z 2023-09-16 07:38:18 +00:00
Poorna
b733e6e83c site replication: turn off retry logic for admin API calls (#18039)
Additionally, mark the site offline if the network is down.
2023-09-15 18:01:47 -07:00
Harshavardhana
ce05bb69dc update console v0.39.0 (#18038)
Signed-off-by: Harshavardhana <harsha@minio.io>
2023-09-15 14:01:52 -07:00
Anis Eleuch
37aa5934a1 scanner: Fix loading data usage cache structure (#18037)
Return an empty data usage cache structure when the data usage cache
file does not exist, otherwise, the scanner won't work.
2023-09-15 13:11:08 -07:00
Harshavardhana
1647fc7edc fix: optimize listMultipartUploads to serve via local disks (#18034)
and remove unused getLoadBalancedDisks()
2023-09-15 08:34:03 -07:00
Harshavardhana
7b92687397 remove generating presignedURLs with range header for lambda (#18033) 2023-09-14 21:58:17 -07:00
Anis Eleuch
419e5baf16 fix: webhook notify endpoint with standard ports (#18016) 2023-09-14 20:10:44 -07:00
Alex
dc48cd841a Added MINIO_PROMETHEUS_AUTH_TOKEN env support (#18028)
Signed-off-by: Benjamin Perez <benjamin@bexsoft.net>
2023-09-14 17:28:21 -07:00
Anis Eleuch
b0e1776d6d Do not use a chain for S3 tiering to return better error messages (#18030)
When using a chain provider, if no provider returns a valid
access and secret key, an anonymous request is sent, which makes it hard
for users to figure out what is going on.

In the case of S3 tiering, when AWS IAM temporary account generation returns
an error, an anonymous login will be used because of the chain provider.
Avoid this and use the AWS IAM provider directly to get a good error
message.
2023-09-14 15:28:20 -07:00
Aditya Manthramurthy
7a7068ee47 Move IAM periodic ops to a single go routine (#18026)
This helps reduce disk operations as these periodic routines would not
run concurrently any more.

Also add expired STS purging periodic operation: Since we do not scan
the on-disk STS credentials (and instead only load them on-demand) a
separate routine is needed to purge expired credentials from storage.
Currently this runs about a quarter as often as IAM refresh.

Also fix a bug where with etcd, STS accounts could get loaded into the
iamUsersMap instead of the iamSTSAccountsMap.
2023-09-14 15:25:17 -07:00
Aditya Manthramurthy
cbc0ef459b Fix policy package import name (#18031)
We do not need to rename the import of minio/pkg/v2/policy as iampolicy
any more.
2023-09-14 14:50:16 -07:00
Harshavardhana
a2aabfabd9 add backups for usage-caches to rely on upon error (#18029)
This allows the scanner to avoid lengthy scans, skip
things appropriately, and not lose metrics in
any manner.

Reduce the longer deadlines for usage-cache loads/saves
to match the disk timeout, which is now 2 minutes per
IOP.
2023-09-14 11:53:52 -07:00
Harshavardhana
822cbd4b43 add a couple of missing things from #18027 2023-09-13 23:26:48 -07:00
Ravind Kumar
3c19a9308d DOCS-987: Reorganizing list.md for better RST compatibility (#18027) 2023-09-13 23:23:37 -07:00
Harshavardhana
32890342ce introduce MINIO_BROWSER_REDIRECT env to enable/disable auto-redirect (#18025) 2023-09-13 18:43:57 -07:00
Aditya Manthramurthy
ed2c2a285f Load STS accounts into IAM cache lazily (#17994)
In situations with large number of STS credentials on disk, IAM load
time is high. To mitigate this, STS accounts will now be loaded into
memory only on demand - i.e. when the credential is used.

In each IAM cache (re)load we skip loading STS credentials and STS
policy mappings into memory. Since STS accounts only expire and cannot
be deleted, there is no risk of invalid credentials being reused,
because credential validity is checked when it is used.
2023-09-13 12:43:46 -07:00
Poorna
18e23bafd9 replication resync: report only the on-disk status (#18017)
Avoid reporting in-memory status since results can vary if different
nodes are queried, resync always runs at a single node.
2023-09-13 10:58:38 -07:00
Harshavardhana
8b8be2695f optimize mkdir calls to avoid base-dir Mkdir attempts (#18021)
Currently we have IOPs of these patterns

```
[OS] os.Mkdir play.min.io:9000 /disk1 2.718µs
[OS] os.Mkdir play.min.io:9000 /disk1/data 2.406µs
[OS] os.Mkdir play.min.io:9000 /disk1/data/.minio.sys 4.068µs
[OS] os.Mkdir play.min.io:9000 /disk1/data/.minio.sys/tmp 2.843µs
[OS] os.Mkdir play.min.io:9000 /disk1/data/.minio.sys/tmp/d89c8ceb-f8d1-4cc6-b483-280f87c4719f 20.152µs
```

It can be seen that we can save several levels: if
your drive is mounted at `/disk1/minio`, you can simply
skip sending `Mkdir /disk1/` and `Mkdir /disk1/minio`.

Since they are expected to exist already, this PR adds a way
for us to ignore all paths up to the mount point, or whichever
directory has been provided to the MinIO setup.
2023-09-13 08:14:36 -07:00
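A hypothetical sketch of the idea: create only the path components below a base directory that is known to exist (the mount point or the directory handed to the setup). mkdirAllBelow is an illustrative helper, not MinIO's API:

```go
package main

import (
	"os"
	"path/filepath"
	"strings"
)

// mkdirAllBelow creates only the components of path that lie below
// base, assuming base already exists (e.g. the drive mount point).
func mkdirAllBelow(base, path string) error {
	rel, err := filepath.Rel(base, path)
	if err != nil || strings.HasPrefix(rel, "..") {
		return os.MkdirAll(path, 0o755) // not under base; fall back
	}
	cur := base
	for _, part := range strings.Split(rel, string(filepath.Separator)) {
		cur = filepath.Join(cur, part)
		// Mkdir on an existing directory is expected and ignored.
		if err := os.Mkdir(cur, 0o755); err != nil && !os.IsExist(err) {
			return err
		}
	}
	return nil
}

func main() {
	_ = mkdirAllBelow(os.TempDir(), filepath.Join(os.TempDir(), "data", ".sys", "tmp"))
}
```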
Poorna
96fbf18201 replication: queue existing objects to same workers as incoming (#18020)
Previously, existing objects were queued to a single worker, and MRF re-queues
were also handled by the same worker; this does not fully use the available
bandwidth when there is no incoming workload.
2023-09-12 21:59:15 -07:00
Harshavardhana
c8a57a8fa2 fix: send content-md5 for AWS S3 proactively (#18018)
fixes #17977
2023-09-12 19:11:13 -07:00
Harshavardhana
b1c2dacab3 fix: allow dynamic ports for API only in non-distributed setups (#18019)
fixes #17998
2023-09-12 19:10:49 -07:00
Harshavardhana
65939913b4 update all dependencies (#18012) 2023-09-12 13:16:46 -07:00
Harshavardhana
08b3a466e8 fix: allow concurrent SFTP connections (#18013)
The current implementation did not fully support
concurrent SFTP connections;
this PR handles them properly.

fixes #17914
2023-09-12 12:41:52 -07:00
Harshavardhana
5aa7c38035 update pkg to v2.0.1 to extend admin actions (#18008) 2023-09-12 01:11:52 -07:00
Harshavardhana
1df5e31706 optimize MRF replication queue to avoid memory leaks (#18007) 2023-09-11 20:59:11 -07:00
Harshavardhana
9f7044aed0 fix: ignore transient errors in read path (#18006)
Errors such as

```
returned an error (context deadline exceeded) (*fmt.wrapError)
```

```
(msgp: too few bytes left to read object) (*fmt.wrapError)
```
2023-09-11 15:29:59 -07:00
Anis Eleuch
41de53996b heal: calculate the number of workers based on NRRequests (#17945) 2023-09-11 14:48:54 -07:00
Harshavardhana
9878031cfd fix: change DISK_ to DRIVE_ for some drive related envs (#18005) 2023-09-11 12:19:22 -07:00
Harshavardhana
e3fbcaeb72 allow scanner key cycle to be empty (#18001)
Configs from a 2020 server throw an
error due to deprecation of the keys;
however, an attempt is made to parse
them, and we should have chosen existing
defaults - this PR fixes that.
2023-09-09 08:53:32 -07:00
Harshavardhana
ca6dd8be5e use go1.21.1 for vulncheck 2023-09-07 16:15:31 -07:00
Minio Trusted
fba0924b1d Update yaml files to latest version RELEASE.2023-09-07T02-05-02Z 2023-09-07 23:10:40 +00:00
Poorna
703ed46d79 fix: replication of tags while removing (#17989)
A tag removal was not being replicated prior to this change
2023-09-06 19:05:02 -07:00
Harshavardhana
f7ca6c63c2 fix: bucket quota clear and honor existing quota config (#17988) 2023-09-06 19:03:58 -07:00
Harshavardhana
ad69b9907f fix: report bucket metrics for only existing buckets (#17987) 2023-09-06 12:50:46 -07:00
Anis Eleuch
b9269151a4 fix: drive rotational status calculation for partitions (#17986)
If a MinIO drive path is mounted to a partition and not a real disk,
getting the rotational status would fail because Linux does not expose
that status for partitions; in other words,
/sys/block/drive-partition-name/queue/rotational does not exist.

To fix the issue, the code will search for the rotational status of the
disk that hosts the partition, which can be calculated from the
real path of /sys/class/block/<drive-partition-name>
2023-09-06 12:37:57 -07:00
Shubhendu
bfddbb8b40 Embed file in ZIP with custom permissions (#17954)
This change enables embedding files in ZIP with custom permissions.
Also uses default creds for starting MinIO based on inspect data.

Signed-off-by: Shubhendu Ram Tripathi <shubhendu@minio.io>
2023-09-06 09:24:01 -07:00
Poorna
13a2dc8485 replication resync: avoid blocking on results channel. (#17981)
continues fix in #17775
2023-09-05 20:22:39 -07:00
Harshavardhana
1e51424e8a use syscall.Rename() directly instead of os.Rename() (#17982) 2023-09-05 20:22:23 -07:00
Harshavardhana
5b114b43f7 refactor bandwidth throttling for replication target (#17980)
This refactor is to allow using the bandwidth throttling
for other purposes.
2023-09-05 20:21:59 -07:00
Poorna
812f5a02d7 metrics: fix panic in replication stats reporting (#17979) 2023-09-05 10:26:18 -07:00
Minio Trusted
19f70dbfbf Update yaml files to latest version RELEASE.2023-09-04T19-57-37Z 2023-09-04 21:20:49 +00:00
Aditya Manthramurthy
1c99fb106c Update to minio/pkg/v2 (#17967) 2023-09-04 12:57:37 -07:00
Krishnan Parthasarathi
71c32e9b48 Return successorModTime in quorum when available (#17925) 2023-09-04 08:24:17 -07:00
Harshavardhana
380a59520b add missing testdata for benchmarking 2023-09-02 14:40:38 -07:00
Harshavardhana
3995355150 avoid repeated large allocations for large parts (#17968)
Objects with 10,000 parts (and there can be many of them) can
cause a large memory spike which can potentially
lead to OOM due to lack of GC.

With the previous PR (#17963) reducing the memory usage
significantly, this PR reduces it further by 80% under
repeated calls.

The scanner sub-system has no use for the slice of Parts();
it is better left empty.

```
benchmark                            old ns/op     new ns/op     delta
BenchmarkToFileInfo/ToFileInfo-8     295658        188143        -36.36%

benchmark                            old allocs     new allocs     delta
BenchmarkToFileInfo/ToFileInfo-8     61             60             -1.64%

benchmark                            old bytes     new bytes     delta
BenchmarkToFileInfo/ToFileInfo-8     1097210       227255        -79.29%
```
2023-09-02 07:49:24 -07:00
Harshavardhana
8208bcb896 remove all unnecessary logging, logOnce when absolutely needed (#17965) 2023-09-01 16:19:18 -07:00
Poorna
d665e855de replication: remove check for empty version id (#17964) 2023-09-01 13:46:10 -07:00
Harshavardhana
18b3655c99 with xlv2 format we never had to fill in checksumInfo() (#17963)
- this PR avoids sending a large ChecksumInfo slice
  when it's not needed

- also, for a file in XLV2 format there is no reason
  to allocate a Checksum slice while reading
2023-09-01 13:45:58 -07:00
Anis Eleuch
6a8d8f34a5 kafka: Do not require key when sending a message (#17962)
Keys are helpful to ensure strict ordering of messages; however, the
code currently uses a random request ID for every log, so using the request ID
as a Kafka key does not serve any purpose.

This commit removes the usage of the key, also fixing the audit issue from
internal subsystems that do not have a request ID.
2023-09-01 08:37:22 -07:00
Harshavardhana
b1c1f02132 use buffers for pathJoin, to re-use buffers. (#17960)
```
benchmark                        old ns/op     new ns/op     delta
BenchmarkPathJoin/PathJoin-8     79.6          55.3          -30.53%

benchmark                        old allocs     new allocs     delta
BenchmarkPathJoin/PathJoin-8     2              1              -50.00%

benchmark                        old bytes     new bytes     delta
BenchmarkPathJoin/PathJoin-8     48            24            -50.00%
```
2023-08-31 17:58:48 -07:00
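A minimal sketch of the buffer-reuse idea behind this benchmark, joining path components into a pooled byte slice; this is not MinIO's actual pathJoin:

```go
package main

import (
	"fmt"
	"sync"
)

var pathBufPool = sync.Pool{
	New: func() any { b := make([]byte, 0, 256); return &b },
}

// pathJoinBuf joins elements with '/' using a pooled buffer, so the
// only allocation per call is the resulting string itself.
func pathJoinBuf(elems ...string) string {
	bp := pathBufPool.Get().(*[]byte)
	buf := (*bp)[:0]
	for i, e := range elems {
		if i > 0 {
			buf = append(buf, '/')
		}
		buf = append(buf, e...)
	}
	s := string(buf) // single allocation for the result
	*bp = buf
	pathBufPool.Put(bp)
	return s
}

func main() {
	fmt.Println(pathJoinBuf("bucket", "prefix", "object"))
}
```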
Minio Trusted
ea93643e6a Update yaml files to latest version RELEASE.2023-08-31T15-31-16Z 2023-09-01 00:15:57 +00:00
Shubhendu
e47e625f73 Added replication graphs for site replication metrics (#17951)
This dashboard graphs the metrics when site replication is enabled
across MinIO instances.

Signed-off-by: Shubhendu Ram Tripathi <shubhendu@minio.io>
2023-08-31 08:31:16 -07:00
yangw
b13fcaf666 fix: read atomic variable in clientDevNull round trip time (#17955) 2023-08-31 08:31:01 -07:00
Harshavardhana
9458485e43 avoid double logging from healing (#17950) 2023-08-30 18:46:04 -07:00
Shubhendu
0ce9e00ffa Added node scanner and node drives graphs (#17949)
Signed-off-by: Shubhendu Ram Tripathi <shubhendu@minio.io>
2023-08-30 14:01:51 -07:00
Shubhendu
c778c381b5 Added new bucket replication graphs (#17947)
This PR adds new bucket replication graphs for better and granular
monitoring of bucket replication. Also arranged all replication graphs
together.

Signed-off-by: Shubhendu Ram Tripathi <shubhendu@minio.io>
2023-08-30 11:57:41 -07:00
Harshavardhana
0d1fbef751 fix: a possible crash in event target Close() (#17948)
These are possible crashes when the configured
target is still in the init() state and never finished,
yet a config delete was initiated.
2023-08-30 07:27:45 -07:00
Poorna
b48bbe08b2 Add additional info for replication metrics API (#17293)
to track the replication transfer rate across different nodes,
number of active workers in use and in-queue stats to get
an idea of the current workload.

This PR also adds replication metrics to the site replication
status API. For site replication, prometheus metrics are
no longer at the bucket level - but at the cluster level.

Add prometheus metric to track credential errors since uptime
2023-08-30 01:00:59 -07:00
Minio Trusted
cce90cb2b7 Update yaml files to latest version RELEASE.2023-08-29T23-07-35Z 2023-08-30 02:17:56 +00:00
Harshavardhana
07b1281046 add queue_dir to help message for logger/audit targets 2023-08-29 16:07:35 -07:00
Ravind Kumar
3515b99671 Clarify Community Helm Chart (#17944)
There is some consistent confusion between the Community Helm Chart in this repo and the MinIO Kubernetes Operator Helm Chart.

This change seeks to clarify the differences between the two charts and which ones are community maintained vs MinIO maintained.
2023-08-29 11:27:48 -07:00
Krishnan Parthasarathi
6a67c277eb Reuse types for key-value, notification and retry (#17936) 2023-08-29 11:27:23 -07:00
Harshavardhana
1067dd3011 update minio-go v7.0.63 (#17937)
Signed-off-by: Harshavardhana <harsha@minio.io>
2023-08-28 20:02:14 -07:00
Harshavardhana
7cafdc0512 fix: skip access checks further for known buckets (#17934) 2023-08-28 15:16:41 -07:00
Harshavardhana
8a57b6bced use renameat2 Linux extension syscall (#17757)
this is a faster and safer alternative
on newer kernel versions.
2023-08-27 09:57:11 -07:00
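A sketch of calling the renameat2(2) Linux extension via golang.org/x/sys/unix, with a fallback for kernels that lack it; the RENAME_NOREPLACE flag shown here is one of the safety gains over plain rename (Linux-only):

```go
package main

import (
	"os"

	"golang.org/x/sys/unix"
)

// renameNoReplace renames src to dst, failing atomically if dst already
// exists instead of silently overwriting it.
func renameNoReplace(src, dst string) error {
	err := unix.Renameat2(unix.AT_FDCWD, src, unix.AT_FDCWD, dst, unix.RENAME_NOREPLACE)
	if err == unix.ENOSYS || err == unix.EINVAL {
		return os.Rename(src, dst) // older kernel/filesystem: fall back
	}
	return err
}

func main() {
	_ = renameNoReplace("/tmp/a", "/tmp/b")
}
```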
Andrea Longo
6f0ed2a091 Add equivalent mc ilm rule commands for some JSON rule examples (#17923) 2023-08-26 00:33:25 -07:00
Krishnan Parthasarathi
53abd25116 Don't log when object to be tiered is not found (#17924) 2023-08-25 23:34:16 -07:00
Harshavardhana
1ea7826c0e do not have to consider replicationTimestamp for healing and quorum (#17922)
replicationTimestamp might differ if there were retries
in replication and the retried attempt overwrote in
quorum, leaving enough shards with a newer timestamp and causing
the existing timestamps on xl.meta to be invalid; we
do not rely on this value for anything external.

This is purely a hint for debugging purposes, but there
is no real value in it; considering the object itself
is intact, we do not have to spend time healing this
situation.

We may consider healing this situation in the future, but
that needs to be decoupled to make sure that we do not
over-calculate how much we have to heal.
2023-08-25 15:31:15 -07:00
Harshavardhana
97f4cf48f8 update wording on PR template 2023-08-25 11:57:37 -07:00
Anis Eleuch
0cde37be50 Reduce the number of calls to import bucket metadata (#17899)
For each bucket, save the bucket metadata 
once, call the site replication hook once
2023-08-25 07:59:16 -07:00
jiuker
6aeca54ece fix: replace context by timeout-context from parent-context when selfSpeedTest (#17906) 2023-08-25 07:58:38 -07:00
Harshavardhana
124e28578c remove strict persistence requirements for List() .metacache objects (#17917)
.metacache objects are transient in nature, and are better left to
use the page cache effectively to avoid using more IOPs on the disks.

This keeps incoming calls from being taxed heavily due to
multiple large batch listings.
2023-08-25 07:58:11 -07:00
Harshavardhana
62c9e500de remove mTime requirement from pre-condition checks (#17916)
Given a versionId, the mtime is always the same; it
can never be different than its original value.

versionIds also do not conflict, since they are UUIDs
and unique practically forever.
2023-08-24 14:33:58 -07:00
jiuker
02cc18ff29 refactor the perf client for TTFB and TotalResponseTime (#17901) 2023-08-24 10:21:08 -07:00
Harshavardhana
ba4566e86d add missing IAM node metrics to cluster and node endpoint (#17908) 2023-08-24 09:26:37 -07:00
Krishnan Parthasarathi
87cb0081ec Retain current and upto NewerNoncurrentVersions versions (#17909)
applyNewerNoncurrentVersionLimit method should pass along versions
unaffected by NewerNoncurrentVersions rule for further ILM evaluation.
2023-08-24 09:26:29 -07:00
Poorna
4a6af93c83 mark replication target offline if network timeouts seen (#17907)
The regular target liveness check every 5 seconds will toggle the state
back once the target returns online.
2023-08-24 09:24:26 -07:00
Minio Trusted
a2f0771fd3 Update yaml files to latest version RELEASE.2023-08-23T10-07-06Z 2023-08-23 23:17:59 +00:00
Harshavardhana
af564b8ba0 allow bootstrap to capture time-spent for each initializers (#17900) 2023-08-23 03:07:06 -07:00
Harshavardhana
adb8be069e tune-kafka targets to ensure timeout triggers on hung brokers (#17898)
Hung brokers can cause slowness in the entire system
when many callers are hung, leading to a large goroutine
build-up.
2023-08-22 20:26:35 -07:00
Klaus Post
7c8746732b Return cancelled storage calls as 499 (#17895)
Make upstream cancels more visible - right now they are just reported as "forbidden".
2023-08-22 11:10:41 -07:00
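A sketch of surfacing client cancellations distinctly: map context.Canceled to nginx's non-standard 499 ("client closed request") instead of a generic error status. The constant and helper are illustrative:

```go
package main

import (
	"context"
	"errors"
	"net/http"
)

const statusClientClosedRequest = 499 // nginx convention, not in net/http

// writeError reports upstream cancellations as 499 so traces show the
// client went away, rather than a misleading generic error.
func writeError(w http.ResponseWriter, err error) {
	if errors.Is(err, context.Canceled) {
		w.WriteHeader(statusClientClosedRequest)
		return
	}
	w.WriteHeader(http.StatusInternalServerError)
}

func main() {}
```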
Klaus Post
f506117edb Reduce memory profiling rate (#17894)
Change profiling from every 4KB to every 128K, reducing the lock contention by a factor of 32.
2023-08-22 07:21:49 -07:00
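The change in miniature: runtime.MemProfileRate controls how often the allocator samples allocations for the heap profile, so sampling every 128 KiB instead of every 4 KiB means 32x fewer samples and proportionally less profiler lock contention:

```go
package main

import "runtime"

func init() {
	// Must be set as early as possible, before the allocations it
	// should govern. (Modern Go defaults to 512 KiB.)
	runtime.MemProfileRate = 128 << 10 // sample every 128 KiB instead of 4 KiB
}

func main() {}
```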
Harshavardhana
1c5af7c31a serialize queueMRFHeal(), add timeouts and avoid normal build-ups (#17886)
we expect a certain level of IOPs and latency so this is okay.

fixes other miscellaneous bugs

- such as hanging on mrfCh <- when the context is canceled
- queuing MRF heal when the context is canceled
- remove unused saveStateCh channel
2023-08-21 16:44:50 -07:00
Harshavardhana
3a0125fa1f remove unexpected logging from peer calls (#17888)
also make sure RequestID is set for system logs
2023-08-21 14:25:24 -07:00
Daniel Valdivia
328cb0a076 Pass environment variable to control session length to console (#17885)
Signed-off-by: Daniel Valdivia <18384552+dvaldivia@users.noreply.github.com>
2023-08-21 11:55:43 -07:00
Shubhendu
c3c8441a1d Corrected the count of buckets and objects graphs (#17883)
In a distributed setup with a load balancer, randomly any server
would report the metrics `minio_cluster_bucket_total` and
`minio_cluster_usage_object_total`, and while graphing them, we should
take the max of reported values.

Signed-off-by: Shubhendu Ram Tripathi <shubhendu@minio.io>
2023-08-21 09:04:38 -07:00
jiuker
fa2a8d7209 fix: drain the req.body into io.Discard correctly (#17881) 2023-08-21 01:09:07 -07:00
jiuker
e3ea97c964 fix: replace req context by locker context (#17880) 2023-08-19 22:09:07 -07:00
Mathieu Parent
7219ae530e helm: allow to configure statement policy effect (#17700)
Signed-off-by: Mathieu Parent <mathieu.parent@insee.fr>
2023-08-19 07:39:11 -07:00
Andreas Auernhammer
8f8f8854f0 update minio/kes-go dep to v0.2.0 (#17850)
This commit updates the minio/kes-go dependency
to v0.2.0 and updates the existing code to work
with the new KES APIs.

The `SetPolicy` handler got removed since it
may not get implemented by KES at all and could
not have been used in the past since stateless KES
is read-only w.r.t. policies and identities.

Signed-off-by: Andreas Auernhammer <hi@aead.dev>
2023-08-19 07:37:53 -07:00
Anis Eleuch
4c6869cd9a ilm: Fix cleaning non current null versions (#17876) 2023-08-18 12:55:47 -07:00
Harshavardhana
bc7c0d8624 if object is a delete marker it must skip tags filter in ILM (#17861) 2023-08-18 09:36:23 -07:00
Harshavardhana
11dfc817f3 do not log client canceled events (#17838) 2023-08-17 14:53:43 -07:00
Harshavardhana
dde1a12819 fix: validate incoming uploadID to be base64 encoded (#17865)
Bonus fixes include

- no need to write the final xl.meta; (renameData) does this
  already, saving some IOPs.

- make sure to purge the multipart directory properly using
  a recursive delete; otherwise this can easily pile up,
  leaving it to the stale-uploads cleanup.

fixes #17863
2023-08-17 09:37:55 -07:00
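A sketch of such a validation step: reject an uploadID that does not decode as base64 before it is ever used to form an on-disk path; the encoding variant and error value are assumptions:

```go
package main

import (
	"encoding/base64"
	"errors"
	"fmt"
)

var errInvalidUploadID = errors.New("malformed upload id")

// checkUploadID rejects IDs that are not valid base64, so crafted
// values like "../.." never reach the filesystem layer.
func checkUploadID(uploadID string) error {
	if _, err := base64.RawURLEncoding.DecodeString(uploadID); err != nil {
		return errInvalidUploadID
	}
	return nil
}

func main() {
	fmt.Println(checkUploadID("../../etc")) // malformed upload id
	fmt.Println(checkUploadID(base64.RawURLEncoding.EncodeToString([]byte("bucket.object.uuid")))) // nil
}
```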
Minio Trusted
065fd094d1 Update yaml files to latest version RELEASE.2023-08-16T20-17-30Z 2023-08-16 21:22:41 +00:00
Harshavardhana
d09351bb10 update console release to v0.37.0
Signed-off-by: Harshavardhana <harsha@minio.io>
2023-08-16 13:17:30 -07:00
bh4t
25d38e030b Minor change to text under license page (#17864)
Made minor change to move the text and link 
regarding license details.
2023-08-16 11:28:34 -07:00
Harshavardhana
9ebd10d3f4 Revert "Include SuccessorModTime for FileInfo quorum (#17732)" (#17860)
This reverts commit bf3901342c.

This is to fix a regression caused when there are inconsistent
versions, but one version is in quorum. SuccessorModTime issue
must be fixed differently.
2023-08-16 07:51:33 -07:00
Harshavardhana
8a9b886011 update grafana dashboard with disk -> drive rename (#17857) 2023-08-15 16:04:20 -07:00
Cesar Celis Hernandez
21f0d6b549 Removing deprecated var from minio helm (#17856) 2023-08-15 13:33:55 -07:00
Harshavardhana
3ba927edae fix: batch status reporting after complete (#17852)
Batch status can perpetually wait after completion
due to a race between MetricsHandler() returning
the active metrics at 1-second intervals and the deletion
of metrics after job completion.

This PR ensures that we keep the 'status' around
for a while, i.e. up to 24 hours for all the batch jobs.
2023-08-15 12:22:30 -07:00
Harshavardhana
c4ca0a5a57 add two more drive metrics when metrics is available (#17854) 2023-08-15 10:55:47 -07:00
Klaus Post
406ea4f281 Fix distributed listing not able to resume (#17855)
Two fields in lifecycles made GOB encoding consistently fail with `gob: type lifecycle.Prefix has no exported fields`.

This meant that in distributed systems listings would never be able to continue and would restart on every call.

Fix issues and be sure to log these errors at least once per bucket. We may see some connectivity errors here, but we shouldn't hide them.
2023-08-15 07:45:25 -07:00
Harshavardhana
64aa7feabd allow specifying lower disks for Walk() (#17829)
useful when you may want Walk() with
reduced quorum requirements.
2023-08-14 21:32:39 -07:00
Poorna
875f4076ec site replication: avoid retries when peer is offline (#17853) 2023-08-14 21:31:41 -07:00
Harshavardhana
4643efe6be fix: add deadline worker pattern for local disk removers (#17845) 2023-08-14 12:28:13 -07:00
Harshavardhana
b760137e1d fix: add proxyByNode for batch jobs as part of their jobId (#17844) 2023-08-11 13:12:35 -07:00
Harshavardhana
5f56f441bf fix: apply common notification code with content-type (#17843) 2023-08-11 11:34:43 -07:00
Klaus Post
96a22bfcbb fix: wrapped io.EOF during ListObjects() (#17842)
When listing getObjectFileInfo can return `io.EOF` if file is being written.

When we wrap the error it will *not* retry upstream, since `io.EOF` is a valid return value.

Allow one retry before returning errors and canceling the listing.
2023-08-11 09:47:16 -07:00
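A minimal sketch of the retry-once behavior described above, treating io.EOF from a file that is still being written as transient on the first attempt:

```go
package main

import (
	"errors"
	"fmt"
	"io"
)

func getObjectFileInfo() error { return io.EOF } // stand-in for the real call

// listEntry retries once on io.EOF (the file may still be being
// written); only afterwards is the error wrapped and surfaced.
func listEntry() error {
	for retries := 0; ; retries++ {
		err := getObjectFileInfo()
		if err == nil {
			return nil
		}
		if errors.Is(err, io.EOF) && retries < 1 {
			continue // transient: retry once before giving up
		}
		return fmt.Errorf("listing failed: %w", err)
	}
}

func main() { fmt.Println(listEntry()) }
```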
Harshavardhana
6c59b33fb1 add community contribution credits and update PR template (#17840) 2023-08-10 22:08:38 -07:00
Poorna
dfaf735073 replication: fix queuing of large uploads (#17831)
Fixes regression from #17687
2023-08-10 15:48:42 -07:00
Harshavardhana
0d2b7bf94d upgrade console dependency to v0.36.0 (#17839) 2023-08-10 15:48:11 -07:00
Anis Eleuch
7fcfde7f07 s3: Pick a pool with >85% if all other pools are in suspended state (#17826) 2023-08-10 11:06:31 -07:00
jiuker
b1391d1991 feat: support perf client to show TX from client to server (#17718) 2023-08-10 07:14:46 -07:00
Harshavardhana
49c8e16410 update CI/CD to go1.21 (#17828) 2023-08-10 07:13:58 -07:00
Minio Trusted
0e93681589 Update yaml files to latest version RELEASE.2023-08-09T23-30-22Z 2023-08-10 00:09:18 +00:00
Harshavardhana
eb55034dfe optimize deletePrefix, use direct set location via object name (#17827)
Instead of fanning out the calls for an object force delete,
we can assume the set location and not do fan-out calls.

Co-authored-by: Krishnan Parthasarathi <krisis@users.noreply.github.com>
2023-08-09 16:30:22 -07:00
Harshavardhana
c45bc32d98 skip disks under scanning when healing disks (#17822)
Bonus:

- avoid DiskInfo() calls when blocks are missing;
  instead heal the object using an MRF operation.

- change the max_sleep to 250ms; beyond that we will
  not stop healing.
2023-08-09 12:51:47 -07:00
Harshavardhana
6e860b6dc5 count all versions as part of DeleteAllVersionsAction (#17821) 2023-08-09 08:55:19 -07:00
Harshavardhana
b732a673dc reduce logging in bucket replication in retry scenarios (#17820) 2023-08-08 13:27:40 -07:00
Shubhendu
b6b6d6e8d8 Removed replication dashboard (#17815)
As all replication metrics are moved at bucket level, all replication
graphs as well are added under minio-bucket.json. Removing the independent
replication dashboard.

Signed-off-by: Shubhendu Ram Tripathi <shubhendu@minio.io>
2023-08-08 08:13:45 -07:00
Yang Wu
23e4895dfc Create metrics slice when necessary (#17809) 2023-08-07 02:21:22 -07:00
Harshavardhana
8666c55ca6 fix: do not use PrefixEnabled() logic to ignore valid objects (#17677)
Valid objects with valid replication metadata written before the
Prefix was disabled must still have their older metadata honored.

Ignoring them can lead to unexpected results, so always allow
them during the READ phase.
2023-08-05 13:56:01 -07:00
Anis Eleuch
a3f00c5d5e batch: Strict unmarshal yaml document to avoid user made typos (#17808)
// UnmarshalStrict is like Unmarshal except that any fields that are found
// in the data that do not have corresponding struct members, or mapping
// keys that are duplicates, will result in
// an error.
2023-08-05 13:51:48 -07:00
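An example of the strict decoding quoted above, using gopkg.in/yaml.v2's UnmarshalStrict so that an unknown key (e.g. a typo) fails instead of being silently dropped; the config struct is illustrative:

```go
package main

import (
	"fmt"

	"gopkg.in/yaml.v2"
)

type jobConfig struct {
	Bucket string `yaml:"bucket"`
	Prefix string `yaml:"prefix"`
}

func main() {
	doc := []byte("bucket: mybucket\nprefx: logs/\n") // note the typo: prefx
	var cfg jobConfig
	if err := yaml.UnmarshalStrict(doc, &cfg); err != nil {
		fmt.Println("rejected:", err) // strict mode flags the unknown field
	}
}
```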
Poorna
26c23b30f4 replication: set context timeout for NewMultipartUpload calls (#17807) 2023-08-05 12:27:07 -07:00
Anis Eleuch
a436fd513b track client disconnections properly for all ListObjects calls (#17804)
Currently ListObjects* calls were returning 200 OK for timed-out clients;
this makes debugging via `mc admin trace` very hard.
2023-08-04 15:57:27 -07:00
Minio Trusted
3bc34ffd94 Update yaml files to latest version RELEASE.2023-08-04T17-40-21Z 2023-08-04 19:12:46 +00:00
Harshavardhana
533cd8d6df fix: batch replication pull must preserve versionID (#17805)
batch replication pull must preserve versionID regardless
of destination bucket versioning configuration.

This is similar to the issue with decommissioning and rebalancing
2023-08-04 12:09:10 -07:00
Harshavardhana
cb089dcb52 error out by default beyond 10000 versions per object (#17803)
```
You've exceeded the limit on the number of versions you can create on this object
```
2023-08-04 10:40:21 -07:00
Harshavardhana
e0329cfdbb update console v0.35.1 2023-08-03 20:25:21 -07:00
Harshavardhana
239ccc9c40 fix: crash in globalTierJournal when TierConfig is not initialized (#17791) 2023-08-03 14:16:15 -07:00
Poorna
b762fbaf21 sts: validate if iam subsystem initialized in handlers (#17796) 2023-08-03 13:24:25 -07:00
Praveen raj Mani
0285df5a02 fix: prioritize audit_webhook and logger_webhook ENVs over the config KVS (#17783) 2023-08-03 02:47:07 -07:00
Harshavardhana
45fb375c41 allow healing to prefer local disks over remote (#17788) 2023-08-03 02:18:18 -07:00
Harshavardhana
4a4950fe41 fix: honor requested allow origin settings properly (#17789)
fixes #17778
2023-08-02 20:41:21 -07:00
Anis Eleuch
1664fd8bb1 Avoid logging errors twice during transitioned objects expiration (#17782) 2023-08-02 09:06:03 -07:00
Harshavardhana
21cdd2bf5d avoid overwriting metrics on success, save it in defer (#17780) 2023-08-01 22:19:56 -07:00
Harshavardhana
0153f96a20 add deadlines for readMetadata() in listing (#17776)
Bonus: also skip spending time looking for xl.json

- Listing()
- Delete()
2023-08-01 21:52:31 -07:00
Harshavardhana
a7a7533190 add new errors for Disks with timeouts (#17770) 2023-08-01 12:47:50 -07:00
Poorna
311380f8cb replication resync: fix queueing (#17775)
Assign resync of all versions of an object to the same worker to avoid locking
contention. Fixes the parallel resync implementation in #16707.
2023-08-01 11:51:15 -07:00
Harshavardhana
b0f0e53bba fix: make sure to correctly initialize health checks (#17765)
Health checks were missing for replaced drives, since:

- HealFormat() would replace the drives without a health check
- disconnected drives lose their health checks when they reconnect via
  connectEndpoint(); the loop also loses health checks for local disks.
  Merge these into a single code path.
- beyond this, separate the cleanup and health-check variables to avoid
  overloading them with similar requirements.
- also ensure that we compete via a context selector for disk monitoring,
  such that canceled disks don't linger around waiting for
  the ticker to trigger.
- allow disabling active monitoring.
2023-08-01 10:54:26 -07:00
Klaus Post
004f1e2f66 Fix trailing header signature mismatch (#17774)
Seems like clients may omit a newline at the end of the trailer chunk. Each header should end with a newline. Add that if missing.

Fixes #17662
2023-08-01 08:45:57 -07:00
Harshavardhana
2fa561f22e do not crash on invalid metric values (#17764)
```
minio[1032735]: panic: label value "\xc0.\xc0." is not valid UTF-8
minio[1032735]: goroutine 1781101 [running]:
minio[1032735]: github.com/prometheus/client_golang/prometheus.MustNewConstMetric(...)
```

log such errors for investigation
2023-08-01 00:55:39 -07:00
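A hedged sketch of the guard: validate label values up front and use the error-returning constructor, so MustNewConstMetric can never panic; the wrapper name is an assumption:

```
import (
	"fmt"
	"unicode/utf8"

	"github.com/prometheus/client_golang/prometheus"
)

// safeConstMetric rejects non-UTF-8 label values and otherwise defers to
// NewConstMetric, which returns an error instead of panicking.
func safeConstMetric(desc *prometheus.Desc, v float64, labels ...string) (prometheus.Metric, error) {
	for _, l := range labels {
		if !utf8.ValidString(l) {
			return nil, fmt.Errorf("invalid UTF-8 label value %q", l)
		}
	}
	return prometheus.NewConstMetric(desc, prometheus.GaugeValue, v, labels...)
}
```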
Harshavardhana
81be718674 fix: optimize DiskInfo() call avoid metrics when not needed (#17763) 2023-07-31 15:20:48 -07:00
Rajat Shinde
8162fd1e20 fix: couple of typos in README.md (#17754) 2023-07-31 11:17:13 -07:00
Sho Ce
49a1e2f98e update-notifier.go: misleading version age message (#17750) 2023-07-31 08:36:19 -07:00
Klaus Post
684c46369c Send events for extracted objects (#17760)
Fixes #17759
2023-07-31 08:33:51 -07:00
Martin Zruban
715c9e3ca9 fix: allow mc executable inside minio containers (#17755) 2023-07-31 00:13:41 -07:00
Harshavardhana
73edd5b8fd introduce 'mc admin config set alias/ api odirect=on' (#17753)
change disable_odirect=off -> odirect=on to make it
easier to understand, instead of using a double
negative.
2023-07-31 00:12:53 -07:00
Harshavardhana
5e5bdf5432 capture total errors data availability and any timeout errors (#17748) 2023-07-29 23:26:26 -07:00
Harshavardhana
48a3e9bc82 update NTP package to fix ipv6 bug (#17752) 2023-07-29 17:43:50 -07:00
Harshavardhana
f13cfcb83e allow disabling O_DIRECT for write ops (#17751)
on really slow systems, O_DIRECT simply kills the drives
allow for a way to disable them.
2023-07-29 15:17:56 -07:00
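As a sketch of what such a toggle can look like at the open(2) level (Linux-only; the config plumbing and function name are assumptions):

```
import (
	"os"

	"golang.org/x/sys/unix"
)

// openForWrite adds O_DIRECT only when the odirect setting is on;
// on very slow drives the flag is best left disabled.
func openForWrite(path string, odirect bool) (*os.File, error) {
	flags := os.O_CREATE | os.O_WRONLY
	if odirect {
		flags |= unix.O_DIRECT // Linux-only flag
	}
	return os.OpenFile(path, flags, 0o644)
}
```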
Anis Eleuch
9c0e8cd15b logger: Avoid slow calls in http logger Send() function (#17747)
Send() is synchronous and can affect the latency of S3 requests when the
logger buffer is full.

Avoid checking if the HTTP target is online or not and increase the
workers anyway since the buffer is already full.

Also, avoid logs flooding when the audit target is down.
2023-07-29 12:49:18 -07:00
Harshavardhana
ad2a70ba06 update console v0.34.0
Signed-off-by: Harshavardhana <harsha@minio.io>
2023-07-28 17:24:05 -07:00
Harshavardhana
731e03fe5a add ReadFileStream deadline for disk call (#17745)
timeout the reader side if hung via disk max timeout
2023-07-28 15:37:53 -07:00
jiuker
f9d029c8fa fix: prevCertificate save not correct in loop (#17744)
fix certificates not being saved correctly in the loop

Co-authored-by: guozhi.li <guozhi.li@daocloud.io>
2023-07-28 12:57:49 -07:00
Anis Eleuch
7057d00a28 s3: Return invalid bucket name the first thing in all S3 calls (#17742) 2023-07-28 10:49:20 -07:00
Harshavardhana
114fab4c70 export cluster health as prometheus metrics (#17741) 2023-07-28 01:16:53 -07:00
Harshavardhana
c2edbfae55 update all deps and add credits (#17740) 2023-07-27 12:43:25 -07:00
ruspaul013
a92cb66468 Get the signed headers in the order they were signed (#17690)
use pSignValues to get signed headers in order
2023-07-27 11:45:30 -07:00
ruspaul013
535f97ba61 check if metadata headers/url values are equal with signed headers (#17737) 2023-07-27 11:44:56 -07:00
drivebyer
14ebd82dbd fix: missing disk metrics when query metric api from peer (#17738) 2023-07-27 11:44:13 -07:00
Klaus Post
aea7b08a47 Update nats server dependency (#17734)
I don't see why we shouldn't.

Fixes #17730
2023-07-27 07:35:52 -07:00
Harshavardhana
47dcfcbdd4 introduce deadlines on READ operations (#17724) 2023-07-27 07:33:05 -07:00
Krishnan Parthasarathi
bf3901342c Include SuccessorModTime for FileInfo quorum (#17732) 2023-07-26 17:04:16 -07:00
Shubhendu
e1731d9403 Added bucket specific grafana dashboard (#17727)
Signed-off-by: Shubhendu Ram Tripathi <shubhendu@minio.io>
2023-07-26 15:10:11 -07:00
Harshavardhana
b28bcad11b avoid Access() calls on known bucket paths (#17719) 2023-07-26 11:31:40 -07:00
Harshavardhana
a7c71e4c6b protect disk monitoring to avoid busy loop configuration (#17723) 2023-07-25 20:02:22 -07:00
Poorna
1a42693d68 replication: limit larger uploads to a subset of workers (#17687)
Limit large uploads (> 128MiB)  to a max of 10 workers, intent is to avoid
larger uploads from using all replication bandwidth, giving room for smaller
uploads to sync faster.
2023-07-25 20:02:02 -07:00
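A minimal sketch of the routing rule, with hypothetical names and channels standing in for the two worker pools:

```
type replicateTask struct{ size int64 }

const (
	largeObjectSize = 128 << 20 // 128 MiB threshold from the change above
	maxLargeWorkers = 10        // cap so large uploads cannot hog all workers
)

// queueTask sends uploads above the threshold to a pool drained by at
// most maxLargeWorkers goroutines, leaving the rest for small objects.
func queueTask(t replicateTask, small, large chan replicateTask) {
	if t.size > largeObjectSize {
		large <- t
		return
	}
	small <- t
}
```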
Harshavardhana
e7b60c4d65 Add slow drive timeouts to match with active disk monitoring (#17701)
allow active disk-monitoring to be configurable, and use
these add deadlines in various call layers for various
syscalls.
2023-07-25 16:58:31 -07:00
Poorna
f95129894d Use decrypted object size while computing object size summary (#17717)
Corrects an issue with encrypted versioned objects being reported under
`unversioned` bin in the object version histogram
2023-07-24 17:13:25 -07:00
Harshavardhana
c32c71c836 allow DNS cache TTL to be configurable (#17709)
this is added for now as a hidden variable
2023-07-24 15:13:35 -07:00
Harshavardhana
14e1ace552 remove serializing WalkDir() across all buckets/prefixes on SSDs (#17707)
slower drives get knocked off via active monitoring because
they are too slow; we do not need to block calls arbitrarily.

Serializing adds latency to already slow calls; remove
it for SSDs/NVMEs.

Also, add a selection with context when writing to `out <-`
channel, to avoid any potential blocks.
2023-07-24 09:30:19 -07:00
drivebyer
a7fb3a3853 fix: Create metrics slice when necessary in getCacheMetrics() (#17711) 2023-07-24 08:40:21 -07:00
Klaus Post
2da4bd5f1a Revert "don't error when asked for 0-based range on empty objects (#17708) (#17713)
Revert "don't error when asked for 0-based range on empty objects (#17708)"

This reverts commit 7e76d66184.

There is no valid way to specify offsets in a 0-byte file. Blame it on the [RFC](https://datatracker.ietf.org/doc/html/rfc7233#section-4.4)

> The 416 (Range Not Satisfiable) status code indicates that none of the ranges in the 
> request's Range header field (Section 3.1) overlap the current extent of the selected resource...

A request for "bytes=0-" is a request for the first byte of a resource. If the resource is 0-length, 
the range [0,0] does not overlap the resource content and the server responds with an error.
2023-07-24 07:56:28 -07:00
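In code, the rule the revert restores reduces to a simple overlap test (a sketch, not MinIO's actual range parser):

```
// rangeSatisfiable reports whether a range starting at firstByte overlaps
// a resource of the given size; for a 0-byte object it is never true, so
// "bytes=0-" must be answered with 416 Range Not Satisfiable.
func rangeSatisfiable(firstByte, size int64) bool {
	return firstByte < size
}
```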
flisk
7e76d66184 don't error when asked for 0-based range on empty objects (#17708)
In a reverse proxying setup, a proxy in front of MinIO may attempt to
request objects in slices for enhanced cache efficiency. Since such a
a proxy cannot have prior knowledge of how large a requested resource is,
it usually sends a header of the form:

        Range: 0-$slice_size

... and, depending on the size of the resource, expects either:

- an empty response, if $resource_size == 0
- a full response, if $resource_size <= $slice_size
- a partial response, if $resource_size > $slice_size

Prior to this change, MinIO would respond 416 Range Not Satisfiable if a
client tried to request a range on an empty resource. This behavior is
technically consistent with RFC9110[1] – However, it renders sliced
reverse proxying, such as implemented in Nginx, broken in the case of
empty files. Nginx itself seems to break this convention to enable
"useful" responses in these cases, and MinIO should probably do that
too.

[1]: https://www.rfc-editor.org/rfc/rfc9110#byte.ranges
2023-07-23 00:10:03 -07:00
Harshavardhana
7764f4a8e3 return tags as part of Head/Get calls (#17635)
AWS S3 only returns the tag count; along
with that, we must return the tags as well
to avoid another metadata call to the
server.
2023-07-22 07:19:43 -07:00
Harshavardhana
e1094dde08 update MinIO replication dashboard with latest metrics 2023-07-21 17:30:04 -07:00
Minio Trusted
4894c67196 Update yaml files to latest version RELEASE.2023-07-21T21-12-44Z 2023-07-21 22:28:35 +00:00
Doom And Love
d004c45386 grafana-dashboard: Update scrape_jobs variable to be single select (#17696)
Set `includeAll` and `mult` to be false since this dashboard only works with a single value being selected
2023-07-21 14:12:44 -07:00
Kaan Kabalak
6624f970c0 Fix spelling of 'already' across repository (#17703) 2023-07-21 08:45:08 -07:00
Harshavardhana
de684dc122 update console release v0.33.0
Signed-off-by: Harshavardhana <harsha@minio.io>
2023-07-21 01:20:13 -07:00
Harshavardhana
331bdc2245 fix: remove CompleteMultipartUpload() 200 OK response for blocking calls (#17699)
sending a whitespace character with CompleteMultipartUpload()
with 200 OK was an AWS S3 compatible implementation detail,
and it was expected that client SDKs must look for both
successful XML as well as error XML in a 200 OK response.

But this is not useful anymore on MinIO, since we do not
have any large delayed coalescing of parts anymore.
2023-07-20 22:14:38 -07:00
Harshavardhana
e12ab486a2 avoid using os.Getenv for internal code, use env.Get() instead (#17688) 2023-07-20 07:52:49 -07:00
Krishnan Parthasarathi
9eeee92d36 Add deletemarker_total metric (#17689) 2023-07-20 07:52:32 -07:00
Anis Eleuch
756d6aa729 fix: report correct pool/set/disk indexes for offline disks (#17695) 2023-07-20 07:48:21 -07:00
Harshavardhana
bddd53d6d2 fix: retry listing in decommissioning if it fails perpetually (#17682) 2023-07-19 13:09:37 -07:00
Harshavardhana
c0a5bdaed9 update grafana dashboard JSON with the new metrics (#17683) 2023-07-19 08:16:04 -07:00
jiuker
a99cd825ab fix: byHost realTime metrics API (#17681) 2023-07-18 23:50:30 -07:00
Harshavardhana
6426b74770 move bucket centric metrics to /minio/v2/metrics/bucket handlers (#17663)
users/customers no longer have a reasonable number of buckets;
this is why we must avoid overpopulating cluster endpoints and
instead move bucket monitoring to a separate endpoint.

some of this is a breaking change for a couple of metrics, but
it is imperative that we do it to improve the responsiveness of
our Prometheus cluster endpoint.

Bonus: Added new cluster metrics for usage, objects and histograms
2023-07-18 22:25:12 -07:00
Harshavardhana
4f257bf1e6 pick internode interface properly via globalLocalNodeName (#17680)
current code will not pick the right interface name
if --address or --interface is not provided.
2023-07-18 19:18:11 -07:00
Minio Trusted
73a056999c Update yaml files to latest version RELEASE.2023-07-18T17-49-40Z 2023-07-18 21:44:57 +00:00
Krishnan Parthasarathi
0120ff93bc admin-info: add DeleteMarkers count (#17659) 2023-07-18 10:49:40 -07:00
Anis Eleuch
49638fa533 s3: Delete Bucket should not recreate bucket if it does not exist (#17676)
Also return Bucket Not Found error in the same use case.
2023-07-18 09:32:19 -07:00
Harshavardhana
76510dac8a upgrade all deps to their latest releases (#17671) 2023-07-17 21:12:48 -07:00
Shubhendu
7a3a7b19e5 Added a start script to inspect command output (#17591)
Using this script, post decryption we should be able to bring up the
MinIO instance with the same configuration.

Signed-off-by: Shubhendu Ram Tripathi <shubhendu@minio.io>
2023-07-17 14:15:28 -07:00
Harshavardhana
24e86d0c59 avoid passing around poolIdx, setIdx instead pass the relevant disks (#17660) 2023-07-17 09:52:05 -07:00
drivebyer
9b5c2c386a fix: return error when requested interface has no stats available (#17666) 2023-07-17 01:14:01 -07:00
jiuker
d118031ed6 fix: when Origin: null is set return back '*' for allow origins (#17651) 2023-07-15 12:15:06 -07:00
Anis Eleuch
341a89c00d return a descriptive error when loading any IAM item fails (#17654)
Sometimes IAM fails to load certain items, which could be a user, 
a service account or a policy but with not enough information for 
us to debug.

This commit will create a more descriptive error to make it easier to
debug in such situations.
2023-07-14 20:17:14 -07:00
Anis Eleuch
df29d25e6b return different status code for internode communication (#17655)
mc admin trace -a will be able to quickly show a
401 Unauthorized header to pinpoint trivial issues
between nodes, such as wrong root
credentials and skewed time.
2023-07-14 18:34:55 -07:00
Harshavardhana
3e196fa7b3 fix: ILM newer noncurrent version limit must return correct versions (#17652)
objects/versions that are not expired via NewerNoncurrentVersions
must be properly returned to be applied under further ILM actions.

this would cause legitimately expired objects to be missed
from expiration.
2023-07-14 16:42:35 -07:00
drivebyer
04c792476f fix: provide a possible slice cap for heal failed metrics items (#17647)
Signed-off-by: Wu <yang.wu@daocloud.io>
2023-07-14 11:02:45 -07:00
Harshavardhana
005a4a275a add more bootstrap messages to provide latency (#17650)
- simplify refreshing bucket metadata; make wait() depend on
  how fast the bucket metadata can load.

- simplify resync to start resync in single pass.
2023-07-14 04:00:29 -07:00
Harshavardhana
bdddf597f6 shuffle buckets randomly before being scanned (#17644)
this randomness is needed to avoid scanning
the same buckets across different erasure sets,
in the same order.

allow random buckets to be scanned instead
allowing a wider spread of ILM, replication
checks.

Additionally do not loop over twice to fill
the channel, fill the channel regardless of
having bucket new or old.
2023-07-14 02:25:40 -07:00
Aditya Manthramurthy
bb6921bf9c Send AuditLog via new middleware fn for admin APIs (#17632)
A new middleware function is added for admin handlers, including options
for modifying certain behaviors. This admin middleware:

- sets the handler context via reflection in the request and sends AuditLog
- checks for object API availability (skipping it if a flag is passed)
- enables gzip compression (skipping it if a flag is passed)
- enables header tracing (adding body tracing if a flag is passed)

While the new function is a middleware, due to the flags used for
conditional behavior modification it is invoked in each route
registration call.

To try to ensure that no regressions are introduced, the following
changes were done mechanically mostly with `sed` and regexp:

- Remove defer logger.AuditLog in admin handlers
- Replace newContext() calls with r.Context()
- Update admin routes registration calls

Bonus: remove unused NetSpeedtestHandler

Since the new adminMiddleware function checks for object layer presence
by default, we need to pass the `noObjLayerFlag` explicitly to admin
handlers that should work even when it is not available. The following
admin handlers do not require it:

- ServerInfoHandler
- StartProfilingHandler
- DownloadProfilingHandler
- ProfileHandler
- SiteReplicationDevNull
- SiteReplicationNetPerf
- TraceHandler

For these handlers adminMiddleware does not check for the object layer
presence (disabled by passing the `noObjLayerFlag`), and for all other
handlers, the pre-check ensures that the handler is not called when the
object layer is not available - the client would get a
ErrServerNotInitialized and can retry later.

This `noObjLayerFlag` is added based on existing behavior for these
handlers only.
2023-07-13 14:52:21 -07:00
Shireesh Anjal
bb63375f1b Do not consider subnet api key as secret (#17643)
As it is required by mc and console to communicate with subnet
2023-07-13 12:24:47 -07:00
Klaus Post
4f89e5bba9 Add active disk health checks (#17539)
Add check every 2 minutes to see if a write+read operation can complete.

If disk is unresponsive for 2 minutes or returns errFaultyDisk, take it offline.
2023-07-13 11:41:55 -07:00
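A hedged sketch of such an active probe loop; the Disk interface and the take-offline policy below are assumptions, not the shipped implementation:

```
import "time"

type Disk interface {
	WriteRead(payload []byte) error // write a token and read it back
}

// monitor probes the drive on every tick; if the write+read round trip
// fails, the drive is taken offline and the loop stops.
func monitor(d Disk, interval time.Duration, offline func()) {
	t := time.NewTicker(interval)
	defer t.Stop()
	for range t.C {
		if err := d.WriteRead([]byte("health-check")); err != nil {
			offline()
			return
		}
	}
}
```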
jiuker
183428db03 feat: Implement 'mc support top net' (#17598) 2023-07-13 11:41:19 -07:00
Shireesh Anjal
fc6d873758 Use os.ReadFile instead of ioutil.ReadFile (#17649)
ioutil.ReadFile is deprecated and also doesn't work with certain kinds
of symlinks.
2023-07-13 09:07:10 -07:00
Poorna
5e2f8d7a42 replication: Simplify mrf requeueing and add backlog handler (#17171)
Simplify MRF queueing and add backlog handler

- Limit re-tries to 3 to avoid repeated re-queueing. Fall offs
to be re-tried when the scanner revisits this object or upon access.

- Change MRF to have each node process only its MRF entries.

- Collect MRF backlog by the node to allow for current backlog visibility
2023-07-12 23:51:33 -07:00
Shubhendu
9b9871cfbb Added endpoint and versions attributes to KMS details (#17350)
Now it would list details of all KMS instances with additional
attributes `endpoint` and `version`. In the case of k8s-based
deployment the list would consist of a single entry.

Signed-off-by: Shubhendu Ram Tripathi <shubhendu@minio.io>
2023-07-12 23:50:38 -07:00
guangwu
f80b6926d3 chore: fix minor issues reported via staticcheck (#17639) 2023-07-12 20:33:11 -07:00
Shubhendu
6dc55fe5ed Corrected the API name for audit logging purpose (#17642)
This records the correct API name, so that any verification of
audit logs to figure out whether required APIs are called the
required number of times would be correct.
Here in this case of policy attached, API `AttachDetachPolicyBuiltin`
would be called with `requestPath` as `/minio/admin/v3/idp/builtin/policy/attach`
and in case of detach policy the value would be `/minio/admin/v3/idp/builtin/policy/detach`

Signed-off-by: Shubhendu Ram Tripathi <shubhendu@minio.io>
2023-07-12 15:38:49 -07:00
Harshavardhana
2d1cda2061 fix: do not os.Exit(1) while writing goroutines during shutdown (#17640)
Also add jitter to the shutdown poll, to verify whether the shutdown
sequence can finish within 500ms; this reduces the overall
time taken during a "restart" of the service.

Provides speedup for `mc admin service restart` during
active I/O, also ensures that systemd doesn't treat the
returned 'error' as a failure, certain configurations in
systemd can cause it to 'auto-restart' the process by-itself
which can interfere with `mc admin service restart`.

It can be observed how now restarting the service is
much snappier.
2023-07-12 07:18:30 -07:00
Harshavardhana
a566bcf613 treat 0-byte objects to honor same quorum as delete marker (#17633)
on unversioned buckets it's possible that 0-byte objects
might lose quorum on flaky systems; allow them to be treated the
same as DELETE markers, since practically speaking they have no
content.
2023-07-11 21:53:49 -07:00
Minio Trusted
f6040dffaf Update yaml files to latest version RELEASE.2023-07-11T21-29-34Z 2023-07-11 23:25:30 +00:00
Klaus Post
9885a0a6af Fix hasSpaceFor in SNSD setup (#17630)
If drive is offline or filled we divide by 0.

Fixes #17629

Bonus: Reject when any valid disk exceeds minimum inode threshold.
2023-07-11 14:29:34 -07:00
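A simplified sketch of the guard, assuming a per-drive free-space slice (names hypothetical):

```
// hasSpaceFor averages free space only across usable drives, so an
// offline or completely filled drive can no longer cause a divide by
// zero in a single-node single-drive setup.
func hasSpaceFor(freePerDrive []int64, want int64) bool {
	var total, usable int64
	for _, free := range freePerDrive {
		if free <= 0 {
			continue // offline or filled drive; skip it
		}
		total += free
		usable++
	}
	return usable > 0 && total/usable > want
}
```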
Kaan Kabalak
f64d62b01d Fix style of logOnceIf calls w/unique identifiers (#17631) 2023-07-11 13:17:45 -07:00
Harshavardhana
82075e8e3a use strconv variants to improve on performance per 'op' (#17626)
```
BenchmarkItoa
BenchmarkItoa-8         	673628088	         1.946 ns/op	       0 B/op	       0 allocs/op
BenchmarkFormatInt
BenchmarkFormatInt-8    	592919769	         2.012 ns/op	       0 B/op	       0 allocs/op
BenchmarkSprint
BenchmarkSprint-8       	26149144	        49.06 ns/op	       2 B/op	       1 allocs/op
BenchmarkSprintBool
BenchmarkSprintBool-8   	26440180	        45.92 ns/op	       4 B/op	       1 allocs/op
BenchmarkFormatBool
BenchmarkFormatBool-8   	1000000000	         0.2558 ns/op	       0 B/op	       0 allocs/op
```
2023-07-11 07:46:58 -07:00
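The pattern behind these numbers, as a small runnable example:

```
package main

import (
	"fmt"
	"strconv"
)

func main() {
	n, ok := 42, true
	// zero-allocation conversions: ~2ns and ~0.26ns per op above
	fmt.Println(strconv.Itoa(n), strconv.FormatBool(ok))
	// the replaced pattern, fmt.Sprint(n) / fmt.Sprint(ok), allocates
	// and costs ~46-49ns per op in the same benchmarks
}
```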
Harshavardhana
5b7c83341b move per bucket metrics to peer location (#17627) 2023-07-11 07:46:24 -07:00
Harshavardhana
8522905d97 update helm linting workflow 2023-07-10 20:11:51 -07:00
Harshavardhana
524ed7ccd0 update go.mod pointing to wrong repo 2023-07-10 20:10:39 -07:00
Poorna
fb49aead9b replication: add validation API (#17520)
To check if replication is set up properly on a bucket.
2023-07-10 20:09:20 -07:00
Aditya Manthramurthy
85f5700e4e fix: missing audit logger call for some admin APIs (#17623) 2023-07-10 16:59:44 -07:00
Aditya Manthramurthy
43b3c093ef Fix: set request id in trace context properly (#17622) 2023-07-10 15:40:44 -07:00
Kaan Kabalak
bd6842d917 Further print log messages once per error (#17618) 2023-07-10 07:59:57 -07:00
Poorna
e8c98c3246 Avoid extra GetObjectInfo call in DeleteObject API (#17599)
Optimize DeleteObject API to avoid extra 
GetObjectInfo call on the replicating side.

For receiving side, it is just a regular
DeleteObject call.

Bonus: Fix a corner case where version purged is 
absent on target (either due to replication not yet
complete or target version already deleted in a
one-way replication or when replication was disabled). 

In such cases, mark version purge complete.
2023-07-10 07:57:56 -07:00
Harshavardhana
dfd7cca0d2 fix: allow cancel of decom only when its in progress (#17607) 2023-07-10 07:55:38 -07:00
Harshavardhana
af3d99e35f helm release v5.0.13
Signed-off-by: Harshavardhana <harsha@minio.io>
2023-07-09 00:13:05 -07:00
Harshavardhana
f6186965c3 honor DeleteAllVersions in list(), head() calls (#17604) 2023-07-08 15:42:10 -07:00
Ian Martin
90c2129f44 Add helm chart linting to CI workflow (#17606) 2023-07-08 15:41:12 -07:00
yaohwu
69e131ee69 Fix helm templating syntax in post-job (#17605) 2023-07-08 12:18:31 -07:00
Harshavardhana
28a01f0320 update missing license header in files (#17603) 2023-07-08 10:42:05 -07:00
Anis Eleuch
6d0bc5ab1e prometheus: Fix internode stats (#17594)
Internode calculation was done inside S3 handlers, fix it by moving it
to internode handlers.

Remove admin stats since it is not used.
2023-07-08 07:35:11 -07:00
Aditya Manthramurthy
7af78af1f0 fix: set request ID in tracing context key (#17602)
Since `addCustomerHeaders` middleware was after the `httpTracer`
middleware, the request ID was not set in the http tracing context. By
reordering these middleware functions, the request ID header becomes
available. We also avoid setting the tracing context key again in
`newContext`.

Bonus: All middleware functions are renamed with a "Middleware" suffix
to avoid confusion with http Handler functions.
2023-07-08 07:31:42 -07:00
Klaus Post
45a717a142 Avoid per request URL parsing (#17593)
Every request does a `url.Parse(c.url.String())` to clone a URL.

The host will also be static, so we rewrite that on creation.
2023-07-07 22:07:30 -07:00
Ian Martin
cb1ec0a0d9 fix: helm templating syntax in post-job (#17600) 2023-07-07 22:06:40 -07:00
Harshavardhana
abb1f22057 Revert "change ttfb_distribution metrics to histogramMetric (#17115)"
This reverts commit 9112ca4e29.
2023-07-07 13:57:37 -07:00
Harshavardhana
73efe436a5 update helm v5.0.12
Signed-off-by: Harshavardhana <harsha@minio.io>
2023-07-07 09:44:16 -07:00
Harshavardhana
f41edb23e2 add variadic delays in peer notification retries (#17592)
just adds more `jitter` in our retries to avoid
burst flooding for peer calls.
2023-07-07 07:47:38 -07:00
Minio Trusted
6335a48a53 Update yaml files to latest version RELEASE.2023-07-07T07-13-57Z 2023-07-07 07:53:18 +00:00
Klaus Post
e20aab25ec Check for progress before we reach the limit (#17552) 2023-07-07 00:13:57 -07:00
Anis Eleuch
66bea3942a CI/CD to stop one node per pool in the two pools mint test (#17518)
This is to make sure that all S3 ops work when there is enough quorum
2023-07-07 00:10:13 -07:00
Harshavardhana
08acd9c43d update all our deps (#17590) 2023-07-06 21:47:46 -07:00
Klaus Post
ff5988f4e0 Reduce allocations (#17584)
* Reduce allocations

* Add stringsHasPrefixFold which can compare string prefixes, while ignoring case and not allocating.
* Reuse all msgp.Readers
* Reuse metadata buffers when not reading data.

* Make type safe. Make buffer 4K instead of 8.

* Unslice
2023-07-06 16:02:08 -07:00
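A plausible shape for stringsHasPrefixFold; the name comes from the commit, while the body here is an assumption built on strings.EqualFold, which compares without allocating:

```
import "strings"

// stringsHasPrefixFold reports whether s begins with prefix, ignoring
// case, without allocating a lowered copy of either string.
func stringsHasPrefixFold(s, prefix string) bool {
	return len(s) >= len(prefix) && strings.EqualFold(s[:len(prefix)], prefix)
}
```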
Harshavardhana
1bf23374a3 do not need to gzip 'mc' in our container (#17586)
fixes #17581
2023-07-06 15:13:17 -07:00
Alex
899b429094 Update Console to v0.32.0 (#17587)
Signed-off-by: Benjamin Perez <benjamin@bexsoft.net>
2023-07-06 15:13:07 -07:00
jiuker
c47ff44f5e fix: disable site network test if site replication is disabled (#17579) 2023-07-06 09:19:14 -07:00
Harshavardhana
8af0773baf remove deprecated Content-Security-Policy (#17580)
https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Content-Security-Policy/block-all-mixed-content
2023-07-06 09:18:38 -07:00
Alex
37cbd114de Updated console to v0.31.0 (#17578) 2023-07-06 00:37:56 -07:00
jiuker
2dbb1cff4a feat: support perf site replication (#17477) 2023-07-05 22:28:26 -07:00
Klaus Post
6efcf9c982 Do lockless last minute latency metrics (#17576)
Collect metrics in one second and accumulate lockless before sending upstream.
2023-07-05 10:40:45 -07:00
Harshavardhana
0bc34952eb fix: under FanOut API avoid repeated md5sum calculation (#17572)
md5sum calculation has a high CPU overhead, avoid calculating
it repeatedly for similar fanOut calls.

To fix following CPU profiler result
```
(pprof) top10
Showing nodes accounting for 678.68s, 84.67% of 801.54s total
Dropped 1072 nodes (cum <= 4.01s)
Showing top 10 nodes out of 156
      flat  flat%   sum%        cum   cum%
   332.54s 41.49% 41.49%    332.54s 41.49%  runtime/internal/syscall.Syscall6
   228.39s 28.49% 69.98%    228.39s 28.49%  crypto/md5.block
    48.07s  6.00% 75.98%     48.07s  6.00%  runtime.memmove
    28.91s  3.61% 79.59%     28.91s  3.61%  github.com/minio/highwayhash.updateAVX2
     8.25s  1.03% 80.61%      8.25s  1.03%  runtime.futex
     8.25s  1.03% 81.64%     10.81s  1.35%  runtime.step
     6.99s  0.87% 82.52%     22.35s  2.79%  runtime.pcvalue
     6.67s  0.83% 83.35%     38.90s  4.85%  runtime.mallocgc
     5.77s  0.72% 84.07%     32.61s  4.07%  runtime.gentraceback
     4.84s   0.6% 84.67%     10.49s  1.31%  runtime.lock2
```
2023-07-05 03:16:05 -07:00
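A sketch of the reuse: hash the payload once, then pass the digest to every fan-out call; the put callback is hypothetical:

```
import (
	"crypto/md5"
	"encoding/hex"
)

// fanOut computes the MD5 a single time and reuses the resulting ETag
// for each target instead of re-running crypto/md5 per call.
func fanOut(payload []byte, targets []string, put func(target, etag string, payload []byte)) {
	sum := md5.Sum(payload)
	etag := hex.EncodeToString(sum[:])
	for _, t := range targets {
		put(t, etag, payload)
	}
}
```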
jiuker
f6b48ed02a fix: create user without policy that is required (#17554)
fixes #17492
2023-07-04 07:39:29 -07:00
Harshavardhana
e37c4efc6e fix: upon DNS refresh() failure use previous values (#17561)
DNS refresh() in the case of MinIO can safely re-use
the previous values on bare-metal setups, since
bare-metal arrangements do not commonly change DNS.

This PR simplifies that; we only ever need DNS caching
on bare-metal setups.

- On containerized setups do not enable DNS
  caching at all, as it may have adverse effects on
  the overall effectiveness of k8s DNS systems.

  k8s DNS systems are dynamic and expect applications
  to avoid managing DNS caching themselves, instead
  provide a cleaner container native caching
  implementations that must be used.

- update IsDocker() detection, including podman runtime

- move to minio/dnscache fork for a simpler package
2023-07-03 12:30:51 -07:00
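A minimal sketch of the bare-metal fallback, assuming a simple last-known-good cache (this is not the minio/dnscache API):

```
import "net"

type dnsCache struct {
	prev map[string][]string // last successfully resolved addresses
}

// refresh returns fresh addresses when the lookup succeeds and otherwise
// re-uses the previous values, since bare-metal DNS rarely changes.
func (c *dnsCache) refresh(host string) []string {
	addrs, err := net.LookupHost(host)
	if err != nil {
		return c.prev[host]
	}
	c.prev[host] = addrs
	return addrs
}
```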
Harshavardhana
22f5bc643c fix: honor older scanner settings only if newer has not changed (#17564) 2023-07-03 12:28:36 -07:00
Anis Eleuch
15fd5ce2fa fix: A typo in per pool make/delete bucket errs calculation (#17553) 2023-07-03 09:47:40 -07:00
Harshavardhana
7f782983ca fix: for FTP server driver allow implicit trust of TLS (#17541)
fixes #17535
2023-06-30 08:04:13 -07:00
Aditya Manthramurthy
9d628346eb fix: service account list for root user (#17547)
Fixes https://github.com/minio/minio/issues/17545
2023-06-30 08:02:12 -07:00
Aditya Manthramurthy
bde533a9c7 fix: OpenID config initialization (#17544)
This is due to a regression in the handling of the enable key in OpenID
configuration.
2023-06-29 23:38:26 -07:00
Minio Trusted
2fcb75d86d Update yaml files to latest version RELEASE.2023-06-29T05-12-28Z 2023-06-29 06:37:32 +00:00
Harshavardhana
aae6846413 feat: allow expiration of all versions via ILM Expiration action (#17521)
The following extension allows users to specify immediate purge of
all versions as soon as the latest version of the object has
expired.

```
<LifecycleConfiguration>
    <Rule>
        <ID>ClassADocRule</ID>
        <Filter>
           <Prefix>classA/</Prefix>
        </Filter>
        <Status>Enabled</Status>
        <Expiration>
             <Days>3650</Days>
	     <ExpiredObjectAllVersions>true</ExpiredObjectAllVersions>
        </Expiration>
    </Rule>
    ...
```
2023-06-28 22:12:28 -07:00
Harshavardhana
5317a0b755 fix: support LDAP settings properly in ftp/sftp (#17536)
Bonus: this PR also adds support for creating
buckets via ftp `mkdir`

fixes #17526
2023-06-28 13:15:21 -07:00
Harshavardhana
73de721a63 fix: handle copyObjectPart encryption properly (#17530)
- look for requested encryption while compressing
  not just via HTTP Headers, but also via multipart
  metadata

- look for SSE-S3 etag decryption not just via HTTP
  Headers, but also via multipart metadata

fixes #17519
2023-06-28 09:43:50 -07:00
Harshavardhana
d2f5c3621f fix: add additional decommission traces for ILM expired content (#17522)
current decommission traces were missing for

- Skipped ILM expired versions
- Skipped single DELETE marked version
- A success or failure in decommissioning DELETE marker
- allow additional info to be shared in DecomStatus() API
2023-06-27 11:59:40 -07:00
Harshavardhana
1818764840 fix: bug in passing Versioned field set for getHealReplicationInfo() (#17498)
Bonus: rejects prefix deletes on object-locked buckets earlier
2023-06-27 09:45:50 -07:00
Harshavardhana
d3e5e607a7 allow site-replication checks to work on non-distributed setups (#17524)
fixes #17523
2023-06-27 09:23:50 -07:00
Shireesh Anjal
c1943ea3af Capture realtime metrics in health report (#17516) 2023-06-27 01:39:18 -07:00
Harshavardhana
2a82c15bf1 update all our deps (#17497) 2023-06-26 15:36:56 -07:00
guangwu
87b6fb37d6 chore: pkg imported more than once (#17444) 2023-06-26 09:21:29 -07:00
Kaan Kabalak
21fbe88e1f Print certain log messages once per error (#17484) 2023-06-24 20:29:13 -07:00
Harshavardhana
1f8b9b4bd5 fix: do not listAndHeal() inline with PutObject() (#17499)
there is a possibility that slow drives can actually add latency
to the overall call, leading to a large spike in latency.

this can happen if there are other parallel listObjects()
calls to the same drive, in turn causing each other to
effectively serialize.

this potentially improves performance and makes PutObject()
also non-blocking.
2023-06-24 19:31:04 -07:00
Minio Trusted
fcbed41cc3 Update yaml files to latest version RELEASE.2023-06-23T20-26-00Z 2023-06-24 07:25:49 +00:00
Klaus Post
216069d0da Remove 'null' version ID from directory object response (#17495)
Fixes #17494

Regression from #17132
2023-06-23 13:26:00 -07:00
Harshavardhana
eefa047974 fix: keep decommission in a go-routine (#17496)
This was removed by mistake in #17491
2023-06-23 12:29:32 -07:00
Anis Eleuch
d8dad5c9ea s3: Make/Delete buckets to use error quorum per pool (#17467) 2023-06-23 11:48:23 -07:00
Klaus Post
bf8a68879c fix: Time ILM Actions for scanner info (#17493)
ILM Actions were not timed fix it.
2023-06-23 07:48:36 -07:00
Aditya Manthramurthy
f3248a4b37 Redact all secrets from config viewing APIs (#17380)
This change adds a `Secret` property to `HelpKV` to identify secrets
like passwords and auth tokens that should not be revealed by the server
in its configuration fetching APIs. Configuration reporting APIs now do
not return secrets.
2023-06-23 07:45:27 -07:00
Harshavardhana
d315d012a4 decom: during multiple pool decom preserve current pool status (#17491)
removal of completed pools must retain pool status of other
pools in draining, to resume any remaining draining operations.
2023-06-23 07:44:18 -07:00
Harshavardhana
bd9bf3693f lambda: negative duration for presigned URL default to 1H (#17489)
fixes a bug where users created with Expiration set to
timeSentinel are not rejected while generating the
presigned URL for lambda processing.
2023-06-23 00:17:24 -07:00
Klaus Post
15daa2e74a tooling: Add xlmeta --combine switch that will combine inline data (#17488)
Will combine or write partial data of each version found in the inspect data.

Example:

```
> xl-meta -export -combine inspect-data.1228fb52.zip

(... metadata json...)
}
Attempting to combine version "994f1113-da94-4be1-8551-9dbc54b204bc".
Read shard 1 Data shards 9 Parity 4 (994f1113-da94-4be1-8551-9dbc54b204bc/shard-01-of-13.data)
Read shard 2 Data shards 9 Parity 4 (994f1113-da94-4be1-8551-9dbc54b204bc/shard-02-of-13.data)
Read shard 3 Data shards 9 Parity 4 (994f1113-da94-4be1-8551-9dbc54b204bc/shard-03-of-13.data)
Read shard 4 Data shards 9 Parity 4 (994f1113-da94-4be1-8551-9dbc54b204bc/shard-04-of-13.data)
Read shard 6 Data shards 9 Parity 4 (994f1113-da94-4be1-8551-9dbc54b204bc/shard-06-of-13.data)
Read shard 7 Data shards 9 Parity 4 (994f1113-da94-4be1-8551-9dbc54b204bc/shard-07-of-13.data)
Read shard 8 Data shards 8 Parity 5 (994f1113-da94-4be1-8551-9dbc54b204bc/shard-08-of-13.data)
Read shard 9 Data shards 8 Parity 5 (994f1113-da94-4be1-8551-9dbc54b204bc/shard-09-of-13.data)
Read shard 10 Data shards 8 Parity 5 (994f1113-da94-4be1-8551-9dbc54b204bc/shard-10-of-13.data)
Read shard 11 Data shards 8 Parity 5 (994f1113-da94-4be1-8551-9dbc54b204bc/shard-11-of-13.data)
Read shard 13 Data shards 8 Parity 5 (994f1113-da94-4be1-8551-9dbc54b204bc/shard-13-of-13.data)
Attempting to reconstruct using parity sets:
* Setup: Data shards: 9 - Parity blocks: 6
Have 6 complete remapped data shards and 6 complete parity shards. Could NOT reconstruct: too few shards given
* Setup: Data shards: 8 - Parity blocks: 5
Have 5 complete remapped data shards and 5 complete parity shards. Could reconstruct completely
0 bytes missing. Truncating 0 from the end.
Wrote output to 994f1113-da94-4be1-8551-9dbc54b204bc.complete
```

So far only inline data, but no real reason that external data can't also be included with some handling of blocks.

Supports only unencrypted data.
2023-06-22 12:41:24 -07:00
Harshavardhana
74759b05a5 make sure to set relevant config entries correctly (#17485)
Bonus: also allow skipping keys properly.
2023-06-22 10:04:02 -07:00
Aditya Manthramurthy
82ce78a17c Fix locking in policy attach API (#17426)
For policy attach/detach API to work correctly the server should hold a
lock before reading existing policy mapping and until after writing the
updated policy mapping. This is fixed in this change.

A site replication bug, where LDAP policy attach/detach were not
correctly propagated is also fixed in this change.

Bonus: Additionally, the server responds with the actual (or net)
changes performed in the attach/detach API call. For e.g. if a user
already has policy A applied, and a call to attach policies A and B is
performed, the server will respond that B was attached successfully.
2023-06-21 22:44:50 -07:00
Harshavardhana
9af6c6ceef under rebalance look for expired versions v/s remaining versions (#17482)
A continuation of PR #17479; the rebalance behavior must
also match the decommission behavior.

Fixes a bug where rebalance would ignore rebalancing object
versions after one of the versions returned "ObjectNotFound"
2023-06-21 13:23:20 -07:00
Harshavardhana
021372cc4c update helm release v5.0.11
Signed-off-by: Harshavardhana <harsha@minio.io>
2023-06-21 12:29:09 -07:00
Praveen raj Mani
b94ab07c2f Honor global root CAs for kafka audit tls (#17481)
honor global root CAs for kafka audit tls
2023-06-21 10:50:40 -07:00
Harshavardhana
7605d07bb2 add support for bucket level request count per API (#17468)
New metrics added to calculate API request count
per bucket, per API.  Captures errors, including
4xx, 5xx HTTP status codes separately.
2023-06-21 09:41:59 -07:00
Harshavardhana
ccc5801112 always look for expired versions v/s remaining versions (#17479)
while decommissioning it can so happen that the non-current
versions are all expired but there is a DEL marker as the
latest version.

For such objects, we should not decommission them; instead,
calculate the remaining versions, and if exactly one version
remains and it is a DEL marker, consider such an object not to
be scheduled for decommissioning.
2023-06-21 08:49:28 -07:00
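The skip rule, as a hedged sketch over simplified version metadata:

```
type objVersion struct {
	expired      bool // would be removed by ILM
	deleteMarker bool
}

// skipDecommission reports whether an object reduces to a lone DEL
// marker once expired versions are excluded; such objects are not
// scheduled for decommissioning.
func skipDecommission(versions []objVersion) bool {
	var remaining []objVersion
	for _, v := range versions {
		if !v.expired {
			remaining = append(remaining, v)
		}
	}
	return len(remaining) == 1 && remaining[0].deleteMarker
}
```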
Praveen raj Mani
7c72b25ef0 Add an option to make bucket notifications synchronous (#17406)
With the current asynchronous behaviour in sending notification events
to the targets, we can't provide guaranteed delivery, as the systems
might restart in between.

For such event-driven use-cases, we can provide an option to enable
synchronous events where the APIs wait until the event is successfully
sent or persisted.

This commit adds 'MINIO_API_SYNC_EVENTS' env which when set to 'on'
will enable sending/persisting events to targets synchronously.
2023-06-20 17:38:59 -07:00
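Enabling it comes down to setting the documented env before starting the server, for example:

```
export MINIO_API_SYNC_EVENTS=on
minio server /data
```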
Harshavardhana
02c2ec3027 skip onlineDisks with parity mismatch (#17478) 2023-06-20 13:18:24 -07:00
Harshavardhana
65c31fab12 fix: do not crash rebalance code instead set the object layer (#17465)
fixes #17421
2023-06-20 09:28:23 -07:00
jiuker
b6b68be052 fix: replication check for duplicate endpoints detection with wrong route (#17474) 2023-06-20 09:27:54 -07:00
Harshavardhana
15911c85f6 safely ignore out of band deletions while decommissioning (#17473) 2023-06-20 08:31:42 -07:00
Aditya Manthramurthy
5a1612fe32 Bump up madmin-go and pkg deps (#17469) 2023-06-19 17:53:08 -07:00
Minio Trusted
bbb7ae156c Update yaml files to latest version RELEASE.2023-06-19T19-52-50Z 2023-06-19 22:53:03 +00:00
Harshavardhana
f9b8d1c699 fix: sio-error test to fail if commands fail (#17466) 2023-06-19 12:52:50 -07:00
Harshavardhana
1443b5927a allow quorum fileInfo to pick same parityBlocks (#17454)
Bonus: allow replication to proceed for 503 errors such as
with error code SlowDownRead
2023-06-18 18:20:15 -07:00
Anis Eleuch
35ef35b5c1 fix an integer divide by zero crash during rebalance (#17455)
A state is updated with a delete marker, which does not have parity or
data blocks defined, which can cause the integer divide by zero panics.

This commit fixes to avoid panics.
2023-06-18 11:14:53 -07:00
Harshavardhana
6806537eb3 event args list for fanOut notification must be sized same (#17450)
without this, the fan-out API can crash if the client cancels
the on-going request.
2023-06-18 07:09:20 -07:00
Harshavardhana
64de61d15d fallback on etags if they match when mtime is not same (#17424)
on "unversioned" buckets there are situations
when successive concurrent I/O can lead to
an inconsistent state() with mtime while the
etag might be the same for the object on disk.

in such a scenario it is possible for us to
allow reading of the object since the etag matches;
and if the etag matches we are guaranteed that we
have enough copies, and the object will be readable
and identical.

This PR allows fallback in such scenarios.
2023-06-17 19:18:20 -07:00
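A sketch of the fallback comparison over simplified metadata; the field names are assumptions:

```
import "time"

type fileMeta struct {
	modTime time.Time
	etag    string
}

// consistent treats two copies as the same object version when mtimes
// agree or, failing that, when both report the same non-empty etag.
func consistent(a, b fileMeta) bool {
	if a.modTime.Equal(b.modTime) {
		return true
	}
	return a.etag != "" && a.etag == b.etag
}
```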
Harshavardhana
22b7c8cd8a upgrade pkg and dperf to latest packages (#17448)
- dperf improvements in benchmarking read and write tests
- upgrade mimedb to use latest content-types
2023-06-17 07:31:36 -07:00
Poorna
c4d0c49a5f ensure metadata updates go to same pool where version exists (#17451)
This PR also returns the replication status in 
proxy calls and defers replication attempt if 
HEAD on object version returned a error different
from NoSuchKey
2023-06-17 07:30:53 -07:00
Minio Trusted
142a5b0dcd Update yaml files to latest version RELEASE.2023-06-16T02-41-06Z 2023-06-16 05:36:19 +00:00
mungo312
25db1e4eca helm: fix permission denied errors in post-job when running as non-root (#17175)
Fixes #17174
2023-06-15 19:41:06 -07:00
Harshavardhana
47a48b6832 do not save any metadata from the headers in tar extract (#17436)
only preserve the same storage-class as the incoming
request; other than that, the rest must be
deduced.
2023-06-15 17:44:07 -07:00
Harshavardhana
e98309eb75 update minio-go/v7 v7.0.57 (#17439) 2023-06-15 16:29:19 -07:00
Alex
87051872a7 Update console to v0.30.0 (#17438) 2023-06-15 15:30:56 -07:00
Anis Eleuch
a2aed12dcd decom: Fix a typo in routing decommissioning requests (#17435)
A specific node should do the decommissioning task; however, routing
the start-decommission request to that node was not working properly.

Co-authored-by: Anis Elleuch <anis@min.io>
2023-06-15 14:54:29 -07:00
Anis Eleuch
d8e6e76e89 site-repl: Better error msg when setting sync in a local cluster (#17407) 2023-06-15 12:44:22 -07:00
Anis Eleuch
8c33fdf5f4 s3-check-md5: Add --modified-since flag to skip some objects (#17410) 2023-06-15 12:44:09 -07:00
Harshavardhana
ad4e511026 do not save plain-text ETag when encryption is requested (#17427)
fixes an issue under bucket replication could cause
ETags for replicated SSE-S3 single part PUT objects,
to fail as we would attempt a decryption while listing,
or stat() operation.
2023-06-15 12:43:26 -07:00
Klaus Post
4a562d6732 fix: fanout error response - error must be string for marshaling (#17433)
Uses https://github.com/minio/minio-go/pull/1839
2023-06-15 09:21:53 -07:00
Poorna
a9082e4f79 site replication: cancel ongoing op properly (#17428) 2023-06-15 08:05:08 -07:00
dependabot[bot]
6278679ffd build(deps): bump github.com/lestrrat-go/jwx from 1.2.25 to 1.2.26 (#17425)
Bumps [github.com/lestrrat-go/jwx](https://github.com/lestrrat-go/jwx) from 1.2.25 to 1.2.26.
- [Release notes](https://github.com/lestrrat-go/jwx/releases)
- [Changelog](https://github.com/lestrrat-go/jwx/blob/v1.2.26/Changes)
- [Commits](https://github.com/lestrrat-go/jwx/compare/v1.2.25...v1.2.26)

---
updated-dependencies:
- dependency-name: github.com/lestrrat-go/jwx
  dependency-type: indirect
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-06-14 12:17:22 -07:00
jiuker
0474791cf8 fix: set time format right (#17402) 2023-06-14 07:49:13 -07:00
Harshavardhana
69f819e199 move to pkg@v1.7.3 to support aws:kms string in policy conditions (#17414) 2023-06-14 07:42:03 -07:00
Harshavardhana
f32efd5429 more compliance related fixes (#17408)
- lifecycle must return InvalidArgument for rule errors
- do not return `null` versionId in HTTP header
- reject mixed SSE uploads with correct error message
2023-06-13 13:52:33 -07:00
jiuker
22c247a988 fix: preserve multiple values for query params (#17392) 2023-06-13 11:38:46 -07:00
Shubhendu
35d71682f6 fix: do not allow removal of inbuilt policies unless they are already persisted (#17264)
Don't allow removal of inbuilt policies such as `readwrite`, `readonly`, `writeonly` and `diagnostics`
2023-06-13 11:06:17 -07:00
drivebyer
3d6b88a60e fix: syscall to record time on non-linux (#17383) 2023-06-13 11:04:50 -07:00
Harshavardhana
26a0803388 various compliance related fixes (#17401)
- getObjectTagging to be allowed for anonymous policies
- return correct errors for invalid retention period
- return sorted list of tags for an object
- putObjectTagging must return 200 OK not 204 OK
- return 409 ErrObjectLockConfigurationNotAllowed for existing buckets
2023-06-12 13:22:07 -07:00
Anis Eleuch
ae95384dd8 Revert "heal: Update object parity with the latest configured SC (#17187)" (#17404) 2023-06-12 11:54:51 -07:00
Anis Eleuch
0f0dcf0c5e tar: Avoid storing snowball extraction header in extract objects (#17389) 2023-06-12 09:42:06 -07:00
Klaus Post
6f2406b0b6 fix: protect ReplicationStats against concurrent map iteration and write crash (#17403) 2023-06-12 09:17:11 -07:00
Anis Eleuch
bb24346e04 listen: Only error out if not able to bind any interface (#17353) 2023-06-12 09:09:28 -07:00
Harshavardhana
be45ffd8a4 return 204 status code for DeleteBucketTagging (#17400) 2023-06-11 20:49:02 -07:00
Poorna Krishnamoorthy
f986b0c493 replication: perform bucket resync in parallel (#16707)
Default number of parallel resync operations for a bucket to 10
to speed up resync.
2023-06-11 16:09:55 -07:00
Harshavardhana
c9e87f0548 service accounts are allowed to have no expiration (#17397) 2023-06-11 10:34:59 -07:00
Harshavardhana
43468f4d47 return InvalidRequest when no parts are provided (#17395) 2023-06-10 21:59:51 -07:00
Minio Trusted
91987d6f7a Update yaml files to latest version RELEASE.2023-06-09T07-32-12Z 2023-06-10 05:38:08 +00:00
Harshavardhana
6b7c98bd0f make sure we pick up the right Go version in vulncheck (#17388) 2023-06-09 00:32:12 -07:00
Harshavardhana
b829e80ecb do not disable root for invalid API config values (#17386) 2023-06-08 15:50:06 -07:00
Klaus Post
6e38d0f3ab Add more bootstrap info in debug mode (#17362) 2023-06-08 08:39:47 -07:00
Anis Eleuch
38342b1df5 decom: Parallelize decommissining (#17364) 2023-06-07 14:27:51 -07:00
Harshavardhana
dbd4c2425e fix: kafka broker pings must not be greater than 1sec (#17376) 2023-06-07 11:47:00 -07:00
Harshavardhana
49ce85ee3d allow prefix/markers to have '/' in the beginning to throw an empty (#17373) 2023-06-07 11:25:26 -07:00
Anis Eleuch
eba378e4a1 vrf: Fix testing for loopback coming from the address (#17372) 2023-06-07 09:53:05 -07:00
Harshavardhana
442c50ff00 remove delimiter if not set by client, also fetchOwner is optional (#17366) 2023-06-06 21:31:47 -07:00
Harshavardhana
d1448adbda use slices package and remove some helpers (#17342) 2023-06-06 10:12:52 -07:00
jiuker
5a21b1f353 fix: Delete dir failed when .DS_Store in it (#17352) 2023-06-06 10:12:06 -07:00
Bartosz Marczyński
123a2fb3a8 helm: add /tmp mount to post jobs (#17301) 2023-06-06 06:50:33 -07:00
Harshavardhana
2f9e2147f5 allow quota enforcement to rely on older values (#17351)
PUT calls cannot afford large latency build-ups due to a
contentious usage.json, or worse, letting them fail with
some unexpected error; this can happen when the file is
concurrently being updated via the scanner or is being
healed during a disk replacement.

However, these updates are fairly quick in theory; stressed clusters
can quickly show visible latency, and this can add up, leading to
invalid errors returned during PUT.

It is perhaps okay for us to relax this error return requirement
instead, make sure that we log that we are proceeding to take in
the requests while the quota is using an older value for the quota
enforcement. These things will reconcile themselves eventually,
via scanner making sure to overwrite the usage.json.

Bonus: make sure that storage-rest-client sets ExpectTimeouts to
be 'true', such that DiskInfo() call with contextTimeout does
not prematurely disconnect the servers leading to a longer
healthCheck, back-off routine. This can easily pile up while also
causing active callers to disconnect, leading to quorum loss.

DiskInfo is actively used in the PUT, Multipart call path for
upgrading parity when disks are down, it in-turn shouldn't cause
more disks to go down.
2023-06-05 16:56:35 -07:00
Harshavardhana
75c6fc4f02 only allow decryption of etag for only sse-s3 (#17335) 2023-06-05 13:08:51 -07:00
Anis Eleuch
f9e07d6143 goroutines parser: Add --less flag to filter goroutines (#17339) 2023-06-04 14:20:46 -07:00
Anis Eleuch
1436858347 log: Add a log when saving pool.bin fails (#17338)
Co-authored-by: Anis Elleuch <anis@min.io>
2023-06-04 14:20:21 -07:00
jiuker
8030e12ba5 fix: expMovingAvg is too small when startTime is zero (#17346) 2023-06-03 13:41:51 -07:00
Minio Trusted
a485b923bf Update yaml files to latest version RELEASE.2023-06-02T23-17-26Z 2023-06-03 05:07:30 +00:00
Kaan Kabalak
0649aca219 Add expiration to ListServiceAccounts function (#17249) 2023-06-02 16:17:26 -07:00
Harshavardhana
b210ea79bc do not save MTime in newMultipartUpload() to avoid side-affects (#17340) 2023-06-02 14:38:09 -07:00
Poorna
68f80b5fe7 replication: ignore retention mode validation for replica (#17332) 2023-06-01 18:53:12 -07:00
Poorna
e95825a42e replication: use latest object info for metrics update (#17333) 2023-06-01 18:52:55 -07:00
Anis Eleuch
931712dc46 fix: converting 'server closed idle connection' to errDiskNotFound (#17330) 2023-06-01 15:40:28 -07:00
Harshavardhana
54e544e03e allow lookup()/head() operations on Veeam SOS objects (#17331) 2023-06-01 15:26:26 -07:00
Poorna
f86b9abf32 site removal: update site config and reload targets after update (#17327) 2023-06-01 10:19:56 -07:00
Anis Eleuch
9ef7eda33a heal: Avoid objects created after the heal disk start time (#17323) 2023-05-31 13:10:45 -07:00
Klaus Post
c9e26401fa Fix GetObject encrypted etag (#17302)
Co-authored-by: Harshavardhana <harsha@minio.io>
2023-05-31 13:10:25 -07:00
Harshavardhana
e53f49e9a9 add additional tools that help in debugging (#17325) 2023-05-31 13:06:08 -07:00
jiuker
14f6ac9222 fix: fail large content in DeleteMultipleObjects() early (#17321) 2023-05-31 10:58:14 -07:00
drivebyer
b8474295af fix: time() returned function not being called as expected in globalSync() (#17319) 2023-05-31 09:40:23 -07:00
Shireesh Anjal
817e85a3e0 fix: proxy not set on subnet logger webhook sometimes (#17320) 2023-05-31 08:09:09 -07:00
jiuker
fb5ce3b87a record err time when remote node is offline (#17262) 2023-05-30 10:07:26 -07:00
Klaus Post
6fe028b7c5 Revert s3 select simdjson reuse (#17310) 2023-05-30 10:02:22 -07:00
Harshavardhana
1cd7f1e38d fix: cleanup empty multipart folders upon stale upload cleanup (#17312) 2023-05-30 09:56:50 -07:00
jiuker
043fd8b536 fix: on windows use FindClose close handler (#17306) 2023-05-30 02:15:57 -07:00
Klaus Post
669acbb032 Fix Test LDAP for automatic site replication (#17305) 2023-05-29 08:13:58 -07:00
Minio Trusted
086d8f036e Update yaml files to latest version RELEASE.2023-05-27T05-56-19Z 2023-05-28 06:53:17 +00:00
Harshavardhana
394690dcfb check for upto 50%+ data disks to be offline (#17294) 2023-05-26 22:56:19 -07:00
Harshavardhana
398bca92ff update helm to v5.0.10
Signed-off-by: Harshavardhana <harsha@minio.io>
2023-05-26 17:05:49 -07:00
Harshavardhana
fb328b1a64 upgrade all dependencies (#17276) 2023-05-26 16:31:28 -07:00
Anis Eleuch
563f667e30 reorder-disks: Fix UID to UUID and add better error messages (#17292) 2023-05-26 15:03:31 -07:00
Klaus Post
c839b64f6a fix: compressed+encrypted block overhead (#17289) 2023-05-26 10:57:07 -07:00
Anis Eleuch
6425fec366 s3: Add x-minio-error-code header for S3 HEAD requests (#17283) 2023-05-26 10:13:18 -07:00
Harshavardhana
d5059840ef fix: for delete marked objects choose appropriate parity (#17287) 2023-05-26 09:57:44 -07:00
Aditya Manthramurthy
65cba212e8 Remove older policy attach behavior for LDAP (#17240) 2023-05-26 06:31:24 -07:00
Aditya Manthramurthy
7a69c9c75a Update builtin policy entities command (#17241) 2023-05-25 22:31:05 -07:00
Harshavardhana
4a425cbac1 cleanup scripts and apply shfmt (#17284) 2023-05-25 22:07:25 -07:00
Harshavardhana
615169c4ec update docker ubi image to 8.8 (#17281) 2023-05-25 18:19:09 -07:00
Harshavardhana
5cd9dcb844 rebalance 'null' delete markers properly (#17282) 2023-05-25 16:12:53 -07:00
Anis Eleuch
54c5c88fe6 Add number of offline disks in quorum errors (#16822) 2023-05-25 09:39:06 -07:00
jiuker
443250d135 fix: for Target isActive use net.Dial instead (#17251) 2023-05-25 09:24:11 -07:00
Harshavardhana
9b5829c16e avoid decommissioning DEL markers with single versions (#17274) 2023-05-25 09:18:49 -07:00
jiuker
d749aaab69 fix: ignore existing target status when adding new targets (#17250) 2023-05-24 22:57:37 -07:00
Krishnan Parthasarathi
62df731006 Add updatedAt for GetBucketLifecycleConfig (#17271) 2023-05-24 22:52:39 -07:00
Harshavardhana
d0a0eb9738 support fan-out objects via PostUpload() (#17233) 2023-05-24 22:51:07 -07:00
Klaus Post
66156b8230 Stricter partNumber checks (#17270)
Fixes #17269
2023-05-24 08:00:47 -07:00
Klaus Post
5677f73794 Add PostObject Checksum (#17244) 2023-05-23 07:58:33 -07:00
Harshavardhana
ef54200db7 offline drives more than 50% of total drives return error (#17252) 2023-05-23 07:57:57 -07:00
Harshavardhana
7875efbf61 update minio/dperf to latest release v0.4.4 (#17267) 2023-05-22 19:17:17 -07:00
Krishnan Parthasarathi
3e128c116e Add lifecycle event source to audit log tags (#17248) 2023-05-22 15:28:56 -07:00
Krishnan Parthasarathi
55a3310446 logger-http: Don't retry after a successful send (#17266) 2023-05-22 14:53:18 -07:00
Harshavardhana
fc03be7891 simplify bucket metadata lookups for versioning/object locking (#17253) 2023-05-22 12:05:14 -07:00
jiuker
b1b00a5055 fix: Avoid Income globalStats twice upon error (#17263) 2023-05-22 07:42:27 -07:00
Poorna
2920b0fc6d allow specification of path/virtual style bucket lookup in batch replication (#17201) 2023-05-21 15:16:31 -07:00
Anis Eleuch
a30a55f3b1 Add object parity in listing V2M and listing versions M (#17238) 2023-05-19 09:42:45 -07:00
Praveen raj Mani
ecfb18b26a Freeze the s3 APIs until the notification sub-system initializes completely (#17182) 2023-05-19 08:44:48 -07:00
jiuker
41fa8fa2d2 fix: increment counter when entry is skipped (#17237) 2023-05-19 08:36:52 -07:00
jiuker
e94e6adf91 fix: return proper error if OIDC Discoverydoc fails to respond (#17242) 2023-05-19 02:13:33 -07:00
jiuker
7d433f16c4 before return make globalScannerMetrics.incTime call (#17230) 2023-05-18 13:45:05 -07:00
Klaus Post
b06d7bf834 fix: leaking connections in JSON SQL with limited return (#17239) 2023-05-18 11:26:46 -07:00
Minio Trusted
b784e458cb Update yaml files to latest version RELEASE.2023-05-18T00-05-36Z 2023-05-18 17:56:05 +00:00
516 changed files with 37942 additions and 26002 deletions

View File

@@ -1,10 +1,17 @@
.git
.github
docs
default.etcd
*.gz
*.tar.gz
*.bzip2
*.zip
browser/node_modules
node_modules
node_modules
docs/debugging/s3-verify/s3-verify
docs/debugging/xl-meta/xl-meta
docs/debugging/s3-check-md5/s3-check-md5
docs/debugging/hash-set/hash-set
docs/debugging/healing-bin/healing-bin
docs/debugging/inspect/inspect
docs/debugging/pprofgoparser/pprofgoparser
docs/debugging/reorder-disks/reorder-disks

View File

@@ -1,3 +1,9 @@
## Community Contribution License
All community contributions in this pull request are licensed to the project maintainers
under the terms of the [Apache 2 license] (https://www.apache.org/licenses/LICENSE-2.0).
By creating this pull request I represent that I have the right to license the
contributions to the project maintainers under the Apache 2 license.
## Description

View File

@@ -4,6 +4,7 @@ on:
pull_request:
branches:
- master
- next
# This ensures that previous jobs for the PR are canceled when the PR is
# updated.
@@ -20,7 +21,7 @@ jobs:
runs-on: ${{ matrix.os }}
strategy:
matrix:
go-version: [1.20.x]
go-version: [1.21.x]
os: [ubuntu-latest]
steps:
- uses: actions/checkout@ac593985615ec2ede58e132d2e21d2b1cbd6127c # v3

View File

@@ -4,6 +4,7 @@ on:
pull_request:
branches:
- master
- next
# This ensures that previous jobs for the PR are canceled when the PR is
# updated.
@@ -20,7 +21,7 @@ jobs:
runs-on: ${{ matrix.os }}
strategy:
matrix:
go-version: [1.20.x]
go-version: [1.21.x]
os: [ubuntu-latest]
steps:
- uses: actions/checkout@v3

View File

@@ -4,6 +4,7 @@ on:
pull_request:
branches:
- master
- next
# This ensures that previous jobs for the PR are canceled when the PR is
# updated.
@@ -20,7 +21,7 @@ jobs:
runs-on: ${{ matrix.os }}
strategy:
matrix:
go-version: [1.20.x]
go-version: [1.21.x]
os: [ubuntu-latest]
steps:
- uses: actions/checkout@v3

View File

@@ -4,6 +4,7 @@ on:
pull_request:
branches:
- master
- next
# This ensures that previous jobs for the PR are canceled when the PR is
# updated.
@@ -20,7 +21,7 @@ jobs:
runs-on: ${{ matrix.os }}
strategy:
matrix:
go-version: [1.20.x]
go-version: [1.21.x]
os: [ubuntu-latest, windows-latest]
steps:
- uses: actions/checkout@v3

View File

@@ -4,6 +4,7 @@ on:
pull_request:
branches:
- master
- next
# This ensures that previous jobs for the PR are canceled when the PR is
# updated.
@@ -20,7 +21,7 @@ jobs:
runs-on: ${{ matrix.os }}
strategy:
matrix:
go-version: [1.20.x]
go-version: [1.21.x]
os: [ubuntu-latest]
steps:
- uses: actions/checkout@v3

.github/workflows/helm-lint.yml vendored Normal file
View File

@@ -0,0 +1,31 @@
name: Helm Chart linting
on:
pull_request:
branches:
- master
- next
# This ensures that previous jobs for the PR are canceled when the PR is
# updated.
concurrency:
group: ${{ github.workflow }}-${{ github.head_ref }}
cancel-in-progress: true
permissions:
contents: read
jobs:
release:
runs-on: ubuntu-latest
steps:
- name: Checkout
uses: actions/checkout@v3
- name: Install Helm
uses: azure/setup-helm@v3
- name: Run helm lint
run: |
cd helm/minio
helm lint .

View File

@@ -4,6 +4,7 @@ on:
pull_request:
branches:
- master
- next
# This ensures that previous jobs for the PR are canceled when the PR is
# updated.
@@ -61,7 +62,7 @@ jobs:
# are turned off - i.e. if ldap="", then ldap server is not enabled for
# the tests.
matrix:
go-version: [1.20.x]
go-version: [1.21.x]
ldap: ["", "localhost:389"]
etcd: ["", "http://localhost:2379"]
openid: ["", "http://127.0.0.1:5556/dex"]
@@ -82,9 +83,9 @@ jobs:
check-latest: true
- name: Test LDAP/OpenID/Etcd combo
env:
LDAP_TEST_SERVER: ${{ matrix.ldap }}
ETCD_SERVER: ${{ matrix.etcd }}
OPENID_TEST_SERVER: ${{ matrix.openid }}
_MINIO_LDAP_TEST_SERVER: ${{ matrix.ldap }}
_MINIO_ETCD_TEST_SERVER: ${{ matrix.etcd }}
_MINIO_OPENID_TEST_SERVER: ${{ matrix.openid }}
run: |
sudo sysctl net.ipv6.conf.all.disable_ipv6=0
sudo sysctl net.ipv6.conf.default.disable_ipv6=0
@@ -92,20 +93,20 @@ jobs:
- name: Test with multiple OpenID providers
if: matrix.openid == 'http://127.0.0.1:5556/dex'
env:
LDAP_TEST_SERVER: ${{ matrix.ldap }}
ETCD_SERVER: ${{ matrix.etcd }}
OPENID_TEST_SERVER: ${{ matrix.openid }}
OPENID_TEST_SERVER_2: "http://127.0.0.1:5557/dex"
_MINIO_LDAP_TEST_SERVER: ${{ matrix.ldap }}
_MINIO_ETCD_TEST_SERVER: ${{ matrix.etcd }}
_MINIO_OPENID_TEST_SERVER: ${{ matrix.openid }}
_MINIO_OPENID_TEST_SERVER_2: "http://127.0.0.1:5557/dex"
run: |
sudo sysctl net.ipv6.conf.all.disable_ipv6=0
sudo sysctl net.ipv6.conf.default.disable_ipv6=0
make test-iam
- name: Test with Access Management Plugin enabled
env:
LDAP_TEST_SERVER: ${{ matrix.ldap }}
ETCD_SERVER: ${{ matrix.etcd }}
OPENID_TEST_SERVER: ${{ matrix.openid }}
POLICY_PLUGIN_ENDPOINT: "http://127.0.0.1:8080"
_MINIO_LDAP_TEST_SERVER: ${{ matrix.ldap }}
_MINIO_ETCD_TEST_SERVER: ${{ matrix.etcd }}
_MINIO_OPENID_TEST_SERVER: ${{ matrix.openid }}
_MINIO_POLICY_PLUGIN_TEST_ENDPOINT: "http://127.0.0.1:8080"
run: |
sudo sysctl net.ipv6.conf.all.disable_ipv6=0
sudo sysctl net.ipv6.conf.default.disable_ipv6=0
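Note: the rename above means local runs of the IAM suite need the underscore-prefixed variable names. A minimal sketch, assuming LDAP, etcd, and dex servers are already listening on the addresses used by the matrix:

    # hypothetical local invocation of the same target the workflow calls
    _MINIO_LDAP_TEST_SERVER="localhost:389" \
    _MINIO_ETCD_TEST_SERVER="http://localhost:2379" \
    _MINIO_OPENID_TEST_SERVER="http://127.0.0.1:5556/dex" \
    make test-iam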


@@ -3,7 +3,8 @@ name: Mint Tests
on:
pull_request:
branches:
- master
- master
- next
# This ensures that previous jobs for the PR are canceled when the PR is
# updated.
@@ -29,7 +30,7 @@ jobs:
- name: setup-go-step
uses: actions/setup-go@v2
with:
go-version: 1.20.x
go-version: 1.21.x
- name: github sha short
id: vars
@@ -37,8 +38,11 @@ jobs:
- name: build-minio
run: |
make install
docker build . -t "minio/minio:${{ steps.vars.outputs.sha_short }}"
TAG="quay.io/minio/minio:${{ steps.vars.outputs.sha_short }}" make docker
- name: multipart uploads test
run: |
${GITHUB_WORKSPACE}/.github/workflows/multipart/migrate.sh "${{ steps.vars.outputs.sha_short }}"
- name: compress and encrypt
run: |
@@ -51,7 +55,23 @@ jobs:
- name: standalone erasure
run: |
${GITHUB_WORKSPACE}/.github/workflows/run-mint.sh "erasure" "minio" "minio123" "${{ steps.vars.outputs.sha_short }}"
docker rmi -f minio/minio:${{ steps.vars.outputs.sha_short }}
- name: The job must cleanup
if: ${{ always() }}
run: |
export JOB_NAME=${{ steps.vars.outputs.sha_short }}
for mode in $(echo compress-encrypt pools erasure); do
docker-compose -f ${GITHUB_WORKSPACE}/.github/workflows/mint/minio-${mode}.yaml down || true
docker-compose -f ${GITHUB_WORKSPACE}/.github/workflows/mint/minio-${mode}.yaml rm || true
done
docker-compose -f ${GITHUB_WORKSPACE}/.github/workflows/multipart/docker-compose-site1.yaml rm -s -f || true
docker-compose -f ${GITHUB_WORKSPACE}/.github/workflows/multipart/docker-compose-site2.yaml rm -s -f || true
for volume in $(docker volume ls -q | grep minio); do
docker volume rm ${volume} || true
done
docker rmi -f quay.io/minio/minio:${{ steps.vars.outputs.sha_short }}
docker system prune -f || true
docker volume prune -f || true
docker volume rm $(docker volume ls -q -f dangling=true) || true


@@ -2,7 +2,7 @@ version: '3.7'
# Settings and configurations that are common for all containers
x-minio-common: &minio-common
image: minio/minio:${JOB_NAME}
image: quay.io/minio/minio:${JOB_NAME}
command: server --console-address ":9001" http://minio{1...4}/cdata{1...2}
expose:
- "9000"
@@ -11,8 +11,9 @@ x-minio-common: &minio-common
MINIO_CI_CD: "on"
MINIO_ROOT_USER: "minio"
MINIO_ROOT_PASSWORD: "minio123"
MINIO_COMPRESS: "true"
MINIO_COMPRESS_MIMETYPES: "*"
MINIO_COMPRESSION_ENABLE: "on"
MINIO_COMPRESSION_MIME_TYPES: "*"
MINIO_COMPRESSION_ALLOW_ENCRYPTION: "on"
MINIO_KMS_SECRET_KEY: "my-minio-key:OSMM+vkKUTCvQs9YL/CVMIMt43HFhkUpqJxTmGl6rYw="
healthcheck:
test: ["CMD", "curl", "-f", "http://localhost:9000/minio/health/live"]


@@ -2,7 +2,7 @@ version: '3.7'
# Settings and configurations that are common for all containers
x-minio-common: &minio-common
image: minio/minio:${JOB_NAME}
image: quay.io/minio/minio:${JOB_NAME}
command: server --console-address ":9001" edata{1...4}
expose:
- "9000"
@@ -11,8 +11,6 @@ x-minio-common: &minio-common
MINIO_CI_CD: "on"
MINIO_ROOT_USER: "minio"
MINIO_ROOT_PASSWORD: "minio123"
MINIO_COMPRESS: "true"
MINIO_COMPRESS_MIMETYPES: "*"
MINIO_KMS_SECRET_KEY: "my-minio-key:OSMM+vkKUTCvQs9YL/CVMIMt43HFhkUpqJxTmGl6rYw="
healthcheck:
test: ["CMD", "curl", "-f", "http://localhost:9000/minio/health/live"]


@@ -2,7 +2,7 @@ version: '3.7'
# Settings and configurations that are common for all containers
x-minio-common: &minio-common
image: minio/minio:${JOB_NAME}
image: quay.io/minio/minio:${JOB_NAME}
command: server --console-address ":9001" http://minio{1...4}/pdata{1...2} http://minio{5...8}/pdata{1...2}
expose:
- "9000"


@@ -0,0 +1,66 @@
version: '3.7'
# Settings and configurations that are common for all containers
x-minio-common: &minio-common
image: quay.io/minio/minio:${RELEASE}
command: server http://site1-minio{1...4}/data{1...2}
environment:
- MINIO_PROMETHEUS_AUTH_TYPE=public
- CI=true
# Starts 4 Docker containers running MinIO server instances.
# An nginx reverse proxy load-balances them; access site1 through port 9001.
services:
site1-minio1:
<<: *minio-common
hostname: site1-minio1
volumes:
- site1-data1-1:/data1
- site1-data1-2:/data2
site1-minio2:
<<: *minio-common
hostname: site1-minio2
volumes:
- site1-data2-1:/data1
- site1-data2-2:/data2
site1-minio3:
<<: *minio-common
hostname: site1-minio3
volumes:
- site1-data3-1:/data1
- site1-data3-2:/data2
site1-minio4:
<<: *minio-common
hostname: site1-minio4
volumes:
- site1-data4-1:/data1
- site1-data4-2:/data2
site1-nginx:
image: nginx:1.19.2-alpine
hostname: site1-nginx
volumes:
- ./nginx-site1.conf:/etc/nginx/nginx.conf:ro
ports:
- "9001:9001"
depends_on:
- site1-minio1
- site1-minio2
- site1-minio3
- site1-minio4
## By default this config uses default local driver,
## For custom volumes replace with volume driver configuration.
volumes:
site1-data1-1:
site1-data1-2:
site1-data2-1:
site1-data2-2:
site1-data3-1:
site1-data3-2:
site1-data4-1:
site1-data4-2:
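The `x-minio-common: &minio-common` block is shared across the four `site1-minio*` services via YAML merge keys (`<<: *minio-common`). To inspect the fully expanded configuration, `docker-compose config` renders the merged YAML (the `RELEASE` value below is a placeholder):

    cd .github/workflows/multipart/
    RELEASE="RELEASE.YYYY-MM-DDTHH-MM-SSZ" docker-compose -f docker-compose-site1.yaml config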


@@ -0,0 +1,66 @@
version: '3.7'
# Settings and configurations that are common for all containers
x-minio-common: &minio-common
image: quay.io/minio/minio:${RELEASE}
command: server http://site2-minio{1...4}/data{1...2}
environment:
- MINIO_PROMETHEUS_AUTH_TYPE=public
- CI=true
# Starts 4 Docker containers running MinIO server instances.
# An nginx reverse proxy load-balances them; access site2 through port 9002.
services:
site2-minio1:
<<: *minio-common
hostname: site2-minio1
volumes:
- site2-data1-1:/data1
- site2-data1-2:/data2
site2-minio2:
<<: *minio-common
hostname: site2-minio2
volumes:
- site2-data2-1:/data1
- site2-data2-2:/data2
site2-minio3:
<<: *minio-common
hostname: site2-minio3
volumes:
- site2-data3-1:/data1
- site2-data3-2:/data2
site2-minio4:
<<: *minio-common
hostname: site2-minio4
volumes:
- site2-data4-1:/data1
- site2-data4-2:/data2
site2-nginx:
image: nginx:1.19.2-alpine
hostname: site2-nginx
volumes:
- ./nginx-site2.conf:/etc/nginx/nginx.conf:ro
ports:
- "9002:9002"
depends_on:
- site2-minio1
- site2-minio2
- site2-minio3
- site2-minio4
## By default this config uses default local driver,
## For custom volumes replace with volume driver configuration.
volumes:
site2-data1-1:
site2-data1-2:
site2-data2-1:
site2-data2-2:
site2-data3-1:
site2-data3-2:
site2-data4-1:
site2-data4-2:

.github/workflows/multipart/migrate.sh (new executable file, 115 lines)

@@ -0,0 +1,115 @@
#!/bin/bash
set -x
## change working directory
cd .github/workflows/multipart/
docker-compose -f docker-compose-site1.yaml rm -s -f
docker-compose -f docker-compose-site2.yaml rm -s -f
for volume in $(docker volume ls -q | grep minio); do
docker volume rm ${volume}
done
if [ ! -f ./mc ]; then
wget --quiet -O mc https://dl.minio.io/client/mc/release/linux-amd64/mc &&
chmod +x mc
fi
(
cd /tmp
go install github.com/minio/minio/docs/debugging/s3-check-md5@latest
)
export RELEASE=RELEASE.2023-08-29T23-07-35Z
docker-compose -f docker-compose-site1.yaml up -d
docker-compose -f docker-compose-site2.yaml up -d
sleep 30s
./mc alias set site1 http://site1-nginx:9001 minioadmin minioadmin --api s3v4
./mc alias set site2 http://site2-nginx:9002 minioadmin minioadmin --api s3v4
./mc ready site1/
./mc ready site2/
./mc admin replicate add site1 site2
./mc mb site1/testbucket/
./mc cp -r --quiet /usr/bin site1/testbucket/
sleep 5
s3-check-md5 -h
failed_count_site1=$(s3-check-md5 -versions -access-key minioadmin -secret-key minioadmin -endpoint http://site1-nginx:9001 -bucket testbucket 2>&1 | grep FAILED | wc -l)
failed_count_site2=$(s3-check-md5 -versions -access-key minioadmin -secret-key minioadmin -endpoint http://site2-nginx:9002 -bucket testbucket 2>&1 | grep FAILED | wc -l)
if [ $failed_count_site1 -ne 0 ]; then
echo "failed with multipart on site1 uploads"
exit 1
fi
if [ $failed_count_site2 -ne 0 ]; then
echo "failed with multipart on site2 uploads"
exit 1
fi
./mc cp -r --quiet /usr/bin site1/testbucket/
sleep 5
failed_count_site1=$(s3-check-md5 -versions -access-key minioadmin -secret-key minioadmin -endpoint http://site1-nginx:9001 -bucket testbucket 2>&1 | grep FAILED | wc -l)
failed_count_site2=$(s3-check-md5 -versions -access-key minioadmin -secret-key minioadmin -endpoint http://site2-nginx:9002 -bucket testbucket 2>&1 | grep FAILED | wc -l)
## we do not need to fail here, since we are going to test
## upgrading to master, healing and being able to recover
## the last version.
if [ $failed_count_site1 -ne 0 ]; then
echo "failed with multipart on site1 uploads ${failed_count_site1}"
fi
if [ $failed_count_site2 -ne 0 ]; then
echo "failed with multipart on site2 uploads ${failed_count_site2}"
fi
export RELEASE=${1}
docker-compose -f docker-compose-site1.yaml up -d
docker-compose -f docker-compose-site2.yaml up -d
./mc ready site1/
./mc ready site2/
for i in $(seq 1 10); do
# mc admin heal -r --remove behaves flakily when used against an LB endpoint;
# let this run 10 times before giving up
./mc admin heal -r --remove --json site1/ 2>&1 >/dev/null
./mc admin heal -r --remove --json site2/ 2>&1 >/dev/null
done
failed_count_site1=$(s3-check-md5 -versions -access-key minioadmin -secret-key minioadmin -endpoint http://site1-nginx:9001 -bucket testbucket 2>&1 | grep FAILED | wc -l)
failed_count_site2=$(s3-check-md5 -versions -access-key minioadmin -secret-key minioadmin -endpoint http://site2-nginx:9002 -bucket testbucket 2>&1 | grep FAILED | wc -l)
if [ $failed_count_site1 -ne 0 ]; then
echo "failed with multipart on site1 uploads"
exit 1
fi
if [ $failed_count_site2 -ne 0 ]; then
echo "failed with multipart on site2 uploads"
exit 1
fi
docker-compose -f docker-compose-site1.yaml rm -s -f
docker-compose -f docker-compose-site2.yaml rm -s -f
for volume in $(docker volume ls -q | grep minio); do
docker volume rm ${volume}
done
docker system prune -f || true
docker volume prune -f || true
docker volume rm $(docker volume ls -q -f dangling=true) || true
## change working directory
cd ../../../
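The script takes the image tag to upgrade to as its only argument; the Mint workflow above passes the short commit SHA of the image it just built. A hedged example with a hypothetical tag:

    # upgrades both sites from RELEASE.2023-08-29T23-07-35Z to quay.io/minio/minio:abc1234
    bash .github/workflows/multipart/migrate.sh "abc1234"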


@@ -0,0 +1,61 @@
user nginx;
worker_processes auto;
error_log /var/log/nginx/error.log warn;
pid /var/run/nginx.pid;
events {
worker_connections 4096;
}
http {
include /etc/nginx/mime.types;
default_type application/octet-stream;
log_format main '$remote_addr - $remote_user [$time_local] "$request" '
'$status $body_bytes_sent "$http_referer" '
'"$http_user_agent" "$http_x_forwarded_for"';
access_log /var/log/nginx/access.log main;
sendfile on;
keepalive_timeout 65;
# include /etc/nginx/conf.d/*.conf;
upstream minio {
server site1-minio1:9000;
server site1-minio2:9000;
server site1-minio3:9000;
server site1-minio4:9000;
}
server {
listen 9001;
listen [::]:9001;
server_name localhost;
# To allow special characters in headers
ignore_invalid_headers off;
# Allow any size file to be uploaded.
# Set to a value such as 1000m; to restrict file size to a specific value
client_max_body_size 0;
# To disable buffering
proxy_buffering off;
proxy_request_buffering off;
location / {
proxy_set_header Host $http_host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
proxy_connect_timeout 300;
# Default is HTTP/1, keepalive is only enabled in HTTP/1.1
proxy_http_version 1.1;
proxy_set_header Connection "";
chunked_transfer_encoding off;
proxy_pass http://minio;
}
}
}
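Since nginx terminates client connections on port 9001 and balances across the four upstream servers, a quick liveness probe through the proxy (assuming the compose stack above is up) looks like:

    # exits non-zero if the proxied MinIO cluster is not serving
    curl -f http://127.0.0.1:9001/minio/health/live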


@@ -0,0 +1,61 @@
user nginx;
worker_processes auto;
error_log /var/log/nginx/error.log warn;
pid /var/run/nginx.pid;
events {
worker_connections 4096;
}
http {
include /etc/nginx/mime.types;
default_type application/octet-stream;
log_format main '$remote_addr - $remote_user [$time_local] "$request" '
'$status $body_bytes_sent "$http_referer" '
'"$http_user_agent" "$http_x_forwarded_for"';
access_log /var/log/nginx/access.log main;
sendfile on;
keepalive_timeout 65;
# include /etc/nginx/conf.d/*.conf;
upstream minio {
server site2-minio1:9000;
server site2-minio2:9000;
server site2-minio3:9000;
server site2-minio4:9000;
}
server {
listen 9002;
listen [::]:9002;
server_name localhost;
# To allow special characters in headers
ignore_invalid_headers off;
# Allow any size file to be uploaded.
# Set to a value such as 1000m; to restrict file size to a specific value
client_max_body_size 0;
# To disable buffering
proxy_buffering off;
proxy_request_buffering off;
location / {
proxy_set_header Host $http_host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
proxy_connect_timeout 300;
# Default is HTTP/1, keepalive is only enabled in HTTP/1.1
proxy_http_version 1.1;
proxy_set_header Connection "";
chunked_transfer_encoding off;
proxy_pass http://minio;
}
}
}


@@ -4,6 +4,7 @@ on:
pull_request:
branches:
- master
- next
# This ensures that previous jobs for the PR are canceled when the PR is
# updated.
@@ -21,7 +22,7 @@ jobs:
strategy:
matrix:
go-version: [1.20.x]
go-version: [1.21.x]
steps:
- uses: actions/checkout@v3


@@ -4,6 +4,7 @@ on:
pull_request:
branches:
- master
- next
# This ensures that previous jobs for the PR are canceled when the PR is
# updated.
@@ -20,7 +21,7 @@ jobs:
runs-on: ${{ matrix.os }}
strategy:
matrix:
go-version: [1.20.x]
go-version: [1.21.x]
os: [ubuntu-latest]
steps:


@@ -16,37 +16,31 @@ docker volume rm $(docker volume ls -f dangling=true) || true
cd .github/workflows/mint
docker-compose -f minio-${MODE}.yaml up -d
sleep 5m
sleep 30s
docker run --rm --net=host \
--name="mint-${MODE}-${JOB_NAME}" \
-e SERVER_ENDPOINT="127.0.0.1:9000" \
-e ACCESS_KEY="${ACCESS_KEY}" \
-e SECRET_KEY="${SECRET_KEY}" \
-e ENABLE_HTTPS=0 \
-e MINT_MODE="${MINT_MODE}" \
docker.io/minio/mint:edge \
aws-sdk-go \
aws-sdk-java \
aws-sdk-php \
aws-sdk-ruby \
awscli \
healthcheck \
mc \
minio-go \
minio-java \
minio-py \
s3cmd \
s3select \
versioning
docker system prune -f || true
docker volume prune -f || true
docker volume rm $(docker volume ls -q -f dangling=true) || true
# Stop two nodes, one of each pool, to check that all S3 calls work while quorum is still there
[ "${MODE}" == "pools" ] && docker-compose -f minio-${MODE}.yaml stop minio2
[ "${MODE}" == "pools" ] && docker-compose -f minio-${MODE}.yaml stop minio6
docker run --rm --net=mint_default \
--name="mint-${MODE}-${JOB_NAME}" \
-e SERVER_ENDPOINT="nginx:9000" \
-e ACCESS_KEY="${ACCESS_KEY}" \
-e SECRET_KEY="${SECRET_KEY}" \
-e ENABLE_HTTPS=0 \
-e MINT_MODE="${MINT_MODE}" \
docker.io/minio/mint:edge
docker-compose -f minio-${MODE}.yaml down || true
sleep 10s
docker system prune -f || true
docker volume prune -f || true
docker volume rm $(docker volume ls -f dangling=true) || true
docker volume rm $(docker volume ls -q -f dangling=true) || true
## change working directory
cd ../../../
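Argument order matches the callers in the Mint workflow above: mode, access key, secret key, job name (the tag the image was built with). A sketch with a hypothetical tag:

    # runs the edge Mint suite against a 4-node erasure setup behind nginx
    .github/workflows/run-mint.sh "erasure" "minio" "minio123" "abc1234"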

.github/workflows/shfmt.yml (new file, 23 lines)

@@ -0,0 +1,23 @@
name: Shell formatting checks
on:
pull_request:
branches:
- master
- next
permissions:
contents: read
jobs:
build:
name: runner / shfmt
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v3
- uses: luizm/action-sh-checker@master
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
SHFMT_OPTS: "-s"
with:
sh_checker_shellcheck_disable: true # disable for now
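A rough local equivalent of this check, assuming `shfmt` is installed (`-s` matches the SHFMT_OPTS above):

    shfmt -s -d .   # print simplification diffs without modifying files
    shfmt -s -w .   # rewrite scripts in place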


@@ -4,6 +4,7 @@ on:
pull_request:
branches:
- master
- next
# This ensures that previous jobs for the PR are canceled when the PR is
# updated.
@@ -20,7 +21,7 @@ jobs:
runs-on: ${{ matrix.os }}
strategy:
matrix:
go-version: [1.20.x]
go-version: [1.21.x]
os: [ubuntu-latest]
steps:


@@ -3,9 +3,11 @@ on:
pull_request:
branches:
- master
- next
push:
branches:
- master
- next
permissions:
contents: read # to fetch code (actions/checkout)
@@ -20,7 +22,7 @@ jobs:
- name: Set up Go
uses: actions/setup-go@v3
with:
go-version: 1.20.x
go-version: 1.21.3
check-latest: true
- name: Get official govulncheck
run: go install golang.org/x/vuln/cmd/govulncheck@latest
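Once installed, the scanner is conventionally run against every package, which is presumably what the remainder of this workflow does:

    govulncheck ./...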

.gitignore (5 lines changed)

@@ -32,10 +32,13 @@ minio.RELEASE*
mc
nancy
inspects/*
.bin/
*.gz
docs/debugging/s3-verify/s3-verify
docs/debugging/xl-meta/xl-meta
docs/debugging/s3-check-md5/s3-check-md5
docs/debugging/hash-set/hash-set
docs/debugging/healing-bin/healing-bin
docs/debugging/inspect/inspect
.bin/
docs/debugging/pprofgoparser/pprofgoparser
docs/debugging/reorder-disks/reorder-disks


@@ -13,7 +13,6 @@ linters:
enable:
- durationcheck
- gocritic
- gofmt
- gofumpt
- goimports
- gomodguard

CREDITS (1,702-line diff suppressed because it is too large)


@@ -1,4 +1,4 @@
FROM registry.access.redhat.com/ubi8/ubi-minimal:8.7
FROM registry.access.redhat.com/ubi8/ubi-minimal:8.8
ARG RELEASE
@@ -27,9 +27,8 @@ COPY LICENSE /licenses/LICENSE
RUN \
microdnf clean all && \
microdnf update --nodocs && \
microdnf install curl ca-certificates shadow-utils util-linux --nodocs && \
rpm -Uvh https://dl.fedoraproject.org/pub/epel/epel-release-latest-8.noarch.rpm && \
microdnf install minisign --nodocs && \
microdnf install curl ca-certificates shadow-utils util-linux gzip lsof tar net-tools iproute iputils jq minisign --nodocs && \
mkdir -p /opt/bin && chmod -R 777 /opt/bin && \
curl -s -q https://dl.min.io/server/minio/hotfixes/linux-amd64/archive/minio.${RELEASE} -o /opt/bin/minio && \
curl -s -q https://dl.min.io/server/minio/hotfixes/linux-amd64/archive/minio.${RELEASE}.sha256sum -o /opt/bin/minio.sha256sum && \


@@ -1,4 +1,4 @@
FROM registry.access.redhat.com/ubi8/ubi-minimal:8.7
FROM registry.access.redhat.com/ubi8/ubi-minimal:8.8
ARG TARGETARCH
@@ -29,17 +29,16 @@ COPY LICENSE /licenses/LICENSE
RUN \
microdnf clean all && \
microdnf update --nodocs && \
microdnf install curl ca-certificates shadow-utils util-linux gzip lsof tar net-tools --nodocs && \
rpm -Uvh https://dl.fedoraproject.org/pub/epel/epel-release-latest-8.noarch.rpm && \
microdnf install minisign --nodocs && \
microdnf install curl ca-certificates shadow-utils util-linux gzip lsof tar net-tools iproute iputils jq minisign --nodocs && \
mkdir -p /opt/bin && chmod -R 777 /opt/bin && \
curl -s -q https://dl.min.io/server/minio/release/linux-${TARGETARCH}/archive/minio.${RELEASE} -o /opt/bin/minio && \
curl -s -q https://dl.min.io/server/minio/release/linux-${TARGETARCH}/archive/minio.${RELEASE}.sha256sum -o /opt/bin/minio.sha256sum && \
curl -s -q https://dl.min.io/server/minio/release/linux-${TARGETARCH}/archive/minio.${RELEASE}.minisig -o /opt/bin/minio.minisig && \
curl -s -q https://dl.min.io/client/mc/release/linux-${TARGETARCH}/mc -o /opt/bin/mc && \
gzip /opt/bin/mc && \
microdnf clean all && \
chmod +x /opt/bin/minio && \
chmod +x /opt/bin/mc && \
chmod +x /usr/bin/docker-entrypoint.sh && \
chmod +x /usr/bin/verify-minio.sh && \
/usr/bin/verify-minio.sh && \


@@ -1,4 +1,4 @@
FROM registry.access.redhat.com/ubi8/ubi-minimal:8.7
FROM registry.access.redhat.com/ubi8/ubi-minimal:8.8
ARG TARGETARCH
@@ -29,9 +29,8 @@ COPY LICENSE /licenses/LICENSE
RUN \
microdnf clean all && \
microdnf update --nodocs && \
microdnf install curl ca-certificates shadow-utils util-linux --nodocs && \
rpm -Uvh https://dl.fedoraproject.org/pub/epel/epel-release-latest-8.noarch.rpm && \
microdnf install minisign --nodocs && \
microdnf install curl ca-certificates shadow-utils util-linux gzip lsof tar net-tools iproute iputils jq minisign --nodocs && \
mkdir -p /opt/bin && chmod -R 777 /opt/bin && \
curl -s -q https://dl.min.io/server/minio/release/linux-${TARGETARCH}/archive/minio.${RELEASE}.fips -o /opt/bin/minio && \
curl -s -q https://dl.min.io/server/minio/release/linux-${TARGETARCH}/archive/minio.${RELEASE}.fips.sha256sum -o /opt/bin/minio.sha256sum && \

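These Dockerfiles resolve `TARGETARCH` at build time, so building outside CI is easiest with buildx, which sets that argument automatically. A sketch with placeholder tag and release values (add `-f` with the appropriate Dockerfile name; the default name is an assumption here):

    docker buildx build --platform linux/amd64 \
      --build-arg RELEASE="RELEASE.YYYY-MM-DDTHH-MM-SSZ" \
      -t myminio:test .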

@@ -6,7 +6,7 @@ GOARCH := $(shell go env GOARCH)
GOOS := $(shell go env GOOS)
VERSION ?= $(shell git describe --tags)
TAG ?= "minio/minio:$(VERSION)"
TAG ?= "quay.io/minio/minio:$(VERSION)"
GOLANGCI_VERSION = v1.51.2
GOLANGCI_DIR = .bin/golangci/$(GOLANGCI_VERSION)
@@ -45,22 +45,22 @@ lint-fix: getdeps ## runs golangci-lint suite of linters with automatic fixes
@$(GOLANGCI) run --build-tags kqueue --timeout=10m --config ./.golangci.yml --fix
check: test
test: verifiers build ## builds minio, runs linters, tests
test: verifiers build build-debugging ## builds minio, runs linters, tests
@echo "Running unit tests"
@MINIO_API_REQUESTS_MAX=10000 CGO_ENABLED=0 go test -tags kqueue ./...
test-root-disable: install
test-root-disable: install-race
@echo "Running minio root lockdown tests"
@env bash $(PWD)/buildscripts/disable-root.sh
test-decom: install
test-decom: install-race
@echo "Running minio decom tests"
@env bash $(PWD)/docs/distributed/decom.sh
@env bash $(PWD)/docs/distributed/decom-encrypted.sh
@env bash $(PWD)/docs/distributed/decom-encrypted-sse-s3.sh
@env bash $(PWD)/docs/distributed/decom-compressed-sse-s3.sh
test-upgrade: build
test-upgrade: install-race
@echo "Running minio upgrade tests"
@(env bash $(PWD)/buildscripts/minio-upgrade.sh)
@@ -74,6 +74,9 @@ test-iam: build ## verify IAM (external IDP, etcd backends)
@echo "Running tests for IAM (external IDP, etcd backends) with -race"
@MINIO_API_REQUESTS_MAX=10000 GORACE=history_size=7 CGO_ENABLED=1 go test -race -tags kqueue -v -run TestIAM* ./cmd
test-sio-error:
@(env bash $(PWD)/docs/bucket/replication/sio-error.sh)
test-replication-2site:
@(env bash $(PWD)/docs/bucket/replication/setup_2site_existing_replication.sh)
@@ -83,18 +86,18 @@ test-replication-3site:
test-delete-replication:
@(env bash $(PWD)/docs/bucket/replication/delete-replication.sh)
test-replication: install test-replication-2site test-replication-3site test-delete-replication ## verify multi site replication
test-replication: install-race test-replication-2site test-replication-3site test-delete-replication test-sio-error ## verify multi site replication
@echo "Running tests for replicating three sites"
test-site-replication-ldap: install ## verify automatic site replication
test-site-replication-ldap: install-race ## verify automatic site replication
@echo "Running tests for automatic site replication of IAM (with LDAP)"
@(env bash $(PWD)/docs/site-replication/run-multi-site-ldap.sh)
test-site-replication-oidc: install ## verify automatic site replication
test-site-replication-oidc: install-race ## verify automatic site replication
@echo "Running tests for automatic site replication of IAM (with OIDC)"
@(env bash $(PWD)/docs/site-replication/run-multi-site-oidc.sh)
test-site-replication-minio: install ## verify automatic site replication
test-site-replication-minio: install-race ## verify automatic site replication
@echo "Running tests for automatic site replication of IAM (with MinIO IDP)"
@(env bash $(PWD)/docs/site-replication/run-multi-site-minio-idp.sh)
@@ -125,6 +128,9 @@ verify-healing-inconsistent-versions: ## verify resolving inconsistent versions
@GORACE=history_size=7 CGO_ENABLED=1 go build -race -tags kqueue -trimpath --ldflags "$(LDFLAGS)" -o $(PWD)/minio 1>/dev/null
@(env bash $(PWD)/buildscripts/resolve-right-versions.sh)
build-debugging:
@(env bash $(PWD)/docs/debugging/build.sh)
build: checks ## builds minio to $(PWD)
@echo "Building minio binary to './minio'"
@CGO_ENABLED=0 go build -tags kqueue -trimpath --ldflags "$(LDFLAGS)" -o $(PWD)/minio 1>/dev/null
@@ -156,6 +162,12 @@ docker: build ## builds minio docker container
@echo "Building minio docker image '$(TAG)'"
@docker build -q --no-cache -t $(TAG) . -f Dockerfile
install-race: checks ## builds minio to $(PWD)
@echo "Building minio binary to './minio'"
@GORACE=history_size=7 CGO_ENABLED=1 go build -tags kqueue -race -trimpath --ldflags "$(LDFLAGS)" -o $(PWD)/minio 1>/dev/null
@echo "Installing minio binary to '$(GOPATH)/bin/minio'"
@mkdir -p $(GOPATH)/bin && cp -f $(PWD)/minio $(GOPATH)/bin/minio
install: build ## builds minio and installs it to $GOPATH/bin.
@echo "Installing minio binary to '$(GOPATH)/bin/minio'"
@mkdir -p $(GOPATH)/bin && cp -f $(PWD)/minio $(GOPATH)/bin/minio
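`install-race` mirrors `install` but compiles with `-race`, so the shell-based suites retargeted above (`test-decom`, `test-replication`, the site-replication tests) now exercise a race-enabled binary:

    make install-race   # builds ./minio with -race and copies it to $GOPATH/bin
    make test-decom     # decommission tests then run against that binary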


@@ -200,7 +200,7 @@ service iptables restart
MinIO Server comes with an embedded web based object browser. Point your web browser to <http://127.0.0.1:9000> to ensure your server has started successfully.
> NOTE: MinIO runs console on random port by default if you wish choose a specific port use `--console-address` to pick a specific interface and port.
> NOTE: MinIO runs the console on a random port by default; if you wish to choose a specific port, use `--console-address` to pick a specific interface and port.
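For example (the data path is illustrative):

    minio server --console-address ":9001" /data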
### Things to consider
@@ -241,7 +241,7 @@ mc admin update <minio alias, e.g., myminio>
### Upgrade Checklist
- Test all upgrades in a lower environment (DEV, QA, UAT) before applying to production. Performing blind upgrades in production environments carries significant risk.
- Read the release notes for MinIO *before* performing any upgrade, there is no forced requirement to upgrade to latest releases upon every releases. Some releases may not be relevant to your setup, avoid upgrading production environments unnecessarily.
- Read the release notes for MinIO *before* performing any upgrade; there is no forced requirement to upgrade to the latest release upon every release. Some releases may not be relevant to your setup; avoid upgrading production environments unnecessarily.
- If you plan to use `mc admin update`, MinIO process must have write access to the parent directory where the binary is present on the host system.
- `mc admin update` is not supported and should be avoided in Kubernetes/container environments; upgrade containers by upgrading the relevant container images.
- **We do not recommend upgrading one MinIO server at a time; the product is designed to support parallel upgrades. Please follow our recommended guidelines.**
@@ -259,6 +259,6 @@ Please follow MinIO [Contributor's Guide](https://github.com/minio/minio/blob/ma
## License
- MinIO source is licensed under the GNU AGPLv3 license that can be found in the [LICENSE](https://github.com/minio/minio/blob/master/LICENSE) file.
- MinIO [Documentation](https://github.com/minio/minio/tree/master/docs) © 2021 by MinIO, Inc is licensed under [CC BY 4.0](https://creativecommons.org/licenses/by/4.0/).
- MinIO source is licensed under the [GNU AGPLv3](https://github.com/minio/minio/blob/master/LICENSE).
- MinIO [documentation](https://github.com/minio/minio/tree/master/docs) is licensed under [CC BY 4.0](https://creativecommons.org/licenses/by/4.0/).
- [License Compliance](https://github.com/minio/minio/blob/master/COMPLIANCE.md)


@@ -3,19 +3,19 @@
_init() {
shopt -s extglob
shopt -s extglob
## Minimum required versions for build dependencies
GIT_VERSION="1.0"
GO_VERSION="1.16"
OSX_VERSION="10.8"
KNAME=$(uname -s)
ARCH=$(uname -m)
case "${KNAME}" in
SunOS )
ARCH=$(isainfo -k)
;;
esac
## Minimum required versions for build dependencies
GIT_VERSION="1.0"
GO_VERSION="1.16"
OSX_VERSION="10.8"
KNAME=$(uname -s)
ARCH=$(uname -m)
case "${KNAME}" in
SunOS)
ARCH=$(isainfo -k)
;;
esac
}
## FIXME:
@@ -28,24 +28,23 @@ _init() {
## }
##
readlink() {
TARGET_FILE=$1
TARGET_FILE=$1
cd `dirname $TARGET_FILE`
TARGET_FILE=`basename $TARGET_FILE`
cd $(dirname $TARGET_FILE)
TARGET_FILE=$(basename $TARGET_FILE)
# Iterate down a (possible) chain of symlinks
while [ -L "$TARGET_FILE" ]
do
TARGET_FILE=$(env readlink $TARGET_FILE)
cd `dirname $TARGET_FILE`
TARGET_FILE=`basename $TARGET_FILE`
done
# Iterate down a (possible) chain of symlinks
while [ -L "$TARGET_FILE" ]; do
TARGET_FILE=$(env readlink $TARGET_FILE)
cd $(dirname $TARGET_FILE)
TARGET_FILE=$(basename $TARGET_FILE)
done
# Compute the canonicalized name by finding the physical path
# for the directory we're in and appending the target file.
PHYS_DIR=`pwd -P`
RESULT=$PHYS_DIR/$TARGET_FILE
echo $RESULT
# Compute the canonicalized name by finding the physical path
# for the directory we're in and appending the target file.
PHYS_DIR=$(pwd -P)
RESULT=$PHYS_DIR/$TARGET_FILE
echo $RESULT
}
## FIXME:
@@ -59,84 +58,86 @@ readlink() {
## }
##
check_minimum_version() {
IFS='.' read -r -a varray1 <<< "$1"
IFS='.' read -r -a varray2 <<< "$2"
IFS='.' read -r -a varray1 <<<"$1"
IFS='.' read -r -a varray2 <<<"$2"
for i in "${!varray1[@]}"; do
if [[ ${varray1[i]} -lt ${varray2[i]} ]]; then
return 0
elif [[ ${varray1[i]} -gt ${varray2[i]} ]]; then
return 1
fi
done
for i in "${!varray1[@]}"; do
if [[ ${varray1[i]} -lt ${varray2[i]} ]]; then
return 0
elif [[ ${varray1[i]} -gt ${varray2[i]} ]]; then
return 1
fi
done
return 0
return 0
}
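The function returns 0 when the second (installed) version is at least the first (required) one, comparing dot-separated components left to right. For example, reusing the version extraction this script performs later:

    check_minimum_version "1.16" "$(go version | sed 's/^.* go\([0-9.]*\).*$/\1/')" &&
        echo "go is new enough"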
assert_is_supported_arch() {
case "${ARCH}" in
x86_64 | amd64 | aarch64 | ppc64le | arm* | s390x | loong64 | loongarch64 )
return
;;
*)
echo "Arch '${ARCH}' is not supported. Supported Arch: [x86_64, amd64, aarch64, ppc64le, arm*, s390x, loong64, loongarch64]"
exit 1
esac
case "${ARCH}" in
x86_64 | amd64 | aarch64 | ppc64le | arm* | s390x | loong64 | loongarch64)
return
;;
*)
echo "Arch '${ARCH}' is not supported. Supported Arch: [x86_64, amd64, aarch64, ppc64le, arm*, s390x, loong64, loongarch64]"
exit 1
;;
esac
}
assert_is_supported_os() {
case "${KNAME}" in
Linux | FreeBSD | OpenBSD | NetBSD | DragonFly | SunOS )
return
;;
Darwin )
osx_host_version=$(env sw_vers -productVersion)
if ! check_minimum_version "${OSX_VERSION}" "${osx_host_version}"; then
echo "OSX version '${osx_host_version}' is not supported. Minimum supported version: ${OSX_VERSION}"
exit 1
fi
return
;;
*)
echo "OS '${KNAME}' is not supported. Supported OS: [Linux, FreeBSD, OpenBSD, NetBSD, Darwin, DragonFly]"
exit 1
esac
case "${KNAME}" in
Linux | FreeBSD | OpenBSD | NetBSD | DragonFly | SunOS)
return
;;
Darwin)
osx_host_version=$(env sw_vers -productVersion)
if ! check_minimum_version "${OSX_VERSION}" "${osx_host_version}"; then
echo "OSX version '${osx_host_version}' is not supported. Minimum supported version: ${OSX_VERSION}"
exit 1
fi
return
;;
*)
echo "OS '${KNAME}' is not supported. Supported OS: [Linux, FreeBSD, OpenBSD, NetBSD, Darwin, DragonFly]"
exit 1
;;
esac
}
assert_check_golang_env() {
if ! which go >/dev/null 2>&1; then
echo "Cannot find go binary in your PATH configuration, please refer to Go installation document at https://golang.org/doc/install"
exit 1
fi
if ! which go >/dev/null 2>&1; then
echo "Cannot find go binary in your PATH configuration, please refer to Go installation document at https://golang.org/doc/install"
exit 1
fi
installed_go_version=$(go version | sed 's/^.* go\([0-9.]*\).*$/\1/')
if ! check_minimum_version "${GO_VERSION}" "${installed_go_version}"; then
echo "Go runtime version '${installed_go_version}' is unsupported. Minimum supported version: ${GO_VERSION} to compile."
exit 1
fi
installed_go_version=$(go version | sed 's/^.* go\([0-9.]*\).*$/\1/')
if ! check_minimum_version "${GO_VERSION}" "${installed_go_version}"; then
echo "Go runtime version '${installed_go_version}' is unsupported. Minimum supported version: ${GO_VERSION} to compile."
exit 1
fi
}
assert_check_deps() {
# support unusual Git versions such as: 2.7.4 (Apple Git-66)
installed_git_version=$(git version | perl -ne '$_ =~ m/git version (.*?)( |$)/; print "$1\n";')
if ! check_minimum_version "${GIT_VERSION}" "${installed_git_version}"; then
echo "Git version '${installed_git_version}' is not supported. Minimum supported version: ${GIT_VERSION}"
exit 1
fi
# support unusual Git versions such as: 2.7.4 (Apple Git-66)
installed_git_version=$(git version | perl -ne '$_ =~ m/git version (.*?)( |$)/; print "$1\n";')
if ! check_minimum_version "${GIT_VERSION}" "${installed_git_version}"; then
echo "Git version '${installed_git_version}' is not supported. Minimum supported version: ${GIT_VERSION}"
exit 1
fi
}
main() {
## Check for supported arch
assert_is_supported_arch
## Check for supported arch
assert_is_supported_arch
## Check for supported os
assert_is_supported_os
## Check for supported os
assert_is_supported_os
## Check for Go environment
assert_check_golang_env
## Check for Go environment
assert_check_golang_env
## Check for dependencies
assert_check_deps
## Check for dependencies
assert_check_deps
}
_init && main "$@"


@@ -5,33 +5,33 @@ set -e
[ -n "$BASH_XTRACEFD" ] && set -x
function _init() {
## All binaries are static; make sure to disable CGO.
export CGO_ENABLED=0
## All binaries are static; make sure to disable CGO.
export CGO_ENABLED=0
## List of architectures and OS to test cross compilation.
SUPPORTED_OSARCH="linux/ppc64le linux/mips64 linux/amd64 linux/arm64 linux/s390x darwin/arm64 darwin/amd64 freebsd/amd64 windows/amd64 linux/arm linux/386 netbsd/amd64 linux/mips openbsd/amd64"
## List of architectures and OS to test cross compilation.
SUPPORTED_OSARCH="linux/ppc64le linux/mips64 linux/amd64 linux/arm64 linux/s390x darwin/arm64 darwin/amd64 freebsd/amd64 windows/amd64 linux/arm linux/386 netbsd/amd64 linux/mips"
}
function _build() {
local osarch=$1
IFS=/ read -r -a arr <<<"$osarch"
os="${arr[0]}"
arch="${arr[1]}"
package=$(go list -f '{{.ImportPath}}')
printf -- "--> %15s:%s\n" "${osarch}" "${package}"
local osarch=$1
IFS=/ read -r -a arr <<<"$osarch"
os="${arr[0]}"
arch="${arr[1]}"
package=$(go list -f '{{.ImportPath}}')
printf -- "--> %15s:%s\n" "${osarch}" "${package}"
# go build -trimpath to build the binary.
export GOOS=$os
export GOARCH=$arch
export GO111MODULE=on
go build -trimpath -tags kqueue -o /dev/null
# go build -trimpath to build the binary.
export GOOS=$os
export GOARCH=$arch
export GO111MODULE=on
go build -trimpath -tags kqueue -o /dev/null
}
function main() {
echo "Testing builds for OS/Arch: ${SUPPORTED_OSARCH}"
for each_osarch in ${SUPPORTED_OSARCH}; do
_build "${each_osarch}"
done
echo "Testing builds for OS/Arch: ${SUPPORTED_OSARCH}"
for each_osarch in ${SUPPORTED_OSARCH}; do
_build "${each_osarch}"
done
}
_init && main "$@"


@@ -12,21 +12,21 @@ nr_servers=4
addr="localhost"
args=""
for ((i=0;i<$[${nr_servers}];i++)); do
args="$args $scheme://$addr:$[9100+$i]/${HOME}/tmp/dist/path1/$i"
for ((i = 0; i < $((nr_servers)); i++)); do
args="$args $scheme://$addr:$((9100 + i))/${HOME}/tmp/dist/path1/$i"
done
echo $args
for ((i=0;i<$[${nr_servers}];i++)); do
(minio server --address ":$[9100+$i]" $args 2>&1 > /tmp/log$i.txt) &
for ((i = 0; i < $((nr_servers)); i++)); do
(minio server --address ":$((9100 + i))" $args 2>&1 >/tmp/log$i.txt) &
done
sleep 10s
if [ ! -f ./mc ]; then
wget --quiet -O ./mc https://dl.minio.io/client/mc/release/linux-amd64/./mc && \
chmod +x mc
wget --quiet -O ./mc https://dl.minio.io/client/mc/release/linux-amd64/./mc &&
chmod +x mc
fi
set +e
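The change above replaces Bash's deprecated `$[ ... ]` arithmetic expansion with the portable `$(( ... ))` form; both evaluate identically:

    i=3
    echo $[9100+$i]     # 9103 (deprecated syntax)
    echo $((9100 + i))  # 9103 (POSIX arithmetic expansion)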
@@ -41,8 +41,8 @@ sleep 3s # let things settle a little
./mc ls minioadm/
if [ $? -eq 0 ]; then
echo "listing succeeded, 'minioadmin' was not disabled"
exit 1
echo "listing succeeded, 'minioadmin' was not disabled"
exit 1
fi
set -e
@@ -50,16 +50,18 @@ set -e
killall -9 minio
export MINIO_API_ROOT_ACCESS=on
for ((i=0;i<$[${nr_servers}];i++)); do
(minio server --address ":$[9100+$i]" $args 2>&1 > /tmp/log$i.txt) &
for ((i = 0; i < $((nr_servers)); i++)); do
(minio server --address ":$((9100 + i))" $args 2>&1 >/tmp/log$i.txt) &
done
set +e
sleep 10
./mc ls minioadm/
if [ $? -ne 0 ]; then
echo "listing failed, 'minioadmin' should be enabled"
exit 1
echo "listing failed, 'minioadmin' should be enabled"
exit 1
fi
killall -9 minio
@@ -70,14 +72,14 @@ rm -rf /tmp/multisiteb/
echo "Setup site-replication and then disable root credentials"
minio server --address 127.0.0.1:9001 "http://127.0.0.1:9001/tmp/multisitea/data/disterasure/xl{1...4}" \
"http://127.0.0.1:9002/tmp/multisitea/data/disterasure/xl{5...8}" >/tmp/sitea_1.log 2>&1 &
"http://127.0.0.1:9002/tmp/multisitea/data/disterasure/xl{5...8}" >/tmp/sitea_1.log 2>&1 &
minio server --address 127.0.0.1:9002 "http://127.0.0.1:9001/tmp/multisitea/data/disterasure/xl{1...4}" \
"http://127.0.0.1:9002/tmp/multisitea/data/disterasure/xl{5...8}" >/tmp/sitea_2.log 2>&1 &
"http://127.0.0.1:9002/tmp/multisitea/data/disterasure/xl{5...8}" >/tmp/sitea_2.log 2>&1 &
minio server --address 127.0.0.1:9003 "http://127.0.0.1:9003/tmp/multisiteb/data/disterasure/xl{1...4}" \
"http://127.0.0.1:9004/tmp/multisiteb/data/disterasure/xl{5...8}" >/tmp/siteb_1.log 2>&1 &
"http://127.0.0.1:9004/tmp/multisiteb/data/disterasure/xl{5...8}" >/tmp/siteb_1.log 2>&1 &
minio server --address 127.0.0.1:9004 "http://127.0.0.1:9003/tmp/multisiteb/data/disterasure/xl{1...4}" \
"http://127.0.0.1:9004/tmp/multisiteb/data/disterasure/xl{5...8}" >/tmp/siteb_2.log 2>&1 &
"http://127.0.0.1:9004/tmp/multisiteb/data/disterasure/xl{5...8}" >/tmp/siteb_2.log 2>&1 &
sleep 20s
@@ -98,14 +100,14 @@ echo "turning off root access, however site replication must continue"
export MINIO_API_ROOT_ACCESS=off
minio server --address 127.0.0.1:9001 "http://127.0.0.1:9001/tmp/multisitea/data/disterasure/xl{1...4}" \
"http://127.0.0.1:9002/tmp/multisitea/data/disterasure/xl{5...8}" >/tmp/sitea_1.log 2>&1 &
"http://127.0.0.1:9002/tmp/multisitea/data/disterasure/xl{5...8}" >/tmp/sitea_1.log 2>&1 &
minio server --address 127.0.0.1:9002 "http://127.0.0.1:9001/tmp/multisitea/data/disterasure/xl{1...4}" \
"http://127.0.0.1:9002/tmp/multisitea/data/disterasure/xl{5...8}" >/tmp/sitea_2.log 2>&1 &
"http://127.0.0.1:9002/tmp/multisitea/data/disterasure/xl{5...8}" >/tmp/sitea_2.log 2>&1 &
minio server --address 127.0.0.1:9003 "http://127.0.0.1:9003/tmp/multisiteb/data/disterasure/xl{1...4}" \
"http://127.0.0.1:9004/tmp/multisiteb/data/disterasure/xl{5...8}" >/tmp/siteb_1.log 2>&1 &
"http://127.0.0.1:9004/tmp/multisiteb/data/disterasure/xl{5...8}" >/tmp/siteb_1.log 2>&1 &
minio server --address 127.0.0.1:9004 "http://127.0.0.1:9003/tmp/multisiteb/data/disterasure/xl{1...4}" \
"http://127.0.0.1:9004/tmp/multisiteb/data/disterasure/xl{5...8}" >/tmp/siteb_2.log 2>&1 &
"http://127.0.0.1:9004/tmp/multisiteb/data/disterasure/xl{5...8}" >/tmp/siteb_2.log 2>&1 &
sleep 20s


@@ -6,88 +6,87 @@ set -x
WORK_DIR="$PWD/.verify-$RANDOM"
MINIO_CONFIG_DIR="$WORK_DIR/.minio"
MINIO=( "$PWD/minio" --config-dir "$MINIO_CONFIG_DIR" server )
MINIO=("$PWD/minio" --config-dir "$MINIO_CONFIG_DIR" server)
if [ ! -x "$PWD/minio" ]; then
echo "minio executable binary not found in current directory"
exit 1
echo "minio executable binary not found in current directory"
exit 1
fi
if [ ! -x "$PWD/minio" ]; then
echo "minio executable binary not found in current directory"
exit 1
echo "minio executable binary not found in current directory"
exit 1
fi
function start_minio_4drive() {
start_port=$1
start_port=$1
export MINIO_ROOT_USER=minio
export MINIO_ROOT_PASSWORD=minio123
export MC_HOST_minio="http://minio:minio123@127.0.0.1:${start_port}/"
unset MINIO_KMS_AUTO_ENCRYPTION # do not auto-encrypt objects
export MINIO_CI_CD=1
export MINIO_ROOT_USER=minio
export MINIO_ROOT_PASSWORD=minio123
export MC_HOST_minio="http://minio:minio123@127.0.0.1:${start_port}/"
unset MINIO_KMS_AUTO_ENCRYPTION # do not auto-encrypt objects
export MINIO_CI_CD=1
mkdir ${WORK_DIR}
C_PWD=${PWD}
if [ ! -x "$PWD/mc" ]; then
MC_BUILD_DIR="mc-$RANDOM"
if ! git clone --quiet https://github.com/minio/mc "$MC_BUILD_DIR"; then
echo "failed to download https://github.com/minio/mc"
purge "${MC_BUILD_DIR}"
exit 1
mkdir ${WORK_DIR}
C_PWD=${PWD}
if [ ! -x "$PWD/mc" ]; then
MC_BUILD_DIR="mc-$RANDOM"
if ! git clone --quiet https://github.com/minio/mc "$MC_BUILD_DIR"; then
echo "failed to download https://github.com/minio/mc"
purge "${MC_BUILD_DIR}"
exit 1
fi
(cd "${MC_BUILD_DIR}" && go build -o "$C_PWD/mc")
# remove mc source.
purge "${MC_BUILD_DIR}"
fi
(cd "${MC_BUILD_DIR}" && go build -o "$C_PWD/mc")
"${MINIO[@]}" --address ":$start_port" "${WORK_DIR}/disk{1...4}" >"${WORK_DIR}/server1.log" 2>&1 &
pid=$!
disown $pid
sleep 5
# remove mc source.
purge "${MC_BUILD_DIR}"
fi
if ! ps -p ${pid} 1>&2 >/dev/null; then
echo "server1 log:"
cat "${WORK_DIR}/server1.log"
echo "FAILED"
purge "$WORK_DIR"
exit 1
fi
"${MINIO[@]}" --address ":$start_port" "${WORK_DIR}/disk{1...4}" > "${WORK_DIR}/server1.log" 2>&1 &
pid=$!
disown $pid
sleep 5
"${PWD}/mc" mb --with-versioning minio/bucket
if ! ps -p ${pid} 1>&2 >/dev/null; then
echo "server1 log:"
cat "${WORK_DIR}/server1.log"
echo "FAILED"
purge "$WORK_DIR"
exit 1
fi
for i in $(seq 1 4); do
"${PWD}/mc" cp /etc/hosts minio/bucket/testobj
"${PWD}/mc" mb --with-versioning minio/bucket
sudo chown -R root. "${WORK_DIR}/disk${i}"
for i in $(seq 1 4); do
"${PWD}/mc" cp /etc/hosts minio/bucket/testobj
"${PWD}/mc" cp /etc/hosts minio/bucket/testobj
sudo chown -R root. "${WORK_DIR}/disk${i}"
sudo chown -R ${USER}. "${WORK_DIR}/disk${i}"
done
"${PWD}/mc" cp /etc/hosts minio/bucket/testobj
for vid in $("${PWD}/mc" ls --json --versions minio/bucket/testobj | jq -r .versionId); do
"${PWD}/mc" cat --vid "${vid}" minio/bucket/testobj | md5sum
done
sudo chown -R ${USER}. "${WORK_DIR}/disk${i}"
done
for vid in $("${PWD}/mc" ls --json --versions minio/bucket/testobj | jq -r .versionId); do
"${PWD}/mc" cat --vid "${vid}" minio/bucket/testobj | md5sum
done
pkill minio
sleep 3
pkill minio
sleep 3
}
function main() {
start_port=$(shuf -i 10000-65000 -n 1)
start_port=$(shuf -i 10000-65000 -n 1)
start_minio_4drive ${start_port}
start_minio_4drive ${start_port}
}
function purge()
{
rm -rf "$1"
function purge() {
rm -rf "$1"
}
( main "$@" )
(main "$@")
rv=$?
purge "$WORK_DIR"
exit "$rv"


@@ -27,7 +27,7 @@ import (
"os"
"time"
"github.com/minio/madmin-go/v2"
"github.com/minio/madmin-go/v3"
)
func main() {


@@ -4,89 +4,89 @@ trap 'cleanup $LINENO' ERR
# shellcheck disable=SC2120
cleanup() {
MINIO_VERSION=dev docker-compose \
-f "buildscripts/upgrade-tests/compose.yml" \
rm -s -f
docker volume prune -f
MINIO_VERSION=dev docker-compose \
-f "buildscripts/upgrade-tests/compose.yml" \
rm -s -f
docker volume prune -f
}
verify_checksum_after_heal() {
local sum1
sum1=$(curl -s "$2" | sha256sum);
mc admin heal --json -r "$1" >/dev/null; # test after healing
local sum1_heal
sum1_heal=$(curl -s "$2" | sha256sum);
local sum1
sum1=$(curl -s "$2" | sha256sum)
mc admin heal --json -r "$1" >/dev/null # test after healing
local sum1_heal
sum1_heal=$(curl -s "$2" | sha256sum)
if [ "${sum1_heal}" != "${sum1}" ]; then
echo "mismatch expected ${sum1_heal}, got ${sum1}"
exit 1;
fi
if [ "${sum1_heal}" != "${sum1}" ]; then
echo "mismatch expected ${sum1_heal}, got ${sum1}"
exit 1
fi
}
verify_checksum_mc() {
local expected
expected=$(mc cat "$1" | sha256sum)
local got
got=$(mc cat "$2" | sha256sum)
local expected
expected=$(mc cat "$1" | sha256sum)
local got
got=$(mc cat "$2" | sha256sum)
if [ "${expected}" != "${got}" ]; then
echo "mismatch - expected ${expected}, got ${got}"
exit 1;
fi
echo "matches - ${expected}, got ${got}"
if [ "${expected}" != "${got}" ]; then
echo "mismatch - expected ${expected}, got ${got}"
exit 1
fi
echo "matches - ${expected}, got ${got}"
}
add_alias() {
for i in $(seq 1 4); do
echo "... attempting to add alias $i"
until (mc alias set minio http://127.0.0.1:9000 minioadmin minioadmin); do
echo "...waiting... for 5secs" && sleep 5
done
done
for i in $(seq 1 4); do
echo "... attempting to add alias $i"
until (mc alias set minio http://127.0.0.1:9000 minioadmin minioadmin); do
echo "...waiting... for 5secs" && sleep 5
done
done
echo "Sleeping for nginx"
sleep 20
echo "Sleeping for nginx"
sleep 20
}
__init__() {
sudo apt install curl -y
export GOPATH=/tmp/gopath
export PATH=${PATH}:${GOPATH}/bin
sudo apt install curl -y
export GOPATH=/tmp/gopath
export PATH=${PATH}:${GOPATH}/bin
go install github.com/minio/mc@latest
go install github.com/minio/mc@latest
TAG=minio/minio:dev make docker
TAG=minio/minio:dev make docker
MINIO_VERSION=RELEASE.2019-12-19T22-52-26Z docker-compose \
-f "buildscripts/upgrade-tests/compose.yml" \
up -d --build
MINIO_VERSION=RELEASE.2019-12-19T22-52-26Z docker-compose \
-f "buildscripts/upgrade-tests/compose.yml" \
up -d --build
add_alias
add_alias
mc mb minio/minio-test/
mc cp ./minio minio/minio-test/to-read/
mc cp /etc/hosts minio/minio-test/to-read/hosts
mc anonymous set download minio/minio-test
mc mb minio/minio-test/
mc cp ./minio minio/minio-test/to-read/
mc cp /etc/hosts minio/minio-test/to-read/hosts
mc anonymous set download minio/minio-test
verify_checksum_mc ./minio minio/minio-test/to-read/minio
verify_checksum_mc ./minio minio/minio-test/to-read/minio
curl -s http://127.0.0.1:9000/minio-test/to-read/hosts | sha256sum
curl -s http://127.0.0.1:9000/minio-test/to-read/hosts | sha256sum
MINIO_VERSION=dev docker-compose -f "buildscripts/upgrade-tests/compose.yml" stop
MINIO_VERSION=dev docker-compose -f "buildscripts/upgrade-tests/compose.yml" stop
}
main() {
MINIO_VERSION=dev docker-compose -f "buildscripts/upgrade-tests/compose.yml" up -d --build
MINIO_VERSION=dev docker-compose -f "buildscripts/upgrade-tests/compose.yml" up -d --build
add_alias
add_alias
verify_checksum_after_heal minio/minio-test http://127.0.0.1:9000/minio-test/to-read/hosts
verify_checksum_after_heal minio/minio-test http://127.0.0.1:9000/minio-test/to-read/hosts
verify_checksum_mc ./minio minio/minio-test/to-read/minio
verify_checksum_mc ./minio minio/minio-test/to-read/minio
verify_checksum_mc /etc/hosts minio/minio-test/to-read/hosts
verify_checksum_mc /etc/hosts minio/minio-test/to-read/hosts
cleanup
cleanup
}
( __init__ "$@" && main "$@" )
(__init__ "$@" && main "$@")


@@ -6,5 +6,5 @@ export GORACE="history_size=7"
export MINIO_API_REQUESTS_MAX=10000
for d in $(go list ./...); do
CGO_ENABLED=1 go test -v -race --timeout 100m "$d"
CGO_ENABLED=1 go test -v -race --timeout 100m "$d"
done


@@ -3,70 +3,70 @@
set -E
set -o pipefail
set -x
set -e
WORK_DIR="$PWD/.verify-$RANDOM"
MINIO_CONFIG_DIR="$WORK_DIR/.minio"
MINIO=( "$PWD/minio" --config-dir "$MINIO_CONFIG_DIR" server )
MINIO=("$PWD/minio" --config-dir "$MINIO_CONFIG_DIR" server)
if [ ! -x "$PWD/minio" ]; then
echo "minio executable binary not found in current directory"
exit 1
echo "minio executable binary not found in current directory"
exit 1
fi
function start_minio_5drive() {
start_port=$1
start_port=$1
export MINIO_ROOT_USER=minio
export MINIO_ROOT_PASSWORD=minio123
export MC_HOST_minio="http://minio:minio123@127.0.0.1:${start_port}/"
unset MINIO_KMS_AUTO_ENCRYPTION # do not auto-encrypt objects
export MINIO_CI_CD=1
export MINIO_ROOT_USER=minio
export MINIO_ROOT_PASSWORD=minio123
export MC_HOST_minio="http://minio:minio123@127.0.0.1:${start_port}/"
unset MINIO_KMS_AUTO_ENCRYPTION # do not auto-encrypt objects
export MINIO_CI_CD=1
MC_BUILD_DIR="mc-$RANDOM"
if ! git clone --quiet https://github.com/minio/mc "$MC_BUILD_DIR"; then
echo "failed to download https://github.com/minio/mc"
MC_BUILD_DIR="mc-$RANDOM"
if ! git clone --quiet https://github.com/minio/mc "$MC_BUILD_DIR"; then
echo "failed to download https://github.com/minio/mc"
purge "${MC_BUILD_DIR}"
exit 1
fi
(cd "${MC_BUILD_DIR}" && go build -o "$WORK_DIR/mc")
# remove mc source.
purge "${MC_BUILD_DIR}"
exit 1
fi
(cd "${MC_BUILD_DIR}" && go build -o "$WORK_DIR/mc")
"${WORK_DIR}/mc" cp --quiet -r "buildscripts/cicd-corpus/" "${WORK_DIR}/cicd-corpus/"
# remove mc source.
purge "${MC_BUILD_DIR}"
"${MINIO[@]}" --address ":$start_port" "${WORK_DIR}/cicd-corpus/disk{1...5}" >"${WORK_DIR}/server1.log" 2>&1 &
pid=$!
disown $pid
sleep 5
"${WORK_DIR}/mc" cp --quiet -r "buildscripts/cicd-corpus/" "${WORK_DIR}/cicd-corpus/"
if ! ps -p ${pid} 1>&2 >/dev/null; then
echo "server1 log:"
cat "${WORK_DIR}/server1.log"
echo "FAILED"
purge "$WORK_DIR"
exit 1
fi
"${MINIO[@]}" --address ":$start_port" "${WORK_DIR}/cicd-corpus/disk{1...5}" > "${WORK_DIR}/server1.log" 2>&1 &
pid=$!
disown $pid
sleep 5
"${WORK_DIR}/mc" stat minio/bucket/testobj
if ! ps -p ${pid} 1>&2 >/dev/null; then
echo "server1 log:"
cat "${WORK_DIR}/server1.log"
echo "FAILED"
purge "$WORK_DIR"
exit 1
fi
"${WORK_DIR}/mc" stat minio/bucket/testobj
pkill minio
sleep 3
pkill minio
sleep 3
}
function main() {
start_port=$(shuf -i 10000-65000 -n 1)
start_port=$(shuf -i 10000-65000 -n 1)
start_minio_5drive ${start_port}
start_minio_5drive ${start_port}
}
function purge()
{
rm -rf "$1"
function purge() {
rm -rf "$1"
}
( main "$@" )
(main "$@")
rv=$?
purge "$WORK_DIR"
exit "$rv"


@@ -6,146 +6,151 @@ set -x
WORK_DIR="$PWD/.verify-$RANDOM"
MINIO_CONFIG_DIR="$WORK_DIR/.minio"
MINIO_OLD=( "$PWD/minio.RELEASE.2020-10-28T08-16-50Z" --config-dir "$MINIO_CONFIG_DIR" server )
MINIO=( "$PWD/minio" --config-dir "$MINIO_CONFIG_DIR" server )
MINIO_OLD=("$PWD/minio.RELEASE.2020-10-28T08-16-50Z" --config-dir "$MINIO_CONFIG_DIR" server)
MINIO=("$PWD/minio" --config-dir "$MINIO_CONFIG_DIR" server)
if [ ! -x "$PWD/minio" ]; then
echo "minio executable binary not found in current directory"
exit 1
echo "minio executable binary not found in current directory"
exit 1
fi
function download_old_release() {
if [ ! -f minio.RELEASE.2020-10-28T08-16-50Z ]; then
curl --silent -O https://dl.minio.io/server/minio/release/linux-amd64/archive/minio.RELEASE.2020-10-28T08-16-50Z
chmod a+x minio.RELEASE.2020-10-28T08-16-50Z
fi
if [ ! -f minio.RELEASE.2020-10-28T08-16-50Z ]; then
curl --silent -O https://dl.minio.io/server/minio/release/linux-amd64/archive/minio.RELEASE.2020-10-28T08-16-50Z
chmod a+x minio.RELEASE.2020-10-28T08-16-50Z
fi
}
function verify_rewrite() {
start_port=$1
start_port=$1
export MINIO_ACCESS_KEY=minio
export MINIO_SECRET_KEY=minio123
export MC_HOST_minio="http://minio:minio123@127.0.0.1:${start_port}/"
unset MINIO_KMS_AUTO_ENCRYPTION # do not auto-encrypt objects
export MINIO_CI_CD=1
export MINIO_ACCESS_KEY=minio
export MINIO_SECRET_KEY=minio123
export MC_HOST_minio="http://minio:minio123@127.0.0.1:${start_port}/"
unset MINIO_KMS_AUTO_ENCRYPTION # do not auto-encrypt objects
export MINIO_CI_CD=1
MC_BUILD_DIR="mc-$RANDOM"
if ! git clone --quiet https://github.com/minio/mc "$MC_BUILD_DIR"; then
echo "failed to download https://github.com/minio/mc"
MC_BUILD_DIR="mc-$RANDOM"
if ! git clone --quiet https://github.com/minio/mc "$MC_BUILD_DIR"; then
echo "failed to download https://github.com/minio/mc"
purge "${MC_BUILD_DIR}"
exit 1
fi
(cd "${MC_BUILD_DIR}" && go build -o "$WORK_DIR/mc")
# remove mc source.
purge "${MC_BUILD_DIR}"
exit 1
fi
(cd "${MC_BUILD_DIR}" && go build -o "$WORK_DIR/mc")
"${MINIO_OLD[@]}" --address ":$start_port" "${WORK_DIR}/xl{1...16}" >"${WORK_DIR}/server1.log" 2>&1 &
pid=$!
disown $pid
sleep 10
# remove mc source.
purge "${MC_BUILD_DIR}"
if ! ps -p ${pid} 1>&2 >/dev/null; then
echo "server1 log:"
cat "${WORK_DIR}/server1.log"
echo "FAILED"
purge "$WORK_DIR"
exit 1
fi
"${MINIO_OLD[@]}" --address ":$start_port" "${WORK_DIR}/xl{1...16}" > "${WORK_DIR}/server1.log" 2>&1 &
pid=$!
disown $pid
sleep 10
"${WORK_DIR}/mc" mb minio/healing-rewrite-bucket --quiet --with-lock
"${WORK_DIR}/mc" cp \
buildscripts/verify-build.sh \
minio/healing-rewrite-bucket/ \
--disable-multipart --quiet
if ! ps -p ${pid} 1>&2 >/dev/null; then
echo "server1 log:"
cat "${WORK_DIR}/server1.log"
echo "FAILED"
purge "$WORK_DIR"
exit 1
fi
"${WORK_DIR}/mc" cp \
buildscripts/verify-build.sh \
minio/healing-rewrite-bucket/ \
--disable-multipart --quiet
"${WORK_DIR}/mc" mb minio/healing-rewrite-bucket --quiet --with-lock
"${WORK_DIR}/mc" cp \
buildscripts/verify-build.sh \
minio/healing-rewrite-bucket/ \
--disable-multipart --quiet
"${WORK_DIR}/mc" cp \
buildscripts/verify-build.sh \
minio/healing-rewrite-bucket/ \
--disable-multipart --quiet
"${WORK_DIR}/mc" cp \
buildscripts/verify-build.sh \
minio/healing-rewrite-bucket/ \
--disable-multipart --quiet
kill ${pid}
sleep 3
"${WORK_DIR}/mc" cp \
buildscripts/verify-build.sh \
minio/healing-rewrite-bucket/ \
--disable-multipart --quiet
"${MINIO[@]}" --address ":$start_port" "${WORK_DIR}/xl{1...16}" >"${WORK_DIR}/server1.log" 2>&1 &
pid=$!
disown $pid
sleep 10
kill ${pid}
sleep 3
if ! ps -p ${pid} 1>&2 >/dev/null; then
echo "server1 log:"
cat "${WORK_DIR}/server1.log"
echo "FAILED"
purge "$WORK_DIR"
exit 1
fi
"${MINIO[@]}" --address ":$start_port" "${WORK_DIR}/xl{1...16}" > "${WORK_DIR}/server1.log" 2>&1 &
pid=$!
disown $pid
sleep 10
go install github.com/minio/minio/docs/debugging/s3-check-md5@latest
if ! s3-check-md5 \
-debug \
-versions \
-access-key minio \
-secret-key minio123 \
-endpoint http://127.0.0.1:${start_port}/ 2>&1 | grep INTACT; then
echo "server1 log:"
cat "${WORK_DIR}/server1.log"
echo "FAILED"
mkdir -p inspects
(
cd inspects
"${WORK_DIR}/mc" admin inspect minio/healing-rewrite-bucket/verify-build.sh/**
)
if ! ps -p ${pid} 1>&2 >/dev/null; then
echo "server1 log:"
cat "${WORK_DIR}/server1.log"
echo "FAILED"
purge "$WORK_DIR"
exit 1
fi
"${WORK_DIR}/mc" mb play/inspects
"${WORK_DIR}/mc" mirror inspects play/inspects
go build ./docs/debugging/s3-check-md5/
if ! ./s3-check-md5 \
-debug \
-versions \
-access-key minio \
-secret-key minio123 \
-endpoint http://127.0.0.1:${start_port}/ 2>&1 | grep INTACT; then
echo "server1 log:"
cat "${WORK_DIR}/server1.log"
echo "FAILED"
mkdir -p inspects
(cd inspects; "${WORK_DIR}/mc" admin inspect minio/healing-rewrite-bucket/verify-build.sh/**)
purge "$WORK_DIR"
exit 1
fi
"${WORK_DIR}/mc" mb play/inspects
"${WORK_DIR}/mc" mirror inspects play/inspects
go run ./buildscripts/heal-manual.go "127.0.0.1:${start_port}" "minio" "minio123"
sleep 1
purge "$WORK_DIR"
exit 1
fi
if ! s3-check-md5 \
-debug \
-versions \
-access-key minio \
-secret-key minio123 \
-endpoint http://127.0.0.1:${start_port}/ 2>&1 | grep INTACT; then
echo "server1 log:"
cat "${WORK_DIR}/server1.log"
echo "FAILED"
mkdir -p inspects
(
cd inspects
"${WORK_DIR}/mc" admin inspect minio/healing-rewrite-bucket/verify-build.sh/**
)
go run ./buildscripts/heal-manual.go "127.0.0.1:${start_port}" "minio" "minio123"
sleep 1
"${WORK_DIR}/mc" mb play/inspects
"${WORK_DIR}/mc" mirror inspects play/inspects
if ! ./s3-check-md5 \
-debug \
-versions \
-access-key minio \
-secret-key minio123 \
-endpoint http://127.0.0.1:${start_port}/ 2>&1 | grep INTACT; then
echo "server1 log:"
cat "${WORK_DIR}/server1.log"
echo "FAILED"
mkdir -p inspects
(cd inspects; "${WORK_DIR}/mc" admin inspect minio/healing-rewrite-bucket/verify-build.sh/**)
purge "$WORK_DIR"
exit 1
fi
"${WORK_DIR}/mc" mb play/inspects
"${WORK_DIR}/mc" mirror inspects play/inspects
purge "$WORK_DIR"
exit 1
fi
kill ${pid}
kill ${pid}
}
function main() {
download_old_release
download_old_release
start_port=$(shuf -i 10000-65000 -n 1)
start_port=$(shuf -i 10000-65000 -n 1)
verify_rewrite ${start_port}
verify_rewrite ${start_port}
}
function purge()
{
rm -rf "$1"
function purge() {
rm -rf "$1"
}
( main "$@" )
(main "$@")
rv=$?
purge "$WORK_DIR"
exit "$rv"


@@ -6,167 +6,172 @@ set -o pipefail
set -x
if [ ! -x "$PWD/minio" ]; then
echo "minio executable binary not found in current directory"
exit 1
echo "minio executable binary not found in current directory"
exit 1
fi
WORK_DIR="$PWD/.verify-$RANDOM"
MINIO_CONFIG_DIR="$WORK_DIR/.minio"
MINIO_OLD=( "$PWD/minio.RELEASE.2021-11-24T23-19-33Z" --config-dir "$MINIO_CONFIG_DIR" server )
MINIO=( "$PWD/minio" --config-dir "$MINIO_CONFIG_DIR" server )
MINIO_OLD=("$PWD/minio.RELEASE.2021-11-24T23-19-33Z" --config-dir "$MINIO_CONFIG_DIR" server)
MINIO=("$PWD/minio" --config-dir "$MINIO_CONFIG_DIR" server)
function download_old_release() {
if [ ! -f minio.RELEASE.2021-11-24T23-19-33Z ]; then
curl --silent -O https://dl.minio.io/server/minio/release/linux-amd64/archive/minio.RELEASE.2021-11-24T23-19-33Z
chmod a+x minio.RELEASE.2021-11-24T23-19-33Z
fi
if [ ! -f minio.RELEASE.2021-11-24T23-19-33Z ]; then
curl --silent -O https://dl.minio.io/server/minio/release/linux-amd64/archive/minio.RELEASE.2021-11-24T23-19-33Z
chmod a+x minio.RELEASE.2021-11-24T23-19-33Z
fi
}
function start_minio_16drive() {
start_port=$1
start_port=$1
export MINIO_ROOT_USER=minio
export MINIO_ROOT_PASSWORD=minio123
export MC_HOST_minio="http://minio:minio123@127.0.0.1:${start_port}/"
unset MINIO_KMS_AUTO_ENCRYPTION # do not auto-encrypt objects
export _MINIO_SHARD_DISKTIME_DELTA="5s" # do not change this as its needed for tests
export MINIO_CI_CD=1
export MINIO_ROOT_USER=minio
export MINIO_ROOT_PASSWORD=minio123
export MC_HOST_minio="http://minio:minio123@127.0.0.1:${start_port}/"
unset MINIO_KMS_AUTO_ENCRYPTION # do not auto-encrypt objects
export _MINIO_SHARD_DISKTIME_DELTA="5s" # do not change this as its needed for tests
export MINIO_CI_CD=1
MC_BUILD_DIR="mc-$RANDOM"
if ! git clone --quiet https://github.com/minio/mc "$MC_BUILD_DIR"; then
echo "failed to download https://github.com/minio/mc"
MC_BUILD_DIR="mc-$RANDOM"
if ! git clone --quiet https://github.com/minio/mc "$MC_BUILD_DIR"; then
echo "failed to download https://github.com/minio/mc"
purge "${MC_BUILD_DIR}"
exit 1
fi
(cd "${MC_BUILD_DIR}" && go build -o "$WORK_DIR/mc")
# remove mc source.
purge "${MC_BUILD_DIR}"
exit 1
fi
(cd "${MC_BUILD_DIR}" && go build -o "$WORK_DIR/mc")
"${MINIO_OLD[@]}" --address ":$start_port" "${WORK_DIR}/xl{1...16}" >"${WORK_DIR}/server1.log" 2>&1 &
pid=$!
disown $pid
sleep 30
# remove mc source.
purge "${MC_BUILD_DIR}"
if ! ps -p ${pid} 1>&2 >/dev/null; then
echo "server1 log:"
cat "${WORK_DIR}/server1.log"
echo "FAILED"
purge "$WORK_DIR"
exit 1
fi
"${MINIO_OLD[@]}" --address ":$start_port" "${WORK_DIR}/xl{1...16}" > "${WORK_DIR}/server1.log" 2>&1 &
pid=$!
disown $pid
sleep 30
shred --iterations=1 --size=5241856 - 1>"${WORK_DIR}/unaligned" 2>/dev/null
"${WORK_DIR}/mc" mb minio/healing-shard-bucket --quiet
"${WORK_DIR}/mc" cp \
"${WORK_DIR}/unaligned" \
minio/healing-shard-bucket/unaligned \
--disable-multipart --quiet
if ! ps -p ${pid} 1>&2 >/dev/null; then
echo "server1 log:"
cat "${WORK_DIR}/server1.log"
echo "FAILED"
purge "$WORK_DIR"
exit 1
fi
## "unaligned" object name gets consistently distributed
## to disks in following distribution order
##
## NOTE: if you change the name make sure to change the
## distribution order present here
##
## [15, 16, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14]
shred --iterations=1 --size=5241856 - 1>"${WORK_DIR}/unaligned" 2>/dev/null
"${WORK_DIR}/mc" mb minio/healing-shard-bucket --quiet
"${WORK_DIR}/mc" cp \
"${WORK_DIR}/unaligned" \
minio/healing-shard-bucket/unaligned \
--disable-multipart --quiet
## make sure to remove the "last" data shard
rm -rf "${WORK_DIR}/xl14/healing-shard-bucket/unaligned"
sleep 10
## Heal the shard
"${WORK_DIR}/mc" admin heal --quiet --recursive minio/healing-shard-bucket
## then remove any other data shard let's pick first disk
## - 1st data shard.
rm -rf "${WORK_DIR}/xl3/healing-shard-bucket/unaligned"
sleep 10
## "unaligned" object name gets consistently distributed
## to disks in following distribution order
##
## NOTE: if you change the name make sure to change the
## distribution order present here
##
## [15, 16, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14]
go install github.com/minio/minio/docs/debugging/s3-check-md5@latest
if ! s3-check-md5 \
-debug \
-access-key minio \
-secret-key minio123 \
-endpoint http://127.0.0.1:${start_port}/ 2>&1 | grep CORRUPTED; then
echo "server1 log:"
cat "${WORK_DIR}/server1.log"
echo "FAILED"
purge "$WORK_DIR"
exit 1
fi
## make sure to remove the "last" data shard
rm -rf "${WORK_DIR}/xl14/healing-shard-bucket/unaligned"
sleep 10
## Heal the shard
"${WORK_DIR}/mc" admin heal --quiet --recursive minio/healing-shard-bucket
## then remove any other data shard let's pick first disk
## - 1st data shard.
rm -rf "${WORK_DIR}/xl3/healing-shard-bucket/unaligned"
sleep 10
pkill minio
sleep 3
go build ./docs/debugging/s3-check-md5/
if ! ./s3-check-md5 \
-debug \
-access-key minio \
-secret-key minio123 \
-endpoint http://127.0.0.1:${start_port}/ 2>&1 | grep CORRUPTED; then
echo "server1 log:"
cat "${WORK_DIR}/server1.log"
echo "FAILED"
purge "$WORK_DIR"
exit 1
fi
"${MINIO[@]}" --address ":$start_port" "${WORK_DIR}/xl{1...16}" >"${WORK_DIR}/server1.log" 2>&1 &
pid=$!
disown $pid
sleep 30
pkill minio
sleep 3
if ! ps -p ${pid} 1>&2 >/dev/null; then
echo "server1 log:"
cat "${WORK_DIR}/server1.log"
echo "FAILED"
purge "$WORK_DIR"
exit 1
fi
"${MINIO[@]}" --address ":$start_port" "${WORK_DIR}/xl{1...16}" > "${WORK_DIR}/server1.log" 2>&1 &
pid=$!
disown $pid
sleep 30
if ! s3-check-md5 \
-debug \
-access-key minio \
-secret-key minio123 \
-endpoint http://127.0.0.1:${start_port}/ 2>&1 | grep INTACT; then
echo "server1 log:"
cat "${WORK_DIR}/server1.log"
echo "FAILED"
mkdir -p inspects
(
cd inspects
"${WORK_DIR}/mc" support inspect minio/healing-shard-bucket/unaligned/**
)
if ! ps -p ${pid} 1>&2 >/dev/null; then
echo "server1 log:"
cat "${WORK_DIR}/server1.log"
echo "FAILED"
purge "$WORK_DIR"
exit 1
fi
"${WORK_DIR}/mc" mb play/inspects
"${WORK_DIR}/mc" mirror inspects play/inspects
if ! ./s3-check-md5 \
-debug \
-access-key minio \
-secret-key minio123 \
-endpoint http://127.0.0.1:${start_port}/ 2>&1 | grep INTACT; then
echo "server1 log:"
cat "${WORK_DIR}/server1.log"
echo "FAILED"
mkdir -p inspects
(cd inspects; "${WORK_DIR}/mc" support inspect minio/healing-shard-bucket/unaligned/**)
purge "$WORK_DIR"
exit 1
fi
"${WORK_DIR}/mc" mb play/inspects
"${WORK_DIR}/mc" mirror inspects play/inspects
"${WORK_DIR}/mc" admin heal --quiet --recursive minio/healing-shard-bucket
purge "$WORK_DIR"
exit 1
fi
if ! s3-check-md5 \
-debug \
-access-key minio \
-secret-key minio123 \
-endpoint http://127.0.0.1:${start_port}/ 2>&1 | grep INTACT; then
echo "server1 log:"
cat "${WORK_DIR}/server1.log"
echo "FAILED"
mkdir -p inspects
(
cd inspects
"${WORK_DIR}/mc" support inspect minio/healing-shard-bucket/unaligned/**
)
"${WORK_DIR}/mc" admin heal --quiet --recursive minio/healing-shard-bucket
"${WORK_DIR}/mc" mb play/inspects
"${WORK_DIR}/mc" mirror inspects play/inspects
if ! ./s3-check-md5 \
-debug \
-access-key minio \
-secret-key minio123 \
-endpoint http://127.0.0.1:${start_port}/ 2>&1 | grep INTACT; then
echo "server1 log:"
cat "${WORK_DIR}/server1.log"
echo "FAILED"
mkdir -p inspects
(cd inspects; "${WORK_DIR}/mc" support inspect minio/healing-shard-bucket/unaligned/**)
purge "$WORK_DIR"
exit 1
fi
"${WORK_DIR}/mc" mb play/inspects
"${WORK_DIR}/mc" mirror inspects play/inspects
purge "$WORK_DIR"
exit 1
fi
pkill minio
sleep 3
pkill minio
sleep 3
}
function main() {
download_old_release
download_old_release
start_port=$(shuf -i 10000-65000 -n 1)
start_port=$(shuf -i 10000-65000 -n 1)
start_minio_16drive ${start_port}
start_minio_16drive ${start_port}
}
function purge()
{
rm -rf "$1"
function purge() {
rm -rf "$1"
}
( main "$@" )
(main "$@")
rv=$?
purge "$WORK_DIR"
exit "$rv"

View File

@@ -6,8 +6,8 @@ set -E
set -o pipefail
if [ ! -x "$PWD/minio" ]; then
echo "minio executable binary not found in current directory"
exit 1
echo "minio executable binary not found in current directory"
exit 1
fi
WORK_DIR="$PWD/.verify-$RANDOM"
@@ -25,285 +25,270 @@ export ENABLE_ADMIN=1
export MINIO_CI_CD=1
MINIO_CONFIG_DIR="$WORK_DIR/.minio"
MINIO=( "$PWD/minio" --config-dir "$MINIO_CONFIG_DIR" )
MINIO=("$PWD/minio" --config-dir "$MINIO_CONFIG_DIR")
FILE_1_MB="$MINT_DATA_DIR/datafile-1-MB"
FILE_65_MB="$MINT_DATA_DIR/datafile-65-MB"
FUNCTIONAL_TESTS="$WORK_DIR/functional-tests.sh"
function start_minio_fs()
{
export MINIO_ROOT_USER=$ACCESS_KEY
export MINIO_ROOT_PASSWORD=$SECRET_KEY
"${MINIO[@]}" server "${WORK_DIR}/fs-disk" >"$WORK_DIR/fs-minio.log" 2>&1 &
sleep 10
function start_minio_fs() {
export MINIO_ROOT_USER=$ACCESS_KEY
export MINIO_ROOT_PASSWORD=$SECRET_KEY
"${MINIO[@]}" server "${WORK_DIR}/fs-disk" >"$WORK_DIR/fs-minio.log" 2>&1 &
sleep 10
}
function start_minio_erasure()
{
"${MINIO[@]}" server "${WORK_DIR}/erasure-disk1" "${WORK_DIR}/erasure-disk2" "${WORK_DIR}/erasure-disk3" "${WORK_DIR}/erasure-disk4" >"$WORK_DIR/erasure-minio.log" 2>&1 &
sleep 15
function start_minio_erasure() {
"${MINIO[@]}" server "${WORK_DIR}/erasure-disk1" "${WORK_DIR}/erasure-disk2" "${WORK_DIR}/erasure-disk3" "${WORK_DIR}/erasure-disk4" >"$WORK_DIR/erasure-minio.log" 2>&1 &
sleep 15
}
function start_minio_erasure_sets()
{
export MINIO_ENDPOINTS="${WORK_DIR}/erasure-disk-sets{1...32}"
"${MINIO[@]}" server > "$WORK_DIR/erasure-minio-sets.log" 2>&1 &
sleep 15
function start_minio_erasure_sets() {
export MINIO_ENDPOINTS="${WORK_DIR}/erasure-disk-sets{1...32}"
"${MINIO[@]}" server >"$WORK_DIR/erasure-minio-sets.log" 2>&1 &
sleep 15
}
function start_minio_pool_erasure_sets()
{
export MINIO_ROOT_USER=$ACCESS_KEY
export MINIO_ROOT_PASSWORD=$SECRET_KEY
export MINIO_ENDPOINTS="http://127.0.0.1:9000${WORK_DIR}/pool-disk-sets{1...4} http://127.0.0.1:9001${WORK_DIR}/pool-disk-sets{5...8}"
"${MINIO[@]}" server --address ":9000" > "$WORK_DIR/pool-minio-9000.log" 2>&1 &
"${MINIO[@]}" server --address ":9001" > "$WORK_DIR/pool-minio-9001.log" 2>&1 &
function start_minio_pool_erasure_sets() {
export MINIO_ROOT_USER=$ACCESS_KEY
export MINIO_ROOT_PASSWORD=$SECRET_KEY
export MINIO_ENDPOINTS="http://127.0.0.1:9000${WORK_DIR}/pool-disk-sets{1...4} http://127.0.0.1:9001${WORK_DIR}/pool-disk-sets{5...8}"
"${MINIO[@]}" server --address ":9000" >"$WORK_DIR/pool-minio-9000.log" 2>&1 &
"${MINIO[@]}" server --address ":9001" >"$WORK_DIR/pool-minio-9001.log" 2>&1 &
sleep 40
sleep 40
}
function start_minio_pool_erasure_sets_ipv6()
{
export MINIO_ROOT_USER=$ACCESS_KEY
export MINIO_ROOT_PASSWORD=$SECRET_KEY
export MINIO_ENDPOINTS="http://[::1]:9000${WORK_DIR}/pool-disk-sets-ipv6{1...4} http://[::1]:9001${WORK_DIR}/pool-disk-sets-ipv6{5...8}"
"${MINIO[@]}" server --address="[::1]:9000" > "$WORK_DIR/pool-minio-ipv6-9000.log" 2>&1 &
"${MINIO[@]}" server --address="[::1]:9001" > "$WORK_DIR/pool-minio-ipv6-9001.log" 2>&1 &
function start_minio_pool_erasure_sets_ipv6() {
export MINIO_ROOT_USER=$ACCESS_KEY
export MINIO_ROOT_PASSWORD=$SECRET_KEY
export MINIO_ENDPOINTS="http://[::1]:9000${WORK_DIR}/pool-disk-sets-ipv6{1...4} http://[::1]:9001${WORK_DIR}/pool-disk-sets-ipv6{5...8}"
"${MINIO[@]}" server --address="[::1]:9000" >"$WORK_DIR/pool-minio-ipv6-9000.log" 2>&1 &
"${MINIO[@]}" server --address="[::1]:9001" >"$WORK_DIR/pool-minio-ipv6-9001.log" 2>&1 &
sleep 40
sleep 40
}
function start_minio_dist_erasure()
{
export MINIO_ROOT_USER=$ACCESS_KEY
export MINIO_ROOT_PASSWORD=$SECRET_KEY
export MINIO_ENDPOINTS="http://127.0.0.1:9000${WORK_DIR}/dist-disk1 http://127.0.0.1:9001${WORK_DIR}/dist-disk2 http://127.0.0.1:9002${WORK_DIR}/dist-disk3 http://127.0.0.1:9003${WORK_DIR}/dist-disk4"
for i in $(seq 0 3); do
"${MINIO[@]}" server --address ":900${i}" > "$WORK_DIR/dist-minio-900${i}.log" 2>&1 &
done
function start_minio_dist_erasure() {
export MINIO_ROOT_USER=$ACCESS_KEY
export MINIO_ROOT_PASSWORD=$SECRET_KEY
export MINIO_ENDPOINTS="http://127.0.0.1:9000${WORK_DIR}/dist-disk1 http://127.0.0.1:9001${WORK_DIR}/dist-disk2 http://127.0.0.1:9002${WORK_DIR}/dist-disk3 http://127.0.0.1:9003${WORK_DIR}/dist-disk4"
for i in $(seq 0 3); do
"${MINIO[@]}" server --address ":900${i}" >"$WORK_DIR/dist-minio-900${i}.log" 2>&1 &
done
sleep 40
sleep 40
}
function run_test_fs()
{
start_minio_fs
function run_test_fs() {
start_minio_fs
(cd "$WORK_DIR" && "$FUNCTIONAL_TESTS")
rv=$?
(cd "$WORK_DIR" && "$FUNCTIONAL_TESTS")
rv=$?
pkill minio
sleep 3
pkill minio
sleep 3
if [ "$rv" -ne 0 ]; then
cat "$WORK_DIR/fs-minio.log"
fi
rm -f "$WORK_DIR/fs-minio.log"
if [ "$rv" -ne 0 ]; then
cat "$WORK_DIR/fs-minio.log"
fi
rm -f "$WORK_DIR/fs-minio.log"
return "$rv"
return "$rv"
}
function run_test_erasure_sets()
{
start_minio_erasure_sets
function run_test_erasure_sets() {
start_minio_erasure_sets
(cd "$WORK_DIR" && "$FUNCTIONAL_TESTS")
rv=$?
(cd "$WORK_DIR" && "$FUNCTIONAL_TESTS")
rv=$?
pkill minio
sleep 3
pkill minio
sleep 3
if [ "$rv" -ne 0 ]; then
cat "$WORK_DIR/erasure-minio-sets.log"
fi
rm -f "$WORK_DIR/erasure-minio-sets.log"
if [ "$rv" -ne 0 ]; then
cat "$WORK_DIR/erasure-minio-sets.log"
fi
rm -f "$WORK_DIR/erasure-minio-sets.log"
return "$rv"
return "$rv"
}
function run_test_pool_erasure_sets()
{
start_minio_pool_erasure_sets
function run_test_pool_erasure_sets() {
start_minio_pool_erasure_sets
(cd "$WORK_DIR" && "$FUNCTIONAL_TESTS")
rv=$?
(cd "$WORK_DIR" && "$FUNCTIONAL_TESTS")
rv=$?
pkill minio
sleep 3
pkill minio
sleep 3
if [ "$rv" -ne 0 ]; then
for i in $(seq 0 1); do
echo "server$i log:"
cat "$WORK_DIR/pool-minio-900$i.log"
done
fi
if [ "$rv" -ne 0 ]; then
for i in $(seq 0 1); do
echo "server$i log:"
cat "$WORK_DIR/pool-minio-900$i.log"
done
fi
for i in $(seq 0 1); do
rm -f "$WORK_DIR/pool-minio-900$i.log"
done
for i in $(seq 0 1); do
rm -f "$WORK_DIR/pool-minio-900$i.log"
done
return "$rv"
return "$rv"
}
function run_test_pool_erasure_sets_ipv6()
{
start_minio_pool_erasure_sets_ipv6
function run_test_pool_erasure_sets_ipv6() {
start_minio_pool_erasure_sets_ipv6
export SERVER_ENDPOINT="[::1]:9000"
export SERVER_ENDPOINT="[::1]:9000"
(cd "$WORK_DIR" && "$FUNCTIONAL_TESTS")
rv=$?
(cd "$WORK_DIR" && "$FUNCTIONAL_TESTS")
rv=$?
pkill minio
sleep 3
pkill minio
sleep 3
if [ "$rv" -ne 0 ]; then
for i in $(seq 0 1); do
echo "server$i log:"
cat "$WORK_DIR/pool-minio-ipv6-900$i.log"
done
fi
if [ "$rv" -ne 0 ]; then
for i in $(seq 0 1); do
echo "server$i log:"
cat "$WORK_DIR/pool-minio-ipv6-900$i.log"
done
fi
for i in $(seq 0 1); do
rm -f "$WORK_DIR/pool-minio-ipv6-900$i.log"
done
for i in $(seq 0 1); do
rm -f "$WORK_DIR/pool-minio-ipv6-900$i.log"
done
return "$rv"
return "$rv"
}
function run_test_erasure()
{
start_minio_erasure
function run_test_erasure() {
start_minio_erasure
(cd "$WORK_DIR" && "$FUNCTIONAL_TESTS")
rv=$?
(cd "$WORK_DIR" && "$FUNCTIONAL_TESTS")
rv=$?
pkill minio
sleep 3
pkill minio
sleep 3
if [ "$rv" -ne 0 ]; then
cat "$WORK_DIR/erasure-minio.log"
fi
rm -f "$WORK_DIR/erasure-minio.log"
if [ "$rv" -ne 0 ]; then
cat "$WORK_DIR/erasure-minio.log"
fi
rm -f "$WORK_DIR/erasure-minio.log"
return "$rv"
return "$rv"
}
function run_test_dist_erasure()
{
start_minio_dist_erasure
function run_test_dist_erasure() {
start_minio_dist_erasure
(cd "$WORK_DIR" && "$FUNCTIONAL_TESTS")
rv=$?
(cd "$WORK_DIR" && "$FUNCTIONAL_TESTS")
rv=$?
pkill minio
sleep 3
pkill minio
sleep 3
if [ "$rv" -ne 0 ]; then
echo "server1 log:"
cat "$WORK_DIR/dist-minio-9000.log"
echo "server2 log:"
cat "$WORK_DIR/dist-minio-9001.log"
echo "server3 log:"
cat "$WORK_DIR/dist-minio-9002.log"
echo "server4 log:"
cat "$WORK_DIR/dist-minio-9003.log"
fi
if [ "$rv" -ne 0 ]; then
echo "server1 log:"
cat "$WORK_DIR/dist-minio-9000.log"
echo "server2 log:"
cat "$WORK_DIR/dist-minio-9001.log"
echo "server3 log:"
cat "$WORK_DIR/dist-minio-9002.log"
echo "server4 log:"
cat "$WORK_DIR/dist-minio-9003.log"
fi
rm -f "$WORK_DIR/dist-minio-9000.log" "$WORK_DIR/dist-minio-9001.log" "$WORK_DIR/dist-minio-9002.log" "$WORK_DIR/dist-minio-9003.log"
rm -f "$WORK_DIR/dist-minio-9000.log" "$WORK_DIR/dist-minio-9001.log" "$WORK_DIR/dist-minio-9002.log" "$WORK_DIR/dist-minio-9003.log"
return "$rv"
return "$rv"
}
function purge()
{
rm -rf "$1"
function purge() {
rm -rf "$1"
}
function __init__()
{
echo "Initializing environment"
mkdir -p "$WORK_DIR"
mkdir -p "$MINIO_CONFIG_DIR"
mkdir -p "$MINT_DATA_DIR"
function __init__() {
echo "Initializing environment"
mkdir -p "$WORK_DIR"
mkdir -p "$MINIO_CONFIG_DIR"
mkdir -p "$MINT_DATA_DIR"
MC_BUILD_DIR="mc-$RANDOM"
if ! git clone --quiet https://github.com/minio/mc "$MC_BUILD_DIR"; then
echo "failed to download https://github.com/minio/mc"
purge "${MC_BUILD_DIR}"
exit 1
fi
MC_BUILD_DIR="mc-$RANDOM"
if ! git clone --quiet https://github.com/minio/mc "$MC_BUILD_DIR"; then
echo "failed to download https://github.com/minio/mc"
purge "${MC_BUILD_DIR}"
exit 1
fi
(cd "${MC_BUILD_DIR}" && go build -o "$WORK_DIR/mc")
(cd "${MC_BUILD_DIR}" && go build -o "$WORK_DIR/mc")
# remove mc source.
purge "${MC_BUILD_DIR}"
# remove mc source.
purge "${MC_BUILD_DIR}"
shred -n 1 -s 1M - 1>"$FILE_1_MB" 2>/dev/null
shred -n 1 -s 65M - 1>"$FILE_65_MB" 2>/dev/null
shred -n 1 -s 1M - 1>"$FILE_1_MB" 2>/dev/null
shred -n 1 -s 65M - 1>"$FILE_65_MB" 2>/dev/null
## version is purposefully set to '3' for minio to migrate configuration file
echo '{"version": "3", "credential": {"accessKey": "minio", "secretKey": "minio123"}, "region": "us-east-1"}' > "$MINIO_CONFIG_DIR/config.json"
## version is purposefully set to '3' for minio to migrate configuration file
echo '{"version": "3", "credential": {"accessKey": "minio", "secretKey": "minio123"}, "region": "us-east-1"}' >"$MINIO_CONFIG_DIR/config.json"
if ! wget -q -O "$FUNCTIONAL_TESTS" https://raw.githubusercontent.com/minio/mc/master/functional-tests.sh; then
echo "failed to download https://raw.githubusercontent.com/minio/mc/master/functional-tests.sh"
exit 1
fi
if ! wget -q -O "$FUNCTIONAL_TESTS" https://raw.githubusercontent.com/minio/mc/master/functional-tests.sh; then
echo "failed to download https://raw.githubusercontent.com/minio/mc/master/functional-tests.sh"
exit 1
fi
sed -i 's|-sS|-sSg|g' "$FUNCTIONAL_TESTS"
chmod a+x "$FUNCTIONAL_TESTS"
sed -i 's|-sS|-sSg|g' "$FUNCTIONAL_TESTS"
chmod a+x "$FUNCTIONAL_TESTS"
}
function main()
{
echo "Testing in FS setup"
if ! run_test_fs; then
echo "FAILED"
purge "$WORK_DIR"
exit 1
fi
function main() {
echo "Testing in FS setup"
if ! run_test_fs; then
echo "FAILED"
purge "$WORK_DIR"
exit 1
fi
echo "Testing in Erasure setup"
if ! run_test_erasure; then
echo "FAILED"
purge "$WORK_DIR"
exit 1
fi
echo "Testing in Erasure setup"
if ! run_test_erasure; then
echo "FAILED"
purge "$WORK_DIR"
exit 1
fi
echo "Testing in Distributed Erasure setup"
if ! run_test_dist_erasure; then
echo "FAILED"
purge "$WORK_DIR"
exit 1
fi
echo "Testing in Distributed Erasure setup"
if ! run_test_dist_erasure; then
echo "FAILED"
purge "$WORK_DIR"
exit 1
fi
echo "Testing in Erasure setup as sets"
if ! run_test_erasure_sets; then
echo "FAILED"
purge "$WORK_DIR"
exit 1
fi
echo "Testing in Erasure setup as sets"
if ! run_test_erasure_sets; then
echo "FAILED"
purge "$WORK_DIR"
exit 1
fi
echo "Testing in Distributed Eraure expanded setup"
if ! run_test_pool_erasure_sets; then
echo "FAILED"
purge "$WORK_DIR"
exit 1
fi
echo "Testing in Distributed Eraure expanded setup"
if ! run_test_pool_erasure_sets; then
echo "FAILED"
purge "$WORK_DIR"
exit 1
fi
echo "Testing in Distributed Erasure expanded setup with ipv6"
if ! run_test_pool_erasure_sets_ipv6; then
echo "FAILED"
purge "$WORK_DIR"
exit 1
fi
echo "Testing in Distributed Erasure expanded setup with ipv6"
if ! run_test_pool_erasure_sets_ipv6; then
echo "FAILED"
purge "$WORK_DIR"
exit 1
fi
purge "$WORK_DIR"
purge "$WORK_DIR"
}
( __init__ "$@" && main "$@" )
(__init__ "$@" && main "$@")
rv=$?
purge "$WORK_DIR"
exit "$rv"

View File

@@ -4,96 +4,93 @@ set -E
set -o pipefail
set -x
if [ ! -x "$PWD/minio" ]; then
echo "minio executable binary not found in current directory"
exit 1
echo "minio executable binary not found in current directory"
exit 1
fi
WORK_DIR="$(mktemp -d)"
MINIO_CONFIG_DIR="$WORK_DIR/.minio"
MINIO=( "$PWD/minio" --config-dir "$MINIO_CONFIG_DIR" server )
MINIO=("$PWD/minio" --config-dir "$MINIO_CONFIG_DIR" server)
function start_minio() {
start_port=$1
start_port=$1
export MINIO_ROOT_USER=minio
export MINIO_ROOT_PASSWORD=minio123
unset MINIO_KMS_AUTO_ENCRYPTION # do not auto-encrypt objects
unset MINIO_CI_CD
unset CI
export MINIO_ROOT_USER=minio
export MINIO_ROOT_PASSWORD=minio123
unset MINIO_KMS_AUTO_ENCRYPTION # do not auto-encrypt objects
unset MINIO_CI_CD
unset CI
args=()
for i in $(seq 1 4); do
args+=("http://localhost:$[${start_port}+$i]${WORK_DIR}/mnt/disk$i/ ")
done
args=()
for i in $(seq 1 4); do
args+=("http://localhost:$((start_port + i))${WORK_DIR}/mnt/disk$i/ ")
done
for i in $(seq 1 4); do
"${MINIO[@]}" --address ":$[$start_port+$i]" ${args[@]} 2>&1 >"${WORK_DIR}/server$i.log" &
done
for i in $(seq 1 4); do
"${MINIO[@]}" --address ":$((start_port + i))" ${args[@]} 2>&1 >"${WORK_DIR}/server$i.log" &
done
# Wait until all nodes return 403
for i in $(seq 1 4); do
while [ "$(curl -m 1 -s -o /dev/null -w "%{http_code}" http://localhost:$[$start_port+$i])" -ne "403" ]; do
echo -n ".";
sleep 1;
done
done
# Wait until all nodes return 403
for i in $(seq 1 4); do
while [ "$(curl -m 1 -s -o /dev/null -w "%{http_code}" http://localhost:$((start_port + i)))" -ne "403" ]; do
echo -n "."
sleep 1
done
done
}
}
# Prepare fake disks with losetup
function prepare_block_devices() {
set -e
mkdir -p ${WORK_DIR}/disks/ ${WORK_DIR}/mnt/
sudo modprobe loop
for i in 1 2 3 4; do
dd if=/dev/zero of=${WORK_DIR}/disks/img.${i} bs=1M count=2000
device=$(sudo losetup --find --show ${WORK_DIR}/disks/img.${i})
sudo mkfs.ext4 -F ${device}
mkdir -p ${WORK_DIR}/mnt/disk${i}/
sudo mount ${device} ${WORK_DIR}/mnt/disk${i}/
sudo chown "$(id -u):$(id -g)" ${device} ${WORK_DIR}/mnt/disk${i}/
done
set +e
set -e
mkdir -p ${WORK_DIR}/disks/ ${WORK_DIR}/mnt/
sudo modprobe loop
for i in 1 2 3 4; do
dd if=/dev/zero of=${WORK_DIR}/disks/img.${i} bs=1M count=2000
device=$(sudo losetup --find --show ${WORK_DIR}/disks/img.${i})
sudo mkfs.ext4 -F ${device}
mkdir -p ${WORK_DIR}/mnt/disk${i}/
sudo mount ${device} ${WORK_DIR}/mnt/disk${i}/
sudo chown "$(id -u):$(id -g)" ${device} ${WORK_DIR}/mnt/disk${i}/
done
set +e
}
# Start a distributed MinIO setup, unmount one disk and check if it is formatted
function main() {
start_port=$(shuf -i 10000-65000 -n 1)
start_minio ${start_port}
start_port=$(shuf -i 10000-65000 -n 1)
start_minio ${start_port}
# Unmount the disk, after the unmount the device id
# /tmp/xxx/mnt/disk4 will be the same as '/' and it
# will be detected as root disk
while [ "$u" != "0" ]; do
sudo umount ${WORK_DIR}/mnt/disk4/
u=$?
sleep 1
done
# Unmount the disk, after the unmount the device id
# /tmp/xxx/mnt/disk4 will be the same as '/' and it
# will be detected as root disk
while [ "$u" != "0" ]; do
sudo umount ${WORK_DIR}/mnt/disk4/
u=$?
sleep 1
done
# Wait until MinIO self heal kicks in
sleep 60
# Wait until MinIO self heal kicks in
sleep 60
if [ -f ${WORK_DIR}/mnt/disk4/.minio.sys/format.json ]; then
echo "A root disk is formatted unexpectedely"
cat "${WORK_DIR}/server4.log"
exit -1
fi
if [ -f ${WORK_DIR}/mnt/disk4/.minio.sys/format.json ]; then
echo "A root disk is formatted unexpectedely"
cat "${WORK_DIR}/server4.log"
exit -1
fi
}
function cleanup() {
pkill minio
sudo umount ${WORK_DIR}/mnt/disk{1..3}/
sudo rm /dev/minio-loopdisk*
rm -rf "$WORK_DIR"
pkill minio
sudo umount ${WORK_DIR}/mnt/disk{1..3}/
sudo rm /dev/minio-loopdisk*
rm -rf "$WORK_DIR"
}
( prepare_block_devices )
( main "$@" )
(prepare_block_devices)
(main "$@")
rv=$?
cleanup
exit "$rv"

View File

@@ -5,139 +5,135 @@ set -E
set -o pipefail
if [ ! -x "$PWD/minio" ]; then
echo "minio executable binary not found in current directory"
exit 1
echo "minio executable binary not found in current directory"
exit 1
fi
WORK_DIR="$PWD/.verify-$RANDOM"
MINIO_CONFIG_DIR="$WORK_DIR/.minio"
MINIO=( "$PWD/minio" --config-dir "$MINIO_CONFIG_DIR" server )
MINIO=("$PWD/minio" --config-dir "$MINIO_CONFIG_DIR" server)
function start_minio_3_node() {
export MINIO_ROOT_USER=minio
export MINIO_ROOT_PASSWORD=minio123
export MINIO_ERASURE_SET_DRIVE_COUNT=6
export MINIO_CI_CD=1
export MINIO_ROOT_USER=minio
export MINIO_ROOT_PASSWORD=minio123
export MINIO_ERASURE_SET_DRIVE_COUNT=6
export MINIO_CI_CD=1
start_port=$2
args=""
for i in $(seq 1 3); do
args="$args http://127.0.0.1:$[$start_port+$i]${WORK_DIR}/$i/1/ http://127.0.0.1:$[$start_port+$i]${WORK_DIR}/$i/2/ http://127.0.0.1:$[$start_port+$i]${WORK_DIR}/$i/3/ http://127.0.0.1:$[$start_port+$i]${WORK_DIR}/$i/4/ http://127.0.0.1:$[$start_port+$i]${WORK_DIR}/$i/5/ http://127.0.0.1:$[$start_port+$i]${WORK_DIR}/$i/6/"
done
"${MINIO[@]}" --address ":$[$start_port+1]" $args > "${WORK_DIR}/dist-minio-server1.log" 2>&1 &
pid1=$!
disown ${pid1}
"${MINIO[@]}" --address ":$[$start_port+2]" $args > "${WORK_DIR}/dist-minio-server2.log" 2>&1 &
pid2=$!
disown $pid2
"${MINIO[@]}" --address ":$[$start_port+3]" $args > "${WORK_DIR}/dist-minio-server3.log" 2>&1 &
pid3=$!
disown $pid3
sleep "$1"
if ! ps -p $pid1 1>&2 > /dev/null; then
echo "server1 log:"
cat "${WORK_DIR}/dist-minio-server1.log"
echo "FAILED"
purge "$WORK_DIR"
exit 1
fi
if ! ps -p $pid2 1>&2 > /dev/null; then
echo "server2 log:"
cat "${WORK_DIR}/dist-minio-server2.log"
echo "FAILED"
purge "$WORK_DIR"
exit 1
fi
if ! ps -p $pid3 1>&2 > /dev/null; then
echo "server3 log:"
cat "${WORK_DIR}/dist-minio-server3.log"
echo "FAILED"
purge "$WORK_DIR"
exit 1
fi
if ! pkill minio; then
start_port=$2
args=""
for i in $(seq 1 3); do
echo "server$i log:"
cat "${WORK_DIR}/dist-minio-server$i.log"
args="$args http://127.0.0.1:$((start_port + i))${WORK_DIR}/$i/1/ http://127.0.0.1:$((start_port + i))${WORK_DIR}/$i/2/ http://127.0.0.1:$((start_port + i))${WORK_DIR}/$i/3/ http://127.0.0.1:$((start_port + i))${WORK_DIR}/$i/4/ http://127.0.0.1:$((start_port + i))${WORK_DIR}/$i/5/ http://127.0.0.1:$((start_port + i))${WORK_DIR}/$i/6/"
done
echo "FAILED"
purge "$WORK_DIR"
exit 1
fi
sleep 1;
if pgrep minio; then
# forcibly killing, to proceed further properly.
if ! pkill -9 minio; then
echo "no minio process running anymore, proceed."
"${MINIO[@]}" --address ":$((start_port + 1))" $args >"${WORK_DIR}/dist-minio-server1.log" 2>&1 &
pid1=$!
disown ${pid1}
"${MINIO[@]}" --address ":$((start_port + 2))" $args >"${WORK_DIR}/dist-minio-server2.log" 2>&1 &
pid2=$!
disown $pid2
"${MINIO[@]}" --address ":$((start_port + 3))" $args >"${WORK_DIR}/dist-minio-server3.log" 2>&1 &
pid3=$!
disown $pid3
sleep "$1"
if ! ps -p $pid1 1>&2 >/dev/null; then
echo "server1 log:"
cat "${WORK_DIR}/dist-minio-server1.log"
echo "FAILED"
purge "$WORK_DIR"
exit 1
fi
fi
}
if ! ps -p $pid2 1>&2 >/dev/null; then
echo "server2 log:"
cat "${WORK_DIR}/dist-minio-server2.log"
echo "FAILED"
purge "$WORK_DIR"
exit 1
fi
if ! ps -p $pid3 1>&2 >/dev/null; then
echo "server3 log:"
cat "${WORK_DIR}/dist-minio-server3.log"
echo "FAILED"
purge "$WORK_DIR"
exit 1
fi
if ! pkill minio; then
for i in $(seq 1 3); do
echo "server$i log:"
cat "${WORK_DIR}/dist-minio-server$i.log"
done
echo "FAILED"
purge "$WORK_DIR"
exit 1
fi
sleep 1
if pgrep minio; then
# forcibly killing, to proceed further properly.
if ! pkill -9 minio; then
echo "no minio process running anymore, proceed."
fi
fi
}
function check_online() {
if grep -q 'Unable to initialize sub-systems' ${WORK_DIR}/dist-minio-*.log; then
echo "1"
fi
if grep -q 'Unable to initialize sub-systems' ${WORK_DIR}/dist-minio-*.log; then
echo "1"
fi
}
function purge()
{
rm -rf "$1"
function purge() {
rm -rf "$1"
}
function __init__()
{
echo "Initializing environment"
mkdir -p "$WORK_DIR"
mkdir -p "$MINIO_CONFIG_DIR"
function __init__() {
echo "Initializing environment"
mkdir -p "$WORK_DIR"
mkdir -p "$MINIO_CONFIG_DIR"
## version is purposefully set to '3' for minio to migrate configuration file
echo '{"version": "3", "credential": {"accessKey": "minio", "secretKey": "minio123"}, "region": "us-east-1"}' > "$MINIO_CONFIG_DIR/config.json"
## version is purposefully set to '3' for minio to migrate configuration file
echo '{"version": "3", "credential": {"accessKey": "minio", "secretKey": "minio123"}, "region": "us-east-1"}' >"$MINIO_CONFIG_DIR/config.json"
}
function perform_test() {
start_minio_3_node 120 $2
start_minio_3_node 120 $2
echo "Testing Distributed Erasure setup healing of drives"
echo "Remove the contents of the disks belonging to '${1}' erasure set"
echo "Testing Distributed Erasure setup healing of drives"
echo "Remove the contents of the disks belonging to '${1}' erasure set"
rm -rf ${WORK_DIR}/${1}/*/
rm -rf ${WORK_DIR}/${1}/*/
start_minio_3_node 120 $2
start_minio_3_node 120 $2
rv=$(check_online)
if [ "$rv" == "1" ]; then
for i in $(seq 1 3); do
echo "server$i log:"
cat "${WORK_DIR}/dist-minio-server$i.log"
done
pkill -9 minio
echo "FAILED"
purge "$WORK_DIR"
exit 1
fi
rv=$(check_online)
if [ "$rv" == "1" ]; then
for i in $(seq 1 3); do
echo "server$i log:"
cat "${WORK_DIR}/dist-minio-server$i.log"
done
pkill -9 minio
echo "FAILED"
purge "$WORK_DIR"
exit 1
fi
}
function main()
{
# use same ports for all tests
start_port=$(shuf -i 10000-65000 -n 1)
function main() {
# use same ports for all tests
start_port=$(shuf -i 10000-65000 -n 1)
perform_test "2" ${start_port}
perform_test "1" ${start_port}
perform_test "3" ${start_port}
perform_test "2" ${start_port}
perform_test "1" ${start_port}
perform_test "3" ${start_port}
}
( __init__ "$@" && main "$@" )
(__init__ "$@" && main "$@")
rv=$?
purge "$WORK_DIR"
exit "$rv"

View File

@@ -25,7 +25,7 @@ import (
xhttp "github.com/minio/minio/internal/http"
"github.com/minio/minio/internal/logger"
"github.com/minio/mux"
"github.com/minio/pkg/bucket/policy"
"github.com/minio/pkg/v2/policy"
)
// Data types used for returning dummy access control

View File

@@ -32,7 +32,7 @@ import (
jsoniter "github.com/json-iterator/go"
"github.com/klauspost/compress/zip"
"github.com/minio/kes-go"
"github.com/minio/madmin-go/v2"
"github.com/minio/madmin-go/v3"
"github.com/minio/minio-go/v7/pkg/tags"
"github.com/minio/minio/internal/bucket/lifecycle"
objectlock "github.com/minio/minio/internal/bucket/object/lock"
@@ -41,8 +41,7 @@ import (
"github.com/minio/minio/internal/kms"
"github.com/minio/minio/internal/logger"
"github.com/minio/mux"
"github.com/minio/pkg/bucket/policy"
iampolicy "github.com/minio/pkg/iam/policy"
"github.com/minio/pkg/v2/policy"
)
const (
@@ -56,11 +55,9 @@ const (
// specified in the quota configuration will be applied by default
// to enforce total quota for the specified bucket.
func (a adminAPIHandlers) PutBucketQuotaConfigHandler(w http.ResponseWriter, r *http.Request) {
ctx := newContext(r, w, "PutBucketQuotaConfig")
ctx := r.Context()
defer logger.AuditLog(ctx, w, r, mustGetClaimsFromToken(r))
objectAPI, _ := validateAdminReq(ctx, w, r, iampolicy.SetBucketQuotaAdminAction)
objectAPI, _ := validateAdminReq(ctx, w, r, policy.SetBucketQuotaAdminAction)
if objectAPI == nil {
return
}
@@ -97,7 +94,7 @@ func (a adminAPIHandlers) PutBucketQuotaConfigHandler(w http.ResponseWriter, r *
Quota: data,
UpdatedAt: updatedAt,
}
if quotaConfig.Quota == 0 {
if quotaConfig.Size == 0 || quotaConfig.Quota == 0 {
bucketMeta.Quota = nil
}
@@ -110,11 +107,9 @@ func (a adminAPIHandlers) PutBucketQuotaConfigHandler(w http.ResponseWriter, r *
// GetBucketQuotaConfigHandler - gets bucket quota configuration
func (a adminAPIHandlers) GetBucketQuotaConfigHandler(w http.ResponseWriter, r *http.Request) {
ctx := newContext(r, w, "GetBucketQuotaConfig")
ctx := r.Context()
defer logger.AuditLog(ctx, w, r, mustGetClaimsFromToken(r))
objectAPI, _ := validateAdminReq(ctx, w, r, iampolicy.GetBucketQuotaAdminAction)
objectAPI, _ := validateAdminReq(ctx, w, r, policy.GetBucketQuotaAdminAction)
if objectAPI == nil {
return
}
@@ -145,15 +140,14 @@ func (a adminAPIHandlers) GetBucketQuotaConfigHandler(w http.ResponseWriter, r *
// SetRemoteTargetHandler - sets a remote target for bucket
func (a adminAPIHandlers) SetRemoteTargetHandler(w http.ResponseWriter, r *http.Request) {
ctx := newContext(r, w, "SetBucketTarget")
ctx := r.Context()
defer logger.AuditLog(ctx, w, r, mustGetClaimsFromToken(r))
vars := mux.Vars(r)
bucket := pathClean(vars["bucket"])
update := r.Form.Get("update") == "true"
// Get current object layer instance.
objectAPI, _ := validateAdminReq(ctx, w, r, iampolicy.SetBucketTargetAction)
objectAPI, _ := validateAdminReq(ctx, w, r, policy.SetBucketTargetAction)
if objectAPI == nil {
return
}
@@ -289,15 +283,14 @@ func (a adminAPIHandlers) SetRemoteTargetHandler(w http.ResponseWriter, r *http.
// ListRemoteTargetsHandler - lists remote target(s) for a bucket or gets a target
// for a particular ARN type
func (a adminAPIHandlers) ListRemoteTargetsHandler(w http.ResponseWriter, r *http.Request) {
ctx := newContext(r, w, "ListBucketTargets")
ctx := r.Context()
defer logger.AuditLog(ctx, w, r, mustGetClaimsFromToken(r))
vars := mux.Vars(r)
bucket := pathClean(vars["bucket"])
arnType := vars["type"]
// Get current object layer instance.
objectAPI, _ := validateAdminReq(ctx, w, r, iampolicy.GetBucketTargetAction)
objectAPI, _ := validateAdminReq(ctx, w, r, policy.GetBucketTargetAction)
if objectAPI == nil {
return
}
@@ -324,15 +317,14 @@ func (a adminAPIHandlers) ListRemoteTargetsHandler(w http.ResponseWriter, r *htt
// RemoveRemoteTargetHandler - removes a remote target for bucket with specified ARN
func (a adminAPIHandlers) RemoveRemoteTargetHandler(w http.ResponseWriter, r *http.Request) {
ctx := newContext(r, w, "RemoveBucketTarget")
ctx := r.Context()
defer logger.AuditLog(ctx, w, r, mustGetClaimsFromToken(r))
vars := mux.Vars(r)
bucket := pathClean(vars["bucket"])
arn := vars["arn"]
// Get current object layer instance.
objectAPI, _ := validateAdminReq(ctx, w, r, iampolicy.SetBucketTargetAction)
objectAPI, _ := validateAdminReq(ctx, w, r, policy.SetBucketTargetAction)
if objectAPI == nil {
return
}
@@ -368,12 +360,11 @@ func (a adminAPIHandlers) RemoveRemoteTargetHandler(w http.ResponseWriter, r *ht
// ExportBucketMetadataHandler - exports all bucket metadata as a zipped file
func (a adminAPIHandlers) ExportBucketMetadataHandler(w http.ResponseWriter, r *http.Request) {
ctx := newContext(r, w, "ExportBucketMetadata")
defer logger.AuditLog(ctx, w, r, mustGetClaimsFromToken(r))
ctx := r.Context()
bucket := pathClean(r.Form.Get("bucket"))
// Get current object layer instance.
objectAPI, _ := validateAdminReq(ctx, w, r, iampolicy.ExportBucketMetadataAction)
objectAPI, _ := validateAdminReq(ctx, w, r, policy.ExportBucketMetadataAction)
if objectAPI == nil {
return
}
@@ -401,7 +392,8 @@ func (a adminAPIHandlers) ExportBucketMetadataHandler(w http.ResponseWriter, r *
// of bucket metadata
zipWriter := zip.NewWriter(w)
defer zipWriter.Close()
rawDataFn := func(r io.Reader, filename string, sz int) error {
rawDataFn := func(r io.Reader, filename string, sz int) {
header, zerr := zip.FileInfoHeader(dummyFileInfo{
name: filename,
size: int64(sz),
@@ -410,20 +402,13 @@ func (a adminAPIHandlers) ExportBucketMetadataHandler(w http.ResponseWriter, r *
isDir: false,
sys: nil,
})
if zerr != nil {
logger.LogIf(ctx, zerr)
return nil
if zerr == nil {
header.Method = zip.Deflate
zwriter, zerr := zipWriter.CreateHeader(header)
if zerr == nil {
io.Copy(zwriter, r)
}
}
header.Method = zip.Deflate
zwriter, zerr := zipWriter.CreateHeader(header)
if zerr != nil {
logger.LogIf(ctx, zerr)
return nil
}
if _, err := io.Copy(zwriter, r); err != nil {
logger.LogIf(ctx, err)
}
return nil
}
cfgFiles := []string{
@@ -455,12 +440,9 @@ func (a adminAPIHandlers) ExportBucketMetadataHandler(w http.ResponseWriter, r *
writeErrorResponse(ctx, w, exportError(ctx, err, cfgFile, bucket), r.URL)
return
}
if err = rawDataFn(bytes.NewReader(configData), cfgPath, len(configData)); err != nil {
writeErrorResponse(ctx, w, exportError(ctx, err, cfgFile, bucket), r.URL)
return
}
rawDataFn(bytes.NewReader(configData), cfgPath, len(configData))
case bucketLifecycleConfig:
config, err := globalBucketMetadataSys.GetLifecycleConfig(bucket)
config, _, err := globalBucketMetadataSys.GetLifecycleConfig(bucket)
if err != nil {
if errors.Is(err, BucketLifecycleNotFound{Bucket: bucket}) {
continue
@@ -474,10 +456,7 @@ func (a adminAPIHandlers) ExportBucketMetadataHandler(w http.ResponseWriter, r *
writeErrorResponse(ctx, w, exportError(ctx, err, cfgFile, bucket), r.URL)
return
}
if err = rawDataFn(bytes.NewReader(configData), cfgPath, len(configData)); err != nil {
writeErrorResponse(ctx, w, exportError(ctx, err, cfgFile, bucket), r.URL)
return
}
rawDataFn(bytes.NewReader(configData), cfgPath, len(configData))
case bucketQuotaConfigFile:
config, _, err := globalBucketMetadataSys.GetQuotaConfig(ctx, bucket)
if err != nil {
@@ -492,10 +471,7 @@ func (a adminAPIHandlers) ExportBucketMetadataHandler(w http.ResponseWriter, r *
writeErrorResponseJSON(ctx, w, toAdminAPIErr(ctx, err), r.URL)
return
}
if err = rawDataFn(bytes.NewReader(configData), cfgPath, len(configData)); err != nil {
writeErrorResponse(ctx, w, exportError(ctx, err, cfgFile, bucket), r.URL)
return
}
rawDataFn(bytes.NewReader(configData), cfgPath, len(configData))
case bucketSSEConfig:
config, _, err := globalBucketMetadataSys.GetSSEConfig(bucket)
if err != nil {
@@ -510,10 +486,7 @@ func (a adminAPIHandlers) ExportBucketMetadataHandler(w http.ResponseWriter, r *
writeErrorResponse(ctx, w, exportError(ctx, err, cfgFile, bucket), r.URL)
return
}
if err = rawDataFn(bytes.NewReader(configData), cfgPath, len(configData)); err != nil {
writeErrorResponse(ctx, w, exportError(ctx, err, cfgFile, bucket), r.URL)
return
}
rawDataFn(bytes.NewReader(configData), cfgPath, len(configData))
case bucketTaggingConfig:
config, _, err := globalBucketMetadataSys.GetTaggingConfig(bucket)
if err != nil {
@@ -528,10 +501,7 @@ func (a adminAPIHandlers) ExportBucketMetadataHandler(w http.ResponseWriter, r *
writeErrorResponse(ctx, w, exportError(ctx, err, cfgFile, bucket), r.URL)
return
}
if err = rawDataFn(bytes.NewReader(configData), cfgPath, len(configData)); err != nil {
writeErrorResponse(ctx, w, exportError(ctx, err, cfgFile, bucket), r.URL)
return
}
rawDataFn(bytes.NewReader(configData), cfgPath, len(configData))
case objectLockConfig:
config, _, err := globalBucketMetadataSys.GetObjectLockConfig(bucket)
if err != nil {
@@ -547,10 +517,7 @@ func (a adminAPIHandlers) ExportBucketMetadataHandler(w http.ResponseWriter, r *
writeErrorResponse(ctx, w, exportError(ctx, err, cfgFile, bucket), r.URL)
return
}
if err = rawDataFn(bytes.NewReader(configData), cfgPath, len(configData)); err != nil {
writeErrorResponse(ctx, w, exportError(ctx, err, cfgFile, bucket), r.URL)
return
}
rawDataFn(bytes.NewReader(configData), cfgPath, len(configData))
case bucketVersioningConfig:
config, _, err := globalBucketMetadataSys.GetVersioningConfig(bucket)
if err != nil {
@@ -566,10 +533,7 @@ func (a adminAPIHandlers) ExportBucketMetadataHandler(w http.ResponseWriter, r *
writeErrorResponse(ctx, w, exportError(ctx, err, cfgFile, bucket), r.URL)
return
}
if err = rawDataFn(bytes.NewReader(configData), cfgPath, len(configData)); err != nil {
writeErrorResponse(ctx, w, exportError(ctx, err, cfgFile, bucket), r.URL)
return
}
rawDataFn(bytes.NewReader(configData), cfgPath, len(configData))
case bucketReplicationConfig:
config, _, err := globalBucketMetadataSys.GetReplicationConfig(ctx, bucket)
if err != nil {
@@ -584,11 +548,7 @@ func (a adminAPIHandlers) ExportBucketMetadataHandler(w http.ResponseWriter, r *
writeErrorResponse(ctx, w, exportError(ctx, err, cfgFile, bucket), r.URL)
return
}
if err = rawDataFn(bytes.NewReader(configData), cfgPath, len(configData)); err != nil {
writeErrorResponse(ctx, w, exportError(ctx, err, cfgFile, bucket), r.URL)
return
}
rawDataFn(bytes.NewReader(configData), cfgPath, len(configData))
case bucketTargetsFile:
config, err := globalBucketMetadataSys.GetBucketTargetsConfig(bucket)
if err != nil {
@@ -604,10 +564,7 @@ func (a adminAPIHandlers) ExportBucketMetadataHandler(w http.ResponseWriter, r *
writeErrorResponse(ctx, w, exportError(ctx, err, cfgFile, bucket), r.URL)
return
}
if err = rawDataFn(bytes.NewReader(configData), cfgPath, len(configData)); err != nil {
writeErrorResponse(ctx, w, exportError(ctx, err, cfgFile, bucket), r.URL)
return
}
rawDataFn(bytes.NewReader(configData), cfgPath, len(configData))
}
}
}
@@ -652,12 +609,10 @@ func (i *importMetaReport) SetStatus(bucket, fname string, err error) {
// 2. Replication config - is omitted from import as remote target credentials are not available from exported data for security reasons.
// 3. lifecycle config - if transition rules are present, tier name needs to have been defined.
func (a adminAPIHandlers) ImportBucketMetadataHandler(w http.ResponseWriter, r *http.Request) {
ctx := newContext(r, w, "ImportBucketMetadata")
defer logger.AuditLog(ctx, w, r, mustGetClaimsFromToken(r))
ctx := r.Context()
// Get current object layer instance.
objectAPI, _ := validateAdminReq(ctx, w, r, iampolicy.ImportBucketMetadataAction)
objectAPI, _ := validateAdminReq(ctx, w, r, policy.ImportBucketMetadataAction)
if objectAPI == nil {
return
}
@@ -672,12 +627,31 @@ func (a adminAPIHandlers) ImportBucketMetadataHandler(w http.ResponseWriter, r *
writeErrorResponseJSON(ctx, w, errorCodes.ToAPIErr(ErrInvalidRequest), r.URL)
return
}
bucketMap := make(map[string]struct{}, 1)
rpt := importMetaReport{
madmin.BucketMetaImportErrs{
Buckets: make(map[string]madmin.BucketStatus, len(zr.File)),
},
}
bucketMap := make(map[string]*BucketMetadata, len(zr.File))
updatedAt := UTCNow()
for _, file := range zr.File {
slc := strings.Split(file.Name, slashSeparator)
if len(slc) != 2 { // expecting bucket/configfile in the zipfile
rpt.SetStatus(file.Name, "", fmt.Errorf("malformed zip - expecting format bucket/<config.json>"))
continue
}
bucket := slc[0]
meta, err := readBucketMetadata(ctx, objectAPI, bucket)
if err == nil {
bucketMap[bucket] = &meta
} else if err != errConfigNotFound {
rpt.SetStatus(bucket, "", err)
}
}
// import object lock config if any - order of import matters here.
for _, file := range zr.File {
slc := strings.Split(file.Name, slashSeparator)
@@ -705,7 +679,7 @@ func (a adminAPIHandlers) ImportBucketMetadataHandler(w http.ResponseWriter, r *
}
if _, ok := bucketMap[bucket]; !ok {
opts := MakeBucketOptions{
LockEnabled: config.ObjectLockEnabled == "Enabled",
LockEnabled: config.Enabled(),
}
err = objectAPI.MakeBucket(ctx, bucket, opts)
if err != nil {
@@ -714,7 +688,8 @@ func (a adminAPIHandlers) ImportBucketMetadataHandler(w http.ResponseWriter, r *
continue
}
}
bucketMap[bucket] = struct{}{}
v := newBucketMetadata(bucket)
bucketMap[bucket] = &v
}
// Deny object locking configuration settings on existing buckets without object lock enabled.
@@ -723,27 +698,9 @@ func (a adminAPIHandlers) ImportBucketMetadataHandler(w http.ResponseWriter, r *
continue
}
updatedAt, err := globalBucketMetadataSys.Update(ctx, bucket, objectLockConfig, configData)
if err != nil {
rpt.SetStatus(bucket, fileName, err)
continue
}
bucketMap[bucket].ObjectLockConfigXML = configData
bucketMap[bucket].ObjectLockConfigUpdatedAt = updatedAt
rpt.SetStatus(bucket, fileName, nil)
// Call site replication hook.
//
// We encode the xml bytes as base64 to ensure there are no encoding
// errors.
cfgStr := base64.StdEncoding.EncodeToString(configData)
if err = globalSiteReplicationSys.BucketMetaHook(ctx, madmin.SRBucketMeta{
Type: madmin.SRBucketMetaTypeObjectLockConfig,
Bucket: bucket,
ObjectLockConfig: &cfgStr,
UpdatedAt: updatedAt,
}); err != nil {
rpt.SetStatus(bucket, fileName, err)
continue
}
}
}
@@ -773,7 +730,8 @@ func (a adminAPIHandlers) ImportBucketMetadataHandler(w http.ResponseWriter, r *
continue
}
}
bucketMap[bucket] = struct{}{}
v := newBucketMetadata(bucket)
bucketMap[bucket] = &v
}
if globalSiteReplicationSys.isEnabled() && v.Suspended() {
@@ -796,10 +754,8 @@ func (a adminAPIHandlers) ImportBucketMetadataHandler(w http.ResponseWriter, r *
continue
}
if _, err = globalBucketMetadataSys.Update(ctx, bucket, bucketVersioningConfig, configData); err != nil {
rpt.SetStatus(bucket, fileName, err)
continue
}
bucketMap[bucket].VersioningConfigXML = configData
bucketMap[bucket].VersioningConfigUpdatedAt = updatedAt
rpt.SetStatus(bucket, fileName, nil)
}
}
@@ -817,6 +773,7 @@ func (a adminAPIHandlers) ImportBucketMetadataHandler(w http.ResponseWriter, r *
continue
}
bucket, fileName := slc[0], slc[1]
// create bucket if it does not exist yet.
if _, ok := bucketMap[bucket]; !ok {
err = objectAPI.MakeBucket(ctx, bucket, MakeBucketOptions{})
@@ -826,7 +783,8 @@ func (a adminAPIHandlers) ImportBucketMetadataHandler(w http.ResponseWriter, r *
continue
}
}
bucketMap[bucket] = struct{}{}
v := newBucketMetadata(bucket)
bucketMap[bucket] = &v
}
if _, ok := bucketMap[bucket]; !ok {
continue
@@ -845,12 +803,7 @@ func (a adminAPIHandlers) ImportBucketMetadataHandler(w http.ResponseWriter, r *
continue
}
if _, err = globalBucketMetadataSys.Update(ctx, bucket, bucketNotificationConfig, configData); err != nil {
rpt.SetStatus(bucket, fileName, err)
continue
}
rulesMap := config.ToRulesMap()
globalEventNotifier.AddRulesMap(bucket, rulesMap)
bucketMap[bucket].NotificationConfigXML = configData
rpt.SetStatus(bucket, fileName, nil)
case bucketPolicyConfig:
// Error out if Content-Length is beyond allowed size.
@@ -865,7 +818,7 @@ func (a adminAPIHandlers) ImportBucketMetadataHandler(w http.ResponseWriter, r *
continue
}
bucketPolicy, err := policy.ParseConfig(bytes.NewReader(bucketPolicyBytes), bucket)
bucketPolicy, err := policy.ParseBucketPolicyConfig(bytes.NewReader(bucketPolicyBytes), bucket)
if err != nil {
rpt.SetStatus(bucket, fileName, err)
continue
@@ -883,22 +836,9 @@ func (a adminAPIHandlers) ImportBucketMetadataHandler(w http.ResponseWriter, r *
continue
}
updatedAt, err := globalBucketMetadataSys.Update(ctx, bucket, bucketPolicyConfig, configData)
if err != nil {
rpt.SetStatus(bucket, fileName, err)
continue
}
bucketMap[bucket].PolicyConfigJSON = configData
bucketMap[bucket].PolicyConfigUpdatedAt = updatedAt
rpt.SetStatus(bucket, fileName, nil)
// Call site replication hook.
if err = globalSiteReplicationSys.BucketMetaHook(ctx, madmin.SRBucketMeta{
Type: madmin.SRBucketMetaTypePolicy,
Bucket: bucket,
Policy: bucketPolicyBytes,
UpdatedAt: updatedAt,
}); err != nil {
rpt.SetStatus(bucket, fileName, err)
continue
}
case bucketLifecycleConfig:
bucketLifecycle, err := lifecycle.ParseLifecycleConfig(io.LimitReader(reader, sz))
if err != nil {
@@ -924,10 +864,8 @@ func (a adminAPIHandlers) ImportBucketMetadataHandler(w http.ResponseWriter, r *
continue
}
if _, err = globalBucketMetadataSys.Update(ctx, bucket, bucketLifecycleConfig, configData); err != nil {
rpt.SetStatus(bucket, fileName, err)
continue
}
bucketMap[bucket].LifecycleConfigXML = configData
bucketMap[bucket].LifecycleConfigUpdatedAt = updatedAt
rpt.SetStatus(bucket, fileName, nil)
case bucketSSEConfig:
// Parse bucket encryption xml
@@ -962,29 +900,9 @@ func (a adminAPIHandlers) ImportBucketMetadataHandler(w http.ResponseWriter, r *
continue
}
// Store the bucket encryption configuration in the object layer
updatedAt, err := globalBucketMetadataSys.Update(ctx, bucket, bucketSSEConfig, configData)
if err != nil {
rpt.SetStatus(bucket, fileName, err)
continue
}
bucketMap[bucket].EncryptionConfigXML = configData
bucketMap[bucket].EncryptionConfigUpdatedAt = updatedAt
rpt.SetStatus(bucket, fileName, nil)
// Call site replication hook.
//
// We encode the xml bytes as base64 to ensure there are no encoding
// errors.
cfgStr := base64.StdEncoding.EncodeToString(configData)
if err = globalSiteReplicationSys.BucketMetaHook(ctx, madmin.SRBucketMeta{
Type: madmin.SRBucketMetaTypeSSEConfig,
Bucket: bucket,
SSEConfig: &cfgStr,
UpdatedAt: updatedAt,
}); err != nil {
rpt.SetStatus(bucket, fileName, err)
continue
}
case bucketTaggingConfig:
tags, err := tags.ParseBucketXML(io.LimitReader(reader, sz))
if err != nil {
@@ -998,27 +916,9 @@ func (a adminAPIHandlers) ImportBucketMetadataHandler(w http.ResponseWriter, r *
continue
}
updatedAt, err := globalBucketMetadataSys.Update(ctx, bucket, bucketTaggingConfig, configData)
if err != nil {
rpt.SetStatus(bucket, fileName, err)
continue
}
bucketMap[bucket].TaggingConfigXML = configData
bucketMap[bucket].TaggingConfigUpdatedAt = updatedAt
rpt.SetStatus(bucket, fileName, nil)
// Call site replication hook.
//
// We encode the xml bytes as base64 to ensure there are no encoding
// errors.
cfgStr := base64.StdEncoding.EncodeToString(configData)
if err = globalSiteReplicationSys.BucketMetaHook(ctx, madmin.SRBucketMeta{
Type: madmin.SRBucketMetaTypeTags,
Bucket: bucket,
Tags: &cfgStr,
UpdatedAt: updatedAt,
}); err != nil {
rpt.SetStatus(bucket, fileName, err)
continue
}
case bucketQuotaConfigFile:
data, err := io.ReadAll(reader)
if err != nil {
@@ -1026,37 +926,49 @@ func (a adminAPIHandlers) ImportBucketMetadataHandler(w http.ResponseWriter, r *
continue
}
quotaConfig, err := parseBucketQuota(bucket, data)
_, err = parseBucketQuota(bucket, data)
if err != nil {
rpt.SetStatus(bucket, fileName, err)
continue
}
updatedAt, err := globalBucketMetadataSys.Update(ctx, bucket, bucketQuotaConfigFile, data)
if err != nil {
rpt.SetStatus(bucket, fileName, err)
continue
}
bucketMap[bucket].QuotaConfigJSON = data
bucketMap[bucket].QuotaConfigUpdatedAt = updatedAt
rpt.SetStatus(bucket, fileName, nil)
bucketMeta := madmin.SRBucketMeta{
Type: madmin.SRBucketMetaTypeQuotaConfig,
Bucket: bucket,
Quota: data,
UpdatedAt: updatedAt,
}
if quotaConfig.Quota == 0 {
bucketMeta.Quota = nil
}
// Call site replication hook.
if err = globalSiteReplicationSys.BucketMetaHook(ctx, bucketMeta); err != nil {
rpt.SetStatus(bucket, fileName, err)
continue
}
}
}
enc := func(b []byte) *string {
if b == nil {
return nil
}
v := base64.StdEncoding.EncodeToString(b)
return &v
}
for bucket, meta := range bucketMap {
err := globalBucketMetadataSys.save(ctx, *meta)
if err != nil {
rpt.SetStatus(bucket, "", err)
continue
}
// Call site replication hook.
if err = globalSiteReplicationSys.BucketMetaHook(ctx, madmin.SRBucketMeta{
Bucket: bucket,
Quota: meta.QuotaConfigJSON,
Policy: meta.PolicyConfigJSON,
Versioning: enc(meta.VersioningConfigXML),
Tags: enc(meta.TaggingConfigXML),
ObjectLockConfig: enc(meta.ObjectLockConfigXML),
SSEConfig: enc(meta.EncryptionConfigXML),
UpdatedAt: updatedAt,
}); err != nil {
rpt.SetStatus(bucket, "", err)
continue
}
}
rptData, err := json.Marshal(rpt.BucketMetaImportErrs)
if err != nil {
writeErrorResponse(ctx, w, toAPIError(ctx, err), r.URL)
@@ -1069,13 +981,12 @@ func (a adminAPIHandlers) ImportBucketMetadataHandler(w http.ResponseWriter, r *
// ReplicationDiffHandler - POST returns info on unreplicated versions for a remote target ARN
// to the connected HTTP client.
func (a adminAPIHandlers) ReplicationDiffHandler(w http.ResponseWriter, r *http.Request) {
ctx := newContext(r, w, "ReplicationDiff")
defer logger.AuditLog(ctx, w, r, mustGetClaimsFromToken(r))
ctx := r.Context()
vars := mux.Vars(r)
bucket := vars["bucket"]
objectAPI, _ := validateAdminReq(ctx, w, r, iampolicy.ReplicationDiff)
objectAPI, _ := validateAdminReq(ctx, w, r, policy.ReplicationDiff)
if objectAPI == nil {
return
}
@@ -1129,3 +1040,62 @@ func (a adminAPIHandlers) ReplicationDiffHandler(w http.ResponseWriter, r *http.
}
}
}
// ReplicationMRFHandler - POST returns info on entries in the MRF backlog for a node or all nodes
func (a adminAPIHandlers) ReplicationMRFHandler(w http.ResponseWriter, r *http.Request) {
ctx := r.Context()
vars := mux.Vars(r)
bucket := vars["bucket"]
objectAPI, _ := validateAdminReq(ctx, w, r, policy.ReplicationDiff)
if objectAPI == nil {
return
}
// Check if bucket exists.
if bucket != "" {
if _, err := objectAPI.GetBucketInfo(ctx, bucket, BucketOptions{}); err != nil {
writeErrorResponse(ctx, w, toAPIError(ctx, err), r.URL)
return
}
}
q := r.Form
node := q.Get("node")
keepAliveTicker := time.NewTicker(500 * time.Millisecond)
defer keepAliveTicker.Stop()
mrfCh, err := globalNotificationSys.GetReplicationMRF(ctx, bucket, node)
if err != nil {
writeErrorResponse(ctx, w, toAPIError(ctx, err), r.URL)
return
}
enc := json.NewEncoder(w)
for {
select {
case entry, ok := <-mrfCh:
if !ok {
return
}
if err := enc.Encode(entry); err != nil {
return
}
if len(mrfCh) == 0 {
// Flush if nothing is queued
w.(http.Flusher).Flush()
}
case <-keepAliveTicker.C:
if len(mrfCh) > 0 {
continue
}
if _, err := w.Write([]byte(" ")); err != nil {
return
}
w.(http.Flusher).Flush()
case <-ctx.Done():
return
}
}
}
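
A minimal client sketch for consuming this streaming handler, assuming it is routed as an admin endpoint that emits newline-delimited JSON entries padded with keep-alive spaces; the URL and entry shape below are illustrative assumptions, not taken from the routing code:

package main

import (
	"encoding/json"
	"fmt"
	"log"
	"net/http"
)

func main() {
	// Hypothetical admin endpoint path; request signing is omitted for brevity.
	resp, err := http.Post("http://127.0.0.1:9000/minio/admin/v3/replication/mrf?bucket=mybucket&node=all", "application/json", nil)
	if err != nil {
		log.Fatal(err)
	}
	defer resp.Body.Close()

	// json.Decoder skips the whitespace the server writes as keep-alives,
	// so it can read one JSON entry after another off the open stream.
	dec := json.NewDecoder(resp.Body)
	for {
		var entry map[string]interface{} // shape depends on the server's MRF entry type
		if err := dec.Decode(&entry); err != nil {
			break // io.EOF once the server closes the stream
		}
		fmt.Println(entry)
	}
}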

View File

@@ -24,17 +24,17 @@ import (
"net/http"
"github.com/minio/kes-go"
"github.com/minio/madmin-go/v2"
"github.com/minio/madmin-go/v3"
"github.com/minio/minio/internal/auth"
"github.com/minio/minio/internal/config"
iampolicy "github.com/minio/pkg/iam/policy"
"github.com/minio/pkg/v2/policy"
)
// validateAdminReq will validate request against and return whether it is allowed.
// If any of the supplied actions are allowed it will be successful.
// If nil ObjectLayer is returned, the operation is not permitted.
// When nil ObjectLayer has been returned an error has always been sent to w.
func validateAdminReq(ctx context.Context, w http.ResponseWriter, r *http.Request, actions ...iampolicy.AdminAction) (ObjectLayer, auth.Credentials) {
func validateAdminReq(ctx context.Context, w http.ResponseWriter, r *http.Request, actions ...policy.AdminAction) (ObjectLayer, auth.Credentials) {
// Get current object layer instance.
objectAPI := newObjectLayerFn()
if objectAPI == nil || globalNotificationSys == nil {
@@ -78,7 +78,7 @@ func toAdminAPIErr(ctx context.Context, err error) APIError {
var apiErr APIError
switch e := err.(type) {
case iampolicy.Error:
case policy.Error:
apiErr = APIError{
Code: "XMinioMalformedIAMPolicy",
Description: e.Error(),
@@ -226,12 +226,10 @@ func toAdminAPIErr(ctx context.Context, err error) APIError {
// toAdminAPIErrCode - converts errErasureWriteQuorum error to admin API
// specific error.
func toAdminAPIErrCode(ctx context.Context, err error) APIErrorCode {
switch err {
case errErasureWriteQuorum:
if errors.Is(err, errErasureWriteQuorum) {
return ErrAdminConfigNoQuorum
default:
return toAPIErrorCode(ctx, err)
}
return toAPIErrorCode(ctx, err)
}
// wraps export error for more context

View File

@@ -26,9 +26,8 @@ import (
"strconv"
"strings"
"github.com/minio/madmin-go/v2"
"github.com/minio/madmin-go/v3"
"github.com/minio/minio/internal/config"
"github.com/minio/minio/internal/config/cache"
"github.com/minio/minio/internal/config/etcd"
xldap "github.com/minio/minio/internal/config/identity/ldap"
"github.com/minio/minio/internal/config/identity/openid"
@@ -38,16 +37,14 @@ import (
"github.com/minio/minio/internal/config/subnet"
"github.com/minio/minio/internal/logger"
"github.com/minio/mux"
iampolicy "github.com/minio/pkg/iam/policy"
"github.com/minio/pkg/v2/policy"
)
// DelConfigKVHandler - DELETE /minio/admin/v3/del-config-kv
func (a adminAPIHandlers) DelConfigKVHandler(w http.ResponseWriter, r *http.Request) {
ctx := newContext(r, w, "DeleteConfigKV")
ctx := r.Context()
defer logger.AuditLog(ctx, w, r, mustGetClaimsFromToken(r))
objectAPI, cred := validateAdminReq(ctx, w, r, iampolicy.ConfigUpdateAdminAction)
objectAPI, cred := validateAdminReq(ctx, w, r, policy.ConfigUpdateAdminAction)
if objectAPI == nil {
return
}
@@ -83,7 +80,7 @@ func (a adminAPIHandlers) DelConfigKVHandler(w http.ResponseWriter, r *http.Requ
return
}
if err = validateConfig(cfg, subSys); err != nil {
if err = validateConfig(ctx, cfg, subSys); err != nil {
writeCustomErrorResponseJSON(ctx, w, errorCodes.ToAPIErr(ErrAdminConfigBadJSON), err.Error(), r.URL)
return
}
@@ -149,11 +146,9 @@ type setConfigResult struct {
// SetConfigKVHandler - PUT /minio/admin/v3/set-config-kv
func (a adminAPIHandlers) SetConfigKVHandler(w http.ResponseWriter, r *http.Request) {
ctx := newContext(r, w, "SetConfigKV")
ctx := r.Context()
defer logger.AuditLog(ctx, w, r, mustGetClaimsFromToken(r))
objectAPI, cred := validateAdminReq(ctx, w, r, iampolicy.ConfigUpdateAdminAction)
objectAPI, cred := validateAdminReq(ctx, w, r, policy.ConfigUpdateAdminAction)
if objectAPI == nil {
return
}
@@ -211,7 +206,12 @@ func setConfigKV(ctx context.Context, objectAPI ObjectLayer, kvBytes []byte) (re
return
}
if verr := validateConfig(result.Cfg, result.SubSys); verr != nil {
tgts, err := config.ParseConfigTargetID(bytes.NewReader(kvBytes))
if err != nil {
return
}
ctx = context.WithValue(ctx, config.ContextKeyForTargetFromConfig, tgts)
if verr := validateConfig(ctx, result.Cfg, result.SubSys); verr != nil {
err = badConfigErr{Err: verr}
return
}
@@ -236,12 +236,12 @@ func setConfigKV(ctx context.Context, objectAPI ObjectLayer, kvBytes []byte) (re
// 1. `subsys:target` -> request for config of a single subsystem and target pair.
// 2. `subsys:` -> request for config of a single subsystem and the default target.
// 3. `subsys` -> request for config of all targets for the given subsystem.
//
// This is a reporting API and config secrets are redacted in the response.
func (a adminAPIHandlers) GetConfigKVHandler(w http.ResponseWriter, r *http.Request) {
ctx := newContext(r, w, "GetConfigKV")
ctx := r.Context()
defer logger.AuditLog(ctx, w, r, mustGetClaimsFromToken(r))
objectAPI, cred := validateAdminReq(ctx, w, r, iampolicy.ConfigUpdateAdminAction)
objectAPI, cred := validateAdminReq(ctx, w, r, policy.ConfigUpdateAdminAction)
if objectAPI == nil {
return
}
@@ -263,7 +263,7 @@ func (a adminAPIHandlers) GetConfigKVHandler(w http.ResponseWriter, r *http.Requ
}
}
subSysConfigs, err := cfg.GetSubsysInfo(subSys, target)
subSysConfigs, err := cfg.GetSubsysInfo(subSys, target, true)
if err != nil {
writeErrorResponseJSON(ctx, w, toAdminAPIErr(ctx, err), r.URL)
return
@@ -285,11 +285,9 @@ func (a adminAPIHandlers) GetConfigKVHandler(w http.ResponseWriter, r *http.Requ
}
func (a adminAPIHandlers) ClearConfigHistoryKVHandler(w http.ResponseWriter, r *http.Request) {
ctx := newContext(r, w, "ClearConfigHistoryKV")
ctx := r.Context()
defer logger.AuditLog(ctx, w, r, mustGetClaimsFromToken(r))
objectAPI, _ := validateAdminReq(ctx, w, r, iampolicy.ConfigUpdateAdminAction)
objectAPI, _ := validateAdminReq(ctx, w, r, policy.ConfigUpdateAdminAction)
if objectAPI == nil {
return
}
@@ -320,11 +318,9 @@ func (a adminAPIHandlers) ClearConfigHistoryKVHandler(w http.ResponseWriter, r *
// RestoreConfigHistoryKVHandler - restores a config with KV settings for the given KV id.
func (a adminAPIHandlers) RestoreConfigHistoryKVHandler(w http.ResponseWriter, r *http.Request) {
ctx := newContext(r, w, "RestoreConfigHistoryKV")
ctx := r.Context()
defer logger.AuditLog(ctx, w, r, mustGetClaimsFromToken(r))
objectAPI, _ := validateAdminReq(ctx, w, r, iampolicy.ConfigUpdateAdminAction)
objectAPI, _ := validateAdminReq(ctx, w, r, policy.ConfigUpdateAdminAction)
if objectAPI == nil {
return
}
@@ -353,7 +349,7 @@ func (a adminAPIHandlers) RestoreConfigHistoryKVHandler(w http.ResponseWriter, r
return
}
if err = validateConfig(cfg, ""); err != nil {
if err = validateConfig(ctx, cfg, ""); err != nil {
writeCustomErrorResponseJSON(ctx, w, errorCodes.ToAPIErr(ErrAdminConfigBadJSON), err.Error(), r.URL)
return
}
@@ -368,11 +364,9 @@ func (a adminAPIHandlers) RestoreConfigHistoryKVHandler(w http.ResponseWriter, r
// ListConfigHistoryKVHandler - lists all the KV ids.
func (a adminAPIHandlers) ListConfigHistoryKVHandler(w http.ResponseWriter, r *http.Request) {
ctx := newContext(r, w, "ListConfigHistoryKV")
ctx := r.Context()
defer logger.AuditLog(ctx, w, r, mustGetClaimsFromToken(r))
objectAPI, cred := validateAdminReq(ctx, w, r, iampolicy.ConfigUpdateAdminAction)
objectAPI, cred := validateAdminReq(ctx, w, r, policy.ConfigUpdateAdminAction)
if objectAPI == nil {
return
}
@@ -408,11 +402,9 @@ func (a adminAPIHandlers) ListConfigHistoryKVHandler(w http.ResponseWriter, r *h
// HelpConfigKVHandler - GET /minio/admin/v3/help-config-kv?subSys={subSys}&key={key}
func (a adminAPIHandlers) HelpConfigKVHandler(w http.ResponseWriter, r *http.Request) {
ctx := newContext(r, w, "HelpConfigKV")
ctx := r.Context()
defer logger.AuditLog(ctx, w, r, mustGetClaimsFromToken(r))
objectAPI, _ := validateAdminReq(ctx, w, r, iampolicy.ConfigUpdateAdminAction)
objectAPI, _ := validateAdminReq(ctx, w, r, policy.ConfigUpdateAdminAction)
if objectAPI == nil {
return
}
@@ -435,11 +427,9 @@ func (a adminAPIHandlers) HelpConfigKVHandler(w http.ResponseWriter, r *http.Req
// SetConfigHandler - PUT /minio/admin/v3/config
func (a adminAPIHandlers) SetConfigHandler(w http.ResponseWriter, r *http.Request) {
ctx := newContext(r, w, "SetConfig")
ctx := r.Context()
defer logger.AuditLog(ctx, w, r, mustGetClaimsFromToken(r))
objectAPI, cred := validateAdminReq(ctx, w, r, iampolicy.ConfigUpdateAdminAction)
objectAPI, cred := validateAdminReq(ctx, w, r, policy.ConfigUpdateAdminAction)
if objectAPI == nil {
return
}
@@ -464,7 +454,7 @@ func (a adminAPIHandlers) SetConfigHandler(w http.ResponseWriter, r *http.Reques
return
}
if err = validateConfig(cfg, ""); err != nil {
if err = validateConfig(ctx, cfg, ""); err != nil {
writeCustomErrorResponseJSON(ctx, w, errorCodes.ToAPIErr(ErrAdminConfigBadJSON), err.Error(), r.URL)
return
}
@@ -485,13 +475,13 @@ func (a adminAPIHandlers) SetConfigHandler(w http.ResponseWriter, r *http.Reques
}
// GetConfigHandler - GET /minio/admin/v3/config
// Get config.json of this minio setup.
//
// This endpoint is mainly for exporting and backing up the configuration.
// Secrets are not redacted.
func (a adminAPIHandlers) GetConfigHandler(w http.ResponseWriter, r *http.Request) {
ctx := newContext(r, w, "GetConfig")
ctx := r.Context()
defer logger.AuditLog(ctx, w, r, mustGetClaimsFromToken(r))
objectAPI, cred := validateAdminReq(ctx, w, r, iampolicy.ConfigUpdateAdminAction)
objectAPI, cred := validateAdminReq(ctx, w, r, policy.ConfigUpdateAdminAction)
if objectAPI == nil {
return
}
@@ -502,15 +492,13 @@ func (a adminAPIHandlers) GetConfigHandler(w http.ResponseWriter, r *http.Reques
hkvs := config.HelpSubSysMap[""]
for _, hkv := range hkvs {
// We ignore the error below, as we cannot get one.
cfgSubsysItems, _ := cfg.GetSubsysInfo(hkv.Key, "")
cfgSubsysItems, _ := cfg.GetSubsysInfo(hkv.Key, "", false)
for _, item := range cfgSubsysItems {
off := item.Config.Get(config.Enable) == config.EnableOff
switch hkv.Key {
case config.EtcdSubSys:
off = !etcd.Enabled(item.Config)
case config.CacheSubSys:
off = !cache.Enabled(item.Config)
case config.StorageClassSubSys:
off = !storageclass.Enabled(item.Config)
case config.PolicyPluginSubSys:
@@ -541,7 +529,7 @@ func (a adminAPIHandlers) GetConfigHandler(w http.ResponseWriter, r *http.Reques
// setLoggerWebhookSubnetProxy - Sets the logger webhook's subnet proxy value to
// one being set for subnet proxy
func setLoggerWebhookSubnetProxy(subSys string, cfg config.Config) bool {
if subSys == config.SubnetSubSys {
if subSys == config.SubnetSubSys || subSys == config.LoggerWebhookSubSys {
subnetWebhookCfg := cfg[config.LoggerWebhookSubSys][subnet.LoggerWebhookName]
loggerWebhookSubnetProxy := subnetWebhookCfg.Get(logger.Proxy)
subnetProxy := cfg[config.SubnetSubSys][config.Default].Get(logger.Proxy)


@@ -26,19 +26,19 @@ import (
"net/http"
"strings"
"github.com/minio/madmin-go/v2"
"github.com/minio/madmin-go/v3"
"github.com/minio/minio-go/v7/pkg/set"
"github.com/minio/minio/internal/config"
cfgldap "github.com/minio/minio/internal/config/identity/ldap"
"github.com/minio/minio/internal/config/identity/openid"
"github.com/minio/minio/internal/logger"
"github.com/minio/mux"
iampolicy "github.com/minio/pkg/iam/policy"
"github.com/minio/pkg/ldap"
"github.com/minio/pkg/v2/ldap"
"github.com/minio/pkg/v2/policy"
)
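This import hunk shows the pattern repeated in nearly every file of this diff: madmin-go moves from module v2 to v3, and the iampolicy alias for github.com/minio/pkg/iam/policy gives way to the plain policy package from github.com/minio/pkg/v2. The shape of the change in isolation (module paths as they appear in this diff):

package cmd

import (
	"github.com/minio/madmin-go/v3"
	"github.com/minio/pkg/v2/policy"
)

// Call sites change mechanically once the alias is gone, e.g.
// iampolicy.ConfigUpdateAdminAction becomes:
var _ = policy.ConfigUpdateAdminAction

// and admin API constants now come from madmin v3:
var _ = madmin.AdminAPIVersion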
func (a adminAPIHandlers) addOrUpdateIDPHandler(ctx context.Context, w http.ResponseWriter, r *http.Request, isUpdate bool) {
objectAPI, cred := validateAdminReq(ctx, w, r, iampolicy.ConfigUpdateAdminAction)
func addOrUpdateIDPHandler(ctx context.Context, w http.ResponseWriter, r *http.Request, isUpdate bool) {
objectAPI, cred := validateAdminReq(ctx, w, r, policy.ConfigUpdateAdminAction)
if objectAPI == nil {
return
}
@@ -121,11 +121,11 @@ func (a adminAPIHandlers) addOrUpdateIDPHandler(ctx context.Context, w http.Resp
// IDP config is not dynamic. Sanity check.
if dynamic {
writeCustomErrorResponseJSON(ctx, w, errorCodes.ToAPIErr(ErrInternalError), err.Error(), r.URL)
writeCustomErrorResponseJSON(ctx, w, errorCodes.ToAPIErr(ErrInternalError), "", r.URL)
return
}
if err = validateConfig(cfg, subSys); err != nil {
if err = validateConfig(ctx, cfg, subSys); err != nil {
var validationErr ldap.Validation
if errors.As(err, &validationErr) {
@@ -198,32 +198,29 @@ func handleCreateUpdateValidation(s config.Config, subSys, cfgTarget string, isU
//
// PUT <admin-prefix>/idp-cfg/openid/_ -> create (default) named config `_`
func (a adminAPIHandlers) AddIdentityProviderCfg(w http.ResponseWriter, r *http.Request) {
ctx := newContext(r, w, "AddIdentityProviderCfg")
defer logger.AuditLog(ctx, w, r, mustGetClaimsFromToken(r))
ctx := r.Context()
a.addOrUpdateIDPHandler(ctx, w, r, false)
addOrUpdateIDPHandler(ctx, w, r, false)
}
// UpdateIdentityProviderCfg: updates an existing IDP config for openid/ldap.
//
// PATCH <admin-prefix>/idp-cfg/openid/dex1 -> update named config `dex1`
// POST <admin-prefix>/idp-cfg/openid/dex1 -> update named config `dex1`
//
// PATCH <admin-prefix>/idp-cfg/openid/_ -> update (default) named config `_`
// POST <admin-prefix>/idp-cfg/openid/_ -> update (default) named config `_`
func (a adminAPIHandlers) UpdateIdentityProviderCfg(w http.ResponseWriter, r *http.Request) {
ctx := newContext(r, w, "UpdateIdentityProviderCfg")
defer logger.AuditLog(ctx, w, r, mustGetClaimsFromToken(r))
ctx := r.Context()
a.addOrUpdateIDPHandler(ctx, w, r, true)
addOrUpdateIDPHandler(ctx, w, r, true)
}
// ListIdentityProviderCfg:
//
// GET <admin-prefix>/idp-cfg/openid -> lists openid provider configs.
func (a adminAPIHandlers) ListIdentityProviderCfg(w http.ResponseWriter, r *http.Request) {
ctx := newContext(r, w, "ListIdentityProviderCfg")
defer logger.AuditLog(ctx, w, r, mustGetClaimsFromToken(r))
ctx := r.Context()
objectAPI, cred := validateAdminReq(ctx, w, r, iampolicy.ConfigUpdateAdminAction)
objectAPI, cred := validateAdminReq(ctx, w, r, policy.ConfigUpdateAdminAction)
if objectAPI == nil {
return
}
@@ -274,10 +271,9 @@ func (a adminAPIHandlers) ListIdentityProviderCfg(w http.ResponseWriter, r *http
//
// GET <admin-prefix>/idp-cfg/openid/dex_test
func (a adminAPIHandlers) GetIdentityProviderCfg(w http.ResponseWriter, r *http.Request) {
ctx := newContext(r, w, "GetIdentityProviderCfg")
defer logger.AuditLog(ctx, w, r, mustGetClaimsFromToken(r))
ctx := r.Context()
objectAPI, cred := validateAdminReq(ctx, w, r, iampolicy.ConfigUpdateAdminAction)
objectAPI, cred := validateAdminReq(ctx, w, r, policy.ConfigUpdateAdminAction)
if objectAPI == nil {
return
}
@@ -334,11 +330,9 @@ func (a adminAPIHandlers) GetIdentityProviderCfg(w http.ResponseWriter, r *http.
//
// DELETE <admin-prefix>/idp-cfg/openid/dex_test
func (a adminAPIHandlers) DeleteIdentityProviderCfg(w http.ResponseWriter, r *http.Request) {
ctx := newContext(r, w, "DeleteIdentityProviderCfg")
ctx := r.Context()
defer logger.AuditLog(ctx, w, r, mustGetClaimsFromToken(r))
objectAPI, _ := validateAdminReq(ctx, w, r, iampolicy.ConfigUpdateAdminAction)
objectAPI, _ := validateAdminReq(ctx, w, r, policy.ConfigUpdateAdminAction)
if objectAPI == nil {
return
}
@@ -422,7 +416,7 @@ func (a adminAPIHandlers) DeleteIdentityProviderCfg(w http.ResponseWriter, r *ht
writeErrorResponseJSON(ctx, w, toAdminAPIErr(ctx, err), r.URL)
return
}
if err = validateConfig(cfg, subSys); err != nil {
if err = validateConfig(ctx, cfg, subSys); err != nil {
var validationErr ldap.Validation
if errors.As(err, &validationErr) {


@@ -22,10 +22,10 @@ import (
"io"
"net/http"
"github.com/minio/madmin-go/v2"
"github.com/minio/madmin-go/v3"
"github.com/minio/minio/internal/logger"
"github.com/minio/mux"
iampolicy "github.com/minio/pkg/iam/policy"
"github.com/minio/pkg/v2/policy"
)
// ListLDAPPolicyMappingEntities lists users/groups mapped to given/all policies.
@@ -45,14 +45,12 @@ import (
//
// When all query parameters are omitted, returns mappings for all policies.
func (a adminAPIHandlers) ListLDAPPolicyMappingEntities(w http.ResponseWriter, r *http.Request) {
ctx := newContext(r, w, "ListLDAPPolicyMappingEntities")
defer logger.AuditLog(ctx, w, r, mustGetClaimsFromToken(r))
ctx := r.Context()
// Check authorization.
objectAPI, cred := validateAdminReq(ctx, w, r,
iampolicy.ListGroupsAdminAction, iampolicy.ListUsersAdminAction, iampolicy.ListUserPoliciesAdminAction)
policy.ListGroupsAdminAction, policy.ListUsersAdminAction, policy.ListUserPoliciesAdminAction)
if objectAPI == nil {
return
}
@@ -94,13 +92,11 @@ func (a adminAPIHandlers) ListLDAPPolicyMappingEntities(w http.ResponseWriter, r
//
// POST <admin-prefix>/idp/ldap/policy/{operation}
func (a adminAPIHandlers) AttachDetachPolicyLDAP(w http.ResponseWriter, r *http.Request) {
ctx := newContext(r, w, "AttachDetachPolicyLDAP")
defer logger.AuditLog(ctx, w, r, mustGetClaimsFromToken(r))
ctx := r.Context()
// Check authorization.
objectAPI, cred := validateAdminReq(ctx, w, r, iampolicy.UpdatePolicyAssociationAction)
objectAPI, cred := validateAdminReq(ctx, w, r, policy.UpdatePolicyAssociationAction)
if objectAPI == nil {
return
}
@@ -150,7 +146,7 @@ func (a adminAPIHandlers) AttachDetachPolicyLDAP(w http.ResponseWriter, r *http.
}
// Call IAM subsystem
updatedAt, addedOrRemoved, err := globalIAMSys.PolicyDBUpdateLDAP(ctx, isAttach, par)
updatedAt, addedOrRemoved, _, err := globalIAMSys.PolicyDBUpdateLDAP(ctx, isAttach, par)
if err != nil {
writeErrorResponseJSON(ctx, w, toAdminAPIErr(ctx, err), r.URL)
return


@@ -26,20 +26,18 @@ import (
"github.com/minio/minio/internal/logger"
"github.com/minio/mux"
iampolicy "github.com/minio/pkg/iam/policy"
"github.com/minio/pkg/v2/policy"
)
var (
errRebalanceDecommissionAlreadyRunning = errors.New("Rebalance cannot be started, decommission is aleady in progress")
errRebalanceDecommissionAlreadyRunning = errors.New("Rebalance cannot be started, decommission is already in progress")
errDecommissionRebalanceAlreadyRunning = errors.New("Decommission cannot be started, rebalance is already in progress")
)
func (a adminAPIHandlers) StartDecommission(w http.ResponseWriter, r *http.Request) {
ctx := newContext(r, w, "StartDecommission")
ctx := r.Context()
defer logger.AuditLog(ctx, w, r, mustGetClaimsFromToken(r))
objectAPI, _ := validateAdminReq(ctx, w, r, iampolicy.DecommissionAdminAction)
objectAPI, _ := validateAdminReq(ctx, w, r, policy.DecommissionAdminAction)
if objectAPI == nil {
return
}
@@ -95,7 +93,7 @@ func (a adminAPIHandlers) StartDecommission(w http.ResponseWriter, r *http.Reque
poolIndices = append(poolIndices, idx)
}
if len(poolIndices) > 0 && globalEndpoints[poolIndices[0]].Endpoints[0].IsLocal {
if len(poolIndices) > 0 && !globalEndpoints[poolIndices[0]].Endpoints[0].IsLocal {
ep := globalEndpoints[poolIndices[0]].Endpoints[0]
for nodeIdx, proxyEp := range globalProxyEndpoints {
if proxyEp.Endpoint.Host == ep.Host {
@@ -113,11 +111,9 @@ func (a adminAPIHandlers) StartDecommission(w http.ResponseWriter, r *http.Reque
}
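The one-character fix above inverts the proxy test in StartDecommission: the request must be forwarded to the node owning the pool's first endpoint only when that endpoint is not local. Reduced to a sketch with hypothetical types:

package main

import "fmt"

type endpoint struct {
	Host    string
	IsLocal bool
}

// routeDecommission decides whether to handle the request here or forward
// it to the node that owns the pool's first endpoint.
func routeDecommission(first endpoint) string {
	if !first.IsLocal { // fixed: proxy only when we do NOT own the endpoint
		return "proxy to " + first.Host
	}
	return "handle locally"
}

func main() {
	fmt.Println(routeDecommission(endpoint{Host: "node2:9000", IsLocal: false}))
	fmt.Println(routeDecommission(endpoint{Host: "node1:9000", IsLocal: true}))
}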
func (a adminAPIHandlers) CancelDecommission(w http.ResponseWriter, r *http.Request) {
ctx := newContext(r, w, "CancelDecommission")
ctx := r.Context()
defer logger.AuditLog(ctx, w, r, mustGetClaimsFromToken(r))
objectAPI, _ := validateAdminReq(ctx, w, r, iampolicy.DecommissionAdminAction)
objectAPI, _ := validateAdminReq(ctx, w, r, policy.DecommissionAdminAction)
if objectAPI == nil {
return
}
@@ -161,11 +157,9 @@ func (a adminAPIHandlers) CancelDecommission(w http.ResponseWriter, r *http.Requ
}
func (a adminAPIHandlers) StatusPool(w http.ResponseWriter, r *http.Request) {
ctx := newContext(r, w, "StatusPool")
ctx := r.Context()
defer logger.AuditLog(ctx, w, r, mustGetClaimsFromToken(r))
objectAPI, _ := validateAdminReq(ctx, w, r, iampolicy.ServerInfoAdminAction, iampolicy.DecommissionAdminAction)
objectAPI, _ := validateAdminReq(ctx, w, r, policy.ServerInfoAdminAction, policy.DecommissionAdminAction)
if objectAPI == nil {
return
}
@@ -204,11 +198,9 @@ func (a adminAPIHandlers) StatusPool(w http.ResponseWriter, r *http.Request) {
}
func (a adminAPIHandlers) ListPools(w http.ResponseWriter, r *http.Request) {
ctx := newContext(r, w, "ListPools")
ctx := r.Context()
defer logger.AuditLog(ctx, w, r, mustGetClaimsFromToken(r))
objectAPI, _ := validateAdminReq(ctx, w, r, iampolicy.ServerInfoAdminAction, iampolicy.DecommissionAdminAction)
objectAPI, _ := validateAdminReq(ctx, w, r, policy.ServerInfoAdminAction, policy.DecommissionAdminAction)
if objectAPI == nil {
return
}
@@ -239,10 +231,9 @@ func (a adminAPIHandlers) ListPools(w http.ResponseWriter, r *http.Request) {
}
func (a adminAPIHandlers) RebalanceStart(w http.ResponseWriter, r *http.Request) {
ctx := newContext(r, w, "RebalanceStart")
defer logger.AuditLog(ctx, w, r, mustGetClaimsFromToken(r))
ctx := r.Context()
objectAPI, _ := validateAdminReq(ctx, w, r, iampolicy.RebalanceAdminAction)
objectAPI, _ := validateAdminReq(ctx, w, r, policy.RebalanceAdminAction)
if objectAPI == nil {
return
}
@@ -311,10 +302,9 @@ func (a adminAPIHandlers) RebalanceStart(w http.ResponseWriter, r *http.Request)
}
func (a adminAPIHandlers) RebalanceStatus(w http.ResponseWriter, r *http.Request) {
ctx := newContext(r, w, "RebalanceStatus")
defer logger.AuditLog(ctx, w, r, mustGetClaimsFromToken(r))
ctx := r.Context()
objectAPI, _ := validateAdminReq(ctx, w, r, iampolicy.RebalanceAdminAction)
objectAPI, _ := validateAdminReq(ctx, w, r, policy.RebalanceAdminAction)
if objectAPI == nil {
return
}
@@ -352,10 +342,9 @@ func (a adminAPIHandlers) RebalanceStatus(w http.ResponseWriter, r *http.Request
}
func (a adminAPIHandlers) RebalanceStop(w http.ResponseWriter, r *http.Request) {
ctx := newContext(r, w, "RebalanceStop")
defer logger.AuditLog(ctx, w, r, mustGetClaimsFromToken(r))
ctx := r.Context()
objectAPI, _ := validateAdminReq(ctx, w, r, iampolicy.RebalanceAdminAction)
objectAPI, _ := validateAdminReq(ctx, w, r, policy.RebalanceAdminAction)
if objectAPI == nil {
return
}


@@ -20,27 +20,27 @@ package cmd
import (
"bytes"
"context"
"encoding/gob"
"encoding/json"
"errors"
"io"
"net/http"
"strings"
"sync/atomic"
"time"
"github.com/minio/madmin-go/v2"
"github.com/minio/mux"
"github.com/dustin/go-humanize"
"github.com/minio/madmin-go/v3"
"github.com/minio/minio/internal/logger"
"github.com/minio/pkg/bucket/policy"
iampolicy "github.com/minio/pkg/iam/policy"
"github.com/minio/mux"
"github.com/minio/pkg/v2/policy"
)
// SiteReplicationAdd - PUT /minio/admin/v3/site-replication/add
func (a adminAPIHandlers) SiteReplicationAdd(w http.ResponseWriter, r *http.Request) {
ctx := newContext(r, w, "SiteReplicationAdd")
ctx := r.Context()
defer logger.AuditLog(ctx, w, r, mustGetClaimsFromToken(r))
objectAPI, cred := validateAdminReq(ctx, w, r, iampolicy.SiteReplicationAddAction)
objectAPI, cred := validateAdminReq(ctx, w, r, policy.SiteReplicationAddAction)
if objectAPI == nil {
return
}
@@ -72,11 +72,9 @@ func (a adminAPIHandlers) SiteReplicationAdd(w http.ResponseWriter, r *http.Requ
// used internally to tell current cluster to enable SR with
// the provided peer clusters and service account.
func (a adminAPIHandlers) SRPeerJoin(w http.ResponseWriter, r *http.Request) {
ctx := newContext(r, w, "SRPeerJoin")
ctx := r.Context()
defer logger.AuditLog(ctx, w, r, mustGetClaimsFromToken(r))
objectAPI, cred := validateAdminReq(ctx, w, r, iampolicy.SiteReplicationAddAction)
objectAPI, cred := validateAdminReq(ctx, w, r, policy.SiteReplicationAddAction)
if objectAPI == nil {
return
}
@@ -96,11 +94,9 @@ func (a adminAPIHandlers) SRPeerJoin(w http.ResponseWriter, r *http.Request) {
// SRPeerBucketOps - PUT /minio/admin/v3/site-replication/bucket-ops?bucket=x&operation=y
func (a adminAPIHandlers) SRPeerBucketOps(w http.ResponseWriter, r *http.Request) {
ctx := newContext(r, w, "SRPeerBucketOps")
ctx := r.Context()
defer logger.AuditLog(ctx, w, r, mustGetClaimsFromToken(r))
objectAPI, _ := validateAdminReq(ctx, w, r, iampolicy.SiteReplicationOperationAction)
objectAPI, _ := validateAdminReq(ctx, w, r, policy.SiteReplicationOperationAction)
if objectAPI == nil {
return
}
@@ -145,11 +141,9 @@ func (a adminAPIHandlers) SRPeerBucketOps(w http.ResponseWriter, r *http.Request
// SRPeerReplicateIAMItem - PUT /minio/admin/v3/site-replication/iam-item
func (a adminAPIHandlers) SRPeerReplicateIAMItem(w http.ResponseWriter, r *http.Request) {
ctx := newContext(r, w, "SRPeerReplicateIAMItem")
ctx := r.Context()
defer logger.AuditLog(ctx, w, r, mustGetClaimsFromToken(r))
objectAPI, _ := validateAdminReq(ctx, w, r, iampolicy.SiteReplicationOperationAction)
objectAPI, _ := validateAdminReq(ctx, w, r, policy.SiteReplicationOperationAction)
if objectAPI == nil {
return
}
@@ -168,7 +162,7 @@ func (a adminAPIHandlers) SRPeerReplicateIAMItem(w http.ResponseWriter, r *http.
if item.Policy == nil {
err = globalSiteReplicationSys.PeerAddPolicyHandler(ctx, item.Name, nil, item.UpdatedAt)
} else {
policy, perr := iampolicy.ParseConfig(bytes.NewReader(item.Policy))
policy, perr := policy.ParseConfig(bytes.NewReader(item.Policy))
if perr != nil {
writeErrorResponseJSON(ctx, w, toAdminAPIErr(ctx, perr), r.URL)
return
@@ -199,11 +193,9 @@ func (a adminAPIHandlers) SRPeerReplicateIAMItem(w http.ResponseWriter, r *http.
// SRPeerReplicateBucketItem - PUT /minio/admin/v3/site-replication/bucket-meta
func (a adminAPIHandlers) SRPeerReplicateBucketItem(w http.ResponseWriter, r *http.Request) {
ctx := newContext(r, w, "SRPeerReplicateBucketItem")
ctx := r.Context()
defer logger.AuditLog(ctx, w, r, mustGetClaimsFromToken(r))
objectAPI, _ := validateAdminReq(ctx, w, r, iampolicy.SiteReplicationOperationAction)
objectAPI, _ := validateAdminReq(ctx, w, r, policy.SiteReplicationOperationAction)
if objectAPI == nil {
return
}
@@ -214,15 +206,20 @@ func (a adminAPIHandlers) SRPeerReplicateBucketItem(w http.ResponseWriter, r *ht
return
}
if item.Bucket == "" {
writeErrorResponseJSON(ctx, w, toAdminAPIErr(ctx, errSRInvalidRequest(errInvalidArgument)), r.URL)
return
}
var err error
switch item.Type {
default:
err = errSRInvalidRequest(errInvalidArgument)
err = globalSiteReplicationSys.PeerBucketMetadataUpdateHandler(ctx, item)
case madmin.SRBucketMetaTypePolicy:
if item.Policy == nil {
err = globalSiteReplicationSys.PeerBucketPolicyHandler(ctx, item.Bucket, nil, item.UpdatedAt)
} else {
bktPolicy, berr := policy.ParseConfig(bytes.NewReader(item.Policy), item.Bucket)
bktPolicy, berr := policy.ParseBucketPolicyConfig(bytes.NewReader(item.Policy), item.Bucket)
if berr != nil {
writeErrorResponseJSON(ctx, w, toAdminAPIErr(ctx, berr), r.URL)
return
@@ -243,7 +240,7 @@ func (a adminAPIHandlers) SRPeerReplicateBucketItem(w http.ResponseWriter, r *ht
return
}
if err = globalSiteReplicationSys.PeerBucketQuotaConfigHandler(ctx, item.Bucket, quotaConfig, item.UpdatedAt); err != nil {
writeErrorResponse(ctx, w, toAPIError(ctx, err), r.URL)
writeErrorResponseJSON(ctx, w, toAdminAPIErr(ctx, err), r.URL)
return
}
}
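With IAM and bucket policy types merged into the single github.com/minio/pkg/v2/policy package, the bucket-metadata path above switches from policy.ParseConfig (now the IAM-policy parser) to policy.ParseBucketPolicyConfig, which also takes the bucket name. A sketch of the two call shapes exactly as they appear in this diff, error handling elided:

package cmd

import (
	"bytes"

	"github.com/minio/pkg/v2/policy"
)

// parseBoth shows the two parsers now exported by the merged package:
// ParseConfig for IAM policy documents, ParseBucketPolicyConfig for
// bucket policies (which additionally receives the bucket name).
func parseBoth(iamDoc, bucketDoc []byte, bucket string) error {
	if _, err := policy.ParseConfig(bytes.NewReader(iamDoc)); err != nil {
		return err
	}
	_, err := policy.ParseBucketPolicyConfig(bytes.NewReader(bucketDoc), bucket)
	return err
}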
@@ -265,11 +262,9 @@ func (a adminAPIHandlers) SRPeerReplicateBucketItem(w http.ResponseWriter, r *ht
// SiteReplicationInfo - GET /minio/admin/v3/site-replication/info
func (a adminAPIHandlers) SiteReplicationInfo(w http.ResponseWriter, r *http.Request) {
ctx := newContext(r, w, "SiteReplicationInfo")
ctx := r.Context()
defer logger.AuditLog(ctx, w, r, mustGetClaimsFromToken(r))
objectAPI, _ := validateAdminReq(ctx, w, r, iampolicy.SiteReplicationInfoAction)
objectAPI, _ := validateAdminReq(ctx, w, r, policy.SiteReplicationInfoAction)
if objectAPI == nil {
return
}
@@ -287,11 +282,9 @@ func (a adminAPIHandlers) SiteReplicationInfo(w http.ResponseWriter, r *http.Req
}
func (a adminAPIHandlers) SRPeerGetIDPSettings(w http.ResponseWriter, r *http.Request) {
ctx := newContext(r, w, "SiteReplicationGetIDPSettings")
ctx := r.Context()
defer logger.AuditLog(ctx, w, r, mustGetClaimsFromToken(r))
objectAPI, _ := validateAdminReq(ctx, w, r, iampolicy.SiteReplicationAddAction)
objectAPI, _ := validateAdminReq(ctx, w, r, policy.SiteReplicationAddAction)
if objectAPI == nil {
return
}
@@ -326,11 +319,9 @@ func parseJSONBody(ctx context.Context, body io.Reader, v interface{}, encryptio
// SiteReplicationStatus - GET /minio/admin/v3/site-replication/status
func (a adminAPIHandlers) SiteReplicationStatus(w http.ResponseWriter, r *http.Request) {
ctx := newContext(r, w, "SiteReplicationStatus")
ctx := r.Context()
defer logger.AuditLog(ctx, w, r, mustGetClaimsFromToken(r))
objectAPI, _ := validateAdminReq(ctx, w, r, iampolicy.SiteReplicationInfoAction)
objectAPI, _ := validateAdminReq(ctx, w, r, policy.SiteReplicationInfoAction)
if objectAPI == nil {
return
}
@@ -357,11 +348,9 @@ func (a adminAPIHandlers) SiteReplicationStatus(w http.ResponseWriter, r *http.R
// SiteReplicationMetaInfo - GET /minio/admin/v3/site-replication/metainfo
func (a adminAPIHandlers) SiteReplicationMetaInfo(w http.ResponseWriter, r *http.Request) {
ctx := newContext(r, w, "SiteReplicationMetaInfo")
ctx := r.Context()
defer logger.AuditLog(ctx, w, r, mustGetClaimsFromToken(r))
objectAPI, _ := validateAdminReq(ctx, w, r, iampolicy.SiteReplicationInfoAction)
objectAPI, _ := validateAdminReq(ctx, w, r, policy.SiteReplicationInfoAction)
if objectAPI == nil {
return
}
@@ -381,10 +370,9 @@ func (a adminAPIHandlers) SiteReplicationMetaInfo(w http.ResponseWriter, r *http
// SiteReplicationEdit - PUT /minio/admin/v3/site-replication/edit
func (a adminAPIHandlers) SiteReplicationEdit(w http.ResponseWriter, r *http.Request) {
ctx := newContext(r, w, "SiteReplicationEdit")
defer logger.AuditLog(ctx, w, r, mustGetClaimsFromToken(r))
ctx := r.Context()
objectAPI, cred := validateAdminReq(ctx, w, r, iampolicy.SiteReplicationAddAction)
objectAPI, cred := validateAdminReq(ctx, w, r, policy.SiteReplicationAddAction)
if objectAPI == nil {
return
}
@@ -413,10 +401,9 @@ func (a adminAPIHandlers) SiteReplicationEdit(w http.ResponseWriter, r *http.Req
//
// used internally to tell current cluster to update endpoint for peer
func (a adminAPIHandlers) SRPeerEdit(w http.ResponseWriter, r *http.Request) {
ctx := newContext(r, w, "SRPeerEdit")
defer logger.AuditLog(ctx, w, r, mustGetClaimsFromToken(r))
ctx := r.Context()
objectAPI, _ := validateAdminReq(ctx, w, r, iampolicy.SiteReplicationAddAction)
objectAPI, _ := validateAdminReq(ctx, w, r, policy.SiteReplicationAddAction)
if objectAPI == nil {
return
}
@@ -443,16 +430,15 @@ func getSRStatusOptions(r *http.Request) (opts madmin.SRStatusOptions) {
opts.Entity = madmin.GetSREntityType(q.Get("entity"))
opts.EntityValue = q.Get("entityvalue")
opts.ShowDeleted = q.Get("showDeleted") == "true"
opts.Metrics = q.Get("metrics") == "true"
return
}
// SiteReplicationRemove - PUT /minio/admin/v3/site-replication/remove
func (a adminAPIHandlers) SiteReplicationRemove(w http.ResponseWriter, r *http.Request) {
ctx := newContext(r, w, "SiteReplicationRemove")
ctx := r.Context()
defer logger.AuditLog(ctx, w, r, mustGetClaimsFromToken(r))
objectAPI, _ := validateAdminReq(ctx, w, r, iampolicy.SiteReplicationRemoveAction)
objectAPI, _ := validateAdminReq(ctx, w, r, policy.SiteReplicationRemoveAction)
if objectAPI == nil {
return
}
@@ -481,10 +467,9 @@ func (a adminAPIHandlers) SiteReplicationRemove(w http.ResponseWriter, r *http.R
//
// used internally to tell current cluster to update endpoint for peer
func (a adminAPIHandlers) SRPeerRemove(w http.ResponseWriter, r *http.Request) {
ctx := newContext(r, w, "SRPeerRemove")
defer logger.AuditLog(ctx, w, r, mustGetClaimsFromToken(r))
ctx := r.Context()
objectAPI, _ := validateAdminReq(ctx, w, r, iampolicy.SiteReplicationRemoveAction)
objectAPI, _ := validateAdminReq(ctx, w, r, policy.SiteReplicationRemoveAction)
if objectAPI == nil {
return
}
@@ -504,11 +489,9 @@ func (a adminAPIHandlers) SRPeerRemove(w http.ResponseWriter, r *http.Request) {
// SiteReplicationResyncOp - PUT /minio/admin/v3/site-replication/resync/op
func (a adminAPIHandlers) SiteReplicationResyncOp(w http.ResponseWriter, r *http.Request) {
ctx := newContext(r, w, "SiteReplicationResyncOp")
ctx := r.Context()
defer logger.AuditLog(ctx, w, r, mustGetClaimsFromToken(r))
objectAPI, _ := validateAdminReq(ctx, w, r, iampolicy.SiteReplicationResyncAction)
objectAPI, _ := validateAdminReq(ctx, w, r, policy.SiteReplicationResyncAction)
if objectAPI == nil {
return
}
@@ -543,3 +526,45 @@ func (a adminAPIHandlers) SiteReplicationResyncOp(w http.ResponseWriter, r *http
}
writeSuccessResponseJSON(w, body)
}
// SiteReplicationDevNull - everything goes to io.Discard
// [POST] /minio/admin/v3/site-replication/devnull
func (a adminAPIHandlers) SiteReplicationDevNull(w http.ResponseWriter, r *http.Request) {
ctx := r.Context()
globalSiteNetPerfRX.Connect()
defer globalSiteNetPerfRX.Disconnect()
connectTime := time.Now()
for {
n, err := io.CopyN(io.Discard, r.Body, 128*humanize.KiByte)
atomic.AddUint64(&globalSiteNetPerfRX.RX, uint64(n))
if err != nil && err != io.EOF && err != io.ErrUnexpectedEOF {
// A disconnection before globalNetPerfMinDuration (with a margin of error of 1 sec)
// would mean the network is not stable. Logging here will help in debugging network issues.
if time.Since(connectTime) < (globalNetPerfMinDuration - time.Second) {
logger.LogIf(ctx, err)
}
}
if err != nil {
if errors.Is(err, io.EOF) {
w.WriteHeader(http.StatusNoContent)
} else {
w.WriteHeader(http.StatusBadRequest)
}
break
}
}
}
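The handler above discards whatever it receives and counts the bytes into globalSiteNetPerfRX, so the sending peer simply streams an arbitrary body at the route for a fixed window. A hypothetical stand-alone sender (placeholder URL, no request signing; the real peers authenticate their admin calls):

package main

import (
	"fmt"
	"io"
	"net/http"
	"time"
)

// drip emits filler bytes until its deadline passes, then reports EOF.
type drip struct{ deadline time.Time }

func (d *drip) Read(p []byte) (int, error) {
	if time.Now().After(d.deadline) {
		return 0, io.EOF
	}
	for i := range p {
		p[i] = 'x'
	}
	return len(p), nil
}

func main() {
	body := &drip{deadline: time.Now().Add(10 * time.Second)}
	resp, err := http.Post("http://peer:9000/minio/admin/v3/site-replication/devnull",
		"application/octet-stream", body)
	if err != nil {
		fmt.Println("error:", err)
		return
	}
	defer resp.Body.Close()
	fmt.Println("status:", resp.Status) // 204 No Content after a clean EOF
}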
// SiteReplicationNetPerf - everything goes to io.Discard
// [POST] /minio/admin/v3/site-replication/netperf
func (a adminAPIHandlers) SiteReplicationNetPerf(w http.ResponseWriter, r *http.Request) {
durationStr := r.Form.Get(peerRESTDuration)
duration, _ := time.ParseDuration(durationStr)
if duration < globalNetPerfMinDuration {
duration = globalNetPerfMinDuration
}
result := siteNetperf(r.Context(), duration)
logger.LogIf(r.Context(), gob.NewEncoder(w).Encode(result))
}


@@ -30,9 +30,9 @@ import (
"testing"
"time"
"github.com/minio/madmin-go/v2"
"github.com/minio/madmin-go/v3"
minio "github.com/minio/minio-go/v7"
"github.com/minio/pkg/sync/errgroup"
"github.com/minio/pkg/v2/sync/errgroup"
)
func runAllIAMConcurrencyTests(suite *TestSuiteIAM, c *check) {

File diff suppressed because it is too large


@@ -27,20 +27,19 @@ import (
"io"
"net/http"
"net/url"
"os"
"runtime"
"strings"
"testing"
"time"
"github.com/minio/madmin-go/v2"
"github.com/minio/madmin-go/v3"
"github.com/minio/minio-go/v7"
"github.com/minio/minio-go/v7/pkg/credentials"
cr "github.com/minio/minio-go/v7/pkg/credentials"
"github.com/minio/minio-go/v7/pkg/s3utils"
"github.com/minio/minio-go/v7/pkg/set"
"github.com/minio/minio-go/v7/pkg/signer"
"github.com/minio/minio/internal/auth"
"github.com/minio/pkg/v2/env"
)
const (
@@ -122,7 +121,7 @@ var iamTestSuites = func() []*TestSuiteIAM {
}()
const (
EnvTestEtcdBackend = "ETCD_SERVER"
EnvTestEtcdBackend = "_MINIO_ETCD_TEST_SERVER"
)
func (s *TestSuiteIAM) setUpEtcd(c *check, etcdServer string) {
@@ -145,7 +144,7 @@ func (s *TestSuiteIAM) setUpEtcd(c *check, etcdServer string) {
func (s *TestSuiteIAM) SetUpSuite(c *check) {
// If etcd backend is specified and etcd server is not present, the test
// is skipped.
etcdServer := os.Getenv(EnvTestEtcdBackend)
etcdServer := env.Get(EnvTestEtcdBackend, "")
if s.withEtcdBackend && etcdServer == "" {
c.Skip("Skipping etcd backend IAM test as no etcd server is configured.")
}
@@ -877,7 +876,7 @@ func (s *TestSuiteIAM) TestServiceAccountOpsByUser(c *check) {
// Create an madmin client with user creds
userAdmClient, err := madmin.NewWithOptions(s.endpoint, &madmin.Options{
Creds: cr.NewStaticV4(accessKey, secretKey, ""),
Creds: credentials.NewStaticV4(accessKey, secretKey, ""),
Secure: s.secure,
})
if err != nil {
@@ -984,9 +983,9 @@ func (s *TestSuiteIAM) SetUpAccMgmtPlugin(c *check) {
ctx, cancel := context.WithTimeout(context.Background(), testDefaultTimeout)
defer cancel()
pluginEndpoint := os.Getenv("POLICY_PLUGIN_ENDPOINT")
pluginEndpoint := env.Get("_MINIO_POLICY_PLUGIN_ENDPOINT", "")
if pluginEndpoint == "" {
c.Skip("POLICY_PLUGIN_ENDPOINT not given - skipping.")
c.Skip("_MINIO_POLICY_PLUGIN_ENDPOINT not given - skipping.")
}
configCmds := []string{
@@ -1072,7 +1071,7 @@ func (s *TestSuiteIAM) TestAccMgmtPlugin(c *check) {
// Create an madmin client with user creds
userAdmClient, err := madmin.NewWithOptions(s.endpoint, &madmin.Options{
Creds: cr.NewStaticV4(accessKey, secretKey, ""),
Creds: credentials.NewStaticV4(accessKey, secretKey, ""),
Secure: s.secure,
})
if err != nil {
@@ -1329,7 +1328,11 @@ func (c *check) assertSvcAccAppearsInListing(ctx context.Context, madmClient *ma
if err != nil {
c.Fatalf("unable to list svc accounts: %v", err)
}
if !set.CreateStringSet(listResp.Accounts...).Contains(svcAK) {
var accessKeys []string
for _, item := range listResp.Accounts {
accessKeys = append(accessKeys, item.AccessKey)
}
if !set.CreateStringSet(accessKeys...).Contains(svcAK) {
c.Fatalf("service account did not appear in listing!")
}
}


@@ -41,12 +41,13 @@ import (
"sort"
"strconv"
"strings"
"sync/atomic"
"time"
"github.com/dustin/go-humanize"
"github.com/klauspost/compress/zip"
"github.com/minio/madmin-go/v2"
"github.com/minio/madmin-go/v2/estream"
"github.com/minio/madmin-go/v3"
"github.com/minio/madmin-go/v3/estream"
"github.com/minio/minio-go/v7/pkg/set"
"github.com/minio/minio/internal/dsync"
"github.com/minio/minio/internal/handlers"
@@ -54,9 +55,9 @@ import (
"github.com/minio/minio/internal/kms"
"github.com/minio/minio/internal/logger"
"github.com/minio/mux"
iampolicy "github.com/minio/pkg/iam/policy"
"github.com/minio/pkg/logger/message/log"
xnet "github.com/minio/pkg/net"
"github.com/minio/pkg/v2/logger/message/log"
xnet "github.com/minio/pkg/v2/net"
"github.com/minio/pkg/v2/policy"
"github.com/secure-io/sio-go"
)
@@ -78,11 +79,9 @@ const (
// ----------
// updates all minio servers and restarts them gracefully.
func (a adminAPIHandlers) ServerUpdateHandler(w http.ResponseWriter, r *http.Request) {
ctx := newContext(r, w, "ServerUpdate")
ctx := r.Context()
defer logger.AuditLog(ctx, w, r, mustGetClaimsFromToken(r))
objectAPI, _ := validateAdminReq(ctx, w, r, iampolicy.ServerUpdateAdminAction)
objectAPI, _ := validateAdminReq(ctx, w, r, policy.ServerUpdateAdminAction)
if objectAPI == nil {
return
}
@@ -229,9 +228,7 @@ func (a adminAPIHandlers) ServerUpdateHandler(w http.ResponseWriter, r *http.Req
// - freeze (freezes all incoming S3 API calls)
// - unfreeze (unfreezes previously frozen S3 API calls)
func (a adminAPIHandlers) ServiceHandler(w http.ResponseWriter, r *http.Request) {
ctx := newContext(r, w, "Service")
defer logger.AuditLog(ctx, w, r, mustGetClaimsFromToken(r))
ctx := r.Context()
vars := mux.Vars(r)
action := vars["action"]
@@ -255,11 +252,11 @@ func (a adminAPIHandlers) ServiceHandler(w http.ResponseWriter, r *http.Request)
var objectAPI ObjectLayer
switch serviceSig {
case serviceRestart:
objectAPI, _ = validateAdminReq(ctx, w, r, iampolicy.ServiceRestartAdminAction)
objectAPI, _ = validateAdminReq(ctx, w, r, policy.ServiceRestartAdminAction)
case serviceStop:
objectAPI, _ = validateAdminReq(ctx, w, r, iampolicy.ServiceStopAdminAction)
objectAPI, _ = validateAdminReq(ctx, w, r, policy.ServiceStopAdminAction)
case serviceFreeze, serviceUnFreeze:
objectAPI, _ = validateAdminReq(ctx, w, r, iampolicy.ServiceFreezeAdminAction)
objectAPI, _ = validateAdminReq(ctx, w, r, policy.ServiceFreezeAdminAction)
}
if objectAPI == nil {
return
@@ -297,15 +294,12 @@ type ServerProperties struct {
SQSARN []string `json:"sqsARN"`
}
// ServerConnStats holds transferred bytes from/to the server
type ServerConnStats struct {
TotalInputBytes uint64 `json:"transferred"`
TotalOutputBytes uint64 `json:"received"`
Throughput uint64 `json:"throughput,omitempty"`
S3InputBytes uint64 `json:"transferredS3"`
S3OutputBytes uint64 `json:"receivedS3"`
AdminInputBytes uint64 `json:"transferredAdmin"`
AdminOutputBytes uint64 `json:"receivedAdmin"`
// serverConnStats holds transferred bytes from/to the server
type serverConnStats struct {
internodeInputBytes uint64
internodeOutputBytes uint64
s3InputBytes uint64
s3OutputBytes uint64
}
// ServerHTTPAPIStats holds total number of HTTP operations from/to the server,
@@ -335,11 +329,9 @@ type ServerHTTPStats struct {
// ----------
// Get server information
func (a adminAPIHandlers) StorageInfoHandler(w http.ResponseWriter, r *http.Request) {
ctx := newContext(r, w, "StorageInfo")
ctx := r.Context()
defer logger.AuditLog(ctx, w, r, mustGetClaimsFromToken(r))
objectAPI, _ := validateAdminReq(ctx, w, r, iampolicy.StorageInfoAdminAction)
objectAPI, _ := validateAdminReq(ctx, w, r, policy.StorageInfoAdminAction)
if objectAPI == nil {
return
}
@@ -376,11 +368,9 @@ func (a adminAPIHandlers) StorageInfoHandler(w http.ResponseWriter, r *http.Requ
// ----------
// Get realtime server metrics
func (a adminAPIHandlers) MetricsHandler(w http.ResponseWriter, r *http.Request) {
ctx := newContext(r, w, "Metrics")
ctx := r.Context()
defer logger.AuditLog(ctx, w, r, mustGetClaimsFromToken(r))
objectAPI, _ := validateAdminReq(ctx, w, r, iampolicy.ServerInfoAdminAction)
objectAPI, _ := validateAdminReq(ctx, w, r, policy.ServerInfoAdminAction)
if objectAPI == nil {
return
}
@@ -487,11 +477,9 @@ func (a adminAPIHandlers) MetricsHandler(w http.ResponseWriter, r *http.Request)
// ----------
// Get server/cluster data usage info
func (a adminAPIHandlers) DataUsageInfoHandler(w http.ResponseWriter, r *http.Request) {
ctx := newContext(r, w, "DataUsageInfo")
ctx := r.Context()
defer logger.AuditLog(ctx, w, r, mustGetClaimsFromToken(r))
objectAPI, _ := validateAdminReq(ctx, w, r, iampolicy.DataUsageInfoAdminAction)
objectAPI, _ := validateAdminReq(ctx, w, r, policy.DataUsageInfoAdminAction)
if objectAPI == nil {
return
}
@@ -572,11 +560,9 @@ type PeerLocks struct {
// ForceUnlockHandler force unlocks requested resource
func (a adminAPIHandlers) ForceUnlockHandler(w http.ResponseWriter, r *http.Request) {
ctx := newContext(r, w, "ForceUnlock")
ctx := r.Context()
defer logger.AuditLog(ctx, w, r, mustGetClaimsFromToken(r))
objectAPI, _ := validateAdminReq(ctx, w, r, iampolicy.ForceUnlockAdminAction)
objectAPI, _ := validateAdminReq(ctx, w, r, policy.ForceUnlockAdminAction)
if objectAPI == nil {
return
}
@@ -609,11 +595,9 @@ func (a adminAPIHandlers) ForceUnlockHandler(w http.ResponseWriter, r *http.Requ
// TopLocksHandler Get list of locks in use
func (a adminAPIHandlers) TopLocksHandler(w http.ResponseWriter, r *http.Request) {
ctx := newContext(r, w, "TopLocks")
ctx := r.Context()
defer logger.AuditLog(ctx, w, r, mustGetClaimsFromToken(r))
objectAPI, _ := validateAdminReq(ctx, w, r, iampolicy.TopLocksAdminAction)
objectAPI, _ := validateAdminReq(ctx, w, r, policy.TopLocksAdminAction)
if objectAPI == nil {
return
}
@@ -661,12 +645,10 @@ type StartProfilingResult struct {
// ----------
// Enable server profiling
func (a adminAPIHandlers) StartProfilingHandler(w http.ResponseWriter, r *http.Request) {
ctx := newContext(r, w, "StartProfiling")
defer logger.AuditLog(ctx, w, r, mustGetClaimsFromToken(r))
ctx := r.Context()
// Validate request signature.
_, adminAPIErr := checkAdminRequestAuth(ctx, r, iampolicy.ProfilingAdminAction, "")
_, adminAPIErr := checkAdminRequestAuth(ctx, r, policy.ProfilingAdminAction, "")
if adminAPIErr != ErrNone {
writeErrorResponseJSON(ctx, w, errorCodes.ToAPIErr(adminAPIErr), r.URL)
return
@@ -748,12 +730,10 @@ func (a adminAPIHandlers) StartProfilingHandler(w http.ResponseWriter, r *http.R
// ----------
// Enable server profiling
func (a adminAPIHandlers) ProfileHandler(w http.ResponseWriter, r *http.Request) {
ctx := newContext(r, w, "Profile")
defer logger.AuditLog(ctx, w, r, mustGetClaimsFromToken(r))
ctx := r.Context()
// Validate request signature.
_, adminAPIErr := checkAdminRequestAuth(ctx, r, iampolicy.ProfilingAdminAction, "")
_, adminAPIErr := checkAdminRequestAuth(ctx, r, policy.ProfilingAdminAction, "")
if adminAPIErr != ErrNone {
writeErrorResponseJSON(ctx, w, errorCodes.ToAPIErr(adminAPIErr), r.URL)
return
@@ -845,12 +825,10 @@ func (f dummyFileInfo) Sys() interface{} { return f.sys }
// ----------
// Download profiling information of all nodes in a zip format - deprecated API
func (a adminAPIHandlers) DownloadProfilingHandler(w http.ResponseWriter, r *http.Request) {
ctx := newContext(r, w, "DownloadProfiling")
defer logger.AuditLog(ctx, w, r, mustGetClaimsFromToken(r))
ctx := r.Context()
// Validate request signature.
_, adminAPIErr := checkAdminRequestAuth(ctx, r, iampolicy.ProfilingAdminAction, "")
_, adminAPIErr := checkAdminRequestAuth(ctx, r, policy.ProfilingAdminAction, "")
if adminAPIErr != ErrNone {
writeErrorResponseJSON(ctx, w, errorCodes.ToAPIErr(adminAPIErr), r.URL)
return
@@ -946,11 +924,9 @@ func extractHealInitParams(vars map[string]string, qParms url.Values, r io.Reade
// sequence. However, if the force-start flag is provided, the server
// aborts the running heal sequence and starts a new one.
func (a adminAPIHandlers) HealHandler(w http.ResponseWriter, r *http.Request) {
ctx := newContext(r, w, "Heal")
ctx := r.Context()
defer logger.AuditLog(ctx, w, r, mustGetClaimsFromToken(r))
objectAPI, _ := validateAdminReq(ctx, w, r, iampolicy.HealAdminAction)
objectAPI, _ := validateAdminReq(ctx, w, r, policy.HealAdminAction)
if objectAPI == nil {
return
}
@@ -1049,7 +1025,7 @@ func (a adminAPIHandlers) HealHandler(w http.ResponseWriter, r *http.Request) {
if exists && !nh.hasEnded() && len(nh.currentStatus.Items) > 0 {
clientToken := nh.clientToken
if globalIsDistErasure {
clientToken = fmt.Sprintf("%s@%d", nh.clientToken, GetProxyEndpointLocalIndex(globalProxyEndpoints))
clientToken = fmt.Sprintf("%s:%d", nh.clientToken, GetProxyEndpointLocalIndex(globalProxyEndpoints))
}
b, err := json.Marshal(madmin.HealStartSuccess{
ClientToken: clientToken,
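The pool index appended to a distributed heal client token now rides after a ":" instead of "@" (admin-heal-ops.go below gets the matching change). Any consumer has to split on the same separator; a hedged sketch, assuming tokens themselves contain no colon:

package main

import (
	"fmt"
	"strconv"
	"strings"
)

// splitHealToken separates a heal client token from the appended
// proxy-endpoint index, using the new ":" separator.
func splitHealToken(tok string) (token string, idx int, ok bool) {
	token, suffix, found := strings.Cut(tok, ":")
	if !found {
		return tok, 0, false // non-distributed: bare token
	}
	idx, err := strconv.Atoi(suffix)
	return token, idx, err == nil
}

func main() {
	fmt.Println(splitHealToken("1a2b3c:2"))
	fmt.Println(splitHealToken("1a2b3c"))
}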
@@ -1132,11 +1108,9 @@ func getAggregatedBackgroundHealState(ctx context.Context, o ObjectLayer) (madmi
}
func (a adminAPIHandlers) BackgroundHealStatusHandler(w http.ResponseWriter, r *http.Request) {
ctx := newContext(r, w, "HealBackgroundStatus")
ctx := r.Context()
defer logger.AuditLog(ctx, w, r, mustGetClaimsFromToken(r))
objectAPI, _ := validateAdminReq(ctx, w, r, iampolicy.HealAdminAction)
objectAPI, _ := validateAdminReq(ctx, w, r, policy.HealAdminAction)
if objectAPI == nil {
return
}
@@ -1153,13 +1127,118 @@ func (a adminAPIHandlers) BackgroundHealStatusHandler(w http.ResponseWriter, r *
}
}
// SitePerfHandler - measures network throughput between site replicated setups
func (a adminAPIHandlers) SitePerfHandler(w http.ResponseWriter, r *http.Request) {
ctx := r.Context()
objectAPI, _ := validateAdminReq(ctx, w, r, policy.HealthInfoAdminAction)
if objectAPI == nil {
return
}
if !globalSiteReplicationSys.isEnabled() {
writeErrorResponseJSON(ctx, w, errorCodes.ToAPIErr(ErrNotImplemented), r.URL)
return
}
nsLock := objectAPI.NewNSLock(minioMetaBucket, "site-net-perf")
lkctx, err := nsLock.GetLock(ctx, globalOperationTimeout)
if err != nil {
writeErrorResponseJSON(ctx, w, errorCodes.ToAPIErr(toAPIErrorCode(ctx, err)), r.URL)
return
}
ctx = lkctx.Context()
defer nsLock.Unlock(lkctx)
durationStr := r.Form.Get(peerRESTDuration)
duration, err := time.ParseDuration(durationStr)
if err != nil {
duration = globalNetPerfMinDuration
}
if duration < globalNetPerfMinDuration {
// We need a sample size of at least 10 secs.
duration = globalNetPerfMinDuration
}
duration = duration.Round(time.Second)
results, err := globalSiteReplicationSys.Netperf(ctx, duration)
if err != nil {
writeErrorResponseJSON(ctx, w, errorCodes.ToAPIErr(toAPIErrorCode(ctx, err)), r.URL)
return
}
enc := json.NewEncoder(w)
if err := enc.Encode(results); err != nil {
return
}
}
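SitePerfHandler, like ClientDevNull below, wraps the whole run in a namespace lock on a well-known key under the MinIO meta bucket, swaps in the lock's context, and defers the unlock so only one perf run executes at a time. A toy, process-local analog of that serialization (the real NSLock coordinates across nodes via dsync):

package main

import (
	"context"
	"fmt"
	"sync"
	"time"
)

// nsLocks holds one mutex per (bucket, key) pair, a local stand-in for
// objectAPI.NewNSLock.
var nsLocks sync.Map

func lock(bucket, key string) func() {
	m, _ := nsLocks.LoadOrStore(bucket+"/"+key, &sync.Mutex{})
	mu := m.(*sync.Mutex)
	mu.Lock()
	return mu.Unlock
}

func runPerf(ctx context.Context, name string) {
	unlock := lock(".minio.sys", "site-net-perf")
	defer unlock() // mirrors `defer nsLock.Unlock(lkctx)`
	fmt.Println(name, "running")
	time.Sleep(50 * time.Millisecond)
}

func main() {
	var wg sync.WaitGroup
	for _, n := range []string{"a", "b"} {
		wg.Add(1)
		go func(n string) { defer wg.Done(); runPerf(context.Background(), n) }(n)
	}
	wg.Wait() // the two runs serialize on the shared lock
}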
// ClientDevNullExtraTime - returns the extra time spent by the last devnull call
// [POST] /minio/admin/v3/speedtest/client/devnull/extratime
func (a adminAPIHandlers) ClientDevNullExtraTime(w http.ResponseWriter, r *http.Request) {
ctx := r.Context()
objectAPI, _ := validateAdminReq(ctx, w, r, policy.BandwidthMonitorAction)
if objectAPI == nil {
return
}
enc := json.NewEncoder(w)
if err := enc.Encode(madmin.ClientPerfExtraTime{TimeSpent: atomic.LoadInt64(&globalLastClientPerfExtraTime)}); err != nil {
return
}
}
// ClientDevNull - everything goes to io.Discard
// [POST] /minio/admin/v3/speedtest/client/devnull
func (a adminAPIHandlers) ClientDevNull(w http.ResponseWriter, r *http.Request) {
ctx := r.Context()
timeStart := time.Now()
objectAPI, _ := validateAdminReq(ctx, w, r, policy.BandwidthMonitorAction)
if objectAPI == nil {
return
}
nsLock := objectAPI.NewNSLock(minioMetaBucket, "client-perf")
lkctx, err := nsLock.GetLock(ctx, globalOperationTimeout)
if err != nil {
writeErrorResponseJSON(ctx, w, errorCodes.ToAPIErr(toAPIErrorCode(ctx, err)), r.URL)
return
}
ctx = lkctx.Context()
defer nsLock.Unlock(lkctx)
timeEnd := time.Now()
atomic.SwapInt64(&globalLastClientPerfExtraTime, timeEnd.Sub(timeStart).Nanoseconds())
ctx, cancel := context.WithTimeout(ctx, madmin.MaxClientPerfTimeout)
defer cancel()
totalRx := int64(0)
connectTime := time.Now()
for {
n, err := io.CopyN(io.Discard, r.Body, 128*humanize.KiByte)
if err != nil && err != io.EOF && err != io.ErrUnexpectedEOF {
// A disconnection before globalNetPerfMinDuration would mean the network is not stable. Logging here will help in debugging network issues.
if time.Since(connectTime) < (globalNetPerfMinDuration - time.Second) {
logger.LogIf(ctx, err)
}
}
totalRx += n
if err != nil || ctx.Err() != nil || totalRx > 100*humanize.GiByte {
break
}
}
w.WriteHeader(http.StatusOK)
}
// NetperfHandler - perform mesh style network throughput test
func (a adminAPIHandlers) NetperfHandler(w http.ResponseWriter, r *http.Request) {
ctx := newContext(r, w, "NetperfHandler")
ctx := r.Context()
defer logger.AuditLog(ctx, w, r, mustGetClaimsFromToken(r))
objectAPI, _ := validateAdminReq(ctx, w, r, iampolicy.HealthInfoAdminAction)
objectAPI, _ := validateAdminReq(ctx, w, r, policy.HealthInfoAdminAction)
if objectAPI == nil {
return
}
@@ -1175,6 +1254,7 @@ func (a adminAPIHandlers) NetperfHandler(w http.ResponseWriter, r *http.Request)
writeErrorResponseJSON(ctx, w, errorCodes.ToAPIErr(toAPIErrorCode(ctx, err)), r.URL)
return
}
ctx = lkctx.Context()
defer nsLock.Unlock(lkctx)
durationStr := r.Form.Get(peerRESTDuration)
@@ -1197,21 +1277,14 @@ func (a adminAPIHandlers) NetperfHandler(w http.ResponseWriter, r *http.Request)
}
}
// SpeedtestHandler - Deprecated. See ObjectSpeedTestHandler
func (a adminAPIHandlers) SpeedTestHandler(w http.ResponseWriter, r *http.Request) {
a.ObjectSpeedTestHandler(w, r)
}
// ObjectSpeedTestHandler - reports maximum speed of a cluster by performing PUT and
// GET operations on the server, supports auto tuning by default by automatically
// increasing concurrency and stopping when we have reached the limits on the
// system.
func (a adminAPIHandlers) ObjectSpeedTestHandler(w http.ResponseWriter, r *http.Request) {
ctx := newContext(r, w, "ObjectSpeedTestHandler")
ctx := r.Context()
defer logger.AuditLog(ctx, w, r, mustGetClaimsFromToken(r))
objectAPI, _ := validateAdminReq(ctx, w, r, iampolicy.HealthInfoAdminAction)
objectAPI, _ := validateAdminReq(ctx, w, r, policy.HealthInfoAdminAction)
if objectAPI == nil {
return
}
@@ -1368,20 +1441,11 @@ func validateObjPerfOptions(ctx context.Context, storageInfo madmin.StorageInfo,
return true, autotune, ""
}
// NetSpeedtestHandler - reports maximum network throughput
func (a adminAPIHandlers) NetSpeedtestHandler(w http.ResponseWriter, r *http.Request) {
ctx := newContext(r, w, "NetSpeedtestHandler")
writeErrorResponseJSON(ctx, w, errorCodes.ToAPIErr(ErrNotImplemented), r.URL)
}
// DriveSpeedtestHandler - reports throughput of drives available in the cluster
func (a adminAPIHandlers) DriveSpeedtestHandler(w http.ResponseWriter, r *http.Request) {
ctx := newContext(r, w, "DriveSpeedtestHandler")
ctx := r.Context()
defer logger.AuditLog(ctx, w, r, mustGetClaimsFromToken(r))
objectAPI, _ := validateAdminReq(ctx, w, r, iampolicy.HealthInfoAdminAction)
objectAPI, _ := validateAdminReq(ctx, w, r, policy.HealthInfoAdminAction)
if objectAPI == nil {
return
}
@@ -1499,10 +1563,10 @@ func extractTraceOptions(r *http.Request) (opts madmin.ServiceTraceOpts, err err
// ----------
// The handler sends http trace to the connected HTTP client.
func (a adminAPIHandlers) TraceHandler(w http.ResponseWriter, r *http.Request) {
ctx := newContext(r, w, "HTTPTrace")
ctx := r.Context()
// Validate request signature.
_, adminAPIErr := checkAdminRequestAuth(ctx, r, iampolicy.TraceAdminAction, "")
_, adminAPIErr := checkAdminRequestAuth(ctx, r, policy.TraceAdminAction, "")
if adminAPIErr != ErrNone {
writeErrorResponseJSON(ctx, w, errorCodes.ToAPIErr(adminAPIErr), r.URL)
return
@@ -1517,7 +1581,9 @@ func (a adminAPIHandlers) TraceHandler(w http.ResponseWriter, r *http.Request) {
// Trace Publisher and peer-trace-client uses nonblocking send and hence does not wait for slow receivers.
// Use buffered channel to take care of burst sends or slow w.Write()
traceCh := make(chan madmin.TraceInfo, 4000)
// Keep 100k buffered channel, should be sufficient to ensure we do not lose any events.
traceCh := make(chan madmin.TraceInfo, 100000)
peers, _ := newPeerRestClients(globalEndpoints)
@@ -1525,7 +1591,7 @@ func (a adminAPIHandlers) TraceHandler(w http.ResponseWriter, r *http.Request) {
return shouldTrace(entry, traceOpts)
})
if err != nil {
writeErrorResponseJSON(ctx, w, errorCodes.ToAPIErr(ErrSlowDown), r.URL)
writeErrorResponseJSON(ctx, w, toAdminAPIErr(ctx, err), r.URL)
return
}
@@ -1541,7 +1607,7 @@ func (a adminAPIHandlers) TraceHandler(w http.ResponseWriter, r *http.Request) {
peer.Trace(traceCh, ctx.Done(), traceOpts)
}
keepAliveTicker := time.NewTicker(500 * time.Millisecond)
keepAliveTicker := time.NewTicker(time.Second)
defer keepAliveTicker.Stop()
enc := json.NewEncoder(w)
@@ -1571,11 +1637,9 @@ func (a adminAPIHandlers) TraceHandler(w http.ResponseWriter, r *http.Request) {
// The ConsoleLogHandler handler sends console logs to the connected HTTP client.
func (a adminAPIHandlers) ConsoleLogHandler(w http.ResponseWriter, r *http.Request) {
ctx := newContext(r, w, "ConsoleLog")
ctx := r.Context()
defer logger.AuditLog(ctx, w, r, mustGetClaimsFromToken(r))
objectAPI, _ := validateAdminReq(ctx, w, r, iampolicy.ConsoleLogAdminAction)
objectAPI, _ := validateAdminReq(ctx, w, r, policy.ConsoleLogAdminAction)
if objectAPI == nil {
return
}
@@ -1605,7 +1669,7 @@ func (a adminAPIHandlers) ConsoleLogHandler(w http.ResponseWriter, r *http.Reque
err = globalConsoleSys.Subscribe(logCh, ctx.Done(), node, limitLines, logKind, nil)
if err != nil {
writeErrorResponseJSON(ctx, w, errorCodes.ToAPIErr(ErrSlowDown), r.URL)
writeErrorResponseJSON(ctx, w, toAdminAPIErr(ctx, err), r.URL)
return
}
@@ -1654,10 +1718,9 @@ func (a adminAPIHandlers) ConsoleLogHandler(w http.ResponseWriter, r *http.Reque
// KMSCreateKeyHandler - POST /minio/admin/v3/kms/key/create?key-id=<master-key-id>
func (a adminAPIHandlers) KMSCreateKeyHandler(w http.ResponseWriter, r *http.Request) {
ctx := newContext(r, w, "KMSCreateKey")
defer logger.AuditLog(ctx, w, r, mustGetClaimsFromToken(r))
ctx := r.Context()
objectAPI, _ := validateAdminReq(ctx, w, r, iampolicy.KMSCreateKeyAdminAction)
objectAPI, _ := validateAdminReq(ctx, w, r, policy.KMSCreateKeyAdminAction)
if objectAPI == nil {
return
}
@@ -1676,10 +1739,9 @@ func (a adminAPIHandlers) KMSCreateKeyHandler(w http.ResponseWriter, r *http.Req
// KMSKeyStatusHandler - GET /minio/admin/v3/kms/status
func (a adminAPIHandlers) KMSStatusHandler(w http.ResponseWriter, r *http.Request) {
ctx := newContext(r, w, "KMSStatus")
defer logger.AuditLog(ctx, w, r, mustGetClaimsFromToken(r))
ctx := r.Context()
objectAPI, _ := validateAdminReq(ctx, w, r, iampolicy.KMSKeyStatusAdminAction)
objectAPI, _ := validateAdminReq(ctx, w, r, policy.KMSKeyStatusAdminAction)
if objectAPI == nil {
return
}
@@ -1714,11 +1776,9 @@ func (a adminAPIHandlers) KMSStatusHandler(w http.ResponseWriter, r *http.Reques
// KMSKeyStatusHandler - GET /minio/admin/v3/kms/key/status?key-id=<master-key-id>
func (a adminAPIHandlers) KMSKeyStatusHandler(w http.ResponseWriter, r *http.Request) {
ctx := newContext(r, w, "KMSKeyStatus")
ctx := r.Context()
defer logger.AuditLog(ctx, w, r, mustGetClaimsFromToken(r))
objectAPI, _ := validateAdminReq(ctx, w, r, iampolicy.KMSKeyStatusAdminAction)
objectAPI, _ := validateAdminReq(ctx, w, r, policy.KMSKeyStatusAdminAction)
if objectAPI == nil {
return
}
@@ -1811,6 +1871,7 @@ func getPoolsInfo(ctx context.Context, allDisks []madmin.Disk) (map[int]map[int]
dataUsageInfo := cache.dui(dataUsageRoot, nil)
erasureSet.ObjectsCount = dataUsageInfo.ObjectsTotalCount
erasureSet.VersionsCount = dataUsageInfo.VersionsTotalCount
erasureSet.DeleteMarkersCount = dataUsageInfo.DeleteMarkersTotalCount
erasureSet.Usage = dataUsageInfo.ObjectsTotalSize
}
}
@@ -1826,8 +1887,6 @@ func getPoolsInfo(ctx context.Context, allDisks []madmin.Disk) (map[int]map[int]
}
func getServerInfo(ctx context.Context, poolsInfoEnabled bool, r *http.Request) madmin.InfoMessage {
kmsStat := fetchKMSStatus()
ldap := madmin.LDAP{}
if globalIAMSys.LDAPConfig.Enabled() {
ldapConn, err := globalIAMSys.LDAPConfig.LDAP.Connect()
@@ -1862,6 +1921,7 @@ func getServerInfo(ctx context.Context, poolsInfoEnabled bool, r *http.Request)
buckets := madmin.Buckets{}
objects := madmin.Objects{}
versions := madmin.Versions{}
deleteMarkers := madmin.DeleteMarkers{}
usage := madmin.Usage{}
objectAPI := newObjectLayerFn()
@@ -1874,10 +1934,12 @@ func getServerInfo(ctx context.Context, poolsInfoEnabled bool, r *http.Request)
buckets = madmin.Buckets{Count: dataUsageInfo.BucketsCount}
objects = madmin.Objects{Count: dataUsageInfo.ObjectsTotalCount}
versions = madmin.Versions{Count: dataUsageInfo.VersionsTotalCount}
deleteMarkers = madmin.DeleteMarkers{Count: dataUsageInfo.DeleteMarkersTotalCount}
usage = madmin.Usage{Size: dataUsageInfo.ObjectsTotalSize}
} else {
buckets = madmin.Buckets{Error: err.Error()}
objects = madmin.Objects{Error: err.Error()}
deleteMarkers = madmin.DeleteMarkers{Error: err.Error()}
usage = madmin.Usage{Error: err.Error()}
}
@@ -1907,7 +1969,8 @@ func getServerInfo(ctx context.Context, poolsInfoEnabled bool, r *http.Request)
domain := globalDomainNames
services := madmin.Services{
KMS: kmsStat,
KMS: fetchKMSStatus(),
KMSStatus: fetchKMSStatusV2(ctx),
LDAP: ldap,
Logger: log,
Audit: audit,
@@ -1915,19 +1978,20 @@ func getServerInfo(ctx context.Context, poolsInfoEnabled bool, r *http.Request)
}
return madmin.InfoMessage{
Mode: string(mode),
Domain: domain,
Region: globalSite.Region,
SQSARN: globalEventNotifier.GetARNList(false),
DeploymentID: globalDeploymentID,
Buckets: buckets,
Objects: objects,
Versions: versions,
Usage: usage,
Services: services,
Backend: backend,
Servers: servers,
Pools: poolsInfo,
Mode: string(mode),
Domain: domain,
Region: globalSite.Region,
SQSARN: globalEventNotifier.GetARNList(false),
DeploymentID: globalDeploymentID,
Buckets: buckets,
Objects: objects,
Versions: versions,
DeleteMarkers: deleteMarkers,
Usage: usage,
Services: services,
Backend: backend,
Servers: servers,
Pools: poolsInfo,
}
}
@@ -2103,6 +2167,27 @@ func fetchHealthInfo(healthCtx context.Context, objectAPI ObjectLayer, query *ur
}
}
// collect all realtime metrics except disk
// disk metrics are already included under drive info of each server
getRealtimeMetrics := func() *madmin.RealtimeMetrics {
var m madmin.RealtimeMetrics
var types madmin.MetricType = madmin.MetricsAll &^ madmin.MetricsDisk
mLocal := collectLocalMetrics(types, collectMetricsOpts{})
m.Merge(&mLocal)
cctx, cancel := context.WithTimeout(healthCtx, time.Second/2)
mRemote := collectRemoteMetrics(cctx, types, collectMetricsOpts{})
cancel()
m.Merge(&mRemote)
for idx, host := range m.Hosts {
m.Hosts[idx] = anonAddr(host)
}
for host, metrics := range m.ByHost {
m.ByHost[anonAddr(host)] = metrics
delete(m.ByHost, host)
}
return &m
}
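The closure above selects every metric type except disk with Go's AND NOT operator: madmin.MetricsAll &^ madmin.MetricsDisk clears exactly the disk bit. A self-contained illustration on a hypothetical flag set:

package main

import "fmt"

type metricType uint32

const (
	metricsCPU metricType = 1 << iota
	metricsMem
	metricsDisk
	metricsNet

	metricsAll = metricsCPU | metricsMem | metricsDisk | metricsNet
)

func main() {
	types := metricsAll &^ metricsDisk // AND NOT: everything but disk
	fmt.Printf("all=%04b selected=%04b disk still set: %v\n",
		metricsAll, types, types&metricsDisk != 0)
}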
anonymizeCmdLine := func(cmdLine string) string {
if !globalIsDistErasure {
// FS mode - single server - hard code to `server1`
@@ -2280,6 +2365,7 @@ func fetchHealthInfo(healthCtx context.Context, objectAPI ObjectLayer, query *ur
TLS: &tls,
IsKubernetes: &isK8s,
IsDocker: &isDocker,
Metrics: getRealtimeMetrics(),
}
partialWrite(healthInfo)
}
@@ -2290,10 +2376,9 @@ func fetchHealthInfo(healthCtx context.Context, objectAPI ObjectLayer, query *ur
// ----------
// Get server health info
func (a adminAPIHandlers) HealthInfoHandler(w http.ResponseWriter, r *http.Request) {
ctx := newContext(r, w, "HealthInfo")
defer logger.AuditLog(ctx, w, r, mustGetClaimsFromToken(r))
ctx := r.Context()
objectAPI, _ := validateAdminReq(ctx, w, r, iampolicy.HealthInfoAdminAction)
objectAPI, _ := validateAdminReq(ctx, w, r, policy.HealthInfoAdminAction)
if objectAPI == nil {
return
}
@@ -2397,12 +2482,10 @@ func getTLSInfo() madmin.TLSInfo {
// ----------
// Get server information
func (a adminAPIHandlers) ServerInfoHandler(w http.ResponseWriter, r *http.Request) {
ctx := newContext(r, w, "ServerInfo")
defer logger.AuditLog(ctx, w, r, mustGetClaimsFromToken(r))
ctx := r.Context()
// Validate request signature.
_, adminAPIErr := checkAdminRequestAuth(ctx, r, iampolicy.ServerInfoAdminAction, "")
_, adminAPIErr := checkAdminRequestAuth(ctx, r, policy.ServerInfoAdminAction, "")
if adminAPIErr != ErrNone {
writeErrorResponseJSON(ctx, w, errorCodes.ToAPIErr(adminAPIErr), r.URL)
return
@@ -2505,6 +2588,28 @@ func fetchKMSStatus() madmin.KMS {
return kmsStat
}
// fetchKMSStatusV2 fetches KMS-related status information for all instances
func fetchKMSStatusV2(ctx context.Context) []madmin.KMS {
if GlobalKMS == nil {
return []madmin.KMS{}
}
results := GlobalKMS.Verify(ctx)
stats := []madmin.KMS{}
for _, result := range results {
stats = append(stats, madmin.KMS{
Status: result.Status,
Endpoint: result.Endpoint,
Encrypt: result.Encrypt,
Decrypt: result.Decrypt,
Version: result.Version,
})
}
return stats
}
// fetchLoggerDetails return log info
func fetchLoggerInfo() ([]madmin.Logger, []madmin.Audit) {
var loggerInfo []madmin.Logger
@@ -2524,12 +2629,12 @@ func fetchLoggerInfo() ([]madmin.Logger, []madmin.Audit) {
return loggerInfo, auditloggerInfo
}
func embedFileInZip(zipWriter *zip.Writer, name string, data []byte) error {
func embedFileInZip(zipWriter *zip.Writer, name string, data []byte, fileMode os.FileMode) error {
// Send profiling data to zip as file
header, zerr := zip.FileInfoHeader(dummyFileInfo{
name: name,
size: int64(len(data)),
mode: 0o600,
mode: fileMode,
modTime: UTCNow(),
isDir: false,
sys: nil,
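embedFileInZip now takes the file mode from the caller, which is what lets the inspect archive below store start-minio.sh as executable (0o755) while keeping data files at 0o600. A stdlib-only sketch of the same idea against archive/zip (hypothetical helper, not MinIO's):

package main

import (
	"archive/zip"
	"bytes"
	"fmt"
	"os"
	"time"
)

// writeZipEntry mirrors the reworked signature: the caller picks the mode.
func writeZipEntry(zw *zip.Writer, name string, data []byte, mode os.FileMode) error {
	hdr := &zip.FileHeader{Name: name, Modified: time.Now()}
	hdr.SetMode(mode)
	w, err := zw.CreateHeader(hdr)
	if err != nil {
		return err
	}
	_, err = w.Write(data)
	return err
}

func main() {
	var buf bytes.Buffer
	zw := zip.NewWriter(&buf)
	_ = writeZipEntry(zw, "start-minio.sh", []byte("#!/bin/sh\n"), 0o755)
	_ = writeZipEntry(zw, "cluster.info", []byte("{}"), 0o600)
	_ = zw.Close()
	fmt.Println("zip bytes:", buf.Len())
}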
@@ -2629,15 +2734,14 @@ type getRawDataer interface {
// ----------
// Download file from all nodes in a zip format
func (a adminAPIHandlers) InspectDataHandler(w http.ResponseWriter, r *http.Request) {
ctx := newContext(r, w, "InspectData")
ctx := r.Context()
// Validate request signature.
_, adminAPIErr := checkAdminRequestAuth(ctx, r, iampolicy.InspectDataAction, "")
_, adminAPIErr := checkAdminRequestAuth(ctx, r, policy.InspectDataAction, "")
if adminAPIErr != ErrNone {
writeErrorResponseJSON(ctx, w, errorCodes.ToAPIErr(adminAPIErr), r.URL)
return
}
defer logger.AuditLog(ctx, w, r, mustGetClaimsFromToken(r))
objLayer := newObjectLayerFn()
o, ok := objLayer.(getRawDataer)
@@ -2761,7 +2865,7 @@ func (a adminAPIHandlers) InspectDataHandler(w http.ResponseWriter, r *http.Requ
defer inspectZipW.Close()
if b := getClusterMetaInfo(ctx); len(b) > 0 {
logger.LogIf(ctx, embedFileInZip(inspectZipW, "cluster.info", b))
logger.LogIf(ctx, embedFileInZip(inspectZipW, "cluster.info", b, 0o600))
}
}
@@ -2825,7 +2929,41 @@ func (a adminAPIHandlers) InspectDataHandler(w http.ResponseWriter, r *http.Requ
sb.WriteString(pool.CmdLine)
}
sb.WriteString("\n")
logger.LogIf(ctx, embedFileInZip(inspectZipW, "inspect-input.txt", sb.Bytes()))
logger.LogIf(ctx, embedFileInZip(inspectZipW, "inspect-input.txt", sb.Bytes(), 0o600))
scheme := "https"
if !globalIsTLS {
scheme = "http"
}
// save MinIO start script to inspect command
var scrb bytes.Buffer
fmt.Fprintf(&scrb, `#!/usr/bin/env bash
function main() {
for file in $(ls -1); do
dest_file=$(echo "$file" | cut -d ":" -f1)
mv "$file" "$dest_file"
done
# Read content of inspect-input.txt
MINIO_OPTS=$(grep "Server command line args" <./inspect-input.txt | sed "s/Server command line args: //g" | sed -r "s#%s:\/\/#\.\/#g")
# Start MinIO instance using the options
START_CMD="CI=on _MINIO_AUTO_DRIVE_HEALING=off minio server ${MINIO_OPTS} &"
echo
echo "Starting MinIO instance: ${START_CMD}"
echo
eval "$START_CMD"
MINIO_SRVR_PID="$!"
echo "MinIO Server PID: ${MINIO_SRVR_PID}"
echo
echo "Waiting for MinIO instance to get ready!"
sleep 10
}
main "$@"`, scheme)
logger.LogIf(ctx, embedFileInZip(inspectZipW, "start-minio.sh", scrb.Bytes(), 0o755))
}
func getSubnetAdminPublicKey() []byte {

View File

@@ -31,7 +31,7 @@ import (
"testing"
"time"
"github.com/minio/madmin-go/v2"
"github.com/minio/madmin-go/v3"
"github.com/minio/minio/internal/auth"
"github.com/minio/mux"
)
@@ -73,7 +73,7 @@ func prepareAdminErasureTestBed(ctx context.Context) (*adminErasureTestBed, erro
// Initialize boot time
globalBootTime = UTCNow()
globalEndpoints = mustGetPoolEndpoints(erasureDirs...)
globalEndpoints = mustGetPoolEndpoints(0, erasureDirs...)
initAllSubsystems(ctx)
@@ -108,7 +108,7 @@ func initTestErasureObjLayer(ctx context.Context) (ObjectLayer, []string, error)
if err != nil {
return nil, nil, err
}
endpoints := mustGetPoolEndpoints(erasureDirs...)
endpoints := mustGetPoolEndpoints(0, erasureDirs...)
globalPolicySys = NewPolicySys()
objLayer, err := newErasureServerPools(ctx, endpoints)
if err != nil {

View File

@@ -27,7 +27,7 @@ import (
"sync"
"time"
"github.com/minio/madmin-go/v2"
"github.com/minio/madmin-go/v3"
"github.com/minio/minio/internal/logger"
)
@@ -253,7 +253,7 @@ func (ahs *allHealState) stopHealSequence(path string) ([]byte, APIError) {
} else {
clientToken := he.clientToken
if globalIsDistErasure {
clientToken = fmt.Sprintf("%s@%d", he.clientToken, GetProxyEndpointLocalIndex(globalProxyEndpoints))
clientToken = fmt.Sprintf("%s:%d", he.clientToken, GetProxyEndpointLocalIndex(globalProxyEndpoints))
}
hsp = madmin.HealStopSuccess{
@@ -327,7 +327,7 @@ func (ahs *allHealState) LaunchNewHealSequence(h *healSequence, objAPI ObjectLay
clientToken := h.clientToken
if globalIsDistErasure {
clientToken = fmt.Sprintf("%s@%d", h.clientToken, GetProxyEndpointLocalIndex(globalProxyEndpoints))
clientToken = fmt.Sprintf("%s:%d", h.clientToken, GetProxyEndpointLocalIndex(globalProxyEndpoints))
}
b, err := json.Marshal(madmin.HealStartSuccess{
@@ -683,13 +683,6 @@ func (h *healSequence) healSequenceStart(objAPI ObjectLayer) {
}
}
func (h *healSequence) logHeal(healType madmin.HealItemType) {
h.mutex.Lock()
h.scannedItemsMap[healType]++
h.lastHealActivity = UTCNow()
h.mutex.Unlock()
}
func (h *healSequence) queueHealTask(source healSource, healType madmin.HealItemType) error {
// Send heal request
task := healTask{

View File

@@ -19,20 +19,127 @@ package cmd
import (
"net/http"
"reflect"
"runtime"
"strings"
"github.com/klauspost/compress/gzhttp"
"github.com/klauspost/compress/gzip"
"github.com/minio/madmin-go/v2"
"github.com/minio/madmin-go/v3"
"github.com/minio/minio/internal/logger"
"github.com/minio/mux"
)
const (
adminPathPrefix = minioReservedBucketPath + "/admin"
adminAPIVersion = madmin.AdminAPIVersion
adminAPIVersionPrefix = SlashSeparator + adminAPIVersion
adminPathPrefix = minioReservedBucketPath + "/admin"
adminAPIVersion = madmin.AdminAPIVersion
adminAPIVersionPrefix = SlashSeparator + adminAPIVersion
adminAPISiteReplicationDevNull = "/site-replication/devnull"
adminAPISiteReplicationNetPerf = "/site-replication/netperf"
adminAPIClientDevNull = "/speedtest/client/devnull"
adminAPIClientDevExtraTime = "/speedtest/client/devnull/extratime"
)
var gzipHandler = func() func(http.Handler) http.HandlerFunc {
gz, err := gzhttp.NewWrapper(gzhttp.MinSize(1000), gzhttp.CompressionLevel(gzip.BestSpeed))
if err != nil {
// Static params, so this is very unlikely.
logger.Fatal(err, "Unable to initialize server")
}
return gz
}()
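// Illustrative note: the wrapper compresses responses of at least 1000 bytes
// at gzip.BestSpeed, and only when the client sends an Accept-Encoding header
// that allows gzip; smaller responses pass through unmodified. Any
// http.Handler can be wrapped, e.g. gzipHandler(someHandler), where
// someHandler is a hypothetical handler.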
// Set of handler options as bit flags
type hFlag uint8
const (
// this flag disables gzip compression of responses
noGZFlag = 1 << iota
// this flag enables tracing body and headers instead of just headers
traceAllFlag
// pass this flag to skip checking if object layer is available
noObjLayerFlag
)
// Has checks if the given flag is enabled in `h`.
func (h hFlag) Has(flag hFlag) bool {
// Use bitwise-AND and check if the result is non-zero.
return h&flag != 0
}
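// For example, the untyped flag constants combine with bitwise OR and are
// queried with Has:
//
//	var flags hFlag = noGZFlag | traceAllFlag
//	flags.Has(traceAllFlag)   // true
//	flags.Has(noObjLayerFlag) // false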
func getHandlerName(f http.HandlerFunc) string {
name := runtime.FuncForPC(reflect.ValueOf(f).Pointer()).Name()
name = strings.TrimPrefix(name, "github.com/minio/minio/cmd.adminAPIHandlers.")
name = strings.TrimSuffix(name, "Handler-fm")
name = strings.TrimSuffix(name, "-fm")
return name
}
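// For illustration: for a method value such as adminAPI.ServerInfoHandler,
// runtime.FuncForPC reports a name like
// "github.com/minio/minio/cmd.adminAPIHandlers.ServerInfoHandler-fm";
// trimming the package/type prefix and the "Handler-fm" suffix yields
// "ServerInfo", which becomes the API name used in the request context.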
// adminMiddleware performs some common admin handler functionality for all
// handlers:
//
// - updates request context with `logger.ReqInfo` and api name based on the
// name of the function handler passed (this handler must be a method of
// `adminAPIHandlers`).
//
// - sets up call to send AuditLog
//
// Note that, while this is a middleware function (i.e. it takes a handler
// function and returns one), the required flags differ per handler, so it is
// applied individually at each handler registration in the router.
//
// When no flags are passed, gzip compression, http tracing of headers and
// checking of object layer availability are all enabled. Use flags to modify
// this behavior.
func adminMiddleware(f http.HandlerFunc, flags ...hFlag) http.HandlerFunc {
// Combine all the flags using the bitwise OR-assign operator.
var handlerFlags hFlag
for _, flag := range flags {
handlerFlags |= flag
}
// Get the name of the handler using reflection. NOTE: the passed-in handler
// function must be a method of `adminAPIHandlers` for this extraction to
// work as expected.
handlerName := getHandlerName(f)
var handler http.HandlerFunc = func(w http.ResponseWriter, r *http.Request) {
// Update request context with `logger.ReqInfo`.
r = r.WithContext(newContext(r, w, handlerName))
defer logger.AuditLog(r.Context(), w, r, mustGetClaimsFromToken(r))
// Check if the object layer is available; if not, return an error early.
if !handlerFlags.Has(noObjLayerFlag) {
objectAPI := newObjectLayerFn()
if objectAPI == nil || globalNotificationSys == nil {
writeErrorResponseJSON(r.Context(), w, errorCodes.ToAPIErr(ErrServerNotInitialized), r.URL)
return
}
}
// Apply the HTTP tracing "middleware" based on the presence of the flag.
var f2 http.HandlerFunc
if handlerFlags.Has(traceAllFlag) {
f2 = httpTraceAll(f)
} else {
f2 = httpTraceHdrs(f)
}
// call the final handler
f2(w, r)
}
// Enable compression of responses unless disabled by flag.
if !handlerFlags.Has(noGZFlag) {
handler = gzipHandler(handler)
}
return handler
}
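// Illustrative sketch mirroring the registrations below: a handler is
// wrapped once at registration time, with flags passed per route as needed.
//
//	adminRouter.Methods(http.MethodGet).Path(adminVersion + "/info").
//		HandlerFunc(adminMiddleware(adminAPI.ServerInfoHandler, traceAllFlag, noObjLayerFlag))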
// adminAPIHandlers provides HTTP handlers for MinIO admin API.
type adminAPIHandlers struct{}
@@ -46,265 +153,264 @@ func registerAdminRouter(router *mux.Router, enableConfigOps bool) {
adminAPIVersionPrefix,
}
gz, err := gzhttp.NewWrapper(gzhttp.MinSize(1000), gzhttp.CompressionLevel(gzip.BestSpeed))
if err != nil {
// Static params, so this is very unlikely.
logger.Fatal(err, "Unable to initialize server")
}
for _, adminVersion := range adminVersions {
// Restart and stop MinIO service.
adminRouter.Methods(http.MethodPost).Path(adminVersion+"/service").HandlerFunc(gz(httpTraceAll(adminAPI.ServiceHandler))).Queries("action", "{action:.*}")
adminRouter.Methods(http.MethodPost).Path(adminVersion+"/service").HandlerFunc(adminMiddleware(adminAPI.ServiceHandler, traceAllFlag)).Queries("action", "{action:.*}")
// Update MinIO servers.
adminRouter.Methods(http.MethodPost).Path(adminVersion+"/update").HandlerFunc(gz(httpTraceAll(adminAPI.ServerUpdateHandler))).Queries("updateURL", "{updateURL:.*}")
adminRouter.Methods(http.MethodPost).Path(adminVersion+"/update").HandlerFunc(adminMiddleware(adminAPI.ServerUpdateHandler, traceAllFlag)).Queries("updateURL", "{updateURL:.*}")
// Info operations
adminRouter.Methods(http.MethodGet).Path(adminVersion + "/info").HandlerFunc(gz(httpTraceAll(adminAPI.ServerInfoHandler)))
adminRouter.Methods(http.MethodGet, http.MethodPost).Path(adminVersion + "/inspect-data").HandlerFunc(httpTraceAll(adminAPI.InspectDataHandler))
adminRouter.Methods(http.MethodGet).Path(adminVersion + "/info").HandlerFunc(adminMiddleware(adminAPI.ServerInfoHandler, traceAllFlag, noObjLayerFlag))
adminRouter.Methods(http.MethodGet, http.MethodPost).Path(adminVersion + "/inspect-data").HandlerFunc(adminMiddleware(adminAPI.InspectDataHandler, noGZFlag, traceAllFlag))
// StorageInfo operations
adminRouter.Methods(http.MethodGet).Path(adminVersion + "/storageinfo").HandlerFunc(gz(httpTraceAll(adminAPI.StorageInfoHandler)))
adminRouter.Methods(http.MethodGet).Path(adminVersion + "/storageinfo").HandlerFunc(adminMiddleware(adminAPI.StorageInfoHandler, traceAllFlag))
// DataUsageInfo operations
adminRouter.Methods(http.MethodGet).Path(adminVersion + "/datausageinfo").HandlerFunc(gz(httpTraceAll(adminAPI.DataUsageInfoHandler)))
adminRouter.Methods(http.MethodGet).Path(adminVersion + "/datausageinfo").HandlerFunc(adminMiddleware(adminAPI.DataUsageInfoHandler, traceAllFlag))
// Metrics operation
adminRouter.Methods(http.MethodGet).Path(adminVersion + "/metrics").HandlerFunc(gz(httpTraceAll(adminAPI.MetricsHandler)))
adminRouter.Methods(http.MethodGet).Path(adminVersion + "/metrics").HandlerFunc(adminMiddleware(adminAPI.MetricsHandler, traceAllFlag))
if globalIsDistErasure || globalIsErasure {
// Heal operations
// Heal processing endpoint.
adminRouter.Methods(http.MethodPost).Path(adminVersion + "/heal/").HandlerFunc(gz(httpTraceAll(adminAPI.HealHandler)))
adminRouter.Methods(http.MethodPost).Path(adminVersion + "/heal/{bucket}").HandlerFunc(gz(httpTraceAll(adminAPI.HealHandler)))
adminRouter.Methods(http.MethodPost).Path(adminVersion + "/heal/{bucket}/{prefix:.*}").HandlerFunc(gz(httpTraceAll(adminAPI.HealHandler)))
adminRouter.Methods(http.MethodPost).Path(adminVersion + "/background-heal/status").HandlerFunc(gz(httpTraceAll(adminAPI.BackgroundHealStatusHandler)))
adminRouter.Methods(http.MethodPost).Path(adminVersion + "/heal/").HandlerFunc(adminMiddleware(adminAPI.HealHandler, traceAllFlag))
adminRouter.Methods(http.MethodPost).Path(adminVersion + "/heal/{bucket}").HandlerFunc(adminMiddleware(adminAPI.HealHandler, traceAllFlag))
adminRouter.Methods(http.MethodPost).Path(adminVersion + "/heal/{bucket}/{prefix:.*}").HandlerFunc(adminMiddleware(adminAPI.HealHandler, traceAllFlag))
adminRouter.Methods(http.MethodPost).Path(adminVersion + "/background-heal/status").HandlerFunc(adminMiddleware(adminAPI.BackgroundHealStatusHandler, traceAllFlag))
// Pool operations
adminRouter.Methods(http.MethodGet).Path(adminVersion + "/pools/list").HandlerFunc(gz(httpTraceAll(adminAPI.ListPools)))
adminRouter.Methods(http.MethodGet).Path(adminVersion+"/pools/status").HandlerFunc(gz(httpTraceAll(adminAPI.StatusPool))).Queries("pool", "{pool:.*}")
adminRouter.Methods(http.MethodGet).Path(adminVersion + "/pools/list").HandlerFunc(adminMiddleware(adminAPI.ListPools, traceAllFlag))
adminRouter.Methods(http.MethodGet).Path(adminVersion+"/pools/status").HandlerFunc(adminMiddleware(adminAPI.StatusPool, traceAllFlag)).Queries("pool", "{pool:.*}")
adminRouter.Methods(http.MethodPost).Path(adminVersion+"/pools/decommission").HandlerFunc(gz(httpTraceAll(adminAPI.StartDecommission))).Queries("pool", "{pool:.*}")
adminRouter.Methods(http.MethodPost).Path(adminVersion+"/pools/cancel").HandlerFunc(gz(httpTraceAll(adminAPI.CancelDecommission))).Queries("pool", "{pool:.*}")
adminRouter.Methods(http.MethodPost).Path(adminVersion+"/pools/decommission").HandlerFunc(adminMiddleware(adminAPI.StartDecommission, traceAllFlag)).Queries("pool", "{pool:.*}")
adminRouter.Methods(http.MethodPost).Path(adminVersion+"/pools/cancel").HandlerFunc(adminMiddleware(adminAPI.CancelDecommission, traceAllFlag)).Queries("pool", "{pool:.*}")
// Rebalance operations
adminRouter.Methods(http.MethodPost).Path(adminVersion + "/rebalance/start").HandlerFunc(gz(httpTraceAll(adminAPI.RebalanceStart)))
adminRouter.Methods(http.MethodGet).Path(adminVersion + "/rebalance/status").HandlerFunc(gz(httpTraceAll(adminAPI.RebalanceStatus)))
adminRouter.Methods(http.MethodPost).Path(adminVersion + "/rebalance/stop").HandlerFunc(gz(httpTraceAll(adminAPI.RebalanceStop)))
adminRouter.Methods(http.MethodPost).Path(adminVersion + "/rebalance/start").HandlerFunc(adminMiddleware(adminAPI.RebalanceStart, traceAllFlag))
adminRouter.Methods(http.MethodGet).Path(adminVersion + "/rebalance/status").HandlerFunc(adminMiddleware(adminAPI.RebalanceStatus, traceAllFlag))
adminRouter.Methods(http.MethodPost).Path(adminVersion + "/rebalance/stop").HandlerFunc(adminMiddleware(adminAPI.RebalanceStop, traceAllFlag))
}
// Profiling operations - deprecated API
adminRouter.Methods(http.MethodPost).Path(adminVersion+"/profiling/start").HandlerFunc(gz(httpTraceAll(adminAPI.StartProfilingHandler))).
adminRouter.Methods(http.MethodPost).Path(adminVersion+"/profiling/start").HandlerFunc(adminMiddleware(adminAPI.StartProfilingHandler, traceAllFlag, noObjLayerFlag)).
Queries("profilerType", "{profilerType:.*}")
adminRouter.Methods(http.MethodGet).Path(adminVersion + "/profiling/download").HandlerFunc(gz(httpTraceAll(adminAPI.DownloadProfilingHandler)))
adminRouter.Methods(http.MethodGet).Path(adminVersion + "/profiling/download").HandlerFunc(adminMiddleware(adminAPI.DownloadProfilingHandler, traceAllFlag, noObjLayerFlag))
// Profiling operations
adminRouter.Methods(http.MethodPost).Path(adminVersion + "/profile").HandlerFunc(gz(httpTraceAll(adminAPI.ProfileHandler)))
adminRouter.Methods(http.MethodPost).Path(adminVersion + "/profile").HandlerFunc(adminMiddleware(adminAPI.ProfileHandler, traceAllFlag, noObjLayerFlag))
// Config KV operations.
if enableConfigOps {
adminRouter.Methods(http.MethodGet).Path(adminVersion+"/get-config-kv").HandlerFunc(gz(httpTraceHdrs(adminAPI.GetConfigKVHandler))).Queries("key", "{key:.*}")
adminRouter.Methods(http.MethodPut).Path(adminVersion + "/set-config-kv").HandlerFunc(gz(httpTraceHdrs(adminAPI.SetConfigKVHandler)))
adminRouter.Methods(http.MethodDelete).Path(adminVersion + "/del-config-kv").HandlerFunc(gz(httpTraceHdrs(adminAPI.DelConfigKVHandler)))
adminRouter.Methods(http.MethodGet).Path(adminVersion+"/get-config-kv").HandlerFunc(adminMiddleware(adminAPI.GetConfigKVHandler)).Queries("key", "{key:.*}")
adminRouter.Methods(http.MethodPut).Path(adminVersion + "/set-config-kv").HandlerFunc(adminMiddleware(adminAPI.SetConfigKVHandler))
adminRouter.Methods(http.MethodDelete).Path(adminVersion + "/del-config-kv").HandlerFunc(adminMiddleware(adminAPI.DelConfigKVHandler))
}
// Enable config help in all modes.
adminRouter.Methods(http.MethodGet).Path(adminVersion+"/help-config-kv").HandlerFunc(gz(httpTraceAll(adminAPI.HelpConfigKVHandler))).Queries("subSys", "{subSys:.*}", "key", "{key:.*}")
adminRouter.Methods(http.MethodGet).Path(adminVersion+"/help-config-kv").HandlerFunc(adminMiddleware(adminAPI.HelpConfigKVHandler, traceAllFlag)).Queries("subSys", "{subSys:.*}", "key", "{key:.*}")
// Config KV history operations.
if enableConfigOps {
adminRouter.Methods(http.MethodGet).Path(adminVersion+"/list-config-history-kv").HandlerFunc(gz(httpTraceAll(adminAPI.ListConfigHistoryKVHandler))).Queries("count", "{count:[0-9]+}")
adminRouter.Methods(http.MethodDelete).Path(adminVersion+"/clear-config-history-kv").HandlerFunc(gz(httpTraceHdrs(adminAPI.ClearConfigHistoryKVHandler))).Queries("restoreId", "{restoreId:.*}")
adminRouter.Methods(http.MethodPut).Path(adminVersion+"/restore-config-history-kv").HandlerFunc(gz(httpTraceHdrs(adminAPI.RestoreConfigHistoryKVHandler))).Queries("restoreId", "{restoreId:.*}")
adminRouter.Methods(http.MethodGet).Path(adminVersion+"/list-config-history-kv").HandlerFunc(adminMiddleware(adminAPI.ListConfigHistoryKVHandler, traceAllFlag)).Queries("count", "{count:[0-9]+}")
adminRouter.Methods(http.MethodDelete).Path(adminVersion+"/clear-config-history-kv").HandlerFunc(adminMiddleware(adminAPI.ClearConfigHistoryKVHandler)).Queries("restoreId", "{restoreId:.*}")
adminRouter.Methods(http.MethodPut).Path(adminVersion+"/restore-config-history-kv").HandlerFunc(adminMiddleware(adminAPI.RestoreConfigHistoryKVHandler)).Queries("restoreId", "{restoreId:.*}")
}
// Config import/export bulk operations
if enableConfigOps {
// Get config
adminRouter.Methods(http.MethodGet).Path(adminVersion + "/config").HandlerFunc(gz(httpTraceHdrs(adminAPI.GetConfigHandler)))
adminRouter.Methods(http.MethodGet).Path(adminVersion + "/config").HandlerFunc(adminMiddleware(adminAPI.GetConfigHandler))
// Set config
adminRouter.Methods(http.MethodPut).Path(adminVersion + "/config").HandlerFunc(gz(httpTraceHdrs(adminAPI.SetConfigHandler)))
adminRouter.Methods(http.MethodPut).Path(adminVersion + "/config").HandlerFunc(adminMiddleware(adminAPI.SetConfigHandler))
}
// -- IAM APIs --
// Add policy IAM
adminRouter.Methods(http.MethodPut).Path(adminVersion+"/add-canned-policy").HandlerFunc(gz(httpTraceAll(adminAPI.AddCannedPolicy))).Queries("name", "{name:.*}")
adminRouter.Methods(http.MethodPut).Path(adminVersion+"/add-canned-policy").HandlerFunc(adminMiddleware(adminAPI.AddCannedPolicy, traceAllFlag)).Queries("name", "{name:.*}")
// Add user IAM
adminRouter.Methods(http.MethodGet).Path(adminVersion + "/accountinfo").HandlerFunc(gz(httpTraceAll(adminAPI.AccountInfoHandler)))
adminRouter.Methods(http.MethodGet).Path(adminVersion + "/accountinfo").HandlerFunc(adminMiddleware(adminAPI.AccountInfoHandler, traceAllFlag))
adminRouter.Methods(http.MethodPut).Path(adminVersion+"/add-user").HandlerFunc(gz(httpTraceHdrs(adminAPI.AddUser))).Queries("accessKey", "{accessKey:.*}")
adminRouter.Methods(http.MethodPut).Path(adminVersion+"/add-user").HandlerFunc(adminMiddleware(adminAPI.AddUser)).Queries("accessKey", "{accessKey:.*}")
adminRouter.Methods(http.MethodPut).Path(adminVersion+"/set-user-status").HandlerFunc(gz(httpTraceHdrs(adminAPI.SetUserStatus))).Queries("accessKey", "{accessKey:.*}").Queries("status", "{status:.*}")
adminRouter.Methods(http.MethodPut).Path(adminVersion+"/set-user-status").HandlerFunc(adminMiddleware(adminAPI.SetUserStatus)).Queries("accessKey", "{accessKey:.*}").Queries("status", "{status:.*}")
// Service accounts ops
adminRouter.Methods(http.MethodPut).Path(adminVersion + "/add-service-account").HandlerFunc(gz(httpTraceHdrs(adminAPI.AddServiceAccount)))
adminRouter.Methods(http.MethodPost).Path(adminVersion+"/update-service-account").HandlerFunc(gz(httpTraceHdrs(adminAPI.UpdateServiceAccount))).Queries("accessKey", "{accessKey:.*}")
adminRouter.Methods(http.MethodGet).Path(adminVersion+"/info-service-account").HandlerFunc(gz(httpTraceHdrs(adminAPI.InfoServiceAccount))).Queries("accessKey", "{accessKey:.*}")
adminRouter.Methods(http.MethodGet).Path(adminVersion + "/list-service-accounts").HandlerFunc(gz(httpTraceHdrs(adminAPI.ListServiceAccounts)))
adminRouter.Methods(http.MethodDelete).Path(adminVersion+"/delete-service-account").HandlerFunc(gz(httpTraceHdrs(adminAPI.DeleteServiceAccount))).Queries("accessKey", "{accessKey:.*}")
adminRouter.Methods(http.MethodPut).Path(adminVersion + "/add-service-account").HandlerFunc(adminMiddleware(adminAPI.AddServiceAccount))
adminRouter.Methods(http.MethodPost).Path(adminVersion+"/update-service-account").HandlerFunc(adminMiddleware(adminAPI.UpdateServiceAccount)).Queries("accessKey", "{accessKey:.*}")
adminRouter.Methods(http.MethodGet).Path(adminVersion+"/info-service-account").HandlerFunc(adminMiddleware(adminAPI.InfoServiceAccount)).Queries("accessKey", "{accessKey:.*}")
adminRouter.Methods(http.MethodGet).Path(adminVersion + "/list-service-accounts").HandlerFunc(adminMiddleware(adminAPI.ListServiceAccounts))
adminRouter.Methods(http.MethodDelete).Path(adminVersion+"/delete-service-account").HandlerFunc(adminMiddleware(adminAPI.DeleteServiceAccount)).Queries("accessKey", "{accessKey:.*}")
// STS accounts ops
adminRouter.Methods(http.MethodGet).Path(adminVersion+"/temporary-account-info").HandlerFunc(gz(httpTraceHdrs(adminAPI.TemporaryAccountInfo))).Queries("accessKey", "{accessKey:.*}")
adminRouter.Methods(http.MethodGet).Path(adminVersion+"/temporary-account-info").HandlerFunc(adminMiddleware(adminAPI.TemporaryAccountInfo)).Queries("accessKey", "{accessKey:.*}")
// Info policy IAM latest
adminRouter.Methods(http.MethodGet).Path(adminVersion+"/info-canned-policy").HandlerFunc(gz(httpTraceHdrs(adminAPI.InfoCannedPolicy))).Queries("name", "{name:.*}")
adminRouter.Methods(http.MethodGet).Path(adminVersion+"/info-canned-policy").HandlerFunc(adminMiddleware(adminAPI.InfoCannedPolicy)).Queries("name", "{name:.*}")
// List policies latest
adminRouter.Methods(http.MethodGet).Path(adminVersion+"/list-canned-policies").HandlerFunc(gz(httpTraceHdrs(adminAPI.ListBucketPolicies))).Queries("bucket", "{bucket:.*}")
adminRouter.Methods(http.MethodGet).Path(adminVersion + "/list-canned-policies").HandlerFunc(gz(httpTraceHdrs(adminAPI.ListCannedPolicies)))
adminRouter.Methods(http.MethodGet).Path(adminVersion+"/list-canned-policies").HandlerFunc(adminMiddleware(adminAPI.ListBucketPolicies)).Queries("bucket", "{bucket:.*}")
adminRouter.Methods(http.MethodGet).Path(adminVersion + "/list-canned-policies").HandlerFunc(adminMiddleware(adminAPI.ListCannedPolicies))
// Builtin IAM policy associations
adminRouter.Methods(http.MethodGet).Path(adminVersion + "/idp/builtin/policy-entities").HandlerFunc(gz(httpTraceHdrs(adminAPI.ListPolicyMappingEntities)))
adminRouter.Methods(http.MethodGet).Path(adminVersion + "/idp/builtin/policy-entities").HandlerFunc(adminMiddleware(adminAPI.ListPolicyMappingEntities))
// Remove policy IAM
adminRouter.Methods(http.MethodDelete).Path(adminVersion+"/remove-canned-policy").HandlerFunc(gz(httpTraceHdrs(adminAPI.RemoveCannedPolicy))).Queries("name", "{name:.*}")
adminRouter.Methods(http.MethodDelete).Path(adminVersion+"/remove-canned-policy").HandlerFunc(adminMiddleware(adminAPI.RemoveCannedPolicy)).Queries("name", "{name:.*}")
// Set user or group policy
adminRouter.Methods(http.MethodPut).Path(adminVersion+"/set-user-or-group-policy").
HandlerFunc(gz(httpTraceHdrs(adminAPI.SetPolicyForUserOrGroup))).
HandlerFunc(adminMiddleware(adminAPI.SetPolicyForUserOrGroup)).
Queries("policyName", "{policyName:.*}", "userOrGroup", "{userOrGroup:.*}", "isGroup", "{isGroup:true|false}")
// Attach policies to user or group
adminRouter.Methods(http.MethodPost).Path(adminVersion + "/idp/builtin/policy/attach").HandlerFunc(gz(httpTraceHdrs(adminAPI.AttachPolicyBuiltin)))
// Detach policies from user or group
adminRouter.Methods(http.MethodPost).Path(adminVersion + "/idp/builtin/policy/detach").HandlerFunc(gz(httpTraceHdrs(adminAPI.DetachPolicyBuiltin)))
// Attach/Detach policies to/from user or group
adminRouter.Methods(http.MethodPost).Path(adminVersion + "/idp/builtin/policy/{operation}").HandlerFunc(adminMiddleware(adminAPI.AttachDetachPolicyBuiltin))
// Remove user IAM
adminRouter.Methods(http.MethodDelete).Path(adminVersion+"/remove-user").HandlerFunc(gz(httpTraceHdrs(adminAPI.RemoveUser))).Queries("accessKey", "{accessKey:.*}")
adminRouter.Methods(http.MethodDelete).Path(adminVersion+"/remove-user").HandlerFunc(adminMiddleware(adminAPI.RemoveUser)).Queries("accessKey", "{accessKey:.*}")
// List users
adminRouter.Methods(http.MethodGet).Path(adminVersion+"/list-users").HandlerFunc(gz(httpTraceHdrs(adminAPI.ListBucketUsers))).Queries("bucket", "{bucket:.*}")
adminRouter.Methods(http.MethodGet).Path(adminVersion + "/list-users").HandlerFunc(gz(httpTraceHdrs(adminAPI.ListUsers)))
adminRouter.Methods(http.MethodGet).Path(adminVersion+"/list-users").HandlerFunc(adminMiddleware(adminAPI.ListBucketUsers)).Queries("bucket", "{bucket:.*}")
adminRouter.Methods(http.MethodGet).Path(adminVersion + "/list-users").HandlerFunc(adminMiddleware(adminAPI.ListUsers))
// User info
adminRouter.Methods(http.MethodGet).Path(adminVersion+"/user-info").HandlerFunc(gz(httpTraceHdrs(adminAPI.GetUserInfo))).Queries("accessKey", "{accessKey:.*}")
adminRouter.Methods(http.MethodGet).Path(adminVersion+"/user-info").HandlerFunc(adminMiddleware(adminAPI.GetUserInfo)).Queries("accessKey", "{accessKey:.*}")
// Add/Remove members from group
adminRouter.Methods(http.MethodPut).Path(adminVersion + "/update-group-members").HandlerFunc(gz(httpTraceHdrs(adminAPI.UpdateGroupMembers)))
adminRouter.Methods(http.MethodPut).Path(adminVersion + "/update-group-members").HandlerFunc(adminMiddleware(adminAPI.UpdateGroupMembers))
// Get Group
adminRouter.Methods(http.MethodGet).Path(adminVersion+"/group").HandlerFunc(gz(httpTraceHdrs(adminAPI.GetGroup))).Queries("group", "{group:.*}")
adminRouter.Methods(http.MethodGet).Path(adminVersion+"/group").HandlerFunc(adminMiddleware(adminAPI.GetGroup)).Queries("group", "{group:.*}")
// List Groups
adminRouter.Methods(http.MethodGet).Path(adminVersion + "/groups").HandlerFunc(gz(httpTraceHdrs(adminAPI.ListGroups)))
adminRouter.Methods(http.MethodGet).Path(adminVersion + "/groups").HandlerFunc(adminMiddleware(adminAPI.ListGroups))
// Set Group Status
adminRouter.Methods(http.MethodPut).Path(adminVersion+"/set-group-status").HandlerFunc(gz(httpTraceHdrs(adminAPI.SetGroupStatus))).Queries("group", "{group:.*}").Queries("status", "{status:.*}")
adminRouter.Methods(http.MethodPut).Path(adminVersion+"/set-group-status").HandlerFunc(adminMiddleware(adminAPI.SetGroupStatus)).Queries("group", "{group:.*}").Queries("status", "{status:.*}")
// Export IAM info to zipped file
adminRouter.Methods(http.MethodGet).Path(adminVersion + "/export-iam").HandlerFunc(httpTraceHdrs(adminAPI.ExportIAM))
adminRouter.Methods(http.MethodGet).Path(adminVersion + "/export-iam").HandlerFunc(adminMiddleware(adminAPI.ExportIAM, noGZFlag))
// Import IAM info
adminRouter.Methods(http.MethodPut).Path(adminVersion + "/import-iam").HandlerFunc(httpTraceHdrs(adminAPI.ImportIAM))
adminRouter.Methods(http.MethodPut).Path(adminVersion + "/import-iam").HandlerFunc(adminMiddleware(adminAPI.ImportIAM, noGZFlag))
// Identity Provider (IDP) configuration APIs
adminRouter.Methods(http.MethodPut).Path(adminVersion + "/idp-config/{type}/{name}").HandlerFunc(gz(httpTraceHdrs(adminAPI.AddIdentityProviderCfg)))
adminRouter.Methods(http.MethodPost).Path(adminVersion + "/idp-config/{type}/{name}").HandlerFunc(gz(httpTraceHdrs(adminAPI.UpdateIdentityProviderCfg)))
adminRouter.Methods(http.MethodGet).Path(adminVersion + "/idp-config/{type}").HandlerFunc(gz(httpTraceHdrs(adminAPI.ListIdentityProviderCfg)))
adminRouter.Methods(http.MethodGet).Path(adminVersion + "/idp-config/{type}/{name}").HandlerFunc(gz(httpTraceHdrs(adminAPI.GetIdentityProviderCfg)))
adminRouter.Methods(http.MethodDelete).Path(adminVersion + "/idp-config/{type}/{name}").HandlerFunc(gz(httpTraceHdrs(adminAPI.DeleteIdentityProviderCfg)))
adminRouter.Methods(http.MethodPut).Path(adminVersion + "/idp-config/{type}/{name}").HandlerFunc(adminMiddleware(adminAPI.AddIdentityProviderCfg))
adminRouter.Methods(http.MethodPost).Path(adminVersion + "/idp-config/{type}/{name}").HandlerFunc(adminMiddleware(adminAPI.UpdateIdentityProviderCfg))
adminRouter.Methods(http.MethodGet).Path(adminVersion + "/idp-config/{type}").HandlerFunc(adminMiddleware(adminAPI.ListIdentityProviderCfg))
adminRouter.Methods(http.MethodGet).Path(adminVersion + "/idp-config/{type}/{name}").HandlerFunc(adminMiddleware(adminAPI.GetIdentityProviderCfg))
adminRouter.Methods(http.MethodDelete).Path(adminVersion + "/idp-config/{type}/{name}").HandlerFunc(adminMiddleware(adminAPI.DeleteIdentityProviderCfg))
// LDAP IAM operations
adminRouter.Methods(http.MethodGet).Path(adminVersion + "/idp/ldap/policy-entities").HandlerFunc(gz(httpTraceHdrs(adminAPI.ListLDAPPolicyMappingEntities)))
adminRouter.Methods(http.MethodPost).Path(adminVersion + "/idp/ldap/policy/{operation}").HandlerFunc(gz(httpTraceHdrs(adminAPI.AttachDetachPolicyLDAP)))
adminRouter.Methods(http.MethodGet).Path(adminVersion + "/idp/ldap/policy-entities").HandlerFunc(adminMiddleware(adminAPI.ListLDAPPolicyMappingEntities))
adminRouter.Methods(http.MethodPost).Path(adminVersion + "/idp/ldap/policy/{operation}").HandlerFunc(adminMiddleware(adminAPI.AttachDetachPolicyLDAP))
// -- END IAM APIs --
// GetBucketQuotaConfig
adminRouter.Methods(http.MethodGet).Path(adminVersion+"/get-bucket-quota").HandlerFunc(
gz(httpTraceHdrs(adminAPI.GetBucketQuotaConfigHandler))).Queries("bucket", "{bucket:.*}")
adminMiddleware(adminAPI.GetBucketQuotaConfigHandler)).Queries("bucket", "{bucket:.*}")
// PutBucketQuotaConfig
adminRouter.Methods(http.MethodPut).Path(adminVersion+"/set-bucket-quota").HandlerFunc(
gz(httpTraceHdrs(adminAPI.PutBucketQuotaConfigHandler))).Queries("bucket", "{bucket:.*}")
adminMiddleware(adminAPI.PutBucketQuotaConfigHandler)).Queries("bucket", "{bucket:.*}")
// Bucket replication operations
// GetBucketTargetHandler
adminRouter.Methods(http.MethodGet).Path(adminVersion+"/list-remote-targets").HandlerFunc(
gz(httpTraceHdrs(adminAPI.ListRemoteTargetsHandler))).Queries("bucket", "{bucket:.*}", "type", "{type:.*}")
adminMiddleware(adminAPI.ListRemoteTargetsHandler)).Queries("bucket", "{bucket:.*}", "type", "{type:.*}")
// SetRemoteTargetHandler
adminRouter.Methods(http.MethodPut).Path(adminVersion+"/set-remote-target").HandlerFunc(
gz(httpTraceHdrs(adminAPI.SetRemoteTargetHandler))).Queries("bucket", "{bucket:.*}")
adminMiddleware(adminAPI.SetRemoteTargetHandler)).Queries("bucket", "{bucket:.*}")
// RemoveRemoteTargetHandler
adminRouter.Methods(http.MethodDelete).Path(adminVersion+"/remove-remote-target").HandlerFunc(
gz(httpTraceHdrs(adminAPI.RemoveRemoteTargetHandler))).Queries("bucket", "{bucket:.*}", "arn", "{arn:.*}")
adminMiddleware(adminAPI.RemoveRemoteTargetHandler)).Queries("bucket", "{bucket:.*}", "arn", "{arn:.*}")
// ReplicationDiff - MinIO extension API
adminRouter.Methods(http.MethodPost).Path(adminVersion+"/replication/diff").HandlerFunc(
gz(httpTraceHdrs(adminAPI.ReplicationDiffHandler))).Queries("bucket", "{bucket:.*}")
adminMiddleware(adminAPI.ReplicationDiffHandler)).Queries("bucket", "{bucket:.*}")
// ReplicationMRFHandler - MinIO extension API
adminRouter.Methods(http.MethodGet).Path(adminVersion+"/replication/mrf").HandlerFunc(
adminMiddleware(adminAPI.ReplicationMRFHandler)).Queries("bucket", "{bucket:.*}")
// Batch job operations
adminRouter.Methods(http.MethodPost).Path(adminVersion + "/start-job").HandlerFunc(
gz(httpTraceHdrs(adminAPI.StartBatchJob)))
adminMiddleware(adminAPI.StartBatchJob))
adminRouter.Methods(http.MethodGet).Path(adminVersion + "/list-jobs").HandlerFunc(
gz(httpTraceHdrs(adminAPI.ListBatchJobs)))
adminMiddleware(adminAPI.ListBatchJobs))
adminRouter.Methods(http.MethodGet).Path(adminVersion + "/describe-job").HandlerFunc(
gz(httpTraceHdrs(adminAPI.DescribeBatchJob)))
adminMiddleware(adminAPI.DescribeBatchJob))
adminRouter.Methods(http.MethodDelete).Path(adminVersion + "/cancel-job").HandlerFunc(
gz(httpTraceHdrs(adminAPI.CancelBatchJob)))
adminMiddleware(adminAPI.CancelBatchJob))
// Bucket migration operations
// ExportBucketMetaHandler
adminRouter.Methods(http.MethodGet).Path(adminVersion + "/export-bucket-metadata").HandlerFunc(
gz(httpTraceHdrs(adminAPI.ExportBucketMetadataHandler)))
adminMiddleware(adminAPI.ExportBucketMetadataHandler))
// ImportBucketMetaHandler
adminRouter.Methods(http.MethodPut).Path(adminVersion + "/import-bucket-metadata").HandlerFunc(
gz(httpTraceHdrs(adminAPI.ImportBucketMetadataHandler)))
adminMiddleware(adminAPI.ImportBucketMetadataHandler))
// Remote Tier management operations
adminRouter.Methods(http.MethodPut).Path(adminVersion + "/tier").HandlerFunc(gz(httpTraceHdrs(adminAPI.AddTierHandler)))
adminRouter.Methods(http.MethodPost).Path(adminVersion + "/tier/{tier}").HandlerFunc(gz(httpTraceHdrs(adminAPI.EditTierHandler)))
adminRouter.Methods(http.MethodGet).Path(adminVersion + "/tier").HandlerFunc(gz(httpTraceHdrs(adminAPI.ListTierHandler)))
adminRouter.Methods(http.MethodDelete).Path(adminVersion + "/tier/{tier}").HandlerFunc(gz(httpTraceHdrs(adminAPI.RemoveTierHandler)))
adminRouter.Methods(http.MethodGet).Path(adminVersion + "/tier/{tier}").HandlerFunc(gz(httpTraceHdrs(adminAPI.VerifyTierHandler)))
adminRouter.Methods(http.MethodPut).Path(adminVersion + "/tier").HandlerFunc(adminMiddleware(adminAPI.AddTierHandler))
adminRouter.Methods(http.MethodPost).Path(adminVersion + "/tier/{tier}").HandlerFunc(adminMiddleware(adminAPI.EditTierHandler))
adminRouter.Methods(http.MethodGet).Path(adminVersion + "/tier").HandlerFunc(adminMiddleware(adminAPI.ListTierHandler))
adminRouter.Methods(http.MethodDelete).Path(adminVersion + "/tier/{tier}").HandlerFunc(adminMiddleware(adminAPI.RemoveTierHandler))
adminRouter.Methods(http.MethodGet).Path(adminVersion + "/tier/{tier}").HandlerFunc(adminMiddleware(adminAPI.VerifyTierHandler))
// Tier stats
adminRouter.Methods(http.MethodGet).Path(adminVersion + "/tier-stats").HandlerFunc(gz(httpTraceHdrs(adminAPI.TierStatsHandler)))
adminRouter.Methods(http.MethodGet).Path(adminVersion + "/tier-stats").HandlerFunc(adminMiddleware(adminAPI.TierStatsHandler))
// Cluster Replication APIs
adminRouter.Methods(http.MethodPut).Path(adminVersion + "/site-replication/add").HandlerFunc(gz(httpTraceHdrs(adminAPI.SiteReplicationAdd)))
adminRouter.Methods(http.MethodPut).Path(adminVersion + "/site-replication/remove").HandlerFunc(gz(httpTraceHdrs(adminAPI.SiteReplicationRemove)))
adminRouter.Methods(http.MethodGet).Path(adminVersion + "/site-replication/info").HandlerFunc(gz(httpTraceHdrs(adminAPI.SiteReplicationInfo)))
adminRouter.Methods(http.MethodGet).Path(adminVersion + "/site-replication/metainfo").HandlerFunc(gz(httpTraceHdrs(adminAPI.SiteReplicationMetaInfo)))
adminRouter.Methods(http.MethodGet).Path(adminVersion + "/site-replication/status").HandlerFunc(gz(httpTraceHdrs(adminAPI.SiteReplicationStatus)))
adminRouter.Methods(http.MethodPut).Path(adminVersion + "/site-replication/add").HandlerFunc(adminMiddleware(adminAPI.SiteReplicationAdd))
adminRouter.Methods(http.MethodPut).Path(adminVersion + "/site-replication/remove").HandlerFunc(adminMiddleware(adminAPI.SiteReplicationRemove))
adminRouter.Methods(http.MethodGet).Path(adminVersion + "/site-replication/info").HandlerFunc(adminMiddleware(adminAPI.SiteReplicationInfo))
adminRouter.Methods(http.MethodGet).Path(adminVersion + "/site-replication/metainfo").HandlerFunc(adminMiddleware(adminAPI.SiteReplicationMetaInfo))
adminRouter.Methods(http.MethodGet).Path(adminVersion + "/site-replication/status").HandlerFunc(adminMiddleware(adminAPI.SiteReplicationStatus))
adminRouter.Methods(http.MethodPost).Path(adminVersion + adminAPISiteReplicationDevNull).HandlerFunc(adminMiddleware(adminAPI.SiteReplicationDevNull, noObjLayerFlag))
adminRouter.Methods(http.MethodPost).Path(adminVersion + adminAPISiteReplicationNetPerf).HandlerFunc(adminMiddleware(adminAPI.SiteReplicationNetPerf, noObjLayerFlag))
adminRouter.Methods(http.MethodPut).Path(adminVersion + "/site-replication/peer/join").HandlerFunc(gz(httpTraceHdrs(adminAPI.SRPeerJoin)))
adminRouter.Methods(http.MethodPut).Path(adminVersion+"/site-replication/peer/bucket-ops").HandlerFunc(gz(httpTraceHdrs(adminAPI.SRPeerBucketOps))).Queries("bucket", "{bucket:.*}").Queries("operation", "{operation:.*}")
adminRouter.Methods(http.MethodPut).Path(adminVersion + "/site-replication/peer/iam-item").HandlerFunc(gz(httpTraceHdrs(adminAPI.SRPeerReplicateIAMItem)))
adminRouter.Methods(http.MethodPut).Path(adminVersion + "/site-replication/peer/bucket-meta").HandlerFunc(gz(httpTraceHdrs(adminAPI.SRPeerReplicateBucketItem)))
adminRouter.Methods(http.MethodGet).Path(adminVersion + "/site-replication/peer/idp-settings").HandlerFunc(gz(httpTraceHdrs(adminAPI.SRPeerGetIDPSettings)))
adminRouter.Methods(http.MethodPut).Path(adminVersion + "/site-replication/edit").HandlerFunc(gz(httpTraceHdrs(adminAPI.SiteReplicationEdit)))
adminRouter.Methods(http.MethodPut).Path(adminVersion + "/site-replication/peer/edit").HandlerFunc(gz(httpTraceHdrs(adminAPI.SRPeerEdit)))
adminRouter.Methods(http.MethodPut).Path(adminVersion + "/site-replication/peer/remove").HandlerFunc(gz(httpTraceHdrs(adminAPI.SRPeerRemove)))
adminRouter.Methods(http.MethodPut).Path(adminVersion+"/site-replication/resync/op").HandlerFunc(gz(httpTraceHdrs(adminAPI.SiteReplicationResyncOp))).Queries("operation", "{operation:.*}")
adminRouter.Methods(http.MethodPut).Path(adminVersion + "/site-replication/peer/join").HandlerFunc(adminMiddleware(adminAPI.SRPeerJoin))
adminRouter.Methods(http.MethodPut).Path(adminVersion+"/site-replication/peer/bucket-ops").HandlerFunc(adminMiddleware(adminAPI.SRPeerBucketOps)).Queries("bucket", "{bucket:.*}").Queries("operation", "{operation:.*}")
adminRouter.Methods(http.MethodPut).Path(adminVersion + "/site-replication/peer/iam-item").HandlerFunc(adminMiddleware(adminAPI.SRPeerReplicateIAMItem))
adminRouter.Methods(http.MethodPut).Path(adminVersion + "/site-replication/peer/bucket-meta").HandlerFunc(adminMiddleware(adminAPI.SRPeerReplicateBucketItem))
adminRouter.Methods(http.MethodGet).Path(adminVersion + "/site-replication/peer/idp-settings").HandlerFunc(adminMiddleware(adminAPI.SRPeerGetIDPSettings))
adminRouter.Methods(http.MethodPut).Path(adminVersion + "/site-replication/edit").HandlerFunc(adminMiddleware(adminAPI.SiteReplicationEdit))
adminRouter.Methods(http.MethodPut).Path(adminVersion + "/site-replication/peer/edit").HandlerFunc(adminMiddleware(adminAPI.SRPeerEdit))
adminRouter.Methods(http.MethodPut).Path(adminVersion + "/site-replication/peer/remove").HandlerFunc(adminMiddleware(adminAPI.SRPeerRemove))
adminRouter.Methods(http.MethodPut).Path(adminVersion+"/site-replication/resync/op").HandlerFunc(adminMiddleware(adminAPI.SiteReplicationResyncOp)).Queries("operation", "{operation:.*}")
if globalIsDistErasure {
// Top locks
adminRouter.Methods(http.MethodGet).Path(adminVersion + "/top/locks").HandlerFunc(gz(httpTraceHdrs(adminAPI.TopLocksHandler)))
adminRouter.Methods(http.MethodGet).Path(adminVersion + "/top/locks").HandlerFunc(adminMiddleware(adminAPI.TopLocksHandler))
// Force unlocks paths
adminRouter.Methods(http.MethodPost).Path(adminVersion+"/force-unlock").
Queries("paths", "{paths:.*}").HandlerFunc(gz(httpTraceHdrs(adminAPI.ForceUnlockHandler)))
Queries("paths", "{paths:.*}").HandlerFunc(adminMiddleware(adminAPI.ForceUnlockHandler))
}
adminRouter.Methods(http.MethodPost).Path(adminVersion + "/speedtest").HandlerFunc(httpTraceHdrs(adminAPI.SpeedTestHandler))
adminRouter.Methods(http.MethodPost).Path(adminVersion + "/speedtest/object").HandlerFunc(httpTraceHdrs(adminAPI.ObjectSpeedTestHandler))
adminRouter.Methods(http.MethodPost).Path(adminVersion + "/speedtest/drive").HandlerFunc(httpTraceHdrs(adminAPI.DriveSpeedtestHandler))
adminRouter.Methods(http.MethodPost).Path(adminVersion + "/speedtest/net").HandlerFunc(httpTraceHdrs(adminAPI.NetperfHandler))
adminRouter.Methods(http.MethodPost).Path(adminVersion + "/speedtest").HandlerFunc(adminMiddleware(adminAPI.ObjectSpeedTestHandler, noGZFlag))
adminRouter.Methods(http.MethodPost).Path(adminVersion + "/speedtest/object").HandlerFunc(adminMiddleware(adminAPI.ObjectSpeedTestHandler, noGZFlag))
adminRouter.Methods(http.MethodPost).Path(adminVersion + "/speedtest/drive").HandlerFunc(adminMiddleware(adminAPI.DriveSpeedtestHandler, noGZFlag))
adminRouter.Methods(http.MethodPost).Path(adminVersion + "/speedtest/net").HandlerFunc(adminMiddleware(adminAPI.NetperfHandler, noGZFlag))
adminRouter.Methods(http.MethodPost).Path(adminVersion + "/speedtest/site").HandlerFunc(adminMiddleware(adminAPI.SitePerfHandler, noGZFlag))
adminRouter.Methods(http.MethodPost).Path(adminVersion + adminAPIClientDevNull).HandlerFunc(adminMiddleware(adminAPI.ClientDevNull, noGZFlag))
adminRouter.Methods(http.MethodPost).Path(adminVersion + adminAPIClientDevExtraTime).HandlerFunc(adminMiddleware(adminAPI.ClientDevNullExtraTime, noGZFlag))
// HTTP Trace
adminRouter.Methods(http.MethodGet).Path(adminVersion + "/trace").HandlerFunc(gz(http.HandlerFunc(adminAPI.TraceHandler)))
adminRouter.Methods(http.MethodGet).Path(adminVersion + "/trace").HandlerFunc(adminMiddleware(adminAPI.TraceHandler, noObjLayerFlag))
// Console Logs
adminRouter.Methods(http.MethodGet).Path(adminVersion + "/log").HandlerFunc(gz(httpTraceAll(adminAPI.ConsoleLogHandler)))
adminRouter.Methods(http.MethodGet).Path(adminVersion + "/log").HandlerFunc(adminMiddleware(adminAPI.ConsoleLogHandler, traceAllFlag))
// -- KMS APIs --
//
adminRouter.Methods(http.MethodPost).Path(adminVersion + "/kms/status").HandlerFunc(gz(httpTraceAll(adminAPI.KMSStatusHandler)))
adminRouter.Methods(http.MethodPost).Path(adminVersion+"/kms/key/create").HandlerFunc(gz(httpTraceAll(adminAPI.KMSCreateKeyHandler))).Queries("key-id", "{key-id:.*}")
adminRouter.Methods(http.MethodGet).Path(adminVersion + "/kms/key/status").HandlerFunc(gz(httpTraceAll(adminAPI.KMSKeyStatusHandler)))
adminRouter.Methods(http.MethodPost).Path(adminVersion + "/kms/status").HandlerFunc(adminMiddleware(adminAPI.KMSStatusHandler, traceAllFlag))
adminRouter.Methods(http.MethodPost).Path(adminVersion+"/kms/key/create").HandlerFunc(adminMiddleware(adminAPI.KMSCreateKeyHandler, traceAllFlag)).Queries("key-id", "{key-id:.*}")
adminRouter.Methods(http.MethodGet).Path(adminVersion + "/kms/key/status").HandlerFunc(adminMiddleware(adminAPI.KMSKeyStatusHandler, traceAllFlag))
// Keep obdinfo for backward compatibility with mc
adminRouter.Methods(http.MethodGet).Path(adminVersion + "/obdinfo").
HandlerFunc(gz(httpTraceHdrs(adminAPI.HealthInfoHandler)))
HandlerFunc(adminMiddleware(adminAPI.HealthInfoHandler))
// -- Health API --
adminRouter.Methods(http.MethodGet).Path(adminVersion + "/healthinfo").
HandlerFunc(gz(httpTraceHdrs(adminAPI.HealthInfoHandler)))
HandlerFunc(adminMiddleware(adminAPI.HealthInfoHandler))
}
// If none of the routes match, add default error handler routes

View File

@@ -26,7 +26,7 @@ import (
"strings"
"time"
"github.com/minio/madmin-go/v2"
"github.com/minio/madmin-go/v3"
"github.com/minio/minio/internal/config"
"github.com/minio/minio/internal/kms"
"github.com/minio/minio/internal/logger"

View File

@@ -31,7 +31,7 @@ import (
"github.com/minio/minio/internal/ioutil"
"google.golang.org/api/googleapi"
"github.com/minio/madmin-go/v2"
"github.com/minio/madmin-go/v3"
"github.com/minio/minio-go/v7"
"github.com/minio/minio-go/v7/pkg/tags"
"github.com/minio/minio/internal/auth"
@@ -47,7 +47,7 @@ import (
levent "github.com/minio/minio/internal/config/lambda/event"
"github.com/minio/minio/internal/event"
"github.com/minio/minio/internal/hash"
"github.com/minio/pkg/bucket/policy"
"github.com/minio/pkg/v2/policy"
)
// APIError structure
@@ -87,6 +87,7 @@ const (
ErrInternalError
ErrInvalidAccessKeyID
ErrAccessKeyDisabled
ErrInvalidArgument
ErrInvalidBucketName
ErrInvalidDigest
ErrInvalidRange
@@ -137,6 +138,8 @@ const (
ErrReplicationDenyEditError
ErrRemoteTargetDenyAddError
ErrReplicationNoExistingObjects
ErrReplicationValidationError
ErrReplicationPermissionCheckError
ErrObjectRestoreAlreadyInProgress
ErrNoSuchKey
ErrNoSuchUpload
@@ -149,6 +152,7 @@ const (
ErrMethodNotAllowed
ErrInvalidPart
ErrInvalidPartOrder
ErrMissingPart
ErrAuthorizationHeaderMalformed
ErrMalformedPOSTRequest
ErrPOSTFileRequired
@@ -182,8 +186,11 @@ const (
ErrBucketAlreadyExists
ErrMetadataTooLarge
ErrUnsupportedMetadata
ErrUnsupportedHostHeader
ErrMaximumExpires
ErrSlowDown
ErrSlowDownRead
ErrSlowDownWrite
ErrMaxVersionsExceeded
ErrInvalidPrefixMarker
ErrBadRequest
ErrKeyTooLongError
@@ -253,10 +260,11 @@ const (
ErrInvalidObjectName
ErrInvalidObjectNamePrefixSlash
ErrInvalidResourceName
ErrInvalidLifecycleQueryParameter
ErrServerNotInitialized
ErrOperationTimedOut
ErrRequestTimedout
ErrClientDisconnected
ErrOperationMaxedOut
ErrTooManyRequests
ErrInvalidRequest
ErrTransitionStorageClassNotFoundError
// MinIO storage class error codes
@@ -268,6 +276,7 @@ const (
ErrMalformedJSON
ErrAdminNoSuchUser
ErrAdminNoSuchUserLDAPWarn
ErrAdminNoSuchGroup
ErrAdminGroupNotEmpty
ErrAdminGroupDisabled
@@ -559,6 +568,11 @@ var errorCodes = errorCodeMap{
Description: "Your account is disabled; please contact your administrator.",
HTTPStatusCode: http.StatusForbidden,
},
ErrInvalidArgument: {
Code: "InvalidArgument",
Description: "Invalid argument",
HTTPStatusCode: http.StatusBadRequest,
},
ErrInvalidBucketName: {
Code: "InvalidBucketName",
Description: "The specified bucket is not valid.",
@@ -684,6 +698,11 @@ var errorCodes = errorCodeMap{
Description: "One or more of the specified parts could not be found. The part may not have been uploaded, or the specified entity tag may not match the part's entity tag.",
HTTPStatusCode: http.StatusBadRequest,
},
ErrMissingPart: {
Code: "InvalidRequest",
Description: "You must specify at least one part",
HTTPStatusCode: http.StatusBadRequest,
},
ErrInvalidPartOrder: {
Code: "InvalidPartOrder",
Description: "The list of parts was not in ascending order. The parts list must be specified in order by part number.",
@@ -829,11 +848,21 @@ var errorCodes = errorCodeMap{
Description: "Request is not valid yet",
HTTPStatusCode: http.StatusForbidden,
},
ErrSlowDown: {
Code: "SlowDown",
ErrSlowDownRead: {
Code: "SlowDownRead",
Description: "Resource requested is unreadable, please reduce your request rate",
HTTPStatusCode: http.StatusServiceUnavailable,
},
ErrSlowDownWrite: {
Code: "SlowDownWrite",
Description: "Resource requested is unwritable, please reduce your request rate",
HTTPStatusCode: http.StatusServiceUnavailable,
},
ErrMaxVersionsExceeded: {
Code: "MaxVersionsExceeded",
Description: "You've exceeded the limit on the number of versions you can create on this object",
HTTPStatusCode: http.StatusBadRequest,
},
ErrInvalidPrefixMarker: {
Code: "InvalidPrefixMarker",
Description: "Invalid marker prefix combination",
@@ -994,6 +1023,16 @@ var errorCodes = errorCodeMap{
Description: "Versioning must be 'Enabled' on the bucket to add a replication target",
HTTPStatusCode: http.StatusBadRequest,
},
ErrReplicationValidationError: {
Code: "InvalidRequest",
Description: "Replication validation failed on target",
HTTPStatusCode: http.StatusBadRequest,
},
ErrReplicationPermissionCheckError: {
Code: "ReplicationPermissionCheck",
Description: "X-Minio-Source-Replication-Check cannot be specified in request. Request cannot be completed",
HTTPStatusCode: http.StatusBadRequest,
},
ErrNoSuchObjectLockConfiguration: {
Code: "NoSuchObjectLockConfiguration",
Description: "The specified object does not have a ObjectLock configuration",
@@ -1107,8 +1146,13 @@ var errorCodes = errorCodeMap{
HTTPStatusCode: http.StatusBadRequest,
},
ErrInvalidEncryptionMethod: {
Code: "InvalidRequest",
Description: "The encryption method specified is not supported",
Code: "InvalidArgument",
Description: "Server Side Encryption with AWS KMS managed key requires HTTP header x-amz-server-side-encryption : aws:kms",
HTTPStatusCode: http.StatusBadRequest,
},
ErrIncompatibleEncryptionMethod: {
Code: "InvalidArgument",
Description: "Server Side Encryption with Customer provided key is incompatible with the encryption method specified",
HTTPStatusCode: http.StatusBadRequest,
},
ErrInvalidEncryptionKeyID: {
@@ -1171,11 +1215,6 @@ var errorCodes = errorCodeMap{
Description: "The provided encryption parameters did not match the ones used originally.",
HTTPStatusCode: http.StatusBadRequest,
},
ErrIncompatibleEncryptionMethod: {
Code: "InvalidArgument",
Description: "Server side encryption specified with both SSE-C and SSE-S3 headers",
HTTPStatusCode: http.StatusBadRequest,
},
ErrKMSNotConfigured: {
Code: "NotImplemented",
Description: "Server side encryption specified but KMS is not configured",
@@ -1255,11 +1294,21 @@ var errorCodes = errorCodeMap{
Description: "The JSON you provided was not well-formed or did not validate against our published format.",
HTTPStatusCode: http.StatusBadRequest,
},
ErrInvalidLifecycleQueryParameter: {
Code: "XMinioInvalidLifecycleParameter",
Description: "The boolean value provided for withUpdatedAt query parameter was invalid.",
HTTPStatusCode: http.StatusBadRequest,
},
ErrAdminNoSuchUser: {
Code: "XMinioAdminNoSuchUser",
Description: "The specified user does not exist.",
HTTPStatusCode: http.StatusNotFound,
},
ErrAdminNoSuchUserLDAPWarn: {
Code: "XMinioAdminNoSuchUser",
Description: "The specified user does not exist. If you meant a user in LDAP, use `mc idp ldap`",
HTTPStatusCode: http.StatusNotFound,
},
ErrAdminNoSuchGroup: {
Code: "XMinioAdminNoSuchGroup",
Description: "The specified group does not exist.",
@@ -1354,7 +1403,7 @@ var errorCodes = errorCodeMap{
},
ErrAdminConfigIDPCfgNameAlreadyExists: {
Code: "XMinioAdminConfigIDPCfgNameAlreadyExists",
Description: "An IDP configuration with the given name aleady exists",
Description: "An IDP configuration with the given name already exists",
HTTPStatusCode: http.StatusBadRequest,
},
ErrAdminConfigIDPCfgNameDoesNotExist: {
@@ -1392,7 +1441,7 @@ var errorCodes = errorCodeMap{
Description: "Cannot respond to plain-text request from TLS-encrypted server",
HTTPStatusCode: http.StatusBadRequest,
},
ErrOperationTimedOut: {
ErrRequestTimedout: {
Code: "RequestTimeout",
Description: "A timeout occurred while trying to lock a resource, please reduce your request rate",
HTTPStatusCode: http.StatusServiceUnavailable,
@@ -1402,9 +1451,9 @@ var errorCodes = errorCodeMap{
Description: "Client disconnected before response was ready",
HTTPStatusCode: 499, // No official code, use nginx value.
},
ErrOperationMaxedOut: {
Code: "SlowDown",
Description: "A timeout exceeded while waiting to proceed with the request, please reduce your request rate",
ErrTooManyRequests: {
Code: "TooManyRequests",
Description: "Deadline exceeded while waiting in incoming queue, please reduce your request rate",
HTTPStatusCode: http.StatusServiceUnavailable,
},
ErrUnsupportedMetadata: {
@@ -1412,6 +1461,11 @@ var errorCodes = errorCodeMap{
Description: "Your metadata headers are not supported.",
HTTPStatusCode: http.StatusBadRequest,
},
ErrUnsupportedHostHeader: {
Code: "InvalidArgument",
Description: "Your Host header is malformed.",
HTTPStatusCode: http.StatusBadRequest,
},
ErrObjectTampered: {
Code: "XMinioObjectTampered",
Description: errObjectTampered.Error(),
@@ -2015,11 +2069,14 @@ func toAPIErrorCode(ctx context.Context, err error) (apiErr APIErrorCode) {
}
// Only return ErrClientDisconnected if the provided context is actually canceled.
// This way downstream context.Canceled will still report ErrOperationTimedOut
// This way downstream context.Canceled will still report ErrRequestTimedout
if contextCanceled(ctx) && errors.Is(ctx.Err(), context.Canceled) {
return ErrClientDisconnected
}
// Unwrap the error first
err = unwrapAll(err)
switch err {
case errInvalidArgument:
apiErr = ErrAdminInvalidArgument
@@ -2027,6 +2084,8 @@ func toAPIErrorCode(ctx context.Context, err error) (apiErr APIErrorCode) {
apiErr = ErrAdminNoSuchPolicy
case errNoSuchUser:
apiErr = ErrAdminNoSuchUser
case errNoSuchUserLDAPWarn:
apiErr = ErrAdminNoSuchUserLDAPWarn
case errNoSuchServiceAccount:
apiErr = ErrAdminServiceAccountNotFound
case errNoSuchGroup:
@@ -2054,9 +2113,11 @@ func toAPIErrorCode(ctx context.Context, err error) (apiErr APIErrorCode) {
case errInvalidStorageClass:
apiErr = ErrInvalidStorageClass
case errErasureReadQuorum:
apiErr = ErrSlowDown
apiErr = ErrSlowDownRead
case errErasureWriteQuorum:
apiErr = ErrSlowDown
apiErr = ErrSlowDownWrite
case errMaxVersionsExceeded:
apiErr = ErrMaxVersionsExceeded
// SSE errors
case errInvalidEncryptionParameters:
apiErr = ErrInvalidEncryptionParameters
@@ -2090,10 +2151,10 @@ func toAPIErrorCode(ctx context.Context, err error) (apiErr APIErrorCode) {
apiErr = ErrKMSKeyNotFoundException
case errKMSDefaultKeyAlreadyConfigured:
apiErr = ErrKMSDefaultKeyAlreadyConfigured
case context.Canceled, context.DeadlineExceeded:
apiErr = ErrOperationTimedOut
case errDiskNotFound:
apiErr = ErrSlowDown
case context.Canceled:
apiErr = ErrClientDisconnected
case context.DeadlineExceeded:
apiErr = ErrRequestTimedout
case objectlock.ErrInvalidRetentionDate:
apiErr = ErrInvalidRetentionDate
case objectlock.ErrPastObjectLockRetainDate:
@@ -2172,11 +2233,9 @@ func toAPIErrorCode(ctx context.Context, err error) (apiErr APIErrorCode) {
case InvalidPart:
apiErr = ErrInvalidPart
case InsufficientWriteQuorum:
apiErr = ErrSlowDown
apiErr = ErrSlowDownWrite
case InsufficientReadQuorum:
apiErr = ErrSlowDown
case InvalidMarkerPrefixCombination:
apiErr = ErrNotImplemented
apiErr = ErrSlowDownRead
case InvalidUploadIDKeyCombination:
apiErr = ErrNotImplemented
case MalformedUploadID:
@@ -2268,7 +2327,7 @@ func toAPIErrorCode(ctx context.Context, err error) (apiErr APIErrorCode) {
case *event.ErrUnsupportedConfiguration:
apiErr = ErrUnsupportedNotification
case OperationTimedOut:
apiErr = ErrOperationTimedOut
apiErr = ErrRequestTimedout
case BackendDown:
apiErr = ErrBackendDown
case ObjectNameTooLong:
@@ -2278,6 +2337,12 @@ func toAPIErrorCode(ctx context.Context, err error) (apiErr APIErrorCode) {
case dns.ErrBucketConflict:
apiErr = ErrBucketAlreadyExists
default:
if _, ok := err.(tags.Error); ok {
// tag errors are not exported, so we check their custom interface to avoid logging.
// The correct type is inserted by toAPIError.
apiErr = ErrInternalError
break
}
var ie, iw int
// This work-around is to handle the issue golang/go#30648
//nolint:gocritic
@@ -2363,7 +2428,7 @@ func toAPIError(ctx context.Context, err error) APIError {
}
case lifecycle.Error:
apiErr = APIError{
Code: "InvalidRequest",
Code: "InvalidArgument",
Description: e.Error(),
HTTPStatusCode: http.StatusBadRequest,
}

View File

@@ -20,8 +20,6 @@ package cmd
import (
"context"
"errors"
"os"
"path/filepath"
"testing"
"github.com/minio/minio/internal/crypto"
@@ -42,9 +40,8 @@ var toAPIErrorTests = []struct {
{err: ObjectNameInvalid{}, errCode: ErrInvalidObjectName},
{err: InvalidUploadID{}, errCode: ErrNoSuchUpload},
{err: InvalidPart{}, errCode: ErrInvalidPart},
{err: InsufficientReadQuorum{}, errCode: ErrSlowDown},
{err: InsufficientWriteQuorum{}, errCode: ErrSlowDown},
{err: InvalidMarkerPrefixCombination{}, errCode: ErrNotImplemented},
{err: InsufficientReadQuorum{}, errCode: ErrSlowDownRead},
{err: InsufficientWriteQuorum{}, errCode: ErrSlowDownWrite},
{err: InvalidUploadIDKeyCombination{}, errCode: ErrNotImplemented},
{err: MalformedUploadID{}, errCode: ErrNoSuchUpload},
{err: PartTooSmall{}, errCode: ErrEntityTooSmall},
@@ -67,11 +64,6 @@ var toAPIErrorTests = []struct {
}
func TestAPIErrCode(t *testing.T) {
disk := filepath.Join(globalTestTmpDir, "minio-"+nextSuffix())
defer os.RemoveAll(disk)
initFSObjects(disk, t)
ctx := context.Background()
for i, testCase := range toAPIErrorTests {
errCode := toAPIErrorCode(ctx, testCase.err)

View File

@@ -23,11 +23,11 @@ import (
"encoding/xml"
"fmt"
"net/http"
"net/url"
"strconv"
"strings"
"time"
"github.com/minio/minio-go/v7/pkg/tags"
"github.com/minio/minio/internal/crypto"
xhttp "github.com/minio/minio/internal/http"
"github.com/minio/minio/internal/logger"
@@ -133,16 +133,15 @@ func setObjectHeaders(w http.ResponseWriter, objInfo ObjectInfo, rs *HTTPRangeSp
w.Header().Set(xhttp.Expires, objInfo.Expires.UTC().Format(http.TimeFormat))
}
if globalCacheConfig.Enabled {
w.Header().Set(xhttp.XCache, objInfo.CacheStatus.String())
w.Header().Set(xhttp.XCacheLookup, objInfo.CacheLookupStatus.String())
}
// Set tag count if object has tags
if len(objInfo.UserTags) > 0 {
tags, _ := url.ParseQuery(objInfo.UserTags)
if len(tags) > 0 {
w.Header()[xhttp.AmzTagCount] = []string{strconv.Itoa(len(tags))}
tags, _ := tags.ParseObjectTags(objInfo.UserTags)
if tags.Count() > 0 {
w.Header()[xhttp.AmzTagCount] = []string{strconv.Itoa(tags.Count())}
if opts.Tagging {
// This is a MinIO-only extension to return tags along with the count.
w.Header()[xhttp.AmzObjectTagging] = []string{objInfo.UserTags}
}
}
}
@@ -153,7 +152,7 @@ func setObjectHeaders(w http.ResponseWriter, objInfo ObjectInfo, rs *HTTPRangeSp
continue
}
if strings.HasPrefix(strings.ToLower(k), ReservedMetadataPrefixLower) {
if stringsHasPrefixFold(k, ReservedMetadataPrefixLower) {
// Do not need to send any internal metadata
// values to client.
continue
@@ -166,7 +165,7 @@ func setObjectHeaders(w http.ResponseWriter, objInfo ObjectInfo, rs *HTTPRangeSp
var isSet bool
for _, userMetadataPrefix := range userMetadataKeyPrefixes {
if !strings.HasPrefix(strings.ToLower(k), strings.ToLower(userMetadataPrefix)) {
if !stringsHasPrefixFold(k, userMetadataPrefix) {
continue
}
w.Header()[strings.ToLower(k)] = []string{v}
@@ -203,7 +202,7 @@ func setObjectHeaders(w http.ResponseWriter, objInfo ObjectInfo, rs *HTTPRangeSp
}
// Set the relevant version ID as part of the response header.
if objInfo.VersionID != "" {
if objInfo.VersionID != "" && objInfo.VersionID != nullVersionID {
w.Header()[xhttp.AmzVersionID] = []string{objInfo.VersionID}
}
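
Several hunks in this file swap strings.HasPrefix(strings.ToLower(k), ...) for stringsHasPrefixFold(k, ...), avoiding a lowercased copy of every header key on the response path. A minimal sketch of what such a helper can look like, assuming a case-insensitive prefix match is all that is needed (the in-tree implementation may differ):

package cmd

import "strings"

// stringsHasPrefixFold reports whether s begins with prefix, ignoring
// case, without first allocating a lowercased copy of s.
func stringsHasPrefixFold(s, prefix string) bool {
	// Only the first len(prefix) bytes of s are ever examined.
	return len(s) >= len(prefix) && strings.EqualFold(s[:len(prefix)], prefix)
}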

View File

@@ -37,8 +37,8 @@ func getListObjectsV1Args(values url.Values) (prefix, marker, delimiter string,
maxkeys = maxObjectList
}
prefix = trimLeadingSlash(values.Get("prefix"))
marker = trimLeadingSlash(values.Get("marker"))
prefix = values.Get("prefix")
marker = values.Get("marker")
delimiter = values.Get("delimiter")
encodingType = values.Get("encoding-type")
return
@@ -57,8 +57,8 @@ func getListBucketObjectVersionsArgs(values url.Values) (prefix, marker, delimit
maxkeys = maxObjectList
}
prefix = trimLeadingSlash(values.Get("prefix"))
marker = trimLeadingSlash(values.Get("key-marker"))
prefix = values.Get("prefix")
marker = values.Get("key-marker")
delimiter = values.Get("delimiter")
encodingType = values.Get("encoding-type")
versionIDMarker = values.Get("version-id-marker")
@@ -87,8 +87,8 @@ func getListObjectsV2Args(values url.Values) (prefix, token, startAfter, delimit
maxkeys = maxObjectList
}
prefix = trimLeadingSlash(values.Get("prefix"))
startAfter = trimLeadingSlash(values.Get("start-after"))
prefix = values.Get("prefix")
startAfter = values.Get("start-after")
delimiter = values.Get("delimiter")
fetchOwner = values.Get("fetch-owner") == "true"
encodingType = values.Get("encoding-type")
@@ -118,8 +118,8 @@ func getBucketMultipartResources(values url.Values) (prefix, keyMarker, uploadID
maxUploads = maxUploadsList
}
prefix = trimLeadingSlash(values.Get("prefix"))
keyMarker = trimLeadingSlash(values.Get("key-marker"))
prefix = values.Get("prefix")
keyMarker = values.Get("key-marker")
uploadIDMarker = values.Get("upload-id-marker")
delimiter = values.Get("delimiter")
encodingType = values.Get("encoding-type")
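
All four list-argument parsers above stop calling trimLeadingSlash, so a leading "/" in prefix, marker, key-marker, start-after, or the upload markers now passes through verbatim instead of being silently stripped; treating "/photos/" and "photos/" as distinct prefixes matches S3 semantics (the motivation is inferred from the change, not stated in it). A small illustration:

package cmd

import (
	"fmt"
	"net/url"
)

func exampleListArgs() {
	values, _ := url.ParseQuery("prefix=/photos/&marker=/photos/2023.jpg")
	// After this change the raw query values are used as-is.
	fmt.Println(values.Get("prefix")) // "/photos/" (leading slash kept)
	fmt.Println(values.Get("marker")) // "/photos/2023.jpg"
}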

View File

@@ -35,7 +35,7 @@ import (
"github.com/minio/minio/internal/hash"
xhttp "github.com/minio/minio/internal/http"
"github.com/minio/minio/internal/logger"
"github.com/minio/pkg/bucket/policy"
"github.com/minio/pkg/v2/policy"
xxml "github.com/minio/xxml"
)
@@ -85,7 +85,7 @@ type ListVersionsResponse struct {
VersionIDMarker string `xml:"VersionIdMarker"`
MaxKeys int
Delimiter string
Delimiter string `xml:"Delimiter,omitempty"`
// A flag that indicates whether or not ListObjects returned all of the results
// that satisfied the search criteria.
IsTruncated bool
@@ -115,7 +115,7 @@ type ListObjectsResponse struct {
NextMarker string `xml:"NextMarker,omitempty"`
MaxKeys int
Delimiter string
Delimiter string `xml:"Delimiter,omitempty"`
// A flag that indicates whether or not ListObjects returned all of the results
// that satisfied the search criteria.
IsTruncated bool
@@ -146,7 +146,7 @@ type ListObjectsV2Response struct {
KeyCount int
MaxKeys int
Delimiter string
Delimiter string `xml:"Delimiter,omitempty"`
// A flag that indicates whether or not ListObjects returned all of the results
// that satisfied the search criteria.
IsTruncated bool
@@ -205,7 +205,7 @@ type ListMultipartUploadsResponse struct {
UploadIDMarker string `xml:"UploadIdMarker"`
NextKeyMarker string
NextUploadIDMarker string `xml:"NextUploadIdMarker"`
Delimiter string
Delimiter string `xml:"Delimiter,omitempty"`
Prefix string
EncodingType string `xml:"EncodingType,omitempty"`
MaxUploads int
@@ -345,6 +345,13 @@ func (s *Metadata) MarshalXML(e *xxml.Encoder, start xxml.StartElement) error {
return e.EncodeToken(start.End())
}
// ObjectInternalInfo contains some internal information about a given
// object; it will be printed in listing calls when metadata is enabled.
type ObjectInternalInfo struct {
K int // Data blocks
M int // Parity blocks
}
// Object container for object metadata
type Object struct {
Key string
@@ -353,7 +360,7 @@ type Object struct {
Size int64
// Owner of the object.
Owner Owner
Owner *Owner `xml:"Owner,omitempty"`
// The class of storage used to store the object.
StorageClass string
@@ -361,6 +368,8 @@ type Object struct {
// UserMetadata user-defined metadata
UserMetadata *Metadata `xml:"UserMetadata,omitempty"`
UserTags string `xml:"UserTags,omitempty"`
Internal *ObjectInternalInfo `xml:"Internal,omitempty"`
}
// CopyObjectResponse container returns ETag and LastModified of the successfully copied object
@@ -497,7 +506,7 @@ func generateListBucketsResponse(buckets []BucketInfo) ListBucketsResponse {
func generateListVersionsResponse(bucket, prefix, marker, versionIDMarker, delimiter, encodingType string, maxKeys int, resp ListObjectVersionsInfo, metadata metaCheckFn) ListVersionsResponse {
versions := make([]ObjectVersion, 0, len(resp.Objects))
owner := Owner{
owner := &Owner{
ID: globalMinioDefaultOwnerID,
DisplayName: "minio",
}
@@ -541,7 +550,7 @@ func generateListVersionsResponse(bucket, prefix, marker, versionIDMarker, delim
content.UserMetadata.Set(xhttp.AmzServerSideEncryptionCustomerAlgorithm, xhttp.AmzEncryptionAES)
}
for k, v := range cleanMinioInternalMetadataKeys(object.UserDefined) {
if strings.HasPrefix(strings.ToLower(k), ReservedMetadataPrefixLower) {
if stringsHasPrefixFold(k, ReservedMetadataPrefixLower) {
// No need to send internal metadata
// values to the client.
continue
@@ -552,6 +561,10 @@ func generateListVersionsResponse(bucket, prefix, marker, versionIDMarker, delim
}
content.UserMetadata.Set(k, v)
}
content.Internal = &ObjectInternalInfo{
K: object.DataBlocks,
M: object.ParityBlocks,
}
}
content.Owner = owner
content.VersionID = object.VersionID
@@ -589,7 +602,7 @@ func generateListVersionsResponse(bucket, prefix, marker, versionIDMarker, delim
// generates a ListObjectsV1 response for the said bucket with other enumerated options.
func generateListObjectsV1Response(bucket, prefix, marker, delimiter, encodingType string, maxKeys int, resp ListObjectsInfo) ListObjectsResponse {
contents := make([]Object, 0, len(resp.Objects))
owner := Owner{
owner := &Owner{
ID: globalMinioDefaultOwnerID,
DisplayName: "minio",
}
@@ -638,10 +651,14 @@ func generateListObjectsV1Response(bucket, prefix, marker, delimiter, encodingTy
// generates a ListObjectsV2 response for the said bucket with other enumerated options.
func generateListObjectsV2Response(bucket, prefix, token, nextToken, startAfter, delimiter, encodingType string, fetchOwner, isTruncated bool, maxKeys int, objects []ObjectInfo, prefixes []string, metadata metaCheckFn) ListObjectsV2Response {
contents := make([]Object, 0, len(objects))
owner := Owner{
ID: globalMinioDefaultOwnerID,
DisplayName: "minio",
var owner *Owner
if fetchOwner {
owner = &Owner{
ID: globalMinioDefaultOwnerID,
DisplayName: "minio",
}
}
data := ListObjectsV2Response{}
for _, object := range objects {
@@ -676,7 +693,7 @@ func generateListObjectsV2Response(bucket, prefix, token, nextToken, startAfter,
content.UserMetadata.Set(xhttp.AmzServerSideEncryptionCustomerAlgorithm, xhttp.AmzEncryptionAES)
}
for k, v := range cleanMinioInternalMetadataKeys(object.UserDefined) {
if strings.HasPrefix(strings.ToLower(k), ReservedMetadataPrefixLower) {
if stringsHasPrefixFold(k, ReservedMetadataPrefixLower) {
// No need to send internal metadata
// values to the client.
continue
@@ -687,6 +704,10 @@ func generateListObjectsV2Response(bucket, prefix, token, nextToken, startAfter,
}
content.UserMetadata.Set(k, v)
}
content.Internal = &ObjectInternalInfo{
K: object.DataBlocks,
M: object.ParityBlocks,
}
}
}
contents = append(contents, content)
@@ -924,6 +945,8 @@ func writeErrorResponse(ctx context.Context, w http.ResponseWriter, err APIError
}
func writeErrorResponseHeadersOnly(w http.ResponseWriter, err APIError) {
w.Header().Set(xMinIOErrCodeHeader, err.Code)
w.Header().Set(xMinIOErrDescHeader, "\""+err.Description+"\"")
writeResponse(w, err.HTTPStatusCode, nil, mimeNone)
}
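
Earlier in this file's diff, Object.Owner changes from a value to a pointer because encoding/xml's omitempty only suppresses nil pointers, empty strings, and other zero-length values; a zero-valued struct is still serialized in full. Making Owner a *Owner that stays nil unless fetch-owner was requested is what actually drops the element from V2 listings. A self-contained demonstration of the difference:

package main

import (
	"encoding/xml"
	"fmt"
)

type Owner struct {
	ID          string
	DisplayName string
}

type withValue struct {
	Key   string
	Owner Owner `xml:"Owner,omitempty"` // omitempty has no effect on struct values
}

type withPointer struct {
	Key   string
	Owner *Owner `xml:"Owner,omitempty"` // a nil pointer really is omitted
}

func main() {
	v, _ := xml.Marshal(withValue{Key: "a"})
	p, _ := xml.Marshal(withPointer{Key: "a"})
	fmt.Println(string(v)) // <withValue><Key>a</Key><Owner><ID></ID><DisplayName></DisplayName></Owner></withValue>
	fmt.Println(string(p)) // <withPointer><Key>a</Key></withPointer>
}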

View File

@@ -27,7 +27,7 @@ import (
xhttp "github.com/minio/minio/internal/http"
"github.com/minio/minio/internal/logger"
"github.com/minio/mux"
"github.com/minio/pkg/wildcard"
"github.com/minio/pkg/v2/wildcard"
"github.com/rs/cors"
)
@@ -61,18 +61,6 @@ func newObjectLayerFn() ObjectLayer {
return globalObjectAPI
}
func newCachedObjectLayerFn() CacheObjectLayer {
globalObjLayerMutex.RLock()
defer globalObjLayerMutex.RUnlock()
return globalCacheObjectAPI
}
func setCacheObjectLayer(c CacheObjectLayer) {
globalObjLayerMutex.Lock()
globalCacheObjectAPI = c
globalObjLayerMutex.Unlock()
}
func setObjectLayer(o ObjectLayer) {
globalObjLayerMutex.Lock()
globalObjectAPI = o
@@ -82,7 +70,6 @@ func setObjectLayer(o ObjectLayer) {
// objectAPIHandler implements and provides http handlers for S3 API.
type objectAPIHandlers struct {
ObjectAPI func() ObjectLayer
CacheAPI func() CacheObjectLayer
}
// getHost tries its best to return the request host.
@@ -189,7 +176,6 @@ func registerAPIRouter(router *mux.Router) {
// Initialize API.
api := objectAPIHandlers{
ObjectAPI: newObjectLayerFn,
CacheAPI: newCachedObjectLayerFn,
}
// API Router
@@ -245,10 +231,10 @@ func registerAPIRouter(router *mux.Router) {
router.Methods(http.MethodPut).Path("/{object:.+}").
HeadersRegexp(xhttp.AmzCopySource, ".*?(\\/|%2F).*?").
HandlerFunc(collectAPIStats("copyobjectpart", maxClients(gz(httpTraceAll(api.CopyObjectPartHandler))))).
Queries("partNumber", "{partNumber:[0-9]+}", "uploadId", "{uploadId:.*}")
Queries("partNumber", "{partNumber:.*}", "uploadId", "{uploadId:.*}")
// PutObjectPart
router.Methods(http.MethodPut).Path("/{object:.+}").HandlerFunc(
collectAPIStats("putobjectpart", maxClients(gz(httpTraceHdrs(api.PutObjectPartHandler))))).Queries("partNumber", "{partNumber:[0-9]+}", "uploadId", "{uploadId:.*}")
collectAPIStats("putobjectpart", maxClients(gz(httpTraceHdrs(api.PutObjectPartHandler))))).Queries("partNumber", "{partNumber:.*}", "uploadId", "{uploadId:.*}")
// ListObjectParts
router.Methods(http.MethodGet).Path("/{object:.+}").HandlerFunc(
collectAPIStats("listobjectparts", maxClients(gz(httpTraceAll(api.ListObjectPartsHandler))))).Queries("uploadId", "{uploadId:.*}")
@@ -461,10 +447,16 @@ func registerAPIRouter(router *mux.Router) {
// MinIO extension API for replication.
//
// GetBucketReplicationMetrics
router.Methods(http.MethodGet).HandlerFunc(
collectAPIStats("getbucketreplicationmetrics", maxClients(gz(httpTraceAll(api.GetBucketReplicationMetricsV2Handler))))).Queries("replication-metrics", "2")
// deprecated handler
router.Methods(http.MethodGet).HandlerFunc(
collectAPIStats("getbucketreplicationmetrics", maxClients(gz(httpTraceAll(api.GetBucketReplicationMetricsHandler))))).Queries("replication-metrics", "")
// ValidateBucketReplicationCreds
router.Methods(http.MethodGet).HandlerFunc(
collectAPIStats("checkbucketreplicationconfiguration", maxClients(gz(httpTraceAll(api.ValidateBucketReplicationCredsHandler))))).Queries("replication-check", "")
// Register rejected bucket APIs
for _, r := range rejectedBucketAPIs {
router.Methods(r.methods...).
@@ -520,14 +512,9 @@ func corsHandler(handler http.Handler) http.Handler {
"x-amz*",
"*",
}
return cors.New(cors.Options{
opts := cors.Options{
AllowOriginFunc: func(origin string) bool {
allowedOrigins := globalAPIConfig.getCorsAllowOrigins()
if len(allowedOrigins) == 0 {
allowedOrigins = []string{"*"}
}
for _, allowedOrigin := range allowedOrigins {
for _, allowedOrigin := range globalAPIConfig.getCorsAllowOrigins() {
if wildcard.MatchSimple(allowedOrigin, origin) {
return true
}
@@ -546,5 +533,6 @@ func corsHandler(handler http.Handler) http.Handler {
AllowedHeaders: commonS3Headers,
ExposedHeaders: commonS3Headers,
AllowCredentials: true,
}).Handler(handler)
}
return cors.New(opts).Handler(handler)
}
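
The rewritten corsHandler drops the inline fallback to "*" and iterates whatever globalAPIConfig.getCorsAllowOrigins() returns, with wildcard.MatchSimple glob-matching the request's Origin. The check in isolation, as a sketch assuming patterns like "https://*.example.com":

package cmd

import "github.com/minio/pkg/v2/wildcard"

// originAllowed reports whether origin matches any configured pattern;
// MatchSimple treats '*' as a glob over the whole origin string.
func originAllowed(origin string, patterns []string) bool {
	for _, pattern := range patterns {
		if wildcard.MatchSimple(pattern, origin) {
			return true
		}
	}
	return false
}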

File diff suppressed because one or more lines are too long

View File

@@ -41,8 +41,7 @@ import (
xjwt "github.com/minio/minio/internal/jwt"
"github.com/minio/minio/internal/logger"
"github.com/minio/minio/internal/mcontext"
"github.com/minio/pkg/bucket/policy"
iampolicy "github.com/minio/pkg/iam/policy"
"github.com/minio/pkg/v2/policy"
)
// Verify if request has JWT.
@@ -186,15 +185,15 @@ func validateAdminSignature(ctx context.Context, r *http.Request, region string)
// checkAdminRequestAuth checks for authentication and authorization for the incoming
// request. It only accepts V2 and V4 requests. Presigned, JWT and anonymous requests
// are automatically rejected.
func checkAdminRequestAuth(ctx context.Context, r *http.Request, action iampolicy.AdminAction, region string) (auth.Credentials, APIErrorCode) {
func checkAdminRequestAuth(ctx context.Context, r *http.Request, action policy.AdminAction, region string) (auth.Credentials, APIErrorCode) {
cred, owner, s3Err := validateAdminSignature(ctx, r, region)
if s3Err != ErrNone {
return cred, s3Err
}
if globalIAMSys.IsAllowed(iampolicy.Args{
if globalIAMSys.IsAllowed(policy.Args{
AccountName: cred.AccessKey,
Groups: cred.Groups,
Action: iampolicy.Action(action),
Action: policy.Action(action),
ConditionValues: getConditionValues(r, "", cred),
IsOwner: owner,
Claims: cred.Claims,
@@ -248,7 +247,7 @@ func getClaimsFromTokenWithSecret(token, secret string) (map[string]interface{},
}
// Check if a session policy is set. If so, decode it here.
sp, spok := claims.Lookup(iampolicy.SessionPolicyName)
sp, spok := claims.Lookup(policy.SessionPolicyName)
if spok {
// Looks like a subpolicy is set and is a string; if set, it is
// base64 encoded, decode it. If decoding fails, reject such
@@ -413,7 +412,7 @@ func authorizeRequest(ctx context.Context, r *http.Request, action policy.Action
if action != policy.ListAllMyBucketsAction && cred.AccessKey == "" {
// Anonymous checks are not meant for ListAllBuckets action
if globalPolicySys.IsAllowed(policy.Args{
if globalPolicySys.IsAllowed(policy.BucketPolicyArgs{
AccountName: cred.AccessKey,
Groups: cred.Groups,
Action: action,
@@ -429,7 +428,7 @@ func authorizeRequest(ctx context.Context, r *http.Request, action policy.Action
if action == policy.ListBucketVersionsAction {
// In AWS S3 the s3:ListBucket permission is the same as the s3:ListBucketVersions permission
// verify as a fallback.
if globalPolicySys.IsAllowed(policy.Args{
if globalPolicySys.IsAllowed(policy.BucketPolicyArgs{
AccountName: cred.AccessKey,
Groups: cred.Groups,
Action: policy.ListBucketAction,
@@ -446,10 +445,10 @@ func authorizeRequest(ctx context.Context, r *http.Request, action policy.Action
return ErrAccessDenied
}
if action == policy.DeleteObjectAction && versionID != "" {
if !globalIAMSys.IsAllowed(iampolicy.Args{
if !globalIAMSys.IsAllowed(policy.Args{
AccountName: cred.AccessKey,
Groups: cred.Groups,
Action: iampolicy.Action(policy.DeleteObjectVersionAction),
Action: policy.Action(policy.DeleteObjectVersionAction),
BucketName: bucket,
ConditionValues: getConditionValues(r, "", cred),
ObjectName: object,
@@ -460,10 +459,10 @@ func authorizeRequest(ctx context.Context, r *http.Request, action policy.Action
return ErrAccessDenied
}
}
if globalIAMSys.IsAllowed(iampolicy.Args{
if globalIAMSys.IsAllowed(policy.Args{
AccountName: cred.AccessKey,
Groups: cred.Groups,
Action: iampolicy.Action(action),
Action: action,
BucketName: bucket,
ConditionValues: getConditionValues(r, "", cred),
ObjectName: object,
@@ -477,10 +476,10 @@ func authorizeRequest(ctx context.Context, r *http.Request, action policy.Action
if action == policy.ListBucketVersionsAction {
// In AWS S3 the s3:ListBucket permission is the same as the s3:ListBucketVersions permission
// verify as a fallback.
if globalIAMSys.IsAllowed(iampolicy.Args{
if globalIAMSys.IsAllowed(policy.Args{
AccountName: cred.AccessKey,
Groups: cred.Groups,
Action: iampolicy.ListBucketAction,
Action: policy.ListBucketAction,
BucketName: bucket,
ConditionValues: getConditionValues(r, "", cred),
ObjectName: object,
@@ -568,7 +567,7 @@ func isReqAuthenticated(ctx context.Context, r *http.Request, region string, sty
// Verify 'Content-Md5' and/or 'X-Amz-Content-Sha256' if present.
// The verification happens implicit during reading.
reader, err := hash.NewReader(r.Body, -1, clientETag.String(), hex.EncodeToString(contentSHA256), -1)
reader, err := hash.NewReader(ctx, r.Body, -1, clientETag.String(), hex.EncodeToString(contentSHA256), -1)
if err != nil {
return toAPIErrorCode(ctx, err)
}
@@ -595,8 +594,8 @@ func isSupportedS3AuthType(aType authType) bool {
return ok
}
// setAuthHandler to validate authorization header for the incoming request.
func setAuthHandler(h http.Handler) http.Handler {
// setAuthMiddleware to validate authorization header for the incoming request.
func setAuthMiddleware(h http.Handler) http.Handler {
// handler for validating incoming authorization headers.
return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
tc, ok := r.Context().Value(mcontext.ContextTraceKey).(*mcontext.TraceCtxt)
@@ -696,10 +695,10 @@ func isPutRetentionAllowed(bucketName, objectName string, retDays int, retDate t
conditions["object-lock-remaining-retention-days"] = []string{strconv.Itoa(retDays)}
}
if retMode == objectlock.RetGovernance && byPassSet {
byPassSet = globalIAMSys.IsAllowed(iampolicy.Args{
byPassSet = globalIAMSys.IsAllowed(policy.Args{
AccountName: cred.AccessKey,
Groups: cred.Groups,
Action: iampolicy.BypassGovernanceRetentionAction,
Action: policy.BypassGovernanceRetentionAction,
BucketName: bucketName,
ObjectName: objectName,
ConditionValues: conditions,
@@ -707,10 +706,10 @@ func isPutRetentionAllowed(bucketName, objectName string, retDays int, retDate t
Claims: cred.Claims,
})
}
if globalIAMSys.IsAllowed(iampolicy.Args{
if globalIAMSys.IsAllowed(policy.Args{
AccountName: cred.AccessKey,
Groups: cred.Groups,
Action: iampolicy.PutObjectRetentionAction,
Action: policy.PutObjectRetentionAction,
BucketName: bucketName,
ConditionValues: conditions,
ObjectName: objectName,
@@ -728,7 +727,7 @@ func isPutRetentionAllowed(bucketName, objectName string, retDays int, retDate t
// isPutActionAllowed - checks if a PUT operation is allowed on the resource; this
// call verifies bucket policies and IAM policies, and supports multi-user
// checks, etc.
func isPutActionAllowed(ctx context.Context, atype authType, bucketName, objectName string, r *http.Request, action iampolicy.Action) (s3Err APIErrorCode) {
func isPutActionAllowed(ctx context.Context, atype authType, bucketName, objectName string, r *http.Request, action policy.Action) (s3Err APIErrorCode) {
var cred auth.Credentials
var owner bool
region := globalSite.Region
@@ -751,17 +750,17 @@ func isPutActionAllowed(ctx context.Context, atype authType, bucketName, objectN
// Do not check for PutObjectRetentionAction permission,
// if mode and retain until date are not set.
// Can happen when bucket has default lock config set
if action == iampolicy.PutObjectRetentionAction &&
if action == policy.PutObjectRetentionAction &&
r.Header.Get(xhttp.AmzObjectLockMode) == "" &&
r.Header.Get(xhttp.AmzObjectLockRetainUntilDate) == "" {
return ErrNone
}
if cred.AccessKey == "" {
if globalPolicySys.IsAllowed(policy.Args{
if globalPolicySys.IsAllowed(policy.BucketPolicyArgs{
AccountName: cred.AccessKey,
Groups: cred.Groups,
Action: policy.Action(action),
Action: action,
BucketName: bucketName,
ConditionValues: getConditionValues(r, "", auth.AnonymousCredentials),
IsOwner: false,
@@ -772,7 +771,7 @@ func isPutActionAllowed(ctx context.Context, atype authType, bucketName, objectN
return ErrAccessDenied
}
if globalIAMSys.IsAllowed(iampolicy.Args{
if globalIAMSys.IsAllowed(policy.Args{
AccountName: cred.AccessKey,
Groups: cred.Groups,
Action: action,

View File

@@ -28,7 +28,7 @@ import (
"time"
"github.com/minio/minio/internal/auth"
iampolicy "github.com/minio/pkg/iam/policy"
"github.com/minio/pkg/v2/policy"
)
type nullReader struct{}
@@ -443,7 +443,7 @@ func TestCheckAdminRequestAuthType(t *testing.T) {
{Request: mustNewPresignedRequest(http.MethodGet, "http://127.0.0.1:9000", 0, nil, t), ErrCode: ErrAccessDenied},
}
for i, testCase := range testCases {
if _, s3Error := checkAdminRequestAuth(ctx, testCase.Request, iampolicy.AllAdminActions, globalSite.Region); s3Error != testCase.ErrCode {
if _, s3Error := checkAdminRequestAuth(ctx, testCase.Request, policy.AllAdminActions, globalSite.Region); s3Error != testCase.ErrCode {
t.Errorf("Test %d: Unexpected s3error returned wanted %d, got %d", i, testCase.ErrCode, s3Error)
}
}

View File

@@ -24,9 +24,9 @@ import (
"strconv"
"time"
"github.com/minio/madmin-go/v2"
"github.com/minio/madmin-go/v3"
"github.com/minio/minio/internal/logger"
"github.com/minio/pkg/env"
"github.com/minio/pkg/v2/env"
)
// healTask represents what to heal along with options
@@ -75,9 +75,9 @@ func waitForLowIO(maxIO int, maxWait time.Duration, currentIO func() int) {
if tmpMaxWait > 0 {
if tmpMaxWait < waitTick {
time.Sleep(tmpMaxWait)
} else {
time.Sleep(waitTick)
return
}
time.Sleep(waitTick)
tmpMaxWait -= waitTick
}
if tmpMaxWait <= 0 {
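
Read together, the fix makes waitForLowIO sleep in waitTick slices and decrement the remaining budget on every pass, where the old code returned immediately after a single full tick. The resulting shape, reconstructed from the hunk above (names and the enclosing loop are assumptions, not verbatim source):

for currentIO() >= maxIO {
	if tmpMaxWait > 0 {
		if tmpMaxWait < waitTick {
			time.Sleep(tmpMaxWait) // spend whatever budget remains
		} else {
			time.Sleep(waitTick)
		}
		tmpMaxWait -= waitTick
	}
	if tmpMaxWait <= 0 {
		return // budget exhausted; stop waiting for IO to quiet down
	}
}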

View File

@@ -30,9 +30,11 @@ import (
"time"
"github.com/dustin/go-humanize"
"github.com/minio/madmin-go/v2"
"github.com/minio/madmin-go/v3"
"github.com/minio/minio-go/v7/pkg/set"
"github.com/minio/minio/internal/config"
"github.com/minio/minio/internal/logger"
"github.com/minio/pkg/v2/env"
)
const (
@@ -345,7 +347,9 @@ func initAutoHeal(ctx context.Context, objAPI ObjectLayer) {
globalBackgroundHealState.pushHealLocalDisks(getLocalDisksToHeal()...)
go monitorLocalDisksAndHeal(ctx, z)
if env.Get("_MINIO_AUTO_DRIVE_HEALING", config.EnableOn) == config.EnableOn || env.Get("_MINIO_AUTO_DISK_HEALING", config.EnableOn) == config.EnableOn {
go monitorLocalDisksAndHeal(ctx, z)
}
}
func getLocalDisksToHeal() (disksToHeal Endpoints) {
@@ -481,7 +485,12 @@ func healFreshDisk(ctx context.Context, z *erasureServerPools, endpoint Endpoint
}
// Remove .healing.bin from all disks with similar heal-id
for _, disk := range z.serverPools[poolIdx].sets[setIdx].getDisks() {
disks, err := z.GetDisks(poolIdx, setIdx)
if err != nil {
return err
}
for _, disk := range disks {
t, err := loadHealingTracker(ctx, disk)
if err != nil {
if !errors.Is(err, errFileNotFound) {

View File

@@ -1,4 +1,4 @@
// Copyright (c) 2015-2022 MinIO, Inc.
// Copyright (c) 2015-2023 MinIO, Inc.
//
// This file is part of MinIO Object Storage stack
//
@@ -28,7 +28,6 @@ import (
"math/rand"
"net/http"
"net/url"
"path"
"runtime"
"strconv"
"strings"
@@ -37,214 +36,24 @@ import (
"github.com/dustin/go-humanize"
"github.com/lithammer/shortuuid/v4"
"github.com/minio/madmin-go/v2"
"github.com/minio/madmin-go/v3"
"github.com/minio/minio-go/v7"
miniogo "github.com/minio/minio-go/v7"
"github.com/minio/minio-go/v7/pkg/credentials"
"github.com/minio/minio-go/v7/pkg/encrypt"
"github.com/minio/minio-go/v7/pkg/tags"
"github.com/minio/minio/internal/auth"
"github.com/minio/minio/internal/crypto"
"github.com/minio/minio/internal/hash"
xhttp "github.com/minio/minio/internal/http"
"github.com/minio/minio/internal/ioutil"
"github.com/minio/minio/internal/logger"
"github.com/minio/pkg/console"
"github.com/minio/pkg/env"
iampolicy "github.com/minio/pkg/iam/policy"
"github.com/minio/pkg/wildcard"
"github.com/minio/pkg/workers"
"github.com/minio/pkg/v2/console"
"github.com/minio/pkg/v2/env"
"github.com/minio/pkg/v2/policy"
"github.com/minio/pkg/v2/workers"
"gopkg.in/yaml.v2"
)
// replicate:
// # source of the objects to be replicated
// source:
// type: "minio"
// bucket: "testbucket"
// prefix: "spark/"
//
// # optional flags based filtering criteria
// # for source objects
// flags:
// filter:
// newerThan: "7d"
// olderThan: "7d"
// createdAfter: "date"
// createdBefore: "date"
// tags:
// - key: "name"
// value: "value*"
// metadata:
// - key: "content-type"
// value: "image/*"
// notify:
// endpoint: "https://splunk-hec.dev.com"
// token: "Splunk ..." # e.g. "Bearer token"
//
// # target where the objects must be replicated
// target:
// type: "minio"
// bucket: "testbucket1"
// endpoint: "https://play.min.io"
// credentials:
// accessKey: "minioadmin"
// secretKey: "minioadmin"
// sessionToken: ""
// BatchJobReplicateKV is a datatype that holds key and values for filtering of objects
// used by metadata filter as well as tags based filtering.
type BatchJobReplicateKV struct {
Key string `yaml:"key" json:"key"`
Value string `yaml:"value" json:"value"`
}
// Validate returns an error if key is empty
func (kv BatchJobReplicateKV) Validate() error {
if kv.Key == "" {
return errInvalidArgument
}
return nil
}
// Empty indicates if kv is not set
func (kv BatchJobReplicateKV) Empty() bool {
return kv.Key == "" && kv.Value == ""
}
// Match matches the input kv against kv; the value is wildcard-matched depending on the user input
func (kv BatchJobReplicateKV) Match(ikv BatchJobReplicateKV) bool {
if kv.Empty() {
return true
}
if strings.EqualFold(kv.Key, ikv.Key) {
return wildcard.Match(kv.Value, ikv.Value)
}
return false
}
// BatchReplicateRetry datatype represents total retry attempts and the delay between retries.
type BatchReplicateRetry struct {
Attempts int `yaml:"attempts" json:"attempts"` // number of retry attempts
Delay time.Duration `yaml:"delay" json:"delay"` // delay between retries
}
// Validate validates input replicate retries.
func (r BatchReplicateRetry) Validate() error {
if r.Attempts < 0 {
return errInvalidArgument
}
if r.Delay < 0 {
return errInvalidArgument
}
return nil
}
// BatchReplicateFilter holds all the filters currently supported for batch replication
type BatchReplicateFilter struct {
NewerThan time.Duration `yaml:"newerThan,omitempty" json:"newerThan"`
OlderThan time.Duration `yaml:"olderThan,omitempty" json:"olderThan"`
CreatedAfter time.Time `yaml:"createdAfter,omitempty" json:"createdAfter"`
CreatedBefore time.Time `yaml:"createdBefore,omitempty" json:"createdBefore"`
Tags []BatchJobReplicateKV `yaml:"tags,omitempty" json:"tags"`
Metadata []BatchJobReplicateKV `yaml:"metadata,omitempty" json:"metadata"`
}
// BatchReplicateNotification success or failure notification endpoint for each job attempt
type BatchReplicateNotification struct {
Endpoint string `yaml:"endpoint" json:"endpoint"`
Token string `yaml:"token" json:"token"`
}
// BatchJobReplicateFlags various configurations for replication job definition currently includes
// - filter
// - notify
// - retry
type BatchJobReplicateFlags struct {
Filter BatchReplicateFilter `yaml:"filter" json:"filter"`
Notify BatchReplicateNotification `yaml:"notify" json:"notify"`
Retry BatchReplicateRetry `yaml:"retry" json:"retry"`
}
// BatchJobReplicateResourceType defines the type of batch jobs
type BatchJobReplicateResourceType string
// Validate validates if the replicate resource type is recognized and supported
func (t BatchJobReplicateResourceType) Validate() error {
switch t {
case BatchJobReplicateResourceMinIO:
case BatchJobReplicateResourceS3:
default:
return errInvalidArgument
}
return nil
}
// Different types of batch jobs.
const (
BatchJobReplicateResourceMinIO BatchJobReplicateResourceType = "minio"
BatchJobReplicateResourceS3 BatchJobReplicateResourceType = "s3"
// add future targets
)
// BatchJobReplicateCredentials holds access credentials for batch replication; they may
// be for either the target or the source.
type BatchJobReplicateCredentials struct {
AccessKey string `xml:"AccessKeyId" json:"accessKey,omitempty" yaml:"accessKey"`
SecretKey string `xml:"SecretAccessKey" json:"secretKey,omitempty" yaml:"secretKey"`
SessionToken string `xml:"SessionToken" json:"sessionToken,omitempty" yaml:"sessionToken"`
}
// Empty indicates if credentials are not set
func (c BatchJobReplicateCredentials) Empty() bool {
return c.AccessKey == "" && c.SecretKey == "" && c.SessionToken == ""
}
// Validate validates if credentials are valid
func (c BatchJobReplicateCredentials) Validate() error {
if !auth.IsAccessKeyValid(c.AccessKey) || !auth.IsSecretKeyValid(c.SecretKey) {
return errInvalidArgument
}
return nil
}
// BatchJobReplicateTarget describes target element of the replication job that receives
// the filtered data from source
type BatchJobReplicateTarget struct {
Type BatchJobReplicateResourceType `yaml:"type" json:"type"`
Bucket string `yaml:"bucket" json:"bucket"`
Prefix string `yaml:"prefix" json:"prefix"`
Endpoint string `yaml:"endpoint" json:"endpoint"`
Creds BatchJobReplicateCredentials `yaml:"credentials" json:"credentials"`
}
// BatchJobReplicateSource describes source element of the replication job that is
// the source of the data for the target
type BatchJobReplicateSource struct {
Type BatchJobReplicateResourceType `yaml:"type" json:"type"`
Bucket string `yaml:"bucket" json:"bucket"`
Prefix string `yaml:"prefix" json:"prefix"`
Endpoint string `yaml:"endpoint" json:"endpoint"`
Creds BatchJobReplicateCredentials `yaml:"credentials" json:"credentials"`
}
// BatchJobReplicateV1 v1 of batch job replication
type BatchJobReplicateV1 struct {
APIVersion string `yaml:"apiVersion" json:"apiVersion"`
Flags BatchJobReplicateFlags `yaml:"flags" json:"flags"`
Target BatchJobReplicateTarget `yaml:"target" json:"target"`
Source BatchJobReplicateSource `yaml:"source" json:"source"`
clnt *miniogo.Core `msg:"-"`
}
// RemoteToLocal returns true if source is remote and target is local
func (r BatchJobReplicateV1) RemoteToLocal() bool {
return !r.Source.Creds.Empty()
}
// BatchJobRequest is an internal data structure, not for external consumption.
type BatchJobRequest struct {
ID string `yaml:"-" json:"name"`
@@ -256,23 +65,28 @@ type BatchJobRequest struct {
ctx context.Context `msg:"-"`
}
// Notify notifies the configured notification endpoint about job failure or success.
func (r BatchJobReplicateV1) Notify(ctx context.Context, body io.Reader) error {
if r.Flags.Notify.Endpoint == "" {
func notifyEndpoint(ctx context.Context, ri *batchJobInfo, endpoint, token string) error {
if endpoint == "" {
return nil
}
buf, err := json.Marshal(ri)
if err != nil {
return err
}
ctx, cancel := context.WithTimeout(ctx, 10*time.Second)
defer cancel()
req, err := http.NewRequestWithContext(ctx, http.MethodPost, r.Flags.Notify.Endpoint, body)
req, err := http.NewRequestWithContext(ctx, http.MethodPost, endpoint, bytes.NewReader(buf))
if err != nil {
return err
}
if r.Flags.Notify.Token != "" {
req.Header.Set("Authorization", r.Flags.Notify.Token)
if token != "" {
req.Header.Set("Authorization", token)
}
req.Header.Set("Content-Type", "application/json")
clnt := http.Client{Transport: getRemoteInstanceTransport}
resp, err := clnt.Do(req)
@@ -288,27 +102,32 @@ func (r BatchJobReplicateV1) Notify(ctx context.Context, body io.Reader) error {
return nil
}
// Notify notifies the configured notification endpoint about job failure or success.
func (r BatchJobReplicateV1) Notify(ctx context.Context, ri *batchJobInfo) error {
return notifyEndpoint(ctx, ri, r.Flags.Notify.Endpoint, r.Flags.Notify.Token)
}
// ReplicateFromSource - this is not implemented yet where source is 'remote' and target is local.
func (r *BatchJobReplicateV1) ReplicateFromSource(ctx context.Context, api ObjectLayer, core *minio.Core, srcObjInfo ObjectInfo, retry bool) error {
func (r *BatchJobReplicateV1) ReplicateFromSource(ctx context.Context, api ObjectLayer, core *miniogo.Core, srcObjInfo ObjectInfo, retry bool) error {
srcBucket := r.Source.Bucket
tgtBucket := r.Target.Bucket
srcObject := srcObjInfo.Name
tgtObject := srcObjInfo.Name
if r.Target.Prefix != "" {
tgtObject = path.Join(r.Target.Prefix, srcObjInfo.Name)
tgtObject = pathJoin(r.Target.Prefix, srcObjInfo.Name)
}
versioned := globalBucketVersioningSys.PrefixEnabled(tgtBucket, tgtObject)
versionSuspended := globalBucketVersioningSys.PrefixSuspended(tgtBucket, tgtObject)
versionID := srcObjInfo.VersionID
if r.Target.Type == BatchJobReplicateResourceS3 || r.Source.Type == BatchJobReplicateResourceS3 {
versionID = ""
}
if srcObjInfo.DeleteMarker {
_, err := api.DeleteObject(ctx, tgtBucket, tgtObject, ObjectOptions{
VersionID: versionID,
VersionSuspended: versionSuspended,
Versioned: versioned,
VersionID: versionID,
// Since we are preserving a delete marker, we have to make sure this is always true:
// regardless of the bucket's current configuration, we must preserve all versions
// on the pool being batch replicated from the source.
Versioned: true,
MTime: srcObjInfo.ModTime,
DeleteMarker: srcObjInfo.DeleteMarker,
ReplicationRequest: true,
@@ -317,12 +136,10 @@ func (r *BatchJobReplicateV1) ReplicateFromSource(ctx context.Context, api Objec
}
opts := ObjectOptions{
VersionID: srcObjInfo.VersionID,
Versioned: versioned,
VersionSuspended: versionSuspended,
MTime: srcObjInfo.ModTime,
PreserveETag: srcObjInfo.ETag,
UserDefined: srcObjInfo.UserDefined,
VersionID: srcObjInfo.VersionID,
MTime: srcObjInfo.ModTime,
PreserveETag: srcObjInfo.ETag,
UserDefined: srcObjInfo.UserDefined,
}
if r.Target.Type == BatchJobReplicateResourceS3 || r.Source.Type == BatchJobReplicateResourceS3 {
opts.VersionID = ""
@@ -338,7 +155,7 @@ func (r *BatchJobReplicateV1) ReplicateFromSource(ctx context.Context, api Objec
}
return r.copyWithMultipartfromSource(ctx, api, core, srcObjInfo, opts, partsCount)
}
gopts := minio.GetObjectOptions{
gopts := miniogo.GetObjectOptions{
VersionID: srcObjInfo.VersionID,
}
if err := gopts.SetMatchETag(srcObjInfo.ETag); err != nil {
@@ -350,7 +167,7 @@ func (r *BatchJobReplicateV1) ReplicateFromSource(ctx context.Context, api Objec
}
defer rd.Close()
hr, err := hash.NewReader(rd, objInfo.Size, "", "", objInfo.Size)
hr, err := hash.NewReader(ctx, rd, objInfo.Size, "", "", objInfo.Size)
if err != nil {
return err
}
@@ -359,13 +176,13 @@ func (r *BatchJobReplicateV1) ReplicateFromSource(ctx context.Context, api Objec
return err
}
func (r *BatchJobReplicateV1) copyWithMultipartfromSource(ctx context.Context, api ObjectLayer, c *minio.Core, srcObjInfo ObjectInfo, opts ObjectOptions, partsCount int) (err error) {
func (r *BatchJobReplicateV1) copyWithMultipartfromSource(ctx context.Context, api ObjectLayer, c *miniogo.Core, srcObjInfo ObjectInfo, opts ObjectOptions, partsCount int) (err error) {
srcBucket := r.Source.Bucket
tgtBucket := r.Target.Bucket
srcObject := srcObjInfo.Name
tgtObject := srcObjInfo.Name
if r.Target.Prefix != "" {
tgtObject = path.Join(r.Target.Prefix, srcObjInfo.Name)
tgtObject = pathJoin(r.Target.Prefix, srcObjInfo.Name)
}
if r.Target.Type == BatchJobReplicateResourceS3 || r.Source.Type == BatchJobReplicateResourceS3 {
opts.VersionID = ""
@@ -400,7 +217,7 @@ func (r *BatchJobReplicateV1) copyWithMultipartfromSource(ctx context.Context, a
)
for i := 0; i < partsCount; i++ {
gopts := minio.GetObjectOptions{
gopts := miniogo.GetObjectOptions{
VersionID: srcObjInfo.VersionID,
PartNumber: i + 1,
}
@@ -413,7 +230,7 @@ func (r *BatchJobReplicateV1) copyWithMultipartfromSource(ctx context.Context, a
}
defer rd.Close()
hr, err = hash.NewReader(io.LimitReader(rd, objInfo.Size), objInfo.Size, "", "", objInfo.Size)
hr, err = hash.NewReader(ctx, io.LimitReader(rd, objInfo.Size), objInfo.Size, "", "", objInfo.Size)
if err != nil {
return err
}
@@ -453,6 +270,10 @@ func (r *BatchJobReplicateV1) StartFromSource(ctx context.Context, api ObjectLay
}
rnd := rand.New(rand.NewSource(time.Now().UnixNano()))
isTags := len(r.Flags.Filter.Tags) != 0
isMetadata := len(r.Flags.Filter.Metadata) != 0
isStorageClassOnly := len(r.Flags.Filter.Metadata) == 1 && strings.EqualFold(r.Flags.Filter.Metadata[0].Key, xhttp.AmzStorageClass)
skip := func(oi ObjectInfo) (ok bool) {
if r.Flags.Filter.OlderThan > 0 && time.Since(oi.ModTime) < r.Flags.Filter.OlderThan {
// skip all objects that are newer than the specified olderThan duration
@@ -473,7 +294,8 @@ func (r *BatchJobReplicateV1) StartFromSource(ctx context.Context, api ObjectLay
// skip all objects that are created after the specified time.
return true
}
if len(r.Flags.Filter.Tags) > 0 {
if isTags {
// Only parse object tags if tags filter is specified.
tagMap := map[string]string{}
tagStr := oi.UserTags
@@ -486,7 +308,7 @@ func (r *BatchJobReplicateV1) StartFromSource(ctx context.Context, api ObjectLay
}
for _, kv := range r.Flags.Filter.Tags {
for t, v := range tagMap {
if kv.Match(BatchJobReplicateKV{Key: t, Value: v}) {
if kv.Match(BatchJobKV{Key: t, Value: v}) {
return true
}
}
@@ -496,23 +318,19 @@ func (r *BatchJobReplicateV1) StartFromSource(ctx context.Context, api ObjectLay
return false
}
if len(r.Flags.Filter.Metadata) > 0 {
for _, kv := range r.Flags.Filter.Metadata {
for k, v := range oi.UserDefined {
if !strings.HasPrefix(strings.ToLower(k), "x-amz-meta-") && !isStandardHeader(k) {
continue
}
// We only need to match x-amz-meta or standardHeaders
if kv.Match(BatchJobReplicateKV{Key: k, Value: v}) {
return true
}
for _, kv := range r.Flags.Filter.Metadata {
for k, v := range oi.UserDefined {
if !stringsHasPrefixFold(k, "x-amz-meta-") && !isStandardHeader(k) {
continue
}
// We only need to match x-amz-meta or standardHeaders
if kv.Match(BatchJobKV{Key: k, Value: v}) {
return true
}
}
// None of the provided metadata filters match; skip the object.
return false
}
// None of the provided filters match
return false
}
@@ -524,16 +342,17 @@ func (r *BatchJobReplicateV1) StartFromSource(ctx context.Context, api ObjectLay
cred := r.Source.Creds
c, err := miniogo.New(u.Host, &miniogo.Options{
Creds: credentials.NewStaticV4(cred.AccessKey, cred.SecretKey, cred.SessionToken),
Secure: u.Scheme == "https",
Transport: getRemoteInstanceTransport,
Creds: credentials.NewStaticV4(cred.AccessKey, cred.SecretKey, cred.SessionToken),
Secure: u.Scheme == "https",
Transport: getRemoteInstanceTransport,
BucketLookup: lookupStyle(r.Source.Path),
})
if err != nil {
return err
}
c.SetAppInfo("minio-"+batchJobPrefix, r.APIVersion+" "+job.ID)
core := &minio.Core{Client: c}
core := &miniogo.Core{Client: c}
workerSize, err := strconv.Atoi(env.Get("_MINIO_BATCH_REPLICATION_WORKERS", strconv.Itoa(runtime.GOMAXPROCS(0)/2)))
if err != nil {
@@ -554,7 +373,7 @@ func (r *BatchJobReplicateV1) StartFromSource(ctx context.Context, api ObjectLay
s3Type := r.Target.Type == BatchJobReplicateResourceS3 || r.Source.Type == BatchJobReplicateResourceS3
minioSrc := r.Source.Type == BatchJobReplicateResourceMinIO
ctx, cancel := context.WithCancel(ctx)
objInfoCh := c.ListObjects(ctx, r.Source.Bucket, minio.ListObjectsOptions{
objInfoCh := c.ListObjects(ctx, r.Source.Bucket, miniogo.ListObjectsOptions{
Prefix: r.Source.Prefix,
WithVersions: minioSrc,
Recursive: true,
@@ -566,17 +385,32 @@ func (r *BatchJobReplicateV1) StartFromSource(ctx context.Context, api ObjectLay
for obj := range objInfoCh {
oi := toObjectInfo(r.Source.Bucket, obj.Key, obj)
if !minioSrc {
oi2, err := c.StatObject(ctx, r.Source.Bucket, obj.Key, minio.StatObjectOptions{})
if err == nil {
oi = toObjectInfo(r.Source.Bucket, obj.Key, oi2)
} else {
if isErrMethodNotAllowed(ErrorRespToObjectError(err, r.Source.Bucket, obj.Key)) ||
isErrObjectNotFound(ErrorRespToObjectError(err, r.Source.Bucket, obj.Key)) {
// Check if a metadata filter was requested and whether it needs
// all user metadata or just storageClass. If it's only storageClass,
// List() already returns the relevant information for the filter to be applied.
if isMetadata && !isStorageClassOnly {
oi2, err := c.StatObject(ctx, r.Source.Bucket, obj.Key, miniogo.StatObjectOptions{})
if err == nil {
oi = toObjectInfo(r.Source.Bucket, obj.Key, oi2)
} else {
if !isErrMethodNotAllowed(ErrorRespToObjectError(err, r.Source.Bucket, obj.Key)) &&
!isErrObjectNotFound(ErrorRespToObjectError(err, r.Source.Bucket, obj.Key)) {
logger.LogIf(ctx, err)
}
continue
}
}
if isTags {
tags, err := c.GetObjectTagging(ctx, r.Source.Bucket, obj.Key, minio.GetObjectTaggingOptions{})
if err == nil {
oi.UserTags = tags.String()
} else {
if !isErrMethodNotAllowed(ErrorRespToObjectError(err, r.Source.Bucket, obj.Key)) &&
!isErrObjectNotFound(ErrorRespToObjectError(err, r.Source.Bucket, obj.Key)) {
logger.LogIf(ctx, err)
}
continue
}
logger.LogIf(ctx, err)
cancel()
return err
}
}
if skip(oi) {
@@ -623,8 +457,7 @@ func (r *BatchJobReplicateV1) StartFromSource(ctx context.Context, api ObjectLay
// persist in-memory state to disk.
logger.LogIf(ctx, ri.updateAfter(ctx, api, 0, job))
buf, _ := json.Marshal(ri)
if err := r.Notify(ctx, bytes.NewReader(buf)); err != nil {
if err := r.Notify(ctx, ri); err != nil {
logger.LogIf(ctx, fmt.Errorf("unable to notify %v", err))
}
@@ -648,7 +481,7 @@ func (r *BatchJobReplicateV1) StartFromSource(ctx context.Context, api ObjectLay
}
// toObjectInfo converts minio.ObjectInfo to ObjectInfo
func toObjectInfo(bucket, object string, objInfo minio.ObjectInfo) ObjectInfo {
func toObjectInfo(bucket, object string, objInfo miniogo.ObjectInfo) ObjectInfo {
tags, _ := tags.MapToObjectTags(objInfo.UserTags)
oi := ObjectInfo{
Bucket: bucket,
@@ -665,10 +498,12 @@ func toObjectInfo(bucket, object string, objInfo minio.ObjectInfo) ObjectInfo {
ReplicationStatusInternal: objInfo.ReplicationStatus,
UserTags: tags.String(),
}
oi.UserDefined = make(map[string]string, len(objInfo.Metadata))
for k, v := range objInfo.Metadata {
oi.UserDefined[k] = v[0]
}
ce, ok := oi.UserDefined[xhttp.ContentEncoding]
if !ok {
ce, ok = oi.UserDefined[strings.ToLower(xhttp.ContentEncoding)]
@@ -676,6 +511,12 @@ func toObjectInfo(bucket, object string, objInfo minio.ObjectInfo) ObjectInfo {
if ok {
oi.ContentEncoding = ce
}
_, ok = oi.UserDefined[xhttp.AmzStorageClass]
if !ok {
oi.UserDefined[xhttp.AmzStorageClass] = objInfo.StorageClass
}
return oi
}
@@ -1005,7 +846,7 @@ func (r *BatchJobReplicateV1) Start(ctx context.Context, api ObjectLayer, job Ba
}
rnd := rand.New(rand.NewSource(time.Now().UnixNano()))
skip := func(info FileInfo) (ok bool) {
selectObj := func(info FileInfo) (ok bool) {
if r.Flags.Filter.OlderThan > 0 && time.Since(info.ModTime) < r.Flags.Filter.OlderThan {
// skip all objects that are newer than the specified olderThan duration
return false
@@ -1040,7 +881,7 @@ func (r *BatchJobReplicateV1) Start(ctx context.Context, api ObjectLayer, job Ba
for _, kv := range r.Flags.Filter.Tags {
for t, v := range tagMap {
if kv.Match(BatchJobReplicateKV{Key: t, Value: v}) {
if kv.Match(BatchJobKV{Key: t, Value: v}) {
return true
}
}
@@ -1053,11 +894,11 @@ func (r *BatchJobReplicateV1) Start(ctx context.Context, api ObjectLayer, job Ba
if len(r.Flags.Filter.Metadata) > 0 {
for _, kv := range r.Flags.Filter.Metadata {
for k, v := range info.Metadata {
if !strings.HasPrefix(strings.ToLower(k), "x-amz-meta-") && !isStandardHeader(k) {
if !stringsHasPrefixFold(k, "x-amz-meta-") && !isStandardHeader(k) {
continue
}
// We only need to match x-amz-meta or standardHeaders
if kv.Match(BatchJobReplicateKV{Key: k, Value: v}) {
if kv.Match(BatchJobKV{Key: k, Value: v}) {
return true
}
}
@@ -1082,9 +923,10 @@ func (r *BatchJobReplicateV1) Start(ctx context.Context, api ObjectLayer, job Ba
cred := r.Target.Creds
c, err := miniogo.NewCore(u.Host, &miniogo.Options{
Creds: credentials.NewStaticV4(cred.AccessKey, cred.SecretKey, cred.SessionToken),
Secure: u.Scheme == "https",
Transport: getRemoteInstanceTransport,
Creds: credentials.NewStaticV4(cred.AccessKey, cred.SecretKey, cred.SessionToken),
Secure: u.Scheme == "https",
Transport: getRemoteInstanceTransport,
BucketLookup: lookupStyle(r.Target.Path),
})
if err != nil {
return err
@@ -1114,7 +956,7 @@ func (r *BatchJobReplicateV1) Start(ctx context.Context, api ObjectLayer, job Ba
results := make(chan ObjectInfo, 100)
if err := api.Walk(ctx, r.Source.Bucket, r.Source.Prefix, results, ObjectOptions{
WalkMarker: lastObject,
WalkFilter: skip,
WalkFilter: selectObj,
}); err != nil {
cancel()
// Do not need to retry if we can't list objects on source.
@@ -1170,8 +1012,7 @@ func (r *BatchJobReplicateV1) Start(ctx context.Context, api ObjectLayer, job Ba
// persist in-memory state to disk.
logger.LogIf(ctx, ri.updateAfter(ctx, api, 0, job))
buf, _ := json.Marshal(ri)
if err := r.Notify(ctx, bytes.NewReader(buf)); err != nil {
if err := r.Notify(ctx, ri); err != nil {
logger.LogIf(ctx, fmt.Errorf("unable to notify %v", err))
}
@@ -1256,6 +1097,13 @@ func (r *BatchJobReplicateV1) Validate(ctx context.Context, job BatchJobRequest,
return errInvalidArgument
}
if r.Source.Endpoint != "" && !r.Source.Type.isMinio() && !r.Source.ValidPath() {
return errInvalidArgument
}
if r.Target.Endpoint != "" && !r.Target.Type.isMinio() && !r.Target.ValidPath() {
return errInvalidArgument
}
if r.Target.Bucket == "" {
return errInvalidArgument
}
@@ -1293,11 +1141,14 @@ func (r *BatchJobReplicateV1) Validate(ctx context.Context, job BatchJobRequest,
remoteEp := r.Target.Endpoint
remoteBkt := r.Target.Bucket
cred := r.Target.Creds
pathStyle := r.Target.Path
if r.Source.Endpoint != "" {
remoteEp = r.Source.Endpoint
cred = r.Source.Creds
remoteBkt = r.Source.Bucket
pathStyle = r.Source.Path
}
u, err := url.Parse(remoteEp)
@@ -1306,9 +1157,10 @@ func (r *BatchJobReplicateV1) Validate(ctx context.Context, job BatchJobRequest,
}
c, err := miniogo.NewCore(u.Host, &miniogo.Options{
Creds: credentials.NewStaticV4(cred.AccessKey, cred.SecretKey, cred.SessionToken),
Secure: u.Scheme == "https",
Transport: getRemoteInstanceTransport,
Creds: credentials.NewStaticV4(cred.AccessKey, cred.SecretKey, cred.SessionToken),
Secure: u.Scheme == "https",
Transport: getRemoteInstanceTransport,
BucketLookup: lookupStyle(pathStyle),
})
if err != nil {
return err
@@ -1372,7 +1224,6 @@ func (j BatchJobRequest) delete(ctx context.Context, api ObjectLayer) {
case j.KeyRotate != nil:
deleteConfig(ctx, api, pathJoin(j.Location, batchKeyRotationName))
}
globalBatchJobsMetrics.delete(j.ID)
deleteConfig(ctx, api, j.Location)
}
@@ -1429,11 +1280,9 @@ func batchReplicationOpts(ctx context.Context, sc string, objInfo ObjectInfo) (p
// ListBatchJobs - lists all currently active batch jobs, optionally takes {jobType}
// input to list only active batch jobs of 'jobType'
func (a adminAPIHandlers) ListBatchJobs(w http.ResponseWriter, r *http.Request) {
ctx := newContext(r, w, "ListBatchJobs")
ctx := r.Context()
defer logger.AuditLog(ctx, w, r, mustGetClaimsFromToken(r))
objectAPI, _ := validateAdminReq(ctx, w, r, iampolicy.ListBatchJobsAction)
objectAPI, _ := validateAdminReq(ctx, w, r, policy.ListBatchJobsAction)
if objectAPI == nil {
return
}
@@ -1481,23 +1330,21 @@ var errNoSuchJob = errors.New("no such job")
// DescribeBatchJob returns the currently active batch job definition
func (a adminAPIHandlers) DescribeBatchJob(w http.ResponseWriter, r *http.Request) {
ctx := newContext(r, w, "DescribeBatchJob")
ctx := r.Context()
defer logger.AuditLog(ctx, w, r, mustGetClaimsFromToken(r))
objectAPI, _ := validateAdminReq(ctx, w, r, iampolicy.DescribeBatchJobAction)
objectAPI, _ := validateAdminReq(ctx, w, r, policy.DescribeBatchJobAction)
if objectAPI == nil {
return
}
id := r.Form.Get("jobId")
if id == "" {
jobID := r.Form.Get("jobId")
if jobID == "" {
writeErrorResponseJSON(ctx, w, toAPIError(ctx, errInvalidArgument), r.URL)
return
}
req := &BatchJobRequest{}
if err := req.load(ctx, objectAPI, pathJoin(batchJobPrefix, id)); err != nil {
if err := req.load(ctx, objectAPI, pathJoin(batchJobPrefix, jobID)); err != nil {
if !errors.Is(err, errNoSuchJob) {
logger.LogIf(ctx, err)
}
@@ -1518,16 +1365,14 @@ func (a adminAPIHandlers) DescribeBatchJob(w http.ResponseWriter, r *http.Reques
// StartBatchJob queues a new job for execution
func (a adminAPIHandlers) StartBatchJob(w http.ResponseWriter, r *http.Request) {
ctx := newContext(r, w, "StartBatchJob")
ctx := r.Context()
defer logger.AuditLog(ctx, w, r, mustGetClaimsFromToken(r))
objectAPI, creds := validateAdminReq(ctx, w, r, iampolicy.StartBatchJobAction)
objectAPI, creds := validateAdminReq(ctx, w, r, policy.StartBatchJobAction)
if objectAPI == nil {
return
}
buf, err := io.ReadAll(r.Body)
buf, err := io.ReadAll(ioutil.HardLimitReader(r.Body, humanize.MiByte*4))
if err != nil {
writeErrorResponseJSON(ctx, w, toAPIError(ctx, err), r.URL)
return
@@ -1539,12 +1384,12 @@ func (a adminAPIHandlers) StartBatchJob(w http.ResponseWriter, r *http.Request)
}
job := &BatchJobRequest{}
if err = yaml.Unmarshal(buf, job); err != nil {
if err = yaml.UnmarshalStrict(buf, job); err != nil {
writeErrorResponseJSON(ctx, w, toAPIError(ctx, err), r.URL)
return
}
job.ID = shortuuid.New()
job.ID = fmt.Sprintf("%s:%d", shortuuid.New(), GetProxyEndpointLocalIndex(globalProxyEndpoints))
job.User = user
job.Started = time.Now()
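
Two hardening changes land in StartBatchJob: the request body is capped at 4 MiB before parsing, and yaml.UnmarshalStrict rejects job specs containing unknown fields instead of silently dropping them. The same pattern in miniature, with io.LimitReader standing in for the internal ioutil.HardLimitReader (which errors out rather than truncating):

package main

import (
	"fmt"
	"io"
	"strings"

	"gopkg.in/yaml.v2"
)

type jobSpec struct {
	APIVersion string `yaml:"apiVersion"`
}

func main() {
	body := strings.NewReader("apiVersion: v1\nreplicte: {}\n") // note the typo'd key
	buf, _ := io.ReadAll(io.LimitReader(body, 4<<20))           // cap reads at 4 MiB
	var j jobSpec
	// UnmarshalStrict fails on the unknown "replicte" field, where a
	// plain Unmarshal would silently ignore it.
	fmt.Println(yaml.UnmarshalStrict(buf, &j))
}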
@@ -1574,27 +1419,33 @@ func (a adminAPIHandlers) StartBatchJob(w http.ResponseWriter, r *http.Request)
// CancelBatchJob cancels a job in progress
func (a adminAPIHandlers) CancelBatchJob(w http.ResponseWriter, r *http.Request) {
ctx := newContext(r, w, "CancelBatchJob")
ctx := r.Context()
defer logger.AuditLog(ctx, w, r, mustGetClaimsFromToken(r))
objectAPI, _ := validateAdminReq(ctx, w, r, iampolicy.CancelBatchJobAction)
objectAPI, _ := validateAdminReq(ctx, w, r, policy.CancelBatchJobAction)
if objectAPI == nil {
return
}
jobID := r.Form.Get("id")
if jobID == "" {
writeErrorResponseJSON(ctx, w, toAPIError(ctx, errInvalidArgument), r.URL)
return
}
if _, success := proxyRequestByToken(ctx, w, r, jobID); success {
return
}
if err := globalBatchJobPool.canceler(jobID, true); err != nil {
writeErrorResponseJSON(ctx, w, errorCodes.ToAPIErrWithErr(ErrInvalidRequest, err), r.URL)
return
}
j := BatchJobRequest{
ID: jobID,
Location: pathJoin(batchJobPrefix, jobID),
}
j.delete(ctx, objectAPI)
writeSuccessNoContent(w)
@@ -1649,6 +1500,11 @@ func (j *BatchJobPool) resume() {
logger.LogIf(ctx, err)
continue
}
_, nodeIdx := parseRequestToken(req.ID)
if nodeIdx > -1 && GetProxyEndpointLocalIndex(globalProxyEndpoints) != nodeIdx {
// This job doesn't belong on this node.
continue
}
if err := j.queueJob(req); err != nil {
logger.LogIf(ctx, err)
continue
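
Job IDs now embed the index of the node that accepted them ("<shortuuid>:<idx>"), so on restart each node requeues only its own jobs, and CancelBatchJob can proxy a cancellation to the owning node via proxyRequestByToken. A minimal sketch of parsing such a token, assuming this two-part format (the in-tree parseRequestToken may differ):

package cmd

import (
	"strconv"
	"strings"
)

// parseJobToken is a hypothetical helper: it splits "<uuid>:<nodeIdx>"
// and reports nodeIdx = -1 for legacy IDs without a node suffix.
func parseJobToken(token string) (jobID string, nodeIdx int) {
	i := strings.LastIndex(token, ":")
	if i < 0 {
		return token, -1
	}
	idx, err := strconv.Atoi(token[i+1:])
	if err != nil {
		return token, -1
	}
	return token[:i], idx
}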
@@ -1769,10 +1625,6 @@ type batchJobMetrics struct {
metrics map[string]*batchJobInfo
}
var globalBatchJobsMetrics = batchJobMetrics{
metrics: make(map[string]*batchJobInfo),
}
//msgp:ignore batchJobMetric
//go:generate stringer -type=batchJobMetric -trimprefix=batchJobMetric $GOFILE
type batchJobMetric uint8
@@ -1812,9 +1664,17 @@ func (m *batchJobMetrics) report(jobID string) (metrics *madmin.BatchJobMetrics)
metrics = &madmin.BatchJobMetrics{CollectedAt: time.Now(), Jobs: make(map[string]madmin.JobMetric)}
m.RLock()
defer m.RUnlock()
match := true
for id, job := range m.metrics {
match := jobID != "" && id == jobID
metrics.Jobs[id] = madmin.JobMetric{
if jobID != "" {
match = id == jobID
}
if !match {
continue
}
m := madmin.JobMetric{
JobID: job.JobID,
JobType: job.JobType,
StartTime: job.StartTime,
@@ -1822,28 +1682,58 @@ func (m *batchJobMetrics) report(jobID string) (metrics *madmin.BatchJobMetrics)
RetryAttempts: job.RetryAttempts,
Complete: job.Complete,
Failed: job.Failed,
Replicate: &madmin.ReplicateInfo{
}
switch job.JobType {
case string(madmin.BatchJobReplicate):
m.Replicate = &madmin.ReplicateInfo{
Bucket: job.Bucket,
Object: job.Object,
Objects: job.Objects,
ObjectsFailed: job.ObjectsFailed,
BytesTransferred: job.BytesTransferred,
BytesFailed: job.BytesFailed,
},
KeyRotate: &madmin.KeyRotationInfo{
}
case string(madmin.BatchJobKeyRotate):
m.KeyRotate = &madmin.KeyRotationInfo{
Bucket: job.Bucket,
Object: job.Object,
Objects: job.Objects,
ObjectsFailed: job.ObjectsFailed,
},
}
if match {
break
}
}
metrics.Jobs[id] = m
}
return metrics
}
// keep job metrics for some time after the job is completed,
// in case someone wants to look at older results.
func (m *batchJobMetrics) purgeJobMetrics() {
t := time.NewTicker(6 * time.Hour)
defer t.Stop()
for {
select {
case <-GlobalContext.Done():
return
case <-t.C:
var toDeleteJobMetrics []string
m.RLock()
for id, metrics := range m.metrics {
if time.Since(metrics.LastUpdate) > 24*time.Hour && (metrics.Complete || metrics.Failed) {
toDeleteJobMetrics = append(toDeleteJobMetrics, id)
}
}
m.RUnlock()
for _, jobID := range toDeleteJobMetrics {
m.delete(jobID)
}
}
}
}
func (m *batchJobMetrics) delete(jobID string) {
m.Lock()
defer m.Unlock()
@@ -1874,3 +1764,17 @@ func (m *batchJobMetrics) trace(d batchJobMetric, job string, attempts int, info
}
}
}
func lookupStyle(s string) miniogo.BucketLookupType {
var lookup miniogo.BucketLookupType
switch s {
case "on":
lookup = miniogo.BucketLookupPath
case "off":
lookup = miniogo.BucketLookupDNS
default:
lookup = miniogo.BucketLookupAuto
}
return lookup
}
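
lookupStyle maps the job spec's path setting onto minio-go's bucket addressing: "on" forces path-style URLs (https://host/bucket/object), "off" forces virtual-host style (https://bucket.host/object), and anything else lets the client auto-detect. A usage sketch mirroring the client construction earlier in this diff:

c, err := miniogo.NewCore(u.Host, &miniogo.Options{
	Creds:        credentials.NewStaticV4(cred.AccessKey, cred.SecretKey, cred.SessionToken),
	Secure:       u.Scheme == "https",
	Transport:    getRemoteInstanceTransport,
	BucketLookup: lookupStyle(r.Target.Path), // "on", "off", or anything else for auto
})
if err != nil {
	return err
}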

File diff suppressed because it is too large

File diff suppressed because it is too large

View File

@@ -0,0 +1,83 @@
// Copyright (c) 2015-2023 MinIO, Inc.
//
// This file is part of MinIO Object Storage stack
//
// This program is free software: you can redistribute it and/or modify
// it under the terms of the GNU Affero General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
//
// This program is distributed in the hope that it will be useful
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU Affero General Public License for more details.
//
// You should have received a copy of the GNU Affero General Public License
// along with this program. If not, see <http://www.gnu.org/licenses/>.
package cmd
import (
"strings"
"time"
"github.com/minio/pkg/v2/wildcard"
)
//go:generate msgp -file $GOFILE
// BatchJobKV is a key-value data type which supports wildcard matching
type BatchJobKV struct {
Key string `yaml:"key" json:"key"`
Value string `yaml:"value" json:"value"`
}
// Validate returns an error if key is empty
func (kv BatchJobKV) Validate() error {
if kv.Key == "" {
return errInvalidArgument
}
return nil
}
// Empty indicates if kv is not set
func (kv BatchJobKV) Empty() bool {
return kv.Key == "" && kv.Value == ""
}
// Match matches the input kv against kv; the value is wildcard-matched depending on the user input
func (kv BatchJobKV) Match(ikv BatchJobKV) bool {
if kv.Empty() {
return true
}
if strings.EqualFold(kv.Key, ikv.Key) {
return wildcard.Match(kv.Value, ikv.Value)
}
return false
}
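
In practice the configured value acts as a glob and the key comparison folds case, so a job filter such as {key: "content-type", value: "image/*"} matches any image subtype on the object. A quick usage sketch (in package cmd, with fmt imported):

func exampleMatch() {
	filter := BatchJobKV{Key: "content-type", Value: "image/*"}
	fmt.Println(filter.Match(BatchJobKV{Key: "Content-Type", Value: "image/png"})) // true: key folds equal, glob matches
	fmt.Println(filter.Match(BatchJobKV{Key: "Content-Type", Value: "text/html"})) // false: glob does not match
	fmt.Println(filter.Match(BatchJobKV{Key: "x-amz-meta-a", Value: "image/png"})) // false: keys differ
}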
// BatchJobNotification stores notification endpoint and token information.
// Used by batch jobs to notify of their status.
type BatchJobNotification struct {
Endpoint string `yaml:"endpoint" json:"endpoint"`
Token string `yaml:"token" json:"token"`
}
// BatchJobRetry stores retry configuration used in the event of failures.
type BatchJobRetry struct {
Attempts int `yaml:"attempts" json:"attempts"` // number of retry attempts
Delay time.Duration `yaml:"delay" json:"delay"` // delay between retries
}
// Validate validates the retry configuration.
func (r BatchJobRetry) Validate() error {
if r.Attempts < 0 {
return errInvalidArgument
}
if r.Delay < 0 {
return errInvalidArgument
}
return nil
}

View File

@@ -0,0 +1,391 @@
package cmd
// Code generated by github.com/tinylib/msgp DO NOT EDIT.
import (
"github.com/tinylib/msgp/msgp"
)
// DecodeMsg implements msgp.Decodable
func (z *BatchJobKV) DecodeMsg(dc *msgp.Reader) (err error) {
var field []byte
_ = field
var zb0001 uint32
zb0001, err = dc.ReadMapHeader()
if err != nil {
err = msgp.WrapError(err)
return
}
for zb0001 > 0 {
zb0001--
field, err = dc.ReadMapKeyPtr()
if err != nil {
err = msgp.WrapError(err)
return
}
switch msgp.UnsafeString(field) {
case "Key":
z.Key, err = dc.ReadString()
if err != nil {
err = msgp.WrapError(err, "Key")
return
}
case "Value":
z.Value, err = dc.ReadString()
if err != nil {
err = msgp.WrapError(err, "Value")
return
}
default:
err = dc.Skip()
if err != nil {
err = msgp.WrapError(err)
return
}
}
}
return
}
// EncodeMsg implements msgp.Encodable
func (z BatchJobKV) EncodeMsg(en *msgp.Writer) (err error) {
// map header, size 2
// write "Key"
err = en.Append(0x82, 0xa3, 0x4b, 0x65, 0x79)
if err != nil {
return
}
err = en.WriteString(z.Key)
if err != nil {
err = msgp.WrapError(err, "Key")
return
}
// write "Value"
err = en.Append(0xa5, 0x56, 0x61, 0x6c, 0x75, 0x65)
if err != nil {
return
}
err = en.WriteString(z.Value)
if err != nil {
err = msgp.WrapError(err, "Value")
return
}
return
}
// MarshalMsg implements msgp.Marshaler
func (z BatchJobKV) MarshalMsg(b []byte) (o []byte, err error) {
o = msgp.Require(b, z.Msgsize())
// map header, size 2
// string "Key"
o = append(o, 0x82, 0xa3, 0x4b, 0x65, 0x79)
o = msgp.AppendString(o, z.Key)
// string "Value"
o = append(o, 0xa5, 0x56, 0x61, 0x6c, 0x75, 0x65)
o = msgp.AppendString(o, z.Value)
return
}
// UnmarshalMsg implements msgp.Unmarshaler
func (z *BatchJobKV) UnmarshalMsg(bts []byte) (o []byte, err error) {
var field []byte
_ = field
var zb0001 uint32
zb0001, bts, err = msgp.ReadMapHeaderBytes(bts)
if err != nil {
err = msgp.WrapError(err)
return
}
for zb0001 > 0 {
zb0001--
field, bts, err = msgp.ReadMapKeyZC(bts)
if err != nil {
err = msgp.WrapError(err)
return
}
switch msgp.UnsafeString(field) {
case "Key":
z.Key, bts, err = msgp.ReadStringBytes(bts)
if err != nil {
err = msgp.WrapError(err, "Key")
return
}
case "Value":
z.Value, bts, err = msgp.ReadStringBytes(bts)
if err != nil {
err = msgp.WrapError(err, "Value")
return
}
default:
bts, err = msgp.Skip(bts)
if err != nil {
err = msgp.WrapError(err)
return
}
}
}
o = bts
return
}
// Msgsize returns an upper bound estimate of the number of bytes occupied by the serialized message
func (z BatchJobKV) Msgsize() (s int) {
s = 1 + 4 + msgp.StringPrefixSize + len(z.Key) + 6 + msgp.StringPrefixSize + len(z.Value)
return
}
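
The magic bytes in this generated encoder are plain MessagePack headers, which also explains the constants in Msgsize: 0x82 is a fixmap holding two key/value pairs, and 0xa3/0xa5 are fixstr headers for the 3- and 5-byte field names. Annotated:

// 0x82                -> fixmap with 2 entries
// 0xa3 0x4b 0x65 0x79 -> fixstr(3) "Key"
// 0xa5 "Value"        -> fixstr(5) "Value"
//
// Hence the Msgsize upper bound:
//   1 (map header) + 4 ("Key" header byte + 3 name bytes)
//     + msgp.StringPrefixSize + len(z.Key)
//   + 6 ("Value" header byte + 5 name bytes)
//     + msgp.StringPrefixSize + len(z.Value)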
// DecodeMsg implements msgp.Decodable
func (z *BatchJobNotification) DecodeMsg(dc *msgp.Reader) (err error) {
var field []byte
_ = field
var zb0001 uint32
zb0001, err = dc.ReadMapHeader()
if err != nil {
err = msgp.WrapError(err)
return
}
for zb0001 > 0 {
zb0001--
field, err = dc.ReadMapKeyPtr()
if err != nil {
err = msgp.WrapError(err)
return
}
switch msgp.UnsafeString(field) {
case "Endpoint":
z.Endpoint, err = dc.ReadString()
if err != nil {
err = msgp.WrapError(err, "Endpoint")
return
}
case "Token":
z.Token, err = dc.ReadString()
if err != nil {
err = msgp.WrapError(err, "Token")
return
}
default:
err = dc.Skip()
if err != nil {
err = msgp.WrapError(err)
return
}
}
}
return
}
// EncodeMsg implements msgp.Encodable
func (z BatchJobNotification) EncodeMsg(en *msgp.Writer) (err error) {
// map header, size 2
// write "Endpoint"
err = en.Append(0x82, 0xa8, 0x45, 0x6e, 0x64, 0x70, 0x6f, 0x69, 0x6e, 0x74)
if err != nil {
return
}
err = en.WriteString(z.Endpoint)
if err != nil {
err = msgp.WrapError(err, "Endpoint")
return
}
// write "Token"
err = en.Append(0xa5, 0x54, 0x6f, 0x6b, 0x65, 0x6e)
if err != nil {
return
}
err = en.WriteString(z.Token)
if err != nil {
err = msgp.WrapError(err, "Token")
return
}
return
}
// MarshalMsg implements msgp.Marshaler
func (z BatchJobNotification) MarshalMsg(b []byte) (o []byte, err error) {
o = msgp.Require(b, z.Msgsize())
// map header, size 2
// string "Endpoint"
o = append(o, 0x82, 0xa8, 0x45, 0x6e, 0x64, 0x70, 0x6f, 0x69, 0x6e, 0x74)
o = msgp.AppendString(o, z.Endpoint)
// string "Token"
o = append(o, 0xa5, 0x54, 0x6f, 0x6b, 0x65, 0x6e)
o = msgp.AppendString(o, z.Token)
return
}
// UnmarshalMsg implements msgp.Unmarshaler
func (z *BatchJobNotification) UnmarshalMsg(bts []byte) (o []byte, err error) {
var field []byte
_ = field
var zb0001 uint32
zb0001, bts, err = msgp.ReadMapHeaderBytes(bts)
if err != nil {
err = msgp.WrapError(err)
return
}
for zb0001 > 0 {
zb0001--
field, bts, err = msgp.ReadMapKeyZC(bts)
if err != nil {
err = msgp.WrapError(err)
return
}
switch msgp.UnsafeString(field) {
case "Endpoint":
z.Endpoint, bts, err = msgp.ReadStringBytes(bts)
if err != nil {
err = msgp.WrapError(err, "Endpoint")
return
}
case "Token":
z.Token, bts, err = msgp.ReadStringBytes(bts)
if err != nil {
err = msgp.WrapError(err, "Token")
return
}
default:
bts, err = msgp.Skip(bts)
if err != nil {
err = msgp.WrapError(err)
return
}
}
}
o = bts
return
}
// Msgsize returns an upper bound estimate of the number of bytes occupied by the serialized message
func (z BatchJobNotification) Msgsize() (s int) {
s = 1 + 9 + msgp.StringPrefixSize + len(z.Endpoint) + 6 + msgp.StringPrefixSize + len(z.Token)
return
}
// DecodeMsg implements msgp.Decodable
func (z *BatchJobRetry) DecodeMsg(dc *msgp.Reader) (err error) {
var field []byte
_ = field
var zb0001 uint32
zb0001, err = dc.ReadMapHeader()
if err != nil {
err = msgp.WrapError(err)
return
}
for zb0001 > 0 {
zb0001--
field, err = dc.ReadMapKeyPtr()
if err != nil {
err = msgp.WrapError(err)
return
}
switch msgp.UnsafeString(field) {
case "Attempts":
z.Attempts, err = dc.ReadInt()
if err != nil {
err = msgp.WrapError(err, "Attempts")
return
}
case "Delay":
z.Delay, err = dc.ReadDuration()
if err != nil {
err = msgp.WrapError(err, "Delay")
return
}
default:
err = dc.Skip()
if err != nil {
err = msgp.WrapError(err)
return
}
}
}
return
}
// EncodeMsg implements msgp.Encodable
func (z BatchJobRetry) EncodeMsg(en *msgp.Writer) (err error) {
// map header, size 2
// write "Attempts"
err = en.Append(0x82, 0xa8, 0x41, 0x74, 0x74, 0x65, 0x6d, 0x70, 0x74, 0x73)
if err != nil {
return
}
err = en.WriteInt(z.Attempts)
if err != nil {
err = msgp.WrapError(err, "Attempts")
return
}
// write "Delay"
err = en.Append(0xa5, 0x44, 0x65, 0x6c, 0x61, 0x79)
if err != nil {
return
}
err = en.WriteDuration(z.Delay)
if err != nil {
err = msgp.WrapError(err, "Delay")
return
}
return
}
// MarshalMsg implements msgp.Marshaler
func (z BatchJobRetry) MarshalMsg(b []byte) (o []byte, err error) {
o = msgp.Require(b, z.Msgsize())
// map header, size 2
// string "Attempts"
o = append(o, 0x82, 0xa8, 0x41, 0x74, 0x74, 0x65, 0x6d, 0x70, 0x74, 0x73)
o = msgp.AppendInt(o, z.Attempts)
// string "Delay"
o = append(o, 0xa5, 0x44, 0x65, 0x6c, 0x61, 0x79)
o = msgp.AppendDuration(o, z.Delay)
return
}
// UnmarshalMsg implements msgp.Unmarshaler
func (z *BatchJobRetry) UnmarshalMsg(bts []byte) (o []byte, err error) {
var field []byte
_ = field
var zb0001 uint32
zb0001, bts, err = msgp.ReadMapHeaderBytes(bts)
if err != nil {
err = msgp.WrapError(err)
return
}
for zb0001 > 0 {
zb0001--
field, bts, err = msgp.ReadMapKeyZC(bts)
if err != nil {
err = msgp.WrapError(err)
return
}
switch msgp.UnsafeString(field) {
case "Attempts":
z.Attempts, bts, err = msgp.ReadIntBytes(bts)
if err != nil {
err = msgp.WrapError(err, "Attempts")
return
}
case "Delay":
z.Delay, bts, err = msgp.ReadDurationBytes(bts)
if err != nil {
err = msgp.WrapError(err, "Delay")
return
}
default:
bts, err = msgp.Skip(bts)
if err != nil {
err = msgp.WrapError(err)
return
}
}
}
o = bts
return
}
// Msgsize returns an upper bound estimate of the number of bytes occupied by the serialized message
func (z BatchJobRetry) Msgsize() (s int) {
s = 1 + 9 + msgp.IntSize + 6 + msgp.DurationSize
return
}
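The Msgsize bounds above are simple arithmetic over worst-case encodings: 1 byte for the fixmap header, 1 + len(name) bytes per field-name fixstr, and a library constant per value. A sketch of the BatchJobRetry bound, spelled out:

package main

import (
	"fmt"

	"github.com/tinylib/msgp/msgp"
)

func main() {
	// 1 fixmap header + (1+8) "Attempts" + worst-case int
	//                 + (1+5) "Delay"    + worst-case duration.
	bound := 1 + (1 + len("Attempts")) + msgp.IntSize + (1 + len("Delay")) + msgp.DurationSize
	fmt.Println("BatchJobRetry upper bound:", bound)
}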


@@ -0,0 +1,349 @@
package cmd
// Code generated by github.com/tinylib/msgp DO NOT EDIT.
import (
"bytes"
"testing"
"github.com/tinylib/msgp/msgp"
)
func TestMarshalUnmarshalBatchJobKV(t *testing.T) {
v := BatchJobKV{}
bts, err := v.MarshalMsg(nil)
if err != nil {
t.Fatal(err)
}
left, err := v.UnmarshalMsg(bts)
if err != nil {
t.Fatal(err)
}
if len(left) > 0 {
t.Errorf("%d bytes left over after UnmarshalMsg(): %q", len(left), left)
}
left, err = msgp.Skip(bts)
if err != nil {
t.Fatal(err)
}
if len(left) > 0 {
t.Errorf("%d bytes left over after Skip(): %q", len(left), left)
}
}
func BenchmarkMarshalMsgBatchJobKV(b *testing.B) {
v := BatchJobKV{}
b.ReportAllocs()
b.ResetTimer()
for i := 0; i < b.N; i++ {
v.MarshalMsg(nil)
}
}
func BenchmarkAppendMsgBatchJobKV(b *testing.B) {
v := BatchJobKV{}
bts := make([]byte, 0, v.Msgsize())
bts, _ = v.MarshalMsg(bts[0:0])
b.SetBytes(int64(len(bts)))
b.ReportAllocs()
b.ResetTimer()
for i := 0; i < b.N; i++ {
bts, _ = v.MarshalMsg(bts[0:0])
}
}
func BenchmarkUnmarshalBatchJobKV(b *testing.B) {
v := BatchJobKV{}
bts, _ := v.MarshalMsg(nil)
b.ReportAllocs()
b.SetBytes(int64(len(bts)))
b.ResetTimer()
for i := 0; i < b.N; i++ {
_, err := v.UnmarshalMsg(bts)
if err != nil {
b.Fatal(err)
}
}
}
func TestEncodeDecodeBatchJobKV(t *testing.T) {
v := BatchJobKV{}
var buf bytes.Buffer
msgp.Encode(&buf, &v)
m := v.Msgsize()
if buf.Len() > m {
t.Log("WARNING: TestEncodeDecodeBatchJobKV Msgsize() is inaccurate")
}
vn := BatchJobKV{}
err := msgp.Decode(&buf, &vn)
if err != nil {
t.Error(err)
}
buf.Reset()
msgp.Encode(&buf, &v)
err = msgp.NewReader(&buf).Skip()
if err != nil {
t.Error(err)
}
}
func BenchmarkEncodeBatchJobKV(b *testing.B) {
v := BatchJobKV{}
var buf bytes.Buffer
msgp.Encode(&buf, &v)
b.SetBytes(int64(buf.Len()))
en := msgp.NewWriter(msgp.Nowhere)
b.ReportAllocs()
b.ResetTimer()
for i := 0; i < b.N; i++ {
v.EncodeMsg(en)
}
en.Flush()
}
func BenchmarkDecodeBatchJobKV(b *testing.B) {
v := BatchJobKV{}
var buf bytes.Buffer
msgp.Encode(&buf, &v)
b.SetBytes(int64(buf.Len()))
rd := msgp.NewEndlessReader(buf.Bytes(), b)
dc := msgp.NewReader(rd)
b.ReportAllocs()
b.ResetTimer()
for i := 0; i < b.N; i++ {
err := v.DecodeMsg(dc)
if err != nil {
b.Fatal(err)
}
}
}
func TestMarshalUnmarshalBatchJobNotification(t *testing.T) {
v := BatchJobNotification{}
bts, err := v.MarshalMsg(nil)
if err != nil {
t.Fatal(err)
}
left, err := v.UnmarshalMsg(bts)
if err != nil {
t.Fatal(err)
}
if len(left) > 0 {
t.Errorf("%d bytes left over after UnmarshalMsg(): %q", len(left), left)
}
left, err = msgp.Skip(bts)
if err != nil {
t.Fatal(err)
}
if len(left) > 0 {
t.Errorf("%d bytes left over after Skip(): %q", len(left), left)
}
}
func BenchmarkMarshalMsgBatchJobNotification(b *testing.B) {
v := BatchJobNotification{}
b.ReportAllocs()
b.ResetTimer()
for i := 0; i < b.N; i++ {
v.MarshalMsg(nil)
}
}
func BenchmarkAppendMsgBatchJobNotification(b *testing.B) {
v := BatchJobNotification{}
bts := make([]byte, 0, v.Msgsize())
bts, _ = v.MarshalMsg(bts[0:0])
b.SetBytes(int64(len(bts)))
b.ReportAllocs()
b.ResetTimer()
for i := 0; i < b.N; i++ {
bts, _ = v.MarshalMsg(bts[0:0])
}
}
func BenchmarkUnmarshalBatchJobNotification(b *testing.B) {
v := BatchJobNotification{}
bts, _ := v.MarshalMsg(nil)
b.ReportAllocs()
b.SetBytes(int64(len(bts)))
b.ResetTimer()
for i := 0; i < b.N; i++ {
_, err := v.UnmarshalMsg(bts)
if err != nil {
b.Fatal(err)
}
}
}
func TestEncodeDecodeBatchJobNotification(t *testing.T) {
v := BatchJobNotification{}
var buf bytes.Buffer
msgp.Encode(&buf, &v)
m := v.Msgsize()
if buf.Len() > m {
t.Log("WARNING: TestEncodeDecodeBatchJobNotification Msgsize() is inaccurate")
}
vn := BatchJobNotification{}
err := msgp.Decode(&buf, &vn)
if err != nil {
t.Error(err)
}
buf.Reset()
msgp.Encode(&buf, &v)
err = msgp.NewReader(&buf).Skip()
if err != nil {
t.Error(err)
}
}
func BenchmarkEncodeBatchJobNotification(b *testing.B) {
v := BatchJobNotification{}
var buf bytes.Buffer
msgp.Encode(&buf, &v)
b.SetBytes(int64(buf.Len()))
en := msgp.NewWriter(msgp.Nowhere)
b.ReportAllocs()
b.ResetTimer()
for i := 0; i < b.N; i++ {
v.EncodeMsg(en)
}
en.Flush()
}
func BenchmarkDecodeBatchJobNotification(b *testing.B) {
v := BatchJobNotification{}
var buf bytes.Buffer
msgp.Encode(&buf, &v)
b.SetBytes(int64(buf.Len()))
rd := msgp.NewEndlessReader(buf.Bytes(), b)
dc := msgp.NewReader(rd)
b.ReportAllocs()
b.ResetTimer()
for i := 0; i < b.N; i++ {
err := v.DecodeMsg(dc)
if err != nil {
b.Fatal(err)
}
}
}
func TestMarshalUnmarshalBatchJobRetry(t *testing.T) {
v := BatchJobRetry{}
bts, err := v.MarshalMsg(nil)
if err != nil {
t.Fatal(err)
}
left, err := v.UnmarshalMsg(bts)
if err != nil {
t.Fatal(err)
}
if len(left) > 0 {
t.Errorf("%d bytes left over after UnmarshalMsg(): %q", len(left), left)
}
left, err = msgp.Skip(bts)
if err != nil {
t.Fatal(err)
}
if len(left) > 0 {
t.Errorf("%d bytes left over after Skip(): %q", len(left), left)
}
}
func BenchmarkMarshalMsgBatchJobRetry(b *testing.B) {
v := BatchJobRetry{}
b.ReportAllocs()
b.ResetTimer()
for i := 0; i < b.N; i++ {
v.MarshalMsg(nil)
}
}
func BenchmarkAppendMsgBatchJobRetry(b *testing.B) {
v := BatchJobRetry{}
bts := make([]byte, 0, v.Msgsize())
bts, _ = v.MarshalMsg(bts[0:0])
b.SetBytes(int64(len(bts)))
b.ReportAllocs()
b.ResetTimer()
for i := 0; i < b.N; i++ {
bts, _ = v.MarshalMsg(bts[0:0])
}
}
func BenchmarkUnmarshalBatchJobRetry(b *testing.B) {
v := BatchJobRetry{}
bts, _ := v.MarshalMsg(nil)
b.ReportAllocs()
b.SetBytes(int64(len(bts)))
b.ResetTimer()
for i := 0; i < b.N; i++ {
_, err := v.UnmarshalMsg(bts)
if err != nil {
b.Fatal(err)
}
}
}
func TestEncodeDecodeBatchJobRetry(t *testing.T) {
v := BatchJobRetry{}
var buf bytes.Buffer
msgp.Encode(&buf, &v)
m := v.Msgsize()
if buf.Len() > m {
t.Log("WARNING: TestEncodeDecodeBatchJobRetry Msgsize() is inaccurate")
}
vn := BatchJobRetry{}
err := msgp.Decode(&buf, &vn)
if err != nil {
t.Error(err)
}
buf.Reset()
msgp.Encode(&buf, &v)
err = msgp.NewReader(&buf).Skip()
if err != nil {
t.Error(err)
}
}
func BenchmarkEncodeBatchJobRetry(b *testing.B) {
v := BatchJobRetry{}
var buf bytes.Buffer
msgp.Encode(&buf, &v)
b.SetBytes(int64(buf.Len()))
en := msgp.NewWriter(msgp.Nowhere)
b.ReportAllocs()
b.ResetTimer()
for i := 0; i < b.N; i++ {
v.EncodeMsg(en)
}
en.Flush()
}
func BenchmarkDecodeBatchJobRetry(b *testing.B) {
v := BatchJobRetry{}
var buf bytes.Buffer
msgp.Encode(&buf, &v)
b.SetBytes(int64(buf.Len()))
rd := msgp.NewEndlessReader(buf.Bytes(), b)
dc := msgp.NewReader(rd)
b.ReportAllocs()
b.ResetTimer()
for i := 0; i < b.N; i++ {
err := v.DecodeMsg(dc)
if err != nil {
b.Fatal(err)
}
}
}

cmd/batch-replicate.go

@@ -0,0 +1,183 @@
// Copyright (c) 2015-2023 MinIO, Inc.
//
// This file is part of MinIO Object Storage stack
//
// This program is free software: you can redistribute it and/or modify
// it under the terms of the GNU Affero General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
//
// This program is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU Affero General Public License for more details.
//
// You should have received a copy of the GNU Affero General Public License
// along with this program. If not, see <http://www.gnu.org/licenses/>.
package cmd
import (
"time"
miniogo "github.com/minio/minio-go/v7"
"github.com/minio/minio/internal/auth"
)
//go:generate msgp -file $GOFILE
// replicate:
// # source of the objects to be replicated
// source:
// type: "minio"
// bucket: "testbucket"
// prefix: "spark/"
//
// # optional flags based filtering criteria
// # for source objects
// flags:
// filter:
// newerThan: "7d"
// olderThan: "7d"
// createdAfter: "date"
// createdBefore: "date"
// tags:
// - key: "name"
// value: "value*"
// metadata:
// - key: "content-type"
// value: "image/*"
// notify:
// endpoint: "https://splunk-hec.dev.com"
// token: "Splunk ..." # e.g. "Bearer token"
//
// # target where the objects must be replicated
// target:
// type: "minio"
// bucket: "testbucket1"
// endpoint: "https://play.min.io"
// path: "on"
// credentials:
// accessKey: "minioadmin"
// secretKey: "minioadmin"
// sessionToken: ""
// BatchReplicateFilter holds all the filters currently supported for batch replication
type BatchReplicateFilter struct {
NewerThan time.Duration `yaml:"newerThan,omitempty" json:"newerThan"`
OlderThan time.Duration `yaml:"olderThan,omitempty" json:"olderThan"`
CreatedAfter time.Time `yaml:"createdAfter,omitempty" json:"createdAfter"`
CreatedBefore time.Time `yaml:"createdBefore,omitempty" json:"createdBefore"`
Tags []BatchJobKV `yaml:"tags,omitempty" json:"tags"`
Metadata []BatchJobKV `yaml:"metadata,omitempty" json:"metadata"`
}
// BatchJobReplicateFlags holds the various configuration flags of a replication job definition; currently this includes
// - filter
// - notify
// - retry
type BatchJobReplicateFlags struct {
Filter BatchReplicateFilter `yaml:"filter" json:"filter"`
Notify BatchJobNotification `yaml:"notify" json:"notify"`
Retry BatchJobRetry `yaml:"retry" json:"retry"`
}
// BatchJobReplicateResourceType defines the type of resource a batch replication job operates on
type BatchJobReplicateResourceType string
// Validate checks whether the replication resource type is recognized and supported
func (t BatchJobReplicateResourceType) Validate() error {
switch t {
case BatchJobReplicateResourceMinIO:
case BatchJobReplicateResourceS3:
default:
return errInvalidArgument
}
return nil
}
func (t BatchJobReplicateResourceType) isMinio() bool {
return t == BatchJobReplicateResourceMinIO
}
// Supported batch job replication resource types.
const (
BatchJobReplicateResourceMinIO BatchJobReplicateResourceType = "minio"
BatchJobReplicateResourceS3 BatchJobReplicateResourceType = "s3"
// add future targets
)
// BatchJobReplicateCredentials holds access credentials for batch replication;
// they may belong to either the target or the source.
type BatchJobReplicateCredentials struct {
AccessKey string `xml:"AccessKeyId" json:"accessKey,omitempty" yaml:"accessKey"`
SecretKey string `xml:"SecretAccessKey" json:"secretKey,omitempty" yaml:"secretKey"`
SessionToken string `xml:"SessionToken" json:"sessionToken,omitempty" yaml:"sessionToken"`
}
// Empty indicates if credentials are not set
func (c BatchJobReplicateCredentials) Empty() bool {
return c.AccessKey == "" && c.SecretKey == "" && c.SessionToken == ""
}
// Validate checks whether the credentials are valid.
func (c BatchJobReplicateCredentials) Validate() error {
if !auth.IsAccessKeyValid(c.AccessKey) || !auth.IsSecretKeyValid(c.SecretKey) {
return errInvalidArgument
}
return nil
}
// BatchJobReplicateTarget describes the target element of the replication job, which
// receives the filtered data from the source
type BatchJobReplicateTarget struct {
Type BatchJobReplicateResourceType `yaml:"type" json:"type"`
Bucket string `yaml:"bucket" json:"bucket"`
Prefix string `yaml:"prefix" json:"prefix"`
Endpoint string `yaml:"endpoint" json:"endpoint"`
Path string `yaml:"path" json:"path"`
Creds BatchJobReplicateCredentials `yaml:"credentials" json:"credentials"`
}
// ValidPath returns true if path is valid
func (t BatchJobReplicateTarget) ValidPath() bool {
return t.Path == "on" || t.Path == "off" || t.Path == "auto" || t.Path == ""
}
// BatchJobReplicateSource describes the source element of the replication job, which
// supplies the data to be replicated to the target
type BatchJobReplicateSource struct {
Type BatchJobReplicateResourceType `yaml:"type" json:"type"`
Bucket string `yaml:"bucket" json:"bucket"`
Prefix string `yaml:"prefix" json:"prefix"`
Endpoint string `yaml:"endpoint" json:"endpoint"`
Path string `yaml:"path" json:"path"`
Creds BatchJobReplicateCredentials `yaml:"credentials" json:"credentials"`
}
// ValidPath returns true if path is valid
func (s BatchJobReplicateSource) ValidPath() bool {
switch s.Path {
case "on", "off", "auto", "":
return true
default:
return false
}
}
// BatchJobReplicateV1 is v1 of the batch replication job definition
type BatchJobReplicateV1 struct {
APIVersion string `yaml:"apiVersion" json:"apiVersion"`
Flags BatchJobReplicateFlags `yaml:"flags" json:"flags"`
Target BatchJobReplicateTarget `yaml:"target" json:"target"`
Source BatchJobReplicateSource `yaml:"source" json:"source"`
clnt *miniogo.Core `msg:"-"`
}
// RemoteToLocal returns true if source is remote and target is local
func (r BatchJobReplicateV1) RemoteToLocal() bool {
return !r.Source.Creds.Empty()
}
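To see how the yaml tags above come together, here is a minimal sketch using an assumed field subset (the miniTarget and miniJob types are illustrative, not the real structs, and gopkg.in/yaml.v2 is used purely for illustration) that unmarshals a trimmed-down job definition:

package main

import (
	"fmt"

	"gopkg.in/yaml.v2"
)

// Illustrative subset of BatchJobReplicateV1's yaml surface; not the real types.
type miniTarget struct {
	Type     string `yaml:"type"`
	Bucket   string `yaml:"bucket"`
	Endpoint string `yaml:"endpoint"`
	Path     string `yaml:"path"`
}

type miniJob struct {
	APIVersion string     `yaml:"apiVersion"`
	Target     miniTarget `yaml:"target"`
}

func main() {
	doc := `
apiVersion: v1
target:
  type: minio
  bucket: testbucket1
  endpoint: https://play.min.io
  path: "on"
`
	var j miniJob
	if err := yaml.Unmarshal([]byte(doc), &j); err != nil {
		panic(err)
	}
	fmt.Printf("%+v\n", j) // {APIVersion:v1 Target:{Type:minio Bucket:testbucket1 ...}}
}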

cmd/batch-replicate_gen.go
File diff suppressed because it is too large

@@ -0,0 +1,688 @@
package cmd
// Code generated by github.com/tinylib/msgp DO NOT EDIT.
import (
"bytes"
"testing"
"github.com/tinylib/msgp/msgp"
)
func TestMarshalUnmarshalBatchJobReplicateCredentials(t *testing.T) {
v := BatchJobReplicateCredentials{}
bts, err := v.MarshalMsg(nil)
if err != nil {
t.Fatal(err)
}
left, err := v.UnmarshalMsg(bts)
if err != nil {
t.Fatal(err)
}
if len(left) > 0 {
t.Errorf("%d bytes left over after UnmarshalMsg(): %q", len(left), left)
}
left, err = msgp.Skip(bts)
if err != nil {
t.Fatal(err)
}
if len(left) > 0 {
t.Errorf("%d bytes left over after Skip(): %q", len(left), left)
}
}
func BenchmarkMarshalMsgBatchJobReplicateCredentials(b *testing.B) {
v := BatchJobReplicateCredentials{}
b.ReportAllocs()
b.ResetTimer()
for i := 0; i < b.N; i++ {
v.MarshalMsg(nil)
}
}
func BenchmarkAppendMsgBatchJobReplicateCredentials(b *testing.B) {
v := BatchJobReplicateCredentials{}
bts := make([]byte, 0, v.Msgsize())
bts, _ = v.MarshalMsg(bts[0:0])
b.SetBytes(int64(len(bts)))
b.ReportAllocs()
b.ResetTimer()
for i := 0; i < b.N; i++ {
bts, _ = v.MarshalMsg(bts[0:0])
}
}
func BenchmarkUnmarshalBatchJobReplicateCredentials(b *testing.B) {
v := BatchJobReplicateCredentials{}
bts, _ := v.MarshalMsg(nil)
b.ReportAllocs()
b.SetBytes(int64(len(bts)))
b.ResetTimer()
for i := 0; i < b.N; i++ {
_, err := v.UnmarshalMsg(bts)
if err != nil {
b.Fatal(err)
}
}
}
func TestEncodeDecodeBatchJobReplicateCredentials(t *testing.T) {
v := BatchJobReplicateCredentials{}
var buf bytes.Buffer
msgp.Encode(&buf, &v)
m := v.Msgsize()
if buf.Len() > m {
t.Log("WARNING: TestEncodeDecodeBatchJobReplicateCredentials Msgsize() is inaccurate")
}
vn := BatchJobReplicateCredentials{}
err := msgp.Decode(&buf, &vn)
if err != nil {
t.Error(err)
}
buf.Reset()
msgp.Encode(&buf, &v)
err = msgp.NewReader(&buf).Skip()
if err != nil {
t.Error(err)
}
}
func BenchmarkEncodeBatchJobReplicateCredentials(b *testing.B) {
v := BatchJobReplicateCredentials{}
var buf bytes.Buffer
msgp.Encode(&buf, &v)
b.SetBytes(int64(buf.Len()))
en := msgp.NewWriter(msgp.Nowhere)
b.ReportAllocs()
b.ResetTimer()
for i := 0; i < b.N; i++ {
v.EncodeMsg(en)
}
en.Flush()
}
func BenchmarkDecodeBatchJobReplicateCredentials(b *testing.B) {
v := BatchJobReplicateCredentials{}
var buf bytes.Buffer
msgp.Encode(&buf, &v)
b.SetBytes(int64(buf.Len()))
rd := msgp.NewEndlessReader(buf.Bytes(), b)
dc := msgp.NewReader(rd)
b.ReportAllocs()
b.ResetTimer()
for i := 0; i < b.N; i++ {
err := v.DecodeMsg(dc)
if err != nil {
b.Fatal(err)
}
}
}
func TestMarshalUnmarshalBatchJobReplicateFlags(t *testing.T) {
v := BatchJobReplicateFlags{}
bts, err := v.MarshalMsg(nil)
if err != nil {
t.Fatal(err)
}
left, err := v.UnmarshalMsg(bts)
if err != nil {
t.Fatal(err)
}
if len(left) > 0 {
t.Errorf("%d bytes left over after UnmarshalMsg(): %q", len(left), left)
}
left, err = msgp.Skip(bts)
if err != nil {
t.Fatal(err)
}
if len(left) > 0 {
t.Errorf("%d bytes left over after Skip(): %q", len(left), left)
}
}
func BenchmarkMarshalMsgBatchJobReplicateFlags(b *testing.B) {
v := BatchJobReplicateFlags{}
b.ReportAllocs()
b.ResetTimer()
for i := 0; i < b.N; i++ {
v.MarshalMsg(nil)
}
}
func BenchmarkAppendMsgBatchJobReplicateFlags(b *testing.B) {
v := BatchJobReplicateFlags{}
bts := make([]byte, 0, v.Msgsize())
bts, _ = v.MarshalMsg(bts[0:0])
b.SetBytes(int64(len(bts)))
b.ReportAllocs()
b.ResetTimer()
for i := 0; i < b.N; i++ {
bts, _ = v.MarshalMsg(bts[0:0])
}
}
func BenchmarkUnmarshalBatchJobReplicateFlags(b *testing.B) {
v := BatchJobReplicateFlags{}
bts, _ := v.MarshalMsg(nil)
b.ReportAllocs()
b.SetBytes(int64(len(bts)))
b.ResetTimer()
for i := 0; i < b.N; i++ {
_, err := v.UnmarshalMsg(bts)
if err != nil {
b.Fatal(err)
}
}
}
func TestEncodeDecodeBatchJobReplicateFlags(t *testing.T) {
v := BatchJobReplicateFlags{}
var buf bytes.Buffer
msgp.Encode(&buf, &v)
m := v.Msgsize()
if buf.Len() > m {
t.Log("WARNING: TestEncodeDecodeBatchJobReplicateFlags Msgsize() is inaccurate")
}
vn := BatchJobReplicateFlags{}
err := msgp.Decode(&buf, &vn)
if err != nil {
t.Error(err)
}
buf.Reset()
msgp.Encode(&buf, &v)
err = msgp.NewReader(&buf).Skip()
if err != nil {
t.Error(err)
}
}
func BenchmarkEncodeBatchJobReplicateFlags(b *testing.B) {
v := BatchJobReplicateFlags{}
var buf bytes.Buffer
msgp.Encode(&buf, &v)
b.SetBytes(int64(buf.Len()))
en := msgp.NewWriter(msgp.Nowhere)
b.ReportAllocs()
b.ResetTimer()
for i := 0; i < b.N; i++ {
v.EncodeMsg(en)
}
en.Flush()
}
func BenchmarkDecodeBatchJobReplicateFlags(b *testing.B) {
v := BatchJobReplicateFlags{}
var buf bytes.Buffer
msgp.Encode(&buf, &v)
b.SetBytes(int64(buf.Len()))
rd := msgp.NewEndlessReader(buf.Bytes(), b)
dc := msgp.NewReader(rd)
b.ReportAllocs()
b.ResetTimer()
for i := 0; i < b.N; i++ {
err := v.DecodeMsg(dc)
if err != nil {
b.Fatal(err)
}
}
}
func TestMarshalUnmarshalBatchJobReplicateSource(t *testing.T) {
v := BatchJobReplicateSource{}
bts, err := v.MarshalMsg(nil)
if err != nil {
t.Fatal(err)
}
left, err := v.UnmarshalMsg(bts)
if err != nil {
t.Fatal(err)
}
if len(left) > 0 {
t.Errorf("%d bytes left over after UnmarshalMsg(): %q", len(left), left)
}
left, err = msgp.Skip(bts)
if err != nil {
t.Fatal(err)
}
if len(left) > 0 {
t.Errorf("%d bytes left over after Skip(): %q", len(left), left)
}
}
func BenchmarkMarshalMsgBatchJobReplicateSource(b *testing.B) {
v := BatchJobReplicateSource{}
b.ReportAllocs()
b.ResetTimer()
for i := 0; i < b.N; i++ {
v.MarshalMsg(nil)
}
}
func BenchmarkAppendMsgBatchJobReplicateSource(b *testing.B) {
v := BatchJobReplicateSource{}
bts := make([]byte, 0, v.Msgsize())
bts, _ = v.MarshalMsg(bts[0:0])
b.SetBytes(int64(len(bts)))
b.ReportAllocs()
b.ResetTimer()
for i := 0; i < b.N; i++ {
bts, _ = v.MarshalMsg(bts[0:0])
}
}
func BenchmarkUnmarshalBatchJobReplicateSource(b *testing.B) {
v := BatchJobReplicateSource{}
bts, _ := v.MarshalMsg(nil)
b.ReportAllocs()
b.SetBytes(int64(len(bts)))
b.ResetTimer()
for i := 0; i < b.N; i++ {
_, err := v.UnmarshalMsg(bts)
if err != nil {
b.Fatal(err)
}
}
}
func TestEncodeDecodeBatchJobReplicateSource(t *testing.T) {
v := BatchJobReplicateSource{}
var buf bytes.Buffer
msgp.Encode(&buf, &v)
m := v.Msgsize()
if buf.Len() > m {
t.Log("WARNING: TestEncodeDecodeBatchJobReplicateSource Msgsize() is inaccurate")
}
vn := BatchJobReplicateSource{}
err := msgp.Decode(&buf, &vn)
if err != nil {
t.Error(err)
}
buf.Reset()
msgp.Encode(&buf, &v)
err = msgp.NewReader(&buf).Skip()
if err != nil {
t.Error(err)
}
}
func BenchmarkEncodeBatchJobReplicateSource(b *testing.B) {
v := BatchJobReplicateSource{}
var buf bytes.Buffer
msgp.Encode(&buf, &v)
b.SetBytes(int64(buf.Len()))
en := msgp.NewWriter(msgp.Nowhere)
b.ReportAllocs()
b.ResetTimer()
for i := 0; i < b.N; i++ {
v.EncodeMsg(en)
}
en.Flush()
}
func BenchmarkDecodeBatchJobReplicateSource(b *testing.B) {
v := BatchJobReplicateSource{}
var buf bytes.Buffer
msgp.Encode(&buf, &v)
b.SetBytes(int64(buf.Len()))
rd := msgp.NewEndlessReader(buf.Bytes(), b)
dc := msgp.NewReader(rd)
b.ReportAllocs()
b.ResetTimer()
for i := 0; i < b.N; i++ {
err := v.DecodeMsg(dc)
if err != nil {
b.Fatal(err)
}
}
}
func TestMarshalUnmarshalBatchJobReplicateTarget(t *testing.T) {
v := BatchJobReplicateTarget{}
bts, err := v.MarshalMsg(nil)
if err != nil {
t.Fatal(err)
}
left, err := v.UnmarshalMsg(bts)
if err != nil {
t.Fatal(err)
}
if len(left) > 0 {
t.Errorf("%d bytes left over after UnmarshalMsg(): %q", len(left), left)
}
left, err = msgp.Skip(bts)
if err != nil {
t.Fatal(err)
}
if len(left) > 0 {
t.Errorf("%d bytes left over after Skip(): %q", len(left), left)
}
}
func BenchmarkMarshalMsgBatchJobReplicateTarget(b *testing.B) {
v := BatchJobReplicateTarget{}
b.ReportAllocs()
b.ResetTimer()
for i := 0; i < b.N; i++ {
v.MarshalMsg(nil)
}
}
func BenchmarkAppendMsgBatchJobReplicateTarget(b *testing.B) {
v := BatchJobReplicateTarget{}
bts := make([]byte, 0, v.Msgsize())
bts, _ = v.MarshalMsg(bts[0:0])
b.SetBytes(int64(len(bts)))
b.ReportAllocs()
b.ResetTimer()
for i := 0; i < b.N; i++ {
bts, _ = v.MarshalMsg(bts[0:0])
}
}
func BenchmarkUnmarshalBatchJobReplicateTarget(b *testing.B) {
v := BatchJobReplicateTarget{}
bts, _ := v.MarshalMsg(nil)
b.ReportAllocs()
b.SetBytes(int64(len(bts)))
b.ResetTimer()
for i := 0; i < b.N; i++ {
_, err := v.UnmarshalMsg(bts)
if err != nil {
b.Fatal(err)
}
}
}
func TestEncodeDecodeBatchJobReplicateTarget(t *testing.T) {
v := BatchJobReplicateTarget{}
var buf bytes.Buffer
msgp.Encode(&buf, &v)
m := v.Msgsize()
if buf.Len() > m {
t.Log("WARNING: TestEncodeDecodeBatchJobReplicateTarget Msgsize() is inaccurate")
}
vn := BatchJobReplicateTarget{}
err := msgp.Decode(&buf, &vn)
if err != nil {
t.Error(err)
}
buf.Reset()
msgp.Encode(&buf, &v)
err = msgp.NewReader(&buf).Skip()
if err != nil {
t.Error(err)
}
}
func BenchmarkEncodeBatchJobReplicateTarget(b *testing.B) {
v := BatchJobReplicateTarget{}
var buf bytes.Buffer
msgp.Encode(&buf, &v)
b.SetBytes(int64(buf.Len()))
en := msgp.NewWriter(msgp.Nowhere)
b.ReportAllocs()
b.ResetTimer()
for i := 0; i < b.N; i++ {
v.EncodeMsg(en)
}
en.Flush()
}
func BenchmarkDecodeBatchJobReplicateTarget(b *testing.B) {
v := BatchJobReplicateTarget{}
var buf bytes.Buffer
msgp.Encode(&buf, &v)
b.SetBytes(int64(buf.Len()))
rd := msgp.NewEndlessReader(buf.Bytes(), b)
dc := msgp.NewReader(rd)
b.ReportAllocs()
b.ResetTimer()
for i := 0; i < b.N; i++ {
err := v.DecodeMsg(dc)
if err != nil {
b.Fatal(err)
}
}
}
func TestMarshalUnmarshalBatchJobReplicateV1(t *testing.T) {
v := BatchJobReplicateV1{}
bts, err := v.MarshalMsg(nil)
if err != nil {
t.Fatal(err)
}
left, err := v.UnmarshalMsg(bts)
if err != nil {
t.Fatal(err)
}
if len(left) > 0 {
t.Errorf("%d bytes left over after UnmarshalMsg(): %q", len(left), left)
}
left, err = msgp.Skip(bts)
if err != nil {
t.Fatal(err)
}
if len(left) > 0 {
t.Errorf("%d bytes left over after Skip(): %q", len(left), left)
}
}
func BenchmarkMarshalMsgBatchJobReplicateV1(b *testing.B) {
v := BatchJobReplicateV1{}
b.ReportAllocs()
b.ResetTimer()
for i := 0; i < b.N; i++ {
v.MarshalMsg(nil)
}
}
func BenchmarkAppendMsgBatchJobReplicateV1(b *testing.B) {
v := BatchJobReplicateV1{}
bts := make([]byte, 0, v.Msgsize())
bts, _ = v.MarshalMsg(bts[0:0])
b.SetBytes(int64(len(bts)))
b.ReportAllocs()
b.ResetTimer()
for i := 0; i < b.N; i++ {
bts, _ = v.MarshalMsg(bts[0:0])
}
}
func BenchmarkUnmarshalBatchJobReplicateV1(b *testing.B) {
v := BatchJobReplicateV1{}
bts, _ := v.MarshalMsg(nil)
b.ReportAllocs()
b.SetBytes(int64(len(bts)))
b.ResetTimer()
for i := 0; i < b.N; i++ {
_, err := v.UnmarshalMsg(bts)
if err != nil {
b.Fatal(err)
}
}
}
func TestEncodeDecodeBatchJobReplicateV1(t *testing.T) {
v := BatchJobReplicateV1{}
var buf bytes.Buffer
msgp.Encode(&buf, &v)
m := v.Msgsize()
if buf.Len() > m {
t.Log("WARNING: TestEncodeDecodeBatchJobReplicateV1 Msgsize() is inaccurate")
}
vn := BatchJobReplicateV1{}
err := msgp.Decode(&buf, &vn)
if err != nil {
t.Error(err)
}
buf.Reset()
msgp.Encode(&buf, &v)
err = msgp.NewReader(&buf).Skip()
if err != nil {
t.Error(err)
}
}
func BenchmarkEncodeBatchJobReplicateV1(b *testing.B) {
v := BatchJobReplicateV1{}
var buf bytes.Buffer
msgp.Encode(&buf, &v)
b.SetBytes(int64(buf.Len()))
en := msgp.NewWriter(msgp.Nowhere)
b.ReportAllocs()
b.ResetTimer()
for i := 0; i < b.N; i++ {
v.EncodeMsg(en)
}
en.Flush()
}
func BenchmarkDecodeBatchJobReplicateV1(b *testing.B) {
v := BatchJobReplicateV1{}
var buf bytes.Buffer
msgp.Encode(&buf, &v)
b.SetBytes(int64(buf.Len()))
rd := msgp.NewEndlessReader(buf.Bytes(), b)
dc := msgp.NewReader(rd)
b.ReportAllocs()
b.ResetTimer()
for i := 0; i < b.N; i++ {
err := v.DecodeMsg(dc)
if err != nil {
b.Fatal(err)
}
}
}
func TestMarshalUnmarshalBatchReplicateFilter(t *testing.T) {
v := BatchReplicateFilter{}
bts, err := v.MarshalMsg(nil)
if err != nil {
t.Fatal(err)
}
left, err := v.UnmarshalMsg(bts)
if err != nil {
t.Fatal(err)
}
if len(left) > 0 {
t.Errorf("%d bytes left over after UnmarshalMsg(): %q", len(left), left)
}
left, err = msgp.Skip(bts)
if err != nil {
t.Fatal(err)
}
if len(left) > 0 {
t.Errorf("%d bytes left over after Skip(): %q", len(left), left)
}
}
func BenchmarkMarshalMsgBatchReplicateFilter(b *testing.B) {
v := BatchReplicateFilter{}
b.ReportAllocs()
b.ResetTimer()
for i := 0; i < b.N; i++ {
v.MarshalMsg(nil)
}
}
func BenchmarkAppendMsgBatchReplicateFilter(b *testing.B) {
v := BatchReplicateFilter{}
bts := make([]byte, 0, v.Msgsize())
bts, _ = v.MarshalMsg(bts[0:0])
b.SetBytes(int64(len(bts)))
b.ReportAllocs()
b.ResetTimer()
for i := 0; i < b.N; i++ {
bts, _ = v.MarshalMsg(bts[0:0])
}
}
func BenchmarkUnmarshalBatchReplicateFilter(b *testing.B) {
v := BatchReplicateFilter{}
bts, _ := v.MarshalMsg(nil)
b.ReportAllocs()
b.SetBytes(int64(len(bts)))
b.ResetTimer()
for i := 0; i < b.N; i++ {
_, err := v.UnmarshalMsg(bts)
if err != nil {
b.Fatal(err)
}
}
}
func TestEncodeDecodeBatchReplicateFilter(t *testing.T) {
v := BatchReplicateFilter{}
var buf bytes.Buffer
msgp.Encode(&buf, &v)
m := v.Msgsize()
if buf.Len() > m {
t.Log("WARNING: TestEncodeDecodeBatchReplicateFilter Msgsize() is inaccurate")
}
vn := BatchReplicateFilter{}
err := msgp.Decode(&buf, &vn)
if err != nil {
t.Error(err)
}
buf.Reset()
msgp.Encode(&buf, &v)
err = msgp.NewReader(&buf).Skip()
if err != nil {
t.Error(err)
}
}
func BenchmarkEncodeBatchReplicateFilter(b *testing.B) {
v := BatchReplicateFilter{}
var buf bytes.Buffer
msgp.Encode(&buf, &v)
b.SetBytes(int64(buf.Len()))
en := msgp.NewWriter(msgp.Nowhere)
b.ReportAllocs()
b.ResetTimer()
for i := 0; i < b.N; i++ {
v.EncodeMsg(en)
}
en.Flush()
}
func BenchmarkDecodeBatchReplicateFilter(b *testing.B) {
v := BatchReplicateFilter{}
var buf bytes.Buffer
msgp.Encode(&buf, &v)
b.SetBytes(int64(buf.Len()))
rd := msgp.NewEndlessReader(buf.Bytes(), b)
dc := msgp.NewReader(rd)
b.ReportAllocs()
b.ResetTimer()
for i := 0; i < b.N; i++ {
err := v.DecodeMsg(dc)
if err != nil {
b.Fatal(err)
}
}
}


@@ -18,13 +18,9 @@
package cmd
import (
"bytes"
"context"
"encoding/base64"
"encoding/json"
"errors"
"fmt"
"io"
"math/rand"
"net/http"
"runtime"
@@ -38,9 +34,8 @@ import (
xhttp "github.com/minio/minio/internal/http"
"github.com/minio/minio/internal/kms"
"github.com/minio/minio/internal/logger"
"github.com/minio/pkg/env"
"github.com/minio/pkg/wildcard"
"github.com/minio/pkg/workers"
"github.com/minio/pkg/v2/env"
"github.com/minio/pkg/v2/workers"
)
// keyrotate:
@@ -76,56 +71,6 @@ import (
//go:generate msgp -file $GOFILE -unexported
// BatchKeyRotateKV is a datatype that holds key and values for filtering of objects
// used by metadata filter as well as tags based filtering.
type BatchKeyRotateKV struct {
Key string `yaml:"key" json:"key"`
Value string `yaml:"value" json:"value"`
}
// Validate returns an error if key is empty
func (kv BatchKeyRotateKV) Validate() error {
if kv.Key == "" {
return errInvalidArgument
}
return nil
}
// Empty indicates if kv is not set
func (kv BatchKeyRotateKV) Empty() bool {
return kv.Key == "" && kv.Value == ""
}
// Match matches input kv with kv, value will be wildcard matched depending on the user input
func (kv BatchKeyRotateKV) Match(ikv BatchKeyRotateKV) bool {
if kv.Empty() {
return true
}
if strings.EqualFold(kv.Key, ikv.Key) {
return wildcard.Match(kv.Value, ikv.Value)
}
return false
}
// BatchKeyRotateRetry datatype represents total retry attempts and delay between each retries.
type BatchKeyRotateRetry struct {
Attempts int `yaml:"attempts" json:"attempts"` // number of retry attempts
Delay time.Duration `yaml:"delay" json:"delay"` // delay between each retries
}
// Validate validates input replicate retries.
func (r BatchKeyRotateRetry) Validate() error {
if r.Attempts < 0 {
return errInvalidArgument
}
if r.Delay < 0 {
return errInvalidArgument
}
return nil
}
// BatchKeyRotationType defines key rotation type
type BatchKeyRotationType string
@@ -178,13 +123,13 @@ func (e BatchJobKeyRotateEncryption) Validate() error {
// BatchKeyRotateFilter holds all the filters currently supported for batch key rotation
type BatchKeyRotateFilter struct {
NewerThan time.Duration `yaml:"newerThan,omitempty" json:"newerThan"`
OlderThan time.Duration `yaml:"olderThan,omitempty" json:"olderThan"`
CreatedAfter time.Time `yaml:"createdAfter,omitempty" json:"createdAfter"`
CreatedBefore time.Time `yaml:"createdBefore,omitempty" json:"createdBefore"`
Tags []BatchKeyRotateKV `yaml:"tags,omitempty" json:"tags"`
Metadata []BatchKeyRotateKV `yaml:"metadata,omitempty" json:"metadata"`
KMSKeyID string `yaml:"kmskeyid" json:"kmskey"`
NewerThan time.Duration `yaml:"newerThan,omitempty" json:"newerThan"`
OlderThan time.Duration `yaml:"olderThan,omitempty" json:"olderThan"`
CreatedAfter time.Time `yaml:"createdAfter,omitempty" json:"createdAfter"`
CreatedBefore time.Time `yaml:"createdBefore,omitempty" json:"createdBefore"`
Tags []BatchJobKV `yaml:"tags,omitempty" json:"tags"`
Metadata []BatchJobKV `yaml:"metadata,omitempty" json:"metadata"`
KMSKeyID string `yaml:"kmskeyid" json:"kmskey"`
}
// BatchKeyRotateNotification is the success or failure notification endpoint for each job attempt
@@ -198,9 +143,9 @@ type BatchKeyRotateNotification struct {
// - notify
// - retry
type BatchJobKeyRotateFlags struct {
Filter BatchKeyRotateFilter `yaml:"filter" json:"filter"`
Notify BatchKeyRotateNotification `yaml:"notify" json:"notify"`
Retry BatchKeyRotateRetry `yaml:"retry" json:"retry"`
Filter BatchKeyRotateFilter `yaml:"filter" json:"filter"`
Notify BatchJobNotification `yaml:"notify" json:"notify"`
Retry BatchJobRetry `yaml:"retry" json:"retry"`
}
// BatchJobKeyRotateV1 is v1 of the batch key rotation job
@@ -214,35 +159,8 @@ type BatchJobKeyRotateV1 struct {
}
// Notify notifies the configured notification endpoint about job failure or success.
func (r BatchJobKeyRotateV1) Notify(ctx context.Context, body io.Reader) error {
if r.Flags.Notify.Endpoint == "" {
return nil
}
ctx, cancel := context.WithTimeout(ctx, 10*time.Second)
defer cancel()
req, err := http.NewRequestWithContext(ctx, http.MethodPost, r.Flags.Notify.Endpoint, body)
if err != nil {
return err
}
if r.Flags.Notify.Token != "" {
req.Header.Set("Authorization", r.Flags.Notify.Token)
}
clnt := http.Client{Transport: getRemoteInstanceTransport}
resp, err := clnt.Do(req)
if err != nil {
return err
}
xhttp.DrainBody(resp.Body)
if resp.StatusCode != http.StatusOK {
return errors.New(resp.Status)
}
return nil
func (r BatchJobKeyRotateV1) Notify(ctx context.Context, ri *batchJobInfo) error {
return notifyEndpoint(ctx, ri, r.Flags.Notify.Endpoint, r.Flags.Notify.Token)
}
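The inline HTTP logic removed above moves into a shared notifyEndpoint helper elsewhere in this patch. A hedged sketch of what such a helper plausibly does, mirroring the removed inline body (notifyWebhook is a hypothetical name; this is not the actual notifyEndpoint implementation):

package main

import (
	"bytes"
	"context"
	"errors"
	"fmt"
	"net/http"
	"net/http/httptest"
	"time"
)

// notifyWebhook is an assumed sketch mirroring the inline logic removed above;
// the real notifyEndpoint may differ in payload and transport handling.
func notifyWebhook(ctx context.Context, payload []byte, endpoint, token string) error {
	if endpoint == "" {
		return nil // nothing configured, nothing to do
	}
	ctx, cancel := context.WithTimeout(ctx, 10*time.Second)
	defer cancel()

	req, err := http.NewRequestWithContext(ctx, http.MethodPost, endpoint, bytes.NewReader(payload))
	if err != nil {
		return err
	}
	if token != "" {
		req.Header.Set("Authorization", token)
	}
	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	if resp.StatusCode != http.StatusOK {
		return errors.New(resp.Status)
	}
	return nil
}

func main() {
	srv := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		w.WriteHeader(http.StatusOK)
	}))
	defer srv.Close()
	fmt.Println(notifyWebhook(context.Background(), []byte(`{"ok":true}`), srv.URL, "Bearer test"))
}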
// KeyRotate rotates the encryption key of an object
@@ -289,7 +207,7 @@ func (r *BatchJobKeyRotateV1) KeyRotate(ctx context.Context, api ObjectLayer, ob
)
encMetadata := make(map[string]string)
for k, v := range oi.UserDefined {
if strings.HasPrefix(strings.ToLower(k), ReservedMetadataPrefixLower) {
if stringsHasPrefixFold(k, ReservedMetadataPrefixLower) {
encMetadata[k] = v
}
}
@@ -330,7 +248,7 @@ const (
batchKeyRotateVersion = batchKeyRotateVersionV1
batchKeyRotateAPIVersion = "v1"
batchKeyRotateJobDefaultRetries = 3
batchKeyRotateJobDefaultRetryDelay = 250 * time.Millisecond
batchKeyRotateJobDefaultRetryDelay = 25 * time.Millisecond
)
// Start the batch key rotation job; resumes if there was a pending job via "job.ID"
@@ -351,6 +269,7 @@ func (r *BatchJobKeyRotateV1) Start(ctx context.Context, api ObjectLayer, job Ba
if delay == 0 {
delay = batchKeyRotateJobDefaultRetryDelay
}
rnd := rand.New(rand.NewSource(time.Now().UnixNano()))
skip := func(info FileInfo) (ok bool) {
@@ -388,7 +307,7 @@ func (r *BatchJobKeyRotateV1) Start(ctx context.Context, api ObjectLayer, job Ba
for _, kv := range r.Flags.Filter.Tags {
for t, v := range tagMap {
if kv.Match(BatchKeyRotateKV{Key: t, Value: v}) {
if kv.Match(BatchJobKV{Key: t, Value: v}) {
return true
}
}
@@ -401,11 +320,11 @@ func (r *BatchJobKeyRotateV1) Start(ctx context.Context, api ObjectLayer, job Ba
if len(r.Flags.Filter.Metadata) > 0 {
for _, kv := range r.Flags.Filter.Metadata {
for k, v := range info.Metadata {
if !strings.HasPrefix(strings.ToLower(k), "x-amz-meta-") && !isStandardHeader(k) {
if !stringsHasPrefixFold(k, "x-amz-meta-") && !isStandardHeader(k) {
continue
}
// We only need to match x-amz-meta or standardHeaders
if kv.Match(BatchKeyRotateKV{Key: k, Value: v}) {
if kv.Match(BatchJobKV{Key: k, Value: v}) {
return true
}
}
@@ -475,6 +394,9 @@ func (r *BatchJobKeyRotateV1) Start(ctx context.Context, api ObjectLayer, job Ba
if success {
break
}
if delay > 0 {
time.Sleep(delay + time.Duration(rnd.Float64()*float64(delay)))
}
}
}()
}
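The added sleep implements jittered retry backoff: each failed attempt waits the configured delay plus a uniformly random extra of up to one full delay, spreading retries out in time. A standalone sketch of the same pattern:

package main

import (
	"fmt"
	"math/rand"
	"time"
)

func main() {
	delay := 25 * time.Millisecond // the new default retry delay
	rnd := rand.New(rand.NewSource(time.Now().UnixNano()))
	for attempt := 1; attempt <= 3; attempt++ {
		// delay plus up to one extra delay of random jitter, as in the patch.
		sleep := delay + time.Duration(rnd.Float64()*float64(delay))
		fmt.Printf("attempt %d: sleeping %v\n", attempt, sleep)
		time.Sleep(sleep)
	}
}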
@@ -486,20 +408,11 @@ func (r *BatchJobKeyRotateV1) Start(ctx context.Context, api ObjectLayer, job Ba
// persist in-memory state to disk.
logger.LogIf(ctx, ri.updateAfter(ctx, api, 0, job))
buf, _ := json.Marshal(ri)
if err := r.Notify(ctx, bytes.NewReader(buf)); err != nil {
if err := r.Notify(ctx, ri); err != nil {
logger.LogIf(ctx, fmt.Errorf("unable to notify %v", err))
}
cancel()
if ri.Failed {
ri.ObjectsFailed = 0
ri.Bucket = ""
ri.Object = ""
ri.Objects = 0
time.Sleep(delay + time.Duration(rnd.Float64()*float64(delay)))
}
return nil
}


@@ -192,75 +192,17 @@ func (z *BatchJobKeyRotateFlags) DecodeMsg(dc *msgp.Reader) (err error) {
return
}
case "Notify":
var zb0002 uint32
zb0002, err = dc.ReadMapHeader()
err = z.Notify.DecodeMsg(dc)
if err != nil {
err = msgp.WrapError(err, "Notify")
return
}
for zb0002 > 0 {
zb0002--
field, err = dc.ReadMapKeyPtr()
if err != nil {
err = msgp.WrapError(err, "Notify")
return
}
switch msgp.UnsafeString(field) {
case "Endpoint":
z.Notify.Endpoint, err = dc.ReadString()
if err != nil {
err = msgp.WrapError(err, "Notify", "Endpoint")
return
}
case "Token":
z.Notify.Token, err = dc.ReadString()
if err != nil {
err = msgp.WrapError(err, "Notify", "Token")
return
}
default:
err = dc.Skip()
if err != nil {
err = msgp.WrapError(err, "Notify")
return
}
}
}
case "Retry":
var zb0003 uint32
zb0003, err = dc.ReadMapHeader()
err = z.Retry.DecodeMsg(dc)
if err != nil {
err = msgp.WrapError(err, "Retry")
return
}
for zb0003 > 0 {
zb0003--
field, err = dc.ReadMapKeyPtr()
if err != nil {
err = msgp.WrapError(err, "Retry")
return
}
switch msgp.UnsafeString(field) {
case "Attempts":
z.Retry.Attempts, err = dc.ReadInt()
if err != nil {
err = msgp.WrapError(err, "Retry", "Attempts")
return
}
case "Delay":
z.Retry.Delay, err = dc.ReadDuration()
if err != nil {
err = msgp.WrapError(err, "Retry", "Delay")
return
}
default:
err = dc.Skip()
if err != nil {
err = msgp.WrapError(err, "Retry")
return
}
}
}
default:
err = dc.Skip()
if err != nil {
@@ -290,25 +232,9 @@ func (z *BatchJobKeyRotateFlags) EncodeMsg(en *msgp.Writer) (err error) {
if err != nil {
return
}
// map header, size 2
// write "Endpoint"
err = en.Append(0x82, 0xa8, 0x45, 0x6e, 0x64, 0x70, 0x6f, 0x69, 0x6e, 0x74)
err = z.Notify.EncodeMsg(en)
if err != nil {
return
}
err = en.WriteString(z.Notify.Endpoint)
if err != nil {
err = msgp.WrapError(err, "Notify", "Endpoint")
return
}
// write "Token"
err = en.Append(0xa5, 0x54, 0x6f, 0x6b, 0x65, 0x6e)
if err != nil {
return
}
err = en.WriteString(z.Notify.Token)
if err != nil {
err = msgp.WrapError(err, "Notify", "Token")
err = msgp.WrapError(err, "Notify")
return
}
// write "Retry"
@@ -316,25 +242,9 @@ func (z *BatchJobKeyRotateFlags) EncodeMsg(en *msgp.Writer) (err error) {
if err != nil {
return
}
// map header, size 2
// write "Attempts"
err = en.Append(0x82, 0xa8, 0x41, 0x74, 0x74, 0x65, 0x6d, 0x70, 0x74, 0x73)
err = z.Retry.EncodeMsg(en)
if err != nil {
return
}
err = en.WriteInt(z.Retry.Attempts)
if err != nil {
err = msgp.WrapError(err, "Retry", "Attempts")
return
}
// write "Delay"
err = en.Append(0xa5, 0x44, 0x65, 0x6c, 0x61, 0x79)
if err != nil {
return
}
err = en.WriteDuration(z.Retry.Delay)
if err != nil {
err = msgp.WrapError(err, "Retry", "Delay")
err = msgp.WrapError(err, "Retry")
return
}
return
@@ -353,22 +263,18 @@ func (z *BatchJobKeyRotateFlags) MarshalMsg(b []byte) (o []byte, err error) {
}
// string "Notify"
o = append(o, 0xa6, 0x4e, 0x6f, 0x74, 0x69, 0x66, 0x79)
// map header, size 2
// string "Endpoint"
o = append(o, 0x82, 0xa8, 0x45, 0x6e, 0x64, 0x70, 0x6f, 0x69, 0x6e, 0x74)
o = msgp.AppendString(o, z.Notify.Endpoint)
// string "Token"
o = append(o, 0xa5, 0x54, 0x6f, 0x6b, 0x65, 0x6e)
o = msgp.AppendString(o, z.Notify.Token)
o, err = z.Notify.MarshalMsg(o)
if err != nil {
err = msgp.WrapError(err, "Notify")
return
}
// string "Retry"
o = append(o, 0xa5, 0x52, 0x65, 0x74, 0x72, 0x79)
// map header, size 2
// string "Attempts"
o = append(o, 0x82, 0xa8, 0x41, 0x74, 0x74, 0x65, 0x6d, 0x70, 0x74, 0x73)
o = msgp.AppendInt(o, z.Retry.Attempts)
// string "Delay"
o = append(o, 0xa5, 0x44, 0x65, 0x6c, 0x61, 0x79)
o = msgp.AppendDuration(o, z.Retry.Delay)
o, err = z.Retry.MarshalMsg(o)
if err != nil {
err = msgp.WrapError(err, "Retry")
return
}
return
}
@@ -397,75 +303,17 @@ func (z *BatchJobKeyRotateFlags) UnmarshalMsg(bts []byte) (o []byte, err error)
return
}
case "Notify":
var zb0002 uint32
zb0002, bts, err = msgp.ReadMapHeaderBytes(bts)
bts, err = z.Notify.UnmarshalMsg(bts)
if err != nil {
err = msgp.WrapError(err, "Notify")
return
}
for zb0002 > 0 {
zb0002--
field, bts, err = msgp.ReadMapKeyZC(bts)
if err != nil {
err = msgp.WrapError(err, "Notify")
return
}
switch msgp.UnsafeString(field) {
case "Endpoint":
z.Notify.Endpoint, bts, err = msgp.ReadStringBytes(bts)
if err != nil {
err = msgp.WrapError(err, "Notify", "Endpoint")
return
}
case "Token":
z.Notify.Token, bts, err = msgp.ReadStringBytes(bts)
if err != nil {
err = msgp.WrapError(err, "Notify", "Token")
return
}
default:
bts, err = msgp.Skip(bts)
if err != nil {
err = msgp.WrapError(err, "Notify")
return
}
}
}
case "Retry":
var zb0003 uint32
zb0003, bts, err = msgp.ReadMapHeaderBytes(bts)
bts, err = z.Retry.UnmarshalMsg(bts)
if err != nil {
err = msgp.WrapError(err, "Retry")
return
}
for zb0003 > 0 {
zb0003--
field, bts, err = msgp.ReadMapKeyZC(bts)
if err != nil {
err = msgp.WrapError(err, "Retry")
return
}
switch msgp.UnsafeString(field) {
case "Attempts":
z.Retry.Attempts, bts, err = msgp.ReadIntBytes(bts)
if err != nil {
err = msgp.WrapError(err, "Retry", "Attempts")
return
}
case "Delay":
z.Retry.Delay, bts, err = msgp.ReadDurationBytes(bts)
if err != nil {
err = msgp.WrapError(err, "Retry", "Delay")
return
}
default:
bts, err = msgp.Skip(bts)
if err != nil {
err = msgp.WrapError(err, "Retry")
return
}
}
}
default:
bts, err = msgp.Skip(bts)
if err != nil {
@@ -480,7 +328,7 @@ func (z *BatchJobKeyRotateFlags) UnmarshalMsg(bts []byte) (o []byte, err error)
// Msgsize returns an upper bound estimate of the number of bytes occupied by the serialized message
func (z *BatchJobKeyRotateFlags) Msgsize() (s int) {
s = 1 + 7 + z.Filter.Msgsize() + 7 + 1 + 9 + msgp.StringPrefixSize + len(z.Notify.Endpoint) + 6 + msgp.StringPrefixSize + len(z.Notify.Token) + 6 + 1 + 9 + msgp.IntSize + 6 + msgp.DurationSize
s = 1 + 7 + z.Filter.Msgsize() + 7 + z.Notify.Msgsize() + 6 + z.Retry.Msgsize()
return
}
@@ -509,11 +357,46 @@ func (z *BatchJobKeyRotateV1) DecodeMsg(dc *msgp.Reader) (err error) {
return
}
case "Flags":
err = z.Flags.DecodeMsg(dc)
var zb0002 uint32
zb0002, err = dc.ReadMapHeader()
if err != nil {
err = msgp.WrapError(err, "Flags")
return
}
for zb0002 > 0 {
zb0002--
field, err = dc.ReadMapKeyPtr()
if err != nil {
err = msgp.WrapError(err, "Flags")
return
}
switch msgp.UnsafeString(field) {
case "Filter":
err = z.Flags.Filter.DecodeMsg(dc)
if err != nil {
err = msgp.WrapError(err, "Flags", "Filter")
return
}
case "Notify":
err = z.Flags.Notify.DecodeMsg(dc)
if err != nil {
err = msgp.WrapError(err, "Flags", "Notify")
return
}
case "Retry":
err = z.Flags.Retry.DecodeMsg(dc)
if err != nil {
err = msgp.WrapError(err, "Flags", "Retry")
return
}
default:
err = dc.Skip()
if err != nil {
err = msgp.WrapError(err, "Flags")
return
}
}
}
case "Bucket":
z.Bucket, err = dc.ReadString()
if err != nil {
@@ -567,9 +450,35 @@ func (z *BatchJobKeyRotateV1) EncodeMsg(en *msgp.Writer) (err error) {
if err != nil {
return
}
err = z.Flags.EncodeMsg(en)
// map header, size 3
// write "Filter"
err = en.Append(0x83, 0xa6, 0x46, 0x69, 0x6c, 0x74, 0x65, 0x72)
if err != nil {
err = msgp.WrapError(err, "Flags")
return
}
err = z.Flags.Filter.EncodeMsg(en)
if err != nil {
err = msgp.WrapError(err, "Flags", "Filter")
return
}
// write "Notify"
err = en.Append(0xa6, 0x4e, 0x6f, 0x74, 0x69, 0x66, 0x79)
if err != nil {
return
}
err = z.Flags.Notify.EncodeMsg(en)
if err != nil {
err = msgp.WrapError(err, "Flags", "Notify")
return
}
// write "Retry"
err = en.Append(0xa5, 0x52, 0x65, 0x74, 0x72, 0x79)
if err != nil {
return
}
err = z.Flags.Retry.EncodeMsg(en)
if err != nil {
err = msgp.WrapError(err, "Flags", "Retry")
return
}
// write "Bucket"
@@ -624,9 +533,26 @@ func (z *BatchJobKeyRotateV1) MarshalMsg(b []byte) (o []byte, err error) {
o = msgp.AppendString(o, z.APIVersion)
// string "Flags"
o = append(o, 0xa5, 0x46, 0x6c, 0x61, 0x67, 0x73)
o, err = z.Flags.MarshalMsg(o)
// map header, size 3
// string "Filter"
o = append(o, 0x83, 0xa6, 0x46, 0x69, 0x6c, 0x74, 0x65, 0x72)
o, err = z.Flags.Filter.MarshalMsg(o)
if err != nil {
err = msgp.WrapError(err, "Flags")
err = msgp.WrapError(err, "Flags", "Filter")
return
}
// string "Notify"
o = append(o, 0xa6, 0x4e, 0x6f, 0x74, 0x69, 0x66, 0x79)
o, err = z.Flags.Notify.MarshalMsg(o)
if err != nil {
err = msgp.WrapError(err, "Flags", "Notify")
return
}
// string "Retry"
o = append(o, 0xa5, 0x52, 0x65, 0x74, 0x72, 0x79)
o, err = z.Flags.Retry.MarshalMsg(o)
if err != nil {
err = msgp.WrapError(err, "Flags", "Retry")
return
}
// string "Bucket"
@@ -673,11 +599,46 @@ func (z *BatchJobKeyRotateV1) UnmarshalMsg(bts []byte) (o []byte, err error) {
return
}
case "Flags":
bts, err = z.Flags.UnmarshalMsg(bts)
var zb0002 uint32
zb0002, bts, err = msgp.ReadMapHeaderBytes(bts)
if err != nil {
err = msgp.WrapError(err, "Flags")
return
}
for zb0002 > 0 {
zb0002--
field, bts, err = msgp.ReadMapKeyZC(bts)
if err != nil {
err = msgp.WrapError(err, "Flags")
return
}
switch msgp.UnsafeString(field) {
case "Filter":
bts, err = z.Flags.Filter.UnmarshalMsg(bts)
if err != nil {
err = msgp.WrapError(err, "Flags", "Filter")
return
}
case "Notify":
bts, err = z.Flags.Notify.UnmarshalMsg(bts)
if err != nil {
err = msgp.WrapError(err, "Flags", "Notify")
return
}
case "Retry":
bts, err = z.Flags.Retry.UnmarshalMsg(bts)
if err != nil {
err = msgp.WrapError(err, "Flags", "Retry")
return
}
default:
bts, err = msgp.Skip(bts)
if err != nil {
err = msgp.WrapError(err, "Flags")
return
}
}
}
case "Bucket":
z.Bucket, bts, err = msgp.ReadStringBytes(bts)
if err != nil {
@@ -716,7 +677,7 @@ func (z *BatchJobKeyRotateV1) UnmarshalMsg(bts []byte) (o []byte, err error) {
// Msgsize returns an upper bound estimate of the number of bytes occupied by the serialized message
func (z *BatchJobKeyRotateV1) Msgsize() (s int) {
s = 1 + 11 + msgp.StringPrefixSize + len(z.APIVersion) + 6 + z.Flags.Msgsize() + 7 + msgp.StringPrefixSize + len(z.Bucket) + 7 + msgp.StringPrefixSize + len(z.Prefix) + 9 + msgp.StringPrefixSize + len(z.Endpoint) + 11 + z.Encryption.Msgsize()
s = 1 + 11 + msgp.StringPrefixSize + len(z.APIVersion) + 6 + 1 + 7 + z.Flags.Filter.Msgsize() + 7 + z.Flags.Notify.Msgsize() + 6 + z.Flags.Retry.Msgsize() + 7 + msgp.StringPrefixSize + len(z.Bucket) + 7 + msgp.StringPrefixSize + len(z.Prefix) + 9 + msgp.StringPrefixSize + len(z.Endpoint) + 11 + z.Encryption.Msgsize()
return
}
@@ -772,91 +733,33 @@ func (z *BatchKeyRotateFilter) DecodeMsg(dc *msgp.Reader) (err error) {
if cap(z.Tags) >= int(zb0002) {
z.Tags = (z.Tags)[:zb0002]
} else {
z.Tags = make([]BatchKeyRotateKV, zb0002)
z.Tags = make([]BatchJobKV, zb0002)
}
for za0001 := range z.Tags {
var zb0003 uint32
zb0003, err = dc.ReadMapHeader()
err = z.Tags[za0001].DecodeMsg(dc)
if err != nil {
err = msgp.WrapError(err, "Tags", za0001)
return
}
for zb0003 > 0 {
zb0003--
field, err = dc.ReadMapKeyPtr()
if err != nil {
err = msgp.WrapError(err, "Tags", za0001)
return
}
switch msgp.UnsafeString(field) {
case "Key":
z.Tags[za0001].Key, err = dc.ReadString()
if err != nil {
err = msgp.WrapError(err, "Tags", za0001, "Key")
return
}
case "Value":
z.Tags[za0001].Value, err = dc.ReadString()
if err != nil {
err = msgp.WrapError(err, "Tags", za0001, "Value")
return
}
default:
err = dc.Skip()
if err != nil {
err = msgp.WrapError(err, "Tags", za0001)
return
}
}
}
}
case "Metadata":
var zb0004 uint32
zb0004, err = dc.ReadArrayHeader()
var zb0003 uint32
zb0003, err = dc.ReadArrayHeader()
if err != nil {
err = msgp.WrapError(err, "Metadata")
return
}
if cap(z.Metadata) >= int(zb0004) {
z.Metadata = (z.Metadata)[:zb0004]
if cap(z.Metadata) >= int(zb0003) {
z.Metadata = (z.Metadata)[:zb0003]
} else {
z.Metadata = make([]BatchKeyRotateKV, zb0004)
z.Metadata = make([]BatchJobKV, zb0003)
}
for za0002 := range z.Metadata {
var zb0005 uint32
zb0005, err = dc.ReadMapHeader()
err = z.Metadata[za0002].DecodeMsg(dc)
if err != nil {
err = msgp.WrapError(err, "Metadata", za0002)
return
}
for zb0005 > 0 {
zb0005--
field, err = dc.ReadMapKeyPtr()
if err != nil {
err = msgp.WrapError(err, "Metadata", za0002)
return
}
switch msgp.UnsafeString(field) {
case "Key":
z.Metadata[za0002].Key, err = dc.ReadString()
if err != nil {
err = msgp.WrapError(err, "Metadata", za0002, "Key")
return
}
case "Value":
z.Metadata[za0002].Value, err = dc.ReadString()
if err != nil {
err = msgp.WrapError(err, "Metadata", za0002, "Value")
return
}
default:
err = dc.Skip()
if err != nil {
err = msgp.WrapError(err, "Metadata", za0002)
return
}
}
}
}
case "KMSKeyID":
z.KMSKeyID, err = dc.ReadString()
@@ -929,25 +832,9 @@ func (z *BatchKeyRotateFilter) EncodeMsg(en *msgp.Writer) (err error) {
return
}
for za0001 := range z.Tags {
// map header, size 2
// write "Key"
err = en.Append(0x82, 0xa3, 0x4b, 0x65, 0x79)
err = z.Tags[za0001].EncodeMsg(en)
if err != nil {
return
}
err = en.WriteString(z.Tags[za0001].Key)
if err != nil {
err = msgp.WrapError(err, "Tags", za0001, "Key")
return
}
// write "Value"
err = en.Append(0xa5, 0x56, 0x61, 0x6c, 0x75, 0x65)
if err != nil {
return
}
err = en.WriteString(z.Tags[za0001].Value)
if err != nil {
err = msgp.WrapError(err, "Tags", za0001, "Value")
err = msgp.WrapError(err, "Tags", za0001)
return
}
}
@@ -962,25 +849,9 @@ func (z *BatchKeyRotateFilter) EncodeMsg(en *msgp.Writer) (err error) {
return
}
for za0002 := range z.Metadata {
// map header, size 2
// write "Key"
err = en.Append(0x82, 0xa3, 0x4b, 0x65, 0x79)
err = z.Metadata[za0002].EncodeMsg(en)
if err != nil {
return
}
err = en.WriteString(z.Metadata[za0002].Key)
if err != nil {
err = msgp.WrapError(err, "Metadata", za0002, "Key")
return
}
// write "Value"
err = en.Append(0xa5, 0x56, 0x61, 0x6c, 0x75, 0x65)
if err != nil {
return
}
err = en.WriteString(z.Metadata[za0002].Value)
if err != nil {
err = msgp.WrapError(err, "Metadata", za0002, "Value")
err = msgp.WrapError(err, "Metadata", za0002)
return
}
}
@@ -1017,25 +888,21 @@ func (z *BatchKeyRotateFilter) MarshalMsg(b []byte) (o []byte, err error) {
o = append(o, 0xa4, 0x54, 0x61, 0x67, 0x73)
o = msgp.AppendArrayHeader(o, uint32(len(z.Tags)))
for za0001 := range z.Tags {
// map header, size 2
// string "Key"
o = append(o, 0x82, 0xa3, 0x4b, 0x65, 0x79)
o = msgp.AppendString(o, z.Tags[za0001].Key)
// string "Value"
o = append(o, 0xa5, 0x56, 0x61, 0x6c, 0x75, 0x65)
o = msgp.AppendString(o, z.Tags[za0001].Value)
o, err = z.Tags[za0001].MarshalMsg(o)
if err != nil {
err = msgp.WrapError(err, "Tags", za0001)
return
}
}
// string "Metadata"
o = append(o, 0xa8, 0x4d, 0x65, 0x74, 0x61, 0x64, 0x61, 0x74, 0x61)
o = msgp.AppendArrayHeader(o, uint32(len(z.Metadata)))
for za0002 := range z.Metadata {
// map header, size 2
// string "Key"
o = append(o, 0x82, 0xa3, 0x4b, 0x65, 0x79)
o = msgp.AppendString(o, z.Metadata[za0002].Key)
// string "Value"
o = append(o, 0xa5, 0x56, 0x61, 0x6c, 0x75, 0x65)
o = msgp.AppendString(o, z.Metadata[za0002].Value)
o, err = z.Metadata[za0002].MarshalMsg(o)
if err != nil {
err = msgp.WrapError(err, "Metadata", za0002)
return
}
}
// string "KMSKeyID"
o = append(o, 0xa8, 0x4b, 0x4d, 0x53, 0x4b, 0x65, 0x79, 0x49, 0x44)
@@ -1095,91 +962,33 @@ func (z *BatchKeyRotateFilter) UnmarshalMsg(bts []byte) (o []byte, err error) {
if cap(z.Tags) >= int(zb0002) {
z.Tags = (z.Tags)[:zb0002]
} else {
z.Tags = make([]BatchKeyRotateKV, zb0002)
z.Tags = make([]BatchJobKV, zb0002)
}
for za0001 := range z.Tags {
var zb0003 uint32
zb0003, bts, err = msgp.ReadMapHeaderBytes(bts)
bts, err = z.Tags[za0001].UnmarshalMsg(bts)
if err != nil {
err = msgp.WrapError(err, "Tags", za0001)
return
}
for zb0003 > 0 {
zb0003--
field, bts, err = msgp.ReadMapKeyZC(bts)
if err != nil {
err = msgp.WrapError(err, "Tags", za0001)
return
}
switch msgp.UnsafeString(field) {
case "Key":
z.Tags[za0001].Key, bts, err = msgp.ReadStringBytes(bts)
if err != nil {
err = msgp.WrapError(err, "Tags", za0001, "Key")
return
}
case "Value":
z.Tags[za0001].Value, bts, err = msgp.ReadStringBytes(bts)
if err != nil {
err = msgp.WrapError(err, "Tags", za0001, "Value")
return
}
default:
bts, err = msgp.Skip(bts)
if err != nil {
err = msgp.WrapError(err, "Tags", za0001)
return
}
}
}
}
case "Metadata":
var zb0004 uint32
zb0004, bts, err = msgp.ReadArrayHeaderBytes(bts)
var zb0003 uint32
zb0003, bts, err = msgp.ReadArrayHeaderBytes(bts)
if err != nil {
err = msgp.WrapError(err, "Metadata")
return
}
if cap(z.Metadata) >= int(zb0004) {
z.Metadata = (z.Metadata)[:zb0004]
if cap(z.Metadata) >= int(zb0003) {
z.Metadata = (z.Metadata)[:zb0003]
} else {
z.Metadata = make([]BatchKeyRotateKV, zb0004)
z.Metadata = make([]BatchJobKV, zb0003)
}
for za0002 := range z.Metadata {
var zb0005 uint32
zb0005, bts, err = msgp.ReadMapHeaderBytes(bts)
bts, err = z.Metadata[za0002].UnmarshalMsg(bts)
if err != nil {
err = msgp.WrapError(err, "Metadata", za0002)
return
}
for zb0005 > 0 {
zb0005--
field, bts, err = msgp.ReadMapKeyZC(bts)
if err != nil {
err = msgp.WrapError(err, "Metadata", za0002)
return
}
switch msgp.UnsafeString(field) {
case "Key":
z.Metadata[za0002].Key, bts, err = msgp.ReadStringBytes(bts)
if err != nil {
err = msgp.WrapError(err, "Metadata", za0002, "Key")
return
}
case "Value":
z.Metadata[za0002].Value, bts, err = msgp.ReadStringBytes(bts)
if err != nil {
err = msgp.WrapError(err, "Metadata", za0002, "Value")
return
}
default:
bts, err = msgp.Skip(bts)
if err != nil {
err = msgp.WrapError(err, "Metadata", za0002)
return
}
}
}
}
case "KMSKeyID":
z.KMSKeyID, bts, err = msgp.ReadStringBytes(bts)
@@ -1203,144 +1012,16 @@ func (z *BatchKeyRotateFilter) UnmarshalMsg(bts []byte) (o []byte, err error) {
func (z *BatchKeyRotateFilter) Msgsize() (s int) {
s = 1 + 10 + msgp.DurationSize + 10 + msgp.DurationSize + 13 + msgp.TimeSize + 14 + msgp.TimeSize + 5 + msgp.ArrayHeaderSize
for za0001 := range z.Tags {
s += 1 + 4 + msgp.StringPrefixSize + len(z.Tags[za0001].Key) + 6 + msgp.StringPrefixSize + len(z.Tags[za0001].Value)
s += z.Tags[za0001].Msgsize()
}
s += 9 + msgp.ArrayHeaderSize
for za0002 := range z.Metadata {
s += 1 + 4 + msgp.StringPrefixSize + len(z.Metadata[za0002].Key) + 6 + msgp.StringPrefixSize + len(z.Metadata[za0002].Value)
s += z.Metadata[za0002].Msgsize()
}
s += 9 + msgp.StringPrefixSize + len(z.KMSKeyID)
return
}
// DecodeMsg implements msgp.Decodable
func (z *BatchKeyRotateKV) DecodeMsg(dc *msgp.Reader) (err error) {
var field []byte
_ = field
var zb0001 uint32
zb0001, err = dc.ReadMapHeader()
if err != nil {
err = msgp.WrapError(err)
return
}
for zb0001 > 0 {
zb0001--
field, err = dc.ReadMapKeyPtr()
if err != nil {
err = msgp.WrapError(err)
return
}
switch msgp.UnsafeString(field) {
case "Key":
z.Key, err = dc.ReadString()
if err != nil {
err = msgp.WrapError(err, "Key")
return
}
case "Value":
z.Value, err = dc.ReadString()
if err != nil {
err = msgp.WrapError(err, "Value")
return
}
default:
err = dc.Skip()
if err != nil {
err = msgp.WrapError(err)
return
}
}
}
return
}
// EncodeMsg implements msgp.Encodable
func (z BatchKeyRotateKV) EncodeMsg(en *msgp.Writer) (err error) {
// map header, size 2
// write "Key"
err = en.Append(0x82, 0xa3, 0x4b, 0x65, 0x79)
if err != nil {
return
}
err = en.WriteString(z.Key)
if err != nil {
err = msgp.WrapError(err, "Key")
return
}
// write "Value"
err = en.Append(0xa5, 0x56, 0x61, 0x6c, 0x75, 0x65)
if err != nil {
return
}
err = en.WriteString(z.Value)
if err != nil {
err = msgp.WrapError(err, "Value")
return
}
return
}
// MarshalMsg implements msgp.Marshaler
func (z BatchKeyRotateKV) MarshalMsg(b []byte) (o []byte, err error) {
o = msgp.Require(b, z.Msgsize())
// map header, size 2
// string "Key"
o = append(o, 0x82, 0xa3, 0x4b, 0x65, 0x79)
o = msgp.AppendString(o, z.Key)
// string "Value"
o = append(o, 0xa5, 0x56, 0x61, 0x6c, 0x75, 0x65)
o = msgp.AppendString(o, z.Value)
return
}
// UnmarshalMsg implements msgp.Unmarshaler
func (z *BatchKeyRotateKV) UnmarshalMsg(bts []byte) (o []byte, err error) {
var field []byte
_ = field
var zb0001 uint32
zb0001, bts, err = msgp.ReadMapHeaderBytes(bts)
if err != nil {
err = msgp.WrapError(err)
return
}
for zb0001 > 0 {
zb0001--
field, bts, err = msgp.ReadMapKeyZC(bts)
if err != nil {
err = msgp.WrapError(err)
return
}
switch msgp.UnsafeString(field) {
case "Key":
z.Key, bts, err = msgp.ReadStringBytes(bts)
if err != nil {
err = msgp.WrapError(err, "Key")
return
}
case "Value":
z.Value, bts, err = msgp.ReadStringBytes(bts)
if err != nil {
err = msgp.WrapError(err, "Value")
return
}
default:
bts, err = msgp.Skip(bts)
if err != nil {
err = msgp.WrapError(err)
return
}
}
}
o = bts
return
}
// Msgsize returns an upper bound estimate of the number of bytes occupied by the serialized message
func (z BatchKeyRotateKV) Msgsize() (s int) {
s = 1 + 4 + msgp.StringPrefixSize + len(z.Key) + 6 + msgp.StringPrefixSize + len(z.Value)
return
}
// DecodeMsg implements msgp.Decodable
func (z *BatchKeyRotateNotification) DecodeMsg(dc *msgp.Reader) (err error) {
var field []byte
@@ -1469,134 +1150,6 @@ func (z BatchKeyRotateNotification) Msgsize() (s int) {
return
}
// DecodeMsg implements msgp.Decodable
func (z *BatchKeyRotateRetry) DecodeMsg(dc *msgp.Reader) (err error) {
var field []byte
_ = field
var zb0001 uint32
zb0001, err = dc.ReadMapHeader()
if err != nil {
err = msgp.WrapError(err)
return
}
for zb0001 > 0 {
zb0001--
field, err = dc.ReadMapKeyPtr()
if err != nil {
err = msgp.WrapError(err)
return
}
switch msgp.UnsafeString(field) {
case "Attempts":
z.Attempts, err = dc.ReadInt()
if err != nil {
err = msgp.WrapError(err, "Attempts")
return
}
case "Delay":
z.Delay, err = dc.ReadDuration()
if err != nil {
err = msgp.WrapError(err, "Delay")
return
}
default:
err = dc.Skip()
if err != nil {
err = msgp.WrapError(err)
return
}
}
}
return
}
// EncodeMsg implements msgp.Encodable
func (z BatchKeyRotateRetry) EncodeMsg(en *msgp.Writer) (err error) {
// map header, size 2
// write "Attempts"
err = en.Append(0x82, 0xa8, 0x41, 0x74, 0x74, 0x65, 0x6d, 0x70, 0x74, 0x73)
if err != nil {
return
}
err = en.WriteInt(z.Attempts)
if err != nil {
err = msgp.WrapError(err, "Attempts")
return
}
// write "Delay"
err = en.Append(0xa5, 0x44, 0x65, 0x6c, 0x61, 0x79)
if err != nil {
return
}
err = en.WriteDuration(z.Delay)
if err != nil {
err = msgp.WrapError(err, "Delay")
return
}
return
}
// MarshalMsg implements msgp.Marshaler
func (z BatchKeyRotateRetry) MarshalMsg(b []byte) (o []byte, err error) {
o = msgp.Require(b, z.Msgsize())
// map header, size 2
// string "Attempts"
o = append(o, 0x82, 0xa8, 0x41, 0x74, 0x74, 0x65, 0x6d, 0x70, 0x74, 0x73)
o = msgp.AppendInt(o, z.Attempts)
// string "Delay"
o = append(o, 0xa5, 0x44, 0x65, 0x6c, 0x61, 0x79)
o = msgp.AppendDuration(o, z.Delay)
return
}
// UnmarshalMsg implements msgp.Unmarshaler
func (z *BatchKeyRotateRetry) UnmarshalMsg(bts []byte) (o []byte, err error) {
var field []byte
_ = field
var zb0001 uint32
zb0001, bts, err = msgp.ReadMapHeaderBytes(bts)
if err != nil {
err = msgp.WrapError(err)
return
}
for zb0001 > 0 {
zb0001--
field, bts, err = msgp.ReadMapKeyZC(bts)
if err != nil {
err = msgp.WrapError(err)
return
}
switch msgp.UnsafeString(field) {
case "Attempts":
z.Attempts, bts, err = msgp.ReadIntBytes(bts)
if err != nil {
err = msgp.WrapError(err, "Attempts")
return
}
case "Delay":
z.Delay, bts, err = msgp.ReadDurationBytes(bts)
if err != nil {
err = msgp.WrapError(err, "Delay")
return
}
default:
bts, err = msgp.Skip(bts)
if err != nil {
err = msgp.WrapError(err)
return
}
}
}
o = bts
return
}
// Msgsize returns an upper bound estimate of the number of bytes occupied by the serialized message
func (z BatchKeyRotateRetry) Msgsize() (s int) {
s = 1 + 9 + msgp.IntSize + 6 + msgp.DurationSize
return
}
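Reviewer aside: the generated methods removed above all follow the same tinylib/msgp pattern, and the raw byte constants are just pre-encoded msgpack headers (`0x82` is the fixmap header for a two-entry map). A minimal sketch of the equivalent hand-written round trip, using only the public msgp helpers that already appear in this file (the KV struct shape here is hypothetical):
package main
import (
"fmt"
"github.com/tinylib/msgp/msgp"
)
// marshalKV appends a {"Key": k, "Value": v} msgpack map, which is exactly
// what the generated MarshalMsg emits via pre-encoded byte constants.
func marshalKV(o []byte, k, v string) []byte {
o = msgp.AppendMapHeader(o, 2) // emits 0x82 for a size-2 map
o = msgp.AppendString(o, "Key")
o = msgp.AppendString(o, k)
o = msgp.AppendString(o, "Value")
o = msgp.AppendString(o, v)
return o
}
// unmarshalKV mirrors the generated UnmarshalMsg loop: read the map header,
// dispatch on each key, and skip unknown fields.
func unmarshalKV(bts []byte) (k, v string, err error) {
var sz uint32
if sz, bts, err = msgp.ReadMapHeaderBytes(bts); err != nil {
return
}
for ; sz > 0; sz-- {
var field []byte
if field, bts, err = msgp.ReadMapKeyZC(bts); err != nil {
return
}
switch string(field) {
case "Key":
k, bts, err = msgp.ReadStringBytes(bts)
case "Value":
v, bts, err = msgp.ReadStringBytes(bts)
default:
bts, err = msgp.Skip(bts)
}
if err != nil {
return
}
}
return
}
func main() {
buf := marshalKV(nil, "tier", "WARM")
k, v, _ := unmarshalKV(buf)
fmt.Println(k, v) // tier WARM
}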
// DecodeMsg implements msgp.Decodable
func (z *BatchKeyRotationType) DecodeMsg(dc *msgp.Reader) (err error) {
{


@@ -461,119 +461,6 @@ func BenchmarkDecodeBatchKeyRotateFilter(b *testing.B) {
}
}
func TestMarshalUnmarshalBatchKeyRotateKV(t *testing.T) {
v := BatchKeyRotateKV{}
bts, err := v.MarshalMsg(nil)
if err != nil {
t.Fatal(err)
}
left, err := v.UnmarshalMsg(bts)
if err != nil {
t.Fatal(err)
}
if len(left) > 0 {
t.Errorf("%d bytes left over after UnmarshalMsg(): %q", len(left), left)
}
left, err = msgp.Skip(bts)
if err != nil {
t.Fatal(err)
}
if len(left) > 0 {
t.Errorf("%d bytes left over after Skip(): %q", len(left), left)
}
}
func BenchmarkMarshalMsgBatchKeyRotateKV(b *testing.B) {
v := BatchKeyRotateKV{}
b.ReportAllocs()
b.ResetTimer()
for i := 0; i < b.N; i++ {
v.MarshalMsg(nil)
}
}
func BenchmarkAppendMsgBatchKeyRotateKV(b *testing.B) {
v := BatchKeyRotateKV{}
bts := make([]byte, 0, v.Msgsize())
bts, _ = v.MarshalMsg(bts[0:0])
b.SetBytes(int64(len(bts)))
b.ReportAllocs()
b.ResetTimer()
for i := 0; i < b.N; i++ {
bts, _ = v.MarshalMsg(bts[0:0])
}
}
func BenchmarkUnmarshalBatchKeyRotateKV(b *testing.B) {
v := BatchKeyRotateKV{}
bts, _ := v.MarshalMsg(nil)
b.ReportAllocs()
b.SetBytes(int64(len(bts)))
b.ResetTimer()
for i := 0; i < b.N; i++ {
_, err := v.UnmarshalMsg(bts)
if err != nil {
b.Fatal(err)
}
}
}
func TestEncodeDecodeBatchKeyRotateKV(t *testing.T) {
v := BatchKeyRotateKV{}
var buf bytes.Buffer
msgp.Encode(&buf, &v)
m := v.Msgsize()
if buf.Len() > m {
t.Log("WARNING: TestEncodeDecodeBatchKeyRotateKV Msgsize() is inaccurate")
}
vn := BatchKeyRotateKV{}
err := msgp.Decode(&buf, &vn)
if err != nil {
t.Error(err)
}
buf.Reset()
msgp.Encode(&buf, &v)
err = msgp.NewReader(&buf).Skip()
if err != nil {
t.Error(err)
}
}
func BenchmarkEncodeBatchKeyRotateKV(b *testing.B) {
v := BatchKeyRotateKV{}
var buf bytes.Buffer
msgp.Encode(&buf, &v)
b.SetBytes(int64(buf.Len()))
en := msgp.NewWriter(msgp.Nowhere)
b.ReportAllocs()
b.ResetTimer()
for i := 0; i < b.N; i++ {
v.EncodeMsg(en)
}
en.Flush()
}
func BenchmarkDecodeBatchKeyRotateKV(b *testing.B) {
v := BatchKeyRotateKV{}
var buf bytes.Buffer
msgp.Encode(&buf, &v)
b.SetBytes(int64(buf.Len()))
rd := msgp.NewEndlessReader(buf.Bytes(), b)
dc := msgp.NewReader(rd)
b.ReportAllocs()
b.ResetTimer()
for i := 0; i < b.N; i++ {
err := v.DecodeMsg(dc)
if err != nil {
b.Fatal(err)
}
}
}
func TestMarshalUnmarshalBatchKeyRotateNotification(t *testing.T) {
v := BatchKeyRotateNotification{}
bts, err := v.MarshalMsg(nil)
@@ -686,116 +573,3 @@ func BenchmarkDecodeBatchKeyRotateNotification(b *testing.B) {
}
}
}
func TestMarshalUnmarshalBatchKeyRotateRetry(t *testing.T) {
v := BatchKeyRotateRetry{}
bts, err := v.MarshalMsg(nil)
if err != nil {
t.Fatal(err)
}
left, err := v.UnmarshalMsg(bts)
if err != nil {
t.Fatal(err)
}
if len(left) > 0 {
t.Errorf("%d bytes left over after UnmarshalMsg(): %q", len(left), left)
}
left, err = msgp.Skip(bts)
if err != nil {
t.Fatal(err)
}
if len(left) > 0 {
t.Errorf("%d bytes left over after Skip(): %q", len(left), left)
}
}
func BenchmarkMarshalMsgBatchKeyRotateRetry(b *testing.B) {
v := BatchKeyRotateRetry{}
b.ReportAllocs()
b.ResetTimer()
for i := 0; i < b.N; i++ {
v.MarshalMsg(nil)
}
}
func BenchmarkAppendMsgBatchKeyRotateRetry(b *testing.B) {
v := BatchKeyRotateRetry{}
bts := make([]byte, 0, v.Msgsize())
bts, _ = v.MarshalMsg(bts[0:0])
b.SetBytes(int64(len(bts)))
b.ReportAllocs()
b.ResetTimer()
for i := 0; i < b.N; i++ {
bts, _ = v.MarshalMsg(bts[0:0])
}
}
func BenchmarkUnmarshalBatchKeyRotateRetry(b *testing.B) {
v := BatchKeyRotateRetry{}
bts, _ := v.MarshalMsg(nil)
b.ReportAllocs()
b.SetBytes(int64(len(bts)))
b.ResetTimer()
for i := 0; i < b.N; i++ {
_, err := v.UnmarshalMsg(bts)
if err != nil {
b.Fatal(err)
}
}
}
func TestEncodeDecodeBatchKeyRotateRetry(t *testing.T) {
v := BatchKeyRotateRetry{}
var buf bytes.Buffer
msgp.Encode(&buf, &v)
m := v.Msgsize()
if buf.Len() > m {
t.Log("WARNING: TestEncodeDecodeBatchKeyRotateRetry Msgsize() is inaccurate")
}
vn := BatchKeyRotateRetry{}
err := msgp.Decode(&buf, &vn)
if err != nil {
t.Error(err)
}
buf.Reset()
msgp.Encode(&buf, &v)
err = msgp.NewReader(&buf).Skip()
if err != nil {
t.Error(err)
}
}
func BenchmarkEncodeBatchKeyRotateRetry(b *testing.B) {
v := BatchKeyRotateRetry{}
var buf bytes.Buffer
msgp.Encode(&buf, &v)
b.SetBytes(int64(buf.Len()))
en := msgp.NewWriter(msgp.Nowhere)
b.ReportAllocs()
b.ResetTimer()
for i := 0; i < b.N; i++ {
v.EncodeMsg(en)
}
en.Flush()
}
func BenchmarkDecodeBatchKeyRotateRetry(b *testing.B) {
v := BatchKeyRotateRetry{}
var buf bytes.Buffer
msgp.Encode(&buf, &v)
b.SetBytes(int64(buf.Len()))
rd := msgp.NewEndlessReader(buf.Bytes(), b)
dc := msgp.NewReader(rd)
b.ReportAllocs()
b.ResetTimer()
for i := 0; i < b.N; i++ {
err := v.DecodeMsg(dc)
if err != nil {
b.Fatal(err)
}
}
}


@@ -94,16 +94,23 @@ func newStreamingBitrotWriter(disk StorageAPI, volume, filePath string, length i
r, w := io.Pipe()
h := algo.New()
bw := &streamingBitrotWriter{iow: w, closeWithErr: w.CloseWithError, h: h, shardSize: shardSize, canClose: &sync.WaitGroup{}}
bw := &streamingBitrotWriter{
iow: ioutil.NewDeadlineWriter(w, diskMaxTimeout),
closeWithErr: w.CloseWithError,
h: h,
shardSize: shardSize,
canClose: &sync.WaitGroup{},
}
bw.canClose.Add(1)
go func() {
defer bw.canClose.Done()
totalFileSize := int64(-1) // For compressed objects length will be unknown (represented by length=-1)
if length != -1 {
bitrotSumsTotalSize := ceilFrac(length, shardSize) * int64(h.Size()) // Size used for storing bitrot checksums.
totalFileSize = bitrotSumsTotalSize + length
}
r.CloseWithError(disk.CreateFile(context.TODO(), volume, filePath, totalFileSize, r))
bw.canClose.Done()
}()
return bw
}
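The substantive change in this hunk is wrapping the pipe writer with `ioutil.NewDeadlineWriter(w, diskMaxTimeout)`, so a wedged disk fails the bitrot stream instead of blocking it forever. A rough sketch of the pattern, not MinIO's actual implementation (which lives in internal/ioutil):
// Illustration only: fail a Write that blocks longer than timeout.
// Caveat: on timeout the helper goroutine may outlive the call; a real
// implementation must account for that.
// assumes: import ("context", "io", "time")
type deadlineWriter struct {
w       io.Writer
timeout time.Duration
}
func (d *deadlineWriter) Write(p []byte) (int, error) {
type result struct {
n   int
err error
}
ch := make(chan result, 1)
go func() {
n, err := d.w.Write(p)
ch <- result{n, err}
}()
select {
case r := <-ch:
return r.n, r.err
case <-time.After(d.timeout):
return 0, context.DeadlineExceeded
}
}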
@@ -165,9 +172,9 @@ func (b *streamingBitrotReader) ReadAt(buf []byte, offset int64) (int, error) {
b.rc, err = b.disk.ReadFileStream(context.TODO(), b.volume, b.filePath, streamOffset, b.tillOffset-streamOffset)
if err != nil {
if !IsErr(err, ignoredErrs...) {
logger.LogIf(GlobalContext,
logger.LogOnceIf(GlobalContext,
fmt.Errorf("Reading erasure shards at (%s: %s/%s) returned '%w', will attempt to reconstruct if we have quorum",
b.disk, b.volume, b.filePath, err))
b.disk, b.volume, b.filePath, err), "bitrot-read-file-stream-"+b.volume+"-"+b.filePath)
}
}
} else {


@@ -19,98 +19,51 @@ package cmd
import (
"context"
"fmt"
"sync"
"time"
"github.com/minio/madmin-go/v2"
"github.com/minio/madmin-go/v3"
"github.com/minio/minio/internal/pubsub"
)
const bootstrapMsgsLimit = 4 << 10
const bootstrapTraceLimit = 4 << 10
type bootstrapInfo struct {
msg string
ts time.Time
source string
}
type bootstrapTracer struct {
mu sync.RWMutex
idx int
info [bootstrapMsgsLimit]bootstrapInfo
lastUpdate time.Time
mu sync.RWMutex
info []madmin.TraceInfo
}
var globalBootstrapTracer = &bootstrapTracer{}
func (bs *bootstrapTracer) DropEvents() {
func (bs *bootstrapTracer) Record(info madmin.TraceInfo) {
bs.mu.Lock()
defer bs.mu.Unlock()
if time.Now().UTC().Sub(bs.lastUpdate) > 24*time.Hour {
bs.info = [4096]bootstrapInfo{}
bs.idx = 0
if len(bs.info) > bootstrapTraceLimit {
return
}
}
func (bs *bootstrapTracer) Empty() bool {
var empty bool
bs.mu.RLock()
empty = bs.info[0].msg == ""
bs.mu.RUnlock()
return empty
}
func (bs *bootstrapTracer) Record(msg string, skip int) {
source := getSource(skip + 1)
bs.mu.Lock()
now := time.Now().UTC()
bs.info[bs.idx] = bootstrapInfo{
msg: msg,
ts: now,
source: source,
}
bs.lastUpdate = now
bs.idx = (bs.idx + 1) % bootstrapMsgsLimit
bs.mu.Unlock()
bs.info = append(bs.info, info)
}
func (bs *bootstrapTracer) Events() []madmin.TraceInfo {
traceInfo := make([]madmin.TraceInfo, 0, bootstrapMsgsLimit)
// Add all messages in order
addAll := func(info []bootstrapInfo) {
for _, msg := range info {
if msg.ts.IsZero() {
continue // skip empty events
}
traceInfo = append(traceInfo, madmin.TraceInfo{
TraceType: madmin.TraceBootstrap,
Time: msg.ts,
NodeName: globalLocalNodeName,
FuncName: "BOOTSTRAP",
Message: fmt.Sprintf("%s %s", msg.source, msg.msg),
})
}
}
traceInfo := make([]madmin.TraceInfo, 0, bootstrapTraceLimit)
bs.mu.RLock()
addAll(bs.info[bs.idx:])
addAll(bs.info[:bs.idx])
for _, i := range bs.info {
traceInfo = append(traceInfo, i)
}
bs.mu.RUnlock()
return traceInfo
}
func (bs *bootstrapTracer) Publish(ctx context.Context, trace *pubsub.PubSub[madmin.TraceInfo, madmin.TraceType]) {
if bs.Empty() {
return
}
for _, bsEvent := range bs.Events() {
select {
case <-ctx.Done():
default:
trace.Publish(bsEvent)
if bsEvent.Message != "" {
select {
case <-ctx.Done():
default:
trace.Publish(bsEvent)
}
}
}
}
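Net effect of this rewrite: the fixed 4096-entry ring buffer (overwrite-oldest, with a 24-hour expiry) becomes a flat slice of ready-made `madmin.TraceInfo` values that simply stops recording once `bootstrapTraceLimit` is reached. Note the policy flip: the old tracer kept the most recent events, the new one keeps the earliest. A generic sketch of the new drop-when-full behavior:
// Sketch of the bounded-append policy adopted above.
// assumes: import "sync"
type boundedLog[T any] struct {
mu    sync.Mutex
limit int
items []T
}
func (b *boundedLog[T]) add(v T) {
b.mu.Lock()
defer b.mu.Unlock()
if len(b.items) >= b.limit {
return // full: later events are dropped rather than rotated in
}
b.items = append(b.items, v)
}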


@@ -1,60 +0,0 @@
// Copyright (c) 2015-2023 MinIO, Inc.
//
// This file is part of MinIO Object Storage stack
//
// This program is free software: you can redistribute it and/or modify
// it under the terms of the GNU Affero General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
//
// This program is distributed in the hope that it will be useful
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU Affero General Public License for more details.
//
// You should have received a copy of the GNU Affero General Public License
// along with this program. If not, see <http://www.gnu.org/licenses/>.
package cmd
import (
"fmt"
"strings"
"testing"
"time"
)
func TestBootstrap(t *testing.T) {
// Bootstrap events exceed bootstrap messages limit
bsTracer := &bootstrapTracer{}
for i := 0; i < bootstrapMsgsLimit+10; i++ {
bsTracer.Record(fmt.Sprintf("msg-%d", i), 1)
}
traceInfos := bsTracer.Events()
if len(traceInfos) != bootstrapMsgsLimit {
t.Fatalf("Expected length of events %d but got %d", bootstrapMsgsLimit, len(traceInfos))
}
// Simulate the case where bootstrap events were updated a day ago
bsTracer.lastUpdate = time.Now().UTC().Add(-25 * time.Hour)
bsTracer.DropEvents()
if !bsTracer.Empty() {
t.Fatalf("Expected all bootstrap events to have been dropped, but found %d events", len(bsTracer.Events()))
}
// Fewer than 4K bootstrap events
for i := 0; i < 10; i++ {
bsTracer.Record(fmt.Sprintf("msg-%d", i), 1)
}
events := bsTracer.Events()
if len(events) != 10 {
t.Fatalf("Expected length of events %d but got %d", 10, len(events))
}
for i, traceInfo := range bsTracer.Events() {
msg := fmt.Sprintf("msg-%d", i)
if !strings.HasSuffix(traceInfo.Message, msg) {
t.Fatalf("Expected %s but got %s", msg, traceInfo.Message)
}
}
}


@@ -24,6 +24,7 @@ import (
"io"
"net/http"
"net/url"
"os"
"reflect"
"time"
@@ -32,7 +33,7 @@ import (
"github.com/minio/minio/internal/logger"
"github.com/minio/minio/internal/rest"
"github.com/minio/mux"
"github.com/minio/pkg/env"
"github.com/minio/pkg/v2/env"
)
const (
@@ -152,14 +153,18 @@ func (b *bootstrapRESTServer) VerifyHandler(w http.ResponseWriter, r *http.Reque
// registerBootstrapRESTHandlers - register bootstrap rest router.
func registerBootstrapRESTHandlers(router *mux.Router) {
h := func(f http.HandlerFunc) http.HandlerFunc {
return collectInternodeStats(httpTraceHdrs(f))
}
server := &bootstrapRESTServer{}
subrouter := router.PathPrefix(bootstrapRESTPrefix).Subrouter()
subrouter.Methods(http.MethodPost).Path(bootstrapRESTVersionPrefix + bootstrapRESTMethodHealth).HandlerFunc(
httpTraceHdrs(server.HealthHandler))
h(server.HealthHandler))
subrouter.Methods(http.MethodPost).Path(bootstrapRESTVersionPrefix + bootstrapRESTMethodVerify).HandlerFunc(
httpTraceHdrs(server.VerifyHandler))
h(server.VerifyHandler))
}
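Worth calling out: both bootstrap handlers are now wrapped with `collectInternodeStats` in addition to `httpTraceHdrs`, so bootstrap traffic is counted in internode metrics. The `h` helper is plain middleware composition; a generic sketch of the same idea:
// Sketch: compose http.HandlerFunc middleware right-to-left, so the first
// wrapper listed ends up outermost (stats around tracing, as in `h` above).
// assumes: import "net/http"
func chain(f http.HandlerFunc, wrappers ...func(http.HandlerFunc) http.HandlerFunc) http.HandlerFunc {
for i := len(wrappers) - 1; i >= 0; i-- {
f = wrappers[i](f)
}
return f
}
// hypothetical usage: chain(server.HealthHandler, collectInternodeStats, httpTraceHdrs)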
// client to talk to bootstrap endpoints.
@@ -216,6 +221,7 @@ func verifyServerSystemConfig(ctx context.Context, endpointServerPools EndpointS
for onlineServers < len(clnts)/2 {
for _, clnt := range clnts {
if err := clnt.Verify(ctx, srcCfg); err != nil {
bootstrapTraceMsg(fmt.Sprintf("clnt.Verify: %v, endpoint: %v", err, clnt.endpoint))
if !isNetworkError(err) {
logger.LogOnceIf(ctx, fmt.Errorf("%s has incorrect configuration: %w", clnt.String(), err), clnt.String())
incorrectConfigs = append(incorrectConfigs, fmt.Errorf("%s has incorrect configuration: %w", clnt.String(), err))
@@ -264,7 +270,11 @@ func newBootstrapRESTClients(endpointServerPools EndpointServerPools) []*bootstr
// Only proceed for remote endpoints.
if !endpoint.IsLocal {
clnts = append(clnts, newBootstrapRESTClient(endpoint))
cl := newBootstrapRESTClient(endpoint)
if serverDebugLog {
cl.restClient.TraceOutput = os.Stdout
}
clnts = append(clnts, cl)
}
}
}


@@ -26,11 +26,11 @@ import (
"net/http"
"github.com/minio/kes-go"
"github.com/minio/madmin-go/v2"
"github.com/minio/madmin-go/v3"
"github.com/minio/minio/internal/kms"
"github.com/minio/minio/internal/logger"
"github.com/minio/mux"
"github.com/minio/pkg/bucket/policy"
"github.com/minio/pkg/v2/policy"
)
const (


@@ -20,7 +20,10 @@ package cmd
import (
"bytes"
"context"
"crypto/md5"
"encoding/base64"
"encoding/hex"
"encoding/json"
"encoding/xml"
"errors"
"fmt"
@@ -30,6 +33,7 @@ import (
"net/textproto"
"net/url"
"path"
"runtime"
"sort"
"strconv"
"strings"
@@ -37,8 +41,10 @@ import (
"github.com/google/uuid"
"github.com/minio/mux"
"github.com/valyala/bytebufferpool"
"github.com/minio/madmin-go/v2"
"github.com/minio/madmin-go/v3"
"github.com/minio/minio-go/v7"
"github.com/minio/minio-go/v7/pkg/set"
"github.com/minio/minio-go/v7/pkg/tags"
"github.com/minio/minio/internal/auth"
@@ -51,17 +57,20 @@ import (
"github.com/minio/minio/internal/handlers"
"github.com/minio/minio/internal/hash"
xhttp "github.com/minio/minio/internal/http"
"github.com/minio/minio/internal/ioutil"
"github.com/minio/minio/internal/kms"
"github.com/minio/minio/internal/logger"
"github.com/minio/pkg/bucket/policy"
iampolicy "github.com/minio/pkg/iam/policy"
"github.com/minio/pkg/sync/errgroup"
"github.com/minio/pkg/v2/policy"
"github.com/minio/pkg/v2/sync/errgroup"
)
const (
objectLockConfig = "object-lock.xml"
bucketTaggingConfig = "tagging.xml"
bucketReplicationConfig = "replication.xml"
xMinIOErrCodeHeader = "x-minio-error-code"
xMinIOErrDescHeader = "x-minio-error-desc"
)
// Check if there are buckets on server without corresponding entry in etcd backend and
@@ -357,10 +366,10 @@ func (api objectAPIHandlers) ListBucketsHandler(w http.ResponseWriter, r *http.R
// Use the following trick to filter in place
// https://github.com/golang/go/wiki/SliceTricks#filter-in-place
for _, bucketInfo := range bucketsInfo {
if globalIAMSys.IsAllowed(iampolicy.Args{
if globalIAMSys.IsAllowed(policy.Args{
AccountName: cred.AccessKey,
Groups: cred.Groups,
Action: iampolicy.ListBucketAction,
Action: policy.ListBucketAction,
BucketName: bucketInfo.Name,
ConditionValues: getConditionValues(r, "", cred),
IsOwner: owner,
@@ -369,10 +378,10 @@ func (api objectAPIHandlers) ListBucketsHandler(w http.ResponseWriter, r *http.R
}) {
bucketsInfo[n] = bucketInfo
n++
} else if globalIAMSys.IsAllowed(iampolicy.Args{
} else if globalIAMSys.IsAllowed(policy.Args{
AccountName: cred.AccessKey,
Groups: cred.Groups,
Action: iampolicy.GetBucketLocationAction,
Action: policy.GetBucketLocationAction,
BucketName: bucketInfo.Name,
ConditionValues: getConditionValues(r, "", cred),
IsOwner: owner,
@@ -431,6 +440,11 @@ func (api objectAPIHandlers) DeleteMultipleObjectsHandler(w http.ResponseWriter,
// The max. XML contains 100000 object names (each at most 1024 bytes long) + XML overhead
const maxBodySize = 2 * 100000 * 1024
if r.ContentLength > maxBodySize {
writeErrorResponse(ctx, w, errorCodes.ToAPIErr(ErrEntityTooLarge), r.URL)
return
}
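// (reviewer note) For scale: maxBodySize = 2 * 100000 * 1024 = 204,800,000
// bytes, roughly 195 MiB. With this new guard, oversized DeleteObjects
// payloads are rejected with ErrEntityTooLarge before any XML decoding work.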
// Unmarshal list of keys to be deleted.
deleteObjectsReq := &DeleteObjectsRequest{}
if err := xmlDecoder(r.Body, deleteObjectsReq, maxBodySize); err != nil {
@@ -460,9 +474,6 @@ func (api objectAPIHandlers) DeleteMultipleObjectsHandler(w http.ResponseWriter,
}
deleteObjectsFn := objectAPI.DeleteObjects
if api.CacheAPI() != nil {
deleteObjectsFn = api.CacheAPI().DeleteObjects
}
// Return MalformedXML, per the S3 spec, if the list of objects is empty or exceeds maxDeleteList
if len(deleteObjectsReq.Objects) == 0 || len(deleteObjectsReq.Objects) > maxDeleteList {
@@ -472,9 +483,6 @@ func (api objectAPIHandlers) DeleteMultipleObjectsHandler(w http.ResponseWriter,
objectsToDelete := map[ObjectToDelete]int{}
getObjectInfoFn := objectAPI.GetObjectInfo
if api.CacheAPI() != nil {
getObjectInfoFn = api.CacheAPI().GetObjectInfo
}
var (
hasLockEnabled bool
@@ -565,8 +573,8 @@ func (api objectAPIHandlers) DeleteMultipleObjectsHandler(w http.ResponseWriter,
}
}
if object.VersionID != "" && hasLockEnabled {
if apiErrCode := enforceRetentionBypassForDelete(ctx, r, bucket, object, goi, gerr); apiErrCode != ErrNone {
apiErr := errorCodes.ToAPIErr(apiErrCode)
if err := enforceRetentionBypassForDelete(ctx, r, bucket, object, goi, gerr); err != nil {
apiErr := toAPIError(ctx, err)
deleteResults[index].errInfo = DeleteError{
Code: apiErr.Code,
Message: apiErr.Description,
@@ -641,6 +649,11 @@ func (api objectAPIHandlers) DeleteMultipleObjectsHandler(w http.ResponseWriter,
if deleteResult.errInfo.Code != "" {
deleteErrors = append(deleteErrors, deleteResult.errInfo)
} else {
// All deletes on directory objects were done with `nullVersionID`.
// Remove it from response.
if isDirObject(deleteResult.delInfo.ObjectName) && deleteResult.delInfo.VersionID == nullVersionID {
deleteResult.delInfo.VersionID = ""
}
deletedObjects = append(deletedObjects, deleteResult.delInfo)
}
}
@@ -656,6 +669,11 @@ func (api objectAPIHandlers) DeleteMultipleObjectsHandler(w http.ResponseWriter,
}
if replicateDeletes && (dobj.DeleteMarkerReplicationStatus() == replication.Pending || dobj.VersionPurgeStatus() == Pending) {
// copy so we can re-add null ID.
dobj := dobj
if isDirObject(dobj.ObjectName) && dobj.VersionID == "" {
dobj.VersionID = nullVersionID
}
dv := DeletedObjectReplicationInfo{
DeletedObject: dobj,
Bucket: bucket,
@@ -692,7 +710,7 @@ func (api objectAPIHandlers) DeleteMultipleObjectsHandler(w http.ResponseWriter,
if os == nil { // skip objects that weren't deleted due to invalid versionID etc.
continue
}
logger.LogIf(ctx, os.Sweep())
os.Sweep()
}
}
@@ -745,8 +763,8 @@ func (api objectAPIHandlers) PutBucketHandler(w http.ResponseWriter, r *http.Req
if objectLockEnabled {
// Creating a bucket with locking requires the user having more permissions
for _, action := range []iampolicy.Action{iampolicy.PutBucketObjectLockConfigurationAction, iampolicy.PutBucketVersioningAction} {
if !globalIAMSys.IsAllowed(iampolicy.Args{
for _, action := range []policy.Action{policy.PutBucketObjectLockConfigurationAction, policy.PutBucketVersioningAction} {
if !globalIAMSys.IsAllowed(policy.Args{
AccountName: cred.AccessKey,
Groups: cred.Groups,
Action: action,
@@ -770,7 +788,7 @@ func (api objectAPIHandlers) PutBucketHandler(w http.ResponseWriter, r *http.Req
// check if client is attempting to create more buckets, complain about it.
if currBuckets := globalBucketMetadataSys.Count(); currBuckets+1 > maxBuckets {
logger.LogIf(ctx, fmt.Errorf("An attempt to create %d buckets beyond recommended %d", currBuckets+1, maxBuckets))
logger.LogIf(ctx, fmt.Errorf("Please avoid creating more buckets %d beyond recommended %d", currBuckets+1, maxBuckets))
}
opts := MakeBucketOptions{
@@ -889,14 +907,6 @@ func (api objectAPIHandlers) PostPolicyBucketHandler(w http.ResponseWriter, r *h
}
bucket := mux.Vars(r)["bucket"]
// Require Content-Length to be set in the request
size := r.ContentLength
if size < 0 {
writeErrorResponse(ctx, w, errorCodes.ToAPIErr(ErrMissingContentLength), r.URL)
return
}
resource, err := getResource(r.URL.Path, r.Host, globalDomainNames)
if err != nil {
writeErrorResponse(ctx, w, errorCodes.ToAPIErr(ErrInvalidRequest), r.URL)
@@ -911,7 +921,7 @@ func (api objectAPIHandlers) PostPolicyBucketHandler(w http.ResponseWriter, r *h
// Here the parameter is the size of the form data that should
// be loaded in memory, the remainder being put in temporary files.
reader, err := r.MultipartReader()
mp, err := r.MultipartReader()
if err != nil {
apiErr := errorCodes.ToAPIErr(ErrMalformedPOSTRequest)
apiErr.Description = fmt.Sprintf("%s (%v)", apiErr.Description, err)
@@ -919,16 +929,20 @@ func (api objectAPIHandlers) PostPolicyBucketHandler(w http.ResponseWriter, r *h
return
}
const mapEntryOverhead = 200
var (
fileBody io.ReadCloser
fileName string
reader io.Reader
fileSize int64 = -1
fileName string
fanOutEntries = make([]minio.PutObjectFanOutEntry, 0, 100)
)
maxParts := 1000
// Canonicalize the form values into http.Header.
formValues := make(http.Header)
for {
part, err := reader.NextRawPart()
part, err := mp.NextRawPart()
if errors.Is(err, io.EOF) {
break
}
@@ -956,7 +970,6 @@ func (api objectAPIHandlers) PostPolicyBucketHandler(w http.ResponseWriter, r *h
// Multiple values for the same key (one map entry, longer slice) are cheaper
// than the same number of values for different keys (many map entries), but
// using a consistent per-value cost for overhead is simpler.
const mapEntryOverhead = 200
maxMemoryBytes := 2 * int64(10<<20)
maxMemoryBytes -= int64(len(name))
maxMemoryBytes -= mapEntryOverhead
@@ -971,9 +984,29 @@ func (api objectAPIHandlers) PostPolicyBucketHandler(w http.ResponseWriter, r *h
var b bytes.Buffer
if fileName == "" {
if http.CanonicalHeaderKey(name) == http.CanonicalHeaderKey("x-minio-fanout-list") {
dec := json.NewDecoder(part)
// while the array contains values
for dec.More() {
var m minio.PutObjectFanOutEntry
if err := dec.Decode(&m); err != nil {
part.Close()
apiErr := errorCodes.ToAPIErr(ErrMalformedPOSTRequest)
apiErr.Description = fmt.Sprintf("%s (%v)", apiErr.Description, multipart.ErrMessageTooLarge)
writeErrorResponse(ctx, w, apiErr, r.URL)
return
}
fanOutEntries = append(fanOutEntries, m)
}
part.Close()
continue
}
// value, store as string in memory
n, err := io.CopyN(&b, part, maxMemoryBytes+1)
part.Close()
if err != nil && err != io.EOF {
apiErr := errorCodes.ToAPIErr(ErrMalformedPOSTRequest)
apiErr.Description = fmt.Sprintf("%s (%v)", apiErr.Description, err)
@@ -1001,23 +1034,41 @@ func (api objectAPIHandlers) PostPolicyBucketHandler(w http.ResponseWriter, r *h
// The file or text content.
// The file or text content must be the last field in the form.
// You cannot upload more than one file at a time.
fileBody = part // we have found the File part of the request breakout
defer part.Close()
reader = part
// we have found the File part of the request; we are done processing the multipart form
break
}
if _, ok := formValues["Key"]; !ok {
if keyName, ok := formValues["Key"]; !ok {
apiErr := errorCodes.ToAPIErr(ErrMalformedPOSTRequest)
apiErr.Description = fmt.Sprintf("%s (%v)", apiErr.Description, errors.New("The name of the uploaded key is missing"))
writeErrorResponse(ctx, w, apiErr, r.URL)
return
} else if fileName == "" && len(keyName) >= 1 {
// fileName was not provided in the form; fall back to keyName[0]
fileName = keyName[0]
}
if fileName == "" {
apiErr := errorCodes.ToAPIErr(ErrMalformedPOSTRequest)
apiErr.Description = fmt.Sprintf("%s (%v)", apiErr.Description, errors.New("The file or text content is missing"))
writeErrorResponse(ctx, w, apiErr, r.URL)
return
}
checksum, err := hash.GetContentChecksum(formValues)
if err != nil {
apiErr := errorCodes.ToAPIErr(ErrMalformedPOSTRequest)
apiErr.Description = fmt.Sprintf("%s (%v)", apiErr.Description, fmt.Errorf("Invalid checksum: %w", err))
writeErrorResponse(ctx, w, apiErr, r.URL)
return
}
if checksum != nil && checksum.Type.Trailing() {
// Not officially supported in POST requests.
apiErr := errorCodes.ToAPIErr(ErrMalformedPOSTRequest)
apiErr.Description = fmt.Sprintf("%s (%v)", apiErr.Description, errors.New("Trailing checksums not available for POST operations"))
writeErrorResponse(ctx, w, apiErr, r.URL)
return
}
formValues.Set("Bucket", bucket)
if fileName != "" && strings.Contains(formValues.Get("Key"), "${filename}") {
@@ -1045,20 +1096,38 @@ func (api objectAPIHandlers) PostPolicyBucketHandler(w http.ResponseWriter, r *h
return
}
// Once signature is validated, check if the user has
// explicit permissions for the user.
if !globalIAMSys.IsAllowed(iampolicy.Args{
AccountName: cred.AccessKey,
Groups: cred.Groups,
Action: iampolicy.PutObjectAction,
ConditionValues: getConditionValues(r, "", cred),
BucketName: bucket,
ObjectName: object,
IsOwner: globalActiveCred.AccessKey == cred.AccessKey,
Claims: cred.Claims,
}) {
writeErrorResponse(ctx, w, errorCodes.ToAPIErr(ErrAccessDenied), r.URL)
return
if len(fanOutEntries) > 0 {
// Once signature is validated, check if the user has
// explicit permissions for the user.
if !globalIAMSys.IsAllowed(policy.Args{
AccountName: cred.AccessKey,
Groups: cred.Groups,
Action: policy.PutObjectFanOutAction,
ConditionValues: getConditionValues(r, "", cred),
BucketName: bucket,
ObjectName: object,
IsOwner: globalActiveCred.AccessKey == cred.AccessKey,
Claims: cred.Claims,
}) {
writeErrorResponse(ctx, w, errorCodes.ToAPIErr(ErrAccessDenied), r.URL)
return
}
} else {
// Once signature is validated, check if the user has
// explicit permissions for the user.
if !globalIAMSys.IsAllowed(policy.Args{
AccountName: cred.AccessKey,
Groups: cred.Groups,
Action: policy.PutObjectAction,
ConditionValues: getConditionValues(r, "", cred),
BucketName: bucket,
ObjectName: object,
IsOwner: globalActiveCred.AccessKey == cred.AccessKey,
Claims: cred.Claims,
}) {
writeErrorResponse(ctx, w, errorCodes.ToAPIErr(ErrAccessDenied), r.URL)
return
}
}
policyBytes, err := base64.StdEncoding.DecodeString(formValues.Get("Policy"))
@@ -1067,12 +1136,17 @@ func (api objectAPIHandlers) PostPolicyBucketHandler(w http.ResponseWriter, r *h
return
}
hashReader, err := hash.NewReader(fileBody, -1, "", "", -1)
hashReader, err := hash.NewReader(ctx, reader, fileSize, "", "", fileSize)
if err != nil {
logger.LogIf(ctx, err)
writeErrorResponse(ctx, w, toAPIError(ctx, err), r.URL)
return
}
if checksum != nil && checksum.Valid() {
if err = hashReader.AddChecksumNoTrailer(formValues, false); err != nil {
writeErrorResponse(ctx, w, toAPIError(ctx, err), r.URL)
return
}
}
// Handle policy if it is set.
if len(policyBytes) > 0 {
@@ -1113,7 +1187,7 @@ func (api objectAPIHandlers) PostPolicyBucketHandler(w http.ResponseWriter, r *h
// Check if bucket encryption is enabled
sseConfig, _ := globalBucketSSEConfigSys.Get(bucket)
sseConfig.Apply(r.Header, sse.ApplyOptions{
sseConfig.Apply(formValues, sse.ApplyOptions{
AutoEncrypt: globalAutoEncryption,
})
@@ -1124,12 +1198,24 @@ func (api objectAPIHandlers) PostPolicyBucketHandler(w http.ResponseWriter, r *h
return
}
fanOutOpts := fanOutOptions{Checksum: checksum}
if crypto.Requested(formValues) {
if crypto.SSECopy.IsRequested(r.Header) {
writeErrorResponse(ctx, w, toAPIError(ctx, errInvalidEncryptionParameters), r.URL)
return
}
if crypto.SSEC.IsRequested(r.Header) && crypto.S3.IsRequested(r.Header) {
writeErrorResponse(ctx, w, toAPIError(ctx, crypto.ErrIncompatibleEncryptionMethod), r.URL)
return
}
if crypto.SSEC.IsRequested(r.Header) && crypto.S3KMS.IsRequested(r.Header) {
writeErrorResponse(ctx, w, toAPIError(ctx, crypto.ErrIncompatibleEncryptionMethod), r.URL)
return
}
if crypto.SSEC.IsRequested(r.Header) && isReplicationEnabled(ctx, bucket) {
writeErrorResponse(ctx, w, toAPIError(ctx, errInvalidEncryptionParametersSSEC), r.URL)
return
@@ -1156,22 +1242,155 @@ func (api objectAPIHandlers) PostPolicyBucketHandler(w http.ResponseWriter, r *h
return
}
}
reader, objectEncryptionKey, err = newEncryptReader(ctx, hashReader, kind, keyID, key, bucket, object, metadata, kmsCtx)
if len(fanOutEntries) == 0 {
reader, objectEncryptionKey, err = newEncryptReader(ctx, hashReader, kind, keyID, key, bucket, object, metadata, kmsCtx)
if err != nil {
writeErrorResponse(ctx, w, toAPIError(ctx, err), r.URL)
return
}
// do not try to verify encrypted content
hashReader, err = hash.NewReader(ctx, reader, -1, "", "", -1)
if err != nil {
writeErrorResponse(ctx, w, toAPIError(ctx, err), r.URL)
return
}
if checksum != nil && checksum.Valid() {
if err = hashReader.AddChecksumNoTrailer(formValues, true); err != nil {
writeErrorResponse(ctx, w, toAPIError(ctx, err), r.URL)
return
}
}
pReader, err = pReader.WithEncryption(hashReader, &objectEncryptionKey)
if err != nil {
writeErrorResponse(ctx, w, toAPIError(ctx, err), r.URL)
return
}
} else {
fanOutOpts = fanOutOptions{
Key: key,
Kind: kind,
KeyID: keyID,
KmsCtx: kmsCtx,
Checksum: checksum,
}
}
}
if len(fanOutEntries) > 0 {
// Fan-out must read from the original source, not from a copy
// (https://en.wikipedia.org/wiki/Copy_protection), so the incoming stream
// is buffered entirely in memory: we cannot re-read what we already wrote
// to disk, since that would amount to "copying" from a "copy" rather than
// from the source. Buffering also makes the stream re-readable, which is
// what allows the fan-out calls to run concurrently.
buf := bytebufferpool.Get()
defer func() {
buf.Reset()
bytebufferpool.Put(buf)
}()
md5w := md5.New()
// Maximum allowed fan-out object size.
const maxFanOutSize = 16 << 20
n, err := io.Copy(io.MultiWriter(buf, md5w), ioutil.HardLimitReader(pReader, maxFanOutSize))
if err != nil {
writeErrorResponse(ctx, w, toAPIError(ctx, err), r.URL)
return
}
// do not try to verify encrypted content/
hashReader, err = hash.NewReader(reader, -1, "", "", -1)
if err != nil {
writeErrorResponse(ctx, w, toAPIError(ctx, err), r.URL)
return
// Set the correct hex md5sum for the fan-out stream.
fanOutOpts.MD5Hex = hex.EncodeToString(md5w.Sum(nil))
concurrentSize := 100
if runtime.GOMAXPROCS(0) < concurrentSize {
concurrentSize = runtime.GOMAXPROCS(0)
}
pReader, err = pReader.WithEncryption(hashReader, &objectEncryptionKey)
if err != nil {
writeErrorResponse(ctx, w, toAPIError(ctx, err), r.URL)
return
fanOutResp := make([]minio.PutObjectFanOutResponse, 0, len(fanOutEntries))
eventArgsList := make([]eventArgs, 0, len(fanOutEntries))
for {
var objInfos []ObjectInfo
var errs []error
var done bool
if len(fanOutEntries) < concurrentSize {
objInfos, errs = fanOutPutObject(ctx, bucket, objectAPI, fanOutEntries, buf.Bytes()[:n], fanOutOpts)
done = true
} else {
objInfos, errs = fanOutPutObject(ctx, bucket, objectAPI, fanOutEntries[:concurrentSize], buf.Bytes()[:n], fanOutOpts)
fanOutEntries = fanOutEntries[concurrentSize:]
}
for i, objInfo := range objInfos {
if errs[i] != nil {
fanOutResp = append(fanOutResp, minio.PutObjectFanOutResponse{
Key: objInfo.Name,
Error: errs[i].Error(),
})
eventArgsList = append(eventArgsList, eventArgs{
EventName: event.ObjectCreatedPost,
BucketName: objInfo.Bucket,
Object: ObjectInfo{Name: objInfo.Name},
ReqParams: extractReqParams(r),
RespElements: extractRespElements(w),
UserAgent: fmt.Sprintf("%s MinIO-Fan-Out (failed: %v)", r.UserAgent(), errs[i]),
Host: handlers.GetSourceIP(r),
})
continue
}
fanOutResp = append(fanOutResp, minio.PutObjectFanOutResponse{
Key: objInfo.Name,
ETag: getDecryptedETag(formValues, objInfo, false),
VersionID: objInfo.VersionID,
LastModified: &objInfo.ModTime,
})
eventArgsList = append(eventArgsList, eventArgs{
EventName: event.ObjectCreatedPost,
BucketName: objInfo.Bucket,
Object: objInfo,
ReqParams: extractReqParams(r),
RespElements: extractRespElements(w),
UserAgent: r.UserAgent() + " " + "MinIO-Fan-Out",
Host: handlers.GetSourceIP(r),
})
}
if done {
break
}
}
enc := json.NewEncoder(w)
for i, fanOutResp := range fanOutResp {
if err = enc.Encode(&fanOutResp); err != nil {
writeErrorResponse(ctx, w, toAPIError(ctx, err), r.URL)
return
}
// Notify object created events.
sendEvent(eventArgsList[i])
if eventArgsList[i].Object.NumVersions > dataScannerExcessiveVersionsThreshold {
// Send events for excessive versions.
sendEvent(eventArgs{
EventName: event.ObjectManyVersions,
BucketName: eventArgsList[i].Object.Bucket,
Object: eventArgsList[i].Object,
ReqParams: extractReqParams(r),
RespElements: extractRespElements(w),
UserAgent: r.UserAgent() + " " + "MinIO-Fan-Out",
Host: handlers.GetSourceIP(r),
})
}
}
return
}
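// (reviewer note) The fan-out path above streams one JSON object per entry
// (json.NewEncoder, no array wrapper) and processes entries in batches of at
// most GOMAXPROCS. A client-side sketch for consuming that stream, assuming
// concatenated JSON objects as the encoder loop suggests:
// assumes: import ("encoding/json", "io", "github.com/minio/minio-go/v7")
func readFanOutResponses(r io.Reader) ([]minio.PutObjectFanOutResponse, error) {
dec := json.NewDecoder(r)
var out []minio.PutObjectFanOutResponse
for {
var resp minio.PutObjectFanOutResponse
err := dec.Decode(&resp)
if err == io.EOF {
return out, nil
}
if err != nil {
return nil, err
}
out = append(out, resp)
}
}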
objInfo, err := objectAPI.PutObject(ctx, bucket, object, pReader, opts)
@@ -1186,7 +1405,7 @@ func (api objectAPIHandlers) PostPolicyBucketHandler(w http.ResponseWriter, r *h
w.Header()[xhttp.ETag] = []string{`"` + objInfo.ETag + `"`}
// Set the relevant version ID as part of the response header.
if objInfo.VersionID != "" {
if objInfo.VersionID != "" && objInfo.VersionID != nullVersionID {
w.Header()[xhttp.AmzVersionID] = []string{objInfo.VersionID}
}
@@ -1204,6 +1423,7 @@ func (api objectAPIHandlers) PostPolicyBucketHandler(w http.ResponseWriter, r *h
UserAgent: r.UserAgent(),
Host: handlers.GetSourceIP(r),
})
if objInfo.NumVersions > dataScannerExcessiveVersionsThreshold {
defer sendEvent(eventArgs{
EventName: event.ObjectManyVersions,
@@ -1226,6 +1446,11 @@ func (api objectAPIHandlers) PostPolicyBucketHandler(w http.ResponseWriter, r *h
return
}
// Add checksum header.
if checksum != nil && checksum.Valid() {
hash.AddChecksumHeader(w, checksum.AsMap())
}
// Decide what http response to send depending on success_action_status parameter
switch successStatus {
case "201":
@@ -1271,7 +1496,7 @@ func (api objectAPIHandlers) GetBucketPolicyStatusHandler(w http.ResponseWriter,
}
// Check if anonymous (non-owner) has access to list objects.
readable := globalPolicySys.IsAllowed(policy.Args{
readable := globalPolicySys.IsAllowed(policy.BucketPolicyArgs{
Action: policy.ListBucketAction,
BucketName: bucket,
ConditionValues: getConditionValues(r, "", auth.AnonymousCredentials),
@@ -1279,7 +1504,7 @@ func (api objectAPIHandlers) GetBucketPolicyStatusHandler(w http.ResponseWriter,
})
// Check if anonymous (non-owner) has access to upload objects.
writable := globalPolicySys.IsAllowed(policy.Args{
writable := globalPolicySys.IsAllowed(policy.BucketPolicyArgs{
Action: policy.PutObjectAction,
BucketName: bucket,
ConditionValues: getConditionValues(r, "", auth.AnonymousCredentials),
@@ -1394,10 +1619,14 @@ func (api objectAPIHandlers) DeleteBucketHandler(w http.ResponseWriter, r *http.
}
}
deleteBucket := objectAPI.DeleteBucket
// Return an error if the bucket does not exist
if _, err := objectAPI.GetBucketInfo(ctx, bucket, BucketOptions{}); err != nil && !forceDelete {
writeErrorResponse(ctx, w, toAPIError(ctx, err), r.URL)
return
}
// Attempt to delete bucket.
if err := deleteBucket(ctx, bucket, DeleteBucketOptions{
if err := objectAPI.DeleteBucket(ctx, bucket, DeleteBucketOptions{
Force: forceDelete,
SRDeleteOp: getSRBucketDeleteOp(globalSiteReplicationSys.isEnabled()),
}); err != nil {
@@ -1463,7 +1692,7 @@ func (api objectAPIHandlers) PutBucketObjectLockConfigHandler(w http.ResponseWri
config, err := objectlock.ParseObjectLockConfig(r.Body)
if err != nil {
apiErr := errorCodes.ToAPIErr(ErrMalformedXML)
apiErr := errorCodes.ToAPIErr(ErrInvalidArgument)
apiErr.Description = err.Error()
writeErrorResponse(ctx, w, apiErr, r.URL)
return
@@ -1477,7 +1706,11 @@ func (api objectAPIHandlers) PutBucketObjectLockConfigHandler(w http.ResponseWri
// Deny object locking configuration settings on existing buckets without object lock enabled.
if _, _, err = globalBucketMetadataSys.GetObjectLockConfig(bucket); err != nil {
writeErrorResponse(ctx, w, toAPIError(ctx, err), r.URL)
if _, ok := err.(BucketObjectLockConfigNotFound); ok {
writeErrorResponse(ctx, w, errorCodes.ToAPIErr(ErrObjectLockConfigurationNotAllowed), r.URL)
} else {
writeErrorResponse(ctx, w, toAPIError(ctx, err), r.URL)
}
return
}
@@ -1679,5 +1912,5 @@ func (api objectAPIHandlers) DeleteBucketTaggingHandler(w http.ResponseWriter, r
}))
// Write success response.
writeSuccessResponseHeadersOnly(w)
writeSuccessNoContent(w)
}


@@ -355,7 +355,7 @@ func testListMultipartUploadsHandler(obj ObjectLayer, instanceType, bucketName s
maxUploads: "0",
accessKey: credentials.AccessKey,
secretKey: credentials.SecretKey,
expectedRespStatus: http.StatusNotFound,
expectedRespStatus: http.StatusBadRequest,
shouldPass: false,
},
// Test case - 2.
@@ -749,7 +749,7 @@ func testAPIDeleteMultipleObjectsHandler(obj ObjectLayer, instanceType, bucketNa
for i := range requestList[0].Objects {
var vid string
if isDirObject(requestList[0].Objects[i].ObjectName) {
vid = nullVersionID
vid = ""
}
deletedObjects[i] = DeletedObject{
ObjectName: requestList[0].Objects[i].ObjectName,
@@ -766,7 +766,7 @@ func testAPIDeleteMultipleObjectsHandler(obj ObjectLayer, instanceType, bucketNa
for i := range requestList[1].Objects {
var vid string
if isDirObject(requestList[0].Objects[i].ObjectName) {
vid = nullVersionID
vid = ""
}
deletedObjects[i] = DeletedObject{
ObjectName: requestList[1].Objects[i].ObjectName,


@@ -0,0 +1,88 @@
// Copyright (c) 2023 MinIO, Inc.
//
// This file is part of MinIO Object Storage stack
//
// This program is free software: you can redistribute it and/or modify
// it under the terms of the GNU Affero General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
//
// This program is distributed in the hope that it will be useful
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU Affero General Public License for more details.
//
// You should have received a copy of the GNU Affero General Public License
// along with this program. If not, see <http://www.gnu.org/licenses/>.
package cmd
import "github.com/minio/minio/internal/bucket/lifecycle"
//go:generate stringer -type lcEventSrc -trimprefix lcEventSrc_ $GOFILE
type lcEventSrc uint8
//revive:disable:var-naming Underscores are used here to indicate where the common prefix ends and the enumeration name begins
const (
lcEventSrc_None lcEventSrc = iota
lcEventSrc_Scanner
lcEventSrc_Decom
lcEventSrc_Rebal
lcEventSrc_s3HeadObject
lcEventSrc_s3GetObject
lcEventSrc_s3ListObjects
lcEventSrc_s3PutObject
lcEventSrc_s3CopyObject
lcEventSrc_s3CompleteMultipartUpload
)
//revive:enable:var-naming
type lcAuditEvent struct {
lifecycle.Event
source lcEventSrc
}
func (lae lcAuditEvent) Tags() map[string]interface{} {
event := lae.Event
src := lae.source
const (
ilmSrc = "ilm-src"
ilmAction = "ilm-action"
ilmDue = "ilm-due"
ilmRuleID = "ilm-rule-id"
ilmTier = "ilm-tier"
ilmNewerNoncurrentVersions = "ilm-newer-noncurrent-versions"
ilmNoncurrentDays = "ilm-noncurrent-days"
)
tags := make(map[string]interface{}, 5)
if src > lcEventSrc_None {
tags[ilmSrc] = src.String()
}
tags[ilmAction] = event.Action.String()
tags[ilmRuleID] = event.RuleID
if !event.Due.IsZero() {
tags[ilmDue] = event.Due
}
// rule with Transition/NoncurrentVersionTransition in effect
if event.StorageClass != "" {
tags[ilmTier] = event.StorageClass
}
// rule with NewerNoncurrentVersions in effect
if event.NewerNoncurrentVersions > 0 {
tags[ilmNewerNoncurrentVersions] = event.NewerNoncurrentVersions
}
if event.NoncurrentDays > 0 {
tags[ilmNoncurrentDays] = event.NoncurrentDays
}
return tags
}
func newLifecycleAuditEvent(src lcEventSrc, event lifecycle.Event) lcAuditEvent {
return lcAuditEvent{
Event: event,
source: src,
}
}
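This new file centralizes the audit tags previously built by `auditLifecycleTags` (removed further down in this diff) and adds the `ilm-src` tag identifying what triggered the ILM action. A hypothetical call site, to show the intended shape:
// Sketch: tagging a scanner-triggered ILM action for the audit log. With
// `stringer -trimprefix lcEventSrc_`, the source renders as "Scanner".
func exampleAuditTags(event lifecycle.Event) map[string]interface{} {
return newLifecycleAuditEvent(lcEventSrc_Scanner, event).Tags()
// => {"ilm-src": "Scanner", "ilm-action": "...", "ilm-rule-id": "...", ...}
}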


@@ -21,12 +21,13 @@ import (
"encoding/xml"
"io"
"net/http"
"strconv"
"github.com/minio/minio/internal/bucket/lifecycle"
xhttp "github.com/minio/minio/internal/http"
"github.com/minio/minio/internal/logger"
"github.com/minio/mux"
"github.com/minio/pkg/bucket/policy"
"github.com/minio/pkg/v2/policy"
)
const (
@@ -115,6 +116,16 @@ func (api objectAPIHandlers) GetBucketLifecycleHandler(w http.ResponseWriter, r
vars := mux.Vars(r)
bucket := vars["bucket"]
var withUpdatedAt bool
if updatedAtStr := r.Form.Get("withUpdatedAt"); updatedAtStr != "" {
var err error
withUpdatedAt, err = strconv.ParseBool(updatedAtStr)
if err != nil {
writeErrorResponse(ctx, w, errorCodes.ToAPIErr(ErrInvalidLifecycleQueryParameter), r.URL)
return
}
}
if s3Error := checkRequestAuthType(ctx, r, policy.GetBucketLifecycleAction, bucket, ""); s3Error != ErrNone {
writeErrorResponse(ctx, w, errorCodes.ToAPIErr(s3Error), r.URL)
return
@@ -126,7 +137,7 @@ func (api objectAPIHandlers) GetBucketLifecycleHandler(w http.ResponseWriter, r
return
}
config, err := globalBucketMetadataSys.GetLifecycleConfig(bucket)
config, updatedAt, err := globalBucketMetadataSys.GetLifecycleConfig(bucket)
if err != nil {
writeErrorResponse(ctx, w, toAPIError(ctx, err), r.URL)
return
@@ -138,6 +149,9 @@ func (api objectAPIHandlers) GetBucketLifecycleHandler(w http.ResponseWriter, r
return
}
if withUpdatedAt {
w.Header().Set(xhttp.MinIOLifecycleCfgUpdatedAt, updatedAt.Format(iso8601Format))
}
// Write lifecycle configuration to client.
writeSuccessResponseXML(w, configData)
}


@@ -179,7 +179,7 @@ func testBucketLifecycleHandlers(obj ObjectLayer, instanceType, bucketName strin
lifecycleResponse: []byte(``),
errorResponse: APIErrorResponse{
Resource: SlashSeparator + bucketName + SlashSeparator,
Code: "InvalidRequest",
Code: "InvalidArgument",
Message: "Filter must have exactly one of Prefix, Tag, or And specified",
},
@@ -196,7 +196,7 @@ func testBucketLifecycleHandlers(obj ObjectLayer, instanceType, bucketName strin
lifecycleResponse: []byte(``),
errorResponse: APIErrorResponse{
Resource: SlashSeparator + bucketName + SlashSeparator,
Code: "InvalidRequest",
Code: "InvalidArgument",
Message: "Date must be provided in ISO 8601 format",
},


@@ -32,7 +32,7 @@ import (
"time"
"github.com/google/uuid"
"github.com/minio/madmin-go/v2"
"github.com/minio/madmin-go/v3"
"github.com/minio/minio-go/v7/pkg/tags"
"github.com/minio/minio/internal/amztime"
sse "github.com/minio/minio/internal/bucket/encryption"
@@ -41,8 +41,8 @@ import (
xhttp "github.com/minio/minio/internal/http"
"github.com/minio/minio/internal/logger"
"github.com/minio/minio/internal/s3select"
"github.com/minio/pkg/env"
"github.com/minio/pkg/workers"
"github.com/minio/pkg/v2/env"
"github.com/minio/pkg/v2/workers"
)
const (
@@ -63,7 +63,8 @@ type LifecycleSys struct{}
// Get - gets lifecycle config associated to a given bucket name.
func (sys *LifecycleSys) Get(bucketName string) (lc *lifecycle.Lifecycle, err error) {
return globalBucketMetadataSys.GetLifecycleConfig(bucketName)
lc, _, err = globalBucketMetadataSys.GetLifecycleConfig(bucketName)
return lc, err
}
// NewLifecycleSys - creates new lifecycle system.
@@ -98,6 +99,7 @@ func (sys *LifecycleSys) trace(oi ObjectInfo) func(event string) {
type expiryTask struct {
objInfo ObjectInfo
event lifecycle.Event
src lcEventSrc
}
type expiryState struct {
@@ -120,11 +122,11 @@ func (es *expiryState) close() {
}
// enqueueByDays enqueues object versions expired by days for expiry.
func (es *expiryState) enqueueByDays(oi ObjectInfo, event lifecycle.Event) {
func (es *expiryState) enqueueByDays(oi ObjectInfo, event lifecycle.Event, src lcEventSrc) {
select {
case <-GlobalContext.Done():
es.close()
case es.byDaysCh <- expiryTask{objInfo: oi, event: event}:
case es.byDaysCh <- expiryTask{objInfo: oi, event: event, src: src}:
default:
}
}
@@ -172,9 +174,9 @@ func initBackgroundExpiry(ctx context.Context, objectAPI ObjectLayer) {
go func(t expiryTask) {
defer ewk.Give()
if t.objInfo.TransitionedObject.Status != "" {
applyExpiryOnTransitionedObject(ctx, objectAPI, t.objInfo, t.event)
applyExpiryOnTransitionedObject(ctx, objectAPI, t.objInfo, t.event, t.src)
} else {
applyExpiryOnNonTransitionedObjects(ctx, objectAPI, t.objInfo, t.event)
applyExpiryOnNonTransitionedObjects(ctx, objectAPI, t.objInfo, t.event, t.src)
}
}(t)
}
@@ -202,6 +204,7 @@ type newerNoncurrentTask struct {
type transitionTask struct {
objInfo ObjectInfo
src lcEventSrc
event lifecycle.Event
}
@@ -220,10 +223,10 @@ type transitionState struct {
lastDayStats map[string]*lastDayTierStats
}
func (t *transitionState) queueTransitionTask(oi ObjectInfo, event lifecycle.Event) {
func (t *transitionState) queueTransitionTask(oi ObjectInfo, event lifecycle.Event, src lcEventSrc) {
select {
case <-t.ctx.Done():
case t.transitionCh <- transitionTask{objInfo: oi, event: event}:
case t.transitionCh <- transitionTask{objInfo: oi, event: event, src: src}:
default:
}
}
@@ -276,9 +279,11 @@ func (t *transitionState) worker(objectAPI ObjectLayer) {
return
}
atomic.AddInt32(&t.activeTasks, 1)
if err := transitionObject(t.ctx, objectAPI, task.objInfo, task.event); err != nil {
logger.LogIf(t.ctx, fmt.Errorf("Transition to %s failed for %s/%s version:%s with %w",
task.event.StorageClass, task.objInfo.Bucket, task.objInfo.Name, task.objInfo.VersionID, err))
if err := transitionObject(t.ctx, objectAPI, task.objInfo, newLifecycleAuditEvent(task.src, task.event)); err != nil {
if !isErrVersionNotFound(err) && !isErrObjectNotFound(err) {
logger.LogIf(t.ctx, fmt.Errorf("Transition to %s failed for %s/%s version:%s with %w",
task.event.StorageClass, task.objInfo.Bucket, task.objInfo.Name, task.objInfo.VersionID, err))
}
} else {
ts := tierStats{
TotalSize: uint64(task.objInfo.Size),
@@ -358,11 +363,11 @@ func validateTransitionTier(lc *lifecycle.Lifecycle) error {
// enqueueTransitionImmediate enqueues obj for transition if eligible.
// This is to be called after a successful upload of an object (version).
func enqueueTransitionImmediate(obj ObjectInfo) {
func enqueueTransitionImmediate(obj ObjectInfo, src lcEventSrc) {
if lc, err := globalLifecycleSys.Get(obj.Bucket); err == nil {
switch event := lc.Eval(obj.ToLifecycleOpts()); event.Action {
case lifecycle.TransitionAction, lifecycle.TransitionVersionAction:
globalTransitionState.queueTransitionTask(obj, event)
globalTransitionState.queueTransitionTask(obj, event, src)
}
}
}
@@ -372,13 +377,13 @@ func enqueueTransitionImmediate(obj ObjectInfo) {
//
// 1. when a restored (via PostRestoreObject API) object expires.
// 2. when a transitioned object expires (based on an ILM rule).
func expireTransitionedObject(ctx context.Context, objectAPI ObjectLayer, oi *ObjectInfo, lcOpts lifecycle.ObjectOpts, lcEvent lifecycle.Event) error {
func expireTransitionedObject(ctx context.Context, objectAPI ObjectLayer, oi *ObjectInfo, lcOpts lifecycle.ObjectOpts, lcEvent lifecycle.Event, src lcEventSrc) error {
traceFn := globalLifecycleSys.trace(*oi)
var opts ObjectOptions
opts.Versioned = globalBucketVersioningSys.PrefixEnabled(oi.Bucket, oi.Name)
opts.VersionID = lcOpts.VersionID
opts.Expiration = ExpirationOptions{Expire: true}
tags := auditLifecycleTags(lcEvent)
tags := newLifecycleAuditEvent(src, lcEvent).Tags()
if lcEvent.Action.DeleteRestored() {
// delete locally restored copy of object or object version
// from the source, while leaving metadata behind. The data on
@@ -400,13 +405,11 @@ func expireTransitionedObject(ctx context.Context, objectAPI ObjectLayer, oi *Ob
TierName: oi.TransitionedObject.Tier,
}
if err := globalTierJournal.AddEntry(entry); err != nil {
logger.LogIf(ctx, err)
return err
}
// Delete metadata on source, now that data in remote tier has been
// marked for deletion.
if _, err := objectAPI.DeleteObject(ctx, oi.Bucket, oi.Name, opts); err != nil {
logger.LogIf(ctx, err)
return err
}
@@ -449,18 +452,18 @@ func genTransitionObjName(bucket string) (string, error) {
// storage specified by the transition ARN, the metadata is left behind on source cluster and original content
// is moved to the transition tier. Note that in the case of encrypted objects, entire encrypted stream is moved
// to the transition tier without decrypting or re-encrypting.
func transitionObject(ctx context.Context, objectAPI ObjectLayer, oi ObjectInfo, lcEvent lifecycle.Event) error {
func transitionObject(ctx context.Context, objectAPI ObjectLayer, oi ObjectInfo, lae lcAuditEvent) error {
opts := ObjectOptions{
Transition: TransitionOptions{
Status: lifecycle.TransitionPending,
Tier: lcEvent.StorageClass,
Tier: lae.StorageClass,
ETag: oi.ETag,
},
LifecycleEvent: lcEvent,
VersionID: oi.VersionID,
Versioned: globalBucketVersioningSys.PrefixEnabled(oi.Bucket, oi.Name),
VersionSuspended: globalBucketVersioningSys.PrefixSuspended(oi.Bucket, oi.Name),
MTime: oi.ModTime,
LifecycleAuditEvent: lae,
VersionID: oi.VersionID,
Versioned: globalBucketVersioningSys.PrefixEnabled(oi.Bucket, oi.Name),
VersionSuspended: globalBucketVersioningSys.PrefixSuspended(oi.Bucket, oi.Name),
MTime: oi.ModTime,
}
return objectAPI.TransitionObject(ctx, oi.Bucket, oi.Name, opts)
}
@@ -688,7 +691,7 @@ func putRestoreOpts(bucket, object string, rreq *RestoreObjectRequest, objInfo O
if rreq.Type == SelectRestoreRequest {
for _, v := range rreq.OutputLocation.S3.UserMetadata {
if !strings.HasPrefix(strings.ToLower(v.Name), "x-amz-meta") {
if !stringsHasPrefixFold(v.Name, "x-amz-meta") {
meta["x-amz-meta-"+v.Name] = v.Value
continue
}
@@ -868,35 +871,3 @@ func (oi ObjectInfo) ToLifecycleOpts() lifecycle.ObjectOpts {
TransitionStatus: oi.TransitionedObject.Status,
}
}
func auditLifecycleTags(event lifecycle.Event) map[string]interface{} {
const (
ilmAction = "ilm-action"
ilmDue = "ilm-due"
ilmRuleID = "ilm-rule-id"
ilmTier = "ilm-tier"
ilmNewerNoncurrentVersions = "ilm-newer-noncurrent-versions"
ilmNoncurrentDays = "ilm-noncurrent-days"
)
tags := make(map[string]interface{}, 4)
tags[ilmAction] = event.Action.String()
tags[ilmRuleID] = event.RuleID
if !event.Due.IsZero() {
tags[ilmDue] = event.Due
}
// rule with Transition/NoncurrentVersionTransition in effect
if event.StorageClass != "" {
tags[ilmTier] = event.StorageClass
}
// rule with NewernoncurrentVersions in effect
if event.NewerNoncurrentVersions > 0 {
tags[ilmNewerNoncurrentVersions] = event.NewerNoncurrentVersions
}
if event.NoncurrentDays > 0 {
tags[ilmNoncurrentDays] = event.NoncurrentDays
}
return tags
}


@@ -26,7 +26,7 @@ import (
"github.com/minio/minio/internal/logger"
"github.com/minio/mux"
"github.com/minio/pkg/bucket/policy"
"github.com/minio/pkg/v2/policy"
)
// Validate all the ListObjects query arguments, returns an APIErrorCode
@@ -190,7 +190,6 @@ func (api objectAPIHandlers) listObjectsV2Handler(ctx context.Context, w http.Re
}
// Validate the query params before beginning to serve the request.
// fetch-owner is not validated since it is a boolean
if s3Error := validateListObjectsArgs(prefix, token, delimiter, encodingType, maxKeys); s3Error != ErrNone {
writeErrorResponse(ctx, w, errorCodes.ToAPIErr(s3Error), r.URL)
return
@@ -232,7 +231,7 @@ func parseRequestToken(token string) (subToken string, nodeIndex int) {
if token == "" {
return token, -1
}
i := strings.Index(token, "@")
i := strings.Index(token, ":")
if i < 0 {
return token, -1
}
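The continuation-token separator changes from `@` to `:` here. The remainder of `parseRequestToken` falls outside the hunk, but the signature implies a `<subToken><sep><nodeIndex>` format; a sketch of the parsing under that assumption:
// Sketch only; assumes the suffix after ':' is an integer node index.
// assumes: import ("strconv", "strings")
func parseToken(token string) (subToken string, nodeIndex int) {
i := strings.Index(token, ":")
if i < 0 {
return token, -1
}
n, err := strconv.Atoi(token[i+1:])
if err != nil {
return token, -1
}
return token[:i], n
}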


@@ -21,11 +21,10 @@ import (
"context"
"errors"
"fmt"
"runtime"
"sync"
"time"
"github.com/minio/madmin-go/v2"
"github.com/minio/madmin-go/v3"
"github.com/minio/minio-go/v7/pkg/set"
"github.com/minio/minio-go/v7/pkg/tags"
bucketsse "github.com/minio/minio/internal/bucket/encryption"
@@ -36,8 +35,8 @@ import (
"github.com/minio/minio/internal/event"
"github.com/minio/minio/internal/kms"
"github.com/minio/minio/internal/logger"
"github.com/minio/pkg/bucket/policy"
"github.com/minio/pkg/sync/errgroup"
"github.com/minio/pkg/v2/policy"
"github.com/minio/pkg/v2/sync/errgroup"
)
// BucketMetadataSys captures all bucket metadata for a given cluster.
@@ -121,6 +120,7 @@ func (sys *BucketMetadataSys) updateAndParse(ctx context.Context, bucket string,
 		meta.NotificationConfigXML = configData
 	case bucketLifecycleConfig:
 		meta.LifecycleConfigXML = configData
+		meta.LifecycleConfigUpdatedAt = updatedAt
 	case bucketSSEConfig:
 		meta.EncryptionConfigXML = configData
 		meta.EncryptionConfigUpdatedAt = updatedAt
@@ -151,14 +151,27 @@ func (sys *BucketMetadataSys) updateAndParse(ctx context.Context, bucket string,
 		return updatedAt, fmt.Errorf("Unknown bucket %s metadata update requested %s", bucket, configFile)
 	}

-	if err := meta.Save(ctx, objAPI); err != nil {
-		return updatedAt, err
-	}
-
-	sys.Set(bucket, meta)
-	globalNotificationSys.LoadBucketMetadata(bgContext(ctx), bucket) // Do not use caller context here
-
-	return updatedAt, nil
+	err = sys.save(ctx, meta)
+	return updatedAt, err
+}
+
+func (sys *BucketMetadataSys) save(ctx context.Context, meta BucketMetadata) error {
+	objAPI := newObjectLayerFn()
+	if objAPI == nil {
+		return errServerNotInitialized
+	}
+
+	if isMinioMetaBucketName(meta.Name) {
+		return errInvalidArgument
+	}
+
+	if err := meta.Save(ctx, objAPI); err != nil {
+		return err
+	}
+	sys.Set(meta.Name, meta)
+	globalNotificationSys.LoadBucketMetadata(bgContext(ctx), meta.Name) // Do not use caller context here
+	return nil
 }

 // Delete delete the bucket metadata for the specified bucket.
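Folding persistence into a single `save` helper gives every update path the same ordering guarantee: the metadata is written to disk first, and only on success are the in-memory copy (`sys.Set`) and the peer notification refreshed. It also rejects writes aimed at internal `minio` meta buckets up front.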
@@ -246,18 +259,18 @@ func (sys *BucketMetadataSys) GetObjectLockConfig(bucket string) (*objectlock.Co

 // GetLifecycleConfig returns configured lifecycle config
 // The returned object may not be modified.
-func (sys *BucketMetadataSys) GetLifecycleConfig(bucket string) (*lifecycle.Lifecycle, error) {
+func (sys *BucketMetadataSys) GetLifecycleConfig(bucket string) (*lifecycle.Lifecycle, time.Time, error) {
 	meta, _, err := sys.GetConfig(GlobalContext, bucket)
 	if err != nil {
 		if errors.Is(err, errConfigNotFound) {
-			return nil, BucketLifecycleNotFound{Bucket: bucket}
+			return nil, time.Time{}, BucketLifecycleNotFound{Bucket: bucket}
 		}
-		return nil, err
+		return nil, time.Time{}, err
 	}
 	if meta.lifecycleConfig == nil {
-		return nil, BucketLifecycleNotFound{Bucket: bucket}
+		return nil, time.Time{}, BucketLifecycleNotFound{Bucket: bucket}
 	}
-	return meta.lifecycleConfig, nil
+	return meta.lifecycleConfig, meta.LifecycleConfigUpdatedAt, nil
 }

 // GetNotificationConfig returns configured notification config
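Callers now receive the config's last-update time alongside the config itself, so they can skip re-evaluation when nothing changed. A self-contained illustration of the new three-value shape — the types and caller below are stand-ins, not MinIO's real ones:

```go
package main

import (
	"fmt"
	"time"
)

// Lifecycle and metadataSys are simplified stand-ins for illustration.
type Lifecycle struct{ Rules int }

type metadataSys struct {
	lc        *Lifecycle
	updatedAt time.Time
}

// GetLifecycleConfig mirrors the new signature: config, last-update time, error.
func (s *metadataSys) GetLifecycleConfig(bucket string) (*Lifecycle, time.Time, error) {
	if s.lc == nil {
		return nil, time.Time{}, fmt.Errorf("lifecycle not found for %s", bucket)
	}
	return s.lc, s.updatedAt, nil
}

func main() {
	sys := &metadataSys{lc: &Lifecycle{Rules: 2}, updatedAt: time.Now()}
	lastEvaluated := time.Now().Add(-time.Hour)

	lc, updatedAt, err := sys.GetLifecycleConfig("mybucket")
	if err != nil {
		fmt.Println(err)
		return
	}
	// Only re-evaluate when the config changed since the previous pass.
	if updatedAt.After(lastEvaluated) {
		fmt.Printf("re-evaluating %d rules (config updated %s)\n",
			lc.Rules, updatedAt.Format(time.RFC3339))
	}
}
```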
@@ -297,7 +310,7 @@ func (sys *BucketMetadataSys) CreatedAt(bucket string) (time.Time, error) {

 // GetPolicyConfig returns configured bucket policy
 // The returned object may not be modified.
-func (sys *BucketMetadataSys) GetPolicyConfig(bucket string) (*policy.Policy, time.Time, error) {
+func (sys *BucketMetadataSys) GetPolicyConfig(bucket string) (*policy.BucketPolicy, time.Time, error) {
 	meta, _, err := sys.GetConfig(GlobalContext, bucket)
 	if err != nil {
 		if errors.Is(err, errConfigNotFound) {
@@ -419,8 +432,8 @@ func (sys *BucketMetadataSys) Init(ctx context.Context, buckets []BucketInfo, ob
 	sys.objAPI = objAPI

-	// Load bucket metadata sys in background
-	go sys.init(ctx, buckets)
+	// Load bucket metadata sys.
+	sys.init(ctx, buckets)
 	return nil
 }
@@ -488,6 +501,8 @@ func (sys *BucketMetadataSys) concurrentLoad(ctx context.Context, buckets []Buck
 func (sys *BucketMetadataSys) refreshBucketsMetadataLoop(ctx context.Context) {
 	const bucketMetadataRefresh = 15 * time.Minute

+	sleeper := newDynamicSleeper(2, 150*time.Millisecond, false)
+
 	t := time.NewTimer(bucketMetadataRefresh)
 	defer t.Stop()
 	for {
@@ -511,13 +526,16 @@ func (sys *BucketMetadataSys) refreshBucketsMetadataLoop(ctx context.Context) {
 			sys.RemoveStaleBuckets(diskBuckets)

 			for _, bucket := range buckets {
+				wait := sleeper.Timer(ctx)
 				err := sys.loadBucketMetadata(ctx, bucket)
 				if err != nil {
 					logger.LogIf(ctx, err)
+					wait() // wait to proceed to next entry.
 					continue
 				}
 				// Check if there is a spare procs, wait 100ms instead
 				waitForLowIO(runtime.GOMAXPROCS(0), 100*time.Millisecond, currentHTTPIO)
+				wait() // wait to proceed to next entry.
 			}

 			t.Reset(bucketMetadataRefresh)
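The refresh loop now paces itself with a dynamic sleeper: each bucket's metadata load is timed, and the loop pauses in proportion to how long the load took, so slow disks automatically get longer breathing room. A minimal sketch of that proportional-sleep idea, with the factor/cap semantics assumed from the constructor arguments `(2, 150*time.Millisecond)` — the real `newDynamicSleeper` is not part of this diff:

```go
package main

import (
	"fmt"
	"time"
)

// timerSleeper pauses callers in proportion to their own measured work:
// sleep factor x elapsed, capped at maxSleep. Illustrative sketch only.
type timerSleeper struct {
	factor   float64
	maxSleep time.Duration
}

// Timer records a start time and returns a func that, when invoked,
// sleeps proportionally to the time elapsed since Timer was called.
func (s timerSleeper) Timer() func() {
	start := time.Now()
	return func() {
		d := time.Duration(float64(time.Since(start)) * s.factor)
		if d > s.maxSleep {
			d = s.maxSleep
		}
		time.Sleep(d)
	}
}

func main() {
	sleeper := timerSleeper{factor: 2, maxSleep: 150 * time.Millisecond}
	for i := 0; i < 3; i++ {
		wait := sleeper.Timer()
		time.Sleep(10 * time.Millisecond) // simulate loading one bucket's metadata
		wait()                            // pause ~2x the work time before the next bucket
	}
	fmt.Println("done")
}
```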


@@ -29,7 +29,7 @@ import (
 	"path"
 	"time"

-	"github.com/minio/madmin-go/v2"
+	"github.com/minio/madmin-go/v3"
 	"github.com/minio/minio-go/v7/pkg/tags"
 	bucketsse "github.com/minio/minio/internal/bucket/encryption"
 	"github.com/minio/minio/internal/bucket/lifecycle"
@@ -41,7 +41,7 @@ import (
 	"github.com/minio/minio/internal/fips"
 	"github.com/minio/minio/internal/kms"
 	"github.com/minio/minio/internal/logger"
-	"github.com/minio/pkg/bucket/policy"
+	"github.com/minio/pkg/v2/policy"
 	"github.com/minio/sio"
 )
@@ -88,9 +88,10 @@ type BucketMetadata struct {
 	QuotaConfigUpdatedAt       time.Time
 	ReplicationConfigUpdatedAt time.Time
 	VersioningConfigUpdatedAt  time.Time
+	LifecycleConfigUpdatedAt   time.Time

 	// Unexported fields. Must be updated atomically.
-	policyConfig       *policy.Policy
+	policyConfig       *policy.BucketPolicy
 	notificationConfig *event.Config
 	lifecycleConfig    *lifecycle.Lifecycle
 	objectLockConfig   *objectlock.Config
@@ -119,6 +120,16 @@ func newBucketMetadata(name string) BucketMetadata {
 	}
 }

+// Versioning returns true if versioning is enabled
+func (b BucketMetadata) Versioning() bool {
+	return b.LockEnabled || (b.versioningConfig != nil && b.versioningConfig.Enabled()) || (b.objectLockConfig != nil && b.objectLockConfig.Enabled())
+}
+
+// ObjectLocking returns true if object locking is enabled
+func (b BucketMetadata) ObjectLocking() bool {
+	return b.LockEnabled || (b.objectLockConfig != nil && b.objectLockConfig.Enabled())
+}
+
 // SetCreatedAt preserves the CreatedAt time for bucket across sites in site replication. It defaults to
 // creation time of bucket on this cluster in all other cases.
 func (b *BucketMetadata) SetCreatedAt(createdAt time.Time) {
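The two new helpers encode the S3 rule that object locking implies versioning: a lock-enabled bucket reports itself as versioned even when it carries no explicit versioning config. A simplified, self-contained illustration (the `cfg` type below is a stand-in, not MinIO's):

```go
package main

import "fmt"

type cfg struct{ enabled bool }

// Enabled is nil-safe so callers can probe an absent config directly.
func (c *cfg) Enabled() bool { return c != nil && c.enabled }

type bucketMeta struct {
	LockEnabled      bool
	versioningConfig *cfg
	objectLockConfig *cfg
}

// Versioning mirrors the helper's logic: locking forces versioning on.
func (b bucketMeta) Versioning() bool {
	return b.LockEnabled || b.versioningConfig.Enabled() || b.objectLockConfig.Enabled()
}

func main() {
	// Bucket created with object locking but no explicit versioning config:
	b := bucketMeta{LockEnabled: true}
	fmt.Println(b.Versioning()) // true: locking implies versioning
}
```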
@@ -132,39 +143,38 @@ func (b *BucketMetadata) SetCreatedAt(createdAt time.Time) {

 // Load - loads the metadata of bucket by name from ObjectLayer api.
 // If an error is returned the returned metadata will be default initialized.
-func (b *BucketMetadata) Load(ctx context.Context, api ObjectLayer, name string) error {
+func readBucketMetadata(ctx context.Context, api ObjectLayer, name string) (BucketMetadata, error) {
 	if name == "" {
 		logger.LogIf(ctx, errors.New("bucket name cannot be empty"))
-		return errInvalidArgument
+		return BucketMetadata{}, errInvalidArgument
 	}
+	b := newBucketMetadata(name)
 	configFile := path.Join(bucketMetaPrefix, name, bucketMetadataFile)
 	data, err := readConfig(ctx, api, configFile)
 	if err != nil {
-		return err
+		return b, err
 	}
 	if len(data) <= 4 {
-		return fmt.Errorf("loadBucketMetadata: no data")
+		return b, fmt.Errorf("loadBucketMetadata: no data")
 	}
 	// Read header
 	switch binary.LittleEndian.Uint16(data[0:2]) {
 	case bucketMetadataFormat:
 	default:
-		return fmt.Errorf("loadBucketMetadata: unknown format: %d", binary.LittleEndian.Uint16(data[0:2]))
+		return b, fmt.Errorf("loadBucketMetadata: unknown format: %d", binary.LittleEndian.Uint16(data[0:2]))
 	}
 	switch binary.LittleEndian.Uint16(data[2:4]) {
 	case bucketMetadataVersion:
 	default:
-		return fmt.Errorf("loadBucketMetadata: unknown version: %d", binary.LittleEndian.Uint16(data[2:4]))
+		return b, fmt.Errorf("loadBucketMetadata: unknown version: %d", binary.LittleEndian.Uint16(data[2:4]))
 	}

 	// OK, parse data.
 	_, err = b.UnmarshalMsg(data[4:])
-	b.Name = name // in-case parsing failed for some reason, make sure bucket name is not empty.
-	return err
+	return b, err
 }

 func loadBucketMetadataParse(ctx context.Context, objectAPI ObjectLayer, bucket string, parse bool) (BucketMetadata, error) {
-	b := newBucketMetadata(bucket)
-	err := b.Load(ctx, objectAPI, b.Name)
+	b, err := readBucketMetadata(ctx, objectAPI, bucket)
+	b.Name = bucket // in-case parsing failed for some reason, make sure bucket name is not empty.
 	if err != nil && !errors.Is(err, errConfigNotFound) {
 		return b, err
 	}
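The on-disk blob this function parses starts with a four-byte header — two little-endian uint16 values for format and version — followed by the msgp payload. A standalone sketch of reading such a header (the constant values below are assumed for illustration; the real ones live elsewhere in the MinIO source):

```go
package main

import (
	"encoding/binary"
	"fmt"
)

// Assumed values for illustration only.
const (
	bucketMetadataFormat  = 1
	bucketMetadataVersion = 1
)

// parseHeader validates the 4-byte header and leaves data[4:] as the
// msgp-encoded BucketMetadata payload.
func parseHeader(data []byte) error {
	if len(data) <= 4 {
		return fmt.Errorf("no data")
	}
	if f := binary.LittleEndian.Uint16(data[0:2]); f != bucketMetadataFormat {
		return fmt.Errorf("unknown format: %d", f)
	}
	if v := binary.LittleEndian.Uint16(data[2:4]); v != bucketMetadataVersion {
		return fmt.Errorf("unknown version: %d", v)
	}
	return nil
}

func main() {
	blob := []byte{0x01, 0x00, 0x01, 0x00, 0xde} // header + start of payload
	fmt.Println(parseHeader(blob))               // <nil>
}
```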
@@ -207,7 +217,7 @@ func loadBucketMetadata(ctx context.Context, objectAPI ObjectLayer, bucket strin
 // The first error encountered is returned.
 func (b *BucketMetadata) parseAllConfigs(ctx context.Context, objectAPI ObjectLayer) (err error) {
 	if len(b.PolicyConfigJSON) != 0 {
-		b.policyConfig, err = policy.ParseConfig(bytes.NewReader(b.PolicyConfigJSON), b.Name)
+		b.policyConfig, err = policy.ParseBucketPolicyConfig(bytes.NewReader(b.PolicyConfigJSON), b.Name)
 		if err != nil {
 			return err
 		}
@@ -417,6 +427,10 @@ func (b *BucketMetadata) defaultTimestamps() {
 	if b.VersioningConfigUpdatedAt.IsZero() {
 		b.VersioningConfigUpdatedAt = b.Created
 	}
+
+	if b.LifecycleConfigUpdatedAt.IsZero() {
+		b.LifecycleConfigUpdatedAt = b.Created
+	}
 }

 // Save config to supplied ObjectLayer api.


@@ -150,6 +150,12 @@ func (z *BucketMetadata) DecodeMsg(dc *msgp.Reader) (err error) {
 				err = msgp.WrapError(err, "VersioningConfigUpdatedAt")
 				return
 			}
+		case "LifecycleConfigUpdatedAt":
+			z.LifecycleConfigUpdatedAt, err = dc.ReadTime()
+			if err != nil {
+				err = msgp.WrapError(err, "LifecycleConfigUpdatedAt")
+				return
+			}
 		default:
 			err = dc.Skip()
 			if err != nil {
@@ -163,9 +169,9 @@ func (z *BucketMetadata) DecodeMsg(dc *msgp.Reader) (err error) {

 // EncodeMsg implements msgp.Encodable
 func (z *BucketMetadata) EncodeMsg(en *msgp.Writer) (err error) {
-	// map header, size 21
+	// map header, size 22
 	// write "Name"
-	err = en.Append(0xde, 0x0, 0x15, 0xa4, 0x4e, 0x61, 0x6d, 0x65)
+	err = en.Append(0xde, 0x0, 0x16, 0xa4, 0x4e, 0x61, 0x6d, 0x65)
 	if err != nil {
 		return
 	}
@@ -374,15 +380,25 @@ func (z *BucketMetadata) EncodeMsg(en *msgp.Writer) (err error) {
 		err = msgp.WrapError(err, "VersioningConfigUpdatedAt")
 		return
 	}
+	// write "LifecycleConfigUpdatedAt"
+	err = en.Append(0xb8, 0x4c, 0x69, 0x66, 0x65, 0x63, 0x79, 0x63, 0x6c, 0x65, 0x43, 0x6f, 0x6e, 0x66, 0x69, 0x67, 0x55, 0x70, 0x64, 0x61, 0x74, 0x65, 0x64, 0x41, 0x74)
+	if err != nil {
+		return
+	}
+	err = en.WriteTime(z.LifecycleConfigUpdatedAt)
+	if err != nil {
+		err = msgp.WrapError(err, "LifecycleConfigUpdatedAt")
+		return
+	}
 	return
 }

 // MarshalMsg implements msgp.Marshaler
 func (z *BucketMetadata) MarshalMsg(b []byte) (o []byte, err error) {
 	o = msgp.Require(b, z.Msgsize())
-	// map header, size 21
+	// map header, size 22
 	// string "Name"
-	o = append(o, 0xde, 0x0, 0x15, 0xa4, 0x4e, 0x61, 0x6d, 0x65)
+	o = append(o, 0xde, 0x0, 0x16, 0xa4, 0x4e, 0x61, 0x6d, 0x65)
 	o = msgp.AppendString(o, z.Name)
 	// string "Created"
 	o = append(o, 0xa7, 0x43, 0x72, 0x65, 0x61, 0x74, 0x65, 0x64)
@@ -444,6 +460,9 @@ func (z *BucketMetadata) MarshalMsg(b []byte) (o []byte, err error) {
 	// string "VersioningConfigUpdatedAt"
 	o = append(o, 0xb9, 0x56, 0x65, 0x72, 0x73, 0x69, 0x6f, 0x6e, 0x69, 0x6e, 0x67, 0x43, 0x6f, 0x6e, 0x66, 0x69, 0x67, 0x55, 0x70, 0x64, 0x61, 0x74, 0x65, 0x64, 0x41, 0x74)
 	o = msgp.AppendTime(o, z.VersioningConfigUpdatedAt)
+	// string "LifecycleConfigUpdatedAt"
+	o = append(o, 0xb8, 0x4c, 0x69, 0x66, 0x65, 0x63, 0x79, 0x63, 0x6c, 0x65, 0x43, 0x6f, 0x6e, 0x66, 0x69, 0x67, 0x55, 0x70, 0x64, 0x61, 0x74, 0x65, 0x64, 0x41, 0x74)
+	o = msgp.AppendTime(o, z.LifecycleConfigUpdatedAt)
 	return
 }
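These magic bytes are MessagePack framing, which the msgp code generator hard-codes: `0xde, 0x00, 0x16` is a map16 header declaring 22 key/value pairs (up from `0x15` = 21 before the new field), and `0xb8` is a fixstr header (`0xa0 | length`) for the 24-byte key "LifecycleConfigUpdatedAt". A quick check of that arithmetic:

```go
package main

import "fmt"

func main() {
	key := "LifecycleConfigUpdatedAt"
	// fixstr header: 0xa0 | len, valid for strings up to 31 bytes.
	fmt.Printf("fixstr header: %#x (len %d)\n", 0xa0|len(key), len(key)) // 0xb8 (len 24)
	// map16 header: 0xde followed by a big-endian uint16 entry count.
	fmt.Printf("map16 count bytes: %#x %#x -> %d entries\n", 0x00, 0x16, 0x16) // 22 entries
}
```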
@@ -591,6 +610,12 @@ func (z *BucketMetadata) UnmarshalMsg(bts []byte) (o []byte, err error) {
 				err = msgp.WrapError(err, "VersioningConfigUpdatedAt")
 				return
 			}
+		case "LifecycleConfigUpdatedAt":
+			z.LifecycleConfigUpdatedAt, bts, err = msgp.ReadTimeBytes(bts)
+			if err != nil {
+				err = msgp.WrapError(err, "LifecycleConfigUpdatedAt")
+				return
+			}
 		default:
 			bts, err = msgp.Skip(bts)
 			if err != nil {
@@ -605,6 +630,6 @@ func (z *BucketMetadata) UnmarshalMsg(bts []byte) (o []byte, err error) {

 // Msgsize returns an upper bound estimate of the number of bytes occupied by the serialized message
 func (z *BucketMetadata) Msgsize() (s int) {
-	s = 3 + 5 + msgp.StringPrefixSize + len(z.Name) + 8 + msgp.TimeSize + 12 + msgp.BoolSize + 17 + msgp.BytesPrefixSize + len(z.PolicyConfigJSON) + 22 + msgp.BytesPrefixSize + len(z.NotificationConfigXML) + 19 + msgp.BytesPrefixSize + len(z.LifecycleConfigXML) + 20 + msgp.BytesPrefixSize + len(z.ObjectLockConfigXML) + 20 + msgp.BytesPrefixSize + len(z.VersioningConfigXML) + 20 + msgp.BytesPrefixSize + len(z.EncryptionConfigXML) + 17 + msgp.BytesPrefixSize + len(z.TaggingConfigXML) + 16 + msgp.BytesPrefixSize + len(z.QuotaConfigJSON) + 21 + msgp.BytesPrefixSize + len(z.ReplicationConfigXML) + 24 + msgp.BytesPrefixSize + len(z.BucketTargetsConfigJSON) + 28 + msgp.BytesPrefixSize + len(z.BucketTargetsConfigMetaJSON) + 22 + msgp.TimeSize + 26 + msgp.TimeSize + 26 + msgp.TimeSize + 23 + msgp.TimeSize + 21 + msgp.TimeSize + 27 + msgp.TimeSize + 26 + msgp.TimeSize
+	s = 3 + 5 + msgp.StringPrefixSize + len(z.Name) + 8 + msgp.TimeSize + 12 + msgp.BoolSize + 17 + msgp.BytesPrefixSize + len(z.PolicyConfigJSON) + 22 + msgp.BytesPrefixSize + len(z.NotificationConfigXML) + 19 + msgp.BytesPrefixSize + len(z.LifecycleConfigXML) + 20 + msgp.BytesPrefixSize + len(z.ObjectLockConfigXML) + 20 + msgp.BytesPrefixSize + len(z.VersioningConfigXML) + 20 + msgp.BytesPrefixSize + len(z.EncryptionConfigXML) + 17 + msgp.BytesPrefixSize + len(z.TaggingConfigXML) + 16 + msgp.BytesPrefixSize + len(z.QuotaConfigJSON) + 21 + msgp.BytesPrefixSize + len(z.ReplicationConfigXML) + 24 + msgp.BytesPrefixSize + len(z.BucketTargetsConfigJSON) + 28 + msgp.BytesPrefixSize + len(z.BucketTargetsConfigMetaJSON) + 22 + msgp.TimeSize + 26 + msgp.TimeSize + 26 + msgp.TimeSize + 23 + msgp.TimeSize + 21 + msgp.TimeSize + 27 + msgp.TimeSize + 26 + msgp.TimeSize + 25 + msgp.TimeSize
 	return
 }
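The added `25 + msgp.TimeSize` term is the new field's framing cost: one fixstr header byte plus the 24 bytes of the key "LifecycleConfigUpdatedAt", followed by the worst-case size of the encoded timestamp.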


@@ -26,7 +26,7 @@ import (
 	"github.com/minio/minio/internal/event"
 	"github.com/minio/minio/internal/logger"
 	"github.com/minio/mux"
-	"github.com/minio/pkg/bucket/policy"
+	"github.com/minio/pkg/v2/policy"
 )

 const (

Some files were not shown because too many files have changed in this diff.