Compare commits

614 Commits

Author SHA1 Message Date
Minio Trusted
d265fe7f9e update console v0.19.5 2022-08-10 21:37:28 -07:00
Krishnan Parthasarathi
91e6af4470 Add trace support for decommissioning (#15502)
* Add trace support for decommissioning
* Add support for tracing errors during decommission
2022-08-10 12:46:45 -07:00
Noah Gao
b940fe8fca chore: fix bad title syntax of helm chart README (#15513) 2022-08-10 11:16:22 -07:00
reyerdam
73fe2e95fe helm: Support adding objectlocking for buckets (#15505)
implemented object locking during bucket creation in helm chart
2022-08-10 08:13:51 -07:00
Shireesh Anjal
316c492842 Upgrade madmin-go to latest version (v1.4.15) (#15510) 2022-08-10 07:36:13 -07:00
Harshavardhana
74418b542a fix: incorrect context timeout during listPath() (#15509)
This PR cleans up the listing code for single drive
to ensure that we do not add an incorrect context
timeout while resuming the listing.

fixes #15508
2022-08-10 07:35:29 -07:00
Poorna
172e63dbb6 fix: site replication group updates to set status correctly (#15507)
Fixes: #15486
2022-08-09 15:17:43 -07:00
Poorna
21bf5b4db7 replication: heal proactively upon access (#15501)
Queue failed/pending replication for healing during listing and GET/HEAD
API calls. This includes healing of existing objects that were never
replicated or those in the middle of a resync operation.

This PR also fixes a bug in ListObjectVersions where lifecycle filtering
should be done.
2022-08-09 15:00:24 -07:00
Harshavardhana
a406bb0288 restrict number of disks used for scanning buckets upto GOMAXPROCS (#15492)
control scanner parallelism to avoid higher CPU
usage on nodes that have more drives but an old CPU.
2022-08-08 16:16:44 -07:00
Harshavardhana
1823ab6808 LDAP/OpenID must be initialized in IAM Init() (#15491)
This allows for LDAP/OpenID to be non-blocking,
allowing for unreachable Identity targets to be
initialized in IAM.
2022-08-08 16:16:27 -07:00
Harshavardhana
8eec49304d use logger.Info instead of logger.LogIf 2022-08-08 16:13:58 -07:00
Harshavardhana
ecdc2f2f5f fix: maxConcurrent '0' is an invalid value (#15500)
log and continue with defaults instead of
crashing the service.
2022-08-08 15:18:45 -07:00
Minio Trusted
6a6c772ff2 Update yaml files to latest version RELEASE.2022-08-08T18-34-09Z 2022-08-08 21:30:40 +00:00
Harshavardhana
e178c55bc3 remove non-working GetRawData() from FS mode (#15498) 2022-08-08 11:34:09 -07:00
Poorna
2c137c0d04 fix: handle invalid endpoint errors in site replication(#15499)
fixes #15497
2022-08-08 11:12:05 -07:00
Minio Trusted
1d35f2b58f update madmin-go to v1.4.13 2022-08-08 10:41:38 -07:00
Harshavardhana
638c57e466 revert changes in FS implementation for umask
fixes #15494
2022-08-08 09:48:24 -07:00
Harshavardhana
5e4213b3be fix: keep writing previous speedtest result (#15484)
when object speedtest is running keep writing
previous speedtest result back to client until
we have a new result - this avoids sending back
blank entries in between the speedtest when it
is running in 'autotune' mode.
2022-08-07 23:04:03 -07:00
Minio Trusted
102295f58a update helm v4.0.11 2022-08-06 22:41:47 -07:00
Jan Šafařík
a0d14f8ff7 helm: add a new line to the end of the credentials file (#15485) 2022-08-06 15:01:01 -07:00
Harshavardhana
e0b0a351c6 remove IAM old migration code (#15476)
```
commit 7bdaf9bc50
Author: Aditya Manthramurthy <donatello@users.noreply.github.com>
Date:   Wed Jul 24 17:34:23 2019 -0700

    Update on-disk storage format for users system (#7949)
```

Bonus: fixes a bug when etcd keys were being re-encrypted.
2022-08-05 17:53:23 -07:00
Minio Trusted
fcd4b3ba9b Update yaml files to latest version RELEASE.2022-08-05T23-27-09Z 2022-08-06 00:08:21 +00:00
Anis Elleuch
1d2ff46a89 Ensure lock/versioning permissions when creating a bucket (#15432)
Currently, the code doesn't check whether the user creating a bucket with
the locking feature has bucket locking and versioning permissions enabled;
add this check in accordance with the S3 spec.

https://docs.aws.amazon.com/AmazonS3/latest/API/API_CreateBucket.html

Object Lock - If ObjectLockEnabledForBucket is set to true in your CreateBucket request,
s3:PutBucketObjectLockConfiguration and s3:PutBucketVersioning permissions are required.
2022-08-05 16:27:09 -07:00
Harshavardhana
8f7c739328 feat: add SpeedTest ResponseTimes and TTFB (#15479)
Capture average, p50, p99, p999 response times
and ttfb values. These are needed for latency
measurements and overall understanding of our
speedtest results.
2022-08-05 09:40:03 -07:00
Poorna
1beea3daba fix: bucket metadata import to return a summary (#15462) 2022-08-05 01:52:50 -07:00
Minio Trusted
1ffd063939 update CREDITS for latest deps 2022-08-04 23:30:31 -07:00
Aditya Manthramurthy
3d94c38ec4 Add env variables to configuration APIs output (#15465)
Config export and config get APIs now include environment 
variables set on the server
2022-08-04 22:21:52 -07:00
Harshavardhana
f4af2d3cdc fix: decodeDirObject() in single drive DeleteObjects() call (#15477)
Thanks to @bh4t for reproducing this issue.
2022-08-04 18:57:43 -07:00
ebozduman
b57e7321e7 Replaces 'disk'=>'drive' visible to end user (#15464) 2022-08-04 16:10:08 -07:00
Anis Elleuch
e93867488b actively cancel listIAMConfigItems to avoid goroutine leak (#15471)
listConfigItems creates a goroutine but sometimes callers will
exit without properly asking listAllIAMConfigItems() to stop sending
results, hence a goroutine leak.

Create a new context and cancel it for each listAllIAMConfigItems
call.
2022-08-04 13:20:43 -07:00
Minio Trusted
c08790edd2 upgrade helm v4.0.10 2022-08-04 09:09:22 -07:00
Kourosh Tafreshi
a46baddbc4 Add OIDC to the HelmChart (#15469) 2022-08-04 09:07:51 -07:00
Harshavardhana
3bd9615d0e fix: log if there is readDir() failure with ListBuckets (#15461)
This is actionable and must be logged.

Bonus: also honor umask by using 0o666 for all Open() syscalls.
2022-08-04 07:23:05 -07:00
Minio Trusted
2871cb5775 upgrade helm v4.0.9 2022-08-02 23:10:44 -07:00
Harshavardhana
a6e0ec4e6f Add support converting non-inlined to inlined (#15444)
This is a feature to allow for inode compaction on
large clusters that use a lot of small files spread
across a large hierarchy.
2022-08-02 23:10:22 -07:00
Minio Trusted
e956369c4e Update yaml files to latest version RELEASE.2022-08-02T23-59-16Z 2022-08-03 01:45:42 +00:00
Minio Trusted
76f950c663 upgrade to minio-go/v7 v7.0.34 2022-08-02 16:59:16 -07:00
Andreas Auernhammer
d774a3309b kes: automatically reload KES client certificate (#15450)
This commit adds support for automatically reloading
the MinIO client certificate for authentication to KES.

The client certificate will now be reloaded:
 - when the private key / certificate file changes
 - when a SIGHUP signal is received
 - every 15 minutes

Fixes #14869

Signed-off-by: Andreas Auernhammer <hi@aead.dev>
2022-08-02 16:58:09 -07:00
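To illustrate the reload triggers described in the commit above, here is a minimal sketch using only the standard library; the `certCache` type and its methods are hypothetical stand-ins, and the file-change watcher the commit also mentions is omitted:
```
package main

import (
	"crypto/tls"
	"log"
	"os"
	"os/signal"
	"sync"
	"syscall"
	"time"
)

// certCache holds the most recently loaded client certificate and
// reloads it on demand. The real change also reloads when the
// certificate or private key file changes on disk (omitted here).
type certCache struct {
	mu                sync.RWMutex
	cert              *tls.Certificate
	certFile, keyFile string
}

func (c *certCache) reload() {
	cert, err := tls.LoadX509KeyPair(c.certFile, c.keyFile)
	if err != nil {
		log.Printf("kes: client certificate reload failed: %v", err)
		return // keep serving the previous certificate
	}
	c.mu.Lock()
	c.cert = &cert
	c.mu.Unlock()
}

func (c *certCache) watch() {
	hup := make(chan os.Signal, 1)
	signal.Notify(hup, syscall.SIGHUP) // trigger: SIGHUP received

	tick := time.NewTicker(15 * time.Minute) // trigger: every 15 minutes
	defer tick.Stop()

	for {
		select {
		case <-hup:
			c.reload()
		case <-tick.C:
			c.reload()
		}
	}
}
```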
Anis Elleuch
b3edb25377 bloom: healObject to mark a path dirty only for dangling objects (#15458)
The path is marked dirty automatically when healObject() is called, which is
wrong. HealObject() is called during self-healing and this will lead to
an increase in the false-positive rate of the bloom filter.

Also move NSUpdated() from renameData() and call it directly in
CompleteMultipart and PutObject, this is not a functional change but
it will make it less prone to errors in the future.
2022-08-02 16:57:39 -07:00
Harshavardhana
026b87e39b do not crash for unwrapErrs return nil (#15456)
fixes #15454
2022-08-02 15:10:11 -07:00
Harshavardhana
53a816b17a fix: readdir fallback on root of the drive (#15457)
fixes #15452
2022-08-02 14:57:36 -07:00
Harshavardhana
043aaa792d fix: instrument os.OpenFile differently for Reads and Writes (#15449)
allows us to trace latency for READs or WRITEs
2022-08-01 13:22:43 -07:00
dorman
aad9cb208a helm: modify user secret volumes mount path name (#15443) 2022-08-01 12:28:04 -07:00
Minio Trusted
edf081c6a2 update minio-go v7.0.33 2022-08-01 10:44:35 -07:00
Harshavardhana
fd349103e8 fix: allow P-384/P-512 constant time implementation (#15445)
since go1.18.x P-384/P-512 are now constant time
implementations, enable them.
2022-08-01 09:27:16 -07:00
Anis Elleuch
10b49eb4fb Fix resetting a config with a non default target name (#15448)
mc admin config reset <alias> notify_webhook:something was not working
properly.

The reason is that GetSubSys() was not calculating the target
name properly because it quits early when the number of config
inputs ('notify_webhook:something' in this case) is equal to 1.

This commit makes the code always calculate the target
name if found.
2022-08-01 07:52:23 -07:00
Anis Elleuch
3856d078d2 fix: set 20000 as maximum parallel event calls (#15435)
This is needed to avoid consuming a lot of goroutines when a target is
very slow or there is a bug in a target library.
2022-07-30 12:12:33 -07:00
Minio Trusted
6b4cb35f4f Update yaml files to latest version RELEASE.2022-07-30T05-21-40Z 2022-07-30 05:50:06 +00:00
Shireesh Anjal
e6eab2091f fix: Incorrect ServersCount in cluster.info (#15431)
The `ServersCount` field in cluster.info is expected to contain the
number of nodes, not the number of endpoints.
2022-07-29 22:21:40 -07:00
Harshavardhana
3cdb609cca allow root users to return appropriate policy in AccountInfo (#15437)
fixes #15436

This fixes a regression caused after the removal of "consoleAdmin"
policy usage for 'root users' in PR #15402
2022-07-29 20:58:03 -07:00
Minio Trusted
d6a7f62ff5 update helm v4.0.8 2022-07-29 16:39:54 -07:00
Minio Trusted
72f170f5d2 update console to v0.19.4 2022-07-29 15:07:53 -07:00
Minio Trusted
824d52a82b Update yaml files to latest version RELEASE.2022-07-29T19-40-48Z 2022-07-29 22:06:57 +00:00
Minio Trusted
067ebab9d8 update object-locking docs and word them appropriately 2022-07-29 12:40:48 -07:00
Anis Elleuch
6be6c0d2e3 Update kafka library to v1.35.0 (#15434)
There is a known rare issue in the current version 1.30.0 described here
https://github.com/Shopify/sarama/issues/2241.

Update the library to 1.35.0

Bonus:  update shirou/gopsutil v3.22.5 to v3.22.6 to fix a compilation
error for OpenBSD
2022-07-29 11:34:45 -07:00
Harshavardhana
aa874010e2 fix: regression in resolving the right versions (#15430)
commit d480022711 caused a regression in real
resolver, by picking up incorrect versionID.
2022-07-29 10:03:53 -07:00
Cesar Celis Hernandez
8ec888d13d feat: update binary once and push it to other servers (#15407) 2022-07-29 08:34:30 -07:00
Harshavardhana
916f274c83 choose starting concurrency based on number of local disks (#15428)
smaller setups may have fewer drives per server; choose
the concurrency based on the number of local drives, and let
the MinIO server change the overall concurrency as
necessary.
2022-07-29 00:00:06 -07:00
Aditya Manthramurthy
7ac53c07af fix: passing application configuration to console (#15409)
This is an update to MinIO server after swagger codegen related build
fixes added after issues introduced in 39fd7b0b3b
2022-07-28 18:30:24 -07:00
Harshavardhana
bc72e4226e do not allow filesystem fallback in server download (#15429)
It is possible for anyone with admin access to retrieve
the contents of any arbitrary OS location by simply
providing the file with `mc admin update alias/ /etc/passwd`.

Workaround is to disable 'admin:ServiceUpdate' action. Everyone
is advised to upgrade to this patch.

Thanks to @alevsk for finding this bug.
2022-07-28 17:44:21 -07:00
Poorna
5e0776e96a replication: Include replica object versions for resync (#15427) 2022-07-28 13:43:02 -07:00
Anis Elleuch
2f1ef02d35 Do not update directory access time (#15426)
Most setups mount with relatime, which only updates the access time
following a change in the directory.
2022-07-28 12:40:48 -07:00
Minio Trusted
db8442584e update helm chart v4.0.7 2022-07-27 20:54:38 -07:00
Naveen
d46cf50760 chore(deps): Included dependency review (#14958)
> Dependency Review GitHub Action in your repository to enforce dependency
> reviews on your pull requests.
> The action scans for vulnerable versions of dependencies introduced by package version
> changes in pull requests,
> and warns you about the associated security vulnerabilities.
> This gives you better visibility of what's changing in a pull request,
> and helps prevent vulnerabilities from being added to your repository.

https://docs.github.com/en/code-security/supply-chain-security/understanding-your-software-supply-chain/about-dependency-review#dependency-review-enforcement

Signed-off-by: Naveen <172697+naveensrinivasan@users.noreply.github.com>
2022-07-27 20:53:26 -07:00
Andrew Hibbert
0357121d17 Optionally run a command when completing jobs (#15017) 2022-07-27 20:51:16 -07:00
Harshavardhana
aff236e20e fix: cluster healthcheck for single drive setups (#15415)
single drive setups must return '200 OK' if the
drive is accessible; current master returns '503'
2022-07-27 16:46:34 -07:00
Harshavardhana
cbd70d26b5 optimize speedtest for smaller setups (#15414)
this has been observed in multiple environments
where the setups are small; `speedtest` naturally
fails with the default '10s', and the concurrency
of '32' is too big for such clusters.

choose a smaller value, i.e. equal to the number of
drives in such clusters, and let 'autotune'
increase the concurrency instead.
2022-07-27 14:41:59 -07:00
Harshavardhana
5e763b71dc use logger.LogOnce to reduce printing disconnection logs (#15408)
fixes #15334

- re-use net/url parsed value for http.Request{}
- remove gosimple, structcheck and unused due to https://github.com/golangci/golangci-lint/issues/2649
- unwrapErrs upto leafErr to ensure that we store exactly the correct errors
2022-07-27 09:44:59 -07:00
Aditya Manthramurthy
7e4e7a66af Remove internal usage of consoleAdmin (#15402)
"consoleAdmin" was used as the policy for root derived accounts, but this
lead to unexpected bugs when an administrator modified the consoleAdmin
policy

This change avoids evaluating a policy for root derived accounts as by
default no policy is mapped to the root user. If a session policy is
attached to a root derived account, it will be evaluated as expected.
2022-07-26 19:06:55 -07:00
Shireesh Anjal
906947a285 fix: typo in json key ClusterInfo DeploymentID (#15406)
deployement_id -> deployment_id
2022-07-26 19:05:33 -07:00
Minio Trusted
bfc70bc74e Update yaml files to latest version RELEASE.2022-07-26T00-53-03Z 2022-07-26 06:56:37 +00:00
jiuker
6b4f833a12 convert repeated error checks into single function in logger (#15387) 2022-07-25 17:53:03 -07:00
Poorna
426c902b87 site replication: fix healing of bucket deletes. (#15377)
This PR changes the handling of bucket deletes for site 
replicated setups to hold on to deleted bucket state until 
it syncs to all the clusters participating in site replication.
2022-07-25 17:51:32 -07:00
Anis Elleuch
e4b51235f8 upgrade: Split in two steps to ensure a stable retry (#15396)
Currently, if one server in a distributed setup fails to upgrade
due to any reason, it is not possible to upgrade again unless
nodes are restarted.

To fix this, split the upgrade process into two steps :

- download the new binary on all servers
- If successful, overwrite the old binary with the new one
2022-07-25 17:49:47 -07:00
Harshavardhana
4c6498d726 move all CI/CD to go1.18 (#15401) 2022-07-25 15:27:20 -07:00
Eng Zer Jun
0a3b1ad4eb test: use T.TempDir to create temporary test directory (#15400)
This commit replaces `ioutil.TempDir` with `t.TempDir` in tests. The
directory created by `t.TempDir` is automatically removed when the test
and all its subtests complete.

Prior to this commit, a temporary directory created using `ioutil.TempDir`
needs to be removed manually by calling `os.RemoveAll`, which is omitted
in some tests. The error handling boilerplate e.g.
	defer func() {
		if err := os.RemoveAll(dir); err != nil {
			t.Fatal(err)
		}
	}()
is also tedious, but `t.TempDir` handles this for us nicely.

Reference: https://pkg.go.dev/testing#T.TempDir

Signed-off-by: Eng Zer Jun <engzerjun@gmail.com>
2022-07-25 12:37:26 -07:00
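As a minimal sketch of the replacement the commit describes (test name and package are illustrative), the testing package now owns the directory's lifecycle:
```
package example

import "testing"

func TestWithTempDir(t *testing.T) {
	// Created fresh for this test and removed automatically when the
	// test and all its subtests complete; no os.RemoveAll boilerplate.
	dir := t.TempDir()
	_ = dir // use dir as the test's working directory
}
```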
Anis Elleuch
f23f442d33 Add cluster info to inspect/profiling archive (#15360)
Add cluster info to inspect and profiling archive.

In addition to the existing data generation for both inspect and profiling,
cluster.info file is added. This latter contains some info of the cluster.
The generation of cluster.info is done as the last step, and it can fail
if it exceeds 10 seconds.
2022-07-25 09:11:35 -07:00
Minio Trusted
e465c3587b Update yaml files to latest version RELEASE.2022-07-24T17-09-31Z 2022-07-24 17:37:58 +00:00
Minio Trusted
7109b6d414 update console to v0.19.3 2022-07-24 10:09:31 -07:00
Minio Trusted
8c97f3e9bc update minio-go/v7 v7.0.32 2022-07-24 09:28:19 -07:00
Klaus Post
3795b2c8ba Add compression scheme to header (#15395)
For easier debugging. We still do not return compressed size for security reasons.
2022-07-24 07:15:49 -07:00
Harshavardhana
7725425e05 fix: fork os.MkdirAll to optimize cases where parent exists (#15379)
a/b/c/d/ where `a/b/c/` exists results in additional syscalls
such as an Lstat() call to verify if `a/b/c/` exists
and is a directory.

We do not need to do this in MinIO: if the parent prefixes
exist, we can simply return success without spending
additional syscalls.

Also this implementation attempts to simply use Access() calls
to avoid os.Stat() calls since the latter does memory allocation
for things we do not need to use.

Access() is simpler since we have a predictable structure on
the backend and we know exactly how our path structures are.
2022-07-24 00:43:11 -07:00
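A rough sketch of the optimization described above, under the assumption of MinIO's predictable backend layout; `mkdirAllFast` is a hypothetical helper, not the actual forked implementation:
```
package main

import (
	"os"

	"golang.org/x/sys/unix"
)

// mkdirAllFast checks the full path with the cheaper Access() syscall
// first; only when it is absent do we fall back to os.MkdirAll, which
// stats every parent component on its way down.
func mkdirAllFast(path string, perm os.FileMode) error {
	if unix.Access(path, unix.F_OK) == nil {
		// Path already exists. On a predictable backend layout an
		// existing path here is known to be a directory, so no
		// extra Stat() allocation is needed.
		return nil
	}
	return os.MkdirAll(path, perm)
}
```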
Minio Trusted
b2f4948bbe update helm v4.0.6 2022-07-23 20:34:14 -07:00
Minio Trusted
f802d2ba83 Update yaml files to latest version RELEASE.2022-07-24T01-54-52Z 2022-07-24 02:31:24 +00:00
Daniel Valdivia
ce8548a1a2 Console v0.19.2 (#15390)
Signed-off-by: Daniel Valdivia <18384552+dvaldivia@users.noreply.github.com>
2022-07-23 18:54:52 -07:00
Minio Trusted
490dec981a update go mod tidy -compat=1.17 2022-07-22 15:31:16 -07:00
Aditya Manthramurthy
39fd7b0b3b Pass multiple IDP config to console (#15270)
This change passes multiple IDP config via a struct 
rather than env variables.
2022-07-22 15:28:02 -07:00
Taran Pelkey
e83930333b Allow DelKVS to delete specific sub-system fields. (#15354) 2022-07-22 14:48:23 -07:00
Harshavardhana
b0d70a0e5e support additional claim info in Auditing STS calls (#15381)
Bonus: Adds a missing AuditLog from AssumeRoleWithCertificate API

Fixes #9529
2022-07-22 11:12:03 -07:00
Denis Krivenko
ff5a5c1ee0 helm: Add runtimeClassName support to vanilla helm chart (#15385) 2022-07-22 10:25:41 -07:00
Mathieu Parent
290a53d735 helm: Use existingSecretKey as in the user example (#15386) 2022-07-22 10:25:22 -07:00
Aditya Manthramurthy
2393a13f86 Allow site replication config with multiple IDPs (#15361)
Fixes a bug that did not let site replication be configured when
multiple IDPs are configured.
2022-07-21 19:52:23 -07:00
Poorna
7d8c8de827 single drive: Remove bucket metadata on DeleteBucket (#15378)
from disk and the in-memory map
2022-07-21 19:51:53 -07:00
jiuker
3faef829c5 expect full quorum for writing 'format.json' everywhere (#15362) 2022-07-21 18:04:17 -07:00
Poorna
7560fb6f9a save IAM export assets relative to a folder prefix (#15355) 2022-07-21 17:51:33 -07:00
Harshavardhana
2fddcc6a11 upgrade mqtt library to v1.4.1 (#15366)
mainly to address some connect()/reconnect() packet
exhaustion issues that were found in some deployments.
2022-07-21 17:49:28 -07:00
Klaus Post
69bf39f42e fix: make complete multipart uploads faster on encrypted/compressed backends (#15375)
- Only fetch the parts we need and abort as soon as one is missing.
- Only fetch the number of parts requested by "ListObjectParts".
2022-07-21 16:47:58 -07:00
MohammadReza
f4d5c861f3 update grafana dashboard (#15357) 2022-07-21 15:17:44 -07:00
Minio Trusted
564a0afae1 Revert "tests: Add context cancelation (#15374)"
This reverts commit 1e332f0eb1.

Reverting this as tests are failing randomly.
2022-07-21 13:58:56 -07:00
Klaus Post
1e332f0eb1 tests: Add context cancelation (#15374)
A huge number of goroutines would build up from various monitors

When creating test filesystems, provide a context so they can shut down when no longer needed.
2022-07-21 11:52:18 -07:00
Poorna
cab8d3d568 feat: add API to return list of objects waiting to be replicated (#15091) 2022-07-21 11:05:44 -07:00
Klaus Post
be8c4cb24a fix: support multiple validateAdminReq actions (#15372)
handle multiple validateAdminReq actions and remove duplicate error responses.
2022-07-21 10:26:59 -07:00
Harshavardhana
65166e4ce4 fix: readQuorum calculation when defaultParityCount is 0 (#15363)
when parity is '0' the readQuorum must be equal
to the number of data disks.
2022-07-21 07:25:54 -07:00
Harshavardhana
8249cd4406 fix: allow payload verification error to be returned (#15364)
without reading the reader, the error is ignored
by the custom unmarshaller written for the ObjectLegalHold
data structure.
2022-07-21 01:24:03 -07:00
Harshavardhana
c6ecaf68ed update CREDITS with latest dependencies 2022-07-21 00:49:38 -07:00
Harshavardhana
d3f89fa6e3 remove unnecessary logs in IAM store (#15356) 2022-07-20 08:19:12 -07:00
Harshavardhana
ce8397f7d9 use partInfo only for intermediate part.x.meta (#15353) 2022-07-19 18:56:24 -07:00
Klaus Post
cae9aeca00 fix: reused field crash in PartIndices (#15351)
`PartIndices` may be set if xlMetaV2Version is reused.

Clear before unmarshaling and add sanity check when reading.
2022-07-19 16:49:46 -07:00
Klaus Post
f939d1c183 Independent Multipart Uploads (#15346)
Do completely independent multipart uploads.

In distributed mode, a lock was held to merge each multipart 
upload as it was added. This lock was highly contested and 
retries are expensive (timewise) in distributed mode.

Instead, each part adds its metadata information uniquely.
This eliminates the per-object lock required to merge each part.
The metadata is read back and merged by "CompleteMultipartUpload"
without locks when constructing the final object.

Co-authored-by: Harshavardhana <harsha@minio.io>
2022-07-19 08:35:29 -07:00
Andreas Auernhammer
242d06274a kms: add context.Context to KMS API calls (#15327)
This commit adds a `context.Context` to the
KMS `{Stat, CreateKey, GenerateKey}` API
calls.

The context will be used to terminate external calls
as soon as the client request gets canceled.

A follow-up PR will add a `context.Context` to
the remaining `DecryptKey` API call.

Signed-off-by: Andreas Auernhammer <hi@aead.dev>
2022-07-18 18:54:27 -07:00
Poorna
957e3ed729 export IAM: include site replicator svcacct (#15339) 2022-07-18 17:38:53 -07:00
Jeff Haynie
ed02ee4ef4 fix: issue when a Helm create user job returns more than once (#15321) 2022-07-18 12:09:44 -07:00
Daniel Valdivia
ba9691a0ad Console v0.19.1 (#15338)
Signed-off-by: Daniel Valdivia <18384552+dvaldivia@users.noreply.github.com>
2022-07-18 11:45:20 -07:00
Minio Trusted
e7eb94de6b Update yaml files to latest version RELEASE.2022-07-17T15-43-14Z 2022-07-17 22:06:11 +00:00
Harshavardhana
b6eb8dff64 Add decommission compression+encryption enabled tests (#15322)
update compression environment variables to follow
the expected sub-system style, however support fallback
mode.
2022-07-17 08:43:14 -07:00
Harshavardhana
7da9e3a6f8 support encrypted/compressed objects properly during decommission (#15320)
fixes #15314
2022-07-16 19:35:24 -07:00
Anis Elleuch
876970baea Exclude upload-ids with incomplete part upload in multipart listing (#15318)
Uploading a part object can leave an inconsistent state inside
.minio.sys/multipart where data are uploaded but xl.meta is not
committed yet.

Do not list upload-ids that have this state in the multipart listing.
2022-07-16 13:25:58 -07:00
LHHDZ
e68e76e143 fix: data race, which caused tests execution to fail (#15313) 2022-07-16 07:57:55 -07:00
Mathieu Parent
2bc7ca2d34 helm: add annotations for ServiceMonitor (#15020) 2022-07-16 01:04:27 -07:00
Minio Trusted
e94eb9af10 update helm v4.0.5
Signed-off-by: Minio Trusted <trusted@minio.io>
2022-07-15 23:42:56 -07:00
Jon Kartago Lamida
3018b21ab8 fix: failure to createUser used by make-user-job helm chart (#15293) 2022-07-15 23:22:21 -07:00
Steven Kriegler
0b605c3383 Allow topologySpreadConstraints configuration (#14684)
The default replica value is 16 (right now) which can lead to massive
resource consumption on one node in smaller clusters. The idea for this
addition is to allow users to specify how the pods (replicas) are being
spread across the cluster. It gives more control over this Helm Release
in smaller clusters where most worker nodes have taints.

As this Kubernetes feature has existed since Kubernetes 1.19 and is only
useful for a replica count > 1, this was taken into account.
2022-07-15 21:05:38 -07:00
Harshavardhana
e7ac1ea54c allow decommission to continue when healing (#15312)
Bonus:

- heal buckets in-case during startup the new
  pools have bucket missing.
2022-07-15 21:03:23 -07:00
Harshavardhana
5ac6d91525 support 'admin update' for hotfix versions (#15308)
hotfix versions are rejected as invalid;
allow `mc admin update` from hotfix repos.
2022-07-15 16:00:34 -07:00
Harshavardhana
1cd6713e24 copy query values before update to preserve the expected keys (#15310)
in success_action_redirect we were missing required
query params as per S3 spec - updated tests.
2022-07-15 15:04:48 -07:00
Harshavardhana
785b429737 add reconnect duration allows for verifying disconnect intervals (#15306) 2022-07-15 14:41:24 -07:00
Minio Trusted
4aecd8d039 Update yaml files to latest version RELEASE.2022-07-15T03-44-22Z 2022-07-15 06:05:11 +00:00
Harshavardhana
1b339ea062 allow force delete on decom pool (#15302)
Bonus:

- skip suspended pool from being
  considered for multipart uploads

- add more context for decomErrors()
2022-07-14 20:44:22 -07:00
Harshavardhana
236ef03dbd fix: skip objects expired via lifecycle rules during decommission (#15300) 2022-07-14 16:47:09 -07:00
Poorna
53cc561048 Default DeleteReplication rule status if unspecified. (#15301)
Since this is a MinIO-specific extension in the replication config,
default this to Disabled to allow other SDKs to be used to configure
replication rules.

Co-authored-by: Poorna Krishnamoorthy <poorna@minio.io>
2022-07-14 16:27:09 -07:00
Alexander Overvoorde
bb4b143f3b helm: Add missing TLS config for service monitor (#15228) 2022-07-14 14:29:08 -07:00
chel-ou
3af41cd37d helm: enable using different ports for minioAPIPort and minioConsolePort (#15259) 2022-07-14 14:28:34 -07:00
Poorna
7e32a17742 fix: site replication healing of missing buckets (#15298)
fixes a regression from #15186

- Adding tests to cover healing of buckets.
- Also dereference quota in SiteReplicationStatus only when non-nil
2022-07-14 14:27:47 -07:00
Cesar Celis Hernandez
6c265534a4 Updating minio-go to fix channel close bug (#15297) 2022-07-14 14:26:48 -07:00
Krishnan Parthasarathi
1d42133d44 listing: Expire object versions past expiry (#15287)
We skip object versions which are past their ILM expiry. This change schedules
them for expiry while at it.
2022-07-14 07:21:26 -07:00
LHHDZ
df911c9b9e correct RefreshCall & UnlockCall of DefaultTimeouts (#15288) 2022-07-14 07:20:48 -07:00
Minio Trusted
a6f40dd574 update helm to v4.0.4 2022-07-13 21:44:23 -07:00
Minio Trusted
688215e787 Update yaml files to latest version RELEASE.2022-07-13T23-29-44Z 2022-07-14 00:11:19 +00:00
Anis Elleuch
1cfa2e04bc Add a github workflow test for root disk detection (#15267)
Use losetup to create fake disks, start a MinIO cluster, unmount
one disk, and fail if format.json is recreated in the mount point
directory. It should fail because the mount point directory belongs
to the root disk after unmounting.
2022-07-13 16:29:44 -07:00
Poorna
b4f6901903 resync: Avoid concurrent access/write on map (#15286)
fixes a crash

```
fatal error: concurrent map iteration and map write
minio[19309]: goroutine 18640 [running]:
minio[19309]: runtime.throw({0x27a3399?, 0x1785?})
minio[19309]: runtime/panic.go:992 +0x71 fp=0xc0062f1c80 sp=0xc0062f1c50 pc=0x438671
minio[19309]: runtime.mapiternext(0xc0062f1e90?)
minio[19309]: runtime/map.go:871 +0x4eb fp=0xc0062f1cf0 sp=0xc0062f1c80 pc=0x41002b
minio[19309]: github.com/minio/minio/cmd.(*ReplicationPool).periodicResyncMetaSave(0xc0056c00c0, {0x4d06a48, 0xc0005b2480}, {0x4d22fc0, 0xc0015ea0
```
2022-07-13 16:29:10 -07:00
Klaus Post
0149382cdc Add padding to compressed+encrypted files (#15282)
Add up to 256 bytes of padding for compressed+encrypted files.

This will obscure the obvious cases of extremely compressible content 
and leave a similar output size for a very wide variety of inputs.

This does *not* mean the compression ratio doesn't leak information 
about the content, but the outcome space is much smaller, 
so often *less* information is leaked.
2022-07-13 07:52:15 -07:00
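A small sketch of the padding idea, assuming a hypothetical `padLength` helper rather than the actual minio code:
```
package main

import (
	"crypto/rand"
	"math/big"
)

// padLength picks how many random bytes (0..256) to append to a
// compressed object before encryption, so the stored size no longer
// maps one-to-one onto the compression ratio.
func padLength() (int64, error) {
	n, err := rand.Int(rand.Reader, big.NewInt(257)) // uniform over 0..256
	if err != nil {
		return 0, err
	}
	return n.Int64(), nil
}
```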
Klaus Post
697c9973a7 Upgrade compression package (#15284)
Includes mitigation for CVE-2022-30631 (Go should still be updated)

Remove functions now available upstream.
2022-07-13 07:48:14 -07:00
Harshavardhana
788fd3df81 preserve incoming query params in success_action_redirect (#15280)
fixes #15274
2022-07-13 07:46:44 -07:00
Anis Elleuch
996cac5fed Avoid listing buckets from a suspended pool (#15283)
Buckets requested after decommissioning has started must not be
created in a suspended pool. Therefore, listing buckets should avoid
suspended pools as well.
2022-07-13 07:44:50 -07:00
Harshavardhana
0a8b78cb84 fix: simplify passing auditLog eventType (#15278)
Rename Trigger -> Event to be a more appropriate
name for the audit event.

Bonus: fixes a bug in AddMRFWorker() it did not
cancel the waitgroup, leading to waitgroup leaks.
2022-07-12 10:43:32 -07:00
Harshavardhana
b4eb74f5ff allow custom speedtest bucket (#15271)
this allows for specifying existing buckets with

- object replication enabled
- object encryption enabled
- object versioning enabled
- object locking enabled
2022-07-12 10:12:47 -07:00
Anis Elleuch
57d1f31054 Do not log erasure read failure when disk goes offline (#15277)
Avoid printing the following log

```
API: SYSTEM
Time: Fri Jul 08 2022 11:48:40 GMT+0100
Error: Error(disk not found) reading erasure shards at...

Backtrace:
0: internal/logger/logger.go:278:logger.LogIf()
1: cmd/bitrot-streaming.go:156:cmd.(*streamingBitrotReader).ReadAt()
2: cmd/erasure-decode.go:165:cmd.(*parallelReader).Read.func1()
```
2022-07-12 09:56:56 -07:00
Klaus Post
9f02f51b87 Add 4K minimum compressed size (#15273)
There is no point in compressing very small files.

Typically the effective size on disk will be the same due to disk blocks.

So don't waste resources on extremely small files.

We don't check on multipart: 1) because we don't know the final size and 2) it is very likely a big object anyway.
2022-07-12 07:42:04 -07:00
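A minimal sketch of the threshold check described above; `shouldCompress` and its parameter are illustrative names, not the actual minio API:
```
package main

// minCompressibleSize: objects smaller than a typical filesystem
// block gain little from compression, so skip them.
const minCompressibleSize = 4 << 10 // 4 KiB

// A negative actualSize means the size is unknown (e.g. a streaming
// multipart upload), which is treated as likely big and therefore
// still eligible for compression.
func shouldCompress(actualSize int64) bool {
	return actualSize < 0 || actualSize >= minCompressibleSize
}
```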
Klaus Post
911a17b149 Add compressed file index (#15247) 2022-07-11 17:30:56 -07:00
Poorna
3d969bd2b4 fix: ignore missing targets/replication config during site removal (#15269) 2022-07-11 14:11:46 -07:00
Andreas Auernhammer
f800cee4fa metric: add KMS-related metrics (#15258)
This commit adds a minimal set of KMS-related metrics:
```
 # HELP minio_cluster_kms_online Reports whether the KMS is online (1) or offline (0)
 # TYPE minio_cluster_kms_online gauge
 minio_cluster_kms_online{server="127.0.0.1:9000"} 1
 # HELP minio_cluster_kms_request_error Number of KMS requests that failed with a well-defined error
 # TYPE minio_cluster_kms_request_error counter
 minio_cluster_kms_request_error{server="127.0.0.1:9000"} 16790
 # HELP minio_cluster_kms_request_success Number of KMS requests that succeeded
 # TYPE minio_cluster_kms_request_success counter
 minio_cluster_kms_request_success{server="127.0.0.1:9000"} 348031
```

Currently, we report whether the KMS is available and how many requests
succeeded/failed. However, KES exposes much more metrics that can be
exposed if necessary. See: https://pkg.go.dev/github.com/minio/kes#Metric

Signed-off-by: Andreas Auernhammer <hi@aead.dev>
2022-07-11 09:17:28 -07:00
Praveen raj Mani
b49fc33cb3 purge objects immediately with x-minio-force-delete in DeleteObject and DeleteBucket API (#15148) 2022-07-11 09:15:54 -07:00
daniel-bogusz95
00e235a1ee fix grammatical errors and minor rewrites (#15264)
Thank you @djwfyi for the help
2022-07-11 07:59:49 -07:00
Klaus Post
37a6b2da67 Allow compaction at bucket top level. (#15266)
If more than 1M folders (objects or prefixes) are found at the top level in a bucket, allow it to be compacted.

While this is a very suboptimal structure, we should limit memory usage at some point.
2022-07-11 07:59:03 -07:00
Harshavardhana
913e977c8d remove auto-port warning for console-address (#15260) 2022-07-08 13:36:41 -07:00
Harshavardhana
c2ddcb3b40 do not recreate deprecated delete-journal.bin, only read it (#15185)
simplify deprecated code, re-enable hot-swap replace disks
2022-07-08 12:17:02 -07:00
dorman
ab9544c0d3 helm: allow special characters in access/secret key (#15243) 2022-07-08 07:20:10 -07:00
Minio Trusted
4bfe849409 update helm to v4.0.3
Signed-off-by: Minio Trusted <trusted@minio.io>
2022-07-07 23:16:22 -07:00
Ray
3bdb92fcad Adding error check for jetstream connection (#15252) 2022-07-07 23:14:47 -07:00
Minio Trusted
cf9e3069f2 Update yaml files to latest version RELEASE.2022-07-08T00-05-23Z 2022-07-08 00:44:43 +00:00
Anis Elleuch
ed0cbfb31e fix: rootdisk detection by not using cached value when GetDiskInfo() errors out (#15249)
GetDiskInfo() uses timedValue to cache the disk info for one second.

timedValue behavior was recently changed to return an old cached value
when calculating a new value returns an error.

When a mount point is empty, GetDiskInfo() will return errUnformattedDisk,
timedValue will return cached disk info with unexpected IsRootDisk value,
e.g. false if the mount point belongs to a root disk. Therefore, the mount
point will be considered a valid disk and will be formatted as well.

This commit will also add more defensive code when marking root disks:
always mark a disk offline for any GetDiskInfo() error except
errUnformattedDisk. The server will try anyway to reconnect to those
disks every 10 seconds.
2022-07-07 17:05:23 -07:00
Harshavardhana
32b2f6117e fix: do not pass around sync.Map (#15250)
it is not safe to pass around sync.Map
through pointers, as it may be concurrently
updated by different callers.

this PR simplifies by avoiding sync.Map
altogether, we do not need sync.Map
to keep object->erasureMap association.

This PR fixes a crash when concurrently
using this value when audit logs are
configured.

```
fatal error: concurrent map iteration and map write

goroutine 247651580 [running]:
runtime.throw({0x277a6c1?, 0xc002381400?})
        runtime/panic.go:992 +0x71 fp=0xc004d29b20 sp=0xc004d29af0 pc=0x438671
runtime.mapiternext(0xc0d6e87f18?)
        runtime/map.go:871 +0x4eb fp=0xc004d29b90 sp=0xc004d29b20 pc=0x41002b
```
2022-07-07 17:04:25 -07:00
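A sketch of the safer pattern the commit moves to: a plain map guarded by a mutex, with readers taking a snapshot copy instead of iterating shared state (type and field names are illustrative):
```
package main

import "sync"

type objectErasureMap struct {
	mu sync.RWMutex
	m  map[string]string // object -> erasure set association (illustrative)
}

func (o *objectErasureMap) set(object, set string) {
	o.mu.Lock()
	defer o.mu.Unlock()
	if o.m == nil {
		o.m = make(map[string]string)
	}
	o.m[object] = set
}

// snapshot returns a copy that is safe to iterate (e.g. by the audit
// logger) while writers keep updating the original map.
func (o *objectErasureMap) snapshot() map[string]string {
	o.mu.RLock()
	defer o.mu.RUnlock()
	cp := make(map[string]string, len(o.m))
	for k, v := range o.m {
		cp[k] = v
	}
	return cp
}
```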
Harshavardhana
ae92521310 remove unnecessary nAgreed value in partial() func (#15242) 2022-07-07 13:45:34 -07:00
Harshavardhana
5802df4365 retry and resume decom operation upon retriable failures (#15244)
it is possible in a k8s-like system that reading pool.bin
might not have quorum during startup; add
a way to retry after this failure.
2022-07-07 12:31:44 -07:00
Minio Trusted
c1901f4e12 Update yaml files to latest version RELEASE.2022-07-06T20-29-49Z 2022-07-07 00:24:36 +00:00
Anis Elleuch
8d98282afd Better reporting of total/free usable capacity of the cluster (#15230)
The current code uses an approximation based on a ratio. The approximation
can skew if we have multiple pools with different disk capacities.

Replace the algorithm with a simpler one which counts data
disks and ignores parity disks.
2022-07-06 13:29:49 -07:00
Harshavardhana
dd839bf295 add NATS JetStream support (#15201) 2022-07-06 13:29:08 -07:00
Harshavardhana
3af6073576 no 'replicate status' without replication config (#15233)
'replicate status' shouldn't be displaying historic
values unless replication config is present on the
relevant bucket.
2022-07-06 09:53:33 -07:00
Harshavardhana
2518af5f9e fix: allow certain mutations on objects during decommissioning (#15231)
currently, by mistake, deletion of objects was skipped
if the object resided on the pool being decommissioned.

deletes are okay to be allowed since decommission is
designed to run on a cluster with active I/O.
2022-07-06 09:53:16 -07:00
Harshavardhana
7b793d84c8 fix: calculate scanner metric paths for single drive (#15232)
Additionally use pathJoin() to avoid double `//`
in path names.
2022-07-06 07:48:38 -07:00
Aditya Manthramurthy
af9bc7ea7d Add external IDP management Admin API for OpenID (#15152) 2022-07-05 18:18:04 -07:00
Klaus Post
ac055b09e9 Add detailed scanner metrics (#15161) 2022-07-05 14:45:49 -07:00
haslersn
df42914da6 Fix missing whitespace in error message for IncompleteBody (#15227) 2022-07-05 12:19:57 -07:00
Klaus Post
2471bdda00 fix: for DiskInfo call cache disk metrics (#15229)
Small uploads spend a significant amount of time (~5%) fetching disk info metrics. Also maps are allocated for each call.

Add a 100ms cache to disk metrics.
2022-07-05 11:02:30 -07:00
dorman
c7e01b139d helm: service port set to minioAPIPort in helm (#15223) 2022-07-05 07:38:04 -07:00
Harshavardhana
9d80ff5a05 fix: decommission delete markers for non-current objects (#15225)
versioned buckets were not creating the delete markers
present in the versioned stack of an object; this essentially
would prevent decommission from succeeding.

This PR fixes creating such delete markers properly during
a decommissioning process, and adds tests as well.
2022-07-05 07:37:24 -07:00
Minio Trusted
39b3941892 Update yaml files to latest version RELEASE.2022-07-04T21-02-54Z 2022-07-04 21:51:54 +00:00
Harshavardhana
b311abed31 decom IAM, Bucket metadata properly (#15220)
Current code incorrectly passed the
config asset object name while decommissioning;
make sure that we pass the right object name
to be hashed on the newer set of pools.

This PR fixes situations where, after a successful
decommission, the users and policies might go
missing due to the wrong hashed set.
2022-07-04 14:02:54 -07:00
Harshavardhana
ce667ddae0 do not print errFileNotFound in entries.resolve() (#15216) 2022-07-04 06:40:46 -07:00
Harshavardhana
0fee993a4b return appropriate error under 'decom status' (#15213)
fixes #15208
2022-07-01 16:21:23 -07:00
Poorna
0ea5c9d8e8 site healing: Skip stale iam asset updates from peer. (#15203)
Allow healing to apply IAM change only when peer
gave the most recent update.
2022-07-01 13:19:13 -07:00
Harshavardhana
63ac260bd5 Simplify Prometheus metrics gather (#15210) 2022-07-01 13:18:39 -07:00
Minio Trusted
a01a39b153 Update yaml files to latest version RELEASE.2022-06-30T20-58-09Z 2022-07-01 00:44:04 +00:00
Harshavardhana
f9a4ad7904 update banner with version+runtime (#15206) 2022-06-30 13:58:09 -07:00
Minio Trusted
e60b67d246 Revert "Tighten enforcement of object retention (#14993)"
This reverts commit 5e3010d455.

This commit causes a regression on object-locked buckets, causing
delete-markers to not be created.
2022-06-30 13:06:32 -07:00
Klaus Post
9004d69c6f Make ReqInfo concurrency safe (#15204)
Some read/writes of ReqInfo did not get appropriate locks, leading to races.

Make sure reading and writing holds appropriate locks.
2022-06-30 10:48:50 -07:00
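The shape of the fix, as a minimal sketch (field and method names are illustrative, not the actual minio types): every read and write of the shared request-info value goes through a lock:
```
package main

import "sync"

type ReqInfo struct {
	mu         sync.RWMutex
	objectName string
}

func (r *ReqInfo) SetObject(name string) {
	r.mu.Lock()
	defer r.mu.Unlock()
	r.objectName = name
}

func (r *ReqInfo) Object() string {
	r.mu.RLock()
	defer r.mu.RUnlock()
	return r.objectName
}
```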
Harshavardhana
8856a2d77b finalize startup-banner and remove unnecessary logs (#15202) 2022-06-29 16:32:04 -07:00
Anis Elleuch
54a061bdda Save minio version information centrally (#15181) 2022-06-29 14:45:49 -07:00
Harshavardhana
65b4b100a8 de-couple caller context to avoid internal races (#15195)
```
fatal error: concurrent map iteration and map write
fatal error: concurrent map iteration and map write

goroutine 745335841 [running]:
runtime.throw({0x273e67b?, 0x80?})
        runtime/panic.go:992 +0x71 fp=0xc0390bc240 sp=0xc0390bc210 pc=0x438671
runtime.mapiternext(0x40d987?)
        runtime/map.go:871 +0x4eb fp=0xc0390bc2b0 sp=0xc0390bc240 pc=0x41002b
runtime.mapiterinit(0x46bec7?, 0x4ef76c?, 0xc0017cc9c0?)
        runtime/map.go:861 +0x228 fp=0xc0390bc2d0 sp=0xc0390bc2b0 pc=0x40fae8
reflect.mapiterinit(0x1b5?, 0xc0?, 0x235bcc0?)
```

```
github.com/minio/minio/internal/rest/client.go:151 +0x5f4 fp=0xc0390bd988 sp=0xc0390bd730 pc=0x153e434
```
2022-06-29 14:44:26 -07:00
Poorna
7cc9286e0f site healing: Skip stale bucket metadata updates from peer (#15186)
Allow healing to apply bucket metadata change only when peer
gave the most recent update.
2022-06-28 18:09:20 -07:00
Harshavardhana
2f25639ea0 update banner to reflect the final agreed UI (#15192) 2022-06-28 16:37:40 -07:00
Harshavardhana
2070c215a2 handle missing funcNames for handlers (#15188)
also use designated names for internal
calls

- storageREST calls are storageR
- lockREST calls are lockR
- peerREST calls are just peer

Named in this fashion to facilitate wildcard matches
by having prefixes of the same name.

Additionally, also enable funcNames for generic handlers
that return errors, which currently show up as '<unknown>'
2022-06-28 05:04:10 -07:00
Minio Trusted
94b98222c2 update minio-go/v7 to v7.0.30 2022-06-27 21:12:22 -07:00
Harshavardhana
9c605ad153 allow support for parity '0', '1' enabling support for 2,3 drive setups (#15171)
allows for further granular setups

- 2 drives (1 parity, 1 data)
- 3 drives (1 parity, 2 data)

Bonus: allows '0' parity as well.
2022-06-27 20:22:18 -07:00
Anis Elleuch
b7c7e59dac Revert proxying requests with precondition errors (#15180)
In a replicated setup, when an object is updated in one cluster but
still waiting to be replicated to the other cluster, GET requests with
if-match, and range headers will likely fail. It is better to proxy
requests instead.

Also, this commit avoids printing verbose logs about precondition &
range errors.
2022-06-27 14:03:44 -07:00
Klaus Post
767c1436d3 Upgrade reedsolomon/compression packages (#15182)
reedsolomon/cpuid would take a long time to start up on Xen VMs with 
AMD processors due to a bug in the VM CPUID implementation.

Compression upgraded for better speed/compression.
2022-06-27 13:07:42 -07:00
Harshavardhana
699cf6ff45 perform object sweep after enqueuing the latest CopyObject() (#15183)
keep it similar to PutObject/CompleteMultipart
2022-06-27 12:11:33 -07:00
Anis Elleuch
9201870f6c Remove unnecessary code in WalkDir() (#15168)
Recalculating forward is useless. It is never used and it will be
computed again when calling scanDir() again.
2022-06-27 10:26:56 -07:00
Harshavardhana
6722f58668 save MinIO version with each version (8-bytes extra) (#15170)
store MinIO version along with each version in 'xl.meta'
for future purposes; it can be used to add specific
code for bug fixes, if any.
2022-06-27 03:59:41 -07:00
Harshavardhana
7b9b7cef11 add license banner for GNU AGPLv3 (#15178)
Bonus: rewrite subnet re-use of Transport
2022-06-27 03:58:25 -07:00
Minio Trusted
7d4fce09dc update RedHat UBI image to 8.6 2022-06-26 09:14:23 -07:00
Minio Trusted
2075501d86 Update yaml files to latest version RELEASE.2022-06-25T15-50-16Z 2022-06-26 16:09:28 +00:00
Harshavardhana
bd099f5e71 fix: change timedValue to return the previously cached value (#15169)
caller can interpret the underlying error and decide
accordingly; in places where we do not interpret the
errors upon timedValue.Get(), we should simply use
the previously cached value instead of returning "empty".

Bonus: remove some unused code
2022-06-25 08:50:16 -07:00
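A minimal sketch of the cache behavior described above (a simplified stand-in for the internal timedValue type):
```
package main

import (
	"sync"
	"time"
)

type timedValue struct {
	mu     sync.Mutex
	ttl    time.Duration
	last   time.Time
	value  interface{}
	Update func() (interface{}, error)
}

// Get recomputes after the TTL expires, but on an Update error it
// falls back to the previously cached value instead of returning an
// empty result. (On the very first call a failure still yields nil.)
func (t *timedValue) Get() interface{} {
	t.mu.Lock()
	defer t.mu.Unlock()
	if t.value != nil && time.Since(t.last) < t.ttl {
		return t.value
	}
	v, err := t.Update()
	if err != nil {
		return t.value // serve the stale value; errors are interpreted elsewhere
	}
	t.value, t.last = v, time.Now()
	return v
}
```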
Klaus Post
baf257adcb fix: health client leak when calling UpdateAllTargets (#15167)
The leak occurs when `LoadBucketMetadataHandler` is called and `UpdateAllTargets` gets called.

Since targets are rebuilt, we cancel all of them.
2022-06-24 11:12:52 -07:00
Anis Elleuch
4fd1986885 Trace all http requests (#15064)
Add a generic handler that adds a new tracing context to the request if
tracing is enabled. Other handlers are free to modify the tracing
context to update information on the fly, such as, func name, enable
body logging etc..

With this commit, requests like this 

```
curl -H "Host: ::1:3000" http://localhost:9000/
```

will be traced as well.
2022-06-23 23:19:24 -07:00
Harshavardhana
e1afac9439 reduce sha256 CPU usage by turning it off for speedtests (#15154)
continuation of PR #15151, keeping signature v4 for
the headers while avoiding sha256 for the body.
2022-06-23 11:26:53 -07:00
Poorna
580d9db85e Add APIs to import/export IAM data (#15014) 2022-06-23 09:25:15 -07:00
Anis Elleuch
42e2fd35d8 heal: Include dir markers when healing a fresh disk (#15158)
Directory markers are not healed when healing a new fresh disk. A
proper fix would be moving object name encoding/decoding to the erasure
object level, but it is too late now since the object-to-set distribution is
calculated at a higher level.
2022-06-23 06:47:33 -07:00
Harshavardhana
1a40c7c27c use signature-v2 for 'object perf' tests to avoid CPU using sha256 (#15151)
It is observed in a local 8-drive system that the CPU seems to be
bottlenecked at

```
(pprof) top
Showing nodes accounting for 1385.31s, 88.47% of 1565.88s total
Dropped 1304 nodes (cum <= 7.83s)
Showing top 10 nodes out of 159
      flat  flat%   sum%        cum   cum%
      724s 46.24% 46.24%       724s 46.24%  crypto/sha256.block
   219.04s 13.99% 60.22%    226.63s 14.47%  syscall.Syscall
   158.04s 10.09% 70.32%    158.04s 10.09%  runtime.memmove
   127.58s  8.15% 78.46%    127.58s  8.15%  crypto/md5.block
    58.67s  3.75% 82.21%     58.67s  3.75%  github.com/minio/highwayhash.updateAVX2
    40.07s  2.56% 84.77%     40.07s  2.56%  runtime.epollwait
    33.76s  2.16% 86.93%     33.76s  2.16%  github.com/klauspost/reedsolomon._galMulAVX512Parallel84
     8.88s  0.57% 87.49%     11.56s  0.74%  runtime.step
     7.84s   0.5% 87.99%      7.84s   0.5%  runtime.memclrNoHeapPointers
     7.43s  0.47% 88.47%     22.18s  1.42%  runtime.pcvalue
```

Bonus changes:

- re-use transport for bucket replication clients, also site replication clients.
- use a 32KiB buffer for all reads and writes at the transport layer; this seems to help
  TLS read connections.
- Do not set 'MaxConnsPerHost'; it is problematic to use with net/http
  connection pooling. 'MaxIdleConnsPerHost' is enough.
2022-06-22 16:28:25 -07:00
Anis Elleuch
f3bec41eb9 s3-verify: Add a flag to exclude younger than a certain age (#15142)
--minimum-object-age 1h can help exclude objects that are newly
uploaded but not replicated yet
2022-06-22 08:12:47 -07:00
Andreas Auernhammer
825634d24e fips: fix order of elliptic curves (#15141)
This commit fixes the order of elliptic curves.
As documented by https://pkg.go.dev/crypto/tls#Config
```
// CurvePreferences contains the elliptic curves that will be used in
// an ECDHE handshake, in preference order. If empty, the default will
// be used. The client will use the first preference as the type for
// its key share in TLS 1.3. This may change in the future.
```

In general, we should prefer `X25519` over the NIST curves.

Signed-off-by: Andreas Auernhammer <hi@aead.dev>
2022-06-22 08:09:28 -07:00
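The resulting preference order, sketched against the standard crypto/tls API (the surrounding config fields are omitted):
```
package main

import "crypto/tls"

// X25519 first, then the NIST curves; crypto/tls uses the first
// entry as the client's key share type in TLS 1.3.
func tlsConfig() *tls.Config {
	return &tls.Config{
		CurvePreferences: []tls.CurveID{
			tls.X25519,
			tls.CurveP256,
			tls.CurveP384,
			tls.CurveP521,
		},
	}
}
```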
Poorna
cb097e6b0a CopyObject: fix read/write err on closed pipe (#15135)
Fixes: #15128
Regression from PR#14971
2022-06-21 19:20:11 -07:00
Poorna
1cfb03fb74 replication: Avoid proxying when precondition failed (#15134)
Proxying is not required when content is on this cluster and
does not meet pre-conditions specified in the request.

Fixes #15124
2022-06-21 14:11:35 -07:00
Harshavardhana
f293df647c s3/zip: extract metadata properly for Zipped objects (#15123)
fixes #15121
2022-06-21 14:11:12 -07:00
Harshavardhana
10522438b7 add go1.18 specific curve preferences (#15132) 2022-06-21 11:10:50 -07:00
sota
e2e5bd6f19 fix: cant parse comment without '=' in environment file (#15130) 2022-06-21 10:37:15 -07:00
Andreas Auernhammer
cd7a0a9757 fips: simplify TLS configuration (#15127)
This commit simplifies the TLS configuration.
It inlines the FIPS / non-FIPS code.

Signed-off-by: Andreas Auernhammer <hi@aead.dev>
2022-06-21 07:54:48 -07:00
Anis Elleuch
b3eda248a3 Parallelize new disks healing of different erasure sets (#15112)
- Always reformat all disks when a new disk is detected; this will
  ensure new uploads are written to the new fresh disks
- Always heal all buckets first when an erasure set started to be healed
- Use a lock to prevent two disks belonging to different nodes but in
  the same erasure set to be healed in parallel
- Heal different sets in parallel

Bonus:
- Avoid logging errUnformattedDisk when a new fresh disk is inserted but
  not detected by healing mechanism yet (10 seconds lag)
2022-06-21 07:53:55 -07:00
Anis Elleuch
95b51c48be s3-verify: Fix endpoint and missing comparison (#15129)
- Fix a typo where target s3 client uses the source endpoint
- Fix a missing necessary comparison: if source name is lexically lower than target name
2022-06-21 05:35:41 -07:00
Harshavardhana
486888f595 remove gateway banner and some other TODO loggers (#15125) 2022-06-21 05:25:40 -07:00
Minio Trusted
17ab8145b5 Update yaml files to latest version RELEASE.2022-06-20T23-13-45Z 2022-06-21 00:16:07 +00:00
Poorna
b3ebc69034 improve error message for bucket metadata export/import API (#15120) 2022-06-20 16:13:45 -07:00
Harshavardhana
761dde2f1b fix: add 'mc support inspect' support for single drive deployment (#15122) 2022-06-20 16:11:19 -07:00
Harshavardhana
2bb6a3f4d0 cleanup site replication error handling (#15113)
site replication errors were printed at
various random locations, repeatedly - this
PR attempts to remove double logging and
capture all of them at a common place.

This PR also enhances the code to show
partial success and errors as well.
2022-06-20 10:48:11 -07:00
Harshavardhana
e83e947ca3 debug/s3-verify: simplify the tool to use lower memory footprint (#15110) 2022-06-20 10:45:35 -07:00
Anis Elleuch
73733a8fb9 heal: Report correctly in multi-pool setups (#15117)
`mc admin heal -r <alias>` in a multi-pool setup incorrectly returns
grey objects. The reason is that erasure-server-pools.HealObject() runs
HealObject in all pools and returns the result of the first nil
error. However, at the lower erasureObject level, HealObject() returns
nil if an object does not exist, plus a missing error for each disk of the
object in that pool, therefore confusing mc.

Make erasureObject.HealObject() return a not-found error at the lower
level, so at least erasureServerPools will know which pools to ignore.
2022-06-20 08:07:45 -07:00
daniel-bogusz95
ce6c23a360 docs: some grammatical, typo fixes
includes #15104, #15105, #15106, #15107
2022-06-19 15:35:51 -07:00
Daniel Valdivia
99d8e6a30f Update Console to v0.19.0 (#15109)
Signed-off-by: Daniel Valdivia <18384552+dvaldivia@users.noreply.github.com>
2022-06-18 18:02:17 -07:00
Poorna
2fa1d8ac48 Add import/export APIs to migrate bucket metadata (#14929) 2022-06-18 06:55:39 -07:00
Minio Trusted
ca7e425ce8 update minio-go to v7.0.29
fixes a client GetObject() leak when the caller
has canceled the context.
2022-06-17 22:15:43 -07:00
Poorna
8b9a19eef1 fix: typo in site replication version healing (#15103) 2022-06-17 16:43:24 -07:00
Aditya Manthramurthy
7f629df4d5 Add generic function to retrieve config value with metadata (#15083)
`config.ResolveConfigParam` returns the value of a configuration for any
subsystem based on checking env, config store, and default value. Also returns info
about which config source returned the value.

This is useful to return info about config params overridden via env in the user
APIs. Currently implemented only for OpenID subsystem, but will be extended for
others subsequently.
2022-06-17 11:39:21 -07:00
Anis Elleuch
98ddc3596c Avoid CompleteMultipart freeze with unexpected network issue (#15102)
If sending whitespace during a long S3 handler call fails,
the whitespace goroutine forgets to return a result to the caller.
Therefore, the complete multipart handler will be blocked.

Remember to send the header written result to the caller 
or/and close the channel.
2022-06-17 10:41:25 -07:00
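A sketch of the corrected pattern (the names and the 10s interval are illustrative): the whitespace writer must always deliver its result or close its channel, so the waiting handler can never block forever:
```
package main

import "time"

func keepAliveWhitespace(write func() error, stop <-chan struct{}) <-chan error {
	result := make(chan error, 1) // buffered: the send below never blocks
	go func() {
		defer close(result) // even on early return, unblock the receiver
		tick := time.NewTicker(10 * time.Second)
		defer tick.Stop()
		for {
			select {
			case <-stop:
				return // handler finished; closing result unblocks it
			case <-tick.C:
				if err := write(); err != nil {
					result <- err // report the failure instead of exiting silently
					return
				}
			}
		}
	}()
	return result
}
```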
Harshavardhana
5d23be6242 fix: ignore printing io.EOF during WalkDir() on concurrently modified objects (#15100)
2022-06-17 08:23:47 -07:00
Daniel Jakots
d15d3a524b Update gopsutil to v3.22.5 (#15098) 2022-06-16 22:01:39 -07:00
Minio Trusted
1e1d9acb1b Update yaml files to latest version RELEASE.2022-06-17T02-00-35Z 2022-06-17 02:56:57 +00:00
Poorna
55ee94bed0 initialize site replication subsys after loading metadata (#15099) 2022-06-16 19:00:35 -07:00
Harshavardhana
d228d29944 update '-v' flag behavior to include copyright and license (#15097)
```
~ minio -v
minio version DEVELOPMENT.2022-06-16T20-40-14Z (commit-id=e083228e2a06bfdcd006fee28d449cd2b47c542a)
Runtime: go1.18.3 linux/amd64
Copyright (c) 2015-2022 MinIO, Inc.
Licence AGPLv3 <https://www.gnu.org/licenses/agpl-3.0.html>
```
2022-06-16 16:10:48 -07:00
Harshavardhana
013cc66d8e add dataErrs for healing debug log (#15092) 2022-06-16 09:42:45 -07:00
Harshavardhana
c7ed6eee5e fix: background local test also via channel (#15086)
current implementation for `standalone` setups
was blocking the `perf drive`.

Bonus: remove all old unused complicated code.
2022-06-15 14:51:42 -07:00
Harshavardhana
8082d1fed6 add bucket level S3 received/sent bytes (#15084)
adds bucket level metrics for bytes received and sent bytes on all S3 API calls.
2022-06-14 15:14:24 -07:00
Harshavardhana
d2a10dbe69 fix: simplify healthcheck code to freeze calls only once (#15082)
- currently the subnet health check was freezing and taking
  locks at multiple locations; avoid them.

- throw errors if first attempt itself fails with no results
2022-06-14 11:22:07 -07:00
Anis Elleuch
14645142db erasure-sd: Evaluate versioning Prefix in multi-delete objects (#15081)
Erasure SD DeleteObjects() is only inheriting the bucket versioning status
from the handler layer.

Add the missing versioning prefix evaluation for each object that will
be deleted.
2022-06-14 10:05:12 -07:00
Minio Trusted
f34b2ef90b update dashboard Data Usage Growth as time series 2022-06-13 22:05:36 -07:00
George Costea
ce894665a8 examples: support configuration of a session policy file (#15078) 2022-06-13 15:36:58 -07:00
Anis Elleuch
0d00f3a55b kms: initialize after cli parsing (#15076)
KMS depends on the --certs-dir flag. 

Ensure KMS is initialized after loading the flag.
2022-06-13 13:06:13 -07:00
Minio Trusted
48ff373ff7 fix: 'mc support perf drive' crash fix when read returns < 1s 2022-06-13 11:24:37 -07:00
Anis Elleuch
e9efee0e64 debug: Close object after check (#15077) 2022-06-13 07:21:04 -07:00
Minio Trusted
4b3e7aee0b Update yaml files to latest version RELEASE.2022-06-11T19-55-32Z 2022-06-11 21:04:23 +00:00
Anis Elleuch
dd53b287f2 sts: Avoid printing all STS errors (#15065)
Limit printing STS errors to 

- STS internal error
- STS not initialized
- STS upstream error
2022-06-11 12:55:32 -07:00
Anis Elleuch
21526efe51 Update dperf to 0.4.1 (#15071) 2022-06-11 09:39:50 -07:00
Harshavardhana
7413045f0e fix: add missing minio_s3_requests_total (#15070)
PR #15052 caused a regression, add the missing metrics back.

Bonus:

- internode information should be only for distributed setups 
- update the dashboard to include 4xx and 5xx error panels.
2022-06-11 00:50:31 -07:00
Harshavardhana
d76c508566 debug: verify diff on latest objects on source and target buckets (#15069) 2022-06-10 16:56:51 -07:00
Minio Trusted
8fb46de5e4 Update yaml files to latest version RELEASE.2022-06-10T16-59-15Z 2022-06-10 20:12:04 +00:00
Harshavardhana
af1944f28d support reading systemctl config automatically on baremetal setups (#15066)
this allows for customers to use `mc admin service restart`
directly even when performing RPM, DEB upgrades. Upon such 'restart'
after upgrade MinIO will re-read the /etc/default/minio for any
newer environment variables.

As long as `MINIO_CONFIG_ENV_FILE=/etc/default/minio` is set, this
is honored.
2022-06-10 09:59:15 -07:00
Harshavardhana
214ea14f29 fix: for frozen calls return if client disconnects (#15062) 2022-06-09 05:06:47 -07:00
Anis Elleuch
5fb420c703 prometheus: Add S3 4xx and 5xx S3 monitoring (#15052)
Currently minio_s3_requests_errors_total covers 4xx and 
5xx S3 responses which can be confusing when s3 applications 
send a lot of HEAD requests with obvious 404 responses or
when replication is enabled.

Add 
- minio_s3_requests_4xx_errors_total
- minio_s3_requests_5xx_errors_total

to help users monitor 4xx and 5xx HTTP status codes separately.
2022-06-08 11:22:34 -07:00
Harshavardhana
2420f6c000 fix: make metrics endpoint responsive by reducing the chatter (#15055)
peerOnlineCounter was making NxN calls to many peers; this
can be really long and tedious if there are random servers
that are going down.

Instead we should calculate online peers from the point of
view of "self" and return those online and offline appropriately
by performing a healthcheck.
2022-06-08 02:43:13 -07:00
Harshavardhana
b0d7332a0c healthcheck cluster endpoint should honor write/readQuorum per pool (#15053) 2022-06-07 19:08:21 -07:00
Daniel Valdivia
f71b56a5d0 Bump Console v0.18.1 (#15051)
Signed-off-by: Daniel Valdivia <18384552+dvaldivia@users.noreply.github.com>
2022-06-07 12:19:38 -07:00
Harshavardhana
d55efc791f relax O_DIRECT in single drive mode if unsupported (#15045) 2022-06-07 06:44:01 -07:00
Minio Trusted
f63645546d update minimum goroutine threshold on dashboard 2022-06-06 22:13:54 -07:00
Kaan Kabalak
e2dd3e3587 Include the entirety of vendor folder in .gitignore (#15046)
The 'go mod vendor' command generates a directory called 
'vendor' in the main module's root directory, which includes 
the required packages to support builds. Therefore, we can 
include the 'vendor' directory in .gitignore completely, 
regardless of any file extension.
2022-06-06 20:47:51 -07:00
Minio Trusted
27ab780317 Update yaml files to latest version RELEASE.2022-06-07T00-33-41Z 2022-06-07 01:06:59 +00:00
Minio Trusted
e2d4d097e7 do not print errors upon 'nil' err 2022-06-06 17:33:41 -07:00
Minio Trusted
ac8cb6ba0d Update yaml files to latest version RELEASE.2022-06-06T23-14-52Z 2022-06-06 23:47:31 +00:00
Shireesh Anjal
4ce81fd07f Add periodic callhome functionality (#14918)
* Add periodic callhome functionality

Periodically (every 24hrs by default), fetch callhome information and
upload it to SUBNET.

New config keys under the `callhome` subsystem:

enable - Set to `on` for enabling callhome. Default `off`
frequency - Interval between callhome cycles. Default `24h`

* Improvements based on review comments

- Update `enableCallhome` safely
- Rename pctx to ctx
- Block during execution of callhome
- Store parsed proxy URL in global subnet config
- Store callhome URL(s) in constants
- Use existing global transport
- Pass auth token to subnetPostReq
- Use `config.EnableOn` instead of `"on"`

* Use atomic package instead of lock

* Use uber atomic package

* Use `Cancel` instead of `cancel`

Co-authored-by: Harshavardhana <harsha@minio.io>

Co-authored-by: Harshavardhana <harsha@minio.io>
Co-authored-by: Aditya Manthramurthy <donatello@users.noreply.github.com>
2022-06-06 16:14:52 -07:00
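A minimal sketch of such a periodic loop, assuming the 24h default frequency; `fetchCallhomeInfo` and `uploadToSubnet` are hypothetical placeholders, not MinIO APIs:

```go
package callhome

import (
	"context"
	"time"
)

// Hypothetical placeholders standing in for the real callhome plumbing.
func fetchCallhomeInfo() string        { return "health-report" }
func uploadToSubnet(info string) error { return nil }

// run performs one callhome cycle per tick until ctx is canceled.
// Per the commit, frequency defaults to 24h and the loop only runs
// when the `callhome enable` config key is set to `on`.
func run(ctx context.Context, frequency time.Duration) {
	t := time.NewTicker(frequency)
	defer t.Stop()
	for {
		select {
		case <-ctx.Done():
			return
		case <-t.C:
			_ = uploadToSubnet(fetchCallhomeInfo())
		}
	}
}
```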
Harshavardhana
df9eeb7f8f fix: do not log concurrently when multiple disks return errors (#15044)
since the values inside 'context' are mutated internally by
the logger, make sure to log serially upon errors, not concurrently.
2022-06-06 15:15:11 -07:00
Harshavardhana
31c4fdbf79 fix: resyncing 'null' version on pre-existing content (#15043)
PR #15041 fixed replicating the 'null' version; however, a
regression from #14994 caused the target versions for these
'null' versioned objects to have different 'versions'. This
may cause confusion with bi-directional replication and
cause double replication.

This PR fixes this properly by making sure we replicate
the correct versions on the objects.
2022-06-06 15:14:56 -07:00
Harshavardhana
48e367ff7d reject resync start on misconfigured replication rules (#15041)
we expect resync to start on buckets with the replication
rule ExistingObjects enabled; if not, we reject such
calls.
2022-06-06 02:54:39 -07:00
Anis Elleuch
fd02492cb7 avoid limits on the number of parallel trace/bucket notifications listeners (#14799)
Simplifies overall limits on the incoming listeners for notifications.

Fixes #14566
2022-06-05 14:29:12 -07:00
Aditya Manthramurthy
addfa35d93 Add FIPS build to CI and add README.fips.md (#15038) 2022-06-04 18:25:37 -07:00
Harshavardhana
5afdc56796 allow single drive mode to run on root disk (#15037)
for practical reasons, allow root disk based installs for single drive mode.
2022-06-03 12:53:42 -07:00
Harshavardhana
fb1c333a83 update latest dperf v0.4.0 2022-06-03 11:13:20 -07:00
Harshavardhana
c3e1da8e04 honor canceled context and do not leak on mergeChannels (#15034)
mergeEntryChannels has the potential to perpetually
wait on the results channel; the context might be closed
and we did not honor the caller's context cancellation.
2022-06-03 05:59:02 -07:00
Anis Elleuch
20a753e2e5 Fix a possible service freeze after perf object (#15036)
The S3 service can be frozen indefinitely if a client or mc asks for object
perf API but quits early or has some networking issues. The reason is
that partialWrite() can block indefinitely.

This commit makes partialWrite() listen to context cancellation as well. It
also renames deadlinedCtx to healthCtx since it covers handler context
cancellation and not only the speedtest deadline.
2022-06-03 05:58:45 -07:00
Minio Trusted
3a398775fb Update yaml files to latest version RELEASE.2022-06-03T01-40-53Z 2022-06-03 02:36:56 +00:00
Aditya Manthramurthy
61a7434379 Update --version option behavior (#15032)
- Add git commit ID
- Add go version
2022-06-02 18:40:53 -07:00
Aditya Manthramurthy
09f5e29327 Bump up console to v0.18.0 (#15031) 2022-06-02 17:34:37 -07:00
Poorna
29edb4ccfe fix: site replication bucket heal to not panic if replication config is missing (#15025) 2022-06-02 12:34:03 -07:00
Minio Trusted
197d6fb644 Update yaml files to latest version RELEASE.2022-06-02T16-16-26Z 2022-06-02 17:46:32 +00:00
Anis Elleuch
d4e565e595 Add defensive check for one stream message size (#15029)
In a streaming response, the client knows the size of a streamed
message but never checks the message size. Add the check to error 
out if the response message is truncated.
2022-06-02 09:16:26 -07:00
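A sketch of the idea using a hypothetical 4-byte length prefix (the actual wire format may differ); `io.ReadFull` surfaces exactly the truncation case this commit guards against:

```go
package stream

import (
	"encoding/binary"
	"fmt"
	"io"
)

// readMessage reads one length-prefixed message and fails loudly when the
// stream ends early instead of silently accepting a truncated payload.
func readMessage(r io.Reader) ([]byte, error) {
	var hdr [4]byte
	if _, err := io.ReadFull(r, hdr[:]); err != nil {
		return nil, fmt.Errorf("reading message header: %w", err)
	}
	size := binary.BigEndian.Uint32(hdr[:])
	msg := make([]byte, size)
	if _, err := io.ReadFull(r, msg); err != nil {
		// io.ReadFull returns io.ErrUnexpectedEOF on a short read,
		// which is exactly the truncation case being guarded here.
		return nil, fmt.Errorf("truncated message: wanted %d bytes: %w", size, err)
	}
	return msg, nil
}
```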
Minio Trusted
1fce2b180f Update yaml files to latest version RELEASE.2022-06-02T02-11-04Z 2022-06-02 02:42:14 +00:00
Aditya Manthramurthy
be6ccd129d fix: typo in FIPS sha256 (#15024) 2022-06-01 19:11:04 -07:00
Klaus Post
f7cecf0945 Make isIndexedMetaV2 return errors (#15012)
Indexed streams would be decoded by the legacy loader if there 
was an error loading it. Return an error when the stream is indexed 
and it cannot be loaded.

Fixes "unknown minor metadata version" on corrupted xl.meta files and 
returns an actual error.
2022-05-31 19:06:57 -07:00
Harshavardhana
7b2198f7e5 handle IPv6 sourceIPs properly (#15005) 2022-05-31 06:04:12 -07:00
Harshavardhana
52221db7ef fix: for unexpected errors in reading versioning config panic (#14994)
We need to make sure that if we cannot read bucket metadata
for some reason, and the bucket metadata is not missing but
returns corrupted information, we panic in such
handlers to disallow I/O and protect the overall state
of the system.

In-case of such corruption we have a mechanism now
to force recreate the metadata on the bucket, using
`x-minio-force-create` header with `PUT /bucket` API
call.

Additionally fix the versioning config updated state
to be set properly for the site replication healing
to trigger correctly.
2022-05-31 02:57:57 -07:00
Harshavardhana
befbf48563 fix: s3-check-md5 to not panic for incomplete md5 2022-05-30 20:58:42 -07:00
Anis Elleuch
56a61bab56 test: Add GetObjectNInfo test with some outdated disks (#15004)
Add a test reading an object which has some old data in some outdated
disks, in a versioned and non-versioned bucket.
2022-05-30 17:52:59 -07:00
Harshavardhana
d480022711 fix: invalidate outdated disks appropriately during readAllXL (#15002)
readAllXL would return inlined data for outdated disks
causing "read" to return incorrect content to the client,

this PR fixes this behavior by making sure we skip such
outdated disks appropriately based on the latest ModTime
on the disk.
2022-05-30 12:43:54 -07:00
Harshavardhana
f1abb92f0c feat: Single drive XL implementation (#14970)
The main motivation is to move towards a common backend format
for all the different types of modes in MinIO, allowing for
simpler code and predictable behavior across all features.

This PR also brings features such as versioning, replication,
transitioning to single drive setups.
2022-05-30 10:58:37 -07:00
Harshavardhana
5792be71fa fix: add timeouts to avoid goroutine leaks in net/http (#14995)
The following code can reproduce an unending goroutine buildup
while keeping connections established, due to clients
not closing the connections.

https://gist.github.com/harshavardhana/2d00e6f909054d2d2524c71485ad02e1

Without this PR, all MinIO deployments can be subjected to
denial of service attacks, rendering the entire service
unavailable.

We bring in two timeouts at this stage to control such
goroutine buildups; the new changes are

- IdleTimeout (to kill off idle connections)
- ReadHeaderTimeout (to kill off connections that are too slow)

This new change also brings two hidden options to make any
additional relevant changes if desired in some setups.
2022-05-30 06:24:51 -07:00
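A minimal sketch of the two `net/http` server timeouts this change introduces; the durations shown are illustrative, not MinIO's defaults:

```go
package main

import (
	"log"
	"net/http"
	"time"
)

func main() {
	srv := &http.Server{
		Addr:              ":9000",
		Handler:           http.DefaultServeMux,
		IdleTimeout:       30 * time.Second, // reap idle keep-alive connections
		ReadHeaderTimeout: 30 * time.Second, // reap clients too slow to send headers
	}
	log.Fatal(srv.ListenAndServe())
}
```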
Harshavardhana
c2630bb3a3 add total usage pie chart based on total/free bytes 2022-05-28 09:53:53 -07:00
Poorna
5e3010d455 Tighten enforcement of object retention (#14993)
Ref issue #14991 - in the rare case that an object in a bucket under
retention has a null version, make sure to enforce retention
rules.
2022-05-28 02:21:19 -07:00
Anis Elleuch
ccbf65c8e8 site-repl: Fix deadlock after an IAM loading error (#14990)
Fix forgotten IAM cache lock releases when reading some data from
disk/etcd

Co-authored-by: Anis Elleuch <anis@min.io>
2022-05-27 10:26:38 -07:00
Harshavardhana
9d07cde385 use crypto/sha256 only for FIPS 140-2 compliance (#14983)
It would seem like PR #11623 had chewed off more
than it wanted to; non-FIPS builds shouldn't really
be forced to use the slower crypto/sha256 even for
presumed "non-performance" codepaths. In MinIO
there are really no "non-performance" codepaths.
This assumption seems to have had an adverse
effect on CPU usage in certain areas.

This PR ensures that we stick to sha256-simd
on all non-FIPS builds, our most common build
to ensure we get the best out of the CPU at
any given point in time.
2022-05-27 06:00:19 -07:00
Aditya Manthramurthy
464b9d7c80 Add support for Identity Management Plugin (#14913)
- Adds an STS API `AssumeRoleWithCustomToken` that can be used to 
  authenticate via the Id. Mgmt. Plugin.
- Adds a sample identity manager plugin implementation
- Add doc for plugin and STS API
- Add an example program using go SDK for AssumeRoleWithCustomToken
2022-05-26 17:58:09 -07:00
Poorna
5c81d0d89a site replication: heal missing/invalid replication config (#14979)
Validate remote target ARNs and heal any stale rules in
the replication config
2022-05-26 17:57:23 -07:00
Praveen raj Mani
62cd643868 Add --insecure flag to skip TLS verification in s3-md5-check tool (#14980) 2022-05-26 06:02:05 -07:00
Klaus Post
c0bf02b8b2 Ignore disks with 0 total space (#14981)
Ignore disks with 0 total

Mainly defensive to ensure no `/0` in percent calculation.
2022-05-26 06:01:50 -07:00
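The defensive check amounts to guarding the percent calculation, roughly like this sketch:

```go
package disk

// usedPercent guards against drives reporting 0 total space, which would
// otherwise cause a division by zero in the percentage calculation.
func usedPercent(used, total uint64) float64 {
	if total == 0 {
		return 0 // ignore such disks entirely, per this commit
	}
	return float64(used) / float64(total) * 100
}
```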
Minio Trusted
1b7dd70f72 Update yaml files to latest version RELEASE.2022-05-26T05-48-41Z 2022-05-26 06:27:03 +00:00
Minio Trusted
372a08be49 Update minio-go to v7.0.27 2022-05-26 05:48:41 +00:00
Harshavardhana
fd46a1c3b3 fix: some races when accessing ldap/openid config globally (#14978) 2022-05-25 18:32:53 -07:00
Aditya Manthramurthy
5aae7178ad Fix listing of service and sts accounts (#14977)
Now returns user does not exist error if the user is not known to the system
2022-05-25 15:28:54 -07:00
Harshavardhana
dea8220eee do not heal outdated disks > parityBlocks (#14976)
this PR also fixes a situation where an incorrect
partsMetadata slice was used and fi.Data was
re-used from a single drive, causing duplication
of the shards across all drives.

This happens for situations where shouldHeal()
returns true for all drives > parityBlocks.

To avoid this we should never attempt to heal on all
drives > parityBlocks, unless we are doing metadata
migration from xl.json -> xl.meta
2022-05-25 15:17:10 -07:00
Klaus Post
a4be0b88f6 Add server pool reserved space (#14974)
If one or more pools reach 85% usage in a set, we will only 
use pools that have more free space.

In case all pools are above 85% we allow all of them to be used 
with the regular distribution.
2022-05-25 13:20:20 -07:00
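A sketch of the selection rule under the stated 85% threshold; the types and names are illustrative, not MinIO's pool code:

```go
package pools

// eligiblePools prefers pools below the 85% threshold and falls back to
// all pools when every pool is above it. Usage values are fractions in [0,1].
func eligiblePools(poolUsage []float64) []int {
	const reserved = 0.85
	var below []int
	for i, u := range poolUsage {
		if u < reserved {
			below = append(below, i)
		}
	}
	if len(below) == 0 {
		// all pools above 85%: fall back to the regular distribution
		all := make([]int, len(poolUsage))
		for i := range all {
			all[i] = i
		}
		return all
	}
	return below
}
```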
Poorna
d8101573be Disallow deletion of ARN when under active replication (#14972)
fixes a regression from #12880
2022-05-24 19:40:45 -07:00
Klaus Post
41cdb357bb Compensate for different server pool sizes (#14968)
When a server pool with a different number of sets is added, it is 
not compensated for when choosing a destination pool for new objects. 
This leads to unbalanced placement of objects, with smaller pools 
getting a bigger share of objects since we only compare the destination 
sets directly.

This change will compensate for differences in set sizes when choosing
the destination pool.

Different set sizes are already compensated by fewer disks.
2022-05-24 18:57:14 -07:00
Harshavardhana
38caddffe7 fix: copyObject on versioned bucket when updating metadata (#14971)
updating metadata with CopyObject on a versioned bucket
causes the latest version to become unreadable; this PR fixes
this properly by handling the inline-data bug fix introduced
in PR #14780.

This bug affects only inlined data.
2022-05-24 17:27:45 -07:00
Minio Trusted
80fe166902 update vulnerable deps coredns, ldap/v3 2022-05-24 15:53:52 -07:00
Poorna
0e26f983d6 site replication: Allow replication rule edit (#14969)
Revert commit b42cfcea60 as it was too
restrictive.
2022-05-24 13:27:33 -07:00
Klaus Post
fc08fcab52 hash-set: Add file input for debug tool (#14965)
Add input option for a file list to display total distribution.
2022-05-24 09:05:39 -07:00
Anis Elleuch
77dc99e71d Do not use inline data size in xl.meta quorum calculation (#14831)
* Do not use inline data size in xl.meta quorum calculation

Data shards of one object can have different inline/not-inline decisions
across disks. This happens with outdated disks when the inline
decision changes. For example, enabling the bucket versioning configuration
will change the small file threshold.

When the parity of an object becomes low, GET object can return 503
because it is unable to calculate the xl.meta quorum, just because
some xl.meta entries have inline data and others do not.

So this commit disables taking the size of the inline data into
consideration when calculating the xl.meta quorum.

* Add tests for simultaneous inline/not-inline objects

Co-authored-by: Anis Elleuch <anis@min.io>
2022-05-24 06:26:38 -07:00
Anis Elleuch
5041bfcb5c replication healing: Fix typo when healing bucket quota info (#14966)
A typo was found in the replication healing code where an empty quota
configuration was sent to peer sites instead of the correct one.
2022-05-24 06:26:13 -07:00
Minio Trusted
5be76856bd Update yaml files to latest version RELEASE.2022-05-23T18-45-11Z 2022-05-24 00:29:45 +00:00
Minio Trusted
2a3f5e1ad1 update console release to v0.17.2 2022-05-23 11:45:11 -07:00
Harshavardhana
f8650a3493 fetch bucket replication stats across peers in single call (#14956)
the current implementation relied on recursively calling one bucket
at a time across all peers; this would be very slow and chatty
when there are hundreds of buckets, which would mean 100*peerCount
network operations.

This PR attempts to reduce this entire exchange to just `peerCount`
network calls. This also addresses a
concern where the Prometheus metrics would significantly slow
down when one of the peers is offline.
2022-05-23 09:15:30 -07:00
Klaus Post
90a52a29c5 Fix WalkDir fallback hot loop (#14961)
Fix fallback hot loop

fd was never refreshed, leading to an infinite hot loop if a disk failed and the fallback disk fails as well.

Fix & simplify retry loop.

Fixes #14960
2022-05-23 06:28:46 -07:00
Poorna
8859c92f80 Relax site replication syncing of service accounts (#14955)
Synchronous replication of service/sts accounts can be relaxed
as site replication healing should catch up when peer clusters
are back online.
2022-05-20 19:09:11 -07:00
Anis Elleuch
01e5632949 mrf: Fix stale MRF data showed in heal info (#14953)
One user reported that the mc admin heal status output ETA kept increasing
over time. It turned out MRF was not clearing its data due to a
bug in the code.

pendingItems is increased when an object is queued to be healed but
never decreased when there is a healing error. This commit will decrease
pendingItems and pendingBytes even when there is an error, to give
accurate reporting.
2022-05-20 07:33:18 -07:00
Minio Trusted
18a4276e25 Update yaml files to latest version RELEASE.2022-05-19T18-20-59Z 2022-05-19 20:18:49 +00:00
Minio Trusted
c06032f35f update upgrade checklist and upgrade docs for systemd 2022-05-19 11:20:59 -07:00
Anis Elleuch
95a6b2c991 Merge LDAP STS policy evaluation with the generic STS code (#14944)
If LDAP is enabled, the STS security token policy is evaluated using a
different code path and expects the ldapUser claim to exist in the security
token. This means other STS temporary accounts generated by any Assume
Role function, such as AssumeRoleWithCertificate, won't be allowed to do any
operation, as these accounts do not have the LDAP user claim.

Since IsAllowedLDAPSTS() is similar to IsAllowedSTS(), this commit will
merge both.

Non-harmful changes:
- IsAllowed for LDAP will start supporting the RoleARN claim
- IsAllowed for LDAP will not check for the parent claim anymore. This check doesn't
  seem to be useful since all STS logins compare the access/secret/security-token
  with the one saved on disk.
- LDAP will support $username condition in policy documents.

Co-authored-by: Anis Elleuch <anis@min.io>
Co-authored-by: Aditya Manthramurthy <donatello@users.noreply.github.com>
2022-05-19 11:06:55 -07:00
Minio Trusted
ee28f6caaa update console v0.17.0 2022-05-19 03:47:09 -07:00
Harshavardhana
30c9e50701 make sure to ignore expected errors and dirname deletes (#14945) 2022-05-18 17:58:19 -07:00
Aditya Manthramurthy
9aadd725d2 Avoid calling .Reset() on active timer (#14941)
.Reset() documentation states:

    For a Timer created with NewTimer, Reset should be invoked only on stopped
    or expired timers with drained channels.

This change is just to comply with this requirement, as there might be some
runtime-dependent situations that could lead to unexpected behavior.
2022-05-18 15:37:58 -07:00
Harshavardhana
6cfb1cb6fd fix: timer usage across codebase (#14935)
it seems in some places we have been wrongly using the
timer.Reset() function, nicely exposed by an example
shared by @donatello https://go.dev/play/p/qoF71_D1oXD

this PR fixes all the usage comprehensively
2022-05-17 22:42:59 -07:00
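The correct pattern, per time.Timer's documentation quoted above (stop, drain, then reset), looks roughly like this sketch:

```go
package main

import "time"

// resetTimer stops the timer and drains its channel before Reset, so a
// stale tick cannot fire spuriously after the reset.
func resetTimer(t *time.Timer, d time.Duration) {
	if !t.Stop() {
		select {
		case <-t.C: // drain a tick that already fired
		default: // channel already drained by the receiver
		}
	}
	t.Reset(d)
}

func main() {
	t := time.NewTimer(time.Second)
	<-t.C
	resetTimer(t, time.Second)
	<-t.C
}
```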
Harshavardhana
2dc8ac1e62 allow IAM cache load to be granular and capture missed state (#14930)
anything that is stuck on the disk today can cause latency
spikes for all incoming S3 I/O; we need to have this
de-coupled so that latency in loading
credentials is not reflected back to the S3 API calls.

The approach this PR takes is to check whether the credentials
were updated while the IAM load was in progress,
so that we can use a merge instead of a "replacement" to avoid
missing state.
2022-05-17 19:58:47 -07:00
Anis Elleuch
e952e2a691 audit/kafka: Fix quitting early after first logging (#14932)
A recent commit created some regressions:
- Kafka/Audit goroutines quit when the first log is sent
- Missing doneCh initialization in Kafka audit
2022-05-17 07:43:25 -07:00
Harshavardhana
040ac5cad8 fix: when logger queue is full exit quickly upon doneCh (#14928)
Additionally only reload requested sub-system not everything
2022-05-16 16:10:51 -07:00
Anis Elleuch
05685863e3 Cancel old logger/audit targets outside lock (#14927)
When configuring a new target, such as an audit target, the server waits
until all audit events are sent to the audit target before doing the
swap from the old to the new audit target. Therefore current S3 operations
can suffer from this since the audit swap lock will be held.

This behavior is unnecessary, as the new audit target can enter a
functional mode immediately and the old audit target will just cancel itself
at its own pace.
2022-05-16 13:32:36 -07:00
Domonkos Cinke
d324c0a1c3 Add PVC annotations to StatefulSet PVC templates (#14915) 2022-05-16 05:39:53 -07:00
Harshavardhana
03f8b25b50 disable connectDisks loop under testing (#14920)
avoids races during tests, keeps tests predictable
2022-05-16 05:36:00 -07:00
Anis Elleuch
b0e2c2da78 lifecycle: Support tags with special characters (#14906)
Object tags can have special characters such as whitespace. However
the current code doesn't properly consider those characters while
evaluating the lifecycle document.

ObjectInfo.UserTags contains a URL-encoded form of object tags
(e.g. key+1=val).

This commit fixes the issue by using the tags package to parse object tags.
2022-05-14 10:25:55 -07:00
Aditya Manthramurthy
f28a8eca91 Add Access Management Plugin tests with OpenID (#14919) 2022-05-13 12:48:02 -07:00
Anis Elleuch
ca69e54cb6 tests: Fix sporadic failure of TestXLStorageDeleteFile (#14911)
The test expects DeleteFile to return errDiskNotFound when the disk
is not available. It calls os.RemoveAll() to remove one disk after XL storage
initialization. However, the latter starts some goroutines which can
race with os.RemoveAll(), so the test sporadically fails with
random errors.

The commit will tweak the initialization routine of the XL storage to
only run deletion of temporary and metacache data in the background,
so TestXLStorageDeleteFile won't fail anymore.
2022-05-12 15:24:58 -07:00
Aditya Manthramurthy
4629abd5a2 Add tests for Access Management Plugin (#14909) 2022-05-12 15:24:19 -07:00
Harshavardhana
dc99f4a7a3 allow bucket to be listed when GetBucketLocation is enabled (#14903)
currently, we allowed buckets to be listed via the
API call if and when the user has ListObject()
permission at the global level; this is okay to
extend to GetBucketLocation() as well, since

GetBucketLocation() is a "read" call and allowing "reads"
on a bucket implicitly assumes that ListBuckets()
should be allowed.

This makes discoverability of access easier
for read-only users and users with specific restrictions on their
policies.
2022-05-12 10:46:20 -07:00
Krishna Srinivas
389ec21d0c Update documentation for /minio/health/cluster (#14889) 2022-05-12 09:54:07 -07:00
Harshavardhana
9341201132 logger lock should be more granular (#14901)
This PR simplifies a few things by splitting
the locks between audit and logger targets to
avoid potential contention between them.

any failures inside audit/logger HTTP
targets must only log to console instead
of other targets to avoid a cyclical dependency.

avoids unneeded atomic variables; instead
uses an RWLock to differentiate the more common
read phase from the lock phase.
2022-05-12 07:20:58 -07:00
Krishnan Parthasarathi
88dd83a365 lifecycle: Set opts.VersionSuspended when expiring objects (#14902) 2022-05-12 06:09:24 -07:00
Minio Trusted
74285d50c4 update console v0.16.3 2022-05-11 19:45:51 -07:00
Harshavardhana
60d0611ac2 use BadRequest HTTP status instead of Conflict for certain errors (#14900)
PutBucketVersioning API should return BadRequest for errors
instead of Conflict; Conflict is used for "AlreadyExists"
resource situations.
2022-05-11 13:44:16 -07:00
Harshavardhana
f939222942 add support for extra prometheus labels (#14899)
fixes #14353
2022-05-11 13:04:53 -07:00
Eric Qiu
c293c2e9a3 docs: update new name for MINIO_POLICY_OPA_URL (#14898) 2022-05-11 13:04:15 -07:00
Krishna Srinivas
e34ca9acd1 retry each object decom upto 3 times, in-case of failure (#14861) 2022-05-11 11:37:32 -07:00
Aditya Manthramurthy
83071a3459 Add support for Access Management Plugin (#14875)
- This change renames the OPA integration as Access Management Plugin - there is
nothing specific to OPA in the integration, it is just a webhook.

- OPA configuration is automatically migrated to Access Management Plugin and
OPA specific configuration is marked as deprecated.

- OPA doc is updated and moved.
2022-05-10 17:14:55 -07:00
Anis Elleuch
edf364bf21 tracing: Add disk path to storage tracing (#14883)
Example:

2022-05-09T17:14:04:000 [STORAGE] storage.ListVols 127.0.0.1:9000 /tmp/xl/2 / 227.834µs
2022-05-09T17:14:04:000 [STORAGE] storage.ListVols 127.0.0.1:9000 /tmp/xl/4 / 236.042µs
2022-05-09T17:14:04:000 [STORAGE] storage.ListVols 127.0.0.1:9000 /tmp/xl/3 / 130.958µs
2022-05-09T17:14:04:000 [STORAGE] storage.ListVols 127.0.0.1:9000 /tmp/xl/1 / 102.875µs
2022-05-10 07:48:07 -07:00
Anis Elleuch
1e037883b0 pools: GetObjectNInfo should cover locking during object read (#14887)
In a multi-pool setup, GetObjectNInfo returns a GetObjectReader
but unlocks the read lock when quitting GetObjectNInfo. This should
not happen; the unlock should only happen when the GetObjectReader is closed.
2022-05-10 07:47:40 -07:00
Klaus Post
d909f167ff tests: Add localLocker RUnlock test (#14882) 2022-05-09 09:55:52 -07:00
Minio Trusted
4592aaa3e2 update helm v4.0.2 2022-05-08 21:25:47 -07:00
Minio Trusted
95d1a12422 Update yaml files to latest version RELEASE.2022-05-08T23-50-31Z 2022-05-09 03:46:40 +00:00
Harshavardhana
62aa42cccf avoid replication proxy on version excluded paths (#14878)
no need to attempt proxying objects that were
never replicated, but do have local `null`
versions on them.
2022-05-08 16:50:31 -07:00
Harshavardhana
5cffd3780a fix: multiple fixes in prefix exclude implementation (#14877)
- no need to restrict prefix exclusions that do not
  have `/` as a suffix; relax this requirement as Spark may
  have staging folders with other autogenerated characters,
  so we are better off doing a full prefix match and skipping. 

- multiple delete objects was incorrectly creating a
  null delete marker on a versioned bucket instead of
  creating a proper versioned delete marker.

- do not suspend paths on the excluded prefixes during
  delete operations to avoid creating `null` delete markers,
  honor suspension of versioning only at bucket level for
  delete markers.
2022-05-07 22:06:44 -07:00
Harshavardhana
def75ffcfe allow versioning config changes under site replication (#14876)
PR #14828 introduced prefix-level exclusion of versioning
and replication - however, since our site replication implementation
defaults versioning on all buckets, it did not allow
changing the versioning configuration once the bucket was created.

This PR changes this and ensures that such changes are honored
and also propagated/healed across sites appropriately.
2022-05-07 18:39:40 -07:00
Krishnan Parthasarathi
ad8e611098 feat: implement prefix-level versioning exclusion (#14828)
Spark/Hadoop workloads which use Hadoop MR 
Committer v1/v2 algorithm upload objects to a 
temporary prefix in a bucket. These objects are 
'renamed' to a different prefix on Job commit. 
Object storage admins are forced to configure 
separate ILM policies to expire these objects 
and their versions to reclaim space.

Our solution:

This can be avoided by simply marking objects 
under these prefixes to be excluded from versioning, 
as shown below. Consequently, these objects are 
excluded from replication, and don't require ILM 
policies to prune unnecessary versions.

-  MinIO Extension to Bucket Version Configuration
```xml
<VersioningConfiguration xmlns="http://s3.amazonaws.com/doc/2006-03-01/"> 
        <Status>Enabled</Status>
        <ExcludeFolders>true</ExcludeFolders>
        <ExcludedPrefixes>
          <Prefix>app1-jobs/*/_temporary/</Prefix>
        </ExcludedPrefixes>
        <ExcludedPrefixes>
          <Prefix>app2-jobs/*/__magic/</Prefix>
        </ExcludedPrefixes>

        <!-- .. up to 10 prefixes in all -->     
</VersioningConfiguration>
```
Note: `ExcludeFolders` excludes all folders in a bucket 
from versioning. This is required to prevent the parent 
folders from accumulating delete markers, especially
those which are shared across spark workloads 
spanning projects/teams.

- To enable version exclusion on a list of prefixes

```
mc version enable --excluded-prefixes "app1-jobs/*/_temporary/,app2-jobs/*/_magic," --exclude-prefix-marker myminio/test
```
2022-05-06 19:05:28 -07:00
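A sketch of how the extended configuration above could be modeled with `encoding/xml`; the struct and field names are assumptions for illustration, not MinIO's actual types:

```go
package versioning

import "encoding/xml"

// A guess at the shape of the extended configuration; not MinIO's types.
// Each repeated <ExcludedPrefixes> element carries one <Prefix>.
type VersioningConfiguration struct {
	XMLName          xml.Name `xml:"VersioningConfiguration"`
	Status           string   `xml:"Status"`
	ExcludeFolders   bool     `xml:"ExcludeFolders"`
	ExcludedPrefixes []struct {
		Prefix string `xml:"Prefix"`
	} `xml:"ExcludedPrefixes"`
}
```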
Shireesh Anjal
3ec1844e4a return kubernetes info in health report (#14865) 2022-05-06 12:41:07 -07:00
Poorna
523670ba0d fix: site removal API error handling (#14870)
Handle the case where the site being removed is missing its replication
config. This can happen when a new deployment is brought up in place of a
site that is lost/destroyed and the old deployment needs to be delinked
from site replication.
2022-05-06 12:40:34 -07:00
Harshavardhana
35dea24ffd fix: console log peer API from its broken implementation (#14873)
the console logging peer API was broken as it would
time out after 15 minutes; it never really worked
beyond this value and basically failed to provide
the streaming "log" functionality that was expected
from this implementation.

also fix the convoluted channel handling by keeping things
simple; this is rewritten.
2022-05-06 12:39:58 -07:00
Aditya Manthramurthy
e55104a155 Reorganize OpenID config (#14871)
- Split into multiple files
- Remove JSON unmarshaler for Config and providerCfg types (unused)
2022-05-05 13:40:06 -07:00
Klaus Post
111745c564 Add "enable" to config help (#14866)
Most help sections were missing "enable", which means it
is filtered out with `mc admin config get --json`.

Add it where missing.
2022-05-05 04:17:04 -07:00
Harshavardhana
c7df1ffc6f avoid concurrent reads and writes to opts.UserDefined (#14862)
do not modify opts.UserDefined after the object handler
has set all the necessary values; any mutation needed
should be done on a copy of this value, not directly.

Since other pieces of code access opts.UserDefined
concurrently, this becomes challenging otherwise.

fixes #14856
2022-05-05 04:14:41 -07:00
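The safe pattern is to mutate a copy of the map, roughly as in this sketch (names and the example key are illustrative):

```go
package objopts

// cloneUserDefined copies the metadata map so callers can mutate the copy
// without racing with concurrent readers of the original.
func cloneUserDefined(src map[string]string) map[string]string {
	dst := make(map[string]string, len(src)+1)
	for k, v := range src {
		dst[k] = v
	}
	return dst
}

// Example: set an extra key on the copy only; the key is illustrative.
func withStatus(opts map[string]string, status string) map[string]string {
	o := cloneUserDefined(opts)
	o["x-internal-status"] = status
	return o
}
```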
Aditya Manthramurthy
2b7e75e079 Add OPA doc and remove deprecation marking (#14863) 2022-05-04 23:53:42 -07:00
Domonkos Cinke
bcdaa09c75 add missing annotations for PVCs in vanilla helm chart (#14793) 2022-05-04 10:02:55 -07:00
Minio Trusted
2fc65dcb99 Update yaml files to latest version RELEASE.2022-05-04T07-45-27Z 2022-05-04 08:54:16 +00:00
Anis Elleuch
44a3b58e52 Add audit log for decommissioning (#14858) 2022-05-04 00:45:27 -07:00
Minio Trusted
0a256053ee Update yaml files to latest version RELEASE.2022-05-03T20-36-08Z 2022-05-03 21:27:19 +00:00
Anis Elleuch
46de9ac03e Decom: Easily restart decommission when it is done (#14855)
When a decommission task is successfully completed, failed, or canceled,
this commit allows restarting the decommission again. Restarting is not
allowed when there is an ongoing decommission task.
2022-05-03 13:36:08 -07:00
Aditya Manthramurthy
a53dc1d9c8 Update console to v0.16.2 (#14857) 2022-05-03 13:33:22 -07:00
Harshavardhana
f0462322fd fix: remove embedded-policy as requested by the user (#14847)
this PR introduces a few changes such as

- sessionPolicyName is no longer reused in an extracted manner
  to apply policies for incoming authenticated calls;
  instead a different key is used to designate this
  information for the callers.

- this differentiation is needed to ensure that service
  account updates do not accidentally store the JSON representation
  instead of the base64 equivalent on disk.

- relax requirements for deleting a service account; allow
  deleting a service account that might be unreadable, i.e.
  a situation where the user might have removed the session policy 
  which now carries a JSON representation, making it unparsable.

- introduce some constants to reuse instead of strings.

fixes #14784
2022-05-02 17:56:19 -07:00
Klaus Post
c59d2a6288 Log Range Header if present in the request (#14851)
Add the Range header as a param for easier debugging of Range requests.
2022-05-02 10:37:26 -07:00
Klaus Post
3e3ff2a70b Check error status codes (#14850)
If an invalid status code is generated from an error, we risk panicking. Even if there 
are no potential problems at the moment, we should prevent this in the future.

Add safeguards against this.

Sample trace:

```
May 02 06:41:39   minio[52806]: panic: "GET /20180401230655.PDF": invalid WriteHeader code 0
May 02 06:41:39   minio[52806]: goroutine 16040430822 [running]:
May 02 06:41:39   minio[52806]: runtime/debug.Stack(0xc01fff7c20, 0x25c4b00, 0xc0490e4080)
May 02 06:41:39   minio[52806]:         runtime/debug/stack.go:24 +0x9f
May 02 06:41:39   minio[52806]: github.com/minio/minio/cmd.setCriticalErrorHandler.func1.1(0xc022048800, 0x4f38ab0, 0xc0406e0fc0)
May 02 06:41:39   minio[52806]:         github.com/minio/minio/cmd/generic-handlers.go:469 +0x85
May 02 06:41:39   minio[52806]: panic(0x25c4b00, 0xc0490e4080)
May 02 06:41:39   minio[52806]:         runtime/panic.go:965 +0x1b9
May 02 06:41:39   minio[52806]: net/http.checkWriteHeaderCode(...)
May 02 06:41:39   minio[52806]:         net/http/server.go:1092
May 02 06:41:39   minio[52806]: net/http.(*response).WriteHeader(0xc0406e0fc0, 0x0)
May 02 06:41:39   minio[52806]:         net/http/server.go:1126 +0x718
May 02 06:41:39   minio[52806]: github.com/minio/minio/internal/logger.(*ResponseWriter).WriteHeader(0xc032fa3ea0, 0x0)
May 02 06:41:39   minio[52806]:         github.com/minio/minio/internal/logger/audit.go:116 +0xb1
May 02 06:41:39   minio[52806]: github.com/minio/minio/internal/logger.(*ResponseWriter).WriteHeader(0xc032fa3f40, 0x0)
May 02 06:41:39   minio[52806]:         github.com/minio/minio/internal/logger/audit.go:116 +0xb1
May 02 06:41:39   minio[52806]: github.com/minio/minio/internal/logger.(*ResponseWriter).WriteHeader(0xc002ce8000, 0x0)
May 02 06:41:39   minio[52806]:         github.com/minio/minio/internal/logger/audit.go:116 +0xb1
May 02 06:41:39   minio[52806]: github.com/minio/minio/cmd.writeResponse(0x4f364a0, 0xc002ce8000, 0x0, 0xc0443b86c0, 0x1cb, 0x224, 0x2a9651e, 0xf)
May 02 06:41:39   minio[52806]:         github.com/minio/minio/cmd/api-response.go:736 +0x18d
May 02 06:41:39   minio[52806]: github.com/minio/minio/cmd.writeErrorResponse(0x4f44218, 0xc069086ae0, 0x4f364a0, 0xc002ce8000, 0x0, 0x0, 0x0, 0x0, 0x0, 0xc00656afc0)
May 02 06:41:39   minio[52806]:         github.com/minio/minio/cmd/api-response.go:798 +0x306
May 02 06:41:39   minio[52806]: github.com/minio/minio/cmd.objectAPIHandlers.getObjectHandler(0x4b73768, 0x4b73730, 0x4f44218, 0xc069086ae0, 0x4f82090, 0xc002d80620, 0xc040e03885, 0xe, 0xc040e03894, 0x61, ...)
May 02 06:41:39   minio[52806]:         github.com/minio/minio/cmd/object-handlers.go:456 +0x252c
```
2022-05-02 10:36:29 -07:00
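The safeguard boils down to clamping out-of-range status codes before writing, roughly:

```go
package httpsafe

import "net/http"

// safeWriteHeader clamps out-of-range codes: net/http panics on status
// codes outside 100-999, and a zero code from a missed error mapping
// (as in the trace above) hits exactly that panic.
func safeWriteHeader(w http.ResponseWriter, code int) {
	if code < 100 || code > 999 {
		code = http.StatusInternalServerError
	}
	w.WriteHeader(code)
}
```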
Harshavardhana
16bc11e72e fix: disallow newer policies, users & groups with space characters (#14845)
space characters at the beginning or at the end can lead to
confusion under various UI elements in differentiating the
actual name of a "policy, user or group" - to avoid this,
from this PR onwards we shall reject such inputs for newer entries.

existing saved entries will behave as-is and are going to be
operable until they are removed/renamed to something more
meaningful.
2022-05-02 09:27:35 -07:00
Harshavardhana
2719f1efaa fix: reject invalid r.Host headers (#14846)
r.Host headers can come in unparsed and may contain
invalid hostnames; reject such requests as invalid.

This is a continuation fix from #14844
2022-05-02 04:42:41 -07:00
Minio Trusted
cff1be0ae8 update helm release to v4.0.1 2022-05-01 23:10:34 -07:00
Harshavardhana
39ac62a1a1 fix: panic in browser redirect handler for unexpected r.Host (#14844)
```
panic: "GET /": invalid hostname
goroutine 148 [running]:
runtime/debug.Stack()
	runtime/debug/stack.go:24 +0x65
github.com/minio/minio/cmd.setCriticalErrorHandler.func1.1()
	github.com/minio/minio/cmd/generic-handlers.go:469 +0x8e
panic({0x2201f00, 0xc001f1ddd0})
	runtime/panic.go:1038 +0x215
github.com/minio/pkg/net.URL.String({{0x25aa417, 0x5}, {0x0, 0x0}, 0x0, {0xc000174380, 0xd7}, {0x0, 0x0}, {0x0, ...}, ...})
	github.com/minio/pkg@v1.1.23/net/url.go:97 +0xfe
github.com/minio/minio/cmd.setBrowserRedirectHandler.func1({0x49af080, 0xc0003c20e0}, 0xc00002ea00)
	github.com/minio/minio/cmd/generic-handlers.go:136 +0x118
net/http.HandlerFunc.ServeHTTP(0xc00002ea00, {0x49af080, 0xc0003c20e0}, 0xa)
	net/http/server.go:2047 +0x2f
github.com/minio/minio/cmd.setAuthHandler.func1({0x49af080, 0xc0003c20e0}, 0xc00002ea00)
	github.com/minio/minio/cmd/auth-handler.go:525 +0x3d8
net/http.HandlerFunc.ServeHTTP(0xc00002e900, {0x49af080, 0xc0003c20e0}, 0xc001f33701)
	net/http/server.go:2047 +0x2f
github.com/gorilla/mux.(*Router).ServeHTTP(0xc0025d0780, {0x49af080, 0xc0003c20e0}, 0xc00002e800)
	github.com/gorilla/mux@v1.8.0/mux.go:210 +0x1cf
github.com/rs/cors.(*Cors).Handler.func1({0x49af080, 0xc0003c20e0}, 0xc00002e800)
	github.com/rs/cors@v1.7.0/cors.go:219 +0x1bd
net/http.HandlerFunc.ServeHTTP(0x0, {0x49af080, 0xc0003c20e0}, 0xc00068d9f8)
	net/http/server.go:2047 +0x2f
github.com/minio/minio/cmd.setCriticalErrorHandler.func1({0x49af080, 0xc0003c20e0}, 0x4a5cd3)
	github.com/minio/minio/cmd/generic-handlers.go:476 +0x83
net/http.HandlerFunc.ServeHTTP(0x72, {0x49af080, 0xc0003c20e0}, 0x0)
	net/http/server.go:2047 +0x2f
github.com/minio/minio/internal/http.(*Server).Start.func1({0x49af080, 0xc0003c20e0}, 0x10000c001f1dda0)
	github.com/minio/minio/internal/http/server.go:105 +0x1b6
net/http.HandlerFunc.ServeHTTP(0x0, {0x49af080, 0xc0003c20e0}, 0x46982e)
	net/http/server.go:2047 +0x2f
net/http.serverHandler.ServeHTTP({0xc003dc1950}, {0x49af080, 0xc0003c20e0}, 0xc00002e800)
	net/http/server.go:2879 +0x43b
net/http.(*conn).serve(0xc000514d20, {0x49cfc38, 0xc0010c0e70})
	net/http/server.go:1930 +0xb08
created by net/http.(*Server).Serve
	net/http/server.go:3034 +0x4e8
```
2022-05-01 13:45:45 -07:00
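A sketch of rejecting unparsable Host headers up front, so downstream code never sees them; a real check would validate hostname syntax more strictly:

```go
package hostcheck

import (
	"net"
	"net/http"
)

// validHost rejects empty or unparsable Host headers before they reach
// code (such as the browser redirect handler) that assumes a well-formed
// hostname.
func validHost(r *http.Request) bool {
	host := r.Host
	if h, _, err := net.SplitHostPort(host); err == nil {
		host = h // strip the port when one is present
	}
	return host != ""
}
```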
Minio Trusted
f427dbbd60 Update yaml files to latest version RELEASE.2022-04-30T22-23-53Z 2022-05-01 01:33:28 +00:00
Harshavardhana
c3f689a7d9 JWKS should be parsed before usage (#14842)
fixes #14811
2022-04-30 15:23:53 -07:00
Harshavardhana
85f3a9f3b0 Remove Azure gateway implementation (#14418)
refer #14331
2022-04-29 12:51:23 -07:00
Klaus Post
13ba4b433d Clean up cpuio profiling (#14838)
Don't start the regular CPU profile as well. Use the madmin const.
2022-04-29 09:35:42 -07:00
Minio Trusted
96f27a4965 Update yaml files to latest version RELEASE.2022-04-29T01-27-09Z 2022-04-29 06:32:50 +00:00
Aditya Manthramurthy
0e502899a8 Add support for multiple OpenID providers with role policies (#14223)
- When using multiple providers, claim-based providers are not allowed. All
providers must use role policies.

- Update markdown config to allow `details` HTML element
2022-04-28 18:27:09 -07:00
Harshavardhana
424b44c247 allow changing server command line from http->https (#14832)
this is allowed as long as the order is preserved as-is
on an existing setup; the new command line is updated
in `pool.bin` to facilitate future decommissions on
these pools.
2022-04-28 16:27:53 -07:00
Harshavardhana
01a71c366d allow service accounts and temp credentials site-level healing (#14829)
This PR introduces support for site level

- service account healing
- temporary credentials healing
2022-04-28 02:39:00 -07:00
Harshavardhana
990fbeb3a4 rename true/false to on/off in bucket notification docs 2022-04-27 23:51:31 -07:00
Harshavardhana
5a9a898ba2 allow forcibly creating metadata on buckets (#14820)
introduce the x-minio-force-create environment variable
to force-create a bucket and its metadata as required;
it is useful in some situations when bucket metadata
needs recovery.
2022-04-27 04:44:07 -07:00
Sidhartha Mani
fe1fbe0005 standardize config help defaults (#14788) 2022-04-26 20:11:37 -07:00
Harshavardhana
c56a139fdc fix: support decommissioning directory objects (#14822)
improvements in this PR include

- decommission objects that have __XLDIR__ suffix
- decommission objects that have `null` version on
  a versioned bucket.
- make sure to look for any "decom" failures to ensure
  that we do not wrongly conclude decom as complete without
  all files getting copied over.
- break out eagerly upon the first error for objects with
  multiple versions; leave the object as-is to support
  debugging and analysis.
2022-04-26 20:06:41 -07:00
Anis Elleuch
df50eda811 Add number of versions in server info API (#14812)
The goal is to show the number of versions in the server info API.
2022-04-25 22:04:10 -07:00
Aditya Manthramurthy
f5d3313210 Increase context timeout for IAM concurrency test (#14817)
- This should reduce failures in Windows CI
2022-04-25 20:14:20 -07:00
Minio Trusted
97fcc9ff99 update helm release to v4.0.0 removes gcs gateway support
newer MinIO server removes "gcs" gateway support as per #14331
2022-04-25 19:41:39 -07:00
Minio Trusted
8a6b2b4447 Update yaml files to latest version RELEASE.2022-04-26T01-20-24Z 2022-04-26 02:08:20 +00:00
Aditya Manthramurthy
757eaeae92 Update console to v0.16.0 (#14816) 2022-04-25 18:20:24 -07:00
Daniel Valdivia
b7dd61f6bc Fix double slash subpath for console (#14815)
Signed-off-by: Daniel Valdivia <18384552+dvaldivia@users.noreply.github.com>
2022-04-25 13:05:56 -07:00
Minio Trusted
d2a95a04a4 update pkg v1.1.22 2022-04-25 10:33:38 -07:00
Harshavardhana
0cc993f403 Remove GCS, HDFS gateway implementations #14418
refer #14331
2022-04-24 10:19:17 -07:00
Poorna
3a64580663 Add support for site replication healing (#14572)
heal bucket metadata and IAM entries for
sites participating in site replication from
the site with the most updated entry.

Co-authored-by: Harshavardhana <harsha@minio.io>
Co-authored-by: Aditya Manthramurthy <aditya@minio.io>
2022-04-24 02:36:31 -07:00
Harshavardhana
d087e28dce start using t.SetEnv instead of os.Setenv (#14787) 2022-04-23 15:33:45 -07:00
Klaus Post
96adfaebe1 Make storage class config dynamic (#14791)
Updating the storage class is already thread safe, so we can do this safely.
2022-04-21 12:07:33 -07:00
Aditya Manthramurthy
ddf84f8257 fix: concurrency bug in site-replication (#14786)
The site replication status call was using a loop iteration variable sent
directly into go-routines instead of being passed as an argument. As the
variable is being updated in the loop, previously launched go routines do not
necessarily use the value at the time they were launched.
2022-04-20 16:20:07 -07:00
Harshavardhana
507f993075 attempt to real resolve when there is a quorum failure on reads (#14613) 2022-04-20 12:49:05 -07:00
Harshavardhana
73a6a60785 fix: replication deleteObject() regression and CopyObject() behavior (#14780)
This PR fixes two issues

- The first fix is a regression from #14555, the fix itself in #14555
  is correct but the interpretation of that information by the
  object layer code for "replication" was not correct. This PR
  tries to fix this situation by making sure the "Delete" replication
  works as expected when "VersionPurgeStatus" is already set.

  Without this fix, there is a DELETE marker created incorrectly on
  the source where the "DELETE" was triggered.

- The second fix is perhaps an older problem that started when we began
  inlining data on disk for small objects: CopyObject() incorrectly inlines
  non-inlined data. This is due to the fact that we have code where
  we read the `part.1` under certain conditions where the size of the
  `part.1` is less than a specific "threshold".

  This eventually causes problems when we are "deleting" data that
  is only inlined, which means the dataDir is ignored, leaving such a
  dataDir on the disk; that looks like inconsistent content in
  the namespace.

fixes #14767
2022-04-20 10:22:05 -07:00
Anis Elleuch
cf4cf58faf Do not allow parallel upgrade in one server (#14782)
It is wasteful to allow parallel upgrades of the MinIO server. This also generates
weird errors from the selfupdate module when it happens, such as:

'rename /opt/bin/.minio.old /opt/bin/..minio.old.old'
2022-04-20 06:18:21 -07:00
polaris-megrez
6bc3c74c0c honor client context in IAM user/policy listing calls (#14682) 2022-04-19 09:00:19 -07:00
Harshavardhana
598ce1e354 supply prefix filtering when necessary (#14772)
currently filterPrefix - which would filter out entries
when `prefix` doesn't end with `/` - was never set
or used; this often leads to objects getting Walked() and Healed()
that were never requested by the caller.
2022-04-19 08:20:48 -07:00
Aditya Manthramurthy
4685b76e08 Update dperf v0.3.6 (#14773) 2022-04-19 02:40:36 -07:00
Minio Trusted
78c9109f6c update console to v0.15.14 2022-04-18 17:29:56 -07:00
Harshavardhana
7e248fc0ba wait on parallel decom to complete before returning (#14764)
without this wait there is a potential for some objects
that are actively being decommissioned to be canceled,
while the decommission status wrongly concludes
this as "Complete".

To avoid this, make sure to add waitgroups on the parallel
workers, allowing parallel copies to complete fully before
we return.
2022-04-18 13:26:29 -07:00
Daniel Valdivia
c526fa9119 Support console UI access at a subpath on a subdomain (#14761)
fixes #14285 

Signed-off-by: Daniel Valdivia <18384552+dvaldivia@users.noreply.github.com>
2022-04-17 16:01:49 -07:00
Harshavardhana
520e0fd985 update helm to v3.6.6 2022-04-17 14:46:44 -07:00
Yi Siqi
54a7eba358 Support overriding existing secrets (#14690) 2022-04-16 07:36:50 -07:00
Minio Trusted
1494ba2e6e Update yaml files to latest version RELEASE.2022-04-16T04-26-02Z 2022-04-16 05:03:00 +00:00
Anis Elleuch
a5b3548ede Bring back listing LDAP users temporarly (#14760)
In previous releases, mc admin user list would return the list of users
that have policies mapped in the IAM database. However, this was removed;
this commit will bring it back until we revamp this.
2022-04-15 21:26:02 -07:00
Harshavardhana
8318aa0113 cancel active routine only after metadata has been saved (#14757)
currently the updated pool.bin was not saved properly, which would
make it impossible to remove a pool upon a successful decommission.

fixes #14756
2022-04-15 13:16:15 -07:00
Harshavardhana
e69c42956b fix: IAM reload should only list at config/iam/ precisely (#14753) 2022-04-15 12:12:45 -07:00
Harshavardhana
53ca589c11 update deps for minio-go/v7 and jwt/v4 2022-04-15 00:50:22 -07:00
Daniel Valdivia
ca8ff8718e Update Console v0.15.13 (#14751)
Signed-off-by: Daniel Valdivia <18384552+dvaldivia@users.noreply.github.com>
2022-04-14 18:35:00 -07:00
Aditya Manthramurthy
e8e48e4c4a S3 select switch to new parquet library and reduce locking (#14731)
- This change switches to a new parquet library
- SelectObjectContent now takes a single lock at the beginning and holds it
during the operation. Previously the operation took a lock every time the
parquet library performed a Seek on the underlying object stream.
- Add basic support for LogicalType annotations for timestamps.
2022-04-14 06:54:47 -07:00
Minio Trusted
67e17ed3f8 update helm v3.6.5
Signed-off-by: Minio Trusted <trusted@minio.io>
2022-04-13 15:45:54 -07:00
Harshavardhana
2a6a40e93b enable go1.18.x builds (#14746) 2022-04-13 14:21:55 -07:00
Harshavardhana
eda34423d7 update gofumpt -w - new changes 2022-04-13 12:00:11 -07:00
Yi Siqi
7ce1f6e736 Support templating accessKey existingSecret and bucket name (#14643) 2022-04-13 11:58:29 -07:00
Shireesh Anjal
5c53620a72 Include speedtest as part of healthinfo api (#14696)
Execute the object, drive and net speedtests as part of the healthinfo
(if requested by the client), and include their result in the response.

The options for the speedtests have been picked from the default values
used by `mc support perf` command.
2022-04-12 13:17:44 -07:00
Krishna Srinivas
5f94cec1e2 Allow parallel decom migration threads to be more than erasure sets (#14733) 2022-04-12 10:49:53 -07:00
Minio Trusted
646350fa7f Update yaml files to latest version RELEASE.2022-04-12T06-55-35Z 2022-04-12 07:23:20 +00:00
Aditya Manthramurthy
e162a055cc Bump up console to v0.15.11 (#14734) 2022-04-11 23:55:35 -07:00
Krishnan Parthasarathi
28d3ad3ada Honor object retention when applying ILM policies (#14732) 2022-04-11 21:55:56 -07:00
Harshavardhana
0bd44a7764 update helm v3.6.4 2022-04-11 18:30:28 -07:00
Aditya Manthramurthy
8be6d887e2 Bump up dperf to 0.3.5 (#14730) 2022-04-11 15:50:15 -07:00
Aditya Manthramurthy
66b14a0d32 Fix service account privilege escalation (#14729)
Ensure that a regular unprivileged user is unable to create service accounts for other users/root.
2022-04-11 15:30:28 -07:00
Harshavardhana
153a612253 fetch bucket retention config once for ILM evalAction (#14727)
This is mainly an optimization, does not change any
existing functionality.
2022-04-11 13:25:32 -07:00
Krishnan Parthasarathi
1a1b55e133 Add support for minio tier type (#14468) 2022-04-11 13:24:40 -07:00
Naveen
879de20edf Set permissions for GitHub actions (#14693)
- Included permissions for the action. https://github.com/ossf/scorecard/blob/main/docs/checks.md#token-permissions

https://docs.github.com/en/actions/using-jobs/assigning-permissions-to-jobs

[Keeping your GitHub Actions and workflows secure Part 1: Preventing pwn requests](https://securitylab.github.com/research/github-actions-preventing-pwn-requests/)

> Restrict the GitHub token permissions only to the required ones; this way, even if the attackers will succeed in compromising your workflow, they won’t be able to do much.

https://www.legitsecurity.com/blog/github-privilege-escalation-vulnerability

Signed-off-by: naveensrinivasan <172697+naveensrinivasan@users.noreply.github.com>
2022-04-11 02:45:59 -07:00
Harshavardhana
e77ad3f9bb make sure to pass Lifecycle if set for List filtering (#14722)
PR #14606 never really passed the Lifecycle filter
down to the listing callers to ensure skipping the
entries.
2022-04-10 11:14:52 -07:00
Harshavardhana
4ce86ff5fa align atomic variables once more for 32bit (#14721) 2022-04-09 22:19:44 -07:00
Daniel Valdivia
e290c010e6 Console v0.15.10 (#14723)
Signed-off-by: Daniel Valdivia <hola@danielvaldivia.com>
2022-04-09 20:55:36 -07:00
Minio Trusted
33d267fa1b Update yaml files to latest version RELEASE.2022-04-09T15-09-52Z 2022-04-09 20:23:18 +00:00
Harshavardhana
601a744159 pass the necessary query params for remote NSScanner (#14719)
fixes a regression from #14464
2022-04-09 08:09:52 -07:00
Minio Trusted
f630d7c3fa Update yaml files to latest version RELEASE.2022-04-08T19-44-35Z 2022-04-08 23:35:38 +00:00
Harshavardhana
91bfefcf8c move back go.mod to 1.17 2022-04-08 16:25:20 -07:00
Poorna
a1b01e6d5f Combine profiling start/stop APIs into one (#14662)
Take profile duration as a query parameter for profile API
2022-04-08 12:44:35 -07:00
Krishna Srinivas
48594617b5 Parallelize decommissioning process (#14704) 2022-04-07 23:19:13 -07:00
Krishna Srinivas
b35b9dcff7 Use S3 client for uploads/downloads during perf test (#14570) 2022-04-07 21:20:40 -07:00
Lenin Alevski
a3e317773a Skip commented lines when parsing MinIO configuration file (#14710)
Signed-off-by: Lenin Alevski <alevsk.8772@gmail.com>
2022-04-07 16:02:51 -07:00
Anis Elleuch
16431d222c heal: Enable periodic bitrot scan configuration (#14464) 2022-04-07 08:10:40 -07:00
Harshavardhana
ee49a23220 resume/start decommission on the first node of the pool under decommission (#14705)
Additionally fixes

- IsSuspended() can use read locks
- Avoid double cancels panic on canceler
2022-04-06 23:42:05 -07:00
Harshavardhana
a9eef521ec skip config/history/ during IAM load (#14698) 2022-04-06 21:03:41 -07:00
Klaus Post
901d33b59c Tweak listing quorum (#14703)
Always go for 50% quorum, and only use non-healing disks.

Fixes #14635
2022-04-06 12:24:21 -07:00
Daniel Valdivia
255116fde7 Update Console Dependency to v0.15.9 (#14699)
Signed-off-by: Daniel Valdivia <hola@danielvaldivia.com>
2022-04-05 20:46:17 -07:00
Harshavardhana
00ebea2536 skip config/history/ during IAM load (#14698) 2022-04-05 19:00:59 -07:00
Klaus Post
dedf9774c7 Set inspect-input.txt modtime (#14688)
If no time given, use current time.
2022-04-05 13:06:10 -07:00
Andreas Auernhammer
6b1c62133d listing: improve listing of encrypted objects (#14667)
This commit improves the listing of encrypted objects:
 - Use `etag.Format` and `etag.Decrypt`
 - Detect SSE-S3 single-part objects in a single iteration
 - Fix batch size to `250`
 - Pass request context to `DecryptAll` to not waste resources
   when a client cancels the operation.

Signed-off-by: Andreas Auernhammer <hi@aead.dev>
2022-04-04 11:42:03 -07:00
Anis Elleuch
d4251b2545 Remove unnecessary log printing (#14685)
Co-authored-by: Anis Elleuch <anis@min.io>
2022-04-04 11:10:06 -07:00
Andreas Auernhammer
b9d1698d74 etag: add Format and Decrypt functions (#14659)
This commit adds two new functions to the
internal `etag` package:
 - `ETag.Format`
 - `Decrypt`

The `Decrypt` function decrypts an encrypted
ETag using a decryption key. It returns non-encrypted
and multipart ETags unmodified.

The `Decrypt` function is mainly used when
handling SSE-S3 encrypted single-part objects.
In particular, the ETag of an SSE-S3 encrypted
single-part object needs to be decrypted since
S3 clients expect that this ETag is equal to the
content MD5.

The `ETag.Format` method also covers SSE ETag handling.
MinIO encrypts all ETags of SSE single part objects.
However, only the ETag of SSE-S3 encrypted single part
objects needs to be decrypted.
The ETag of an SSE-C or SSE-KMS single part object
does not correspond to its content MD5 and can be
a random value.
The `ETag.Format` function formats an ETag such that
it is an AWS S3 compliant ETag. In particular, it
returns non-encrypted ETags (single / multipart)
unmodified. However, for encrypted ETags it returns
the trailing 16 bytes as ETag. For encrypted ETags
the last 16 bytes will be a random value.

The main purpose of `Format` is to format ETags
such that clients accept them as well-formed AWS S3
ETags.
It differs from the `String` method since `String`
will return string representations for encrypted
ETags that are not AWS S3 compliant.

Signed-off-by: Andreas Auernhammer <hi@aead.dev>
2022-04-03 13:29:13 -07:00
Shireesh Anjal
7c696e1cb6 Write deployment id to health report at the start (#14673)
The deployment id was being written to the health report towards the end
of the handler. Because of this, if there was a timeout in any of the
data fetching, the deployment id was not getting written at all. Upload
of such reports fails on SUBNET as deployment id is the unique
identifier for a cluster in subnet.

Fixed by writing the deployment id at the beginning of the processing.
2022-04-03 13:15:02 -07:00
Aditya Manthramurthy
165d60421d Add metrics for observing IAM sync operations (#14680) 2022-04-03 13:08:59 -07:00
Minio Trusted
c7962118f8 Update yaml files to latest version RELEASE.2022-04-01T03-41-39Z 2022-04-01 08:23:40 +00:00
Aditya Manthramurthy
892a204013 Update console to v0.15.8 (#14671) 2022-03-31 20:41:39 -07:00
Poorna
0e6aedc7ed Capture cmdline args for inspect API (#14668)
Co-authored-by: Poorna Krishnamoorthy <poorna@minio.io>
2022-03-31 16:05:43 -07:00
Naveen
c547a4d835 Pin actions to a full length commit SHA (#14590)
- Pinned actions by SHA https://github.com/ossf/scorecard/blob/main/docs/checks.md#pinned-dependencies
- Included permissions for the action. https://github.com/ossf/scorecard/blob/main/docs/checks.md#token-permissions

https://docs.github.com/en/actions/security-guides/security-hardening-for-github-actions#using-third-party-actions

Also, dependabot supports upgrades based on SHA.
2022-03-31 10:12:53 -07:00
Aditya Manthramurthy
fc9668baa5 Increase IAM refresh rate to every 10 mins (#14661)
Add timing information for IAM init and refresh
2022-03-30 17:02:59 -07:00
Andreas Auernhammer
ba17d46f15 ListObjectParts: simplify ETag decryption and size adjustment (#14653)
This commit simplifies the ETag decryption and size adjustment
when listing object parts.

When listing object parts, MinIO has to decrypt the ETag of all
parts if and only if the object, and hence its parts, is encrypted
using SSE-S3.
In case of SSE-KMS and SSE-C, MinIO returns a pseudo-random ETag.
This is inline with AWS S3 behavior.

Further, MinIO has to adjust the size of all encrypted parts due to
the encryption overhead.

ListObjectParts specifically does not use the KMS bulk decryption
API (4d2fc530d0) since the ETags of all
parts are encrypted using the same object encryption key. Therefore,
MinIO only has to connect to the KMS once, even if there are multiple
parts and hence multiple ETags; it can simply reuse the same object encryption key.

Signed-off-by: Andreas Auernhammer <hi@aead.dev>
2022-03-30 15:23:25 -07:00
Harshavardhana
54a4f93854 update CREDITS 2022-03-30 14:09:39 -07:00
Krishna Srinivas
bdd816488d Get the BackendInfo to fill the appropriate struct fields (#14660) 2022-03-30 10:48:35 -07:00
Krishna Srinivas
36dcfee2f7 Allow decommission of pool even if a drive in it is down (#14656) 2022-03-29 22:51:31 -07:00
Poorna
4d13ddf6b3 Avoid shadowing error during replication proxy check (#14655)
Fixes #14652
2022-03-29 10:53:09 -07:00
Poorna
9e25475475 Validate tier manager is initialized in tier Empty() check (#14646)
Co-authored-by: Poorna Krishnamoorthy <poorna@minio.io>
2022-03-29 10:10:06 -07:00
Andreas Auernhammer
e955aa7f2a kes: add support for encrypted private keys (#14650)
This commit adds support for encrypted KES
client private keys.

Now, it is possible to encrypt the KES client
private key (`MINIO_KMS_KES_KEY_FILE`) with
a password.

For example, KES CLI already supports the
creation of encrypted private keys:
```
kes identity new --encrypt --key client.key --cert client.crt MinIO
```

To decrypt an encrypted private key, the password
needs to be provided:
```
MINIO_KMS_KES_KEY_PASSWORD=<password>
```

Signed-off-by: Andreas Auernhammer <hi@aead.dev>
2022-03-29 09:53:33 -07:00
Eco
81d2b54dfd doc: typo fix for ttfb entry in table (#14647) 2022-03-29 09:42:02 -07:00
Harshavardhana
7956ff0313 fix: multiple pool setup return incorrect DeleteMarker metadata (#14642) 2022-03-27 23:39:50 -07:00
Aditya Manthramurthy
9ff25fb64b Load IAM in-memory cache using only a single list call (#14640)
- Increase global IAM refresh interval to 30 minutes
- Also print a log after loading IAM subsystem
2022-03-27 18:48:01 -07:00
Andreas Auernhammer
04df69f633 listing: decrypt only SSE-S3 single-part ETags (#14638)
This commit optimises the ETag decryption when
listing objects.

When MinIO lists objects, it has to decrypt the
ETags of single-part SSE-S3 objects.

It does not need to decrypt ETags of
 - plaintext objects => Their ETag is not encrypted
 - SSE-C objects     => Their ETag is not the content MD5
 - SSE-KMS objects   => Their ETag is not the content MD5
 - multipart objects => Their ETag is not encrypted

Hence, MinIO only needs to make a call to the KMS
when it needs to decrypt a single-part SSE-S3 object.
It can resolve the ETags of all other object types
locally.

This commit implements the above semantics by
processing an object listing in batches.
If the batch contains no single-part SSE-S3 object,
then no KMS calls will be made.

If the batch contains at least one single-part
SSE-S3 object, we have to make at least one KMS call.
Now we first filter all single-part SSE-S3 objects
such that we only request the decryption keys for
these objects.
Once we know which objects, and hence ETags, require a
decryption key, MinIO either uses the KES bulk
decryption API (if supported) or decrypts each
ETag serially.
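
A minimal sketch of that batching, with simplified, assumed types (not the actual listing code):
```
package sketch

// objectInfo is a simplified stand-in for the real listing entry.
type objectInfo struct {
	ETag      []byte
	SSES3     bool // encrypted with SSE-S3
	Multipart bool
}

// decryptBatch resolves the ETags of one listing batch: only single-part
// SSE-S3 ETags are sent to the KMS, and in a single bulk call.
func decryptBatch(batch []objectInfo, bulkDecrypt func([][]byte) ([][]byte, error)) error {
	var idx []int          // positions of single-part SSE-S3 objects
	var encrypted [][]byte // their encrypted ETags
	for i, obj := range batch {
		if obj.SSES3 && !obj.Multipart {
			idx = append(idx, i)
			encrypted = append(encrypted, obj.ETag)
		}
	}
	if len(encrypted) == 0 {
		return nil // plaintext, SSE-C, SSE-KMS, multipart: no KMS call
	}
	plaintext, err := bulkDecrypt(encrypted) // one KMS/KES round-trip
	if err != nil {
		return err
	}
	for j, i := range idx {
		batch[i].ETag = plaintext[j]
	}
	return nil
}
```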

This commit is a significant improvement compared
to the previous listing code. Before, a single
non-SSE-S3 object caused MinIO to fall-back to
a serial ETag decryption.
For example, if a batch consisted of 249 SSE-S3
objects and one single SSE-KMS object, MinIO would
send 249 requests to the KMS.
Now, MinIO will send a single request for exactly
those 249 objects and skip the one SSE-KMS object
since it can handle its ETag locally.

Further, MinIO would request decryption keys
for SSE-S3 multipart objects in the past - even
though multipart ETags are not encrypted.
So, if a bucket contained only multipart SSE-S3
objects, MinIO would make totally unnecessary
requests to the KMS.
Now, MinIO simply skips these multipart objects
since it can handle the ETags locally.

Signed-off-by: Andreas Auernhammer <hi@aead.dev>
2022-03-27 18:34:11 -07:00
Anis Elleuch
908eb57795 Always get the actual object size (#14637)
In bulk ETag decryption, do not rely on the ETag to check whether it is
encrypted when deciding if we should set the actual object size in
ObjectInfo. The reason is that multipart object ETags are not
encrypted.

Always get the actual object size in that case.
2022-03-27 08:54:25 -07:00
Harshavardhana
ecfae074dc do not crash when KMS is not enabled (#14634)
KMS when not enabled might crash when listing
an object that previously had SSE-S3 enabled,
fail appropriately in such situations.
2022-03-27 08:54:01 -07:00
Minio Trusted
be5d394e56 Update yaml files to latest version RELEASE.2022-03-26T06-49-28Z 2022-03-26 07:32:25 +00:00
Minio Trusted
849a27ee61 update hotfixes instructions and fix some typo 2022-03-25 23:49:28 -07:00
Andreas Auernhammer
062f3ea43a etag: fix incorrect multipart detection (#14631)
This commit fixes a subtle bug in the ETag
`IsEncrypted` implementation.

An encrypted ETag may contain random bytes,
i.e. some randomness used for encryption.
This random value can contain a '-' byte
simply due to being randomly generated.

Before, the `IsEncrypted` implementation
incorrectly assumed that an encrypted ETag
cannot contain a '-' since it would be a
multipart ETag. Multipart ETags have a
16 byte value followed by a '-' and the part number.
For example:
```
059ba80b807c3c776fb3bcf3f33e11ae-2
```

However, the following encrypted ETag
```
20000f00db2d90a7b40782d4cff2b41a7799fc1e7ead25972db65150118dfbe2ba76a3c002da28f85c840cd2001a28a9
```
also contains a '-' byte but is not a multipart ETag.

This commit fixes the `IsEncrypted` implementation
simply by checking whether the ETag is at least 32
bytes long. A valid multipart ETag is never 32 bytes
long since a part number must be <= 10000.

However, an encrypted ETag must be at least 32 bytes
long. It contains the encrypted ETag bytes (16 bytes)
and the authentication tag added by the AEAD cipher (again
16 bytes).
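
A minimal sketch of the fixed check (illustrative, names assumed):
```
package sketch

// isEncrypted reports whether an ETag is an encrypted one. A multipart
// ETag is a 16-byte value plus '-' plus a part number <= 10000, so at
// most 22 bytes; an encrypted ETag is at least 32 bytes (16 bytes of
// ciphertext plus a 16-byte AEAD tag). Length is therefore unambiguous,
// unlike scanning for a '-' byte.
func isEncrypted(etag []byte) bool {
	return len(etag) >= 32
}
```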

Signed-off-by: Andreas Auernhammer <hi@aead.dev>
2022-03-25 18:21:01 -07:00
Harshavardhana
5cfedcfe33 askDisks for strict quorum to be equal to read quorum (#14623) 2022-03-25 16:29:45 -07:00
Andreas Auernhammer
4d2fc530d0 add support for SSE-S3 bulk ETag decryption (#14627)
This commit adds support for bulk ETag
decryption for SSE-S3 encrypted objects.

If KES supports a bulk decryption API, then
MinIO will check whether its policy grants
access to this API. If so, MinIO will use
a bulk API call instead of sending encrypted
ETags serially to KES.

Note that MinIO will not use the KES bulk API
if its client certificate is an admin identity.

MinIO will process object listings in batches.
A batch has a configurable size that can be set
via `MINIO_KMS_KES_BULK_API_BATCH_SIZE=N`.
It defaults to `500`.

This env. variable is experimental and may be
renamed / removed in the future.

Signed-off-by: Andreas Auernhammer <hi@aead.dev>
2022-03-25 15:01:41 -07:00
Sergey Zhuk
3970204009 ci: Check for new go-version. Bump setup-go to v3 (#14598) 2022-03-25 08:56:04 -07:00
Harshavardhana
f046f557fa request only 1 best version for latest version resolution (#14625)
ListObjects and ListObjectsV2 calls are being heavily taxed when
there are many versions of objects left over from a previous
release, or when ILM was never set up to clean them up. Instead
of being absolutely correct at resolving the exact latest
version of an object, we simply rely on the topmost "1"
version and resolve the rest.

Once we have obtained the topmost "1" version for a
ListObjects or ListObjectsV2 call, we break out.
2022-03-25 08:50:07 -07:00
Harshavardhana
401958938d properly load balance restClientFromHash() on bucket/prefix (#14621)
spread out resuming further to other nodes
2022-03-25 03:41:31 -07:00
Poorna
566cffe53d save format.json by default for inspect API (#14620) 2022-03-25 02:02:17 -07:00
Minio Trusted
028bc2f9be update console release to v0.15.6 2022-03-24 19:59:15 -07:00
Minio Trusted
813d9bc316 update helm release 2022-03-23 21:07:15 -07:00
Aditya Manthramurthy
79ba458051 fix: free up reader resources in S3Select properly (#14600) 2022-03-23 20:58:53 -07:00
Minio Trusted
cf220be9b5 Update yaml files to latest version RELEASE.2022-03-24T00-43-44Z 2022-03-24 01:28:05 +00:00
Harshavardhana
c433572585 update go mod to go1.16 deps (#14614) 2022-03-23 17:43:44 -07:00
Minio Trusted
a42b576382 keep maximum concurrent operations at 512 (to sustain up to 1024 open fds) 2022-03-23 17:02:04 -07:00
Avimitin
fb9b53026d Add riscv64 support (#14601)
On riscv64, the fields of `syscall.Utsname` are `[65]uint8` arrays rather
than the `[65]int8` used on most other Linux architectures:

    package main

    import "fmt"
    import "syscall"

    func main() {
      var buf syscall.Utsname
      fmt.Printf("Buffer Type: %T\n", buf.Release)
    }

    output:
      Buffer Type: [65]uint8

This is tested in the Arch Linux RISC-V 64 QEMU environment.

Signed-off-by: Avimitin <avimitin@gmail.com>
2022-03-22 20:36:59 -07:00
Klaus Post
2ac54e5a7b ListObjects: Filter lifecycle expired objects (#14606)
For ListObjects and ListObjectsV2 perform lifecycle checks on 
all objects before returning. This will filter out objects that are 
pending lifecycle expiration.

Bonus: Cheaper server pool conflict resolution by not converting to FileInfo.
2022-03-22 12:39:45 -07:00
Harshavardhana
8eecdc6d1f odd stripe sizes should choose (odd+1)/2 to get correct quorum (#14610) 2022-03-22 12:21:14 -07:00
Klaus Post
50577e2bd2 Allow adjusting request pool both ways (#14609)
When reloading a dynamic config allow the request pool to scale both ways.

Existing requests hold on to the previous pool, so they will pop the elements from that.
2022-03-22 11:28:54 -07:00
Klaus Post
7bc1f986e8 Do not wait for results when canceled (#14607)
When canceled nobody may be listening for the results.

Prevents memory buildup from cancelled requests.
2022-03-22 09:37:01 -07:00
Harshavardhana
d796621ccc choose smaller default deadline for diagnostics without --full (#14599) 2022-03-21 23:25:24 -07:00
Minio Trusted
751e9fb7be Update yaml files to latest version RELEASE.2022-03-22T02-05-10Z 2022-03-22 02:45:24 +00:00
Harshavardhana
f6113264f4 add detection for GOMAXPROCS < NumCPU 2022-03-21 19:05:10 -07:00
Harshavardhana
a3534a730b fallback quorum should be "strict" globally if config is not loaded (#14589) 2022-03-20 17:39:06 -07:00
Minio Trusted
7f8b8a0e43 update console to v0.15.4 2022-03-20 15:35:20 -07:00
Harshavardhana
bd6f7b6d83 fix: make decommission restart non-blocking (#14591)
currently, an ongoing decommission might block the startup
sequence for relatively long periods during a server restart;
instead, start the decommission lazily in the background.
2022-03-20 14:46:43 -07:00
Andreas Auernhammer
b0a4beb66a PutObjectPart: set SSE-KMS headers and truncate ETags. (#14578)
This commit fixes two bugs in the `PutObjectPartHandler`.
First, `PutObjectPart` should return SSE-KMS headers
when the object is encrypted using SSE-KMS.
Before, this was not the case.

Second, the ETag should always be a 16 byte hex string,
optionally followed by a `-X` (where `X` is the number of parts).
However, `PutObjectPart` used to return the encrypted ETag
in case of SSE-KMS. This leaks MinIO-internal ETag details
through the S3 API.

The combination of both bugs causes clients that use SSE-KMS
to fail when trying to validate the ETag. Since `PutObjectPart`
did not send the SSE-KMS response headers, the response looked
like a plaintext `PutObjectPart` response. Hence, the client
tries to verify that the ETag is the content-md5 of the part.
This could never be the case, since MinIO used to return the
encrypted ETag.

Therefore, clients behaving as specified by the S3 protocol
tried to verify the ETag in a situation they should not.

Signed-off-by: Andreas Auernhammer <hi@aead.dev>
2022-03-19 10:15:12 -07:00
Klaus Post
472c2d828c Fix waitgroup add after wait on config reload (#14584)
Fix `panic: "POST /minio/peer/v21/signalservice?signal=2": sync: WaitGroup is reused before previous Wait has returned`

Log entries already on the channel would cause `logEntry` to increment the
 waitgroup when sending messages, after Cancel has been called.

Instead of tracking every single message, just check the send goroutine. Faster 
and safe, since it will not decrement until the channel is closed.

Regression from #14289
2022-03-19 09:15:45 -07:00
Harshavardhana
01ee49045e fix: handle race in server setup global CI/CD variable (#14579) 2022-03-18 18:21:09 -07:00
Harshavardhana
7bd9f821dd return correct context errors for locking operations (#14569)
if a context is canceled do not need to return a timeout error
instead, return the appropriate error for context canceled.
2022-03-18 15:32:45 -07:00
Anis Elleuch
b20ecc7b54 Add support of TLS session tickets with KES server (#14577)
Reduce overhead for communication between MinIO server and KES server.
2022-03-18 15:14:10 -07:00
Klaus Post
61eb9d4e29 Fix listing fallback re-using disks (#14576)
When more than 2 disks are unavailable for listing, the same disk will be used for fallback.

This makes quorum calculations incorrect since the same disk will have multiple entries.

This PR keeps track of which fallback disks have been handed out and only ever returns a disk once.
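
A minimal sketch of that bookkeeping, with assumed types:
```
package sketch

// nextFallback returns each fallback disk at most once, so quorum
// calculations never count the same disk twice.
func nextFallback(disks []string, used map[string]struct{}) (string, bool) {
	for _, d := range disks {
		if _, taken := used[d]; taken {
			continue
		}
		used[d] = struct{}{}
		return d, true
	}
	return "", false // no unused fallback disk left
}
```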
2022-03-18 11:35:27 -07:00
Harshavardhana
43eb5a001c re-use transport for AdminInfo() call (#14571)
avoids creating a new transport for each `isServerResolvable`
request; instead, re-use the available global transport and do
not try to forcibly close connections, to avoid TIME_WAIT
build-up on large clusters.

Never use httpClient.CloseIdleConnections() since that can have
a drastic effect on existing connections on the transport pool.

Remove it everywhere.
2022-03-17 16:20:10 -07:00
Minio Trusted
f58692abb7 update helm to v3.6.2 2022-03-17 11:30:55 -07:00
Klaus Post
c1760fb764 Move apiCalls to front for field alignment (#14568)
Fixes #14565
2022-03-17 10:57:52 -07:00
Minio Trusted
e9bc0e7e98 Update yaml files to latest version RELEASE.2022-03-17T06-34-49Z 2022-03-17 00:11:59 -07:00
Minio Trusted
ffcadcd99e Revert "Use S3 client for uploads/downloads during perf test (#14553)"
This reverts commit ff811f594b.

Speedtest is broken; need to fix this more cleanly.
2022-03-16 23:34:49 -07:00
Minio Trusted
7a733a8d54 Update yaml files to latest version RELEASE.2022-03-17T02-57-36Z 2022-03-16 22:27:48 -07:00
Aditya Manthramurthy
ce97313fda Add extra LDAP configuration validation (#14535)
- The result now contains suggestions on fixing common configuration issues.
- These suggestions will subsequently be exposed in console/mc
2022-03-16 19:57:36 -07:00
Krishnan Parthasarathi
7b81967a3c Fix handling of object versions pending purge (#14555)
- GetObject() with vid should return 405
- GetObject() without vid should return 404
- ListObjects() should ignore this object if this is the "latest" version of the object
- ListObjectVersions() should list this object as "DELETE marker"
- Remove data parts before syncing the version pending purge
2022-03-16 16:59:43 -07:00
Krishna Srinivas
ff811f594b Use S3 client for uploads/downloads during perf test (#14553) 2022-03-16 16:58:46 -07:00
Harshavardhana
0bf80b3c89 update console v0.15.3 2022-03-16 01:19:00 -07:00
Harshavardhana
ae3b369fe1 logger webhook failure can overrun the queue_size (#14556)
The change introduced in #13819 was incorrect: it did not
handle the situation where a full buffer can cause an
incessant amount of logs that keep the logger webhook
overrun with requests.

To avoid this, only log failures to the console logger
instead of all targets, as logging to all targets can cause
self-reference, leading to an infinite loop.
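
A minimal sketch of the non-blocking send this implies, with assumed names:
```
package sketch

import "log"

// enqueue drops the entry when the webhook queue is full and reports the
// drop only to the console logger; reporting it to all targets would feed
// the webhook its own failure logs, creating an infinite loop.
func enqueue(queue chan []byte, entry []byte) {
	select {
	case queue <- entry:
	default:
		log.Println("webhook queue full, dropping log entry") // console only
	}
}
```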
2022-03-15 17:45:51 -07:00
Kourosh Tafreshi
77b15e7194 Add Console Service port to the NetworkPolicy (#14545) 2022-03-14 17:13:42 -07:00
Harshavardhana
20537f974e add missing v3.6.1 tarball 2022-03-14 17:13:17 -07:00
Harshavardhana
4476a64bdf update helm to v3.6.1 2022-03-14 14:40:24 -07:00
Steven Meyer
d4b701576e Fix helm chart k8s version comparison (#14552) 2022-03-14 14:39:32 -07:00
Minio Trusted
721c053712 Update yaml files to latest version RELEASE.2022-03-14T18-25-24Z 2022-03-14 19:32:22 +00:00
Harshavardhana
e3071157f0 allow MakeBucketLocation to work for metaBucket (#14548)
decommission would fail to start due to a
MakeBucketLocation() error on .minio.sys/ bucket
creation.

Allow these special buckets.
2022-03-14 11:25:24 -07:00
Klaus Post
c07af89e48 select: Add ScanRange to CSV&JSON (#14546)
Implements https://docs.aws.amazon.com/AmazonS3/latest/API/API_SelectObjectContent.html#AmazonS3-SelectObjectContent-request-ScanRange

Fixes #14539
2022-03-14 09:48:36 -07:00
Harshavardhana
9c846106fa decouple service accounts from root credentials (#14534)
changing root credentials makes service accounts
inoperable; this PR changes the way sessionToken
is generated for service accounts.

It changes service account behavior to generate
sessionToken claims from its own secret instead
of using the global root credential.

Existing credentials will be supported by
falling back to verify using root credential.

fixes #14530
2022-03-14 09:09:22 -07:00
Harshavardhana
cf94d1f1f1 do not crash readXLMetaNoData - if the xl.meta has incorrect content (#14538)
```
tmp = buf[want:]
```

Would potentially crash when `buf` is truncated for some reason
and does not have the expected bytes. This is of course not normal
and is an odd situation, but we do not need to crash here; instead,
allow errors to be returned and let callers handle them.
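
A minimal sketch of the defensive pattern, with assumed names:
```
package sketch

import "fmt"

// skipBytes validates the offset before slicing, so a truncated xl.meta
// produces an error for the caller instead of an out-of-range panic.
func skipBytes(buf []byte, want int) ([]byte, error) {
	if want < 0 || want > len(buf) {
		return nil, fmt.Errorf("xl.meta truncated: want %d bytes, have %d", want, len(buf))
	}
	return buf[want:], nil
}
```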
2022-03-14 09:07:46 -07:00
Harshavardhana
6187440f35 update helm release v3.6.0 2022-03-13 15:44:21 -07:00
Minio Trusted
57b7c3494f Update yaml files to latest version RELEASE.2022-03-11T23-57-45Z 2022-03-13 08:47:27 +00:00
Harshavardhana
dda18c28c5 Bump github.com/nats-io/nats-server/v2 from 2.7.2 to 2.7.4 2022-03-11 15:57:45 -08:00
Poorna
f8d6eaaa96 fix: regression from range GET proxy on replicated buckets #14345 (#14532)
Fixes: #14531
2022-03-11 15:56:49 -08:00
Vijay Dharap
47d4fabb58 add filesystem group change policy for large minio deployments (#14528)
* add group change policy for large MinIO deployments
* Added Kubernetes version > 1.20 check for applying the proposed change
2022-03-11 14:21:58 -08:00
Minio Trusted
80039f60d5 Update yaml files to latest version RELEASE.2022-03-11T11-08-23Z 2022-03-11 11:47:17 +00:00
Harshavardhana
5a5e9b8a89 update console to v0.15.2 2022-03-11 03:08:23 -08:00
Aditya Manthramurthy
b7ed3b77bd Indicate required fields in LDAP configuration correctly (#14526) 2022-03-10 19:03:38 -08:00
Poorna
75b925c326 Deprecate root disk for disk caching (#14527)
This PR modifies #14513 to issue a deprecation
warning rather than reject settings on startup.
2022-03-10 18:42:44 -08:00
Harshavardhana
91d419ee6c warn issues about large block I/O performance for Linux older than 4.0.0 (#14524)
This PR simply adds a warning message when it detects older kernel
versions and warns users about potential performance issues on such
kernels.

The issue can be seen only with parallel I/O across all drives
on denser setups such as 90 drives or 45 drives per server configurations.
2022-03-10 17:36:13 -08:00
Harshavardhana
23345098ea change dperf to use standard Go io.Copy 2022-03-10 12:53:39 -08:00
Poorna
7ce91ea1a1 Disallow root disk to be used for cache drives (#14513) 2022-03-10 02:45:31 -08:00
Harshavardhana
41079f1015 heal: remove blocking healDiskMeta upon startup (#14514)
This type of code is not necessary; reads of all
metadata content at `.minio.sys/config` automatically
trigger healing when necessary in the GetObjectNInfo()
call-path.

Having this code is not useful and this also adds to
the overall startup time of MinIO when there are lots
of users and policies.
2022-03-10 02:45:14 -08:00
Poorna
712dfa40cd Add missing site replication hook for clearing sse config (#14512) 2022-03-10 00:04:34 -08:00
Harshavardhana
decfd6108c update dperf to calculate timing for fdatasync()/close() calls as well 2022-03-09 13:47:44 -08:00
Klaus Post
b890bbfa63 Add local disk health checks (#14447)
The main goal of this PR is to solve the situation where disks stop 
responding to operations. This generally causes an FD build-up and 
eventually will crash the server.

This adds detection of hung disks, where calls on disk get stuck.

We add functionality to `xlStorageDiskIDCheck` where it keeps 
track of the number of concurrent requests on a given disk.

A total number of 100 operations are allowed. If this limit is reached 
we will block (but not reject) new requests, but we will monitor the 
state of the disk.

If no requests have been completed or updated within a 15-second 
window, we mark the disk as offline. Requests that are blocked will be 
unblocked and return an error as "faulty disk".

New requests will be rejected until the disk is marked OK again.

Once a disk has been marked faulty, a check will run every 5 seconds that 
will attempt to write and read back a file. As long as this fails the disk will 
remain faulty.

To prevent lots of long-running requests to mark the disk faulty we 
implement a callback feature that allows updating the status as parts 
of these operations are running.

We add a reader and writer wrapper that will update the status of each 
successful read/write operation. This should allow fine enough granularity 
that a slow, but still operational disk will not reach 15 seconds where 
50 operations have not progressed.

Note that errors themselves are not enough to mark a disk faulty. 
A nil (or io.EOF) error will mark a disk as "good".

* Make concurrent disk setting configurable via `_MINIO_DISK_MAX_CONCURRENT`.

* de-couple IsOnline() from disk health tracker

The purpose of IsOnline() is to ensure that we
reconnect the drive only when the "drive" was:

- disconnected from the network; we need to validate
  that the drive is "correct" and is the same drive
  which belongs to this server.

- replaced; we have to format it, since we
  support hot swapping of drives.

IsOnline() is not meant for taking the drive offline
when it is hung; that is not useful, as we can leave the
drive online and instead "return" errors for relevant
calls.

* return errFaultyDisk for DiskInfo() call

Co-authored-by: Harshavardhana <harsha@minio.io>

Possible future Improvements:

* Unify the REST server and local xlStorageDiskIDCheck. This would also improve stats significantly.
* Allow reads/writes to be aborted by the context.
* Add usage stats, concurrent count, blocked operations, etc.
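
A heavily simplified sketch of that tracking, with assumed names (the real `xlStorageDiskIDCheck` is more involved):
```
package sketch

import (
	"sync/atomic"
	"time"
)

const (
	maxConcurrent  = 100              // block new requests beyond this
	progressWindow = 15 * time.Second // no progress for this long => faulty
)

// diskHealth counts in-flight operations and remembers when one last
// made progress.
type diskHealth struct {
	inflight     int64 // concurrent operations on this disk
	lastProgress int64 // unix nanoseconds of the last completed operation
}

// begin reports whether a new operation may proceed; false means the
// disk should be treated as faulty.
func (d *diskHealth) begin() bool {
	if atomic.AddInt64(&d.inflight, 1) <= maxConcurrent {
		return true
	}
	last := time.Unix(0, atomic.LoadInt64(&d.lastProgress))
	if time.Since(last) < progressWindow {
		return true // over the limit, but the disk is still progressing
	}
	atomic.AddInt64(&d.inflight, -1) // undo the reservation
	return false
}

// done marks an operation as finished, recording progress.
func (d *diskHealth) done() {
	atomic.AddInt64(&d.inflight, -1)
	atomic.StoreInt64(&d.lastProgress, time.Now().UnixNano())
}
```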
2022-03-09 11:38:54 -08:00
Daichi Mukai
0e3a570b85 helm: add namespace to StatefulSet (#14509)
Even if we specify the target namespace by `helm install --namespace`,
the StatefulSet is created in the default namespace. Since this resource
references the ServiceAccount created in the target namespace, pods
cannot be created. To avoid this, we deploy the StatefulSet to the
target namespace of helm.
2022-03-09 11:25:36 -08:00
Klaus Post
7060c809c0 Add authorization header to HEAD requests (#14510)
Add Authorization to network check requests.

Fixes #14507
2022-03-09 10:48:56 -08:00
Andreas Auernhammer
9dbfd84c5b CI: use MINIO_KMS_SECRET_KEY when verify healing (#14511)
This commit replaces the KMS / KES environment
variables with `MINIO_KMS_SECRET_KEY` when testing
healing on CI.

This change is necessary since KES `0.18.0` introduced
some API breaking changes and the healing tests run
a test (`verify-3604`) that requires an older MinIO
version (e.g. `2021-11-24T23-19-33Z`) which is not
able to parse a KES error as expected.

This commit allows the KES instance at `https://play.min.io:7373`
to get updated to newer versions.

Signed-off-by: Andreas Auernhammer <hi@aead.dev>
2022-03-09 10:48:29 -08:00
Minio Trusted
fce380a044 Update yaml files to latest version RELEASE.2022-03-08T22-28-51Z 2022-03-09 01:36:59 +00:00
Poorna
46ba15ab03 Return MethodNotAllowed if force del on replicated bucket (#14505) 2022-03-08 14:28:51 -08:00
Poorna
1e39ca39c3 fix: consistent replies for incorrect range requests on replicated buckets (#14345)
Propagate error from replication proxy target correctly to the client if range GET is unsatisfiable.
2022-03-08 13:58:55 -08:00
Krishnan Parthasarathi
80ef1ae51c Simplify assembling of tierStats from data-usage (#14504) 2022-03-08 12:08:29 -08:00
Krishna Srinivas
4d0715d226 Implement netperf for "mc support perf net" (#14397)
Co-authored-by: Klaus Post <klauspost@gmail.com>
2022-03-08 09:54:38 -08:00
Klaus Post
8a274169da heal: Fix first entry on dangling (#14495)
Instead of the first, the last entry was returned, because the code
took a pointer to the range variable.
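
For context, a standalone reproduction of the underlying Go pitfall (not the heal code itself; Go 1.22 later changed loop-variable scoping):
```
package main

import "fmt"

func main() {
	entries := []string{"first", "second", "last"}
	var first *string
	for _, e := range entries {
		if first == nil {
			first = &e // bug (Go < 1.22): &e aliases the reused loop variable
		}
	}
	fmt.Println(*first) // Go < 1.22: prints "last", not "first"
	// Fix: copy before taking the address (e := e) or use &entries[i].
}
```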
2022-03-08 09:04:20 -08:00
Harshavardhana
21d8298fe1 update console UI to release v0.15.1 2022-03-07 23:40:58 -08:00
Harshavardhana
5d6f6d8d5b create missing .minio.sys/config, .minio.sys/buckets during decommission (#14497) 2022-03-07 16:18:57 -08:00
Anis Elleuch
bacf6156c1 metrics: Avoid crash when fetching tier metrics (#14493)
Data usage does not always contain tiering info even if the data usage
information is valid. Avoid a crash in that case.

(e.g. the scanner scanned the namespace, the user enables tiering,
prometheus scrapes the server before the scanner gets a chance to
update the data usage with new tiering information)
2022-03-07 10:59:32 -08:00
Klaus Post
1d1b213f1f scanner: Consider preselection bias when selecting for Healing (#14492)
Healing decisions would align with skipped folder counters. This can lead to files 
never being selected for heal checks on "clean" paths.

Use different hashing methods and take objectHealProbDiv into account when 
calculating the cycle.

Found by @vadmeste
2022-03-07 09:25:53 -08:00
Minio Trusted
1f11af42f1 Update yaml files to latest version RELEASE.2022-03-05T06-32-39Z 2022-03-05 09:27:28 +00:00
Jan Madera
a026c8748f Update nginx.conf for large file uploads (#14481) 2022-03-04 22:32:39 -08:00
David Young
9f7d89b3cd Add option to ignore checksumming config/secrets (#14396)
Signed-off-by: David Young <davidy@funkypenguin.co.nz>
2022-03-04 22:32:15 -08:00
Harshavardhana
92a77cc78e update pkg v1.1.20 to reload certs in k8s always (#14470) 2022-03-04 20:34:39 -08:00
Harshavardhana
b0c84e3de7 fix: deleteVersions causing xl.meta to have empty Versions[] slice (#14483)
This is a side effect of the optimization done in PR #13544: a
certain type of delete operation on given object versions can cause
the lastVersion indication to be skipped, which leads to an `xl.meta`
whose Versions[] slice is empty while the file itself is intact.

This PR tries to ensure that such files are visible and deletable
by regular means of listing as null 'delete-marker' and also
avoid the situation where this potential issue might arise.
2022-03-04 20:01:26 -08:00
Anis Elleuch
bbc914e174 heal: Do not override heal scan mode if it is set (#14476)
mc admin heal has --scan=deep flag which enforces bitrot checking 
when doing the healing.

Do not force override an existing heal scan option.
2022-03-04 18:25:06 -08:00
Anis Elleuch
3fca4055d2 heal: Re-heal an object when a corruption is found during normal scan (#14482)
When scanning using normal mode, HealObject() can report an
error saying that it found a corrupted part. This doesn't happen
when HealObject() is called with the bitrot scan flag. However, when
this happens, we can still restart HealObject() with the bitrot scan.

This is also important because, without this change, the scanner and
the new-disk healer would not be able to heal an object that doesn't
exist on a specific disk and has corruption on another disk.

Also, without this PR, the mc admin heal command without bitrot would
report an error.
2022-03-04 18:24:34 -08:00
Harshavardhana
66afa16aed canceled PUTs throw frivolous logs (#14475)
remote drives might throw frivolous logs if the caller
canceled the PUT operation; in such scenarios there is
no reason to log.
2022-03-04 10:31:33 -08:00
Harshavardhana
9b0a8de7de update helm v3.5.9 2022-03-03 15:29:03 -08:00
Minio Trusted
04bbede17d Update yaml files to latest version RELEASE.2022-03-03T21-21-16Z 2022-03-03 22:16:10 +00:00
Harshavardhana
0e3bafcc54 improve logs, fix banner formatting (#14456) 2022-03-03 13:21:16 -08:00
Andreas Auernhammer
b48f719b8e kes: remove unnecessary error conversion (#14459)
This commit removes some duplicate code that
converts KES API errors.

This code was added since KES `0.18.0` changed
some exported API errors. However, the KES SDK
handles this error conversion itself.
Therefore, it is not necessary to duplicate this
behavior in MinIO.

See: 21555fa624/error.go (L94)

Signed-off-by: Andreas Auernhammer <hi@aead.dev>
2022-03-03 09:42:37 -08:00
Lenin Alevski
289fcbd08c KES dependency upgrade (#14454)
- Updating KES dependency to v.0.18.0
- Fixing incompatibility issue when checking for errors during KES key creation

Signed-off-by: Lenin Alevski <alevsk.8772@gmail.com>
2022-03-02 23:03:40 -08:00
Harshavardhana
f6875bb893 fix: regression from refactor in AMQP notification (#14455)
fixes a regression introduced in #14269, which refactored
the notification registration logic; AMQP targets, even when
online, were no longer available for use.

fixes #14451
2022-03-02 21:35:48 -08:00
Harshavardhana
7e803adf13 do not attempt force delete on bucket (#14452)
the caller needs to ask explicitly for a force delete;
otherwise, the force delete might end up deleting
an existing bucket with data.

fixes #14445
2022-03-02 20:47:53 -08:00
Harshavardhana
5b5deee5b3 update minio/pkg to v1.1.18 2022-03-02 19:25:07 -08:00
Krishnan Parthasarathi
7dae4cb685 Update minio/pkg to v1.1.17 (#14450)
Fix for admin policy validation of KMSCreateKey
2022-03-02 17:06:06 -08:00
Emmet McPoland
27fad98179 Replace HeadBucket permission with GetBucketAcl (#14436)
Resolves https://github.com/minio/minio/issues/14379
2022-03-01 21:18:23 -08:00
Harshavardhana
58f7e3a829 update console v0.15.0, coredns v1.9.0 2022-03-01 17:17:18 -08:00
Anis Elleuch
4a15bd8ff8 Return info for DiskInfo when the disk is unformatted (#14427)
In a distributed setup, a DiskInfo REST call to an unformatted disk
returns an error with no disk information, such as the disk endpoint
URL, which is unexpected.
2022-03-01 15:06:47 -08:00
Klaus Post
b030ef1aca tests: Clean up dsync package (#14415)
Add non-constant timeouts to dsync package.

Reduce test runtime by minutes. Hopefully not too aggressive.
2022-03-01 11:14:28 -08:00
Harshavardhana
cc46a99f97 skip object-lock headers without values (#14430)
metadata headers can be sent without values as per
the AWS S3 spec; however, we need to skip object-lock
headers that do not have values, instead of storing
them with empty values set.
2022-03-01 11:04:47 -08:00
Xuehan Xu
becec6cb6b correct mrf.newSetReconnected invocation's param order (#14426)
Signed-off-by: xuxuehan <xuxuehan@qianxin.com>
2022-02-28 09:13:19 -08:00
Harshavardhana
bc33db9fc0 update helm v3.5.8 2022-02-26 22:44:38 -08:00
Minio Trusted
7d4579e737 Update yaml files to latest version RELEASE.2022-02-26T02-54-46Z 2022-02-26 03:36:08 +00:00
Harshavardhana
b7c90751b0 allow drive tests to respond with only drive paths 2022-02-25 18:54:46 -08:00
Klaus Post
88fd1cba71 select: add MISSING operator support (#14406)
Probably not full support, but for regular checks it should work.

Fixes #14358
2022-02-25 12:31:19 -08:00
Harshavardhana
e43cc316ff remove errCh usage from HealObjects() simplify it (#14414)
errCh is not needed; rely on the errs slice to
capture and return errors instead.

most probably fixes #14247
2022-02-25 12:20:41 -08:00
Klaus Post
e3f24a29fa Upgrade simdjson & compress deps (#14411) 2022-02-25 10:48:41 -08:00
Harshavardhana
890e526bde rename 'mc admin inspect' to 'mc support inspect' 2022-02-24 17:17:53 -08:00
Harshavardhana
16ce455fca update docker release to RELEASE.2022-02-24T22-12-01Z 2022-02-24 15:35:14 -08:00
Harshavardhana
29b7164468 update console to v0.14.8 2022-02-24 14:12:01 -08:00
Harshavardhana
acdd03f609 update CREDITs file for new dependencies 2022-02-24 12:58:53 -08:00
hellivan
03b35ecdd0 collect correct parentUser for OIDC creds auto expiration (#14400) 2022-02-24 11:43:15 -08:00
hellivan
5307e18085 use keycloak_realm properly for keycloak user lookups (#14401)
In case a user-defined a value for the MINIO_IDENTITY_OPENID_KEYCLOAK_REALM 
environment variable, construct the path properly.
2022-02-24 10:16:53 -08:00
Klaus Post
2cea944cdb select: Allow lower case 'is' (#14405)
Ref: #14358
2022-02-24 09:10:48 -08:00
Harshavardhana
c08540c7b7 reject speedtest when there isn't enough disk space available (#14402)
small setups do not return appropriate errors when the speedtest
cannot run on such tiny setups; allow the tests to fail
appropriately and more proactively.

many users bring toy setups, so this PR simply returns an error
in such situations.
2022-02-24 09:06:18 -08:00
Shireesh Anjal
3934700a08 Make audit webhook and kafka config dynamic (#14390) 2022-02-24 09:05:33 -08:00
hellivan
0913eb6655 fix: openid config provider not initialized correctly (#14399)
Up until now the `InitializeProvider` method of the `Config` struct was
implemented on a value receiver, which is why changes to the `provider`
field were never reflected to method callers. In order to fix this
issue, the method was implemented on a pointer receiver.
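
A standalone illustration of the value-receiver pitfall, with simplified, assumed names:
```
package main

import "fmt"

type provider struct{ name string }

type config struct{ provider *provider }

// initValue has a value receiver: it mutates a copy, so the caller's
// struct is left unchanged. This mirrors the bug described above.
func (c config) initValue() { c.provider = &provider{name: "openid"} }

// initPointer has a pointer receiver: the change is visible to callers.
func (c *config) initPointer() { c.provider = &provider{name: "openid"} }

func main() {
	var c config
	c.initValue()
	fmt.Println(c.provider != nil) // false: the assignment was lost
	c.initPointer()
	fmt.Println(c.provider != nil) // true: provider is initialized
}
```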
2022-02-23 23:42:37 -08:00
Harshavardhana
1bfbe354f5 fix: clientId must be unique for all servers (#14398)
This is a regression from #14037; distributed setups
with MQTT were not working anymore. According to the MQTT
spec, the client id is expected to be unique per server.

We shall proceed to use the unix nano timestamp hex
value instead here.
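
A minimal sketch of deriving such an id:
```
package main

import (
	"fmt"
	"time"
)

func main() {
	// A per-process client id from the unix nano timestamp in hex; each
	// server derives its own value, so ids do not clash across a cluster.
	clientID := fmt.Sprintf("%x", time.Now().UnixNano())
	fmt.Println("mqtt client id:", clientID)
}
```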
2022-02-23 20:19:59 -08:00
Harshavardhana
2d78e20120 enable CI environment additionally for MINIO_CI_CD (#14395)
all CI/CD environments set CI=true; this is enough
for MinIO to be run inside CI environments, so support
it.
2022-02-23 16:01:59 -08:00
Harshavardhana
77210513c9 update minio/pkg, minio/madmin-go, minio/minio-go/v7 2022-02-23 14:34:47 -08:00
Harshavardhana
2e6f8bdf19 do not skip healing disks during deletes (#14394)
healing disks take active I/O, so it is possible
that deleted objects might stay in the .trash
folder for a really long time until the drive
is fully healed.

this PR changes it such that we are making sure
we purge the active content written to these
disks as well.
2022-02-23 14:30:46 -08:00
Shireesh Anjal
25144fedd5 Send deployment id and minio version in http header (#14378) 2022-02-23 13:36:01 -08:00
Krishnan Parthasarathi
27f64dd9a4 Add support for tier-remove and tier-verify (#14382)
* Add tier remove support only if it's empty
* Add support for tier verify
2022-02-23 13:34:25 -08:00
Harshavardhana
9d7648f02f reduce unnecessary logging during speedtest (#14387)
- speedtest spuriously logs calls that were
  canceled, in situations where they should
  be ignored.

- all errors of interest are always sent back
  to the client there is no need to log them
  on the server console.

- PUT failures should negate the increments
  such that GET is not attempted on unsuccessful
  calls.

- do not attempt MRF on speedtest objects.
2022-02-23 11:59:13 -08:00
Poorna
1ef8babfef cache: improve error reported for atime check (#14384) 2022-02-23 11:57:06 -08:00
Poorna
4ea7bf0510 Use custom transport for site replication (#14391)
Also, ensure that tiering uses a different instance of custom transport
2022-02-23 11:50:40 -08:00
Anis Elleuch
5dcf1d13a9 ci: Always set disks as non root disks (#14389)
In the testing mode, reformatting disks will fail because the healing
code will complain if one disk is in root mode. This commit will
automatically set all disks as non-root if MINIO_CI_CD is set.
2022-02-23 10:11:33 -08:00
Shireesh Anjal
94d37d05e5 Apply dynamic config at sub-system level (#14369)
Currently, when applying any dynamic config, the system reloads and
re-applies the config of all the dynamic sub-systems.

This PR refactors the code in such a way that changing config of a given
dynamic sub-system will work on only that sub-system.
2022-02-22 10:59:28 -08:00
Harshavardhana
0cbdc458c5 fix: do not reload disk format.json on a reconnected disk (#14351)
An onlineDisk means it's a valid disk, but it may be a
re-connected disk; this PR verifies that based on LastConn()
to only trigger MRF. The current code would re-load the
disk's 'format.json', which is an unnecessary call.

A potential side effect of this is closing perfectly online
disks, which then get re-replaced by reloading 'format.json'.

This PR avoids this situation by making sure MRF
is triggered without reloading 'format.json' because of MRF.
2022-02-21 15:51:54 -08:00
Shireesh Anjal
c1437c7b46 allow config reset api to work by overloading default values (#14368)
The `LookupConfig` code was not using `GetWithDefault`, because of which
some config values were returned as empty strings, making calls
like `strconv.Atoi` and `time.ParseDuration` on them fail.
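
A minimal sketch of the lookup-with-default pattern this implies (helper name assumed from the description):
```
package sketch

// getWithDefault falls back to the sub-system default when a key has
// been reset or left empty, so parsers such as strconv.Atoi never see
// an empty string.
func getWithDefault(kv map[string]string, key, def string) string {
	if v, ok := kv[key]; ok && v != "" {
		return v
	}
	return def
}
```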
2022-02-21 15:50:45 -08:00
Eric
f357f65d04 Allow policy bootstrapping with nil "Resource" (#14359) 2022-02-20 15:56:41 -08:00
Harshavardhana
ef8e952fc4 update helm v3.5.7 2022-02-20 00:55:08 -08:00
Eric
a2bc383e15 Allow bootstrapping policies with special characters in Helm (#14356)
If the policy fails MinIO's minimum threshold for a valid policy,
they'll still (correctly) fail, but policies with a : (and probably a
/) should be allowed since they work with standard MC/MinIO 
Console interactions.

This creates the files as policy_IDX.json instead of <name>.json 
to avoid any issues with the name + Kubernetes ConfigMaps since 
ConfigMap keys must be: [-._a-zA-Z0-9]+
2022-02-19 23:21:17 -08:00
Harshavardhana
23930355a7 rename 'config host add' -> 'alias set'
update helm to v3.5.6
2022-02-19 12:34:14 -08:00
Domonkos Cinke
bb9f41e613 Add ability to use custom commands (#14227) 2022-02-19 12:29:15 -08:00
Aditya Manthramurthy
bc110d8055 fix: mysql notification target table creation (#14350)
Add a generated hash column as the primary key for the key name as 
MySQL does not allow indexes on long VARCHAR columns.
2022-02-18 12:13:49 -08:00
Minio Trusted
b23b19e5c3 Update yaml files to latest version RELEASE.2022-02-18T01-50-10Z 2022-02-17 19:12:27 -08:00
564 changed files with 39867 additions and 24048 deletions

.github/markdown-lint-cfg.yaml (new file)

@@ -0,0 +1,5 @@
# Config file for markdownlint-cli
MD033:
allowed_elements:
- details
- summary

.github/workflows/depsreview.yaml (new file)

@@ -0,0 +1,14 @@
name: 'Dependency Review'
on: [pull_request]
permissions:
contents: read
jobs:
dependency-review:
runs-on: ubuntu-latest
steps:
- name: 'Checkout Repository'
uses: actions/checkout@v3
- name: 'Dependency Review'
uses: actions/dependency-review-action@v1


@@ -11,19 +11,23 @@ concurrency:
group: ${{ github.workflow }}-${{ github.head_ref }}
cancel-in-progress: true
permissions:
contents: read
jobs:
build:
name: Build Tests with Go ${{ matrix.go-version }} on ${{ matrix.os }}
runs-on: ${{ matrix.os }}
strategy:
matrix:
go-version: [1.17.x]
go-version: [1.18.x]
os: [ubuntu-latest]
steps:
- uses: actions/checkout@v2
- uses: actions/setup-go@v2
- uses: actions/checkout@629c2de402a417ea7690ca6ce3f33229e27606a5 # v2
- uses: actions/setup-go@bfdd3570ce990073878bf10f6b2d79082de49492 # v2
with:
go-version: ${{ matrix.go-version }}
check-latest: true
- name: Build on ${{ matrix.os }}
if: matrix.os == 'ubuntu-latest'
env:

.github/workflows/go-fips.yml (new file)

@@ -0,0 +1,51 @@
name: FIPS Build Test
on:
pull_request:
branches:
- master
# This ensures that previous jobs for the PR are canceled when the PR is
# updated.
concurrency:
group: ${{ github.workflow }}-${{ github.head_ref }}
cancel-in-progress: true
permissions:
contents: read
jobs:
build:
name: Go BoringCrypto ${{ matrix.go-version }} on ${{ matrix.os }}
runs-on: ${{ matrix.os }}
strategy:
matrix:
go-version: [1.17.11b7, 1.18.3b7]
os: [ubuntu-latest]
steps:
- uses: actions/checkout@v2
- name: Set up Docker Buildx
uses: docker/setup-buildx-action@v2
- name: Setup dockerfile for build test
run: |
echo "FROM us-docker.pkg.dev/google.com/api-project-999119582588/go-boringcrypto/golang:${{ matrix.go-version }}" > Dockerfile.fips.test
echo "COPY . /minio" >> Dockerfile.fips.test
echo "WORKDIR /minio" >> Dockerfile.fips.test
echo "RUN make" >> Dockerfile.fips.test
- name: Build
uses: docker/build-push-action@v3
with:
context: .
file: Dockerfile.fips.test
push: false
load: true
tags: minio/fips-test:latest
# This should fail if grep returns non-zero exit
- name: Test binary
run: |
docker run --rm minio/fips-test:latest ./minio --version
docker run --rm -i minio/fips-test:latest /bin/bash -c 'go tool nm ./minio' | grep -q FIPS


@@ -11,19 +11,23 @@ concurrency:
group: ${{ github.workflow }}-${{ github.head_ref }}
cancel-in-progress: true
permissions:
contents: read
jobs:
build:
name: Go ${{ matrix.go-version }} on ${{ matrix.os }}
runs-on: ${{ matrix.os }}
strategy:
matrix:
go-version: [1.17.x]
go-version: [1.18.x]
os: [ubuntu-latest]
steps:
- uses: actions/checkout@v2
- uses: actions/setup-go@v2
- uses: actions/setup-go@v3
with:
go-version: ${{ matrix.go-version }}
check-latest: true
- uses: actions/cache@v2
with:
path: |
@@ -37,12 +41,12 @@ jobs:
env:
CGO_ENABLED: 0
GO111MODULE: on
MINIO_KMS_KES_CERT_FILE: /home/runner/work/minio/minio/.github/workflows/root.cert
MINIO_KMS_KES_KEY_FILE: /home/runner/work/minio/minio/.github/workflows/root.key
MINIO_KMS_KES_ENDPOINT: "https://play.min.io:7373"
MINIO_KMS_KES_KEY_NAME: "my-minio-key"
MINIO_KMS_SECRET_KEY: "my-minio-key:oyArl7zlPECEduNbB1KXgdzDn2Bdpvvw0l8VO51HQnY="
MINIO_KMS_AUTO_ENCRYPTION: on
run: |
sudo sysctl net.ipv6.conf.all.disable_ipv6=0
sudo sysctl net.ipv6.conf.default.disable_ipv6=0
make verify-healing
make verify-healing-inconsistent-versions
make verify-healing-with-root-disks
make verify-healing-with-rewrite


@@ -11,19 +11,23 @@ concurrency:
group: ${{ github.workflow }}-${{ github.head_ref }}
cancel-in-progress: true
permissions:
contents: read
jobs:
build:
name: Go ${{ matrix.go-version }} on ${{ matrix.os }}
runs-on: ${{ matrix.os }}
strategy:
matrix:
go-version: [1.17.x]
go-version: [1.18.x]
os: [ubuntu-latest, windows-latest]
steps:
- uses: actions/checkout@v2
- uses: actions/setup-go@v2
- uses: actions/setup-go@v3
with:
go-version: ${{ matrix.go-version }}
check-latest: true
- uses: actions/cache@v2
if: matrix.os == 'ubuntu-latest'
with:


@@ -11,19 +11,23 @@ concurrency:
group: ${{ github.workflow }}-${{ github.head_ref }}
cancel-in-progress: true
permissions:
contents: read
jobs:
build:
name: Go ${{ matrix.go-version }} on ${{ matrix.os }} - healing
runs-on: ${{ matrix.os }}
strategy:
matrix:
go-version: [1.17.x]
go-version: [1.18.x]
os: [ubuntu-latest]
steps:
- uses: actions/checkout@v2
- uses: actions/setup-go@v2
- uses: actions/setup-go@v3
with:
go-version: ${{ matrix.go-version }}
check-latest: true
- uses: actions/cache@v2
with:
path: |


@@ -11,6 +11,9 @@ concurrency:
group: ${{ github.workflow }}-${{ github.head_ref }}
cancel-in-progress: true
permissions:
contents: read
jobs:
iam-matrix-test:
name: "[Go=${{ matrix.go-version }}|ldap=${{ matrix.ldap }}|etcd=${{ matrix.etcd }}|openid=${{ matrix.openid }}]"
@@ -44,13 +47,21 @@ jobs:
- "5556:5556"
env:
DEX_LDAP_SERVER: "openldap:389"
openid2:
image: quay.io/minio/dex
ports:
- "5557:5557"
env:
DEX_LDAP_SERVER: "openldap:389"
DEX_ISSUER: "http://127.0.0.1:5557/dex"
DEX_WEB_HTTP: "0.0.0.0:5557"
strategy:
# When ldap, etcd or openid vars are empty below, those external servers
# are turned off - i.e. if ldap="", then ldap server is not enabled for
# the tests.
matrix:
go-version: [1.17.x]
go-version: [1.18.x]
ldap: ["", "localhost:389"]
etcd: ["", "http://localhost:2379"]
openid: ["", "http://127.0.0.1:5556/dex"]
@@ -65,9 +76,10 @@ jobs:
steps:
- uses: actions/checkout@v2
- uses: actions/setup-go@v2
- uses: actions/setup-go@v3
with:
go-version: ${{ matrix.go-version }}
check-latest: true
- uses: actions/cache@v2
with:
path: |
@@ -85,6 +97,28 @@ jobs:
sudo sysctl net.ipv6.conf.all.disable_ipv6=0
sudo sysctl net.ipv6.conf.default.disable_ipv6=0
make test-iam
- name: Test with multiple OpenID providers
if: matrix.openid == 'http://127.0.0.1:5556/dex'
env:
LDAP_TEST_SERVER: ${{ matrix.ldap }}
ETCD_SERVER: ${{ matrix.etcd }}
OPENID_TEST_SERVER: ${{ matrix.openid }}
OPENID_TEST_SERVER_2: "http://127.0.0.1:5557/dex"
run: |
sudo sysctl net.ipv6.conf.all.disable_ipv6=0
sudo sysctl net.ipv6.conf.default.disable_ipv6=0
make test-iam
- name: Test with Access Management Plugin enabled
env:
LDAP_TEST_SERVER: ${{ matrix.ldap }}
ETCD_SERVER: ${{ matrix.etcd }}
OPENID_TEST_SERVER: ${{ matrix.openid }}
POLICY_PLUGIN_ENDPOINT: "http://127.0.0.1:8080"
run: |
sudo sysctl net.ipv6.conf.all.disable_ipv6=0
sudo sysctl net.ipv6.conf.default.disable_ipv6=0
go run docs/iam/access-manager-plugin.go &
make test-iam
- name: Test LDAP for automatic site replication
if: matrix.ldap == 'localhost:389'
run: |


@@ -11,6 +11,9 @@ concurrency:
group: ${{ github.workflow }}-${{ github.head_ref }}
cancel-in-progress: true
permissions:
contents: read
jobs:
lint:
name: Lint all docs
@@ -22,4 +25,6 @@ jobs:
- name: Lint all docs
run: |
npm install -g markdownlint-cli
markdownlint --fix '**/*.md' --disable MD013 MD040
markdownlint --fix '**/*.md' \
--config /home/runner/work/minio/minio/.github/markdown-lint-cfg.yaml \
--disable MD013 MD040 MD051


@@ -1,4 +1,4 @@
name: Multi-site replication tests
name: MinIO advanced tests
on:
pull_request:
@@ -11,20 +11,24 @@ concurrency:
group: ${{ github.workflow }}-${{ github.head_ref }}
cancel-in-progress: true
permissions:
contents: read
jobs:
replication-test:
name: Replication Tests with Go ${{ matrix.go-version }}
name: Advanced Tests with Go ${{ matrix.go-version }}
runs-on: ubuntu-latest
strategy:
matrix:
go-version: [1.17.x]
go-version: [1.18.x]
steps:
- uses: actions/checkout@v2
- uses: actions/setup-go@v2
- uses: actions/setup-go@v3
with:
go-version: ${{ matrix.go-version }}
check-latest: true
- uses: actions/cache@v2
with:
path: |
@@ -33,11 +37,18 @@ jobs:
key: ${{ runner.os }}-${{ matrix.go-version }}-go-${{ hashFiles('**/go.sum') }}
restore-keys: |
${{ runner.os }}-${{ matrix.go-version }}-go-
- name: Test Decom
run: |
sudo sysctl net.ipv6.conf.all.disable_ipv6=0
sudo sysctl net.ipv6.conf.default.disable_ipv6=0
make test-decom
- name: Test Replication
run: |
sudo sysctl net.ipv6.conf.all.disable_ipv6=0
sudo sysctl net.ipv6.conf.default.disable_ipv6=0
make test-replication
- name: Test MinIO IDP for automatic site replication
run: |
sudo sysctl net.ipv6.conf.all.disable_ipv6=0


@@ -11,21 +11,24 @@ concurrency:
group: ${{ github.workflow }}-${{ github.head_ref }}
cancel-in-progress: true
permissions:
contents: read
jobs:
build:
name: Go ${{ matrix.go-version }} on ${{ matrix.os }}
runs-on: ${{ matrix.os }}
strategy:
matrix:
go-version: [1.17.x]
go-version: [1.18.x]
os: [ubuntu-latest]
steps:
- uses: actions/checkout@v1
- uses: actions/setup-go@v2
- uses: actions/setup-go@v3
with:
go-version: ${{ matrix.go-version }}
check-latest: true
- name: Start upgrade tests
run: |
make test-upgrade

.gitignore

@@ -9,8 +9,7 @@ site/
/.idea/
/Minio.iml
**/access.log
vendor/**/*.js
vendor/**/*.json
vendor/
.DS_Store
*.syso
coverage.txt
@@ -32,4 +31,10 @@ hash-set
minio.RELEASE*
mc
nancy
inspects/*
inspects/*
docs/debugging/s3-verify/s3-verify
docs/debugging/xl-meta/xl-meta
docs/debugging/s3-check-md5/s3-check-md5
docs/debugging/hash-set/hash-set
docs/debugging/healing-bin/healing-bin
docs/debugging/inspect/inspect


@@ -14,17 +14,15 @@ linters:
- govet
- revive
- ineffassign
- gosimple
- deadcode
- structcheck
- gomodguard
- gofmt
- unused
- structcheck
- unconvert
- varcheck
- gocritic
- gofumpt
- tenv
- durationcheck
linters-settings:
gofumpt:

CREDITS

File diff suppressed because it is too large.


@@ -1,4 +1,4 @@
FROM registry.access.redhat.com/ubi8/ubi-minimal:8.5
FROM registry.access.redhat.com/ubi8/ubi-minimal:8.6
ARG RELEASE


@@ -1,4 +1,4 @@
FROM registry.access.redhat.com/ubi8/ubi-minimal:8.5
FROM registry.access.redhat.com/ubi8/ubi-minimal:8.6
ARG TARGETARCH


@@ -1,4 +1,4 @@
FROM registry.access.redhat.com/ubi8/ubi-minimal:8.5
FROM registry.access.redhat.com/ubi8/ubi-minimal:8.6
ARG TARGETARCH


@@ -15,11 +15,11 @@ checks: ## check dependencies
@(env bash $(PWD)/buildscripts/checkdeps.sh)
help: ## print this help
@grep -E '^[a-zA-Z_-]+:.*?## .*$$' Makefile | awk 'BEGIN {FS = ":.*?## "}; {printf "\033[36m%-30s\033[0m %s\n", $$1, $$2}'
@grep -E '^[a-zA-Z_-]+:.*?## .*$$' Makefile | awk 'BEGIN {FS = ":.*?## "}; {printf "\033[36m%-40s\033[0m %s\n", $$1, $$2}'
getdeps: ## fetch necessary dependencies
@mkdir -p ${GOPATH}/bin
@echo "Installing golangci-lint" && curl -sSfL https://raw.githubusercontent.com/golangci/golangci-lint/master/install.sh | sh -s -- -b $(GOPATH)/bin v1.43.0
@echo "Installing golangci-lint" && curl -sSfL https://raw.githubusercontent.com/golangci/golangci-lint/master/install.sh | sh -s -- -b $(GOPATH)/bin v1.45.2
@echo "Installing msgp" && go install -v github.com/tinylib/msgp@v1.1.7-0.20211026165309-e818a1881b0e
@echo "Installing stringer" && go install -v golang.org/x/tools/cmd/stringer@latest
@@ -39,7 +39,14 @@ lint: ## runs golangci-lint suite of linters
check: test
test: verifiers build ## builds minio, runs linters, tests
@echo "Running unit tests"
@CGO_ENABLED=0 go test -tags kqueue ./...
@MINIO_API_REQUESTS_MAX=10000 CGO_ENABLED=0 go test -tags kqueue ./...
test-decom: install
@echo "Running minio decom tests"
@env bash $(PWD)/docs/distributed/decom.sh
@env bash $(PWD)/docs/distributed/decom-encrypted.sh
@env bash $(PWD)/docs/distributed/decom-encrypted-sse-s3.sh
@env bash $(PWD)/docs/distributed/decom-compressed-sse-s3.sh
test-upgrade: build
@echo "Running minio upgrade tests"
@@ -51,13 +58,14 @@ test-race: verifiers build ## builds minio, runs linters, tests (race)
test-iam: build ## verify IAM (external IDP, etcd backends)
@echo "Running tests for IAM (external IDP, etcd backends)"
@CGO_ENABLED=0 go test -tags kqueue -v -run TestIAM* ./cmd
@MINIO_API_REQUESTS_MAX=10000 CGO_ENABLED=0 go test -tags kqueue -v -run TestIAM* ./cmd
@echo "Running tests for IAM (external IDP, etcd backends) with -race"
@CGO_ENABLED=1 go test -race -tags kqueue -v -run TestIAM* ./cmd
@MINIO_API_REQUESTS_MAX=10000 GORACE=history_size=7 CGO_ENABLED=1 go test -race -tags kqueue -v -run TestIAM* ./cmd
test-replication: install ## verify multi site replication
@echo "Running tests for replicating three sites"
@(env bash $(PWD)/docs/bucket/replication/setup_3site_replication.sh)
@(env bash $(PWD)/docs/bucket/replication/setup_2site_existing_replication.sh)
test-site-replication-ldap: install ## verify automatic site replication
@echo "Running tests for automatic site replication of IAM (with LDAP)"
@@ -73,15 +81,30 @@ test-site-replication-minio: install ## verify automatic site replication
verify: ## verify minio various setups
@echo "Verifying build with race"
@CGO_ENABLED=1 go build -race -tags kqueue -trimpath --ldflags "$(LDFLAGS)" -o $(PWD)/minio 1>/dev/null
@GORACE=history_size=7 CGO_ENABLED=1 go build -race -tags kqueue -trimpath --ldflags "$(LDFLAGS)" -o $(PWD)/minio 1>/dev/null
@(env bash $(PWD)/buildscripts/verify-build.sh)
verify-healing: ## verify healing and replacing disks with minio binary
@echo "Verify healing build with race"
@CGO_ENABLED=1 go build -race -tags kqueue -trimpath --ldflags "$(LDFLAGS)" -o $(PWD)/minio 1>/dev/null
@GORACE=history_size=7 CGO_ENABLED=1 go build -race -tags kqueue -trimpath --ldflags "$(LDFLAGS)" -o $(PWD)/minio 1>/dev/null
@(env bash $(PWD)/buildscripts/verify-healing.sh)
@(env bash $(PWD)/buildscripts/unaligned-healing.sh)
verify-healing-with-root-disks: ## verify healing root disks
@echo "Verify healing with root drives"
@GORACE=history_size=7 CGO_ENABLED=1 go build -race -tags kqueue -trimpath --ldflags "$(LDFLAGS)" -o $(PWD)/minio 1>/dev/null
@(env bash $(PWD)/buildscripts/verify-healing-with-root-disks.sh)
verify-healing-with-rewrite: ## verify healing to rewrite old xl.meta -> new xl.meta
@echo "Verify healing with rewrite"
@GORACE=history_size=7 CGO_ENABLED=1 go build -race -tags kqueue -trimpath --ldflags "$(LDFLAGS)" -o $(PWD)/minio 1>/dev/null
@(env bash $(PWD)/buildscripts/rewrite-old-new.sh)
verify-healing-inconsistent-versions: ## verify resolving inconsistent versions
@echo "Verify resolving inconsistent versions build with race"
@GORACE=history_size=7 CGO_ENABLED=1 go build -race -tags kqueue -trimpath --ldflags "$(LDFLAGS)" -o $(PWD)/minio 1>/dev/null
@(env bash $(PWD)/buildscripts/resolve-right-versions.sh)
build: checks ## builds minio to $(PWD)
@echo "Building minio binary to './minio'"
@CGO_ENABLED=0 go build -tags kqueue -trimpath --ldflags "$(LDFLAGS)" -o $(PWD)/minio 1>/dev/null
@@ -122,6 +145,8 @@ clean: ## cleanup all generated assets
@echo "Cleaning up all the generated files"
@find . -name '*.test' | xargs rm -fv
@find . -name '*~' | xargs rm -fv
@find . -name '.#*#' | xargs rm -fv
@find . -name '#*#' | xargs rm -fv
@rm -rvf minio
@rm -rvf build
@rm -rvf release

README.fips.md (new file)

@@ -0,0 +1,7 @@
# MinIO FIPS Builds
MinIO creates FIPS builds using a patched version of the Go compiler (that uses BoringCrypto, from BoringSSL, which is [FIPS 140-2 validated](https://csrc.nist.gov/csrc/media/projects/cryptographic-module-validation-program/documents/security-policies/140sp2964.pdf)) published by the Golang Team [here](https://github.com/golang/go/tree/dev.boringcrypto/misc/boring).
MinIO FIPS executables are available at <http://dl.min.io> - they are only published for `linux-amd64` architecture as binary files with the suffix `.fips`. We also publish corresponding container images to our official image repositories.
We are not making any statements or representations about the suitability of this code or build in relation to the FIPS 140-2 standard. Interested users will have to evaluate for themselves whether this is useful for their own purposes.


@@ -125,7 +125,7 @@ You can also connect using any S3-compatible tool, such as the MinIO Client `mc`
## Install from Source
Use the following commands to compile and run a standalone MinIO server from source. Source installation is only intended for developers and advanced users. If you do not have a working Golang environment, please follow [How to install Golang](https://golang.org/doc/install). Minimum version required is [go1.17](https://golang.org/dl/#stable)
Use the following commands to compile and run a standalone MinIO server from source. Source installation is only intended for developers and advanced users. If you do not have a working Golang environment, please follow [How to install Golang](https://golang.org/doc/install). Minimum version required is [go1.18](https://golang.org/dl/#stable)
```sh
GO111MODULE=on go install github.com/minio/minio@latest
@@ -196,12 +196,6 @@ iptables -A INPUT -p tcp --dport 9000:9010 -j ACCEPT
service iptables restart
```
## Pre-existing data
When deployed on a single drive, MinIO server lets clients access any pre-existing data in the data directory. For example, if MinIO is started with the command `minio server /mnt/data`, any pre-existing data in the `/mnt/data` directory would be accessible to the clients.
The above statement is also valid for all gateway backends.
## Test MinIO Connectivity
### Test using MinIO Console
@@ -242,16 +236,16 @@ Upgrades require zero downtime in MinIO, all upgrades are non-disruptive, all tr
mc admin update <minio alias, e.g., myminio>
```
- For deployments without external internet access (e.g. airgapped environments), download the binary from <https://dl.min.io> and replace the existing MinIO binary let's say for example `/opt/bin/minio`, apply executable permissions `chmod +x /opt/bin/minio` and do `mc admin service restart alias/`.
- For deployments without external internet access (e.g. airgapped environments), download the binary from <https://dl.min.io> and replace the existing MinIO binary let's say for example `/opt/bin/minio`, apply executable permissions `chmod +x /opt/bin/minio` and proceed to perform `mc admin service restart alias/`.
- For RPM/DEB installations, upgrade packages **parallelly** on all servers. Once upgraded, perform `systemctl restart minio` across all nodes in **parallel**. RPM/DEB based installations are usually automated using [`ansible`](https://github.com/minio/ansible-minio).
- For installations using Systemd MinIO service, upgrade via RPM/DEB packages **parallelly** on all servers or replace the binary lets say `/opt/bin/minio` on all nodes, apply executable permissions `chmod +x /opt/bin/minio` and process to perform `mc admin service restart alias/`.
### Upgrade Checklist
- Test all upgrades in a lower environment (DEV, QA, UAT) before applying to production. Performing blind upgrades in production environments carries significant risk.
- Read the release notes for the targeted MinIO release *before* performing any upgrade; there is no forced requirement to upgrade to the latest release every week. Some releases may not be relevant to your setup, so avoid upgrading a running production system unnecessarily.
- If you plan to use `mc admin update`, the MinIO process must have write access to the parent directory where the binary is present on the host system (for example `/opt/bin`). This is needed for MinIO to download the latest binary from <https://dl.min.io> and save it locally for upgrades.
- `mc admin update` is not supported and should be avoided in Kubernetes/container environments; upgrade containers by upgrading the relevant container images instead.
- **We do not recommend upgrading one MinIO server at a time; the product is designed to support parallel upgrades. Please follow our recommended guidelines (see the restart sketch after this list).**
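For the parallel restart recommended above, a minimal sketch assuming SSH access to every node and illustrative hostnames:

```sh
# Restart the MinIO service on all nodes in parallel; 'wait' blocks until done.
for host in minio1.example.net minio2.example.net minio3.example.net minio4.example.net; do
  ssh "$host" 'sudo systemctl restart minio' &
done
wait
```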
## Explore Further


@@ -24,14 +24,18 @@ import (
"fmt"
"os"
"os/exec"
"strconv"
"strings"
"time"
)
func genLDFlags(version string) string {
releaseTag, date := releaseTag(version)
copyrightYear := strconv.Itoa(date.Year())
ldflagsStr := "-s -w"
ldflagsStr += " -X github.com/minio/minio/cmd.Version=" + version
ldflagsStr += " -X github.com/minio/minio/cmd.ReleaseTag=" + releaseTag(version)
ldflagsStr += " -X github.com/minio/minio/cmd.CopyrightYear=" + copyrightYear
ldflagsStr += " -X github.com/minio/minio/cmd.ReleaseTag=" + releaseTag
ldflagsStr += " -X github.com/minio/minio/cmd.CommitID=" + commitID()
ldflagsStr += " -X github.com/minio/minio/cmd.ShortCommitID=" + commitID()[:12]
ldflagsStr += " -X github.com/minio/minio/cmd.GOPATH=" + os.Getenv("GOPATH")
@@ -40,7 +44,7 @@ func genLDFlags(version string) string {
}
// releaseTag computes the release tag for easy git tagging.
func releaseTag(version string) string {
func releaseTag(version string) (string, time.Time) {
relPrefix := "DEVELOPMENT"
if prefix := os.Getenv("MINIO_RELEASE"); prefix != "" {
relPrefix = prefix
@@ -53,14 +57,17 @@ func releaseTag(version string) string {
relTag := strings.Replace(version, " ", "-", -1)
relTag = strings.Replace(relTag, ":", "-", -1)
t, err := time.Parse("2006-01-02T15-04-05Z", relTag)
if err != nil {
panic(err)
}
relTag = strings.Replace(relTag, ",", "", -1)
relTag = relPrefix + "." + relTag
if relSuffix != "" {
relTag += "." + relSuffix
}
return relTag
return relTag, t
}
// commitID returns the abbreviated commit-id hash of the last commit.


@@ -0,0 +1,87 @@
//go:build ignore
// +build ignore
//
// MinIO Object Storage (c) 2022 MinIO, Inc.
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
//
package main
import (
"context"
"encoding/json"
"fmt"
"log"
"os"
"time"
"github.com/minio/madmin-go"
)
func main() {
// Note: YOUR-ACCESSKEYID, YOUR-SECRETACCESSKEY are
// dummy values, please replace them with original values.
// API requests are secure (HTTPS) if secure=true and insecure (HTTP) otherwise.
// New returns a MinIO Admin client object.
madmClnt, err := madmin.New(os.Args[1], os.Args[2], os.Args[3], false)
if err != nil {
log.Fatalln(err)
}
opts := madmin.HealOpts{
Recursive: true, // recursively heal all objects at 'prefix'
Remove: true, // remove content that has lost quorum and not recoverable
Recreate: true, // rewrite all old non-inlined xl.meta to new xl.meta
ScanMode: madmin.HealNormalScan, // by default do not do 'deep' scanning
}
start, _, err := madmClnt.Heal(context.Background(), "healing-rewrite-bucket", "", opts, "", false, false)
if err != nil {
log.Fatalln(err)
}
fmt.Println("Healstart sequence ===")
enc := json.NewEncoder(os.Stdout)
if err = enc.Encode(&start); err != nil {
log.Fatalln(err)
}
fmt.Println()
for {
_, status, err := madmClnt.Heal(context.Background(), "healing-rewrite-bucket", "", opts, start.ClientToken, false, false)
if err != nil {
log.Fatalln(err)
}
if status.Summary == "finished" {
fmt.Println("Healstatus on items ===")
for _, item := range status.Items {
if err = enc.Encode(&item); err != nil {
log.Fatalln(err)
}
}
break
}
if status.Summary == "stopped" {
fmt.Println("Healstatus on items ===")
fmt.Println("Heal failed with", status.FailureDetail)
break
}
for _, item := range status.Items {
if err = enc.Encode(&item); err != nil {
log.Fatalln(err)
}
}
time.Sleep(time.Second)
}
}
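The `rewrite-old-new.sh` script later in this diff invokes this helper with the endpoint, access key, and secret key as positional arguments:

```sh
# Trigger a manual heal of 'healing-rewrite-bucket' on a local deployment.
go run ./buildscripts/heal-manual.go "127.0.0.1:9000" "minio" "minio123"
```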


@@ -2,6 +2,9 @@
set -e
export GORACE="history_size=7"
export MINIO_API_REQUESTS_MAX=10000
## TODO remove `dsync` from race detector once this is merged and released https://go-review.googlesource.com/c/go/+/333529/
for d in $(go list ./... | grep -v dsync); do
CGO_ENABLED=1 go test -v -race --timeout 100m "$d"


@@ -0,0 +1,72 @@
#!/bin/bash -e
set -E
set -o pipefail
set -x
WORK_DIR="$PWD/.verify-$RANDOM"
MINIO_CONFIG_DIR="$WORK_DIR/.minio"
MINIO=( "$PWD/minio" --config-dir "$MINIO_CONFIG_DIR" server )
if [ ! -x "$PWD/minio" ]; then
echo "minio executable binary not found in current directory"
exit 1
fi
function start_minio_5drive() {
start_port=$1
export MINIO_ROOT_USER=minio
export MINIO_ROOT_PASSWORD=minio123
export MC_HOST_minio="http://minio:minio123@127.0.0.1:${start_port}/"
unset MINIO_KMS_AUTO_ENCRYPTION # do not auto-encrypt objects
export MINIO_CI_CD=1
MC_BUILD_DIR="mc-$RANDOM"
if ! git clone --quiet https://github.com/minio/mc "$MC_BUILD_DIR"; then
echo "failed to download https://github.com/minio/mc"
purge "${MC_BUILD_DIR}"
exit 1
fi
(cd "${MC_BUILD_DIR}" && go build -o "$WORK_DIR/mc")
# remove mc source.
purge "${MC_BUILD_DIR}"
"${WORK_DIR}/mc" cp --quiet -r "buildscripts/cicd-corpus/" "${WORK_DIR}/cicd-corpus/"
"${MINIO[@]}" --address ":$start_port" "${WORK_DIR}/cicd-corpus/disk{1...5}" > "${WORK_DIR}/server1.log" 2>&1 &
pid=$!
disown $pid
sleep 5
if ! ps -p ${pid} 1>&2 >/dev/null; then
echo "server1 log:"
cat "${WORK_DIR}/server1.log"
echo "FAILED"
purge "$WORK_DIR"
exit 1
fi
"${WORK_DIR}/mc" stat minio/bucket/testobj
pkill minio
sleep 3
}
function main() {
start_port=$(shuf -i 10000-65000 -n 1)
start_minio_5drive ${start_port}
}
function purge()
{
rm -rf "$1"
}
( main "$@" )
rv=$?
purge "$WORK_DIR"
exit "$rv"

buildscripts/rewrite-old-new.sh (executable file, 151 lines)

@@ -0,0 +1,151 @@
#!/bin/bash -e
set -E
set -o pipefail
set -x
WORK_DIR="$PWD/.verify-$RANDOM"
MINIO_CONFIG_DIR="$WORK_DIR/.minio"
MINIO_OLD=( "$PWD/minio.RELEASE.2020-10-28T08-16-50Z" --config-dir "$MINIO_CONFIG_DIR" server )
MINIO=( "$PWD/minio" --config-dir "$MINIO_CONFIG_DIR" server )
if [ ! -x "$PWD/minio" ]; then
echo "minio executable binary not found in current directory"
exit 1
fi
function download_old_release() {
if [ ! -f minio.RELEASE.2020-10-28T08-16-50Z ]; then
curl --silent -O https://dl.minio.io/server/minio/release/linux-amd64/archive/minio.RELEASE.2020-10-28T08-16-50Z
chmod a+x minio.RELEASE.2020-10-28T08-16-50Z
fi
}
function verify_rewrite() {
start_port=$1
export MINIO_ACCESS_KEY=minio
export MINIO_SECRET_KEY=minio123
export MC_HOST_minio="http://minio:minio123@127.0.0.1:${start_port}/"
unset MINIO_KMS_AUTO_ENCRYPTION # do not auto-encrypt objects
export MINIO_CI_CD=1
MC_BUILD_DIR="mc-$RANDOM"
if ! git clone --quiet https://github.com/minio/mc "$MC_BUILD_DIR"; then
echo "failed to download https://github.com/minio/mc"
purge "${MC_BUILD_DIR}"
exit 1
fi
(cd "${MC_BUILD_DIR}" && go build -o "$WORK_DIR/mc")
# remove mc source.
purge "${MC_BUILD_DIR}"
"${MINIO_OLD[@]}" --address ":$start_port" "${WORK_DIR}/xl{1...16}" > "${WORK_DIR}/server1.log" 2>&1 &
pid=$!
disown $pid
sleep 10
if ! ps -p ${pid} 1>&2 >/dev/null; then
echo "server1 log:"
cat "${WORK_DIR}/server1.log"
echo "FAILED"
purge "$WORK_DIR"
exit 1
fi
"${WORK_DIR}/mc" mb minio/healing-rewrite-bucket --quiet --with-lock
"${WORK_DIR}/mc" cp \
buildscripts/verify-build.sh \
minio/healing-rewrite-bucket/ \
--disable-multipart --quiet
"${WORK_DIR}/mc" cp \
buildscripts/verify-build.sh \
minio/healing-rewrite-bucket/ \
--disable-multipart --quiet
"${WORK_DIR}/mc" cp \
buildscripts/verify-build.sh \
minio/healing-rewrite-bucket/ \
--disable-multipart --quiet
kill ${pid}
sleep 3
"${MINIO[@]}" --address ":$start_port" "${WORK_DIR}/xl{1...16}" > "${WORK_DIR}/server1.log" 2>&1 &
pid=$!
disown $pid
sleep 10
if ! ps -p ${pid} 1>&2 >/dev/null; then
echo "server1 log:"
cat "${WORK_DIR}/server1.log"
echo "FAILED"
purge "$WORK_DIR"
exit 1
fi
go build ./docs/debugging/s3-check-md5/
if ! ./s3-check-md5 \
-debug \
-versions \
-access-key minio \
-secret-key minio123 \
-endpoint http://127.0.0.1:${start_port}/ 2>&1 | grep INTACT; then
echo "server1 log:"
cat "${WORK_DIR}/server1.log"
echo "FAILED"
mkdir -p inspects
(cd inspects; "${WORK_DIR}/mc" admin inspect minio/healing-rewrite-bucket/verify-build.sh/**)
"${WORK_DIR}/mc" mb play/inspects
"${WORK_DIR}/mc" mirror inspects play/inspects
purge "$WORK_DIR"
exit 1
fi
go run ./buildscripts/heal-manual.go "127.0.0.1:${start_port}" "minio" "minio123"
sleep 1
if ! ./s3-check-md5 \
-debug \
-versions \
-access-key minio \
-secret-key minio123 \
-endpoint http://127.0.0.1:${start_port}/ 2>&1 | grep INTACT; then
echo "server1 log:"
cat "${WORK_DIR}/server1.log"
echo "FAILED"
mkdir -p inspects
(cd inspects; "${WORK_DIR}/mc" admin inspect minio/healing-rewrite-bucket/verify-build.sh/**)
"${WORK_DIR}/mc" mb play/inspects
"${WORK_DIR}/mc" mirror inspects play/inspects
purge "$WORK_DIR"
exit 1
fi
kill ${pid}
}
function main() {
download_old_release
start_port=$(shuf -i 10000-65000 -n 1)
verify_rewrite ${start_port}
}
function purge()
{
rm -rf "$1"
}
( main "$@" )
rv=$?
purge "$WORK_DIR"
exit "$rv"


@@ -0,0 +1,97 @@
#!/bin/bash -e
set -E
set -o pipefail
set -x
if [ ! -x "$PWD/minio" ]; then
echo "minio executable binary not found in current directory"
exit 1
fi
WORK_DIR="$(mktemp -d)"
MINIO_CONFIG_DIR="$WORK_DIR/.minio"
MINIO=( "$PWD/minio" --config-dir "$MINIO_CONFIG_DIR" server )
function start_minio() {
start_port=$1
export MINIO_ROOT_USER=minio
export MINIO_ROOT_PASSWORD=minio123
unset MINIO_KMS_AUTO_ENCRYPTION # do not auto-encrypt objects
unset MINIO_CI_CD
unset CI
args=()
for i in $(seq 1 4); do
args+=("http://localhost:$[${start_port}+$i]${WORK_DIR}/mnt/disk$i/ ")
done
for i in $(seq 1 4); do
"${MINIO[@]}" --address ":$[$start_port+$i]" ${args[@]} 2>&1 >"${WORK_DIR}/server$i.log" &
done
# Wait until all nodes return 403
for i in $(seq 1 4); do
while [ "$(curl -m 1 -s -o /dev/null -w "%{http_code}" http://localhost:$[$start_port+$i])" -ne "403" ]; do
echo -n ".";
sleep 1;
done
done
}
# Prepare fake disks with losetup
function prepare_block_devices() {
mkdir -p ${WORK_DIR}/disks/ ${WORK_DIR}/mnt/
for i in 1 2 3 4; do
dd if=/dev/zero of=${WORK_DIR}/disks/img.$i bs=1M count=2048
mkfs.ext4 -F ${WORK_DIR}/disks/img.$i
sudo mknod /dev/minio-loopdisk$i b 7 $[256-$i]
sudo losetup /dev/minio-loopdisk$i ${WORK_DIR}/disks/img.$i
mkdir -p ${WORK_DIR}/mnt/disk$i/
sudo mount /dev/minio-loopdisk$i ${WORK_DIR}/mnt/disk$i/
sudo chown "$(id -u):$(id -g)" /dev/minio-loopdisk$i ${WORK_DIR}/mnt/disk$i/
done
}
# Start a distributed MinIO setup, unmount one disk and check if it is formatted
function main() {
start_port=$(shuf -i 10000-65000 -n 1)
start_minio ${start_port}
# Unmount the disk; after the unmount, the device id of
# /tmp/xxx/mnt/disk4 will be the same as '/' and it
# will be detected as a root disk
while [ "$u" != "0" ]; do
sudo umount ${WORK_DIR}/mnt/disk4/
u=$?
sleep 1
done
# Wait until MinIO self heal kicks in
sleep 60
if [ -f ${WORK_DIR}/mnt/disk4/.minio.sys/format.json ]; then
echo "A root disk is formatted unexpectedely"
cat "${WORK_DIR}/server4.log"
exit -1
fi
}
function cleanup() {
pkill minio
sudo umount ${WORK_DIR}/mnt/disk{1..3}/
sudo rm /dev/minio-loopdisk*
rm -rf "$WORK_DIR"
}
( prepare_block_devices )
( main "$@" )
rv=$?
cleanup
exit "$rv"


@@ -80,7 +80,7 @@ func (api objectAPIHandlers) PutBucketACLHandler(w http.ResponseWriter, r *http.
}
// Before proceeding validate if bucket exists.
_, err := objAPI.GetBucketInfo(ctx, bucket)
_, err := objAPI.GetBucketInfo(ctx, bucket, BucketOptions{})
if err != nil {
writeErrorResponse(ctx, w, toAPIError(ctx, err), r.URL)
return
@@ -142,7 +142,7 @@ func (api objectAPIHandlers) GetBucketACLHandler(w http.ResponseWriter, r *http.
}
// Before proceeding validate if bucket exists.
_, err := objAPI.GetBucketInfo(ctx, bucket)
_, err := objAPI.GetBucketInfo(ctx, bucket, BucketOptions{})
if err != nil {
writeErrorResponse(ctx, w, toAPIError(ctx, err), r.URL)
return


@@ -18,15 +18,31 @@
package cmd
import (
"bytes"
"encoding/base64"
"encoding/json"
"encoding/xml"
"errors"
"fmt"
"io"
"io/ioutil"
"net/http"
"strings"
"time"
"github.com/gorilla/mux"
jsoniter "github.com/json-iterator/go"
"github.com/klauspost/compress/zip"
"github.com/minio/kes"
"github.com/minio/madmin-go"
"github.com/minio/minio-go/v7/pkg/tags"
"github.com/minio/minio/internal/bucket/lifecycle"
objectlock "github.com/minio/minio/internal/bucket/object/lock"
"github.com/minio/minio/internal/bucket/versioning"
"github.com/minio/minio/internal/event"
"github.com/minio/minio/internal/kms"
"github.com/minio/minio/internal/logger"
"github.com/minio/pkg/bucket/policy"
iampolicy "github.com/minio/pkg/iam/policy"
)
@@ -47,14 +63,13 @@ func (a adminAPIHandlers) PutBucketQuotaConfigHandler(w http.ResponseWriter, r *
objectAPI, _ := validateAdminReq(ctx, w, r, iampolicy.SetBucketQuotaAdminAction)
if objectAPI == nil {
writeErrorResponseJSON(ctx, w, errorCodes.ToAPIErr(ErrServerNotInitialized), r.URL)
return
}
vars := mux.Vars(r)
bucket := pathClean(vars["bucket"])
if _, err := objectAPI.GetBucketInfo(ctx, bucket); err != nil {
if _, err := objectAPI.GetBucketInfo(ctx, bucket, BucketOptions{}); err != nil {
writeErrorResponseJSON(ctx, w, toAPIError(ctx, err), r.URL)
return
}
@@ -76,15 +91,17 @@ func (a adminAPIHandlers) PutBucketQuotaConfigHandler(w http.ResponseWriter, r *
return
}
if err = globalBucketMetadataSys.Update(ctx, bucket, bucketQuotaConfigFile, data); err != nil {
updatedAt, err := globalBucketMetadataSys.Update(ctx, bucket, bucketQuotaConfigFile, data)
if err != nil {
writeErrorResponse(ctx, w, toAPIError(ctx, err), r.URL)
return
}
bucketMeta := madmin.SRBucketMeta{
Type: madmin.SRBucketMetaTypeQuotaConfig,
Bucket: bucket,
Quota: data,
Type: madmin.SRBucketMetaTypeQuotaConfig,
Bucket: bucket,
Quota: data,
UpdatedAt: updatedAt,
}
if quotaConfig.Quota == 0 {
bucketMeta.Quota = nil
@@ -108,19 +125,18 @@ func (a adminAPIHandlers) GetBucketQuotaConfigHandler(w http.ResponseWriter, r *
objectAPI, _ := validateAdminReq(ctx, w, r, iampolicy.GetBucketQuotaAdminAction)
if objectAPI == nil {
writeErrorResponseJSON(ctx, w, errorCodes.ToAPIErr(ErrServerNotInitialized), r.URL)
return
}
vars := mux.Vars(r)
bucket := pathClean(vars["bucket"])
if _, err := objectAPI.GetBucketInfo(ctx, bucket); err != nil {
if _, err := objectAPI.GetBucketInfo(ctx, bucket, BucketOptions{}); err != nil {
writeErrorResponseJSON(ctx, w, toAPIError(ctx, err), r.URL)
return
}
config, err := globalBucketMetadataSys.GetQuotaConfig(ctx, bucket)
config, _, err := globalBucketMetadataSys.GetQuotaConfig(ctx, bucket)
if err != nil {
writeErrorResponseJSON(ctx, w, toAdminAPIErr(ctx, err), r.URL)
return
@@ -145,7 +161,7 @@ func (a adminAPIHandlers) SetRemoteTargetHandler(w http.ResponseWriter, r *http.
bucket := pathClean(vars["bucket"])
update := r.Form.Get("update") == "true"
if !globalIsErasure {
if globalIsGateway {
writeErrorResponseJSON(ctx, w, errorCodes.ToAPIErr(ErrNotImplemented), r.URL)
return
}
@@ -153,12 +169,11 @@ func (a adminAPIHandlers) SetRemoteTargetHandler(w http.ResponseWriter, r *http.
// Get current object layer instance.
objectAPI, _ := validateAdminReq(ctx, w, r, iampolicy.SetBucketTargetAction)
if objectAPI == nil {
writeErrorResponseJSON(ctx, w, errorCodes.ToAPIErr(ErrServerNotInitialized), r.URL)
return
}
// Check if bucket exists.
if _, err := objectAPI.GetBucketInfo(ctx, bucket); err != nil {
if _, err := objectAPI.GetBucketInfo(ctx, bucket, BucketOptions{}); err != nil {
writeErrorResponseJSON(ctx, w, toAPIError(ctx, err), r.URL)
return
}
@@ -251,7 +266,7 @@ func (a adminAPIHandlers) SetRemoteTargetHandler(w http.ResponseWriter, r *http.
writeErrorResponseJSON(ctx, w, errorCodes.ToAPIErrWithErr(ErrAdminConfigBadJSON, err), r.URL)
return
}
if err = globalBucketMetadataSys.Update(ctx, bucket, bucketTargetsFile, tgtBytes); err != nil {
if _, err = globalBucketMetadataSys.Update(ctx, bucket, bucketTargetsFile, tgtBytes); err != nil {
writeErrorResponse(ctx, w, toAPIError(ctx, err), r.URL)
return
}
@@ -274,19 +289,19 @@ func (a adminAPIHandlers) ListRemoteTargetsHandler(w http.ResponseWriter, r *htt
vars := mux.Vars(r)
bucket := pathClean(vars["bucket"])
arnType := vars["type"]
if !globalIsErasure {
if globalIsGateway {
writeErrorResponseJSON(ctx, w, errorCodes.ToAPIErr(ErrNotImplemented), r.URL)
return
}
// Get current object layer instance.
objectAPI, _ := validateAdminReq(ctx, w, r, iampolicy.GetBucketTargetAction)
if objectAPI == nil {
writeErrorResponseJSON(ctx, w, errorCodes.ToAPIErr(ErrServerNotInitialized), r.URL)
return
}
if bucket != "" {
// Check if bucket exists.
if _, err := objectAPI.GetBucketInfo(ctx, bucket); err != nil {
if _, err := objectAPI.GetBucketInfo(ctx, bucket, BucketOptions{}); err != nil {
writeErrorResponseJSON(ctx, w, toAPIError(ctx, err), r.URL)
return
}
@@ -314,19 +329,18 @@ func (a adminAPIHandlers) RemoveRemoteTargetHandler(w http.ResponseWriter, r *ht
bucket := pathClean(vars["bucket"])
arn := vars["arn"]
if !globalIsErasure {
if globalIsGateway {
writeErrorResponseJSON(ctx, w, errorCodes.ToAPIErr(ErrNotImplemented), r.URL)
return
}
// Get current object layer instance.
objectAPI, _ := validateAdminReq(ctx, w, r, iampolicy.SetBucketTargetAction)
if objectAPI == nil {
writeErrorResponseJSON(ctx, w, errorCodes.ToAPIErr(ErrServerNotInitialized), r.URL)
return
}
// Check if bucket exists.
if _, err := objectAPI.GetBucketInfo(ctx, bucket); err != nil {
if _, err := objectAPI.GetBucketInfo(ctx, bucket, BucketOptions{}); err != nil {
writeErrorResponseJSON(ctx, w, toAPIError(ctx, err), r.URL)
return
}
@@ -345,7 +359,7 @@ func (a adminAPIHandlers) RemoveRemoteTargetHandler(w http.ResponseWriter, r *ht
writeErrorResponseJSON(ctx, w, errorCodes.ToAPIErrWithErr(ErrAdminConfigBadJSON, err), r.URL)
return
}
if err = globalBucketMetadataSys.Update(ctx, bucket, bucketTargetsFile, tgtBytes); err != nil {
if _, err = globalBucketMetadataSys.Update(ctx, bucket, bucketTargetsFile, tgtBytes); err != nil {
writeErrorResponse(ctx, w, toAPIError(ctx, err), r.URL)
return
}
@@ -353,3 +367,794 @@ func (a adminAPIHandlers) RemoveRemoteTargetHandler(w http.ResponseWriter, r *ht
// Write success response.
writeSuccessNoContent(w)
}
// ExportBucketMetadataHandler - exports all bucket metadata as a zipped file
func (a adminAPIHandlers) ExportBucketMetadataHandler(w http.ResponseWriter, r *http.Request) {
ctx := newContext(r, w, "ExportBucketMetadata")
defer logger.AuditLog(ctx, w, r, mustGetClaimsFromToken(r))
bucket := pathClean(r.Form.Get("bucket"))
if globalIsGateway {
writeErrorResponseJSON(ctx, w, errorCodes.ToAPIErr(ErrNotImplemented), r.URL)
return
}
// Get current object layer instance.
objectAPI, _ := validateAdminReq(ctx, w, r, iampolicy.ExportBucketMetadataAction)
if objectAPI == nil {
return
}
var (
buckets []BucketInfo
err error
)
if bucket != "" {
// Check if bucket exists.
if _, err := objectAPI.GetBucketInfo(ctx, bucket, BucketOptions{}); err != nil {
writeErrorResponseJSON(ctx, w, toAPIError(ctx, err), r.URL)
return
}
buckets = append(buckets, BucketInfo{Name: bucket})
} else {
buckets, err = objectAPI.ListBuckets(ctx, BucketOptions{})
if err != nil {
writeErrorResponseJSON(ctx, w, toAdminAPIErr(ctx, err), r.URL)
return
}
}
// Initialize a zip writer which will provide the zipped content
// of the bucket metadata
zipWriter := zip.NewWriter(w)
defer zipWriter.Close()
rawDataFn := func(r io.Reader, filename string, sz int) error {
header, zerr := zip.FileInfoHeader(dummyFileInfo{
name: filename,
size: int64(sz),
mode: 0o600,
modTime: time.Now(),
isDir: false,
sys: nil,
})
if zerr != nil {
logger.LogIf(ctx, zerr)
return nil
}
header.Method = zip.Deflate
zwriter, zerr := zipWriter.CreateHeader(header)
if zerr != nil {
logger.LogIf(ctx, zerr)
return nil
}
if _, err := io.Copy(zwriter, r); err != nil {
logger.LogIf(ctx, err)
}
return nil
}
cfgFiles := []string{
bucketPolicyConfig,
bucketNotificationConfig,
bucketLifecycleConfig,
bucketSSEConfig,
bucketTaggingConfig,
bucketQuotaConfigFile,
objectLockConfig,
bucketVersioningConfig,
bucketReplicationConfig,
bucketTargetsFile,
}
for _, bi := range buckets {
for _, cfgFile := range cfgFiles {
cfgPath := pathJoin(bi.Name, cfgFile)
bucket := bi.Name
switch cfgFile {
case bucketNotificationConfig:
config, err := globalBucketMetadataSys.GetNotificationConfig(bucket)
if err != nil {
logger.LogIf(ctx, err)
writeErrorResponse(ctx, w, exportError(ctx, err, cfgFile, bucket), r.URL)
return
}
configData, err := xml.Marshal(config)
if err != nil {
writeErrorResponse(ctx, w, exportError(ctx, err, cfgFile, bucket), r.URL)
return
}
if err = rawDataFn(bytes.NewReader(configData), cfgPath, len(configData)); err != nil {
writeErrorResponse(ctx, w, exportError(ctx, err, cfgFile, bucket), r.URL)
return
}
case bucketLifecycleConfig:
config, err := globalBucketMetadataSys.GetLifecycleConfig(bucket)
if err != nil {
if errors.Is(err, BucketLifecycleNotFound{Bucket: bucket}) {
continue
}
logger.LogIf(ctx, err)
writeErrorResponse(ctx, w, exportError(ctx, err, cfgFile, bucket), r.URL)
return
}
configData, err := xml.Marshal(config)
if err != nil {
writeErrorResponse(ctx, w, exportError(ctx, err, cfgFile, bucket), r.URL)
return
}
if err = rawDataFn(bytes.NewReader(configData), cfgPath, len(configData)); err != nil {
writeErrorResponse(ctx, w, exportError(ctx, err, cfgFile, bucket), r.URL)
return
}
case bucketQuotaConfigFile:
config, _, err := globalBucketMetadataSys.GetQuotaConfig(ctx, bucket)
if err != nil {
if errors.Is(err, BucketQuotaConfigNotFound{Bucket: bucket}) {
continue
}
writeErrorResponseJSON(ctx, w, toAdminAPIErr(ctx, err), r.URL)
return
}
configData, err := json.Marshal(config)
if err != nil {
writeErrorResponseJSON(ctx, w, toAdminAPIErr(ctx, err), r.URL)
return
}
if err = rawDataFn(bytes.NewReader(configData), cfgPath, len(configData)); err != nil {
writeErrorResponse(ctx, w, exportError(ctx, err, cfgFile, bucket), r.URL)
return
}
case bucketSSEConfig:
config, _, err := globalBucketMetadataSys.GetSSEConfig(bucket)
if err != nil {
if errors.Is(err, BucketSSEConfigNotFound{Bucket: bucket}) {
continue
}
writeErrorResponse(ctx, w, exportError(ctx, err, cfgFile, bucket), r.URL)
return
}
configData, err := xml.Marshal(config)
if err != nil {
writeErrorResponse(ctx, w, exportError(ctx, err, cfgFile, bucket), r.URL)
return
}
if err = rawDataFn(bytes.NewReader(configData), cfgPath, len(configData)); err != nil {
writeErrorResponse(ctx, w, exportError(ctx, err, cfgFile, bucket), r.URL)
return
}
case bucketTaggingConfig:
config, _, err := globalBucketMetadataSys.GetTaggingConfig(bucket)
if err != nil {
if errors.Is(err, BucketTaggingNotFound{Bucket: bucket}) {
continue
}
writeErrorResponse(ctx, w, exportError(ctx, err, cfgFile, bucket), r.URL)
return
}
configData, err := xml.Marshal(config)
if err != nil {
writeErrorResponse(ctx, w, exportError(ctx, err, cfgFile, bucket), r.URL)
return
}
if err = rawDataFn(bytes.NewReader(configData), cfgPath, len(configData)); err != nil {
writeErrorResponse(ctx, w, exportError(ctx, err, cfgFile, bucket), r.URL)
return
}
case objectLockConfig:
config, _, err := globalBucketMetadataSys.GetObjectLockConfig(bucket)
if err != nil {
if errors.Is(err, BucketObjectLockConfigNotFound{Bucket: bucket}) {
continue
}
writeErrorResponse(ctx, w, exportError(ctx, err, cfgFile, bucket), r.URL)
return
}
configData, err := xml.Marshal(config)
if err != nil {
writeErrorResponse(ctx, w, exportError(ctx, err, cfgFile, bucket), r.URL)
return
}
if err = rawDataFn(bytes.NewReader(configData), cfgPath, len(configData)); err != nil {
writeErrorResponse(ctx, w, exportError(ctx, err, cfgFile, bucket), r.URL)
return
}
case bucketVersioningConfig:
config, _, err := globalBucketMetadataSys.GetVersioningConfig(bucket)
if err != nil {
writeErrorResponse(ctx, w, exportError(ctx, err, cfgFile, bucket), r.URL)
return
}
// ignore empty versioning configs
if config.Status != versioning.Enabled && config.Status != versioning.Suspended {
continue
}
configData, err := xml.Marshal(config)
if err != nil {
writeErrorResponse(ctx, w, exportError(ctx, err, cfgFile, bucket), r.URL)
return
}
if err = rawDataFn(bytes.NewReader(configData), cfgPath, len(configData)); err != nil {
writeErrorResponse(ctx, w, exportError(ctx, err, cfgFile, bucket), r.URL)
return
}
case bucketReplicationConfig:
config, _, err := globalBucketMetadataSys.GetReplicationConfig(ctx, bucket)
if err != nil {
if errors.Is(err, BucketReplicationConfigNotFound{Bucket: bucket}) {
continue
}
writeErrorResponse(ctx, w, exportError(ctx, err, cfgFile, bucket), r.URL)
return
}
configData, err := xml.Marshal(config)
if err != nil {
writeErrorResponse(ctx, w, exportError(ctx, err, cfgFile, bucket), r.URL)
return
}
if err = rawDataFn(bytes.NewReader(configData), cfgPath, len(configData)); err != nil {
writeErrorResponse(ctx, w, exportError(ctx, err, cfgFile, bucket), r.URL)
return
}
case bucketTargetsFile:
config, err := globalBucketMetadataSys.GetBucketTargetsConfig(bucket)
if err != nil {
if errors.Is(err, BucketRemoteTargetNotFound{Bucket: bucket}) {
continue
}
writeErrorResponseJSON(ctx, w, toAdminAPIErr(ctx, err), r.URL)
return
}
configData, err := xml.Marshal(config)
if err != nil {
writeErrorResponse(ctx, w, exportError(ctx, err, cfgFile, bucket), r.URL)
return
}
if err = rawDataFn(bytes.NewReader(configData), cfgPath, len(configData)); err != nil {
writeErrorResponse(ctx, w, exportError(ctx, err, cfgFile, bucket), r.URL)
return
}
}
}
}
}
type importMetaReport struct {
madmin.BucketMetaImportErrs
}
func (i *importMetaReport) SetStatus(bucket, fname string, err error) {
st := i.Buckets[bucket]
var errMsg string
if err != nil {
errMsg = err.Error()
}
switch fname {
case bucketPolicyConfig:
st.Policy = madmin.MetaStatus{IsSet: true, Err: errMsg}
case bucketNotificationConfig:
st.Notification = madmin.MetaStatus{IsSet: true, Err: errMsg}
case bucketLifecycleConfig:
st.Lifecycle = madmin.MetaStatus{IsSet: true, Err: errMsg}
case bucketSSEConfig:
st.SSEConfig = madmin.MetaStatus{IsSet: true, Err: errMsg}
case bucketTaggingConfig:
st.Tagging = madmin.MetaStatus{IsSet: true, Err: errMsg}
case bucketQuotaConfigFile:
st.Quota = madmin.MetaStatus{IsSet: true, Err: errMsg}
case objectLockConfig:
st.ObjectLock = madmin.MetaStatus{IsSet: true, Err: errMsg}
case bucketVersioningConfig:
st.Versioning = madmin.MetaStatus{IsSet: true, Err: errMsg}
default:
st.Err = errMsg
}
i.Buckets[bucket] = st
}
// ImportBucketMetadataHandler - imports all bucket metadata from a zipped file and overwrites the bucket metadata config
// There are some caveats regarding the following:
// 1. Object lock config - object lock should have been specified at time of bucket creation. Only default retention settings are imported here.
// 2. Replication config - is omitted from import as remote target credentials are not available from exported data for security reasons.
// 3. Lifecycle config - if transition rules are present, the tier name needs to have been defined.
func (a adminAPIHandlers) ImportBucketMetadataHandler(w http.ResponseWriter, r *http.Request) {
ctx := newContext(r, w, "ImportBucketMetadata")
defer logger.AuditLog(ctx, w, r, mustGetClaimsFromToken(r))
if globalIsGateway {
writeErrorResponseJSON(ctx, w, errorCodes.ToAPIErr(ErrNotImplemented), r.URL)
return
}
// Get current object layer instance.
objectAPI, _ := validateAdminReq(ctx, w, r, iampolicy.ImportBucketMetadataAction)
if objectAPI == nil {
return
}
data, err := ioutil.ReadAll(r.Body)
if err != nil {
writeErrorResponseJSON(ctx, w, errorCodes.ToAPIErr(ErrInvalidRequest), r.URL)
return
}
reader := bytes.NewReader(data)
zr, err := zip.NewReader(reader, int64(len(data)))
if err != nil {
writeErrorResponseJSON(ctx, w, errorCodes.ToAPIErr(ErrInvalidRequest), r.URL)
return
}
bucketMap := make(map[string]struct{}, 1)
rpt := importMetaReport{
madmin.BucketMetaImportErrs{
Buckets: make(map[string]madmin.BucketStatus, len(zr.File)),
},
}
// import object lock config if any - order of import matters here.
for _, file := range zr.File {
slc := strings.Split(file.Name, slashSeparator)
if len(slc) != 2 { // expecting bucket/configfile in the zipfile
rpt.SetStatus(file.Name, "", fmt.Errorf("malformed zip - expecting format bucket/<config.json>"))
continue
}
bucket, fileName := slc[0], slc[1]
switch fileName {
case objectLockConfig:
reader, err := file.Open()
if err != nil {
rpt.SetStatus(bucket, fileName, err)
continue
}
config, err := objectlock.ParseObjectLockConfig(reader)
if err != nil {
rpt.SetStatus(bucket, fileName, fmt.Errorf("%s (%s)", errorCodes[ErrMalformedXML].Description, err))
continue
}
configData, err := xml.Marshal(config)
if err != nil {
rpt.SetStatus(bucket, fileName, err)
continue
}
if _, ok := bucketMap[bucket]; !ok {
opts := MakeBucketOptions{
LockEnabled: config.ObjectLockEnabled == "Enabled",
}
err = objectAPI.MakeBucketWithLocation(ctx, bucket, opts)
if err != nil {
if _, ok := err.(BucketExists); !ok {
rpt.SetStatus(bucket, fileName, err)
continue
}
}
bucketMap[bucket] = struct{}{}
}
// Deny object locking configuration settings on existing buckets without object lock enabled.
if _, _, err = globalBucketMetadataSys.GetObjectLockConfig(bucket); err != nil {
rpt.SetStatus(bucket, fileName, err)
continue
}
updatedAt, err := globalBucketMetadataSys.Update(ctx, bucket, objectLockConfig, configData)
if err != nil {
rpt.SetStatus(bucket, fileName, err)
continue
}
rpt.SetStatus(bucket, fileName, nil)
// Call site replication hook.
//
// We encode the xml bytes as base64 to ensure there are no encoding
// errors.
cfgStr := base64.StdEncoding.EncodeToString(configData)
if err = globalSiteReplicationSys.BucketMetaHook(ctx, madmin.SRBucketMeta{
Type: madmin.SRBucketMetaTypeObjectLockConfig,
Bucket: bucket,
ObjectLockConfig: &cfgStr,
UpdatedAt: updatedAt,
}); err != nil {
rpt.SetStatus(bucket, fileName, err)
continue
}
}
}
// import versioning metadata
for _, file := range zr.File {
slc := strings.Split(file.Name, slashSeparator)
if len(slc) != 2 { // expecting bucket/configfile in the zipfile
rpt.SetStatus(file.Name, "", fmt.Errorf("malformed zip - expecting format bucket/<config.json>"))
continue
}
bucket, fileName := slc[0], slc[1]
switch fileName {
case bucketVersioningConfig:
reader, err := file.Open()
if err != nil {
rpt.SetStatus(bucket, fileName, err)
continue
}
v, err := versioning.ParseConfig(io.LimitReader(reader, maxBucketVersioningConfigSize))
if err != nil {
rpt.SetStatus(bucket, fileName, err)
continue
}
if _, ok := bucketMap[bucket]; !ok {
if err = objectAPI.MakeBucketWithLocation(ctx, bucket, MakeBucketOptions{}); err != nil {
if _, ok := err.(BucketExists); !ok {
rpt.SetStatus(bucket, fileName, err)
continue
}
}
bucketMap[bucket] = struct{}{}
}
if globalSiteReplicationSys.isEnabled() && v.Suspended() {
rpt.SetStatus(bucket, fileName, fmt.Errorf("Cluster replication is enabled for this site, so the versioning state cannot be suspended."))
continue
}
if rcfg, _ := globalBucketObjectLockSys.Get(bucket); rcfg.LockEnabled && v.Suspended() {
rpt.SetStatus(bucket, fileName, fmt.Errorf("An Object Lock configuration is present on this bucket, so the versioning state cannot be suspended."))
continue
}
if _, err := getReplicationConfig(ctx, bucket); err == nil && v.Suspended() {
rpt.SetStatus(bucket, fileName, fmt.Errorf("A replication configuration is present on this bucket, so the versioning state cannot be suspended."))
continue
}
configData, err := xml.Marshal(v)
if err != nil {
rpt.SetStatus(bucket, fileName, fmt.Errorf("%s (%s)", errorCodes[ErrMalformedXML].Description, err))
continue
}
if _, err = globalBucketMetadataSys.Update(ctx, bucket, bucketVersioningConfig, configData); err != nil {
rpt.SetStatus(bucket, fileName, err)
continue
}
rpt.SetStatus(bucket, fileName, nil)
}
}
for _, file := range zr.File {
reader, err := file.Open()
if err != nil {
rpt.SetStatus(file.Name, "", err)
continue
}
sz := file.FileInfo().Size()
slc := strings.Split(file.Name, slashSeparator)
if len(slc) != 2 { // expecting bucket/configfile in the zipfile
rpt.SetStatus(file.Name, "", fmt.Errorf("malformed zip - expecting format bucket/<config.json>"))
continue
}
bucket, fileName := slc[0], slc[1]
// create bucket if it does not exist yet.
if _, ok := bucketMap[bucket]; !ok {
err = objectAPI.MakeBucketWithLocation(ctx, bucket, MakeBucketOptions{})
if err != nil {
if _, ok := err.(BucketExists); !ok {
rpt.SetStatus(bucket, "", err)
continue
}
}
bucketMap[bucket] = struct{}{}
}
if _, ok := bucketMap[bucket]; !ok {
continue
}
switch fileName {
case bucketNotificationConfig:
config, err := event.ParseConfig(io.LimitReader(reader, sz), globalSite.Region, globalNotificationSys.targetList)
if err != nil {
rpt.SetStatus(bucket, fileName, fmt.Errorf("%s (%s)", errorCodes[ErrMalformedXML].Description, err))
continue
}
configData, err := xml.Marshal(config)
if err != nil {
rpt.SetStatus(bucket, fileName, err)
continue
}
if _, err = globalBucketMetadataSys.Update(ctx, bucket, bucketNotificationConfig, configData); err != nil {
rpt.SetStatus(bucket, fileName, err)
continue
}
rulesMap := config.ToRulesMap()
globalNotificationSys.AddRulesMap(bucket, rulesMap)
rpt.SetStatus(bucket, fileName, nil)
case bucketPolicyConfig:
// Error out if Content-Length is beyond allowed size.
if sz > maxBucketPolicySize {
rpt.SetStatus(bucket, fileName, fmt.Errorf(ErrPolicyTooLarge.String()))
continue
}
bucketPolicyBytes, err := ioutil.ReadAll(io.LimitReader(reader, sz))
if err != nil {
rpt.SetStatus(bucket, fileName, err)
continue
}
bucketPolicy, err := policy.ParseConfig(bytes.NewReader(bucketPolicyBytes), bucket)
if err != nil {
rpt.SetStatus(bucket, fileName, err)
continue
}
// Version in policy must not be empty
if bucketPolicy.Version == "" {
rpt.SetStatus(bucket, fileName, fmt.Errorf(ErrMalformedPolicy.String()))
continue
}
configData, err := json.Marshal(bucketPolicy)
if err != nil {
rpt.SetStatus(bucket, fileName, err)
continue
}
updatedAt, err := globalBucketMetadataSys.Update(ctx, bucket, bucketPolicyConfig, configData)
if err != nil {
rpt.SetStatus(bucket, fileName, err)
continue
}
rpt.SetStatus(bucket, fileName, nil)
// Call site replication hook.
if err = globalSiteReplicationSys.BucketMetaHook(ctx, madmin.SRBucketMeta{
Type: madmin.SRBucketMetaTypePolicy,
Bucket: bucket,
Policy: bucketPolicyBytes,
UpdatedAt: updatedAt,
}); err != nil {
rpt.SetStatus(bucket, fileName, err)
continue
}
case bucketLifecycleConfig:
bucketLifecycle, err := lifecycle.ParseLifecycleConfig(io.LimitReader(reader, sz))
if err != nil {
rpt.SetStatus(bucket, fileName, err)
continue
}
// Validate the received bucket lifecycle document
if err = bucketLifecycle.Validate(); err != nil {
rpt.SetStatus(bucket, fileName, err)
continue
}
// Validate the transition storage ARNs
if err = validateTransitionTier(bucketLifecycle); err != nil {
rpt.SetStatus(bucket, fileName, err)
continue
}
configData, err := xml.Marshal(bucketLifecycle)
if err != nil {
rpt.SetStatus(bucket, fileName, err)
continue
}
if _, err = globalBucketMetadataSys.Update(ctx, bucket, bucketLifecycleConfig, configData); err != nil {
rpt.SetStatus(bucket, fileName, err)
continue
}
rpt.SetStatus(bucket, fileName, nil)
case bucketSSEConfig:
// Parse bucket encryption xml
encConfig, err := validateBucketSSEConfig(io.LimitReader(reader, maxBucketSSEConfigSize))
if err != nil {
rpt.SetStatus(bucket, fileName, fmt.Errorf("%s (%s)", errorCodes[ErrMalformedXML].Description, err))
continue
}
// Return error if KMS is not initialized
if GlobalKMS == nil {
rpt.SetStatus(bucket, fileName, fmt.Errorf("%s", errorCodes[ErrKMSNotConfigured].Description))
continue
}
kmsKey := encConfig.KeyID()
if kmsKey != "" {
kmsContext := kms.Context{"MinIO admin API": "ServerInfoHandler"} // Context for a test key operation
_, err := GlobalKMS.GenerateKey(ctx, kmsKey, kmsContext)
if err != nil {
if errors.Is(err, kes.ErrKeyNotFound) {
rpt.SetStatus(bucket, fileName, errKMSKeyNotFound)
continue
}
rpt.SetStatus(bucket, fileName, err)
continue
}
}
configData, err := xml.Marshal(encConfig)
if err != nil {
rpt.SetStatus(bucket, fileName, err)
continue
}
// Store the bucket encryption configuration in the object layer
updatedAt, err := globalBucketMetadataSys.Update(ctx, bucket, bucketSSEConfig, configData)
if err != nil {
rpt.SetStatus(bucket, fileName, err)
continue
}
rpt.SetStatus(bucket, fileName, nil)
// Call site replication hook.
//
// We encode the xml bytes as base64 to ensure there are no encoding
// errors.
cfgStr := base64.StdEncoding.EncodeToString(configData)
if err = globalSiteReplicationSys.BucketMetaHook(ctx, madmin.SRBucketMeta{
Type: madmin.SRBucketMetaTypeSSEConfig,
Bucket: bucket,
SSEConfig: &cfgStr,
UpdatedAt: updatedAt,
}); err != nil {
rpt.SetStatus(bucket, fileName, err)
continue
}
case bucketTaggingConfig:
tags, err := tags.ParseBucketXML(io.LimitReader(reader, sz))
if err != nil {
rpt.SetStatus(bucket, fileName, fmt.Errorf("%s (%s)", errorCodes[ErrMalformedXML].Description, err))
continue
}
configData, err := xml.Marshal(tags)
if err != nil {
rpt.SetStatus(bucket, fileName, err)
continue
}
updatedAt, err := globalBucketMetadataSys.Update(ctx, bucket, bucketTaggingConfig, configData)
if err != nil {
rpt.SetStatus(bucket, fileName, err)
continue
}
rpt.SetStatus(bucket, fileName, nil)
// Call site replication hook.
//
// We encode the xml bytes as base64 to ensure there are no encoding
// errors.
cfgStr := base64.StdEncoding.EncodeToString(configData)
if err = globalSiteReplicationSys.BucketMetaHook(ctx, madmin.SRBucketMeta{
Type: madmin.SRBucketMetaTypeTags,
Bucket: bucket,
Tags: &cfgStr,
UpdatedAt: updatedAt,
}); err != nil {
rpt.SetStatus(bucket, fileName, err)
continue
}
case bucketQuotaConfigFile:
data, err := ioutil.ReadAll(reader)
if err != nil {
rpt.SetStatus(bucket, fileName, err)
continue
}
quotaConfig, err := parseBucketQuota(bucket, data)
if err != nil {
rpt.SetStatus(bucket, fileName, err)
continue
}
if quotaConfig.Type == "fifo" {
rpt.SetStatus(bucket, fileName, fmt.Errorf("Detected older 'fifo' quota config, 'fifo' feature is removed and not supported anymore"))
continue
}
updatedAt, err := globalBucketMetadataSys.Update(ctx, bucket, bucketQuotaConfigFile, data)
if err != nil {
rpt.SetStatus(bucket, fileName, err)
continue
}
rpt.SetStatus(bucket, fileName, nil)
bucketMeta := madmin.SRBucketMeta{
Type: madmin.SRBucketMetaTypeQuotaConfig,
Bucket: bucket,
Quota: data,
UpdatedAt: updatedAt,
}
if quotaConfig.Quota == 0 {
bucketMeta.Quota = nil
}
// Call site replication hook.
if err = globalSiteReplicationSys.BucketMetaHook(ctx, bucketMeta); err != nil {
rpt.SetStatus(bucket, fileName, err)
continue
}
}
}
rptData, err := json.Marshal(rpt.BucketMetaImportErrs)
if err != nil {
writeErrorResponse(ctx, w, toAPIError(ctx, err), r.URL)
return
}
writeSuccessResponseJSON(w, rptData)
}
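Assuming the paired mc release surfaces these export/import handlers (the subcommand names below are assumptions, not confirmed by this diff), a round-trip looks roughly like:

```sh
# Export all bucket metadata from one deployment and import it into another.
mc admin cluster bucket export srcminio/            # writes a zip of per-bucket configs
mc admin cluster bucket import dstminio/ cluster-metadata.zip
```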
// ReplicationDiffHandler - POST returns info on unreplicated versions for a remote target ARN
// to the connected HTTP client. This is a MinIO-only extension.
func (a adminAPIHandlers) ReplicationDiffHandler(w http.ResponseWriter, r *http.Request) {
ctx := newContext(r, w, "ReplicationDiff")
defer logger.AuditLog(ctx, w, r, mustGetClaimsFromToken(r))
vars := mux.Vars(r)
bucket := vars["bucket"]
if globalIsGateway {
writeErrorResponseJSON(ctx, w, errorCodes.ToAPIErr(ErrNotImplemented), r.URL)
return
}
// Get current object layer instance.
objectAPI := newObjectLayerFn()
if objectAPI == nil {
writeErrorResponseJSON(ctx, w, errorCodes.ToAPIErr(ErrServerNotInitialized), r.URL)
return
}
// check if user has permissions to perform this operation
if s3Error := checkRequestAuthType(ctx, r, policy.ListBucketVersionsAction, bucket, ""); s3Error != ErrNone {
writeErrorResponse(ctx, w, errorCodes.ToAPIErr(s3Error), r.URL)
return
}
// Check if bucket exists.
if _, err := objectAPI.GetBucketInfo(ctx, bucket, BucketOptions{}); err != nil {
writeErrorResponse(ctx, w, toAPIError(ctx, err), r.URL)
return
}
opts := extractReplicateDiffOpts(r.Form)
if opts.ARN != "" {
tgt := globalBucketTargetSys.GetRemoteBucketTargetByArn(ctx, bucket, opts.ARN)
if tgt.Empty() {
writeErrorResponseJSON(ctx, w, errorCodes.ToAPIErrWithErr(ErrInvalidRequest, fmt.Errorf("invalid arn : '%s'", opts.ARN)), r.URL)
return
}
}
keepAliveTicker := time.NewTicker(500 * time.Millisecond)
defer keepAliveTicker.Stop()
diffCh, err := getReplicationDiff(ctx, objectAPI, bucket, opts)
if err != nil {
writeErrorResponse(ctx, w, toAPIError(ctx, err), r.URL)
return
}
enc := json.NewEncoder(w)
for {
select {
case entry, ok := <-diffCh:
if !ok {
return
}
if err := enc.Encode(entry); err != nil {
return
}
if len(diffCh) == 0 {
// Flush if nothing is queued
w.(http.Flusher).Flush()
}
case <-keepAliveTicker.C:
if len(diffCh) > 0 {
continue
}
if _, err := w.Write([]byte(" ")); err != nil {
return
}
w.(http.Flusher).Flush()
case <-ctx.Done():
return
}
}
}
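This streaming endpoint backs a replication-diff listing in mc; a hedged sketch (the subcommand and flag are assumptions, consult your mc release):

```sh
# List object versions not yet replicated to a given remote target ARN.
mc replicate diff myminio/mybucket --arn "arn:minio:replication::<id>:mybucket"
```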


@@ -20,6 +20,7 @@ package cmd
import (
"context"
"errors"
"fmt"
"net/http"
"github.com/minio/kes"
@@ -29,6 +30,10 @@ import (
iampolicy "github.com/minio/pkg/iam/policy"
)
// validateAdminReq will validate the request against the supplied actions and return whether it is allowed.
// If any of the supplied actions are allowed, it will be successful.
// If a nil ObjectLayer is returned, the operation is not permitted.
// When a nil ObjectLayer is returned, an error has already been sent to w.
func validateAdminReq(ctx context.Context, w http.ResponseWriter, r *http.Request, actions ...iampolicy.AdminAction) (ObjectLayer, auth.Credentials) {
// Get current object layer instance.
objectAPI := newObjectLayerFn()
@@ -40,11 +45,16 @@ func validateAdminReq(ctx context.Context, w http.ResponseWriter, r *http.Reques
for _, action := range actions {
// Validate request signature.
cred, adminAPIErr := checkAdminRequestAuth(ctx, r, action, "")
if adminAPIErr != ErrNone {
switch adminAPIErr {
case ErrNone:
return objectAPI, cred
case ErrAccessDenied:
// Try another
continue
default:
writeErrorResponseJSON(ctx, w, errorCodes.ToAPIErr(adminAPIErr), r.URL)
return nil, cred
}
return objectAPI, cred
}
writeErrorResponseJSON(ctx, w, errorCodes.ToAPIErr(ErrAccessDenied), r.URL)
return nil, auth.Credentials{}
@@ -96,6 +106,12 @@ func toAdminAPIErr(ctx context.Context, err error) APIError {
}
default:
switch {
case errors.Is(err, errTooManyPolicies):
apiErr = APIError{
Code: "XMinioAdminInvalidRequest",
Description: err.Error(),
HTTPStatusCode: http.StatusBadRequest,
}
case errors.Is(err, errDecommissionAlreadyRunning):
apiErr = APIError{
Code: "XMinioDecommissionNotAllowed",
@@ -180,7 +196,13 @@ func toAdminAPIErr(ctx context.Context, err error) APIError {
apiErr = APIError{
Code: "XMinioAdminTierBackendInUse",
Description: err.Error(),
HTTPStatusCode: http.StatusConflict,
HTTPStatusCode: http.StatusBadRequest,
}
case errors.Is(err, errTierBackendNotEmpty):
apiErr = APIError{
Code: "XMinioAdminTierBackendNotEmpty",
Description: err.Error(),
HTTPStatusCode: http.StatusBadRequest,
}
case errors.Is(err, errTierInsufficientCreds):
apiErr = APIError{
@@ -211,3 +233,27 @@ func toAdminAPIErrCode(ctx context.Context, err error) APIErrorCode {
return toAPIErrorCode(ctx, err)
}
}
// wraps export error for more context
func exportError(ctx context.Context, err error, fname, entity string) APIError {
if entity == "" {
return toAPIError(ctx, fmt.Errorf("error exporting %s with: %w", fname, err))
}
return toAPIError(ctx, fmt.Errorf("error exporting %s from %s with: %w", entity, fname, err))
}
// wraps import error for more context
func importError(ctx context.Context, err error, fname, entity string) APIError {
if entity == "" {
return toAPIError(ctx, fmt.Errorf("error importing %s with: %w", fname, err))
}
return toAPIError(ctx, fmt.Errorf("error importing %s from %s with: %w", entity, fname, err))
}
// wraps import error for more context
func importErrorWithAPIErr(ctx context.Context, apiErr APIErrorCode, err error, fname, entity string) APIError {
if entity == "" {
return errorCodes.ToAPIErrWithErr(apiErr, fmt.Errorf("error importing %s with: %w", fname, err))
}
return errorCodes.ToAPIErrWithErr(apiErr, fmt.Errorf("error importing %s from %s with: %w", entity, fname, err))
}


@@ -33,7 +33,8 @@ import (
"github.com/minio/minio/internal/config/etcd"
xldap "github.com/minio/minio/internal/config/identity/ldap"
"github.com/minio/minio/internal/config/identity/openid"
"github.com/minio/minio/internal/config/policy/opa"
idplugin "github.com/minio/minio/internal/config/identity/plugin"
polplugin "github.com/minio/minio/internal/config/policy/plugin"
"github.com/minio/minio/internal/config/storageclass"
"github.com/minio/minio/internal/logger"
iampolicy "github.com/minio/pkg/iam/policy"
@@ -64,6 +65,12 @@ func (a adminAPIHandlers) DelConfigKVHandler(w http.ResponseWriter, r *http.Requ
return
}
subSys, _, _, err := config.GetSubSys(string(kvBytes))
if err != nil {
writeErrorResponseJSON(ctx, w, toAdminAPIErr(ctx, err), r.URL)
return
}
cfg, err := readServerConfig(ctx, objectAPI)
if err != nil {
writeErrorResponseJSON(ctx, w, toAdminAPIErr(ctx, err), r.URL)
@@ -74,7 +81,7 @@ func (a adminAPIHandlers) DelConfigKVHandler(w http.ResponseWriter, r *http.Requ
writeErrorResponseJSON(ctx, w, toAdminAPIErr(ctx, err), r.URL)
return
}
if err = validateConfig(cfg, ""); err != nil {
if err = validateConfig(cfg, subSys); err != nil {
writeCustomErrorResponseJSON(ctx, w, errorCodes.ToAPIErr(ErrAdminConfigBadJSON), err.Error(), r.URL)
return
}
@@ -84,19 +91,21 @@ func (a adminAPIHandlers) DelConfigKVHandler(w http.ResponseWriter, r *http.Requ
return
}
dynamic := config.SubSystemsDynamic.Contains(string(kvBytes))
dynamic := config.SubSystemsDynamic.Contains(subSys)
if dynamic {
applyDynamic(ctx, objectAPI, cfg, r, w)
applyDynamic(ctx, objectAPI, cfg, subSys, r, w)
}
}
func applyDynamic(ctx context.Context, objectAPI ObjectLayer, cfg config.Config, r *http.Request, w http.ResponseWriter) {
func applyDynamic(ctx context.Context, objectAPI ObjectLayer, cfg config.Config, subSys string,
r *http.Request, w http.ResponseWriter,
) {
// Apply dynamic values.
if err := applyDynamicConfig(GlobalContext, objectAPI, cfg); err != nil {
if err := applyDynamicConfigForSubSys(GlobalContext, objectAPI, cfg, subSys); err != nil {
writeErrorResponseJSON(ctx, w, toAdminAPIErr(ctx, err), r.URL)
return
}
globalNotificationSys.SignalService(serviceReloadDynamic)
globalNotificationSys.SignalConfigReload(subSys)
// Tell the client that dynamic config was applied.
w.Header().Set(madmin.ConfigAppliedHeader, madmin.ConfigAppliedTrue)
}
@@ -162,7 +171,7 @@ func (a adminAPIHandlers) SetConfigKVHandler(w http.ResponseWriter, r *http.Requ
}
if dynamic {
applyDynamic(ctx, objectAPI, cfg, r, w)
applyDynamic(ctx, objectAPI, cfg, subSys, r, w)
}
writeSuccessResponseHeadersOnly(w)
}
@@ -180,15 +189,20 @@ func (a adminAPIHandlers) GetConfigKVHandler(w http.ResponseWriter, r *http.Requ
cfg := globalServerConfig.Clone()
vars := mux.Vars(r)
buf := &bytes.Buffer{}
cw := config.NewConfigWriteTo(cfg, vars["key"])
if _, err := cw.WriteTo(buf); err != nil {
subSys := vars["key"]
subSysConfigs, err := cfg.GetSubsysInfo(subSys)
if err != nil {
writeErrorResponseJSON(ctx, w, toAdminAPIErr(ctx, err), r.URL)
return
}
var s strings.Builder
for _, subSysConfig := range subSysConfigs {
subSysConfig.AddString(&s, false)
}
password := cred.SecretKey
econfigData, err := madmin.EncryptData(password, buf.Bytes())
econfigData, err := madmin.EncryptData(password, []byte(s.String()))
if err != nil {
writeErrorResponseJSON(ctx, w, toAdminAPIErr(ctx, err), r.URL)
return
@@ -418,40 +432,30 @@ func (a adminAPIHandlers) GetConfigHandler(w http.ResponseWriter, r *http.Reques
count += len(cfg[hkv.Key])
}
for _, hkv := range hkvs {
v := cfg[hkv.Key]
for target, kv := range v {
off := kv.Get(config.Enable) == config.EnableOff
// We ignore the error below, as we cannot get one.
cfgSubsysItems, _ := cfg.GetSubsysInfo(hkv.Key)
for _, item := range cfgSubsysItems {
off := item.Params.Get(config.Enable) == config.EnableOff
switch hkv.Key {
case config.EtcdSubSys:
off = !etcd.Enabled(kv)
off = !etcd.Enabled(item.Params)
case config.CacheSubSys:
off = !cache.Enabled(kv)
off = !cache.Enabled(item.Params)
case config.StorageClassSubSys:
off = !storageclass.Enabled(kv)
case config.PolicyOPASubSys:
off = !opa.Enabled(kv)
off = !storageclass.Enabled(item.Params)
case config.PolicyPluginSubSys:
off = !polplugin.Enabled(item.Params)
case config.IdentityOpenIDSubSys:
off = !openid.Enabled(kv)
off = !openid.Enabled(item.Params)
case config.IdentityLDAPSubSys:
off = !xldap.Enabled(kv)
off = !xldap.Enabled(item.Params)
case config.IdentityTLSSubSys:
off = !globalSTSTLSConfig.Enabled
case config.IdentityPluginSubSys:
off = !idplugin.Enabled(item.Params)
}
if off {
s.WriteString(config.KvComment)
s.WriteString(config.KvSpaceSeparator)
}
s.WriteString(hkv.Key)
if target != config.Default {
s.WriteString(config.SubSystemSeparator)
s.WriteString(target)
}
s.WriteString(config.KvSpaceSeparator)
s.WriteString(kv.String())
count--
if count > 0 {
s.WriteString(config.KvNewline)
}
item.AddString(&s, off)
}
}


@@ -0,0 +1,321 @@
// Copyright (c) 2015-2022 MinIO, Inc.
//
// This file is part of MinIO Object Storage stack
//
// This program is free software: you can redistribute it and/or modify
// it under the terms of the GNU Affero General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
//
// This program is distributed in the hope that it will be useful
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU Affero General Public License for more details.
//
// You should have received a copy of the GNU Affero General Public License
// along with this program. If not, see <http://www.gnu.org/licenses/>.
package cmd
import (
"context"
"encoding/json"
"fmt"
"io"
"net/http"
"strings"
"github.com/gorilla/mux"
"github.com/minio/madmin-go"
"github.com/minio/minio-go/v7/pkg/set"
"github.com/minio/minio/internal/config"
"github.com/minio/minio/internal/config/identity/openid"
"github.com/minio/minio/internal/logger"
iampolicy "github.com/minio/pkg/iam/policy"
)
// List of implemented ID config types.
var idCfgTypes = set.CreateStringSet("openid")
// SetIdentityProviderCfg:
//
// PUT <admin-prefix>/id-cfg?type=openid&name=dex1
func (a adminAPIHandlers) SetIdentityProviderCfg(w http.ResponseWriter, r *http.Request) {
ctx := newContext(r, w, "SetIdentityCfg")
defer logger.AuditLog(ctx, w, r, mustGetClaimsFromToken(r))
objectAPI, cred := validateAdminReq(ctx, w, r, iampolicy.ConfigUpdateAdminAction)
if objectAPI == nil {
return
}
if r.ContentLength > maxEConfigJSONSize || r.ContentLength == -1 {
// More than maxEConfigJSONSize bytes were available
writeErrorResponseJSON(ctx, w, errorCodes.ToAPIErr(ErrAdminConfigTooLarge), r.URL)
return
}
password := cred.SecretKey
reqBytes, err := madmin.DecryptData(password, io.LimitReader(r.Body, r.ContentLength))
if err != nil {
logger.LogIf(ctx, err, logger.Application)
writeErrorResponseJSON(ctx, w, errorCodes.ToAPIErr(ErrAdminConfigBadJSON), r.URL)
return
}
cfgType := mux.Vars(r)["type"]
if !idCfgTypes.Contains(cfgType) {
// TODO: change this to invalid type error when implementation
// is complete.
writeErrorResponseJSON(ctx, w, errorCodes.ToAPIErr(ErrNotImplemented), r.URL)
return
}
var cfgDataBuilder strings.Builder
switch cfgType {
case "openid":
fmt.Fprintf(&cfgDataBuilder, "identity_openid")
}
// Ensure body content type is opaque.
contentType := r.Header.Get("Content-Type")
if contentType != "application/octet-stream" {
writeErrorResponseJSON(ctx, w, errorCodes.ToAPIErr(ErrBadRequest), r.URL)
return
}
// Subsystem configuration name could be empty.
cfgName := mux.Vars(r)["name"]
if cfgName != "" {
fmt.Fprintf(&cfgDataBuilder, "%s%s", config.SubSystemSeparator, cfgName)
}
fmt.Fprintf(&cfgDataBuilder, "%s%s", config.KvSpaceSeparator, string(reqBytes))
cfgData := cfgDataBuilder.String()
subSys, _, _, err := config.GetSubSys(cfgData)
if err != nil {
writeErrorResponseJSON(ctx, w, toAdminAPIErr(ctx, err), r.URL)
return
}
cfg, err := readServerConfig(ctx, objectAPI)
if err != nil {
writeErrorResponseJSON(ctx, w, toAdminAPIErr(ctx, err), r.URL)
return
}
dynamic, err := cfg.ReadConfig(strings.NewReader(cfgData))
if err != nil {
writeErrorResponseJSON(ctx, w, toAdminAPIErr(ctx, err), r.URL)
return
}
// IDP config is not dynamic. Sanity check.
if dynamic {
writeCustomErrorResponseJSON(ctx, w, errorCodes.ToAPIErr(ErrInternalError), err.Error(), r.URL)
return
}
if err = validateConfig(cfg, subSys); err != nil {
writeCustomErrorResponseJSON(ctx, w, errorCodes.ToAPIErr(ErrAdminConfigBadJSON), err.Error(), r.URL)
return
}
// Update the actual server config on disk.
if err = saveServerConfig(ctx, objectAPI, cfg); err != nil {
writeErrorResponseJSON(ctx, w, toAdminAPIErr(ctx, err), r.URL)
return
}
// Write the config input KV to history.
if err = saveServerConfigHistory(ctx, objectAPI, []byte(cfgData)); err != nil {
writeErrorResponseJSON(ctx, w, toAdminAPIErr(ctx, err), r.URL)
return
}
writeSuccessResponseHeadersOnly(w)
}
// GetIdentityProviderCfg:
//
// GET <admin-prefix>/id-cfg?type=openid&name=dex_test
//
// GetIdentityProviderCfg returns a list of configured IDPs on the server if
// name is empty. If name is non-empty, returns the configuration details for
// the IDP of the given type and configuration name. The configuration name for
// the default ("un-named") configuration target is `_`.
func (a adminAPIHandlers) GetIdentityProviderCfg(w http.ResponseWriter, r *http.Request) {
ctx := newContext(r, w, "GetIdentityProviderCfg")
defer logger.AuditLog(ctx, w, r, mustGetClaimsFromToken(r))
objectAPI, cred := validateAdminReq(ctx, w, r, iampolicy.ConfigUpdateAdminAction)
if objectAPI == nil {
return
}
cfgType := mux.Vars(r)["type"]
cfgName := r.Form.Get("name")
password := cred.SecretKey
if !idCfgTypes.Contains(cfgType) {
// TODO: change this to invalid type error when implementation
// is complete.
writeErrorResponseJSON(ctx, w, errorCodes.ToAPIErr(ErrNotImplemented), r.URL)
return
}
// If no cfgName is provided, we list.
if cfgName == "" {
a.listIdentityProviders(ctx, w, r, cfgType, password)
return
}
cfg := globalServerConfig.Clone()
cfgInfos, err := globalOpenIDConfig.GetConfigInfo(cfg, cfgName)
if err != nil {
if err == openid.ErrProviderConfigNotFound {
writeErrorResponseJSON(ctx, w, errorCodes.ToAPIErr(ErrAdminNoSuchConfigTarget), r.URL)
return
}
writeErrorResponseJSON(ctx, w, toAdminAPIErr(ctx, err), r.URL)
return
}
res := madmin.IDPConfig{
Type: cfgType,
Name: cfgName,
Info: cfgInfos,
}
data, err := json.Marshal(res)
if err != nil {
writeErrorResponseJSON(ctx, w, toAdminAPIErr(ctx, err), r.URL)
return
}
econfigData, err := madmin.EncryptData(password, data)
if err != nil {
writeErrorResponseJSON(ctx, w, toAdminAPIErr(ctx, err), r.URL)
return
}
writeSuccessResponseJSON(w, econfigData)
}
func (a adminAPIHandlers) listIdentityProviders(ctx context.Context, w http.ResponseWriter, r *http.Request, cfgType, password string) {
// var subSys string
switch cfgType {
case "openid":
// subSys = config.IdentityOpenIDSubSys
default:
writeErrorResponseJSON(ctx, w, errorCodes.ToAPIErr(ErrNotImplemented), r.URL)
return
}
cfg := globalServerConfig.Clone()
cfgList, err := globalOpenIDConfig.GetConfigList(cfg)
if err != nil {
writeErrorResponseJSON(ctx, w, toAdminAPIErr(ctx, err), r.URL)
return
}
data, err := json.Marshal(cfgList)
if err != nil {
writeErrorResponseJSON(ctx, w, toAdminAPIErr(ctx, err), r.URL)
return
}
econfigData, err := madmin.EncryptData(password, data)
if err != nil {
writeErrorResponseJSON(ctx, w, toAdminAPIErr(ctx, err), r.URL)
return
}
writeSuccessResponseJSON(w, econfigData)
}
// DeleteIdentityProviderCfg:
//
// DELETE <admin-prefix>/id-cfg?type=openid&name=dex_test
func (a adminAPIHandlers) DeleteIdentityProviderCfg(w http.ResponseWriter, r *http.Request) {
ctx := newContext(r, w, "DeleteIdentityProviderCfg")
defer logger.AuditLog(ctx, w, r, mustGetClaimsFromToken(r))
objectAPI, _ := validateAdminReq(ctx, w, r, iampolicy.ConfigUpdateAdminAction)
if objectAPI == nil {
return
}
cfgType := mux.Vars(r)["type"]
cfgName := mux.Vars(r)["name"]
if !idCfgTypes.Contains(cfgType) {
// TODO: change this to invalid type error when implementation
// is complete.
writeErrorResponseJSON(ctx, w, errorCodes.ToAPIErr(ErrNotImplemented), r.URL)
return
}
cfg := globalServerConfig.Clone()
cfgInfos, err := globalOpenIDConfig.GetConfigInfo(cfg, cfgName)
if err != nil {
if err == openid.ErrProviderConfigNotFound {
writeErrorResponseJSON(ctx, w, errorCodes.ToAPIErr(ErrAdminNoSuchConfigTarget), r.URL)
return
}
writeErrorResponseJSON(ctx, w, toAdminAPIErr(ctx, err), r.URL)
return
}
hasEnv := false
for _, ci := range cfgInfos {
if ci.IsCfg && ci.IsEnv {
hasEnv = true
break
}
}
if hasEnv {
writeErrorResponseJSON(ctx, w, errorCodes.ToAPIErr(ErrAdminConfigEnvOverridden), r.URL)
return
}
var subSys string
switch cfgType {
case "openid":
subSys = config.IdentityOpenIDSubSys
default:
writeErrorResponseJSON(ctx, w, errorCodes.ToAPIErr(ErrNotImplemented), r.URL)
return
}
cfg, err = readServerConfig(ctx, objectAPI)
if err != nil {
writeErrorResponseJSON(ctx, w, toAdminAPIErr(ctx, err), r.URL)
return
}
if err = cfg.DelKVS(fmt.Sprintf("%s:%s", subSys, cfgName)); err != nil {
writeErrorResponseJSON(ctx, w, toAdminAPIErr(ctx, err), r.URL)
return
}
if err = validateConfig(cfg, subSys); err != nil {
writeCustomErrorResponseJSON(ctx, w, errorCodes.ToAPIErr(ErrAdminConfigBadJSON), err.Error(), r.URL)
return
}
if err = saveServerConfig(ctx, objectAPI, cfg); err != nil {
writeErrorResponseJSON(ctx, w, toAdminAPIErr(ctx, err), r.URL)
return
}
dynamic := config.SubSystemsDynamic.Contains(subSys)
if dynamic {
applyDynamic(ctx, objectAPI, cfg, subSys, r, w)
}
}
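// Illustrative sketch (editor's example, not part of this diff): config
// targets are addressed as "<sub-system>:<name>", which is the key shape
// DelKVS receives above; deleting the OpenID target "dex_test" therefore
// removes "identity_openid:dex_test" (the sub-system string is an
// assumption about the value of config.IdentityOpenIDSubSys).
package main

import "fmt"

func configTargetKey(subSys, cfgName string) string {
	// configTargetKey("identity_openid", "dex_test") == "identity_openid:dex_test"
	return fmt.Sprintf("%s:%s", subSys, cfgName)
}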

View File

@@ -19,6 +19,7 @@ package cmd
import (
"encoding/json"
"fmt"
"net/http"
"github.com/gorilla/mux"
@@ -58,6 +59,16 @@ func (a adminAPIHandlers) StartDecommission(w http.ResponseWriter, r *http.Reque
return
}
if ep := globalEndpoints[idx].Endpoints[0]; !ep.IsLocal {
for nodeIdx, proxyEp := range globalProxyEndpoints {
if proxyEp.Endpoint.Host == ep.Host {
if proxyRequestByNodeIndex(ctx, w, r, nodeIdx) {
return
}
}
}
}
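// Editor's note (illustrative): the loop above proxies the admin call to
// the node that actually hosts the pool's endpoints, so the decommission
// below always executes on a node local to the pool being drained.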
if err := pools.Decommission(r.Context(), idx); err != nil {
writeErrorResponseJSON(ctx, w, toAdminAPIErr(ctx, err), r.URL)
return
@@ -96,6 +107,16 @@ func (a adminAPIHandlers) CancelDecommission(w http.ResponseWriter, r *http.Requ
return
}
if ep := globalEndpoints[idx].Endpoints[0]; !ep.IsLocal {
for nodeIdx, proxyEp := range globalProxyEndpoints {
if proxyEp.Endpoint.Host == ep.Host {
if proxyRequestByNodeIndex(ctx, w, r, nodeIdx) {
return
}
}
}
}
if err := pools.DecommissionCancel(ctx, idx); err != nil {
writeErrorResponseJSON(ctx, w, toAdminAPIErr(ctx, err), r.URL)
return
@@ -129,8 +150,10 @@ func (a adminAPIHandlers) StatusPool(w http.ResponseWriter, r *http.Request) {
idx := globalEndpoints.GetPoolIdx(v)
if idx == -1 {
apiErr := toAdminAPIErr(ctx, errInvalidArgument)
apiErr.Description = fmt.Sprintf("specified pool '%s' not found, please specify a valid pool", v)
// We didn't find any matching pools, invalid input
writeErrorResponseJSON(ctx, w, toAdminAPIErr(ctx, errInvalidArgument), r.URL)
writeErrorResponseJSON(ctx, w, apiErr, r.URL)
return
}

View File

@@ -24,6 +24,8 @@ import (
"io"
"io/ioutil"
"net/http"
"strings"
"time"
"github.com/gorilla/mux"
"github.com/minio/madmin-go"
@@ -115,18 +117,39 @@ func (a adminAPIHandlers) SRPeerBucketOps(w http.ResponseWriter, r *http.Request
case madmin.MakeWithVersioningBktOp:
_, isLockEnabled := r.Form["lockEnabled"]
_, isVersioningEnabled := r.Form["versioningEnabled"]
opts := BucketOptions{
_, isForceCreate := r.Form["forceCreate"]
createdAtStr := strings.TrimSpace(r.Form.Get("createdAt"))
createdAt, cerr := time.Parse(time.RFC3339Nano, createdAtStr)
if cerr != nil {
createdAt = timeSentinel
}
opts := MakeBucketOptions{
Location: r.Form.Get("location"),
LockEnabled: isLockEnabled,
VersioningEnabled: isVersioningEnabled,
ForceCreate: isForceCreate,
CreatedAt: createdAt,
}
err = globalSiteReplicationSys.PeerBucketMakeWithVersioningHandler(ctx, bucket, opts)
case madmin.ConfigureReplBktOp:
err = globalSiteReplicationSys.PeerBucketConfigureReplHandler(ctx, bucket)
case madmin.DeleteBucketBktOp:
err = globalSiteReplicationSys.PeerBucketDeleteHandler(ctx, bucket, false)
_, noRecreate := r.Form["noRecreate"]
err = globalSiteReplicationSys.PeerBucketDeleteHandler(ctx, bucket, DeleteBucketOptions{
Force: false,
NoRecreate: noRecreate,
SRDeleteOp: getSRBucketDeleteOp(true),
})
case madmin.ForceDeleteBucketBktOp:
err = globalSiteReplicationSys.PeerBucketDeleteHandler(ctx, bucket, true)
_, noRecreate := r.Form["noRecreate"]
err = globalSiteReplicationSys.PeerBucketDeleteHandler(ctx, bucket, DeleteBucketOptions{
Force: true,
NoRecreate: noRecreate,
SRDeleteOp: getSRBucketDeleteOp(true),
})
case madmin.PurgeDeletedBucketOp:
globalSiteReplicationSys.purgeDeletedBucket(ctx, objectAPI, bucket)
}
if err != nil {
logger.LogIf(ctx, err)
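// Illustrative sketch (editor's example, not part of this diff): the
// createdAt handling above tolerates a missing or malformed timestamp by
// falling back to a sentinel, so peers on older releases can still
// replicate bucket creation. The same pattern in isolation (timeSentinel
// stands in for the server's internal zero value):
package main

import (
	"strings"
	"time"
)

var timeSentinel = time.Unix(0, 0).UTC() // assumption: any agreed-upon zero value works

func parseCreatedAt(raw string) time.Time {
	createdAt, err := time.Parse(time.RFC3339Nano, strings.TrimSpace(raw))
	if err != nil {
		return timeSentinel
	}
	return createdAt
}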
@@ -158,7 +181,7 @@ func (a adminAPIHandlers) SRPeerReplicateIAMItem(w http.ResponseWriter, r *http.
err = errSRInvalidRequest(errInvalidArgument)
case madmin.SRIAMItemPolicy:
if item.Policy == nil {
err = globalSiteReplicationSys.PeerAddPolicyHandler(ctx, item.Name, nil)
err = globalSiteReplicationSys.PeerAddPolicyHandler(ctx, item.Name, nil, item.UpdatedAt)
} else {
policy, perr := iampolicy.ParseConfig(bytes.NewReader(item.Policy))
if perr != nil {
@@ -166,21 +189,21 @@ func (a adminAPIHandlers) SRPeerReplicateIAMItem(w http.ResponseWriter, r *http.
return
}
if policy.IsEmpty() {
err = globalSiteReplicationSys.PeerAddPolicyHandler(ctx, item.Name, nil)
err = globalSiteReplicationSys.PeerAddPolicyHandler(ctx, item.Name, nil, item.UpdatedAt)
} else {
err = globalSiteReplicationSys.PeerAddPolicyHandler(ctx, item.Name, policy)
err = globalSiteReplicationSys.PeerAddPolicyHandler(ctx, item.Name, policy, item.UpdatedAt)
}
}
case madmin.SRIAMItemSvcAcc:
err = globalSiteReplicationSys.PeerSvcAccChangeHandler(ctx, item.SvcAccChange)
err = globalSiteReplicationSys.PeerSvcAccChangeHandler(ctx, item.SvcAccChange, item.UpdatedAt)
case madmin.SRIAMItemPolicyMapping:
err = globalSiteReplicationSys.PeerPolicyMappingHandler(ctx, item.PolicyMapping)
err = globalSiteReplicationSys.PeerPolicyMappingHandler(ctx, item.PolicyMapping, item.UpdatedAt)
case madmin.SRIAMItemSTSAcc:
err = globalSiteReplicationSys.PeerSTSAccHandler(ctx, item.STSCredential)
err = globalSiteReplicationSys.PeerSTSAccHandler(ctx, item.STSCredential, item.UpdatedAt)
case madmin.SRIAMItemIAMUser:
err = globalSiteReplicationSys.PeerIAMUserChangeHandler(ctx, item.IAMUser)
err = globalSiteReplicationSys.PeerIAMUserChangeHandler(ctx, item.IAMUser, item.UpdatedAt)
case madmin.SRIAMItemGroupInfo:
err = globalSiteReplicationSys.PeerGroupInfoChangeHandler(ctx, item.GroupInfo)
err = globalSiteReplicationSys.PeerGroupInfoChangeHandler(ctx, item.GroupInfo, item.UpdatedAt)
}
if err != nil {
logger.LogIf(ctx, err)
@@ -212,7 +235,7 @@ func (a adminAPIHandlers) SRPeerReplicateBucketItem(w http.ResponseWriter, r *ht
err = errSRInvalidRequest(errInvalidArgument)
case madmin.SRBucketMetaTypePolicy:
if item.Policy == nil {
err = globalSiteReplicationSys.PeerBucketPolicyHandler(ctx, item.Bucket, nil)
err = globalSiteReplicationSys.PeerBucketPolicyHandler(ctx, item.Bucket, nil, item.UpdatedAt)
} else {
bktPolicy, berr := policy.ParseConfig(bytes.NewReader(item.Policy), item.Bucket)
if berr != nil {
@@ -220,31 +243,33 @@ func (a adminAPIHandlers) SRPeerReplicateBucketItem(w http.ResponseWriter, r *ht
return
}
if bktPolicy.IsEmpty() {
err = globalSiteReplicationSys.PeerBucketPolicyHandler(ctx, item.Bucket, nil)
err = globalSiteReplicationSys.PeerBucketPolicyHandler(ctx, item.Bucket, nil, item.UpdatedAt)
} else {
err = globalSiteReplicationSys.PeerBucketPolicyHandler(ctx, item.Bucket, bktPolicy)
err = globalSiteReplicationSys.PeerBucketPolicyHandler(ctx, item.Bucket, bktPolicy, item.UpdatedAt)
}
}
case madmin.SRBucketMetaTypeQuotaConfig:
if item.Quota == nil {
err = globalSiteReplicationSys.PeerBucketQuotaConfigHandler(ctx, item.Bucket, nil)
err = globalSiteReplicationSys.PeerBucketQuotaConfigHandler(ctx, item.Bucket, nil, item.UpdatedAt)
} else {
quotaConfig, err := parseBucketQuota(item.Bucket, item.Quota)
if err != nil {
writeErrorResponseJSON(ctx, w, toAdminAPIErr(ctx, err), r.URL)
return
}
if err = globalSiteReplicationSys.PeerBucketQuotaConfigHandler(ctx, item.Bucket, quotaConfig); err != nil {
if err = globalSiteReplicationSys.PeerBucketQuotaConfigHandler(ctx, item.Bucket, quotaConfig, item.UpdatedAt); err != nil {
writeErrorResponse(ctx, w, toAPIError(ctx, err), r.URL)
return
}
}
case madmin.SRBucketMetaTypeVersionConfig:
err = globalSiteReplicationSys.PeerBucketVersioningHandler(ctx, item.Bucket, item.Versioning, item.UpdatedAt)
case madmin.SRBucketMetaTypeTags:
err = globalSiteReplicationSys.PeerBucketTaggingHandler(ctx, item.Bucket, item.Tags)
err = globalSiteReplicationSys.PeerBucketTaggingHandler(ctx, item.Bucket, item.Tags, item.UpdatedAt)
case madmin.SRBucketMetaTypeObjectLockConfig:
err = globalSiteReplicationSys.PeerBucketObjectLockConfigHandler(ctx, item.Bucket, item.ObjectLockConfig)
err = globalSiteReplicationSys.PeerBucketObjectLockConfigHandler(ctx, item.Bucket, item.ObjectLockConfig, item.UpdatedAt)
case madmin.SRBucketMetaTypeSSEConfig:
err = globalSiteReplicationSys.PeerBucketSSEConfigHandler(ctx, item.Bucket, item.SSEConfig)
err = globalSiteReplicationSys.PeerBucketSSEConfigHandler(ctx, item.Bucket, item.SSEConfig, item.UpdatedAt)
}
if err != nil {
logger.LogIf(ctx, err)
@@ -432,6 +457,7 @@ func getSRStatusOptions(r *http.Request) (opts madmin.SRStatusOptions) {
opts.Users = q.Get("users") == "true"
opts.Entity = madmin.GetSREntityType(q.Get("entity"))
opts.EntityValue = q.Get("entityvalue")
opts.ShowDeleted = q.Get("showDeleted") == "true"
return
}
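// Illustrative sketch (editor's example, not part of this diff): every
// field of SRStatusOptions is decoded from the URL query, so a peer asking
// for deleted entries as well would send a query like the one built here
// (parameter names taken from the parsing code above; the entity name is
// an assumption about valid madmin SR entity types):
package main

import "net/url"

func srStatusQuery(bucket string) string {
	q := url.Values{}
	q.Set("buckets", "true")
	q.Set("entity", "bucket")
	q.Set("entityvalue", bucket)
	q.Set("showDeleted", "true") // the flag introduced in this diff
	return q.Encode()
}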

View File

@@ -26,6 +26,7 @@ package cmd
import (
"context"
"fmt"
"runtime"
"sync"
"testing"
"time"
@@ -41,11 +42,15 @@ func runAllIAMConcurrencyTests(suite *TestSuiteIAM, c *check) {
}
func TestIAMInternalIDPConcurrencyServerSuite(t *testing.T) {
if runtime.GOOS == globalWindowsOSName {
t.Skip("windows is clunky")
}
baseTestCases := []TestSuiteCommon{
// Init and run test on FS backend with signature v4.
{serverType: "FS", signer: signerV4},
// Init and run test on FS backend, with tls enabled.
{serverType: "FS", signer: signerV4, secure: true},
// Init and run test on ErasureSD backend with signature v4.
{serverType: "ErasureSD", signer: signerV4},
// Init and run test on ErasureSD backend, with tls enabled.
{serverType: "ErasureSD", signer: signerV4, secure: true},
// Init and run test on Erasure backend.
{serverType: "Erasure", signer: signerV4},
// Init and run test on ErasureSet backend.
@@ -73,7 +78,7 @@ func TestIAMInternalIDPConcurrencyServerSuite(t *testing.T) {
}
func (s *TestSuiteIAM) TestDeleteUserRace(c *check) {
ctx, cancel := context.WithTimeout(context.Background(), 30*time.Second)
ctx, cancel := context.WithTimeout(context.Background(), 90*time.Second)
defer cancel()
bucket := getRandomBucketName()

File diff suppressed because it is too large

View File

@@ -28,6 +28,7 @@ import (
"net/http"
"net/url"
"os"
"runtime"
"strings"
"testing"
"time"
@@ -50,6 +51,8 @@ const (
type TestSuiteIAM struct {
TestSuiteCommon
ServerTypeDescription string
// Flag to turn on tests for etcd backend IAM
withEtcdBackend bool
@@ -59,7 +62,15 @@ type TestSuiteIAM struct {
}
func newTestSuiteIAM(c TestSuiteCommon, withEtcdBackend bool) *TestSuiteIAM {
return &TestSuiteIAM{TestSuiteCommon: c, withEtcdBackend: withEtcdBackend}
etcdStr := ""
if withEtcdBackend {
etcdStr = " (with etcd backend)"
}
return &TestSuiteIAM{
TestSuiteCommon: c,
ServerTypeDescription: fmt.Sprintf("%s%s", c.serverType, etcdStr),
withEtcdBackend: withEtcdBackend,
}
}
func (s *TestSuiteIAM) iamSetup(c *check) {
@@ -87,6 +98,29 @@ func (s *TestSuiteIAM) iamSetup(c *check) {
}
}
// List of all IAM test suites (i.e. test server configuration combinations)
// common to tests.
var iamTestSuites = func() []*TestSuiteIAM {
baseTestCases := []TestSuiteCommon{
// Init and run test on ErasureSD backend with signature v4.
{serverType: "ErasureSD", signer: signerV4},
// Init and run test on ErasureSD backend, with tls enabled.
{serverType: "ErasureSD", signer: signerV4, secure: true},
// Init and run test on Erasure backend.
{serverType: "Erasure", signer: signerV4},
// Init and run test on ErasureSet backend.
{serverType: "ErasureSet", signer: signerV4},
}
testCases := []*TestSuiteIAM{}
for _, bt := range baseTestCases {
testCases = append(testCases,
newTestSuiteIAM(bt, false),
newTestSuiteIAM(bt, true),
)
}
return testCases
}()
const (
EnvTestEtcdBackend = "ETCD_SERVER"
)
@@ -156,30 +190,12 @@ func (s *TestSuiteIAM) getUserClient(c *check, accessKey, secretKey, sessionToke
}
func TestIAMInternalIDPServerSuite(t *testing.T) {
baseTestCases := []TestSuiteCommon{
// Init and run test on FS backend with signature v4.
{serverType: "FS", signer: signerV4},
// Init and run test on FS backend, with tls enabled.
{serverType: "FS", signer: signerV4, secure: true},
// Init and run test on Erasure backend.
{serverType: "Erasure", signer: signerV4},
// Init and run test on ErasureSet backend.
{serverType: "ErasureSet", signer: signerV4},
if runtime.GOOS == globalWindowsOSName {
t.Skip("windows is clunky disable these tests")
}
testCases := []*TestSuiteIAM{}
for _, bt := range baseTestCases {
testCases = append(testCases,
newTestSuiteIAM(bt, false),
newTestSuiteIAM(bt, true),
)
}
for i, testCase := range testCases {
etcdStr := ""
if testCase.withEtcdBackend {
etcdStr = " (with etcd backend)"
}
for i, testCase := range iamTestSuites {
t.Run(
fmt.Sprintf("Test: %d, ServerType: %s%s", i+1, testCase.serverType, etcdStr),
fmt.Sprintf("Test: %d, ServerType: %s", i+1, testCase.ServerTypeDescription),
func(t *testing.T) {
suite := testCase
c := &check{t, testCase.serverType}
@@ -226,6 +242,7 @@ func (s *TestSuiteIAM) TestUserCreate(c *check) {
if err != nil {
c.Fatalf("unable to set policy: %v", err)
}
client := s.getUserClient(c, accessKey, secretKey, "")
err = client.MakeBucket(ctx, getRandomBucketName(), minio.MakeBucketOptions{})
if err != nil {
@@ -890,6 +907,9 @@ func (s *TestSuiteIAM) TestServiceAccountOpsByUser(c *check) {
// 5. Check that service account can be deleted.
c.assertSvcAccDeletion(ctx, s, userAdmClient, accessKey, bucket)
// 6. Check that service account cannot be created for some other user.
c.mustNotCreateSvcAccount(ctx, globalActiveCred.AccessKey, userAdmClient)
}
func (s *TestSuiteIAM) TestServiceAccountOpsByAdmin(c *check) {
@@ -960,6 +980,166 @@ func (s *TestSuiteIAM) TestServiceAccountOpsByAdmin(c *check) {
c.assertSvcAccDeletion(ctx, s, s.adm, accessKey, bucket)
}
func (s *TestSuiteIAM) SetUpAccMgmtPlugin(c *check) {
ctx, cancel := context.WithTimeout(context.Background(), testDefaultTimeout)
defer cancel()
pluginEndpoint := os.Getenv("POLICY_PLUGIN_ENDPOINT")
if pluginEndpoint == "" {
c.Skip("POLICY_PLUGIN_ENDPOINT not given - skipping.")
}
configCmds := []string{
"policy_plugin",
"url=" + pluginEndpoint,
}
_, err := s.adm.SetConfigKV(ctx, strings.Join(configCmds, " "))
if err != nil {
c.Fatalf("unable to setup access management plugin for tests: %v", err)
}
s.RestartIAMSuite(c)
}
// TestIAM_AMPInternalIDPServerSuite - tests for access management plugin
func TestIAM_AMPInternalIDPServerSuite(t *testing.T) {
for i, testCase := range iamTestSuites {
t.Run(
fmt.Sprintf("Test: %d, ServerType: %s", i+1, testCase.ServerTypeDescription),
func(t *testing.T) {
suite := testCase
c := &check{t, testCase.serverType}
suite.SetUpSuite(c)
defer suite.TearDownSuite(c)
suite.SetUpAccMgmtPlugin(c)
suite.TestAccMgmtPlugin(c)
},
)
}
}
// TestAccMgmtPlugin - this test assumes that the access-management-plugin is
// the same as the example in `docs/iam/access-manager-plugin.go` -
// specifically, it denies only `s3:Put*` operations on non-root accounts.
func (s *TestSuiteIAM) TestAccMgmtPlugin(c *check) {
ctx, cancel := context.WithTimeout(context.Background(), testDefaultTimeout)
defer cancel()
// 0. Check that owner is able to make-bucket.
bucket := getRandomBucketName()
err := s.client.MakeBucket(ctx, bucket, minio.MakeBucketOptions{})
if err != nil {
c.Fatalf("bucket creat error: %v", err)
}
// 1. Create a user.
accessKey, secretKey := mustGenerateCredentials(c)
err = s.adm.SetUser(ctx, accessKey, secretKey, madmin.AccountEnabled)
if err != nil {
c.Fatalf("Unable to set user: %v", err)
}
// 2. Check new user appears in listing
usersMap, err := s.adm.ListUsers(ctx)
if err != nil {
c.Fatalf("error listing: %v", err)
}
v, ok := usersMap[accessKey]
if !ok {
c.Fatalf("user not listed: %s", accessKey)
}
c.Assert(v.Status, madmin.AccountEnabled)
// 3. Check that user is able to make a bucket.
client := s.getUserClient(c, accessKey, secretKey, "")
err = client.MakeBucket(ctx, getRandomBucketName(), minio.MakeBucketOptions{})
if err != nil {
c.Fatalf("user not create bucket: %v", err)
}
// 3.1 check user has access to bucket
c.mustListObjects(ctx, client, bucket)
// 3.2 check that user cannot upload an object.
_, err = client.PutObject(ctx, bucket, "objectName", bytes.NewBuffer([]byte("some content")), 12, minio.PutObjectOptions{})
if err == nil {
c.Fatalf("user was able to upload unexpectedly")
}
// Create an madmin client with user creds
userAdmClient, err := madmin.NewWithOptions(s.endpoint, &madmin.Options{
Creds: cr.NewStaticV4(accessKey, secretKey, ""),
Secure: s.secure,
})
if err != nil {
c.Fatalf("Err creating user admin client: %v", err)
}
userAdmClient.SetCustomTransport(s.TestSuiteCommon.client.Transport)
// Create svc acc
cr := c.mustCreateSvcAccount(ctx, accessKey, userAdmClient)
// 1. Check that svc account appears in listing
c.assertSvcAccAppearsInListing(ctx, userAdmClient, accessKey, cr.AccessKey)
// 2. Check that svc account info can be queried
c.assertSvcAccInfoQueryable(ctx, userAdmClient, accessKey, cr.AccessKey, false)
// 3. Check S3 access
c.assertSvcAccS3Access(ctx, s, cr, bucket)
// Check that session policies do not apply, as policy enforcement is
// delegated to the plugin.
{
svcAK, svcSK := mustGenerateCredentials(c)
// This policy does not allow listing objects.
policyBytes := []byte(fmt.Sprintf(`{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": [
"s3:PutObject",
"s3:GetObject"
],
"Resource": [
"arn:aws:s3:::%s/*"
]
}
]
}`, bucket))
cr, err := userAdmClient.AddServiceAccount(ctx, madmin.AddServiceAccountReq{
Policy: policyBytes,
TargetUser: accessKey,
AccessKey: svcAK,
SecretKey: svcSK,
})
if err != nil {
c.Fatalf("Unable to create svc acc: %v", err)
}
svcClient := s.getUserClient(c, cr.AccessKey, cr.SecretKey, "")
// Though the attached policy does not allow listing, it will be
// ignored because the plugin allows it.
c.mustListObjects(ctx, svcClient, bucket)
}
// 4. Check that service account's secret key and account status can be
// updated.
c.assertSvcAccSecretKeyAndStatusUpdate(ctx, s, userAdmClient, accessKey, bucket)
// 5. Check that service account can be deleted.
c.assertSvcAccDeletion(ctx, s, userAdmClient, accessKey, bucket)
// 6. Check that service account **can** be created for some other user.
// This is possible because of the policy enforced in the plugin.
c.mustCreateSvcAccount(ctx, globalActiveCred.AccessKey, userAdmClient)
}
func (c *check) mustCreateIAMUser(ctx context.Context, admClnt *madmin.AdminClient) madmin.Credentials {
randUser := mustGetUUID()
randPass := mustGetUUID()
@@ -1013,7 +1193,7 @@ func (c *check) mustNotListObjects(ctx context.Context, client *minio.Client, bu
res := client.ListObjects(ctx, bucket, minio.ListObjectsOptions{})
v, ok := <-res
if !ok || v.Err == nil {
c.Fatalf("user was able to list unexpectedly!")
c.Fatalf("user was able to list unexpectedly! on %s", bucket)
}
}
@@ -1021,11 +1201,20 @@ func (c *check) mustListObjects(ctx context.Context, client *minio.Client, bucke
res := client.ListObjects(ctx, bucket, minio.ListObjectsOptions{})
v, ok := <-res
if ok && v.Err != nil {
msg := fmt.Sprintf("user was unable to list: %v", v.Err)
c.Fatalf(msg)
c.Fatalf("user was unable to list: %v", v.Err)
}
}
func (c *check) mustNotUpload(ctx context.Context, client *minio.Client, bucket string) {
_, err := client.PutObject(ctx, bucket, "some-object", bytes.NewBuffer([]byte("stuff")), 5, minio.PutObjectOptions{})
if e, ok := err.(minio.ErrorResponse); ok {
if e.Code == "AccessDenied" {
return
}
}
c.Fatalf("upload did not get an AccessDenied error - got %#v instead", err)
}
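// Editor's note (illustrative, not part of this diff): minio-go also ships
// a helper that performs the same unwrap as the type assertion above, so
// the AccessDenied check can be written without the two-step assertion:
package main

import "github.com/minio/minio-go/v7"

func isAccessDenied(err error) bool {
	// ToErrorResponse returns a zero-valued ErrorResponse for non-minio
	// errors, so the Code comparison is safe on any error value.
	return minio.ToErrorResponse(err).Code == "AccessDenied"
}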
func (c *check) assertSvcAccS3Access(ctx context.Context, s *TestSuiteIAM, cr madmin.Credentials, bucket string) {
svcClient := s.getUserClient(c, cr.AccessKey, cr.SecretKey, "")
c.mustListObjects(ctx, svcClient, bucket)

File diff suppressed because it is too large

View File

@@ -91,8 +91,11 @@ type allHealState struct {
sync.RWMutex
// map of heal path to heal sequence
healSeqMap map[string]*healSequence // Indexed by endpoint
healLocalDisks map[Endpoint]struct{}
healSeqMap map[string]*healSequence // Indexed by endpoint
// keep track of the healing status of disks in memory
// false: the disk needs to be healed but no healing routine has started
// true: the disk is currently healing
healLocalDisks map[Endpoint]bool
healStatus map[string]healingTracker // Indexed by disk ID
}
@@ -100,7 +103,7 @@ type allHealState struct {
func newHealState(cleanup bool) *allHealState {
hstate := &allHealState{
healSeqMap: make(map[string]*healSequence),
healLocalDisks: map[Endpoint]struct{}{},
healLocalDisks: make(map[Endpoint]bool),
healStatus: make(map[string]healingTracker),
}
if cleanup {
@@ -109,13 +112,6 @@ func newHealState(cleanup bool) *allHealState {
return hstate
}
func (ahs *allHealState) healDriveCount() int {
ahs.RLock()
defer ahs.RUnlock()
return len(ahs.healLocalDisks)
}
func (ahs *allHealState) popHealLocalDisks(healLocalDisks ...Endpoint) {
ahs.Lock()
defer ahs.Unlock()
@@ -165,23 +161,34 @@ func (ahs *allHealState) getLocalHealingDisks() map[string]madmin.HealingDisk {
return dst
}
// getHealLocalDiskEndpoints() returns the list of disks that need
// to be healed and have no healing routine in progress on them.
func (ahs *allHealState) getHealLocalDiskEndpoints() Endpoints {
ahs.RLock()
defer ahs.RUnlock()
var endpoints Endpoints
for ep := range ahs.healLocalDisks {
endpoints = append(endpoints, ep)
for ep, healing := range ahs.healLocalDisks {
if !healing {
endpoints = append(endpoints, ep)
}
}
return endpoints
}
func (ahs *allHealState) markDiskForHealing(ep Endpoint) {
ahs.Lock()
defer ahs.Unlock()
ahs.healLocalDisks[ep] = true
}
func (ahs *allHealState) pushHealLocalDisks(healLocalDisks ...Endpoint) {
ahs.Lock()
defer ahs.Unlock()
for _, ep := range healLocalDisks {
ahs.healLocalDisks[ep] = struct{}{}
ahs.healLocalDisks[ep] = false
}
}
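// Editor's note (illustrative, not part of this diff): the bool stored in
// healLocalDisks acts as a small state machine:
//
//	pushHealLocalDisks(ep)       sets healLocalDisks[ep] = false  (needs heal, not started)
//	markDiskForHealing(ep)       sets healLocalDisks[ep] = true   (healing in progress)
//	getHealLocalDiskEndpoints()  returns only the entries still false
//
// so a replaced drive is never handed to two healing routines at once.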
@@ -194,7 +201,6 @@ func (ahs *allHealState) periodicHealSeqsClean(ctx context.Context) {
for {
select {
case <-periodicTimer.C:
periodicTimer.Reset(time.Minute * 5)
now := UTCNow()
ahs.Lock()
for path, h := range ahs.healSeqMap {
@@ -203,6 +209,8 @@ func (ahs *allHealState) periodicHealSeqsClean(ctx context.Context) {
}
}
ahs.Unlock()
periodicTimer.Reset(time.Minute * 5)
case <-ctx.Done():
// server could be restarting - need
// to exit immediately
@@ -581,12 +589,7 @@ func (h *healSequence) pushHealResultItem(r madmin.HealResultItem) error {
// heal-results in memory and the client has not consumed it
// for too long.
unconsumedTimer := time.NewTimer(healUnconsumedTimeout)
defer func() {
// stop the timeout timer so it is garbage collected.
if !unconsumedTimer.Stop() {
<-unconsumedTimer.C
}
}()
defer unconsumedTimer.Stop()
var itemsLen int
for {
@@ -700,8 +703,9 @@ func (h *healSequence) queueHealTask(source healSource, healType madmin.HealItem
}
if source.opts != nil {
task.opts = *source.opts
} else {
task.opts.ScanMode = madmin.HealNormalScan
}
task.opts.ScanMode = globalHealConfig.ScanMode()
h.mutex.Lock()
h.scannedItemsMap[healType]++
@@ -807,16 +811,6 @@ func (h *healSequence) healMinioSysMeta(objAPI ObjectLayer, metaPrefix string) f
}
}
// healDiskFormat - heals format.json, return value indicates if a
// failure error occurred.
func (h *healSequence) healDiskFormat() error {
if h.isQuitting() {
return errHealStopSignalled
}
return h.queueHealTask(healSource{bucket: SlashSeparator}, madmin.HealItemMetadata)
}
// healBuckets - check for all buckets heal or just particular bucket.
func (h *healSequence) healBuckets(objAPI ObjectLayer, bucketsOnly bool) error {
if h.isQuitting() {
@@ -828,7 +822,7 @@ func (h *healSequence) healBuckets(objAPI ObjectLayer, bucketsOnly bool) error {
return h.healBucket(objAPI, h.bucket, bucketsOnly)
}
buckets, err := objAPI.ListBuckets(h.ctx)
buckets, err := objAPI.ListBuckets(h.ctx, BucketOptions{})
if err != nil {
return errFnHealFromAPIErr(h.ctx, err)
}

View File

@@ -66,6 +66,8 @@ func registerAdminRouter(router *mux.Router, enableConfigOps bool) {
adminRouter.Methods(http.MethodGet).Path(adminVersion + "/storageinfo").HandlerFunc(gz(httpTraceAll(adminAPI.StorageInfoHandler)))
// DataUsageInfo operations
adminRouter.Methods(http.MethodGet).Path(adminVersion + "/datausageinfo").HandlerFunc(gz(httpTraceAll(adminAPI.DataUsageInfoHandler)))
// Metrics operation
adminRouter.Methods(http.MethodGet).Path(adminVersion + "/metrics").HandlerFunc(gz(httpTraceAll(adminAPI.MetricsHandler)))
if globalIsDistErasure || globalIsErasure {
// Heal operations
@@ -84,10 +86,12 @@ func registerAdminRouter(router *mux.Router, enableConfigOps bool) {
adminRouter.Methods(http.MethodPost).Path(adminVersion+"/pools/cancel").HandlerFunc(gz(httpTraceAll(adminAPI.CancelDecommission))).Queries("pool", "{pool:.*}")
}
// Profiling operations
// Profiling operations - deprecated API
adminRouter.Methods(http.MethodPost).Path(adminVersion+"/profiling/start").HandlerFunc(gz(httpTraceAll(adminAPI.StartProfilingHandler))).
Queries("profilerType", "{profilerType:.*}")
adminRouter.Methods(http.MethodGet).Path(adminVersion + "/profiling/download").HandlerFunc(gz(httpTraceAll(adminAPI.DownloadProfilingHandler)))
// Profiling operations
adminRouter.Methods(http.MethodPost).Path(adminVersion + "/profile").HandlerFunc(gz(httpTraceAll(adminAPI.ProfileHandler)))
// Config KV operations.
if enableConfigOps {
@@ -168,49 +172,72 @@ func registerAdminRouter(router *mux.Router, enableConfigOps bool) {
// Set Group Status
adminRouter.Methods(http.MethodPut).Path(adminVersion+"/set-group-status").HandlerFunc(gz(httpTraceHdrs(adminAPI.SetGroupStatus))).Queries("group", "{group:.*}").Queries("status", "{status:.*}")
if globalIsDistErasure || globalIsErasure {
// GetBucketQuotaConfig
adminRouter.Methods(http.MethodGet).Path(adminVersion+"/get-bucket-quota").HandlerFunc(
gz(httpTraceHdrs(adminAPI.GetBucketQuotaConfigHandler))).Queries("bucket", "{bucket:.*}")
// PutBucketQuotaConfig
adminRouter.Methods(http.MethodPut).Path(adminVersion+"/set-bucket-quota").HandlerFunc(
gz(httpTraceHdrs(adminAPI.PutBucketQuotaConfigHandler))).Queries("bucket", "{bucket:.*}")
// Export IAM info to zipped file
adminRouter.Methods(http.MethodGet).Path(adminVersion + "/export-iam").HandlerFunc(httpTraceHdrs(adminAPI.ExportIAM))
// Bucket replication operations
// GetBucketTargetHandler
adminRouter.Methods(http.MethodGet).Path(adminVersion+"/list-remote-targets").HandlerFunc(
gz(httpTraceHdrs(adminAPI.ListRemoteTargetsHandler))).Queries("bucket", "{bucket:.*}", "type", "{type:.*}")
// SetRemoteTargetHandler
adminRouter.Methods(http.MethodPut).Path(adminVersion+"/set-remote-target").HandlerFunc(
gz(httpTraceHdrs(adminAPI.SetRemoteTargetHandler))).Queries("bucket", "{bucket:.*}")
// RemoveRemoteTargetHandler
adminRouter.Methods(http.MethodDelete).Path(adminVersion+"/remove-remote-target").HandlerFunc(
gz(httpTraceHdrs(adminAPI.RemoveRemoteTargetHandler))).Queries("bucket", "{bucket:.*}", "arn", "{arn:.*}")
// Import IAM info
adminRouter.Methods(http.MethodPut).Path(adminVersion + "/import-iam").HandlerFunc(httpTraceHdrs(adminAPI.ImportIAM))
// Remote Tier management operations
adminRouter.Methods(http.MethodPut).Path(adminVersion + "/tier").HandlerFunc(gz(httpTraceHdrs(adminAPI.AddTierHandler)))
adminRouter.Methods(http.MethodPost).Path(adminVersion + "/tier/{tier}").HandlerFunc(gz(httpTraceHdrs(adminAPI.EditTierHandler)))
adminRouter.Methods(http.MethodGet).Path(adminVersion + "/tier").HandlerFunc(gz(httpTraceHdrs(adminAPI.ListTierHandler)))
// Identity Provider configuration APIs
adminRouter.Methods(http.MethodPut).Path(adminVersion+"/idp-config").HandlerFunc(gz(httpTraceHdrs(adminAPI.SetIdentityProviderCfg))).Queries("type", "{type:.*}").Queries("name", "{name:.*}")
adminRouter.Methods(http.MethodGet).Path(adminVersion+"/idp-config").HandlerFunc(gz(httpTraceHdrs(adminAPI.GetIdentityProviderCfg))).Queries("type", "{type:.*}")
adminRouter.Methods(http.MethodDelete).Path(adminVersion+"/idp-config").HandlerFunc(gz(httpTraceHdrs(adminAPI.DeleteIdentityProviderCfg))).Queries("type", "{type:.*}").Queries("name", "{name:.*}")
// Tier stats
adminRouter.Methods(http.MethodGet).Path(adminVersion + "/tier-stats").HandlerFunc(gz(httpTraceHdrs(adminAPI.TierStatsHandler)))
// -- END IAM APIs --
// Cluster Replication APIs
adminRouter.Methods(http.MethodPut).Path(adminVersion + "/site-replication/add").HandlerFunc(gz(httpTraceHdrs(adminAPI.SiteReplicationAdd)))
adminRouter.Methods(http.MethodPut).Path(adminVersion + "/site-replication/remove").HandlerFunc(gz(httpTraceHdrs(adminAPI.SiteReplicationRemove)))
adminRouter.Methods(http.MethodGet).Path(adminVersion + "/site-replication/info").HandlerFunc(gz(httpTraceHdrs(adminAPI.SiteReplicationInfo)))
adminRouter.Methods(http.MethodGet).Path(adminVersion + "/site-replication/metainfo").HandlerFunc(gz(httpTraceHdrs(adminAPI.SiteReplicationMetaInfo)))
adminRouter.Methods(http.MethodGet).Path(adminVersion + "/site-replication/status").HandlerFunc(gz(httpTraceHdrs(adminAPI.SiteReplicationStatus)))
// GetBucketQuotaConfig
adminRouter.Methods(http.MethodGet).Path(adminVersion+"/get-bucket-quota").HandlerFunc(
gz(httpTraceHdrs(adminAPI.GetBucketQuotaConfigHandler))).Queries("bucket", "{bucket:.*}")
// PutBucketQuotaConfig
adminRouter.Methods(http.MethodPut).Path(adminVersion+"/set-bucket-quota").HandlerFunc(
gz(httpTraceHdrs(adminAPI.PutBucketQuotaConfigHandler))).Queries("bucket", "{bucket:.*}")
adminRouter.Methods(http.MethodPut).Path(adminVersion + "/site-replication/peer/join").HandlerFunc(gz(httpTraceHdrs(adminAPI.SRPeerJoin)))
adminRouter.Methods(http.MethodPut).Path(adminVersion+"/site-replication/peer/bucket-ops").HandlerFunc(gz(httpTraceHdrs(adminAPI.SRPeerBucketOps))).Queries("bucket", "{bucket:.*}").Queries("operation", "{operation:.*}")
adminRouter.Methods(http.MethodPut).Path(adminVersion + "/site-replication/peer/iam-item").HandlerFunc(gz(httpTraceHdrs(adminAPI.SRPeerReplicateIAMItem)))
adminRouter.Methods(http.MethodPut).Path(adminVersion + "/site-replication/peer/bucket-meta").HandlerFunc(gz(httpTraceHdrs(adminAPI.SRPeerReplicateBucketItem)))
adminRouter.Methods(http.MethodGet).Path(adminVersion + "/site-replication/peer/idp-settings").HandlerFunc(gz(httpTraceHdrs(adminAPI.SRPeerGetIDPSettings)))
adminRouter.Methods(http.MethodPut).Path(adminVersion + "/site-replication/edit").HandlerFunc(gz(httpTraceHdrs(adminAPI.SiteReplicationEdit)))
adminRouter.Methods(http.MethodPut).Path(adminVersion + "/site-replication/peer/edit").HandlerFunc(gz(httpTraceHdrs(adminAPI.SRPeerEdit)))
adminRouter.Methods(http.MethodPut).Path(adminVersion + "/site-replication/peer/remove").HandlerFunc(gz(httpTraceHdrs(adminAPI.SRPeerRemove)))
}
// Bucket replication operations
// GetBucketTargetHandler
adminRouter.Methods(http.MethodGet).Path(adminVersion+"/list-remote-targets").HandlerFunc(
gz(httpTraceHdrs(adminAPI.ListRemoteTargetsHandler))).Queries("bucket", "{bucket:.*}", "type", "{type:.*}")
// SetRemoteTargetHandler
adminRouter.Methods(http.MethodPut).Path(adminVersion+"/set-remote-target").HandlerFunc(
gz(httpTraceHdrs(adminAPI.SetRemoteTargetHandler))).Queries("bucket", "{bucket:.*}")
// RemoveRemoteTargetHandler
adminRouter.Methods(http.MethodDelete).Path(adminVersion+"/remove-remote-target").HandlerFunc(
gz(httpTraceHdrs(adminAPI.RemoveRemoteTargetHandler))).Queries("bucket", "{bucket:.*}", "arn", "{arn:.*}")
// ReplicationDiff - MinIO extension API
adminRouter.Methods(http.MethodPost).Path(adminVersion+"/replication/diff").HandlerFunc(
gz(httpTraceHdrs(adminAPI.ReplicationDiffHandler))).Queries("bucket", "{bucket:.*}")
// Bucket migration operations
// ExportBucketMetaHandler
adminRouter.Methods(http.MethodGet).Path(adminVersion + "/export-bucket-metadata").HandlerFunc(
gz(httpTraceHdrs(adminAPI.ExportBucketMetadataHandler)))
// ImportBucketMetaHandler
adminRouter.Methods(http.MethodPut).Path(adminVersion + "/import-bucket-metadata").HandlerFunc(
gz(httpTraceHdrs(adminAPI.ImportBucketMetadataHandler)))
// Remote Tier management operations
adminRouter.Methods(http.MethodPut).Path(adminVersion + "/tier").HandlerFunc(gz(httpTraceHdrs(adminAPI.AddTierHandler)))
adminRouter.Methods(http.MethodPost).Path(adminVersion + "/tier/{tier}").HandlerFunc(gz(httpTraceHdrs(adminAPI.EditTierHandler)))
adminRouter.Methods(http.MethodGet).Path(adminVersion + "/tier").HandlerFunc(gz(httpTraceHdrs(adminAPI.ListTierHandler)))
adminRouter.Methods(http.MethodDelete).Path(adminVersion + "/tier/{tier}").HandlerFunc(gz(httpTraceHdrs(adminAPI.RemoveTierHandler)))
adminRouter.Methods(http.MethodGet).Path(adminVersion + "/tier/{tier}").HandlerFunc(gz(httpTraceHdrs(adminAPI.VerifyTierHandler)))
// Tier stats
adminRouter.Methods(http.MethodGet).Path(adminVersion + "/tier-stats").HandlerFunc(gz(httpTraceHdrs(adminAPI.TierStatsHandler)))
// Cluster Replication APIs
adminRouter.Methods(http.MethodPut).Path(adminVersion + "/site-replication/add").HandlerFunc(gz(httpTraceHdrs(adminAPI.SiteReplicationAdd)))
adminRouter.Methods(http.MethodPut).Path(adminVersion + "/site-replication/remove").HandlerFunc(gz(httpTraceHdrs(adminAPI.SiteReplicationRemove)))
adminRouter.Methods(http.MethodGet).Path(adminVersion + "/site-replication/info").HandlerFunc(gz(httpTraceHdrs(adminAPI.SiteReplicationInfo)))
adminRouter.Methods(http.MethodGet).Path(adminVersion + "/site-replication/metainfo").HandlerFunc(gz(httpTraceHdrs(adminAPI.SiteReplicationMetaInfo)))
adminRouter.Methods(http.MethodGet).Path(adminVersion + "/site-replication/status").HandlerFunc(gz(httpTraceHdrs(adminAPI.SiteReplicationStatus)))
adminRouter.Methods(http.MethodPut).Path(adminVersion + "/site-replication/peer/join").HandlerFunc(gz(httpTraceHdrs(adminAPI.SRPeerJoin)))
adminRouter.Methods(http.MethodPut).Path(adminVersion+"/site-replication/peer/bucket-ops").HandlerFunc(gz(httpTraceHdrs(adminAPI.SRPeerBucketOps))).Queries("bucket", "{bucket:.*}").Queries("operation", "{operation:.*}")
adminRouter.Methods(http.MethodPut).Path(adminVersion + "/site-replication/peer/iam-item").HandlerFunc(gz(httpTraceHdrs(adminAPI.SRPeerReplicateIAMItem)))
adminRouter.Methods(http.MethodPut).Path(adminVersion + "/site-replication/peer/bucket-meta").HandlerFunc(gz(httpTraceHdrs(adminAPI.SRPeerReplicateBucketItem)))
adminRouter.Methods(http.MethodGet).Path(adminVersion + "/site-replication/peer/idp-settings").HandlerFunc(gz(httpTraceHdrs(adminAPI.SRPeerGetIDPSettings)))
adminRouter.Methods(http.MethodPut).Path(adminVersion + "/site-replication/edit").HandlerFunc(gz(httpTraceHdrs(adminAPI.SiteReplicationEdit)))
adminRouter.Methods(http.MethodPut).Path(adminVersion + "/site-replication/peer/edit").HandlerFunc(gz(httpTraceHdrs(adminAPI.SRPeerEdit)))
adminRouter.Methods(http.MethodPut).Path(adminVersion + "/site-replication/peer/remove").HandlerFunc(gz(httpTraceHdrs(adminAPI.SRPeerRemove)))
if globalIsDistErasure {
// Top locks
@@ -220,10 +247,10 @@ func registerAdminRouter(router *mux.Router, enableConfigOps bool) {
Queries("paths", "{paths:.*}").HandlerFunc(gz(httpTraceHdrs(adminAPI.ForceUnlockHandler)))
}
adminRouter.Methods(http.MethodPost).Path(adminVersion + "/speedtest").HandlerFunc(httpTraceHdrs(adminAPI.SpeedtestHandler))
adminRouter.Methods(http.MethodPost).Path(adminVersion + "/speedtest/object").HandlerFunc(httpTraceHdrs(adminAPI.ObjectSpeedtestHandler))
adminRouter.Methods(http.MethodPost).Path(adminVersion + "/speedtest").HandlerFunc(httpTraceHdrs(adminAPI.SpeedTestHandler))
adminRouter.Methods(http.MethodPost).Path(adminVersion + "/speedtest/object").HandlerFunc(httpTraceHdrs(adminAPI.ObjectSpeedTestHandler))
adminRouter.Methods(http.MethodPost).Path(adminVersion + "/speedtest/drive").HandlerFunc(httpTraceHdrs(adminAPI.DriveSpeedtestHandler))
adminRouter.Methods(http.MethodPost).Path(adminVersion + "/speedtest/net").HandlerFunc(httpTraceHdrs(adminAPI.NetSpeedtestHandler))
adminRouter.Methods(http.MethodPost).Path(adminVersion + "/speedtest/net").HandlerFunc(httpTraceHdrs(adminAPI.NetperfHandler))
// HTTP Trace
adminRouter.Methods(http.MethodGet).Path(adminVersion + "/trace").HandlerFunc(gz(http.HandlerFunc(adminAPI.TraceHandler)))
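// Illustrative sketch (editor's example, not part of this diff): each route
// above wraps its handler in the same order, compression outermost and
// tracing inside it, e.g. gz(httpTraceHdrs(handler)). A generic composer
// for that pattern (chain and the wrapper names are assumptions):
package main

import "net/http"

// chain applies wrappers right to left, so chain(h, gz, trace)
// behaves like gz(trace(h)): gz runs first on each request.
func chain(h http.HandlerFunc, wrappers ...func(http.HandlerFunc) http.HandlerFunc) http.HandlerFunc {
	for i := len(wrappers) - 1; i >= 0; i-- {
		h = wrappers[i](h)
	}
	return h
}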

View File

@@ -31,7 +31,10 @@ import (
// local endpoints from given list of endpoints
func getLocalServerProperty(endpointServerPools EndpointServerPools, r *http.Request) madmin.ServerProperties {
var localEndpoints Endpoints
addr := r.Host
addr := globalLocalNodeName
if r != nil {
addr = r.Host
}
if globalIsDistErasure {
addr = globalLocalNodeName
}
@@ -40,7 +43,7 @@ func getLocalServerProperty(endpointServerPools EndpointServerPools, r *http.Req
for _, endpoint := range ep.Endpoints {
nodeName := endpoint.Host
if nodeName == "" {
nodeName = r.Host
nodeName = addr
}
if endpoint.IsLocal {
// Only proceed for local endpoints
@@ -50,7 +53,7 @@ func getLocalServerProperty(endpointServerPools EndpointServerPools, r *http.Req
}
_, present := network[nodeName]
if !present {
if err := isServerResolvable(endpoint, 2*time.Second); err == nil {
if err := isServerResolvable(endpoint, 5*time.Second); err == nil {
network[nodeName] = string(madmin.ItemOnline)
} else {
network[nodeName] = string(madmin.ItemOffline)

View File

@@ -131,7 +131,7 @@ const (
ErrReplicationNeedsVersioningError
ErrReplicationBucketNeedsVersioningError
ErrReplicationDenyEditError
ErrReplicationNoMatchingRuleError
ErrReplicationNoExistingObjects
ErrObjectRestoreAlreadyInProgress
ErrNoSuchKey
ErrNoSuchUpload
@@ -267,6 +267,8 @@ const (
ErrAdminConfigNoQuorum
ErrAdminConfigTooLarge
ErrAdminConfigBadJSON
ErrAdminNoSuchConfigTarget
ErrAdminConfigEnvOverridden
ErrAdminConfigDuplicateKeys
ErrAdminCredentialsMismatch
ErrInsecureClientRequest
@@ -280,6 +282,7 @@ const (
ErrSiteReplicationBucketConfigError
ErrSiteReplicationBucketMetaError
ErrSiteReplicationIAMError
ErrSiteReplicationConfigMissing
// Bucket Quota error codes
ErrAdminBucketQuotaExceeded
@@ -383,6 +386,7 @@ const (
ErrAdminProfilerNotEnabled
ErrInvalidDecompressedSize
ErrAddUserInvalidArgument
ErrAdminResourceInvalidArgument
ErrAdminAccountNotEligible
ErrAccountNotEligible
ErrAdminServiceAccountNotFound
@@ -891,15 +895,15 @@ var errorCodes = errorCodeMap{
Description: "Bandwidth limit for remote target must be atleast 100MBps",
HTTPStatusCode: http.StatusBadRequest,
},
ErrReplicationNoMatchingRuleError: {
Code: "XMinioReplicationNoMatchingRule",
Description: "No matching replication rule found for this object prefix",
ErrReplicationNoExistingObjects: {
Code: "XMinioReplicationNoExistingObjects",
Description: "No matching ExistingsObjects rule enabled",
HTTPStatusCode: http.StatusBadRequest,
},
ErrReplicationDenyEditError: {
Code: "XMinioReplicationDenyEdit",
Description: "Cannot alter local replication config since this server is in a cluster replication setup",
HTTPStatusCode: http.StatusConflict,
HTTPStatusCode: http.StatusBadRequest,
},
ErrBucketRemoteIdenticalToSource: {
Code: "XMinioAdminRemoteIdenticalToSource",
@@ -1154,7 +1158,7 @@ var errorCodes = errorCodeMap{
// MinIO extensions.
ErrStorageFull: {
Code: "XMinioStorageFull",
Description: "Storage backend has reached its minimum free disk threshold. Please delete a few objects to proceed.",
Description: "Storage backend has reached its minimum free drive threshold. Please delete a few objects to proceed.",
HTTPStatusCode: http.StatusInsufficientStorage,
},
ErrRequestBodyParse: {
@@ -1165,7 +1169,7 @@ var errorCodes = errorCodeMap{
ErrObjectExistsAsDirectory: {
Code: "XMinioObjectExistsAsDirectory",
Description: "Object name already exists as a directory.",
HTTPStatusCode: http.StatusConflict,
HTTPStatusCode: http.StatusBadRequest,
},
ErrInvalidObjectName: {
Code: "XMinioInvalidObjectName",
@@ -1238,11 +1242,21 @@ var errorCodes = errorCodeMap{
maxEConfigJSONSize),
HTTPStatusCode: http.StatusBadRequest,
},
ErrAdminNoSuchConfigTarget: {
Code: "XMinioAdminNoSuchConfigTarget",
Description: "No such named configuration target exists",
HTTPStatusCode: http.StatusBadRequest,
},
ErrAdminConfigBadJSON: {
Code: "XMinioAdminConfigBadJSON",
Description: "JSON configuration provided is of incorrect format",
HTTPStatusCode: http.StatusBadRequest,
},
ErrAdminConfigEnvOverridden: {
Code: "XMinioAdminConfigEnvOverridden",
Description: "Unable to update config via Admin API due to environment variable override",
HTTPStatusCode: http.StatusBadRequest,
},
ErrAdminConfigDuplicateKeys: {
Code: "XMinioAdminConfigDuplicateKeys",
Description: "JSON configuration provided has objects with duplicate keys",
@@ -1339,7 +1353,11 @@ var errorCodes = errorCodeMap{
Description: "Error while replicating an IAM item",
HTTPStatusCode: http.StatusServiceUnavailable,
},
ErrSiteReplicationConfigMissing: {
Code: "XMinioSiteReplicationConfigMissingError",
Description: "Site not found in site replication configuration",
HTTPStatusCode: http.StatusBadRequest,
},
ErrMaximumExpires: {
Code: "AuthorizationQueryParametersError",
Description: "X-Amz-Expires must be less than a week (in seconds); that is, the given X-Amz-Expires must be less than 604800 seconds",
@@ -1825,6 +1843,11 @@ var errorCodes = errorCodeMap{
Description: "User is not allowed to be same as admin access key",
HTTPStatusCode: http.StatusForbidden,
},
ErrAdminResourceInvalidArgument: {
Code: "XMinioInvalidResource",
Description: "Policy, user or group names are not allowed to begin or end with space characters",
HTTPStatusCode: http.StatusBadRequest,
},
ErrAdminAccountNotEligible: {
Code: "XMinioInvalidIAMCredentials",
Description: "The administrator key is not eligible for this operation",
@@ -2220,7 +2243,7 @@ func toAPIError(ctx context.Context, err error) APIError {
}
case crypto.Error:
apiErr = APIError{
Code: "XMinIOEncryptionError",
Code: "XMinioEncryptionError",
Description: e.Error(),
HTTPStatusCode: http.StatusBadRequest,
}

View File

@@ -126,6 +126,11 @@ func setObjectHeaders(w http.ResponseWriter, objInfo ObjectInfo, rs *HTTPRangeSp
// Set all other user defined metadata.
for k, v := range objInfo.UserDefined {
// Empty values for object lock and retention can be skipped.
if v == "" && equals(k, xhttp.AmzObjectLockMode, xhttp.AmzObjectLockRetainUntilDate) {
continue
}
if strings.HasPrefix(strings.ToLower(k), ReservedMetadataPrefixLower) {
// Do not need to send any internal metadata
// values to client.
@@ -194,5 +199,12 @@ func setObjectHeaders(w http.ResponseWriter, objInfo ObjectInfo, rs *HTTPRangeSp
lc.SetPredictionHeaders(w, objInfo.ToLifecycleOpts())
}
if v, ok := objInfo.UserDefined[ReservedMetadataPrefix+"compression"]; ok {
if i := strings.LastIndexByte(v, '/'); i >= 0 {
v = v[i+1:]
}
w.Header()[xhttp.MinIOCompressed] = []string{v}
}
return nil
}
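// Illustrative sketch (editor's example, not part of this diff): the header
// logic above strips any registry-style prefix and reports only the final
// algorithm name to the client. The same trimming in isolation (the stored
// value shown is an assumption about the internal format):
package main

import "strings"

func algoSuffix(v string) string {
	if i := strings.LastIndexByte(v, '/'); i >= 0 {
		v = v[i+1:]
	}
	return v // e.g. "klauspost/compress/s2" becomes "s2"
}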

View File

@@ -728,6 +728,14 @@ func generateMultiDeleteResponse(quiet bool, deletedObjects []DeletedObject, err
}
func writeResponse(w http.ResponseWriter, statusCode int, response []byte, mType mimeType) {
if statusCode == 0 {
statusCode = 200
}
// Similar check to http.checkWriteHeaderCode
if statusCode < 100 || statusCode > 999 {
logger.Error(fmt.Sprintf("invalid WriteHeader code %v", statusCode))
statusCode = http.StatusInternalServerError
}
setCommonHeaders(w)
if mType != mimeNone {
w.Header().Set(xhttp.ContentType, string(mType))
@@ -791,6 +799,12 @@ func writeErrorResponse(ctx context.Context, w http.ResponseWriter, err APIError
err.Description = fmt.Sprintf("The authorization header is malformed; the region is wrong; expecting '%s'.", globalSite.Region)
}
// Similar check to http.checkWriteHeaderCode
if err.HTTPStatusCode < 100 || err.HTTPStatusCode > 999 {
logger.Error(fmt.Sprintf("invalid WriteHeader code %v from %v", err.HTTPStatusCode, err.Code))
err.HTTPStatusCode = http.StatusInternalServerError
}
// Generate error response.
errorResponse := getAPIErrorResponse(ctx, err, reqURL.Path,
w.Header().Get(xhttp.AmzRequestID), globalDeploymentID)

View File

@@ -342,7 +342,7 @@ func registerAPIRouter(router *mux.Router) {
collectAPIStats("getbucketnotification", maxClients(gz(httpTraceAll(api.GetBucketNotificationHandler))))).Queries("notification", "")
// ListenNotification
router.Methods(http.MethodGet).HandlerFunc(
collectAPIStats("listennotification", maxClients(gz(httpTraceAll(api.ListenNotificationHandler))))).Queries("events", "{events:.*}")
collectAPIStats("listennotification", gz(httpTraceAll(api.ListenNotificationHandler)))).Queries("events", "{events:.*}")
// ResetBucketReplicationStatus - MinIO extension API
router.Methods(http.MethodGet).HandlerFunc(
collectAPIStats("resetbucketreplicationstatus", maxClients(gz(httpTraceAll(api.ResetBucketReplicationStatusHandler))))).Queries("replication-reset-status", "")
@@ -474,7 +474,7 @@ func registerAPIRouter(router *mux.Router) {
// ListenNotification
apiRouter.Methods(http.MethodGet).Path(SlashSeparator).HandlerFunc(
collectAPIStats("listennotification", maxClients(gz(httpTraceAll(api.ListenNotificationHandler))))).Queries("events", "{events:.*}")
collectAPIStats("listennotification", gz(httpTraceAll(api.ListenNotificationHandler)))).Queries("events", "{events:.*}")
// ListBuckets
apiRouter.Methods(http.MethodGet).Path(SlashSeparator).HandlerFunc(

File diff suppressed because one or more lines are too long

View File

@@ -197,13 +197,7 @@ func mustGetClaimsFromToken(r *http.Request) map[string]interface{} {
return claims
}
// Fetch claims in the security token returned by the client.
func getClaimsFromToken(token string) (map[string]interface{}, error) {
if token == "" {
claims := xjwt.NewMapClaims()
return claims.Map(), nil
}
func getClaimsFromTokenWithSecret(token, secret string) (map[string]interface{}, error) {
// JWT token for x-amz-security-token is signed with admin
// secret key, temporary credentials become invalid if
// server admin credentials change. This is done to ensure
@@ -212,13 +206,19 @@ func getClaimsFromToken(token string) (map[string]interface{}, error) {
// hijacking the policies. We need to make sure that this is
// based on an admin credential such that the token cannot be decoded
// on the client side and is treated like an opaque value.
claims, err := auth.ExtractClaims(token, globalActiveCred.SecretKey)
claims, err := auth.ExtractClaims(token, secret)
if err != nil {
return nil, errAuthentication
if subtle.ConstantTimeCompare([]byte(secret), []byte(globalActiveCred.SecretKey)) == 1 {
return nil, errAuthentication
}
claims, err = auth.ExtractClaims(token, globalActiveCred.SecretKey)
if err != nil {
return nil, errAuthentication
}
}
// If OPA is set, return without any further checks.
if globalPolicyOPA != nil {
// If AuthZPlugin is set, return without any further checks.
if newGlobalAuthZPluginFn() != nil {
return claims.Map(), nil
}
@@ -235,29 +235,56 @@ func getClaimsFromToken(token string) (map[string]interface{}, error) {
logger.LogIf(GlobalContext, err, logger.Application)
return nil, errAuthentication
}
claims.MapClaims[iampolicy.SessionPolicyName] = string(spBytes)
claims.MapClaims[sessionPolicyNameExtracted] = string(spBytes)
}
return claims.Map(), nil
}
// Fetch claims in the security token returned by the client.
func getClaimsFromToken(token string) (map[string]interface{}, error) {
return getClaimsFromTokenWithSecret(token, globalActiveCred.SecretKey)
}
// Fetch claims in the security token returned by the client and validate the token.
func checkClaimsFromToken(r *http.Request, cred auth.Credentials) (map[string]interface{}, APIErrorCode) {
token := getSessionToken(r)
if token != "" && cred.AccessKey == "" {
// x-amz-security-token is not allowed for anonymous access.
return nil, ErrNoAccessKey
}
if cred.IsServiceAccount() && token == "" {
token = cred.SessionToken
}
if subtle.ConstantTimeCompare([]byte(token), []byte(cred.SessionToken)) != 1 {
if token == "" && cred.IsTemp() {
// Temporary credentials should always have x-amz-security-token
return nil, ErrInvalidToken
}
claims, err := getClaimsFromToken(token)
if err != nil {
return nil, toAPIErrorCode(r.Context(), err)
if token != "" && !cred.IsTemp() {
// x-amz-security-token should not be present for static credentials.
return nil, ErrInvalidToken
}
return claims, ErrNone
if cred.IsTemp() && subtle.ConstantTimeCompare([]byte(token), []byte(cred.SessionToken)) != 1 {
// validate token for temporary credentials only.
return nil, ErrInvalidToken
}
secret := globalActiveCred.SecretKey
if cred.IsServiceAccount() {
token = cred.SessionToken
secret = cred.SecretKey
}
if token != "" {
claims, err := getClaimsFromTokenWithSecret(token, secret)
if err != nil {
return nil, toAPIErrorCode(r.Context(), err)
}
return claims, ErrNone
}
claims := xjwt.NewMapClaims()
return claims.Map(), ErrNone
}
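// Editor's note (illustrative, not part of this diff): the token checks
// above deliberately use crypto/subtle instead of ==, so the comparison
// takes the same time regardless of how many leading bytes match:
package main

import "crypto/subtle"

func tokensEqual(a, b string) bool {
	// ConstantTimeCompare returns 1 only when the contents are equal, and
	// its running time depends only on the lengths of the inputs.
	return subtle.ConstantTimeCompare([]byte(a), []byte(b)) == 1
}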
// Check request auth type verifies the incoming http request
@@ -473,11 +500,18 @@ func isSupportedS3AuthType(aType authType) bool {
func setAuthHandler(h http.Handler) http.Handler {
// handler for validating incoming authorization headers.
return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
tc, ok := r.Context().Value(contextTraceReqKey).(*traceCtxt)
aType := getRequestAuthType(r)
if aType == authTypeSigned || aType == authTypeSignedV2 || aType == authTypeStreamingSigned {
// Verify if date headers are set, if not reject the request
amzDate, errCode := parseAmzDateHeader(r)
if errCode != ErrNone {
if ok {
tc.funcName = "handler.Auth"
tc.responseRecorder.LogErrBody = true
}
// All our internal APIs are sensitive to the Date
// header; any request where the Date header is not
// present is rejected.
@@ -489,6 +523,11 @@ func setAuthHandler(h http.Handler) http.Handler {
// or in the future, reject request otherwise.
curTime := UTCNow()
if curTime.Sub(amzDate) > globalMaxSkewTime || amzDate.Sub(curTime) > globalMaxSkewTime {
if ok {
tc.funcName = "handler.Auth"
tc.responseRecorder.LogErrBody = true
}
writeErrorResponse(r.Context(), w, errorCodes.ToAPIErr(ErrRequestTimeTooSkewed), r.URL)
atomic.AddUint64(&globalHTTPStats.rejectedRequestsTime, 1)
return
@@ -498,6 +537,12 @@ func setAuthHandler(h http.Handler) http.Handler {
h.ServeHTTP(w, r)
return
}
if ok {
tc.funcName = "handler.Auth"
tc.responseRecorder.LogErrBody = true
}
writeErrorResponse(r.Context(), w, errorCodes.ToAPIErr(ErrSignatureVersionNotSupported), r.URL)
atomic.AddUint64(&globalHTTPStats.rejectedRequestsAuth, 1)
})

View File

@@ -32,6 +32,12 @@ import (
iampolicy "github.com/minio/pkg/iam/policy"
)
type nullReader struct{}
func (r *nullReader) Read(b []byte) (int, error) {
return len(b), nil
}
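// Editor's note (illustrative, not part of this diff): nullReader reports
// every buffer as fully read without writing to it, which makes it a cheap
// payload source for auth tests; capped with io.LimitReader it yields a
// fixed-size body, as in this assumed usage:
package main

import (
	"io"
	"io/ioutil"
)

type nullReader struct{}

func (r *nullReader) Read(b []byte) (int, error) { return len(b), nil }

func discardN(n int64) (int64, error) {
	// Streams exactly n bytes of unspecified content.
	return io.Copy(ioutil.Discard, io.LimitReader(&nullReader{}, n))
}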
// Test get request auth type.
func TestGetRequestAuthType(t *testing.T) {
type testCase struct {
@@ -341,7 +347,8 @@ func mustNewSignedEmptyMD5Request(method string, urlStr string, contentLength in
}
func mustNewSignedBadMD5Request(method string, urlStr string, contentLength int64,
body io.ReadSeeker, t *testing.T) *http.Request {
body io.ReadSeeker, t *testing.T,
) *http.Request {
req := mustNewRequest(method, urlStr, contentLength, body, t)
req.Header.Set("Content-Md5", "YWFhYWFhYWFhYWFhYWFhCg==")
cred := globalActiveCred

View File

@@ -22,6 +22,7 @@ import (
"runtime"
"github.com/minio/madmin-go"
"github.com/minio/minio/internal/pubsub"
)
// healTask represents what to heal along with options
@@ -49,10 +50,10 @@ type healRoutine struct {
workers int
}
func systemIO() int {
func activeListeners() int {
// Bucket notification and HTTP trace are not costly; it is okay to ignore them
// while counting the number of concurrent connections
return int(globalHTTPListen.NumSubscribers()) + int(globalTrace.NumSubscribers())
return int(globalHTTPListen.NumSubscribers(pubsub.MaskAll)) + int(globalTrace.NumSubscribers(pubsub.MaskAll))
}
func waitForLowHTTPReq() {
@@ -61,7 +62,7 @@ func waitForLowHTTPReq() {
currentIO = httpServer.GetRequestCount
}
globalHealConfig.Wait(currentIO, systemIO)
globalHealConfig.Wait(currentIO, activeListeners)
}
func initBackgroundHealing(ctx context.Context, objAPI ObjectLayer) {

View File

@@ -26,15 +26,12 @@ import (
"os"
"sort"
"strings"
"sync"
"time"
"github.com/dustin/go-humanize"
"github.com/minio/madmin-go"
"github.com/minio/minio-go/v7/pkg/set"
"github.com/minio/minio/internal/color"
"github.com/minio/minio/internal/logger"
"github.com/minio/pkg/console"
)
const (
@@ -90,14 +87,14 @@ type healingTracker struct {
// The disk ID will be validated against the loaded one.
func loadHealingTracker(ctx context.Context, disk StorageAPI) (*healingTracker, error) {
if disk == nil {
return nil, errors.New("loadHealingTracker: nil disk given")
return nil, errors.New("loadHealingTracker: nil drive given")
}
diskID, err := disk.GetDiskID()
if err != nil {
return nil, err
}
b, err := disk.ReadAll(ctx, minioMetaBucket,
pathJoin(bucketMetaPrefix, slashSeparator, healingTrackerFilename))
pathJoin(bucketMetaPrefix, healingTrackerFilename))
if err != nil {
return nil, err
}
@@ -107,7 +104,7 @@ func loadHealingTracker(ctx context.Context, disk StorageAPI) (*healingTracker,
return nil, err
}
if h.ID != diskID && h.ID != "" {
return nil, fmt.Errorf("loadHealingTracker: disk id mismatch expected %s, got %s", h.ID, diskID)
return nil, fmt.Errorf("loadHealingTracker: drive id mismatch expected %s, got %s", h.ID, diskID)
}
h.disk = disk
h.ID = diskID
@@ -132,7 +129,7 @@ func newHealingTracker(disk StorageAPI) *healingTracker {
// If the tracker has been deleted an error is returned.
func (h *healingTracker) update(ctx context.Context) error {
if h.disk.Healing() == nil {
return fmt.Errorf("healingTracker: disk %q is not marked as healing", h.ID)
return fmt.Errorf("healingTracker: drive %q is not marked as healing", h.ID)
}
if h.ID == "" || h.PoolIndex < 0 || h.SetIndex < 0 || h.DiskIndex < 0 {
h.ID, _ = h.disk.GetDiskID()
@@ -158,15 +155,19 @@ func (h *healingTracker) save(ctx context.Context) error {
}
globalBackgroundHealState.updateHealStatus(h)
return h.disk.WriteAll(ctx, minioMetaBucket,
pathJoin(bucketMetaPrefix, slashSeparator, healingTrackerFilename),
pathJoin(bucketMetaPrefix, healingTrackerFilename),
htrackerBytes)
}
// delete the tracker on disk.
func (h *healingTracker) delete(ctx context.Context) error {
return h.disk.Delete(ctx, minioMetaBucket,
pathJoin(bucketMetaPrefix, slashSeparator, healingTrackerFilename),
false)
pathJoin(bucketMetaPrefix, healingTrackerFilename),
DeleteOptions{
Recursive: false,
Force: false,
},
)
}
func (h *healingTracker) isHealed(bucket string) bool {
@@ -258,34 +259,9 @@ func initAutoHeal(ctx context.Context, objAPI ObjectLayer) {
initBackgroundHealing(ctx, objAPI) // start quick background healing
bgSeq := mustGetHealSequence(ctx)
globalBackgroundHealState.pushHealLocalDisks(getLocalDisksToHeal()...)
if drivesToHeal := globalBackgroundHealState.healDriveCount(); drivesToHeal > 0 {
logger.Info(fmt.Sprintf("Found drives to heal %d, waiting until %s to heal the content...",
drivesToHeal, defaultMonitorNewDiskInterval))
// Heal any disk format and metadata early, if possible.
// Start with format healing
if err := bgSeq.healDiskFormat(); err != nil {
if newObjectLayerFn() != nil {
// log only in situations, when object layer
// has fully initialized.
logger.LogIf(bgSeq.ctx, err)
}
}
}
if err := bgSeq.healDiskMeta(objAPI); err != nil {
if newObjectLayerFn() != nil {
// log only in situations, when object layer
// has fully initialized.
logger.LogIf(bgSeq.ctx, err)
}
}
go monitorLocalDisksAndHeal(ctx, z, bgSeq)
go monitorLocalDisksAndHeal(ctx, z)
}
func getLocalDisksToHeal() (disksToHeal Endpoints) {
@@ -307,10 +283,111 @@ func getLocalDisksToHeal() (disksToHeal Endpoints) {
return disksToHeal
}
var newDiskHealingTimeout = newDynamicTimeout(30*time.Second, 10*time.Second)
func healFreshDisk(ctx context.Context, z *erasureServerPools, endpoint Endpoint) error {
logger.Info(fmt.Sprintf("Proceeding to heal '%s' - 'mc admin heal alias/ --verbose' to check the status.", endpoint))
disk, format, err := connectEndpoint(endpoint)
if err != nil {
return fmt.Errorf("Error: %w, %s", err, endpoint)
}
poolIdx := globalEndpoints.GetLocalPoolIdx(disk.Endpoint())
if poolIdx < 0 {
return fmt.Errorf("unexpected pool index (%d) found in %s", poolIdx, disk.Endpoint())
}
// Calculate the set index where the current endpoint belongs
z.serverPools[poolIdx].erasureDisksMu.RLock()
setIdx, _, err := findDiskIndex(z.serverPools[poolIdx].format, format)
z.serverPools[poolIdx].erasureDisksMu.RUnlock()
if err != nil {
return err
}
if setIdx < 0 {
return fmt.Errorf("unexpected set index (%d) found in %s", setIdx, disk.Endpoint())
}
// Prevent parallel erasure set healing
locker := z.NewNSLock(minioMetaBucket, fmt.Sprintf("new-drive-healing/%s/%d/%d", endpoint, poolIdx, setIdx))
lkctx, err := locker.GetLock(ctx, newDiskHealingTimeout)
if err != nil {
return err
}
ctx = lkctx.Context()
defer locker.Unlock(lkctx.Cancel)
buckets, _ := z.ListBuckets(ctx, BucketOptions{})
// Bucket data is dispersed across multiple zones/sets, make
// sure to heal all bucket metadata configuration.
buckets = append(buckets, BucketInfo{
Name: pathJoin(minioMetaBucket, minioConfigPrefix),
}, BucketInfo{
Name: pathJoin(minioMetaBucket, bucketMetaPrefix),
})
// Heal latest buckets first.
sort.Slice(buckets, func(i, j int) bool {
a, b := strings.HasPrefix(buckets[i].Name, minioMetaBucket), strings.HasPrefix(buckets[j].Name, minioMetaBucket)
if a != b {
return a
}
return buckets[i].Created.After(buckets[j].Created)
})
if serverDebugLog {
logger.Info("Healing drive '%v' on %s pool", disk, humanize.Ordinal(poolIdx+1))
}
// Load healing tracker in this disk
tracker, err := loadHealingTracker(ctx, disk)
if err != nil {
// So someone changed the drives underneath, healing tracker missing.
logger.LogIf(ctx, fmt.Errorf("Healing tracker missing on '%s', drive was swapped again on %s pool: %w",
disk, humanize.Ordinal(poolIdx+1), err))
tracker = newHealingTracker(disk)
}
// Load bucket totals
cache := dataUsageCache{}
if err := cache.load(ctx, z.serverPools[poolIdx].sets[setIdx], dataUsageCacheName); err == nil {
dataUsageInfo := cache.dui(dataUsageRoot, nil)
tracker.ObjectsTotalCount = dataUsageInfo.ObjectsTotalCount
tracker.ObjectsTotalSize = dataUsageInfo.ObjectsTotalSize
}
tracker.PoolIndex, tracker.SetIndex, tracker.DiskIndex = disk.GetDiskLoc()
tracker.setQueuedBuckets(buckets)
if err := tracker.save(ctx); err != nil {
return err
}
// Start or resume healing of this erasure set
if err = z.serverPools[poolIdx].sets[setIdx].healErasureSet(ctx, tracker.QueuedBuckets, tracker); err != nil {
return err
}
if tracker.ItemsFailed > 0 {
logger.Info("Healing drive '%s' failed (healed: %d, failed: %d).", disk, tracker.ItemsHealed, tracker.ItemsFailed)
} else {
logger.Info("Healing drive '%s' complete (healed: %d, failed: %d).", disk, tracker.ItemsHealed, tracker.ItemsFailed)
}
if serverDebugLog {
tracker.printTo(os.Stdout)
logger.Info("\n")
}
logger.LogIf(ctx, tracker.delete(ctx))
return nil
}
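
The comparator above orders system buckets first and, within each group, newer buckets before older ones, so freshly created data is healed earliest. A self-contained sketch of that ordering, with `BucketInfo` reduced to the two fields the sort needs:

```
package main

import (
	"fmt"
	"sort"
	"strings"
	"time"
)

type BucketInfo struct {
	Name    string
	Created time.Time
}

const minioMetaBucket = ".minio.sys"

func main() {
	now := time.Now()
	buckets := []BucketInfo{
		{Name: "photos", Created: now.Add(-48 * time.Hour)},
		{Name: "logs", Created: now.Add(-1 * time.Hour)},
		{Name: minioMetaBucket + "/config", Created: now.Add(-240 * time.Hour)},
	}
	// System buckets first, then newest user buckets first.
	sort.Slice(buckets, func(i, j int) bool {
		a := strings.HasPrefix(buckets[i].Name, minioMetaBucket)
		b := strings.HasPrefix(buckets[j].Name, minioMetaBucket)
		if a != b {
			return a // a meta bucket sorts ahead of everything else
		}
		return buckets[i].Created.After(buckets[j].Created)
	})
	for _, b := range buckets {
		fmt.Println(b.Name)
	}
}
```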
// monitorLocalDisksAndHeal - ensures that detected new disks are healed
// 1. Only the concerned erasure set will be listed and healed
// 2. Only the node hosting the disk is responsible for performing the heal
func monitorLocalDisksAndHeal(ctx context.Context, z *erasureServerPools, bgSeq *healSequence) {
func monitorLocalDisksAndHeal(ctx context.Context, z *erasureServerPools) {
// Perform automatic disk healing when a disk is replaced locally.
diskCheckTimer := time.NewTimer(defaultMonitorNewDiskInterval)
defer diskCheckTimer.Stop()
@@ -320,138 +397,37 @@ func monitorLocalDisksAndHeal(ctx context.Context, z *erasureServerPools, bgSeq
case <-ctx.Done():
return
case <-diskCheckTimer.C:
// Reset to next interval.
diskCheckTimer.Reset(defaultMonitorNewDiskInterval)
var erasureSetInPoolDisksToHeal []map[int][]StorageAPI
healDisks := globalBackgroundHealState.getHealLocalDiskEndpoints()
if len(healDisks) > 0 {
// Reformat disks
bgSeq.queueHealTask(healSource{bucket: SlashSeparator}, madmin.HealItemMetadata)
// Ensure that reformatting disks is finished
bgSeq.queueHealTask(healSource{bucket: nopHeal}, madmin.HealItemMetadata)
logger.Info(fmt.Sprintf("Found drives to heal %d, proceeding to heal content...",
len(healDisks)))
erasureSetInPoolDisksToHeal = make([]map[int][]StorageAPI, len(z.serverPools))
for i := range z.serverPools {
erasureSetInPoolDisksToHeal[i] = map[int][]StorageAPI{}
}
if len(healDisks) == 0 {
// Reset for next interval.
diskCheckTimer.Reset(defaultMonitorNewDiskInterval)
continue
}
if serverDebugLog && len(healDisks) > 0 {
console.Debugf(color.Green("healDisk:")+" disk check timer fired, attempting to heal %d drives\n", len(healDisks))
// Reformat disks immediately
_, err := z.HealFormat(context.Background(), false)
if err != nil && !errors.Is(err, errNoHealRequired) {
logger.LogIf(ctx, err)
// Reset for next interval.
diskCheckTimer.Reset(defaultMonitorNewDiskInterval)
continue
}
// heal only if new disks found.
for _, endpoint := range healDisks {
disk, format, err := connectEndpoint(endpoint)
if err != nil {
printEndpointError(endpoint, err, true)
continue
}
poolIdx := globalEndpoints.GetLocalPoolIdx(disk.Endpoint())
if poolIdx < 0 {
continue
}
// Calculate the set index where the current endpoint belongs
z.serverPools[poolIdx].erasureDisksMu.RLock()
// Protect reading reference format.
setIndex, _, err := findDiskIndex(z.serverPools[poolIdx].format, format)
z.serverPools[poolIdx].erasureDisksMu.RUnlock()
if err != nil {
printEndpointError(endpoint, err, false)
continue
}
erasureSetInPoolDisksToHeal[poolIdx][setIndex] = append(erasureSetInPoolDisksToHeal[poolIdx][setIndex], disk)
}
buckets, _ := z.ListBuckets(ctx)
// Bucket data is dispersed across multiple zones/sets, make
// sure to heal all bucket metadata configuration.
buckets = append(buckets, BucketInfo{
Name: pathJoin(minioMetaBucket, minioConfigPrefix),
}, BucketInfo{
Name: pathJoin(minioMetaBucket, bucketMetaPrefix),
})
// Heal latest buckets first.
sort.Slice(buckets, func(i, j int) bool {
a, b := strings.HasPrefix(buckets[i].Name, minioMetaBucket), strings.HasPrefix(buckets[j].Name, minioMetaBucket)
if a != b {
return a
}
return buckets[i].Created.After(buckets[j].Created)
})
// TODO(klauspost): This will block until all heals are done,
// in the future this should be able to start healing other sets at once.
var wg sync.WaitGroup
for i, setMap := range erasureSetInPoolDisksToHeal {
i := i
for setIndex, disks := range setMap {
if len(disks) == 0 {
continue
for _, disk := range healDisks {
go func(disk Endpoint) {
globalBackgroundHealState.markDiskForHealing(disk)
err := healFreshDisk(ctx, z, disk)
if err != nil {
printEndpointError(disk, err, false)
return
}
wg.Add(1)
go func(setIndex int, disks []StorageAPI) {
defer wg.Done()
for _, disk := range disks {
logger.Info("Healing disk '%v' on %s pool", disk, humanize.Ordinal(i+1))
// So someone changed the drives underneath, healing tracker missing.
tracker, err := loadHealingTracker(ctx, disk)
if err != nil {
logger.Info("Healing tracker missing on '%s', disk was swapped again on %s pool",
disk, humanize.Ordinal(i+1))
tracker = newHealingTracker(disk)
}
// Load bucket totals
cache := dataUsageCache{}
if err := cache.load(ctx, z.serverPools[i].sets[setIndex], dataUsageCacheName); err == nil {
dataUsageInfo := cache.dui(dataUsageRoot, nil)
tracker.ObjectsTotalCount = dataUsageInfo.ObjectsTotalCount
tracker.ObjectsTotalSize = dataUsageInfo.ObjectsTotalSize
}
tracker.PoolIndex, tracker.SetIndex, tracker.DiskIndex = disk.GetDiskLoc()
tracker.setQueuedBuckets(buckets)
if err := tracker.save(ctx); err != nil {
logger.LogIf(ctx, err)
// Unable to write healing tracker, permission denied or some
// other unexpected error occurred. Proceed to look for new
// disks to be healed again, we cannot proceed further.
return
}
err = z.serverPools[i].sets[setIndex].healErasureSet(ctx, tracker.QueuedBuckets, tracker)
if err != nil {
logger.LogIf(ctx, err)
continue
}
logger.Info("Healing disk '%s' on %s pool, %s set complete", disk,
humanize.Ordinal(i+1), humanize.Ordinal(setIndex+1))
logger.Info("Summary:\n")
tracker.printTo(os.Stdout)
logger.LogIf(ctx, tracker.delete(ctx))
logger.Info("\n")
// Only upon success pop the healed disk.
globalBackgroundHealState.popHealLocalDisks(disk.Endpoint())
}
}(setIndex, disks)
}
// Only upon success pop the healed disk.
globalBackgroundHealState.popHealLocalDisks(disk)
}(disk)
}
wg.Wait()
// Reset for next interval.
diskCheckTimer.Reset(defaultMonitorNewDiskInterval)
}
}
}
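
The rewritten monitor loop fires one goroutine per replaced drive and re-arms a timer between scans, instead of batching drives by erasure set. A reduced sketch of that shape, with placeholders for the endpoint type and the heal routine:

```
package main

import (
	"context"
	"fmt"
	"time"
)

const monitorInterval = 10 * time.Second

// healFreshDisk stands in for the real healing routine.
func healFreshDisk(ctx context.Context, endpoint string) error {
	fmt.Println("healing", endpoint)
	return nil
}

func monitor(ctx context.Context, pending func() []string) {
	t := time.NewTimer(monitorInterval)
	defer t.Stop()
	for {
		select {
		case <-ctx.Done():
			return
		case <-t.C:
			t.Reset(monitorInterval) // re-arm for the next scan
			for _, ep := range pending() {
				go func(ep string) {
					if err := healFreshDisk(ctx, ep); err != nil {
						fmt.Println("heal failed:", ep, err)
						return
					}
					// Only on success is the drive popped from the queue.
				}(ep)
			}
		}
	}
}

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), time.Second)
	defer cancel()
	monitor(ctx, func() []string { return nil })
}
```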

View File

@@ -35,7 +35,7 @@ func runPutObjectBenchmark(b *testing.B, obj ObjectLayer, objSize int) {
// obtains random bucket name.
bucket := getRandomBucketName()
// create bucket.
err = obj.MakeBucketWithLocation(context.Background(), bucket, BucketOptions{})
err = obj.MakeBucketWithLocation(context.Background(), bucket, MakeBucketOptions{})
if err != nil {
b.Fatal(err)
}
@@ -76,7 +76,7 @@ func runPutObjectPartBenchmark(b *testing.B, obj ObjectLayer, partSize int) {
object := getRandomObjectName()
// create bucket.
err = obj.MakeBucketWithLocation(context.Background(), bucket, BucketOptions{})
err = obj.MakeBucketWithLocation(context.Background(), bucket, MakeBucketOptions{})
if err != nil {
b.Fatal(err)
}
@@ -196,7 +196,7 @@ func runPutObjectBenchmarkParallel(b *testing.B, obj ObjectLayer, objSize int) {
// obtains random bucket name.
bucket := getRandomBucketName()
// create bucket.
err := obj.MakeBucketWithLocation(context.Background(), bucket, BucketOptions{})
err := obj.MakeBucketWithLocation(context.Background(), bucket, MakeBucketOptions{})
if err != nil {
b.Fatal(err)
}

View File

@@ -152,10 +152,10 @@ func (b *streamingBitrotReader) ReadAt(buf []byte, offset int64) (int, error) {
streamOffset := (offset/b.shardSize)*int64(b.h.Size()) + offset
if len(b.data) == 0 && b.tillOffset != streamOffset {
b.rc, err = b.disk.ReadFileStream(context.TODO(), b.volume, b.filePath, streamOffset, b.tillOffset-streamOffset)
if err != nil {
if err != nil && err != errDiskNotFound {
logger.LogIf(GlobalContext,
fmt.Errorf("Error(%w) reading erasure shards at (%s: %s/%s), will attempt to reconstruct if we have quorum",
err, b.disk, b.volume, b.filePath))
fmt.Errorf("Reading erasure shards at (%s: %s/%s) returned '%w', will attempt to reconstruct if we have quorum",
b.disk, b.volume, b.filePath, err))
}
} else {
b.rc = io.NewSectionReader(bytes.NewReader(b.data), streamOffset, b.tillOffset-streamOffset)
@@ -180,7 +180,7 @@ func (b *streamingBitrotReader) ReadAt(buf []byte, offset int64) (int, error) {
b.h.Write(buf)
if !bytes.Equal(b.h.Sum(nil), b.hashBytes) {
logger.LogIf(GlobalContext, fmt.Errorf("Disk: %s -> %s/%s - content hash does not match - expected %s, got %s",
logger.LogIf(GlobalContext, fmt.Errorf("Drive: %s -> %s/%s - content hash does not match - expected %s, got %s",
b.disk, b.volume, b.filePath, hex.EncodeToString(b.hashBytes), hex.EncodeToString(b.h.Sum(nil))))
return 0, errFileCorrupt
}

View File

@@ -38,12 +38,12 @@ type wholeBitrotWriter struct {
func (b *wholeBitrotWriter) Write(p []byte) (int, error) {
err := b.disk.AppendFile(context.TODO(), b.volume, b.filePath, p)
if err != nil {
logger.LogIf(GlobalContext, fmt.Errorf("Disk: %s returned %w", b.disk, err))
logger.LogIf(GlobalContext, fmt.Errorf("Drive: %s returned %w", b.disk, err))
return 0, err
}
_, err = b.Hash.Write(p)
if err != nil {
logger.LogIf(GlobalContext, fmt.Errorf("Disk: %s returned %w", b.disk, err))
logger.LogIf(GlobalContext, fmt.Errorf("Drive: %s returned %w", b.disk, err))
return 0, err
}
return len(p), nil
@@ -72,12 +72,12 @@ func (b *wholeBitrotReader) ReadAt(buf []byte, offset int64) (n int, err error)
if b.buf == nil {
b.buf = make([]byte, b.tillOffset-offset)
if _, err := b.disk.ReadFile(context.TODO(), b.volume, b.filePath, offset, b.buf, b.verifier); err != nil {
logger.LogIf(GlobalContext, fmt.Errorf("Disk: %s -> %s/%s returned %w", b.disk, b.volume, b.filePath, err))
logger.LogIf(GlobalContext, fmt.Errorf("Drive: %s -> %s/%s returned %w", b.disk, b.volume, b.filePath, err))
return 0, err
}
}
if len(b.buf) < len(buf) {
logger.LogIf(GlobalContext, fmt.Errorf("Disk: %s -> %s/%s returned %w", b.disk, b.volume, b.filePath, errLessData))
logger.LogIf(GlobalContext, fmt.Errorf("Drive: %s -> %s/%s returned %w", b.disk, b.volume, b.filePath, errLessData))
return 0, errLessData
}
n = copy(buf, b.buf)

View File

@@ -19,7 +19,6 @@ package cmd
import (
"bytes"
"crypto/sha256"
"encoding/hex"
"errors"
"fmt"
@@ -27,6 +26,7 @@ import (
"io"
"github.com/minio/highwayhash"
"github.com/minio/minio/internal/hash/sha256"
"golang.org/x/crypto/blake2b"
xioutil "github.com/minio/minio/internal/ioutil"

View File

@@ -20,17 +20,11 @@ package cmd
import (
"context"
"io"
"io/ioutil"
"os"
"testing"
)
func testBitrotReaderWriterAlgo(t *testing.T, bitrotAlgo BitrotAlgorithm) {
tmpDir, err := ioutil.TempDir("", "")
if err != nil {
t.Fatal(err)
}
defer os.RemoveAll(tmpDir)
tmpDir := t.TempDir()
volume := "testvol"
filePath := "testfile"
@@ -60,7 +54,9 @@ func testBitrotReaderWriterAlgo(t *testing.T, bitrotAlgo BitrotAlgorithm) {
if err != nil {
t.Fatal(err)
}
writer.(io.Closer).Close()
if bw, ok := writer.(io.Closer); ok {
bw.Close()
}
reader := newBitrotReader(disk, nil, volume, filePath, 35, bitrotAlgo, bitrotWriterSum(writer), 10)
b := make([]byte, 10)
@@ -76,6 +72,9 @@ func testBitrotReaderWriterAlgo(t *testing.T, bitrotAlgo BitrotAlgorithm) {
if _, err = reader.ReadAt(b[:5], 30); err != nil {
t.Fatal(err)
}
if br, ok := reader.(io.Closer); ok {
br.Close()
}
}
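
Two test idioms show up in this hunk: `t.TempDir()` replaces the manual `ioutil.TempDir`/`os.RemoveAll` pair, and writers/readers are closed only when they actually implement `io.Closer`. A minimal sketch of both, with no MinIO-specific assumptions:

```
package main

import (
	"io"
	"os"
	"path/filepath"
	"testing"
)

// closeIfCloser closes v only when it actually implements io.Closer,
// mirroring the guarded type assertion in the test above.
func closeIfCloser(v interface{}) {
	if c, ok := v.(io.Closer); ok {
		c.Close()
	}
}

func TestTempDirIdiom(t *testing.T) {
	dir := t.TempDir() // cleaned up automatically when the test ends
	f, err := os.Create(filepath.Join(dir, "testfile"))
	if err != nil {
		t.Fatal(err)
	}
	closeIfCloser(f)              // *os.File implements io.Closer
	closeIfCloser("not a closer") // no-op for non-closers
}
```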
func TestAllBitrotAlgorithms(t *testing.T) {

View File

@@ -206,7 +206,7 @@ func verifyServerSystemConfig(ctx context.Context, endpointServerPools EndpointS
for _, clnt := range clnts {
if err := clnt.Verify(ctx, srcCfg); err != nil {
if !isNetworkError(err) {
logger.Info(fmt.Errorf("%s has incorrect configuration: %w", clnt.String(), err).Error())
logger.LogIf(ctx, fmt.Errorf("%s has incorrect configuration: %w", clnt.String(), err))
}
offlineEndpoints = append(offlineEndpoints, clnt.String())
continue

View File

@@ -65,7 +65,7 @@ func (api objectAPIHandlers) PutBucketEncryptionHandler(w http.ResponseWriter, r
}
// Check if bucket exists.
if _, err := objAPI.GetBucketInfo(ctx, bucket); err != nil {
if _, err := objAPI.GetBucketInfo(ctx, bucket, BucketOptions{}); err != nil {
writeErrorResponse(ctx, w, toAPIError(ctx, err), r.URL)
return
}
@@ -90,7 +90,7 @@ func (api objectAPIHandlers) PutBucketEncryptionHandler(w http.ResponseWriter, r
kmsKey := encConfig.KeyID()
if kmsKey != "" {
kmsContext := kms.Context{"MinIO admin API": "ServerInfoHandler"} // Context for a test key operation
_, err := GlobalKMS.GenerateKey(kmsKey, kmsContext)
_, err := GlobalKMS.GenerateKey(ctx, kmsKey, kmsContext)
if err != nil {
if errors.Is(err, kes.ErrKeyNotFound) {
writeErrorResponse(ctx, w, toAPIError(ctx, errKMSKeyNotFound), r.URL)
@@ -108,7 +108,8 @@ func (api objectAPIHandlers) PutBucketEncryptionHandler(w http.ResponseWriter, r
}
// Store the bucket encryption configuration in the object layer
if err = globalBucketMetadataSys.Update(ctx, bucket, bucketSSEConfig, configData); err != nil {
updatedAt, err := globalBucketMetadataSys.Update(ctx, bucket, bucketSSEConfig, configData)
if err != nil {
writeErrorResponse(ctx, w, toAPIError(ctx, err), r.URL)
return
}
@@ -122,6 +123,7 @@ func (api objectAPIHandlers) PutBucketEncryptionHandler(w http.ResponseWriter, r
Type: madmin.SRBucketMetaTypeSSEConfig,
Bucket: bucket,
SSEConfig: &cfgStr,
UpdatedAt: updatedAt,
}); err != nil {
writeErrorResponse(ctx, w, toAPIError(ctx, err), r.URL)
return
@@ -153,12 +155,12 @@ func (api objectAPIHandlers) GetBucketEncryptionHandler(w http.ResponseWriter, r
// Check if bucket exists
var err error
if _, err = objAPI.GetBucketInfo(ctx, bucket); err != nil {
if _, err = objAPI.GetBucketInfo(ctx, bucket, BucketOptions{}); err != nil {
writeErrorResponse(ctx, w, toAPIError(ctx, err), r.URL)
return
}
config, err := globalBucketMetadataSys.GetSSEConfig(bucket)
config, _, err := globalBucketMetadataSys.GetSSEConfig(bucket)
if err != nil {
writeErrorResponse(ctx, w, toAPIError(ctx, err), r.URL)
return
@@ -196,16 +198,27 @@ func (api objectAPIHandlers) DeleteBucketEncryptionHandler(w http.ResponseWriter
// Check if bucket exists
var err error
if _, err = objAPI.GetBucketInfo(ctx, bucket); err != nil {
if _, err = objAPI.GetBucketInfo(ctx, bucket, BucketOptions{}); err != nil {
writeErrorResponse(ctx, w, toAPIError(ctx, err), r.URL)
return
}
// Delete bucket encryption config from object layer
if err = globalBucketMetadataSys.Update(ctx, bucket, bucketSSEConfig, nil); err != nil {
updatedAt, err := globalBucketMetadataSys.Update(ctx, bucket, bucketSSEConfig, nil)
if err != nil {
writeErrorResponse(ctx, w, toAPIError(ctx, err), r.URL)
return
}
// Call site replication hook.
if err = globalSiteReplicationSys.BucketMetaHook(ctx, madmin.SRBucketMeta{
Type: madmin.SRBucketMetaTypeSSEConfig,
Bucket: bucket,
SSEConfig: nil,
UpdatedAt: updatedAt,
}); err != nil {
writeErrorResponse(ctx, w, toAPIError(ctx, err), r.URL)
return
}
writeSuccessNoContent(w)
}

View File

@@ -43,7 +43,8 @@ func (sys *BucketSSEConfigSys) Get(bucket string) (*sse.BucketSSEConfig, error)
return nil, BucketSSEConfigNotFound{Bucket: bucket}
}
return globalBucketMetadataSys.GetSSEConfig(bucket)
sseCfg, _, err := globalBucketMetadataSys.GetSSEConfig(bucket)
return sseCfg, err
}
// validateBucketSSEConfig parses bucket encryption configuration and validates if it is supported by MinIO.

View File

@@ -21,7 +21,6 @@ import (
"bytes"
"context"
"encoding/base64"
"encoding/json"
"encoding/xml"
"fmt"
"io"
@@ -33,7 +32,6 @@ import (
"strconv"
"strings"
"sync"
"time"
"github.com/google/uuid"
"github.com/gorilla/mux"
@@ -203,7 +201,7 @@ func (api objectAPIHandlers) GetBucketLocationHandler(w http.ResponseWriter, r *
getBucketInfo := objectAPI.GetBucketInfo
if _, err := getBucketInfo(ctx, bucket); err != nil {
if _, err := getBucketInfo(ctx, bucket, BucketOptions{}); err != nil {
writeErrorResponse(ctx, w, toAPIError(ctx, err), r.URL)
return
}
@@ -334,7 +332,7 @@ func (api objectAPIHandlers) ListBucketsHandler(w http.ResponseWriter, r *http.R
} else {
// Invoke the list buckets.
var err error
bucketsInfo, err = listBuckets(ctx)
bucketsInfo, err = listBuckets(ctx, BucketOptions{})
if err != nil {
writeErrorResponse(ctx, w, toAPIError(ctx, err), r.URL)
return
@@ -364,6 +362,18 @@ func (api objectAPIHandlers) ListBucketsHandler(w http.ResponseWriter, r *http.R
}) {
bucketsInfo[n] = bucketInfo
n++
} else if globalIAMSys.IsAllowed(iampolicy.Args{
AccountName: cred.AccessKey,
Groups: cred.Groups,
Action: iampolicy.GetBucketLocationAction,
BucketName: bucketInfo.Name,
ConditionValues: getConditionValues(r, "", cred.AccessKey, cred.Claims),
IsOwner: owner,
ObjectName: "",
Claims: cred.Claims,
}) {
bucketsInfo[n] = bucketInfo
n++
}
}
bucketsInfo = bucketsInfo[:n]
@@ -437,7 +447,7 @@ func (api objectAPIHandlers) DeleteMultipleObjectsHandler(w http.ResponseWriter,
checkRequestAuthType(ctx, r, policy.DeleteObjectAction, bucket, "")
// Before proceeding validate if bucket exists.
_, err := objectAPI.GetBucketInfo(ctx, bucket)
_, err := objectAPI.GetBucketInfo(ctx, bucket, BucketOptions{})
if err != nil {
writeErrorResponse(ctx, w, toAPIError(ctx, err), r.URL)
return
@@ -471,9 +481,6 @@ func (api objectAPIHandlers) DeleteMultipleObjectsHandler(w http.ResponseWriter,
hasLockEnabled = true
}
versioned := globalBucketVersioningSys.Enabled(bucket)
suspended := globalBucketVersioningSys.Suspended(bucket)
type deleteResult struct {
delInfo DeletedObject
errInfo DeleteError
@@ -481,8 +488,8 @@ func (api objectAPIHandlers) DeleteMultipleObjectsHandler(w http.ResponseWriter,
deleteResults := make([]deleteResult, len(deleteObjectsReq.Objects))
vc, _ := globalBucketVersioningSys.Get(bucket)
oss := make([]*objSweeper, len(deleteObjectsReq.Objects))
for index, object := range deleteObjectsReq.Objects {
if apiErrCode := checkRequestAuthType(ctx, r, policy.DeleteObjectAction, bucket, object.ObjectName); apiErrCode != ErrNone {
if apiErrCode == ErrSignatureDoesNotMatch || apiErrCode == ErrInvalidAccessKeyID {
@@ -514,8 +521,8 @@ func (api objectAPIHandlers) DeleteMultipleObjectsHandler(w http.ResponseWriter,
opts := ObjectOptions{
VersionID: object.VersionID,
Versioned: versioned,
VersionSuspended: suspended,
Versioned: vc.PrefixEnabled(object.ObjectName),
VersionSuspended: vc.Suspended(),
}
if replicateDeletes || object.VersionID != "" && hasLockEnabled || !globalTierConfigMgr.Empty() {
@@ -526,7 +533,7 @@ func (api objectAPIHandlers) DeleteMultipleObjectsHandler(w http.ResponseWriter,
}
if !globalTierConfigMgr.Empty() {
oss[index] = newObjSweeper(bucket, object.ObjectName).WithVersion(opts.VersionID).WithVersioning(versioned, suspended)
oss[index] = newObjSweeper(bucket, object.ObjectName).WithVersion(opts.VersionID).WithVersioning(opts.Versioned, opts.VersionSuspended)
oss[index].SetTransitionState(goi.TransitionedObject)
}
@@ -581,8 +588,8 @@ func (api objectAPIHandlers) DeleteMultipleObjectsHandler(w http.ResponseWriter,
deleteList := toNames(objectsToDelete)
dObjects, errs := deleteObjectsFn(ctx, bucket, deleteList, ObjectOptions{
Versioned: versioned,
VersionSuspended: suspended,
PrefixEnabledFn: vc.PrefixEnabled,
VersionSuspended: vc.Suspended(),
})
for i := range errs {
@@ -638,22 +645,13 @@ func (api objectAPIHandlers) DeleteMultipleObjectsHandler(w http.ResponseWriter,
continue
}
if replicateDeletes {
if dobj.DeleteMarkerReplicationStatus() == replication.Pending || dobj.VersionPurgeStatus() == Pending {
dv := DeletedObjectReplicationInfo{
DeletedObject: dobj,
Bucket: bucket,
}
scheduleReplicationDelete(ctx, dv, objectAPI)
if replicateDeletes && (dobj.DeleteMarkerReplicationStatus() == replication.Pending || dobj.VersionPurgeStatus() == Pending) {
dv := DeletedObjectReplicationInfo{
DeletedObject: dobj,
Bucket: bucket,
EventType: ReplicateIncomingDelete,
}
}
}
// Notify deleted event for objects.
for _, dobj := range deletedObjects {
if dobj.ObjectName == "" {
continue
scheduleReplicationDelete(ctx, dv, objectAPI)
}
eventName := event.ObjectRemovedDelete
@@ -706,20 +704,53 @@ func (api objectAPIHandlers) PutBucketHandler(w http.ResponseWriter, r *http.Req
bucket := vars["bucket"]
objectLockEnabled := false
if vs, found := r.Header[http.CanonicalHeaderKey("x-amz-bucket-object-lock-enabled")]; found {
v := strings.ToLower(strings.Join(vs, ""))
if v != "true" && v != "false" {
if vs := r.Header.Get(xhttp.AmzObjectLockEnabled); len(vs) > 0 {
v := strings.ToLower(vs)
switch v {
case "true", "false":
objectLockEnabled = v == "true"
default:
writeErrorResponse(ctx, w, errorCodes.ToAPIErr(ErrInvalidRequest), r.URL)
return
}
objectLockEnabled = v == "true"
}
if s3Error := checkRequestAuthType(ctx, r, policy.CreateBucketAction, bucket, ""); s3Error != ErrNone {
forceCreate := false
if vs := r.Header.Get(xhttp.MinIOForceCreate); len(vs) > 0 {
v := strings.ToLower(vs)
switch v {
case "true", "false":
forceCreate = v == "true"
default:
writeErrorResponse(ctx, w, errorCodes.ToAPIErr(ErrInvalidRequest), r.URL)
return
}
}
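
Both the object-lock header and the force-create header above follow the same tri-state parse: an absent header keeps the default, `true`/`false` set the flag, and anything else is rejected as an invalid request. A small sketch of that pattern as a standalone helper (`boolHeader` is a hypothetical name, not a MinIO function):

```
package main

import (
	"errors"
	"fmt"
	"net/http"
	"strings"
)

var errInvalidHeader = errors.New("header must be \"true\" or \"false\"")

// boolHeader returns the default when the header is absent and fails on
// anything other than a case-insensitive "true"/"false".
func boolHeader(h http.Header, name string, def bool) (bool, error) {
	vs := h.Get(name)
	if len(vs) == 0 {
		return def, nil
	}
	switch strings.ToLower(vs) {
	case "true":
		return true, nil
	case "false":
		return false, nil
	default:
		return def, errInvalidHeader
	}
}

func main() {
	h := http.Header{}
	h.Set("X-Amz-Bucket-Object-Lock-Enabled", "True")
	locked, err := boolHeader(h, "X-Amz-Bucket-Object-Lock-Enabled", false)
	fmt.Println(locked, err) // true <nil>
}
```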
cred, owner, s3Error := checkRequestAuthTypeCredential(ctx, r, policy.CreateBucketAction, bucket, "")
if s3Error != ErrNone {
writeErrorResponse(ctx, w, errorCodes.ToAPIErr(s3Error), r.URL)
return
}
if objectLockEnabled {
// Creating a bucket with locking requires the user having more permissions
for _, action := range []iampolicy.Action{iampolicy.PutBucketObjectLockConfigurationAction, iampolicy.PutBucketVersioningAction} {
if !globalIAMSys.IsAllowed(iampolicy.Args{
AccountName: cred.AccessKey,
Groups: cred.Groups,
Action: action,
ConditionValues: getConditionValues(r, "", cred.AccessKey, cred.Claims),
BucketName: bucket,
IsOwner: owner,
Claims: cred.Claims,
}) {
writeErrorResponse(ctx, w, errorCodes.ToAPIErr(ErrAccessDenied), r.URL)
return
}
}
}
// Parse incoming location constraint.
location, s3Error := parseLocationConstraint(r)
if s3Error != ErrNone {
@@ -734,9 +765,10 @@ func (api objectAPIHandlers) PutBucketHandler(w http.ResponseWriter, r *http.Req
return
}
opts := BucketOptions{
opts := MakeBucketOptions{
Location: location,
LockEnabled: objectLockEnabled,
ForceCreate: forceCreate,
}
if globalDNSConfig != nil {
@@ -752,7 +784,11 @@ func (api objectAPIHandlers) PutBucketHandler(w http.ResponseWriter, r *http.Req
}
if err = globalDNSConfig.Put(bucket); err != nil {
objectAPI.DeleteBucket(context.Background(), bucket, DeleteBucketOptions{Force: false, NoRecreate: true})
objectAPI.DeleteBucket(context.Background(), bucket, DeleteBucketOptions{
Force: false,
NoRecreate: true,
SRDeleteOp: getSRBucketDeleteOp(globalSiteReplicationSys.isEnabled()),
})
writeErrorResponse(ctx, w, toAPIError(ctx, err), r.URL)
return
}
@@ -794,19 +830,14 @@ func (api objectAPIHandlers) PutBucketHandler(w http.ResponseWriter, r *http.Req
}
// Proceed to creating a bucket.
err := objectAPI.MakeBucketWithLocation(ctx, bucket, opts)
if _, ok := err.(BucketExists); ok {
// Though bucket exists locally, we send the site-replication
// hook to ensure all sites have this bucket. If the hook
// succeeds, the client will still receive a bucket exists
// message.
err2 := globalSiteReplicationSys.MakeBucketHook(ctx, bucket, opts)
if err2 != nil {
writeErrorResponse(ctx, w, toAPIError(ctx, err), r.URL)
return
if err := objectAPI.MakeBucketWithLocation(ctx, bucket, opts); err != nil {
if _, ok := err.(BucketExists); ok {
// Though bucket exists locally, we send the site-replication
// hook to ensure all sites have this bucket. If the hook
// succeeds, the client will still receive a bucket exists
// message.
globalSiteReplicationSys.MakeBucketHook(ctx, bucket, opts)
}
}
if err != nil {
writeErrorResponse(ctx, w, toAPIError(ctx, err), r.URL)
return
}
@@ -815,8 +846,7 @@ func (api objectAPIHandlers) PutBucketHandler(w http.ResponseWriter, r *http.Req
globalNotificationSys.LoadBucketMetadata(GlobalContext, bucket)
// Call site replication hook
err = globalSiteReplicationSys.MakeBucketHook(ctx, bucket, opts)
if err != nil {
if err := globalSiteReplicationSys.MakeBucketHook(ctx, bucket, opts); err != nil {
writeErrorResponse(ctx, w, toAPIError(ctx, err), r.URL)
return
}
@@ -858,7 +888,7 @@ func (api objectAPIHandlers) PostPolicyBucketHandler(w http.ResponseWriter, r *h
return
}
if _, ok := crypto.IsRequested(r.Header); !objectAPI.IsEncryptionSupported() && ok {
if crypto.Requested(r.Header) && !objectAPI.IsEncryptionSupported() {
writeErrorResponse(ctx, w, errorCodes.ToAPIErr(ErrNotImplemented), r.URL)
return
}
@@ -1034,7 +1064,7 @@ func (api objectAPIHandlers) PostPolicyBucketHandler(w http.ResponseWriter, r *h
return
}
if objectAPI.IsEncryptionSupported() {
if _, ok := crypto.IsRequested(formValues); ok && !HasSuffix(object, SlashSeparator) { // handle SSE requests
if crypto.Requested(formValues) && !HasSuffix(object, SlashSeparator) { // handle SSE requests
if crypto.SSECopy.IsRequested(r.Header) {
writeErrorResponse(ctx, w, toAPIError(ctx, errInvalidEncryptionParameters), r.URL)
return
@@ -1060,7 +1090,7 @@ func (api objectAPIHandlers) PostPolicyBucketHandler(w http.ResponseWriter, r *h
return
}
}
reader, objectEncryptionKey, err = newEncryptReader(hashReader, kind, keyID, key, bucket, object, metadata, kmsCtx)
reader, objectEncryptionKey, err = newEncryptReader(ctx, hashReader, kind, keyID, key, bucket, object, metadata, kmsCtx)
if err != nil {
writeErrorResponse(ctx, w, toAPIError(ctx, err), r.URL)
return
@@ -1109,9 +1139,12 @@ func (api objectAPIHandlers) PostPolicyBucketHandler(w http.ResponseWriter, r *h
Host: handlers.GetSourceIP(r),
})
if successRedirect != "" {
// Replace raw query params..
redirectURL.RawQuery = getRedirectPostRawQuery(objInfo)
if redirectURL != nil { // success_action_redirect is valid and set.
v := redirectURL.Query()
v.Add("bucket", objInfo.Bucket)
v.Add("key", objInfo.Name)
v.Add("etag", "\""+objInfo.ETag+"\"")
redirectURL.RawQuery = v.Encode()
writeRedirectSeeOther(w, redirectURL.String())
return
}
@@ -1155,7 +1188,7 @@ func (api objectAPIHandlers) GetBucketPolicyStatusHandler(w http.ResponseWriter,
}
// Check if bucket exists.
if _, err := objectAPI.GetBucketInfo(ctx, bucket); err != nil {
if _, err := objectAPI.GetBucketInfo(ctx, bucket, BucketOptions{}); err != nil {
writeErrorResponse(ctx, w, toAPIError(ctx, err), r.URL)
return
}
@@ -1218,7 +1251,7 @@ func (api objectAPIHandlers) HeadBucketHandler(w http.ResponseWriter, r *http.Re
getBucketInfo := objectAPI.GetBucketInfo
if _, err := getBucketInfo(ctx, bucket); err != nil {
if _, err := getBucketInfo(ctx, bucket, BucketOptions{}); err != nil {
writeErrorResponseHeadersOnly(w, toAPIError(ctx, err))
return
}
@@ -1270,6 +1303,17 @@ func (api objectAPIHandlers) DeleteBucketHandler(w http.ResponseWriter, r *http.
writeErrorResponse(ctx, w, errorCodes.ToAPIErr(ErrMethodNotAllowed), r.URL)
return
}
rcfg, err := getReplicationConfig(ctx, bucket)
switch {
case err != nil:
if _, ok := err.(BucketReplicationConfigNotFound); !ok {
writeErrorResponse(ctx, w, errorCodes.ToAPIErr(ErrMethodNotAllowed), r.URL)
return
}
case rcfg.HasActiveRules("", true):
writeErrorResponse(ctx, w, errorCodes.ToAPIErr(ErrMethodNotAllowed), r.URL)
return
}
}
}
@@ -1284,7 +1328,10 @@ func (api objectAPIHandlers) DeleteBucketHandler(w http.ResponseWriter, r *http.
deleteBucket := objectAPI.DeleteBucket
// Attempt to delete bucket.
if err := deleteBucket(ctx, bucket, DeleteBucketOptions{Force: forceDelete}); err != nil {
if err := deleteBucket(ctx, bucket, DeleteBucketOptions{
Force: forceDelete,
SRDeleteOp: getSRBucketDeleteOp(globalSiteReplicationSys.isEnabled()),
}); err != nil {
apiErr := toAPIError(ctx, err)
if _, ok := err.(BucketNotEmpty); ok {
if globalBucketVersioningSys.Enabled(bucket) || globalBucketVersioningSys.Suspended(bucket) {
@@ -1339,7 +1386,7 @@ func (api objectAPIHandlers) PutBucketObjectLockConfigHandler(w http.ResponseWri
writeErrorResponse(ctx, w, errorCodes.ToAPIErr(ErrServerNotInitialized), r.URL)
return
}
if !globalIsErasure {
if globalIsGateway {
writeErrorResponseJSON(ctx, w, errorCodes.ToAPIErr(ErrNotImplemented), r.URL)
return
}
@@ -1363,12 +1410,13 @@ func (api objectAPIHandlers) PutBucketObjectLockConfigHandler(w http.ResponseWri
}
// Deny object locking configuration settings on existing buckets without object lock enabled.
if _, err = globalBucketMetadataSys.GetObjectLockConfig(bucket); err != nil {
if _, _, err = globalBucketMetadataSys.GetObjectLockConfig(bucket); err != nil {
writeErrorResponse(ctx, w, toAPIError(ctx, err), r.URL)
return
}
if err = globalBucketMetadataSys.Update(ctx, bucket, objectLockConfig, configData); err != nil {
updatedAt, err := globalBucketMetadataSys.Update(ctx, bucket, objectLockConfig, configData)
if err != nil {
writeErrorResponse(ctx, w, toAPIError(ctx, err), r.URL)
return
}
@@ -1382,6 +1430,7 @@ func (api objectAPIHandlers) PutBucketObjectLockConfigHandler(w http.ResponseWri
Type: madmin.SRBucketMetaTypeObjectLockConfig,
Bucket: bucket,
ObjectLockConfig: &cfgStr,
UpdatedAt: updatedAt,
}); err != nil {
writeErrorResponse(ctx, w, toAPIError(ctx, err), r.URL)
return
@@ -1416,7 +1465,7 @@ func (api objectAPIHandlers) GetBucketObjectLockConfigHandler(w http.ResponseWri
return
}
config, err := globalBucketMetadataSys.GetObjectLockConfig(bucket)
config, _, err := globalBucketMetadataSys.GetObjectLockConfig(bucket)
if err != nil {
writeErrorResponse(ctx, w, toAPIError(ctx, err), r.URL)
return
@@ -1449,7 +1498,7 @@ func (api objectAPIHandlers) PutBucketTaggingHandler(w http.ResponseWriter, r *h
}
// Check if bucket exists.
if _, err := objectAPI.GetBucketInfo(ctx, bucket); err != nil {
if _, err := objectAPI.GetBucketInfo(ctx, bucket, BucketOptions{}); err != nil {
writeErrorResponse(ctx, w, toAPIError(ctx, err), r.URL)
return
}
@@ -1473,7 +1522,8 @@ func (api objectAPIHandlers) PutBucketTaggingHandler(w http.ResponseWriter, r *h
return
}
if err = globalBucketMetadataSys.Update(ctx, bucket, bucketTaggingConfig, configData); err != nil {
updatedAt, err := globalBucketMetadataSys.Update(ctx, bucket, bucketTaggingConfig, configData)
if err != nil {
writeErrorResponse(ctx, w, toAPIError(ctx, err), r.URL)
return
}
@@ -1484,9 +1534,10 @@ func (api objectAPIHandlers) PutBucketTaggingHandler(w http.ResponseWriter, r *h
// errors.
cfgStr := base64.StdEncoding.EncodeToString(configData)
if err = globalSiteReplicationSys.BucketMetaHook(ctx, madmin.SRBucketMeta{
Type: madmin.SRBucketMetaTypeTags,
Bucket: bucket,
Tags: &cfgStr,
Type: madmin.SRBucketMetaTypeTags,
Bucket: bucket,
Tags: &cfgStr,
UpdatedAt: updatedAt,
}); err != nil {
writeErrorResponse(ctx, w, toAPIError(ctx, err), r.URL)
return
@@ -1518,7 +1569,7 @@ func (api objectAPIHandlers) GetBucketTaggingHandler(w http.ResponseWriter, r *h
return
}
config, err := globalBucketMetadataSys.GetTaggingConfig(bucket)
config, _, err := globalBucketMetadataSys.GetTaggingConfig(bucket)
if err != nil {
writeErrorResponse(ctx, w, toAPIError(ctx, err), r.URL)
return
@@ -1555,14 +1606,16 @@ func (api objectAPIHandlers) DeleteBucketTaggingHandler(w http.ResponseWriter, r
return
}
if err := globalBucketMetadataSys.Update(ctx, bucket, bucketTaggingConfig, nil); err != nil {
updatedAt, err := globalBucketMetadataSys.Update(ctx, bucket, bucketTaggingConfig, nil)
if err != nil {
writeErrorResponse(ctx, w, toAPIError(ctx, err), r.URL)
return
}
if err := globalSiteReplicationSys.BucketMetaHook(ctx, madmin.SRBucketMeta{
Type: madmin.SRBucketMetaTypeTags,
Bucket: bucket,
Type: madmin.SRBucketMetaTypeTags,
Bucket: bucket,
UpdatedAt: updatedAt,
}); err != nil {
writeErrorResponse(ctx, w, toAPIError(ctx, err), r.URL)
return
@@ -1571,375 +1624,3 @@ func (api objectAPIHandlers) DeleteBucketTaggingHandler(w http.ResponseWriter, r
// Write success response.
writeSuccessResponseHeadersOnly(w)
}
// PutBucketReplicationConfigHandler - PUT Bucket replication configuration.
// ----------
// Add a replication configuration on the specified bucket as specified in https://docs.aws.amazon.com/AmazonS3/latest/API/API_PutBucketReplication.html
func (api objectAPIHandlers) PutBucketReplicationConfigHandler(w http.ResponseWriter, r *http.Request) {
ctx := newContext(r, w, "PutBucketReplicationConfig")
defer logger.AuditLog(ctx, w, r, mustGetClaimsFromToken(r))
vars := mux.Vars(r)
bucket := vars["bucket"]
objectAPI := api.ObjectAPI()
if objectAPI == nil {
writeErrorResponse(ctx, w, errorCodes.ToAPIErr(ErrServerNotInitialized), r.URL)
return
}
if !globalIsErasure {
writeErrorResponseJSON(ctx, w, errorCodes.ToAPIErr(ErrNotImplemented), r.URL)
return
}
if s3Error := checkRequestAuthType(ctx, r, policy.PutReplicationConfigurationAction, bucket, ""); s3Error != ErrNone {
writeErrorResponse(ctx, w, errorCodes.ToAPIErr(s3Error), r.URL)
return
}
// Check if bucket exists.
if _, err := objectAPI.GetBucketInfo(ctx, bucket); err != nil {
writeErrorResponse(ctx, w, toAPIError(ctx, err), r.URL)
return
}
if globalSiteReplicationSys.isEnabled() {
writeErrorResponse(ctx, w, errorCodes.ToAPIErr(ErrReplicationDenyEditError), r.URL)
return
}
if versioned := globalBucketVersioningSys.Enabled(bucket); !versioned {
writeErrorResponse(ctx, w, errorCodes.ToAPIErr(ErrReplicationNeedsVersioningError), r.URL)
return
}
replicationConfig, err := replication.ParseConfig(io.LimitReader(r.Body, r.ContentLength))
if err != nil {
apiErr := errorCodes.ToAPIErr(ErrMalformedXML)
apiErr.Description = err.Error()
writeErrorResponse(ctx, w, apiErr, r.URL)
return
}
sameTarget, apiErr := validateReplicationDestination(ctx, bucket, replicationConfig)
if apiErr != noError {
writeErrorResponse(ctx, w, apiErr, r.URL)
return
}
// Validate the received bucket replication config
if err = replicationConfig.Validate(bucket, sameTarget); err != nil {
writeErrorResponse(ctx, w, toAPIError(ctx, err), r.URL)
return
}
configData, err := xml.Marshal(replicationConfig)
if err != nil {
writeErrorResponse(ctx, w, toAPIError(ctx, err), r.URL)
return
}
if err = globalBucketMetadataSys.Update(ctx, bucket, bucketReplicationConfig, configData); err != nil {
writeErrorResponse(ctx, w, toAPIError(ctx, err), r.URL)
return
}
// Write success response.
writeSuccessResponseHeadersOnly(w)
}
// GetBucketReplicationConfigHandler - GET Bucket replication configuration.
// ----------
// Gets the replication configuration for a bucket.
func (api objectAPIHandlers) GetBucketReplicationConfigHandler(w http.ResponseWriter, r *http.Request) {
ctx := newContext(r, w, "GetBucketReplicationConfig")
defer logger.AuditLog(ctx, w, r, mustGetClaimsFromToken(r))
vars := mux.Vars(r)
bucket := vars["bucket"]
objectAPI := api.ObjectAPI()
if objectAPI == nil {
writeErrorResponse(ctx, w, errorCodes.ToAPIErr(ErrServerNotInitialized), r.URL)
return
}
// check if user has permissions to perform this operation
if s3Error := checkRequestAuthType(ctx, r, policy.GetReplicationConfigurationAction, bucket, ""); s3Error != ErrNone {
writeErrorResponse(ctx, w, errorCodes.ToAPIErr(s3Error), r.URL)
return
}
// Check if bucket exists.
if _, err := objectAPI.GetBucketInfo(ctx, bucket); err != nil {
writeErrorResponse(ctx, w, toAPIError(ctx, err), r.URL)
return
}
config, err := globalBucketMetadataSys.GetReplicationConfig(ctx, bucket)
if err != nil {
writeErrorResponse(ctx, w, toAPIError(ctx, err), r.URL)
return
}
configData, err := xml.Marshal(config)
if err != nil {
writeErrorResponse(ctx, w, toAPIError(ctx, err), r.URL)
return
}
// Write success response.
writeSuccessResponseXML(w, configData)
}
// DeleteBucketReplicationConfigHandler - DELETE Bucket replication config.
// ----------
func (api objectAPIHandlers) DeleteBucketReplicationConfigHandler(w http.ResponseWriter, r *http.Request) {
ctx := newContext(r, w, "DeleteBucketReplicationConfig")
defer logger.AuditLog(ctx, w, r, mustGetClaimsFromToken(r))
vars := mux.Vars(r)
bucket := vars["bucket"]
objectAPI := api.ObjectAPI()
if objectAPI == nil {
writeErrorResponse(ctx, w, errorCodes.ToAPIErr(ErrServerNotInitialized), r.URL)
return
}
if s3Error := checkRequestAuthType(ctx, r, policy.PutReplicationConfigurationAction, bucket, ""); s3Error != ErrNone {
writeErrorResponse(ctx, w, errorCodes.ToAPIErr(s3Error), r.URL)
return
}
// Check if bucket exists.
if _, err := objectAPI.GetBucketInfo(ctx, bucket); err != nil {
writeErrorResponse(ctx, w, toAPIError(ctx, err), r.URL)
return
}
if globalSiteReplicationSys.isEnabled() {
writeErrorResponse(ctx, w, errorCodes.ToAPIErr(ErrReplicationDenyEditError), r.URL)
return
}
if err := globalBucketMetadataSys.Update(ctx, bucket, bucketReplicationConfig, nil); err != nil {
writeErrorResponse(ctx, w, toAPIError(ctx, err), r.URL)
return
}
// Write success response.
writeSuccessResponseHeadersOnly(w)
}
// GetBucketReplicationMetricsHandler - GET Bucket replication metrics.
// ----------
// Gets the replication metrics for a bucket.
func (api objectAPIHandlers) GetBucketReplicationMetricsHandler(w http.ResponseWriter, r *http.Request) {
ctx := newContext(r, w, "GetBucketReplicationMetrics")
defer logger.AuditLog(ctx, w, r, mustGetClaimsFromToken(r))
vars := mux.Vars(r)
bucket := vars["bucket"]
objectAPI := api.ObjectAPI()
if objectAPI == nil {
writeErrorResponse(ctx, w, errorCodes.ToAPIErr(ErrServerNotInitialized), r.URL)
return
}
// check if user has permissions to perform this operation
if s3Error := checkRequestAuthType(ctx, r, policy.GetReplicationConfigurationAction, bucket, ""); s3Error != ErrNone {
writeErrorResponse(ctx, w, errorCodes.ToAPIErr(s3Error), r.URL)
return
}
// Check if bucket exists.
if _, err := objectAPI.GetBucketInfo(ctx, bucket); err != nil {
writeErrorResponse(ctx, w, toAPIError(ctx, err), r.URL)
return
}
var usageInfo BucketUsageInfo
dataUsageInfo, err := loadDataUsageFromBackend(ctx, objectAPI)
if err == nil && !dataUsageInfo.LastUpdate.IsZero() {
usageInfo = dataUsageInfo.BucketsUsage[bucket]
}
w.Header().Set(xhttp.ContentType, string(mimeJSON))
enc := json.NewEncoder(w)
if err = enc.Encode(getLatestReplicationStats(bucket, usageInfo)); err != nil {
writeErrorResponseJSON(ctx, w, toAdminAPIErr(ctx, err), r.URL)
return
}
}
// ResetBucketReplicationStartHandler - starts a replication reset for all objects in a bucket which
// qualify for replication and re-sync the object(s) to target, provided ExistingObjectReplication is
// enabled for the qualifying rule. This API is a MinIO-only extension provided for situations where the
// remote target is entirely lost, and previously replicated objects need to be re-synced. If a resync is
// already in progress, it returns an error.
func (api objectAPIHandlers) ResetBucketReplicationStartHandler(w http.ResponseWriter, r *http.Request) {
ctx := newContext(r, w, "ResetBucketReplicationStart")
defer logger.AuditLog(ctx, w, r, mustGetClaimsFromToken(r))
vars := mux.Vars(r)
bucket := vars["bucket"]
durationStr := r.URL.Query().Get("older-than")
arn := r.URL.Query().Get("arn")
resetID := r.URL.Query().Get("reset-id")
if resetID == "" {
resetID = mustGetUUID()
}
var (
days time.Duration
err error
)
if durationStr != "" {
days, err = time.ParseDuration(durationStr)
if err != nil {
writeErrorResponse(ctx, w, toAPIError(ctx, InvalidArgument{
Bucket: bucket,
Err: fmt.Errorf("invalid query parameter older-than %s for %s : %w", durationStr, bucket, err),
}), r.URL)
}
}
resetBeforeDate := UTCNow().AddDate(0, 0, -1*int(days/24))
objectAPI := api.ObjectAPI()
if objectAPI == nil {
writeErrorResponse(ctx, w, errorCodes.ToAPIErr(ErrServerNotInitialized), r.URL)
return
}
if s3Error := checkRequestAuthType(ctx, r, policy.ResetBucketReplicationStateAction, bucket, ""); s3Error != ErrNone {
writeErrorResponse(ctx, w, errorCodes.ToAPIErr(s3Error), r.URL)
return
}
// Check if bucket exists.
if _, err := objectAPI.GetBucketInfo(ctx, bucket); err != nil {
writeErrorResponse(ctx, w, toAPIError(ctx, err), r.URL)
return
}
config, err := globalBucketMetadataSys.GetReplicationConfig(ctx, bucket)
if err != nil {
writeErrorResponse(ctx, w, toAPIError(ctx, err), r.URL)
return
}
if !config.HasActiveRules("", true) {
writeErrorResponse(ctx, w, errorCodes.ToAPIErr(ErrReplicationNoMatchingRuleError), r.URL)
return
}
tgtArns := config.FilterTargetArns(
replication.ObjectOpts{
OpType: replication.ResyncReplicationType,
TargetArn: arn,
})
if len(tgtArns) == 0 {
writeErrorResponseJSON(ctx, w, errorCodes.ToAPIErrWithErr(ErrBadRequest, InvalidArgument{
Bucket: bucket,
Err: fmt.Errorf("Remote target ARN %s missing or ineligible for replication resync", arn),
}), r.URL)
return
}
if len(tgtArns) > 1 && arn == "" {
writeErrorResponseJSON(ctx, w, errorCodes.ToAPIErrWithErr(ErrBadRequest, InvalidArgument{
Bucket: bucket,
Err: fmt.Errorf("ARN should be specified for replication reset"),
}), r.URL)
return
}
var rinfo ResyncTargetsInfo
target := globalBucketTargetSys.GetRemoteBucketTargetByArn(ctx, bucket, tgtArns[0])
target.ResetBeforeDate = UTCNow().AddDate(0, 0, -1*int(days/24))
target.ResetID = resetID
rinfo.Targets = append(rinfo.Targets, ResyncTarget{Arn: tgtArns[0], ResetID: target.ResetID})
if err = globalBucketTargetSys.SetTarget(ctx, bucket, &target, true); err != nil {
switch err.(type) {
case BucketRemoteConnectionErr:
writeErrorResponseJSON(ctx, w, errorCodes.ToAPIErrWithErr(ErrReplicationRemoteConnectionError, err), r.URL)
default:
writeErrorResponseJSON(ctx, w, toAPIError(ctx, err), r.URL)
}
}
if err := startReplicationResync(ctx, bucket, arn, resetID, resetBeforeDate, objectAPI); err != nil {
writeErrorResponseJSON(ctx, w, errorCodes.ToAPIErrWithErr(ErrBadRequest, InvalidArgument{
Bucket: bucket,
Err: err,
}), r.URL)
return
}
data, err := json.Marshal(rinfo)
if err != nil {
writeErrorResponseJSON(ctx, w, toAdminAPIErr(ctx, err), r.URL)
return
}
// Write success response.
writeSuccessResponseJSON(w, data)
}
// ResetBucketReplicationStatusHandler - returns the status of replication reset.
// This API is a MinIO-only extension
func (api objectAPIHandlers) ResetBucketReplicationStatusHandler(w http.ResponseWriter, r *http.Request) {
ctx := newContext(r, w, "ResetBucketReplicationStatus")
defer logger.AuditLog(ctx, w, r, mustGetClaimsFromToken(r))
vars := mux.Vars(r)
bucket := vars["bucket"]
arn := r.URL.Query().Get("arn")
var err error
objectAPI := api.ObjectAPI()
if objectAPI == nil {
writeErrorResponse(ctx, w, errorCodes.ToAPIErr(ErrServerNotInitialized), r.URL)
return
}
if s3Error := checkRequestAuthType(ctx, r, policy.ResetBucketReplicationStateAction, bucket, ""); s3Error != ErrNone {
writeErrorResponse(ctx, w, errorCodes.ToAPIErr(s3Error), r.URL)
return
}
// Check if bucket exists.
if _, err := objectAPI.GetBucketInfo(ctx, bucket); err != nil {
writeErrorResponse(ctx, w, toAPIError(ctx, err), r.URL)
return
}
if _, err := globalBucketMetadataSys.GetReplicationConfig(ctx, bucket); err != nil {
writeErrorResponse(ctx, w, toAPIError(ctx, err), r.URL)
return
}
globalReplicationPool.resyncState.RLock()
brs, ok := globalReplicationPool.resyncState.statusMap[bucket]
if !ok {
brs, err = loadBucketResyncMetadata(ctx, bucket, objectAPI)
if err != nil {
writeErrorResponse(ctx, w, errorCodes.ToAPIErrWithErr(ErrBadRequest, InvalidArgument{
Bucket: bucket,
Err: fmt.Errorf("No replication resync status available for %s", arn),
}), r.URL)
}
return
}
var rinfo ResyncTargetsInfo
for tarn, st := range brs.TargetsMap {
if arn != "" && tarn != arn {
continue
}
rinfo.Targets = append(rinfo.Targets, ResyncTarget{
Arn: tarn,
ResetID: st.ResyncID,
StartTime: st.StartTime,
EndTime: st.EndTime,
ResyncStatus: st.ResyncStatus.String(),
ReplicatedSize: st.ReplicatedSize,
ReplicatedCount: st.ReplicatedCount,
FailedSize: st.FailedSize,
FailedCount: st.FailedCount,
Bucket: st.Bucket,
Object: st.Object,
})
}
globalReplicationPool.resyncState.RUnlock()
data, err := json.Marshal(rinfo)
if err != nil {
writeErrorResponseJSON(ctx, w, toAdminAPIErr(ctx, err), r.URL)
return
}
// Write success response.
writeSuccessResponseJSON(w, data)
}

View File

@@ -36,7 +36,8 @@ func TestRemoveBucketHandler(t *testing.T) {
}
func testRemoveBucketHandler(obj ObjectLayer, instanceType, bucketName string, apiRouter http.Handler,
credentials auth.Credentials, t *testing.T) {
credentials auth.Credentials, t *testing.T,
) {
_, err := obj.PutObject(GlobalContext, bucketName, "test-object", mustGetPutObjReader(t, bytes.NewReader([]byte{}), int64(0), "", "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855"), ObjectOptions{})
// if object upload fails stop the test.
if err != nil {

View File

@@ -62,7 +62,7 @@ func (api objectAPIHandlers) PutBucketLifecycleHandler(w http.ResponseWriter, r
}
// Check if bucket exists.
if _, err := objAPI.GetBucketInfo(ctx, bucket); err != nil {
if _, err := objAPI.GetBucketInfo(ctx, bucket, BucketOptions{}); err != nil {
writeErrorResponse(ctx, w, toAPIError(ctx, err), r.URL)
return
}
@@ -91,7 +91,7 @@ func (api objectAPIHandlers) PutBucketLifecycleHandler(w http.ResponseWriter, r
return
}
if err = globalBucketMetadataSys.Update(ctx, bucket, bucketLifecycleConfig, configData); err != nil {
if _, err = globalBucketMetadataSys.Update(ctx, bucket, bucketLifecycleConfig, configData); err != nil {
writeErrorResponse(ctx, w, toAPIError(ctx, err), r.URL)
return
}
@@ -121,7 +121,7 @@ func (api objectAPIHandlers) GetBucketLifecycleHandler(w http.ResponseWriter, r
}
// Check if bucket exists.
if _, err := objAPI.GetBucketInfo(ctx, bucket); err != nil {
if _, err := objAPI.GetBucketInfo(ctx, bucket, BucketOptions{}); err != nil {
writeErrorResponse(ctx, w, toAPIError(ctx, err), r.URL)
return
}
@@ -163,12 +163,12 @@ func (api objectAPIHandlers) DeleteBucketLifecycleHandler(w http.ResponseWriter,
}
// Check if bucket exists.
if _, err := objAPI.GetBucketInfo(ctx, bucket); err != nil {
if _, err := objAPI.GetBucketInfo(ctx, bucket, BucketOptions{}); err != nil {
writeErrorResponse(ctx, w, toAPIError(ctx, err), r.URL)
return
}
if err := globalBucketMetadataSys.Update(ctx, bucket, bucketLifecycleConfig, nil); err != nil {
if _, err := globalBucketMetadataSys.Update(ctx, bucket, bucketLifecycleConfig, nil); err != nil {
writeErrorResponse(ctx, w, toAPIError(ctx, err), r.URL)
return
}

View File

@@ -34,7 +34,8 @@ func TestBucketLifecycleWrongCredentials(t *testing.T) {
// Test for authentication
func testBucketLifecycleHandlersWrongCredentials(obj ObjectLayer, instanceType, bucketName string, apiRouter http.Handler,
credentials auth.Credentials, t *testing.T) {
credentials auth.Credentials, t *testing.T,
) {
// test cases with sample input and expected output.
testCases := []struct {
method string

View File

@@ -247,11 +247,11 @@ func (t *transitionState) addLastDayStats(tier string, ts tierStats) {
t.lastDayStats[tier].addStats(ts)
}
func (t *transitionState) getDailyAllTierStats() dailyAllTierStats {
func (t *transitionState) getDailyAllTierStats() DailyAllTierStats {
t.lastDayMu.RLock()
defer t.lastDayMu.RUnlock()
res := make(dailyAllTierStats, len(t.lastDayStats))
res := make(DailyAllTierStats, len(t.lastDayStats))
for tier, st := range t.lastDayStats {
res[tier] = st.clone()
}
@@ -330,7 +330,7 @@ const (
// 2. when a transitioned object expires (based on an ILM rule).
func expireTransitionedObject(ctx context.Context, objectAPI ObjectLayer, oi *ObjectInfo, lcOpts lifecycle.ObjectOpts, action expireAction) error {
var opts ObjectOptions
opts.Versioned = globalBucketVersioningSys.Enabled(oi.Bucket)
opts.Versioned = globalBucketVersioningSys.PrefixEnabled(oi.Bucket, oi.Name)
opts.VersionID = lcOpts.VersionID
opts.Expiration = ExpirationOptions{Expire: true}
switch action {
@@ -415,8 +415,8 @@ func transitionObject(ctx context.Context, objectAPI ObjectLayer, oi ObjectInfo)
ETag: oi.ETag,
},
VersionID: oi.VersionID,
Versioned: globalBucketVersioningSys.Enabled(oi.Bucket),
VersionSuspended: globalBucketVersioningSys.Suspended(oi.Bucket),
Versioned: globalBucketVersioningSys.PrefixEnabled(oi.Bucket, oi.Name),
VersionSuspended: globalBucketVersioningSys.PrefixSuspended(oi.Bucket, oi.Name),
MTime: oi.ModTime,
}
return tier, objectAPI.TransitionObject(ctx, oi.Bucket, oi.Name, opts)
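
Versioning checks in these hunks move from bucket-wide `Enabled`/`Suspended` to prefix-aware `PrefixEnabled`/`PrefixSuspended`. A hypothetical sketch of what a prefix-scoped decision can look like; the excluded-prefixes rule here is an assumption for illustration, not the actual MinIO semantics:

```
package main

import (
	"fmt"
	"strings"
)

// Versioning is a simplified, hypothetical config: the bucket-level state
// plus prefixes that are carved out of versioning.
type Versioning struct {
	Status           string // "Enabled", "Suspended", or ""
	ExcludedPrefixes []string
}

func (v Versioning) PrefixEnabled(object string) bool {
	if v.Status != "Enabled" {
		return false
	}
	for _, p := range v.ExcludedPrefixes {
		if strings.HasPrefix(object, p) {
			return false // object falls under an excluded prefix
		}
	}
	return true
}

func main() {
	v := Versioning{Status: "Enabled", ExcludedPrefixes: []string{"tmp/"}}
	fmt.Println(v.PrefixEnabled("tmp/scratch.bin")) // false
	fmt.Println(v.PrefixEnabled("docs/report.pdf")) // true
}
```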
@@ -560,7 +560,7 @@ func (r *RestoreObjectRequest) validate(ctx context.Context, objAPI ObjectLayer)
}
// Check if bucket exists.
if !r.OutputLocation.IsEmpty() {
if _, err := objAPI.GetBucketInfo(ctx, r.OutputLocation.S3.BucketName); err != nil {
if _, err := objAPI.GetBucketInfo(ctx, r.OutputLocation.S3.BucketName, BucketOptions{}); err != nil {
return err
}
if r.OutputLocation.S3.Prefix == "" {
@@ -575,8 +575,8 @@ func (r *RestoreObjectRequest) validate(ctx context.Context, objAPI ObjectLayer)
// postRestoreOpts returns ObjectOptions with version-id from the POST restore object request for a given bucket and object.
func postRestoreOpts(ctx context.Context, r *http.Request, bucket, object string) (opts ObjectOptions, err error) {
versioned := globalBucketVersioningSys.Enabled(bucket)
versionSuspended := globalBucketVersioningSys.Suspended(bucket)
versioned := globalBucketVersioningSys.PrefixEnabled(bucket, object)
versionSuspended := globalBucketVersioningSys.PrefixSuspended(bucket, object)
vid := strings.TrimSpace(r.Form.Get(xhttp.VersionID))
if vid != "" && vid != nullVersionID {
_, err := uuid.Parse(vid)
@@ -627,8 +627,8 @@ func putRestoreOpts(bucket, object string, rreq *RestoreObjectRequest, objInfo O
meta[xhttp.AmzServerSideEncryption] = xhttp.AmzEncryptionAES
}
return ObjectOptions{
Versioned: globalBucketVersioningSys.Enabled(bucket),
VersionSuspended: globalBucketVersioningSys.Suspended(bucket),
Versioned: globalBucketVersioningSys.PrefixEnabled(bucket, object),
VersionSuspended: globalBucketVersioningSys.PrefixSuspended(bucket, object),
UserDefined: meta,
}
}
@@ -640,8 +640,8 @@ func putRestoreOpts(bucket, object string, rreq *RestoreObjectRequest, objInfo O
}
return ObjectOptions{
Versioned: globalBucketVersioningSys.Enabled(bucket),
VersionSuspended: globalBucketVersioningSys.Suspended(bucket),
Versioned: globalBucketVersioningSys.PrefixEnabled(bucket, object),
VersionSuspended: globalBucketVersioningSys.PrefixSuspended(bucket, object),
UserDefined: meta,
VersionID: objInfo.VersionID,
MTime: objInfo.ModTime,

View File

@@ -203,7 +203,7 @@ func TestObjectIsRemote(t *testing.T) {
if got := fi.IsRemote(); got != tc.remote {
t.Fatalf("Test %d.a: expected %v got %v", i+1, tc.remote, got)
}
oi := fi.ToObjectInfo("bucket", "object")
oi := fi.ToObjectInfo("bucket", "object", false)
if got := oi.IsRemote(); got != tc.remote {
t.Fatalf("Test %d.b: expected %v got %v", i+1, tc.remote, got)
}

View File

@@ -26,26 +26,9 @@ import (
"github.com/gorilla/mux"
"github.com/minio/minio/internal/logger"
"github.com/minio/minio/internal/sync/errgroup"
"github.com/minio/pkg/bucket/policy"
)
func concurrentDecryptETag(ctx context.Context, objects []ObjectInfo) {
g := errgroup.WithNErrs(len(objects)).WithConcurrency(500)
for index := range objects {
index := index
g.Go(func() error {
size, err := objects[index].GetActualSize()
if err == nil {
objects[index].Size = size
}
objects[index].ETag = objects[index].GetActualETag(nil)
return nil
}, index)
}
g.Wait()
}
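
The removed `concurrentDecryptETag` capped fan-out at 500 goroutines via MinIO's internal errgroup before the work was handed to `DecryptETags`. A sketch of the same bounded-concurrency pattern, using `golang.org/x/sync/errgroup` as a stand-in for the internal package:

```
package main

import (
	"fmt"

	"golang.org/x/sync/errgroup"
)

type objectInfo struct {
	Name string
	ETag string
}

func main() {
	objects := make([]objectInfo, 1000)
	var g errgroup.Group
	g.SetLimit(500) // at most 500 goroutines in flight, as in the removed helper
	for i := range objects {
		i := i // capture the index, not the loop variable
		g.Go(func() error {
			objects[i].ETag = fmt.Sprintf("etag-%d", i) // stand-in for decryption
			return nil
		})
	}
	if err := g.Wait(); err != nil {
		fmt.Println("decrypt failed:", err)
	}
}
```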
// Validate all the ListObjects query arguments, returns an APIErrorCode
// if one of the args do not meet the required conditions.
// Special conditions required by MinIO server are as below
@@ -116,7 +99,10 @@ func (api objectAPIHandlers) ListObjectVersionsHandler(w http.ResponseWriter, r
return
}
concurrentDecryptETag(ctx, listObjectVersionsInfo.Objects)
if err = DecryptETags(ctx, GlobalKMS, listObjectVersionsInfo.Objects); err != nil {
writeErrorResponse(ctx, w, toAPIError(ctx, err), r.URL)
return
}
response := generateListVersionsResponse(bucket, prefix, marker, versionIDMarker, delimiter, encodingType, maxkeys, listObjectVersionsInfo)
@@ -178,7 +164,10 @@ func (api objectAPIHandlers) ListObjectsV2MHandler(w http.ResponseWriter, r *htt
return
}
concurrentDecryptETag(ctx, listObjectsV2Info.Objects)
if err = DecryptETags(ctx, GlobalKMS, listObjectsV2Info.Objects); err != nil {
writeErrorResponse(ctx, w, toAPIError(ctx, err), r.URL)
return
}
// The next continuation token has id@node_index format to optimize paginated listing
nextContinuationToken := listObjectsV2Info.NextContinuationToken
@@ -253,7 +242,10 @@ func (api objectAPIHandlers) ListObjectsV2Handler(w http.ResponseWriter, r *http
return
}
concurrentDecryptETag(ctx, listObjectsV2Info.Objects)
if err = DecryptETags(ctx, GlobalKMS, listObjectsV2Info.Objects); err != nil {
writeErrorResponse(ctx, w, toAPIError(ctx, err), r.URL)
return
}
response := generateListObjectsV2Response(bucket, prefix, token, listObjectsV2Info.NextContinuationToken, startAfter,
delimiter, encodingType, fetchOwner, listObjectsV2Info.IsTruncated,
@@ -350,7 +342,10 @@ func (api objectAPIHandlers) ListObjectsV1Handler(w http.ResponseWriter, r *http
return
}
concurrentDecryptETag(ctx, listObjectsInfo.Objects)
if err = DecryptETags(ctx, GlobalKMS, listObjectsInfo.Objects); err != nil {
writeErrorResponse(ctx, w, toAPIError(ctx, err), r.URL)
return
}
response := generateListObjectsV1Response(bucket, prefix, marker, delimiter, encodingType, maxKeys, listObjectsInfo)

View File

@@ -23,6 +23,7 @@ import (
"errors"
"fmt"
"sync"
"time"
"github.com/minio/madmin-go"
"github.com/minio/minio-go/v7/pkg/tags"
@@ -74,28 +75,28 @@ func (sys *BucketMetadataSys) Set(bucket string, meta BucketMetadata) {
// Update updates bucket metadata for the specified config file.
// The configData should not be modified after being sent here.
func (sys *BucketMetadataSys) Update(ctx context.Context, bucket string, configFile string, configData []byte) error {
func (sys *BucketMetadataSys) Update(ctx context.Context, bucket string, configFile string, configData []byte) (updatedAt time.Time, err error) {
objAPI := newObjectLayerFn()
if objAPI == nil {
return errServerNotInitialized
return updatedAt, errServerNotInitialized
}
if globalIsGateway && globalGatewayName != NASBackendGateway {
if configFile == bucketPolicyConfig {
if configData == nil {
return objAPI.DeleteBucketPolicy(ctx, bucket)
return updatedAt, objAPI.DeleteBucketPolicy(ctx, bucket)
}
config, err := policy.ParseConfig(bytes.NewReader(configData), bucket)
if err != nil {
return err
return updatedAt, err
}
return objAPI.SetBucketPolicy(ctx, bucket, config)
return updatedAt, objAPI.SetBucketPolicy(ctx, bucket, config)
}
return NotImplemented{}
return updatedAt, NotImplemented{}
}
if bucket == minioMetaBucket {
return errInvalidArgument
return updatedAt, errInvalidArgument
}
meta, err := loadBucketMetadata(ctx, objAPI, bucket)
@@ -104,58 +105,56 @@ func (sys *BucketMetadataSys) Update(ctx context.Context, bucket string, configF
// Only single drive mode needs this fallback.
meta = newBucketMetadata(bucket)
} else {
return err
return updatedAt, err
}
}
updatedAt = UTCNow()
switch configFile {
case bucketPolicyConfig:
meta.PolicyConfigJSON = configData
meta.PolicyConfigUpdatedAt = updatedAt
case bucketNotificationConfig:
meta.NotificationConfigXML = configData
case bucketLifecycleConfig:
meta.LifecycleConfigXML = configData
case bucketSSEConfig:
meta.EncryptionConfigXML = configData
meta.EncryptionConfigUpdatedAt = updatedAt
case bucketTaggingConfig:
meta.TaggingConfigXML = configData
meta.TaggingConfigUpdatedAt = updatedAt
case bucketQuotaConfigFile:
meta.QuotaConfigJSON = configData
meta.QuotaConfigUpdatedAt = updatedAt
case objectLockConfig:
if !globalIsErasure && !globalIsDistErasure {
return NotImplemented{}
}
meta.ObjectLockConfigXML = configData
meta.ObjectLockConfigUpdatedAt = updatedAt
case bucketVersioningConfig:
if !globalIsErasure && !globalIsDistErasure {
return NotImplemented{}
}
meta.VersioningConfigXML = configData
meta.VersioningConfigUpdatedAt = updatedAt
case bucketReplicationConfig:
if !globalIsErasure && !globalIsDistErasure {
return NotImplemented{}
}
meta.ReplicationConfigXML = configData
meta.ReplicationConfigUpdatedAt = updatedAt
case bucketTargetsFile:
meta.BucketTargetsConfigJSON, meta.BucketTargetsConfigMetaJSON, err = encryptBucketMetadata(meta.Name, configData, kms.Context{
meta.BucketTargetsConfigJSON, meta.BucketTargetsConfigMetaJSON, err = encryptBucketMetadata(ctx, meta.Name, configData, kms.Context{
bucket: meta.Name,
bucketTargetsFile: bucketTargetsFile,
})
if err != nil {
return fmt.Errorf("Error encrypting bucket target metadata %w", err)
return updatedAt, fmt.Errorf("Error encrypting bucket target metadata %w", err)
}
default:
return fmt.Errorf("Unknown bucket %s metadata update requested %s", bucket, configFile)
return updatedAt, fmt.Errorf("Unknown bucket %s metadata update requested %s", bucket, configFile)
}
if err := meta.Save(ctx, objAPI); err != nil {
return err
return updatedAt, err
}
sys.Set(bucket, meta)
globalNotificationSys.LoadBucketMetadata(bgContext(ctx), bucket) // Do not use caller context here
return nil
return updatedAt, nil
}
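Update now stamps and returns updatedAt so callers (the policy handlers further down, for instance) can forward it to the site-replication hook. A standalone sketch of the last-writer-wins reconciliation this presumably enables (the types and site names are illustrative, not code from this PR):

```
package main

import (
	"fmt"
	"time"
)

type metaWrite struct {
	site      string
	updatedAt time.Time
}

// latest picks the winning metadata write under last-writer-wins.
func latest(writes []metaWrite) metaWrite {
	newest := writes[0]
	for _, w := range writes[1:] {
		if w.updatedAt.After(newest.updatedAt) {
			newest = w
		}
	}
	return newest
}

func main() {
	now := time.Now().UTC()
	writes := []metaWrite{
		{site: "site-a", updatedAt: now.Add(-2 * time.Second)},
		{site: "site-b", updatedAt: now},
	}
	fmt.Println("winner:", latest(writes).site) // winner: site-b
}
```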
// Get metadata for a bucket.
@@ -186,44 +185,47 @@ func (sys *BucketMetadataSys) Get(bucket string) (BucketMetadata, error) {
// GetVersioningConfig returns configured versioning config
// The returned object may not be modified.
func (sys *BucketMetadataSys) GetVersioningConfig(bucket string) (*versioning.Versioning, error) {
func (sys *BucketMetadataSys) GetVersioningConfig(bucket string) (*versioning.Versioning, time.Time, error) {
meta, err := sys.GetConfig(GlobalContext, bucket)
if err != nil {
return nil, err
if errors.Is(err, errConfigNotFound) {
return &versioning.Versioning{XMLNS: "http://s3.amazonaws.com/doc/2006-03-01/"}, meta.Created, nil
}
return &versioning.Versioning{XMLNS: "http://s3.amazonaws.com/doc/2006-03-01/"}, time.Time{}, err
}
return meta.versioningConfig, nil
return meta.versioningConfig, meta.VersioningConfigUpdatedAt, nil
}
// GetTaggingConfig returns configured tagging config
// The returned object may not be modified.
func (sys *BucketMetadataSys) GetTaggingConfig(bucket string) (*tags.Tags, error) {
func (sys *BucketMetadataSys) GetTaggingConfig(bucket string) (*tags.Tags, time.Time, error) {
meta, err := sys.GetConfig(GlobalContext, bucket)
if err != nil {
if errors.Is(err, errConfigNotFound) {
return nil, BucketTaggingNotFound{Bucket: bucket}
return nil, time.Time{}, BucketTaggingNotFound{Bucket: bucket}
}
return nil, err
return nil, time.Time{}, err
}
if meta.taggingConfig == nil {
return nil, BucketTaggingNotFound{Bucket: bucket}
return nil, time.Time{}, BucketTaggingNotFound{Bucket: bucket}
}
return meta.taggingConfig, nil
return meta.taggingConfig, meta.TaggingConfigUpdatedAt, nil
}
// GetObjectLockConfig returns configured object lock config
// The returned object may not be modified.
func (sys *BucketMetadataSys) GetObjectLockConfig(bucket string) (*objectlock.Config, error) {
func (sys *BucketMetadataSys) GetObjectLockConfig(bucket string) (*objectlock.Config, time.Time, error) {
meta, err := sys.GetConfig(GlobalContext, bucket)
if err != nil {
if errors.Is(err, errConfigNotFound) {
return nil, BucketObjectLockConfigNotFound{Bucket: bucket}
return nil, time.Time{}, BucketObjectLockConfigNotFound{Bucket: bucket}
}
return nil, err
return nil, time.Time{}, err
}
if meta.objectLockConfig == nil {
return nil, BucketObjectLockConfigNotFound{Bucket: bucket}
return nil, time.Time{}, BucketObjectLockConfigNotFound{Bucket: bucket}
}
return meta.objectLockConfig, nil
return meta.objectLockConfig, meta.ObjectLockConfigUpdatedAt, nil
}
// GetLifecycleConfig returns configured lifecycle config
@@ -283,69 +285,82 @@ func (sys *BucketMetadataSys) GetNotificationConfig(bucket string) (*event.Confi
// GetSSEConfig returns configured SSE config
// The returned object may not be modified.
func (sys *BucketMetadataSys) GetSSEConfig(bucket string) (*bucketsse.BucketSSEConfig, error) {
func (sys *BucketMetadataSys) GetSSEConfig(bucket string) (*bucketsse.BucketSSEConfig, time.Time, error) {
meta, err := sys.GetConfig(GlobalContext, bucket)
if err != nil {
if errors.Is(err, errConfigNotFound) {
return nil, BucketSSEConfigNotFound{Bucket: bucket}
return nil, time.Time{}, BucketSSEConfigNotFound{Bucket: bucket}
}
return nil, err
return nil, time.Time{}, err
}
if meta.sseConfig == nil {
return nil, BucketSSEConfigNotFound{Bucket: bucket}
return nil, time.Time{}, BucketSSEConfigNotFound{Bucket: bucket}
}
return meta.sseConfig, nil
return meta.sseConfig, meta.EncryptionConfigUpdatedAt, nil
}
// CreatedAt returns the time of creation of bucket
func (sys *BucketMetadataSys) CreatedAt(bucket string) (time.Time, error) {
meta, err := sys.GetConfig(GlobalContext, bucket)
if err != nil {
return time.Time{}, err
}
return meta.Created.UTC(), nil
}
// GetPolicyConfig returns configured bucket policy
// The returned object may not be modified.
func (sys *BucketMetadataSys) GetPolicyConfig(bucket string) (*policy.Policy, error) {
func (sys *BucketMetadataSys) GetPolicyConfig(bucket string) (*policy.Policy, time.Time, error) {
if globalIsGateway {
objAPI := newObjectLayerFn()
if objAPI == nil {
return nil, errServerNotInitialized
return nil, time.Time{}, errServerNotInitialized
}
return objAPI.GetBucketPolicy(GlobalContext, bucket)
p, err := objAPI.GetBucketPolicy(GlobalContext, bucket)
return p, UTCNow(), err
}
meta, err := sys.GetConfig(GlobalContext, bucket)
if err != nil {
if errors.Is(err, errConfigNotFound) {
return nil, BucketPolicyNotFound{Bucket: bucket}
return nil, time.Time{}, BucketPolicyNotFound{Bucket: bucket}
}
return nil, err
return nil, time.Time{}, err
}
if meta.policyConfig == nil {
return nil, BucketPolicyNotFound{Bucket: bucket}
return nil, time.Time{}, BucketPolicyNotFound{Bucket: bucket}
}
return meta.policyConfig, nil
return meta.policyConfig, meta.PolicyConfigUpdatedAt, nil
}
// GetQuotaConfig returns configured bucket quota
// The returned object may not be modified.
func (sys *BucketMetadataSys) GetQuotaConfig(ctx context.Context, bucket string) (*madmin.BucketQuota, error) {
func (sys *BucketMetadataSys) GetQuotaConfig(ctx context.Context, bucket string) (*madmin.BucketQuota, time.Time, error) {
meta, err := sys.GetConfig(ctx, bucket)
if err != nil {
return nil, err
if errors.Is(err, errConfigNotFound) {
return nil, time.Time{}, BucketQuotaConfigNotFound{Bucket: bucket}
}
return nil, time.Time{}, err
}
return meta.quotaConfig, nil
return meta.quotaConfig, meta.QuotaConfigUpdatedAt, nil
}
// GetReplicationConfig returns configured bucket replication config
// The returned object may not be modified.
func (sys *BucketMetadataSys) GetReplicationConfig(ctx context.Context, bucket string) (*replication.Config, error) {
func (sys *BucketMetadataSys) GetReplicationConfig(ctx context.Context, bucket string) (*replication.Config, time.Time, error) {
meta, err := sys.GetConfig(ctx, bucket)
if err != nil {
if errors.Is(err, errConfigNotFound) {
return nil, BucketReplicationConfigNotFound{Bucket: bucket}
return nil, time.Time{}, BucketReplicationConfigNotFound{Bucket: bucket}
}
return nil, err
return nil, time.Time{}, err
}
if meta.replicationConfig == nil {
return nil, BucketReplicationConfigNotFound{Bucket: bucket}
return nil, time.Time{}, BucketReplicationConfigNotFound{Bucket: bucket}
}
return meta.replicationConfig, nil
return meta.replicationConfig, meta.ReplicationConfigUpdatedAt, nil
}
// GetBucketTargetsConfig returns configured bucket targets for this bucket
@@ -353,6 +368,9 @@ func (sys *BucketMetadataSys) GetReplicationConfig(ctx context.Context, bucket s
func (sys *BucketMetadataSys) GetBucketTargetsConfig(bucket string) (*madmin.BucketTargets, error) {
meta, err := sys.GetConfig(GlobalContext, bucket)
if err != nil {
if errors.Is(err, errConfigNotFound) {
return nil, BucketRemoteTargetNotFound{Bucket: bucket}
}
return nil, err
}
if meta.bucketTargetConfig == nil {
@@ -361,20 +379,6 @@ func (sys *BucketMetadataSys) GetBucketTargetsConfig(bucket string) (*madmin.Buc
return meta.bucketTargetConfig, nil
}
// GetBucketTarget returns the target for the bucket and arn.
func (sys *BucketMetadataSys) GetBucketTarget(bucket string, arn string) (madmin.BucketTarget, error) {
targets, err := sys.GetBucketTargetsConfig(bucket)
if err != nil {
return madmin.BucketTarget{}, err
}
for _, t := range targets.Targets {
if t.Arn == arn {
return t, nil
}
}
return madmin.BucketTarget{}, errConfigNotFound
}
// GetConfig returns a specific configuration from the bucket metadata.
// The returned object may not be modified.
func (sys *BucketMetadataSys) GetConfig(ctx context.Context, bucket string) (BucketMetadata, error) {


@@ -81,6 +81,13 @@ type BucketMetadata struct {
ReplicationConfigXML []byte
BucketTargetsConfigJSON []byte
BucketTargetsConfigMetaJSON []byte
PolicyConfigUpdatedAt time.Time
ObjectLockConfigUpdatedAt time.Time
EncryptionConfigUpdatedAt time.Time
TaggingConfigUpdatedAt time.Time
QuotaConfigUpdatedAt time.Time
ReplicationConfigUpdatedAt time.Time
VersioningConfigUpdatedAt time.Time
// Unexported fields. Must be updated atomically.
policyConfig *policy.Policy
@@ -99,8 +106,7 @@ type BucketMetadata struct {
// newBucketMetadata creates BucketMetadata with the supplied name and Created to Now.
func newBucketMetadata(name string) BucketMetadata {
return BucketMetadata{
Name: name,
Created: UTCNow(),
Name: name,
notificationConfig: &event.Config{
XMLNS: "http://s3.amazonaws.com/doc/2006-03-01/",
},
@@ -113,6 +119,17 @@ func newBucketMetadata(name string) BucketMetadata {
}
}
// SetCreatedAt preserves the CreatedAt time for a bucket across sites in site replication. It defaults to the
// creation time of the bucket on this cluster in all other cases.
func (b *BucketMetadata) SetCreatedAt(createdAt time.Time) {
if b.Created.IsZero() {
b.Created = UTCNow()
}
if !createdAt.IsZero() {
b.Created = createdAt.UTC()
}
}
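The two branches matter in sequence: a fresh bucket gets the local clock, while site replication can overwrite it with the peer's original creation time. A standalone sketch mirroring that logic (bucketMeta is a stand-in type, not MinIO code):

```
package main

import (
	"fmt"
	"time"
)

type bucketMeta struct{ Created time.Time }

func (b *bucketMeta) SetCreatedAt(createdAt time.Time) {
	if b.Created.IsZero() {
		b.Created = time.Now().UTC() // fresh bucket: stamp current time
	}
	if !createdAt.IsZero() {
		b.Created = createdAt.UTC() // replicated bucket: keep peer's time
	}
}

func main() {
	var local bucketMeta
	local.SetCreatedAt(time.Time{})
	var replica bucketMeta
	peer := time.Date(2022, 8, 1, 0, 0, 0, 0, time.UTC)
	replica.SetCreatedAt(peer)
	fmt.Println(local.Created.IsZero(), replica.Created.Equal(peer)) // false true
}
```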
// Load - loads the metadata of bucket by name from ObjectLayer api.
// If an error is returned, the returned metadata will be default initialized.
func (b *BucketMetadata) Load(ctx context.Context, api ObjectLayer, name string) error {
@@ -152,13 +169,20 @@ func loadBucketMetadata(ctx context.Context, objectAPI ObjectLayer, bucket strin
if err != nil && !errors.Is(err, errConfigNotFound) {
return b, err
}
if err == nil {
b.defaultTimestamps()
}
// Old bucket without bucket metadata. Hence we migrate existing settings.
if err := b.convertLegacyConfigs(ctx, objectAPI); err != nil {
return b, err
}
// migrate unencrypted remote targets
return b, b.migrateTargetConfig(ctx, objectAPI)
if err := b.migrateTargetConfig(ctx, objectAPI); err != nil {
return b, err
}
return b, nil
}
// parseAllConfigs will parse all configs and populate the private fields.
@@ -332,6 +356,7 @@ func (b *BucketMetadata) convertLegacyConfigs(ctx context.Context, objectAPI Obj
b.BucketTargetsConfigJSON = configData
}
}
b.defaultTimestamps()
if err := b.Save(ctx, objectAPI); err != nil {
return err
@@ -347,6 +372,37 @@ func (b *BucketMetadata) convertLegacyConfigs(ctx context.Context, objectAPI Obj
return nil
}
// defaultTimestamps defaults unset config timestamps to the metadata Created timestamp.
func (b *BucketMetadata) defaultTimestamps() {
if b.PolicyConfigUpdatedAt.IsZero() {
b.PolicyConfigUpdatedAt = b.Created
}
if b.EncryptionConfigUpdatedAt.IsZero() {
b.EncryptionConfigUpdatedAt = b.Created
}
if b.TaggingConfigUpdatedAt.IsZero() {
b.TaggingConfigUpdatedAt = b.Created
}
if b.ObjectLockConfigUpdatedAt.IsZero() {
b.ObjectLockConfigUpdatedAt = b.Created
}
if b.QuotaConfigUpdatedAt.IsZero() {
b.QuotaConfigUpdatedAt = b.Created
}
if b.ReplicationConfigUpdatedAt.IsZero() {
b.ReplicationConfigUpdatedAt = b.Created
}
if b.VersioningConfigUpdatedAt.IsZero() {
b.VersioningConfigUpdatedAt = b.Created
}
}
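For buckets whose stored metadata predates these fields, every *UpdatedAt would otherwise load as the zero time; pinning them to Created gives later timestamp comparisons a deterministic baseline. A standalone check of that rule (illustrative values):

```
package main

import (
	"fmt"
	"time"
)

func main() {
	created := time.Date(2021, 5, 1, 0, 0, 0, 0, time.UTC)
	var policyUpdatedAt time.Time // zero: field absent in old metadata
	if policyUpdatedAt.IsZero() {
		policyUpdatedAt = created // defaultTimestamps' rule
	}
	fmt.Println(policyUpdatedAt.Equal(created)) // true
}
```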
// Save config to supplied ObjectLayer api.
func (b *BucketMetadata) Save(ctx context.Context, api ObjectLayer) error {
if err := b.parseAllConfigs(ctx, api); err != nil {
@@ -394,7 +450,7 @@ func (b *BucketMetadata) migrateTargetConfig(ctx context.Context, objectAPI Obje
return nil
}
encBytes, metaBytes, err := encryptBucketMetadata(b.Name, b.BucketTargetsConfigJSON, kms.Context{b.Name: b.Name, bucketTargetsFile: bucketTargetsFile})
encBytes, metaBytes, err := encryptBucketMetadata(ctx, b.Name, b.BucketTargetsConfigJSON, kms.Context{b.Name: b.Name, bucketTargetsFile: bucketTargetsFile})
if err != nil {
return err
}
@@ -405,14 +461,14 @@ func (b *BucketMetadata) migrateTargetConfig(ctx context.Context, objectAPI Obje
}
// encrypt bucket metadata if kms is configured.
func encryptBucketMetadata(bucket string, input []byte, kmsContext kms.Context) (output, metabytes []byte, err error) {
func encryptBucketMetadata(ctx context.Context, bucket string, input []byte, kmsContext kms.Context) (output, metabytes []byte, err error) {
if GlobalKMS == nil {
output = input
return
}
metadata := make(map[string]string)
key, err := GlobalKMS.GenerateKey("", kmsContext)
key, err := GlobalKMS.GenerateKey(ctx, "", kmsContext)
if err != nil {
return
}
@@ -421,7 +477,7 @@ func encryptBucketMetadata(bucket string, input []byte, kmsContext kms.Context)
objectKey := crypto.GenerateKey(key.Plaintext, rand.Reader)
sealedKey := objectKey.Seal(key.Plaintext, crypto.GenerateIV(rand.Reader), crypto.S3.String(), bucket, "")
crypto.S3.CreateMetadata(metadata, key.KeyID, key.Ciphertext, sealedKey)
_, err = sio.Encrypt(outbuf, bytes.NewBuffer(input), sio.Config{Key: objectKey[:], MinVersion: sio.Version20, CipherSuites: fips.CipherSuitesDARE()})
_, err = sio.Encrypt(outbuf, bytes.NewBuffer(input), sio.Config{Key: objectKey[:], MinVersion: sio.Version20, CipherSuites: fips.DARECiphers()})
if err != nil {
return output, metabytes, err
}
@@ -451,6 +507,6 @@ func decryptBucketMetadata(input []byte, bucket string, meta map[string]string,
}
outbuf := bytes.NewBuffer(nil)
_, err = sio.Decrypt(outbuf, bytes.NewBuffer(input), sio.Config{Key: objectKey[:], MinVersion: sio.Version20, CipherSuites: fips.CipherSuitesDARE()})
_, err = sio.Decrypt(outbuf, bytes.NewBuffer(input), sio.Config{Key: objectKey[:], MinVersion: sio.Version20, CipherSuites: fips.DARECiphers()})
return outbuf.Bytes(), err
}


@@ -108,6 +108,48 @@ func (z *BucketMetadata) DecodeMsg(dc *msgp.Reader) (err error) {
err = msgp.WrapError(err, "BucketTargetsConfigMetaJSON")
return
}
case "PolicyConfigUpdatedAt":
z.PolicyConfigUpdatedAt, err = dc.ReadTime()
if err != nil {
err = msgp.WrapError(err, "PolicyConfigUpdatedAt")
return
}
case "ObjectLockConfigUpdatedAt":
z.ObjectLockConfigUpdatedAt, err = dc.ReadTime()
if err != nil {
err = msgp.WrapError(err, "ObjectLockConfigUpdatedAt")
return
}
case "EncryptionConfigUpdatedAt":
z.EncryptionConfigUpdatedAt, err = dc.ReadTime()
if err != nil {
err = msgp.WrapError(err, "EncryptionConfigUpdatedAt")
return
}
case "TaggingConfigUpdatedAt":
z.TaggingConfigUpdatedAt, err = dc.ReadTime()
if err != nil {
err = msgp.WrapError(err, "TaggingConfigUpdatedAt")
return
}
case "QuotaConfigUpdatedAt":
z.QuotaConfigUpdatedAt, err = dc.ReadTime()
if err != nil {
err = msgp.WrapError(err, "QuotaConfigUpdatedAt")
return
}
case "ReplicationConfigUpdatedAt":
z.ReplicationConfigUpdatedAt, err = dc.ReadTime()
if err != nil {
err = msgp.WrapError(err, "ReplicationConfigUpdatedAt")
return
}
case "VersioningConfigUpdatedAt":
z.VersioningConfigUpdatedAt, err = dc.ReadTime()
if err != nil {
err = msgp.WrapError(err, "VersioningConfigUpdatedAt")
return
}
default:
err = dc.Skip()
if err != nil {
@@ -121,9 +163,9 @@ func (z *BucketMetadata) DecodeMsg(dc *msgp.Reader) (err error) {
// EncodeMsg implements msgp.Encodable
func (z *BucketMetadata) EncodeMsg(en *msgp.Writer) (err error) {
// map header, size 14
// map header, size 21
// write "Name"
err = en.Append(0x8e, 0xa4, 0x4e, 0x61, 0x6d, 0x65)
err = en.Append(0xde, 0x0, 0x15, 0xa4, 0x4e, 0x61, 0x6d, 0x65)
if err != nil {
return
}
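Why the leading bytes change: MessagePack encodes a map of up to 15 pairs as a single fixmap byte (0x80 | n, hence 0x8e for 14 fields), while 21 fields need the three-byte map16 form, 0xde followed by a big-endian uint16 count, which is exactly 0xde 0x00 0x15. A standalone sketch of both encodings (derived from the MessagePack spec, not from MinIO code):

```
package main

import (
	"encoding/binary"
	"fmt"
)

// mapHeader encodes a MessagePack map header (map32 omitted for brevity).
func mapHeader(n int) []byte {
	if n <= 15 {
		return []byte{0x80 | byte(n)} // fixmap: one byte
	}
	hdr := []byte{0xde, 0, 0} // map16: marker + big-endian uint16 count
	binary.BigEndian.PutUint16(hdr[1:], uint16(n))
	return hdr
}

func main() {
	fmt.Printf("% x\n", mapHeader(14)) // 8e
	fmt.Printf("% x\n", mapHeader(21)) // de 00 15
}
```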
@@ -262,15 +304,85 @@ func (z *BucketMetadata) EncodeMsg(en *msgp.Writer) (err error) {
err = msgp.WrapError(err, "BucketTargetsConfigMetaJSON")
return
}
// write "PolicyConfigUpdatedAt"
err = en.Append(0xb5, 0x50, 0x6f, 0x6c, 0x69, 0x63, 0x79, 0x43, 0x6f, 0x6e, 0x66, 0x69, 0x67, 0x55, 0x70, 0x64, 0x61, 0x74, 0x65, 0x64, 0x41, 0x74)
if err != nil {
return
}
err = en.WriteTime(z.PolicyConfigUpdatedAt)
if err != nil {
err = msgp.WrapError(err, "PolicyConfigUpdatedAt")
return
}
// write "ObjectLockConfigUpdatedAt"
err = en.Append(0xb9, 0x4f, 0x62, 0x6a, 0x65, 0x63, 0x74, 0x4c, 0x6f, 0x63, 0x6b, 0x43, 0x6f, 0x6e, 0x66, 0x69, 0x67, 0x55, 0x70, 0x64, 0x61, 0x74, 0x65, 0x64, 0x41, 0x74)
if err != nil {
return
}
err = en.WriteTime(z.ObjectLockConfigUpdatedAt)
if err != nil {
err = msgp.WrapError(err, "ObjectLockConfigUpdatedAt")
return
}
// write "EncryptionConfigUpdatedAt"
err = en.Append(0xb9, 0x45, 0x6e, 0x63, 0x72, 0x79, 0x70, 0x74, 0x69, 0x6f, 0x6e, 0x43, 0x6f, 0x6e, 0x66, 0x69, 0x67, 0x55, 0x70, 0x64, 0x61, 0x74, 0x65, 0x64, 0x41, 0x74)
if err != nil {
return
}
err = en.WriteTime(z.EncryptionConfigUpdatedAt)
if err != nil {
err = msgp.WrapError(err, "EncryptionConfigUpdatedAt")
return
}
// write "TaggingConfigUpdatedAt"
err = en.Append(0xb6, 0x54, 0x61, 0x67, 0x67, 0x69, 0x6e, 0x67, 0x43, 0x6f, 0x6e, 0x66, 0x69, 0x67, 0x55, 0x70, 0x64, 0x61, 0x74, 0x65, 0x64, 0x41, 0x74)
if err != nil {
return
}
err = en.WriteTime(z.TaggingConfigUpdatedAt)
if err != nil {
err = msgp.WrapError(err, "TaggingConfigUpdatedAt")
return
}
// write "QuotaConfigUpdatedAt"
err = en.Append(0xb4, 0x51, 0x75, 0x6f, 0x74, 0x61, 0x43, 0x6f, 0x6e, 0x66, 0x69, 0x67, 0x55, 0x70, 0x64, 0x61, 0x74, 0x65, 0x64, 0x41, 0x74)
if err != nil {
return
}
err = en.WriteTime(z.QuotaConfigUpdatedAt)
if err != nil {
err = msgp.WrapError(err, "QuotaConfigUpdatedAt")
return
}
// write "ReplicationConfigUpdatedAt"
err = en.Append(0xba, 0x52, 0x65, 0x70, 0x6c, 0x69, 0x63, 0x61, 0x74, 0x69, 0x6f, 0x6e, 0x43, 0x6f, 0x6e, 0x66, 0x69, 0x67, 0x55, 0x70, 0x64, 0x61, 0x74, 0x65, 0x64, 0x41, 0x74)
if err != nil {
return
}
err = en.WriteTime(z.ReplicationConfigUpdatedAt)
if err != nil {
err = msgp.WrapError(err, "ReplicationConfigUpdatedAt")
return
}
// write "VersioningConfigUpdatedAt"
err = en.Append(0xb9, 0x56, 0x65, 0x72, 0x73, 0x69, 0x6f, 0x6e, 0x69, 0x6e, 0x67, 0x43, 0x6f, 0x6e, 0x66, 0x69, 0x67, 0x55, 0x70, 0x64, 0x61, 0x74, 0x65, 0x64, 0x41, 0x74)
if err != nil {
return
}
err = en.WriteTime(z.VersioningConfigUpdatedAt)
if err != nil {
err = msgp.WrapError(err, "VersioningConfigUpdatedAt")
return
}
return
}
// MarshalMsg implements msgp.Marshaler
func (z *BucketMetadata) MarshalMsg(b []byte) (o []byte, err error) {
o = msgp.Require(b, z.Msgsize())
// map header, size 14
// map header, size 21
// string "Name"
o = append(o, 0x8e, 0xa4, 0x4e, 0x61, 0x6d, 0x65)
o = append(o, 0xde, 0x0, 0x15, 0xa4, 0x4e, 0x61, 0x6d, 0x65)
o = msgp.AppendString(o, z.Name)
// string "Created"
o = append(o, 0xa7, 0x43, 0x72, 0x65, 0x61, 0x74, 0x65, 0x64)
@@ -311,6 +423,27 @@ func (z *BucketMetadata) MarshalMsg(b []byte) (o []byte, err error) {
// string "BucketTargetsConfigMetaJSON"
o = append(o, 0xbb, 0x42, 0x75, 0x63, 0x6b, 0x65, 0x74, 0x54, 0x61, 0x72, 0x67, 0x65, 0x74, 0x73, 0x43, 0x6f, 0x6e, 0x66, 0x69, 0x67, 0x4d, 0x65, 0x74, 0x61, 0x4a, 0x53, 0x4f, 0x4e)
o = msgp.AppendBytes(o, z.BucketTargetsConfigMetaJSON)
// string "PolicyConfigUpdatedAt"
o = append(o, 0xb5, 0x50, 0x6f, 0x6c, 0x69, 0x63, 0x79, 0x43, 0x6f, 0x6e, 0x66, 0x69, 0x67, 0x55, 0x70, 0x64, 0x61, 0x74, 0x65, 0x64, 0x41, 0x74)
o = msgp.AppendTime(o, z.PolicyConfigUpdatedAt)
// string "ObjectLockConfigUpdatedAt"
o = append(o, 0xb9, 0x4f, 0x62, 0x6a, 0x65, 0x63, 0x74, 0x4c, 0x6f, 0x63, 0x6b, 0x43, 0x6f, 0x6e, 0x66, 0x69, 0x67, 0x55, 0x70, 0x64, 0x61, 0x74, 0x65, 0x64, 0x41, 0x74)
o = msgp.AppendTime(o, z.ObjectLockConfigUpdatedAt)
// string "EncryptionConfigUpdatedAt"
o = append(o, 0xb9, 0x45, 0x6e, 0x63, 0x72, 0x79, 0x70, 0x74, 0x69, 0x6f, 0x6e, 0x43, 0x6f, 0x6e, 0x66, 0x69, 0x67, 0x55, 0x70, 0x64, 0x61, 0x74, 0x65, 0x64, 0x41, 0x74)
o = msgp.AppendTime(o, z.EncryptionConfigUpdatedAt)
// string "TaggingConfigUpdatedAt"
o = append(o, 0xb6, 0x54, 0x61, 0x67, 0x67, 0x69, 0x6e, 0x67, 0x43, 0x6f, 0x6e, 0x66, 0x69, 0x67, 0x55, 0x70, 0x64, 0x61, 0x74, 0x65, 0x64, 0x41, 0x74)
o = msgp.AppendTime(o, z.TaggingConfigUpdatedAt)
// string "QuotaConfigUpdatedAt"
o = append(o, 0xb4, 0x51, 0x75, 0x6f, 0x74, 0x61, 0x43, 0x6f, 0x6e, 0x66, 0x69, 0x67, 0x55, 0x70, 0x64, 0x61, 0x74, 0x65, 0x64, 0x41, 0x74)
o = msgp.AppendTime(o, z.QuotaConfigUpdatedAt)
// string "ReplicationConfigUpdatedAt"
o = append(o, 0xba, 0x52, 0x65, 0x70, 0x6c, 0x69, 0x63, 0x61, 0x74, 0x69, 0x6f, 0x6e, 0x43, 0x6f, 0x6e, 0x66, 0x69, 0x67, 0x55, 0x70, 0x64, 0x61, 0x74, 0x65, 0x64, 0x41, 0x74)
o = msgp.AppendTime(o, z.ReplicationConfigUpdatedAt)
// string "VersioningConfigUpdatedAt"
o = append(o, 0xb9, 0x56, 0x65, 0x72, 0x73, 0x69, 0x6f, 0x6e, 0x69, 0x6e, 0x67, 0x43, 0x6f, 0x6e, 0x66, 0x69, 0x67, 0x55, 0x70, 0x64, 0x61, 0x74, 0x65, 0x64, 0x41, 0x74)
o = msgp.AppendTime(o, z.VersioningConfigUpdatedAt)
return
}
@@ -416,6 +549,48 @@ func (z *BucketMetadata) UnmarshalMsg(bts []byte) (o []byte, err error) {
err = msgp.WrapError(err, "BucketTargetsConfigMetaJSON")
return
}
case "PolicyConfigUpdatedAt":
z.PolicyConfigUpdatedAt, bts, err = msgp.ReadTimeBytes(bts)
if err != nil {
err = msgp.WrapError(err, "PolicyConfigUpdatedAt")
return
}
case "ObjectLockConfigUpdatedAt":
z.ObjectLockConfigUpdatedAt, bts, err = msgp.ReadTimeBytes(bts)
if err != nil {
err = msgp.WrapError(err, "ObjectLockConfigUpdatedAt")
return
}
case "EncryptionConfigUpdatedAt":
z.EncryptionConfigUpdatedAt, bts, err = msgp.ReadTimeBytes(bts)
if err != nil {
err = msgp.WrapError(err, "EncryptionConfigUpdatedAt")
return
}
case "TaggingConfigUpdatedAt":
z.TaggingConfigUpdatedAt, bts, err = msgp.ReadTimeBytes(bts)
if err != nil {
err = msgp.WrapError(err, "TaggingConfigUpdatedAt")
return
}
case "QuotaConfigUpdatedAt":
z.QuotaConfigUpdatedAt, bts, err = msgp.ReadTimeBytes(bts)
if err != nil {
err = msgp.WrapError(err, "QuotaConfigUpdatedAt")
return
}
case "ReplicationConfigUpdatedAt":
z.ReplicationConfigUpdatedAt, bts, err = msgp.ReadTimeBytes(bts)
if err != nil {
err = msgp.WrapError(err, "ReplicationConfigUpdatedAt")
return
}
case "VersioningConfigUpdatedAt":
z.VersioningConfigUpdatedAt, bts, err = msgp.ReadTimeBytes(bts)
if err != nil {
err = msgp.WrapError(err, "VersioningConfigUpdatedAt")
return
}
default:
bts, err = msgp.Skip(bts)
if err != nil {
@@ -430,6 +605,6 @@ func (z *BucketMetadata) UnmarshalMsg(bts []byte) (o []byte, err error) {
// Msgsize returns an upper bound estimate of the number of bytes occupied by the serialized message
func (z *BucketMetadata) Msgsize() (s int) {
s = 1 + 5 + msgp.StringPrefixSize + len(z.Name) + 8 + msgp.TimeSize + 12 + msgp.BoolSize + 17 + msgp.BytesPrefixSize + len(z.PolicyConfigJSON) + 22 + msgp.BytesPrefixSize + len(z.NotificationConfigXML) + 19 + msgp.BytesPrefixSize + len(z.LifecycleConfigXML) + 20 + msgp.BytesPrefixSize + len(z.ObjectLockConfigXML) + 20 + msgp.BytesPrefixSize + len(z.VersioningConfigXML) + 20 + msgp.BytesPrefixSize + len(z.EncryptionConfigXML) + 17 + msgp.BytesPrefixSize + len(z.TaggingConfigXML) + 16 + msgp.BytesPrefixSize + len(z.QuotaConfigJSON) + 21 + msgp.BytesPrefixSize + len(z.ReplicationConfigXML) + 24 + msgp.BytesPrefixSize + len(z.BucketTargetsConfigJSON) + 28 + msgp.BytesPrefixSize + len(z.BucketTargetsConfigMetaJSON)
s = 3 + 5 + msgp.StringPrefixSize + len(z.Name) + 8 + msgp.TimeSize + 12 + msgp.BoolSize + 17 + msgp.BytesPrefixSize + len(z.PolicyConfigJSON) + 22 + msgp.BytesPrefixSize + len(z.NotificationConfigXML) + 19 + msgp.BytesPrefixSize + len(z.LifecycleConfigXML) + 20 + msgp.BytesPrefixSize + len(z.ObjectLockConfigXML) + 20 + msgp.BytesPrefixSize + len(z.VersioningConfigXML) + 20 + msgp.BytesPrefixSize + len(z.EncryptionConfigXML) + 17 + msgp.BytesPrefixSize + len(z.TaggingConfigXML) + 16 + msgp.BytesPrefixSize + len(z.QuotaConfigJSON) + 21 + msgp.BytesPrefixSize + len(z.ReplicationConfigXML) + 24 + msgp.BytesPrefixSize + len(z.BucketTargetsConfigJSON) + 28 + msgp.BytesPrefixSize + len(z.BucketTargetsConfigMetaJSON) + 22 + msgp.TimeSize + 26 + msgp.TimeSize + 26 + msgp.TimeSize + 23 + msgp.TimeSize + 21 + msgp.TimeSize + 27 + msgp.TimeSize + 26 + msgp.TimeSize
return
}
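The arithmetic in Msgsize follows from the same encoding: the leading term grows from 1 to 3 for the map16 header, and each new term is a one-byte fixstr header plus the field-name length, paired with msgp.TimeSize for the value. Checking two of the new constants (standalone):

```
package main

import "fmt"

func main() {
	// 1-byte fixstr header + 21-byte name: the "22" paired with msgp.TimeSize.
	fmt.Println(1 + len("PolicyConfigUpdatedAt")) // 22
	fmt.Println(1 + len("VersioningConfigUpdatedAt")) // 26
}
```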


@@ -60,7 +60,7 @@ func (api objectAPIHandlers) GetBucketNotificationHandler(w http.ResponseWriter,
return
}
_, err := objAPI.GetBucketInfo(ctx, bucketName)
_, err := objAPI.GetBucketInfo(ctx, bucketName, BucketOptions{})
if err != nil {
writeErrorResponse(ctx, w, toAPIError(ctx, err), r.URL)
return
@@ -132,7 +132,7 @@ func (api objectAPIHandlers) PutBucketNotificationHandler(w http.ResponseWriter,
return
}
_, err := objectAPI.GetBucketInfo(ctx, bucketName)
_, err := objectAPI.GetBucketInfo(ctx, bucketName, BucketOptions{})
if err != nil {
writeErrorResponse(ctx, w, toAPIError(ctx, err), r.URL)
return
@@ -160,7 +160,7 @@ func (api objectAPIHandlers) PutBucketNotificationHandler(w http.ResponseWriter,
return
}
if err = globalBucketMetadataSys.Update(ctx, bucketName, bucketNotificationConfig, configData); err != nil {
if _, err = globalBucketMetadataSys.Update(ctx, bucketName, bucketNotificationConfig, configData); err != nil {
writeErrorResponse(ctx, w, toAPIError(ctx, err), r.URL)
return
}


@@ -44,7 +44,7 @@ func (sys *BucketObjectLockSys) Get(bucketName string) (r objectlock.Retention,
return r, nil
}
config, err := globalBucketMetadataSys.GetObjectLockConfig(bucketName)
config, _, err := globalBucketMetadataSys.GetObjectLockConfig(bucketName)
if err != nil {
if _, ok := err.(BucketObjectLockConfigNotFound); ok {
return r, nil


@@ -61,7 +61,7 @@ func (api objectAPIHandlers) PutBucketPolicyHandler(w http.ResponseWriter, r *ht
}
// Check if bucket exists.
if _, err := objAPI.GetBucketInfo(ctx, bucket); err != nil {
if _, err := objAPI.GetBucketInfo(ctx, bucket, BucketOptions{}); err != nil {
writeErrorResponse(ctx, w, toAPIError(ctx, err), r.URL)
return
}
@@ -103,16 +103,18 @@ func (api objectAPIHandlers) PutBucketPolicyHandler(w http.ResponseWriter, r *ht
return
}
if err = globalBucketMetadataSys.Update(ctx, bucket, bucketPolicyConfig, configData); err != nil {
updatedAt, err := globalBucketMetadataSys.Update(ctx, bucket, bucketPolicyConfig, configData)
if err != nil {
writeErrorResponse(ctx, w, toAPIError(ctx, err), r.URL)
return
}
// Call site replication hook.
if err = globalSiteReplicationSys.BucketMetaHook(ctx, madmin.SRBucketMeta{
Type: madmin.SRBucketMetaTypePolicy,
Bucket: bucket,
Policy: bucketPolicyBytes,
Type: madmin.SRBucketMetaTypePolicy,
Bucket: bucket,
Policy: bucketPolicyBytes,
UpdatedAt: updatedAt,
}); err != nil {
writeErrorResponse(ctx, w, toAPIError(ctx, err), r.URL)
return
@@ -143,20 +145,22 @@ func (api objectAPIHandlers) DeleteBucketPolicyHandler(w http.ResponseWriter, r
}
// Check if bucket exists.
if _, err := objAPI.GetBucketInfo(ctx, bucket); err != nil {
if _, err := objAPI.GetBucketInfo(ctx, bucket, BucketOptions{}); err != nil {
writeErrorResponse(ctx, w, toAPIError(ctx, err), r.URL)
return
}
if err := globalBucketMetadataSys.Update(ctx, bucket, bucketPolicyConfig, nil); err != nil {
updatedAt, err := globalBucketMetadataSys.Update(ctx, bucket, bucketPolicyConfig, nil)
if err != nil {
writeErrorResponse(ctx, w, toAPIError(ctx, err), r.URL)
return
}
// Call site replication hook.
if err := globalSiteReplicationSys.BucketMetaHook(ctx, madmin.SRBucketMeta{
Type: madmin.SRBucketMetaTypePolicy,
Bucket: bucket,
Type: madmin.SRBucketMetaTypePolicy,
Bucket: bucket,
UpdatedAt: updatedAt,
}); err != nil {
writeErrorResponse(ctx, w, toAPIError(ctx, err), r.URL)
return
@@ -187,7 +191,7 @@ func (api objectAPIHandlers) GetBucketPolicyHandler(w http.ResponseWriter, r *ht
}
// Check if bucket exists.
if _, err := objAPI.GetBucketInfo(ctx, bucket); err != nil {
if _, err := objAPI.GetBucketInfo(ctx, bucket, BucketOptions{}); err != nil {
writeErrorResponse(ctx, w, toAPIError(ctx, err), r.URL)
return
}


@@ -113,7 +113,8 @@ func TestCreateBucket(t *testing.T) {
// testCreateBucket - Test for calling Create Bucket and ensuring we get one and only one success.
func testCreateBucket(obj ObjectLayer, instanceType, bucketName string, apiRouter http.Handler,
credentials auth.Credentials, t *testing.T) {
credentials auth.Credentials, t *testing.T,
) {
bucketName1 := fmt.Sprintf("%s-1", bucketName)
const n = 100
@@ -127,7 +128,7 @@ func testCreateBucket(obj ObjectLayer, instanceType, bucketName string, apiRoute
defer wg.Done()
// Sync start.
<-start
if err := obj.MakeBucketWithLocation(GlobalContext, bucketName1, BucketOptions{}); err != nil {
if err := obj.MakeBucketWithLocation(GlobalContext, bucketName1, MakeBucketOptions{}); err != nil {
if _, ok := err.(BucketExists); !ok {
t.Logf("unexpected error: %T: %v", err, err)
return
@@ -162,7 +163,7 @@ func testPutBucketPolicyHandler(obj ObjectLayer, instanceType, bucketName string
credentials auth.Credentials, t *testing.T,
) {
bucketName1 := fmt.Sprintf("%s-1", bucketName)
if err := obj.MakeBucketWithLocation(GlobalContext, bucketName1, BucketOptions{}); err != nil {
if err := obj.MakeBucketWithLocation(GlobalContext, bucketName1, MakeBucketOptions{}); err != nil {
t.Fatal(err)
}
@@ -378,7 +379,8 @@ func TestGetBucketPolicyHandler(t *testing.T) {
// testGetBucketPolicyHandler - Test for end point which fetches the access policy json of the given bucket.
func testGetBucketPolicyHandler(obj ObjectLayer, instanceType, bucketName string, apiRouter http.Handler,
credentials auth.Credentials, t *testing.T) {
credentials auth.Credentials, t *testing.T,
) {
// template for constructing HTTP request body for PUT bucket policy.
bucketPolicyTemplate := `{"Version":"2012-10-17","Statement":[{"Action":["s3:GetBucketLocation","s3:ListBucket"],"Effect":"Allow","Principal":{"AWS":["*"]},"Resource":["arn:aws:s3:::%s"]},{"Action":["s3:GetObject"],"Effect":"Allow","Principal":{"AWS":["*"]},"Resource":["arn:aws:s3:::%s/this*"]}]}`


@@ -38,7 +38,8 @@ type PolicySys struct{}
// Get returns stored bucket policy
func (sys *PolicySys) Get(bucket string) (*policy.Policy, error) {
return globalBucketMetadataSys.GetPolicyConfig(bucket)
policy, _, err := globalBucketMetadataSys.GetPolicyConfig(bucket)
return policy, err
}
// IsAllowed - checks given policy args is allowed to continue the Rest API.


@@ -42,8 +42,8 @@ func (sys *BucketQuotaSys) Get(ctx context.Context, bucketName string) (*madmin.
}
return &madmin.BucketQuota{}, nil
}
return globalBucketMetadataSys.GetQuotaConfig(ctx, bucketName)
qCfg, _, err := globalBucketMetadataSys.GetQuotaConfig(ctx, bucketName)
return qCfg, err
}
// NewBucketQuotaSys returns initialized BucketQuotaSys


@@ -0,0 +1,410 @@
// Copyright (c) 2015-2022 MinIO, Inc.
//
// This file is part of MinIO Object Storage stack
//
// This program is free software: you can redistribute it and/or modify
// it under the terms of the GNU Affero General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
//
// This program is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU Affero General Public License for more details.
//
// You should have received a copy of the GNU Affero General Public License
// along with this program. If not, see <http://www.gnu.org/licenses/>.
package cmd
import (
"encoding/json"
"encoding/xml"
"fmt"
"io"
"net/http"
"time"
"github.com/gorilla/mux"
"github.com/minio/minio/internal/bucket/replication"
xhttp "github.com/minio/minio/internal/http"
"github.com/minio/minio/internal/logger"
"github.com/minio/pkg/bucket/policy"
)
// PutBucketReplicationConfigHandler - PUT Bucket replication configuration.
// ----------
// Add a replication configuration on the specified bucket as specified in https://docs.aws.amazon.com/AmazonS3/latest/API/API_PutBucketReplication.html
func (api objectAPIHandlers) PutBucketReplicationConfigHandler(w http.ResponseWriter, r *http.Request) {
ctx := newContext(r, w, "PutBucketReplicationConfig")
defer logger.AuditLog(ctx, w, r, mustGetClaimsFromToken(r))
vars := mux.Vars(r)
bucket := vars["bucket"]
objectAPI := api.ObjectAPI()
if objectAPI == nil {
writeErrorResponse(ctx, w, errorCodes.ToAPIErr(ErrServerNotInitialized), r.URL)
return
}
if globalIsGateway {
writeErrorResponseJSON(ctx, w, errorCodes.ToAPIErr(ErrNotImplemented), r.URL)
return
}
if s3Error := checkRequestAuthType(ctx, r, policy.PutReplicationConfigurationAction, bucket, ""); s3Error != ErrNone {
writeErrorResponse(ctx, w, errorCodes.ToAPIErr(s3Error), r.URL)
return
}
// Check if bucket exists.
if _, err := objectAPI.GetBucketInfo(ctx, bucket, BucketOptions{}); err != nil {
writeErrorResponse(ctx, w, toAPIError(ctx, err), r.URL)
return
}
if versioned := globalBucketVersioningSys.Enabled(bucket); !versioned {
writeErrorResponse(ctx, w, errorCodes.ToAPIErr(ErrReplicationNeedsVersioningError), r.URL)
return
}
replicationConfig, err := replication.ParseConfig(io.LimitReader(r.Body, r.ContentLength))
if err != nil {
apiErr := errorCodes.ToAPIErr(ErrMalformedXML)
apiErr.Description = err.Error()
writeErrorResponse(ctx, w, apiErr, r.URL)
return
}
sameTarget, apiErr := validateReplicationDestination(ctx, bucket, replicationConfig, true)
if apiErr != noError {
writeErrorResponse(ctx, w, apiErr, r.URL)
return
}
// Validate the received bucket replication config
if err = replicationConfig.Validate(bucket, sameTarget); err != nil {
writeErrorResponse(ctx, w, toAPIError(ctx, err), r.URL)
return
}
configData, err := xml.Marshal(replicationConfig)
if err != nil {
writeErrorResponse(ctx, w, toAPIError(ctx, err), r.URL)
return
}
if _, err = globalBucketMetadataSys.Update(ctx, bucket, bucketReplicationConfig, configData); err != nil {
writeErrorResponse(ctx, w, toAPIError(ctx, err), r.URL)
return
}
// Write success response.
writeSuccessResponseHeadersOnly(w)
}
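For orientation, the S3 call this handler serves is a PUT on the bucket's ?replication subresource. The sketch below only shows the request shape; the endpoint, bucket, and XML body are placeholders, and a real client must sign the request with AWS SigV4:

```
package main

import (
	"fmt"
	"net/http"
	"strings"
)

func main() {
	cfg := `<ReplicationConfiguration>...</ReplicationConfiguration>` // placeholder body
	req, err := http.NewRequest(http.MethodPut,
		"http://localhost:9000/mybucket?replication", strings.NewReader(cfg))
	if err != nil {
		panic(err)
	}
	// A production client would add a SigV4 Authorization header here
	// before sending the request.
	fmt.Println(req.Method, req.URL.String())
}
```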
// GetBucketReplicationConfigHandler - GET Bucket replication configuration.
// ----------
// Gets the replication configuration for a bucket.
func (api objectAPIHandlers) GetBucketReplicationConfigHandler(w http.ResponseWriter, r *http.Request) {
ctx := newContext(r, w, "GetBucketReplicationConfig")
defer logger.AuditLog(ctx, w, r, mustGetClaimsFromToken(r))
vars := mux.Vars(r)
bucket := vars["bucket"]
objectAPI := api.ObjectAPI()
if objectAPI == nil {
writeErrorResponse(ctx, w, errorCodes.ToAPIErr(ErrServerNotInitialized), r.URL)
return
}
// check if user has permissions to perform this operation
if s3Error := checkRequestAuthType(ctx, r, policy.GetReplicationConfigurationAction, bucket, ""); s3Error != ErrNone {
writeErrorResponse(ctx, w, errorCodes.ToAPIErr(s3Error), r.URL)
return
}
// Check if bucket exists.
if _, err := objectAPI.GetBucketInfo(ctx, bucket, BucketOptions{}); err != nil {
writeErrorResponse(ctx, w, toAPIError(ctx, err), r.URL)
return
}
config, _, err := globalBucketMetadataSys.GetReplicationConfig(ctx, bucket)
if err != nil {
writeErrorResponse(ctx, w, toAPIError(ctx, err), r.URL)
return
}
configData, err := xml.Marshal(config)
if err != nil {
writeErrorResponse(ctx, w, toAPIError(ctx, err), r.URL)
return
}
// Write success response.
writeSuccessResponseXML(w, configData)
}
// DeleteBucketReplicationConfigHandler - DELETE Bucket replication config.
// ----------
func (api objectAPIHandlers) DeleteBucketReplicationConfigHandler(w http.ResponseWriter, r *http.Request) {
ctx := newContext(r, w, "DeleteBucketReplicationConfig")
defer logger.AuditLog(ctx, w, r, mustGetClaimsFromToken(r))
vars := mux.Vars(r)
bucket := vars["bucket"]
objectAPI := api.ObjectAPI()
if objectAPI == nil {
writeErrorResponse(ctx, w, errorCodes.ToAPIErr(ErrServerNotInitialized), r.URL)
return
}
if s3Error := checkRequestAuthType(ctx, r, policy.PutReplicationConfigurationAction, bucket, ""); s3Error != ErrNone {
writeErrorResponse(ctx, w, errorCodes.ToAPIErr(s3Error), r.URL)
return
}
// Check if bucket exists.
if _, err := objectAPI.GetBucketInfo(ctx, bucket, BucketOptions{}); err != nil {
writeErrorResponse(ctx, w, toAPIError(ctx, err), r.URL)
return
}
if globalSiteReplicationSys.isEnabled() {
writeErrorResponse(ctx, w, errorCodes.ToAPIErr(ErrReplicationDenyEditError), r.URL)
return
}
if _, err := globalBucketMetadataSys.Update(ctx, bucket, bucketReplicationConfig, nil); err != nil {
writeErrorResponse(ctx, w, toAPIError(ctx, err), r.URL)
return
}
// Write success response.
writeSuccessResponseHeadersOnly(w)
}
// GetBucketReplicationMetricsHandler - GET Bucket replication metrics.
// ----------
// Gets the replication metrics for a bucket.
func (api objectAPIHandlers) GetBucketReplicationMetricsHandler(w http.ResponseWriter, r *http.Request) {
ctx := newContext(r, w, "GetBucketReplicationMetrics")
defer logger.AuditLog(ctx, w, r, mustGetClaimsFromToken(r))
vars := mux.Vars(r)
bucket := vars["bucket"]
objectAPI := api.ObjectAPI()
if objectAPI == nil {
writeErrorResponse(ctx, w, errorCodes.ToAPIErr(ErrServerNotInitialized), r.URL)
return
}
// check if user has permissions to perform this operation
if s3Error := checkRequestAuthType(ctx, r, policy.GetReplicationConfigurationAction, bucket, ""); s3Error != ErrNone {
writeErrorResponse(ctx, w, errorCodes.ToAPIErr(s3Error), r.URL)
return
}
// Check if bucket exists.
if _, err := objectAPI.GetBucketInfo(ctx, bucket, BucketOptions{}); err != nil {
writeErrorResponse(ctx, w, toAPIError(ctx, err), r.URL)
return
}
if _, _, err := globalBucketMetadataSys.GetReplicationConfig(ctx, bucket); err != nil {
writeErrorResponse(ctx, w, toAPIError(ctx, err), r.URL)
return
}
var usageInfo BucketUsageInfo
dataUsageInfo, err := loadDataUsageFromBackend(ctx, objectAPI)
if err == nil && !dataUsageInfo.LastUpdate.IsZero() {
usageInfo = dataUsageInfo.BucketsUsage[bucket]
}
w.Header().Set(xhttp.ContentType, string(mimeJSON))
enc := json.NewEncoder(w)
if err = enc.Encode(getLatestReplicationStats(bucket, usageInfo)); err != nil {
writeErrorResponseJSON(ctx, w, toAdminAPIErr(ctx, err), r.URL)
return
}
}
// ResetBucketReplicationStartHandler - starts a replication reset for all objects in a bucket which
// qualify for replication and re-sync the object(s) to target, provided ExistingObjectReplication is
// enabled for the qualifying rule. This API is a MinIO only extension provided for situations where
// remote target is entirely lost, and previously replicated objects need to be re-synced. If resync is
// already in progress, it returns an error
func (api objectAPIHandlers) ResetBucketReplicationStartHandler(w http.ResponseWriter, r *http.Request) {
ctx := newContext(r, w, "ResetBucketReplicationStart")
defer logger.AuditLog(ctx, w, r, mustGetClaimsFromToken(r))
vars := mux.Vars(r)
bucket := vars["bucket"]
durationStr := r.URL.Query().Get("older-than")
arn := r.URL.Query().Get("arn")
resetID := r.URL.Query().Get("reset-id")
if resetID == "" {
resetID = mustGetUUID()
}
var (
days time.Duration
err error
)
if durationStr != "" {
days, err = time.ParseDuration(durationStr)
if err != nil {
writeErrorResponse(ctx, w, toAPIError(ctx, InvalidArgument{
Bucket: bucket,
Err: fmt.Errorf("invalid query parameter older-than %s for %s : %w", durationStr, bucket, err),
}), r.URL)
}
}
resetBeforeDate := UTCNow().AddDate(0, 0, -1*int(days/24))
objectAPI := api.ObjectAPI()
if objectAPI == nil {
writeErrorResponse(ctx, w, errorCodes.ToAPIErr(ErrServerNotInitialized), r.URL)
return
}
if s3Error := checkRequestAuthType(ctx, r, policy.ResetBucketReplicationStateAction, bucket, ""); s3Error != ErrNone {
writeErrorResponse(ctx, w, errorCodes.ToAPIErr(s3Error), r.URL)
return
}
// Check if bucket exists.
if _, err := objectAPI.GetBucketInfo(ctx, bucket, BucketOptions{}); err != nil {
writeErrorResponse(ctx, w, toAPIError(ctx, err), r.URL)
return
}
config, _, err := globalBucketMetadataSys.GetReplicationConfig(ctx, bucket)
if err != nil {
writeErrorResponse(ctx, w, toAPIError(ctx, err), r.URL)
return
}
if !config.HasExistingObjectReplication(arn) {
writeErrorResponse(ctx, w, errorCodes.ToAPIErr(ErrReplicationNoExistingObjects), r.URL)
return
}
tgtArns := config.FilterTargetArns(
replication.ObjectOpts{
OpType: replication.ResyncReplicationType,
TargetArn: arn,
})
if len(tgtArns) == 0 {
writeErrorResponseJSON(ctx, w, errorCodes.ToAPIErrWithErr(ErrBadRequest, InvalidArgument{
Bucket: bucket,
Err: fmt.Errorf("Remote target ARN %s missing or ineligible for replication resync", arn),
}), r.URL)
return
}
if len(tgtArns) > 1 && arn == "" {
writeErrorResponseJSON(ctx, w, errorCodes.ToAPIErrWithErr(ErrBadRequest, InvalidArgument{
Bucket: bucket,
Err: fmt.Errorf("ARN should be specified for replication reset"),
}), r.URL)
return
}
var rinfo ResyncTargetsInfo
target := globalBucketTargetSys.GetRemoteBucketTargetByArn(ctx, bucket, tgtArns[0])
target.ResetBeforeDate = UTCNow().AddDate(0, 0, -1*int(days/24))
target.ResetID = resetID
rinfo.Targets = append(rinfo.Targets, ResyncTarget{Arn: tgtArns[0], ResetID: target.ResetID})
if err = globalBucketTargetSys.SetTarget(ctx, bucket, &target, true); err != nil {
switch err.(type) {
case BucketRemoteConnectionErr:
writeErrorResponseJSON(ctx, w, errorCodes.ToAPIErrWithErr(ErrReplicationRemoteConnectionError, err), r.URL)
default:
writeErrorResponseJSON(ctx, w, toAPIError(ctx, err), r.URL)
}
}
if err := startReplicationResync(ctx, bucket, arn, resetID, resetBeforeDate, objectAPI); err != nil {
writeErrorResponseJSON(ctx, w, errorCodes.ToAPIErrWithErr(ErrBadRequest, InvalidArgument{
Bucket: bucket,
Err: err,
}), r.URL)
return
}
data, err := json.Marshal(rinfo)
if err != nil {
writeErrorResponseJSON(ctx, w, toAdminAPIErr(ctx, err), r.URL)
return
}
// Write success response.
writeSuccessResponseJSON(w, data)
}
// ResetBucketReplicationStatusHandler - returns the status of replication reset.
// This API is a MinIO only extension
func (api objectAPIHandlers) ResetBucketReplicationStatusHandler(w http.ResponseWriter, r *http.Request) {
ctx := newContext(r, w, "ResetBucketReplicationStatus")
defer logger.AuditLog(ctx, w, r, mustGetClaimsFromToken(r))
vars := mux.Vars(r)
bucket := vars["bucket"]
arn := r.URL.Query().Get("arn")
var err error
objectAPI := api.ObjectAPI()
if objectAPI == nil {
writeErrorResponse(ctx, w, errorCodes.ToAPIErr(ErrServerNotInitialized), r.URL)
return
}
if s3Error := checkRequestAuthType(ctx, r, policy.ResetBucketReplicationStateAction, bucket, ""); s3Error != ErrNone {
writeErrorResponse(ctx, w, errorCodes.ToAPIErr(s3Error), r.URL)
return
}
// Check if bucket exists.
if _, err := objectAPI.GetBucketInfo(ctx, bucket, BucketOptions{}); err != nil {
writeErrorResponse(ctx, w, toAPIError(ctx, err), r.URL)
return
}
if _, _, err := globalBucketMetadataSys.GetReplicationConfig(ctx, bucket); err != nil {
writeErrorResponse(ctx, w, toAPIError(ctx, err), r.URL)
return
}
globalReplicationPool.resyncState.RLock()
brs, ok := globalReplicationPool.resyncState.statusMap[bucket]
globalReplicationPool.resyncState.RUnlock()
if !ok {
brs, err = loadBucketResyncMetadata(ctx, bucket, objectAPI)
if err != nil {
writeErrorResponse(ctx, w, errorCodes.ToAPIErrWithErr(ErrBadRequest, InvalidArgument{
Bucket: bucket,
Err: fmt.Errorf("No replication resync status available for %s", arn),
}), r.URL)
return
}
}
var rinfo ResyncTargetsInfo
for tarn, st := range brs.TargetsMap {
if arn != "" && tarn != arn {
continue
}
rinfo.Targets = append(rinfo.Targets, ResyncTarget{
Arn: tarn,
ResetID: st.ResyncID,
StartTime: st.StartTime,
EndTime: st.EndTime,
ResyncStatus: st.ResyncStatus.String(),
ReplicatedSize: st.ReplicatedSize,
ReplicatedCount: st.ReplicatedCount,
FailedSize: st.FailedSize,
FailedCount: st.FailedCount,
Bucket: st.Bucket,
Object: st.Object,
})
}
data, err := json.Marshal(rinfo)
if err != nil {
writeErrorResponseJSON(ctx, w, toAdminAPIErr(ctx, err), r.URL)
return
}
// Write success response.
writeSuccessResponseJSON(w, data)
}


@@ -135,6 +135,23 @@ func (r *ReplicationStats) GetInitialUsage(bucket string) BucketReplicationStats
return st.Clone()
}
// GetAll returns replication metrics for all buckets at once.
func (r *ReplicationStats) GetAll() map[string]BucketReplicationStats {
if r == nil {
return map[string]BucketReplicationStats{}
}
r.RLock()
defer r.RUnlock()
bucketReplicationStats := make(map[string]BucketReplicationStats, len(r.Cache))
for k, v := range r.Cache {
bucketReplicationStats[k] = v.Clone()
}
return bucketReplicationStats
}
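GetAll clones every entry under the read lock so callers get a consistent snapshot that later writers cannot mutate. A generic, standalone sketch of that copy-under-RLock pattern (the types are illustrative):

```
package main

import (
	"fmt"
	"sync"
)

type stats struct{ Completed int }

type registry struct {
	sync.RWMutex
	cache map[string]stats
}

// Snapshot copies the map under RLock so readers never race writers.
func (r *registry) Snapshot() map[string]stats {
	r.RLock()
	defer r.RUnlock()
	out := make(map[string]stats, len(r.cache))
	for k, v := range r.cache {
		out[k] = v
	}
	return out
}

func main() {
	r := &registry{cache: map[string]stats{"photos": {Completed: 3}}}
	snap := r.Snapshot()
	r.Lock()
	r.cache["photos"] = stats{Completed: 4}
	r.Unlock()
	fmt.Println(snap["photos"].Completed) // 3: the snapshot is isolated
}
```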
// Get replication metrics for a bucket from this node since this node came up.
func (r *ReplicationStats) Get(bucket string) BucketReplicationStats {
if r == nil {
@@ -161,7 +178,7 @@ func NewReplicationStats(ctx context.Context, objectAPI ObjectLayer) *Replicatio
// load replication metrics at cluster start from initial data usage
func (r *ReplicationStats) loadInitialReplicationMetrics(ctx context.Context) {
rTimer := time.NewTimer(time.Minute * 1)
rTimer := time.NewTimer(time.Minute)
defer rTimer.Stop()
var (
dui DataUsageInfo
@@ -174,13 +191,12 @@ outer:
return
case <-rTimer.C:
dui, err = loadDataUsageFromBackend(GlobalContext, newObjectLayerFn())
if err != nil {
continue
}
// If LastUpdate is set, data usage is available.
if !dui.LastUpdate.IsZero() {
if err == nil && !dui.LastUpdate.IsZero() {
break outer
}
rTimer.Reset(time.Minute)
}
}
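The loop above uses the timer stop/reset idiom to poll until data usage is available or the context is cancelled. A generic, standalone sketch of that pattern (names and intervals are illustrative):

```
package main

import (
	"context"
	"fmt"
	"time"
)

// waitUntil retries every interval until ready() holds or ctx is cancelled.
func waitUntil(ctx context.Context, interval time.Duration, ready func() bool) bool {
	t := time.NewTimer(interval)
	defer t.Stop()
	for {
		select {
		case <-ctx.Done():
			return false
		case <-t.C:
			if ready() {
				return true
			}
			t.Reset(interval)
		}
	}
}

func main() {
	n := 0
	ok := waitUntil(context.Background(), 10*time.Millisecond,
		func() bool { n++; return n >= 3 })
	fmt.Println(ok, n) // true 3
}
```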


@@ -20,13 +20,17 @@ package cmd
import (
"bytes"
"fmt"
"net/http"
"net/url"
"regexp"
"strconv"
"strings"
"sync"
"time"
"github.com/minio/madmin-go"
"github.com/minio/minio/internal/bucket/replication"
xhttp "github.com/minio/minio/internal/http"
)
//go:generate msgp -file=$GOFILE
@@ -501,28 +505,32 @@ func getHealReplicateObjectInfo(objInfo ObjectInfo, rcfg replicationConfig) Repl
}
var dsc ReplicateDecision
var tgtStatuses map[string]replication.StatusType
var purgeStatuses map[string]VersionPurgeStatusType
if oi.DeleteMarker || !oi.VersionPurgeStatus.Empty() {
dsc = checkReplicateDelete(GlobalContext, oi.Bucket, ObjectToDelete{
ObjectV: ObjectV{
ObjectName: oi.Name,
VersionID: oi.VersionID,
},
}, oi, ObjectOptions{}, nil)
}, oi, ObjectOptions{VersionSuspended: globalBucketVersioningSys.PrefixSuspended(oi.Bucket, oi.Name)}, nil)
} else {
dsc = mustReplicate(GlobalContext, oi.Bucket, oi.Name, getMustReplicateOptions(ObjectInfo{
UserDefined: oi.UserDefined,
}, replication.HealReplicationType, ObjectOptions{}))
}
tgtStatuses = replicationStatusesMap(oi.ReplicationStatusInternal)
purgeStatuses = versionPurgeStatusesMap(oi.VersionPurgeStatusInternal)
existingObjResync := rcfg.Resync(GlobalContext, oi, &dsc, tgtStatuses)
tm, _ := time.Parse(time.RFC3339Nano, oi.UserDefined[ReservedMetadataPrefixLower+ReplicationTimestamp])
return ReplicateObjectInfo{
ObjectInfo: oi,
OpType: replication.HealReplicationType,
Dsc: dsc,
ExistingObjResync: existingObjResync,
TargetStatuses: tgtStatuses,
ObjectInfo: oi,
OpType: replication.HealReplicationType,
Dsc: dsc,
ExistingObjResync: existingObjResync,
TargetStatuses: tgtStatuses,
TargetPurgeStatuses: purgeStatuses,
ReplicationTimestamp: tm,
}
}
@@ -699,3 +707,33 @@ func newBucketResyncStatus(bucket string) BucketReplicationResyncStatus {
Version: resyncMetaVersion,
}
}
var contentRangeRegexp = regexp.MustCompile(`bytes ([0-9]+)-([0-9]+)/([0-9]+|\*)`)
// parse size from content-range header
func parseSizeFromContentRange(h http.Header) (sz int64, err error) {
cr := h.Get(xhttp.ContentRange)
if cr == "" {
return sz, fmt.Errorf("Content-Range not set")
}
parts := contentRangeRegexp.FindStringSubmatch(cr)
if len(parts) != 4 {
return sz, fmt.Errorf("invalid Content-Range header %s", cr)
}
if parts[3] == "*" {
return -1, nil
}
var usz uint64
usz, err = strconv.ParseUint(parts[3], 10, 64)
if err != nil {
return sz, err
}
return int64(usz), nil
}
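Assuming the helper above is in scope (xhttp.ContentRange is simply the Content-Range header), a usage sketch of the three input shapes it distinguishes:

```
h := http.Header{}
h.Set("Content-Range", "bytes 0-9/100")
sz, err := parseSizeFromContentRange(h)
fmt.Println(sz, err) // 100 <nil>

h.Set("Content-Range", "bytes 0-9/*")
sz, err = parseSizeFromContentRange(h)
fmt.Println(sz, err) // -1 <nil>: total size unknown

h.Set("Content-Range", "garbled")
_, err = parseSizeFromContentRange(h)
fmt.Println(err) // invalid Content-Range header garbled
```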
func extractReplicateDiffOpts(q url.Values) (opts madmin.ReplDiffOpts) {
opts.Verbose = q.Get("verbose") == "true"
opts.ARN = q.Get("arn")
opts.Prefix = q.Get("prefix")
return
}


@@ -81,12 +81,13 @@ func getReplicationConfig(ctx context.Context, bucketName string) (rc *replicati
return rc, BucketReplicationConfigNotFound{Bucket: bucketName}
}
return globalBucketMetadataSys.GetReplicationConfig(ctx, bucketName)
rCfg, _, err := globalBucketMetadataSys.GetReplicationConfig(ctx, bucketName)
return rCfg, err
}
// validateReplicationDestination returns an error if the replication destination bucket is missing or not configured.
// It also returns true if the replication destination is the same as this server.
func validateReplicationDestination(ctx context.Context, bucket string, rCfg *replication.Config) (bool, APIError) {
func validateReplicationDestination(ctx context.Context, bucket string, rCfg *replication.Config, checkRemote bool) (bool, APIError) {
var arns []string
if rCfg.RoleArn != "" {
arns = append(arns, rCfg.RoleArn)
@@ -95,26 +96,29 @@ func validateReplicationDestination(ctx context.Context, bucket string, rCfg *re
arns = append(arns, rule.Destination.String())
}
}
var sameTarget bool
for _, arnStr := range arns {
arn, err := madmin.ParseARN(arnStr)
if err != nil {
return false, errorCodes.ToAPIErrWithErr(ErrBucketRemoteArnInvalid, err)
return sameTarget, errorCodes.ToAPIErrWithErr(ErrBucketRemoteArnInvalid, err)
}
if arn.Type != madmin.ReplicationService {
return false, toAPIError(ctx, BucketRemoteArnTypeInvalid{Bucket: bucket})
return sameTarget, toAPIError(ctx, BucketRemoteArnTypeInvalid{Bucket: bucket})
}
clnt := globalBucketTargetSys.GetRemoteTargetClient(ctx, arnStr)
if clnt == nil {
return false, toAPIError(ctx, BucketRemoteTargetNotFound{Bucket: bucket})
return sameTarget, toAPIError(ctx, BucketRemoteTargetNotFound{Bucket: bucket})
}
if found, err := clnt.BucketExists(ctx, arn.Bucket); !found {
return false, errorCodes.ToAPIErrWithErr(ErrRemoteDestinationNotFoundError, err)
}
if ret, err := globalBucketObjectLockSys.Get(bucket); err == nil {
if ret.LockEnabled {
lock, _, _, _, err := clnt.GetObjectLockConfig(ctx, arn.Bucket)
if err != nil || lock != "Enabled" {
return false, errorCodes.ToAPIErrWithErr(ErrReplicationDestinationMissingLock, err)
if checkRemote { // validate remote bucket
if found, err := clnt.BucketExists(ctx, arn.Bucket); !found {
return sameTarget, errorCodes.ToAPIErrWithErr(ErrRemoteDestinationNotFoundError, err)
}
if ret, err := globalBucketObjectLockSys.Get(bucket); err == nil {
if ret.LockEnabled {
lock, _, _, _, err := clnt.GetObjectLockConfig(ctx, arn.Bucket)
if err != nil || lock != "Enabled" {
return sameTarget, errorCodes.ToAPIErrWithErr(ErrReplicationDestinationMissingLock, err)
}
}
}
}
@@ -122,12 +126,19 @@ func validateReplicationDestination(ctx context.Context, bucket string, rCfg *re
c, ok := globalBucketTargetSys.arnRemotesMap[arnStr]
if ok {
if c.EndpointURL().String() == clnt.EndpointURL().String() {
sameTarget, _ := isLocalHost(clnt.EndpointURL().Hostname(), clnt.EndpointURL().Port(), globalMinioPort)
return sameTarget, toAPIError(ctx, nil)
selfTarget, _ := isLocalHost(clnt.EndpointURL().Hostname(), clnt.EndpointURL().Port(), globalMinioPort)
if !sameTarget {
sameTarget = selfTarget
}
continue
}
}
}
return false, toAPIError(ctx, BucketRemoteTargetNotFound{Bucket: bucket})
if len(arns) == 0 {
return false, toAPIError(ctx, BucketRemoteTargetNotFound{Bucket: bucket})
}
return sameTarget, toAPIError(ctx, nil)
}
type mustReplicateOptions struct {
@@ -179,6 +190,17 @@ func mustReplicate(ctx context.Context, bucket, object string, mopts mustReplica
return
}
// object layer not initialized we return with no decision.
if newObjectLayerFn() == nil {
return
}
// Disable server-side replication on object prefixes which are excluded
// from versioning via the MinIO bucket versioning extension.
if globalBucketVersioningSys.PrefixSuspended(bucket, object) {
return
}
replStatus := mopts.ReplicationStatus()
if replStatus == replication.Replica && !mopts.isMetadataReplication() {
return
@@ -263,6 +285,11 @@ func checkReplicateDelete(ctx context.Context, bucket string, dobj ObjectToDelet
if delOpts.ReplicationRequest {
return
}
// Skip replication if this object's prefix is excluded from being
// versioned.
if !delOpts.Versioned {
return
}
opts := replication.ObjectOpts{
Name: dobj.ObjectName,
SSEC: crypto.SSEC.IsEncrypted(oi.UserDefined),
@@ -320,7 +347,7 @@ func checkReplicateDelete(ctx context.Context, bucket string, dobj ObjectToDelet
// target cluster, the object version is marked deleted on the source and hidden from listing. It is permanently
// deleted from the source when the VersionPurgeStatus changes to "Complete", i.e. after replication succeeds
// on target.
func replicateDelete(ctx context.Context, dobj DeletedObjectReplicationInfo, objectAPI ObjectLayer, trigger string) {
func replicateDelete(ctx context.Context, dobj DeletedObjectReplicationInfo, objectAPI ObjectLayer) {
var replicationStatus replication.StatusType
bucket := dobj.Bucket
versionID := dobj.DeleteMarkerVersionID
@@ -331,7 +358,7 @@ func replicateDelete(ctx context.Context, dobj DeletedObjectReplicationInfo, obj
defer func() {
replStatus := string(replicationStatus)
auditLogInternal(context.Background(), bucket, dobj.ObjectName, AuditLogOptions{
Trigger: trigger,
Event: dobj.EventType,
APIName: ReplicateDeleteAPI,
VersionID: versionID,
Status: replStatus,
@@ -425,7 +452,7 @@ func replicateDelete(ctx context.Context, dobj DeletedObjectReplicationInfo, obj
wg.Add(1)
go func(index int, tgt *TargetClient) {
defer wg.Done()
rinfo := replicateDeleteToTarget(ctx, dobj, objectAPI, tgt)
rinfo := replicateDeleteToTarget(ctx, dobj, tgt)
rinfos.Targets[index] = rinfo
}(idx, tgt)
}
@@ -452,12 +479,16 @@ func replicateDelete(ctx context.Context, dobj DeletedObjectReplicationInfo, obj
eventName = event.ObjectReplicationFailed
}
drs := getReplicationState(rinfos, dobj.ReplicationState, dobj.VersionID)
if replicationStatus != prevStatus {
drs.ReplicationTimeStamp = UTCNow()
}
dobjInfo, err := objectAPI.DeleteObject(ctx, bucket, dobj.ObjectName, ObjectOptions{
VersionID: versionID,
MTime: dobj.DeleteMarkerMTime.Time,
DeleteReplication: drs,
Versioned: globalBucketVersioningSys.Enabled(bucket),
VersionSuspended: globalBucketVersioningSys.Suspended(bucket),
Versioned: globalBucketVersioningSys.PrefixEnabled(bucket, dobj.ObjectName),
VersionSuspended: globalBucketVersioningSys.PrefixSuspended(bucket, dobj.ObjectName),
})
if err != nil && !isErrVersionNotFound(err) { // VersionNotFound would be reported by pool that object version is missing on.
logger.LogIf(ctx, fmt.Errorf("Unable to update replication metadata for %s/%s(%s): %s", bucket, dobj.ObjectName, versionID, err))
@@ -482,7 +513,7 @@ func replicateDelete(ctx context.Context, dobj DeletedObjectReplicationInfo, obj
}
}
func replicateDeleteToTarget(ctx context.Context, dobj DeletedObjectReplicationInfo, objectAPI ObjectLayer, tgt *TargetClient) (rinfo replicatedTargetInfo) {
func replicateDeleteToTarget(ctx context.Context, dobj DeletedObjectReplicationInfo, tgt *TargetClient) (rinfo replicatedTargetInfo) {
versionID := dobj.DeleteMarkerVersionID
if versionID == "" {
versionID = dobj.VersionID
@@ -830,7 +861,7 @@ func getReplicationAction(oi1 ObjectInfo, oi2 minio.ObjectInfo, opType replicati
// replicateObject replicates the specified version of the object to destination bucket
// The source object is then updated to reflect the replication status.
func replicateObject(ctx context.Context, ri ReplicateObjectInfo, objectAPI ObjectLayer, trigger string) {
func replicateObject(ctx context.Context, ri ReplicateObjectInfo, objectAPI ObjectLayer) {
var replicationStatus replication.StatusType
defer func() {
if replicationStatus.Empty() {
@@ -841,7 +872,7 @@ func replicateObject(ctx context.Context, ri ReplicateObjectInfo, objectAPI Obje
replicationStatus = ri.ReplicationStatus
}
auditLogInternal(ctx, ri.Bucket, ri.Name, AuditLogOptions{
Trigger: trigger,
Event: ri.EventType,
APIName: ReplicateObjectAPI,
VersionID: ri.VersionID,
Status: replicationStatus.String(),
@@ -963,6 +994,7 @@ func replicateObject(ctx context.Context, ri ReplicateObjectInfo, objectAPI Obje
// the target site is down. Leave it to scanner to catch up instead.
if rinfos.ReplicationStatus() != replication.Completed && ri.RetryCount < 1 {
ri.OpType = replication.HealReplicationType
ri.EventType = ReplicateMRF
ri.ReplicationStatusInternal = rinfos.ReplicationStatusInternal()
ri.RetryCount++
globalReplicationPool.queueReplicaFailedTask(ri)
@@ -1018,8 +1050,13 @@ func replicateObjectToTarget(ctx context.Context, ri ReplicateObjectInfo, object
return
}
versioned := globalBucketVersioningSys.PrefixEnabled(bucket, object)
versionSuspended := globalBucketVersioningSys.PrefixSuspended(bucket, object)
gr, err = objectAPI.GetObjectNInfo(ctx, bucket, object, nil, http.Header{}, readLock, ObjectOptions{
VersionID: objInfo.VersionID,
VersionID: objInfo.VersionID,
Versioned: versioned,
VersionSuspended: versionSuspended,
})
if err != nil {
sendEvent(eventArgs{
@@ -1259,6 +1296,7 @@ func filterReplicationStatusMetadata(metadata map[string]string) map[string]stri
type DeletedObjectReplicationInfo struct {
DeletedObject
Bucket string
EventType string
OpType replication.Type
ResetID string
TargetArn string
@@ -1281,12 +1319,15 @@ const (
// ReplicateMRF - audit trail for replication from Most Recent Failures (MRF) queue
ReplicateMRF = "replicate:mrf"
// ReplicateIncoming - audit trail indicating replication started [could be from incoming/existing/heal activity]
// ReplicateIncoming - audit trail of inline replication
ReplicateIncoming = "replicate:incoming"
// ReplicateIncomingDelete - audit trail of inline replication of deletes.
ReplicateIncomingDelete = "replicate:incoming:delete"
// ReplicateHeal - audit trail for healing of failed/pending replications
ReplicateHeal = "replicate:heal"
// ReplicateDelete - audit trail for delete replication
ReplicateDelete = "replicate:delete"
// ReplicateHealDelete - audit trail of healing of failed/pending delete replications.
ReplicateHealDelete = "replicate:heal:delete"
)
var (
@@ -1332,13 +1373,14 @@ func NewReplicationPool(ctx context.Context, o ObjectLayer, opts replicationPool
pool.ResizeWorkers(opts.Workers)
pool.ResizeFailedWorkers(opts.FailedWorkers)
go pool.AddExistingObjectReplicateWorker()
go pool.periodicResyncMetaSave(ctx, o)
go pool.updateResyncStatus(ctx, o)
return pool
}
// AddMRFWorker adds a pending/failed replication worker to handle requests that could not be queued
// to the other workers
func (p *ReplicationPool) AddMRFWorker() {
defer p.mrfWorkerWg.Done()
for {
select {
case <-p.ctx.Done():
@@ -1347,7 +1389,7 @@ func (p *ReplicationPool) AddMRFWorker() {
if !ok {
return
}
replicateObject(p.ctx, oi, p.objLayer, ReplicateMRF)
replicateObject(p.ctx, oi, p.objLayer)
case <-p.mrfWorkerKillCh:
return
}
@@ -1365,12 +1407,12 @@ func (p *ReplicationPool) AddWorker() {
if !ok {
return
}
replicateObject(p.ctx, oi, p.objLayer, ReplicateIncoming)
replicateObject(p.ctx, oi, p.objLayer)
case doi, ok := <-p.replicaDeleteCh:
if !ok {
return
}
replicateDelete(p.ctx, doi, p.objLayer, ReplicateDelete)
replicateDelete(p.ctx, doi, p.objLayer)
case <-p.workerKillCh:
return
}
@@ -1387,12 +1429,12 @@ func (p *ReplicationPool) AddExistingObjectReplicateWorker() {
if !ok {
return
}
replicateObject(p.ctx, oi, p.objLayer, ReplicateExisting)
replicateObject(p.ctx, oi, p.objLayer)
case doi, ok := <-p.existingReplicaDeleteCh:
if !ok {
return
}
replicateDelete(p.ctx, doi, p.objLayer, ReplicateExistingDelete)
replicateDelete(p.ctx, doi, p.objLayer)
}
}
}
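
The three worker loops above share one shape: each goroutine selects on its task channels plus a kill channel, so ResizeWorkers/ResizeFailedWorkers can shrink the pool simply by signaling kills. A minimal, self-contained sketch of that pattern (simplified types, not the actual ReplicationPool):

package main

import (
    "fmt"
    "sync"
)

func main() {
    work := make(chan int)
    kill := make(chan struct{})
    var wg sync.WaitGroup

    worker := func(id int) {
        defer wg.Done()
        for {
            select {
            case n, ok := <-work:
                if !ok {
                    return // channel closed: pool shutting down
                }
                fmt.Printf("worker %d handled task %d\n", id, n)
            case <-kill:
                return // pool resized down: this worker exits
            }
        }
    }
    for i := 0; i < 3; i++ {
        wg.Add(1)
        go worker(i)
    }
    kill <- struct{}{} // shrink the pool from 3 workers to 2
    for n := 0; n < 4; n++ {
        work <- n
    }
    close(work)
    wg.Wait()
}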
@@ -1450,7 +1492,7 @@ func (p *ReplicationPool) queueReplicaFailedTask(ri ReplicateObjectInfo) {
})
case p.mrfReplicaCh <- ri:
default:
logger.LogOnceIf(GlobalContext, fmt.Errorf("WARNING: Replication failed workers could not keep up with healing failures - consider increasing number of replication failed workers with `mc admin config set api replication_failed_workers=%d`", p.suggestedWorkers(true)), replicationSubsystem)
logger.LogOnceIf(GlobalContext, fmt.Errorf("WARNING: Unable to keep up retrying failed replication - we recommend increasing number of replication failed workers with `mc admin config set api replication_failed_workers=%d`", p.suggestedWorkers(true)), string(replicationSubsystem))
}
}
@@ -1476,7 +1518,7 @@ func (p *ReplicationPool) queueReplicaTask(ri ReplicateObjectInfo) {
})
case ch <- ri:
default:
logger.LogOnceIf(GlobalContext, fmt.Errorf("WARNING: Replication workers could not keep up with incoming traffic - consider increasing number of replication workers with `mc admin config set api replication_workers=%d`", p.suggestedWorkers(false)), replicationSubsystem)
logger.LogOnceIf(GlobalContext, fmt.Errorf("WARNING: Unable to keep up with incoming traffic - we recommend increasing number of replicate object workers with `mc admin config set api replication_workers=%d`", p.suggestedWorkers(false)), string(replicationSubsystem))
}
}
@@ -1513,7 +1555,7 @@ func (p *ReplicationPool) queueReplicaDeleteTask(doi DeletedObjectReplicationInf
})
case ch <- doi:
default:
logger.LogOnceIf(GlobalContext, fmt.Errorf("WARNING: Replication workers could not keep up with incoming traffic - consider increasing number of replication workers with `mc admin config set api replication_workers=%d`", p.suggestedWorkers(false)), replicationSubsystem)
logger.LogOnceIf(GlobalContext, fmt.Errorf("WARNING: Unable to keep up with incoming deletes - we recommend increasing number of replicate workers with `mc admin config set api replication_workers=%d`", p.suggestedWorkers(false)), string(replicationSubsystem))
}
}
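
Every queue function above funnels through the same non-blocking send: a select with a default branch, so a saturated channel drops the task (the MRF queue or the scanner retries it later) and logs the warning once, instead of stalling the API request path. The back-pressure choice in isolation:

package main

import "fmt"

// tryEnqueue drops the task when the queue is full rather than
// blocking the caller; a later heal/scan pass is expected to retry.
func tryEnqueue(queue chan string, task string) bool {
    select {
    case queue <- task:
        return true
    default:
        return false
    }
}

func main() {
    queue := make(chan string, 1)
    fmt.Println(tryEnqueue(queue, "bucket/obj1")) // true
    fmt.Println(tryEnqueue(queue, "bucket/obj2")) // false: full, dropped
}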
@@ -1531,16 +1573,21 @@ func initBackgroundReplication(ctx context.Context, objectAPI ObjectLayer) {
go globalReplicationStats.loadInitialReplicationMetrics(ctx)
}
type proxyResult struct {
Proxy bool
Err error
}
// get Reader from replication target if active-active replication is in place and
// this node returns a 404
func proxyGetToReplicationTarget(ctx context.Context, bucket, object string, rs *HTTPRangeSpec, h http.Header, opts ObjectOptions, proxyTargets *madmin.BucketTargets) (gr *GetObjectReader, proxy bool) {
tgt, oi, proxy := proxyHeadToRepTarget(ctx, bucket, object, opts, proxyTargets)
if !proxy {
return nil, false
func proxyGetToReplicationTarget(ctx context.Context, bucket, object string, rs *HTTPRangeSpec, h http.Header, opts ObjectOptions, proxyTargets *madmin.BucketTargets) (gr *GetObjectReader, proxy proxyResult, err error) {
tgt, oi, proxy := proxyHeadToRepTarget(ctx, bucket, object, rs, opts, proxyTargets)
if !proxy.Proxy {
return nil, proxy, nil
}
fn, off, length, err := NewGetObjectReader(rs, oi, opts)
fn, _, _, err := NewGetObjectReader(nil, oi, opts)
if err != nil {
return nil, false
return nil, proxy, err
}
gopts := miniogo.GetObjectOptions{
VersionID: opts.VersionID,
@@ -1548,33 +1595,47 @@ func proxyGetToReplicationTarget(ctx context.Context, bucket, object string, rs
Internal: miniogo.AdvancedGetOptions{
ReplicationProxyRequest: "true",
},
PartNumber: opts.PartNumber,
}
// get correct offsets for encrypted object
if off >= 0 && length >= 0 {
if err := gopts.SetRange(off, off+length-1); err != nil {
return nil, false
if rs != nil {
h, err := rs.ToHeader()
if err != nil {
return nil, proxy, err
}
gopts.Set(xhttp.Range, h)
}
// Make sure to match ETag when proxying.
if err = gopts.SetMatchETag(oi.ETag); err != nil {
return nil, false
return nil, proxy, err
}
c := miniogo.Core{Client: tgt.Client}
obj, _, _, err := c.GetObject(ctx, bucket, object, gopts)
obj, _, h, err := c.GetObject(ctx, bucket, object, gopts)
if err != nil {
return nil, false
return nil, proxy, err
}
closeReader := func() { obj.Close() }
reader, err := fn(obj, h, closeReader)
if err != nil {
return nil, false
return nil, proxy, err
}
reader.ObjInfo = oi.Clone()
return reader, true
if rs != nil {
contentSize, err := parseSizeFromContentRange(h)
if err != nil {
return nil, proxy, err
}
reader.ObjInfo.Size = contentSize
}
return reader, proxyResult{Proxy: true}, nil
}
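
Since ranged GETs are now forwarded upstream via rs.ToHeader(), the proxied reader only sees the requested slice; the full object size is recovered from the Content-Range of the ranged response, which is what the parseSizeFromContentRange call above is for. A sketch of such a parse, assuming the standard "bytes first-last/total" form (the helper below is illustrative, not the MinIO implementation):

package main

import (
    "fmt"
    "strconv"
    "strings"
)

// parseTotalFromContentRange recovers the full object size from a
// ranged response header such as "bytes 0-99/1234"; error handling
// is reduced to the minimum for illustration.
func parseTotalFromContentRange(cr string) (int64, error) {
    if !strings.HasPrefix(cr, "bytes ") {
        return -1, fmt.Errorf("unexpected Content-Range %q", cr)
    }
    idx := strings.LastIndexByte(cr, '/')
    if idx < 0 || idx == len(cr)-1 {
        return -1, fmt.Errorf("missing total length in %q", cr)
    }
    total := cr[idx+1:]
    if total == "*" { // server does not know the full length
        return -1, fmt.Errorf("unknown total length in %q", cr)
    }
    return strconv.ParseInt(total, 10, 64)
}

func main() {
    size, err := parseTotalFromContentRange("bytes 0-99/1234")
    fmt.Println(size, err) // 1234 <nil>
}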
func getproxyTargets(ctx context.Context, bucket, object string, opts ObjectOptions) (tgts *madmin.BucketTargets) {
func getProxyTargets(ctx context.Context, bucket, object string, opts ObjectOptions) (tgts *madmin.BucketTargets) {
if opts.VersionSuspended {
return &madmin.BucketTargets{}
}
cfg, err := getReplicationConfig(ctx, bucket)
if err != nil || cfg == nil {
return &madmin.BucketTargets{}
@@ -1590,11 +1651,11 @@ func getproxyTargets(ctx context.Context, bucket, object string, opts ObjectOpti
return tgts
}
func proxyHeadToRepTarget(ctx context.Context, bucket, object string, opts ObjectOptions, proxyTargets *madmin.BucketTargets) (tgt *TargetClient, oi ObjectInfo, proxy bool) {
func proxyHeadToRepTarget(ctx context.Context, bucket, object string, rs *HTTPRangeSpec, opts ObjectOptions, proxyTargets *madmin.BucketTargets) (tgt *TargetClient, oi ObjectInfo, proxy proxyResult) {
// this option is set when active-active replication is in place between site A -> B,
// and site B does not have the object yet.
if opts.ProxyRequest || (opts.ProxyHeaderSet && !opts.ProxyRequest) { // true only when site B sets MinIOSourceProxyRequest header
return nil, oi, false
return nil, oi, proxy
}
for _, t := range proxyTargets.Targets {
tgt = globalBucketTargetSys.GetRemoteTargetClient(ctx, t.Arn)
@@ -1612,9 +1673,22 @@ func proxyHeadToRepTarget(ctx context.Context, bucket, object string, opts Objec
Internal: miniogo.AdvancedGetOptions{
ReplicationProxyRequest: "true",
},
PartNumber: opts.PartNumber,
}
if rs != nil {
h, err := rs.ToHeader()
if err != nil {
logger.LogIf(ctx, fmt.Errorf("Invalid range header for %s/%s(%s) - %w", bucket, object, opts.VersionID, err))
continue
}
gopts.Set(xhttp.Range, h)
}
objInfo, err := tgt.StatObject(ctx, t.TargetBucket, object, gopts)
if err != nil {
if isErrInvalidRange(ErrorRespToObjectError(err, bucket, object)) {
return nil, oi, proxyResult{Err: err}
}
continue
}
@@ -1645,23 +1719,24 @@ func proxyHeadToRepTarget(ctx context.Context, bucket, object string, opts Objec
if ok {
oi.ContentEncoding = ce
}
return tgt, oi, true
return tgt, oi, proxyResult{Proxy: true}
}
return nil, oi, false
return nil, oi, proxy
}
// get object info from replication target if active-active replication is in place and
// this node returns a 404
func proxyHeadToReplicationTarget(ctx context.Context, bucket, object string, opts ObjectOptions, proxyTargets *madmin.BucketTargets) (oi ObjectInfo, proxy bool) {
_, oi, proxy = proxyHeadToRepTarget(ctx, bucket, object, opts, proxyTargets)
func proxyHeadToReplicationTarget(ctx context.Context, bucket, object string, rs *HTTPRangeSpec, opts ObjectOptions, proxyTargets *madmin.BucketTargets) (oi ObjectInfo, proxy proxyResult) {
_, oi, proxy = proxyHeadToRepTarget(ctx, bucket, object, rs, opts, proxyTargets)
return oi, proxy
}
func scheduleReplication(ctx context.Context, objInfo ObjectInfo, o ObjectLayer, dsc ReplicateDecision, opType replication.Type) {
ri := ReplicateObjectInfo{ObjectInfo: objInfo, OpType: opType, Dsc: dsc, EventType: ReplicateIncoming}
if dsc.Synchronous() {
replicateObject(ctx, ReplicateObjectInfo{ObjectInfo: objInfo, OpType: opType, Dsc: dsc}, o, ReplicateIncoming)
replicateObject(ctx, ri, o)
} else {
globalReplicationPool.queueReplicaTask(ReplicateObjectInfo{ObjectInfo: objInfo, OpType: opType, Dsc: dsc})
globalReplicationPool.queueReplicaTask(ri)
}
if sz, err := objInfo.GetActualSize(); err == nil {
for arn := range dsc.targetsMap {
@@ -1698,10 +1773,6 @@ func (c replicationConfig) Resync(ctx context.Context, oi ObjectInfo, dsc *Repli
if c.Empty() {
return
}
// existing object replication does not apply to un-versioned objects
if oi.VersionID == "" {
return
}
// Now overlay existing object replication choices for target
if oi.DeleteMarker {
@@ -1734,6 +1805,7 @@ func (c replicationConfig) Resync(ctx context.Context, oi ObjectInfo, dsc *Repli
objInfo.VersionPurgeStatusInternal = ""
objInfo.ReplicationStatus = ""
objInfo.VersionPurgeStatus = ""
delete(objInfo.UserDefined, xhttp.AmzBucketReplicationStatus)
resyncdsc := mustReplicate(ctx, oi.Bucket, oi.Name, getMustReplicateOptions(objInfo, replication.ExistingObjectReplicationType, ObjectOptions{}))
dsc = &resyncdsc
return c.resync(oi, dsc, tgtStatuses)
@@ -1802,9 +1874,26 @@ func resyncTarget(oi ObjectInfo, arn string, resetID string, resetBeforeDate tim
return
}
// get the most recent in-memory replication stats and data usage info from the crawler.
func getLatestReplicationStats(bucket string, u BucketUsageInfo) (s BucketReplicationStats) {
bucketStats := globalNotificationSys.GetClusterBucketStats(GlobalContext, bucket)
func getAllLatestReplicationStats(bucketsUsage map[string]BucketUsageInfo) (bucketsReplicationStats map[string]BucketReplicationStats) {
peerBucketStatsList := globalNotificationSys.GetClusterAllBucketStats(GlobalContext)
bucketsReplicationStats = make(map[string]BucketReplicationStats, len(bucketsUsage))
for bucket, u := range bucketsUsage {
bucketStats := make([]BucketStats, len(peerBucketStatsList))
for i, peerBucketStats := range peerBucketStatsList {
bucketStat, ok := peerBucketStats[bucket]
if !ok {
continue
}
bucketStats[i] = bucketStat
}
bucketsReplicationStats[bucket] = calculateBucketReplicationStats(bucket, u, bucketStats)
}
return bucketsReplicationStats
}
func calculateBucketReplicationStats(bucket string, u BucketUsageInfo, bucketStats []BucketStats) (s BucketReplicationStats) {
// accumulate cluster bucket stats
stats := make(map[string]*BucketReplicationStat)
var totReplicaSize int64
@@ -1873,17 +1962,22 @@ func getLatestReplicationStats(bucket string, u BucketUsageInfo) (s BucketReplic
return s
}
const resyncTimeInterval = time.Minute * 10
// get the most recent in-memory replication stats and data usage info from the crawler.
func getLatestReplicationStats(bucket string, u BucketUsageInfo) (s BucketReplicationStats) {
bucketStats := globalNotificationSys.GetClusterBucketStats(GlobalContext, bucket)
return calculateBucketReplicationStats(bucket, u, bucketStats)
}
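
getAllLatestReplicationStats changes the query shape: one GetClusterAllBucketStats call fetches every peer's full bucket map, and the per-bucket totals are then folded locally, instead of issuing one cluster RPC per bucket. A simplified sketch of that fan-in, with the stats reduced to two counters:

package main

import "fmt"

type stat struct{ Pending, Failed int64 }

// aggregate sums each bucket's stat across all peers in one pass
// over data fetched with a single cluster-wide call.
func aggregate(peers []map[string]stat, buckets []string) map[string]stat {
    out := make(map[string]stat, len(buckets))
    for _, b := range buckets {
        var s stat
        for _, p := range peers {
            if ps, ok := p[b]; ok {
                s.Pending += ps.Pending
                s.Failed += ps.Failed
            }
        }
        out[b] = s
    }
    return out
}

func main() {
    peers := []map[string]stat{
        {"photos": {Pending: 2}},
        {"photos": {Failed: 1}, "docs": {Pending: 5}},
    }
    fmt.Println(aggregate(peers, []string{"photos", "docs"}))
}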
// periodicResyncMetaSave saves in-memory resync meta stats to disk in periodic intervals
func (p *ReplicationPool) periodicResyncMetaSave(ctx context.Context, objectAPI ObjectLayer) {
const resyncTimeInterval = time.Minute * 1
// updateResyncStatus persists in-memory resync metadata stats to disk at periodic intervals
func (p *ReplicationPool) updateResyncStatus(ctx context.Context, objectAPI ObjectLayer) {
resyncTimer := time.NewTimer(resyncTimeInterval)
defer resyncTimer.Stop()
for {
select {
case <-resyncTimer.C:
resyncTimer.Reset(resyncTimeInterval)
now := UTCNow()
p.resyncState.RLock()
for bucket, brs := range p.resyncState.statusMap {
@@ -1898,12 +1992,14 @@ func (p *ReplicationPool) periodicResyncMetaSave(ctx context.Context, objectAPI
if updt {
brs.LastUpdate = now
if err := saveResyncStatus(ctx, bucket, brs, objectAPI); err != nil {
logger.LogIf(ctx, fmt.Errorf("Could not save resync metadata to disk for %s - %w", bucket, err))
logger.LogIf(ctx, fmt.Errorf("Could not save resync metadata to drive for %s - %w", bucket, err))
continue
}
}
}
p.resyncState.RUnlock()
resyncTimer.Reset(resyncTimeInterval)
case <-ctx.Done():
// server could be restarting - need
// to exit immediately
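
Note where resyncTimer.Reset moved: the loop now re-arms the timer only after the save pass finishes, so a slow flush cannot cause ticks to stack back to back. The pattern in isolation:

package main

import (
    "context"
    "fmt"
    "time"
)

func main() {
    ctx, cancel := context.WithTimeout(context.Background(), 350*time.Millisecond)
    defer cancel()

    t := time.NewTimer(100 * time.Millisecond)
    defer t.Stop()
    for {
        select {
        case <-t.C:
            fmt.Println("persist resync metadata") // may take a while
            t.Reset(100 * time.Millisecond)        // re-arm only after the work is done
        case <-ctx.Done():
            return // server could be restarting - exit immediately
        }
    }
}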
@@ -1923,6 +2019,7 @@ func resyncBucket(ctx context.Context, bucket, arn string, heal bool, objectAPI
st.EndTime = UTCNow()
st.ResyncStatus = resyncStatus
m.TargetsMap[arn] = st
globalReplicationPool.resyncState.statusMap[bucket] = m
globalReplicationPool.resyncState.Unlock()
}()
// Allocate new results channel to receive ObjectInfo.
@@ -1983,7 +2080,6 @@ func resyncBucket(ctx context.Context, bucket, arn string, heal bool, objectAPI
}
if roi.DeleteMarker || !roi.VersionPurgeStatus.Empty() {
versionID := ""
dmVersionID := ""
if roi.VersionPurgeStatus.Empty() {
@@ -2001,13 +2097,15 @@ func resyncBucket(ctx context.Context, bucket, arn string, heal bool, objectAPI
DeleteMarkerMTime: DeleteMarkerMTime{roi.ModTime},
DeleteMarker: roi.DeleteMarker,
},
Bucket: roi.Bucket,
OpType: replication.ExistingObjectReplicationType,
Bucket: roi.Bucket,
OpType: replication.ExistingObjectReplicationType,
EventType: ReplicateExistingDelete,
}
replicateDelete(ctx, doi, objectAPI, ReplicateDelete)
replicateDelete(ctx, doi, objectAPI)
} else {
roi.OpType = replication.ExistingObjectReplicationType
replicateObject(ctx, roi, objectAPI, ReplicateExisting)
roi.EventType = ReplicateExisting
replicateObject(ctx, roi, objectAPI)
}
_, err = tgt.StatObject(ctx, tgt.Bucket, roi.Name, miniogo.StatObjectOptions{
VersionID: roi.VersionID,
@@ -2030,6 +2128,7 @@ func resyncBucket(ctx context.Context, bucket, arn string, heal bool, objectAPI
st.ReplicatedSize += roi.Size
}
m.TargetsMap[arn] = st
globalReplicationPool.resyncState.statusMap[bucket] = m
globalReplicationPool.resyncState.Unlock()
}
resyncStatus = ResyncCompleted
@@ -2110,10 +2209,6 @@ func (p *ReplicationPool) initResync(ctx context.Context, buckets []BucketInfo,
if objAPI == nil {
return errServerNotInitialized
}
// replication applies only to erasure coded setups
if !globalIsErasure {
return nil
}
// Load bucket metadata sys in background
go p.loadResync(ctx, buckets, objAPI)
return nil
@@ -2124,18 +2219,21 @@ func (p *ReplicationPool) loadResync(ctx context.Context, buckets []BucketInfo,
for index := range buckets {
meta, err := loadBucketResyncMetadata(ctx, buckets[index].Name, objAPI)
if err != nil {
if errors.Is(err, errVolumeNotFound) {
meta = newBucketResyncStatus(buckets[index].Name)
} else {
if !errors.Is(err, errVolumeNotFound) {
logger.LogIf(ctx, err)
continue
}
continue
}
p.resyncState.Lock()
p.resyncState.statusMap[buckets[index].Name] = meta
p.resyncState.Unlock()
}
for index := range buckets {
bucket := buckets[index].Name
p.resyncState.RLock()
m, ok := p.resyncState.statusMap[bucket]
p.resyncState.RUnlock()
if ok {
for arn, st := range m.TargetsMap {
if st.ResyncStatus == ResyncFailed || st.ResyncStatus == ResyncStarted {
@@ -2202,3 +2300,176 @@ func saveResyncStatus(ctx context.Context, bucket string, brs BucketReplicationR
configFile := path.Join(bucketMetaPrefix, bucket, replicationDir, resyncFileName)
return saveConfig(ctx, objectAPI, configFile, buf)
}
// getReplicationDiff returns unreplicated objects in a channel
func getReplicationDiff(ctx context.Context, objAPI ObjectLayer, bucket string, opts madmin.ReplDiffOpts) (diffCh chan madmin.DiffInfo, err error) {
objInfoCh := make(chan ObjectInfo)
if err := objAPI.Walk(ctx, bucket, opts.Prefix, objInfoCh, ObjectOptions{}); err != nil {
logger.LogIf(ctx, err)
return diffCh, err
}
cfg, err := getReplicationConfig(ctx, bucket)
if err != nil {
logger.LogIf(ctx, err)
return diffCh, err
}
tgts, err := globalBucketTargetSys.ListBucketTargets(ctx, bucket)
if err != nil {
logger.LogIf(ctx, err)
return diffCh, err
}
rcfg := replicationConfig{
Config: cfg,
remotes: tgts,
}
diffCh = make(chan madmin.DiffInfo, 4000)
go func() {
defer close(diffCh)
for obj := range objInfoCh {
// Ignore object prefixes which are excluded
// from versioning via the MinIO bucket versioning extension.
if globalBucketVersioningSys.PrefixSuspended(bucket, obj.Name) {
continue
}
roi := getHealReplicateObjectInfo(obj, rcfg)
switch roi.ReplicationStatus {
case replication.Completed, replication.Replica:
if !opts.Verbose {
continue
}
fallthrough
default:
// ignore pre-existing objects that don't satisfy replication rule(s)
if roi.ReplicationStatus.Empty() && !roi.ExistingObjResync.mustResync() {
continue
}
tgtsMap := make(map[string]madmin.TgtDiffInfo)
for arn, st := range roi.TargetStatuses {
if opts.ARN == "" || opts.ARN == arn {
if !opts.Verbose && (st == replication.Completed || st == replication.Replica) {
continue
}
tgtsMap[arn] = madmin.TgtDiffInfo{
ReplicationStatus: st.String(),
}
}
}
for arn, st := range roi.TargetPurgeStatuses {
if opts.ARN == "" || opts.ARN == arn {
if !opts.Verbose && st == Complete {
continue
}
t, ok := tgtsMap[arn]
if !ok {
t = madmin.TgtDiffInfo{}
}
t.DeleteReplicationStatus = string(st)
tgtsMap[arn] = t
}
}
select {
case diffCh <- madmin.DiffInfo{
Object: obj.Name,
VersionID: obj.VersionID,
LastModified: obj.ModTime,
IsDeleteMarker: obj.DeleteMarker,
ReplicationStatus: string(roi.ReplicationStatus),
DeleteReplicationStatus: string(roi.VersionPurgeStatus),
ReplicationTimestamp: roi.ReplicationTimestamp,
Targets: tgtsMap,
}:
case <-ctx.Done():
return
}
}
}
}()
return diffCh, nil
}
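
getReplicationDiff hands results back on a buffered channel that the producer goroutine owns and closes, selecting against ctx.Done so an abandoned admin client cannot leak the bucket walk. The core of that streaming pattern:

package main

import (
    "context"
    "fmt"
)

// produce owns and closes out; the select lets a cancelled consumer
// stop the producer instead of leaking it behind a full channel.
func produce(ctx context.Context, items []string) <-chan string {
    out := make(chan string, 4) // buffer smooths bursts, as diffCh does with 4000
    go func() {
        defer close(out)
        for _, it := range items {
            select {
            case out <- it:
            case <-ctx.Done():
                return
            }
        }
    }()
    return out
}

func main() {
    for v := range produce(context.Background(), []string{"a", "b", "c"}) {
        fmt.Println(v)
    }
}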
// QueueReplicationHeal is a wrapper for queueReplicationHeal
func QueueReplicationHeal(ctx context.Context, bucket string, oi ObjectInfo) {
// un-versioned case
if oi.VersionID == "" {
return
}
rcfg, _, _ := globalBucketMetadataSys.GetReplicationConfig(ctx, bucket)
tgts, _ := globalBucketTargetSys.ListBucketTargets(ctx, bucket)
queueReplicationHeal(ctx, bucket, oi, replicationConfig{
Config: rcfg,
remotes: tgts,
})
}
// queueReplicationHeal enqueues objects that failed replication or are eligible for resync
// through an ongoing resync operation or via the existing-objects replication configuration.
func queueReplicationHeal(ctx context.Context, bucket string, oi ObjectInfo, rcfg replicationConfig) (roi ReplicateObjectInfo) {
// un-versioned case
if oi.VersionID == "" {
return roi
}
if rcfg.Config == nil || rcfg.remotes == nil {
return roi
}
roi = getHealReplicateObjectInfo(oi, rcfg)
if !roi.Dsc.ReplicateAny() {
return
}
// early return if replication already done, otherwise we need to determine if this
// version is an existing object that needs healing.
if oi.ReplicationStatus == replication.Completed && oi.VersionPurgeStatus.Empty() && !roi.ExistingObjResync.mustResync() {
return
}
if roi.DeleteMarker || !roi.VersionPurgeStatus.Empty() {
versionID := ""
dmVersionID := ""
if roi.VersionPurgeStatus.Empty() {
dmVersionID = roi.VersionID
} else {
versionID = roi.VersionID
}
dv := DeletedObjectReplicationInfo{
DeletedObject: DeletedObject{
ObjectName: roi.Name,
DeleteMarkerVersionID: dmVersionID,
VersionID: versionID,
ReplicationState: roi.getReplicationState(roi.Dsc.String(), versionID, true),
DeleteMarkerMTime: DeleteMarkerMTime{roi.ModTime},
DeleteMarker: roi.DeleteMarker,
},
Bucket: roi.Bucket,
OpType: replication.HealReplicationType,
EventType: ReplicateHealDelete,
}
// heal delete marker replication failure or versioned delete replication failure
if roi.ReplicationStatus == replication.Pending ||
roi.ReplicationStatus == replication.Failed ||
roi.VersionPurgeStatus == Failed || roi.VersionPurgeStatus == Pending {
globalReplicationPool.queueReplicaDeleteTask(dv)
return
}
// if replication status is Complete on DeleteMarker and existing object resync required
if roi.ExistingObjResync.mustResync() && (roi.ReplicationStatus == replication.Completed || roi.ReplicationStatus.Empty()) {
queueReplicateDeletesWrapper(dv, roi.ExistingObjResync)
return
}
return
}
if roi.ExistingObjResync.mustResync() {
roi.OpType = replication.ExistingObjectReplicationType
}
switch roi.ReplicationStatus {
case replication.Pending, replication.Failed:
roi.EventType = ReplicateHeal
globalReplicationPool.queueReplicaTask(roi)
return
}
if roi.ExistingObjResync.mustResync() {
roi.EventType = ReplicateExisting
globalReplicationPool.queueReplicaTask(roi)
}
return
}


@@ -26,7 +26,7 @@ import (
// ReplicationLatency holds information of bucket operations latency, such as uploads
type ReplicationLatency struct {
// Single & Multipart PUTs latency
UploadHistogram LastMinuteLatencies
UploadHistogram LastMinuteHistogram
}
// Merge two replication latencies into a new one
@@ -41,7 +41,7 @@ func (rl ReplicationLatency) getUploadLatency() (ret map[string]uint64) {
avg := rl.UploadHistogram.GetAvgData()
for k, v := range avg {
// Convert nanoseconds to milliseconds
ret[sizeTagToString(k)] = v.avg() / uint64(time.Millisecond)
ret[sizeTagToString(k)] = uint64(v.avg() / time.Millisecond)
}
return
}
@@ -51,6 +51,9 @@ func (rl *ReplicationLatency) update(size int64, duration time.Duration) {
rl.UploadHistogram.Add(size, duration)
}
// BucketStatsMap captures bucket statistics for all buckets
type BucketStatsMap map[string]BucketStats
// BucketStats bucket statistics
type BucketStats struct {
ReplicationStats BucketReplicationStats


@@ -792,6 +792,183 @@ func (z *BucketStats) Msgsize() (s int) {
return
}
// DecodeMsg implements msgp.Decodable
func (z *BucketStatsMap) DecodeMsg(dc *msgp.Reader) (err error) {
var zb0003 uint32
zb0003, err = dc.ReadMapHeader()
if err != nil {
err = msgp.WrapError(err)
return
}
if (*z) == nil {
(*z) = make(BucketStatsMap, zb0003)
} else if len((*z)) > 0 {
for key := range *z {
delete((*z), key)
}
}
for zb0003 > 0 {
zb0003--
var zb0001 string
var zb0002 BucketStats
zb0001, err = dc.ReadString()
if err != nil {
err = msgp.WrapError(err)
return
}
var field []byte
_ = field
var zb0004 uint32
zb0004, err = dc.ReadMapHeader()
if err != nil {
err = msgp.WrapError(err, zb0001)
return
}
for zb0004 > 0 {
zb0004--
field, err = dc.ReadMapKeyPtr()
if err != nil {
err = msgp.WrapError(err, zb0001)
return
}
switch msgp.UnsafeString(field) {
case "ReplicationStats":
err = zb0002.ReplicationStats.DecodeMsg(dc)
if err != nil {
err = msgp.WrapError(err, zb0001, "ReplicationStats")
return
}
default:
err = dc.Skip()
if err != nil {
err = msgp.WrapError(err, zb0001)
return
}
}
}
(*z)[zb0001] = zb0002
}
return
}
// EncodeMsg implements msgp.Encodable
func (z BucketStatsMap) EncodeMsg(en *msgp.Writer) (err error) {
err = en.WriteMapHeader(uint32(len(z)))
if err != nil {
err = msgp.WrapError(err)
return
}
for zb0005, zb0006 := range z {
err = en.WriteString(zb0005)
if err != nil {
err = msgp.WrapError(err)
return
}
// map header, size 1
// write "ReplicationStats"
err = en.Append(0x81, 0xb0, 0x52, 0x65, 0x70, 0x6c, 0x69, 0x63, 0x61, 0x74, 0x69, 0x6f, 0x6e, 0x53, 0x74, 0x61, 0x74, 0x73)
if err != nil {
return
}
err = zb0006.ReplicationStats.EncodeMsg(en)
if err != nil {
err = msgp.WrapError(err, zb0005, "ReplicationStats")
return
}
}
return
}
// MarshalMsg implements msgp.Marshaler
func (z BucketStatsMap) MarshalMsg(b []byte) (o []byte, err error) {
o = msgp.Require(b, z.Msgsize())
o = msgp.AppendMapHeader(o, uint32(len(z)))
for zb0005, zb0006 := range z {
o = msgp.AppendString(o, zb0005)
// map header, size 1
// string "ReplicationStats"
o = append(o, 0x81, 0xb0, 0x52, 0x65, 0x70, 0x6c, 0x69, 0x63, 0x61, 0x74, 0x69, 0x6f, 0x6e, 0x53, 0x74, 0x61, 0x74, 0x73)
o, err = zb0006.ReplicationStats.MarshalMsg(o)
if err != nil {
err = msgp.WrapError(err, zb0005, "ReplicationStats")
return
}
}
return
}
// UnmarshalMsg implements msgp.Unmarshaler
func (z *BucketStatsMap) UnmarshalMsg(bts []byte) (o []byte, err error) {
var zb0003 uint32
zb0003, bts, err = msgp.ReadMapHeaderBytes(bts)
if err != nil {
err = msgp.WrapError(err)
return
}
if (*z) == nil {
(*z) = make(BucketStatsMap, zb0003)
} else if len((*z)) > 0 {
for key := range *z {
delete((*z), key)
}
}
for zb0003 > 0 {
var zb0001 string
var zb0002 BucketStats
zb0003--
zb0001, bts, err = msgp.ReadStringBytes(bts)
if err != nil {
err = msgp.WrapError(err)
return
}
var field []byte
_ = field
var zb0004 uint32
zb0004, bts, err = msgp.ReadMapHeaderBytes(bts)
if err != nil {
err = msgp.WrapError(err, zb0001)
return
}
for zb0004 > 0 {
zb0004--
field, bts, err = msgp.ReadMapKeyZC(bts)
if err != nil {
err = msgp.WrapError(err, zb0001)
return
}
switch msgp.UnsafeString(field) {
case "ReplicationStats":
bts, err = zb0002.ReplicationStats.UnmarshalMsg(bts)
if err != nil {
err = msgp.WrapError(err, zb0001, "ReplicationStats")
return
}
default:
bts, err = msgp.Skip(bts)
if err != nil {
err = msgp.WrapError(err, zb0001)
return
}
}
}
(*z)[zb0001] = zb0002
}
o = bts
return
}
// Msgsize returns an upper bound estimate of the number of bytes occupied by the serialized message
func (z BucketStatsMap) Msgsize() (s int) {
s = msgp.MapHeaderSize
if z != nil {
for zb0005, zb0006 := range z {
_ = zb0006
s += msgp.StringPrefixSize + len(zb0005) + 1 + 17 + zb0006.ReplicationStats.Msgsize()
}
}
return
}
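
The generated codec stores a BucketStatsMap as a msgpack map keyed by bucket name, each value a one-field object. Any type with these generated methods can be checked with a marshal/unmarshal round-trip, the same invariant the tests below assert; a generic sketch against the tinylib/msgp interfaces:

package main

import (
    "fmt"

    "github.com/tinylib/msgp/msgp"
)

// roundTrip marshals v and unmarshals the bytes back into it,
// verifying that no trailing bytes remain after decoding.
func roundTrip(v interface {
    msgp.Marshaler
    msgp.Unmarshaler
}) error {
    b, err := v.MarshalMsg(nil)
    if err != nil {
        return err
    }
    left, err := v.UnmarshalMsg(b)
    if err != nil {
        return err
    }
    if len(left) != 0 {
        return fmt.Errorf("%d trailing bytes after decode", len(left))
    }
    return nil
}

func main() {
    var m msgp.Raw // msgp.Raw implements both interfaces
    fmt.Println(roundTrip(&m))
}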
// DecodeMsg implements msgp.Decodable
func (z *ReplicationLatency) DecodeMsg(dc *msgp.Reader) (err error) {
var field []byte


@@ -348,6 +348,119 @@ func BenchmarkDecodeBucketStats(b *testing.B) {
}
}
func TestMarshalUnmarshalBucketStatsMap(t *testing.T) {
v := BucketStatsMap{}
bts, err := v.MarshalMsg(nil)
if err != nil {
t.Fatal(err)
}
left, err := v.UnmarshalMsg(bts)
if err != nil {
t.Fatal(err)
}
if len(left) > 0 {
t.Errorf("%d bytes left over after UnmarshalMsg(): %q", len(left), left)
}
left, err = msgp.Skip(bts)
if err != nil {
t.Fatal(err)
}
if len(left) > 0 {
t.Errorf("%d bytes left over after Skip(): %q", len(left), left)
}
}
func BenchmarkMarshalMsgBucketStatsMap(b *testing.B) {
v := BucketStatsMap{}
b.ReportAllocs()
b.ResetTimer()
for i := 0; i < b.N; i++ {
v.MarshalMsg(nil)
}
}
func BenchmarkAppendMsgBucketStatsMap(b *testing.B) {
v := BucketStatsMap{}
bts := make([]byte, 0, v.Msgsize())
bts, _ = v.MarshalMsg(bts[0:0])
b.SetBytes(int64(len(bts)))
b.ReportAllocs()
b.ResetTimer()
for i := 0; i < b.N; i++ {
bts, _ = v.MarshalMsg(bts[0:0])
}
}
func BenchmarkUnmarshalBucketStatsMap(b *testing.B) {
v := BucketStatsMap{}
bts, _ := v.MarshalMsg(nil)
b.ReportAllocs()
b.SetBytes(int64(len(bts)))
b.ResetTimer()
for i := 0; i < b.N; i++ {
_, err := v.UnmarshalMsg(bts)
if err != nil {
b.Fatal(err)
}
}
}
func TestEncodeDecodeBucketStatsMap(t *testing.T) {
v := BucketStatsMap{}
var buf bytes.Buffer
msgp.Encode(&buf, &v)
m := v.Msgsize()
if buf.Len() > m {
t.Log("WARNING: TestEncodeDecodeBucketStatsMap Msgsize() is inaccurate")
}
vn := BucketStatsMap{}
err := msgp.Decode(&buf, &vn)
if err != nil {
t.Error(err)
}
buf.Reset()
msgp.Encode(&buf, &v)
err = msgp.NewReader(&buf).Skip()
if err != nil {
t.Error(err)
}
}
func BenchmarkEncodeBucketStatsMap(b *testing.B) {
v := BucketStatsMap{}
var buf bytes.Buffer
msgp.Encode(&buf, &v)
b.SetBytes(int64(buf.Len()))
en := msgp.NewWriter(msgp.Nowhere)
b.ReportAllocs()
b.ResetTimer()
for i := 0; i < b.N; i++ {
v.EncodeMsg(en)
}
en.Flush()
}
func BenchmarkDecodeBucketStatsMap(b *testing.B) {
v := BucketStatsMap{}
var buf bytes.Buffer
msgp.Encode(&buf, &v)
b.SetBytes(int64(buf.Len()))
rd := msgp.NewEndlessReader(buf.Bytes(), b)
dc := msgp.NewReader(rd)
b.ReportAllocs()
b.ResetTimer()
for i := 0; i < b.N; i++ {
err := v.DecodeMsg(dc)
if err != nil {
b.Fatal(err)
}
}
}
func TestMarshalUnmarshalReplicationLatency(t *testing.T) {
v := ReplicationLatency{}
bts, err := v.MarshalMsg(nil)


@@ -19,13 +19,12 @@ package cmd
import (
"context"
"net/http"
"sync"
"time"
jsoniter "github.com/json-iterator/go"
"github.com/minio/madmin-go"
minio "github.com/minio/minio-go/v7"
"github.com/minio/minio-go/v7"
miniogo "github.com/minio/minio-go/v7"
"github.com/minio/minio-go/v7/pkg/credentials"
"github.com/minio/minio/internal/bucket/replication"
@@ -120,9 +119,6 @@ func (sys *BucketTargetSys) SetTarget(ctx context.Context, bucket string, tgt *m
return BucketRemoteConnectionErr{Bucket: tgt.TargetBucket, Err: err}
}
if tgt.Type == madmin.ReplicationService {
if !globalIsErasure {
return NotImplemented{Message: "Replication is not implemented in " + getMinioMode()}
}
if !globalBucketVersioningSys.Enabled(bucket) {
return BucketReplicationSourceNotVersioned{Bucket: bucket}
}
@@ -184,9 +180,6 @@ func (sys *BucketTargetSys) RemoveTarget(ctx context.Context, bucket, arnStr str
if globalIsGateway {
return nil
}
if !globalIsErasure {
return NotImplemented{Message: "Replication is not implemented in " + getMinioMode()}
}
if arnStr == "" {
return BucketRemoteArnInvalid{Bucket: bucket}
@@ -201,7 +194,7 @@ func (sys *BucketTargetSys) RemoveTarget(ctx context.Context, bucket, arnStr str
// reject removal of remote target if replication configuration is present
rcfg, err := getReplicationConfig(ctx, bucket)
if err == nil {
for _, tgtArn := range rcfg.FilterTargetArns(replication.ObjectOpts{}) {
for _, tgtArn := range rcfg.FilterTargetArns(replication.ObjectOpts{OpType: replication.AllReplicationType}) {
if err == nil && (tgtArn == arnStr || rcfg.RoleArn == arnStr) {
sys.RLock()
_, ok := sys.arnRemotesMap[arnStr]
@@ -279,17 +272,20 @@ func (sys *BucketTargetSys) UpdateAllTargets(bucket string, tgts *madmin.BucketT
}
sys.Lock()
defer sys.Unlock()
if tgts == nil || tgts.Empty() {
// remove target and arn association
if tgts, ok := sys.targetsMap[bucket]; ok {
for _, t := range tgts {
if tgt, ok := sys.arnRemotesMap[t.Arn]; ok && tgt.healthCancelFn != nil {
tgt.healthCancelFn()
}
delete(sys.arnRemotesMap, t.Arn)
// Remove existing target and ARN association
if tgts, ok := sys.targetsMap[bucket]; ok {
for _, t := range tgts {
if tgt, ok := sys.arnRemotesMap[t.Arn]; ok && tgt.healthCancelFn != nil {
tgt.healthCancelFn()
}
delete(sys.arnRemotesMap, t.Arn)
}
delete(sys.targetsMap, bucket)
}
// No need for more if not adding anything
if tgts == nil || tgts.Empty() {
sys.updateBandwidthLimit(bucket, 0)
return
}
@@ -331,25 +327,16 @@ func (sys *BucketTargetSys) set(bucket BucketInfo, meta BucketMetadata) {
sys.targetsMap[bucket.Name] = cfg.Targets
}
// getRemoteTargetInstanceTransport contains a singleton roundtripper.
var (
getRemoteTargetInstanceTransport http.RoundTripper
getRemoteTargetInstanceTransportOnce sync.Once
)
// Returns a minio-go Client configured to access remote host described in replication target config.
func (sys *BucketTargetSys) getRemoteTargetClient(tcfg *madmin.BucketTarget) (*TargetClient, error) {
config := tcfg.Credentials
creds := credentials.NewStaticV4(config.AccessKey, config.SecretKey, "")
getRemoteTargetInstanceTransportOnce.Do(func() {
getRemoteTargetInstanceTransport = NewRemoteTargetHTTPTransport()
})
api, err := minio.New(tcfg.Endpoint, &miniogo.Options{
Creds: creds,
Secure: tcfg.Secure,
Region: tcfg.Region,
Transport: getRemoteTargetInstanceTransport,
Transport: globalRemoteTargetTransport,
})
if err != nil {
return nil, err
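
The sync.Once-guarded transport singleton is gone; all remote-target clients now share the package-level globalRemoteTargetTransport, which means one idle-connection pool for every target instead of per-client setup. A hedged sketch of the idea with minio-go (the transport settings here are illustrative, not MinIO's tuned defaults):

package main

import (
    "fmt"
    "net/http"
    "time"

    "github.com/minio/minio-go/v7"
    "github.com/minio/minio-go/v7/pkg/credentials"
)

// A single shared Transport gives all remote-target clients one
// connection pool to reuse.
var remoteTransport http.RoundTripper = &http.Transport{
    MaxIdleConnsPerHost: 16,
    IdleConnTimeout:     90 * time.Second,
}

func main() {
    client, err := minio.New("replica.example.com:9000", &minio.Options{
        Creds:     credentials.NewStaticV4("ACCESS", "SECRET", ""),
        Secure:    true,
        Transport: remoteTransport,
    })
    if err != nil {
        fmt.Println(err)
        return
    }
    fmt.Println("remote target endpoint:", client.EndpointURL())
}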


@@ -18,12 +18,14 @@
package cmd
import (
"encoding/base64"
"encoding/xml"
"io"
"net/http"
humanize "github.com/dustin/go-humanize"
"github.com/gorilla/mux"
"github.com/minio/madmin-go"
"github.com/minio/minio/internal/bucket/versioning"
"github.com/minio/minio/internal/logger"
"github.com/minio/pkg/bucket/policy"
@@ -63,28 +65,28 @@ func (api objectAPIHandlers) PutBucketVersioningHandler(w http.ResponseWriter, r
return
}
if globalSiteReplicationSys.isEnabled() {
if globalSiteReplicationSys.isEnabled() && !v.Enabled() {
writeErrorResponse(ctx, w, APIError{
Code: "InvalidBucketState",
Description: "Cluster replication is enabled for this site, so the versioning state cannot be changed.",
HTTPStatusCode: http.StatusConflict,
Description: "Cluster replication is enabled on this site, versioning cannot be suspended on bucket.",
HTTPStatusCode: http.StatusBadRequest,
}, r.URL)
return
}
if rcfg, _ := globalBucketObjectLockSys.Get(bucket); rcfg.LockEnabled && v.Suspended() {
if rcfg, _ := globalBucketObjectLockSys.Get(bucket); rcfg.LockEnabled && (v.Suspended() || v.PrefixesExcluded()) {
writeErrorResponse(ctx, w, APIError{
Code: "InvalidBucketState",
Description: "An Object Lock configuration is present on this bucket, so the versioning state cannot be changed.",
HTTPStatusCode: http.StatusConflict,
Description: "An Object Lock configuration is present on this bucket, versioning cannot be suspended.",
HTTPStatusCode: http.StatusBadRequest,
}, r.URL)
return
}
if _, err := getReplicationConfig(ctx, bucket); err == nil && v.Suspended() {
writeErrorResponse(ctx, w, APIError{
Code: "InvalidBucketState",
Description: "A replication configuration is present on this bucket, so the versioning state cannot be changed.",
HTTPStatusCode: http.StatusConflict,
Description: "A replication configuration is present on this bucket, bucket wide versioning cannot be suspended.",
HTTPStatusCode: http.StatusBadRequest,
}, r.URL)
return
}
@@ -95,7 +97,23 @@ func (api objectAPIHandlers) PutBucketVersioningHandler(w http.ResponseWriter, r
return
}
if err = globalBucketMetadataSys.Update(ctx, bucket, bucketVersioningConfig, configData); err != nil {
updatedAt, err := globalBucketMetadataSys.Update(ctx, bucket, bucketVersioningConfig, configData)
if err != nil {
writeErrorResponse(ctx, w, toAPIError(ctx, err), r.URL)
return
}
// Call site replication hook.
//
// We encode the xml bytes as base64 to ensure there are no encoding
// errors.
cfgStr := base64.StdEncoding.EncodeToString(configData)
if err = globalSiteReplicationSys.BucketMetaHook(ctx, madmin.SRBucketMeta{
Type: madmin.SRBucketMetaTypeVersionConfig,
Bucket: bucket,
Versioning: &cfgStr,
UpdatedAt: updatedAt,
}); err != nil {
writeErrorResponse(ctx, w, toAPIError(ctx, err), r.URL)
return
}
@@ -125,7 +143,7 @@ func (api objectAPIHandlers) GetBucketVersioningHandler(w http.ResponseWriter, r
}
// Check if bucket exists.
if _, err := objectAPI.GetBucketInfo(ctx, bucket); err != nil {
if _, err := objectAPI.GetBucketInfo(ctx, bucket, BucketOptions{}); err != nil {
writeErrorResponse(ctx, w, toAPIError(ctx, err), r.URL)
return
}


@@ -17,44 +17,69 @@
package cmd
import "github.com/minio/minio/internal/bucket/versioning"
import (
"strings"
"github.com/minio/minio/internal/bucket/versioning"
"github.com/minio/minio/internal/logger"
)
// BucketVersioningSys - bucket versioning subsystem.
type BucketVersioningSys struct{}
// Enabled returns true if versioning is enabled on the given bucket
func (sys *BucketVersioningSys) Enabled(bucket string) bool {
vc, err := globalBucketMetadataSys.GetVersioningConfig(bucket)
vc, err := sys.Get(bucket)
if err != nil {
return false
logger.CriticalIf(GlobalContext, err)
}
return vc.Enabled()
}
// PrefixEnabled returns true if versioning is enabled at the bucket level and
// the given prefix doesn't match any excluded-prefix pattern. This is
// part of a MinIO versioning configuration extension.
func (sys *BucketVersioningSys) PrefixEnabled(bucket, prefix string) bool {
vc, err := sys.Get(bucket)
if err != nil {
logger.CriticalIf(GlobalContext, err)
}
return vc.PrefixEnabled(prefix)
}
// Suspended returns true if versioning is suspended on the given bucket
func (sys *BucketVersioningSys) Suspended(bucket string) bool {
vc, err := globalBucketMetadataSys.GetVersioningConfig(bucket)
vc, err := sys.Get(bucket)
if err != nil {
return false
logger.CriticalIf(GlobalContext, err)
}
return vc.Suspended()
}
// PrefixSuspended returns true if the given prefix matches an excluded prefix
// pattern. This is part of a MinIO versioning configuration extension.
func (sys *BucketVersioningSys) PrefixSuspended(bucket, prefix string) bool {
vc, err := sys.Get(bucket)
if err != nil {
logger.CriticalIf(GlobalContext, err)
}
return vc.PrefixSuspended(prefix)
}
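
PrefixEnabled and PrefixSuspended implement the MinIO versioning extension that lets selected prefixes opt out of versioning. A deliberately simplified sketch of the prefix test (the real configuration also allows wildcard patterns; plain prefix matching is assumed here):

package main

import (
    "fmt"
    "strings"
)

// suspendedForPrefix reports whether an object path falls under any
// excluded prefix; simplified from the versioning extension.
func suspendedForPrefix(excluded []string, object string) bool {
    for _, p := range excluded {
        if strings.HasPrefix(object, p) {
            return true
        }
    }
    return false
}

func main() {
    excluded := []string{"tmp/", "cache/"}
    fmt.Println(suspendedForPrefix(excluded, "tmp/upload.part")) // true
    fmt.Println(suspendedForPrefix(excluded, "docs/readme.md"))  // false
}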
// Get returns the stored bucket versioning configuration
func (sys *BucketVersioningSys) Get(bucket string) (*versioning.Versioning, error) {
if globalIsGateway {
objAPI := newObjectLayerFn()
if objAPI == nil {
return nil, errServerNotInitialized
}
return nil, NotImplemented{}
// Gateway does not implement versioning.
return &versioning.Versioning{XMLNS: "http://s3.amazonaws.com/doc/2006-03-01/"}, nil
}
return globalBucketMetadataSys.GetVersioningConfig(bucket)
}
// Reset BucketVersioningSys to initial state.
func (sys *BucketVersioningSys) Reset() {
// There is currently no internal state.
if bucket == minioMetaBucket || strings.HasPrefix(bucket, minioMetaBucket) {
return &versioning.Versioning{XMLNS: "http://s3.amazonaws.com/doc/2006-03-01/"}, nil
}
vcfg, _, err := globalBucketMetadataSys.GetVersioningConfig(bucket)
return vcfg, err
}
// NewBucketVersioningSys - creates new versioning system.


@@ -1,4 +1,4 @@
// Copyright (c) 2015-2021 MinIO, Inc.
// Copyright (c) 2015-2022 MinIO, Inc.
//
// This file is part of MinIO Object Storage stack
//
@@ -37,4 +37,7 @@ var (
// ShortCommitID - first 12 characters from CommitID.
ShortCommitID = "DEVELOPMENT.GOGET"
// CopyrightYear - dynamic value of the copyright end year
CopyrightYear = "0000"
)

cmd/callhome.go (new file, 150 lines)

@@ -0,0 +1,150 @@
// Copyright (c) 2015-2022 MinIO, Inc.
//
// This file is part of MinIO Object Storage stack
//
// This program is free software: you can redistribute it and/or modify
// it under the terms of the GNU Affero General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
//
// This program is distributed in the hope that it will be useful
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU Affero General Public License for more details.
//
// You should have received a copy of the GNU Affero General Public License
// along with this program. If not, see <http://www.gnu.org/licenses/>.
package cmd
import (
"context"
"fmt"
"math/rand"
"time"
"github.com/minio/madmin-go"
"github.com/minio/minio/internal/logger"
uatomic "go.uber.org/atomic"
)
const (
// callhomeSchemaVersion1 is callhome schema version 1
callhomeSchemaVersion1 = "1"
// callhomeSchemaVersion is current callhome schema version.
callhomeSchemaVersion = callhomeSchemaVersion1
// callhomeCycleDefault is the default interval between two callhome cycles (24hrs)
callhomeCycleDefault = 24 * time.Hour
)
// CallhomeInfo - Contains callhome information
type CallhomeInfo struct {
SchemaVersion string `json:"schema_version"`
AdminInfo madmin.InfoMessage `json:"admin_info"`
}
var (
enableCallhome = uatomic.NewBool(false)
callhomeLeaderLockTimeout = newDynamicTimeout(30*time.Second, 10*time.Second)
callhomeFreq = uatomic.NewDuration(callhomeCycleDefault)
)
func updateCallhomeParams(ctx context.Context, objAPI ObjectLayer) {
alreadyEnabled := enableCallhome.Load()
enableCallhome.Store(globalCallhomeConfig.Enable)
callhomeFreq.Store(globalCallhomeConfig.Frequency)
// If callhome was disabled earlier and has now been enabled,
// initialize the callhome process again.
if !alreadyEnabled && enableCallhome.Load() {
initCallhome(ctx, objAPI)
}
}
// initCallhome will start the callhome task in the background.
func initCallhome(ctx context.Context, objAPI ObjectLayer) {
go func() {
r := rand.New(rand.NewSource(time.Now().UnixNano()))
// Leader node (that successfully acquires the lock inside runCallhome)
// will keep performing the callhome. If the leader goes down for some reason,
// the lock will be released and another node will acquire it and take over
// because of this loop.
for {
runCallhome(ctx, objAPI)
if !enableCallhome.Load() {
return
}
// callhome running on a different node.
// sleep for some time and try again.
duration := time.Duration(r.Float64() * float64(callhomeFreq.Load()))
if duration < time.Second {
// Make sure to sleep at least a second to avoid high CPU ticks.
duration = time.Second
}
time.Sleep(duration)
if !enableCallhome.Load() {
return
}
}
}()
}
func runCallhome(ctx context.Context, objAPI ObjectLayer) {
// Make sure only 1 callhome is running on the cluster.
locker := objAPI.NewNSLock(minioMetaBucket, "callhome/runCallhome.lock")
lkctx, err := locker.GetLock(ctx, callhomeLeaderLockTimeout)
if err != nil {
return
}
ctx = lkctx.Context()
defer locker.Unlock(lkctx.Cancel)
callhomeTimer := time.NewTimer(callhomeFreq.Load())
defer callhomeTimer.Stop()
for {
select {
case <-ctx.Done():
return
case <-callhomeTimer.C:
if !enableCallhome.Load() {
// Stop the processing as callhome got disabled
return
}
performCallhome(ctx)
// Reset the timer for next cycle.
callhomeTimer.Reset(callhomeFreq.Load())
}
}
}
func performCallhome(ctx context.Context) {
err := sendCallhomeInfo(
CallhomeInfo{
SchemaVersion: callhomeSchemaVersion,
AdminInfo: getServerInfo(ctx, nil),
})
if err != nil {
logger.LogIf(ctx, fmt.Errorf("Unable to perform callhome: %w", err))
}
}
const (
callhomeURL = "https://subnet.min.io/api/callhome"
callhomeURLDev = "http://localhost:9000/api/callhome"
)
func sendCallhomeInfo(ch CallhomeInfo) error {
url := callhomeURL
if globalIsCICD {
url = callhomeURLDev
}
_, err := globalSubnetConfig.Post(url, ch)
return err
}
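
initCallhome's loop is leader election by lock: whichever node wins the NSLock inside runCallhome does the reporting, and the losers sleep a randomized fraction of the callhome cycle (at least one second) before retrying, so they neither spin nor retry in unison. The shape of that loop with the lock abstracted to a callback:

package main

import (
    "context"
    "fmt"
    "math/rand"
    "time"
)

// leaderLoop runs work on whichever caller wins tryLock; the random
// sleep keeps the losers from hammering the lock at the same instant.
func leaderLoop(ctx context.Context, period time.Duration, tryLock func() bool, work func()) {
    r := rand.New(rand.NewSource(time.Now().UnixNano()))
    for ctx.Err() == nil {
        if tryLock() {
            work() // leader until work returns
        }
        d := time.Duration(r.Float64() * float64(period))
        if d < time.Second {
            d = time.Second // avoid high CPU ticks
        }
        select {
        case <-time.After(d):
        case <-ctx.Done():
            return
        }
    }
}

func main() {
    ctx, cancel := context.WithTimeout(context.Background(), 1500*time.Millisecond)
    defer cancel()
    leaderLoop(ctx, 2*time.Second,
        func() bool { return true },
        func() { fmt.Println("callhome cycle") })
}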


@@ -24,6 +24,7 @@ import (
"crypto/tls"
"crypto/x509"
"encoding/gob"
"encoding/pem"
"errors"
"fmt"
"io/ioutil"
@@ -32,6 +33,7 @@ import (
"net/http"
"net/url"
"os"
"path"
"path/filepath"
"runtime"
"sort"
@@ -46,6 +48,7 @@ import (
"github.com/inconshreveable/mousetrap"
dns2 "github.com/miekg/dns"
"github.com/minio/cli"
consoleoauth2 "github.com/minio/console/pkg/auth/idp/oauth2"
consoleCerts "github.com/minio/console/pkg/certs"
"github.com/minio/console/restapi"
"github.com/minio/console/restapi/operations"
@@ -100,12 +103,21 @@ func init() {
PersistOnFailure: false,
}
t, _ := minioVersionToReleaseTime(Version)
if !t.IsZero() {
globalVersionUnix = uint64(t.Unix())
}
globalIsCICD = env.Get("MINIO_CI_CD", "") != "" || env.Get("CI", "") != ""
containers := IsKubernetes() || IsDocker() || IsBOSH() || IsDCOS() || IsPCFTile()
// Call to refresh will refresh names in cache. If you pass true, it will also
// remove cached names not looked up since the last call to Refresh. It is a good idea
// to call this method on a regular interval.
go func() {
var t *time.Ticker
if IsKubernetes() || IsDocker() || IsBOSH() || IsDCOS() || IsPCFTile() {
if containers {
t = time.NewTicker(1 * time.Minute)
} else {
t = time.NewTicker(10 * time.Minute)
@@ -172,39 +184,28 @@ func minioConfigToConsoleFeatures() {
os.Setenv("CONSOLE_LOG_QUERY_AUTH_TOKEN", value)
}
}
// pass the console subpath configuration
if value := env.Get(config.EnvMinIOBrowserRedirectURL, ""); value != "" {
subPath := path.Clean(pathJoin(strings.TrimSpace(globalBrowserRedirectURL.Path), SlashSeparator))
if subPath != SlashSeparator {
os.Setenv("CONSOLE_SUBPATH", subPath)
}
}
// Enable if prometheus URL is set.
if value := env.Get("MINIO_PROMETHEUS_URL", ""); value != "" {
os.Setenv("CONSOLE_PROMETHEUS_URL", value)
if value := env.Get("MINIO_PROMETHEUS_JOB_ID", "minio-job"); value != "" {
os.Setenv("CONSOLE_PROMETHEUS_JOB_ID", value)
// Support additional labels for more granular filtering.
if value := env.Get("MINIO_PROMETHEUS_EXTRA_LABELS", ""); value != "" {
os.Setenv("CONSOLE_PROMETHEUS_EXTRA_LABELS", value)
}
}
}
// Enable if LDAP is enabled.
if globalLDAPConfig.Enabled {
os.Setenv("CONSOLE_LDAP_ENABLED", config.EnableOn)
}
// if IDP is enabled, set IDP environment variables
if globalOpenIDConfig.URL != nil {
os.Setenv("CONSOLE_IDP_URL", globalOpenIDConfig.URL.String())
os.Setenv("CONSOLE_IDP_CLIENT_ID", globalOpenIDConfig.ClientID)
os.Setenv("CONSOLE_IDP_SECRET", globalOpenIDConfig.ClientSecret)
os.Setenv("CONSOLE_IDP_HMAC_SALT", globalDeploymentID)
os.Setenv("CONSOLE_IDP_HMAC_PASSPHRASE", globalOpenIDConfig.ClientID)
os.Setenv("CONSOLE_IDP_SCOPES", strings.Join(globalOpenIDConfig.DiscoveryDoc.ScopesSupported, ","))
if globalOpenIDConfig.ClaimUserinfo {
os.Setenv("CONSOLE_IDP_USERINFO", config.EnableOn)
}
if globalOpenIDConfig.RedirectURIDynamic {
// Enable dynamic redirect-uri's based on incoming 'host' header,
// Overrides any other callback URL.
os.Setenv("CONSOLE_IDP_CALLBACK_DYNAMIC", config.EnableOn)
}
if globalOpenIDConfig.RedirectURI != "" {
os.Setenv("CONSOLE_IDP_CALLBACK", globalOpenIDConfig.RedirectURI)
} else {
os.Setenv("CONSOLE_IDP_CALLBACK", getConsoleEndpoints()[0]+"/oauth_callback")
}
}
os.Setenv("CONSOLE_MINIO_REGION", globalSite.Region)
os.Setenv("CONSOLE_CERT_PASSWD", env.Get("MINIO_CERT_PASSWD", ""))
if globalSubnetConfig.License != "" {
@@ -213,11 +214,34 @@ func minioConfigToConsoleFeatures() {
if globalSubnetConfig.APIKey != "" {
os.Setenv("CONSOLE_SUBNET_API_KEY", globalSubnetConfig.APIKey)
}
if globalSubnetConfig.Proxy != "" {
os.Setenv("CONSOLE_SUBNET_PROXY", globalSubnetConfig.Proxy)
if globalSubnetConfig.ProxyURL != nil {
os.Setenv("CONSOLE_SUBNET_PROXY", globalSubnetConfig.ProxyURL.String())
}
}
func buildOpenIDConsoleConfig() consoleoauth2.OpenIDPCfg {
m := make(map[string]consoleoauth2.ProviderConfig, len(globalOpenIDConfig.ProviderCfgs))
for name, cfg := range globalOpenIDConfig.ProviderCfgs {
callback := getConsoleEndpoints()[0] + "/oauth_callback"
if cfg.RedirectURI != "" {
callback = cfg.RedirectURI
}
m[name] = consoleoauth2.ProviderConfig{
URL: cfg.URL.String(),
DisplayName: cfg.DisplayName,
ClientID: cfg.ClientID,
ClientSecret: cfg.ClientSecret,
HMACSalt: globalDeploymentID,
HMACPassphrase: cfg.ClientID,
Scopes: strings.Join(cfg.DiscoveryDoc.ScopesSupported, ","),
Userinfo: cfg.ClaimUserinfo,
RedirectCallbackDynamic: cfg.RedirectURIDynamic,
RedirectCallback: callback,
}
}
return m
}
func initConsoleServer() (*restapi.Server, error) {
// unset all console_ environment variables.
for _, cenv := range env.List(consolePrefix) {
@@ -251,6 +275,12 @@ func initConsoleServer() (*restapi.Server, error) {
api.Logger = noLog
}
// Pass in console application config. This needs to happen before the
// ConfigureAPI() call.
restapi.GlobalMinIOConfig = restapi.MinIOConfig{
OpenIDProviders: buildOpenIDConsoleConfig(),
}
server := restapi.NewServer(api)
// register all APIs
server.ConfigureAPI()
@@ -327,7 +357,7 @@ func checkUpdate(mode string) {
return
}
logStartupMessage(prepareUpdateMessage("\nRun `mc admin update`", lrTime.Sub(crTime)))
logger.Info(prepareUpdateMessage("Run `mc admin update`", lrTime.Sub(crTime)))
}
func newConfigDirFromCtx(ctx *cli.Context, option string, getDefaultDir func() string) (*ConfigDir, bool) {
@@ -408,7 +438,6 @@ func handleCommonCmdArgs(ctx *cli.Context) {
if err != nil {
logger.FatalIf(err, "Unable to get free port for console on the host")
}
globalMinioConsolePortAuto = true
consoleAddr = net.JoinHostPort("", p.String())
}
@@ -473,8 +502,13 @@ func parsEnvEntry(envEntry string) (envKV, error) {
Skip: true,
}, nil
}
const envSeparator = "="
envTokens := strings.SplitN(strings.TrimSpace(strings.TrimPrefix(envEntry, "export")), envSeparator, 2)
if strings.HasPrefix(envEntry, "#") {
// Skip commented lines
return envKV{
Skip: true,
}, nil
}
envTokens := strings.SplitN(strings.TrimSpace(strings.TrimPrefix(envEntry, "export")), config.EnvSeparator, 2)
if len(envTokens) != 2 {
return envKV{}, fmt.Errorf("envEntry malformed; %s, expected to be of form 'KEY=value'", envEntry)
}
@@ -500,11 +534,8 @@ func parsEnvEntry(envEntry string) (envKV, error) {
// the environment values from a file, in the form "key, value",
// in a structured form.
func minioEnvironFromFile(envConfigFile string) ([]envKV, error) {
f, err := os.Open(envConfigFile)
f, err := Open(envConfigFile)
if err != nil {
if os.IsNotExist(err) { // ignore if file doesn't exist.
return nil, nil
}
return nil, err
}
defer f.Close()
@@ -600,7 +631,7 @@ func loadEnvVarsFromFiles() {
if env.IsSet(config.EnvConfigEnvFile) {
ekvs, err := minioEnvironFromFile(env.Get(config.EnvConfigEnvFile, ""))
if err != nil {
if err != nil && !os.IsNotExist(err) {
logger.Fatal(err, "Unable to read the config environment file")
}
for _, ekv := range ekvs {
@@ -625,12 +656,11 @@ func handleCommonEnvVars() {
}
// Check if the URL has invalid values and return an error.
if !((u.Scheme == "http" || u.Scheme == "https") &&
(u.Path == "/" || u.Path == "") && u.Opaque == "" &&
u.Opaque == "" &&
!u.ForceQuery && u.RawQuery == "" && u.Fragment == "") {
err := fmt.Errorf("URL contains unexpected resources, expected URL to be of http(s)://minio.example.com format: %v", u)
logger.Fatal(err, "Invalid MINIO_BROWSER_REDIRECT_URL value is environment variable")
}
u.Path = "" // remove any path component such as `/`
globalBrowserRedirectURL = u
}
}
@@ -664,8 +694,6 @@ func handleCommonEnvVars() {
globalRootDiskThreshold = size
}
globalIsCICD = env.Get("MINIO_CI_CD", "") != ""
domains := env.Get(config.EnvDomain, "")
if len(domains) != 0 {
for _, domainName := range strings.Split(domains, config.ValueSeparator) {
@@ -762,21 +790,26 @@ func handleCommonEnvVars() {
" Please use %s and %s",
config.EnvAccessKey, config.EnvSecretKey,
config.EnvRootUser, config.EnvRootPassword)
logStartupMessage(color.RedBold(msg))
logger.Info(color.RedBold(msg))
}
globalActiveCred = cred
}
}
// Initialize KMS global variable after validating and loading the configuration.
// It depends on KMS env variables and global cli flags.
func handleKMSConfig() {
switch {
case env.IsSet(config.EnvKMSSecretKey) && env.IsSet(config.EnvKESEndpoint):
logger.Fatal(errors.New("ambigious KMS configuration"), fmt.Sprintf("The environment contains %q as well as %q", config.EnvKMSSecretKey, config.EnvKESEndpoint))
}
if env.IsSet(config.EnvKMSSecretKey) {
GlobalKMS, err = kms.Parse(env.Get(config.EnvKMSSecretKey, ""))
KMS, err := kms.Parse(env.Get(config.EnvKMSSecretKey, ""))
if err != nil {
logger.Fatal(err, "Unable to parse the KMS secret key inherited from the shell environment")
}
GlobalKMS = KMS
}
if env.IsSet(config.EnvKESEndpoint) {
var endpoints []string
@@ -796,21 +829,56 @@ func handleCommonEnvVars() {
endpoints = append(endpoints, strings.Join(lbls, ""))
}
}
certificate, err := tls.LoadX509KeyPair(env.Get(config.EnvKESClientCert, ""), env.Get(config.EnvKESClientKey, ""))
if err != nil {
logger.Fatal(err, "Unable to load KES client certificate as specified by the shell environment")
}
rootCAs, err := certs.GetRootCAs(env.Get(config.EnvKESServerCA, globalCertsCADir.Get()))
if err != nil {
logger.Fatal(err, fmt.Sprintf("Unable to load X.509 root CAs for KES from %q", env.Get(config.EnvKESServerCA, globalCertsCADir.Get())))
}
loadX509KeyPair := func(certFile, keyFile string) (tls.Certificate, error) {
// Manually load the certificate and private key into memory.
// We need to check whether the private key is encrypted, and
// if so, decrypt it using the user-provided password.
certBytes, err := os.ReadFile(certFile)
if err != nil {
return tls.Certificate{}, fmt.Errorf("Unable to load KES client certificate as specified by the shell environment: %v", err)
}
keyBytes, err := os.ReadFile(keyFile)
if err != nil {
return tls.Certificate{}, fmt.Errorf("Unable to load KES client private key as specified by the shell environment: %v", err)
}
privateKeyPEM, rest := pem.Decode(bytes.TrimSpace(keyBytes))
if len(rest) != 0 {
return tls.Certificate{}, errors.New("Unable to load KES client private key as specified by the shell environment: private key contains additional data")
}
if x509.IsEncryptedPEMBlock(privateKeyPEM) {
keyBytes, err = x509.DecryptPEMBlock(privateKeyPEM, []byte(env.Get(config.EnvKESClientPassword, "")))
if err != nil {
return tls.Certificate{}, fmt.Errorf("Unable to decrypt KES client private key as specified by the shell environment: %v", err)
}
keyBytes = pem.EncodeToMemory(&pem.Block{Type: privateKeyPEM.Type, Bytes: keyBytes})
}
certificate, err := tls.X509KeyPair(certBytes, keyBytes)
if err != nil {
return tls.Certificate{}, fmt.Errorf("Unable to load KES client certificate as specified by the shell environment: %v", err)
}
return certificate, nil
}
reloadCertEvents := make(chan tls.Certificate, 1)
certificate, err := certs.NewCertificate(env.Get(config.EnvKESClientCert, ""), env.Get(config.EnvKESClientKey, ""), loadX509KeyPair)
if err != nil {
logger.Fatal(err, "Failed to load KES client certificate")
}
certificate.Watch(context.Background(), 15*time.Minute, syscall.SIGHUP)
certificate.Notify(reloadCertEvents)
defaultKeyID := env.Get(config.EnvKESKeyName, "")
KMS, err := kms.NewWithConfig(kms.Config{
Endpoints: endpoints,
DefaultKeyID: defaultKeyID,
Certificate: certificate,
RootCAs: rootCAs,
Endpoints: endpoints,
DefaultKeyID: defaultKeyID,
Certificate: certificate,
ReloadCertEvents: reloadCertEvents,
RootCAs: rootCAs,
})
if err != nil {
logger.Fatal(err, "Unable to initialize a connection to KES as specified by the shell environment")
@@ -820,20 +888,13 @@ func handleCommonEnvVars() {
// This implicitly checks that we can communicate with KES. We don't treat
// a policy error as a failure condition since MinIO may not have the permission
// to create keys - just to generate/decrypt data encryption keys.
if err = KMS.CreateKey(defaultKeyID); err != nil && !errors.Is(err, kes.ErrKeyExists) && !errors.Is(err, kes.ErrNotAllowed) {
if err = KMS.CreateKey(context.Background(), defaultKeyID); err != nil && !errors.Is(err, kes.ErrKeyExists) && !errors.Is(err, kes.ErrNotAllowed) {
logger.Fatal(err, "Unable to initialize a connection to KES as specified by the shell environment")
}
GlobalKMS = KMS
}
}
func logStartupMessage(msg string) {
if globalConsoleSys != nil {
globalConsoleSys.Send(msg, string(logger.All))
}
logger.StartupMessage(msg)
}
func getTLSConfig() (x509Certs []*x509.Certificate, manager *certs.Manager, secureConn bool, err error) {
if !(isFile(getPublicCertFile()) && isFile(getPrivateKeyFile())) {
return nil, nil, false, nil
@@ -867,7 +928,7 @@ func getTLSConfig() (x509Certs []*x509.Certificate, manager *certs.Manager, secu
// Therefore, we read all filenames in the cert directory and check
// for each directory whether it contains a public.crt and private.key.
// If so, we try to add it to certificate manager.
root, err := os.Open(globalCertsDir.Get())
root, err := Open(globalCertsDir.Get())
if err != nil {
return nil, nil, false, err
}
@@ -886,7 +947,7 @@ func getTLSConfig() (x509Certs []*x509.Certificate, manager *certs.Manager, secu
continue
}
if file.Mode()&os.ModeSymlink == os.ModeSymlink {
file, err = os.Stat(filepath.Join(root.Name(), file.Name()))
file, err = Stat(filepath.Join(root.Name(), file.Name()))
if err != nil {
// not accessible ignore
continue
@@ -910,6 +971,9 @@ func getTLSConfig() (x509Certs []*x509.Certificate, manager *certs.Manager, secu
}
secureConn = true
// Certs might be symlinks, reload them every 10 seconds.
manager.UpdateReloadDuration(10 * time.Second)
// syscall.SIGHUP to reload the certs.
manager.ReloadOnSignal(syscall.SIGHUP)


@@ -134,6 +134,25 @@ export MINIO_ROOT_PASSWORD=minio123`,
true,
nil,
},
{
`
# simple comment
# MINIO_ROOT_USER=minioadmin
# MINIO_ROOT_PASSWORD=minioadmin
MINIO_ROOT_USER=minio
MINIO_ROOT_PASSWORD=minio123`,
false,
[]envKV{
{
Key: "MINIO_ROOT_USER",
Value: "minio",
},
{
Key: "MINIO_ROOT_PASSWORD",
Value: "minio123",
},
},
},
}
for _, testCase := range testCases {
testCase := testCase
@@ -153,6 +172,11 @@ export MINIO_ROOT_PASSWORD=minio123`,
if err == nil && testCase.expectedErr {
t.Error(errors.New("expected error, found success"))
}
if len(ekvs) != len(testCase.expectedEkvs) {
t.Errorf("expected %v keys, got %v keys", len(testCase.expectedEkvs), len(ekvs))
}
if !reflect.DeepEqual(ekvs, testCase.expectedEkvs) {
t.Errorf("expected %v, got %v", testCase.expectedEkvs, ekvs)
}
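
The new test case checks that commented-out assignments are skipped while real ones parse into ordered envKV pairs. The parser itself is not shown in this hunk; below is a minimal sketch consistent with the test expectations (envKV mirrors the test's struct; parseEnvFile is a hypothetical name, not MinIO's actual function):

```go
package main

import (
	"bufio"
	"fmt"
	"strings"
)

type envKV struct {
	Key   string
	Value string
}

// parseEnvFile parses KEY=VALUE lines, skipping blanks and '#' comments,
// and tolerating an optional "export " prefix.
func parseEnvFile(s string) ([]envKV, error) {
	var ekvs []envKV
	sc := bufio.NewScanner(strings.NewReader(s))
	for sc.Scan() {
		line := strings.TrimSpace(sc.Text())
		if line == "" || strings.HasPrefix(line, "#") {
			continue // blank line or comment
		}
		line = strings.TrimPrefix(line, "export ")
		key, value, found := strings.Cut(line, "=")
		if !found {
			return nil, fmt.Errorf("invalid entry: %q", line)
		}
		ekvs = append(ekvs, envKV{Key: key, Value: value})
	}
	return ekvs, sc.Err()
}

func main() {
	ekvs, _ := parseEnvFile("# MINIO_ROOT_USER=minioadmin\nMINIO_ROOT_USER=minio\nMINIO_ROOT_PASSWORD=minio123")
	fmt.Println(ekvs) // [{MINIO_ROOT_USER minio} {MINIO_ROOT_PASSWORD minio123}]
}
```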


@@ -28,15 +28,18 @@ import (
"github.com/minio/minio/internal/config"
"github.com/minio/minio/internal/config/api"
"github.com/minio/minio/internal/config/cache"
"github.com/minio/minio/internal/config/callhome"
"github.com/minio/minio/internal/config/compress"
"github.com/minio/minio/internal/config/dns"
"github.com/minio/minio/internal/config/etcd"
"github.com/minio/minio/internal/config/heal"
xldap "github.com/minio/minio/internal/config/identity/ldap"
"github.com/minio/minio/internal/config/identity/openid"
idplugin "github.com/minio/minio/internal/config/identity/plugin"
xtls "github.com/minio/minio/internal/config/identity/tls"
"github.com/minio/minio/internal/config/notify"
"github.com/minio/minio/internal/config/policy/opa"
polplugin "github.com/minio/minio/internal/config/policy/plugin"
"github.com/minio/minio/internal/config/scanner"
"github.com/minio/minio/internal/config/storageclass"
"github.com/minio/minio/internal/config/subnet"
@@ -44,8 +47,6 @@ import (
xhttp "github.com/minio/minio/internal/http"
"github.com/minio/minio/internal/kms"
"github.com/minio/minio/internal/logger"
"github.com/minio/minio/internal/logger/target/http"
"github.com/minio/minio/internal/logger/target/kafka"
"github.com/minio/pkg/env"
)
@@ -57,7 +58,9 @@ func initHelp() {
config.IdentityLDAPSubSys: xldap.DefaultKVS,
config.IdentityOpenIDSubSys: openid.DefaultKVS,
config.IdentityTLSSubSys: xtls.DefaultKVS,
config.IdentityPluginSubSys: idplugin.DefaultKVS,
config.PolicyOPASubSys: opa.DefaultKVS,
config.PolicyPluginSubSys: polplugin.DefaultKVS,
config.SiteSubSys: config.DefaultSiteKVS,
config.RegionSubSys: config.DefaultRegionKVS,
config.APISubSys: api.DefaultKVS,
@@ -68,6 +71,7 @@ func initHelp() {
config.HealSubSys: heal.DefaultKVS,
config.ScannerSubSys: scanner.DefaultKVS,
config.SubnetSubSys: subnet.DefaultKVS,
config.CallhomeSubSys: callhome.DefaultKVS,
}
for k, v := range notify.DefaultNotificationKVS {
kvs[k] = v
@@ -96,8 +100,9 @@ func initHelp() {
Description: "federate multiple clusters for IAM and Bucket DNS",
},
config.HelpKV{
Key: config.IdentityOpenIDSubSys,
Description: "enable OpenID SSO support",
MultipleTargets: true,
},
config.HelpKV{
Key: config.IdentityLDAPSubSys,
@@ -108,8 +113,12 @@ func initHelp() {
Description: "enable X.509 TLS certificate SSO support",
},
config.HelpKV{
Key: config.PolicyOPASubSys,
Description: "[DEPRECATED] enable external OPA for policy enforcement",
Key: config.IdentityPluginSubSys,
Description: "enable Identity Plugin via external hook",
},
config.HelpKV{
Key: config.PolicyPluginSubSys,
Description: "enable Access Management Plugin for policy enforcement",
},
config.HelpKV{
Key: config.APISubSys,
@@ -194,6 +203,12 @@ func initHelp() {
Description: "set subnet config for the cluster e.g. api key",
Optional: true,
},
config.HelpKV{
Key: config.CallhomeSubSys,
Type: "string",
Description: "enable callhome for the cluster",
Optional: true,
},
}
if globalIsErasure {
@@ -219,7 +234,9 @@ func initHelp() {
config.IdentityOpenIDSubSys: openid.Help,
config.IdentityLDAPSubSys: xldap.Help,
config.IdentityTLSSubSys: xtls.Help,
config.IdentityPluginSubSys: idplugin.Help,
config.PolicyOPASubSys: opa.Help,
config.PolicyPluginSubSys: polplugin.Help,
config.LoggerWebhookSubSys: logger.Help,
config.AuditWebhookSubSys: logger.HelpWebhook,
config.AuditKafkaSubSys: logger.HelpKafka,
@@ -234,6 +251,7 @@ func initHelp() {
config.NotifyWebhookSubSys: notify.HelpWebhook,
config.NotifyESSubSys: notify.HelpES,
config.SubnetSubSys: subnet.HelpSubnet,
config.CallhomeSubSys: callhome.HelpCallhome,
}
config.RegisterHelpSubSys(helpMap)
@@ -244,6 +262,10 @@ func initHelp() {
Key: config.RegionSubSys,
Description: "[DEPRECATED - use `site` instead] label the location of the server",
},
config.PolicyOPASubSys: {
Key: config.PolicyOPASubSys,
Description: "[DEPRECATED - use `policy_plugin` instead] enable external OPA for policy enforcement",
},
}
config.RegisterHelpDeprecatedSubSys(deprecatedHelpKVMap)
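
Deprecated subsystems here keep a help entry pointing at their replacement instead of vanishing silently. A self-contained sketch of that lookup-with-deprecation idea; the types and names are illustrative, not the actual config package API:

```go
package main

import "fmt"

type helpKV struct {
	Key         string
	Description string
}

var (
	help = map[string]helpKV{
		"policy_plugin": {"policy_plugin", "enable Access Management Plugin for policy enforcement"},
	}
	deprecatedHelp = map[string]helpKV{
		"policy_opa": {"policy_opa", "[DEPRECATED - use `policy_plugin` instead] enable external OPA for policy enforcement"},
	}
)

// lookupHelp prefers current entries, falling back to deprecated ones so
// old subsystem names still produce a useful message.
func lookupHelp(subSys string) (helpKV, bool) {
	if h, ok := help[subSys]; ok {
		return h, true
	}
	h, ok := deprecatedHelp[subSys]
	return h, ok
}

func main() {
	h, _ := lookupHelp("policy_opa")
	fmt.Println(h.Description)
}
```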
@@ -316,7 +338,7 @@ func validateSubSysConfig(s config.Config, subSys string, objAPI ObjectLayer) er
etcdClnt.Close()
}
case config.IdentityOpenIDSubSys:
if _, err := openid.LookupConfig(s[config.IdentityOpenIDSubSys][config.Default],
if _, err := openid.LookupConfig(s,
NewGatewayHTTPTransport(), xhttp.DrainBody, globalSite.Region); err != nil {
return err
}
@@ -336,14 +358,34 @@ func validateSubSysConfig(s config.Config, subSys string, objAPI ObjectLayer) er
if _, err := xtls.Lookup(s[config.IdentityTLSSubSys][config.Default]); err != nil {
return err
}
case config.IdentityPluginSubSys:
if _, err := idplugin.LookupConfig(s[config.IdentityPluginSubSys][config.Default],
NewGatewayHTTPTransport(), xhttp.DrainBody, globalSite.Region); err != nil {
return err
}
case config.SubnetSubSys:
if _, err := subnet.LookupConfig(s[config.SubnetSubSys][config.Default]); err != nil {
if _, err := subnet.LookupConfig(s[config.SubnetSubSys][config.Default], nil); err != nil {
return err
}
case config.CallhomeSubSys:
if _, err := callhome.LookupConfig(s[config.CallhomeSubSys][config.Default]); err != nil {
return err
}
case config.PolicyOPASubSys:
// In case legacy OPA config is being set, we treat it as if the
// AuthZPlugin is being set.
subSys = config.PolicyPluginSubSys
fallthrough
case config.PolicyPluginSubSys:
if ppargs, err := polplugin.LookupConfig(s[config.PolicyPluginSubSys][config.Default],
NewGatewayHTTPTransport(), xhttp.DrainBody); err != nil {
return err
} else if ppargs.URL == nil {
// Check if legacy opa is configured.
if _, err := opa.LookupConfig(s[config.PolicyOPASubSys][config.Default],
NewGatewayHTTPTransport(), xhttp.DrainBody); err != nil {
return err
}
}
default:
if config.LoggerSubSystems.Contains(subSys) {
@@ -353,12 +395,6 @@ func validateSubSysConfig(s config.Config, subSys string, objAPI ObjectLayer) er
}
}
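
The legacy-OPA case above rewrites subSys and uses Go's fallthrough so both the old and new subsystem names validate through a single code path. A minimal sketch of that aliasing idiom, with hypothetical subsystem names:

```go
package main

import "fmt"

func validate(subSys string) error {
	switch subSys {
	case "policy_opa":
		// Legacy name: validate it exactly as the replacement subsystem.
		subSys = "policy_plugin"
		fallthrough
	case "policy_plugin":
		fmt.Println("validating", subSys) // both names land here
		return nil
	default:
		return fmt.Errorf("unknown subsystem %q", subSys)
	}
}

func main() {
	_ = validate("policy_opa")    // validating policy_plugin
	_ = validate("policy_plugin") // validating policy_plugin
}
```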
if config.LoggerSubSystems.Contains(subSys) {
if err := logger.ValidateSubSysConfig(s, subSys); err != nil {
return err
}
}
if config.NotifySubSystems.Contains(subSys) {
if err := notify.TestSubSysNotificationTargets(GlobalContext, s, NewGatewayHTTPTransport(), globalNotificationSys.ConfiguredTargetIDs(), subSys); err != nil {
return err
@@ -435,14 +471,12 @@ func lookupConfigs(s config.Config, objAPI ObjectLayer) {
}
if etcdCfg.Enabled {
if globalEtcdClient == nil {
globalEtcdClient, err = etcd.New(etcdCfg)
if err != nil {
if globalIsGateway {
logger.FatalIf(err, "Unable to initialize etcd config")
} else {
logger.LogIf(ctx, fmt.Errorf("Unable to initialize etcd config: %w", err))
}
}
@@ -450,7 +484,7 @@ func lookupConfigs(s config.Config, objAPI ObjectLayer) {
if globalDNSConfig != nil {
// if global DNS is already configured, indicate with a warning, in case
// users are confused.
logger.LogIf(ctx, fmt.Errorf("DNS store is already configured with %s, not using etcd for DNS store", globalDNSConfig))
logger.LogIf(ctx, fmt.Errorf("DNS store is already configured with %s, etcd is not used for DNS store", globalDNSConfig))
} else {
globalDNSConfig, err = dns.NewCoreDNS(etcdCfg.Config,
dns.DomainNames(globalDomainNames),
@@ -482,32 +516,6 @@ func lookupConfigs(s config.Config, objAPI ObjectLayer) {
logger.LogIf(ctx, fmt.Errorf("Invalid site configuration: %w", err))
}
apiConfig, err := api.LookupConfig(s[config.APISubSys][config.Default])
if err != nil {
logger.LogIf(ctx, fmt.Errorf("Invalid api configuration: %w", err))
}
// Initialize remote instance transport once.
getRemoteInstanceTransportOnce.Do(func() {
getRemoteInstanceTransport = newGatewayHTTPTransport(apiConfig.RemoteTransportDeadline)
})
if globalIsErasure && objAPI != nil {
setDriveCounts := objAPI.SetDriveCounts()
for i, setDriveCount := range setDriveCounts {
sc, err := storageclass.LookupConfig(s[config.StorageClassSubSys][config.Default], setDriveCount)
if err != nil {
logger.LogIf(ctx, fmt.Errorf("Unable to initialize storage class config: %w", err))
break
}
// if we validated all setDriveCounts successfully,
// proceed to store the correct storage class globally.
if i == len(setDriveCounts)-1 {
globalStorageClass.Update(sc)
}
}
}
globalCacheConfig, err = cache.LookupConfig(s[config.CacheSubSys][config.Default])
if err != nil {
if globalIsGateway {
@@ -537,71 +545,22 @@ func lookupConfigs(s config.Config, objAPI ObjectLayer) {
}
if globalSTSTLSConfig.InsecureSkipVerify {
logger.Info("CRITICAL: enabling %s is not recommended in a production environment", xtls.EnvIdentityTLSSkipVerify)
logger.LogIf(ctx, fmt.Errorf("CRITICAL: enabling %s is not recommended in a production environment", xtls.EnvIdentityTLSSkipVerify))
}
globalOpenIDConfig, err = openid.LookupConfig(s[config.IdentityOpenIDSubSys][config.Default],
NewGatewayHTTPTransport(), xhttp.DrainBody, globalSite.Region)
if err != nil {
logger.LogIf(ctx, fmt.Errorf("Unable to initialize OpenID: %w", err))
}
opaCfg, err := opa.LookupConfig(s[config.PolicyOPASubSys][config.Default],
NewGatewayHTTPTransport(), xhttp.DrainBody)
if err != nil {
logger.LogIf(ctx, fmt.Errorf("Unable to initialize OPA: %w", err))
}
globalOpenIDValidators = getOpenIDValidators(globalOpenIDConfig)
globalPolicyOPA = opa.New(opaCfg)
globalLDAPConfig, err = xldap.Lookup(s[config.IdentityLDAPSubSys][config.Default],
globalRootCAs)
if err != nil {
logger.LogIf(ctx, fmt.Errorf("Unable to parse LDAP configuration: %w", err))
}
globalSubnetConfig, err = subnet.LookupConfig(s[config.SubnetSubSys][config.Default])
globalSubnetConfig, err = subnet.LookupConfig(s[config.SubnetSubSys][config.Default], globalProxyTransport)
if err != nil {
logger.LogIf(ctx, fmt.Errorf("Unable to parse subnet configuration: %w", err))
}
// Load logger targets based on user's configuration
loggerUserAgent := getUserAgent(getMinioMode())
transport := NewGatewayHTTPTransport()
loggerCfg, err := logger.LookupConfig(s)
if err != nil {
logger.LogIf(ctx, fmt.Errorf("Unable to initialize logger/audit targets: %w", err))
}
for _, l := range loggerCfg.AuditWebhook {
if l.Enabled {
l.LogOnce = logger.LogOnceIf
l.UserAgent = loggerUserAgent
l.Transport = NewGatewayHTTPTransportWithClientCerts(l.ClientCert, l.ClientKey)
// Enable http audit logging
if err = logger.AddAuditTarget(http.New(l)); err != nil {
logger.LogIf(ctx, fmt.Errorf("Unable to initialize server audit HTTP target: %w", err))
}
}
}
for _, l := range loggerCfg.AuditKafka {
if l.Enabled {
l.LogOnce = logger.LogOnceIf
// Enable Kafka audit logging
if err = logger.AddAuditTarget(kafka.New(l)); err != nil {
logger.LogIf(ctx, fmt.Errorf("Unable to initialize server audit Kafka target: %w", err))
}
}
}
globalConfigTargetList, err = notify.GetNotificationTargets(GlobalContext, s, NewGatewayHTTPTransport(), false)
globalConfigTargetList, err = notify.GetNotificationTargets(GlobalContext, s, transport, false)
if err != nil {
logger.LogIf(ctx, fmt.Errorf("Unable to initialize notification target(s): %w", err))
}
globalEnvTargetList, err = notify.GetNotificationTargets(GlobalContext, newServerConfig(), NewGatewayHTTPTransport(), true)
globalEnvTargetList, err = notify.GetNotificationTargets(GlobalContext, newServerConfig(), transport, true)
if err != nil {
logger.LogIf(ctx, fmt.Errorf("Unable to initialize notification target(s): %w", err))
}
@@ -616,84 +575,140 @@ func lookupConfigs(s config.Config, objAPI ObjectLayer) {
}
}
// applyDynamicConfig will apply dynamic config values.
// Dynamic systems should be in config.SubSystemsDynamic as well.
func applyDynamicConfig(ctx context.Context, objAPI ObjectLayer, s config.Config) error {
// Read all dynamic configs.
// API
apiConfig, err := api.LookupConfig(s[config.APISubSys][config.Default])
if err != nil {
logger.LogIf(ctx, fmt.Errorf("Invalid api configuration: %w", err))
}
func applyDynamicConfigForSubSys(ctx context.Context, objAPI ObjectLayer, s config.Config, subSys string) error {
switch subSys {
case config.APISubSys:
apiConfig, err := api.LookupConfig(s[config.APISubSys][config.Default])
if err != nil {
logger.LogIf(ctx, fmt.Errorf("Invalid api configuration: %w", err))
}
var setDriveCounts []int
if objAPI != nil {
setDriveCounts = objAPI.SetDriveCounts()
}
globalAPIConfig.init(apiConfig, setDriveCounts)
// Initialize remote instance transport once.
getRemoteInstanceTransportOnce.Do(func() {
getRemoteInstanceTransport = newGatewayHTTPTransport(apiConfig.RemoteTransportDeadline)
})
case config.CompressionSubSys:
cmpCfg, err := compress.LookupConfig(s[config.CompressionSubSys][config.Default])
if err != nil {
return fmt.Errorf("Unable to setup Compression: %w", err)
}
// Validate if the object layer supports compression.
if objAPI != nil {
if cmpCfg.Enabled && !objAPI.IsCompressionSupported() {
return fmt.Errorf("Backend does not support compression")
}
}
globalCompressConfigMu.Lock()
globalCompressConfig = cmpCfg
globalCompressConfigMu.Unlock()
case config.HealSubSys:
healCfg, err := heal.LookupConfig(s[config.HealSubSys][config.Default])
if err != nil {
return fmt.Errorf("Unable to apply heal config: %w", err)
}
globalHealConfig.Update(healCfg)
case config.ScannerSubSys:
scannerCfg, err := scanner.LookupConfig(s[config.ScannerSubSys][config.Default])
if err != nil {
return fmt.Errorf("Unable to apply scanner config: %w", err)
}
// update dynamic scanner values.
scannerCycle.Store(scannerCfg.Cycle)
logger.LogIf(ctx, scannerSleeper.Update(scannerCfg.Delay, scannerCfg.MaxWait))
case config.LoggerWebhookSubSys:
loggerCfg, err := logger.LookupConfigForSubSys(s, config.LoggerWebhookSubSys)
if err != nil {
logger.LogIf(ctx, fmt.Errorf("Unable to load logger webhook config: %w", err))
}
userAgent := getUserAgent(getMinioMode())
for n, l := range loggerCfg.HTTP {
if l.Enabled {
l.LogOnce = logger.LogOnceConsoleIf
l.UserAgent = userAgent
l.Transport = NewGatewayHTTPTransportWithClientCerts(l.ClientCert, l.ClientKey)
loggerCfg.HTTP[n] = l
}
}
if err = logger.UpdateSystemTargets(loggerCfg); err != nil {
logger.LogIf(ctx, fmt.Errorf("Unable to update logger webhook config: %w", err))
}
case config.AuditWebhookSubSys:
loggerCfg, err := logger.LookupConfigForSubSys(s, config.AuditWebhookSubSys)
if err != nil {
logger.LogIf(ctx, fmt.Errorf("Unable to load audit webhook config: %w", err))
}
userAgent := getUserAgent(getMinioMode())
for n, l := range loggerCfg.AuditWebhook {
if l.Enabled {
l.LogOnce = logger.LogOnceConsoleIf
l.UserAgent = userAgent
l.Transport = NewGatewayHTTPTransportWithClientCerts(l.ClientCert, l.ClientKey)
loggerCfg.AuditWebhook[n] = l
}
}
if err = logger.UpdateAuditWebhookTargets(loggerCfg); err != nil {
	logger.LogIf(ctx, fmt.Errorf("Unable to update audit webhook targets: %w", err))
}
case config.AuditKafkaSubSys:
loggerCfg, err := logger.LookupConfigForSubSys(s, config.AuditKafkaSubSys)
if err != nil {
logger.LogIf(ctx, fmt.Errorf("Unable to load audit kafka config: %w", err))
}
for n, l := range loggerCfg.AuditKafka {
if l.Enabled {
l.LogOnce = logger.LogOnceIf
loggerCfg.AuditKafka[n] = l
}
}
if err = logger.UpdateAuditKafkaTargets(loggerCfg); err != nil {
logger.LogIf(ctx, fmt.Errorf("Unable to update audit kafka targets: %w", err))
}
case config.StorageClassSubSys:
if globalIsErasure && objAPI != nil {
setDriveCounts := objAPI.SetDriveCounts()
for i, setDriveCount := range setDriveCounts {
sc, err := storageclass.LookupConfig(s[config.StorageClassSubSys][config.Default], setDriveCount)
if err != nil {
logger.LogIf(ctx, fmt.Errorf("Unable to initialize storage class config: %w", err))
break
}
// if we validated all setDriveCounts successfully,
// proceed to store the correct storage class globally.
if i == len(setDriveCounts)-1 {
globalStorageClass.Update(sc)
}
}
}
case config.CallhomeSubSys:
callhomeCfg, err := callhome.LookupConfig(s[config.CallhomeSubSys][config.Default])
if err != nil {
logger.LogIf(ctx, fmt.Errorf("Unable to load callhome config: %w", err))
} else {
globalCallhomeConfig = callhomeCfg
updateCallhomeParams(ctx, objAPI)
}
}
// Heal
healCfg, err := heal.LookupConfig(s[config.HealSubSys][config.Default])
if err != nil {
return fmt.Errorf("Unable to apply heal config: %w", err)
}
// Scanner
scannerCfg, err := scanner.LookupConfig(s[config.ScannerSubSys][config.Default])
if err != nil {
return fmt.Errorf("Unable to apply scanner config: %w", err)
}
// Logger webhook
loggerCfg, err := logger.LookupConfig(s)
if err != nil {
logger.LogIf(ctx, fmt.Errorf("Unable to load logger webhook config: %w", err))
}
userAgent := getUserAgent(getMinioMode())
for n, l := range loggerCfg.HTTP {
if l.Enabled {
l.LogOnce = logger.LogOnceIf
l.UserAgent = userAgent
l.Transport = NewGatewayHTTPTransportWithClientCerts(l.ClientCert, l.ClientKey)
loggerCfg.HTTP[n] = l
}
}
err = logger.UpdateTargets(loggerCfg)
if err != nil {
logger.LogIf(ctx, fmt.Errorf("Unable to update logger webhook config: %w", err))
}
// Apply configurations.
// We should not fail after this.
var setDriveCounts []int
if objAPI != nil {
setDriveCounts = objAPI.SetDriveCounts()
}
globalAPIConfig.init(apiConfig, setDriveCounts)
globalCompressConfigMu.Lock()
globalCompressConfig = cmpCfg
globalCompressConfigMu.Unlock()
globalHealConfig.Update(healCfg)
// update dynamic scanner values.
scannerCycle.Update(scannerCfg.Cycle)
logger.LogIf(ctx, scannerSleeper.Update(scannerCfg.Delay, scannerCfg.MaxWait))
// Update all dynamic config values in memory.
globalServerConfigMu.Lock()
defer globalServerConfigMu.Unlock()
if globalServerConfig != nil {
for k := range config.SubSystemsDynamic {
globalServerConfig[k] = s[k]
globalServerConfig[subSys] = s[subSys]
}
return nil
}
// applyDynamicConfig will apply dynamic config values.
// Dynamic systems should be in config.SubSystemsDynamic as well.
func applyDynamicConfig(ctx context.Context, objAPI ObjectLayer, s config.Config) error {
for subSys := range config.SubSystemsDynamic {
err := applyDynamicConfigForSubSys(ctx, objAPI, s, subSys)
if err != nil {
return err
}
}
return nil
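
Splitting applyDynamicConfig into a per-subsystem dispatcher means one changed subsystem can be applied without re-reading every dynamic config. The same shape, reduced to a self-contained sketch with a handler registry keyed by subsystem; all names here are illustrative:

```go
package main

import (
	"context"
	"fmt"
)

type config map[string]string

// dynamicHandlers maps a subsystem to the function that applies its config.
var dynamicHandlers = map[string]func(ctx context.Context, cfg config) error{
	"api":     func(ctx context.Context, cfg config) error { fmt.Println("apply api:", cfg["api"]); return nil },
	"scanner": func(ctx context.Context, cfg config) error { fmt.Println("apply scanner:", cfg["scanner"]); return nil },
}

// applyForSubSys applies a single subsystem's dynamic config.
func applyForSubSys(ctx context.Context, cfg config, subSys string) error {
	h, ok := dynamicHandlers[subSys]
	if !ok {
		return fmt.Errorf("not a dynamic subsystem: %q", subSys)
	}
	return h(ctx, cfg)
}

// applyAll loops over the registry, mirroring the startup path.
func applyAll(ctx context.Context, cfg config) error {
	for subSys := range dynamicHandlers {
		if err := applyForSubSys(ctx, cfg, subSys); err != nil {
			return err
		}
	}
	return nil
}

func main() {
	cfg := config{"api": "requests_max=100", "scanner": "delay=2.0"}
	_ = applyForSubSys(context.Background(), cfg, "scanner") // targeted update
	_ = applyAll(context.Background(), cfg)                  // startup path
}
```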
@@ -790,7 +805,7 @@ func newSrvConfig(objAPI ObjectLayer) error {
globalServerConfigMu.Unlock()
// Save config into file.
return saveServerConfig(GlobalContext, objAPI, globalServerConfig)
return saveServerConfig(GlobalContext, objAPI, srvCfg)
}
func getValidConfig(objAPI ObjectLayer) (config.Config, error) {
@@ -815,17 +830,3 @@ func loadConfig(objAPI ObjectLayer) error {
return nil
}
// getOpenIDValidators - returns ValidatorList which contains
// enabled providers in server config.
// A new authentication provider is added like below
// * Add a new provider in pkg/iam/openid package.
func getOpenIDValidators(cfg openid.Config) *openid.Validators {
validators := openid.NewValidators()
if cfg.Enabled {
validators.Add(&cfg)
}
return validators
}


@@ -69,12 +69,18 @@ func migrateIAMConfigsEtcdToEncrypted(ctx context.Context, client *etcd.Client)
return err
}
// If the backend doesn't have this file, it means we have already
// attempted the migration.
if !encrypted {
return nil
}
if encrypted && GlobalKMS != nil {
stat, err := GlobalKMS.Stat()
stat, err := GlobalKMS.Stat(ctx)
if err != nil {
return err
}
logger.Info("Attempting to re-encrypt IAM users and policies on etcd with %q (%s)", stat.DefaultKey, stat.Name)
logger.Info(fmt.Sprintf("Attempting to re-encrypt IAM users and policies on etcd with %q (%s)", stat.DefaultKey, stat.Name))
}
listCtx, cancel := context.WithTimeout(ctx, 1*time.Minute)
@@ -109,6 +115,8 @@ func migrateIAMConfigsEtcdToEncrypted(ctx context.Context, client *etcd.Client)
return fmt.Errorf("Decrypting IAM config failed %w, possibly credentials are incorrect", err)
}
}
} else {
return fmt.Errorf("Decrypting IAM config failed %w, possibly credentials are incorrect", err)
}
}
data = pdata
@@ -131,6 +139,7 @@ func migrateIAMConfigsEtcdToEncrypted(ctx context.Context, client *etcd.Client)
if encrypted && GlobalKMS != nil {
logger.Info("Migration of encrypted IAM config data completed. All data is now encrypted on etcd.")
}
return deleteKeyEtcd(ctx, client, backendEncryptedFile)
}
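
The migration above tries legacy credential-based decryption first and only then falls back to the KMS, so configs sealed under either scheme survive the re-encrypt pass. A compact sketch of that fallback chain; decryptLegacy and decryptKMS are stand-ins, not the madmin/kms APIs:

```go
package main

import (
	"errors"
	"fmt"
)

// Stand-ins for madmin.DecryptData (legacy credentials) and
// config.DecryptBytes (KMS-backed), respectively.
func decryptLegacy(data []byte) ([]byte, error) { return nil, errors.New("not legacy-encrypted") }
func decryptKMS(data []byte) ([]byte, error)    { return []byte("plaintext"), nil }

func decryptConfig(data []byte, haveKMS bool) ([]byte, error) {
	// Try the legacy scheme first.
	pdata, err := decryptLegacy(data)
	if err == nil {
		return pdata, nil
	}
	if !haveKMS {
		return nil, fmt.Errorf("decrypting config failed %w, possibly credentials are incorrect", err)
	}
	// Fall back to KMS decryption for configs sealed with a KMS key.
	pdata, err = decryptKMS(data)
	if err != nil {
		return nil, fmt.Errorf("decrypting config failed %w, possibly credentials are incorrect", err)
	}
	return pdata, nil
}

func main() {
	out, err := decryptConfig([]byte{0x01, 0x02}, true)
	fmt.Println(string(out), err) // plaintext <nil>
}
```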
@@ -139,51 +148,63 @@ func migrateConfigPrefixToEncrypted(objAPI ObjectLayer, encrypted bool) error {
return nil
}
if encrypted && GlobalKMS != nil {
stat, err := GlobalKMS.Stat()
stat, err := GlobalKMS.Stat(context.Background())
if err != nil {
return err
}
logger.Info("Attempting to re-encrypt config, IAM users and policies on MinIO with %q (%s)", stat.DefaultKey, stat.Name)
logger.Info(fmt.Sprintf("Attempting to re-encrypt config, IAM users and policies on MinIO with %q (%s)", stat.DefaultKey, stat.Name))
}
	var marker string
	for {
		res, err := objAPI.ListObjects(GlobalContext, minioMetaBucket, minioConfigPrefix, marker, "", maxObjectList)
		if err != nil {
			return err
		}
		for _, obj := range res.Objects {
			data, err := readConfig(GlobalContext, objAPI, obj.Name)
			if err != nil {
				return err
			}
			if !utf8.Valid(data) {
				pdata, err := madmin.DecryptData(globalActiveCred.String(), bytes.NewReader(data))
				if err != nil {
					if GlobalKMS != nil {
						pdata, err = config.DecryptBytes(GlobalKMS, data, kms.Context{
							minioMetaBucket: path.Join(minioMetaBucket, obj.Name),
						})
						if err != nil {
							pdata, err = config.DecryptBytes(GlobalKMS, data, kms.Context{
								minioMetaBucket: obj.Name,
							})
							if err != nil {
								return fmt.Errorf("Decrypting IAM config failed %w, possibly credentials are incorrect", err)
							}
						}
					} else {
						return fmt.Errorf("Decrypting IAM config failed %w, possibly credentials are incorrect", err)
					}
				}
				data = pdata
			}
			if GlobalKMS != nil {
				data, err = config.EncryptBytes(GlobalKMS, data, kms.Context{
					obj.Bucket: path.Join(obj.Bucket, obj.Name),
				})
				if err != nil {
					return err
				}
			}
			if err = saveConfig(GlobalContext, objAPI, obj.Name, data); err != nil {
				return err
			}
		}
		if !res.IsTruncated {
			break
		}
		marker = res.NextMarker
	}
	results := make(chan ObjectInfo)
	if err := objAPI.Walk(GlobalContext, minioMetaBucket, minioConfigPrefix, results, ObjectOptions{}); err != nil {
		return err
	}
	for obj := range results {
		data, err := readConfig(GlobalContext, objAPI, obj.Name)
		if err != nil {
			return err
		}
		if !utf8.Valid(data) {
			data, err = madmin.DecryptData(globalActiveCred.String(), bytes.NewReader(data))
			if err != nil {
				return fmt.Errorf("Decrypting config failed %w, possibly credentials are incorrect", err)
			}
		}
		if GlobalKMS != nil {
			data, err = config.EncryptBytes(GlobalKMS, data, kms.Context{
				obj.Bucket: path.Join(obj.Bucket, obj.Name),
			})
			if err != nil {
				return err
			}
		}
		if err = saveConfig(GlobalContext, objAPI, obj.Name, data); err != nil {
			return err
		}
	}
if encrypted && GlobalKMS != nil {
logger.Info("Migration of encrypted config data completed. All config data is now encrypted.")
}
return deleteConfig(GlobalContext, objAPI, backendEncryptedFile)
}
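
The Walk-based rewrite streams ObjectInfo over a channel instead of paginating with markers; the contract is that the producer closes the channel so the consumer's range loop terminates. A self-contained sketch of that contract, where walk is a toy producer over a slice standing in for ObjectLayer.Walk:

```go
package main

import "fmt"

type objectInfo struct{ Name string }

// walk streams entries to results and closes the channel when done,
// mirroring the producer side of a Walk-style API.
func walk(prefix string, objects []objectInfo, results chan<- objectInfo) error {
	go func() {
		defer close(results) // without this, the consumer's range blocks forever
		for _, obj := range objects {
			results <- obj
		}
	}()
	return nil
}

func main() {
	results := make(chan objectInfo)
	all := []objectInfo{{"config/iam/policy.json"}, {"config/config.json"}}
	if err := walk("config/", all, results); err != nil {
		fmt.Println(err)
		return
	}
	for obj := range results { // terminates when walk closes the channel
		fmt.Println("migrating", obj.Name)
	}
}
```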
