Compare commits


438 Commits

Author SHA1 Message Date
Aditya Manthramurthy
c4239ced22 run IAM purge routines deterministically every hr (#20587)
The existing implementation runs the IAM purge routines for expired LDAP
and OIDC accounts with a probability of 0.25 after every IAM refresh. This
change ensures that they run once every hour.
2024-10-29 09:01:48 -07:00
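
A minimal sketch of the scheduling change, assuming hypothetical names (`onIAMRefresh`, `purgeExpiredAccounts`) that stand in for the actual routines:

```
package main

import (
	"fmt"
	"time"
)

// purgeExpiredAccounts is a stand-in for the LDAP/OIDC purge routines.
func purgeExpiredAccounts() { fmt.Println("purging expired accounts") }

var lastPurge time.Time

// onIAMRefresh runs after every IAM refresh; the purge now fires
// deterministically at most once per hour instead of with p=0.25.
func onIAMRefresh() {
	if time.Since(lastPurge) >= time.Hour {
		purgeExpiredAccounts()
		lastPurge = time.Now()
	}
}

func main() { onIAMRefresh() }
```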
Anis Eleuch
f85c28e960 heal: large objects fix and avoid .healing.bin corner case premature exit (#20577)
xlStorage.Healing() returns nil if there is an error reading
.healing.bin or if the latter is empty. The healing.bin update()
call returns early if .healing.bin is empty; hence, no further update
of .healing.bin is possible.

A .healing.bin can be empty if os.Open() with O_TRUNC is successful
but the next Write returns an error.

To avoid this situation, do not make healingTracker.update()
return early when .healing.bin is empty; write it again instead.

This commit also fixes wrong error log printing when an object is
healed on another drive in the same erasure set but not on the drive
that the fresh-drive healing code is actively healing. Currently, it prints
<nil> instead of the actual error.

* heal: Scan .minio.sys metadata only during site-wide heal (#137)

mc admin heal always invokes healing of .minio.sys, but sometimes the latter
contains a lot of data (many service accounts, STS accounts, etc.), which
makes the mc admin heal command very slow.

Only invoke .minio.sys healing when no bucket was specified in the `mc admin
heal` command.
2024-10-26 02:58:27 -07:00
Anis Eleuch
f7e176d4ca heal: Avoid deadline error with very large objects (#140) (#20586)
Healing a large object with a normal scan mode, where no part reads
are involved, can still fail after 30 seconds if the object has
too many parts, mainly when hard disks are being used.

The reason is that a single general deadline was checked across all
parts; instead, apply a deadline per part.
2024-10-26 02:56:26 -07:00
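
A sketch of the per-part deadline idea under stated assumptions; `healPart` and the simulated work are illustrative stand-ins, not the actual heal code:

```
package main

import (
	"context"
	"fmt"
	"time"
)

// healPart is a stand-in for reading/verifying a single part.
func healPart(ctx context.Context, n int) error {
	select {
	case <-time.After(100 * time.Millisecond): // simulated work
		return nil
	case <-ctx.Done():
		return ctx.Err()
	}
}

func healObject(parts int) error {
	// Old: one 30s deadline covering every part; objects with many
	// parts on slow disks could not finish in time.
	// New: a fresh deadline per part.
	for i := 0; i < parts; i++ {
		ctx, cancel := context.WithTimeout(context.Background(), 30*time.Second)
		err := healPart(ctx, i)
		cancel()
		if err != nil {
			return fmt.Errorf("part %d: %w", i, err)
		}
	}
	return nil
}

func main() { fmt.Println(healObject(5)) }
```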
Aditya Manthramurthy
72a0d14195 fix: avoid useless expires value in listing meta (#20584)
When listing objects with metadata, avoid returning an "expires" time
metadata value when its value is the zero time as this means that no
expires value is set on the object.
2024-10-24 19:13:19 -07:00
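
A small sketch of the zero-time check described above; `expiresMetadata` is a hypothetical helper, not the actual listing code:

```
package main

import (
	"fmt"
	"net/http"
	"time"
)

// expiresMetadata returns the "expires" value for listing metadata,
// or "" when the value is the zero time, i.e. no expiry is set.
func expiresMetadata(expires time.Time) string {
	if expires.IsZero() {
		return "" // omit the key entirely in the listing response
	}
	return expires.UTC().Format(http.TimeFormat)
}

func main() {
	fmt.Printf("unset: %q\n", expiresMetadata(time.Time{}))
	fmt.Printf("set:   %q\n", expiresMetadata(time.Now().Add(24*time.Hour)))
}
```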
Klaus Post
6abe4128d7 Fix ILM expire workers exiting (#20578)
Fix expire workers exiting

Under 2 conditions ILM expire workers would exit, eventually causing all workers to terminate.
2024-10-23 08:35:37 -07:00
Klaus Post
ed5ed7e490 Trace ILM errors (#20576)
Some paths would attempt transitions but in case of failures 
no traces would be emitted.

Add traces (with errors) when transition operations fail.
2024-10-22 14:10:34 -07:00
Klaus Post
51410c9023 Clear omitted fields (#20575)
Searched `msg:"[a-zA-Z0-9]*,omitempty` through the codebase.

Uses latest tinylib master.
2024-10-22 08:30:50 -07:00
Shubhendu
96ca402dcd Correct the date filter check for batch replication (#20569)
The condition was incorrect: we were comparing the filter
value against the modification time of the object.

For example, if the created-after filter date is after the modification
time of the object, that means the object was created before the filter
time and should be skipped during replication, because per the
filter we need only the objects created after the filter date.

Signed-off-by: Shubhendu Ram Tripathi <shubhendu@minio.io>
2024-10-18 08:32:09 -07:00
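
A sketch of the corrected comparison, with hypothetical names (`matchesDateFilter`) standing in for the batch-replication filter logic:

```
package main

import (
	"fmt"
	"time"
)

// matchesDateFilter reports whether an object with the given
// modification time passes created-after/created-before filters.
// Zero-valued filters are treated as "not set".
func matchesDateFilter(modTime, createdAfter, createdBefore time.Time) bool {
	if !createdAfter.IsZero() && modTime.Before(createdAfter) {
		return false // object is older than the created-after filter
	}
	if !createdBefore.IsZero() && modTime.After(createdBefore) {
		return false // object is newer than the created-before filter
	}
	return true
}

func main() {
	after := time.Date(2024, 9, 1, 0, 0, 0, 0, time.UTC)
	obj := time.Date(2024, 8, 15, 0, 0, 0, 0, time.UTC)
	fmt.Println(matchesDateFilter(obj, after, time.Time{})) // false: skip
}
```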
Anis Eleuch
3da7c9cce3 repl: Fix removal of replicator svc when keycloak is configured (#120)
When the Keycloak vendor is set, the code will start to clean up service
accounts whose parent users no longer exist. However, the code will also
look for the parent user of site-replicator-0, MINIO_ROOT_USER, which
obviously does not exist in Keycloak. Therefore, site-replicator-0
would be removed automatically.

This commit will avoid cleaning up service accounts generated from
the root user.
2024-10-14 09:35:37 -07:00
Harshavardhana
a14e19ec54 remove support for s390x 2024-10-13 07:06:17 -07:00
Minio Trusted
e091dde041 Update yaml files to latest version RELEASE.2024-10-13T13-34-11Z 2024-10-13 14:05:41 +00:00
Harshavardhana
d10bb7e1b6 remove reference to s390x binary, we won't publish them anymore 2024-10-13 06:34:11 -07:00
Anis Eleuch
7ebceacac6 heal: Fix deep scan failing to heal objects (#117)
The verify file handler response format was changed from gob to msgp
two months ago, but we forgot to update the verify handler client.

VerifyFile is only called during a heal deep scan (bitrot check).
HealObject() will fail in that case, mark all disks corrupted, and
return early (the object is treated as unrecoverable, but it is also not
removed).

It is somewhat rare for HealObject to be called with a deep scan flag. It
is called when a HealObject with a normal scan (e.g. new drive healing)
detects bitrot corruption; therefore, healing objects with detected
bitrot corruption would fail.
2024-10-13 06:07:21 -07:00
Harshavardhana
1593cb615d avoid unnecessary logging for KMS secret key mismatch (#20549) 2024-10-13 06:06:08 -07:00
Yannis Mazzer
86a41d1631 fix(helm) removing clusterDomain from startup command to avoid local … (#20547) 2024-10-11 05:21:05 -07:00
Taran Pelkey
d4157b819c Allow LDAP DNs with slashes to be loaded from object store (#20541) 2024-10-10 16:40:37 -07:00
Yannis Mazzer
e0aceca1b7 feat(helm) making securityContext consistent (#20546) 2024-10-10 08:48:31 -07:00
Harrison Brown
87804624fe Helm: Add extraVolumes and extraVolumeMounts to the customCommandJob section (#19988)
added extraVolumes and extraVolumeMounts to the customCommandJob section
2024-10-10 08:48:16 -07:00
Poorna
e029f8a9d7 set kms keyid in replication opts (#20542) 2024-10-09 23:49:55 -07:00
Poorna
1bc6681176 fix tagging overwrite during resync (#20525) 2024-10-04 22:16:15 -07:00
Poorna
28322124e2 remove replication stats from data usage cache (#20524)
This is no longer needed since historical stats are not maintained anymore.
2024-10-04 15:23:33 -07:00
Harshavardhana
cbfe9de3e7 do not download binary before verifying the version (#20523)
fixes https://github.com/minio/mc/issues/4980
2024-10-04 04:32:32 -07:00
Harshavardhana
dc86b8d9d4 fix: when readQuorum, inconsistent metadata return 404 (#20522)
In cases where we cannot possibly know a way to read and
construct the object, and it is impossible to achieve any form of
quorum via xl.meta even though we have sufficient responses from
all the drives, we should return object not found.
2024-10-04 00:13:14 -07:00
Taran Pelkey
ba70118e2b Add root user to ListAccessKeysBulk (#20517) 2024-10-03 16:11:02 -07:00
Minio Trusted
cb1d3e50f7 Update yaml files to latest version RELEASE.2024-10-02T17-50-41Z 2024-10-02 21:26:16 +00:00
Harshavardhana
ded0b19d97 avoid audit logs with unexpected errors (#20516)
fixes #20513
2024-10-02 10:50:41 -07:00
Poorna
d0bb3dd136 list all batch job types (#20510)
continues #20480
2024-10-01 23:38:17 -07:00
Harshavardhana
ab7714b01e upgrade relevant dependencies (#20507) 2024-10-01 23:37:55 -07:00
Ramon de Klein
e5b18df6db Fix checksum error during startup when minio is loaded via PATH environment variable (#20509) 2024-10-01 15:13:18 -07:00
Anis Eleuch
0abfd1bcb1 heal: Use etag as quorum when none found for modtime (#20500) 2024-10-01 08:19:10 -07:00
Harshavardhana
6186d11761 handle the locks properly for multi-pool callers (#20495)
- PutObjectMetadata()
- PutObjectTags()
- DeleteObjectTags()
- TransitionObject()
- RestoreTransitionObject()

Also improve the behavior of multipart code across
pool locks, hold locks only once per upload ID for

- CompleteMultipartUpload()
- AbortMultipartUpload()
- ListObjectParts() (read-lock)
- GetMultipartInfo() (read-lock)
- PutObjectPart() (read-lock)

This avoids pointless lock attempts across pools; the
overhead otherwise grows as O(n) when there are n pools.
2024-09-29 15:40:36 -07:00
Poorna
e8b457e8a6 Change delete marker proxy test to use distributed setup (#20494) 2024-09-27 18:02:26 -07:00
Harshavardhana
afea40cc0f fix: keep locks based on the first pool, first EC set (#93)
Multi-object deletion may or may not compete with locks
granted to other callers, allowing concurrent operations
to succeed over one another.

A continuation of the PR https://github.com/minio/minio/pull/20356
2024-09-27 03:41:37 -07:00
Aditya Manthramurthy
402b798f1b fix: allow all console actions with custom authZ (#20489)
When custom authorization via plugin is enabled, the console will now
render the UI as if all actions are allowed. Since server cannot
determine the exact policy allowed for a user via the plugin, this is
acceptable to do. If a particular action is actually not allowed by the
plugin the call will result in an error.

Previously the server was evaluating a policy when custom authZ is
enabled - this is fixed now.
2024-09-26 23:44:44 -07:00
Klaus Post
4759532e90 Fix PPC cgroup memory limit (#20488)
The "unlimited" value on PPC wasn't exactly the same as amd64.

Instead compare against an "unreasonably big value".

Would cause OOM in anything using the concurrent request limit.
2024-09-26 10:07:10 -07:00
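
A sketch of the threshold comparison described above; the constant and helper names are illustrative, not the actual cgroup-parsing code:

```
package main

import "fmt"

// Treat any cgroup limit above an "unreasonably big" threshold as
// unlimited, instead of comparing against an exact sentinel value
// that differs between architectures (e.g. amd64 vs ppc64le).
const unreasonablyBig = uint64(1) << 62 // ~4 EiB; no real cgroup limit

func effectiveMemLimit(raw uint64) (limit uint64, unlimited bool) {
	if raw >= unreasonablyBig {
		return 0, true
	}
	return raw, false
}

func main() {
	fmt.Println(effectiveMemLimit(0x7ffffffffffff000)) // 0 true
	fmt.Println(effectiveMemLimit(8 << 30))            // 8589934592 false
}
```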
Harshavardhana
7f1e1713ab use absolute path for binary checksum verification (#20487) 2024-09-26 08:03:08 -07:00
Poorna
b2c5819dbc hold on to batch job stats till cleanup (#20480)
This PR also fixes job stats not available after restart
2024-09-24 14:50:11 -07:00
Anis Eleuch
2b0156b1fc Add TTFB to all APIs and enable for responses without body (#20479)
Add TTFB for all requests in metrics-v3, in addition to the existing
GetObject metric. Also, for requests that do not return a body in the
response, calculate TTFB as the time when the HTTP status code and
headers are sent.
2024-09-24 10:13:00 -07:00
Harshavardhana
f6f0807c86 cleanup existing part.N's before renamePart() (#20466)
this is a safety net to avoid any unexpected parts showing up.
2024-09-24 04:26:41 -07:00
Harshavardhana
0c53d86017 remove the list from 'mc stat' from testing via '--no-list' (#20468)
avoid 'listing' as it may get incorrect results for
these tests; we are only interested in 'mc stat', i.e.,
HEAD object, here.
2024-09-24 01:03:58 -07:00
Minio Trusted
6a6ee46d76 Update yaml files to latest version RELEASE.2024-09-22T00-33-43Z 2024-09-23 19:42:46 +00:00
Klaus Post
974cbb3bb7 Limit jstream parse depth (#20474)
Add https://github.com/bcicen/jstream/pull/15 by vendoring the package.

Sets JSON depth limit to 100 entries in S3 Select.
2024-09-23 12:35:41 -07:00
Harshavardhana
03e996320e upgrade deps pkg/v3, madmin-go/v3 and lz4/v4 (#20467) 2024-09-21 17:33:43 -07:00
Taran Pelkey
78fcb76294 Add ListAccessKeysBulk API for builtin user access keys (#20381) 2024-09-21 04:35:40 -07:00
Ramon de Klein
3d152015eb Use MinIO console v1.7.1 (#20465) 2024-09-20 18:18:54 -07:00
jiuker
ade8925155 fix: add default http timeout for audit webhook (#20460) 2024-09-20 09:02:50 -07:00
Klaus Post
05a6c170bf Fix PutObject Trailing checksum (#20456)
PutObject would verify trailing checksums, but not store them.

Fixes #20455
2024-09-19 05:59:07 -07:00
Ramon de Klein
e1c2344591 Log an error when calculating the binary checksum failed (#20454) 2024-09-18 20:48:32 -07:00
Ramon de Klein
48a591e9b4 Ensure proper stale_uploads_cleanup_interval is used at all times (#20451) 2024-09-18 10:59:26 -07:00
Anis Eleuch
fa5d9c02ef batch: Set a default retry attempts and a prefix (#20452)
A batch job will fail if the retry attempt is not provided. The reason
is that the code mistakenly gets the retry attempts from the job status
rather than the job yaml file.

This will also set a default empty prefix for batch expiration.

Also, this will avoid trimming the prefix, since the yaml decoder already
does that when no quotes are provided, and we should not trim when quotes
are provided and the user supplied a leading or trailing space.
2024-09-18 10:59:03 -07:00
Shubhendu
5bd27346ac Added iam import tests for openid (#20432)
Tests if imported service accounts have 
required access to buckets and objects.

Signed-off-by: Shubhendu Ram Tripathi <shubhendu@minio.io>

Co-authored-by: Harshavardhana <harsha@minio.io>
2024-09-17 09:45:46 -07:00
Taran Pelkey
3c82cf9327 Fix behavior of AddServiceAccountLDAP for non-admin users (#20442) 2024-09-16 16:04:51 -07:00
Harshavardhana
70d40083e9 remove windows CI/CD for now (#20441)
Windows support is now community-only and
remains source-compile friendly.
2024-09-16 13:46:53 -07:00
Klaus Post
8a30967542 Limit S3 Select JSON documents to 10MB (#20439)
Closes #20430

Limit allocations from badly formed documents.
2024-09-16 09:59:03 -07:00
Harshavardhana
229f04ab79 fix: the permissions one more time on /usr/bin
fixes https://github.com/minio/operator/issues/2319
2024-09-15 16:10:23 -07:00
Minio Trusted
1123dc3676 Update yaml files to latest version RELEASE.2024-09-13T20-26-02Z 2024-09-14 13:18:19 +00:00
Harshavardhana
5bf41aff17 hold granular locking for multi-pool PutObject() (#20434)
- PutObject() for multi-pool setups was holding large
  region locks, which was not necessary. This affects
  almost all slowpoke clients and lengthy uploads.

- Re-arrange locks for CompleteMultipart, PutObject
  to be close to rename()
2024-09-13 13:26:02 -07:00
Anis Eleuch
e47d787adb tier: Add force param to force tiering removal (#20355)
Currently, it is not possible to remove a tier if it is not accessible
or contains some data; add a force flag to make the removal succeed
in that case.
2024-09-12 13:44:05 -07:00
Anis Eleuch
398ffb1136 Enable compression with encryption in CopyObject API (#20411)
When encryption and compression are both enabled, the
server would avoid compressing the data for no apparent reason.

This commit will enable it and update unit tests.
2024-09-12 13:10:44 -07:00
Shubhendu
5862582cd7 IAM import test with missing entities (#20368)
Signed-off-by: Shubhendu Ram Tripathi <shubhendu@minio.io>
2024-09-12 08:59:00 -07:00
fumoboy007
e36b1146d6 Download static cURL into release Docker image for all supported architectures (#20424)
Download static cURL into release Docker image for all supported architectures.

Currently, the static cURL executable is only downloaded for the `amd64` architecture. However, `arm64` and `ppc64le` variants are also [available](https://github.com/moparisthebest/static-curl/releases/tag/v8.7.1).
2024-09-12 08:45:19 -07:00
Harshavardhana
c28a4beeb7 multipart support etag and pre-read small objects (#20423) 2024-09-12 05:24:04 -07:00
Sveinn
15ab0808b3 making sure we don't panic if globalReplicationStats have not been set (#20427) 2024-09-12 04:39:51 -07:00
Sveinn
3bae73fb42 Add http_timeout to audit webhook configurations (#20421) 2024-09-11 15:20:42 -07:00
Harshavardhana
bc527eceda handle the actualSize() properly for PostUpload() (#20422)
postUpload() incorrectly saves the actual size as '-1';
we should save the correct size when possible.

Bonus: fix the PutObjectPart() write locker: instead
of holding a lock before we read the client stream,
we should hold it only when we need to commit the parts.
2024-09-11 11:35:37 -07:00
Anis Eleuch
b963f36e1e fix: Add missing grid handler of clearing upload-id from the cache (#20420) 2024-09-11 09:09:13 -07:00
Poorna
cdd7512a2e use rename() safety for in-place 'xl.meta' updates (#20414) 2024-09-11 09:08:51 -07:00
Frank Wessels
a6d5287310 Reenable SVE support for Graviton 4 (#20410) 2024-09-10 08:35:12 -07:00
Minio Trusted
22822f4151 Update yaml files to latest version RELEASE.2024-09-09T16-59-28Z 2024-09-10 00:47:02 +00:00
Shubhendu
0b7aa6af87 Skip non existent ldap entities while import (#20352)
Don't hard-error on nonexistent LDAP entities; instead of
logging them, report them via `mc`.

Signed-off-by: Shubhendu Ram Tripathi <shubhendu@minio.io>
2024-09-09 09:59:28 -07:00
Harshavardhana
8c9ab85cfa Add multipart uploads cache for ListMultipartUploads() (#20407)
this cache will be honored only when `prefix=""` while
performing ListMultipartUploads() operation.

This is mainly to satisfy applications like alluxio
for their underfs implementation and tests.

replaces https://github.com/minio/minio/pull/20181
2024-09-09 09:58:30 -07:00
Klaus Post
b1c849bedc Don't send a canceled context to Unlock (#20409)
AFAICT we send a canceled context to unlock (and thereby releaseAll). This will cause network calls to fail.

Instead use background and add 30s timeout.
2024-09-09 08:49:49 -07:00
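
A minimal sketch of the fix described above, assuming a hypothetical `unlock` stand-in for the network unlock/releaseAll call:

```
package main

import (
	"context"
	"fmt"
	"time"
)

// unlock is a stand-in for the network unlock/releaseAll call.
func unlock(ctx context.Context) error {
	if err := ctx.Err(); err != nil {
		return err // a canceled request context would fail here
	}
	return nil
}

func handleRequest(reqCtx context.Context) {
	// Do not reuse reqCtx: by the time we unlock it may already be
	// canceled, which would make the network call fail. Use a fresh
	// background context with a 30s timeout instead.
	ctx, cancel := context.WithTimeout(context.Background(), 30*time.Second)
	defer cancel()
	fmt.Println("unlock:", unlock(ctx))
}

func main() {
	reqCtx, cancel := context.WithCancel(context.Background())
	cancel() // simulate a request that has already finished
	handleRequest(reqCtx)
}
```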
Harshavardhana
fb24bcfee0 fix: set audit/logger webhook retry interval to maximum 1m (#20404) 2024-09-09 02:36:47 -07:00
Harshavardhana
8268c12cfb Add support for audit/logger max retry and retry interval (#20402)
The current implementation retries forever until our
log buffer is full and we start dropping events.

This PR allows you to set a limit after which we give
up on existing audit/logger batches and proceed to
process the new ones.

Bonus:
 - do not blow up buffers beyond batchSize value
 - do not leak the ticker if the worker returns
2024-09-08 05:15:09 -07:00
Sveinn
3f39da48ea fix: retries and failed message counter (#20401) 2024-09-07 17:13:57 -07:00
Klaus Post
9d5cdaa2e3 Limit Response Recorder memory (#20399)
Disable body recording for...

* admin inspect
* admin metrics
* profiling download

Also, if the recorded body is > 10MB, drop it.
2024-09-07 12:16:04 -07:00
Taran Pelkey
84e122c5c3 Fix duplicate groups in ListGroups API (#20396) 2024-09-06 17:28:47 -07:00
Praveen raj Mani
261111e728 Kafka notify: support batched commits for queue store (#20377)
The items will be saved per target batch and will
be committed to the queue store when the batch is full.

Also, periodically commit the batched items to the queue store
based on the configured commit_timeout; the default is 30s.

Bonus: compress queue store multi writes
2024-09-06 16:06:30 -07:00
Harshavardhana
0f1e8db4c5 all 2xx status codes to be success for audit (#20394) 2024-09-06 15:53:34 -07:00
Harshavardhana
64e803b136 fix: avoid waiting on rebalance metadata (#20392)
Rebalance metadata is only good to have; if it
cannot be loaded when starting MinIO
for some reason, we can ignore it,
move on, and let the user start a rebalance
again if needed.
2024-09-06 06:20:19 -07:00
Krishnan Parthasarathi
a0f9e9f661 readParts: Return error when quorum unavailable (#20389)
readParts requires that both part.N and part.N.meta files be present.
This change addresses an issue with how the error returned to the upper
layers was picked from most drives when an UploadPart operation
had failed.
2024-09-06 03:51:23 -07:00
Harshavardhana
b6b7cddc9c make sure listParts returns parts that are valid (#20390) 2024-09-06 02:42:21 -07:00
jiuker
241be9709c fix: jwt error overwritten by nil public key (#20387) 2024-09-05 19:46:36 -07:00
Harshavardhana
85f08d7752 verify part.N exists before reading part.N.meta (#20383)
If part.N doesn't exist, we do not have to complete
the multipart transaction; it simply means that we
have a partial-upload situation at hand.
2024-09-05 13:37:19 -07:00
Klaus Post
6be88a2b99 xl-meta - verify parity data (#20384) 2024-09-05 04:57:44 -07:00
Poorna
060276932d batch:repl fix copy from source -> remote (#20382)
completes fix started by  #20365
2024-09-05 04:57:23 -07:00
Shubhendu
6224849fd1 Dont start console service if MINIO_BROWSER=off (#20374)
By default, even if MINIO_BROWSER=off is set, the code tries to get a
free port available for the console.

Signed-off-by: Shubhendu Ram Tripathi <shubhendu@minio.io>
2024-09-04 10:02:39 -07:00
Anis Eleuch
9b79eec29e site-repl: Fix ILM document replication in some cases (#20380)
S3 spec does not accept an ILM XML document containing both <Filter>
and <Prefix> XML tags, even if both are empty. That is why we added
a 'set' field in some lifecycle structures to decide when and when not to
show a tag. However, we forgot to disallow marshaling of Filter when
'set' is set to false.

This will fix ILM document replication in a site replication
configuration in some cases.
2024-09-04 10:01:26 -07:00
Harshavardhana
c2e318dd40 remove mincache EOS related feature from upstream (#20375) 2024-09-03 11:23:41 -07:00
T-TRz879
69258d5945 ignore if-unmodified-since header if if-match is set (#20326) 2024-09-02 23:33:53 -07:00
Mark Theunissen
d7ef6315ae Store the checksum in PostPolicyHandler so that we can return it on Get/Head (#20364) 2024-09-02 06:13:54 -07:00
Anis Eleuch
aaf4fb1184 batch: repl: A missing prefix in the remote source will fail replication (#20365)
When the prefix field is not provided in the remote source of a yaml
replication job, the code fails to list and wrongly marks the
replication as successful. This commit fixes it.
2024-09-02 05:36:43 -07:00
Harshavardhana
f05641c3c6 fix: /usr/bin path permissions for docker 2024-09-01 04:25:07 -07:00
Harshavardhana
7a34c88d73 add consistent nonce to make multipart deterministic per part (#20359)
This change adds a consistent nonce to ensure
that multipart uploads are deterministic on a 
per-part basis.

Thanks to @klauspost for the work here minio/sio@3cd3734
2024-08-31 11:25:48 -07:00
Harshavardhana
6c746843ac fix: keep locks on same pool for simplicity (#20356)
Locks handed out by different pools would not compete for a
multi-object delete request; this is wrong for obvious
reasons.

The new locking implementation and revamp will rewrite multi-object
locking anyway; this is a workaround for now.
2024-08-30 19:26:49 -07:00
Harshavardhana
bb07df7e7b do not list dangling objects with unmatched ECs (#20351)
This mostly applies to new objects; this simply
ignores such objects, and no application would have
to deal with getting 503s on them.
2024-08-30 09:02:26 -07:00
Minio Trusted
1cb824039e Update yaml files to latest version RELEASE.2024-08-29T01-40-52Z 2024-08-29 22:33:43 +00:00
Harshavardhana
504e52b45e protect bpool from buffer pollution by invalid buffers (#20342) 2024-08-28 18:40:52 -07:00
Anis Eleuch
38c0840834 bucket-metadata: Reload events/repl-targets for all buckets (#20334)
Currently, the bucket events and replication targets are only reloaded
for buckets that failed to load during the first cluster startup,
which is wrong: if a bucket change was made on one node but
that node was not able to notify the other nodes, the other nodes will
reload the bucket metadata config but fail to set the events and bucket
targets in memory.
2024-08-28 08:32:18 -07:00
Harshavardhana
c65e67c357 add more details on the payload sent to webhook audit (#20335) 2024-08-28 08:31:56 -07:00
Harshavardhana
fb2360ff88 when a drive is closed cancel the cleanupTrash goroutine (#20337)
When a hung drive is hot-unplugged, the server might go
into a loop where the previous `format.json` is somehow
still accessible to the process. We try to re-init() drives,
but that seems to leave a previous goroutine hanging around,
since it is not canceled when the drive is closed.

Bonus: add a deadline for the immediate purge routine, to unblock
it if the drive is blocking mutations.
2024-08-28 08:31:42 -07:00
jiuker
1a2de1bdde fix: string format when logging IAM refresh taking over 5s (#20331) 2024-08-26 23:40:33 -07:00
Harshavardhana
993b97f1db fix: docker buildx warnings 2024-08-26 11:30:15 -07:00
Harshavardhana
1c4d28d7af update pkg/v3, minio-go/v7 and mc (#20327) 2024-08-26 08:33:07 -07:00
Harshavardhana
af55f37b27 do not fallback on the drives to load groups for LDAP (#20320)
If a user policy is found, avoid reading from the drives
for missing group mappings; group mappings are not mandatory
and are conditional.

This PR restores the older behavior while making sure that,
if a direct user policy is not found, we still attempt
to load the group mappings from the drives.
2024-08-25 17:22:45 -07:00
Andreas Auernhammer
2d67c26794 improve multipart decryption (#20324)
This commit simplifies and optimizes the decryption of large (multipart)
objects. This PR does two things:
 
- Re-write the init logic for the decryption reader
- Reduce the number of OEK decryptions

Before, the init logic copied some SSE HTTP request headers to 
parse them later. This is simplified to parsing them right away. This
removes some fields from the decryption reader struct.

Further, the decryption reader decrypted the OEK using the client-provided 
key (SSE-C) or the KMS (SSE-S3 / SSE-KMS) for each part. This is redundant 
since the OEK is the same for all parts. In particular, a KMS call might be a 
network request. Now, the OEK is decrypted once for the entire multipart object.

This should improve latency when reading encrypted multipart objects 
and reduce requests to the KMS.

Signed-off-by: Andreas Auernhammer <github@aead.dev>
2024-08-25 11:07:13 -07:00
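
A heavily simplified sketch of the before/after OEK handling; `kmsDecrypt` and the key-derivation step are placeholders, not MinIO's SSE implementation:

```
package main

import "fmt"

// kmsDecrypt stands in for the (possibly remote) KMS unseal call.
func kmsDecrypt(sealedOEK []byte) []byte {
	fmt.Println("KMS call") // potentially a network round trip
	return sealedOEK        // placeholder for the unsealed key
}

func decryptParts(sealedOEK []byte, parts int) {
	// Before: kmsDecrypt was effectively called once per part.
	// After: decrypt the OEK once; it is identical for every part.
	oek := kmsDecrypt(sealedOEK)
	for i := 0; i < parts; i++ {
		_ = oek // derive the per-part decryption state from oek here
	}
}

func main() { decryptParts([]byte("sealed"), 8) }
```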
Harshavardhana
006cacfefb to turn-off healing drop legacy ENV (#20315) 2024-08-23 15:43:31 -07:00
bestgopher
c28f09d4a7 refactor: displays the OS-specific doc url (#20313) 2024-08-23 07:11:35 -07:00
Anis Eleuch
73992d2b9f s3: DeleteBucket to use listing before returning bucket not empty error (#20301)
Use Walk(), which is a recursive listing with versioning, to check if 
the bucket has some objects before being removed. This is beneficial
because the bucket can contain multiple dangling objects in multiple
drives.

Also, this will prevent a bug where a bucket is deleted in a deployment
that has many erasure sets but the bucket contains one or a few objects
not spread across enough erasure sets.
2024-08-22 14:57:20 -07:00
Anis Eleuch
a8f143298f heal: Reset healing params when a retry is decided (#20285)
Currently, a retry of a new drive healing does not reset
HealedBuckets, which means that the next healing retry will skip those
buckets. This commit fixes that behavior.

Also, the skipped-objects counter will include objects
uploaded after the healing started.
2024-08-22 05:35:43 -07:00
jiuker
2d44c161c7 fix: support export bucket policy with ExportBucketMetadata (#20308) 2024-08-22 03:44:35 -07:00
Mark Theunissen
9511056f44 fix: simplify error logged when logger target is unreachable (#20304) 2024-08-22 02:43:48 -07:00
Mark Theunissen
fb4ad000b6 support parseObjectAttributes to handle multiple header values (#20295) 2024-08-21 14:13:59 -07:00
shandongzhejiang
a8ff12bc72 chore: fix some comments (#20294)
Signed-off-by: shandongzhejiang <shandongzhejiang@icloud.com>
2024-08-21 13:14:24 -07:00
jiuker
1e1bd3afd9 use io.NopCloser replace closeWrapper (#20287) 2024-08-21 05:20:54 -07:00
Anis Eleuch
7b239ae154 sftp: Fix operations with an internal service account (#20293)
sftp sends local requests to the S3 port while passing the session token
header when the account corresponds to a service account. However, this
is not permitted and will throw an error: "The security token included in the
request is invalid"

This commit will avoid passing the session token to the upper layer that
initializes MinIO client to avoid this error.
2024-08-20 13:00:29 -07:00
Aditya Manthramurthy
8a11282522 [fix] S3Select: Add some missing input validation (#20278)
Prevents server panic when some CSV parameters are empty.
2024-08-20 11:31:45 -07:00
Anis Eleuch
85c3db3a93 heal: Add finished flag to .healing.bin to avoid removing this latter (#20250)
Sometimes, we need historical information in .healing.bin, such as the
number of expired objects that healing intentionally skips, which can
create drive-usage disparity within the same erasure set. For that reason,
this commit will no longer remove .healing.bin; it now has a new
field called Finished so we know healing is finished on that drive.
2024-08-20 08:42:49 -07:00
Ramon de Klein
37383ecd09 Ensure that sig/sha are in the same layer (#20282) 2024-08-18 00:13:33 -07:00
Mark Theunissen
6378ca10a4 kms.ListKeys returns CreatedBy/CreatedAt when information is available (#20223) 2024-08-17 23:43:03 -07:00
Minio Trusted
9e81ccd2d9 Update yaml files to latest version RELEASE.2024-08-17T01-24-54Z 2024-08-17 12:00:24 +00:00
Harshavardhana
72cff79c8a add missing STS accounts loading (#20279)
PR #20268 missed loading STS accounts map properly
2024-08-16 18:24:54 -07:00
Harshavardhana
a5702f978e remove requests deadline, instead just reject the requests (#20272)
Additionally set

 - x-ratelimit-limit
 - x-ratelimit-remaining

To indicate the request rates.
2024-08-16 01:43:49 -07:00
Poorna
4687c4616f try loading temp account if not in cache (#20266) 2024-08-15 23:12:42 -07:00
Harshavardhana
d8dfb57d5c fix: typos from previous PR #20270 2024-08-15 12:33:45 -07:00
Ramon de Klein
b07c58aa05 Add signature and SHA to the Docker images (#20270)
add signature and SHA to the Docker images
2024-08-15 08:48:04 -07:00
Harshavardhana
cc0c41d216 remove region locks and make them simpler (#20268)
- The single-flight approach is now optional, instead of the default.
- Parallelize the loaders up to 32 items per asset (more room for improvement possible).
2024-08-15 08:41:03 -07:00
Klaus Post
f1302c40fe Fix uninitialized replication stats (#20260)
Services are unfrozen before `initBackgroundReplication` is finished. This means that 
the globalReplicationStats write is racy. Switch to an atomic pointer.

Provide the `ReplicationPool` with the stats, so it doesn't have to be grabbed 
from the atomic pointer on every use.

All other loads check for nil, and calls return empty values while stats
still haven't been initialized.
2024-08-15 05:04:40 -07:00
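
A sketch of the atomic-pointer pattern described above, with a stand-in stats type:

```
package main

import (
	"fmt"
	"sync/atomic"
)

// ReplicationStats is a stand-in for the real stats type.
type ReplicationStats struct{ Queued int64 }

// globalReplicationStats is written once initialization finishes;
// readers may run before that, so loads must tolerate nil.
var globalReplicationStats atomic.Pointer[ReplicationStats]

func queuedCount() int64 {
	st := globalReplicationStats.Load()
	if st == nil {
		return 0 // stats not initialized yet: return an empty value
	}
	return st.Queued
}

func main() {
	fmt.Println(queuedCount()) // 0: safe before initialization
	globalReplicationStats.Store(&ReplicationStats{Queued: 42})
	fmt.Println(queuedCount()) // 42
}
```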
Harshavardhana
3b1aa40372 support relative paths for KMS_SECRET_KEY_FILE (#20264)
fixes #20251
2024-08-15 04:46:39 -07:00
Klaus Post
d96798ae7b Add support profile deadlines and concurrent operations (#20244)
* Allow a maximum of 10 seconds to start profiling operations.
* Download up to 16 profiles concurrently, but only allow 10 seconds for
  each (does not include write time).
* Add cluster info as the first operation.
* Ignore remote download errors.
* Stop remote profiles if the request is terminated.
2024-08-15 03:36:00 -07:00
Anis Eleuch
b508264ac4 sr: Avoid recursion when loading site replicator credentials (#20262)
If site replication is enabled and the code tries to extract JWT
claims while the site replication service account credentials are still
not loaded, the code will enter an infinite loop, causing
high CPU usage.

Another possibility for the infinite loop is having some service accounts
created by an old deployment version, where the service account JWT was
signed by the root credentials, which is no longer the case.

This commit will remove the possibility of the infinite loop in the code
and add root credential fallback to extract claims from old service
accounts.
2024-08-14 18:29:20 -07:00
Harshavardhana
db78431b1d avoid crash when initializing bucket quota cache (#20258) 2024-08-14 17:34:56 -07:00
Sveinn
743ddb196a Removing the audit log retry mechanism (#20259) 2024-08-14 15:25:08 -07:00
Klaus Post
3ffeabdfcb Fix govet+staticcheck issues (#20263)
This is better: https://github.com/golang/go/issues/60529
2024-08-14 10:11:51 -07:00
Anis Eleuch
51b1f41518 heal: Persist MRF queue in the disk during shutdown (#19410) 2024-08-13 15:26:05 -07:00
Harshavardhana
e7a56f35b9 flatten out audit tags, do not send as free-form (#20256)
Move away from map[string]interface{} to map[string]string
to simplify the audit and also provide concise information.

This avoids large allocations under load, and reduces the amount
of audit information generated, as the current implementation
was a bit free-form; instead, all data structures must be
flattened.
2024-08-13 15:22:04 -07:00
rubyisrust
516af01a12 chore: fix some function names (#20243)
Signed-off-by: rubyisrust <rustrover@icloud.com>
2024-08-13 11:23:33 -07:00
Harshavardhana
acdb355070 update deps and update azure WARM tier implementation (#20247) 2024-08-13 11:21:34 -07:00
Mark Theunissen
37c02a5f7b Add dummy DeleteBucketCors for safety (#20253) 2024-08-13 08:25:16 -07:00
Krishnan Parthasarathi
04be352ae9 Relax quorum agreement on DataDir values (#20232)
Previously, we checked if we had a quorum on the DataDir value. 
We are removing this check, which allows reading objects with different 
DataDir values in a few drives (due to a rebalance-stop race bug) 
provided their eTags or ModTimes match.
2024-08-12 12:02:21 -07:00
Klaus Post
53eb7656de Add admin info timeouts (#20249)
Since a lot of operations load from storage or do remote calls, add a 10-second timeout to each operation.

This should make `mc admin info` return values even under extreme conditions.
2024-08-12 10:24:29 -07:00
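
A sketch of the per-operation timeout pattern; `withTimeout` is a hypothetical helper, not the actual admin-info code:

```
package main

import (
	"context"
	"fmt"
	"time"
)

// withTimeout runs one info-gathering operation under its own 10s
// deadline so a single slow drive or peer cannot stall the whole call.
func withTimeout(parent context.Context, op func(context.Context) error) error {
	ctx, cancel := context.WithTimeout(parent, 10*time.Second)
	defer cancel()
	return op(ctx)
}

func main() {
	err := withTimeout(context.Background(), func(ctx context.Context) error {
		select {
		case <-time.After(50 * time.Millisecond): // simulated storage call
			return nil
		case <-ctx.Done():
			return ctx.Err()
		}
	})
	fmt.Println("op:", err)
}
```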
Klaus Post
d8f0e0ea6e Simplify error logging on event send (#20246)
The output was overly verbose, hard to read, and could leak data.

Print the event as JSON and simplify target & error printing.
2024-08-12 08:55:28 -07:00
Harshavardhana
2e0fd2cba9 implement a safer completeMultipart implementation (#20227)
- optimize writing part.N.meta by writing both part.N
  and its meta in sequence without network component.

- remove part.N.meta and part.N that were only partially
  successful, in quorum-loss situations during renamePart()

- allow a strict read quorum check arbitrated via ETag
  for the given part number; this makes it doubly safe
  upon final commit.

- return an appropriate error when read quorum is missing,
  instead of returning InvalidPart{}, which is a non-retryable
  error. This kind of situation can happen when many
  nodes go offline in rotation; an example of such
  restart() behavior is statefulset updates in k8s.

fixes #20091
2024-08-12 01:38:15 -07:00
Harshavardhana
909b169593 avoid source index to be same as destination index (#20238)
During rebalance-stop, it can happen that
Put() races by overwriting the same object again.

If such a race completes "successfully", it can
potentially proceed to delete the object from the pool,
causing data loss.

This PR enhances #20233 to handle more scenarios such
as these.
2024-08-09 19:30:44 -07:00
Krishnan Parthasarathi
4e67a4027e Prevent overwrites due to rebalance-stop race (#20233)
Rebalance-stop can race with ongoing rebalance operations. This change
prevents these operations from overwriting objects by checking the source
and destination pool indices are different.
2024-08-08 19:05:14 -07:00
Klaus Post
49055658a9 Fix missing hash in GetObjectAttributes (#20231)
SHA256/SHA1 were mixed up.

Simplify code as well.
2024-08-08 13:19:41 -07:00
Harshavardhana
89c58ce87d enhance getActualSize() to return valid values for most situations (#20228) 2024-08-08 08:29:58 -07:00
Andreas Auernhammer
14876a4df1 ldap: use custom TLS cipher suites (#20221)
This commit replaces the LDAP client TLS config and
adds a custom list of TLS cipher suites which support
RSA key exchange (RSA kex).

Some LDAP server connections experience a significant slowdown
when these cipher suites are not available. The Go TLS stack
disables them by default. (Can be enabled via GODEBUG=tlsrsakex=1).

fixes https://github.com/minio/minio/issues/20214

With a custom list of TLS ciphers, Go can pick the TLS RSA key-exchange
cipher. Ref:
```
	if c.CipherSuites != nil {
		return c.CipherSuites
	}
	if tlsrsakex.Value() == "1" {
		return defaultCipherSuitesWithRSAKex
	}
```
Ref: https://cs.opensource.google/go/go/+/refs/tags/go1.22.5:src/crypto/tls/common.go;l=1017

Signed-off-by: Andreas Auernhammer <github@aead.dev>
2024-08-07 05:59:47 -07:00
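
A sketch of how such a custom list can be wired into crypto/tls; the exact suites MinIO enables may differ:

```
package main

import (
	"crypto/tls"
	"fmt"
)

// A custom cipher-suite list that re-enables RSA key exchange, which
// the Go TLS stack disables by default (cf. GODEBUG=tlsrsakex=1).
// Illustrative selection; MinIO's actual list may differ.
var ldapCipherSuites = []uint16{
	tls.TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,
	tls.TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,
	tls.TLS_RSA_WITH_AES_128_GCM_SHA256, // RSA kex, normally disabled
	tls.TLS_RSA_WITH_AES_256_GCM_SHA384, // RSA kex, normally disabled
}

func main() {
	// With CipherSuites non-nil, Go uses exactly this list (see the
	// quoted stdlib snippet above) and can pick an RSA-kex suite.
	cfg := &tls.Config{CipherSuites: ldapCipherSuites}
	fmt.Println(len(cfg.CipherSuites), "cipher suites configured")
}
```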
Mark Theunissen
2681219039 Add dummy PutBucketCors for functional test compatibility (#20220) 2024-08-06 08:41:38 -07:00
Leo
5da7f0100a chore: Adjust setup guide for development. (#20204)
Signed-off-by: ifuryst <ifuryst@gmail.com>
2024-08-05 11:35:53 -07:00
Harshavardhana
dea9abed29 use singleflight when bucket metadata is reloaded() (#20216)
this allows for de-duplicating the callers when called
concurrently, allowing for bucketmetadata reads to be
single call. All concurrent callers will get the same data
as the first one.
2024-08-05 09:50:11 -07:00
Harshavardhana
e3eb5c1328 batch-exp: Remove 1000 maximum objects per call (#20212)
It seems ObjectAPI.DeleteObjects() is clogging up when it is removing
10k versions of a single object.

Authored-by: Anis Eleuch <anis@min.io>
2024-08-04 21:55:25 -07:00
Minio Trusted
fb9364f1fb Update yaml files to latest version RELEASE.2024-08-03T04-33-23Z 2024-08-03 08:48:40 +00:00
Cesar N.
6efb56851c Update console version to 1.7.0 (#20211) 2024-08-02 21:33:23 -07:00
Andrea Longo
db2c7ed1d1 Docs: link to prom collector repo for info on debug metrics (#20209)
link to prom collector repo for info on debug metrics
2024-08-02 15:30:11 -07:00
Poorna
74c047cb03 fix replication last hour metric (#20199)
also adding missing recent_backlog_count metric to v3 metrics
2024-08-01 17:55:27 -07:00
jiuker
50a5ad48fc feat: support batch replication prefix slice (#20033) 2024-08-01 05:53:30 -07:00
Minio Trusted
292fccff6e Update yaml files to latest version RELEASE.2024-07-31T05-46-26Z 2024-07-31 08:50:26 +00:00
Harshavardhana
a9dc061d84 count metrics properly for any failures during drive heal (#20193)
or via `mc admin heal --set 1 --pool 1`
2024-07-30 22:46:26 -07:00
Krishnan Parthasarathi
01a8c09920 Add fmt-gen subcommand (#20192)
fmt-gen subcommand is only available when built with build tag `fmtgen`.
2024-07-30 15:59:48 -07:00
Aditya Manthramurthy
4c8562bcec Fix v2 metrics: Send all ttfb api labels (#20191)
Fix a regression in #19733 where TTFB metrics for all APIs except
GetObject were removed in v2 and v3 metrics. This causes breakage for
existing v2 metrics users. Instead we continue to send TTFB for all APIs
in V2 but only send for GetObject in V3.
2024-07-30 15:28:46 -07:00
Harshavardhana
f13c04629b allow multipart uploads expiration to be dynamic (#20190)
allow multipart uploads expiration to be dynamic

It would seem like the new values will take effect
only after a restart for changes in multipart_expiration.
This PR fixes this by making it dynamic as it should have
been.
2024-07-30 12:01:06 -07:00
Harshavardhana
80ff907d08 add DeleteBulk support, add sufficient deadlines per rename() (#20185)
Deadlines per moveToTrash() allow a more granular timeout
approach for syscalls, instead of an aggregate timeout.

This PR also makes multipart state cleanup optimal by
collapsing hundreds of multipart network rename() calls into a
single network call.
2024-07-29 18:56:40 -07:00
Minio Trusted
673df6d517 Update yaml files to latest version RELEASE.2024-07-29T22-14-52Z 2024-07-30 00:00:47 +00:00
Poorna
2d40433bc1 remove replication throttle deadline for objects > 128MiB (#20184)
context deadline was introduced to avoid a slow transfer from blocking
replication queue(s) shared by other buckets that may not be under throttling.

This PR removes this context deadline for larger objects since they are 
anyway restricted to a limited set of workers. Otherwise, objects would 
get dequeued when the throttle limit is exceeded and cannot proceed 
within the deadline.
2024-07-29 15:14:52 -07:00
Andrea Longo
3bc39db34e Restructure metrics v3 readme for docs use (#20114) 2024-07-29 11:48:51 -07:00
Harshavardhana
a17f14f73a separate lock from common grid to avoid epoll contention (#20180)
epoll contention on TCP causes latency build-up when
we have high-volume ingress. This PR is an attempt to
relieve this pressure.

Upstream issue: https://github.com/golang/go/issues/65064
It seems to be a deeper problem; we haven't yet tried the fix
proposed in that issue, but this change helps without
changing the compiler.

Of course, this is a workaround for now, hoping for a
more comprehensive fix from the Go runtime.
2024-07-29 11:10:04 -07:00
Poorna
6651c655cb fix replication of checksum when encryption is enabled (#20161)
- Adding functional tests
- Return checksum header on GET/HEAD, previously this was returning
  InvalidPartNumber error
2024-07-29 01:02:16 -07:00
Harshavardhana
3ae104edae change Read* calls over net/http to move to http.MethodGet (#20173)
- ReadVersion
- ReadFile
- ReadXL

Further changes include:

- Compact internode resource RPC paths
- Compact internode query params

This optimizes parsing by gorilla/mux, since the
length of these strings increases latency in
gorilla/mux; reduce them to meaningful short strings.
2024-07-29 01:00:12 -07:00
jiuker
c87a489514 fix: support prefix when batchJob replicate enable the snowball (#20178) 2024-07-29 00:59:50 -07:00
Minio Trusted
a60267501d Update yaml files to latest version RELEASE.2024-07-26T20-48-21Z 2024-07-27 10:43:28 +00:00
Poorna
641a56da0d fix panic in replication queuing (#20169)
Regression from #20077

```
Jul 26 19:08:29 minio-dr-0101a minio[275423]: Error: grid handler (NSScanner) panic: runtime error: index out of range [4] with length 1 (*errors.errorString)
Jul 26 19:08:29 minio-dr-0101a minio[275423]:       33: internal/logger/logger.go:268:logger.LogIf()
Jul 26 19:08:29 minio-dr-0101a minio[275423]:       32: internal/grid/connection.go:50:grid.gridLogIf()
Jul 26 19:08:29 minio-dr-0101a minio[275423]:       31: internal/grid/muxserver.go:234:grid.(*muxServer).handleRequests.func1()
Jul 26 19:08:29 minio-dr-0101a minio[275423]:       30: cmd/bucket-replication.go:2165:cmd.(*ReplicationPool).queueReplicaTask()
Jul 26 19:08:29 minio-dr-0101a minio[275423]:       29: cmd/bucket-replication.go:3440:cmd.queueReplicationHeal()
Jul 26 19:08:29 minio-dr-0101a minio[275423]:       28: cmd/data-scanner.go:1396:cmd.(*scannerItem).healReplication()
Jul 26 19:08:29 minio-dr-0101a minio[275423]:       27: cmd/data-scanner.go:1220:cmd.(*scannerItem).applyActions()
Jul 26 19:08:29 minio-dr-0101a minio[275423]:       26: cmd/xl-storage.go:627:cmd.(*xlStorage).NSScanner.func2()
```
2024-07-26 13:48:21 -07:00
Klaus Post
59788e25c7 Update connection deadlines less frequently (#20166)
Only set write deadline on connections every second. Combine the 2 write locations into 1.
2024-07-26 10:40:11 -07:00
Harshavardhana
a16193bb50 remove fdatasync() discard, we write with O_SYNC (#20168)
The fdatasync() discard for page-cached READs is not
needed; it would seem this can cause latencies
in situations when data is being loaded.
2024-07-26 10:27:56 -07:00
jiuker
132e7413ba fix: check once ready for site-replication (#20149) 2024-07-26 10:27:42 -07:00
Klaus Post
1966668066 Avoid Batch Replication Job log spam (#20158)
Only print once per job and error location.

Set the default retry to a 1-second wait, and use it as the minimum.
2024-07-26 05:55:50 -07:00
Harshavardhana
064f36ca5a move to GET for internal stream READs instead of POST (#20160)
The main reason is to let Go net/http perform the necessary
bookkeeping properly; in essence, from a consistency
point of view, it's GETs all the way.

Deprecate sendFile() as it is buggy inside the Go runtime.
2024-07-26 05:55:01 -07:00
Klaus Post
15b609ecea Expose RPC reconnections and ping time (#20157)
- Keeps track of reconnection count.
- Keeps track of connection ping roundtrip times. 
  Sends timestamp in ping message.
- Allow ping without payload.
2024-07-25 14:07:21 -07:00
Krishnan Parthasarathi
4a1edfd9aa Different read quorum for tiered objects (#20115)
For a non-tiered object, MinIO requires that EcM (# of data blocks) of
xl.meta agree, corresponding to the number of data blocks needed to 
read this object.

OTOH, tiered objects have metadata in the hot tier and data in the 
warm tier. The data and its integrity are offloaded to the warm tier. This
allows us to reduce the read quorum from EcM (typically > N/2, where N -
erasure stripe width) to N/2 + 1. The simple majority of metadata
ensures consensus on what the object is and where it is
located.
2024-07-25 14:02:50 -07:00
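
A worked sketch of the quorum arithmetic described above, with illustrative names and stripe sizes:

```
package main

import "fmt"

// readQuorum returns how many agreeing xl.meta entries are needed to
// read an object, per the description above: EcM (# of data blocks)
// for non-tiered objects, and a simple majority (N/2 + 1) for tiered
// objects, whose data integrity is owned by the warm tier.
func readQuorum(n, ecM int, tiered bool) int {
	if tiered {
		return n/2 + 1
	}
	return ecM
}

func main() {
	// Example: 16-drive erasure stripe with 12 data blocks.
	fmt.Println(readQuorum(16, 12, false)) // 12 for a non-tiered object
	fmt.Println(readQuorum(16, 12, true))  // 9 for a tiered object
}
```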
Anis Eleuch
b7f319b62a properly reload a fresh drive when found in a failed state during startup (#20145)
When a drive is in a failed state while a single-node multiple-drive
deployment is started, a fresh replacement disk will not be
properly healed unless the user restarts the node.

Fix this by always adding the new fresh disk to globalLocalDrivesMap. Also
remove globalLocalDrives for simplification, a map to store local node
drives can still be used since the order of local drives of a node is
not defined.
2024-07-24 16:30:33 -07:00
Anis Eleuch
33c101544d kms: Expose API when bucket federation is enabled (#20143)
When the bucket federation feature is enabled, the KMS API does not
work; for example, `mc admin kms key list` fails.

This commit fixes the issue by disabling bucket forwarding when the
request is a KMS request.
2024-07-24 15:44:29 -07:00
Anis Eleuch
21cf29330e grafana: Fix the unit in Open FDs panel (#20144) 2024-07-24 07:51:03 -07:00
Harshavardhana
3b21bb5be8 use unixNanoTime instead of time.Time in lockRequestorInfo (#20140)
Bonus: Skip Source, Quorum fields in lockArgs that are never
sent during Unlock() phase.
2024-07-24 03:24:01 -07:00
Harshavardhana
6fe2b3f901 avoid sendFile() for ranges or object lengths < 4MiB (#20141) 2024-07-24 03:22:50 -07:00
Taran Pelkey
b368d4cc13 Fix updateGroupMembershipsForLDAP behavior with unicode (#20137) 2024-07-23 19:10:03 -07:00
Klaus Post
0680af7414 Set O_NONBLOCK for reads and writes on unix (#20133)
Tracing syscalls, opening and reading an `xl.meta` looks like this:

```
openat(AT_FDCWD, "/mnt/drive1/ss8-old/testbucket/ObjSize4MiBThreads72/(554O51H/peTb(0iztdbTKw59.csv/xl.meta", O_RDONLY|O_NOATIME|O_CLOEXEC) = 34 <0.000>
fcntl(34, F_GETFL)                      = 0x48000 (flags O_RDONLY|O_LARGEFILE|O_NOATIME) <0.000>
fcntl(34, F_SETFL, O_RDONLY|O_NONBLOCK|O_LARGEFILE|O_NOATIME) = 0 <0.000>
epoll_ctl(4, EPOLL_CTL_ADD, 34, {events=EPOLLIN|EPOLLOUT|EPOLLRDHUP|EPOLLET, data={u32=3172471557, u64=8145488475984499461}}) = -1 EPERM (Operation not permitted) <0.000>
fcntl(34, F_GETFL)                      = 0x48800 (flags O_RDONLY|O_NONBLOCK|O_LARGEFILE|O_NOATIME) <0.000>
fcntl(34, F_SETFL, O_RDONLY|O_LARGEFILE|O_NOATIME) = 0 <0.000>
fstat(34, {st_mode=S_IFREG|0644, st_size=354, ...}) = 0 <0.000>
read(34, "XL2 \1\0\3\0\306\0\0\1P\2\2\1\304$\225\304\20\0\0\0\0\0\0\0\0\0\0\0"..., 354) = 354 <0.000>
close(34)                               = 0 <0.000>
```

Everything until `fstat` is the `os.Open` call.

Looking at the code: https://github.com/golang/go/blob/master/src/os/file_unix.go#L212-L243

It seems for every file it "tries" to see if it is pollable. This causes `syscall.SetNonblock(fd, true)` to be called. This is the first `F_SETFL`.

It then calls `f.pfd.Init("file", true)`. This will attempt to set it as pollable using `epoll_ctl`. This will always fail for files. It therefore calls `syscall.SetNonblock(fd, false)` resulting in the second `F_SETFL`.

If we set the `O_NONBLOCK` call on the initial open, we should avoid the 4 `fcntl` syscalls per file.

I don't see any way to avoid the `epoll_ctl` call, since kind is either `kindOpenFile` or `kindNonBlock`, so "pollable" will always be true. However avoiding 4 of 6 syscalls still seems worth it.

This should not have any effect, since files will end up with "nonblock" anyway.
2024-07-23 09:36:24 -07:00
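
A minimal unix-only sketch of opening with O_NONBLOCK up front; the file name is illustrative:

```
package main

import (
	"fmt"
	"os"
	"syscall"
)

func main() {
	// Opening with O_NONBLOCK up front lets the Go runtime skip the
	// F_GETFL/F_SETFL dance shown in the trace above; for regular
	// files the flag has no behavioral effect on reads. Unix-only.
	f, err := os.OpenFile("xl.meta", os.O_RDONLY|syscall.O_NONBLOCK, 0)
	if err != nil {
		fmt.Println("open:", err)
		return
	}
	defer f.Close()
	buf := make([]byte, 4)
	n, _ := f.Read(buf)
	fmt.Printf("read %d bytes: %q\n", n, buf[:n])
}
```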
Harshavardhana
91805bcab6 add optimizations to bring performance on unversioned READS (#20128)
Allow non-inlined data on disk to be read via
an unversioned ReadVersion() call; we only
need ReadXL() to resolve objects with multiple
versions.

The choice of this behavior is dynamic
and made by the user via `mc admin config set`.

Other bonus things

- Start measuring internode TTFB performance.
- Set TCP_NODELAY, TCP_CORK for low latency
2024-07-23 03:53:03 -07:00
Klaus Post
c0e2886e37 Tweak grid for less writes (#20129)
Use `runtime.Gosched()` if we have less than maxMergeMessages and the 
queue is empty.  Up maxMergeMessages to 50 to merge more messages into 
a single write.

Add length check for an early bailout on readAllInto when we know packet length.
2024-07-23 03:28:14 -07:00
Andreas Auernhammer
4f5dded4d4 fips: enforce FIPS-compliant TLS ciphers in FIPS mode (#20131)
This commit enforces FIPS-compliant TLS ciphers in FIPS mode
by importing the `fipsonly` module.

Otherwise, MinIO still accepts non-FIPS compliant TLS connections.
2024-07-23 03:11:25 -07:00
jiuker
b3a94c4e85 fix: Use xtime duration to parse batch job (#20117) 2024-07-23 00:05:53 -07:00
Harshavardhana
8e618d45fc remove unnecessary LRU for internode auth token (#20119)
Removes contentious usage of mutexes in the LRU; the entries
were never really reused in any manner, so we do not
need it.

The correct way to trust hosts is via TLS certs; this PR completely
removes this dependency, which has never been useful.

```
0  0%  100%  25.83s 26.76%  github.com/hashicorp/golang-lru/v2/expirable.(*LRU[...])
0  0%  100%  28.03s 29.04%  github.com/hashicorp/golang-lru/v2/expirable.(*LRU[...])
```

Bonus: use `x-minio-time` as nanoseconds to avoid unnecessary
parsing logic for time strings, using a more
straightforward mechanism instead.
2024-07-22 00:04:48 -07:00
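
A sketch of the nanosecond-header idea; the helper names are illustrative, and only the `x-minio-time` header name comes from the commit message:

```
package main

import (
	"fmt"
	"net/http"
	"strconv"
	"time"
)

// setMinioTime sends a timestamp as Unix nanoseconds instead of a
// formatted time string.
func setMinioTime(h http.Header, t time.Time) {
	h.Set("x-minio-time", strconv.FormatInt(t.UnixNano(), 10))
}

// getMinioTime parses the nanosecond timestamp back into a time.Time.
func getMinioTime(h http.Header) (time.Time, error) {
	ns, err := strconv.ParseInt(h.Get("x-minio-time"), 10, 64)
	if err != nil {
		return time.Time{}, err
	}
	return time.Unix(0, ns), nil
}

func main() {
	h := http.Header{}
	setMinioTime(h, time.Now())
	t, err := getMinioTime(h)
	fmt.Println(t, err)
}
```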
Harshavardhana
3ef59d2821 do not set KMSSecretKey env from KMSSecretKeyFile (#20122)
fixes #20121
2024-07-21 14:39:15 -07:00
Harshavardhana
23db4958f5 fix tuned-adm command typo 2024-07-18 18:15:02 -07:00
Anis Eleuch
d9ee668b6d s3: Fix wrong continuation token during listing with ILM enabled bucket (#20113) 2024-07-18 13:37:34 -07:00
Anis Eleuch
2e5d792f0c batch-expiry: Save progress regularly in the drives and at the end (#20098)
- Also, fix failure reporting at the end.
- Also, avoid parsing report objects when listing or resuming jobs; this
does not cause any bugs, it only results in printing unhelpful errors.
2024-07-17 09:42:32 -07:00
Minio Trusted
b276651eaa Update yaml files to latest version RELEASE.2024-07-16T23-46-41Z 2024-07-17 15:26:12 +00:00
Poorna
3535197f99 replication: proxy only on missing object or read quorum err (#20101) 2024-07-16 16:46:41 -07:00
Frank Wessels
95f076340a Update reedsolomon dependency with fix for Graviton4 processor (#20102) 2024-07-16 12:27:21 -07:00
Mark Theunissen
698bb93a46 Allow a KMS Action to specify keys in the Resources of a policy (#20079) 2024-07-16 07:03:03 -07:00
Minio Trusted
2584430141 Update yaml files to latest version RELEASE.2024-07-15T19-02-30Z 2024-07-15 22:10:04 +00:00
Klaus Post
ded373e600 Split handleMessages (cosmetic) (#20095)
Split the read and write sides of handleMessages into two separate functions

Cosmetic. The only non-copy-and-paste change is that `cancel(ErrDisconnected)` is moved 
into the defer on `readStream`.
2024-07-15 12:02:30 -07:00
Harshavardhana
e8c54c3d6c add validation test for v3 metrics for all its endpoints (#20094)
add unit test for v3 metrics for all its exposed endpoints

Bonus:
  - support OpenMetrics encoding
  - adds boot time for prometheus
  - continueOnError is better, to serve as
    many metrics as possible.
2024-07-15 09:28:02 -07:00
Shubhendu
f944a42886 Removed user and group details from logs (#20072)
Signed-off-by: Shubhendu Ram Tripathi <shubhendu@minio.io>
2024-07-14 11:12:07 -07:00
Harshavardhana
eff0ea43aa fix: typo in BucketUsageMetrics group registration in v3 metrics (#20090)
```
curl http://localhost:9000/minio/metrics/v3/cluster/usage/buckets
```

Did not work as documented, due to the fact that there was a typo
in the bucket usage metrics registration group. This endpoint is
a cluster endpoint and does not require any `buckets` argument.
2024-07-14 11:11:42 -07:00
Minio Trusted
3b602bb532 Update yaml files to latest version RELEASE.2024-07-13T01-46-15Z 2024-07-13 02:08:28 +00:00
Alex
459985f0fa Update Console to v1.6.3 (#20084)
Signed-off-by: Benjamin Perez <benjamin@bexsoft.net>
2024-07-12 18:46:15 -07:00
Harshavardhana
d0080046c2 allow sysfs tuning from tuned 2024-07-12 16:31:18 -07:00
Harshavardhana
7fcb428622 do not print unexpected logs (#20083) 2024-07-12 13:51:54 -07:00
dependabot[bot]
4ea6f94ed8 Bump github.com/nats-io/nats-streaming-server from 0.24.3 to 0.24.6 (#20082)
Bumps [github.com/nats-io/nats-streaming-server](https://github.com/nats-io/nats-streaming-server) from 0.24.3 to 0.24.6.
- [Release notes](https://github.com/nats-io/nats-streaming-server/releases)
- [Changelog](https://github.com/nats-io/nats-streaming-server/blob/main/.goreleaser.yml)
- [Commits](https://github.com/nats-io/nats-streaming-server/compare/v0.24.3...v0.24.6)

---
updated-dependencies:
- dependency-name: github.com/nats-io/nats-streaming-server
  dependency-type: indirect
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-07-12 11:45:09 -07:00
Klaus Post
83adc2eebf Fix ListObjects aborting after 3 minutes on async request (#20074)
When creating the async listing, if the first request does not return within 3 
minutes, it is stopped, since it isn't being kept alive.

Keep updating `lastHandout` while we are waiting for the initial request to be fulfilled.
2024-07-12 09:23:16 -07:00
Poorna
989c318a28 replication: make large workers configurable (#20077)
This PR also improves throttling by reducing tokens requested
from rate limiter based on available tokens to avoid exceeding
throttle wait deadlines
2024-07-12 07:57:31 -07:00
Frank Wessels
ef802f2b2c Updated dependencies for ARM SVE support (#20081) 2024-07-12 07:36:29 -07:00
Taran Pelkey
f5d2fbc84c Add DecodeDN and QuickNormalizeDN functions to LDAP config (#20076) 2024-07-11 18:04:53 -07:00
Allan Roger Reid
e139673969 Audit failure in batch job key rotate (#20073) 2024-07-11 16:13:15 -07:00
Harshavardhana
a8c6465f22 hide some deprecated fields from 'get' output (#20069)
also update wording on `subnet license="" api_key=""`
2024-07-10 13:16:44 -07:00
Minio Trusted
27538e2d22 Update yaml files to latest version RELEASE.2024-07-10T18-41-49Z 2024-07-10 19:27:17 +00:00
Taran Pelkey
6c6f0987dc Add groups to policy entities (#20052)
* Add groups to policy entities

* update comment

---------

Co-authored-by: Harshavardhana <harsha@minio.io>
2024-07-10 11:41:49 -07:00
Austin Chang
5f64658faa clarify error message for root user credential (#20043)
Signed-off-by: Austin Chang <austin880625@gmail.com>
2024-07-10 09:57:01 -07:00
Anis Eleuch
ce183cb2b4 heal: List and heal again for any listing error (#19999)
When a fresh drive healing is finished, add more checks for the drive listing
errors. If any, re-list and heal again. Although this is an infrequent use
case to have listPathRaw() returning nil when minDisks is set to 1, we
still need to handle all possible use cases to avoid missing healing
any object.

Also, check the HealObject result to decide if an object is healed on the
fresh disk, since HealObject returns nil if an object is healed on any
disk, not necessarily on the new fresh drive.
2024-07-10 09:55:36 -07:00
Klaus Post
b3bac73c0f Clarify post policy error message (#20067)
It is not really clear that the listed keys are missing.

Clarify the error
2024-07-10 07:18:44 -07:00
Anis Eleuch
e726d8ff0f list: Hide objects/versions with pending/failed replicated deletion (#20047)
In a replicated setup, this commit will avoid showing an object in
regular listing when its latest version has a pending or failed deletion.
It will also prevent showing older versions in the same case.
2024-07-09 15:26:42 -07:00
Shubhendu
f4230777b3 Log replication errors once (#20063)
Also, sort the error map for multiple sites in ascending order
of deployment IDs, so that the error message generated is always
in a deterministic order and identical.

Signed-off-by: Shubhendu Ram Tripathi <shubhendu@minio.io>
2024-07-09 10:10:31 -07:00
Krishnan Parthasarathi
380233d646 batch: Update job info object on success (#20053) 2024-07-08 18:45:54 -07:00
Allan Roger Reid
d592bc0c1c Fix documentation for removal of delete markers ILM rule (#20056) 2024-07-08 18:45:38 -07:00
Klaus Post
0d0b0aa599 Abstract grid connections (#20038)
Add `ConnDialer` to abstract connection creation.

- `IncomingConn(ctx context.Context, conn net.Conn)` is provided as an entry point for 
   incoming custom connections.

- `ConnectWS` is provided to create web socket connections.
2024-07-08 14:44:00 -07:00
Anis Eleuch
b433bf14ba Add typos check to Makefile (#20051) 2024-07-08 14:39:49 -07:00
Minio Trusted
cf371da346 Update yaml files to latest version RELEASE.2024-07-04T14-25-45Z 2024-07-04 14:58:08 +00:00
Klaus Post
107d951893 Log ILM failed object name (#20040)
Log so we know which object we are dealing with.

Log each object once.
2024-07-04 07:25:45 -07:00
Shireesh Anjal
22c53b1c70 Remove license update job (#20037) 2024-07-03 11:49:48 -07:00
Mark Theunissen
88926ad8e9 return appropriate error upon tier update for incorrect credentials (#20034) 2024-07-03 00:17:20 -07:00
Harshavardhana
32d04091a2 resume any batch jobs in a goroutine (#20035)
Bonus: move batch job initialization to the last item after all other
initialization, allowing for faster startup time for different subsystems.
2024-07-03 00:16:05 -07:00
Harshavardhana
b6d4a77b94 update vulncheck 2024-07-02 14:34:59 -07:00
Harshavardhana
be84a4fd68 do not proxy invalid object names (#20031) 2024-07-02 14:28:55 -07:00
Anis Eleuch
2ec1f404ac info: Always refresh the root disk status (#20023)
Add root drive status in the disk info cache function, so unmounting a
drive without restarting a local node reflects the correct value.
2024-07-02 13:41:29 -07:00
Klaus Post
2040559f71 Fix SkipReader performance with small initial read (#20030)
If `SkipReader` is called with a small initial buffer, it may be doing a huge number of Reads to skip the requested number of bytes. If a small buffer is provided, grab a 32K buffer and use that.

Fixes slow execution of `testAPIGetObjectWithMPHandler`.

Bonuses:

* Use `-short` with `-race` test.
* Do all suite test types with `-short`.
* Enable compressed+encrypted in `testAPIGetObjectWithMPHandler`.
* Disable big file tests in `testAPIGetObjectWithMPHandler` when using `-short`.
2024-07-02 08:13:05 -07:00
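A minimal sketch of the buffer-upgrade idea described above (the helper name is illustrative, not MinIO's actual SkipReader):

```
import "io"

// skipN discards n bytes from r. If the caller's scratch buffer is tiny,
// a 32K buffer is used instead so skipping does not degrade into
// thousands of small Reads.
func skipN(r io.Reader, n int64, scratch []byte) error {
	const minSkipBuf = 32 << 10 // 32 KiB
	if n > int64(len(scratch)) && len(scratch) < minSkipBuf {
		scratch = make([]byte, minSkipBuf)
	}
	for n > 0 {
		buf := scratch
		if int64(len(buf)) > n {
			buf = buf[:n]
		}
		read, err := r.Read(buf)
		n -= int64(read)
		if n <= 0 {
			return nil
		}
		if err != nil {
			return err
		}
	}
	return nil
}
```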
Anis Eleuch
ca0ce4c6ef tests: Fix setting max openfds as memory limit (#20029)
The code was inadvertently passing max openfds to debug.SetMemoryLimit();
fixing this speeds up `go test` on my machine.

This is only a testing bug, since the server context always has a valid
MaxMem, so the buggy code was never called in user environments.
2024-07-02 08:09:36 -07:00
Anis Eleuch
757cf413cb Add batch status API (#19679)
Currently the status of a completed or failed batch is held in
memory; a simple restart will lose the status and the user will not
have any visibility into a job that was long running.

In addition to the metrics, add a new API that reads the batch status
from the drives. A batch job will be cleaned up three days after
completion.

Also encode the batch type in the batch ID: the batch job request is
removed immediately when the job finishes, after which the job type is
no longer known, making it difficult to locate the job report.
2024-07-02 01:17:52 -07:00
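A sketch of the ID scheme described above, with an illustrative format (not necessarily MinIO's exact encoding):

```
import (
	"strings"

	"github.com/google/uuid"
)

// newBatchJobID prefixes the job type so it survives after the job
// request object is deleted, e.g. "replicate-5f0e...".
func newBatchJobID(jobType string) string {
	return jobType + "-" + uuid.NewString()
}

// batchJobType recovers the type from the ID alone, letting a status
// API locate the job report without the original request.
func batchJobType(id string) string {
	if i := strings.IndexByte(id, '-'); i > 0 {
		return id[:i]
	}
	return ""
}
```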
Anis Eleuch
b35acb3dbc heal: Add support of healing particular pool/set (#20024) 2024-07-01 15:02:25 -07:00
Sveinn
e404abf103 Letting password enable auth bypass caPublicKey (only if passauth is … (#20022) 2024-07-01 15:02:01 -07:00
jiuker
f7ff19cb18 fix: warning for decommissioned pool while start (#20019) 2024-07-01 07:38:46 -07:00
Minio Trusted
f736702da8 Update yaml files to latest version RELEASE.2024-06-29T01-20-47Z 2024-06-29 16:45:31 +00:00
Poorna
91faaa1387 fix panic in batch replicate (#20014)
Fixes:

```
panic: send on closed channel
	panic: close of closed channel

goroutine 878 [running]:
github.com/minio/minio/internal/ioutil.SafeClose[...](...)
	/Users/kp/code/src/github.com/minio/minio/internal/ioutil/ioutil.go:407
github.com/minio/minio/cmd.(*erasureServerPools).Walk.func2.2()
	/Users/kp/code/src/github.com/minio/minio/cmd/erasure-server-pool.go:2229 +0xc0
panic({0x108c25e60?, 0x1090b28d0?})
	/usr/local/go/src/runtime/panic.go:770 +0x124
github.com/minio/minio/cmd.(*erasureServerPools).Walk.func2.3({{0x1400e397316, 0x5}, {0x1400d88b8a8, 0x8}, {0x1f99d80, 0xede101c42, 0x0}, 0x3bc, 0x0, 0x0, ...})
	/Users/kp/code/src/github.com/minio/minio/cmd/erasure-server-pool.go:2235 +0xb4
github.com/minio/minio/cmd.(*erasureServerPools).Walk.func2()
	/Users/kp/code/src/github.com/minio/minio/cmd/erasure-server-pool.go:2277 +0xabc
created by github.com/minio/minio/cmd.(*erasureServerPools).Walk in goroutine 575
	/Users/kp/code/src/github.com/minio/minio/cmd/erasure-server-pool.go:2210 +0x33c
```
2024-06-28 18:20:47 -07:00
Poorna
68a9f521d5 fix object lock metadata filter (#20011) 2024-06-28 18:20:27 -07:00
Harshavardhana
f365a98029 fix: hot-reloading STS credential policy documents (#20012)
* fix: hot-reloading STS credential policy documents
* Support Role ARNs hot load policies (#28)

---------

Co-authored-by: Anis Eleuch <vadmeste@users.noreply.github.com>
2024-06-28 16:17:22 -07:00
Minio Trusted
47bbc272df Update yaml files to latest version RELEASE.2024-06-28T09-06-49Z 2024-06-28 14:11:36 +00:00
Anis Eleuch
aebac90013 tests: Fix minor issue in the config yaml file testing (#20005)
Convert x86_64 to amd64 in the test script to correctly download mc binary.
2024-06-28 02:06:49 -07:00
Taran Pelkey
7ca4ba77c4 Update tests to use AttachPolicy(LDAP) instead of deprecated SetPolicy (#19972) 2024-06-28 02:06:25 -07:00
Poorna
13512170b5 list: Do not decrypt SSE-S3 Etags in a non encrypted format (#20008) 2024-06-27 19:44:56 -07:00
Krishnan Parthasarathi
154fcaeb56 Allow rebalance start when it's stopped/completed (#20009) 2024-06-27 17:22:30 -07:00
Anis Eleuch
722118386d iam: Hot load of the policy during request authorization (#20007)
Hot load a policy document during account authorization evaluation
to avoid returning 403 during server startup, when not all policies are
loaded yet.

Add this support for group policies as well.
2024-06-27 17:03:07 -07:00
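A minimal sketch of the hot-load pattern, with assumed types (not MinIO's actual IAM structures): look up the policy in the in-memory map first, and on a miss, e.g. during startup, load it from the backing store instead of denying the request outright.

```
import (
	"context"
	"sync"
)

type policyCache struct {
	mu       sync.RWMutex
	policies map[string]string // name -> policy document (illustrative)
	load     func(ctx context.Context, name string) (string, error)
}

func (c *policyCache) get(ctx context.Context, name string) (string, error) {
	c.mu.RLock()
	doc, ok := c.policies[name]
	c.mu.RUnlock()
	if ok {
		return doc, nil
	}
	doc, err := c.load(ctx, name) // hot load on cache miss
	if err != nil {
		return "", err
	}
	c.mu.Lock()
	c.policies[name] = doc
	c.mu.Unlock()
	return doc, nil
}
```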
Harshavardhana
709612cb37 fix: rebalance upon pool expansion would crash when in progress (#20004)
You can attempt a rebalance first, i.e., start with 2 pools:

```
mc admin rebalance start alias/
```

After that you can add a new pool; this would
potentially crash.

```
Jun 27 09:22:19 xxx minio[7828]: panic: runtime error: invalid memory address or nil pointer dereference
Jun 27 09:22:19 xxx minio[7828]: [signal SIGSEGV: segmentation violation code=0x1 addr=0x58 pc=0x22cc225]
Jun 27 09:22:19 xxx minio[7828]: goroutine 1 [running]:
Jun 27 09:22:19 xxx minio[7828]: github.com/minio/minio/cmd.(*erasureServerPools).findIndex(...)
```
2024-06-27 11:35:34 -07:00
Harshavardhana
b35d083872 fix; change retry-after 60sec for 503s and 10s for 429s (#19996) 2024-06-26 01:32:06 -07:00
Harshavardhana
5e7b243bde extend cluster health to return errors for IAM, and Bucket metadata (#19995)
Bonus: make API freeze to be opt-in instead of default
2024-06-26 00:44:34 -07:00
Minio Trusted
f8f9fc77ac Update yaml files to latest version RELEASE.2024-06-26T01-06-18Z 2024-06-26 02:12:12 +00:00
Harshavardhana
499531f0b5 update minio/console v1.6.1
Signed-off-by: Harshavardhana <harsha@minio.io>
2024-06-25 18:06:18 -07:00
Taran Pelkey
3c2141513f add ListAccessKeysLDAPBulk API to list accessKeys for multiple/all LDAP users (#19835) 2024-06-25 14:21:28 -07:00
Aditya Manthramurthy
602f6a9ad0 Add IAM (re)load timing logs (#19984)
This is useful to debug large IAM load times - the usual cause is when
there is a large number of temporary accounts.
2024-06-25 10:33:10 -07:00
Harshavardhana
22c5a5b91b add healing retries when there are failed heal attempts (#19986)
transient errors for long running tasks are normal; allow the
drive to retry up to 3 times before giving up on healing
the drive.
2024-06-25 10:32:56 -07:00
jiuker
41f508765d fix: format the scanner object error (#19991) 2024-06-25 08:54:24 -07:00
Aditya Manthramurthy
7dccd1f589 fix: bootstrap msgs should only be sent at startup (#19985) 2024-06-24 19:30:28 -07:00
Allan Roger Reid
55ff598b23 Refactor the documentation on minio server config notation (#19987)
Refactor minio server config notation to add bracket notation to the TODO list
2024-06-24 19:30:18 -07:00
Harshavardhana
a22ce4550c protect workers and simplify use of atomics (#19982)
without atomic load() it is possible that for
a slow receiver we would get into a hot-loop when
logCh is full and there are many incoming callers.

to avoid this, as a workaround, set BATCH_SIZE
greater than 100 to ensure that your slow receiver
receives data in bulk and avoids being throttled in
some manner.

this PR however fixes the unprotected access to
the current workers value.
2024-06-24 18:15:27 -07:00
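A sketch of the protected access, with assumed names (not the actual MinIO logger types): the worker count is read via an atomic load instead of a plain field read that races with the workers incrementing and decrementing it.

```
import "sync/atomic"

type logTarget struct {
	workers atomic.Int64
	logCh   chan string
}

func (t *logTarget) enqueue(entry string, maxWorkers int64) {
	select {
	case t.logCh <- entry:
	default:
		// Queue is full: start another worker only when under the cap,
		// using an atomic Load rather than an unguarded read.
		if t.workers.Load() < maxWorkers {
			t.workers.Add(1)
			go t.worker()
		}
		t.logCh <- entry // blocks until a worker drains the queue
	}
}

func (t *logTarget) worker() {
	defer t.workers.Add(-1)
	for entry := range t.logCh {
		_ = entry // deliver to the (possibly slow) receiver here
	}
}
```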
Taran Pelkey
168ae81b1f Fix error when validating DN that is not under base DN (#19971) 2024-06-21 23:35:35 -07:00
Minio Trusted
5f6a25cdd0 Update yaml files to latest version RELEASE.2024-06-22T05-26-45Z 2024-06-22 06:20:13 +00:00
Harshavardhana
be97ae4c5d fix: gcs tier going offline due to custom HTTP client (#19973)
specifying a custom HTTP client makes the GCS SDK
ignore the passed credentials; instead, let the GCS
SDK manage the transport.

this PR fixes #19922, a regression from #19565
2024-06-21 22:26:45 -07:00
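A sketch of the pitfall, assuming the cloud.google.com/go/storage API: passing option.WithHTTPClient supplies the whole transport, so the credentials option is effectively bypassed and requests go out unsigned.

```
import (
	"context"

	"cloud.google.com/go/storage"
	"google.golang.org/api/option"
)

func newGCSClient(ctx context.Context, credsJSON []byte) (*storage.Client, error) {
	// Correct: let the SDK build its own authenticated transport.
	return storage.NewClient(ctx, option.WithCredentialsJSON(credsJSON))

	// Buggy pattern removed by this fix (illustrative): supplying a
	// custom *http.Client makes the SDK skip its auth transport, so the
	// credentials above would never be used:
	//   storage.NewClient(ctx,
	//       option.WithCredentialsJSON(credsJSON),
	//       option.WithHTTPClient(customClient))
}
```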
Anis Eleuch
4d7d008741 bootstrap: Speed up bucket metadata loading (#19969)
Currently, bucket metadata is being loaded serially inside the ListBuckets
Object API. Fix that by loading bucket metadata in parallel with a
concurrency of (number of erasure sets * 10), which is a good approximation.
2024-06-21 15:22:24 -07:00
Klaus Post
2d7a3d1516 Return error from mergeEntryChannels (#19970)
- Add error from mergeEntryChannels to `results`.
- Make sure we check the context error before we close the channel.
2024-06-21 12:06:51 -07:00
Harshavardhana
dfab400d43 reject bootup, if binaries are different in a cluster (#19968) 2024-06-21 07:49:49 -07:00
Pedro Juarez
70078eab10 Fix browser UI animation (#19966)
The Browser UI is not showing the animation because the default 
content-security-policy does not trust the file https://unpkg.com/detect-gpu@5.0.38/dist/benchmarks/d-apple.json, 
which the GPU library needs in order to identify whether the web browser can play it.
2024-06-20 17:58:58 -07:00
Klaus Post
3415c4dd1e Fix reconnected deadlock with full queue (#19964)
When a reconnection happens, `handleMessages` must be able to complete and exit. 

This can be prevented in a full queue.

Deadlock chain (May 10th release)

```
1 @ 0x44110e 0x453125 0x109f88c 0x109f7d5 0x10a472c 0x10a3f72 0x10a34ed 0x4795e1
#	0x109f88b	github.com/minio/minio/internal/grid.(*Connection).send+0x3eb			github.com/minio/minio/internal/grid/connection.go:548
#	0x109f7d4	github.com/minio/minio/internal/grid.(*Connection).queueMsg+0x334		github.com/minio/minio/internal/grid/connection.go:586
#	0x10a472b	github.com/minio/minio/internal/grid.(*Connection).handleAckMux+0xab		github.com/minio/minio/internal/grid/connection.go:1284
#	0x10a3f71	github.com/minio/minio/internal/grid.(*Connection).handleMsg+0x231		github.com/minio/minio/internal/grid/connection.go:1211
#	0x10a34ec	github.com/minio/minio/internal/grid.(*Connection).handleMessages.func1+0x6cc	github.com/minio/minio/internal/grid/connection.go:1019

---> blocks ---> via (Connection).handleMsgWg

1 @ 0x44110e 0x454165 0x454134 0x475325 0x486b08 0x10a161a 0x10a1465 0x2470e67 0x7395a9 0x20e61af 0x20e5f1f 0x7395a9 0x22f781c 0x7395a9 0x22f89a5 0x7395a9 0x22f6e82 0x7395a9 0x22f49a2 0x7395a9 0x2206e45 0x7395a9 0x22f4d9c 0x7395a9 0x210ba06 0x7395a9 0x23089c2 0x7395a9 0x22f86e9 0x7395a9 0xd42582 0x2106c04
#	0x475324	sync.runtime_Semacquire+0x24								runtime/sema.go:62
#	0x486b07	sync.(*WaitGroup).Wait+0x47								sync/waitgroup.go:116
#	0x10a1619	github.com/minio/minio/internal/grid.(*Connection).reconnected+0xb9			github.com/minio/minio/internal/grid/connection.go:857
#	0x10a1464	github.com/minio/minio/internal/grid.(*Connection).handleIncoming+0x384			github.com/minio/minio/internal/grid/connection.go:825
```

Add a queue cleaner in reconnected that will pop old messages so `handleMessages` can 
send messages without blocking and exit appropriately for the connection to be re-established.

Messages are likely dropped by the remote, but we may have some that can succeed, 
so we only drop when running out of space.
2024-06-20 16:11:40 -07:00
Shireesh Anjal
e200808ab7 fix errors in metrics code on macos (#19965)
- do not load proc fs metrics in case of macos
- null-check TimeStat before accessing
2024-06-20 10:55:03 -07:00
Klaus Post
fae563b85d Add fixed timed restarts to updates (#19960) 2024-06-20 07:49:22 -07:00
Klaus Post
3e6dc02f8f Add actual inline data to JSON output in xl-meta (#19958)
Add the inlined data as a base64-encoded field and try to add a string version if feasible.

Example:

```
λ xl-meta -data xl.meta
{
  "8e03504e-1123-4957-b272-7bc53eda0d55": {
    "bitrot_valid": true,
    "bytes": 58,
    "data_base64": "Z29sYW5nLm9yZy94L3N5cyB2MC4xNS4wIC8=",
    "data_string": "golang.org/x/sys v0.15.0 /"
  }
}
```

The string will have quotes, newlines escaped to produce valid JSON.

If content isn't valid utf8 or the encoding otherwise fails, only the base64 data will be added.

`-export` can still be used separately to extract the data as files (including bitrot).
2024-06-20 07:46:44 -07:00
Anis Eleuch
95e4cbbfde Do not ping event targets during cluster initialization (#19959)
S3 operations are frozen during startup, therefore we should avoid pinging
event targets during the initialization since it can stall.
2024-06-20 07:46:02 -07:00
Harshavardhana
2825294b7b allow server startup to come online with READ success (#19957) 2024-06-19 22:21:31 -07:00
Sveinn
bce93b5cfa Removing timeout on shutdown (#19956) 2024-06-19 11:42:47 -07:00
Harshavardhana
7a4b250c8b avoid waiting for quorum health while debugging (#19955) 2024-06-19 10:12:20 -07:00
Anis Eleuch
e5335450a4 test: Healing test to avoid infinite waiting for servers to be up (#19954)
tests: Healing test to avoid infinite waiting for servers to be up

Quit after 15 minutes and print server logs instead
2024-06-19 09:00:38 -07:00
Klaus Post
a6ffdf1dd4 Do not block on distributed unlocks (#19952)
* Prevents blocking when losing quorum (standard on cluster restarts).
* Time out to prevent endless buildup. Timed-out remote locks will be canceled because they miss the refresh anyway.
* Reduces latency for all calls since the wall time for the roundtrip to remotes no longer adds to the requests.
2024-06-19 07:35:19 -07:00
Harshavardhana
69e41f87ef compute localIPs only once per server startup() (#19951)
repeatedly calling this function is not necessary;
on systems with lots of interfaces, including virtual
ones, it can be reasonably delayed.
2024-06-19 07:34:00 -07:00
Harshavardhana
ee48f9f206 perform healthchecks before initializing everything fully (#19953)
adds more informative logs that provide details on which
erasure set is losing quorum etc.
2024-06-19 07:33:40 -07:00
Sveinn
9ba39d7fad Removing a channel that was not being used (#19948) 2024-06-19 01:59:39 -07:00
Harshavardhana
d2fb371f80 do not need response record body (#19949)
since the connection is active, the
response recorder body can grow endlessly,
causing a leak, as this bytes buffer is
never given back to the GC due to a lingering goroutine.
2024-06-19 01:59:21 -07:00
Klaus Post
2f9018f03b Do regular checks for healing status while scanning (#19946) 2024-06-18 09:11:04 -07:00
Harshavardhana
eb990f64a9 update pkger to use v2.3.1 2024-06-17 22:48:41 -07:00
Harshavardhana
bbb64eaade skip healing properly in the scanner when a drive is hotplugged (#19939)
skip healing properly in scanner when drive is hotplugged

due to how the state is passed around, SkipHealing
might not always reflect the true state of the system, causing
a situation where the scanner might start healing the
same drive that is already being healed. Due to this, competing
heals get triggered that slow each other down.
2024-06-17 16:39:11 -07:00
Harshavardhana
7bd1d899bc remove overzealous check during HEAD() (#19940)
due to a historic bug in CopyObject() where
an inlined object loses its metadata, the
check causes an incorrect fallback to verifying
the data-dir.

The CopyObject() bug was fixed in ffa91f9794; however,
the occurrence of this problem is historic, so
the aforementioned check is stretching too far.

Bonus: simplify fileInfoRaw() to read xl.json as well,
also recreate buckets properly.
2024-06-17 07:29:18 -07:00
Harshavardhana
c91d1ec2e3 fix: avoid metadata cache without data for all callers (#19935) 2024-06-14 06:28:35 -07:00
Minio Trusted
c50b64027d Update yaml files to latest version RELEASE.2024-06-13T22-53-53Z 2024-06-14 05:40:03 +00:00
Cesar N
20960b6a2d Update console to v1.6.0 (#19933) 2024-06-13 15:53:53 -07:00
Shubhendu
3bd3470d0b Corrected names of node replication metrics (#19932)
Signed-off-by: Shubhendu Ram Tripathi <shubhendu@minio.io>
2024-06-13 15:26:54 -07:00
Harshavardhana
ba39ed9af7 loadUser() if not able to load() credential return error (#19931) 2024-06-13 15:26:38 -07:00
jiuker
62e6dc950d fix: do not update metadata cache upon headObject() (#19929) 2024-06-13 08:42:02 -07:00
Harshavardhana
5a5046ce45 upgrade all deps and credits (#19930)
Signed-off-by: Harshavardhana <harsha@minio.io>
2024-06-13 08:34:20 -07:00
Klaus Post
ad04afe381 Fix SSEC multipart checksum replication (#19915)
* Multipart SSEC checksums were not transferred.
* Remove key mismatch logging. This key is user-controlled with SSEC.
* If the source is SSEC and the destination reports ErrSSEEncryptedObject, 
  assume replication is good.
2024-06-12 23:56:12 -07:00
Harshavardhana
ba9f0f2480 fix: attempt to fix CI/CD upgrade tests with docker-compose (#19926) 2024-06-12 22:08:11 -07:00
Harshavardhana
d06b63d056 load credential for in-flights requests as singleflight (#19920)
avoid concurrent callers of LoadUser() initiating
object read() requests when an on-going operation is in progress.

this avoids many callers hitting the drives causing I/O
spikes, and also allows credentials to load faster.
2024-06-12 13:47:56 -07:00
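A minimal sketch of the pattern, using golang.org/x/sync/singleflight with illustrative types: concurrent LoadUser calls for the same access key share one in-flight drive read.

```
import (
	"context"

	"golang.org/x/sync/singleflight"
)

type iamCache struct {
	group  singleflight.Group
	loadFn func(ctx context.Context, accessKey string) (any, error)
}

func (c *iamCache) LoadUser(ctx context.Context, accessKey string) (any, error) {
	v, err, _ := c.group.Do(accessKey, func() (any, error) {
		// Only one caller per key reaches the drives; the rest wait
		// for this result instead of issuing duplicate read()s.
		return c.loadFn(ctx, accessKey)
	})
	return v, err
}
```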
Andreas Auernhammer
7ce28c3b1d kms: use GetClientCertificate callback for KES API keys (#19921)
This commit fixes an issue in the KES client configuration
that can cause the following error when connecting to KES:
```
ERROR Failed to connect to KMS: failed to generate data key with KMS key: tls: client certificate is required
```

The Go TLS stack seems to not send a client certificate if it
thinks the client certificate cannot be validated by the peer.
In case of an API key, we don't care about this since we use
public key pinning and the X.509 certificate is just a transport
encoding.

The `GetClientCertificate` callback seems to always be honored, such that
this error does not occur.

Signed-off-by: Andreas Auernhammer <github@aead.dev>
2024-06-12 07:31:26 -07:00
Harshavardhana
e3ac4035b9 decrement requests inqueue correctly after the request is processed (#19918) 2024-06-12 01:13:12 -07:00
Harshavardhana
d21b6daa49 fix: avoid crash when delete() returns an error in batch expiration (#19909) 2024-06-11 06:50:53 -07:00
Minio Trusted
76ebb16688 Update yaml files to latest version RELEASE.2024-06-11T03-13-30Z 2024-06-11 06:11:10 +00:00
Harshavardhana
55aa431578 fix: on windows avoid ':' as part of the object name (#19907)
fixes #18865
avoid-colon
2024-06-10 20:13:30 -07:00
Harshavardhana
614981e566 allow purge expired STS while loading credentials (#19905)
the reason for this is to avoid STS mappings being
purged without a successful load of other policies,
while credentials that loaded successfully
are still properly handled.

This also avoids an unnecessary cache store which was
implemented earlier as an optimization.
2024-06-10 11:45:50 -07:00
Harshavardhana
b8b956a05d add changes to Makefile to support dev build 2024-06-10 10:41:02 -07:00
Klaus Post
d2eed44c78 Fix replication checksum transfer (#19906)
Compression will be disabled by default if SSE-C is specified. So we can still honor SSE-C.
2024-06-10 10:40:33 -07:00
Anis Eleuch
789cbc6fb2 heal: Dangling check to evaluate object parts separately (#19797) 2024-06-10 08:51:27 -07:00
jiuker
0662c90b5c fix: copyObject restore with a specific version, update test cases (#19895) 2024-06-10 08:50:49 -07:00
Klaus Post
a2cab02554 Fix SSE-C checksums (#19896)
Compression will be disabled by default if SSE-C is specified. So we can still honor SSE-C.
2024-06-10 08:31:51 -07:00
Harshavardhana
6c7a21df6b turn-off unexpected debug logging in List() calls (#19903) 2024-06-09 21:34:26 -07:00
Ali Afsharzadeh
f933b0b708 Upgrade setup-helm action from v3 to v4 (#19897) 2024-06-09 02:13:09 -07:00
Shubhendu
9f305273a7 Added tests for replication of checksum headers (#19879)
Signed-off-by: Shubhendu Ram Tripathi <shubhendu@minio.io>
2024-06-08 09:24:15 -07:00
Cesar N
cbd9efcb43 Update console to v1.5.0 (#19899) 2024-06-08 01:12:00 -07:00
Harshavardhana
29a25a538f fix: make sure we list freeVersions like DEL marker with --versions (#19878)
free versions were being incorrectly skipped by freeVersions(); list them
as valid objects properly.

Co-authored-by: Krishnan Parthasarathi <Krishnan Parthasarathi>
2024-06-07 15:18:44 -07:00
Harshavardhana
2dd8faaedc remove unnecessary log in Listing() 2024-06-07 14:52:55 -07:00
Klaus Post
f00187033d Two way streams for upcoming locking enhancements (#19796) 2024-06-07 08:51:52 -07:00
Aditya Manthramurthy
c5141d65ac Update docker build script to pull all changes (#19892) 2024-06-07 08:43:38 -07:00
Krishnan Parthasarathi
069c4015cd Don't tier directory objects (#19891)
Directory objects are used by applications that simulate the folder
structure of an on-disk filesystem. These are zero-byte objects with names
ending with '/'. They are only used to check whether a 'folder' exists in
the namespace.
2024-06-07 08:43:17 -07:00
Shubhendu
2f6e03fb60 Calculate correct object size while replication (#19888)
It was missing in case of `replicateObject` but was present for
`replicateAll` already

Signed-off-by: Shubhendu Ram Tripathi <shubhendu@minio.io>
2024-06-06 12:31:01 -07:00
Klaus Post
0fbb945e13 Disable caching of encrypted objects (#19890)
Don't write encrypted objects to cache, if configured.
2024-06-06 11:39:18 -07:00
Anis Eleuch
b94dd835c9 decom: Fix CurrentSize output when generating the status (#19883)
StartSize starts with the raw free space of all disks in the given pool;
however, during the status, CurrentSize was not showing the current raw
free space, as expected at least by `mc admin decom status` since it was
written.
2024-06-06 07:30:43 -07:00
Minio Trusted
44fc707423 Update yaml files to latest version RELEASE.2024-06-06T09-36-42Z 2024-06-06 13:36:05 +00:00
Poorna
5aaef9790f replication: pass checksum headers to replica (#19834) 2024-06-06 02:36:42 -07:00
Bala FA
7edc352d23 Add ILM metrics in metrics-v3 (#19539)
Signed-off-by: Bala.FA <bala@minio.io>
2024-06-06 02:36:25 -07:00
Poorna
850a84b08a simplify site replication multipart proxying (#19885) 2024-06-05 18:01:15 -07:00
Taran Pelkey
4148754ce0 Check both given and normalized group DN on LDAP policy detach requests (#19876) 2024-06-05 15:42:40 -07:00
Harshavardhana
2107722829 upgrade go-oidc to fix GO-2024-2631 (#19884) 2024-06-05 15:00:34 -07:00
jiuker
d326ba52e9 feat: support batchJob for windows (#19877) 2024-06-05 08:44:53 -07:00
Sveinn
91e1487de4 Add LDAP public key authentication to SFTP (#19833) 2024-06-05 00:51:13 -07:00
Minio Trusted
5ffb2a9605 Update yaml files to latest version RELEASE.2024-06-04T19-20-08Z 2024-06-04 22:25:53 +00:00
Harshavardhana
17fe91d6d1 chore: update all deps (#19875) 2024-06-04 12:20:08 -07:00
jiuker
90a9f2dd70 fix: log diskerror when detect the disk space failed (#19861) 2024-06-04 09:42:03 -07:00
Harshavardhana
d5e48cfd65 fix: remove DriveOPTimeout for REST callers as they don't work properly (#19873)
With Go's net/http it is notoriously difficult to maintain streaming
deadlines per READ/WRITE on the net.Conn; if we add them, they
interfere with Go's internal requirements for an HTTP
connection.

Remove this support for now.

fixes #19853
2024-06-04 08:12:57 -07:00
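A sketch of the kind of per-operation deadline wrapper being removed (illustrative, not the exact MinIO type): refreshing the deadline on every Read races with deadlines net/http sets on the same net.Conn, which is why the approach does not work reliably.

```
import (
	"net"
	"time"
)

type deadlineConn struct {
	net.Conn
	timeout time.Duration
}

func (c *deadlineConn) Read(b []byte) (int, error) {
	// Resetting the deadline here fights with deadlines that net/http
	// itself manages on the same underlying connection.
	if err := c.Conn.SetReadDeadline(time.Now().Add(c.timeout)); err != nil {
		return 0, err
	}
	return c.Conn.Read(b)
}
```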
Anis Eleuch
d274566463 race: Fix rare race detected by testing (#19872)
Below is the race warning:

```
WARNING: DATA RACE
Write at 0x00c02d3d27c0 by goroutine 1210:
  github.com/minio/minio/cmd.(*healingTracker).bucketDone()
      github.com/minio/minio/cmd/background-newdisks-heal-ops.go:273 +0x13a
  github.com/minio/minio/cmd.(*erasureObjects).healErasureSet()
      github.com/minio/minio/cmd/global-heal.go:525 +0x2158
  github.com/minio/minio/cmd.healFreshDisk()
      github.com/minio/minio/cmd/background-newdisks-heal-ops.go:450 +0x107e
  github.com/minio/minio/cmd.monitorLocalDisksAndHeal.func1()
      github.com/minio/minio/cmd/background-newdisks-heal-ops.go:528 +0x150
  github.com/minio/minio/cmd.monitorLocalDisksAndHeal.gowrap2()
      github.com/minio/minio/cmd/background-newdisks-heal-ops.go:538 +0x82

Previous read at 0x00c02d3d27c0 by goroutine 1446:
  github.com/minio/minio/cmd.(*erasureObjects).healErasureSet.func5()
      github.com/minio/minio/cmd/global-heal.go:232 +0xfd
```
2024-06-04 08:12:32 -07:00
Shubhendu
39ac720826 Remove hardcoded override as not needed (#19868)
Fixes: https://github.com/minio/minio/issues/19867

Signed-off-by: Shubhendu Ram Tripathi <shubhendu@minio.io>
2024-06-04 06:24:37 -07:00
Shubhendu
21b6204692 Test proxying of DEL marker for bucket replication (#19870)
Make sure to avoid proxying for DEL markers

Signed-off-by: Shubhendu Ram Tripathi <shubhendu@minio.io>
2024-06-04 04:38:26 -07:00
Taran Pelkey
d98faeb26a Check if LDAP User has attached policy before creating Service Account (#19843)
Check if ldap user has policy before creating
2024-06-03 12:58:48 -07:00
Klaus Post
0a63dc199c Add trace sizes to more trace types (#19864)
Add trace sizes to

* ILM traces
* Replication traces
* Healing traces
* Decommission traces
* Rebalance traces
* (s)ftp traces
* http traces.
2024-06-03 08:45:54 -07:00
Anis Eleuch
3ba857dfa1 race: Fix detected test race in the internal audit code (#19865) 2024-06-03 08:44:50 -07:00
Klaus Post
a8554c4022 Update madmin (#19862)
Make it include https://github.com/minio/madmin-go/pull/285
2024-06-03 05:00:14 -07:00
Harshavardhana
ba54b39c02 fix: crash when audit webhook queue_dir is not writable (#19854)
This is a regression introduced in the #19275 refactor
2024-06-01 20:03:39 -07:00
Anis Eleuch
2a75225569 kafka: _MINIO_KAFKA_DEBUG to enable sarama debug messages (#19849) 2024-06-01 08:02:59 -07:00
Klaus Post
e72429c79c Add sizes to traces (#19851)
Added to storage and grid traces; this can provide more context for traces that aren't HTTP. Others may apply.
2024-05-31 22:17:37 -07:00
Klaus Post
c5b3f5553f Add per connection RPC metrics (#19852)
Provides individual and aggregate stats for each RPC connection.

Example:

```
  "rpc": {
   "collectedAt": "2024-05-31T14:33:29.1373103+02:00",
   "connected": 30,
   "disconnected": 0,
   "outgoingStreams": 69,
   "incomingStreams": 0,
   "outgoingBytes": 174822796,
   "incomingBytes": 175821566,
   "outgoingMessages": 768595,
   "incomingMessages": 768589,
   "outQueue": 0,
   "lastPongTime": "2024-05-31T12:33:28Z",
   "byDestination": {
    "http://127.0.0.1:9001": {
     "collectedAt": "2024-05-31T14:33:29.1373103+02:00",
     "connected": 5,
     "disconnected": 0,
     "outgoingStreams": 2,
     "incomingStreams": 0,
     "outgoingBytes": 38432543,
     "incomingBytes": 66604052,
     "outgoingMessages": 229496,
     "incomingMessages": 229575,
     "outQueue": 0,
     "lastPongTime": "2024-05-31T12:33:27Z"
    },
    "http://127.0.0.1:9002": {
     "collectedAt": "2024-05-31T14:33:29.1373103+02:00",
     "connected": 5,
     "disconnected": 0,
     "outgoingStreams": 6,
     "incomingStreams": 0,
     "outgoingBytes": 38215680,
     "incomingBytes": 66121283,
     "outgoingMessages": 228525,
     "incomingMessages": 228510,
     "outQueue": 0,
     "lastPongTime": "2024-05-31T12:33:27Z"
    },
...
```
2024-05-31 22:16:24 -07:00
Klaus Post
d3ae0aaad3 Add max buffering to SFTP (#19848)
Prevent OOM by adversarial use of SFTP upload by setting a 100MB max upload buffer.
2024-05-31 14:28:07 -07:00
Klaus Post
d67bccf861 Add xl-meta partial shard reconstruction (#19841)
Add partial shard reconstruction

* Add partial shard reconstruction
* Fix padding causing the last shard to be rejected
* Add md5 checks on single parts
* Move md5 verified to `verified/filename.ext`
* Move complete (without md5) to `complete/filename.ext.partno`

It's not pretty, but at least now the md5 gives some confidence it works correctly.
2024-05-31 07:49:23 -07:00
Anis Eleuch
1277ad69a6 heal: Remove .healing.bin when all ES drives are healing (#19846)
In the very rare case when all drives in an erasure set need to be healed,
remove .healing.bin from all drives, otherwise healing will be stuck in a
loop.

Also, fix a unit test that sometimes fails due to an incorrect test.
2024-05-31 07:48:50 -07:00
Harshavardhana
8f93e81afb change service account embedded policy size limit (#19840)
Bonus: trim off all the unnecessary spaces to allow
for a real 2048 characters in policies for STS handlers,
and re-use the code in all STS handlers.
2024-05-30 11:10:41 -07:00
Harshavardhana
4af31e654b avoid pre-populating buffers for deployments < 32GiB memory (#19839) 2024-05-30 04:58:12 -07:00
Harshavardhana
aad50579ba fix: wire up ILM sub-system properly for help (#19836) 2024-05-30 01:14:58 -07:00
Harshavardhana
38d059b0ae fix: single node multi-drive must register local drives properly (#19832)
#19688 introduced a regression in drive lookups for
single node multi-drive setups; drive replacement
would not work correctly without this PR.
2024-05-29 13:12:44 -07:00
Klaus Post
bd4eeb4522 Fix flipped EcM, EcN in metadata header (#19831)
Since this is a tuple encoded field we can just flip the struct members.
2024-05-29 12:14:09 -07:00
jiuker
03e3493288 fix: correct parse the tagging error for PostPolicyBucketHandler (#19825) 2024-05-29 11:50:46 -07:00
Harshavardhana
64baedf5a4 fix: hide prefixes for Hadoop properly (#19821) 2024-05-28 15:53:15 -07:00
Minio Trusted
2f64d5f77e Update yaml files to latest version RELEASE.2024-05-28T17-19-04Z 2024-05-28 19:23:04 +00:00
Anis Eleuch
f79a4ef4d0 policy: More defensive code validating svc:DurationSeconds (#19820)
This does not fix any current issue, but merging https://github.com/minio/madmin-go/pull/282
could lose the validation of the service account expiration time.

Add more defensive code for now. In the future, we should avoid doing
validation in another library.
2024-05-28 10:19:04 -07:00
Taran Pelkey
2d53854b19 Restrict access keys for users and groups to not allow '=' or ',' (#19749)
* initial commit

* Add UTF check

---------

Co-authored-by: Harshavardhana <harsha@minio.io>
2024-05-28 10:14:16 -07:00
Harshavardhana
e5c83535af chore: upgrade deps (#19819)
Signed-off-by: Harshavardhana <harsha@minio.io>
2024-05-28 02:27:44 -07:00
jiuker
c904ef966e feat: support tags for PostPolicy upload (#19816) 2024-05-27 21:44:00 -07:00
Minio Trusted
8f266e0772 Update yaml files to latest version RELEASE.2024-05-27T19-17-46Z 2024-05-27 23:52:43 +00:00
Harshavardhana
e0fe7cc391 fix: information disclosure bug in preconditions GET (#19810)
the precondition check was being honored before validating
whether anonymous access is allowed on the metadata of an
object, leading to metadata disclosure of the following
headers:

```
Last-Modified
Etag
x-amz-version-id
Expires
Cache-Control
```

although the information presented is minimal and opaque
in nature, it still discloses whether an object by a
specific name exists, without the caller even having
enough permissions.
2024-05-27 12:17:46 -07:00
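The ordering that matters, sketched with hypothetical injected helpers (not MinIO's actual handler code): authorize first, and only then evaluate If-Match/If-Modified-Since style preconditions, so the 304/412 paths and the headers they carry cannot leak metadata to unauthorized callers.

```
import "net/http"

func getObjectHandler(authorized func(*http.Request) bool,
	preconditionsFailed func(http.ResponseWriter, *http.Request) bool,
	serve http.HandlerFunc) http.HandlerFunc {
	return func(w http.ResponseWriter, r *http.Request) {
		if !authorized(r) {
			http.Error(w, "Access Denied", http.StatusForbidden)
			return
		}
		if preconditionsFailed(w, r) {
			return // 304/412 written only for authorized requests
		}
		serve(w, r)
	}
}
```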
Harshavardhana
9d20dec56a Revert "remove dataErrs from er.deleteIfDangling code"
This reverts commit 7d75b1e758.

This fails multipart tests; we need this code to handle
existing challenges, so wait for the comprehensive fix.
2024-05-26 11:13:29 -07:00
Harshavardhana
597a785253 fix: authenticate LDAP via actual DN instead of normalized DN (#19805)
fix: authenticate LDAP via actual DN instead of normalized DN

Normalized DN is only for internal representation, not for
external communication, any communication to LDAP must be
based on actual user DN. LDAP servers do not understand
normalized DN.

fixes #19757
2024-05-25 06:43:06 -07:00
Harshavardhana
7d75b1e758 remove dataErrs from er.deleteIfDangling code
avoid this until a comprehensive change is
merged such as https://github.com/minio/minio/pull/19797
2024-05-24 18:20:04 -07:00
Aditya Manthramurthy
5f78691fcf ldap: Add user DN attributes list config param (#19758)
This change uses the updated ldap library in minio/pkg (bumped
up to v3). A new config parameter is added for LDAP configuration to
specify extra user attributes to load from the LDAP server and to store
them as additional claims for the user.

A test is added in sts_handlers.go that shows how to access the LDAP
attributes as a claim.

This is in preparation for adding SSH pubkey authentication to MinIO's SFTP
integration.
2024-05-24 16:05:23 -07:00
Shireesh Anjal
a591e06ae5 Add cluster scanner metrics in metrics-v3 (#19517)
endpoint: /minio/metrics/v3/cluster/scanner
metrics:
 - bucket_scans_finished (counter)
 - bucket_scans_started (counter)
 - directories_scanned (counter)
 - last_activity_nano_seconds (gauge)
 - objects_scanned (counter)
 - versions_scanned (counter)
2024-05-24 12:29:25 -07:00
Harshavardhana
443c93c634 compute time spent in ILM properly (#19806) 2024-05-24 12:28:51 -07:00
Shireesh Anjal
5659cddc84 Add cluster config metrics in metrics-v3 (#19507)
endpoint: /minio/metrics/v3/cluster/config
metrics:
- write_quorum
- rrs_parity
- standard_parity
2024-05-24 05:50:46 -07:00
Shireesh Anjal
2a03a34bde Upgrade madmin-go to v3.0.52 (#19798)
This will ensure that content of /proc/cmdline from each server is
captured in the health report.
2024-05-24 05:34:57 -07:00
Shubhendu
1654a9b7e6 Use point in time values for gauge metrics in graphs (#19690)
Signed-off-by: Shubhendu Ram Tripathi <shubhendu@minio.io>
2024-05-24 04:11:51 -07:00
Shireesh Anjal
673a521711 Change endpoint of v3 notification metrics (#19804)
from /cluster/notification to /notification
2024-05-24 04:10:24 -07:00
Harshavardhana
2e23076688 move windows runners to in-house (#19800)
GitHub CI runners for Windows have gotten very slow;
moving them to our own hosted runners
2024-05-23 15:29:33 -07:00
Klaus Post
b92ac55250 Add multipart combination to xl-meta (#19780)
Add combination of multiple parts.

Parts will be reconstructed and saved separately and can manually be combined to the complete object.

Parts will be named `(version_id)-(filename).(partnum).(in)complete`.
2024-05-23 09:37:31 -07:00
Shireesh Anjal
7981509cc8 Add cluster and bucket replication metrics in metrics-v3 (#19546)
endpoint: /minio/metrics/v3/cluster/replication
metrics:
- average_active_workers
- average_queued_bytes
- average_queued_count
- average_transfer_rate
- current_active_workers
- current_transfer_rate
- last_minute_queued_bytes
- last_minute_queued_count
- max_active_workers
- max_queued_bytes
- max_queued_count
- max_transfer_rate
- recent_backlog_count

endpoint: /minio/metrics/v3/api/bucket/replication
metrics:
- last_hour_failed_bytes
- last_hour_failed_count
- last_minute_failed_bytes
- last_minute_failed_count
- latency_ms
- proxied_delete_tagging_requests_total
- proxied_get_requests_failures
- proxied_get_requests_total
- proxied_get_tagging_requests_failures
- proxied_get_tagging_requests_total
- proxied_head_requests_failures
- proxied_head_requests_total
- proxied_put_tagging_requests_failures
- proxied_put_tagging_requests_total
- sent_bytes
- sent_count
- total_failed_bytes
- total_failed_count
- proxied_delete_tagging_requests_failures
2024-05-23 00:41:18 -07:00
Krishnan Parthasarathi
6d5bc045bc Disallow ExpiredObjectAllVersions with object lock (#19792)
Relaxes restrictions on Expiration and NoncurrentVersionExpiration
placed by https://github.com/minio/minio/pull/19785.
ref: https://docs.aws.amazon.com/AmazonS3/latest/userguide/object-lock-managing.html#object-lock-managing-lifecycle

> Object lifecycle management configurations continue functioning
normally on protected objects, including placing delete markers.
However, a locked version of an object cannot be deleted by a S3
Lifecycle expiration policy. Object Lock is maintained regardless of
the object's storage class and throughout S3 Lifecycle
transitions between storage classes.
2024-05-22 18:12:48 -07:00
Harshavardhana
d38e020b29 remove errant logs for disconnected remote (#19793)
Signed-off-by: Harshavardhana <harsha@minio.io>
2024-05-22 18:12:23 -07:00
Poorna
7d29030292 fix list results returned for spark max-keys=2 listing (#19791)
This PR continues the fix from #19725 for some unhandled cases
2024-05-22 16:16:34 -07:00
Shubhendu
7c7650b7c3 Add sufficient deadlines and countermeasures to handle hung node scenario (#19688)
Signed-off-by: Shubhendu Ram Tripathi <shubhendu@minio.io>
Signed-off-by: Harshavardhana <harsha@minio.io>
2024-05-22 16:07:14 -07:00
Harshavardhana
ca80eced24 usage of deadline conn at Accept() breaks websocket (#19789)
fortunately this is not wired up for use; however, if anyone
enables deadlines for the conn then MinIO
startups sporadically fail.
2024-05-22 10:49:27 -07:00
Anis Eleuch
d0e0b81d8e Fix race in get/set of system/audit targets to avoid race errors (#19790) 2024-05-22 09:23:03 -07:00
jiuker
391baa1c9a test: add reject ilm rule test case (#19788) 2024-05-22 04:26:59 -07:00
Harshavardhana
ae14681c3e Revert "Fix two-way stream cancelation and pings (#19763)"
This reverts commit 4d698841f4.
2024-05-22 03:00:00 -07:00
Klaus Post
4d698841f4 Fix two-way stream cancelation and pings (#19763)
Do not log errors on oneway streams when sending ping fails. Instead, cancel the stream.

This also makes sure pings are sent when blocked on sending responses.
2024-05-22 01:25:25 -07:00
jiuker
9906b3ade9 fix: reject ilm rule when bucket LockEnabled (#19785) 2024-05-21 23:50:03 -07:00
Anis Eleuch
bf1769d3e0 xl: Avoid marking a drive offline after one part read failure (#19779)
This commit will fix one rare case of a multipart object that
can be read in theory but for which the GetObject API returned an error.

It turned out that six-year-old code was marking a drive offline
when bitrot streaming failed to read a part from a disk with any error.
This can affect reading a subsequent part: despite having enough shards,
reconstruction fails because one drive was marked offline earlier.

This commit removes the drive-marking-offline code. It also
closes the bitrot streaming reader before marking it as nil.
2024-05-21 07:36:21 -07:00
Harshavardhana
63e1ad9f29 fix: the user-agent for Veeam 2024-05-20 11:54:52 -07:00
Klaus Post
2c7bcee53f Add cross-version remapped merges to xl-meta (#19765)
Adds `-xver` which can be used with `-export` and `-combine` to attempt to combine files across versions if data is suspected to be the same. Overlapping data is compared.

Bonus: Make `inspect` accept wildcards.
2024-05-19 08:31:54 -07:00
Harshavardhana
1fd90c93ff re-use StorageAPI while loading drive formats (#19770)
Bonus: safe settings for deployment ID to avoid races
2024-05-19 01:06:49 -07:00
Poorna
e947a844c9 Fix test scripts to use mc ready (#19768) 2024-05-18 11:19:01 -07:00
Poorna
4e2d39293a Fix build script to wait for server to come up (#19767) 2024-05-17 14:43:59 -07:00
Krishnan Parthasarathi
1228d6bf1a Return NumVersions in quorum when available (#19766)
Similar to https://github.com/minio/minio/pull/17925
2024-05-17 13:57:37 -07:00
Shireesh Anjal
fc4561c64c Start callhome immediately after enabling (#19764)
Currently, on enabling callhome (or restarting the server), the callhome
job gets scheduled. This means that one has to wait for 24hrs (the
default frequency duration) to see it in action and to figure out if it
is working as expected.

It will be a better user experience to perform the first callhome
execution immediately after enabling it (or on server start if already
enabled).

Also, generate audit event on callhome execution, setting the error
field in case the execution has failed.
2024-05-17 09:53:34 -07:00
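A sketch of the scheduling change, with assumed names: fire the first run immediately on enable or startup, then settle into the configured frequency (24h by default, per the message above).

```
import (
	"context"
	"log"
	"time"
)

func runCallhome(ctx context.Context, freq time.Duration, execute func(context.Context) error) {
	t := time.NewTimer(0) // first execution happens right away
	defer t.Stop()
	for {
		select {
		case <-ctx.Done():
			return
		case <-t.C:
			if err := execute(ctx); err != nil {
				// The audit event with its error field would be
				// generated here.
				log.Printf("callhome failed: %v", err)
			}
			t.Reset(freq) // subsequent runs at the regular frequency
		}
	}
}
```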
Klaus Post
3b7747b42b Tweak multipart uploads (#19756)
* Store ModTime in the upload ID; return it when listing instead of the current time.
* Use this ModTime to expire and skip reading the file info.
* Consistent upload sorting in listing (since it now has the ModTime).
* Exclude healing disks to avoid returning an empty list.
2024-05-17 09:40:09 -07:00
Harshavardhana
e432e79324 avoid calling 'admin info' for disk, cpu, net metrics collection (#19762)
resource metrics collection was incorrectly making fan-out
liveness peer calls where it's not needed.
2024-05-17 08:15:13 -07:00
Harshavardhana
08d74819b6 handle racy updates to globalSite config (#19750)
```
==================
WARNING: DATA RACE
Read at 0x0000082be990 by goroutine 205:
  github.com/minio/minio/cmd.setCommonHeaders()

Previous write at 0x0000082be990 by main goroutine:
  github.com/minio/minio/cmd.lookupConfigs()
```
2024-05-16 16:13:47 -07:00
Poorna
aa3fde1784 Add ListObjectsV2 unit test (#19753)
for PR: #19725
2024-05-15 20:40:51 -07:00
Harshavardhana
0b3eb7f218 add more deadlines and pass around context under most situations (#19752) 2024-05-15 15:19:00 -07:00
Anis Eleuch
69c9496c71 Upgrade github.com/minio/pkg/v2 and other deps (#19747) 2024-05-15 11:04:40 -07:00
Klaus Post
b792b36495 Add Veeam storage class override (#19748)
Recent Veeam is very picky about storage class names. Add `_MINIO_VEEAM_FORCE_SC` env var.

It will override the storage class returned by the storage backend if it is non-standard
and we detect a Veeam client by checking the User Agent.

Applies to HeadObject/GetObject/ListObject*
2024-05-15 11:04:16 -07:00
Harshavardhana
d3db7d31a3 fix: add deadlines for all synchronous REST callers (#19741)
add deadlines that can be dynamically changed via
the drive max timeout values.

Bonus: optimize "file not found" case and hung drives/network - circuit break the check and return right
away instead of waiting.
2024-05-15 09:52:29 -07:00
Shireesh Anjal
c05ca63158 Fix crash on /minio/metrics/v3?list (#19745)
An unchecked map access was causing a panic.
2024-05-15 09:06:35 -07:00
Klaus Post
6d3e0c7db6 Tweak one way stream ping (#19743)
Do not log errors on oneway streams when sending ping fails. Instead cancel the stream.

This also makes sure pings are sent when blocked on sending responses.

I will do a separate PR that includes this and adds pings to two-way streams as well as tests for pings.
2024-05-15 08:39:21 -07:00
Shireesh Anjal
0e59e50b39 Capture ttfb api metrics only for GetObject (#19733)
as that is the only API where the TTFB metric is beneficial, and
capturing this for all APIs exponentially increases the response size in
large clusters.
2024-05-14 23:25:13 -07:00
Klaus Post
d4b391de1b Add PutObject Ring Buffer (#19605)
Replace the `io.Pipe` from streamingBitrotWriter -> CreateFile with a fixed size ring buffer.

This will add an output buffer for encoded shards to be written to disk - potentially via RPC.

This will remove blocking when `(*streamingBitrotWriter).Write` is called, and it writes hashes and data.

With current settings, the write looks like this:

```
Outbound
┌───────────────────┐             ┌────────────────┐               ┌───────────────┐                      ┌────────────────┐
│                   │   Parr.     │                │  (http body)  │               │                      │                │
│ Bitrot Hash       │     Write   │      Pipe      │      Read     │  HTTP buffer  │    Write (syscall)   │  TCP Buffer    │
│ Erasure Shard     │ ──────────► │  (unbuffered)  │ ────────────► │   (64K Max)   │ ───────────────────► │    (4MB)       │
│                   │             │                │               │  (io.Copy)    │                      │                │
└───────────────────┘             └────────────────┘               └───────────────┘                      └────────────────┘
```

We write a Hash (32 bytes). Since the pipe is unbuffered, it will block until the 32 bytes have 
been delivered to the TCP buffer, and the next Read hits the Pipe.

Then we write the shard data. This will typically be bigger than 64KB, so it will block until two blocks 
have been read from the pipe.

When we insert a ring buffer:

```
Outbound
┌───────────────────┐             ┌────────────────┐               ┌───────────────┐                      ┌────────────────┐
│                   │             │                │  (http body)  │               │                      │                │
│ Bitrot Hash       │     Write   │  Ring Buffer   │      Read     │  HTTP buffer  │    Write (syscall)   │  TCP Buffer    │
│ Erasure Shard     │ ──────────► │    (2MB)       │ ────────────► │   (64K Max)   │ ───────────────────► │    (4MB)       │
│                   │             │                │               │  (io.Copy)    │                      │                │
└───────────────────┘             └────────────────┘               └───────────────┘                      └────────────────┘
```

The hash+shard will fit within the ring buffer, so writes will not block - but will complete after a 
memcopy. Reads can fill the 64KB buffer if there is data for it.

If the network is congested, the ring buffer will become filled, and all syscalls will be on full buffers.
Only when the ring buffer is filled will erasure coding start blocking.

Since there is always "space" to write output data, we remove the parallel writing since we are 
always writing to memory now, and the goroutine synchronization overhead is probably not worth taking. 

If the output were blocked in the existing implementation, we would still wait for it to unblock in the parallel write, so it would 
make no difference there - except now the ring buffer smooths out the load.

There are some micro-optimizations we could look at later. The biggest is that, in most cases, 
we could encode directly to the ring buffer - if we are not at a boundary. Also, "force filling" the 
Read requests (i.e., blocking until a full read can be completed) could be investigated and maybe 
allow concurrent memory on read and write.
2024-05-14 17:11:04 -07:00
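A compact sketch of a blocking ring buffer standing in for io.Pipe (not MinIO's actual implementation; assumes Go 1.21+ for the built-in min): writes complete after a memcopy while space remains, and only block once the buffer is full, matching the behavior described above.

```
import (
	"io"
	"sync"
)

type ringBuffer struct {
	mu   sync.Mutex
	cond *sync.Cond
	buf  []byte
	r, w int  // read/write offsets
	n    int  // bytes currently buffered
	done bool // writer closed
}

func newRingBuffer(size int) *ringBuffer {
	rb := &ringBuffer{buf: make([]byte, size)}
	rb.cond = sync.NewCond(&rb.mu)
	return rb
}

func (rb *ringBuffer) Write(p []byte) (int, error) {
	rb.mu.Lock()
	defer rb.mu.Unlock()
	written := 0
	for len(p) > 0 {
		for rb.n == len(rb.buf) { // full: the only blocking case
			rb.cond.Wait()
		}
		// Copy up to the free space and the wrap-around boundary.
		chunk := min(len(p), len(rb.buf)-rb.n, len(rb.buf)-rb.w)
		copy(rb.buf[rb.w:], p[:chunk])
		rb.w = (rb.w + chunk) % len(rb.buf)
		rb.n += chunk
		p = p[chunk:]
		written += chunk
		rb.cond.Broadcast() // wake a blocked reader
	}
	return written, nil
}

func (rb *ringBuffer) Read(p []byte) (int, error) {
	rb.mu.Lock()
	defer rb.mu.Unlock()
	for rb.n == 0 {
		if rb.done {
			return 0, io.EOF
		}
		rb.cond.Wait()
	}
	chunk := min(len(p), rb.n, len(rb.buf)-rb.r)
	copy(p, rb.buf[rb.r:rb.r+chunk])
	rb.r = (rb.r + chunk) % len(rb.buf)
	rb.n -= chunk
	rb.cond.Broadcast() // wake a blocked writer
	return chunk, nil
}

func (rb *ringBuffer) CloseWrite() {
	rb.mu.Lock()
	rb.done = true
	rb.cond.Broadcast()
	rb.mu.Unlock()
}
```

With a 2MB ring, the 32-byte hash and a typical shard fit without blocking; only when the ring fills up does erasure coding start to block.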
Shubhendu
de4d3dac00 Added tests for IAM policies for bucket operations (#19734)
* Added tests for bucket access policies

Signed-off-by: Shubhendu Ram Tripathi <shubhendu@minio.io>

* move to correct category of tests

Signed-off-by: Shubhendu Ram Tripathi <shubhendu@minio.io>

---------

Signed-off-by: Shubhendu Ram Tripathi <shubhendu@minio.io>
2024-05-14 08:43:07 -07:00
Olli Janatuinen
534e7161df SFTP: Correctly inform client about unsupported commands (#19735) 2024-05-14 03:29:30 -07:00
Harshavardhana
9b219cd646 fix: return quorum based error, temporary failures must be ignored (#19732) 2024-05-14 03:29:17 -07:00
Shireesh Anjal
3bab4822f3 Add logger webhook metrics in metrics-v3 (#19515)
endpoint: /minio/metrics/v3/cluster/webhook
metrics:
- failed_messages (counter)
- online (gauge)
- queue_length (gauge)
- total_messages (counter)
2024-05-14 00:27:33 -07:00
coderwander
3c5f2d8916 fix some typo in struct name comments (#19513)
Signed-off-by: coderwander <770732124@qq.com>
2024-05-14 00:26:50 -07:00
Shireesh Anjal
5808190398 Add more metrics to v3/cluster/erasure-set (#19714)
Metrics being added:

- read_tolerance: No of drive failures that can be tolerated without
  disrupting read operations
- write_tolerance: No of drive failures that can be tolerated without
  disrupting write operations
- read_health: Health of the erasure set in a pool for read operations
  (1=healthy, 0=unhealthy)
- write_health: Health of the erasure set in a pool for write operations
  (1=healthy, 0=unhealthy)
2024-05-14 00:25:56 -07:00
Shireesh Anjal
b2a82248b1 Move /system/go to /debug/go (#19707) 2024-05-14 00:25:37 -07:00
dependabot[bot]
4e5fcca8b9 build(deps): bump golang.org/x/net (#23)
Bumps the go_modules group with 1 update in the /docs/debugging/s3-verify directory: [golang.org/x/net](https://github.com/golang/net).


Updates `golang.org/x/net` from 0.24.0 to 0.25.0
- [Commits](https://github.com/golang/net/compare/v0.24.0...v0.25.0)

---
updated-dependencies:
- dependency-name: golang.org/x/net
  dependency-type: indirect
  dependency-group: go_modules
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-05-13 10:59:52 -07:00
Klaus Post
c36eaedb93 Re-add "Fix incorrect merging of slash-suffixed objects (#19729)
Adds regression test for #19699

Failures are a bit luck-based, since they require objects to be placed on different sets.

However, this generates a failure prior to #19699

* Revert "Revert "Fix incorrect merging of slash-suffixed objects (#19699)""

This reverts commit f30417d9a8.

* Don't override when suffix doesn't match. Instead rely on quorum for each.
2024-05-13 09:30:24 -07:00
Poorna
7752b03add optimize max-keys=2 listing for spark workloads (#19725)
to return results appropriately for versioned buckets, especially
when underlying prefixes have been deleted
2024-05-13 07:57:42 -07:00
jiuker
01bfc78535 Optimization: reuse hashedSecret when LookupConfig (#19724) 2024-05-12 22:52:27 -07:00
Shireesh Anjal
074d70112d Consolidate drive health related metrics into single metric (#19706)
Instead of having "online" and "healing" as two metrics, replace with a
single metric "health" which can have following values:

0 = offline
1 = healthy
2 = healing
2024-05-12 10:23:50 -07:00
Harshavardhana
e8d14c0d90 verify preconditions during CompleteMultipart (#19713)
Bonus: hold the write lock properly to apply
optimistic concurrency during NewMultipartUpload()
2024-05-10 17:31:22 -07:00
Shireesh Anjal
60d7e8143a Move /cluster/audit to /audit (#19708)
As the audit metrics are server level and not 
overall cluster level.
2024-05-10 07:50:39 -07:00
Klaus Post
9667a170de Add usage cache cleanup and lower forced top compaction (#19719)
Lower forced compaction to 250K entries.

If there are more than 250K entries on the top level, force compact it and log an error.
2024-05-10 07:49:50 -07:00
Shubhendu
abae30f9e1 Added decom test with KES using sse-s3 and sse-kms (#19695)
Signed-off-by: Shubhendu Ram Tripathi <shubhendu@minio.io>
2024-05-10 01:24:14 -07:00
Minio Trusted
f9311bc9d1 Update yaml files to latest version RELEASE.2024-05-10T01-41-38Z 2024-05-10 02:00:49 +00:00
Harshavardhana
b598402738 fix: unexpected credentials missing while passing 2024-05-09 18:41:38 -07:00
Harshavardhana
bd026b913f remove references for MINIO_SERVER_URL 2024-05-09 17:22:36 -07:00
Harshavardhana
72ff69d9bb add log-prefix name for specifying custom log-name (#19712) 2024-05-09 14:29:37 -07:00
Harshavardhana
f30417d9a8 Revert "Fix incorrect merging of slash-suffixed objects (#19699)"
This reverts commit 2f7a10ab31.
2024-05-09 12:32:05 -07:00
jiuker
47a4ad3cd7 fix: truncate Expiration to second when Add ServiceAccount (#19674)
Truncate Expiration to the second when adding a ServiceAccount
2024-05-09 11:08:04 -07:00
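The core of the change is a one-line normalization; the helper name and the rationale comment are illustrative:

```
import "time"

// normalizeExpiry drops sub-second precision, so the stored expiration
// matches values compared and rendered at second granularity.
func normalizeExpiry(t time.Time) time.Time {
	return t.Truncate(time.Second)
}
```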
Klaus Post
2f7a10ab31 Fix incorrect merging of slash-suffixed objects (#19699)
If two objects share everything but one object has a slash prefix, those would be merged in listings, 
with secondary properties used for a tiebreak.

Example: An object with the key `prefix/obj` would be merged with an object named `prefix/obj/`. 
While this violates the [no object can be a prefix of another](https://min.io/docs/minio/linux/operations/concepts/thresholds.html#conflicting-objects), let's resolve these.

If we have an object with 'name' and a directory named 'name/' discard the directory only - but allow objects 
of 'name' and 'name/' (xldir) to be uniquely returned.

Regression from #15772
2024-05-09 11:05:45 -07:00
Harshavardhana
b534dc69ab deprecate unexpected healing failed counters (#19705)
simplify this to avoid verbose metrics, and make
room for valid metrics to be reported for alerting
etc.
2024-05-09 11:04:41 -07:00
Harshavardhana
7b7d2ea7d4 pass around correct endpoint while registering remote storage (#19710) 2024-05-09 11:03:54 -07:00
Aditya Manthramurthy
e00de1c302 ldap-import: Add additional logs (#19691)
These logs are being added to provide better debugging of LDAP
normalization on IAM import.
2024-05-09 10:52:53 -07:00
Harshavardhana
3549e583a6 results must be a single channel to avoid overwriting healing.bin (#19702) 2024-05-09 10:15:03 -07:00
Andi
f5e3eedf34 chore: use errors.New to replace fmt.Errorf with no parameters (#19568)
Signed-off-by: ChengenH <hce19970702@gmail.com>
2024-05-09 01:44:07 -07:00
Harshavardhana
519dbfebf6 upgrade to go1.22.x 2024-05-09 01:36:00 -07:00
Harshavardhana
9a267f9270 allow caller context during reloads() to cancel (#19687)
canceled callers might linger around longer and
can potentially overwhelm the system. Instead,
provide a caller context so canceled callers
don't hold on to resources.

Bonus: we have no reason to cache errors; we should
never cache errors, otherwise we can potentially have
quorum errors creeping in unexpectedly. We should
let the cache, when invalidating, hit the actual
resources instead.
2024-05-08 17:51:34 -07:00
Anis Eleuch
67bd71b7a5 grid: Fix a window of a disconnected node not marked as offline (#19703)
LastPong is saved as nanoseconds after a connection or reconnection but
saved as seconds when receiving a pong message. The code deciding if
a pong is too old can be skewed since it assumes LastPong is only in
seconds.
2024-05-08 17:50:13 -07:00
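A sketch of the unit fix, with assumed names: keep LastPong in a single unit (seconds here) at every write site, so the staleness check is never skewed by a nanosecond-valued write.

```
import (
	"sync/atomic"
	"time"
)

var lastPongSec atomic.Int64 // one unit everywhere: Unix seconds

func onPong() { lastPongSec.Store(time.Now().Unix()) }

func pongTooOld(maxAge time.Duration) bool {
	last := time.Unix(lastPongSec.Load(), 0)
	return time.Since(last) > maxAge
}
```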
Klaus Post
ec49fff583 Accept multipart checksums with part count (#19680)
Accept multipart uploads where the combined checksum provides the expected part count.

It seems this was added by AWS to make the API more consistent, even if the 
data is entirely superfluous on multiple levels.

Improves AWS S3 compatibility.
2024-05-08 09:18:34 -07:00
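An illustrative parser for the AWS-style composite checksum form "<base64>-<parts>" (e.g. "Vvqo.../Q==-3"): the part count is redundant, but accepting it when it matches the upload improves compatibility, as the commit notes.

```
import (
	"strconv"
	"strings"
)

func splitCompositeChecksum(v string) (checksum string, parts int, ok bool) {
	// Standard base64 never contains '-', so the last dash (if any)
	// separates the checksum from the part count.
	i := strings.LastIndexByte(v, '-')
	if i < 0 {
		return v, 0, true // plain checksum without a part count
	}
	n, err := strconv.Atoi(v[i+1:])
	if err != nil || n <= 0 {
		return "", 0, false
	}
	return v[:i], n, true
}
```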
Andreas Auernhammer
8b660e18f2 kms: add support for MinKMS and remove some unused/broken code (#19368)
This commit adds support for MinKMS. Now, there are three KMS
implementations in `internal/kms`: Builtin, MinIO KES and MinIO KMS.

Adding another KMS integration required some cleanup. In particular:
 - Various KMS APIs that haven't been and are not used have been
   removed. A lot of the code was broken anyway.
 - Metrics are now monitored by the `kms.KMS` itself. For basic
   metrics this is simpler than collecting metrics for external
   servers. In particular, each KES server returns its own metrics
   and no cluster-level view.
 - The builtin KMS now uses the same en/decryption implemented by
   MinKMS and KES. It still supports decryption of the previous
   ciphertext format. It's backwards compatible.
 - Data encryption keys now include a master key version since MinKMS
   supports multiple versions (~4 billion in total and 10000 concurrent)
   per key name.

Signed-off-by: Andreas Auernhammer <github@aead.dev>
2024-05-07 16:55:37 -07:00
Harshavardhana
981497799a return appropriate error upon reaching maxClients() (#19669) 2024-05-07 13:41:56 -07:00
Minio Trusted
b9bdc17465 Update yaml files to latest version RELEASE.2024-05-07T06-41-25Z 2024-05-07 16:59:52 +00:00
555 changed files with 32673 additions and 17594 deletions

View File

@@ -4,7 +4,6 @@ on:
   pull_request:
     branches:
       - master
-      - next
 
 # This ensures that previous jobs for the PR are canceled when the PR is
 # updated.
@@ -21,7 +20,7 @@ jobs:
     runs-on: ${{ matrix.os }}
     strategy:
       matrix:
-        go-version: [1.21.x]
+        go-version: [1.22.x]
         os: [ubuntu-latest]
     steps:
       - uses: actions/checkout@v4

View File

@@ -3,8 +3,7 @@ name: FIPS Build Test
 on:
   pull_request:
     branches:
-      - master
-      - next
+      - master
 
 # This ensures that previous jobs for the PR are canceled when the PR is
 # updated.
@@ -21,7 +20,7 @@ jobs:
     runs-on: ${{ matrix.os }}
     strategy:
       matrix:
-        go-version: [1.21.x]
+        go-version: [1.22.x]
         os: [ubuntu-latest]
     steps:
       - uses: actions/checkout@v4

View File

@@ -3,8 +3,7 @@ name: Healing Functional Tests
 on:
   pull_request:
     branches:
-      - master
-      - next
+      - master
 
 # This ensures that previous jobs for the PR are canceled when the PR is
 # updated.
@@ -21,7 +20,7 @@ jobs:
     runs-on: ${{ matrix.os }}
     strategy:
       matrix:
-        go-version: [1.21.x]
+        go-version: [1.22.x]
         os: [ubuntu-latest]
     steps:
       - uses: actions/checkout@v4

View File

@@ -3,12 +3,11 @@ name: Linters and Tests
 on:
   pull_request:
     branches:
-      - master
-      - next
+      - master
 
 # This ensures that previous jobs for the PR are canceled when the PR is
 # updated.
 
-concurrency:
+concurrency:
   group: ${{ github.workflow }}-${{ github.head_ref }}
   cancel-in-progress: true
@@ -21,23 +20,14 @@ jobs:
     runs-on: ${{ matrix.os }}
     strategy:
       matrix:
-        go-version: [1.21.x]
-        os: [ubuntu-latest, windows-latest]
+        go-version: [1.22.x]
+        os: [ubuntu-latest]
     steps:
       - uses: actions/checkout@v4
       - uses: actions/setup-go@v5
         with:
           go-version: ${{ matrix.go-version }}
           check-latest: true
-      - name: Build on ${{ matrix.os }}
-        if: matrix.os == 'windows-latest'
-        env:
-          CGO_ENABLED: 0
-          GO111MODULE: on
-        run: |
-          netsh int ipv4 set dynamicport tcp start=60000 num=61000
-          go build --ldflags="-s -w" -o %GOPATH%\bin\minio.exe
-          go test -v --timeout 50m ./...
       - name: Build on ${{ matrix.os }}
         if: matrix.os == 'ubuntu-latest'
         env:

View File

@@ -3,8 +3,7 @@ name: Functional Tests
 on:
   pull_request:
     branches:
-      - master
-      - next
+      - master
 
 # This ensures that previous jobs for the PR are canceled when the PR is
 # updated.
@@ -21,7 +20,7 @@ jobs:
     runs-on: ${{ matrix.os }}
     strategy:
       matrix:
-        go-version: [1.21.x]
+        go-version: [1.22.x]
         os: [ubuntu-latest]
     steps:
       - uses: actions/checkout@v4

View File

@@ -3,8 +3,7 @@ name: Helm Chart linting
 on:
   pull_request:
     branches:
-      - master
-      - next
+      - master
 
 # This ensures that previous jobs for the PR are canceled when the PR is
 # updated.
@@ -23,7 +22,7 @@ jobs:
         uses: actions/checkout@v4
       - name: Install Helm
-        uses: azure/setup-helm@v3
+        uses: azure/setup-helm@v4
       - name: Run helm lint
         run: |

View File

@@ -4,7 +4,6 @@ on:
pull_request:
branches:
- master
- next
# This ensures that previous jobs for the PR are canceled when the PR is
# updated.
@@ -62,7 +61,7 @@ jobs:
# are turned off - i.e. if ldap="", then ldap server is not enabled for
# the tests.
matrix:
go-version: [1.21.x]
go-version: [1.22.x]
ldap: ["", "localhost:389"]
etcd: ["", "http://localhost:2379"]
openid: ["", "http://127.0.0.1:5556/dex"]
@@ -126,3 +125,37 @@ jobs:
if: matrix.openid == 'http://127.0.0.1:5556/dex'
run: |
make test-site-replication-oidc
iam-import-with-missing-entities:
name: Test IAM import in new cluster with missing entities
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v4
- uses: actions/setup-go@v5
with:
go-version: ${{ matrix.go-version }}
check-latest: true
- name: Checkout minio-iam-testing
uses: actions/checkout@v4
with:
repository: minio/minio-iam-testing
path: minio-iam-testing
- name: Test import of IAM artifacts when the fresh cluster has missing groups etc
run: |
make test-iam-import-with-missing-entities
iam-import-with-openid:
name: Test IAM import in new cluster with openid configurations
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v4
- uses: actions/setup-go@v5
with:
go-version: ${{ matrix.go-version }}
check-latest: true
- name: Checkout minio-iam-testing
uses: actions/checkout@v4
with:
repository: minio/minio-iam-testing
path: minio-iam-testing
- name: Test import of IAM artifacts in a fresh cluster with openid configurations
run: |
make test-iam-import-with-openid

View File

@@ -4,7 +4,6 @@ on:
pull_request:
branches:
- master
- next
# This ensures that previous jobs for the PR are canceled when the PR is
# updated.
@@ -30,7 +29,7 @@ jobs:
- name: setup-go-step
uses: actions/setup-go@v5
with:
go-version: 1.21.x
go-version: 1.22.x
- name: github sha short
id: vars
@@ -56,6 +55,11 @@ jobs:
run: |
${GITHUB_WORKSPACE}/.github/workflows/run-mint.sh "erasure" "minio" "minio123" "${{ steps.vars.outputs.sha_short }}"
# FIXME: re-enable this when we have a valid way to add deadlines for PUT()s (internode CreateFile)
# - name: resiliency
# run: |
# ${GITHUB_WORKSPACE}/.github/workflows/run-mint.sh "resiliency" "minio" "minio123" "${{ steps.vars.outputs.sha_short }}"
- name: The job must clean up
if: ${{ always() }}
run: |

View File

@@ -0,0 +1,78 @@
version: '3.7'
# Settings and configurations that are common for all containers
x-minio-common: &minio-common
image: quay.io/minio/minio:${JOB_NAME}
command: server --console-address ":9001" http://minio{1...4}/rdata{1...2}
expose:
- "9000"
- "9001"
environment:
MINIO_CI_CD: "on"
MINIO_ROOT_USER: "minio"
MINIO_ROOT_PASSWORD: "minio123"
MINIO_KMS_SECRET_KEY: "my-minio-key:OSMM+vkKUTCvQs9YL/CVMIMt43HFhkUpqJxTmGl6rYw="
MINIO_DRIVE_MAX_TIMEOUT: "5s"
healthcheck:
test: ["CMD", "mc", "ready", "local"]
interval: 5s
timeout: 5s
retries: 5
# starts 4 docker containers running minio server instances.
# using nginx reverse proxy and load balancing, you can access
# them through port 9000.
services:
minio1:
<<: *minio-common
hostname: minio1
volumes:
- rdata1-1:/rdata1
- rdata1-2:/rdata2
minio2:
<<: *minio-common
hostname: minio2
volumes:
- rdata2-1:/rdata1
- rdata2-2:/rdata2
minio3:
<<: *minio-common
hostname: minio3
volumes:
- rdata3-1:/rdata1
- rdata3-2:/rdata2
minio4:
<<: *minio-common
hostname: minio4
volumes:
- rdata4-1:/rdata1
- rdata4-2:/rdata2
nginx:
image: nginx:1.19.2-alpine
hostname: nginx
volumes:
- ./nginx-4-node.conf:/etc/nginx/nginx.conf:ro
ports:
- "9000:9000"
- "9001:9001"
depends_on:
- minio1
- minio2
- minio3
- minio4
## By default this config uses default local driver,
## For custom volumes replace with volume driver configuration.
volumes:
rdata1-1:
rdata1-2:
rdata2-1:
rdata2-2:
rdata3-1:
rdata3-2:
rdata4-1:
rdata4-2:
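This stack is what the resiliency mint run drives; a minimal local sketch of bringing it up, assuming the compose file is saved as `minio-resiliency.yaml` (the `minio-${MODE}.yaml` naming used by `run-mint.sh` below) and that a `quay.io/minio/minio:<tag>` image matching `JOB_NAME` exists:
```sh
# bring up the 4-node stack behind nginx; JOB_NAME selects the image tag
JOB_NAME=dev docker-compose -f minio-resiliency.yaml up -d

# poll through the nginx frontend until the cluster answers ready
export MC_HOST_resil="http://minio:minio123@127.0.0.1:9000/"
mc ready resil
```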

View File

@@ -23,10 +23,9 @@ http {
# include /etc/nginx/conf.d/*.conf;
upstream minio {
server minio1:9000;
server minio2:9000;
server minio3:9000;
server minio4:9000;
server minio1:9000 max_fails=1 fail_timeout=10s;
server minio2:9000 max_fails=1 fail_timeout=10s;
server minio3:9000 max_fails=1 fail_timeout=10s;
}
upstream console {

View File

@@ -23,14 +23,14 @@ http {
# include /etc/nginx/conf.d/*.conf;
upstream minio {
server minio1:9000;
server minio2:9000;
server minio3:9000;
server minio4:9000;
server minio5:9000;
server minio6:9000;
server minio7:9000;
server minio8:9000;
server minio1:9000 max_fails=1 fail_timeout=10s;
server minio2:9000 max_fails=1 fail_timeout=10s;
server minio3:9000 max_fails=1 fail_timeout=10s;
server minio4:9000 max_fails=1 fail_timeout=10s;
server minio5:9000 max_fails=1 fail_timeout=10s;
server minio6:9000 max_fails=1 fail_timeout=10s;
server minio7:9000 max_fails=1 fail_timeout=10s;
server minio8:9000 max_fails=1 fail_timeout=10s;
}
upstream console {

View File

@@ -23,10 +23,10 @@ http {
# include /etc/nginx/conf.d/*.conf;
upstream minio {
server minio1:9000;
server minio2:9000;
server minio3:9000;
server minio4:9000;
server minio1:9000 max_fails=1 fail_timeout=10s;
server minio2:9000 max_fails=1 fail_timeout=10s;
server minio3:9000 max_fails=1 fail_timeout=10s;
server minio4:9000 max_fails=1 fail_timeout=10s;
}
upstream console {

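The `max_fails=1 fail_timeout=10s` additions enable nginx's passive health checks: after a single failed proxy attempt a backend is treated as down for 10 seconds, so requests fail over to the remaining nodes. A rough way to observe this by hand, assuming the resiliency stack above is running and `minio4` is the container being paused (as `run-mint.sh` does below):
```sh
# pause one backend, then confirm the nginx frontend keeps answering
docker-compose -f minio-resiliency.yaml pause minio4
for i in $(seq 1 10); do
	curl -s -o /dev/null -w "%{http_code}\n" http://127.0.0.1:9000/minio/health/live
done
docker-compose -f minio-resiliency.yaml unpause minio4
```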
View File

@@ -24,8 +24,6 @@ if [ ! -f ./mc ]; then
chmod +x mc
fi
go install -v github.com/minio/minio/docs/debugging/s3-check-md5@latest
export RELEASE=RELEASE.2023-08-29T23-07-35Z
docker-compose -f docker-compose-site1.yaml up -d
@@ -45,10 +43,10 @@ sleep 30s
sleep 5
s3-check-md5 -h
./s3-check-md5 -h
failed_count_site1=$(s3-check-md5 -versions -access-key minioadmin -secret-key minioadmin -endpoint http://site1-nginx:9001 -bucket testbucket 2>&1 | grep FAILED | wc -l)
failed_count_site2=$(s3-check-md5 -versions -access-key minioadmin -secret-key minioadmin -endpoint http://site2-nginx:9002 -bucket testbucket 2>&1 | grep FAILED | wc -l)
failed_count_site1=$(./s3-check-md5 -versions -access-key minioadmin -secret-key minioadmin -endpoint http://site1-nginx:9001 -bucket testbucket 2>&1 | grep FAILED | wc -l)
failed_count_site2=$(./s3-check-md5 -versions -access-key minioadmin -secret-key minioadmin -endpoint http://site2-nginx:9002 -bucket testbucket 2>&1 | grep FAILED | wc -l)
if [ $failed_count_site1 -ne 0 ]; then
echo "failed with multipart on site1 uploads"
@@ -64,8 +62,8 @@ fi
sleep 5
failed_count_site1=$(s3-check-md5 -versions -access-key minioadmin -secret-key minioadmin -endpoint http://site1-nginx:9001 -bucket testbucket 2>&1 | grep FAILED | wc -l)
failed_count_site2=$(s3-check-md5 -versions -access-key minioadmin -secret-key minioadmin -endpoint http://site2-nginx:9002 -bucket testbucket 2>&1 | grep FAILED | wc -l)
failed_count_site1=$(./s3-check-md5 -versions -access-key minioadmin -secret-key minioadmin -endpoint http://site1-nginx:9001 -bucket testbucket 2>&1 | grep FAILED | wc -l)
failed_count_site2=$(./s3-check-md5 -versions -access-key minioadmin -secret-key minioadmin -endpoint http://site2-nginx:9002 -bucket testbucket 2>&1 | grep FAILED | wc -l)
## we do not need to fail here, since we are going to test
## upgrading to master, healing and being able to recover
@@ -93,8 +91,8 @@ for i in $(seq 1 10); do
./mc admin heal -r --remove --json site2/ 2>&1 >/dev/null
done
failed_count_site1=$(s3-check-md5 -versions -access-key minioadmin -secret-key minioadmin -endpoint http://site1-nginx:9001 -bucket testbucket 2>&1 | grep FAILED | wc -l)
failed_count_site2=$(s3-check-md5 -versions -access-key minioadmin -secret-key minioadmin -endpoint http://site2-nginx:9002 -bucket testbucket 2>&1 | grep FAILED | wc -l)
failed_count_site1=$(./s3-check-md5 -versions -access-key minioadmin -secret-key minioadmin -endpoint http://site1-nginx:9001 -bucket testbucket 2>&1 | grep FAILED | wc -l)
failed_count_site2=$(./s3-check-md5 -versions -access-key minioadmin -secret-key minioadmin -endpoint http://site2-nginx:9002 -bucket testbucket 2>&1 | grep FAILED | wc -l)
if [ $failed_count_site1 -ne 0 ]; then
echo "failed with multipart on site1 uploads"

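The `s3-check-md5` → `./s3-check-md5` changes track a build change elsewhere in this diff: the debugging helpers are now compiled into the working directory by the `build-debugging` Makefile target instead of being `go install`ed onto `$PATH`. A minimal sketch of producing the local binary, assuming the script drops the binaries at the repo root as the new `.gitignore` entries suggest:
```sh
# builds s3-check-md5 (and the other docs/debugging helpers) into the CWD
env bash docs/debugging/build.sh
./s3-check-md5 -h
```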
View File

@@ -3,8 +3,7 @@ name: MinIO advanced tests
on:
pull_request:
branches:
- master
- next
- master
# This ensures that previous jobs for the PR are canceled when the PR is
# updated.
@@ -22,7 +21,7 @@ jobs:
strategy:
matrix:
go-version: [1.21.x]
go-version: [1.22.x]
steps:
- uses: actions/checkout@v4
@@ -42,6 +41,12 @@ jobs:
sudo sysctl net.ipv6.conf.default.disable_ipv6=0
make test-ilm
- name: Test PBAC
run: |
sudo sysctl net.ipv6.conf.all.disable_ipv6=0
sudo sysctl net.ipv6.conf.default.disable_ipv6=0
make test-pbac
- name: Test Config File
run: |
sudo sysctl net.ipv6.conf.all.disable_ipv6=0
@@ -65,3 +70,9 @@ jobs:
sudo sysctl net.ipv6.conf.all.disable_ipv6=0
sudo sysctl net.ipv6.conf.default.disable_ipv6=0
make test-versioning
- name: Test Multipart upload with failures
run: |
sudo sysctl net.ipv6.conf.all.disable_ipv6=0
sudo sysctl net.ipv6.conf.default.disable_ipv6=0
make test-multipart

View File

@@ -3,8 +3,7 @@ name: Root lockdown tests
on:
pull_request:
branches:
- master
- next
- master
# This ensures that previous jobs for the PR are canceled when the PR is
# updated.
@@ -21,7 +20,7 @@ jobs:
runs-on: ${{ matrix.os }}
strategy:
matrix:
go-version: [1.21.x]
go-version: [1.22.x]
os: [ubuntu-latest]
steps:

View File

@@ -16,7 +16,7 @@ docker volume rm $(docker volume ls -f dangling=true) || true
cd .github/workflows/mint
docker-compose -f minio-${MODE}.yaml up -d
sleep 30s
sleep 1m
docker system prune -f || true
docker volume prune -f || true
@@ -26,6 +26,9 @@ docker volume rm $(docker volume ls -q -f dangling=true) || true
[ "${MODE}" == "pools" ] && docker-compose -f minio-${MODE}.yaml stop minio2
[ "${MODE}" == "pools" ] && docker-compose -f minio-${MODE}.yaml stop minio6
# Pause one node, to check that all S3 calls keep working while one node is down
[ "${MODE}" == "resiliency" ] && docker-compose -f minio-${MODE}.yaml pause minio4
docker run --rm --net=mint_default \
--name="mint-${MODE}-${JOB_NAME}" \
-e SERVER_ENDPOINT="nginx:9000" \
@@ -35,6 +38,18 @@ docker run --rm --net=mint_default \
-e MINT_MODE="${MINT_MODE}" \
docker.io/minio/mint:edge
# FIXME: enable this after fixing aws-sdk-java-v2 tests
# # unpause the node, to check that all S3 calls work after the node comes back
# [ "${MODE}" == "resiliency" ] && docker-compose -f minio-${MODE}.yaml unpause minio4
# [ "${MODE}" == "resiliency" ] && docker run --rm --net=mint_default \
# --name="mint-${MODE}-${JOB_NAME}" \
# -e SERVER_ENDPOINT="nginx:9000" \
# -e ACCESS_KEY="${ACCESS_KEY}" \
# -e SECRET_KEY="${SECRET_KEY}" \
# -e ENABLE_HTTPS=0 \
# -e MINT_MODE="${MINT_MODE}" \
# docker.io/minio/mint:edge
docker-compose -f minio-${MODE}.yaml down || true
sleep 10s

View File

@@ -3,8 +3,7 @@ name: Shell formatting checks
on:
pull_request:
branches:
- master
- next
- master
permissions:
contents: read

View File

@@ -3,8 +3,7 @@ name: Upgrade old version tests
on:
pull_request:
branches:
- master
- next
- master
# This ensures that previous jobs for the PR are canceled when the PR is
# updated.
@@ -21,7 +20,7 @@ jobs:
runs-on: ${{ matrix.os }}
strategy:
matrix:
go-version: [1.21.x]
go-version: [1.22.x]
os: [ubuntu-latest]
steps:

View File

@@ -21,8 +21,7 @@ jobs:
- name: Set up Go
uses: actions/setup-go@v5
with:
go-version: 1.21.9
check-latest: true
go-version: 1.22.7
- name: Get official govulncheck
run: go install golang.org/x/vuln/cmd/govulncheck@latest
shell: bash
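For an equivalent local run of this check (a sketch; the workflow pins Go 1.22.7, while a local toolchain may differ):
```sh
go install golang.org/x/vuln/cmd/govulncheck@latest
govulncheck ./...
```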

.gitignore vendored
View File

@@ -43,4 +43,13 @@ docs/debugging/inspect/inspect
docs/debugging/pprofgoparser/pprofgoparser
docs/debugging/reorder-disks/reorder-disks
docs/debugging/populate-hard-links/populate-hardlinks
docs/debugging/xattr/xattr
docs/debugging/xattr/xattr
hash-set
healing-bin
inspect
pprofgoparser
reorder-disks
s3-check-md5
s3-verify
xattr
xl-meta

View File

@@ -2,6 +2,9 @@
extend-exclude = [
".git/",
"docs/",
"CREDITS",
"go.mod",
"go.sum",
]
ignore-hidden = false
@@ -16,6 +19,7 @@ extend-ignore-re = [
"MIIDBTCCAe2gAwIBAgIQWHw7h.*",
'http\.Header\{"X-Amz-Server-Side-Encryptio":',
"ZoEoZdLlzVbOlT9rbhD7ZN7TLyiYXSAlB79uGEge",
"ERRO:",
]
[default.extend-words]

View File

@@ -12,8 +12,9 @@ Fork [MinIO upstream](https://github.com/minio/minio/fork) source repository to
```sh
git clone https://github.com/minio/minio
cd minio
go install -v
ls /go/bin/minio
ls $(go env GOPATH)/bin/minio
```
### Set up git remote as ``upstream``

CREDITS

File diff suppressed because it is too large

View File

@@ -1,5 +1,7 @@
FROM minio/minio:latest
RUN chmod -R 777 /usr/bin
COPY ./minio /usr/bin/minio
COPY dockerscripts/docker-entrypoint.sh /usr/bin/docker-entrypoint.sh

View File

@@ -1,12 +0,0 @@
FROM minio/minio:latest
ENV PATH=/opt/bin:$PATH
COPY ./minio /opt/bin/minio
COPY dockerscripts/docker-entrypoint.sh /usr/bin/docker-entrypoint.sh
ENTRYPOINT ["/usr/bin/docker-entrypoint.sh"]
VOLUME ["/data"]
CMD ["minio"]

View File

@@ -1,24 +1,26 @@
FROM golang:1.21-alpine as build
FROM golang:1.22-alpine as build
ARG TARGETARCH
ARG RELEASE
ENV GOPATH /go
ENV CGO_ENABLED 0
ENV GOPATH=/go
ENV CGO_ENABLED=0
# Install curl and minisign
RUN apk add -U --no-cache ca-certificates && \
apk add -U --no-cache curl && \
go install aead.dev/minisign/cmd/minisign@v0.2.1
# Download minio binary and signature file
# Download minio binary and signature files
RUN curl -s -q https://dl.min.io/server/minio/hotfixes/linux-${TARGETARCH}/archive/minio.${RELEASE} -o /go/bin/minio && \
curl -s -q https://dl.min.io/server/minio/hotfixes/linux-${TARGETARCH}/archive/minio.${RELEASE}.minisig -o /go/bin/minio.minisig && \
curl -s -q https://dl.min.io/server/minio/hotfixes/linux-${TARGETARCH}/archive/minio.${RELEASE}.sha256sum -o /go/bin/minio.sha256sum && \
chmod +x /go/bin/minio
# Download mc binary and signature file
# Download mc binary and signature files
RUN curl -s -q https://dl.min.io/client/mc/release/linux-${TARGETARCH}/mc -o /go/bin/mc && \
curl -s -q https://dl.min.io/client/mc/release/linux-${TARGETARCH}/mc.minisig -o /go/bin/mc.minisig && \
curl -s -q https://dl.min.io/client/mc/release/linux-${TARGETARCH}/mc.sha256sum -o /go/bin/mc.sha256sum && \
chmod +x /go/bin/mc
RUN if [ "$TARGETARCH" = "amd64" ]; then \
@@ -51,9 +53,11 @@ ENV MINIO_ACCESS_KEY_FILE=access_key \
MINIO_CONFIG_ENV_FILE=config.env \
MC_CONFIG_DIR=/tmp/.mc
RUN chmod -R 777 /usr/bin
COPY --from=build /etc/ssl/certs/ca-certificates.crt /etc/ssl/certs/
COPY --from=build /go/bin/minio /usr/bin/minio
COPY --from=build /go/bin/mc /usr/bin/mc
COPY --from=build /go/bin/minio* /usr/bin/
COPY --from=build /go/bin/mc* /usr/bin/
COPY --from=build /go/bin/cur* /usr/bin/
COPY CREDITS /licenses/CREDITS

View File

@@ -1,35 +1,39 @@
FROM golang:1.21-alpine as build
FROM golang:1.22-alpine AS build
ARG TARGETARCH
ARG RELEASE
ENV GOPATH /go
ENV CGO_ENABLED 0
ENV GOPATH=/go
ENV CGO_ENABLED=0
WORKDIR /build
# Install curl and minisign
RUN apk add -U --no-cache ca-certificates && \
apk add -U --no-cache curl && \
apk add -U --no-cache bash && \
go install aead.dev/minisign/cmd/minisign@v0.2.1
# Download minio binary and signature file
# Download minio binary and signature files
RUN curl -s -q https://dl.min.io/server/minio/release/linux-${TARGETARCH}/archive/minio.${RELEASE} -o /go/bin/minio && \
curl -s -q https://dl.min.io/server/minio/release/linux-${TARGETARCH}/archive/minio.${RELEASE}.minisig -o /go/bin/minio.minisig && \
curl -s -q https://dl.min.io/server/minio/release/linux-${TARGETARCH}/archive/minio.${RELEASE}.sha256sum -o /go/bin/minio.sha256sum && \
chmod +x /go/bin/minio
# Download mc binary and signature file
# Download mc binary and signature files
RUN curl -s -q https://dl.min.io/client/mc/release/linux-${TARGETARCH}/mc -o /go/bin/mc && \
curl -s -q https://dl.min.io/client/mc/release/linux-${TARGETARCH}/mc.minisig -o /go/bin/mc.minisig && \
curl -s -q https://dl.min.io/client/mc/release/linux-${TARGETARCH}/mc.sha256sum -o /go/bin/mc.sha256sum && \
chmod +x /go/bin/mc
RUN if [ "$TARGETARCH" = "amd64" ]; then \
curl -L -s -q https://github.com/moparisthebest/static-curl/releases/latest/download/curl-${TARGETARCH} -o /go/bin/curl; \
chmod +x /go/bin/curl; \
fi
# Verify binary signature using public key "RWTx5Zr1tiHQLwG9keckT0c45M3AGeHD6IvimQHpyRywVWGbP1aVSGavRUN"
RUN minisign -Vqm /go/bin/minio -x /go/bin/minio.minisig -P RWTx5Zr1tiHQLwG9keckT0c45M3AGeHD6IvimQHpyRywVWGbP1aVSGav && \
minisign -Vqm /go/bin/mc -x /go/bin/mc.minisig -P RWTx5Zr1tiHQLwG9keckT0c45M3AGeHD6IvimQHpyRywVWGbP1aVSGav
COPY dockerscripts/download-static-curl.sh /build/download-static-curl
RUN chmod +x /build/download-static-curl && \
/build/download-static-curl
FROM registry.access.redhat.com/ubi9/ubi-micro:latest
ARG RELEASE
@@ -51,10 +55,12 @@ ENV MINIO_ACCESS_KEY_FILE=access_key \
MINIO_CONFIG_ENV_FILE=config.env \
MC_CONFIG_DIR=/tmp/.mc
RUN chmod -R 777 /usr/bin
COPY --from=build /etc/ssl/certs/ca-certificates.crt /etc/ssl/certs/
COPY --from=build /go/bin/minio /usr/bin/minio
COPY --from=build /go/bin/mc /usr/bin/mc
COPY --from=build /go/bin/cur* /usr/bin/
COPY --from=build /go/bin/minio* /usr/bin/
COPY --from=build /go/bin/mc* /usr/bin/
COPY --from=build /go/bin/curl* /usr/bin/
COPY CREDITS /licenses/CREDITS
COPY LICENSE /licenses/LICENSE

View File

@@ -1,21 +1,28 @@
FROM golang:1.21-alpine as build
FROM golang:1.22-alpine AS build
ARG TARGETARCH
ARG RELEASE
ENV GOPATH /go
ENV CGO_ENABLED 0
ENV GOPATH=/go
ENV CGO_ENABLED=0
# Install curl and minisign
RUN apk add -U --no-cache ca-certificates && \
apk add -U --no-cache curl && \
go install aead.dev/minisign/cmd/minisign@v0.2.1
# Download minio binary and signature file
# Download minio binary and signature files
RUN curl -s -q https://dl.min.io/server/minio/release/linux-${TARGETARCH}/archive/minio.${RELEASE}.fips -o /go/bin/minio && \
curl -s -q https://dl.min.io/server/minio/release/linux-${TARGETARCH}/archive/minio.${RELEASE}.fips.minisig -o /go/bin/minio.minisig && \
curl -s -q https://dl.min.io/server/minio/release/linux-${TARGETARCH}/archive/minio.${RELEASE}.fips.sha256sum -o /go/bin/minio.sha256sum && \
chmod +x /go/bin/minio
# Download mc binary and signature files
RUN curl -s -q https://dl.min.io/client/mc/release/linux-${TARGETARCH}/mc.fips -o /go/bin/mc && \
curl -s -q https://dl.min.io/client/mc/release/linux-${TARGETARCH}/mc.fips.minisig -o /go/bin/mc.minisig && \
curl -s -q https://dl.min.io/client/mc/release/linux-${TARGETARCH}/mc.fips.sha256sum -o /go/bin/mc.sha256sum && \
chmod +x /go/bin/mc
RUN if [ "$TARGETARCH" = "amd64" ]; then \
curl -L -s -q https://github.com/moparisthebest/static-curl/releases/latest/download/curl-${TARGETARCH} -o /go/bin/curl; \
chmod +x /go/bin/curl; \
@@ -44,8 +51,11 @@ ENV MINIO_ACCESS_KEY_FILE=access_key \
MINIO_UPDATE_MINISIGN_PUBKEY="RWTx5Zr1tiHQLwG9keckT0c45M3AGeHD6IvimQHpyRywVWGbP1aVSGav" \
MINIO_CONFIG_ENV_FILE=config.env
RUN chmod -R 777 /usr/bin
COPY --from=build /etc/ssl/certs/ca-certificates.crt /etc/ssl/certs/
COPY --from=build /go/bin/minio /usr/bin/minio
COPY --from=build /go/bin/minio* /usr/bin/
COPY --from=build /go/bin/mc* /usr/bin/
COPY --from=build /go/bin/cur* /usr/bin/
COPY CREDITS /licenses/CREDITS

View File

@@ -1,24 +1,26 @@
FROM golang:1.21-alpine as build
FROM golang:1.22-alpine AS build
ARG TARGETARCH
ARG RELEASE
ENV GOPATH /go
ENV CGO_ENABLED 0
ENV GOPATH=/go
ENV CGO_ENABLED=0
# Install curl and minisign
RUN apk add -U --no-cache ca-certificates && \
apk add -U --no-cache curl && \
go install aead.dev/minisign/cmd/minisign@v0.2.1
# Download minio binary and signature file
# Download minio binary and signature files
RUN curl -s -q https://dl.min.io/server/minio/release/linux-${TARGETARCH}/archive/minio.${RELEASE} -o /go/bin/minio && \
curl -s -q https://dl.min.io/server/minio/release/linux-${TARGETARCH}/archive/minio.${RELEASE}.minisig -o /go/bin/minio.minisig && \
curl -s -q https://dl.min.io/server/minio/release/linux-${TARGETARCH}/archive/minio.${RELEASE}.sha256sum -o /go/bin/minio.sha256sum && \
chmod +x /go/bin/minio
# Download mc binary and signature file
# Download mc binary and signature files
RUN curl -s -q https://dl.min.io/client/mc/release/linux-${TARGETARCH}/mc -o /go/bin/mc && \
curl -s -q https://dl.min.io/client/mc/release/linux-${TARGETARCH}/mc.minisig -o /go/bin/mc.minisig && \
curl -s -q https://dl.min.io/client/mc/release/linux-${TARGETARCH}/mc.sha256sum -o /go/bin/mc.sha256sum && \
chmod +x /go/bin/mc
RUN if [ "$TARGETARCH" = "amd64" ]; then \
@@ -51,9 +53,11 @@ ENV MINIO_ACCESS_KEY_FILE=access_key \
MINIO_CONFIG_ENV_FILE=config.env \
MC_CONFIG_DIR=/tmp/.mc
RUN chmod -R 777 /usr/bin
COPY --from=build /etc/ssl/certs/ca-certificates.crt /etc/ssl/certs/
COPY --from=build /go/bin/minio /usr/bin/minio
COPY --from=build /go/bin/mc /usr/bin/mc
COPY --from=build /go/bin/minio* /usr/bin/
COPY --from=build /go/bin/mc* /usr/bin/
COPY --from=build /go/bin/cur* /usr/bin/
COPY CREDITS /licenses/CREDITS
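These release Dockerfiles are driven by the `RELEASE` build-arg (and `TARGETARCH`, which BuildKit populates automatically); a sketch of a one-off local build, assuming the file is `Dockerfile.release` and using the release tag from the top of this compare as an example:
```sh
DOCKER_BUILDKIT=1 docker build \
	--build-arg RELEASE=RELEASE.2024-05-07T06-41-25Z \
	-t minio:release-test -f Dockerfile.release .
```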

View File

@@ -24,7 +24,7 @@ help: ## print this help
getdeps: ## fetch necessary dependencies
@mkdir -p ${GOPATH}/bin
@echo "Installing golangci-lint" && curl -sSfL https://raw.githubusercontent.com/golangci/golangci-lint/master/install.sh | sh -s -- -b $(GOLANGCI_DIR)
@echo "Installing msgp" && go install -v github.com/tinylib/msgp@v1.1.10-0.20240227114326-6d6f813fff1b
@echo "Installing msgp" && go install -v github.com/tinylib/msgp@v1.2.3-0.20241022140105-4558fbf3a223
@echo "Installing stringer" && go install -v golang.org/x/tools/cmd/stringer@latest
crosscompile: ## cross compile minio
@@ -34,20 +34,23 @@ verifiers: lint check-gen
check-gen: ## check for updated autogenerated files
@go generate ./... >/dev/null
@go mod tidy -compat=1.21
@(! git diff --name-only | grep '_gen.go$$') || (echo "Non-committed changes detected in auto-generated code, please commit them to proceed." && false)
@(! git diff --name-only | grep 'go.sum') || (echo "Non-committed changes detected in auto-generated go.sum, please commit them to proceed." && false)
lint: getdeps ## runs golangci-lint suite of linters
@echo "Running $@ check"
@$(GOLANGCI) run --build-tags kqueue --timeout=10m --config ./.golangci.yml
@command typos && typos ./ || echo "typos binary is not found.. skipping.."
lint-fix: getdeps ## runs golangci-lint suite of linters with automatic fixes
@echo "Running $@ check"
@$(GOLANGCI) run --build-tags kqueue --timeout=10m --config ./.golangci.yml --fix
check: test
test: verifiers build build-debugging ## builds minio, runs linters, tests
test: verifiers build ## builds minio, runs linters, tests
@echo "Running unit tests"
@MINIO_API_REQUESTS_MAX=10000 CGO_ENABLED=0 go test -v -tags kqueue ./...
@MINIO_API_REQUESTS_MAX=10000 CGO_ENABLED=0 go test -v -tags kqueue,dev ./...
test-root-disable: install-race
@echo "Running minio root lockdown tests"
@@ -57,12 +60,17 @@ test-ilm: install-race
@echo "Running ILM tests"
@env bash $(PWD)/docs/bucket/replication/setup_ilm_expiry_replication.sh
test-pbac: install-race
@echo "Running bucket policies tests"
@env bash $(PWD)/docs/iam/policies/pbac-tests.sh
test-decom: install-race
@echo "Running minio decom tests"
@env bash $(PWD)/docs/distributed/decom.sh
@env bash $(PWD)/docs/distributed/decom-encrypted.sh
@env bash $(PWD)/docs/distributed/decom-encrypted-sse-s3.sh
@env bash $(PWD)/docs/distributed/decom-compressed-sse-s3.sh
@env bash $(PWD)/docs/distributed/decom-encrypted-kes.sh
test-versioning: install-race
@echo "Running minio versioning tests"
@@ -79,16 +87,24 @@ test-race: verifiers build ## builds minio, runs linters, tests (race)
@echo "Running unit tests under -race"
@(env bash $(PWD)/buildscripts/race.sh)
test-iam: build ## verify IAM (external IDP, etcd backends)
test-iam: install-race ## verify IAM (external IDP, etcd backends)
@echo "Running tests for IAM (external IDP, etcd backends)"
@MINIO_API_REQUESTS_MAX=10000 CGO_ENABLED=0 go test -tags kqueue -v -run TestIAM* ./cmd
@MINIO_API_REQUESTS_MAX=10000 CGO_ENABLED=0 go test -timeout 15m -tags kqueue,dev -v -run TestIAM* ./cmd
@echo "Running tests for IAM (external IDP, etcd backends) with -race"
@MINIO_API_REQUESTS_MAX=10000 GORACE=history_size=7 CGO_ENABLED=1 go test -race -tags kqueue -v -run TestIAM* ./cmd
@MINIO_API_REQUESTS_MAX=10000 GORACE=history_size=7 CGO_ENABLED=1 go test -timeout 15m -race -tags kqueue,dev -v -run TestIAM* ./cmd
test-iam-ldap-upgrade-import: build ## verify IAM (external LDAP IDP)
test-iam-ldap-upgrade-import: install-race ## verify IAM (external LDAP IDP)
@echo "Running upgrade tests for IAM (LDAP backend)"
@env bash $(PWD)/buildscripts/minio-iam-ldap-upgrade-import-test.sh
test-iam-import-with-missing-entities: install-race ## test import of external iam config with missing entities
@echo "Test IAM import configurations with missing entities"
@env bash $(PWD)/docs/distributed/iam-import-with-missing-entities.sh
test-iam-import-with-openid: install-race
@echo "Test IAM import configurations with openid"
@env bash $(PWD)/docs/distributed/iam-import-with-openid.sh
test-sio-error:
@(env bash $(PWD)/docs/bucket/replication/sio-error.sh)
@@ -101,7 +117,10 @@ test-replication-3site:
test-delete-replication:
@(env bash $(PWD)/docs/bucket/replication/delete-replication.sh)
test-replication: install-race test-replication-2site test-replication-3site test-delete-replication test-sio-error ## verify multi site replication
test-delete-marker-proxying:
@(env bash $(PWD)/docs/bucket/replication/test_del_marker_proxying.sh)
test-replication: install-race test-replication-2site test-replication-3site test-delete-replication test-sio-error test-delete-marker-proxying ## verify multi site replication
@echo "Running tests for replicating three sites"
test-site-replication-ldap: install-race ## verify automatic site replication
@@ -122,37 +141,36 @@ test-site-replication-minio: install-race ## verify automatic site replication
@echo "Running tests for automatic site replication of SSE-C objects with compression enabled for site"
@(env bash $(PWD)/docs/site-replication/run-ssec-object-replication-with-compression.sh)
verify: ## verify minio various setups
test-multipart: install-race ## test multipart
@echo "Test multipart behavior when part files are missing"
@(env bash $(PWD)/buildscripts/multipart-quorum-test.sh)
verify: install-race ## verify minio various setups
@echo "Verifying build with race"
@GORACE=history_size=7 CGO_ENABLED=1 go build -race -tags kqueue -trimpath --ldflags "$(LDFLAGS)" -o $(PWD)/minio 1>/dev/null
@(env bash $(PWD)/buildscripts/verify-build.sh)
verify-healing: ## verify healing and replacing disks with minio binary
verify-healing: install-race ## verify healing and replacing disks with minio binary
@echo "Verify healing build with race"
@GORACE=history_size=7 CGO_ENABLED=1 go build -race -tags kqueue -trimpath --ldflags "$(LDFLAGS)" -o $(PWD)/minio 1>/dev/null
@(env bash $(PWD)/buildscripts/verify-healing.sh)
@(env bash $(PWD)/buildscripts/verify-healing-empty-erasure-set.sh)
@(env bash $(PWD)/buildscripts/heal-inconsistent-versions.sh)
verify-healing-with-root-disks: ## verify healing root disks
verify-healing-with-root-disks: install-race ## verify healing root disks
@echo "Verify healing with root drives"
@GORACE=history_size=7 CGO_ENABLED=1 go build -race -tags kqueue -trimpath --ldflags "$(LDFLAGS)" -o $(PWD)/minio 1>/dev/null
@(env bash $(PWD)/buildscripts/verify-healing-with-root-disks.sh)
verify-healing-with-rewrite: ## verify healing to rewrite old xl.meta -> new xl.meta
verify-healing-with-rewrite: install-race ## verify healing to rewrite old xl.meta -> new xl.meta
@echo "Verify healing with rewrite"
@GORACE=history_size=7 CGO_ENABLED=1 go build -race -tags kqueue -trimpath --ldflags "$(LDFLAGS)" -o $(PWD)/minio 1>/dev/null
@(env bash $(PWD)/buildscripts/rewrite-old-new.sh)
verify-healing-inconsistent-versions: ## verify resolving inconsistent versions
verify-healing-inconsistent-versions: install-race ## verify resolving inconsistent versions
@echo "Verify resolving inconsistent versions build with race"
@GORACE=history_size=7 CGO_ENABLED=1 go build -race -tags kqueue -trimpath --ldflags "$(LDFLAGS)" -o $(PWD)/minio 1>/dev/null
@(env bash $(PWD)/buildscripts/resolve-right-versions.sh)
build-debugging:
@(env bash $(PWD)/docs/debugging/build.sh)
build: checks ## builds minio to $(PWD)
build: checks build-debugging ## builds minio to $(PWD)
@echo "Building minio binary to './minio'"
@CGO_ENABLED=0 go build -tags kqueue -trimpath --ldflags "$(LDFLAGS)" -o $(PWD)/minio 1>/dev/null
@@ -162,9 +180,9 @@ hotfix-vars:
$(eval VERSION := $(shell git describe --tags --abbrev=0).hotfix.$(shell git rev-parse --short HEAD))
hotfix: hotfix-vars clean install ## builds minio binary with hotfix tags
@wget -q -c https://github.com/minio/pkger/releases/download/v2.2.9/pkger_2.2.9_linux_amd64.deb
@wget -q -c https://github.com/minio/pkger/releases/download/v2.3.1/pkger_2.3.1_linux_amd64.deb
@wget -q -c https://raw.githubusercontent.com/minio/minio-service/v1.0.1/linux-systemd/distributed/minio.service
@sudo apt install ./pkger_2.2.9_linux_amd64.deb --yes
@sudo apt install ./pkger_2.3.1_linux_amd64.deb --yes
@mkdir -p minio-release/$(GOOS)-$(GOARCH)/archive
@cp -af ./minio minio-release/$(GOOS)-$(GOARCH)/minio
@cp -af ./minio minio-release/$(GOOS)-$(GOARCH)/minio.$(VERSION)
@@ -191,15 +209,15 @@ docker: build ## builds minio docker container
@echo "Building minio docker image '$(TAG)'"
@docker build -q --no-cache -t $(TAG) . -f Dockerfile
install-race: checks ## builds minio to $(PWD)
install-race: checks build-debugging ## builds minio to $(PWD)
@echo "Building minio binary with -race to './minio'"
@GORACE=history_size=7 CGO_ENABLED=1 go build -tags kqueue -race -trimpath --ldflags "$(LDFLAGS)" -o $(PWD)/minio 1>/dev/null
@GORACE=history_size=7 CGO_ENABLED=1 go build -tags kqueue,dev -race -trimpath --ldflags "$(LDFLAGS)" -o $(PWD)/minio 1>/dev/null
@echo "Installing minio binary with -race to '$(GOPATH)/bin/minio'"
@mkdir -p $(GOPATH)/bin && cp -f $(PWD)/minio $(GOPATH)/bin/minio
@mkdir -p $(GOPATH)/bin && cp -af $(PWD)/minio $(GOPATH)/bin/minio
install: build ## builds minio and installs it to $GOPATH/bin.
@echo "Installing minio binary to '$(GOPATH)/bin/minio'"
@mkdir -p $(GOPATH)/bin && cp -f $(PWD)/minio $(GOPATH)/bin/minio
@mkdir -p $(GOPATH)/bin && cp -af $(PWD)/minio $(GOPATH)/bin/minio
@echo "Installation successful. To learn more, try \"minio --help\"."
clean: ## cleanup all generated assets

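The targets added or rewired above are all self-contained entry points; invoking the new ones looks like:
```sh
make test-pbac                              # bucket policy (PBAC) tests
make test-multipart                         # multipart behavior with missing part files
make test-iam-import-with-missing-entities  # IAM import with missing groups/entities
make test-iam-import-with-openid            # IAM import with openid configuration
```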
View File

@@ -93,7 +93,6 @@ The following table lists supported architectures. Replace the `wget` URL with t
| 64-bit Intel/AMD | <https://dl.min.io/server/minio/release/linux-amd64/minio> |
| 64-bit ARM | <https://dl.min.io/server/minio/release/linux-arm64/minio> |
| 64-bit PowerPC LE (ppc64le) | <https://dl.min.io/server/minio/release/linux-ppc64le/minio> |
| IBM Z-Series (S390X) | <https://dl.min.io/server/minio/release/linux-s390x/minio> |
The MinIO deployment starts using default root credentials `minioadmin:minioadmin`. You can test the deployment using the MinIO Console, an embedded web-based object browser built into MinIO Server. Point a web browser running on the host machine to <http://127.0.0.1:9000> and log in with the root credentials. You can use the Browser to create buckets, upload objects, and browse the contents of the MinIO server.
@@ -210,10 +209,6 @@ For deployments behind a load balancer, proxy, or ingress rule where the MinIO h
For example, consider a MinIO deployment behind a proxy `https://minio.example.net`, `https://console.minio.example.net` with rules for forwarding traffic on port :9000 and :9001 to MinIO and the MinIO Console respectively on the internal network. Set `MINIO_BROWSER_REDIRECT_URL` to `https://console.minio.example.net` to ensure the browser receives a valid reachable URL.
Similarly, if your TLS certificates do not have the IP SAN for the MinIO server host, the MinIO Console may fail to validate the connection to the server. Use the `MINIO_SERVER_URL` environment variable and specify the proxy-accessible hostname of the MinIO server to allow the Console to use the MinIO server API using the TLS certificate.
For example: `export MINIO_SERVER_URL="https://minio.example.net"`
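Putting the two settings together for the proxy layout described above (hostnames are illustrative):
```sh
export MINIO_BROWSER_REDIRECT_URL="https://console.minio.example.net"
export MINIO_SERVER_URL="https://minio.example.net"
```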
| Dashboard | Creating a bucket |
| ------------- | ------------- |
| ![Dashboard](https://github.com/minio/minio/blob/master/docs/screenshots/pic1.png?raw=true) | ![Dashboard](https://github.com/minio/minio/blob/master/docs/screenshots/pic2.png?raw=true) |

buildscripts/checkdeps.sh Normal file → Executable file
View File

View File

@@ -32,6 +32,7 @@ fi
set +e
export MC_HOST_minioadm=http://minioadmin:minioadmin@localhost:9100/
./mc ready minioadm
./mc ls minioadm/
@@ -56,7 +57,7 @@ done
set +e
sleep 10
./mc ready minioadm/
./mc ls minioadm/
if [ $? -ne 0 ]; then
@@ -81,11 +82,12 @@ minio server --address 127.0.0.1:9003 "http://127.0.0.1:9003/tmp/multisiteb/data
minio server --address 127.0.0.1:9004 "http://127.0.0.1:9003/tmp/multisiteb/data/disterasure/xl{1...4}" \
"http://127.0.0.1:9004/tmp/multisiteb/data/disterasure/xl{5...8}" >/tmp/siteb_2.log 2>&1 &
sleep 20s
export MC_HOST_sitea=http://minioadmin:minioadmin@127.0.0.1:9001
export MC_HOST_siteb=http://minioadmin:minioadmin@127.0.0.1:9004
./mc ready sitea
./mc ready siteb
./mc admin replicate add sitea siteb
./mc admin user add sitea foobar foo12345
@@ -109,11 +111,12 @@ minio server --address 127.0.0.1:9003 "http://127.0.0.1:9003/tmp/multisiteb/data
minio server --address 127.0.0.1:9004 "http://127.0.0.1:9003/tmp/multisiteb/data/disterasure/xl{1...4}" \
"http://127.0.0.1:9004/tmp/multisiteb/data/disterasure/xl{5...8}" >/tmp/siteb_2.log 2>&1 &
sleep 20s
export MC_HOST_sitea=http://foobar:foo12345@127.0.0.1:9001
export MC_HOST_siteb=http://foobar:foo12345@127.0.0.1:9004
./mc ready sitea
./mc ready siteb
./mc admin user add sitea foobar-admin foo12345
sleep 2s

View File

@@ -56,6 +56,7 @@ create_iam_content_in_old_minio() {
set -x
mc alias set old-minio http://localhost:9000 minioadmin minioadmin
mc ready old-minio
mc idp ldap add old-minio \
server_addr=localhost:1389 \
server_insecure=on \
@@ -87,6 +88,7 @@ import_iam_content_in_new_minio() {
set -x
mc alias set new-minio http://localhost:9000 minioadmin minioadmin
echo "BEFORE IMPORT mappings:"
mc ready new-minio
mc idp ldap policy entities new-minio
mc admin cluster iam import new-minio ./old-minio-iam-info.zip
echo "AFTER IMPORT mappings:"

buildscripts/minio-upgrade.sh Normal file → Executable file
View File

@@ -4,10 +4,22 @@ trap 'cleanup $LINENO' ERR
# shellcheck disable=SC2120
cleanup() {
MINIO_VERSION=dev docker-compose \
MINIO_VERSION=dev /tmp/gopath/bin/docker-compose \
-f "buildscripts/upgrade-tests/compose.yml" \
rm -s -f
down || true
MINIO_VERSION=dev /tmp/gopath/bin/docker-compose \
-f "buildscripts/upgrade-tests/compose.yml" \
rm || true
for volume in $(docker volume ls -q | grep upgrade); do
docker volume rm ${volume} || true
done
docker volume prune -f
docker system prune -f || true
docker volume prune -f || true
docker volume rm $(docker volume ls -q -f dangling=true) || true
}
verify_checksum_after_heal() {
@@ -60,6 +72,8 @@ __init__() {
go install github.com/docker/compose/v2/cmd@latest
mv -v /tmp/gopath/bin/cmd /tmp/gopath/bin/docker-compose
cleanup
TAG=minio/minio:dev make docker
MINIO_VERSION=RELEASE.2019-12-19T22-52-26Z docker-compose \
@@ -77,11 +91,11 @@ __init__() {
curl -s http://127.0.0.1:9000/minio-test/to-read/hosts | sha256sum
MINIO_VERSION=dev docker-compose -f "buildscripts/upgrade-tests/compose.yml" stop
MINIO_VERSION=dev /tmp/gopath/bin/docker-compose -f "buildscripts/upgrade-tests/compose.yml" stop
}
main() {
MINIO_VERSION=dev docker-compose -f "buildscripts/upgrade-tests/compose.yml" up -d --build
MINIO_VERSION=dev /tmp/gopath/bin/docker-compose -f "buildscripts/upgrade-tests/compose.yml" up -d --build
add_alias

View File

@@ -0,0 +1,126 @@
#!/bin/bash
if [ -n "$TEST_DEBUG" ]; then
set -x
fi
WORK_DIR="$PWD/.verify-$RANDOM"
MINIO_CONFIG_DIR="$WORK_DIR/.minio"
MINIO=("$PWD/minio" --config-dir "$MINIO_CONFIG_DIR" server)
if [ ! -x "$PWD/minio" ]; then
echo "minio executable binary not found in current directory"
exit 1
fi
trap 'catch $LINENO' ERR
function purge() {
rm -rf "$1"
}
# shellcheck disable=SC2120
catch() {
if [ $# -ne 0 ]; then
echo "error on line $1"
fi
echo "Cleaning up instances of MinIO"
pkill minio || true
pkill -9 minio || true
purge "$WORK_DIR"
if [ $# -ne 0 ]; then
exit $#
fi
}
catch
function start_minio_10drive() {
start_port=$1
export MINIO_ROOT_USER=minio
export MINIO_ROOT_PASSWORD=minio123
export MC_HOST_minio="http://minio:minio123@127.0.0.1:${start_port}/"
unset MINIO_KMS_AUTO_ENCRYPTION # do not auto-encrypt objects
export MINIO_CI_CD=1
mkdir ${WORK_DIR}
C_PWD=${PWD}
if [ ! -x "$PWD/mc" ]; then
MC_BUILD_DIR="mc-$RANDOM"
if ! git clone --quiet https://github.com/minio/mc "$MC_BUILD_DIR"; then
echo "failed to download https://github.com/minio/mc"
purge "${MC_BUILD_DIR}"
exit 1
fi
(cd "${MC_BUILD_DIR}" && go build -o "$C_PWD/mc")
# remove mc source.
purge "${MC_BUILD_DIR}"
fi
"${MINIO[@]}" --address ":$start_port" "${WORK_DIR}/disk{1...10}" >"${WORK_DIR}/server1.log" 2>&1 &
pid=$!
disown $pid
sleep 5
if ! ps -p ${pid} 1>&2 >/dev/null; then
echo "server1 log:"
cat "${WORK_DIR}/server1.log"
echo "FAILED"
purge "$WORK_DIR"
exit 1
fi
"${PWD}/mc" mb --with-versioning minio/bucket
export AWS_ACCESS_KEY_ID=minio
export AWS_SECRET_ACCESS_KEY=minio123
aws --endpoint-url http://localhost:"$start_port" s3api create-multipart-upload --bucket bucket --key obj-1 >upload-id.json
uploadId=$(jq -r '.UploadId' upload-id.json)
truncate -s 5MiB file-5mib
for i in {1..2}; do
aws --endpoint-url http://localhost:"$start_port" s3api upload-part \
--upload-id "$uploadId" --bucket bucket --key obj-1 \
--part-number "$i" --body ./file-5mib
done
for i in {1..6}; do
find ${WORK_DIR}/disk${i}/.minio.sys/multipart/ -type f -name "part.1" -delete
done
cat <<EOF >parts.json
{
"Parts": [
{
"PartNumber": 1,
"ETag": "5f363e0e58a95f06cbe9bbc662c5dfb6"
},
{
"PartNumber": 2,
"ETag": "5f363e0e58a95f06cbe9bbc662c5dfb6"
}
]
}
EOF
err=$(aws --endpoint-url http://localhost:"$start_port" s3api complete-multipart-upload --upload-id "$uploadId" --bucket bucket --key obj-1 --multipart-upload file://./parts.json 2>&1)
rv=$?
if [ $rv -eq 0 ]; then
echo "Failed to receive an error"
exit 1
fi
echo "Received an error during complete-multipart as expected: $err"
}
function main() {
start_port=$(shuf -i 10000-65000 -n 1)
start_minio_10drive ${start_port}
}
main "$@"
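This script is what the new `test-multipart` Makefile target runs; a sketch of a local invocation, assuming a locally built `./minio` plus the `aws` CLI and `jq` that the script body requires:
```sh
make install-race   # builds ./minio (race-enabled) into the repo root
env bash buildscripts/multipart-quorum-test.sh
```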

View File

@@ -45,7 +45,8 @@ function verify_rewrite() {
"${MINIO_OLD[@]}" --address ":$start_port" "${WORK_DIR}/xl{1...16}" >"${WORK_DIR}/server1.log" 2>&1 &
pid=$!
disown $pid
sleep 10
"${WORK_DIR}/mc" ready minio/
if ! ps -p ${pid} 1>&2 >/dev/null; then
echo "server1 log:"
@@ -77,7 +78,8 @@ function verify_rewrite() {
"${MINIO[@]}" --address ":$start_port" "${WORK_DIR}/xl{1...16}" >"${WORK_DIR}/server1.log" 2>&1 &
pid=$!
disown $pid
sleep 10
"${WORK_DIR}/mc" ready minio/
if ! ps -p ${pid} 1>&2 >/dev/null; then
echo "server1 log:"
@@ -87,14 +89,12 @@ function verify_rewrite() {
exit 1
fi
go install -v github.com/minio/minio/docs/debugging/s3-check-md5@latest
if ! s3-check-md5 \
if ! ./s3-check-md5 \
-debug \
-versions \
-access-key minio \
-secret-key minio123 \
-endpoint http://127.0.0.1:${start_port}/ 2>&1 | grep INTACT; then
-endpoint "http://127.0.0.1:${start_port}/" 2>&1 | grep INTACT; then
echo "server1 log:"
cat "${WORK_DIR}/server1.log"
echo "FAILED"
@@ -114,7 +114,7 @@ function verify_rewrite() {
go run ./buildscripts/heal-manual.go "127.0.0.1:${start_port}" "minio" "minio123"
sleep 1
if ! s3-check-md5 \
if ! ./s3-check-md5 \
-debug \
-versions \
-access-key minio \

View File

@@ -1,5 +1,3 @@
version: '3.7'
# Settings and configurations that are common for all containers
x-minio-common: &minio-common
image: minio/minio:${MINIO_VERSION}
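Dropping the top-level `version:` key tracks Compose v2, which ignores it and prints a warning; the file can be sanity-checked with (sketch):
```sh
docker compose -f buildscripts/upgrade-tests/compose.yml config --quiet
```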

View File

@@ -15,13 +15,14 @@ WORK_DIR="$PWD/.verify-$RANDOM"
export MINT_MODE=core
export MINT_DATA_DIR="$WORK_DIR/data"
export SERVER_ENDPOINT="127.0.0.1:9000"
export MC_HOST_verify="http://minio:minio123@${SERVER_ENDPOINT}/"
export MC_HOST_verify_ipv6="http://minio:minio123@[::1]:9000/"
export ACCESS_KEY="minio"
export SECRET_KEY="minio123"
export ENABLE_HTTPS=0
export GO111MODULE=on
export GOGC=25
export ENABLE_ADMIN=1
export MINIO_CI_CD=1
MINIO_CONFIG_DIR="$WORK_DIR/.minio"
@@ -36,18 +37,21 @@ function start_minio_fs() {
export MINIO_ROOT_USER=$ACCESS_KEY
export MINIO_ROOT_PASSWORD=$SECRET_KEY
"${MINIO[@]}" server "${WORK_DIR}/fs-disk" >"$WORK_DIR/fs-minio.log" 2>&1 &
sleep 10
"${WORK_DIR}/mc" ready verify
}
function start_minio_erasure() {
"${MINIO[@]}" server "${WORK_DIR}/erasure-disk1" "${WORK_DIR}/erasure-disk2" "${WORK_DIR}/erasure-disk3" "${WORK_DIR}/erasure-disk4" >"$WORK_DIR/erasure-minio.log" 2>&1 &
sleep 15
"${WORK_DIR}/mc" ready verify
}
function start_minio_erasure_sets() {
export MINIO_ENDPOINTS="${WORK_DIR}/erasure-disk-sets{1...32}"
"${MINIO[@]}" server >"$WORK_DIR/erasure-minio-sets.log" 2>&1 &
sleep 15
"${WORK_DIR}/mc" ready verify
}
function start_minio_pool_erasure_sets() {
@@ -57,7 +61,7 @@ function start_minio_pool_erasure_sets() {
"${MINIO[@]}" server --address ":9000" >"$WORK_DIR/pool-minio-9000.log" 2>&1 &
"${MINIO[@]}" server --address ":9001" >"$WORK_DIR/pool-minio-9001.log" 2>&1 &
sleep 40
"${WORK_DIR}/mc" ready verify
}
function start_minio_pool_erasure_sets_ipv6() {
@@ -67,7 +71,7 @@ function start_minio_pool_erasure_sets_ipv6() {
"${MINIO[@]}" server --address="[::1]:9000" >"$WORK_DIR/pool-minio-ipv6-9000.log" 2>&1 &
"${MINIO[@]}" server --address="[::1]:9001" >"$WORK_DIR/pool-minio-ipv6-9001.log" 2>&1 &
sleep 40
"${WORK_DIR}/mc" ready verify_ipv6
}
function start_minio_dist_erasure() {
@@ -78,7 +82,7 @@ function start_minio_dist_erasure() {
"${MINIO[@]}" server --address ":900${i}" >"$WORK_DIR/dist-minio-900${i}.log" 2>&1 &
done
sleep 40
"${WORK_DIR}/mc" ready verify
}
function run_test_fs() {
@@ -222,7 +226,7 @@ function __init__() {
exit 1
fi
(cd "${MC_BUILD_DIR}" && go build -o "$WORK_DIR/mc")
(cd "${MC_BUILD_DIR}" && go build -o "${WORK_DIR}/mc")
# remove mc source.
purge "${MC_BUILD_DIR}"
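The recurring change in these scripts is swapping fixed `sleep`s for `mc ready`, which polls the target until it reports ready instead of hoping a delay was long enough; the pattern in isolation (sketch):
```sh
# alias supplied via environment, as the scripts above do
export MC_HOST_verify="http://minio:minio123@127.0.0.1:9000/"
./mc ready verify   # blocks until the deployment answers ready
```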

View File

@@ -19,7 +19,7 @@ function start_minio_3_node() {
export MINIO_ERASURE_SET_DRIVE_COUNT=6
export MINIO_CI_CD=1
start_port=$2
start_port=$1
args=""
for i in $(seq 1 3); do
args="$args http://127.0.0.1:$((start_port + i))${WORK_DIR}/$i/1/ http://127.0.0.1:$((start_port + i))${WORK_DIR}/$i/2/ http://127.0.0.1:$((start_port + i))${WORK_DIR}/$i/3/ http://127.0.0.1:$((start_port + i))${WORK_DIR}/$i/4/ http://127.0.0.1:$((start_port + i))${WORK_DIR}/$i/5/ http://127.0.0.1:$((start_port + i))${WORK_DIR}/$i/6/"
@@ -37,40 +37,48 @@ function start_minio_3_node() {
pid3=$!
disown $pid3
sleep "$1"
export MC_HOST_myminio="http://minio:minio123@127.0.0.1:$((start_port + 1))"
timeout 15m /tmp/mc ready myminio || fail
# Wait for all drives to be online and formatted
while [ $(/tmp/mc admin info --json myminio | jq '.info.servers[].drives[].state | select(. != "ok")' | wc -l) -gt 0 ]; do sleep 1; done
# Wait for all drives to be healed
while [ $(/tmp/mc admin info --json myminio | jq '.info.servers[].drives[].healing | select(. != null) | select(. == true)' | wc -l) -gt 0 ]; do sleep 1; done
# Wait for Status: in MinIO output
while true; do
rv=$(check_online)
if [ "$rv" != "1" ]; then
# success
break
fi
# Check if we should retry
retry=$((retry + 1))
if [ $retry -le 20 ]; then
sleep 5
continue
fi
# Failure
fail
done
if ! ps -p $pid1 1>&2 >/dev/null; then
echo "server1 log:"
cat "${WORK_DIR}/dist-minio-server1.log"
echo "FAILED"
purge "$WORK_DIR"
exit 1
echo "minio-server-1 is not running." && fail
fi
if ! ps -p $pid2 1>&2 >/dev/null; then
echo "server2 log:"
cat "${WORK_DIR}/dist-minio-server2.log"
echo "FAILED"
purge "$WORK_DIR"
exit 1
echo "minio-server-2 is not running." && fail
fi
if ! ps -p $pid3 1>&2 >/dev/null; then
echo "server3 log:"
cat "${WORK_DIR}/dist-minio-server3.log"
echo "FAILED"
purge "$WORK_DIR"
exit 1
echo "minio-server-3 is not running." && fail
fi
if ! pkill minio; then
for i in $(seq 1 3); do
echo "server$i log:"
cat "${WORK_DIR}/dist-minio-server$i.log"
done
echo "FAILED"
purge "$WORK_DIR"
exit 1
fail
fi
sleep 1
@@ -82,14 +90,24 @@ function start_minio_3_node() {
fi
}
function fail() {
for i in $(seq 1 3); do
echo "server$i log:"
cat "${WORK_DIR}/dist-minio-server$i.log"
done
echo "FAILED"
purge "$WORK_DIR"
exit 1
}
function check_online() {
if ! grep -q 'Status:' ${WORK_DIR}/dist-minio-*.log; then
if ! grep -q 'API:' ${WORK_DIR}/dist-minio-*.log; then
echo "1"
fi
}
function purge() {
rm -rf "$1"
echo rm -rf "$1"
}
function __init__() {
@@ -99,10 +117,15 @@ function __init__() {
## version is purposefully set to '3' for minio to migrate configuration file
echo '{"version": "3", "credential": {"accessKey": "minio", "secretKey": "minio123"}, "region": "us-east-1"}' >"$MINIO_CONFIG_DIR/config.json"
if [ ! -f /tmp/mc ]; then
wget --quiet -O /tmp/mc https://dl.minio.io/client/mc/release/linux-amd64/mc &&
chmod +x /tmp/mc
fi
}
function perform_test() {
start_minio_3_node 120 $2
start_minio_3_node $2
echo "Testing Distributed Erasure setup healing of drives"
echo "Remove the contents of the disks belonging to '${1}' erasure set"
@@ -110,19 +133,7 @@ function perform_test() {
rm -rf ${WORK_DIR}/${1}/*/
set -x
start_minio_3_node 120 $2
rv=$(check_online)
if [ "$rv" == "1" ]; then
for i in $(seq 1 3); do
echo "server$i log:"
cat "${WORK_DIR}/dist-minio-server$i.log"
done
pkill -9 minio
echo "FAILED"
purge "$WORK_DIR"
exit 1
fi
start_minio_3_node $2
}
function main() {

View File

@@ -15,6 +15,10 @@ MINIO=("$PWD/minio" --config-dir "$MINIO_CONFIG_DIR" server)
GOPATH=/tmp/gopath
function start_minio_3_node() {
for i in $(seq 1 3); do
rm "${WORK_DIR}/dist-minio-server$i.log"
done
export MINIO_ROOT_USER=minio
export MINIO_ROOT_PASSWORD=minio123
export MINIO_ERASURE_SET_DRIVE_COUNT=6
@@ -22,7 +26,7 @@ function start_minio_3_node() {
first_time=$(find ${WORK_DIR}/ | grep format.json | wc -l)
start_port=$2
start_port=$1
args=""
for d in $(seq 1 3 5); do
args="$args http://127.0.0.1:$((start_port + 1))${WORK_DIR}/1/${d}/ http://127.0.0.1:$((start_port + 2))${WORK_DIR}/2/${d}/ http://127.0.0.1:$((start_port + 3))${WORK_DIR}/3/${d}/ "
@@ -42,42 +46,26 @@ function start_minio_3_node() {
pid3=$!
disown $pid3
sleep "$1"
export MC_HOST_myminio="http://minio:minio123@127.0.0.1:$((start_port + 1))"
timeout 15m /tmp/mc ready myminio || fail
[ ${first_time} -eq 0 ] && upload_objects $start_port
[ ${first_time} -eq 0 ] && upload_objects
[ ${first_time} -ne 0 ] && sleep 120
if ! ps -p $pid1 1>&2 >/dev/null; then
echo "server1 log:"
cat "${WORK_DIR}/dist-minio-server1.log"
echo "FAILED"
purge "$WORK_DIR"
exit 1
echo "minio server 1 is not running" && fail
fi
if ! ps -p $pid2 1>&2 >/dev/null; then
echo "server2 log:"
cat "${WORK_DIR}/dist-minio-server2.log"
echo "FAILED"
purge "$WORK_DIR"
exit 1
echo "minio server 2 is not running" && fail
fi
if ! ps -p $pid3 1>&2 >/dev/null; then
echo "server3 log:"
cat "${WORK_DIR}/dist-minio-server3.log"
echo "FAILED"
purge "$WORK_DIR"
exit 1
echo "minio server 3 is not running" && fail
fi
if ! pkill minio; then
for i in $(seq 1 3); do
echo "server$i log:"
cat "${WORK_DIR}/dist-minio-server$i.log"
done
echo "FAILED"
purge "$WORK_DIR"
exit 1
fail
fi
sleep 1
@@ -90,7 +78,7 @@ function start_minio_3_node() {
}
function check_heal() {
if ! grep -q 'Status:' ${WORK_DIR}/dist-minio-*.log; then
if ! grep -q 'API:' ${WORK_DIR}/dist-minio-*.log; then
return 1
fi
@@ -112,6 +100,17 @@ function purge() {
rm -rf "$1"
}
function fail() {
for i in $(seq 1 3); do
echo "server$i log:"
cat "${WORK_DIR}/dist-minio-server$i.log"
done
pkill -9 minio
echo "FAILED"
purge "$WORK_DIR"
exit 1
}
function __init__() {
echo "Initializing environment"
mkdir -p "$WORK_DIR"
@@ -127,10 +126,6 @@ function __init__() {
}
function upload_objects() {
start_port=$1
/tmp/mc alias set myminio http://127.0.0.1:$((start_port + 1)) minio minio123 --api=s3v4
/tmp/mc ready myminio
/tmp/mc mb myminio/testbucket/
for ((i = 0; i < 20; i++)); do
echo "my content" | /tmp/mc pipe myminio/testbucket/file-$i
@@ -140,7 +135,7 @@ function upload_objects() {
function perform_test() {
start_port=$2
start_minio_3_node 120 $start_port
start_minio_3_node $start_port
echo "Testing Distributed Erasure setup healing of drives"
echo "Remove the contents of the disks belonging to '${1}' node"
@@ -148,19 +143,12 @@ function perform_test() {
rm -rf ${WORK_DIR}/${1}/*/
set -x
start_minio_3_node 120 $start_port
start_minio_3_node $start_port
check_heal ${1}
rv=$?
if [ "$rv" == "1" ]; then
for i in $(seq 1 3); do
echo "server$i log:"
cat "${WORK_DIR}/dist-minio-server$i.log"
done
pkill -9 minio
echo "FAILED"
purge "$WORK_DIR"
exit 1
fail
fi
}

View File

@@ -25,7 +25,7 @@ import (
xhttp "github.com/minio/minio/internal/http"
"github.com/minio/minio/internal/logger"
"github.com/minio/mux"
"github.com/minio/pkg/v2/policy"
"github.com/minio/pkg/v3/policy"
)
// Data types used for returning dummy access control
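The `minio/pkg/v2/policy` → `v3/policy` rewrites across the Go files below are a semantic-import-versioning bump; after updating the import paths, refreshing the module graph is (sketch):
```sh
go get github.com/minio/pkg/v3@latest
go mod tidy -compat=1.21
```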

View File

@@ -40,7 +40,7 @@ import (
"github.com/minio/minio/internal/event"
"github.com/minio/minio/internal/kms"
"github.com/minio/mux"
"github.com/minio/pkg/v2/policy"
"github.com/minio/pkg/v3/policy"
)
const (
@@ -427,6 +427,21 @@ func (a adminAPIHandlers) ExportBucketMetadataHandler(w http.ResponseWriter, r *
cfgPath := pathJoin(bi.Name, cfgFile)
bucket := bi.Name
switch cfgFile {
case bucketPolicyConfig:
config, _, err := globalBucketMetadataSys.GetBucketPolicy(bucket)
if err != nil {
if errors.Is(err, BucketPolicyNotFound{Bucket: bucket}) {
continue
}
writeErrorResponse(ctx, w, exportError(ctx, err, cfgFile, bucket), r.URL)
return
}
configData, err := json.Marshal(config)
if err != nil {
writeErrorResponse(ctx, w, exportError(ctx, err, cfgFile, bucket), r.URL)
return
}
rawDataFn(bytes.NewReader(configData), cfgPath, len(configData))
case bucketNotificationConfig:
config, err := globalBucketMetadataSys.GetNotificationConfig(bucket)
if err != nil {
@@ -783,7 +798,7 @@ func (a adminAPIHandlers) ImportBucketMetadataHandler(w http.ResponseWriter, r *
}
switch fileName {
case bucketNotificationConfig:
config, err := event.ParseConfig(io.LimitReader(reader, sz), globalSite.Region, globalEventNotifier.targetList)
config, err := event.ParseConfig(io.LimitReader(reader, sz), globalSite.Region(), globalEventNotifier.targetList)
if err != nil {
rpt.SetStatus(bucket, fileName, fmt.Errorf("%s (%s)", errorCodes[ErrMalformedXML].Description, err))
continue
@@ -796,11 +811,12 @@ func (a adminAPIHandlers) ImportBucketMetadataHandler(w http.ResponseWriter, r *
}
bucketMap[bucket].NotificationConfigXML = configData
bucketMap[bucket].NotificationConfigUpdatedAt = updatedAt
rpt.SetStatus(bucket, fileName, nil)
case bucketPolicyConfig:
// Error out if Content-Length is beyond allowed size.
if sz > maxBucketPolicySize {
rpt.SetStatus(bucket, fileName, fmt.Errorf(ErrPolicyTooLarge.String()))
rpt.SetStatus(bucket, fileName, errors.New(ErrPolicyTooLarge.String()))
continue
}
@@ -818,7 +834,7 @@ func (a adminAPIHandlers) ImportBucketMetadataHandler(w http.ResponseWriter, r *
// Version in policy must not be empty
if bucketPolicy.Version == "" {
rpt.SetStatus(bucket, fileName, fmt.Errorf(ErrPolicyInvalidVersion.String()))
rpt.SetStatus(bucket, fileName, errors.New(ErrPolicyInvalidVersion.String()))
continue
}
@@ -837,9 +853,13 @@ func (a adminAPIHandlers) ImportBucketMetadataHandler(w http.ResponseWriter, r *
rpt.SetStatus(bucket, fileName, err)
continue
}
rcfg, err := globalBucketObjectLockSys.Get(bucket)
if err != nil {
rpt.SetStatus(bucket, fileName, err)
continue
}
// Validate the received bucket policy document
if err = bucketLifecycle.Validate(); err != nil {
if err = bucketLifecycle.Validate(rcfg); err != nil {
rpt.SetStatus(bucket, fileName, err)
continue
}
@@ -874,8 +894,10 @@ func (a adminAPIHandlers) ImportBucketMetadataHandler(w http.ResponseWriter, r *
}
kmsKey := encConfig.KeyID()
if kmsKey != "" {
kmsContext := kms.Context{"MinIO admin API": "ServerInfoHandler"} // Context for a test key operation
_, err := GlobalKMS.GenerateKey(ctx, kmsKey, kmsContext)
_, err := GlobalKMS.GenerateKey(ctx, &kms.GenerateKeyRequest{
Name: kmsKey,
AssociatedData: kms.Context{"MinIO admin API": "ServerInfoHandler"}, // Context for a test key operation
})
if err != nil {
if errors.Is(err, kes.ErrKeyNotFound) {
rpt.SetStatus(bucket, fileName, errKMSKeyNotFound)

View File

@@ -27,7 +27,7 @@ import (
"github.com/minio/madmin-go/v3"
"github.com/minio/minio/internal/auth"
"github.com/minio/minio/internal/config"
"github.com/minio/pkg/v2/policy"
"github.com/minio/pkg/v3/policy"
)
// validateAdminReq will validate request against and return whether it is allowed.
@@ -216,6 +216,12 @@ func toAdminAPIErr(ctx context.Context, err error) APIError {
Description: err.Error(),
HTTPStatusCode: http.StatusBadRequest,
}
case errors.Is(err, errTierInvalidConfig):
apiErr = APIError{
Code: "XMinioAdminTierInvalidConfig",
Description: err.Error(),
HTTPStatusCode: http.StatusBadRequest,
}
default:
apiErr = errorCodes.ToAPIErrWithErr(toAdminAPIErrCode(ctx, err), err)
}

View File

@@ -37,7 +37,7 @@ import (
"github.com/minio/minio/internal/config/subnet"
"github.com/minio/minio/internal/logger"
"github.com/minio/mux"
"github.com/minio/pkg/v2/policy"
"github.com/minio/pkg/v3/policy"
)
// DelConfigKVHandler - DELETE /minio/admin/v3/del-config-kv

View File

@@ -32,8 +32,8 @@ import (
cfgldap "github.com/minio/minio/internal/config/identity/ldap"
"github.com/minio/minio/internal/config/identity/openid"
"github.com/minio/mux"
"github.com/minio/pkg/v2/ldap"
"github.com/minio/pkg/v2/policy"
"github.com/minio/pkg/v3/ldap"
"github.com/minio/pkg/v3/policy"
)
func addOrUpdateIDPHandler(ctx context.Context, w http.ResponseWriter, r *http.Request, isUpdate bool) {

View File

@@ -20,6 +20,7 @@ package cmd
import (
"encoding/json"
"errors"
"fmt"
"io"
"net/http"
"strings"
@@ -27,7 +28,8 @@ import (
"github.com/minio/madmin-go/v3"
"github.com/minio/minio/internal/auth"
"github.com/minio/mux"
"github.com/minio/pkg/v2/policy"
xldap "github.com/minio/pkg/v3/ldap"
"github.com/minio/pkg/v3/policy"
)
// ListLDAPPolicyMappingEntities lists users/groups mapped to given/all policies.
@@ -188,7 +190,7 @@ func (a adminAPIHandlers) AttachDetachPolicyLDAP(w http.ResponseWriter, r *http.
//
// PUT /minio/admin/v3/idp/ldap/add-service-account
func (a adminAPIHandlers) AddServiceAccountLDAP(w http.ResponseWriter, r *http.Request) {
ctx, cred, opts, createReq, targetUser, APIError := commonAddServiceAccount(r)
ctx, cred, opts, createReq, targetUser, APIError := commonAddServiceAccount(r, true)
if APIError.Code != "" {
writeErrorResponseJSON(ctx, w, APIError, r.URL)
return
@@ -236,12 +238,12 @@ func (a adminAPIHandlers) AddServiceAccountLDAP(w http.ResponseWriter, r *http.R
targetGroups = requestorGroups
// Deny if the target user is not LDAP
foundLDAPDN, err := globalIAMSys.LDAPConfig.GetValidatedDNForUsername(targetUser)
foundResult, err := globalIAMSys.LDAPConfig.GetValidatedDNForUsername(targetUser)
if err != nil {
writeErrorResponseJSON(ctx, w, toAdminAPIErr(ctx, err), r.URL)
return
}
if foundLDAPDN == "" {
if foundResult == nil {
err := errors.New("Specified user does not exist on LDAP server")
APIErr := errorCodes.ToAPIErrWithErr(ErrAdminNoSuchUser, err)
writeErrorResponseJSON(ctx, w, APIErr, r.URL)
@@ -264,7 +266,8 @@ func (a adminAPIHandlers) AddServiceAccountLDAP(w http.ResponseWriter, r *http.R
isDN := globalIAMSys.LDAPConfig.ParsesAsDN(targetUser)
opts.claims[ldapUserN] = targetUser // simple username
targetUser, targetGroups, err = globalIAMSys.LDAPConfig.LookupUserDN(targetUser)
var lookupResult *xldap.DNSearchResult
lookupResult, targetGroups, err = globalIAMSys.LDAPConfig.LookupUserDN(targetUser)
if err != nil {
// if not found, check if DN
if strings.Contains(err.Error(), "User DN not found for:") {
@@ -278,7 +281,26 @@ func (a adminAPIHandlers) AddServiceAccountLDAP(w http.ResponseWriter, r *http.R
writeErrorResponseJSON(ctx, w, toAdminAPIErr(ctx, err), r.URL)
return
}
targetUser = lookupResult.NormDN
opts.claims[ldapUser] = targetUser // DN
opts.claims[ldapActualUser] = lookupResult.ActualDN
// Check if this user or their groups have a policy applied.
ldapPolicies, err := globalIAMSys.PolicyDBGet(targetUser, targetGroups...)
if err != nil {
writeErrorResponseJSON(ctx, w, toAdminAPIErr(ctx, err), r.URL)
return
}
if len(ldapPolicies) == 0 {
err = fmt.Errorf("No policy set for user `%s` or any of their groups: `%s`", opts.claims[ldapActualUser], strings.Join(targetGroups, "`,`"))
writeErrorResponseJSON(ctx, w, errorCodes.ToAPIErrWithErr(ErrAdminNoSuchUser, err), r.URL)
return
}
// Add LDAP attributes that were looked up into the claims.
for attribKey, attribValue := range lookupResult.Attributes {
opts.claims[ldapAttribPrefix+attribKey] = attribValue
}
}
newCred, updatedAt, err := globalIAMSys.NewServiceAccount(ctx, targetUser, targetGroups, opts)
@@ -385,15 +407,16 @@ func (a adminAPIHandlers) ListAccessKeysLDAP(w http.ResponseWriter, r *http.Requ
}
}
targetAccount, err := globalIAMSys.LDAPConfig.GetValidatedDNForUsername(userDN)
dnResult, err := globalIAMSys.LDAPConfig.GetValidatedDNForUsername(userDN)
if err != nil {
writeErrorResponseJSON(ctx, w, toAdminAPIErr(ctx, err), r.URL)
return
}
if targetAccount == "" {
if dnResult == nil {
writeErrorResponseJSON(ctx, w, toAdminAPIErr(ctx, errNoSuchUser), r.URL)
return
}
targetAccount := dnResult.NormDN
listType := r.Form.Get("listType")
if listType != "sts-only" && listType != "svcacc-only" && listType != "" {
@@ -456,3 +479,178 @@ func (a adminAPIHandlers) ListAccessKeysLDAP(w http.ResponseWriter, r *http.Requ
writeSuccessResponseJSON(w, encryptedData)
}
// ListAccessKeysLDAPBulk - GET /minio/admin/v3/idp/ldap/list-access-keys-bulk
func (a adminAPIHandlers) ListAccessKeysLDAPBulk(w http.ResponseWriter, r *http.Request) {
ctx := r.Context()
// Get current object layer instance.
objectAPI := newObjectLayerFn()
if objectAPI == nil || globalNotificationSys == nil {
writeErrorResponseJSON(ctx, w, errorCodes.ToAPIErr(ErrServerNotInitialized), r.URL)
return
}
cred, owner, s3Err := validateAdminSignature(ctx, r, "")
if s3Err != ErrNone {
writeErrorResponseJSON(ctx, w, errorCodes.ToAPIErr(s3Err), r.URL)
return
}
dnList := r.Form["userDNs"]
isAll := r.Form.Get("all") == "true"
selfOnly := !isAll && len(dnList) == 0
if isAll && len(dnList) > 0 {
// This should be checked on client side, so return generic error
writeErrorResponseJSON(ctx, w, errorCodes.ToAPIErr(ErrInvalidRequest), r.URL)
return
}
// Empty DN list and not self, list access keys for all users
if isAll {
if !globalIAMSys.IsAllowed(policy.Args{
AccountName: cred.AccessKey,
Groups: cred.Groups,
Action: policy.ListUsersAdminAction,
ConditionValues: getConditionValues(r, "", cred),
IsOwner: owner,
Claims: cred.Claims,
}) {
writeErrorResponseJSON(ctx, w, errorCodes.ToAPIErr(ErrAccessDenied), r.URL)
return
}
} else if len(dnList) == 1 {
var dn string
foundResult, err := globalIAMSys.LDAPConfig.GetValidatedDNForUsername(dnList[0])
if err == nil && foundResult != nil {
dn = foundResult.NormDN
}
if dn == cred.ParentUser || dnList[0] == cred.ParentUser {
selfOnly = true
}
}
if !globalIAMSys.IsAllowed(policy.Args{
AccountName: cred.AccessKey,
Groups: cred.Groups,
Action: policy.ListServiceAccountsAdminAction,
ConditionValues: getConditionValues(r, "", cred),
IsOwner: owner,
Claims: cred.Claims,
DenyOnly: selfOnly,
}) {
writeErrorResponseJSON(ctx, w, errorCodes.ToAPIErr(ErrAccessDenied), r.URL)
return
}
if selfOnly && len(dnList) == 0 {
selfDN := cred.AccessKey
if cred.ParentUser != "" {
selfDN = cred.ParentUser
}
dnList = append(dnList, selfDN)
}
var ldapUserList []string
if isAll {
ldapUsers, err := globalIAMSys.ListLDAPUsers(ctx)
if err != nil {
writeErrorResponseJSON(ctx, w, toAdminAPIErr(ctx, err), r.URL)
return
}
for user := range ldapUsers {
ldapUserList = append(ldapUserList, user)
}
} else {
for _, userDN := range dnList {
// Validate the userDN
foundResult, err := globalIAMSys.LDAPConfig.GetValidatedDNForUsername(userDN)
if err != nil {
writeErrorResponseJSON(ctx, w, toAdminAPIErr(ctx, err), r.URL)
return
}
if foundResult == nil {
continue
}
ldapUserList = append(ldapUserList, foundResult.NormDN)
}
}
listType := r.Form.Get("listType")
var listSTSKeys, listServiceAccounts bool
switch listType {
case madmin.AccessKeyListUsersOnly:
listSTSKeys = false
listServiceAccounts = false
case madmin.AccessKeyListSTSOnly:
listSTSKeys = true
listServiceAccounts = false
case madmin.AccessKeyListSvcaccOnly:
listSTSKeys = false
listServiceAccounts = true
case madmin.AccessKeyListAll:
listSTSKeys = true
listServiceAccounts = true
default:
err := errors.New("invalid list type")
writeErrorResponseJSON(ctx, w, errorCodes.ToAPIErrWithErr(ErrInvalidRequest, err), r.URL)
return
}
accessKeyMap := make(map[string]madmin.ListAccessKeysLDAPResp)
for _, internalDN := range ldapUserList {
externalDN := globalIAMSys.LDAPConfig.DecodeDN(internalDN)
accessKeys := madmin.ListAccessKeysLDAPResp{}
if listSTSKeys {
stsKeys, err := globalIAMSys.ListSTSAccounts(ctx, internalDN)
if err != nil {
writeErrorResponseJSON(ctx, w, toAdminAPIErr(ctx, err), r.URL)
return
}
for _, sts := range stsKeys {
accessKeys.STSKeys = append(accessKeys.STSKeys, madmin.ServiceAccountInfo{
AccessKey: sts.AccessKey,
Expiration: &sts.Expiration,
})
}
// if only STS keys, skip if user has no STS keys
if !listServiceAccounts && len(stsKeys) == 0 {
continue
}
}
if listServiceAccounts {
serviceAccounts, err := globalIAMSys.ListServiceAccounts(ctx, internalDN)
if err != nil {
writeErrorResponseJSON(ctx, w, toAdminAPIErr(ctx, err), r.URL)
return
}
for _, svc := range serviceAccounts {
accessKeys.ServiceAccounts = append(accessKeys.ServiceAccounts, madmin.ServiceAccountInfo{
AccessKey: svc.AccessKey,
Expiration: &svc.Expiration,
})
}
// if only service accounts, skip if user has no service accounts
if !listSTSKeys && len(serviceAccounts) == 0 {
continue
}
}
accessKeyMap[externalDN] = accessKeys
}
data, err := json.Marshal(accessKeyMap)
if err != nil {
writeErrorResponseJSON(ctx, w, toAdminAPIErr(ctx, err), r.URL)
return
}
encryptedData, err := madmin.EncryptData(cred.SecretKey, data)
if err != nil {
writeErrorResponseJSON(ctx, w, toAdminAPIErr(ctx, err), r.URL)
return
}
writeSuccessResponseJSON(w, encryptedData)
}
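For reference, the request shape this handler parses is just three query parameters: userDNs, all, and listType. A sketch of building such a URL — the SigV4 admin signing that validateAdminSignature expects is elided, and the literal "all" is an assumption about the string value behind madmin.AccessKeyListAll:

package main

import (
	"fmt"
	"net/url"
)

func main() {
	v := url.Values{}
	v.Set("all", "true")     // mutually exclusive with userDNs (see handler above)
	v.Set("listType", "all") // assumed value of madmin.AccessKeyListAll
	// or, for specific users: v["userDNs"] = []string{"uid=alice,ou=people,dc=example,dc=org"}
	u := url.URL{
		Scheme:   "https",
		Host:     "minio.example.net:9000", // hypothetical endpoint
		Path:     "/minio/admin/v3/idp/ldap/list-access-keys-bulk",
		RawQuery: v.Encode(),
	}
	fmt.Println(u.String())
}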


@@ -27,8 +27,8 @@ import (
"strings"
"github.com/minio/mux"
"github.com/minio/pkg/v2/env"
"github.com/minio/pkg/v2/policy"
"github.com/minio/pkg/v3/env"
"github.com/minio/pkg/v3/policy"
)
var (
@@ -374,6 +374,7 @@ func (a adminAPIHandlers) RebalanceStop(w http.ResponseWriter, r *http.Request)
globalNotificationSys.StopRebalance(r.Context())
writeSuccessResponseHeadersOnly(w)
adminLogIf(ctx, pools.saveRebalanceStats(GlobalContext, 0, rebalSaveStoppedAt))
globalNotificationSys.LoadRebalanceMeta(ctx, false)
}
func proxyDecommissionRequest(ctx context.Context, defaultEndPoint Endpoint, w http.ResponseWriter, r *http.Request) (proxy bool) {


@@ -33,7 +33,7 @@ import (
"github.com/minio/madmin-go/v3"
xioutil "github.com/minio/minio/internal/ioutil"
"github.com/minio/mux"
"github.com/minio/pkg/v2/policy"
"github.com/minio/pkg/v3/policy"
)
// SiteReplicationAdd - PUT /minio/admin/v3/site-replication/add


@@ -32,7 +32,7 @@ import (
"github.com/minio/madmin-go/v3"
minio "github.com/minio/minio-go/v7"
"github.com/minio/pkg/v2/sync/errgroup"
"github.com/minio/pkg/v3/sync/errgroup"
)
func runAllIAMConcurrencyTests(suite *TestSuiteIAM, c *check) {
@@ -120,9 +120,12 @@ func (s *TestSuiteIAM) TestDeleteUserRace(c *check) {
c.Fatalf("Unable to set user: %v", err)
}
err = s.adm.SetPolicy(ctx, policy, accessKey, false)
if err != nil {
c.Fatalf("Unable to set policy: %v", err)
userReq := madmin.PolicyAssociationReq{
Policies: []string{policy},
User: accessKey,
}
if _, err := s.adm.AttachPolicy(ctx, userReq); err != nil {
c.Fatalf("Unable to attach policy: %v", err)
}
accessKeys[i] = accessKey


@@ -26,17 +26,20 @@ import (
"io"
"net/http"
"os"
"slices"
"sort"
"strconv"
"strings"
"time"
"unicode/utf8"
"github.com/klauspost/compress/zip"
"github.com/minio/madmin-go/v3"
"github.com/minio/minio/internal/auth"
"github.com/minio/minio/internal/cachevalue"
"github.com/minio/minio/internal/config/dns"
"github.com/minio/mux"
"github.com/minio/pkg/v2/policy"
xldap "github.com/minio/pkg/v3/ldap"
"github.com/minio/pkg/v3/policy"
"github.com/puzpuzpuz/xsync/v3"
)
@@ -473,6 +476,11 @@ func (a adminAPIHandlers) AddUser(w http.ResponseWriter, r *http.Request) {
return
}
if !utf8.ValidString(accessKey) {
writeErrorResponseJSON(ctx, w, errorCodes.ToAPIErr(ErrAddUserValidUTF), r.URL)
return
}
checkDenyOnly := false
if accessKey == cred.AccessKey {
// Check that there is no explicit deny - otherwise it's allowed
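The new utf8.ValidString guard rejects access keys that are not well-formed UTF-8. A two-line illustration:

package main

import (
	"fmt"
	"unicode/utf8"
)

func main() {
	fmt.Println(utf8.ValidString("svc-backup-1"))             // true: accepted
	fmt.Println(utf8.ValidString(string([]byte{0xff, 0xfe}))) // false: rejected with ErrAddUserValidUTF
}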
@@ -629,7 +637,7 @@ func (a adminAPIHandlers) TemporaryAccountInfo(w http.ResponseWriter, r *http.Re
// AddServiceAccount - PUT /minio/admin/v3/add-service-account
func (a adminAPIHandlers) AddServiceAccount(w http.ResponseWriter, r *http.Request) {
ctx, cred, opts, createReq, targetUser, APIError := commonAddServiceAccount(r)
ctx, cred, opts, createReq, targetUser, APIError := commonAddServiceAccount(r, false)
if APIError.Code != "" {
writeErrorResponseJSON(ctx, w, APIError, r.URL)
return
@@ -700,12 +708,20 @@ func (a adminAPIHandlers) AddServiceAccount(w http.ResponseWriter, r *http.Reque
// In case of LDAP we need to resolve the targetUser to a DN and
// query their groups:
opts.claims[ldapUserN] = targetUser // simple username
targetUser, targetGroups, err = globalIAMSys.LDAPConfig.LookupUserDN(targetUser)
var lookupResult *xldap.DNSearchResult
lookupResult, targetGroups, err = globalIAMSys.LDAPConfig.LookupUserDN(targetUser)
if err != nil {
writeErrorResponseJSON(ctx, w, toAdminAPIErr(ctx, err), r.URL)
return
}
targetUser = lookupResult.NormDN
opts.claims[ldapUser] = targetUser // username DN
opts.claims[ldapActualUser] = lookupResult.ActualDN
// Add LDAP attributes that were looked up into the claims.
for attribKey, attribValue := range lookupResult.Attributes {
opts.claims[ldapAttribPrefix+attribKey] = attribValue
}
// NOTE: if not using LDAP, then internal IDP or open ID is
// being used - in the former, group info is enforced when
@@ -815,7 +831,11 @@ func (a adminAPIHandlers) UpdateServiceAccount(w http.ResponseWriter, r *http.Re
}
condValues := getConditionValues(r, "", cred)
addExpirationToCondValues(updateReq.NewExpiration, condValues)
err = addExpirationToCondValues(updateReq.NewExpiration, condValues)
if err != nil {
writeErrorResponseJSON(ctx, w, toAdminAPIErr(ctx, err), r.URL)
return
}
// Permission checks:
//
@@ -1147,6 +1167,172 @@ func (a adminAPIHandlers) DeleteServiceAccount(w http.ResponseWriter, r *http.Re
writeSuccessNoContent(w)
}
// ListAccessKeysBulk - GET /minio/admin/v3/list-access-keys-bulk
func (a adminAPIHandlers) ListAccessKeysBulk(w http.ResponseWriter, r *http.Request) {
ctx := r.Context()
// Get current object layer instance.
objectAPI := newObjectLayerFn()
if objectAPI == nil || globalNotificationSys == nil {
writeErrorResponseJSON(ctx, w, errorCodes.ToAPIErr(ErrServerNotInitialized), r.URL)
return
}
cred, owner, s3Err := validateAdminSignature(ctx, r, "")
if s3Err != ErrNone {
writeErrorResponseJSON(ctx, w, errorCodes.ToAPIErr(s3Err), r.URL)
return
}
users := r.Form["users"]
isAll := r.Form.Get("all") == "true"
selfOnly := !isAll && len(users) == 0
if isAll && len(users) > 0 {
// This should be checked on client side, so return generic error
writeErrorResponseJSON(ctx, w, errorCodes.ToAPIErr(ErrInvalidRequest), r.URL)
return
}
// Empty user list and not self, list access keys for all users
if isAll {
if !globalIAMSys.IsAllowed(policy.Args{
AccountName: cred.AccessKey,
Groups: cred.Groups,
Action: policy.ListUsersAdminAction,
ConditionValues: getConditionValues(r, "", cred),
IsOwner: owner,
Claims: cred.Claims,
}) {
writeErrorResponseJSON(ctx, w, errorCodes.ToAPIErr(ErrAccessDenied), r.URL)
return
}
} else if len(users) == 1 {
if users[0] == cred.AccessKey || users[0] == cred.ParentUser {
selfOnly = true
}
}
if !globalIAMSys.IsAllowed(policy.Args{
AccountName: cred.AccessKey,
Groups: cred.Groups,
Action: policy.ListServiceAccountsAdminAction,
ConditionValues: getConditionValues(r, "", cred),
IsOwner: owner,
Claims: cred.Claims,
DenyOnly: selfOnly,
}) {
writeErrorResponseJSON(ctx, w, errorCodes.ToAPIErr(ErrAccessDenied), r.URL)
return
}
if selfOnly && len(users) == 0 {
selfUser := cred.AccessKey
if cred.ParentUser != "" {
selfUser = cred.ParentUser
}
users = append(users, selfUser)
}
var checkedUserList []string
if isAll {
users, err := globalIAMSys.ListUsers(ctx)
if err != nil {
writeErrorResponseJSON(ctx, w, toAdminAPIErr(ctx, err), r.URL)
return
}
for user := range users {
checkedUserList = append(checkedUserList, user)
}
checkedUserList = append(checkedUserList, globalActiveCred.AccessKey)
} else {
for _, user := range users {
// Validate the user
_, ok := globalIAMSys.GetUser(ctx, user)
if !ok {
continue
}
checkedUserList = append(checkedUserList, user)
}
}
listType := r.Form.Get("listType")
var listSTSKeys, listServiceAccounts bool
switch listType {
case madmin.AccessKeyListUsersOnly:
listSTSKeys = false
listServiceAccounts = false
case madmin.AccessKeyListSTSOnly:
listSTSKeys = true
listServiceAccounts = false
case madmin.AccessKeyListSvcaccOnly:
listSTSKeys = false
listServiceAccounts = true
case madmin.AccessKeyListAll:
listSTSKeys = true
listServiceAccounts = true
default:
err := errors.New("invalid list type")
writeErrorResponseJSON(ctx, w, errorCodes.ToAPIErrWithErr(ErrInvalidRequest, err), r.URL)
return
}
accessKeyMap := make(map[string]madmin.ListAccessKeysResp)
for _, user := range checkedUserList {
accessKeys := madmin.ListAccessKeysResp{}
if listSTSKeys {
stsKeys, err := globalIAMSys.ListSTSAccounts(ctx, user)
if err != nil {
writeErrorResponseJSON(ctx, w, toAdminAPIErr(ctx, err), r.URL)
return
}
for _, sts := range stsKeys {
accessKeys.STSKeys = append(accessKeys.STSKeys, madmin.ServiceAccountInfo{
AccessKey: sts.AccessKey,
Expiration: &sts.Expiration,
})
}
// if only STS keys, skip if user has no STS keys
if !listServiceAccounts && len(stsKeys) == 0 {
continue
}
}
if listServiceAccounts {
serviceAccounts, err := globalIAMSys.ListServiceAccounts(ctx, user)
if err != nil {
writeErrorResponseJSON(ctx, w, toAdminAPIErr(ctx, err), r.URL)
return
}
for _, svc := range serviceAccounts {
accessKeys.ServiceAccounts = append(accessKeys.ServiceAccounts, madmin.ServiceAccountInfo{
AccessKey: svc.AccessKey,
Expiration: &svc.Expiration,
})
}
// if only service accounts, skip if user has no service accounts
if !listSTSKeys && len(serviceAccounts) == 0 {
continue
}
}
accessKeyMap[user] = accessKeys
}
data, err := json.Marshal(accessKeyMap)
if err != nil {
writeErrorResponseJSON(ctx, w, toAdminAPIErr(ctx, err), r.URL)
return
}
encryptedData, err := madmin.EncryptData(cred.SecretKey, data)
if err != nil {
writeErrorResponseJSON(ctx, w, toAdminAPIErr(ctx, err), r.URL)
return
}
writeSuccessResponseJSON(w, encryptedData)
}
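As with the LDAP variant above, the response body is the per-user map encrypted against the requester's secret key via madmin.EncryptData. A client-side decoding sketch, assuming madmin.DecryptData keeps its (password, io.Reader) signature:

package main

import (
	"bytes"
	"encoding/json"
	"fmt"

	"github.com/minio/madmin-go/v3"
)

// decodeBulkResponse reverses the EncryptData+Marshal steps in the handler.
func decodeBulkResponse(secretKey string, body []byte) (map[string]madmin.ListAccessKeysResp, error) {
	plain, err := madmin.DecryptData(secretKey, bytes.NewReader(body))
	if err != nil {
		return nil, err
	}
	out := map[string]madmin.ListAccessKeysResp{}
	if err := json.Unmarshal(plain, &out); err != nil {
		return nil, err
	}
	return out, nil
}

func main() {
	_, err := decodeBulkResponse("secret", nil)
	fmt.Println(err) // fails on empty input; real input is the API response body
}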
// AccountInfoHandler returns usage, permissions and other bucket metadata for the incoming user
func (a adminAPIHandlers) AccountInfoHandler(w http.ResponseWriter, r *http.Request) {
ctx := r.Context()
@@ -1216,18 +1402,6 @@ func (a adminAPIHandlers) AccountInfoHandler(w http.ResponseWriter, r *http.Requ
return rd, wr
}
bucketStorageCache.InitOnce(10*time.Second,
cachevalue.Opts{ReturnLastGood: true, NoWait: true},
func() (DataUsageInfo, error) {
ctx, done := context.WithTimeout(context.Background(), 2*time.Second)
defer done()
return loadDataUsageFromBackend(ctx, objectAPI)
},
)
dataUsageInfo, _ := bucketStorageCache.Get()
// If etcd, dns federation configured list buckets from etcd.
var err error
var buckets []BucketInfo
@@ -1268,7 +1442,12 @@ func (a adminAPIHandlers) AccountInfoHandler(w http.ResponseWriter, r *http.Requ
var buf []byte
switch {
case accountName == globalActiveCred.AccessKey:
case accountName == globalActiveCred.AccessKey || newGlobalAuthZPluginFn() != nil:
// For owner account and when plugin authZ is configured always set
// effective policy as `consoleAdmin`.
//
// In the latter case, we let the UI render everything, but individual
// actions would fail if not permitted by the external authZ service.
for _, policy := range policy.DefaultPolicies {
if policy.Name == "consoleAdmin" {
effectivePolicy = policy.Definition
@@ -1314,15 +1493,12 @@ func (a adminAPIHandlers) AccountInfoHandler(w http.ResponseWriter, r *http.Requ
rd, wr := isAllowedAccess(bucket.Name)
if rd || wr {
// Fetch the data usage of the current bucket
var size uint64
var objectsCount uint64
var objectsHist, versionsHist map[string]uint64
if !dataUsageInfo.LastUpdate.IsZero() {
size = dataUsageInfo.BucketsUsage[bucket.Name].Size
objectsCount = dataUsageInfo.BucketsUsage[bucket.Name].ObjectsCount
objectsHist = dataUsageInfo.BucketsUsage[bucket.Name].ObjectSizesHistogram
versionsHist = dataUsageInfo.BucketsUsage[bucket.Name].ObjectVersionsHistogram
}
bui := globalBucketQuotaSys.GetBucketUsageInfo(ctx, bucket.Name)
size := bui.Size
objectsCount := bui.ObjectsCount
objectsHist := bui.ObjectSizesHistogram
versionsHist := bui.ObjectVersionsHistogram
// Fetch the prefix usage of the current bucket
var prefixUsage map[string]uint64
if enablePrefixUsage {
@@ -1636,22 +1812,22 @@ func (a adminAPIHandlers) SetPolicyForUserOrGroup(w http.ResponseWriter, r *http
// form of the entityName (which will be an LDAP DN).
var err error
if isGroup {
var foundGroupDN string
var foundGroupDN *xldap.DNSearchResult
var underBaseDN bool
if foundGroupDN, underBaseDN, err = globalIAMSys.LDAPConfig.GetValidatedGroupDN(nil, entityName); err != nil {
iamLogIf(ctx, err)
} else if foundGroupDN == "" || !underBaseDN {
} else if foundGroupDN == nil || !underBaseDN {
err = errNoSuchGroup
}
entityName = foundGroupDN
if foundGroupDN != nil {
entityName = foundGroupDN.NormDN
}
} else {
var foundUserDN string
var foundUserDN *xldap.DNSearchResult
if foundUserDN, err = globalIAMSys.LDAPConfig.GetValidatedDNForUsername(entityName); err != nil {
iamLogIf(ctx, err)
} else if foundUserDN == "" {
} else if foundUserDN == nil {
err = errNoSuchUser
}
entityName = foundUserDN
if foundUserDN != nil {
entityName = foundUserDN.NormDN
}
}
if err != nil {
writeErrorResponseJSON(ctx, w, toAdminAPIErr(ctx, err), r.URL)
@@ -2043,6 +2219,16 @@ func (a adminAPIHandlers) ExportIAM(w http.ResponseWriter, r *http.Request) {
// ImportIAM - imports all IAM info into MinIO
func (a adminAPIHandlers) ImportIAM(w http.ResponseWriter, r *http.Request) {
a.importIAM(w, r, "")
}
// ImportIAMV2 - imports all IAM info into MinIO
func (a adminAPIHandlers) ImportIAMV2(w http.ResponseWriter, r *http.Request) {
a.importIAM(w, r, "v2")
}
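The v2 import endpoint differs only in its response: instead of an empty 200 it reports what was skipped, removed, added and failed, as assembled further down in importIAM. A sketch of decoding that body using illustrative stand-in types (the real ones are madmin.ImportIAMResult, madmin.IAMEntities and madmin.IAMErrEntities; field names here are assumptions):

package main

import (
	"encoding/json"
	"fmt"
)

// Stand-ins for the madmin result types; the field sets shown are a subset
// chosen for illustration.
type iamEntities struct {
	Policies []string `json:"policies,omitempty"`
	Users    []string `json:"users,omitempty"`
	Groups   []string `json:"groups,omitempty"`
}

type importIAMResult struct {
	Skipped iamEntities `json:"skipped"`
	Removed iamEntities `json:"removed"`
	Added   iamEntities `json:"added"`
}

func main() {
	body := []byte(`{"added":{"policies":["readonly"],"users":["alice"]},"skipped":{},"removed":{}}`)
	var res importIAMResult
	if err := json.Unmarshal(body, &res); err != nil {
		panic(err)
	}
	fmt.Printf("added %d policies and %d users\n", len(res.Added.Policies), len(res.Added.Users))
}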
// importIAM - imports all IAM info into MinIO
func (a adminAPIHandlers) importIAM(w http.ResponseWriter, r *http.Request, apiVer string) {
ctx := r.Context()
// Get current object layer instance.
@@ -2067,6 +2253,10 @@ func (a adminAPIHandlers) ImportIAM(w http.ResponseWriter, r *http.Request) {
writeErrorResponseJSON(ctx, w, errorCodes.ToAPIErr(ErrInvalidRequest), r.URL)
return
}
var skipped, removed, added madmin.IAMEntities
var failed madmin.IAMErrEntities
// import policies first
{
@@ -2092,8 +2282,10 @@ func (a adminAPIHandlers) ImportIAM(w http.ResponseWriter, r *http.Request) {
for policyName, policy := range allPolicies {
if policy.IsEmpty() {
err = globalIAMSys.DeletePolicy(ctx, policyName, true)
removed.Policies = append(removed.Policies, policyName)
} else {
_, err = globalIAMSys.SetPolicy(ctx, policyName, policy)
added.Policies = append(added.Policies, policyName)
}
if err != nil {
writeErrorResponseJSON(ctx, w, importError(ctx, err, allPoliciesFile, policyName), r.URL)
@@ -2172,8 +2364,9 @@ func (a adminAPIHandlers) ImportIAM(w http.ResponseWriter, r *http.Request) {
return
}
if _, err = globalIAMSys.CreateUser(ctx, accessKey, ureq); err != nil {
writeErrorResponseJSON(ctx, w, importErrorWithAPIErr(ctx, toAdminAPIErrCode(ctx, err), err, allUsersFile, accessKey), r.URL)
return
failed.Users = append(failed.Users, madmin.IAMErrEntity{Name: accessKey, Error: err})
} else {
added.Users = append(added.Users, accessKey)
}
}
@@ -2211,8 +2404,9 @@ func (a adminAPIHandlers) ImportIAM(w http.ResponseWriter, r *http.Request) {
}
}
if _, gerr := globalIAMSys.AddUsersToGroup(ctx, group, grpInfo.Members); gerr != nil {
writeErrorResponseJSON(ctx, w, importError(ctx, gerr, allGroupsFile, group), r.URL)
return
failed.Groups = append(failed.Groups, madmin.IAMErrEntity{Name: group, Error: gerr})
} else {
added.Groups = append(added.Groups, group)
}
}
}
@@ -2241,7 +2435,8 @@ func (a adminAPIHandlers) ImportIAM(w http.ResponseWriter, r *http.Request) {
// Validations for LDAP enabled deployments.
if globalIAMSys.LDAPConfig.Enabled() {
err := globalIAMSys.NormalizeLDAPAccessKeypairs(ctx, serviceAcctReqs)
skippedAccessKeys, err := globalIAMSys.NormalizeLDAPAccessKeypairs(ctx, serviceAcctReqs)
skipped.ServiceAccounts = append(skipped.ServiceAccounts, skippedAccessKeys...)
if err != nil {
writeErrorResponseJSON(ctx, w, importError(ctx, err, allSvcAcctsFile, ""), r.URL)
return
@@ -2249,6 +2444,9 @@ func (a adminAPIHandlers) ImportIAM(w http.ResponseWriter, r *http.Request) {
}
for user, svcAcctReq := range serviceAcctReqs {
if slices.Contains(skipped.ServiceAccounts, user) {
continue
}
var sp *policy.Policy
var err error
if len(svcAcctReq.SessionPolicy) > 0 {
@@ -2289,7 +2487,7 @@ func (a adminAPIHandlers) ImportIAM(w http.ResponseWriter, r *http.Request) {
// clean import.
err := globalIAMSys.DeleteServiceAccount(ctx, svcAcctReq.AccessKey, true)
if err != nil {
delErr := fmt.Errorf("failed to delete existing service account(%s) before importing it: %w", svcAcctReq.AccessKey, err)
delErr := fmt.Errorf("failed to delete existing service account (%s) before importing it: %w", svcAcctReq.AccessKey, err)
writeErrorResponseJSON(ctx, w, importError(ctx, delErr, allSvcAcctsFile, user), r.URL)
return
}
@@ -2306,10 +2504,10 @@ func (a adminAPIHandlers) ImportIAM(w http.ResponseWriter, r *http.Request) {
}
if _, _, err = globalIAMSys.NewServiceAccount(ctx, svcAcctReq.Parent, svcAcctReq.Groups, opts); err != nil {
writeErrorResponseJSON(ctx, w, importError(ctx, err, allSvcAcctsFile, user), r.URL)
return
failed.ServiceAccounts = append(failed.ServiceAccounts, madmin.IAMErrEntity{Name: user, Error: err})
} else {
added.ServiceAccounts = append(added.ServiceAccounts, user)
}
}
}
}
@@ -2346,8 +2544,15 @@ func (a adminAPIHandlers) ImportIAM(w http.ResponseWriter, r *http.Request) {
return
}
if _, err := globalIAMSys.PolicyDBSet(ctx, u, pm.Policies, regUser, false); err != nil {
writeErrorResponseJSON(ctx, w, importError(ctx, err, userPolicyMappingsFile, u), r.URL)
return
failed.UserPolicies = append(
failed.UserPolicies,
madmin.IAMErrPolicyEntity{
Name: u,
Policies: strings.Split(pm.Policies, ","),
Error: err,
})
} else {
added.UserPolicies = append(added.UserPolicies, map[string][]string{u: strings.Split(pm.Policies, ",")})
}
}
}
@@ -2377,7 +2582,8 @@ func (a adminAPIHandlers) ImportIAM(w http.ResponseWriter, r *http.Request) {
// Validations for LDAP enabled deployments.
if globalIAMSys.LDAPConfig.Enabled() {
isGroup := true
err := globalIAMSys.NormalizeLDAPMappingImport(ctx, isGroup, grpPolicyMap)
skippedDN, err := globalIAMSys.NormalizeLDAPMappingImport(ctx, isGroup, grpPolicyMap)
skipped.Groups = append(skipped.Groups, skippedDN...)
if err != nil {
writeErrorResponseJSON(ctx, w, importError(ctx, err, groupPolicyMappingsFile, ""), r.URL)
return
@@ -2385,9 +2591,19 @@ func (a adminAPIHandlers) ImportIAM(w http.ResponseWriter, r *http.Request) {
}
for g, pm := range grpPolicyMap {
if slices.Contains(skipped.Groups, g) {
continue
}
if _, err := globalIAMSys.PolicyDBSet(ctx, g, pm.Policies, unknownIAMUserType, true); err != nil {
writeErrorResponseJSON(ctx, w, importError(ctx, err, groupPolicyMappingsFile, g), r.URL)
return
failed.GroupPolicies = append(
failed.GroupPolicies,
madmin.IAMErrPolicyEntity{
Name: g,
Policies: strings.Split(pm.Policies, ","),
Error: err,
})
} else {
added.GroupPolicies = append(added.GroupPolicies, map[string][]string{g: strings.Split(pm.Policies, ",")})
}
}
}
@@ -2417,13 +2633,17 @@ func (a adminAPIHandlers) ImportIAM(w http.ResponseWriter, r *http.Request) {
// Validations for LDAP enabled deployments.
if globalIAMSys.LDAPConfig.Enabled() {
isGroup := true
err := globalIAMSys.NormalizeLDAPMappingImport(ctx, !isGroup, userPolicyMap)
skippedDN, err := globalIAMSys.NormalizeLDAPMappingImport(ctx, !isGroup, userPolicyMap)
skipped.Users = append(skipped.Users, skippedDN...)
if err != nil {
writeErrorResponseJSON(ctx, w, importError(ctx, err, stsUserPolicyMappingsFile, ""), r.URL)
return
}
}
for u, pm := range userPolicyMap {
if slices.Contains(skipped.Users, u) {
continue
}
// disallow setting policy mapping if user is a temporary user
ok, _, err := globalIAMSys.IsTempUser(u)
if err != nil && err != errNoSuchUser {
@@ -2436,22 +2656,51 @@ func (a adminAPIHandlers) ImportIAM(w http.ResponseWriter, r *http.Request) {
}
if _, err := globalIAMSys.PolicyDBSet(ctx, u, pm.Policies, stsUser, false); err != nil {
writeErrorResponseJSON(ctx, w, importError(ctx, err, stsUserPolicyMappingsFile, u), r.URL)
return
failed.STSPolicies = append(
failed.STSPolicies,
madmin.IAMErrPolicyEntity{
Name: u,
Policies: strings.Split(pm.Policies, ","),
Error: err,
})
} else {
added.STSPolicies = append(added.STSPolicies, map[string][]string{u: strings.Split(pm.Policies, ",")})
}
}
}
}
}
func addExpirationToCondValues(exp *time.Time, condValues map[string][]string) {
if exp == nil {
return
if apiVer == "v2" {
iamr := madmin.ImportIAMResult{
Skipped: skipped,
Removed: removed,
Added: added,
Failed: failed,
}
b, err := json.Marshal(iamr)
if err != nil {
writeErrorResponseJSON(ctx, w, toAdminAPIErr(ctx, err), r.URL)
return
}
writeSuccessResponseJSON(w, b)
}
condValues["DurationSeconds"] = []string{strconv.FormatInt(int64(exp.Sub(time.Now()).Seconds()), 10)}
}
func commonAddServiceAccount(r *http.Request) (context.Context, auth.Credentials, newServiceAccountOpts, madmin.AddServiceAccountReq, string, APIError) {
func addExpirationToCondValues(exp *time.Time, condValues map[string][]string) error {
if exp == nil || exp.IsZero() || exp.Equal(timeSentinel) {
return nil
}
dur := exp.Sub(time.Now())
if dur <= 0 {
return errors.New("unsupported expiration time")
}
condValues["DurationSeconds"] = []string{strconv.FormatInt(int64(dur.Seconds()), 10)}
return nil
}
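The reworked helper is now total over its input: nil, zero and sentinel expirations add no condition, an already-elapsed expiration is an error, and only a strictly future one yields a DurationSeconds value. The same arithmetic in isolation (the timeSentinel check is elided):

package main

import (
	"errors"
	"fmt"
	"strconv"
	"time"
)

// durationSecondsCond mirrors addExpirationToCondValues above, returning the
// condition value instead of mutating a map.
func durationSecondsCond(exp *time.Time) ([]string, error) {
	if exp == nil || exp.IsZero() {
		return nil, nil // no expiration requested
	}
	dur := time.Until(*exp)
	if dur <= 0 {
		return nil, errors.New("unsupported expiration time")
	}
	return []string{strconv.FormatInt(int64(dur.Seconds()), 10)}, nil
}

func main() {
	future := time.Now().Add(90 * time.Minute)
	v, err := durationSecondsCond(&future)
	fmt.Println(v, err) // ~[5399] <nil>

	past := time.Now().Add(-time.Minute)
	_, err = durationSecondsCond(&past)
	fmt.Println(err) // unsupported expiration time
}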
func commonAddServiceAccount(r *http.Request, ldap bool) (context.Context, auth.Credentials, newServiceAccountOpts, madmin.AddServiceAccountReq, string, APIError) {
ctx := r.Context()
// Get current object layer instance.
@@ -2476,6 +2725,12 @@ func commonAddServiceAccount(r *http.Request) (context.Context, auth.Credentials
return ctx, auth.Credentials{}, newServiceAccountOpts{}, madmin.AddServiceAccountReq{}, "", errorCodes.ToAPIErrWithErr(ErrAdminConfigBadJSON, err)
}
if createReq.Expiration != nil && !createReq.Expiration.IsZero() {
// truncate expiration at the second.
truncateTime := createReq.Expiration.Truncate(time.Second)
createReq.Expiration = &truncateTime
}
// service account access key cannot have space characters at the beginning or end of the string.
if hasSpaceBE(createReq.AccessKey) {
return ctx, auth.Credentials{}, newServiceAccountOpts{}, madmin.AddServiceAccountReq{}, "", errorCodes.ToAPIErr(ErrAdminResourceInvalidArgument)
@@ -2507,7 +2762,18 @@ func commonAddServiceAccount(r *http.Request) (context.Context, auth.Credentials
}
condValues := getConditionValues(r, "", cred)
addExpirationToCondValues(createReq.Expiration, condValues)
err = addExpirationToCondValues(createReq.Expiration, condValues)
if err != nil {
return ctx, auth.Credentials{}, newServiceAccountOpts{}, madmin.AddServiceAccountReq{}, "", toAdminAPIErr(ctx, err)
}
denyOnly := (targetUser == cred.AccessKey || targetUser == cred.ParentUser)
if ldap && !denyOnly {
res, _ := globalIAMSys.LDAPConfig.GetValidatedDNForUsername(targetUser)
if res != nil && res.NormDN == cred.ParentUser {
denyOnly = true
}
}
// Check if action is allowed if creating access key for another user
// Check if action is explicitly denied if for self
@@ -2518,7 +2784,7 @@ func commonAddServiceAccount(r *http.Request) (context.Context, auth.Credentials
ConditionValues: condValues,
IsOwner: owner,
Claims: cred.Claims,
DenyOnly: (targetUser == cred.AccessKey || targetUser == cred.ParentUser),
DenyOnly: denyOnly,
}) {
return ctx, auth.Credentials{}, newServiceAccountOpts{}, madmin.AddServiceAccountReq{}, "", errorCodes.ToAPIErr(ErrAccessDenied)
}


@@ -28,6 +28,7 @@ import (
"net/http"
"net/url"
"runtime"
"slices"
"strings"
"testing"
"time"
@@ -39,7 +40,7 @@ import (
"github.com/minio/minio-go/v7/pkg/set"
"github.com/minio/minio-go/v7/pkg/signer"
"github.com/minio/minio/internal/auth"
"github.com/minio/pkg/v2/env"
"github.com/minio/pkg/v3/env"
)
const (
@@ -239,9 +240,12 @@ func (s *TestSuiteIAM) TestUserCreate(c *check) {
c.Assert(v.Status, madmin.AccountEnabled)
// 3. Associate policy and check that user can access
err = s.adm.SetPolicy(ctx, "readwrite", accessKey, false)
_, err = s.adm.AttachPolicy(ctx, madmin.PolicyAssociationReq{
Policies: []string{"readwrite"},
User: accessKey,
})
if err != nil {
c.Fatalf("unable to set policy: %v", err)
c.Fatalf("unable to attach policy: %v", err)
}
client := s.getUserClient(c, accessKey, secretKey, "")
@@ -334,23 +338,34 @@ func (s *TestSuiteIAM) TestUserPolicyEscalationBug(c *check) {
{
"Effect": "Allow",
"Action": [
"s3:PutObject",
"s3:GetObject",
"s3:ListBucket"
],
"Resource": [
"arn:aws:s3:::%s"
]
},
{
"Effect": "Allow",
"Action": [
"s3:PutObject",
"s3:GetObject"
],
"Resource": [
"arn:aws:s3:::%s/*"
]
}
]
}`, bucket))
}`, bucket, bucket))
err = s.adm.AddCannedPolicy(ctx, policy, policyBytes)
if err != nil {
c.Fatalf("policy add error: %v", err)
}
err = s.adm.SetPolicy(ctx, policy, accessKey, false)
_, err = s.adm.AttachPolicy(ctx, madmin.PolicyAssociationReq{
Policies: []string{policy},
User: accessKey,
})
if err != nil {
c.Fatalf("Unable to set policy: %v", err)
c.Fatalf("unable to attach policy: %v", err)
}
// 2.3 check user has access to bucket
c.mustListObjects(ctx, uClient, bucket)
@@ -436,7 +451,7 @@ func (s *TestSuiteIAM) TestAddServiceAccountPerms(c *check) {
"s3:ListBucket"
],
"Resource": [
"arn:aws:s3:::testbucket/*"
"arn:aws:s3:::testbucket"
]
}
]
@@ -470,9 +485,12 @@ func (s *TestSuiteIAM) TestAddServiceAccountPerms(c *check) {
c.mustNotListObjects(ctx, uClient, "testbucket")
// 3.2 associate policy to user
err = s.adm.SetPolicy(ctx, policy1, accessKey, false)
_, err = s.adm.AttachPolicy(ctx, madmin.PolicyAssociationReq{
Policies: []string{policy1},
User: accessKey,
})
if err != nil {
c.Fatalf("Unable to set policy: %v", err)
c.Fatalf("unable to attach policy: %v", err)
}
admClnt := s.getAdminClient(c, accessKey, secretKey, "")
@@ -490,10 +508,22 @@ func (s *TestSuiteIAM) TestAddServiceAccountPerms(c *check) {
c.Fatalf("policy was missing!")
}
// 3.2 associate policy to user
err = s.adm.SetPolicy(ctx, policy2, accessKey, false)
// Detach policy1 to set up for policy2
_, err = s.adm.DetachPolicy(ctx, madmin.PolicyAssociationReq{
Policies: []string{policy1},
User: accessKey,
})
if err != nil {
c.Fatalf("Unable to set policy: %v", err)
c.Fatalf("unable to detach policy: %v", err)
}
// 3.2 associate policy to user
_, err = s.adm.AttachPolicy(ctx, madmin.PolicyAssociationReq{
Policies: []string{policy2},
User: accessKey,
})
if err != nil {
c.Fatalf("unable to attach policy: %v", err)
}
// 3.3 check user can create service account implicitly.
@@ -538,16 +568,24 @@ func (s *TestSuiteIAM) TestPolicyCreate(c *check) {
{
"Effect": "Allow",
"Action": [
"s3:PutObject",
"s3:GetObject",
"s3:ListBucket"
],
"Resource": [
"arn:aws:s3:::%s"
]
},
{
"Effect": "Allow",
"Action": [
"s3:PutObject",
"s3:GetObject"
],
"Resource": [
"arn:aws:s3:::%s/*"
]
}
]
}`, bucket))
}`, bucket, bucket))
err = s.adm.AddCannedPolicy(ctx, policy, policyBytes)
if err != nil {
c.Fatalf("policy add error: %v", err)
@@ -571,9 +609,12 @@ func (s *TestSuiteIAM) TestPolicyCreate(c *check) {
c.mustNotListObjects(ctx, uClient, bucket)
// 3.2 associate policy to user
err = s.adm.SetPolicy(ctx, policy, accessKey, false)
_, err = s.adm.AttachPolicy(ctx, madmin.PolicyAssociationReq{
Policies: []string{policy},
User: accessKey,
})
if err != nil {
c.Fatalf("Unable to set policy: %v", err)
c.Fatalf("unable to attach policy: %v", err)
}
// 3.3 check user has access to bucket
c.mustListObjects(ctx, uClient, bucket)
@@ -645,16 +686,24 @@ func (s *TestSuiteIAM) TestCannedPolicies(c *check) {
{
"Effect": "Allow",
"Action": [
"s3:PutObject",
"s3:GetObject",
"s3:ListBucket"
],
"Resource": [
"arn:aws:s3:::%s"
]
},
{
"Effect": "Allow",
"Action": [
"s3:PutObject",
"s3:GetObject"
],
"Resource": [
"arn:aws:s3:::%s/*"
]
}
]
}`, bucket))
}`, bucket, bucket))
// Check that default policies can be overwritten.
err = s.adm.AddCannedPolicy(ctx, "readwrite", policyBytes)
@@ -690,16 +739,24 @@ func (s *TestSuiteIAM) TestGroupAddRemove(c *check) {
{
"Effect": "Allow",
"Action": [
"s3:PutObject",
"s3:GetObject",
"s3:ListBucket"
],
"Resource": [
"arn:aws:s3:::%s"
]
},
{
"Effect": "Allow",
"Action": [
"s3:PutObject",
"s3:GetObject"
],
"Resource": [
"arn:aws:s3:::%s/*"
]
}
]
}`, bucket))
}`, bucket, bucket))
err = s.adm.AddCannedPolicy(ctx, policy, policyBytes)
if err != nil {
c.Fatalf("policy add error: %v", err)
@@ -726,9 +783,12 @@ func (s *TestSuiteIAM) TestGroupAddRemove(c *check) {
c.mustNotListObjects(ctx, uClient, bucket)
// 3. Associate policy to group and check user got access.
err = s.adm.SetPolicy(ctx, policy, group, true)
_, err = s.adm.AttachPolicy(ctx, madmin.PolicyAssociationReq{
Policies: []string{policy},
Group: group,
})
if err != nil {
c.Fatalf("Unable to set policy: %v", err)
c.Fatalf("unable to attach policy: %v", err)
}
// 3.1 check user has access to bucket
c.mustListObjects(ctx, uClient, bucket)
@@ -743,8 +803,9 @@ func (s *TestSuiteIAM) TestGroupAddRemove(c *check) {
if err != nil {
c.Fatalf("group list err: %v", err)
}
if !set.CreateStringSet(groups...).Contains(group) {
c.Fatalf("created group not present!")
expected := []string{group}
if !slices.Equal(groups, expected) {
c.Fatalf("expected group listing: %v, got: %v", expected, groups)
}
groupInfo, err := s.adm.GetGroupDescription(ctx, group)
if err != nil {
@@ -850,16 +911,24 @@ func (s *TestSuiteIAM) TestServiceAccountOpsByUser(c *check) {
{
"Effect": "Allow",
"Action": [
"s3:PutObject",
"s3:GetObject",
"s3:ListBucket"
],
"Resource": [
"arn:aws:s3:::%s"
]
},
{
"Effect": "Allow",
"Action": [
"s3:PutObject",
"s3:GetObject"
],
"Resource": [
"arn:aws:s3:::%s/*"
]
}
]
}`, bucket))
}`, bucket, bucket))
err = s.adm.AddCannedPolicy(ctx, policy, policyBytes)
if err != nil {
c.Fatalf("policy add error: %v", err)
@@ -871,9 +940,12 @@ func (s *TestSuiteIAM) TestServiceAccountOpsByUser(c *check) {
c.Fatalf("Unable to set user: %v", err)
}
err = s.adm.SetPolicy(ctx, policy, accessKey, false)
_, err = s.adm.AttachPolicy(ctx, madmin.PolicyAssociationReq{
Policies: []string{policy},
User: accessKey,
})
if err != nil {
c.Fatalf("Unable to set policy: %v", err)
c.Fatalf("unable to attach policy: %v", err)
}
// Create an madmin client with user creds
@@ -931,16 +1003,24 @@ func (s *TestSuiteIAM) TestServiceAccountDurationSecondsCondition(c *check) {
{
"Effect": "Allow",
"Action": [
"s3:PutObject",
"s3:GetObject",
"s3:ListBucket"
],
"Resource": [
"arn:aws:s3:::%s"
]
},
{
"Effect": "Allow",
"Action": [
"s3:PutObject",
"s3:GetObject"
],
"Resource": [
"arn:aws:s3:::%s/*"
]
}
]
}`, bucket))
}`, bucket, bucket))
err = s.adm.AddCannedPolicy(ctx, policy, policyBytes)
if err != nil {
c.Fatalf("policy add error: %v", err)
@@ -952,9 +1032,12 @@ func (s *TestSuiteIAM) TestServiceAccountDurationSecondsCondition(c *check) {
c.Fatalf("Unable to set user: %v", err)
}
err = s.adm.SetPolicy(ctx, policy, accessKey, false)
_, err = s.adm.AttachPolicy(ctx, madmin.PolicyAssociationReq{
Policies: []string{policy},
User: accessKey,
})
if err != nil {
c.Fatalf("Unable to set policy: %v", err)
c.Fatalf("unable to attach policy: %v", err)
}
// Create an madmin client with user creds
@@ -1010,16 +1093,24 @@ func (s *TestSuiteIAM) TestServiceAccountOpsByAdmin(c *check) {
{
"Effect": "Allow",
"Action": [
"s3:PutObject",
"s3:GetObject",
"s3:ListBucket"
],
"Resource": [
"arn:aws:s3:::%s"
]
},
{
"Effect": "Allow",
"Action": [
"s3:PutObject",
"s3:GetObject"
],
"Resource": [
"arn:aws:s3:::%s/*"
]
}
]
}`, bucket))
}`, bucket, bucket))
err = s.adm.AddCannedPolicy(ctx, policy, policyBytes)
if err != nil {
c.Fatalf("policy add error: %v", err)
@@ -1031,9 +1122,12 @@ func (s *TestSuiteIAM) TestServiceAccountOpsByAdmin(c *check) {
c.Fatalf("Unable to set user: %v", err)
}
err = s.adm.SetPolicy(ctx, policy, accessKey, false)
_, err = s.adm.AttachPolicy(ctx, madmin.PolicyAssociationReq{
Policies: []string{policy},
User: accessKey,
})
if err != nil {
c.Fatalf("Unable to set policy: %v", err)
c.Fatalf("unable to attach policy: %v", err)
}
// 1. Create a service account for the user
@@ -1564,7 +1658,7 @@ func (c *check) assertSvcAccSessionPolicyUpdate(ctx context.Context, s *TestSuit
"s3:ListBucket"
],
"Resource": [
"arn:aws:s3:::%s/*"
"arn:aws:s3:::%s"
]
}
]


@@ -59,9 +59,9 @@ import (
"github.com/minio/minio/internal/kms"
"github.com/minio/minio/internal/logger"
"github.com/minio/mux"
"github.com/minio/pkg/v2/logger/message/log"
xnet "github.com/minio/pkg/v2/net"
"github.com/minio/pkg/v2/policy"
"github.com/minio/pkg/v3/logger/message/log"
xnet "github.com/minio/pkg/v3/net"
"github.com/minio/pkg/v3/policy"
"github.com/secure-io/sio-go"
"github.com/zeebo/xxh3"
)
@@ -93,11 +93,18 @@ func (a adminAPIHandlers) ServerUpdateV2Handler(w http.ResponseWriter, r *http.R
return
}
if globalInplaceUpdateDisabled || currentReleaseTime.IsZero() {
if globalInplaceUpdateDisabled {
writeErrorResponseJSON(ctx, w, errorCodes.ToAPIErr(ErrMethodNotAllowed), r.URL)
return
}
if currentReleaseTime.IsZero() || currentReleaseTime.Equal(timeSentinel) {
apiErr := errorCodes.ToAPIErr(ErrMethodNotAllowed)
apiErr.Description = fmt.Sprintf("unable to perform in-place update, release time is unrecognized: %s", currentReleaseTime)
writeErrorResponseJSON(ctx, w, apiErr, r.URL)
return
}
vars := mux.Vars(r)
updateURL := vars["updateURL"]
dryRun := r.Form.Get("dry-run") == "true"
@@ -110,6 +117,11 @@ func (a adminAPIHandlers) ServerUpdateV2Handler(w http.ResponseWriter, r *http.R
}
}
local := globalLocalNodeName
if local == "" {
local = "127.0.0.1"
}
u, err := url.Parse(updateURL)
if err != nil {
writeErrorResponseJSON(ctx, w, toAdminAPIErr(ctx, err), r.URL)
@@ -128,6 +140,39 @@ func (a adminAPIHandlers) ServerUpdateV2Handler(w http.ResponseWriter, r *http.R
return
}
updateStatus := madmin.ServerUpdateStatusV2{
DryRun: dryRun,
Results: make([]madmin.ServerPeerUpdateStatus, 0, len(globalNotificationSys.allPeerClients)),
}
peerResults := make(map[string]madmin.ServerPeerUpdateStatus, len(globalNotificationSys.allPeerClients))
failedClients := make(map[int]bool, len(globalNotificationSys.allPeerClients))
if lrTime.Sub(currentReleaseTime) <= 0 {
updateStatus.Results = append(updateStatus.Results, madmin.ServerPeerUpdateStatus{
Host: local,
Err: fmt.Sprintf("server is running the latest version: %s", Version),
CurrentVersion: Version,
})
for _, client := range globalNotificationSys.peerClients {
updateStatus.Results = append(updateStatus.Results, madmin.ServerPeerUpdateStatus{
Host: client.String(),
Err: fmt.Sprintf("server is running the latest version: %s", Version),
CurrentVersion: Version,
})
}
// Marshal API response
jsonBytes, err := json.Marshal(updateStatus)
if err != nil {
writeErrorResponseJSON(ctx, w, toAdminAPIErr(ctx, err), r.URL)
return
}
writeSuccessResponseJSON(w, jsonBytes)
return
}
u.Path = path.Dir(u.Path) + SlashSeparator + releaseInfo
// Download Binary Once
binC, bin, err := downloadBinary(u, mode)
@@ -137,16 +182,6 @@ func (a adminAPIHandlers) ServerUpdateV2Handler(w http.ResponseWriter, r *http.R
return
}
updateStatus := madmin.ServerUpdateStatusV2{DryRun: dryRun}
peerResults := make(map[string]madmin.ServerPeerUpdateStatus)
local := globalLocalNodeName
if local == "" {
local = "127.0.0.1"
}
failedClients := make(map[int]struct{})
if globalIsDistErasure {
// Push binary to other servers
for idx, nerr := range globalNotificationSys.VerifyBinary(ctx, u, sha256Sum, releaseInfo, binC) {
@@ -156,7 +191,7 @@ func (a adminAPIHandlers) ServerUpdateV2Handler(w http.ResponseWriter, r *http.R
Err: nerr.Err.Error(),
CurrentVersion: Version,
}
failedClients[idx] = struct{}{}
failedClients[idx] = true
} else {
peerResults[nerr.Host.String()] = madmin.ServerPeerUpdateStatus{
Host: nerr.Host.String(),
@@ -167,25 +202,17 @@ func (a adminAPIHandlers) ServerUpdateV2Handler(w http.ResponseWriter, r *http.R
}
}
if lrTime.Sub(currentReleaseTime) > 0 {
if err = verifyBinary(u, sha256Sum, releaseInfo, mode, bytes.NewReader(bin)); err != nil {
peerResults[local] = madmin.ServerPeerUpdateStatus{
Host: local,
Err: err.Error(),
CurrentVersion: Version,
}
} else {
peerResults[local] = madmin.ServerPeerUpdateStatus{
Host: local,
CurrentVersion: Version,
UpdatedVersion: lrTime.Format(MinioReleaseTagTimeLayout),
}
if err = verifyBinary(u, sha256Sum, releaseInfo, mode, bytes.NewReader(bin)); err != nil {
peerResults[local] = madmin.ServerPeerUpdateStatus{
Host: local,
Err: err.Error(),
CurrentVersion: Version,
}
} else {
peerResults[local] = madmin.ServerPeerUpdateStatus{
Host: local,
Err: fmt.Sprintf("server is already running the latest version: %s", Version),
CurrentVersion: Version,
UpdatedVersion: lrTime.Format(MinioReleaseTagTimeLayout),
}
}
@@ -193,8 +220,7 @@ func (a adminAPIHandlers) ServerUpdateV2Handler(w http.ResponseWriter, r *http.R
if globalIsDistErasure {
ng := WithNPeers(len(globalNotificationSys.peerClients))
for idx, client := range globalNotificationSys.peerClients {
_, ok := failedClients[idx]
if ok {
if failedClients[idx] {
continue
}
client := client
@@ -237,17 +263,18 @@ func (a adminAPIHandlers) ServerUpdateV2Handler(w http.ResponseWriter, r *http.R
if globalIsDistErasure {
// Notify all other MinIO peers signal service.
startTime := time.Now().Add(restartUpdateDelay)
ng := WithNPeers(len(globalNotificationSys.peerClients))
for idx, client := range globalNotificationSys.peerClients {
_, ok := failedClients[idx]
if ok {
if failedClients[idx] {
continue
}
client := client
ng.Go(ctx, func() error {
prs, ok := peerResults[client.String()]
if ok && prs.CurrentVersion != prs.UpdatedVersion && prs.UpdatedVersion != "" {
return client.SignalService(serviceRestart, "", dryRun)
// We restart only on success, not for any failures.
if ok && prs.Err == "" {
return client.SignalService(serviceRestart, "", dryRun, &startTime)
}
return nil
}, idx, *client.host)
@@ -283,7 +310,9 @@ func (a adminAPIHandlers) ServerUpdateV2Handler(w http.ResponseWriter, r *http.R
writeSuccessResponseJSON(w, jsonBytes)
if !dryRun {
if lrTime.Sub(currentReleaseTime) > 0 {
prs, ok := peerResults[local]
// We restart only on success, not for any failures.
if ok && prs.Err == "" {
globalServiceSignalCh <- serviceRestart
}
}
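The restart path now passes an absolute startTime into every peer's SignalService call rather than letting each peer restart on receipt, so the whole cluster restarts at roughly the same instant. A sketch of that scheduling idea; restartUpdateDelay's real value is defined elsewhere, the 2s below is only for the demo:

package main

import (
	"fmt"
	"time"
)

// scheduleRestart fires restart at the absolute instant execAt, or
// immediately when no instant is given (mirroring the *time.Time parameter
// threaded through SignalService above).
func scheduleRestart(execAt *time.Time, restart func()) {
	if execAt == nil {
		restart()
		return
	}
	time.AfterFunc(time.Until(*execAt), restart)
}

func main() {
	startTime := time.Now().Add(2 * time.Second) // stand-in for restartUpdateDelay
	scheduleRestart(&startTime, func() { fmt.Println("restarting now") })
	time.Sleep(3 * time.Second) // keep main alive for the demo
}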
@@ -542,9 +571,12 @@ func (a adminAPIHandlers) ServiceV2Handler(w http.ResponseWriter, r *http.Reques
}
var objectAPI ObjectLayer
var execAt *time.Time
switch serviceSig {
case serviceRestart:
objectAPI, _ = validateAdminReq(ctx, w, r, policy.ServiceRestartAdminAction)
t := time.Now().Add(restartUpdateDelay)
execAt = &t
case serviceStop:
objectAPI, _ = validateAdminReq(ctx, w, r, policy.ServiceStopAdminAction)
case serviceFreeze, serviceUnFreeze:
@@ -571,7 +603,7 @@ func (a adminAPIHandlers) ServiceV2Handler(w http.ResponseWriter, r *http.Reques
}
if globalIsDistErasure {
for _, nerr := range globalNotificationSys.SignalServiceV2(serviceSig, dryRun) {
for _, nerr := range globalNotificationSys.SignalServiceV2(serviceSig, dryRun, execAt) {
if nerr.Err != nil && process {
waitingDrives := map[string]madmin.DiskMetrics{}
jerr := json.Unmarshal([]byte(nerr.Err.Error()), &waitingDrives)
@@ -844,9 +876,10 @@ func (a adminAPIHandlers) DataUsageInfoHandler(w http.ResponseWriter, r *http.Re
}
func lriToLockEntry(l lockRequesterInfo, now time.Time, resource, server string) *madmin.LockEntry {
t := time.Unix(0, l.Timestamp)
entry := &madmin.LockEntry{
Timestamp: l.Timestamp,
Elapsed: now.Sub(l.Timestamp),
Timestamp: t,
Elapsed: now.Sub(t),
Resource: resource,
ServerList: []string{server},
Source: l.Source,
@@ -1031,7 +1064,7 @@ func (a adminAPIHandlers) StartProfilingHandler(w http.ResponseWriter, r *http.R
// Start profiling on remote servers.
var hostErrs []NotificationPeerErr
for _, profiler := range profiles {
hostErrs = append(hostErrs, globalNotificationSys.StartProfiling(profiler)...)
hostErrs = append(hostErrs, globalNotificationSys.StartProfiling(ctx, profiler)...)
// Start profiling locally as well.
prof, err := startProfiler(profiler)
@@ -1112,7 +1145,11 @@ func (a adminAPIHandlers) ProfileHandler(w http.ResponseWriter, r *http.Request)
// Start profiling on remote servers.
for _, profiler := range profiles {
globalNotificationSys.StartProfiling(profiler)
// Limit start time to max 10s.
ctx, cancel := context.WithTimeout(ctx, 10*time.Second)
globalNotificationSys.StartProfiling(ctx, profiler)
// StartProfiling blocks, so we can cancel now.
cancel()
// Start profiling locally as well.
prof, err := startProfiler(profiler)
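Bounding StartProfiling with a 10s context and cancelling as soon as it returns is a standard pattern for blocking fan-out calls; the context only needs to guard the call itself. A generic sketch:

package main

import (
	"context"
	"fmt"
	"time"
)

// callWithDeadline bounds a blocking call with a timeout and releases the
// timer immediately after the call returns.
func callWithDeadline(parent context.Context, d time.Duration, f func(context.Context)) {
	ctx, cancel := context.WithTimeout(parent, d)
	defer cancel()
	f(ctx)
}

func main() {
	callWithDeadline(context.Background(), 10*time.Second, func(ctx context.Context) {
		select {
		case <-time.After(time.Second):
			fmt.Println("peers notified")
		case <-ctx.Done():
			fmt.Println("gave up:", ctx.Err())
		}
	})
}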
@@ -1127,6 +1164,10 @@ func (a adminAPIHandlers) ProfileHandler(w http.ResponseWriter, r *http.Request)
for {
select {
case <-ctx.Done():
// Stop remote profiles
go globalNotificationSys.DownloadProfilingData(GlobalContext, io.Discard)
// Stop local
globalProfilerMu.Lock()
defer globalProfilerMu.Unlock()
for k, v := range globalProfiler {
@@ -1431,7 +1472,7 @@ func getAggregatedBackgroundHealState(ctx context.Context, o ObjectLayer) (madmi
if globalIsDistErasure {
// Get heal status from other peers
peersHealStates, nerrs := globalNotificationSys.BackgroundHealStatus()
peersHealStates, nerrs := globalNotificationSys.BackgroundHealStatus(ctx)
var errCount int
for _, nerr := range nerrs {
if nerr.Err != nil {
@@ -2173,14 +2214,16 @@ func (a adminAPIHandlers) KMSCreateKeyHandler(w http.ResponseWriter, r *http.Req
return
}
if err := GlobalKMS.CreateKey(ctx, r.Form.Get("key-id")); err != nil {
if err := GlobalKMS.CreateKey(ctx, &kms.CreateKeyRequest{
Name: r.Form.Get("key-id"),
}); err != nil {
writeErrorResponseJSON(ctx, w, toAdminAPIErr(ctx, err), r.URL)
return
}
writeSuccessResponseHeadersOnly(w)
}
// KMSKeyStatusHandler - GET /minio/admin/v3/kms/status
// KMSStatusHandler - GET /minio/admin/v3/kms/status
func (a adminAPIHandlers) KMSStatusHandler(w http.ResponseWriter, r *http.Request) {
ctx := r.Context()
@@ -2194,22 +2237,12 @@ func (a adminAPIHandlers) KMSStatusHandler(w http.ResponseWriter, r *http.Reques
return
}
stat, err := GlobalKMS.Stat(ctx)
stat, err := GlobalKMS.Status(ctx)
if err != nil {
writeCustomErrorResponseJSON(ctx, w, errorCodes.ToAPIErr(ErrInternalError), err.Error(), r.URL)
return
}
status := madmin.KMSStatus{
Name: stat.Name,
DefaultKeyID: stat.DefaultKey,
Endpoints: make(map[string]madmin.ItemState, len(stat.Endpoints)),
}
for _, endpoint := range stat.Endpoints {
status.Endpoints[endpoint] = madmin.ItemOnline // TODO(aead): Implement an online check for mTLS
}
resp, err := json.Marshal(status)
resp, err := json.Marshal(stat)
if err != nil {
writeCustomErrorResponseJSON(ctx, w, errorCodes.ToAPIErr(ErrInternalError), err.Error(), r.URL)
return
@@ -2231,15 +2264,9 @@ func (a adminAPIHandlers) KMSKeyStatusHandler(w http.ResponseWriter, r *http.Req
return
}
stat, err := GlobalKMS.Stat(ctx)
if err != nil {
writeCustomErrorResponseJSON(ctx, w, errorCodes.ToAPIErr(ErrInternalError), err.Error(), r.URL)
return
}
keyID := r.Form.Get("key-id")
if keyID == "" {
keyID = stat.DefaultKey
keyID = GlobalKMS.DefaultKey
}
response := madmin.KMSKeyStatus{
KeyID: keyID,
@@ -2247,7 +2274,10 @@ func (a adminAPIHandlers) KMSKeyStatusHandler(w http.ResponseWriter, r *http.Req
kmsContext := kms.Context{"MinIO admin API": "KMSKeyStatusHandler"} // Context for a test key operation
// 1. Generate a new key using the KMS.
key, err := GlobalKMS.GenerateKey(ctx, keyID, kmsContext)
key, err := GlobalKMS.GenerateKey(ctx, &kms.GenerateKeyRequest{
Name: keyID,
AssociatedData: kmsContext,
})
if err != nil {
response.EncryptionErr = err.Error()
resp, err := json.Marshal(response)
@@ -2260,7 +2290,11 @@ func (a adminAPIHandlers) KMSKeyStatusHandler(w http.ResponseWriter, r *http.Req
}
// 2. Verify that we can indeed decrypt the (encrypted) key
decryptedKey, err := GlobalKMS.DecryptKey(key.KeyID, key.Ciphertext, kmsContext)
decryptedKey, err := GlobalKMS.Decrypt(ctx, &kms.DecryptRequest{
Name: key.KeyID,
Ciphertext: key.Ciphertext,
AssociatedData: kmsContext,
})
if err != nil {
response.DecryptionErr = err.Error()
resp, err := json.Marshal(response)
@@ -2333,6 +2367,7 @@ func getPoolsInfo(ctx context.Context, allDisks []madmin.Disk) (map[int]map[int]
}
func getServerInfo(ctx context.Context, pools, metrics bool, r *http.Request) madmin.InfoMessage {
const operationTimeout = 10 * time.Second
ldap := madmin.LDAP{}
if globalIAMSys.LDAPConfig.Enabled() {
ldapConn, err := globalIAMSys.LDAPConfig.LDAP.Connect()
@@ -2354,7 +2389,7 @@ func getServerInfo(ctx context.Context, pools, metrics bool, r *http.Request) ma
notifyTarget := fetchLambdaInfo()
local := getLocalServerProperty(globalEndpoints, r, metrics)
servers := globalNotificationSys.ServerInfo(metrics)
servers := globalNotificationSys.ServerInfo(ctx, metrics)
servers = append(servers, local)
var poolsInfo map[int]map[int]madmin.ErasureSetInfo
@@ -2373,7 +2408,9 @@ func getServerInfo(ctx context.Context, pools, metrics bool, r *http.Request) ma
mode = madmin.ItemOnline
// Load data usage
dataUsageInfo, err := loadDataUsageFromBackend(ctx, objectAPI)
ctx2, cancel := context.WithTimeout(ctx, operationTimeout)
dataUsageInfo, err := loadDataUsageFromBackend(ctx2, objectAPI)
cancel()
if err == nil {
buckets = madmin.Buckets{Count: dataUsageInfo.BucketsCount}
objects = madmin.Objects{Count: dataUsageInfo.ObjectsTotalCount}
@@ -2407,25 +2444,30 @@ func getServerInfo(ctx context.Context, pools, metrics bool, r *http.Request) ma
}
if pools {
poolsInfo, _ = getPoolsInfo(ctx, allDisks)
ctx2, cancel := context.WithTimeout(ctx, operationTimeout)
poolsInfo, _ = getPoolsInfo(ctx2, allDisks)
cancel()
}
}
domain := globalDomainNames
services := madmin.Services{
KMS: fetchKMSStatus(),
KMSStatus: fetchKMSStatusV2(ctx),
LDAP: ldap,
Logger: log,
Audit: audit,
Notifications: notifyTarget,
}
{
ctx2, cancel := context.WithTimeout(ctx, operationTimeout)
services.KMSStatus = fetchKMSStatus(ctx2)
cancel()
}
return madmin.InfoMessage{
Mode: string(mode),
Domain: domain,
Region: globalSite.Region,
SQSARN: globalEventNotifier.GetARNList(false),
Region: globalSite.Region(),
SQSARN: globalEventNotifier.GetARNList(),
DeploymentID: globalDeploymentID(),
Buckets: buckets,
Objects: objects,
@@ -3024,66 +3066,25 @@ func fetchLambdaInfo() []map[string][]madmin.TargetIDStatus {
return notify
}
// fetchKMSStatus fetches KMS-related status information.
func fetchKMSStatus() madmin.KMS {
kmsStat := madmin.KMS{}
if GlobalKMS == nil {
kmsStat.Status = "disabled"
return kmsStat
}
stat, err := GlobalKMS.Stat(context.Background())
if err != nil {
kmsStat.Status = string(madmin.ItemOffline)
return kmsStat
}
if len(stat.Endpoints) == 0 {
kmsStat.Status = stat.Name
return kmsStat
}
kmsStat.Status = string(madmin.ItemOnline)
kmsContext := kms.Context{"MinIO admin API": "ServerInfoHandler"} // Context for a test key operation
// 1. Generate a new key using the KMS.
key, err := GlobalKMS.GenerateKey(context.Background(), "", kmsContext)
if err != nil {
kmsStat.Encrypt = fmt.Sprintf("Encryption failed: %v", err)
} else {
kmsStat.Encrypt = "success"
}
// 2. Verify that we can indeed decrypt the (encrypted) key
decryptedKey, err := GlobalKMS.DecryptKey(key.KeyID, key.Ciphertext, kmsContext)
switch {
case err != nil:
kmsStat.Decrypt = fmt.Sprintf("Decryption failed: %v", err)
case subtle.ConstantTimeCompare(key.Plaintext, decryptedKey) != 1:
kmsStat.Decrypt = "Decryption failed: decrypted key does not match generated key"
default:
kmsStat.Decrypt = "success"
}
return kmsStat
}
// fetchKMSStatusV2 fetches KMS-related status information for all instances
func fetchKMSStatusV2(ctx context.Context) []madmin.KMS {
// fetchKMSStatus fetches KMS-related status information for all instances
func fetchKMSStatus(ctx context.Context) []madmin.KMS {
if GlobalKMS == nil {
return []madmin.KMS{}
}
results := GlobalKMS.Verify(ctx)
stats := []madmin.KMS{}
for _, result := range results {
stats = append(stats, madmin.KMS{
Status: result.Status,
Endpoint: result.Endpoint,
Encrypt: result.Encrypt,
Decrypt: result.Decrypt,
Version: result.Version,
})
stat, err := GlobalKMS.Status(ctx)
if err != nil {
kmsLogIf(ctx, err, "failed to fetch KMS status information")
return []madmin.KMS{}
}
stats := make([]madmin.KMS, 0, len(stat.Endpoints))
for endpoint, state := range stat.Endpoints {
stats = append(stats, madmin.KMS{
Status: string(state),
Endpoint: endpoint,
})
}
return stats
}
@@ -3094,7 +3095,7 @@ func targetStatus(ctx context.Context, h logger.Target) madmin.Status {
return madmin.Status{Status: string(madmin.ItemOffline)}
}
// fetchLoggerDetails return log info
// fetchLoggerInfo returns log info
func fetchLoggerInfo(ctx context.Context) ([]madmin.Logger, []madmin.Audit) {
var loggerInfo []madmin.Logger
var auditloggerInfo []madmin.Audit


@@ -463,6 +463,7 @@ func TestTopLockEntries(t *testing.T) {
Owner: lri.Owner,
ID: lri.UID,
Quorum: lri.Quorum,
Timestamp: time.Unix(0, lri.Timestamp),
})
}


@@ -63,8 +63,8 @@ const (
)
var (
errHealIdleTimeout = fmt.Errorf("healing results were not consumed for too long")
errHealStopSignalled = fmt.Errorf("heal stop signaled")
errHealIdleTimeout = errors.New("healing results were not consumed for too long")
errHealStopSignalled = errors.New("heal stop signaled")
errFnHealFromAPIErr = func(ctx context.Context, err error) error {
apiErr := toAdminAPIErr(ctx, err)
@@ -455,8 +455,8 @@ type healSequence struct {
// Number of total items healed against item type
healedItemsMap map[madmin.HealItemType]int64
// Number of total items where healing failed against endpoint and drive state
healFailedItemsMap map[string]int64
// Number of total items where healing failed against item type
healFailedItemsMap map[madmin.HealItemType]int64
// The time of the last scan/heal activity
lastHealActivity time.Time
@@ -497,7 +497,7 @@ func newHealSequence(ctx context.Context, bucket, objPrefix, clientAddr string,
ctx: ctx,
scannedItemsMap: make(map[madmin.HealItemType]int64),
healedItemsMap: make(map[madmin.HealItemType]int64),
healFailedItemsMap: make(map[string]int64),
healFailedItemsMap: make(map[madmin.HealItemType]int64),
}
}
@@ -543,12 +543,12 @@ func (h *healSequence) getHealedItemsMap() map[madmin.HealItemType]int64 {
// getHealFailedItemsMap - returns map of all items where heal failed, keyed by heal item type
func (h *healSequence) getHealFailedItemsMap() map[string]int64 {
func (h *healSequence) getHealFailedItemsMap() map[madmin.HealItemType]int64 {
h.mutex.RLock()
defer h.mutex.RUnlock()
// Make a copy before returning the value
retMap := make(map[string]int64, len(h.healFailedItemsMap))
retMap := make(map[madmin.HealItemType]int64, len(h.healFailedItemsMap))
for k, v := range h.healFailedItemsMap {
retMap[k] = v
}
@@ -556,29 +556,27 @@ func (h *healSequence) getHealFailedItemsMap() map[string]int64 {
return retMap
}
func (h *healSequence) countFailed(res madmin.HealResultItem) {
func (h *healSequence) countFailed(healType madmin.HealItemType) {
h.mutex.Lock()
defer h.mutex.Unlock()
for _, d := range res.After.Drives {
// For failed items we report the endpoint and drive state
// This will help users take corrective actions for drives
h.healFailedItemsMap[d.Endpoint+","+d.State]++
}
h.healFailedItemsMap[healType]++
h.lastHealActivity = UTCNow()
}
func (h *healSequence) countHeals(healType madmin.HealItemType, healed bool) {
func (h *healSequence) countScanned(healType madmin.HealItemType) {
h.mutex.Lock()
defer h.mutex.Unlock()
if !healed {
h.scannedItemsMap[healType]++
} else {
h.healedItemsMap[healType]++
}
h.scannedItemsMap[healType]++
h.lastHealActivity = UTCNow()
}
func (h *healSequence) countHealed(healType madmin.HealItemType) {
h.mutex.Lock()
defer h.mutex.Unlock()
h.healedItemsMap[healType]++
h.lastHealActivity = UTCNow()
}
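With this change all three counters — scanned, healed, failed — share the same key space (madmin.HealItemType). A compressed sketch of the bookkeeping; in the real code scanned is incremented at queue time and the other two when the response arrives:

package main

import (
	"errors"
	"fmt"
)

type healItemType string // stand-in for madmin.HealItemType

type healCounters struct {
	scanned, healed, failed map[healItemType]int64
}

func (c *healCounters) record(t healItemType, err error) {
	c.scanned[t]++ // queueHealTask path (countScanned)
	if err == nil {
		c.healed[t]++ // countHealed
	} else {
		c.failed[t]++ // countFailed: previously keyed by "endpoint,driveState"
	}
}

func main() {
	c := healCounters{
		scanned: map[healItemType]int64{},
		healed:  map[healItemType]int64{},
		failed:  map[healItemType]int64{},
	}
	c.record("object", nil)
	c.record("object", errors.New("read quorum"))
	fmt.Println(c.scanned, c.healed, c.failed)
}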
@@ -734,7 +732,7 @@ func (h *healSequence) queueHealTask(source healSource, healType madmin.HealItem
task.opts.ScanMode = madmin.HealNormalScan
}
h.countHeals(healType, false)
h.countScanned(healType)
if source.noWait {
select {
@@ -763,9 +761,23 @@ func (h *healSequence) queueHealTask(source healSource, healType madmin.HealItem
return nil
}
countOKDrives := func(drives []madmin.HealDriveInfo) (count int) {
for _, drive := range drives {
if drive.State == madmin.DriveStateOk {
count++
}
}
return count
}
// task queued, now wait for the response.
select {
case res := <-task.respCh:
if res.err == nil {
h.countHealed(healType)
} else {
h.countFailed(healType)
}
if !h.reportProgress {
if errors.Is(res.err, errSkipFile) { // this is only sent usually by nopHeal
return nil
@@ -778,6 +790,11 @@ func (h *healSequence) queueHealTask(source healSource, healType madmin.HealItem
if res.err != nil {
res.result.Detail = res.err.Error()
}
if res.result.ParityBlocks > 0 && res.result.DataBlocks > 0 && res.result.DataBlocks > res.result.ParityBlocks {
if got := countOKDrives(res.result.After.Drives); got < res.result.ParityBlocks {
res.result.Detail = fmt.Sprintf("quorum loss - expected %d minimum, got drive states in OK %d", res.result.ParityBlocks, got)
}
}
return h.pushHealResultItem(res.result)
case <-h.ctx.Done():
return nil
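The new Detail annotation flags heal results where fewer drives came back OK than the object's parity count. The check in isolation, with drive states reduced to plain strings ("ok" being the value behind madmin.DriveStateOk):

package main

import "fmt"

type driveInfo struct{ State string }

// countOKDrives mirrors the closure above.
func countOKDrives(drives []driveInfo) (count int) {
	for _, d := range drives {
		if d.State == "ok" { // madmin.DriveStateOk
			count++
		}
	}
	return count
}

func main() {
	// 4 data + 3 parity erasure layout; only 2 drives healed back to OK.
	after := []driveInfo{{"ok"}, {"offline"}, {"missing"}, {"ok"}, {"offline"}, {"offline"}, {"offline"}}
	dataBlocks, parityBlocks := 4, 3
	if dataBlocks > parityBlocks { // same guard as queueHealTask above
		if got := countOKDrives(after); got < parityBlocks {
			fmt.Printf("quorum loss - expected %d minimum, got %d OK drives\n", parityBlocks, got)
		} else {
			fmt.Printf("%d OK drives meets the parity threshold of %d\n", got, parityBlocks)
		}
	}
}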
@@ -789,18 +806,20 @@ func (h *healSequence) healDiskMeta(objAPI ObjectLayer) error {
return h.healMinioSysMeta(objAPI, minioConfigPrefix)()
}
func (h *healSequence) healItems(objAPI ObjectLayer, bucketsOnly bool) error {
func (h *healSequence) healItems(objAPI ObjectLayer) error {
if h.clientToken == bgHealingUUID {
// For background heal do nothing.
return nil
}
if err := h.healDiskMeta(objAPI); err != nil {
return err
if h.bucket == "" { // heal internal meta only during a site-wide heal
if err := h.healDiskMeta(objAPI); err != nil {
return err
}
}
// Heal buckets and objects
return h.healBuckets(objAPI, bucketsOnly)
return h.healBuckets(objAPI)
}
// traverseAndHeal - traverses on-disk data and performs healing
@@ -811,8 +830,7 @@ func (h *healSequence) healItems(objAPI ObjectLayer, bucketsOnly bool) error {
// has to wait until a safe point is reached, such as between scanning
// two objects.
func (h *healSequence) traverseAndHeal(objAPI ObjectLayer) {
bucketsOnly := false // Heals buckets and objects also.
h.traverseAndHealDoneCh <- h.healItems(objAPI, bucketsOnly)
h.traverseAndHealDoneCh <- h.healItems(objAPI)
xioutil.SafeClose(h.traverseAndHealDoneCh)
}
@@ -839,14 +857,14 @@ func (h *healSequence) healMinioSysMeta(objAPI ObjectLayer, metaPrefix string) f
}
// healBuckets - check for all buckets heal or just particular bucket.
func (h *healSequence) healBuckets(objAPI ObjectLayer, bucketsOnly bool) error {
func (h *healSequence) healBuckets(objAPI ObjectLayer) error {
if h.isQuitting() {
return errHealStopSignalled
}
// 1. If a bucket was specified, heal only the bucket.
if h.bucket != "" {
return h.healBucket(objAPI, h.bucket, bucketsOnly)
return h.healBucket(objAPI, h.bucket, false)
}
buckets, err := objAPI.ListBuckets(h.ctx, BucketOptions{})
@@ -860,7 +878,7 @@ func (h *healSequence) healBuckets(objAPI ObjectLayer, bucketsOnly bool) error {
})
for _, bucket := range buckets {
if err = h.healBucket(objAPI, bucket.Name, bucketsOnly); err != nil {
if err = h.healBucket(objAPI, bucket.Name, false); err != nil {
return err
}
}


@@ -159,14 +159,14 @@ func registerAdminRouter(router *mux.Router, enableConfigOps bool) {
// Info operations
adminRouter.Methods(http.MethodGet).Path(adminVersion + "/info").HandlerFunc(adminMiddleware(adminAPI.ServerInfoHandler, traceAllFlag, noObjLayerFlag))
adminRouter.Methods(http.MethodGet, http.MethodPost).Path(adminVersion + "/inspect-data").HandlerFunc(adminMiddleware(adminAPI.InspectDataHandler, noGZFlag, traceAllFlag))
adminRouter.Methods(http.MethodGet, http.MethodPost).Path(adminVersion + "/inspect-data").HandlerFunc(adminMiddleware(adminAPI.InspectDataHandler, noGZFlag, traceHdrsS3HFlag))
// StorageInfo operations
adminRouter.Methods(http.MethodGet).Path(adminVersion + "/storageinfo").HandlerFunc(adminMiddleware(adminAPI.StorageInfoHandler, traceAllFlag))
// DataUsageInfo operations
adminRouter.Methods(http.MethodGet).Path(adminVersion + "/datausageinfo").HandlerFunc(adminMiddleware(adminAPI.DataUsageInfoHandler, traceAllFlag))
// Metrics operation
adminRouter.Methods(http.MethodGet).Path(adminVersion + "/metrics").HandlerFunc(adminMiddleware(adminAPI.MetricsHandler, traceAllFlag))
adminRouter.Methods(http.MethodGet).Path(adminVersion + "/metrics").HandlerFunc(adminMiddleware(adminAPI.MetricsHandler, traceHdrsS3HFlag))
if globalIsDistErasure || globalIsErasure {
// Heal operations
@@ -193,9 +193,9 @@ func registerAdminRouter(router *mux.Router, enableConfigOps bool) {
// Profiling operations - deprecated API
adminRouter.Methods(http.MethodPost).Path(adminVersion+"/profiling/start").HandlerFunc(adminMiddleware(adminAPI.StartProfilingHandler, traceAllFlag, noObjLayerFlag)).
Queries("profilerType", "{profilerType:.*}")
adminRouter.Methods(http.MethodGet).Path(adminVersion + "/profiling/download").HandlerFunc(adminMiddleware(adminAPI.DownloadProfilingHandler, traceAllFlag, noObjLayerFlag))
adminRouter.Methods(http.MethodGet).Path(adminVersion + "/profiling/download").HandlerFunc(adminMiddleware(adminAPI.DownloadProfilingHandler, traceHdrsS3HFlag, noObjLayerFlag))
// Profiling operations
adminRouter.Methods(http.MethodPost).Path(adminVersion + "/profile").HandlerFunc(adminMiddleware(adminAPI.ProfileHandler, traceAllFlag, noObjLayerFlag))
adminRouter.Methods(http.MethodPost).Path(adminVersion + "/profile").HandlerFunc(adminMiddleware(adminAPI.ProfileHandler, traceHdrsS3HFlag, noObjLayerFlag))
// Config KV operations.
if enableConfigOps {
@@ -244,6 +244,9 @@ func registerAdminRouter(router *mux.Router, enableConfigOps bool) {
// STS accounts ops
adminRouter.Methods(http.MethodGet).Path(adminVersion+"/temporary-account-info").HandlerFunc(adminMiddleware(adminAPI.TemporaryAccountInfo)).Queries("accessKey", "{accessKey:.*}")
// Access key (service account/STS) operations
adminRouter.Methods(http.MethodGet).Path(adminVersion+"/list-access-keys-bulk").HandlerFunc(adminMiddleware(adminAPI.ListAccessKeysBulk)).Queries("listType", "{listType:.*}")
// Info policy IAM latest
adminRouter.Methods(http.MethodGet).Path(adminVersion+"/info-canned-policy").HandlerFunc(adminMiddleware(adminAPI.InfoCannedPolicy)).Queries("name", "{name:.*}")
// List policies latest
@@ -290,6 +293,7 @@ func registerAdminRouter(router *mux.Router, enableConfigOps bool) {
// Import IAM info
adminRouter.Methods(http.MethodPut).Path(adminVersion + "/import-iam").HandlerFunc(adminMiddleware(adminAPI.ImportIAM, noGZFlag))
adminRouter.Methods(http.MethodPut).Path(adminVersion + "/import-iam-v2").HandlerFunc(adminMiddleware(adminAPI.ImportIAMV2, noGZFlag))
// IDentity Provider configuration APIs
adminRouter.Methods(http.MethodPut).Path(adminVersion + "/idp-config/{type}/{name}").HandlerFunc(adminMiddleware(adminAPI.AddIdentityProviderCfg))
@@ -301,8 +305,9 @@ func registerAdminRouter(router *mux.Router, enableConfigOps bool) {
// LDAP specific service accounts ops
adminRouter.Methods(http.MethodPut).Path(adminVersion + "/idp/ldap/add-service-account").HandlerFunc(adminMiddleware(adminAPI.AddServiceAccountLDAP))
adminRouter.Methods(http.MethodGet).Path(adminVersion+"/idp/ldap/list-access-keys").
HandlerFunc(adminMiddleware(adminAPI.ListAccessKeysLDAP)).
Queries("userDN", "{userDN:.*}", "listType", "{listType:.*}")
HandlerFunc(adminMiddleware(adminAPI.ListAccessKeysLDAP)).Queries("userDN", "{userDN:.*}", "listType", "{listType:.*}")
adminRouter.Methods(http.MethodGet).Path(adminVersion+"/idp/ldap/list-access-keys-bulk").
HandlerFunc(adminMiddleware(adminAPI.ListAccessKeysLDAPBulk)).Queries("listType", "{listType:.*}")
// LDAP IAM operations
adminRouter.Methods(http.MethodGet).Path(adminVersion + "/idp/ldap/policy-entities").HandlerFunc(adminMiddleware(adminAPI.ListLDAPPolicyMappingEntities))
@@ -340,6 +345,9 @@ func registerAdminRouter(router *mux.Router, enableConfigOps bool) {
adminRouter.Methods(http.MethodGet).Path(adminVersion + "/list-jobs").HandlerFunc(
adminMiddleware(adminAPI.ListBatchJobs))
adminRouter.Methods(http.MethodGet).Path(adminVersion + "/status-job").HandlerFunc(
adminMiddleware(adminAPI.BatchJobStatus))
adminRouter.Methods(http.MethodGet).Path(adminVersion + "/describe-job").HandlerFunc(
adminMiddleware(adminAPI.DescribeBatchJob))
adminRouter.Methods(http.MethodDelete).Path(adminVersion + "/cancel-job").HandlerFunc(


@@ -1,4 +1,4 @@
// Copyright (c) 2015-2021 MinIO, Inc.
// Copyright (c) 2015-2024 MinIO, Inc.
//
// This file is part of MinIO Object Storage stack
//
@@ -18,7 +18,6 @@
package cmd
import (
"context"
"math"
"net/http"
"os"
@@ -31,6 +30,7 @@ import (
"github.com/minio/madmin-go/v3"
"github.com/minio/minio/internal/config"
"github.com/minio/minio/internal/kms"
xnet "github.com/minio/pkg/v3/net"
)
// getLocalServerProperty - returns madmin.ServerProperties for only the
@@ -64,9 +64,11 @@ func getLocalServerProperty(endpointServerPools EndpointServerPools, r *http.Req
if err := isServerResolvable(endpoint, 5*time.Second); err == nil {
network[nodeName] = string(madmin.ItemOnline)
} else {
network[nodeName] = string(madmin.ItemOffline)
// log once the error
peersLogOnceIf(context.Background(), err, nodeName)
if xnet.IsNetworkOrHostDown(err, false) {
network[nodeName] = string(madmin.ItemOffline)
} else if xnet.IsNetworkOrHostDown(err, true) {
network[nodeName] = "connection attempt timedout"
}
}
}
}
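
getLocalServerProperty now distinguishes a hard-down peer from one whose connection attempt merely timed out, using two xnet.IsNetworkOrHostDown probes (the second tolerating timeouts). A self-contained stand-in classifier built on stdlib error checks, assuming roughly the same taxonomy:

package main

import (
    "context"
    "errors"
    "fmt"
    "net"
    "os"
    "time"
)

// classify is an illustrative stand-in for the xnet.IsNetworkOrHostDown
// split above: timeouts are reported separately from hard failures.
func classify(err error) string {
    if err == nil {
        return "online"
    }
    var nerr net.Error
    if errors.Is(err, context.DeadlineExceeded) || os.IsTimeout(err) ||
        (errors.As(err, &nerr) && nerr.Timeout()) {
        return "connection attempt timed out"
    }
    // Hard failures (refused, unreachable, ...) count as offline.
    return "offline"
}

func main() {
    // TEST-NET address: the dial is expected to time out.
    _, err := net.DialTimeout("tcp", "203.0.113.1:9000", 50*time.Millisecond)
    fmt.Println(classify(err))
}
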


@@ -67,7 +67,7 @@ type ObjectToDelete struct {
ReplicateDecisionStr string `xml:"-"`
}
// createBucketConfiguration container for bucket configuration request from client.
// createBucketLocationConfiguration container for bucket configuration request from client.
// Used for parsing the location from the request body for Makebucket.
type createBucketLocationConfiguration struct {
XMLName xml.Name `xml:"CreateBucketConfiguration" json:"-"`


@@ -28,7 +28,7 @@ import (
"strconv"
"strings"
"github.com/Azure/azure-storage-blob-go/azblob"
"github.com/Azure/azure-sdk-for-go/sdk/azcore"
"github.com/minio/minio/internal/ioutil"
"google.golang.org/api/googleapi"
@@ -48,7 +48,7 @@ import (
levent "github.com/minio/minio/internal/config/lambda/event"
"github.com/minio/minio/internal/event"
"github.com/minio/minio/internal/hash"
"github.com/minio/pkg/v2/policy"
"github.com/minio/pkg/v3/policy"
)
// APIError structure
@@ -287,6 +287,7 @@ const (
ErrAdminNoSuchGroup
ErrAdminGroupNotEmpty
ErrAdminGroupDisabled
ErrAdminInvalidGroupName
ErrAdminNoSuchJob
ErrAdminNoSuchPolicy
ErrAdminPolicyChangeAlreadyApplied
@@ -425,6 +426,7 @@ const (
ErrAdminProfilerNotEnabled
ErrInvalidDecompressedSize
ErrAddUserInvalidArgument
ErrAddUserValidUTF
ErrAdminResourceInvalidArgument
ErrAdminAccountNotEligible
ErrAccountNotEligible
@@ -443,6 +445,8 @@ const (
ErrAdminNoAccessKey
ErrAdminNoSecretKey
ErrIAMNotInitialized
apiErrCodeEnd // This is used only for the testing code
)
@@ -456,9 +460,9 @@ func (e errorCodeMap) ToAPIErrWithErr(errCode APIErrorCode, err error) APIError
if err != nil {
apiErr.Description = fmt.Sprintf("%s (%s)", apiErr.Description, err)
}
if globalSite.Region != "" {
if region := globalSite.Region(); region != "" {
if errCode == ErrAuthorizationHeaderMalformed {
apiErr.Description = fmt.Sprintf("The authorization header is malformed; the region is wrong; expecting '%s'.", globalSite.Region)
apiErr.Description = fmt.Sprintf("The authorization header is malformed; the region is wrong; expecting '%s'.", region)
return apiErr
}
}
@@ -965,7 +969,7 @@ var errorCodes = errorCodeMap{
ErrReplicationRemoteConnectionError: {
Code: "XMinioAdminReplicationRemoteConnectionError",
Description: "Remote service connection error",
HTTPStatusCode: http.StatusNotFound,
HTTPStatusCode: http.StatusServiceUnavailable,
},
ErrReplicationBandwidthLimitError: {
Code: "XMinioAdminReplicationBandwidthLimitError",
@@ -974,7 +978,7 @@ var errorCodes = errorCodeMap{
},
ErrReplicationNoExistingObjects: {
Code: "XMinioReplicationNoExistingObjects",
Description: "No matching ExistingsObjects rule enabled",
Description: "No matching ExistingObjects rule enabled",
HTTPStatusCode: http.StatusBadRequest,
},
ErrRemoteTargetDenyAddError: {
@@ -1303,6 +1307,11 @@ var errorCodes = errorCodeMap{
Description: "Server not initialized yet, please try again.",
HTTPStatusCode: http.StatusServiceUnavailable,
},
ErrIAMNotInitialized: {
Code: "XMinioIAMNotInitialized",
Description: "IAM sub-system not initialized yet, please try again.",
HTTPStatusCode: http.StatusServiceUnavailable,
},
ErrBucketMetadataNotInitialized: {
Code: "XMinioBucketMetadataNotInitialized",
Description: "Bucket metadata not initialized yet, please try again.",
@@ -1477,8 +1486,8 @@ var errorCodes = errorCodeMap{
},
ErrTooManyRequests: {
Code: "TooManyRequests",
Description: "Deadline exceeded while waiting in incoming queue, please reduce your request rate",
HTTPStatusCode: http.StatusServiceUnavailable,
Description: "Please reduce your request rate",
HTTPStatusCode: http.StatusTooManyRequests,
},
ErrUnsupportedMetadata: {
Code: "InvalidArgument",
@@ -2101,6 +2110,16 @@ var errorCodes = errorCodeMap{
Description: "Expected LDAP short username but was given full DN.",
HTTPStatusCode: http.StatusBadRequest,
},
ErrAdminInvalidGroupName: {
Code: "XMinioInvalidGroupName",
Description: "The group name is invalid.",
HTTPStatusCode: http.StatusBadRequest,
},
ErrAddUserValidUTF: {
Code: "XMinioInvalidUTF",
Description: "Invalid UTF-8 character detected.",
HTTPStatusCode: http.StatusBadRequest,
},
}
// toAPIErrorCode - Converts embedded errors. Convenience
@@ -2140,6 +2159,8 @@ func toAPIErrorCode(ctx context.Context, err error) (apiErr APIErrorCode) {
apiErr = ErrAdminNoSuchGroup
case errGroupNotEmpty:
apiErr = ErrAdminGroupNotEmpty
case errGroupNameContainsReservedChars:
apiErr = ErrAdminInvalidGroupName
case errNoSuchJob:
apiErr = ErrAdminNoSuchJob
case errNoPolicyToAttachOrDetach:
@@ -2154,6 +2175,8 @@ func toAPIErrorCode(ctx context.Context, err error) (apiErr APIErrorCode) {
apiErr = ErrEntityTooSmall
case errAuthentication:
apiErr = ErrAccessDenied
case auth.ErrContainsReservedChars:
apiErr = ErrAdminInvalidAccessKey
case auth.ErrInvalidAccessKeyLength:
apiErr = ErrAdminInvalidAccessKey
case auth.ErrInvalidSecretKeyLength:
@@ -2430,9 +2453,9 @@ func toAPIError(ctx context.Context, err error) APIError {
switch e := err.(type) {
case kms.Error:
apiErr = APIError{
Description: e.Err.Error(),
Code: e.APICode,
HTTPStatusCode: e.HTTPStatusCode,
Description: e.Err,
HTTPStatusCode: e.Code,
}
case batchReplicationJobError:
apiErr = APIError{
@@ -2526,11 +2549,11 @@ func toAPIError(ctx context.Context, err error) APIError {
if len(e.Errors) >= 1 {
apiErr.Code = e.Errors[0].Reason
}
case azblob.StorageError:
case *azcore.ResponseError:
apiErr = APIError{
Code: string(e.ServiceCode()),
Code: e.ErrorCode,
Description: e.Error(),
HTTPStatusCode: e.Response().StatusCode,
HTTPStatusCode: e.StatusCode,
}
// Add more other SDK related errors here if any in future.
default:
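
The error mapping migrates from the legacy azblob.StorageError interface to the current SDK's *azcore.ResponseError, whose ErrorCode and StatusCode are plain fields. A minimal errors.As sketch; apiError is a local stand-in, and the description is formatted inline here because respErr.Error() requires a populated RawResponse:

package main

import (
    "errors"
    "fmt"
    "net/http"

    "github.com/Azure/azure-sdk-for-go/sdk/azcore"
)

// apiError is a local stand-in for the server's APIError type.
type apiError struct {
    Code           string
    Description    string
    HTTPStatusCode int
}

func toAPIError(err error) apiError {
    var respErr *azcore.ResponseError
    if errors.As(err, &respErr) {
        return apiError{
            Code:           respErr.ErrorCode, // service error code, e.g. "BlobNotFound"
            Description:    fmt.Sprintf("azure: %s (HTTP %d)", respErr.ErrorCode, respErr.StatusCode),
            HTTPStatusCode: respErr.StatusCode,
        }
    }
    return apiError{Code: "InternalError", Description: err.Error(),
        HTTPStatusCode: http.StatusInternalServerError}
}

func main() {
    err := &azcore.ResponseError{ErrorCode: "BlobNotFound", StatusCode: 404}
    fmt.Printf("%+v\n", toAPIError(fmt.Errorf("wrapped: %w", err)))
}
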
@@ -2569,7 +2592,7 @@ func getAPIError(code APIErrorCode) APIError {
return errorCodes.ToAPIErr(ErrInternalError)
}
// getErrorResponse gets in standard error and resource value and
// getAPIErrorResponse gets in standard error and resource value and
// provides an encodable populated response value
func getAPIErrorResponse(ctx context.Context, err APIError, resource, requestID, hostID string) APIErrorResponse {
reqInfo := logger.GetReqInfo(ctx)
@@ -2579,7 +2602,7 @@ func getAPIErrorResponse(ctx context.Context, err APIError, resource, requestID,
BucketName: reqInfo.BucketName,
Key: reqInfo.ObjectName,
Resource: resource,
Region: globalSite.Region,
Region: globalSite.Region(),
RequestID: requestID,
HostID: hostID,
ActualObjectSize: err.ObjectSize,


@@ -19,6 +19,7 @@ package cmd
import (
"bytes"
"context"
"encoding/json"
"encoding/xml"
"fmt"
@@ -53,7 +54,7 @@ func setCommonHeaders(w http.ResponseWriter) {
// Set `x-amz-bucket-region` only if region is set on the server
// by default minio uses an empty region.
if region := globalSite.Region; region != "" {
if region := globalSite.Region(); region != "" {
w.Header().Set(xhttp.AmzBucketRegion, region)
}
w.Header().Set(xhttp.AcceptRanges, "bytes")
@@ -107,7 +108,7 @@ func setPartsCountHeaders(w http.ResponseWriter, objInfo ObjectInfo) {
}
// Write object header
func setObjectHeaders(w http.ResponseWriter, objInfo ObjectInfo, rs *HTTPRangeSpec, opts ObjectOptions) (err error) {
func setObjectHeaders(ctx context.Context, w http.ResponseWriter, objInfo ObjectInfo, rs *HTTPRangeSpec, opts ObjectOptions) (err error) {
// set common headers
setCommonHeaders(w)
@@ -135,7 +136,7 @@ func setObjectHeaders(w http.ResponseWriter, objInfo ObjectInfo, rs *HTTPRangeSp
// Set tag count if object has tags
if len(objInfo.UserTags) > 0 {
tags, _ := tags.ParseObjectTags(objInfo.UserTags)
if tags.Count() > 0 {
if tags != nil && tags.Count() > 0 {
w.Header()[xhttp.AmzTagCount] = []string{strconv.Itoa(tags.Count())}
if opts.Tagging {
// This is MinIO only extension to return back tags along with the count.
@@ -212,7 +213,7 @@ func setObjectHeaders(w http.ResponseWriter, objInfo ObjectInfo, rs *HTTPRangeSp
if objInfo.IsRemote() {
// Check if object is being restored. For more information on x-amz-restore header see
// https://docs.aws.amazon.com/AmazonS3/latest/API/API_HeadObject.html#API_HeadObject_ResponseSyntax
w.Header()[xhttp.AmzStorageClass] = []string{objInfo.TransitionedObject.Tier}
w.Header()[xhttp.AmzStorageClass] = []string{filterStorageClass(ctx, objInfo.TransitionedObject.Tier)}
}
if lc, err := globalLifecycleSys.Get(objInfo.Bucket); err == nil {


@@ -35,7 +35,7 @@ import (
"github.com/minio/minio/internal/hash"
xhttp "github.com/minio/minio/internal/http"
"github.com/minio/minio/internal/logger"
"github.com/minio/pkg/v2/policy"
"github.com/minio/pkg/v3/policy"
xxml "github.com/minio/xxml"
)
@@ -544,7 +544,7 @@ func cleanReservedKeys(metadata map[string]string) map[string]string {
}
// generates a ListBucketVersions response for the given bucket with other enumerated options.
func generateListVersionsResponse(bucket, prefix, marker, versionIDMarker, delimiter, encodingType string, maxKeys int, resp ListObjectVersionsInfo, metadata metaCheckFn) ListVersionsResponse {
func generateListVersionsResponse(ctx context.Context, bucket, prefix, marker, versionIDMarker, delimiter, encodingType string, maxKeys int, resp ListObjectVersionsInfo, metadata metaCheckFn) ListVersionsResponse {
versions := make([]ObjectVersion, 0, len(resp.Objects))
owner := &Owner{
@@ -573,7 +573,7 @@ func generateListVersionsResponse(bucket, prefix, marker, versionIDMarker, delim
}
content.Size = object.Size
if object.StorageClass != "" {
content.StorageClass = object.StorageClass
content.StorageClass = filterStorageClass(ctx, object.StorageClass)
} else {
content.StorageClass = globalMinioDefaultStorageClass
}
@@ -593,8 +593,9 @@ func generateListVersionsResponse(bucket, prefix, marker, versionIDMarker, delim
for k, v := range cleanReservedKeys(object.UserDefined) {
content.UserMetadata.Set(k, v)
}
content.UserMetadata.Set("expires", object.Expires.Format(http.TimeFormat))
if !object.Expires.IsZero() {
content.UserMetadata.Set("expires", object.Expires.Format(http.TimeFormat))
}
content.Internal = &ObjectInternalInfo{
K: object.DataBlocks,
M: object.ParityBlocks,
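
The listing responses now guard the expires metadata key with IsZero (here, and again in the V2 hunk below): formatting Go's zero time would advertise a bogus 01 Jan 0001 expiry for objects that never had one. The guard in isolation:

package main

import (
    "fmt"
    "net/http"
    "time"
)

// expiresHeader returns the formatted value and whether it should be emitted.
func expiresHeader(expires time.Time) (string, bool) {
    if expires.IsZero() {
        return "", false // no Expires was ever set on the object
    }
    return expires.Format(http.TimeFormat), true
}

func main() {
    var unset time.Time
    if _, ok := expiresHeader(unset); !ok {
        fmt.Println("zero time: key omitted")
    }
    v, _ := expiresHeader(time.Date(2025, 1, 2, 3, 4, 5, 0, time.UTC))
    fmt.Println("set:", v) // set: Thu, 02 Jan 2025 03:04:05 GMT
}
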
@@ -634,7 +635,7 @@ func generateListVersionsResponse(bucket, prefix, marker, versionIDMarker, delim
}
// generates a ListObjectsV1 response for the given bucket with other enumerated options.
func generateListObjectsV1Response(bucket, prefix, marker, delimiter, encodingType string, maxKeys int, resp ListObjectsInfo) ListObjectsResponse {
func generateListObjectsV1Response(ctx context.Context, bucket, prefix, marker, delimiter, encodingType string, maxKeys int, resp ListObjectsInfo) ListObjectsResponse {
contents := make([]Object, 0, len(resp.Objects))
owner := &Owner{
ID: globalMinioDefaultOwnerID,
@@ -654,7 +655,7 @@ func generateListObjectsV1Response(bucket, prefix, marker, delimiter, encodingTy
}
content.Size = object.Size
if object.StorageClass != "" {
content.StorageClass = object.StorageClass
content.StorageClass = filterStorageClass(ctx, object.StorageClass)
} else {
content.StorageClass = globalMinioDefaultStorageClass
}
@@ -683,7 +684,7 @@ func generateListObjectsV1Response(bucket, prefix, marker, delimiter, encodingTy
}
// generates a ListObjectsV2 response for the given bucket with other enumerated options.
func generateListObjectsV2Response(bucket, prefix, token, nextToken, startAfter, delimiter, encodingType string, fetchOwner, isTruncated bool, maxKeys int, objects []ObjectInfo, prefixes []string, metadata metaCheckFn) ListObjectsV2Response {
func generateListObjectsV2Response(ctx context.Context, bucket, prefix, token, nextToken, startAfter, delimiter, encodingType string, fetchOwner, isTruncated bool, maxKeys int, objects []ObjectInfo, prefixes []string, metadata metaCheckFn) ListObjectsV2Response {
contents := make([]Object, 0, len(objects))
var owner *Owner
if fetchOwner {
@@ -707,7 +708,7 @@ func generateListObjectsV2Response(bucket, prefix, token, nextToken, startAfter,
}
content.Size = object.Size
if object.StorageClass != "" {
content.StorageClass = object.StorageClass
content.StorageClass = filterStorageClass(ctx, object.StorageClass)
} else {
content.StorageClass = globalMinioDefaultStorageClass
}
@@ -729,7 +730,9 @@ func generateListObjectsV2Response(bucket, prefix, token, nextToken, startAfter,
for k, v := range cleanReservedKeys(object.UserDefined) {
content.UserMetadata.Set(k, v)
}
content.UserMetadata.Set("expires", object.Expires.Format(http.TimeFormat))
if !object.Expires.IsZero() {
content.UserMetadata.Set("expires", object.Expires.Format(http.TimeFormat))
}
content.Internal = &ObjectInternalInfo{
K: object.DataBlocks,
M: object.ParityBlocks,
@@ -789,8 +792,8 @@ func generateInitiateMultipartUploadResponse(bucket, key, uploadID string) Initi
}
// generates CompleteMultipartUploadResponse for given bucket, key, location and ETag.
func generateCompleteMultpartUploadResponse(bucket, key, location string, oi ObjectInfo) CompleteMultipartUploadResponse {
cs := oi.decryptChecksums(0)
func generateCompleteMultipartUploadResponse(bucket, key, location string, oi ObjectInfo, h http.Header) CompleteMultipartUploadResponse {
cs := oi.decryptChecksums(0, h)
c := CompleteMultipartUploadResponse{
Location: location,
Bucket: bucket,
@@ -946,17 +949,18 @@ func writeSuccessResponseHeadersOnly(w http.ResponseWriter) {
// writeErrorResponse writes error headers
func writeErrorResponse(ctx context.Context, w http.ResponseWriter, err APIError, reqURL *url.URL) {
if err.HTTPStatusCode == http.StatusServiceUnavailable {
// Set retry-after header to indicate user-agents to retry request after 120secs.
switch err.HTTPStatusCode {
case http.StatusServiceUnavailable, http.StatusTooManyRequests:
// Set retry-after header to indicate user-agents to retry request after 60 seconds.
// https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Retry-After
w.Header().Set(xhttp.RetryAfter, "120")
w.Header().Set(xhttp.RetryAfter, "60")
}
switch err.Code {
case "InvalidRegion":
err.Description = fmt.Sprintf("Region does not match; expecting '%s'.", globalSite.Region)
err.Description = fmt.Sprintf("Region does not match; expecting '%s'.", globalSite.Region())
case "AuthorizationHeaderMalformed":
err.Description = fmt.Sprintf("The authorization header is malformed; the region is wrong; expecting '%s'.", globalSite.Region)
err.Description = fmt.Sprintf("The authorization header is malformed; the region is wrong; expecting '%s'.", globalSite.Region())
}
// Similar check to http.checkWriteHeaderCode
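
writeErrorResponse now sends Retry-After: 60 for 429 as well as 503, matching the ErrTooManyRequests move to http.StatusTooManyRequests above. A stdlib sketch of the header logic:

package main

import (
    "fmt"
    "net/http"
    "net/http/httptest"
)

func writeThrottled(w http.ResponseWriter, status int) {
    switch status {
    case http.StatusServiceUnavailable, http.StatusTooManyRequests:
        // Hint user-agents to back off before retrying.
        // https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Retry-After
        w.Header().Set("Retry-After", "60")
    }
    w.WriteHeader(status)
}

func main() {
    rec := httptest.NewRecorder()
    writeThrottled(rec, http.StatusTooManyRequests)
    fmt.Println(rec.Code, rec.Header().Get("Retry-After")) // 429 60
}
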


@@ -24,7 +24,7 @@ import (
consoleapi "github.com/minio/console/api"
xhttp "github.com/minio/minio/internal/http"
"github.com/minio/mux"
"github.com/minio/pkg/v2/wildcard"
"github.com/minio/pkg/v3/wildcard"
"github.com/rs/cors"
)
@@ -64,7 +64,7 @@ func setObjectLayer(o ObjectLayer) {
globalObjLayerMutex.Unlock()
}
// objectAPIHandler implements and provides http handlers for S3 API.
// objectAPIHandlers implements and provides http handlers for S3 API.
type objectAPIHandlers struct {
ObjectAPI func() ObjectLayer
}
@@ -436,7 +436,7 @@ func registerAPIRouter(router *mux.Router) {
Queries("notification", "")
// ListenNotification
router.Methods(http.MethodGet).
HandlerFunc(s3APIMiddleware(api.ListenNotificationHandler, noThrottleS3HFlag)).
HandlerFunc(s3APIMiddleware(api.ListenNotificationHandler, noThrottleS3HFlag, traceHdrsS3HFlag)).
Queries("events", "{events:.*}")
// ResetBucketReplicationStatus - MinIO extension API
router.Methods(http.MethodGet).
@@ -456,6 +456,14 @@ func registerAPIRouter(router *mux.Router) {
router.Methods(http.MethodGet).
HandlerFunc(s3APIMiddleware(api.GetBucketCorsHandler)).
Queries("cors", "")
// PutBucketCors - this is a dummy call.
router.Methods(http.MethodPut).
HandlerFunc(s3APIMiddleware(api.PutBucketCorsHandler)).
Queries("cors", "")
// DeleteBucketCors - this is a dummy call.
router.Methods(http.MethodDelete).
HandlerFunc(s3APIMiddleware(api.DeleteBucketCorsHandler)).
Queries("cors", "")
// GetBucketWebsiteHandler - this is a dummy call.
router.Methods(http.MethodGet).
HandlerFunc(s3APIMiddleware(api.GetBucketWebsiteHandler)).
@@ -472,6 +480,7 @@ func registerAPIRouter(router *mux.Router) {
router.Methods(http.MethodGet).
HandlerFunc(s3APIMiddleware(api.GetBucketLoggingHandler)).
Queries("logging", "")
// GetBucketTaggingHandler
router.Methods(http.MethodGet).
HandlerFunc(s3APIMiddleware(api.GetBucketTaggingHandler)).
@@ -615,7 +624,7 @@ func registerAPIRouter(router *mux.Router) {
// ListenNotification
apiRouter.Methods(http.MethodGet).Path(SlashSeparator).
HandlerFunc(s3APIMiddleware(api.ListenNotificationHandler, noThrottleS3HFlag)).
HandlerFunc(s3APIMiddleware(api.ListenNotificationHandler, noThrottleS3HFlag, traceHdrsS3HFlag)).
Queries("events", "{events:.*}")
// ListBuckets

File diff suppressed because one or more lines are too long


@@ -41,7 +41,7 @@ import (
xjwt "github.com/minio/minio/internal/jwt"
"github.com/minio/minio/internal/logger"
"github.com/minio/minio/internal/mcontext"
"github.com/minio/pkg/v2/policy"
"github.com/minio/pkg/v3/policy"
)
// Verify if request has JWT.
@@ -178,7 +178,7 @@ func validateAdminSignature(ctx context.Context, r *http.Request, region string)
logger.GetReqInfo(ctx).Cred = cred
logger.GetReqInfo(ctx).Owner = owner
logger.GetReqInfo(ctx).Region = globalSite.Region
logger.GetReqInfo(ctx).Region = globalSite.Region()
return cred, owner, ErrNone
}
@@ -222,7 +222,7 @@ func mustGetClaimsFromToken(r *http.Request) map[string]interface{} {
return claims
}
func getClaimsFromTokenWithSecret(token, secret string) (map[string]interface{}, error) {
func getClaimsFromTokenWithSecret(token, secret string) (*xjwt.MapClaims, error) {
// JWT token for x-amz-security-token is signed with admin
// secret key, temporary credentials become invalid if
// server admin credentials change. This is done to ensure
@@ -244,7 +244,7 @@ func getClaimsFromTokenWithSecret(token, secret string) (map[string]interface{},
// If AuthZPlugin is set, return without any further checks.
if newGlobalAuthZPluginFn() != nil {
return claims.Map(), nil
return claims, nil
}
// Check if a session policy is set. If so, decode it here.
@@ -263,12 +263,16 @@ func getClaimsFromTokenWithSecret(token, secret string) (map[string]interface{},
claims.MapClaims[sessionPolicyNameExtracted] = string(spBytes)
}
return claims.Map(), nil
return claims, nil
}
// Fetch claims in the security token returned by the client.
func getClaimsFromToken(token string) (map[string]interface{}, error) {
return getClaimsFromTokenWithSecret(token, globalActiveCred.SecretKey)
jwtClaims, err := getClaimsFromTokenWithSecret(token, globalActiveCred.SecretKey)
if err != nil {
return nil, err
}
return jwtClaims.Map(), nil
}
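
getClaimsFromTokenWithSecret now returns the typed *xjwt.MapClaims, and callers convert with Map() only at the boundary, which lets checkClaimsFromToken keep typed access. The shape of that pattern, with a hypothetical mapClaims type standing in for xjwt.MapClaims:

package main

import "fmt"

// mapClaims is a hypothetical stand-in for xjwt.MapClaims.
type mapClaims struct {
    MapClaims map[string]interface{}
}

// Map returns a shallow copy for callers that only need plain maps.
func (c *mapClaims) Map() map[string]interface{} {
    out := make(map[string]interface{}, len(c.MapClaims))
    for k, v := range c.MapClaims {
        out[k] = v
    }
    return out
}

// parseClaims plays the role of getClaimsFromTokenWithSecret: it keeps
// the typed value so callers can attach fields before converting.
func parseClaims(token string) (*mapClaims, error) {
    return &mapClaims{MapClaims: map[string]interface{}{"sub": token}}, nil
}

func main() {
    claims, err := parseClaims("alice")
    if err != nil {
        panic(err)
    }
    claims.MapClaims["sessionPolicy"] = "{}" // typed access before conversion
    fmt.Println(claims.Map())                // map boundary only at the caller
}
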
// Fetch claims in the security token returned by the client and validate the token.
@@ -319,7 +323,7 @@ func checkClaimsFromToken(r *http.Request, cred auth.Credentials) (map[string]in
if err != nil {
return nil, toAPIErrorCode(r.Context(), err)
}
return claims, ErrNone
return claims.Map(), ErrNone
}
claims := xjwt.NewMapClaims()
@@ -368,7 +372,7 @@ func authenticateRequest(ctx context.Context, r *http.Request, action policy.Act
}
cred, owner, s3Err = getReqAccessKeyV2(r)
case authTypeSigned, authTypePresigned:
region := globalSite.Region
region := globalSite.Region()
switch action {
case policy.GetBucketLocationAction, policy.ListAllMyBucketsAction:
region = ""
@@ -384,7 +388,7 @@ func authenticateRequest(ctx context.Context, r *http.Request, action policy.Act
logger.GetReqInfo(ctx).Cred = cred
logger.GetReqInfo(ctx).Owner = owner
logger.GetReqInfo(ctx).Region = globalSite.Region
logger.GetReqInfo(ctx).Region = globalSite.Region()
// region is valid only for CreateBucketAction.
var region string
@@ -684,7 +688,7 @@ func validateSignature(atype authType, r *http.Request) (auth.Credentials, bool,
}
cred, owner, s3Err = getReqAccessKeyV2(r)
case authTypePresigned, authTypeSigned:
region := globalSite.Region
region := globalSite.Region()
if s3Err = isReqAuthenticated(GlobalContext, r, region, serviceS3); s3Err != ErrNone {
return cred, owner, s3Err
}
@@ -745,7 +749,7 @@ func isPutRetentionAllowed(bucketName, objectName string, retDays int, retDate t
func isPutActionAllowed(ctx context.Context, atype authType, bucketName, objectName string, r *http.Request, action policy.Action) (s3Err APIErrorCode) {
var cred auth.Credentials
var owner bool
region := globalSite.Region
region := globalSite.Region()
switch atype {
case authTypeUnknown:
return ErrSignatureVersionNotSupported


@@ -28,7 +28,7 @@ import (
"time"
"github.com/minio/minio/internal/auth"
"github.com/minio/pkg/v2/policy"
"github.com/minio/pkg/v3/policy"
)
type nullReader struct{}
@@ -403,7 +403,7 @@ func TestIsReqAuthenticated(t *testing.T) {
// Validates all testcases.
for i, testCase := range testCases {
s3Error := isReqAuthenticated(ctx, testCase.req, globalSite.Region, serviceS3)
s3Error := isReqAuthenticated(ctx, testCase.req, globalSite.Region(), serviceS3)
if s3Error != testCase.s3Error {
if _, err := io.ReadAll(testCase.req.Body); toAPIErrorCode(ctx, err) != testCase.s3Error {
t.Fatalf("Test %d: Unexpected S3 error: want %d - got %d (got after reading request %s)", i, testCase.s3Error, s3Error, toAPIError(ctx, err).Code)
@@ -443,7 +443,7 @@ func TestCheckAdminRequestAuthType(t *testing.T) {
{Request: mustNewPresignedRequest(http.MethodGet, "http://127.0.0.1:9000", 0, nil, t), ErrCode: ErrAccessDenied},
}
for i, testCase := range testCases {
if _, s3Error := checkAdminRequestAuth(ctx, testCase.Request, policy.AllAdminActions, globalSite.Region); s3Error != testCase.ErrCode {
if _, s3Error := checkAdminRequestAuth(ctx, testCase.Request, policy.AllAdminActions, globalSite.Region()); s3Error != testCase.ErrCode {
t.Errorf("Test %d: Unexpected s3error returned wanted %d, got %d", i, testCase.ErrCode, s3Error)
}
}


@@ -25,7 +25,7 @@ import (
"time"
"github.com/minio/madmin-go/v3"
"github.com/minio/pkg/v2/env"
"github.com/minio/pkg/v3/env"
)
// healTask represents what to heal along with options
@@ -133,19 +133,20 @@ func (h *healRoutine) AddWorker(ctx context.Context, objAPI ObjectLayer, bgSeq *
}
}
if bgSeq != nil {
// We increment relevant counter based on the heal result for prometheus reporting.
if err != nil {
bgSeq.countFailed(res)
} else {
bgSeq.countHeals(res.Type, false)
}
}
if task.respCh != nil {
task.respCh <- healResult{result: res, err: err}
continue
}
// when respCh is not set, the caller is not waiting, but we
// update the relevant metrics for them
if bgSeq != nil {
if err == nil {
bgSeq.countHealed(res.Type)
} else {
bgSeq.countFailed(res.Type)
}
}
case <-ctx.Done():
return
}
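
The reworked worker loop gives each heal result exactly one counter owner: when respCh is set, the waiting caller (queueHealTask above) records the outcome; otherwise the worker does. A compact sketch of that ownership rule:

package main

import "fmt"

type result struct{ err error }

type counters struct{ healed, failed int }

func (c *counters) record(r result) {
    if r.err == nil {
        c.healed++
    } else {
        c.failed++
    }
}

// handle mirrors the worker loop: if the caller is waiting on respCh,
// hand the result over and let the caller count it; otherwise count here.
func handle(c *counters, respCh chan<- result, r result) {
    if respCh != nil {
        respCh <- r
        return
    }
    c.record(r)
}

func main() {
    var c counters
    handle(&c, nil, result{}) // fire-and-forget: worker counts
    ch := make(chan result, 1)
    handle(&c, ch, result{err: fmt.Errorf("boom")})
    c.record(<-ch)                  // waiting caller counts
    fmt.Println(c.healed, c.failed) // 1 1
}
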


@@ -33,7 +33,7 @@ import (
"github.com/minio/madmin-go/v3"
"github.com/minio/minio-go/v7/pkg/set"
"github.com/minio/minio/internal/config"
"github.com/minio/pkg/v2/env"
"github.com/minio/pkg/v3/env"
)
const (
@@ -72,10 +72,12 @@ type healingTracker struct {
// Numbers when current bucket started healing,
// for resuming with correct numbers.
ResumeItemsHealed uint64 `json:"-"`
ResumeItemsFailed uint64 `json:"-"`
ResumeBytesDone uint64 `json:"-"`
ResumeBytesFailed uint64 `json:"-"`
ResumeItemsHealed uint64 `json:"-"`
ResumeItemsFailed uint64 `json:"-"`
ResumeItemsSkipped uint64 `json:"-"`
ResumeBytesDone uint64 `json:"-"`
ResumeBytesFailed uint64 `json:"-"`
ResumeBytesSkipped uint64 `json:"-"`
// Filled on startup/restarts.
QueuedBuckets []string
@@ -88,6 +90,11 @@ type healingTracker struct {
ItemsSkipped uint64
BytesSkipped uint64
RetryAttempts uint64
Finished bool // finished healing, whether with errors or not
// Add future tracking capabilities
// Be sure that they are included in toHealingDisk
}
@@ -141,14 +148,34 @@ func initHealingTracker(disk StorageAPI, healID string) *healingTracker {
return h
}
func (h healingTracker) getLastUpdate() time.Time {
func (h *healingTracker) resetHealing() {
h.mu.Lock()
defer h.mu.Unlock()
h.ItemsHealed = 0
h.ItemsFailed = 0
h.BytesDone = 0
h.BytesFailed = 0
h.ResumeItemsHealed = 0
h.ResumeItemsFailed = 0
h.ResumeBytesDone = 0
h.ResumeBytesFailed = 0
h.ItemsSkipped = 0
h.BytesSkipped = 0
h.HealedBuckets = nil
h.Object = ""
h.Bucket = ""
}
func (h *healingTracker) getLastUpdate() time.Time {
h.mu.RLock()
defer h.mu.RUnlock()
return h.LastUpdate
}
func (h healingTracker) getBucket() string {
func (h *healingTracker) getBucket() string {
h.mu.RLock()
defer h.mu.RUnlock()
@@ -162,7 +189,7 @@ func (h *healingTracker) setBucket(bucket string) {
h.Bucket = bucket
}
func (h healingTracker) getObject() string {
func (h *healingTracker) getObject() string {
h.mu.RLock()
defer h.mu.RUnlock()
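
The getters switch from value receivers to pointer receivers: a value receiver copies the embedded RWMutex along with the struct, which go vet flags (copylocks) and which silently defeats the locking. A tiny illustration of the corrected shape:

package main

import (
    "fmt"
    "sync"
)

type tracker struct {
    mu sync.RWMutex
    n  int
}

// Pointer receiver: every caller locks the same mutex. A value receiver
// here would lock a throwaway copy instead.
func (t *tracker) get() int {
    t.mu.RLock()
    defer t.mu.RUnlock()
    return t.n
}

func (t *tracker) set(n int) {
    t.mu.Lock()
    defer t.mu.Unlock()
    t.n = n
}

func main() {
    var t tracker
    t.set(7)
    fmt.Println(t.get()) // 7, with real mutual exclusion
}
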
@@ -196,9 +223,6 @@ func (h *healingTracker) updateProgress(success, skipped bool, bytes uint64) {
// update will update the tracker on the disk.
// If the tracker has been deleted an error is returned.
func (h *healingTracker) update(ctx context.Context) error {
if h.disk.Healing() == nil {
return fmt.Errorf("healingTracker: drive %q is not marked as healing", h.ID)
}
h.mu.Lock()
if h.ID == "" || h.PoolIndex < 0 || h.SetIndex < 0 || h.DiskIndex < 0 {
h.ID, _ = h.disk.GetDiskID()
@@ -260,8 +284,10 @@ func (h *healingTracker) resume() {
h.ItemsHealed = h.ResumeItemsHealed
h.ItemsFailed = h.ResumeItemsFailed
h.ItemsSkipped = h.ResumeItemsSkipped
h.BytesDone = h.ResumeBytesDone
h.BytesFailed = h.ResumeBytesFailed
h.BytesSkipped = h.ResumeBytesSkipped
}
// bucketDone should be called when a bucket is done healing.
@@ -272,8 +298,10 @@ func (h *healingTracker) bucketDone(bucket string) {
h.ResumeItemsHealed = h.ItemsHealed
h.ResumeItemsFailed = h.ItemsFailed
h.ResumeItemsSkipped = h.ItemsSkipped
h.ResumeBytesDone = h.BytesDone
h.ResumeBytesFailed = h.BytesFailed
h.ResumeBytesSkipped = h.BytesSkipped
h.HealedBuckets = append(h.HealedBuckets, bucket)
for i, b := range h.QueuedBuckets {
if b == bucket {
@@ -322,6 +350,7 @@ func (h *healingTracker) toHealingDisk() madmin.HealingDisk {
PoolIndex: h.PoolIndex,
SetIndex: h.SetIndex,
DiskIndex: h.DiskIndex,
Finished: h.Finished,
Path: h.Path,
Started: h.Started.UTC(),
LastUpdate: h.LastUpdate.UTC(),
@@ -337,6 +366,7 @@ func (h *healingTracker) toHealingDisk() madmin.HealingDisk {
Object: h.Object,
QueuedBuckets: h.QueuedBuckets,
HealedBuckets: h.HealedBuckets,
RetryAttempts: h.RetryAttempts,
ObjectsHealed: h.ItemsHealed, // Deprecated July 2021
ObjectsFailed: h.ItemsFailed, // Deprecated July 2021
@@ -351,24 +381,26 @@ func initAutoHeal(ctx context.Context, objAPI ObjectLayer) {
}
initBackgroundHealing(ctx, objAPI) // start quick background healing
if env.Get("_MINIO_AUTO_DRIVE_HEALING", config.EnableOn) == config.EnableOn || env.Get("_MINIO_AUTO_DISK_HEALING", config.EnableOn) == config.EnableOn {
if env.Get("_MINIO_AUTO_DRIVE_HEALING", config.EnableOn) == config.EnableOn {
globalBackgroundHealState.pushHealLocalDisks(getLocalDisksToHeal()...)
go monitorLocalDisksAndHeal(ctx, z)
}
go globalMRFState.startMRFPersistence()
go globalMRFState.healRoutine(z)
}
func getLocalDisksToHeal() (disksToHeal Endpoints) {
globalLocalDrivesMu.RLock()
localDrives := cloneDrives(globalLocalDrives)
localDrives := cloneDrives(globalLocalDrivesMap)
globalLocalDrivesMu.RUnlock()
for _, disk := range localDrives {
_, err := disk.GetDiskID()
_, err := disk.DiskInfo(context.Background(), DiskInfoOptions{})
if errors.Is(err, errUnformattedDisk) {
disksToHeal = append(disksToHeal, disk.Endpoint())
continue
}
if disk.Healing() != nil {
if h := disk.Healing(); h != nil && !h.Finished {
disksToHeal = append(disksToHeal, disk.Endpoint())
}
}
@@ -382,6 +414,8 @@ func getLocalDisksToHeal() (disksToHeal Endpoints) {
var newDiskHealingTimeout = newDynamicTimeout(30*time.Second, 10*time.Second)
var errRetryHealing = errors.New("some items failed to heal, we will retry healing this drive again")
func healFreshDisk(ctx context.Context, z *erasureServerPools, endpoint Endpoint) error {
poolIdx, setIdx := endpoint.PoolIdx, endpoint.SetIdx
disk := getStorageViaEndpoint(endpoint)
@@ -389,6 +423,17 @@ func healFreshDisk(ctx context.Context, z *erasureServerPools, endpoint Endpoint
return fmt.Errorf("Unexpected error disk must be initialized by now after formatting: %s", endpoint)
}
_, err := disk.DiskInfo(ctx, DiskInfoOptions{})
if err != nil {
if errors.Is(err, errDriveIsRoot) {
// This is a root drive, ignore and move on
return nil
}
if !errors.Is(err, errUnformattedDisk) {
return err
}
}
// Prevent parallel erasure set healing
locker := z.NewNSLock(minioMetaBucket, fmt.Sprintf("new-drive-healing/%d/%d", poolIdx, setIdx))
lkctx, err := locker.GetLock(ctx, newDiskHealingTimeout)
@@ -451,12 +496,30 @@ func healFreshDisk(ctx context.Context, z *erasureServerPools, endpoint Endpoint
return err
}
healingLogEvent(ctx, "Healing of drive '%s' is finished (healed: %d, skipped: %d, failed: %d).", disk, tracker.ItemsHealed, tracker.ItemsSkipped, tracker.ItemsFailed)
// if objects have failed healing, we attempt a retry to heal the drive up to 3 times before giving up.
if tracker.ItemsFailed > 0 && tracker.RetryAttempts < 4 {
tracker.RetryAttempts++
if len(tracker.QueuedBuckets) > 0 {
return fmt.Errorf("not all buckets were healed: %v", tracker.QueuedBuckets)
healingLogEvent(ctx, "Healing of drive '%s' is incomplete, retrying %s time (healed: %d, skipped: %d, failed: %d).", disk,
humanize.Ordinal(int(tracker.RetryAttempts)), tracker.ItemsHealed, tracker.ItemsSkipped, tracker.ItemsFailed)
tracker.resetHealing()
bugLogIf(ctx, tracker.update(ctx))
return errRetryHealing
}
if tracker.ItemsFailed > 0 {
healingLogEvent(ctx, "Healing of drive '%s' is incomplete, retried %d times (healed: %d, skipped: %d, failed: %d).", disk,
tracker.RetryAttempts, tracker.ItemsHealed, tracker.ItemsSkipped, tracker.ItemsFailed)
} else {
if tracker.RetryAttempts > 0 {
healingLogEvent(ctx, "Healing of drive '%s' is complete, retried %d times (healed: %d, skipped: %d).", disk,
tracker.RetryAttempts-1, tracker.ItemsHealed, tracker.ItemsSkipped)
} else {
healingLogEvent(ctx, "Healing of drive '%s' is finished (healed: %d, skipped: %d).", disk, tracker.ItemsHealed, tracker.ItemsSkipped)
}
}
if serverDebugLog {
tracker.printTo(os.Stdout)
fmt.Printf("\n")
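
healFreshDisk now retries a drive a bounded number of times when items failed, resetting the per-pass counters (resetHealing) and logging the attempt with humanize.Ordinal. A sketch of that bounded retry loop; healPass is a hypothetical single pass:

package main

import (
    "fmt"

    "github.com/dustin/go-humanize"
)

// healPass is a hypothetical single drive-heal pass; it fails twice, then succeeds.
func healPass(attempt int) (failedItems int) {
    if attempt < 3 {
        return 5
    }
    return 0
}

func main() {
    const maxAttempts = 4 // the code above retries while RetryAttempts < 4
    for attempt := 1; attempt <= maxAttempts; attempt++ {
        failed := healPass(attempt)
        if failed > 0 && attempt < maxAttempts {
            // Counters are reset (cf. tracker.resetHealing) and the pass re-runs.
            fmt.Printf("incomplete, retrying %s time (failed: %d)\n",
                humanize.Ordinal(attempt), failed)
            continue
        }
        fmt.Printf("finished after %d attempt(s), failed: %d\n", attempt, failed)
        break
    }
}
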
@@ -486,7 +549,8 @@ func healFreshDisk(ctx context.Context, z *erasureServerPools, endpoint Endpoint
continue
}
if t.HealID == tracker.HealID {
t.delete(ctx)
t.Finished = true
t.update(ctx)
}
}
@@ -528,7 +592,7 @@ func monitorLocalDisksAndHeal(ctx context.Context, z *erasureServerPools) {
if err := healFreshDisk(ctx, z, disk); err != nil {
globalBackgroundHealState.setDiskHealingStatus(disk, false)
timedout := OperationTimedOut{}
if !errors.Is(err, context.Canceled) && !errors.As(err, &timedout) {
if !errors.Is(err, context.Canceled) && !errors.As(err, &timedout) && !errors.Is(err, errRetryHealing) {
printEndpointError(disk, err, false)
}
return


@@ -132,6 +132,12 @@ func (z *healingTracker) DecodeMsg(dc *msgp.Reader) (err error) {
err = msgp.WrapError(err, "ResumeItemsFailed")
return
}
case "ResumeItemsSkipped":
z.ResumeItemsSkipped, err = dc.ReadUint64()
if err != nil {
err = msgp.WrapError(err, "ResumeItemsSkipped")
return
}
case "ResumeBytesDone":
z.ResumeBytesDone, err = dc.ReadUint64()
if err != nil {
@@ -144,6 +150,12 @@ func (z *healingTracker) DecodeMsg(dc *msgp.Reader) (err error) {
err = msgp.WrapError(err, "ResumeBytesFailed")
return
}
case "ResumeBytesSkipped":
z.ResumeBytesSkipped, err = dc.ReadUint64()
if err != nil {
err = msgp.WrapError(err, "ResumeBytesSkipped")
return
}
case "QueuedBuckets":
var zb0002 uint32
zb0002, err = dc.ReadArrayHeader()
@@ -200,6 +212,18 @@ func (z *healingTracker) DecodeMsg(dc *msgp.Reader) (err error) {
err = msgp.WrapError(err, "BytesSkipped")
return
}
case "RetryAttempts":
z.RetryAttempts, err = dc.ReadUint64()
if err != nil {
err = msgp.WrapError(err, "RetryAttempts")
return
}
case "Finished":
z.Finished, err = dc.ReadBool()
if err != nil {
err = msgp.WrapError(err, "Finished")
return
}
default:
err = dc.Skip()
if err != nil {
@@ -213,9 +237,9 @@ func (z *healingTracker) DecodeMsg(dc *msgp.Reader) (err error) {
// EncodeMsg implements msgp.Encodable
func (z *healingTracker) EncodeMsg(en *msgp.Writer) (err error) {
// map header, size 25
// map header, size 29
// write "ID"
err = en.Append(0xde, 0x0, 0x19, 0xa2, 0x49, 0x44)
err = en.Append(0xde, 0x0, 0x1d, 0xa2, 0x49, 0x44)
if err != nil {
return
}
@@ -394,6 +418,16 @@ func (z *healingTracker) EncodeMsg(en *msgp.Writer) (err error) {
err = msgp.WrapError(err, "ResumeItemsFailed")
return
}
// write "ResumeItemsSkipped"
err = en.Append(0xb2, 0x52, 0x65, 0x73, 0x75, 0x6d, 0x65, 0x49, 0x74, 0x65, 0x6d, 0x73, 0x53, 0x6b, 0x69, 0x70, 0x70, 0x65, 0x64)
if err != nil {
return
}
err = en.WriteUint64(z.ResumeItemsSkipped)
if err != nil {
err = msgp.WrapError(err, "ResumeItemsSkipped")
return
}
// write "ResumeBytesDone"
err = en.Append(0xaf, 0x52, 0x65, 0x73, 0x75, 0x6d, 0x65, 0x42, 0x79, 0x74, 0x65, 0x73, 0x44, 0x6f, 0x6e, 0x65)
if err != nil {
@@ -414,6 +448,16 @@ func (z *healingTracker) EncodeMsg(en *msgp.Writer) (err error) {
err = msgp.WrapError(err, "ResumeBytesFailed")
return
}
// write "ResumeBytesSkipped"
err = en.Append(0xb2, 0x52, 0x65, 0x73, 0x75, 0x6d, 0x65, 0x42, 0x79, 0x74, 0x65, 0x73, 0x53, 0x6b, 0x69, 0x70, 0x70, 0x65, 0x64)
if err != nil {
return
}
err = en.WriteUint64(z.ResumeBytesSkipped)
if err != nil {
err = msgp.WrapError(err, "ResumeBytesSkipped")
return
}
// write "QueuedBuckets"
err = en.Append(0xad, 0x51, 0x75, 0x65, 0x75, 0x65, 0x64, 0x42, 0x75, 0x63, 0x6b, 0x65, 0x74, 0x73)
if err != nil {
@@ -478,15 +522,35 @@ func (z *healingTracker) EncodeMsg(en *msgp.Writer) (err error) {
err = msgp.WrapError(err, "BytesSkipped")
return
}
// write "RetryAttempts"
err = en.Append(0xad, 0x52, 0x65, 0x74, 0x72, 0x79, 0x41, 0x74, 0x74, 0x65, 0x6d, 0x70, 0x74, 0x73)
if err != nil {
return
}
err = en.WriteUint64(z.RetryAttempts)
if err != nil {
err = msgp.WrapError(err, "RetryAttempts")
return
}
// write "Finished"
err = en.Append(0xa8, 0x46, 0x69, 0x6e, 0x69, 0x73, 0x68, 0x65, 0x64)
if err != nil {
return
}
err = en.WriteBool(z.Finished)
if err != nil {
err = msgp.WrapError(err, "Finished")
return
}
return
}
// MarshalMsg implements msgp.Marshaler
func (z *healingTracker) MarshalMsg(b []byte) (o []byte, err error) {
o = msgp.Require(b, z.Msgsize())
// map header, size 25
// map header, size 29
// string "ID"
o = append(o, 0xde, 0x0, 0x19, 0xa2, 0x49, 0x44)
o = append(o, 0xde, 0x0, 0x1d, 0xa2, 0x49, 0x44)
o = msgp.AppendString(o, z.ID)
// string "PoolIndex"
o = append(o, 0xa9, 0x50, 0x6f, 0x6f, 0x6c, 0x49, 0x6e, 0x64, 0x65, 0x78)
@@ -539,12 +603,18 @@ func (z *healingTracker) MarshalMsg(b []byte) (o []byte, err error) {
// string "ResumeItemsFailed"
o = append(o, 0xb1, 0x52, 0x65, 0x73, 0x75, 0x6d, 0x65, 0x49, 0x74, 0x65, 0x6d, 0x73, 0x46, 0x61, 0x69, 0x6c, 0x65, 0x64)
o = msgp.AppendUint64(o, z.ResumeItemsFailed)
// string "ResumeItemsSkipped"
o = append(o, 0xb2, 0x52, 0x65, 0x73, 0x75, 0x6d, 0x65, 0x49, 0x74, 0x65, 0x6d, 0x73, 0x53, 0x6b, 0x69, 0x70, 0x70, 0x65, 0x64)
o = msgp.AppendUint64(o, z.ResumeItemsSkipped)
// string "ResumeBytesDone"
o = append(o, 0xaf, 0x52, 0x65, 0x73, 0x75, 0x6d, 0x65, 0x42, 0x79, 0x74, 0x65, 0x73, 0x44, 0x6f, 0x6e, 0x65)
o = msgp.AppendUint64(o, z.ResumeBytesDone)
// string "ResumeBytesFailed"
o = append(o, 0xb1, 0x52, 0x65, 0x73, 0x75, 0x6d, 0x65, 0x42, 0x79, 0x74, 0x65, 0x73, 0x46, 0x61, 0x69, 0x6c, 0x65, 0x64)
o = msgp.AppendUint64(o, z.ResumeBytesFailed)
// string "ResumeBytesSkipped"
o = append(o, 0xb2, 0x52, 0x65, 0x73, 0x75, 0x6d, 0x65, 0x42, 0x79, 0x74, 0x65, 0x73, 0x53, 0x6b, 0x69, 0x70, 0x70, 0x65, 0x64)
o = msgp.AppendUint64(o, z.ResumeBytesSkipped)
// string "QueuedBuckets"
o = append(o, 0xad, 0x51, 0x75, 0x65, 0x75, 0x65, 0x64, 0x42, 0x75, 0x63, 0x6b, 0x65, 0x74, 0x73)
o = msgp.AppendArrayHeader(o, uint32(len(z.QueuedBuckets)))
@@ -566,6 +636,12 @@ func (z *healingTracker) MarshalMsg(b []byte) (o []byte, err error) {
// string "BytesSkipped"
o = append(o, 0xac, 0x42, 0x79, 0x74, 0x65, 0x73, 0x53, 0x6b, 0x69, 0x70, 0x70, 0x65, 0x64)
o = msgp.AppendUint64(o, z.BytesSkipped)
// string "RetryAttempts"
o = append(o, 0xad, 0x52, 0x65, 0x74, 0x72, 0x79, 0x41, 0x74, 0x74, 0x65, 0x6d, 0x70, 0x74, 0x73)
o = msgp.AppendUint64(o, z.RetryAttempts)
// string "Finished"
o = append(o, 0xa8, 0x46, 0x69, 0x6e, 0x69, 0x73, 0x68, 0x65, 0x64)
o = msgp.AppendBool(o, z.Finished)
return
}
@@ -695,6 +771,12 @@ func (z *healingTracker) UnmarshalMsg(bts []byte) (o []byte, err error) {
err = msgp.WrapError(err, "ResumeItemsFailed")
return
}
case "ResumeItemsSkipped":
z.ResumeItemsSkipped, bts, err = msgp.ReadUint64Bytes(bts)
if err != nil {
err = msgp.WrapError(err, "ResumeItemsSkipped")
return
}
case "ResumeBytesDone":
z.ResumeBytesDone, bts, err = msgp.ReadUint64Bytes(bts)
if err != nil {
@@ -707,6 +789,12 @@ func (z *healingTracker) UnmarshalMsg(bts []byte) (o []byte, err error) {
err = msgp.WrapError(err, "ResumeBytesFailed")
return
}
case "ResumeBytesSkipped":
z.ResumeBytesSkipped, bts, err = msgp.ReadUint64Bytes(bts)
if err != nil {
err = msgp.WrapError(err, "ResumeBytesSkipped")
return
}
case "QueuedBuckets":
var zb0002 uint32
zb0002, bts, err = msgp.ReadArrayHeaderBytes(bts)
@@ -763,6 +851,18 @@ func (z *healingTracker) UnmarshalMsg(bts []byte) (o []byte, err error) {
err = msgp.WrapError(err, "BytesSkipped")
return
}
case "RetryAttempts":
z.RetryAttempts, bts, err = msgp.ReadUint64Bytes(bts)
if err != nil {
err = msgp.WrapError(err, "RetryAttempts")
return
}
case "Finished":
z.Finished, bts, err = msgp.ReadBoolBytes(bts)
if err != nil {
err = msgp.WrapError(err, "Finished")
return
}
default:
bts, err = msgp.Skip(bts)
if err != nil {
@@ -777,7 +877,7 @@ func (z *healingTracker) UnmarshalMsg(bts []byte) (o []byte, err error) {
// Msgsize returns an upper bound estimate of the number of bytes occupied by the serialized message
func (z *healingTracker) Msgsize() (s int) {
s = 3 + 3 + msgp.StringPrefixSize + len(z.ID) + 10 + msgp.IntSize + 9 + msgp.IntSize + 10 + msgp.IntSize + 5 + msgp.StringPrefixSize + len(z.Path) + 9 + msgp.StringPrefixSize + len(z.Endpoint) + 8 + msgp.TimeSize + 11 + msgp.TimeSize + 18 + msgp.Uint64Size + 17 + msgp.Uint64Size + 12 + msgp.Uint64Size + 12 + msgp.Uint64Size + 10 + msgp.Uint64Size + 12 + msgp.Uint64Size + 7 + msgp.StringPrefixSize + len(z.Bucket) + 7 + msgp.StringPrefixSize + len(z.Object) + 18 + msgp.Uint64Size + 18 + msgp.Uint64Size + 16 + msgp.Uint64Size + 18 + msgp.Uint64Size + 14 + msgp.ArrayHeaderSize
s = 3 + 3 + msgp.StringPrefixSize + len(z.ID) + 10 + msgp.IntSize + 9 + msgp.IntSize + 10 + msgp.IntSize + 5 + msgp.StringPrefixSize + len(z.Path) + 9 + msgp.StringPrefixSize + len(z.Endpoint) + 8 + msgp.TimeSize + 11 + msgp.TimeSize + 18 + msgp.Uint64Size + 17 + msgp.Uint64Size + 12 + msgp.Uint64Size + 12 + msgp.Uint64Size + 10 + msgp.Uint64Size + 12 + msgp.Uint64Size + 7 + msgp.StringPrefixSize + len(z.Bucket) + 7 + msgp.StringPrefixSize + len(z.Object) + 18 + msgp.Uint64Size + 18 + msgp.Uint64Size + 19 + msgp.Uint64Size + 16 + msgp.Uint64Size + 18 + msgp.Uint64Size + 19 + msgp.Uint64Size + 14 + msgp.ArrayHeaderSize
for za0001 := range z.QueuedBuckets {
s += msgp.StringPrefixSize + len(z.QueuedBuckets[za0001])
}
@@ -785,6 +885,6 @@ func (z *healingTracker) Msgsize() (s int) {
for za0002 := range z.HealedBuckets {
s += msgp.StringPrefixSize + len(z.HealedBuckets[za0002])
}
s += 7 + msgp.StringPrefixSize + len(z.HealID) + 13 + msgp.Uint64Size + 13 + msgp.Uint64Size
s += 7 + msgp.StringPrefixSize + len(z.HealID) + 13 + msgp.Uint64Size + 13 + msgp.Uint64Size + 14 + msgp.Uint64Size + 9 + msgp.BoolSize
return
}
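
The regenerated codec grows the serialized map from 25 to 29 fields (ResumeItemsSkipped, ResumeBytesSkipped, RetryAttempts, Finished). Old readers stay compatible because generated decoders skip unknown keys in their default case; the raw tinylib/msgp primitives show why:

package main

import (
    "fmt"

    "github.com/tinylib/msgp/msgp"
)

func main() {
    // Encode a two-field map: one known field, one "future" field.
    var b []byte
    b = msgp.AppendMapHeader(b, 2)
    b = msgp.AppendString(b, "ItemsHealed")
    b = msgp.AppendUint64(b, 42)
    b = msgp.AppendString(b, "SomeFutureField")
    b = msgp.AppendBool(b, true)

    // Decode, skipping unknown keys exactly like a generated UnmarshalMsg does.
    sz, bts, err := msgp.ReadMapHeaderBytes(b)
    if err != nil {
        panic(err)
    }
    var healed uint64
    for i := uint32(0); i < sz; i++ {
        var key []byte
        key, bts, err = msgp.ReadMapKeyZC(bts)
        if err != nil {
            panic(err)
        }
        switch string(key) {
        case "ItemsHealed":
            healed, bts, err = msgp.ReadUint64Bytes(bts)
        default:
            bts, err = msgp.Skip(bts) // tolerate fields from newer versions
        }
        if err != nil {
            panic(err)
        }
    }
    fmt.Println("ItemsHealed:", healed) // ItemsHealed: 42
}
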


@@ -33,9 +33,10 @@ import (
"github.com/minio/minio/internal/bucket/versioning"
xhttp "github.com/minio/minio/internal/http"
xioutil "github.com/minio/minio/internal/ioutil"
"github.com/minio/pkg/v2/env"
"github.com/minio/pkg/v2/wildcard"
"github.com/minio/pkg/v2/workers"
"github.com/minio/pkg/v3/env"
"github.com/minio/pkg/v3/wildcard"
"github.com/minio/pkg/v3/workers"
"github.com/minio/pkg/v3/xtime"
"gopkg.in/yaml.v3"
)
@@ -116,7 +117,7 @@ func (p BatchJobExpirePurge) Validate() error {
// BatchJobExpireFilter holds all the filters currently supported for batch expiration
type BatchJobExpireFilter struct {
line, col int
OlderThan time.Duration `yaml:"olderThan,omitempty" json:"olderThan"`
OlderThan xtime.Duration `yaml:"olderThan,omitempty" json:"olderThan"`
CreatedBefore *time.Time `yaml:"createdBefore,omitempty" json:"createdBefore"`
Tags []BatchJobKV `yaml:"tags,omitempty" json:"tags"`
Metadata []BatchJobKV `yaml:"metadata,omitempty" json:"metadata"`
@@ -162,7 +163,7 @@ func (ef BatchJobExpireFilter) Matches(obj ObjectInfo, now time.Time) bool {
if len(ef.Name) > 0 && !wildcard.Match(ef.Name, obj.Name) {
return false
}
if ef.OlderThan > 0 && now.Sub(obj.ModTime) <= ef.OlderThan {
if ef.OlderThan > 0 && now.Sub(obj.ModTime) <= ef.OlderThan.D() {
return false
}
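
OlderThan moves from time.Duration to xtime.Duration so YAML values such as 7d10h parse; stdlib time.ParseDuration stops at hours. An illustrative, simplified parser that pre-expands a day component (not the xtime implementation itself):

package main

import (
    "fmt"
    "strconv"
    "strings"
    "time"
)

// parseExtended expands a leading "<n>d" day component, then defers to
// time.ParseDuration for the remainder.
func parseExtended(s string) (time.Duration, error) {
    if i := strings.IndexByte(s, 'd'); i > 0 {
        days, err := strconv.Atoi(s[:i])
        if err != nil {
            return 0, fmt.Errorf("bad day count %q: %w", s[:i], err)
        }
        rest := time.Duration(0)
        if tail := s[i+1:]; tail != "" {
            if rest, err = time.ParseDuration(tail); err != nil {
                return 0, err
            }
        }
        return time.Duration(days)*24*time.Hour + rest, nil
    }
    return time.ParseDuration(s)
}

func main() {
    d, err := parseExtended("7d10h")
    fmt.Println(d, err) // 178h0m0s <nil>
}
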
@@ -280,7 +281,7 @@ type BatchJobExpire struct {
line, col int
APIVersion string `yaml:"apiVersion" json:"apiVersion"`
Bucket string `yaml:"bucket" json:"bucket"`
Prefix string `yaml:"prefix" json:"prefix"`
Prefix BatchJobPrefix `yaml:"prefix" json:"prefix"`
NotificationCfg BatchJobNotification `yaml:"notify" json:"notify"`
Retry BatchJobRetry `yaml:"retry" json:"retry"`
Rules []BatchJobExpireFilter `yaml:"rules" json:"rules"`
@@ -340,8 +341,24 @@ func (r *BatchJobExpire) Expire(ctx context.Context, api ObjectLayer, vc *versio
PrefixEnabledFn: vc.PrefixEnabled,
VersionSuspended: vc.Suspended(),
}
_, errs := api.DeleteObjects(ctx, r.Bucket, objsToDel, opts)
return errs
allErrs := make([]error, 0, len(objsToDel))
for {
count := len(objsToDel)
if count == 0 {
break
}
if count > maxDeleteList {
count = maxDeleteList
}
_, errs := api.DeleteObjects(ctx, r.Bucket, objsToDel[:count], opts)
allErrs = append(allErrs, errs...)
// Next batch of deletion
objsToDel = objsToDel[count:]
}
return allErrs
}
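
Expire used to hand the whole slice to one DeleteObjects call; it now walks it in maxDeleteList-sized windows and concatenates the per-window error slices, keeping one error slot per object. The chunking idiom in isolation, assuming maxDeleteList is the per-request cap:

package main

import "fmt"

const maxDeleteList = 1000 // assumed per-request cap

// deleteBatch is a hypothetical stand-in for api.DeleteObjects.
func deleteBatch(objs []string) []error { return make([]error, len(objs)) }

func deleteAll(objs []string) []error {
    allErrs := make([]error, 0, len(objs))
    for len(objs) > 0 {
        n := len(objs)
        if n > maxDeleteList {
            n = maxDeleteList
        }
        allErrs = append(allErrs, deleteBatch(objs[:n])...)
        objs = objs[n:] // next window
    }
    return allErrs
}

func main() {
    objs := make([]string, 2500)
    fmt.Println(len(deleteAll(objs))) // 2500: one error slot per object, in order
}
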
const (
@@ -371,9 +388,12 @@ func (oiCache objInfoCache) Get(toDel ObjectToDelete) (*ObjectInfo, bool) {
func batchObjsForDelete(ctx context.Context, r *BatchJobExpire, ri *batchJobInfo, job BatchJobRequest, api ObjectLayer, wk *workers.Workers, expireCh <-chan []expireObjInfo) {
vc, _ := globalBucketVersioningSys.Get(r.Bucket)
retryAttempts := r.Retry.Attempts
retryAttempts := job.Expire.Retry.Attempts
if retryAttempts <= 0 {
retryAttempts = batchExpireJobDefaultRetries
}
delay := job.Expire.Retry.Delay
if delay == 0 {
if delay <= 0 {
delay = batchExpireJobDefaultRetryDelay
}
@@ -432,7 +452,7 @@ func batchObjsForDelete(ctx context.Context, r *BatchJobExpire, ri *batchJobInfo
})
if err != nil {
stopFn(exp, err)
batchLogIf(ctx, fmt.Errorf("Failed to expire %s/%s versionID=%s due to %v (attempts=%d)", toExpire[i].Bucket, toExpire[i].Name, toExpire[i].VersionID, err, attempts))
batchLogIf(ctx, fmt.Errorf("Failed to expire %s/%s due to %v (attempts=%d)", exp.Bucket, exp.Name, err, attempts))
} else {
stopFn(exp, err)
success = true
@@ -464,13 +484,13 @@ func batchObjsForDelete(ctx context.Context, r *BatchJobExpire, ri *batchJobInfo
copy(toDelCopy, toDel)
var failed int
errs := r.Expire(ctx, api, vc, toDel)
// reslice toDel in preparation for next retry
// attempt
// reslice toDel in preparation for next retry attempt
toDel = toDel[:0]
for i, err := range errs {
if err != nil {
stopFn(toDelCopy[i], err)
batchLogIf(ctx, fmt.Errorf("Failed to expire %s/%s versionID=%s due to %v (attempts=%d)", ri.Bucket, toDelCopy[i].ObjectName, toDelCopy[i].VersionID, err, attempts))
batchLogIf(ctx, fmt.Errorf("Failed to expire %s/%s versionID=%s due to %v (attempts=%d)", ri.Bucket, toDelCopy[i].ObjectName, toDelCopy[i].VersionID,
err, attempts))
failed++
if oi, ok := oiCache.Get(toDelCopy[i]); ok {
ri.trackCurrentBucketObject(r.Bucket, *oi, false, attempts)
@@ -514,7 +534,7 @@ func (r *BatchJobExpire) Start(ctx context.Context, api ObjectLayer, job BatchJo
JobType: string(job.Type()),
StartTime: job.Started,
}
if err := ri.load(ctx, api, job); err != nil {
if err := ri.loadOrInit(ctx, api, job); err != nil {
return err
}
@@ -534,40 +554,58 @@ func (r *BatchJobExpire) Start(ctx context.Context, api ObjectLayer, job BatchJo
return err
}
ctx, cancel := context.WithCancel(ctx)
defer cancel()
ctx, cancelCause := context.WithCancelCause(ctx)
defer cancelCause(nil)
results := make(chan itemOrErr[ObjectInfo], workerSize)
if err := api.Walk(ctx, r.Bucket, r.Prefix, results, WalkOptions{
Marker: lastObject,
LatestOnly: false, // we need to visit all versions of the object to implement purge: retainVersions
VersionsSort: WalkVersionsSortDesc,
}); err != nil {
// Do not need to retry if we can't list objects on source.
return err
}
go func() {
prefixes := r.Prefix.F()
if len(prefixes) == 0 {
prefixes = []string{""}
}
for _, prefix := range prefixes {
prefixResultCh := make(chan itemOrErr[ObjectInfo], workerSize)
err := api.Walk(ctx, r.Bucket, prefix, prefixResultCh, WalkOptions{
Marker: lastObject,
LatestOnly: false, // we need to visit all versions of the object to implement purge: retainVersions
VersionsSort: WalkVersionsSortDesc,
})
if err != nil {
cancelCause(err)
xioutil.SafeClose(results)
return
}
for result := range prefixResultCh {
results <- result
}
}
xioutil.SafeClose(results)
}()
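
With Prefix now a list, Start walks each prefix in turn and funnels every per-prefix channel into the single results stream, using context.WithCancelCause so the first listing error becomes the recorded cause. A sketch of that fan-in; walk is a hypothetical stand-in for api.Walk:

package main

import (
    "context"
    "fmt"
)

// walk streams items for one prefix, then closes the channel.
func walk(ctx context.Context, prefix string, out chan<- string) error {
    defer close(out)
    for i := 0; i < 2; i++ {
        select {
        case out <- fmt.Sprintf("%s/obj%d", prefix, i):
        case <-ctx.Done():
            return ctx.Err()
        }
    }
    return nil
}

func main() {
    ctx, cancelCause := context.WithCancelCause(context.Background())
    defer cancelCause(nil)

    results := make(chan string, 4)
    go func() {
        defer close(results)
        for _, prefix := range []string{"myprefix", "myprefix1"} {
            perPrefix := make(chan string, 4)
            go func(p string) {
                if err := walk(ctx, p, perPrefix); err != nil {
                    cancelCause(err) // surface the first listing error as the cause
                }
            }(prefix)
            for item := range perPrefix {
                results <- item // funnel into the single downstream channel
            }
        }
    }()

    for item := range results {
        fmt.Println(item)
    }
}
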
// Goroutine to periodically save batch-expire job's in-memory state
saverQuitCh := make(chan struct{})
go func() {
saveTicker := time.NewTicker(10 * time.Second)
defer saveTicker.Stop()
for {
quit := false
after := time.Minute
for !quit {
select {
case <-saveTicker.C:
// persist in-memory state to disk after every 10secs.
batchLogIf(ctx, ri.updateAfter(ctx, api, 10*time.Second, job))
case <-ctx.Done():
// persist in-memory state immediately before exiting due to context cancellation.
batchLogIf(ctx, ri.updateAfter(ctx, api, 0, job))
return
quit = true
case <-saverQuitCh:
// persist in-memory state immediately to disk.
batchLogIf(ctx, ri.updateAfter(ctx, api, 0, job))
return
quit = true
}
if quit {
// save immediately if we are quitting
after = 0
}
ctx, cancel := context.WithTimeout(GlobalContext, 30*time.Second) // independent context
batchLogIf(ctx, ri.updateAfter(ctx, api, after, job))
cancel()
}
}()
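
The rewritten saver replaces the two early returns with a quit flag so every exit path flushes once more, and it persists under an independent 30-second context so a cancelled job context cannot abort the final save. The control shape, with persist standing in for ri.updateAfter:

package main

import (
    "context"
    "fmt"
    "time"
)

// persist is a hypothetical stand-in for ri.updateAfter.
func persist(ctx context.Context, after time.Duration) {
    fmt.Println("persist, not before:", after)
}

func saver(jobCtx context.Context, quitCh <-chan struct{}) {
    saveTicker := time.NewTicker(10 * time.Second)
    defer saveTicker.Stop()
    quit := false
    after := time.Minute
    for !quit {
        select {
        case <-saveTicker.C:
        case <-jobCtx.Done():
            quit = true
        case <-quitCh:
            quit = true
        }
        if quit {
            after = 0 // flush immediately on the way out
        }
        // Independent context: the final save must not die with the job context.
        ctx, cancel := context.WithTimeout(context.Background(), 30*time.Second)
        persist(ctx, after)
        cancel()
    }
}

func main() {
    quitCh := make(chan struct{})
    go saver(context.Background(), quitCh)
    close(quitCh)
    time.Sleep(100 * time.Millisecond)
}
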
@@ -584,7 +622,7 @@ func (r *BatchJobExpire) Start(ctx context.Context, api ObjectLayer, job BatchJo
versionsCount int
toDel []expireObjInfo
)
failed := true
failed := false
for result := range results {
if result.Err != nil {
failed = true
@@ -653,6 +691,10 @@ func (r *BatchJobExpire) Start(ctx context.Context, api ObjectLayer, job BatchJo
ObjectInfo: result.Item,
})
}
if context.Cause(ctx) != nil {
xioutil.SafeClose(expireCh)
return context.Cause(ctx)
}
// Send any remaining objects downstream
if len(toDel) > 0 {
select {


@@ -39,7 +39,7 @@ func (z *BatchJobExpire) DecodeMsg(dc *msgp.Reader) (err error) {
return
}
case "Prefix":
z.Prefix, err = dc.ReadString()
err = z.Prefix.DecodeMsg(dc)
if err != nil {
err = msgp.WrapError(err, "Prefix")
return
@@ -114,7 +114,7 @@ func (z *BatchJobExpire) EncodeMsg(en *msgp.Writer) (err error) {
if err != nil {
return
}
err = en.WriteString(z.Prefix)
err = z.Prefix.EncodeMsg(en)
if err != nil {
err = msgp.WrapError(err, "Prefix")
return
@@ -171,7 +171,11 @@ func (z *BatchJobExpire) MarshalMsg(b []byte) (o []byte, err error) {
o = msgp.AppendString(o, z.Bucket)
// string "Prefix"
o = append(o, 0xa6, 0x50, 0x72, 0x65, 0x66, 0x69, 0x78)
o = msgp.AppendString(o, z.Prefix)
o, err = z.Prefix.MarshalMsg(o)
if err != nil {
err = msgp.WrapError(err, "Prefix")
return
}
// string "NotificationCfg"
o = append(o, 0xaf, 0x4e, 0x6f, 0x74, 0x69, 0x66, 0x69, 0x63, 0x61, 0x74, 0x69, 0x6f, 0x6e, 0x43, 0x66, 0x67)
o, err = z.NotificationCfg.MarshalMsg(o)
@@ -230,7 +234,7 @@ func (z *BatchJobExpire) UnmarshalMsg(bts []byte) (o []byte, err error) {
return
}
case "Prefix":
z.Prefix, bts, err = msgp.ReadStringBytes(bts)
bts, err = z.Prefix.UnmarshalMsg(bts)
if err != nil {
err = msgp.WrapError(err, "Prefix")
return
@@ -280,7 +284,7 @@ func (z *BatchJobExpire) UnmarshalMsg(bts []byte) (o []byte, err error) {
// Msgsize returns an upper bound estimate of the number of bytes occupied by the serialized message
func (z *BatchJobExpire) Msgsize() (s int) {
s = 1 + 11 + msgp.StringPrefixSize + len(z.APIVersion) + 7 + msgp.StringPrefixSize + len(z.Bucket) + 7 + msgp.StringPrefixSize + len(z.Prefix) + 16 + z.NotificationCfg.Msgsize() + 6 + z.Retry.Msgsize() + 6 + msgp.ArrayHeaderSize
s = 1 + 11 + msgp.StringPrefixSize + len(z.APIVersion) + 7 + msgp.StringPrefixSize + len(z.Bucket) + 7 + z.Prefix.Msgsize() + 16 + z.NotificationCfg.Msgsize() + 6 + z.Retry.Msgsize() + 6 + msgp.ArrayHeaderSize
for za0001 := range z.Rules {
s += z.Rules[za0001].Msgsize()
}
@@ -306,7 +310,7 @@ func (z *BatchJobExpireFilter) DecodeMsg(dc *msgp.Reader) (err error) {
}
switch msgp.UnsafeString(field) {
case "OlderThan":
z.OlderThan, err = dc.ReadDuration()
err = z.OlderThan.DecodeMsg(dc)
if err != nil {
err = msgp.WrapError(err, "OlderThan")
return
@@ -433,7 +437,7 @@ func (z *BatchJobExpireFilter) EncodeMsg(en *msgp.Writer) (err error) {
if err != nil {
return
}
err = en.WriteDuration(z.OlderThan)
err = z.OlderThan.EncodeMsg(en)
if err != nil {
err = msgp.WrapError(err, "OlderThan")
return
@@ -544,7 +548,11 @@ func (z *BatchJobExpireFilter) MarshalMsg(b []byte) (o []byte, err error) {
// map header, size 8
// string "OlderThan"
o = append(o, 0x88, 0xa9, 0x4f, 0x6c, 0x64, 0x65, 0x72, 0x54, 0x68, 0x61, 0x6e)
o = msgp.AppendDuration(o, z.OlderThan)
o, err = z.OlderThan.MarshalMsg(o)
if err != nil {
err = msgp.WrapError(err, "OlderThan")
return
}
// string "CreatedBefore"
o = append(o, 0xad, 0x43, 0x72, 0x65, 0x61, 0x74, 0x65, 0x64, 0x42, 0x65, 0x66, 0x6f, 0x72, 0x65)
if z.CreatedBefore == nil {
@@ -613,7 +621,7 @@ func (z *BatchJobExpireFilter) UnmarshalMsg(bts []byte) (o []byte, err error) {
}
switch msgp.UnsafeString(field) {
case "OlderThan":
z.OlderThan, bts, err = msgp.ReadDurationBytes(bts)
bts, err = z.OlderThan.UnmarshalMsg(bts)
if err != nil {
err = msgp.WrapError(err, "OlderThan")
return
@@ -734,7 +742,7 @@ func (z *BatchJobExpireFilter) UnmarshalMsg(bts []byte) (o []byte, err error) {
// Msgsize returns an upper bound estimate of the number of bytes occupied by the serialized message
func (z *BatchJobExpireFilter) Msgsize() (s int) {
s = 1 + 10 + msgp.DurationSize + 14
s = 1 + 10 + z.OlderThan.Msgsize() + 14
if z.CreatedBefore == nil {
s += msgp.NilSize
} else {


@@ -18,9 +18,10 @@
package cmd
import (
"slices"
"testing"
"gopkg.in/yaml.v2"
"gopkg.in/yaml.v3"
)
func TestParseBatchJobExpire(t *testing.T) {
@@ -32,7 +33,7 @@ expire: # Expire objects that match a condition
rules:
- type: object # regular objects with zero or more older versions
name: NAME # match object names that satisfy the wildcard expression.
olderThan: 70h # match objects older than this value
olderThan: 7d10h # match objects older than this value
createdBefore: "2006-01-02T15:04:05.00Z" # match objects created before "date"
tags:
- key: name
@@ -64,8 +65,61 @@ expire: # Expire objects that match a condition
delay: 500ms # least amount of delay between each retry
`
var job BatchJobRequest
err := yaml.UnmarshalStrict([]byte(expireYaml), &job)
err := yaml.Unmarshal([]byte(expireYaml), &job)
if err != nil {
t.Fatal("Failed to parse batch-job-expire yaml", err)
}
if !slices.Equal(job.Expire.Prefix.F(), []string{"myprefix"}) {
t.Fatal("Failed to parse batch-job-expire yaml")
}
multiPrefixExpireYaml := `
expire: # Expire objects that match a condition
apiVersion: v1
bucket: mybucket # Bucket where this batch job will expire matching objects from
prefix: # (Optional) Prefix under which this job will expire objects matching the rules below.
- myprefix
- myprefix1
rules:
- type: object # regular objects with zero or more older versions
name: NAME # match object names that satisfy the wildcard expression.
olderThan: 7d10h # match objects older than this value
createdBefore: "2006-01-02T15:04:05.00Z" # match objects created before "date"
tags:
- key: name
value: pick* # match objects with tag 'name', all values starting with 'pick'
metadata:
- key: content-type
value: image/* # match objects with 'content-type', all values starting with 'image/'
size:
lessThan: "10MiB" # match objects with size less than this value (e.g. 10MiB)
greaterThan: 1MiB # match objects with size greater than this value (e.g. 1MiB)
purge:
# retainVersions: 0 # (default) delete all versions of the object. This option is the fastest.
# retainVersions: 5 # keep the latest 5 versions of the object.
- type: deleted # objects with delete marker as their latest version
name: NAME # match object names that satisfy the wildcard expression.
olderThan: 10h # match objects older than this value (e.g. 7d10h31s)
createdBefore: "2006-01-02T15:04:05.00Z" # match objects created before "date"
purge:
# retainVersions: 0 # (default) delete all versions of the object. This option is the fastest.
# retainVersions: 5 # keep the latest 5 versions of the object including delete markers.
notify:
endpoint: https://notify.endpoint # notification endpoint to receive job completion status
token: Bearer xxxxx # optional authentication token for the notification endpoint
retry:
attempts: 10 # number of retries for the job before giving up
delay: 500ms # least amount of delay between each retry
`
var multiPrefixJob BatchJobRequest
err = yaml.Unmarshal([]byte(multiPrefixExpireYaml), &multiPrefixJob)
if err != nil {
t.Fatal("Failed to parse batch-job-expire yaml", err)
}
if !slices.Equal(multiPrefixJob.Expire.Prefix.F(), []string{"myprefix", "myprefix1"}) {
t.Fatal("Failed to parse batch-job-expire yaml")
}
}


@@ -28,6 +28,7 @@ import (
"math/rand"
"net/http"
"net/url"
"path/filepath"
"runtime"
"strconv"
"strings"
@@ -48,15 +49,20 @@ import (
xhttp "github.com/minio/minio/internal/http"
"github.com/minio/minio/internal/ioutil"
xioutil "github.com/minio/minio/internal/ioutil"
"github.com/minio/pkg/v2/console"
"github.com/minio/pkg/v2/env"
"github.com/minio/pkg/v2/policy"
"github.com/minio/pkg/v2/workers"
"github.com/minio/pkg/v3/console"
"github.com/minio/pkg/v3/env"
"github.com/minio/pkg/v3/policy"
"github.com/minio/pkg/v3/workers"
"gopkg.in/yaml.v3"
)
var globalBatchConfig batch.Config
const (
// Keep the completed/failed job stats 3 days before removing it
oldJobsExpiration = 3 * 24 * time.Hour
)
// BatchJobRequest this is an internal data structure not for external consumption.
type BatchJobRequest struct {
ID string `yaml:"-" json:"name"`
@@ -262,7 +268,7 @@ func (r *BatchJobReplicateV1) StartFromSource(ctx context.Context, api ObjectLay
JobType: string(job.Type()),
StartTime: job.Started,
}
if err := ri.load(ctx, api, job); err != nil {
if err := ri.loadOrInit(ctx, api, job); err != nil {
return err
}
if ri.Complete {
@@ -270,8 +276,12 @@ func (r *BatchJobReplicateV1) StartFromSource(ctx context.Context, api ObjectLay
}
globalBatchJobsMetrics.save(job.ID, ri)
retryAttempts := job.Replicate.Flags.Retry.Attempts
if retryAttempts <= 0 {
retryAttempts = batchReplJobDefaultRetries
}
delay := job.Replicate.Flags.Retry.Delay
if delay == 0 {
if delay <= 0 {
delay = batchReplJobDefaultRetryDelay
}
rnd := rand.New(rand.NewSource(time.Now().UnixNano()))
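The rnd source above feeds the backoff between attempts. The exact expression sits outside this hunk, so the following is only a plausible sketch of how delay and rnd combine, assuming base-plus-jitter semantics:

    // Hypothetical sketch: base delay plus up to one extra delay of random
    // jitter, so concurrent jobs do not retry in lockstep.
    func retryWait(rnd *rand.Rand, delay time.Duration) time.Duration {
        return delay + time.Duration(rnd.Float64()*float64(delay))
    }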
@@ -281,22 +291,22 @@ func (r *BatchJobReplicateV1) StartFromSource(ctx context.Context, api ObjectLay
isStorageClassOnly := len(r.Flags.Filter.Metadata) == 1 && strings.EqualFold(r.Flags.Filter.Metadata[0].Key, xhttp.AmzStorageClass)
skip := func(oi ObjectInfo) (ok bool) {
if r.Flags.Filter.OlderThan > 0 && time.Since(oi.ModTime) < r.Flags.Filter.OlderThan {
if r.Flags.Filter.OlderThan > 0 && time.Since(oi.ModTime) < r.Flags.Filter.OlderThan.D() {
// skip all objects that are newer than specified older duration
return true
}
if r.Flags.Filter.NewerThan > 0 && time.Since(oi.ModTime) >= r.Flags.Filter.NewerThan {
if r.Flags.Filter.NewerThan > 0 && time.Since(oi.ModTime) >= r.Flags.Filter.NewerThan.D() {
// skip all objects that are older than specified newer duration
return true
}
if !r.Flags.Filter.CreatedAfter.IsZero() && r.Flags.Filter.CreatedAfter.Before(oi.ModTime) {
if !r.Flags.Filter.CreatedAfter.IsZero() && r.Flags.Filter.CreatedAfter.After(oi.ModTime) {
// skip all objects that are created before the specified time.
return true
}
if !r.Flags.Filter.CreatedBefore.IsZero() && r.Flags.Filter.CreatedBefore.After(oi.ModTime) {
if !r.Flags.Filter.CreatedBefore.IsZero() && r.Flags.Filter.CreatedBefore.Before(oi.ModTime) {
// skip all objects that are created after the specified time.
return true
}
@@ -371,7 +381,6 @@ func (r *BatchJobReplicateV1) StartFromSource(ctx context.Context, api ObjectLay
return err
}
retryAttempts := ri.RetryAttempts
retry := false
for attempts := 1; attempts <= retryAttempts; attempts++ {
attempts := attempts
@@ -379,12 +388,27 @@ func (r *BatchJobReplicateV1) StartFromSource(ctx context.Context, api ObjectLay
s3Type := r.Target.Type == BatchJobReplicateResourceS3 || r.Source.Type == BatchJobReplicateResourceS3
minioSrc := r.Source.Type == BatchJobReplicateResourceMinIO
ctx, cancel := context.WithCancel(ctx)
objInfoCh := c.ListObjects(ctx, r.Source.Bucket, miniogo.ListObjectsOptions{
Prefix: r.Source.Prefix,
WithVersions: minioSrc,
Recursive: true,
WithMetadata: true,
})
objInfoCh := make(chan miniogo.ObjectInfo, 1)
go func() {
prefixes := r.Source.Prefix.F()
if len(prefixes) == 0 {
prefixes = []string{""}
}
for _, prefix := range prefixes {
prefixObjInfoCh := c.ListObjects(ctx, r.Source.Bucket, miniogo.ListObjectsOptions{
Prefix: prefix,
WithVersions: minioSrc,
Recursive: true,
WithMetadata: true,
})
for obj := range prefixObjInfoCh {
objInfoCh <- obj
}
}
xioutil.SafeClose(objInfoCh)
}()
prevObj := ""
skipReplicate := false
@@ -534,7 +558,7 @@ func toObjectInfo(bucket, object string, objInfo miniogo.ObjectInfo) ObjectInfo
return oi
}
func (r BatchJobReplicateV1) writeAsArchive(ctx context.Context, objAPI ObjectLayer, remoteClnt *minio.Client, entries []ObjectInfo) error {
func (r BatchJobReplicateV1) writeAsArchive(ctx context.Context, objAPI ObjectLayer, remoteClnt *minio.Client, entries []ObjectInfo, prefix string) error {
input := make(chan minio.SnowballObject, 1)
opts := minio.SnowballOptions{
Opts: minio.PutObjectOptions{},
@@ -556,6 +580,10 @@ func (r BatchJobReplicateV1) writeAsArchive(ctx context.Context, objAPI ObjectLa
continue
}
if prefix != "" {
entry.Name = pathJoin(prefix, entry.Name)
}
snowballObj := minio.SnowballObject{
// Create path to store objects within the bucket.
Key: entry.Name,
@@ -719,63 +747,73 @@ const (
batchReplJobAPIVersion = "v1"
batchReplJobDefaultRetries = 3
batchReplJobDefaultRetryDelay = 250 * time.Millisecond
batchReplJobDefaultRetryDelay = time.Second
)
func getJobReportPath(job BatchJobRequest) string {
var fileName string
switch {
case job.Replicate != nil:
fileName = batchReplName
case job.KeyRotate != nil:
fileName = batchKeyRotationName
case job.Expire != nil:
fileName = batchExpireName
}
return pathJoin(batchJobReportsPrefix, job.ID, fileName)
}
func getJobPath(job BatchJobRequest) string {
return pathJoin(batchJobPrefix, job.ID)
}
func (ri *batchJobInfo) getJobReportPath() (string, error) {
var fileName string
switch madmin.BatchJobType(ri.JobType) {
case madmin.BatchJobReplicate:
fileName = batchReplName
case madmin.BatchJobKeyRotate:
fileName = batchKeyRotationName
case madmin.BatchJobExpire:
fileName = batchExpireName
default:
return "", fmt.Errorf("unknown job type: %v", ri.JobType)
}
return pathJoin(batchJobReportsPrefix, ri.JobID, fileName), nil
}
func (ri *batchJobInfo) loadOrInit(ctx context.Context, api ObjectLayer, job BatchJobRequest) error {
err := ri.load(ctx, api, job)
if errors.Is(err, errNoSuchJob) {
switch {
case job.Replicate != nil:
ri.Version = batchReplVersionV1
case job.KeyRotate != nil:
ri.Version = batchKeyRotateVersionV1
case job.Expire != nil:
ri.Version = batchExpireVersionV1
}
return nil
}
return err
}
func (ri *batchJobInfo) load(ctx context.Context, api ObjectLayer, job BatchJobRequest) error {
path, err := job.getJobReportPath()
if err != nil {
batchLogIf(ctx, err)
return err
}
return ri.loadByPath(ctx, api, path)
}
func (ri *batchJobInfo) loadByPath(ctx context.Context, api ObjectLayer, path string) error {
var format, version uint16
switch {
case job.Replicate != nil:
switch filepath.Base(path) {
case batchReplName:
version = batchReplVersionV1
format = batchReplFormat
case job.KeyRotate != nil:
case batchKeyRotationName:
version = batchKeyRotateVersionV1
format = batchKeyRotationFormat
case job.Expire != nil:
case batchExpireName:
version = batchExpireVersionV1
format = batchExpireFormat
default:
return errors.New("no supported batch job request specified")
}
data, err := readConfig(ctx, api, getJobReportPath(job))
data, err := readConfig(ctx, api, path)
if err != nil {
if errors.Is(err, errConfigNotFound) || isErrObjectNotFound(err) {
ri.Version = int(version)
switch {
case job.Replicate != nil:
ri.RetryAttempts = batchReplJobDefaultRetries
if job.Replicate.Flags.Retry.Attempts > 0 {
ri.RetryAttempts = job.Replicate.Flags.Retry.Attempts
}
case job.KeyRotate != nil:
ri.RetryAttempts = batchKeyRotateJobDefaultRetries
if job.KeyRotate.Flags.Retry.Attempts > 0 {
ri.RetryAttempts = job.KeyRotate.Flags.Retry.Attempts
}
case job.Expire != nil:
ri.RetryAttempts = batchExpireJobDefaultRetries
if job.Expire.Retry.Attempts > 0 {
ri.RetryAttempts = job.Expire.Retry.Attempts
}
}
return nil
return errNoSuchJob
}
return err
}
@@ -919,7 +957,12 @@ func (ri *batchJobInfo) updateAfter(ctx context.Context, api ObjectLayer, durati
if err != nil {
return err
}
return saveConfig(ctx, api, getJobReportPath(job), buf)
path, err := ri.getJobReportPath()
if err != nil {
batchLogIf(ctx, err)
return err
}
return saveConfig(ctx, api, path, buf)
}
ri.mu.Unlock()
return nil
@@ -944,8 +987,10 @@ func (ri *batchJobInfo) trackCurrentBucketObject(bucket string, info ObjectInfo,
ri.mu.Lock()
defer ri.mu.Unlock()
ri.Bucket = bucket
ri.Object = info.Name
if success {
ri.Bucket = bucket
ri.Object = info.Name
}
ri.countItem(info.Size, info.DeleteMarker, success, attempt)
}
@@ -971,7 +1016,7 @@ func (r *BatchJobReplicateV1) Start(ctx context.Context, api ObjectLayer, job Ba
JobType: string(job.Type()),
StartTime: job.Started,
}
if err := ri.load(ctx, api, job); err != nil {
if err := ri.loadOrInit(ctx, api, job); err != nil {
return err
}
if ri.Complete {
@@ -980,29 +1025,34 @@ func (r *BatchJobReplicateV1) Start(ctx context.Context, api ObjectLayer, job Ba
globalBatchJobsMetrics.save(job.ID, ri)
lastObject := ri.Object
retryAttempts := job.Replicate.Flags.Retry.Attempts
if retryAttempts <= 0 {
retryAttempts = batchReplJobDefaultRetries
}
delay := job.Replicate.Flags.Retry.Delay
if delay == 0 {
if delay <= 0 {
delay = batchReplJobDefaultRetryDelay
}
rnd := rand.New(rand.NewSource(time.Now().UnixNano()))
selectObj := func(info FileInfo) (ok bool) {
if r.Flags.Filter.OlderThan > 0 && time.Since(info.ModTime) < r.Flags.Filter.OlderThan {
if r.Flags.Filter.OlderThan > 0 && time.Since(info.ModTime) < r.Flags.Filter.OlderThan.D() {
// skip all objects that are newer than specified older duration
return false
}
if r.Flags.Filter.NewerThan > 0 && time.Since(info.ModTime) >= r.Flags.Filter.NewerThan {
if r.Flags.Filter.NewerThan > 0 && time.Since(info.ModTime) >= r.Flags.Filter.NewerThan.D() {
// skip all objects that are older than specified newer duration
return false
}
if !r.Flags.Filter.CreatedAfter.IsZero() && r.Flags.Filter.CreatedAfter.Before(info.ModTime) {
if !r.Flags.Filter.CreatedAfter.IsZero() && r.Flags.Filter.CreatedAfter.After(info.ModTime) {
// skip all objects that are created before the specified time.
return false
}
if !r.Flags.Filter.CreatedBefore.IsZero() && r.Flags.Filter.CreatedBefore.After(info.ModTime) {
if !r.Flags.Filter.CreatedBefore.IsZero() && r.Flags.Filter.CreatedBefore.Before(info.ModTime) {
// skip all objects that are created after the specified time.
return false
}
@@ -1071,99 +1121,109 @@ func (r *BatchJobReplicateV1) Start(ctx context.Context, api ObjectLayer, job Ba
c.SetAppInfo("minio-"+batchJobPrefix, r.APIVersion+" "+job.ID)
var (
walkCh = make(chan itemOrErr[ObjectInfo], 100)
slowCh = make(chan itemOrErr[ObjectInfo], 100)
)
if !*r.Source.Snowball.Disable && r.Source.Type.isMinio() && r.Target.Type.isMinio() {
go func() {
// Snowball currently needs the high level minio-go Client, not the Core one
cl, err := miniogo.New(u.Host, &miniogo.Options{
Creds: credentials.NewStaticV4(cred.AccessKey, cred.SecretKey, cred.SessionToken),
Secure: u.Scheme == "https",
Transport: getRemoteInstanceTransport(),
BucketLookup: lookupStyle(r.Target.Path),
})
if err != nil {
batchLogIf(ctx, err)
return
}
// Already validated before arriving here
smallerThan, _ := humanize.ParseBytes(*r.Source.Snowball.SmallerThan)
batch := make([]ObjectInfo, 0, *r.Source.Snowball.Batch)
writeFn := func(batch []ObjectInfo) {
if len(batch) > 0 {
if err := r.writeAsArchive(ctx, api, cl, batch); err != nil {
batchLogIf(ctx, err)
for _, b := range batch {
slowCh <- itemOrErr[ObjectInfo]{Item: b}
}
} else {
ri.trackCurrentBucketBatch(r.Source.Bucket, batch)
globalBatchJobsMetrics.save(job.ID, ri)
// persist in-memory state to disk after every 10secs.
batchLogIf(ctx, ri.updateAfter(ctx, api, 10*time.Second, job))
}
}
}
for obj := range walkCh {
if obj.Item.DeleteMarker || !obj.Item.VersionPurgeStatus.Empty() || obj.Item.Size >= int64(smallerThan) {
slowCh <- obj
continue
}
batch = append(batch, obj.Item)
if len(batch) < *r.Source.Snowball.Batch {
continue
}
writeFn(batch)
batch = batch[:0]
}
writeFn(batch)
xioutil.SafeClose(slowCh)
}()
} else {
slowCh = walkCh
}
workerSize, err := strconv.Atoi(env.Get("_MINIO_BATCH_REPLICATION_WORKERS", strconv.Itoa(runtime.GOMAXPROCS(0)/2)))
if err != nil {
return err
}
wk, err := workers.New(workerSize)
if err != nil {
// invalid worker size.
return err
}
walkQuorum := env.Get("_MINIO_BATCH_REPLICATION_WALK_QUORUM", "strict")
if walkQuorum == "" {
walkQuorum = "strict"
}
retryAttempts := ri.RetryAttempts
retry := false
for attempts := 1; attempts <= retryAttempts; attempts++ {
attempts := attempts
var (
walkCh = make(chan itemOrErr[ObjectInfo], 100)
slowCh = make(chan itemOrErr[ObjectInfo], 100)
)
ctx, cancel := context.WithCancel(ctx)
if r.Source.Snowball.Disable != nil && !*r.Source.Snowball.Disable && r.Source.Type.isMinio() && r.Target.Type.isMinio() {
go func() {
// Snowball currently needs the high level minio-go Client, not the Core one
cl, err := miniogo.New(u.Host, &miniogo.Options{
Creds: credentials.NewStaticV4(cred.AccessKey, cred.SecretKey, cred.SessionToken),
Secure: u.Scheme == "https",
Transport: getRemoteInstanceTransport(),
BucketLookup: lookupStyle(r.Target.Path),
})
if err != nil {
batchLogOnceIf(ctx, err, job.ID+"miniogo.New")
return
}
// Already validated before arriving here
smallerThan, _ := humanize.ParseBytes(*r.Source.Snowball.SmallerThan)
batch := make([]ObjectInfo, 0, *r.Source.Snowball.Batch)
writeFn := func(batch []ObjectInfo) {
if len(batch) > 0 {
if err := r.writeAsArchive(ctx, api, cl, batch, r.Target.Prefix); err != nil {
batchLogOnceIf(ctx, err, job.ID+"writeAsArchive")
for _, b := range batch {
slowCh <- itemOrErr[ObjectInfo]{Item: b}
}
} else {
ri.trackCurrentBucketBatch(r.Source.Bucket, batch)
globalBatchJobsMetrics.save(job.ID, ri)
// persist in-memory state to disk after every 10secs.
batchLogOnceIf(ctx, ri.updateAfter(ctx, api, 10*time.Second, job), job.ID+"updateAfter")
}
}
}
for obj := range walkCh {
if obj.Item.DeleteMarker || !obj.Item.VersionPurgeStatus.Empty() || obj.Item.Size >= int64(smallerThan) {
slowCh <- obj
continue
}
batch = append(batch, obj.Item)
if len(batch) < *r.Source.Snowball.Batch {
continue
}
writeFn(batch)
batch = batch[:0]
}
writeFn(batch)
xioutil.SafeClose(slowCh)
}()
} else {
slowCh = walkCh
}
workerSize, err := strconv.Atoi(env.Get("_MINIO_BATCH_REPLICATION_WORKERS", strconv.Itoa(runtime.GOMAXPROCS(0)/2)))
if err != nil {
return err
}
wk, err := workers.New(workerSize)
if err != nil {
// invalid worker size.
return err
}
walkQuorum := env.Get("_MINIO_BATCH_REPLICATION_WALK_QUORUM", "strict")
if walkQuorum == "" {
walkQuorum = "strict"
}
ctx, cancelCause := context.WithCancelCause(ctx)
// one of source/target is s3, skip delete marker and all versions under the same object name.
s3Type := r.Target.Type == BatchJobReplicateResourceS3 || r.Source.Type == BatchJobReplicateResourceS3
if err := api.Walk(ctx, r.Source.Bucket, r.Source.Prefix, walkCh, WalkOptions{
Marker: lastObject,
Filter: selectObj,
AskDisks: walkQuorum,
}); err != nil {
cancel()
// Do not need to retry if we can't list objects on source.
return err
}
go func() {
prefixes := r.Source.Prefix.F()
if len(prefixes) == 0 {
prefixes = []string{""}
}
for _, prefix := range prefixes {
prefixWalkCh := make(chan itemOrErr[ObjectInfo], 100)
if err := api.Walk(ctx, r.Source.Bucket, prefix, prefixWalkCh, WalkOptions{
Marker: lastObject,
Filter: selectObj,
AskDisks: walkQuorum,
}); err != nil {
cancelCause(err)
xioutil.SafeClose(walkCh)
return
}
for obj := range prefixWalkCh {
walkCh <- obj
}
}
xioutil.SafeClose(walkCh)
}()
prevObj := ""
@@ -1171,7 +1231,7 @@ func (r *BatchJobReplicateV1) Start(ctx context.Context, api ObjectLayer, job Ba
for res := range slowCh {
if res.Err != nil {
ri.Failed = true
batchLogIf(ctx, res.Err)
batchLogOnceIf(ctx, res.Err, job.ID+"res.Err")
continue
}
result := res.Item
@@ -1198,7 +1258,7 @@ func (r *BatchJobReplicateV1) Start(ctx context.Context, api ObjectLayer, job Ba
return
}
stopFn(result, err)
batchLogIf(ctx, err)
batchLogOnceIf(ctx, err, job.ID+"ReplicateToTarget")
success = false
} else {
stopFn(result, nil)
@@ -1206,7 +1266,7 @@ func (r *BatchJobReplicateV1) Start(ctx context.Context, api ObjectLayer, job Ba
ri.trackCurrentBucketObject(r.Source.Bucket, result, success, attempts)
globalBatchJobsMetrics.save(job.ID, ri)
// persist in-memory state to disk after every 10secs.
batchLogIf(ctx, ri.updateAfter(ctx, api, 10*time.Second, job))
batchLogOnceIf(ctx, ri.updateAfter(ctx, api, 10*time.Second, job), job.ID+"updateAfter2")
if wait := globalBatchConfig.ReplicationWait(); wait > 0 {
time.Sleep(wait)
@@ -1214,20 +1274,23 @@ func (r *BatchJobReplicateV1) Start(ctx context.Context, api ObjectLayer, job Ba
}()
}
wk.Wait()
// Do not need to retry if we can't list objects on source.
if context.Cause(ctx) != nil {
return context.Cause(ctx)
}
ri.RetryAttempts = attempts
ri.Complete = ri.ObjectsFailed == 0
ri.Failed = ri.ObjectsFailed > 0
globalBatchJobsMetrics.save(job.ID, ri)
// persist in-memory state to disk.
batchLogIf(ctx, ri.updateAfter(ctx, api, 0, job))
batchLogOnceIf(ctx, ri.updateAfter(ctx, api, 0, job), job.ID+"updateAfter3")
if err := r.Notify(ctx, ri); err != nil {
batchLogIf(ctx, fmt.Errorf("unable to notify %v", err))
batchLogOnceIf(ctx, fmt.Errorf("unable to notify %v", err), job.ID+"notify")
}
cancel()
cancelCause(nil)
if ri.Failed {
ri.ObjectsFailed = 0
ri.Bucket = ""
@@ -1436,10 +1499,24 @@ func (j BatchJobRequest) Validate(ctx context.Context, o ObjectLayer) error {
}
func (j BatchJobRequest) delete(ctx context.Context, api ObjectLayer) {
deleteConfig(ctx, api, getJobReportPath(j))
deleteConfig(ctx, api, getJobPath(j))
}
func (j BatchJobRequest) getJobReportPath() (string, error) {
var fileName string
switch {
case j.Replicate != nil:
fileName = batchReplName
case j.KeyRotate != nil:
fileName = batchKeyRotationName
case j.Expire != nil:
fileName = batchExpireName
default:
return "", errors.New("unknown job type")
}
return pathJoin(batchJobReportsPrefix, j.ID, fileName), nil
}
func (j *BatchJobRequest) save(ctx context.Context, api ObjectLayer) error {
if j.Replicate == nil && j.KeyRotate == nil && j.Expire == nil {
return errInvalidArgument
@@ -1476,7 +1553,7 @@ func (j *BatchJobRequest) load(ctx context.Context, api ObjectLayer, name string
func batchReplicationOpts(ctx context.Context, sc string, objInfo ObjectInfo) (putOpts miniogo.PutObjectOptions, err error) {
// TODO: support custom storage class for remote replication
putOpts, err = putReplicationOpts(ctx, "", objInfo)
putOpts, err = putReplicationOpts(ctx, "", objInfo, 0)
if err != nil {
return putOpts, err
}
@@ -1500,9 +1577,6 @@ func (a adminAPIHandlers) ListBatchJobs(w http.ResponseWriter, r *http.Request)
}
jobType := r.Form.Get("jobType")
if jobType == "" {
jobType = string(madmin.BatchJobReplicate)
}
resultCh := make(chan itemOrErr[ObjectInfo])
@@ -1520,6 +1594,9 @@ func (a adminAPIHandlers) ListBatchJobs(w http.ResponseWriter, r *http.Request)
writeErrorResponseJSON(ctx, w, toAPIError(ctx, result.Err), r.URL)
return
}
if strings.HasPrefix(result.Item.Name, batchJobReportsPrefix+slashSeparator) {
continue
}
req := &BatchJobRequest{}
if err := req.load(ctx, objectAPI, result.Item.Name); err != nil {
if !errors.Is(err, errNoSuchJob) {
@@ -1528,7 +1605,7 @@ func (a adminAPIHandlers) ListBatchJobs(w http.ResponseWriter, r *http.Request)
continue
}
if jobType == string(req.Type()) {
if jobType == string(req.Type()) || jobType == "" {
listResult.Jobs = append(listResult.Jobs, madmin.BatchJobResult{
ID: req.ID,
Type: req.Type(),
@@ -1542,6 +1619,55 @@ func (a adminAPIHandlers) ListBatchJobs(w http.ResponseWriter, r *http.Request)
batchLogIf(ctx, json.NewEncoder(w).Encode(&listResult))
}
// BatchJobStatus - returns the status of a batch job saved in the disk
func (a adminAPIHandlers) BatchJobStatus(w http.ResponseWriter, r *http.Request) {
ctx := r.Context()
objectAPI, _ := validateAdminReq(ctx, w, r, policy.ListBatchJobsAction)
if objectAPI == nil {
return
}
jobID := r.Form.Get("jobId")
if jobID == "" {
writeErrorResponseJSON(ctx, w, toAPIError(ctx, errInvalidArgument), r.URL)
return
}
req := BatchJobRequest{ID: jobID}
if i := strings.Index(jobID, "-"); i > 0 {
switch madmin.BatchJobType(jobID[:i]) {
case madmin.BatchJobReplicate:
req.Replicate = &BatchJobReplicateV1{}
case madmin.BatchJobKeyRotate:
req.KeyRotate = &BatchJobKeyRotateV1{}
case madmin.BatchJobExpire:
req.Expire = &BatchJobExpire{}
default:
writeErrorResponseJSON(ctx, w, toAPIError(ctx, errors.New("job ID format unrecognized")), r.URL)
return
}
}
ri := &batchJobInfo{}
if err := ri.load(ctx, objectAPI, req); err != nil {
if !errors.Is(err, errNoSuchJob) {
batchLogIf(ctx, err)
}
writeErrorResponseJSON(ctx, w, toAPIError(ctx, err), r.URL)
return
}
buf, err := json.Marshal(madmin.BatchJobStatus{LastMetric: ri.metric()})
if err != nil {
batchLogIf(ctx, err)
writeErrorResponseJSON(ctx, w, toAPIError(ctx, err), r.URL)
return
}
w.Write(buf)
}
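A worked example of the ID parsing above, using a hypothetical ID in the new naming scheme introduced by the StartBatchJob change later in this diff (the key separator is assumed to be ':' here):

    jobID := "replicate-6fqpteDjbW8DHZ9emoSoK:0" // hypothetical example
    if i := strings.Index(jobID, "-"); i > 0 {
        fmt.Println(jobID[:i]) // "replicate" -> madmin.BatchJobReplicate
    }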
var errNoSuchJob = errors.New("no such job")
// DescribeBatchJob returns the currently active batch job definition
@@ -1579,7 +1705,7 @@ func (a adminAPIHandlers) DescribeBatchJob(w http.ResponseWriter, r *http.Reques
w.Write(buf)
}
// StarBatchJob queue a new job for execution
// StartBatchJob queue a new job for execution
func (a adminAPIHandlers) StartBatchJob(w http.ResponseWriter, r *http.Request) {
ctx := r.Context()
@@ -1633,7 +1759,7 @@ func (a adminAPIHandlers) StartBatchJob(w http.ResponseWriter, r *http.Request)
return
}
job.ID = fmt.Sprintf("%s:%d", shortuuid.New(), GetProxyEndpointLocalIndex(globalProxyEndpoints))
job.ID = fmt.Sprintf("%s-%s%s%d", job.Type(), shortuuid.New(), getKeySeparator(), GetProxyEndpointLocalIndex(globalProxyEndpoints))
job.User = user
job.Started = time.Now()
@@ -1721,11 +1847,60 @@ func newBatchJobPool(ctx context.Context, o ObjectLayer, workers int) *BatchJobP
jobCancelers: make(map[string]context.CancelFunc),
}
jpool.ResizeWorkers(workers)
jpool.resume()
randomWait := func() time.Duration {
// randomWait depends on the number of nodes to avoid triggering resume and cleanups at the same time.
return time.Duration(rand.Float64() * float64(time.Duration(globalEndpoints.NEndpoints())*time.Hour))
}
go func() {
jpool.resume(randomWait)
jpool.cleanupReports(randomWait)
}()
return jpool
}
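Illustration only: randomWait scales one hour by the endpoint count, so on a 4-endpoint deployment each node sleeps a uniformly random duration in [0h, 4h) before resuming jobs or cleaning up reports, staggering that work across the cluster:

    // rand.Float64() is in [0,1), so wait is uniform over [0h, 4h) for N=4.
    wait := time.Duration(rand.Float64() * float64(4*time.Hour))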
func (j *BatchJobPool) resume() {
func (j *BatchJobPool) cleanupReports(randomWait func() time.Duration) {
t := time.NewTimer(randomWait())
defer t.Stop()
for {
select {
case <-GlobalContext.Done():
return
case <-t.C:
results := make(chan itemOrErr[ObjectInfo], 100)
ctx, cancel := context.WithCancel(j.ctx)
defer cancel()
if err := j.objLayer.Walk(ctx, minioMetaBucket, batchJobReportsPrefix, results, WalkOptions{}); err != nil {
batchLogIf(j.ctx, err)
t.Reset(randomWait())
continue
}
for result := range results {
if result.Err != nil {
batchLogIf(j.ctx, result.Err)
continue
}
ri := &batchJobInfo{}
if err := ri.loadByPath(ctx, j.objLayer, result.Item.Name); err != nil {
batchLogIf(ctx, err)
continue
}
if (ri.Complete || ri.Failed) && time.Since(ri.LastUpdate) > oldJobsExpiration {
deleteConfig(ctx, j.objLayer, result.Item.Name)
}
}
t.Reset(randomWait())
}
}
}
func (j *BatchJobPool) resume(randomWait func() time.Duration) {
time.Sleep(randomWait())
results := make(chan itemOrErr[ObjectInfo], 100)
ctx, cancel := context.WithCancel(j.ctx)
defer cancel()
@@ -1738,6 +1913,9 @@ func (j *BatchJobPool) resume() {
batchLogIf(j.ctx, result.Err)
continue
}
if strings.HasPrefix(result.Item.Name, batchJobReportsPrefix+slashSeparator) {
continue
}
// ignore batch-replicate.bin and batch-rotate.bin entries
if strings.HasSuffix(result.Item.Name, slashSeparator) {
continue
@@ -1808,7 +1986,6 @@ func (j *BatchJobPool) AddWorker() {
}
}
}
job.delete(j.ctx, j.objLayer)
j.canceler(job.ID, false)
case <-j.workerKillCh:
return
@@ -1869,7 +2046,9 @@ func (j *BatchJobPool) canceler(jobID string, cancel bool) error {
canceler()
}
}
delete(j.jobCancelers, jobID)
if cancel {
delete(j.jobCancelers, jobID)
}
return nil
}
@@ -1988,18 +2167,49 @@ func (m *batchJobMetrics) purgeJobMetrics() {
var toDeleteJobMetrics []string
m.RLock()
for id, metrics := range m.metrics {
if time.Since(metrics.LastUpdate) > 24*time.Hour && (metrics.Complete || metrics.Failed) {
if time.Since(metrics.LastUpdate) > oldJobsExpiration && (metrics.Complete || metrics.Failed) {
toDeleteJobMetrics = append(toDeleteJobMetrics, id)
}
}
m.RUnlock()
for _, jobID := range toDeleteJobMetrics {
m.delete(jobID)
j := BatchJobRequest{
ID: jobID,
}
j.delete(GlobalContext, newObjectLayerFn())
}
}
}
}
// load metrics from disk on startup
func (m *batchJobMetrics) init(ctx context.Context, objectAPI ObjectLayer) error {
resultCh := make(chan itemOrErr[ObjectInfo])
ctx, cancel := context.WithCancel(ctx)
defer cancel()
if err := objectAPI.Walk(ctx, minioMetaBucket, batchJobReportsPrefix, resultCh, WalkOptions{}); err != nil {
return err
}
for result := range resultCh {
if result.Err != nil {
return result.Err
}
ri := &batchJobInfo{}
if err := ri.loadByPath(ctx, objectAPI, result.Item.Name); err != nil {
if !errors.Is(err, errNoSuchJob) {
batchLogIf(ctx, err)
}
continue
}
m.metrics[ri.JobID] = ri
}
return nil
}
func (m *batchJobMetrics) delete(jobID string) {
m.Lock()
defer m.Unlock()
@@ -2077,3 +2287,30 @@ func lookupStyle(s string) miniogo.BucketLookupType {
}
return lookup
}
// BatchJobPrefix - to support prefix field yaml unmarshalling with string or slice of strings
type BatchJobPrefix []string
var _ yaml.Unmarshaler = &BatchJobPrefix{}
// UnmarshalYAML - to support prefix field yaml unmarshalling with string or slice of strings
func (b *BatchJobPrefix) UnmarshalYAML(value *yaml.Node) error {
// try slice first
tmpSlice := []string{}
if err := value.Decode(&tmpSlice); err == nil {
*b = tmpSlice
return nil
}
// try string
tmpStr := ""
if err := value.Decode(&tmpStr); err == nil {
*b = []string{tmpStr}
return nil
}
return fmt.Errorf("unable to decode %s", value.Value)
}
// F - return prefix(es) as slice
func (b *BatchJobPrefix) F() []string {
return *b
}
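Taken together with UnmarshalYAML above, both YAML shapes land in the same slice; a minimal usage sketch mirroring the tests later in this diff (prefix values are hypothetical):

    var one, many struct {
        Prefix BatchJobPrefix `yaml:"prefix"`
    }
    _ = yaml.Unmarshal([]byte(`prefix: "logs/"`), &one)          // one.Prefix.F() == ["logs/"]
    _ = yaml.Unmarshal([]byte("prefix:\n  - a/\n  - b/"), &many) // many.Prefix.F() == ["a/", "b/"]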


@@ -6,6 +6,89 @@ import (
"github.com/tinylib/msgp/msgp"
)
// DecodeMsg implements msgp.Decodable
func (z *BatchJobPrefix) DecodeMsg(dc *msgp.Reader) (err error) {
var zb0002 uint32
zb0002, err = dc.ReadArrayHeader()
if err != nil {
err = msgp.WrapError(err)
return
}
if cap((*z)) >= int(zb0002) {
(*z) = (*z)[:zb0002]
} else {
(*z) = make(BatchJobPrefix, zb0002)
}
for zb0001 := range *z {
(*z)[zb0001], err = dc.ReadString()
if err != nil {
err = msgp.WrapError(err, zb0001)
return
}
}
return
}
// EncodeMsg implements msgp.Encodable
func (z BatchJobPrefix) EncodeMsg(en *msgp.Writer) (err error) {
err = en.WriteArrayHeader(uint32(len(z)))
if err != nil {
err = msgp.WrapError(err)
return
}
for zb0003 := range z {
err = en.WriteString(z[zb0003])
if err != nil {
err = msgp.WrapError(err, zb0003)
return
}
}
return
}
// MarshalMsg implements msgp.Marshaler
func (z BatchJobPrefix) MarshalMsg(b []byte) (o []byte, err error) {
o = msgp.Require(b, z.Msgsize())
o = msgp.AppendArrayHeader(o, uint32(len(z)))
for zb0003 := range z {
o = msgp.AppendString(o, z[zb0003])
}
return
}
// UnmarshalMsg implements msgp.Unmarshaler
func (z *BatchJobPrefix) UnmarshalMsg(bts []byte) (o []byte, err error) {
var zb0002 uint32
zb0002, bts, err = msgp.ReadArrayHeaderBytes(bts)
if err != nil {
err = msgp.WrapError(err)
return
}
if cap((*z)) >= int(zb0002) {
(*z) = (*z)[:zb0002]
} else {
(*z) = make(BatchJobPrefix, zb0002)
}
for zb0001 := range *z {
(*z)[zb0001], bts, err = msgp.ReadStringBytes(bts)
if err != nil {
err = msgp.WrapError(err, zb0001)
return
}
}
o = bts
return
}
// Msgsize returns an upper bound estimate of the number of bytes occupied by the serialized message
func (z BatchJobPrefix) Msgsize() (s int) {
s = msgp.ArrayHeaderSize
for zb0003 := range z {
s += msgp.StringPrefixSize + len(z[zb0003])
}
return
}
// DecodeMsg implements msgp.Decodable
func (z *BatchJobRequest) DecodeMsg(dc *msgp.Reader) (err error) {
var field []byte


@@ -9,6 +9,119 @@ import (
"github.com/tinylib/msgp/msgp"
)
func TestMarshalUnmarshalBatchJobPrefix(t *testing.T) {
v := BatchJobPrefix{}
bts, err := v.MarshalMsg(nil)
if err != nil {
t.Fatal(err)
}
left, err := v.UnmarshalMsg(bts)
if err != nil {
t.Fatal(err)
}
if len(left) > 0 {
t.Errorf("%d bytes left over after UnmarshalMsg(): %q", len(left), left)
}
left, err = msgp.Skip(bts)
if err != nil {
t.Fatal(err)
}
if len(left) > 0 {
t.Errorf("%d bytes left over after Skip(): %q", len(left), left)
}
}
func BenchmarkMarshalMsgBatchJobPrefix(b *testing.B) {
v := BatchJobPrefix{}
b.ReportAllocs()
b.ResetTimer()
for i := 0; i < b.N; i++ {
v.MarshalMsg(nil)
}
}
func BenchmarkAppendMsgBatchJobPrefix(b *testing.B) {
v := BatchJobPrefix{}
bts := make([]byte, 0, v.Msgsize())
bts, _ = v.MarshalMsg(bts[0:0])
b.SetBytes(int64(len(bts)))
b.ReportAllocs()
b.ResetTimer()
for i := 0; i < b.N; i++ {
bts, _ = v.MarshalMsg(bts[0:0])
}
}
func BenchmarkUnmarshalBatchJobPrefix(b *testing.B) {
v := BatchJobPrefix{}
bts, _ := v.MarshalMsg(nil)
b.ReportAllocs()
b.SetBytes(int64(len(bts)))
b.ResetTimer()
for i := 0; i < b.N; i++ {
_, err := v.UnmarshalMsg(bts)
if err != nil {
b.Fatal(err)
}
}
}
func TestEncodeDecodeBatchJobPrefix(t *testing.T) {
v := BatchJobPrefix{}
var buf bytes.Buffer
msgp.Encode(&buf, &v)
m := v.Msgsize()
if buf.Len() > m {
t.Log("WARNING: TestEncodeDecodeBatchJobPrefix Msgsize() is inaccurate")
}
vn := BatchJobPrefix{}
err := msgp.Decode(&buf, &vn)
if err != nil {
t.Error(err)
}
buf.Reset()
msgp.Encode(&buf, &v)
err = msgp.NewReader(&buf).Skip()
if err != nil {
t.Error(err)
}
}
func BenchmarkEncodeBatchJobPrefix(b *testing.B) {
v := BatchJobPrefix{}
var buf bytes.Buffer
msgp.Encode(&buf, &v)
b.SetBytes(int64(buf.Len()))
en := msgp.NewWriter(msgp.Nowhere)
b.ReportAllocs()
b.ResetTimer()
for i := 0; i < b.N; i++ {
v.EncodeMsg(en)
}
en.Flush()
}
func BenchmarkDecodeBatchJobPrefix(b *testing.B) {
v := BatchJobPrefix{}
var buf bytes.Buffer
msgp.Encode(&buf, &v)
b.SetBytes(int64(buf.Len()))
rd := msgp.NewEndlessReader(buf.Bytes(), b)
dc := msgp.NewReader(rd)
b.ReportAllocs()
b.ResetTimer()
for i := 0; i < b.N; i++ {
err := v.DecodeMsg(dc)
if err != nil {
b.Fatal(err)
}
}
}
func TestMarshalUnmarshalBatchJobRequest(t *testing.T) {
v := BatchJobRequest{}
bts, err := v.MarshalMsg(nil)


@@ -0,0 +1,75 @@
// Copyright (c) 2015-2024 MinIO, Inc.
//
// This file is part of MinIO Object Storage stack
//
// This program is free software: you can redistribute it and/or modify
// it under the terms of the GNU Affero General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
//
// This program is distributed in the hope that it will be useful
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU Affero General Public License for more details.
//
// You should have received a copy of the GNU Affero General Public License
// along with this program. If not, see <http://www.gnu.org/licenses/>.
package cmd
import (
"slices"
"testing"
"gopkg.in/yaml.v3"
)
func TestBatchJobPrefix_UnmarshalYAML(t *testing.T) {
type args struct {
yamlStr string
}
type PrefixTemp struct {
Prefix BatchJobPrefix `yaml:"prefix"`
}
tests := []struct {
name string
b PrefixTemp
args args
want []string
wantErr bool
}{
{
name: "test1",
b: PrefixTemp{},
args: args{
yamlStr: `
prefix: "foo"
`,
},
want: []string{"foo"},
wantErr: false,
},
{
name: "test2",
b: PrefixTemp{},
args: args{
yamlStr: `
prefix:
- "foo"
- "bar"
`,
},
want: []string{"foo", "bar"},
},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
if err := yaml.Unmarshal([]byte(tt.args.yamlStr), &tt.b); (err != nil) != tt.wantErr {
t.Errorf("UnmarshalYAML() error = %v, wantErr %v", err, tt.wantErr)
}
if !slices.Equal(tt.b.Prefix.F(), tt.want) {
t.Errorf("UnmarshalYAML() = %v, want %v", tt.b.Prefix.F(), tt.want)
}
})
}
}


@@ -23,7 +23,7 @@ import (
"time"
"github.com/dustin/go-humanize"
"github.com/minio/pkg/v2/wildcard"
"github.com/minio/pkg/v3/wildcard"
"gopkg.in/yaml.v3"
)


@@ -21,8 +21,8 @@ import (
"time"
miniogo "github.com/minio/minio-go/v7"
"github.com/minio/minio/internal/auth"
"github.com/minio/pkg/v3/xtime"
)
//go:generate msgp -file $GOFILE
@@ -65,12 +65,12 @@ import (
// BatchReplicateFilter holds all the filters currently supported for batch replication
type BatchReplicateFilter struct {
NewerThan time.Duration `yaml:"newerThan,omitempty" json:"newerThan"`
OlderThan time.Duration `yaml:"olderThan,omitempty" json:"olderThan"`
CreatedAfter time.Time `yaml:"createdAfter,omitempty" json:"createdAfter"`
CreatedBefore time.Time `yaml:"createdBefore,omitempty" json:"createdBefore"`
Tags []BatchJobKV `yaml:"tags,omitempty" json:"tags"`
Metadata []BatchJobKV `yaml:"metadata,omitempty" json:"metadata"`
NewerThan xtime.Duration `yaml:"newerThan,omitempty" json:"newerThan"`
OlderThan xtime.Duration `yaml:"olderThan,omitempty" json:"olderThan"`
CreatedAfter time.Time `yaml:"createdAfter,omitempty" json:"createdAfter"`
CreatedBefore time.Time `yaml:"createdBefore,omitempty" json:"createdBefore"`
Tags []BatchJobKV `yaml:"tags,omitempty" json:"tags"`
Metadata []BatchJobKV `yaml:"metadata,omitempty" json:"metadata"`
}
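Switching these filters from time.Duration to xtime.Duration is what lets day-suffixed values such as "7d10h" (used in the updated tests) parse from YAML; a small sketch assuming the minio/pkg/v3 xtime semantics used elsewhere in this diff:

    var f struct {
        OlderThan xtime.Duration `yaml:"olderThan"`
    }
    if err := yaml.Unmarshal([]byte(`olderThan: 7d10h`), &f); err == nil {
        fmt.Println(f.OlderThan.D()) // 178h0m0s; D() yields the plain time.Duration
    }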
// BatchJobReplicateFlags various configurations for replication job definition currently includes
@@ -151,7 +151,7 @@ func (t BatchJobReplicateTarget) ValidPath() bool {
type BatchJobReplicateSource struct {
Type BatchJobReplicateResourceType `yaml:"type" json:"type"`
Bucket string `yaml:"bucket" json:"bucket"`
Prefix string `yaml:"prefix" json:"prefix"`
Prefix BatchJobPrefix `yaml:"prefix" json:"prefix"`
Endpoint string `yaml:"endpoint" json:"endpoint"`
Path string `yaml:"path" json:"path"`
Creds BatchJobReplicateCredentials `yaml:"credentials" json:"credentials"`

View File

@@ -411,7 +411,7 @@ func (z *BatchJobReplicateSource) DecodeMsg(dc *msgp.Reader) (err error) {
return
}
case "Prefix":
z.Prefix, err = dc.ReadString()
err = z.Prefix.DecodeMsg(dc)
if err != nil {
err = msgp.WrapError(err, "Prefix")
return
@@ -514,7 +514,7 @@ func (z *BatchJobReplicateSource) EncodeMsg(en *msgp.Writer) (err error) {
if err != nil {
return
}
err = en.WriteString(z.Prefix)
err = z.Prefix.EncodeMsg(en)
if err != nil {
err = msgp.WrapError(err, "Prefix")
return
@@ -600,7 +600,11 @@ func (z *BatchJobReplicateSource) MarshalMsg(b []byte) (o []byte, err error) {
o = msgp.AppendString(o, z.Bucket)
// string "Prefix"
o = append(o, 0xa6, 0x50, 0x72, 0x65, 0x66, 0x69, 0x78)
o = msgp.AppendString(o, z.Prefix)
o, err = z.Prefix.MarshalMsg(o)
if err != nil {
err = msgp.WrapError(err, "Prefix")
return
}
// string "Endpoint"
o = append(o, 0xa8, 0x45, 0x6e, 0x64, 0x70, 0x6f, 0x69, 0x6e, 0x74)
o = msgp.AppendString(o, z.Endpoint)
@@ -664,7 +668,7 @@ func (z *BatchJobReplicateSource) UnmarshalMsg(bts []byte) (o []byte, err error)
return
}
case "Prefix":
z.Prefix, bts, err = msgp.ReadStringBytes(bts)
bts, err = z.Prefix.UnmarshalMsg(bts)
if err != nil {
err = msgp.WrapError(err, "Prefix")
return
@@ -742,7 +746,7 @@ func (z *BatchJobReplicateSource) UnmarshalMsg(bts []byte) (o []byte, err error)
// Msgsize returns an upper bound estimate of the number of bytes occupied by the serialized message
func (z *BatchJobReplicateSource) Msgsize() (s int) {
s = 1 + 5 + msgp.StringPrefixSize + len(string(z.Type)) + 7 + msgp.StringPrefixSize + len(z.Bucket) + 7 + msgp.StringPrefixSize + len(z.Prefix) + 9 + msgp.StringPrefixSize + len(z.Endpoint) + 5 + msgp.StringPrefixSize + len(z.Path) + 6 + 1 + 10 + msgp.StringPrefixSize + len(z.Creds.AccessKey) + 10 + msgp.StringPrefixSize + len(z.Creds.SecretKey) + 13 + msgp.StringPrefixSize + len(z.Creds.SessionToken) + 9 + z.Snowball.Msgsize()
s = 1 + 5 + msgp.StringPrefixSize + len(string(z.Type)) + 7 + msgp.StringPrefixSize + len(z.Bucket) + 7 + z.Prefix.Msgsize() + 9 + msgp.StringPrefixSize + len(z.Endpoint) + 5 + msgp.StringPrefixSize + len(z.Path) + 6 + 1 + 10 + msgp.StringPrefixSize + len(z.Creds.AccessKey) + 10 + msgp.StringPrefixSize + len(z.Creds.SecretKey) + 13 + msgp.StringPrefixSize + len(z.Creds.SessionToken) + 9 + z.Snowball.Msgsize()
return
}
@@ -1409,13 +1413,13 @@ func (z *BatchReplicateFilter) DecodeMsg(dc *msgp.Reader) (err error) {
}
switch msgp.UnsafeString(field) {
case "NewerThan":
z.NewerThan, err = dc.ReadDuration()
err = z.NewerThan.DecodeMsg(dc)
if err != nil {
err = msgp.WrapError(err, "NewerThan")
return
}
case "OlderThan":
z.OlderThan, err = dc.ReadDuration()
err = z.OlderThan.DecodeMsg(dc)
if err != nil {
err = msgp.WrapError(err, "OlderThan")
return
@@ -1489,7 +1493,7 @@ func (z *BatchReplicateFilter) EncodeMsg(en *msgp.Writer) (err error) {
if err != nil {
return
}
err = en.WriteDuration(z.NewerThan)
err = z.NewerThan.EncodeMsg(en)
if err != nil {
err = msgp.WrapError(err, "NewerThan")
return
@@ -1499,7 +1503,7 @@ func (z *BatchReplicateFilter) EncodeMsg(en *msgp.Writer) (err error) {
if err != nil {
return
}
err = en.WriteDuration(z.OlderThan)
err = z.OlderThan.EncodeMsg(en)
if err != nil {
err = msgp.WrapError(err, "OlderThan")
return
@@ -1567,10 +1571,18 @@ func (z *BatchReplicateFilter) MarshalMsg(b []byte) (o []byte, err error) {
// map header, size 6
// string "NewerThan"
o = append(o, 0x86, 0xa9, 0x4e, 0x65, 0x77, 0x65, 0x72, 0x54, 0x68, 0x61, 0x6e)
o = msgp.AppendDuration(o, z.NewerThan)
o, err = z.NewerThan.MarshalMsg(o)
if err != nil {
err = msgp.WrapError(err, "NewerThan")
return
}
// string "OlderThan"
o = append(o, 0xa9, 0x4f, 0x6c, 0x64, 0x65, 0x72, 0x54, 0x68, 0x61, 0x6e)
o = msgp.AppendDuration(o, z.OlderThan)
o, err = z.OlderThan.MarshalMsg(o)
if err != nil {
err = msgp.WrapError(err, "OlderThan")
return
}
// string "CreatedAfter"
o = append(o, 0xac, 0x43, 0x72, 0x65, 0x61, 0x74, 0x65, 0x64, 0x41, 0x66, 0x74, 0x65, 0x72)
o = msgp.AppendTime(o, z.CreatedAfter)
@@ -1619,13 +1631,13 @@ func (z *BatchReplicateFilter) UnmarshalMsg(bts []byte) (o []byte, err error) {
}
switch msgp.UnsafeString(field) {
case "NewerThan":
z.NewerThan, bts, err = msgp.ReadDurationBytes(bts)
bts, err = z.NewerThan.UnmarshalMsg(bts)
if err != nil {
err = msgp.WrapError(err, "NewerThan")
return
}
case "OlderThan":
z.OlderThan, bts, err = msgp.ReadDurationBytes(bts)
bts, err = z.OlderThan.UnmarshalMsg(bts)
if err != nil {
err = msgp.WrapError(err, "OlderThan")
return
@@ -1694,7 +1706,7 @@ func (z *BatchReplicateFilter) UnmarshalMsg(bts []byte) (o []byte, err error) {
// Msgsize returns an upper bound estimate of the number of bytes occupied by the serialized message
func (z *BatchReplicateFilter) Msgsize() (s int) {
s = 1 + 10 + msgp.DurationSize + 10 + msgp.DurationSize + 13 + msgp.TimeSize + 14 + msgp.TimeSize + 5 + msgp.ArrayHeaderSize
s = 1 + 10 + z.NewerThan.Msgsize() + 10 + z.OlderThan.Msgsize() + 13 + msgp.TimeSize + 14 + msgp.TimeSize + 5 + msgp.ArrayHeaderSize
for za0001 := range z.Tags {
s += z.Tags[za0001].Msgsize()
}

cmd/batch-replicate_test.go (new file, 182 lines)

@@ -0,0 +1,182 @@
// Copyright (c) 2015-2024 MinIO, Inc.
//
// This file is part of MinIO Object Storage stack
//
// This program is free software: you can redistribute it and/or modify
// it under the terms of the GNU Affero General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
//
// This program is distributed in the hope that it will be useful
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU Affero General Public License for more details.
//
// You should have received a copy of the GNU Affero General Public License
// along with this program. If not, see <http://www.gnu.org/licenses/>.
package cmd
import (
"slices"
"testing"
"gopkg.in/yaml.v3"
)
func TestParseBatchJobReplicate(t *testing.T) {
replicateYaml := `
replicate:
apiVersion: v1
# source of the objects to be replicated
source:
type: minio # valid values are "s3" or "minio"
bucket: mytest
prefix: object-prefix1 # 'PREFIX' is optional
# If your source is the 'local' alias specified to 'mc batch start', then the 'endpoint' and 'credentials' fields are optional and can be omitted
# Either the 'source' or 'remote' *must* be the "local" deployment
# endpoint: "http://127.0.0.1:9000"
# # path: "on|off|auto" # "on" enables path-style bucket lookup. "off" enables virtual host (DNS)-style bucket lookup. Defaults to "auto"
# credentials:
# accessKey: minioadmin # Required
# secretKey: minioadmin # Required
# # sessionToken: SESSION-TOKEN # Optional only available when rotating credentials are used
snowball: # automatically activated if the source is local
disable: true # optionally turn-off snowball archive transfer
# batch: 100 # upto this many objects per archive
# inmemory: true # indicates if the archive must be staged locally or in-memory
# compress: false # S2/Snappy compressed archive
# smallerThan: 5MiB # create archive for all objects smaller than 5MiB
# skipErrs: false # skips any source side read() errors
# target where the objects must be replicated
target:
type: minio # valid values are "s3" or "minio"
bucket: mytest
prefix: stage # 'PREFIX' is optional
# If your source is the 'local' alias specified to 'mc batch start', then the 'endpoint' and 'credentials' fields are optional and can be omitted
# Either the 'source' or 'remote' *must* be the "local" deployment
endpoint: "http://127.0.0.1:9001"
# path: "on|off|auto" # "on" enables path-style bucket lookup. "off" enables virtual host (DNS)-style bucket lookup. Defaults to "auto"
credentials:
accessKey: minioadmin
secretKey: minioadmin
# sessionToken: SESSION-TOKEN # Optional only available when rotating credentials are used
# NOTE: All flags are optional
# - filtering criteria only applies for all source objects match the criteria
# - configurable notification endpoints
# - configurable retries for the job (each retry skips successfully previously replaced objects)
flags:
filter:
newerThan: "7d10h31s" # match objects newer than this value (e.g. 7d10h31s)
olderThan: "7d" # match objects older than this value (e.g. 7d10h31s)
# createdAfter: "date" # match objects created after "date"
# createdBefore: "date" # match objects created before "date"
## NOTE: tags are not supported when "source" is remote.
tags:
- key: "name"
value: "pick*" # match objects with tag 'name', with all values starting with 'pick'
metadata:
- key: "content-type"
value: "image/*" # match objects with 'content-type', with all values starting with 'image/'
# notify:
# endpoint: "https://notify.endpoint" # notification endpoint to receive job status events
# token: "Bearer xxxxx" # optional authentication token for the notification endpoint
#
# retry:
# attempts: 10 # number of retries for the job before giving up
# delay: "500ms" # least amount of delay between each retry
`
var job BatchJobRequest
err := yaml.Unmarshal([]byte(replicateYaml), &job)
if err != nil {
t.Fatal("Failed to parse batch-job-replicate yaml", err)
}
if !slices.Equal(job.Replicate.Source.Prefix.F(), []string{"object-prefix1"}) {
t.Fatal("Failed to parse batch-job-replicate yaml", err)
}
multiPrefixReplicateYaml := `
replicate:
apiVersion: v1
# source of the objects to be replicated
source:
type: minio # valid values are "s3" or "minio"
bucket: mytest
prefix: # 'PREFIX' is optional
- object-prefix1
- object-prefix2
# If your source is the 'local' alias specified to 'mc batch start', then the 'endpoint' and 'credentials' fields are optional and can be omitted
# Either the 'source' or 'remote' *must* be the "local" deployment
# endpoint: "http://127.0.0.1:9000"
# # path: "on|off|auto" # "on" enables path-style bucket lookup. "off" enables virtual host (DNS)-style bucket lookup. Defaults to "auto"
# credentials:
# accessKey: minioadmin # Required
# secretKey: minioadmin # Required
# # sessionToken: SESSION-TOKEN # Optional only available when rotating credentials are used
snowball: # automatically activated if the source is local
disable: true # optionally turn-off snowball archive transfer
# batch: 100 # upto this many objects per archive
# inmemory: true # indicates if the archive must be staged locally or in-memory
# compress: false # S2/Snappy compressed archive
# smallerThan: 5MiB # create archive for all objects smaller than 5MiB
# skipErrs: false # skips any source side read() errors
# target where the objects must be replicated
target:
type: minio # valid values are "s3" or "minio"
bucket: mytest
prefix: stage # 'PREFIX' is optional
# If your source is the 'local' alias specified to 'mc batch start', then the 'endpoint' and 'credentials' fields are optional and can be omitted
# Either the 'source' or 'remote' *must* be the "local" deployment
endpoint: "http://127.0.0.1:9001"
# path: "on|off|auto" # "on" enables path-style bucket lookup. "off" enables virtual host (DNS)-style bucket lookup. Defaults to "auto"
credentials:
accessKey: minioadmin
secretKey: minioadmin
# sessionToken: SESSION-TOKEN # Optional only available when rotating credentials are used
# NOTE: All flags are optional
# - filtering criteria only applies for all source objects match the criteria
# - configurable notification endpoints
# - configurable retries for the job (each retry skips successfully previously replaced objects)
flags:
filter:
newerThan: "7d10h31s" # match objects newer than this value (e.g. 7d10h31s)
olderThan: "7d" # match objects older than this value (e.g. 7d10h31s)
# createdAfter: "date" # match objects created after "date"
# createdBefore: "date" # match objects created before "date"
## NOTE: tags are not supported when "source" is remote.
tags:
- key: "name"
value: "pick*" # match objects with tag 'name', with all values starting with 'pick'
metadata:
- key: "content-type"
value: "image/*" # match objects with 'content-type', with all values starting with 'image/'
# notify:
# endpoint: "https://notify.endpoint" # notification endpoint to receive job status events
# token: "Bearer xxxxx" # optional authentication token for the notification endpoint
#
# retry:
# attempts: 10 # number of retries for the job before giving up
# delay: "500ms" # least amount of delay between each retry
`
var multiPrefixJob BatchJobRequest
err = yaml.Unmarshal([]byte(multiPrefixReplicateYaml), &multiPrefixJob)
if err != nil {
t.Fatal("Failed to parse batch-job-replicate yaml", err)
}
if !slices.Equal(multiPrefixJob.Replicate.Source.Prefix.F(), []string{"object-prefix1", "object-prefix2"}) {
t.Fatal("Failed to parse batch-job-replicate yaml")
}
}


@@ -33,8 +33,8 @@ import (
"github.com/minio/minio/internal/crypto"
xhttp "github.com/minio/minio/internal/http"
"github.com/minio/minio/internal/kms"
"github.com/minio/pkg/v2/env"
"github.com/minio/pkg/v2/workers"
"github.com/minio/pkg/v3/env"
"github.com/minio/pkg/v3/workers"
)
// keyrotate:
@@ -95,6 +95,7 @@ func (e BatchJobKeyRotateEncryption) Validate() error {
if e.Type == ssekms && spaces {
return crypto.ErrInvalidEncryptionKeyID
}
if e.Type == ssekms && GlobalKMS != nil {
ctx := kms.Context{}
if e.Context != "" {
@@ -113,7 +114,7 @@ func (e BatchJobKeyRotateEncryption) Validate() error {
e.kmsContext[k] = v
}
ctx["MinIO batch API"] = "batchrotate" // Context for a test key operation
if _, err := GlobalKMS.GenerateKey(GlobalContext, e.Key, ctx); err != nil {
if _, err := GlobalKMS.GenerateKey(GlobalContext, &kms.GenerateKeyRequest{Name: e.Key, AssociatedData: ctx}); err != nil {
return err
}
}
@@ -256,7 +257,7 @@ func (r *BatchJobKeyRotateV1) Start(ctx context.Context, api ObjectLayer, job Ba
JobType: string(job.Type()),
StartTime: job.Started,
}
if err := ri.load(ctx, api, job); err != nil {
if err := ri.loadOrInit(ctx, api, job); err != nil {
return err
}
if ri.Complete {
@@ -266,8 +267,12 @@ func (r *BatchJobKeyRotateV1) Start(ctx context.Context, api ObjectLayer, job Ba
globalBatchJobsMetrics.save(job.ID, ri)
lastObject := ri.Object
retryAttempts := job.KeyRotate.Flags.Retry.Attempts
if retryAttempts <= 0 {
retryAttempts = batchKeyRotateJobDefaultRetries
}
delay := job.KeyRotate.Flags.Retry.Delay
if delay == 0 {
if delay <= 0 {
delay = batchKeyRotateJobDefaultRetryDelay
}
@@ -353,7 +358,6 @@ func (r *BatchJobKeyRotateV1) Start(ctx context.Context, api ObjectLayer, job Ba
return err
}
retryAttempts := ri.RetryAttempts
ctx, cancel := context.WithCancel(ctx)
results := make(chan itemOrErr[ObjectInfo], 100)
@@ -388,6 +392,17 @@ func (r *BatchJobKeyRotateV1) Start(ctx context.Context, api ObjectLayer, job Ba
stopFn(result, err)
batchLogIf(ctx, err)
success = false
if attempts >= retryAttempts {
auditOptions := AuditLogOptions{
Event: "KeyRotate",
APIName: "StartBatchJob",
Bucket: result.Bucket,
Object: result.Name,
VersionID: result.VersionID,
Error: err.Error(),
}
auditLogInternal(ctx, auditOptions)
}
} else {
stopFn(result, nil)
}
@@ -478,8 +493,5 @@ func (r *BatchJobKeyRotateV1) Validate(ctx context.Context, job BatchJobRequest,
}
}
if err := r.Flags.Retry.Validate(); err != nil {
return err
}
return nil
return r.Flags.Retry.Validate()
}


@@ -26,15 +26,17 @@ import (
xhttp "github.com/minio/minio/internal/http"
"github.com/minio/minio/internal/ioutil"
"github.com/minio/minio/internal/ringbuffer"
)
// Calculates bitrot in chunks and writes the hash into the stream.
type streamingBitrotWriter struct {
iow io.WriteCloser
closeWithErr func(err error) error
closeWithErr func(err error)
h hash.Hash
shardSize int64
canClose *sync.WaitGroup
byteBuf []byte
}
func (b *streamingBitrotWriter) Write(p []byte) (int, error) {
@@ -62,7 +64,10 @@ func (b *streamingBitrotWriter) Write(p []byte) (int, error) {
}
func (b *streamingBitrotWriter) Close() error {
// Close the underlying writer.
// This will also flush the ring buffer if used.
err := b.iow.Close()
// Wait for all data to be written before returning else it causes race conditions.
// Race condition is because of io.PipeWriter implementation. i.e consider the following
// sequence of operations:
@@ -73,29 +78,34 @@ func (b *streamingBitrotWriter) Close() error {
if b.canClose != nil {
b.canClose.Wait()
}
// Recycle the buffer.
if b.byteBuf != nil {
globalBytePoolCap.Load().Put(b.byteBuf)
b.byteBuf = nil
}
return err
}
// newStreamingBitrotWriterBuffer returns streaming bitrot writer implementation.
// The output is written to the supplied writer w.
func newStreamingBitrotWriterBuffer(w io.Writer, algo BitrotAlgorithm, shardSize int64) io.Writer {
return &streamingBitrotWriter{iow: ioutil.NopCloser(w), h: algo.New(), shardSize: shardSize, canClose: nil, closeWithErr: func(err error) error {
// Similar to CloseWithError on pipes we always return nil.
return nil
}}
return &streamingBitrotWriter{iow: ioutil.NopCloser(w), h: algo.New(), shardSize: shardSize, canClose: nil, closeWithErr: func(err error) {}}
}
// Returns streaming bitrot writer implementation.
func newStreamingBitrotWriter(disk StorageAPI, origvolume, volume, filePath string, length int64, algo BitrotAlgorithm, shardSize int64) io.Writer {
r, w := io.Pipe()
h := algo.New()
buf := globalBytePoolCap.Load().Get()
rb := ringbuffer.NewBuffer(buf[:cap(buf)]).SetBlocking(true)
bw := &streamingBitrotWriter{
iow: ioutil.NewDeadlineWriter(w, globalDriveConfig.GetMaxTimeout()),
closeWithErr: w.CloseWithError,
iow: ioutil.NewDeadlineWriter(rb.WriteCloser(), globalDriveConfig.GetMaxTimeout()),
closeWithErr: rb.CloseWithError,
h: h,
shardSize: shardSize,
canClose: &sync.WaitGroup{},
byteBuf: buf,
}
bw.canClose.Add(1)
go func() {
@@ -106,7 +116,7 @@ func newStreamingBitrotWriter(disk StorageAPI, origvolume, volume, filePath stri
bitrotSumsTotalSize := ceilFrac(length, shardSize) * int64(h.Size()) // Size used for storing bitrot checksums.
totalFileSize = bitrotSumsTotalSize + length
}
r.CloseWithError(disk.CreateFile(context.TODO(), origvolume, volume, filePath, totalFileSize, r))
rb.CloseWithError(disk.CreateFile(context.TODO(), origvolume, volume, filePath, totalFileSize, rb))
}()
return bw
}
@@ -131,13 +141,7 @@ func (b *streamingBitrotReader) Close() error {
}
if closer, ok := b.rc.(io.Closer); ok {
// drain the body for connection reuse at network layer.
xhttp.DrainBody(struct {
io.Reader
io.Closer
}{
Reader: b.rc,
Closer: closeWrapper(func() error { return nil }),
})
xhttp.DrainBody(io.NopCloser(b.rc))
return closer.Close()
}
return nil


@@ -19,9 +19,13 @@ package cmd
import (
"context"
"crypto/md5"
"encoding/hex"
"errors"
"fmt"
"io"
"math/rand"
"os"
"reflect"
"strings"
"sync"
@@ -30,7 +34,7 @@ import (
"github.com/minio/minio-go/v7/pkg/set"
"github.com/minio/minio/internal/grid"
"github.com/minio/minio/internal/logger"
"github.com/minio/pkg/v2/env"
"github.com/minio/pkg/v3/env"
)
// To abstract a node over network.
@@ -43,10 +47,15 @@ type ServerSystemConfig struct {
NEndpoints int
CmdLines []string
MinioEnv map[string]string
Checksum string
}
// Diff - returns an error on the first difference found between two configs.
func (s1 *ServerSystemConfig) Diff(s2 *ServerSystemConfig) error {
if s1.Checksum != s2.Checksum {
return fmt.Errorf("Expected MinIO binary checksum: %s, seen: %s", s1.Checksum, s2.Checksum)
}
ns1 := s1.NEndpoints
ns2 := s2.NEndpoints
if ns1 != ns2 {
@@ -82,7 +91,7 @@ func (s1 *ServerSystemConfig) Diff(s2 *ServerSystemConfig) error {
extra = append(extra, k)
}
}
msg := "Expected same MINIO_ environment variables and values across all servers: "
msg := "Expected MINIO_* environment name and values across all servers to be same: "
if len(missing) > 0 {
msg += fmt.Sprintf(`Missing environment values: %v. `, missing)
}
@@ -97,14 +106,17 @@ func (s1 *ServerSystemConfig) Diff(s2 *ServerSystemConfig) error {
}
var skipEnvs = map[string]struct{}{
"MINIO_OPTS": {},
"MINIO_CERT_PASSWD": {},
"MINIO_SERVER_DEBUG": {},
"MINIO_DSYNC_TRACE": {},
"MINIO_ROOT_USER": {},
"MINIO_ROOT_PASSWORD": {},
"MINIO_ACCESS_KEY": {},
"MINIO_SECRET_KEY": {},
"MINIO_OPTS": {},
"MINIO_CERT_PASSWD": {},
"MINIO_SERVER_DEBUG": {},
"MINIO_DSYNC_TRACE": {},
"MINIO_ROOT_USER": {},
"MINIO_ROOT_PASSWORD": {},
"MINIO_ACCESS_KEY": {},
"MINIO_SECRET_KEY": {},
"MINIO_OPERATOR_VERSION": {},
"MINIO_VSPHERE_PLUGIN_VERSION": {},
"MINIO_CI_CD": {},
}
func getServerSystemCfg() *ServerSystemConfig {
@@ -120,7 +132,7 @@ func getServerSystemCfg() *ServerSystemConfig {
}
envValues[envK] = logger.HashString(env.Get(envK, ""))
}
scfg := &ServerSystemConfig{NEndpoints: globalEndpoints.NEndpoints(), MinioEnv: envValues}
scfg := &ServerSystemConfig{NEndpoints: globalEndpoints.NEndpoints(), MinioEnv: envValues, Checksum: binaryChecksum}
var cmdLines []string
for _, ep := range globalEndpoints {
cmdLines = append(cmdLines, ep.CmdLine)
@@ -167,6 +179,26 @@ func (client *bootstrapRESTClient) String() string {
return client.gridConn.String()
}
var binaryChecksum = getBinaryChecksum()
func getBinaryChecksum() string {
mw := md5.New()
binPath, err := os.Executable()
if err != nil {
logger.Error("Calculating checksum failed: %s", err)
return "00000000000000000000000000000000"
}
b, err := os.Open(binPath)
if err != nil {
logger.Error("Calculating checksum failed: %s", err)
return "00000000000000000000000000000000"
}
defer b.Close()
io.Copy(mw, b)
return hex.EncodeToString(mw.Sum(nil))
}
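getBinaryChecksum hashes the running binary once at process start and memoizes the hex digest in binaryChecksum, which Diff then compares across peers. A standalone sketch of the same stdlib flow, here also checking the io.Copy error that the code above drops:

package main

import (
	"crypto/md5"
	"encoding/hex"
	"fmt"
	"io"
	"os"
)

func main() {
	binPath, err := os.Executable()
	if err != nil {
		fmt.Fprintln(os.Stderr, "locate binary:", err)
		return
	}
	f, err := os.Open(binPath)
	if err != nil {
		fmt.Fprintln(os.Stderr, "open binary:", err)
		return
	}
	defer f.Close()
	h := md5.New()
	if _, err := io.Copy(h, f); err != nil {
		fmt.Fprintln(os.Stderr, "hash binary:", err)
		return
	}
	fmt.Println(hex.EncodeToString(h.Sum(nil)))
}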
func verifyServerSystemConfig(ctx context.Context, endpointServerPools EndpointServerPools, gm *grid.Manager) error {
srcCfg := getServerSystemCfg()
clnts := newBootstrapRESTClients(endpointServerPools, gm)
@@ -196,7 +228,7 @@ func verifyServerSystemConfig(ctx context.Context, endpointServerPools EndpointS
err := clnt.Verify(ctx, srcCfg)
mu.Lock()
if err != nil {
bootstrapTraceMsg(fmt.Sprintf("clnt.Verify: %v, endpoint: %s", err, clnt))
bootstrapTraceMsg(fmt.Sprintf("bootstrapVerify: %v, endpoint: %s", err, clnt))
if !isNetworkError(err) {
bootLogOnceIf(context.Background(), fmt.Errorf("%s has incorrect configuration: %w", clnt, err), "incorrect_"+clnt.String())
incorrectConfigs = append(incorrectConfigs, fmt.Errorf("%s has incorrect configuration: %w", clnt, err))


@@ -79,6 +79,12 @@ func (z *ServerSystemConfig) DecodeMsg(dc *msgp.Reader) (err error) {
}
z.MinioEnv[za0002] = za0003
}
case "Checksum":
z.Checksum, err = dc.ReadString()
if err != nil {
err = msgp.WrapError(err, "Checksum")
return
}
default:
err = dc.Skip()
if err != nil {
@@ -92,9 +98,9 @@ func (z *ServerSystemConfig) DecodeMsg(dc *msgp.Reader) (err error) {
// EncodeMsg implements msgp.Encodable
func (z *ServerSystemConfig) EncodeMsg(en *msgp.Writer) (err error) {
// map header, size 3
// map header, size 4
// write "NEndpoints"
err = en.Append(0x83, 0xaa, 0x4e, 0x45, 0x6e, 0x64, 0x70, 0x6f, 0x69, 0x6e, 0x74, 0x73)
err = en.Append(0x84, 0xaa, 0x4e, 0x45, 0x6e, 0x64, 0x70, 0x6f, 0x69, 0x6e, 0x74, 0x73)
if err != nil {
return
}
@@ -142,15 +148,25 @@ func (z *ServerSystemConfig) EncodeMsg(en *msgp.Writer) (err error) {
return
}
}
// write "Checksum"
err = en.Append(0xa8, 0x43, 0x68, 0x65, 0x63, 0x6b, 0x73, 0x75, 0x6d)
if err != nil {
return
}
err = en.WriteString(z.Checksum)
if err != nil {
err = msgp.WrapError(err, "Checksum")
return
}
return
}
// MarshalMsg implements msgp.Marshaler
func (z *ServerSystemConfig) MarshalMsg(b []byte) (o []byte, err error) {
o = msgp.Require(b, z.Msgsize())
// map header, size 3
// map header, size 4
// string "NEndpoints"
o = append(o, 0x83, 0xaa, 0x4e, 0x45, 0x6e, 0x64, 0x70, 0x6f, 0x69, 0x6e, 0x74, 0x73)
o = append(o, 0x84, 0xaa, 0x4e, 0x45, 0x6e, 0x64, 0x70, 0x6f, 0x69, 0x6e, 0x74, 0x73)
o = msgp.AppendInt(o, z.NEndpoints)
// string "CmdLines"
o = append(o, 0xa8, 0x43, 0x6d, 0x64, 0x4c, 0x69, 0x6e, 0x65, 0x73)
@@ -165,6 +181,9 @@ func (z *ServerSystemConfig) MarshalMsg(b []byte) (o []byte, err error) {
o = msgp.AppendString(o, za0002)
o = msgp.AppendString(o, za0003)
}
// string "Checksum"
o = append(o, 0xa8, 0x43, 0x68, 0x65, 0x63, 0x6b, 0x73, 0x75, 0x6d)
o = msgp.AppendString(o, z.Checksum)
return
}
@@ -241,6 +260,12 @@ func (z *ServerSystemConfig) UnmarshalMsg(bts []byte) (o []byte, err error) {
}
z.MinioEnv[za0002] = za0003
}
case "Checksum":
z.Checksum, bts, err = msgp.ReadStringBytes(bts)
if err != nil {
err = msgp.WrapError(err, "Checksum")
return
}
default:
bts, err = msgp.Skip(bts)
if err != nil {
@@ -266,5 +291,6 @@ func (z *ServerSystemConfig) Msgsize() (s int) {
s += msgp.StringPrefixSize + len(za0002) + msgp.StringPrefixSize + len(za0003)
}
}
s += 9 + msgp.StringPrefixSize + len(z.Checksum)
return
}
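The generated byte changes follow directly from the MessagePack wire format: a fixmap header is 0x80|n for n ≤ 15 key/value pairs (hence 0x83 → 0x84 once the Checksum field is added), and a fixstr header is 0xa0|len for short strings (0xa8 precedes the 8-byte key "Checksum"). A small sketch with the tinylib msgp append helpers reproduces those bytes:

package main

import (
	"fmt"

	"github.com/tinylib/msgp/msgp"
)

func main() {
	var o []byte
	o = msgp.AppendMapHeader(o, 4) // fixmap of 4 pairs -> single byte 0x84
	o = msgp.AppendString(o, "Checksum")
	fmt.Printf("map header: %#x\n", o[0]) // 0x84, as in the generated code above
	fmt.Printf("key bytes:  % x\n", o[1:]) // 0xa8 (fixstr, length 8) followed by "Checksum"
}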


@@ -30,7 +30,7 @@ import (
"github.com/minio/minio/internal/kms"
"github.com/minio/minio/internal/logger"
"github.com/minio/mux"
"github.com/minio/pkg/v2/policy"
"github.com/minio/pkg/v3/policy"
)
const (
@@ -85,7 +85,7 @@ func (api objectAPIHandlers) PutBucketEncryptionHandler(w http.ResponseWriter, r
kmsKey := encConfig.KeyID()
if kmsKey != "" {
kmsContext := kms.Context{"MinIO admin API": "ServerInfoHandler"} // Context for a test key operation
_, err := GlobalKMS.GenerateKey(ctx, kmsKey, kmsContext)
_, err := GlobalKMS.GenerateKey(ctx, &kms.GenerateKeyRequest{Name: kmsKey, AssociatedData: kmsContext})
if err != nil {
if errors.Is(err, kes.ErrKeyNotFound) {
writeErrorResponse(ctx, w, toAPIError(ctx, errKMSKeyNotFound), r.URL)


@@ -28,6 +28,7 @@ import (
"errors"
"fmt"
"io"
"mime"
"mime/multipart"
"net/http"
"net/textproto"
@@ -51,9 +52,9 @@ import (
sse "github.com/minio/minio/internal/bucket/encryption"
objectlock "github.com/minio/minio/internal/bucket/object/lock"
"github.com/minio/minio/internal/bucket/replication"
"github.com/minio/minio/internal/config/cache"
"github.com/minio/minio/internal/config/dns"
"github.com/minio/minio/internal/crypto"
"github.com/minio/minio/internal/etag"
"github.com/minio/minio/internal/event"
"github.com/minio/minio/internal/handlers"
"github.com/minio/minio/internal/hash"
@@ -61,8 +62,8 @@ import (
"github.com/minio/minio/internal/ioutil"
"github.com/minio/minio/internal/kms"
"github.com/minio/minio/internal/logger"
"github.com/minio/pkg/v2/policy"
"github.com/minio/pkg/v2/sync/errgroup"
"github.com/minio/pkg/v3/policy"
"github.com/minio/pkg/v3/sync/errgroup"
)
const (
@@ -72,6 +73,8 @@ const (
xMinIOErrCodeHeader = "x-minio-error-code"
xMinIOErrDescHeader = "x-minio-error-desc"
postPolicyBucketTagging = "tagging"
)
// Check if there are buckets on server without corresponding entry in etcd backend and
@@ -90,7 +93,7 @@ const (
// -- If IP of the entry doesn't match, this means entry is
//
// for another instance. Log an error to console.
func initFederatorBackend(buckets []BucketInfo, objLayer ObjectLayer) {
func initFederatorBackend(buckets []string, objLayer ObjectLayer) {
if len(buckets) == 0 {
return
}
@@ -111,10 +114,10 @@ func initFederatorBackend(buckets []BucketInfo, objLayer ObjectLayer) {
domainMissing := err == dns.ErrDomainMissing
if dnsBuckets != nil {
for _, bucket := range buckets {
bucketsSet.Add(bucket.Name)
r, ok := dnsBuckets[bucket.Name]
bucketsSet.Add(bucket)
r, ok := dnsBuckets[bucket]
if !ok {
bucketsToBeUpdated.Add(bucket.Name)
bucketsToBeUpdated.Add(bucket)
continue
}
if !globalDomainIPs.Intersection(set.CreateStringSet(getHostsSlice(r)...)).IsEmpty() {
@@ -133,7 +136,7 @@ func initFederatorBackend(buckets []BucketInfo, objLayer ObjectLayer) {
// but if we do see a difference between the local domain IPs and the
// hostSlice from etcd, then we should update with the newer
// domainIPs; we proceed to do that here.
bucketsToBeUpdated.Add(bucket.Name)
bucketsToBeUpdated.Add(bucket)
continue
}
@@ -142,7 +145,7 @@ func initFederatorBackend(buckets []BucketInfo, objLayer ObjectLayer) {
// bucket names are globally unique in federation at a given
// path prefix, name collision is not allowed. We simply log
// an error and continue.
bucketsInConflict.Add(bucket.Name)
bucketsInConflict.Add(bucket)
}
}
@@ -227,7 +230,7 @@ func (api objectAPIHandlers) GetBucketLocationHandler(w http.ResponseWriter, r *
// Generate response.
encodedSuccessResponse := encodeResponse(LocationResponse{})
// Get current region.
region := globalSite.Region
region := globalSite.Region()
if region != globalMinioDefaultRegion {
encodedSuccessResponse = encodeResponse(LocationResponse{
Location: region,
@@ -668,8 +671,6 @@ func (api objectAPIHandlers) DeleteMultipleObjectsHandler(w http.ResponseWriter,
continue
}
defer globalCacheConfig.Delete(bucket, dobj.ObjectName)
if replicateDeletes && (dobj.DeleteMarkerReplicationStatus() == replication.Pending || dobj.VersionPurgeStatus() == Pending) {
// copy so we can re-add null ID.
dobj := dobj
@@ -888,6 +889,30 @@ func (api objectAPIHandlers) PutBucketHandler(w http.ResponseWriter, r *http.Req
})
}
// multipartReader is just like https://pkg.go.dev/net/http#Request.MultipartReader but
// rejects multipart/mixed as it's not supported in the S3 API.
func multipartReader(r *http.Request) (*multipart.Reader, error) {
v := r.Header.Get("Content-Type")
if v == "" {
return nil, http.ErrNotMultipart
}
if r.Body == nil {
return nil, errors.New("missing form body")
}
d, params, err := mime.ParseMediaType(v)
if err != nil {
return nil, http.ErrNotMultipart
}
if d != "multipart/form-data" {
return nil, http.ErrNotMultipart
}
boundary, ok := params["boundary"]
if !ok {
return nil, http.ErrMissingBoundary
}
return multipart.NewReader(r.Body, boundary), nil
}
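multipartReader leans on mime.ParseMediaType to accept only multipart/form-data before handing the body to multipart.NewReader, unlike http.Request.MultipartReader which also accepts multipart/mixed. A short sketch of how the relevant content types parse (illustrative values only):

package main

import (
	"fmt"
	"mime"
)

func main() {
	for _, ct := range []string{
		"multipart/form-data; boundary=xyz", // accepted
		"multipart/mixed; boundary=xyz",     // rejected by the handler above
		"multipart/form-data",               // no boundary -> handler returns http.ErrMissingBoundary
	} {
		d, params, err := mime.ParseMediaType(ct)
		fmt.Printf("%-36s -> type=%q boundary=%q err=%v\n", ct, d, params["boundary"], err)
	}
}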
// PostPolicyBucketHandler - POST policy
// ----------
// This implementation of the POST operation handles object creation with a specified
@@ -921,9 +946,14 @@ func (api objectAPIHandlers) PostPolicyBucketHandler(w http.ResponseWriter, r *h
return
}
if r.ContentLength <= 0 {
writeErrorResponse(ctx, w, errorCodes.ToAPIErr(ErrEmptyRequestBody), r.URL)
return
}
// Here the parameter is the size of the form data that should
// be loaded in memory, with the remainder stored in temporary files.
mp, err := r.MultipartReader()
mp, err := multipartReader(r)
if err != nil {
apiErr := errorCodes.ToAPIErr(ErrMalformedPOSTRequest)
apiErr.Description = fmt.Sprintf("%s (%v)", apiErr.Description, err)
@@ -935,7 +965,7 @@ func (api objectAPIHandlers) PostPolicyBucketHandler(w http.ResponseWriter, r *h
var (
reader io.Reader
fileSize int64 = -1
actualSize int64 = -1
fileName string
fanOutEntries = make([]minio.PutObjectFanOutEntry, 0, 100)
)
@@ -943,6 +973,7 @@ func (api objectAPIHandlers) PostPolicyBucketHandler(w http.ResponseWriter, r *h
maxParts := 1000
// Canonicalize the form values into http.Header.
formValues := make(http.Header)
var headerLen int64
for {
part, err := mp.NextRawPart()
if errors.Is(err, io.EOF) {
@@ -984,7 +1015,7 @@ func (api objectAPIHandlers) PostPolicyBucketHandler(w http.ResponseWriter, r *h
return
}
var b bytes.Buffer
headerLen += int64(len(name)) + int64(len(fileName))
if name != "file" {
if http.CanonicalHeaderKey(name) == http.CanonicalHeaderKey("x-minio-fanout-list") {
dec := json.NewDecoder(part)
@@ -995,7 +1026,7 @@ func (api objectAPIHandlers) PostPolicyBucketHandler(w http.ResponseWriter, r *h
if err := dec.Decode(&m); err != nil {
part.Close()
apiErr := errorCodes.ToAPIErr(ErrMalformedPOSTRequest)
apiErr.Description = fmt.Sprintf("%s (%v)", apiErr.Description, multipart.ErrMessageTooLarge)
apiErr.Description = fmt.Sprintf("%s (%v)", apiErr.Description, err)
writeErrorResponse(ctx, w, apiErr, r.URL)
return
}
@@ -1005,8 +1036,12 @@ func (api objectAPIHandlers) PostPolicyBucketHandler(w http.ResponseWriter, r *h
continue
}
buf := bytebufferpool.Get()
// value, store as string in memory
n, err := io.CopyN(&b, part, maxMemoryBytes+1)
n, err := io.CopyN(buf, part, maxMemoryBytes+1)
value := buf.String()
buf.Reset()
bytebufferpool.Put(buf)
part.Close()
if err != nil && err != io.EOF {
@@ -1028,7 +1063,8 @@ func (api objectAPIHandlers) PostPolicyBucketHandler(w http.ResponseWriter, r *h
writeErrorResponse(ctx, w, apiErr, r.URL)
return
}
formValues[http.CanonicalHeaderKey(name)] = append(formValues[http.CanonicalHeaderKey(name)], b.String())
headerLen += n
formValues[http.CanonicalHeaderKey(name)] = append(formValues[http.CanonicalHeaderKey(name)], value)
continue
}
@@ -1037,6 +1073,21 @@ func (api objectAPIHandlers) PostPolicyBucketHandler(w http.ResponseWriter, r *h
// The file or text content must be the last field in the form.
// You cannot upload more than one file at a time.
reader = part
possibleShardSize := (r.ContentLength - headerLen)
if globalStorageClass.ShouldInline(possibleShardSize, false) { // keep versioned false for this check
var b bytes.Buffer
n, err := io.Copy(&b, reader)
if err != nil {
apiErr := errorCodes.ToAPIErr(ErrMalformedPOSTRequest)
apiErr.Description = fmt.Sprintf("%s (%v)", apiErr.Description, err)
writeErrorResponse(ctx, w, apiErr, r.URL)
return
}
reader = &b
actualSize = n
}
// we have found the File part of the request; we are done processing the multipart form
break
}
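The new branch buffers the file part entirely in memory when the estimated shard size qualifies for inlining, so actualSize is known before the object is written. A hedged sketch of that decision — ShouldInline and the handler types are MinIO internals, so a plain size threshold stands in for them:

package main

import (
	"bytes"
	"fmt"
	"io"
	"strings"
)

// bufferIfSmall drains r into memory when the caller expects at most
// maxInline bytes, so the exact size is known up front; otherwise the
// stream is passed through untouched with an unknown (-1) size.
func bufferIfSmall(r io.Reader, expected, maxInline int64) (io.Reader, int64, error) {
	if expected > maxInline {
		return r, -1, nil
	}
	var b bytes.Buffer
	n, err := io.Copy(&b, r)
	if err != nil {
		return nil, -1, err
	}
	return &b, n, nil
}

func main() {
	r, n, _ := bufferIfSmall(strings.NewReader("hello"), 5, 128)
	data, _ := io.ReadAll(r)
	fmt.Println(n, string(data)) // 5 hello
}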
@@ -1138,11 +1189,33 @@ func (api objectAPIHandlers) PostPolicyBucketHandler(w http.ResponseWriter, r *h
return
}
hashReader, err := hash.NewReader(ctx, reader, fileSize, "", "", fileSize)
clientETag, err := etag.FromContentMD5(formValues)
if err != nil {
writeErrorResponse(ctx, w, errorCodes.ToAPIErr(ErrInvalidDigest), r.URL)
return
}
var forceMD5 []byte
// Optimization: if the client did not send a Content-Md5, use a UUID as the ETag for SSE-C and
// SSE-KMS requests. Optionally this applies to all requests on a server started with `--no-compat`.
kind, _ := crypto.IsRequested(formValues)
if !etag.ContentMD5Requested(formValues) && (kind == crypto.SSEC || kind == crypto.S3KMS || !globalServerCtxt.StrictS3Compat) {
forceMD5 = mustGetUUIDBytes()
}
hashReader, err := hash.NewReaderWithOpts(ctx, reader, hash.Options{
Size: actualSize,
MD5Hex: clientETag.String(),
SHA256Hex: "",
ActualSize: actualSize,
DisableMD5: false,
ForceMD5: forceMD5,
})
if err != nil {
writeErrorResponse(ctx, w, toAPIError(ctx, err), r.URL)
return
}
if checksum != nil && checksum.Valid() {
if err = hashReader.AddChecksumNoTrailer(formValues, false); err != nil {
writeErrorResponse(ctx, w, toAPIError(ctx, err), r.URL)
@@ -1199,9 +1272,9 @@ func (api objectAPIHandlers) PostPolicyBucketHandler(w http.ResponseWriter, r *h
writeErrorResponseHeadersOnly(w, toAPIError(ctx, err))
return
}
opts.WantChecksum = checksum
fanOutOpts := fanOutOptions{Checksum: checksum}
if crypto.Requested(formValues) {
if crypto.SSECopy.IsRequested(r.Header) {
writeErrorResponse(ctx, w, toAPIError(ctx, errInvalidEncryptionParameters), r.URL)
@@ -1246,8 +1319,15 @@ func (api objectAPIHandlers) PostPolicyBucketHandler(w http.ResponseWriter, r *h
writeErrorResponse(ctx, w, toAPIError(ctx, err), r.URL)
return
}
wantSize := int64(-1)
if actualSize >= 0 {
info := ObjectInfo{Size: actualSize}
wantSize = info.EncryptedSize()
}
// do not try to verify encrypted content
hashReader, err = hash.NewReader(ctx, reader, -1, "", "", -1)
hashReader, err = hash.NewReader(ctx, reader, wantSize, "", "", actualSize)
if err != nil {
writeErrorResponse(ctx, w, toAPIError(ctx, err), r.URL)
return
@@ -1327,7 +1407,6 @@ func (api objectAPIHandlers) PostPolicyBucketHandler(w http.ResponseWriter, r *h
Key: objInfo.Name,
Error: errs[i].Error(),
})
eventArgsList = append(eventArgsList, eventArgs{
EventName: event.ObjectCreatedPost,
BucketName: objInfo.Bucket,
@@ -1340,22 +1419,6 @@ func (api objectAPIHandlers) PostPolicyBucketHandler(w http.ResponseWriter, r *h
continue
}
asize, err := objInfo.GetActualSize()
if err != nil {
asize = objInfo.Size
}
globalCacheConfig.Set(&cache.ObjectInfo{
Key: objInfo.Name,
Bucket: objInfo.Bucket,
ETag: getDecryptedETag(formValues, objInfo, false),
ModTime: objInfo.ModTime,
Expires: objInfo.Expires.UTC().Format(http.TimeFormat),
CacheControl: objInfo.CacheControl,
Metadata: cleanReservedKeys(objInfo.UserDefined),
Size: asize,
})
fanOutResp = append(fanOutResp, minio.PutObjectFanOutResponse{
Key: objInfo.Name,
ETag: getDecryptedETag(formValues, objInfo, false),
@@ -1415,6 +1478,19 @@ func (api objectAPIHandlers) PostPolicyBucketHandler(w http.ResponseWriter, r *h
return
}
if formValues.Get(postPolicyBucketTagging) != "" {
tags, err := tags.ParseObjectXML(strings.NewReader(formValues.Get(postPolicyBucketTagging)))
if err != nil {
writeErrorResponse(ctx, w, errorCodes.ToAPIErr(ErrMalformedPOSTRequest), r.URL)
return
}
tagsStr := tags.String()
opts.UserDefined[xhttp.AmzObjectTagging] = tagsStr
} else {
// avoid the user setting an invalid tag using `X-Amz-Tagging`
delete(opts.UserDefined, xhttp.AmzObjectTagging)
}
objInfo, err := objectAPI.PutObject(ctx, bucket, object, pReader, opts)
if err != nil {
writeErrorResponse(ctx, w, toAPIError(ctx, err), r.URL)
@@ -1437,22 +1513,6 @@ func (api objectAPIHandlers) PostPolicyBucketHandler(w http.ResponseWriter, r *h
w.Header().Set(xhttp.Location, obj)
}
asize, err := objInfo.GetActualSize()
if err != nil {
asize = objInfo.Size
}
defer globalCacheConfig.Set(&cache.ObjectInfo{
Key: objInfo.Name,
Bucket: objInfo.Bucket,
ETag: etag,
ModTime: objInfo.ModTime,
Expires: objInfo.ExpiresStr(),
CacheControl: objInfo.CacheControl,
Metadata: cleanReservedKeys(objInfo.UserDefined),
Size: asize,
})
// Notify object created event.
defer sendEvent(eventArgs{
EventName: event.ObjectCreatedPost,
@@ -1700,7 +1760,7 @@ func (api objectAPIHandlers) DeleteBucketHandler(w http.ResponseWriter, r *http.
}
globalNotificationSys.DeleteBucketMetadata(ctx, bucket)
globalReplicationPool.deleteResyncMetadata(ctx, bucket)
globalReplicationPool.Get().deleteResyncMetadata(ctx, bucket)
// Call site replication hook.
replLogIf(ctx, globalSiteReplicationSys.DeleteBucketHook(ctx, bucket, forceDelete))


@@ -32,7 +32,7 @@ import (
// Wrapper for calling RemoveBucket HTTP handler tests for both Erasure multiple disks and single node setup.
func TestRemoveBucketHandler(t *testing.T) {
ExecObjectLayerAPITest(t, testRemoveBucketHandler, []string{"RemoveBucket"})
ExecObjectLayerAPITest(ExecObjectLayerAPITestArgs{t: t, objAPITest: testRemoveBucketHandler, endpoints: []string{"RemoveBucket"}})
}
func testRemoveBucketHandler(obj ObjectLayer, instanceType, bucketName string, apiRouter http.Handler,
@@ -78,7 +78,7 @@ func testRemoveBucketHandler(obj ObjectLayer, instanceType, bucketName string, a
// Wrapper for calling GetBucketPolicy HTTP handler tests for both Erasure multiple disks and single node setup.
func TestGetBucketLocationHandler(t *testing.T) {
ExecObjectLayerAPITest(t, testGetBucketLocationHandler, []string{"GetBucketLocation"})
ExecObjectLayerAPITest(ExecObjectLayerAPITestArgs{t: t, objAPITest: testGetBucketLocationHandler, endpoints: []string{"GetBucketLocation"}})
}
func testGetBucketLocationHandler(obj ObjectLayer, instanceType, bucketName string, apiRouter http.Handler,
@@ -220,7 +220,7 @@ func testGetBucketLocationHandler(obj ObjectLayer, instanceType, bucketName stri
// Wrapper for calling HeadBucket HTTP handler tests for both Erasure multiple disks and single node setup.
func TestHeadBucketHandler(t *testing.T) {
ExecObjectLayerAPITest(t, testHeadBucketHandler, []string{"HeadBucket"})
ExecObjectLayerAPITest(ExecObjectLayerAPITestArgs{t: t, objAPITest: testHeadBucketHandler, endpoints: []string{"HeadBucket"}})
}
func testHeadBucketHandler(obj ObjectLayer, instanceType, bucketName string, apiRouter http.Handler,
@@ -322,7 +322,7 @@ func testHeadBucketHandler(obj ObjectLayer, instanceType, bucketName string, api
// Wrapper for calling TestListMultipartUploadsHandler tests for both Erasure multiple disks and single node setup.
func TestListMultipartUploadsHandler(t *testing.T) {
ExecObjectLayerAPITest(t, testListMultipartUploadsHandler, []string{"ListMultipartUploads"})
ExecObjectLayerAPITest(ExecObjectLayerAPITestArgs{t: t, objAPITest: testListMultipartUploadsHandler, endpoints: []string{"ListMultipartUploads"}})
}
// testListMultipartUploadsHandler - Tests validate listing of multipart uploads.
@@ -558,7 +558,7 @@ func testListMultipartUploadsHandler(obj ObjectLayer, instanceType, bucketName s
// Wrapper for calling TestListBucketsHandler tests for both Erasure multiple disks and single node setup.
func TestListBucketsHandler(t *testing.T) {
ExecObjectLayerAPITest(t, testListBucketsHandler, []string{"ListBuckets"})
ExecObjectLayerAPITest(ExecObjectLayerAPITestArgs{t: t, objAPITest: testListBucketsHandler, endpoints: []string{"ListBuckets"}})
}
// testListBucketsHandler - Tests validate listing of buckets.
@@ -649,7 +649,7 @@ func testListBucketsHandler(obj ObjectLayer, instanceType, bucketName string, ap
// Wrapper for calling DeleteMultipleObjects HTTP handler tests for both Erasure multiple disks and single node setup.
func TestAPIDeleteMultipleObjectsHandler(t *testing.T) {
ExecObjectLayerAPITest(t, testAPIDeleteMultipleObjectsHandler, []string{"DeleteMultipleObjects", "PutBucketPolicy"})
ExecObjectLayerAPITest(ExecObjectLayerAPITestArgs{t: t, objAPITest: testAPIDeleteMultipleObjectsHandler, endpoints: []string{"DeleteMultipleObjects", "PutBucketPolicy"}})
}
func testAPIDeleteMultipleObjectsHandler(obj ObjectLayer, instanceType, bucketName string, apiRouter http.Handler,


@@ -17,7 +17,11 @@
package cmd
import "github.com/minio/minio/internal/bucket/lifecycle"
import (
"strconv"
"github.com/minio/minio/internal/bucket/lifecycle"
)
//go:generate stringer -type lcEventSrc -trimprefix lcEventSrc_ $GOFILE
type lcEventSrc uint8
@@ -43,7 +47,7 @@ type lcAuditEvent struct {
source lcEventSrc
}
func (lae lcAuditEvent) Tags() map[string]interface{} {
func (lae lcAuditEvent) Tags() map[string]string {
event := lae.Event
src := lae.source
const (
@@ -55,7 +59,7 @@ func (lae lcAuditEvent) Tags() map[string]interface{} {
ilmNewerNoncurrentVersions = "ilm-newer-noncurrent-versions"
ilmNoncurrentDays = "ilm-noncurrent-days"
)
tags := make(map[string]interface{}, 5)
tags := make(map[string]string, 5)
if src > lcEventSrc_None {
tags[ilmSrc] = src.String()
}
@@ -63,7 +67,7 @@ func (lae lcAuditEvent) Tags() map[string]interface{} {
tags[ilmRuleID] = event.RuleID
if !event.Due.IsZero() {
tags[ilmDue] = event.Due
tags[ilmDue] = event.Due.Format(iso8601Format)
}
// rule with Transition/NoncurrentVersionTransition in effect
@@ -73,10 +77,10 @@ func (lae lcAuditEvent) Tags() map[string]interface{} {
// rule with NewerNoncurrentVersions in effect
if event.NewerNoncurrentVersions > 0 {
tags[ilmNewerNoncurrentVersions] = event.NewerNoncurrentVersions
tags[ilmNewerNoncurrentVersions] = strconv.Itoa(event.NewerNoncurrentVersions)
}
if event.NoncurrentDays > 0 {
tags[ilmNoncurrentDays] = event.NoncurrentDays
tags[ilmNoncurrentDays] = strconv.Itoa(event.NoncurrentDays)
}
return tags
}
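With Tags() now returning map[string]string, every non-string value must be formatted explicitly, as the diff does with strconv.Itoa for counts and a fixed layout for the due time. A tiny sketch of the same conversions (the iso8601Format layout below is an assumption for illustration):

package main

import (
	"fmt"
	"strconv"
	"time"
)

func main() {
	const iso8601Format = "2006-01-02T15:04:05.000Z" // assumed layout
	tags := map[string]string{
		"ilm-due":                       time.Now().UTC().Format(iso8601Format),
		"ilm-newer-noncurrent-versions": strconv.Itoa(3),
	}
	fmt.Println(tags)
}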


@@ -28,7 +28,7 @@ import (
xhttp "github.com/minio/minio/internal/http"
"github.com/minio/minio/internal/logger"
"github.com/minio/mux"
"github.com/minio/pkg/v2/policy"
"github.com/minio/pkg/v3/policy"
)
const (
@@ -64,7 +64,8 @@ func (api objectAPIHandlers) PutBucketLifecycleHandler(w http.ResponseWriter, r
}
// Check if bucket exists.
if _, err := objAPI.GetBucketInfo(ctx, bucket, BucketOptions{}); err != nil {
rcfg, err := globalBucketObjectLockSys.Get(bucket)
if err != nil {
writeErrorResponse(ctx, w, toAPIError(ctx, err), r.URL)
return
}
@@ -76,7 +77,7 @@ func (api objectAPIHandlers) PutBucketLifecycleHandler(w http.ResponseWriter, r
}
// Validate the received bucket policy document
if err = bucketLifecycle.Validate(); err != nil {
if err = bucketLifecycle.Validate(rcfg); err != nil {
writeErrorResponse(ctx, w, toAPIError(ctx, err), r.URL)
return
}


@@ -29,7 +29,7 @@ import (
// Test S3 Bucket lifecycle APIs with wrong credentials
func TestBucketLifecycleWrongCredentials(t *testing.T) {
ExecObjectLayerAPITest(t, testBucketLifecycleHandlersWrongCredentials, []string{"GetBucketLifecycle", "PutBucketLifecycle", "DeleteBucketLifecycle"})
ExecObjectLayerAPITest(ExecObjectLayerAPITestArgs{t: t, objAPITest: testBucketLifecycleHandlersWrongCredentials, endpoints: []string{"GetBucketLifecycle", "PutBucketLifecycle", "DeleteBucketLifecycle"}})
}
// Test for authentication
@@ -145,7 +145,7 @@ func testBucketLifecycleHandlersWrongCredentials(obj ObjectLayer, instanceType,
// Test S3 Bucket lifecycle APIs
func TestBucketLifecycle(t *testing.T) {
ExecObjectLayerAPITest(t, testBucketLifecycleHandlers, []string{"GetBucketLifecycle", "PutBucketLifecycle", "DeleteBucketLifecycle"})
ExecObjectLayerAPITest(ExecObjectLayerAPITestArgs{t: t, objAPITest: testBucketLifecycleHandlers, endpoints: []string{"GetBucketLifecycle", "PutBucketLifecycle", "DeleteBucketLifecycle"}})
}
// Simple tests of bucket lifecycle: PUT, GET, DELETE.


@@ -40,7 +40,7 @@ import (
xhttp "github.com/minio/minio/internal/http"
"github.com/minio/minio/internal/logger"
"github.com/minio/minio/internal/s3select"
xnet "github.com/minio/pkg/v2/net"
xnet "github.com/minio/pkg/v3/net"
"github.com/zeebo/xxh3"
)
@@ -71,7 +71,8 @@ func NewLifecycleSys() *LifecycleSys {
return &LifecycleSys{}
}
func ilmTrace(startTime time.Time, duration time.Duration, oi ObjectInfo, event string) madmin.TraceInfo {
func ilmTrace(startTime time.Time, duration time.Duration, oi ObjectInfo, event string, metadata map[string]string, err string) madmin.TraceInfo {
sz, _ := oi.GetActualSize()
return madmin.TraceInfo{
TraceType: madmin.TraceILM,
Time: startTime,
@@ -79,18 +80,23 @@ func ilmTrace(startTime time.Time, duration time.Duration, oi ObjectInfo, event
FuncName: event,
Duration: duration,
Path: pathJoin(oi.Bucket, oi.Name),
Error: "",
Bytes: sz,
Error: err,
Message: getSource(4),
Custom: map[string]string{"version-id": oi.VersionID},
Custom: metadata,
}
}
func (sys *LifecycleSys) trace(oi ObjectInfo) func(event string) {
func (sys *LifecycleSys) trace(oi ObjectInfo) func(event string, metadata map[string]string, err error) {
startTime := time.Now()
return func(event string) {
return func(event string, metadata map[string]string, err error) {
duration := time.Since(startTime)
if globalTrace.NumSubscribers(madmin.TraceILM) > 0 {
globalTrace.Publish(ilmTrace(startTime, duration, oi, event))
e := ""
if err != nil {
e = err.Error()
}
globalTrace.Publish(ilmTrace(startTime, duration, oi, event, metadata, e))
}
}
}
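trace() returns a closure that captures the start time, so the caller times an operation simply by invoking the returned function when it finishes — now with metadata and an error attached. A minimal sketch of that closure pattern:

package main

import (
	"fmt"
	"time"
)

// traceFn captures the start time and reports the elapsed duration and
// error when the returned function is called, mirroring trace() above.
func traceFn(op string) func(err error) {
	start := time.Now()
	return func(err error) {
		e := ""
		if err != nil {
			e = err.Error()
		}
		fmt.Printf("op=%s duration=%s err=%q\n", op, time.Since(start), e)
	}
}

func main() {
	done := traceFn("ilm-expiry")
	time.Sleep(10 * time.Millisecond) // the traced work
	done(nil)
}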
@@ -277,6 +283,10 @@ func (es *expiryState) getWorkerCh(h uint64) chan<- expiryOp {
}
func (es *expiryState) ResizeWorkers(n int) {
if n == 0 {
n = 100
}
// Lock to avoid multiple resizes happening at the same time.
es.mu.Lock()
defer es.mu.Unlock()
@@ -342,7 +352,7 @@ func (es *expiryState) Worker(input <-chan expiryOp) {
traceFn := globalLifecycleSys.trace(oi)
if !oi.TransitionedObject.FreeVersion {
// nothing to be done
return
continue
}
ignoreNotFoundErr := func(err error) error {
@@ -356,7 +366,8 @@ func (es *expiryState) Worker(input <-chan expiryOp) {
err := deleteObjectFromRemoteTier(es.ctx, oi.TransitionedObject.Name, oi.TransitionedObject.VersionID, oi.TransitionedObject.Tier)
if ignoreNotFoundErr(err) != nil {
transitionLogIf(es.ctx, err)
return
traceFn(ILMFreeVersionDelete, nil, err)
continue
}
// Remove this free version
@@ -538,6 +549,10 @@ func (t *transitionState) UpdateWorkers(n int) {
}
func (t *transitionState) updateWorkers(n int) {
if n == 0 {
n = 100
}
for t.numWorkers < n {
go t.worker(t.objAPI)
t.numWorkers++
@@ -573,6 +588,10 @@ func enqueueTransitionImmediate(obj ObjectInfo, src lcEventSrc) {
if lc, err := globalLifecycleSys.Get(obj.Bucket); err == nil {
switch event := lc.Eval(obj.ToLifecycleOpts()); event.Action {
case lifecycle.TransitionAction, lifecycle.TransitionVersionAction:
if obj.DeleteMarker || obj.IsDir {
// nothing to transition
return
}
globalTransitionState.queueTransitionTask(obj, event, src)
}
}
@@ -663,11 +682,12 @@ func genTransitionObjName(bucket string) (string, error) {
// is moved to the transition tier. Note that in the case of encrypted objects, the entire encrypted stream is moved
// to the transition tier without decrypting or re-encrypting.
func transitionObject(ctx context.Context, objectAPI ObjectLayer, oi ObjectInfo, lae lcAuditEvent) (err error) {
timeILM := globalScannerMetrics.timeILM(lae.Action)
defer func() {
if err != nil {
return
}
globalScannerMetrics.timeILM(lae.Action)(1)
timeILM(1)
}()
opts := ObjectOptions{
@@ -692,6 +712,11 @@ type auditTierOp struct {
Error string `json:"error,omitempty"`
}
func (op auditTierOp) String() string {
// flattening the auditTierOp{} for audit
return fmt.Sprintf("tier:%s,respNS:%d,tx:%d,err:%s", op.Tier, op.TimeToResponseNS, op.OutputBytes, op.Error)
}
func auditTierActions(ctx context.Context, tier string, bytes int64) func(err error) {
startTime := time.Now()
return func(err error) {
@@ -715,7 +740,7 @@ func auditTierActions(ctx context.Context, tier string, bytes int64) func(err er
globalTierMetrics.logFailure(tier)
}
logger.GetReqInfo(ctx).AppendTags("tierStats", op)
logger.GetReqInfo(ctx).AppendTags("tierStats", op.String())
}
}
@@ -726,7 +751,7 @@ func getTransitionedObjectReader(ctx context.Context, bucket, object string, rs
return nil, fmt.Errorf("transition storage class not configured: %w", err)
}
fn, off, length, err := NewGetObjectReader(rs, oi, opts)
fn, off, length, err := NewGetObjectReader(rs, oi, opts, h)
if err != nil {
return nil, ErrorRespToObjectError(err, bucket, object)
}
@@ -983,7 +1008,7 @@ func ongoingRestoreObj() restoreObjStatus {
}
}
// completeRestoreObj constructs restoreObjStatus for a completed restore-object with given expiry.
// completedRestoreObj constructs restoreObjStatus for a completed restore-object with given expiry.
func completedRestoreObj(expiry time.Time) restoreObjStatus {
return restoreObjStatus{
ongoing: false,


@@ -26,7 +26,7 @@ import (
"github.com/minio/minio/internal/logger"
"github.com/minio/mux"
"github.com/minio/pkg/v2/policy"
"github.com/minio/pkg/v3/policy"
)
// Validate all the ListObjects query arguments, returns an APIErrorCode
@@ -124,7 +124,7 @@ func (api objectAPIHandlers) listObjectVersionsHandler(w http.ResponseWriter, r
writeErrorResponse(ctx, w, toAPIError(ctx, err), r.URL)
return
}
response := generateListVersionsResponse(bucket, prefix, marker, versionIDMarker, delimiter, encodingType, maxkeys, listObjectVersionsInfo, checkObjMeta)
response := generateListVersionsResponse(ctx, bucket, prefix, marker, versionIDMarker, delimiter, encodingType, maxkeys, listObjectVersionsInfo, checkObjMeta)
// Write success response.
writeSuccessResponseXML(w, encodeResponseList(response))
@@ -202,7 +202,7 @@ func (api objectAPIHandlers) listObjectsV2Handler(ctx context.Context, w http.Re
if r.Header.Get(xMinIOExtract) == "true" && strings.Contains(prefix, archivePattern) {
// Initiate a list objects operation inside a zip file based in the input params
listObjectsV2Info, err = listObjectsV2InArchive(ctx, objectAPI, bucket, prefix, token, delimiter, maxKeys, fetchOwner, startAfter)
listObjectsV2Info, err = listObjectsV2InArchive(ctx, objectAPI, bucket, prefix, token, delimiter, maxKeys, startAfter, r.Header)
} else {
// Initiate a list objects operation based on the input params.
// On success would return back ListObjectsInfo object to be
@@ -219,7 +219,7 @@ func (api objectAPIHandlers) listObjectsV2Handler(ctx context.Context, w http.Re
return
}
response := generateListObjectsV2Response(bucket, prefix, token, listObjectsV2Info.NextContinuationToken, startAfter,
response := generateListObjectsV2Response(ctx, bucket, prefix, token, listObjectsV2Info.NextContinuationToken, startAfter,
delimiter, encodingType, fetchOwner, listObjectsV2Info.IsTruncated,
maxKeys, listObjectsV2Info.Objects, listObjectsV2Info.Prefixes, checkObjMeta)
@@ -231,7 +231,7 @@ func parseRequestToken(token string) (subToken string, nodeIndex int) {
if token == "" {
return token, -1
}
i := strings.Index(token, ":")
i := strings.Index(token, getKeySeparator())
if i < 0 {
return token, -1
}
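parseRequestToken splits a continuation token of the form <subToken><sep><nodeIndex>; the diff only swaps the hard-coded ":" for getKeySeparator(), an internal helper. A sketch with the separator kept literal (the integer parse is assumed, since the hunk elides it):

package main

import (
	"fmt"
	"strconv"
	"strings"
)

func splitToken(token string) (sub string, nodeIndex int) {
	if token == "" {
		return token, -1
	}
	i := strings.Index(token, ":")
	if i < 0 {
		return token, -1
	}
	n, err := strconv.Atoi(token[i+1:])
	if err != nil {
		return token, -1
	}
	return token[:i], n
}

func main() {
	fmt.Println(splitToken("opaque-token:3")) // opaque-token 3
}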
@@ -318,7 +318,7 @@ func (api objectAPIHandlers) ListObjectsV1Handler(w http.ResponseWriter, r *http
return
}
response := generateListObjectsV1Response(bucket, prefix, marker, delimiter, encodingType, maxKeys, listObjectsInfo)
response := generateListObjectsV1Response(ctx, bucket, prefix, marker, delimiter, encodingType, maxKeys, listObjectsInfo)
// Write success response.
writeSuccessResponseXML(w, encodeResponseList(response))


@@ -37,8 +37,9 @@ import (
"github.com/minio/minio/internal/event"
"github.com/minio/minio/internal/kms"
"github.com/minio/minio/internal/logger"
"github.com/minio/pkg/v2/policy"
"github.com/minio/pkg/v2/sync/errgroup"
"github.com/minio/pkg/v3/policy"
"github.com/minio/pkg/v3/sync/errgroup"
"golang.org/x/sync/singleflight"
)
// BucketMetadataSys captures all bucket metadata for a given cluster.
@@ -47,6 +48,7 @@ type BucketMetadataSys struct {
sync.RWMutex
initialized bool
group *singleflight.Group
metadataMap map[string]BucketMetadata
}
@@ -62,6 +64,7 @@ func (sys *BucketMetadataSys) Count() int {
func (sys *BucketMetadataSys) Remove(buckets ...string) {
sys.Lock()
for _, bucket := range buckets {
sys.group.Forget(bucket)
delete(sys.metadataMap, bucket)
globalBucketMonitor.DeleteBucket(bucket)
}
@@ -121,6 +124,7 @@ func (sys *BucketMetadataSys) updateAndParse(ctx context.Context, bucket string,
meta.PolicyConfigUpdatedAt = updatedAt
case bucketNotificationConfig:
meta.NotificationConfigXML = configData
meta.NotificationConfigUpdatedAt = updatedAt
case bucketLifecycleConfig:
meta.LifecycleConfigXML = configData
meta.LifecycleConfigUpdatedAt = updatedAt
@@ -150,12 +154,13 @@ func (sys *BucketMetadataSys) updateAndParse(ctx context.Context, bucket string,
if err != nil {
return updatedAt, fmt.Errorf("Error encrypting bucket target metadata %w", err)
}
meta.BucketTargetsConfigUpdatedAt = updatedAt
meta.BucketTargetsConfigMetaUpdatedAt = updatedAt
default:
return updatedAt, fmt.Errorf("Unknown bucket %s metadata update requested %s", bucket, configFile)
}
err = sys.save(ctx, meta)
return updatedAt, err
return updatedAt, sys.save(ctx, meta)
}
func (sys *BucketMetadataSys) save(ctx context.Context, meta BucketMetadata) error {
@@ -262,6 +267,21 @@ func (sys *BucketMetadataSys) GetVersioningConfig(bucket string) (*versioning.Ve
return meta.versioningConfig, meta.VersioningConfigUpdatedAt, nil
}
// GetBucketPolicy returns configured bucket policy
func (sys *BucketMetadataSys) GetBucketPolicy(bucket string) (*policy.BucketPolicy, time.Time, error) {
meta, _, err := sys.GetConfig(GlobalContext, bucket)
if err != nil {
if errors.Is(err, errConfigNotFound) {
return nil, time.Time{}, BucketPolicyNotFound{Bucket: bucket}
}
return nil, time.Time{}, err
}
if meta.policyConfig == nil {
return nil, time.Time{}, BucketPolicyNotFound{Bucket: bucket}
}
return meta.policyConfig, meta.PolicyConfigUpdatedAt, nil
}
// GetTaggingConfig returns configured tagging config
// The returned object may not be modified.
func (sys *BucketMetadataSys) GetTaggingConfig(bucket string) (*tags.Tags, time.Time, error) {
@@ -392,9 +412,7 @@ func (sys *BucketMetadataSys) GetReplicationConfig(ctx context.Context, bucket s
return nil, time.Time{}, BucketReplicationConfigNotFound{Bucket: bucket}
}
if reloaded {
globalBucketTargetSys.set(BucketInfo{
Name: bucket,
}, meta)
globalBucketTargetSys.set(bucket, meta)
}
return meta.replicationConfig, meta.ReplicationConfigUpdatedAt, nil
}
@@ -413,9 +431,7 @@ func (sys *BucketMetadataSys) GetBucketTargetsConfig(bucket string) (*madmin.Buc
return nil, BucketRemoteTargetNotFound{Bucket: bucket}
}
if reloaded {
globalBucketTargetSys.set(BucketInfo{
Name: bucket,
}, meta)
globalBucketTargetSys.set(bucket, meta)
}
return meta.bucketTargetConfig, nil
}
@@ -455,13 +471,20 @@ func (sys *BucketMetadataSys) GetConfig(ctx context.Context, bucket string) (met
if ok {
return meta, reloaded, nil
}
meta, err = loadBucketMetadata(ctx, objAPI, bucket)
if err != nil {
if !sys.Initialized() {
// bucket metadata not yet initialized
return newBucketMetadata(bucket), reloaded, errBucketMetadataNotInitialized
val, err, _ := sys.group.Do(bucket, func() (val interface{}, err error) {
meta, err = loadBucketMetadata(ctx, objAPI, bucket)
if err != nil {
if !sys.Initialized() {
// bucket metadata not yet initialized
return newBucketMetadata(bucket), errBucketMetadataNotInitialized
}
}
return meta, reloaded, err
return meta, err
})
meta, _ = val.(BucketMetadata)
if err != nil {
return meta, false, err
}
sys.Lock()
sys.metadataMap[bucket] = meta
@@ -471,7 +494,7 @@ func (sys *BucketMetadataSys) GetConfig(ctx context.Context, bucket string) (met
}
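Wrapping loadBucketMetadata in singleflight.Group.Do means concurrent GetConfig calls for the same bucket trigger a single backend load and share its result; Remove calls group.Forget so a deleted bucket is reloaded fresh next time. A minimal sketch of the deduplication:

package main

import (
	"fmt"
	"sync"
	"time"

	"golang.org/x/sync/singleflight"
)

// loadMetadata simulates an expensive backend read.
func loadMetadata(bucket string) (string, error) {
	time.Sleep(50 * time.Millisecond)
	return "metadata-for-" + bucket, nil
}

func main() {
	var g singleflight.Group
	var wg sync.WaitGroup
	for i := 0; i < 4; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			// All four goroutines share one loadMetadata call.
			v, err, shared := g.Do("mybucket", func() (interface{}, error) {
				return loadMetadata("mybucket")
			})
			fmt.Println(v, err, "shared:", shared)
		}()
	}
	wg.Wait()
}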
// Init - initializes bucket metadata system for all buckets.
func (sys *BucketMetadataSys) Init(ctx context.Context, buckets []BucketInfo, objAPI ObjectLayer) error {
func (sys *BucketMetadataSys) Init(ctx context.Context, buckets []string, objAPI ObjectLayer) error {
if objAPI == nil {
return errServerNotInitialized
}
@@ -484,7 +507,7 @@ func (sys *BucketMetadataSys) Init(ctx context.Context, buckets []BucketInfo, ob
}
// concurrently load bucket metadata to speed up the startup sequence.
func (sys *BucketMetadataSys) concurrentLoad(ctx context.Context, buckets []BucketInfo, failedBuckets map[string]struct{}) {
func (sys *BucketMetadataSys) concurrentLoad(ctx context.Context, buckets []string) {
g := errgroup.WithNErrs(len(buckets))
bucketMetas := make([]BucketMetadata, len(buckets))
for index := range buckets {
@@ -494,8 +517,8 @@ func (sys *BucketMetadataSys) concurrentLoad(ctx context.Context, buckets []Buck
// herd upon start up sequence.
time.Sleep(25*time.Millisecond + time.Duration(rand.Int63n(int64(100*time.Millisecond))))
_, _ = sys.objAPI.HealBucket(ctx, buckets[index].Name, madmin.HealOpts{Recreate: true})
meta, err := loadBucketMetadata(ctx, sys.objAPI, buckets[index].Name)
_, _ = sys.objAPI.HealBucket(ctx, buckets[index], madmin.HealOpts{Recreate: true})
meta, err := loadBucketMetadata(ctx, sys.objAPI, buckets[index])
if err != nil {
return err
}
@@ -508,7 +531,7 @@ func (sys *BucketMetadataSys) concurrentLoad(ctx context.Context, buckets []Buck
for index, err := range errs {
if err != nil {
internalLogOnceIf(ctx, fmt.Errorf("Unable to load bucket metadata, will be retried: %w", err),
"load-bucket-metadata-"+buckets[index].Name, logger.WarningKind)
"load-bucket-metadata-"+buckets[index], logger.WarningKind)
}
}
@@ -519,16 +542,12 @@ func (sys *BucketMetadataSys) concurrentLoad(ctx context.Context, buckets []Buck
if errs[i] != nil {
continue
}
sys.metadataMap[buckets[i].Name] = meta
sys.metadataMap[buckets[i]] = meta
}
sys.Unlock()
for i, meta := range bucketMetas {
if errs[i] != nil {
if failedBuckets == nil {
failedBuckets = make(map[string]struct{})
}
failedBuckets[buckets[i].Name] = struct{}{}
continue
}
globalEventNotifier.set(buckets[i], meta) // set notification targets
@@ -536,7 +555,7 @@ func (sys *BucketMetadataSys) concurrentLoad(ctx context.Context, buckets []Buck
}
}
func (sys *BucketMetadataSys) refreshBucketsMetadataLoop(ctx context.Context, failedBuckets map[string]struct{}) {
func (sys *BucketMetadataSys) refreshBucketsMetadataLoop(ctx context.Context) {
const bucketMetadataRefresh = 15 * time.Minute
sleeper := newDynamicSleeper(2, 150*time.Millisecond, false)
@@ -548,7 +567,7 @@ func (sys *BucketMetadataSys) refreshBucketsMetadataLoop(ctx context.Context, fa
case <-ctx.Done():
return
case <-t.C:
buckets, err := sys.objAPI.ListBuckets(ctx, BucketOptions{})
buckets, err := sys.objAPI.ListBuckets(ctx, BucketOptions{NoMetadata: true})
if err != nil {
internalLogIf(ctx, err, logger.WarningKind)
break
@@ -566,7 +585,10 @@ func (sys *BucketMetadataSys) refreshBucketsMetadataLoop(ctx context.Context, fa
for i := range buckets {
wait := sleeper.Timer(ctx)
meta, err := loadBucketMetadata(ctx, sys.objAPI, buckets[i].Name)
bucket := buckets[i].Name
updated := false
meta, err := loadBucketMetadata(ctx, sys.objAPI, bucket)
if err != nil {
internalLogIf(ctx, err, logger.WarningKind)
wait() // wait to proceed to next entry.
@@ -574,14 +596,16 @@ func (sys *BucketMetadataSys) refreshBucketsMetadataLoop(ctx context.Context, fa
}
sys.Lock()
sys.metadataMap[buckets[i].Name] = meta
// Update if the bucket metadata in memory is older than the on-disk one
if lu := sys.metadataMap[bucket].lastUpdate(); lu.Before(meta.lastUpdate()) {
updated = true
sys.metadataMap[bucket] = meta
}
sys.Unlock()
// Initialize the failed buckets
if _, ok := failedBuckets[buckets[i].Name]; ok {
globalEventNotifier.set(buckets[i], meta)
globalBucketTargetSys.set(buckets[i], meta)
delete(failedBuckets, buckets[i].Name)
if updated {
globalEventNotifier.set(bucket, meta)
globalBucketTargetSys.set(bucket, meta)
}
wait() // wait to proceed to next entry.
@@ -600,15 +624,14 @@ func (sys *BucketMetadataSys) Initialized() bool {
}
// Loads bucket metadata for all buckets into BucketMetadataSys.
func (sys *BucketMetadataSys) init(ctx context.Context, buckets []BucketInfo) {
count := 100 // load 100 bucket metadata at a time.
failedBuckets := make(map[string]struct{})
func (sys *BucketMetadataSys) init(ctx context.Context, buckets []string) {
count := globalEndpoints.ESCount() * 10
for {
if len(buckets) < count {
sys.concurrentLoad(ctx, buckets, failedBuckets)
sys.concurrentLoad(ctx, buckets)
break
}
sys.concurrentLoad(ctx, buckets[:count], failedBuckets)
sys.concurrentLoad(ctx, buckets[:count])
buckets = buckets[count:]
}
@@ -617,7 +640,7 @@ func (sys *BucketMetadataSys) init(ctx context.Context, buckets []BucketInfo) {
sys.Unlock()
if globalIsDistErasure {
go sys.refreshBucketsMetadataLoop(ctx, failedBuckets)
go sys.refreshBucketsMetadataLoop(ctx)
}
}
@@ -634,5 +657,6 @@ func (sys *BucketMetadataSys) Reset() {
func NewBucketMetadataSys() *BucketMetadataSys {
return &BucketMetadataSys{
metadataMap: make(map[string]BucketMetadata),
group: &singleflight.Group{},
}
}


@@ -41,7 +41,7 @@ import (
"github.com/minio/minio/internal/fips"
"github.com/minio/minio/internal/kms"
"github.com/minio/minio/internal/logger"
"github.com/minio/pkg/v2/policy"
"github.com/minio/pkg/v3/policy"
"github.com/minio/sio"
)
@@ -81,14 +81,19 @@ type BucketMetadata struct {
ReplicationConfigXML []byte
BucketTargetsConfigJSON []byte
BucketTargetsConfigMetaJSON []byte
PolicyConfigUpdatedAt time.Time
ObjectLockConfigUpdatedAt time.Time
EncryptionConfigUpdatedAt time.Time
TaggingConfigUpdatedAt time.Time
QuotaConfigUpdatedAt time.Time
ReplicationConfigUpdatedAt time.Time
VersioningConfigUpdatedAt time.Time
LifecycleConfigUpdatedAt time.Time
PolicyConfigUpdatedAt time.Time
ObjectLockConfigUpdatedAt time.Time
EncryptionConfigUpdatedAt time.Time
TaggingConfigUpdatedAt time.Time
QuotaConfigUpdatedAt time.Time
ReplicationConfigUpdatedAt time.Time
VersioningConfigUpdatedAt time.Time
LifecycleConfigUpdatedAt time.Time
NotificationConfigUpdatedAt time.Time
BucketTargetsConfigUpdatedAt time.Time
BucketTargetsConfigMetaUpdatedAt time.Time
// Add a new UpdatedAt field and update lastUpdate function
// Unexported fields. Must be updated atomically.
policyConfig *policy.BucketPolicy
@@ -120,6 +125,46 @@ func newBucketMetadata(name string) BucketMetadata {
}
}
// Return the last update of this bucket metadata, i.e. the most
// recent update time across any of its config documents.
func (b BucketMetadata) lastUpdate() (t time.Time) {
if b.PolicyConfigUpdatedAt.After(t) {
t = b.PolicyConfigUpdatedAt
}
if b.ObjectLockConfigUpdatedAt.After(t) {
t = b.ObjectLockConfigUpdatedAt
}
if b.EncryptionConfigUpdatedAt.After(t) {
t = b.EncryptionConfigUpdatedAt
}
if b.TaggingConfigUpdatedAt.After(t) {
t = b.TaggingConfigUpdatedAt
}
if b.QuotaConfigUpdatedAt.After(t) {
t = b.QuotaConfigUpdatedAt
}
if b.ReplicationConfigUpdatedAt.After(t) {
t = b.ReplicationConfigUpdatedAt
}
if b.VersioningConfigUpdatedAt.After(t) {
t = b.VersioningConfigUpdatedAt
}
if b.LifecycleConfigUpdatedAt.After(t) {
t = b.LifecycleConfigUpdatedAt
}
if b.NotificationConfigUpdatedAt.After(t) {
t = b.NotificationConfigUpdatedAt
}
if b.BucketTargetsConfigUpdatedAt.After(t) {
t = b.BucketTargetsConfigUpdatedAt
}
if b.BucketTargetsConfigMetaUpdatedAt.After(t) {
t = b.BucketTargetsConfigMetaUpdatedAt
}
return
}
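lastUpdate is a fold over the per-config timestamps that keeps the most recent one. The same reduction written generically:

package main

import (
	"fmt"
	"time"
)

// latest reduces a set of per-config update times to the most recent,
// the field-by-field fold that lastUpdate performs above.
func latest(times ...time.Time) (t time.Time) {
	for _, u := range times {
		if u.After(t) {
			t = u
		}
	}
	return
}

func main() {
	a := time.Date(2024, 10, 1, 0, 0, 0, 0, time.UTC)
	b := time.Date(2024, 10, 20, 0, 0, 0, 0, time.UTC)
	fmt.Println(latest(a, b)) // 2024-10-20 00:00:00 +0000 UTC
}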
// Versioning returns true if versioning is enabled
func (b BucketMetadata) Versioning() bool {
return b.LockEnabled || (b.versioningConfig != nil && b.versioningConfig.Enabled()) || (b.objectLockConfig != nil && b.objectLockConfig.Enabled())
@@ -440,6 +485,18 @@ func (b *BucketMetadata) defaultTimestamps() {
if b.LifecycleConfigUpdatedAt.IsZero() {
b.LifecycleConfigUpdatedAt = b.Created
}
if b.NotificationConfigUpdatedAt.IsZero() {
b.NotificationConfigUpdatedAt = b.Created
}
if b.BucketTargetsConfigUpdatedAt.IsZero() {
b.BucketTargetsConfigUpdatedAt = b.Created
}
if b.BucketTargetsConfigMetaUpdatedAt.IsZero() {
b.BucketTargetsConfigMetaUpdatedAt = b.Created
}
}
// Save config to supplied ObjectLayer api.
@@ -490,7 +547,7 @@ func encryptBucketMetadata(ctx context.Context, bucket string, input []byte, kms
}
metadata := make(map[string]string)
key, err := GlobalKMS.GenerateKey(ctx, "", kmsContext)
key, err := GlobalKMS.GenerateKey(ctx, &kms.GenerateKeyRequest{AssociatedData: kmsContext})
if err != nil {
return
}
@@ -519,7 +576,11 @@ func decryptBucketMetadata(input []byte, bucket string, meta map[string]string,
if err != nil {
return nil, err
}
extKey, err := GlobalKMS.DecryptKey(keyID, kmsKey, kmsContext)
extKey, err := GlobalKMS.Decrypt(context.TODO(), &kms.DecryptRequest{
Name: keyID,
Ciphertext: kmsKey,
AssociatedData: kmsContext,
})
if err != nil {
return nil, err
}
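Both KMS call sites move from positional arguments to request structs (kms.GenerateKeyRequest, kms.DecryptRequest), which lets the API grow optional fields without breaking every caller. A generic sketch of the idiom — the types here are illustrative, not the real kms package:

package main

import "fmt"

type GenerateKeyRequest struct {
	Name           string
	AssociatedData map[string]string
}

// GenerateKey takes a request struct, so new optional fields can be
// added later without changing this signature.
func GenerateKey(req *GenerateKeyRequest) string {
	return fmt.Sprintf("key %q bound to %d context entries", req.Name, len(req.AssociatedData))
}

func main() {
	fmt.Println(GenerateKey(&GenerateKeyRequest{
		Name:           "my-kms-key",
		AssociatedData: map[string]string{"bucket": "photos"},
	}))
}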


@@ -156,6 +156,24 @@ func (z *BucketMetadata) DecodeMsg(dc *msgp.Reader) (err error) {
err = msgp.WrapError(err, "LifecycleConfigUpdatedAt")
return
}
case "NotificationConfigUpdatedAt":
z.NotificationConfigUpdatedAt, err = dc.ReadTime()
if err != nil {
err = msgp.WrapError(err, "NotificationConfigUpdatedAt")
return
}
case "BucketTargetsConfigUpdatedAt":
z.BucketTargetsConfigUpdatedAt, err = dc.ReadTime()
if err != nil {
err = msgp.WrapError(err, "BucketTargetsConfigUpdatedAt")
return
}
case "BucketTargetsConfigMetaUpdatedAt":
z.BucketTargetsConfigMetaUpdatedAt, err = dc.ReadTime()
if err != nil {
err = msgp.WrapError(err, "BucketTargetsConfigMetaUpdatedAt")
return
}
default:
err = dc.Skip()
if err != nil {
@@ -169,9 +187,9 @@ func (z *BucketMetadata) DecodeMsg(dc *msgp.Reader) (err error) {
// EncodeMsg implements msgp.Encodable
func (z *BucketMetadata) EncodeMsg(en *msgp.Writer) (err error) {
// map header, size 22
// map header, size 25
// write "Name"
err = en.Append(0xde, 0x0, 0x16, 0xa4, 0x4e, 0x61, 0x6d, 0x65)
err = en.Append(0xde, 0x0, 0x19, 0xa4, 0x4e, 0x61, 0x6d, 0x65)
if err != nil {
return
}
@@ -390,15 +408,45 @@ func (z *BucketMetadata) EncodeMsg(en *msgp.Writer) (err error) {
err = msgp.WrapError(err, "LifecycleConfigUpdatedAt")
return
}
// write "NotificationConfigUpdatedAt"
err = en.Append(0xbb, 0x4e, 0x6f, 0x74, 0x69, 0x66, 0x69, 0x63, 0x61, 0x74, 0x69, 0x6f, 0x6e, 0x43, 0x6f, 0x6e, 0x66, 0x69, 0x67, 0x55, 0x70, 0x64, 0x61, 0x74, 0x65, 0x64, 0x41, 0x74)
if err != nil {
return
}
err = en.WriteTime(z.NotificationConfigUpdatedAt)
if err != nil {
err = msgp.WrapError(err, "NotificationConfigUpdatedAt")
return
}
// write "BucketTargetsConfigUpdatedAt"
err = en.Append(0xbc, 0x42, 0x75, 0x63, 0x6b, 0x65, 0x74, 0x54, 0x61, 0x72, 0x67, 0x65, 0x74, 0x73, 0x43, 0x6f, 0x6e, 0x66, 0x69, 0x67, 0x55, 0x70, 0x64, 0x61, 0x74, 0x65, 0x64, 0x41, 0x74)
if err != nil {
return
}
err = en.WriteTime(z.BucketTargetsConfigUpdatedAt)
if err != nil {
err = msgp.WrapError(err, "BucketTargetsConfigUpdatedAt")
return
}
// write "BucketTargetsConfigMetaUpdatedAt"
err = en.Append(0xd9, 0x20, 0x42, 0x75, 0x63, 0x6b, 0x65, 0x74, 0x54, 0x61, 0x72, 0x67, 0x65, 0x74, 0x73, 0x43, 0x6f, 0x6e, 0x66, 0x69, 0x67, 0x4d, 0x65, 0x74, 0x61, 0x55, 0x70, 0x64, 0x61, 0x74, 0x65, 0x64, 0x41, 0x74)
if err != nil {
return
}
err = en.WriteTime(z.BucketTargetsConfigMetaUpdatedAt)
if err != nil {
err = msgp.WrapError(err, "BucketTargetsConfigMetaUpdatedAt")
return
}
return
}
// MarshalMsg implements msgp.Marshaler
func (z *BucketMetadata) MarshalMsg(b []byte) (o []byte, err error) {
o = msgp.Require(b, z.Msgsize())
// map header, size 22
// map header, size 25
// string "Name"
o = append(o, 0xde, 0x0, 0x16, 0xa4, 0x4e, 0x61, 0x6d, 0x65)
o = append(o, 0xde, 0x0, 0x19, 0xa4, 0x4e, 0x61, 0x6d, 0x65)
o = msgp.AppendString(o, z.Name)
// string "Created"
o = append(o, 0xa7, 0x43, 0x72, 0x65, 0x61, 0x74, 0x65, 0x64)
@@ -463,6 +511,15 @@ func (z *BucketMetadata) MarshalMsg(b []byte) (o []byte, err error) {
// string "LifecycleConfigUpdatedAt"
o = append(o, 0xb8, 0x4c, 0x69, 0x66, 0x65, 0x63, 0x79, 0x63, 0x6c, 0x65, 0x43, 0x6f, 0x6e, 0x66, 0x69, 0x67, 0x55, 0x70, 0x64, 0x61, 0x74, 0x65, 0x64, 0x41, 0x74)
o = msgp.AppendTime(o, z.LifecycleConfigUpdatedAt)
// string "NotificationConfigUpdatedAt"
o = append(o, 0xbb, 0x4e, 0x6f, 0x74, 0x69, 0x66, 0x69, 0x63, 0x61, 0x74, 0x69, 0x6f, 0x6e, 0x43, 0x6f, 0x6e, 0x66, 0x69, 0x67, 0x55, 0x70, 0x64, 0x61, 0x74, 0x65, 0x64, 0x41, 0x74)
o = msgp.AppendTime(o, z.NotificationConfigUpdatedAt)
// string "BucketTargetsConfigUpdatedAt"
o = append(o, 0xbc, 0x42, 0x75, 0x63, 0x6b, 0x65, 0x74, 0x54, 0x61, 0x72, 0x67, 0x65, 0x74, 0x73, 0x43, 0x6f, 0x6e, 0x66, 0x69, 0x67, 0x55, 0x70, 0x64, 0x61, 0x74, 0x65, 0x64, 0x41, 0x74)
o = msgp.AppendTime(o, z.BucketTargetsConfigUpdatedAt)
// string "BucketTargetsConfigMetaUpdatedAt"
o = append(o, 0xd9, 0x20, 0x42, 0x75, 0x63, 0x6b, 0x65, 0x74, 0x54, 0x61, 0x72, 0x67, 0x65, 0x74, 0x73, 0x43, 0x6f, 0x6e, 0x66, 0x69, 0x67, 0x4d, 0x65, 0x74, 0x61, 0x55, 0x70, 0x64, 0x61, 0x74, 0x65, 0x64, 0x41, 0x74)
o = msgp.AppendTime(o, z.BucketTargetsConfigMetaUpdatedAt)
return
}
@@ -616,6 +673,24 @@ func (z *BucketMetadata) UnmarshalMsg(bts []byte) (o []byte, err error) {
err = msgp.WrapError(err, "LifecycleConfigUpdatedAt")
return
}
case "NotificationConfigUpdatedAt":
z.NotificationConfigUpdatedAt, bts, err = msgp.ReadTimeBytes(bts)
if err != nil {
err = msgp.WrapError(err, "NotificationConfigUpdatedAt")
return
}
case "BucketTargetsConfigUpdatedAt":
z.BucketTargetsConfigUpdatedAt, bts, err = msgp.ReadTimeBytes(bts)
if err != nil {
err = msgp.WrapError(err, "BucketTargetsConfigUpdatedAt")
return
}
case "BucketTargetsConfigMetaUpdatedAt":
z.BucketTargetsConfigMetaUpdatedAt, bts, err = msgp.ReadTimeBytes(bts)
if err != nil {
err = msgp.WrapError(err, "BucketTargetsConfigMetaUpdatedAt")
return
}
default:
bts, err = msgp.Skip(bts)
if err != nil {
@@ -630,6 +705,6 @@ func (z *BucketMetadata) UnmarshalMsg(bts []byte) (o []byte, err error) {
// Msgsize returns an upper bound estimate of the number of bytes occupied by the serialized message
func (z *BucketMetadata) Msgsize() (s int) {
s = 3 + 5 + msgp.StringPrefixSize + len(z.Name) + 8 + msgp.TimeSize + 12 + msgp.BoolSize + 17 + msgp.BytesPrefixSize + len(z.PolicyConfigJSON) + 22 + msgp.BytesPrefixSize + len(z.NotificationConfigXML) + 19 + msgp.BytesPrefixSize + len(z.LifecycleConfigXML) + 20 + msgp.BytesPrefixSize + len(z.ObjectLockConfigXML) + 20 + msgp.BytesPrefixSize + len(z.VersioningConfigXML) + 20 + msgp.BytesPrefixSize + len(z.EncryptionConfigXML) + 17 + msgp.BytesPrefixSize + len(z.TaggingConfigXML) + 16 + msgp.BytesPrefixSize + len(z.QuotaConfigJSON) + 21 + msgp.BytesPrefixSize + len(z.ReplicationConfigXML) + 24 + msgp.BytesPrefixSize + len(z.BucketTargetsConfigJSON) + 28 + msgp.BytesPrefixSize + len(z.BucketTargetsConfigMetaJSON) + 22 + msgp.TimeSize + 26 + msgp.TimeSize + 26 + msgp.TimeSize + 23 + msgp.TimeSize + 21 + msgp.TimeSize + 27 + msgp.TimeSize + 26 + msgp.TimeSize + 25 + msgp.TimeSize
s = 3 + 5 + msgp.StringPrefixSize + len(z.Name) + 8 + msgp.TimeSize + 12 + msgp.BoolSize + 17 + msgp.BytesPrefixSize + len(z.PolicyConfigJSON) + 22 + msgp.BytesPrefixSize + len(z.NotificationConfigXML) + 19 + msgp.BytesPrefixSize + len(z.LifecycleConfigXML) + 20 + msgp.BytesPrefixSize + len(z.ObjectLockConfigXML) + 20 + msgp.BytesPrefixSize + len(z.VersioningConfigXML) + 20 + msgp.BytesPrefixSize + len(z.EncryptionConfigXML) + 17 + msgp.BytesPrefixSize + len(z.TaggingConfigXML) + 16 + msgp.BytesPrefixSize + len(z.QuotaConfigJSON) + 21 + msgp.BytesPrefixSize + len(z.ReplicationConfigXML) + 24 + msgp.BytesPrefixSize + len(z.BucketTargetsConfigJSON) + 28 + msgp.BytesPrefixSize + len(z.BucketTargetsConfigMetaJSON) + 22 + msgp.TimeSize + 26 + msgp.TimeSize + 26 + msgp.TimeSize + 23 + msgp.TimeSize + 21 + msgp.TimeSize + 27 + msgp.TimeSize + 26 + msgp.TimeSize + 25 + msgp.TimeSize + 28 + msgp.TimeSize + 29 + msgp.TimeSize + 34 + msgp.TimeSize
return
}


@@ -26,7 +26,7 @@ import (
"github.com/minio/minio/internal/event"
"github.com/minio/minio/internal/logger"
"github.com/minio/mux"
"github.com/minio/pkg/v2/policy"
"github.com/minio/pkg/v3/policy"
)
const (
@@ -66,8 +66,9 @@ func (api objectAPIHandlers) GetBucketNotificationHandler(w http.ResponseWriter,
writeErrorResponse(ctx, w, toAPIError(ctx, err), r.URL)
return
}
config.SetRegion(globalSite.Region)
if err = config.Validate(globalSite.Region, globalEventNotifier.targetList); err != nil {
region := globalSite.Region()
config.SetRegion(region)
if err = config.Validate(region, globalEventNotifier.targetList); err != nil {
arnErr, ok := err.(*event.ErrARNNotFound)
if ok {
for i, queue := range config.QueueList {
@@ -134,7 +135,7 @@ func (api objectAPIHandlers) PutBucketNotificationHandler(w http.ResponseWriter,
return
}
config, err := event.ParseConfig(io.LimitReader(r.Body, r.ContentLength), globalSite.Region, globalEventNotifier.targetList)
config, err := event.ParseConfig(io.LimitReader(r.Body, r.ContentLength), globalSite.Region(), globalEventNotifier.targetList)
if err != nil {
apiErr := errorCodes.ToAPIErr(ErrMalformedXML)
if event.IsEventError(err) {


@@ -28,7 +28,7 @@ import (
"github.com/minio/minio/internal/bucket/replication"
xhttp "github.com/minio/minio/internal/http"
"github.com/minio/minio/internal/logger"
"github.com/minio/pkg/v2/policy"
"github.com/minio/pkg/v3/policy"
)
// BucketObjectLockSys - map of bucket and retention configuration.


@@ -27,7 +27,7 @@ import (
"github.com/minio/madmin-go/v3"
"github.com/minio/minio/internal/logger"
"github.com/minio/mux"
"github.com/minio/pkg/v2/policy"
"github.com/minio/pkg/v3/policy"
)
const (


@@ -29,8 +29,8 @@ import (
"testing"
"github.com/minio/minio/internal/auth"
"github.com/minio/pkg/v2/policy"
"github.com/minio/pkg/v2/policy/condition"
"github.com/minio/pkg/v3/policy"
"github.com/minio/pkg/v3/policy/condition"
)
func getAnonReadOnlyBucketPolicy(bucketName string) *policy.BucketPolicy {
@@ -107,7 +107,7 @@ func getAnonWriteOnlyObjectPolicy(bucketName, prefix string) *policy.BucketPolic
// Wrapper for calling Create Bucket and ensure we get one and only one success.
func TestCreateBucket(t *testing.T) {
ExecObjectLayerAPITest(t, testCreateBucket, []string{"MakeBucket"})
ExecObjectLayerAPITest(ExecObjectLayerAPITestArgs{t: t, objAPITest: testCreateBucket, endpoints: []string{"MakeBucket"}})
}
// testCreateBucket - Test for calling Create Bucket and ensure we get one and only one success.
@@ -154,7 +154,7 @@ func testCreateBucket(obj ObjectLayer, instanceType, bucketName string, apiRoute
// Wrapper for calling Put Bucket Policy HTTP handler tests for both Erasure multiple disks and single node setup.
func TestPutBucketPolicyHandler(t *testing.T) {
ExecObjectLayerAPITest(t, testPutBucketPolicyHandler, []string{"PutBucketPolicy"})
ExecObjectLayerAPITest(ExecObjectLayerAPITestArgs{t: t, objAPITest: testPutBucketPolicyHandler, endpoints: []string{"PutBucketPolicy"}})
}
// testPutBucketPolicyHandler - Test for Bucket policy end point.
@@ -373,7 +373,7 @@ func testPutBucketPolicyHandler(obj ObjectLayer, instanceType, bucketName string
// Wrapper for calling Get Bucket Policy HTTP handler tests for both Erasure multiple disks and single node setup.
func TestGetBucketPolicyHandler(t *testing.T) {
ExecObjectLayerAPITest(t, testGetBucketPolicyHandler, []string{"PutBucketPolicy", "GetBucketPolicy"})
ExecObjectLayerAPITest(ExecObjectLayerAPITestArgs{t: t, objAPITest: testGetBucketPolicyHandler, endpoints: []string{"PutBucketPolicy", "GetBucketPolicy"}})
}
// testGetBucketPolicyHandler - Test for end point which fetches the access policy json of the given bucket.
@@ -577,7 +577,7 @@ func testGetBucketPolicyHandler(obj ObjectLayer, instanceType, bucketName string
// Wrapper for calling Delete Bucket Policy HTTP handler tests for both Erasure multiple disks and single node setup.
func TestDeleteBucketPolicyHandler(t *testing.T) {
ExecObjectLayerAPITest(t, testDeleteBucketPolicyHandler, []string{"PutBucketPolicy", "DeleteBucketPolicy"})
ExecObjectLayerAPITest(ExecObjectLayerAPITestArgs{t: t, objAPITest: testDeleteBucketPolicyHandler, endpoints: []string{"PutBucketPolicy", "DeleteBucketPolicy"}})
}
// testDeleteBucketPolicyHandler - Test for Delete bucket policy end point.


@@ -32,7 +32,7 @@ import (
"github.com/minio/minio/internal/handlers"
xhttp "github.com/minio/minio/internal/http"
"github.com/minio/minio/internal/logger"
"github.com/minio/pkg/v2/policy"
"github.com/minio/pkg/v3/policy"
)
// PolicySys - policy subsystem.


@@ -49,8 +49,11 @@ var bucketStorageCache = cachevalue.New[DataUsageInfo]()
func (sys *BucketQuotaSys) Init(objAPI ObjectLayer) {
bucketStorageCache.InitOnce(10*time.Second,
cachevalue.Opts{ReturnLastGood: true, NoWait: true},
func() (DataUsageInfo, error) {
ctx, done := context.WithTimeout(context.Background(), 2*time.Second)
func(ctx context.Context) (DataUsageInfo, error) {
if objAPI == nil {
return DataUsageInfo{}, errServerNotInitialized
}
ctx, done := context.WithTimeout(ctx, 2*time.Second)
defer done()
return loadDataUsageFromBackend(ctx, objAPI)
@@ -59,8 +62,10 @@ func (sys *BucketQuotaSys) Init(objAPI ObjectLayer) {
}
// GetBucketUsageInfo return bucket usage info for a given bucket
func (sys *BucketQuotaSys) GetBucketUsageInfo(bucket string) (BucketUsageInfo, error) {
dui, err := bucketStorageCache.Get()
func (sys *BucketQuotaSys) GetBucketUsageInfo(ctx context.Context, bucket string) BucketUsageInfo {
sys.Init(newObjectLayerFn())
dui, err := bucketStorageCache.GetWithCtx(ctx)
timedout := OperationTimedOut{}
if err != nil && !errors.Is(err, context.DeadlineExceeded) && !errors.As(err, &timedout) {
if len(dui.BucketsUsage) > 0 {
@@ -73,10 +78,10 @@ func (sys *BucketQuotaSys) GetBucketUsageInfo(bucket string) (BucketUsageInfo, e
if len(dui.BucketsUsage) > 0 {
bui, ok := dui.BucketsUsage[bucket]
if ok {
return bui, nil
return bui
}
}
return BucketUsageInfo{}, nil
return BucketUsageInfo{}
}
// parseBucketQuota parses BucketQuota from json
@@ -118,11 +123,7 @@ func (sys *BucketQuotaSys) enforceQuotaHard(ctx context.Context, bucket string,
return BucketQuotaExceeded{Bucket: bucket}
}
bui, err := sys.GetBucketUsageInfo(bucket)
if err != nil {
return err
}
bui := sys.GetBucketUsageInfo(ctx, bucket)
if bui.Size > 0 && ((bui.Size + uint64(size)) >= quotaSize) {
return BucketQuotaExceeded{Bucket: bucket}
}
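GetBucketUsageInfo now initializes the cache lazily and tolerates loader failures by serving the last good snapshot (the ReturnLastGood option above). Since cachevalue is a MinIO-internal package, here is a generic stand-in for that cached-value behavior:

package main

import (
	"context"
	"fmt"
	"sync"
	"time"
)

// cachedValue refreshes at most once per TTL and falls back to the last
// good value when the loader fails, approximating ReturnLastGood.
type cachedValue[T any] struct {
	mu      sync.Mutex
	ttl     time.Duration
	last    T
	lastSet time.Time
	load    func(context.Context) (T, error)
}

func (c *cachedValue[T]) Get(ctx context.Context) (T, error) {
	c.mu.Lock()
	defer c.mu.Unlock()
	if time.Since(c.lastSet) < c.ttl {
		return c.last, nil
	}
	v, err := c.load(ctx)
	if err != nil {
		return c.last, err // serve the last good value alongside the error
	}
	c.last, c.lastSet = v, time.Now()
	return v, nil
}

func main() {
	c := &cachedValue[int]{
		ttl:  10 * time.Second,
		load: func(ctx context.Context) (int, error) { return 42, nil },
	}
	fmt.Println(c.Get(context.Background()))
}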
