Generate a 16-byte IV instead of a 32-byte IV (of which only half was used) when using ChaCha20 to
encrypt tokens; this prevents tokens from becoming malleable.
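As a rough illustration of the change, a fresh random IV of the full expected size can be drawn with `crypto/rand` instead of generating a larger one and truncating it. This is a minimal sketch, not the exact mcs code:
```
package main

import (
    "crypto/rand"
    "fmt"
)

// ivSize follows the 16-byte IV described above; the exact size used by the
// mcs token encryption is treated as an assumption of this sketch.
const ivSize = 16

// newIV returns a fresh random IV of the full size, instead of generating a
// larger one and slicing it in half.
func newIV() ([]byte, error) {
    iv := make([]byte, ivSize)
    if _, err := rand.Read(iv); err != nil {
        return nil, err
    }
    return iv, nil
}

func main() {
    iv, err := newIV()
    if err != nil {
        panic(err)
    }
    fmt.Printf("iv: %x\n", iv)
}
```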
This PR adds the following features:
- Allow the user to provide their own certificate key pair to enable TLS in MinIO
- Allow the user to configure data encryption at rest in MinIO with KES
- Remove the JWT schema for login; Console authentication will instead use encrypted session tokens
Enable TLS between client and MinIO with user-provided certificates
Instead of using the AutoCert feature, the user can now provide `cert` and
`key` via the `tls` object; the values must be valid `x509.Certificate`-formatted
files encoded in `base64`.
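For illustration, the certificate and key files can be base64-encoded before being placed in the `tls` object; a minimal Go sketch, with file names assumed for the example:
```
package main

import (
    "encoding/base64"
    "fmt"
    "os"
)

// encodeFile base64-encodes a PEM file so it can be placed in the
// `tls.crt` / `tls.key` fields of the create-tenant request body.
func encodeFile(path string) (string, error) {
    data, err := os.ReadFile(path)
    if err != nil {
        return "", err
    }
    return base64.StdEncoding.EncodeToString(data), nil
}

func main() {
    crt, err := encodeFile("tls.crt") // illustrative file names
    if err != nil {
        panic(err)
    }
    key, err := encodeFile("tls.key")
    if err != nil {
        panic(err)
    }
    fmt.Println(crt, key)
}
```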
Enable encryption at rest by configuring KES
The user can deploy KES via Console/Operator by defining the encryption
object; AutoCert must be enabled or custom certificates for KES must be
provided. KES supports 3 KMS backends: `Vault`, `AWS KMS` and `Gemalto`;
the KMS must be configured beforehand.
Example of a request body for create-tenant:
```
{
  "name": "honeywell",
  "access_key": "minio",
  "secret_key": "minio123",
  "enable_mcs": false,
  "enable_ssl": false,
  "service_name": "honeywell",
  "zones": [
    {
      "name": "honeywell-zone-1",
      "servers": 1,
      "volumes_per_server": 4,
      "volume_configuration": {
        "size": 256000000,
        "storage_class": "vsan-default-storage-policy"
      }
    }
  ],
  "namespace": "default",
  "tls": {
    "tls.crt": "",
    "tls.key": ""
  },
  "encryption": {
    "server": {
      "tls.crt": "",
      "tls.key": ""
    },
    "client": {
      "tls.crt": "",
      "tls.key": ""
    },
    "vault": {
      "endpoint": "http://vault:8200",
      "prefix": "",
      "approle": {
        "id": "",
        "secret": ""
      }
    }
  }
}
```
Previously every handler function received the session token in the
form of a JWT string; as a consequence, every time we wanted to access the
encrypted claims of the JWT we needed to run a decryption process.
Additionally, we were decrypting the JWT twice, first at session
validation and then inside each handler function; this was also causing a
lot of issues related to the merge between m3 and mcs.
What changed:
Now we validate and decrypt the JWT once in `configure_mcs.go`; this
works for both mcs (console) and operator sessions. We then pass the
decrypted claims to all the functions that need them, so no further token
validation or decryption is needed.
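A minimal sketch of the decode-once pattern; the type and function names here are illustrative stand-ins, not the actual mcs handlers:
```
package main

import (
    "errors"
    "fmt"
)

// SessionClaims is an illustrative stand-in for the decrypted session claims.
type SessionClaims struct {
    AccessKey    string
    SessionToken string
}

// decryptClaims stands in for the single decryption step performed at
// session validation time (in mcs this happens once in configure_mcs.go).
func decryptClaims(sessionToken string) (*SessionClaims, error) {
    if sessionToken == "" {
        return nil, errors.New("empty session token")
    }
    // ... decrypt and unmarshal the claims here ...
    return &SessionClaims{AccessKey: "minio"}, nil
}

// listBucketsHandler receives the already-decrypted claims instead of the
// raw token, so it does not need to decrypt anything again.
func listBucketsHandler(claims *SessionClaims) {
    fmt.Println("listing buckets for", claims.AccessKey)
}

func main() {
    claims, err := decryptClaims("some-encrypted-session-token")
    if err != nil {
        panic(err)
    }
    listBucketsHandler(claims)
}
```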
`MCS` will authenticate against `Mkube` using bearer tokens via the HTTP
`Authorization` header. The user will provide this token once
in the login form; MCS will validate it against Mkube (list tenants) and,
if it is valid, will generate and return a new MCS session
with encrypted claims (the user's service account token will be inside the
JWT in the data field).
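Conceptually, the validation step looks like the following sketch; the `/api/v1/tenants` path is an assumption for illustration and may not match the real Mkube route:
```
package main

import (
    "fmt"
    "net/http"
)

// validateAgainstMkube performs the "list tenants" call used to check the
// provided service account token. The endpoint path is illustrative.
func validateAgainstMkube(mkubeHost, serviceAccountToken string) error {
    req, err := http.NewRequest(http.MethodGet, mkubeHost+"/api/v1/tenants", nil)
    if err != nil {
        return err
    }
    req.Header.Set("Authorization", "Bearer "+serviceAccountToken)

    resp, err := http.DefaultClient.Do(req)
    if err != nil {
        return err
    }
    defer resp.Body.Close()

    if resp.StatusCode != http.StatusOK {
        return fmt.Errorf("token rejected: %s", resp.Status)
    }
    return nil
}

func main() {
    if err := validateAgainstMkube("http://m3:8787", "<service-account-token>"); err != nil {
        fmt.Println("login failed:", err)
        return
    }
    fmt.Println("token is valid, issuing an encrypted MCS session")
}
```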
Kubernetes
The provided `JWT token` corresponds to the `Kubernetes service account`
that `Mkube` will use to run tasks on behalf of the
user, i.e. list, create, edit, and delete tenants, storage classes, etc.
Development
If you are running mcs in your local environment and wish to make
requests to `Mkube`, you can set `MCS_M3_HOSTNAME`; if
the environment variable is not present, `MCS` will use
`"http://m3:8787"` by default. Additionally, you will need to set the
`MCS_MKUBE_ADMIN_ONLY=on` variable to make MCS display the Mkube UI.
Extract the Service account token and use it with MCS
For local development you can use the JWT associated with the `m3-sa`
service account; you can get the token by running
the following command in your terminal:
```
kubectl get secret $(kubectl get serviceaccount m3-sa -o jsonpath="{.secrets[0].name}") -o jsonpath="{.data.token}" | base64 --decode
```
Then run the mcs server
```
MCS_M3_HOSTNAME=http://localhost:8787 MCS_MKUBE_ADMIN_ONLY=on ./mcs server
```
Self-signed certificates and custom certificate authorities for Mkube
If Mkube uses TLS with a self-signed certificate, or a certificate
issued by a custom certificate authority, you can add those
certificates using the `MCS_M3_SERVER_TLS_CA_CERTIFICATE` env variable:
```
MCS_M3_SERVER_TLS_CA_CERTIFICATE=cert1.pem,cert2.pem,cert3.pem ./mcs server
```
This PR sets the initial version of the ACL for mcs. The idea behind
this is to start applying the principle of least privilege when assigning
policies to users created through mcs. Currently the mcsAdmin policy uses `admin:*`
and `s3:*`, and by default a user with that policy has access to everything; if we want to limit
that, we can create a policy with least privileges.
We need to start validating explicitly whether users have access to a
specific endpoint based on IAM policy actions.
In this first version every endpoint (you can think of it as a page, too)
defines a set of well-defined admin/s3 actions it needs to work properly, i.e.:
```
// corresponds to /groups endpoint used by the groups page
var groupsActionSet = iampolicy.NewActionSet(
    iampolicy.ListGroupsAdminAction,
    iampolicy.AddUserToGroupAdminAction,
    //iampolicy.GetGroupAdminAction,
    iampolicy.EnableGroupAdminAction,
    iampolicy.DisableGroupAdminAction,
)

// corresponds to /policies endpoint used by the policies page
var iamPoliciesActionSet = iampolicy.NewActionSet(
    iampolicy.GetPolicyAdminAction,
    iampolicy.DeletePolicyAdminAction,
    iampolicy.CreatePolicyAdminAction,
    iampolicy.AttachPolicyAdminAction,
    iampolicy.ListUserPoliciesAdminAction,
)
```
With that said, for this initial version the sessions endpoint will now
return a list of authorized pages to be rendered on the UI; in subsequent
PRs we will add this authorization verification via a server
middleware.
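A simplified sketch of how authorized pages can be derived from the user's allowed actions, using plain maps instead of the real `iampolicy` types; all names here are illustrative:
```
package main

import "fmt"

// endpointActions maps an endpoint (page) to the admin/s3 actions it needs.
// In mcs these are iampolicy action sets; plain strings are used for brevity.
var endpointActions = map[string][]string{
    "/groups":   {"admin:ListGroups", "admin:AddUserToGroup"},
    "/policies": {"admin:GetPolicy", "admin:CreatePolicy"},
}

// authorizedPages returns the endpoints whose required actions are all
// contained in the actions allowed by the user's policy.
func authorizedPages(userActions map[string]bool) []string {
    var pages []string
    for page, required := range endpointActions {
        allowed := true
        for _, action := range required {
            if !userActions[action] {
                allowed = false
                break
            }
        }
        if allowed {
            pages = append(pages, page)
        }
    }
    return pages
}

func main() {
    user := map[string]bool{
        "admin:ListGroups":     true,
        "admin:AddUserToGroup": true,
    }
    fmt.Println(authorizedPages(user)) // only /groups is authorized
}
```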
Uses a similar approach to Trace and Console Logs by using
websockets. It also includes the integration with the UI, which
needs 3 input fields that are sent as query parameters.
This PR adds LDAP authentication support for mcs based on
https://github.com/minio/minio/blob/master/docs/sts/ldap.md
How to test:
```
$ docker run --rm -p 389:389 -p 636:636 --name my-openldap-container --detach osixia/openldap:1.3.0
```
Load the `billy.ldif` file using the `ldapadd` command to create a new user
and assign it to a group.
```
$ cat > billy.ldif << EOF
dn: uid=billy,dc=example,dc=org
uid: billy
cn: billy
sn: 3
objectClass: top
objectClass: posixAccount
objectClass: inetOrgPerson
loginShell: /bin/bash
homeDirectory: /home/billy
uidNumber: 14583102
gidNumber: 14564100
userPassword: {SSHA}j3lBh1Seqe4rqF1+NuWmjhvtAni1JC5A
mail: billy@example.org
gecos: Billy User

dn: ou=groups,dc=example,dc=org
objectclass: organizationalunit
ou: groups
description: generic groups branch

dn: cn=mcsAdmin,ou=groups,dc=example,dc=org
objectClass: top
objectClass: posixGroup
gidNumber: 678

dn: cn=mcsAdmin,ou=groups,dc=example,dc=org
changetype: modify
add: memberuid
memberuid: billy
EOF
$ docker cp billy.ldif my-openldap-container:/container/service/slapd/assets/test/billy.ldif
$ docker exec my-openldap-container ldapadd -x -D "cn=admin,dc=example,dc=org" -w admin -f /container/service/slapd/assets/test/billy.ldif -H ldap://localhost -ZZ
```
Query the LDAP server to check that the user billy was created correctly and
was assigned to the mcsAdmin group; you should get a list
containing LDAP users and groups.
```
$ docker exec my-openldap-container ldapsearch -x -H ldap://localhost -b dc=example,dc=org -D "cn=admin,dc=example,dc=org" -w admin
```
Query the LDAP server again, this time filtering only for the user
`billy`; you should see only 1 record.
```
$ docker exec my-openldap-container ldapsearch -x -H ldap://localhost -b uid=billy,dc=example,dc=org -D "cn=admin,dc=example,dc=org" -w admin
```
Change the password for user billy
Set the new password for `billy` to `minio123`, and enter `admin` when prompted for the
`LDAP Password`.
```
$ docker exec -it my-openldap-container /bin/bash
ldappasswd -H ldap://localhost -x -D "cn=admin,dc=example,dc=org" -W -S "uid=billy,dc=example,dc=org"
New password:
Re-enter new password:
Enter LDAP Password:
```
Add the mcsAdmin policy to user billy on MinIO
```
$ cat > mcsAdmin.json << EOF
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Action": [
        "admin:*"
      ],
      "Effect": "Allow",
      "Sid": ""
    },
    {
      "Action": [
        "s3:*"
      ],
      "Effect": "Allow",
      "Resource": [
        "arn:aws:s3:::*"
      ],
      "Sid": ""
    }
  ]
}
EOF
$ mc admin policy add myminio mcsAdmin mcsAdmin.json
$ mc admin policy set myminio mcsAdmin user=billy
```
Run MinIO
```
export MINIO_ACCESS_KEY=minio
export MINIO_SECRET_KEY=minio123
export MINIO_IDENTITY_LDAP_SERVER_ADDR='localhost:389'
export MINIO_IDENTITY_LDAP_USERNAME_FORMAT='uid=%s,dc=example,dc=org'
export MINIO_IDENTITY_LDAP_USERNAME_SEARCH_FILTER='(|(objectclass=posixAccount)(uid=%s))'
export MINIO_IDENTITY_LDAP_TLS_SKIP_VERIFY=on
export MINIO_IDENTITY_LDAP_SERVER_INSECURE=on
./minio server ~/Data
```
Run MCS
```
export MCS_ACCESS_KEY=minio
export MCS_SECRET_KEY=minio123
...
export MCS_LDAP_ENABLED=on
./mcs server
```
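For reference, the credential exchange mcs performs against MinIO's LDAP STS endpoint looks roughly like this minio-go sketch, reusing the test values from above (this illustrates the flow, not the mcs code itself):
```
package main

import (
    "fmt"
    "log"

    "github.com/minio/minio-go/v7/pkg/credentials"
)

func main() {
    // Exchange the LDAP username/password for temporary MinIO credentials
    // via the AssumeRoleWithLDAPIdentity STS API.
    creds, err := credentials.NewLDAPIdentity("http://localhost:9000", "billy", "minio123")
    if err != nil {
        log.Fatal(err)
    }

    value, err := creds.Get()
    if err != nil {
        log.Fatal(err)
    }
    fmt.Println("temporary access key:", value.AccessKeyID)
}
```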
This PR adds support for OIDC in mcs; to enable IDP
authentication you need to pass the following environment variables and
restart mcs.
```
MCS_IDP_URL=""
MCS_IDP_CLIENT_ID=""
MCS_IDP_SECRET=""
MCS_IDP_CALLBACK=""
```
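These variables map onto a standard OIDC client configuration; a hedged sketch using go-oidc and oauth2, which may not match how mcs wires them internally:
```
package main

import (
    "context"
    "log"
    "os"

    oidc "github.com/coreos/go-oidc"
    "golang.org/x/oauth2"
)

func main() {
    ctx := context.Background()

    // MCS_IDP_URL is the issuer URL of the identity provider.
    provider, err := oidc.NewProvider(ctx, os.Getenv("MCS_IDP_URL"))
    if err != nil {
        log.Fatal(err)
    }

    // Client ID/secret and callback come from the remaining variables.
    config := oauth2.Config{
        ClientID:     os.Getenv("MCS_IDP_CLIENT_ID"),
        ClientSecret: os.Getenv("MCS_IDP_SECRET"),
        RedirectURL:  os.Getenv("MCS_IDP_CALLBACK"),
        Endpoint:     provider.Endpoint(),
        Scopes:       []string{oidc.ScopeOpenID, "profile", "email"},
    }

    // The login flow redirects the user to this URL.
    log.Println(config.AuthCodeURL("state-token"))
}
```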
The Trace API uses a websocket to send trace information; a
valid JWT token needs to be sent either in the header
or as a cookie of the ws request to start.
Three goroutines are needed to ensure communication:
if the read heartbeat fails, all tracing should stop by cancelling
the context. WaitGroups are needed to ensure all
goroutines finish gracefully.
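A simplified sketch of the cancel-on-heartbeat-failure pattern (the real implementation uses three goroutines and a websocket connection; the functions below are stand-ins):
```
package main

import (
    "context"
    "fmt"
    "sync"
    "time"
)

// readHeartbeat and streamTraces stand in for the websocket read and write
// loops; the real code reads pings from the client and writes trace records.
func readHeartbeat(ctx context.Context) error {
    select {
    case <-time.After(2 * time.Second): // simulate a lost heartbeat
        return fmt.Errorf("heartbeat timeout")
    case <-ctx.Done():
        return nil
    }
}

func streamTraces(ctx context.Context) {
    for {
        select {
        case <-ctx.Done():
            return
        case <-time.After(500 * time.Millisecond):
            fmt.Println("sending trace record")
        }
    }
}

func main() {
    ctx, cancel := context.WithCancel(context.Background())
    var wg sync.WaitGroup

    // Reader goroutine: a failed heartbeat cancels the shared context,
    // which stops every other goroutine.
    wg.Add(1)
    go func() {
        defer wg.Done()
        if err := readHeartbeat(ctx); err != nil {
            fmt.Println("stopping trace:", err)
            cancel()
        }
    }()

    // Writer goroutine: streams traces until the context is cancelled.
    wg.Add(1)
    go func() {
        defer wg.Done()
        streamTraces(ctx)
    }()

    // Wait for all goroutines to finish gracefully.
    wg.Wait()
    cancel()
}
```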
Adds new functionality for creating a service
account for a user; for this, an admin client
is created with the user's credentials so that
the service account can be assigned to them.
This also updates to minio RELEASE.2020-04-28T23-56-56Z.
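A minimal sketch of building that per-user admin client with madmin; the endpoint and credentials are placeholders, and the exact service-account call is only referenced in a comment because its signature varies by madmin version:
```
package main

import (
    "log"

    "github.com/minio/minio/pkg/madmin"
)

func main() {
    // Build the admin client with the logged-in user's credentials rather
    // than the master credentials, so the service account belongs to them.
    adminClient, err := madmin.New("localhost:9000", "alevsk", "alevsk12345", false)
    if err != nil {
        log.Fatal(err)
    }

    // The service-account creation request is then issued through this
    // client (e.g. an AddServiceAccount-style call; signature varies by
    // madmin version).
    _ = adminClient
}
```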
This commit changes the authentication mechanism between mcs and minio to an STS
(security token service) schema using the user-provided credentials; previously
mcs was using master credentials. With that said, in order for you to
log in to MCS as an admin, your user must exist first on minio and have enough
privileges to perform administrative operations.
```
./mc admin user add myminio alevsk alevsk12345
```
```
cat admin.json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "admin:*",
        "s3:*"
      ],
      "Resource": [
        "arn:aws:s3:::*"
      ]
    }
  ]
}
./mc admin policy add myminio admin admin.json
```
```
./mc admin policy set myminio admin user=alevsk
```
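For reference, the STS exchange resembles minio-go's AssumeRole flow; a minimal sketch using the test user created above (illustrative only, not the exact mcs code):
```
package main

import (
    "fmt"
    "log"

    "github.com/minio/minio-go/v7/pkg/credentials"
)

func main() {
    // Exchange the user's long-lived credentials for temporary STS
    // credentials issued by the MinIO server itself.
    creds, err := credentials.NewSTSAssumeRole("http://localhost:9000", credentials.STSAssumeRoleOptions{
        AccessKey: "alevsk",
        SecretKey: "alevsk12345",
    })
    if err != nil {
        log.Fatal(err)
    }

    value, err := creds.Get()
    if err != nil {
        log.Fatal(err)
    }
    fmt.Println("temporary access key:", value.AccessKeyID)
    fmt.Println("session token:", value.SessionToken)
}
```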