prepareError receives an array of errors and returns a *model.Error object
with a message and error code; we can extend this function to add more
error types/codes.
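A minimal sketch of what such a helper could look like, assuming a hypothetical error model with `Code` and `Message` fields and placeholder sentinel errors (the real error list and codes live with the handlers):
```
package main

import (
	"errors"
	"fmt"
)

// Error mirrors the assumed shape of model.Error: a code plus a message.
type Error struct {
	Code    int32
	Message string
}

// Hypothetical sentinel errors; the real ones live with the handlers.
var (
	errInvalidCredentials = errors.New("invalid credentials")
	errNotFound           = errors.New("resource not found")
)

// prepareError walks the given errors and returns a single API error,
// falling back to a generic 500 when no known error is matched.
func prepareError(errs ...error) *Error {
	apiErr := &Error{Code: 500, Message: "an unexpected error occurred"}
	for _, err := range errs {
		switch {
		case errors.Is(err, errInvalidCredentials):
			apiErr.Code = 401
			apiErr.Message = "invalid credentials"
		case errors.Is(err, errNotFound):
			apiErr.Code = 404
			apiErr.Message = "resource not found"
		}
	}
	return apiErr
}

func main() {
	fmt.Println(prepareError(errNotFound).Message) // "resource not found"
}
```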
* Allow Specifying the Tenant Console Image. Support Image Pull Secrets by Name.
This PR adds support for `console_image` on create tenant and update tenant so the console image can be set by the caller, for cases where the image is hosted in a private registry.
It also adds support for specifying an Image Pull Secret by name; if it is not specified, the individual image registry credentials can still be provided (see the sketch below).
* Add tests for new fields.
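As a rough illustration only, the new fields might sit alongside the existing tenant payload roughly like this; `console_image` is taken from the description, but the other field names and the exact request shape are assumptions, with the authoritative definitions in the swagger-generated models:
```
package restapi

// createTenantRequest is a sketch of the request fields described above.
type createTenantRequest struct {
	Name            string `json:"name"`
	Namespace       string `json:"namespace"`
	ConsoleImage    string `json:"console_image,omitempty"`     // e.g. private-registry.example.com/console:v1
	ImagePullSecret string `json:"image_pull_secret,omitempty"` // name of an existing pull secret (hypothetical field name)
}
```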
Since toleration seconds can be empty, we were forcing it to be an integer that defaulted to 0,
which created a toleration with a value of 0 when the value should have been nil.
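A minimal sketch of the distinction, using the Kubernetes core/v1 `Toleration` type, where `TolerationSeconds` is a `*int64` and an unset value must stay nil rather than be coerced to 0:
```
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

// tolerationSecondsPtr returns nil when the caller did not set the value,
// so the resulting Toleration omits tolerationSeconds instead of pinning it to 0.
func tolerationSecondsPtr(seconds *int64) *int64 {
	if seconds == nil {
		return nil // leave unset; do not default to 0
	}
	v := *seconds
	return &v
}

func main() {
	unset := corev1.Toleration{
		Key:               "node.kubernetes.io/unreachable",
		Operator:          corev1.TolerationOpExists,
		Effect:            corev1.TaintEffectNoExecute,
		TolerationSeconds: tolerationSecondsPtr(nil), // stays nil
	}
	fmt.Println(unset.TolerationSeconds == nil) // true
}
```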
This PR adds the following features:
- Allow users to provide their own certificate key pair to enable TLS in
MinIO
- Allow users to configure data encryption at rest in MinIO with KES
- Remove the JWT schema for login; Console authentication will use
encrypted session tokens instead
Enable TLS between client and MinIO with user-provided certificates
Instead of relying on the AutoCert feature, the user can now provide `cert` and
`key` via the `tls` object; the values must be valid `x509.Certificate`
formatted files encoded in `base64`.
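For example, the `base64` values can be produced from existing PEM files along these lines (a small sketch; the file names are placeholders):
```
package main

import (
	"encoding/base64"
	"fmt"
	"os"
)

// encodeFile reads a PEM file and returns its contents as a base64 string,
// suitable for the tls.crt / tls.key fields of the create-tenant request.
func encodeFile(path string) (string, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return "", err
	}
	return base64.StdEncoding.EncodeToString(data), nil
}

func main() {
	crt, err := encodeFile("tls.crt") // placeholder path
	if err != nil {
		panic(err)
	}
	fmt.Println(crt)
}
```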
Enable encryption at rest by configuring KES
The user can deploy KES via Console/Operator by defining the `encryption`
object; either AutoCert must be enabled or custom certificates for KES must be
provided. KES supports 3 KMS backends: `Vault`, `AWS KMS` and `Gemalto`;
the KMS must be configured beforehand.
Example of a request body for create-tenant:
```
{
  "name": "honeywell",
  "access_key": "minio",
  "secret_key": "minio123",
  "enable_mcs": false,
  "enable_ssl": false,
  "service_name": "honeywell",
  "zones": [
    {
      "name": "honeywell-zone-1",
      "servers": 1,
      "volumes_per_server": 4,
      "volume_configuration": {
        "size": 256000000,
        "storage_class": "vsan-default-storage-policy"
      }
    }
  ],
  "namespace": "default",
  "tls": {
    "tls.crt": "",
    "tls.key": ""
  },
  "encryption": {
    "server": {
      "tls.crt": "",
      "tls.key": ""
    },
    "client": {
      "tls.crt": "",
      "tls.key": ""
    },
    "vault": {
      "endpoint": "http://vault:8200",
      "prefix": "",
      "approle": {
        "id": "",
        "secret": ""
      }
    }
  }
}
```
When Console is configured, we auto-generate credentials for Console and store them in a secret, but we need to return them to the user so they know which credentials they can use to log in to Console.
Previously every handler function received the session token in the
form of a jwt string; as a consequence, every time we wanted to access the
encrypted claims of the jwt we needed to run a decryption process.
Additionally, we were decrypting the jwt twice, first during session
validation and then inside each handler function, which was also causing a
lot of issues related to the merge between m3 and mcs.
What changed:
Now we validate and decrypt the jwt once in `configure_mcs.go`; this
works for both mcs (console) and operator sessions. The decrypted claims
are then passed to all the functions that need them, so no further token
validation or decryption is needed. A rough sketch of the idea is shown below.
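A minimal sketch of the pattern, with hypothetical names (`decryptClaims`, `sessionClaims`, `listBucketsHandler`); the real wiring happens in the swagger-generated `configure_mcs.go`:
```
package main

import (
	"errors"
	"fmt"
)

// sessionClaims is a stand-in for the decrypted session claims.
type sessionClaims struct {
	AccessKey    string
	SecretKey    string
	SessionToken string
}

// decryptClaims validates and decrypts the session token exactly once.
// The actual decryption uses the console's session-token encryption; this
// placeholder only illustrates the single entry point.
func decryptClaims(token string) (*sessionClaims, error) {
	if token == "" {
		return nil, errors.New("invalid session")
	}
	return &sessionClaims{AccessKey: "minio"}, nil
}

// Handlers receive the already-decrypted claims instead of the raw jwt,
// so they never decrypt the token themselves.
func listBucketsHandler(claims *sessionClaims) string {
	return fmt.Sprintf("listing buckets for %s", claims.AccessKey)
}

func main() {
	claims, err := decryptClaims("some-session-token")
	if err != nil {
		panic(err)
	}
	fmt.Println(listBucketsHandler(claims))
}
```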
`MCS` will authenticate against `Mkube` using bearer tokens via the HTTP
`Authorization` header. The user provides this token once
in the login form, MCS validates it against Mkube (list tenants) and,
if valid, generates and returns a new MCS session
with encrypted claims (the user's service account token will be inside the
JWT in the data field).
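Roughly, the validation call looks like the following (a sketch; the list-tenants path is illustrative and the real client is generated from the swagger spec):
```
package main

import (
	"fmt"
	"net/http"
)

// validateServiceAccountToken checks a user-supplied bearer token against
// Mkube by calling a read endpoint (list tenants) with that token.
func validateServiceAccountToken(mkubeURL, token string) (bool, error) {
	req, err := http.NewRequest(http.MethodGet, mkubeURL+"/api/v1/tenants", nil) // path is illustrative
	if err != nil {
		return false, err
	}
	req.Header.Set("Authorization", "Bearer "+token)

	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		return false, err
	}
	defer resp.Body.Close()

	return resp.StatusCode == http.StatusOK, nil
}

func main() {
	ok, err := validateServiceAccountToken("http://m3:8787", "the-service-account-jwt")
	if err != nil {
		panic(err)
	}
	fmt.Println("token valid:", ok)
}
```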
Kubernetes
The provided `JWT token` corresponds to the `Kubernetes service account`
that `Mkube` will use to run tasks on behalf of the
user, i.e. list, create, edit, and delete tenants, storage classes, etc.
Development
If you are running mcs in your local environment and wish to make
requests to `Mkube`, you can set `MCS_M3_HOSTNAME`; if
the environment variable is not present, `MCS` will default to
`"http://m3:8787"`. Additionally, you will need to set the
`MCS_MKUBE_ADMIN_ONLY=on` variable to make MCS display the Mkube UI.
Extract the service account token and use it with MCS
For local development you can use the jwt associated with the `m3-sa`
service account; you can get the token by running
the following command in your terminal:
```
kubectl get secret $(kubectl get serviceaccount m3-sa -o jsonpath="{.secrets[0].name}") -o jsonpath="{.data.token}" | base64 --decode
```
Then run the mcs server
```
MCS_M3_HOSTNAME=http://localhost:8787 MCS_MKUBE_ADMIN_ONLY=on ./mcs server
```
Self-signed certificates and Custom certificate authority for Mkube
If Mkube uses TLS with a self-signed certificate, or a certificate
issued by a custom certificate authority, you can add those
certificates using the `MCS_M3_SERVER_TLS_CA_CERTIFICATE` env variable:
```
MCS_M3_SERVER_TLS_CA_CERTIFICATE=cert1.pem,cert2.pem,cert3.pem ./mcs server
```
This PR sets the initial version of the ACL for mcs. The idea behind
this is to start applying the principle of least privilege when assigning
policies to users created through mcs: currently the mcsAdmin policy uses admin:*
and s3:*, and by default a user with that policy has access to everything; if we want to limit
that, we can create a policy with least privileges.
We need to start validating explicitly whether users have access to a
specific endpoint based on IAM policy actions.
In this first version, every endpoint (you can think of it as a page, too)
defines a set of well-defined admin/s3 actions it needs to work properly, i.e.:
```
// corresponds to /groups endpoint used by the groups page
var groupsActionSet = iampolicy.NewActionSet(
	iampolicy.ListGroupsAdminAction,
	iampolicy.AddUserToGroupAdminAction,
	//iampolicy.GetGroupAdminAction,
	iampolicy.EnableGroupAdminAction,
	iampolicy.DisableGroupAdminAction,
)

// corresponds to /policies endpoint used by the policies page
var iamPoliciesActionSet = iampolicy.NewActionSet(
	iampolicy.GetPolicyAdminAction,
	iampolicy.DeletePolicyAdminAction,
	iampolicy.CreatePolicyAdminAction,
	iampolicy.AttachPolicyAdminAction,
	iampolicy.ListUserPoliciesAdminAction,
)
```
With that said, for this initial version the session endpoint will
return a list of authorized pages to be rendered on the UI; in subsequent
PRs we will add this authorization verification via a server
middleware, as sketched below.
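A minimal sketch of how authorized pages could be derived from a user's allowed actions (hypothetical names and plain strings standing in for the iampolicy action sets shown above):
```
package main

import "fmt"

// endpointActions maps an endpoint (a UI page) to the admin/s3 actions it needs.
// Action names here are illustrative strings standing in for iampolicy actions.
var endpointActions = map[string][]string{
	"/groups":   {"admin:ListGroups", "admin:AddUserToGroup"},
	"/policies": {"admin:GetPolicy", "admin:CreatePolicy"},
}

// authorizedPages returns the endpoints whose required actions are all
// contained in the user's allowed actions.
func authorizedPages(userActions map[string]bool) []string {
	var pages []string
	for endpoint, required := range endpointActions {
		allowed := true
		for _, action := range required {
			if !userActions[action] {
				allowed = false
				break
			}
		}
		if allowed {
			pages = append(pages, endpoint)
		}
	}
	return pages
}

func main() {
	user := map[string]bool{
		"admin:ListGroups":     true,
		"admin:AddUserToGroup": true,
	}
	fmt.Println(authorizedPages(user)) // [/groups]
}
```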
This PR adds support for OIDC in mcs; to enable IDP
authentication you need to pass the following environment variables and
restart mcs.
```
MCS_IDP_URL=""
MCS_IDP_CLIENT_ID=""
MCS_IDP_SECRET=""
MCS_IDP_CALLBACK=""
```
Adds new functionality for creating a service
account for a user; for this, an admin client
is created with the user's credentials so that
the service account can be assigned to them.
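A rough sketch of the flow with a hypothetical client type (the real implementation uses the MinIO admin client; all names here are placeholders):
```
package main

import (
	"context"
	"fmt"
)

// userAdminClient is a hypothetical stand-in for a MinIO admin client that
// was constructed with the *user's* credentials (not the console's), so any
// service account it creates is owned by that user.
type userAdminClient struct {
	userAccessKey string
	userSecretKey string
}

// addServiceAccount simulates creating a service account on behalf of the user.
func (c userAdminClient) addServiceAccount(ctx context.Context, saAccessKey string) (string, error) {
	return fmt.Sprintf("service account %q created for user %q", saAccessKey, c.userAccessKey), nil
}

func main() {
	client := userAdminClient{userAccessKey: "alice", userSecretKey: "alice-secret"}
	out, err := client.addServiceAccount(context.Background(), "svc-backup")
	if err != nil {
		panic(err)
	}
	fmt.Println(out)
}
```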
This also updates to minio RELEASE.2020-04-28T23-56-56Z and
updates the code to be compatible with:
- github.com/minio/mc v0.0.0-20200415193718-68b638f2f96c
- github.com/minio/minio v0.0.0-20200415191640-bde0f444dbab
Note: the admin_config api is temporarily patched to
return the target configuration as a raw string due to the
changes made in minio.
Implemented user-groups integration for mcs; this allows storing the user's groups during user creation.
Co-authored-by: Benjamin Perez <benjamin@bexsoft.net>
* Added structure to swagger
* Added updateUserGroups handlers
* Updated return definition for user groups.
* Logic rewrite
* Removed logs
* Added some tests to updateUserGroups
* lint fix
* Updated tests for the new API
* Lint
* Added a comment about why we are setting these groups individually, & more lint fixes
* Updated tests page
* Added more tests & fixed comments for PR
* Lint utils file
* Fixed import orders
* Changed import order
Co-authored-by: Benjamin Perez <benjamin@bexsoft.net>
The addPolicy endpoint will read policies as a json string; this allows
s3 iam policy compatibility (uppercase json attributes) and keeps it
consistent with the other mcs apis. Once https://github.com/minio/minio/pull/9181
is merged we can return a typed struct.
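As an illustration of handling the raw policy string (a sketch using only the standard library; the actual parsing in minio uses its iampolicy package):
```
package main

import (
	"encoding/json"
	"fmt"
)

// addPolicyFromString accepts the policy document as a raw JSON string,
// which tolerates the uppercase attribute names used by S3/IAM policies,
// and only checks that it is well-formed JSON before passing it along.
func addPolicyFromString(name, policyJSON string) (string, error) {
	var raw json.RawMessage
	if err := json.Unmarshal([]byte(policyJSON), &raw); err != nil {
		return "", fmt.Errorf("invalid policy document: %w", err)
	}
	// In the real handler the raw document is handed to the admin client;
	// here we just echo it back.
	return fmt.Sprintf("policy %q accepted (%d bytes)", name, len(raw)), nil
}

func main() {
	policy := `{"Version":"2012-10-17","Statement":[{"Effect":"Allow","Action":["s3:GetObject"],"Resource":["arn:aws:s3:::mybucket/*"]}]}`
	out, err := addPolicyFromString("readonly-mybucket", policy)
	if err != nil {
		panic(err)
	}
	fmt.Println(out)
}
```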
fix policies tests after the refactor
goimports
more golint fixes