purge deprecated docker swarm documentation

documentation license moved to CC-BY-4.0
This commit is contained in:
Harshavardhana
2021-05-10 09:49:53 -07:00
parent c03a06cca8
commit 477cd85bef
6 changed files with 377 additions and 564 deletions

View File

@@ -2,18 +2,18 @@
 MinIO is a cloud-native application designed to scale in a sustainable manner in multi-tenant environments. Orchestration platforms provide a perfect launchpad for MinIO to scale. Below is the list of MinIO deployment documents for various orchestration platforms:
-| Orchestration platforms|
-|:---|
-| [`Docker Swarm`](https://docs.min.io/docs/deploy-minio-on-docker-swarm) |
-| [`Docker Compose`](https://docs.min.io/docs/deploy-minio-on-docker-compose) |
-| [`Kubernetes`](https://docs.min.io/docs/deploy-minio-on-kubernetes) |
+| Orchestration platforms |
+|:---------------------------------------------------------------------------------------------------|
+| [`Kubernetes`](https://docs.min.io/docs/deploy-minio-on-kubernetes) |
+| [`Docker Compose`](https://docs.min.io/docs/deploy-minio-on-docker-compose) (only for test setups) |
 ## Why is MinIO cloud-native?
-The term cloud-native revolves around the idea of applications deployed as microservices that scale well. It is not just about retrofitting monolithic applications onto a modern container-based compute environment. A cloud-native application is portable and resilient by design, and can scale horizontally by simply replicating. Modern orchestration platforms like Swarm, Kubernetes and DC/OS make replicating and managing containers in huge clusters easier than ever.
+The term cloud-native revolves around the idea of applications deployed as microservices that scale well. It is not just about retrofitting monolithic applications onto a modern container-based compute environment. A cloud-native application is portable and resilient by design, and can scale horizontally by simply replicating. Modern orchestration platforms like Kubernetes and DC/OS make replicating and managing containers in huge clusters easier than ever.
 While containers provide an isolated application execution environment, orchestration platforms allow seamless scaling by helping replicate and manage containers. MinIO extends this by adding an isolated storage environment for each tenant.
 MinIO is built ground up on the cloud-native premise. With features like erasure coding and distributed, shared setups, it focuses only on storage and does it very well. It can be scaled by simply replicating MinIO instances per tenant via an orchestration platform.
 > In a cloud-native environment, scalability is not a function of the application but the orchestration platform.

View File

@@ -1,16 +1,13 @@
 user nginx;
 worker_processes auto;
 error_log /var/log/nginx/error.log warn;
 pid /var/run/nginx.pid;
 events {
-    worker_connections 1024;
+    worker_connections 4096;
 }
 http {
     include /etc/nginx/mime.types;
     default_type application/octet-stream;
@@ -20,14 +17,9 @@ http {
                     '"$http_user_agent" "$http_x_forwarded_for"';
     access_log /var/log/nginx/access.log main;
     sendfile on;
-    #tcp_nopush on;
     keepalive_timeout 65;
-    #gzip on;
-    # include /etc/nginx/conf.d/*.conf;
     upstream minio {
@@ -42,13 +34,13 @@ http {
         listen [::]:9000;
         server_name localhost;
         # To allow special characters in headers
         ignore_invalid_headers off;
         # Allow any size file to be uploaded.
         # Set to a value such as 1000m; to restrict file size to a specific value
         client_max_body_size 0;
         # To disable buffering
         proxy_buffering off;
         location / {
             proxy_set_header Host $http_host;
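
Taken together, these directives let nginx pass large uploads straight through to MinIO. A quick sanity check once both are running, as a sketch: the port follows the `listen` directives above, the `/minio/health/live` endpoint matches the healthchecks used elsewhere in this commit, and the `myminio` alias plus example credentials are illustrative.

```shell
# Probe MinIO's unauthenticated liveness endpoint through the nginx proxy.
curl -i http://localhost:9000/minio/health/live

# Push a large object through the proxy to confirm that client_max_body_size 0
# (no request-size limit) and proxy_buffering off behave as expected.
dd if=/dev/zero of=/tmp/big.bin bs=1M count=512
mc alias set myminio http://localhost:9000 AKIAIOSFODNN7EXAMPLE wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY
mc mb myminio/uploads && mc cp /tmp/big.bin myminio/uploads/
```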

View File

@@ -1,103 +0,0 @@
# Deploy MinIO on Docker Swarm [![Slack](https://slack.min.io/slack?type=svg)](https://slack.min.io) [![Docker Pulls](https://img.shields.io/docker/pulls/minio/minio.svg?maxAge=604800)](https://hub.docker.com/r/minio/minio/)
Docker Engine provides cluster management and orchestration features in Swarm mode. MinIO server can be easily deployed in distributed mode on Swarm to create a multi-tenant, highly-available and scalable object store.
As of [Docker Engine v1.13.0](https://blog.docker.com/2017/01/whats-new-in-docker-1-13/) (Docker Compose v3.0), Docker Swarm and Compose are [cross-compatible](https://docs.docker.com/compose/compose-file/#version-3). This allows a Compose file to be used as a template to deploy services on Swarm. We use a Docker Compose file to create a distributed MinIO setup.
## 1. Prerequisites
* Familiarity with [Swarm mode key concepts](https://docs.docker.com/engine/swarm/key-concepts/).
* Docker engine v1.13.0 running on a cluster of [networked host machines](https://docs.docker.com/engine/swarm/swarm-tutorial/#/three-networked-host-machines).
## 2. Create a Swarm
Create a swarm on the manager node by running
```shell
docker swarm init --advertise-addr <MANAGER-IP>
```
Once the swarm is initialized, you'll see a response like the one below.
```shell
docker swarm join \
--token SWMTKN-1-49nj1cmql0jkz5s954yi3oex3nedyz0fb0xx14ie39trti4wxv-8vxv8rssmk743ojnwacrr2e7c \
192.168.99.100:2377
```
You can now [add worker nodes](https://docs.docker.com/engine/swarm/swarm-tutorial/add-nodes/) to the swarm by running the above join command on each of them. Find detailed steps to create the swarm in the [Docker documentation](https://docs.docker.com/engine/swarm/swarm-tutorial/create-swarm/).
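Once the workers have joined, it's worth confirming the cluster membership from the manager; `docker node ls` is the standard Docker CLI for this:

```shell
# Run on the manager node: lists every node in the swarm with its
# hostname, status, availability, and manager/worker role.
docker node ls
```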
## 3. Create Docker secrets for MinIO
```shell
echo "AKIAIOSFODNN7EXAMPLE" | docker secret create access_key -
echo "wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY" | docker secret create secret_key -
```
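Before deploying, you can confirm that both secrets exist; Swarm mounts each secret into the service's containers under `/run/secrets/`:

```shell
# Both entries should be listed; tasks will see them as the files
# /run/secrets/access_key and /run/secrets/secret_key.
docker secret ls
```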
## 4. Deploy distributed MinIO services
The example MinIO stack uses 4 Docker volumes, which are created automatically by deploying the stack. Make sure that the services in the stack are always (re)started on the same node where they were first deployed.
Otherwise, Docker creates a new volume when a service restarts on another Docker node; that volume is not in sync with the other volumes, and the stack will fail to start healthy.
Before deploying the stack, add labels to the Docker nodes where you want the MinIO services to run:
```shell
docker node update --label-add minio1=true <DOCKER-NODE1>
docker node update --label-add minio2=true <DOCKER-NODE2>
docker node update --label-add minio3=true <DOCKER-NODE3>
docker node update --label-add minio4=true <DOCKER-NODE4>
```
It is possible to run more than one MinIO service on a single Docker node. Set the labels accordingly.
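To double-check the placement labels before deploying, you can inspect each node; the Go-template format string below is a sketch using the standard Docker CLI:

```shell
# Print the labels attached to a node; expect output such as map[minio1:true].
docker node inspect --format '{{ .Spec.Labels }}' <DOCKER-NODE1>
```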
Download the [Docker Compose file](https://github.com/minio/minio/blob/master/docs/orchestration/docker-swarm/docker-compose-secrets.yaml?raw=true) on your Swarm master, then execute the following command:
```shell
docker stack deploy --compose-file=docker-compose-secrets.yaml minio_stack
```
This deploys the services described in the Compose file as a Docker stack named `minio_stack`. Look up the `docker stack` [command reference](https://docs.docker.com/engine/reference/commandline/stack/) for more info.
After the stack is successfully deployed, you should be able to access the MinIO server via the [MinIO Client](https://docs.min.io/docs/minio-client-complete-guide) `mc` or your browser at http://[Node_Public_IP_Address]:[Expose_Port_on_Host]
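You can watch the stack converge before testing access; both commands below are standard Docker CLI, and Swarm prefixes service names with the stack name:

```shell
# One line per MinIO service; REPLICAS should read 1/1 once healthy.
docker stack services minio_stack

# Show which node the first service's task landed on and its current state.
docker service ps minio_stack_minio1
```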
## 5. Remove distributed MinIO services
Remove the distributed MinIO services and the related network by running
```shell
docker stack rm minio_stack
```
Swarm doesn't automatically remove host volumes created for services. This may lead to corruption when a new MinIO service is created in the swarm, so we recommend manually removing all the volumes used by MinIO. To do this, log on to each node in the swarm and run
```shell
docker volume prune
```
This will remove all the volumes not associated with any container.
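If pruning every dangling volume on a node is too broad, a narrower sketch, assuming the default naming where Swarm prefixes volumes with the stack name:

```shell
# List only this stack's volumes...
docker volume ls --filter name=minio_stack
# ...then remove them explicitly, leaving unrelated volumes untouched.
# (Each node only holds the volumes for the services placed on it,
# so remove just the ones present on that node.)
docker volume rm minio_stack_minio1-data minio_stack_minio2-data \
  minio_stack_minio3-data minio_stack_minio4-data
```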
## 6. Accessing MinIO services
The services are exposed, by default, on the internal overlay network by their service names (minio1, minio2, ...).
The docker-compose.yml file also exposes the MinIO services behind a single alias on the minio_distributed network.
Services in the swarm that are attached to that network can reach the host "minio-cluster" instead of the individual services' hostnames. This provides a simple way to loosely load-balance across all the MinIO services in the swarm, and it simplifies configuration and management.
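For example, a client attached to the same overlay network can address the entire deployment through that single hostname. The sketch below assumes the network was created as attachable (a stack-created overlay network is not attachable by default) and probes the shared alias with curl:

```shell
# Swarm's built-in DNS resolves minio-cluster across all attached MinIO
# services; expect 200 when at least one instance is live.
docker run --rm --network minio_distributed curlimages/curl \
  -s -o /dev/null -w '%{http_code}\n' http://minio-cluster:9000/minio/health/live
```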
### Notes
* By default the Docker Compose file uses the Docker image for the latest MinIO server release. You can change the image tag to pull a specific [MinIO Docker image](https://hub.docker.com/r/minio/minio/).
* There are 4 distributed MinIO instances created by default. You can add more MinIO services (up to 16 in total) to your MinIO Swarm deployment. To add a service:
    * Replicate a service definition and change the name of the new service appropriately.
    * Add a volume in the volumes section, and update the volume section of the new service accordingly.
    * Update the command section in each service. Specifically, add the drive location to be used as storage on the new service.
    * Update the port number exposed for the new service.

  Read more about distributed MinIO [here](https://docs.min.io/docs/distributed-minio-quickstart-guide).
* By default the services use the `local` volume driver. Refer to the [Docker documentation](https://docs.docker.com/compose/compose-file/#/volume-configuration-reference) to explore further options.
* MinIO services in the Docker Compose file expose ports 9001 to 9004. This allows multiple services to run on a single host; the sketch after this list shows a quick per-port liveness probe. Explore other configuration options in the [Docker documentation](https://docs.docker.com/compose/compose-file/#/ports).
* Docker Swarm uses ingress load balancing by default. You can configure an [external load balancer](https://docs.docker.com/engine/swarm/ingress/#/configure-an-external-load-balancer) based on your requirements.
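As referenced in the notes above, a quick per-port liveness probe across the four published host ports; the address placeholder mirrors the one used earlier in this guide:

```shell
# Expect 200 from each healthy MinIO instance.
for port in 9001 9002 9003 9004; do
  curl -s -o /dev/null -w "minio on :$port -> %{http_code}\n" \
    "http://<Node_Public_IP_Address>:$port/minio/health/live"
done
```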
### Explore Further
- [Overview of Docker Swarm mode](https://docs.docker.com/engine/swarm/)
- [MinIO Docker Quickstart Guide](https://docs.min.io/docs/minio-docker-quickstart-guide)
- [Deploy MinIO on Docker Compose](https://docs.min.io/docs/deploy-minio-on-docker-compose)
- [MinIO Erasure Code QuickStart Guide](https://docs.min.io/docs/minio-erasure-code-quickstart-guide)

View File

@@ -1,129 +0,0 @@
version: '3.7'

services:
  minio1:
    image: minio/minio:RELEASE.2021-04-22T15-44-28Z
    hostname: minio1
    volumes:
      - minio1-data:/export
    ports:
      - "9001:9000"
    networks:
      - minio_distributed
    deploy:
      restart_policy:
        delay: 10s
        max_attempts: 10
        window: 60s
      placement:
        constraints:
          - node.labels.minio1==true
    command: server http://minio{1...4}/export
    secrets:
      - secret_key
      - access_key
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost:9000/minio/health/live"]
      interval: 30s
      timeout: 20s
      retries: 3

  minio2:
    image: minio/minio:RELEASE.2021-04-22T15-44-28Z
    hostname: minio2
    volumes:
      - minio2-data:/export
    ports:
      - "9002:9000"
    networks:
      - minio_distributed
    deploy:
      restart_policy:
        delay: 10s
        max_attempts: 10
        window: 60s
      placement:
        constraints:
          - node.labels.minio2==true
    command: server http://minio{1...4}/export
    secrets:
      - secret_key
      - access_key
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost:9000/minio/health/live"]
      interval: 30s
      timeout: 20s
      retries: 3

  minio3:
    image: minio/minio:RELEASE.2021-04-22T15-44-28Z
    hostname: minio3
    volumes:
      - minio3-data:/export
    ports:
      - "9003:9000"
    networks:
      - minio_distributed
    deploy:
      restart_policy:
        delay: 10s
        max_attempts: 10
        window: 60s
      placement:
        constraints:
          - node.labels.minio3==true
    command: server http://minio{1...4}/export
    secrets:
      - secret_key
      - access_key
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost:9000/minio/health/live"]
      interval: 30s
      timeout: 20s
      retries: 3

  minio4:
    image: minio/minio:RELEASE.2021-04-22T15-44-28Z
    hostname: minio4
    volumes:
      - minio4-data:/export
    ports:
      - "9004:9000"
    networks:
      - minio_distributed
    deploy:
      restart_policy:
        delay: 10s
        max_attempts: 10
        window: 60s
      placement:
        constraints:
          - node.labels.minio4==true
    command: server http://minio{1...4}/export
    secrets:
      - secret_key
      - access_key
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost:9000/minio/health/live"]
      interval: 30s
      timeout: 20s
      retries: 3

volumes:
  minio1-data:
  minio2-data:
  minio3-data:
  minio4-data:

networks:
  minio_distributed:
    driver: overlay

secrets:
  secret_key:
    external: true
  access_key:
    external: true

View File

@@ -1,140 +0,0 @@
version: '3.7'

services:
  minio1:
    image: minio/minio:RELEASE.2021-04-22T15-44-28Z
    hostname: minio1
    volumes:
      - minio1-data:/export
    ports:
      - "9001:9000"
    networks:
      # On the internal network, services are reachable as minio1/2/3/4 by default
      internal: {}
      minio_distributed:
        aliases:
          - minio-cluster
    environment:
      MINIO_ROOT_USER: AKIAIOSFODNN7EXAMPLE
      MINIO_ROOT_PASSWORD: wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY
    deploy:
      restart_policy:
        delay: 10s
        max_attempts: 10
        window: 60s
      placement:
        constraints:
          - node.labels.minio1==true
    command: server http://minio{1...4}/export
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost:9000/minio/health/live"]
      interval: 30s
      timeout: 20s
      retries: 3

  minio2:
    image: minio/minio:RELEASE.2021-04-22T15-44-28Z
    hostname: minio2
    volumes:
      - minio2-data:/export
    ports:
      - "9002:9000"
    networks:
      # On the internal network, services are reachable as minio1/2/3/4 by default
      internal: {}
      minio_distributed:
        aliases:
          - minio-cluster
    environment:
      MINIO_ROOT_USER: AKIAIOSFODNN7EXAMPLE
      MINIO_ROOT_PASSWORD: wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY
    deploy:
      restart_policy:
        delay: 10s
        max_attempts: 10
        window: 60s
      placement:
        constraints:
          - node.labels.minio2==true
    command: server http://minio{1...4}/export
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost:9000/minio/health/live"]
      interval: 30s
      timeout: 20s
      retries: 3

  minio3:
    image: minio/minio:RELEASE.2021-04-22T15-44-28Z
    hostname: minio3
    volumes:
      - minio3-data:/export
    ports:
      - "9003:9000"
    networks:
      # On the internal network, services are reachable as minio1/2/3/4 by default
      internal: {}
      minio_distributed:
        aliases:
          - minio-cluster
    environment:
      MINIO_ROOT_USER: AKIAIOSFODNN7EXAMPLE
      MINIO_ROOT_PASSWORD: wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY
    deploy:
      restart_policy:
        delay: 10s
        max_attempts: 10
        window: 60s
      placement:
        constraints:
          - node.labels.minio3==true
    command: server http://minio{1...4}/export
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost:9000/minio/health/live"]
      interval: 1m30s
      timeout: 20s
      retries: 3

  minio4:
    image: minio/minio:RELEASE.2021-04-22T15-44-28Z
    hostname: minio4
    volumes:
      - minio4-data:/export
    ports:
      - "9004:9000"
    networks:
      # On the internal network, services are reachable as minio1/2/3/4 by default
      internal: {}
      minio_distributed:
        aliases:
          - minio-cluster
    environment:
      MINIO_ROOT_USER: AKIAIOSFODNN7EXAMPLE
      MINIO_ROOT_PASSWORD: wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY
    deploy:
      restart_policy:
        delay: 10s
        max_attempts: 10
        window: 60s
      placement:
        constraints:
          - node.labels.minio4==true
    command: server http://minio{1...4}/export
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost:9000/minio/health/live"]
      interval: 30s
      timeout: 20s
      retries: 3

volumes:
  minio1-data:
  minio2-data:
  minio3-data:
  minio4-data:

networks:
  minio_distributed:
    driver: overlay
  internal: {}