First, the docker-compose.yml and nginx.conf:
version: '2'
services:
  minio1:
    hostname: minio1
    image: 'bitnami/minio:latest'
    environment:
      - MINIO_ACCESS_KEY=minio
      - MINIO_SECRET_KEY=minio1234
      - MINIO_DISTRIBUTED_MODE_ENABLED=yes
      - MINIO_DISTRIBUTED_NODES=minio1,minio2,minio3,minio4
      - MINIO_SKIP_CLIENT=yes
    volumes:
      - /data1/minio/data1:/data
  minio2:
    image: 'bitnami/minio:latest'
    hostname: minio2
    environment:
      - MINIO_ACCESS_KEY=minio
      - MINIO_SECRET_KEY=minio1234
      - MINIO_DISTRIBUTED_MODE_ENABLED=yes
      - MINIO_DISTRIBUTED_NODES=minio1,minio2,minio3,minio4
      - MINIO_SKIP_CLIENT=yes
    volumes:
      - /data1/minio/data2:/data
  minio3:
    hostname: minio3
    image: 'bitnami/minio:latest'
    environment:
      - MINIO_ACCESS_KEY=minio
      - MINIO_SECRET_KEY=minio1234
      - MINIO_DISTRIBUTED_MODE_ENABLED=yes
      - MINIO_DISTRIBUTED_NODES=minio1,minio2,minio3,minio4
      - MINIO_SKIP_CLIENT=yes
    volumes:
      - /data1/minio/data3:/data
  minio4:
    hostname: minio4
    image: 'bitnami/minio:latest'
    environment:
      - MINIO_ACCESS_KEY=minio
      - MINIO_SECRET_KEY=minio1234
      - MINIO_DISTRIBUTED_MODE_ENABLED=yes
      - MINIO_DISTRIBUTED_NODES=minio1,minio2,minio3,minio4
      - MINIO_SKIP_CLIENT=yes
    volumes:
      - /data1/minio/data4:/data
  nginx:
    image: nginx:1.19.2-alpine
    hostname: nginx
    volumes:
      - ./nginx.conf:/etc/nginx/nginx.conf:ro
    ports:
      - "8000:9000"
      - "8001:9001"
    depends_on:
      - minio1
      - minio2
      - minio3
      - minio4
Because the nginx configuration file is bind-mounted from the host, nginx.conf must be created on the host ahead of time; otherwise Docker creates a directory at that path instead of a file. The nginx configuration is as follows:
user nginx;
worker_processes auto;
error_log /var/log/nginx/error.log warn;
pid /var/run/nginx.pid;

events {
    worker_connections 4096;
}

http {
    include /etc/nginx/mime.types;
    default_type application/octet-stream;
    log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                    '$status $body_bytes_sent "$http_referer" '
                    '"$http_user_agent" "$http_x_forwarded_for"';
    access_log /var/log/nginx/access.log main;
    sendfile on;
    keepalive_timeout 65;
    # include /etc/nginx/conf.d/*.conf;

    upstream minio {
        server minio1:9000;
        server minio2:9000;
        server minio3:9000;
        server minio4:9000;
    }

    upstream console {
        ip_hash;
        server minio1:9001;
        server minio2:9001;
        server minio3:9001;
        server minio4:9001;
    }

    server {
        listen 9000;
        listen [::]:9000;
        server_name localhost;
        # To allow special characters in headers
        ignore_invalid_headers off;
        # Allow any size file to be uploaded.
        # Set to a value such as 1000m; to restrict file size to a specific value
        client_max_body_size 0;
        # To disable buffering
        proxy_buffering off;

        location / {
            proxy_set_header Host $http_host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header X-Forwarded-Proto $scheme;
            proxy_connect_timeout 300;
            # Default is HTTP/1, keepalive is only enabled in HTTP/1.1
            proxy_http_version 1.1;
            proxy_set_header Connection "";
            chunked_transfer_encoding off;
            proxy_pass http://minio;
        }
    }

    server {
        listen 9001;
        listen [::]:9001;
        server_name localhost;
        # To allow special characters in headers
        ignore_invalid_headers off;
        # Allow any size file to be uploaded.
        # Set to a value such as 1000m; to restrict file size to a specific value
        client_max_body_size 0;
        # To disable buffering
        proxy_buffering off;

        location / {
            proxy_set_header Host $http_host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header X-Forwarded-Proto $scheme;
            proxy_set_header X-NginX-Proxy true;
            # This is necessary to pass the correct IP to be hashed
            real_ip_header X-Real-IP;
            proxy_connect_timeout 300;
            # To support websocket
            proxy_http_version 1.1;
            proxy_set_header Upgrade $http_upgrade;
            proxy_set_header Connection "upgrade";
            chunked_transfer_encoding off;
            proxy_pass http://console;
        }
    }
}
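Putting the two files together, a minimal startup sequence might look like the following sketch (it assumes both files sit in the current directory):

```shell
# Create nginx.conf on the host first (paste the config above into it);
# if the file does not exist, Docker creates a directory at ./nginx.conf instead.
touch nginx.conf

# Bring the whole stack up in the background
docker-compose up -d

# Follow the startup logs of all five services
docker-compose logs -f
```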
Details
This deployment runs a 4-node MinIO cluster on a single machine, with each node's data mounted at /data1/minio/data{1...4}. nginx sits in front of the cluster and exposes a single entry point. Because port 9000 on the server is already in use, host port 8000 is forwarded to nginx's port 9000 and host port 8001 to port 9001. The cluster's internal network uses bridge mode, and nginx depends on minio{1...4} running first.
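Once the stack is up, the cluster is reachable through nginx on host port 8000. A quick smoke test with the MinIO client (`mc`) might look like this sketch; the alias name `local`, the bucket name, and `./somefile.txt` are placeholders:

```shell
# Register the nginx-fronted endpoint using the keys from the compose file
mc alias set local http://localhost:8000 minio minio1234

# Create a bucket and upload a file through the load balancer
mc mb local/test-bucket
mc cp ./somefile.txt local/test-bucket/

# List the bucket to confirm the round trip
mc ls local/test-bucket
```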
Problems encountered and solutions
Error: Read failed. Insufficient number of disks online (*errors.errorString)
Every disk must be mounted. The following uses a named (virtual) volume:
version: '2'
services:
  minio1:
    hostname: minio1
    image: 'bitnami/minio:latest'
    environment:
      - MINIO_ACCESS_KEY=minio
      - MINIO_SECRET_KEY=minio1234
      - MINIO_DISTRIBUTED_MODE_ENABLED=yes
      - MINIO_DISTRIBUTED_NODES=minio1,minio2,minio3,minio4
      - MINIO_SKIP_CLIENT=yes
    volumes:
      - data1:/data
  # minio2–minio4 are analogous, each with its own named volume
volumes:
  data1:
  data2:
  data3:
  data4:
Host-path mount:
  minio2:
    image: 'bitnami/minio:latest'
    hostname: minio2
    environment:
      - MINIO_ACCESS_KEY=minio
      - MINIO_SECRET_KEY=minio1234
      - MINIO_DISTRIBUTED_MODE_ENABLED=yes
      - MINIO_DISTRIBUTED_NODES=minio1,minio2,minio3,minio4
      - MINIO_SKIP_CLIENT=yes
    volumes:
      - /data1/minio/data2:/data
With host-path mounts, watch out for permissions: inside the container the MinIO data directory is owned by zookeeper:root, so change the owner of /data1/minio/data{1...4} on the host:
sudo chown zookeeper:root /data1/minio/data1
sudo chown zookeeper:root /data1/minio/data2
sudo chown zookeeper:root /data1/minio/data3
sudo chown zookeeper:root /data1/minio/data4
Or open up the read/write permissions to 777:
sudo chmod 777 /data1/minio/data1
sudo chmod 777 /data1/minio/data2
sudo chmod 777 /data1/minio/data3
sudo chmod 777 /data1/minio/data4
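If you are unsure which owner the container actually expects, one way to check is to inspect a running container directly (service name `minio1` comes from the compose file above):

```shell
# Show the user/group the MinIO process runs as inside the container
docker-compose exec minio1 id

# Compare with the ownership of the host directory
ls -ld /data1/minio/data1
```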
Only one server visible after a successful deployment
The following two settings must both be configured to enable cluster mode; otherwise each node runs as a standalone single-node deployment and the nodes cannot see each other:
- MINIO_DISTRIBUTED_MODE_ENABLED=yes
- MINIO_DISTRIBUTED_NODES=minio1,minio2,minio3,minio4
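With both variables set, you can confirm that all four nodes joined the cluster, for example with `mc admin info`; the alias name `local` here is an assumption:

```shell
# Assumes the cluster was registered with:
#   mc alias set local http://localhost:8000 minio minio1234
# Should report four online servers once distributed mode is active
mc admin info local
```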
Error messages during startup
minio3_1 | API: SYSTEM()
minio3_1 | Time: 03:39:34 UTC 12/21/2021
minio3_1 | Error: file not found (cmd.StorageErr)
minio3_1 | 4: cmd/xl-storage.go:1965:cmd.(*xlStorage).RenameData.func1()
minio3_1 | 3: cmd/xl-storage.go:2229:cmd.(*xlStorage).RenameData()
minio3_1 | 2: cmd/storage-rest-server.go:689:cmd.(*storageRESTServer).RenameDataHandler()
minio3_1 | 1: net/http/server.go:2047:http.HandlerFunc.ServeHTTP()
minio1_1 | Automatically configured API requests per node based on available memory on the system: 418
These errors can be ignored; after waiting a while for all services to start, the cluster works normally.
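Instead of waiting blindly, you can poll MinIO's standard liveness endpoint (`/minio/health/live`) through nginx until the cluster responds:

```shell
# Poll until the load-balanced endpoint reports live (HTTP 200)
until curl -sf http://localhost:8000/minio/health/live > /dev/null; do
  echo "waiting for minio cluster..."
  sleep 2
done
echo "minio cluster is up"
```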