All compose files in these notes are uniformly named docker-compose.yaml.
Start any of them with: docker-compose up -d
Stop with: docker-compose down
Docker installation
Direct install script
1: curl -fsSL https://get.docker.com -o install-docker.sh
sh install-docker.sh --mirror Aliyun --channel stable --version 20.10
1.1: One-click install script (alternative)
curl -fsSL https://get.docker.com | bash -s docker --mirror Aliyun
If it fails with something like (curl: (35) OpenSSL SSL_connect: SSL_ERROR_SYSCALL in connection to get.docker.com:443), just retry a few times.
2: bash <(curl -sSL https://linuxmirrors.cn/docker.sh)
1. Install the required system tools
yum -y install yum-utils device-mapper-persistent-data lvm2
2. Add the repository
yum-config-manager --add-repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
Refresh the yum cache
yum makecache fast
Install docker-ce (if you installed the wrong version, uninstall with yum remove -y docker*)
yum -y install docker-ce
or pin a version, e.g. yum -y install docker-ce-20.10.24
Start the Docker daemon
systemctl start docker
Enable start on boot
systemctl enable docker
If you changed the configuration, reload Docker:
systemctl daemon-reload
systemctl restart docker
Offline Docker installation
Download the package (e.g. with the Thunder downloader): https://download.docker.com/linux/static/stable/x86_64/docker-24.0.9.tgz
Reference: https://blog.csdn.net/jianghuchuang/article/details/141220379
#!/bin/sh
tar -zxvf docker-24.0.9.tgz
cp docker/* /usr/bin/
cat << EOF > /usr/lib/systemd/system/docker.service
[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
After=network-online.target firewalld.service
Wants=network-online.target
[Service]
Type=notify
ExecStart=/usr/bin/dockerd
ExecReload=/bin/kill -s HUP $MAINPID
TimeoutStartSec=0
RestartSec=2
Restart=always
StartLimitBurst=3
StartLimitInterval=60s
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
TasksMax=infinity
Delegate=yes
KillMode=process
OOMScoreAdjust=-500
[Install]
WantedBy=multi-user.target
EOF
cat << EOF > /usr/lib/systemd/system/containerd.service
# Copyright The containerd Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
[Unit]
Description=containerd container runtime
Documentation=https://containerd.io
After=network.target local-fs.target
[Service]
ExecStartPre=-/sbin/modprobe overlay
ExecStart=/usr/local/bin/containerd
Type=notify
Delegate=yes
KillMode=process
Restart=always
RestartSec=5
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNPROC=infinity
LimitCORE=infinity
# Comment TasksMax if your systemd version does not supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
OOMScoreAdjust=-999
[Install]
WantedBy=multi-user.target
EOF
vi /usr/lib/systemd/system/containerd.service
Find the line in the middle
ExecStart=/usr/local/bin/containerd
and change it to
ExecStart=/usr/bin/containerd
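The manual vi edit above can also be scripted with sed. A minimal sketch, demonstrated on a local copy of the unit file so it can be tried safely; on the real server run the same sed against /usr/lib/systemd/system/containerd.service:

```shell
# Demonstrate the path fix on a local copy of the unit file
printf 'ExecStart=/usr/local/bin/containerd\n' > /tmp/containerd.service
sed -i 's#/usr/local/bin/containerd#/usr/bin/containerd#' /tmp/containerd.service
grep ExecStart /tmp/containerd.service   # -> ExecStart=/usr/bin/containerd
```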
systemctl enable --now containerd
systemctl status containerd
systemctl enable --now docker
systemctl status docker
On a new server Docker stores its data on the system disk by default,
so the Docker data directory should be moved.
This method only applies to fresh servers.
Assume the data disk is mounted at /data.
# Check the Docker install info; it will show Docker Root Dir: /var/lib/docker
docker info
sudo systemctl stop docker.service
mkdir /data/docker
sudo cp -r /var/lib/docker/* /data/docker/
sudo vim /etc/docker/daemon.json
# daemon.json example
{
"registry-mirrors": ["https://dockerpull.com","https://docker.1panelproxy.com"],
"data-root": "/data/docker"
}
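The daemon.json change only takes effect after Docker restarts, and a syntax error in the file will stop Docker from starting. A minimal sketch (shown under /tmp so it can be tried safely; on the server the real path is /etc/docker/daemon.json, and python3 is assumed available for the syntax check):

```shell
# Write daemon.json (illustrative /tmp path; real path: /etc/docker/daemon.json)
mkdir -p /tmp/docker-etc
cat << 'EOF' > /tmp/docker-etc/daemon.json
{
  "registry-mirrors": ["https://dockerpull.com", "https://docker.1panelproxy.com"],
  "data-root": "/data/docker"
}
EOF
# Syntax check before restarting Docker
python3 -m json.tool /tmp/docker-etc/daemon.json > /dev/null && echo "daemon.json OK"
# Then on the server:
#   sudo systemctl restart docker
#   docker info | grep "Docker Root Dir"   # should now show /data/docker
```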
docker-compose installation
Get the latest docker-compose version from https://github.com/docker/compose/releases/latest
If the download is slow, fetch the file yourself (e.g. with Thunder) and upload it to the server.
1: cd /usr/local/bin
2: curl -L https://github.com/docker/compose/releases/download/1.20.1/docker-compose-`uname -s`-`uname -m` -o /usr/local/bin/docker-compose
3: chmod a+x /usr/local/bin/docker-compose
4: docker-compose --version
Install single-node k3s with kubekey
export KKZONE=cn
curl -sfL https://get-kk.kubesphere.io | sh -
./kk create config --with-kubernetes v1.21.4-k3s --with-kubesphere v3.3.2
## The key parts to change; the rest can be left as-is
hosts:
- {name: node1, address: <host-IP>, internalAddress: <host-IP>, user: root, password: "<password>"}
roleGroups:
  etcd:
  - node1
  control-plane:
  - node1
  worker:
  - node1
## After editing, create the cluster
./kk create cluster -f config-sample.yaml
If the IP was set wrong or initialization failed for some other reason:
./kk delete cluster -f config-sample.yaml
For example: a kubekey directory is created next to kk and caches state. If the IP was wrong and etcd failed to start with IP or certificate errors, handle it like this:
./kk delete cluster -f config-sample.yaml
rm -rf kubekey
Then create the cluster again.
Install K8s with KubeSphere (one master, one worker)
1: Install Docker in advance
I use this script:
curl -fsSL https://get.docker.com -o install-docker.sh
sh install-docker.sh --mirror Aliyun --channel stable --version 20.10
2: If you don't know the machine's password, create a user with root privileges. The following creates user jenkins with password pwd123 and root rights:
adduser jenkins
passwd jenkins
Enter the password twice:
pwd123
pwd123
usermod -g root jenkins
vim /etc/passwd
## Change the user ID and group ID to 0:
jenkins:x:0:0::/home/jenkins:/bin/bash
1. Set DNS
vim /etc/resolv.conf
nameserver 8.8.8.8
nameserver 114.114.114.114
2. Download kk
cd /data/k8s (create the directory if it doesn't exist)
export KKZONE=cn
curl -sfL https://get-kk.kubesphere.io | VERSION=v3.0.7 sh -
tar -xzvf kubekey-v3.0.7-linux-amd64.tar.gz
chmod +x kk
3. Create the configuration
Generate config-sample.yaml:
./kk create config --with-kubesphere v3.3.2
vim config-sample.yaml
Mainly change:
1: the machine names and host settings under hosts
2: the roleGroups settings
My example changes below:
apiVersion: kubekey.kubesphere.io/v1alpha2
kind: Cluster
metadata:
  name: sample
spec:
  hosts:
  - {name: master1, address: <host-IP>, internalAddress: <host-IP>, user: jenkins, password: "pwd123"}
  - {name: node1, address: <host-IP>, internalAddress: <host-IP>, user: jenkins, password: "pwd123"}
  roleGroups:
    etcd:
    - master1
    control-plane:
    - master1
    worker:
    - node1
4: Install the required tools on all cluster hosts
yum install -y conntrack
yum install -y socat
5: Create the cluster
./kk create cluster -f config-sample.yaml
Compose files for common services
1: mongodb
version: '2'
services:
  mongo:
    image: mongo:4.4.0
    restart: always
    environment:
      - TZ=Asia/Shanghai
    ports:
      - 27017:27017
    volumes:
      - ./data/db:/data/db # data directory
      - ./data/log:/var/log/mongodb # log directory
      - ./data/config:/etc/mongo # config directory
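The volumes above expect the host directories to exist. A small sketch to create them next to docker-compose.yaml before the first start:

```shell
# Create the host directories that the mongo volumes mount
mkdir -p ./data/db ./data/log ./data/config
ls -d ./data/db ./data/log ./data/config
```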
2:redis
Put redis.conf in the conf directory.
version: '2'
services:
  redis:
    image: redis:5
    container_name: redis
    hostname: redis
    restart: always
    environment:
      - TZ=Asia/Shanghai
    ports:
      - 6379:6379
    volumes:
      - ./conf/redis.conf:/etc/redis/redis.conf
      - ./data:/data
    command:
      redis-server /etc/redis/redis.conf
redis.conf
bind 0.0.0.0
protected-mode no
port 6379
tcp-backlog 511
timeout 0
tcp-keepalive 300
daemonize no
supervised no
pidfile /var/run/redis_6379.pid
loglevel notice
logfile ""
databases 16
always-show-logo yes
save 900 1
save 300 10
save 60 10000
stop-writes-on-bgsave-error yes
rdbcompression yes
rdbchecksum yes
dbfilename dump.rdb
dir ./
replica-serve-stale-data yes
replica-read-only yes
repl-diskless-sync no
repl-diskless-sync-delay 5
repl-disable-tcp-nodelay no
replica-priority 100
requirepass root
lazyfree-lazy-eviction no
lazyfree-lazy-expire no
lazyfree-lazy-server-del no
replica-lazy-flush no
appendonly no
appendfilename "appendonly.aof"
appendfsync everysec
no-appendfsync-on-rewrite no
auto-aof-rewrite-percentage 100
auto-aof-rewrite-min-size 64mb
aof-load-truncated yes
aof-use-rdb-preamble yes
lua-time-limit 5000
slowlog-log-slower-than 10000
slowlog-max-len 128
latency-monitor-threshold 0
notify-keyspace-events ""
hash-max-ziplist-entries 512
hash-max-ziplist-value 64
list-max-ziplist-size -2
list-compress-depth 0
set-max-intset-entries 512
zset-max-ziplist-entries 128
zset-max-ziplist-value 64
hll-sparse-max-bytes 3000
stream-node-max-bytes 4096
stream-node-max-entries 100
activerehashing yes
client-output-buffer-limit normal 0 0 0
client-output-buffer-limit replica 256mb 64mb 60
client-output-buffer-limit pubsub 32mb 8mb 60
hz 10
dynamic-hz yes
aof-rewrite-incremental-fsync yes
rdb-save-incremental-fsync yes
3: nacos
(If deploying on k8s, the database tables must be initialized in advance.
https://github.com/nacos-group/nacos-docker/blob/master/example/image/mysql/8/Dockerfile
The second line of that Dockerfile has the download URL for the table-initialization SQL:
https://github.com/alibaba/nacos/archive/refs/tags/2.0.4.zip
)
Put application.properties under ./nacos/conf/ next to the compose file. Note: in db.url.0, 127.0.0.1 refers to the Nacos container itself, so point it at a MySQL address reachable from inside the container.
version: "2"
services:
  nacos:
    image: nacos/nacos-server:v2.0.4
    container_name: nacos-container
    volumes:
      - ./nacos/standalone-logs/:/home/nacos/logs
      - ./nacos/conf/application.properties:/home/nacos/conf/application.properties
    ports:
      - "8848:8848"
      - "9848:9848"
      - "9555:9555"
    restart: always
application.properties
spring.datasource.platform=mysql
db.num=1
db.url.0=jdbc:mysql://127.0.0.1:3306/nacos?characterEncoding=utf8&connectTimeout=1000&socketTimeout=3000&autoReconnect=true&useUnicode=true&useSSL=false&serverTimezone=UTC
db.user=root
db.password=root
nacos.naming.empty-service.auto-clean=true
nacos.naming.empty-service.clean.initial-delay-ms=50000
nacos.naming.empty-service.clean.period-time-ms=30000
management.endpoints.web.exposure.include=*
management.metrics.export.elastic.enabled=false
management.metrics.export.influx.enabled=false
server.tomcat.accesslog.enabled=true
server.tomcat.accesslog.pattern=%h %l %u %t "%r" %s %b %D %{User-Agent}i %{Request-Source}i
server.tomcat.basedir=
nacos.security.ignore.urls=/,/error,/**/*.css,/**/*.js,/**/*.html,/**/*.map,/**/*.svg,/**/*.png,/**/*.ico,/console-ui/public/**,/v1/auth/**,/v1/console/health/**,/actuator/**,/v1/console/server/**
nacos.core.auth.system.type=nacos
nacos.core.auth.enabled=false
nacos.core.auth.default.token.expire.seconds=18000
nacos.core.auth.default.token.secret.key=SecretKey012345678901234567890123456789012345678901234567890123456789
nacos.core.auth.caching.enabled=true
nacos.core.auth.enable.userAgentAuthWhite=false
nacos.core.auth.server.identity.key=serverIdentity
nacos.core.auth.server.identity.value=security
nacos.istio.mcp.server.enabled=false
4: kafka, zookeeper, kafka-manager
version: '3'
services:
  zookeeper:
    image: wurstmeister/zookeeper
    container_name: zookeeper_container
    volumes:
      - ./zkdata:/data
    ports:
      - "2181:2181"
    restart: always
  kafka:
    image: wurstmeister/kafka
    container_name: kafka_container
    volumes:
      - ./kfdata:/kafka
    ports:
      - "39092:9092"
    environment:
      - KAFKA_ZOOKEEPER_CONNECT=192.168.0.24:2181
      - KAFKA_ADVERTISED_LISTENERS=PLAINTEXT://192.168.0.24:9092
      - KAFKA_LISTENERS=PLAINTEXT://:9092
      - KAFKA_AUTO_CREATE_TOPICS_ENABLE=true
    restart: always
  kafka-manager:
    image: kafkamanager/kafka-manager:2.0.0.2
    container_name: kafka-manager_container
    environment:
      ZK_HOSTS: 192.168.0.24:2181
    ports:
      - 19000:9000
kafka3
In the same directory as docker-compose.yaml:
1: create the kafkadata directory
chmod 777 kafkadata
2: change the IP in the yaml file
version: "3"
services:
  kafka:
    image: 'bitnami/kafka:latest'
    container_name: kafka3
    restart: always
    ports:
      - "19092:9092"
      - "19093:9093"
    volumes:
      - ./kafkadata:/bitnami/kafka
    environment:
      - BITNAMI_DEBUG=yes
      - KAFKA_CFG_NODE_ID=1
      - KAFKA_CFG_PROCESS_ROLES=broker,controller
      - KAFKA_CFG_CONTROLLER_LISTENER_NAMES=CONTROLLER
      - KAFKA_BROKER_ID=1
      - KAFKA_CFG_LISTENERS=CONTROLLER://:9094,BROKER://:9092,EXTERNAL://:9093
      - KAFKA_CFG_LISTENER_SECURITY_PROTOCOL_MAP=CONTROLLER:PLAINTEXT,BROKER:PLAINTEXT,EXTERNAL:PLAINTEXT
      - KAFKA_CFG_ADVERTISED_LISTENERS=BROKER://<host-IP>:19092,EXTERNAL://<host-IP>:19093
      - KAFKA_CFG_INTER_BROKER_LISTENER_NAME=BROKER
      - KAFKA_CFG_CONTROLLER_QUORUM_VOTERS=1@127.0.0.1:9094
      - ALLOW_PLAINTEXT_LISTENER=yes
5: mysql
(To use another MySQL version, just change the image tag, e.g. image: mysql:5.7.18 or image: mysql:8.0.18)
mkdir -p ./mysql/{mydir,datadir,conf,source}
version: '3'
services:
  mysql:
    restart: always
    image: mysql:8.0.18
    container_name: mysql_container
    volumes:
      - ./mysql/mydir:/mydir
      - ./mysql/datadir:/var/lib/mysql
      - ./mysql/conf/my.cnf:/etc/my.cnf
    environment:
      - "MYSQL_ROOT_PASSWORD=root"
      - "TZ=Asia/Shanghai"
      - "MYSQL_ROOT_HOST=%"
    ports:
      - 3306:3306
my.cnf
[mysqld]
user=mysql
default-storage-engine=INNODB
character-set-server=utf8
character-set-client-handshake=FALSE
collation-server=utf8_unicode_ci
init_connect='SET NAMES utf8'
max_connections=2048
thread_cache_size=18
innodb_buffer_pool_size=2G
innodb_log_file_size=256M
innodb_buffer_pool_instances=1
innodb_flush_log_at_trx_commit=2
read_buffer_size = 16M
read_rnd_buffer_size = 8M
sort_buffer_size = 8M
table_open_cache=256
log-bin=mysql-binlog
binlog-format=ROW
server-id=1
lower_case_table_names=1
max_allowed_packet = 300M
sql_mode=STRICT_TRANS_TABLES,NO_ZERO_IN_DATE,NO_ZERO_DATE,ERROR_FOR_DIVISION_BY_ZERO,NO_ENGINE_SUBSTITUTION
[client]
default-character-set=utf8
[mysql]
default-character-set=utf8
MySQL multi-host master/replica
Master configuration
[mysqld]
user=mysql
default-storage-engine=INNODB
character-set-server=utf8
character-set-client-handshake=FALSE
collation-server=utf8_unicode_ci
init_connect='SET NAMES utf8'
max_connections=2048
thread_cache_size=18
innodb_buffer_pool_size=2G
innodb_log_file_size=256M
innodb_buffer_pool_instances=1
innodb_flush_log_at_trx_commit=2
read_buffer_size = 16M
read_rnd_buffer_size = 8M
sort_buffer_size = 8M
table_open_cache=256
log-bin=mysql-binlog
binlog-format=ROW
server-id=10
lower_case_table_names=1
max_allowed_packet = 300M
[client]
default-character-set=utf8
[mysql]
default-character-set=utf8
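The master my.cnf alone doesn't establish replication: the replica needs its own distinct server-id in my.cnf (e.g. server-id=11) and must then be pointed at the master. A hedged sketch; the repl user, its password, and the file/position values are placeholders, not from these notes:

```sql
-- On the master: create a replication account and read the binlog coordinates
CREATE USER 'repl'@'%' IDENTIFIED BY 'replpwd';
GRANT REPLICATION SLAVE ON *.* TO 'repl'@'%';
SHOW MASTER STATUS;  -- note the File and Position values

-- On the replica (server-id must differ from the master's):
CHANGE MASTER TO
  MASTER_HOST='<master-ip>',
  MASTER_USER='repl',
  MASTER_PASSWORD='replpwd',
  MASTER_LOG_FILE='<File from SHOW MASTER STATUS>',
  MASTER_LOG_POS=<Position from SHOW MASTER STATUS>;
START SLAVE;
SHOW SLAVE STATUS\G  -- Slave_IO_Running and Slave_SQL_Running should both be Yes
```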
6: redis
version: '2'
services:
  redis:
    image: redis:5.0.0
    container_name: redis
    command: redis-server --requirepass root
    ports:
      - "6379:6379"
    volumes:
      - ./data:/data
7: es, kibana, es-head
services:
  elasticsearch:
    image: elasticsearch:7.17.1
    container_name: elasticsearch
    environment:
      - discovery.type=single-node
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
      - TZ=Asia/Shanghai
    ports:
      - "9200:9200"
      - "9300:9300"
    volumes:
      - ./elasticsearch/logs:/usr/share/elasticsearch/logs
      - ./elasticsearch/data:/usr/share/elasticsearch/data
      - ./elasticsearch/config/elasticsearch.yml:/usr/share/elasticsearch/config/elasticsearch.yml
    healthcheck:
      test: ["CMD-SHELL", "curl --silent --fail localhost:9200/_cluster/health || exit 1"]
      interval: 10s
      timeout: 10s
      retries: 3
  kibana:
    image: kibana:7.17.2
    container_name: kibana
    ports:
      - "5601:5601"
    volumes:
      - ./kibana/kibana.yml:/usr/share/kibana/config/kibana.yml:rw
  es-head:
    image: tobias74/elasticsearch-head:latest
    container_name: es-head
    restart: always
    ports:
      - "9100:9100"
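The mounted host paths need to exist and be writable by the container user before the first start (the official elasticsearch image runs as uid 1000); a sketch:

```shell
# Create the host directories mounted by the compose file above
mkdir -p ./elasticsearch/logs ./elasticsearch/data ./elasticsearch/config ./kibana
# Make them writable for the elasticsearch container user (uid 1000);
# chown needs root, so fall back to chmod otherwise
chown -R 1000:1000 ./elasticsearch 2>/dev/null || chmod -R 777 ./elasticsearch
```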
elasticsearch.yml
# Cluster name
cluster.name: elasticsearch-cluster
# Node name
node.name: es-node-1
network.bind_host: 0.0.0.0
# Bind host; 0.0.0.0 means all addresses of this node
network.host: 0.0.0.0
# Address other nodes use to reach this node; if unset it is auto-detected; must be a real (local) IP
network.publish_host: 0.0.0.0
# HTTP port for external access, default 9200
http.port: 9200
# TCP port for inter-node transport, default 9300
transport.tcp.port: 9300
# Whether to allow cross-origin requests, default false
http.cors.enabled: true
# Allowed origins when CORS is enabled; the default * allows all domains. To restrict, use a regex, e.g. local addresses only: /https?:\/\/localhost(:[0-9]+)?/
http.cors.allow-origin: "*"
http.cors.allow-headers: Authorization,X-Requested-With,Content-Length,Content-Type
# Whether this node may act as master
node.master: true
# Whether this node stores data
node.data: true
# All cluster node ip:port pairs
#discovery.seed_hosts: ["192.168.200.135:9300"] # commented out: with only one local node this prevents startup
# Number of nodes required to take part in master election (split-brain protection): N/2+1
discovery.zen.minimum_master_nodes: 1
# Initial master nodes
#cluster.initial_master_nodes: ["es-node-1"] # commented out: with only one local node this prevents startup
kibana.yml
server.name: kibana
# Kibana bind address; 0.0.0.0 listens on all IPs
server.host: "0.0.0.0"
# URL Kibana uses to reach ES
elasticsearch.hosts: [ "http://192.168.133.1:9200" ]
# Show the login page
xpack.monitoring.ui.container.elasticsearch.enabled: true
# UI language
i18n.locale: "zh-CN"
pgsql
version: "3.1"
services:
  db_test:
    image: postgres:9.6
    environment:
      POSTGRES_PASSWORD: root
      POSTGRES_USER: root
      POSTGRES_DB: dev
      TZ: Asia/Shanghai
    ports:
      - 5432:5432
    volumes:
      - ./data:/var/lib/postgresql/data
    restart: always
kong
version: "3"
services:
  kong-migration:
    image: kong:latest
    command: "kong migrations bootstrap"
    restart: on-failure
    environment:
      KONG_PG_HOST: xxxIP
      KONG_DATABASE: postgres
      KONG_PG_USER: root
      KONG_PG_PASSWORD: root
  kong:
    image: kong:latest
    restart: always
    environment:
      KONG_PG_HOST: xxxIP
      KONG_DATABASE: postgres
      KONG_PG_USER: root
      KONG_PG_PASSWORD: root
      KONG_CASSANDRA_CONTACT_POINTS: xxxIP
      KONG_PROXY_LISTEN: 0.0.0.0:8000
      KONG_PROXY_LISTEN_SSL: 0.0.0.0:8443
      KONG_ADMIN_LISTEN: 0.0.0.0:8001
      TZ: Asia/Shanghai
    healthcheck:
      test: ["CMD", "curl", "-f", "http://xxxIP:8001"]
      interval: 5s
      timeout: 2s
      retries: 15
    ports:
      - "8001:8001"
      - "8000:8000"
      - "8443:8443"
      - "8444:8444"
konga
version: "3"
services:
  konga-prepare:
    image: pantsel/konga:latest
    command: "-c prepare -a postgres -u postgresql://root:root@xxxIP:5432/konga"
    restart: on-failure
  konga:
    image: pantsel/konga:latest
    restart: always
    environment:
      DB_ADAPTER: postgres
      DB_URI: postgres://root:root@xxxIP:5432/konga
      NODE_ENV: production
      TZ: Asia/Shanghai
    ports:
      - "1337:1337"
kong + konga + pg combined deployment
- After deployment, open http://<host-IP>:1337/ e.g. http://192.168.3.3:1337
- Add a kong connection: connection http://<host-IP>:8001 e.g. http://192.168.3.3:8001
version: "3"
networks:
  kong-net:
    driver: bridge
services:
  kong-database:
    image: postgres:9.6
    restart: always
    networks:
      - kong-net
    environment:
      POSTGRES_USER: kong
      POSTGRES_DB: kong
      POSTGRES_PASSWORD: kong
    ports:
      - "5432:5432"
    healthcheck:
      test: ["CMD", "pg_isready", "-U", "kong"]
      interval: 5s
      timeout: 5s
      retries: 5
  kong-migration:
    image: kong:3.5.0
    command: "kong migrations bootstrap"
    networks:
      - kong-net
    restart: on-failure
    environment:
      - KONG_DATABASE=postgres
      - KONG_PG_HOST=kong-database
      - KONG_PG_DATABASE=kong
      - KONG_PG_PASSWORD=kong
    links:
      - kong-database
    depends_on:
      - kong-database
  kong:
    image: kong:3.5.0
    restart: always
    networks:
      - kong-net
    environment:
      KONG_DATABASE: postgres
      KONG_PG_HOST: kong-database
      KONG_PG_PASSWORD: kong
      KONG_PROXY_LISTEN: 0.0.0.0:8000
      KONG_PROXY_LISTEN_SSL: 0.0.0.0:8443
      KONG_ADMIN_LISTEN: 0.0.0.0:8001
    depends_on:
      - kong-migration
    links:
      - kong-database
    healthcheck:
      test: ["CMD", "curl", "-f", "http://kong:8001"]
      interval: 5s
      timeout: 2s
      retries: 15
    ports:
      - "8001:8001"
      - "8000:8000"
      - "8443:8443"
  konga-prepare:
    image: pantsel/konga:0.14.9
    command: "-c prepare -a postgres -u postgresql://kong:kong@kong-database:5432/konga"
    networks:
      - kong-net
    restart: on-failure
    environment:
      - KONG_DATABASE=postgres
      - KONG_PG_HOST=kong-database
      - KONG_PG_DATABASE=konga
      - KONG_PG_PASSWORD=kong
    links:
      - kong-database
    depends_on:
      - kong-database
  konga:
    image: pantsel/konga:0.14.9
    restart: always
    networks:
      - kong-net
    environment:
      DB_ADAPTER: postgres
      DB_URI: postgresql://kong:kong@kong-database:5432/konga
      NODE_ENV: production
    links:
      - kong-database
    depends_on:
      - kong
      - konga-prepare
    ports:
      - "1337:1337"
jenkins
jenkins/jenkins:2.452.1-lts
version: "2.4"
services:
  jenkins:
    image: jenkins/jenkins:2.452.1-lts
    restart: always
    privileged: true
    user: root
    container_name: jenkins
    cpus: 2
    mem_limit: 4g
    environment:
      - TZ=Asia/Shanghai
    volumes:
      - /etc/localtime:/etc/localtime:ro
      - ./jks_data:/var/jenkins_home
    ports:
      - "8888:8080"
      - "50000:50000"
After starting:
Go into the mounted data directory (jks_data)
and change the url in hudson.model.UpdateCenter.xml to
https://mirrors.tuna.tsinghua.edu.cn/jenkins/updates/update-center.json
Restart the container.
Refresh the page; when it asks for the initial password, go into the updates directory under the Jenkins mount
and run
sed -i 's#http://updates.jenkins.io/download#https://mirrors.tuna.tsinghua.edu.cn/jenkins#g' default.json && sed -i 's#http://www.google.com#https://www.baidu.com#g' default.json
Restart the container again,
then continue with the remaining setup.
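The sed rewrite above can be tried on a tiny sample first (the /tmp path and sample JSON here are illustrative; the real file is default.json in the updates directory):

```shell
# Sample fragment with the two URLs the rewrite targets
cat << 'EOF' > /tmp/default.json
{"connectionCheckUrl":"http://www.google.com","url":"http://updates.jenkins.io/download/plugins/git.hpi"}
EOF
sed -i 's#http://updates.jenkins.io/download#https://mirrors.tuna.tsinghua.edu.cn/jenkins#g' /tmp/default.json
sed -i 's#http://www.google.com#https://www.baidu.com#g' /tmp/default.json
cat /tmp/default.json   # both URLs now point at baidu and the tuna mirror
```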
consul
version: '2'
services:
  consul1:
    image: consul:1.9.17
    network_mode: bridge
    container_name: consul1
    command: "agent -server -node=node1 -bind=0.0.0.0 -client=0.0.0.0 -bootstrap-expect=1 -ui"
    restart: always
    ports:
      - "8500:8500"
      - "8300:8300"
      - "8301:8301"
      - "8302:8302"
      - "8600:8600"
Portainer management UI
version: '2'
services:
  portainer:
    image: portainer/portainer-ce
    container_name: portainer
    ports:
      - 9000:9000
      - 8000:8000
    restart: always
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
      - ./data:/data
Harbor installation
cd /data/harbor
wget https://github.com/goharbor/harbor/releases/download/v2.8.2/harbor-offline-installer-v2.8.2.tgz
Tip: use https://github.akams.cn/ to proxy the download, then upload the file to the server.
tar -xzf harbor-offline-installer-v2.8.2.tgz
cd harbor
mkdir harbordata
chmod 777 harbordata/
cp harbor.yml.tmpl harbor.yml
vim harbor.yml
1: change hostname to this machine's address,
and change http port to a custom port, e.g. 20080
2: no SSL: comment out the https-related settings
3: change data_volume to /data/harbor/harbor/harbordata
Then run the following (if you change the config after it has run, run it again):
./prepare
4: Start Harbor
./install.sh
5: Add Docker configuration
vim /etc/docker/daemon.json
Add the insecure-registries parameter:
{
  "insecure-registries": ["<host-IP>:20080"]
}
6: Reload Docker
systemctl daemon-reload
systemctl restart docker.service
7: Finally, open
http://<host-IP>:20080
Initial username: admin
Initial password: Harbor12345
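If daemon.json already contains other keys (e.g. registry-mirrors or data-root from the earlier sections), merge insecure-registries into the existing object rather than replacing the file; a sketch combining the earlier examples:

```json
{
  "registry-mirrors": ["https://dockerpull.com", "https://docker.1panelproxy.com"],
  "data-root": "/data/docker",
  "insecure-registries": ["<host-IP>:20080"]
}
```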
minio
mkdir -p ./{data,config}
chmod 777 data
chmod 777 config
version: '3.8'
services:
  minio:
    image: bitnami/minio:2022-debian-11
    privileged: true
    restart: always
    ports:
      - '9000:9000'
      - '9001:9001'
    volumes:
      - ./data:/data
      - ./config:/root/.minio
    environment:
      MINIO_ROOT_USER: minioadmin
      MINIO_ROOT_PASSWORD: minioadmin
      CONTAINER_TIMEZONE: Asia/Shanghai
    command: minio server /data --console-address ":9001"
nginx
mkdir -p ./nginx/html
mkdir -p ./nginx/conf.d
mkdir -p ./nginx/log
(TIP: to serve static files, e.g. a Vue dist, add a matching volume in docker-compose.yml)
version: '3'
services:
  nginx:
    container_name: nginx
    network_mode: host
    image: nginx:stable
    privileged: true
    volumes:
      - ./nginx/html:/etc/nginx/html
      - ./nginx/conf.d:/etc/nginx/conf.d
      - ./nginx/log:/var/log/nginx
    restart: always
Generate the example files and place them in the directories created in step 1:
cp -f nginxhttp.conf nginx/conf.d/nginxhttp.conf
cp -f nginxssl.conf nginx/conf.d/nginxssl.conf
cp -f ssl.crt nginx/conf.d/ssl.crt
cp -f ssl.key nginx/conf.d/ssl.key
# Example: nginxhttp.conf
server {
    listen 80;
    client_max_body_size 1000m;
    add_header Access-Control-Allow-Origin *;
    add_header Access-Control-Allow-Credentials true;
    add_header Access-Control-Allow-Methods 'GET,POST,OPTIONS,PUT,DELETE';
    add_header 'Access-Control-Allow-Headers' 'DNT,X-CustomHeader,Keep-Alive,User-Agent,X-Requested-With,If-Modified-Since,Cache-Control,Content-Type' always;
    add_header 'Access-Control-Allow-Headers' '*' always;
    add_header Cache-Control no-cache;
    if ($request_method = 'OPTIONS') {
        return 200;
    }
    # e.g. a Vue app
    location / {
        alias /home/service/xxx/dist/;
        index index.html index.htm;
        try_files $uri $uri/ /index.html;
    }
    # e.g. a backend API
    location ^~/api/ {
        proxy_pass http://127.0.0.1:8080/;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Host $http_host;
        proxy_set_header X-Forwarded-Port $server_port;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}
# Example: nginxssl.conf
server {
    listen 443 ssl;
    client_max_body_size 1000m;
    ssl_certificate /etc/nginx/conf.d/ssl.crt;     # path to the certificate (pem) file
    ssl_certificate_key /etc/nginx/conf.d/ssl.key; # path to the key file
    ssl_protocols TLSv1 TLSv1.1 TLSv1.2;           # allowed TLS protocol versions
    add_header Access-Control-Allow-Origin *;
    add_header Access-Control-Allow-Credentials true;
    add_header Access-Control-Allow-Methods 'GET,POST,OPTIONS,PUT,DELETE';
    add_header 'Access-Control-Allow-Headers' 'DNT,X-CustomHeader,Keep-Alive,User-Agent,X-Requested-With,If-Modified-Since,Cache-Control,Content-Type' always;
    add_header 'Access-Control-Allow-Headers' '*' always;
    add_header Cache-Control no-cache;
    if ($request_method = 'OPTIONS') {
        return 200;
    }
    location / {
        proxy_pass http://127.0.0.1:8081;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Host $http_host;
        proxy_set_header X-Forwarded-Port $server_port;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
    location ^~/api/ {
        proxy_pass http://127.0.0.1:8080/;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Host $http_host;
        proxy_set_header X-Forwarded-Port $server_port;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}