Setting Up and Trying Out Docker Swarm and Kubernetes Clusters

I. Docker Swarm Cluster Setup and Trial

Docker Swarm Setup

1. OS Setup

[root@vm1 ~]# ip -br a | grep 0s8 | awk '{print $3}'
192.168.50.100/24
[root@vm2 ~]# ip -br a | grep 0s8 | awk '{print $3}'
192.168.50.120/24
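
The transcript skips the host preparation. A minimal sketch, assuming CentOS hosts with firewalld (the hostnames and the port list are the standard Swarm requirements, not shown in the original session):

# on vm1 (repeat on vm2 with its own name)
hostnamectl set-hostname vm1
# open the ports Docker Swarm needs: 2377/tcp (cluster management),
# 7946/tcp+udp (node communication), 4789/udp (overlay/VXLAN traffic)
firewall-cmd --permanent --add-port=2377/tcp
firewall-cmd --permanent --add-port=7946/tcp
firewall-cmd --permanent --add-port=7946/udp
firewall-cmd --permanent --add-port=4789/udp
firewall-cmd --reload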

2. Install Docker

[root@vm1 ~]# cat install-docker.sh
# remove any old Docker packages and data
yum remove docker* -y
rm -rf /var/lib/docker
# add the official Docker CE repository and install
yum -y install wget
wget -O /etc/yum.repos.d/docker-ce.repo https://download.docker.com/linux/centos/docker-ce.repo
yum install docker-ce docker-ce-cli containerd.io -y
# verify the installation, then enable and start the daemon
docker --version
systemctl enable docker --now
docker run hello-world
[root@vm1 ~]# bash install-docker.sh
[root@vm1 ~]# curl -L "https://github.com/docker/compose/releases/download/1.29.2/docker-compose-$(uname -s)-$(uname -m)" -o /usr/local/bin/docker-compose
[root@vm1 ~]# chmod +x /usr/local/bin/docker-compose
[root@vm1 ~]# docker -v
Docker version 20.10.12, build e91ed57
[root@vm1 ~]# docker-compose -v
docker-compose version 1.29.2, build 5becea4c
[root@vm2 ~]# docker -v
Docker version 20.10.12, build e91ed57
[root@vm2 ~]# docker-compose -v
docker-compose version 1.29.2, build 5becea4c

3. Configure the docker0 Network

[root@vm1 ~]# docker network inspect bridge -f "{{.IPAM.Config}}"
[{192.168.80.0/24  192.168.80.1 map[]}]
[root@vm2 ~]# docker network inspect bridge -f "{{.IPAM.Config}}"
[{192.168.90.0/24  192.168.90.1 map[]}]
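
The differing docker0 subnets shown above were presumably set through the daemon's bip option; a sketch for vm1 (use 192.168.90.1/24 on vm2):

# /etc/docker/daemon.json -- "bip" sets docker0's address and subnet
cat <<EOF > /etc/docker/daemon.json
{
  "bip": "192.168.80.1/24"
}
EOF
systemctl restart docker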

Building the Swarm Cluster

1. Initialize

[root@vm1 ~]# docker swarm init --advertise-addr 192.168.50.100
Swarm initialized: current node (kdcrkd6sqteevq9jgy70fd0h0) is now a manager.
To add a worker to this swarm, run the following command:
docker swarm join --token SWMTKN-1-0h1lwim8eh8ygwmh9mg3chuep8lf3dh0z8iv79v3km7itl28ww-4yt8thj4v7a2sj45iab6v8qqj 192.168.50.100:2377
To add a manager to this swarm, run 'docker swarm join-token manager' and follow the instructions.
[root@vm1 ~]# docker swarm join-token worker
To add a worker to this swarm, run the following command:
docker swarm join --token SWMTKN-1-0h1lwim8eh8ygwmh9mg3chuep8lf3dh0z8iv79v3km7itl28ww-4yt8thj4v7a2sj45iab6v8qqj 192.168.50.100:2377
[root@vm1 ~]# docker node ls
ID                            HOSTNAME   STATUS    AVAILABILITY   MANAGER STATUS   ENGINE VERSION
kdcrkd6sqteevq9jgy70fd0h0 *   vm1        Ready     Active         Leader           20.10.12

2. Add the Worker to the Swarm Cluster

[root@vm2 ~]# docker swarm join --token SWMTKN-1-0h1lwim8eh8ygwmh9mg3chuep8lf3dh0z8iv79v3km7itl28ww-4yt8thj4v7a2sj45iab6v8qqj 192.168.50.100:2377
This node joined a swarm as a worker.
[root@vm1 ~]# docker node ls
ID                            HOSTNAME   STATUS    AVAILABILITY   MANAGER STATUS   ENGINE VERSION
kdcrkd6sqteevq9jgy70fd0h0 *   vm1        Ready     Active         Leader           20.10.12
4hh92oj2meotbi0etnje15bzq     vm2        Ready     Active                          20.10.12

3. Add Labels

(1) Add Node Labels

[root@vm1 ~]# docker node update --label-add name=swarm-master-1 vm1
[root@vm1 ~]# docker node update --label-add name=swarm-master-2 vm2
[root@vm1 ~]# docker node ls
ID                            HOSTNAME   STATUS    AVAILABILITY   MANAGER STATUS   ENGINE VERSION
kdcrkd6sqteevq9jgy70fd0h0 *   vm1        Ready     Active         Leader           20.10.12
4hh92oj2meotbi0etnje15bzq     vm2        Ready     Active                          20.10.12

(2) View Labels

[root@vm1 ~]# docker node inspect vm1 -f "{{.Spec.Labels}}"
map[name:swarm-master-1]
[root@vm1 ~]# docker node inspect vm2 -f "{{.Spec.Labels}}"
map[name:swarm-master-2]
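
Node labels are mainly useful as placement constraints when creating services. A sketch using the name label added in (1) (the service name web is illustrative):

# run both replicas only on nodes whose "name" label is swarm-master-1
docker service create --name web --replicas 2 \
  --constraint 'node.labels.name == swarm-master-1' nginx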

(3) Update Labels

[root@vm1 ~]# docker node update --help
Usage:  docker node update [OPTIONS] NODE
Update a node

Options:
      --availability string   Availability of the node ("active"|"pause"|"drain")
      --label-add list        Add or update a node label (key=value)
      --label-rm list         Remove a node label if exists
      --role string           Role of the node ("worker"|"manager")
[root@vm1 ~]# docker node update --label-add name=master-2 vm2
vm2
[root@vm1 ~]# echo $?
0
[root@vm1 ~]#
[root@vm1 ~]# docker node ls
ID                            HOSTNAME   STATUS    AVAILABILITY   MANAGER STATUS   ENGINE VERSION
kdcrkd6sqteevq9jgy70fd0h0 *   vm1        Ready     Active         Leader           20.10.12
4hh92oj2meotbi0etnje15bzq     vm2        Ready     Active                          20.10.12
[root@vm1 ~]# docker node update --label-add HOSTNAME=master-2 vm2
[root@vm1 ~]# docker node ls
ID                            HOSTNAME   STATUS    AVAILABILITY   MANAGER STATUS   ENGINE VERSION
kdcrkd6sqteevq9jgy70fd0h0 *   vm1        Ready     Active         Leader           20.10.12
4hh92oj2meotbi0etnje15bzq     vm2        Ready     Active                          20.10.12
[root@vm1 ~]# docker node promote vm2
Node vm2 promoted to a manager in the swarm.

4. Promote the Worker to a Manager

[root@vm1 ~]# docker node ls
ID                            HOSTNAME   STATUS    AVAILABILITY   MANAGER STATUS   ENGINE VERSION
kdcrkd6sqteevq9jgy70fd0h0 *   vm1        Ready     Active         Leader           20.10.12
4hh92oj2meotbi0etnje15bzq     vm2        Ready     Active         Reachable        20.10.12
[root@vm2 ~]# docker node ls
ID                            HOSTNAME   STATUS    AVAILABILITY   MANAGER STATUS   ENGINE VERSION
kdcrkd6sqteevq9jgy70fd0h0     vm1        Ready     Active         Leader           20.10.12
4hh92oj2meotbi0etnje15bzq *   vm2        Ready     Active         Reachable        20.10.12

5. View Node Information

[root@vm1 ~]# docker node inspect vm2 -f "{{.Spec.Labels}}"
map[HOSTNAME:master-2 name:master-2]
[root@vm1 ~]# docker node inspect vm2

6. Create an Overlay Network

[root@vm1 ~]# docker network create -d overlay --subnet=192.168.82.0/24 --gateway=192.168.82.1 --attachable swarm-net
xywzrf7ftwenaxbu0zmewh183
[root@vm1 ~]# docker network inspect swarm-net -f "{{.IPAM}}"
{default map[] [{192.168.82.0/24  192.168.82.1 map[]}]}
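
Because the network was created with --attachable, a standalone (non-service) container can also join it. A quick sketch to confirm that such a container gets an address from the overlay subnet:

# the container's eth0 should show an inet address in 192.168.82.0/24
docker run --rm --network swarm-net busybox ip addr show eth0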

7. Create a Service and Verify

(1) Create

[root@vm1 ~]# docker service create --replicas 3 -p 10080:80 --network swarm-net --name nginx-cluster nginx
r4v6w094yxl370bynyzghh37a
overall progress: 3 out of 3 tasks
1/3: running   [==================================================>]
2/3: running   [==================================================>]
3/3: running   [==================================================>]
verify: Service converged
[root@vm1 ~]#

(2) View

[root@vm1 ~]# docker service ls
ID             NAME            MODE         REPLICAS   IMAGE          PORTS
r4v6w094yxl3   nginx-cluster   replicated   3/3        nginx:latest   *:10080->80/tcp
[root@vm1 ~]# ss -ntl | grep 10080
LISTEN 0      128                *:10080            *:*
[root@vm1 ~]# docker ps
CONTAINER ID   IMAGE          COMMAND                  CREATED         STATUS         PORTS     NAMES
3ddd0e479de6   nginx:latest   "/docker-entrypoint.…"   7 minutes ago   Up 7 minutes   80/tcp    nginx-cluster.1.6i1ncpnncxzxmsckwyahj9f7l
[root@vm1 ~]# docker port nginx-cluster.1.6i1ncpnncxzxmsckwyahj9f7l
[root@vm1 ~]# echo $?
0
[root@vm1 ~]#

docker port prints nothing here (yet exits 0) because the service's port is published through the Swarm ingress routing mesh rather than mapped onto the individual task container.

(3) Access

[root@vm1 ~]# curl 192.168.50.100:10080
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
...

8. Check How Requests to a Single Host Are Load-Balanced

(1) Modify the Default Web Pages

[root@vm1 ~]# docker exec -it nginx-cluster.1.6i1ncpnncxzxmsckwyahj9f7l bash
root@3ddd0e479de6:/# echo '#1 in master 1' > /usr/share/nginx/html/index.html
[root@vm2 ~]# docker exec -it nginx-cluster.2.gp7szi6348r0gfakr6v1i42ga bash
root@0d6709372322:/# echo '#2 in master 2' > /usr/share/nginx/html/index.html
root@0d6709372322:/#
[root@vm2 ~]# docker exec -it nginx-cluster.3.yofvioldzci3k4geve7lykyrs bash
root@6b1e246bdc34:/#  echo '#3 in master 2' > /usr/share/nginx/html/index.html

(2) Access Test

[root@vm2 ~]# curl 192.168.50.120:10080
#3 in master 2
[root@vm2 ~]# curl 192.168.50.120:10080
#1 in master 1
[root@vm2 ~]# curl 192.168.50.120:10080
#2 in master 2
[root@vm2 ~]# curl 192.168.50.120:10080
#3 in master 2
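
The ingress routing mesh balances connections across all replicas in round-robin fashion, and any node's address works even if that node runs no task for the service. A sketch that exercises it a few more times:

# expect the three pages to appear in rotation
for i in $(seq 1 6); do curl -s 192.168.50.100:10080; done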

9. Verify HA

[root@vm2 ~]# docker ps
CONTAINER ID   IMAGE          COMMAND                  CREATED          STATUS          PORTS     NAMES
6b1e246bdc34   nginx:latest   "/docker-entrypoint.…"   18 minutes ago   Up 18 minutes   80/tcp    nginx-cluster.3.yofvioldzci3k4geve7lykyrs
0d6709372322   nginx:latest   "/docker-entrypoint.…"   18 minutes ago   Up 18 minutes   80/tcp    nginx-cluster.2.gp7szi6348r0gfakr6v1i42ga
[root@vm2 ~]#
[root@vm2 ~]# systemctl stop docker
Warning: Stopping docker.service, but it can still be activated by:
  docker.socket
[root@vm2 ~]# systemctl status docker
● docker.service - Docker Application Container Engine

   Loaded: loaded (/usr/lib/systemd/system/docker.service; enabled; vendor preset: disabled)
[root@vm1 ~]# docker node ls
Error response from daemon: rpc error: code = DeadlineExceeded desc = context deadline exceeded
[root@vm2 ~]# systemctl status docker
● docker.service - Docker Application Container Engine
   Loaded: loaded (/usr/lib/systemd/system/docker.service; enabled; vendor preset: disabled)
   Active: active (running) since Sun 2022-01-23 13:22:38 JST; 28s ago
[root@vm1 ~]# docker node ls
ID                            HOSTNAME   STATUS    AVAILABILITY   MANAGER STATUS   ENGINE VERSION
kdcrkd6sqteevq9jgy70fd0h0 *   vm1        Ready     Active         Leader           20.10.12
4hh92oj2meotbi0etnje15bzq     vm2        Ready     Active         Reachable        20.10.12
[root@vm2 ~]# shutdown -h now
[root@vm1 ~]# docker node ls
ID                            HOSTNAME   STATUS    AVAILABILITY   MANAGER STATUS   ENGINE VERSION
kdcrkd6sqteevq9jgy70fd0h0 *   vm1        Ready     Active         Leader           20.10.12
4hh92oj2meotbi0etnje15bzq     vm2        Ready     Active         Unreachable      20.10.12
[root@vm1 ~]# docker ps
CONTAINER ID   IMAGE          COMMAND                  CREATED          STATUS          PORTS     NAMES
acc40b31df3f   nginx:latest   "/docker-entrypoint.…"   2 minutes ago    Up 2 minutes    80/tcp    nginx-cluster.3.ei48wkjvtmn53mfsjthlb52ef
d4635d8f2322   nginx:latest   "/docker-entrypoint.…"   2 minutes ago    Up 2 minutes    80/tcp    nginx-cluster.2.yyc2fmh73p23adbu44auuzf7r
3ddd0e479de6   nginx:latest   "/docker-entrypoint.…"   31 minutes ago   Up 31 minutes   80/tcp    nginx-cluster.1.6i1ncpnncxzxmsckwyahj9f7l
vm2 was then restarted, and a third node, vm3, joined the swarm as a worker:

[root@vm1 ~]# docker node ls
ID                            HOSTNAME   STATUS    AVAILABILITY   MANAGER STATUS   ENGINE VERSION
kdcrkd6sqteevq9jgy70fd0h0 *   vm1        Ready     Active         Leader           20.10.12
4hh92oj2meotbi0etnje15bzq     vm2        Ready     Active         Reachable        20.10.12
lp4i21pj0sij9yz81f7u8dzy7     vm3        Ready     Active                          20.10.12
[root@vm1 ~]# ip -br a | grep enp0s8 | awk '{print $3}'
192.168.50.100/24
[root@vm2 ~]# ip -br a | grep enp0s8 | awk '{print $3}'
192.168.50.120/24
[root@vm3 ~]# ip -br a | grep enp0s8 | awk '{print $3}'
192.168.50.130/24
[root@vm1 ~]# docker node ls
ID                            HOSTNAME   STATUS    AVAILABILITY   MANAGER STATUS   ENGINE VERSION
kdcrkd6sqteevq9jgy70fd0h0 *   vm1        Ready     Active         Leader           20.10.12
4hh92oj2meotbi0etnje15bzq     vm2        Ready     Active         Reachable        20.10.12
lp4i21pj0sij9yz81f7u8dzy7     vm3        Ready     Active                          20.10.12
[root@vm1 ~]# docker service rm r4v6w094yxl3
r4v6w094yxl3
[root@vm1 ~]# docker service create --replicas 6 -p 10080:80 --network swarm-net --name nginx-cluster nginx
kzdm5zhgt1eo9goxy0rjwmklm
overall progress: 6 out of 6 tasks
1/6: running   [==================================================>]
2/6: running   [==================================================>]
3/6: running   [==================================================>]
4/6: running   [==================================================>]
5/6: running   [==================================================>]
6/6: running   [==================================================>]
verify: Service converged
[root@vm1 ~]#
[root@vm1 ~]# docker service ls
ID             NAME            MODE         REPLICAS   IMAGE          PORTS
kzdm5zhgt1eo   nginx-cluster   replicated   6/6        nginx:latest   *:10080->80/tcp
[root@vm3 ~]# docker service ls
Error response from daemon: This node is not a swarm manager. Worker nodes can’t be used to view or modify cluster state. Please run this command on a manager node or promote the current node to a manager.
[root@vm1 ~]# docker ps
CONTAINER ID   IMAGE          COMMAND                  CREATED          STATUS          PORTS     NAMES
31177737f781   nginx:latest   "/docker-entrypoint.…"   28 seconds ago   Up 25 seconds   80/tcp    nginx-cluster.5.9a7wd7yqo4lweayw3352kssru
791b93b799d8   nginx:latest   "/docker-entrypoint.…"   28 seconds ago   Up 25 seconds   80/tcp    nginx-cluster.2.bcrt9chkfrmbjyy30n4g3il71
[root@vm2 ~]# docker ps
CONTAINER ID   IMAGE          COMMAND                  CREATED         STATUS         PORTS     NAMES
4a59afd4d2d2   nginx:latest   "/docker-entrypoint.…"   3 seconds ago   Up 2 seconds   80/tcp    nginx-cluster.1.c46oo5nzbchhbn6v07rft9d5b
bac746d67e4f   nginx:latest   "/docker-entrypoint.…"   4 seconds ago   Up 2 seconds   80/tcp    nginx-cluster.4.71wn1fz6yojbzunktws2qqslg
[root@vm3 ~]# docker ps
CONTAINER ID   IMAGE          COMMAND                  CREATED         STATUS         PORTS     NAMES
f0985bc21d07   nginx:latest   "/docker-entrypoint.…"   7 seconds ago   Up 5 seconds   80/tcp    nginx-cluster.6.vvykeb5g04gihc5jg4la26903
d14c20577a85   nginx:latest   "/docker-entrypoint.…"   7 seconds ago   Up 5 seconds   80/tcp    nginx-cluster.3.qwccv5u0jlwfp5txp5593d8cc
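
Instead of running docker ps on every node, task placement can be read from any manager with docker service ps; a sketch:

# the NODE column shows which host runs each of the six tasks
docker service ps nginx-cluster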

10. Delete a Container

A second service, nginx-cluster-2 (two replicas, published on port 10081), had been created beforehand in the same way; its creation is not shown. Deleting one of its task containers by hand demonstrates Swarm's self-healing:

[root@vm1 ~]# docker rm $(docker stop nginx-cluster-2.2.qs4tlxp1ilbtyd95xhkusjg48)
nginx-cluster-2.2.qs4tlxp1ilbtyd95xhkusjg48
[root@vm2 ~]# docker ps
CONTAINER ID   IMAGE          COMMAND                  CREATED          STATUS          PORTS     NAMES
719e0941efa4   nginx:latest   "/docker-entrypoint.…"   20 seconds ago   Up 15 seconds   80/tcp    nginx-cluster-2.2.qc5tkdzdm0rcs0v57ue52n688

Swarm immediately scheduled a replacement task (note the new task suffix) to restore the desired replica count.

11. Delete a Service

[root@vm1 ~]# docker service --help
Usage:  docker service COMMAND
Manage services
Commands:
  create      Create a new service
  inspect     Display detailed information on one or more services
  logs        Fetch the logs of a service or task
  ls          List services
  ps          List the tasks of one or more services
  rm          Remove one or more services
  rollback    Revert changes to a service's configuration
  scale       Scale one or multiple replicated services
  update      Update a service
[root@vm1 ~]# docker service rm nginx-cluster
nginx-cluster
[root@vm1 ~]# docker service ls
ID             NAME              MODE         REPLICAS   IMAGE          PORTS
sfcq2vc5orxs   nginx-cluster-2   replicated   2/2        nginx:latest   *:10081->80/tcp
[root@vm1 ~]# docker ps
CONTAINER ID   IMAGE     COMMAND   CREATED   STATUS    PORTS     NAMES
[root@vm1 ~]# docker ps -a
CONTAINER ID   IMAGE     COMMAND   CREATED   STATUS    PORTS     NAMES
[root@vm1 ~]#
[root@vm2 ~]# docker ps
CONTAINER ID   IMAGE          COMMAND                  CREATED         STATUS              PORTS     NAMES
719e0941efa4   nginx:latest   "/docker-entrypoint.…"   2 minutes ago   Up About a minute   80/tcp    nginx-cluster-2.2.qc5tkdzdm0rcs0v57ue52n688
[root@vm2 ~]# docker ps -a
CONTAINER ID   IMAGE          COMMAND                  CREATED         STATUS              PORTS     NAMES
719e0941efa4   nginx:latest   "/docker-entrypoint.…"   2 minutes ago   Up About a minute   80/tcp    nginx-cluster-2.2.qc5tkdzdm0rcs0v57ue52n688

12. Manually Stop a Container

[root@vm2 ~]# docker ps
CONTAINER ID   IMAGE          COMMAND                  CREATED         STATUS         PORTS     NAMES
719e0941efa4   nginx:latest   "/docker-entrypoint.…"   4 minutes ago   Up 4 minutes   80/tcp    nginx-cluster-2.2.qc5tkdzdm0rcs0v57ue52n688
[root@vm2 ~]# docker stop nginx-cluster-2.2.qc5tkdzdm0rcs0v57ue52n688
nginx-cluster-2.2.qc5tkdzdm0rcs0v57ue52n688
[root@vm2 ~]# docker ps
CONTAINER ID   IMAGE     COMMAND   CREATED   STATUS    PORTS     NAMES
[root@vm2 ~]#
[root@vm2 ~]# docker ps
CONTAINER ID   IMAGE          COMMAND                  CREATED          STATUS          PORTS     NAMES
598c665b7a8e   nginx:latest   "/docker-entrypoint.…"   19 seconds ago   Up 13 seconds   80/tcp    nginx-cluster-2.2.qmgbury0aixey2sozive6377l
[root@vm2 ~]# docker ps -a
CONTAINER ID   IMAGE          COMMAND                  CREATED          STATUS                      PORTS     NAMES
598c665b7a8e   nginx:latest   "/docker-entrypoint.…"   24 seconds ago   Up 18 seconds               80/tcp    nginx-cluster-2.2.qmgbury0aixey2sozive6377l
719e0941efa4   nginx:latest   "/docker-entrypoint.…"   5 minutes ago    Exited (0) 24 seconds ago             nginx-cluster-2.2.qc5tkdzdm0rcs0v57ue52n688
[root@vm2 ~]# docker service ls
ID             NAME              MODE         REPLICAS   IMAGE          PORTS
sfcq2vc5orxs   nginx-cluster-2   replicated   2/2        nginx:latest   *:10081->80/tcp
[root@vm2 ~]# docker start nginx-cluster-2.2.qc5tkdzdm0rcs0v57ue52n688
nginx-cluster-2.2.qc5tkdzdm0rcs0v57ue52n688
[root@vm2 ~]# docker ps
CONTAINER ID   IMAGE          COMMAND                  CREATED         STATUS         PORTS     NAMES
598c665b7a8e   nginx:latest   "/docker-entrypoint.…"   2 minutes ago   Up 2 minutes   80/tcp    nginx-cluster-2.2.qmgbury0aixey2sozive6377l
719e0941efa4   nginx:latest   "/docker-entrypoint.…"   7 minutes ago   Up 1 second    80/tcp    nginx-cluster-2.2.qc5tkdzdm0rcs0v57ue52n688
[root@vm2 ~]# docker service ls
ID             NAME              MODE         REPLICAS   IMAGE          PORTS
sfcq2vc5orxs   nginx-cluster-2   replicated   2/2        nginx:latest   *:10081->80/tcp

Manually restarting a task container that Swarm has already replaced leaves an extra container running outside the service's desired state (the service still reports 2/2); see the scaling sketch at the end of this section.
[root@vm2 ~]# docker service rm $(docker service ls -q)
sfcq2vc5orxs
[root@vm2 ~]# docker service ls
ID        NAME      MODE      REPLICAS   IMAGE     PORTS
[root@vm2 ~]# docker ps
CONTAINER ID   IMAGE     COMMAND   CREATED   STATUS    PORTS     NAMES
[root@vm2 ~]# docker ps -a
CONTAINER ID   IMAGE     COMMAND   CREATED   STATUS    PORTS     NAMES
[root@vm2 ~]#
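
As the experiments above suggest, manually removing, stopping, or starting task containers only fights the orchestrator. Had the service still existed, the supported way to change its capacity would have been to adjust the replica count; a sketch (run on a manager):

docker service scale nginx-cluster-2=4
# equivalent long form:
docker service update --replicas 4 nginx-cluster-2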

13. Leave the Swarm

[root@vm3 ~]# docker swarm leave
Error response from daemon: You are attempting to leave the swarm on a node that is participating as a manager. The only way to restore a swarm that has lost consensus is to reinitialize it with `--force-new-cluster`. Use `--force` to suppress this message.
[root@vm3 ~]# docker swarm leave --force
[root@vm3 ~]# docker node ls
Error response from daemon: This node is not a swarm manager. Use "docker swarm init" or "docker swarm join" to connect this node to swarm and try again.

14. Delete the Cluster

[root@vm1 ~]# docker swarm leave --force
Node left the swarm.
[root@vm1 ~]# docker node ls
Error response from daemon: This node is not a swarm manager. Use "docker swarm init" or "docker swarm join" to connect this node to swarm and try again.

II. Kubernetes Cluster Setup and Trial

1. Install Docker

wget -P /etc/yum.repos.d/ https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
yum install -y docker-ce-18.06.0.ce-3.el7.x86_64
systemctl start docker.service
systemctl enable docker.service
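
Depending on the Docker and kubelet versions, kubeadm's preflight checks may warn about a cgroup-driver mismatch. A sketch, only needed if that warning appears, that pins Docker to the systemd driver:

cat <<EOF > /etc/docker/daemon.json
{
  "exec-opts": ["native.cgroupdriver=systemd"]
}
EOF
systemctl restart docker.service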

2. Install Kubernetes

cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
yum install -y kubelet-1.12.3
yum install -y kubeadm-1.12.3
yum install -y kubectl-1.12.3
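
The kubelet should also be enabled so it starts on boot; until kubeadm init or kubeadm join supplies its configuration it will restart in a loop, which is expected:

systemctl enable kubelet.service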

3. Obtain the Images

docker save -o k8s-1.12.3.tar \
  k8s.gcr.io/kube-proxy:v1.12.3 \
  k8s.gcr.io/kube-apiserver:v1.12.3 \
  k8s.gcr.io/kube-controller-manager:v1.12.3 \
  k8s.gcr.io/kube-scheduler:v1.12.3 \
  k8s.gcr.io/etcd:3.2.24 \
  k8s.gcr.io/coredns:1.2.2 \
  quay.io/coreos/flannel:v0.10.0-amd64 \
  k8s.gcr.io/pause:3.1
docker load -i k8s-1.12.3.tar
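
The save command assumes the images are already present locally; a sketch of pulling them first on a host with access to k8s.gcr.io and quay.io:

for img in kube-proxy:v1.12.3 kube-apiserver:v1.12.3 \
           kube-controller-manager:v1.12.3 kube-scheduler:v1.12.3 \
           etcd:3.2.24 coredns:1.2.2 pause:3.1; do
    docker pull k8s.gcr.io/$img
done
docker pull quay.io/coreos/flannel:v0.10.0-amd64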

4. Disable Swap on the Nodes

swapoff -a        # turn swap off immediately
vim /etc/fstab    # comment out the swap entry so it stays off after reboot
sysctl -p         # reload kernel parameters if any sysctl settings were changed
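
A non-interactive alternative to editing /etc/fstab by hand; this common recipe comments out every line that mentions swap:

sed -ri 's/.*swap.*/#&/' /etc/fstab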

5. Enable IP Forwarding and iptables Filtering on Bridges

vim /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
modprobe br_netfilter
sysctl -p /etc/sysctl.d/k8s.conf

6. Initialize the Master Node

kubeadm init  --kubernetes-version=v1.12.3  --pod-network-cidr=10.244.0.0/16 --apiserver-advertise-address=10.6.6.110
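
After kubeadm init succeeds it prints the follow-up steps; a sketch of the usual ones, configuring kubectl for the current user and applying the flannel CNI (kube-flannel.yml is assumed to have been downloaded beforehand to match flannel v0.10.0):

mkdir -p $HOME/.kube
cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
chown $(id -u):$(id -g) $HOME/.kube/config
kubectl apply -f kube-flannel.yml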

7. Join the Worker Nodes

kubeadm join 10.6.6.192:6443 --token afbkdo.6335xh1w0lv7odbh --discovery-token-ca-cert-hash sha256:b9abe5a668609f0225c8bb3ecba3a70a0be370f90905fcce79a6d783bbd0aeef
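
Back on the master, a quick sketch to confirm the node registered and the system pods came up:

kubectl get nodes
kubectl get pods -n kube-system -o wide
# nodes report Ready once the flannel pods are running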

8. Configure Whether the Master Participates in Scheduling

The first command (note the trailing "-") removes the master taint so that ordinary pods can be scheduled onto the master node; the second restores the NoSchedule taint:

kubectl taint nodes master.k8s node-role.kubernetes.io/master-
kubectl taint nodes master.k8s node-role.kubernetes.io/master=:NoSchedule

9. Enable Access on the Insecure Port

These kube-apiserver flags (valid in v1.12; the insecure port has since been removed from newer releases) keep the authenticated port on 6443 and expose the unauthenticated port 8080 on all interfaces:

--secure-port=6443
--insecure-bind-address=0.0.0.0
--insecure-port=8080
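
On a kubeadm cluster these are kube-apiserver flags; a sketch of where to put them (the kubelet restarts the static pod automatically when its manifest changes):

vim /etc/kubernetes/manifests/kube-apiserver.yaml
# under spec.containers[0].command, add or adjust:
#   - --secure-port=6443
#   - --insecure-bind-address=0.0.0.0
#   - --insecure-port=8080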

10. Configure Certificate Renewal

These kube-controller-manager flags extend the signing duration for cluster-issued certificates to ten years and enable kubelet server certificate rotation:

--kubeconfig=/etc/kubernetes/controller-manager.conf
--experimental-cluster-signing-duration=87600h0m0s
--feature-gates=RotateKubeletServerCertificate=true
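
These flags go into the kube-controller-manager static pod manifest on a kubeadm cluster; a sketch:

vim /etc/kubernetes/manifests/kube-controller-manager.yaml
# under spec.containers[0].command, add or adjust:
#   - --kubeconfig=/etc/kubernetes/controller-manager.conf
#   - --experimental-cluster-signing-duration=87600h0m0s
#   - --feature-gates=RotateKubeletServerCertificate=true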