Kubernetes --- Deploying a K8s Cluster from Binaries on a Single Master Node

1. Deploying a K8s Cluster from Binaries with a Single Master Node

1.1 Topology

(topology diagram not reproduced here)

1.2 Environment

Host     IP address        Components to deploy
master   192.168.140.20    kube-apiserver, kube-controller-manager, kube-scheduler, etcd
node01   192.168.140.21    kubelet, kube-proxy, docker, flannel, etcd
node02   192.168.140.22    kubelet, kube-proxy, docker, flannel, etcd

1.3 Self-Signed SSL Certificates

  • The following certificates are used when deploying the K8s cluster:
Component        Certificates used
etcd             ca.pem, server.pem, server-key.pem
flannel          ca.pem, server.pem, server-key.pem
kube-apiserver   ca.pem, server.pem, server-key.pem
kubelet          ca.pem, ca-key.pem
kube-proxy       ca.pem, kube-proxy.pem, kube-proxy-key.pem
kubectl          ca.pem, admin.pem, admin-key.pem

1.4 Initial Setup

  • Set the hostnames
hostnamectl set-hostname master

hostnamectl set-hostname node01

hostnamectl set-hostname node02
  • Disable the firewall and SELinux (on all three nodes)
systemctl stop firewalld && systemctl disable firewalld

setenforce 0 && sed -i "s/SELINUX=enforcing/SELINUX=disabled/g" /etc/selinux/config
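A quick sanity check that both changes stuck (a suggested verification, not part of the original transcript):

getenforce				'//expect Permissive now, Disabled after a reboot'
systemctl is-active firewalld		'//expect inactive'
grep ^SELINUX= /etc/selinux/config	'//expect SELINUX=disabled'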

1.5 etcd Cluster Deployment

  • On the master, create a k8s working directory, upload the etcd scripts, and download the official cfssl certificate tooling
#Create the working directory and upload the scripts
[root@master ~]# mkdir k8s
[root@master ~]# cd k8s/
[root@master k8s]# rz -E	'//upload the etcd scripts'
rz waiting to receive.
[root@master k8s]# ls
etcd-cert.sh  etcd.sh
[root@master k8s]# mkdir etcd-cert
[root@master k8s]# mv etcd-cert.sh etcd-cert

#Write a script to download the certificate tooling
[root@master k8s]# vi cfssl.sh
curl -L https://pkg.cfssl.org/R1.2/cfssl_linux-amd64 -o /usr/local/bin/cfssl
curl -L https://pkg.cfssl.org/R1.2/cfssljson_linux-amd64 -o /usr/local/bin/cfssljson
curl -L https://pkg.cfssl.org/R1.2/cfssl-certinfo_linux-amd64 -o /usr/local/bin/cfssl-certinfo
chmod +x /usr/local/bin/cfssl /usr/local/bin/cfssljson /usr/local/bin/cfssl-certinfo

#Download the official cfssl binaries
[root@master k8s]# bash cfssl.sh
[root@master k8s]# ls /usr/local/bin/
 cfssl  cfssl-certinfo  cfssljson
//cfssl generates certificates; cfssljson converts cfssl's JSON output into certificate files; cfssl-certinfo inspects certificates

[root@master k8s]# chmod +x /usr/local/bin/cfssl
[root@master k8s]# chmod +x /usr/local/bin/cfssl-certinfo
[root@master k8s]# chmod +x /usr/local/bin/cfssljson

#If you already downloaded these three tools, you can drop them straight into /usr/local/bin/; just remember to make them executable
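A quick way to confirm the tools are installed and executable (a suggested check, not from the original transcript):

[root@master k8s]# cfssl version		'//should print the cfssl version; if not, re-check the chmod step'
[root@master k8s]# which cfssljson cfssl-certinfo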
  • Create the CA certificate
[root@master ~]# cd /root/k8s/etcd-cert/
[root@master etcd-cert]# cat > ca-config.json <<EOF
{
   "signing": {
    "default": {
       "expiry": "87600h"
     },
    "profiles": {
       "www": {
         "expiry": "87600h",
         "usages": [
            "signing",
            "key encipherment",
            "server auth",
            "client auth"     
        ]  
       } 
     }         
   }
}
EOF

'//define the CA signing configuration'

[root@master etcd-cert]# ls
ca-config.json etcd-cert.sh
[root@master etcd-cert]# cat > ca-csr.json <<EOF 
{   
    "CN": "etcd CA",
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "L": "Chengdu",
            "ST": "Chengdu"
        }
    ]
}
EOF

'//define the CA certificate signing request'

[root@master etcd-cert]# ls
ca-config.json  ca-csr.json  etcd-cert.sh
[root@master etcd-cert]# cfssl gencert -initca ca-csr.json | cfssljson -bare ca - 
2021/03/18 11:38:03 [INFO] generating a new CA key and certificate from CSR
2021/03/18 11:38:03 [INFO] generate received request
2021/03/18 11:38:03 [INFO] received CSR
2021/03/18 11:38:03 [INFO] generating key: rsa-2048
2021/03/18 11:38:03 [INFO] encoded CSR
2021/03/18 11:38:03 [INFO] signed certificate with serial number 646726809365034111674974961625583230830644619161

'//this generates the certificates ca-key.pem and ca.pem'

[root@master etcd-cert]# ls
ca-config.json  ca.csr  ca-csr.json  ca-key.pem  ca.pem  etcd-cert.sh
  • Set up the certificate the three etcd nodes use to authenticate each other
'#Write the server-side CSR'
[root@master etcd-cert]# cat > server-csr.json <<EOF
{
    "CN": "etcd",
    "hosts": [
    "192.168.140.20",
    "192.168.140.21",
    "192.168.140.22"
    ],
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "L": "Chengdu",
            "ST": "Chengdu"
        }
    ]
}
EOF

[root@master etcd-cert]# ls
ca-config.json  ca.csr  ca-csr.json  ca-key.pem  ca.pem  etcd-cert.sh  server-csr.json

'#Generate the etcd server certificate: server-key.pem and server.pem'
[root@master etcd-cert]# cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=www server-csr.json | cfssljson -bare server
2021/03/18 11:44:52 [INFO] generate received request
2021/03/18 11:44:52 [INFO] received CSR
2021/03/18 11:44:52 [INFO] generating key: rsa-2048
2021/03/18 11:44:53 [INFO] encoded CSR
2021/03/18 11:44:53 [INFO] signed certificate with serial number 474867500554792348214975026719573693837760757686
2021/03/18 11:44:53 [WARNING] This certificate lacks a "hosts" field. This makes it unsuitable for
websites. For more information see the Baseline Requirements for the Issuance and Management
of Publicly-Trusted Certificates, v.1.1.6, from the CA/Browser Forum (https://cabforum.org);
specifically, section 10.2.3 ("Information Requirements").

[root@master etcd-cert]# ls
ca-config.json  ca-csr.json  ca.pem        server.csr       server-key.pem
ca.csr          ca-key.pem   etcd-cert.sh  server-csr.json  server.pem
  • Download and unpack the etcd binary release
    Download from: https://github.com/etcd-io/etcd/releases
[root@master ~]# cd k8s/
[root@master k8s]# rz -E		'//already downloaded, so just upload it'
rz waiting to receive.
[root@master k8s]# ls
cfssl.sh   etcd.sh                          flannel-v0.10.0-linux-amd64.tar.gz
etcd-cert  etcd-v3.3.10-linux-amd64.tar.gz  kubernetes-server-linux-amd64.tar.gz

[root@master k8s]# tar zxvf etcd-v3.3.10-linux-amd64.tar.gz		'//unpack the tarball'
  • Lay out the config files, binaries, and certificates
[root@master k8s]# mkdir -p /opt/etcd/{cfg,bin,ssl}		//create the directory tree in one go
[root@master k8s]# ls /opt/etcd/
bin  cfg  ssl
[root@master k8s]# ls etcd-v3.3.10-linux-amd64
Documentation  etcd  etcdctl  README-etcdctl.md  README.md  READMEv2-etcdctl.md

[root@master k8s]# mv etcd-v3.3.10-linux-amd64/etcd* /opt/etcd/bin		'//move the binaries into the new bin directory'
[root@master k8s]# ls /opt/etcd/bin/
etcd  etcdctl

[root@master k8s]# cp etcd-cert/*.pem /opt/etcd/ssl	'//copy the certificates into the new ssl directory'
[root@master k8s]# ls /opt/etcd/ssl
ca-key.pem  ca.pem  server-key.pem  server.pem

[root@master k8s]# vim etcd.sh	'//review the script (what it generates is sketched below)'
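The screenshot of etcd.sh is not reproduced here. For reference, a script of this kind typically takes the member name, its IP, and the peer list as arguments, then writes both the etcd config file and a systemd unit. This is a minimal sketch of the commonly used version of this script, not necessarily byte-for-byte the file uploaded above (the flags are standard etcd v3.3 options):

#!/bin/bash
# usage: bash etcd.sh <name> <ip> <peer list>
ETCD_NAME=$1
ETCD_IP=$2
ETCD_CLUSTER=$3
WORK_DIR=/opt/etcd

cat <<EOF >$WORK_DIR/cfg/etcd
#[Member]
ETCD_NAME="${ETCD_NAME}"
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="https://${ETCD_IP}:2380"
ETCD_LISTEN_CLIENT_URLS="https://${ETCD_IP}:2379"

#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://${ETCD_IP}:2380"
ETCD_ADVERTISE_CLIENT_URLS="https://${ETCD_IP}:2379"
ETCD_INITIAL_CLUSTER="${ETCD_NAME}=https://${ETCD_IP}:2380,${ETCD_CLUSTER}"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"
EOF

cat <<EOF >/usr/lib/systemd/system/etcd.service
[Unit]
Description=Etcd Server
After=network.target

[Service]
Type=notify
EnvironmentFile=${WORK_DIR}/cfg/etcd
ExecStart=${WORK_DIR}/bin/etcd \\
--name=\${ETCD_NAME} \\
--data-dir=\${ETCD_DATA_DIR} \\
--listen-peer-urls=\${ETCD_LISTEN_PEER_URLS} \\
--listen-client-urls=\${ETCD_LISTEN_CLIENT_URLS},http://127.0.0.1:2379 \\
--advertise-client-urls=\${ETCD_ADVERTISE_CLIENT_URLS} \\
--initial-advertise-peer-urls=\${ETCD_INITIAL_ADVERTISE_PEER_URLS} \\
--initial-cluster=\${ETCD_INITIAL_CLUSTER} \\
--initial-cluster-token=\${ETCD_INITIAL_CLUSTER_TOKEN} \\
--initial-cluster-state=new \\
--cert-file=${WORK_DIR}/ssl/server.pem \\
--key-file=${WORK_DIR}/ssl/server-key.pem \\
--peer-cert-file=${WORK_DIR}/ssl/server.pem \\
--peer-key-file=${WORK_DIR}/ssl/server-key.pem \\
--trusted-ca-file=${WORK_DIR}/ssl/ca.pem \\
--peer-trusted-ca-file=${WORK_DIR}/ssl/ca.pem
Restart=on-failure
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
EOF

systemctl daemon-reload
systemctl enable etcd
systemctl restart etcd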

  • The master appears to hang while it waits for the other members to join
[root@master k8s]# bash etcd.sh etcd01 192.168.140.20 etcd02=https://192.168.140.21:2380,etcd03=https://192.168.140.22:2380

#Open another session and you will see the etcd process is already running
[root@master ~]# ps -ef | grep etcd


  • Now copy the certificates and the systemd unit file to the two worker nodes
[root@master k8s]# scp -r /opt/etcd/ root@192.168.140.21:/opt
The authenticity of host '192.168.140.21 (192.168.140.21)' can't be established.
ECDSA key fingerprint is SHA256:5MAR/SpyvPKdJkN2J2yoeWttwiO2/2LgyHt6hNvawmQ.
ECDSA key fingerprint is MD5:f1:ff:6f:fd:14:28:f6:90:4d:f9:51:66:33:21:df:bf.
Are you sure you want to continue connecting (yes/no)? yes			'//type yes'
Warning: Permanently added '192.168.140.21' (ECDSA) to the list of known hosts.
root@192.168.140.21's password: 		'//enter the root password of the target node'
etcd                                                                                    100%  516   708.3KB/s   00:00    
etcd                                                                                    100%   18MB  81.0MB/s   00:00    
etcdctl                                                                                 100%   15MB 127.8MB/s   00:00    
ca-key.pem                                                                              100% 1679     2.2MB/s   00:00    
ca.pem                                                                                  100% 1265   382.5KB/s   00:00    
server-key.pem                                                                          100% 1675     3.5MB/s   00:00    
server.pem                                                                              100% 1338   495.1KB/s   00:00    
[root@master k8s]# scp -r /opt/etcd/ root@192.168.140.22:/opt

[root@master k8s]# scp /usr/lib/systemd/system/etcd.service root@192.168.140.21:/usr/lib/systemd/system
[root@master k8s]# scp /usr/lib/systemd/system/etcd.service root@192.168.140.22:/usr/lib/systemd/system
root@192.168.140.22's password: 
etcd.service                                                                            100%  923     1.0MB/s   00:00

  • Edit the etcd config file on the node01 and node02 workers
#Change the member name and IP addresses accordingly (the fields involved are sketched below)
[root@node01 ~]# vim /opt/etcd/cfg/etcd

[root@node02 ~]# vim /opt/etcd/cfg/etcd

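The edit screenshots for node01 and node02 are not reproduced. The point of the edit is that only the member name and the four URL fields change per node; assuming the config layout from the etcd.sh sketch above, node01's /opt/etcd/cfg/etcd would look roughly like this (node02 is identical except ETCD_NAME="etcd03" and 192.168.140.22):

#[Member]
ETCD_NAME="etcd02"
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="https://192.168.140.21:2380"
ETCD_LISTEN_CLIENT_URLS="https://192.168.140.21:2379"

#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://192.168.140.21:2380"
ETCD_ADVERTISE_CLIENT_URLS="https://192.168.140.21:2379"
ETCD_INITIAL_CLUSTER="etcd01=https://192.168.140.20:2380,etcd02=https://192.168.140.21:2380,etcd03=https://192.168.140.22:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"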

  • Start the etcd service
    Run the cluster script on the master first, then start etcd on both workers
[root@master k8s]# bash etcd.sh etcd01 192.168.140.20 etcd02=https://192.168.140.21:2380,etcd03=https://192.168.140.22:2380

[root@node01 ~]# systemctl start etcd

[root@node02 ~]# systemctl start etcd


  • Finally, check the cluster health
[root@master k8s]# cd etcd-cert/
[root@master etcd-cert]# ls
ca-config.json  ca.csr  ca-csr.json  ca-key.pem  ca.pem  etcd-cert.sh  server.csr  server-csr.json  server-key.pem  server.pem
[root@master etcd-cert]# /opt/etcd/bin/etcdctl --ca-file=ca.pem --cert-file=server.pem --key-file=server-key.pem --endpoints="https://192.168.140.20:2379,https://192.168.140.21:2379,https://192.168.140.22:2379" cluster-health
member 7e1b55f14a638ac2 is healthy: got healthy result from https://192.168.140.20:2379
member bef76abee0df27ac is healthy: got healthy result from https://192.168.140.22:2379
member f5d4e61e4d6e5f2e is healthy: got healthy result from https://192.168.140.21:2379
cluster is healthy

Seeing "cluster is healthy" means the etcd cluster has been set up successfully.
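As an optional cross-check (same etcdctl v2 flags as above), you can also list the members:

[root@master etcd-cert]# /opt/etcd/bin/etcdctl --ca-file=ca.pem --cert-file=server.pem --key-file=server-key.pem --endpoints="https://192.168.140.20:2379,https://192.168.140.21:2379,https://192.168.140.22:2379" member list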

1.6 Docker Deployment

  • Install docker-ce on the node hosts. This follows the standard Docker installation procedure, which is not repeated here; see any Docker CE install guide for details.

1.7 Deploying the flannel Container Network

1.7.1 flannel Networking Background

  • Overlay Network: a virtualized network layered on top of the underlying physical network, whose hosts are connected by virtual links.
  • VXLAN: encapsulates the original packet in UDP, wraps it with an outer header built from the underlay's IP/MAC addresses, and carries it over Ethernet; at the destination, the tunnel endpoint decapsulates it and delivers the data to the target.
  • Flannel: one kind of overlay network; it likewise wraps the original packet inside another network packet for routing and delivery, and currently supports UDP, VXLAN, AWS VPC, and GCE routing, among other forwarding backends.


1.7.2 flannel Workflow and Role

(flannel packet-flow diagram not reproduced here)

  • What Flannel does

    • Flannel is a network fabric designed by the CoreOS team for Kubernetes.
    • In short, it gives the Docker containers created on different node hosts virtual IP addresses that are unique across the whole cluster, and it builds an overlay network between those addresses through which packets are delivered to the target container unmodified.
  • What etcd does for flannel: it provides the bookkeeping

    • Stores and manages the IP address ranges flannel may allocate
    • Watches the actual address of each Pod in etcd and maintains an in-memory routing table from Pods to nodes

1.7.3 Deploying flannel

  • On the master, write the subnet range to be allocated into etcd for flannel to use
[root@master ~]# cd /root/k8s/etcd-cert
[root@master etcd-cert]# /opt/etcd/bin/etcdctl --ca-file=ca.pem --cert-file=server.pem --key-file=server-key.pem --endpoints="https://192.168.140.20:2379,https://192.168.140.21:2379,https://192.168.140.22:2379" set /coreos.com/network/config '{ "Network": "172.17.0.0/16", "Backend": {"Type": "vxlan"}}'

{ "Network": "172.17.0.0/16", "Backend": {"Type": "vxlan"}} 	'//写入分配的网段'

[root@master etcd-cert]# /opt/etcd/bin/etcdctl --ca-file=ca.pem --cert-file=server.pem --key-file=server-key.pem --endpoints="https://192.168.140.20:2379,https://192.168.140.21:2379,https://192.168.140.22:2379" get /coreos.com/network/config

{ "Network": "172.17.0.0/16", "Backend": {"Type": "vxlan"}}		'//查看写入的信息'
  • 在node节点上部署 flannel
#将msater节点上的flannel拷贝到所有node节点
[root@master k8s]# scp flannel-v0.10.0-linux-amd64.tar.gz root@192.168.140.21:/opt
root@192.168.140.21's password: 
flannel-v0.10.0-linux-amd64.tar.gz                                                              100% 9479KB  60.3MB/s   00:00    
[root@master k8s]# scp flannel-v0.10.0-linux-amd64.tar.gz root@192.168.140.22:/opt
root@192.168.140.22's password: 
flannel-v0.10.0-linux-amd64.tar.gz

#Then unpack it on node01 and node02; node01 is shown as the example
[root@node01 ~]# cd /opt/
[root@node01 opt]# ls
containerd  etcd  flannel-v0.10.0-linux-amd64.tar.gz  rh
[root@node01 opt]# tar zxvf flannel-v0.10.0-linux-amd64.tar.gz 
flanneld
mk-docker-opts.sh
README.md
  • Remaining steps on the node hosts
#Create the k8s working directory and move the binaries into it
[root@node01 opt]# mkdir -p /opt/k8s/{cfg,bin,ssl}
[root@node01 opt]# mv mk-docker-opts.sh flanneld ./k8s/bin/

#Write the flannel.sh script: it creates the config file and the systemd unit; 2379 is the client port the etcd nodes expose
[root@node01 opt]# vi flannel.sh

#!/bin/bash

ETCD_ENDPOINTS=${1:-"http://127.0.0.1:2379"}

cat <<EOF >/opt/k8s/cfg/flanneld

FLANNEL_OPTIONS="--etcd-endpoints=${ETCD_ENDPOINTS} \
-etcd-cafile=/opt/etcd/ssl/ca.pem \
-etcd-certfile=/opt/etcd/ssl/server.pem \
-etcd-keyfile=/opt/etcd/ssl/server-key.pem"

EOF

cat <<EOF >/usr/lib/systemd/system/flanneld.service
[Unit]
Description=Flanneld overlay address etcd agent
After=network-online.target network.target
Before=docker.service

[Service]
Type=notify
EnvironmentFile=/opt/k8s/cfg/flanneld
ExecStart=/opt/k8s/bin/flanneld --ip-masq \$FLANNEL_OPTIONS
ExecStartPost=/opt/k8s/bin/mk-docker-opts.sh -k DOCKER_NETWORK_OPTIONS -d /run/flannel/subnet.env
Restart=on-failure

[Install]
WantedBy=multi-user.target

EOF

systemctl daemon-reload
systemctl enable flanneld
systemctl restart flanneld
#Run the script to bring up the flannel network
[root@node01 opt]# bash flannel.sh https://192.168.140.20:2379,https://192.168.140.21:2379,https://192.168.140.22:2379
Created symlink from /etc/systemd/system/multi-user.target.wants/flanneld.service to /usr/lib/systemd/system/flanneld.service.

[root@node01 opt]# systemctl status flanneld		'//check the status'


#Point Docker at the flannel network
[root@node01 ~]# vim /usr/lib/systemd/system/docker.service
...
[Service]
Type=notify
# the default is not to use systemd for cgroups because the delegate issues still
# exists and systemd currently does not support the cgroup feature set required
# for containers run by docker
EnvironmentFile=/run/flannel/subnet.env		'//added line'
ExecStart=/usr/bin/dockerd $DOCKER_NETWORK_OPTIONS  -H fd:// --containerd=/run/containerd/containerd.sock		'//modified line: add $DOCKER_NETWORK_OPTIONS'
ExecReload=/bin/kill -s HUP $MAINPID
...		//remaining lines omitted

[root@node01 ~]# cat /run/flannel/subnet.env
DOCKER_OPT_BIP="--bip=172.17.80.1/24"
DOCKER_OPT_IPMASQ="--ip-masq=false"
DOCKER_OPT_MTU="--mtu=1450"
DOCKER_NETWORK_OPTIONS=" --bip=172.17.80.1/24 --ip-masq=false --mtu=1450"
//bip: the subnet the Docker bridge uses at startup

#Restart Docker, then check whether the docker0 interface picked up the flannel subnet
[root@node01 ~]# systemctl daemon-reload
[root@node01 ~]# systemctl restart docker

Note: repeat all of the above on node02 as well.
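Once flanneld is running on both nodes, each one records a subnet lease in etcd; you can confirm the allocations from the master (an optional check, using the same etcdctl flags as before):

[root@master etcd-cert]# /opt/etcd/bin/etcdctl --ca-file=ca.pem --cert-file=server.pem --key-file=server-key.pem --endpoints="https://192.168.140.20:2379,https://192.168.140.21:2379,https://192.168.140.22:2379" ls /coreos.com/network/subnets

Each entry is one node's lease, e.g. a key like /coreos.com/network/subnets/172.17.80.0-24 for the 172.17.80.1/24 seen in subnet.env above.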

1.7.4 Testing

  • Ping the docker0 subnet on the other node to prove flannel is routing between them
  • First, create a container on each node
[root@node01 ~]# docker run -it centos:7 /bin/bash		'//create and run a container'
Unable to find image 'centos:7' locally
7: Pulling from library/centos
2d473b07cdd5: Pull complete 
Digest: sha256:0f4ec88e21daf75124b8a9e5ca03c37a5e937e0e108a255d890492430789b60e
Status: Downloaded newer image for centos:7
[root@b6c0c30808c5 /]# yum install net-tools -y			'//install net-tools for ifconfig'
[root@b6c0c30808c5 /]# ifconfig


[root@node02 ~]# docker run -it centos:7 /bin/bash		'//create and run a container'
[root@de7c49ccf719 /]# yum install net-tools -y			'//install net-tools for ifconfig'
[root@de7c49ccf719 /]# ifconfig


  • Now test whether the two node containers can reach each other
[root@de7c49ccf719 /]# ping 172.17.80.2		'//from the node02 container, ping the node01 container'
PING 172.17.80.2 (172.17.80.2) 56(84) bytes of data.
64 bytes from 172.17.80.2: icmp_seq=1 ttl=62 time=0.499 ms
64 bytes from 172.17.80.2: icmp_seq=2 ttl=62 time=0.741 ms
64 bytes from 172.17.80.2: icmp_seq=3 ttl=62 time=0.502 ms

This proves the flannel network has been deployed successfully.

1.8 Deploying the Master Components

  • On the master, generate the certificates for the apiserver
[root@master ~]# cd k8s/
[root@master k8s]# rz -E		//upload the master.zip archive
rz waiting to receive.
[root@master k8s]# unzip master.zip 		//unpack
Archive:  master.zip
  inflating: apiserver.sh            
  inflating: controller-manager.sh   
  inflating: scheduler.sh

[root@master k8s]# mkdir k8s-cert		'//create the k8s certificate directory'
[root@master k8s]# cd k8s-cert/
[root@master k8s-cert]# vim k8s-cert.sh
cat > ca-config.json <<EOF
{
  "signing": {
    "default": {
      "expiry": "87600h"
    },
    "profiles": {
      "kubernetes": {
         "expiry": "87600h",
         "usages": [
            "signing",
            "key encipherment",
            "server auth",
            "client auth"
        ]
      }
    }
  }
}
EOF

cat > ca-csr.json <<EOF
{
    "CN": "kubernetes",
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "L": "Chengdu",
            "ST": "Chengdu",
      	    "O": "k8s",
            "OU": "System"
        }
    ]
}
EOF

cfssl gencert -initca ca-csr.json | cfssljson -bare ca -

#-----------------------

cat > server-csr.json <<EOF
{
    "CN": "kubernetes",
    "hosts": [
      "10.0.0.1",
      "127.0.0.1",
      "192.168.140.20",
      "192.168.140.13",
      "192.168.140.30",
      "192.168.140.14",
      "192.168.140.15",
      "kubernetes",
      "kubernetes.default",
      "kubernetes.default.svc",
      "kubernetes.default.svc.cluster",
      "kubernetes.default.svc.cluster.local"
    ],
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "L": "Chengdu",
            "ST": "Chengdu",
            "O": "k8s",
            "OU": "System"
        }
    ]
}
EOF

cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes server-csr.json | cfssljson -bare server

#-----------------------

cat > admin-csr.json <<EOF
{
  "CN": "admin",
  "hosts": [],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "L": "Chengdu",
      "ST": "Chengdu",
      "O": "system:masters",
      "OU": "System"
    }
  ]
}
EOF

cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes admin-csr.json | cfssljson -bare admin

#-----------------------

cat > kube-proxy-csr.json <<EOF
{
  "CN": "system:kube-proxy",
  "hosts": [],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "L": "Chengdu",
      "ST": "Chengdu",
      "O": "k8s",
      "OU": "System"
    }
  ]
}
EOF

cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes kube-proxy-csr.json | cfssljson -bare kube-proxy

#Generate the k8s certificates
[root@master k8s-cert]# bash k8s-cert.sh		//generate the certificates
[root@master k8s-cert]# ls
admin.csr       ca-config.json  ca.pem               kube-proxy-key.pem  server-key.pem
admin-csr.json  ca.csr          k8s-cert.sh          kube-proxy.pem      server.pem
admin-key.pem   ca-csr.json     kube-proxy.csr       server.csr
admin.pem       ca-key.pem      kube-proxy-csr.json  server-csr.json

[root@master k8s-cert]# ls *.pem
admin-key.pem  ca-key.pem  kube-proxy-key.pem  server-key.pem
admin.pem      ca.pem      kube-proxy.pem      server.pem

[root@master k8s-cert]# mkdir -p /opt/kubernetes/{cfg,bin,ssl}		'//create the k8s working directory first (implied, but easy to miss)'
[root@master k8s-cert]# cp ca*.pem server*.pem /opt/kubernetes/ssl/		'//copy the certificates into the working directory'
[root@master k8s-cert]# ls /opt/kubernetes/ssl/
ca-key.pem  ca.pem  server-key.pem  server.pem
  • Unpack the k8s server tarball
[root@master k8s-cert]# cd ..
[root@master k8s]# ls
apiserver.sh           etcd.sh                             k8s-cert
cfssl.sh               etcd-v3.3.10-linux-amd64            kubernetes-server-linux-amd64.tar.gz
controller-manager.sh  etcd-v3.3.10-linux-amd64.tar.gz     master.zip
etcd-cert              flannel-v0.10.0-linux-amd64.tar.gz  scheduler.sh
[root@master k8s]# tar zxvf kubernetes-server-linux-amd64.tar.gz
  • Copy the key binaries
[root@master k8s]# cd kubernetes/server/bin/
[root@master bin]# cp kube-controller-manager kube-scheduler kubectl kube-apiserver /opt/kubernetes/bin/
//copy them into the k8s working directory
[root@master bin]# ls /opt/kubernetes/bin/
kube-apiserver  kube-controller-manager  kubectl  kube-scheduler
  • Create the bootstrap token and bind the role
[root@master bin]# cd /root/k8s/
[root@master k8s]# head -c 16 /dev/urandom | od -An -t x | tr -d ' '		//generate a random token
58ee31e8f123a4b1f57a4c6473aa912f

[root@master k8s]# vim /opt/kubernetes/cfg/token.csv
58ee31e8f123a4b1f57a4c6473aa912f,kubelet-bootstrap,10001,"system:kubelet-bootstrap"

'//fields: token, user name, UID, group; this is the user the master uses to bootstrap the node kubelets'
  • Start the apiserver, storing its data in the etcd cluster, and check that it is up
[root@master k8s]# bash apiserver.sh 192.168.140.20 https://192.168.140.20:2379,https://192.168.140.21:2379,https://192.168.140.22:2379
Created symlink from /etc/systemd/system/multi-user.target.wants/kube-apiserver.service to /usr/lib/systemd/system/kube-apiserver.service.

[root@master k8s]# ls /opt/kubernetes/cfg/
kube-apiserver  token.csv

[root@master k8s]# netstat -ntap | grep 6443		'//the HTTPS port it listens on'
tcp        0      0 192.168.140.20:6443     0.0.0.0:*               LISTEN      78627/kube-apiserve 
tcp        0      0 192.168.140.20:44592    192.168.140.20:6443     ESTABLISHED 78627/kube-apiserve 
tcp        0      0 192.168.140.20:6443     192.168.140.20:44592    ESTABLISHED 78627/kube-apiserve 


[root@master k8s]# ps aux |grep kube		'//confirm the processes started'
[root@master k8s]# cat /opt/kubernetes/cfg/kube-apiserver 		'//review the generated config'

KUBE_APISERVER_OPTS="--logtostderr=true \
--v=4 \
--etcd-servers=https://192.168.140.20:2379,https://192.168.140.21:2379,https://192.168.140.22:2379 \
--bind-address=192.168.140.20 \
--secure-port=6443 \
--advertise-address=192.168.140.20 \
--allow-privileged=true \
--service-cluster-ip-range=10.0.0.0/24 \
--enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,ResourceQuota,NodeRestriction \
--authorization-mode=RBAC,Node \
--kubelet-https=true \
--enable-bootstrap-token-auth \
--token-auth-file=/opt/kubernetes/cfg/token.csv \
--service-node-port-range=30000-50000 \
--tls-cert-file=/opt/kubernetes/ssl/server.pem  \
--tls-private-key-file=/opt/kubernetes/ssl/server-key.pem \
--client-ca-file=/opt/kubernetes/ssl/ca.pem \
--service-account-key-file=/opt/kubernetes/ssl/ca-key.pem \
--etcd-cafile=/opt/etcd/ssl/ca.pem \
--etcd-certfile=/opt/etcd/ssl/server.pem \
--etcd-keyfile=/opt/etcd/ssl/server-key.pem"
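Besides this config file, apiserver.sh writes a matching systemd unit. A minimal sketch of what it typically contains (assuming the /opt/kubernetes layout used throughout this post):

[Unit]
Description=Kubernetes API Server

[Service]
EnvironmentFile=/opt/kubernetes/cfg/kube-apiserver
ExecStart=/opt/kubernetes/bin/kube-apiserver $KUBE_APISERVER_OPTS
Restart=on-failure

[Install]
WantedBy=multi-user.target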
  • Start the scheduler service
[root@master k8s]# ./scheduler.sh 127.0.0.1
Created symlink from /etc/systemd/system/multi-user.target.wants/kube-scheduler.service to /usr/lib/systemd/system/kube-scheduler.service.
[root@master k8s]# systemctl status kube-scheduler

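scheduler.sh itself is not shown above. A minimal sketch of the usual version: it points the scheduler at the apiserver's local insecure port 8080, which is why the script is invoked with 127.0.0.1 (controller-manager.sh follows the same pattern with kube-controller-manager):

#!/bin/bash
MASTER_ADDRESS=$1

cat <<EOF >/opt/kubernetes/cfg/kube-scheduler
KUBE_SCHEDULER_OPTS="--logtostderr=true \\
--v=4 \\
--master=${MASTER_ADDRESS}:8080 \\
--leader-elect"
EOF

cat <<EOF >/usr/lib/systemd/system/kube-scheduler.service
[Unit]
Description=Kubernetes Scheduler

[Service]
EnvironmentFile=/opt/kubernetes/cfg/kube-scheduler
ExecStart=/opt/kubernetes/bin/kube-scheduler \$KUBE_SCHEDULER_OPTS
Restart=on-failure

[Install]
WantedBy=multi-user.target
EOF

systemctl daemon-reload
systemctl enable kube-scheduler
systemctl restart kube-scheduler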

  • Start the controller-manager
[root@master k8s]# ./controller-manager.sh 127.0.0.1
Created symlink from /etc/systemd/system/multi-user.target.wants/kube-controller-manager.service to /usr/lib/systemd/system/kube-controller-manager.service.
If you see "-bash: ./controller-manager.sh: Permission denied", run chmod +x controller-manager.sh and start it again.

[root@master k8s]# systemctl status kube-controller-manager


  • Finally, check the status of the master components
[root@master k8s]# /opt/kubernetes/bin/kubectl get cs
NAME                 STATUS    MESSAGE             ERROR
controller-manager   Healthy   ok                  
scheduler            Healthy   ok                  
etcd-1               Healthy   {"health":"true"}   
etcd-2               Healthy   {"health":"true"}   
etcd-0               Healthy   {"health":"true"}


1.9 Deployment on the Nodes

1.9.1 Deploying node01

  • On the master
#Copy kubelet and kube-proxy to the node hosts
[root@master k8s]# cd /root/k8s/kubernetes/server/bin/

[root@master bin]# scp kubelet kube-proxy root@192.168.140.21:/opt/k8s/bin
root@192.168.140.21's password: 
kubelet                                                          100%  168MB 112.3MB/s   00:01    
kube-proxy                                                       100%   48MB  85.7MB/s   00:00    
[root@master bin]# scp kubelet kube-proxy root@192.168.140.22:/opt/k8s/bin
root@192.168.140.22's password: 
kubelet                                                          100%  168MB  99.2MB/s   00:01    
kube-proxy                                                       100%   48MB  91.3MB/s   00:00
  • On node01 (copy node.zip to /root and unpack it)
[root@node01 ~]# rz -E
rz waiting to receive.
[root@node01 ~]# ls
... node.zip
[root@node01 ~]# 
[root@node01 ~]# unzip node.zip
  Archive:  node.zip
  inflating: proxy.sh                
  inflating: kubelet.sh
'//unpacking node.zip yields kubelet.sh and proxy.sh'

[root@node01 ~]# ls
... kubelet.sh  proxy.sh node.zip

  • On the master
#Create the kubeconfig directory
[root@master ~]# cd /root/k8s/
[root@master k8s]# mkdir kubeconfig
[root@master k8s]# cd kubeconfig/
[root@master kubeconfig]# rz -E		//upload the kubeconfig.sh script
rz waiting to receive.
[root@master kubeconfig]# ls
kubeconfig.sh

#Rename the kubeconfig.sh file
[root@master kubeconfig]# mv kubeconfig.sh kubeconfig

[root@master kubeconfig]# vim kubeconfig
----------------delete the following section----------------------
# Create the TLS Bootstrapping Token
#BOOTSTRAP_TOKEN=$(head -c 16 /dev/urandom | od -An -t x | tr -d ' ')
BOOTSTRAP_TOKEN=0fb61c46f8991b718eb38d27b605b008

cat > token.csv <<EOF
${BOOTSTRAP_TOKEN},kubelet-bootstrap,10001,"system:kubelet-bootstrap"
EOF

#Retrieve the token
[root@master kubeconfig]# cat /opt/kubernetes/cfg/token.csv
58ee31e8f123a4b1f57a4c6473aa912f,kubelet-bootstrap,10001,"system:kubelet-bootstrap"
'//note: update the BOOTSTRAP_TOKEN in the kubeconfig script to this token ID'


#Add the directory to PATH (this can also be persisted in /etc/profile)
[root@master kubeconfig]# export PATH=$PATH:/opt/kubernetes/bin

#Generate the kubeconfig files and copy them to the nodes
[root@master kubeconfig]# bash kubeconfig 192.168.140.20 /root/k8s/k8s-cert/
Cluster "kubernetes" set.
User "kubelet-bootstrap" set.
Context "default" created.
Switched to context "default".
Cluster "kubernetes" set.
User "kube-proxy" set.
Context "default" created.
Switched to context "default".
[root@master kubeconfig]# ls
bootstrap.kubeconfig  kubeconfig  kube-proxy.kubeconfig

[root@master kubeconfig]# scp bootstrap.kubeconfig kube-proxy.kubeconfig root@192.168.140.21:/opt/k8s/cfg
root@192.168.140.21's password: 
bootstrap.kubeconfig                                                                                                            100% 2168     2.5MB/s   00:00    
kube-proxy.kubeconfig                                                                                                           100% 6274     8.9MB/s   00:00    
[root@master kubeconfig]# scp bootstrap.kubeconfig kube-proxy.kubeconfig root@192.168.140.22:/opt/k8s/cfg
'#Create the bootstrap role binding that lets kubelets contact the apiserver to request certificate signing (critical)'
[root@master kubeconfig]# kubectl create clusterrolebinding kubelet-bootstrap --clusterrole=system:node-bootstrapper --user=kubelet-bootstrap
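For reference, the heart of such a kubeconfig script is four kubectl config calls per output file; this is a sketch assuming the same positional arguments used above (apiserver IP, then the certificate directory):

#!/bin/bash
APISERVER=$1
SSL_DIR=$2
BOOTSTRAP_TOKEN=58ee31e8f123a4b1f57a4c6473aa912f	# the token from /opt/kubernetes/cfg/token.csv
KUBE_APISERVER="https://${APISERVER}:6443"

# bootstrap.kubeconfig: used by kubelet for its first certificate request
kubectl config set-cluster kubernetes \
  --certificate-authority=${SSL_DIR}/ca.pem \
  --embed-certs=true \
  --server=${KUBE_APISERVER} \
  --kubeconfig=bootstrap.kubeconfig
kubectl config set-credentials kubelet-bootstrap \
  --token=${BOOTSTRAP_TOKEN} \
  --kubeconfig=bootstrap.kubeconfig
kubectl config set-context default \
  --cluster=kubernetes \
  --user=kubelet-bootstrap \
  --kubeconfig=bootstrap.kubeconfig
kubectl config use-context default --kubeconfig=bootstrap.kubeconfig

# kube-proxy.kubeconfig is generated the same way, except the credentials
# step uses the kube-proxy client certificate instead of a token:
#   kubectl config set-credentials kube-proxy \
#     --client-certificate=${SSL_DIR}/kube-proxy.pem \
#     --client-key=${SSL_DIR}/kube-proxy-key.pem \
#     --embed-certs=true \
#     --kubeconfig=kube-proxy.kubeconfig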
  • On node01
[root@node01 ~]# vim kubelet.sh 	
'//change every /opt/kubernetes path to /opt/k8s, because the script's paths do not match this node's working directory'

[root@node01 ~]# bash kubelet.sh 192.168.140.21

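kubelet.sh itself is not shown above. For reference, a minimal sketch of the usual version after the path edit (assumptions: the /opt/k8s layout used on the nodes, bootstrap.kubeconfig already in /opt/k8s/cfg; the pause-image registry is one common choice, not something the original mandates):

#!/bin/bash
# usage: bash kubelet.sh <node ip> [dns ip]
NODE_ADDRESS=$1
DNS_SERVER_IP=${2:-"10.0.0.2"}

cat <<EOF >/opt/k8s/cfg/kubelet
KUBELET_OPTS="--logtostderr=true \\
--v=4 \\
--hostname-override=${NODE_ADDRESS} \\
--kubeconfig=/opt/k8s/cfg/kubelet.kubeconfig \\
--bootstrap-kubeconfig=/opt/k8s/cfg/bootstrap.kubeconfig \\
--config=/opt/k8s/cfg/kubelet.config \\
--cert-dir=/opt/k8s/ssl \\
--pod-infra-container-image=registry.cn-hangzhou.aliyuncs.com/google-containers/pause-amd64:3.0"
EOF

cat <<EOF >/opt/k8s/cfg/kubelet.config
kind: KubeletConfiguration
apiVersion: kubelet.config.k8s.io/v1beta1
address: ${NODE_ADDRESS}
port: 10250
readOnlyPort: 10255
cgroupDriver: cgroupfs
clusterDNS:
- ${DNS_SERVER_IP}
clusterDomain: cluster.local.
failSwapOn: false
authentication:
  anonymous:
    enabled: true
EOF

cat <<EOF >/usr/lib/systemd/system/kubelet.service
[Unit]
Description=Kubernetes Kubelet
After=docker.service
Requires=docker.service

[Service]
EnvironmentFile=/opt/k8s/cfg/kubelet
ExecStart=/opt/k8s/bin/kubelet \$KUBELET_OPTS
Restart=on-failure
KillMode=process

[Install]
WantedBy=multi-user.target
EOF

systemctl daemon-reload
systemctl enable kubelet
systemctl restart kubelet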

  • On the master, the request from node01 appears; check the certificate status
[root@master kubeconfig]# kubectl get csr
NAME                                                   AGE   REQUESTOR           CONDITION
node-csr-t05XEI92wGzkOVJRhDscfY5dqmQRNk-r4YnKh5Ls1FY   63s   kubelet-bootstrap   Pending


  • Issue the certificate, then check its status again
[root@master kubeconfig]# kubectl certificate approve node-csr-t05XEI92wGzkOVJRhDscfY5dqmQRNk-r4YnKh5Ls1FY
certificatesigningrequest.certificates.k8s.io/node-csr-t05XEI92wGzkOVJRhDscfY5dqmQRNk-r4YnKh5Ls1FY approved

[root@master kubeconfig]# kubectl get csr
NAME                                                   AGE     REQUESTOR           CONDITION
node-csr-t05XEI92wGzkOVJRhDscfY5dqmQRNk-r4YnKh5Ls1FY   3m24s   kubelet-bootstrap   Approved,Issued


  • Check the cluster nodes, then start the proxy service
[root@master kubeconfig]# kubectl get node
NAME             STATUS   ROLES    AGE   VERSION
192.168.140.21   Ready    <none>   89s   v1.12.3

#On node01
[root@node01 ~]# vim proxy.sh 
'//edit the script, changing the /opt/kubernetes paths to /opt/k8s'
'//to match the directory layout created earlier'

[root@node01 ~]# bash proxy.sh 192.168.140.21
Created symlink from /etc/systemd/system/multi-user.target.wants/kube-proxy.service to /usr/lib/systemd/system/kube-proxy.service.
[root@node01 ~]# systemctl status kube-proxy.service

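proxy.sh is also small; a minimal sketch of the usual version after the same path edit (the cluster CIDR below mirrors the 10.0.0.0/24 service range configured on the apiserver; treat it as an assumption):

#!/bin/bash
# usage: bash proxy.sh <node ip>
NODE_ADDRESS=$1

cat <<EOF >/opt/k8s/cfg/kube-proxy
KUBE_PROXY_OPTS="--logtostderr=true \\
--v=4 \\
--hostname-override=${NODE_ADDRESS} \\
--cluster-cidr=10.0.0.0/24 \\
--kubeconfig=/opt/k8s/cfg/kube-proxy.kubeconfig"
EOF

cat <<EOF >/usr/lib/systemd/system/kube-proxy.service
[Unit]
Description=Kubernetes Proxy
After=network.target

[Service]
EnvironmentFile=/opt/k8s/cfg/kube-proxy
ExecStart=/opt/k8s/bin/kube-proxy \$KUBE_PROXY_OPTS
Restart=on-failure

[Install]
WantedBy=multi-user.target
EOF

systemctl daemon-reload
systemctl enable kube-proxy
systemctl restart kube-proxy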

1.9.2 Deploying node02

  • Here we can simply copy the configuration over from node01
[root@node01 ~]# scp -r /opt/k8s/ root@192.168.140.22:/opt/
root@192.168.140.22's password: 
flanneld                                                                                                                        100%  238   392.3KB/s   00:00    
bootstrap.kubeconfig                                                                                                            100% 2168     3.4MB/s   00:00    
kube-proxy.kubeconfig                                                                                                           100% 6274    10.7MB/s   00:00    
kubelet                                                                                                                         100%  350   609.7KB/s   00:00    
kubelet.config                                                                                                                  100%  268   589.6KB/s   00:00    
kubelet.kubeconfig                                                                                                              100% 2283     4.8MB/s   00:00    
kube-proxy                                                                                                                      100%  183   362.0KB/s   00:00    
mk-docker-opts.sh                                                                                                               100% 2139     5.8MB/s   00:00    
scp: /opt//k8s/bin/flanneld: Text file busy
kubelet                                                                                                                         100%  168MB 128.1MB/s   00:01    
kube-proxy                                                                                                                      100%   48MB 133.4MB/s   00:00    
kubelet.crt                                                                                                                     100% 2193     2.8MB/s   00:00    
kubelet.key                                                                                                                     100% 1675     1.9MB/s   00:00    
kubelet-client-2021-03-22-01-19-53.pem                                                                                          100% 1277   495.4KB/s   00:00    
kubelet-client-current.pem

[root@node01 ~]# scp /usr/lib/systemd/system/{kubelet,kube-proxy}.service root@192.168.140.22:/usr/lib/systemd/system
  • First delete the certificates that were copied over; node02 will request its own shortly
[root@node02 ~]# cd /opt/k8s/ssl/
[root@node02 ssl]# rm -rf *
  • Then fix the IP addresses in node02's three config files
//edit the kubelet, kubelet.config, and kube-proxy config files (three files)
[root@node02 ~]# cd /opt/k8s/cfg/
[root@node02 cfg]# ls
bootstrap.kubeconfig  cfg  flanneld  kubelet  kubelet.config  kubelet.kubeconfig  kube-proxy  kube-proxy.kubeconfig

#Just change node01's address to node02's in each file (see the one-liner after this list)
[root@node02 cfg]# vim kubelet
[root@node02 cfg]# vim kubelet.config 
[root@node02 cfg]# vim kube-proxy
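Instead of hand-editing, one sed pass over the three files does the same job (a convenience suggestion, not from the original transcript; grep afterwards to confirm):

[root@node02 cfg]# sed -i 's/192.168.140.21/192.168.140.22/g' kubelet kubelet.config kube-proxy
[root@node02 cfg]# grep 192.168.140 kubelet kubelet.config kube-proxy	'//every remaining match should now be .22'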
  • Start the services and check their status
[root@node02 cfg]# systemctl start kubelet
[root@node02 cfg]# systemctl enable kubelet
Created symlink from /etc/systemd/system/multi-user.target.wants/kubelet.service to /usr/lib/systemd/system/kubelet.service.
[root@node02 cfg]# systemctl start kube-proxy
[root@node02 cfg]# systemctl enable kube-proxy
Created symlink from /etc/systemd/system/multi-user.target.wants/kube-proxy.service to /usr/lib/systemd/system/kube-proxy.service.
[root@node02 cfg]# systemctl status kubelet

[root@node02 cfg]# systemctl status kube-proxy


  • On the master, view the pending request and approve it into the cluster
[root@master kubeconfig]# kubectl get csr
NAME                                                   AGE     REQUESTOR           CONDITION
node-csr-U7z-fG1Jmktx3QT7KNGaHjlmpDwW18eJajrAuWLCXWQ   2m21s   kubelet-bootstrap   Pending
node-csr-t05XEI92wGzkOVJRhDscfY5dqmQRNk-r4YnKh5Ls1FY   23m     kubelet-bootstrap   Approved,Issued

[root@master kubeconfig]# kubectl certificate approve node-csr-U7z-fG1Jmktx3QT7KNGaHjlmpDwW18eJajrAuWLCXWQ
certificatesigningrequest.certificates.k8s.io/node-csr-U7z-fG1Jmktx3QT7KNGaHjlmpDwW18eJajrAuWLCXWQ approved
'//approve it into the cluster'
[root@master kubeconfig]# kubectl get csr
  • View the nodes in the cluster
[root@master kubeconfig]# kubectl get node

At this point, the single-master binary deployment of the K8s cluster is complete.
