Kubernetes Installation (Binary Install)

Kubernetes

What is the relationship between k8s and Docker?
k8s is a container orchestration and management platform; Docker is a container runtime. k8s decides where containers run and keeps them in the desired state, while Docker actually builds and runs them.

Deploying Kubernetes from binaries

Cluster roles

  • Master nodes: manage the cluster
  • Node (worker) nodes: run the application workloads
    Components deployed on master nodes
  • kube-apiserver: the central API server through which the whole cluster is managed
  • kube-controller-manager: the controller manager; watches and reconciles cluster state
  • kube-scheduler: the scheduler; assigns Pods to nodes
  • flannel: provides the cluster network
  • etcd: the cluster datastore
  • kubelet
  • kube-proxy
    Components deployed on worker nodes
  • kubelet: starts and monitors containers on the node
  • kube-proxy: provides Service networking between containers

Node plan

192.168.1.81 kubernetes-master-01 m1
192.168.1.82 kubernetes-master-02 m2
192.168.1.83 kubernetes-master-03 m3
192.168.1.84 kubernetes-node-01 n1
192.168.1.85 kubernetes-node-02 n2

# Virtual IP
192.168.1.86 kubernetes-master-vip vip

Component plan

# Master node components
kube-apiserver
kube-controller-manager
kube-scheduler
flannel
etcd
kubelet
kube-proxy

# Node components
kubelet
kube-proxy

System preparation (run on every node)

# Disable SELinux (the config change takes effect on reboot; run setenforce 0 to disable it for the current session)
sed -i 's#SELINUX=enforcing#SELINUX=disabled#g' /etc/selinux/config 

# Disable the firewall
systemctl disable --now firewalld

# Disable swap
swapoff -a
# Comment out the swap entry in /etc/fstab so it stays off after reboot (see the sketch below)
echo 'KUBELET_EXTRA_ARGS="--fail-swap-on=false"' > /etc/sysconfig/kubelet   # let kubelet tolerate swap
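A one-line way to make the change persistent is to comment out the swap entry in /etc/fstab (a minimal sketch; check the result with grep before rebooting):

# Comment out every swap line in /etc/fstab so swap stays off across reboots
sed -ri '/\sswap\s/s/^/#/' /etc/fstab
grep swap /etc/fstab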

# Configure host resolution
[root@kubernetes-master-01 ~]# cat /etc/hosts
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.1.81 kubernetes-master-01 m1
192.168.1.82 kubernetes-master-02 m2
192.168.1.83 kubernetes-master-03 m3
192.168.1.84 kubernetes-node-01 n1
192.168.1.85 kubernetes-node-02 n2
# Virtual IP
192.168.1.86 kubernetes-master-vip vip

# Set the hostname
[root@kubernetes-master-01 ~]# hostnamectl set-hostname kubernetes-master-01


# Set up passwordless SSH
[root@kubernetes-master-01 ~]#  ssh-keygen -t rsa
[root@kubernetes-master-01 ~]#  for i in m1 m2 m3 n1 n2;do  ssh-copy-id -i ~/.ssh/id_rsa.pub root@$i; done

# Sync cluster time
ntpdate ntp1.aliyun.com
hwclock --systohc
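Optionally, keep the clocks from drifting with a periodic re-sync (a simple sketch; running chronyd or ntpd as a daemon would be the more robust choice):

# Re-sync against the Aliyun NTP server every 5 minutes
echo '*/5 * * * * /usr/sbin/ntpdate ntp1.aliyun.com >/dev/null 2>&1' >> /var/spool/cron/root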

# Configure the yum mirror
[root@kubernetes-master-01 ~]#  curl  -o /etc/yum.repos.d/CentOS-Base.repo https://repo.huaweicloud.com/repository/conf/CentOS-7-reg.repo
[root@kubernetes-master-01 ~]# yum clean all
[root@kubernetes-master-01 ~]# yum makecache

# Update the system (keep the stock kernel; a newer one is installed below)
[root@kubernetes-master-01 ~]# yum update -y --exclude=kernel*

# Install common base packages
[root@kubernetes-master-01 ~]# yum install wget expect vim net-tools ntp bash-completion ipvsadm ipset jq iptables conntrack sysstat libseccomp -y

# Upgrade the kernel (Docker is demanding about kernel features; 4.4+ is recommended)
[root@kubernetes-master-01 ~]# wget https://elrepo.org/linux/kernel/el7/x86_64/RPMS/kernel-lt-5.4.130-1.el7.elrepo.x86_64.rpm
[root@kubernetes-master-01 ~]# wget https://elrepo.org/linux/kernel/el7/x86_64/RPMS/kernel-lt-devel-5.4.130-1.el7.elrepo.x86_64.rpm

###   Note: kernel builds rotate on elrepo, so 5.4.130 may no longer exist. If the download 404s, browse https://elrepo.org/linux/kernel/el7/x86_64/RPMS and substitute the current kernel-lt version.

    ## Install the kernel packages
[root@kubernetes-master-01 ~]# yum localinstall -y kernel-lt*
    ## Set it as the default boot entry
[root@kubernetes-master-01 ~]# grub2-set-default  0 && grub2-mkconfig -o /etc/grub2.cfg
    ## Check which kernel boots by default
[root@kubernetes-master-01 ~]# grubby --default-kernel
    ## Reboot
[root@kubernetes-master-01 ~]# reboot
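After the reboot, confirm the machine actually came up on the new kernel:

[root@kubernetes-master-01 ~]# uname -r
5.4.130-1.el7.elrepo.x86_64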

# Install IPVS
    yum install -y conntrack-tools ipvsadm ipset conntrack libseccomp

   	## Load the IPVS modules
cat > /etc/sysconfig/modules/ipvs.modules <<EOF
#!/bin/bash
ipvs_modules="ip_vs ip_vs_lc ip_vs_wlc ip_vs_rr ip_vs_wrr ip_vs_lblc ip_vs_lblcr ip_vs_dh ip_vs_sh ip_vs_fo ip_vs_nq ip_vs_sed ip_vs_ftp nf_conntrack"
for kernel_module in \${ipvs_modules}; do
/sbin/modinfo -F filename \${kernel_module} > /dev/null 2>&1
if [ $? -eq 0 ]; then
/sbin/modprobe \${kernel_module}
fi
done
EOF

    chmod 755 /etc/sysconfig/modules/ipvs.modules && bash /etc/sysconfig/modules/ipvs.modules && lsmod | grep ip_vs

# Tune kernel parameters
cat > /etc/sysctl.d/k8s.conf << EOF
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
fs.may_detach_mounts = 1
vm.overcommit_memory=1
vm.panic_on_oom=0
fs.inotify.max_user_watches=89100
fs.file-max=52706963
fs.nr_open=52706963
net.ipv4.tcp_keepalive_time = 600
net.ipv4.tcp_keepalive_probes = 3
net.ipv4.tcp_keepalive_intvl = 15
net.ipv4.tcp_max_tw_buckets = 36000
net.ipv4.tcp_tw_reuse = 1
net.ipv4.tcp_max_orphans = 327680
net.ipv4.tcp_orphan_retries = 3
net.ipv4.tcp_syncookies = 1
net.ipv4.tcp_max_syn_backlog = 16384
net.netfilter.nf_conntrack_max = 65536
net.ipv4.tcp_timestamps = 0
net.core.somaxconn = 16384
EOF

# Apply immediately
sysctl --system

Install Docker

# Remove any previously installed Docker packages
[root@kubernetes-master-01 ~]#  sudo yum remove docker docker-common docker-selinux docker-engine

# Install Docker's dependencies
[root@kubernetes-master-01 ~]#  sudo yum install -y yum-utils device-mapper-persistent-data lvm2

# Add the Docker yum repository
[root@kubernetes-master-01 ~]#  wget -O /etc/yum.repos.d/docker-ce.repo https://repo.huaweicloud.com/docker-ce/linux/centos/docker-ce.repo

# Install Docker
[root@kubernetes-master-01 ~]#  yum install docker-ce -y

# Enable and start Docker
[root@kubernetes-master-01 ~]#  systemctl enable --now docker.service
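The kubelet configuration later in this guide sets cgroupDriver: cgroupfs, which matches Docker's default on CentOS 7; a quick check that the two agree:

# Print the cgroup driver Docker is using (expect "cgroupfs" for this guide)
[root@kubernetes-master-01 ~]# docker info --format '{{.CgroupDriver}}'
cgroupfs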

Cluster certificates

The following commands only need to be run on master01.
# Install the certificate generation tools (cfssl)
wget https://pkg.cfssl.org/R1.2/cfssl_linux-amd64
wget https://pkg.cfssl.org/R1.2/cfssljson_linux-amd64

# Make them executable
chmod +x cfssljson_linux-amd64
chmod +x cfssl_linux-amd64

# Move them to /usr/local/bin
mv cfssljson_linux-amd64 cfssljson
mv cfssl_linux-amd64 cfssl
mv cfssljson cfssl /usr/local/bin
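A quick sanity check before generating any certificates:

# Should print the cfssl version information
cfssl version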
Generate the root CA
mkdir -p /opt/cert/ca
cat > /opt/cert/ca/ca-config.json <<EOF
{
  "signing": {
    "default": {
      "expiry": "8760h"
    },
    "profiles": {
      "kubernetes": {
        "usages": [
          "signing",
          "key encipherment",
          "server auth",
          "client auth"
        ],
           "expiry": "8760h"
      }
    }
  }
}
EOF
Create the root CA signing request
cat > /opt/cert/ca/ca-csr.json << EOF
{
  "CN": "kubernetes",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names":[{
    "C": "CN",
    "ST": "ShangHai",
    "L": "ShangHai"
  }]
}
EOF
Generate the root certificate

[root@kubernetes-master-01 ~]# cd /opt/cert/ca/
[root@kubernetes-master-01 /opt/cert/ca]# cfssl gencert -initca ca-csr.json | cfssljson -bare ca -
2021/03/26 17:34:55 [INFO] generating a new CA key and certificate from CSR
2021/03/26 17:34:55 [INFO] generate received request
2021/03/26 17:34:55 [INFO] received CSR
2021/03/26 17:34:55 [INFO] generating key: rsa-2048
2021/03/26 17:34:56 [INFO] encoded CSR
2021/03/26 17:34:56 [INFO] signed certificate with serial number 661764636777400005196465272245416169967628201792
[root@kubernetes-master-01 /opt/cert/ca]# ll
total 20
-rw-r--r-- 1 root root  285 Mar 26 17:34 ca-config.json
-rw-r--r-- 1 root root  960 Mar 26 17:34 ca.csr
-rw-r--r-- 1 root root  153 Mar 26 17:34 ca-csr.json
-rw------- 1 root root 1675 Mar 26 17:34 ca-key.pem
-rw-r--r-- 1 root root 1281 Mar 26 17:34 ca.pem

Deploy the etcd cluster

Node plan
192.168.1.81 etcd-01
192.168.1.82 etcd-02
192.168.1.83 etcd-03
Create the etcd cluster certificate
mkdir -p /opt/cert/etcd
cd /opt/cert/etcd

cat > etcd-csr.json << EOF
{
    "CN": "etcd",
    "hosts": [
        "127.0.0.1",
        "192.168.1.81",
        "192.168.1.82",
        "192.168.1.83",
        "192.168.1.84",
        "192.168.1.85",
        "192.168.1.86"
    ],
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
          "C": "CN",
          "ST": "ShangHai",
          "L": "ShangHai"
        }
    ]
}
EOF
Generate the etcd certificate
[root@kubernetes-master-01 /opt/cert/etcd]# cfssl gencert -ca=../ca/ca.pem -ca-key=../ca/ca-key.pem -config=../ca/ca-config.json -profile=kubernetes etcd-csr.json | cfssljson -bare etcd
2021/03/26 17:38:57 [INFO] generate received request
2021/03/26 17:38:57 [INFO] received CSR
2021/03/26 17:38:57 [INFO] generating key: rsa-2048
2021/03/26 17:38:58 [INFO] encoded CSR
2021/03/26 17:38:58 [INFO] signed certificate with serial number 179909685000914921289186132666286329014949215773
2021/03/26 17:38:58 [WARNING] This certificate lacks a "hosts" field. This makes it unsuitable for
websites. For more information see the Baseline Requirements for the Issuance and Management
of Publicly-Trusted Certificates, v.1.1.6, from the CA/Browser Forum (https://cabforum.org);
specifically, section 10.2.3 ("Information Requirements").
Distribute the etcd certificates
[root@kubernetes-master-01 /opt/cert/etcd]# for ip in m1 m2 m3;do 
   ssh root@${ip} "mkdir -pv /etc/etcd/ssl"
   scp ../ca/ca*.pem  root@${ip}:/etc/etcd/ssl
   scp ./etcd*.pem  root@${ip}:/etc/etcd/ssl   
 done
 
mkdir: created directory ‘/etc/etcd’
mkdir: created directory ‘/etc/etcd/ssl’
ca-key.pem                                                      100% 1675   299.2KB/s   00:00    
ca.pem                                                          100% 1281   232.3KB/s   00:00    
etcd-key.pem                                                    100% 1675     1.4MB/s   00:00    
etcd.pem                                                        100% 1379   991.0KB/s   00:00    
mkdir: created directory ‘/etc/etcd’
mkdir: created directory ‘/etc/etcd/ssl’
ca-key.pem                                                      100% 1675     1.1MB/s   00:00    
ca.pem                                                          100% 1281   650.8KB/s   00:00    
etcd-key.pem                                                    100% 1675   507.7KB/s   00:00    
etcd.pem                                                        100% 1379   166.7KB/s   00:00    
mkdir: created directory ‘/etc/etcd’
mkdir: created directory ‘/etc/etcd/ssl’
ca-key.pem                                                      100% 1675   109.1KB/s   00:00    
ca.pem                                                          100% 1281   252.9KB/s   00:00    
etcd-key.pem                                                    100% 1675   121.0KB/s   00:00    
etcd.pem                                                        100% 1379   180.4KB/s   00:00    
[root@kubernetes-master-01 /opt/cert/etcd]# ll /etc/etcd/ssl/
total 16
-rw------- 1 root root 1675 Mar 26 17:41 ca-key.pem
-rw-r--r-- 1 root root 1281 Mar 26 17:41 ca.pem
-rw------- 1 root root 1675 Mar 26 17:41 etcd-key.pem
-rw-r--r-- 1 root root 1379 Mar 26 17:41 etcd.pem

Deploy etcd
# Download the etcd release
wget https://mirrors.huaweicloud.com/etcd/v3.3.24/etcd-v3.3.24-linux-amd64.tar.gz

# Unpack
tar xf etcd-v3.3.24-linux-amd64.tar.gz

# Distribute the binaries to the other nodes
for i in m1 m2 m3
do
    scp ./etcd-v3.3.24-linux-amd64/etcd* root@$i:/usr/local/bin/
done
[root@kubernetes-master-01 /opt/etcd-v3.3.24-linux-amd64]# etcd --version
etcd Version: 3.3.24
Git SHA: bdd57848d
Go Version: go1.12.17
Go OS/Arch: linux/amd64

Register the etcd service
# Run on all three master nodes (make sure each node's own hostname is set first; hostname -i relies on the /etc/hosts entries created earlier)
mkdir -pv /etc/kubernetes/conf/etcd
[root@kubernetes-master-01 etcd]# hostnamectl set-hostname kubernetes-master-01
ETCD_NAME=`hostname`
INTERNAL_IP=`hostname -i`
INITIAL_CLUSTER=kubernetes-master-01=https://192.168.1.81:2380,kubernetes-master-02=https://192.168.1.82:2380,kubernetes-master-03=https://192.168.1.83:2380

cat << EOF | sudo tee /usr/lib/systemd/system/etcd.service
[Unit]
Description=etcd
Documentation=https://github.com/coreos

[Service]
ExecStart=/usr/local/bin/etcd \\
  --name $ETCD_NAME \\
  --cert-file=/etc/etcd/ssl/etcd.pem \\
  --key-file=/etc/etcd/ssl/etcd-key.pem \\
  --peer-cert-file=/etc/etcd/ssl/etcd.pem \\
  --peer-key-file=/etc/etcd/ssl/etcd-key.pem \\
  --trusted-ca-file=/etc/etcd/ssl/ca.pem \\
  --peer-trusted-ca-file=/etc/etcd/ssl/ca.pem \\
  --peer-client-cert-auth \\
  --client-cert-auth \\
  --initial-advertise-peer-urls https://${INTERNAL_IP}:2380 \\
  --listen-peer-urls https://${INTERNAL_IP}:2380 \\
  --listen-client-urls https://${INTERNAL_IP}:2379,https://127.0.0.1:2379 \\
  --advertise-client-urls https://${INTERNAL_IP}:2379 \\
  --initial-cluster-token etcd-cluster \\
  --initial-cluster ${INITIAL_CLUSTER} \\
  --initial-cluster-state new \\
  --data-dir=/var/lib/etcd
Restart=on-failure
RestartSec=5

[Install]
WantedBy=multi-user.target
EOF

# Start etcd (run on all three masters at roughly the same time; the first member blocks until a quorum of peers is up)
systemctl enable --now etcd
Test the etcd cluster
# Option 1: endpoint status
ETCDCTL_API=3 etcdctl \
--cacert=/etc/etcd/ssl/etcd.pem \
--cert=/etc/etcd/ssl/etcd.pem \
--key=/etc/etcd/ssl/etcd-key.pem \
--endpoints="https://192.168.1.81:2379,https://192.168.1.82:2379,https://192.168.1.83:2379" \
endpoint status --write-out='table'


# Option 2: member list
ETCDCTL_API=3 etcdctl \
--cacert=/etc/etcd/ssl/etcd.pem \
--cert=/etc/etcd/ssl/etcd.pem \
--key=/etc/etcd/ssl/etcd-key.pem \
--endpoints="https://192.168.1.81:2379,https://192.168.1.82:2379,https://192.168.1.83:2379" \
member list --write-out='table'
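A third check worth running is endpoint health, which confirms each member actually answers requests (same flags as the commands above):

ETCDCTL_API=3 etcdctl \
--cacert=/etc/etcd/ssl/etcd.pem \
--cert=/etc/etcd/ssl/etcd.pem \
--key=/etc/etcd/ssl/etcd-key.pem \
--endpoints="https://192.168.1.81:2379,https://192.168.1.82:2379,https://192.168.1.83:2379" \
endpoint health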

Deploy the master nodes

The goal here is to get every master-node component deployed and healthy.

Cluster plan
192.168.1.81 172.16.1.81 kubernetes-master-01 m1 
192.168.1.82 172.16.1.82 kubernetes-master-02 m2
192.168.1.83 172.16.1.83 kubernetes-master-03 m3

kube-apiserver, kube-controller-manager, kube-scheduler, flannel, etcd, kubelet, kube-proxy, DNS
Create certificates

Create the cluster certificates

Create the cluster CA certificate
# Only needs to run on master01
[root@kubernetes-master-01 k8s]# mkdir /opt/cert/k8s
[root@kubernetes-master-01 k8s]# cd /opt/cert/k8s
[root@kubernetes-master-01 k8s]# pwd
/opt/cert/k8s
[root@kubernetes-master-01 k8s]# cat > ca-config.json << EOF
{
  "signing": {
    "default": {
      "expiry": "87600h"
    },
    "profiles": {
      "kubernetes": {
         "expiry": "87600h",
         "usages": [
            "signing",
            "key encipherment",
            "server auth",
            "client auth"
        ]
      }
    }
  }
}
EOF
[root@kubernetes-master-01 k8s]# cat > ca-csr.json << EOF
{
    "CN": "kubernetes",
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "L": "ShangHai",
            "ST": "ShangHai"
        }
    ]
}
EOF
[root@kubernetes-master-01 k8s]# ll
total 8
-rw-r--r-- 1 root root 294 Sep 13 19:59 ca-config.json
-rw-r--r-- 1 root root 212 Sep 13 20:01 ca-csr.json
[root@kubernetes-master-01 k8s]# cfssl gencert -initca ca-csr.json | cfssljson -bare ca -
2020/09/13 20:01:45 [INFO] generating a new CA key and certificate from CSR
2020/09/13 20:01:45 [INFO] generate received request
2020/09/13 20:01:45 [INFO] received CSR
2020/09/13 20:01:45 [INFO] generating key: rsa-2048
2020/09/13 20:01:46 [INFO] encoded CSR
2020/09/13 20:01:46 [INFO] signed certificate with serial number 588993429584840635805985813644877690042550093427
[root@kubernetes-master-01 k8s]# ll
total 20
-rw-r--r-- 1 root root  294 Sep 13 19:59 ca-config.json
-rw-r--r-- 1 root root  960 Sep 13 20:01 ca.csr
-rw-r--r-- 1 root root  212 Sep 13 20:01 ca-csr.json
-rw------- 1 root root 1679 Sep 13 20:01 ca-key.pem
-rw-r--r-- 1 root root 1273 Sep 13 20:01 ca.pem
Create the component certificates

Create the certificates used between the individual cluster components.

Create the kube-apiserver certificate
[root@k8s-m-01 /opt/cert/k8s]# mkdir /opt/cert/k8s
[root@k8s-m-01 /opt/cert/k8s]# cd /opt/cert/k8s
[root@k8s-m-01 /opt/cert/k8s]# cat > server-csr.json << EOF
{
    "CN": "kubernetes",
    "hosts": [
        "127.0.0.1",
        "192.168.1.81",
        "192.168.1.82",
        "192.168.1.83",
        "192.168.1.84",
        "192.168.1.85",
        "192.168.1.86",
        "10.96.0.1",
        "kubernetes",
        "kubernetes.default",
        "kubernetes.default.svc",
        "kubernetes.default.svc.cluster",
        "kubernetes.default.svc.cluster.local"
    ],
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "L": "ShangHai",
            "ST": "ShangHai"
        }
    ]
}
EOF

[root@k8s-m-01 /opt/cert/k8s]# cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes server-csr.json | cfssljson -bare server
2021/03/29 09:31:02 [INFO] generate received request
2021/03/29 09:31:02 [INFO] received CSR
2021/03/29 09:31:02 [INFO] generating key: rsa-2048
2021/03/29 09:31:02 [INFO] encoded CSR
2021/03/29 09:31:02 [INFO] signed certificate with serial number 475285860832876170844498652484239182294052997083
2021/03/29 09:31:02 [WARNING] This certificate lacks a "hosts" field. This makes it unsuitable for
websites. For more information see the Baseline Requirements for the Issuance and Management
of Publicly-Trusted Certificates, v.1.1.6, from the CA/Browser Forum (https://cabforum.org);
specifically, section 10.2.3 ("Information Requirements").
[root@k8s-m-01 /opt/cert/k8s]# ll
total 36
-rw-r--r-- 1 root root  294 Mar 29 09:13 ca-config.json
-rw-r--r-- 1 root root  960 Mar 29 09:16 ca.csr
-rw-r--r-- 1 root root  214 Mar 29 09:14 ca-csr.json
-rw------- 1 root root 1675 Mar 29 09:16 ca-key.pem
-rw-r--r-- 1 root root 1281 Mar 29 09:16 ca.pem
-rw-r--r-- 1 root root 1245 Mar 29 09:31 server.csr
-rw-r--r-- 1 root root  603 Mar 29 09:29 server-csr.json
-rw------- 1 root root 1675 Mar 29 09:31 server-key.pem
-rw-r--r-- 1 root root 1574 Mar 29 09:31 server.pem

Create the controller-manager certificate
[kubernetes-master-01 /opt/cert/k8s]# cat > kube-controller-manager-csr.json << EOF
{ 
    "CN": "system:kube-controller-manager", 
    "hosts": [ 
        "127.0.0.1",
        "192.168.1.83", 
        "192.168.1.81", 
        "192.168.1.82", 
        "192.168.1.84",
        "192.168.1.85",
        "192.168.1.86"
    ],
    "key": {
        "algo": "rsa", 
        "size": 2048
    },
    "names": [ 
        { 
            "C": "CN", 
            "L": "ShangHai", 
            "ST": "ShangHai", 
            "O": "system:kube-controller-manager",
            "OU": "System"
        } 
    ] 
}
EOF
[root@kubernetes-master-01 /opt/cert/k8s]# cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes kube-controller-manager-csr.json | cfssljson -bare kube-controller-manager
2021/03/29 09:33:31 [INFO] generate received request
2021/03/29 09:33:31 [INFO] received CSR
2021/03/29 09:33:31 [INFO] generating key: rsa-2048
2021/03/29 09:33:31 [INFO] encoded CSR
2021/03/29 09:33:31 [INFO] signed certificate with serial number 159207911625502250093013220742142932946474251607
2021/03/29 09:33:31 [WARNING] This certificate lacks a "hosts" field. This makes it unsuitable for
websites. For more information see the Baseline Requirements for the Issuance and Management
of Publicly-Trusted Certificates, v.1.1.6, from the CA/Browser Forum (https://cabforum.org);
specifically, section 10.2.3 ("Information Requirements").

Create the kube-scheduler certificate
[root@kubernetes-master-01 /opt/cert/k8s]# cat > kube-scheduler-csr.json << EOF
{ 
    "CN": "system:kube-scheduler", 
    "hosts": [ 
        "127.0.0.1",
        "192.168.1.83", 
        "192.168.1.81", 
        "192.168.1.82", 
        "192.168.1.84",
        "192.168.1.85",
        "192.168.1.86"
    ],
    "key": {
        "algo": "rsa", 
        "size": 2048
    },
    "names": [ 
        { 
            "C": "CN", 
            "L": "ShangHai", 
            "ST": "ShangHai", 
            "O": "system:kube-scheduler",
            "OU": "System"
        } 
    ] 
}
EOF

[root@kubernetes-master-01 /opt/cert/k8s]# cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes kube-scheduler-csr.json | cfssljson -bare kube-scheduler
2021/03/29 09:34:57 [INFO] generate received request
2021/03/29 09:34:57 [INFO] received CSR
2021/03/29 09:34:57 [INFO] generating key: rsa-2048
2021/03/29 09:34:58 [INFO] encoded CSR
2021/03/29 09:34:58 [INFO] signed certificate with serial number 38647006614878532408684142936672497501281226307
2021/03/29 09:34:58 [WARNING] This certificate lacks a "hosts" field. This makes it unsuitable for
websites. For more information see the Baseline Requirements for the Issuance and Management
of Publicly-Trusted Certificates, v.1.1.6, from the CA/Browser Forum (https://cabforum.org);
specifically, section 10.2.3 ("Information Requirements").

Create the kube-proxy certificate
[root@kubernetes-master-01 /opt/cert/k8s]# cat > kube-proxy-csr.json << EOF
{
    "CN":"system:kube-proxy",
    "hosts":[],
    "key":{
        "algo":"rsa",
        "size":2048
    },
    "names":[
        {
            "C":"CN",
            "L":"ShangHai",
            "ST":"ShangHai",
            "O":"system:kube-proxy",
            "OU":"System"
        }
    ]
}
EOF
[root@kubernetes-master-01 /opt/cert/k8s]# cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes kube-proxy-csr.json | cfssljson -bare kube-proxy
2021/03/29 09:37:44 [INFO] generate received request
2021/03/29 09:37:44 [INFO] received CSR
2021/03/29 09:37:44 [INFO] generating key: rsa-2048
2021/03/29 09:37:44 [INFO] encoded CSR
2021/03/29 09:37:44 [INFO] signed certificate with serial number 703321465371340829919693910125364764243453439484
2021/03/29 09:37:44 [WARNING] This certificate lacks a "hosts" field. This makes it unsuitable for
websites. For more information see the Baseline Requirements for the Issuance and Management
of Publicly-Trusted Certificates, v.1.1.6, from the CA/Browser Forum (https://cabforum.org);
specifically, section 10.2.3 ("Information Requirements").

Create the cluster administrator certificate
[root@kubernetes-master-01 /opt/cert/k8s]# cat > admin-csr.json << EOF
{
    "CN":"admin",
    "key":{
        "algo":"rsa",
        "size":2048
    },
    "names":[
        {
            "C":"CN",
            "L":"ShangHai",
            "ST":"ShangHai",
            "O":"system:masters",
            "OU":"System"
        }
    ]
}
EOF
[root@kubernetes-master-01 /opt/cert/k8s]# cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes admin-csr.json | cfssljson -bare admin
2021/03/29 09:36:26 [INFO] generate received request
2021/03/29 09:36:26 [INFO] received CSR
2021/03/29 09:36:26 [INFO] generating key: rsa-2048
2021/03/29 09:36:26 [INFO] encoded CSR
2021/03/29 09:36:26 [INFO] signed certificate with serial number 258862825289855717894394114308507213391711602858
2021/03/29 09:36:26 [WARNING] This certificate lacks a "hosts" field. This makes it unsuitable for
websites. For more information see the Baseline Requirements for the Issuance and Management
of Publicly-Trusted Certificates, v.1.1.6, from the CA/Browser Forum (https://cabforum.org);
specifically, section 10.2.3 ("Information Requirements").

Distribute the certificates
[root@kubernetes-master-01 /opt/cert/k8s]# mkdir -pv /etc/kubernetes/ssl
[root@kubernetes-master-01 /opt/cert/k8s]# cp -p ./{ca*pem,server*pem,kube-controller-manager*pem,kube-scheduler*.pem,kube-proxy*pem,admin*.pem} /etc/kubernetes/ssl

[root@kubernetes-master-01 /opt/cert/k8s]# for i in m1 m2 m3;do
ssh root@$i "mkdir -pv /etc/kubernetes/ssl"
scp /etc/kubernetes/ssl/* root@$i:/etc/kubernetes/ssl
done
Write the configuration files and download the packages
  • Download the packages
# Download the packages
## Download the server tarball
[root@kubernetes-master-01 k8s]# mkdir /opt/data
[root@kubernetes-master-01 k8s]# cd /opt/data/
[root@kubernetes-master-01 /opt/data]# wget https://dl.k8s.io/v1.18.8/kubernetes-server-linux-amd64.tar.gz

## Alternative: the same binaries ship in this image; start it and copy them out
[root@kubernetes-master-01 /opt/data]# docker run -it  registry.cn-hangzhou.aliyuncs.com/k8sos/k8s:v1.18.8.1 bash

## Distribute the components
[root@kubernetes-master-01 /opt/data]# tar -xf kubernetes-server-linux-amd64.tar.gz
[root@kubernetes-master-01 /opt/data]# cd kubernetes/server/bin
[root@kubernetes-master-01 /opt/data]# for i in m1 m2 m3 ;do  scp kube-apiserver kube-controller-manager kube-proxy kubectl kubelet kube-scheduler root@$i:/usr/local/bin; done
  • Create the cluster kubeconfig files

    • Create kube-controller-manager.kubeconfig

      ## Create kube-controller-manager.kubeconfig
      export KUBE_APISERVER="https://192.168.1.86:8443"
      
      # Set cluster parameters
      kubectl config set-cluster kubernetes \
        --certificate-authority=/etc/kubernetes/ssl/ca.pem \
        --embed-certs=true \
        --server=${KUBE_APISERVER} \
        --kubeconfig=kube-controller-manager.kubeconfig
      
      # Set client credentials
      kubectl config set-credentials "kube-controller-manager" \
        --client-certificate=/etc/kubernetes/ssl/kube-controller-manager.pem \
        --client-key=/etc/kubernetes/ssl/kube-controller-manager-key.pem \
        --embed-certs=true \
        --kubeconfig=kube-controller-manager.kubeconfig
      
      # Set the context (this ties the cluster and the user together)
      kubectl config set-context default \
        --cluster=kubernetes \
        --user="kube-controller-manager" \
        --kubeconfig=kube-controller-manager.kubeconfig
      
      # Use this context by default
      kubectl config use-context default --kubeconfig=kube-controller-manager.kubeconfig
      
      
      
    • Create kube-scheduler.kubeconfig

      # Create kube-scheduler.kubeconfig
      export KUBE_APISERVER="https://192.168.1.86:8443"
      
      # Set cluster parameters
      kubectl config set-cluster kubernetes \
        --certificate-authority=/etc/kubernetes/ssl/ca.pem \
        --embed-certs=true \
        --server=${KUBE_APISERVER} \
        --kubeconfig=kube-scheduler.kubeconfig
      
      # Set client credentials
      kubectl config set-credentials "kube-scheduler" \
        --client-certificate=/etc/kubernetes/ssl/kube-scheduler.pem \
        --client-key=/etc/kubernetes/ssl/kube-scheduler-key.pem \
        --embed-certs=true \
        --kubeconfig=kube-scheduler.kubeconfig
      
      # Set the context (this ties the cluster and the user together)
      kubectl config set-context default \
        --cluster=kubernetes \
        --user="kube-scheduler" \
        --kubeconfig=kube-scheduler.kubeconfig
      
      # Use this context by default
      kubectl config use-context default --kubeconfig=kube-scheduler.kubeconfig
      
      
    • Create kube-proxy.kubeconfig

      ## Create kube-proxy.kubeconfig
      export KUBE_APISERVER="https://192.168.1.86:8443"
      
      # Set cluster parameters
      kubectl config set-cluster kubernetes \
        --certificate-authority=/etc/kubernetes/ssl/ca.pem \
        --embed-certs=true \
        --server=${KUBE_APISERVER} \
        --kubeconfig=kube-proxy.kubeconfig
      
      # Set client credentials
      kubectl config set-credentials "kube-proxy" \
        --client-certificate=/etc/kubernetes/ssl/kube-proxy.pem \
        --client-key=/etc/kubernetes/ssl/kube-proxy-key.pem \
        --embed-certs=true \
        --kubeconfig=kube-proxy.kubeconfig
      
      # Set the context (this ties the cluster and the user together)
      kubectl config set-context default \
        --cluster=kubernetes \
        --user="kube-proxy" \
        --kubeconfig=kube-proxy.kubeconfig
      
      # Use this context by default
      kubectl config use-context default --kubeconfig=kube-proxy.kubeconfig
      
    • Create the super-administrator kubeconfig

      export KUBE_APISERVER="https://192.168.1.86:8443"
      
      # Set cluster parameters
      kubectl config set-cluster kubernetes \
        --certificate-authority=/etc/kubernetes/ssl/ca.pem \
        --embed-certs=true \
        --server=${KUBE_APISERVER} \
        --kubeconfig=admin.kubeconfig
      
      # Set client credentials
      kubectl config set-credentials "admin" \
        --client-certificate=/etc/kubernetes/ssl/admin.pem \
        --client-key=/etc/kubernetes/ssl/admin-key.pem \
        --embed-certs=true \
        --kubeconfig=admin.kubeconfig
      
      # Set the context (this ties the cluster and the user together)
      kubectl config set-context default \
        --cluster=kubernetes \
        --user="admin" \
        --kubeconfig=admin.kubeconfig
      
      # Use this context by default
      kubectl config use-context default --kubeconfig=admin.kubeconfig
      
    • Distribute the kubeconfig files

      [root@kubernetes-master-01 /opt/cert/k8s]# for i in m1 m2 m3; do
      ssh root@$i  "mkdir -pv /etc/kubernetes/cfg"
      scp ./*.kubeconfig root@$i:/etc/kubernetes/cfg
      done
      [root@kubernetes-master-01 /opt/cert/k8s]# ll /etc/kubernetes/cfg/
      total 32
      -rw------- 1 root root 6103 Mar 29 10:32 admin.kubeconfig
      -rw------- 1 root root 6319 Mar 29 10:32 kube-controller-manager.kubeconfig
      -rw------- 1 root root 6141 Mar 29 10:32 kube-proxy.kubeconfig
      -rw------- 1 root root 6261 Mar 29 10:32 kube-scheduler.kubeconfig
      
      
    • Create the cluster token

      # Only needs to be created once
      # Use the token generated on your own machine; do not copy one from elsewhere
      
      [root@kubernetes-master-01 bin]# cd /opt/cert/k8s/
      TLS_BOOTSTRAPPING_TOKEN=`head -c 16 /dev/urandom | od -An -t x | tr -d ' '`
      
      cat > token.csv << EOF
      ${TLS_BOOTSTRAPPING_TOKEN},kubelet-bootstrap,10001,"system:kubelet-bootstrap"
      EOF
      
      # Distribute the token; it is used for cluster TLS bootstrap authentication
      [root@kubernetes-master-01 /opt/cert/k8s]# for i in m1 m2 m3;do
      scp token.csv root@$i:/etc/kubernetes/cfg/
      done
      
Deploy the components

Install each component and bring it up so it works correctly.

Install kube-apiserver
Create the kube-apiserver configuration file
# Run on all master nodes
KUBE_APISERVER_IP=`hostname -i`

cat > /etc/kubernetes/cfg/kube-apiserver.conf << EOF
KUBE_APISERVER_OPTS="--logtostderr=false \\
--v=2 \\
--log-dir=/var/log/kubernetes \\
--advertise-address=${KUBE_APISERVER_IP} \\
--default-not-ready-toleration-seconds=360 \\
--default-unreachable-toleration-seconds=360 \\
--max-mutating-requests-inflight=2000 \\
--max-requests-inflight=4000 \\
--default-watch-cache-size=200 \\
--delete-collection-workers=2 \\
--bind-address=0.0.0.0 \\
--secure-port=6443 \\
--allow-privileged=true \\
--service-cluster-ip-range=10.96.0.0/16 \\
--service-node-port-range=30000-52767 \\
--enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,ResourceQuota,NodeRestriction \\
--authorization-mode=RBAC,Node \\
--enable-bootstrap-token-auth=true \\
--token-auth-file=/etc/kubernetes/cfg/token.csv \\
--kubelet-client-certificate=/etc/kubernetes/ssl/server.pem \\
--kubelet-client-key=/etc/kubernetes/ssl/server-key.pem \\
--tls-cert-file=/etc/kubernetes/ssl/server.pem  \\
--tls-private-key-file=/etc/kubernetes/ssl/server-key.pem \\
--client-ca-file=/etc/kubernetes/ssl/ca.pem \\
--service-account-key-file=/etc/kubernetes/ssl/ca-key.pem \\
--audit-log-maxage=30 \\
--audit-log-maxbackup=3 \\
--audit-log-maxsize=100 \\
--audit-log-path=/var/log/kubernetes/k8s-audit.log \\
--etcd-servers=https://192.168.1.81:2379,https://192.168.1.82:2379,https://192.168.1.83:2379 \\
--etcd-cafile=/etc/etcd/ssl/ca.pem \\
--etcd-certfile=/etc/etcd/ssl/etcd.pem \\
--etcd-keyfile=/etc/etcd/ssl/etcd-key.pem"
EOF
Register the kube-apiserver service
# Run on all master nodes
[root@kubernetes-master-01 /opt/cert/k8s]# cat > /usr/lib/systemd/system/kube-apiserver.service << EOF
[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/kubernetes/kubernetes
After=network.target

[Service]
EnvironmentFile=/etc/kubernetes/cfg/kube-apiserver.conf
ExecStart=/usr/local/bin/kube-apiserver \$KUBE_APISERVER_OPTS
Restart=on-failure
RestartSec=10
Type=notify
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
EOF

[root@kubernetes-master-01 /opt/cert/k8s]# systemctl daemon-reload
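The unit is registered but not yet running; once the configuration file and certificates are in place on all three masters, start the apiserver:

# Start kube-apiserver on all three master nodes
[root@kubernetes-master-01 /opt/cert/k8s]# systemctl enable --now kube-apiserver.service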
Make kube-apiserver highly available

Install the HA software

# Install on all three master nodes
# keepalived + haproxy
[root@k8s-m-01 ~]# yum install -y keepalived haproxy
  • Write the keepalived configuration (the health-check script it references is sketched after this block)
  # The configuration differs per node: keep state MASTER / priority 100 on master01, and use state BACKUP with a lower priority (e.g. 90 and 80) on the other two masters
  mv /etc/keepalived/keepalived.conf /etc/keepalived/keepalived.conf_bak  
  cd /etc/keepalived
  
  KUBE_APISERVER_IP=`hostname -i`
  
  cat > /etc/keepalived/keepalived.conf <<EOF
  ! Configuration File for keepalived
  global_defs {
      router_id LVS_DEVEL
  }
  vrrp_script chk_kubernetes {
      script "/etc/keepalived/check_kubernetes.sh"
      interval 2
      weight -5
      fall 3
      rise 2
  }
  vrrp_instance VI_1 {
      state MASTER
      interface eth0
      mcast_src_ip ${KUBE_APISERVER_IP}
      virtual_router_id 51
      priority 100
      advert_int 2
      authentication {
          auth_type PASS
          auth_pass K8SHA_KA_AUTH
      }
      virtual_ipaddress {
          192.168.1.86
      }
  }
  EOF
  
  [root@kubernetes-master-01 /etc/keepalived]# systemctl enable --now keepalived
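  The vrrp_script block above references /etc/keepalived/check_kubernetes.sh. A minimal health-check sketch, assuming the intent is to fail the check (and so lower this node's VRRP priority) whenever the local apiserver stops answering; /healthz is anonymously readable on a default v1.18 cluster:

  cat > /etc/keepalived/check_kubernetes.sh << EOF
  #!/bin/bash
  # Exit non-zero when the local kube-apiserver is unhealthy;
  # keepalived then applies "weight -5" and the VIP can fail over.
  curl -sfk --max-time 2 https://127.0.0.1:6443/healthz >/dev/null || exit 1
  EOF
  chmod +x /etc/keepalived/check_kubernetes.sh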
  • Write the haproxy configuration (the same file on all three masters)


  cat > /etc/haproxy/haproxy.cfg <<EOF
  global
    maxconn  2000
    ulimit-n  16384
    log  127.0.0.1 local0 err
    stats timeout 30s
  
  defaults
    log global
    mode  http
    option  httplog
    timeout connect 5000
    timeout client  50000
    timeout server  50000
    timeout http-request 15s
    timeout http-keep-alive 15s
  
  frontend monitor-in
    bind *:33305
    mode http
    option httplog
    monitor-uri /monitor
  
  listen stats
    bind    *:8006
    mode    http
    stats   enable
    stats   hide-version
    stats   uri       /stats
    stats   refresh   30s
    stats   realm     Haproxy\ Statistics
    stats   auth      admin:admin
  
  frontend k8s-master
    bind 0.0.0.0:8443
    bind 127.0.0.1:8443
    mode tcp
    option tcplog
    tcp-request inspect-delay 5s
    default_backend k8s-master
  
  backend k8s-master
    mode tcp
    option tcplog
    option tcp-check
    balance roundrobin
    default-server inter 10s downinter 5s rise 2 fall 2 slowstart 60s maxconn 250 maxqueue 256 weight 100
    server kubernetes-master-01    192.168.1.81:6443  check inter 2000 fall 2 rise 2 weight 100
    server kubernetes-master-02    192.168.1.82:6443  check inter 2000 fall 2 rise 2 weight 100
    server kubernetes-master-03    192.168.1.83:6443  check inter 2000 fall 2 rise 2 weight 100
  EOF
  
  [root@k8s-m-01 /etc/keepalived]# systemctl enable --now haproxy.service 
  Created symlink from /etc/systemd/system/multi-user.target.wants/haproxy.service to /usr/lib/systemd/system/haproxy.service.
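  To confirm the load-balancing layer works, hit the monitor URI defined above, and (once the apiservers are running) the VIP itself; /version is anonymously readable on a default v1.18 cluster:

  # haproxy liveness check via monitor-uri
  curl -s http://127.0.0.1:33305/monitor

  # apiserver reachability through the VIP; -k skips CA verification for this quick test
  curl -sk https://192.168.1.86:8443/version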

Deploy TLS bootstrapping

TLS bootstrapping lets the apiserver dynamically sign and issue certificates to the node kubelets, automating certificate signing.

Create the bootstrap kubeconfig
# Only needs to run on one node
export KUBE_APISERVER="https://192.168.1.86:8443"

# Set cluster parameters
kubectl config set-cluster kubernetes \
  --certificate-authority=/etc/kubernetes/ssl/ca.pem \
  --embed-certs=true \
  --server=${KUBE_APISERVER} \
  --kubeconfig=kubelet-bootstrap.kubeconfig
  
[root@kubernetes-master-01 ~]# cat /opt/cert/k8s/token.csv
260d7ba3bf682677c42bc50c88ed509b,kubelet-bootstrap,10001,"system:kubelet-bootstrap"

# Set client credentials; the token here must be the one from your own token.csv shown above
kubectl config set-credentials "kubelet-bootstrap" \
  --token=260d7ba3bf682677c42bc50c88ed509b \
  --kubeconfig=kubelet-bootstrap.kubeconfig

# Set the context (this ties the cluster and the user together)
kubectl config set-context default \
  --cluster=kubernetes \
  --user="kubelet-bootstrap" \
  --kubeconfig=kubelet-bootstrap.kubeconfig

# Use this context by default
kubectl config use-context default --kubeconfig=kubelet-bootstrap.kubeconfig

Distribute the bootstrap kubeconfig
# Distribute the kubeconfig
[root@k8s-m-01 /opt/cert/k8s]# for i in m1 m2 m3; do
scp kubelet-bootstrap.kubeconfig root@$i:/etc/kubernetes/cfg/
done
Create the low-privilege TLS bootstrap user
# Create a low-privilege user
[root@k8s-m-01 /opt/cert/k8s]# kubectl create clusterrolebinding kubelet-bootstrap \
--clusterrole=system:node-bootstrapper \
--user=kubelet-bootstrap
clusterrolebinding.rbac.authorization.k8s.io/kubelet-bootstrap created
Deploy kube-controller-manager
Write the configuration file
# Run on all three master nodes
cat > /etc/kubernetes/cfg/kube-controller-manager.conf << EOF
KUBE_CONTROLLER_MANAGER_OPTS="--logtostderr=false \\
--v=2 \\
--log-dir=/var/log/kubernetes \\
--leader-elect=true \\
--cluster-name=kubernetes \\
--bind-address=127.0.0.1 \\
--allocate-node-cidrs=true \\
--cluster-cidr=10.244.0.0/12 \\
--service-cluster-ip-range=10.96.0.0/16 \\
--cluster-signing-cert-file=/etc/kubernetes/ssl/ca.pem \\
--cluster-signing-key-file=/etc/kubernetes/ssl/ca-key.pem  \\
--root-ca-file=/etc/kubernetes/ssl/ca.pem \\
--service-account-private-key-file=/etc/kubernetes/ssl/ca-key.pem \\
--kubeconfig=/etc/kubernetes/cfg/kube-controller-manager.kubeconfig \\
--tls-cert-file=/etc/kubernetes/ssl/kube-controller-manager.pem \\
--tls-private-key-file=/etc/kubernetes/ssl/kube-controller-manager-key.pem \\
--experimental-cluster-signing-duration=87600h0m0s \\
--controllers=*,bootstrapsigner,tokencleaner \\
--use-service-account-credentials=true \\
--node-monitor-grace-period=10s \\
--horizontal-pod-autoscaler-use-rest-clients=true"
EOF
Register the service
# Run on all three master nodes
cat > /usr/lib/systemd/system/kube-controller-manager.service << EOF
[Unit]
Description=Kubernetes Controller Manager
Documentation=https://github.com/kubernetes/kubernetes
After=network.target

[Service]
EnvironmentFile=/etc/kubernetes/cfg/kube-controller-manager.conf
ExecStart=/usr/local/bin/kube-controller-manager \$KUBE_CONTROLLER_MANAGER_OPTS
Restart=on-failure
RestartSec=5

[Install]
WantedBy=multi-user.target
EOF
Start
[root@k8s-m-01 /opt/cert/k8s]# systemctl daemon-reload 
[root@k8s-m-01 /opt/cert/k8s]# systemctl enable --now kube-controller-manager.service 
Created symlink from /etc/systemd/system/multi-user.target.wants/kube-controller-manager.service to /usr/lib/systemd/system/kube-controller-manager.service.

Deploy kube-scheduler
Write the configuration file
# Run on all three machines
cat > /etc/kubernetes/cfg/kube-scheduler.conf << EOF
KUBE_SCHEDULER_OPTS="--logtostderr=false \\
--v=2 \\
--log-dir=/var/log/kubernetes \\
--kubeconfig=/etc/kubernetes/cfg/kube-scheduler.kubeconfig \\
--leader-elect=true \\
--master=http://127.0.0.1:8080 \\
--bind-address=127.0.0.1 "
EOF
Register the service
# Run on all three nodes
cat > /usr/lib/systemd/system/kube-scheduler.service << EOF
[Unit]
Description=Kubernetes Scheduler
Documentation=https://github.com/kubernetes/kubernetes
After=network.target

[Service]
EnvironmentFile=/etc/kubernetes/cfg/kube-scheduler.conf
ExecStart=/usr/local/bin/kube-scheduler \$KUBE_SCHEDULER_OPTS
Restart=on-failure
RestartSec=5

[Install]
WantedBy=multi-user.target
EOF
Start
[root@k8s-m-01 /opt/cert/k8s]# systemctl daemon-reload 
[root@k8s-m-01 /opt/cert/k8s]# systemctl enable --now kube-scheduler.service 
Created symlink from /etc/systemd/system/multi-user.target.wants/kube-scheduler.service to /usr/lib/systemd/system/kube-scheduler.service.

Check the cluster status
[root@k8s-m-01 /opt/cert/k8s]# kubectl get cs
NAME                 STATUS    MESSAGE             ERROR
scheduler            Healthy   ok                  
controller-manager   Healthy   ok                  
etcd-2               Healthy   {"health":"true"}   
etcd-1               Healthy   {"health":"true"}   
etcd-0               Healthy   {"health":"true"}   
Deploy the kubelet service
Create the kubelet configuration file
# Run on all three master nodes
KUBE_HOSTNAME=`hostname`

cat > /etc/kubernetes/cfg/kubelet.conf << EOF
KUBELET_OPTS="--logtostderr=false \\
--v=2 \\
--log-dir=/var/log/kubernetes \\
--hostname-override=${KUBE_HOSTNAME} \\
--container-runtime=docker \\
--kubeconfig=/etc/kubernetes/cfg/kubelet.kubeconfig \\
--bootstrap-kubeconfig=/etc/kubernetes/cfg/kubelet-bootstrap.kubeconfig \\
--config=/etc/kubernetes/cfg/kubelet-config.yml \\
--cert-dir=/etc/kubernetes/ssl \\
--image-pull-progress-deadline=15m \\
--pod-infra-container-image=registry.cn-hangzhou.aliyuncs.com/k8sos/pause:3.2"
EOF
  • Create kubelet-config.yml

    # Run on all three master nodes
    KUBE_HOSTNAME=`hostname -i`
    
    cat > /etc/kubernetes/cfg/kubelet-config.yml << EOF
    kind: KubeletConfiguration
    apiVersion: kubelet.config.k8s.io/v1beta1
    address: ${KUBE_HOSTNAME}
    port: 10250
    readOnlyPort: 10255
    cgroupDriver: cgroupfs
    clusterDNS:
    - 10.96.0.2
    clusterDomain: cluster.local
    failSwapOn: false
    authentication:
      anonymous:
        enabled: false
      webhook:
        cacheTTL: 2m0s
        enabled: true
      x509:
        clientCAFile: /etc/kubernetes/ssl/ca.pem
    authorization:
      mode: Webhook
      webhook:
        cacheAuthorizedTTL: 5m0s
        cacheUnauthorizedTTL: 30s
    evictionHard:
      imagefs.available: 15%
      memory.available: 100Mi
      nodefs.available: 10%
      nodefs.inodesFree: 5%
    maxOpenFiles: 1000000
    maxPods: 110
    EOF
    
Register the kubelet service
# Run on all three master nodes
cat > /usr/lib/systemd/system/kubelet.service << EOF
[Unit]
Description=Kubernetes Kubelet
After=docker.service

[Service]
EnvironmentFile=/etc/kubernetes/cfg/kubelet.conf
ExecStart=/usr/local/bin/kubelet \$KUBELET_OPTS
Restart=on-failure
RestartSec=10
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
EOF
Start
[root@k8s-m-01 /opt/cert/k8s]# systemctl daemon-reload 
[root@k8s-m-01 /opt/cert/k8s]# systemctl enable --now kubelet.service 
Created symlink from /etc/systemd/system/multi-user.target.wants/kubelet.service to /usr/lib/systemd/system/kubelet.service.
[root@k8s-m-01 /opt/cert/k8s]# 

Deploy kube-proxy
Create the configuration file
# Run on all three master nodes
cat > /etc/kubernetes/cfg/kube-proxy.conf << EOF
KUBE_PROXY_OPTS="--logtostderr=false \\
--v=2 \\
--log-dir=/var/log/kubernetes \\
--config=/etc/kubernetes/cfg/kube-proxy-config.yml"
EOF
  • Create kube-proxy-config.yml
  # Run on all three master nodes
  KUBE_HOSTNAME=`hostname -i`
  HOSTNAME=`hostname`
  cat > /etc/kubernetes/cfg/kube-proxy-config.yml << EOF
  kind: KubeProxyConfiguration
  apiVersion: kubeproxy.config.k8s.io/v1alpha1
  bindAddress: ${KUBE_HOSTNAME}
  healthzBindAddress: ${KUBE_HOSTNAME}:10256
  metricsBindAddress: ${KUBE_HOSTNAME}:10249
  clientConnection:
    burst: 200
    kubeconfig: /etc/kubernetes/cfg/kube-proxy.kubeconfig
    qps: 100
  hostnameOverride: ${HOSTNAME}
  clusterCIDR: 10.96.0.0/16
  enableProfiling: true
  mode: "ipvs"
  kubeProxyIPTablesConfiguration:
    masqueradeAll: false
  kubeProxyIPVSConfiguration:
    scheduler: rr
    excludeCIDRs: []
  EOF
Register the service
# Run on all three master nodes
cat > /usr/lib/systemd/system/kube-proxy.service << EOF
[Unit]
Description=Kubernetes Proxy
After=network.target

[Service]
EnvironmentFile=/etc/kubernetes/cfg/kube-proxy.conf
ExecStart=/usr/local/bin/kube-proxy \$KUBE_PROXY_OPTS
Restart=on-failure
RestartSec=10
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
EOF
Start
[root@k8s-m-01 /opt/cert/k8s]# 
[root@k8s-m-01 /opt/cert/k8s]# systemctl daemon-reload 
[root@k8s-m-01 /opt/cert/k8s]# systemctl enable --now kube-proxy.service 
Created symlink from /etc/systemd/system/multi-user.target.wants/kube-proxy.service to /usr/lib/systemd/system/kube-proxy.service.
Join the master nodes to the cluster
View pending certificate signing requests (CSRs)
# Only needs to run on one node
[root@k8s-m-01 /opt/cert/k8s]# kubectl get csr
NAME                                                   AGE    SIGNERNAME                                    REQUESTOR           CONDITION
node-csr-5AWYEWZ0DkF4DzHTOP00M2_Ne6on7XMwvryxbwsh90M   6m3s   kubernetes.io/kube-apiserver-client-kubelet   kubelet-bootstrap   Pending
node-csr-8_Rjm9D7z-04h400v_8RDHHCW3UGILeSRhxx-KkIWNI   6m3s   kubernetes.io/kube-apiserver-client-kubelet   kubelet-bootstrap   Pending
node-csr-wlHMJiNAkMuPsQPoD6dan8QF4AIlm-x_hVYJt9DukIg   6m2s   kubernetes.io/kube-apiserver-client-kubelet   kubelet-bootstrap   Pending
Approve the requests
# Only needs to run on one node
[root@k8s-m-01 /opt/cert/k8s]# kubectl certificate approve `kubectl get csr | grep "Pending" | awk '{print $1}'`
certificatesigningrequest.certificates.k8s.io/node-csr-5AWYEWZ0DkF4DzHTOP00M2_Ne6on7XMwvryxbwsh90M approved
certificatesigningrequest.certificates.k8s.io/node-csr-8_Rjm9D7z-04h400v_8RDHHCW3UGILeSRhxx-KkIWNI approved
certificatesigningrequest.certificates.k8s.io/node-csr-wlHMJiNAkMuPsQPoD6dan8QF4AIlm-x_hVYJt9DukIg approved
[root@k8s-m-01 /opt/cert/k8s]# kubectl get nodes
NAME       STATUS   ROLES    AGE   VERSION
k8s-m-01   Ready    <none>   13s   v1.18.8
k8s-m-02   Ready    <none>   12s   v1.18.8
k8s-m-03   Ready    <none>   12s   v1.18.8

Install the network plugin

This guide uses the flannel network plugin.

flannel releases: https://github.com/flannel-io/flannel/releases

Upstream repository: https://github.com/coreos/flannel

Download and install flannel
# Only needs to run on one node
[root@kubernetes-master-01 data]# wget https://github.com/coreos/flannel/releases/download/v0.11.0/flannel-v0.11.0-linux-amd64.tar.gz
[root@k8s-m-01 /opt/data]# tar -xf flannel-v0.11.0-linux-amd64.tar.gz
[root@k8s-m-01 /opt/data]# for i in m1 m2 m3;do
scp flanneld mk-docker-opts.sh root@$i:/usr/local/bin/
done
Write the flannel configuration into etcd
# Only needs to run on one node
etcdctl \
--ca-file=/etc/etcd/ssl/ca.pem \
--cert-file=/etc/etcd/ssl/etcd.pem \
--key-file=/etc/etcd/ssl/etcd-key.pem \
--endpoints="https://192.168.1.81:2379,https://192.168.1.82:2379,https://192.168.1.83:2379" \
mk /coreos.com/network/config '{"Network":"10.244.0.0/12", "SubnetLen": 21, "Backend": {"Type": "vxlan", "DirectRouting": true}}'
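flanneld v0.11 reads this key over the etcd v2 API, which is why the command above uses the v2-style flags; the same flags can read the key back as a sanity check:

etcdctl \
--ca-file=/etc/etcd/ssl/ca.pem \
--cert-file=/etc/etcd/ssl/etcd.pem \
--key-file=/etc/etcd/ssl/etcd-key.pem \
--endpoints="https://192.168.1.81:2379,https://192.168.1.82:2379,https://192.168.1.83:2379" \
get /coreos.com/network/config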
Register the flannel service
# Run on all three machines
cat > /usr/lib/systemd/system/flanneld.service << EOF
[Unit]
Description=Flanneld address
After=network.target
After=network-online.target
Wants=network-online.target
After=etcd.service
Before=docker.service

[Service]
Type=notify
ExecStart=/usr/local/bin/flanneld \\
  -etcd-cafile=/etc/etcd/ssl/ca.pem \\
  -etcd-certfile=/etc/etcd/ssl/etcd.pem \\
  -etcd-keyfile=/etc/etcd/ssl/etcd-key.pem \\
  -etcd-endpoints=https://192.168.1.81:2379,https://192.168.1.82:2379,https://192.168.1.83:2379 \\
  -etcd-prefix=/coreos.com/network \\
  -ip-masq
ExecStartPost=/usr/local/bin/mk-docker-opts.sh -k DOCKER_NETWORK_OPTIONS -d /run/flannel/subnet.env
Restart=always
RestartSec=5
StartLimitInterval=0
[Install]
WantedBy=multi-user.target
RequiredBy=docker.service
EOF
Modify the Docker unit file
# Run on all three machines
# Let flannel take over the Docker network
sed -i '/ExecStart/s/\(.*\)/#\1/' /usr/lib/systemd/system/docker.service
sed -i '/ExecReload/a ExecStart=/usr/bin/dockerd $DOCKER_NETWORK_OPTIONS -H fd:// --containerd=/run/containerd/containerd.sock' /usr/lib/systemd/system/docker.service
sed -i '/ExecReload/a EnvironmentFile=-/run/flannel/subnet.env' /usr/lib/systemd/system/docker.service
Start
# Run on all three
# Start flannel first, then Docker
[root@k8s-m-01 ~]# systemctl daemon-reload 
[root@k8s-m-01 ~]# systemctl enable --now flanneld.service 
Created symlink from /etc/systemd/system/multi-user.target.wants/flanneld.service to /usr/lib/systemd/system/flanneld.service.
Created symlink from /etc/systemd/system/docker.service.requires/flanneld.service to /usr/lib/systemd/system/flanneld.service.
[root@k8s-m-01 ~]# systemctl restart docker

Verify the cluster network
# From every node, ping the flannel address of the other nodes; a concrete sketch follows
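A sketch of that check (the subnet and address below are examples; read the real ones from each node):

# See which subnet flannel leased to this node
cat /run/flannel/subnet.env

# The flannel.1 VXLAN interface holds this node's address in that subnet
ip addr show flannel.1

# From another node, ping that address (example address)
ping -c 3 10.244.0.1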
Install the cluster DNS
# Only needs to run on one node
# Download the CoreDNS deployment package
[root@k8s-m-01 ~]# wget https://github.com/coredns/deployment/archive/refs/heads/master.zip
[root@k8s-m-01 ~]# unzip master.zip
[root@k8s-m-01 ~]# cd deployment-master/kubernetes

# Run the deploy script
[root@k8s-m-01 ~/deployment-master/kubernetes]# ./deploy.sh -i 10.96.0.2 -s | kubectl apply -f -

# Verify the cluster DNS pod is running
[root@k8s-m-01 ~/deployment-master/kubernetes]# kubectl get pods -n kube-system
NAME                      READY   STATUS    RESTARTS   AGE
coredns-6ff445f54-m28gw   1/1     Running   0          48s

Verify the cluster
# Bind the super-admin role (only needs to run on one server)
[root@k8s-m-01 ~/deployment-master/kubernetes]# kubectl create clusterrolebinding cluster-system-anonymous --clusterrole=cluster-admin --user=kubernetes
clusterrolebinding.rbac.authorization.k8s.io/cluster-system-anonymous created

# Verify that cluster DNS and cluster networking work
[root@k8s-m-01 ~/deployment-master/kubernetes]# kubectl run test -it --rm --image=busybox:1.28.3
If you don't see a command prompt, try pressing enter.
/ # nslookup kubernetes
Server:    10.96.0.2
Address 1: 10.96.0.2 kube-dns.kube-system.svc.cluster.local

Name:      kubernetes
Address 1: 10.96.0.1 kubernetes.default.svc.cluster.local

Deploy the worker nodes

Which components does a worker node need?

kubelet, kube-proxy, flannel

Cluster plan

192.168.1.84  k8s-n-01 n1
192.168.1.85  k8s-n-02 n2

Node preparation

# Set up passwordless SSH to the new nodes (they also need the same system preparation done earlier)
for i in n1 n2;do  ssh-copy-id -i ~/.ssh/id_rsa.pub root@$i; done

Distribute the binaries

[root@k8s-m-01 /opt/data]# cd /opt/data
[root@k8s-m-01 /opt/data]# for i in n1 n2;do scp flanneld mk-docker-opts.sh kubernetes/server/bin/kubelet kubernetes/server/bin/kube-proxy root@$i:/usr/local/bin; done

Distribute the certificates

[root@k8s-m-01 /opt/data]# for i in n1 n2; do ssh root@$i "mkdir -pv /etc/kubernetes/ssl"; scp -pr /etc/kubernetes/ssl/{ca*.pem,admin*pem,kube-proxy*pem} root@$i:/etc/kubernetes/ssl; done

Distribute the configuration files

# flanneld, the etcd certificates, and docker.service

# Distribute the etcd certificates
[root@k8s-m-01 /etc/etcd/ssl]# cd /etc/etcd/ssl
[root@k8s-m-01 /etc/etcd/ssl]# for i in n1 n2 ;do ssh root@$i "mkdir -pv /etc/etcd/ssl"; scp ./*  root@$i:/etc/etcd/ssl; done

# Distribute the flannel and docker unit files
[root@k8s-m-01 /etc/etcd/ssl]# for i in n1 n2;do scp /usr/lib/systemd/system/docker.service root@$i:/usr/lib/systemd/system/docker.service; scp /usr/lib/systemd/system/flanneld.service root@$i:/usr/lib/systemd/system/flanneld.service; done

Deploy kubelet

[root@k8s-m-01 ~]# for i in n1 n2 ;do 
    ssh root@$i "mkdir -pv  /etc/kubernetes/cfg";
    scp /etc/kubernetes/cfg/kubelet.conf root@$i:/etc/kubernetes/cfg/kubelet.conf; 
    scp /etc/kubernetes/cfg/kubelet-config.yml root@$i:/etc/kubernetes/cfg/kubelet-config.yml; 
    scp /usr/lib/systemd/system/kubelet.service root@$i:/usr/lib/systemd/system/kubelet.service; 
    scp /etc/kubernetes/cfg/kubelet.kubeconfig root@$i:/etc/kubernetes/cfg/kubelet.kubeconfig; 
    scp /etc/kubernetes/cfg/kubelet-bootstrap.kubeconfig root@$i:/etc/kubernetes/cfg/kubelet-bootstrap.kubeconfig; 
    scp /etc/kubernetes/cfg/token.csv root@$i:/etc/kubernetes/cfg/token.csv;
done

# On each node, change the IP address in kubelet-config.yml and the hostname in kubelet.conf (a scripted alternative follows this block)
[root@k8s-n-02 ~]# vim /etc/kubernetes/cfg/kubelet-config.yml 
....
address: 192.168.1.85
....

[root@k8s-n-02 ~]# vim /etc/kubernetes/cfg/kubelet.conf
....
--hostname-override=k8s-n-02 
....
# Start kubelet
[root@k8s-n-02 ~]# systemctl enable --now kubelet.service 
Created symlink from /etc/systemd/system/multi-user.target.wants/kubelet.service to /usr/lib/systemd/system/kubelet.service.
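The two per-node edits can also be scripted instead of made in vim (a sketch; run on each node with that node's own IP and hostname substituted):

IP=192.168.1.85
NODENAME=k8s-n-02
# Rewrite the kubelet listen address and the node name override in place
sed -i  "s/^address: .*/address: ${IP}/" /etc/kubernetes/cfg/kubelet-config.yml
sed -ri "s/(--hostname-override=)[^ ]+/\1${NODENAME}/" /etc/kubernetes/cfg/kubelet.conf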

Deploy kube-proxy

[root@k8s-m-01 ~]# for i in n1 n2 ; do 
    scp /etc/kubernetes/cfg/kube-proxy.conf root@$i:/etc/kubernetes/cfg/kube-proxy.conf;  
    scp /etc/kubernetes/cfg/kube-proxy-config.yml root@$i:/etc/kubernetes/cfg/kube-proxy-config.yml ;  
    scp /usr/lib/systemd/system/kube-proxy.service root@$i:/usr/lib/systemd/system/kube-proxy.service;  
    scp /etc/kubernetes/cfg/kube-proxy.kubeconfig root@$i:/etc/kubernetes/cfg/kube-proxy.kubeconfig;
    done
    
# On each node, change the IPs and the hostnameOverride in kube-proxy-config.yml
[root@k8s-n-02 ~]# vim /etc/kubernetes/cfg/kube-proxy-config.yml 
....
bindAddress: 192.168.1.85
healthzBindAddress: 192.168.1.85:10256
metricsBindAddress: 192.168.1.85:10249
....
   
# Start
[root@k8s-n-02 ~]# systemctl enable --now kube-proxy.service 

Start flannel and Docker
[root@k8s-n-01 ~]# systemctl enable --now flanneld.service
[root@k8s-n-01 ~]# systemctl enable --now docker.service

Join the cluster

# Check the cluster status
[root@k8s-m-01 ~]# kubectl get cs
NAME                 STATUS    MESSAGE             ERROR
scheduler            Healthy   ok                  
controller-manager   Healthy   ok                  
etcd-0               Healthy   {"health":"true"}   
etcd-1               Healthy   {"health":"true"}   
etcd-2               Healthy   {"health":"true"}   

# View pending join requests
[root@k8s-m-01 ~]# kubectl get csr
NAME                                                   AGE   SIGNERNAME                                    REQUESTOR           CONDITION
node-csr-_yClVuQCNzDb566yZV5sFJmLsoU13Wba0FOhQ5pmVPY   12m   kubernetes.io/kube-apiserver-client-kubelet   kubelet-bootstrap   Pending
node-csr-m3kFnO7GPBYeBcen5GQ1RdTlt77_rhedLPe97xO_5hw   12m   kubernetes.io/kube-apiserver-client-kubelet   kubelet-bootstrap   Pending

# Approve them
[root@k8s-m-01 ~]# kubectl certificate approve `kubectl get csr | grep "Pending" | awk '{print $1}'`
certificatesigningrequest.certificates.k8s.io/node-csr-_yClVuQCNzDb566yZV5sFJmLsoU13Wba0FOhQ5pmVPY approved
certificatesigningrequest.certificates.k8s.io/node-csr-m3kFnO7GPBYeBcen5GQ1RdTlt77_rhedLPe97xO_5hw approved

# Check the request status
[root@k8s-m-01 ~]# kubectl get csr
NAME                                                   AGE   SIGNERNAME                                    REQUESTOR           CONDITION
node-csr-_yClVuQCNzDb566yZV5sFJmLsoU13Wba0FOhQ5pmVPY   14m   kubernetes.io/kube-apiserver-client-kubelet   kubelet-bootstrap   Approved,Issued
node-csr-m3kFnO7GPBYeBcen5GQ1RdTlt77_rhedLPe97xO_5hw   14m   kubernetes.io/kube-apiserver-client-kubelet   kubelet-bootstrap   Approved,Issued

# List the cluster nodes
[root@k8s-m-01 ~]# kubectl get nodes
NAME       STATUS   ROLES    AGE   VERSION
k8s-m-01   Ready    <none>   21h   v1.18.8
k8s-m-02   Ready    <none>   21h   v1.18.8
k8s-m-03   Ready    <none>   21h   v1.18.8
k8s-n-01   Ready    <none>   36s   v1.18.8
k8s-n-02   Ready    <none>   36s   v1.18.8

Set node role labels

[root@k8s-m-01 ~]# kubectl label nodes k8s-m-01 node-role.kubernetes.io/master=k8s-m-01
node/k8s-m-01 labeled
[root@k8s-m-01 ~]# kubectl label nodes k8s-m-02 node-role.kubernetes.io/master=k8s-m-02
node/k8s-m-02 labeled
[root@k8s-m-01 ~]# kubectl label nodes k8s-m-03 node-role.kubernetes.io/master=k8s-m-03
node/k8s-m-03 labeled
[root@k8s-m-01 ~]# 
[root@k8s-m-01 ~]# kubectl label nodes k8s-n-01 node-role.kubernetes.io/node=k8s-n-01
node/k8s-n-01 labeled
[root@k8s-m-01 ~]# kubectl label nodes k8s-n-02 node-role.kubernetes.io/node=k8s-n-02
node/k8s-n-02 labeled
[root@k8s-m-01 ~]# kubectl get nodes
NAME       STATUS     ROLES    AGE    VERSION
k8s-m-01   Ready      master   21h    v1.18.8
k8s-m-02   Ready      master   21h    v1.18.8
k8s-m-03   NotReady   master   21h    v1.18.8
k8s-n-01   Ready      node     4m5s   v1.18.8
k8s-n-02   Ready      node     4m5s   v1.18.8

Install the cluster dashboard (web UI)

# https://github.com/kubernetes/dashboard
# Create recommended.yaml with the following content
# Copyright 2017 The Kubernetes Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

apiVersion: v1
kind: Namespace
metadata:
  name: kubernetes-dashboard

---

apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard

---

kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
spec:
  ports:
    - port: 443
      targetPort: 8443
  selector:
    k8s-app: kubernetes-dashboard

---

apiVersion: v1
kind: Secret
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard-certs
  namespace: kubernetes-dashboard
type: Opaque

---

apiVersion: v1
kind: Secret
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard-csrf
  namespace: kubernetes-dashboard
type: Opaque
data:
  csrf: ""

---

apiVersion: v1
kind: Secret
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard-key-holder
  namespace: kubernetes-dashboard
type: Opaque

---

kind: ConfigMap
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard-settings
  namespace: kubernetes-dashboard

---

kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
rules:
  # Allow Dashboard to get, update and delete Dashboard exclusive secrets.
  - apiGroups: [""]
    resources: ["secrets"]
    resourceNames: ["kubernetes-dashboard-key-holder", "kubernetes-dashboard-certs", "kubernetes-dashboard-csrf"]
    verbs: ["get", "update", "delete"]
    # Allow Dashboard to get and update 'kubernetes-dashboard-settings' config map.
  - apiGroups: [""]
    resources: ["configmaps"]
    resourceNames: ["kubernetes-dashboard-settings"]
    verbs: ["get", "update"]
    # Allow Dashboard to get metrics.
  - apiGroups: [""]
    resources: ["services"]
    resourceNames: ["heapster", "dashboard-metrics-scraper"]
    verbs: ["proxy"]
  - apiGroups: [""]
    resources: ["services/proxy"]
    resourceNames: ["heapster", "http:heapster:", "https:heapster:", "dashboard-metrics-scraper", "http:dashboard-metrics-scraper"]
    verbs: ["get"]

---

kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
rules:
  # Allow Metrics Scraper to get metrics from the Metrics server
  - apiGroups: ["metrics.k8s.io"]
    resources: ["pods", "nodes"]
    verbs: ["get", "list", "watch"]

---

apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: kubernetes-dashboard
subjects:
  - kind: ServiceAccount
    name: kubernetes-dashboard
    namespace: kubernetes-dashboard

---

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: kubernetes-dashboard
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: kubernetes-dashboard
subjects:
  - kind: ServiceAccount
    name: kubernetes-dashboard
    namespace: kubernetes-dashboard

---

kind: Deployment
apiVersion: apps/v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
spec:
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      k8s-app: kubernetes-dashboard
  template:
    metadata:
      labels:
        k8s-app: kubernetes-dashboard
    spec:
      containers:
        - name: kubernetes-dashboard
          image: kubernetesui/dashboard:v2.0.4
          imagePullPolicy: Always
          ports:
            - containerPort: 8443
              protocol: TCP
          args:
            - --auto-generate-certificates
            - --namespace=kubernetes-dashboard
            # Uncomment the following line to manually specify Kubernetes API server Host
            # If not specified, Dashboard will attempt to auto discover the API server and connect
            # to it. Uncomment only if the default does not work.
            # - --apiserver-host=http://my-address:port
          volumeMounts:
            - name: kubernetes-dashboard-certs
              mountPath: /certs
              # Create on-disk volume to store exec logs
            - mountPath: /tmp
              name: tmp-volume
          livenessProbe:
            httpGet:
              scheme: HTTPS
              path: /
              port: 8443
            initialDelaySeconds: 30
            timeoutSeconds: 30
          securityContext:
            allowPrivilegeEscalation: false
            readOnlyRootFilesystem: true
            runAsUser: 1001
            runAsGroup: 2001
      volumes:
        - name: kubernetes-dashboard-certs
          secret:
            secretName: kubernetes-dashboard-certs
        - name: tmp-volume
          emptyDir: {}
      serviceAccountName: kubernetes-dashboard
      nodeSelector:
        "kubernetes.io/os": linux
      # Comment the following tolerations if Dashboard must not be deployed on master
      tolerations:
        - key: node-role.kubernetes.io/master
          effect: NoSchedule

---

kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: dashboard-metrics-scraper
  name: dashboard-metrics-scraper
  namespace: kubernetes-dashboard
spec:
  ports:
    - port: 8000
      targetPort: 8000
  selector:
    k8s-app: dashboard-metrics-scraper

---

kind: Deployment
apiVersion: apps/v1
metadata:
  labels:
    k8s-app: dashboard-metrics-scraper
  name: dashboard-metrics-scraper
  namespace: kubernetes-dashboard
spec:
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      k8s-app: dashboard-metrics-scraper
  template:
    metadata:
      labels:
        k8s-app: dashboard-metrics-scraper
      annotations:
        seccomp.security.alpha.kubernetes.io/pod: 'runtime/default'
    spec:
      containers:
        - name: dashboard-metrics-scraper
          image: kubernetesui/metrics-scraper:v1.0.4
          ports:
            - containerPort: 8000
              protocol: TCP
          livenessProbe:
            httpGet:
              scheme: HTTP
              path: /
              port: 8000
            initialDelaySeconds: 30
            timeoutSeconds: 30
          volumeMounts:
          - mountPath: /tmp
            name: tmp-volume
          securityContext:
            allowPrivilegeEscalation: false
            readOnlyRootFilesystem: true
            runAsUser: 1001
            runAsGroup: 2001
      serviceAccountName: kubernetes-dashboard
      nodeSelector:
        "kubernetes.io/os": linux
      # Comment the following tolerations if Dashboard must not be deployed on master
      tolerations:
        - key: node-role.kubernetes.io/master
          effect: NoSchedule
      volumes:
        - name: tmp-volume
          emptyDir: {}

# Install
[root@k8s-m-01 ~]# kubectl apply -f recommended.yaml 
namespace/kubernetes-dashboard created
serviceaccount/kubernetes-dashboard created
service/kubernetes-dashboard created
secret/kubernetes-dashboard-certs created
secret/kubernetes-dashboard-csrf created
secret/kubernetes-dashboard-key-holder created
configmap/kubernetes-dashboard-settings created
role.rbac.authorization.k8s.io/kubernetes-dashboard created
clusterrole.rbac.authorization.k8s.io/kubernetes-dashboard created
rolebinding.rbac.authorization.k8s.io/kubernetes-dashboard created
clusterrolebinding.rbac.authorization.k8s.io/kubernetes-dashboard created
deployment.apps/kubernetes-dashboard created
service/dashboard-metrics-scraper created
deployment.apps/dashboard-metrics-scraper created

# Expose a NodePort for access (change the Service type)
[root@k8s-m-01 ~]# kubectl edit svc -n kubernetes-dashboard kubernetes-dashboard
type: ClusterIP   =>  type: NodePort

# Check the assigned port after the change
[root@k8s-m-01 ~]# kubectl get svc -n kubernetes-dashboard
NAME                        TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)         AGE
dashboard-metrics-scraper   ClusterIP   10.96.86.206    <none>        8000/TCP        3m58s
kubernetes-dashboard        NodePort    10.96.113.185   <none>        443:46491/TCP   3m59s

# Create the token (admin ServiceAccount) manifest
[root@k8s-m-01 ~]# vim token.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: admin-user
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: admin-user
  annotations:
    rbac.authorization.kubernetes.io/autoupdate: "true"
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: admin-user
  namespace: kube-system

# Apply it to the cluster
[root@k8s-m-01 ~]# kubectl apply -f token.yaml 
serviceaccount/admin-user created
clusterrolebinding.rbac.authorization.k8s.io/admin-user created

# Retrieve the login token
[root@k8s-m-01 ~]# kubectl -n kube-system describe secret $(kubectl -n kube-system get secret | grep admin-user | awk '{print $1}') | grep token: | awk '{print $2}'
eyJhbGciOiJSUzI1NiIsImtpZCI6Ikx1bS1XSGE3Rk9wMnFNdVNqdHJGVi1ERFAzVjFyQXFXeFBWQ0RGS2N0bUUifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJhZG1pbi11c2VyLXRva2VuLWtuYnptIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQubmFtZSI6ImFkbWluLXVzZXIiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC51aWQiOiI1YzVkMzdmMS03MTFkLTQ0YjYtOWIzOS0zZjEzMDFkNDRjMmUiLCJzdWIiOiJzeXN0ZW06c2VydmljZWFjY291bnQ6a3ViZS1zeXN0ZW06YWRtaW4tdXNlciJ9.oJqolcMm_W81p3JnprpHSRaIFCjL533ihC5_YMWRRE9WZmpK6_-EpBk6GmnuGJ4EsRT89AxJIqDN3edxtLnirRoqTUTfqDbU-ik5fyDZ_rRjuLD8fFtkZ6-WXHGo76Kj-Sw8CnpLzkaced9KhpRLHtMFawQxkeMf2SKVgxr1uWWmzVTXaylhh_frIvSbkxt5A_YflEhGyHPh6EbPg1T9WoDsLa7oTXGGAXzu97_j2AU3u6TBPkwKn4S-cFOceh_KqSKNsnmwGE9FohaKQP1X_WpDgfjXohR7xbGScW_VUj1XuI_75Exip8yflgZ70rF93xnfS69V_1wvPs2sfOO5-g

# Paste the token into the dashboard login page (https://<node-ip>:46491, the NodePort shown above); if the dashboard UI loads, the deployment succeeded

Troubleshooting

Problem 1: Error from server (InternalError): an error on the server ("") has prevented the request from succeeding

Analysis: taken literally, the server failed to handle the request; the most common cause is that kube-apiserver.service is not running.
Solution:
Step 1: check the service with: systemctl status kube-apiserver.service
Step 2: if it is stopped, start it with: systemctl start kube-apiserver.service
Then re-run the failing command; everything should work again.

Problem 2: The connection to the server localhost:8080 was refused - did you specify the right host or port?

Analysis: the message suggests a wrong port, but the real cause is usually that the kubeconfig has not been exported into the environment.
Solution:
The config file lives in a different path for a yum install than for a binary install, so the command differs accordingly.

yum install:
Option 1:
vim /etc/profile
append export KUBECONFIG=/etc/kubernetes/admin.conf at the end of the file
source /etc/profile
Option 2:
echo "export KUBECONFIG=/etc/kubernetes/admin.conf" >> /etc/profile
source /etc/profile

Binary install:
Option 1:
vim /etc/profile
append export KUBECONFIG=/etc/kubernetes/cfg/admin.kubeconfig at the end of the file
source /etc/profile
Option 2:
echo "export KUBECONFIG=/etc/kubernetes/cfg/admin.kubeconfig" >> /etc/profile
source /etc/profile

Problem 3: Unable to connect to the server: net/http: TLS handshake timeout

Analysis: it looks like a connection timeout, but it is usually caused by insufficient memory.
Solution: give the VM more resources, e.g. go from 1 CPU / 1 GB to 2 CPU / 2 GB.
