Deploying a Highly Available Kubernetes Cluster from Binaries

Kubernetes Overview

  • Website: https://kubernetes.io
  • GitHub: https://github.com/kubernetes/kubernetes
  • Origin: derived from Google's internal Borg system, later rewritten in Go
  • Key role: an open-source container orchestration framework (with a rich ecosystem)
# Official overview:
Kubernetes is a portable, extensible, open-source platform for managing containerized workloads and services, facilitating declarative configuration and automation. It has a large, rapidly growing ecosystem; Kubernetes services, support, and tools are widely available.

Kubernetes Core Components

# Configuration store
   etcd service

# Control-plane (master) node
 kube-apiserver service
  Exposes the REST API for cluster management (including authentication/authorization, data validation, and cluster state changes)
  Handles data exchange between the other components, acting as the communication hub
  Is the entry point for resource quota control
  Provides the cluster's security mechanisms
 
 kube-controller-manager service
  Composed of a set of controllers that watch cluster state through the apiserver and keep the cluster in its desired state
  Node Controller
  Deployment Controller
  Service Controller
  Volume Controller
  Endpoint Controller
  Namespace Controller
  Job Controller
  ResourceQuota Controller
   ...

kube-scheduler service 
  Its main job is to schedule Pods onto suitable worker nodes
  Predicate policies (predicates)
  Priority policies (priorities)

# Worker node
kubelet service
  Calls the container runtime's API to reach the desired state (which containers to run, replica counts, network and storage configuration, etc.)
  Periodically reports the node's status to the apiserver for use during scheduling
  Cleans up images and containers, so that images do not fill the disk and exited containers do not hold on to resources
  
kube-proxy service
  The network proxy that runs on every K8S node; the carrier of Service resources
  Maps the cluster network onto the pod network (clusterIP ---> podIP)
  Three common traffic-scheduling modes (see the sketch below)
     Userspace
     Iptables
     Ipvs
  Responsible for creating, deleting, and updating forwarding rules, reporting its own updates to the apiserver, and fetching rule changes made by other kube-proxy instances from the apiserver to update itself
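A hedged sketch of selecting the IPVS mode (the deployment later in this document keeps kube-proxy's default iptables mode; the module list below is typical for recent CentOS 7 kernels but may vary):

# Load the kernel modules IPVS mode depends on (older kernels use nf_conntrack_ipv4 instead of nf_conntrack)
]# for m in ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack; do modprobe $m; done
# Then select the mode in kube-proxy's KubeProxyConfiguration (see 6.3.3):
#   mode: ipvs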

# CLI client
  kubectl

Kubernetes Core Add-ons

# CNI network plugin: flannel/calico

# Service discovery plugin: coredns

# Service exposure plugin: traefik

# GUI management plugin: Dashboard

Common Ways to Install Kubernetes

#- Minikube: a single-node micro K8S
#- Binary installation (this document)
#- kubeadm, the K8S deployment tool

1. Preparation

1.1 Hardware (VM) Environment

# 5 virtual machines
# Memory: at least 2 GB
# CPU: at least 2 cores
# Disk: 30 GB

1.2 Software Environment

Software            Version
OS                  CentOS 7
Container runtime   Docker CE 20
K8S                 Kubernetes v1.20.10

1.3 Virtual Machine Plan

VM      Host name       IP address       Components
VM-1    K8S-MASTER-01   192.62.10.11     kube-apiserver, kube-controller-manager, kube-scheduler, etcd, nginx, keepalived, docker
VM-2    K8S-MASTER-02   192.62.10.12     kube-apiserver, kube-controller-manager, kube-scheduler, nginx, keepalived, docker
VM-3    K8S-NODE-01     192.62.10.21     kubelet, kube-proxy, etcd, docker
VM-4    K8S-NODE-02     192.62.10.22     kubelet, kube-proxy, etcd, docker
VM-5    K8S-HARBOR      192.62.10.100    ca, harbor, bind
VIP (load balancer)     192.62.10.200

1.4 OS Configuration

# Turn off the firewall
]# systemctl stop firewalld
]# systemctl disable firewalld

# Disable selinux
]# sed -i 's/SELINUX=enforcing/SELINUX=disabled/' /etc/selinux/config

# Flush iptables
]# iptables -F 
]# iptables-save

# Set the hostname
]# hostnamectl set-hostname <name>

# Install helper tools
]# yum  -y install wget net-tools telnet tree nmap sysstat lrzsz dos2unix bind-utils

# Configure the yum repo
]# wget -O /etc/yum.repos.d/CentOS-Base.repo http://mirrors.aliyun.com/repo/Centos-7.repo
]# yum makecache fast

# Install epel-release
]# yum -y install epel-release

# Synchronize the clock
]# yum -y install ntpdate
]# ntpdate time.windows.com
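Since the steps above must be repeated on all five machines, a hedged convenience loop can push them out in one pass. A sketch only, assuming passwordless root SSH to the IPs from the plan in 1.3:

#!/bin/bash
# Run the common OS preparation on every host in the plan (sketch)
HOSTS="192.62.10.11 192.62.10.12 192.62.10.21 192.62.10.22 192.62.10.100"
for h in $HOSTS; do
    ssh root@$h 'systemctl stop firewalld; systemctl disable firewalld;
        sed -i "s/SELINUX=enforcing/SELINUX=disabled/" /etc/selinux/config;
        yum -y install ntpdate >/dev/null && ntpdate time.windows.com'
done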

2. Deploying the bind Service

Set up a DNS server so that harbor can be reached by domain name instead of by IP address.

2.1 Install and Configure bind
# Install bind9
]# yum -y install bind

# Edit the configuration file
]# vim /etc/named.conf
options {
        listen-on port 53 { 192.62.10.100; };   # change the IP to this host's address
        directory       "/var/named";
        dump-file       "/var/named/data/cache_dump.db";
        statistics-file "/var/named/data/named_stats.txt";
        memstatistics-file "/var/named/data/named_mem_stats.txt";
        recursing-file  "/var/named/data/named.recursing";
        secroots-file   "/var/named/data/named.secroots";
        allow-query     { any; };     # set to any: anyone may use this server for DNS resolution
        forwarders      { 192.62.10.2; }; # upstream resolver
        recursion yes;                 # yes: resolve queries recursively
        dnssec-enable no;
        dnssec-validation no;
        ...
};

# Start the service
]# systemctl start named

# Verify
]# netstat -anpt |grep 53
tcp        0      0 192.62.10.100:53        0.0.0.0:*               LISTEN      27683/named
tcp        0      0 127.0.0.1:953           0.0.0.0:*               LISTEN      27683/named
2.2 Edit the Zone Configuration File
]# vim /etc/named.rfc1912.zones
# Append the following at the end:
zone "lxq.com" IN {
		type  master;
		file  "lxq.com.zone";
		allow-update { 192.62.10.100; };
};
zone "zjh.com" IN {
		type  master;
		file  "zjh.com.zone";
		allow-update { 192.62.10.100; };
};
2.3 Edit the Zone Data Files
]# vim /var/named/lxq.com.zone
$ORIGIN lxq.com.
$TTL 600	; 10 minutes
@       IN SOA  dns.lxq.com. dnsadmin.lxq.com. (
                                        2021091801  ; serial
                                        10800       ; refresh
                                        900		    ; retry
                                        604800      ; expire
                                        86400 )     ; minimum
                        NS      dns.lxq.com.
dns                     A       192.62.10.100
k8s-harbor              A       192.62.10.100
k8s-master-01           A       192.62.10.11
k8s-master-02      		A       192.62.10.12
k8s-node-01             A       192.62.10.21
k8s-node-02             A       192.62.10.22


]# vim /var/named/zjh.com.zone
$ORIGIN zjh.com.
$TTL 1D
@       IN SOA  dns.zjh.com. dnsadmin.zjh.com. (
                                        2021091801  ; serial
                                        10800       ; refresh
                                        900		    ; retry
                                        604800      ; expire
                                        86400 )     ; minimum
                        NS      dns.zjh.com.
dns                     A       192.62.10.100
harbor					A		192.62.10.100

# Check the configuration and restart
]# named-checkconf
]# systemctl restart named
2.4 Verify That DNS Resolution Works
]# dig -t A k8s-master-01.lxq.com @192.62.10.100 +short
192.62.10.11

# Resolution succeeded
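A quick hedged loop to confirm every A record in the lxq.com zone resolves (names taken from the zone file in 2.3):

]# for h in dns k8s-harbor k8s-master-01 k8s-master-02 k8s-node-01 k8s-node-02; do
     echo -n "$h.lxq.com -> "; dig -t A $h.lxq.com @192.62.10.100 +short
   done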
2.5 Update the NIC Configuration
# Edit the NIC configuration on every host so that DNS resolution goes through our own DNS server
]# vim /etc/sysconfig/network-scripts/ifcfg-ens33
TYPE=Ethernet
BOOTPROTO=static
DEFROUTE=yes
NAME=ens33
UUID=6acdc07e-a8b4-4b68-b295-2754fa46265a
DEVICE=ens33
ONBOOT=yes
IPADDR=192.62.10.100
NETMASK=255.255.255.0
GATEWAY=192.62.10.2
DNS1=192.62.10.100    # point this at the host running the DNS service
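To apply and confirm the change (hedged; on CentOS 7, restarting the network service regenerates /etc/resolv.conf from the ifcfg file):

]# systemctl restart network      # apply the new DNS setting
]# cat /etc/resolv.conf           # should now show: nameserver 192.62.10.100
]# ping -c 1 harbor.zjh.com       # sanity check once the zjh.com zone is live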

3. Deploying the Harbor Private Registry

3.1 Install Docker

# Both the masters and the workers need Docker, so install it on all of them now
# 1. Remove old versions
]# yum remove docker \
                  docker-client \
                  docker-client-latest \
                  docker-common \
                  docker-latest \
                  docker-latest-logrotate \
                  docker-logrotate \
                  docker-engine

# 2. Install prerequisite packages
]# yum install -y yum-utils

# 3. Configure the package repository
]# yum-config-manager \
    --add-repo \
    https://download.docker.com/linux/centos/docker-ce.repo     # the default repo, hosted abroad; not recommended
]# yum-config-manager \
    --add-repo \
http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo  # the Aliyun mirror, recommended inside China

# 4. Refresh the yum package index
]# yum makecache fast

# 5. Install Docker
]# yum -y install docker-ce docker-ce-cli containerd.io

# 6. Start Docker
]# systemctl start docker
]# systemctl enable docker

# 7. Check the installation with docker version
]# docker version


# Configure /etc/docker/daemon.json
]# vim /etc/docker/daemon.json
{
  "graph": "/var/lib/docker",
  "storage-driver": "overlay2",
  "insecure-registries": ["registry.access.redhat.com","quay.io","harbor.zjh.com"],
  "registry-mirrors": ["https://erbrwkgk.mirror.aliyuncs.com"],
  "bip": "172.7.100.1/24",
  "exec-opts": ["native.cgroupdriver=cgroupfs"],
  "live-restore": true
}

# Reload systemd units
]# systemctl daemon-reload

# Restart Docker
]# systemctl restart docker
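A hedged sanity check that the daemon actually picked up daemon.json:

]# docker info --format '{{.CgroupDriver}}'      # expect: cgroupfs
]# docker info | grep -A3 'Insecure Registries'  # expect harbor.zjh.com in the list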

3.2 Install Docker-Compose

# harbor needs docker-compose to run
]# yum -y install docker-compose

3.3 Download and Deploy Harbor

# Download harbor 
]# wget https://github.com/goharbor/harbor/releases/download/v2.0.6/harbor-offline-installer-v2.0.6.tgz -P /opt/
]# ll
-rw-r--r--  1 root root 558003770 Sep 18 13:48 harbor-offline-installer-v2.0.6.tgz

# Unpack the harbor tarball into /opt
]# tar xf harbor-offline-installer-v2.0.6.tgz -C /opt

# Rename it and create a symlink
]# mv /opt/harbor  /opt/harbor-v2.0.6
]# ln -s /opt/harbor-v2.0.6 /opt/harbor

# Create the harbor data directory
]# mkdir  -p /data/harbor

# Edit the harbor configuration file harbor.yml; the shipped file is harbor.yml.tmpl, so copy it and then edit the copy
]# cp /opt/harbor/harbor.yml.tmpl /opt/harbor/harbor.yml
]# vim harbor.yml
hostname: harbor.zjh.com   # the registry hostname (must match the DNS record from 2.3)
http:
  port: 800                # change the port
#https:                    # comment out https; with no certificate the install fails otherwise
 # port: 443
  #certificate: /your/certificate/path
  #private_key: /your/private/key/path
# Remember Change the admin password from UI after launching Harbor.
harbor_admin_password: Harbor12345      # admin password
# The default data volume
data_volume: /data/harbor               # data directory
location: /data/harbor/logs             # log directory

# Install harbor
]# ./install.sh

# Error 1
ERROR:root:Error: The protocol is https but attribute ssl_cert is not set
## This happens when the https block in the config file was not commented out (easy to forget...)
https:
  port: 443
  certificate: /your/certificate/path
  private_key: /your/private/key/path



Generated configuration file: /compose_location/docker-compose.yml
Clean up the input dir


Creating network "harborv206_harbor" with the default driver
Creating nginx ...
Creating registry ...
Creating redis ...
Creating registryctl ...
Creating harbor-portal ...
Creating harbor-db ... done
Creating harbor-core ... done
Creating harbor-jobservice ...
Creating nginx ... done
✔ ----Harbor has been installed and started successfully.----
# Installation complete

# Verify that all the services are up
]# docker-compose ps
      Name                     Command               State                   Ports
---------------------------------------------------------------------------------------------------
harbor-core         /harbor/entrypoint.sh            Up
harbor-db           /docker-entrypoint.sh            Up      5432/tcp
harbor-jobservice   /harbor/entrypoint.sh            Up
harbor-log          /bin/sh -c /usr/local/bin/ ...   Up      127.0.0.1:1514->10514/tcp
harbor-portal       nginx -g daemon off;             Up      8080/tcp
nginx               nginx -g daemon off;             Up      0.0.0.0:800->8080/tcp,:::800->8080/tcp
redis               redis-server /etc/redis.conf     Up      6379/tcp
registry            /home/harbor/entrypoint.sh       Up      5000/tcp
registryctl         /home/harbor/start.sh            Up

# Stop / start commands
]# docker-compose stop
]# docker-compose up -d

3.4 Install nginx

# Configure nginx to proxy harbor's port 800 behind port 80.

# Install nginx
]# yum -y install nginx

# Write the nginx configuration
]# cat > /etc/nginx/conf.d/harbor.zjh.com.conf  <<EOF
server {
	listen 80;
	server_name harbor.zjh.com;
	client_max_body_size 1000m;
	location / {
		proxy_pass http://127.0.0.1:800;
	}

}
EOF

# Start and enable nginx
]# systemctl start nginx
]# systemctl enable nginx

3.5 Access Harbor from a Browser


Access succeeds; user admin, password Harbor12345.

4. Deploying the Etcd Service

Etcd is a distributed key-value store; K8S uses it to persist all cluster data. A single etcd instance has no redundancy, so we deploy a cluster. An etcd cluster needs at least 3 nodes, which tolerates 1 node failure (a cluster of n members tolerates (n-1)/2 failures).

4.1 Install the cfssl Certificate Tooling

# Download the tools
]# wget https://pkg.cfssl.org/R1.2/cfssl_linux-amd64
]# wget https://pkg.cfssl.org/R1.2/cfssljson_linux-amd64
]# wget https://pkg.cfssl.org/R1.2/cfssl-certinfo_linux-amd64
]# cp cfssl_linux-amd64 /usr/bin/cfssl
]# cp cfssljson_linux-amd64 /usr/bin/cfssl-json
]# cp cfssl-certinfo_linux-amd64 /usr/bin/cfssl-certinfo
]# chmod +x /usr/bin/cfssl*

4.2 Generate the Etcd Certificates

4.2.1 Build a Self-Signed CA

To keep things tidy, all certificates are signed on the harbor machine and then copied out to the nodes.

# Create the working directories
]# mkdir -p /opt/TLS/{etcd,k8s}

# The configuration the self-signed CA needs
]# cat > /opt/TLS/etcd/ca-csr.json << EOF
{
    "CN": "etcd CA",
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "L": "Beijing",
            "ST": "Beijing"
        }
    ]
}
EOF

Generate the etcd root certificate:

]# cfssl gencert -initca ca-csr.json | cfssl-json -bare ca

# This produces two files: ca.pem and ca-key.pem
4.2.2 Sign the Etcd Certificate with the Self-Signed CA
# Write the signing profile
]# cat > /opt/TLS/etcd/etcd-config.json << EOF
{
  "signing": {
    "default": {
      "expiry": "87600h"
    },
    "profiles": {
      "etcd": {
         "expiry": "87600h",
         "usages": [
            "signing",
            "key encipherment",
            "server auth",
            "client auth"
        ]
      }
    }
  }
}
EOF

# Write the certificate signing request
]# cat > /opt/TLS/etcd/etcd-csr.json << EOF
{
    "CN": "etcd",
    "hosts": [
    "192.62.10.11",
    "192.62.10.12",
    "192.62.10.21",
    "192.62.10.22"
    ],
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "L": "BeiJing",
            "ST": "BeiJing"
        }
    ]
}
EOF

Generate the certificate:

# Note: these are relative paths, run from /opt/TLS/etcd
etcd]# cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=etcd-config.json -profile=etcd etcd-csr.json |cfssl-json -bare etcd

# This produces two files: etcd.pem and etcd-key.pem
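Before distributing the certificate, it is worth confirming that all four node IPs made it into the SAN list; a hedged check with the cfssl-certinfo tool installed in 4.1:

etcd]# cfssl-certinfo -cert etcd.pem | grep -A6 '"sans"'   # should list 192.62.10.11/.12/.21/.22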

4.3 Download the Etcd Binary Release

]# wget https://github.com/etcd-io/etcd/releases/download/v3.4.9/etcd-v3.4.9-linux-amd64.tar.gz -P /opt

4.4 Deploy the Etcd Cluster

The cluster runs on master-01, node-01, and node-02. Do the work on one machine, then copy the finished configuration files to the other two.

4.4.1 Preparation
# Create etcd's working directory
]# mkdir /opt/etcd

# Unpack the binary release
]# tar xf etcd-v3.4.9-linux-amd64.tar.gz 

# Move the contents of etcd-v3.4.9-linux-amd64 into /opt/etcd
]# mv etcd-v3.4.9-linux-amd64/* /opt/etcd/

# Fix ownership
]# chown -R root.root /opt/etcd/

# Create the certificate directory
]# mkdir /opt/etcd/ssl

# Pull the certificates generated on harbor into this directory over scp
ssl]# scp k8s-harbor:/opt/TLS/etcd/*.pem .
4.4.2 Create the Etcd Configuration File
# Create the configuration directory
]# mkdir /opt/etcd/conf && cd /opt/etcd/conf

# Create the data directory (the default /var/lib/etcd/... also works)
]# mkdir -p /data/etcd/


# Write the configuration file
]# vim /opt/etcd/conf/etcd.conf
#[Member]
ETCD_NAME="etcd-1"
ETCD_DATA_DIR="/data/etcd/"    
ETCD_LISTEN_PEER_URLS="https://192.62.10.11:2380"
ETCD_LISTEN_CLIENT_URLS="https://192.62.10.11:2379"

#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://192.62.10.11:2380"
ETCD_ADVERTISE_CLIENT_URLS="https://192.62.10.11:2379"
ETCD_INITIAL_CLUSTER="etcd-1=https://192.62.10.11:2380,etcd-2=https://192.62.10.21:2380,etcd-3=https://192.62.10.22:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"

Each parameter in this file is documented in etcd --help.

/**
ETCD_NAME: node name, unique within the cluster
ETCD_DATA_DIR: data directory
ETCD_LISTEN_PEER_URLS: peer (cluster) listen address
ETCD_LISTEN_CLIENT_URLS: client listen address
ETCD_INITIAL_ADVERTISE_PEER_URLS: peer address advertised to the cluster
ETCD_ADVERTISE_CLIENT_URLS: client address advertised to clients
ETCD_INITIAL_CLUSTER: the cluster's member addresses
ETCD_INITIAL_CLUSTER_TOKEN: cluster token
ETCD_INITIAL_CLUSTER_STATE: state when joining; new for a new cluster, existing to join an existing one
**/
4.4.3 Write the systemd Service File
]# vim /usr/lib/systemd/system/etcd.service
[Unit]
Description=Etcd Server
After=network.target
After=network-online.target
Wants=network-online.target

[Service]
Type=notify
EnvironmentFile=/opt/etcd/conf/etcd.conf
ExecStart=/opt/etcd/etcd \
--cert-file=/opt/etcd/ssl/etcd.pem \
--key-file=/opt/etcd/ssl/etcd-key.pem \
--peer-cert-file=/opt/etcd/ssl/etcd.pem \
--peer-key-file=/opt/etcd/ssl/etcd-key.pem \
--trusted-ca-file=/opt/etcd/ssl/ca.pem \
--peer-trusted-ca-file=/opt/etcd/ssl/ca.pem \
--logger=zap
Restart=on-failure
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
4.4.4 Copy the Configuration to the Other Two Nodes
# Configuration files
/opt/etcd/conf/etcd.conf
/usr/lib/systemd/system/etcd.service
# Certificate files
ca.pem
ca-key.pem
etcd.pem
etcd-key.pem

# On node-01 and node-02:
]# scp -r k8s-master-01:/opt/etcd/* /opt/etcd/
]# scp k8s-master-01:/usr/lib/systemd/system/etcd.service /usr/lib/systemd/system/


# What needs changing in etcd.conf on each node:

#[Member]
ETCD_NAME="etcd-1"     # change to etcd-2 / etcd-3
ETCD_DATA_DIR="/data/etcd/"    
ETCD_LISTEN_PEER_URLS="https://192.62.10.11:2380"      # change to the node's own IP
ETCD_LISTEN_CLIENT_URLS="https://192.62.10.11:2379"    # change to the node's own IP

#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://192.62.10.11:2380"   # change to the node's own IP
ETCD_ADVERTISE_CLIENT_URLS="https://192.62.10.11:2379"         # change to the node's own IP
ETCD_INITIAL_CLUSTER="etcd-1=https://192.62.10.11:2380,etcd-2=https://192.62.10.21:2380,etcd-3=https://192.62.10.22:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"   
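Instead of editing by hand, a hedged sed sketch can patch the copied file on each node (assumes GNU sed; NODE_NAME/NODE_IP are that node's values; double-check the result before starting etcd):

]# NODE_NAME=etcd-2; NODE_IP=192.62.10.21      # etcd-3 / 192.62.10.22 on the other node
]# sed -i "s/^ETCD_NAME=.*/ETCD_NAME=\"$NODE_NAME\"/" /opt/etcd/conf/etcd.conf
]# sed -i "/^ETCD_LISTEN\|^ETCD_ADVERTISE\|^ETCD_INITIAL_ADVERTISE/s/192.62.10.11/$NODE_IP/" /opt/etcd/conf/etcd.conf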

4.5 Start the Etcd Service

]# systemctl daemon-reload
]# systemctl start etcd.service

# Note: the first node will appear to hang until a second member comes up; start etcd on all three nodes.

# Check the listening ports
]# netstat -anpt|grep etcd
tcp        0      0 192.62.10.11:2379       0.0.0.0:*               LISTEN      9505/etcd
tcp        0      0 127.0.0.1:2379          0.0.0.0:*               LISTEN      9505/etcd
tcp        0      0 192.62.10.11:2380       0.0.0.0:*               LISTEN      9505/etcd

4.6 Inspecting the Etcd Cluster

# Detailed cluster status
]# /opt/etcd/etcdctl endpoint status --endpoints https://192.62.10.11:2379,https://192.62.10.21:2379,https://192.62.10.22:2379 --cert=/opt/etcd/ssl/etcd.pem --key=/opt/etcd/ssl/etcd-key.pem --cacert=/opt/etcd/ssl/ca.pem  --write-out=table
+---------------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+
|         ENDPOINT          |        ID        | VERSION | DB SIZE | IS LEADER | IS LEARNER | RAFT TERM | RAFT INDEX | RAFT APPLIED INDEX | ERRORS |
+---------------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+
| https://192.62.10.11:2379 | c8f284a60f99947a |   3.4.9 |   20 kB |      true |      false |        11 |         12 |                 12 |        |
| https://192.62.10.21:2379 | f41b55a02dfdc8b1 |   3.4.9 |   20 kB |     false |      false |        11 |         12 |                 12 |        |
| https://192.62.10.22:2379 | 342b8970d8b8221f |   3.4.9 |   20 kB |     false |      false |        11 |         12 |                 12 |        |
+---------------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+

# Note: without --endpoints, etcdctl defaults to 127.0.0.1:2379
# When --endpoints is given, the certificates must be supplied as well!
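To avoid repeating the endpoint and certificate flags on every call, etcdctl v3.4 also reads them from environment variables; a hedged convenience snippet:

]# export ETCDCTL_ENDPOINTS=https://192.62.10.11:2379,https://192.62.10.21:2379,https://192.62.10.22:2379
]# export ETCDCTL_CACERT=/opt/etcd/ssl/ca.pem
]# export ETCDCTL_CERT=/opt/etcd/ssl/etcd.pem
]# export ETCDCTL_KEY=/opt/etcd/ssl/etcd-key.pem
]# /opt/etcd/etcdctl endpoint health --write-out=table   # flags no longer needed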

# Check the cluster health
 ]# /opt/etcd/etcdctl endpoint health --endpoints https://192.62.10.11:2379,https://192.62.10.21:2379,https://192.62.10.22:2379 --cert=/opt/etcd/ssl/etcd.pem --key=/opt/etcd/ssl/etcd-key.pem --cacert=/opt/etcd/ssl/ca.pem  --write-out=table
+---------------------------+--------+-------------+-------+
|         ENDPOINT          | HEALTH |    TOOK     | ERROR |
+---------------------------+--------+-------------+-------+
| https://192.62.10.11:2379 |   true |  8.231746ms |       |
| https://192.62.10.22:2379 |   true | 10.495846ms |       |
| https://192.62.10.21:2379 |   true | 10.490035ms |       |
+---------------------------+--------+-------------+-------+


# List the members
]# /opt/etcd/etcdctl member list --endpoints https://192.62.10.11:2379,https://192.62.10.21:2379,https://192.62.10.22:2379 --cert=/opt/etcd/ssl/etcd.pem --key=/opt/etcd/ssl/etcd-key.pem --cacert=/opt/etcd/ssl/ca.pem  --write-out=table
+------------------+---------+--------+---------------------------+-------------------------------------------------+------------+
|        ID        | STATUS  |  NAME  |        PEER ADDRS         |                  CLIENT ADDRS                   | IS LEARNER |
+------------------+---------+--------+---------------------------+-------------------------------------------------+------------+
| 342b8970d8b8221f | started | etcd-3 | https://192.62.10.22:2380 | http://127.0.0.1:2379,https://192.62.10.22:2379 |      false |
| c8f284a60f99947a | started | etcd-1 | https://192.62.10.11:2380 | http://127.0.0.1:2379,https://192.62.10.11:2379 |      false |
| f41b55a02dfdc8b1 | started | etcd-2 | https://192.62.10.21:2380 | http://127.0.0.1:2379,https://192.62.10.21:2379 |      false |
+------------------+---------+--------+---------------------------+-------------------------------------------------+------------+

Etcd deployment complete!


5. Deploying the Master Nodes

5.1 Download the Binary Release

# Download address:
https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.20.md
# Unpack the tarball (it unpacks to /opt/kubernetes, with the binaries under server/bin)
]# tar zxvf kubernetes-server-linux-amd64.tar.gz -C /opt/

# Create three directories: ssl, conf, bin
]# mkdir -p /opt/kubernetes/{ssl,conf,bin}

# Copy the binaries the control plane needs into bin
]# cd /opt/kubernetes/server/bin
bin]# cp kube-apiserver kube-scheduler kube-controller-manager /opt/kubernetes/bin/

# Copy the certificates into the ssl directory (run on harbor after signing them below)
]# scp *.pem k8s-master-01:/opt/kubernetes/ssl
]# scp *.pem k8s-master-02:/opt/kubernetes/ssl

5.2 The kube-apiserver Component

5.2.1 Sign the apiserver Certificate
# Sign on the 10.100 (harbor) host; first build the self-signed CA for K8S
]# vim /opt/TLS/k8s/ca-csr.json
{
    "CN": "kubernetes",
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "L": "Beijing",
            "ST": "Beijing",
            "O": "k8s",
            "OU": "System"
        }
    ]
}
# Generate the CA certificate
]# cfssl gencert -initca ca-csr.json | cfssl-json -bare ca -

# Inspect the certificates
]# ll
-rw-r--r-- 1 root root 1001 Sep 22 11:07 ca.csr
-rw-r--r-- 1 root root  264 Sep 22 11:07 ca-csr.json
-rw------- 1 root root 1679 Sep 22 11:07 ca-key.pem
-rw-r--r-- 1 root root 1359 Sep 22 11:07 ca.pem
# Create the apiserver signing profile and certificate request files
]# vim /opt/TLS/k8s/apiserver-config.json
{
  "signing": {
    "default": {
      "expiry": "87600h"
    },
    "profiles": {
      "apiserver": {
         "expiry": "87600h",
         "usages": [
            "signing",
            "key encipherment",
            "server auth",
            "client auth"
        ]
      }
    }
  }
}

]# vim /opt/TLS/k8s/apiserver-csr.json
{
    "CN": "k8s",
    "hosts": [
      "10.0.0.1",
      "127.0.0.1",
      "192.62.10.11",
      "192.62.10.21",
      "192.62.10.22",
	  "192.62.10.12",
      "192.62.10.100",
      "192.62.10.200",
      "kubernetes",
      "kubernetes.default",
      "kubernetes.default.svc",
      "kubernetes.default.svc.cluster",
      "kubernetes.default.svc.cluster.local"
    ],
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "L": "BeiJing",
            "ST": "BeiJing",
            "O": "k8s",
            "OU": "System"
        }
    ]
}
# Generate the certificate
]# cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=apiserver-config.json -profile=apiserver apiserver-csr.json |cfssl-json -bare apiserver

# Inspect the certificates
]# ll
-rw-r--r-- 1 root root  293 Sep 22 11:10 apiserver-config.json
-rw-r--r-- 1 root root 1277 Sep 22 11:14 apiserver.csr
-rw-r--r-- 1 root root  596 Sep 22 11:12 apiserver-csr.json
-rw------- 1 root root 1679 Sep 22 11:14 apiserver-key.pem
-rw-r--r-- 1 root root 1643 Sep 22 11:14 apiserver.pem
5.2.2 Deploy the apiserver
5.2.2.1 Create the Configuration File
]# vim /opt/kubernetes/conf/kube-apiserver.conf
KUBE_APISERVER_OPTS="--logtostderr=false \
--v=2 \
--log-dir=/data/logs/kubernetes/kube-apiserver \
--etcd-servers=https://192.62.10.11:2379,https://192.62.10.21:2379,https://192.62.10.22:2379 \
--bind-address=192.62.10.11 \
--secure-port=6443 \
--advertise-address=192.62.10.11 \
--allow-privileged=true \
--service-cluster-ip-range=10.0.0.0/24 \
--enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,ResourceQuota,NodeRestriction \
--authorization-mode=RBAC,Node \
--enable-bootstrap-token-auth=true \
--token-auth-file=/opt/kubernetes/conf/token.csv \
--service-node-port-range=30000-32767 \
--kubelet-client-certificate=/opt/kubernetes/ssl/apiserver.pem \
--kubelet-client-key=/opt/kubernetes/ssl/apiserver-key.pem \
--tls-cert-file=/opt/kubernetes/ssl/apiserver.pem  \
--tls-private-key-file=/opt/kubernetes/ssl/apiserver-key.pem \
--client-ca-file=/opt/kubernetes/ssl/ca.pem \
--service-account-key-file=/opt/kubernetes/ssl/ca-key.pem \
--service-account-issuer=api \
--service-account-signing-key-file=/opt/kubernetes/ssl/apiserver-key.pem \
--etcd-cafile=/opt/etcd/ssl/ca.pem \
--etcd-certfile=/opt/etcd/ssl/etcd.pem \
--etcd-keyfile=/opt/etcd/ssl/etcd-key.pem \
--requestheader-client-ca-file=/opt/kubernetes/ssl/ca.pem \
--proxy-client-cert-file=/opt/kubernetes/ssl/apiserver.pem \
--proxy-client-key-file=/opt/kubernetes/ssl/apiserver-key.pem \
--requestheader-allowed-names=kubernetes \
--requestheader-extra-headers-prefix=X-Remote-Extra- \
--requestheader-group-headers=X-Remote-Group \
--requestheader-username-headers=X-Remote-User \
--enable-aggregator-routing=true \
--audit-log-maxage=30 \
--audit-log-maxbackup=3 \
--audit-log-maxsize=100 \
--audit-log-path=/data/logs/kubernetes/kube-apiserver/apiserver-audit.log"


# Enable TLS Bootstrapping: create the token CSV file
]# vim /opt/kubernetes/conf/token.csv
4d9afa191443b7239ea70f1752a9c5b8,kubelet-bootstrap,10001,"system:node-bootstrapper"

# Command used to generate such a token
]# head -c 16 /dev/urandom | od -An -t x | tr -d ' '

# Create the log directory
]# mkdir -p /data/logs/kubernetes/kube-apiserver 
5.2.2.2 Create the systemd Service File
# Write the service file
]# vim /usr/lib/systemd/system/apiserver.service
[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/kubernetes/kubernetes

[Service]
EnvironmentFile=/opt/kubernetes/conf/kube-apiserver.conf
ExecStart=/opt/kubernetes/bin/kube-apiserver $KUBE_APISERVER_OPTS
Restart=on-failure

[Install]
WantedBy=multi-user.target
5.2.2.3 Start the apiserver
]# systemctl daemon-reload
]# systemctl start apiserver

# Check the log for errors
]# cat /var/log/messages|grep kube-apiserver|grep -i error


5.3 Deploying nginx + keepalived

kube-apiserver HA architecture (diagram omitted):

Nginx is a web server and reverse proxy; here its layer-4 (stream) proxying load-balances the two apiservers.

Keepalived is a high-availability tool that binds a VIP to an active/standby server pair. It decides whether to fail over based on nginx's state: when nginx on the master dies, the VIP automatically moves to the nginx backup, so the VIP stays reachable and nginx stays highly available.

5.3.1 nginx Configuration
5.3.1.1 Build nginx from Source
# Build steps (run from the unpacked nginx source directory)
]# groupadd -r nginx
]# useradd -r -g nginx -s /bin/false -M nginx
]# ./configure --prefix=/usr/local/nginx --sbin-path=/usr/sbin/nginx --conf-path=/etc/nginx/nginx.conf --user=nginx --group=nginx --with-http_ssl_module --with-pcre --with-http_stub_status_module --with-stream
]# make && make install 
5.3.1.2 Edit the Configuration File
]# vim /etc/nginx/nginx.conf
# Append the following at the end of the file; it must NOT go inside the http block!
 stream {
        upstream kube-apiserver {
                server 192.62.10.11:6443   max_fails=3     fail_timeout=30s;
                server 192.62.10.12:6443   max_fails=3     fail_timeout=30s;
        }
        server {
                listen 7443;
                proxy_connect_timeout 2s;
                proxy_timeout 900s;
                proxy_pass kube-apiserver;
                }

}
5.3.1.3 Start nginx
# Check the configuration
]# nginx -t

# Start the service
]# nginx
5.3.1.4 Create the systemd Service File
# Write the service file
]# vim /usr/lib/systemd/system/nginx.service
[Unit]
Description=nginx server daemon
Documentation=man:nginx(8)
After=network.target
[Service]
Type=forking
ExecStart=/usr/sbin/nginx
ExecReload=/usr/sbin/nginx -s reload
ExecStop=/usr/sbin/nginx -s quit
PrivateTmp=true
[Install]
WantedBy=multi-user.target


# Reload unit files
]# systemctl daemon-reload
# Restart the service
]# systemctl restart nginx
5.3.2 keepalived Configuration
5.3.2.1 Install keepalived
]# yum -y install keepalived 
5.3.2.2 Write the Port Health-Check Script
]# vim /etc/keepalived/check_port.sh 
#!/bin/bash
# Exit non-zero when nothing listens on the given port, so that keepalived
# applies the weight -20 penalty configured below and the VIP can move.
port=$1
if [ -n "$port" ];then
	port_process=`ss -lnt|grep $port|wc -l`
	if [ $port_process -eq 0 ];then
		echo "port is not used!"
		exit 1
	fi
else
	echo "usage: check_port.sh <port>"
	exit 1
fi

]# chmod +x /etc/keepalived/check_port.sh
5.3.2.3 Edit the Configuration Files
# Configuration on the master:
]# vim /etc/keepalived/keepalived.conf
! Configuration File for keepalived

global_defs {
		router_id 192.62.10.11
}
}

vrrp_script chk_nginx {
        script "/etc/keepalived/check_port.sh 7443"
        interval 2
        weight -20
}

vrrp_instance VI_1 {
    state MASTER
    interface ens33
    virtual_router_id  251
    priority 100
    advert_int 1
    mcast_src_ip 192.62.10.11
    nopreempt

    authentication {
        auth_type PASS
        auth_pass 11111111
    }
    track_script {
        chk_nginx
    }
    virtual_ipaddress {
        192.62.10.200
    }

}
# Configuration on the backup:
! Configuration File for keepalived

global_defs {
		router_id 192.62.10.12   # change the router_id
}

vrrp_script chk_nginx {
        script "/etc/keepalived/check_port.sh 7443"
        interval 2
        weight -20
}

vrrp_instance VI_1 {
    state BACKUP       # the backup node should be BACKUP when nopreempt is used
    interface ens33
    virtual_router_id  251
    priority 90        # lower priority than the master
    advert_int 1
    mcast_src_ip 192.62.10.12   # this node's IP
    nopreempt

    authentication {
        auth_type PASS
        auth_pass 11111111
    }
    track_script {
        chk_nginx
    }
    virtual_ipaddress {
        192.62.10.200
    }

}
5.3.2.4 Start and Verify
]# systemctl start keepalived
]# systemctl enable keepalived

# Verify: the VIP should sit on the master
]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 00:0c:29:6d:46:d9 brd ff:ff:ff:ff:ff:ff
    inet 192.62.10.11/24 brd 192.62.10.255 scope global noprefixroute ens33
       valid_lft forever preferred_lft forever
    inet 192.62.10.200/32 scope global ens33   # the VIP is active
       valid_lft forever preferred_lft forever

5.3.3 Test
]# curl -k https://192.62.10.200:7443/version
{
  "major": "1",
  "minor": "20",
  "gitVersion": "v1.20.10",
  "gitCommit": "8152330a2b6ca3621196e62966ef761b8f5a61bb",
  "gitTreeState": "clean",
  "buildDate": "2021-08-11T18:00:37Z",
  "goVersion": "go1.15.15",
  "compiler": "gc",
  "platform": "linux/amd64"
}

5.4 The kube-controller-manager Component

5.4.1 Sign the controller-manager Certificate
# Create the certificate request file
]# vim /opt/TLS/k8s/kube-controller-manager-csr.json
{
  "CN": "system:kube-controller-manager",
  "hosts": [],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "L": "BeiJing", 
      "ST": "BeiJing",
      "O": "system:masters",
      "OU": "System"
    }
  ]
}

# Create the signing profile
]# vim /opt/TLS/k8s/controller-config.json
{
  "signing": {
    "default": {
      "expiry": "87600h"
    },
    "profiles": {
      "controller": {
         "expiry": "87600h",
         "usages": [
            "signing",
            "key encipherment",
            "server auth",
            "client auth"
        ]
      }
    }
  }
}

# Generate the certificate
]# cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=controller-config.json -profile=controller kube-controller-manager-csr.json |cfssl-json -bare controller

# Inspect the certificates
]# ll con*
-rw-r--r-- 1 root root  295 Sep 22 15:58 controller-config.json
-rw-r--r-- 1 root root 1045 Sep 22 16:00 controller.csr
-rw------- 1 root root 1675 Sep 22 16:00 controller-key.pem
-rw-r--r-- 1 root root 1436 Sep 22 16:00 controller.pem

# Copy the certificates to the master nodes!
5.4.2 Deploy the controller-manager
5.4.2.1 Create the Configuration File
]# vim /opt/kubernetes/conf/kube-controller-manager.conf
KUBE_CONTROLLER_MANAGER_OPTS="--logtostderr=false \
--v=2 \
--log-dir=/data/logs/kubernetes/kube-controller-manager \
--leader-elect=true \
--kubeconfig=/opt/kubernetes/conf/kube-controller-manager.kubeconfig \
--bind-address=192.62.10.11 \
--allocate-node-cidrs=true \
--cluster-cidr=10.244.0.0/16 \
--service-cluster-ip-range=10.0.0.0/24 \
--cluster-signing-cert-file=/opt/kubernetes/ssl/ca.pem \
--cluster-signing-key-file=/opt/kubernetes/ssl/ca-key.pem  \
--root-ca-file=/opt/kubernetes/ssl/ca.pem \
--service-account-private-key-file=/opt/kubernetes/ssl/ca-key.pem \
--cluster-signing-duration=87600h0m0s"

# Create the log directory
]# mkdir -p /data/logs/kubernetes/kube-controller-manager
5.4.2.2 Generate the kubeconfig File
# Four steps! (a reusable helper sketch follows this block)
]# kubectl config set-cluster kubernetes \
  --certificate-authority=/opt/kubernetes/ssl/ca.pem \
  --embed-certs=true \
  --server="https://192.62.10.200:7443" \
  --kubeconfig="/opt/kubernetes/conf/kube-controller-manager.kubeconfig"
  
]# kubectl config set-credentials kube-controller-manager \
  --client-certificate=/opt/kubernetes/ssl/controller.pem \
  --client-key=/opt/kubernetes/ssl/controller-key.pem \
  --embed-certs=true \
  --kubeconfig="/opt/kubernetes/conf/kube-controller-manager.kubeconfig"
  
]# kubectl config set-context default \
  --cluster=kubernetes \
  --user=kube-controller-manager \
  --kubeconfig="/opt/kubernetes/conf/kube-controller-manager.kubeconfig"
  
]# kubectl config use-context default --kubeconfig="/opt/kubernetes/conf/kube-controller-manager.kubeconfig"
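The same four kubectl config steps recur for the scheduler (5.5.2.2), kubectl itself (5.6.2), and kube-proxy (6.3.4), so a small hedged helper can cut the repetition (a sketch; paths follow this document's layout):

# gen_kubeconfig <user> <client-cert-prefix> <output-file>  (sketch)
gen_kubeconfig() {
    local user=$1 cert=$2 out=$3
    kubectl config set-cluster kubernetes \
      --certificate-authority=/opt/kubernetes/ssl/ca.pem \
      --embed-certs=true \
      --server="https://192.62.10.200:7443" \
      --kubeconfig="$out"
    kubectl config set-credentials "$user" \
      --client-certificate=/opt/kubernetes/ssl/${cert}.pem \
      --client-key=/opt/kubernetes/ssl/${cert}-key.pem \
      --embed-certs=true \
      --kubeconfig="$out"
    kubectl config set-context default --cluster=kubernetes --user="$user" --kubeconfig="$out"
    kubectl config use-context default --kubeconfig="$out"
}
# Usage, e.g.: gen_kubeconfig kube-scheduler scheduler /opt/kubernetes/conf/kube-scheduler.kubeconfig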
5.4.2.3 Create the systemd Service File
# Write the service file
]# vim /usr/lib/systemd/system/kube-controller-manager.service
[Unit]
Description=Kubernetes Controller Manager
Documentation=https://github.com/kubernetes/kubernetes

[Service]
EnvironmentFile=/opt/kubernetes/conf/kube-controller-manager.conf
ExecStart=/opt/kubernetes/bin/kube-controller-manager $KUBE_CONTROLLER_MANAGER_OPTS
Restart=on-failure

[Install]
WantedBy=multi-user.target
5.4.2.4 Start kube-controller-manager
]# systemctl daemon-reload
]# systemctl start kube-controller-manager
]# systemctl enable kube-controller-manager


5.5 The kube-scheduler Component

5.5.1 Sign the kube-scheduler Certificate
# Create the certificate request file
]# vim /opt/TLS/k8s/kube-scheduler-csr.json
{
  "CN": "system:kube-scheduler",
  "hosts": [],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "L": "BeiJing", 
      "ST": "BeiJing",
      "O": "system:masters",
      "OU": "System"
    }
  ]
}

# Create the signing profile
]# vim /opt/TLS/k8s/scheduler-config.json
{
  "signing": {
    "default": {
      "expiry": "87600h"
    },
    "profiles": {
      "scheduler": {
         "expiry": "87600h",
         "usages": [
            "signing",
            "key encipherment",
            "server auth",
            "client auth"
        ]
      }
    }
  }
}
# Generate the certificate
]# cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=scheduler-config.json -profile=scheduler kube-scheduler-csr.json |cfssl-json -bare scheduler

# Inspect the certificates
]# ll sch*
-rw-r--r-- 1 root root  293 Sep 22 16:33 scheduler-config.json
-rw-r--r-- 1 root root 1029 Sep 22 16:36 scheduler.csr
-rw------- 1 root root 1679 Sep 22 16:36 scheduler-key.pem
-rw-r--r-- 1 root root 1424 Sep 22 16:36 scheduler.pem


# Copy the certificates to the master nodes!
]# scp scheduler* root@192.62.10.11:/opt/kubernetes/ssl/
]# scp scheduler* root@192.62.10.12:/opt/kubernetes/ssl/
5.5.2 Deploy the scheduler
5.5.2.1 Create the Configuration File
]# vim /opt/kubernetes/conf/kube-scheduler.conf
KUBE_SCHEDULER_OPTS="--logtostderr=false \
--v=2 \
--log-dir=/data/logs/kubernetes/kube-scheduler \
--leader-elect \
--kubeconfig=/opt/kubernetes/conf/kube-scheduler.kubeconfig \
--bind-address=192.62.10.11"

# Create the log directory
]# mkdir -p /data/logs/kubernetes/kube-scheduler
5.5.2.2 Generate the kubeconfig File
# Four steps (or the gen_kubeconfig helper from 5.4.2.2)
kubectl config set-cluster kubernetes \
  --certificate-authority=/opt/kubernetes/ssl/ca.pem \
  --embed-certs=true \
  --server="https://192.62.10.200:7443" \
  --kubeconfig="/opt/kubernetes/conf/kube-scheduler.kubeconfig"
  
kubectl config set-credentials kube-scheduler \
  --client-certificate=/opt/kubernetes/ssl/scheduler.pem \
  --client-key=/opt/kubernetes/ssl/scheduler-key.pem \
  --embed-certs=true \
  --kubeconfig="/opt/kubernetes/conf/kube-scheduler.kubeconfig"
  
kubectl config set-context default \
  --cluster=kubernetes \
  --user=kube-scheduler \
  --kubeconfig="/opt/kubernetes/conf/kube-scheduler.kubeconfig"
  
kubectl config use-context default --kubeconfig="/opt/kubernetes/conf/kube-scheduler.kubeconfig"
5.5.2.3 Create the systemd Service File
# Write the service file
]# vim /usr/lib/systemd/system/kube-scheduler.service
[Unit]
Description=Kubernetes Scheduler
Documentation=https://github.com/kubernetes/kubernetes

[Service]
EnvironmentFile=/opt/kubernetes/conf/kube-scheduler.conf
ExecStart=/opt/kubernetes/bin/kube-scheduler $KUBE_SCHEDULER_OPTS
Restart=on-failure

[Install]
WantedBy=multi-user.target
5.5.2.4 Start the scheduler
]# systemctl daemon-reload
]# systemctl start kube-scheduler


5.6 Checking the Master Node Status

5.6.1 Generate the Certificate kubectl Uses to Connect to the Cluster
# Write the certificate request file
]# vim /opt/TLS/k8s/kubectl-csr.json
{
  "CN": "kubectl",
  "hosts": [],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "L": "BeiJing",
      "ST": "BeiJing",
      "O": "system:masters",
      "OU": "System"
    }
  ]
}

]# vim /opt/TLS/k8s/kubectl-config.json
{
  "signing": {
    "default": {
      "expiry": "87600h"
    },
    "profiles": {
      "kubectl": {
         "expiry": "87600h",
         "usages": [
            "signing",
            "key encipherment",
            "server auth",
            "client auth"
        ]
      }
    }
  }
}
# Generate the certificate
]# cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=kubectl-config.json -profile=kubectl kubectl-csr.json |cfssl-json -bare kubectl

# Inspect the certificates
]# ll kubectl*
-rw-r--r-- 1 root root  291 Sep 23 10:15 kubectl-config.json
-rw-r--r-- 1 root root 1013 Sep 23 10:16 kubectl.csr
-rw-r--r-- 1 root root  232 Sep 23 10:15 kubectl-csr.json
-rw------- 1 root root 1679 Sep 23 10:16 kubectl-key.pem
-rw-r--r-- 1 root root 1403 Sep 23 10:16 kubectl.pem

# Remember to copy the certificates to the masters!
5.6.2 Generate the kubeconfig File
# Four steps (or the gen_kubeconfig helper from 5.4.2.2)
# Create the directory that will hold kubectl's config file
mkdir /root/.kubectl

kubectl config set-cluster kubernetes \
  --certificate-authority=/opt/kubernetes/ssl/ca.pem \
  --embed-certs=true \
  --server="https://192.62.10.200:7443" \
  --kubeconfig="/root/.kubectl/config"
  
kubectl config set-credentials cluster-admin \
  --client-certificate=/opt/kubernetes/ssl/kubectl.pem \
  --client-key=/opt/kubernetes/ssl/kubectl-key.pem \
  --embed-certs=true \
  --kubeconfig="/root/.kubectl/config"
  
kubectl config set-context default \
  --cluster=kubernetes \
  --user=cluster-admin \
  --kubeconfig="/root/.kubectl/config"
  
kubectl config use-context default --kubeconfig="/root/.kubectl/config"
5.6.3 Check the Node Status
]# kubectl get cs
Warning: v1 ComponentStatus is deprecated in v1.19+
NAME                 STATUS    MESSAGE             ERROR
scheduler            Healthy   ok
controller-manager   Healthy   ok
etcd-0               Healthy   {"health":"true"}
etcd-1               Healthy   {"health":"true"}
etcd-2               Healthy   {"health":"true"}

# The output above means the control plane is healthy

# Error: The connection to the server localhost:8080 was refused - did you specify the right host or port?
# Fix:
1. Point a variable at the /root/.kubectl/config generated in the previous step, in /root/.bash_profile
]# echo "export KUBECONFIG=/root/.kubectl/config" >> /root/.bash_profile
2. Reload the environment
]# source /root/.bash_profile
3. Run kubectl get cs again; it now works
4. If an error like the following still appears:
Unable to connect to the server: x509: certificate is valid for 10.0.0.1, 127.0.0.1, 192.62.10.11, 192.62.10.21, 192.62.10.22, 192.62.10.12, 192.62.10.100, not 192.62.10.200
5. Check the certificate: were all the required IPs added to the request file when the apiserver certificate was signed (5.2.1)?

5.7 Authorize the kubelet-bootstrap User to Request Certificates

# This prepares for the worker nodes to request certificates when they join the cluster
]# kubectl create clusterrolebinding kubelet-bootstrap \
--clusterrole=system:node-bootstrapper \
--user=kubelet-bootstrap

6. Deploying the Worker Nodes

6.1 Prepare the Binaries

Run this on every worker node!

# Create the working directories (all worker nodes)
]# mkdir -p /opt/kubernetes/{conf,bin,ssl}

# Create the log directories for the components deployed below
]# mkdir -p /data/logs/kubernetes/kubelet
]# mkdir -p /data/logs/kubernetes/kube-proxy

# Copy the binaries straight from the master
]# cd /opt/kubernetes
]# scp root@192.62.10.11:/opt/kubernetes/server/bin/kubelet bin/
]# scp root@192.62.10.11:/opt/kubernetes/server/bin/kube-proxy bin/
]# scp root@192.62.10.11:/opt/kubernetes/server/bin/kubectl bin/

# Create the kubectl config directory
]# mkdir -p /root/.kubectl/
# Copy the kubectl config file
]# scp -r root@192.62.10.11:/root/.kubectl/config /root/.kubectl/

# Copy the CA certificates into the ssl directory
]# scp root@192.62.10.11:/opt/kubernetes/ssl/ca* ssl/


6.2 The kubelet Component

6.2.1 Create the Configuration File
]# vim /opt/kubernetes/conf/kubelet.conf
KUBELET_OPTS="--logtostderr=false \
--v=2 \
--log-dir=/data/logs/kubernetes/kubelet \
--hostname-override=k8s-node-02 \
--network-plugin=cni \
--kubeconfig=/opt/kubernetes/conf/kubelet.kubeconfig \
--bootstrap-kubeconfig=/opt/kubernetes/conf/bootstrap.kubeconfig \
--config=/opt/kubernetes/conf/kubelet-config.yml \
--cert-dir=/opt/kubernetes/ssl \
--pod-infra-container-image=lizhenliang/pause-amd64:3.0"
# Note: --hostname-override must be each node's own name (k8s-node-01 / k8s-node-02)
6.2.2 Create the kubelet Parameter File (kubelet-config.yml)
]# vim /opt/kubernetes/conf/kubelet-config.yml
kind: KubeletConfiguration
apiVersion: kubelet.config.k8s.io/v1beta1
address: 0.0.0.0
port: 10250
readOnlyPort: 10255
cgroupDriver: cgroupfs
clusterDNS:
- 10.0.0.2
clusterDomain: cluster.local 
failSwapOn: false
authentication:
  anonymous:
    enabled: false
  webhook:
    cacheTTL: 2m0s
    enabled: true
  x509:
    clientCAFile: /opt/kubernetes/ssl/ca.pem 
authorization:
  mode: Webhook
  webhook:
    cacheAuthorizedTTL: 5m0s
    cacheUnauthorizedTTL: 30s
evictionHard:
  imagefs.available: 15%
  memory.available: 100Mi
  nodefs.available: 10%
  nodefs.inodesFree: 5%
maxOpenFiles: 1000000
maxPods: 110
6.2.3 Create the bootstrap.kubeconfig File
# Four steps again
# Generate the kubelet bootstrap kubeconfig; the token must match token.csv from 5.2.2.1

kubectl config set-cluster kubernetes \
  --certificate-authority=/opt/kubernetes/ssl/ca.pem \
  --embed-certs=true \
  --server="https://192.62.10.200:7443" \
  --kubeconfig="/opt/kubernetes/conf/bootstrap.kubeconfig"
  
kubectl config set-credentials "kubelet-bootstrap" \
  --token=4d9afa191443b7239ea70f1752a9c5b8 \
  --kubeconfig="/opt/kubernetes/conf/bootstrap.kubeconfig"
  
kubectl config set-context default \
  --cluster=kubernetes \
  --user="kubelet-bootstrap" \
  --kubeconfig="/opt/kubernetes/conf/bootstrap.kubeconfig"
  
kubectl config use-context default --kubeconfig="/opt/kubernetes/conf/bootstrap.kubeconfig"
6.2.4 Create the systemd Service File
]# vim /usr/lib/systemd/system/kubelet.service
[Unit]
Description=Kubernetes Kubelet
After=docker.service

[Service]
EnvironmentFile=/opt/kubernetes/conf/kubelet.conf
ExecStart=/opt/kubernetes/bin/kubelet $KUBELET_OPTS
Restart=on-failure
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
6.2.5 Start the kubelet Service
]# systemctl daemon-reload   # reload unit files
]# systemctl start kubelet   # start the service
]# systemctl enable kubelet  # enable at boot


6.2.6 Approve the Certificate Requests Submitted by kubelet

Once kubelet starts successfully, it sends the apiserver a request to join the cluster; the node can only join after the master approves it. (Even when node components run on a master machine, that machine still sends a join request that must be approved.)

# Run on the master!
]# kubectl get csr   # list pending certificate requests
NAME                                                   AGE     SIGNERNAME                                    REQUESTOR           CONDITION
node-csr-Z91wah0TcBv60_iQqoFWx2Qe7VFDq-OFlfc6u6Ozzds   9m31s   kubernetes.io/kube-apiserver-client-kubelet   kubelet-bootstrap   Pending
node-csr-nJlNv0iPoG90adHelkik_5vRc5xvbWl20SxJqYLdSz0   11m     kubernetes.io/kube-apiserver-client-kubelet   kubelet-bootstrap   Pending

# Approve the request
]# kubectl certificate approve node-csr-Z91wah0TcBv60_iQqoFWx2Qe7VFDq-OFlfc6u6Ozzds
certificatesigningrequest.certificates.k8s.io/node-csr-Z91wah0TcBv60_iQqoFWx2Qe7VFDq-OFlfc6u6Ozzds approved

# Check the node list
]# kubectl get node 
NAME          STATUS     ROLES    AGE   VERSION
k8s-node-01   NotReady   <none>   34s   v1.20.10
k8s-node-02   NotReady   <none>   14s   v1.20.10

Error: Failed to run Kubelet: misconfiguration: kubelet cgroup driver: "systemd" is different from docker cgroup driver: "cgroupfs"
Cause: the driver set in docker's config does not match the one in kubelet's config.
Fix: make the two drivers identical (see the check below).
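A quick hedged check for this mismatch: compare what each side reports and make them agree (this deployment uses cgroupfs on both sides, per daemon.json in 3.1 and kubelet-config.yml in 6.2.2):

]# docker info --format '{{.CgroupDriver}}'                  # expect: cgroupfs
]# grep cgroupDriver /opt/kubernetes/conf/kubelet-config.yml # expect: cgroupDriver: cgroupfs
# If they differ, fix one side ("exec-opts" in /etc/docker/daemon.json, or
# cgroupDriver in kubelet-config.yml), then restart docker and kubelet.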

6.3 The kube-proxy Component

6.3.1 Sign the Certificate
# Create the certificate request file
]# vim /opt/TLS/k8s/kube-proxy-csr.json
{
  "CN": "system:kube-proxy",
  "hosts": [],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "L": "BeiJing", 
      "ST": "BeiJing",
      "O": "system:masters",
      "OU": "System"
    }
  ]
}

# Create the signing profile
]# vim /opt/TLS/k8s/proxy-config.json
{
  "signing": {
    "default": {
      "expiry": "87600h"
    },
    "profiles": {
      "proxy": {
         "expiry": "87600h",
         "usages": [
            "signing",
            "key encipherment",
            "server auth",
            "client auth"
        ]
      }
    }
  }
}
# Generate the certificate
]# cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=proxy-config.json -profile=proxy kube-proxy-csr.json |cfssl-json -bare proxy

]# ll proxy*
-rw-r--r-- 1 root root  289 Sep 23 14:39 proxy-config.json
-rw-r--r-- 1 root root 1025 Sep 23 14:39 proxy.csr
-rw------- 1 root root 1679 Sep 23 14:39 proxy-key.pem
-rw-r--r-- 1 root root 1415 Sep 23 14:39 proxy.pem

6.3.2 Create the Configuration File
]# vim /opt/kubernetes/conf/kube-proxy.conf
KUBE_PROXY_OPTS="--logtostderr=false \
--v=2 \
--log-dir=/data/logs/kubernetes/kube-proxy \
--config=/opt/kubernetes/conf/kube-proxy-config.yml"


# Create the log directory
]# mkdir -p /data/logs/kubernetes/kube-proxy
6.3.3 Create the kube-proxy Parameter File (kube-proxy-config.yml)
]# vim /opt/kubernetes/conf/kube-proxy-config.yml
kind: KubeProxyConfiguration
apiVersion: kubeproxy.config.k8s.io/v1alpha1
bindAddress: 0.0.0.0
metricsBindAddress: 0.0.0.0:10249
clientConnection:
  kubeconfig: /opt/kubernetes/conf/kube-proxy.kubeconfig
hostnameOverride: k8s-node-01   # each node uses its own hostname here
clusterCIDR: 10.244.0.0/16
6.3.4 Generate the kube-proxy.kubeconfig File
# Four steps (or the gen_kubeconfig helper from 5.4.2.2)

]# kubectl config set-cluster kubernetes \
  --certificate-authority=/opt/kubernetes/ssl/ca.pem \
  --embed-certs=true \
  --server="https://192.62.10.200:7443" \
  --kubeconfig="/opt/kubernetes/conf/kube-proxy.kubeconfig"
  
]# kubectl config set-credentials kube-proxy \
  --client-certificate=/opt/kubernetes/ssl/proxy.pem \
  --client-key=/opt/kubernetes/ssl/proxy-key.pem \
  --embed-certs=true \
  --kubeconfig="/opt/kubernetes/conf/kube-proxy.kubeconfig"
  
]# kubectl config set-context default \
  --cluster=kubernetes \
  --user=kube-proxy \
  --kubeconfig="/opt/kubernetes/conf/kube-proxy.kubeconfig"
  
]# kubectl config use-context default --kubeconfig="/opt/kubernetes/conf/kube-proxy.kubeconfig"
6.3.5 Create the systemd Service File
]# vim /usr/lib/systemd/system/kube-proxy.service
[Unit]
Description=Kubernetes Proxy
After=network.target

[Service]
EnvironmentFile=/opt/kubernetes/conf/kube-proxy.conf
ExecStart=/opt/kubernetes/bin/kube-proxy $KUBE_PROXY_OPTS
Restart=on-failure
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
6.3.6 Start the Service
]# systemctl daemon-reload      # reload unit files
]# systemctl start kube-proxy   # start the service
]# systemctl enable kube-proxy  # enable at boot

6.4 Deploy the Calico Network Plugin

# Deploy it (calico.yaml can be fetched from the official Calico manifests)
]# kubectl apply -f calico.yaml

# Check the pods; -n selects the namespace
]# kubectl get pods -n kube-system


Note: docker must have a usable registry mirror configured, otherwise the images cannot be pulled!

6.5 Authorize the apiserver to Access kubelet

If the apiserver is not authorized to access kubelet, kubectl cannot retrieve certain cluster information; commands such as kubectl logs will not work.

# Run on the master!
]# vim /opt/kubernetes/conf/apiserver-to-kubelet-rbac.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  annotations:
    rbac.authorization.kubernetes.io/autoupdate: "true"
  labels:
    kubernetes.io/bootstrapping: rbac-defaults
  name: system:kube-apiserver-to-kubelet
rules:
  - apiGroups:
      - ""
    resources:
      - nodes/proxy
      - nodes/stats
      - nodes/log
      - nodes/spec
      - nodes/metrics
      - pods/log
    verbs:
      - "*"
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: system:kube-apiserver
  namespace: ""
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:kube-apiserver-to-kubelet
subjects:
  - apiGroup: rbac.authorization.k8s.io
    kind: User
    name: kubernetes
    
    
# Apply the yaml
]# kubectl apply -f apiserver-to-kubelet-rbac.yaml

7. Deploying CoreDNS and the Dashboard

7.1 Deploy CoreDNS

CoreDNS resolves Service names inside the cluster.

It mainly provides service discovery: the process by which services (applications) locate one another.

# Create coredns.yaml; it can be downloaded from GitHub:
https://github.com/kubernetes/kubernetes/blob/master/cluster/addons/dns/coredns/coredns.yaml.base

# Deploy coredns
]# kubectl apply -f coredns.yaml
serviceaccount/coredns created
clusterrole.rbac.authorization.k8s.io/system:coredns created
clusterrolebinding.rbac.authorization.k8s.io/system:coredns created
configmap/coredns created
deployment.apps/coredns created
service/kube-dns created

# Check the pod and svc
~]# kubectl get pod,svc -n kube-system
NAME                                          READY   STATUS    RESTARTS   AGE
pod/coredns-6cc56c94bd-747jk                  1/1     Running   0          8m49s

NAME               TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)                  AGE
service/kube-dns   ClusterIP   10.0.0.2     <none>        53/UDP,53/TCP,9153/TCP   8m49s


# DNS resolution test
]# kubectl run -it --rm dns-test --image=busybox:1.28.4 sh
If you don't see a command prompt, try pressing enter.
/ # nslookup kubernetes
Server:    10.0.0.2
Address 1: 10.0.0.2 kube-dns.kube-system.svc.cluster.local

Name:      kubernetes
Address 1: 10.0.0.1 kubernetes.default.svc.cluster.local
/ # nslookup baidu.com
Server:    10.0.0.2
Address 1: 10.0.0.2 kube-dns.kube-system.svc.cluster.local

Name:      baidu.com
Address 1: 220.181.38.148
Address 2: 220.181.38.251

7.2 Deploy the Dashboard

A visual management UI for the cluster.

# Deploy the Dashboard
]# kubectl apply -f kubernetes-dashboard.yaml
namespace/kubernetes-dashboard created
serviceaccount/kubernetes-dashboard created
service/kubernetes-dashboard created
secret/kubernetes-dashboard-certs created
secret/kubernetes-dashboard-csrf created
secret/kubernetes-dashboard-key-holder created
configmap/kubernetes-dashboard-settings created
role.rbac.authorization.k8s.io/kubernetes-dashboard created
clusterrole.rbac.authorization.k8s.io/kubernetes-dashboard created
rolebinding.rbac.authorization.k8s.io/kubernetes-dashboard created
clusterrolebinding.rbac.authorization.k8s.io/kubernetes-dashboard created
deployment.apps/kubernetes-dashboard created
service/dashboard-metrics-scraper created
deployment.apps/dashboard-metrics-scraper created


# Check the pod and svc
]# kubectl get pod,svc -n kubernetes-dashboard
NAME                                             READY   STATUS    RESTARTS   AGE
pod/dashboard-metrics-scraper-7445d59dfd-zv4s7   1/1     Running   0          2m37s
pod/kubernetes-dashboard-5ddcdf9c99-b8wvp        1/1     Running   0          2m37s

NAME                                TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)         AGE
service/dashboard-metrics-scraper   ClusterIP   10.0.0.117   <none>        8000/TCP        2m37s
service/kubernetes-dashboard        NodePort    10.0.0.174   <none>        443:30001/TCP   2m37s
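The token lookup below greps for a dashboard-admin service account, which the stock manifest does not create; a hedged sketch of creating one with cluster-admin rights (fine for a lab, far too broad for production):

]# kubectl create serviceaccount dashboard-admin -n kube-system
]# kubectl create clusterrolebinding dashboard-admin \
--clusterrole=cluster-admin \
--serviceaccount=kube-system:dashboard-admin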

# Retrieve the login token
]# kubectl describe secrets -n kube-system $(kubectl -n kube-system get secret | awk '/dashboard-admin/{print $1}')
Name:         calico-kube-controllers-token-wmxlx
Type:  kubernetes.io/service-account-token
token:      eyJhbGciOiJSUzI1NiIsImtpZCI6IkVkSTQtYnVTcTZOaDRhRlQ1R1pETmlqVDR1YUtoem9hbDlYYTFvNmFxZzQifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJjYWxpY28ta3ViZS1jb250cm9sbGVycy10b2tlbi13bXhseCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50Lm5hbWUiOiJjYWxpY28ta3ViZS1jb250cm9sbGVycyIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50LnVpZCI6IjU0N2FkYzM4LWYwYmUtNGE3Zi1iZmVlLTZmN2Y3N2MxNWNjOSIsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDprdWJlLXN5c3RlbTpjYWxpY28ta3ViZS1jb250cm9sbGVycyJ9.pnQWI1y6sScSFb0AwCRaU6cUbz_LwvOCzxGmG2SRrlSmibnyuMFUhy4K_rY5iIZkwwyxzxqhBudSelbA0Boh0Wz2TS7zjeorVTDrU1cDVWaFkD_wG_jnSW7Vgnhuz7Kh2-QxvDdNPFXaUNqebJawXPclrgUd_45fLwQAF06RWp1PELCmjCXCQEf1mfOuPINNuKEMcoKqOQxPXLSuEJ_iTmQWGLhURJNtKew7lXy813TRPFTiP9bcNl0hchvGndusO105T6ukKaHWj8-xSjyWJObH4ubU1LeQl4Z0qTQykeJtHhvXBycBck-L9en2eZz1FgyKEgLhg8IgWIYeubwOOQ
Name:         calico-node-token-6trdq

Once the pods are all Running, open the dashboard in a browser:

https://<any node IP>:30001

Log in with the token retrieved above.

8. And with that, a highly available K8S cluster is fully deployed!
