Deploying a Kubernetes Cluster from Binary Packages (1M3W)

Advice for beginners: don't start by practicing against "highly available cluster" deployment guides. Build the simplest single-node cluster first. It may not look impressive, but trust me: once you understand the basic components of a single-node deployment, multi-node deployment becomes much easier to follow.

1. Overview

1M3W means a cluster of 1 master node and 3 worker nodes. It is essentially the single-node setup with two more worker nodes added (the master also runs as a worker). In theory you could get there just by editing the single-node configuration, but building it from scratch deepens the understanding of each component.

This deployment uses three hosts:

IP              hostname    role                       components
192.168.91.132  bin-master  master node + worker node  kube-apiserver, kube-controller-manager, kube-scheduler, etcd, kubectl, kubelet, kube-proxy
192.168.91.133  bin-node-1  worker node                etcd, kubelet, kube-proxy
192.168.91.134  bin-node-2  worker node                etcd, kubelet, kube-proxy

2. Version Notes

The packages used here are the same as in the single-node deployment. If you have already downloaded them locally, just upload them to the relevant directory on the master node.

You can download the software in advance, or fetch it later with the commands shown below.


Note: the $ in front of commands is only a prompt marker; leave it out when copying the commands.


3. Initialize the System

All three hosts need these steps. You can also prepare one host first and then clone it in VMware.

# 1. Disable the firewall
$ systemctl stop firewalld  && systemctl disable firewalld

# 2. Disable SELinux
$setenforce 0 && sed -i 's/enforcing/disabled/' /etc/selinux/config

# 3. Disable the swap partition
$swapoff -a && sed -ri 's/.*swap.*/#&/' /etc/fstab

# 4. Set the hostname
$hostnamectl set-hostname <hostname>

# 5. Add host entries for the master and worker nodes (on every node)
$ cat >> /etc/hosts << EOF
192.168.91.132 bin-master
192.168.91.133 bin-node-1
192.168.91.134 bin-node-2
EOF

# 6. Pass bridged IPv4 traffic to iptables chains
$ cat <<EOF | sudo tee /etc/modules-load.d/k8s.conf
br_netfilter
EOF

$cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF

$ sudo sysctl --system

# 7. Time synchronization (VMware Tools can handle this for VMs) [optional]
$ yum install ntpdate -y
$ ntpdate time.windows.com

# Reboot
$reboot
# or
$shutdown -r now
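
Optionally, you can verify that the settings survived the reboot. A minimal sanity check, assuming standard CentOS tooling:

$ getenforce                                  # expect: Disabled
$ free -h | grep -i swap                      # the swap line should show 0B
$ lsmod | grep br_netfilter                   # the module should be loaded
$ sysctl net.bridge.bridge-nf-call-iptables   # expect: = 1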

4. Deploy the Binary Packages

4.1 Deploy Docker

All three hosts need Docker installed, but the steps only have to be run on the master; the binaries and config files are then copied to the other nodes.

4.1.1 Download the binary package

# Create a directory on the master host for the binary packages
$mkdir /usr/local/k8s -p

$cd /usr/local/k8s
# Download the Docker package
$wget https://download.docker.com/linux/static/stable/x86_64/docker-20.10.9.tgz
# Unpack it
$tar zxvf docker-20.10.9.tgz

# Copy the binaries to the run directory of the other nodes
$scp -r /usr/local/k8s/docker/* root@192.168.91.133:/usr/bin
$scp -r /usr/local/k8s/docker/* root@192.168.91.134:/usr/bin
# Copy to the local run directory
$cp docker/* /usr/bin

4.1.2 Manage Docker with systemd

Create the docker.service unit file:

# Quote the heredoc delimiter so the shell does not expand $MAINPID in the unit file
$cat > /usr/lib/systemd/system/docker.service << 'EOF'
[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
After=network-online.target firewalld.service
Wants=network-online.target
[Service]
Type=notify
ExecStart=/usr/bin/dockerd
ExecReload=/bin/kill -s HUP $MAINPID
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
TimeoutStartSec=0
Delegate=yes
KillMode=process
Restart=on-failure
StartLimitBurst=3
StartLimitInterval=60s
[Install]
WantedBy=multi-user.target
EOF

4.1.3 Create the config file: set the Aliyun registry mirror and the container cgroup driver

# Create the default Docker config directory (run this on every node)
$mkdir /etc/docker -p

$cat > /etc/docker/daemon.json << EOF
{
  "registry-mirrors": ["https://b9pmyelo.mirror.aliyuncs.com"],
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m"
  },
  "storage-driver": "overlay2"
}
EOF

4.1.4 Start Docker and enable it at boot

# Start Docker
$systemctl daemon-reload && systemctl start docker && systemctl enable docker
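
A quick way to confirm Docker is up and using the intended cgroup driver (a minimal check, assuming the daemon started cleanly):

# Expect "Cgroup Driver: systemd" in the output
$ docker info | grep -i 'cgroup driver'
$ docker version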

4.1.5 Copy to the other nodes

Once Docker starts correctly on the master node, copy docker.service and daemon.json to the other nodes.

$scp -r /usr/lib/systemd/system/docker.service root@192.168.91.133:/usr/lib/systemd/system/
$scp -r /usr/lib/systemd/system/docker.service root@192.168.91.134:/usr/lib/systemd/system/
$scp /etc/docker/daemon.json root@192.168.91.133:/etc/docker/
$scp /etc/docker/daemon.json root@192.168.91.134:/etc/docker/

Then run the same start command on the other nodes.

4.2 Deploy the etcd Cluster

4.2.1 Generate certificates

4.2.1.1 Download cfssl
$cd /usr/local/k8s
$wget https://pkg.cfssl.org/R1.2/cfssl_linux-amd64
$wget https://pkg.cfssl.org/R1.2/cfssljson_linux-amd64
$wget https://pkg.cfssl.org/R1.2/cfssl-certinfo_linux-amd64
# Make the binaries executable
$chmod +x cfssl_linux-amd64 cfssljson_linux-amd64 cfssl-certinfo_linux-amd64

$cp cfssl_linux-amd64 /usr/local/bin/cfssl
$cp cfssljson_linux-amd64 /usr/local/bin/cfssljson
$cp cfssl-certinfo_linux-amd64 /usr/bin/cfssl-certinfo
4.2.1.2 Generate etcd certificates
4.2.1.2.1 Create the config for a self-signed certificate authority (CA)
# Create a working directory for certificate generation (all later certificates are generated here)
$mkdir -p ~/TLS/{etcd,k8s}

$cd ~/TLS/etcd
# CA config file
$cat > ca-config.json<<EOF
{
	"signing":{ 
		"default":{
			"expiry":"87600h"
		},
		"profiles":{
			"www":{
				"expiry":"87600h",
				"usages":[
					"signing",
					"key encipherment",
					"server auth",
					"client auth"
				]
			}
		}
	}
}
EOF

$cat > ca-csr.json<< EOF
{
	"CN":"etcd CA",
	"key":{
		"algo": "rsa",
		"size": 2048
	},
	"names":[
		{
			"C":"CN",
			"L":"BeiJing",
			"ST":"BeiJing"
		}
	]
}
EOF
4.2.1.2.2 Generate the CA certificate
$cfssl gencert -initca ca-csr.json | cfssljson -bare ca -

$ls *pem
4.2.1.2.3 Issue the etcd HTTPS certificate with the self-signed CA

Create the certificate signing request file:

$cat > server-csr.json << EOF
{
	"CN": "etcd",
	"hosts": [
		"192.168.91.132",
		"192.168.91.133",
		"192.168.91.134"
	],
	"key": {
		"algo": "rsa",
		"size": 2048
	},
	"names": [
		{
			"C": "CN",
			"L": "BeiJing",
			"ST": "BeiJing"
		}
	]
}
EOF

Note: the IPs in the hosts field are the internal communication IPs of all etcd nodes.

Generate the HTTPS certificate:

$cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=www server-csr.json | cfssljson -bare server

$ls server*pem
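Optionally, you can inspect the issued certificate, for example to confirm the hosts/SANs made it in; cfssl-certinfo was installed alongside cfssl above:

$cfssl-certinfo -cert server.pem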
4.2.1.3 Deploy etcd
4.2.1.3.1 Download the etcd binary package
$cd /usr/local/k8s

$wget https://github.com/etcd-io/etcd/releases/download/v3.5.1/etcd-v3.5.1-linux-amd64.tar.gz

# Create the etcd working directories
$mkdir /opt/etcd/{bin,cfg,ssl} -p
# Unpack
$tar zxvf etcd-v3.5.1-linux-amd64.tar.gz
# Copy the binaries into the working directory
$cp etcd-v3.5.1-linux-amd64/{etcd,etcdctl} /opt/etcd/bin/
4.2.1.3.2 Create the etcd configuration file
$cat > /opt/etcd/cfg/etcd.conf<< EOF
#[Member]
ETCD_NAME="etcd-1"
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="https://192.168.91.132:2380"
ETCD_LISTEN_CLIENT_URLS="https://192.168.91.132:2379"
#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://192.168.91.132:2380"
ETCD_ADVERTISE_CLIENT_URLS="https://192.168.91.132:2379"
ETCD_INITIAL_CLUSTER="etcd-1=https://192.168.91.132:2380,etcd-2=https://192.168.91.133:2380,etcd-3=https://192.168.91.134:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"
ETCD_ENABLE_V2="true"  
EOF

ETCD_INITIAL_CLUSTER: the address set of all cluster nodes

4.2.1.3.3 Manage etcd with systemd
# Copy the previously generated certificates into the etcd working directory
$cp ~/TLS/etcd/ca*pem ~/TLS/etcd/server*pem /opt/etcd/ssl

# Quote the delimiter so the backslash line continuations are written to the file verbatim
$cat > /usr/lib/systemd/system/etcd.service << 'EOF'
[Unit]
Description=Etcd Server
After=network.target
After=network-online.target
Wants=network-online.target
[Service]
Type=notify
EnvironmentFile=/opt/etcd/cfg/etcd.conf
ExecStart=/opt/etcd/bin/etcd \
--cert-file=/opt/etcd/ssl/server.pem \
--key-file=/opt/etcd/ssl/server-key.pem \
--peer-cert-file=/opt/etcd/ssl/server.pem \
--peer-key-file=/opt/etcd/ssl/server-key.pem \
--trusted-ca-file=/opt/etcd/ssl/ca.pem \
--peer-trusted-ca-file=/opt/etcd/ssl/ca.pem \
--logger=zap
Restart=on-failure
LimitNOFILE=65536
[Install]
WantedBy=multi-user.target
EOF
4.2.1.3.4 Copy the config, certificates and etcd.service to the other nodes
#node1
$scp -r /opt/etcd/ root@192.168.91.133:/opt/
$scp /usr/lib/systemd/system/etcd.service root@192.168.91.133:/usr/lib/systemd/system/

#node2
$scp -r /opt/etcd/ root@192.168.91.134:/opt/
$scp /usr/lib/systemd/system/etcd.service root@192.168.91.134:/usr/lib/systemd/system/
4.2.1.3.5 Modify the etcd configuration on the other nodes

On each target node, change the node name and the local server IPs in the copied etcd.conf:
node1:

#[Member]
ETCD_NAME="etcd-2"
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="https://192.168.91.133:2380"
ETCD_LISTEN_CLIENT_URLS="https://192.168.91.133:2379"
#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://192.168.91.133:2380"
ETCD_ADVERTISE_CLIENT_URLS="https://192.168.91.133:2379"
ETCD_INITIAL_CLUSTER="etcd-1=https://192.168.91.132:2380,etcd-2=https://192.168.91.133:2380,etcd-3=https://192.168.91.134:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"

node2:

#[Member]
ETCD_NAME="etcd-3"
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="https://192.168.91.134:2380"
ETCD_LISTEN_CLIENT_URLS="https://192.168.91.134:2379"
#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://192.168.91.134:2380"
ETCD_ADVERTISE_CLIENT_URLS="https://192.168.91.134:2379"
ETCD_INITIAL_CLUSTER="etcd-1=https://192.168.91.132:2380,etcd-2=https://192.168.91.133:2380,etcd-3=https://192.168.91.134:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"
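
As an alternative to hand-editing, a hedged GNU sed sketch can rewrite only the node name and the listen/advertise addresses, leaving ETCD_INITIAL_CLUSTER untouched. Run on node1 after the scp (swap in etcd-3 and 192.168.91.134 for node2):

$sed -i -e 's/^ETCD_NAME=.*/ETCD_NAME="etcd-2"/' \
  -e '/^ETCD_LISTEN\|^ETCD_.*ADVERTISE/s/192.168.91.132/192.168.91.133/' /opt/etcd/cfg/etcd.conf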

After modifying, start the etcd service on the worker nodes first, then start etcd on the master:

$systemctl daemon-reload && systemctl start etcd && systemctl enable etcd
4.2.1.3.6 Check the health of the etcd cluster
$ETCDCTL_API=3 /opt/etcd/bin/etcdctl --cacert=/opt/etcd/ssl/ca.pem --cert=/opt/etcd/ssl/server.pem --key=/opt/etcd/ssl/server-key.pem --endpoints="https://192.168.91.132:2379,https://192.168.91.133:2379,https://192.168.91.134:2379" endpoint health

If a node fails, check its logs to find the cause:

# Logs are in /var/log/messages, or use
$journalctl -u etcd

4.3 Deploy the Master Node

4.3.1 Deploy kube-apiserver

4.3.1.1 Generate the kube-apiserver certificate

1. Create the self-signed certificate authority (CA) config

$cd /root/TLS/k8s
$cat > ca-config.json<< EOF
{
    "signing":{
        "default":{
            "expiry":"87600h"
        },
        "profiles":{
            "kubernetes":{
                "expiry":"87600h",
                "usages":[
                    "signing",
                    "key encipherment",
                    "server auth",
                    "client auth"
                ]
            }
        }
    }
}
EOF


$cat > ca-csr.json<< EOF
{
    "CN":"kubernetes",
    "key":{
        "algo":"rsa",
        "size":2048
    },
    "names":[
        {
            "C":"CN",
            "L":"Beijing",
            "ST":"Beijing",
            "O":"system:masters",
            "OU":"System"
        }
    ]
}
EOF

2. Generate the CA certificate

$cfssl gencert -initca ca-csr.json | cfssljson -bare ca -

$ls *pem

3. Issue the kube-apiserver HTTPS certificate with the self-signed CA
Create the certificate signing request file, listing the relevant IPs:

$cd /root/TLS/k8s
$cat > server-csr.json<< EOF
{
    "CN":"kubernetes",
    "hosts":[
        "10.0.0.1",
        "192.168.91.132",
        "192.168.91.133",
        "192.168.91.134",
        "kubernetes",
        "kubernetes.default",
        "kubernetes.default.svc",
        "kubernetes.default.svc.cluster",
        "kubernetes.default.svc.cluster.local"
    ],
    "key":{
        "algo":"rsa",
        "size":2048
    },
    "names":[
        {
            "C":"CN",
            "L":"BeiJing",
            "ST":"BeiJing",
            "O":"k8s",
            "OU":"System"
        }
    ]
}
EOF

Generate the HTTPS certificate:

$cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes server-csr.json | cfssljson -bare server

$ls server*pem
4.3.1.2 Download the binary files
$cd /usr/local/k8s

$wget https://dl.k8s.io/v1.23.1/kubernetes-server-linux-amd64.tar.gz

$tar zxvf kubernetes-server-linux-amd64.tar.gz
# Create the k8s working directories
$mkdir -p /opt/kubernetes/{bin,cfg,ssl,logs}

$cd /usr/local/k8s/kubernetes/server/bin
$cp kube-apiserver kube-scheduler kube-controller-manager /opt/kubernetes/bin
$cp kubectl /usr/bin/
4.3.1.3 Deploy kube-apiserver
4.3.1.3.1 Generate the token file
$echo $(head -c 16 /dev/urandom | od -An -t x | tr -d ' '),kubelet-bootstrap,10001,"system:kubelet-bootstrap" > /opt/kubernetes/cfg/token.csv
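
For reference, token.csv is a static token authentication file in the format token,user,uid,"group". With the token used later in this article, it would look like this:

$cat /opt/kubernetes/cfg/token.csv
31063b40da865ec7682667dcaa46ce7e,kubelet-bootstrap,10001,"system:kubelet-bootstrap"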
4.3.1.3.2 Create the configuration file
$cat > /opt/kubernetes/cfg/kube-apiserver.conf<< EOF
KUBE_APISERVER_OPTS="--logtostderr=false \\
--v=2 \\
--log-dir=/opt/kubernetes/logs \\
--etcd-servers=https://192.168.91.132:2379,https://192.168.91.133:2379,https://192.168.91.134:2379 \\
--bind-address=192.168.91.132 \\
--secure-port=6443 \\
--advertise-address=192.168.91.132 \\
--allow-privileged=true \\
--service-cluster-ip-range=10.0.0.0/24 \\
--enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,ResourceQuota,NodeRestriction \\
--authorization-mode=RBAC,Node \\
--enable-bootstrap-token-auth=true \\
--token-auth-file=/opt/kubernetes/cfg/token.csv \\
--service-node-port-range=30000-32767 \\
--kubelet-client-certificate=/opt/kubernetes/ssl/server.pem \\
--kubelet-client-key=/opt/kubernetes/ssl/server-key.pem \\
--tls-cert-file=/opt/kubernetes/ssl/server.pem \\
--tls-private-key-file=/opt/kubernetes/ssl/server-key.pem \\
--client-ca-file=/opt/kubernetes/ssl/ca.pem \\
--service-account-issuer=https://kubernetes.default.svc.cluster.local \\
--service-account-signing-key-file=/opt/kubernetes/ssl/ca-key.pem \\
--service-account-key-file=/opt/kubernetes/ssl/ca-key.pem \\
--etcd-cafile=/opt/etcd/ssl/ca.pem \\
--etcd-certfile=/opt/etcd/ssl/server.pem \\
--etcd-keyfile=/opt/etcd/ssl/server-key.pem \\
--audit-log-maxage=30 \\
--audit-log-maxbackup=3 \\
--audit-log-maxsize=100 \\
--audit-log-path=/opt/kubernetes/logs/k8s-audit.log"
EOF

--logtostderr: log to stderr (false here, so logs go to files)
--v: log verbosity level
--log-dir: log directory
--etcd-servers: etcd cluster addresses
--bind-address: listen address
--secure-port: https secure port
--advertise-address: the address advertised to the rest of the cluster
--allow-privileged: allow privileged containers
--service-cluster-ip-range: virtual IP range for Services
--enable-admission-plugins: admission control plugins
--authorization-mode: authorization modes; enables RBAC authorization and Node self-management
--enable-bootstrap-token-auth: enable the TLS bootstrap mechanism
--token-auth-file: bootstrap token file
--service-node-port-range: default port range for NodePort Services
--kubelet-client-xxx: client certificate used by the apiserver to access kubelets
--tls-xxx-file: apiserver https certificates
--etcd-xxxfile: certificates for connecting to the etcd cluster
--audit-log-xxx: audit log settings
--service-account-issuer: identifier of the service account token issuer; it is placed in the iss claim of issued tokens; the value is a string or URL
--service-account-signing-key-file: path to the private key of the service account token issuer, used to sign issued ID tokens (requires the TokenRequest feature)

4.3.1.3.3 Copy the generated k8s certificates
$cp ~/TLS/k8s/ca*pem ~/TLS/k8s/server*pem /opt/kubernetes/ssl
4.3.1.3.4 Manage the apiserver with systemd
$cat > /usr/lib/systemd/system/kube-apiserver.service<< EOF
[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/kubernetes/kubernetes
[Service]
EnvironmentFile=/opt/kubernetes/cfg/kube-apiserver.conf
ExecStart=/opt/kubernetes/bin/kube-apiserver \$KUBE_APISERVER_OPTS
Restart=on-failure
[Install]
WantedBy=multi-user.target
EOF
4.3.1.3.5 Start kube-apiserver and enable it at boot
$systemctl daemon-reload && systemctl start kube-apiserver && systemctl enable kube-apiserver
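
A minimal check that the apiserver actually came up and is listening on the secure port (assuming iproute2's ss is available):

$systemctl status kube-apiserver --no-pager
$ss -lntp | grep 6443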

4.3.2 Deploy kubectl

The apiserver has authentication enabled, so kubectl must be configured with a certificate; otherwise it cannot access the API.

4.3.2.1 Generate the kubectl certificate
$cd /root/TLS/k8s
$cat > kubectl-csr.json<<EOF
{
  "CN": "admin",
  "hosts": [],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "BeiJing",
      "L": "BeiJing",
      "O": "system:masters",
      "OU": "System"
    }
  ]
}
EOF

# Generate the certificate
$cfssl gencert -ca=ca.pem -ca-key=ca-key.pem  -config=ca-config.json -profile=kubernetes kubectl-csr.json | cfssljson -bare kubectl

$cp ~/TLS/k8s/kubectl*.pem /opt/kubernetes/ssl
4.3.2.2 Generate the kubectl.kubeconfig file
$cd /opt/kubernetes/ssl

# Generate the kubectl kubeconfig file
$kubectl config set-cluster kubernetes \
--certificate-authority=/opt/kubernetes/ssl/ca.pem \
--embed-certs=true \
--server=https://192.168.91.132:6443 \
--kubeconfig=kubectl.kubeconfig

$kubectl config set-credentials admin \
--client-certificate=/opt/kubernetes/ssl/kubectl.pem \
--embed-certs=true \
--client-key=/opt/kubernetes/ssl/kubectl-key.pem  \
--kubeconfig=kubectl.kubeconfig

$kubectl config set-context default \
--cluster=kubernetes \
--user=admin \
--kubeconfig=kubectl.kubeconfig

$kubectl config use-context default --kubeconfig=kubectl.kubeconfig

$mkdir ~/.kube -p

$cp kubectl.kubeconfig ~/.kube/config

# Check Kubernetes status
$kubectl cluster-info

$kubectl get cs

$kubectl get all --all-namespaces


4.3.3 Deploy kube-controller-manager

4.3.3.1 Generate the certificate (hosts: the master node IPs)
$cd /root/TLS/k8s
$cat > kube-controller-csr.json<<EOF
{
  "CN": "system:kube-controller-manager",
  "hosts": [
  	 "127.0.0.1",
  	 "192.168.91.132"
  ],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "BeiJing",
      "L": "BeiJing",
      "O": "KUBERNETES",
      "OU": "System"
    }
  ]
}
EOF

# Generate the certificate
$cfssl gencert -ca=ca.pem -ca-key=ca-key.pem  -config=ca-config.json -profile=kubernetes kube-controller-csr.json | cfssljson -bare kube-controller-manager

$cp ~/TLS/k8s/kube-controller-manager*.pem /opt/kubernetes/ssl
4.3.3.2 Generate the kube-controller-manager.kubeconfig file
$cd /opt/kubernetes/ssl

# Generate the kube-controller-manager.kubeconfig file
# --server is the master node IP
$kubectl config set-cluster kubernetes \
--certificate-authority=/opt/kubernetes/ssl/ca.pem \
--embed-certs=true \
--server=https://192.168.91.132:6443 \
--kubeconfig=kube-controller-manager.kubeconfig

$kubectl config set-credentials kube-controller-manager \
--client-certificate=/opt/kubernetes/ssl/kube-controller-manager.pem \
--embed-certs=true \
--client-key=/opt/kubernetes/ssl/kube-controller-manager-key.pem  \
--kubeconfig=kube-controller-manager.kubeconfig

$kubectl config set-context default \
--cluster=kubernetes \
--user=kube-controller-manager \
--kubeconfig=kube-controller-manager.kubeconfig

$kubectl config use-context default --kubeconfig=kube-controller-manager.kubeconfig
4.3.3.3 Create the configuration file
$cat > /opt/kubernetes/cfg/kube-controller-manager.conf<< EOF
KUBE_CONTROLLER_MANAGER_OPTS="--logtostderr=false \\
--v=2 \\
--cluster-name=kubernetes \\
--log-dir=/opt/kubernetes/logs \\
--leader-elect=true \\
--bind-address=127.0.0.1 \\
--allocate-node-cidrs=true \\
--cluster-cidr=10.244.0.0/16 \\
--service-cluster-ip-range=10.0.0.0/24 \\
--leader-elect=true \\
--controllers=*,bootstrapsigner,tokencleaner \\
--kubeconfig=/opt/kubernetes/ssl/kube-controller-manager.kubeconfig \\
--tls-cert-file=/opt/kubernetes/ssl/kube-controller-manager.pem \\
--tls-private-key-file=/opt/kubernetes/ssl/kube-controller-manager-key.pem \\
--cluster-signing-cert-file=/opt/kubernetes/ssl/ca.pem \\
--cluster-signing-key-file=/opt/kubernetes/ssl/ca-key.pem \\
--root-ca-file=/opt/kubernetes/ssl/ca.pem \\
--service-account-private-key-file=/opt/kubernetes/ssl/ca-key.pem \\
--use-service-account-credentials=true \\
--experimental-cluster-signing-duration=87600h0m0s"
EOF

--master: connect to the apiserver via the local insecure port 8080 (not used here; this setup connects through the kubeconfig instead)
--leader-elect: elect a leader automatically when several instances run (HA)
--cluster-signing-cert-file / --cluster-signing-key-file: the CA used to automatically issue certificates for kubelets; must match the apiserver's CA

4.3.3.4 Manage controller-manager with systemd
$cat > /usr/lib/systemd/system/kube-controller-manager.service<< EOF
[Unit]
Description=Kubernetes Controller Manager
Documentation=https://github.com/kubernetes/kubernetes
[Service]
EnvironmentFile=/opt/kubernetes/cfg/kube-controller-manager.conf
ExecStart=/opt/kubernetes/bin/kube-controller-manager \$KUBE_CONTROLLER_MANAGER_OPTS
Restart=on-failure
[Install]
WantedBy=multi-user.target
EOF
4.3.3.5 Start kube-controller-manager
$systemctl daemon-reload && systemctl start kube-controller-manager && systemctl enable kube-controller-manager
4.3.3.6 Check kube-controller-manager status
$kubectl get cs


4.3.4 Deploy kube-scheduler

4.3.4.1 Generate the certificate (hosts: the master node IPs)
$cd /root/TLS/k8s
$cat > kube-scheduler-csr.json<<EOF
{
  "CN": "system:kube-scheduler",
  "hosts": [
  	"127.0.0.1",
  	"192.168.91.132"
  	],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "BeiJing",
      "L": "BeiJing",
      "O": "KUBERNETES",
      "OU": "System"
    }
  ]
}
EOF

# Generate the certificate
$cfssl gencert -ca=ca.pem -ca-key=ca-key.pem  -config=ca-config.json -profile=kubernetes kube-scheduler-csr.json | cfssljson -bare kube-scheduler

$cp ~/TLS/k8s/kube-scheduler*.pem /opt/kubernetes/ssl
4.3.4.2 Generate the kube-scheduler.kubeconfig file
$cd /opt/kubernetes/ssl

# Generate the kube-scheduler.kubeconfig file
$kubectl config set-cluster kubernetes \
--certificate-authority=/opt/kubernetes/ssl/ca.pem \
--embed-certs=true \
--server=https://192.168.91.132:6443 \
--kubeconfig=kube-scheduler.kubeconfig

$kubectl config set-credentials kube-scheduler \
--client-certificate=/opt/kubernetes/ssl/kube-scheduler.pem \
--embed-certs=true \
--client-key=/opt/kubernetes/ssl/kube-scheduler-key.pem  \
--kubeconfig=kube-scheduler.kubeconfig

$kubectl config set-context default \
--cluster=kubernetes \
--user=kube-scheduler \
--kubeconfig=kube-scheduler.kubeconfig

$kubectl config use-context default --kubeconfig=kube-scheduler.kubeconfig
4.3.4.3 Create the parameter configuration file
$cat > /opt/kubernetes/cfg/kube-scheduler.conf<< EOF
KUBE_SCHEDULER_OPTS="--logtostderr=false \\
--v=2 \\
--kubeconfig=/opt/kubernetes/ssl/kube-scheduler.kubeconfig \\
--log-dir=/opt/kubernetes/logs \\
--leader-elect=true \\
--tls-cert-file=/opt/kubernetes/ssl/kube-scheduler.pem \\
--tls-private-key-file=/opt/kubernetes/ssl/kube-scheduler-key.pem \\
--bind-address=127.0.0.1"
EOF

--master: connect to the apiserver via the local insecure port 8080 (again unused here; the kubeconfig is used)
--leader-elect: elect a leader automatically when several instances run (HA)

4.3.4.4 Manage the scheduler with systemd
$cat > /usr/lib/systemd/system/kube-scheduler.service<< EOF
[Unit]
Description=Kubernetes Scheduler
Documentation=https://github.com/kubernetes/kubernetes
[Service]
EnvironmentFile=/opt/kubernetes/cfg/kube-scheduler.conf
ExecStart=/opt/kubernetes/bin/kube-scheduler \$KUBE_SCHEDULER_OPTS
Restart=on-failure
[Install]
WantedBy=multi-user.target
EOF
4.3.4.5 Start kube-scheduler and enable it at boot
$systemctl daemon-reload && systemctl start kube-scheduler && systemctl enable kube-scheduler
4.3.4.6 Check kube-scheduler status
$kubectl get cs


4.4 Deploy the Worker Nodes

Here the master node also serves as a worker node.

4.4.1 Create working directories and copy the binaries

Create the working directories on all worker nodes:

$mkdir -p /opt/kubernetes/{bin,cfg,ssl,logs}

Copy from the master node:

$cd /usr/local/k8s/kubernetes/server/bin
$cp kubelet /opt/kubernetes/bin # local copy
$scp -r kubelet  root@192.168.91.133:/opt/kubernetes/bin
$scp -r kubelet  root@192.168.91.134:/opt/kubernetes/bin

4.4.2 Deploy kubelet

4.4.2.1 Generate the kubelet-bootstrap.kubeconfig file
$cd /opt/kubernetes/ssl

# Allow the bootstrap user to request certificates
$kubectl create clusterrolebinding kubelet-bootstrap --clusterrole=system:node-bootstrapper --user=kubelet-bootstrap

# Generate the kubelet-bootstrap.kubeconfig file
$export KUBE_APISERVER="https://192.168.91.132:6443" # apiserver IP:PORT
$export TOKEN="31063b40da865ec7682667dcaa46ce7e" # must match the token in token.csv
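
Instead of pasting the token by hand, you can read it straight from token.csv (a hedged convenience; the token is the first comma-separated field):

$export TOKEN=$(cut -d, -f1 /opt/kubernetes/cfg/token.csv)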

$kubectl config set-cluster kubernetes \
--certificate-authority=/opt/kubernetes/ssl/ca.pem \
--embed-certs=true \
--server=${KUBE_APISERVER} \
--kubeconfig=kubelet-bootstrap.kubeconfig


$kubectl config set-credentials kubelet-bootstrap \
--token=${TOKEN} \
--kubeconfig=kubelet-bootstrap.kubeconfig

$kubectl config set-context default \
--cluster=kubernetes \
--user=kubelet-bootstrap \
--kubeconfig=kubelet-bootstrap.kubeconfig

$kubectl config use-context default --kubeconfig=kubelet-bootstrap.kubeconfig
4.4.2.2 Create the configuration file
$cat > /opt/kubernetes/cfg/kubelet.conf<< EOF
KUBELET_OPTS="--logtostderr=false \\
--v=2 \\
--log-dir=/opt/kubernetes/logs \\
--hostname-override=bin-master \\
--network-plugin=cni \\
--kubeconfig=/opt/kubernetes/ssl/kubelet.kubeconfig \\
--bootstrap-kubeconfig=/opt/kubernetes/ssl/kubelet-bootstrap.kubeconfig \\
--config=/opt/kubernetes/cfg/kubelet-config.yml \\
--cert-dir=/opt/kubernetes/ssl \\
--pod-infra-container-image=registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.2"
EOF

--hostname-override: the node's display name, unique within the cluster
--network-plugin: enable CNI
--kubeconfig: an (initially empty) path; the file is generated automatically and later used to connect to the apiserver
--bootstrap-kubeconfig: used on first start to request a certificate from the apiserver
--config: parameter configuration file
--cert-dir: directory where kubelet certificates are generated
--pod-infra-container-image: image of the container that manages the Pod network

4.4.2.3 Parameter configuration file
$cat > /opt/kubernetes/cfg/kubelet-config.yml<< EOF
kind: KubeletConfiguration
apiVersion: kubelet.config.k8s.io/v1beta1
address: 0.0.0.0
port: 10250
readOnlyPort: 10255
cgroupDriver: systemd
clusterDNS: ["10.0.0.2"]
clusterDomain: cluster.local
failSwapOn: false
authentication:
  anonymous:
   enabled: false
  webhook:
   cacheTTL: 2m0s
   enabled: true
  x509:
   clientCAFile: /opt/kubernetes/ssl/ca.pem
authorization:
  mode: Webhook
  webhook:
   cacheAuthorizedTTL: 5m0s
   cacheUnauthorizedTTL: 30s
evictionHard:
  imagefs.available: 15%
  memory.available: 100Mi
  nodefs.available: 10%
  nodefs.inodesFree: 5%
maxOpenFiles: 1000000
maxPods: 110
EOF
4.4.2.4 Manage kubelet with systemd
$cat > /usr/lib/systemd/system/kubelet.service<< EOF
[Unit]
Description=Kubernetes Kubelet
After=docker.service
[Service]
EnvironmentFile=/opt/kubernetes/cfg/kubelet.conf
ExecStart=/opt/kubernetes/bin/kubelet \$KUBELET_OPTS
Restart=on-failure
LimitNOFILE=65536
[Install]
WantedBy=multi-user.target
EOF
4.4.2.5 Start kubelet
$systemctl daemon-reload && systemctl start kubelet && systemctl enable kubelet
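
The steps above were performed on the master. The worker nodes presumably need the same kubelet.conf, kubelet-config.yml, bootstrap kubeconfig, CA certificate and kubelet.service, with --hostname-override adjusted per node. A hedged sketch for bin-node-1 (repeat for bin-node-2 with 192.168.91.134):

$scp /opt/kubernetes/cfg/{kubelet.conf,kubelet-config.yml} root@192.168.91.133:/opt/kubernetes/cfg/
$scp /opt/kubernetes/ssl/{kubelet-bootstrap.kubeconfig,ca.pem} root@192.168.91.133:/opt/kubernetes/ssl/
$scp /usr/lib/systemd/system/kubelet.service root@192.168.91.133:/usr/lib/systemd/system/
# Fix the node name, then start kubelet on the worker
$ssh root@192.168.91.133 "sed -i 's/bin-master/bin-node-1/' /opt/kubernetes/cfg/kubelet.conf && systemctl daemon-reload && systemctl start kubelet && systemctl enable kubelet"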

4.4.3 Approve the kubelet certificate requests

After a node's kubelet starts and submits its request, the request must be approved before the node can join the cluster.

# View the kubelet certificate requests on the master node
$kubectl get csr


# Approve a request
# kubectl certificate approve <NAME>
$kubectl certificate approve node-csr-7N9-zKgInN2KCzExyhO2oqu0xCNjSl0_WtbEHZoOb9Q
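
If several nodes are waiting, a hedged convenience one-liner approves everything currently pending:

$kubectl get csr -o name | xargs -r kubectl certificate approve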

View the nodes:

$kubectl get nodes


Because the network plugin has not been deployed yet, the nodes will show NotReady.

4.4.4 Deploy kube-proxy

4.4.4.1 Binaries
# Copy the files
$cp /usr/local/k8s/kubernetes/server/bin/kube-proxy /opt/kubernetes/bin

$scp -r /usr/local/k8s/kubernetes/server/bin/kube-proxy root@192.168.91.133:/opt/kubernetes/bin

$scp -r /usr/local/k8s/kubernetes/server/bin/kube-proxy root@192.168.91.134:/opt/kubernetes/bin

4.4.4.2 Generate the certificate
$cd /root/TLS/k8s
$cat > kube-proxy-csr.json<<EOF
{
  "CN": "system:kube-proxy",
  "hosts": [],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "BeiJing",
      "L": "BeiJing",
      "O": "KUBERNETES",
      "OU": "System"
    }
  ]
}
EOF

# Generate the certificate
$cfssl gencert -ca=ca.pem -ca-key=ca-key.pem  -config=ca-config.json -profile=kubernetes kube-proxy-csr.json | cfssljson -bare kube-proxy

$cp ~/TLS/k8s/kube-proxy*.pem /opt/kubernetes/ssl
4.4.4.3 Generate the kube-proxy.kubeconfig file
$cd /opt/kubernetes/ssl

# Generate the kube-proxy.kubeconfig file
$kubectl config set-cluster kubernetes \
--certificate-authority=/opt/kubernetes/ssl/ca.pem \
--embed-certs=true \
--server=https://192.168.91.132:6443 \
--kubeconfig=kube-proxy.kubeconfig

$kubectl config set-credentials kube-proxy \
--client-certificate=/opt/kubernetes/ssl/kube-proxy.pem \
--embed-certs=true \
--client-key=/opt/kubernetes/ssl/kube-proxy-key.pem  \
--kubeconfig=kube-proxy.kubeconfig

$kubectl config set-context default \
--cluster=kubernetes \
--user=kube-proxy \
--kubeconfig=kube-proxy.kubeconfig

$kubectl config use-context default --kubeconfig=kube-proxy.kubeconfig
4.4.4.4 Create the configuration file
$cat > /opt/kubernetes/cfg/kube-proxy.conf<< EOF
KUBE_PROXY_OPTS="--logtostderr=false \\
--v=2 \\
--log-dir=/opt/kubernetes/logs \\
--config=/opt/kubernetes/cfg/kube-proxy-config.yml \\
--hostname-override=bin-master"
EOF
4.4.4.5 Parameter configuration file
$cat > /opt/kubernetes/cfg/kube-proxy-config.yml<< EOF
kind: KubeProxyConfiguration
apiVersion: kubeproxy.config.k8s.io/v1alpha1
bindAddress: 0.0.0.0
healthzBindAddress: 0.0.0.0:10256
metricsBindAddress: 0.0.0.0:10249
clientConnection:
  kubeconfig: /opt/kubernetes/ssl/kube-proxy.kubeconfig
clusterCIDR: 10.244.0.0/16
EOF
4.4.4.6 Manage kube-proxy with systemd
$cat > /usr/lib/systemd/system/kube-proxy.service<< EOF
[Unit]
Description=Kubernetes Proxy
After=network.target
[Service]
EnvironmentFile=/opt/kubernetes/cfg/kube-proxy.conf
ExecStart=/opt/kubernetes/bin/kube-proxy \$KUBE_PROXY_OPTS
Restart=on-failure
LimitNOFILE=65536
[Install]
WantedBy=multi-user.target
EOF
4.4.4.7 Start kube-proxy and enable it at boot
$systemctl daemon-reload && systemctl start kube-proxy && systemctl enable kube-proxy
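
As with kubelet, the kube-proxy config presumably has to reach each worker, with --hostname-override adjusted per node. A hedged sketch for bin-node-1 (repeat for bin-node-2):

$scp /opt/kubernetes/cfg/{kube-proxy.conf,kube-proxy-config.yml} root@192.168.91.133:/opt/kubernetes/cfg/
$scp /opt/kubernetes/ssl/kube-proxy.kubeconfig root@192.168.91.133:/opt/kubernetes/ssl/
$scp /usr/lib/systemd/system/kube-proxy.service root@192.168.91.133:/usr/lib/systemd/system/
$ssh root@192.168.91.133 "sed -i 's/bin-master/bin-node-1/' /opt/kubernetes/cfg/kube-proxy.conf && systemctl daemon-reload && systemctl start kube-proxy && systemctl enable kube-proxy"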

4.4.5 Deploy the network plugin (on the master node)

4.4.5.1 Calico

1. Download the yaml file

$mkdir /opt/plugins/calico -p
$cd /opt/plugins/calico
# Download the manifest
$wget https://docs.projectcalico.org/v3.20/manifests/calico.yaml

# Modify the CALICO_IPV4POOL_CIDR property in calico.yaml
- name: CALICO_IPV4POOL_CIDR
  value: "10.244.0.0/16"
# This must match --cluster-cidr=10.244.0.0/16 in kube-controller-manager.conf
# and clusterCIDR: 10.244.0.0/16 in kube-proxy-config.yml
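
In recent calico.yaml manifests this variable ships commented out with a default of 192.168.0.0/16, so a hedged sed sketch to uncomment and set it might look like this (verify the exact lines in your downloaded manifest first):

$sed -i 's|# - name: CALICO_IPV4POOL_CIDR|- name: CALICO_IPV4POOL_CIDR|' calico.yaml
$sed -i 's|#   value: "192.168.0.0/16"|  value: "10.244.0.0/16"|' calico.yaml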


# Apply the yaml file
$kubectl apply -f calico.yaml


# List the images referenced in the yaml
$ cat calico.yaml |grep image
# If apply/create cannot pull an image, fetch it manually with docker pull

2. Check component status

# Verify
$kubectl get pods -n kube-system

# -w watches updates in real time
$kubectl get pods -n kube-system -w

$kubectl get node


------------END
