Deploying a Kubernetes 1.13.1 Cluster on CentOS from Binaries

Component Versions && Cluster Environment

Component versions:

  • Kubernetes 1.13.1
  • Etcd 3.3.10
  • Flanneld 0.10

Deployment nodes:

IP                Hostname
192.168.20.203    master
192.168.20.202    host2
192.168.20.201    host1

Cluster environment variables:

# Use otherwise unused address ranges for the service and Pod networks
# Service network (Service CIDR): unroutable before deployment, reachable inside the cluster via IP:Port afterwards
SERVICE_CIDR="10.254.0.0/16"

# Pod network (Cluster CIDR): unroutable before deployment, routable afterwards (guaranteed by flanneld)
CLUSTER_CIDR="172.18.0.0/16"

# kubernetes service IP (pre-allocated; usually the first IP in SERVICE_CIDR)
CLUSTER_KUBERNETES_SVC_IP="10.254.0.1"

# cluster DNS service IP (pre-allocated from SERVICE_CIDR)
CLUSTER_DNS_SVC_IP="10.254.0.2"

# etcd prefix for the flanneld network configuration
FLANNEL_ETCD_PREFIX="/kubernetes/network"

Initializing the Environment

1. Disable the firewall and SELinux

systemctl stop firewalld && systemctl disable firewalld
setenforce 0
vi /etc/selinux/config
SELINUX=disabled

2. Disable swap

swapoff -a && sysctl -w vm.swappiness=0
vi /etc/fstab
#UUID=7bff6243-324c-4587-b550-55dc34018ebf swap                    swap    defaults        0 0

3. Set kernel parameters: enable IP forwarding and have iptables process bridged traffic

cat << EOF | tee /etc/sysctl.d/k8s.conf
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
sysctl -p /etc/sysctl.d/k8s.conf
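
Note: on a stock CentOS 7 kernel the net.bridge.bridge-nf-call-* keys only exist once the br_netfilter module is loaded, so if sysctl -p reports "No such file or directory", load the module first (a minimal sketch; the modules-load.d path assumes systemd):

modprobe br_netfilter
echo "br_netfilter" > /etc/modules-load.d/br_netfilter.conf
sysctl -p /etc/sysctl.d/k8s.conf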

4. Create installation directories

mkdir /k8s/etcd/{bin,cfg,ssl} -p
mkdir /k8s/kubernetes/{bin,cfg,ssl} -p

5. Install Docker

yum install -y yum-utils
yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
yum makecache fast
yum -y install docker-ce
systemctl start docker && systemctl enable docker
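
Kubernetes 1.13 was validated against Docker releases up to 18.06, while the command above installs the latest docker-ce. To pin a validated release instead, something along these lines works (the exact package version string is an assumption; list the available versions first):

yum list docker-ce --showduplicates | sort -r
yum -y install docker-ce-18.06.1.ce-3.el7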

6. Set up SSH key authentication

ssh-keygen 
ssh-copy-id 192.168.20.201
ssh-copy-id 192.168.20.202

Creating the CA Certificate and Key

The Kubernetes components secure their communication with TLS certificates. Here we use cfssl, CloudFlare's PKI toolkit, to generate the Certificate Authority (CA) certificate and key files. The CA certificate is self-signed and is used to sign all the other TLS certificates created later.

1. Install CFSSL

wget https://pkg.cfssl.org/R1.2/cfssl_linux-amd64
wget https://pkg.cfssl.org/R1.2/cfssljson_linux-amd64
wget https://pkg.cfssl.org/R1.2/cfssl-certinfo_linux-amd64

chmod +x cfssl_linux-amd64 cfssljson_linux-amd64 cfssl-certinfo_linux-amd64

mv cfssl_linux-amd64 /usr/local/bin/cfssl
mv cfssljson_linux-amd64 /usr/local/bin/cfssljson
mv cfssl-certinfo_linux-amd64 /usr/local/bin/cfssl-certinfo

2. Create the CA

# cat ca-config.json
{
	"signing": {
		"default": {
			"expiry": "87600h"
		},
		"profiles": {
			"kubernetes": {
				"expiry": "87600h",
				"usages": [
					"signing",
					"key encipherment",
					"server auth",
					"client auth"
				]
			}
		}
	}
}	
  • ca-config.json: can define multiple profiles with different expiry times, usage scenarios, and other parameters; a specific profile is selected later when signing certificates;
  • signing: the certificate can be used to sign other certificates; CA=TRUE is set in the generated ca.pem;
  • server auth: a client can use this CA to verify certificates presented by servers;
  • client auth: a server can use this CA to verify certificates presented by clients.

Create the CA certificate signing request:

# cat ca-csr.json
{
	"CN": "kubernetes",
	"key": {
		"algo": "rsa",
		"size": 2048
	},
	"names": [
		{
			"C": "CN",
			"L": "BeiJing",
			"ST": "BeiJing",
			"O": "k8s",
			"OU": "System"
		}
	]
}	

  • CN: Common Name. kube-apiserver extracts this field from the certificate as the requesting user name (User Name); browsers use it to check whether a site is legitimate;
  • O: Organization. kube-apiserver extracts this field as the group (Group) the requesting user belongs to;

Generate the CA certificate and private key:

# cfssl gencert -initca ca-csr.json | cfssljson -bare ca
# ls ca*
ca-config.json  ca.csr  ca-csr.json  ca-key.pem  ca.pem
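
Optionally, inspect the generated CA to confirm the subject and the 10-year (87600h) validity took effect:

# cfssl-certinfo -cert ca.pem
# openssl x509 -in ca.pem -noout -subject -dates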

3. Distribute the certificates:
Copy the generated CA certificate, key, and config file to /k8s/kubernetes/ssl/ on every machine:

cp ca* /k8s/kubernetes/ssl/
scp /k8s/kubernetes/ssl/* 192.168.20.202:/k8s/kubernetes/ssl/
scp /k8s/kubernetes/ssl/* 192.168.20.201:/k8s/kubernetes/ssl/

Deploying a Highly Available etcd Cluster

Kubernetes stores all of its data in etcd. Here we deploy a 3-node etcd cluster, reusing the 3 Kubernetes nodes and naming them etcd01, etcd02, and etcd03:

  • 192.168.20.203 etcd01
  • 192.168.20.202 etcd02
  • 192.168.20.201 etcd03

1. Unpack the installation files
Download: https://github.com/etcd-io/etcd/releases

tar -xvf etcd-v3.3.10-linux-amd64.tar.gz
cd etcd-v3.3.10-linux-amd64/
cp etcd etcdctl /k8s/etcd/bin/

vim /k8s/etcd/cfg/etcd   
#[Member]
ETCD_NAME="etcd01"
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="https://192.168.20.203:2380"
ETCD_LISTEN_CLIENT_URLS="https://192.168.20.203:2379"

#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://192.168.20.203:2380"
ETCD_ADVERTISE_CLIENT_URLS="https://192.168.20.203:2379"
ETCD_INITIAL_CLUSTER="etcd01=https://192.168.20.203:2380,etcd02=https://192.168.20.202:2380,etcd03=https://192.168.20.201:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"

2. Create the TLS key and certificate
To secure communication, traffic between clients (such as etcdctl) and the etcd cluster, as well as between etcd members, is encrypted with TLS.
Create the etcd certificate signing request:

cat > etcd-csr.json <<EOF
{
  "CN": "etcd",
  "hosts": [
	"192.168.20.203",
	"192.168.20.202",
	"192.168.20.201"
  ],
  "key": {
	"algo": "rsa",
	"size": 2048
  },
  "names": [
	{
	  "C": "CN",
	  "ST": "BeiJing",
	  "L": "BeiJing",
	  "O": "k8s",
	  "OU": "System"
	}
  ]
}
EOF
  • the hosts field lists the etcd node IPs authorized to use this certificate

Generate the etcd certificate and private key:

# cfssl gencert -ca=/k8s/kubernetes/ssl/ca.pem \
  -ca-key=/k8s/kubernetes/ssl/ca-key.pem \
  -config=/k8s/kubernetes/ssl/ca-config.json \
  -profile=kubernetes etcd-csr.json | cfssljson -bare etcd

# ls etcd*
etcd.csr  etcd-csr.json  etcd-key.pem  etcd.pem

# cp etcd* /k8s/etcd/ssl/

3. Create the etcd systemd unit file

vim /lib/systemd/system/etcd.service
[Unit]
Description=Etcd Server
After=network.target
After=network-online.target
Wants=network-online.target

[Service]
Type=notify
EnvironmentFile=/k8s/etcd/cfg/etcd
ExecStart=/k8s/etcd/bin/etcd \
--name=${ETCD_NAME} \
--data-dir=${ETCD_DATA_DIR} \
--listen-peer-urls=${ETCD_LISTEN_PEER_URLS} \
--listen-client-urls=${ETCD_LISTEN_CLIENT_URLS},http://127.0.0.1:2379 \
--advertise-client-urls=${ETCD_ADVERTISE_CLIENT_URLS} \
--initial-advertise-peer-urls=${ETCD_INITIAL_ADVERTISE_PEER_URLS} \
--initial-cluster=${ETCD_INITIAL_CLUSTER} \
--initial-cluster-token=${ETCD_INITIAL_CLUSTER_TOKEN} \
--initial-cluster-state=new \
--cert-file=/k8s/etcd/ssl/etcd.pem \
--key-file=/k8s/etcd/ssl/etcd-key.pem \
--peer-cert-file=/k8s/etcd/ssl/etcd.pem \
--peer-key-file=/k8s/etcd/ssl/etcd-key.pem \
--trusted-ca-file=/k8s/kubernetes/ssl/ca.pem \
--peer-trusted-ca-file=/k8s/kubernetes/ssl/ca.pem
Restart=on-failure
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
  • To secure communication we specify etcd's own key pair (cert-file and key-file), the peer key pair and CA certificate for member-to-member traffic (peer-cert-file, peer-key-file, peer-trusted-ca-file), and the CA certificate used to verify clients (trusted-ca-file);

4. Copy the unit file and configuration to node 1 and node 2:

cd /k8s/ 
scp -r etcd/ 192.168.20.201:/k8s/
scp -r etcd/ 192.168.20.202:/k8s/

scp /lib/systemd/system/etcd.service 192.168.20.201:/lib/systemd/system/etcd.service
scp /lib/systemd/system/etcd.service 192.168.20.202:/lib/systemd/system/etcd.service

Adjust the cfg/etcd file on each node:

[root@host1 ~]# cat /k8s/etcd/cfg/etcd 
#[Member]
ETCD_NAME="etcd03"
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="https://192.168.20.201:2380"
ETCD_LISTEN_CLIENT_URLS="https://192.168.20.201:2379"

#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://192.168.20.201:2380"
ETCD_ADVERTISE_CLIENT_URLS="https://192.168.20.201:2379"
ETCD_INITIAL_CLUSTER="etcd01=https://192.168.20.203:2380,etcd02=https://192.168.20.202:2380,etcd03=https://192.168.20.201:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"

[root@host2 ~]# cat /k8s/etcd/cfg/etcd 
#[Member]
ETCD_NAME="etcd02"
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="https://192.168.20.202:2380"
ETCD_LISTEN_CLIENT_URLS="https://192.168.20.202:2379"

#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://192.168.20.202:2380"
ETCD_ADVERTISE_CLIENT_URLS="https://192.168.20.202:2379"
ETCD_INITIAL_CLUSTER="etcd01=https://192.168.20.203:2380,etcd02=https://192.168.20.202:2380,etcd03=https://192.168.20.201:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"

5. Start the etcd service (on all three nodes; the first node will wait until its peers join)

systemctl daemon-reload
systemctl enable etcd
systemctl start etcd

6. Verify the service
After the etcd cluster is deployed, run the following on any etcd node:

# /k8s/etcd/bin/etcdctl \
	--ca-file=/k8s/kubernetes/ssl/ca.pem \
	--cert-file=/k8s/etcd/ssl/etcd.pem \
	--key-file=/k8s/etcd/ssl/etcd-key.pem \
	--endpoints="https://192.168.20.203:2379,https://192.168.20.202:2379,https://192.168.20.201:2379" \
	cluster-health

The output should look like this:

member 2e4d105025f61a1b is healthy: got healthy result from https://192.168.20.202:2379
member 8ad9da8a203d86d8 is healthy: got healthy result from https://192.168.20.203:2379
member c1b34b5ace31a23f is healthy: got healthy result from https://192.168.20.201:2379
cluster is healthy

All three etcd members report healthy, so the cluster is functioning normally.
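
You can also list the members to see which one is currently the leader; this reuses the same TLS flags as the health check above:

# /k8s/etcd/bin/etcdctl \
	--ca-file=/k8s/kubernetes/ssl/ca.pem \
	--cert-file=/k8s/etcd/ssl/etcd.pem \
	--key-file=/k8s/etcd/ssl/etcd-key.pem \
	--endpoints="https://192.168.20.203:2379,https://192.168.20.202:2379,https://192.168.20.201:2379" \
	member list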

Deploying the Flannel Network

Kubernetes requires that all nodes in the cluster can reach each other over the Pod network. Below we use Flannel to build that interconnected Pod network across all nodes.
1. Create the TLS key and certificate
The etcd cluster enforces mutual TLS, so flanneld needs the CA certificate and a key pair to talk to etcd.
Create the flanneld certificate signing request:

cat > flanneld-csr.json <<EOF
{
  "CN": "flanneld",
  "hosts": [],
  "key": {
	"algo": "rsa",
	"size": 2048
  },
  "names": [
	{
	  "C": "CN",
	  "ST": "BeiJing",
	  "L": "BeiJing",
	  "O": "k8s",
	  "OU": "System"
	}
  ]
}
EOF

Generate the flanneld certificate and private key:

# cfssl gencert -ca=/k8s/kubernetes/ssl/ca.pem \
	  -ca-key=/k8s/kubernetes/ssl/ca-key.pem \
	  -config=/k8s/kubernetes/ssl/ca-config.json \
	  -profile=kubernetes flanneld-csr.json | cfssljson -bare flanneld	

# ls flanneld*
flanneld.csr  flanneld-csr.json  flanneld-key.pem  flanneld.pem	

# mkdir -p /k8s/flanneld/ssl
# cp flanneld*.pem /k8s/flanneld/ssl/

2. Write the cluster Pod network configuration into etcd
This step is only needed the first time the Flannel network is deployed; it does not need to be repeated when deploying flanneld on additional nodes.

# /k8s/etcd/bin/etcdctl \
	--endpoints="https://192.168.20.203:2379,https://192.168.20.202:2379,https://192.168.20.201:2379" \
	--ca-file=/k8s/kubernetes/ssl/ca.pem \
	--cert-file=/k8s/flanneld/ssl/flanneld.pem \
	--key-file=/k8s/flanneld/ssl/flanneld-key.pem \
	set /kubernetes/network/config  '{ "Network": "172.18.0.0/16", "Backend": {"Type": "vxlan"}}'

Output:

{ "Network": "172.18.0.0/16", "Backend": {"Type": "vxlan"}}

The Pod network written here (${CLUSTER_CIDR}, 172.18.0.0/16) must match the --cluster-cidr option of kube-controller-manager;

3. Install and configure flanneld
Download: https://github.com/coreos/flannel/releases

tar xf flannel-v0.10.0-linux-amd64.tar.gz
mv flanneld mk-docker-opts.sh /k8s/kubernetes/bin/

Create the flanneld systemd unit file:

# cat /lib/systemd/system/flanneld.service    
[Unit]
Description=Flanneld overlay address etcd agent
After=network.target
After=network-online.target
Wants=network-online.target
After=etcd.service
Before=docker.service

[Service]
Type=notify
#EnvironmentFile=/k8s/kubernetes/cfg/flanneld
#ExecStart=/k8s/kubernetes/bin/flanneld --ip-masq $FLANNEL_OPTIONS
ExecStart=/k8s/kubernetes/bin/flanneld --etcd-cafile=/k8s/kubernetes/ssl/ca.pem --etcd-certfile=/k8s/flanneld/ssl/flanneld.pem --etcd-keyfile=/k8s/flanneld/ssl/flanneld-key.pem --etcd-endpoints=https://192.168.20.203:2379,https://192.168.20.202:2379,https://192.168.20.201:2379 --etcd-prefix=/kubernetes/network
ExecStartPost=/k8s/kubernetes/bin/mk-docker-opts.sh -k DOCKER_NETWORK_OPTIONS -d /run/flannel/docker
Restart=on-failure

[Install]
WantedBy=multi-user.target
RequiredBy=docker.service
  • the mk-docker-opts.sh script writes the Pod subnet leased to flanneld into /run/flannel/docker; Docker reads this file at startup and uses its values to configure the docker0 bridge (see the sketch below)
  • flanneld talks to other nodes over the interface of the system default route; on machines with multiple interfaces (e.g. private and public), use the --iface option to pick the interface to use (the unit file above does not set it)
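
For reference, the file written by mk-docker-opts.sh looks roughly like this; the subnet and MTU values are illustrative and depend on the lease flanneld actually obtains:

# cat /run/flannel/docker
DOCKER_OPT_BIP="--bip=172.18.100.1/24"
DOCKER_OPT_IPMASQ="--ip-masq=true"
DOCKER_OPT_MTU="--mtu=1450"
DOCKER_NETWORK_OPTIONS=" --bip=172.18.100.1/24 --ip-masq=true --mtu=1450"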

Configure Docker to start on the Flannel subnet

# cat /lib/systemd/system/docker.service    
[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
After=network-online.target firewalld.service
Wants=network-online.target

[Service]
Type=notify
EnvironmentFile=/run/flannel/docker
ExecStart=/usr/bin/dockerd $DOCKER_NETWORK_OPTIONS
ExecReload=/bin/kill -s HUP $MAINPID
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
TimeoutStartSec=0
Delegate=yes
KillMode=process
Restart=on-failure
StartLimitBurst=3
StartLimitInterval=60s

[Install]
WantedBy=multi-user.target

4. Start flanneld

systemctl daemon-reload
systemctl enable flanneld
systemctl start flanneld
systemctl restart docker

5. Check the flanneld service and the Pod subnet information assigned to each flanneld instance

ifconfig flannel.1

# View the cluster Pod network (/16)
# /k8s/etcd/bin/etcdctl \
	--endpoints="https://192.168.20.203:2379,https://192.168.20.202:2379,https://192.168.20.201:2379" \
	--ca-file=/k8s/kubernetes/ssl/ca.pem \
	--cert-file=/k8s/flanneld/ssl/flanneld.pem \
	--key-file=/k8s/flanneld/ssl/flanneld-key.pem \
	get /kubernetes/network/config
{ "Network": "172.18.0.0/16", "Backend": {"Type": "vxlan"}}

# List the allocated Pod subnets (/24)
# /k8s/etcd/bin/etcdctl \
	--endpoints="https://192.168.20.203:2379,https://192.168.20.202:2379,https://192.168.20.201:2379" \
	--ca-file=/k8s/kubernetes/ssl/ca.pem \
	--cert-file=/k8s/flanneld/ssl/flanneld.pem \
	--key-file=/k8s/flanneld/ssl/flanneld-key.pem \
	ls /kubernetes/network/subnets
/kubernetes/network/subnets/172.18.100.0-24

# View the listening IP and network parameters of the flanneld process for a given Pod subnet
# /k8s/etcd/bin/etcdctl \
	--endpoints="https://192.168.20.203:2379,https://192.168.20.202:2379,https://192.168.20.201:2379" \
	--ca-file=/k8s/kubernetes/ssl/ca.pem \
	--cert-file=/k8s/flanneld/ssl/flanneld.pem \
	--key-file=/k8s/flanneld/ssl/flanneld-key.pem \
	get /kubernetes/network/subnets/172.18.100.0-24
{"PublicIP":"192.168.20.203","BackendType":"vxlan","BackendData":{"VtepMAC":"4e:9b:aa:9a:ce:ac"}}

6. Copy the configuration files to the other nodes
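
The /k8s/flanneld/ssl directory so far exists only on the master, so create it on the other nodes before copying (this assumes the root SSH access configured in step 6 of the environment setup):

ssh 192.168.20.201 "mkdir -p /k8s/flanneld/ssl"
ssh 192.168.20.202 "mkdir -p /k8s/flanneld/ssl"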

scp -r /k8s/kubernetes/bin/* 192.168.20.201:/k8s/kubernetes/bin/
scp -r /k8s/kubernetes/bin/* 192.168.20.202:/k8s/kubernetes/bin/
scp -r /k8s/flanneld/ssl/* 192.168.20.201:/k8s/flanneld/ssl/
scp -r /k8s/flanneld/ssl/* 192.168.20.202:/k8s/flanneld/ssl/
scp /lib/systemd/system/flanneld.service 192.168.20.201:/lib/systemd/system/flanneld.service
scp /lib/systemd/system/flanneld.service 192.168.20.202:/lib/systemd/system/flanneld.service
scp /lib/systemd/system/docker.service 192.168.20.201:/lib/systemd/system/docker.service 
scp /lib/systemd/system/docker.service 192.168.20.202:/lib/systemd/system/docker.service 

7. Verify that the Pod networks on all nodes are interconnected
After flanneld has been deployed on every node, list the allocated Pod subnets:

# /k8s/etcd/bin/etcdctl \
	--endpoints="https://192.168.20.203:2379,https://192.168.20.202:2379,https://192.168.20.201:2379" \
	--ca-file=/k8s/kubernetes/ssl/ca.pem \
	--cert-file=/k8s/flanneld/ssl/flanneld.pem \
	--key-file=/k8s/flanneld/ssl/flanneld-key.pem \
	ls /kubernetes/network/subnets
/kubernetes/network/subnets/172.18.88.0-24
/kubernetes/network/subnets/172.18.85.0-24
/kubernetes/network/subnets/172.18.100.0-24
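
To confirm the subnets really are reachable across nodes, ping another node's docker0 gateway from each host; the gateway is the first IP of each /24 lease listed above, so from 192.168.20.203 for example:

ping -c 3 172.18.88.1
ping -c 3 172.18.85.1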

Deploying the Master Node

The Kubernetes master node runs the following components:

  • kube-apiserver
  • kube-scheduler
  • kube-controller-manager
    kube-scheduler and kube-controller-manager can run in clustered mode: leader election picks one active working process while the other processes block on standby.

Download and unpack the binaries

# wget https://dl.k8s.io/v1.13.1/kubernetes-server-linux-amd64.tar.gz
# tar xf kubernetes-server-linux-amd64.tar.gz

# cd kubernetes/server/bin/
# cp kube-apiserver kube-scheduler kube-controller-manager kubectl /k8s/kubernetes/bin/

Create the kubernetes certificate

Create the kubernetes certificate signing request:

cat > kubernetes-csr.json <<EOF
{
  "CN": "kubernetes",
  "hosts": [
	"127.0.0.1",
	"192.168.20.203",
	"k8s-api.virtual.local",
	"10.254.0.1",
	"kubernetes",
	"kubernetes.default",
	"kubernetes.default.svc",
	"kubernetes.default.svc.cluster",
	"kubernetes.default.svc.cluster.local"
  ],
  "key": {
	"algo": "rsa",
	"size": 2048
  },
  "names": [
	{
	  "C": "CN",
	  "ST": "BeiJing",
	  "L": "BeiJing",
	  "O": "k8s",
	  "OU": "System"
	}
  ]
}
EOF	
  • If the hosts field is non-empty, it must list every IP or domain name authorized to use the certificate, so above we include the IP of the
    master node being deployed as well as the internal domain name fronting the apiserver
  • It must also include the cluster IP that kube-apiserver registers for the kubernetes service (the Service Cluster IP), normally the first IP of the range set by kube-apiserver's --service-cluster-ip-range option, e.g. "10.254.0.1"

Generate the kubernetes certificate and private key:

# cfssl gencert -ca=/k8s/kubernetes/ssl/ca.pem \
  -ca-key=/k8s/kubernetes/ssl/ca-key.pem \
  -config=/k8s/kubernetes/ssl/ca-config.json \
  -profile=kubernetes kubernetes-csr.json | cfssljson -bare kubernetes

# ls kub*
kubernetes.csr  kubernetes-csr.json  kubernetes-key.pem  kubernetes.pem

# cp kubernetes*.pem /k8s/kubernetes/ssl/

Configuring and starting kube-apiserver

1. Create the client token file used by kube-apiserver:
When a kubelet first starts, it sends a TLS bootstrapping request to kube-apiserver, which checks whether the token in the request matches its configured token.csv; if it does, kube-apiserver automatically issues a certificate and key for that kubelet. Each line of token.csv has the form token,user,uid,"group".
The token used for TLS bootstrapping can be generated with head -c 16 /dev/urandom | od -An -t x | tr -d ' '

# head -c 16 /dev/urandom | od -An -t x | tr -d ' '
9d3d0413211c8d92ed1b33a913154ce5

# cat /k8s/kubernetes/cfg/token.csv
9d3d0413211c8d92ed1b33a913154ce5,kubelet-bootstrap,10001,"system:kubelet-bootstrap"	

2. Create the apiserver configuration file

# cat /k8s/kubernetes/cfg/kube-apiserver 
KUBE_APISERVER_OPTS="--logtostderr=true \
--v=4 \
--etcd-servers=https://192.168.20.203:2379,https://192.168.20.202:2379,https://192.168.20.201:2379 \
--bind-address=192.168.20.203 \
--secure-port=6443 \
--advertise-address=192.168.20.203 \
--allow-privileged=true \
--service-cluster-ip-range=10.254.0.0/16 \
--enable-admission-plugins=NamespaceLifecycle,LimitRanger,SecurityContextDeny,ServiceAccount,ResourceQuota,NodeRestriction \
--authorization-mode=RBAC,Node \
--enable-bootstrap-token-auth \
--token-auth-file=/k8s/kubernetes/cfg/token.csv \
--service-node-port-range=30000-50000 \
--tls-cert-file=/k8s/kubernetes/ssl/kubernetes.pem  \
--tls-private-key-file=/k8s/kubernetes/ssl/kubernetes-key.pem \
--client-ca-file=/k8s/kubernetes/ssl/ca.pem \
--service-account-key-file=/k8s/kubernetes/ssl/ca-key.pem \
--etcd-cafile=/k8s/kubernetes/ssl/ca.pem \
--etcd-certfile=/k8s/kubernetes/ssl/kubernetes.pem \
--etcd-keyfile=/k8s/kubernetes/ssl/kubernetes-key.pem"	

3. Create the kube-apiserver systemd unit file

# cat /lib/systemd/system/kube-apiserver.service
[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/kubernetes/kubernetes

[Service]
EnvironmentFile=-/k8s/kubernetes/cfg/kube-apiserver
ExecStart=/k8s/kubernetes/bin/kube-apiserver $KUBE_APISERVER_OPTS
Restart=on-failure

[Install]
WantedBy=multi-user.target	

4. Start the service

systemctl daemon-reload
systemctl enable kube-apiserver
systemctl start kube-apiserver
systemctl status kube-apiserver

Configuring and starting kube-controller-manager

1. Create the kube-controller-manager configuration file

# cat /k8s/kubernetes/cfg/kube-controller-manager
KUBE_CONTROLLER_MANAGER_OPTS="--logtostderr=true \
--v=4 \
--master=127.0.0.1:8080 \
--leader-elect=true \
--address=127.0.0.1 \
--service-cluster-ip-range=10.254.0.0/16 \
--cluster-cidr=172.18.0.0/16 \
--cluster-name=kubernetes \
--cluster-signing-cert-file=/k8s/kubernetes/ssl/ca.pem \
--cluster-signing-key-file=/k8s/kubernetes/ssl/ca-key.pem  \
--root-ca-file=/k8s/kubernetes/ssl/ca.pem \
--service-account-private-key-file=/k8s/kubernetes/ssl/ca-key.pem"	

2. Create the kube-controller-manager systemd unit file

# cat /lib/systemd/system/kube-controller-manager.service 
[Unit]
Description=Kubernetes Controller Manager
Documentation=https://github.com/kubernetes/kubernetes

[Service]
EnvironmentFile=-/k8s/kubernetes/cfg/kube-controller-manager
ExecStart=/k8s/kubernetes/bin/kube-controller-manager $KUBE_CONTROLLER_MANAGER_OPTS
Restart=on-failure

[Install]
WantedBy=multi-user.target	

3. Start the service

systemctl daemon-reload
systemctl enable kube-controller-manager
systemctl restart kube-controller-manager
systemctl status kube-controller-manager

Configuring and starting kube-scheduler

1. Create the kube-scheduler configuration file

# cat /k8s/kubernetes/cfg/kube-scheduler
KUBE_SCHEDULER_OPTS="--logtostderr=true --v=4 --master=127.0.0.1:8080 --leader-elect=true"

2. Create the kube-scheduler systemd unit file

# cat /lib/systemd/system/kube-scheduler.service
[Unit]
Description=Kubernetes Scheduler
Documentation=https://github.com/kubernetes/kubernetes

[Service]
EnvironmentFile=-/k8s/kubernetes/cfg/kube-scheduler
ExecStart=/k8s/kubernetes/bin/kube-scheduler $KUBE_SCHEDULER_OPTS
Restart=on-failure

[Install]
WantedBy=multi-user.target	

3. Start the service

systemctl daemon-reload
systemctl enable kube-scheduler.service 
systemctl restart kube-scheduler.service
systemctl status kube-scheduler.service

Verifying the Master Node

Add the binaries to the PATH variable:

# echo "export PATH=$PATH:/k8s/kubernetes/bin/" >>/etc/profile
# source /etc/profile
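
Optionally, enable kubectl bash completion as well (requires the bash-completion package):

yum install -y bash-completion
source <(kubectl completion bash)
echo "source <(kubectl completion bash)" >> ~/.bashrc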

Verify:

# kubectl get cs 
NAME                 STATUS    MESSAGE             ERROR
controller-manager   Healthy   ok                  
scheduler            Healthy   ok                  
etcd-1               Healthy   {"health":"true"}   
etcd-2               Healthy   {"health":"true"}   
etcd-0               Healthy   {"health":"true"} 	

Deploying the Worker Nodes

Kubernetes worker (Node) nodes run the following components:

  • kubelet
  • kube-proxy

Installing and configuring kubelet

kubelet registers the node with kube-apiserver automatically at startup, and its built-in cAdvisor collects and reports the node's resource usage;
kubelet runs on the worker nodes: it receives requests from kube-apiserver, manages Pod containers, and executes interactive commands such as exec, run, and logs. This step is therefore performed on every worker node; if you also want the master to act as a worker, you can of course install it on the master as well.

When kubelet starts, it sends a TLS bootstrapping request to kube-apiserver. The kubelet-bootstrap user from the bootstrap token file must first be bound to the system:node-bootstrapper role so that kubelet has permission to create certificate signing requests (certificatesigningrequests):

# kubectl create clusterrolebinding kubelet-bootstrap \
   --clusterrole=system:node-bootstrapper \
   --user=kubelet-bootstrap	

Copy the kubelet and kube-proxy binaries to the nodes:

cp kubelet kube-proxy /k8s/kubernetes/bin/
scp kubelet kube-proxy 192.168.20.201:/k8s/kubernetes/bin/
scp kubelet kube-proxy 192.168.20.202:/k8s/kubernetes/bin/

Create the kubelet bootstrap kubeconfig file

# cat environment.sh 
BOOTSTRAP_TOKEN=9d3d0413211c8d92ed1b33a913154ce5
KUBE_APISERVER="https://192.168.20.203:6443"	

# source environment.sh

# Set cluster parameters
kubectl config set-cluster kubernetes \
  --certificate-authority=/k8s/kubernetes/ssl/ca.pem \
  --embed-certs=true \
  --server=${KUBE_APISERVER} \
  --kubeconfig=bootstrap.kubeconfig

# Set client authentication parameters
kubectl config set-credentials kubelet-bootstrap \
  --token=${BOOTSTRAP_TOKEN} \
  --kubeconfig=bootstrap.kubeconfig

# Set context parameters
kubectl config set-context default \
  --cluster=kubernetes \
  --user=kubelet-bootstrap \
  --kubeconfig=bootstrap.kubeconfig

# Set the default context
kubectl config use-context default --kubeconfig=bootstrap.kubeconfig	
  • with --embed-certs set to true, the certificate-authority certificate is embedded into the generated bootstrap.kubeconfig file;
  • no key or certificate is specified when setting the kubelet client authentication parameters; they are issued later by kube-apiserver (a quick sanity check follows below);
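
As an optional sanity check (not required by the flow), confirm the generated file contains the embedded CA, the token, and the default context:

kubectl config view --kubeconfig=bootstrap.kubeconfig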

Copy the bootstrap kubeconfig file to all nodes:

cp bootstrap.kubeconfig /k8s/kubernetes/cfg/
scp bootstrap.kubeconfig 192.168.20.201:/k8s/kubernetes/cfg/
scp bootstrap.kubeconfig 192.168.20.202:/k8s/kubernetes/cfg/	

Create the kubelet parameter configuration file and copy it to all nodes
Create the kubelet parameter configuration template:

vim /k8s/kubernetes/cfg/kubelet.config
kind: KubeletConfiguration
apiVersion: kubelet.config.k8s.io/v1beta1
address: 192.168.20.203
port: 10250
readOnlyPort: 10255
cgroupDriver: cgroupfs
clusterDNS: ["10.254.0.2"]
clusterDomain: cluster.local
failSwapOn: false
authentication:
  anonymous:
	enabled: true

Create the kubelet configuration file

vim /k8s/kubernetes/cfg/kubelet

KUBELET_OPTS="--logtostderr=true \
--v=4 \
--hostname-override=192.168.20.203 \
--kubeconfig=/k8s/kubernetes/cfg/kubelet.kubeconfig \
--bootstrap-kubeconfig=/k8s/kubernetes/cfg/bootstrap.kubeconfig \
--config=/k8s/kubernetes/cfg/kubelet.config \
--cert-dir=/k8s/kubernetes/ssl \
--pod-infra-container-image=registry.cn-hangzhou.aliyuncs.com/google-containers/pause-amd64:3.0"

Create the kubelet systemd unit file

vim /usr/lib/systemd/system/kubelet.service 

[Unit]
Description=Kubernetes Kubelet
After=docker.service
Requires=docker.service

[Service]
EnvironmentFile=/k8s/kubernetes/cfg/kubelet
ExecStart=/k8s/kubernetes/bin/kubelet $KUBELET_OPTS
Restart=on-failure
KillMode=process

[Install]
WantedBy=multi-user.target	

Copy the files:

scp /k8s/kubernetes/cfg/kubelet* 192.168.20.201:/k8s/kubernetes/cfg/
scp /k8s/kubernetes/cfg/kubelet* 192.168.20.202:/k8s/kubernetes/cfg/
scp /lib/systemd/system/kubelet.service 192.168.20.201:/lib/systemd/system/kubelet.service 
scp /lib/systemd/system/kubelet.service 192.168.20.202:/lib/systemd/system/kubelet.service 

On the other nodes, adjust the address and hostname-override values accordingly; a sketch follows.
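
A minimal sketch for 192.168.20.201, for example (a hypothetical one-liner; repeat with the matching IP on each node):

ssh 192.168.20.201 "sed -i 's/192.168.20.203/192.168.20.201/g' \
	/k8s/kubernetes/cfg/kubelet /k8s/kubernetes/cfg/kubelet.config"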

Start kubelet

systemctl daemon-reload
systemctl enable kubelet
systemctl restart kubelet
systemctl status kubelet

Approve the kubelet TLS certificate requests
When a kubelet first starts, it sends a certificate signing request to kube-apiserver; only after the request is approved does Kubernetes add the Node to the cluster.
View the pending CSR requests:

# kubectl get csr  
NAME                                                   AGE     REQUESTOR           CONDITION
node-csr-qvmCIp_hLDYhzVqHc3ZCOiN0VmE7wpeIR96ERSLEp6I   2m37s   kubelet-bootstrap   Pending

# kubectl get nodes
No resources found.	

Approve the CSR request:

# kubectl certificate approve node-csr-qvmCIp_hLDYhzVqHc3ZCOiN0VmE7wpeIR96ERSLEp6I
certificatesigningrequest.certificates.k8s.io/node-csr-qvmCIp_hLDYhzVqHc3ZCOiN0VmE7wpeIR96ERSLEp6I approved
 
# kubectl get csr
NAME                                                   AGE    REQUESTOR           CONDITION
node-csr-qvmCIp_hLDYhzVqHc3ZCOiN0VmE7wpeIR96ERSLEp6I   4m9s   kubelet-bootstrap   Approved,Issued
# kubectl get nodes
NAME             STATUS   ROLES    AGE   VERSION
192.168.20.203   Ready    <none>   13s   v1.13.1	

After the other two nodes start, approve their CSR requests as well:

# kubectl get csr  
NAME                                                   AGE     REQUESTOR           CONDITION
node-csr-dSfuVez-9sY1oN2sxtzzCpv3x_cIHx5OpKbsKyxEqPo   3m22s   kubelet-bootstrap   Pending
node-csr-qvmCIp_hLDYhzVqHc3ZCOiN0VmE7wpeIR96ERSLEp6I   12m     kubelet-bootstrap   Approved,Issued
node-csr-s8N57qF1-1kzZi8ECVYBOvbX1Hdc7CA5oW0oVsbJa_U   3m35s   kubelet-bootstrap   Pending
 
# kubectl certificate approve node-csr-dSfuVez-9sY1oN2sxtzzCpv3x_cIHx5OpKbsKyxEqPo
certificatesigningrequest.certificates.k8s.io/node-csr-dSfuVez-9sY1oN2sxtzzCpv3x_cIHx5OpKbsKyxEqPo approved

# kubectl certificate approve node-csr-s8N57qF1-1kzZi8ECVYBOvbX1Hdc7CA5oW0oVsbJa_U
certificatesigningrequest.certificates.k8s.io/node-csr-s8N57qF1-1kzZi8ECVYBOvbX1Hdc7CA5oW0oVsbJa_U approved

# kubectl get csr
NAME                                                   AGE     REQUESTOR           CONDITION
node-csr-dSfuVez-9sY1oN2sxtzzCpv3x_cIHx5OpKbsKyxEqPo   4m40s   kubelet-bootstrap   Approved,Issued
node-csr-qvmCIp_hLDYhzVqHc3ZCOiN0VmE7wpeIR96ERSLEp6I   14m     kubelet-bootstrap   Approved,Issued
node-csr-s8N57qF1-1kzZi8ECVYBOvbX1Hdc7CA5oW0oVsbJa_U   4m53s   kubelet-bootstrap   Approved,Issued

# kubectl get nodes
NAME             STATUS   ROLES    AGE   VERSION
192.168.20.201   Ready    <none>   9s    v1.13.1
192.168.20.202   Ready    <none>   22s   v1.13.1
192.168.20.203   Ready    <none>   10m   v1.13.1	

Configuring kube-proxy

kube-proxy runs on every node; it watches the apiserver for changes to services and endpoints and creates routing rules to load-balance service traffic.
Create the kube-proxy certificate signing request:

cat > kube-proxy-csr.json <<EOF
{
  "CN": "system:kube-proxy",
  "hosts": [],
  "key": {
	"algo": "rsa",
	"size": 2048
  },
  "names": [
	{
	  "C": "CN",
	  "ST": "BeiJing",
	  "L": "BeiJing",
	  "O": "k8s",
	  "OU": "System"
	}
  ]
}
EOF	
  • The RoleBinding system:node-proxier predefined by kube-apiserver binds User system:kube-proxy to Role system:node-proxier, which grants the permissions needed to call kube-apiserver's proxy-related APIs

Generate the kube-proxy client certificate and private key

# cfssl gencert -ca=/k8s/kubernetes/ssl/ca.pem \
	  -ca-key=/k8s/kubernetes/ssl/ca-key.pem \
	  -config=/k8s/kubernetes/ssl/ca-config.json \
	  -profile=kubernetes kube-proxy-csr.json | cfssljson -bare kube-proxy	

# ls kube-proxy*
kube-proxy.csr  kube-proxy-csr.json  kube-proxy-key.pem  kube-proxy.pem

# cp kube-proxy*.pem /k8s/kubernetes/ssl/
# scp kube-proxy*.pem 192.168.20.201:/k8s/kubernetes/ssl/
# scp kube-proxy*.pem 192.168.20.202:/k8s/kubernetes/ssl/

Create the kube-proxy kubeconfig file

# Set cluster parameters
kubectl config set-cluster kubernetes \
  --certificate-authority=/k8s/kubernetes/ssl/ca.pem \
  --embed-certs=true \
  --server=${KUBE_APISERVER} \
  --kubeconfig=kube-proxy.kubeconfig

# Set client authentication parameters
kubectl config set-credentials kube-proxy \
  --client-certificate=/k8s/kubernetes/ssl/kube-proxy.pem \
  --client-key=/k8s/kubernetes/ssl/kube-proxy-key.pem \
  --embed-certs=true \
  --kubeconfig=kube-proxy.kubeconfig

# Set context parameters
kubectl config set-context default \
  --cluster=kubernetes \
  --user=kube-proxy \
  --kubeconfig=kube-proxy.kubeconfig

# Set the default context
kubectl config use-context default --kubeconfig=kube-proxy.kubeconfig	

# Copy the kube-proxy kubeconfig file to all nodes
cp kube-proxy.kubeconfig /k8s/kubernetes/cfg/
scp kube-proxy.kubeconfig 192.168.20.201:/k8s/kubernetes/cfg/
scp kube-proxy.kubeconfig 192.168.20.202:/k8s/kubernetes/cfg/

Create the kube-proxy configuration file

# cat /k8s/kubernetes/cfg/kube-proxy
KUBE_PROXY_OPTS="--logtostderr=true \
--v=4 \
--hostname-override=192.168.20.203 \
--cluster-cidr=172.18.0.0/16 \
--kubeconfig=/k8s/kubernetes/cfg/kube-proxy.kubeconfig"	
  • --cluster-cidr is the cluster's Pod CIDR; kube-proxy uses it to decide which traffic must be masqueraded, and it should match kube-controller-manager's --cluster-cidr

Create the kube-proxy systemd unit file

# cat /lib/systemd/system/kube-proxy.service
[Unit]
Description=Kubernetes Proxy
After=network.target

[Service]
EnvironmentFile=-/k8s/kubernetes/cfg/kube-proxy
ExecStart=/k8s/kubernetes/bin/kube-proxy $KUBE_PROXY_OPTS
Restart=on-failure

[Install]
WantedBy=multi-user.target	

Start the service

systemctl daemon-reload
systemctl enable kube-proxy
systemctl restart kube-proxy
systemctl status kube-proxy

Start the service on the remaining two nodes:

scp /k8s/kubernetes/cfg/kube-proxy 192.168.20.201:/k8s/kubernetes/cfg/
scp /k8s/kubernetes/cfg/kube-proxy 192.168.20.202:/k8s/kubernetes/cfg/
scp /lib/systemd/system/kube-proxy.service 192.168.20.201:/lib/systemd/system/kube-proxy.service
scp /lib/systemd/system/kube-proxy.service 192.168.20.202:/lib/systemd/system/kube-proxy.service

Adjust the hostname-override value on each node (the same sed approach as for the kubelet works here).

Cluster Status

Apply the node and master labels to the nodes:

kubectl label node 192.168.20.203  node-role.kubernetes.io/master='master'
kubectl label node 192.168.20.202  node-role.kubernetes.io/node='node'
kubectl label node 192.168.20.201  node-role.kubernetes.io/node='node'

Check the cluster status:

# kubectl get node,cs
NAME                  STATUS   ROLES    AGE   VERSION
node/192.168.20.201   Ready    node     42m   v1.13.1
node/192.168.20.202   Ready    node     42m   v1.13.1
node/192.168.20.203   Ready    master   52m   v1.13.1

NAME                                 STATUS    MESSAGE             ERROR
componentstatus/scheduler            Healthy   ok                  
componentstatus/controller-manager   Healthy   ok                  
componentstatus/etcd-0               Healthy   {"health":"true"}   
componentstatus/etcd-2               Healthy   {"health":"true"}   
componentstatus/etcd-1               Healthy   {"health":"true"}
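
As a final smoke test (not part of the original write-up), run a small deployment and reach it through a NodePort; this exercises the Pod network, the kubelets, and kube-proxy end to end. In 1.13, kubectl run still creates a Deployment:

kubectl run nginx --image=nginx --replicas=2
kubectl expose deployment nginx --port=80 --type=NodePort
kubectl get pods -o wide
kubectl get svc nginx
# curl any node on the assigned NodePort (in the 30000-50000 range set on the apiserver)
curl http://192.168.20.203:<NodePort>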

References

https://www.qikqiak.com/post/manual-install-high-available-kubernetes-cluster/
https://www.kubernetes.org.cn/4963.html
