I Machine Configuration
Hostname | OS Version | Spec | IP | Deployed Software |
master1 | CentOS Linux release 7.5.1804 | 8c/8g/100g | 172.16.206.110 | Kube-apiserver、Kube-controller-manager、Kube-scheduler、Etcd |
master2 | CentOS Linux release 7.5.1804 | 8c/8g/100g | 172.16.206.111 | Kube-apiserver、Kube-controller-manager、Kube-scheduler、Etcd |
master3 | CentOS Linux release 7.5.1804 | 8c/8g/100g | 172.16.206.112 | Kube-apiserver、Kube-controller-manager、Kube-scheduler、Etcd |
node1 | CentOS Linux release 7.5.1804 | 8c/8g/100g | 172.16.206.113 | kubelet、Kube-proxy |
node2 | CentOS Linux release 7.5.1804 | 8c/8g/100g | 172.16.206.114 | kubelet、Kube-proxy |
node3 | CentOS Linux release 7.5.1804 | 8c/8g/100g | 172.16.206.115 | kubelet、Kube-proxy |
node4 | CentOS Linux release 7.5.1804 | 8c/8g/100g | 172.16.206.116 | kubelet、Kube-proxy |
node5 | CentOS Linux release 7.5.1804 | 8c/8g/100g | 172.16.206.117 | kubelet、Kube-proxy |
node6 | CentOS Linux release 7.5.1804 | 8c/8g/100g | 172.16.206.118 | kubelet、Kube-proxy |
node7 | CentOS Linux release 7.5.1804 | 8c/8g/100g | 172.16.206.119 | kubelet、Kube-proxy |
Virtual VIP | | | 172.16.206.100 | |
centos68(LB) | CentOS Linux release 7.5.1804 | 4c/10g/100g | 172.16.206.68 | nginx、keepalived |
centos107(LB) | CentOS Linux release 7.5.1804 | 4c/10g/100g | 172.16.206.107 | nginx、keepalived |
centos108 | CentOS Linux release 7.5.1804 | 4c/10g/100g | 172.16.206.108 | Harbor,Rancher |
II Software Installation
1 Software Versions
Software | Version |
Etcd | 3.4.13 |
Kube-apiserver、Kube-controller-manager、Kube-scheduler、Kubelet、Kube-proxy | 1.20.6 |
CoreDNS | 1.8.3 |
Calico | 3.19 |
Containerd | 1.4.3 |
Harbor | 2.3.1 |
Rancher | 2.5.9 |
2 Installation Requirements
1) Requirements
Check the disk partitioning of every machine to make sure the installation directory has enough space to install Kubernetes, and that the image and data directories are large enough.
2) Rules
The installation directory should have more than 50 GB of free space.
The image and data directories should have more than 200 GB of free space.
3) Example
This installation uses the /opt directory.
III Installation Preparation
1 Set the Hostnames
1) Goal
Set the hostnames as follows: master1, master2, master3, node1, node2, node3, node4, node5, node6, node7
2) Steps
Run the following command on each node:
hostnamectl set-hostname master1
# replace master1 with that node's hostname
2 Configure hosts
1) Goal
Edit the /etc/hosts file on the k8s and Harbor machines.
2) Steps
cat >> /etc/hosts << EOF
172.16.206.110 master1
172.16.206.111 master2
172.16.206.112 master3
172.16.206.113 node1
172.16.206.114 node2
172.16.206.115 node3
172.16.206.116 node4
172.16.206.117 node5
172.16.206.118 node6
172.16.206.119 node7
172.16.206.108 registry.com
EOF
3) Notes
master1, master2, master3, node1, node2, node3, node4, node5, node6 and node7 are the k8s hostnames.
registry.com is the value of the hostname field in the Harbor configuration file; it is a domain name and does not have to match any machine's hostname.
3 Disable the Firewall and SELinux
1) Disable the firewall
systemctl stop firewalld
systemctl disable firewalld
2) Disable SELinux
setenforce 0
sed -i 's/SELINUX=enforcing/SELINUX=disabled/' /etc/selinux/config
# check that SELINUX= in /etc/selinux/config is set to disabled; if not, change it manually
4 Disable the Swap Partition
# disable temporarily
swapoff -a
# disable permanently: comment out the swap line in /etc/fstab
sed -ri 's/.*swap.*/#&/' /etc/fstab
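The sed expression above can be tried safely on a throwaway copy before touching the real /etc/fstab, to confirm it only comments out the swap line (the /tmp path and the sample lines below are illustrative):

```shell
# Demonstrate the swap-disabling sed on a copy of a typical fstab.
cat > /tmp/fstab.test << 'EOF'
/dev/mapper/centos-root /       xfs     defaults 0 0
/dev/mapper/centos-swap swap    swap    defaults 0 0
EOF
sed -ri 's/.*swap.*/#&/' /tmp/fstab.test
cat /tmp/fstab.test   # the swap line is now commented out; the root line is untouched
```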
5 Time Synchronization
1) Packages
http://mirror.centos.org/centos/7/os/x86_64/Packages/autogen-libopts-5.18-5.el7.x86_64.rpm
http://mirror.centos.org/centos/7/os/x86_64/Packages/ntp-4.2.6p5-29.el7.centos.2.x86_64.rpm
http://mirror.centos.org/centos/7/os/x86_64/Packages/ntpdate-4.2.6p5-29.el7.centos.2.x86_64.rpm
2) Installation
Choose one node as the NTP server and use the other nodes as clients. In this installation master1 is the NTP server and the other nodes are clients.
Upload the packages and run the install commands:
rpm -ivh autogen-libopts-5.18-5.el7.x86_64.rpm
rpm -ivh ntpdate-4.2.6p5-29.el7.centos.2.x86_64.rpm
rpm -ivh ntp-4.2.6p5-29.el7.centos.2.x86_64.rpm
3) Configuration
3.1 Configure the log file path
vi /etc/sysconfig/ntpd
OPTIONS="-g -l /var/log/ntpstats/ntpd.log"
3.2 Enable start on boot
chkconfig ntpd on
3.3 This step differs for the NTP server and the clients; master1 is the server, all other nodes are clients.
3.3.1 Server node
On master1, edit /etc/ntp.conf as follows:
driftfile /var/lib/ntp/drift
# hosts in the 172.16.206.0/24 network (netmask 255.255.255.0) may synchronize time from this NTP server
restrict 172.16.206.2 mask 255.255.255.0 nomodify notrap
# upstream NTP servers to use; comment out the original server lines
server ntp1.aliyun.com
server time1.aliyun.com
# allow the upstream time servers to adjust the local clock
restrict time1.aliyun.com nomodify notrap noquery
restrict ntp1.aliyun.com nomodify notrap noquery
# when the upstream time sources are unreachable, serve the local clock instead
server 127.0.0.1
fudge 127.0.0.1 stratum 10
includefile /etc/ntp/crypto/pw
keys /etc/ntp/keys
disable monitor
3.3.2 Client nodes
Configure the other (client) nodes as follows:
driftfile /var/lib/ntp/drift
# point at the NTP server set up above
server 172.16.206.110
# allow the NTP server to adjust this machine's clock
restrict 172.16.206.110 nomodify notrap noquery
# likewise fall back to the local clock
server 127.0.0.1
fudge 127.0.0.1 stratum 10
includefile /etc/ntp/crypto/pw
keys /etc/ntp/keys
disable monitor
3.4 Start the ntpd service
service ntpd start
3.5 Check synchronization status
Use the ntpstat command to check the synchronization status. It usually takes 5 to 10 minutes after startup to connect and synchronize, so wait a little after starting the service. ntpq -p also shows the peer status.
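Since synchronization takes several minutes, a small helper can poll ntpstat until it reports success (an illustrative sketch, not part of the original procedure; adjust the retry count and interval as needed):

```shell
# wait_for_ntp_sync: poll ntpstat every 30s until the clock is synchronized
# or the retry budget is exhausted.
wait_for_ntp_sync() {
  local tries=${1:-20}   # default: 20 attempts, roughly 10 minutes
  while [ "$tries" -gt 0 ]; do
    ntpstat && return 0
    tries=$((tries - 1))
    sleep 30
  done
  echo "ntp still not synchronized; check with: ntpq -p" >&2
  return 1
}
```

Call `wait_for_ntp_sync` on each client after `service ntpd start`.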
6 Tune Kernel Parameters
cat > /etc/sysctl.d/k8s.conf << EOF
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
sysctl --system
7 Load the IPVS Modules
cat > /etc/sysconfig/modules/ipvs.modules <<EOF
#!/bin/bash
modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
modprobe -- nf_conntrack_ipv4
EOF
chmod 755 /etc/sysconfig/modules/ipvs.modules && bash /etc/sysconfig/modules/ipvs.modules && lsmod | grep -e ip_vs -e nf_conntrack_ipv4
ipvsadm can optionally be installed to manage IPVS:
ipvsadm-1.27-8.el7.x86_64.rpm
# optional
8 Passwordless SSH Login
Choose one master node as the operations node and set up trust with the other machines.
CentOS 7 ships with the ssh service by default.
1) Setup
To let master1 log in to the other k8s nodes without a password, run the following on master1:
ssh-keygen
# press Enter at every prompt
2) Troubleshooting
Missing ~/.ssh directory (only if it occurs)
If cd ~/.ssh fails after generating the key pair, it is because ssh has never been used to log in from this account. Log in somewhere once with ssh to create the ~/.ssh directory, then rerun ssh-keygen.
3) Copy the id_rsa.pub file
# copy ~/.ssh/id_rsa.pub from master1's ~/.ssh into the ~/.ssh directory of every node you want to log in to
scp ~/.ssh/id_rsa.pub 172.16.206.101:~/.ssh/
# then, on each target node, append the public key to ~/.ssh/authorized_keys:
cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
# note: on every server make sure that
# ~/.ssh is mode 700
# ~/.ssh/authorized_keys is mode 600
# this is a Linux security requirement; with wrong permissions, passwordless login will not work
4) Verify passwordless login
ssh master2
Log out:
exit
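As an alternative to the manual scp/cat/chmod steps in 3), ssh-copy-id appends the key and fixes the permissions in one step (a sketch; run it on master1 after ssh-keygen, entering each node's password once):

```shell
# push_key_to_nodes: distribute master1's public key to every other node.
# ssh-copy-id ships with the openssh-clients package on CentOS 7.
push_key_to_nodes() {
  for i in master2 master3 node1 node2 node3 node4 node5 node6 node7; do
    ssh-copy-id -i ~/.ssh/id_rsa.pub "$i"
  done
}
# call push_key_to_nodes on master1
```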
1 Install the etcd Cluster
[root@master1 ~]# mkdir -p /opt/etcd # program directory
[root@master1 ~]# mkdir -p /opt/etcd/ssl # certificate directory
cd /opt/work/
https://pkg.cfssl.org/R1.2/cfssl_linux-amd64
https://pkg.cfssl.org/R1.2/cfssljson_linux-amd64
https://pkg.cfssl.org/R1.2/cfssl-certinfo_linux-amd64
Upload the downloaded files to this directory, then make them executable:
chmod +x cfssl*
mv cfssl_linux-amd64 /opt/local/bin/cfssl
mv cfssljson_linux-amd64 /opt/local/bin/cfssljson
mv cfssl-certinfo_linux-amd64 /opt/local/bin/cfssl-certinfo
Append /opt/local/bin to the PATH environment variable:
vi /etc/profile
export PATH=/opt/local/bin:$PATH
source /etc/profile
vi ca-csr.json
{
"CN": "etcd",
"key": {
"algo": "rsa",
"size": 2048
},
"names": [
{
"C": "CN",
"L": "Chengdu",
"ST": "Chengdu"
}
]
}
Note:
CN (Common Name): kube-apiserver extracts this field from a certificate as the request's user name; browsers use it to verify whether a website is legitimate.
O (Organization): kube-apiserver extracts this field as the group the requesting user belongs to.
cfssl gencert -initca ca-csr.json | cfssljson -bare ca
vi ca-config.json
{
"signing": {
"default": {
"expiry": "87600h"
},
"profiles": {
"etcd": {
"expiry": "87600h",
"usages": [
"signing",
"key encipherment",
"server auth",
"client auth"
]
}
}
}
}
Note: 87600h = 10 years.
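The conversion is easy to verify: 87600 hours divided by 24 hours per day and 365 days per year gives 10.

```shell
# 87600h expressed in years: 87600 / 24 / 365
echo $((87600 / 24 / 365))   # prints 10
```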
vi etcd-csr.json
{
"CN": "etcd",
"hosts": [
"172.16.206.110",
"172.16.206.111",
"172.16.206.112"
],
"key": {
"algo": "rsa",
"size": 2048
},
"names": [
{
"C": "CN",
"L": "Chengdu",
"ST": "Chengdu"
}
]
}
hosts: the IPs of the etcd nodes.
cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=etcd etcd-csr.json | cfssljson -bare etcd
ls etcd*.pem
etcd-key.pem etcd.pem
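It is worth confirming that the IPs from the hosts field actually landed in the certificate's Subject Alternative Name; the same check works for every certificate generated later in this guide (a sketch using openssl; the helper name show_sans is illustrative):

```shell
# show_sans: print the Subject Alternative Name entries of a PEM certificate.
show_sans() {
  openssl x509 -in "$1" -noout -text | grep -A1 'Subject Alternative Name'
}
# show_sans etcd.pem
# expect: IP Address:172.16.206.110, IP Address:172.16.206.111, IP Address:172.16.206.112
```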
https://github.com/etcd-io/etcd/releases/download/v3.4.13/etcd-v3.4.13-linux-amd64.tar.gz
tar -xvf etcd-v3.4.13-linux-amd64.tar.gz -C /opt/etcd
Copy the etcd and etcdctl executables from the etcd tarball into /opt/etcd/bin.
vim etcd.conf
#[Member]
ETCD_NAME="etcd01"
ETCD_DATA_DIR="/opt/etcd/data/default.etcd"
ETCD_LISTEN_PEER_URLS="https://172.16.206.110:2380"
ETCD_LISTEN_CLIENT_URLS="https://172.16.206.110:2379,https://127.0.0.1:2379"
#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://172.16.206.110:2380"
ETCD_ADVERTISE_CLIENT_URLS="https://172.16.206.110:2379"
ETCD_INITIAL_CLUSTER="etcd01=https://172.16.206.110:2380,etcd02=https://172.16.206.111:2380,etcd03=https://172.16.206.112:2380"
ETCD_INITIAL_CLUSTER_TOKEN="techstar@2021@2022"
ETCD_INITIAL_CLUSTER_STATE="new"
ETCD_ENABLE_V2="true"
#[Security]
ETCD_CERT_FILE="/opt/etcd/ssl/etcd.pem"
ETCD_KEY_FILE="/opt/etcd/ssl/etcd-key.pem"
ETCD_TRUSTED_CA_FILE="/opt/etcd/ssl/ca.pem"
ETCD_CLIENT_CERT_AUTH="true"
ETCD_PEER_CERT_FILE="/opt/etcd/ssl/etcd.pem"
ETCD_PEER_KEY_FILE="/opt/etcd/ssl/etcd-key.pem"
ETCD_PEER_TRUSTED_CA_FILE="/opt/etcd/ssl/ca.pem"
ETCD_PEER_CLIENT_CERT_AUTH="true"
Note:
ETCD_NAME: node name, unique within the cluster
ETCD_DATA_DIR: data directory
ETCD_LISTEN_PEER_URLS: cluster peer listen address
ETCD_LISTEN_CLIENT_URLS: client listen address
ETCD_INITIAL_ADVERTISE_PEER_URLS: advertised peer address
ETCD_ADVERTISE_CLIENT_URLS: advertised client address
ETCD_INITIAL_CLUSTER: cluster member addresses
ETCD_INITIAL_CLUSTER_TOKEN: cluster token
ETCD_INITIAL_CLUSTER_STATE: state when joining; new for a new cluster, existing to join an existing one
Note: on the other etcd nodes, the IPs in ETCD_INITIAL_CLUSTER stay the same; the IPs in every other field must be changed to that node's own IP.
vim /opt/etcd/bin/etcd.service
[Unit]
Description=Etcd Server
After=network.target
After=network-online.target
Wants=network-online.target
[Service]
Type=notify
EnvironmentFile=/opt/etcd/conf/etcd.conf
ExecStart=/opt/etcd/bin/etcd
Restart=on-failure
LimitNOFILE=65536
[Install]
WantedBy=multi-user.target
cp ca*.pem /opt/etcd/ssl/
cp etcd*.pem /opt/etcd/ssl/
cp etcd.conf /opt/etcd/conf/
cp etcd.service /opt/etcd/bin/
for i in master2 master3;do scp etcd.conf $i:/opt/etcd/conf/;done
for i in master2 master3;do scp etcd*.pem ca*.pem $i:/opt/etcd/ssl/;done
for i in master2 master3;do scp etcd.service /opt/etcd/bin/etcd /opt/etcd/bin/etcdctl $i:/opt/etcd/bin/;done
Note: on master2 and master3, change the etcd name and IPs in the config file accordingly.
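The per-node edits can also be scripted with sed instead of done by hand. The sketch below demonstrates the rewrite on a trimmed local sample of etcd.conf (the /tmp paths are illustrative); the real file would then be pushed out with scp:

```shell
# Sample of master1's etcd.conf, reduced to the lines that matter here.
cat > /tmp/etcd01.conf << 'EOF'
ETCD_NAME="etcd01"
ETCD_LISTEN_PEER_URLS="https://172.16.206.110:2380"
ETCD_INITIAL_CLUSTER="etcd01=https://172.16.206.110:2380,etcd02=https://172.16.206.111:2380,etcd03=https://172.16.206.112:2380"
EOF
# Rewrite the member name, and replace the IP on every line EXCEPT
# ETCD_INITIAL_CLUSTER, which must keep all three members unchanged.
sed -e 's/^ETCD_NAME="etcd01"/ETCD_NAME="etcd02"/' \
    -e '/^ETCD_INITIAL_CLUSTER=/!s/172\.16\.206\.110/172.16.206.111/g' \
    /tmp/etcd01.conf > /tmp/etcd02.conf
cat /tmp/etcd02.conf
```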
ln -s /opt/etcd/bin/etcd.service /usr/lib/systemd/system/etcd.service
systemctl daemon-reload
systemctl enable etcd.service
systemctl start etcd.service
systemctl status etcd
Run the following command on any master node to check the etcd cluster status:
/opt/etcd/bin/etcdctl --cacert=/opt/etcd/ssl/ca.pem --cert=/opt/etcd/ssl/etcd.pem --key=/opt/etcd/ssl/etcd-key.pem --endpoints="https://172.16.206.110:2379,https://172.16.206.111:2379,https://172.16.206.112:2379" endpoint status --write-out=table
2 Install Kubernetes
cd /data/work
wget https://dl.k8s.io/v1.20.6/kubernetes-server-linux-amd64.tar.gz
tar -xf kubernetes-server-linux-amd64.tar.gz
Copy kube-apiserver, kube-controller-manager, kube-scheduler, kubeadm, kubectl, kubelet and kube-proxy from the kubernetes/server/bin directory of the tarball into /opt/kubernetes/bin, and put kubectl into /opt/local/bin/.
Then distribute the executables to the other nodes:
for i in master2 master3;do scp /opt/kubernetes/bin/kube-apiserver /opt/kubernetes/bin/kube-controller-manager /opt/kubernetes/bin/kube-scheduler $i:/opt/kubernetes/bin/;done
for i in node1 node2 node3 node4 node5 node6 node7;do scp /opt/kubernetes/bin/kubelet /opt/kubernetes/bin/kube-proxy $i:/opt/kubernetes/bin/;done
vim kube-apiserver-csr.json
{
"CN": "kubernetes",
"hosts": [
"10.0.0.1",
"127.0.0.1",
"kubernetes",
"kubernetes.default",
"kubernetes.default.svc",
"kubernetes.default.svc.cluster",
"kubernetes.default.svc.cluster.local",
"172.16.206.110",
"172.16.206.111",
"172.16.206.112",
"172.16.206.100",
"172.16.206.107",
"172.16.206.108"
],
"key": {
"algo": "rsa",
"size": 2048
},
"names": [
{
"C": "CN",
"ST": "Chengdu",
"L": "Chengdu",
"O": "kubernetes",
"OU": "system"
}
]
}
Note:
If the hosts field is non-empty, it lists the IPs and domain names authorized to use the certificate.
The self-signed CA issues the kube-apiserver HTTPS certificate. The hosts field must contain every master, LB and VIP IP; not one may be missing. To simplify future scaling, a few reserved IPs can be added. It must also contain the first IP of the service network (generally the first address of the range given by kube-apiserver's service-cluster-ip-range, e.g. 10.0.0.1), plus 127.0.0.1 and the kubernetes.* domain names.
cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes kube-apiserver-csr.json | cfssljson -bare kube-apiserver
cat > token.csv << EOF
$(head -c 16 /dev/urandom | od -An -t x | tr -d ' '),kubelet-bootstrap,10001,"system:kubelet-bootstrap"
EOF
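token.csv is a single CSV line: a 32-character hex token, the user name, the UID, and the group. The generation can be reproduced and sanity-checked before wiring it into the apiserver (a sketch; the token printed here is a fresh random value, not the one in your token.csv):

```shell
# Reproduce the token generation and show the line format token.csv expects:
# <32-hex-token>,kubelet-bootstrap,10001,"system:kubelet-bootstrap"
TOKEN=$(head -c 16 /dev/urandom | od -An -t x | tr -d ' ')
echo "$TOKEN"   # 32 lowercase hex characters
echo "${TOKEN},kubelet-bootstrap,10001,\"system:kubelet-bootstrap\""
```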
vim kube-apiserver.conf
KUBE_APISERVER_OPTS="--enable-admission-plugins=NamespaceLifecycle,NodeRestriction,LimitRanger,ServiceAccount,DefaultStorageClass,ResourceQuota \
--anonymous-auth=false \
--bind-address=172.16.206.110 \
--secure-port=6443 \
--advertise-address=172.16.206.110 \
--authorization-mode=Node,RBAC \
--enable-bootstrap-token-auth=true \
--service-cluster-ip-range=10.0.0.0/24 \
--token-auth-file=/opt/kubernetes/conf/token.csv \
--service-node-port-range=30000-32000 \
--tls-cert-file=/opt/kubernetes/ssl/kube-apiserver.pem \
--tls-private-key-file=/opt/kubernetes/ssl/kube-apiserver-key.pem \
--client-ca-file=/opt/kubernetes/ssl/ca.pem \
--kubelet-client-certificate=/opt/kubernetes/ssl/kube-apiserver.pem \
--kubelet-client-key=/opt/kubernetes/ssl/kube-apiserver-key.pem \
--service-account-key-file=/opt/kubernetes/ssl/ca-key.pem \
--service-account-signing-key-file=/opt/kubernetes/ssl/ca-key.pem \
--service-account-issuer=https://kubernetes.default.svc.cluster.local \
--etcd-cafile=/opt/etcd/ssl/ca.pem \
--etcd-certfile=/opt/etcd/ssl/etcd.pem \
--etcd-keyfile=/opt/etcd/ssl/etcd-key.pem \
--etcd-servers=https://172.16.206.110:2379,https://172.16.206.111:2379,https://172.16.206.112:2379 \
--enable-swagger-ui=true \
--allow-privileged=true \
--runtime-config=api/all=true \
--requestheader-allowed-names=aggregator \
--enable-aggregator-routing=true \
--apiserver-count=3 \
--audit-log-maxage=30 \
--audit-log-maxbackup=3 \
--audit-log-maxsize=100 \
--audit-log-path=/opt/kubernetes/log/kube-apiserver-audit.log \
--event-ttl=1h \
--alsologtostderr=true \
--logtostderr=false \
--log-dir=/opt/kubernetes/log \
--v=2"
Note:
--logtostderr: log to stderr
--v: log level
--log-dir: log directory
--etcd-servers: etcd cluster addresses
--bind-address: listen address
--secure-port: https port
--advertise-address: advertised cluster address
--allow-privileged: allow privileged containers
--service-cluster-ip-range: Service virtual IP range
--enable-admission-plugins: admission control plugins
--authorization-mode: authorization mode; enables RBAC and node self-management
--enable-bootstrap-token-auth: enable the TLS bootstrap mechanism
--token-auth-file: bootstrap token file
--service-node-port-range: default NodePort allocation range for Services
--kubelet-client-xxx: client certificate used by the apiserver to access kubelet
--tls-xxx-file: apiserver https certificates
--etcd-xxxfile: certificates for connecting to the etcd cluster
--audit-log-xxx: audit log settings
vim kube-apiserver.service
[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/kubernetes/kubernetes
After=etcd.service
Wants=etcd.service
[Service]
EnvironmentFile=-/opt/kubernetes/conf/kube-apiserver.conf
ExecStart=/opt/kubernetes/bin/kube-apiserver $KUBE_APISERVER_OPTS
Restart=on-failure
RestartSec=5
Type=notify
LimitNOFILE=65536
[Install]
WantedBy=multi-user.target
cp ca*.pem kube-apiserver*.pem /opt/kubernetes/ssl/
cp token.csv kube-apiserver.conf /opt/kubernetes/conf/
cp kube-apiserver.service /opt/kubernetes/bin/
for i in master2 master3;do scp token.csv kube-apiserver.conf $i:/opt/kubernetes/conf/;done
for i in master2 master3;do scp kube-apiserver*.pem ca*.pem $i:/opt/kubernetes/ssl/;done
for i in master2 master3;do scp kube-apiserver.service $i:/opt/kubernetes/bin/;done
Note: change the IP addresses in the config files on master2 and master3 to each machine's own IP.
ln -s /opt/kubernetes/bin/kube-apiserver.service /usr/lib/systemd/system/kube-apiserver.service
systemctl daemon-reload
systemctl enable kube-apiserver
systemctl start kube-apiserver
systemctl status kube-apiserver
Test:
curl --insecure https://172.16.206.110:6443/
Any response means kube-apiserver is up.
vim admin-csr.json
{
"CN": "admin",
"hosts": [],
"key": {
"algo": "rsa",
"size": 2048
},
"names": [
{
"C": "CN",
"ST": "Chengdu",
"L": "Chengdu",
"O": "system:masters",
"OU": "system"
}
]
}
Notes:
kube-apiserver later uses RBAC to authorize client requests (kubelet, kube-proxy, Pods). It predefines some RBAC RoleBindings; for example, cluster-admin binds the group system:masters to the role cluster-admin, which grants permission to call every kube-apiserver API.
O sets the certificate's group to system:masters. When a client uses this certificate to access kube-apiserver, authentication passes because the certificate is CA-signed, and because system:masters is a pre-authorized group, the client is granted access to all APIs.
Note:
This admin certificate is used later to generate the administrator's kubeconfig file. It is generally recommended to use RBAC for role-based access control; Kubernetes treats the certificate's CN field as the User and the O field as the Group.
"O": "system:masters" must be exactly system:masters, otherwise kubectl create clusterrolebinding will fail later.
cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes admin-csr.json | cfssljson -bare admin
cp admin*.pem /opt/kubernetes/ssl/
A kubeconfig is kubectl's configuration file. It contains everything needed to access the apiserver: the apiserver address, the CA certificate, and the client's own certificate.
Set the cluster parameters:
kubectl config set-cluster kubernetes --certificate-authority=ca.pem --embed-certs=true --server=https://172.16.206.110:6443 --kubeconfig=kube.config
# --server points at the apiserver. With multiple apiservers, use the Kubernetes master VIP here; pointing at a single apiserver's address means no apiserver high availability.
Set the client credentials:
kubectl config set-credentials admin --client-certificate=admin.pem --client-key=admin-key.pem --embed-certs=true --kubeconfig=kube.config
Set the context:
kubectl config set-context kubernetes --cluster=kubernetes --user=admin --kubeconfig=kube.config
Set the default context:
kubectl config use-context kubernetes --kubeconfig=kube.config
mkdir ~/.kube
cp kube.config ~/.kube/config
Grant the kubernetes user permission to access the kubelet API:
kubectl create clusterrolebinding kube-apiserver:kubelet-apis --clusterrole=system:kubelet-api-admin --user kubernetes
After the steps above, kubectl can communicate with kube-apiserver:
[root@master1 work]# kubectl cluster-info
[root@master1 work]# kubectl get componentstatuses
[root@master1 work]# kubectl get all --all-namespaces
scp /root/.kube/config root@master2:/root/.kube/
scp /root/.kube/config root@master3:/root/.kube/
vim kube-controller-manager-csr.json
{
"CN": "system:kube-controller-manager",
"key": {
"algo": "rsa",
"size": 2048
},
"hosts": [
"127.0.0.1",
"172.16.206.110",
"172.16.206.111",
"172.16.206.112",
"172.16.206.100",
"172.16.206.107",
"172.16.206.108"
],
"names": [
{
"C": "CN",
"ST": "Chengdu",
"L": "Chengdu",
"O": "system:kube-controller-manager",
"OU": "system"
}
]
}
Note:
hosts contains all kube-controller-manager node IPs plus the LB and VIP addresses.
CN and O are both system:kube-controller-manager; the built-in ClusterRoleBinding system:kube-controller-manager grants kube-controller-manager the permissions it needs to do its job.
cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes kube-controller-manager-csr.json | cfssljson -bare kube-controller-manager
ls kube-controller-manager*.pem
Set the cluster parameters:
kubectl config set-cluster kubernetes --certificate-authority=ca.pem --embed-certs=true --server=https://172.16.206.110:6443 --kubeconfig=kube-controller-manager.kubeconfig
Set the client credentials:
kubectl config set-credentials system:kube-controller-manager --client-certificate=kube-controller-manager.pem --client-key=kube-controller-manager-key.pem --embed-certs=true --kubeconfig=kube-controller-manager.kubeconfig
Set the context:
kubectl config set-context system:kube-controller-manager --cluster=kubernetes --user=system:kube-controller-manager --kubeconfig=kube-controller-manager.kubeconfig
Set the default context:
kubectl config use-context system:kube-controller-manager --kubeconfig=kube-controller-manager.kubeconfig
vim kube-controller-manager.conf
KUBE_CONTROLLER_MANAGER_OPTS="--logtostderr=false \
--v=2 \
--kubeconfig=/opt/kubernetes/conf/kube-controller-manager.kubeconfig \
--log-dir=/opt/kubernetes/logs/controller-manager \
--leader-elect=true \
--master=https://172.16.206.110:6443 \
--allocate-node-cidrs=true \
--cluster-cidr=10.244.0.0/16 \
--service-cluster-ip-range=10.0.0.0/24 \
--use-service-account-credentials=true \
--alsologtostderr=true \
--root-ca-file=/opt/kubernetes/ssl/ca.pem \
--cluster-signing-cert-file=/opt/kubernetes/ssl/ca.pem \
--cluster-signing-key-file=/opt/kubernetes/ssl/ca-key.pem \
--service-account-private-key-file=/opt/kubernetes/ssl/ca-key.pem \
--cluster-signing-duration=87600h0m0s"
vim kube-controller-manager.service
[Unit]
Description=Kubernetes Controller Manager
Documentation=https://github.com/kubernetes/kubernetes
[Service]
EnvironmentFile=-/opt/kubernetes/conf/kube-controller-manager.conf
ExecStart=/opt/local/bin/kube-controller-manager $KUBE_CONTROLLER_MANAGER_OPTS
Restart=on-failure
RestartSec=5
[Install]
WantedBy=multi-user.target
cp kube-controller-manager*.pem /opt/kubernetes/ssl/
cp kube-controller-manager.kubeconfig /opt/kubernetes/conf/
cp kube-controller-manager.conf /opt/kubernetes/conf/
cp kube-controller-manager.service /opt/kubernetes/bin/
for i in master2 master3;do scp kube-controller-manager*.pem $i:/opt/kubernetes/ssl/;done
for i in master2 master3;do scp kube-controller-manager.kubeconfig kube-controller-manager.conf $i:/opt/kubernetes/conf/;done
for i in master2 master3;do scp kube-controller-manager.service $i:/opt/kubernetes/bin/;done
ln -s /opt/kubernetes/bin/kube-controller-manager.service /usr/lib/systemd/system/kube-controller-manager.service
systemctl daemon-reload
systemctl enable kube-controller-manager
systemctl start kube-controller-manager
systemctl status kube-controller-manager
kubectl create clusterrolebinding controller-node-clusterrolebing --clusterrole=system:controller:node-controller --user=system:kube-controller-manager
kubectl create configmap kube-root-ca.crt --from-file=ca.crt=/opt/kubernetes/ssl/ca.pem
vim kube-scheduler-csr.json
{
"CN": "system:kube-scheduler",
"hosts": [
"127.0.0.1",
"172.16.206.110",
"172.16.206.111",
"172.16.206.112",
"172.16.206.100",
"172.16.206.107",
"172.16.206.108"
],
"key": {
"algo": "rsa",
"size": 2048
},
"names": [
{
"C": "CN",
"ST": "Chengdu",
"L": "Chengdu",
"O": "system:kube-scheduler",
"OU": "system"
}
]
}
Note:
hosts contains all kube-scheduler node IPs.
CN and O are both system:kube-scheduler; the built-in ClusterRoleBinding system:kube-scheduler grants kube-scheduler the permissions it needs.
cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes kube-scheduler-csr.json | cfssljson -bare kube-scheduler
ls kube-scheduler*.pem
Set the cluster parameters:
kubectl config set-cluster kubernetes --certificate-authority=ca.pem --embed-certs=true --server=https://172.16.206.110:6443 --kubeconfig=kube-scheduler.kubeconfig
Set the client credentials:
kubectl config set-credentials system:kube-scheduler --client-certificate=kube-scheduler.pem --client-key=kube-scheduler-key.pem --embed-certs=true --kubeconfig=kube-scheduler.kubeconfig
Set the context:
kubectl config set-context system:kube-scheduler --cluster=kubernetes --user=system:kube-scheduler --kubeconfig=kube-scheduler.kubeconfig
Set the default context:
kubectl config use-context system:kube-scheduler --kubeconfig=kube-scheduler.kubeconfig
vi kube-scheduler.conf
KUBE_SCHEDULER_OPTS="--address=127.0.0.1 \
--master=https://172.16.206.110:6443 \
--kubeconfig=/opt/kubernetes/conf/kube-scheduler.kubeconfig \
--leader-elect=true \
--alsologtostderr=true \
--logtostderr=false \
--log-dir=/opt/kubernetes/log/scheduler \
--v=2"
vi kube-scheduler.service
[Unit]
Description=Kubernetes Scheduler
Documentation=https://github.com/kubernetes/kubernetes
[Service]
EnvironmentFile=-/opt/kubernetes/conf/kube-scheduler.conf
ExecStart=/opt/kubernetes/bin/kube-scheduler $KUBE_SCHEDULER_OPTS
Restart=on-failure
RestartSec=5
[Install]
WantedBy=multi-user.target
cp kube-scheduler*.pem /opt/kubernetes/ssl/
cp kube-scheduler.kubeconfig /opt/kubernetes/conf/
cp kube-scheduler.conf /opt/kubernetes/conf/
cp kube-scheduler.service /opt/kubernetes/bin/
for i in master2 master3;do scp kube-scheduler*.pem $i:/opt/kubernetes/ssl/;done
for i in master2 master3;do scp kube-scheduler.kubeconfig kube-scheduler.conf $i:/opt/kubernetes/conf/;done
for i in master2 master3;do scp kube-scheduler.service $i:/opt/kubernetes/bin/;done
ln -s /opt/kubernetes/bin/kube-scheduler.service /usr/lib/systemd/system/kube-scheduler.service
systemctl daemon-reload
systemctl enable kube-scheduler
systemctl start kube-scheduler
systemctl status kube-scheduler
https://github.com/containerd/containerd/releases/download/v1.4.3/cri-containerd-cni-1.4.3-linux-amd64.tar.gz
tar -zxvf cri-containerd-cni-1.4.3-linux-amd64.tar.gz -C /
mkdir -p /opt/containerd/bin
mkdir -p /opt/cri/bin
mkdir -p /opt/runc/bin
chmod 755 /opt/cni/bin/*
chmod 755 /opt/containerd/bin/*
chmod 755 /opt/cri/bin/*
chmod 755 /opt/runc/bin/*
vim /opt/containerd/bin/containerd.service
[Unit]
Description=containerd container runtime
Documentation=https://containerd.io
After=network.target local-fs.target
[Service]
ExecStartPre=-/sbin/modprobe overlay
ExecStart=/opt/containerd/bin/containerd --config /opt/containerd/conf/config.toml
Type=notify
Delegate=yes
KillMode=process
Restart=always
RestartSec=5
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNPROC=infinity
LimitCORE=infinity
LimitNOFILE=1048576
# Comment TasksMax if your systemd version does not supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
OOMScoreAdjust=-999
[Install]
WantedBy=multi-user.target
vi /opt/cri/conf/crictl.yaml
runtime-endpoint: unix:///run/containerd/containerd.sock
image-endpoint: unix:///run/containerd/containerd.sock
cp /opt/cri/conf/crictl.yaml /etc/
mkdir -p /etc/cni/net.d/
cp /opt/cni/net.d/10-containerd-net.conflist /etc/cni/net.d/
cp /opt/runc/bin/runc /usr/local/bin/
In /etc/profile, append export PATH=/opt/runc/bin:/opt/containerd/bin:/opt/cri/bin:/opt/kubernetes/bin:$PATH, then:
source /etc/profile
containerd config default > /opt/containerd/conf/config.toml
To use a private registry, configure the Harbor endpoint address plus the root certificate ca.crt, the certificate xxx.cert, and the key xxx.key.
ln -s /opt/containerd/bin/containerd.service /usr/lib/systemd/system/containerd.service
systemctl daemon-reload
systemctl start containerd
systemctl enable containerd
Of the steps below, the first four are performed on master1; the fifth is performed on each node.
BOOTSTRAP_TOKEN=$(awk -F "," '{print $1}' /opt/kubernetes/conf/token.csv)
Set the cluster parameters:
kubectl config set-cluster kubernetes --certificate-authority=ca.pem --embed-certs=true --server=https://172.16.206.100:6443 --kubeconfig=kubelet-bootstrap.kubeconfig
Set the client credentials:
kubectl config set-credentials kubelet-bootstrap --token=${BOOTSTRAP_TOKEN} --kubeconfig=kubelet-bootstrap.kubeconfig
Set the context:
kubectl config set-context default --cluster=kubernetes --user=kubelet-bootstrap --kubeconfig=kubelet-bootstrap.kubeconfig
Set the default context:
kubectl config use-context default --kubeconfig=kubelet-bootstrap.kubeconfig
Create the role binding:
kubectl create clusterrolebinding kubelet-bootstrap --clusterrole=system:node-bootstrapper --user=kubelet-bootstrap
vim kubelet.json
{
"kind": "KubeletConfiguration",
"apiVersion": "kubelet.config.k8s.io/v1beta1",
"authentication": {
"x509": {
"clientCAFile": "/opt/kubernetes/ssl/ca.pem"
},
"webhook": {
"enabled": true,
"cacheTTL": "2m0s"
},
"anonymous": {
"enabled": false
}
},
"authorization": {
"mode": "Webhook",
"webhook": {
"cacheAuthorizedTTL": "5m0s",
"cacheUnauthorizedTTL": "30s"
}
},
"address": "172.16.206.113",
"port": 10250,
"readOnlyPort": 10255,
"cgroupDriver": "systemd",
"hairpinMode": "promiscuous-bridge",
"serializeImagePulls": false,
"featureGates": {
"RotateKubeletClientCertificate": true,
"RotateKubeletServerCertificate": true
},
"clusterDomain": "cluster.local.",
"clusterDNS": ["10.0.0.2"]
}
vim kubelet.service
[Unit]
Description=Kubernetes Kubelet
Documentation=https://github.com/kubernetes/kubernetes
After=containerd.service
Requires=containerd.service
[Service]
WorkingDirectory=/var/lib/kubelet
ExecStart=/opt/kubernetes/bin/kubelet \
--bootstrap-kubeconfig=/opt/kubernetes/conf/kubelet-bootstrap.kubeconfig \
--cert-dir=/opt/kubernetes/ssl \
--kubeconfig=/opt/kubernetes/conf/kubelet.kubeconfig \
--config=/opt/kubernetes/conf/kubelet.json \
--network-plugin=cni \
--pod-infra-container-image=k8s.gcr.io/pause:3.2 \
--alsologtostderr=true \
--logtostderr=false \
--log-dir=/opt/kubernetes/log/kubelet \
--v=2
Restart=on-failure
RestartSec=5
[Install]
WantedBy=multi-user.target
Note:
--hostname-override: display name, unique within the cluster
--network-plugin: enable CNI
--kubeconfig: empty path; generated automatically and used later to connect to the apiserver
--bootstrap-kubeconfig: used on first start to request a certificate from the apiserver
--config: configuration file
--cert-dir: directory for generated kubelet certificates
--pod-infra-container-image: image of the container that manages the Pod network
cp kubelet-bootstrap.kubeconfig /opt/kubernetes/conf/
cp kubelet.json /opt/kubernetes/conf/
cp kubelet.service /opt/kubernetes/bin/
The cp steps above can be skipped if kubelet is not installed on the master nodes.
for i in node1 node2 node3 node4 node5 node6 node7;do scp kubelet-bootstrap.kubeconfig kubelet.json $i:/opt/kubernetes/conf/;done
for i in node1 node2 node3 node4 node5 node6 node7;do scp ca.pem $i:/opt/kubernetes/ssl/;done
for i in node1 node2 node3 node4 node5 node6 node7;do scp kubelet.service $i:/opt/kubernetes/bin/;done
Note: change address in kubelet.json to each node's own IP.
ln -s /opt/kubernetes/bin/kubelet.service /usr/lib/systemd/system/kubelet.service
mkdir /var/lib/kubelet
mkdir -p /opt/kubernetes/log/kubelet
systemctl daemon-reload
systemctl enable kubelet
systemctl start kubelet
systemctl status kubelet
After confirming the kubelet service started successfully, approve the bootstrap requests on the master. The following command shows the CSR requests sent by the worker nodes, seven in total (it is recommended to fully install one worker node before installing the others):
Run the following on master1:
kubectl get csr
The first run may need a few minutes before requests appear.
kubectl certificate approve node-csr-HlX3cExsZohWsu8Dd6Rp_ztFejmMdpzvti_qgxo4SAQ
The node then shows up in:
kubectl get nodes
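With seven workers it is tedious to approve each CSR by name. Pending requests can also be approved in bulk (a sketch, run on master1; the helper name is illustrative):

```shell
# approve_pending_csrs: approve every CSR currently in the Pending state.
# The last column of `kubectl get csr` is the condition.
approve_pending_csrs() {
  kubectl get csr --no-headers \
    | awk '$NF == "Pending" {print $1}' \
    | xargs -r -n1 kubectl certificate approve
}
# approve_pending_csrs   # then verify with: kubectl get nodes
```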
vim kube-proxy-csr.json
{
"CN": "system:kube-proxy",
"key": {
"algo": "rsa",
"size": 2048
},
"names": [
{
"C": "CN",
"ST": "Chengdu",
"L": "Chengdu",
"O": "kubernetes",
"OU": "system"
}
]
}
cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes kube-proxy-csr.json | cfssljson -bare kube-proxy
ls kube-proxy*.pem
kubectl config set-cluster kubernetes --certificate-authority=ca.pem --embed-certs=true --server=https://172.16.206.100:6443 --kubeconfig=kube-proxy.kubeconfig
kubectl config set-credentials kube-proxy --client-certificate=kube-proxy.pem --client-key=kube-proxy-key.pem --embed-certs=true --kubeconfig=kube-proxy.kubeconfig
kubectl config set-context default --cluster=kubernetes --user=kube-proxy --kubeconfig=kube-proxy.kubeconfig
kubectl config use-context default --kubeconfig=kube-proxy.kubeconfig
vim kube-proxy.yaml
apiVersion: kubeproxy.config.k8s.io/v1alpha1
bindAddress: 172.16.206.113
clientConnection:
  kubeconfig: /opt/kubernetes/conf/kube-proxy.kubeconfig
clusterCIDR: 10.244.0.0/16
healthzBindAddress: 172.16.206.113:10256
kind: KubeProxyConfiguration
metricsBindAddress: 172.16.206.113:10249
mode: "ipvs"
vim kube-proxy.service
[Unit]
Description=Kubernetes Kube-Proxy Server
Documentation=https://github.com/kubernetes/kubernetes
After=network.target
[Service]
WorkingDirectory=/opt/kubernetes/kube-proxy_workdir
ExecStart=/opt/kubernetes/bin/kube-proxy \
--config=/opt/kubernetes/conf/kube-proxy.yaml \
--alsologtostderr=true \
--logtostderr=false \
--log-dir=/opt/kubernetes/log/kube-proxy \
--v=2
Restart=on-failure
RestartSec=5
LimitNOFILE=65536
[Install]
WantedBy=multi-user.target
cp kube-proxy*.pem /opt/kubernetes/ssl/
cp kube-proxy.kubeconfig kube-proxy.yaml /opt/kubernetes/conf/
cp kube-proxy.service /opt/kubernetes/bin/
If kube-proxy is not installed on the master nodes, the steps above are not needed there.
for i in node1 node2 node3 node4 node5 node6 node7;do scp kube-proxy.kubeconfig kube-proxy.yaml $i:/opt/kubernetes/conf/;done
for i in node1 node2 node3 node4 node5 node6 node7;do scp kube-proxy.service $i:/opt/kubernetes/bin/;done
Note: change the addresses in kube-proxy.yaml to each node's actual IP.
mkdir -p /opt/kubernetes/kube-proxy_workdir
mkdir -p /opt/kubernetes/log/kube-proxy
ln -s /opt/kubernetes/bin/kube-proxy.service /usr/lib/systemd/system/kube-proxy.service
systemctl daemon-reload
systemctl enable kube-proxy
systemctl restart kube-proxy
systemctl status kube-proxy
https://docs.projectcalico.org/v3.19/manifests/calico.yaml
kubectl apply -f calico.yaml
For an offline installation, manually upload the images referenced in calico.yaml to every node and load them.
All nodes should now be in the Ready state:
kubectl get pods -A
kubectl get nodes
Note: import an image with: ctr -n k8s.io i import pause.tar
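For a fully offline install, the image tarballs can be pushed and loaded on every node in one loop. A sketch, relying on the passwordless ssh configured earlier; the helper name and the tarball file name are illustrative:

```shell
# load_images_on_nodes: copy an image tarball to each worker node and import
# it into containerd's k8s.io namespace with ctr.
load_images_on_nodes() {
  local tarball=$1
  for i in node1 node2 node3 node4 node5 node6 node7; do
    scp "$tarball" "$i:/tmp/" &&
    ssh "$i" "ctr -n k8s.io i import /tmp/$(basename "$tarball")"
  done
}
# load_images_on_nodes calico-images.tar
```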
https://raw.githubusercontent.com/coredns/deployment/master/kubernetes/coredns.yaml.sed
Edit the yaml file:
set clusterIP to 10.0.0.2 (the clusterDNS value from the kubelet configuration).
Run the following command; if the coredns pods are Running, the deployment succeeded:
kubectl get pod -n kube-system
1 Install Rancher
1) Install docker first.
2) Download the rancher image on a machine with internet access.
3) Import the rancher image:
docker load -i rancher.tar
4) Start rancher:
docker run --privileged -d --restart=unless-stopped -p 80:80 -p 443:443 rancher/rancher
Note: in -p 80:80, the first port is the host port and the second is the container port.
2 Install Harbor
- Kubernetes verification
Already verified during the Kubernetes installation above.
- CoreDNS verification
- 1 Create a pod from the following manifest:
apiVersion: v1
kind: Pod
metadata:
  name: busybox
  namespace: default
spec:
  containers:
  - image: busybox
    command:
    - sleep
    - "3600"
    imagePullPolicy: IfNotPresent
    name: busybox
  restartPolicy: Always
First import the image on the node: ctr -n k8s.io i import busybox
Then:
kubectl exec busybox -- nslookup kubernetes
Server: 10.0.0.2
Address 1: 10.0.0.2 kube-dns.kube-system.svc.cluster.local
Name: kubernetes
Address 1: 10.0.0.1 kubernetes.default.svc.cluster.local
Verification logic
Deploy a Service of type NodePort in front of a pod (ClusterIP inside the cluster), access the Service from outside the cluster, and confirm the request is forwarded to the pod.
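That check can be scripted with a minimal Deployment plus a NodePort Service. A sketch: the names, the nginx image and the node port 30080 are illustrative (30080 falls inside the 30000-32000 range configured on the apiserver, and the image must be reachable from your registry):

```shell
# Write an illustrative Deployment + NodePort Service manifest.
cat > /tmp/nodeport-check.yaml << 'EOF'
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nodeport-check
spec:
  replicas: 1
  selector:
    matchLabels: {app: nodeport-check}
  template:
    metadata:
      labels: {app: nodeport-check}
    spec:
      containers:
      - name: web
        image: nginx
---
apiVersion: v1
kind: Service
metadata:
  name: nodeport-check
spec:
  type: NodePort
  selector: {app: nodeport-check}
  ports:
  - port: 80
    targetPort: 80
    nodePort: 30080
EOF
# kubectl apply -f /tmp/nodeport-check.yaml
# curl http://172.16.206.113:30080/   # any node IP works; expect the nginx welcome page
```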