K8s Cluster Setup Guide
Reference: https://www.cnblogs.com/jasonboren/p/11483898.html
Video reference: https://www.bilibili.com/video/av95133276?p=1
k8s cluster deployment:
*****************************************************************************************************
Overview of steps:
- Environment planning
- Install Docker
- Self-signed TLS certificates
- Deploy the etcd cluster
- Deploy the Flannel network
- Create Node kubeconfig files
- Download the k8s binary packages
- Run the Master components
- Run the Node components
*****************************************************************************************************
1. Environment planning:
Linux OS: VMware + CentOS 7.5, kernel 3.10.0-693.el7.x86_64
Kubernetes: 1.9; Docker: latest (version 19.03.7); etcd: 3.x
Disable SELinux on all machines.
Machines:
host: 192.168.159.220 (master node)
  Components to install:
  kube-apiserver; kube-controller-manager; kube-scheduler; etcd; docker
host-one: 192.168.159.221 (worker node)
  Components to install:
  kubelet; kube-proxy; docker; flannel; etcd
host-two: 192.168.159.222 (worker node)
  Components to install:
  kubelet; kube-proxy; docker; flannel; etcd
Requirements: 2+ CPU cores; 2 GB+ RAM
Note: to disable SELinux, edit /etc/selinux/config and set SELINUX=disabled.
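The SELinux note above can be scripted; a minimal sketch (the function takes the config path as an argument so it can be tried on a copy first; /etc/selinux/config is the stock CentOS location):

```shell
# Flip the SELINUX=... setting to disabled in an selinux config file.
disable_selinux() {
  local cfg=$1
  sed -i 's/^SELINUX=.*/SELINUX=disabled/' "$cfg"
}
# On a real host you would run:
#   disable_selinux /etc/selinux/config
#   setenforce 0   # apply immediately, until the next reboot
```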
2. Install Docker
1. Install the prerequisite packages:
   yum install -y yum-utils \
     device-mapper-persistent-data \
     lvm2
2. Add the official Docker repository:
   yum-config-manager \
     --add-repo \
     https://download.docker.com/linux/centos/docker-ce.repo
3. Install Docker:
   yum install -y docker-ce
Configure the registry mirror: vi /etc/docker/daemon.json
For example:
{
  "registry-mirrors": ["https://registry.docker-cn.com"],
  "insecure-registries": ["192.168.159.220:5000"]
}
Save with Esc then :wq, then reload and restart Docker:
systemctl daemon-reload
systemctl enable docker && systemctl restart docker
(Note: see the Docker tutorial "Installing Docker on CentOS" for details.)
3. Self-signed TLS certificates
Download the cfssl toolchain:
wget https://pkg.cfssl.org/R1.2/cfssl_linux-amd64
wget https://pkg.cfssl.org/R1.2/cfssljson_linux-amd64
wget https://pkg.cfssl.org/R1.2/cfssl-certinfo_linux-amd64
After downloading, ls shows three files; make each executable: chmod +x <file>
Move them into place:
mv cfssl_linux-amd64 /usr/local/bin/cfssl
mv cfssljson_linux-amd64 /usr/local/bin/cfssljson
mv cfssl-certinfo_linux-amd64 /usr/local/bin/cfssl-certinfo
Print the default templates:
cfssl print-defaults config > config.json    # CA config template
cfssl print-defaults csr > csr.json          # certificate signing request template
Edit ca-config.json and ca-csr.json, then generate the CA key pair:
cfssl gencert -initca ca-csr.json | cfssljson -bare ca -
This produces ca.csr, ca-key.pem and ca.pem.
Using server-csr.json, issue the server certificate:
cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes server-csr.json | cfssljson -bare server
This produces server.csr, server.pem and server-key.pem.
For the remaining certificates see: https://www.cnblogs.com/jasonboren/p/11483458.html
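For reference, a minimal ca-csr.json might look like the following sketch (the CN and names values here are illustrative assumptions, not from the original guide; adjust to your own PKI policy):

```shell
# Write a minimal CA signing request for cfssl (field values are illustrative).
cat > ca-csr.json <<'EOF'
{
  "CN": "kubernetes",
  "key": { "algo": "rsa", "size": 2048 },
  "names": [
    { "C": "CN", "ST": "Beijing", "L": "Beijing", "O": "k8s", "OU": "System" }
  ]
}
EOF
# Then generate the CA as above:
#   cfssl gencert -initca ca-csr.json | cfssljson -bare ca -
```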
4. Deploy the etcd cluster:
Download the etcd binary package: https://github.com/coreos/etcd/releases/tag/v3.2.12
Extract it: tar zxvf etcd-v3.2.12-linux-amd64.tar.gz
Create the bin, cfg and ssl directories under /opt/kubernetes:
mkdir -p /opt/kubernetes/{bin,cfg,ssl}
Copy etcd and etcdctl from the extracted directory into the bin directory.
Create the etcd.service unit file:
vi /usr/lib/systemd/system/etcd.service
Contents:
[Unit]
Description=Etcd Server
After=network.target
After=network-online.target
Wants=network-online.target

[Service]
Type=notify
EnvironmentFile=-/opt/kubernetes/cfg/etcd
ExecStart=/opt/kubernetes/bin/etcd \
  --name=${ETCD_NAME} \
  --data-dir=${ETCD_DATA_DIR} \
  --listen-peer-urls=${ETCD_LISTEN_PEER_URLS} \
  --listen-client-urls=${ETCD_LISTEN_CLIENT_URLS},http://127.0.0.1:2379 \
  --advertise-client-urls=${ETCD_ADVERTISE_CLIENT_URLS} \
  --initial-advertise-peer-urls=${ETCD_INITIAL_ADVERTISE_PEER_URLS} \
  --initial-cluster=${ETCD_INITIAL_CLUSTER} \
  --initial-cluster-token=${ETCD_INITIAL_CLUSTER_TOKEN} \
  --initial-cluster-state=new \
  --cert-file=/opt/kubernetes/ssl/server.pem \
  --key-file=/opt/kubernetes/ssl/server-key.pem \
  --peer-cert-file=/opt/kubernetes/ssl/server.pem \
  --peer-key-file=/opt/kubernetes/ssl/server-key.pem \
  --trusted-ca-file=/opt/kubernetes/ssl/ca.pem \
  --peer-trusted-ca-file=/opt/kubernetes/ssl/ca.pem
Restart=on-failure
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
Note: adjust the file paths to your environment.
Create the etcd config file in the cfg directory (a new file):
Contents:
#[Member]
# Node name; use etcd02/etcd03 on the other nodes
ETCD_NAME="etcd01"
# Data directory
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
# The URLs below use this node's own IP address
ETCD_LISTEN_PEER_URLS="https://172.16.163.131:2380"
ETCD_LISTEN_CLIENT_URLS="https://172.16.163.131:2379"

#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://172.16.163.131:2380"
ETCD_ADVERTISE_CLIENT_URLS="https://172.16.163.131:2379"
# All cluster members
ETCD_INITIAL_CLUSTER="etcd01=https://172.16.163.131:2380,etcd02=https://172.16.163.130:2380,etcd03=https://172.16.163.129:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"
Note: change the IP addresses to match your environment.
Copy the .pem files generated by the TLS step into /opt/kubernetes/ssl/.
Start etcd:
systemctl daemon-reload
systemctl start etcd
systemctl enable etcd
If startup fails with "connection refused" from a peer node:
Watch the logs: tail -f /var/log/messages
or: journalctl -u etcd
Check the service status: systemctl status etcd.service
List the ports currently open in the firewall:
firewall-cmd --zone=public --list-ports
Reference: https://blog.csdn.net/lvqingyao520/article/details/81075094
Check whether port 8080 is open: firewall-cmd --zone=public --query-port=8080/tcp
Permanently open a port:
firewall-cmd --zone=public --add-port=8080/tcp --permanent
Reload the firewall to apply the change: firewall-cmd --reload
Or simply stop the firewall: systemctl stop firewalld
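Rather than stopping firewalld outright, one option is to open just the ports this setup uses. A hedged sketch (the port list is an assumption inferred from the components in this guide: etcd peers/clients, the apiserver's insecure and secure ports, and the kubelet):

```shell
# Open the ports used by etcd (2379/2380), kube-apiserver (8080/6443) and kubelet (10250).
open_k8s_ports() {
  local p
  for p in 2379 2380 6443 8080 10250; do
    firewall-cmd --zone=public --add-port="${p}/tcp" --permanent
  done
  firewall-cmd --reload
}
# Usage: open_k8s_ports
```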
Check that etcd is running: ps -ef | grep etcd
Set up mutual passwordless SSH login:
Generate an SSH key pair: ssh-keygen
Copy the public key to the other nodes so they can log in to each other without a password: ssh-copy-id root@<other-node-ip>
Test it: ssh root@<other-node-ip>
Copy all of the master's configuration to the worker nodes:
i.e. the files under bin, cfg and ssl, plus the etcd.service file.
Append the etcd/etcdctl path to the end of /etc/profile: PATH=$PATH:/opt/kubernetes/bin
In /opt/kubernetes/ssl, check that the cluster came up:
etcdctl --ca-file=ca.pem --cert-file=server.pem --key-file=server-key.pem --endpoints="https://<master-ip>:2379,https://<node1-ip>:2379,https://<node2-ip>:2379" cluster-health
or simply:
etcdctl --ca-file=ca.pem --cert-file=server.pem --key-file=server-key.pem cluster-health
If the output looks like this, the cluster is healthy:
member 204f1af770aff3d9 is healthy: got healthy result from https://xxx.xx.xxx.xxx:2379
member 81e41daa4ea73cbc is healthy: got healthy result from https://xxx.xx.xxx.xxx:2379
member d2349ea355902626 is healthy: got healthy result from https://xxx.xx.xxx.xxx:2379
cluster is healthy
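For scripting, a small helper (a sketch, not part of the original guide) can reduce that output to a pass/fail:

```shell
# Reads `etcdctl ... cluster-health` output on stdin; succeeds only if the
# final "cluster is healthy" line is present.
check_etcd_health() {
  if grep -q 'cluster is healthy'; then
    echo "etcd cluster OK"
  else
    echo "etcd cluster DEGRADED" >&2
    return 1
  fi
}
# Usage:
#   etcdctl --ca-file=ca.pem --cert-file=server.pem --key-file=server-key.pem cluster-health | check_etcd_health
```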
Note: consider using plain http instead of https in the etcd config file; the https setup is more error-prone here.
If you change a hostname, reboot the machine so it takes effect; also remember to update ETCD_NAME accordingly (etcd02, etcd03, ...).
5. Deploy the Flannel network:
1) Write the allocated subnet into etcd for flanneld to use
2) Download the binary package
3) Configure Flannel
4) Manage Flannel with systemd
5) Configure Docker to start on the assigned subnet
6) Start everything
Step 1: download the flannel binary package:
wget https://github.com/coreos/flannel/releases/download/v0.9.1/flannel-v0.9.1-linux-amd64.tar.gz
Step 2: extract it and copy the binaries to the other nodes:
# extract the binaries
tar -zxvf flannel-v0.9.1-linux-amd64.tar.gz
# send the executables flanneld and mk-docker-opts.sh to the other nodes
scp flanneld mk-docker-opts.sh root@1xx.xxx.1xx.xxx:/opt/kubernetes/bin/
Step 3: write the allocated subnet into etcd for flanneld to use:
/opt/kubernetes/bin/etcdctl \
--ca-file=ca.pem --cert-file=flanneld.pem --key-file=flanneld-key.pem \
--endpoints="https://172.16.163.131:2379,https://172.16.163.130:2379,https://172.16.163.129:2379" \
set /coreos.com/network/config '{ "Network": "172.17.0.0/16", "Backend": {"Type": "vxlan"}}'
Note: this step failed here with missing-key and cluster errors; use full paths to the certificate files, or switch the endpoints to http.
Step 4: write the flanneld config file (/opt/kubernetes/cfg/flanneld):
FLANNEL_OPTIONS="--etcd-endpoints=https://172.16.163.131:2379,https://172.16.163.130:2379,https://172.16.163.129:2379 -etcd-cafile=/opt/kubernetes/ssl/ca.pem -etcd-certfile=/opt/kubernetes/ssl/server.pem -etcd-keyfile=/opt/kubernetes/ssl/server-key.pem"
Step 5: write the flanneld.service unit file:
[Unit]
Description=Flanneld overlay address etcd agent
After=network-online.target network.target
Before=docker.service

[Service]
Type=notify
EnvironmentFile=/opt/kubernetes/cfg/flanneld
ExecStart=/opt/kubernetes/bin/flanneld --ip-masq $FLANNEL_OPTIONS
ExecStartPost=/opt/kubernetes/bin/mk-docker-opts.sh -k DOCKER_NETWORK_OPTIONS -d /run/flannel/subnet.env
Restart=on-failure

[Install]
WantedBy=multi-user.target
Step 6: the docker.service unit file:
[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
After=network-online.target firewalld.service containerd.service
Wants=network-online.target

[Service]
Type=notify
EnvironmentFile=/run/flannel/subnet.env
ExecStart=/usr/bin/dockerd $DOCKER_NETWORK_OPTIONS
ExecReload=/bin/kill -s HUP $MAINPID
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
TimeoutStartSec=0
Delegate=yes
KillMode=process
Restart=on-failure
StartLimitBurst=3
StartLimitInterval=60s

[Install]
WantedBy=multi-user.target
Once all of the above is configured, copy the flanneld config file, flanneld.service and docker.service to the other nodes, then:
systemctl daemon-reload
systemctl start flanneld
systemctl enable flanneld
systemctl restart docker
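After the restart, docker0 should have been re-created inside the flannel-assigned subnet. A quick sanity check (a sketch; /run/flannel/subnet.env is where mk-docker-opts.sh above writes the subnet):

```shell
# Extract the network prefix from a CIDR-style address, e.g. 172.17.45.1/24 -> 172.17.45
subnet_prefix() {
  echo "${1%.*}"
}
# On a node you might then verify:
#   source /run/flannel/subnet.env
#   ip addr show docker0 | grep -q "$(subnet_prefix "$FLANNEL_SUBNET")" \
#     && echo "docker0 is on the flannel subnet"
```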
(Figure: final configuration result.)
6. Deploy the master components:
Download the master binary package: https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG-1.9.md
Extract it and move these files into the bin directory:
mv kube-apiserver kube-controller-manager kube-scheduler kubectl /opt/kubernetes/bin
Make them executable:
chmod +x /opt/kubernetes/bin/*
Write apiserver.sh:
#!/bin/bash

MASTER_ADDRESS=${1:-"192.168.1.195"}
ETCD_SERVERS=${2:-"http://127.0.0.1:2379"}

cat <<EOF >/opt/kubernetes/cfg/kube-apiserver
KUBE_APISERVER_OPTS="--logtostderr=true \\
--v=4 \\
--etcd-servers=${ETCD_SERVERS} \\
--insecure-bind-address=127.0.0.1 \\
--bind-address=${MASTER_ADDRESS} \\
--insecure-port=8080 \\
--secure-port=6443 \\
--advertise-address=${MASTER_ADDRESS} \\
--allow-privileged=true \\
--service-cluster-ip-range=10.10.10.0/24 \\
--admission-control=NamespaceLifecycle,LimitRanger,SecurityContextDeny,ServiceAccount,ResourceQuota,NodeRestriction \\
--authorization-mode=RBAC,Node \\
--kubelet-https=true \\
--enable-bootstrap-token-auth \\
--token-auth-file=/opt/kubernetes/cfg/token.csv \\
--service-node-port-range=30000-50000 \\
--tls-cert-file=/opt/kubernetes/ssl/server.pem \\
--tls-private-key-file=/opt/kubernetes/ssl/server-key.pem \\
--client-ca-file=/opt/kubernetes/ssl/ca.pem \\
--service-account-key-file=/opt/kubernetes/ssl/ca-key.pem \\
--etcd-cafile=/opt/kubernetes/ssl/ca.pem \\
--etcd-certfile=/opt/kubernetes/ssl/server.pem \\
--etcd-keyfile=/opt/kubernetes/ssl/server-key.pem"
EOF

cat <<EOF >/usr/lib/systemd/system/kube-apiserver.service
[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/kubernetes/kubernetes

[Service]
EnvironmentFile=-/opt/kubernetes/cfg/kube-apiserver
ExecStart=/opt/kubernetes/bin/kube-apiserver \$KUBE_APISERVER_OPTS
Restart=on-failure

[Install]
WantedBy=multi-user.target
EOF

systemctl daemon-reload
systemctl enable kube-apiserver
systemctl restart kube-apiserver
Run it, passing the master IP and the etcd endpoints:
./apiserver.sh 172.16.163.131 https://172.16.163.131:2379,https://172.16.163.130:2379,https://172.16.163.129:2379
Write controller-manager.sh:
#!/bin/bash

MASTER_ADDRESS=${1:-"127.0.0.1"}

cat <<EOF >/opt/kubernetes/cfg/kube-controller-manager
KUBE_CONTROLLER_MANAGER_OPTS="--logtostderr=true \\
--v=4 \\
--master=${MASTER_ADDRESS}:8080 \\
--leader-elect=true \\
--address=127.0.0.1 \\
--service-cluster-ip-range=10.10.10.0/24 \\
--cluster-name=kubernetes \\
--cluster-signing-cert-file=/opt/kubernetes/ssl/ca.pem \\
--cluster-signing-key-file=/opt/kubernetes/ssl/ca-key.pem \\
--service-account-private-key-file=/opt/kubernetes/ssl/ca-key.pem \\
--root-ca-file=/opt/kubernetes/ssl/ca.pem"
EOF

cat <<EOF >/usr/lib/systemd/system/kube-controller-manager.service
[Unit]
Description=Kubernetes Controller Manager
Documentation=https://github.com/kubernetes/kubernetes

[Service]
EnvironmentFile=-/opt/kubernetes/cfg/kube-controller-manager
ExecStart=/opt/kubernetes/bin/kube-controller-manager \$KUBE_CONTROLLER_MANAGER_OPTS
Restart=on-failure

[Install]
WantedBy=multi-user.target
EOF

systemctl daemon-reload
systemctl enable kube-controller-manager
systemctl restart kube-controller-manager
Write scheduler.sh:
#!/bin/bash

MASTER_ADDRESS=${1:-"127.0.0.1"}

cat <<EOF >/opt/kubernetes/cfg/kube-scheduler
KUBE_SCHEDULER_OPTS="--logtostderr=true \\
--v=4 \\
--master=${MASTER_ADDRESS}:8080 \\
--leader-elect"
EOF

cat <<EOF >/usr/lib/systemd/system/kube-scheduler.service
[Unit]
Description=Kubernetes Scheduler
Documentation=https://github.com/kubernetes/kubernetes

[Service]
EnvironmentFile=-/opt/kubernetes/cfg/kube-scheduler
ExecStart=/opt/kubernetes/bin/kube-scheduler \$KUBE_SCHEDULER_OPTS
Restart=on-failure

[Install]
WantedBy=multi-user.target
EOF

systemctl daemon-reload
systemctl enable kube-scheduler
systemctl restart kube-scheduler
The resulting kube-apiserver config file:
# cat /opt/kubernetes/cfg/kube-apiserver
KUBE_APISERVER_OPTS="--logtostderr=true \
--v=4 \
--etcd-servers=https://172.16.163.131:2379,https://172.16.163.130:2379,https://172.16.163.129:2379 \
--insecure-bind-address=127.0.0.1 \
--bind-address=172.16.163.131 \
--insecure-port=8080 \
--secure-port=6443 \
--advertise-address=172.16.163.131 \
--allow-privileged=true \
--service-cluster-ip-range=10.10.10.0/24 \
--admission-control=NamespaceLifecycle,LimitRanger,SecurityContextDeny,ServiceAccount,ResourceQuota,NodeRestriction \
--authorization-mode=RBAC,Node \
--kubelet-https=true \
--enable-bootstrap-token-auth \
--token-auth-file=/opt/kubernetes/cfg/token.csv \
--service-node-port-range=30000-50000 \
--tls-cert-file=/opt/kubernetes/ssl/server.pem \
--tls-private-key-file=/opt/kubernetes/ssl/server-key.pem \
--client-ca-file=/opt/kubernetes/ssl/ca.pem \
--service-account-key-file=/opt/kubernetes/ssl/ca-key.pem \
--etcd-cafile=/opt/kubernetes/ssl/ca.pem \
--etcd-certfile=/opt/kubernetes/ssl/server.pem \
--etcd-keyfile=/opt/kubernetes/ssl/server-key.pem"
The kube-apiserver.service file:
# cat /usr/lib/systemd/system/kube-apiserver.service
[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/kubernetes/kubernetes

[Service]
EnvironmentFile=-/opt/kubernetes/cfg/kube-apiserver
ExecStart=/opt/kubernetes/bin/kube-apiserver $KUBE_APISERVER_OPTS
Restart=on-failure

[Install]
WantedBy=multi-user.target
The kube-controller-manager config file:
# cat /opt/kubernetes/cfg/kube-controller-manager
KUBE_CONTROLLER_MANAGER_OPTS="--logtostderr=true \
--v=4 \
--master=127.0.0.1:8080 \
--leader-elect=true \
--address=127.0.0.1 \
--service-cluster-ip-range=10.10.10.0/24 \
--cluster-name=kubernetes \
--cluster-signing-cert-file=/opt/kubernetes/ssl/ca.pem \
--cluster-signing-key-file=/opt/kubernetes/ssl/ca-key.pem \
--service-account-private-key-file=/opt/kubernetes/ssl/ca-key.pem \
--root-ca-file=/opt/kubernetes/ssl/ca.pem"
The kube-controller-manager.service file:
# cat /usr/lib/systemd/system/kube-controller-manager.service
[Unit]
Description=Kubernetes Controller Manager
Documentation=https://github.com/kubernetes/kubernetes

[Service]
EnvironmentFile=-/opt/kubernetes/cfg/kube-controller-manager
ExecStart=/opt/kubernetes/bin/kube-controller-manager $KUBE_CONTROLLER_MANAGER_OPTS
Restart=on-failure

[Install]
WantedBy=multi-user.target
The kube-scheduler config file:
# cat /opt/kubernetes/cfg/kube-scheduler
KUBE_SCHEDULER_OPTS="--logtostderr=true \
--v=4 \
--master=127.0.0.1:8080 \
--leader-elect"
The kube-scheduler.service file:
# cat /usr/lib/systemd/system/kube-scheduler.service
[Unit]
Description=Kubernetes Scheduler
Documentation=https://github.com/kubernetes/kubernetes

[Service]
EnvironmentFile=-/opt/kubernetes/cfg/kube-scheduler
ExecStart=/opt/kubernetes/bin/kube-scheduler $KUBE_SCHEDULER_OPTS
Restart=on-failure

[Install]
WantedBy=multi-user.target
Start the components:
systemctl start kube-apiserver
systemctl start kube-scheduler
systemctl start kube-controller-manager
Check the master component status:
[root@master master_pkg]# kubectl get cs
NAME                 STATUS    MESSAGE              ERROR
controller-manager   Healthy   ok
scheduler            Healthy   ok
etcd-0               Healthy   {"health":"true"}
etcd-2               Healthy   {"health":"true"}
etcd-1               Healthy   {"health":"true"}
7. Node kubeconfig files
Create the TLS bootstrapping token:
export BOOTSTRAP_TOKEN=$(head -c 16 /dev/urandom | od -An -t x | tr -d ' ')
cat > token.csv <<EOF
${BOOTSTRAP_TOKEN},kubelet-bootstrap,10001,"system:kubelet-bootstrap"
EOF
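Since a malformed token.csv is a common reason the apiserver refuses to start, a quick format check (a sketch, not from the original guide) may be worth running:

```shell
# token.csv must have exactly 4 comma-separated fields per line:
# token,user,uid,"group"
check_token_csv() {
  awk -F, 'NF != 4 { bad = 1 } END { exit bad }' "$1"
}
# Usage: check_token_csv token.csv && echo "token.csv looks well-formed"
```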
Point clients at the apiserver:
export KUBE_APISERVER="https://172.16.163.131:6443"
Create the kubelet kubeconfig:
# set the cluster parameters
kubectl config set-cluster kubernetes \
--certificate-authority=./ca.pem \
--embed-certs=true \
--server=${KUBE_APISERVER} \
--kubeconfig=bootstrap.kubeconfig
# set the client credentials
kubectl config set-credentials kubelet-bootstrap \
--token=${BOOTSTRAP_TOKEN} \
--kubeconfig=bootstrap.kubeconfig
# set the context
kubectl config set-context default \
--cluster=kubernetes \
--user=kubelet-bootstrap \
--kubeconfig=bootstrap.kubeconfig
# select the default context
kubectl config use-context default \
--kubeconfig=bootstrap.kubeconfig
Create the kube-proxy kubeconfig:
export KUBE_APISERVER="https://172.16.163.131:6443"
# build the kube-proxy kubeconfig file
kubectl config set-cluster kubernetes \
--certificate-authority=./ca.pem \
--embed-certs=true \
--server=${KUBE_APISERVER} \
--kubeconfig=kube-proxy.kubeconfig
kubectl config set-credentials kube-proxy \
--client-certificate=./kube-proxy.pem \
--client-key=./kube-proxy-key.pem \
--embed-certs=true \
--kubeconfig=kube-proxy.kubeconfig
kubectl config set-context default \
--cluster=kubernetes \
--user=kube-proxy \
--kubeconfig=kube-proxy.kubeconfig
kubectl config use-context default \
--kubeconfig=kube-proxy.kubeconfig
Copy bootstrap.kubeconfig and kube-proxy.kubeconfig to every node; note that the kubelet.sh and proxy.sh scripts below expect them under /opt/kubernetes/cfg/:
scp bootstrap.kubeconfig kube-proxy.kubeconfig root@172.16.163.129:/opt/kubernetes/cfg/
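To push both files to every worker in one go, a small loop can help (a sketch; the node IPs are the two example workers used elsewhere in this guide, and the destination path matches what kubelet.sh and proxy.sh reference):

```shell
# Copy the two kubeconfig files to each worker node via scp.
push_kubeconfigs() {
  local n
  for n in 172.16.163.130 172.16.163.129; do   # assumed worker IPs; adjust
    scp bootstrap.kubeconfig kube-proxy.kubeconfig "root@${n}:/opt/kubernetes/cfg/"
  done
}
# Usage: push_kubeconfigs
```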
8. Node components
Move the binaries into place, make everything executable, then run the install scripts (written below) on each node:
mv kubelet kube-proxy /opt/kubernetes/bin
chmod +x /opt/kubernetes/bin/* && chmod +x *.sh
./kubelet.sh 172.16.163.130 10.10.10.2
./proxy.sh 172.16.163.130
Write kubelet.sh:
[root@node1 ~]# cat kubelet.sh
#!/bin/bash

NODE_ADDRESS=${1:-"192.168.1.196"}
DNS_SERVER_IP=${2:-"10.10.10.2"}

cat <<EOF >/opt/kubernetes/cfg/kubelet
KUBELET_OPTS="--logtostderr=true \\
--v=4 \\
--address=${NODE_ADDRESS} \\
--hostname-override=${NODE_ADDRESS} \\
--kubeconfig=/opt/kubernetes/cfg/kubelet.kubeconfig \\
--experimental-bootstrap-kubeconfig=/opt/kubernetes/cfg/bootstrap.kubeconfig \\
--cert-dir=/opt/kubernetes/ssl \\
--allow-privileged=true \\
--cluster-dns=${DNS_SERVER_IP} \\
--cluster-domain=cluster.local \\
--fail-swap-on=false \\
--pod-infra-container-image=registry.cn-hangzhou.aliyuncs.com/google-containers/pause-amd64:3.0"
EOF

cat <<EOF >/usr/lib/systemd/system/kubelet.service
[Unit]
Description=Kubernetes Kubelet
After=docker.service
Requires=docker.service

[Service]
EnvironmentFile=-/opt/kubernetes/cfg/kubelet
ExecStart=/opt/kubernetes/bin/kubelet \$KUBELET_OPTS
Restart=on-failure
KillMode=process

[Install]
WantedBy=multi-user.target
EOF

systemctl daemon-reload
systemctl enable kubelet
systemctl restart kubelet
This generates the kubelet.service and kubelet files.
kubelet.service contents:
[root@node1 ~]# cat /usr/lib/systemd/system/kubelet.service
[Unit]
Description=Kubernetes Kubelet
After=docker.service
Requires=docker.service

[Service]
EnvironmentFile=-/opt/kubernetes/cfg/kubelet
ExecStart=/opt/kubernetes/bin/kubelet $KUBELET_OPTS
Restart=on-failure
KillMode=process

[Install]
WantedBy=multi-user.target
Write proxy.sh:
[root@node1 ~]# cat proxy.sh
#!/bin/bash

NODE_ADDRESS=${1:-"192.168.1.200"}

cat <<EOF >/opt/kubernetes/cfg/kube-proxy
KUBE_PROXY_OPTS="--logtostderr=true \\
--v=4 \\
--hostname-override=${NODE_ADDRESS} \\
--kubeconfig=/opt/kubernetes/cfg/kube-proxy.kubeconfig"
EOF

cat <<EOF >/usr/lib/systemd/system/kube-proxy.service
[Unit]
Description=Kubernetes Proxy
After=network.target

[Service]
EnvironmentFile=-/opt/kubernetes/cfg/kube-proxy
ExecStart=/opt/kubernetes/bin/kube-proxy \$KUBE_PROXY_OPTS
Restart=on-failure

[Install]
WantedBy=multi-user.target
EOF

systemctl daemon-reload
systemctl enable kube-proxy
systemctl restart kube-proxy
This generates the kube-proxy.service and kube-proxy files.
kube-proxy.service contents:
[root@node1 ~]# cat /usr/lib/systemd/system/kube-proxy.service
[Unit]
Description=Kubernetes Proxy
After=network.target

[Service]
EnvironmentFile=-/opt/kubernetes/cfg/kube-proxy
ExecStart=/opt/kubernetes/bin/kube-proxy $KUBE_PROXY_OPTS
Restart=on-failure

[Install]
WantedBy=multi-user.target
Once that is done, grant the bootstrap user the node-bootstrapper role:
kubectl create clusterrolebinding kubelet-bootstrap \
--clusterrole=system:node-bootstrapper \
--user=kubelet-bootstrap
Output: clusterrolebinding.rbac.authorization.k8s.io/kubelet-bootstrap created
List the certificate signing requests:
[root@master ~]# kubectl get csr
NAME                                                   AGE     REQUESTOR           CONDITION
node-csr-81F5uBehyEyLWco5qavBsxc1GzFcZk3aFM3XW5rT3mw   5m52s   kubelet-bootstrap   Pending
node-csr-Ed0kbFhc_q7qx14H3QpqLIUs0uKo036O2SnFpIheM18   6m56s   kubelet-bootstrap   Pending
Approve them on the master:
[root@master ~]# kubectl certificate approve node-csr-81F5uBehyEyLWco5qavBsxc1GzFcZk3aFM3XW5rT3mw node-csr-Ed0kbFhc_q7qx14H3QpqLIUs0uKo036O2SnFpIheM18
certificatesigningrequest.certificates.k8s.io/node-csr-81F5uBehyEyLWco5qavBsxc1GzFcZk3aFM3XW5rT3mw approved
certificatesigningrequest.certificates.k8s.io/node-csr-Ed0kbFhc_q7qx14H3QpqLIUs0uKo036O2SnFpIheM18 approved
Check the node list:
[root@master ~]# kubectl get nodes
NAME             STATUS    ROLES     AGE   VERSION
172.16.163.129   Ready     <none>    18s   v1.9.0
172.16.163.130   Ready     <none>    19s   v1.9.0