k8s worker node - kubelet
1. Certificate preparation (done on node200)
- Prepare the certificate signing request (CSR)
vi /opt/certs/kubelet-server-csr.json
{
"CN": "kubelet-server",
"hosts": [
"127.0.0.1",
"172.10.10.10",
"172.10.10.21",
"172.10.10.22"
],
"key": {
"algo": "rsa",
"size": 2048
},
"names": [
{
"C": "CN",
"ST": "beijing",
"L": "beijing",
"O": "hzw",
"OU": "hzwself"
}
]
}
The CSR file lists the node IPs in hosts, so if more nodes are added later this certificate has to be re-issued. Is there a way to simplify that?
- Generate the server certificate
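A common way to reduce re-signing (a sketch, not part of the original steps): reserve a few spare node IPs in hosts when the certificate is first issued, so a node added later on one of the reserved addresses needs no new certificate. The extra addresses below are hypothetical:
"hosts": [
"127.0.0.1",
"172.10.10.10",
"172.10.10.21",
"172.10.10.22",
"172.10.10.23",
"172.10.10.24"
]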
cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=server kubelet-server-csr.json | cfssljson -bare kubelet-server
[root@node200 certs]# ll kubelet*
-rw-r--r-- 1 root root 1074 Mar 23 23:48 kubelet-server.csr
-rw-r--r-- 1 root root 365 Mar 23 23:48 kubelet-server-csr.json
-rw------- 1 root root 1679 Mar 23 23:48 kubelet-server-key.pem
-rw-r--r-- 1 root root 1318 Mar 23 23:48 kubelet-server.pem
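To confirm the node IPs actually ended up in the signed certificate's SAN list (an optional check):
openssl x509 -in kubelet-server.pem -noout -text | grep -A 1 "Subject Alternative Name"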
- Copy the certificates to the node
scp kubelet-server*.pem root@node21:/opt/kubernetes/server/bin/cert
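node22 is installed the same way later, so copy the certificates there as well (assuming the same directory layout on node22):
scp kubelet-server*.pem root@node22:/opt/kubernetes/server/bin/cert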
2. kubelet configuration
Symlink kubectl as a global command
ln -s /opt/kubernetes/server/bin/kubectl /usr/bin/kubectl
Generate the kubelet.kubeconfig file
- set-cluster
cd /opt/kubernetes/server/bin/conf
The following commands write the kubeconfig into the current directory; we generate it under the conf directory, and all subsequent commands are run from conf.
[root@node21 conf]# kubectl config set-cluster myk8s \
--certificate-authority=/opt/kubernetes/server/bin/cert/ca.pem \
--embed-certs=true \
--server=https://172.10.10.10:7443 \
--kubeconfig=kubelet.kubeconfig
Cluster "myk8s" set.
--certificate-authority
the CA certificate;
--server
the apiserver address (here the VIP address);
--kubeconfig
the kubeconfig file to write to
--embed-certs=true
whether to embed the certificate contents into the kubeconfig
true: the certificate contents are copied into kubelet.kubeconfig, so the file no longer depends on the local certificate files
false: kubelet.kubeconfig only stores the certificate paths and still depends on the local certificate files
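To confirm the CA really was embedded (an optional check), look for the certificate-authority-data field in the generated file:
grep certificate-authority-data kubelet.kubeconfig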
- set-credentials
[root@node21 conf]# kubectl config set-credentials k8s-node-client \
--client-certificate=/opt/kubernetes/server/bin/cert/k8s-node-client.pem \
--client-key=/opt/kubernetes/server/bin/cert/k8s-node-client-key.pem \
--embed-certs=true \
--kubeconfig=kubelet.kubeconfig
User "k8s-node" set.
This step writes the client certificate into kubelet.kubeconfig; it is what kubelet uses to talk to the apiserver.
Note: the user kubelet later authenticates as is the CN of this client certificate, so decide on the user name when the client certificate is created (in this walkthrough it is k8s-node-client); further below a clusterrolebinding grants that user name the required permissions.
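Since the user name comes from the certificate CN, it is worth double-checking it (an optional openssl check):
openssl x509 -in /opt/kubernetes/server/bin/cert/k8s-node-client.pem -noout -subject
The CN shown here is the user name that the clusterrolebinding below must reference.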
- set-context
[root@node21 conf]# kubectl config set-context myk8s-context \
--cluster=myk8s \
--user=k8s-node-client \
--kubeconfig=kubelet.kubeconfig
Context "myk8s-context" created.
Key point: the certificate CN is k8s-node-client.
The user is set to k8s-node-client; below, RBAC is used to grant k8s-node-client the permissions of a worker node (the system:node ClusterRole).
- use-context
Switch to the new context
[root@node21 conf]# kubectl config use-context myk8s-context --kubeconfig=kubelet.kubeconfig
Switched to context "myk8s-context".
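The kubeconfig is now complete; a quick sanity check (optional):
kubectl config get-contexts --kubeconfig=kubelet.kubeconfig
The current context should be myk8s-context, with cluster myk8s and user k8s-node-client.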
Bind the system:node role to the k8s-node-client user
- Create the ClusterRoleBinding resource manifest
vi rbac_k8s-node-client.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: k8s-node-client
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:node
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: User
  name: k8s-node-client
What this manifest does: it defines a ClusterRoleBinding whose subject is the user "k8s-node-client" (i.e. the client certificate's CN; no user object is actually created) and grants it the system:node ClusterRole, giving it the permissions of a worker node.
- Apply the resource manifest
kubectl create -f rbac_k8s-node-client.yaml
kubectl creates the resource from rbac_k8s-node-client.yaml and stores it in etcd.
An equivalent imperative command for the two steps above (shown here for reference; the different ways of managing resources are covered later):
~]# kubectl create clusterrolebinding k8s-node-client --clusterrole=system:node --user=k8s-node-client
Generating the resource manifest from the imperative command:
kubectl create clusterrolebinding k8s-node-client --clusterrole=system:node --user=k8s-node-client --dry-run -o yaml
This prints a declarative manifest; applying it with kubectl apply -f xxx.yaml has the same effect, and its content matches the rbac_k8s-node-client.yaml above.
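The dry-run output can also be redirected straight into the manifest file instead of writing it by hand:
kubectl create clusterrolebinding k8s-node-client --clusterrole=system:node --user=k8s-node-client --dry-run -o yaml > rbac_k8s-node-client.yaml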
- Inspect the created clusterrolebinding resource
[root@n100 ~]# kubectl get clusterrolebindings.rbac.authorization.k8s.io k8s-node-client
NAME AGE
k8s-node-client 3h21m
[root@n100 ~]# kubectl get clusterrolebindings.rbac.authorization.k8s.io k8s-node-client -o yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"rbac.authorization.k8s.io/v1","kind":"ClusterRoleBinding","metadata":{"annotations":{},"name":"k8s-node-client"},"roleRef":{"apiGroup":"rbac.authorization.k8s.io","kind":"ClusterRole","name":"system:node"},"subjects":[{"apiGroup":"rbac.authorization.k8s.io","kind":"User","name":"k8s-node-client"}]}
  creationTimestamp: "2021-04-26T15:26:50Z"
  name: k8s-node-client
  resourceVersion: "7413"
  selfLink: /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/k8s-node-client
  uid: ca026704-d8a8-4a65-9763-d4ea3626e636
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:node
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: User
  name: k8s-node-client
Prepare the pause base image
kubelet starts a pause container first to set up the UTS, NET and IPC namespaces for the business containers.
pause is part of the Kubernetes infrastructure: the other containers in a pod share its namespaces and communicate with other pods through it; it is the first container started in a pod.
docker pull kubernetes/pause
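The startup script below points --pod-infra-container-image at harbor.hzwod.com/k8s/pause:latest, so the pause image also has to reach the private registry; a sketch (the tag simply mirrors the flag value):
docker tag kubernetes/pause harbor.hzwod.com/k8s/pause:latest
docker push harbor.hzwod.com/k8s/pause:latest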
Prepare the infra_pod base image. What is this image for? It is a pod-infrastructure image, i.e. the Red Hat build of the same "infra" container role that pause plays; kubelet runs whichever image --pod-infra-container-image points to.
Pull the image and push it to our private registry
docker pull xplenty/rhel7-pod-infrastructure:v3.4
Pull the image
docker tag 34d3450d733b harbor.hzwod.com/k8s/pod:v3.4
Tag it for our own registry
docker login harbor.hzwod.com
Log in to our private registry
docker push harbor.hzwod.com/k8s/pod:v3.4
Push it to the private registry
Other nodes can then pull the image:
docker pull harbor.hzwod.com/k8s/pod:v3.4
Start kubelet
Startup script
vi /opt/kubernetes/server/bin/kubelet-1021.sh
#!/bin/sh
# --anonymous-auth=false      : reject anonymous requests; clients must authenticate
# --cgroup-driver systemd     : must match docker's cgroup driver
# --cluster-dns / --cluster-domain : coredns-related settings
# --hostname-override         : the node name registered with the apiserver
# --pod-infra-container-image : the pause image
./kubelet \
  --anonymous-auth=false \
  --cgroup-driver systemd \
  --cluster-dns 192.168.0.2 \
  --cluster-domain cluster.local \
  --runtime-cgroups=/systemd/system.slice \
  --kubelet-cgroups=/systemd/system.slice \
  --fail-swap-on="false" \
  --client-ca-file ./cert/ca.pem \
  --tls-cert-file ./cert/kubelet-server.pem \
  --tls-private-key-file ./cert/kubelet-server-key.pem \
  --hostname-override 172.10.10.21 \
  --image-gc-high-threshold 20 \
  --image-gc-low-threshold 10 \
  --kubeconfig ./conf/kubelet.kubeconfig \
  --log-dir /data/logs/kubernetes/kube-kubelet \
  --pod-infra-container-image harbor.hzwod.com/k8s/pause:latest \
  --root-dir /data/kubelet
By default kubelet requires swap to be disabled on the worker node; --fail-swap-on="false"
tells kubelet to ignore whether swap is enabled, otherwise it refuses to start on a host with swap on.
Whether to actually disable swap depends on the situation: with plenty of physical memory, swap can simply be turned off; with less memory, keeping swap enabled lets the kernel move rarely used resident pages out to swap.
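If you do decide to disable swap on the node instead (a sketch):
swapoff -a
Then comment out the swap entry in /etc/fstab so it stays off after a reboot:
sed -ri 's/.*swap.*/#&/' /etc/fstab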
chmod +x kubelet-1021.sh
mkdir -p /data/logs/kubernetes/kube-kubelet /data/kubelet
We run Kubernetes 1.17; on older kernels (below 5.x) some features are not supported and have to be disabled at startup, e.g.:
--feature-gates=SupportPodPidsLimit=false,SupportNodePidsLimit=false
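If it is needed, add it to kubelet-1021.sh as one more continued flag, for example right before the final --root-dir line (the placement here is only an illustration):
  --feature-gates=SupportPodPidsLimit=false,SupportNodePidsLimit=false \
  --root-dir /data/kubelet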
Run under supervisor
- Configuration file
vi /etc/supervisord.d/kube-kubelet.ini
[program:kube-kubelet-10.21]
command=/opt/kubernetes/server/bin/kubelet-1021.sh
numprocs=1
directory=/opt/kubernetes/server/bin
autostart=true
autorestart=true
startsecs=22
startretries=3
exitcodes=0,2
stopsignal=QUIT
stopasgroup=true
stopwaitsecs=10
user=root
redirect_stderr=false
stdout_logfile=/data/logs/kubernetes/kube-kubelet/kubelet.stdout.log
stdout_logfile_maxbytes=64MB
stdout_logfile_backups=4
stdout_capture_maxbytes=1MB
stdout_events_enabled=false
stderr_logfile=/data/logs/kubernetes/kube-kubelet/kubelet.stderr.log
stderr_logfile_maxbytes=64MB
stderr_logfile_backups=4
stderr_capture_maxbytes=1MB
stderr_events_enabled=false
- Start
[root@node21 bin]# supervisorctl update
kube-kubelet-10.21: added process group
[root@node21 bin]# supervisorctl status
etcd-server-10.21 RUNNING pid 2224, uptime 7:19:26
kube-apiserver-10.21 RUNNING pid 2225, uptime 7:19:26
kube-controller-manager-10.21 RUNNING pid 3337, uptime 0:08:47
kube-kubelet-10.21 STARTING
kube-scheduler-10.21 RUNNING pid 3345, uptime 0:08:45
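If kube-kubelet-10.21 does not reach RUNNING, the log files configured above are the first place to look:
tail -f /data/logs/kubernetes/kube-kubelet/kubelet.stdout.log
tail -f /data/logs/kubernetes/kube-kubelet/kubelet.stderr.log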
Verification
[root@node21 ~]# kubectl get node
NAME STATUS ROLES AGE VERSION
172.10.10.21 Ready <none> 65m v1.17.16
Install node22 in the same way; the node cluster is correct when it looks like this:
[root@node22 kubernetes]# kubectl get node
NAME STATUS ROLES AGE VERSION
172.10.10.21 Ready <none> 12h v1.17.16
172.10.10.22 Ready <none> 32s v1.17.16