1. Cluster planning
Hostname | Role | IP |
h134.host.com | kubelet | 192.168.146.134 |
h135.host.com | kubelet | 192.168.146.135 |
Note: h134 is used as the deployment example below; the same steps apply to h135 and are omitted.
2. Issue the certificate
Host: h136
This certificate is used by the kubelet when it acts as a server.
2.1 Create the JSON config file for the certificate signing request (CSR)
vim /opt/certs/kubelet-csr.json
{
    "CN": "k8s-kubelet",
    "hosts": [
        "127.0.0.1",
        "192.168.146.2",
        "192.168.146.130",
        "192.168.146.134",
        "192.168.146.135",
        "192.168.146.150"
    ],
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "ST": "beijing",
            "L": "beijing",
            "O": "od",
            "OU": "ops"
        }
    ]
}
2.2 Generate the certificate and private key
cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=server kubelet-csr.json | cfssl-json -bare kubelet
#### Three new files are generated (alongside the existing kubelet-csr.json):
-rw-r--r-- 1 root root 1082 Jul 8 01:51 kubelet.csr
-rw-r--r-- 1 root root 395 Jul 8 01:42 kubelet-csr.json
-rw------- 1 root root 1679 Jul 8 01:51 kubelet-key.pem
-rw-r--r-- 1 root root 1436 Jul 8 01:51 kubelet.pem
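Before copying the certificate around, it can be worth confirming that all the SANs from kubelet-csr.json actually made it in. Assuming cfssl-certinfo is installed alongside cfssl (as in the earlier certificate steps of this series):

```shell
# Inspect the generated server certificate; the "sans" field should list
# every IP from kubelet-csr.json (127.0.0.1, 192.168.146.134, ...).
cfssl-certinfo -cert kubelet.pem
```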
2.3 Copy the certificates to the node hosts
Copy kubelet.pem and kubelet-key.pem to /opt/kubernetes/server/bin/cert on h134 and h135 (process omitted).
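The copy step can be done with scp, for example (paths as used in this section; run on h136 and authenticate as needed):

```shell
# Push the server cert and key from h136 to each node's cert directory.
cd /opt/certs
for node in h134.host.com h135.host.com; do
  scp kubelet.pem kubelet-key.pem ${node}:/opt/kubernetes/server/bin/cert/
done
```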
3. kubelet
Hosts: h134, h135
3.1 Create the kubelet kubeconfig
Run the kubectl commands below from /opt/kubernetes/server/bin/conf.
3.1.1 set-cluster
[root@h134 conf]# kubectl config set-cluster myk8s \
--certificate-authority=/opt/kubernetes/server/bin/cert/ca.pem \
--embed-certs=true \
--server=https://192.168.146.130:7443 \
--kubeconfig=kubelet.kubeconfig
###### Output:
Cluster "myk8s" set.
###### This creates the kubelet.kubeconfig file in the conf directory.
3.1.2 set-credentials
[root@h134 conf]# kubectl config set-credentials k8s-node \
--client-certificate=/opt/kubernetes/server/bin/cert/client.pem \
--client-key=/opt/kubernetes/server/bin/cert/client-key.pem \
--embed-certs=true \
--kubeconfig=kubelet.kubeconfig
###### Output:
User "k8s-node" set.
3.1.3 set-context
[root@h134 conf]# kubectl config set-context myk8s-context \
--cluster=myk8s \
--user=k8s-node \
--kubeconfig=kubelet.kubeconfig
##### Output:
Context "myk8s-context" created.
3.1.4 use-context
[root@h134 conf]# kubectl config use-context myk8s-context --kubeconfig=kubelet.kubeconfig
Switched to context "myk8s-context".
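The four set-* steps above can be sanity-checked by viewing the resulting file (both are standard kubectl config subcommands; the embedded certificates appear as DATA+OMITTED/REDACTED because of --embed-certs):

```shell
# Confirm the cluster, user, and current context are wired together.
kubectl config view --kubeconfig=kubelet.kubeconfig
kubectl config current-context --kubeconfig=kubelet.kubeconfig
```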
3.2 kubelet role binding
Create the resource manifest. (The following only needs to run on one of h134/h135: the binding ends up stored in etcd, so the verification commands afterwards return the same result on both hosts.)
vim /opt/kubernetes/server/bin/conf/k8s-node.yaml
Note: the manifest below grants the user k8s-node the built-in cluster role named system:node.
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: k8s-node
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:node
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: User
  name: k8s-node
Create the binding
[root@h135 conf]# kubectl create -f k8s-node.yaml
clusterrolebinding.rbac.authorization.k8s.io/k8s-node created
Verify the result
kubectl get clusterrolebinding k8s-node
kubectl get clusterrolebinding k8s-node -o yaml
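As a further optional sanity check, `kubectl auth can-i` (a standard subcommand) can impersonate the k8s-node user against the live cluster; node-level reads are expected to be among the permissions carried by system:node:

```shell
# Ask the apiserver whether the impersonated user k8s-node may read nodes.
kubectl auth can-i get nodes --as k8s-node
```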
3.3 kubelet startup script
3.3.1 Deploy the pause base image
Host: h136
The pause image is pulled first because the kubelet starts a pause (infra) container before any business container in a pod: the pause container holds the pod's shared namespaces, so the pod's IP is allocated before the business containers come up. This shared-namespace arrangement is also what makes the sidecar pattern possible.
docker pull kubernetes/pause
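The startup script below pulls pause from the private registry harbor.od.com (as configured in --pod-infra-container-image), so after pulling from Docker Hub the image is typically retagged and pushed there. A sketch, assuming the harbor setup from earlier parts of this series and valid login credentials:

```shell
# Retag the Docker Hub image for the private registry and push it.
docker tag kubernetes/pause:latest harbor.od.com/public/pause:latest
docker login harbor.od.com
docker push harbor.od.com/public/pause:latest
```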
3.3.2 kubelet startup script
Hosts: h134, h135. Note that the --hostname-override parameter must be changed to match the host the script is deployed on.
#!/bin/bash
./kubelet \
--anonymous-auth=false \
--cgroup-driver systemd \
--cluster-dns 192.168.0.2 \
--cluster-domain cluster.local \
--runtime-cgroups=/systemd/system.slice \
--kubelet-cgroups=/systemd/system.slice \
--fail-swap-on="false" \
--client-ca-file ./cert/ca.pem \
--tls-cert-file ./cert/kubelet.pem \
--tls-private-key-file ./cert/kubelet-key.pem \
--hostname-override h135.host.com \
--image-gc-high-threshold 20 \
--image-gc-low-threshold 10 \
--kubeconfig ./conf/kubelet.kubeconfig \
--log-dir /data/logs/kubernetes/kube-kubelet \
--pod-infra-container-image harbor.od.com/public/pause:latest \
--root-dir /data/kubelet
Create the log directory
mkdir -p /data/logs/kubernetes/kube-kubelet
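Since the same kubelet.sh is deployed to both hosts, a hypothetical helper can stamp the local FQDN into the script instead of editing it by hand. Demonstrated here against a throwaway copy; on a real node you would run the sed line on /opt/kubernetes/server/bin/kubelet.sh with $(hostname -f), then chmod +x it:

```shell
# Demo on a throwaway copy: replace whatever FQDN currently follows
# --hostname-override with this node's name (h134.host.com assumed).
HOST_FQDN=h134.host.com
printf '%s\n' './kubelet \' '--hostname-override h135.host.com \' > /tmp/kubelet-demo.sh
sed -i "s/--hostname-override [^ ]*/--hostname-override ${HOST_FQDN}/" /tmp/kubelet-demo.sh
grep -- "--hostname-override ${HOST_FQDN}" /tmp/kubelet-demo.sh
```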
Hand the kubelet service over to supervisord: vim /etc/supervisord.d/kube-kubelet.ini (adjust the program name per host, e.g. kube-kubelet-h135 on h135)
[program:kube-kubelet-h134]
command=/opt/kubernetes/server/bin/kubelet.sh
numprocs=1
directory=/opt/kubernetes/server/bin
autostart=true
autorestart=true
startsecs=30
startretries=3
exitcodes=0,2
stopsignal=QUIT
stopwaitsecs=10
user=root
redirect_stderr=true
stdout_logfile=/data/logs/kubernetes/kube-kubelet/kubelet.stdout.log
stdout_logfile_maxbytes=64MB
stdout_logfile_backups=4
stdout_capture_maxbytes=1MB
stdout_events_enabled=false
Load and check the new program:
supervisorctl update
supervisorctl status
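If the program does not reach RUNNING, the stdout log configured above is the first place to look:

```shell
# Check the most recent kubelet output captured by supervisord.
tail -n 50 /data/logs/kubernetes/kube-kubelet/kubelet.stdout.log
```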
After kubelet starts, check that the node has joined the cluster:
kubectl get nodes
#### Output:
NAME STATUS ROLES AGE VERSION
h134.host.com Ready <none> 9m21s v1.18.20
h135.host.com Ready <none> 88s v1.18.20
Label the two nodes:
[root@h134 conf]# kubectl label node h134.host.com node-role.kubernetes.io/master=
[root@h134 conf]# kubectl label node h134.host.com node-role.kubernetes.io/node=
[root@h135 conf]# kubectl label node h135.host.com node-role.kubernetes.io/node=
### kubectl get nodes now shows:
NAME STATUS ROLES AGE VERSION
h134.host.com Ready master,node 43m v1.18.20
h135.host.com Ready node 35m v1.18.20