This article follows "Kubernetes 1.9 Production High-Availability Practice – 004: Installing the flannel network plugin on the nodes".
It covers how to install the Kubernetes 1.9 kubelet and kube-proxy on the server yds-dev-svc02-node01.
Throughout the configuration I copy out the full output of every command for reference; the shell prompts also make clear which server each command is executed on.
01 Preparing the files
01.01 Download the required files
In "Kubernetes 1.9 Production High-Availability Practice – 002" we already downloaded all the binaries needed for the cluster installation. The download address is: https://pan.baidu.com/s/1wyhV_kBpIqZ_MdS2Ghb8sg
In this part we use two of those files: kubelet and kube-proxy.
Next, let's start the configuration.
02 Configuring the kubelet
02.01 Prepare the kubelet binary
Copy the kubelet binary into the /usr/bin/ directory and make it executable.
[root@yds-dev-svc02-node01 ~]# cp kubelet /usr/bin/
[root@yds-dev-svc02-node01 ~]# chmod +x /usr/bin/kubelet
02.02 Pull the pod-infrastructure image
[root@yds-dev-svc02-node01 ssl]# yum install *rhsm*
[root@yds-dev-svc02-node01 ssl]# docker pull registry.access.redhat.com/rhel7/pod-infrastructure:latest
02.03 Prepare the certificate files
We still need to create the certificate files for kube-proxy.
As before, go back to the server yds-dev-svc01-etcd01 to create them.
Create kube-proxy-csr.json:
[root@yds-dev-svc01-etcd01 key]# cat kube-proxy-csr.json
{
  "CN": "system:kube-proxy",
  "hosts": [],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "chengdu",
      "L": "chengdu",
      "O": "k8s",
      "OU": "System"
    }
  ]
}
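Before running cfssl, it is worth confirming the file is well-formed, since cfssl gives an unhelpful error on malformed JSON. A quick sanity check (a sketch; it uses a temporary copy under /tmp and python3 purely for JSON validation):

```shell
# Write the CSR spec to a temporary file and verify it parses as JSON.
cat > /tmp/kube-proxy-csr.json <<'EOF'
{
  "CN": "system:kube-proxy",
  "hosts": [],
  "key": { "algo": "rsa", "size": 2048 },
  "names": [
    { "C": "CN", "ST": "chengdu", "L": "chengdu", "O": "k8s", "OU": "System" }
  ]
}
EOF
python3 -m json.tool /tmp/kube-proxy-csr.json > /dev/null \
  && echo "kube-proxy-csr.json is valid JSON"
```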
Create the certificate with cfssl:
cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes kube-proxy-csr.json | cfssljson -bare kube-proxy
Check the generated files:
[root@yds-dev-svc01-etcd01 key]# ls kube-proxy*
kube-proxy.csr kube-proxy-csr.json kube-proxy-key.pem kube-proxy.pem
[root@yds-dev-svc01-etcd01 key]# pwd
/tmp/key
02.04 Create the kube-proxy kubeconfig file
* Configure the cluster *
kubectl config set-cluster kubernetes \
--certificate-authority=/tmp/key/ca.pem \
--embed-certs=true \
--server=https://192.168.3.55:6443 \
--kubeconfig=kube-proxy.kubeconfig
* Configure client credentials *
kubectl config set-credentials kube-proxy \
--client-certificate=/tmp/key/kube-proxy.pem \
--client-key=/tmp/key/kube-proxy-key.pem \
--embed-certs=true \
--kubeconfig=kube-proxy.kubeconfig
* Configure the context *
kubectl config set-context default \
--cluster=kubernetes \
--user=kube-proxy \
--kubeconfig=kube-proxy.kubeconfig
* Set the default context *
kubectl config use-context default --kubeconfig=kube-proxy.kubeconfig
After these commands complete, the kube-proxy.kubeconfig file is generated. Next, copy this file into the node's /etc/kubernetes directory.
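For reference, a kubeconfig generated this way follows the standard layout sketched below; the base64 certificate blobs (embedded because of --embed-certs=true) are shown as placeholders:

```yaml
apiVersion: v1
kind: Config
clusters:
- cluster:
    certificate-authority-data: <base64 ca.pem>
    server: https://192.168.3.55:6443
  name: kubernetes
contexts:
- context:
    cluster: kubernetes
    user: kube-proxy
  name: default
current-context: default
preferences: {}
users:
- name: kube-proxy
  user:
    client-certificate-data: <base64 kube-proxy.pem>
    client-key-data: <base64 kube-proxy-key.pem>
```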
02.05 Create the bootstrap role binding
This step is executed on yds-dev-svc01-master01, where kubectl is installed.
When the kubelet starts, it sends a TLS bootstrapping request to the kube-apiserver. The kubelet-bootstrap user from the bootstrap token file must first be bound to the system:node-bootstrapper cluster role; only then does the kubelet have permission to create certificate signing requests:
[root@yds-dev-svc01-master01 ~]# cd /etc/kubernetes/
[root@yds-dev-svc01-master01 kubernetes]# ls
apiserver config controller-manager scheduler ssl token.csv
[root@yds-dev-svc01-master01 kubernetes]# kubectl create clusterrolebinding kubelet-bootstrap \
> --clusterrole=system:node-bootstrapper \
> --user=kubelet-bootstrap
clusterrolebinding "kubelet-bootstrap" created
Check the result:
[root@yds-dev-svc01-master01 kubernetes]# kubectl get clusterrolebinding
NAME AGE
cluster-admin 8d
kubelet-bootstrap 3m
system:aws-cloud-provider 8d
system:basic-user 8d
system:controller:attachdetach-controller 8d
system:controller:certificate-controller 8d
system:controller:clusterrole-aggregation-controller 8d
system:controller:cronjob-controller 8d
system:controller:daemon-set-controller 8d
system:controller:deployment-controller 8d
system:controller:disruption-controller 8d
system:controller:endpoint-controller 8d
system:controller:generic-garbage-collector 8d
system:controller:horizontal-pod-autoscaler 8d
system:controller:job-controller 8d
system:controller:namespace-controller 8d
system:controller:node-controller 8d
system:controller:persistent-volume-binder 8d
system:controller:pod-garbage-collector 8d
system:controller:replicaset-controller 8d
system:controller:replication-controller 8d
system:controller:resourcequota-controller 8d
system:controller:route-controller 8d
system:controller:service-account-controller 8d
system:controller:service-controller 8d
system:controller:statefulset-controller 8d
system:controller:ttl-controller 8d
system:discovery 8d
system:kube-controller-manager 8d
system:kube-dns 8d
system:kube-scheduler 8d
system:node 8d
system:node-proxier 8d
View the description:
[root@yds-dev-svc01-master01 kubernetes]# kubectl describe clusterrolebinding kubelet-bootstrap
Name: kubelet-bootstrap
Labels: <none>
Annotations: <none>
Role:
Kind: ClusterRole
Name: system:node-bootstrapper
Subjects:
Kind Name Namespace
---- ---- ---------
User kubelet-bootstrap
View the full object:
[root@yds-dev-svc01-master01 kubernetes]# kubectl edit clusterrolebinding kubelet-bootstrap
# Please edit the object below. Lines beginning with a '#' will be ignored,
# and an empty file will abort the edit. If an error occurs while saving this file will be
# reopened with the relevant failures.
#
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  creationTimestamp: 2018-04-17T08:01:22Z
  name: kubelet-bootstrap
  resourceVersion: "528680"
  selfLink: /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/kubelet-bootstrap
  uid: 851e77fc-4215-11e8-b786-000c2948d8a8
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:node-bootstrapper
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: User
  name: kubelet-bootstrap
02.06 Create the kubelet configuration file
The configuration file path is /etc/kubernetes/kubelet.
[root@yds-dev-svc02-node01 ~]# cat /etc/kubernetes/kubelet
###
## kubernetes kubelet (minion) config
#
## The address for the info server to serve on (set to 0.0.0.0 or "" for all interfaces)
KUBELET_ADDRESS="--address=192.168.3.56"
#
## The port for the info server to serve on
#KUBELET_PORT="--port=10250"
#
## You may leave this blank to use the actual hostname
KUBELET_HOSTNAME="--hostname-override=yds-dev-svc02-node01"
#
## pod infrastructure container
KUBELET_POD_INFRA_CONTAINER="--pod-infra-container-image=registry.access.redhat.com/rhel7/pod-infrastructure:latest"
#
## Add your own!
KUBELET_ARGS="--runtime-cgroups=/systemd/system.slice --kubelet-cgroups=/systemd/system.slice --cgroup-driver=systemd --fail-swap-on=false --experimental-bootstrap-kubeconfig=/etc/kubernetes/bootstrap.kubeconfig --kubeconfig=/etc/kubernetes/kubelet.kubeconfig --cluster-dns=10.254.0.2 --cert-dir=/etc/kubernetes/ssl --cluster-domain=cluster.local. --hairpin-mode promiscuous-bridge --serialize-image-pulls=false"
KUBELET_ADDRESS: the IP address of this node.
KUBELET_HOSTNAME: the hostname of this node; the most visible effect of this setting is the node name shown by 'kubectl get nodes'.
KUBELET_API_SERVER: the apiserver address we configured earlier (not set in this file; with TLS bootstrapping the apiserver address comes from the kubeconfig files).
cert-dir: the directory where the automatically generated certificates are stored.
tls-cert-file: the x509 certificate the kubelet serves HTTPS with (not set here; one is generated under cert-dir).
tls-private-key-file: the matching private key (likewise generated under cert-dir).
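Because systemd EnvironmentFile entries are plain KEY="value" pairs, a quick way to sanity-check the file's syntax is to source it in a shell and confirm the variables expand. This is only a sketch against a minimal stand-in file (systemd does not actually run the file through a shell, so this catches only simple syntax errors):

```shell
# Minimal stand-in for /etc/kubernetes/kubelet.
cat > /tmp/kubelet.env <<'EOF'
KUBELET_ADDRESS="--address=192.168.3.56"
KUBELET_HOSTNAME="--hostname-override=yds-dev-svc02-node01"
EOF

# Source the file and confirm each variable expands to its flag.
. /tmp/kubelet.env
echo "address flag:  $KUBELET_ADDRESS"
echo "hostname flag: $KUBELET_HOSTNAME"
```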
02.07 Create the common config file
[root@yds-dev-svc02-node01 ~]# cat /etc/kubernetes/config
###
# kubernetes system config
#
# The following values are used to configure various aspects of all
# kubernetes services, including
#
# kube-apiserver.service
# kube-controller-manager.service
# kube-scheduler.service
# kubelet.service
# kube-proxy.service
# logging to stderr means we get it in the systemd journal
KUBE_LOGTOSTDERR="--logtostderr=true"
# journal message level, 0 is debug
KUBE_LOG_LEVEL="--v=0"
# Should this cluster be allowed to run privileged docker containers
KUBE_ALLOW_PRIV="--allow-privileged=true"
02.08 Create the systemd service file
Create the file /usr/lib/systemd/system/kubelet.service:
[root@yds-dev-svc02-node01 ~]# cat /usr/lib/systemd/system/kubelet.service
[Unit]
Description=Kubernetes Kubelet Server
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
After=docker.service
Requires=docker.service
[Service]
WorkingDirectory=/var/lib/kubelet
EnvironmentFile=-/etc/kubernetes/config
EnvironmentFile=-/etc/kubernetes/kubelet
ExecStart=/usr/bin/kubelet \
$KUBE_LOGTOSTDERR \
$KUBE_LOG_LEVEL \
$KUBELET_API_SERVER \
$KUBELET_ADDRESS \
$KUBELET_PORT \
$KUBELET_HOSTNAME \
$KUBE_ALLOW_PRIV \
$KUBELET_POD_INFRA_CONTAINER \
$KUBELET_ARGS
Restart=on-failure
[Install]
WantedBy=multi-user.target
03 Configuring kube-proxy
03.01 Create the proxy configuration file
[root@yds-dev-svc02-node01 kubernetes]# cat proxy
KUBE_PROXY_ARGS="--kubeconfig=/etc/kubernetes/kube-proxy.kubeconfig"
03.02 Create the service file
[root@yds-dev-svc02-node01 kubernetes]# cat /usr/lib/systemd/system/kube-proxy.service
[Unit]
Description=Kubernetes Proxy
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
After=network.target
[Service]
EnvironmentFile=-/etc/kubernetes/config
EnvironmentFile=-/etc/kubernetes/proxy
ExecStart=/usr/bin/kube-proxy \
$KUBE_LOGTOSTDERR \
$KUBE_LOG_LEVEL \
$KUBE_MASTER \
$KUBE_PROXY_ARGS
Restart=on-failure
LimitNOFILE=65536
[Install]
WantedBy=multi-user.target
04 Starting the services
systemctl enable kubelet kube-proxy ; systemctl restart kubelet kube-proxy; systemctl status kubelet kube-proxy
04.01 The certificate signing request
When the kubelet starts for the first time, it sends a certificate signing request to the apiserver; only after the apiserver approves it is the node joined to the cluster.
To list the signing requests sent by the nodes, run:
kubectl get certificatesigningrequests, or
kubectl get csr — the two commands are equivalent.
[root@yds-dev-svc01-master01 ~]# kubectl get certificatesigningrequests
NAME AGE REQUESTOR CONDITION
node-csr-KHdclgQlIa0kaTz-f5vjijMx_G2vzLUjuQZc8UIc7Oo 11s kubelet-bootstrap Pending
[root@yds-dev-svc01-master01 ~]# kubectl get csr
NAME AGE REQUESTOR CONDITION
node-csr-KHdclgQlIa0kaTz-f5vjijMx_G2vzLUjuQZc8UIc7Oo 3m kubelet-bootstrap Pending
node-csr-KHdclgQlIa0kaTz-f5vjijMx_G2vzLUjuQZc8UIc7Oo is the name of the request.
04.02 Approve the signing request
Since the signing request must be approved on the apiserver side, we execute this with kubectl, here on the server yds-dev-svc01-master01.
[root@yds-dev-svc01-master01 ~]# kubectl certificate approve node-csr-KHdclgQlIa0kaTz-f5vjijMx_G2vzLUjuQZc8UIc7Oo
certificatesigningrequest "node-csr-KHdclgQlIa0kaTz-f5vjijMx_G2vzLUjuQZc8UIc7Oo" approved
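When several nodes bootstrap at once there will be multiple pending requests. Their names can be extracted with awk and, on the master, piped to `xargs kubectl certificate approve` to approve them all in one pass. The sketch below exercises only the text-processing half, using the sample output from above as stand-in input, since it has to run without a cluster:

```shell
# Sample `kubectl get csr --no-headers` output (stand-in for the live command;
# on the master, replace the echo with the kubectl invocation itself).
csr_lines='node-csr-KHdclgQlIa0kaTz-f5vjijMx_G2vzLUjuQZc8UIc7Oo   3m   kubelet-bootstrap   Pending'

# Print the name of every request still in the Pending state.
echo "$csr_lines" | awk '$4 == "Pending" {print $1}'
```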
04.03 Check the generated certificates
After the request is approved, the node automatically generates its certificate files in /etc/kubernetes/ssl, the directory we configured earlier with --cert-dir. Let's look at the files generated there.
[root@yds-dev-svc02-node01 ssl]# ls kubelet*
kubelet-client.crt kubelet-client.key kubelet.crt kubelet.key
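The contents of these certificates can be checked with openssl: `openssl x509 -noout -subject -dates` shows who the certificate identifies and its validity window. Since kubelet-client.crt only exists on the node, the sketch below demonstrates the command on a throwaway self-signed certificate; the /tmp paths and subject are made up for the demo:

```shell
# Generate a throwaway cert just to demonstrate the inspection command;
# on the node you would point openssl at /etc/kubernetes/ssl/kubelet-client.crt.
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
  -keyout /tmp/demo.key -out /tmp/demo.crt \
  -subj "/O=system:nodes/CN=system:node:yds-dev-svc02-node01"

# Show the subject and validity window.
openssl x509 -in /tmp/demo.crt -noout -subject -dates
```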
04.04 Check the node
Remember the server yds-dev-svc01-master01 where we configured kubectl? Now run the following command on it.
[root@yds-dev-svc01-master01 ~]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
yds-dev-svc02-node01 Ready <none> 5d v1.9.0
As you can see, the node we created is now listed.
With that, the node configuration is complete. To add more nodes, simply repeat the same steps on each one.