[VM] Setting up a Kubernetes cluster on Ubuntu 18.04 (one master, two workers)
Hostname and IP address assignments
Role | Hostname | IP address |
---|---|---|
Master node | kubernetes-master | 192.168.1.3 |
Worker node 1 | kubernetes-slave1 | 192.168.1.4 |
Worker node 2 | kubernetes-slave2 | 192.168.1.5 |
First, install Docker on all three servers; see the earlier article "Installing Docker CE on Ubuntu 18.04".
The hostname can be changed with the hostnamectl command:
# show the current hostname
hostnamectl
# sample output
Static hostname: kevin
Icon name: computer-vm
Chassis: vm
Machine ID: 1891bc24c87c4cba944f9ece69099677
Boot ID: 3c6454f3fbaf4e858502d08d78e72dac
Virtualization: vmware
Operating System: Ubuntu 18.04.3 LTS
Kernel: Linux 4.15.0-66-generic
Architecture: x86-64
# change the hostname to kubernetes-master
hostnamectl set-hostname kubernetes-master
If the cloud-init package is installed, the file /etc/cloud/cloud.cfg also needs to be edited, because cloud-init may otherwise reset the hostname on reboot:
vi /etc/cloud/cloud.cfg
# if the file is empty, nothing needs to be done
# otherwise, find the following setting and change false to true
preserve_hostname: true
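With the hostnames set, it also helps if the three machines can resolve each other by name, since kubeadm output and `kubectl get nodes` refer to nodes by hostname. A minimal sketch using the IPs and names from the table above (adjust to your environment):

```shell
# The three nodes from the table above; adjust to your environment.
hosts_entries() {
  printf '%s\n' \
    '192.168.1.3 kubernetes-master' \
    '192.168.1.4 kubernetes-slave1' \
    '192.168.1.5 kubernetes-slave2'
}

# Run on each node (requires root) to make the names resolvable:
# hosts_entries | sudo tee -a /etc/hosts
hosts_entries
```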
Installing the Kubernetes packages
Run the following steps on all three servers.
- Configure the package repository
# install prerequisite tools
apt-get update && apt-get install -y apt-transport-https
# install the GPG key
curl https://mirrors.aliyun.com/kubernetes/apt/doc/apt-key.gpg | apt-key add -
# add the repository; note: our release codename is bionic, but the Aliyun mirror does not carry it yet, so we stay on xenial (16.04)
cat << EOF >/etc/apt/sources.list.d/kubernetes.list
deb https://mirrors.aliyun.com/kubernetes/apt/ kubernetes-xenial main
EOF
- Install kubeadm, kubelet and kubectl
kubeadm: bootstraps the cluster (runs the initialization)
kubelet: the agent that runs on every node and is responsible for starting pods and containers
kubectl: the Kubernetes command-line tool, used to deploy and manage applications and to view, create, update and delete resources
The installation steps are as follows:
# install the packages
apt-get update
apt-get install -y kubelet kubeadm kubectl
# note the package versions reported during installation
Reading package lists... Done
Building dependency tree
Reading state information... Done
The following additional packages will be installed:
conntrack cri-tools kubernetes-cni socat
The following NEW packages will be installed:
conntrack cri-tools kubeadm kubectl kubelet kubernetes-cni socat
0 upgraded, 7 newly installed, 0 to remove and 46 not upgraded.
Need to get 54.3 MB of archives.
After this operation, 291 MB of additional disk space will be used.
Get:1 http://mirrors.aliyun.com/ubuntu bionic/main amd64 conntrack amd64 1:1.4.4+snapshot20161117-6ubuntu2 [30.6 kB]
Get:2 https://mirrors.aliyun.com/kubernetes/apt kubernetes-xenial/main amd64 cri-tools amd64 1.13.0-00 [8,776 kB]
Get:3 http://mirrors.aliyun.com/ubuntu bionic/main amd64 socat amd64 1.7.3.2-2ubuntu2 [342 kB]
Get:4 https://mirrors.aliyun.com/kubernetes/apt kubernetes-xenial/main amd64 kubernetes-cni amd64 0.7.5-00 [6,473 kB]
Get:5 https://mirrors.aliyun.com/kubernetes/apt kubernetes-xenial/main amd64 kubelet amd64 1.16.2-00 [20.7 MB]
Get:6 https://mirrors.aliyun.com/kubernetes/apt kubernetes-xenial/main amd64 kubectl amd64 1.16.2-00 [9,234 kB]
Get:7 https://mirrors.aliyun.com/kubernetes/apt kubernetes-xenial/main amd64 kubeadm amd64 1.16.2-00 [8,761 kB]
Fetched 54.3 MB in 20s (2,743 kB/s)
Selecting previously unselected package conntrack.
(Reading database ... 67191 files and directories currently installed.)
Preparing to unpack .../0-conntrack_1%3a1.4.4+snapshot20161117-6ubuntu2_amd64.deb ...
Unpacking conntrack (1:1.4.4+snapshot20161117-6ubuntu2) ...
Selecting previously unselected package cri-tools.
Preparing to unpack .../1-cri-tools_1.13.0-00_amd64.deb ...
Unpacking cri-tools (1.13.0-00) ...
Selecting previously unselected package kubernetes-cni.
Preparing to unpack .../2-kubernetes-cni_0.7.5-00_amd64.deb ...
Unpacking kubernetes-cni (0.7.5-00) ...
Selecting previously unselected package socat.
Preparing to unpack .../3-socat_1.7.3.2-2ubuntu2_amd64.deb ...
Unpacking socat (1.7.3.2-2ubuntu2) ...
Selecting previously unselected package kubelet.
Preparing to unpack .../4-kubelet_1.16.2-00_amd64.deb ...
Unpacking kubelet (1.16.2-00) ...
Selecting previously unselected package kubectl.
Preparing to unpack .../5-kubectl_1.16.2-00_amd64.deb ...
Unpacking kubectl (1.16.2-00) ...
Selecting previously unselected package kubeadm.
Preparing to unpack .../6-kubeadm_1.16.2-00_amd64.deb ...
Unpacking kubeadm (1.16.2-00) ...
Setting up conntrack (1:1.4.4+snapshot20161117-6ubuntu2) ...
Setting up kubernetes-cni (0.7.5-00) ...
Setting up cri-tools (1.13.0-00) ...
Setting up socat (1.7.3.2-2ubuntu2) ...
Setting up kubelet (1.16.2-00) ...
Created symlink /etc/systemd/system/multi-user.target.wants/kubelet.service → /lib/systemd/system/kubelet.service.
Setting up kubectl (1.16.2-00) ...
Processing triggers for man-db (2.8.3-2ubuntu0.1) ...
Setting up kubeadm (1.16.2-00) ...
The output shows that the Kubernetes packages installed are version 1.16.2. To keep all three nodes on this exact version, you can optionally pin the packages with `apt-mark hold kubelet kubeadm kubectl`.
Installing the Kubernetes master node
First create the directory structure /usr/local/docker/kubernetes under /usr/local, then change into /usr/local/docker/kubernetes/ and run the following:
# generate a default configuration file
kubeadm config print init-defaults > kubeadm.yml
# edit the configuration file as follows
...
localAPIEndpoint:
  # change to the master node's IP address
  advertiseAddress: 192.168.1.3
...
# Google's registry is unreachable from mainland China, so use the Aliyun mirror
imageRepository: registry.aliyuncs.com/google_containers
...
# set the version number to match the installed packages
kubernetesVersion: v1.16.2
networking:
  dnsDomain: cluster.local
  # pod network CIDR; it must not overlap with the host network, and the CNI
  # plugin deployed later (Calico's default is 192.168.0.0/16) must be
  # configured to use the same range
  podSubnet: "10.224.0.0/16"
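If you prefer to script the setup, the same edits can be applied to the generated file with GNU sed instead of editing it by hand. This is a sketch: the field names are assumed to match what `kubeadm config print init-defaults` emits for v1.16, and `patch_kubeadm_yml` is a hypothetical helper, not part of kubeadm.

```shell
# Hypothetical helper: apply the edits from the section above to the
# generated kubeadm.yml with GNU sed.
patch_kubeadm_yml() {
  sed -i \
    -e 's/advertiseAddress: .*/advertiseAddress: 192.168.1.3/' \
    -e 's|imageRepository: .*|imageRepository: registry.aliyuncs.com/google_containers|' \
    -e 's/kubernetesVersion: .*/kubernetesVersion: v1.16.2/' \
    "$1"
  # init-defaults may not emit podSubnet at all, so insert it under networking:
  grep -q 'podSubnet:' "$1" || \
    sed -i 's|dnsDomain: cluster.local|&\n  podSubnet: "10.224.0.0/16"|' "$1"
}
```

Usage: `patch_kubeadm_yml kubeadm.yml`, then inspect the result before running `kubeadm init`.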
After editing, list the images that the configuration requires:
kubeadm config images list --config kubeadm.yml
Pull the images:
kubeadm config images pull --config kubeadm.yml
Run the following command to initialize the master node. It uses the configuration file edited above; the --upload-certs flag uploads the control-plane certificates so they can be distributed automatically when nodes join later, and tee kubeadm-init.log saves the output to a log file.
kubeadm init --config=kubeadm.yml --upload-certs | tee kubeadm-init.log
# output of a successful initialization
[init] Using Kubernetes version: v1.16.2
[preflight] Running pre-flight checks
[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 19.03.4. Latest validated version: 18.09
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Activating the kubelet service
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [kubernetes-master kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.1.3]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [kubernetes-master localhost] and IPs [192.168.1.3 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [kubernetes-master localhost] and IPs [192.168.1.3 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.
[apiclient] All control plane components are healthy after 53.002024 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.16" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Storing the certificates in Secret "kubeadm-certs" in the "kube-system" Namespace
[upload-certs] Using certificate key:
3a690d09bfc716b5db1bfb684afcac89544a4dc598883584d6878bf73580db57
[mark-control-plane] Marking the node kubernetes-master as control-plane by adding the label "node-role.kubernetes.io/master=''"
[mark-control-plane] Marking the node kubernetes-master as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: abcdef.0123456789abcdef
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy
Your Kubernetes control-plane has initialized successfully!
To start using your cluster, you need to run the following as a regular user:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/
Then you can join any number of worker nodes by running the following on each as root:
# worker (slave) nodes join the cluster with the following command
kubeadm join 192.168.1.3:6443 --token abcdef.0123456789abcdef \
--discovery-token-ca-cert-hash sha256:aa08323b811116d8c8af81b6f854e27f31d5b62e77369664c1633c7921ac9e71
Note: if the installed Kubernetes version does not match the version of the downloaded images, initialization fails with a timed out waiting for the condition error. If initialization fails midway, or you need to change the configuration, run kubeadm reset to reset the node and then initialize again.
Configuring kubectl
mkdir -p $HOME/.kube
cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
# when running as a non-root user, also take ownership of the file
chown $(id -u):$(id -g) $HOME/.kube/config
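As an alternative for the root user: kubectl honors the KUBECONFIG environment variable, so you can point it at admin.conf directly instead of copying the file. Unlike the copy above, this only lasts for the current shell session.

```shell
# kubectl reads the KUBECONFIG environment variable, so as root you can
# point it at admin.conf directly instead of copying the file.
# Note: this setting only lasts for the current shell session.
export KUBECONFIG=/etc/kubernetes/admin.conf
```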
To check that the master node is configured correctly, run:
kubectl get node
# output like the following indicates success
NAME STATUS ROLES AGE VERSION
kubernetes-master NotReady master 4m51s v1.16.2
Appendix: what kubeadm init does, phase by phase
init: initialize the cluster with the specified version
preflight: run pre-flight checks and download the required Docker images
kubelet-start: generate the kubelet configuration file /var/lib/kubelet/config.yaml; without this file the kubelet cannot start, which is why the kubelet fails to run before initialization
certificates: generate the certificates Kubernetes uses and store them in /etc/kubernetes/pki
kubeconfig: generate the kubeconfig files in /etc/kubernetes; the components use them to talk to each other
control-plane: install the master components from the YAML manifests in /etc/kubernetes/manifests
etcd: install the etcd service from /etc/kubernetes/manifests/etcd.yaml
wait-control-plane: wait for the master components deployed by control-plane to start
apiclient: check the health of the master components
uploadconfig: store the configuration that was used
kubelet: configure the kubelet via a ConfigMap
patchnode: record CNI information on the Node object via annotations
mark-control-plane: label the current node with the master role and taint it as unschedulable, so that by default ordinary pods are not scheduled onto the master
bootstrap-token: generate the bootstrap token; record it, because kubeadm join needs it later when adding nodes to the cluster
addons: install the CoreDNS and kube-proxy add-ons
Configuring the Kubernetes worker (slave) nodes
To configure a worker node, you only need to install kubeadm, kubectl and kubelet on it, then add it to the cluster with kubeadm join:
- Change the hostname
- Configure the package repository
- Install kubeadm, kubectl and kubelet
The three steps above are the same as in "Installing the Kubernetes packages" earlier.
- Find the following command in the kubeadm-init.log recorded during the master installation and run it on each worker node:
kubeadm join 192.168.1.3:6443 --token abcdef.0123456789abcdef \
--discovery-token-ca-cert-hash sha256:aa08323b811116d8c8af81b6f854e27f31d5b62e77369664c1633c7921ac9e71
Notes:
A problem you may run into:
error execution phase preflight: unable to fetch the kubeadm-config ConfigMap: failed to get config map: Unauthorized
The token generated by kubeadm init is only valid for 24 hours. If kubeadm join on a worker node fails with the error above, check on kubernetes-master whether the current token is still valid:
kubeadm token list
# check whether the token has expired
TOKEN TTL EXPIRES USAGES DESCRIPTION EXTRA GROUPS
abcdef.0123456789abcdef <invalid> 2019-10-31T00:04:32Z authentication,signing <none> system:bootstrappers:kubeadm:default-node-token
# the following command creates a token that never expires and prints the corresponding join command
kubeadm token create --ttl 0 --print-join-command
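The join command also needs the --discovery-token-ca-cert-hash value. If kubeadm-init.log has been lost, the hash can be recomputed from the cluster CA certificate; the openssl pipeline below is the one given in the kubeadm documentation, and the function wrapper is just for illustration.

```shell
# Recompute the --discovery-token-ca-cert-hash value from the cluster CA
# certificate (the SHA-256 digest of the DER-encoded public key).
ca_cert_hash() {
  openssl x509 -pubkey -in "$1" \
    | openssl rsa -pubin -outform der 2>/dev/null \
    | openssl dgst -sha256 -hex \
    | sed 's/^.* //'
}

# On the master (prefix the result with "sha256:" in the join command):
# ca_cert_hash /etc/kubernetes/pki/ca.crt
```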
Verifying the cluster
Back on kubernetes-master, run:
kubectl get nodes
# the list of cluster nodes is displayed
NAME STATUS ROLES AGE VERSION
kubernetes-master NotReady master 25h v1.16.2
kubernetes-slave1 NotReady <none> 29m v1.16.2
Check the status of the system pods:
kubectl get pod -n kube-system -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
coredns-58cc8c89f4-4d684 0/1 Pending 0 25h <none> <none> <none> <none>
coredns-58cc8c89f4-p7ht6 0/1 Pending 0 25h <none> <none> <none> <none>
etcd-kubernetes-master 1/1 Running 0 25h 192.168.1.3 kubernetes-master <none> <none>
kube-apiserver-kubernetes-master 1/1 Running 0 25h 192.168.1.3 kubernetes-master <none> <none>
kube-controller-manager-kubernetes-master 1/1 Running 1 25h 192.168.1.3 kubernetes-master <none> <none>
kube-proxy-45pz5 1/1 Running 0 25h 192.168.1.3 kubernetes-master <none> <none>
kube-proxy-7xrlh 1/1 Running 0 38m 192.168.1.4 kubernetes-slave1 <none> <none>
kube-scheduler-kubernetes-master 1/1 Running 1 25h 192.168.1.3 kubernetes-master <none> <none>
As the output shows, the CoreDNS pods are still Pending because no pod network plugin has been deployed yet; the next article will cover that.