Basic environment preparation
The following walks through the installation process.
1. Install the required system tools
sudo yum install -y yum-utils device-mapper-persistent-data lvm2
Add the software repository information:
sudo yum-config-manager --add-repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
Update the yum cache and install Docker CE:
sudo yum makecache fast
sudo yum -y install docker-ce
Start the Docker service:
sudo service docker start
After the installation completes, run docker version to check that Docker installed successfully:
[root@k8s-master ~]# docker version
Client: Docker Engine - Community
 Version:           19.03.9
 API version:       1.40
 Go version:        go1.13.10
 Git commit:        9d988398e7
 Built:             Fri May 15 00:25:27 2020
 OS/Arch:           linux/amd64
 Experimental:      false

Server: Docker Engine - Community
 Engine:
  Version:          19.03.9
  API version:      1.40 (minimum version 1.12)
  Go version:       go1.13.10
  Git commit:       9d988398e7
  Built:            Fri May 15 00:24:05 2020
  OS/Arch:          linux/amd64
  Experimental:     false
 containerd:
  Version:          1.2.13
  GitCommit:        7ad184331fa3e55e52b890ea95e65ba581ae3429
 runc:
  Version:          1.0.0-rc10
  GitCommit:        dc9208a3303feef5b3839f4323d9beb36df0a9dd
 docker-init:
  Version:          0.18.0
  GitCommit:        fec3683
Docker must be installed on every node. In my environment I plan to clone the virtual machine later, so the installation steps on the other nodes are omitted. Next, configure an image registry mirror for Docker:
[root@ken ~]# vi /etc/docker/daemon.json
{
"registry-mirrors": ["https://XXX.mirror.aliyuncs.com"]
}
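Note that /etc/docker/daemon.json usually does not exist on a fresh install and has to be created. Below is a small sketch of creating and sanity-checking it; it writes to a temp directory purely for illustration, and XXX remains the placeholder for your own Aliyun mirror ID:

```shell
# Stand-in for /etc/docker; on a real node write to /etc/docker/daemon.json.
DOCKER_ETC=$(mktemp -d)
cat > "$DOCKER_ETC/daemon.json" <<'EOF'
{
  "registry-mirrors": ["https://XXX.mirror.aliyuncs.com"]
}
EOF
# A malformed daemon.json prevents dockerd from starting,
# so validate the JSON before restarting Docker:
python3 -m json.tool "$DOCKER_ETC/daemon.json" >/dev/null && echo "daemon.json OK"
```

After restarting Docker, `docker info` should list the configured mirror under "Registry Mirrors".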
On every node, restart Docker and enable it to start on boot:
[root@ken ~]# systemctl restart docker
[root@ken ~]# systemctl enable docker
2. Install the Kubernetes packages
[root@k8s-master yum.repos.d]# vi kubernetes.repo
[k8s]
name=k8s
enabled=1
gpgcheck=0
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
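The repo file above can also be written in one shot with a heredoc; a sketch, using a temp directory as a stand-in for /etc/yum.repos.d:

```shell
REPO_DIR=$(mktemp -d)   # stand-in for /etc/yum.repos.d
cat > "$REPO_DIR/kubernetes.repo" <<'EOF'
[k8s]
name=k8s
enabled=1
gpgcheck=0
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
EOF
# On a real node, refresh metadata and confirm yum sees the repo:
#   yum makecache && yum repolist | grep k8s
echo "wrote $REPO_DIR/kubernetes.repo"
```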
yum install kubelet kubeadm kubectl -y
Set kubelet to start on boot:
systemctl enable kubelet
Edit the hosts file
Here you can add an entry for raw.githubusercontent.com, which the flannel manifest is later fetched from, ahead of time. Possibly because of firewall restrictions, DNS lookups for this domain return no result, so binding an IP address in /etc/hosts makes it reachable.
[root@k8s-master etc]# cat hosts
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.237.201 k8s-master
192.168.237.202 k8s-node-1
192.168.237.203 k8s-node-2
151.101.76.133 raw.githubusercontent.com
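Rather than editing /etc/hosts by hand on every node, the entries can be appended idempotently. A sketch follows; a temp file stands in for /etc/hosts, `add_host` is a hypothetical helper, and note that the IP serving raw.githubusercontent.com changes over time, so verify it yourself:

```shell
# Append host entries only if they are not already present (idempotent).
HOSTS_FILE=$(mktemp)           # stand-in for /etc/hosts
add_host() {                   # usage: add_host <ip> <name>
  grep -q " $2\$" "$HOSTS_FILE" || echo "$1 $2" >> "$HOSTS_FILE"
}
add_host 192.168.237.201 k8s-master
add_host 192.168.237.202 k8s-node-1
add_host 192.168.237.203 k8s-node-2
add_host 151.101.76.133  raw.githubusercontent.com
add_host 192.168.237.201 k8s-master   # second call is a no-op
grep -c k8s-master "$HOSTS_FILE"      # prints 1, not 2
```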
Allow traffic crossing the Linux bridge to be processed by iptables:
echo "1" >/proc/sys/net/bridge/bridge-nf-call-iptables
Disable swap
If swap is left enabled, kubeadm will report an error during initialization.
swapoff -a && sysctl -w vm.swappiness=0
free -m
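Both swapoff and the echo into /proc only last until the next reboot. Below is a sketch of making the settings persistent; temp files stand in for /etc/sysctl.d and /etc/fstab, and the fstab line is a sample entry, not taken from this environment:

```shell
SYSCTL_D=$(mktemp -d)    # stand-in for /etc/sysctl.d
FSTAB=$(mktemp)          # stand-in for /etc/fstab
echo "/dev/mapper/centos-swap swap swap defaults 0 0" > "$FSTAB"   # sample entry

# Persist the bridge and swappiness settings across reboots:
cat > "$SYSCTL_D/k8s.conf" <<'EOF'
net.bridge.bridge-nf-call-iptables = 1
vm.swappiness = 0
EOF
# On a real node, apply with: sysctl --system

# Comment out the swap line so it is not mounted at boot:
sed -i 's/^[^#].*swap.*/#&/' "$FSTAB"
grep swap "$FSTAB"    # the swap line is now commented out
```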
Disable the firewall
[root@k8s-master etc]# systemctl stop firewalld
[root@k8s-master etc]# systemctl disable firewalld
Disable SELinux
[root@k8s-master etc]# setenforce 0
[root@k8s-master etc]# getenforce
Permissive
To keep SELinux off across reboots, edit the /etc/selinux/config file:
[root@k8s-master etc]# cd /etc/selinux/
[root@k8s-master selinux]# ls
config final semanage.conf targeted tmp
[root@k8s-master selinux]# cat config
# This file controls the state of SELinux on the system.
# SELINUX= can take one of these three values:
# enforcing - SELinux security policy is enforced.
# permissive - SELinux prints warnings instead of enforcing.
# disabled - No SELinux policy is loaded.
SELINUX=enforcing
# SELINUXTYPE= can take one of three two values:
# targeted - Targeted processes are protected,
# minimum - Modification of targeted policy. Only selected processes are protected.
# mls - Multi Level Security protection.
SELINUXTYPE=targeted
Change the value from SELINUX=enforcing to SELINUX=permissive or SELINUX=disabled.
Initialize the master
kubeadm init --image-repository registry.aliyuncs.com/google_containers --kubernetes-version v1.18.2 --apiserver-advertise-address 192.168.237.201 --pod-network-cidr=10.244.0.0/16
--apiserver-advertise-address specifies which interface on the master is used to communicate with the other nodes in the cluster. If the master has multiple interfaces, it is best to specify one explicitly; otherwise kubeadm automatically picks the interface that has the default gateway.
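These flags can equivalently be kept in a kubeadm config file and passed with `kubeadm init --config`. A sketch for the v1beta2 config API that kubeadm 1.18 uses follows (written to a temp file here; verify the field names against your kubeadm version):

```shell
CFG=$(mktemp)
cat > "$CFG" <<'EOF'
apiVersion: kubeadm.k8s.io/v1beta2
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 192.168.237.201   # = --apiserver-advertise-address
---
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
kubernetesVersion: v1.18.2            # = --kubernetes-version
imageRepository: registry.aliyuncs.com/google_containers
networking:
  podSubnet: 10.244.0.0/16            # = --pod-network-cidr
EOF
echo "config written to $CFG"
# On the master: kubeadm init --config "$CFG"
```

Keeping the settings in a file makes re-running the initialization reproducible.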
Your Kubernetes control-plane has initialized successfully!
To start using your cluster, you need to run the following as a regular user:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/
Then you can join any number of worker nodes by running the following on each as root:
kubeadm join 192.168.237.201:6443 --token dxdsdg.n9w1raqsv1pfu025 \
--discovery-token-ca-cert-hash sha256:4c60937f7add85d164e23ca927bcb3b35cd6820331e02bf9b310d3f727a8cde6
Note that the token and CA cert hash above are important; the other nodes need them to join the cluster.
Set up kubectl for the current user and, for convenience, enable command auto-completion for kubectl.
[root@k8s-master ~]# mkdir -p $HOME/.kube
[root@k8s-master ~]# cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
[root@k8s-master ~]# chown $(id -u):$(id -g) $HOME/.kube/config
[root@k8s-master ~]# echo "source <(kubectl completion bash)" >> ~/.bashrc
Now try a few kubectl commands:
[root@k8s-master ~]# kubectl get cs
NAME STATUS MESSAGE ERROR
controller-manager Healthy ok
scheduler Healthy ok
etcd-0 Healthy {"health":"true"}
3. Install the pod network; flannel is used here
As mentioned earlier, raw.githubusercontent.com cannot be resolved via DNS here; with its IP address pinned in /etc/hosts, the URL becomes reachable.
[root@k8s-master ~]# kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
podsecuritypolicy.policy/psp.flannel.unprivileged created
clusterrole.rbac.authorization.k8s.io/flannel created
clusterrolebinding.rbac.authorization.k8s.io/flannel created
serviceaccount/flannel created
configmap/kube-flannel-cfg created
daemonset.apps/kube-flannel-ds-amd64 created
daemonset.apps/kube-flannel-ds-arm64 created
daemonset.apps/kube-flannel-ds-arm created
daemonset.apps/kube-flannel-ds-ppc64le created
daemonset.apps/kube-flannel-ds-s390x created
With that, the master node is fully initialized:
[root@k8s-master ~]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
k8s-master Ready master 14m v1.18.3
[root@k8s-master ~]# kubectl get pods -n kube-system
NAME READY STATUS RESTARTS AGE
coredns-7ff77c879f-lvbb7 1/1 Running 0 13m
coredns-7ff77c879f-vpn7w 1/1 Running 0 13m
etcd-k8s-master 1/1 Running 0 14m
kube-apiserver-k8s-master 1/1 Running 0 14m
kube-controller-manager-k8s-master 1/1 Running 0 14m
kube-flannel-ds-amd64-4wwsh 1/1 Running 0 4m49s
kube-proxy-ddpx6 1/1 Running 0 13m
kube-scheduler-k8s-master 1/1 Running 0 14m
Add the other nodes to the K8s cluster
1. Prepare each node with the same settings as on the master
echo "1" >/proc/sys/net/bridge/bridge-nf-call-iptables
Disable swap
swapoff -a && sysctl -w vm.swappiness=0
free -m
2. Join the nodes
Then, using the output printed earlier by kubeadm init on the master, run the command below on each node to join the cluster.
The --token here comes from that kubeadm init output; if you did not record it at the time, you can look it up with kubeadm token list.
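The bootstrap token from kubeadm init expires after 24 hours by default, so on an older cluster run `kubeadm token create --print-join-command` on the master to get a fresh join command. The --discovery-token-ca-cert-hash value can also be recomputed from the cluster CA with the openssl pipeline documented for kubeadm. The sketch below demonstrates that pipeline on a throwaway self-signed certificate; on a real master the input would be /etc/kubernetes/pki/ca.crt:

```shell
# Generate a throwaway cert just to demonstrate the hash computation;
# on the master, use /etc/kubernetes/pki/ca.crt instead.
CA_CRT=$(mktemp)
openssl req -x509 -newkey rsa:2048 -nodes -keyout /dev/null \
  -subj "/CN=demo-ca" -days 1 -out "$CA_CRT" 2>/dev/null

# sha256 of the DER-encoded public key -- the same value kubeadm prints:
HASH=$(openssl x509 -pubkey -in "$CA_CRT" \
  | openssl rsa -pubin -outform der 2>/dev/null \
  | openssl dgst -sha256 -hex | sed 's/^.* //')
echo "sha256:$HASH"
```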
kubeadm join 192.168.237.201:6443 --token dxdsdg.n9w1raqsv1pfu025 --discovery-token-ca-cert-hash sha256:4c60937f7add85d164e23ca927bcb3b35cd6820331e02bf9b310d3f727a8cde6
Back on the master, kubectl get nodes now shows that the new node is not yet Ready:
[root@k8s-master ~]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
k8s-master Ready master 33m v1.18.3
k8s-node-1 NotReady <none> 86s v1.18.3
[root@k8s-master ~]#
It actually takes a little while before node-1 switches to the Ready state.
Other tutorials online say a node needs to pull four images (flannel, coredns, kube-proxy, pause), but checking here shows only three:
[root@k8s-node-1 yum.repos.d]# docker image ls
REPOSITORY TAG IMAGE ID CREATED SIZE
registry.aliyuncs.com/google_containers/kube-proxy v1.18.2 0d40868643c6 5 weeks ago 117MB
quay.io/coreos/flannel v0.12.0-amd64 4e9f801d2217 2 months ago 52.8MB
registry.aliyuncs.com/google_containers/pause 3.2 80d28bedfe5d 3 months ago
[root@k8s-master ~]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
k8s-master Ready master 53m v1.18.3
k8s-node-1 Ready <none> 6m56s v1.18.3
Apply the same steps on node-2 to join it to the cluster as well.
3. Tips
To undo a failed init or join on a node before retrying, run:
kubeadm reset
Sometimes a node fails to join the cluster with an error about /proc/sys/net/ipv4/ip_forward; fix it with:
echo 1 > /proc/sys/net/ipv4/ip_forward
If a node that joined earlier never becomes Ready, delete it from the cluster:
kubectl delete node host1
then re-run the join command on that node:
kubeadm join 192.168.237.201:6443 ...
4. Final result after adding the nodes
[root@k8s-node-2 ~]# kubeadm join 192.168.237.201:6443 --token dxdsdg.n9w1raqsv1pfu025 --discovery-token-ca-cert-hash sha256:4c60937f7add85d164e23ca927bcb3b35cd6820331e02bf9b310d3f727a8cde6
W0522 22:56:06.696722 4820 join.go:346] [preflight] WARNING: JoinControlPane.controlPlane settings will be ignored when control-plane flag is not set.
[preflight] Running pre-flight checks
[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[kubelet-start] Downloading configuration for the kubelet from the "kubelet-config-1.18" ConfigMap in the kube-system namespace
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.
Run 'kubectl get nodes' on the control-plane to see this node join the cluster
[root@k8s-master ~]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
k8s-master Ready master 3h25m v1.18.3
k8s-node-1 Ready <none> 132m v1.18.3
k8s-node-2 Ready <none> 137m v1.18.3
The initial Kubernetes cluster setup is now complete.