1. Environment preparation
Software: VMware 14, CentOS 7 image
Install three CentOS 7 VMs and configure their networks:
master:192.168.48.130
node1:192.168.48.131
node2:192.168.48.132
Disable the firewall and SELinux:
systemctl disable firewalld
systemctl stop firewalld
setenforce 0
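Note that setenforce 0 only turns SELinux off until the next reboot; to keep it permissive across reboots you can also edit the SELinux config file (a minimal sketch, assuming the standard path /etc/selinux/config):
sed -i 's/^SELINUX=enforcing$/SELINUX=permissive/' /etc/selinux/config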
Disable the swap partition:
swapoff -a
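swapoff -a only disables swap for the current boot; to keep it disabled after a reboot you can comment out the swap entry in /etc/fstab (a sketch that simply comments out every line mentioning swap):
sed -ri 's/.*swap.*/#&/' /etc/fstab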
Make sure bridge traffic goes through iptables and IP forwarding is enabled:
[root@localhost ~]# echo "1" > /proc/sys/net/bridge/bridge-nf-call-iptables
[root@localhost ~]# echo "1" > /proc/sys/net/ipv4/ip_forward
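The two echo commands above do not survive a reboot. A persistent variant is to load the br_netfilter module and put the settings into a sysctl drop-in file (a sketch, assuming the file name /etc/sysctl.d/k8s.conf):
modprobe br_netfilter
cat <<EOF > /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
EOF
sysctl --system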
The three machines must be able to reach each other and the Internet; for the network configuration see https://blog.csdn.net/u013261007/article/details/106596437
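If you want the machines to resolve each other by hostname, you can add entries to /etc/hosts on every node (a sketch, assuming the hostnames master, node1, and node2):
cat <<EOF >> /etc/hosts
192.168.48.130 master
192.168.48.131 node1
192.168.48.132 node2
EOF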
2. Install the packages on each node
Log in to the master node.
First fetch the docker-ce yum repository configuration file:
wget https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo -O /etc/yum.repos.d/docker-ce.repo
Then create the Kubernetes yum repository configuration file /etc/yum.repos.d/kubernetes.repo with the following content:
[kubernetes]
name=Kubernetes Repository
baseurl=http://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=0
Install the required packages:
yum install docker-ce kubelet kubeadm kubectl
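kubeadm init later expects the docker daemon to be running and kubelet to be enabled, which the command above does not do by itself; a minimal sketch:
systemctl enable --now docker
# kubelet will crash-loop until kubeadm init generates its configuration; that is expected
systemctl enable --now kubelet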
Check the version:
[root@master ~]# kubelet --version
Kubernetes v1.18.3
List the other images that are needed:
[root@master ~]# kubeadm config images list
W0527 16:00:08.494089 28000 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
k8s.gcr.io/kube-apiserver:v1.18.3
k8s.gcr.io/kube-controller-manager:v1.18.3
k8s.gcr.io/kube-scheduler:v1.18.3
k8s.gcr.io/kube-proxy:v1.18.3
k8s.gcr.io/pause:3.2
k8s.gcr.io/etcd:3.4.3-0
k8s.gcr.io/coredns:1.6.7
For reasons we will not go into, we cannot pull these images directly from k8s.gcr.io, so we pull them from a mirror and retag them instead:
docker pull mirrorgcrio/kube-apiserver:v1.18.3
docker pull mirrorgcrio/kube-controller-manager:v1.18.3
docker pull mirrorgcrio/kube-scheduler:v1.18.3
docker pull mirrorgcrio/kube-proxy:v1.18.3
docker pull mirrorgcrio/pause:3.2
docker pull mirrorgcrio/etcd:3.4.3-0
docker pull mirrorgcrio/coredns:1.6.7
docker tag mirrorgcrio/kube-apiserver:v1.18.3 k8s.gcr.io/kube-apiserver:v1.18.3
docker tag mirrorgcrio/kube-controller-manager:v1.18.3 k8s.gcr.io/kube-controller-manager:v1.18.3
docker tag mirrorgcrio/kube-scheduler:v1.18.3 k8s.gcr.io/kube-scheduler:v1.18.3
docker tag mirrorgcrio/kube-proxy:v1.18.3 k8s.gcr.io/kube-proxy:v1.18.3
docker tag mirrorgcrio/pause:3.2 k8s.gcr.io/pause:3.2
docker tag mirrorgcrio/etcd:3.4.3-0 k8s.gcr.io/etcd:3.4.3-0
docker tag mirrorgcrio/coredns:1.6.7 k8s.gcr.io/coredns:1.6.7
docker image rm mirrorgcrio/kube-apiserver:v1.18.3
docker image rm mirrorgcrio/kube-controller-manager:v1.18.3
docker image rm mirrorgcrio/kube-scheduler:v1.18.3
docker image rm mirrorgcrio/kube-proxy:v1.18.3
docker image rm mirrorgcrio/pause:3.2
docker image rm mirrorgcrio/etcd:3.4.3-0
docker image rm mirrorgcrio/coredns:1.6.7
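The pull/tag/rm sequence above can also be written as a short loop (a sketch, assuming the same mirrorgcrio mirror and the image list printed by kubeadm config images list):
for image in kube-apiserver:v1.18.3 kube-controller-manager:v1.18.3 kube-scheduler:v1.18.3 kube-proxy:v1.18.3 pause:3.2 etcd:3.4.3-0 coredns:1.6.7; do
  docker pull mirrorgcrio/$image
  docker tag mirrorgcrio/$image k8s.gcr.io/$image
  docker image rm mirrorgcrio/$image
done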
Of course you can also use the command below and then retag the images:
[root@192 yum.repos.d]# kubeadm config images pull --image-repository=registry.aliyuncs.com/google_containers
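Images pulled this way keep the registry.aliyuncs.com/google_containers prefix, so they either need to be retagged to the k8s.gcr.io names, or you can point kubeadm init at the same repository via its --image-repository flag together with the pod network option explained below, for example:
kubeadm init --image-repository registry.aliyuncs.com/google_containers --pod-network-cidr=192.168.0.0/16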
After the images have been pulled, run kubeadm init. You must tell it which IP range the pod network may use via '--pod-network-cidr': for flannel use --pod-network-cidr=10.244.0.0/16, for calico use '--pod-network-cidr=192.168.0.0/16'.
I am using the calico network, so I use the latter:
[root@master ~]# kubeadm init --pod-network-cidr=192.168.0.0/16
W0527 09:08:38.957909 20937 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
[init] Using Kubernetes version: v1.18.3
[addons] Applied essential addon: kube-proxy
Your Kubernetes control-plane has initialized successfully!
To start using your cluster, you need to run the following as a regular user:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/
Then you can join any number of worker nodes by running the following on each as root:
If you see "successfully", congratulations, you are halfway there!
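If you did not save the join command printed at the end of kubeadm init, you can regenerate it on the master at any time:
kubeadm token create --print-join-command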
If the initialization went wrong, run the following to clean everything up; some of the files created by init are not removed automatically and a second init will fail otherwise:
kubeadm reset
rm -rf $HOME/.kube /etc/kubernetes
rm -rf /var/lib/cni/
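kubeadm reset does not flush iptables or IPVS rules; if a re-initialization still misbehaves you can clear them manually (only needed when leftover rules are suspected):
iptables -F && iptables -t nat -F && iptables -t mangle -F && iptables -X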
Follow the prompt and set up the account permissions:
$ mkdir -p $HOME/.kube
$ sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
$ sudo chown $(id -u):$(id -g) $HOME/.kube/config
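Alternatively, when working as root you can simply point kubectl at the admin kubeconfig instead of copying it:
export KUBECONFIG=/etc/kubernetes/admin.conf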
Check the installation status:
[root@master ~]# kubectl get pods --all-namespaces
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system coredns-66bff467f8-9s565 0/1 ContainerCreating 0 31s
kube-system coredns-66bff467f8-rfz2v 0/1 ContainerCreating 0 31s
kube-system etcd-izwz99w6o2tqabl1qt0pcsz 1/1 Running 0 40s
kube-system kube-apiserver-izwz99w6o2tqabl1qt0pcsz 1/1 Running 0 40s
kube-system kube-controller-manager-izwz99w6o2tqabl1qt0pcsz 1/1 Running 0 40s
kube-system kube-proxy-mtc4f 1/1 Running 0 32s
kube-system kube-scheduler-izwz99w6o2tqabl1qt0pcsz 1/1 Running 0 40s
Notice that the coredns pods are not yet in the Running state; we still need to install a pod network add-on. kubeadm only supports Container Network Interface (CNI) based networks (and does not support kubenet).
Here we use the calico network:
root@iZwz9:~# kubectl apply -f https://docs.projectcalico.org/v3.10/manifests/calico.yaml
root@iZwz9:~# kubectl get pods --all-namespaces
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system calico-kube-controllers-dc4469c7f-bf46j 1/1 Running 0 18s
kube-system calico-node-w54kq 1/1 Running 0 18s
kube-system coredns-66bff467f8-9s565 1/1 Running 0 86s
kube-system coredns-66bff467f8-rfz2v 0/1 Running 0 86s
kube-system etcd-izwz99w6o2tqabl1qt0pcsz 1/1 Running 0 95s
kube-system kube-apiserver-izwz99w6o2tqabl1qt0pcsz 1/1 Running 0 95s
kube-system kube-controller-manager-izwz99w6o2tqabl1qt0pcsz 1/1 Running 0 95s
kube-system kube-proxy-mtc4f 1/1 Running 0 87s
kube-system kube-scheduler-izwz99w6o2tqabl1qt0pcsz 1/1 Running 0 95s
All pods change to the Running state.
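You can also confirm that the master node itself reports Ready:
kubectl get nodes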
At this point the Kubernetes master has been installed!
3. Join the worker nodes
1. Change the hostname of the machine:
hostnamectl set-hostname node03
2. As on the master, install docker, kubelet, kubeadm, and kubectl, and enable them to start at boot.
3. Likewise, pull the required images:
k8s.gcr.io/kube-apiserver:v1.18.3
k8s.gcr.io/kube-proxy:v1.18.3
k8s.gcr.io/pause:3.2
k8s.gcr.io/coredns:1.6.7
4. Join the master:
# Disable the swap partition
$ swapoff -a
$ kubeadm join 192.168.48.130:6443 --token 2dfhgj.sop2w90e0zdh23pd --discovery-token-ca-cert-hash sha256:5627c67c1f515ed2eb8f3b38761ef9966687784f05ad35fb7c87338a3156c050
If you get an error saying that some configuration file already exists, run kubeadm reset again and then repeat step 4.
Command to check node errors on the worker:
journalctl -f -u kubelet
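Besides journalctl on the worker itself, the node and its pods can also be inspected from the master (node03 is the hostname set in step 1):
kubectl get nodes -o wide
kubectl describe node node03
kubectl get pods -n kube-system -o wide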