Install k8s: for a highly-available k8s setup, refer to (https://blog.csdn.net/fanren224/article/details/86573264)
- #setenforce 0
- #swapoff -a # alternatively, you can configure k8s so that swap does not have to be turned off
- #echo 1 > /proc/sys/net/bridge/bridge-nf-call-iptables
If this command fails with an error, run:
#modprobe br_netfilter
#echo 1 > /proc/sys/net/bridge/bridge-nf-call-iptables
- #echo 1 > /proc/sys/net/ipv4/ip_forward
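The echo settings above only change the running kernel and are lost on reboot. One way to make them persistent, assuming a standard sysctl.d layout (the file name k8s.conf is an arbitrary choice):

```conf
# /etc/sysctl.d/k8s.conf   (apply without a reboot: sysctl --system)
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
```

To have br_netfilter loaded at boot as well, you can additionally put the single line br_netfilter into /etc/modules-load.d/k8s.conf.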
- Install docker
a. #wget https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo -O /etc/yum.repos.d/docker-ce.repo
b. #yum -y install docker-ce-18.06.1.ce-3.el7 (you can also omit the version, i.e. yum -y install docker-ce, which installs the latest version)
c. #systemctl enable docker
d. #systemctl start docker
e. #docker --version
- #vim /etc/yum.repos.d/kubernetes.repo (add the Aliyun YUM repository)
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
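Pulling from Docker Hub can also be slow in China; as an optional aside to the docker setup above, a registry mirror can be configured in /etc/docker/daemon.json (a sketch; the accelerator URL is a placeholder, substitute your own, then run systemctl restart docker):

```json
{
  "registry-mirrors": ["https://<your-accelerator-id>.mirror.aliyuncs.com"]
}
```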
- #yum install -y kubelet-1.16.2 kubeadm-1.16.2 kubectl-1.16.2 (omit the version numbers to install the latest versions)
- #systemctl enable kubelet
- #systemctl start kubelet
- Initialize the master
#kubeadm init --apiserver-advertise-address=172.1.3.40 --kubernetes-version=v1.16.2 --pod-network-cidr=10.244.0.0/16
ps: 172.1.3.40 is the master host's IP
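As an alternative sketch, the same flags can be kept in a config file and passed via kubeadm init --config kubeadm-config.yaml (kubeadm 1.16 uses the kubeadm.k8s.io/v1beta2 API; the file name is an arbitrary choice):

```yaml
# kubeadm-config.yaml
apiVersion: kubeadm.k8s.io/v1beta2
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 172.1.3.40    # the master host IP, as above
---
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
kubernetesVersion: v1.16.2
networking:
  podSubnet: 10.244.0.0/16        # matches --pod-network-cidr
```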
You will most likely hit the following errors:
[ERROR ImagePull]: failed to pull image k8s.gcr.io/kube-apiserver:v1.16.2: output: Error response from daemon: Get https://k8s.gcr.io/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
, error: exit status 1
[ERROR ImagePull]: failed to pull image k8s.gcr.io/kube-controller-manager:v1.16.2: output: Error response from daemon: Get https://k8s.gcr.io/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
, error: exit status 1
[ERROR ImagePull]: failed to pull image k8s.gcr.io/kube-scheduler:v1.16.2: output: Error response from daemon: Get https://k8s.gcr.io/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
, error: exit status 1
[ERROR ImagePull]: failed to pull image k8s.gcr.io/kube-proxy:v1.16.2: output: Error response from daemon: Get https://k8s.gcr.io/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
, error: exit status 1
[ERROR ImagePull]: failed to pull image k8s.gcr.io/pause:3.1: output: Error response from daemon: Get https://k8s.gcr.io/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
, error: exit status 1
[ERROR ImagePull]: failed to pull image k8s.gcr.io/etcd:3.3.15-0: output: Error response from daemon: Get https://k8s.gcr.io/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
, error: exit status 1
[ERROR ImagePull]: failed to pull image k8s.gcr.io/coredns:1.6.2: output: Error response from daemon: Get https://k8s.gcr.io/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
Cause:
The k8s docker images are hosted on servers outside China, and domestic network restrictions make them unreachable. The workaround is to first pull the required images from Docker Hub to the local machine, re-tag them, and then run kubeadm init again. kubeadm first checks whether the required docker images exist locally; if they do, it uses the local images directly, and if not, it pulls them from the https://k8s.gcr.io registry.
The error messages above contain the names and tags of the required images; use them to pull the matching images from Docker Hub onto this machine.
Solution:
You can put the following into a shell script and run it in one batch:
docker pull mirrorgooglecontainers/kube-apiserver-amd64:v1.16.2
docker pull mirrorgooglecontainers/kube-controller-manager-amd64:v1.16.2
docker pull mirrorgooglecontainers/kube-scheduler-amd64:v1.16.2
docker pull mirrorgooglecontainers/kube-proxy-amd64:v1.16.2
docker pull mirrorgooglecontainers/pause:3.1
docker pull mirrorgooglecontainers/etcd-amd64:3.3.15-0
docker pull coredns/coredns:1.6.2
# re-tag the dockerhub images as k8s.gcr.io
docker tag docker.io/mirrorgooglecontainers/kube-apiserver-amd64:v1.16.2 k8s.gcr.io/kube-apiserver:v1.16.2
docker tag docker.io/mirrorgooglecontainers/kube-controller-manager-amd64:v1.16.2 k8s.gcr.io/kube-controller-manager:v1.16.2
docker tag docker.io/mirrorgooglecontainers/kube-scheduler-amd64:v1.16.2 k8s.gcr.io/kube-scheduler:v1.16.2
docker tag docker.io/mirrorgooglecontainers/kube-proxy-amd64:v1.16.2 k8s.gcr.io/kube-proxy:v1.16.2
docker tag docker.io/mirrorgooglecontainers/pause:3.1 k8s.gcr.io/pause:3.1
docker tag docker.io/mirrorgooglecontainers/etcd-amd64:3.3.15-0 k8s.gcr.io/etcd:3.3.15-0
docker tag docker.io/coredns/coredns:1.6.2 k8s.gcr.io/coredns:1.6.2
# remove the now-redundant dockerhub-tagged images
docker rmi mirrorgooglecontainers/kube-apiserver-amd64:v1.16.2
docker rmi mirrorgooglecontainers/kube-controller-manager-amd64:v1.16.2
docker rmi mirrorgooglecontainers/kube-scheduler-amd64:v1.16.2
docker rmi mirrorgooglecontainers/kube-proxy-amd64:v1.16.2
docker rmi mirrorgooglecontainers/pause:3.1
docker rmi mirrorgooglecontainers/etcd-amd64:3.3.15-0
docker rmi coredns/coredns:1.6.2
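The pull/tag/rmi triplets above can also be written as a single loop; a sketch (with DRY_RUN=1, the default here, it only prints the docker commands for review; set DRY_RUN=0 to actually execute them):

```shell
# Pull each image from its Docker Hub mirror, re-tag it with the
# k8s.gcr.io name that kubeadm expects, then drop the mirror tag.
DRY_RUN=${DRY_RUN:-1}                 # DRY_RUN=1 prints commands instead of running docker
run() { if [ "$DRY_RUN" = "1" ]; then echo "$@"; else "$@"; fi; }

V=v1.16.2
# "mirror_image=target_image" pairs, taken from the kubeadm error output above
PAIRS="
mirrorgooglecontainers/kube-apiserver-amd64:$V=k8s.gcr.io/kube-apiserver:$V
mirrorgooglecontainers/kube-controller-manager-amd64:$V=k8s.gcr.io/kube-controller-manager:$V
mirrorgooglecontainers/kube-scheduler-amd64:$V=k8s.gcr.io/kube-scheduler:$V
mirrorgooglecontainers/kube-proxy-amd64:$V=k8s.gcr.io/kube-proxy:$V
mirrorgooglecontainers/pause:3.1=k8s.gcr.io/pause:3.1
mirrorgooglecontainers/etcd-amd64:3.3.15-0=k8s.gcr.io/etcd:3.3.15-0
coredns/coredns:1.6.2=k8s.gcr.io/coredns:1.6.2
"
for pair in $PAIRS; do
  src=${pair%%=*}                     # image name on Docker Hub
  dst=${pair#*=}                      # image name kubeadm looks for
  run docker pull "$src"
  run docker tag "$src" "$dst"
  run docker rmi "$src"
done
```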
Note: before running kubeadm init again, you need to run kubeadm reset first.
After kubeadm init succeeds, config files and certificate files are generated under /etc/kubernetes
The information printed to the console on success is very important:
............................
............................
Your Kubernetes control-plane has initialized successfully!
To start using your cluster, you need to run the following as a regular user:
mkdir -p $HOME/.kube # if these commands fail with a permission error, prefix them with sudo
cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
chown $(id -u):$(id -g) $HOME/.kube/config # each time you re-run kubeadm init, you must re-run this command afterwards, or you will hit problems later.
You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/
Then you can join any number of worker nodes by running the following on each as root:
kubeadm join xx.xx.xx.xx:6443 --token a1dji3.2xjrid83w0w532qm \
--discovery-token-ca-cert-hash sha256:df0920c7fa49921c9f96a16efbed681df435d49267d1b312394073968c268cda
...................................
....................................
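The three config-copy commands in the output above can be wrapped so they are safe to re-run after a repeated kubeadm init; a sketch (cp -f overwrites the stale kubeconfig copy, and the ADMIN_CONF guard only makes the script degrade gracefully when the file is absent):

```shell
# Same steps as in the kubeadm init output, but idempotent.
ADMIN_CONF=${ADMIN_CONF:-/etc/kubernetes/admin.conf}   # kubeadm's default output path
KUBE_DIR="$HOME/.kube"

mkdir -p "$KUBE_DIR"
if [ -f "$ADMIN_CONF" ]; then
  cp -f "$ADMIN_CONF" "$KUBE_DIR/config"               # overwrite any stale copy
  chown "$(id -u):$(id -g)" "$KUBE_DIR/config"
else
  echo "skipping copy: $ADMIN_CONF not found (run kubeadm init first)"
fi
```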
- Following the kubeadm init output, copy the config and fix its permissions:
#mkdir -p $HOME/.kube # if these commands fail with a permission error, prefix them with sudo
#cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
#chown $(id -u):$(id -g) $HOME/.kube/config
- Install the Pod network
#wget https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
#kubectl apply -f kube-flannel.yml
The flannel plugin may develop problems after running for a while; try to troubleshoot them.
If you still cannot resolve them, refer to: https://www.jianshu.com/p/866f02f67578
- Make the master also take part in workloads
#kubectl describe node nodename | grep Taint (check whether the node currently carries a taint)
On a cluster initialized with kubeadm, Pods are not scheduled onto the Master node for security reasons; that is, the Master node does not take part in workloads. This is because the master node carries the node-role.kubernetes.io/master:NoSchedule taint.
#kubectl taint nodes hostname node-role.kubernetes.io/master- # replace hostname with the master's hostname
node/hostname untainted # the taint is removed; workloads can now be scheduled onto this master
#kubectl taint nodes hostname node-role.kubernetes.io/master=true:NoSchedule
node/hostname tainted # the node is tainted again; workloads will not be scheduled onto this master