[K8s 1.9 in Practice] Kubeadm 1.9 HA Cluster Deployment with Local Offline Images

cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=http://yum.kubernetes.io/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=0
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg
 https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
EOF
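With the repo file in place, it is worth confirming that the repo resolves and which 1.9 builds it serves before you go offline (a quick sanity check with standard yum tooling):

yum list --showduplicates kubeadm kubelet kubectl kubernetes-cni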

Install kubeadm, kubectl, and CNI

  • Download the RPMs (you will need a proxy/VPN to fetch them first)
mkdir -p /root/k8s/rpm
cd /root/k8s/rpm

# Install the download tool
yum install -y yum-utils

# Download the RPMs locally
yumdownloader kubelet kubeadm kubectl kubernetes-cni docker
scp root@10.129.6.224:/root/k8s/rpm/* /root/k8s/rpm
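Note that yumdownloader only fetches the packages named on the command line. If the offline machines might be missing dependencies, the --resolve and --destdir flags (both standard yum-utils options) pull the full dependency closure into the same directory:

# Also download all dependencies of the listed packages
yumdownloader --resolve --destdir /root/k8s/rpm kubelet kubeadm kubectl kubernetes-cni docker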
  • Offline install
mkdir -p /root/k8s/rpm
scp root@10.129.6.211:/root/k8s/rpm/* /root/k8s/rpm
yum install /root/k8s/rpm/*.rpm -y
  • Start docker and kubelet
# Enable and restart the services
systemctl enable docker && systemctl restart docker
systemctl enable kubelet && systemctl restart kubelet

How to Obtain the Images

  • Pull the gcr.io k8s images through a mirror accelerator, then export and import them, or push them to a local registry
# In China you can use the DaoCloud accelerator to download the images, then move them to the
# Kubernetes cluster machines with docker save / docker load. Accelerator link:
# https://www.daocloud.io/mirror#accelerator-doc

# Pull
docker pull gcr.io/google_containers/kube-proxy-amd64:v1.9.0

# Export
mkdir -p docker-images
docker save -o docker-images/kube-proxy-amd64 gcr.io/google_containers/kube-proxy-amd64:v1.9.0

# Import
docker load -i /root/kubeadm-ha/docker-images/kube-proxy-amd64
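kubeadm 1.9 needs more images than kube-proxy alone. Rather than saving them one at a time, a small loop can export the whole set in one pass; this is a sketch, and the image list below is an assumption — check docker images after a successful online init for the exact set your cluster pulls:

# Hypothetical image list; adjust names and tags to your environment
mkdir -p docker-images
for img in kube-apiserver-amd64:v1.9.0 kube-controller-manager-amd64:v1.9.0 kube-scheduler-amd64:v1.9.0 kube-proxy-amd64:v1.9.0 pause-amd64:3.0; do
  docker pull gcr.io/google_containers/$img
  # Save each image under its bare name, without the tag
  docker save -o docker-images/${img%%:*} gcr.io/google_containers/$img
done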
  • Or pull the gcr.io k8s images through a proxy or VPN, then export and import them, or push them to a local registry

(You will have to find your own way here; the details cannot be spelled out.)

Point kubelet at a Local Pause Image

Modify the kubelet configuration to use a locally hosted custom pause image.
Replace devhub.beisencorp.com/google_containers/pause-amd64:3.0 with the image in your own environment.

cat > /etc/systemd/system/kubelet.service.d/20-pod-infra-image.conf <<EOF
[Service]
Environment="KUBELET_EXTRA_ARGS=--pod-infra-container-image=devhub.beisencorp.com/google_containers/pause-amd64:3.0"
EOF
systemctl daemon-reload
systemctl restart kubelet
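To confirm the drop-in was actually picked up, systemctl can print the merged unit definition (standard systemd tooling, nothing specific to this setup):

# The merged unit should show the drop-in with the overridden pod-infra image
systemctl cat kubelet | grep pod-infra-container-image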

Kubeadm Init

  • We initialize the cluster from a config template, which makes it easy to point kubeadm at our external etcd cluster
  • devhub.beisencorp.com is our test image registry; replace it with your own, or manually import the images onto every machine
cat <<EOF > config.yaml
apiVersion: kubeadm.k8s.io/v1alpha1
kind: MasterConfiguration
etcd:
  endpoints:
  - https://10.129.6.211:2379
  - https://10.129.6.212:2379
  - https://10.129.6.213:2379
  caFile: /etc/etcd/ssl/ca.pem
  certFile: /etc/etcd/ssl/etcd.pem
  keyFile: /etc/etcd/ssl/etcd-key.pem
  dataDir: /var/lib/etcd
networking:
  podSubnet: 10.244.0.0/16
kubernetesVersion: 1.9.0
api:
  advertiseAddress: "10.129.6.220"
token: "b99a00.a144ef80536d4344"
tokenTTL: "0s"
apiServerCertSANs:
- etcd-host1
- etcd-host2
- etcd-host3
- 10.129.6.211
- 10.129.6.212
- 10.129.6.213
- 10.129.6.220
featureGates:
  CoreDNS: true
imageRepository: "devhub.beisencorp.com/google_containers"
EOF
  • Initialize the cluster
kubeadm init --config config.yaml 
  • Result
To start using your cluster, you need to run the following as a regular user:

 mkdir -p $HOME/.kube
 sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
 sudo chown $(id -u):$(id -g) $HOME/.kube/config
You can now join any number of machines by running the following on each node as root:

 kubeadm join --token b99a00.a144ef80536d4344 10.129.6.220:6443 --discovery-token-ca-cert-hash sha256:ebc2f64e9bcb14639f26db90288b988c90efc43828829c557b6b66bbe6d68dfa
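If you lose this join command later, the --discovery-token-ca-cert-hash value can be recomputed from the cluster CA using the openssl pipeline given in the kubeadm documentation (assuming the default CA path):

openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt | \
  openssl rsa -pubin -outform der 2>/dev/null | \
  openssl dgst -sha256 -hex | sed 's/^.* //'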
  • Check the node
[root@etcd-host1 k8s]# kubectl get node
NAME         STATUS     ROLES     AGE       VERSION
etcd-host1   NotReady   master    5h        v1.9.0
[root@etcd-host1 k8s]# kubectl get cs
NAME                 STATUS    MESSAGE              ERROR
scheduler            Healthy   ok
controller-manager   Healthy   ok
etcd-1               Healthy   {"health": "true"}
etcd-2               Healthy   {"health": "true"}
etcd-0               Healthy   {"health": "true"}
The node reports NotReady because no pod network add-on has been deployed yet; that is the next step.
  • Troubleshooting
If kubeadm initialization hangs at the line below, the likely cause is that the kubelet cgroup-driver setting does not match Docker's:
[apiclient] Created API client, waiting for the control plane to become ready
Check the logs with journalctl -t kubelet -S '2017-06-08' and you will find an error like:
error: failed to run Kubelet: failed to create kubelet: misconfiguration: kubelet cgroup driver: "systemd"
You need to change KUBELET_CGROUP_ARGS=--cgroup-driver=systemd to KUBELET_CGROUP_ARGS=--cgroup-driver=cgroupfs:

vi /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
#Environment="KUBELET_CGROUP_ARGS=--cgroup-driver=systemd"
Environment="KUBELET_CGROUP_ARGS=--cgroup-driver=cgroupfs"

systemctl daemon-reload && systemctl restart kubelet
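Before editing, you can check which driver Docker is actually using, so that kubelet is changed to match rather than guessed (standard docker CLI):

# Prints e.g. "Cgroup Driver: cgroupfs"
docker info 2>/dev/null | grep -i 'cgroup driver'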

Install the Pod Network Add-on

  • We use kube-router
# Fetch the raw manifest (the github.com/.../blob/ URL returns an HTML page, not the YAML)
wget https://raw.githubusercontent.com/cloudnativelabs/kube-router/master/daemonset/kubeadm-kuberouter.yaml

kubectl apply -f kubeadm-kuberouter.yaml
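To watch the add-on roll out before checking the full pod list, you can query the DaemonSet directly; this assumes the upstream manifest's naming (a DaemonSet called kube-router in kube-system with the k8s-app=kube-router label):

kubectl -n kube-system get ds kube-router
kubectl -n kube-system get po -l k8s-app=kube-router -o wide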
  • Result
[root@etcd-host1 k8s]# kubectl get po --all-namespaces
NAMESPACE     NAME                                 READY     STATUS    RESTARTS   AGE
kube-system   coredns-546545bc84-zc5dx             1/1       Running   0          6h
kube-system   kube-apiserver-etcd-host1            1/1       Running   0          6h
kube-system   kube-controller-manager-etcd-host1   1/1       Running   0          6h
kube-system   kube-proxy-pfj7x                     1/1       Running   0          6h
kube-system   kube-router-858b7                    1/1       Running   0          37m
kube-system   kube-scheduler-etcd-host1            1/1       Running   0          6h
[root@etcd-host1 k8s]#

Deploy the Other Master Nodes

  • Copy the master01 configuration to master02 and master03
# Copy the PKI certificates
mkdir -p /etc/kubernetes/pki
scp -r root@10.129.6.211:/etc/kubernetes/pki /etc/kubernetes

# Copy the init configuration
scp root@10.129.6.211:/root/k8s/config.yaml /etc/kubernetes/config.yaml
  • Initialize master02 and master03
# Initialize
kubeadm init --config /etc/kubernetes/config.yaml
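The copy-and-init steps above can also be scripted from master01 in one loop rather than run by hand on each node; a sketch, assuming root SSH access to the other masters (10.129.6.212 and 10.129.6.213 in this cluster):

for host in 10.129.6.212 10.129.6.213; do
  ssh root@$host 'mkdir -p /etc/kubernetes'
  # Push the PKI certificates and the init configuration
  scp -r /etc/kubernetes/pki root@$host:/etc/kubernetes
  scp /root/k8s/config.yaml root@$host:/etc/kubernetes/config.yaml
  # Run the same init with the shared config on the remote master
  ssh root@$host 'kubeadm init --config /etc/kubernetes/config.yaml'
done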

Verify the Deployment

For testing we make the masters schedulable.

By default, to keep the masters safe, application pods are not scheduled onto them. You can lift this restriction with:

kubectl taint nodes --all node-role.kubernetes.io/master-
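The nginx01 pods that appear in the verification output below were created as an ordinary test workload; in 1.9, kubectl run still created a Deployment, so something along these lines reproduces them (a sketch; the name nginx01 is taken from the output below):

kubectl run nginx01 --image=nginx --replicas=3
kubectl get deploy nginx01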

Recorded terminal verification

(asciinema recording of the verification session)
  • Verification
[zeming@etcd-host1 k8s]$ kubectl get node
NAME         STATUS    ROLES     AGE       VERSION
etcd-host1   Ready     master    6h        v1.9.0
etcd-host2   Ready     master    5m        v1.9.0
etcd-host3   Ready     master    49s       v1.9.0
[zeming@etcd-host1 k8s]$ kubectl get po --all-namespaces
NAMESPACE     NAME                                 READY     STATUS    RESTARTS   AGE
default       nginx01-d87b4fd74-2445l              1/1       Running   0          1h
default       nginx01-d87b4fd74-7966r              1/1       Running   0          1h
default       nginx01-d87b4fd74-rcbhw              1/1       Running   0          1h
kube-system   coredns-546545bc84-zc5dx             1/1       Running   0          3d
kube-system   kube-apiserver-etcd-host1            1/1       Running   0          3d
kube-system   kube-apiserver-etcd-host2            1/1       Running   0          3d
kube-system   kube-apiserver-etcd-host3            1/1       Running   0          3d
kube-system   kube-controller-manager-etcd-host1   1/1       Running   0          3d
kube-system   kube-controller-manager-etcd-host2   1/1       Running   0          3d
kube-system   kube-controller-manager-etcd-host3   1/1       Running   0          3d
kube-system   kube-proxy-gk95d                     1/1       Running   0          3d
kube-system   kube-proxy-mrzbq                     1/1       Running   0          3d
kube-system   kube-proxy-pfj7x                     1/1       Running   0          3d
kube-system   kube-router-bbgpq                    1/1       Running   0          3h
kube-system   kube-router-v2jbh                    1/1       Running   0          3h
kube-system   kube-router-w4cbb                    1/1       Running   0          3h
kube-system   kube-scheduler-etcd-host1            1/1       Running   0          3d
kube-system   kube-scheduler-etcd-host2            1/1       Running   0          3d
kube-system   kube-scheduler-etcd-host3            1/1       Running   0          3d
[zeming@etcd-host1 k8s]$

Primary/Backup Failover Test

  • Shut down the primary node master01 and watch the VIP fail over to master02
  • On master03, keep polling the node list the whole time to test high availability:
while true; do sleep 1; kubectl get node;date; done
Watch the VIP switch between primary and backup
# Watch the VIP state on the backup node switch from BACKUP to MASTER after master01 is shut down
[root@etcd-host2 net.d]# systemctl status keepalived
keepalived.service - LVS and VRRP High Availability Monitor
   Loaded: loaded (/usr/lib/systemd/system/keepalived.service; enabled; vendor preset: disabled)
   Active: active (running) since Tue 2018-01-22 13:54:17 CST; 21s ago

Jan 22 13:54:17 etcd-host2 Keepalived_vrrp[15908]: VRRP_Instance(VI_1) Transition to MASTER STATE
Jan 22 13:54:17 etcd-host2 Keepalived_vrrp[15908]: VRRP_Instance(VI_1) Received advert with higher priority 120, ours 110
Jan 22 13:54:17 etcd-host2 Keepalived_vrrp[15908]: VRRP_Instance(VI_1) Entering BACKUP STATE

# Switch to MASTER
[root@etcd-host2 net.d]# systemctl status keepalived
keepalived.service - LVS and VRRP High Availability Monitor
   Loaded: loaded (/usr/lib/systemd/system/keepalived.service; enabled; vendor preset: disabled)
   Active: active (running) since Tue 2018-01-22 13:54:17 CST; 4min 6s ago

Jan 22 14:03:02 etcd-host2 Keepalived_vrrp[15908]: VRRP_Instance(VI_1) Transition to MASTER STATE
Jan 22 14:03:03 etcd-host2 Keepalived_vrrp[15908]: VRRP_Instance(VI_1) Entering MASTER STATE
Jan 22 14:03:03 etcd-host2 Keepalived_vrrp[15908]: VRRP_Instance(VI_1) setting protocol VIPs.
Jan 22 14:03:03 etcd-host2 Keepalived_vrrp[15908]: Sending gratuitous ARP on ens32 for 10.129.6.220
Verify cluster high availability
# Watch master01 change to NotReady after it is shut down
[root@etcd-host3 ~]# while true; do sleep 1; kubectl get node;date; done
Tue Jan 22 14:03:16 CST 2018
NAME         STATUS    ROLES     AGE       VERSION
etcd-host1   Ready     master    19m       v1.9.0
etcd-host2   Ready     master    3d        v1.9.0
etcd-host3   Ready     master    3d        v1.9.0
Tue Jan 22 14:03:17 CST 2018
NAME         STATUS     ROLES     AGE       VERSION
etcd-host1   NotReady   master    19m       v1.9.0
etcd-host2   Ready      master    3d        v1.9.0
etcd-host3   Ready      master    3d        v1.9.0
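For reference, a minimal keepalived instance consistent with the log lines above (instance VI_1 on interface ens32, VIP 10.129.6.220, priority 110 on etcd-host2 versus 120 on master01) might look like the sketch below; the virtual_router_id is an assumption, and the full keepalived setup is outside the scope of this section:

cat > /etc/keepalived/keepalived.conf <<EOF
vrrp_instance VI_1 {
    state BACKUP
    interface ens32
    virtual_router_id 51        # assumption: any value 1-255, identical on all nodes
    priority 110                # etcd-host2; master01 carries the higher 120 seen in the logs
    advert_int 1
    virtual_ipaddress {
        10.129.6.220
    }
}
EOF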

References

# Kubernetes official documentation
https://kubernetes.io/docs/reference/setup-tools/kubeadm/kubeadm-init/
# kubeadm HA project documentation
https://github.com/indiketa/kubeadm-ha
# kubespray (formerly kargo) ansible project
https://github.com/kubernetes-incubator/kubespray/blob/master/docs/ha-mode.md
# Please credit the source when reposting. By Zeming. If anything is unclear, leave a comment.
http://xuzeming.top/2018/01/19/K8s-1-9%E5%AE%9E%E8%B7%B5-Kubeadm-HA-1-9-%E9%AB%98%E5%8F%AF%E7%94%A8-%E9%9B%86%E7%BE%A4-%E6%9C%AC%E5%9C%B0%E7%A6%BB%E7%BA%BF%E9%83%A8%E7%BD%B2/

This article is reposted from the Kubernetes Chinese community: [K8s 1.9 in Practice] Kubeadm 1.9 HA Cluster Deployment with Local Offline Images.
