Creating a highly available Kubernetes cluster with kubeadm

1. Environment: Ubuntu 16.04
2. Prepare 3 master nodes and any number of worker nodes; this walkthrough uses 3 masters and 3 workers.
3. Hosts:
k8s-master-1 172.16.0.100
k8s-master-2 172.16.0.101
k8s-master-3 172.16.0.102
k8s-slave-1 172.16.0.103
k8s-slave-2 172.16.0.104
k8s-slave-3 172.16.0.105

4. Kubernetes version v1.11.2, etcd version v3.3.9
5. When something goes wrong, check the logs in /var/log/syslog.
6. Official HA setup guide:
https://kubernetes.io/docs/setup/independent/high-availability/
The stacked deployment (etcd co-located on the masters, as in the guide above) did not come up successfully here, so this walkthrough uses an externally managed etcd cluster instead, without certificates, over plain HTTP (the TLS variant was not tested).

Installing etcd:
Configure the etcd cluster on all three masters; the steps below are identical on each master unless noted.
1. Install etcd manually
Download: https://github.com/coreos/etcd/releases/download/v3.3.9/etcd-v3.3.9-linux-amd64.tar.gz
2. Extract and link the binaries:
#tar -zxvf etcd-v3.3.9-linux-amd64.tar.gz \
&& mv etcd-v3.3.9-linux-amd64 /usr/local/ \
&& ln -s /usr/local/etcd-v3.3.9-linux-amd64/etcd /usr/sbin/etcd \
&& ln -s /usr/local/etcd-v3.3.9-linux-amd64/etcdctl /usr/sbin/etcdctl
3. Create the configuration file
#cat > /etc/default/etcd <<EOF
ETCD_NAME="k8s-master-1"
ETCD_DATA_DIR="/var/lib/etcd/default"
ETCD_LISTEN_PEER_URLS="http://0.0.0.0:2380"
ETCD_LISTEN_CLIENT_URLS="http://0.0.0.0:2379"
ETCD_INITIAL_ADVERTISE_PEER_URLS="http://172.16.0.100:2380"
ETCD_INITIAL_CLUSTER="k8s-master-1=http://172.16.0.100:2380,k8s-master-2=http://172.16.0.101:2380,k8s-master-3=http://172.16.0.102:2380"
ETCD_ADVERTISE_CLIENT_URLS="http://172.16.0.100:2379"
EOF
Note: replace ETCD_NAME, ETCD_INITIAL_ADVERTISE_PEER_URLS, and ETCD_ADVERTISE_CLIENT_URLS with the values for the current server.
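Since the file differs between the three masters only in those three fields, a small generator script can keep them from drifting apart. This is my own helper sketch, not from the original post; NODE_NAME and OUT are illustrative parameters, and the output goes to a local file for review before copying it into place.

```shell
# Sketch: generate the per-host etcd config from one name->IP table.
# NODE_NAME and OUT are assumptions for illustration.
NODE_NAME="${NODE_NAME:-k8s-master-1}"   # set to this host's name
OUT="${OUT:-etcd.env.generated}"         # review, then copy to /etc/default/etcd

# Map a master name to its IP (the addresses from the host list above).
ip_of() {
  case "$1" in
    k8s-master-1) echo 172.16.0.100 ;;
    k8s-master-2) echo 172.16.0.101 ;;
    k8s-master-3) echo 172.16.0.102 ;;
  esac
}

IP="$(ip_of "$NODE_NAME")"
cat > "$OUT" <<EOF
ETCD_NAME="$NODE_NAME"
ETCD_DATA_DIR="/var/lib/etcd/default"
ETCD_LISTEN_PEER_URLS="http://0.0.0.0:2380"
ETCD_LISTEN_CLIENT_URLS="http://0.0.0.0:2379"
ETCD_INITIAL_ADVERTISE_PEER_URLS="http://$IP:2380"
ETCD_INITIAL_CLUSTER="k8s-master-1=http://172.16.0.100:2380,k8s-master-2=http://172.16.0.101:2380,k8s-master-3=http://172.16.0.102:2380"
ETCD_ADVERTISE_CLIENT_URLS="http://$IP:2379"
EOF
```

Run it once per master with NODE_NAME set appropriately, then install the generated file as /etc/default/etcd.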
4. Create an etcd.service unit file for convenient startup
#cat > /etc/systemd/system/etcd.service <<EOF
[Unit]
Description=etcd - highly-available key value store
Documentation=https://github.com/coreos/etcd
Documentation=man:etcd
After=network.target
Wants=network-online.target

[Service]
Environment=DAEMON_ARGS=
Environment=ETCD_NAME=%H
Environment=ETCD_DATA_DIR=/var/lib/etcd/default
EnvironmentFile=-/etc/default/%p
Type=notify
User=root
PermissionsStartOnly=true
#ExecStart=/bin/sh -c "GOMAXPROCS=$(nproc) /usr/bin/etcd $DAEMON_ARGS"
ExecStart=/usr/sbin/etcd $DAEMON_ARGS
Restart=on-abnormal
#RestartSec=10s
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
Alias=etcd.service
EOF
5. Reload systemd and start etcd:
#systemctl daemon-reload;systemctl restart etcd.service
Check that the etcd cluster is healthy:
#sudo etcdctl cluster-health
member 7791479710dfbdf3 is healthy: got healthy result from http://172.16.0.100:2379
member af2e418298ffaf16 is healthy: got healthy result from http://172.16.0.102:2379
member bd64e48082d1ece5 is healthy: got healthy result from http://172.16.0.101:2379
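For a monitoring or pre-flight script, the same check can be scripted by counting healthy members. This is an addition of mine, not in the original post; the sample below replays the output shown above, while against a live cluster you would pipe `etcdctl cluster-health` in instead.

```shell
# Count healthy members in `etcdctl cluster-health` output.
# The sample replays the output shown above.
sample='member 7791479710dfbdf3 is healthy: got healthy result from http://172.16.0.100:2379
member af2e418298ffaf16 is healthy: got healthy result from http://172.16.0.102:2379
member bd64e48082d1ece5 is healthy: got healthy result from http://172.16.0.101:2379'
healthy=$(printf '%s\n' "$sample" | grep -c 'is healthy')
echo "healthy members: $healthy"   # -> healthy members: 3
```

On a master this becomes `etcdctl cluster-health | grep -c 'is healthy'`, which should print 3 for this cluster.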

Installing Docker, kubelet, kubeadm, and kubectl
1. Set kernel parameters:
#cat > kubernetes.conf <<EOF
net.bridge.bridge-nf-call-iptables=1
net.bridge.bridge-nf-call-ip6tables=1
net.ipv4.ip_forward=1
vm.swappiness=0
vm.overcommit_memory=1
vm.panic_on_oom=0
fs.inotify.max_user_watches=89100
EOF
#sudo cp kubernetes.conf /etc/sysctl.d/kubernetes.conf
#sudo sysctl -p /etc/sysctl.d/kubernetes.conf
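As an optional sanity check (my addition, not in the original post), the applied values can be read back from /proc/sys to confirm sysctl actually took effect:

```shell
# Read selected keys back from /proc/sys (dots map to path separators).
for key in net.ipv4.ip_forward vm.swappiness vm.overcommit_memory; do
  path="/proc/sys/$(echo "$key" | tr . /)"
  printf '%s = %s\n' "$key" "$(cat "$path")"
done
```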
2. Install Docker. At the time of writing, Kubernetes supports Docker 17.03: https://kubernetes.io/docs/setup/independent/install-kubeadm/#installing-docker
Docker installation:
#apt-get update
#apt-get install -y apt-transport-https ca-certificates curl software-properties-common
#curl -fsSL https://mirrors.aliyun.com/docker-ce/linux/ubuntu/gpg | apt-key add -
#add-apt-repository "deb https://mirrors.aliyun.com/docker-ce/linux/$(. /etc/os-release; echo "$ID") $(lsb_release -cs) stable"
#apt-get update && apt-get install -y docker-ce=$(apt-cache madison docker-ce | grep 17.03 | head -1 | awk '{print $3}')
3. Install kubelet, kubeadm, and kubectl:
#apt-get update && apt-get install -y apt-transport-https
#curl https://mirrors.aliyun.com/kubernetes/apt/doc/apt-key.gpg | apt-key add -
#cat <<EOF >/etc/apt/sources.list.d/kubernetes.list
deb https://mirrors.aliyun.com/kubernetes/apt/ kubernetes-xenial main
EOF
#apt-get update && apt-get install -y kubelet kubeadm kubectl

After kubelet is installed, the service keeps trying to restart, and the system log shows it failing repeatedly:
#tailf /var/log/syslog
failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file "/var/lib/kubelet/config.yaml", error: open /var/lib/kubelet/config.yaml: no such file or directory
This is expected at this stage: kubeadm init generates /var/lib/kubelet/config.yaml later, after which kubelet starts normally.

4. After installation, edit the kubelet service drop-in and point the pause image at a reachable mirror:
#vim /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
Environment="KUBELET_EXTRA_ARGS=--pod-infra-container-image=registry.cn-hangzhou.aliyuncs.com/google_containers/pause-amd64:3.1"

5. Reload systemd and restart kubelet:
#systemctl daemon-reload;systemctl restart kubelet

Deploying a multi-master cluster with kubeadm

1. Create kubeadm-config.yaml

Following the official guide verbatim kept failing here: with the external: block present, kubeadm init --config kubeadm-config.yaml complained that etcd port 2379 was already in use. Since my etcd cluster runs without certificates and is reached over plain HTTP, caFile, certFile, and keyFile are all omitted.
The following configuration starts correctly in my tests:
#cat <<EOF >kubeadm-config.yaml
apiVersion: kubeadm.k8s.io/v1alpha1
kind: MasterConfiguration
api:
  advertiseAddress: 172.16.0.100
networking:
  podSubnet: 10.244.0.0/16
etcd:
  endpoints:
  - http://172.16.0.100:2379
  - http://172.16.0.101:2379
  - http://172.16.0.102:2379
EOF

2. Initialize:
#kubeadm init --config kubeadm-config.yaml
3. After a successful init, set up kubectl access:
#mkdir -p $HOME/.kube
#sudo cp -a /etc/kubernetes/admin.conf $HOME/.kube/config
#sudo chown $(id -u):$(id -g) $HOME/.kube/config

4. Record the join command that kubeadm init prints; it is needed for the worker nodes later:
#kubeadm join 172.16.0.100:6443 --token y6lvhm.igcsteyrwyeaf26p --discovery-token-ca-cert-hash sha256:e410c5b00bbf1c5350f39c08b86c37c79994825d00bb3d6a0f6e52616196e1d0

5. Back up the certificates (optional):
#cp -a /etc/kubernetes/pki /opt/pki

6. Delete the apiserver certificate and key from the backup; if they are kept, initialization on the other two masters fails with certificate errors:
#rm -f /opt/pki/apiserver.crt && rm -f /opt/pki/apiserver.key

7. Sync the pki directory and kubeadm-config.yaml to the other two masters:
#rsync -avSH /opt/pki XXXX@172.16.0.101:/tmp
#rsync -avSH kubeadm-config.yaml XXXX@172.16.0.101:/tmp
#rsync -avSH /opt/pki XXXX@172.16.0.102:/tmp
#rsync -avSH kubeadm-config.yaml XXXX@172.16.0.102:/tmp

8. Log in to the other two masters (172.16.0.101 and 172.16.0.102) and run:
#mv /tmp/pki /opt/ && mv /tmp/kubeadm-config.yaml /opt/
#cp -a /opt/pki /etc/kubernetes/
#kubeadm init --config kubeadm-config.yaml

9. After all masters are initialized:

kubectl get nodes

NAME STATUS ROLES AGE VERSION
k8s-master-1 Ready master 1h v1.11.2
k8s-master-2 Ready master 49m v1.11.2
k8s-master-3 Ready master 46m v1.11.2

10. On each master, after initialization, set up kubectl access:
#mkdir -p $HOME/.kube
#sudo cp -a /etc/kubernetes/admin.conf $HOME/.kube/config
#sudo chown $(id -u):$(id -g) $HOME/.kube/config
Each master prints its own join command:
kubeadm join 172.16.0.100:6443 --token y6lvhm.igcsteyrwyeaf26p --discovery-token-ca-cert-hash sha256:e410c5b00bbf1c5350f39c08b86c37c79994825d00bb3d6a0f6e52616196e1d0
kubeadm join 172.16.0.101:6443 --token luphn9.a831g47qjbzxnbjb --discovery-token-ca-cert-hash sha256:e410c5b00bbf1c5350f39c08b86c37c79994825d00bb3d6a0f6e52616196e1d0
kubeadm join 172.16.0.102:6443 --token ptmcjc.cvxpp31bq66t1pl5 --discovery-token-ca-cert-hash sha256:e410c5b00bbf1c5350f39c08b86c37c79994825d00bb3d6a0f6e52616196e1d0
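If one of these join commands is lost, it can be reconstructed on a master: a fresh token is minted with `kubeadm token create` (tokens expire after 24 hours by default), and the --discovery-token-ca-cert-hash is the sha256 of the cluster CA public key. The helper function below is a sketch of the standard openssl pipeline for that hash, not something shown in the original post.

```shell
# Recompute the --discovery-token-ca-cert-hash from a CA certificate:
# extract the public key, convert it to DER, and take its sha256 digest.
ca_cert_hash() {
  openssl x509 -pubkey -in "$1" \
    | openssl rsa -pubin -outform der 2>/dev/null \
    | openssl dgst -sha256 -hex | sed 's/^.* //'
}
# On a master: ca_cert_hash /etc/kubernetes/pki/ca.crt
```

The printed hex string is the value to pass as sha256:&lt;hash&gt; to kubeadm join.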

11. Install the flannel network
On any one master:
sudo kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
sudo kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/k8s-manifests/kube-flannel-rbac.yml

12. Once everything is running, the state looks like this:
#kubectl get pods --all-namespaces -o wide
NAMESPACE NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE
kube-system coredns-777d78ff6f-c7hrn 1/1 Running 0 2h 10.244.1.2 k8s-master-2 <none>
kube-system coredns-777d78ff6f-rz7qz 1/1 Running 0 2h 10.244.1.3 k8s-master-2 <none>
kube-system kube-apiserver-k8s-master-1 1/1 Running 0 2h 172.16.0.100 k8s-master-1 <none>
kube-system kube-apiserver-k8s-master-2 1/1 Running 0 1h 172.16.0.101 k8s-master-2 <none>
kube-system kube-apiserver-k8s-master-3 1/1 Running 0 1h 172.16.0.102 k8s-master-3 <none>
kube-system kube-controller-manager-k8s-master-1 1/1 Running 0 2h 172.16.0.100 k8s-master-1 <none>
kube-system kube-controller-manager-k8s-master-2 1/1 Running 0 1h 172.16.0.101 k8s-master-2 <none>
kube-system kube-controller-manager-k8s-master-3 1/1 Running 0 1h 172.16.0.102 k8s-master-3 <none>
kube-system kube-flannel-ds-amd64-dnmj8 1/1 Running 0 1h 172.16.0.100 k8s-master-1 <none>
kube-system kube-flannel-ds-amd64-r4tq4 1/1 Running 0 1h 172.16.0.102 k8s-master-3 <none>
kube-system kube-flannel-ds-amd64-wxk5t 1/1 Running 0 1h 172.16.0.101 k8s-master-2 <none>
kube-system kube-proxy-29lkm 1/1 Running 0 1h 172.16.0.102 k8s-master-3 <none>
kube-system kube-proxy-hzjfw 1/1 Running 0 2h 172.16.0.100 k8s-master-1 <none>
kube-system kube-proxy-r2j9p 1/1 Running 0 1h 172.16.0.101 k8s-master-2 <none>
kube-system kube-scheduler-k8s-master-1 1/1 Running 0 2h 172.16.0.100 k8s-master-1 <none>
kube-system kube-scheduler-k8s-master-2 1/1 Running 0 1h 172.16.0.101 k8s-master-2 <none>
kube-system kube-scheduler-k8s-master-3 1/1 Running 0 1h 172.16.0.102 k8s-master-3 <none>

13. Run the join commands on the worker nodes, pointing each worker at a different master to spread the load. Then verify from any master:
root@k8s-master-1:/opt# kubectl get nodes
NAME STATUS ROLES AGE VERSION
k8s-master-1 Ready master 1h v1.11.2
k8s-master-2 Ready master 49m v1.11.2
k8s-master-3 Ready master 46m v1.11.2
k8s-slave-1 Ready <none> 25m v1.11.2
k8s-slave-2 Ready <none> 25m v1.11.2
k8s-slave-3 Ready <none> 24m v1.11.2

14. As a test, shut down one master: the status of the remaining nodes and pods shows no change.
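That failover test can also be scripted from any surviving master. The sketch below (my addition) replays the node list from step 13 as sample input; against the live cluster you would pipe in `kubectl get nodes --no-headers` instead.

```shell
# Report any node whose STATUS column is not "Ready".
# The sample replays the `kubectl get nodes` output from step 13.
sample='k8s-master-1 Ready master 1h v1.11.2
k8s-master-2 Ready master 49m v1.11.2
k8s-master-3 Ready master 46m v1.11.2
k8s-slave-1 Ready <none> 25m v1.11.2
k8s-slave-2 Ready <none> 25m v1.11.2
k8s-slave-3 Ready <none> 24m v1.11.2'
not_ready=$(printf '%s\n' "$sample" | awk '$2 != "Ready" { print $1 }')
if [ -z "$not_ready" ]; then echo "all nodes Ready"; else echo "NOT Ready: $not_ready"; fi
```

Note that a powered-off master still appears in the list (as NotReady) until it is deleted; the check above would flag it rather than ignore it.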

Reposted from: https://blog.51cto.com/xuchenhui/2165400
