K8S-Demo Cluster Practice 09: Deploy a Highly Available kube-scheduler Cluster

  • kube-scheduler is deployed on the 3 master nodes. After startup, the instances run a competitive leader election; one becomes the leader and the others block in standby.
  • If the leader becomes unavailable, the remaining instances hold a new election to produce a new leader, keeping the service highly available.

1. Create and distribute the kube-scheduler kubeconfig file

  • Create the kube-scheduler kubeconfig file
[root@master1 ~]# cd /opt/install/kubeconfig
[root@master1 kubeconfig]# kubectl config set-cluster k8s-demo \
  --certificate-authority=/opt/cert/ca.pem \
  --embed-certs=true \
  --server="https://##NODE_IP##:6443" \
  --kubeconfig=kube-scheduler.kubeconfig

[root@master1 kubeconfig]# kubectl config set-credentials system:kube-scheduler \
  --client-certificate=/opt/cert/kube-scheduler.pem \
  --client-key=/opt/cert/kube-scheduler-key.pem \
  --embed-certs=true \
  --kubeconfig=kube-scheduler.kubeconfig

[root@master1 kubeconfig]# kubectl config set-context system:kube-scheduler \
  --cluster=k8s-demo \
  --user=system:kube-scheduler \
  --kubeconfig=kube-scheduler.kubeconfig

[root@master1 kubeconfig]# kubectl config use-context system:kube-scheduler \
  --kubeconfig=kube-scheduler.kubeconfig

(The use-context step sets current-context in the file; without it, kube-scheduler cannot pick a context from this kubeconfig and will fail to start.)
  • Distribute the kube-scheduler kubeconfig file
[root@master1 ~]# cd /opt/install/kubeconfig
[root@master1 kubeconfig]# for node_ip in ${MASTER_IPS[@]}
  do
    echo ">>> ${node_ip}"
    sed -e "s/##NODE_IP##/${node_ip}/" kube-scheduler.kubeconfig > kube-scheduler-${node_ip}.kubeconfig
    scp kube-scheduler-${node_ip}.kubeconfig root@${node_ip}:/opt/k8s/etc/kube-scheduler.kubeconfig
  done
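The per-node templating above relies on sed replacing the `##NODE_IP##` placeholder embedded by set-cluster. A stand-alone sketch of that mechanism, using hypothetical IPs and a one-line stand-in for the real kubeconfig:

```shell
# Demo of the per-node placeholder rendering used above (hypothetical IPs).
workdir=$(mktemp -d)
# Stand-in for kube-scheduler.kubeconfig containing the placeholder:
printf 'server: "https://##NODE_IP##:6443"\n' > "$workdir/kubeconfig.tpl"
for node_ip in 192.168.66.10 192.168.66.11 192.168.66.12; do
  # Render one kubeconfig per node, each pointing at its local apiserver.
  sed -e "s/##NODE_IP##/${node_ip}/" "$workdir/kubeconfig.tpl" \
    > "$workdir/kube-scheduler-${node_ip}.kubeconfig"
done
cat "$workdir/kube-scheduler-192.168.66.10.kubeconfig"
```

Each rendered file carries its own node's IP, so the scheduler on every master talks to the apiserver on the same host.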

2. Create and distribute the kube-scheduler configuration file

  • Create the template file kube-scheduler.yaml.template
[root@master1 ~]# cd /opt/install/kubeconfig
[root@master1 kubeconfig]# cat >kube-scheduler.yaml.template <<EOF
apiVersion: kubescheduler.config.k8s.io/v1alpha1
kind: KubeSchedulerConfiguration
bindTimeoutSeconds: 600
clientConnection:
  burst: 200
  kubeconfig: "/opt/k8s/etc/kube-scheduler.kubeconfig"
  qps: 100
enableContentionProfiling: false
enableProfiling: true
hardPodAffinitySymmetricWeight: 1
healthzBindAddress: 127.0.0.1:10251
leaderElection:
  leaderElect: true
metricsBindAddress: ##NODE_IP##:10251
EOF
  • Distribute kube-scheduler.yaml
[root@master1 ~]# cd /opt/install/kubeconfig
[root@master1 kubeconfig]# for (( i=0; i < 3; i++ ))
  do
    sed -e "s/##NODE_IP##/${MASTER_IPS[i]}/" kube-scheduler.yaml.template > kube-scheduler-${MASTER_IPS[i]}.yaml
  done
[root@master1 kubeconfig]# for node_ip in ${MASTER_IPS[@]}
  do
    echo ">>> ${node_ip}"
    scp kube-scheduler-${node_ip}.yaml root@${node_ip}:/opt/k8s/etc/kube-scheduler.yaml
  done
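A leftover `##NODE_IP##` marker in a rendered file would make the scheduler try to bind to an invalid address, so it is worth checking before copying. A minimal sketch of such a check (the helper name and demo file are hypothetical):

```shell
# Fail when a rendered config still contains the ##NODE_IP## marker.
check_rendered() {
  if grep -q '##NODE_IP##' "$1"; then
    echo "ERROR: $1 still contains the placeholder" >&2
    return 1
  fi
  echo "ok: $1"
}

# Demo against a hypothetical, correctly rendered file:
demo=$(mktemp)
printf 'metricsBindAddress: 192.168.66.10:10251\n' > "$demo"
check_rendered "$demo"
```

Running `check_rendered` over each `kube-scheduler-*.yaml` before the scp loop catches a broken sed expression early.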

3. Create and distribute the kube-scheduler systemd unit file

  • Create the kube-scheduler service template
[root@master1 ~]# cd /opt/install/service
[root@master1 service]# cat > kube-scheduler.service.template <<EOF
[Unit]
Description=Kubernetes Scheduler
Documentation=https://github.com/GoogleCloudPlatform/kubernetes

[Service]
WorkingDirectory=${K8S_DIR}/kube-scheduler
ExecStart=/opt/k8s/bin/kube-scheduler \\
  --config=/opt/k8s/etc/kube-scheduler.yaml \\
  --bind-address=##NODE_IP## \\
  --secure-port=10259 \\
  --port=0 \\
  --tls-cert-file=/opt/k8s/etc/cert/kube-scheduler.pem \\
  --tls-private-key-file=/opt/k8s/etc/cert/kube-scheduler-key.pem \\
  --authentication-kubeconfig=/opt/k8s/etc/kube-scheduler.kubeconfig \\
  --client-ca-file=/opt/k8s/etc/cert/ca.pem \\
  --requestheader-allowed-names= \\
  --requestheader-client-ca-file=/opt/k8s/etc/cert/ca.pem \\
  --requestheader-extra-headers-prefix="X-Remote-Extra-" \\
  --requestheader-group-headers="X-Remote-Group" \\
  --requestheader-username-headers="X-Remote-User" \\
  --authorization-kubeconfig=/opt/k8s/etc/kube-scheduler.kubeconfig \\
  --logtostderr=true \\
  --v=2
Restart=always
RestartSec=5
StartLimitInterval=0

[Install]
WantedBy=multi-user.target
EOF
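Note the double backslashes at the end of each ExecStart line: inside an unquoted heredoc, `\\` collapses to a single `\` while the newline is kept, which is exactly the `\`-newline continuation syntax systemd expects (whereas `${K8S_DIR}` is expanded by the shell at creation time). A quick demonstration of that escaping:

```shell
# Show how "\\" in an unquoted heredoc becomes systemd's line continuation.
demo_unit=$(mktemp)
cat > "$demo_unit" <<EOF
ExecStart=/opt/k8s/bin/kube-scheduler \\
  --v=2
EOF
cat "$demo_unit"
```

The printed file ends its first line with a single `\`, so systemd reads the two lines as one ExecStart directive.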
  • Distribute to the 3 master nodes
[root@master1 ~]# cd /opt/install/service
[root@master1 service]# for (( i=0; i < 3; i++ ))
  do
    sed -e "s/##NODE_IP##/${MASTER_IPS[i]}/" kube-scheduler.service.template > kube-scheduler-${MASTER_IPS[i]}.service
  done
[root@master1 ~]# for node_ip in ${MASTER_IPS[@]}
  do
    echo ">>> ${node_ip}"
    scp kube-scheduler-${node_ip}.service root@${node_ip}:/etc/systemd/system/kube-scheduler.service
  done

4. Start and verify the kube-scheduler cluster service

[root@master1 ~]# for node_ip in ${MASTER_IPS[@]}
  do
    echo ">>> ${node_ip}"
    ssh root@${node_ip} "systemctl daemon-reload && systemctl enable kube-scheduler && systemctl restart kube-scheduler"
  done
[root@master1 ~]# for node_ip in ${MASTER_IPS[@]}
  do
    echo ">>> ${node_ip}"
    ssh root@${node_ip} "systemctl status kube-scheduler|grep Active"
  done
[root@master1 ~]# netstat -lnpt | grep kube-sch
tcp        0      0 192.168.66.10:10251    0.0.0.0:*               LISTEN      114702/kube-schedul
tcp        0      0 192.168.66.10:10259    0.0.0.0:*               LISTEN      114702/kube-schedul
  • If the status above is abnormal, check the logs
[root@master1 ~]# journalctl -u kube-scheduler
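To see which instance won the election, inspect the leader lock that kube-scheduler records as an annotation on the kube-system/kube-scheduler endpoints object (which lock resource is used depends on the --leader-elect-resource-lock default for your version). Extracting the holder can be sketched against a sample annotation value; the JSON literal below is illustrative, not captured from this cluster:

```shell
# Illustrative annotation value, as would be returned by:
#   kubectl -n kube-system get endpoints kube-scheduler \
#     -o jsonpath='{.metadata.annotations.control-plane\.alpha\.kubernetes\.io/leader}'
sample='{"holderIdentity":"master1_0a1b2c3d","leaseDurationSeconds":15,"renewTime":"2021-01-20T08:00:00Z"}'
# Pull out the holderIdentity field (the hostname of the current leader):
leader=$(printf '%s' "$sample" | sed -n 's/.*"holderIdentity":"\([^"]*\)".*/\1/p')
echo "current leader: $leader"
```

Stopping kube-scheduler on that node and re-running the kubectl query should show holderIdentity move to one of the other masters.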

Appendix: K8s-Demo cluster version information

Component    Version    Verification command
kubernetes   1.18.5     kubectl version
docker-ce    19.03.11   docker version (or rpm -qa | grep docker)
etcd         3.4.3      etcdctl version
calico       3.13.3     calico -v
coredns      1.7.0      coredns -version

Appendix: series links

K8S-Demo Cluster Practice 00: Build the Harbor image registry with security scanning
K8S-Demo Cluster Practice 01: Prepare the VMware virtual machine template
K8S-Demo Cluster Practice 02: Prepare VMware virtual machines: 3 masters + 3 nodes
K8S-Demo Cluster Practice 03: Prepare the x509 certificates for HTTPS communication between cluster components
K8S-Demo Cluster Practice 04: Deploy a three-node highly available etcd cluster
K8S-Demo Cluster Practice 05: Install kubectl and configure the cluster administrator account
K8S-Demo Cluster Practice 06: Deploy kube-apiserver to the master nodes (3 stateless instances)
K8S-Demo Cluster Practice 07: kube-apiserver high-availability scheme
K8S-Demo Cluster Practice 08: Deploy a highly available kube-controller-manager cluster
K8S-Demo Cluster Practice 09: Deploy a highly available kube-scheduler cluster
K8S-Demo Cluster Practice 10: Deploy the kube-proxy component in ipvs mode
K8S-Demo Cluster Practice 11: Deploy the kubelet component
K8S-Demo Cluster Practice 12: Deploy Calico networking
K8S-Demo Cluster Practice 13: Deploy CoreDNS for the cluster
K8S-Demo Cluster Practice 14: Deploy the cluster monitoring service Metrics Server
K8S-Demo Cluster Practice 15: Deploy Kubernetes Dashboard
K8S-Demo Cluster Practice 16: Deploy Kube-Prometheus
K8S-Demo Cluster Practice 17: Deploy the private cloud drive ownCloud (version 10.6)
K8S-Demo Cluster Practice 18: Build the first base container image in the universe


  • Start by using it: get to know k8s through hands-on practice, and understanding will come naturally as experience accumulates.
  • Share what you have learned; sow your own field of blessings and reap your own good fortune.
  • Strive for simplicity so the material is easy to understand; context such as versions and dates is part of the knowledge too.
  • Comments and questions are welcome; I usually reply and update the document on weekends.
  • Jason@vip.qq.com, 2021-1-20.