Troubleshooting VIP configuration issues in Kubernetes

Following the guide for deploying a Kubernetes HA cluster with kubeadm, I ran kubeadm init --config kubeadm-config.yaml --v=5 and initialization failed; the --v=5 flag was added to get more detailed log output.
(Reference: Kubernetes series, installing a Kubernetes high-availability cluster with kubeadm)
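
For context, a minimal sketch of the kind of kubeadm-config.yaml this setup uses. Only the control-plane endpoint (the keepalived VIP 10.128.4.18 and haproxy port 16443) and the version are taken from the logs below; the rest are assumptions, not the exact file from this deployment:

apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
kubernetesVersion: v1.18.9                 # matches the kubeadm build in the stack trace
controlPlaneEndpoint: "10.128.4.18:16443"  # keepalived VIP, load-balanced by haproxy
networking:
  podSubnet: "10.244.0.0/16"               # assumption: flannel's default pod CIDR
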
Following the hints in the output, I dug into the logs:

Unfortunately, an error has occurred:
                timed out waiting for the condition

        This error is likely caused by:
                - The kubelet is not running
                - The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)

        If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
                - 'systemctl status kubelet'
                - 'journalctl -xeu kubelet'

        Additionally, a control plane component may have crashed or exited when started by the container runtime.
        To troubleshoot, list all containers using your preferred container runtimes CLI.

        Here is one example how you may list all Kubernetes containers running in docker:
                - 'docker ps -a | grep kube | grep -v pause'
                Once you have found the failing container, you can inspect its logs with:
                - 'docker logs CONTAINERID'
 couldn't initialize a Kubernetes cluster
k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/init.runWaitControlPlanePhase
        /workspace/anago-v1.18.9-rc.0.79+8147d851af540a/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/init/waitcontrolplane.go:114
k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow.(*Runner).Run.func1
        /workspace/anago-v1.18.9-rc.0.79+8147d851af540a/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow/runner.go:234
k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow.(*Runner).visitAll
        /workspace/anago-v1.18.9-rc.0.79+8147d851af540a/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow/runner.go:422
k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow.(*Runner).Run
        /workspace/anago-v1.18.9-rc.0.79+8147d851af540a/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow/runner.go:207
k8s.io/kubernetes/cmd/kubeadm/app/cmd.NewCmdInit.func1
        /workspace/anago-v1.18.9-rc.0.79+8147d851af540a/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/cmd/kubeadm/app/cmd/init.go:147
k8s.io/kubernetes/vendor/github.com/spf13/cobra.(*Command).execute
        /workspace/anago-v1.18.9-rc.0.79+8147d851af540a/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/spf13/cobra/command.go:826
k8s.io/kubernetes/vendor/github.com/spf13/cobra.(*Command).ExecuteC
        /workspace/anago-v1.18.9-rc.0.79+8147d851af540a/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/spf13/cobra/command.go:914
k8s.io/kubernetes/vendor/github.com/spf13/cobra.(*Command).Execute
        /workspace/anago-v1.18.9-rc.0.79+8147d851af540a/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/spf13/cobra/command.go:864
k8s.io/kubernetes/cmd/kubeadm/app.Run
        /workspace/anago-v1.18.9-rc.0.79+8147d851af540a/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/cmd/kubeadm/app/kubeadm.go:50
main.main
        _output/dockerized/go/src/k8s.io/kubernetes/cmd/kubeadm/kubeadm.go:25
runtime.main
        /usr/local/go/src/runtime/proc.go:203
runtime.goexit
        /usr/local/go/src/runtime/asm_amd64.s:1357
error execution phase wait-control-plane
k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow.(*Runner).Run.func1
        /workspace/anago-v1.18.9-rc.0.79+8147d851af540a/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow/runner.go:235
k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow.(*Runner).visitAll
        /workspace/anago-v1.18.9-rc.0.79+8147d851af540a/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow/runner.go:422
k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow.(*Runner).Run
        /workspace/anago-v1.18.9-rc.0.79+8147d851af540a/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow/runner.go:207
k8s.io/kubernetes/cmd/kubeadm/app/cmd.NewCmdInit.func1
        /workspace/anago-v1.18.9-rc.0.79+8147d851af540a/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/cmd/kubeadm/app/cmd/init.go:147
k8s.io/kubernetes/vendor/github.com/spf13/cobra.(*Command).execute
        /workspace/anago-v1.18.9-rc.0.79+8147d851af540a/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/spf13/cobra/command.go:826
k8s.io/kubernetes/vendor/github.com/spf13/cobra.(*Command).ExecuteC
        /workspace/anago-v1.18.9-rc.0.79+8147d851af540a/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/spf13/cobra/command.go:914
k8s.io/kubernetes/vendor/github.com/spf13/cobra.(*Command).Execute
        /workspace/anago-v1.18.9-rc.0.79+8147d851af540a/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/spf13/cobra/command.go:864
k8s.io/kubernetes/cmd/kubeadm/app.Run
        /workspace/anago-v1.18.9-rc.0.79+8147d851af540a/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/cmd/kubeadm/app/kubeadm.go:50
main.main
        _output/dockerized/go/src/k8s.io/kubernetes/cmd/kubeadm/kubeadm.go:25
runtime.main
        /usr/local/go/src/runtime/proc.go:203
runtime.goexit
        /usr/local/go/src/runtime/asm_amd64.s:1357

Following up with journalctl -xeu kubelet, the errors in the log were:

Mar 11 17:59:33 master1 kubelet[6879]: I0311 17:59:33.594045    6879 kubelet_node_status.go:294] Setting node annotation to enable volume controller att
Mar 11 17:59:33 master1 kubelet[6879]: E0311 17:59:33.613681    6879 kubelet.go:2270] node "master1" not found
Mar 11 17:59:33 master1 kubelet[6879]: I0311 17:59:33.621491    6879 kubelet_node_status.go:70] Attempting to register node master1
Mar 11 17:59:33 master1 kubelet[6879]: E0311 17:59:33.621978    6879 kubelet_node_status.go:92] Unable to register node "master1" with API server: Post 
Mar 11 17:59:33 master1 kubelet[6879]: E0311 17:59:33.713811    6879 kubelet.go:2270] node "master1" not found
Mar 11 17:59:33 master1 kubelet[6879]: E0311 17:59:33.813955    6879 kubelet.go:2270] node "master1" not found
Mar 11 17:59:33 master1 kubelet[6879]: E0311 17:59:33.914068    6879 kubelet.go:2270] node "master1" not found
Mar 11 17:59:34 master1 kubelet[6879]: E0311 17:59:34.014206    6879 kubelet.go:2270] node "master1" not found
Mar 11 17:59:34 master1 kubelet[6879]: E0311 17:59:34.114347    6879 kubelet.go:2270] node "master1" not found
Mar 11 17:59:34 master1 kubelet[6879]: E0311 17:59:34.214460    6879 kubelet.go:2270] node "master1" not found
Mar 11 17:59:34 master1 kubelet[6879]: E0311 17:59:34.231304    6879 kubelet.go:2190] Container runtime network not ready: NetworkReady=false reason:Net
Mar 11 17:59:34 master1 kubelet[6879]: E0311 17:59:34.314587    6879 kubelet.go:2270] node "master1" not found
Mar 11 17:59:34 master1 kubelet[6879]: E0311 17:59:34.414701    6879 kubelet.go:2270] node "master1" not found
Mar 11 17:59:34 master1 kubelet[6879]: E0311 17:59:34.514826    6879 kubelet.go:2270] node "master1" not found
Mar 11 17:59:34 master1 kubelet[6879]: E0311 17:59:34.614937    6879 kubelet.go:2270] node "master1" not found
Mar 11 17:59:34 master1 kubelet[6879]: E0311 17:59:34.715068    6879 kubelet.go:2270] node "master1" not found
Mar 11 17:59:34 master1 kubelet[6879]: E0311 17:59:34.815196    6879 kubelet.go:2270] node "master1" not found
Mar 11 17:59:34 master1 kubelet[6879]: E0311 17:59:34.915320    6879 kubelet.go:2270] node "master1" not found
Mar 11 17:59:35 master1 kubelet[6879]: E0311 17:59:35.015436    6879 kubelet.go:2270] node "master1" not found
Mar 11 17:59:35 master1 kubelet[6879]: E0311 17:59:35.115570    6879 kubelet.go:2270] node "master1" not found
Mar 11 17:59:35 master1 kubelet[6879]: E0311 17:59:35.215691    6879 kubelet.go:2270] node "master1" not found
Mar 11 17:59:35 master1 kubelet[6879]: E0311 17:59:35.315815    6879 kubelet.go:2270] node "master1" not found
Mar 11 17:59:35 master1 kubelet[6879]: E0311 17:59:35.415941    6879 kubelet.go:2270] node "master1" not found
Mar 11 17:59:35 master1 kubelet[6879]: E0311 17:59:35.516064    6879 kubelet.go:2270] node "master1" not found
Mar 11 17:59:35 master1 kubelet[6879]: E0311 17:59:35.616203    6879 kubelet.go:2270] node "master1" not found
Mar 11 17:59:35 master1 kubelet[6879]: E0311 17:59:35.716318    6879 kubelet.go:2270] node "master1" not found
Mar 11 17:59:35 master1 kubelet[6879]: E0311 17:59:35.816447    6879 kubelet.go:2270] node "master1" not found
Mar 11 17:59:35 master1 kubelet[6879]: E0311 17:59:35.916578    6879 kubelet.go:2270] node "master1" not found
Mar 11 17:59:36 master1 kubelet[6879]: E0311 17:59:36.016691    6879 kubelet.go:2270] node "master1" not found
Mar 11 17:59:36 master1 kubelet[6879]: E0311 17:59:36.116824    6879 kubelet.go:2270] node "master1" not found
Mar 11 17:59:36 master1 kubelet[6879]: E0311 17:59:36.216938    6879 kubelet.go:2270] node "master1" not found
Mar 11 17:59:36 master1 kubelet[6879]: E0311 17:59:36.317069    6879 kubelet.go:2270] node "master1" not found
Mar 11 17:59:36 master1 kubelet[6879]: E0311 17:59:36.417197    6879 kubelet.go:2270] node "master1" not found
Mar 11 17:59:36 master1 kubelet[6879]: W0311 17:59:36.419041    6879 cni.go:237] Unable to update cni config: no networks found in /etc/cni/net.d

After reading up on "[kubernetes] Unable to update cni config: no networks found in /etc/cni/net.d" (that CNI warning is expected before a network plugin is installed, so it is not the root cause here), I ran kubectl get nodes and got an error:

Unable to connect to the server: http: server gave HTTP response to HTTPS client
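
The error is about the endpoint kubectl talks to, so the first thing worth confirming is which server the kubeconfig points at (a quick check, assuming admin.conf was copied to ~/.kube/config as the guide describes):

# Print the API server URL from the active kubeconfig context
kubectl config view --minify -o jsonpath='{.clusters[0].cluster.server}'
# Expected here: https://10.128.4.18:16443 (VIP and haproxy port)

If it does point at the VIP, the problem is whatever is answering on that address, not kubectl itself.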

Running journalctl -f -u kubelet.service shows slightly more detailed logs:

Mar 11 18:05:19 master1 kubelet[6879]: E0311 18:05:19.910903    6879 kubelet.go:2190] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized
Mar 11 18:05:19 master1 kubelet[6879]: E0311 18:05:19.954870    6879 kubelet.go:2270] node "master1" not found
Mar 11 18:05:20 master1 kubelet[6879]: E0311 18:05:20.055052    6879 kubelet.go:2270] node "master1" not found
Mar 11 18:05:20 master1 kubelet[6879]: E0311 18:05:20.155194    6879 kubelet.go:2270] node "master1" not found
Mar 11 18:05:20 master1 kubelet[6879]: E0311 18:05:20.255314    6879 kubelet.go:2270] node "master1" not found
Mar 11 18:05:20 master1 kubelet[6879]: E0311 18:05:20.355455    6879 kubelet.go:2270] node "master1" not found
Mar 11 18:05:20 master1 kubelet[6879]: E0311 18:05:20.455611    6879 kubelet.go:2270] node "master1" not found
Mar 11 18:05:20 master1 kubelet[6879]: E0311 18:05:20.555746    6879 kubelet.go:2270] node "master1" not found
Mar 11 18:05:20 master1 kubelet[6879]: E0311 18:05:20.655873    6879 kubelet.go:2270] node "master1" not found
Mar 11 18:05:20 master1 kubelet[6879]: E0311 18:05:20.756003    6879 kubelet.go:2270] node "master1" not found
Mar 11 18:05:20 master1 kubelet[6879]: E0311 18:05:20.856149    6879 kubelet.go:2270] node "master1" not found
Mar 11 18:05:20 master1 kubelet[6879]: E0311 18:05:20.956303    6879 kubelet.go:2270] node "master1" not found
Mar 11 18:05:21 master1 kubelet[6879]: E0311 18:05:21.056424    6879 kubelet.go:2270] node "master1" not found
Mar 11 18:05:21 master1 kubelet[6879]: E0311 18:05:21.156542    6879 kubelet.go:2270] node "master1" not found
Mar 11 18:05:21 master1 kubelet[6879]: E0311 18:05:21.256671    6879 kubelet.go:2270] node "master1" not found
Mar 11 18:05:21 master1 kubelet[6879]: E0311 18:05:21.321245    6879 controller.go:136] failed to ensure node lease exists, will retry in 7s, error: Get https://10.128.4.18:16443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master1?timeout=10s: http: server gave HTTP response to HTTPS client
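
The lease error is the key clue: the kubelet calls https://10.128.4.18:16443 and gets a plain-HTTP response, meaning whatever answers on the VIP is speaking HTTP instead of passing TLS through to the apiserver. A quick way to confirm from any node (VIP and port taken from the log line above):

# HTTPS to the VIP: a failed TLS handshake or an HTTP answer means
# the proxy is not doing TCP passthrough to the apiserver
curl -vk https://10.128.4.18:16443/version

# Plain HTTP: if this gets a response, the listener itself is HTTP, not TLS
curl -v http://10.128.4.18:16443/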

I also ran docker ps -a | grep kube | grep -v pause to check the control-plane containers.
On Huawei Cloud, you additionally need to configure the VPC: in the subnet, bind this IP to keepalived's virtual IP (otherwise the VIP is not routable in the cloud network).
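
For reference, a minimal keepalived.conf sketch for the master that holds the VIP; only the VIP 10.128.4.18 comes from the logs above, while the interface name, router id and priorities are assumptions to be adapted per node:

! /etc/keepalived/keepalived.conf (sketch)
vrrp_instance VI_1 {
    state MASTER             # BACKUP on the other two masters
    interface eth0           # assumption: use the node's actual NIC
    virtual_router_id 51
    priority 100             # lower values (e.g. 90, 80) on the backups
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass k8s-vip
    }
    virtual_ipaddress {
        10.128.4.18          # the VIP that clients and haproxy use
    }
}

Once keepalived holds the address, ip addr show on the MASTER node should list 10.128.4.18 as a secondary address on that interface.
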
The configuration above fixed the VIP itself, but problems remained, so the natural next step was to work through haproxy, again starting from the error below (reference: how to configure haproxy to proxy HTTPS):

Unable to connect to the server: http: server gave HTTP response to HTTPS client

Since the api-server already has its own certificates, it makes more sense for haproxy to act purely as a proxy (TCP passthrough) and leave HTTPS to the backend server. Comparing against the "Deploying a Kubernetes HA cluster with kubeadm" guide, my haproxy configuration had these extra lines:

stick-table type ip size 200k expire 30m
stick on src

(Reference: HAProxy session persistence (2): stick table.) These two lines give source-IP session persistence: each client IP is pinned to the same apiserver backend, with entries kept in a 200k-slot table that expires after 30 minutes of inactivity.
The complete /etc/haproxy/haproxy.cfg is as follows:

#---------------------------------------------------------------------
# Example configuration for a possible web application.  See the
# full configuration options online.
#
#   http://haproxy.1wt.eu/download/1.4/doc/configuration.txt
#
#---------------------------------------------------------------------

#---------------------------------------------------------------------
# Global settings
#---------------------------------------------------------------------
global
    # to have these messages end up in /var/log/haproxy.log you will
    # need to:
    #
    # 1) configure syslog to accept network log events.  This is done
    #    by adding the '-r' option to the SYSLOGD_OPTIONS in
    #    /etc/sysconfig/syslog
    #
    # 2) configure local2 events to go to the /var/log/haproxy.log
    #   file. A line like the following can be added to
    #   /etc/sysconfig/syslog
    #
    #    local2.*                       /var/log/haproxy.log
    #
    log         127.0.0.1 local2

    chroot      /var/lib/haproxy
    pidfile     /var/run/haproxy.pid
    maxconn     4000
    user        haproxy
    group       haproxy
    daemon

    # turn on stats unix socket
    stats socket /var/lib/haproxy/stats

#---------------------------------------------------------------------
# common defaults that all the 'listen' and 'backend' sections will
# use if not designated in their block
#---------------------------------------------------------------------
defaults
    mode                    http
    log                     global
    option                  httplog
    option                  dontlognull
    option http-server-close
    option forwardfor       except 127.0.0.0/8
    option                  redispatch
    retries                 3
    timeout http-request    10s
    timeout queue           1m
    timeout connect         10s
    timeout client          1m
    timeout server          1m
    timeout http-keep-alive 10s
    timeout check           10s
    maxconn                 3000

#---------------------------------------------------------------------
# main frontend which proxies to the backends
#---------------------------------------------------------------------
frontend  kubernetes-apiserver
    mode                        tcp
    bind                        *:16443
    option                      tcplog
    default_backend             kubernetes-apiserver

#---------------------------------------------------------------------
# HAProxy statistics page
#---------------------------------------------------------------------
listen stats
    bind            *:1080
    stats auth      admin:awesomePassword
    stats refresh   5s
    stats realm     HAProxy\ Statistics
    stats uri       /admin?stats

#---------------------------------------------------------------------
# round robin balancing between the various backends
#---------------------------------------------------------------------
backend kubernetes-apiserver
    mode        tcp
    balance     roundrobin
    stick-table type ip size 200k expire 30m
    stick on src
    server  master1  10.128.4.164:6443 check
    server  master2  10.128.4.251:6443 check
    server  master3  10.128.4.211:6443 check
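
After reloading haproxy, the stats socket enabled in the global section gives a quick health check of the three apiserver backends (socat is an assumption; any tool that can write to a UNIX socket works):

# Apply the configuration
systemctl restart haproxy

# Ask haproxy for backend status over its stats socket; each master*
# line should report UP once the apiservers answer on 6443
echo "show stat" | socat stdio /var/lib/haproxy/stats | cut -d',' -f1,2,18

With the VIP bound in the VPC and haproxy doing plain TCP passthrough on 16443, the "server gave HTTP response to HTTPS client" errors should disappear and kubeadm init can complete.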