Deploying k8s on a CTYun (China Telecom Cloud) server: etcdmain: listen tcp xx.xx.xx.xx:2380: bind: cannot assign requested address

I won't go over the configuration steps that come before initialization here.
This post focuses on the problems you run into when deploying k8s with a public IP.

[root@d0tihpxwtqddgpwm manifests]# kubeadm init --apiserver-advertise-address 182.42.61.199 --apiserver-bind-port=6443 --pod-network-cidr=10.244.0.0/16  --service-cidr=10.96.0.0/16 --kubernetes-version=v1.18.0 --image-repository registry.aliyuncs.com/google_containers --ignore-preflight-errors=Swap
W1110 13:56:45.282491   26690 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
[init] Using Kubernetes version: v1.18.0
[preflight] Running pre-flight checks
        [WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
        [WARNING Hostname]: hostname "d0tihpxwtqddgpwm.novalocal" could not be reached
        [WARNING Hostname]: hostname "d0tihpxwtqddgpwm.novalocal": lookup d0tihpxwtqddgpwm.novalocal on 114.114.114.114:53: no such host
        [WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [d0tihpxwtqddgpwm.novalocal kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 182.42.61.199]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [d0tihpxwtqddgpwm.novalocal localhost] and IPs [182.42.61.199 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [d0tihpxwtqddgpwm.novalocal localhost] and IPs [182.42.61.199 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
W1110 13:56:56.445726   26690 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
[control-plane] Creating static Pod manifest for "kube-scheduler"
W1110 13:56:56.454798   26690 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.

        Unfortunately, an error has occurred:
                timed out waiting for the condition

        This error is likely caused by:
                - The kubelet is not running
                - The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)

        If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
                - 'systemctl status kubelet'
                - 'journalctl -xeu kubelet'

        Additionally, a control plane component may have crashed or exited when started by the container runtime.
        To troubleshoot, list all containers using your preferred container runtimes CLI.

        Here is one example how you may list all Kubernetes containers running in docker:
                - 'docker ps -a | grep kube | grep -v pause'
                Once you have found the failing container, you can inspect its logs with:
                - 'docker logs CONTAINERID'

error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
To see the stack trace of this error execute with --v=5 or higher

kubeadm join parameter reference link

[root@d0tihpxwtqddgpwm ~]# docker ps -a
CONTAINER ID        IMAGE                                               COMMAND                  CREATED             STATUS                      PORTS               NAMES
35dc312407e8        303ce5db0e90                                        "etcd --advertise-cl…"   7 seconds ago       Exited (1) 5 seconds ago                        k8s_etcd_etcd-d0tihpxwtqddgpwm.novalocal_kube-system_f5397f81486cc4bfe33e891f54d17382_20
007016b181b6        74060cea7f70                                        "kube-apiserver --ad…"   20 seconds ago      Up 18 seconds                                   k8s_kube-apiserver_kube-apiserver-d0tihpxwtqddgpwm.novalocal_kube-system_287bd348f0ff06f4041537d86bef13d0_13
c7d51274cb1e        74060cea7f70                                        "kube-apiserver --ad…"   58 seconds ago      Exited (2) 34 seconds ago                       k8s_kube-apiserver_kube-apiserver-d0tihpxwtqddgpwm.novalocal_kube-system_287bd348f0ff06f4041537d86bef13d0_12
7679c15258a4        d3e55153f52f                                        "kube-controller-man…"   8 minutes ago       Up 8 minutes                                    k8s_kube-controller-manager_kube-controller-manager-d0tihpxwtqddgpwm.novalocal_kube-system_15bc58da92044dd99d1758196aded4c9_0
199ef44dc35e        a31f78c7c8ce                                        "kube-scheduler --au…"   8 minutes ago       Up 8 minutes                                    k8s_kube-scheduler_kube-scheduler-d0tihpxwtqddgpwm.novalocal_kube-system_ca2aa1b3224c37fa1791ef6c7d883bbe_0
717e58dd77ca        registry.aliyuncs.com/google_containers/pause:3.2   "/pause"                 8 minutes ago       Up 8 minutes                                    k8s_POD_kube-scheduler-d0tihpxwtqddgpwm.novalocal_kube-system_ca2aa1b3224c37fa1791ef6c7d883bbe_0
e6c29b121358        registry.aliyuncs.com/google_containers/pause:3.2   "/pause"                 8 minutes ago       Up 8 minutes                                    k8s_POD_kube-controller-manager-d0tihpxwtqddgpwm.novalocal_kube-system_15bc58da92044dd99d1758196aded4c9_0
e91a405e1ffb        registry.aliyuncs.com/google_containers/pause:3.2   "/pause"                 8 minutes ago       Up 8 minutes                                    k8s_POD_kube-apiserver-d0tihpxwtqddgpwm.novalocal_kube-system_287bd348f0ff06f4041537d86bef13d0_0
06306874fc96        registry.aliyuncs.com/google_containers/pause:3.2   "/pause"                 8 minutes ago       Up 8 minutes                                    k8s_POD_etcd-d0tihpxwtqddgpwm.novalocal_kube-system_f5397f81486cc4bfe33e891f54d17382_0

[root@d0tihpxwtqddgpwm ~]# docker logs -f 35dc312407e8
[WARNING] Deprecated '--logger=capnslog' flag is set; use '--logger=zap' flag instead
2020-11-10 01:51:47.542346 I | etcdmain: etcd Version: 3.4.3
2020-11-10 01:51:47.542426 I | etcdmain: Git SHA: 3cf2f69b5
2020-11-10 01:51:47.542430 I | etcdmain: Go Version: go1.12.12
2020-11-10 01:51:47.542434 I | etcdmain: Go OS/Arch: linux/amd64
2020-11-10 01:51:47.542438 I | etcdmain: setting maximum number of CPUs to 2, total number of available CPUs is 2
[WARNING] Deprecated '--logger=capnslog' flag is set; use '--logger=zap' flag instead
2020-11-10 01:51:47.542590 I | embed: peerTLS: cert = /etc/kubernetes/pki/etcd/peer.crt, key = /etc/kubernetes/pki/etcd/peer.key, trusted-ca = /etc/kubernetes/pki/etcd/ca.crt, client-cert-auth = true, crl-file =
2020-11-10 01:51:47.542785 C | etcdmain: listen tcp 182.42.61.199:2380: bind: cannot assign requested address
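The fatal line above is not etcd-specific: "cannot assign requested address" (EADDRNOTAVAIL) is the OS refusing to bind a socket to an IP that is not configured on any local interface. A minimal sketch that reproduces the same behavior (the IPs are illustrative; 192.0.2.1 is a TEST-NET address that won't be on your NIC):

```python
import socket

def try_bind(ip, port=0):
    """Return True if a TCP socket can bind to (ip, port) on this host."""
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    try:
        s.bind((ip, port))
        return True
    except OSError:
        # EADDRNOTAVAIL here is the same failure etcd reports
        return False
    finally:
        s.close()

print(try_bind("0.0.0.0"))    # wildcard address: always binds
print(try_bind("192.0.2.1"))  # address not on any local NIC: fails
```

This is exactly why pointing etcd's listen URLs at 0.0.0.0 works even though the public IP does not.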

The apiserver and etcd containers are both in the Exited state, and the etcd container's log shows it cannot bind the IP:port. Checking the NIC addresses (e.g. with `ip addr`) confirms the public IP is not bound to any interface — on this kind of cloud the public IP is NATed to the instance rather than assigned to the NIC.
Solution:
1. Edit /etc/kubernetes/manifests/etcd.yaml and change the IPs in --listen-client-urls and --listen-peer-urls to 0.0.0.0.
The contents of etcd.yaml are as follows:

apiVersion: v1
kind: Pod
metadata:
  annotations:
    kubeadm.kubernetes.io/etcd.advertise-client-urls: https://182.42.61.199:2379
  creationTimestamp: null
  labels:
    component: etcd
    tier: control-plane
  name: etcd
  namespace: kube-system
spec:
  containers:
  - command:
    - etcd
    - --advertise-client-urls=https://182.42.61.199:2379
    - --cert-file=/etc/kubernetes/pki/etcd/server.crt
    - --client-cert-auth=true
    - --data-dir=/var/lib/etcd
    - --initial-advertise-peer-urls=https://182.42.61.199:2380
    - --initial-cluster=d0tihpxwtqddgpwm.novalocal=https://182.42.61.199:2380
    - --key-file=/etc/kubernetes/pki/etcd/server.key
    - --listen-client-urls=https://0.0.0.0:2379  # modified
    - --listen-metrics-urls=http://127.0.0.1:2381
    - --listen-peer-urls=https://0.0.0.0:2380  # modified
    - --name=d0tihpxwtqddgpwm.novalocal
    - --peer-cert-file=/etc/kubernetes/pki/etcd/peer.crt
    - --peer-client-cert-auth=true
    - --peer-key-file=/etc/kubernetes/pki/etcd/peer.key
    - --peer-trusted-ca-file=/etc/kubernetes/pki/etcd/ca.crt
    - --snapshot-count=10000
    - --trusted-ca-file=/etc/kubernetes/pki/etcd/ca.crt
    image: registry.aliyuncs.com/google_containers/etcd:3.4.3-0
    imagePullPolicy: IfNotPresent
    livenessProbe:
      failureThreshold: 8
      httpGet:
        host: 127.0.0.1
        path: /health
        port: 2381
        scheme: HTTP
      initialDelaySeconds: 15
      timeoutSeconds: 15
    name: etcd
    resources: {}
    volumeMounts:
    - mountPath: /var/lib/etcd
      name: etcd-data
    - mountPath: /etc/kubernetes/pki/etcd
      name: etcd-certs
  hostNetwork: true
  priorityClassName: system-cluster-critical
  volumes:
  - hostPath:
      path: /etc/kubernetes/pki/etcd
      type: DirectoryOrCreate
    name: etcd-certs
  - hostPath:
      path: /var/lib/etcd
      type: DirectoryOrCreate
    name: etcd-data
status: {}
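The edit in step 1 can also be scripted instead of done by hand. A minimal sketch that rewrites the two listen flags in the manifest text (the manifest path is the kubeadm default; adjust if yours differs):

```python
import re
from pathlib import Path

# Default static-pod manifest path written by kubeadm
MANIFEST = Path("/etc/kubernetes/manifests/etcd.yaml")

def patch_listen_urls(text):
    """Point --listen-client-urls and --listen-peer-urls at 0.0.0.0."""
    text = re.sub(r"(--listen-client-urls=)\S+",
                  r"\g<1>https://0.0.0.0:2379", text)
    return re.sub(r"(--listen-peer-urls=)\S+",
                  r"\g<1>https://0.0.0.0:2380", text)

# Usage (as root on the node):
# MANIFEST.write_text(patch_listen_urls(MANIFEST.read_text()))
```

The advertise URLs are deliberately left untouched — they must keep the public IP so peers and clients are told the reachable address.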

2. Reset the node with kubeadm reset (copy etcd.yaml somewhere safe before resetting).
3. Re-initialize the cluster. As soon as /etc/kubernetes/manifests/etcd.yaml is recreated, quickly delete it and move the etcd.yaml you saved before the reset back into /etc/kubernetes/manifests/.
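Step 3 is a manual race against kubeadm. A small polling helper can do the swap automatically; the paths in the usage comment are assumptions for this node:

```python
import shutil
import time
from pathlib import Path

def swap_when_created(target, replacement, timeout=300, poll=0.05):
    """Poll until `target` appears, then overwrite it with `replacement`.

    Returns True if the swap happened within `timeout` seconds.
    """
    target = Path(target)
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if target.exists():
            # Replace kubeadm's freshly generated manifest with our patched copy
            shutil.copy(replacement, target)
            return True
        time.sleep(poll)
    return False

# Usage: run in a second terminal while `kubeadm init` is running, e.g.
# swap_when_created("/etc/kubernetes/manifests/etcd.yaml", "/root/etcd.yaml")
```

The kubelet re-reads the static-pod manifest when it changes, so the patched etcd comes up with the 0.0.0.0 listen URLs.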

That finally solved the problem!
