Initializing the k8s master node with kubeadm init and deploying the pod network

kubeadm init on the master node (k8s 1.14.2) and pod network deployment (document 1)

Machine plan:

192.168.171.128 master

192.168.171.129 node1

192.168.171.130 node2

Each machine needs 2 CPUs; 1.5 GB of memory is enough.

Note: make sure time is synchronized on all machines; otherwise Prometheus will fail to display data because the browser and server clocks differ.

Reference video for deploying k8s with kubeadm:

准备环境_30分钟部署一个Kubernetes集群_K8s视频-51CTO学堂

1. Master node environment preparation and kubeadm init of the k8s 1.14.2 cluster: 192.168.171.128

[root@localhost ~]# hostnamectl set-hostname master

[root@master ~]# ntpdate cn.ntp.org.cn    #sync time

[root@master ~]# cat /etc/hosts

127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4

::1         localhost localhost.localdomain localhost6 localhost6.localdomain6

192.168.171.128 master

192.168.171.129 node1

192.168.171.130 node2

Disable the swap partition:

[root@master ~]# swapoff -a              #disable swap

[root@master ~]# free -m |grep Swap

Swap:             0           0           0

[root@master ~]# grep -i swap /etc/fstab

#/dev/mapper/centos-swap swap                    swap    defaults        0 0
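The two checks above (runtime swap usage and the fstab entry) can be scripted; a minimal sketch, where the `swap_is_off` helper name is ours and the function takes `free -m` output as text so it can be tested:

```shell
# Succeed only when the Swap line from `free -m` output reports 0 MB total.
swap_is_off() {
  echo "$1" | awk '/^Swap:/ { found = 1; if ($2 != 0) bad = 1 }
                   END { exit ((found && !bad) ? 0 : 1) }'
}

# Example: swap_is_off "$(free -m)" && echo "swap is off"
```

Keeping the fstab swap line commented out (as shown above) is what makes the change survive a reboot; `swapoff -a` alone only lasts until the next boot.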

Disable SELinux and the firewall:

[root@master ~]# grep -w SELINUX /etc/selinux/config

# SELINUX= can take one of these three values:

SELINUX=disabled

[root@master ~]# setenforce 0

[root@master ~]# systemctl stop firewalld

[root@master ~]# systemctl disable firewalld

Pass bridged IPv4 traffic to iptables chains:

CentOS and RHEL users have reported that bridged IPv4 traffic can bypass the iptables chains, so some traffic is dropped before it reaches the containers. Configure the two parameters below to make sure no traffic is lost.

[root@master ~]# cat <<EOF > /etc/sysctl.d/k8s.conf

net.bridge.bridge-nf-call-ip6tables = 1

net.bridge.bridge-nf-call-iptables = 1

EOF

[root@master ~]# sysctl --system    #reload so the settings take effect
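It is easy to typo one of the two keys, so a quick verification helper can pay off; a sketch (the `bridge_sysctls_ok` name and the optional file argument are ours, added to make it testable):

```shell
# Succeed only when both bridge-nf-call keys are present and set to 1
# in the given sysctl conf file (defaults to the file written above).
bridge_sysctls_ok() {
  local conf="${1:-/etc/sysctl.d/k8s.conf}" key
  for key in net.bridge.bridge-nf-call-iptables net.bridge.bridge-nf-call-ip6tables; do
    grep -q "^$key = 1" "$conf" || { echo "missing or not 1: $key" >&2; return 1; }
  done
}

# Example: bridge_sysctls_ok && sysctl --system
```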

Install docker 19.03-ce (install the same pinned docker-ce version on all nodes):

[root@master ~]# rz

Upload the docker 19.03 package

[root@master ~]# ls docker19.03-ce_lixian.tar.gz

docker19.03-ce_lixian.tar.gz

[root@master ~]# tar -zxf docker19.03-ce_lixian.tar.gz

[root@master ~]# ls

docker19.03-ce_lixian  docker19.03-ce_lixian.tar.gz

[root@master ~]# cd docker19.03-ce_lixian

[root@master docker19.03-ce_lixian]# ls

containerd.io-1.2.6-3.3.el7.x86_64.rpm  docker-ce-19.03.2-3.el7.x86_64.rpm  docker-ce-cli-19.03.2-3.el7.x86_64.rpm

[root@master docker19.03-ce_lixian]# yum -y localinstall *.rpm

[root@master docker19.03-ce_lixian]# cd

[root@master ~]# mkdir /etc/docker

[root@master ~]# vim /etc/docker/daemon.json

{

 "graph":"/data/docker"

 }

[root@master ~]# mkdir /data/docker -p
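A malformed daemon.json keeps dockerd from starting at all, so it is worth validating the file before restarting the service. A sketch, using python3 only as a JSON parser (the `daemon_json_ok` helper name is ours):

```shell
# Fail fast if the given daemon.json is not valid JSON; invalid JSON makes
# `systemctl start docker` fail with an unhelpful generic error.
daemon_json_ok() {
  python3 -m json.tool "${1:-/etc/docker/daemon.json}" >/dev/null 2>&1
}

# Example: daemon_json_ok && systemctl restart docker
```

On Docker 19.03 the `"graph"` key still works, but `"data-root"` is the preferred name for relocating Docker's data directory.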

[root@master ~]# systemctl daemon-reload

[root@master ~]# systemctl start docker

[root@master ~]# systemctl enable docker

[root@master ~]# systemctl status docker

● docker.service - Docker Application Container Engine

   Loaded: loaded (/usr/lib/systemd/system/docker.service; enabled; vendor preset: disabled)

   Active: active (running) since Sun 2019-12-08 10:10:39 CST; 20s ago

[root@master ~]# docker info

Client:

 Debug Mode: false

Server:

 Containers: 0

  Running: 0

  Paused: 0

  Stopped: 0

 Images: 0

 Server Version: 19.03.2

Install k8s 1.14.2:

[root@master ~]# rz

Upload the k8s 1.14.2 package

[root@master ~]# ls

docker19.03-ce_lixian  docker19.03-ce_lixian.tar.gz  k8s-1.14.2-images.tar.gz

[root@master ~]# tar -zxf k8s-1.14.2-images.tar.gz

[root@master ~]# ls

docker19.03-ce_lixian  docker19.03-ce_lixian.tar.gz  k8s-1.14.2-images  k8s-1.14.2-images.tar.gz

[root@master ~]# cd k8s-1.14.2-images

[root@master k8s-1.14.2-images]# ls

coredns_1.3.1.tar  flannel_v0.11.0-amd64.tar   kube-controller-manager_v1.14.2.tar  kubernetes-dashboard-amd64_1.10.0.tar  pause_3.1.tar

etcd_v3.3.10.tar   kube-apiserver_v1.14.2.tar  kube-proxy_v1.14.2.tar               kube-scheduler_v1.14.2.tar

[root@master k8s-1.14.2-images]# for i in `ls`;do docker load -i $i;done    #load all the images; load them on each node as well
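The loop above keeps going even if one tar is corrupt. A slightly more defensive variant stops at the first failure; the `LOAD_CMD` override is our own testing hook, not a docker feature:

```shell
# Load every image tar passed as an argument; abort on the first failure.
# LOAD_CMD can be overridden (hypothetical hook) to exercise the loop without docker.
load_images() {
  local cmd="${LOAD_CMD:-docker load -i}" t
  for t in "$@"; do
    $cmd "$t" || { echo "failed to load: $t" >&2; return 1; }
  done
}

# Example: load_images *.tar
```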

[root@master k8s-1.14.2-images]# docker images

REPOSITORY                              TAG                 IMAGE ID            CREATED             SIZE

k8s.gcr.io/kube-proxy                   v1.14.2             5c24210246bb        6 months ago        82.1MB

k8s.gcr.io/kube-apiserver               v1.14.2             5eeff402b659        6 months ago        210MB

k8s.gcr.io/kube-controller-manager      v1.14.2             8be94bdae139        6 months ago        158MB

k8s.gcr.io/kube-scheduler               v1.14.2             ee18f350636d        6 months ago        81.6MB

quay.io/coreos/flannel                  v0.11.0-amd64       ff281650a721        10 months ago       52.6MB

k8s.gcr.io/coredns                      1.3.1               eb516548c180        10 months ago       40.3MB

k8s.gcr.io/etcd                         3.3.10              2c4adeb21b4f        12 months ago       258MB

k8s.gcr.io/kubernetes-dashboard-amd64   v1.10.0             0dab2435c100        15 months ago       122MB

k8s.gcr.io/pause                        3.1                 da86e6ba6ca1        23 months ago       742kB

[root@master k8s-1.14.2-images]# cd

 #install k8s 1.14.2 and related components

[root@master ~]# ls

docker19.03-ce_lixian  docker19.03-ce_lixian.tar.gz  k8s-1.14.2-images  k8s-1.14.2-images.tar.gz

[root@master ~]# vim /etc/yum.repos.d/kubernetes.repo           #configure the yum repo for k8s 1.14.2

[kubernetes]

name=Kubernetes

baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/

enabled=1

gpgcheck=0

#repo_gpgcheck=1

#gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg

[root@master ~]# yum clean all

[root@master ~]# yum -y makecache

[root@master ~]# yum install -y kubelet-1.14.2

[root@master ~]# yum -y install kubeadm-1.14.2

[root@master ~]# yum -y install kubectl-1.14.2

[root@master ~]# systemctl enable kubelet   #only enable on boot; no need to start the service now, kubeadm init will start it automatically

Initialize the cluster on the master:

[root@master ~]# kubeadm init \

--apiserver-advertise-address=192.168.171.128 \

--image-repository registry.cn-hangzhou.aliyuncs.com/google_containers \

--kubernetes-version v1.14.2 \

--service-cidr=10.1.0.0/16 \

--pod-network-cidr=10.244.0.0/16  #then press Enter

Explanation of the flags:

#--apiserver-advertise-address=192.168.171.128  the IP the apiserver advertises and listens on

#--image-repository registry.cn-hangzhou.aliyuncs.com/google_containers  registry to pull images from; the default source is hosted abroad, so point it at a domestic mirror

#--kubernetes-version v1.14.2  the Kubernetes version to deploy

#--service-cidr=10.1.0.0/16  the IP range for services

#--pod-network-cidr=10.244.0.0/16  the IP range allocated to pod containers
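Note that --service-cidr and --pod-network-cidr must not overlap with each other (or with the host network), or service and pod addresses will collide. A minimal overlap check in pure bash (both function names are ours):

```shell
# Convert a dotted quad to a 32-bit integer.
ip2int() { local a b c d; IFS=. read -r a b c d <<< "$1"; echo $(( (a<<24) | (b<<16) | (c<<8) | d )); }

# Succeed (exit 0) when the two CIDRs overlap.
cidr_overlap() {
  local n1="${1%/*}" p1="${1#*/}" n2="${2%/*}" p2="${2#*/}"
  local p=$(( p1 < p2 ? p1 : p2 ))                       # compare under the shorter prefix
  local mask=$(( (0xFFFFFFFF << (32 - p)) & 0xFFFFFFFF ))
  [ $(( $(ip2int "$n1") & mask )) -eq $(( $(ip2int "$n2") & mask )) ]
}

cidr_overlap 10.1.0.0/16 10.244.0.0/16 || echo "service and pod CIDRs do not overlap"
```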

[init] Using Kubernetes version: v1.14.2

[preflight] Running pre-flight checks

[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/

[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 19.03.2. Latest validated version: 18.09

[preflight] Pulling images required for setting up a Kubernetes cluster

[preflight] This might take a minute or two, depending on the speed of your internet connection

[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'

Output once initialization completes:

......

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube

  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config

  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.

Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:

  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.171.128:6443 --token 5p3kdg.gfiarg9jp1fcwo5x \

    --discovery-token-ca-cert-hash sha256:73508763959b3c6c7070e8039018469baa9f26f7a1741cfc5460c9443a8c5a1e
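If this join output is lost, `kubeadm token create --print-join-command` regenerates a full join command, and the --discovery-token-ca-cert-hash value can be recomputed from the cluster CA certificate with the standard openssl recipe. A sketch (helper name is ours; the default path assumes the standard kubeadm layout):

```shell
# Print the sha256 hash of the CA public key, in the form kubeadm join expects
# after the "sha256:" prefix.
ca_cert_hash() {
  openssl x509 -pubkey -noout -in "${1:-/etc/kubernetes/pki/ca.crt}" \
    | openssl pkey -pubin -outform der 2>/dev/null \
    | openssl dgst -sha256 \
    | awk '{ print $NF }'
}

# Example:
#   kubeadm join 192.168.171.128:6443 --token <token> \
#     --discovery-token-ca-cert-hash sha256:$(ca_cert_hash)
```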

After initialization completes, run the commands shown at the end of the output; otherwise the kubectl command cannot be used:

[root@master ~]# mkdir -p $HOME/.kube

[root@master ~]# sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config

[root@master ~]# sudo chown $(id -u):$(id -g) $HOME/.kube/config

[root@master ~]# ls /root/.kube/

config

[root@master ~]# kubectl get nodes

NAME     STATUS     ROLES    AGE   VERSION

master   NotReady   master   3m6s   v1.14.2            #NotReady because no pod network (flannel or calico) is installed yet

[root@master ~]# docker images    #check the images pulled automatically during init

REPOSITORY                                                                    TAG                 IMAGE ID            CREATED             SIZE

k8s.gcr.io/kube-proxy                                                         v1.14.2             5c24210246bb        6 months ago        82.1MB

registry.cn-hangzhou.aliyuncs.com/google_containers/kube-proxy                v1.14.2             5c24210246bb        6 months ago        82.1MB

k8s.gcr.io/kube-apiserver                                                     v1.14.2             5eeff402b659        6 months ago        210MB

registry.cn-hangzhou.aliyuncs.com/google_containers/kube-apiserver            v1.14.2             5eeff402b659        6 months ago        210MB

k8s.gcr.io/kube-controller-manager                                            v1.14.2             8be94bdae139        6 months ago        158MB

registry.cn-hangzhou.aliyuncs.com/google_containers/kube-controller-manager   v1.14.2             8be94bdae139        6 months ago        158MB

k8s.gcr.io/kube-scheduler                                                     v1.14.2             ee18f350636d        6 months ago        81.6MB

registry.cn-hangzhou.aliyuncs.com/google_containers/kube-scheduler            v1.14.2             ee18f350636d        6 months ago        81.6MB

quay.io/coreos/flannel                                                        v0.11.0-amd64       ff281650a721        10 months ago       52.6MB

k8s.gcr.io/coredns                                                            1.3.1               eb516548c180        10 months ago       40.3MB

registry.cn-hangzhou.aliyuncs.com/google_containers/coredns                   1.3.1               eb516548c180        10 months ago       40.3MB

registry.cn-hangzhou.aliyuncs.com/google_containers/etcd                      3.3.10              2c4adeb21b4f        12 months ago       258MB

k8s.gcr.io/etcd                                                               3.3.10              2c4adeb21b4f        12 months ago       258MB

k8s.gcr.io/kubernetes-dashboard-amd64                                         v1.10.0             0dab2435c100        15 months ago       122MB

k8s.gcr.io/pause                                                              3.1                 da86e6ba6ca1        23 months ago       742kB

registry.cn-hangzhou.aliyuncs.com/google_containers/pause                     3.1                 da86e6ba6ca1        23 months ago       742kB

[root@master ~]# ls /etc/kubernetes/pki/     #certificates generated automatically during init

apiserver.crt              apiserver-etcd-client.key  apiserver-kubelet-client.crt  ca.crt  etcd                front-proxy-ca.key      front-proxy-client.key  sa.pub

apiserver-etcd-client.crt  apiserver.key              apiserver-kubelet-client.key  ca.key  front-proxy-ca.crt  front-proxy-client.crt  sa.key

Install the pod network (flannel in this case), which provides network IPs to pod containers. Without it the nodes stay NotReady; they only become Ready after it is deployed:

Note: the flannel yaml uses a DaemonSet controller, so a flannel pod is created automatically on every node

[root@master ~]# wget https://raw.githubusercontent.com/coreos/flannel/a70459be0084506e4ec919aa1c114638878db11b/Documentation/kube-flannel.yml

[root@master ~]# kubectl get nodes

NAME     STATUS     ROLES    AGE     VERSION

master   NotReady   master   8m22s   v1.14.2

node1    NotReady   <none>   3m27s   v1.14.2

node2    NotReady   <none>   110s    v1.14.2

[root@master ~]# rz

Upload the flannel yaml file and the flannel image

[root@master ~]# ls

docker19.03-ce_lixian  docker19.03-ce_lixian.tar.gz  flannel_v0.11.0-amd64.tar  k8s-1.14.2-images  k8s-1.14.2-images.tar.gz  kube-flannel.yml

[root@master ~]# docker load -i flannel_v0.11.0-amd64.tar   #load it on all other nodes too, for easier deployment

[root@master ~]# docker images |grep flannel

quay.io/coreos/flannel                                                        v0.11.0-amd64       ff281650a721        10 months ago       52.6MB

[root@master ~]# cat kube-flannel.yml    #no changes needed

---

kind: ClusterRole

apiVersion: rbac.authorization.k8s.io/v1beta1

metadata:

  name: flannel

rules:

  - apiGroups:

      - ""

    resources:

      - pods

    verbs:

      - get

  - apiGroups:

      - ""

    resources:

      - nodes

    verbs:

      - list

      - watch

  - apiGroups:

      - ""

    resources:

      - nodes/status

    verbs:

      - patch

---

kind: ClusterRoleBinding

apiVersion: rbac.authorization.k8s.io/v1beta1

metadata:

  name: flannel

roleRef:

  apiGroup: rbac.authorization.k8s.io

  kind: ClusterRole

  name: flannel

subjects:

- kind: ServiceAccount

  name: flannel

  namespace: kube-system

---

apiVersion: v1

kind: ServiceAccount

metadata:

  name: flannel

  namespace: kube-system

---

kind: ConfigMap

apiVersion: v1

metadata:

  name: kube-flannel-cfg

  namespace: kube-system

  labels:

    tier: node

    app: flannel

data:

  cni-conf.json: |

    {

      "name": "cbr0",

      "plugins": [

        {

          "type": "flannel",

          "delegate": {

            "hairpinMode": true,

            "isDefaultGateway": true

          }

        },

        {

          "type": "portmap",

          "capabilities": {

            "portMappings": true

          }

        }

      ]

    }

  net-conf.json: |

    {

      "Network": "10.244.0.0/16",

      "Backend": {

        "Type": "vxlan"

      }

    }

---

apiVersion: extensions/v1beta1

kind: DaemonSet

metadata:

  name: kube-flannel-ds-amd64

  namespace: kube-system

  labels:

    tier: node

    app: flannel

spec:

  template:

    metadata:

      labels:

        tier: node

        app: flannel

    spec:

      hostNetwork: true

      nodeSelector:

        beta.kubernetes.io/arch: amd64

      tolerations:

      - operator: Exists

        effect: NoSchedule

      serviceAccountName: flannel

      initContainers:

      - name: install-cni

        image: quay.io/coreos/flannel:v0.11.0-amd64

        command:

        - cp

        args:

        - -f

        - /etc/kube-flannel/cni-conf.json

        - /etc/cni/net.d/10-flannel.conflist

        volumeMounts:

        - name: cni

          mountPath: /etc/cni/net.d

        - name: flannel-cfg

          mountPath: /etc/kube-flannel/

      containers:

      - name: kube-flannel

        image: quay.io/coreos/flannel:v0.11.0-amd64

        command:

        - /opt/bin/flanneld

        args:

        - --ip-masq

        - --kube-subnet-mgr

        resources:

          requests:

            cpu: "100m"

            memory: "50Mi"

          limits:

            cpu: "100m"

            memory: "50Mi"

        securityContext:

          privileged: true

        env:

        - name: POD_NAME

          valueFrom:

            fieldRef:

              fieldPath: metadata.name

        - name: POD_NAMESPACE

          valueFrom:

            fieldRef:

              fieldPath: metadata.namespace

        volumeMounts:

        - name: run

          mountPath: /run

        - name: flannel-cfg

          mountPath: /etc/kube-flannel/

      volumes:

        - name: run

          hostPath:

            path: /run

        - name: cni

          hostPath:

            path: /etc/cni/net.d

        - name: flannel-cfg

          configMap:

            name: kube-flannel-cfg

---

apiVersion: extensions/v1beta1

kind: DaemonSet

metadata:

  name: kube-flannel-ds-arm64

  namespace: kube-system

  labels:

    tier: node

    app: flannel

spec:

  template:

    metadata:

      labels:

        tier: node

        app: flannel

    spec:

      hostNetwork: true

      nodeSelector:

        beta.kubernetes.io/arch: arm64

      tolerations:

      - operator: Exists

        effect: NoSchedule

      serviceAccountName: flannel

      initContainers:

      - name: install-cni

        image: quay.io/coreos/flannel:v0.11.0-arm64

        command:

        - cp

        args:

        - -f

        - /etc/kube-flannel/cni-conf.json

        - /etc/cni/net.d/10-flannel.conflist

        volumeMounts:

        - name: cni

          mountPath: /etc/cni/net.d

        - name: flannel-cfg

          mountPath: /etc/kube-flannel/

      containers:

      - name: kube-flannel

        image: quay.io/coreos/flannel:v0.11.0-arm64

        command:

        - /opt/bin/flanneld

        args:

        - --ip-masq

        - --kube-subnet-mgr

        resources:

          requests:

            cpu: "100m"

            memory: "50Mi"

          limits:

            cpu: "100m"

            memory: "50Mi"

        securityContext:

          privileged: true

        env:

        - name: POD_NAME

          valueFrom:

            fieldRef:

              fieldPath: metadata.name

        - name: POD_NAMESPACE

          valueFrom:

            fieldRef:

              fieldPath: metadata.namespace

        volumeMounts:

        - name: run

          mountPath: /run

        - name: flannel-cfg

          mountPath: /etc/kube-flannel/

      volumes:

        - name: run

          hostPath:

            path: /run

        - name: cni

          hostPath:

            path: /etc/cni/net.d

        - name: flannel-cfg

          configMap:

            name: kube-flannel-cfg

---

apiVersion: extensions/v1beta1

kind: DaemonSet

metadata:

  name: kube-flannel-ds-arm

  namespace: kube-system

  labels:

    tier: node

    app: flannel

spec:

  template:

    metadata:

      labels:

        tier: node

        app: flannel

    spec:

      hostNetwork: true

      nodeSelector:

        beta.kubernetes.io/arch: arm

      tolerations:

      - operator: Exists

        effect: NoSchedule

      serviceAccountName: flannel

      initContainers:

      - name: install-cni

        image: quay.io/coreos/flannel:v0.11.0-arm

        command:

        - cp

        args:

        - -f

        - /etc/kube-flannel/cni-conf.json

        - /etc/cni/net.d/10-flannel.conflist

        volumeMounts:

        - name: cni

          mountPath: /etc/cni/net.d

        - name: flannel-cfg

          mountPath: /etc/kube-flannel/

      containers:

      - name: kube-flannel

        image: quay.io/coreos/flannel:v0.11.0-arm

        command:

        - /opt/bin/flanneld

        args:

        - --ip-masq

        - --kube-subnet-mgr

        resources:

          requests:

            cpu: "100m"

            memory: "50Mi"

          limits:

            cpu: "100m"

            memory: "50Mi"

        securityContext:

          privileged: true

        env:

        - name: POD_NAME

          valueFrom:

            fieldRef:

              fieldPath: metadata.name

        - name: POD_NAMESPACE

          valueFrom:

            fieldRef:

              fieldPath: metadata.namespace

        volumeMounts:

        - name: run

          mountPath: /run

        - name: flannel-cfg

          mountPath: /etc/kube-flannel/

      volumes:

        - name: run

          hostPath:

            path: /run

        - name: cni

          hostPath:

            path: /etc/cni/net.d

        - name: flannel-cfg

          configMap:

            name: kube-flannel-cfg

---

apiVersion: extensions/v1beta1

kind: DaemonSet

metadata:

  name: kube-flannel-ds-ppc64le

  namespace: kube-system

  labels:

    tier: node

    app: flannel

spec:

  template:

    metadata:

      labels:

        tier: node

        app: flannel

    spec:

      hostNetwork: true

      nodeSelector:

        beta.kubernetes.io/arch: ppc64le

      tolerations:

      - operator: Exists

        effect: NoSchedule

      serviceAccountName: flannel

      initContainers:

      - name: install-cni

        image: quay.io/coreos/flannel:v0.11.0-ppc64le

        command:

        - cp

        args:

        - -f

        - /etc/kube-flannel/cni-conf.json

        - /etc/cni/net.d/10-flannel.conflist

        volumeMounts:

        - name: cni

          mountPath: /etc/cni/net.d

        - name: flannel-cfg

          mountPath: /etc/kube-flannel/

      containers:

      - name: kube-flannel

        image: quay.io/coreos/flannel:v0.11.0-ppc64le

        command:

        - /opt/bin/flanneld

        args:

        - --ip-masq

        - --kube-subnet-mgr

        resources:

          requests:

            cpu: "100m"

            memory: "50Mi"

          limits:

            cpu: "100m"

            memory: "50Mi"

        securityContext:

          privileged: true

        env:

        - name: POD_NAME

          valueFrom:

            fieldRef:

              fieldPath: metadata.name

        - name: POD_NAMESPACE

          valueFrom:

            fieldRef:

              fieldPath: metadata.namespace

        volumeMounts:

        - name: run

          mountPath: /run

        - name: flannel-cfg

          mountPath: /etc/kube-flannel/

      volumes:

        - name: run

          hostPath:

            path: /run

        - name: cni

          hostPath:

            path: /etc/cni/net.d

        - name: flannel-cfg

          configMap:

            name: kube-flannel-cfg

---

apiVersion: extensions/v1beta1

kind: DaemonSet

metadata:

  name: kube-flannel-ds-s390x

  namespace: kube-system

  labels:

    tier: node

    app: flannel

spec:

  template:

    metadata:

      labels:

        tier: node

        app: flannel

    spec:

      hostNetwork: true

      nodeSelector:

        beta.kubernetes.io/arch: s390x

      tolerations:

      - operator: Exists

        effect: NoSchedule

      serviceAccountName: flannel

      initContainers:

      - name: install-cni

        image: quay.io/coreos/flannel:v0.11.0-s390x

        command:

        - cp

        args:

        - -f

        - /etc/kube-flannel/cni-conf.json

        - /etc/cni/net.d/10-flannel.conflist

        volumeMounts:

        - name: cni

          mountPath: /etc/cni/net.d

        - name: flannel-cfg

          mountPath: /etc/kube-flannel/

      containers:

      - name: kube-flannel

        image: quay.io/coreos/flannel:v0.11.0-s390x

        command:

        - /opt/bin/flanneld

        args:

        - --ip-masq

        - --kube-subnet-mgr

        resources:

          requests:

            cpu: "100m"

            memory: "50Mi"

          limits:

            cpu: "100m"

            memory: "50Mi"

        securityContext:

          privileged: true

        env:

        - name: POD_NAME

          valueFrom:

            fieldRef:

              fieldPath: metadata.name

        - name: POD_NAMESPACE

          valueFrom:

            fieldRef:

              fieldPath: metadata.namespace

        volumeMounts:

        - name: run

          mountPath: /run

        - name: flannel-cfg

          mountPath: /etc/kube-flannel/

      volumes:

        - name: run

          hostPath:

            path: /run

        - name: cni

          hostPath:

            path: /etc/cni/net.d

        - name: flannel-cfg

          configMap:

            name: kube-flannel-cfg

[root@master ~]# kubectl apply -f kube-flannel.yml    #deploy the flannel network

Observe again after a while:

[root@master ~]# kubectl get pod -n kube-system -o wide

NAME                             READY   STATUS    RESTARTS   AGE   IP                NODE     NOMINATED NODE   READINESS GATES

coredns-d5947d4b-dts66           1/1     Running   0          27m   10.244.0.2        master   <none>           <none>

coredns-d5947d4b-sk6ff           1/1     Running   0          27m   10.244.1.2        node1    <none>           <none>

etcd-master                      1/1     Running   0          26m   192.168.171.128   master   <none>           <none>

kube-apiserver-master            1/1     Running   0          26m   192.168.171.128   master   <none>           <none>

kube-controller-manager-master   1/1     Running   0          26m   192.168.171.128   master   <none>           <none>

kube-flannel-ds-amd64-7hrwk      1/1     Running   0          85s   192.168.171.128   master   <none>           <none>

kube-flannel-ds-amd64-hvrgx      1/1     Running   0          85s   192.168.171.129   node1    <none>           <none>

kube-flannel-ds-amd64-lsq72      1/1     Running   0          85s   192.168.171.130   node2    <none>           <none>

kube-proxy-fgwqg                 1/1     Running   0          27m   192.168.171.128   master   <none>           <none>

kube-proxy-hgwrf                 1/1     Running   0          22m   192.168.171.129   node1    <none>           <none>

kube-proxy-xdsv7                 1/1     Running   0          20m   192.168.171.130   node2    <none>           <none>

kube-scheduler-master            1/1     Running   0          26m   192.168.171.128   master   <none>           <none>

[root@master ~]# kubectl get nodes

NAME     STATUS   ROLES    AGE   VERSION

master   Ready    master   27m   v1.14.2

node1    Ready    <none>   22m   v1.14.2

node2    Ready    <none>   21m   v1.14.2      #after the flannel network is deployed, the nodes become Ready
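The Ready check can be scripted for automation; a sketch that parses `kubectl get nodes` output passed in as text (the `all_nodes_ready` name is ours):

```shell
# Succeed only when every node line (header skipped) reports STATUS == Ready.
all_nodes_ready() {
  echo "$1" | awk 'NR > 1 && $2 != "Ready" { bad = 1 } END { exit bad }'
}

# Example: all_nodes_ready "$(kubectl get nodes)" && echo "cluster ready"
```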

Note: because the flannel yaml is a DaemonSet controller, a flannel pod is created automatically on every node. Even if a node has not joined yet, once it joins, the flannel network will be set up on it shortly afterwards.

At this point the master node is initialized and the pod network is deployed; the cluster is now waiting for each node to join.

Note: if you run into an IPv4 forwarding problem, run the following commands
echo 'FORWARD_IPV4=YES' >> /etc/sysconfig/network
echo "1" > /proc/sys/net/ipv4/ip_forward
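A quick way to confirm the forwarding setting took effect; a sketch (the helper name and the optional path argument are ours, added only to make it testable):

```shell
# Succeed when IPv4 forwarding is enabled according to the given sysctl file.
ipv4_forward_on() {
  [ "$(cat "${1:-/proc/sys/net/ipv4/ip_forward}")" = "1" ]
}

# Example: ipv4_forward_on || echo "1" > /proc/sys/net/ipv4/ip_forward
```

Writing to /proc does not survive a reboot; adding `net.ipv4.ip_forward = 1` to a file under /etc/sysctl.d/ makes the setting persistent.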
