Kubernetes: Introduction and Installation

Introduction

Kubernetes is an open-source container cluster management system from Google, the open-source descendant of Borg, Google's large-scale container management technology built up over many years. Its main features include:

  • Orchestrate containers across multiple hosts;

  • Make fuller use of hardware, maximizing the resources available to enterprise applications;

  • Control and automate application deployments and updates;

  • Mount and add storage for running stateful applications;

  • Scale containerized applications and their resources quickly and on demand;

  • Manage services declaratively, guaranteeing that deployed applications always run the way they were deployed;

  • Perform health checks and self-healing on applications through automatic placement, automatic restart, automatic replication, and automatic scaling.

Core Functionality

The Kubernetes system as a whole consists of master nodes and worker nodes. Developers submit a list of applications to the master node, and Kubernetes deploys them onto the cluster's worker nodes; neither developers nor system administrators need to care which node a component ends up on.
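As a taste of what such an "application list" looks like in practice, here is a minimal, hypothetical Deployment manifest (the nginx-demo name, replica count, and image tag are all illustrative); once the cluster built below is running, it could be applied with kubectl:

$ cat << EOF | kubectl apply -f -
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-demo            # hypothetical application name
spec:
  replicas: 2                 # Kubernetes keeps two copies running, restarting them on failure
  selector:
    matchLabels:
      app: nginx-demo
  template:
    metadata:
      labels:
        app: nginx-demo
    spec:
      containers:
      - name: nginx
        image: nginx:1.16     # illustrative image tag
EOF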

Core Components

Kubernetes is composed of the following core components:

  • etcd: stores the state of the entire cluster;

  • apiserver: the single entry point for operating on resources, providing authentication, authorization, access control, API registration and discovery;

  • controller manager: maintains the cluster's state, handling failure detection, automatic scaling, rolling updates, and so on;

  • scheduler: handles resource scheduling, placing Pods onto the appropriate machines according to the configured scheduling policies;

  • kubelet: manages the container lifecycle on each node, as well as volumes (CVI) and networking (CNI);

  • Container runtime: manages images and actually runs Pods and containers (CRI);

  • kube-proxy: provides in-cluster service discovery and load balancing for Services.

 

Installation

kubeadm is the tool officially provided by Kubernetes for quickly installing a Kubernetes cluster.

 

Cluster Plan

Component Versions

| System/Component | Version |
| --- | --- |
| CentOS | 7.6.1810 |
| Kernel | 3.10.0-957.el7.x86_64 |
| Docker | 18.09.6 |
| Kubernetes | 1.14.2 |

 

Design

Network plugin: flannel

kube-proxy mode: IPVS

 

Host Roles

| Hostname | Role | IP |
| --- | --- | --- |
| k8s-master-1 | Kubernetes master node | 10.10.113.17 |
| k8s-node-1 | Kubernetes worker node | 10.10.113.18 |

 

Required Ports

Master node:

| Protocol | Direction | Port Range | Purpose | Used By |
| --- | --- | --- | --- | --- |
| TCP | Inbound | 6443 | Kubernetes API server | All |
| TCP | Inbound | 2379-2380 | etcd server client API | kube-apiserver, etcd |
| TCP | Inbound | 10250 | Kubelet API | Self, Control plane |
| TCP | Inbound | 10251 | kube-scheduler | Self |
| TCP | Inbound | 10252 | kube-controller-manager | Self |

 

Worker nodes:

| Protocol | Direction | Port Range | Purpose | Used By |
| --- | --- | --- | --- | --- |
| TCP | Inbound | 10250 | Kubelet API | Self, Control plane |
| TCP | Inbound | 30000-32767 | NodePort Services | All |

The Kubernetes API server port is configurable; open whichever port you actually configure.

The default NodePort Service range is 30000-32767; it is also configurable, so open whichever range you actually use.

 

 

Installation Steps

 

1 System Configuration

Run on all nodes.

1.1 Verify that the MAC address and product_uuid are unique on every node

$ ip link
$ cat /sys/class/dmi/id/product_uuid

1.2 Disable SELinux

$ setenforce 0
$ sed -ri '/^[^#]*SELINUX=/s#=.+$#=disabled#' /etc/selinux/config
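You can verify the runtime state right away (the change in /etc/selinux/config takes full effect only after a reboot):

$ getenforce
Permissive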

1.3 Disable the firewall

$ systemctl disable --now firewalld

1.4 Add hosts entries

$ cat >> /etc/hosts << EOF
10.10.113.17 k8s-master-1
10.10.113.18 k8s-node-1
EOF

 

1.5 Disable swap

$ swapoff -a && sysctl -w vm.swappiness=0
$ sed -ri '/^[^#]*swap/s@^@#@' /etc/fstab
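To confirm that no swap remains active (every value in the Swap row should be 0):

$ free -h | grep -i swap
Swap:            0B          0B          0B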

1.6 Configure kernel parameters for Kubernetes

$ cat << EOF > /etc/sysctl.d/k8s.conf
# https://github.com/moby/moby/issues/31208
# ipvsadm -l --timeout
# Fix long-connection timeouts in IPVS mode; any keepalive value below 900 works
net.ipv4.tcp_keepalive_time = 600
net.ipv4.tcp_keepalive_intvl = 30
net.ipv4.tcp_keepalive_probes = 10
net.ipv6.conf.all.disable_ipv6 = 1
net.ipv6.conf.default.disable_ipv6 = 1
net.ipv6.conf.lo.disable_ipv6 = 1
net.ipv4.neigh.default.gc_stale_time = 120
net.ipv4.conf.all.rp_filter = 0
net.ipv4.conf.default.rp_filter = 0
net.ipv4.conf.default.arp_announce = 2
net.ipv4.conf.lo.arp_announce = 2
net.ipv4.conf.all.arp_announce = 2
net.ipv4.ip_forward = 1
net.ipv4.tcp_max_tw_buckets = 5000
net.ipv4.tcp_syncookies = 1
net.ipv4.tcp_max_syn_backlog = 1024
net.ipv4.tcp_synack_retries = 2
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.netfilter.nf_conntrack_max = 2310720
fs.inotify.max_user_watches=89100
fs.may_detach_mounts = 1
fs.file-max = 52706963
fs.nr_open = 52706963
net.bridge.bridge-nf-call-arptables = 1
vm.swappiness = 0
vm.overcommit_memory=1
vm.panic_on_oom=0
EOF

$ sysctl --system
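To spot-check that the key values took effect (note that the net.bridge.* keys only resolve once br_netfilter has been loaded in step 1.7):

$ sysctl net.ipv4.ip_forward net.bridge.bridge-nf-call-iptables
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-iptables = 1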

1.7 Load kernel modules

$ :> /etc/modules-load.d/ipvs.conf

$ module=(
ip_vs
ip_vs_rr
ip_vs_wrr
ip_vs_sh
nf_conntrack
br_netfilter
)

$ for kernel_module in ${module[@]};do
    # only register modules that actually exist in this kernel
    /sbin/modinfo -F filename $kernel_module |& grep -qv ERROR && echo $kernel_module >> /etc/modules-load.d/ipvs.conf || :
done

$ systemctl enable --now systemd-modules-load.service

$ lsmod | grep -e ip_vs -e nf_conntrack_ipv4

1.8 Install ipset and ipvsadm

$ yum install -y ipset ipvsadm

1.9 Synchronize time

$ yum install -y ntpdate

$ ntpdate -u ntp.api.bz
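ntpdate performs only a one-shot synchronization; to keep clocks aligned you could, for example, add a periodic cron job (the 30-minute interval and the NTP server are arbitrary choices, and this assumes crond is enabled on CentOS 7):

$ echo '*/30 * * * * /usr/sbin/ntpdate -u ntp.api.bz > /dev/null 2>&1' >> /var/spool/cron/root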

 

2 Install Docker

Run on all nodes.

2.1 Add the Docker Yum repository

$ yum install -y yum-utils device-mapper-persistent-data lvm2

$ yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo

2.2 List available Docker versions

$ yum list docker-ce.x86_64  --showduplicates |sort -r
...
docker-ce.x86_64            3:18.09.6-3.el7                    docker-ce-stable 
docker-ce.x86_64            3:18.09.6-3.el7                    @docker-ce-stable
docker-ce.x86_64            3:18.09.5-3.el7                    docker-ce-stable 
docker-ce.x86_64            3:18.09.4-3.el7                    docker-ce-stable 
...

2.3 Install Docker

$ yum makecache fast

$ yum install -y docker-ce-18.09.6-3.el7

2.4 Configure Docker

As recommended in the official [CRI installation](https://kubernetes.io/docs/setup/cri/) guide, on Linux systems that manage daemons with systemd, using `systemd` as Docker's cgroup driver helps nodes stay stable under resource pressure.

 

 

$ mkdir -p /etc/docker/

$ cat > /etc/docker/daemon.json << EOF
{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "storage-driver": "overlay2",
  "storage-opts": [
      "overlay2.override_kernel_check=true"
  ],
  "log-driver": "json-file",
  "log-opts": {
      "max-size": "100m",
      "max-file": "3"
  }
}
EOF

2.5 Configure command completion and start Docker

 

 

$ yum install -y bash-completion && cp /usr/share/bash-completion/completions/docker /etc/bash_completion.d/

$ systemctl enable --now docker
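With Docker running, confirm that the cgroup driver configured in daemon.json took effect:

$ docker info | grep -i 'cgroup driver'
Cgroup Driver: systemd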
 

 

 

3 Install Kubernetes

3.1 Install kubeadm and kubelet

Run on all nodes.

 

 

$ cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=http://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=http://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg
        http://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF

$ yum makecache fast

$ yum install -y kubelet kubeadm kubectl

$ systemctl enable kubelet.service

$ kubeadm version -o short
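Note that the yum install above pulls the latest packages, which may be newer than the v1.14.2 this plan targets; to match the plan exactly, you can pin the versions instead (assuming the repository above carries these builds):

$ yum install -y kubelet-1.14.2 kubeadm-1.14.2 kubectl-1.14.2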

Configure kubectl command completion:

# bash-completion was already installed when we set up Docker completion above;
# otherwise, install it first with: yum install -y bash-completion
$ echo 'source <(kubectl completion bash)' >> ~/.bashrc

$ source ~/.bashrc

3.2 Initialize the cluster

Run on the k8s-master-1 node.

Because we use flannel as the Pod network plugin, we need to pass --pod-network-cidr=10.244.0.0/16 to kubeadm init:

    $ kubeadm init \
        --kubernetes-version=v1.14.2 \
        --pod-network-cidr=10.244.0.0/16 \
        --apiserver-advertise-address=10.10.113.17

 

Note

The static Pods that make up the control plane pull their images from k8s.gcr.io; if your host cannot reach it (for example, from behind the GFW without a proxy), the download will fail. In that case, run the following command first:

$ bash -c "$(curl -fsSL https://raw.githubusercontent.com/JuggGao/kubernetes.files/master/script/kubernetes-images-pull.sh)"
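If you would rather not run a third-party script, a minimal sketch of the same idea is to list the images kubeadm needs and pull them through a mirror, retagging them as k8s.gcr.io (this assumes registry.aliyuncs.com/google_containers carries the matching tags):

$ for image in $(kubeadm config images list --kubernetes-version v1.14.2); do
    mirror="registry.aliyuncs.com/google_containers/${image##*/}"   # strip the k8s.gcr.io/ prefix
    docker pull "$mirror" && docker tag "$mirror" "$image" && docker rmi "$mirror"
done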

The output of a successful initialization looks like this:

[init] Using Kubernetes version: v1.14.2
[preflight] Running pre-flight checks
    [WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Activating the kubelet service
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [k8s-master-1 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 10.10.113.17]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [k8s-master-1 localhost] and IPs [10.10.113.17 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [k8s-master-1 localhost] and IPs [10.10.113.17 127.0.0.1 ::1]
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 16.002479 seconds
[upload-config] storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.14" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --experimental-upload-certs
[mark-control-plane] Marking the node k8s-master-1 as control-plane by adding the label "node-role.kubernetes.io/master=''"
[mark-control-plane] Marking the node k8s-master-1 as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: ng3m2e.1btk1qrflepo2tzt
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] creating the "cluster-info" ConfigMap in the "kube-public" namespace
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 10.10.113.17:6443 --token ng3m2e.1btk1qrflepo2tzt \
    --discovery-token-ca-cert-hash sha256:65edc637e2610e33cea9f075b2d265009cb58b66ed686cde60c165e0a6fdb47f

From this output we can see the rough steps kubeadm follows to initialize a Kubernetes cluster:

  • [preflight]: pull the images Kubernetes needs;

  • [kubelet-start]: generate the kubelet configuration file /var/lib/kubelet/config.yaml;

  • [certs]: generate the certificates;

  • [kubeconfig]: generate the kubeconfig files under /etc/kubernetes, including admin.conf, kubelet.conf, controller-manager.conf, and scheduler.conf;

  • [control-plane]: generate static Pod manifests for the core components under /etc/kubernetes/manifests, including kube-apiserver, kube-controller-manager, and kube-scheduler;

  • [etcd]: generate the etcd manifest under /etc/kubernetes/manifests;

  • [wait-control-plane]: start the components defined under /etc/kubernetes/manifests;

  • [mark-control-plane]: taint the master node with NoSchedule so that application Pods are not scheduled onto it;

  • [bootstrap-token]: configure the bootstrap token and RBAC rules;

  • [addons]: apply the coredns and kube-proxy add-ons; at this point the cluster initialization is complete.

Next, the output shows how to configure kubectl to access the cluster:

To start using your cluster, you need to run the following as a regular user:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

 

It then reminds us to deploy a Pod network plugin (flannel in our case) to the cluster:

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Finally, it gives the command for joining worker nodes to the cluster:

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 10.10.113.17:6443 --token ng3m2e.1btk1qrflepo2tzt \
    --discovery-token-ca-cert-hash sha256:65edc637e2610e33cea9f075b2d265009cb58b66ed686cde60c165e0a6fdb47f

 

3.3 Configure kubectl access to the cluster

Run on the k8s-master-1 node.

Following the hints in the initialization output, configure kubectl to access the cluster:

$ mkdir -p $HOME/.kube

$ cp -i /etc/kubernetes/admin.conf $HOME/.kube/config

$ chown $(id -u):$(id -g) $HOME/.kube/config

Once configured, check the status of the components:

$ kubectl get cs
NAME                 STATUS    MESSAGE             ERROR
scheduler            Healthy   ok                  
controller-manager   Healthy   ok                  
etcd-0               Healthy   {"health":"true"}

 

If cluster initialization runs into problems, you can wipe the node with kubeadm reset (the same command used in section 3.6 below) and start over.

3.4 Install the Pod network

Run on the k8s-master-1 node.

Until a Pod network plugin is installed, the node status will show NotReady and the kubelet will report errors.

Following the hint in the initialization output, install the flannel network plugin:

$ mkdir -p ~/kubernetes/master/flannel

$ cd ~/kubernetes/master/flannel

$ wget https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml

$ kubectl apply -f  kube-flannel.yml

If the master node has more than one network interface, add the flanneld startup argument --iface=<iface-name> to the downloaded kube-flannel.yml to specify the interface on the cluster's internal network; otherwise DNS resolution may fail:

$ vim kube-flannel.yml

...
      containers:
      - name: kube-flannel
        image: quay.io/coreos/flannel:v0.11.0-amd64
        command:
        - /opt/bin/flanneld
        args:
        - --ip-masq
        - --kube-subnet-mgr
        - --iface=ens192
...

 

 

Check that all Pods are healthy:

$ kubectl get pods --all-namespaces -o wide
NAMESPACE     NAME                                   READY   STATUS    RESTARTS   AGE     IP             NODE           NOMINATED NODE   READINESS GATES
kube-system   coredns-fb8b8dccf-pzqbs                1/1     Running   0          3m53s   10.244.0.7     k8s-master-1   <none>           <none>
kube-system   coredns-fb8b8dccf-qkhhg                1/1     Running   0          3m53s   10.244.0.6     k8s-master-1   <none>           <none>
kube-system   etcd-k8s-master-1                      1/1     Running   0          2m52s   10.10.113.17   k8s-master-1   <none>           <none>
kube-system   kube-apiserver-k8s-master-1            1/1     Running   0          3m2s    10.10.113.17   k8s-master-1   <none>           <none>
kube-system   kube-controller-manager-k8s-master-1   1/1     Running   0          3m15s   10.10.113.17   k8s-master-1   <none>           <none>
kube-system   kube-flannel-ds-amd64-jbw7x            1/1     Running   0          3m31s   10.10.113.17   k8s-master-1   <none>           <none>
kube-system   kube-proxy-hrj8p                       1/1     Running   0          3m53s   10.10.113.17   k8s-master-1   <none>           <none>
kube-system   kube-scheduler-k8s-master-1            1/1     Running   0          3m6s    10.10.113.17   k8s-master-1   <none>           <none>

Now check the node status:

$ kubectl get node
NAME           STATUS   ROLES    AGE   VERSION
k8s-master-1   Ready    master   72m   v1.14.2

3.5 Join worker nodes to the cluster

Run on the k8s-node-1 node.

Following the hint in the initialization output, join the worker node to the cluster:

$ kubeadm join 10.10.113.17:6443 --token ng3m2e.1btk1qrflepo2tzt \
    --discovery-token-ca-cert-hash sha256:65edc637e2610e33cea9f075b2d265009cb58b66ed686cde60c165e0a6fdb47f

 

The output is as follows:

[preflight] Running pre-flight checks
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[kubelet-start] Downloading configuration for the kubelet from the "kubelet-config-1.14" ConfigMap in the kube-system namespace
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Activating the kubelet service
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.

Now, back on the master node, list the nodes:

$ kubectl get nodes -o wide
NAME           STATUS   ROLES    AGE     VERSION   INTERNAL-IP    EXTERNAL-IP   OS-IMAGE                KERNEL-VERSION          CONTAINER-RUNTIME
k8s-master-1   Ready    master   3h19m   v1.14.2   10.10.113.17   <none>        CentOS Linux 7 (Core)   3.10.0-957.el7.x86_64   docker://18.9.6
k8s-node-1     Ready    <none>   5m22s   v1.14.2   10.10.113.18   <none>        CentOS Linux 7 (Core)   3.10.0-957.el7.x86_64   docker://18.9.6

 

If you also want worker nodes to be able to call the Kubernetes API with kubectl, copy /etc/kubernetes/admin.conf from the master node into the current user's home directory:

$ mkdir -p $HOME/.kube

$ scp root@10.10.113.17:/etc/kubernetes/admin.conf $HOME/.kube/config

$ chown $(id -u):$(id -g) $HOME/.kube/config
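The bootstrap token printed by kubeadm init is only valid for 24 hours by default; if it has expired by the time you add a node, generate a fresh join command on the master:

$ kubeadm token create --print-join-command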

3.6 Remove a worker node from the cluster

To remove a worker node from the cluster, run on the master node:

$ kubectl drain k8s-node-1 --delete-local-data --force --ignore-daemonsets

$ kubectl delete node k8s-node-1

Then run on the worker node itself:

$ kubeadm reset

$ ifconfig flannel.1 down

$ ip link delete flannel.1
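kubeadm reset itself warns that it does not clean up CNI configuration; if the node will be rejoined later, you may also want to remove it:

$ rm -rf /etc/cni/net.d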

3.7 Enable IPVS mode in kube-proxy

Run on the k8s-master-1 node.

Edit config.conf in the ConfigMap kube-system/kube-proxy and change the mode to ipvs:

$ kubectl edit configmap kube-proxy -n kube-system
...
mode: "ipvs"
...
# save and quit with :wq

After changing the configuration, restart the kube-proxy Pods on each node:

$ kubectl get pod -n kube-system -l k8s-app=kube-proxy | awk 'NR!=1 {system("kubectl delete pod "$1" -n kube-system")}'
pod "kube-proxy-f27cv" deleted
pod "kube-proxy-x7zfr" deleted

Once they have restarted, check the Pod logs:

$ kubectl get pods -n kube-system -l k8s-app=kube-proxy
NAME               READY   STATUS    RESTARTS   AGE
kube-proxy-kd4tk   1/1     Running   0          17s
kube-proxy-pkbjq   1/1     Running   0          18s

$ kubectl logs pod/kube-proxy-kd4tk -n kube-system
I0521 07:04:30.421175       1 server_others.go:176] Using ipvs Proxier.
W0521 07:04:30.421572       1 proxier.go:386] IPVS scheduler not specified, use rr by default
I0521 07:04:30.421880       1 server.go:562] Version: v1.14.2
I0521 07:04:30.434108       1 conntrack.go:52] Setting nf_conntrack_max to 131072
I0521 07:04:30.434412       1 config.go:102] Starting endpoints config controller
I0521 07:04:30.434445       1 controller_utils.go:1027] Waiting for caches to sync for endpoints config controller
I0521 07:04:30.434466       1 config.go:202] Starting service config controller
I0521 07:04:30.434485       1 controller_utils.go:1027] Waiting for caches to sync for service config controller
I0521 07:04:30.534652       1 controller_utils.go:1034] Caches are synced for service config controller
I0521 07:04:30.534739       1 controller_utils.go:1034] Caches are synced for endpoints config controller

$ kubectl logs pod/kube-proxy-pkbjq -n kube-system
I0521 07:04:28.943860       1 server_others.go:176] Using ipvs Proxier.
W0521 07:04:28.944077       1 proxier.go:386] IPVS scheduler not specified, use rr by default
I0521 07:04:28.944252       1 server.go:562] Version: v1.14.2
I0521 07:04:28.958142       1 conntrack.go:52] Setting nf_conntrack_max to 131072
I0521 07:04:28.958362       1 config.go:202] Starting service config controller
I0521 07:04:28.958433       1 controller_utils.go:1027] Waiting for caches to sync for service config controller
I0521 07:04:28.958743       1 config.go:102] Starting endpoints config controller
I0521 07:04:28.958756       1 controller_utils.go:1027] Waiting for caches to sync for endpoints config controller
I0521 07:04:29.058677       1 controller_utils.go:1034] Caches are synced for service config controller
I0521 07:04:29.059046       1 controller_utils.go:1034] Caches are synced for endpoints config controller

The log line Using ipvs Proxier confirms that IPVS mode is enabled.
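You can also inspect the IPVS virtual servers directly with ipvsadm (installed in step 1.8); the kubernetes Service's cluster IP 10.96.0.1:443 should appear with the default rr scheduler, forwarding to the API server, roughly like this:

$ ipvsadm -Ln
...
TCP  10.96.0.1:443 rr
  -> 10.10.113.17:6443            Masq    1      0          0
...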

This concludes the installation; from here, the real study of Kubernetes begins.

