Setting Up a Highly Available K8s Cluster with kubeadm

This article walks through building a highly available Kubernetes cluster with kubeadm on CentOS 7.x: prerequisites, environment preparation, deploying keepalived, haproxy, Docker, kubeadm and kubelet, joining the master and node machines, and finally testing the cluster.

1. Prerequisites

  • One or more machines running CentOS 7.x x86_64
  • Hardware: at least 2 GB of RAM, 2 CPUs and 30 GB of disk
  • Internet access is needed to pull images; if the servers cannot reach the internet, download the images in advance and import them on each node (see the sketch after this list)
  • Swap must be disabled
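
If your servers cannot reach the internet, a rough sketch of the offline approach, once Docker and kubeadm are installed (section 5); the image name below is only illustrative, list the exact set with kubeadm config images list:

# On a machine with internet access: list, pull and export the required images
kubeadm config images list --kubernetes-version v1.16.3 --image-repository registry.aliyuncs.com/google_containers
docker pull registry.aliyuncs.com/google_containers/kube-apiserver:v1.16.3
docker save registry.aliyuncs.com/google_containers/kube-apiserver:v1.16.3 -o kube-apiserver.tar

# Copy the tar files to every node, then import them
docker load -i kube-apiserver.tar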

2. Prepare the environment

Node                  IP
master-1              10.30.59.189
master-2              10.30.59.206
node-1                10.30.59.218
VIP (virtual IP)      10.30.59.254

Run the following steps on all nodes.
Disable the firewall:

 [root@localhost ~]# systemctl stop firewalld && systemctl disable firewalld
Removed symlink /etc/systemd/system/multi-user.target.wants/firewalld.service.
Removed symlink /etc/systemd/system/dbus-org.fedoraproject.FirewallD1.service.

Disable SELinux:

[root@localhost ~]# sed -i 's/enforcing/disabled/' /etc/selinux/config
[root@localhost ~]#  setenforce 0 

Disable swap:

[root@localhost ~]# swapoff -a                            # temporary
[root@localhost ~]# sed -ri 's/.*swap.*/#&/' /etc/fstab   # permanent
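
To confirm swap is really off, a quick check (no output from swapon -s and a 0B Swap line mean success):

[root@localhost ~]# swapon -s          # should print nothing
[root@localhost ~]# free -h            # the Swap line should show 0B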

Set the hostnames according to the plan:

[root@localhost ~]# hostnamectl set-hostname master1
[root@localhost ~]# bash
[root@master1 ~]# 
[root@localhost ~]# hostnamectl set-hostname master2
[root@localhost ~]# bash
[root@master2 ~]# 
[root@localhost ~]# hostnamectl set-hostname node1
[root@localhost ~]# bash
[root@node1 ~]# 
[root@localhost ~]# hostnamectl set-hostname k8s-vip
[root@localhost ~]# bash
[root@k8s-vip ~]# 

Add hostname mappings:

[root@master-1 ~]# vi /etc/hosts

127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
10.30.59.189    master01.k8s.io master1
10.30.59.206    master02.k8s.io master2
10.30.59.218    node01.k8s.io   node1
10.30.59.254    master.k8s.io   k8s-vip

Pass bridged IPv4 traffic to iptables chains:

[root@master-1 ~]# cat > /etc/sysctl.d/k8s.conf << EOF
> net.bridge.bridge-nf-call-ip6tables = 1
> net.bridge.bridge-nf-call-iptables = 1
> EOF
[root@master-1 ~]# sysctl --system  # apply the settings
* Applying /usr/lib/sysctl.d/00-system.conf ...
* Applying /usr/lib/sysctl.d/10-default-yama-scope.conf ...
kernel.yama.ptrace_scope = 0
* Applying /usr/lib/sysctl.d/50-default.conf ...
kernel.sysrq = 16
kernel.core_uses_pid = 1
net.ipv4.conf.default.rp_filter = 1
net.ipv4.conf.all.rp_filter = 1
net.ipv4.conf.default.accept_source_route = 0
net.ipv4.conf.all.accept_source_route = 0
net.ipv4.conf.default.promote_secondaries = 1
net.ipv4.conf.all.promote_secondaries = 1
fs.protected_hardlinks = 1
fs.protected_symlinks = 1
* Applying /etc/sysctl.d/99-sysctl.conf ...
* Applying /etc/sysctl.d/k8s.conf ...
* Applying /etc/sysctl.conf ...
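
If sysctl reports net.bridge.bridge-nf-call-iptables as an unknown key, the br_netfilter module is probably not loaded; a small sketch to load it now and on every boot, then re-apply the settings:

[root@master-1 ~]# modprobe br_netfilter
[root@master-1 ~]# echo "br_netfilter" > /etc/modules-load.d/br_netfilter.conf
[root@master-1 ~]# sysctl --system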

Time synchronization:

[root@master-1 ~]# yum install ntpdate -y
[root@master-1 ~]# ntpdate time.windows.com
10 Jun 16:47:13 ntpdate[9878]: step time server 52.231.114.183 offset -28874.570513 sec

3. Deploy keepalived on both master nodes (master-1 and master-2)

3.1 Install dependencies and keepalived

[root@master-1 ~]# yum install -y conntrack-tools libseccomp libtool-ltdl
[root@master-1 ~]# yum install -y keepalived

3.2 Configure the master nodes

master-1 configuration:

[root@master-1 ~]# vi /etc/keepalived/keepalived.conf
! Configuration File for keepalived

global_defs {
   router_id k8s
}

vrrp_script check_haproxy {
    script "killall -0 haproxy"
    interval 3
    weight -2
    fall 10
    rise 2
}

vrrp_instance VI_1 {
    state MASTER
    interface ens192
    virtual_router_id 51
    priority 250
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass ceb1b3ec013d66163d6ab
    }
    virtual_ipaddress {
        10.30.59.254   # virtual IP address
    }
    track_script {
        check_haproxy
    }

}
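
The check_haproxy script runs killall -0 haproxy (killall is provided by the psmisc package): signal 0 performs only an existence check, so the script exits 0 while haproxy is running and non-zero otherwise, and after 10 consecutive failures (fall 10) keepalived lowers this node's priority by the configured weight. Once haproxy is installed (section 4) you can try the check by hand:

[root@master-1 ~]# killall -0 haproxy; echo $?    # 0 means haproxy is running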

master-2 configuration:

[root@master-2 ~]# vi /etc/keepalived/keepalived.conf

! Configuration File for keepalived

global_defs {
   router_id k8s
}

vrrp_script check_haproxy {
    script "killall -0 haproxy"
    interval 3
    weight -2
    fall 10
    rise 2
}

vrrp_instance VI_1 {
    state BACKUP 
    interface ens192
    virtual_router_id 51
    priority 200
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass ceb1b3ec013d66163d6ab
    }
    virtual_ipaddress {
        10.30.59.254
    }
    track_script {
        check_haproxy
    }

}

3.3 Start and verify

Run the following on both master nodes.
Start keepalived:

[root@master-1 ~]# systemctl start keepalived.service

Enable it at boot:

[root@master-1 ~]# systemctl enable keepalived.service

Check the service status:

[root@master-1 ~]# systemctl status keepalived.service
● keepalived.service - LVS and VRRP High Availability Monitor
   Loaded: loaded (/usr/lib/systemd/system/keepalived.service; enabled; vendor preset: disabled)
   Active: active (running) since Fri 2021-06-11 09:44:01 CST; 12s ago
 Main PID: 10498 (keepalived)
   CGroup: /system.slice/keepalived.service
           ├─10498 /usr/sbin/keepalived -D
           ├─10499 /usr/sbin/keepalived -D
           └─10500 /usr/sbin/keepalived -D

Jun 11 09:44:03 master-1 Keepalived_vrrp[10500]: Sending gratuitous ARP on e...4
Jun 11 09:44:03 master-1 Keepalived_vrrp[10500]: Sending gratuitous ARP on e...4
Jun 11 09:44:03 master-1 Keepalived_vrrp[10500]: Sending gratuitous ARP on e...4
Jun 11 09:44:03 master-1 Keepalived_vrrp[10500]: Sending gratuitous ARP on e...4
Jun 11 09:44:08 master-1 Keepalived_vrrp[10500]: Sending gratuitous ARP on e...4
Jun 11 09:44:08 master-1 Keepalived_vrrp[10500]: VRRP_Instance(VI_1) Sending...4
Jun 11 09:44:08 master-1 Keepalived_vrrp[10500]: Sending gratuitous ARP on e...4
Jun 11 09:44:08 master-1 Keepalived_vrrp[10500]: Sending gratuitous ARP on e...4
Jun 11 09:44:08 master-1 Keepalived_vrrp[10500]: Sending gratuitous ARP on e...4
Jun 11 09:44:08 master-1 Keepalived_vrrp[10500]: Sending gratuitous ARP on e...4
Hint: Some lines were ellipsized, use -l to show in full.

After it starts, check the NIC on master-1; the VIP 10.30.59.254 should be bound to ens192:

[root@master-1 ~]# ip a s ens192
2: ens192: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
    link/ether 00:50:56:a0:14:5a brd ff:ff:ff:ff:ff:ff
    inet 10.30.59.189/25 brd 10.30.59.255 scope global noprefixroute ens192
       valid_lft forever preferred_lft forever
    inet 10.30.59.254/32 scope global ens192
       valid_lft forever preferred_lft forever
    inet6 fe80::250:56ff:fea0:145a/64 scope link 
       valid_lft forever preferred_lft forever

4. Deploy haproxy

4.1 Install:

[root@master-1 ~]# yum install -y haproxy

4.2 Configure

The configuration is identical on both master nodes. It declares the two master API servers as the proxied backends and binds haproxy to port 16443, so port 16443 is the entry point of the cluster.

cat > /etc/haproxy/haproxy.cfg << EOF
#---------------------------------------------------------------------
# Global settings
#---------------------------------------------------------------------
global
    # to have these messages end up in /var/log/haproxy.log you will
    # need to:
    # 1) configure syslog to accept network log events.  This is done
    #    by adding the '-r' option to the SYSLOGD_OPTIONS in
    #    /etc/sysconfig/syslog
    # 2) configure local2 events to go to the /var/log/haproxy.log
    #   file. A line like the following can be added to
    #   /etc/sysconfig/syslog
    #
    #    local2.*                       /var/log/haproxy.log
    #
    log         127.0.0.1 local2
    
    chroot      /var/lib/haproxy
    pidfile     /var/run/haproxy.pid
    maxconn     4000
    user        haproxy
    group       haproxy
    daemon 
       
    # turn on stats unix socket
    stats socket /var/lib/haproxy/stats
#---------------------------------------------------------------------
# common defaults that all the 'listen' and 'backend' sections will
# use if not designated in their block
#---------------------------------------------------------------------  
defaults
    mode                    http
    log                     global
    option                  httplog
    option                  dontlognull
    option http-server-close
    option forwardfor       except 127.0.0.0/8
    option                  redispatch
    retries                 3
    timeout http-request    10s
    timeout queue           1m
    timeout connect         10s
    timeout client          1m
    timeout server          1m
    timeout http-keep-alive 10s
    timeout check           10s
    maxconn                 3000
#---------------------------------------------------------------------
# kubernetes apiserver frontend which proxys to the backends
#--------------------------------------------------------------------- 
frontend kubernetes-apiserver
    mode                 tcp
    bind                 *:16443
    option               tcplog
    default_backend      kubernetes-apiserver    
#---------------------------------------------------------------------
# round robin balancing between the various backends
#---------------------------------------------------------------------
backend kubernetes-apiserver
    mode        tcp
    balance     roundrobin
    server      master01.k8s.io   10.30.59.189:6443 check
    server      master02.k8s.io   10.30.59.206:6443 check
#---------------------------------------------------------------------
# collection haproxy statistics message
#---------------------------------------------------------------------
listen stats
    bind                 *:1080
    stats auth           admin:awesomePassword
    stats refresh        5s
    stats realm          HAProxy\ Statistics
    stats uri            /admin?stats
EOF
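
Before starting the service you can ask haproxy to validate the file; -c runs a configuration check only and -f points at the config:

[root@master-1 ~]# haproxy -c -f /etc/haproxy/haproxy.cfg    # prints "Configuration file is valid" when the syntax is OK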

4.3 Start and verify

Enable it at boot:

[root@master-1 ~]# systemctl enable haproxy
Created symlink from /etc/systemd/system/multi-user.target.wants/haproxy.service to /usr/lib/systemd/system/haproxy.service.

Start haproxy:

[root@master-1 ~]# systemctl start haproxy

Check the status:

[root@master-1 ~]# systemctl status haproxy
● haproxy.service - HAProxy Load Balancer
   Loaded: loaded (/usr/lib/systemd/system/haproxy.service; enabled; vendor preset: disabled)
   Active: active (running) since Fri 2021-06-11 10:06:03 CST; 7s ago
 Main PID: 10616 (haproxy-systemd)
   CGroup: /system.slice/haproxy.service
           ├─10616 /usr/sbin/haproxy-systemd-wrapper -f /etc/haproxy/haproxy.cfg...
           ├─10617 /usr/sbin/haproxy -f /etc/haproxy/haproxy.cfg -p /run/haproxy...
           └─10618 /usr/sbin/haproxy -f /etc/haproxy/haproxy.cfg -p /run/haproxy...

Jun 11 10:06:03 master-1 systemd[1]: Started HAProxy Load Balancer.
Jun 11 10:06:03 master-1 systemd[1]: Starting HAProxy Load Balancer...
Jun 11 10:06:03 master-1 haproxy-systemd-wrapper[10616]: haproxy-systemd-wrapper...
Hint: Some lines were ellipsized, use -l to show in full.

Check the listening ports:

[root@master1 ~]# yum install -y net-tools
[root@master1 ~]# netstat -lntup|grep haproxy
tcp        0      0 0.0.0.0:1080            0.0.0.0:*               LISTEN      11666/haproxy       
tcp        0      0 0.0.0.0:16443           0.0.0.0:*               LISTEN      11666/haproxy       
udp        0      0 0.0.0.0:43982           0.0.0.0:*                           11665/haproxy            
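
Once the control plane has been initialized (section 6), you can also confirm that the 16443 entry point really reaches an apiserver through the VIP; a quick sketch of such a check (it will fail with "connection refused" before kubeadm init has run):

[root@master1 ~]# curl -k https://master.k8s.io:16443/version    # should return a small JSON blob with the server version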

5. Install Docker/kubeadm/kubelet on all nodes

5.1 Install Docker:

[root@master-1 ~]# yum install -y wget
[root@master-1 ~]#  wget https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo -O /etc/yum.repos.d/docker-ce.repo
--2021-06-11 10:10:27--  https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
Resolving mirrors.aliyun.com (mirrors.aliyun.com)... 120.221.137.98, 120.223.244.245, 120.223.244.243, ...
Connecting to mirrors.aliyun.com (mirrors.aliyun.com)|120.221.137.98|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 2081 (2.0K) [application/octet-stream]
Saving to: ‘/etc/yum.repos.d/docker-ce.repo’

100%[=========================================>] 2,081       --.-K/s   in 0s      

2021-06-11 10:10:27 (150 MB/s) - ‘/etc/yum.repos.d/docker-ce.repo’ saved [2081/2081]

[root@master-1 ~]# yum -y install docker-ce-18.06.1.ce-3.el7

Enable Docker at boot and start it:

[root@master-1 ~]#  systemctl enable docker && systemctl start docker
Created symlink from /etc/systemd/system/multi-user.target.wants/docker.service to /usr/lib/systemd/system/docker.service.

Check the version:

[root@master-1 ~]# docker --version
Docker version 18.06.1-ce, build e68fc7a

Configure the image registry mirror:

[root@master-1 ~]# cat > /etc/docker/daemon.json << EOF
> {
> "registry-mirrors": ["https://b9pmyelo.mirror.aliyuncs.com"]
> }
> EOF
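
The mirror only takes effect after Docker reloads its configuration, so restart the daemon and check that the mirror shows up in docker info:

[root@master-1 ~]# systemctl daemon-reload && systemctl restart docker
[root@master-1 ~]# docker info | grep -A1 "Registry Mirrors"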

5.2 Add the Aliyun YUM repository

[root@master-1 ~]# vi /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
[root@master-1 ~]# yum repolist
Loaded plugins: fastestmirror
Loading mirror speeds from cached hostfile
 * base: mirrors.aliyun.com
 * extras: mirrors.aliyun.com
 * updates: mirrors.aliyun.com
kubernetes                                                  | 1.4 kB  00:00:00     
kubernetes/primary                                          |  90 kB  00:00:00     
kubernetes                                                                 666/666
repo id                               repo name                              status
base/7/x86_64                         CentOS-7 - Base                        10,072
docker-ce-stable/7/x86_64             Docker CE Stable - x86_64                 117
extras/7/x86_64                       CentOS-7 - Extras                         498
kubernetes                            Kubernetes                                666
updates/7/x86_64                      CentOS-7 - Updates                      2,189
repolist: 13,542

5.3 Install kubeadm, kubelet and kubectl

[root@master-1 ~]# yum install -y kubelet-1.16.3 kubeadm-1.16.3 kubectl-1.16.3
[root@master-1 ~]# systemctl enable kubelet
Created symlink from /etc/systemd/system/multi-user.target.wants/kubelet.service to /usr/lib/systemd/system/kubelet.service.

6. Deploy the Kubernetes master

6.1 Create the kubeadm configuration file

Run this on the master that currently holds the VIP, which here is master1.

[root@master-1 ~]# mkdir /usr/local/kubernetes/manifests -p
[root@master-1 ~]# cd /usr/local/kubernetes/manifests/
[root@master-1 manifests]# vi kubeadm-config.yaml
apiServer:
  certSANs:
    - master1
    - master2
    - master.k8s.io
    - 10.30.59.254
    - 10.30.59.189
    - 10.30.59.206
    - 127.0.0.1
  extraArgs:
    authorization-mode: Node,RBAC
  timeoutForControlPlane: 4m0s
apiVersion: kubeadm.k8s.io/v1beta1
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
controlPlaneEndpoint: "master.k8s.io:16443"
controllerManager: {}
dns: 
  type: CoreDNS
etcd:
  local:    
    dataDir: /var/lib/etcd
imageRepository: registry.aliyuncs.com/google_containers
kind: ClusterConfiguration
kubernetesVersion: v1.16.3
networking: 
  dnsDomain: cluster.local  
  podSubnet: 10.244.0.0/16
  serviceSubnet: 10.1.0.0/16
scheduler: {}
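
Optionally, before running init you can pre-pull the control-plane images with the same config file; this makes the init step faster and surfaces registry problems early:

[root@master-1 manifests]# kubeadm config images pull --config kubeadm-config.yaml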

6.2 Run kubeadm init on master1

[root@master-1 ~]# cd /usr/local/kubernetes/manifests/
[root@master-1 manifests]# kubeadm init --config kubeadm-config.yaml

Save the following part of the init output; it will be used later when joining the other nodes. Do not run it yet:

kubeadm join master.k8s.io:16443 --token gb13su.023dz1dkp681mfc6 \
    --discovery-token-ca-cert-hash sha256:4b2918b634addae91448fd99aaed7a25a145b97ce5de9ed4836e85e7c51393bc \
    --control-plane 	

Following the hints in the init output, configure the environment so the kubectl tool works:

[root@master1 ~]# mkdir -p $HOME/.kube
[root@master1 ~]# sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
[root@master1 ~]# sudo chown $(id -u):$(id -g) $HOME/.kube/config
[root@master1 ~]# kubectl get nodes
[root@master1 ~]# kubectl get pods -n kube-system

Check the cluster status:

[root@master1 ~]# kubectl get cs
NAME                 AGE
controller-manager   <unknown>
scheduler            <unknown>
etcd-0               <unknown>
[root@master1 ~]# kubectl get pods -n kube-system   # not everything is running yet: coredns stays Pending until a network plugin is installed
NAME                              READY   STATUS    RESTARTS   AGE
coredns-58cc8c89f4-2w7t8          0/1     Pending   0          2m24s
coredns-58cc8c89f4-w6vm4          0/1     Pending   0          2m24s
etcd-master1                      1/1     Running   0          81s
kube-apiserver-master1            1/1     Running   0          83s
kube-controller-manager-master1   1/1     Running   0          99s
kube-proxy-rd5hx                  1/1     Running   0          2m24s
kube-scheduler-master1            1/1     Running   0          87s


7. Install the cluster network

Fetch the flannel YAML from the official location and apply it on master1.

[root@master1 ~]# mkdir flannel
[root@master1 ~]# ll
total 4
-rw-------. 1 root root 1260 Mar 29 23:35 anaconda-ks.cfg
drwxr-xr-x. 2 root root    6 Jun 15 10:43 flannel
[root@master1 ~]# cd flannel/

[root@master1 ~]# wget -c   https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
# note: the download can fail if the server cannot reach raw.githubusercontent.com (it failed for me here; see the workaround sketch below)
[root@master1 flannel]# ll
total 8
-rw-r--r--. 1 root root 4813 Jun 15 10:56 kube-flannel.yml
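
If the server cannot download from raw.githubusercontent.com, one workaround is to fetch the file on any machine that can reach GitHub and copy it over, e.g.:

# on a machine with internet access
wget https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
scp kube-flannel.yml root@10.30.59.189:/root/flannel/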

Install the flannel network:

[root@master1 flannel]# kubectl apply -f kube-flannel.yml 
podsecuritypolicy.policy/psp.flannel.unprivileged created
clusterrole.rbac.authorization.k8s.io/flannel created
clusterrolebinding.rbac.authorization.k8s.io/flannel created
serviceaccount/flannel created
configmap/kube-flannel-cfg created
daemonset.apps/kube-flannel-ds created

Check:

[root@master1 flannel]# kubectl get pods -n kube-system

It takes a little while before all the pods are running normally.

8. Join master2 to the cluster

8.1 Copy keys and certificates

Copy the keys and related files from master1 to master2:

[root@master1 ~]# ssh root@10.30.59.206 mkdir -p /etc/kubernetes/pki/etcd
The authenticity of host '10.30.59.206 (10.30.59.206)' can't be established.
ECDSA key fingerprint is SHA256:4hN1+edBB8HYHiTjITfpUbgmBqpWrqMagmMx5a3cEDg.
ECDSA key fingerprint is MD5:4b:9a:54:ef:90:18:96:e7:3c:2b:a2:8f:4d:1c:ac:95.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added '10.30.59.206' (ECDSA) to the list of known hosts.
root@10.30.59.206's password: 
[root@master1 ~]# scp /etc/kubernetes/admin.conf root@10.30.59.206:/etc/kubernetes
root@10.30.59.206's password: 
admin.conf                                    100% 5450     3.2MB/s   00:00    
[root@master1 ~]# scp /etc/kubernetes/pki/{ca.*,sa.*,front-proxy-ca.*} root@10.30.59.206:/etc/kubernetes/pki
root@10.30.59.206's password: 
ca.crt                                        100% 1025     1.1MB/s   00:00    
ca.key                                        100% 1679     2.4MB/s   00:00    
sa.key                                        100% 1675     2.6MB/s   00:00    
sa.pub                                        100%  451   633.9KB/s   00:00    
front-proxy-ca.crt                            100% 1038     1.8MB/s   00:00    
front-proxy-ca.key                            100% 1679     2.1MB/s   00:00    
[root@master1 ~]# scp /etc/kubernetes/pki/etcd/ca.* root@10.30.59.206:/etc/kubernetes/pki/etcd
root@10.30.59.206's password: 
ca.crt                                        100% 1017     1.0MB/s   00:00    
ca.key                                        100% 1675     2.2MB/s   00:00    

8.2 Join master2 to the cluster

Run the join command that kubeadm init printed on master1, and include the --control-plane flag so that this machine joins the cluster as a control-plane (master) node.

[root@master2 ~]# kubeadm join master.k8s.io:16443 --token gb13su.023dz1dkp681mfc6     --discovery-token-ca-cert-hash sha256:4b2918b634addae91448fd99aaed7a25a145b97ce5de9ed4836e85e7c51393bc     --control-plane 

Check the status (I hit an error here):

[root@master2 ~]# kubectl get nodes

The fix:

Reference: https://blog.csdn.net/CEVERY/article/details/108753379

Step 1: set the KUBECONFIG environment variable (shown here for Linux; adapt as needed)

Option 1: edit /etc/profile
vim /etc/profile
Add a new line at the bottom: export KUBECONFIG=/etc/kubernetes/admin.conf

Option 2: append it directly
echo "export KUBECONFIG=/etc/kubernetes/admin.conf" >> /etc/profile

Step 2: make it take effect
[root@master2 ~]# source /etc/profile


8.3 Join a Kubernetes node

If the join hangs and never completes, the usual cause is an expired token: by default a token is valid for 24 hours, and once it expires it can no longer be used. In that case create a new one as follows:

[root@master1 ~]# systemctl restart keepalived.service
[root@master1 ~]# kubeadm token create --print-join-command
kubeadm join master.k8s.io:16443 --token i3fjzc.crzp7mnz5ose4dkm     --discovery-token-ca-cert-hash sha256:4b2918b634addae91448fd99aaed7a25a145b97ce5de9ed4836e85e7c51393bc 

On node1, run the kubeadm join command (from the kubeadm init output, or the newly generated one above) to add the node to the cluster:

[root@node1 ~]# kubeadm join master.k8s.io:16443 --token i3fjzc.crzp7mnz5ose4dkm    --discovery-token-ca-cert-hash sha256:4b2918b634addae91448fd99aaed7a25a145b97ce5de9ed4836e85e7c51393bc 

Because a new node was added, the cluster network (the flannel DaemonSet) is rolled out to it as well.
Check the status on master1:

[root@master1 ~]# kubectl get node
NAME      STATUS   ROLES    AGE     VERSION
master1   Ready    master   23h     v1.16.3
master2   Ready    master   22h     v1.16.3
node1     Ready    <none>   2m44s   v1.16.3
[root@master1 ~]# kubectl get pods --all-namespaces
NAMESPACE     NAME                              READY   STATUS    RESTARTS   AGE
default       nginx-86c57db685-2b2cm            1/1     Running   0          56m
kube-system   coredns-58cc8c89f4-2w7t8          1/1     Running   0          23h
kube-system   coredns-58cc8c89f4-w6vm4          1/1     Running   0          23h
kube-system   etcd-master1                      1/1     Running   0          23h
kube-system   etcd-master2                      1/1     Running   0          22h
kube-system   kube-apiserver-master1            1/1     Running   0          23h
kube-system   kube-apiserver-master2            1/1     Running   0          22h
kube-system   kube-controller-manager-master1   1/1     Running   5          23h
kube-system   kube-controller-manager-master2   1/1     Running   3          22h
kube-system   kube-flannel-ds-dbv7h             1/1     Running   0          22h
kube-system   kube-flannel-ds-fc5xp             1/1     Running   0          22h
kube-system   kube-flannel-ds-xx7dn             1/1     Running   0          2m57s
kube-system   kube-proxy-d75ch                  1/1     Running   0          2m57s
kube-system   kube-proxy-lknn7                  1/1     Running   0          22h
kube-system   kube-proxy-rd5hx                  1/1     Running   0          23h
kube-system   kube-scheduler-master1            1/1     Running   5          23h
kube-system   kube-scheduler-master2            1/1     Running   3          22h

9. Test the Kubernetes cluster

Create a pod in the Kubernetes cluster and verify it runs correctly:

$ kubectl create deployment nginx --image=nginx
$ kubectl expose deployment nginx --port=80 --type=NodePort
$ kubectl get pod,svc

Access the service at http://NodeIP:NodePort, either in a browser or with curl.
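
For example, read the assigned NodePort from the service and curl any node's IP on that port (expect the nginx welcome page):

[root@master1 ~]# NODE_PORT=$(kubectl get svc nginx -o jsonpath='{.spec.ports[0].nodePort}')
[root@master1 ~]# curl http://10.30.59.218:$NODE_PORT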
