K8S Learning (3): Lab — Master Node Deployment

What Runs on the Master Node?

  The Master node runs three services: the API Server, the Scheduler, and the Controller Manager.

    1. API Server

  •    Exposes the REST API for cluster management, including authentication/authorization, data validation, and cluster state changes
  •    Is the only component that operates on etcd directly
  •    All other components query or modify data through the API Server
  •    Acts as the hub for data exchange and communication between the other components

    2. Scheduler

  •    Assigns and schedules Pods onto the cluster's Node machines
  •    Watches kube-apiserver for Pods that have not yet been assigned a Node
  •    Picks a Node for each of these Pods according to the scheduling policy

    3. Controller Manager

  •    Consists of a set of controllers; it watches the state of the whole cluster through the API Server and keeps the cluster in its desired working state

 1. Deploy the API Server Service

  a. Prepare the binaries

[root@linux-node1 ~]# cd /usr/local/src/kubernetes
[root@linux-node1 kubernetes]# cp server/bin/kube-apiserver /opt/kubernetes/bin/
[root@linux-node1 kubernetes]# cp server/bin/kube-controller-manager /opt/kubernetes/bin/
[root@linux-node1 kubernetes]# cp server/bin/kube-scheduler /opt/kubernetes/bin/
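
Before moving on, it can be worth a quick sanity check that the copied binaries are executable and report the version you expect (an optional step, not part of the original procedure):

[root@linux-node1 kubernetes]# chmod +x /opt/kubernetes/bin/kube-*
[root@linux-node1 kubernetes]# /opt/kubernetes/bin/kube-apiserver --version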

  b. Create the JSON config file for generating the CSR

[root@linux-node1 kubernetes]# cd /usr/local/src/ssl/
[root@linux-node1 ssl]# vim kubernetes-csr.json
{
  "CN": "kubernetes",
  "hosts": [
    "127.0.0.1",
    "192.168.219.135",  #Master的ip地址,在实际配置文件中此注释要删除
    "10.1.0.1",
    "kubernetes",
    "kubernetes.default",
    "kubernetes.default.svc",
    "kubernetes.default.svc.cluster",
    "kubernetes.default.svc.cluster.local"
  ],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "ShangHai",
      "L": "ShangHai",
      "O": "k8s",
      "OU": "System"
    }
  ]
}

  c. Generate the Kubernetes certificate and private key

[root@linux-node1 ssl]# cfssl gencert -ca=/opt/kubernetes/ssl/ca.pem \
>    -ca-key=/opt/kubernetes/ssl/ca-key.pem \
>    -config=/opt/kubernetes/ssl/ca-config.json \
>    -profile=kubernetes kubernetes-csr.json | cfssljson -bare kubernetes
2019/05/10 14:35:26 [INFO] generate received request
2019/05/10 14:35:26 [INFO] received CSR
2019/05/10 14:35:26 [INFO] generating key: rsa-2048
2019/05/10 14:35:27 [INFO] encoded CSR
2019/05/10 14:35:27 [INFO] signed certificate with serial number 475278262073565653888389440882916523548487878002
2019/05/10 14:35:27 [WARNING] This certificate lacks a "hosts" field. This makes it unsuitable for
websites. For more information see the Baseline Requirements for the Issuance and Management
of Publicly-Trusted Certificates, v.1.1.6, from the CA/Browser Forum (https://cabforum.org);
specifically, section 10.2.3 ("Information Requirements").
[root@linux-node1 ssl]# cp kubernetes*.pem /opt/kubernetes/ssl/
[root@linux-node1 ssl]# scp kubernetes*.pem 192.168.219.136:/opt/kubernetes/ssl/
[root@linux-node1 ssl]# scp kubernetes*.pem 192.168.219.137:/opt/kubernetes/ssl/
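
To confirm that every name and IP listed above made it into the signed certificate, you can optionally inspect its Subject Alternative Name list with openssl (a verification sketch, not part of the original steps):

[root@linux-node1 ssl]# openssl x509 -in /opt/kubernetes/ssl/kubernetes.pem -noout -text | grep -A1 "Subject Alternative Name"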

  d. Create the client token file used by kube-apiserver (each line is token,user,uid,"group")

[root@linux-node1 ~]# head -c 16 /dev/urandom | od -An -t x | tr -d ' '
34be4e64966c13c789458f4bcbb4b97c
[root@linux-node1 ~]# vim /opt/kubernetes/ssl/bootstrap-token.csv
34be4e64966c13c789458f4bcbb4b97c,kubelet-bootstrap,10001,"system:kubelet-bootstrap"

  e. Create the basic username/password authentication config (each line is password,user,uid)

[root@linux-node1 ~]# vim /opt/kubernetes/ssl/basic-auth.csv
admin,admin,1
readonly,readonly,2

  f. Deploy the Kubernetes API Server

[root@linux-node1 ~]# vim /usr/lib/systemd/system/kube-apiserver.service
[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
After=network.target

[Service]
ExecStart=/opt/kubernetes/bin/kube-apiserver \
  --admission-control=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,ResourceQuota,NodeRestriction \
  --bind-address=192.168.219.135 \
  --insecure-bind-address=127.0.0.1 \
  --authorization-mode=Node,RBAC \
  --runtime-config=rbac.authorization.k8s.io/v1 \
  --kubelet-https=true \
  --anonymous-auth=false \
  --basic-auth-file=/opt/kubernetes/ssl/basic-auth.csv \
  --enable-bootstrap-token-auth \
  --token-auth-file=/opt/kubernetes/ssl/bootstrap-token.csv \
  --service-cluster-ip-range=10.1.0.0/16 \
  --service-node-port-range=20000-40000 \
  --tls-cert-file=/opt/kubernetes/ssl/kubernetes.pem \
  --tls-private-key-file=/opt/kubernetes/ssl/kubernetes-key.pem \
  --client-ca-file=/opt/kubernetes/ssl/ca.pem \
  --service-account-key-file=/opt/kubernetes/ssl/ca-key.pem \
  --etcd-cafile=/opt/kubernetes/ssl/ca.pem \
  --etcd-certfile=/opt/kubernetes/ssl/kubernetes.pem \
  --etcd-keyfile=/opt/kubernetes/ssl/kubernetes-key.pem \
  --etcd-servers=https://192.168.219.135:2379,https://192.168.219.136:2379,https://192.168.219.137:2379 \
  --enable-swagger-ui=true \
  --allow-privileged=true \
  --audit-log-maxage=30 \
  --audit-log-maxbackup=3 \
  --audit-log-maxsize=100 \
  --audit-log-path=/opt/kubernetes/log/api-audit.log \
  --event-ttl=1h \
  --v=2 \
  --logtostderr=false \
  --log-dir=/opt/kubernetes/log
Restart=on-failure
RestartSec=5
Type=notify
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
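
Note that --log-dir and --audit-log-path both point at /opt/kubernetes/log; if that directory does not exist yet from an earlier step, create it before starting the service:

[root@linux-node1 ~]# mkdir -p /opt/kubernetes/log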

  g. Start the Kubernetes API Server service

[root@linux-node1 ~]# systemctl daemon-reload
[root@linux-node1 ~]# systemctl enable kube-apiserver
Created symlink from /etc/systemd/system/multi-user.target.wants/kube-apiserver.service to /usr/lib/systemd/system/kube-apiserver.service.
[root@linux-node1 ~]# systemctl start kube-apiserver
[root@linux-node1 ~]# systemctl status kube-apiserver
● kube-apiserver.service - Kubernetes API Server
   Loaded: loaded (/usr/lib/systemd/system/kube-apiserver.service; enabled; vendor preset: disabled)
   Active: active (running) since Fri 2019-05-10 14:46:39 CST; 28s ago
     Docs: https://github.com/GoogleCloudPlatform/kubernetes
 Main PID: 26916 (kube-apiserver)
    Tasks: 19
   Memory: 337.0M
   CGroup: /system.slice/kube-apiserver.service
           └─26916 /opt/kubernetes/bin/kube-apiserver --admission-control=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,ResourceQuota,NodeRestriction --bind-address=192.168.219.135 --ins...

May 10 14:46:31 localhost.localdomain systemd[1]: Starting Kubernetes API Server...
May 10 14:46:32 localhost.localdomain kube-apiserver[26916]: Flag --admission-control has been deprecated, Use --enable-admission-plugins or --disable-admission-plugins instead. Will be removed...ure version.
May 10 14:46:32 localhost.localdomain kube-apiserver[26916]: Flag --insecure-bind-address has been deprecated, This flag will be removed in a future version.
May 10 14:46:34 localhost.localdomain kube-apiserver[26916]: [restful] 2019/05/10 14:46:34 log.go:33: [restful/swagger] listing is available at https://192.168.219.135:6443/swaggerapi
May 10 14:46:34 localhost.localdomain kube-apiserver[26916]: [restful] 2019/05/10 14:46:34 log.go:33: [restful/swagger] https://192.168.219.135:6443/swaggerui/ is mapped to folder /swagger-ui/
May 10 14:46:36 localhost.localdomain kube-apiserver[26916]: [restful] 2019/05/10 14:46:36 log.go:33: [restful/swagger] listing is available at https://192.168.219.135:6443/swaggerapi
May 10 14:46:36 localhost.localdomain kube-apiserver[26916]: [restful] 2019/05/10 14:46:36 log.go:33: [restful/swagger] https://192.168.219.135:6443/swaggerui/ is mapped to folder /swagger-ui/
May 10 14:46:39 localhost.localdomain systemd[1]: Started Kubernetes API Server.
Hint: Some lines were ellipsized, use -l to show in full.
[root@linux-node1 ~]# ss -tulnp | grep kube-apiserver
tcp    LISTEN     0      128    192.168.219.135:6443                  *:*                   users:(("kube-apiserver",pid=26916,fd=5))
tcp    LISTEN     0      128    127.0.0.1:8080                  *:*                   users:(("kube-apiserver",pid=26916,fd=55))
# The listening sockets show that the API Server is listening on port 6443 and is also listening on the local port 8080, which is provided for kube-scheduler and kube-controller-manager to use
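
A quick way to confirm the API Server is healthy is the /healthz endpoint on the local insecure port (a sketch; the secure port 6443 would reject an unauthenticated curl because --anonymous-auth=false is set):

[root@linux-node1 ~]# curl http://127.0.0.1:8080/healthz
# expected to print: ok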

 2. Deploy the Controller Manager Service

  •   The Controller Manager consists of a set of controllers; it watches the state of the whole cluster through the API Server and keeps the cluster in its desired working state.

  a. Deploy the Controller Manager service

[root@linux-node1 ~]# vim /usr/lib/systemd/system/kube-controller-manager.service
[Unit]
Description=Kubernetes Controller Manager
Documentation=https://github.com/GoogleCloudPlatform/kubernetes

[Service]
ExecStart=/opt/kubernetes/bin/kube-controller-manager \
  --address=127.0.0.1 \
  --master=http://127.0.0.1:8080 \
  --allocate-node-cidrs=true \
  --service-cluster-ip-range=10.1.0.0/16 \
  --cluster-cidr=10.2.0.0/16 \
  --cluster-name=kubernetes \
  --cluster-signing-cert-file=/opt/kubernetes/ssl/ca.pem \
  --cluster-signing-key-file=/opt/kubernetes/ssl/ca-key.pem \
  --service-account-private-key-file=/opt/kubernetes/ssl/ca-key.pem \
  --root-ca-file=/opt/kubernetes/ssl/ca.pem \
  --leader-elect=true \
  --v=2 \
  --logtostderr=false \
  --log-dir=/opt/kubernetes/log

Restart=on-failure
RestartSec=5

[Install]
WantedBy=multi-user.target

  b. Start the Controller Manager service

[root@linux-node1 ~]# systemctl daemon-reload
[root@linux-node1 ~]# systemctl enable kube-controller-manager
[root@linux-node1 ~]# systemctl start kube-controller-manager
[root@linux-node1 ~]# systemctl status kube-controller-manager
● kube-controller-manager.service - Kubernetes Controller Manager
   Loaded: loaded (/usr/lib/systemd/system/kube-controller-manager.service; enabled; vendor preset: disabled)
   Active: active (running) since Fri 2019-05-10 14:53:51 CST; 6s ago
     Docs: https://github.com/GoogleCloudPlatform/kubernetes
 Main PID: 26984 (kube-controller)
    Tasks: 15
   Memory: 53.5M
   CGroup: /system.slice/kube-controller-manager.service
           └─26984 /opt/kubernetes/bin/kube-controller-manager --address=127.0.0.1 --master=http://127.0.0.1:8080 --allocate-node-cidrs=true --service-cluster-ip-range=10.1.0.0/16 --cluster-cidr=10.2.0.0/16...

May 10 14:53:51 localhost.localdomain systemd[1]: Started Kubernetes Controller Manager.
[root@linux-node1 ~]# ss -tulnp | grep kube-controller
tcp    LISTEN     0      128    127.0.0.1:10252                 *:*                   users:(("kube-controller",pid=26984,fd=5))
# The listening socket shows that kube-controller-manager listens only on the local port 10252; it cannot be reached directly from outside and must be accessed through the API Server.
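
Like the API Server, kube-controller-manager exposes a local health endpoint on its port that you can probe (an optional check, not part of the original steps):

[root@linux-node1 ~]# curl http://127.0.0.1:10252/healthz
# expected to print: ok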

 3. Deploy the Kubernetes Scheduler Service

  a. Deploy the Scheduler service

[root@linux-node1 ~]# vim /usr/lib/systemd/system/kube-scheduler.service
[Unit]
Description=Kubernetes Scheduler
Documentation=https://github.com/GoogleCloudPlatform/kubernetes

[Service]
ExecStart=/opt/kubernetes/bin/kube-scheduler \
  --address=127.0.0.1 \
  --master=http://127.0.0.1:8080 \
  --leader-elect=true \
  --v=2 \
  --logtostderr=false \
  --log-dir=/opt/kubernetes/log

Restart=on-failure
RestartSec=5

[Install]
WantedBy=multi-user.target

  b. Start the Scheduler service

[root@linux-node1 ~]# systemctl daemon-reload
[root@linux-node1 ~]# systemctl enable kube-scheduler
[root@linux-node1 ~]# systemctl start kube-scheduler
[root@linux-node1 ~]# systemctl status kube-scheduler
● kube-scheduler.service - Kubernetes Scheduler
   Loaded: loaded (/usr/lib/systemd/system/kube-scheduler.service; enabled; vendor preset: disabled)
   Active: active (running) since Fri 2019-05-10 15:02:18 CST; 6s ago
     Docs: https://github.com/GoogleCloudPlatform/kubernetes
 Main PID: 27063 (kube-scheduler)
    Tasks: 14
   Memory: 9.5M
   CGroup: /system.slice/kube-scheduler.service
           └─27063 /opt/kubernetes/bin/kube-scheduler --address=127.0.0.1 --master=http://127.0.0.1:8080 --leader-elect=true --v=2 --logtostderr=false --log-dir=/opt/kubernetes/log

May 10 15:02:18 localhost.localdomain systemd[1]: Started Kubernetes Scheduler.
[root@linux-node1 ssl]# ss -tulnp | grep kube-scheduler
tcp    LISTEN     0      128    127.0.0.1:10251                 *:*                   users:(("kube-scheduler",pid=27063,fd=6))
# The kube-scheduler socket likewise shows it listening only on the local port 10251; it cannot be reached directly from outside and is likewise accessed through the API Server.
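
Because --leader-elect=true is set, you can also verify through the REST API (reachable locally via the insecure port) that the scheduler has acquired its leader-election lock; in this era of Kubernetes the leader is recorded as an annotation on an Endpoints object in kube-system (a sketch, not part of the original steps):

[root@linux-node1 ~]# curl -s http://127.0.0.1:8080/api/v1/namespaces/kube-system/endpoints/kube-scheduler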

 4. Deploy the kubectl Command-Line Tool

  •   kubectl is the tool for day-to-day cluster management, and to manage Kubernetes it has to talk to the cluster components, which requires a client certificate. That is why kubectl gets its own deployment step here: unlike kube-apiserver, kube-controller-manager, and kube-scheduler, which were configured entirely through flags in their unit files and started directly as services, kubectl needs its own certificate plus a kubeconfig file.

  a. Prepare the binary package

[root@linux-node1 ~]# cd /usr/local/src/kubernetes/client/bin
[root@linux-node1 bin]# cp kubectl /opt/kubernetes/bin/

  b. Create the admin certificate signing request

[root@linux-node1 ~]# cd /usr/local/src/ssl/
[root@linux-node1 ssl]# vim admin-csr.json
{
  "CN": "admin",
  "hosts": [],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "ShangHai",
      "L": "ShangHai",
      "O": "system:masters",
      "OU": "System"
    }
  ]
}
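
The key field here is "O": "system:masters": with --authorization-mode=Node,RBAC, the built-in cluster-admin ClusterRoleBinding grants the system:masters group full cluster privileges, so any client certificate carrying this O value acts as an administrator. Once kubectl is configured (steps d-g below) you can inspect that binding yourself (an optional check):

[root@linux-node1 ssl]# kubectl get clusterrolebinding cluster-admin -o yaml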

  c. Generate the admin certificate and private key

[root@linux-node1 ssl]# cfssl gencert -ca=/opt/kubernetes/ssl/ca.pem \
>    -ca-key=/opt/kubernetes/ssl/ca-key.pem \
>    -config=/opt/kubernetes/ssl/ca-config.json \
>    -profile=kubernetes admin-csr.json | cfssljson -bare admin
2019/05/10 15:13:07 [INFO] generate received request
2019/05/10 15:13:07 [INFO] received CSR
2019/05/10 15:13:07 [INFO] generating key: rsa-2048
2019/05/10 15:13:07 [INFO] encoded CSR
2019/05/10 15:13:07 [INFO] signed certificate with serial number 289140313675676861479107326081494268774735552549
2019/05/10 15:13:07 [WARNING] This certificate lacks a "hosts" field. This makes it unsuitable for
websites. For more information see the Baseline Requirements for the Issuance and Management
of Publicly-Trusted Certificates, v.1.1.6, from the CA/Browser Forum (https://cabforum.org);
specifically, section 10.2.3 ("Information Requirements").
[root@linux-node1 ssl]# ll admin*
-rw-r--r--. 1 root root 1013 May 10 15:13 admin.csr
-rw-r--r--. 1 root root  231 May 10 15:08 admin-csr.json
-rw-------. 1 root root 1675 May 10 15:13 admin-key.pem
-rw-r--r--. 1 root root 1407 May 10 15:13 admin.pem
[root@linux-node1 ssl]# cp admin*.pem /opt/kubernetes/ssl/

  d. Set the cluster parameters

[root@linux-node1 ssl]# kubectl config set-cluster kubernetes \
   --certificate-authority=/opt/kubernetes/ssl/ca.pem \
   --embed-certs=true \
   --server=https://192.168.219.135:6443
Cluster "kubernetes" set.

  e. Set the client authentication parameters

[root@linux-node1 ssl]# kubectl config set-credentials admin \
   --client-certificate=/opt/kubernetes/ssl/admin.pem \
   --embed-certs=true \
   --client-key=/opt/kubernetes/ssl/admin-key.pem
User "admin" set.

  f. Set the context parameters

[root@linux-node1 ssl]# kubectl config set-context kubernetes \
   --cluster=kubernetes \
   --user=admin
Context "kubernetes" created.

  g. Set the default context

[root@linux-node1 src]# kubectl config use-context kubernetes
Switched to context "kubernetes".

  Steps d-g above generate the config file under the home directory (~/.kube/config). From now on kubectl talks to the API Server through this file, which also means that if you want to run kubectl on another node, you must copy this file there, as shown after the listing below.

[root@localhost .kube]# pwd
/root/.kube
[root@localhost .kube]# ls
config
[root@localhost .kube]# vim config 

apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: <base64 CA certificate, omitted here>
    server: https://192.168.219.135:6443
  name: kubernetes
contexts:
- context:
    cluster: kubernetes
    user: admin
  name: kubernetes
current-context: kubernetes
kind: Config
preferences: {}
users:
- name: admin
  user:
    client-certificate-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUQ0VENDQXNtZ0F3SUJBZ0lVTXFXQUJVMUdiVUxUOEpJaTlydTk0QThTSUNVd0RRWUpLb1pJaHZjTkFRRUwKQlFBd1p6RUxNQWtHQTFVRUJoTUNRMDR4RVRBUEJnTlZCQWdUQ0ZOb1lXNW5TR0ZwTVJFd0R3WURWUVFIRXdoVAphR0Z1WjBoaGFURU1NQW9HQTFVRUNoTURhemh6TVE4d0RRWURWUVFMRXdaVGVYTjBaVzB4RXpBUkJnTlZCQU1UCkNtdDFZbVZ5Ym1WMFpYTXdIaGNOTVRrd05URXdNRGN3T0RBd1doY05NakF3TlRBNU1EY3dPREF3V2pCdE1Rc3cKQ1FZRFZRUUdFd0pEVGpFUk1BOEdBMVVFQ0JNSVUyaGhibWRJWVdreEVUQVBCZ05WQkFjVENGTm9ZVzVuU0dGcApNUmN3RlFZRFZRUUtFdzV6ZVhOMFpXMDZiV0Z6ZEdWeWN6RVBNQTBHQTFVRUN4TUdVM2x6ZEdWdE1RNHdEQVlEClZRUURFd1ZoWkcxcGJqQ0NBU0l3RFFZSktvWklodmNOQVFFQkJRQURnZ0VQQURDQ0FRb0NnZ0VCQUt1Q0h5cVUKOHpiRCs4eFlEaVhvM2V6MjB6QnFXamJzdlhPY0p1d3dRTzV1SU9RcU1OQXFMWXR3ckNGdUVYaHg2K3hLY1VmYwpVZWNoSHllTnZJQ0E5ZkxwS0V3eS8xbVNxOEN1aEYxK2s3dnZLTFEwTVZ4MFd6SkEyTnkwc25EVFcxMkwxUURKCjRlQTF3NUZseG1KTDhvNXFsOFN5VjI0Uml2RFowbXRwdC9LcWFTcTQzUmplWWROVlJqR2s5aEZpT3dnNXM5ZlkKaEZRczkzU1hiTXdlblE4RW9GVi9GRTEyQWpaYndMMTdtSFdMQlA5cTFSMVd0dlBLNlBybExiVFR0UHZWWGVQbgp3b00vaS9XakhyN2ZnOGxvTVVoU1VjRmZhVEdFOEpibzVkRURWejdMRlc5NFl6VEVrZHpGNFNwZGw0MUV6bjd5CjgvblhLMks4MkN2dHJrRUNBd0VBQWFOL01IMHdEZ1lEVlIwUEFRSC9CQVFEQWdXZ01CMEdBMVVkSlFRV01CUUcKQ0NzR0FRVUZCd01CQmdnckJnRUZCUWNEQWpBTUJnTlZIUk1CQWY4RUFqQUFNQjBHQTFVZERnUVdCQlNYNW9IawpBKzQzYUdYZ0ttSExsdmdrQ3BSeXB6QWZCZ05WSFNNRUdEQVdnQlRUVGh0bUFXWjB2TU9IajBPSitJRmhhQUhMCk1EQU5CZ2txaGtpRzl3MEJBUXNGQUFPQ0FRRUFoWkhBb2pYU2VFaGFVQWdWdGxJSnJ5aENEcnB6aEozNTRKUUEKQmFzUlIvQ1ZyamNkV1U1NjRyS01vZTEyb0JWeHRxRE5oQjVURGxwMEduUnpkNHJqb3l3NmEwUWVSMUZ5Z3BVYQpINFZ0QUtrdTF6Und1blkxSnNDRnA4RWFGblh0dTdYZmJ1MnRYNTU1M0FCSE01azFPSEpBMDY4MkdsdFIvVnNjCmhUKzFya0FPUUY1S3pLQzhqaWZVN1hHNm1Sd3ZsdkR6MmN6K21XYUc2WjJSWXNxN2Fuc3N5WER6WjFvZkN0ZmoKOUtlZWk3RXBuQm4rei9MRHNwL1BVUDJ5YkJETTJXZXhCOURlMnZwNzYxbnJvWkpRSEdjai83MzZBSFhWL2NETApLV0pZa0F4Zzk5ZFk1MmF5UXNxTWdZOVFRSGtieHV0MnFEQ2xSL1JlNjROam5PbHFydz09Ci0tLS0tRU5EIENFUlRJRklDQVRFLS0tLS0K
    client-key-data: LS0tLS1CRUdJTiBSU0EgUFJJVkFURSBLRVktLS0tLQpNSUlFb3dJQkFBS0NBUUVBcTRJZktwVHpOc1A3ekZnT0plamQ3UGJUTUdwYU51eTljNXdtN0RCQTdtNGc1Q293CjBDb3RpM0NzSVc0UmVISHI3RXB4Ujl4UjV5RWZKNDI4Z0lEMTh1a29UREwvV1pLcndLNkVYWDZUdSs4b3REUXgKWEhSYk1rRFkzTFN5Y05OYlhZdlZBTW5oNERYRGtXWEdZa3Z5am1xWHhMSlhiaEdLOE5uU2EybTM4cXBwS3JqZApHTjVoMDFWR01hVDJFV0k3Q0RtejE5aUVWQ3ozZEpkc3pCNmREd1NnVlg4VVRYWUNObHZBdlh1WWRZc0UvMnJWCkhWYTI4OHJvK3VVdHROTzArOVZkNCtmQ2d6K0w5YU1ldnQrRHlXZ3hTRkpSd1Y5cE1ZVHdsdWpsMFFOWFBzc1YKYjNoak5NU1IzTVhoS2wyWGpVVE9mdkx6K2RjcllyellLKzJ1UVFJREFRQUJBb0lCQUV6R0pkOXc1K0xYSG10ZAo2NDlxeTVWYzlETFRHT2xIVnBOZkRrbGlYRjZmSzlnWFR0eVFWT3o3bGdJcy9HTVhWQTNsVVFwakJNTGJIOUFiCjhZcndyNmg2V05DcmI0VVFWQlFmeXg3ekgzemNWVE05dmU3dUl6aStzSlV6eWtFWlMrZjNSWFZoNmR2dEZVdUwKN0o5cDhmMXdsOW0wSDlFa3h6YUR1MTdiNXowWDhhbFZYTE1aZ282QzJ1Mm5oTTMrdm45N1ppYlNoVHl3VDFGUApvRUkyc240RG4vYUowV2cxSzNaaWtRVndlbFRLQ1dkVlIzR2FobFNFRVNYeVpkT3dZdzV4TDZ5UWZQYzljN3QxCkZHam92K3RiVUZMajc4T05kdjdQSVdYMC9hRjZZOWVPVllCT1lOekNIUzVLaE90VFJOZmZCRjRkUkx1UkxZS0kKMTFvMnV6VUNnWUVBeFlwYkNRZUcyd1VEcVZXS1U4OXhWMmJEL2RqZitsdFpneEJnbCt2TlVpZzJya3JWcFN5NwoxSFBCVHUrdWVwQm5LVkJJUTI2WmtDelBxVDFhdHNBbjRtY0dlT1hxNm9HeWUyTlE0cEZrVE0xVVFxeUk0TTZmCmRDUDJFS2VWSm5rTDA3QVZpSHlJV0pZa0J5eHU2aDVrT2IzdmJWUlBvcjRqb05FNm1HRjBBcHNDZ1lFQTNrT1MKZEtrTnNFbytrN3lKbkFGUm5WYmVpcStlS1U5cmUyMldLemtrU1JiTUplaWRQRUVFa0VoYm8zL2RCSlA5K2VIWQpmRmtiNzVNNUZleW1GY05GT094bVJEMlhaYlJFc0tVN2hhQkRuUERDQ2lXSEF3VXZycmV3L2dmY25Ncm1Nc2VYCjkrUTN0NFRlSFQzVUlhenZnMXVBM21kUWRHK2hzaERMWnRsTTRsTUNnWUJRUVcvTzhWSG10ZGpRK1VIajN3bkwKV3FNU0JRU3FjR2FqaXduVGJ5ZlIweWkwRXc5TnRpanhuYjNSMWlycS9MUU00dU1aRWx3dGFTZE5PUEljQVdHeQo1K3lIUGRIOVNJZzgvUktsbWpCSHk3d0tBcEx4MHNDUnJQS1J2YVFwSjFDWXhwZFpCazlXdmxrUTJRcU83NTRFCm41Z2dzUHBSd2pJemFnNEdUc0dWTlFLQmdHdUo3Q21QeGZTKzUyb1p0Y2NLaUUrRlFXVit0UnF0dDRaZnJtRzUKWXdvT0FyWnd4dXJwVm1qczZaSEJBdEg5UE13VGJ1Z3pRU1g0YUkxb0U2L0I3Qk12cGdkc2VYMFc3SWsvV1A0OQpYWmxvajZuVElIRGdxSUp6bENwRTZZUGZVK1BMMklaeklGWWw3a1hkclc2aHVyMG1uOEo3NEZ5RnlvbGFRTi9CClVjYkxBb0dCQUw2S0prWHpydzJCdHNrdjNhdEhRT3BJbklFbnNEemxTOXpWc083SCs1VjhlTnpmd05POUNhTVoKbTJzKzAvTFdTSHFKTElWMFkweGNPYnBybStnSThwN0tsVDhtcWl3WGhGeGFnUTN1WVlYQm1aQWg0WHY2N3Y1TwpITDV6dFl2cDJaNjNRNHU5VERQcUtSd011MVRlV2hJcTExc1JFa0dXYW80YkpyTW5kckJ0Ci0tLS0tRU5EIFJTQSBQUklWQVRFIEtFWS0tLS0tCg==
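
To use kubectl from another node, copy the file over (a sketch, assuming root SSH access to linux-node2 at 192.168.219.136, as with the earlier scp steps):

[root@linux-node1 ~]# ssh 192.168.219.136 "mkdir -p /root/.kube"
[root@linux-node1 ~]# scp /root/.kube/config 192.168.219.136:/root/.kube/config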

  h. Use the kubectl command line

[root@linux-node1 ssl]# kubectl get cs
NAME                 STATUS    MESSAGE              ERROR
scheduler            Healthy   ok
controller-manager   Healthy   ok                                      
etcd-2               Healthy   {"health": "true"}   
etcd-0               Healthy   {"health": "true"}   
etcd-1               Healthy   {"health": "true"}  
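
With every component reporting Healthy, kubectl is now talking to the API Server over TLS using the admin certificate; kubectl cluster-info gives a one-line confirmation of the endpoint in use (an optional final check, not part of the original steps):

[root@linux-node1 ssl]# kubectl cluster-info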