Deploying the etcd Cluster
IP | Hostname | Role |
---|---|---|
192.168.199.11 | 192-168-199-11 | Proxy node 1 |
192.168.199.12 | 192-168-199-12 | Proxy node 2 |
192.168.199.13 | 192-168-199-13 | Compute node 1 |
192.168.199.14 | 192-168-199-14 | Compute node 2 |
192.168.199.15 | 192-168-199-15 | Ops node |
Signing certificates for etcd
Certificate signing is performed on the ops host, 192.168.199.15.
Create the signing config file based on the root certificate
[root@192-168-199-15 ~]# cat /opt/certs/ca-config.json
{
    "signing": {
        "default": {
            "expiry": "175200h"
        },
        "profiles": {
            "server": {
                "expiry": "175200h",
                "usages": [
                    "signing",
                    "key encipherment",
                    "server auth"
                ]
            },
            "client": {
                "expiry": "175200h",
                "usages": [
                    "signing",
                    "key encipherment",
                    "client auth"
                ]
            },
            "peer": {
                "expiry": "175200h",
                "usages": [
                    "signing",
                    "key encipherment",
                    "server auth",
                    "client auth"
                ]
            }
        }
    }
}
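Note: the root certificate ca.pem and private key ca-key.pem used throughout this section are assumed to have been generated beforehand on 192.168.199.15. A minimal sketch of that step, with a hypothetical ca-csr.json whose CN is a placeholder and whose names fields mirror the CSR files below:
[root@192-168-199-15 ~]# cat /opt/certs/ca-csr.json
{
    "CN": "k8s-ca",
    "hosts": [
    ],
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "ST": "beijing",
            "L": "beijing",
            "O": "od",
            "OU": "ops"
        }
    ],
    "ca": {
        "expiry": "175200h"
    }
}
[root@192-168-199-15 ~]# cd /opt/certs/ && cfssl gencert -initca ca-csr.json | cfssl-json -bare ca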
Create the JSON config file for generating the self-signed certificate signing request (CSR)
[root@192-168-199-15 ~]# cd /opt/certs/
[root@192-168-199-15 certs]# cat /opt/certs/etcd-peer-csr.json
{
    "CN": "k8s-etcd",
    "hosts": [
        "192.168.199.11",
        "192.168.199.12",
        "192.168.199.13",
        "192.168.199.14"
    ],
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "ST": "beijing",
            "L": "beijing",
            "O": "od",
            "OU": "ops"
        }
    ]
}
The hosts field lists the machines where the etcd service may run. 192.168.199.11 is included as a spare: if one member fails, a replacement etcd node can be brought up on 192.168.199.11. CIDR ranges are not supported here, so in production you should list a generous number of hosts up front.
Generate the etcd certificate and private key
[root@192-168-199-15 certs]# cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=peer etcd-peer-csr.json |cfssl-json -bare etcd-peer
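To confirm what was signed into the new certificate, it can be inspected with cfssl-certinfo (assumed here to be installed alongside cfssl and renamed in the same style as cfssl-json); the sans field should list the four IPs from etcd-peer-csr.json:
[root@192-168-199-15 certs]# cfssl-certinfo -cert etcd-peer.pem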
Installing etcd
The following steps must be performed on all three hosts 192.168.199.12, 192.168.199.13, and 192.168.199.14; the procedure is identical on each.
[root@192-168-199-12 ~]# useradd -s /sbin/nologin -M etcd
[root@192-168-199-12 ~]# id etcd
uid=1000(etcd) gid=1000(etcd) groups=1000(etcd)
Download etcd from GitHub (see the etcd releases page).
[root@192-168-199-12 ~]# tar xf etcd-v3.1.20-linux-amd64.tar.gz -C /opt/
[root@192-168-199-12 ~]# cd /opt/
[root@192-168-199-12 opt]# mv etcd-v3.1.20-linux-amd64/ etcd-v3.1.20
[root@192-168-199-12 opt]# ln -s /opt/etcd-v3.1.20/ /opt/etcd
Create the directories and copy over the certificate and private key
[root@192-168-199-12 opt]# mkdir -p /opt/etcd/certs /data/etcd /data/logs/etcd-server
[root@192-168-199-12 opt]# cd /opt/etcd/certs/
[root@192-168-199-12 certs]# scp 192-168-199-15:/opt/certs/ca.pem ./
[root@192-168-199-12 certs]# scp 192-168-199-15:/opt/certs/etcd-peer.pem ./
[root@192-168-199-12 certs]# scp 192-168-199-15:/opt/certs/etcd-peer-key.pem ./
Create the etcd startup script; pay particular attention to the --name and --initial-cluster parts.
[root@192-168-199-12 etcd]# cat /opt/etcd/etcd-server-startup.sh
#!/bin/sh
./etcd --name etcd-server-199-12 \
--data-dir /data/etcd/etcd-server \
--listen-peer-urls https://192.168.199.12:2380 \
--listen-client-urls https://192.168.199.12:2379,http://127.0.0.1:2379 \
--quota-backend-bytes 8000000000 \
--initial-advertise-peer-urls https://192.168.199.12:2380 \
--advertise-client-urls https://192.168.199.12:2379,http://127.0.0.1:2379 \
--initial-cluster etcd-server-199-12=https://192.168.199.12:2380,etcd-server-199-13=https://192.168.199.13:2380,etcd-server-199-14=https://192.168.199.14:2380 \
--ca-file ./certs/ca.pem \
--cert-file ./certs/etcd-peer.pem \
--key-file ./certs/etcd-peer-key.pem \
--client-cert-auth \
--trusted-ca-file ./certs/ca.pem \
--peer-ca-file ./certs/ca.pem \
--peer-cert-file ./certs/etcd-peer.pem \
--peer-key-file ./certs/etcd-peer-key.pem \
--peer-client-cert-auth \
--peer-trusted-ca-file ./certs/ca.pem \
--log-output stdout
In the startup scripts of the three hosts, the --name parameter and the IP addresses involved differ; take care to adjust them.
Grant the etcd user ownership of the directories involved above
[root@192-168-199-12 etcd]# chmod a+x /opt/etcd/etcd-server-startup.sh
[root@192-168-199-12 etcd]# chown -R etcd.etcd /opt/etcd-v3.1.20/
[root@192-168-199-12 etcd]# chown -R etcd.etcd /data/etcd/
[root@192-168-199-12 etcd]# chown -R etcd.etcd /data/logs/etcd-server/
Install supervisor, which will manage the etcd process
[root@192-168-199-12 etcd]# yum install supervisor -y
[root@192-168-199-12 etcd]# systemctl start supervisord
[root@192-168-199-12 etcd]# systemctl enable supervisord
Write the supervisord config file so that it can manage etcd
[root@192-168-199-12 etcd]# cat /etc/supervisord.d/etcd-server.ini
[program:etcd-server-199-12]
command=/opt/etcd/etcd-server-startup.sh ; the program (relative uses PATH, can take args)
numprocs=1 ; number of processes copies to start (def 1)
directory=/opt/etcd ; directory to cwd to before exec (def no cwd)
autostart=true ; start at supervisord start (default: true)
autorestart=true ; restart at unexpected quit (default: true)
startsecs=30 ; number of secs prog must stay running (def. 1)
startretries=3 ; max # of serial start failures (default 3)
exitcodes=0,2 ; 'expected' exit codes for process (default 0,2)
stopsignal=QUIT ; signal used to kill process (default TERM)
stopwaitsecs=10 ; max num secs to wait b4 SIGKILL (default 10)
user=etcd ; setuid to this UNIX account to run the program
redirect_stderr=true ; redirect proc stderr to stdout (default false)
stdout_logfile=/data/logs/etcd-server/etcd.stdout.log ; stdout log path, NONE for none; default AUTO
stdout_logfile_maxbytes=64MB ; max # logfile bytes b4 rotation (default 50MB)
stdout_logfile_backups=4 ; # of stdout logfile backups (default 10)
stdout_capture_maxbytes=1MB ; number of bytes in 'capturemode' (default 0)
stdout_events_enabled=false ; emit events on stdout writes (default false)
The name in [program:etcd-server-199-12] should match the --name used in the startup script; this part also differs across the three hosts.
Start the etcd process via supervisord and check its status
[root@192-168-199-12 etcd]# supervisorctl update
[root@192-168-199-12 etcd]# supervisorctl status
# the initial state is STARTING; after about 30 seconds it changes to RUNNING
etcd-server-199-12 RUNNING pid 12546, uptime 0:02:40
Verify that the etcd ports are listening
[root@192-168-199-12 etcd]# netstat -lntp | grep etcd
# both ports 2379 and 2380 must be up
tcp 0 0 192.168.199.12:2379 0.0.0.0:* LISTEN 12547/./etcd
tcp 0 0 127.0.0.1:2379 0.0.0.0:* LISTEN 12547/./etcd
tcp 0 0 192.168.199.12:2380 0.0.0.0:* LISTEN 12547/./etcd
Cluster health check
Once all three hosts have etcd installed, check the state of the cluster; either of the following two methods will do.
Method 1:
[root@192-168-199-14 etcd]# ./etcdctl cluster-health
member 35d340010a02200a is healthy: got healthy result from http://127.0.0.1:2379
member 4a554f90b80c73ac is healthy: got healthy result from http://127.0.0.1:2379
member c899bc1f18888852 is healthy: got healthy result from http://127.0.0.1:2379
cluster is healthy
Method 2:
[root@192-168-199-14 etcd]# ./etcdctl member list
35d340010a02200a: name=etcd-server-199-14 peerURLs=https://192.168.199.14:2380 clientURLs=http://127.0.0.1:2379,https://192.168.199.14:2379 isLeader=false
4a554f90b80c73ac: name=etcd-server-199-13 peerURLs=https://192.168.199.13:2380 clientURLs=http://127.0.0.1:2379,https://192.168.199.13:2379 isLeader=false
c899bc1f18888852: name=etcd-server-199-12 peerURLs=https://192.168.199.12:2380 clientURLs=http://127.0.0.1:2379,https://192.168.199.12:2379 isLeader=true
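As an optional extra sanity check, write and read back a key through the local plain-HTTP client endpoint (the etcdctl bundled with etcd v3.1 speaks the v2 API by default; /smoke-test is an arbitrary key chosen for this example):
[root@192-168-199-14 etcd]# ./etcdctl set /smoke-test ok
ok
[root@192-168-199-14 etcd]# ./etcdctl get /smoke-test
ok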
Deploying Kubernetes
Preparing the installation package
These steps are performed on 192.168.199.13 and 192.168.199.14.
The Kubernetes server tarball can be downloaded from GitHub.
[root@192-168-199-13 ~]# tar xf kubernetes-server-linux-amd64-v1.15.2.tar.gz -C /opt/
[root@192-168-199-13 ~]# cd /opt/
[root@192-168-199-13 opt]# mv kubernetes kubernetes-v1.15.2
[root@192-168-199-13 opt]# ln -s /opt/kubernetes-v1.15.2/ /opt/kubernetes
Remove unnecessary files
[root@192-168-199-13 opt]# cd kubernetes
# the Go source tarball
[root@192-168-199-13 kubernetes]# rm -rf kubernetes-src.tar.gz
[root@192-168-199-13 kubernetes]# cd server/bin/
# the .tar files are container images, used only by kubeadm-style deployments
[root@192-168-199-13 bin]# rm -rf *.tar
[root@192-168-199-13 bin]# rm -rf *_tag
# at this point only executables remain under /opt/kubernetes/server/bin
[root@192-168-199-13 bin]# ll
total 884636
-rwxr-xr-x 1 root root 43534816 Aug 5 2019 apiextensions-apiserver
-rwxr-xr-x 1 root root 100548640 Aug 5 2019 cloud-controller-manager
-rwxr-xr-x 1 root root 200648416 Aug 5 2019 hyperkube
-rwxr-xr-x 1 root root 40182208 Aug 5 2019 kubeadm
-rwxr-xr-x 1 root root 164501920 Aug 5 2019 kube-apiserver
-rwxr-xr-x 1 root root 116397088 Aug 5 2019 kube-controller-manager
-rwxr-xr-x 1 root root 42985504 Aug 5 2019 kubectl
-rwxr-xr-x 1 root root 119616640 Aug 5 2019 kubelet
-rwxr-xr-x 1 root root 36987488 Aug 5 2019 kube-proxy
-rwxr-xr-x 1 root root 38786144 Aug 5 2019 kube-scheduler
-rwxr-xr-x 1 root root 1648224 Aug 5 2019 mounter
Repeat the same steps on 192.168.199.14.
Signing the client certificate
A certificate is needed for kube-apiserver to communicate with etcd. In that exchange etcd is the server and apiserver is the client, so a client certificate is issued for apiserver on 192.168.199.15.
Create the JSON config file for generating the certificate signing request (CSR)
[root@192-168-199-15 ~]# cd /opt/certs/
[root@192-168-199-15 ~]# cat /opt/certs/client-csr.json
{
    "CN": "k8s-node",
    "hosts": [
    ],
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "ST": "beijing",
            "L": "beijing",
            "O": "od",
            "OU": "ops"
        }
    ]
}
Issue the certificate
[root@192-168-199-15 certs]# cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=client client-csr.json |cfssl-json -bare client
[root@192-168-199-15 certs]# ll client*
-rw-r--r-- 1 root root 993 May 30 09:40 client.csr
-rw-r--r-- 1 root root 280 May 30 09:38 client-csr.json
-rw------- 1 root root 1675 May 30 09:40 client-key.pem
-rw-r--r-- 1 root root 1363 May 30 09:40 client.pem
Signing the kube-apiserver certificate
kube-apiserver also needs a certificate of its own to serve clients; it is likewise issued on 192.168.199.15.
Create the JSON config file for generating the certificate signing request (CSR)
[root@192-168-199-15 certs]# cat /opt/certs/apiserver-csr.json
{
    "CN": "k8s-apiserver",
    "hosts": [
        "127.0.0.1",
        "192.168.0.1",
        "kubernetes.default",
        "kubernetes.default.svc",
        "kubernetes.default.svc.cluster",
        "kubernetes.default.svc.cluster.local",
        "192.168.199.10",
        "192.168.199.13",
        "192.168.199.14",
        "192.168.133.16"
    ],
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "ST": "beijing",
            "L": "beijing",
            "O": "od",
            "OU": "ops"
        }
    ]
}
In hosts, list every IP where apiserver might run; as before, CIDR ranges are not supported. (192.168.199.10 is the VIP configured later; 192.168.0.1 is the first address of the service CIDR, i.e. the in-cluster kubernetes service IP.)
Issue the certificate
[root@192-168-199-15 certs]# cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=server apiserver-csr.json |cfssl-json -bare apiserver
Installing apiserver
These steps are performed on 192.168.199.13 and 192.168.199.14. First, install apiserver on 192.168.199.13.
Copy the certificates to 192.168.199.13
[root@192-168-199-13 bin]# pwd
/opt/kubernetes/server/bin
[root@192-168-199-13 bin]# mkdir cert
[root@192-168-199-13 bin]# cd cert/
[root@192-168-199-13 cert]# scp root@192.168.199.15:/opt/certs/apiserver.pem ./
[root@192-168-199-13 cert]# scp root@192.168.199.15:/opt/certs/apiserver-key.pem ./
[root@192-168-199-13 cert]# scp root@192.168.199.15:/opt/certs/ca.pem ./
[root@192-168-199-13 cert]# scp root@192.168.199.15:/opt/certs/ca-key.pem ./
[root@192-168-199-13 cert]# scp root@192.168.199.15:/opt/certs/client.pem ./
[root@192-168-199-13 cert]# scp root@192.168.199.15:/opt/certs/client-key.pem ./
# verify that the six required certificate files are present
[root@192-168-199-13 cert]# ll
total 24
-rw------- 1 root root 1679 May 31 16:18 apiserver-key.pem
-rw-r--r-- 1 root root 1598 May 31 16:17 apiserver.pem
-rw------- 1 root root 1679 May 31 16:18 ca-key.pem
-rw-r--r-- 1 root root 1346 May 31 16:18 ca.pem
-rw------- 1 root root 1675 May 31 16:19 client-key.pem
-rw-r--r-- 1 root root 1363 May 31 16:19 client.pem
Create the config directory
[root@192-168-199-13 bin]# mkdir /opt/kubernetes/server/bin/conf
[root@192-168-199-13 bin]# cd /opt/kubernetes/server/bin/conf
Create the apiserver audit policy; it is referenced by the startup script below
[root@192-168-199-13 conf]# cat audit.yaml
apiVersion: audit.k8s.io/v1beta1 # This is required.
kind: Policy
# Don't generate audit events for all requests in RequestReceived stage.
omitStages:
  - "RequestReceived"
rules:
  # Log pod changes at RequestResponse level
  - level: RequestResponse
    resources:
    - group: ""
      # Resource "pods" doesn't match requests to any subresource of pods,
      # which is consistent with the RBAC policy.
      resources: ["pods"]
  # Log "pods/log", "pods/status" at Metadata level
  - level: Metadata
    resources:
    - group: ""
      resources: ["pods/log", "pods/status"]
  # Don't log requests to a configmap called "controller-leader"
  - level: None
    resources:
    - group: ""
      resources: ["configmaps"]
      resourceNames: ["controller-leader"]
  # Don't log watch requests by the "system:kube-proxy" on endpoints or services
  - level: None
    users: ["system:kube-proxy"]
    verbs: ["watch"]
    resources:
    - group: "" # core API group
      resources: ["endpoints", "services"]
  # Don't log authenticated requests to certain non-resource URL paths.
  - level: None
    userGroups: ["system:authenticated"]
    nonResourceURLs:
    - "/api*" # Wildcard matching.
    - "/version"
  # Log the request body of configmap changes in kube-system.
  - level: Request
    resources:
    - group: "" # core API group
      resources: ["configmaps"]
    # This rule only applies to resources in the "kube-system" namespace.
    # The empty string "" can be used to select non-namespaced resources.
    namespaces: ["kube-system"]
  # Log configmap and secret changes in all other namespaces at the Metadata level.
  - level: Metadata
    resources:
    - group: "" # core API group
      resources: ["secrets", "configmaps"]
  # Log all other resources in core and extensions at the Request level.
  - level: Request
    resources:
    - group: "" # core API group
    - group: "extensions" # Version of group should NOT be included.
  # A catch-all rule to log all other requests at the Metadata level.
  - level: Metadata
    # Long-running requests like watches that fall under this rule will not
    # generate an audit event in RequestReceived.
    omitStages:
      - "RequestReceived"
Nothing in this file needs to be changed.
Create the apiserver startup script
[root@192-168-199-13 bin]# cat /opt/kubernetes/server/bin/kube-apiserver.sh
#!/bin/bash
./kube-apiserver \
--apiserver-count 2 \
--audit-log-path /data/logs/kubernetes/kube-apiserver/audit-log \
--audit-policy-file ./conf/audit.yaml \
--authorization-mode RBAC \
--client-ca-file ./cert/ca.pem \
--requestheader-client-ca-file ./cert/ca.pem \
--enable-admission-plugins NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota \
--etcd-cafile ./cert/ca.pem \
--etcd-certfile ./cert/client.pem \
--etcd-keyfile ./cert/client-key.pem \
--etcd-servers https://192.168.199.12:2379,https://192.168.199.13:2379,https://192.168.199.14:2379 \
--service-account-key-file ./cert/ca-key.pem \
--service-cluster-ip-range 192.168.0.0/16 \
--service-node-port-range 3000-29999 \
--target-ram-mb=1024 \
--kubelet-client-certificate ./cert/client.pem \
--kubelet-client-key ./cert/client-key.pem \
--log-dir /data/logs/kubernetes/kube-apiserver \
--tls-cert-file ./cert/apiserver.pem \
--tls-private-key-file ./cert/apiserver-key.pem \
--v 2
Pay attention to the file paths and IP addresses used in the script.
Make the script executable
[root@192-168-199-13 bin]# chmod +x kube-apiserver.sh
Create the supervisord config file
[root@192-168-199-13 bin]# cat /etc/supervisord.d/kube-apiserver.ini
[program:kube-apiserver-199-13]
command=/opt/kubernetes/server/bin/kube-apiserver.sh ; the program (relative uses PATH, can take args)
numprocs=1 ; number of processes copies to start (def 1)
directory=/opt/kubernetes/server/bin ; directory to cwd to before exec (def no cwd)
autostart=true ; start at supervisord start (default: true)
autorestart=true ; restart at unexpected quit (default: true)
startsecs=30 ; number of secs prog must stay running (def. 1)
startretries=3 ; max # of serial start failures (default 3)
exitcodes=0,2 ; 'expected' exit codes for process (default 0,2)
stopsignal=QUIT ; signal used to kill process (default TERM)
stopwaitsecs=10 ; max num secs to wait b4 SIGKILL (default 10)
user=root ; setuid to this UNIX account to run the program
redirect_stderr=true ; redirect proc stderr to stdout (default false)
stdout_logfile=/data/logs/kubernetes/kube-apiserver/apiserver.stdout.log ; stdout log path, NONE for none; default AUTO
stdout_logfile_maxbytes=64MB ; max # logfile bytes b4 rotation (default 50MB)
stdout_logfile_backups=4 ; # of stdout logfile backups (default 10)
stdout_capture_maxbytes=1MB ; number of bytes in 'capturemode' (default 0)
stdout_events_enabled=false ; emit events on stdout writes (default false)
Note that the name inside [ ] differs on each host.
Create the directory referenced above
[root@192-168-199-13 bin]# mkdir -p /data/logs/kubernetes/kube-apiserver
Let supervisord manage the service
[root@192-168-199-13 bin]# supervisorctl update
[root@192-168-199-13 ~]# supervisorctl status
etcd-server-199-13 RUNNING pid 846, uptime 1 day, 7:35:30
kube-apiserver-199-13 RUNNING pid 3514, uptime 0:10:13
[root@192-168-199-13 ~]# netstat -lnpt | grep kube-api
tcp 0 0 127.0.0.1:8080 0.0.0.0:* LISTEN 3515/./kube-apiserv
tcp6 0 0 :::6443 :::* LISTEN 3515/./kube-apiserv
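Optionally, confirm the apiserver is actually answering requests by hitting /healthz on the local insecure port; it should print ok:
[root@192-168-199-13 ~]# curl -s http://127.0.0.1:8080/healthz
ok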
The steps on 192.168.199.14 are the same as above.
Installing Nginx for Layer 4 Load Balancing
Deploy nginx on both 192.168.199.11 and 192.168.199.12 to load-balance the backend apiservers; keepalived then provides high availability across the two hosts.
Install and configure Nginx
Install and configure nginx identically on both hosts, running the steps below on each.
[root@192-168-199-11 ~]# yum install nginx -y
# append the stream proxy settings at the end of /etc/nginx/nginx.conf (the stream block must sit outside the http block)
[root@192-168-199-11 ~]# tail -n 12 /etc/nginx/nginx.conf
stream {
    upstream kube-apiserver {
        server 192.168.199.13:6443 max_fails=3 fail_timeout=30s;
        server 192.168.199.14:6443 max_fails=3 fail_timeout=30s;
    }
    server {
        listen 7443;
        proxy_connect_timeout 2s;
        proxy_timeout 900s;
        proxy_pass kube-apiserver;
    }
}
Check the configuration and start nginx
[root@192-168-199-11 ~]# nginx -t
[root@192-168-199-11 ~]# systemctl start nginx
[root@192-168-199-11 ~]# systemctl enable nginx
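To verify the proxy path end to end (an optional check), send a request through the 7443 listener. curl presents no client certificate, so the apiserver should answer with a JSON error (for example 403 Forbidden for the anonymous user), which still proves nginx is forwarding to the backends:
[root@192-168-199-11 ~]# curl -sk https://127.0.0.1:7443/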
Installing keepalived
Performed on 192.168.199.11 and 192.168.199.12 with near-identical configs; 192.168.199.11 is the master node and 192.168.199.12 the backup.
Install and configure keepalived
[root@192-168-199-11 ~]# yum install keepalived -y
The port-check script must be added on both servers
[root@192-168-199-11 ~]# cat /etc/keepalived/check_port.sh
#!/bin/bash
# keepalived port-check script
# Usage: reference it from keepalived.conf like this:
# vrrp_script check_port {                        # define a vrrp_script
#     script "/etc/keepalived/check_port.sh 6379" # port to monitor
#     interval 2                                  # check interval, in seconds
# }
CHK_PORT=$1
if [ -n "$CHK_PORT" ];then
    PORT_PROCESS=`ss -lnt|grep $CHK_PORT|wc -l`
    if [ $PORT_PROCESS -eq 0 ];then
        echo "Port $CHK_PORT Is Not Used,End."
        exit 1
    fi
else
    echo "Check Port Can't Be Empty!"
fi
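A quick manual test of the script (optional): with nginx up and listening on 7443 the check exits 0, while an unused port prints a message and exits 1.
[root@192-168-199-11 ~]# sh /etc/keepalived/check_port.sh 7443; echo $?
0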
Make the script executable
[root@192-168-199-11 ~]# chmod +x /etc/keepalived/check_port.sh
Configure keepalived
On the master node, clear the existing config and add the following; 192.168.199.10 is the VIP.
[root@192-168-199-11 ~]# cat /etc/keepalived/keepalived.conf
! Configuration File for keepalived
global_defs {
    router_id 192.168.199.11
}
vrrp_script chk_nginx {
    script "/etc/keepalived/check_port.sh 7443"
    interval 2
    weight -20
}
vrrp_instance VI_1 {
    state MASTER
    interface eth0
    virtual_router_id 251
    priority 100
    advert_int 1
    mcast_src_ip 192.168.199.11
    nopreempt
    authentication {
        auth_type PASS
        auth_pass 11111111
    }
    track_script {
        chk_nginx
    }
    virtual_ipaddress {
        192.168.199.10
    }
}
On the backup node, clear the existing config and add the following; 192.168.199.10 is again the VIP.
[root@192-168-199-12 ~]# cat /etc/keepalived/keepalived.conf
! Configuration File for keepalived
global_defs {
    router_id 192.168.199.12
}
vrrp_script chk_nginx {
    script "/etc/keepalived/check_port.sh 7443"
    interval 2
    weight -20
}
vrrp_instance VI_1 {
    state BACKUP
    interface eth0
    virtual_router_id 251
    mcast_src_ip 192.168.199.12
    priority 90
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 11111111
    }
    track_script {
        chk_nginx
    }
    virtual_ipaddress {
        192.168.199.10
    }
}
Start keepalived (on both nodes)
[root@192-168-199-11 ~]# systemctl start keepalived.service
[root@192-168-199-11 ~]# systemctl enable keepalived.service
Verification
The VIP sits on the 192.168.199.11 node. When nginx on that node stops and port 7443 is no longer alive, the VIP floats over to the 192.168.199.12 node.
[root@192-168-199-11 ~]# ip ad | grep 192.168.199.10
inet 192.168.199.10/32 scope global eth0
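An optional failover drill makes this concrete. Because the master is configured with nopreempt, the VIP is not expected to move back automatically once nginx is restored; restart keepalived on 192.168.199.12 if you want to force it back.
# on 192.168.199.11: stop nginx so the port check starts failing
[root@192-168-199-11 ~]# systemctl stop nginx
# on 192.168.199.12: the VIP should appear within a few seconds
[root@192-168-199-12 ~]# ip ad | grep 192.168.199.10
inet 192.168.199.10/32 scope global eth0
# restore nginx on 192.168.199.11 afterwards
[root@192-168-199-11 ~]# systemctl start nginx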
Installing kube-controller-manager
Performed on 192.168.199.13 and 192.168.199.14; the steps are identical on both.
Create the startup script
[root@192-168-199-13 ~]# cat /opt/kubernetes/server/bin/kube-controller-manager.sh
#!/bin/sh
./kube-controller-manager \
--cluster-cidr 172.7.0.0/16 \
--leader-elect true \
--log-dir /data/logs/kubernetes/kube-controller-manager \
--master http://127.0.0.1:8080 \
--service-account-private-key-file ./cert/ca-key.pem \
--service-cluster-ip-range 192.168.0.0/16 \
--root-ca-file ./cert/ca.pem \
--v 2
Make the startup script executable and create the corresponding directory
[root@192-168-199-13 ~]# chmod +x /opt/kubernetes/server/bin/kube-controller-manager.sh
[root@192-168-199-13 ~]# mkdir -p /data/logs/kubernetes/kube-controller-manager
Configure supervisord
[root@192-168-199-13 ~]# cat /etc/supervisord.d/kube-controller-manager.ini
[program:kube-controller-manager-199-13]
command=/opt/kubernetes/server/bin/kube-controller-manager.sh ; the program (relative uses PATH, can take args)
numprocs=1 ; number of processes copies to start (def 1)
directory=/opt/kubernetes/server/bin ; directory to cwd to before exec (def no cwd)
autostart=true ; start at supervisord start (default: true)
autorestart=true ; restart at unexpected quit (default: true)
startsecs=30 ; number of secs prog must stay running (def. 1)
startretries=3 ; max # of serial start failures (default 3)
exitcodes=0,2 ; 'expected' exit codes for process (default 0,2)
stopsignal=QUIT ; signal used to kill process (default TERM)
stopwaitsecs=10 ; max num secs to wait b4 SIGKILL (default 10)
user=root ; setuid to this UNIX account to run the program
redirect_stderr=true ; redirect proc stderr to stdout (default false)
stdout_logfile=/data/logs/kubernetes/kube-controller-manager/controller.stdout.log ; stdout log path, NONE for none; default AUTO
stdout_logfile_maxbytes=64MB ; max # logfile bytes b4 rotation (default 50MB)
stdout_logfile_backups=4 ; # of stdout logfile backups (default 10)
stdout_capture_maxbytes=1MB ; number of bytes in 'capturemode' (default 0)
stdout_events_enabled=false ; emit events on stdout writes (default false)
Note that the name inside [ ] differs on each host.
Update supervisord and check its status
[root@192-168-199-13 ~]# supervisorctl update
[root@192-168-199-13 ~]# supervisorctl status
etcd-server-199-13 RUNNING pid 846, uptime 1 day, 13:51:56
kube-apiserver-199-13 RUNNING pid 3514, uptime 6:26:39
kube-controller-manager-199-13 RUNNING pid 4258, uptime 0:03:45
Installing kube-scheduler
Performed on 192.168.199.13 and 192.168.199.14; the steps are identical on both.
Create the kube-scheduler startup script
[root@192-168-199-13 ~]# cat /opt/kubernetes/server/bin/kube-scheduler.sh
#!/bin/sh
./kube-scheduler \
--leader-elect \
--log-dir /data/logs/kubernetes/kube-scheduler \
--master http://127.0.0.1:8080 \
--v 2
Make the script executable and create the corresponding directory
[root@192-168-199-13 ~]# chmod +x /opt/kubernetes/server/bin/kube-scheduler.sh
[root@192-168-199-13 ~]# mkdir -p /data/logs/kubernetes/kube-scheduler
Configure supervisord
[root@192-168-199-13 ~]# cat /etc/supervisord.d/kube-scheduler.ini
[program:kube-scheduler-199-13]
command=/opt/kubernetes/server/bin/kube-scheduler.sh ; the program (relative uses PATH, can take args)
numprocs=1 ; number of processes copies to start (def 1)
directory=/opt/kubernetes/server/bin ; directory to cwd to before exec (def no cwd)
autostart=true ; start at supervisord start (default: true)
autorestart=true ; restart at unexpected quit (default: true)
startsecs=30 ; number of secs prog must stay running (def. 1)
startretries=3 ; max # of serial start failures (default 3)
exitcodes=0,2 ; 'expected' exit codes for process (default 0,2)
stopsignal=QUIT ; signal used to kill process (default TERM)
stopwaitsecs=10 ; max num secs to wait b4 SIGKILL (default 10)
user=root ; setuid to this UNIX account to run the program
redirect_stderr=true ; redirect proc stderr to stdout (default false)
stdout_logfile=/data/logs/kubernetes/kube-scheduler/scheduler.stdout.log ; stdout log path, NONE for none; default AUTO
stdout_logfile_maxbytes=64MB ; max # logfile bytes b4 rotation (default 50MB)
stdout_logfile_backups=4 ; # of stdout logfile backups (default 10)
stdout_capture_maxbytes=1MB ; number of bytes in 'capturemode' (default 0)
stdout_events_enabled=false ; emit events on stdout writes (default false)
Note that the name inside [ ] differs on each host.
Update supervisord and check its status
[root@192-168-199-13 ~]# supervisorctl update
[root@192-168-199-13 ~]# supervisorctl status
etcd-server-199-13 RUNNING pid 4497, uptime 0:01:13
kube-apiserver-199-13 RUNNING pid 4500, uptime 0:01:13
kube-controller-manager-199-13 RUNNING pid 4499, uptime 0:01:13
kube-scheduler-199-13 RUNNING pid 4498, uptime 0:01:13
Create a symlink so that the kubectl command can be used directly
[root@192-168-199-13 ~]# ln -s /opt/kubernetes/server/bin/kubectl /usr/bin/kubectl
[root@192-168-199-13 ~]# which kubectl
/usr/bin/kubectl
Check the cluster status
[root@192-168-199-13 ~]# kubectl get cs
NAME STATUS MESSAGE ERROR
scheduler Healthy ok
controller-manager Healthy ok
etcd-1 Healthy {"health": "true"}
etcd-2 Healthy {"health": "true"}
etcd-0 Healthy {"health": "true"}
Deploying kubelet
Signing the certificate
Performed on 192.168.199.15.
Create the JSON config file for generating the certificate signing request (CSR)
[root@192-168-199-15 certs]# cat /opt/certs/kubelet-csr.json
{
    "CN": "k8s-kubelet",
    "hosts": [
        "127.0.0.1",
        "192.168.199.10",
        "192.168.199.13",
        "192.168.199.14",
        "192.168.199.16",
        "192.168.199.17",
        "192.168.199.18",
        "192.168.199.19",
        "192.168.199.20",
        "192.168.199.21"
    ],
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "ST": "beijing",
            "L": "beijing",
            "O": "od",
            "OU": "ops"
        }
    ]
}
hosts lists every host where kubelet might be installed; CIDR ranges are not supported.
Generate the certificate
[root@192-168-199-15 certs]# cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=server kubelet-csr.json | cfssl-json -bare kubelet
[root@192-168-199-15 certs]# ll kubelet*
-rw-r--r-- 1 root root 1115 Jun 2 00:10 kubelet.csr
-rw-r--r-- 1 root root 497 Jun 2 00:04 kubelet-csr.json
-rw------- 1 root root 1675 Jun 2 00:10 kubelet-key.pem
-rw-r--r-- 1 root root 1468 Jun 2 00:10 kubelet.pem
Deploy kubelet
Copy the certificates on 192.168.199.13
[root@192-168-199-13 ~]# cd /opt/kubernetes/server/bin/cert/
[root@192-168-199-13 cert]# scp 192.168.199.15:/opt/certs/kubelet.pem ./
[root@192-168-199-13 cert]# scp 192.168.199.15:/opt/certs/kubelet-key.pem ./
set-cluster: define the cluster entry; all traffic goes through the VIP, 192.168.199.10
[root@192-168-199-13 conf]# kubectl config set-cluster myk8s \
--certificate-authority=/opt/kubernetes/server/bin/cert/ca.pem \
--embed-certs=true \
--server=https://192.168.199.10:7443 \
--kubeconfig=kubelet.kubeconfig
Note the IP: it is the VIP, 192.168.199.10, on nginx's port 7443.
set-credentials
[root@192-168-199-13 conf]# kubectl config set-credentials k8s-node \
--client-certificate=/opt/kubernetes/server/bin/cert/client.pem \
--client-key=/opt/kubernetes/server/bin/cert/client-key.pem \
--embed-certs=true \
--kubeconfig=kubelet.kubeconfig
set-context
[root@192-168-199-13 conf]# kubectl config set-context myk8s-context \
--cluster=myk8s \
--user=k8s-node \
--kubeconfig=kubelet.kubeconfig
use-context
[root@192-168-199-13 conf]# kubectl config use-context myk8s-context --kubeconfig=kubelet.kubeconfig
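At this point kubelet.kubeconfig is complete. It can be inspected with the command below; because --embed-certs=true was used, the certificate data is shown redacted rather than as file paths:
[root@192-168-199-13 conf]# kubectl config view --kubeconfig=kubelet.kubeconfig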
Create the role binding
This step writes data into etcd, so it only needs to be executed once; the other nodes do not repeat it.
[root@192-168-199-13 conf]# cat /opt/kubernetes/server/bin/conf/k8s-node.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: k8s-node
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:node
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: User
  name: k8s-node
Apply the config
[root@192-168-199-13 conf]# kubectl create -f k8s-node.yaml
clusterrolebinding.rbac.authorization.k8s.io/k8s-node created
Verify
[root@192-168-199-13 conf]# kubectl get clusterrolebinding k8s-node -o yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  creationTimestamp: "2020-06-01T16:36:57Z"
  name: k8s-node
  resourceVersion: "28654"
  selfLink: /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/k8s-node
  uid: 8eb86a21-e85c-4289-bbe3-fe2671b6465c
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:node
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: User
  name: k8s-node
On 192.168.199.14
Copy the certificates
[root@192-168-199-14 ~]# cd /opt/kubernetes/server/bin/cert/
[root@192-168-199-14 cert]# scp 192.168.199.15:/opt/certs/kubelet.pem ./
[root@192-168-199-14 cert]# scp 192.168.199.15:/opt/certs/kubelet-key.pem ./
Copy the kubeconfig
[root@192-168-199-14 cert]# cd /opt/kubernetes/server/bin/conf/
[root@192-168-199-14 conf]# scp root@192.168.199.13:/opt/kubernetes/server/bin/conf/kubelet.kubeconfig ./
Prepare the pause base image
On 192.168.199.15, pull the pause base image and push it to the private registry.
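This assumes the private Harbor registry harbor.od.com (set up earlier in this series) is reachable and that Docker is already logged in to it; if not, log in first:
[root@192-168-199-15 ~]# docker login harbor.od.com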
[root@192-168-199-15 ~]# docker pull kubernetes/pause
[root@192-168-199-15 ~]# docker images | grep pause
kubernetes/pause latest f9d5de079539 5 years ago 240kB
[root@192-168-199-15 ~]# docker tag f9d5de079539 harbor.od.com/public/pause:latest
[root@192-168-199-15 ~]# docker push harbor.od.com/public/pause:latest
On 192.168.199.13, create the kubelet startup script
[root@192-168-199-13 conf]# cat /opt/kubernetes/server/bin/kubelet.sh
#!/bin/sh
# --anonymous-auth=false        deny anonymous access to the kubelet API
# --cgroup-driver systemd       must match the driver in /etc/docker/daemon.json
# --cluster-dns 192.168.0.2     coredns will use this IP later
# --fail-swap-on="false"        allow kubelet to start without disabling swap
# --hostname-override           this node's hostname
# --pod-infra-container-image   location of the pause image
./kubelet \
  --anonymous-auth=false \
  --cgroup-driver systemd \
  --cluster-dns 192.168.0.2 \
  --cluster-domain cluster.local \
  --runtime-cgroups=/systemd/system.slice \
  --kubelet-cgroups=/systemd/system.slice \
  --fail-swap-on="false" \
  --client-ca-file ./cert/ca.pem \
  --tls-cert-file ./cert/kubelet.pem \
  --tls-private-key-file ./cert/kubelet-key.pem \
  --hostname-override 192-168-199-13.host.com \
  --image-gc-high-threshold 20 \
  --image-gc-low-threshold 10 \
  --kubeconfig ./conf/kubelet.kubeconfig \
  --log-dir /data/logs/kubernetes/kube-kubelet \
  --pod-infra-container-image harbor.od.com/public/pause:latest \
  --root-dir /data/kubelet
Note the hostname in --hostname-override; it differs per host.
Make the script executable and create the corresponding directories
[root@192-168-199-13 conf]# mkdir -p /data/logs/kubernetes/kube-kubelet /data/kubelet
[root@192-168-199-13 conf]# chmod a+x /opt/kubernetes/server/bin/kubelet.sh
Configure supervisord
[root@192-168-199-13 conf]# cat /etc/supervisord.d/kube-kubelet.ini
[program:kube-kubelet-199-13]
command=/opt/kubernetes/server/bin/kubelet.sh ; the program (relative uses PATH, can take args)
numprocs=1 ; number of processes copies to start (def 1)
directory=/opt/kubernetes/server/bin ; directory to cwd to before exec (def no cwd)
autostart=true ; start at supervisord start (default: true)
autorestart=true ; restart at unexpected quit (default: true)
startsecs=30 ; number of secs prog must stay running (def. 1)
startretries=3 ; max # of serial start failures (default 3)
exitcodes=0,2 ; 'expected' exit codes for process (default 0,2)
stopsignal=QUIT ; signal used to kill process (default TERM)
stopwaitsecs=10 ; max num secs to wait b4 SIGKILL (default 10)
user=root ; setuid to this UNIX account to run the program
redirect_stderr=true ; redirect proc stderr to stdout (default false)
stdout_logfile=/data/logs/kubernetes/kube-kubelet/kubelet.stdout.log ; stdout log path, NONE for none; default AUTO
stdout_logfile_maxbytes=64MB ; max # logfile bytes b4 rotation (default 50MB)
stdout_logfile_backups=4 ; # of stdout logfile backups (default 10)
stdout_capture_maxbytes=1MB ; number of bytes in 'capturemode' (default 0)
stdout_events_enabled=false ; emit events on stdout writes (default false)
Update supervisord and check its status
[root@192-168-199-13 conf]# supervisorctl update
[root@192-168-199-13 conf]# supervisorctl status
etcd-server-199-13 RUNNING pid 4497, uptime 1 day, 1:40:00
kube-apiserver-199-13 RUNNING pid 4500, uptime 1 day, 1:40:00
kube-controller-manager-199-13 RUNNING pid 4817, uptime 4:12:46
kube-kubelet-199-13 RUNNING pid 5487, uptime 0:00:41
kube-scheduler-199-13 RUNNING pid 5144, uptime 1:22:14
Check the cluster state
[root@192-168-199-13 conf]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
192-168-199-13.host.com Ready <none> 94s v1.15.2
192-168-199-14.host.com Ready <none> 92s v1.15.2
The ROLES column above is empty. Adding ROLES labels is optional:
[root@192-168-199-13 conf]# kubectl label node 192-168-199-13.host.com node-role.kubernetes.io/master=
[root@192-168-199-13 conf]# kubectl label node 192-168-199-13.host.com node-role.kubernetes.io/node=
[root@192-168-199-13 conf]# kubectl label node 192-168-199-14.host.com node-role.kubernetes.io/master=
[root@192-168-199-13 conf]# kubectl label node 192-168-199-14.host.com node-role.kubernetes.io/node=
# check the cluster state again; ROLES is now populated
[root@192-168-199-13 conf]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
192-168-199-13.host.com Ready master,node 6m35s v1.15.2
192-168-199-14.host.com Ready master,node 6m33s v1.15.2
Deploying kube-proxy
kube-proxy connects the cluster (service) network with the pod network.
Signing the certificate
Issue the certificate on 192.168.199.15
[root@192-168-199-15 certs]# cat /opt/certs/kube-proxy-csr.json
{
    "CN": "system:kube-proxy",
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "ST": "beijing",
            "L": "beijing",
            "O": "od",
            "OU": "ops"
        }
    ]
}
[root@192-168-199-15 certs]# cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=client kube-proxy-csr.json |cfssl-json -bare kube-proxy-client
Copy the certificates on 192.168.199.13
[root@192-168-199-13 ~]# cd /opt/kubernetes/server/bin/cert
[root@192-168-199-13 cert]# scp root@192.168.199.15:/opt/certs/kube-proxy-client.pem ./
[root@192-168-199-13 cert]# scp root@192.168.199.15:/opt/certs/kube-proxy-client-key.pem ./
set-cluster
[root@192-168-199-13 ~]# cd /opt/kubernetes/server/bin/conf/
[root@192-168-199-13 conf]# kubectl config set-cluster myk8s \
--certificate-authority=/opt/kubernetes/server/bin/cert/ca.pem \
--embed-certs=true \
--server=https://192.168.199.10:7443 \
--kubeconfig=kube-proxy.kubeconfig
set-credentials
[root@192-168-199-13 conf]# kubectl config set-credentials kube-proxy \
--client-certificate=/opt/kubernetes/server/bin/cert/kube-proxy-client.pem \
--client-key=/opt/kubernetes/server/bin/cert/kube-proxy-client-key.pem \
--embed-certs=true \
--kubeconfig=kube-proxy.kubeconfig
set-context
[root@192-168-199-13 conf]# kubectl config set-context myk8s-context \
--cluster=myk8s \
--user=kube-proxy \
--kubeconfig=kube-proxy.kubeconfig
use-context
[root@192-168-199-13 conf]# kubectl config use-context myk8s-context --kubeconfig=kube-proxy.kubeconfig
On 192.168.199.14, it is enough to copy the kube-proxy.kubeconfig generated on 192.168.199.13.
[root@192-168-199-14 cert]# cd /opt/kubernetes/server/bin/conf/
[root@192-168-199-14 conf]# scp root@192.168.199.13:/opt/kubernetes/server/bin/conf/kube-proxy.kubeconfig ./
Loading the LVS kernel modules
Load the kernel's ipvs modules on 192.168.199.13 and 192.168.199.14.
[root@192-168-199-13 ~]# cat ip_vs.sh
#!/bin/bash
ipvs_mods_dir="/usr/lib/modules/$(uname -r)/kernel/net/netfilter/ipvs"
for i in $(ls $ipvs_mods_dir|grep -o "^[^.]*")
do
    /sbin/modinfo -F filename $i &>/dev/null
    if [ $? -eq 0 ];then
        /sbin/modprobe $i
    fi
done
Make the script executable and run it
[root@192-168-199-13 ~]# chmod a+x ip_vs.sh
[root@192-168-199-13 ~]# sh ip_vs.sh
Verify that the modules are loaded
[root@192-168-199-13 ~]# lsmod | grep ip_vs
ip_vs_wrr 12697 0
ip_vs_wlc 12519 0
ip_vs_sh 12688 0
ip_vs_sed 12519 0
ip_vs_rr 12600 0
ip_vs_pe_sip 12740 0
nf_conntrack_sip 33780 1 ip_vs_pe_sip
ip_vs_nq 12516 0
ip_vs_lc 12516 0
ip_vs_lblcr 12922 0
ip_vs_lblc 12819 0
ip_vs_ftp 13079 0
ip_vs_dh 12688 0
ip_vs 145497 24 ip_vs_dh,ip_vs_lc,ip_vs_nq,ip_vs_rr,ip_vs_sh,ip_vs_ftp,ip_vs_sed,ip_vs_wlc,ip_vs_wrr,ip_vs_pe_sip,ip_vs_lblcr,ip_vs_lblc
nf_nat 26583 3 ip_vs_ftp,nf_nat_ipv4,nf_nat_masquerade_ipv4
nf_conntrack 139264 8 ip_vs,nf_nat,nf_nat_ipv4,xt_conntrack,nf_nat_masquerade_ipv4,nf_conntrack_netlink,nf_conntrack_sip,nf_conntrack_ipv4
libcrc32c 12644 3 ip_vs,nf_nat,nf_conntrack
Create the startup script
[root@192-168-199-13 ~]# cat /opt/kubernetes/server/bin/kube-proxy.sh
#!/bin/sh
./kube-proxy \
--cluster-cidr 172.7.0.0/16 \
--hostname-override 192-168-199-13.host.com \
--proxy-mode=ipvs \
--ipvs-scheduler=nq \
--kubeconfig ./conf/kube-proxy.kubeconfig
Note the hostname; it differs per host.
Make the script executable and create the directory it references
[root@192-168-199-13 ~]# chmod a+x /opt/kubernetes/server/bin/kube-proxy.sh
[root@192-168-199-13 ~]# mkdir -p /data/logs/kubernetes/kube-proxy
Configure supervisord
[root@192-168-199-13 ~]# cat /etc/supervisord.d/kube-proxy.ini
[program:kube-proxy-199-13]
command=/opt/kubernetes/server/bin/kube-proxy.sh ; the program (relative uses PATH, can take args)
numprocs=1 ; number of processes copies to start (def 1)
directory=/opt/kubernetes/server/bin ; directory to cwd to before exec (def no cwd)
autostart=true ; start at supervisord start (default: true)
autorestart=true ; restart at unexpected quit (default: true)
startsecs=30 ; number of secs prog must stay running (def. 1)
startretries=3 ; max # of serial start failures (default 3)
exitcodes=0,2 ; 'expected' exit codes for process (default 0,2)
stopsignal=QUIT ; signal used to kill process (default TERM)
stopwaitsecs=10 ; max num secs to wait b4 SIGKILL (default 10)
user=root ; setuid to this UNIX account to run the program
redirect_stderr=true ; redirect proc stderr to stdout (default false)
stdout_logfile=/data/logs/kubernetes/kube-proxy/proxy.stdout.log ; stdout log path, NONE for none; default AUTO
stdout_logfile_maxbytes=64MB ; max # logfile bytes b4 rotation (default 50MB)
stdout_logfile_backups=4 ; # of stdout logfile backups (default 10)
stdout_capture_maxbytes=1MB ; number of bytes in 'capturemode' (default 0)
stdout_events_enabled=false ; emit events on stdout writes (default false)
Update supervisord and check the status
[root@192-168-199-13 ~]# supervisorctl update
[root@192-168-199-13 ~]# supervisorctl status
etcd-server-199-13 RUNNING pid 688, uptime 0:47:58
kube-apiserver-199-13 RUNNING pid 695, uptime 0:47:58
kube-controller-manager-199-13 RUNNING pid 693, uptime 0:47:58
kube-kubelet-199-13 RUNNING pid 691, uptime 0:47:58
kube-proxy-199-13 RUNNING pid 10633, uptime 0:00:59
kube-scheduler-199-13 RUNNING pid 689, uptime 0:47:58
Install the ipvsadm tool and inspect the ipvs rules
[root@192-168-199-13 ~]# yum install ipvsadm -y
[root@192-168-199-13 ~]# ipvsadm -Ln
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
-> RemoteAddress:Port Forward Weight ActiveConn InActConn
TCP 192.168.0.1:443 nq
-> 192.168.199.13:6443 Masq 1 0 0
-> 192.168.199.14:6443 Masq 1 0 0
[root@192-168-199-13 ~]# kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 192.168.0.1 <none> 443/TCP 2d7h
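The ipvs virtual server above is kube-proxy's realization of the kubernetes service. As a cross-check, the service's endpoints should list the same two apiserver backends, 192.168.199.13:6443 and 192.168.199.14:6443:
[root@192-168-199-13 ~]# kubectl get endpoints kubernetes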
Cluster verification
Create a DaemonSet named nginx-ds (its container is named my-nginx), with the following manifest:
[root@192-168-199-13 ~]# cat /root/nginx-ds.yaml
apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
  name: nginx-ds
spec:
  template:
    metadata:
      labels:
        app: nginx-ds
    spec:
      containers:
      - name: my-nginx
        image: harbor.od.com/public/nginx:v1.7.9
        ports:
        - containerPort: 80
Launch the pods from the YAML file and check their status
[root@192-168-199-13 ~]# kubectl create -f nginx-ds.yaml
[root@192-168-199-13 ~]# kubectl get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
nginx-ds-6z2cl 1/1 Running 0 27s 172.7.14.2 192-168-199-14.host.com <none> <none>
nginx-ds-ln87w 1/1 Running 0 27s 172.7.13.2 192-168-199-13.host.com
[root@192-168-199-13 ~]# curl 172.7.13.2
Check the overall cluster state
[root@192-168-199-13 ~]# kubectl get cs
NAME STATUS MESSAGE ERROR
scheduler Healthy ok
controller-manager Healthy ok
etcd-1 Healthy {"health": "true"}
etcd-2 Healthy {"health": "true"}
etcd-0 Healthy {"health": "true"}
[root@192-168-199-13 ~]# kubectl get node
NAME STATUS ROLES AGE VERSION
192-168-199-13.host.com Ready master,node 23h v1.15.2
192-168-199-14.host.com Ready master,node 23h v1.15.2
[root@192-168-199-13 ~]# kubectl get pods
NAME READY STATUS RESTARTS AGE
nginx-ds-6z2cl 1/1 Running 0 3m1s
nginx-ds-ln87w 1/1 Running 0 3m1s