Getting Started with Kubernetes (Part 2): Deploying a Kubernetes Cluster from Binary Packages

Kubernetes Platform Environment Planning

Here is my deployment environment. My host machine is a MacBook Pro, and I built the cluster environment with virtual machines in Parallels Desktop 14. I plan to deploy the following seven nodes: Master01, Master02, Node01, Node02, Load Balancer (Master), Load Balancer (Backup), and Registry.

That said, I did not build all of these nodes at the start; I only planned out the addresses they would need. The most important pieces of a cluster are the Masters and the Nodes, so deploying Master01, Master02, Node01, and Node02 is enough to begin with: four identical virtual machines with 2 CPU cores and 4 GB of RAM each, with their IP addresses set to 192.168.31.63, 192.168.31.64, 192.168.31.65, and 192.168.31.66 respectively (a sketch of how one of these static addresses might be configured follows below).
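
For reference, here is a minimal sketch of how one of those static addresses might be configured on a CentOS 7 VM. The interface name eth0 and the 192.168.31.1 gateway/DNS are assumptions; substitute whatever your own network uses:

# /etc/sysconfig/network-scripts/ifcfg-eth0  -- example for Master01 (192.168.31.63)
TYPE=Ethernet
BOOTPROTO=static          # pin the address instead of using DHCP
NAME=eth0
DEVICE=eth0               # assumption: the VM's NIC is named eth0
ONBOOT=yes
IPADDR=192.168.31.63
NETMASK=255.255.255.0
GATEWAY=192.168.31.1      # assumption: router address on this LAN
DNS1=192.168.31.1

# apply the change and give the node a recognizable hostname
systemctl restart network
hostnamectl set-hostname master01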


Below is a rough diagram of the nodes I deployed (there is no Node03 yet; ignore it).
[Figure: cluster node topology]

Deploying the etcd Database Cluster

Download the etcd binary package

• Binary packages can be downloaded from https://github.com/etcd-io/etcd/releases
The latest release at the time was 3.3.10, so I downloaded etcd-v3.3.10-linux-amd64.tar.gz.

Self-signed SSL certificates

These are the certificates required by the various components; they are used for authentication between them. It is enough to know that much for now; if the details are not clear yet, just follow along with the deployment.
[Figure: certificates required by each Kubernetes component]
First, to be clear: the Master01 node acts as the control node and generates the certificates.
The following operations are performed on 192.168.31.63 (the Master01 node):
Create a k8s directory, which will hold everything related to our Kubernetes deployment. Then run the three curl commands below to download the cfssl tools used to generate the SSL certificates, and chmod +x them so they can be executed directly (without prefixing the command with bash).

[root@localhost ~] mkdir k8s
[root@localhost ~] cd k8s/
[root@localhost k8s] curl -L https://pkg.cfssl.org/R1.2/cfssl_linux-amd64 -o /usr/local/bin/cfssl
[root@localhost k8s] curl -L https://pkg.cfssl.org/R1.2/cfssljson_linux-amd64 -o /usr/local/bin/cfssljson
[root@localhost k8s] curl -L https://pkg.cfssl.org/R1.2/cfssl-certinfo_linux-amd64 -o /usr/local/bin/cfssl-certinfo
[root@localhost k8s] chmod +x /usr/local/bin/cfssl /usr/local/bin/cfssljson /usr/local/bin/cfssl-certinfo

Check whether the downloaded files are in our bin (executable) directory; they are. A quick note about /usr/local/bin: any executable placed there is on the PATH, so, for example, the cfssl file below can be run as the command cfssl from any directory in a terminal. A simple way to confirm that a command is available: if no other command starts with cfs, typing cfs and pressing Tab will auto-complete it.

[root@localhost k8s] ls /usr/local/bin
cfssl  cfssl-certinfo  cfssljson
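
An optional sanity check that the three tools really are on the PATH (nothing in the walkthrough depends on this, it is just a quick confirmation):

command -v cfssl cfssljson cfssl-certinfo
# expected: /usr/local/bin/cfssl, /usr/local/bin/cfssljson, /usr/local/bin/cfssl-certinfo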

Create an etcd-cert directory inside k8s for generating and storing the etcd SSL certificates, then generate the CA first:

[root@localhost k8s] mkdir etcd-cert
[root@localhost k8s] cd etcd-cert/
[root@localhost etcd-cert] cat > ca-config.json <<EOF
{
  "signing": {
    "default": {
      "expiry": "87600h"
    },
    "profiles": {
      "www": {
         "expiry": "87600h",
         "usages": [
            "signing",
            "key encipherment",
            "server auth",
            "client auth"
        ]
      }
    }
  }
}
EOF
[root@localhost etcd-cert] cat > ca-csr.json <<EOF
{
    "CN": "etcd CA",
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "L": "Beijing",
            "ST": "Beijing"
        }
    ]
}
EOF
[root@localhost etcd-cert] cfssl gencert -initca ca-csr.json | cfssljson -bare ca -

The terminal output from the commands above:

2018/11/26 20:21:53 [INFO] generating a new CA key and certificate from CSR
2018/11/26 20:21:53 [INFO] generate received request
2018/11/26 20:21:53 [INFO] received CSR
2018/11/26 20:21:53 [INFO] generating key: rsa-2048
2018/11/26 20:21:54 [INFO] encoded CSR
2018/11/26 20:21:54 [INFO] signed certificate with serial number 518030623650953464658830991894694076650382457916

This generates the CA certificate and key (ca.pem and ca-key.pem).
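
If you want to see what was just issued, cfssl-certinfo (downloaded above) can dump the certificate fields; this step is optional:

cfssl-certinfo -cert ca.pem
# prints the subject (CN = "etcd CA"), the validity window and other fields as JSON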

[root@localhost etcd-cert] cat > server-csr.json <<EOF
{
    "CN": "etcd",
    "hosts": [
    "192.168.31.63",
    "192.168.31.64",
    "192.168.31.65"
    ],
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "L": "BeiJing",
            "ST": "BeiJing"
        }
    ]
}
EOF
[root@localhost etcd-cert] cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=www server-csr.json | cfssljson -bare server

The terminal output from the command above:

2018/11/26 20:21:54 [INFO] generate received request
2018/11/26 20:21:54 [INFO] received CSR
2018/11/26 20:21:54 [INFO] generating key: rsa-2048
2018/11/26 20:21:54 [INFO] encoded CSR
2018/11/26 20:21:54 [INFO] signed certificate with serial number 61443133720389521594087555069648927361711958383
2018/11/26 20:21:54 [WARNING] This certificate lacks a "hosts" field. This makes it unsuitable for
websites. For more information see the Baseline Requirements for the Issuance and Management
of Publicly-Trusted Certificates, v.1.1.6, from the CA/Browser Forum (https://cabforum.org);
specifically, section 10.2.3 ("Information Requirements").

All of the server certificates have now been generated; let's check:

[root@localhost etcd-cert]  ls
ca-config.json  ca.csr  ca-csr.json  ca-key.pem  ca.pem  cert.sh  server.csr  server-csr.json  server-key.pem  server.pem
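
It is worth making sure the three etcd node IPs actually ended up in the server certificate's SAN list, because etcd will refuse TLS connections from addresses that are missing there. A quick check with openssl (installed by default on CentOS 7):

openssl x509 -in server.pem -noout -text | grep -A1 "Subject Alternative Name"
# should show IP Address:192.168.31.63, 192.168.31.64 and 192.168.31.65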

Disable SELinux and the firewall

Perform the following on every node (192.168.31.63, 192.168.31.64, 192.168.31.65, 192.168.31.66). Before deploying anything else, disable SELinux and the firewall, then reboot so the SELinux change takes effect.

[root@localhost etcd-cert] yum install vim
[root@localhost etcd-cert] vim /etc/selinux/config
# This file controls the state of SELinux on the system.                                                                                         
# SELINUX= can take one of these three values:                                                                                                   
#     enforcing - SELinux security policy is enforced.                                                                                           
#     permissive - SELinux prints warnings instead of enforcing.                                                                                 
#     disabled - No SELinux policy is loaded.                                                                                                    
SELINUX=disabled                                                                                                                                 
# SELINUXTYPE= can take one of three two values:                                                                                                 
#     targeted - Targeted processes are protected,                                                                                               
#     minimum - Modification of targeted policy. Only selected processes are protected.                                                          
#     mls - Multi Level Security protection.                                                                                                     
SELINUXTYPE=targeted 
[root@localhost etcd-cert] systemctl stop firewalld
[root@localhost etcd-cert] systemctl disable firewalld
Removed symlink /etc/systemd/system/multi-user.target.wants/firewalld.service.
Removed symlink /etc/systemd/system/dbus-org.fedoraproject.FirewallD1.service. 
[root@localhost etcd-cert] reboot
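
If you would rather script this than edit the file by hand, the following is an equivalent sketch; setenforce 0 additionally switches SELinux to permissive mode for the current boot so you can keep working until the reboot:

sed -i 's/^SELINUX=.*/SELINUX=disabled/' /etc/selinux/config
setenforce 0                      # permissive immediately; the config change applies after reboot
systemctl stop firewalld && systemctl disable firewalld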

Deploying the lead etcd member on Master01

Perform the following on Master01 (192.168.31.63):

Extract the binary package:

[root@localhost ~] cd k8s/
[root@localhost k8s] ls
cfssl.sh  etcd-cert  etcd-v3.3.10-linux-amd64.tar.gz
[root@localhost k8s] tar zxf etcd-v3.3.10-linux-amd64.tar.gz
[root@localhost k8s] ls
cfssl.sh  etcd-cert  etcd-v3.3.10-linux-amd64  etcd-v3.3.10-linux-amd64.tar

Create three directories under /opt/etcd/: bin for the etcd executables, cfg for the configuration files, and ssl for the certificates:

[root@localhost k8s] mkdir /opt/etcd/{bin,cfg,ssl} -p
[root@localhost k8s] ls /opt/etcd/
bin  cfg  ssl

Move the two executables, etcd and etcdctl, into the bin directory we just created:

[root@localhost k8s] cd etcd-v3.3.10-linux-amd64/
[root@localhost etcd-v3.3.10-linux-amd64] mv etcdctl etcd /opt/etcd/bin/
[root@localhost etcd-v3.3.10-linux-amd64] ls /opt/etcd/bin/
etcd  etcdctl

Create a script to deploy etcd:

[root@localhost etcd-v3.3.10-linux-amd64] cd ../
[root@localhost k8s] vim etcd.sh
# The entire script content follows

#!/bin/bash
# example: ./etcd.sh etcd01 192.168.1.10 etcd02=https://192.168.1.11:2380,etcd03=https://192.168.1.12:2380

ETCD_NAME=$1
ETCD_IP=$2
ETCD_CLUSTER=$3

WORK_DIR=/opt/etcd

cat <<EOF >$WORK_DIR/cfg/etcd
#[Member]
ETCD_NAME="${ETCD_NAME}"
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="https://${ETCD_IP}:2380"
ETCD_LISTEN_CLIENT_URLS="https://${ETCD_IP}:2379"

#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://${ETCD_IP}:2380"
ETCD_ADVERTISE_CLIENT_URLS="https://${ETCD_IP}:2379"
ETCD_INITIAL_CLUSTER="etcd01=https://${ETCD_IP}:2380,${ETCD_CLUSTER}"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"
EOF

cat <<EOF >/usr/lib/systemd/system/etcd.service
[Unit]
Description=Etcd Server
After=network.target
After=network-online.target
Wants=network-online.target

[Service]
Type=notify
EnvironmentFile=${WORK_DIR}/cfg/etcd
ExecStart=${WORK_DIR}/bin/etcd \
--name=\${ETCD_NAME} \
--data-dir=\${ETCD_DATA_DIR} \
--listen-peer-urls=\${ETCD_LISTEN_PEER_URLS} \
--listen-client-urls=\${ETCD_LISTEN_CLIENT_URLS},http://127.0.0.1:2379 \
--advertise-client-urls=\${ETCD_ADVERTISE_CLIENT_URLS} \
--initial-advertise-peer-urls=\${ETCD_INITIAL_ADVERTISE_PEER_URLS} \
--initial-cluster=\${ETCD_INITIAL_CLUSTER} \
--initial-cluster-token=\${ETCD_INITIAL_CLUSTER_TOKEN} \
--initial-cluster-state=new \
--cert-file=${WORK_DIR}/ssl/server.pem \
--key-file=${WORK_DIR}/ssl/server-key.pem \
--peer-cert-file=${WORK_DIR}/ssl/server.pem \
--peer-key-file=${WORK_DIR}/ssl/server-key.pem \
--trusted-ca-file=${WORK_DIR}/ssl/ca.pem \
--peer-trusted-ca-file=${WORK_DIR}/ssl/ca.pem
Restart=on-failure
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
EOF

systemctl daemon-reload
systemctl enable etcd
systemctl restart etcd

Now let's actually deploy etcd. The first argument, etcd01, is this node, the lead etcd member; the remaining argument lists the other members of the cluster (etcd02 and etcd03).

[root@localhost k8s] ls
cfssl.sh  etcd-cert  etcd.sh  etcd-v3.3.10-linux-amd64  etcd-v3.3.10-linux-amd64.tar
[root@localhost k8s] chmod +x etcd.sh 
[root@localhost k8s] ./etcd.sh etcd01 192.168.31.63 etcd02=https://192.168.31.64:2380,etcd03=https://192.168.31.65:2380
Created symlink from /etc/systemd/system/multi-user.target.wants/etcd.service to /usr/lib/systemd/system/etcd.service.
Job for etcd.service failed because the control process exited with error code. See "systemctl status etcd.service" and "journalctl -xe" for details.

The command above may end with a failure message like the one shown; don't panic. It happens because the etcd service has not been started yet on the Master02 and Node01 nodes (the other etcd members), so this member cannot form a quorum. Once those services are up, everything will recover.

Copy the etcd certificates into the ssl directory under /opt/etcd as well:

[root@localhost k8s] cp etcd-cert/ca*pem /opt/etcd/ssl/
[root@localhost k8s] cp etcd-cert/server*pem /opt/etcd/ssl/
[root@localhost k8s] systemctl start etcd
[root@localhost k8s] ps -ef |grep etcd
root      1816     1  0 13:59 ?        00:00:00 /opt/etcd/bin/etcd --name=etcd01 --data-dir=/var/lib/etcd/default.etcd --listen-peer-urls=https://192.168.31.63:2380 --listen-client-urls=https://192.168.31.63:2379,http://127.0.0.1:2379 --advertise-client-urls=https://192.168.31.63:2379 --initial-advertise-peer-urls=https://192.168.31.63:2380 --initial-cluster=etcd01=https://192.168.31.63:2380,etcd02=https://192.168.31.64:2380,etcd03=https://192.168.31.65:2380 --initial-cluster-token=etcd-cluster --initial-cluster-state=new --cert-file=/opt/etcd/ssl/server.pem --key-file=/opt/etcd/ssl/server-key.pem --peer-cert-file=/opt/etcd/ssl/server.pem --peer-key-file=/opt/etcd/ssl/server-key.pem --trusted-ca-file=/opt/etcd/ssl/ca.pem --peer-trusted-ca-file=/opt/etcd/ssl/ca.pem
root      1825  1422  0 13:59 pts/0    00:00:00 grep --color=auto etcd

[root@localhost k8s] systemctl status etcd
● etcd.service - Etcd Server
   Loaded: loaded (/usr/lib/systemd/system/etcd.service; enabled; vendor preset: disabled)
   Active: activating (start) since Tue 2018-11-27 14:00:41 CST; 18s ago
 Main PID: 1828 (etcd)
   CGroup: /system.slice/etcd.service
           └─1828 /opt/etcd/bin/etcd --name=etcd01 --data-dir=/var/lib/etcd/default.etcd --listen-peer-urls=https://192.168.31.63:2380 --listen-client-urls=https:...

Nov 27 14:00:57 localhost.localdomain etcd[1828]: b46624837acedac9 is starting a new election at term 194
Nov 27 14:00:57 localhost.localdomain etcd[1828]: b46624837acedac9 became candidate at term 195
Nov 27 14:00:57 localhost.localdomain etcd[1828]: b46624837acedac9 received MsgVoteResp from b46624837acedac9 at term 195
Nov 27 14:00:57 localhost.localdomain etcd[1828]: b46624837acedac9 [logterm: 1, index: 3] sent MsgVote request to 259236946dea847e at term 195
Nov 27 14:00:57 localhost.localdomain etcd[1828]: b46624837acedac9 [logterm: 1, index: 3] sent MsgVote request to fd9073b56d4868cb at term 195
Nov 27 14:00:58 localhost.localdomain etcd[1828]: b46624837acedac9 is starting a new election at term 195
Nov 27 14:00:58 localhost.localdomain etcd[1828]: b46624837acedac9 became candidate at term 196
Nov 27 14:00:58 localhost.localdomain etcd[1828]: b46624837acedac9 received MsgVoteResp from b46624837acedac9 at term 196
Nov 27 14:00:58 localhost.localdomain etcd[1828]: b46624837acedac9 [logterm: 1, index: 3] sent MsgVote request to 259236946dea847e at term 196
Nov 27 14:00:58 localhost.localdomain etcd[1828]: b46624837acedac9 [logterm: 1, index: 3] sent MsgVote request to fd9073b56d4868cb at term 196

As you can see, the etcd service is stuck in the activating state because the other two members are not running yet, so let's go deploy etcd on those two nodes.

First copy the etcd executable directory (bin), configuration directory (cfg), and certificate directory (ssl) under /opt to the 192.168.31.64 (Master02) and 192.168.31.65 (Node01) nodes:

[root@localhost k8s] scp -r /opt/etcd/ root@192.168.31.64:/opt
[root@localhost k8s] scp -r /opt/etcd/ root@192.168.31.65:/opt
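
One thing the log above does not show: /usr/lib/systemd/system/etcd.service also has to exist on the other two nodes before systemctl can manage etcd there. Since etcd.sh is only run on Master01, copy the unit file over as well (an assumption based on the systemctl calls made on those nodes later):

scp /usr/lib/systemd/system/etcd.service root@192.168.31.64:/usr/lib/systemd/system/
scp /usr/lib/systemd/system/etcd.service root@192.168.31.65:/usr/lib/systemd/system/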

Deploying the etcd member on Master02

Perform the following on 192.168.31.64 (Master02):
Update the configuration with Master02's details:

[root@localhost ~] yum install vim
[root@localhost ~] vim /opt/etcd/cfg/etcd 
#[Member]                                                                                                                                        
ETCD_NAME="etcd02"                                                                                                                               
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="https://192.168.31.64:2380"
ETCD_LISTEN_CLIENT_URLS="https://192.168.31.64:2379"

#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://192.168.31.64:2380"
ETCD_ADVERTISE_CLIENT_URLS="https://192.168.31.64:2379"
ETCD_INITIAL_CLUSTER="etcd01=https://192.168.31.63:2380,etcd02=https://192.168.31.64:2380,etcd03=https://192.168.31.65:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"

Reload the configuration and start the etcd service

[root@localhost ~] systemctl daemon-reload
[root@localhost ~] systemctl start etcd
[root@localhost ~] systemctl status etcd
● etcd.service - Etcd Server
   Loaded: loaded (/usr/lib/systemd/system/etcd.service; enabled; vendor preset: disabled)
   Active: active (running) since Tue 2018-11-27 15:57:24 CST; 18s ago
 Main PID: 11845 (etcd)
   CGroup: /system.slice/etcd.service
           └─11845 /opt/etcd/bin/etcd --name=etcd02 --data-dir=/var/lib/etcd/default.etcd --listen-peer-urls=https://192.168.31.64:2380 --l...

Nov 27 15:57:24 localhost.localdomain etcd[11845]: enabled capabilities for version 3.0
Nov 27 15:57:24 localhost.localdomain etcd[11845]: published {Name:etcd02 ClientURLs:[https://192.168.31.64:2379]} to cluster 2e3f83ba1dc6bfc
Nov 27 15:57:24 localhost.localdomain etcd[11845]: ready to serve client requests
Nov 27 15:57:24 localhost.localdomain etcd[11845]: serving insecure client requests on 127.0.0.1:2379, this is strongly discouraged!
Nov 27 15:57:24 localhost.localdomain etcd[11845]: ready to serve client requests
Nov 27 15:57:24 localhost.localdomain systemd[1]: Started Etcd Server.
Nov 27 15:57:24 localhost.localdomain etcd[11845]: serving client requests on 192.168.31.64:2379
Nov 27 15:57:24 localhost.localdomain etcd[11845]: 259236946dea847e initialzed peer connection; fast-forwarding 8 ticks (election ti...peer(s)
Nov 27 15:57:24 localhost.localdomain etcd[11845]: updated the cluster version from 3.0 to 3.3
Nov 27 15:57:24 localhost.localdomain etcd[11845]: enabled capabilities for version 3.3
Hint: Some lines were ellipsized, use -l to show in full.

The status now shows active, confirming that etcd is up on the .64 node.

Deploying the etcd member on Node01

Perform the following on 192.168.31.65 (Node01):
Update the configuration with Node01's details:

[root@localhost ~] yum install vim
[root@localhost ~] vim /opt/etcd/cfg/etcd 
#[Member]                                                                                                                                        
ETCD_NAME="etcd03"                                                                                                                               
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="https://192.168.31.65:2380"
ETCD_LISTEN_CLIENT_URLS="https://192.168.31.65:2379"

#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://192.168.31.65:2380"
ETCD_ADVERTISE_CLIENT_URLS="https://192.168.31.65:2379"
ETCD_INITIAL_CLUSTER="etcd01=https://192.168.31.63:2380,etcd02=https://192.168.31.64:2380,etcd03=https://192.168.31.65:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"

Reload the configuration and start the etcd service

[root@localhost ~] systemctl daemon-reload
[root@localhost ~] systemctl start etcd
[root@localhost ~] systemctl status etcd
● etcd.service - Etcd Server
   Loaded: loaded (/usr/lib/systemd/system/etcd.service; disabled; vendor preset: disabled)
   Active: active (running) since Tue 2018-11-27 15:52:20 CST; 10s ago
 Main PID: 11203 (etcd)
   CGroup: /system.slice/etcd.service
           └─11203 /opt/etcd/bin/etcd --name=etcd03 --data-dir=/var/lib/etcd/default.etcd --listen-peer-urls=https:/...

Nov 27 15:52:20 localhost.localdomain etcd[11203]: serving insecure client requests on 127.0.0.1:2379, this is s...ged!
Nov 27 15:52:20 localhost.localdomain etcd[11203]: ready to serve client requests
Nov 27 15:52:20 localhost.localdomain etcd[11203]: serving client requests on 192.168.31.65:2379
Nov 27 15:52:20 localhost.localdomain systemd[1]: Started Etcd Server.
Nov 27 15:52:20 localhost.localdomain etcd[11203]: set the initial cluster version to 3.0
Nov 27 15:52:20 localhost.localdomain etcd[11203]: enabled capabilities for version 3.0
Nov 27 15:52:24 localhost.localdomain etcd[11203]: health check for peer 259236946dea847e could not connect: dia...OT")
Nov 27 15:52:24 localhost.localdomain etcd[11203]: health check for peer 259236946dea847e could not connect: dia...GE")
Nov 27 15:52:29 localhost.localdomain etcd[11203]: health check for peer 259236946dea847e could not connect: dia...OT")
Nov 27 15:52:29 localhost.localdomain etcd[11203]: health check for peer 259236946dea847e could not connect: dia...GE")
Hint: Some lines were ellipsized, use -l to show in full.

The status now shows active, confirming that etcd is up on the .65 node.

Restart the etcd service on Master01

[root@localhost k8s] systemctl daemon-reload
[root@localhost k8s] systemctl enable etcd
[root@localhost k8s] systemctl restart etcd
[root@localhost k8s] systemctl status etcd
● etcd.service - Etcd Server
   Loaded: loaded (/usr/lib/systemd/system/etcd.service; enabled; vendor preset: disabled)
   Active: active (running) since Tue 2018-11-27 15:52:21 CST; 5min ago
 Main PID: 2667 (etcd)
   CGroup: /system.slice/etcd.service
           └─2667 /opt/etcd/bin/etcd --name=etcd01 --data-dir=/var/lib/etcd/default.etcd --listen-peer-urls=https://192.168.31.63:2380 --listen-client-urls=https:...

Nov 27 15:57:24 localhost.localdomain etcd[2667]: peer 259236946dea847e became active
Nov 27 15:57:24 localhost.localdomain etcd[2667]: established a TCP streaming connection with peer 259236946dea847e (stream MsgApp v2 writer)
Nov 27 15:57:24 localhost.localdomain etcd[2667]: established a TCP streaming connection with peer 259236946dea847e (stream Message reader)
Nov 27 15:57:24 localhost.localdomain etcd[2667]: established a TCP streaming connection with peer 259236946dea847e (stream Message writer)
Nov 27 15:57:24 localhost.localdomain etcd[2667]: established a TCP streaming connection with peer 259236946dea847e (stream MsgApp v2 reader)
Nov 27 15:57:25 localhost.localdomain etcd[2667]: updating the cluster version from 3.0 to 3.3
Nov 27 15:57:25 localhost.localdomain etcd[2667]: updated the cluster version from 3.0 to 3.3
Nov 27 15:57:25 localhost.localdomain etcd[2667]: enabled capabilities for version 3.3
Hint: Some lines were ellipsized, use -l to show in full.

At this point, the etcd cluster is fully deployed.
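
As a final check, etcdctl can report the health of all three members. The flags below use the v2 etcdctl interface, which is the default in etcd 3.3:

/opt/etcd/bin/etcdctl \
  --ca-file=/opt/etcd/ssl/ca.pem \
  --cert-file=/opt/etcd/ssl/server.pem \
  --key-file=/opt/etcd/ssl/server-key.pem \
  --endpoints="https://192.168.31.63:2379,https://192.168.31.64:2379,https://192.168.31.65:2379" \
  cluster-health
# expected: "member ... is healthy" for all three members, then "cluster is healthy"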

Deploying Docker

Perform the following on each of 192.168.31.65 (Node01) and 192.168.31.66 (Node02):

[root@localhost k8s] sudo yum install -y yum-utils \
device-mapper-persistent-data \
lvm2
[root@localhost k8s] sudo yum-config-manager \
--add-repo \
https://download.docker.com/linux/centos/docker-ce.repo
[root@localhost k8s] sudo yum install docker-ce
[root@localhost k8s] systemctl start docker
[root@localhost k8s] systemctl enable docker
[root@localhost k8s] docker version
Client:
 Version:           18.09.0
 API version:       1.39
 Go version:        go1.10.4
 Git commit:        4d60db4
 Built:             Wed Nov  7 00:48:22 2018
 OS/Arch:           linux/amd64
 Experimental:      false

Server: Docker Engine - Community
 Engine:
  Version:          18.09.0
  API version:      1.39 (minimum version 1.12)
  Go version:       go1.10.4
  Git commit:       4d60db4
  Built:            Wed Nov  7 00:19:08 2018
  OS/Arch:          linux/amd64
  Experimental:     false
[root@localhost k8s] mkdir -p /etc/docker
[root@localhost k8s] tee /etc/docker/daemon.json <<-'EOF'
{
 "registry-mirrors": ["https://a5aghnme.mirror.aliyuncs.com"]
}
EOF
[root@localhost k8s] systemctl daemon-reload && systemctl restart docker

Pull an Nginx image to test Docker:

[root@localhost k8s] docker run -it nginx
Unable to find image 'nginx:latest' locally
latest: Pulling from library/nginx
a5a6f2f73cd8: Pull complete 
67da5fbcb7a0: Pull complete 
e82455fa5628: Pull complete 
Digest: sha256:31b8e90a349d1fce7621f5a5a08e4fc519b634f7d3feb09d53fac9b12aa4d991
Status: Downloaded newer image for nginx:latest

OK, done.

Deploying Flannel

Flannel provides a virtual overlay network: every Linux host running Flannel is assigned a non-overlapping virtual subnet out of one shared address space. For example, if the Flannel instance on 192.168.31.65 (Node01) is assigned the subnet 172.16.38.0, then the instance on 192.168.31.66 (Node02) might be assigned 172.16.99.0. Now suppose Docker runs a Tomcat container: the container has its own network namespace with a virtual network interface (it shares the host's kernel, but its network stack is isolated), and that interface needs an IP address. Where does the address come from? Normally Docker assigns it automatically, but once Flannel is set up and Docker is told to use the Flannel-managed range for address allocation, containers get their IPs from the Flannel-assigned subnet.
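
One prerequisite the log below does not show: flannel v0.10 reads the overall pod network range from etcd (under the /coreos.com/network key by default) and carves a per-host /24 out of it, so that key has to be written before flanneld can start. A sketch of that write, assuming a 172.17.0.0/16 pool (which matches the 172.17.67.0/24 and 172.17.99.0/24 subnets that appear later) and the VXLAN backend:

/opt/etcd/bin/etcdctl \
  --ca-file=/opt/etcd/ssl/ca.pem \
  --cert-file=/opt/etcd/ssl/server.pem \
  --key-file=/opt/etcd/ssl/server-key.pem \
  --endpoints="https://192.168.31.63:2379,https://192.168.31.64:2379,https://192.168.31.65:2379" \
  set /coreos.com/network/config '{ "Network": "172.17.0.0/16", "Backend": {"Type": "vxlan"}}'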

Perform the following on 192.168.31.65 (Node01):

Create the kubernetes directories:

[root@localhost k8s] mkdir -p /opt/kubernetes/{bin,cfg,ssl}
[root@localhost k8s] ls /opt/kubernetes/
bin  cfg  ssl
[root@localhost k8s] cd ..

Create a script to set up Flannel:

[root@localhost ~] vim flannel.sh
## Write the following into the script
#!/bin/bash

ETCD_ENDPOINTS=${1:-"http://127.0.0.1:2379"}

cat <<EOF >/opt/kubernetes/cfg/flanneld

FLANNEL_OPTIONS="--etcd-endpoints=${ETCD_ENDPOINTS} \
-etcd-cafile=/opt/etcd/ssl/ca.pem \
-etcd-certfile=/opt/etcd/ssl/server.pem \
-etcd-keyfile=/opt/etcd/ssl/server-key.pem"

EOF

cat <<EOF >/usr/lib/systemd/system/flanneld.service
[Unit]
Description=Flanneld overlay address etcd agent
After=network-online.target network.target
Before=docker.service

[Service]
Type=notify
EnvironmentFile=/opt/kubernetes/cfg/flanneld
ExecStart=/opt/kubernetes/bin/flanneld --ip-masq \$FLANNEL_OPTIONS
ExecStartPost=/opt/kubernetes/bin/mk-docker-opts.sh -k DOCKER_NETWORK_OPTIONS -d /run/flannel/subnet.env
Restart=on-failure

[Install]
WantedBy=multi-user.target

EOF

cat <<EOF >/usr/lib/systemd/system/docker.service

[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
After=network-online.target firewalld.service
Wants=network-online.target

[Service]
Type=notify
EnvironmentFile=/run/flannel/subnet.env
ExecStart=/usr/bin/dockerd \$DOCKER_NETWORK_OPTIONS
ExecReload=/bin/kill -s HUP \$MAINPID
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
TimeoutStartSec=0
Delegate=yes
KillMode=process
Restart=on-failure
StartLimitBurst=3
StartLimitInterval=60s

[Install]
WantedBy=multi-user.target

EOF

systemctl daemon-reload
systemctl enable flanneld
systemctl restart flanneld
systemctl restart docker

Run the script, extract the Flannel release package, and move the flanneld and mk-docker-opts.sh executables into the kubernetes bin directory:

[root@localhost ~] bash flannel.sh
[root@localhost ~] tar zxf flannel-v0.10.0-linux-amd64.tar.gz 
[root@localhost ~] mv flanneld mk-docker-opts.sh /opt/kubernetes/bin/

Then update the flanneld config file under cfg with the real addresses of the etcd cluster nodes:

[root@localhost ~] cat <<EOF >/opt/kubernetes/cfg/flanneld
FLANNEL_OPTIONS="--etcd-endpoints=https://192.168.31.63:2379,https://192.168.31.64:2379,https://192.168.31.65:2379  \
-etcd-cafile=/opt/etcd/ssl/ca.pem \
-etcd-certfile=/opt/etcd/ssl/server.pem \
-etcd-keyfile=/opt/etcd/ssl/server-key.pem"
EOF

With the configuration in place, start Flannel:

[root@localhost ~] systemctl start flanneld
[root@localhost ~] systemctl status flanneld
● flanneld.service - Flanneld overlay address etcd agent
   Loaded: loaded (/usr/lib/systemd/system/flanneld.service; disabled; vendor preset: disabled)
   Active: active (running) since Tue 2018-11-27 17:38:03 CST; 8s ago
  Process: 11960 ExecStartPost=/opt/kubernetes/bin/mk-docker-opts.sh -k DOCKER_NETWORK_OPTIONS -d /run/flannel/subnet.env (code=exited, status=0/SUCCESS)
 Main PID: 11947 (flanneld)
    Tasks: 8
   Memory: 8.5M
   CGroup: /system.slice/flanneld.service
           └─11947 /opt/kubernetes/bin/flanneld --ip-masq --etcd-endpoints=https://192.168.31.63:2379,https://192.168.31.64...

Nov 27 17:38:03 localhost.localdomain flanneld[11947]: I1127 17:38:03.857860   11947 iptables.go:125] Adding iptables ...CCEPT
Nov 27 17:38:03 localhost.localdomain flanneld[11947]: I1127 17:38:03.858804   11947 iptables.go:137] Deleting iptable...ERADE
Nov 27 17:38:03 localhost.localdomain flanneld[11947]: I1127 17:38:03.861689   11947 iptables.go:137] Deleting iptable...ETURN
Nov 27 17:38:03 localhost.localdomain flanneld[11947]: I1127 17:38:03.863337   11947 iptables.go:125] Adding iptables ...CCEPT
Nov 27 17:38:03 localhost.localdomain systemd[1]: Started Flanneld overlay address etcd agent.
Nov 27 17:38:03 localhost.localdomain flanneld[11947]: I1127 17:38:03.865240   11947 iptables.go:137] Deleting iptable...ERADE
Nov 27 17:38:03 localhost.localdomain flanneld[11947]: I1127 17:38:03.867588   11947 iptables.go:125] Adding iptables ...ETURN
Nov 27 17:38:03 localhost.localdomain flanneld[11947]: I1127 17:38:03.869391   11947 iptables.go:125] Adding iptables ...ERADE
Nov 27 17:38:03 localhost.localdomain flanneld[11947]: I1127 17:38:03.871108   11947 iptables.go:125] Adding iptables ...ETURN
Nov 27 17:38:03 localhost.localdomain flanneld[11947]: I1127 17:38:03.872787   11947 iptables.go:125] Adding iptables ...ERADE
Hint: Some lines were ellipsized, use -l to show in full.

Check the subnet and gateway that flanneld assigned to this host:

[root@localhost ~] cat /run/flannel/subnet.env 
DOCKER_OPT_BIP="--bip=172.17.67.1/24"
DOCKER_OPT_IPMASQ="--ip-masq=false"
DOCKER_OPT_MTU="--mtu=1450"
DOCKER_NETWORK_OPTIONS=" --bip=172.17.67.1/24 --ip-masq=false --mtu=1450"
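
The node should also now have a flannel.1 VXLAN interface carrying an address from that subnet; an optional check:

ip -4 addr show flannel.1
# expect an address from this node's flannel subnet (172.17.67.0/24 in the log above)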

Now make Docker allocate container addresses from the Flannel-assigned subnet by editing docker.service:

# Change the [Service] section of /usr/lib/systemd/system/docker.service as follows,
# so that dockerd picks up $DOCKER_NETWORK_OPTIONS generated by Flannel in /run/flannel/subnet.env:
[Service]
Type=notify
EnvironmentFile=/run/flannel/subnet.env
ExecStart=/usr/bin/dockerd $DOCKER_NETWORK_OPTIONS
ExecReload=/bin/kill -s HUP $MAINPID
TimeoutSec=0
RestartSec=2
Restart=always

[root@localhost ~] vim /usr/lib/systemd/system/docker.service 
[root@localhost ~] systemctl daemon-reload
[root@localhost ~] systemctl restart docker
[root@localhost ~] ps -ef|grep dockerd
root     12469     1  0 17:44 ?        00:00:00 /usr/bin/dockerd --bip=172.17.67.1/24 --ip-masq=false --mtu=1450
root     12597  1489  0 17:44 pts/0    00:00:00 grep --color=auto dockerd

OK, Docker restarted successfully and is using the Flannel-assigned subnet. Now copy the relevant configuration over to 192.168.31.66 (Node02) and bring it up there in the same way, again making Docker use the subnet that Flannel assigns on that node.

[root@localhost ~] scp -r /opt/kubernetes/ root@192.168.31.66:/opt/
[root@localhost ~] scp -r /usr/lib/systemd/system/flanneld.service root@192.168.31.66:/usr/lib/systemd/system

[root@localhost ~] systemctl start flanneld
[root@localhost ~] ps -ef|grep flanneld
root     12250     1  0 17:49 ?        00:00:00 /opt/kubernetes/bin/flanneld --ip-masq --etcd-endpoints=https://192.168.31.63:2379,https://192.168.31.64:2379,https://192.168.31.65:2379 -etcd-cafile=/opt/etcd/ssl/ca.pem -etcd-certfile=/opt/etcd/ssl/server.pem -etcd-keyfile=/opt/etcd/ssl/server-key.pem
root     12310  1313  0 17:49 pts/0    00:00:00 grep --color=auto flanneld
[root@localhost ~] cat /run/flannel/subnet.env 
DOCKER_OPT_BIP="--bip=172.17.99.1/24"
DOCKER_OPT_IPMASQ="--ip-masq=false"
DOCKER_OPT_MTU="--mtu=1450"
DOCKER_NETWORK_OPTIONS=" --bip=172.17.99.1/24 --ip-masq=false --mtu=1450"
# Change the [Service] section of /usr/lib/systemd/system/docker.service in the same way as on Node01:
[Service]
Type=notify
EnvironmentFile=/run/flannel/subnet.env
ExecStart=/usr/bin/dockerd $DOCKER_NETWORK_OPTIONS
ExecReload=/bin/kill -s HUP $MAINPID
TimeoutSec=0
RestartSec=2
Restart=always

[root@localhost ~] vim /usr/lib/systemd/system/docker.service 
[root@localhost ~] systemctl daemon-reload
[root@localhost ~] systemctl enable flanneld
[root@localhost ~] systemctl restart flanneld
[root@localhost ~] systemctl restart docker
[root@localhost ~] ps -ef|grep dockerd
root     12469     1  0 17:44 ?        00:00:00 /usr/bin/dockerd --bip=172.17.99.1/24 --ip-masq=false --mtu=1450
root     12597  1489  0 17:44 pts/0    00:00:00 grep --color=auto dockerd

As you can see, the subnets assigned by Flannel on the two nodes are indeed different.
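
A simple way to confirm the overlay actually forwards traffic is to start a throwaway container on each node and ping across the two subnets. The addresses below are only illustrative; use whatever the first container actually reports:

# on Node01: start a test container and print its address (something like 172.17.67.2)
docker run -d --name overlay-test busybox sleep 3600
docker exec overlay-test ip addr show eth0

# on Node02: ping that address through the flannel overlay
docker run --rm busybox ping -c 3 172.17.67.2

If the ping fails even though flanneld is running on both nodes, check the iptables FORWARD policy; recent Docker releases set it to DROP by default, which blocks cross-host container traffic.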

Deploying kube-apiserver and the Other Control-Plane Components

According to the earlier plan, kube-apiserver needs to be deployed on both Master nodes.

Deployment on Master01

Perform the following on the Master01 node (192.168.31.63):

First go into the k8s directory and create a k8s-cert directory dedicated to the certificates for the Kubernetes components. I wrote a script, k8s-cert.sh, that generates them. In its server-csr.json section you need to list the addresses of your own control-plane and load-balancer nodes; in my planned network these are 192.168.31.60 (load balancer, covered later), 192.168.31.61 (load balancer, covered later), 192.168.31.62, 192.168.31.63, and 192.168.31.64. After writing the script, run it and check that the server and ca certificates were generated under k8s/k8s-cert:

[root@localhost ~] cd k8s                                                                                                                         
[root@localhost k8s] ls                                                                                                                           
cfssl.sh  etcd-cert  etcd.sh  etcd-v3.3.10-linux-amd64  etcd-v3.3.10-linux-amd64.tar  k8s-cert.sh                                                  
[root@localhost k8s] mkdir k8s-cert/                                                                                                              
[root@localhost k8s] mv k8s-cert.sh k8s-cert                                                                                                      
[root@localhost k8s] cd k8s-cert/                                                                                                                 
[root@localhost k8s-cert] cat k8s-cert.sh                                                                                                         
cat > ca-config.json <<EOF                                                                                                                         
{                                                                                                                                                  
  "signing": {                                                                                                                                     
    "default": {                                                                                                                                   
      "expiry": "87600h"                                                                                                                           
    },                                                                                                                                             
    "profiles": {                                                                                                                                  
      "kubernetes": {                                                                                                                              
         "expiry": "87600h",                                                                                                                       
         "usages": [                                                                                                                               
            "signing",                                                                                                                             
            "key encipherment",                                                                                                                    
            "server auth",                                                                                                                         
            "client auth"                                                                                                                          
        ]                                                                                                                                          
      }                                                                                                                                            
    }                                                                                                                                              
  }                                                                                                                                                
}                                                                                                                                                  
EOF                                                                                                                                                
                                                                                                                                                   
cat > ca-csr.json <<EOF                                                                                                                            
{                                                                                                                                                  
    "CN": "kubernetes",                                                                                                                            
    "key": {                                                                                                                                       
        "algo": "rsa",                                                                                                                             
        "size": 2048                                                                                                                               
    },                                                                                                                                             
    "names": [                                                                                                                                     
        {                                                                                                                                          
            "C": "CN",                                                                                                                             
            "L": "Beijing",
            "ST": "Beijing",
            "O": "k8s",
            "OU": "System"
        }
    ]
}
EOF

cfssl gencert -initca ca-csr.json | cfssljson -bare ca -

#-----------------------

cat > server-csr.json <<EOF
{
    "CN": "kubernetes",
    "hosts": [
      "10.0.0.1",
      "127.0.0.1",
      "192.168.31.60",
      "192.168.31.61",
      "192.168.31.62",
      "192.168.31.63",
      "192.168.31.64",
      "kubernetes",
      "kubernetes.default",
      "kubernetes.default.svc",
      "kubernetes.default.svc.cluster",
      "kubernetes.default.svc.cluster.local"
    ],
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "L": "BeiJing",
            "ST": "BeiJing",
            "O": "k8s",
            "OU": "System"
        }
    ]
}
EOF

cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes server-csr.json | cfssljson -bare server

#-----------------------

cat > admin-csr.json <<EOF
{
  "CN": "admin",
  "hosts": [],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "L": "BeiJing",
      "ST": "BeiJing",
      "O": "system:masters",
      "OU": "System"
    }
  ]
}
EOF

cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes admin-csr.json | cfssljson -bare admin

#-----------------------

cat > kube-proxy-csr.json <<EOF
{
  "CN": "system:kube-proxy",
  "hosts": [],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "L": "BeiJing",
      "ST": "BeiJing",
      "O": "k8s",
      "OU": "System"
    }
  ]
}
EOF

cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes kube-proxy-csr.json | cfssljson -bare kube-proxy
[root@localhost k8s-cert] bash k8s-cert.sh     
[root@localhost k8s-cert] ls
admin.csr       admin.pem       ca-csr.json  k8s-cert.sh          kube-proxy-key.pem  server-csr.json
admin-csr.json  ca-config.json  ca-key.pem   kube-proxy.csr       kube-proxy.pem      server-key.pem
admin-key.pem   ca.csr          ca.pem       kube-proxy-csr.json  server.csr          server.pem        

Inside the k8s directory, create a soft directory to hold the apiserver-related scripts and the binary package. Here is an ls of it: the .sh files are the scripts, and kubernetes-server-linux-amd64.tar.gz is the binary release package.

Also create the cfg, ssl, and bin directories under /opt/kubernetes. Extract the binary package and copy the control-plane binaries (from the bin directory inside the package) into the executable directory /opt/kubernetes/bin.

As the session below shows, the binaries live under kubernetes/server/bin/ after extraction. After copying, check that they landed in /opt/kubernetes/bin; the ls output confirms they did.

[root@localhost soft] ls
apiserver.sh  controller-manager.sh  kubernetes-server-linux-amd64.tar.gz scheduler.sh
[root@localhost soft] mkdir -p /opt/kubernetes/{bin,cfg,ssl}                                                                                     
[root@localhost soft] tar zxf kubernetes-server-linux-amd64.tar.gz 
[root@localhost soft] cd kubernetes
[root@localhost kubernetes] ls
addons  kubernetes-src.tar.gz  LICENSES  server
[root@localhost kubernetes] cd server/bin/                                                                                                        
[root@localhost bin] ls                                                                                                                           
apiextensions-apiserver              kubeadm                    kube-controller-manager.docker_tag  kube-proxy.docker_tag      mounter
cloud-controller-manager             kube-apiserver             kube-controller-manager.tar         kube-proxy.tar
cloud-controller-manager.docker_tag  kube-apiserver.docker_tag  kubectl                             kube-scheduler
cloud-controller-manager.tar         kube-apiserver.tar         kubelet                             kube-scheduler.docker_tag
hyperkube                            kube-controller-manager    kube-proxy                          kube-scheduler.tar
[root@localhost bin] cp kube-apiserver kubectl kube-controller-manager kube-scheduler /opt/kubernetes/bin/
[root@localhost bin] ls /opt/kubernetes/bin/                                                                                                      
kube-apiserver  kube-controller-manager  kubectl  kube-scheduler

Here is the script that sets up kube-apiserver; I'll print it with cat so you can copy it directly:

[root@localhost soft] cat apiserver.sh 
#!/bin/bash

MASTER_ADDRESS=$1
ETCD_SERVERS=$2

cat <<EOF >/opt/kubernetes/cfg/kube-apiserver

KUBE_APISERVER_OPTS="--logtostderr=true \\
--v=4 \\
--etcd-servers=${ETCD_SERVERS} \\
--bind-address=${MASTER_ADDRESS} \\
--secure-port=6443 \\
--advertise-address=${MASTER_ADDRESS} \\
--allow-privileged=true \\
--service-cluster-ip-range=10.0.0.0/24 \\
--enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,ResourceQuota,NodeRestriction \\
--authorization-mode=RBAC,Node \\
--kubelet-https=true \\
--enable-bootstrap-token-auth \\
--token-auth-file=/opt/kubernetes/cfg/token.csv \\
--service-node-port-range=30000-50000 \\
--tls-cert-file=/opt/kubernetes/ssl/server.pem  \\
--tls-private-key-file=/opt/kubernetes/ssl/server-key.pem \\
--client-ca-file=/opt/kubernetes/ssl/ca.pem \\
--service-account-key-file=/opt/kubernetes/ssl/ca-key.pem \\
--etcd-cafile=/opt/etcd/ssl/ca.pem \\
--etcd-certfile=/opt/etcd/ssl/server.pem \\
--etcd-keyfile=/opt/etcd/ssl/server-key.pem"

EOF

cat <<EOF >/usr/lib/systemd/system/kube-apiserver.service
[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/kubernetes/kubernetes

[Service]
EnvironmentFile=-/opt/kubernetes/cfg/kube-apiserver
ExecStart=/opt/kubernetes/bin/kube-apiserver \$KUBE_APISERVER_OPTS
Restart=on-failure

[Install]
WantedBy=multi-user.target
EOF

systemctl daemon-reload
systemctl enable kube-apiserver
systemctl restart kube-apiserver

Copy the certificates kube-apiserver needs into /opt/kubernetes/ssl and verify that they were copied successfully:

[root@localhost soft] cp ../k8s-cert/ca.pem /opt/kubernetes/ssl/                                                                                  
[root@localhost soft] cp ../k8s-cert/server*.pem /opt/kubernetes/ssl/
[root@localhost soft] ls /opt/kubernetes/ssl/                                                                          
ca.pem  server-key.pem  server.pem

Deploying kube-apiserver requires a token.csv file, so first generate a random token:
The commands below generate the token and write it into token.csv under the cfg directory. Use the token you generated yourself; do not copy mine!

[root@localhost soft] head -c 16 /dev/urandom 
[root@localhost soft] head -c 16 /dev/urandom |od -An -t x
 fd656dca d6f7a03f 63a96624 7b883571
[root@localhost soft] head -c 16 /dev/urandom |od -An -t x|tr -d ' '                                                                           
b633fe09576f01bafe6d41690abc0790
[root@localhost soft] vim /opt/kubernetes/cfg/token.csv                                                                                        
[root@localhost soft] cat /opt/kubernetes/cfg/token.csv                                                                                        
b633fe09576f01bafe6d41690abc0790,kubelet-bootstrap,10001,"system:kubelet-bootstrap"
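
The same thing can be done non-interactively; this is just a convenience sketch that generates a token and writes token.csv in one step, following the same format used above:

BOOTSTRAP_TOKEN=$(head -c 16 /dev/urandom | od -An -t x | tr -d ' ')
cat > /opt/kubernetes/cfg/token.csv <<EOF
${BOOTSTRAP_TOKEN},kubelet-bootstrap,10001,"system:kubelet-bootstrap"
EOF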

With the certificates, configuration files, and binaries all in place, deploy kube-apiserver by running the script.
Pay attention to the argument format, which is visible at the top of the script: the first argument is this machine's address, and the second is the comma-separated list of etcd cluster endpoints.

[root@localhost soft] bash apiserver.sh 192.168.31.63 https://192.168.31.63:2379,https://192.168.31.64:2379,https://192.168.31.65:2379
Created symlink from /etc/systemd/system/multi-user.target.wants/kube-apiserver.service to /usr/lib/systemd/system/kube-apiserver.service.

Next run the controller-manager script; its contents are as follows:

[root@localhost soft] cat controller-manager.sh                                                                                                
#!/bin/bash

MASTER_ADDRESS=$1

cat <<EOF >/opt/kubernetes/cfg/kube-controller-manager


KUBE_CONTROLLER_MANAGER_OPTS="--logtostderr=true \\
--v=4 \\
--master=${MASTER_ADDRESS}:8080 \\
--leader-elect=true \\
--address=127.0.0.1 \\
--service-cluster-ip-range=10.0.0.0/24 \\
--cluster-name=kubernetes \\
--cluster-signing-cert-file=/opt/kubernetes/ssl/ca.pem \\
--cluster-signing-key-file=/opt/kubernetes/ssl/ca-key.pem  \\
--root-ca-file=/opt/kubernetes/ssl/ca.pem \\
--service-account-private-key-file=/opt/kubernetes/ssl/ca-key.pem \\
--experimental-cluster-signing-duration=87600h0m0s"

EOF

cat <<EOF >/usr/lib/systemd/system/kube-controller-manager.service
[Unit]
Description=Kubernetes Controller Manager
Documentation=https://github.com/kubernetes/kubernetes

[Service]
EnvironmentFile=-/opt/kubernetes/cfg/kube-controller-manager
ExecStart=/opt/kubernetes/bin/kube-controller-manager \$KUBE_CONTROLLER_MANAGER_OPTS
Restart=on-failure

[Install]
WantedBy=multi-user.target
EOF

systemctl daemon-reload
systemctl enable kube-controller-manager
systemctl restart kube-controller-manager
[root@localhost soft] bash controller-manager.sh 127.0.0.1                                                                                     
Created symlink from /etc/systemd/system/multi-user.target.wants/kube-controller-manager.service to /usr/lib/systemd/system/kube-controller-manager.service.
[root@localhost soft] cat scheduler.sh
#!/bin/bash                                                                                                                                         
                                                                                                                                                    
MASTER_ADDRESS=$1                                                                                                                                   
                                                                                                                                                    
cat <<EOF >/opt/kubernetes/cfg/kube-scheduler                                                                                                       
                                                                                                                                                    
KUBE_SCHEDULER_OPTS="--logtostderr=true \\                                                                                                          
--v=4 \\                                                                                                                                            
--master=${MASTER_ADDRESS}:8080 \\                                                                                                                  
--leader-elect"                                                                                                                                     
                                                                                                                                                    
EOF                                                                                                                                                 
                                                                                                                                                    
cat <<EOF >/usr/lib/systemd/system/kube-scheduler.service                                                                                           
[Unit]                                                                                                                                              
Description=Kubernetes Scheduler                                                                                                                    
Documentation=https://github.com/kubernetes/kubernetes                                                                                              
                                                                                                                                                    
[Service]                                                                                                                                           
EnvironmentFile=-/opt/kubernetes/cfg/kube-scheduler                                                                                                 
ExecStart=/opt/kubernetes/bin/kube-scheduler \$KUBE_SCHEDULER_OPTS                                                                                  
Restart=on-failure                                                                                                                                  

[Install]
WantedBy=multi-user.target
EOF

systemctl daemon-reload
systemctl enable kube-scheduler
systemctl restart kube-scheduler

[root@localhost soft] bash scheduler.sh 127.0.0.1
Created symlink from /etc/systemd/system/multi-user.target.wants/kube-scheduler.service to /usr/lib/systemd/system/kube-scheduler.service.

Install a few necessary tools:
Use netstat to check that port 8080 is listening, which shows the apiserver is up, and then check that all of the components report healthy.
Use ntpdate to synchronize the clock: all nodes must agree on the time, so install ntpdate on every node and synchronize the time on each of them.

[root@localhost soft] yum install net-tools
[root@localhost soft] yum install ntpdate
[root@localhost soft] netstat -antp|grep 8080|grep LISTEN                                                                                      
tcp        0      0 127.0.0.1:8080          0.0.0.0:*               LISTEN      2666/kube-apiserver                                             
[root@localhost soft] /opt/kubernetes/bin/kubectl get cs                                                                                       
NAME                 STATUS    MESSAGE             ERROR
scheduler            Healthy   ok                  
controller-manager   Healthy   ok                  
etcd-1               Healthy   {"health":"true"}   
etcd-0               Healthy   {"health":"true"}   
etcd-2               Healthy   {"health":"true"}  
[root@localhost ~] ntpdate time.windows.com 
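
To keep the clocks from drifting apart again, you can also run the same sync periodically from cron on every node (optional; any NTP server you trust will do):

# append a half-hourly sync job to root's crontab
echo "*/30 * * * * /usr/sbin/ntpdate time.windows.com >/dev/null 2>&1" >> /var/spool/cron/root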

Deployment on Master02

Perform the following on the Master02 node (192.168.31.64):

As on Master01, the cfg, ssl, and bin directories need to exist under /opt/kubernetes/:

[root@localhost ~] cd /opt/kubernetes/                                                                                                          
[root@localhost kubernetes] ls
bin  cfg  ssl

Go into the cfg directory. The configuration files below can simply be copied over from Master01 with scp (together with the binaries and certificates; a sketch of that copy follows the listings below), then edited so the relevant parameters point at Master02:

[root@localhost cfg] ls
kube-apiserver  kube-controller-manager  kube-scheduler  token.csv
[root@localhost cfg] cat kube-scheduler                                                                                                           

KUBE_SCHEDULER_OPTS="--logtostderr=true \
--v=4 \
--master=127.0.0.1:8080 \
--leader-elect"

[root@localhost cfg] cat kube-apiserver                                                                                                           

KUBE_APISERVER_OPTS="--logtostderr=true \
--v=4 \
--etcd-servers=https://192.168.31.63:2379,https://192.168.31.64:2379,https://192.168.31.65:2379 \
--bind-address=192.168.31.64 \
--secure-port=6443 \
--advertise-address=192.168.31.64 \
--allow-privileged=true \
--service-cluster-ip-range=10.0.0.0/24 \
--enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,ResourceQuota,NodeRestriction \
--authorization-mode=RBAC,Node \
--kubelet-https=true \
--enable-bootstrap-token-auth \
--token-auth-file=/opt/kubernetes/cfg/token.csv \
--service-node-port-range=30000-50000 \
--tls-cert-file=/opt/kubernetes/ssl/server.pem  \
--tls-private-key-file=/opt/kubernetes/ssl/server-key.pem \
--client-ca-file=/opt/kubernetes/ssl/ca.pem \
--service-account-key-file=/opt/kubernetes/ssl/ca-key.pem \
--etcd-cafile=/opt/etcd/ssl/ca.pem \
--etcd-certfile=/opt/etcd/ssl/server.pem \
--etcd-keyfile=/opt/etcd/ssl/server-key.pem"

[root@localhost cfg] cat kube-controller-manager                                                                                                  


KUBE_CONTROLLER_MANAGER_OPTS="--logtostderr=true \
--v=4 \
--master=127.0.0.1:8080 \
--leader-elect=true \
--address=127.0.0.1 \
--service-cluster-ip-range=10.0.0.0/24 \
--cluster-name=kubernetes \
--cluster-signing-cert-file=/opt/kubernetes/ssl/ca.pem \
--cluster-signing-key-file=/opt/kubernetes/ssl/ca-key.pem  \
--root-ca-file=/opt/kubernetes/ssl/ca.pem \
--service-account-private-key-file=/opt/kubernetes/ssl/ca-key.pem \
--experimental-cluster-signing-duration=87600h0m0s"

Start the services

[root@localhost cfg] systemctl start kube-apiserver  
[root@localhost cfg] systemctl enable kube-apiserver  
[root@localhost cfg] systemctl start kube-scheduler  
[root@localhost cfg] systemctl enable kube-scheduler                                                                                             
[root@localhost cfg] systemctl start kube-controller-manager    
[root@localhost cfg] systemctl enable kube-controller-manager    

Check that everything succeeded:

[root@localhost cfg] /opt/kubernetes/bin/kubectl get cs                                                                                         
NAME                 STATUS    MESSAGE             ERROR
scheduler            Healthy   ok                  
controller-manager   Healthy   ok                  
etcd-2               Healthy   {"health":"true"}   
etcd-1               Healthy   {"health":"true"}   
etcd-0               Healthy   {"health":"true"}   
[root@localhost cfg] /opt/kubernetes/bin/kubectl get node                                                                                       
NAME            STATUS   ROLES    AGE   VERSION
192.168.31.65   Ready    <none>   20h   v1.12.1
192.168.31.66   Ready    <none>   20h   v1.12.1