Kubernetes (1.8.1) Deployment Notes

1. Environment

Server layout:

IP                Hostname        Role
192.168.119.180   k8s-0, etcd-1   Master, etcd, NFS Server
192.168.119.181   k8s-1, etcd-2   Minion, etcd
192.168.119.182   k8s-2, etcd-3   Minion, etcd
192.168.119.183   k8s-3           Minion

OS and software versions:

  • OS: CentOS Linux release 7.3.1611 (Core)
  • ETCD: etcd-v3.2.9-linux-amd64
  • Flannel: flannel.x86_64-0.7.1-2.el7
  • Docker: docker.x86_64-2:1.12.6-61.git85d7426.el7.centos
  • K8S: v1.8.1

Every server has the same /etc/hosts content, shown below:

[root@k8s-0 ~]# cat /etc/hosts
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6


192.168.119.180      k8s-0 etcd-1
192.168.119.181      k8s-1 etcd-2
192.168.119.182      k8s-2 etcd-3
192.168.119.183      k8s-3

Disable the firewall on each node

[root@k8s-0 ~]# systemctl stop firewalld
[root@k8s-0 ~]# systemctl disable firewalld
[root@k8s-0 ~]# systemctl status firewalld

If you are not entirely fluent with firewall configuration, it is better to disable it up front rather than dig a hole for yourself!
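
If you do want to keep firewalld running, the ports used later in this guide would have to be opened instead; a hypothetical rule set (port list inferred from the services configured below, verify against your own setup):

### Alternative: keep firewalld and open only the required ports ###
[root@k8s-0 ~]# firewall-cmd --permanent --add-port=2379-2380/tcp   # etcd client/peer traffic
[root@k8s-0 ~]# firewall-cmd --permanent --add-port=6443/tcp        # kube-apiserver secure port
[root@k8s-0 ~]# firewall-cmd --permanent --add-port=8080/tcp        # kube-apiserver insecure port
[root@k8s-0 ~]# firewall-cmd --permanent --add-port=10250/tcp       # kubelet API
[root@k8s-0 ~]# firewall-cmd --reload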

Disable swap on each node

[root@k8s-0 ~]# swapoff -a
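
Note that swapoff -a only disables swap until the next reboot; to keep swap off permanently, comment out the swap entry in /etc/fstab as well (a minimal sketch):

### Comment out the swap line so swap stays disabled after reboot ###
[root@k8s-0 ~]# sed -ri 's/^([^#].*[[:space:]]swap[[:space:]].*)$/#\1/' /etc/fstab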

2. ETCD Cluster Deployment

etcd is a distributed key value store that provides a reliable way to store data across a cluster of machines. It’s open-source and available on GitHub. etcd gracefully handles leader elections during network partitions and will tolerate machine failure, including the leader.

In this cluster, ETCD plays the role of distributed data store: it holds configuration data for Kubernetes and flanneld, such as Kubernetes node information and flanneld's subnet assignments.

2.1. Creating Certificates

For background on certificate concepts, see: Introduction to Network Security Basics (网络安全相关知识简介)

ETCD can use TLS certificates to authenticate both client-to-server and server-to-server (peer) connections. Every node in the cluster therefore needs its own certificate, and clients accessing the ETCD cluster must present a certificate as well. On top of that, all of these certificates have to trust one another; in other words, we need a CA that generates a self-signed root certificate and uses it to issue a certificate for every cluster node and every client that accesses the cluster. All ETCD nodes must trust the root certificate and use it to verify the authenticity of the other certificates. The certificates for each role in the cluster, and the relationships between them, are shown in the figure below:

(Figure: certificate roles in the cluster and their trust relationships)

We do not need to stand up a dedicated CA server here; it is enough to generate the root certificate plus the certificates each cluster role needs, and to distribute a copy of the root certificate to every role in the cluster. To generate the certificates we use CFSSL, CloudFlare's open-source PKI toolkit.
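
Once the node certificates below have been issued, the trust chain can be sanity-checked locally with openssl; any certificate signed by this CA should verify against ca.pem:

### Verify that an issued certificate chains back to the root CA ###
[root@k8s-0 etcd]# openssl verify -CAfile ca.pem certificates-node-1.pem
certificates-node-1.pem: OK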

2.1.1. Installing the CFSSL Tools
### Download ###
[root@k8s-0 ~]# curl -s -L -o ./cfssl https://pkg.cfssl.org/R1.2/cfssl_linux-amd64
[root@k8s-0 ~]# curl -s -L -o ./cfssljson https://pkg.cfssl.org/R1.2/cfssljson_linux-amd64
[root@k8s-0 ~]# ls -l cf*
-rw-r--r--. 1 root root 10376657 11月  7 04:25 cfssl
-rw-r--r--. 1 root root  2277873 11月  7 04:27 cfssljson

### Make the binaries executable ###
[root@k8s-0 ~]# chmod +x cf*
[root@k8s-0 ~]# ls -l cf*
-rwxr-xr-x. 1 root root 10376657 11月  7 04:25 cfssl
-rwxr-xr-x. 1 root root  2277873 11月  7 04:27 cfssljson

### Move them to /usr/local/bin ###
[root@k8s-0 ~]# mv cf* /usr/local/bin/

### Test ###
[root@k8s-0 ~]# cfssl version
Version: 1.2.0
Revision: dev
Runtime: go1.6
2.1.2. Generating the Certificate Files
  1. Use cfssl to generate a CA certificate signing request template in JSON format

    
    ### Generate the default template ###
    
    [root@k8s-0 etcd]# pwd
    /root/cfssl/etcd
    [root@k8s-0 etcd]# cfssl print-defaults csr > ca-csr.json
    [root@k8s-0 etcd]# ls -l
    -rw-r--r--. 1 root root 287 11月  7 05:11 ca-csr.json

    csr stands for Certificate Signing Request

  2. Edit the ca-csr.json template; the modified content is as follows

    {
       "CN": "ETCD-Cluster",
       "hosts": [
           "localhost",
           "127.0.0.1",
           "etcd-1",
           "etcd-2",
           "etcd-3"
       ],
       "key": {
           "algo": "rsa",
           "size": 2048
       },
       "names": [
           {
               "C": "CN",
               "L": "Wuhan",
               "ST": "Hubei",
               "O": "Dameng",
               "OU": "CloudPlatform"
           }
       ]
    }

    hosts lists the hosts on which this certificate may be used

  3. Generate the root certificate and private key from the signing request

    [root@k8s-0 etcd]# cfssl gencert -initca ca-csr.json | cfssljson -bare ca -
    2017/11/07 05:19:23 [INFO] generating a new CA key and certificate from CSR
    2017/11/07 05:19:23 [INFO] generate received request
    2017/11/07 05:19:23 [INFO] received CSR
    2017/11/07 05:19:23 [INFO] generating key: rsa-2048
    2017/11/07 05:19:24 [INFO] encoded CSR
    2017/11/07 05:19:24 [INFO] signed certificate with serial number 72023613742258533689603590346479034316827863176
    [root@k8s-0 etcd]# ls -l 
    -rw-r--r--. 1 root root 1106 11月  7 05:19 ca.csr
    -rw-r--r--. 1 root root  390 11月  7 05:19 ca-csr.json
    -rw-------. 1 root root 1675 11月  7 05:19 ca-key.pem
    -rw-r--r--. 1 root root 1403 11月  7 05:19 ca.pem

    ca.pem is the certificate file and contains the CA's public key

    ca-key.pem is the private key; keep it safe

    ca.csr is the certificate signing request and can be used to request a new certificate later

  4. Use cfssl to generate a signing policy template; this file tells the CA what kinds of certificates it may issue

    [root@k8s-0 etcd]# pwd
    /root/cfssl/etcd
    [root@k8s-0 etcd]# cfssl print-defaults config > ca-config.json
    [root@k8s-0 etcd]# ls -l
    -rw-r--r--. 1 root root  567 11月  7 05:39 ca-config.json
    -rw-r--r--. 1 root root 1106 11月  7 05:19 ca.csr
    -rw-r--r--. 1 root root  390 11月  7 05:19 ca-csr.json
    -rw-------. 1 root root 1675 11月  7 05:19 ca-key.pem
    -rw-r--r--. 1 root root 1403 11月  7 05:19 ca.pem

  5. Edit the ca-config.json template; the edited content is as follows:

    {
       "signing": {
           "default": {
               "expiry": "43800h"
           },
           "profiles": {
               "server": {
                   "expiry": "43800h",
                   "usages": [
                       "signing",
                       "key encipherment",
                       "server auth"
                   ]
               },
               "client": {
                   "expiry": "43800h",
                   "usages": [
                       "signing",
                       "key encipherment",
                       "client auth"
                   ]
               },
               "peer": {
                   "expiry": "43800h",
                   "usages": [
                       "signing",
                       "key encipherment",
                       "client auth",
                       "server auth"
                   ]
               }
           }
       }
    }

    1. The default certificate lifetime is 43800 hours (5 years)

    2. Three profiles:

    server: for server authentication; stored on the server to prove the server's identity

    client: for client authentication; stored on the client to prove the client's identity

    peer: usable for both server and client authentication
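
    To confirm which profile was actually applied to an issued certificate, the Extended Key Usage can be checked with openssl once a certificate exists (step 6 below); the server profile, for instance, should show "TLS Web Server Authentication":

    ### Check Extended Key Usage on an issued certificate (illustrative) ###
    [root@k8s-0 etcd]# openssl x509 -in certificates-node-1.pem -noout -text | grep -A1 "Extended Key Usage"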

  6. Create the server certificate signing request JSON file (certificates-node-1.json)

    [root@k8s-0 etcd]# pwd
    /root/cfssl/etcd
    
    ### Generate the template first, then fill in your own values ###
    
    [root@k8s-0 etcd]# cfssl print-defaults csr > certificates-node-1.json
    [root@k8s-0 etcd]# ls -l
    -rw-r--r--. 1 root root  833 11月  7 06:00 ca-config.json
    -rw-r--r--. 1 root root 1106 11月  7 05:19 ca.csr
    -rw-r--r--. 1 root root  390 11月  7 05:19 ca-csr.json
    -rw-------. 1 root root 1675 11月  7 05:19 ca-key.pem
    -rw-r--r--. 1 root root 1403 11月  7 05:19 ca.pem
    -rw-r--r--. 1 root root  287 11月  7 06:01 certificates-node-1.json
    
    ### The modified file looks like this ###
    
    [root@k8s-0 etcd]# cat certificates-node-1.json 
    {
       "CN": "etcd-node-1",
       "hosts": [
           "etcd-1",
           "localhost",
           "127.0.0.1"
       ],
       "key": {
           "algo": "rsa",
           "size": 2048
       },
       "names": [
           {
               "C": "CN",
               "L": "Wuhan",
               "ST": "Hubei",
               "O": "Dameng",
               "OU": "CloudPlatform"
           }
       ]
    }
    
    ### Issue the certificate using the CA key, CA certificate, and signing policy ###
    
    [root@k8s-0 etcd]# cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=server certificates-node-1.json | cfssljson -bare certificates-node-1
    2017/11/07 06:23:02 [INFO] generate received request
    2017/11/07 06:23:02 [INFO] received CSR
    2017/11/07 06:23:02 [INFO] generating key: rsa-2048
    2017/11/07 06:23:03 [INFO] encoded CSR
    2017/11/07 06:23:03 [INFO] signed certificate with serial number 50773407225485518252456721207664284207973931225
    2017/11/07 06:23:03 [WARNING] This certificate lacks a "hosts" field. This makes it unsuitable for
    websites. For more information see the Baseline Requirements for the Issuance and Management
    of Publicly-Trusted Certificates, v.1.1.6, from the CA/Browser Forum (https://cabforum.org);
    specifically, section 10.2.3 ("Information Requirements").
    [root@k8s-0 etcd]# ls -l
    -rw-r--r--. 1 root root  833 11月  7 06:00 ca-config.json
    -rw-r--r--. 1 root root 1106 11月  7 05:19 ca.csr
    -rw-r--r--. 1 root root  390 11月  7 05:19 ca-csr.json
    -rw-------. 1 root root 1675 11月  7 05:19 ca-key.pem
    -rw-r--r--. 1 root root 1403 11月  7 05:19 ca.pem
    -rw-r--r--. 1 root root 1082 11月  7 06:23 certificates-node-1.csr
    -rw-r--r--. 1 root root  353 11月  7 06:08 certificates-node-1.json
    -rw-------. 1 root root 1675 11月  7 06:23 certificates-node-1-key.pem
    -rw-r--r--. 1 root root 1452 11月  7 06:23 certificates-node-1.pem

    The warning says the certificate lacks a "hosts" field, which may make it unsuitable for web sites. Dumping the certificate with openssl x509 -in certificates-node-1.pem -text -noout shows that the "X509v3 Subject Alternative Name" field does contain the "hosts" entries from certificates-node-1.json. Generating the certificate with cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=server -hostname="etcd-1,localhost,127.0.0.1" certificates-node-1.json | cfssljson -bare certificates-node-1 instead produces no warning, and the resulting certificate content is identical.

    Tip: all nodes in the ETCD cluster can share a single certificate and private key, i.e. distribute the same certificates-node-1.pem and certificates-node-1-key.pem files to etcd-1, etcd-2, and etcd-3; a sketch of this follows below.
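
    If you take that route, a single certificate valid on all three nodes can be issued in one step; a sketch reusing the CSR file above (the -hostname list overrides the "hosts" field, and the peer profile covers both server and client auth):

    [root@k8s-0 etcd]# cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=peer -hostname="etcd-1,etcd-2,etcd-3,localhost,127.0.0.1" certificates-node-1.json | cfssljson -bare certificates-node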

  7. Repeat the previous step to issue the certificates for etcd-2 and etcd-3

    [root@k8s-0 etcd]# pwd
    /root/cfssl/etcd
    [root@k8s-0 etcd]# cat certificates-node-2.json 
    {
       "CN": "etcd-node-2",
       "hosts": [
           "etcd-2",
           "localhost",
           "127.0.0.1"
       ],
       "key": {
           "algo": "rsa",
           "size": 2048
       },
       "names": [
           {
               "C": "CN",
               "L": "Wuhan",
               "ST": "Hubei",
               "O": "Dameng",
               "OU": "CloudPlatform"
           }
       ]
    }
    [root@k8s-0 etcd]# cat certificates-node-3.json 
    {
       "CN": "etcd-node-3",
       "hosts": [
           "etcd-3",
           "localhost",
           "127.0.0.1"
       ],
       "key": {
           "algo": "rsa",
           "size": 2048
       },
       "names": [
           {
               "C": "CN",
               "L": "Wuhan",
               "ST": "Hubei",
               "O": "Dameng",
               "OU": "CloudPlatform"
           }
       ]
    }
    [root@k8s-0 etcd]# cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=server -hostname="etcd-2,localhost,127.0.0.1" certificates-node-2.json | cfssljson -bare certificates-node-2
    2017/11/07 06:37:54 [INFO] generate received request
    2017/11/07 06:37:54 [INFO] received CSR
    2017/11/07 06:37:54 [INFO] generating key: rsa-2048
    2017/11/07 06:37:55 [INFO] encoded CSR
    2017/11/07 06:37:55 [INFO] signed certificate with serial number 53358189697471981482368171601115864435884153942
    [root@k8s-0 etcd]# cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=server -hostname="etcd-3,localhost,127.0.0.1" certificates-node-3.json | cfssljson -bare certificates-node-3
    2017/11/07 06:38:16 [INFO] generate received request
    2017/11/07 06:38:16 [INFO] received CSR
    2017/11/07 06:38:16 [INFO] generating key: rsa-2048
    2017/11/07 06:38:17 [INFO] encoded CSR
    2017/11/07 06:38:17 [INFO] signed certificate with serial number 202032929825719668992436771371275796219870214492
    [root@k8s-0 etcd]# ls -l
    -rw-r--r--. 1 root root  833 11月  7 06:00 ca-config.json
    -rw-r--r--. 1 root root 1106 11月  7 05:19 ca.csr
    -rw-r--r--. 1 root root  390 11月  7 05:19 ca-csr.json
    -rw-------. 1 root root 1675 11月  7 05:19 ca-key.pem
    -rw-r--r--. 1 root root 1403 11月  7 05:19 ca.pem
    -rw-r--r--. 1 root root 1082 11月  7 06:23 certificates-node-1.csr
    -rw-r--r--. 1 root root  353 11月  7 06:08 certificates-node-1.json
    -rw-------. 1 root root 1675 11月  7 06:23 certificates-node-1-key.pem
    -rw-r--r--. 1 root root 1452 11月  7 06:23 certificates-node-1.pem
    -rw-r--r--. 1 root root 1082 11月  7 06:37 certificates-node-2.csr
    -rw-r--r--. 1 root root  353 11月  7 06:36 certificates-node-2.json
    -rw-------. 1 root root 1675 11月  7 06:37 certificates-node-2-key.pem
    -rw-r--r--. 1 root root 1452 11月  7 06:37 certificates-node-2.pem
    -rw-r--r--. 1 root root 1082 11月  7 06:38 certificates-node-3.csr
    -rw-r--r--. 1 root root  353 11月  7 06:37 certificates-node-3.json
    -rw-------. 1 root root 1679 11月  7 06:38 certificates-node-3-key.pem
    -rw-r--r--. 1 root root 1452 11月  7 06:38 certificates-node-3.pem

  8. Distribute the certificates to their nodes

    [root@k8s-0 etcd]# pwd
    /root/cfssl/etcd
    
    ### Create the certificate directories ###
    
    [root@k8s-0 etcd]# mkdir -p /etc/etcd/ssl
    [root@k8s-0 etcd]# ssh root@k8s-1 mkdir -p /etc/etcd/ssl
    [root@k8s-0 etcd]# ssh root@k8s-2 mkdir -p /etc/etcd/ssl
    
    ### Copy each server's certificate files into its directory ###
    
    [root@k8s-0 etcd]# cp ca.pem /etc/etcd/ssl/
    [root@k8s-0 etcd]# cp certificates-node-1.pem /etc/etcd/ssl/
    [root@k8s-0 etcd]# cp certificates-node-1-key.pem /etc/etcd/ssl/
    [root@k8s-0 etcd]# ls -l /etc/etcd/ssl/
    -rw-r--r--. 1 root root 1403 11月  7 19:56 ca.pem
    -rw-------. 1 root root 1675 11月  7 19:57 certificates-node-1-key.pem
    -rw-r--r--. 1 root root 1452 11月  7 19:55 certificates-node-1.pem
    
    ### Copy files to k8s-1 ###
    
    [root@k8s-0 etcd]# scp ca.pem root@k8s-1:/etc/etcd/ssl/
    [root@k8s-0 etcd]# scp certificates-node-2.pem root@k8s-1:/etc/etcd/ssl/
    [root@k8s-0 etcd]# scp certificates-node-2-key.pem root@k8s-1:/etc/etcd/ssl/
    [root@k8s-0 etcd]# ssh root@k8s-1 ls -l /etc/etcd/ssl/
    -rw-r--r--. 1 root root 1403 11月  7 19:58 ca.pem
    -rw-------. 1 root root 1675 11月  7 20:00 certificates-node-2-key.pem
    -rw-r--r--. 1 root root 1452 11月  7 19:59 certificates-node-2.pem
    
    ### Copy files to k8s-2 ###
    
    [root@k8s-0 etcd]# scp ca.pem root@k8s-2:/etc/etcd/ssl/
    [root@k8s-0 etcd]# scp certificates-node-3.pem root@k8s-2:/etc/etcd/ssl/
    [root@k8s-0 etcd]# scp certificates-node-3-key.pem root@k8s-2:/etc/etcd/ssl/
    [root@k8s-0 etcd]# ssh root@k8s-2 ls -l /etc/etcd/ssl/
    -rw-r--r--. 1 root root 1403 11月  7 20:03 ca.pem
    -rw-------. 1 root root 1675 11月  7 20:04 certificates-node-3-key.pem
    -rw-r--r--. 1 root root 1452 11月  7 20:03 certificates-node-3.pem

To inspect a certificate's contents: openssl x509 -in ca.pem -text -noout

2.2. Deploying the ETCD Cluster

  1. Download the release tarball and unpack it

    [root@k8s-0 ~]# pwd
    /root
    [root@k8s-0 ~]# wget http
    [root@k8s-0 ~]# ls -l
    -rw-r--r--. 1 root    root    10176896 11月  6 19:18 etcd-v3.2.9-linux-amd64.tar.gz
    [root@k8s-0 ~]# tar -zxvf etcd-v3.2.9-linux-amd64.tar.gz 
    [root@k8s-0 ~]# ls -l
    drwxrwxr-x. 3 chenlei chenlei      123 10月  7 01:10 etcd-v3.2.9-linux-amd64
    -rw-r--r--. 1 root    root    10176896 11月  6 19:18 etcd-v3.2.9-linux-amd64.tar.gz
    [root@k8s-0 ~]# ls -l etcd-v3.2.9-linux-amd64
    drwxrwxr-x. 11 chenlei chenlei     4096 10月  7 01:10 Documentation
    -rwxrwxr-x.  1 chenlei chenlei 17123360 10月  7 01:10 etcd
    -rwxrwxr-x.  1 chenlei chenlei 14640128 10月  7 01:10 etcdctl
    -rw-rw-r--.  1 chenlei chenlei    33849 10月  7 01:10 README-etcdctl.md
    -rw-rw-r--.  1 chenlei chenlei     5801 10月  7 01:10 README.md
    -rw-rw-r--.  1 chenlei chenlei     7855 10月  7 01:10 READMEv2-etcdctl.md
    [root@k8s-0 ~]# cp etcd-v3.2.9-linux-amd64/etcd /usr/local/bin/
    [root@k8s-0 ~]# cp etcd-v3.2.9-linux-amd64/etcdctl /usr/local/bin/
    [root@k8s-0 ~]# etcd --version
    etcd Version: 3.2.9
    Git SHA: f1d7dd8
    Go Version: go1.8.4
    Go OS/Arch: linux/amd64

  2. Create the etcd configuration file

    [root@k8s-0 ~]# cat /etc/etcd/etcd.conf
    
    # [member]
    ETCD_NAME=etcd-1
    ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
    #ETCD_WAL_DIR=""
    #ETCD_SNAPSHOT_COUNT="10000"
    #ETCD_HEARTBEAT_INTERVAL="100"
    #ETCD_ELECTION_TIMEOUT="1000"
    ETCD_LISTEN_PEER_URLS="https://0.0.0.0:2380"
    ETCD_LISTEN_CLIENT_URLS="https://0.0.0.0:2379"
    #ETCD_MAX_SNAPSHOTS="5"
    #ETCD_MAX_WALS="5"
    #ETCD_CORS=""
    #
    #[cluster]
    ETCD_INITIAL_ADVERTISE_PEER_URLS="https://etcd-1:2380"
    # if you use different ETCD_NAME (e.g. test), set ETCD_INITIAL_CLUSTER value for this name, i.e. "test=http://..."
    ETCD_INITIAL_CLUSTER="etcd-1=https://etcd-1:2380,etcd-2=https://etcd-2:2380,etcd-3=https://etcd-3:2380"
    ETCD_INITIAL_CLUSTER_STATE="new"
    ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
    ETCD_ADVERTISE_CLIENT_URLS="https://etcd-1:2379"
    #ETCD_DISCOVERY=""
    #ETCD_DISCOVERY_SRV=""
    #ETCD_DISCOVERY_FALLBACK="proxy"
    #ETCD_DISCOVERY_PROXY=""
    #ETCD_STRICT_RECONFIG_CHECK="false"
    #ETCD_AUTO_COMPACTION_RETENTION="0"
    #
    #[proxy]
    #ETCD_PROXY="off"
    #ETCD_PROXY_FAILURE_WAIT="5000"
    #ETCD_PROXY_REFRESH_INTERVAL="30000"
    #ETCD_PROXY_DIAL_TIMEOUT="1000"
    #ETCD_PROXY_WRITE_TIMEOUT="5000"
    #ETCD_PROXY_READ_TIMEOUT="0"
    #
    #[security]
    ETCD_CERT_FILE="/etc/etcd/ssl/certificates-node-1.pem"
    ETCD_KEY_FILE="/etc/etcd/ssl/certificates-node-1-key.pem"
    ETCD_CLIENT_CERT_AUTH="true"
    ETCD_TRUSTED_CA_FILE="/etc/etcd/ssl/ca.pem"
    ETCD_AUTO_TLS="true"
    ETCD_PEER_CERT_FILE="/etc/etcd/ssl/certificates-node-1.pem"
    ETCD_PEER_KEY_FILE="/etc/etcd/ssl/certificates-node-1-key.pem"
    #ETCD_PEER_CLIENT_CERT_AUTH="false"
    ETCD_PEER_TRUSTED_CA_FILE="/etc/etcd/ssl/ca.pem"
    ETCD_PEER_AUTO_TLS="true"
    #
    #[logging]
    #ETCD_DEBUG="false"
    # examples for -log-package-levels etcdserver=WARNING,security=DEBUG
    #ETCD_LOG_PACKAGE_LEVELS=""
    #
    #[profiling]
    #ETCD_ENABLE_PPROF="false"
    #ETCD_METRICS="basic"

  3. Create the systemd unit file and the user that runs the service

    [root@k8s-0 ~]# cat /usr/lib/systemd/system/etcd.service
    [Unit]
    Description=Etcd Server
    After=network.target
    After=network-online.target
    Wants=network-online.target
    
    [Service]
    Type=notify
    WorkingDirectory=/var/lib/etcd/
    EnvironmentFile=-/etc/etcd/etcd.conf
    User=etcd
    
    # set GOMAXPROCS to number of processors
    ExecStart=/bin/bash -c "GOMAXPROCS=$(nproc) /usr/local/bin/etcd --name=\"${ETCD_NAME}\" --cert-file=\"${ETCD_CERT_FILE}\" --key-file=\"${ETCD_KEY_FILE}\" --peer-cert-file=\"${ETCD_PEER_CERT_FILE}\" --peer-key-file=\"${ETCD_PEER_KEY_FILE}\" --trusted-ca-file=\"${ETCD_TRUSTED_CA_FILE}\" --peer-trusted-ca-file=\"${ETCD_PEER_TRUSTED_CA_FILE}\" --initial-advertise-peer-urls=\"${ETCD_INITIAL_ADVERTISE_PEER_URLS}\" --listen-peer-urls=\"${ETCD_LISTEN_PEER_URLS}\" --listen-client-urls=\"${ETCD_LISTEN_CLIENT_URLS}\" --advertise-client-urls=\"${ETCD_ADVERTISE_CLIENT_URLS}\" --initial-cluster-token=\"${ETCD_INITIAL_CLUSTER_TOKEN}\" --initial-cluster=\"${ETCD_INITIAL_CLUSTER}\" --initial-cluster-state=\"${ETCD_INITIAL_CLUSTER_STATE}\" --data-dir=\"${ETCD_DATA_DIR}\""
    Restart=on-failure
    RestartSec=5
    LimitNOFILE=65536
    
    [Install]
    WantedBy=multi-user.target
    
    ### Create the user the etcd service runs as ###
    
    [root@k8s-0 ~]# useradd etcd -d /var/lib/etcd -s /sbin/nologin -c "etcd user"
    
    ### Change ownership of the certificate files to etcd ###
    
    [root@k8s-0 ~]# chown -R etcd:etcd /etc/etcd/
    [root@k8s-0 ~]# ls -lR /etc/etcd/
    /etc/etcd/:
    -rw-r--r--. 1 etcd etcd 1752 11月  7 20:19 etcd.conf
    drwxr-xr-x. 2 etcd etcd   86 11月  7 19:57 ssl
    
    /etc/etcd/ssl:
    -rw-r--r--. 1 etcd etcd 1403 11月  7 19:56 ca.pem
    -rw-------. 1 etcd etcd 1675 11月  7 19:57 certificates-node-1-key.pem
    -rw-r--r--. 1 etcd etcd 1452 11月  7 19:55 certificates-node-1.pem

  4. Repeat steps 1 - 3 on k8s-1 and k8s-2

    [root@k8s-1 ~]# cat /etc/etcd/etcd.conf
    
    # [member]
    ETCD_NAME=etcd-2
    ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
    #ETCD_WAL_DIR=""
    #ETCD_SNAPSHOT_COUNT="10000"
    #ETCD_HEARTBEAT_INTERVAL="100"
    #ETCD_ELECTION_TIMEOUT="1000"
    ETCD_LISTEN_PEER_URLS="https://0.0.0.0:2380"
    ETCD_LISTEN_CLIENT_URLS="https://0.0.0.0:2379"
    #ETCD_MAX_SNAPSHOTS="5"
    #ETCD_MAX_WALS="5"
    #ETCD_CORS=""
    #
    #[cluster]
    ETCD_INITIAL_ADVERTISE_PEER_URLS="https://etcd-2:2380"
    # if you use different ETCD_NAME (e.g. test), set ETCD_INITIAL_CLUSTER value for this name, i.e. "test=http://..."
    ETCD_INITIAL_CLUSTER="etcd-1=https://etcd-1:2380,etcd-2=https://etcd-2:2380,etcd-3=https://etcd-3:2380"
    ETCD_INITIAL_CLUSTER_STATE="new"
    ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
    ETCD_ADVERTISE_CLIENT_URLS="https://etcd-2:2379"
    #ETCD_DISCOVERY=""
    #ETCD_DISCOVERY_SRV=""
    #ETCD_DISCOVERY_FALLBACK="proxy"
    #ETCD_DISCOVERY_PROXY=""
    #ETCD_STRICT_RECONFIG_CHECK="false"
    #ETCD_AUTO_COMPACTION_RETENTION="0"
    #
    #[proxy]
    #ETCD_PROXY="off"
    #ETCD_PROXY_FAILURE_WAIT="5000"
    #ETCD_PROXY_REFRESH_INTERVAL="30000"
    #ETCD_PROXY_DIAL_TIMEOUT="1000"
    #ETCD_PROXY_WRITE_TIMEOUT="5000"
    #ETCD_PROXY_READ_TIMEOUT="0"
    #
    #[security]
    ETCD_CERT_FILE="/etc/etcd/ssl/certificates-node-2.pem"
    ETCD_KEY_FILE="/etc/etcd/ssl/certificates-node-2-key.pem"
    ETCD_CLIENT_CERT_AUTH="true"
    ETCD_TRUSTED_CA_FILE="/etc/etcd/ssl/ca.pem"
    ETCD_AUTO_TLS="true"
    ETCD_PEER_CERT_FILE="/etc/etcd/ssl/certificates-node-2.pem"
    ETCD_PEER_KEY_FILE="/etc/etcd/ssl/certificates-node-2-key.pem"
    #ETCD_PEER_CLIENT_CERT_AUTH="false"
    ETCD_PEER_TRUSTED_CA_FILE="/etc/etcd/ssl/ca.pem"
    ETCD_PEER_AUTO_TLS="true"
    #
    #[logging]
    #ETCD_DEBUG="false"
    # examples for -log-package-levels etcdserver=WARNING,security=DEBUG
    #ETCD_LOG_PACKAGE_LEVELS=""
    #
    #[profiling]
    #ETCD_ENABLE_PPROF="false"
    #ETCD_METRICS="basic"
    
    [root@k8s-1 ~]# ls -lR /etc/etcd/
    /etc/etcd/:
    总用量 4
    -rw-r--r--. 1 etcd etcd 1752 11月  7 20:46 etcd.conf
    drwxr-xr-x. 2 etcd etcd   86 11月  7 20:00 ssl
    
    /etc/etcd/ssl:
    总用量 12
    -rw-r--r--. 1 etcd etcd 1403 11月  7 19:58 ca.pem
    -rw-------. 1 etcd etcd 1675 11月  7 20:00 certificates-node-2-key.pem
    -rw-r--r--. 1 etcd etcd 1452 11月  7 19:59 certificates-node-2.pem
    
    [root@k8s-2 ~]# cat /etc/etcd/etcd.conf
    
    # [member]
    ETCD_NAME=etcd-3
    ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
    #ETCD_WAL_DIR=""
    #ETCD_SNAPSHOT_COUNT="10000"
    #ETCD_HEARTBEAT_INTERVAL="100"
    #ETCD_ELECTION_TIMEOUT="1000"
    ETCD_LISTEN_PEER_URLS="https://0.0.0.0:2380"
    ETCD_LISTEN_CLIENT_URLS="https://0.0.0.0:2379"
    #ETCD_MAX_SNAPSHOTS="5"
    #ETCD_MAX_WALS="5"
    #ETCD_CORS=""
    #
    #[cluster]
    ETCD_INITIAL_ADVERTISE_PEER_URLS="https://etcd-3:2380"
    # if you use different ETCD_NAME (e.g. test), set ETCD_INITIAL_CLUSTER value for this name, i.e. "test=http://..."
    ETCD_INITIAL_CLUSTER="etcd-1=https://etcd-1:2380,etcd-2=https://etcd-2:2380,etcd-3=https://etcd-3:2380"
    ETCD_INITIAL_CLUSTER_STATE="new"
    ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
    ETCD_ADVERTISE_CLIENT_URLS="https://etcd-3:2379"
    #ETCD_DISCOVERY=""
    #ETCD_DISCOVERY_SRV=""
    #ETCD_DISCOVERY_FALLBACK="proxy"
    #ETCD_DISCOVERY_PROXY=""
    #ETCD_STRICT_RECONFIG_CHECK="false"
    #ETCD_AUTO_COMPACTION_RETENTION="0"
    #
    #[proxy]
    #ETCD_PROXY="off"
    #ETCD_PROXY_FAILURE_WAIT="5000"
    #ETCD_PROXY_REFRESH_INTERVAL="30000"
    #ETCD_PROXY_DIAL_TIMEOUT="1000"
    #ETCD_PROXY_WRITE_TIMEOUT="5000"
    #ETCD_PROXY_READ_TIMEOUT="0"
    #
    #[security]
    ETCD_CERT_FILE="/etc/etcd/ssl/certificates-node-3.pem"
    ETCD_KEY_FILE="/etc/etcd/ssl/certificates-node-3-key.pem"
    ETCD_CLIENT_CERT_AUTH="true"
    ETCD_TRUSTED_CA_FILE="/etc/etcd/ssl/ca.pem"
    ETCD_AUTO_TLS="true"
    ETCD_PEER_CERT_FILE="/etc/etcd/ssl/certificates-node-3.pem"
    ETCD_PEER_KEY_FILE="/etc/etcd/ssl/certificates-node-3-key.pem"
    #ETCD_PEER_CLIENT_CERT_AUTH="false"
    ETCD_PEER_TRUSTED_CA_FILE="/etc/etcd/ssl/ca.pem"
    ETCD_PEER_AUTO_TLS="true"
    #
    #[logging]
    #ETCD_DEBUG="false"
    # examples for -log-package-levels etcdserver=WARNING,security=DEBUG
    #ETCD_LOG_PACKAGE_LEVELS=""
    #
    #[profiling]
    #ETCD_ENABLE_PPROF="false"
    #ETCD_METRICS="basic"
    
    [root@k8s-2 ~]# ls -lR /etc/etcd/
    /etc/etcd/:
    -rw-r--r--. 1 etcd etcd 1752 11月  7 20:50 etcd.conf
    drwxr-xr-x. 2 etcd etcd   86 11月  7 20:04 ssl
    
    /etc/etcd/ssl:
    -rw-r--r--. 1 etcd etcd 1403 11月  7 20:03 ca.pem
    -rw-------. 1 etcd etcd 1675 11月  7 20:04 certificates-node-3-key.pem
    -rw-r--r--. 1 etcd etcd 1452 11月  7 20:03 certificates-node-3.pem

  5. Start the etcd service

    
    ### Run on each of the three nodes ###
    
    [root@k8s-0 ~]# systemctl start etcd
    [root@k8s-1 ~]# systemctl start etcd
    [root@k8s-2 ~]# systemctl start etcd
    
    [root@k8s-0 ~]# systemctl status etcd
    [root@k8s-1 ~]# systemctl status etcd
    [root@k8s-2 ~]# systemctl status etcd

  6. Check the cluster's health

    
    ### Generate a client certificate ###
    
    [root@k8s-0 etcd]# pwd
    /root/cfssl/etcd
    [root@k8s-0 etcd]# cat certificates-client.json 
    {
       "CN": "etcd-client",
       "hosts": [
           "k8s-0",
           "k8s-1",
           "k8s-2",
           "k8s-3",
           "localhost",
           "127.0.0.1"
       ],
       "key": {
           "algo": "rsa",
           "size": 2048
       },
       "names": [
           {
               "C": "CN",
               "L": "Wuhan",
               "ST": "Hubei",
               "O": "Dameng",
               "OU": "CloudPlatform"
           }
       ]
    }
    [root@k8s-0 etcd]# cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=client -hostname="k8s-0,k8s-1,k8s-2,k8s-3,localhost,127.0.0.1" certificates-client.json | cfssljson -bare certificates-client
    2017/11/07 21:22:52 [INFO] generate received request
    2017/11/07 21:22:52 [INFO] received CSR
    2017/11/07 21:22:52 [INFO] generating key: rsa-2048
    2017/11/07 21:22:52 [INFO] encoded CSR
    2017/11/07 21:22:52 [INFO] signed certificate with serial number 625476446160272733374126460300662233104566650826
    [root@k8s-0 etcd]# ls -l
    -rw-r--r--. 1 root root  833 11月  7 06:00 ca-config.json
    -rw-r--r--. 1 root root 1106 11月  7 05:19 ca.csr
    -rw-r--r--. 1 root root  390 11月  7 05:19 ca-csr.json
    -rw-------. 1 root root 1675 11月  7 05:19 ca-key.pem
    -rw-r--r--. 1 root root 1403 11月  7 05:19 ca.pem
    -rw-r--r--. 1 root root 1110 11月  7 21:22 certificates-client.csr
    -rw-r--r--. 1 root root  403 11月  7 21:20 certificates-client.json
    -rw-------. 1 root root 1679 11月  7 21:22 certificates-client-key.pem
    -rw-r--r--. 1 root root 1476 11月  7 21:22 certificates-client.pem
    -rw-r--r--. 1 root root 1082 11月  7 06:23 certificates-node-1.csr
    -rw-r--r--. 1 root root  353 11月  7 06:08 certificates-node-1.json
    -rw-------. 1 root root 1675 11月  7 06:23 certificates-node-1-key.pem
    -rw-r--r--. 1 root root 1452 11月  7 06:23 certificates-node-1.pem
    -rw-r--r--. 1 root root 1082 11月  7 06:37 certificates-node-2.csr
    -rw-r--r--. 1 root root  353 11月  7 06:36 certificates-node-2.json
    -rw-------. 1 root root 1675 11月  7 06:37 certificates-node-2-key.pem
    -rw-r--r--. 1 root root 1452 11月  7 06:37 certificates-node-2.pem
    -rw-r--r--. 1 root root 1082 11月  7 06:38 certificates-node-3.csr
    -rw-r--r--. 1 root root  353 11月  7 06:37 certificates-node-3.json
    -rw-------. 1 root root 1679 11月  7 06:38 certificates-node-3-key.pem
    -rw-r--r--. 1 root root 1452 11月  7 06:38 certificates-node-3.pem
    
    [root@k8s-0 etcd]# etcdctl --ca-file=ca.pem --cert-file=certificates-client.pem --key-file=certificates-client-key.pem --endpoints=https://etcd-1:2379,https://etcd-2:2379,https://etcd-3:2379 cluster-health
    member 1a147ce6336081c1 is healthy: got healthy result from https://etcd-1:2379
    member ce10c39ce110475b is healthy: got healthy result from https://etcd-3:2379
    member ed2c681b974a3802 is healthy: got healthy result from https://etcd-2:2379
    cluster is healthy

    This client certificate can later be reused wherever a component such as kube-apiserver or flanneld needs to connect to the ETCD cluster!
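
    Incidentally, the check above goes through the etcd v2 API; since kube-apiserver will later be pointed at --storage-backend=etcd3, confirming v3 health as well can be worthwhile. etcdctl 3.2 exposes it via the ETCDCTL_API switch (note the v3 flag spellings --cacert/--cert/--key):

    [root@k8s-0 etcd]# ETCDCTL_API=3 etcdctl --cacert=ca.pem --cert=certificates-client.pem --key=certificates-client-key.pem --endpoints=https://etcd-1:2379,https://etcd-2:2379,https://etcd-3:2379 endpoint health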

3. Deploying the Kubernetes Master

The Kubernetes services running on the Master node are kube-apiserver, kube-controller-manager, and kube-scheduler. For now, these three services need to be deployed on the same server.

The Master node enables TLS and TLS bootstrapping, and the certificate interactions involved are quite intricate. To keep the relationships straight, we create separate CAs for different purposes. First, a list of the CAs we may need:

  • CA-ApiServer, used to issue the kube-apiserver certificate
  • CA-Client, used to issue the kubectl certificate, the kube-proxy certificate, and the kubelet auto-signing certificate
  • CA-ServiceAccount, used to sign and verify the JWT bearer tokens of Service Accounts

The kubelet auto-signing certificate is issued by CA-Client, but it does not serve directly as the kubelet's identity certificate. With TLS bootstrapping enabled, the kubelet's identity certificate is issued by Kubernetes itself (in practice by kube-controller-manager), and the auto-signing certificate acts as an intermediate CA responsible for issuing the concrete kubelet identity certificates.

kubectl and kube-proxy obtain kube-apiserver's root CA certificate from their kubeconfig files to verify kube-apiserver's identity certificate, and at the same time present their own identity certificates to kube-apiserver.

Besides these CAs, we also need ETCD's root CA certificate and ETCD's certificates-client in order to access the ETCD cluster.

3.1. Creating the CA Certificates

3.1.1. Creating the Kubernetes Root Certificate
[root@k8s-0 kubernetes]# pwd
/root/cfssl/kubernetes
### Generate the kube-apiserver root CA certificate ###
[root@k8s-0 kubernetes]# cat kubernetes-root-ca-csr.json 
{
    "CN": "Kubernetes-Cluster",
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "L": "Wuhan",
            "ST": "Hubei",
            "O": "Dameng",
            "OU": "CloudPlatform"
        }
    ]
}
[root@k8s-0 kubernetes]# cfssl gencert -initca kubernetes-root-ca-csr.json | cfssljson -bare kubernetes-root-ca
2017/11/10 19:20:36 [INFO] generating a new CA key and certificate from CSR
2017/11/10 19:20:36 [INFO] generate received request
2017/11/10 19:20:36 [INFO] received CSR
2017/11/10 19:20:36 [INFO] generating key: rsa-2048
2017/11/10 19:20:37 [INFO] encoded CSR
2017/11/10 19:20:37 [INFO] signed certificate with serial number 409390209095238242979736842166999327083180050042
[root@k8s-0 kubernetes]# ls -l
-rw-r--r--. 1 root root 1021 11月 10 19:20 kubernetes-root-ca.csr
-rw-r--r--. 1 root root  279 11月 10 18:04 kubernetes-root-ca-csr.json
-rw-------. 1 root root 1675 11月 10 19:20 kubernetes-root-ca-key.pem
-rw-r--r--. 1 root root 1395 11月 10 19:20 kubernetes-root-ca.pem
### Copy the signing policy file from the ETCD setup ###
[root@k8s-0 kubernetes]# cp ../etcd/ca-config.json .
[root@k8s-0 kubernetes]# ll
-rw-r--r--. 1 root root  833 11月 10 16:29 ca-config.json
-rw-r--r--. 1 root root 1021 11月 10 19:20 kubernetes-root-ca.csr
-rw-r--r--. 1 root root  279 11月 10 18:04 kubernetes-root-ca-csr.json
-rw-------. 1 root root 1675 11月 10 19:20 kubernetes-root-ca-key.pem
-rw-r--r--. 1 root root 1395 11月 10 19:20 kubernetes-root-ca.pem
3.1.2. Issuing the kubectl Certificate from the Root Certificate
[root@k8s-0 kubernetes]# pwd
/root/cfssl/kubernetes
### Kubernetes extracts the certificate's "O" field as the "Group" value in its RBAC model ###
[root@k8s-0 kubernetes]# cat kubernetes-client-kubectl-csr.json 
{
    "CN": "kubectl-admin",
    "hosts": [
        "localhost",
        "127.0.0.1",
        "etcd-1"
    ],
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "L": "Wuhan",
            "ST": "Hubei",
            "O": "system:masters",
            "OU": "system"
        }
    ]
}
[root@k8s-0 kubernetes]# cfssl gencert -ca=kubernetes-root-ca.pem -ca-key=kubernetes-root-ca-key.pem  -config=ca-config.json -profile=client -hostname="k8s-0,localhost,127.0.0.1" kubernetes-client-kubectl-csr.json | cfssljson -bare kubernetes-client-kubectl
2017/11/10 19:28:53 [INFO] generate received request
2017/11/10 19:28:53 [INFO] received CSR
2017/11/10 19:28:53 [INFO] generating key: rsa-2048
2017/11/10 19:28:53 [INFO] encoded CSR
2017/11/10 19:28:53 [INFO] signed certificate with serial number 48283780181062525775523310004102739160256608492
[root@k8s-0 kubernetes]# ls -l
总用量 40
-rw-r--r--. 1 root root  833 11月 10 16:29 ca-config.json
-rw-r--r--. 1 root root 1086 11月 10 19:28 kubernetes-client-kubectl.csr
-rw-r--r--. 1 root root  356 11月 10 18:17 kubernetes-client-kubectl-csr.json
-rw-------. 1 root root 1675 11月 10 19:28 kubernetes-client-kubectl-key.pem
-rw-r--r--. 1 root root 1460 11月 10 19:28 kubernetes-client-kubectl.pem
-rw-r--r--. 1 root root 1021 11月 10 19:20 kubernetes-root-ca.csr
-rw-r--r--. 1 root root  279 11月 10 18:04 kubernetes-root-ca-csr.json
-rw-------. 1 root root 1675 11月 10 19:20 kubernetes-root-ca-key.pem
-rw-r--r--. 1 root root 1395 11月 10 19:20 kubernetes-root-ca.pem

Under Kubernetes' RBAC model, Kubernetes extracts the "CN" and "O" fields from the identity certificate a connecting client presents (here, kubernetes-client-kubectl.pem) and uses them as the RBAC username and group, respectively. The "system:masters" group used here is built into Kubernetes and is bound to the built-in role "cluster-admin", which grants access to every API in the cluster. In other words, a client using this certificate can call any Kubernetes API. The "CN" value can be whatever you like; Kubernetes will create that user and bind the permissions.
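
The username/group Kubernetes will see can be confirmed by printing the certificate subject; CN maps to the RBAC user and O to the group:

### Expect O=system:masters and CN=kubectl-admin in the subject ###
[root@k8s-0 kubernetes]# openssl x509 -in kubernetes-client-kubectl.pem -noout -subject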

3.1.3. Issuing the apiserver Certificate from the Root Certificate
[root@k8s-0 kubernetes]# pwd
/root/cfssl/kubernetes
[root@k8s-0 kubernetes]# cat kubernetes-server-csr.json 
{
    "CN": "Kubernetes-Server",
    "hosts": [
        "localhost",
        "127.0.0.1",
        "k8s-0",
        "10.254.0.1",
        "kubernetes",
        "kubernetes.default",
        "kubernetes.default.svc",
        "kubernetes.default.svc.cluster",
        "kubernetes.default.svc.cluster.local"
    ],
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "L": "Wuhan",
            "ST": "Hubei",
            "O": "Dameng",
            "OU": "CloudPlatform"
        }
    ]
}
[root@k8s-0 kubernetes]# cfssl gencert -ca=kubernetes-root-ca.pem -ca-key=kubernetes-root-ca-key.pem  -config=ca-config.json -profile=server kubernetes-server-csr.json | cfssljson -bare kubernetes-server
2017/11/10 19:42:40 [INFO] generate received request
2017/11/10 19:42:40 [INFO] received CSR
2017/11/10 19:42:40 [INFO] generating key: rsa-2048
2017/11/10 19:42:40 [INFO] encoded CSR
2017/11/10 19:42:40 [INFO] signed certificate with serial number 136243250541044739203078514726425397097204358889
2017/11/10 19:42:40 [WARNING] This certificate lacks a "hosts" field. This makes it unsuitable for
websites. For more information see the Baseline Requirements for the Issuance and Management
of Publicly-Trusted Certificates, v.1.1.6, from the CA/Browser Forum (https://cabforum.org);
specifically, section 10.2.3 ("Information Requirements").
[root@k8s-0 kubernetes]# ls -l
-rw-r--r--. 1 root root  833 11月 10 16:29 ca-config.json
-rw-r--r--. 1 root root 1086 11月 10 19:28 kubernetes-client-kubectl.csr
-rw-r--r--. 1 root root  356 11月 10 18:17 kubernetes-client-kubectl-csr.json
-rw-------. 1 root root 1675 11月 10 19:28 kubernetes-client-kubectl-key.pem
-rw-r--r--. 1 root root 1460 11月 10 19:28 kubernetes-client-kubectl.pem
-rw-r--r--. 1 root root 1021 11月 10 19:20 kubernetes-root-ca.csr
-rw-r--r--. 1 root root  279 11月 10 18:04 kubernetes-root-ca-csr.json
-rw-------. 1 root root 1675 11月 10 19:20 kubernetes-root-ca-key.pem
-rw-r--r--. 1 root root 1395 11月 10 19:20 kubernetes-root-ca.pem
-rw-r--r--. 1 root root 1277 11月 10 19:42 kubernetes-server.csr
-rw-r--r--. 1 root root  556 11月 10 19:40 kubernetes-server-csr.json
-rw-------. 1 root root 1675 11月 10 19:42 kubernetes-server-key.pem
-rw-r--r--. 1 root root 1651 11月 10 19:42 kubernetes-server.pem

3.2. Installing the kubectl Command-Line Client

3.2.1. Installation
[root@k8s-0 ~]# pwd
/root
### Download ###
wget http://......
[root@k8s-0 ~]# ls -l
-rw-------. 1 root    root         1510 10月 10 18:47 anaconda-ks.cfg
drwxr-xr-x. 3 root    root           18 11月  7 05:05 cfssl
drwxrwxr-x. 3 chenlei chenlei       123 10月  7 01:10 etcd-v3.2.9-linux-amd64
-rw-r--r--. 1 root    root     10176896 11月  6 19:18 etcd-v3.2.9-linux-amd64.tar.gz
-rw-r--r--. 1 root    root    403881630 11月  8 07:20 kubernetes-server-linux-amd64.tar.gz
### Unpack ###
[root@k8s-0 ~]# tar -zxvf kubernetes-server-linux-amd64.tar.gz 
[root@k8s-0 ~]# ls -l
-rw-------. 1 root    root         1510 10月 10 18:47 anaconda-ks.cfg
drwxr-xr-x. 3 root    root           18 11月  7 05:05 cfssl
drwxrwxr-x. 3 chenlei chenlei       123 10月  7 01:10 etcd-v3.2.9-linux-amd64
-rw-r--r--. 1 root    root     10176896 11月  6 19:18 etcd-v3.2.9-linux-amd64.tar.gz
drwxr-x---. 4 root    root           79 10月 12 07:38 kubernetes
-rw-r--r--. 1 root    root    403881630 11月  8 07:24 kubernetes-server-linux-amd64.tar.gz
[root@k8s-0 ~]# cp kubernetes/server/bin/kubectl /usr/local/bin/
[root@k8s-0 ~]# kubectl version
Client Version: version.Info{Major:"1", Minor:"8", GitVersion:"v1.8.1", GitCommit:"f38e43b221d08850172a9a4ea785a86a3ffa3b3a", GitTreeState:"clean", BuildDate:"2017-10-11T23:27:35Z", GoVersion:"go1.8.3", Compiler:"gc", Platform:"linux/amd64"}
The connection to the server localhost:8080 was refused - did you specify the right host or port?
3.2.2. Configuration

In order for kubectl to reach the apiserver, we need to configure the following for it:

  • the kube-apiserver service address
  • the kube-apiserver root CA certificate (kubernetes-root-ca.pem); TLS is enabled, so we need this to verify the server's identity
  • kubectl's own certificate (kubernetes-client-kubectl.pem) and private key (kubernetes-client-kubectl-key.pem)

All of this is defined in kubectl's kubeconfig file.

[root@k8s-0 kubernetes]# pwd
/root/cfssl/kubernetes
### Create the certificate directory ###
[root@k8s-0 kubernetes]# mkdir -p /etc/kubernetes/ssl/
### Copy the certificate files kubectl needs into the directory ###
[root@k8s-0 kubernetes]# cp kubernetes-root-ca.pem kubernetes-client-kubectl.pem kubernetes-client-kubectl-key.pem /etc/kubernetes/ssl/
[root@k8s-0 kubernetes]# ls -l /etc/kubernetes/ssl/
-rw-------. 1 root root 1675 11月 10 19:46 kubernetes-client-kubectl-key.pem
-rw-r--r--. 1 root root 1460 11月 10 19:46 kubernetes-client-kubectl.pem
-rw-r--r--. 1 root root 1395 11月 10 19:46 kubernetes-root-ca.pem
### Configure kubectl's kubeconfig ###
[root@k8s-0 kubernetes]# kubectl config set-cluster kubernetes-cluster --certificate-authority=/etc/kubernetes/ssl/kubernetes-root-ca.pem --embed-certs=true --server="https://k8s-0:6443"
Cluster "kubernetes-cluster" set.
[root@k8s-0 kubernetes]# kubectl config set-credentials kubernetes-kubectl --client-certificate=/etc/kubernetes/ssl/kubernetes-client-kubectl.pem --embed-certs=true --client-key=/etc/kubernetes/ssl/kubernetes-client-kubectl-key.pem
User "kubernetes-kubectl" set.
[root@k8s-0 kubernetes]# kubectl config set-context kubernetes-cluster-context --cluster=kubernetes-cluster --user=kubernetes-kubectl
Context "kubernetes-cluster-context" created.
[root@k8s-0 kubernetes]# kubectl config use-context kubernetes-cluster-context
Switched to context "kubernetes-cluster-context".
[root@k8s-0 kubernetes]# ls -l ~/.kube/
总用量 8
-rw-------. 1 root root 6445 11月  8 21:37 config

If your kubectl needs to connect to several different cluster environments, you can also define multiple contexts and switch between them as needed.

set-cluster configures the cluster address and the CA root certificate; kubernetes-cluster is the cluster name, a bit like an Oracle TNS alias.

set-credentials configures the client certificate and key, i.e. the user that accesses the cluster; the user information is carried in the certificate.

set-context combines a cluster and credentials into a context for accessing the cluster; switch between contexts with use-context.
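
Two handy commands when juggling contexts (standard kubectl subcommands):

### List all contexts, then show only the configuration of the active one ###
[root@k8s-0 kubernetes]# kubectl config get-contexts
[root@k8s-0 kubernetes]# kubectl config view --minify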

3.2.3. Testing
[root@k8s-0 kubernetes]# kubectl version
Client Version: version.Info{Major:"1", Minor:"8", GitVersion:"v1.8.1", GitCommit:"f38e43b221d08850172a9a4ea785a86a3ffa3b3a", GitTreeState:"clean", BuildDate:"2017-10-11T23:27:35Z", GoVersion:"go1.8.3", Compiler:"gc", Platform:"linux/amd64"}
The connection to the server k8s-0:6443 was refused - did you specify the right host or port?

The "connection to the server k8s-0:6443" in this message is the address configured in the kubeconfig in the previous step; the service simply is not running yet.

3.3. Installing the kube-apiserver Service

3.3.1. Installation
[root@k8s-0 ~]# pwd
/root
[root@k8s-0 ~]# cp kubernetes/server/bin/kube-apiserver /usr/local/bin/
### Create the unit file ###
[root@k8s-0 ~]# cat /usr/lib/systemd/system/kube-apiserver.service
[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
After=network.target
After=etcd.service

[Service]
EnvironmentFile=-/etc/kubernetes/config
EnvironmentFile=-/etc/kubernetes/apiserver
User=kube
ExecStart=/usr/local/bin/kube-apiserver \
            $KUBE_LOGTOSTDERR \
            $KUBE_LOG_LEVEL \
            $KUBE_ETCD_SERVERS \
            $KUBE_API_ADDRESS \
            $KUBE_API_PORT \
            $KUBELET_PORT \
            $KUBE_ALLOW_PRIV \
            $KUBE_SERVICE_ADDRESSES \
            $KUBE_ADMISSION_CONTROL \
            $KUBE_API_ARGS
Restart=on-failure
Type=notify
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target

In the next steps we create the two EnvironmentFiles, /etc/kubernetes/config and /etc/kubernetes/apiserver. ExecStart points at the location where the kube-apiserver binary was installed.
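
After creating or editing a unit file, systemd must re-read its unit definitions before the service can be managed; enabling it at boot is optional but typical:

### Reload unit files and (optionally) enable the service at boot ###
[root@k8s-0 ~]# systemctl daemon-reload
[root@k8s-0 ~]# systemctl enable kube-apiserver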

3.3.2. Configuration
3.3.2.1. Preparing the Certificates
[root@k8s-0 kubernetes]# pwd
/root/cfssl/kubernetes
### Copy kubernetes-server.pem and kubernetes-server-key.pem into the certificate directory ###
[root@k8s-0 kubernetes]# cp kubernetes-server.pem kubernetes-server-key.pem /etc/kubernetes/ssl/
### Copy kubernetes-root-ca-key.pem into the certificate directory ###
[root@k8s-0 kubernetes]# cp kubernetes-root-ca-key.pem /etc/kubernetes/ssl/
### Prepare the ETCD client certificate; here we reuse the client certificate from the ETCD test above ###
[root@k8s-0 etcd]# pwd
/root/cfssl/etcd
[root@k8s-0 etcd]# cp ca.pem /etc/kubernetes/ssl/etcd-root-ca.pem
[root@k8s-0 etcd]# cp certificates-client.pem /etc/kubernetes/ssl/etcd-client-kubernetes.pem
[root@k8s-0 etcd]# cp certificates-client-key.pem /etc/kubernetes/ssl/etcd-client-kubernetes-key.pem
[root@k8s-0 etcd]# ls -l /etc/kubernetes/ssl/
-rw-------. 1 root root 1679 11月 10 19:58 etcd-client-kubernetes-key.pem
-rw-r--r--. 1 root root 1476 11月 10 19:58 etcd-client-kubernetes.pem
-rw-r--r--. 1 root root 1403 11月 10 19:57 etcd-root-ca.pem
-rw-------. 1 root root 1675 11月 10 19:46 kubernetes-client-kubectl-key.pem
-rw-r--r--. 1 root root 1460 11月 10 19:46 kubernetes-client-kubectl.pem
-rw-------. 1 root root 1675 11月 10 19:57 kubernetes-root-ca-key.pem
-rw-r--r--. 1 root root 1395 11月 10 19:46 kubernetes-root-ca.pem
-rw-------. 1 root root 1675 11月 10 19:56 kubernetes-server-key.pem
-rw-r--r--. 1 root root 1651 11月 10 19:56 kubernetes-server.pem

Here kubectl and kube-apiserver live on the same server and share one certificate directory.

3.3.2.2. Preparing the TLS Bootstrapping Configuration

TLS bootstrapping means client certificates are issued automatically by kube-apiserver, so we do not have to prepare identity certificates by hand. Currently this is only supported for kubelet: when a kubelet joins the cluster, it submits a CSR, and once an administrator approves the request, a certificate is issued automatically.

TLS bootstrapping relies on token authentication. The apiserver must first be configured with a token authenticator, and a user authenticated via that token needs the permissions of the "system:bootstrappers" group. The kubelet authenticates with the token to obtain the "system:bootstrappers" group's permissions and then submits its CSR. The token file format is "token,username,userid,groups", for example:

02b50b05283e98dd0fd71db496ef01e8,kubelet-bootstrap,10001,"system:bootstrappers"

The token can be any string containing 128 bits of entropy; use a secure random number generator to produce it.

[root@k8s-0 kubernetes]# pwd
/etc/kubernetes
### Generate the token file (the file name ends in .csv) ###
[root@k8s-0 kubernetes]# BOOTSTRAP_TOKEN=$(head -c 16 /dev/urandom | od -An -t x | tr -d ' ')
[root@k8s-0 kubernetes]# cat > token.csv << EOF
> $BOOTSTRAP_TOKEN,kubelet-bootstrap,10001,"system:bootstrappers"
> EOF
[root@k8s-0 kubernetes]# cat token.csv 
4f2c8c078e69cfc8b1ab7d640bbcb6f2,kubelet-bootstrap,10001,"system:bootstrappers"
[root@k8s-0 kubernetes]# ls -l
drwxr-xr-x. 2 kube kube 4096 11月 10 19:58 ssl
-rw-r--r--. 1 root root   80 11月 10 20:00 token.csv
### Configure the kubelet bootstrapping kubeconfig ###
[root@k8s-0 kubernetes]# kubectl config set-cluster kubernetes-cluster --certificate-authority=/etc/kubernetes/ssl/kubernetes-root-ca.pem --embed-certs=true --server="https://k8s-0:6443" --kubeconfig=bootstrap.kubeconfig
Cluster "kubernetes-cluster" set.
### Make sure your ${BOOTSTRAP_TOKEN} variable is still set to the value above ###
[root@k8s-0 kubernetes]# kubectl config set-credentials kubelet-bootstrap --token=${BOOTSTRAP_TOKEN} --kubeconfig=bootstrap.kubeconfig
User "kubelet-bootstrap" set.
[root@k8s-0 kubernetes]# kubectl config set-context kubelet-bootstrap --cluster=kubernetes-cluster --user=kubelet-bootstrap --kubeconfig=bootstrap.kubeconfig
Context "kubelet-bootstrap" created.
[root@k8s-0 kubernetes]# kubectl config use-context kubelet-bootstrap --kubeconfig=bootstrap.kubeconfig
Switched to context "kubelet-bootstrap".
[root@k8s-0 kubernetes]# ls -l
总用量 8
-rw-------. 1 root root 2265 11月 10 20:03 bootstrap.kubeconfig
drwxr-xr-x. 2 kube kube 4096 11月 10 19:58 ssl
-rw-r--r--. 1 root root   80 11月 10 20:00 token.csv
[root@k8s-0 kubernetes]# cat bootstrap.kubeconfig 
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUQyakNDQXNLZ0F3SUJBZ0lVUjdXeEh5NzdHc3h4S3R5QlJTd1VRMXgzeG5vd0RRWUpLb1pJaHZjTkFRRUwKQlFBd2N6RUxNQWtHQTFVRUJoTUNRMDR4RGpBTUJnTlZCQWdUQlVoMVltVnBNUTR3REFZRFZRUUhFd1ZYZFdoaApiakVQTUEwR0ExVUVDaE1HUkdGdFpXNW5NUll3RkFZRFZRUUxFdzFEYkc5MVpGQnNZWFJtYjNKdE1Sc3dHUVlEClZRUURFeEpMZFdKbGNtNWxkR1Z6TFVOc2RYTjBaWEl3SGhjTk1UY3hNVEV3TVRFeE5qQXdXaGNOTWpJeE1UQTUKTVRFeE5qQXdXakJ6TVFzd0NRWURWUVFHRXdKRFRqRU9NQXdHQTFVRUNCTUZTSFZpWldreERqQU1CZ05WQkFjVApCVmQxYUdGdU1ROHdEUVlEVlFRS0V3WkVZVzFsYm1jeEZqQVVCZ05WQkFzVERVTnNiM1ZrVUd4aGRHWnZjbTB4Ckd6QVpCZ05WQkFNVEVrdDFZbVZ5Ym1WMFpYTXRRMngxYzNSbGNqQ0NBU0l3RFFZSktvWklodmNOQVFFQkJRQUQKZ2dFUEFEQ0NBUW9DZ2dFQkFMQ3hXNWhQNjU4RFl3VGFCZ24xRWJIaTBNUnYyUGVCM0Y1b3M5bHZaeXZVVlZZKwpPNU9MR1plU3hZamdYcnVWRm9jTHhUTE1uUldtcmZNaUx6UG9FQlpZZ0czMXpqRzlJMG5kTm55RWVBM0ltYWdBCndsRThsZ2N5VVd6MVA3ZWx0V1FTOThnWm5QK05ieHhCT3Nick1YMytsM0ZKSDZTUXM4NFR3dVo1MVMvbi9kUWoKQ1ZFMkJvME14ZFhZZ3FESkc3MUl2WVRUcjdqWkd4d2VLZCtvWUsvTVc5ZFFjbDNraklkU1BOQUhGTW5lMVRmTwpvdlpwazF6SDRRdEJ3b3FNSHh6ZDhsUG4yd3ZzR3NRZVRkNzdqRTlsTGZjRDdOK3NyL0xiL2VLWHlQbTFPV1c3CmxLOUFtQjNxTmdBc0xZVUxGNTV1NWVQN2ZwS3pTdTU3V1Qzc3hac0NBd0VBQWFObU1HUXdEZ1lEVlIwUEFRSC8KQkFRREFnRUdNQklHQTFVZEV3RUIvd1FJTUFZQkFmOENBUUl3SFFZRFZSME9CQllFRkc4dWNWTk5tKzJtVS9CcApnbURuS2RBK3FMcGZNQjhHQTFVZEl3UVlNQmFBRkc4dWNWTk5tKzJtVS9CcGdtRG5LZEErcUxwZk1BMEdDU3FHClNJYjNEUUVCQ3dVQUE0SUJBUUJiS0pSUG1kSWpRS3E1MWNuS2lYNkV1TzJVakpVYmNYOFFFaWYzTDh2N09IVGcKcnVMY1FDUGRkbHdSNHdXUW9GYU9yZWJTbllwcmduV2EvTE4yN3lyWC9NOHNFeG83WHBEUDJoNUYybllNSFVIcAp2V1hKSUFoR3FjNjBqNmg5RHlDcGhrWVV5WUZoRkovNkVrVEJvZ241S2Z6OE1ITkV3dFdnVXdSS29aZHlGZStwCk1sL3RWOHJkYVo4eXpMY2sxejJrMXdXRDlmSWk2R2VCTG1JTnJ1ZDVVaS9QTGI2Z2YwOERZK0ZTODBIZDhZdnIKM2dTc2VCQURlOXVHMHhZZitHK1V1YUtvMHdNSHc2VGxkWGlqcVQxU0Eyc1M0ZWpGRjl0TldPaVdPcVpLakxjMgpPM2tIYllUOTVYZGQ5MHplUU1KTmR2RTU5WmdIdmpwY09sZlNEdDhOCi0tLS0tRU5EIENFUlRJRklDQVRFLS0tLS0K
    server: https://k8s-0:6443
  name: kubernetes-cluster
contexts:
- context:
    cluster: kubernetes-cluster
    user: kubelet-bootstrap
  name: kubelet-bootstrap
current-context: kubelet-bootstrap
kind: Config
preferences: {}
users:
- name: kubelet-bootstrap
  user:
    as-user-extra: {}
    token: 4f2c8c078e69cfc8b1ab7d640bbcb6f2
3.3.2.3. Configuring config
[root@k8s-0 kubernetes]# cat /etc/kubernetes/config 
###
# kubernetes system config
#
# The following values are used to configure various aspects of all
# kubernetes services, including
#
#   kube-apiserver.service
#   kube-controller-manager.service
#   kube-scheduler.service
#   kubelet.service
#   kube-proxy.service
# logging to stderr means we get it in the systemd journal
KUBE_LOGTOSTDERR="--logtostderr=true"

# journal message level, 0 is debug
KUBE_LOG_LEVEL="--v=0"

# Should this cluster be allowed to run privileged docker containers
KUBE_ALLOW_PRIV="--allow-privileged=true"

# How the controller-manager, scheduler, and proxy find the apiserver
#KUBE_MASTER="--master=http://127.0.0.1:8080"
KUBE_MASTER="--master=http://k8s-0:8080"

3.3.2.4. Configuring apiserver
[root@k8s-0 kubernetes]# pwd
/etc/kubernetes
### Configure the audit log policy ###
[root@k8s-0 kubernetes]# cat audit-policy.yaml 
# Log all requests at the Metadata level.
apiVersion: audit.k8s.io/v1beta1
kind: Policy
rules:
- level: Metadata
[root@k8s-0 ~]# mkdir -p /var/log/kube-audit
[root@k8s-0 ~]# chown kube:kube /var/log/kube-audit/
[root@k8s-0 ~]# ls -l /var/log
drwxr-xr-x. 2 kube kube      23 11月  8 23:57 kube-audit
[root@k8s-0 kubernetes]# cat apiserver 
###
# kubernetes system config
#
# The following values are used to configure the kube-apiserver
#

# The address on the local server to listen to.
KUBE_API_ADDRESS="--advertise-address=192.168.119.180 --bind-address=192.168.119.180 --insecure-bind-address=0.0.0.0"

# The port on the local server to listen on.
KUBE_API_PORT="--insecure-port=8080 --secure-port=6443"

# Port minions listen on
# KUBELET_PORT="--kubelet-port=10250"

# Comma separated list of nodes in the etcd cluster
KUBE_ETCD_SERVERS="--etcd-servers=https://etcd-1:2379,https://etcd-2:2379,https://etcd-3:2379"

# Address range to use for services
KUBE_SERVICE_ADDRESSES="--service-cluster-ip-range=10.254.0.0/16"

# default admission control policies
KUBE_ADMISSION_CONTROL="--admission-control=NamespaceLifecycle,LimitRanger,SecurityContextDeny,ServiceAccount,ResourceQuota,NodeRestriction"

# Add your own!
KUBE_API_ARGS="--authorization-mode=RBAC,Node \
               --anonymous-auth=false \
               --kubelet-https=true \
               --enable-bootstrap-token-auth \
               --token-auth-file=/etc/kubernetes/token.csv \
               --service-node-port-range=30000-32767 \
               --tls-cert-file=/etc/kubernetes/ssl/kubernetes-server.pem \
               --tls-private-key-file=/etc/kubernetes/ssl/kubernetes-server-key.pem \
               --client-ca-file=/etc/kubernetes/ssl/kubernetes-root-ca.pem \
               --service-account-key-file=/etc/kubernetes/ssl/kubernetes-root-ca.pem \
               --etcd-quorum-read=true \
               --storage-backend=etcd3 \
               --etcd-cafile=/etc/kubernetes/ssl/etcd-root-ca.pem \
               --etcd-certfile=/etc/kubernetes/ssl/etcd-client-kubernetes.pem \
               --etcd-keyfile=/etc/kubernetes/ssl/etcd-client-kubernetes-key.pem \
               --enable-swagger-ui=true \
               --apiserver-count=3 \
               --audit-policy-file=/etc/kubernetes/audit-policy.yaml \
               --audit-log-maxage=30 \
               --audit-log-maxbackup=3 \
               --audit-log-maxsize=100 \
               --audit-log-path=/var/log/kube-audit/audit.log \
               --event-ttl=1h"

--service-account-key-file is used to validate the tokens service accounts present when calling the Kubernetes API; service-account validation uses JWT.
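
Because service-account tokens are JWTs signed with the key behind --service-account-key-file, their claims can be inspected once the cluster is running; a hedged sketch (assumes the default namespace already holds a default-token secret):

### Decode the claims (second dot-separated JWT segment) of a service-account token ###
### base64url padding may be missing, so the final decode's error output is silenced ###
[root@k8s-0 ~]# SECRET=$(kubectl get secrets | awk '/default-token/{print $1; exit}')
[root@k8s-0 ~]# kubectl get secret "$SECRET" -o jsonpath='{.data.token}' | base64 -d | cut -d. -f2 | tr '_-' '/+' | base64 -d 2>/dev/null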

3.3.3. Startup
### Create the kube user that runs the Kubernetes services ###
[root@k8s-0 ~]# useradd kube -d /var/lib/kube -s /sbin/nologin -c "Kubernetes user"
### Change directory ownership ###
[root@k8s-0 ~]# chown -Rf kube:kube /etc/kubernetes/
[root@k8s-0 kubernetes]# ls -lR /etc/kubernetes/
/etc/kubernetes/:
-rw-r--r--. 1 kube kube 2172 11月 10 20:06 apiserver
-rw-r--r--. 1 kube kube  113 11月  8 23:42 audit-policy.yaml
-rw-------. 1 kube kube 2265 11月 10 20:03 bootstrap.kubeconfig
-rw-r--r--. 1 kube kube  696 11月  8 23:23 config
drwxr-xr-x. 2 kube kube 4096 11月 10 19:58 ssl
-rw-r--r--. 1 kube kube   80 11月 10 20:00 token.csv

/etc/kubernetes/ssl:
-rw-------. 1 kube kube 1679 11月 10 19:58 etcd-client-kubernetes-key.pem
-rw-r--r--. 1 kube kube 1476 11月 10 19:58 etcd-client-kubernetes.pem
-rw-r--r--. 1 kube kube 1403 11月 10 19:57 etcd-root-ca.pem
-rw-------. 1 kube kube 1675 11月 10 19:46 kubernetes-client-kubectl-key.pem
-rw-r--r--. 1 kube kube 1460 11月 10 19:46 kubernetes-client-kubectl.pem
-rw-------. 1 kube kube 1675 11月 10 19:57 kubernetes-root-ca-key.pem
-rw-r--r--. 1 kube kube 1395 11月 10 19:46 kubernetes-root-ca.pem
-rw-------. 1 kube kube 1675 11月 10 19:56 kubernetes-server-key.pem
-rw-r--r--. 1 kube kube 1651 11月 10 19:56 kubernetes-server.pem
[root@k8s-0 ~]# chown kube:kube /usr/local/bin/kube-apiserver 
[root@k8s-0 ~]# ls -l /usr/local/bin/kube-apiserver 
-rwxr-x---. 1 kube kube 192911402 11月  8 21:59 /usr/local/bin/kube-apiserver
### Start the kube-apiserver service ###
[root@k8s-0 ~]# systemctl start kube-apiserver
[root@k8s-0 ~]# systemctl status kube-apiserver
● kube-apiserver.service - Kubernetes API Server
   Loaded: loaded (/usr/lib/systemd/system/kube-apiserver.service; disabled; vendor preset: disabled)
   Active: active (running) since 五 2017-11-10 20:13:42 CST; 9s ago
     Docs: https://github.com/GoogleCloudPlatform/kubernetes
 Main PID: 3837 (kube-apiserver)
   CGroup: /system.slice/kube-apiserver.service
           └─3837 /usr/local/bin/kube-apiserver --logtostderr=true --v=0 --etcd-servers=https://etcd-1:2379...

11月 10 20:13:42 k8s-0 kube-apiserver[3837]: I1110 20:13:42.932647    3837 controller_utils.go:1041] W...ller
11月 10 20:13:42 k8s-0 systemd[1]: Started Kubernetes API Server.
11月 10 20:13:42 k8s-0 kube-apiserver[3837]: I1110 20:13:42.944774    3837 customresource_discovery_co...ller
11月 10 20:13:42 k8s-0 kube-apiserver[3837]: I1110 20:13:42.944835    3837 naming_controller.go:277] S...ller
11月 10 20:13:43 k8s-0 kube-apiserver[3837]: I1110 20:13:43.031094    3837 cache.go:39] Caches are syn...ller
11月 10 20:13:43 k8s-0 kube-apiserver[3837]: I1110 20:13:43.034168    3837 controller_utils.go:1048] C...ller
11月 10 20:13:43 k8s-0 kube-apiserver[3837]: I1110 20:13:43.034204    3837 cache.go:39] Caches are syn...ller
11月 10 20:13:43 k8s-0 kube-apiserver[3837]: I1110 20:13:43.039514    3837 autoregister_controller.go:...ller
11月 10 20:13:43 k8s-0 kube-apiserver[3837]: I1110 20:13:43.039527    3837 cache.go:32] Waiting for ca...ller
11月 10 20:13:43 k8s-0 kube-apiserver[3837]: I1110 20:13:43.139810    3837 cache.go:39] Caches are syn...ller
Hint: Some lines were ellipsized, use -l to show in full.

3.4. Installing the kube-controller-manager Service

3.4.1. Installation
[root@k8s-0 ~]# pwd
/root
[root@k8s-0 ~]# cp kubernetes/server/bin/kube-controller-manager /usr/local/bin/
[root@k8s-0 ~]# kube-controller-manager version
I1109 00:08:25.254275    5281 controllermanager.go:109] Version: v1.8.1
W1109 00:08:25.254380    5281 client_config.go:529] Neither --kubeconfig nor --master was specified.  Using the inClusterConfig.  This might not work.
W1109 00:08:25.254390    5281 client_config.go:534] error creating inClusterConfig, falling back to default config: unable to load in-cluster configuration, KUBERNETES_SERVICE_HOST and KUBERNETES_SERVICE_PORT must be defined
invalid configuration: no configuration has been provided
### Create the unit file ###
[root@k8s-0 ~]# cat /usr/lib/systemd/system/kube-controller-manager.service
[Unit]
Description=Kubernetes Controller Manager
Documentation=https://github.com/GoogleCloudPlatform/kubernetes

[Service]
EnvironmentFile=-/etc/kubernetes/config
EnvironmentFile=-/etc/kubernetes/controller-manager
User=kube
ExecStart=/usr/local/bin/kube-controller-manager \
            $KUBE_LOGTOSTDERR \
            $KUBE_LOG_LEVEL \
            $KUBE_MASTER \
            $KUBE_CONTROLLER_MANAGER_ARGS
Restart=on-failure
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
3.4.2. Configuring controller-manager
### Configure the /etc/kubernetes/controller-manager file ###
[root@k8s-0 kubernetes]# cat controller-manager 
###
# The following values are used to configure the kubernetes controller-manager

# defaults from config and apiserver should be adequate

# Add your own!
KUBE_CONTROLLER_MANAGER_ARGS="--address=0.0.0.0 \
                              --service-cluster-ip-range=10.254.0.0/16 \
                              --cluster-name=kubernetes-cluster \
                              --cluster-signing-cert-file=/etc/kubernetes/ssl/kubernetes-root-ca.pem \
                              --cluster-signing-key-file=/etc/kubernetes/ssl/kubernetes-root-ca-key.pem \
                              --service-account-private-key-file=/etc/kubernetes/ssl/kubernetes-root-ca-key.pem \
                              --root-ca-file=/etc/kubernetes/ssl/kubernetes-root-ca.pem \
                              --leader-elect=true \
                              --node-monitor-grace-period=40s \
                              --node-monitor-period=5s \
                              --pod-eviction-timeout=5m0s"

--service-account-private-key-file is the counterpart of --service-account-key-file above and is used to sign the JWT tokens.

The --cluster-signing-* files are used to sign the TLS bootstrapping certificates and must be trusted by --client-ca-file (in theory a --cluster-signing CA that is a subordinate of the --client-ca-file CA should also be trusted, but in practice this did not behave as expected: the kubelet submitted its CSR just fine, yet the node could not join the cluster; an open question!).
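
For reference, once TLS bootstrapping does work, the kubelet CSRs mentioned above appear on the master and are approved by hand (kubectl 1.8 subcommands; <csr-name> is a placeholder):

### List pending CSRs from bootstrapping kubelets, then approve one ###
[root@k8s-0 ~]# kubectl get csr
[root@k8s-0 ~]# kubectl certificate approve <csr-name>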

3.4.3. Startup
### Fix ownership of the certificate files ###
[root@k8s-0 kubernetes]# chown -R kube:kube /etc/kubernetes/
[root@k8s-0 kubernetes]# ls -lR /etc/kubernetes/
/etc/kubernetes/:
-rw-r--r--. 1 kube kube 2172 11月 10 20:06 apiserver
-rw-r--r--. 1 kube kube  113 11月  8 23:42 audit-policy.yaml
-rw-------. 1 kube kube 2265 11月 10 20:03 bootstrap.kubeconfig
-rw-r--r--. 1 kube kube  696 11月  8 23:23 config
-rw-r--r--. 1 kube kube  995 11月 10 18:32 controller-manager
drwxr-xr-x. 2 kube kube 4096 11月 10 19:58 ssl
-rw-r--r--. 1 kube kube   80 11月 10 20:00 token.csv

/etc/kubernetes/ssl:
-rw-------. 1 kube kube 1679 11月 10 19:58 etcd-client-kubernetes-key.pem
-rw-r--r--. 1 kube kube 1476 11月 10 19:58 etcd-client-kubernetes.pem
-rw-r--r--. 1 kube kube 1403 11月 10 19:57 etcd-root-ca.pem
-rw-------. 1 kube kube 1675 11月 10 19:46 kubernetes-client-kubectl-key.pem
-rw-r--r--. 1 kube kube 1460 11月 10 19:46 kubernetes-client-kubectl.pem
-rw-------. 1 kube kube 1675 11月 10 19:57 kubernetes-root-ca-key.pem
-rw-r--r--. 1 kube kube 1395 11月 10 19:46 kubernetes-root-ca.pem
-rw-------. 1 kube kube 1675 11月 10 19:56 kubernetes-server-key.pem
-rw-r--r--. 1 kube kube 1651 11月 10 19:56 kubernetes-server.pem
[root@k8s-0 kubernetes]# chown kube:kube /usr/local/bin/kube-controller-manager 
[root@k8s-0 kubernetes]# ls -l /usr/local/bin/kube-controller-manager 
-rwxr-x---. 1 kube kube 128087389 11月  9 00:08 /usr/local/bin/kube-controller-manager
[root@k8s-0 kubernetes]# systemctl start kube-controller-manager
[root@k8s-0 kubernetes]# systemctl status kube-controller-manager
● kube-controller-manager.service - Kubernetes Controller Manager
   Loaded: loaded (/usr/lib/systemd/system/kube-controller-manager.service; disabled; vendor preset: disabled)
   Active: active (running) since 五 2017-11-10 20:14:12 CST; 1s ago
     Docs: https://github.com/GoogleCloudPlatform/kubernetes
 Main PID: 3851 (kube-controller)
   CGroup: /system.slice/kube-controller-manager.service
           └─3851 /usr/local/bin/kube-controller-manager --logtostderr=true --v=0 --master=http://k8s-0:808...

11月 10 20:14:13 k8s-0 kube-controller-manager[3851]: I1110 20:14:13.971671    3851 controller_utils.go...ler
11月 10 20:14:13 k8s-0 kube-controller-manager[3851]: I1110 20:14:13.992354    3851 controller_utils.go...ler
11月 10 20:14:14 k8s-0 kube-controller-manager[3851]: I1110 20:14:14.035607    3851 controller_utils.go...ler
11月 10 20:14:14 k8s-0 kube-controller-manager[3851]: I1110 20:14:14.041518    3851 controller_utils.go...ler
11月 10 20:14:14 k8s-0 kube-controller-manager[3851]: I1110 20:14:14.049764    3851 controller_utils.go...ler
11月 10 20:14:14 k8s-0 kube-controller-manager[3851]: I1110 20:14:14.049799    3851 garbagecollector.go...age
11月 10 20:14:14 k8s-0 kube-controller-manager[3851]: I1110 20:14:14.071155    3851 controller_utils.go...ler
11月 10 20:14:14 k8s-0 kube-controller-manager[3851]: I1110 20:14:14.071394    3851 controller_utils.go...ler
11月 10 20:14:14 k8s-0 kube-controller-manager[3851]: I1110 20:14:14.071563    3851 controller_utils.go...ler
11月 10 20:14:14 k8s-0 kube-controller-manager[3851]: I1110 20:14:14.092450    3851 controller_utils.go...ler
Hint: Some lines were ellipsized, use -l to show in full.

3.5、Install kube-scheduler

3.5.1、Install
[root@k8s-0 ~]# pwd
/root
[root@k8s-0 ~]# cp kubernetes/server/bin/kube-scheduler /usr/local/bin/
### Create the unit file ###
[root@k8s-0 ~]# cat /usr/lib/systemd/system/kube-scheduler.service
[Unit]
Description=Kubernetes Scheduler Plugin
Documentation=https://github.com/GoogleCloudPlatform/kubernetes

[Service]
EnvironmentFile=-/etc/kubernetes/config
EnvironmentFile=-/etc/kubernetes/scheduler
User=kube
ExecStart=/usr/local/bin/kube-scheduler \
            $KUBE_LOGTOSTDERR \
            $KUBE_LOG_LEVEL \
            $KUBE_MASTER \
            $KUBE_SCHEDULER_ARGS
Restart=on-failure
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
3.5.2、Configure
### Configure the /etc/kubernetes/scheduler file ###
[root@k8s-0 ~]# cat /etc/kubernetes/scheduler
###
# kubernetes scheduler config

# default config should be adequate

# Add your own!
KUBE_SCHEDULER_ARGS="--leader-elect=true --address=0.0.0.0"
3.5.3、Start
### Change the owner of the binary ###
[root@k8s-0 ~]# chown kube:kube /usr/local/bin/kube-scheduler 
[root@k8s-0 ~]# ls -l /usr/local/bin/kube-scheduler 
-rwxr-x---. 1 kube kube 53754721 11月  9 01:04 /usr/local/bin/kube-scheduler
### Start the service ###
[root@k8s-0 ~]# systemctl start kube-scheduler
[root@k8s-0 ~]# systemctl status kube-scheduler
● kube-scheduler.service - Kubernetes Scheduler Plugin
   Loaded: loaded (/usr/lib/systemd/system/kube-scheduler.service; disabled; vendor preset: disabled)
   Active: active (running) since 五 2017-11-10 20:14:24 CST; 4s ago
     Docs: https://github.com/GoogleCloudPlatform/kubernetes
 Main PID: 3862 (kube-scheduler)
   CGroup: /system.slice/kube-scheduler.service
           └─3862 /usr/local/bin/kube-scheduler --logtostderr=true --v=0 --master=http://k8s-0:8080 --leade...

11月 10 20:14:24 k8s-0 systemd[1]: Started Kubernetes Scheduler Plugin.
11月 10 20:14:24 k8s-0 systemd[1]: Starting Kubernetes Scheduler Plugin...
11月 10 20:14:24 k8s-0 kube-scheduler[3862]: I1110 20:14:24.904984    3862 controller_utils.go:1041] W...ller
11月 10 20:14:25 k8s-0 kube-scheduler[3862]: I1110 20:14:25.005451    3862 controller_utils.go:1048] C...ller
11月 10 20:14:25 k8s-0 kube-scheduler[3862]: I1110 20:14:25.005533    3862 leaderelection.go:174] atte...e...
11月 10 20:14:25 k8s-0 kube-scheduler[3862]: I1110 20:14:25.015298    3862 leaderelection.go:184] succ...uler
11月 10 20:14:25 k8s-0 kube-scheduler[3862]: I1110 20:14:25.015761    3862 event.go:218] Event(v1.Obje...ader
Hint: Some lines were ellipsized, use -l to show in full.

3.6、Check the status of services on the Master node

[root@k8s-0 ~]# kubectl get cs
NAME                 STATUS    MESSAGE              ERROR
scheduler            Healthy   ok                   
controller-manager   Healthy   ok                   
etcd-1               Healthy   {"health": "true"}   
etcd-2               Healthy   {"health": "true"}   
etcd-0               Healthy   {"health": "true"} 

If you see the output above, the services on the Master node are deployed correctly and kubectl can communicate with the cluster.
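
The same information is available without kubectl by hitting the apiserver's health endpoints directly (an optional check, not from the original session; this deployment serves the insecure port on k8s-0:8080):

### Optional check over the insecure port ###
[root@k8s-0 ~]# curl http://k8s-0:8080/healthz          ### expected output: ok ###
[root@k8s-0 ~]# curl http://k8s-0:8080/healthz/etcd     ### expected output: ok ###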

Further reading:

1、What is JWT

2、Certificate authentication in Kubernetes

4、Deploy the Kubernetes Mission nodes

The Kubernetes services that run on the Mission nodes are kubelet and kube-proxy. Because TLS Bootstrapping was configured earlier, kubelet certificates are signed automatically by the Master, provided the kubelet authenticates with the token and obtains the "system:bootstrappers" group permissions. kube-proxy is a plain client, so it needs a client certificate issued by the root CA (client-ca.pem).
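
For reference, the token file mentioned here (token.csv on the Master) uses the format token,user,uid,"groups"; a representative line, with a made-up token value, looks like this:

### token.csv format sketch — the token value below is hypothetical ###
02b50b05283e98dd0fd71db496ef01e8,kubelet-bootstrap,10001,"system:bootstrappers"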

4.1、Install docker

### Install directly with yum; check the available versions first ###
[root@k8s-1 ~]# yum list | grep docker-common
[root@k8s-1 ~]# yum list | grep docker
docker-client.x86_64                        2:1.12.6-61.git85d7426.el7.centos
docker-common.x86_64                        2:1.12.6-61.git85d7426.el7.centos
cockpit-docker.x86_64                       151-1.el7.centos           extras   
docker.x86_64                               2:1.12.6-61.git85d7426.el7.centos
docker-client-latest.x86_64                 1.13.1-26.git1faa135.el7.centos
docker-devel.x86_64                         1.3.2-4.el7.centos         extras   
docker-distribution.x86_64                  2.6.2-1.git48294d9.el7     extras   
docker-forward-journald.x86_64              1.10.3-44.el7.centos       extras   
docker-latest.x86_64                        1.13.1-26.git1faa135.el7.centos
docker-latest-logrotate.x86_64              1.13.1-26.git1faa135.el7.centos
docker-latest-v1.10-migrator.x86_64         1.13.1-26.git1faa135.el7.centos
docker-logrotate.x86_64                     2:1.12.6-61.git85d7426.el7.centos
docker-lvm-plugin.x86_64                    2:1.12.6-61.git85d7426.el7.centos
docker-novolume-plugin.x86_64               2:1.12.6-61.git85d7426.el7.centos
docker-python.x86_64                        1.4.0-115.el7              extras   
docker-registry.x86_64                      0.9.1-7.el7                extras   
docker-unit-test.x86_64                     2:1.12.6-61.git85d7426.el7.centos
docker-v1.10-migrator.x86_64                2:1.12.6-61.git85d7426.el7.centos
pcp-pmda-docker.x86_64                      3.11.8-7.el7               base     
python-docker-py.noarch                     1.10.6-3.el7               extras   
python-docker-pycreds.noarch                1.10.6-3.el7               extras  
### 安装docker ###
[root@k8s-1 ~]# yum install -y docker
### 启动docker服务 ###
[root@k8s-1 ~]# systemctl start docker
[root@k8s-1 ~]# systemctl status docker
● docker.service - Docker Application Container Engine
   Loaded: loaded (/usr/lib/systemd/system/docker.service; disabled; vendor preset: disabled)
   Active: active (running) since 四 2017-11-09 02:29:12 CST; 14s ago
     Docs: http://docs.docker.com
 Main PID: 4833 (dockerd-current)
   CGroup: /system.slice/docker.service
           ├─4833 /usr/bin/dockerd-current --add-runtime docker-runc=/usr/libexec/docker/docker-runc-curren...
           └─4837 /usr/bin/docker-containerd-current -l unix:///var/run/docker/libcontainerd/docker-contain...

11月 09 02:29:12 k8s-1 dockerd-current[4833]: time="2017-11-09T02:29:12.015481471+08:00" level=info ms...ase"
11月 09 02:29:12 k8s-1 dockerd-current[4833]: time="2017-11-09T02:29:12.049872681+08:00" level=info ms...nds"
11月 09 02:29:12 k8s-1 dockerd-current[4833]: time="2017-11-09T02:29:12.050724567+08:00" level=info ms...rt."
11月 09 02:29:12 k8s-1 dockerd-current[4833]: time="2017-11-09T02:29:12.068030608+08:00" level=info ms...lse"
11月 09 02:29:12 k8s-1 dockerd-current[4833]: time="2017-11-09T02:29:12.128054846+08:00" level=info ms...ess"
11月 09 02:29:12 k8s-1 dockerd-current[4833]: time="2017-11-09T02:29:12.189306705+08:00" level=info ms...ne."
11月 09 02:29:12 k8s-1 dockerd-current[4833]: time="2017-11-09T02:29:12.189594801+08:00" level=info ms...ion"
11月 09 02:29:12 k8s-1 dockerd-current[4833]: time="2017-11-09T02:29:12.189610061+08:00" level=info ms...12.6
11月 09 02:29:12 k8s-1 systemd[1]: Started Docker Application Container Engine.
11月 09 02:29:12 k8s-1 dockerd-current[4833]: time="2017-11-09T02:29:12.210430475+08:00" level=info ms...ock"
Hint: Some lines were ellipsized, use -l to show in full.
[root@k8s-1 ~]# ip address
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 00:50:56:39:3d:4c brd ff:ff:ff:ff:ff:ff
    inet 192.168.119.181/24 brd 192.168.119.255 scope global ens33
       valid_lft forever preferred_lft forever
3: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN 
    link/ether 02:42:cd:de:a1:b0 brd ff:ff:ff:ff:ff:ff
    inet 172.17.0.1/16 scope global docker0
       valid_lft forever preferred_lft forever

The docker info command shows the Cgroup Driver.

4.2、Install the kubelet service

The kubernetes-server-linux-amd64.tar.gz downloaded earlier on the Master node contains all the binaries needed to deploy the kubernetes cluster.

4.2.1、Install
### Copy the kubelet binary to the mission node ###
[root@k8s-0 ~]# scp kubernetes/server/bin/kubelet root@k8s-1:/usr/local/bin/
### 创建Unit文件 ###
[root@k8s-1 ~]# cat /usr/lib/systemd/system/kubelet.service
[Unit]
Description=Kubernetes Kubelet Server
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
After=docker.service
Requires=docker.service

[Service]
WorkingDirectory=/var/lib/kubelet
EnvironmentFile=-/etc/kubernetes/config
EnvironmentFile=-/etc/kubernetes/kubelet
ExecStart=/usr/local/bin/kubelet \
            $KUBE_LOGTOSTDERR \
            $KUBE_LOG_LEVEL \
            $KUBELET_API_SERVER \
            $KUBELET_ADDRESS \
            $KUBELET_PORT \
            $KUBELET_HOSTNAME \
            $KUBE_ALLOW_PRIV \
            $KUBELET_POD_INFRA_CONTAINER \
            $KUBELET_ARGS
Restart=on-failure

[Install]
WantedBy=multi-user.target
4.2.2、Configure
4.2.2.1、Prepare the bootstrap.kubeconfig file
### Create the directory ###
[root@k8s-1 ~]# mkdir -p /etc/kubernetes/ssl
### Copy bootstrap.kubeconfig from the master node to the mission node ###
[root@k8s-0 ~]# scp /etc/kubernetes/bootstrap.kubeconfig root@k8s-1:/etc/kubernetes/
[root@k8s-1 ~]# ls -l /etc/kubernetes/
-rw-------. 1 root root 2265 11月 10 20:26 bootstrap.kubeconfig
drwxr-xr-x. 2 root root    6 11月  9 02:34 ssl
4.2.2.2、Bind a role to kubelet-bootstrap, the TLS Bootstrapping user
[root@k8s-0 ~]# kubectl create clusterrolebinding kubelet-bootstrap --clusterrole=system:node-bootstrapper --user=kubelet-bootstrap
clusterrolebinding "kubelet-bootstrap" created

kubelet-bootstrap is the user specified in the TLS Bootstrapping token file. If this user is not bound to the role, it cannot submit a CSR, and starting kubelet fails with an error like:

error: failed to run Kubelet: cannot create certificate signing request: certificatesigningrequests.certificates.k8s.io is forbidden: User "kubelet-bootstrap" cannot create certificatesigningrequests.certificates.k8s.io at the cluster scope
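
To confirm the binding took effect (a quick extra check, not from the original session):

[root@k8s-0 ~]# kubectl describe clusterrolebinding kubelet-bootstrap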

4.2.2.3、Configure config
[root@k8s-1 kubernetes]# pwd
/etc/kubernetes
[root@k8s-1 kubernetes]# cat config 
###
# kubernetes system config
#
# The following values are used to configure various aspects of all
# kubernetes services, including
#
#   kube-apiserver.service
#   kube-controller-manager.service
#   kube-scheduler.service
#   kubelet.service
#   kube-proxy.service
# logging to stderr means we get it in the systemd journal
KUBE_LOGTOSTDERR="--logtostderr=true"

# journal message level, 0 is debug
KUBE_LOG_LEVEL="--v=0"

# Should this cluster be allowed to run privileged docker containers
KUBE_ALLOW_PRIV="--allow-privileged=true"

# How the controller-manager, scheduler, and proxy find the apiserver
#KUBE_MASTER="--master=http://127.0.0.1:8080"
4.2.2.4、Configure kubelet
[root@k8s-1 kubernetes]# pwd
/etc/kubernetes
[root@k8s-1 kubernetes]# cat kubelet 
###
# kubernetes kubelet (minion) config

# The address for the info server to serve on (set to 0.0.0.0 or "" for all interfaces)
KUBELET_ADDRESS="--address=192.168.119.181"

# The port for the info server to serve on
# KUBELET_PORT="--port=10250"

# You may leave this blank to use the actual hostname
KUBELET_HOSTNAME="--hostname-override=kubernetes-mision-1"

# location of the api-server
# KUBELET_API_SERVER="--api-servers=http://127.0.0.1:8080"

# pod infrastructure container
KUBELET_POD_INFRA_CONTAINER="--pod-infra-container-image=gcr.io/google_containers/pause-amd64:3.0"

# Add your own!
KUBELET_ARGS="--cgroup-driver=systemd \
              --cluster-dns=10.254.0.2 \
              --resolv-conf=/etc/resolv.conf \
              --experimental-bootstrap-kubeconfig=/etc/kubernetes/bootstrap.kubeconfig \
              --kubeconfig=/etc/kubernetes/kubelet.kubeconfig \
              --fail-swap-on=false \
              --cert-dir=/etc/kubernetes/ssl \
              --cluster-domain=cluster.local. \
              --hairpin-mode=promiscuous-bridge \
              --serialize-image-pulls=false \
              --runtime-cgroups=/systemd/system.slice \
              --kubelet-cgroups=/systemd/system.slice"
[root@k8s-1 kubernetes]# ls -l
-rw-------. 1 root root 2265 11月 10 20:26 bootstrap.kubeconfig
-rw-r--r--. 1 root root  655 11月  9 02:48 config
-rw-r--r--. 1 root root 1205 11月 10 15:40 kubelet
drwxr-xr-x. 2 root root    6 11月 10 17:46 ssl
4.2.3、Start
### Start the kubelet service ###
[root@k8s-1 kubernetes]# systemctl start kubelet 
[root@k8s-1 kubernetes]# systemctl status kubelet
● kubelet.service - Kubernetes Kubelet Server
   Loaded: loaded (/usr/lib/systemd/system/kubelet.service; disabled; vendor preset: disabled)
   Active: active (running) since 五 2017-11-10 20:27:39 CST; 7s ago
     Docs: https://github.com/GoogleCloudPlatform/kubernetes
 Main PID: 3837 (kubelet)
   CGroup: /system.slice/kubelet.service
           └─3837 /usr/local/bin/kubelet --logtostderr=true --v=0 --address=192.168.119.181 --hostname-over...

11月 10 20:27:39 k8s-1 systemd[1]: Started Kubernetes Kubelet Server.
11月 10 20:27:39 k8s-1 systemd[1]: Starting Kubernetes Kubelet Server...
11月 10 20:27:39 k8s-1 kubelet[3837]: I1110 20:27:39.543227    3837 feature_gate.go:156] feature gates: map[]
11月 10 20:27:39 k8s-1 kubelet[3837]: I1110 20:27:39.543479    3837 controller.go:114] kubelet config...oller
11月 10 20:27:39 k8s-1 kubelet[3837]: I1110 20:27:39.543483    3837 controller.go:118] kubelet config...flags
11月 10 20:27:40 k8s-1 kubelet[3837]: I1110 20:27:40.064289    3837 client.go:75] Connecting to docke....sock
11月 10 20:27:40 k8s-1 kubelet[3837]: I1110 20:27:40.064322    3837 client.go:95] Start docker client...=2m0s
11月 10 20:27:40 k8s-1 kubelet[3837]: W1110 20:27:40.067246    3837 cni.go:196] Unable to update cni ...net.d
11月 10 20:27:40 k8s-1 kubelet[3837]: I1110 20:27:40.076866    3837 feature_gate.go:156] feature gates: map[]
11月 10 20:27:40 k8s-1 kubelet[3837]: W1110 20:27:40.076980    3837 server.go:289] --cloud-provider=a...citly
Hint: Some lines were ellipsized, use -l to show in full.
4.2.4、Query and approve the CSR
[root@k8s-0 kubernetes]# kubectl get csr
NAME                                                   AGE       REQUESTOR           CONDITION
node-csr-ZWf_4Q3ljwNqPJ_KKxLnS2s5ddOa5nCw9b0o0FLbPro   10s       kubelet-bootstrap   Pending
[root@k8s-0 kubernetes]# kubectl certificate approve node-csr-ZWf_4Q3ljwNqPJ_KKxLnS2s5ddOa5nCw9b0o0FLbPro
certificatesigningrequest "node-csr-ZWf_4Q3ljwNqPJ_KKxLnS2s5ddOa5nCw9b0o0FLbPro" approved
[root@k8s-0 kubernetes]# kubectl get csr
NAME                                                   AGE       REQUESTOR           CONDITION
node-csr-ZWf_4Q3ljwNqPJ_KKxLnS2s5ddOa5nCw9b0o0FLbPro   2m        kubelet-bootstrap   Approved,Issued
[root@k8s-0 kubernetes]# kubectl get nodes
NAME      STATUS    ROLES     AGE       VERSION
k8s-1     Ready     <none>    6s        v1.8.1

4.3、Install the kube-proxy service

4.3.1、Install
[root@k8s-0 ~]# pwd
/root
[root@k8s-0 ~]# scp kubernetes/server/bin/kube-proxy root@k8s-1:/usr/local/bin/
[root@k8s-1 kubernetes]# cat /usr/lib/systemd/system/kube-proxy.service
[Unit]
Description=Kubernetes Kube-Proxy Server
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
After=network.target

[Service]
EnvironmentFile=-/etc/kubernetes/config
EnvironmentFile=-/etc/kubernetes/proxy
ExecStart=/usr/local/bin/kube-proxy \
            $KUBE_LOGTOSTDERR \
            $KUBE_LOG_LEVEL \
            $KUBE_MASTER \
            $KUBE_PROXY_ARGS
Restart=on-failure
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
4.3.2、Configure
4.3.2.1、Issue the kube-proxy certificate
[root@k8s-0 kubernetes]# pwd
/root/cfssl/kubernetes
[root@k8s-0 kubernetes]# cat kubernetes-client-proxy-csr.json 
{
    "CN": "system:kube-proxy",
    "hosts": [
        "localhost",
        "127.0.0.1",
        "k8s-1",
        "k8s-2",
        "k8s-3"
    ],
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "L": "Wuhan",
            "ST": "Hubei",
            "O": "Dameng",
            "OU": "system"
        }
    ]
}
[root@k8s-0 kubernetes]# cfssl gencert -ca=kubernetes-root-ca.pem -ca-key=kubernetes-root-ca-key.pem  -config=ca-config.json -profile=client kubernetes-client-proxy-csr.json | cfssljson -bare kubernetes-client-proxy
2017/11/10 20:50:27 [INFO] generate received request
2017/11/10 20:50:27 [INFO] received CSR
2017/11/10 20:50:27 [INFO] generating key: rsa-2048
2017/11/10 20:50:28 [INFO] encoded CSR
2017/11/10 20:50:28 [INFO] signed certificate with serial number 319926141282708642124995329378952678953790336868
2017/11/10 20:50:28 [WARNING] This certificate lacks a "hosts" field. This makes it unsuitable for
websites. For more information see the Baseline Requirements for the Issuance and Management
of Publicly-Trusted Certificates, v.1.1.6, from the CA/Browser Forum (https://cabforum.org);
specifically, section 10.2.3 ("Information Requirements").
[root@k8s-0 kubernetes]# ls -l
-rw-r--r--. 1 root root  833 11月 10 16:29 ca-config.json
-rw-r--r--. 1 root root 1086 11月 10 19:28 kubernetes-client-kubectl.csr
-rw-r--r--. 1 root root  356 11月 10 18:17 kubernetes-client-kubectl-csr.json
-rw-------. 1 root root 1675 11月 10 19:28 kubernetes-client-kubectl-key.pem
-rw-r--r--. 1 root root 1460 11月 10 19:28 kubernetes-client-kubectl.pem
-rw-r--r--. 1 root root 1074 11月 10 20:50 kubernetes-client-proxy.csr
-rw-r--r--. 1 root root  347 11月 10 20:48 kubernetes-client-proxy-csr.json
-rw-------. 1 root root 1679 11月 10 20:50 kubernetes-client-proxy-key.pem
-rw-r--r--. 1 root root 1452 11月 10 20:50 kubernetes-client-proxy.pem
-rw-r--r--. 1 root root 1021 11月 10 19:20 kubernetes-root-ca.csr
-rw-r--r--. 1 root root  279 11月 10 18:04 kubernetes-root-ca-csr.json
-rw-------. 1 root root 1675 11月 10 19:20 kubernetes-root-ca-key.pem
-rw-r--r--. 1 root root 1395 11月 10 19:20 kubernetes-root-ca.pem
-rw-r--r--. 1 root root 1277 11月 10 19:42 kubernetes-server.csr
-rw-r--r--. 1 root root  556 11月 10 19:40 kubernetes-server-csr.json
-rw-------. 1 root root 1675 11月 10 19:42 kubernetes-server-key.pem
-rw-r--r--. 1 root root 1651 11月 10 19:42 kubernetes-server.pem
[root@k8s-0 kubernetes]# cp kubernetes-client-proxy.pem kubernetes-client-proxy-key.pem /etc/kubernetes/ssl/
[root@k8s-0 kubernetes]# ls -l /etc/kubernetes/ssl/
-rw-------. 1 kube kube 1679 11月 10 19:58 etcd-client-kubernetes-key.pem
-rw-r--r--. 1 kube kube 1476 11月 10 19:58 etcd-client-kubernetes.pem
-rw-r--r--. 1 kube kube 1403 11月 10 19:57 etcd-root-ca.pem
-rw-------. 1 kube kube 1675 11月 10 19:46 kubernetes-client-kubectl-key.pem
-rw-r--r--. 1 kube kube 1460 11月 10 19:46 kubernetes-client-kubectl.pem
-rw-------. 1 root root 1679 11月 10 20:51 kubernetes-client-proxy-key.pem
-rw-r--r--. 1 root root 1452 11月 10 20:51 kubernetes-client-proxy.pem
-rw-------. 1 kube kube 1675 11月 10 19:57 kubernetes-root-ca-key.pem
-rw-r--r--. 1 kube kube 1395 11月 10 19:46 kubernetes-root-ca.pem
-rw-------. 1 kube kube 1675 11月 10 19:56 kubernetes-server-key.pem
-rw-r--r--. 1 kube kube 1651 11月 10 19:56 kubernetes-server.pem
4.3.2.2、Create the kube-proxy.kubeconfig file
[root@k8s-0 kubernetes]# pwd
/etc/kubernetes
[root@k8s-0 kubernetes]# kubectl config set-cluster kubernetes-cluster --certificate-authority=/etc/kubernetes/ssl/kubernetes-root-ca.pem --embed-certs=true --server="https://k8s-0:6443" --kubeconfig=kube-proxy.kubeconfig
Cluster "kubernetes-cluster" set.
[root@k8s-0 kubernetes]# kubectl config set-credentials kube-proxy --client-certificate=/etc/kubernetes/ssl/kubernetes-client-proxy.pem --client-key=/etc/kubernetes/ssl/kubernetes-client-proxy-key.pem --embed-certs=true --kubeconfig=kube-proxy.kubeconfig
User "kube-proxy" set.
[root@k8s-0 kubernetes]# kubectl config set-context kube-proxy --cluster=kubernetes-cluster --user=kube-proxy --kubeconfig=kube-proxy.kubeconfig
Context "kube-proxy" created.
[root@k8s-0 kubernetes]# kubectl config use-context kube-proxy --kubeconfig=kube-proxy.kubeconfig
Switched to context "kube-proxy".
[root@k8s-0 kubernetes]# ls -l
-rw-r--r--. 1 kube kube 2172 11月 10 20:06 apiserver
-rw-r--r--. 1 kube kube  113 11月  8 23:42 audit-policy.yaml
-rw-------. 1 kube kube 2265 11月 10 20:03 bootstrap.kubeconfig
-rw-r--r--. 1 kube kube  696 11月  8 23:23 config
-rw-r--r--. 1 kube kube  991 11月 10 20:35 controller-manager
-rw-------. 1 root root 6421 11月 10 20:57 kube-proxy.kubeconfig
-rw-r--r--. 1 kube kube  148 11月  9 01:07 scheduler
drwxr-xr-x. 2 kube kube 4096 11月 10 20:51 ssl
-rw-r--r--. 1 kube kube   80 11月 10 20:00 token.csv
[root@k8s-0 kubernetes]# scp kube-proxy.kubeconfig root@k8s-1:/etc/kubernetes/
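
Before copying the file over, it can be inspected (an optional check; certificates embedded via --embed-certs are shown redacted):

[root@k8s-0 kubernetes]# kubectl config view --kubeconfig=kube-proxy.kubeconfig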
4.3.2.3、Configure proxy
[root@k8s-1 kubernetes]# pwd
/etc/kubernetes
[root@k8s-1 kubernetes]# cat proxy 
###
# kubernetes proxy config

# default config should be adequate

# Add your own!
KUBE_PROXY_ARGS="--bind-address=192.168.119.181 \
                 --hostname-override=kubernetes-mision-1 \
                 --kubeconfig=/etc/kubernetes/kube-proxy.kubeconfig \
                 --cluster-cidr=10.254.0.0/16"
4.3.2.4、Start kube-proxy
[root@k8s-1 kubernetes]# systemctl start kube-proxy
[root@k8s-1 kubernetes]# systemctl status kube-proxy 
● kube-proxy.service - Kubernetes Kube-Proxy Server
   Loaded: loaded (/usr/lib/systemd/system/kube-proxy.service; disabled; vendor preset: disabled)
   Active: active (running) since 五 2017-11-10 21:07:08 CST; 6s ago
     Docs: https://github.com/GoogleCloudPlatform/kubernetes
 Main PID: 4334 (kube-proxy)
   CGroup: /system.slice/kube-proxy.service
           ‣ 4334 /usr/local/bin/kube-proxy --logtostderr=true --v=0 --bind-address=192.168.119.181 --hostn...

11月 10 21:07:08 k8s-1 kube-proxy[4334]: I1110 21:07:08.159055    4334 conntrack.go:98] Set sysctl 'ne...1072
11月 10 21:07:08 k8s-1 kube-proxy[4334]: I1110 21:07:08.159083    4334 conntrack.go:52] Setting nf_con...1072
11月 10 21:07:08 k8s-1 kube-proxy[4334]: I1110 21:07:08.159107    4334 conntrack.go:98] Set sysctl 'ne...6400
11月 10 21:07:08 k8s-1 kube-proxy[4334]: I1110 21:07:08.159120    4334 conntrack.go:98] Set sysctl 'ne...3600
11月 10 21:07:08 k8s-1 kube-proxy[4334]: I1110 21:07:08.159958    4334 config.go:202] Starting service...ller
11月 10 21:07:08 k8s-1 kube-proxy[4334]: I1110 21:07:08.159968    4334 controller_utils.go:1041] Waiti...ller
11月 10 21:07:08 k8s-1 kube-proxy[4334]: I1110 21:07:08.160091    4334 config.go:102] Starting endpoin...ller
11月 10 21:07:08 k8s-1 kube-proxy[4334]: I1110 21:07:08.160101    4334 controller_utils.go:1041] Waiti...ller
11月 10 21:07:08 k8s-1 kube-proxy[4334]: I1110 21:07:08.260670    4334 controller_utils.go:1048] Cache...ller
11月 10 21:07:08 k8s-1 kube-proxy[4334]: I1110 21:07:08.260862    4334 controller_utils.go:1048] Cache...ller
Hint: Some lines were ellipsized, use -l to show in full.

4.4、Other Mission nodes (adding nodes to an existing cluster)

Deployment: repeat sections 4.1 through 4.3; the only difference is that the certificates and kubeconfig files do not need to be generated again.
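
The per-node steps below can also be pushed from the Master in one go; a sketch assuming passwordless SSH to the new node (NODE is a placeholder):

### Hypothetical helper: stage a new mission node from the Master ###
[root@k8s-0 ~]# NODE=k8s-3
[root@k8s-0 ~]# scp kubernetes/server/bin/kubelet kubernetes/server/bin/kube-proxy root@$NODE:/usr/local/bin/
[root@k8s-0 ~]# ssh root@$NODE 'mkdir -p /etc/kubernetes/ssl /var/lib/kubelet'
[root@k8s-0 ~]# scp /etc/kubernetes/bootstrap.kubeconfig /etc/kubernetes/kube-proxy.kubeconfig root@$NODE:/etc/kubernetes/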

4.4.1、Install
[root@k8s-2 ~]# yum install -y docker

[root@k8s-0 ~]# scp kubernetes/server/bin/kubelet root@k8s-2:/usr/local/bin/
[root@k8s-0 ~]# scp kubernetes/server/bin/kube-proxy root@k8s-2:/usr/local/bin/

[root@k8s-2 ~]# mkdir -p /etc/kubernetes/ssl/

[root@k8s-0 ~]# scp /etc/kubernetes/bootstrap.kubeconfig root@k8s-2:/etc/kubernetes/
[root@k8s-0 ~]# scp /etc/kubernetes/kube-proxy.kubeconfig root@k8s-2:/etc/kubernetes/

[root@k8s-2 ~]# ls -lR /etc/kubernetes/
/etc/kubernetes/:
-rw-------. 1 root root 2265 11月 12 11:49 bootstrap.kubeconfig
-rw-------. 1 root root 6453 11月 12 11:50 kube-proxy.kubeconfig
drwxr-xr-x. 2 root root    6 11月 12 11:49 ssl

/etc/kubernetes/ssl:

[root@k8s-2 ~]# cat /usr/lib/systemd/system/kubelet.service
[Unit]
Description=Kubernetes Kubelet Server
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
After=docker.service
Requires=docker.service

[Service]
WorkingDirectory=/var/lib/kubelet
EnvironmentFile=-/etc/kubernetes/config
EnvironmentFile=-/etc/kubernetes/kubelet
ExecStart=/usr/local/bin/kubelet \
            $KUBE_LOGTOSTDERR \
            $KUBE_LOG_LEVEL \
            $KUBELET_API_SERVER \
            $KUBELET_ADDRESS \
            $KUBELET_PORT \
            $KUBELET_HOSTNAME \
            $KUBE_ALLOW_PRIV \
            $KUBELET_POD_INFRA_CONTAINER \
            $KUBELET_ARGS
Restart=on-failure

[Install]
WantedBy=multi-user.target

[root@k8s-2 ~]# cat /usr/lib/systemd/system/kube-proxy.service
[Unit]
Description=Kubernetes Kube-Proxy Server
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
After=network.target

[Service]
EnvironmentFile=-/etc/kubernetes/config
EnvironmentFile=-/etc/kubernetes/proxy
ExecStart=/usr/local/bin/kube-proxy \
            $KUBE_LOGTOSTDERR \
            $KUBE_LOG_LEVEL \
            $KUBE_MASTER \
            $KUBE_PROXY_ARGS
Restart=on-failure
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
4.4.2、Configure
[root@k8s-2 kubernetes]# cat /etc/kubernetes/config 
###
# kubernetes system config
#
# The following values are used to configure various aspects of all
# kubernetes services, including
#
#   kube-apiserver.service
#   kube-controller-manager.service
#   kube-scheduler.service
#   kubelet.service
#   kube-proxy.service
# logging to stderr means we get it in the systemd journal
KUBE_LOGTOSTDERR="--logtostderr=true"

# journal message level, 0 is debug
KUBE_LOG_LEVEL="--v=0"

# Should this cluster be allowed to run privileged docker containers
KUBE_ALLOW_PRIV="--allow-privileged=true"

# How the controller-manager, scheduler, and proxy find the apiserver
#KUBE_MASTER="--master=http://127.0.0.1:8080"

[root@k8s-2 kubernetes]# cat /etc/kubernetes/kubelet 
###
# kubernetes kubelet (minion) config

# The address for the info server to serve on (set to 0.0.0.0 or "" for all interfaces)
KUBELET_ADDRESS="--address=192.168.119.182"

# The port for the info server to serve on
# KUBELET_PORT="--port=10250"

# You may leave this blank to use the actual hostname
KUBELET_HOSTNAME="--hostname-override=k8s-2"

# location of the api-server
# KUBELET_API_SERVER="--api-servers=http://127.0.0.1:8080"

# pod infrastructure container
KUBELET_POD_INFRA_CONTAINER="--pod-infra-container-image=gcr.io/google_containers/pause-amd64:3.0"

# Add your own!
KUBELET_ARGS="--cgroup-driver=systemd \
              --cluster-dns=10.254.0.2 \
              --resolv-conf=/etc/resolv.conf \
              --experimental-bootstrap-kubeconfig=/etc/kubernetes/bootstrap.kubeconfig \
              --kubeconfig=/etc/kubernetes/kubelet.kubeconfig \
              --fail-swap-on=false \
              --cert-dir=/etc/kubernetes/ssl \
              --cluster-domain=cluster.local. \
              --hairpin-mode=promiscuous-bridge \
              --serialize-image-pulls=false \
              --runtime-cgroups=/systemd/system.slice \
              --kubelet-cgroups=/systemd/system.slice"

[root@k8s-2 kubernetes]# cat /etc/kubernetes/proxy 
###
# kubernetes proxy config

# default config should be adequate

# Add your own!
KUBE_PROXY_ARGS="--bind-address=192.168.119.182 \
                 --hostname-override=k8s-2 \
                 --kubeconfig=/etc/kubernetes/kube-proxy.kubeconfig \
                 --cluster-cidr=10.254.0.0/16"

[root@k8s-2 kubernetes]# ls -lR /etc/kubernetes/
/etc/kubernetes/:
-rw-------. 1 root root 2265 11月 12 11:49 bootstrap.kubeconfig
-rw-r--r--. 1 root root  655 11月 12 11:57 config
-rw-r--r--. 1 root root 1205 11月 12 11:59 kubelet
-rw-------. 1 root root 6453 11月 12 11:50 kube-proxy.kubeconfig
-rw-r--r--. 1 root root  310 11月 12 12:16 proxy
drwxr-xr-x. 2 root root    6 11月 12 11:49 ssl

/etc/kubernetes/ssl:
4.4.3、Start
[root@k8s-2 ~]# systemctl start kubelet
[root@k8s-2 ~]# systemctl status kubelet
● kubelet.service - Kubernetes Kubelet Server
   Loaded: loaded (/usr/lib/systemd/system/kubelet.service; disabled; vendor preset: disabled)
   Active: active (running) since 日 2017-11-12 12:11:03 CST; 3s ago
     Docs: https://github.com/GoogleCloudPlatform/kubernetes
 Main PID: 2301 (kubelet)
   CGroup: /system.slice/kubelet.service
           └─2301 /usr/local/bin/kubelet --logtostderr=true --v=0 --address=192.168.119.182 --hostname-over...

11月 12 12:11:03 k8s-2 systemd[1]: Started Kubernetes Kubelet Server.
11月 12 12:11:03 k8s-2 systemd[1]: Starting Kubernetes Kubelet Server...
11月 12 12:11:03 k8s-2 kubelet[2301]: I1112 12:11:03.868488    2301 feature_gate.go:156] feature gates: map[]
11月 12 12:11:03 k8s-2 kubelet[2301]: I1112 12:11:03.868848    2301 controller.go:114] kubelet config...oller
11月 12 12:11:03 k8s-2 kubelet[2301]: I1112 12:11:03.868855    2301 controller.go:118] kubelet config...flags
11月 12 12:11:03 k8s-2 kubelet[2301]: I1112 12:11:03.881541    2301 client.go:75] Connecting to docke....sock
11月 12 12:11:03 k8s-2 kubelet[2301]: I1112 12:11:03.881640    2301 client.go:95] Start docker client...=2m0s
11月 12 12:11:03 k8s-2 kubelet[2301]: W1112 12:11:03.891364    2301 cni.go:196] Unable to update cni ...net.d
11月 12 12:11:03 k8s-2 kubelet[2301]: I1112 12:11:03.911496    2301 feature_gate.go:156] feature gates: map[]
11月 12 12:11:03 k8s-2 kubelet[2301]: W1112 12:11:03.911626    2301 server.go:289] --cloud-provider=a...citly
Hint: Some lines were ellipsized, use -l to show in full.

[root@k8s-0 ~]# kubectl get csr
NAME                                                   AGE       REQUESTOR           CONDITION
node-csr-ZWf_4Q3ljwNqPJ_KKxLnS2s5ddOa5nCw9b0o0FLbPro   1d        kubelet-bootstrap   Approved,Issued
node-csr-vB90SM4Qb4tW36zoSAf5lZZ8q8fB3mOF2g8VL06gjBo   32s       kubelet-bootstrap   Pending
[root@k8s-0 ~]# kubectl certificate approve node-csr-vB90SM4Qb4tW36zoSAf5lZZ8q8fB3mOF2g8VL06gjBo
certificatesigningrequest "node-csr-vB90SM4Qb4tW36zoSAf5lZZ8q8fB3mOF2g8VL06gjBo" approved
[root@k8s-0 ~]# kubectl get csr
NAME                                                   AGE       REQUESTOR           CONDITION
node-csr-ZWf_4Q3ljwNqPJ_KKxLnS2s5ddOa5nCw9b0o0FLbPro   1d        kubelet-bootstrap   Approved,Issued
node-csr-vB90SM4Qb4tW36zoSAf5lZZ8q8fB3mOF2g8VL06gjBo   1m        kubelet-bootstrap   Approved,Issued
[root@k8s-0 ~]# kubectl get nodes
NAME      STATUS    ROLES     AGE       VERSION
k8s-1     Ready     <none>    1d        v1.8.1
k8s-2     Ready     <none>    10s       v1.8.1

[root@k8s-2 kubernetes]# systemctl start kube-proxy
[root@k8s-2 kubernetes]# systemctl status kube-proxy
● kube-proxy.service - Kubernetes Kube-Proxy Server
   Loaded: loaded (/usr/lib/systemd/system/kube-proxy.service; disabled; vendor preset: disabled)
   Active: active (running) since 日 2017-11-12 12:18:05 CST; 3s ago
     Docs: https://github.com/GoogleCloudPlatform/kubernetes
 Main PID: 2531 (kube-proxy)
   CGroup: /system.slice/kube-proxy.service
           ‣ 2531 /usr/local/bin/kube-proxy --logtostderr=true --v=0 --bind-address=192.168.119.182 --hostn...

11月 12 12:18:05 k8s-2 kube-proxy[2531]: I1112 12:18:05.516895    2531 conntrack.go:52] Setting nf_con...1072
11月 12 12:18:05 k8s-2 kube-proxy[2531]: I1112 12:18:05.526725    2531 conntrack.go:83] Setting conntr...2768
11月 12 12:18:05 k8s-2 kube-proxy[2531]: I1112 12:18:05.526946    2531 conntrack.go:98] Set sysctl 'ne...6400
11月 12 12:18:05 k8s-2 kube-proxy[2531]: I1112 12:18:05.526977    2531 conntrack.go:98] Set sysctl 'ne...3600
11月 12 12:18:05 k8s-2 kube-proxy[2531]: I1112 12:18:05.527725    2531 config.go:202] Starting service...ller
11月 12 12:18:05 k8s-2 kube-proxy[2531]: I1112 12:18:05.527735    2531 controller_utils.go:1041] Waiti...ller
11月 12 12:18:05 k8s-2 kube-proxy[2531]: I1112 12:18:05.527777    2531 config.go:102] Starting endpoin...ller
11月 12 12:18:05 k8s-2 kube-proxy[2531]: I1112 12:18:05.527781    2531 controller_utils.go:1041] Waiti...ller
11月 12 12:18:05 k8s-2 kube-proxy[2531]: I1112 12:18:05.628035    2531 controller_utils.go:1048] Cache...ller
11月 12 12:18:05 k8s-2 kube-proxy[2531]: I1112 12:18:05.628125    2531 controller_utils.go:1048] Cache...ller
Hint: Some lines were ellipsized, use -l to show in full.

5、Install and configure Flanneld

Flannel is a network fabric for Kubernetes designed by the CoreOS team. In short, it gives Docker containers created on different cluster nodes virtual IP addresses that are unique across the whole cluster.

In the default Docker configuration, the Docker daemon on each node assigns container IPs independently, so containers on different nodes can end up with the same IP address. Flannel re-plans IP address allocation for all nodes in the cluster, so that containers on different nodes get non-overlapping addresses "within one internal network" and can talk to each other directly over those internal IPs.

5.1、Install

[root@k8s-1 images]# yum list | grep flannel
flannel.x86_64                              0.7.1-2.el7                extras   
[root@k8s-1 images]# yum install -y flannel

[root@k8s-1 ~]# cat /usr/lib/systemd/system/flanneld.service
[Unit]
Description=Flanneld overlay address etcd agent
After=network.target
After=network-online.target
Wants=network-online.target
After=etcd.service
Before=docker.service

[Service]
Type=notify
EnvironmentFile=/etc/sysconfig/flanneld
EnvironmentFile=-/etc/sysconfig/docker-network
ExecStart=/usr/bin/flanneld-start $FLANNEL_ETCD_ENDPOINTS $FLANNEL_ETCD_PREFIX $FLANNEL_OPTIONS 
ExecStartPost=/usr/libexec/flannel/mk-docker-opts.sh -k DOCKER_NETWORK_OPTIONS -d /run/flannel/docker
Restart=on-failure

[Install]
WantedBy=multi-user.target
RequiredBy=docker.service

5.2、Certificates

flannel needs to connect to etcd, so it needs etcd client certificates. Here we reuse the client certificate used earlier for etcd testing and store the files under /etc/kubernetes/ssl.

[root@k8s-0 ssl]# scp -r etcd-* root@k8s-1:/etc/kubernetes/ssl/
etcd-client-kubernetes-key.pem                                              100% 1679     1.6KB/s   00:00    
etcd-client-kubernetes.pem                                                  100% 1476     1.4KB/s   00:00    
etcd-root-ca.pem                                                            100% 1403     1.4KB/s   00:00

[root@k8s-1 ~]# ls -l /etc/kubernetes/ssl/
-rw-------. 1 root root 1679 11月 12 18:50 etcd-client-kubernetes-key.pem
-rw-r--r--. 1 root root 1476 11月 12 18:50 etcd-client-kubernetes.pem
-rw-r--r--. 1 root root 1403 11月 12 18:50 etcd-root-ca.pem
-rw-r--r--. 1 root root 1054 11月 10 20:39 kubelet-client.crt
-rw-------. 1 root root  227 11月 10 20:38 kubelet-client.key
-rw-r--r--. 1 root root 1094 11月 10 20:38 kubelet.crt
-rw-------. 1 root root 1679 11月 10 20:38 kubelet.key

5.3、Configure

[root@k8s-1 ~]# cat /etc/sysconfig/flanneld
# Flanneld configuration options  

# etcd url location.  Point this to the server where etcd runs
FLANNEL_ETCD_ENDPOINTS="-etcd-endpoints=https://etcd-1:2379,https://etcd-2:2379,https://etcd-3:2379"

# etcd config key.  This is the configuration key that flannel queries
# For address range assignment
FLANNEL_ETCD_PREFIX="-etcd-prefix=/atomic.io/network"

# Any additional options that you want to pass
FLANNEL_OPTIONS="-etcd-cafile=/etc/kubernetes/ssl/etcd-root-ca.pem -etcd-certfile=/etc/kubernetes/ssl/etcd-client-kubernetes.pem -etcd-keyfile=/etc/kubernetes/ssl/etcd-client-kubernetes-key.pem"

[root@k8s-0 etcd]# pwd
/root/cfssl/etcd
[root@k8s-0 etcd]# etcdctl --ca-file=ca.pem --cert-file=certificates-client.pem --key-file=certificates-client-key.pem --endpoints=https://etcd-1:2379,https://etcd-2:2379,https://etcd-3:2379 mk /atomic.io/network/config '{ "Network": "10.2.0.0/16", "SubnetLen":24, "Backend": {"Type": "vxlan"}}'
{ "Network": "10.2.0.0/16", "SubnetLen":24, "Backend": {"Type": "vxlan"}}

5.4、Start

[root@k8s-1 ~]# systemctl start flanneld
[root@k8s-1 ~]# systemctl status flanneld
● flanneld.service - Flanneld overlay address etcd agent
   Loaded: loaded (/usr/lib/systemd/system/flanneld.service; disabled; vendor preset: disabled)
   Active: active (running) since 一 2017-11-13 10:37:49 CST; 1min 12s ago
  Process: 2255 ExecStartPost=/usr/libexec/flannel/mk-docker-opts.sh -k DOCKER_NETWORK_OPTIONS -d /run/flannel/docker (code=exited, status=0/SUCCESS)
 Main PID: 2243 (flanneld)
   CGroup: /system.slice/flanneld.service
           └─2243 /usr/bin/flanneld -etcd-endpoints=-etcd-endpoints=https://etcd-1:2379,https://etcd-2:2379...

11月 13 10:37:49 k8s-1 flanneld[2243]: warning: ignoring ServerName for user-provided CA for backwards...ated
11月 13 10:37:49 k8s-1 flanneld-start[2243]: I1113 10:37:49.405403    2243 main.go:132] Installing sig...lers
11月 13 10:37:49 k8s-1 flanneld-start[2243]: I1113 10:37:49.407174    2243 manager.go:136] Determining...face
11月 13 10:37:49 k8s-1 flanneld-start[2243]: I1113 10:37:49.407880    2243 manager.go:149] Using inter....181
11月 13 10:37:49 k8s-1 flanneld-start[2243]: I1113 10:37:49.407942    2243 manager.go:166] Defaulting ...181)
11月 13 10:37:49 k8s-1 flanneld-start[2243]: I1113 10:37:49.459398    2243 local_manager.go:134] Found...sing
11月 13 10:37:49 k8s-1 flanneld-start[2243]: I1113 10:37:49.464776    2243 manager.go:250] Lease acqui...0/24
11月 13 10:37:49 k8s-1 flanneld-start[2243]: I1113 10:37:49.465318    2243 network.go:58] Watching for...sses
11月 13 10:37:49 k8s-1 flanneld-start[2243]: I1113 10:37:49.465327    2243 network.go:66] Watching for...ases
1113 10:37:49 k8s-1 systemd[1]: Started Flanneld overlay address etcd agent.
Hint: Some lines were ellipsized, use -l to show in full.
[root@k8s-1 ~]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 00:50:56:39:3d:4c brd ff:ff:ff:ff:ff:ff
    inet 192.168.119.181/24 brd 192.168.119.255 scope global ens33
       valid_lft forever preferred_lft forever
    inet6 fe80::250:56ff:fe39:3d4c/64 scope link 
       valid_lft forever preferred_lft forever
3: flannel.1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue state UNKNOWN 
    link/ether 4a:77:38:8c:94:ce brd ff:ff:ff:ff:ff:ff
    inet 10.2.81.0/32 scope global flannel.1
       valid_lft forever preferred_lft forever
### Restart docker; normally flannel.1 and docker0 should now be in the same subnet ###
[root@k8s-1 ~]# systemctl restart docker
[root@k8s-1 ~]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 00:50:56:39:3d:4c brd ff:ff:ff:ff:ff:ff
    inet 192.168.119.181/24 brd 192.168.119.255 scope global ens33
       valid_lft forever preferred_lft forever
    inet6 fe80::250:56ff:fe39:3d4c/64 scope link 
       valid_lft forever preferred_lft forever
3: flannel.1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue state UNKNOWN 
    link/ether 4a:77:38:8c:94:ce brd ff:ff:ff:ff:ff:ff
    inet 10.2.81.0/32 scope global flannel.1
       valid_lft forever preferred_lft forever
4: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN 
    link/ether 02:42:ed:34:86:6c brd ff:ff:ff:ff:ff:ff
    inet 10.2.81.1/24 scope global docker0
       valid_lft forever preferred_lft forever

The remaining two nodes are set up exactly the same way; just don't register the network configuration in etcd again!
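
To see which nodes have registered a lease, the subnet keys can be listed from etcd (an optional check, reusing the client certificates from the mk command above); one key per node should appear, e.g. 10.2.81.0-24 for k8s-1:

[root@k8s-0 etcd]# etcdctl --ca-file=ca.pem --cert-file=certificates-client.pem --key-file=certificates-client-key.pem --endpoints=https://etcd-1:2379,https://etcd-2:2379,https://etcd-3:2379 ls /atomic.io/network/subnets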

6、Deploy the traefik-ingress service

An ingress exposes services inside the cluster to the outside. Via hostNetwork the ingress attaches directly to the physical network, accepts external requests, and forwards them into the cluster according to the configured rules.

6.1、YML file contents

[root@k8s-0 addons]# pwd
/root/yml/addons
[root@k8s-0 addons]# cat traefik-ingress.yaml 
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: traefik-ingress-controller
rules:
  - apiGroups:
      - ""
    resources:
      - services
      - endpoints
      - secrets
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - extensions
    resources:
      - ingresses
    verbs:
      - get
      - list
      - watch
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: traefik-ingress-controller
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: traefik-ingress-controller
subjects:
- kind: ServiceAccount
  name: traefik-ingress-controller
  namespace: kube-system
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: traefik-ingress-controller
  namespace: kube-system
---
kind: DaemonSet
apiVersion: extensions/v1beta1
metadata:
  name: traefik-ingress-controller
  namespace: kube-system
  labels:
    k8s-app: traefik-ingress-lb
spec:
  template:
    metadata:
      labels:
        k8s-app: traefik-ingress-lb
        name: traefik-ingress-lb
    spec:
      serviceAccountName: traefik-ingress-controller
      terminationGracePeriodSeconds: 60
      hostNetwork: true
      nodeSelector:
        edgenode: "true"
      containers:
      - image: docker.io/traefik:v1.4.1
        name: traefik-ingress-lb
        ports:
        - name: http
          containerPort: 80
          hostPort: 80
        - name: admin
          containerPort: 8080
        securityContext:
          privileged: true
        args:
        - -d
        - --web
        - --kubernetes
---
apiVersion: v1
kind: Service
metadata:
  name: traefik-web-ui
  namespace: kube-system
spec:
  selector:
    k8s-app: traefik-ingress-lb
  ports:
  - port: 80
    targetPort: 8080
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: traefik-web-ui
  namespace: kube-system
  annotations:
    kubernetes.io/ingress.class: traefik
spec:
  rules:
  - host: traefik-ui.minikube
    http:
      paths:
      - backend:
          serviceName: traefik-web-ui
          servicePort: 80

6.2、Label the edge nodes

Edge nodes are the nodes that receive external requests. A kubernetes cluster may consist of dozens or even hundreds of nodes, but ingress only needs to be deployed on a subset of them; that subset is what we call the edge nodes.

[root@k8s-0 addons]# kubectl label nodes k8s-1 edgenode=true
node "k8s-1" labeled
[root@k8s-0 addons]# kubectl label nodes k8s-2 edgenode=true
node "k8s-2" labeled
[root@k8s-0 addons]# kubectl label nodes k8s-3 edgenode=true
node "k8s-3" labeled

In traefik-ingress.yaml, the nodeSelector determines which nodes the service is deployed on.

6.3、Start the service

[root@k8s-0 addons]# kubectl create -f traefik-ingress.yaml 
clusterrole "traefik-ingress-controller" created
clusterrolebinding "traefik-ingress-controller" created
serviceaccount "traefik-ingress-controller" created
daemonset "traefik-ingress-controller" created
service "traefik-web-ui" created
ingress "traefik-web-ui" created
[root@k8s-0 addons]# kubectl get pods -n kube-system
NAME                               READY     STATUS    RESTARTS   AGE
traefik-ingress-controller-gnnn8   1/1       Running   0          1m
traefik-ingress-controller-v6c86   1/1       Running   0          1m
traefik-ingress-controller-wtmf8   1/1       Running   0          1m
[root@k8s-0 addons]# kubectl get all -n kube-system
NAME                            DESIRED   CURRENT   READY     UP-TO-DATE   AVAILABLE   NODE SELECTOR   AGE
ds/traefik-ingress-controller   3         3         3         3            3           edgenode=true   8m
ds/traefik-ingress-controller   3         3         3         3            3           edgenode=true   8m

NAME                                  READY     STATUS    RESTARTS   AGE
po/traefik-ingress-controller-gnnn8   1/1       Running   0          2m
po/traefik-ingress-controller-v6c86   1/1       Running   0          2m
po/traefik-ingress-controller-wtmf8   1/1       Running   0          2m

NAME                 TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)   AGE
svc/traefik-web-ui   ClusterIP   10.254.59.121   <none>        80/TCP    8m

6.4、Test

(screenshot omitted)

Here the service is accessed directly on a mission node; alternatively, add traefik-ui.minikube to the hosts file and access it by that name.
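
The ingress rule can also be exercised without touching any hosts file by spoofing the Host header against an edge node (an ad-hoc check, not from the original session):

[root@k8s-0 ~]# curl -H "Host: traefik-ui.minikube" http://192.168.119.181/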

6.5、Deploy keepalived

keepalived maintains a VIP across several physical nodes. The VIP exists on only one node at a time; if that node goes down, the VIP automatically fails over to another node and service continues, which is how HA is achieved.

6.5.1、Install
[root@k8s-1 ~]# wget http://www.keepalived.org/software/keepalived-1.3.9.tar.gz
[root@k8s-1 ~]# tar -zxvf keepalived-1.3.9.tar.gz 
[root@k8s-1 ~]# yum -y install gcc
[root@k8s-1 ~]# yum -y install openssl-devel
[root@k8s-1 ~]# cd keepalived-1.3.9
[root@k8s-1 keepalived-1.3.9]# ./configure 
[root@k8s-1 keepalived-1.3.9]# make
[root@k8s-1 keepalived-1.3.9]# make install

On CentOS 7 it is recommended to build keepalived from source. An earlier attempt to install it via rpm left a service that would not start; /var/log/messages showed: keepalived[2198]: segfault at 0 ip (null) sp 00007ffed57ac318 error 14 in libnss_files-2.17.so

Before building from source, install gcc and openssl-devel as shown above.

k8s-2 and k8s-3 are installed the same way; the service only needs to be installed on the edge nodes.

6.5.2、Configure
[root@k8s-1 keepalived-1.3.9]# mkdir -p /etc/keepalived/
[root@k8s-1 keepalived-1.3.9]# cat /etc/keepalived/keepalived.conf
! Configuration File for keepalived

global_defs {
   notification_email {
     root@localhost
   }
   notification_email_from kaadmin@localhost
   smtp_server 127.0.0.1
   smtp_connect_timeout 30
   router_id LVS_DEVEL
}

vrrp_instance VI_1 {
    state MASTER
    interface ens33 
    virtual_router_id 51
    priority 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        192.168.119.100
    }
}

virtual_server 192.168.119.100 80 {
    delay_loop 6
    lb_algo loadbalance
    lb_kind DR
    nat_mask 255.255.255.0
    persistence_timeout 0
    protocol TCP

    real_server 192.168.119.181 80 {
        weight 1
        TCP_CHECK {
            connect_timeout 3
        }
    }

    real_server 192.168.119.182 80 {
        weight 1
        TCP_CHECK {
            connect_timeout 3
        }
    }

    real_server 192.168.119.183 80 {
        weight 1
        TCP_CHECK {
            connect_timeout 3
        }
    }

}

state is the node's role: the node whose state is MASTER acquires the VIP when the service starts. If the MASTER node goes down, the remaining nodes take over the VIP according to their priority values.

When configuring, set state to MASTER on exactly one node; all the other nodes are BACKUP.
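
For illustration, the vrrp_instance block on a BACKUP node differs only in state and priority; a sketch (the value 90 is just an example, anything below the MASTER's 100 works):

vrrp_instance VI_1 {
    state BACKUP            # all non-MASTER nodes
    interface ens33
    virtual_router_id 51    # must match the MASTER
    priority 90             # lower than the MASTER's 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        192.168.119.100
    }
}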

6.5.3、Start the service
[root@k8s-1 keepalived-1.3.9]# systemctl start keepalived
[root@k8s-1 keepalived-1.3.9]# systemctl status keepalived
● keepalived.service - LVS and VRRP High Availability Monitor
   Loaded: loaded (/usr/lib/systemd/system/keepalived.service; disabled; vendor preset: disabled)
   Active: active (running) since2017-11-13 13:22:29 CST; 20s ago
  Process: 14593 ExecStart=/usr/local/sbin/keepalived $KEEPALIVED_OPTIONS (code=exited, status=0/SUCCESS)
 Main PID: 14594 (keepalived)
   CGroup: /system.slice/keepalived.service
           ├─14594 /usr/local/sbin/keepalived -D
           ├─14595 /usr/local/sbin/keepalived -D
           └─14596 /usr/local/sbin/keepalived -D

11月 13 13:22:31 k8s-1 Keepalived_vrrp[14596]: Sending gratuitous ARP on ens33 for 192.168.119.100
11月 13 13:22:31 k8s-1 Keepalived_vrrp[14596]: Sending gratuitous ARP on ens33 for 192.168.119.100
11月 13 13:22:31 k8s-1 Keepalived_vrrp[14596]: Sending gratuitous ARP on ens33 for 192.168.119.100
11月 13 13:22:31 k8s-1 Keepalived_vrrp[14596]: Sending gratuitous ARP on ens33 for 192.168.119.100
11月 13 13:22:36 k8s-1 Keepalived_vrrp[14596]: Sending gratuitous ARP on ens33 for 192.168.119.100
11月 13 13:22:36 k8s-1 Keepalived_vrrp[14596]: VRRP_Instance(VI_1) Sending/queueing gratuitous ARPs on....100
11月 13 13:22:36 k8s-1 Keepalived_vrrp[14596]: Sending gratuitous ARP on ens33 for 192.168.119.100
11月 13 13:22:36 k8s-1 Keepalived_vrrp[14596]: Sending gratuitous ARP on ens33 for 192.168.119.100
11月 13 13:22:36 k8s-1 Keepalived_vrrp[14596]: Sending gratuitous ARP on ens33 for 192.168.119.100
11月 13 13:22:36 k8s-1 Keepalived_vrrp[14596]: Sending gratuitous ARP on ens33 for 192.168.119.100
Hint: Some lines were ellipsized, use -l to show in full.
[root@k8s-1 keepalived-1.3.9]# ip address show ens33
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 00:50:56:39:3d:4c brd ff:ff:ff:ff:ff:ff
    inet 192.168.119.181/24 brd 192.168.119.255 scope global ens33
       valid_lft forever preferred_lft forever
    inet 192.168.119.100/32 scope global ens33
       valid_lft forever preferred_lft forever
    inet6 fe80::250:56ff:fe39:3d4c/64 scope link 
       valid_lft forever preferred_lft forever
[root@k8s-0 ~]# ping 192.168.119.100
PING 192.168.119.100 (192.168.119.100) 56(84) bytes of data.
64 bytes from 192.168.119.100: icmp_seq=1 ttl=64 time=2.54 ms
64 bytes from 192.168.119.100: icmp_seq=2 ttl=64 time=0.590 ms
64 bytes from 192.168.119.100: icmp_seq=3 ttl=64 time=0.427 ms
^C
--- 192.168.119.100 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2002ms
rtt min/avg/max/mdev = 0.427/1.188/2.548/0.964 ms
6.5.4、Test
### After the service has been started on all of k8s-1, k8s-2 and k8s-3 ###
[root@k8s-1 keepalived-1.3.9]# ip address show ens33
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 00:50:56:39:3d:4c brd ff:ff:ff:ff:ff:ff
    inet 192.168.119.181/24 brd 192.168.119.255 scope global ens33
       valid_lft forever preferred_lft forever
    inet 192.168.119.100/32 scope global ens33
       valid_lft forever preferred_lft forever
    inet6 fe80::250:56ff:fe39:3d4c/64 scope link 
       valid_lft forever preferred_lft forever

[root@k8s-2 keepalived-1.3.9]# ip address show ens33
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 00:50:56:3e:9b:fa brd ff:ff:ff:ff:ff:ff
    inet 192.168.119.182/24 brd 192.168.119.255 scope global ens33
       valid_lft forever preferred_lft forever

[root@k8s-3 keepalived-1.3.9]# ip address show ens33
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 00:50:56:35:af:f9 brd ff:ff:ff:ff:ff:ff
    inet 192.168.119.183/24 brd 192.168.119.255 scope global ens33
       valid_lft forever preferred_lft forever
### After stopping the service on k8s-1 ###
[root@k8s-1 keepalived-1.3.9]# systemctl stop keepalived
[root@k8s-1 keepalived-1.3.9]# ip address show ens33
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 00:50:56:39:3d:4c brd ff:ff:ff:ff:ff:ff
    inet 192.168.119.181/24 brd 192.168.119.255 scope global ens33
       valid_lft forever preferred_lft forever
    inet6 fe80::250:56ff:fe39:3d4c/64 scope link 
       valid_lft forever preferred_lft forever

[root@k8s-2 keepalived-1.3.9]# ip address show ens33
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 00:50:56:3e:9b:fa brd ff:ff:ff:ff:ff:ff
    inet 192.168.119.182/24 brd 192.168.119.255 scope global ens33
       valid_lft forever preferred_lft forever
    inet 192.168.119.100/32 scope global ens33
       valid_lft forever preferred_lft forever

[root@k8s-3 keepalived-1.3.9]# ip address show ens33
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 00:50:56:35:af:f9 brd ff:ff:ff:ff:ff:ff
    inet 192.168.119.183/24 brd 192.168.119.255 scope global ens33
       valid_lft forever preferred_lft forever

[root@k8s-0 ~]# ping 192.168.119.100
PING 192.168.119.100 (192.168.119.100) 56(84) bytes of data.
64 bytes from 192.168.119.100: icmp_seq=1 ttl=64 time=0.346 ms
64 bytes from 192.168.119.100: icmp_seq=2 ttl=64 time=0.618 ms
^C
--- 192.168.119.100 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1000ms
rtt min/avg/max/mdev = 0.346/0.482/0.618/0.136 ms

7、Deploy a private docker registry

The private docker registry here uses NFS as external storage. Many images cannot be pulled directly from inside China, so they are often downloaded elsewhere and pushed into the private registry for later use; on an isolated internal network this is indispensable.

7.1、Set up the NFS service

7.1.1、Install
[root@k8s-0 ~]# yum install -y nfs-utils rpcbind

[root@k8s-1 images]# yum install -y nfs-utils
[root@k8s-2 images]# yum install -y nfs-utils
[root@k8s-3 images]# yum install -y nfs-utils
7.1.2、Configure
[root@k8s-0 ~]# cat /etc/exports
/opt/data/ 192.168.119.0/24(rw,no_root_squash,no_all_squash,sync)
[root@k8s-0 ~]# mkdir -p /opt/data/
[root@k8s-0 ~]# exportfs -r
7.1.3、Start
[root@k8s-0 ~]# systemctl start rpcbind
[root@k8s-0 ~]# systemctl status rpcbind
● rpcbind.service - RPC bind service
   Loaded: loaded (/usr/lib/systemd/system/rpcbind.service; indirect; vendor preset: enabled)
   Active: active (running) since 日 2017-11-12 21:17:17 CST; 3min 58s ago
 Main PID: 3558 (rpcbind)
   CGroup: /system.slice/rpcbind.service
           └─3558 /sbin/rpcbind -w

11月 12 21:17:17 k8s-0 systemd[1]: Starting RPC bind service...
11月 12 21:17:17 k8s-0 systemd[1]: Started RPC bind service.
[root@k8s-0 ~]# systemctl start nfs
[root@k8s-0 ~]# systemctl status nfs
● nfs-server.service - NFS server and services
   Loaded: loaded (/usr/lib/systemd/system/nfs-server.service; disabled; vendor preset: disabled)
   Active: active (exited) since 日 2017-11-12 21:21:23 CST; 3s ago
  Process: 3735 ExecStart=/usr/sbin/rpc.nfsd $RPCNFSDARGS (code=exited, status=0/SUCCESS)
  Process: 3730 ExecStartPre=/bin/sh -c /bin/kill -HUP `cat /run/gssproxy.pid` (code=exited, status=0/SUCCESS)
  Process: 3729 ExecStartPre=/usr/sbin/exportfs -r (code=exited, status=0/SUCCESS)
 Main PID: 3735 (code=exited, status=0/SUCCESS)
   CGroup: /system.slice/nfs-server.service

11月 12 21:21:23 k8s-0 systemd[1]: Starting NFS server and services...
11月 12 21:21:23 k8s-0 systemd[1]: Started NFS server and services.
7.1.4、Test
[root@k8s-0 ~]# showmount -e k8s-0
Export list for k8s-0:
/opt/data 192.168.119.0/24

Testing from the mission nodes works the same way.
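
An actual mount from a mission node can also be tried (an optional check; /mnt is used as a scratch mount point):

[root@k8s-1 ~]# mount -t nfs k8s-0:/opt/data /mnt
[root@k8s-1 ~]# touch /mnt/nfs-write-test && rm -f /mnt/nfs-write-test
[root@k8s-1 ~]# umount /mnt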

7.2、YML file contents

[root@k8s-0 yml]# pwd
/root/yml
[root@k8s-0 yml]# cat /root/yml/docker-registry.yaml 
apiVersion: v1
kind: PersistentVolume
metadata:
  name: docker-registry-pv
  labels:
    release: stable
spec:
  capacity:
    storage: 5Gi
  accessModes:
  - ReadWriteOnce
  persistentVolumeReclaimPolicy: Recycle
  nfs:
    path: /opt/data
    server: 192.168.119.180
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: docker-registry-claim
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi
  selector:
    matchLabels:
      release: stable
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: docker-registry-deployment
spec:
  replicas: 1
  template:
    metadata:
      labels:
        name: docker-registry
    spec:
      containers:
      - name: docker-registry
        image: docker.io/registry:latest
        volumeMounts:
        - mountPath: /var/lib/registry
          name: registry-volume
        ports:
        - containerPort: 5000
      volumes:
      - name: registry-volume
        persistentVolumeClaim:
          claimName: docker-registry-claim
---
apiVersion: v1
kind: Service
metadata:
  name: docker-registry-service
spec:
  selector:
    name: docker-registry
  ports:
  - port: 80
    targetPort: 5000
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: docker-registry-ingress
spec:
  rules:
  - host: docker.reg
    http:
      paths:
      - backend:
          serviceName: docker-registry-service
          servicePort: 80

7.3、Deploy the service

[root@k8s-0 yml]# pwd
/root/yml
[root@k8s-0 yml]# kubectl create -f docker-registry.yaml 
persistentvolume "docker-registry-pv" created
persistentvolumeclaim "docker-registry-claim" created
deployment "docker-registry-deployment" created
service "docker-registry-service" created
ingress "docker-registry-ingress" created
[root@k8s-0 yml]# kubectl get pods
NAME                                          READY     STATUS    RESTARTS   AGE
docker-registry-deployment-68d94fcf85-t897g   1/1       Running   0          9s

7.4、Test

### Check the service address ###
[root@k8s-0 ~]# kubectl get svc
NAME                      TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)   AGE
docker-registry-service   ClusterIP   10.254.125.34   <none>        80/TCP    12m
kubernetes                ClusterIP   10.254.0.1      <none>        443/TCP   2d
### Run busybox and test from inside it ###
[root@k8s-1 ~]# docker run -ti --rm docker.io/busybox:1.27.2 sh
/ # wget -O - -q http://10.254.125.34/v2/_catalog
{"repositories":[]}
/ # 

7.5、Configure the hosts file

[root@k8s-0 ~]# cat /etc/hosts
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6


192.168.119.180      k8s-0 etcd-1
192.168.119.181      k8s-1 etcd-2
192.168.119.182      k8s-2 etcd-3
192.168.119.183      k8s-3
192.168.119.100      docker.reg
### Test the docker registry again ###
[root@k8s-1 ~]# curl http://docker.reg/v2/_catalog
{"repositories":[]}

Modify the hosts file on all nodes.

7.6、Configure a domestic image mirror and plain-HTTP access to the local private registry

[root@k8s-1 ~]# cat /etc/sysconfig/docker
# /etc/sysconfig/docker

# Modify these options if you want to change the way the docker daemon runs
OPTIONS='--selinux-enabled --log-driver=journald --signature-verification=false  --registry-mirror=https://xxxxxxxx.mirror.aliyuncs.com'
if [ -z "${DOCKER_CERT_PATH}" ]; then
    DOCKER_CERT_PATH=/etc/docker
fi

# Do not add registries in this file anymore. Use /etc/containers/registries.conf
# from the atomic-registries package.
#

# docker-latest daemon can be used by starting the docker-latest unitfile.
# To use docker-latest client, uncomment below lines
#DOCKERBINARY=/usr/bin/docker-latest
#DOCKERDBINARY=/usr/bin/dockerd-latest
#DOCKER_CONTAINERD_BINARY=/usr/bin/docker-containerd-latest
#DOCKER_CONTAINERD_SHIM_BINARY=/usr/bin/docker-containerd-shim-latest

INSECURE_REGISTRY='--insecure-registry docker.reg'

[root@k8s-1 ~]# systemctl restart docker

### Test image push and pull ###
[root@k8s-1 ~]# docker images
REPOSITORY                             TAG                 IMAGE ID            CREATED             SIZE
docker.io/busybox                      1.27.2              6ad733544a63        9 days ago          1.129 MB
docker.io/registry                     2.5.2               876793cc984a        9 days ago          37.73 MB
docker.io/traefik                      v1.4.1              83df6581f3d9        2 weeks ago         45.58 MB
quay.io/coreos/flannel                 v0.9.0-amd64        4c600a64a18a        7 weeks ago         51.31 MB
gcr.io/google_containers/pause-amd64   3.0                 99e59f495ffa        18 months ago       746.9 kB
[root@k8s-1 ~]# docker tag docker.io/busybox:1.27.2 docker.reg/busybox:1.27.2
[root@k8s-1 ~]# docker images
REPOSITORY                             TAG                 IMAGE ID            CREATED             SIZE
docker.reg/busybox                     1.27.2              6ad733544a63        9 days ago          1.129 MB
docker.io/busybox                      1.27.2              6ad733544a63        9 days ago          1.129 MB
docker.io/registry                     2.5.2               876793cc984a        9 days ago          37.73 MB
docker.io/traefik                      v1.4.1              83df6581f3d9        2 weeks ago         45.58 MB
quay.io/coreos/flannel                 v0.9.0-amd64        4c600a64a18a        7 weeks ago         51.31 MB
gcr.io/google_containers/pause-amd64   3.0                 99e59f495ffa        18 months ago       746.9 kB
[root@k8s-1 ~]# docker push docker.reg/busybox:1.27.2
The push refers to a repository [docker.reg/busybox]
0271b8eebde3: Pushed 
1.27.2: digest: sha256:91ef6c1c52b166be02645b8efee30d1ee65362024f7da41c404681561734c465 size: 527
[root@k8s-1 ~]# curl http://docker.reg/v2/_catalog
{"repositories":["busybox"]}
### Pull the image on another node ###
[root@k8s-2 ~]# docker images
REPOSITORY                             TAG                 IMAGE ID            CREATED             SIZE
docker.io/registry                     2.5.2               876793cc984a        8 days ago          37.73 MB
docker.io/traefik                      v1.4.1              83df6581f3d9        2 weeks ago         45.58 MB
quay.io/coreos/flannel                 v0.9.0-amd64        4c600a64a18a        7 weeks ago         51.31 MB
gcr.io/google_containers/pause-amd64   3.0                 99e59f495ffa        18 months ago       746.9 kB
[root@k8s-2 ~]# docker pull docker.reg/busybox:1.27.2
Trying to pull repository docker.reg/busybox ... 
1.27.2: Pulling from docker.reg/busybox

0ffadd58f2a6: Pull complete 
Digest: sha256:91ef6c1c52b166be02645b8efee30d1ee65362024f7da41c404681561734c465
[root@k8s-2 ~]# docker images
REPOSITORY                             TAG                 IMAGE ID            CREATED             SIZE
docker.reg/busybox                     1.27.2              6ad733544a63        8 days ago          1.129 MB
docker.io/registry                     2.5.2               876793cc984a        8 days ago          37.73 MB
docker.io/traefik                      v1.4.1              83df6581f3d9        2 weeks ago         45.58 MB
quay.io/coreos/flannel                 v0.9.0-amd64        4c600a64a18a        7 weeks ago         51.31 MB
gcr.io/google_containers/pause-amd64   3.0                 99e59f495ffa        18 months ago       746.9 kB

The value of --registry-mirror=https://xxxxxxxx.mirror.aliyuncs.com must be adjusted to your actual mirror address.

Every Docker node needs this modification.
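
As with the hosts file, the edit can be scripted; a sketch, assuming root SSH and that /etc/sysconfig/docker exists on every node:

### Sketch: add the insecure-registry setting on the remaining nodes and restart docker ###
for h in k8s-0 k8s-2 k8s-3; do
  ssh root@$h "grep -q '^INSECURE_REGISTRY=' /etc/sysconfig/docker || echo \"INSECURE_REGISTRY='--insecure-registry docker.reg'\" >> /etc/sysconfig/docker; systemctl restart docker"
done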

8、Deploy the kube-dns service

kube-dns is an optional Kubernetes add-on that provides DNS inside the cluster: services can reach each other directly by name, instead of depending on service or pod IPs that cannot be relied upon to stay fixed.
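
With it in place, a Service such as docker-registry-service (created in section 7) becomes reachable from any pod under several names; a sketch of the naming scheme, where 10.254.0.2 is the clusterIP assigned to kube-dns in the YAML below:

### Service DNS names, shortest to fully qualified ###
# docker-registry-service                               -- same namespace only
# docker-registry-service.default                       -- namespace-qualified
# docker-registry-service.default.svc.cluster.local     -- fully qualified
nslookup docker-registry-service.default.svc.cluster.local 10.254.0.2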

8.1、YML file contents

[root@k8s-0 addons]# pwd
/root/yml/addons
[root@k8s-0 addons]# cat kube-dns.yaml 
# Copyright 2016 The Kubernetes Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

# Should keep target in cluster/addons/dns-horizontal-autoscaler/dns-horizontal-autoscaler.yaml
# in sync with this file.

# Warning: This is a file generated from the base underscore template file: kube-dns.yaml.base

apiVersion: v1
kind: Service
metadata:
  name: kube-dns
  namespace: kube-system
  labels:
    k8s-app: kube-dns
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
    kubernetes.io/name: "KubeDNS"
spec:
  selector:
    k8s-app: kube-dns
  clusterIP: 10.254.0.2
  ports:
  - name: dns
    port: 53
    protocol: UDP
  - name: dns-tcp
    port: 53
    protocol: TCP
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: kube-dns
  namespace: kube-system
  labels:
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: kube-dns
  namespace: kube-system
  labels:
    addonmanager.kubernetes.io/mode: EnsureExists
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: kube-dns
  namespace: kube-system
  labels:
    k8s-app: kube-dns
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
spec:
  # replicas: not specified here:
  # 1. In order to make Addon Manager do not reconcile this replicas parameter.
  # 2. Default is 1.
  # 3. Will be tuned in real time if DNS horizontal auto-scaling is turned on.
  strategy:
    rollingUpdate:
      maxSurge: 10%
      maxUnavailable: 0
  selector:
    matchLabels:
      k8s-app: kube-dns
  template:
    metadata:
      labels:
        k8s-app: kube-dns
      annotations:
        scheduler.alpha.kubernetes.io/critical-pod: ''
    spec:
      tolerations:
      - key: "CriticalAddonsOnly"
        operator: "Exists"
      volumes:
      - name: kube-dns-config
        configMap:
          name: kube-dns
          optional: true
      containers:
      - name: kubedns
        image: gcr.io/google_containers/k8s-dns-kube-dns-amd64:1.14.5
        resources:
          # TODO: Set memory limits when we've profiled the container for large
          # clusters, then set request = limit to keep this container in
          # guaranteed class. Currently, this container falls into the
          # "burstable" category so the kubelet doesn't backoff from restarting it.
          limits:
            memory: 170Mi
          requests:
            cpu: 100m
            memory: 70Mi
        livenessProbe:
          httpGet:
            path: /healthcheck/kubedns
            port: 10054
            scheme: HTTP
          initialDelaySeconds: 60
          timeoutSeconds: 5
          successThreshold: 1
          failureThreshold: 5
        readinessProbe:
          httpGet:
            path: /readiness
            port: 8081
            scheme: HTTP
          # we poll on pod startup for the Kubernetes master service and
          # only setup the /readiness HTTP server once that's available.
          initialDelaySeconds: 3
          timeoutSeconds: 5
        args:
        - --domain=cluster.local.
        - --dns-port=10053
        - --config-dir=/kube-dns-config
        - --v=2
        env:
        - name: PROMETHEUS_PORT
          value: "10055"
        ports:
        - containerPort: 10053
          name: dns-local
          protocol: UDP
        - containerPort: 10053
          name: dns-tcp-local
          protocol: TCP
        - containerPort: 10055
          name: metrics
          protocol: TCP
        volumeMounts:
        - name: kube-dns-config
          mountPath: /kube-dns-config
      - name: dnsmasq
        image: gcr.io/google_containers/k8s-dns-dnsmasq-nanny-amd64:1.14.5
        livenessProbe:
          httpGet:
            path: /healthcheck/dnsmasq
            port: 10054
            scheme: HTTP
          initialDelaySeconds: 60
          timeoutSeconds: 5
          successThreshold: 1
          failureThreshold: 5
        args:
        - -v=2
        - -logtostderr
        - -configDir=/etc/k8s/dns/dnsmasq-nanny
        - -restartDnsmasq=true
        - --
        - -k
        - --cache-size=1000
        - --no-negcache
        - --log-facility=-
        - --server=/cluster.local/127.0.0.1#10053
        - --server=/in-addr.arpa/127.0.0.1#10053
        - --server=/ip6.arpa/127.0.0.1#10053
        ports:
        - containerPort: 53
          name: dns
          protocol: UDP
        - containerPort: 53
          name: dns-tcp
          protocol: TCP
        # see: https://github.com/kubernetes/kubernetes/issues/29055 for details
        resources:
          requests:
            cpu: 150m
            memory: 20Mi
        volumeMounts:
        - name: kube-dns-config
          mountPath: /etc/k8s/dns/dnsmasq-nanny
      - name: sidecar
        image: gcr.io/google_containers/k8s-dns-sidecar-amd64:1.14.5
        livenessProbe:
          httpGet:
            path: /metrics
            port: 10054
            scheme: HTTP
          initialDelaySeconds: 60
          timeoutSeconds: 5
          successThreshold: 1
          failureThreshold: 5
        args:
        - --v=2
        - --logtostderr
        - --probe=kubedns,127.0.0.1:10053,kubernetes.default.svc.cluster.local,5,A
        - --probe=dnsmasq,127.0.0.1:53,kubernetes.default.svc.cluster.local,5,A
        ports:
        - containerPort: 10054
          name: metrics
          protocol: TCP
        resources:
          requests:
            memory: 20Mi
            cpu: 10m
      dnsPolicy: Default  # Don't use cluster DNS.
      serviceAccountName: kube-dns
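
Note that the clusterIP above, 10.254.0.2, only takes effect if it matches what the kubelets hand out to pods, i.e. the --cluster-dns flag (together with --cluster-domain=cluster.local) set when the kubelets were configured. A quick sanity check on each node (sketch; the kubelet config file path here is an assumption about this deployment's layout):

### The DNS service IP must agree with the kubelet flags on every node ###
grep -E 'cluster-dns|cluster-domain' /etc/kubernetes/kubelet
# expected (sketch): --cluster-dns=10.254.0.2 --cluster-domain=cluster.local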

8.2、Create the service

[root@k8s-0 addons]# pwd
/root/yml/addons
[root@k8s-0 addons]# kubectl create -f kube-dns.yaml 
service "kube-dns" created
serviceaccount "kube-dns" created
configmap "kube-dns" created
deployment "kube-dns" created
[root@k8s-0 addons]# kubectl get pods -n kube-system
NAME                               READY     STATUS    RESTARTS   AGE
kube-dns-7dff49b8fc-2fl64          3/3       Running   0          20s
traefik-ingress-controller-gnnn8   1/1       Running   1          2h
traefik-ingress-controller-v6c86   1/1       Running   1          2h
traefik-ingress-controller-wtmf8   1/1       Running   1          2h

8.3、Test

[root@k8s-0 addons]# kubectl run -ti --rm --image=docker.reg/busybox:1.27.2 sh
If you don't see a command prompt, try pressing enter.
/ # wget -O - -q http://docker-registry-service/v2/_catalog
{"repositories":["busybox"]}

Note that kubectl run is used here rather than docker run: the pod it creates uses the cluster DNS, so the docker registry service can be reached by its service name.
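
Name resolution can also be checked explicitly from inside such a pod; a sketch (10.254.0.2 is the kube-dns service IP):

### Optional checks from inside the busybox pod ###
nslookup docker-registry-service 10.254.0.2
cat /etc/resolv.conf   # should list nameserver 10.254.0.2 and a cluster.local search path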

9、Deploy the dashboard service

9.1、YML file contents

[root@k8s-0 addons]# pwd
/root/yml/addons
[root@k8s-0 addons]# cat kubernetes-dashboard.yaml 
# Copyright 2015 Google Inc. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

# Configuration to deploy release version of the Dashboard UI compatible with
# Kubernetes 1.7.
#
# Example usage: kubectl create -f <this_file>

# ------------------- Dashboard Secret ------------------- #

apiVersion: v1
kind: Secret
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard-certs
  namespace: kube-system
type: Opaque

---
# ------------------- Dashboard Service Account ------------------- #

apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kube-system

---
# ------------------- Dashboard Role & Role Binding ------------------- #

kind: Role
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: kubernetes-dashboard-minimal
  namespace: kube-system
rules:
  # Allow Dashboard to create and watch for changes of 'kubernetes-dashboard-key-holder' secret.
- apiGroups: [""]
  resources: ["secrets"]
  verbs: ["create", "watch"]
- apiGroups: [""]
  resources: ["secrets"]
  # Allow Dashboard to get, update and delete 'kubernetes-dashboard-key-holder' secret.
  resourceNames: ["kubernetes-dashboard-key-holder", "kubernetes-dashboard-certs"]
  verbs: ["get", "update", "delete"]
  # Allow Dashboard to get metrics from heapster.
- apiGroups: [""]
  resources: ["services"]
  resourceNames: ["heapster"]
  verbs: ["proxy"]

---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: kubernetes-dashboard
  namespace: kube-system
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: kubernetes-dashboard
  namespace: kube-system

---
# ------------------- Dashboard Deployment ------------------- #

kind: Deployment
apiVersion: extensions/v1beta1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kube-system
spec:
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      k8s-app: kubernetes-dashboard
  template:
    metadata:
      labels:
        k8s-app: kubernetes-dashboard
    spec:
      initContainers:
      - name: kubernetes-dashboard-init
        image: gcr.io/google_containers/kubernetes-dashboard-init-amd64:v1.0.1
        volumeMounts:
        - name: kubernetes-dashboard-certs
          mountPath: /certs
      containers:
      - name: kubernetes-dashboard
        image: gcr.io/google_containers/kubernetes-dashboard-amd64:v1.7.1
        ports:
        - containerPort: 8443
          protocol: TCP
        args:
          - --tls-key-file=/certs/dashboard.key
          - --tls-cert-file=/certs/dashboard.crt
          # Uncomment the following line to manually specify Kubernetes API server Host
          # If not specified, Dashboard will attempt to auto discover the API server and connect
          # to it. Uncomment only if the default does not work.
          # - --apiserver-host=http://my-address:port
        volumeMounts:
        - name: kubernetes-dashboard-certs
          mountPath: /certs
          readOnly: true
          # Create on-disk volume to store exec logs
        - mountPath: /tmp
          name: tmp-volume
        livenessProbe:
          httpGet:
            scheme: HTTPS
            path: /
            port: 8443
          initialDelaySeconds: 30
          timeoutSeconds: 30
      volumes:
      - name: kubernetes-dashboard-certs
        secret:
          secretName: kubernetes-dashboard-certs
      - name: tmp-volume
        emptyDir: {}
      serviceAccountName: kubernetes-dashboard
      # Comment the following tolerations if Dashboard must not be deployed on master
      tolerations:
      - key: node-role.kubernetes.io/master
        effect: NoSchedule

---
# ------------------- Dashboard Service ------------------- #

kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kube-system
spec:
  ports:
    - port: 443
      targetPort: 8443
  selector:
    k8s-app: kubernetes-dashboard

For now, the cluster-admin ClusterRole is bound directly to the kubernetes-dashboard service account, which gives the dashboard full cluster privileges.
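
If full privileges are undesirable, the manifest above already defines a kubernetes-dashboard-minimal Role; a RoleBinding along the following lines (a sketch matching the upstream dashboard manifest) can replace the ClusterRoleBinding, after which the dashboard prompts for a token or kubeconfig at login instead of having cluster-wide access:

apiVersion: rbac.authorization.k8s.io/v1beta1
kind: RoleBinding
metadata:
  name: kubernetes-dashboard-minimal
  namespace: kube-system
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: kubernetes-dashboard-minimal
subjects:
- kind: ServiceAccount
  name: kubernetes-dashboard
  namespace: kube-system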

9.2、Deploy

[root@k8s-0 addons]# pwd
/root/yml/addons
[root@k8s-0 addons]# kubectl create -f kubernetes-dashboard.yaml 
secret "kubernetes-dashboard-certs" created
serviceaccount "kubernetes-dashboard" created
role "kubernetes-dashboard-minimal" created
clusterrolebinding "kubernetes-dashboard" created
deployment "kubernetes-dashboard" created
service "kubernetes-dashboard" created
[root@k8s-0 addons]# kubectl get pods -n kube-system
NAME                                   READY     STATUS    RESTARTS   AGE
kube-dns-7dff49b8fc-2fl64              3/3       Running   0          18m
kubernetes-dashboard-747c4f7cf-qtgcx   1/1       Running   0          25s
traefik-ingress-controller-gnnn8       1/1       Running   1          3h
traefik-ingress-controller-v6c86       1/1       Running   1          3h
traefik-ingress-controller-wtmf8       1/1       Running   1          3h

9.3、Test

Because the dashboard is served over HTTPS, the browser performs certificate verification; if a client certificate has not been imported beforehand, authentication will fail.

9.3.1、Import the certificate
[root@k8s-0 kubernetes]# pwd
/root/cfssl/kubernetes
[root@k8s-0 kubernetes]# openssl pkcs12 -export -in kubernetes-client-kubectl.pem -out kubernetes-client.p12 -inkey kubernetes-client-kubectl-key.pem 
Enter Export Password:
Verifying - Enter Export Password:
[root@k8s-0 kubernetes]# ls -l
total 72
-rw-r--r--. 1 root root  833 Nov 10 16:29 ca-config.json
-rw-r--r--. 1 root root 1086 Nov 10 19:28 kubernetes-client-kubectl.csr
-rw-r--r--. 1 root root  356 Nov 10 18:17 kubernetes-client-kubectl-csr.json
-rw-------. 1 root root 1675 Nov 10 19:28 kubernetes-client-kubectl-key.pem
-rw-r--r--. 1 root root 1460 Nov 10 19:28 kubernetes-client-kubectl.pem
-rw-r--r--. 1 root root 2637 Nov 13 00:06 kubernetes-client.p12
-rw-r--r--. 1 root root 1098 Nov 10 21:04 kubernetes-client-proxy.csr
-rw-r--r--. 1 root root  385 Nov 10 21:03 kubernetes-client-proxy-csr.json
-rw-------. 1 root root 1679 Nov 10 21:04 kubernetes-client-proxy-key.pem
-rw-r--r--. 1 root root 1476 Nov 10 21:04 kubernetes-client-proxy.pem
-rw-r--r--. 1 root root 1021 Nov 10 19:20 kubernetes-root-ca.csr
-rw-r--r--. 1 root root  279 Nov 10 18:04 kubernetes-root-ca-csr.json
-rw-------. 1 root root 1675 Nov 10 19:20 kubernetes-root-ca-key.pem
-rw-r--r--. 1 root root 1395 Nov 10 19:20 kubernetes-root-ca.pem
-rw-r--r--. 1 root root 1277 Nov 10 19:42 kubernetes-server.csr
-rw-r--r--. 1 root root  556 Nov 10 19:40 kubernetes-server-csr.json
-rw-------. 1 root root 1675 Nov 10 19:42 kubernetes-server-key.pem
-rw-r--r--. 1 root root 1651 Nov 10 19:42 kubernetes-server.pem

Import kubernetes-client.p12 into your operating system's certificate store.
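
Before importing, the bundle can be inspected to confirm the key and certificate were packed together (a sketch; openssl will prompt for the export password entered above):

### Optional: inspect the PKCS#12 bundle ###
openssl pkcs12 -info -in kubernetes-client.p12 -noout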

9.3.2、Access via a browser

URL: http://192.168.119.180:8080/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy


This access method requires flannel to be deployed on the master node as well, so that the apiserver can reach the dashboard pod over the pod network.
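
An alternative that does not go through the apiserver's insecure 8080 port is kubectl proxy; a sketch, runnable from any machine with a working kubectl configuration:

### Access the dashboard through kubectl proxy instead ###
kubectl proxy --port=8001
# then browse to:
# http://127.0.0.1:8001/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy/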
