Kubernetes Deployment and Configuration

1. Environment Preparation

192.168.253.220 master1 master and etcd
192.168.253.221 master2 master and etcd
192.168.253.222 master3 master and etcd
192.168.253.223 node3 node
192.168.253.224 node4 node

Perform the following steps on all nodes.

1.1 Disable firewalld and SELinux
$ systemctl stop firewalld  
$ systemctl disable firewalld  
$ setenforce 0
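
Note that setenforce 0 only switches SELinux to permissive mode until the next reboot. To keep it that way across reboots, /etc/selinux/config can be updated as well (a quick sketch):

$ sed -i 's/^SELINUX=enforcing$/SELINUX=permissive/' /etc/selinux/config
$ getenforce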

Perform the following steps on all master nodes.

1.2 Set up SSH trust between the master and node machines
$ ssh-keygen
$ ssh-copy-id 192.168.253.223
$ ssh-copy-id 192.168.253.224
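
To confirm the trust works, each master should be able to run a command on the nodes without a password prompt (a quick check):

$ for ip in 192.168.253.223 192.168.253.224; do ssh $ip hostname; done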

2. Install Docker CE

2.1 Because all of the Kubernetes components here are installed from RPM packages, the master nodes do not need to run containers; only the node machines need a container runtime such as Docker or rkt. This guide uses Docker CE.

Perform the following steps on all node machines.

  • Remove old Docker versions
$ yum remove docker \
             docker-common \
             docker-selinux \
             docker-engine -y
  • Install the required packages
$ yum install yum-utils \
  	          device-mapper-persistent-data \
              lvm2 -y
  • Add the yum repository
$ yum-config-manager \
    --add-repo \
    https://download.docker.com/linux/centos/docker-ce.repo
  • Enable the docker-ce edge and test repositories (optional)
$ yum-config-manager --enable docker-ce-edge
$ yum-config-manager --enable docker-ce-test
  • Pick a docker-ce version
$ yum list docker-ce --showduplicates | sort -r
docker-ce.x86_64            17.12.0.ce-1.el7.centos            docker-ce-stable
docker-ce.x86_64            17.09.1.ce-1.el7.centos            docker-ce-stable
docker-ce.x86_64            17.09.0.ce-1.el7.centos            docker-ce-stable
docker-ce.x86_64            17.06.2.ce-1.el7.centos            docker-ce-stable

  • Install docker-ce
$ yum install docker-ce-17.12.0.ce -y
  • Start Docker and enable it at boot
$ systemctl start docker && systemctl enable docker
  • Verify Docker

For well-known reasons, pulling images from Docker Hub inside mainland China is extremely slow, so an image accelerator (registry mirror) is needed. This guide uses DaoCloud.io; restart Docker after the mirror is configured.
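
A registry mirror is usually configured through /etc/docker/daemon.json. A minimal sketch, where the mirror URL is a placeholder to be replaced with the one assigned by your accelerator account:

$ mkdir -p /etc/docker
$ cat > /etc/docker/daemon.json <<EOF
{
  "registry-mirrors": ["https://<your-mirror-id>.m.daocloud.io"]
}
EOF
$ systemctl restart docker

With the mirror in place, the hello-world test below should pull quickly.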

$ docker run hello-world
Unable to find image 'hello-world:latest' locally
latest: Pulling from library/hello-world
ca4f61b1923c: Pull complete
Digest: sha256:150f6d05b8898b79f55114991c01b89688a02dab6115e402b7855792a440caff
Status: Downloaded newer image for hello-world:latest

Hello from Docker!
This message shows that your installation appears to be working correctly.
...

3. Configure the Certificates

Since etcd and Kubernetes communicate entirely over TLS, the TLS certificates have to be generated first; cfssl is used as the certificate tool.

All of the certificate work below is done on a single host of your choice; the resulting files are distributed to every node in the cluster afterwards.

3.1 Install the cfssl tools
$ wget https://pkg.cfssl.org/R1.2/cfssl_linux-amd64
$ wget https://pkg.cfssl.org/R1.2/cfssljson_linux-amd64
$ chmod +x cfssl_linux-amd64 cfssljson_linux-amd64
$ mv cfssl_linux-amd64 /usr/local/bin/cfssl
$ mv cfssljson_linux-amd64 /usr/local/bin/cfssljson
3.2 Generate the etcd certificates

Generating the etcd certificates requires three JSON files. Create the three configuration files with the following content:

  • etcd-root-ca-csr.json
{
  "key": {
    "algo": "rsa",
    "size": 4096
  },
  "names": [
    {
      "O": "etcd",
      "OU": "etcd Security",
      "L": "Shanghai",
      "ST": "Shanghai",
      "C": "CN"
    }
  ],
  "CN": "etcd-root-ca"
}
  • etcd-gencert.json
{
  "signing": {
    "default": {
        "usages": [
          "signing",
          "key encipherment",
          "server auth",
          "client auth"
        ],
        "expiry": "87600h"
    }
  }
}
  • etcd-csr.json (note: replace the IPs with those of your own hosts)
{
  "key": {
    "algo": "rsa",
    "size": 4096
  },
  "names": [
    {
      "O": "etcd",
      "OU": "etcd Security",
      "L": "Shanghai",
      "ST": "Shanghai",
      "C": "CN"
    }
  ],
  "CN": "etcd",
  "hosts": [
    "127.0.0.1",
    "localhost",
    "192.168.253.220",
    "192.168.253.221",
    "192.168.253.222",
    "192.168.253.223",
    "192.168.253.224"
  ]
}

Generate the etcd certificates:

$ cfssl gencert --initca=true etcd-root-ca-csr.json | cfssljson --bare etcd-root-ca
$ cfssl gencert --ca etcd-root-ca.pem --ca-key etcd-root-ca-key.pem --config etcd-gencert.json etcd-csr.json | cfssljson --bare etcd

The directory now looks like this (the .json files may be deleted once the certificates have been generated; the .pem files will later be copied to all nodes):

$ tree
.
├── etcd.csr
├── etcd-csr.json
├── etcd-gencert.json
├── etcd-key.pem
├── etcd.pem
├── etcd-root-ca.csr
├── etcd-root-ca-csr.json
├── etcd-root-ca-key.pem
└── etcd-root-ca.pem

0 directories, 9 files
3.3 Generate the Kubernetes certificates

Generating the Kubernetes certificates requires five JSON files. Create the five configuration files with the following content:

  • k8s-root-ca-csr.json
{
  "CN": "kubernetes",
  "key": {
    "algo": "rsa",
    "size": 4096
  },
  "names": [
    {
      "C": "CN",
      "ST": "Shanghai",
      "L": "Shanghai",
      "O": "k8s",
      "OU": "System"
    }
  ]
}
  • k8s-gencert.json
{
  "signing": {
    "default": {
      "expiry": "87600h"
    },
    "profiles": {
      "kubernetes": {
        "usages": [
            "signing",
            "key encipherment",
            "server auth",
            "client auth"
        ],
        "expiry": "87600h"
      }
    }
  }
}
  • kubernetes-csr.json (note: replace the IPs with those of your own hosts)
{
    "CN": "kubernetes",
    "hosts": [
        "127.0.0.1",
        "10.254.0.1",
        "192.168.253.220",
        "192.168.253.221",
        "192.168.253.222",
        "192.168.253.223",
        "192.168.253.224",
        "localhost",
        "kubernetes",
        "kubernetes.default",
        "kubernetes.default.svc",
        "kubernetes.default.svc.cluster",
        "kubernetes.default.svc.cluster.local"
    ],
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "ST": "Shanghai",
            "L": "Shanghai",
            "O": "k8s",
            "OU": "System"
        }
    ]
}
  • kube-proxy-csr.json
{
  "CN": "system:kube-proxy",
  "hosts": [],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "Shanghai",
      "L": "Shanghai",
      "O": "k8s",
      "OU": "System"
    }
  ]
}
  • admin-csr.json
{
  "CN": "admin",
  "hosts": [],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "Shanghai",
      "L": "Shanghai",
      "O": "system:masters",
      "OU": "System"
    }
  ]
}

Generate the Kubernetes certificates:

$ cfssl gencert --initca=true k8s-root-ca-csr.json | cfssljson --bare k8s-root-ca
$ for targetName in kubernetes admin kube-proxy; do
  cfssl gencert --ca k8s-root-ca.pem --ca-key k8s-root-ca-key.pem --config k8s-gencert.json --profile kubernetes $targetName-csr.json | cfssljson --bare $targetName
done

The directory now looks like this (the .json files may be deleted once the certificates have been generated; the .pem files will later be copied to all nodes):

$ tree
.
├── admin.csr
├── admin-csr.json
├── admin-key.pem
├── admin.pem
├── k8s-gencert.json
├── k8s-root-ca.csr
├── k8s-root-ca-csr.json
├── k8s-root-ca-key.pem
├── k8s-root-ca.pem
├── kube-proxy.csr
├── kube-proxy-csr.json
├── kube-proxy-key.pem
├── kube-proxy.pem
├── kubernetes.csr
├── kubernetes-csr.json
├── kubernetes-key.pem
└── kubernetes.pem

0 directories, 17 files
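
Before distributing the files, it is worth confirming that the server certificate really carries all of the IPs and DNS names listed in kubernetes-csr.json; openssl (present on a stock CentOS 7 install) can print the SAN list:

$ openssl x509 -in kubernetes.pem -noout -text | grep -A 1 "Subject Alternative Name"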
3.4 Generate the token and kubeconfig files

Because TLS bootstrapping is used, a node must present the correct token when joining the cluster; otherwise authentication fails.

Two kubeconfig files are produced: bootstrap.kubeconfig (used by the kubelet for TLS bootstrapping) and kube-proxy.kubeconfig (used by kube-proxy).

Generate the token as follows:

$ export BOOTSTRAP_TOKEN=$(head -c 16 /dev/urandom | od -An -t x | tr -d ' ')
$ cat > token.csv <<EOF
${BOOTSTRAP_TOKEN},kubelet-bootstrap,10001,"system:kubelet-bootstrap"
EOF

Create bootstrap.kubeconfig:

# On a master this address should be https://MasterIP:6443
$ export KUBE_APISERVER="https://127.0.0.1:6443"
# Set cluster parameters
$ kubectl config set-cluster kubernetes \
  --certificate-authority=k8s-root-ca.pem \
  --embed-certs=true \
  --server=${KUBE_APISERVER} \
  --kubeconfig=bootstrap.kubeconfig
# Set client authentication parameters
$ kubectl config set-credentials kubelet-bootstrap \
  --token=${BOOTSTRAP_TOKEN} \
  --kubeconfig=bootstrap.kubeconfig
# Set context parameters
$ kubectl config set-context default \
  --cluster=kubernetes \
  --user=kubelet-bootstrap \
  --kubeconfig=bootstrap.kubeconfig
# Use the default context
$ kubectl config use-context default --kubeconfig=bootstrap.kubeconfig

Create kube-proxy.kubeconfig:

# Set cluster parameters
kubectl config set-cluster kubernetes \
  --certificate-authority=k8s-root-ca.pem \
  --embed-certs=true \
  --server=${KUBE_APISERVER} \
  --kubeconfig=kube-proxy.kubeconfig
# Set client authentication parameters
kubectl config set-credentials kube-proxy \
  --client-certificate=kube-proxy.pem \
  --client-key=kube-proxy-key.pem \
  --embed-certs=true \
  --kubeconfig=kube-proxy.kubeconfig
# Set context parameters
kubectl config set-context default \
  --cluster=kubernetes \
  --user=kube-proxy \
  --kubeconfig=kube-proxy.kubeconfig
# Use the default context
kubectl config use-context default --kubeconfig=kube-proxy.kubeconfig
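
Both kubeconfig files can be inspected before they are copied out, to confirm that the cluster, user and context entries were written and the CA data is embedded:

$ kubectl config view --kubeconfig=bootstrap.kubeconfig
$ kubectl config view --kubeconfig=kube-proxy.kubeconfig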

4. Deploy etcd

4.1 Install etcd

etcd can be installed from a tarball or from an RPM; this guide uses the RPM package, which can be found via the Rpmfind site.

The etcd cluster itself only needs to be installed on the master nodes, but the etcd certificates must be present on every node.

$ rpm -ivh https://www.rpmfind.net/linux/centos/7.4.1708/extras/x86_64/Packages/etcd-3.2.9-3.el7.x86_64.rpm
4.2 Distribute the etcd certificates to all nodes
$ mkdir -p /etc/etcd/ssl
$ cp *.pem /etc/etcd/ssl
$ chown etcd:etcd /etc/etcd/ssl -R
$ chmod 755 /etc/etcd -R
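
The commands above assume the etcd .pem files from section 3.2 are already present on each node. If they still exist only on the host where they were generated, a small scp loop can push them out first (a sketch, assuming SSH access to every host):

$ for ip in 192.168.253.220 192.168.253.221 192.168.253.222 192.168.253.223 192.168.253.224; do
  ssh $ip "mkdir -p /etc/etcd/ssl"
  scp etcd*.pem $ip:/etc/etcd/ssl/
done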
4.3 Edit the etcd.conf configuration file
# [member]
ETCD_NAME=etcd1
ETCD_DATA_DIR="/var/lib/etcd/etcd1.etcd"
ETCD_WAL_DIR="/var/lib/etcd/wal"
ETCD_LISTEN_PEER_URLS="https://192.168.253.220:2380"
ETCD_LISTEN_CLIENT_URLS="https://192.168.253.220:2379"
#ETCD_CORS=""

# [cluster]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://192.168.253.220:2380"
# if you use different ETCD_NAME (e.g. test), set ETCD_INITIAL_CLUSTER value for this name, i.e. "test=http://..."
ETCD_INITIAL_CLUSTER="etcd1=https://192.168.253.220:2380,etcd2=https://192.168.253.221:2380,etcd3=https://192.168.253.222:2380"
ETCD_INITIAL_CLUSTER_STATE="new"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_ADVERTISE_CLIENT_URLS="https://192.168.253.220:2379"
#ETCD_DISCOVERY=""
#ETCD_DISCOVERY_SRV=""
#ETCD_DISCOVERY_FALLBACK="proxy"
#ETCD_DISCOVERY_PROXY=""
#ETCD_STRICT_RECONFIG_CHECK="false"
#ETCD_AUTO_COMPACTION_RETENTION="0"

# [proxy]
#ETCD_PROXY="off"
#ETCD_PROXY_FAILURE_WAIT="5000"
#ETCD_PROXY_REFRESH_INTERVAL="30000"
#ETCD_PROXY_DIAL_TIMEOUT="1000"
#ETCD_PROXY_WRITE_TIMEOUT="5000"
#ETCD_PROXY_READ_TIMEOUT="0"

# [security]
ETCD_CERT_FILE="/etc/etcd/ssl/etcd.pem"
ETCD_KEY_FILE="/etc/etcd/ssl/etcd-key.pem"
ETCD_CLIENT_CERT_AUTH="true"
ETCD_TRUSTED_CA_FILE="/etc/etcd/ssl/etcd-root-ca.pem"
ETCD_AUTO_TLS="true"
ETCD_PEER_CERT_FILE="/etc/etcd/ssl/etcd.pem"
ETCD_PEER_KEY_FILE="/etc/etcd/ssl/etcd-key.pem"
ETCD_PEER_CLIENT_CERT_AUTH="true"
ETCD_PEER_TRUSTED_CA_FILE="/etc/etcd/ssl/etcd-root-ca.pem"
ETCD_PEER_AUTO_TLS="true"

# [logging]
#ETCD_DEBUG="false"
# examples for -log-package-levels etcdserver=WARNING,security=DEBUG
#ETCD_LOG_PACKAGE_LEVELS=""
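
The file above is for master1 (etcd1). On master2 and master3 only the member name, the data directory and the local IP addresses change; for example, the lines that differ on master2 (192.168.253.221) would be:

ETCD_NAME=etcd2
ETCD_DATA_DIR="/var/lib/etcd/etcd2.etcd"
ETCD_LISTEN_PEER_URLS="https://192.168.253.221:2380"
ETCD_LISTEN_CLIENT_URLS="https://192.168.253.221:2379"
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://192.168.253.221:2380"
ETCD_ADVERTISE_CLIENT_URLS="https://192.168.253.221:2379"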
4.4 Start and verify

Once the configuration is done, start the service on each master node. If startup fails, clear everything under /var/lib/etcd/, temporarily set ETCD_INITIAL_CLUSTER in the configuration file to only the local member, start the service, and after it comes up cleanly add the remaining members back and restart.

systemctl daemon-reload
systemctl start etcd
systemctl enable etcd

Verify the cluster:

$ etcdctl --ca-file=/etc/etcd/ssl/etcd-root-ca.pem \
--cert-file=/etc/etcd/ssl/etcd.pem \
--key-file=/etc/etcd/ssl/etcd-key.pem \
--endpoints=https://192.168.253.220:2379,https://192.168.253.221:2379,https://192.168.253.222:2379 cluster-health
2018-01-23 16:02:07.361911 I | warning: ignoring ServerName for user-provided CA for backwards compatibility is deprecated
2018-01-23 16:02:07.362707 I | warning: ignoring ServerName for user-provided CA for backwards compatibility is deprecated
2018-01-23 16:02:07.363214 I | warning: ignoring ServerName for user-provided CA for backwards compatibility is deprecated
member 38c96b69cf6ad45d is healthy: got healthy result from https://192.168.253.220:2379
member 5d19883316781cd5 is healthy: got healthy result from https://192.168.253.221:2379
member a0de9d2dac2c0f3d is healthy: got healthy result from https://192.168.253.222:2379
cluster is healthy
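
The member list, queried with the same TLS flags, additionally shows which member is currently the leader:

$ etcdctl --ca-file=/etc/etcd/ssl/etcd-root-ca.pem \
--cert-file=/etc/etcd/ssl/etcd.pem \
--key-file=/etc/etcd/ssl/etcd-key.pem \
--endpoints=https://192.168.253.220:2379 member list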

5. Deploy the HA Masters

5.1 HA master overview

What is usually called Kubernetes HA is essentially HA for the API server: the other master components, such as controller-manager and scheduler, already support leader election, while the API server is just a stateless service that accepts requests. There are therefore two common ways to make the API server highly available: put a VIP in front of multiple API servers, or use an nginx reverse proxy. This guide uses the latter.

5.2 Install the Kubernetes master packages

Perform all of the following steps on the master nodes.

The master nodes need the following two packages:

rpm -ivh kubernetes-client-1.7.2-1.el7.centos.x86_64.rpm
rpm -ivh kubernetes-master-1.7.2-1.el7.centos.x86_64.rpm

Store the configuration files in a common location and distribute the certificates:

mkdir /etc/kubernetes/ssl
cp k8s_certs/*.pem /etc/kubernetes/ssl
cp k8s_config/*.kubeconfig /etc/kubernetes
cp token.csv /etc/kubernetes
chown -R kube: /etc/kubernetes/ssl

Create the audit log directory and fix its permissions so that the API server can start:

mkdir /var/log/kube-audit
chown -R kube:kube /var/log/kube-audit
chmod -R 755 /var/log/kube-audit
5.3 Configure the masters

On the masters, four files need to be edited: config, apiserver, controller-manager and scheduler. The changes are as follows.

  • config (common configuration)
###
# kubernetes system config
#
# The following values are used to configure various aspects of all
# kubernetes services, including
#
#   kube-apiserver.service
#   kube-controller-manager.service
#   kube-scheduler.service
#   kubelet.service
#   kube-proxy.service
# logging to stderr means we get it in the systemd journal
KUBE_LOGTOSTDERR="--logtostderr=true"

# journal message level, 0 is debug
KUBE_LOG_LEVEL="--v=2"

# Should this cluster be allowed to run privileged docker containers
KUBE_ALLOW_PRIV="--allow-privileged=true"

# How the controller-manager, scheduler, and proxy find the apiserver
KUBE_MASTER="--master=http://127.0.0.1:8080"
  • apiserver configuration
###
# kubernetes system config
#
# The following values are used to configure the kube-apiserver
#

# The address on the local server to listen to.
#KUBE_API_ADDRESS="--insecure-bind-address=127.0.0.1"
KUBE_API_ADDRESS="--advertise-address=192.168.253.220 --insecure-bind-address=127.0.0.1 --bind-address=192.168.253.220"


# The port on the local server to listen on.
# KUBE_API_PORT="--port=8080"
KUBE_API_PORT="--insecure-port=8080 --secure-port=6443"

# Port minions listen on
# KUBELET_PORT="--kubelet-port=10250"

# Comma separated list of nodes in the etcd cluster
KUBE_ETCD_SERVERS="--etcd-servers=https://192.168.253.220:2379,https://192.168.253.221:2379,https://192.168.253.222:2379"

# Address range to use for services
KUBE_SERVICE_ADDRESSES="--service-cluster-ip-range=10.254.0.0/16"

# default admission control policies
KUBE_ADMISSION_CONTROL="--admission-control=NamespaceLifecycle,LimitRanger,SecurityContextDeny,ServiceAccount,ResourceQuota"

# Add your own!
KUBE_API_ARGS="--authorization-mode=RBAC \
               --runtime-config=rbac.authorization.k8s.io/v1beta1 \
               --anonymous-auth=false \
               --kubelet-https=true \
               --experimental-bootstrap-token-auth \
               --token-auth-file=/etc/kubernetes/token.csv \
               --service-node-port-range=30000-50000 \
               --tls-cert-file=/etc/kubernetes/ssl/kubernetes.pem \
               --tls-private-key-file=/etc/kubernetes/ssl/kubernetes-key.pem \
               --client-ca-file=/etc/kubernetes/ssl/k8s-root-ca.pem \
               --service-account-key-file=/etc/kubernetes/ssl/k8s-root-ca.pem \
               --etcd-quorum-read=true \
               --etcd-cafile=/etc/etcd/ssl/etcd-root-ca.pem \
               --etcd-certfile=/etc/etcd/ssl/etcd.pem \
               --etcd-keyfile=/etc/etcd/ssl/etcd-key.pem \
               --enable-swagger-ui=true \
               --apiserver-count=2 \
               --audit-log-maxage=30 \
               --audit-log-maxbackup=3 \
               --audit-log-maxsize=100 \
               --audit-log-path=/var/log/kube-audit/audit.log \
               --event-ttl=1h"
  • controller-manager configuration
###
# The following values are used to configure the kubernetes controller-manager

# defaults from config and apiserver should be adequate

# Add your own!
KUBE_CONTROLLER_MANAGER_ARGS="--address=0.0.0.0 \
                              --service-cluster-ip-range=10.254.0.0/16 \
                              --cluster-name=kubernetes \
                              --cluster-signing-cert-file=/etc/kubernetes/ssl/k8s-root-ca.pem \
                              --cluster-signing-key-file=/etc/kubernetes/ssl/k8s-root-ca-key.pem \
                              --service-account-private-key-file=/etc/kubernetes/ssl/k8s-root-ca-key.pem \
                              --root-ca-file=/etc/kubernetes/ssl/k8s-root-ca.pem \
                              --experimental-cluster-signing-duration=87600h0m0s \
                              --leader-elect=true \
                              --node-monitor-grace-period=40s \
                              --node-monitor-period=5s \
                              --pod-eviction-timeout=5m0s"
  • scheduler configuration
###
# kubernetes scheduler config

# default config should be adequate

# Add your own!
KUBE_SCHEDULER_ARGS="--leader-elect=true --address=0.0.0.0"

On the other master nodes, simply adjust the IP addresses accordingly. Once editing is complete, start the services:

systemctl start kube-apiserver
systemctl start kube-controller-manager
systemctl start kube-scheduler
systemctl enable kube-apiserver
systemctl enable kube-controller-manager
systemctl enable kube-scheduler

With no configuration at all, kubectl connects to the local port 8080 by default, so the following command can be used to check whether the master was configured successfully.

$ kubectl get cs
NAME                 STATUS    MESSAGE              ERROR
scheduler            Healthy   ok
controller-manager   Healthy   ok
etcd-0               Healthy   {"health": "true"}
etcd-1               Healthy   {"health": "true"}
etcd-2               Healthy   {"health": "true"}

6. Deploy the Nodes

6.1 Install the node packages

Install the packages:

rpm -ivh kubernetes-node-1.7.2-1.el7.centos.x86_64.rpm
rpm -ivh kubernetes-client-1.7.2-1.el7.centos.x86_64.rpm

Distribute the certificates:

mkdir /etc/kubernetes/ssl
cp k8s_certs/*.pem /etc/kubernetes/ssl
cp k8s_config/*.kubeconfig /etc/kubernetes
cp token.csv /etc/kubernetes
chown -R kube: /etc/kubernetes/ssl
6.2 Edit the node configuration

On the nodes, three files need to be edited: config, kubelet and proxy. The changes are as follows.

Note: none of these configuration files (config, kubelet, proxy) define the API server address. The kubelet is started with the --require-kubeconfig option, and both kubelet and kube-proxy read the API server address from their *.kubeconfig files, ignoring anything set in the configuration files, so an address set here would have no effect.

  • config (common configuration)
###
# kubernetes system config
#
# The following values are used to configure various aspects of all
# kubernetes services, including
#
#   kube-apiserver.service
#   kube-controller-manager.service
#   kube-scheduler.service
#   kubelet.service
#   kube-proxy.service
# logging to stderr means we get it in the systemd journal
KUBE_LOGTOSTDERR="--logtostderr=true"

# journal message level, 0 is debug
KUBE_LOG_LEVEL="--v=2"

# Should this cluster be allowed to run privileged docker containers
KUBE_ALLOW_PRIV="--allow-privileged=true"

# How the controller-manager, scheduler, and proxy find the apiserver
#KUBE_MASTER="--master=http://127.0.0.1:8080"
  • kubelet configuration
###
# kubernetes kubelet (minion) config

# The address for the info server to serve on (set to 0.0.0.0 or "" for all interfaces)
KUBELET_ADDRESS="--address=192.168.253.223"

# The port for the info server to serve on
# KUBELET_PORT="--port=10250"

# You may leave this blank to use the actual hostname
KUBELET_HOSTNAME="--hostname-override=node3"

# location of the api-server
#KUBELET_API_SERVER="--api-servers=http://127.0.0.1:8080"

# Add your own!
KUBELET_ARGS="--cgroup-driver=cgroupfs \
              --cluster-dns=10.254.0.2 \
              --resolv-conf=/etc/resolv.conf \
              --experimental-bootstrap-kubeconfig=/etc/kubernetes/bootstrap.kubeconfig \
              --kubeconfig=/etc/kubernetes/kubelet.kubeconfig \
              --require-kubeconfig \
              --cert-dir=/etc/kubernetes/ssl \
              --cluster-domain=cluster.local. \
              --hairpin-mode promiscuous-bridge \
              --serialize-image-pulls=false \
              --pod-infra-container-image=gcr.io/google_containers/pause-amd64:3.0"
  • proxy configuration
###
# kubernetes proxy config

# default config should be adequate

# Add your own!
KUBE_PROXY_ARGS="--bind-address=192.168.253.223 \
                 --hostname-override=node3 \
                 --kubeconfig=/etc/kubernetes/kube-proxy.kubeconfig \
                 --cluster-cidr=10.254.0.0/16"
6.3 Create the ClusterRoleBinding

Because the kubelet uses TLS bootstrapping, under the RBAC policy the kubelet-bootstrap user it authenticates as has no API access at all, so a ClusterRoleBinding granting it the system:node-bootstrapper cluster role must be created ahead of time.

Run this on any one of the masters:
kubectl create clusterrolebinding kubelet-bootstrap \
        --clusterrole=system:node-bootstrapper \
        --user=kubelet-bootstrap
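
A quick check that the binding exists:

kubectl describe clusterrolebinding kubelet-bootstrap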
6.4 Create the nginx proxy

Following the master HA architecture described above, every node should connect to a local nginx proxy, and nginx then load-balances across the API servers. The nginx proxy configuration is as follows.

mkdir -p /etc/nginx

# Write the proxy configuration
$ cat << EOF >> /etc/nginx/nginx.conf
error_log stderr notice;

worker_processes auto;
events {
  multi_accept on;
  use epoll;
  worker_connections 1024;
}

stream {
    upstream kube_apiserver {
        least_conn;
        server 192.168.253.221:6443;
        server 192.168.253.222:6443;
    }

    server {
        listen        0.0.0.0:6443;
        proxy_pass    kube_apiserver;
        proxy_timeout 10m;
        proxy_connect_timeout 1s;
    }
}
EOF

# Fix permissions
chmod +r /etc/nginx/nginx.conf
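
Before wiring it into systemd, the configuration can be syntax-checked with the same image the unit below uses (a quick sanity check; nginx -t only validates the file, it does not start the proxy):

docker run --rm -v /etc/nginx:/etc/nginx nginx:1.13.3-alpine nginx -t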

For reliability, and for convenience, nginx on the nodes is started with Docker and supervised by systemd. The systemd unit is as follows.

cat << EOF >> /etc/systemd/system/nginx-proxy.service
[Unit]
Description=kubernetes apiserver docker wrapper
Wants=docker.socket
After=docker.service

[Service]
User=root
PermissionsStartOnly=true
ExecStart=/usr/bin/docker run -p 127.0.0.1:6443:6443 \\
                              -v /etc/nginx:/etc/nginx \\
                              --name nginx-proxy \\
                              --net=host \\
                              --restart=on-failure:5 \\
                              --memory=512M \\
                              nginx:1.13.3-alpine
ExecStartPre=-/usr/bin/docker rm -f nginx-proxy
ExecStop=/usr/bin/docker stop nginx-proxy
Restart=always
RestartSec=15s
TimeoutStartSec=30s

[Install]
WantedBy=multi-user.target
EOF

Finally start nginx, install kubectl on every node, and use kubectl to check that the API servers are reachable through the proxy.

systemctl daemon-reload
systemctl start nginx-proxy
systemctl enable nginx-proxy

Test with kubectl:

$ kubectl --server=https://127.0.0.1:6443 \
        --certificate-authority=/etc/kubernetes/ssl/k8s-root-ca.pem \
        --client-certificate=/etc/kubernetes/ssl/admin.pem \
        --client-key=/etc/kubernetes/ssl/admin-key.pem  \
        get cs
NAME                 STATUS    MESSAGE              ERROR
etcd-2               Healthy   {"health": "true"}
etcd-1               Healthy   {"health": "true"}
etcd-0               Healthy   {"health": "true"}
controller-manager   Healthy   ok
scheduler            Healthy   ok
6.5 Join the nodes to the cluster

Start the kubelet service on the nodes. Once running, the kubelet submits a certificate signing request, which is signed on the master side by the controller-manager service.

systemctl daemon-reload
systemctl start kubelet
systemctl enable kubelet

Now simply approve the certificate requests on a master.

# List the CSRs
$ kubectl get csr
NAME        AGE       REQUESTOR           CONDITION
csr-l9d25   2m        kubelet-bootstrap   Pending
csr-g6f35   2m        kubelet-bootstrap   Pending

# Approve the certificates
$ kubectl certificate approve csr-l9d25
certificatesigningrequest "csr-l9d25" approved
$ kubectl certificate approve csr-g6f35
certificatesigningrequest "csr-g6f35" approved

# List the nodes
$ kubectl get node
NAME      STATUS    AGE       VERSION
node3     Ready     26s        v1.7.2
node4     Ready     26s        v1.7.2

7. Deploy Flannel

7.1 Download Flannel

Download the latest Flannel binary release from GitHub, unpack it, and move the flanneld and mk-docker-opts.sh files into /usr/local/bin/.

$ wget https://github.com/coreos/flannel/releases/download/v0.9.1/flannel-v0.9.1-linux-amd64.tar.gz
$ tar xf flannel-v0.9.1-linux-amd64.tar.gz
$ mv flanneld mk-docker-opts.sh /usr/local/bin/
7.2 Configure Flannel

Flannel relies on etcd to keep the cluster's IP allocation conflict-free, so the IP range that the Flannel nodes will use must first be written into etcd:

etcdctl \
		--ca-file=/etc/etcd/ssl/etcd-root-ca.pem \
		--cert-file=/etc/etcd/ssl/etcd.pem \
		--key-file=/etc/etcd/ssl/etcd-key.pem \
		--endpoints=https://192.168.253.221:2379,https://192.168.253.222:2379 \
		set /coreos.com/network/config '{ "Network": "172.17.0.0/16" }'
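
The key can be read back with the same TLS flags to confirm it was written:

etcdctl \
		--ca-file=/etc/etcd/ssl/etcd-root-ca.pem \
		--cert-file=/etc/etcd/ssl/etcd.pem \
		--key-file=/etc/etcd/ssl/etcd-key.pem \
		--endpoints=https://192.168.253.221:2379,https://192.168.253.222:2379 \
		get /coreos.com/network/config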

Run flanneld under systemd:

$ vim /etc/systemd/system/flannel.service
[Unit]
Description=flannel

[Service]
User=root
PermissionsStartOnly=true
ExecStart=/usr/local/bin/flanneld -etcd-cafile=/etc/etcd/ssl/etcd-root-ca.pem \
                                  -etcd-certfile=/etc/etcd/ssl/etcd.pem \
                                  -etcd-keyfile=/etc/etcd/ssl/etcd-key.pem \
                                  -etcd-endpoints=https://192.168.253.221:2379,https://192.168.253.222:2379
ExecStop=/usr/bin/pkill flanneld
Restart=always
RestartSec=10

[Install]
WantedBy=multi-user.target

Start Flannel:

$ systemctl start flannel
$ systemctl enable flannel

7.3 Configure Docker

Once flanneld is running it generates an environment file containing the parameters this host needs to communicate over the Flannel network:

$ cat /run/flannel/subnet.env
FLANNEL_NETWORK=172.17.0.0/16
FLANNEL_SUBNET=172.17.46.1/24
FLANNEL_MTU=1472
FLANNEL_IPMASQ=false

Update the docker0 interface:

source /run/flannel/subnet.env
ifconfig docker0 ${FLANNEL_SUBNET}

The docker0 interface now has an IP address inside the Flannel subnet:

$ ifconfig docker0
docker0: flags=4099<UP,BROADCAST,MULTICAST>  mtu 1500
        inet 172.17.46.1  netmask 255.255.255.0  broadcast 172.17.46.255
        ether 02:42:52:be:0a:4a  txqueuelen 0  (Ethernet)
        RX packets 1  bytes 76 (76.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 0  bytes 0 (0.0 B)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0
$ ifconfig flannel0
flannel0: flags=4305<UP,POINTOPOINT,RUNNING,NOARP,MULTICAST>  mtu 1472
        inet 172.17.46.0  netmask 255.255.0.0  destination 172.17.46.0
        unspec 00-00-00-00-00-00-00-00-00-00-00-00-00-00-00-00  txqueuelen 500  (UNSPEC)
        RX packets 3  bytes 252 (252.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 3  bytes 252 (252.0 B)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

Generate the environment variables for Docker; by default they are saved to /run/docker_opts.env:

$ mk-docker-opts.sh
$ cat /run/docker_opts.env
DOCKER_OPT_BIP="--bip=172.17.55.1/24"
DOCKER_OPT_IPMASQ="--ip-masq=true"
DOCKER_OPT_MTU="--mtu=1472"
DOCKER_OPTS=" --bip=172.17.55.1/24 --ip-masq=true --mtu=1472"

Update the Docker startup parameters:

# Edit the systemd service unit
vim /usr/lib/systemd/system/docker.service
# Add the Flannel-provided options to the start command
ExecStart=/usr/bin/dockerd $DOCKER_OPTS
# Point to the file that holds these options (this line is new and also goes in the [Service] section)
EnvironmentFile=/run/docker_opts.env

Restart Docker:

$ systemctl daemon-reload
$ systemctl restart docker
7.4 Test

Start a container on each node and ping from one to the other; a sketch is shown below.
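
Assuming the busybox image can be pulled on both nodes, the test might look like this (<node3-container-ip> is a placeholder for the address printed by docker inspect):

# on node3: start a test container and print its Flannel-assigned IP
$ docker run -d --name flannel-test busybox sleep 3600
$ docker inspect -f '{{.NetworkSettings.IPAddress}}' flannel-test

# on node4: ping that IP (it will be inside node3's FLANNEL_SUBNET)
$ docker run --rm busybox ping -c 3 <node3-container-ip>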
