Container Orchestration Wars (Part 6 of the series)

Kubernetes Cluster Deployment Options

Option 1: Minikube

Minikube is a tool that quickly spins up a single-node Kubernetes on your local machine, aimed at users trying out Kubernetes or doing day-to-day development. It is not suitable for production.

Official docs: https://kubernetes.io/docs/setup/minikube/

Option 2: kubeadm

Kubeadm is also a tool; it provides the kubeadm init and kubeadm join commands for quickly bootstrapping a Kubernetes cluster.

Official docs: https://kubernetes.io/docs/reference/setup-tools/kubeadm/kubeadm/

Option 3: Install straight from the epel-release yum repository. The drawback is the dated version (1.5).

Option 4: Binary packages

Download the release binaries from the official site and deploy each component by hand to assemble a Kubernetes cluster.

The project also provides an interactive sandbox for experimenting: https://kubernetes.io/cn/docs/tutorials/kubernetes-basics/cluster-interactive/

Deploying a k8s Cluster from Binaries

Objectives:
1. Plan the Kubernetes cluster deployment architecture
2. Deploy the etcd cluster
3. Install Docker on the Node machines
4. Deploy the Flannel network
5. Deploy components on the Master nodes
6. Deploy components on the Node machines
7. Check cluster status
8. Run a test example
9. Deploy the Dashboard (Web UI)

1. Kubernetes Cluster Deployment Architecture Plan

Operating system:
CentOS 7.6 x86_64

Software versions:
Docker 18.09.0-ce
Kubernetes 1.1

Server roles, IPs, and components:
k8s-master1       192.168.246.162   kube-apiserver, kube-controller-manager, kube-scheduler, etcd
k8s-master2       192.168.246.163   kube-apiserver, kube-controller-manager, kube-scheduler
k8s-node1         192.168.246.164   kubelet, kube-proxy, docker, flannel, etcd
k8s-node2         192.168.246.165   kubelet, kube-proxy, docker, flannel, etcd
Master LB         192.168.246.166   LVS
Image registry    10.206.240.188    Harbor

Machine requirements: at least 2 GB of RAM.

Hostnames must be changed, and every host must resolve every other:
[root@k8s-master1 ~]# vim /etc/hosts
192.168.246.162 k8s-master1
192.168.246.163 k8s-master2
192.168.246.164 k8s-node1
192.168.246.165 k8s-node2
192.168.246.166 lvs-server

Disable the firewall and SELinux on every machine.
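On CentOS 7 this is typically done as follows (setenforce 0 takes effect immediately; the sed edit makes it survive reboots):
# systemctl stop firewalld && systemctl disable firewalld
# setenforce 0
# sed -i 's/^SELINUX=enforcing/SELINUX=disabled/' /etc/selinux/config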

Load balancer:

In a cloud environment:
a managed load balancer such as SLB can be used.

Outside the cloud:
any mainstream software load balancer, e.g. LVS, HAProxy, or Nginx.

Here Nginx is used as the load balancer in front of the apiservers. [architecture diagram]


Install nginx and configure its stream module as a layer-4 reverse proxy:
user  nginx;
worker_processes  4;

error_log  /var/log/nginx/error.log warn;
pid        /var/run/nginx.pid;

events {
    worker_connections  1024;
}

stream {
    log_format  main  '$remote_addr $upstream_addr - [$time_local] $status $upstream_bytes_sent';
    access_log  /var/log/nginx/k8s-access.log  main;

    # Both apiservers; nginx balances TCP connections across them.
    upstream k8s-apiserver {
        server 192.168.246.162:6443;
        server 192.168.246.163:6443;
    }
    server {
        listen 6443;              # clients connect to the LB on the usual apiserver port
        proxy_pass k8s-apiserver;
    }
}
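A quick sanity check after installing this config (standard nginx and iproute2 commands):
# nginx -t
# systemctl restart nginx
# ss -lntp | grep 6443    # nginx should now be listening on 6443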

Prepare the environment

Three machines, each resolving all the others' hostnames, with the firewall and SELinux disabled:
[root@k8s-master ~]# vim /etc/hosts
192.168.96.134 k8s-master
192.168.96.135 k8s-node1
192.168.96.136 k8s-node2
2. Deploy the etcd Cluster

We use cfssl to generate self-signed certificates. Any machine can do the signing; what matters is knowing how to generate the certificates and how to use them, so don't dwell on this step. Generate them wherever convenient, then copy them to whichever hosts need them.

Download the cfssl tools (these are standalone executable binaries; just download and run):
[root@k8s-master1 ~]# wget https://pkg.cfssl.org/R1.2/cfssl_linux-amd64
[root@k8s-master1 ~]# wget https://pkg.cfssl.org/R1.2/cfssljson_linux-amd64
[root@k8s-master1 ~]# wget https://pkg.cfssl.org/R1.2/cfssl-certinfo_linux-amd64
[root@k8s-master1 ~]# chmod +x cfssl_linux-amd64 cfssljson_linux-amd64 cfssl-certinfo_linux-amd64
[root@k8s-master1 ~]# mv cfssl_linux-amd64 /usr/local/bin/cfssl
[root@k8s-master1 ~]# mv cfssljson_linux-amd64 /usr/local/bin/cfssljson
[root@k8s-master1 ~]# mv cfssl-certinfo_linux-amd64 /usr/local/bin/cfssl-certinfo
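Confirm the tools are on the PATH:
# cfssl version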
​
Generate the etcd certificates.
Create the following three files:
[root@k8s-master1 ~]# mkdir cert
[root@k8s-master1 ~]# cd cert
[root@k8s-master1 cert]# vim ca-config.json  # CA signing policy (87600h ≈ 10 years)
[root@k8s-master1 cert]# cat ca-config.json
{
  "signing": {
    "default": {
      "expiry": "87600h"
    },
    "profiles": {
      "www": {
         "expiry": "87600h",
         "usages": [
            "signing",
            "key encipherment",
            "server auth",
            "client auth"
        ]
      }
    }
  }
}
​
[root@k8s-master1 cert]# vim ca-csr.json  # certificate signing request for the CA itself
[root@k8s-master1 cert]# cat ca-csr.json
{
    "CN": "etcd CA",
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "L": "Beijing",
            "ST": "Beijing"
        }
    ]
}
​
[root@k8s-master1 cert]# vim server-csr.json # CSR for the etcd server cert; hosts must include every etcd member IP
[root@k8s-master1 cert]# cat server-csr.json
{
    "CN": "etcd",
    "hosts": [
    "192.168.246.162",
    "192.168.246.163",
    "192.168.246.164"
    ],
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "L": "BeiJing",
            "ST": "BeiJing"
        }
    ]
}
​
Generate the certificates:
[root@k8s-master1 cert]# cfssl gencert -initca ca-csr.json | cfssljson -bare ca -
[root@k8s-master1 cert]# cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=www server-csr.json | cfssljson -bare server
[root@k8s-master1 cert]# ls *pem
ca-key.pem  ca.pem  server-key.pem  server.pem
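Optionally, inspect the server certificate with the cfssl-certinfo tool downloaded above to confirm its SANs and expiry:
[root@k8s-master1 cert]# cfssl-certinfo -cert server.pem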

Install etcd. Binary package download: https://github.com/coreos/etcd/releases/tag/v3.2.12

The steps below are performed identically on all three planned etcd nodes; the only per-node difference is that the IPs in the etcd config file must be the node's own.
Unpack the binary package (on all three machines):
# wget https://github.com/etcd-io/etcd/releases/download/v3.2.12/etcd-v3.2.12-linux-amd64.tar.gz
# mkdir /opt/etcd/{bin,cfg,ssl} -p
# tar zxvf etcd-v3.2.12-linux-amd64.tar.gz
# mv etcd-v3.2.12-linux-amd64/{etcd,etcdctl} /opt/etcd/bin/
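Confirm the binaries run:
# /opt/etcd/bin/etcd --version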
​
Create the etcd configuration file:
# cd /opt/etcd/cfg/
# vim etcd
# cat /opt/etcd/cfg/etcd   
#[Member]
ETCD_NAME="etcd01"
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="https://192.168.246.162:2380"   # use this node's own IP
ETCD_LISTEN_CLIENT_URLS="https://192.168.246.162:2379" # use this node's own IP
​
#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://192.168.246.162:2380" # use this node's own IP
ETCD_ADVERTISE_CLIENT_URLS="https://192.168.246.162:2379"  # use this node's own IP
ETCD_INITIAL_CLUSTER="etcd01=https://192.168.246.162:2380,etcd02=https://192.168.246.164:2380,etcd03=https://192.168.246.165:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"
​
Parameter reference:
* ETCD_NAME — node name; must be unique per node
* ETCD_DATA_DIR — data directory (etcd is a database persisted on disk, not held in memory; everything Kubernetes knows about the cluster is stored here)
* ETCD_LISTEN_PEER_URLS — listen address for cluster (peer) traffic
* ETCD_LISTEN_CLIENT_URLS — listen address for client traffic
* ETCD_INITIAL_ADVERTISE_PEER_URLS — peer address advertised to the cluster
* ETCD_ADVERTISE_CLIENT_URLS — client address advertised to clients
* ETCD_INITIAL_CLUSTER — addresses of all cluster members
* ETCD_INITIAL_CLUSTER_TOKEN — cluster token
* ETCD_INITIAL_CLUSTER_STATE — state when joining: new for a brand-new cluster, existing to join one that already exists
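Only ETCD_NAME and the URL values change from node to node. As a minimal sketch, you could keep the file above as a template (here named etcd.tmpl with __NAME__ and __IP__ placeholders; both names are this sketch's own convention, and passwordless SSH to the nodes is assumed) and stamp out the three configs in one loop:

# Stamp each node's name/IP into the template and ship it out.
for pair in etcd01:192.168.246.162 etcd02:192.168.246.164 etcd03:192.168.246.165; do
  name=${pair%%:*}; ip=${pair##*:}                              # split "name:ip"
  sed -e "s/__NAME__/$name/" -e "s/__IP__/$ip/g" etcd.tmpl > "etcd.$name"
  scp "etcd.$name" root@"$ip":/opt/etcd/cfg/etcd                # install as that node's config
done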
​
Manage etcd with systemd:
# vim /usr/lib/systemd/system/etcd.service
# cat /usr/lib/systemd/system/etcd.service 
[Unit]
Description=Etcd Server
After=network.target
After=network-online.target
Wants=network-online.target
​
[Service]
Type=notify
EnvironmentFile=/opt/etcd/cfg/etcd
ExecStart=/opt/etcd/bin/etcd \
--name=${ETCD_NAME} \
--data-dir=${ETCD_DATA_DIR} \
--listen-peer-urls=${ETCD_LISTEN_PEER_URLS} \
--listen-client-urls=${ETCD_LISTEN_CLIENT_URLS},http://127.0.0.1:2379 \
--advertise-client-urls=${ETCD_ADVERTISE_CLIENT_URLS} \
--initial-advertise-peer-urls=${ETCD_INITIAL_ADVERTISE_PEER_URLS} \
--initial-cluster=${ETCD_INITIAL_CLUSTER} \
--initial-cluster-token=${ETCD_INITIAL_CLUSTER_TOKEN} \
--initial-cluster-state=new \
--cert-file=/opt/etcd/ssl/server.pem \
--key-file=/opt/etcd/ssl/server-key.pem \
--peer-cert-file=/opt/etcd/ssl/server.pem \
--peer-key-file=/opt/etcd/ssl/server-key.pem \
--trusted-ca-file=/opt/etcd/ssl/ca.pem \
--peer-trusted-ca-file=/opt/etcd/ssl/ca.pem
Restart=on-failure
LimitNOFILE=65536
​
[Install]
WantedBy=multi-user.target
​
Copy the certificates generated earlier to the location referenced in the unit file (scp the certs generated on the master to the other two machines):
# cd /root/cert/
# cp ca*pem server*pem /opt/etcd/ssl
Copy straight to the two remaining etcd machines:
[root@k8s-master cert]# scp ca*pem server*pem k8s-node1:/opt/etcd/ssl
[root@k8s-master cert]# scp ca*pem server*pem k8s-node2:/opt/etcd/ssl
​
Start etcd on every node and enable it at boot (the first member started may appear to hang until a second member joins and quorum forms, so start all three in quick succession):
# systemctl daemon-reload
# systemctl start etcd
# systemctl enable etcd

With all three deployed, check cluster health from any of the machines:
# /opt/etcd/bin/etcdctl --ca-file=/opt/etcd/ssl/ca.pem --cert-file=/opt/etcd/ssl/server.pem --key-file=/opt/etcd/ssl/server-key.pem --endpoints="https://192.168.246.162:2379,https://192.168.246.164:2379,https://192.168.246.165:2379" cluster-health
​
member 18218cfabd4e0dea is healthy: got healthy result from https://192.168.246.162:2379
member 541c1c40994c939b is healthy: got healthy result from https://192.168.246.164:2379
member a342ea2798d20705 is healthy: got healthy result from https://192.168.246.165:2379
cluster is healthy

Output like this means the cluster deployed successfully (your member IDs will differ).

If anything goes wrong, check the logs first: /var/log/messages or journalctl -u etcd
​
A common error:
Jan 15 12:06:55 k8s-master1 etcd: request cluster ID mismatch (got 99f4702593c94f98 want cdf818194e3a8c32)
Cause and fix: during setup a single etcd instance was started on its own as a test; when the rest of the cluster first starts and bootstraps, that leftover member data carries the old cluster ID. Delete the stale member data on every node, then restart etcd:
[root@k8s-master1 default.etcd]# pwd
/var/lib/etcd/default.etcd
[root@k8s-master1 default.etcd]# rm -rf member/
3. Install Docker on the Node Machines
​
# yum remove docker \
docker-client \
docker-client-latest \
docker-common \
docker-latest \
docker-latest-logrotate \
docker-logrotate \
docker-selinux \
docker-engine-selinux \
docker-engine
​
# yum install -y yum-utils device-mapper-persistent-data lvm2 git
# yum-config-manager --add-repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
# yum install docker-ce -y
​
Start Docker and enable it at boot:
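For example:
# systemctl start docker
# systemctl enable docker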
​
# curl -sSL https://get.daocloud.io/daotools/set_mirror.sh | sh -s http://bc437cce.m.daocloud.io   # configure a registry mirror (accelerator)

4. Deploy the Flannel Network

Flannel stores its own subnet information in etcd, so it must be able to reach the etcd cluster; first write the predefined subnet range into etcd.
Deploy flannel on the node machines. It exists to let all the containers communicate with each other, so if you run no workloads on a master there is no need to deploy flannel there.
[root@k8s-master ~]# scp -r cert/ k8s-node1:/root/  # copy the generated certs to the remaining machines
[root@k8s-master ~]# scp -r cert/ k8s-node2:/root/
[root@k8s-master ~]# cd cert/
/opt/etcd/bin/etcdctl \
--ca-file=ca.pem --cert-file=server.pem --key-file=server-key.pem \
--endpoints="https://192.168.246.162:2379,https://192.168.246.164:2379,https://192.168.246.165:2379" \
set /coreos.com/network/config  '{ "Network": "172.17.0.0/16", "Backend": {"Type": "vxlan"}}'
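To confirm the key was written, read it back with the same flags:
/opt/etcd/bin/etcdctl \
--ca-file=ca.pem --cert-file=server.pem --key-file=server-key.pem \
--endpoints="https://192.168.246.162:2379,https://192.168.246.164:2379,https://192.168.246.165:2379" \
get /coreos.com/network/config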
​
Note: the deployment steps below are performed on every planned node.
Download the binary package:
# wget https://github.com/coreos/flannel/releases/download/v0.10.0/flannel-v0.10.0-linux-amd64.tar.gz
# tar zxvf flannel-v0.10.0-linux-amd64.tar.gz
# mkdir -pv /opt/kubernetes/bin
# mv flanneld mk-docker-opts.sh /opt/kubernetes/bin
​
Configure Flannel:
# mkdir -pv /opt/kubernetes/cfg/
# vim /opt/kubernetes/cfg/flanneld
# cat /opt/kubernetes/cfg/flanneld
FLANNEL_OPTIONS="--etcd-endpoints=https://192.168.246.162:2379,https://192.168.246.164:2379,https://192.168.246.165:2379 -etcd-cafile=/opt/etcd/ssl/ca.pem -etcd-certfile=/opt/etcd/ssl/server.pem -etcd-keyfile=/opt/etcd/ssl/server-key.pem"
​
Manage Flannel with systemd:
# vim /usr/lib/systemd/system/flanneld.service
# cat /usr/lib/systemd/system/flanneld.service
[Unit]
Description=Flanneld overlay address etcd agent
After=network-online.target network.target
Before=docker.service
​
[Service]
Type=notify
EnvironmentFile=/opt/kubernetes/cfg/flanneld
ExecStart=/opt/kubernetes/bin/flanneld --ip-masq $FLANNEL_OPTIONS
ExecStartPost=/opt/kubernetes/bin/mk-docker-opts.sh -k DOCKER_NETWORK_OPTIONS -d /run/flannel/subnet.env
Restart=on-failure
​
[Install]
WantedBy=multi-user.target
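After flanneld starts, the ExecStartPost step renders the node's subnet lease into /run/flannel/subnet.env as a Docker options variable; judging from the dockerd flags shown later in this section, its contents look something like:
DOCKER_NETWORK_OPTIONS=" --bip=172.17.77.1/24 --ip-masq=false --mtu=1450"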
​
Configure Docker to start on the Flannel-assigned subnet (you can simply overwrite the existing docker.service unit file):
# vim /usr/lib/systemd/system/docker.service 
[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
After=network-online.target firewalld.service
Wants=network-online.target
​
[Service]
Type=notify
EnvironmentFile=/run/flannel/subnet.env
ExecStart=/usr/bin/dockerd $DOCKER_NETWORK_OPTIONS
ExecReload=/bin/kill -s HUP $MAINPID
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
TimeoutStartSec=0
Delegate=yes
KillMode=process
Restart=on-failure
StartLimitBurst=3
StartLimitInterval=60s
​
[Install]
WantedBy=multi-user.target
​
Copy the certificate files from the master to node1 and node2 (the nodes have no certificates yet, but flannel needs them):
# mkdir -pv /opt/etcd/ssl/
# scp /opt/etcd/ssl/*  k8s-node1:/opt/etcd/ssl/
# scp /opt/etcd/ssl/*  k8s-node2:/opt/etcd/ssl/
​
Restart flannel and docker:
# systemctl daemon-reload
# systemctl start flanneld
# systemctl enable flanneld
# systemctl daemon-reload
# systemctl restart docker
​
Note: if flannel will not start, first check that the configured subnet range is correct.
​
Verify that it took effect:
[root@k8s-node1 ~]# ps -ef | grep docker
root       3632      1  1 22:19 ?        00:00:00 /usr/bin/dockerd --bip=172.17.77.1/24 --ip-masq=false --mtu=1450
# ip addr
3: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN 
    link/ether 02:42:cd:f6:c9:cc brd ff:ff:ff:ff:ff:ff
    inet 172.17.77.1/24 brd 172.17.77.255 scope global docker0
       valid_lft forever preferred_lft forever
4: flannel.1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue state UNKNOWN 
    link/ether ba:96:dc:cc:25:e0 brd ff:ff:ff:ff:ff:ff
    inet 172.17.77.0/32 scope global flannel.1
       valid_lft forever preferred_lft forever
    inet6 fe80::b896:dcff:fecc:25e0/64 scope link 
       valid_lft forever preferred_lft forever
       
Notes:
1. Make sure docker0 and flannel.1 are on the same subnet.
2. Test connectivity between nodes by pinging another node's docker0 IP from the current node. Example: from node1, ping the docker0 address on node2:
[root@k8s-node1 ~]# ping 172.17.33.1
PING 172.17.33.1 (172.17.33.1) 56(84) bytes of data.
64 bytes from 172.17.33.1: icmp_seq=1 ttl=64 time=0.520 ms
64 bytes from 172.17.33.1: icmp_seq=2 ttl=64 time=0.972 ms
64 bytes from 172.17.33.1: icmp_seq=3 ttl=64 time=0.642 ms
If the ping succeeds, Flannel is deployed correctly. If not, check the logs: journalctl -u flanneld
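You can also list the per-node subnet leases that flannel has recorded in etcd (same etcdctl flags as before, run from the cert directory):
/opt/etcd/bin/etcdctl \
--ca-file=ca.pem --cert-file=server.pem --key-file=server-key.pem \
--endpoints="https://192.168.246.162:2379,https://192.168.246.164:2379,https://192.168.246.165:2379" \
ls /coreos.com/network/subnets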