Theory + practice: k8s deployment approach + SSL + etcd + flannel


Preface:
Cluster deployment steps, in order:

Self-sign the SSL certificates
Deploy the etcd database cluster
Install Docker on the nodes
Deploy the flannel container cluster network
Deploy the master components
Deploy the node components
Deploy a test example
Deploy the web UI (dashboard)
Deploy the in-cluster DNS resolution service (CoreDNS)

1: The three officially supported deployment methods

1.1 minikube

minikube is a tool that quickly runs a single-node Kubernetes locally; it is intended only for trying out K8S or as a test environment for day-to-day development

Docs: https://kubernetes.io/docs/setup/minikube/

1.2 kubeadm

kubeadm is also a tool; it provides kubeadm init and kubeadm join for quickly deploying a Kubernetes cluster

Docs: https://kubernetes.io/docs/reference/setup-tools/kubeadm/kubeadm/

1.3 Binary packages

Download the release binary packages from upstream and deploy each component by hand to build up a Kubernetes cluster

Download: https://github.com/kubernetes/kubernetes/releases

https://github.com/kubernetes/kubernetes/releases?after=v1.13.1

Here I use the binary, component-by-component installation method.

2: Kubernetes platform environment planning

2.1 Software versions:

| Software | Version |
| --- | --- |
| Linux OS | CentOS 7 (1908) |
| Docker | docker-ce |
| Kubernetes | v1.13.1 |
| etcd | 3.x |
| flannel | 0.10 |

2.2 Server roles

| No. | Role | IP | Components | Spec (vCPU + GB RAM) |
| --- | --- | --- | --- | --- |
| 0 | master01 | 192.168.247.149/24 | apiserver, controller-manager, scheduler, etcd | 2+4 |
| 1 | master02 | 192.168.247.148/24 | apiserver, controller-manager, scheduler | 2+4 |
| 2 | node01 | 192.168.247.143/24 | kubelet, kube-proxy, docker, flannel, etcd | 2+4 |
| 3 | node02 | 192.168.247.144/24 | kubelet, kube-proxy, docker, flannel, etcd | 2+4 |
| 4 | load balancer | 192.168.247.145/24 | nginx | 1+2 |
| 5 | load balancer | 192.168.247.146/24 | nginx | 1+2 |
| 6 | registry | 192.168.247.147/24 | harbor | 1+2 |

2.3 Multi-master cluster architecture diagram

The architecture also includes a Harbor registry.

(figure: multi-master cluster architecture)

2.4 Initial environment setup (per-component deployment)

Disable NetworkManager, flush iptables, disable SELinux, and set the hostnames.

master01:192.168.247.149/24

[root@localhost ~]# hostnamectl set-hostname master1
[root@localhost ~]# su
[root@master1 ~]# systemctl stop NetworkManager
[root@master1 ~]# systemctl disable NetworkManager
Removed symlink /etc/systemd/system/multi-user.target.wants/NetworkManager.service.
Removed symlink /etc/systemd/system/dbus-org.freedesktop.nm-dispatcher.service.
Removed symlink /etc/systemd/system/network-online.target.wants/NetworkManager-wait-online.service.
[root@master1 ~]# setenforce 0
[root@master1 ~]# sed -i 's/^SELINUX=.*/SELINUX=disabled/' /etc/selinux/config
[root@master1 ~]# iptables -F

node01:192.168.247.143/24

[root@localhost ~]# hostnamectl set-hostname node01
[root@localhost ~]# su
[root@node01 ~]# systemctl stop NetworkManager
[root@node01 ~]# systemctl disable NetworkManager
Removed symlink /etc/systemd/system/multi-user.target.wants/NetworkManager.service.
Removed symlink /etc/systemd/system/dbus-org.freedesktop.nm-dispatcher.service.
Removed symlink /etc/systemd/system/network-online.target.wants/NetworkManager-wait-online.service.
[root@node01 ~]# setenforce 0
[root@node01 ~]# sed -i 's/^SELINUX=.*/SELINUX=disabled/' /etc/selinux/config
[root@node01 ~]# iptables -F

node02:192.168.247.144/24

[root@localhost ~]# hostnamectl set-hostname node02
[root@localhost ~]# su
[root@node02 ~]# systemctl stop NetworkManager
[root@node02 ~]# systemctl disable NetworkManager
Removed symlink /etc/systemd/system/multi-user.target.wants/NetworkManager.service.
Removed symlink /etc/systemd/system/dbus-org.freedesktop.nm-dispatcher.service.
Removed symlink /etc/systemd/system/network-online.target.wants/NetworkManager-wait-online.service.
[root@node02 ~]# setenforce 0
[root@node02 ~]# sed -i 's/^SELINUX=.*/SELINUX=disabled/' /etc/selinux/config
[root@node02 ~]# iptables -F

3: Self-signed SSL certificates

Certificates used by each component:

| Component | Certificates |
| --- | --- |
| etcd | ca.pem, server.pem, server-key.pem |
| flannel | ca.pem, server.pem, server-key.pem |
| kube-apiserver | ca.pem, server.pem, server-key.pem |
| kubelet | ca.pem, server.pem |
| kube-proxy | ca.pem, kube-proxy.pem, kube-proxy-key.pem |
| kubectl | ca.pem, admin.pem, admin-key.pem |

Note:

Three inputs:

*-config.json — certificate configuration parameters

*-csr.json — certificate signing request (CSR) parameters

*.csr — certificate signing request

Outputs:

*-key.pem — private key

*.pem — certificate

3.1 First, create a CA certificate; communication between the components requires a CA

  • Create a temporary directory
[root@master1 ~]# mkdir k8s
[root@master1 ~]# cd k8s
[root@master1 k8s]# pwd
/root/k8s
[root@master1 k8s]# mkdir /abc
[root@master1 k8s]# mount.cifs //192.168.0.88/linuxs /abc
Password for root@//192.168.0.88/linuxs:  
[root@master1 k8s]# cp /abc/k8s/etcd* .
[root@master1 k8s]# ll
total 8
-rwxr-xr-x. 1 root root 1088 Apr 29 00:13 etcd-cert.sh
-rwxr-xr-x. 1 root root 1764 Apr 29 00:13 etcd.sh

Let's look at these two scripts.

3.2 etcd-cert.sh creates the CA certificate for etcd

expiry: the validity period, 87600h = 10 years

key encipherment: allows the certificate's key to be used for key encipherment

[root@master1 k8s]# cat etcd-cert.sh 
#ca-config.json is the configuration file for the CA certificate
cat > ca-config.json <<EOF
{
  "signing": {
    "default": {
      "expiry": "87600h"
    },
    "profiles": {
      "www": {
         "expiry": "87600h",
         "usages": [
            "signing",
            "key encipherment",
            "server auth",
            "client auth"
        ]
      }
    }
  }
}
EOF
#ca-csr.json is the signing request for the CA certificate
cat > ca-csr.json <<EOF
{
    "CN": "etcd CA",
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "L": "Beijing",
            "ST": "Beijing"
        }
    ]
}
EOF

cfssl gencert -initca ca-csr.json | cfssljson -bare ca -

#-----------------------
#server-csr.json specifies the communication verification among the three etcd nodes
cat > server-csr.json <<EOF
{
    "CN": "etcd",
    "hosts": [
    "192.168.247.149",
    "192.168.247.143",
    "192.168.247.144"
    ],
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "L": "BeiJing",
            "ST": "BeiJing"
        }
    ]
}
EOF

cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=www server-csr.json | cfssljson -bare server

3.3 etcd.sh generates the config file and the systemd startup script

2380 is the port etcd members use to communicate with each other

2379 is the client-facing port etcd exposes

[root@master1 k8s]# cat etcd.sh 
#!/bin/bash
# example: ./etcd.sh etcd01 192.168.247.149 etcd02=https://192.168.247.143:2380,etcd03=https://192.168.247.144:2380

ETCD_NAME=$1
ETCD_IP=$2
ETCD_CLUSTER=$3

WORK_DIR=/k8s/etcd

cat <<EOF >$WORK_DIR/cfg/etcd
#[Member]
ETCD_NAME="${ETCD_NAME}"
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="https://${ETCD_IP}:2380"
ETCD_LISTEN_CLIENT_URLS="https://${ETCD_IP}:2379"

#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://${ETCD_IP}:2380"
ETCD_ADVERTISE_CLIENT_URLS="https://${ETCD_IP}:2379"
ETCD_INITIAL_CLUSTER="etcd01=https://${ETCD_IP}:2380,${ETCD_CLUSTER}"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"
EOF

cat <<EOF >/usr/lib/systemd/system/etcd.service
[Unit]
Description=Etcd Server
After=network.target
After=network-online.target
Wants=network-online.target

[Service]
Type=notify
EnvironmentFile=${WORK_DIR}/cfg/etcd
ExecStart=${WORK_DIR}/bin/etcd \
--name=\${ETCD_NAME} \
--data-dir=\${ETCD_DATA_DIR} \
--listen-peer-urls=\${ETCD_LISTEN_PEER_URLS} \
--listen-client-urls=\${ETCD_LISTEN_CLIENT_URLS},http://127.0.0.1:2379 \
--advertise-client-urls=\${ETCD_ADVERTISE_CLIENT_URLS} \
--initial-advertise-peer-urls=\${ETCD_INITIAL_ADVERTISE_PEER_URLS} \
--initial-cluster=\${ETCD_INITIAL_CLUSTER} \
--initial-cluster-token=\${ETCD_INITIAL_CLUSTER_TOKEN} \
--initial-cluster-state=new \
--cert-file=${WORK_DIR}/ssl/server.pem \
--key-file=${WORK_DIR}/ssl/server-key.pem \
--peer-cert-file=${WORK_DIR}/ssl/server.pem \
--peer-key-file=${WORK_DIR}/ssl/server-key.pem \
--trusted-ca-file=${WORK_DIR}/ssl/ca.pem \
--peer-trusted-ca-file=${WORK_DIR}/ssl/ca.pem
Restart=on-failure
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
EOF

systemctl daemon-reload
systemctl enable etcd
systemctl restart etcd


3.4 Download the official cfssl tool set

cfssl — the certificate generation tool

cfssljson — writes certificate files from the JSON that cfssl emits

cfssl-certinfo — displays certificate information

-o — output to a file

[root@master1 k8s]# vim cfssl.sh
curl -L https://pkg.cfssl.org/R1.2/cfssl_linux-amd64 -o /usr/local/bin/cfssl
curl -L https://pkg.cfssl.org/R1.2/cfssljson_linux-amd64 -o /usr/local/bin/cfssljson
curl -L https://pkg.cfssl.org/R1.2/cfssl-certinfo_linux-amd64 -o /usr/local/bin/cfssl-certinfo
chmod +x /usr/local/bin/cfssl*
[root@master1 k8s]# bash cfssl.sh 
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100  9.8M  100  9.8M    0     0   106k      0  0:01:35  0:01:35 --:--:-- 98678
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100 2224k  100 2224k    0     0   316k      0  0:00:07  0:00:07 --:--:--  455k
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100 6440k  100 6440k    0     0   531k      0  0:00:12  0:00:12 --:--:--  736k

Verify:

[root@master1 k8s]# ll /usr/local/bin/*
-rwxr-xr-x. 1 root root 10376657 Apr 29 00:20 /usr/local/bin/cfssl
-rwxr-xr-x. 1 root root  6595195 Apr 29 00:21 /usr/local/bin/cfssl-certinfo
-rwxr-xr-x. 1 root root  2277873 Apr 29 00:20 /usr/local/bin/cfssljson
[root@master1 k8s]# rm -rf cfssl.sh 

3.5 Create a temporary directory for the etcd certificates

[root@master1 k8s]# mkdir etcd-cert
[root@master1 k8s]# mv etcd-cert.sh etcd-cert

3.6 Define the CA certificate configuration

[root@master1 k8s]# cd etcd-cert/
[root@master1 etcd-cert]# ls
etcd-cert.sh
[root@master1 etcd-cert]# cat > ca-config.json <<EOF
{
  "signing": {
    "default": {
      "expiry": "87600h"
    },
    "profiles": {
      "www": {
         "expiry": "87600h",
         "usages": [
            "signing",
            "key encipherment",
            "server auth",
            "client auth"
        ]
      }
    }
  }
}
EOF
[root@master1 etcd-cert]# ls
ca-config.json  etcd-cert.sh

3.7 Create the CA signing request

[root@master1 etcd-cert]# cat > ca-csr.json <<EOF
{
    "CN": "etcd CA",
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "L": "Beijing",
            "ST": "Beijing"
        }
    ]
}
EOF
[root@master1 etcd-cert]# ls
ca-config.json  ca-csr.json  etcd-cert.sh

3.8 Generate the CA certificate, producing the two files ca-key.pem and ca.pem

[root@master1 etcd-cert]# cfssl gencert -initca ca-csr.json | cfssljson -bare ca -
2020/04/29 00:44:16 [INFO] generating a new CA key and certificate from CSR
2020/04/29 00:44:16 [INFO] generate received request
2020/04/29 00:44:16 [INFO] received CSR
2020/04/29 00:44:16 [INFO] generating key: rsa-2048
2020/04/29 00:44:16 [INFO] encoded CSR
2020/04/29 00:44:16 [INFO] signed certificate with serial number 527285287467326079906972398205016440554485642971
[root@master1 etcd-cert]# ls
ca-config.json  ca.csr  ca-csr.json  ca-key.pem  ca.pem  etcd-cert.sh

3.9 Specify the three etcd node addresses for peer-communication verification

[root@master1 etcd-cert]#  cat > server-csr.json <<EOF
{
    "CN": "etcd",
    "hosts": [
    "192.168.247.149",
    "192.168.247.143",
    "192.168.247.144"
    ],
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "L": "BeiJing",
            "ST": "BeiJing"
        }
    ]
}
EOF
[root@master1 etcd-cert]# ls
ca-config.json  ca.csr  ca-csr.json  ca-key.pem  ca.pem  etcd-cert.sh  server-csr.json

Note: the region fields must be consistent.

Generate the etcd server certificate and key (server-key.pem and server.pem) with:

-ca=ca.pem

-ca-key=ca-key.pem

-config=ca-config.json

-profile=www

3.10 Generate the etcd server certificate and key

[root@master1 etcd-cert]# cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=www server-csr.json | cfssljson -bare server
2020/04/29 00:48:07 [INFO] generate received request
2020/04/29 00:48:07 [INFO] received CSR
2020/04/29 00:48:07 [INFO] generating key: rsa-2048
2020/04/29 00:48:07 [INFO] encoded CSR
2020/04/29 00:48:07 [INFO] signed certificate with serial number 79028110669307243971733075611743333137367463128
2020/04/29 00:48:07 [WARNING] This certificate lacks a "hosts" field. This makes it unsuitable for
websites. For more information see the Baseline Requirements for the Issuance and Management
of Publicly-Trusted Certificates, v.1.1.6, from the CA/Browser Forum (https://cabforum.org);
specifically, section 10.2.3 ("Information Requirements").
[root@master1 etcd-cert]# ls
ca-config.json  ca-csr.json  ca.pem        server.csr       server-key.pem
ca.csr          ca-key.pem   etcd-cert.sh  server-csr.json  server.pem

Three inputs:

*-config.json — certificate configuration parameters

*-csr.json — certificate signing request (CSR) parameters

*.csr — certificate signing request

Outputs:

*-key.pem — private key

*.pem — certificate
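
Before moving on, it is worth sanity-checking the server certificate just generated; a minimal check using the cfssl-certinfo tool downloaded in 3.4 (the comments note what to look for in its JSON output):

[root@master1 etcd-cert]# cfssl-certinfo -cert server.pem
#in the decoded JSON, "sans" should list the three node IPs
#(192.168.247.149/143/144) and "not_after" should be about 10 years out,
#matching the 87600h expiry from ca-config.json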

4: Etcd database cluster deployment

  • Binary package download address

https://github.com/etcd-io/etcd/releases

I have already downloaded it here.

  • Check the cluster health with:

/k8s/etcd/bin/etcdctl \
--ca-file=/k8s/etcd/ssl/ca.pem \
--cert-file=/k8s/etcd/ssl/server.pem --key-file=/k8s/etcd/ssl/server-key.pem \
--endpoints="https://192.168.247.149:2379,https://192.168.247.143:2379,https://192.168.247.144:2379" \
cluster-health

4.1 Pull the package locally

[root@master1 etcd-cert]# cp /abc/k8s/etcd-v3.3.10-linux-amd64.tar.gz /root/k8s/
[root@master1 etcd-cert]# cd ..
[root@master1 k8s]# pwd
/root/k8s
[root@master1 k8s]# ls
etcd-cert  etcd.sh  etcd-v3.3.10-linux-amd64.tar.gz
[root@master1 k8s]# tar xf etcd-v3.3.10-linux-amd64.tar.gz 
[root@master1 k8s]# ls
etcd-cert  etcd.sh  etcd-v3.3.10-linux-amd64  etcd-v3.3.10-linux-amd64.tar.gz
[root@master1 k8s]# cd etcd-v3.3.10-linux-amd64/
[root@master1 etcd-v3.3.10-linux-amd64]# ls
Documentation  etcd  etcdctl  README-etcdctl.md  README.md  READMEv2-etcdctl.md

4.2 Create the etcd working directory, containing cfg (config files), bin (binaries), and ssl (certificates)

[root@master1 etcd-v3.3.10-linux-amd64]# mkdir /k8s/etcd/{cfg,bin,ssl} -p
[root@master1 etcd-v3.3.10-linux-amd64]# cd /k8s
[root@master1 k8s]# tree .
.
└── etcd
    ├── bin
    ├── cfg
    └── ssl

4 directories, 0 files

4.3 Copy the certificate and binary files into place

[root@master1 k8s]# mv /root/k8s/etcd-v3.3.10-linux-amd64/etcd* /k8s/etcd/bin/
[root@master1 k8s]# cp /root/k8s/etcd-cert/*.pem /k8s/etcd/ssl/
[root@master1 k8s]# tree .
.
└── etcd
    ├── bin
    │   ├── etcd
    │   └── etcdctl
    ├── cfg
    └── ssl
        ├── ca-key.pem
        ├── ca.pem
        ├── server-key.pem
        └── server.pem

4.4 The etcd configuration file and startup script

The script file etcd.sh (the same script shown in 3.3):

#!/bin/bash
# example: ./etcd.sh etcd01 192.168.247.149 etcd02=https://192.168.247.143:2380,etcd03=https://192.168.247.144:2380

ETCD_NAME=$1
ETCD_IP=$2
ETCD_CLUSTER=$3

WORK_DIR=/k8s/etcd

cat <<EOF >$WORK_DIR/cfg/etcd
#[Member]
ETCD_NAME="${ETCD_NAME}"
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="https://${ETCD_IP}:2380"
ETCD_LISTEN_CLIENT_URLS="https://${ETCD_IP}:2379"

#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://${ETCD_IP}:2380"
ETCD_ADVERTISE_CLIENT_URLS="https://${ETCD_IP}:2379"
ETCD_INITIAL_CLUSTER="etcd01=https://${ETCD_IP}:2380,${ETCD_CLUSTER}"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"
EOF

cat <<EOF >/usr/lib/systemd/system/etcd.service
[Unit]
Description=Etcd Server
After=network.target
After=network-online.target
Wants=network-online.target

[Service]
Type=notify
EnvironmentFile=${WORK_DIR}/cfg/etcd
ExecStart=${WORK_DIR}/bin/etcd \
--name=\${ETCD_NAME} \
--data-dir=\${ETCD_DATA_DIR} \
--listen-peer-urls=\${ETCD_LISTEN_PEER_URLS} \
--listen-client-urls=\${ETCD_LISTEN_CLIENT_URLS},http://127.0.0.1:2379 \
--advertise-client-urls=\${ETCD_ADVERTISE_CLIENT_URLS} \
--initial-advertise-peer-urls=\${ETCD_INITIAL_ADVERTISE_PEER_URLS} \
--initial-cluster=\${ETCD_INITIAL_CLUSTER} \
--initial-cluster-token=\${ETCD_INITIAL_CLUSTER_TOKEN} \
--initial-cluster-state=new \
--cert-file=${WORK_DIR}/ssl/server.pem \
--key-file=${WORK_DIR}/ssl/server-key.pem \
--peer-cert-file=${WORK_DIR}/ssl/server.pem \
--peer-key-file=${WORK_DIR}/ssl/server-key.pem \
--trusted-ca-file=${WORK_DIR}/ssl/ca.pem \
--peer-trusted-ca-file=${WORK_DIR}/ssl/ca.pem
Restart=on-failure
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
EOF

systemctl daemon-reload
systemctl enable etcd
systemctl restart etcd

Note: this script must be given its arguments; run without them (as below) it fails. It is executed again, correctly, later.

[root@master1 k8s]# sh etcd.sh	
Created symlink from /etc/systemd/system/multi-user.target.wants/etcd.service to /usr/lib/systemd/system/etcd.service.
Job for etcd.service failed because the control process exited with error code. See "systemctl status etcd.service" and "journalctl -xe" for details.
[root@master1 etcd]# cd /k8s
[root@master1 k8s]# tree .
.
└── etcd
    ├── bin
    │   ├── etcd
    │   └── etcdctl
    ├── cfg
    │   └── etcd
    └── ssl
        ├── ca-key.pem
        ├── ca.pem
        ├── server-key.pem
        └── server.pem
[root@master1 k8s]# ll /usr/lib/systemd/system/ | grep etcd
-rw-r--r--. 1 root root  923 Apr 29 08:11 etcd.service

4.4.1 Step one: the config file is generated and the startup unit is placed under systemd

Ports: 2379 is the client-facing port, 2380 the internal cluster-communication port; LimitNOFILE=65536 raises the service's open-file-descriptor limit (a file limit, not a port count).
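
Once the etcd service is actually running, you can confirm the limit applied to the live process; a small check, assuming pidof is available on the host:

[root@master1 k8s]# grep 'Max open files' /proc/$(pidof etcd)/limits
#both the soft and hard limits should read 65536, from LimitNOFILE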

[root@master1 k8s]# cd /root/k8s/
[root@master1 k8s]# ls
etcd-cert  etcd.sh  etcd-v3.3.10-linux-amd64  etcd-v3.3.10-linux-amd64.tar.gz
[root@master1 k8s]# pwd
/root/k8s
[root@master1 k8s]# bash etcd.sh etcd01 192.168.247.149 etcd02=https://192.168.247.143:2380,etcd03=https://192.168.247.144:2380

The command now blocks: etcd waits for the other cluster members, and if it cannot find them it gives up after roughly 5 minutes by default.

Meanwhile, go set up the other two etcd members.

Job for etcd.service failed because a timeout was exceeded. See "systemctl status etcd.service" and "journalctl -xe" for details.

4.5 Even so, the single etcd node is now up

[root@master1 k8s]# netstat -natp | grep etcd
tcp        0      0 192.168.247.149:2379    0.0.0.0:*               LISTEN      26604/etcd          
tcp        0      0 127.0.0.1:2379          0.0.0.0:*               LISTEN      26604/etcd          
tcp        0      0 192.168.247.149:2380    0.0.0.0:*               LISTEN      26604/etcd    

4.6 Copy the certificates to the other two nodes

[root@master1 k8s]# scp -r /k8s root@192.168.247.143:/k8s
root@192.168.247.143's password: 
etcd                                                                                  100%  523   252.4KB/s   00:00    
etcd                                                                                  100%   18MB  95.0MB/s   00:00    
etcdctl                                                                               100%   15MB 124.7MB/s   00:00    
ca-key.pem                                                                            100% 1679   725.6KB/s   00:00    
ca.pem                                                                                100% 1265   287.0KB/s   00:00    
server-key.pem                                                                        100% 1675   813.0KB/s   00:00    
server.pem                                                                            100% 1338   710.8KB/s   00:00 

4.7 Verify on 143, i.e. the node01 node

[root@node01 ~]# cd /k8s
[root@node01 k8s]# tree .
.
└── etcd
    ├── bin
    │   ├── etcd
    │   └── etcdctl
    ├── cfg
    │   └── etcd
    └── ssl
        ├── ca-key.pem
        ├── ca.pem
        ├── server-key.pem
        └── server.pem

[root@master1 k8s]# scp -r /k8s root@192.168.247.144:/k8s

4.8 Also copy the startup unit

[root@master1 k8s]# scp /usr/lib/systemd/system/etcd.service root@192.168.247.143:/usr/lib/systemd/system/
root@192.168.247.143's password: 
etcd.service                                                                          100%  923   814.8KB/s   00:00    
[root@master1 k8s]# scp /usr/lib/systemd/system/etcd.service root@192.168.247.144:/usr/lib/systemd/system/
root@192.168.247.144's password: 
etcd.service                                                                          100%  923   277.6KB/s   00:00    

4.9 With the files copied over, a few parameters in the /k8s/etcd/cfg/etcd config file must be modified on each node

  • node01:
[root@node01 k8s]# vim /k8s/etcd/cfg/etcd
#[Member]
ETCD_NAME="etcd02"
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="https://192.168.247.143:2380"
ETCD_LISTEN_CLIENT_URLS="https://192.168.247.143:2379"

#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://192.168.247.143:2380"
ETCD_ADVERTISE_CLIENT_URLS="https://192.168.247.143:2379"
ETCD_INITIAL_CLUSTER="etcd01=https://192.168.247.149:2380,etcd02=https://192.168.247.143:2380,etcd03=https://192.168.247.144:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"
  • node02:
[root@node02 ~]# vim /k8s/etcd/cfg/etcd
#[Member]
ETCD_NAME="etcd03"
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="https://192.168.247.144:2380"
ETCD_LISTEN_CLIENT_URLS="https://192.168.247.144:2379"

#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://192.168.247.144:2380"
ETCD_ADVERTISE_CLIENT_URLS="https://192.168.247.144:2379"
ETCD_INITIAL_CLUSTER="etcd01=https://192.168.247.149:2380,etcd02=https://192.168.247.143:2380,etcd03=https://192.168.247.144:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"
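
Rather than hand-editing each copy, the per-node substitution could also be scripted; a minimal sketch for node01 (a hypothetical helper, not part of the original workflow, assuming the config was copied verbatim from master1, i.e. name etcd01 and IP 192.168.247.149):

[root@node01 k8s]# sed -i \
-e 's/^ETCD_NAME="etcd01"/ETCD_NAME="etcd02"/' \
-e '/^ETCD_LISTEN\|^ETCD_ADVERTISE\|^ETCD_INITIAL_ADVERTISE/ s/192\.168\.247\.149/192.168.247.143/' \
/k8s/etcd/cfg/etcd
#only the member name and this node's own listen/advertise URLs change;
#ETCD_INITIAL_CLUSTER, which also contains etcd01 and .149, is left intact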

4.10 Start the service on the nodes

[root@node02 ~]# systemctl start etcd
[root@node02 ~]# systemctl status etcd
● etcd.service - Etcd Server
   Loaded: loaded (/usr/lib/systemd/system/etcd.service; disabled; vendor preset: disabled)
   Active: active (running) since Wed 2020-04-29 08:42:15 CST; 9s ago
[root@node01 k8s]# systemctl start etcd
[root@node01 k8s]# systemctl status etcd
● etcd.service - Etcd Server
   Loaded: loaded (/usr/lib/systemd/system/etcd.service; disabled; vendor preset: disabled)
   Active: active (running) since Wed 2020-04-29 08:42:48 CST; 4s ago

4.11 Now re-run the script command on the master1 node

[root@master1 k8s]# cd /root/k8s/
[root@master1 k8s]# ls
etcd-cert  etcd.sh  etcd-v3.3.10-linux-amd64  etcd-v3.3.10-linux-amd64.tar.gz
[root@master1 k8s]# bash etcd.sh etcd01 192.168.247.149 etcd02=https://192.168.247.143:2380,etcd03=https://192.168.247.144:2380
[root@master1 k8s]# 

4.12 Check the cluster health

Note: the certificate paths must be correct.

As the output below shows, any member can answer the health query; etcd has no fixed central access node (internally a Raft leader is still elected).

/k8s/etcd/bin/etcdctl \
--ca-file=/k8s/etcd/ssl/ca.pem \
--cert-file=/k8s/etcd/ssl/server.pem --key-file=/k8s/etcd/ssl/server-key.pem \
--endpoints="https://192.168.247.149:2379,https://192.168.247.143:2379,https://192.168.247.144:2379" \
cluster-health
[root@master1 k8s]# cd /k8s/etcd/ssl/
[root@master1 ssl]# pwd
/k8s/etcd/ssl
[root@master1 ssl]# ls
ca-key.pem  ca.pem  server-key.pem  server.pem
[root@master1 ssl]# /k8s/etcd/bin/etcdctl \
> --ca-file=ca.pem \
> --cert-file=server.pem --key-file=server-key.pem \
> --endpoints="https://192.168.247.149:2379,https://192.168.247.143:2379,https://192.168.247.144:2379" \
> cluster-health
member 8f4e6ce663f0d49a is healthy: got healthy result from https://192.168.247.143:2379
member b6230d9c6f20feeb is healthy: got healthy result from https://192.168.247.144:2379
member d618618928dffeba is healthy: got healthy result from https://192.168.247.149:2379
cluster is healthy
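
You can also list the members and see which one currently holds the leader role; the v2 etcdctl used here supports member list (run from the same ssl directory; which member shows isLeader=true will vary):

[root@master1 ssl]# /k8s/etcd/bin/etcdctl \
> --ca-file=ca.pem --cert-file=server.pem --key-file=server-key.pem \
> --endpoints="https://192.168.247.149:2379" \
> member list
#each line shows name=..., peerURLs=..., clientURLs=... and isLeader=true/false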

4.13 Check port 2379

[root@master1 /]# netstat -natp | grep 2379
tcp        0      0 192.168.247.149:2379    0.0.0.0:*               LISTEN      67807/etcd          
tcp        0      0 127.0.0.1:2379          0.0.0.0:*               LISTEN      67807/etcd          
tcp        0      0 192.168.247.149:2379    192.168.247.149:46934   ESTABLISHED 67807/etcd          
tcp        0      0 127.0.0.1:2379          127.0.0.1:53006         ESTABLISHED 67807/etcd          
tcp        0      0 192.168.247.149:46934   192.168.247.149:2379    ESTABLISHED 67807/etcd          
tcp        0      0 127.0.0.1:53006         127.0.0.1:2379          ESTABLISHED 67807/etcd 

Next, deploy the Docker environment on all of the node machines.

5: Install Docker on the nodes

Only one node is demonstrated here; the operation on the other is identical.

SELinux is already disabled; the firewall is left alone for now.

[root@node01 ~]#  setenforce 0
[root@node01 ~]# sed -i 's/^SELINUX=.*/SELINUX=disabled/' /etc/selinux/config

Install docker-ce:

[root@node01 ~]# yum install -y yum-utils device-mapper-persistent-data lvm2
[root@node01 ~]# cd /etc/yum.repos.d/ && yum-config-manager --add-repo https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
[root@node01 yum.repos.d]# ls
CentOS-Base.repo  CentOS-Debuginfo.repo  CentOS-Media.repo    CentOS-Vault.repo
CentOS-CR.repo    CentOS-fasttrack.repo  CentOS-Sources.repo  docker-ce.repo
[root@node01 yum.repos.d]# yum install -y docker-ce
[root@node01 yum.repos.d]# systemctl start docker
[root@node01 yum.repos.d]# systemctl enable docker
[root@node01 yum.repos.d]# tee /etc/docker/daemon.json <<-'EOF'
{
  "registry-mirrors": ["https://fk2yrsh1.mirror.aliyuncs.com"]
}
EOF
[root@node01 yum.repos.d]# echo "net.ipv4.ip_forward=1" >> /etc/sysctl.conf 
[root@node01 yum.repos.d]# sysctl -p
net.ipv4.ip_forward = 1
[root@node01 yum.repos.d]# systemctl restart network
[root@node01 yum.repos.d]# systemctl restart docker

docker-ce is installed; at this point Docker is clean, with no containers or images.
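
To confirm that the registry mirror and IP forwarding actually took effect, a quick check with standard docker and sysctl commands (the mirror URL is the one set in daemon.json above):

[root@node01 yum.repos.d]# docker info | grep -A1 'Registry Mirrors'
[root@node01 yum.repos.d]# sysctl net.ipv4.ip_forward
#expect the Aliyun mirror URL under "Registry Mirrors:" and ip_forward = 1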

6: Deploy flannel

flannel is one container network component; another is Calico, which supports BGP.

Overlay network: a virtual network layered on top of the base network, in which hosts are connected by virtual tunnel links.

VXLAN: the original packet is encapsulated in UDP, with the base network's IP/MAC used as the outer header; it is carried over the layer-2 Ethernet link, and at the destination the tunnel endpoint decapsulates it and delivers the data to the target address.

flannel: one kind of overlay network; it too wraps the source packet inside another network packet for routed forwarding and communication. It currently supports UDP, VXLAN, AWS VPC, GCE routes, and other forwarding backends.
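
Once flanneld is up on the nodes (section 6.5), this encapsulation can be observed directly on the wire; a minimal sketch, assuming the node NIC is ens32 and flannel's default Linux VXLAN UDP port of 8472:

[root@node01 ~]# tcpdump -nn -i ens32 udp port 8472
#cross-node container traffic appears as UDP between the node IPs,
#with the inner container-to-container packet carried inside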

6.1 VXLAN network topology

(figure: VXLAN network topology)

The VTEP can be understood by analogy with the docker0 port; NAT address translation takes place between the VTEP and the physical NIC, and information like this is also written into etcd.
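
On a node where flannel is already running, the VTEP device it creates can be inspected directly; a small sketch (flannel.1 is the device name flannel's VXLAN backend creates):

[root@node01 ~]# ip -d link show flannel.1
#the -d (details) output includes VXLAN attributes such as the VNI
#and the local VTEP address, i.e. this node's IP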

6.2 Communication flow between containers on different nodes in the cluster

(figure: cross-node container communication flow)

The network information is recorded into etcd.

6.3 Write the allocated subnet range into etcd for flannel to use

Note: /coreos.com/network/config is an etcd key, not a filesystem directory, so you cannot cd into it.

Write it on the master node:

k8s/etcd/bin/etcdctl \
--ca-file=/k8s/etcd/ssl/ca.pem \
--cert-file=/k8s/etcd/ssl/server.pem --key-file=/k8s/etcd/ssl/server-key.pem \
--endpoints="https://192.168.247.149:2379,https://192.168.247.143:2379,https://192.168.247.144:2379" \
set /coreos.com/network/config '{ "network": "172.17.0.0/16", "Backend": {"Type": "vxlan"}}'
[root@master1 /]# k8s/etcd/bin/etcdctl \
> --ca-file=/k8s/etcd/ssl/ca.pem \
> --cert-file=/k8s/etcd/ssl/server.pem --key-file=/k8s/etcd/ssl/server-key.pem \
> --endpoints="https://192.168.247.149:2379,https://192.168.247.143:2379,https://192.168.247.144:2379" \
> set /coreos.com/network/config '{ "network": "172.17.0.0/16", "Backend": {"Type": "vxlan"}}'
{ "network": "172.17.0.0/16", "Backend": {"Type": "vxlan"}}
[root@master1 /]# 

View the written value; the other nodes can read it too:

/k8s/etcd/bin/etcdctl \
--ca-file=/k8s/etcd/ssl/ca.pem \
--cert-file=/k8s/etcd/ssl/server.pem --key-file=/k8s/etcd/ssl/server-key.pem \
--endpoints="https://192.168.247.149:2379,https://192.168.247.143:2379,https://192.168.247.144:2379" \
get /coreos.com/network/config
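
Later, once flanneld is running on the nodes (section 6.5), each node's subnet lease appears under the same key prefix; a sketch using the same v2 etcdctl (the exact lease keys differ per cluster):

/k8s/etcd/bin/etcdctl \
--ca-file=/k8s/etcd/ssl/ca.pem \
--cert-file=/k8s/etcd/ssl/server.pem --key-file=/k8s/etcd/ssl/server-key.pem \
--endpoints="https://192.168.247.149:2379,https://192.168.247.143:2379,https://192.168.247.144:2379" \
ls /coreos.com/network/subnets
#expect one key per node, e.g. /coreos.com/network/subnets/172.17.45.0-24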

6.4 Copy in the binary package; flannel is installed on the node machines

Any node that needs to run workload resources needs flannel installed.

[root@master1 /]# cp /abc/k8s/flannel-v0.10.0-linux-amd64.tar.gz /root/k8s/
[root@master1 /]# cd /root/k8s
[root@master1 k8s]# scp flannel-v0.10.0-linux-amd64.tar.gz root@192.168.247.143:/opt/
root@192.168.247.143's password: 
flannel-v0.10.0-linux-amd64.tar.gz                                                    100% 9479KB  53.4MB/s   00:00    
[root@master1 k8s]# scp flannel-v0.10.0-linux-amd64.tar.gz root@192.168.247.144:/opt/
root@192.168.247.144's password: 
flannel-v0.10.0-linux-amd64.tar.gz                                                    100% 9479KB 124.2MB/s   00:00 

6.5 Deploy and configure flannel: write the flannel startup script and register it with systemd

Using node01 as the example:

[root@node01 yum.repos.d]# cd /opt
[root@node01 opt]# tar xf flannel-v0.10.0-linux-amd64.tar.gz 
[root@node01 opt]# ls
containerd  flanneld  flannel-v0.10.0-linux-amd64.tar.gz  mk-docker-opts.sh  README.md  rh

Create the flannel working directory:

[root@node01 opt]# mkdir /k8s/flannel/{cfg,bin,ssl} -p
[root@node01 opt]# mv mk-docker-opts.sh /k8s/flannel/bin/
[root@node01 opt]# mv flanneld /k8s/flannel/bin/

This must be done on every node:

[root@node01 opt]# vim flannel.sh
#!/bin/bash

ETCD_ENDPOINTS=${1:-"http://127.0.0.1:2379"}

cat <<EOF >/k8s/flannel/cfg/flanneld

FLANNEL_OPTIONS="--etcd-endpoints=${ETCD_ENDPOINTS} \
-etcd-cafile=/k8s/etcd/ssl/ca.pem \
-etcd-certfile=/k8s/etcd/ssl/server.pem \
-etcd-keyfile=/k8s/etcd/ssl/server-key.pem"

EOF

cat <<EOF >/usr/lib/systemd/system/flanneld.service
[Unit]
Description=Flanneld overlay address etcd agent
After=network-online.target network.target
Before=docker.service

[Service]
Type=notify
EnvironmentFile=/k8s/flannel/cfg/flanneld
ExecStart=/k8s/flannel/bin/flanneld --ip-masq \$FLANNEL_OPTIONS
ExecStartPost=/k8s/flannel/bin/mk-docker-opts.sh -k DOCKER_NETWORK_OPTIONS -d /run/flannel/subnet.env
Restart=on-failure

[Install]
WantedBy=multi-user.target

EOF

systemctl daemon-reload
systemctl enable flanneld
systemctl restart flanneld



Enable the flannel network function, specifying the etcd IP:port endpoints:

[root@node01 flannel]# bash flannel.sh https://192.168.247.149:2379,https://192.168.247.143:2379,https://192.168.247.144:2379

Do this on both nodes.

Check the status:

[root@node01 flannel]# systemctl status flanneld.service 
● flanneld.service - Flanneld overlay address etcd agent
   Loaded: loaded (/usr/lib/systemd/system/flanneld.service; enabled; vendor preset: disabled)
   Active: active (running) since Wed 2020-04-29 10:36:59 CST; 1min 1s ago

6.6 Configure Docker to use the subnet generated by flannel

Using node01 as an example; the other nodes need the same change. This points Docker at flannel's network segment:

[root@node01 flannel]# vim /usr/lib/systemd/system/docker.service 
#add the following below the comment on line 13
 14 EnvironmentFile=/run/flannel/subnet.env
 #and add $DOCKER_NETWORK_OPTIONS to the ExecStart line
  15 ExecStart=/usr/bin/dockerd $DOCKER_NETWORK_OPTIONS -H fd:// --containerd=/run/containerd/containerd.sock

Restart the Docker service:

[root@node01 flannel]# systemctl daemon-reload
[root@node01 flannel]# systemctl restart docker
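
After the restart, Docker's default bridge should have picked up the flannel bip; a quick check with the standard docker CLI (the value should match this node's lease in /run/flannel/subnet.env):

[root@node01 flannel]# docker network inspect bridge --format '{{(index .IPAM.Config 0).Subnet}}'
#expect this node's flannel range, e.g. 172.17.45.1/24 on node01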

6.7 Flannel is running: check the assigned subnets

Check the flannel subnet assigned to node01, here 172.17.45.0/24:

[root@node01 flannel]# cat /run/flannel/subnet.env 
DOCKER_OPT_BIP="--bip=172.17.45.1/24"
DOCKER_OPT_IPMASQ="--ip-masq=false"
DOCKER_OPT_MTU="--mtu=1450"
DOCKER_NETWORK_OPTIONS=" --bip=172.17.45.1/24 --ip-masq=false --mtu=1450"

node02

[root@node02 opt]# cat /run/flannel/subnet.env 
DOCKER_OPT_BIP="--bip=172.17.42.1/24"
DOCKER_OPT_IPMASQ="--ip-masq=false"
DOCKER_OPT_MTU="--mtu=1450"
DOCKER_NETWORK_OPTIONS=" --bip=172.17.42.1/24 --ip-masq=false --mtu=1450"

Check the flannel network interfaces:

[root@node01 flannel]# ifconfig
docker0: flags=4099<UP,BROADCAST,MULTICAST>  mtu 1500
        inet 172.17.0.1  netmask 255.255.0.0  broadcast 172.17.255.255

ens32: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 192.168.247.143  netmask 255.255.255.0  broadcast 192.168.247.255

flannel.1: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1450
        inet 172.17.45.0  netmask 255.255.255.255  broadcast 0.0.0.0

lo: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
        inet 127.0.0.1  netmask 255.0.0.0

virbr0: flags=4099<UP,BROADCAST,MULTICAST>  mtu 1500
        inet 192.168.122.1  netmask 255.255.255.0  broadcast 192.168.122.255

At this point containers on different nodes can reach one another.

To test, create a container on each node and ping between them:

[root@node01 flannel]# docker run -it centos:7 /bin/bash
Unable to find image 'centos:7' locally
7: Pulling from library/centos
ab5ef0e58194: Pull complete 
Digest: sha256:4a701376d03f6b39b8c2a8f4a8e499441b0d567f9ab9d58e4991de4472fb813c
Status: Downloaded newer image for centos:7
[root@39f034a2f24e /]# yum install net-tools -y
[root@node02 opt]# docker run -it centos:7 /bin/bash
Unable to find image 'centos:7' locally
7: Pulling from library/centos
ab5ef0e58194: Pull complete 
Digest: sha256:4a701376d03f6b39b8c2a8f4a8e499441b0d567f9ab9d58e4991de4472fb813c
Status: Downloaded newer image for centos:7
[root@fea29d0ff39b /]# yum install net-tools -y

node01 container:

[root@39f034a2f24e /]# ifconfig
eth0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1450
        inet 172.17.45.2  netmask 255.255.255.0  broadcast 172.17.45.255

node02 container; the ping succeeds (ttl=62 shows the packet was routed on both node hosts):

[root@fea29d0ff39b /]# ifconfig
eth0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1450
        inet 172.17.42.2  netmask 255.255.255.0  broadcast 172.17.42.255
[root@fea29d0ff39b /]# ping 172.17.45.2
PING 172.17.45.2 (172.17.45.2) 56(84) bytes of data.
64 bytes from 172.17.45.2: icmp_seq=1 ttl=62 time=0.792 ms
64 bytes from 172.17.45.2: icmp_seq=2 ttl=62 time=0.762 ms
64 bytes from 172.17.45.2: icmp_seq=3 ttl=62 time=0.483 ms
64 bytes from 172.17.45.2: icmp_seq=4 ttl=62 time=1.38 ms
^C
--- 172.17.45.2 ping statistics ---
4 packets transmitted, 4 received, 0% packet loss, time 3003ms
rtt min/avg/max/mdev = 0.483/0.855/1.384/0.328 ms
[root@fea29d0ff39b /]# 
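
The host routing table explains the path: flannel installs a route for each remote node's subnet via the flannel.1 device. A quick look on node01 (the subnet values are this cluster's; yours will differ):

[root@node01 flannel]# ip route show | grep flannel.1
#expect something like: 172.17.42.0/24 via 172.17.42.0 dev flannel.1 onlink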
