Kubernetes multi-node deployment: deploying etcd storage and configuring the flannel network

k8s multi-node deployment: deploying etcd storage

1. Requirements analysis:

【1】192.168.60.10 is the master node: kube-apiserver, kube-controller-manager, kube-scheduler, etcd
【2】192.168.60.60 is the node1 node: kubelet, kube-proxy, docker, flannel, etcd
【3】192.168.60.100 is the node2 node: kubelet, kube-proxy, docker, flannel, etcd

2. Deployment steps (master node):

// master node configuration

【1】Prepare the certificate-generation tools (cfssl)
[root@localhost ~]# hostnamectl set-hostname master
[root@localhost ~]# su
[root@master ~]# cd /usr/local/bin
[root@master bin]# chmod +x *
[root@master bin]# ls
cfssl  cfssl-certinfo  cfssljson
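// the three binaries are assumed to have been uploaded to /usr/local/bin beforehand; if you still need to fetch them, a common source for this release series (assuming internet access and that the pkg.cfssl.org mirror is reachable) is:
[root@master ~]# curl -L https://pkg.cfssl.org/R1.2/cfssl_linux-amd64 -o /usr/local/bin/cfssl
[root@master ~]# curl -L https://pkg.cfssl.org/R1.2/cfssljson_linux-amd64 -o /usr/local/bin/cfssljson
[root@master ~]# curl -L https://pkg.cfssl.org/R1.2/cfssl-certinfo_linux-amd64 -o /usr/local/bin/cfssl-certinfo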
【2】Define the CA config (the 87600h expiry is 10 years)
[root@master ~]# mkdir -p k8s/etcd-cert
[root@master ~]# cd k8s/etcd-cert
[root@master etcd-cert]# cat > ca-config.json <<EOF
{
  "signing": {
    "default": {
      "expiry": "87600h"
    },
    "profiles": {
      "www": {
         "expiry": "87600h",
         "usages": [
            "signing",
            "key encipherment",
            "server auth",
            "client auth"     
        ]  
      } 
    }         
  }
}
EOF
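// heredoc-generated JSON is easy to break with a stray character; a quick validity check (python 2 ships with CentOS 7):
[root@master etcd-cert]# python -m json.tool ca-config.json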
【3】Define the CA certificate signing request (CSR)
[root@master etcd-cert]# cat > ca-csr.json <<EOF
{   
    "CN": "etcd CA",
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "L": "Beijing",
            "ST": "Beijing"
        }
    ]
}
EOF
【4】Generate the self-signed CA certificate
[root@master etcd-cert]# cfssl gencert -initca ca-csr.json | cfssljson -bare ca -
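// cfssljson -bare ca writes the CA key pair and its CSR; confirm the files exist before continuing:
[root@master etcd-cert]# ls ca*
ca-config.json  ca.csr  ca-csr.json  ca-key.pem  ca.pem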
【5】List the three etcd node IPs in the server CSR so communication between the nodes can be verified against the certificate
[root@master etcd-cert]# cat > server-csr.json <<EOF
{
    "CN": "etcd",
    "hosts": [
    "192.168.60.10",
    "192.168.60.100",
    "192.168.60.60"
    ],
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "L": "BeiJing",
            "ST": "BeiJing"
        }
    ]
}
EOF
【6】Generate the etcd server-side certificate
[root@master etcd-cert]# cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=www server-csr.json | cfssljson -bare server
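// similarly, this leaves the server key pair and CSR next to the CA files:
[root@master etcd-cert]# ls server*
server.csr  server-csr.json  server-key.pem  server.pem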
【7】Upload the etcd binary tarball
[root@master k8s]# ls
etcd-cert  etcd-v3.3.10-linux-amd64  etcd-v3.3.10-linux-amd64.tar.gz
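// if only the tarball was uploaded, unpack it first to get the directory shown above:
[root@master k8s]# tar zxvf etcd-v3.3.10-linux-amd64.tar.gz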
【8】Create the directories for config files, binaries, and certificates
[root@master k8s]# mkdir -p /opt/etcd/{cfg,bin,ssl}
// binaries
[root@master k8s]# cp etcd-v3.3.10-linux-amd64/etcd etcd-v3.3.10-linux-amd64/etcdctl /opt/etcd/bin/
// certificates
[root@master k8s]# cp etcd-cert/*.pem /opt/etcd/ssl/
// upload the etcd.sh script, which generates the config file and the systemd service unit
[root@master k8s]# ls
etcd-cert  etcd.sh  etcd-v3.3.10-linux-amd64  etcd-v3.3.10-linux-amd64.tar.gz
[root@master k8s]# sh etcd.sh etcd01 192.168.60.10 etcd02=https://192.168.60.60:2380,etcd03=https://192.168.60.100:2380
// check whether the etcd process started; etcd01 will keep waiting for the other members to join, so the service may appear hung (or time out) until etcd02 and etcd03 come up
[root@master ~]# ps -ef | grep etcd
【9】From another terminal, copy the certificates and the systemd service unit to the other nodes
[root@master ~]# scp -r /opt/etcd/ root@192.168.60.60:/opt/
[root@master ~]# scp -r /opt/etcd/ root@192.168.60.100:/opt/
// copy the systemd service unit to the other nodes
[root@master ~]# scp /usr/lib/systemd/system/etcd.service root@192.168.60.60:/usr/lib/systemd/system/
[root@master ~]# scp /usr/lib/systemd/system/etcd.service root@192.168.60.100:/usr/lib/systemd/system/
【10】On the other two nodes, edit the config file under cfg
// on the 192.168.60.60 node (node1), change mainly the name and the IP addresses
[root@node1 ~]# cd /opt/etcd/cfg/
[root@node1 cfg]# ls
etcd
[root@node1 cfg]# vim etcd 
#[Member]
ETCD_NAME="etcd02"
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="https://192.168.60.60:2380"
ETCD_LISTEN_CLIENT_URLS="https://192.168.60.60:2379"

#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://192.168.60.60:2380"
ETCD_ADVERTISE_CLIENT_URLS="https://192.168.60.60:2379"
ETCD_INITIAL_CLUSTER="etcd01=https://192.168.60.10:2380,etcd02=https://192.168.60.60:2380,etcd03=https://192.168.60.100:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"
[root@node1 cfg]# systemctl start etcd.service 
[root@node1 cfg]# systemctl status etcd.service


// on the 192.168.60.100 node (node2), change mainly the name and the IP addresses
[root@node2 ~]# cd /opt/etcd/cfg/
[root@node2 cfg]# ls
etcd
[root@node2 cfg]# vim etcd 
#[Member]
ETCD_NAME="etcd03"
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="https://192.168.60.100:2380"
ETCD_LISTEN_CLIENT_URLS="https://192.168.60.100:2379"

#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://192.168.60.100:2380"
ETCD_ADVERTISE_CLIENT_URLS="https://192.168.60.100:2379"
ETCD_INITIAL_CLUSTER="etcd01=https://192.168.60.10:2380,etcd02=https://192.168.60.60:2380,etcd03=https://192.168.60.100:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"
[root@node2 cfg]# systemctl start etcd.service 
[root@node2 cfg]# systemctl status etcd.service
【11】Check that the cluster is healthy
[root@master etcd-cert]# /opt/etcd/bin/etcdctl --ca-file=ca.pem --cert-file=server.pem --key-file=server-key.pem --endpoints="https://192.168.60.10:2379,https://192.168.60.60:2379,https://192.168.60.100:2379" cluster-health
member 59173e3f8aecc6c3 is healthy: got healthy result from https://192.168.60.100:2379
member 8da25ad72397ec6e is healthy: got healthy result from https://192.168.60.10:2379
member a21e580b9191cb20 is healthy: got healthy result from https://192.168.60.60:2379
cluster is healthy
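// besides cluster-health, the v2 etcdctl can list the members with their peer/client URLs, a quick way to confirm the configuration took effect:
[root@master etcd-cert]# /opt/etcd/bin/etcdctl --ca-file=ca.pem --cert-file=server.pem --key-file=server-key.pem --endpoints="https://192.168.60.10:2379,https://192.168.60.60:2379,https://192.168.60.100:2379" member list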

————————————————————————————————————————

k8s multi-node deployment: configuring the flannel network

1. Requirements analysis:

【1】192.168.60.10 is the master node: kube-apiserver, kube-controller-manager, kube-scheduler, etcd
【2】192.168.60.60 is the node1 node: kubelet, kube-proxy, docker, flannel, etcd
【3】192.168.60.100 is the node2 node: kubelet, kube-proxy, docker, flannel, etcd

2. Deployment steps:

【1】Write the overlay network range into etcd for flannel to use (each node will lease a /24 out of 172.17.0.0/16)
[root@master etcd-cert]# /opt/etcd/bin/etcdctl \
--ca-file=ca.pem \
--cert-file=server.pem \
--key-file=server-key.pem \
--endpoints="https://192.168.60.10:2379,https://192.168.60.60:2379,https://192.168.60.100:2379" \
set /coreos.com/network/config '{"Network":"172.17.0.0/16","Backend":{"Type":"vxlan"}}'
【2】Check the stored entry
[root@master etcd-cert]# /opt/etcd/bin/etcdctl \
--ca-file=ca.pem \
--cert-file=server.pem \
--key-file=server-key.pem \
--endpoints="https://192.168.60.10:2379,https://192.168.60.60:2379,https://192.168.60.100:2379" \
get /coreos.com/network/config
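// if the write succeeded, this echoes back exactly the JSON stored in step 【1】:
{"Network":"172.17.0.0/16","Backend":{"Type":"vxlan"}}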
【3】Deploy the flannel component on every node
// on the 192.168.60.60 node (node1)
[root@node1 ~]# tar zxvf flannel-v0.10.0-linux-amd64.tar.gz 
flanneld
mk-docker-opts.sh
README.md
// on the 192.168.60.100 node (node2)
[root@node2 ~]# tar zxvf flannel-v0.10.0-linux-amd64.tar.gz 
flanneld
mk-docker-opts.sh
README.md
【4】Create the k8s working directories and move the binaries into place
// on the 192.168.60.60 node (node1)
[root@node1 ~]# mkdir -p /opt/kubernetes/{cfg,bin,ssl}
[root@node1 ~]# mv flanneld mk-docker-opts.sh /opt/kubernetes/bin/
// on the 192.168.60.100 node (node2)
[root@node2 ~]# mkdir -p /opt/kubernetes/{cfg,bin,ssl}
[root@node2 ~]# mv flanneld mk-docker-opts.sh /opt/kubernetes/bin/
【5】Write the flannel startup script (identical on every node)
[root@node1 ~]# vim flannel.sh
#!/bin/bash
# $1: comma-separated etcd endpoints (defaults to a local, non-TLS etcd)
ETCD_ENDPOINTS=${1:-"http://127.0.0.1:2379"}
cat <<EOF >/opt/kubernetes/cfg/flanneld
FLANNEL_OPTIONS="--etcd-endpoints=${ETCD_ENDPOINTS} \
-etcd-cafile=/opt/etcd/ssl/ca.pem \
-etcd-certfile=/opt/etcd/ssl/server.pem \
-etcd-keyfile=/opt/etcd/ssl/server-key.pem"
EOF

cat <<EOF >/usr/lib/systemd/system/flanneld.service
[Unit]
Description=Flanneld overlay address etcd agent
After=network-online.target network.target
Before=docker.service
[Service]
Type=notify
EnvironmentFile=/opt/kubernetes/cfg/flanneld
ExecStart=/opt/kubernetes/bin/flanneld --ip-masq \$FLANNEL_OPTIONS
ExecStartPost=/opt/kubernetes/bin/mk-docker-opts.sh -k DOCKER_NETWORK_OPTIONS -d /run/flannel/subnet.env
Restart=on-failure
[Install]
WantedBy=multi-user.target
EOF

systemctl daemon-reload
systemctl enable flanneld
systemctl restart flanneld
【6】Enable the flannel network (pass the etcd endpoints; run this on both nodes)
[root@node1 ~]# sh flannel.sh https://192.168.60.10:2379,https://192.168.60.60:2379,https://192.168.60.100:2379
【7】Configure docker to use flannel (identical on all nodes)
[root@node1 ~]# vim /usr/lib/systemd/system/docker.service
// in the [Service] section, add the EnvironmentFile line and insert $DOCKER_NETWORK_OPTIONS into ExecStart
EnvironmentFile=/run/flannel/subnet.env
ExecStart=/usr/bin/dockerd $DOCKER_NETWORK_OPTIONS -H fd:// --containerd=/run/containerd/containerd.sock
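// before reloading, a quick grep confirms both edits landed in the unit file:
[root@node1 ~]# grep -E 'EnvironmentFile|ExecStart' /usr/lib/systemd/system/docker.service
EnvironmentFile=/run/flannel/subnet.env
ExecStart=/usr/bin/dockerd $DOCKER_NETWORK_OPTIONS -H fd:// --containerd=/run/containerd/containerd.sock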
【8】Check the bip-specified subnet used at docker startup
// these variables are generated by mk-docker-opts.sh from the subnet flanneld leased; note that the vxlan backend normally yields an MTU of 1450, while 1472 (as captured here) is what flannel uses for its default udp backend, e.g. when the Backend key in etcd is misspelled
// on node1, 192.168.60.60
[root@node1 ~]# cat /run/flannel/subnet.env 
DOCKER_OPT_BIP="--bip=172.17.39.1/24"
DOCKER_OPT_IPMASQ="--ip-masq=false"
DOCKER_OPT_MTU="--mtu=1472"
DOCKER_NETWORK_OPTIONS=" --bip=172.17.39.1/24 --ip-masq=false --mtu=1472"
// on node2, 192.168.60.100
[root@node2 ~]# cat /run/flannel/subnet.env 
DOCKER_OPT_BIP="--bip=172.17.85.1/24"
DOCKER_OPT_IPMASQ="--ip-masq=false"
DOCKER_OPT_MTU="--mtu=1472"
DOCKER_NETWORK_OPTIONS=" --bip=172.17.85.1/24 --ip-masq=false --mtu=1472"
【9】Restart the docker service (on every node)
[root@node1 ~]# systemctl daemon-reload 
[root@node1 ~]# systemctl restart docker.service 
【10】Inspect the flannel network
// on node1, 192.168.60.60
[root@node1 ~]# ifconfig 
docker0: flags=4099<UP,BROADCAST,MULTICAST>  mtu 1500
        inet 172.17.39.1  netmask 255.255.255.0  broadcast 172.17.39.255
        ether 02:42:b1:19:5b:a1  txqueuelen 0  (Ethernet)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 0  bytes 0 (0.0 B)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0
// on node2, 192.168.60.100
[root@node2 ~]# ifconfig 
docker0: flags=4099<UP,BROADCAST,MULTICAST>  mtu 1500
        inet 172.17.85.1  netmask 255.255.255.0  broadcast 172.17.85.255
        ether 02:42:b5:54:91:f1  txqueuelen 0  (Ethernet)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 0  bytes 0 (0.0 B)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0
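// docker0 only shows the bridge side; flanneld itself creates the overlay interface (flannel.1 for the vxlan backend, flannel0 for the default udp backend) plus a route toward the other node's subnet. A quick sanity check with standard iproute2 commands (exact output depends on your lease):
[root@node1 ~]# ip -d link show flannel.1
[root@node1 ~]# ip route | grep 172.17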
【11】Test connectivity between the nodes (run a container on each node and ping across)

// on the 192.168.60.60 node (node1)

[root@node1 ~]# docker run -it centos:7 /bin/bash
[root@2bbac9ebdc96 /]# yum install -y net-tools
[root@2bbac9ebdc96 /]# ifconfig 
eth0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1472
        inet 172.17.39.2  netmask 255.255.255.0  broadcast 172.17.39.255
        ether 02:42:ac:11:27:02  txqueuelen 0  (Ethernet)
        RX packets 15198  bytes 12444271 (11.8 MiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 7322  bytes 398889 (389.5 KiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0
[root@2bbac9ebdc96 /]# ping 172.17.85.2
PING 172.17.85.2 (172.17.85.2) 56(84) bytes of data.
64 bytes from 172.17.85.2: icmp_seq=1 ttl=60 time=1.08 ms
64 bytes from 172.17.85.2: icmp_seq=2 ttl=60 time=0.523 ms
64 bytes from 172.17.85.2: icmp_seq=3 ttl=60 time=0.619 ms
64 bytes from 172.17.85.2: icmp_seq=4 ttl=60 time=2.24 ms

// on the 192.168.60.100 node (node2)

[root@node2 ~]# docker run -it centos:7 /bin/bash
[root@79995e04b320 /]# yum install -y net-tools
[root@79995e04b320 /]# ifconfig 
eth0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1472
        inet 172.17.85.2  netmask 255.255.255.0  broadcast 172.17.85.255
        ether 02:42:ac:11:55:02  txqueuelen 0  (Ethernet)
        RX packets 15299  bytes 12447552 (11.8 MiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 5864  bytes 320081 (312.5 KiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0
[root@79995e04b320 /]# ping 172.17.39.2
PING 172.17.39.2 (172.17.39.2) 56(84) bytes of data.
64 bytes from 172.17.39.2: icmp_seq=1 ttl=60 time=0.706 ms
64 bytes from 172.17.39.2: icmp_seq=2 ttl=60 time=0.491 ms
64 bytes from 172.17.39.2: icmp_seq=3 ttl=60 time=0.486 ms
64 bytes from 172.17.39.2: icmp_seq=4 ttl=60 time=0.528 ms