Deploying a Kubernetes 1.13.3 HA Cluster with kubeadm
Defaults in Kubernetes 1.13:
DNS: CoreDNS
kube-proxy: IPVS mode # not iptables
Network: Calico
Note: a default one-click kubeadm deployment runs a single, non-HA etcd. This guide instead uses an external etcd cluster for high availability, with Keepalived providing a VIP for the Master API servers.
01. Purpose
1.1 Kubernetes features
- Distributed deployment
- Service discovery
- Service instance protection (self-healing)
- Rolling upgrades
- Transparent node maintenance
- Load balancing
- Dynamic scale-up and scale-down
Together, these fit the deployment and maintenance needs of a microservice architecture.
1.2 Microservice-friendly: fast development environments
Docker images let us give every developer an identical Linux development environment quickly and consistently. That removes the confusion and duplicated effort of each engineer building their own environment, so developers can focus on the code itself and save a great deal of time.
02. Environment
- Five machines form the Kubernetes v1.13+ cluster, plus one internal Harbor registry.
- etcd is deployed on all five nodes. All Kubernetes state is stored in etcd, so etcd must run as a highly available cluster.
- The Masters are made highly available with Keepalived; a Master's main job is to accept and dispatch user operations.
- The official docs describe Keepalived for the Master cluster; a self-built LB or a commercial one (AWS ALB/ELB) also works.
- The Nodes pair Traefik with a Keepalived VIP for high availability: Traefik serves traffic arriving on the node VIP.
System | Roles | IP Address |
---|---|---|
Master Keepalived VIP | VIP | 172.16.1.49 |
Node Keepalived VIP | VIP | 172.16.1.59 |
CentOS Linux release 7.4.1708 | Master01 | 172.16.1.50 |
CentOS Linux release 7.4.1708 | Master02 | 172.16.1.51 |
CentOS Linux release 7.4.1708 | Node01 | 172.16.1.52 |
CentOS Linux release 7.4.1708 | Node02 | 172.16.1.53 |
CentOS Linux release 7.4.1708 | Node03 | 172.16.1.54 |
CentOS Linux release 7.4.1708 | VM Harbor | xxx.xxx.xxx.xxx |
2.1 Cluster software versions
Software | Version |
---|---|
Kubernetes | 1.13.3 |
Docker-CE | 18.06.1 |
Etcd | 3.3.11 |
Calico | 3.1.4 |
Dashboard | v1.10.0 |
Traefik Ingress | 1.7.9 |
prometheus-operator | 0.29.0 |
03. Cluster terminology
3.1 Kubernetes
Kubernetes is an open-source, Docker-based container cluster management system initiated and maintained by Google. It supports the common public clouds as well as on-premises data centers. Built on top of Docker, Kubernetes provides a container scheduling service: users manage their cloud container clusters through Kubernetes without complex setup, and the system automatically picks suitable worker nodes to run the scheduled containers. Its core concept is the Pod: a group of containers running on the same worker node that share a network namespace, IP address, and storage quota, with per-Pod port mappings as needed. Worker nodes are managed by the control plane and run the services required to host Docker containers.
3.2 Docker
Docker is an open-source engine that makes it easy to build a lightweight, portable, self-sufficient container for any application. A container compiled and tested on a developer's laptop can be deployed in bulk to production, whether on VMs, bare metal, OpenStack clusters, or other infrastructure platforms.
3.3 Etcd
etcd is a distributed, consistent key-value store for shared configuration and service discovery.
3.4 Calico
Like Flannel, Calico solves cross-host communication between Docker containers. Calico's approach is non-intrusive: it does no packet encapsulation, forwarding traffic directly via routing and iptables, so the overhead is negligible and throughput is close to bare metal. Flannel must encapsulate and decapsulate every packet, which costs CPU, so it is less efficient than Calico.
04. Deploying the Kubernetes cluster
4.1 Pre-install preparation
As of February 2019, the current documented release is v1.13+. Upstream iterates quickly; we build against this documented version.
Configure a hostname on every Kubernetes node.
# set the hostname (run the matching line on each host)
hostnamectl set-hostname K8S01-Master01
hostnamectl set-hostname K8S01-Master02
hostnamectl set-hostname K8S01-Node01
hostnamectl set-hostname K8S01-Node02
hostnamectl set-hostname K8S01-Node03
# populate /etc/hosts on every node
cat <<EOF > /etc/hosts
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
172.16.1.50 master01 K8S01-Master01
172.16.1.51 master02 K8S01-Master02
172.16.1.52 node01 K8S01-Node01
172.16.1.53 node02 K8S01-Node02
172.16.1.54 node03 K8S01-Node03
EOF
# set up passwordless SSH from master01
ssh-keygen   # press Enter through all prompts
ssh-copy-id master01
ssh-copy-id master02
ssh-copy-id node01
ssh-copy-id node02
ssh-copy-id node03
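After distributing the keys, it is worth confirming that every host answers without a password before moving on (a quick check; the host aliases are the ones defined in /etc/hosts above):

```shell
# Verify passwordless SSH to every node; BatchMode fails fast instead of
# prompting if a key was not installed correctly.
HOSTS="master01 master02 node01 node02 node03"
for h in $HOSTS; do
  ssh -o BatchMode=yes -o ConnectTimeout=3 "$h" hostname \
    || echo "WARN: passwordless ssh to $h failed"
done
```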
4.2 System tuning and cluster preparation
# disable the firewall
systemctl stop firewalld
systemctl disable firewalld
### disable swap
swapoff -a
sed -i 's/.*swap.*/#&/' /etc/fstab
### disable SELinux
setenforce 0
sed -i "s/^SELINUX=enforcing/SELINUX=disabled/g" /etc/sysconfig/selinux
sed -i "s/^SELINUX=enforcing/SELINUX=disabled/g" /etc/selinux/config
sed -i "s/^SELINUX=permissive/SELINUX=disabled/g" /etc/sysconfig/selinux
sed -i "s/^SELINUX=permissive/SELINUX=disabled/g" /etc/selinux/config
### if modprobe fails here, see the error-handling notes below
modprobe br_netfilter
cat <<EOF > /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
vm.swappiness=0
EOF
sysctl -p /etc/sysctl.d/k8s.conf
ls /proc/sys/net/bridge
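Note that `modprobe br_netfilter` only lasts until the next reboot, which would silently disable the bridge sysctls above. A sketch of one way to persist it on CentOS 7, using systemd's modules-load.d convention (the filename is arbitrary):

```shell
# Load br_netfilter automatically at boot so the k8s.conf sysctls keep applying.
mkdir -p /etc/modules-load.d
echo br_netfilter > /etc/modules-load.d/br_netfilter.conf
cat /etc/modules-load.d/br_netfilter.conf
```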
### Kubernetes yum repo (Aliyun mirror)
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
### raise resource limits (/etc/security/limits.conf)
echo "* soft nofile 204800" >> /etc/security/limits.conf
echo "* hard nofile 204800" >> /etc/security/limits.conf
echo "* soft nproc 204800" >> /etc/security/limits.conf
echo "* hard nproc 204800" >> /etc/security/limits.conf
echo "* soft memlock unlimited" >> /etc/security/limits.conf
echo "* hard memlock unlimited" >> /etc/security/limits.conf
### prerequisites for running kube-proxy in IPVS mode
# Upstream doc: https://github.com/kubernetes/kubernetes/blob/master/pkg/proxy/ipvs/README.md
# Reference: https://www.qikqiak.com/post/how-to-use-ipvs-in-kubernetes/
# load the required kernel modules
modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
modprobe -- nf_conntrack_ipv4
# check that the modules are loaded
lsmod | grep -e ip_vs -e nf_conntrack_ipv4
# or
cut -f1 -d " " /proc/modules | grep -e ip_vs -e nf_conntrack_ipv4
# install ipvsadm on all nodes
yum install ipvsadm -y
ipvsadm -l -n
# Version INFO: IP Virtual Server version 1.2.1 (size=4096)
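The modprobe calls above also do not survive a reboot. One common pattern on CentOS 7 (the one used in the guide linked above) is a small script under /etc/sysconfig/modules/ that is executed at boot:

```shell
# Persist the IPVS modules across reboots.
mkdir -p /etc/sysconfig/modules
cat > /etc/sysconfig/modules/ipvs.modules <<'EOF'
#!/bin/bash
modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
modprobe -- nf_conntrack_ipv4
EOF
chmod 755 /etc/sysconfig/modules/ipvs.modules
bash /etc/sysconfig/modules/ipvs.modules || true   # load now as well
```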
4.3 Install Docker CE
yum install -y yum-utils device-mapper-persistent-data lvm2
yum-config-manager \
--add-repo \
https://download.docker.com/linux/centos/docker-ce.repo
yum makecache fast
yum install -y --setopt=obsoletes=0 \
docker-ce-18.06.1.ce-3.el7
systemctl start docker
systemctl enable docker
4.4 Configure a Docker registry mirror on all nodes
Get a personal accelerator address from the Aliyun container registry console (https://dev.aliyun.com/search.html, log in to the management center), then point Docker at it:
sudo mkdir -p /etc/docker
sudo tee /etc/docker/daemon.json <<-'EOF'
{
"registry-mirrors": ["https://3csy84rx.mirror.aliyuncs.com"]
}
EOF
sudo systemctl daemon-reload
sudo systemctl restart docker
05. Generating TLS certificates and keys
5.1 Certificates required by the cluster
- ca: the cluster admin CA certificate.
- etcd: used by the etcd cluster.
- shinezone: used by Harbor.
CA & Key | etcd | api-server | proxy | kubectl | Calico | harbor |
---|---|---|---|---|---|---|
ca.csr | √ | √ | √ | √ | √ | |
ca.pem | √ | √ | √ | √ | √ | |
ca-key.pem | √ | √ | √ | √ | √ | |
etcd.pem | √ | | | | | |
etcd.csr | √ | | | | | |
etcd-key.pem | √ | | | | | |
shinezone.com.crt | | | | | | √ |
shinezone.com.key | | | | | | √ |
5.2 Install CFSSL
- Run on Master01
yum install wget -y
wget https://pkg.cfssl.org/R1.2/cfssl_linux-amd64
wget https://pkg.cfssl.org/R1.2/cfssljson_linux-amd64
wget https://pkg.cfssl.org/R1.2/cfssl-certinfo_linux-amd64
chmod +x cfssl_linux-amd64
mv cfssl_linux-amd64 /usr/local/bin/cfssl
chmod +x cfssljson_linux-amd64
mv cfssljson_linux-amd64 /usr/local/bin/cfssljson
chmod +x cfssl-certinfo_linux-amd64
mv cfssl-certinfo_linux-amd64 /usr/local/bin/cfssl-certinfo
export PATH=/usr/local/bin:$PATH
5.3 Create the CA and issue the etcd certificate
mkdir /root/ssl
cd /root/ssl
cat > ca-config.json <<EOF
{
"signing": {
"default": {
"expiry": "8760h"
},
"profiles": {
"kubernetes-Soulmate": {
"usages": [
"signing",
"key encipherment",
"server auth",
"client auth"
],
"expiry": "8760h"
}
}
}
}
EOF
cat > ca-csr.json <<EOF
{
"CN": "kubernetes-Soulmate",
"key": {
"algo": "rsa",
"size": 2048
},
"names": [
{
"C": "CN",
"ST": "shanghai",
"L": "shanghai",
"O": "k8s",
"OU": "System"
}
]
}
EOF
cfssl gencert -initca ca-csr.json | cfssljson -bare ca
# hosts must list every etcd member; adding all nodes as well makes future etcd scale-out easier.
cat > etcd-csr.json <<EOF
{
"CN": "etcd",
"hosts": [
"127.0.0.1",
"172.16.1.50",
"172.16.1.51",
"172.16.1.52",
"172.16.1.53",
"172.16.1.54"
],
"key": {
"algo": "rsa",
"size": 2048
},
"names": [
{
"C": "CN",
"ST": "shanghai",
"L": "shanghai",
"O": "k8s",
"OU": "System"
}
]
}
EOF
cfssl gencert -ca=ca.pem \
-ca-key=ca-key.pem \
-config=ca-config.json \
-profile=kubernetes-Soulmate etcd-csr.json | cfssljson -bare etcd
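Before distributing the certificate, it pays to confirm the SAN list really covers every member: a missing address surfaces later as TLS handshake (name-mismatch) failures between peers. A quick check with openssl, which prints SAN MISSING for any absent IP:

```shell
# Verify that etcd.pem authorizes every etcd member IP.
for ip in 172.16.1.50 172.16.1.51 172.16.1.52 172.16.1.53 172.16.1.54; do
  if openssl x509 -in etcd.pem -noout -text 2>/dev/null | grep -q "IP Address:$ip"; then
    echo "SAN ok: $ip"
  else
    echo "SAN MISSING: $ip"
  fi
done
```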
Field notes
- If the hosts field is non-empty, it lists the IPs or domain names authorized to use the certificate.
- ca-config.json: may define multiple profiles, each with its own expiry, usages, and other parameters; a profile is selected later when signing a certificate.
- signing: the certificate may be used to sign other certificates; the generated ca.pem carries CA=TRUE.
- server auth: a client may use this CA to verify certificates presented by servers.
- client auth: a server may use this CA to verify certificates presented by clients.
- "CN" (Common Name): kube-apiserver extracts this field from a certificate as the request's User Name; browsers use it to check whether a site is legitimate.
- "O" (Organization): kube-apiserver extracts this field as the Group the requesting user belongs to.
5.4 Distribute the certificates to all nodes
- Run on Master01
Every node in this cluster runs etcd, so the certificates must be distributed to all of them.
mkdir -p /etc/etcd/ssl
cp etcd.pem etcd-key.pem ca.pem /etc/etcd/ssl/
scp -r /etc/etcd/ master02:/etc/
scp -r /etc/etcd/ node01:/etc/
scp -r /etc/etcd/ node02:/etc/
scp -r /etc/etcd/ node03:/etc/
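After copying, a quick loop can confirm that every peer holds the same certificate files as master01 (compare the checksums it prints against the local ones):

```shell
# Verify the distributed certificates match on every peer.
PEERS="master02 node01 node02 node03"
for h in $PEERS; do
  ssh -o BatchMode=yes -o ConnectTimeout=3 "$h" md5sum /etc/etcd/ssl/etcd.pem /etc/etcd/ssl/ca.pem 2>/dev/null \
    || echo "WARN: could not verify certs on $h"
done
md5sum /etc/etcd/ssl/etcd.pem /etc/etcd/ssl/ca.pem 2>/dev/null || true   # local reference
```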
06. Installing and configuring etcd
6.1 Install etcd
- Run on all nodes
yum install etcd -y
mkdir -p /var/lib/etcd
6.2 Configure etcd
etcd.service on master01
cat <<EOF >/usr/lib/systemd/system/etcd.service
[Unit]
Description=Etcd Server
After=network.target
After=network-online.target
Wants=network-online.target
Documentation=https://github.com/coreos
[Service]
Type=notify
WorkingDirectory=/var/lib/etcd/
ExecStart=/usr/bin/etcd \
--name k8s01 \
--cert-file=/etc/etcd/ssl/etcd.pem \
--key-file=/etc/etcd/ssl/etcd-key.pem \
--peer-cert-file=/etc/etcd/ssl/etcd.pem \
--peer-key-file=/etc/etcd/ssl/etcd-key.pem \
--trusted-ca-file=/etc/etcd/ssl/ca.pem \
--peer-trusted-ca-file=/etc/etcd/ssl/ca.pem \
--initial-advertise-peer-urls https://172.16.1.50:2380 \
--listen-peer-urls https://172.16.1.50:2380 \
--listen-client-urls https://172.16.1.50:2379,http://127.0.0.1:2379 \
--advertise-client-urls https://172.16.1.50:2379 \
--initial-cluster-token etcd-cluster-0 \
--initial-cluster k8s01=https://172.16.1.50:2380,k8s02=https://172.16.1.51:2380,k8s03=https://172.16.1.52:2380,k8s04=https://172.16.1.53:2380,k8s05=https://172.16.1.54:2380 \
--initial-cluster-state new \
--data-dir=/var/lib/etcd
Restart=on-failure
RestartSec=5
LimitNOFILE=65536
[Install]
WantedBy=multi-user.target
EOF
etcd.service on master02
cat <<EOF >/usr/lib/systemd/system/etcd.service
[Unit]
Description=Etcd Server
After=network.target
After=network-online.target
Wants=network-online.target
Documentation=https://github.com/coreos
[Service]
Type=notify
WorkingDirectory=/var/lib/etcd/
ExecStart=/usr/bin/etcd \
--name k8s02 \
--cert-file=/etc/etcd/ssl/etcd.pem \
--key-file=/etc/etcd/ssl/etcd-key.pem \
--peer-cert-file=/etc/etcd/ssl/etcd.pem \
--peer-key-file=/etc/etcd/ssl/etcd-key.pem \
--trusted-ca-file=/etc/etcd/ssl/ca.pem \
--peer-trusted-ca-file=/etc/etcd/ssl/ca.pem \
--initial-advertise-peer-urls https://172.16.1.51:2380 \
--listen-peer-urls https://172.16.1.51:2380 \
--listen-client-urls https://172.16.1.51:2379,http://127.0.0.1:2379 \
--advertise-client-urls https://172.16.1.51:2379 \
--initial-cluster-token etcd-cluster-0 \
--initial-cluster k8s01=https://172.16.1.50:2380,k8s02=https://172.16.1.51:2380,k8s03=https://172.16.1.52:2380,k8s04=https://172.16.1.53:2380,k8s05=https://172.16.1.54:2380 \
--initial-cluster-state new \
--data-dir=/var/lib/etcd
Restart=on-failure
RestartSec=5
LimitNOFILE=65536
[Install]
WantedBy=multi-user.target
EOF
etcd.service on node01
cat <<EOF >/usr/lib/systemd/system/etcd.service
[Unit]
Description=Etcd Server
After=network.target
After=network-online.target
Wants=network-online.target
Documentation=https://github.com/coreos
[Service]
Type=notify
WorkingDirectory=/var/lib/etcd/
ExecStart=/usr/bin/etcd \
--name k8s03 \
--cert-file=/etc/etcd/ssl/etcd.pem \
--key-file=/etc/etcd/ssl/etcd-key.pem \
--peer-cert-file=/etc/etcd/ssl/etcd.pem \
--peer-key-file=/etc/etcd/ssl/etcd-key.pem \
--trusted-ca-file=/etc/etcd/ssl/ca.pem \
--peer-trusted-ca-file=/etc/etcd/ssl/ca.pem \
--initial-advertise-peer-urls https://172.16.1.52:2380 \
--listen-peer-urls https://172.16.1.52:2380 \
--listen-client-urls https://172.16.1.52:2379,http://127.0.0.1:2379 \
--advertise-client-urls https://172.16.1.52:2379 \
--initial-cluster-token etcd-cluster-0 \
--initial-cluster k8s01=https://172.16.1.50:2380,k8s02=https://172.16.1.51:2380,k8s03=https://172.16.1.52:2380,k8s04=https://172.16.1.53:2380,k8s05=https://172.16.1.54:2380 \
--initial-cluster-state new \
--data-dir=/var/lib/etcd
Restart=on-failure
RestartSec=5
LimitNOFILE=65536
[Install]
WantedBy=multi-user.target
EOF
etcd.service on node02
cat <<EOF >/usr/lib/systemd/system/etcd.service
[Unit]
Description=Etcd Server
After=network.target
After=network-online.target
Wants=network-online.target
Documentation=https://github.com/coreos
[Service]
Type=notify
WorkingDirectory=/var/lib/etcd/
ExecStart=/usr/bin/etcd \
--name k8s04 \
--cert-file=/etc/etcd/ssl/etcd.pem \
--key-file=/etc/etcd/ssl/etcd-key.pem \
--peer-cert-file=/etc/etcd/ssl/etcd.pem \
--peer-key-file=/etc/etcd/ssl/etcd-key.pem \
--trusted-ca-file=/etc/etcd/ssl/ca.pem \
--peer-trusted-ca-file=/etc/etcd/ssl/ca.pem \
--initial-advertise-peer-urls https://172.16.1.53:2380 \
--listen-peer-urls https://172.16.1.53:2380 \
--listen-client-urls https://172.16.1.53:2379,http://127.0.0.1:2379 \
--advertise-client-urls https://172.16.1.53:2379 \
--initial-cluster-token etcd-cluster-0 \
--initial-cluster k8s01=https://172.16.1.50:2380,k8s02=https://172.16.1.51:2380,k8s03=https://172.16.1.52:2380,k8s04=https://172.16.1.53:2380,k8s05=https://172.16.1.54:2380 \
--initial-cluster-state new \
--data-dir=/var/lib/etcd
Restart=on-failure
RestartSec=5
LimitNOFILE=65536
[Install]
WantedBy=multi-user.target
EOF
etcd.service on node03
cat <<EOF >/usr/lib/systemd/system/etcd.service
[Unit]
Description=Etcd Server
After=network.target
After=network-online.target
Wants=network-online.target
Documentation=https://github.com/coreos
[Service]
Type=notify
WorkingDirectory=/var/lib/etcd/
ExecStart=/usr/bin/etcd \
--name k8s05 \
--cert-file=/etc/etcd/ssl/etcd.pem \
--key-file=/etc/etcd/ssl/etcd-key.pem \
--peer-cert-file=/etc/etcd/ssl/etcd.pem \
--peer-key-file=/etc/etcd/ssl/etcd-key.pem \
--trusted-ca-file=/etc/etcd/ssl/ca.pem \
--peer-trusted-ca-file=/etc/etcd/ssl/ca.pem \
--initial-advertise-peer-urls https://172.16.1.54:2380 \
--listen-peer-urls https://172.16.1.54:2380 \
--listen-client-urls https://172.16.1.54:2379,http://127.0.0.1:2379 \
--advertise-client-urls https://172.16.1.54:2379 \
--initial-cluster-token etcd-cluster-0 \
--initial-cluster k8s01=https://172.16.1.50:2380,k8s02=https://172.16.1.51:2380,k8s03=https://172.16.1.52:2380,k8s04=https://172.16.1.53:2380,k8s05=https://172.16.1.54:2380 \
--initial-cluster-state new \
--data-dir=/var/lib/etcd
Restart=on-failure
RestartSec=5
LimitNOFILE=65536
[Install]
WantedBy=multi-user.target
EOF
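The five unit files above differ only in `--name` and the member IP. A small generator script (a hypothetical helper, not part of the original procedure) can render all five from one template; review the output in ./etcd-units, then install each file as /usr/lib/systemd/system/etcd.service on its node. The ExecStart here is a single long line, which systemd accepts just as it does the backslash-continued form:

```shell
#!/bin/sh
# Render all five etcd.service files from one template into ./etcd-units.
OUT=${OUT:-./etcd-units}
NODES="k8s01=172.16.1.50 k8s02=172.16.1.51 k8s03=172.16.1.52 k8s04=172.16.1.53 k8s05=172.16.1.54"
CLUSTER="k8s01=https://172.16.1.50:2380,k8s02=https://172.16.1.51:2380,k8s03=https://172.16.1.52:2380,k8s04=https://172.16.1.53:2380,k8s05=https://172.16.1.54:2380"
mkdir -p "$OUT"
for pair in $NODES; do
  name=${pair%%=*}; ip=${pair#*=}
  cat > "$OUT/etcd.service.$name" <<UNIT
[Unit]
Description=Etcd Server
After=network.target network-online.target
Wants=network-online.target
Documentation=https://github.com/coreos

[Service]
Type=notify
WorkingDirectory=/var/lib/etcd/
ExecStart=/usr/bin/etcd --name $name --cert-file=/etc/etcd/ssl/etcd.pem --key-file=/etc/etcd/ssl/etcd-key.pem --peer-cert-file=/etc/etcd/ssl/etcd.pem --peer-key-file=/etc/etcd/ssl/etcd-key.pem --trusted-ca-file=/etc/etcd/ssl/ca.pem --peer-trusted-ca-file=/etc/etcd/ssl/ca.pem --initial-advertise-peer-urls https://$ip:2380 --listen-peer-urls https://$ip:2380 --listen-client-urls https://$ip:2379,http://127.0.0.1:2379 --advertise-client-urls https://$ip:2379 --initial-cluster-token etcd-cluster-0 --initial-cluster $CLUSTER --initial-cluster-state new --data-dir=/var/lib/etcd
Restart=on-failure
RestartSec=5
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
UNIT
done
ls "$OUT"
```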
6.3 Start etcd
Start etcd on all five nodes at roughly the same time: with Type=notify, a member started alone waits for its peers to reach quorum before systemctl start returns.
systemctl daemon-reload
systemctl enable etcd
systemctl start etcd
systemctl status etcd
6.4 Cluster status checks (maintenance)
Use the v3 API
$ echo "export ETCDCTL_API=3" >>/etc/profile && source /etc/profile
$ etcdctl version
etcdctl version: 3.2.18
API version: 3.2
Check cluster health
$ etcdctl --endpoints=https://172.16.1.50:2379,https://172.16.1.51:2379,https://172.16.1.52:2379,https://172.16.1.53:2379,https://172.16.1.54:2379 --cacert=/etc/etcd/ssl/ca.pem --cert=/etc/etcd/ssl/etcd.pem --key=/etc/etcd/ssl/etcd-key.pem endpoint health
// expected output:
https://172.16.1.54:2379 is healthy: successfully committed proposal: took = 1.911784ms
https://172.16.1.50:2379 is healthy: successfully committed proposal: took = 2.648385ms
https://172.16.1.52:2379 is healthy: successfully committed proposal: took = 3.472479ms
https://172.16.1.51:2379 is healthy: successfully committed proposal: took = 2.850887ms
https://172.16.1.53:2379 is healthy: successfully committed proposal: took = 3.711259ms
List all keys
$ etcdctl --endpoints=https://172.16.1.50:2379,https://172.16.1.51:2379,https://172.16.1.52:2379,https://172.16.1.53:2379,https://172.16.1.54:2379 --cacert=/etc/etcd/ssl/ca.pem --cert=/etc/etcd/ssl/etcd.pem --key=/etc/etcd/ssl/etcd-key.pem get / --prefix --keys-only
// before kubeadm init the store is empty; after initialization the query returns entries such as:
/registry/apiregistration.k8s.io/apiservices/v1.
/registry/apiregistration.k8s.io/apiservices/v1.apps
/registry/apiregistration.k8s.io/apiservices/v1.authentication.k8s.io
/registry/apiregistration.k8s.io/apiservices/v1.authorization.k8s.io
/registry/apiregistration.k8s.io/apiservices/v1.autoscaling
/registry/apiregistration.k8s.io/apiservices/v1.batch
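Two more etcdctl subcommands worth keeping at hand for maintenance, using the same endpoints and TLS flags as the health check above (they only succeed on a host that can reach the cluster):

```shell
EP=https://172.16.1.50:2379,https://172.16.1.51:2379,https://172.16.1.52:2379,https://172.16.1.53:2379,https://172.16.1.54:2379
TLS="--cacert=/etc/etcd/ssl/ca.pem --cert=/etc/etcd/ssl/etcd.pem --key=/etc/etcd/ssl/etcd-key.pem"
# membership and which member is the leader
etcdctl --endpoints=$EP $TLS member list || echo "cluster unreachable from this host"
# per-endpoint db size, raft term, and leader, as a table
etcdctl --endpoints=$EP $TLS endpoint status -w table || echo "cluster unreachable from this host"
```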