Deployment architecture
The official documentation describes two high-availability topologies; the difference is whether etcd runs in containers inside the cluster or on separate hosts. I used the containerized approach this time and hit quite a few pitfalls: when joining the second master the join failed and the first master's etcd would no longer start. A ConfigMap had apparently been modified, but with etcd down I could no longer edit the etcd ConfigMap to undo the change. Going forward I recommend deploying etcd outside the cluster. Interested readers can consult the official docs: https://kubernetes.io/docs/setup/independent/ha-topology/ The architecture diagram is shown below:
Server information
Hostname    IP              OS version                             Role
k8snode01   192.168.33.61   CentOS Linux release 7.6.1810 (Core)   master, etcd
k8snode02   192.168.33.62   CentOS Linux release 7.6.1810 (Core)   master, etcd
k8snode03   192.168.33.63   CentOS Linux release 7.6.1810 (Core)   master, etcd
k8snode04   192.168.33.64   CentOS Linux release 7.6.1810 (Core)   worker
k8snode05   192.168.33.65   CentOS Linux release 7.6.1810 (Core)   worker
vip         192.168.33.66   -                                      virtual IP; not needed for a single-master cluster or when an external LoadBalancer is used
cat /etc/redhat-release
Check that the required ports are open
Master node(s)
Protocol   Direction   Port Range    Purpose                   Used By
TCP        Inbound     6443*         Kubernetes API server     All
TCP        Inbound     2379-2380     etcd server client API    kube-apiserver, etcd
TCP        Inbound     10250         Kubelet API               Self, Control plane
TCP        Inbound     10251         kube-scheduler            Self
TCP        Inbound     10252         kube-controller-manager   Self
Worker node(s)
Protocol   Direction   Port Range    Purpose               Used By
TCP        Inbound     10250         Kubelet API           Self, Control plane
TCP        Inbound     30000-32767   NodePort Services**   All
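Reachability of these ports can be probed from another host with a short bash loop. This is only a sketch: the `check_port` helper is an illustrative name, it relies on bash's built-in /dev/tcp (no nc required), and the master IP is taken from the table above.

```shell
#!/usr/bin/env bash
# Probe a TCP port using bash's /dev/tcp pseudo-device; returns 0 if reachable.
check_port() {
  local host=$1 port=$2
  timeout 2 bash -c "echo > /dev/tcp/${host}/${port}" 2>/dev/null
}

# Ports required on a master node (per the table above).
for port in 6443 2379 2380 10250 10251 10252; do
  if check_port 192.168.33.61 "$port"; then
    echo "port ${port}: open"
  else
    echo "port ${port}: closed or filtered"
  fi
done
```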
Software versions
docker-ce-18.06.1.ce
socat-1.7.3.2-2.el7.x86_64
kubelet-1.13.2-0.x86_64
kubernetes-cni-0.6.0-0.x86_64
kubectl-1.13.2-0.x86_64
kubeadm-1.13.2-0.x86_64
Environment initialization (run as root)
1. Set the hostname on each server (run the matching command on each host)
hostnamectl set-hostname k8snode01
hostnamectl set-hostname k8snode02
hostnamectl set-hostname k8snode03
hostnamectl set-hostname k8snode04
hostnamectl set-hostname k8snode05
2. Configure /etc/hosts on every server
cat << EOF >> /etc/hosts
192.168.33.61 k8snode01
192.168.33.62 k8snode02
192.168.33.63 k8snode03
192.168.33.64 k8snode04
192.168.33.65 k8snode05
EOF
3. On every server: stop the firewall, turn off swap, disable SELinux, set kernel parameters, add the Kubernetes yum repo, install dependency packages, and configure NTP (a reboot is recommended when done)
systemctl stop firewalld
systemctl disable firewalld
swapoff -a
sed -i 's/.*swap.*/#&/' /etc/fstab
setenforce 0
sed -i "s/^SELINUX=enforcing/SELINUX=disabled/g" /etc/sysconfig/selinux
sed -i "s/^SELINUX=enforcing/SELINUX=disabled/g" /etc/selinux/config
sed -i "s/^SELINUX=permissive/SELINUX=disabled/g" /etc/sysconfig/selinux
sed -i "s/^SELINUX=permissive/SELINUX=disabled/g" /etc/selinux/config
modprobe br_netfilter
cat << EOF > /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
sysctl -p /etc/sysctl.d/k8s.conf
ls /proc/sys/net/bridge
cat << EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
yum install -y epel-release
yum install -y yum-utils device-mapper-persistent-data lvm2 net-tools conntrack-tools wget vim ntpdate libseccomp libtool-ltdl
echo "* soft nofile 65536" >> /etc/security/limits.conf
echo "* hard nofile 65536" >> /etc/security/limits.conf
echo "* soft nproc 65536" >> /etc/security/limits.conf
echo "* hard nproc 65536" >> /etc/security/limits.conf
echo "* soft memlock unlimited" >> /etc/security/limits.conf
echo "* hard memlock unlimited" >> /etc/security/limits.conf
systemctl enable ntpdate.service
echo '*/30 * * * * /usr/sbin/ntpdate time7.aliyun.com >/dev/null 2>&1' > /tmp/crontab2.tmp
crontab /tmp/crontab2.tmp
systemctl start ntpdate.service
Alternatively, set the timezone manually:
1. Connect to the Linux server.
2. Run sudo rm /etc/localtime to delete the system's local time link.
3. Run sudo vi /etc/sysconfig/clock to open and edit the clock configuration file.
4. Press i and add the timezone city, e.g. Zone=Asia/Shanghai, then press Esc and type :wq to save and exit. (Run ls /usr/share/zoneinfo to list the available timezones; Shanghai is one of the entries.)
5. Run sudo ln -sf /usr/share/zoneinfo/Asia/Shanghai /etc/localtime to update the timezone link.
6. Run hwclock -w to update the hardware clock (RTC).
7. Run sudo reboot to restart the instance.
8. Run date -R to check that the timezone change took effect; if not, repeat the steps.
9. Install the ntp service: sudo yum install ntp.
10. Or simply switch to the China timezone and enable synchronization with timedatectl:
timedatectl set-timezone Asia/Shanghai
timedatectl set-ntp yes
Enable the following options in the SSH daemon config, then restart sshd and distribute keys so the nodes can copy files to each other:
vi /etc/ssh/sshd_config
PasswordAuthentication yes
PubkeyAuthentication yes
PermitRootLogin yes
sudo systemctl restart sshd.service
sudo systemctl status sshd.service
ssh-keygen
ssh-copy-id k8snode02
ssh-copy-id k8snode03
ssh-copy-id k8snode04
ssh-copy-id k8snode05
reboot
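Once passwordless SSH is in place, repetitive per-node steps can be fanned out from k8snode01. A minimal sketch, assuming the node list from the host table above; `-o ConnectTimeout=3` keeps one dead node from blocking the loop.

```shell
#!/usr/bin/env bash
# Run the same command on every other node over SSH (sketch).
NODES="k8snode02 k8snode03 k8snode04 k8snode05"
for node in $NODES; do
  echo "=== $node ==="
  ssh -o ConnectTimeout=3 "$node" "hostname; uptime" || echo "unreachable: $node"
done
```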
4. Install and configure keepalived (master nodes only). Skip this step for a single-master cluster or when using an external LoadBalancer.
yum install -y keepalived
systemctl enable keepalived
On k8snode01:
cat << EOF > /etc/keepalived/keepalived.conf
global_defs {
    router_id LVS_k8s
}
vrrp_script CheckK8sMaster {
    script "curl -k https://192.168.33.66:6443"
    interval 3
    timeout 9
    fall 2
    rise 2
}
vrrp_instance VI_1 {
    state MASTER
    interface eth1
    virtual_router_id 61
    priority 100
    advert_int 1
    mcast_src_ip 192.168.33.61
    nopreempt
    authentication {
        auth_type PASS
        auth_pass sqP05dQgMSlzrxHj
    }
    unicast_peer {
        192.168.33.62
        192.168.33.63
    }
    virtual_ipaddress {
        192.168.33.66/24
    }
    track_script {
        CheckK8sMaster
    }
}
EOF
On k8snode02:
cat << EOF > /etc/keepalived/keepalived.conf
global_defs {
    router_id LVS_k8s
}
vrrp_script CheckK8sMaster {
    script "curl -k https://192.168.33.66:6443"
    interval 3
    timeout 9
    fall 2
    rise 2
}
vrrp_instance VI_1 {
    state MASTER
    interface eth1
    virtual_router_id 61
    priority 90
    advert_int 1
    mcast_src_ip 192.168.33.62
    nopreempt
    authentication {
        auth_type PASS
        auth_pass sqP05dQgMSlzrxHj
    }
    unicast_peer {
        192.168.33.61
        192.168.33.63
    }
    virtual_ipaddress {
        192.168.33.66/24
    }
    track_script {
        CheckK8sMaster
    }
}
EOF
On k8snode03:
cat << EOF > /etc/keepalived/keepalived.conf
global_defs {
    router_id LVS_k8s
}
vrrp_script CheckK8sMaster {
    script "curl -k https://192.168.33.66:6443"
    interval 3
    timeout 9
    fall 2
    rise 2
}
vrrp_instance VI_1 {
    state MASTER
    interface eth1
    virtual_router_id 61
    priority 80
    advert_int 1
    mcast_src_ip 192.168.33.63
    nopreempt
    authentication {
        auth_type PASS
        auth_pass sqP05dQgMSlzrxHj
    }
    unicast_peer {
        192.168.33.61
        192.168.33.62
    }
    virtual_ipaddress {
        192.168.33.66/24
    }
    track_script {
        CheckK8sMaster
    }
}
EOF
systemctl restart keepalived
ip addr
On the active master the VIP should appear in the output:
inet 192.168.33.66/24 scope global secondary eth1
valid_lft forever preferred_lft forever
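To tell at a glance which node currently holds the VIP (for example after stopping keepalived on the active master to exercise failover), a quick check like the following can be run on each master. This is only a sketch; the VIP address is the one configured above.

```shell
#!/usr/bin/env bash
# Report whether this node currently holds the keepalived VIP.
VIP=192.168.33.66
if ip -o addr 2>/dev/null | grep -q "inet ${VIP}/"; then
  echo "this node holds the VIP ${VIP}"
else
  echo "VIP ${VIP} is not bound here"
fi
```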
5. Install docker, kubeadm, kubelet and kubectl on every server
yum install -y yum-utils device-mapper-persistent-data lvm2
yum-config-manager --add-repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
yum list docker-ce --showduplicates
yum install -y docker-ce-18.06.1.ce
systemctl daemon-reload
systemctl restart docker
systemctl enable docker
systemctl status docker
yum list kubeadm --showduplicates
yum install -y kubeadm-1.13.2-0 kubelet-1.13.2-0 kubectl-1.13.2-0
yum install -y bash-completion
source /usr/share/bash-completion/bash_completion
source <(kubectl completion bash)
echo "source <(kubectl completion bash)" >> ~/.bashrc
systemctl daemon-reload
systemctl enable kubelet
6. Install the etcd cluster
wget https://pkg.cfssl.org/R1.2/cfssl_linux-amd64
wget https://pkg.cfssl.org/R1.2/cfssljson_linux-amd64
wget https://pkg.cfssl.org/R1.2/cfssl-certinfo_linux-amd64
chmod +x cfssl_linux-amd64
mv cfssl_linux-amd64 /usr/local/bin/cfssl
chmod +x cfssljson_linux-amd64
mv cfssljson_linux-amd64 /usr/local/bin/cfssljson
chmod +x cfssl-certinfo_linux-amd64
mv cfssl-certinfo_linux-amd64 /usr/local/bin/cfssl-certinfo
export PATH=/usr/local/bin:$PATH
mkdir /root/ssl
cd /root/ssl
cat > ca-config.json << EOF
{
"signing": {
"default": {
"expiry": "8760h"
},
"profiles": {
"kubernetes-Soulmate": {
"usages": [
"signing",
"key encipherment",
"server auth",
"client auth"
],
"expiry": "8760h"
}
}
}
}
EOF
cat > ca-csr.json << EOF
{
"CN": "kubernetes-Soulmate",
"key": {
"algo": "rsa",
"size": 2048
},
"names": [
{
"C": "CN",
"ST": "shanghai",
"L": "shanghai",
"O": "k8s",
"OU": "System"
}
]
}
EOF
cfssl gencert -initca ca-csr.json | cfssljson -bare ca
cat > etcd-csr.json << EOF
{
"CN": "etcd",
"hosts": [
"127.0.0.1",
"192.168.33.61",
"192.168.33.62",
"192.168.33.63",
"192.168.33.66"
],
"key": {
"algo": "rsa",
"size": 2048
},
"names": [
{
"C": "CN",
"ST": "shanghai",
"L": "shanghai",
"O": "k8s",
"OU": "System"
}
]
}
EOF
cfssl gencert -ca=ca.pem \
  -ca-key=ca-key.pem \
  -config=ca-config.json \
  -profile=kubernetes-Soulmate etcd-csr.json | cfssljson -bare etcd
mkdir -p /etc/etcd/ssl
cp etcd.pem etcd-key.pem ca.pem /etc/etcd/ssl/
ssh -n k8snode02 "mkdir -p /etc/etcd/ssl && exit"
ssh -n k8snode03 "mkdir -p /etc/etcd/ssl && exit"
scp -r /etc/etcd/ssl/*.pem k8snode02:/etc/etcd/ssl/
scp -r /etc/etcd/ssl/*.pem k8snode03:/etc/etcd/ssl/
yum install etcd -y
mkdir -p /var/lib/etcd
On k8snode01 (the delimiter is quoted so the backslash line continuations are written to the unit file verbatim):
cat << 'EOF' > /etc/systemd/system/etcd.service
[Unit]
Description=Etcd Server
After=network.target
After=network-online.target
Wants=network-online.target
Documentation=https://github.com/coreos

[Service]
Type=notify
WorkingDirectory=/var/lib/etcd/
ExecStart=/usr/bin/etcd \
  --name k8snode01 \
  --cert-file=/etc/etcd/ssl/etcd.pem \
  --key-file=/etc/etcd/ssl/etcd-key.pem \
  --peer-cert-file=/etc/etcd/ssl/etcd.pem \
  --peer-key-file=/etc/etcd/ssl/etcd-key.pem \
  --trusted-ca-file=/etc/etcd/ssl/ca.pem \
  --peer-trusted-ca-file=/etc/etcd/ssl/ca.pem \
  --initial-advertise-peer-urls https://192.168.33.61:2380 \
  --listen-peer-urls https://192.168.33.61:2380 \
  --listen-client-urls https://192.168.33.61:2379,http://127.0.0.1:2379 \
  --advertise-client-urls https://192.168.33.61:2379 \
  --initial-cluster-token etcd-cluster-0 \
  --initial-cluster k8snode01=https://192.168.33.61:2380,k8snode02=https://192.168.33.62:2380,k8snode03=https://192.168.33.63:2380 \
  --initial-cluster-state new \
  --data-dir=/var/lib/etcd
Restart=on-failure
RestartSec=5
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
EOF
On k8snode02:
cat << 'EOF' > /etc/systemd/system/etcd.service
[Unit]
Description=Etcd Server
After=network.target
After=network-online.target
Wants=network-online.target
Documentation=https://github.com/coreos

[Service]
Type=notify
WorkingDirectory=/var/lib/etcd/
ExecStart=/usr/bin/etcd \
  --name k8snode02 \
  --cert-file=/etc/etcd/ssl/etcd.pem \
  --key-file=/etc/etcd/ssl/etcd-key.pem \
  --peer-cert-file=/etc/etcd/ssl/etcd.pem \
  --peer-key-file=/etc/etcd/ssl/etcd-key.pem \
  --trusted-ca-file=/etc/etcd/ssl/ca.pem \
  --peer-trusted-ca-file=/etc/etcd/ssl/ca.pem \
  --initial-advertise-peer-urls https://192.168.33.62:2380 \
  --listen-peer-urls https://192.168.33.62:2380 \
  --listen-client-urls https://192.168.33.62:2379,http://127.0.0.1:2379 \
  --advertise-client-urls https://192.168.33.62:2379 \
  --initial-cluster-token etcd-cluster-0 \
  --initial-cluster k8snode01=https://192.168.33.61:2380,k8snode02=https://192.168.33.62:2380,k8snode03=https://192.168.33.63:2380 \
  --initial-cluster-state new \
  --data-dir=/var/lib/etcd
Restart=on-failure
RestartSec=5
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
EOF
On k8snode03:
cat << 'EOF' > /etc/systemd/system/etcd.service
[Unit]
Description=Etcd Server
After=network.target
After=network-online.target
Wants=network-online.target
Documentation=https://github.com/coreos

[Service]
Type=notify
WorkingDirectory=/var/lib/etcd/
ExecStart=/usr/bin/etcd \
  --name k8snode03 \
  --cert-file=/etc/etcd/ssl/etcd.pem \
  --key-file=/etc/etcd/ssl/etcd-key.pem \
  --peer-cert-file=/etc/etcd/ssl/etcd.pem \
  --peer-key-file=/etc/etcd/ssl/etcd-key.pem \
  --trusted-ca-file=/etc/etcd/ssl/ca.pem \
  --peer-trusted-ca-file=/etc/etcd/ssl/ca.pem \
  --initial-advertise-peer-urls https://192.168.33.63:2380 \
  --listen-peer-urls https://192.168.33.63:2380 \
  --listen-client-urls https://192.168.33.63:2379,http://127.0.0.1:2379 \
  --advertise-client-urls https://192.168.33.63:2379 \
  --initial-cluster-token etcd-cluster-0 \
  --initial-cluster k8snode01=https://192.168.33.61:2380,k8snode02=https://192.168.33.62:2380,k8snode03=https://192.168.33.63:2380 \
  --initial-cluster-state new \
  --data-dir=/var/lib/etcd
Restart=on-failure
RestartSec=5
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
EOF
Then on all three etcd nodes:
systemctl daemon-reload
systemctl enable etcd
systemctl start etcd
systemctl status etcd
Check the cluster health:
etcdctl --endpoints=https://192.168.33.61:2379,https://192.168.33.62:2379,https://192.168.33.63:2379 \
  --ca-file=/etc/etcd/ssl/ca.pem \
  --cert-file=/etc/etcd/ssl/etcd.pem \
  --key-file=/etc/etcd/ssl/etcd-key.pem cluster-health
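etcd 3.x serves both the v2 and v3 APIs, so the health check can be cross-checked against the v3 API as well; note that the TLS flag names differ (--cacert/--cert/--key instead of --ca-file/--cert-file/--key-file). Endpoints and certificate paths are the ones configured above.

```shell
# v3 API health check for the same three-member cluster.
ETCDCTL_API=3 etcdctl \
  --endpoints=https://192.168.33.61:2379,https://192.168.33.62:2379,https://192.168.33.63:2379 \
  --cacert=/etc/etcd/ssl/ca.pem \
  --cert=/etc/etcd/ssl/etcd.pem \
  --key=/etc/etcd/ssl/etcd-key.pem \
  endpoint health
```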
7. Initialize the master node on k8snode01
cat << EOF > kubeadm-config.yaml
apiVersion: kubeadm.k8s.io/v1beta1
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: "192.168.33.61"
  bindPort: 6443
---
apiVersion: kubeadm.k8s.io/v1beta1
kind: ClusterConfiguration
kubernetesVersion: "v1.13.2"
apiServer:
  certSANs:
  - "192.168.33.66"
controlPlaneEndpoint: "192.168.33.66:6443"
networking:
  serviceSubnet: "10.96.0.0/12"
  podSubnet: "10.100.0.1/24"
  dnsDomain: "cluster.local"
imageRepository: registry.cn-hangzhou.aliyuncs.com/google_containers
clusterName: "k8s-cluster"
etcd:
  external:
    endpoints:
    - https://192.168.33.61:2379
    - https://192.168.33.62:2379
    - https://192.168.33.63:2379
    caFile: /etc/etcd/ssl/ca.pem
    certFile: /etc/etcd/ssl/etcd.pem
    keyFile: /etc/etcd/ssl/etcd-key.pem
EOF
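Before running kubeadm init it can help to pre-pull the control-plane images from the mirror set in imageRepository, so a slow registry does not stall the init:

```shell
# Pre-pull all images referenced by the kubeadm config (aliyun mirror above).
kubeadm config images pull --config kubeadm-config.yaml
```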
kubeadm init --config=kubeadm-config.yaml
Your Kubernetes master has initialized successfully!
To start using your cluster, you need to run the following as a regular user:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/
You can now join any number of machines by running the following on each node
as root:
kubeadm join 192.168.33.66:6443 --token cqagqf.4o3vh6gqxwwij9cf --discovery-token-ca-cert-hash sha256:a594b3b55ae13d0a782b116e65d962d07f764bc2a3a84d593fe66cca11136988
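The bootstrap token embedded in the join command expires (after 24 hours by default). If nodes need to be added later, a fresh token together with a complete join line can be generated on any master:

```shell
# Create a new bootstrap token and print the full join command.
kubeadm token create --print-join-command
```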
If the init fails, reset and clean up before retrying:
kubeadm reset -f
rm -rf /etc/kubernetes/*.conf
rm -rf /etc/kubernetes/manifests/*.yaml
docker ps -a | awk '{print $1}' | xargs docker rm -f
systemctl stop kubelet
ifconfig cni0 down
ip link delete cni0
ifconfig flannel.1 down
ip link delete flannel.1
rm -rf /var/lib/cni/
systemctl start kubelet
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
Deploy the flannel pod network:
wget https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
kubectl apply -f kube-flannel.yml
kubectl get pod -n kube-system -w
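Instead of watching the pod list interactively, kubectl wait can block until flannel is running and the node is Ready. The app=flannel label below is taken from the coreos kube-flannel.yml manifest and is worth verifying against the downloaded file:

```shell
# Block until the flannel pods and the first master are Ready (5 min timeout each).
kubectl -n kube-system wait --for=condition=Ready pod -l app=flannel --timeout=300s
kubectl wait --for=condition=Ready node/k8snode01 --timeout=300s
```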
Copy the certificates and the admin kubeconfig to the other masters:
scp -r /etc/kubernetes/pki k8snode02:/etc/kubernetes/
scp -r /etc/kubernetes/admin.conf k8snode02:/etc/kubernetes/admin.conf
scp -r /etc/kubernetes/pki k8snode03:/etc/kubernetes/
scp -r /etc/kubernetes/admin.conf k8snode03:/etc/kubernetes/admin.conf
Join k8snode02 as a control-plane node:
kubeadm join 192.168.33.66:6443 --token cqagqf.4o3vh6gqxwwij9cf --discovery-token-ca-cert-hash sha256:a594b3b55ae13d0a782b116e65d962d07f764bc2a3a84d593fe66cca11136988 --experimental-control-plane --apiserver-advertise-address 192.168.33.62
Join k8snode03 as a control-plane node:
kubeadm join 192.168.33.66:6443 --token cqagqf.4o3vh6gqxwwij9cf --discovery-token-ca-cert-hash sha256:a594b3b55ae13d0a782b116e65d962d07f764bc2a3a84d593fe66cca11136988 --experimental-control-plane --apiserver-advertise-address 192.168.33.63
Join the worker nodes k8snode04 and k8snode05:
kubeadm join 192.168.33.66:6443 --token cqagqf.4o3vh6gqxwwij9cf --discovery-token-ca-cert-hash sha256:a594b3b55ae13d0a782b116e65d962d07f764bc2a3a84d593fe66cca11136988 --apiserver-advertise-address 192.168.33.64
kubeadm join 192.168.33.66:6443 --token cqagqf.4o3vh6gqxwwij9cf --discovery-token-ca-cert-hash sha256:a594b3b55ae13d0a782b116e65d962d07f764bc2a3a84d593fe66cca11136988 --apiserver-advertise-address 192.168.33.65
Verify that all nodes have joined:
kubectl get node
References
https://www.kubernetes.org.cn/3808.html
https://kubernetes.io/docs/setup/independent/install-kubeadm/
https://choerodon.io/zh/docs/installation-configuration/steps/kubernetes/
https://blog.csdn.net/nklinsirui/article/details/80610058
https://k8smeetup.github.io/docs/admin/kubeadm/#config-file
https://k8smeetup.github.io/docs/reference/setup-tools/kubeadm/kubeadm-init/#config-file
https://blog.csdn.net/networken/article/details/84571373
https://godoc.org/k8s.io/kubernetes/cmd/kubeadm/app/apis/kubeadm/v1beta1
https://kubernetes.io/docs/setup/independent/create-cluster-kubeadm/
https://github.com/Lentil1016/kubeadm-ha/issues/34
https://kubernetes.io/docs/setup/independent/ha-topology/