Deployment process (how it works)
- Preflight checks
- kubelet start
- Generate certificates (under /etc/kubernetes/pki)
- Generate kubeconfig files (the .conf files under /etc/kubernetes/)
- Generate static Pod manifests (the Pod templates, i.e. the YAML files under /etc/kubernetes/manifests/)
- Wait for those static Pods to start successfully (they are launched by the kubelet)
- Upload the configuration to the cluster
- Taint the master (so that regular Pods are not scheduled onto the control-plane node)
- Generate a bootstrap token (a random string used as a password, the "secret handshake" for joining the cluster and verifying node identity)
- Set up RBAC (role-based access control)
- Install the DNS and kube-proxy add-ons
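The steps above are implemented as separate kubeadm phases and can also be run one at a time; a sketch (phase names as of kubeadm v1.20, check the help output on your version):

```shell
# List all phases that `kubeadm init` would run
kubeadm init phase --help

# Run a single phase in isolation, e.g. regenerate only the certificates...
kubeadm init phase certs all
# ...or only the kubeconfig files under /etc/kubernetes/
kubeadm init phase kubeconfig all
```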
Installation prerequisites
Disable swap, SELinux and iptables (firewalld), and tune the kernel parameters and resource-limit settings
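A rough CentOS sketch of these prerequisites (the exact sysctl values and the firewalld step are common kubeadm conventions, not prescribed by this document):

```shell
# Disable swap immediately and across reboots
swapoff -a
sed -ri 's/(.*swap.*)/#\1/' /etc/fstab

# Disable SELinux (setenforce applies now; the config edit survives reboots)
setenforce 0
sed -ri 's/^SELINUX=enforcing/SELINUX=disabled/' /etc/selinux/config

# Stop and disable the firewall
systemctl disable --now firewalld

# Kernel parameters commonly tuned for Kubernetes
cat >> /etc/sysctl.conf <<EOF
net.bridge.bridge-nf-call-iptables=1
net.ipv4.ip_forward=1
EOF
sysctl -p
```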
Deployment walkthrough 🥕🥕🥕
# Note: why every node needs docker, kubelet, kubeadm and kubectl
With the kubeadm deployment method, all of the master's components run as Pods, so the master node also needs kubelet and docker
' kubelet runs the Pods
' kubeadm turns a master into a master (and joins workers to it)
# Additional note:
Strictly speaking only the management node needs kubectl, but the other nodes end up with it anyway because the packages depend on it
Deployment result:
=
1 Base environment preparation
Start from a minimal OS install. On CentOS, disable the firewall, SELinux and swap; update the package repositories, configure time synchronization and install common tools; reboot and verify the base configuration. CentOS 7.5 or later is recommended; on Ubuntu, use 18.04 or a later stable release.
1. Firewall and SELinux
2. Time synchronization
3. Disable swap
swapoff -a
sed -ri 's/(.*swap.*)/#\1/' /etc/fstab
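For step 2 (time synchronization) the document gives no commands; a common CentOS 7 approach is chrony. The NTP server below is an assumed example, any reachable source works:

```shell
yum -y install chrony
# Point chrony at a reachable NTP server (ntp.aliyun.com is only an example)
echo 'server ntp.aliyun.com iburst' >> /etc/chrony.conf
systemctl enable --now chronyd
chronyc sources    # verify that the time source is reachable
```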
=
=
2 Install docker on all nodes
Ubuntu 18.04
# Step 1: install the required system tools
sudo apt-get update
sudo apt-get -y install apt-transport-https ca-certificates curl software-properties-common
# Step 2: install the GPG key
curl -fsSL https://mirrors.aliyun.com/docker-ce/linux/ubuntu/gpg | sudo apt-key add -
# Step 3: add the repository
sudo add-apt-repository "deb [arch=amd64] https://mirrors.aliyun.com/docker-ce/linux/ubuntu $(lsb_release -cs) stable"
# Step 4: update and install Docker CE
sudo apt-get -y update
sudo apt-get -y install docker-ce
# To install a specific version of Docker CE:
# Step 1: list the available Docker CE versions:
apt-cache madison docker-ce
# Step 2: install a specific version (VERSION as listed above, e.g. 17.03.1~ce-0~ubuntu-xenial)
sudo apt-get -y install docker-ce=[VERSION] docker-ce-cli=[VERSION]
' e.g. apt-get -y install \
  docker-ce=5:19.03.13~3-0~ubuntu-bionic docker-ce-cli=5:19.03.13~3-0~ubuntu-bionic
=
CentOS 7
# The Aliyun mirror guide is recommended!
https://developer.aliyun.com/mirror/docker-ce
# The script below may not work in every environment
#!/bin/bash
install (){
    yum -y remove docker docker-common docker-selinux docker-engine
    yum -y install yum-utils device-mapper-persistent-data lvm2 policycoreutils-python deltarpm
    yum-config-manager --add-repo https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
    # container-selinux dependency
    wget https://mirrors.aliyun.com/centos/7/cloud/x86_64/openstack-train/Packages/c/container-selinux-2.84-2.el7.noarch.rpm
    yum -y install container-selinux-2.84-2.el7.noarch.rpm
    # install
    yum -y install docker-ce
    # start
    systemctl enable --now docker
    docker version && echo -e "\e[1;32mdocker installed successfully\e[0m"
    # registry mirror
    cat > /etc/docker/daemon.json <<EOF
{
  "registry-mirrors": ["https://glk5eos6.mirror.aliyuncs.com"]
}
EOF
    systemctl restart docker
}
rpm -q docker-ce &> /dev/null && echo -e "\e[1;31mdocker is already installed\e[0m" || install
=
3 Install kube*
Install kubelet, kubeadm and kubectl on [all nodes]
Ubuntu 18.04
apt-get update && apt-get install -y apt-transport-https
curl https://mirrors.aliyun.com/kubernetes/apt/doc/apt-key.gpg | apt-key add -
cat <<EOF >/etc/apt/sources.list.d/kubernetes.list
deb https://mirrors.aliyun.com/kubernetes/apt/ kubernetes-xenial main
EOF
apt-get update
apt-get install -y kubelet kubeadm kubectl
=
CentOS 7
# Reference: https://developer.aliyun.com/mirror/kubernetes
# Repository
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
# Install
yum install -y kubelet kubeadm kubectl
systemctl enable --now kubelet
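Because `kubeadm init` later requires `--kubernetes-version` to be no lower than the installed kubelet, pinning an explicit version can avoid surprises when the repositories move ahead. A sketch for the v1.20.0 used below:

```shell
# CentOS: pin all three packages to the same release
yum install -y kubelet-1.20.0 kubeadm-1.20.0 kubectl-1.20.0
# Ubuntu equivalent (deb version strings carry a -00 suffix)
apt-get install -y kubelet=1.20.0-00 kubeadm=1.20.0-00 kubectl=1.20.0-00
```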
=
4 Initialize the first control-plane node 🥕
Command: kubeadm init
Note: the first run of this command must happen on the management node!!!
Side note: the control-plane component images are hosted at http://gcr.io/ (Google Container Registry)
# Prerequisite
The nodes resolve each other by hostname
# Set the hostnames
hostnamectl set-hostname master && exec bash
hostnamectl set-hostname node-1 && exec bash
hostnamectl set-hostname node-2 && exec bash
hostnamectl set-hostname node-3 && exec bash
# Add resolution records on all nodes
cat >> /etc/hosts <<EOF
10.0.0.5 master master.ilinux.io k8s-api.ilinux.io
10.0.0.15 node-1 node-1.ilinux.io
10.0.0.25 node-2 node-2.ilinux.io
10.0.0.35 node-3 node-3.ilinux.io
EOF
#--------------------------------------
# Change the default cgroup driver on all nodes
# When adding multiple key/value pairs, don't forget the trailing comma!!!!!!!!!!
cat > /etc/docker/daemon.json <<EOF
{
"registry-mirrors": ["https://glk5eos6.mirror.aliyuncs.com"],
"exec-opts": ["native.cgroupdriver=systemd"]
}
EOF
systemctl restart docker
docker info | grep Cgroup
# net.bridge settings
echo 'net.bridge.bridge-nf-call-iptables=1' >> /etc/sysctl.conf
sysctl -p
sysctl -a | grep bridge
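The `net.bridge.*` keys only exist while the `br_netfilter` kernel module is loaded; if `sysctl -p` reports an unknown key, load the module and make it persistent (a common companion step not shown above):

```shell
# Load the bridge netfilter module now
modprobe br_netfilter
lsmod | grep br_netfilter
# Have systemd load it on every boot
echo br_netfilter > /etc/modules-load.d/br_netfilter.conf
sysctl -p    # the net.bridge keys should now apply cleanly
```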
#-----------------------------------------------------------------
# Initialize the master (flag reference)
kubeadm init \
--image-repository            # image registry to pull from (use the Aliyun mirror)
--kubernetes-version          # Kubernetes version (must not be lower than the kubelet version installed above!)
--control-plane-endpoint      # control-plane endpoint name announced to all clients (multiple masters sharing one name enables load balancing)
--apiserver-advertise-address # address the API server advertises (auto-detected if omitted)
--pod-network-cidr            # network range to allocate to the containers inside Pods
--token-ttl 0                 # make the token never expire
# Run on the master:
# The current master is 10.0.0.5
kubeadm init \
--image-repository=registry.aliyuncs.com/google_containers \
--kubernetes-version=v1.20.0 \
--control-plane-endpoint=master \
--apiserver-advertise-address=10.0.0.5 \
--pod-network-cidr=10.244.0.0/16 \
--token-ttl=0
# Once it is running, you can watch the image downloads from another terminal
docker image ls
# Full run:
[root@master ~]# kubeadm init \
> --image-repository=registry.aliyuncs.com/google_containers \
> --kubernetes-version=v1.20.0 \
> --control-plane-endpoint=master \
> --apiserver-advertise-address=10.0.0.5 \
> --pod-network-cidr=10.244.0.0/16 \
> --token-ttl=0
[init] Using Kubernetes version: v1.20.0
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
......
......
Your Kubernetes control-plane has initialized successfully! # control-plane initialization succeeded
# The hints below describe the follow-up steps
To start using your cluster, you need to run the following as a regular user:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
Alternatively, if you are the root user, you can run:
export KUBECONFIG=/etc/kubernetes/admin.conf
You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/
You can now join any number of control-plane nodes by copying certificate authorities
and service account keys on each node and then running the following as root:
## Command to run later when joining a new [master node]; it is worth saving this part
kubeadm join master:6443 --token 3ch7av.w6mejh1y39srax9k \
--discovery-token-ca-cert-hash sha256:634f46b3cb2b3fedd05a9859bdcf35ca83493a7cf7eff9d2aa0c2d2088eee645 \
--control-plane
Then you can join any number of worker nodes by running the following on each as root:
## Command to run later when joining a new [worker node]; it is worth saving this part
kubeadm join master:6443 --token 3ch7av.w6mejh1y39srax9k \
--discovery-token-ca-cert-hash sha256:634f46b3cb2b3fedd05a9859bdcf35ca83493a7cf7eff9d2aa0c2d2088eee645
=
5 Required follow-up steps
Post-initialization steps on the master
# Follow the hints from the output (preferably run as a regular user)
# Done here as root:
mkdir .kube
cp -i /etc/kubernetes/admin.conf .kube/config # note: the file must be named "config"
# Check the node status
[root@master ~]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
master NotReady master 13m v1.20.0
' NotReady ... because the network plugin is still missing
' Hint from the output: You should now deploy a pod network to the cluster.
# Fix:
# Deploy flannel manually
wget https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
[root@master ~]#ls
kube-flannel.yml
[root@master ~]# kubectl apply -f kube-flannel.yml
podsecuritypolicy.policy/psp.flannel.unprivileged created
clusterrole.rbac.authorization.k8s.io/flannel created
clusterrolebinding.rbac.authorization.k8s.io/flannel created
serviceaccount/flannel created
configmap/kube-flannel-cfg created
daemonset.apps/kube-flannel-ds created
# Check whether kube-flannel-ds-tkksx has started
[root@master ~]# kubectl get pods -n kube-system
NAME READY STATUS RESTARTS AGE
coredns-6d56c8448f-4wsc2 0/1 Pending 0 30m
coredns-6d56c8448f-w7wgq 0/1 Pending 0 30m
etcd-master 1/1 Running 0 30m
kube-apiserver-master 1/1 Running 0 30m
kube-controller-manager-master 1/1 Running 0 30m
kube-flannel-ds-tkksx 0/1 Init:ImagePullBackOff 0 4m45s # not ready yet!!!
kube-proxy-zmzm5 1/1 Running 0 30m
kube-scheduler-master 1/1 Running 0 30m
# Wait a while
[root@master ~]# kubectl get pods -n kube-system
NAME READY STATUS RESTARTS AGE
coredns-6d56c8448f-4wsc2 1/1 Running 0 51m
coredns-6d56c8448f-w7wgq 1/1 Running 0 51m
etcd-master 1/1 Running 1 51m
kube-apiserver-master 1/1 Running 1 51m
kube-controller-manager-master 1/1 Running 1 51m
kube-flannel-ds-tkksx 1/1 Running 0 25m # Running !!!!
kube-proxy-zmzm5 1/1 Running 1 51m
kube-scheduler-master 1/1 Running 1 51m
# The node is now Ready
[root@master ~]#kubectl get nodes
NAME STATUS ROLES AGE VERSION
master Ready master 51m v1.20.0
=
6 = Join the worker nodes
Note: in production you should scale out to multiple control-plane nodes for high availability before adding workers; skipped here
# Add the worker nodes
' Note: the node being joined must be able to resolve the relevant hostname (i.e. the "master" in master:6443 below)
' Then you can join any number of worker nodes by running the following on each as root:
# Run on each node separately!!!!!!
# The following is one single command...
kubeadm join master:6443 --token 3ch7av.w6mejh1y39srax9k \
    --discovery-token-ca-cert-hash sha256:634f46b3cb2b3fedd05a9859bdcf35ca83493a7cf7eff9d2aa0c2d2088eee645
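If the saved join command is lost, a fresh one can be printed on the master at any time (with `--token-ttl=0` the original token never expires, but this also works after expiry):

```shell
# Print a ready-to-use worker join command (run on the master)
kubeadm token create --print-join-command
# Inspect the existing bootstrap tokens
kubeadm token list
```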
# Check from the master
[root@master ~]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
master Ready control-plane,master 11m v1.20.0
node-1 NotReady <none> 40s v1.20.0
node-2 NotReady <none> 20s v1.20.0
node-3 NotReady <none> 18s v1.20.0
' Note: if a node stays NotReady, check the image download progress on that worker node...
' Three images need to be downloaded in total, as follows:
[root@node-1 ~]#docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
quay.io/coreos/flannel v0.13.1-rc1 f03a23d55e57 12 days ago 64.6MB
registry.aliyuncs.com/google_containers/kube-proxy v1.19.4 635b36f4d89f 3 weeks ago 118MB
registry.aliyuncs.com/google_containers/pause 3.2 80d28bedfe5d 9 months ago 683kB
# Final state: Ready
[root@master ~]# kubectl get node
NAME STATUS ROLES AGE VERSION
master Ready control-plane,master 14m v1.20.0
node-1 Ready <none> 3m25s v1.20.0
node-2 Ready <none> 3m5s v1.20.0
node-3 Ready <none> 3m3s v1.20.0
==