2、Kubernetes Installation
1、Cluster principles
Cluster types:
Master-slave:
- master-slave synchronization/replication; e.g. MySQL primary – MySQL replica
- the master manages the slaves
Sharded (data clusters):
- either every node holds the same data,
- or each node stores a portion of the data
1、Master-node architecture
Picture 11,000 machines.
The analogy: landlord + laborers
- the land = the machines
- the laborers = those who do the work on the machines
master: the master node (the landlord). There may be several (like a company with multiple controlling shareholders).
node: the worker node. There are many of them, and they do the real application work.
![master-node architecture](images/4.Kubernetes-基础入门/1619061915065.png)
How do master and worker interact?
The master decides what runs on each worker.
Workers only talk to the master (through its API); each node then does its own work.
Operators use a UI or the CLI against the k8s master and can see the state of the whole cluster, as the sketch below shows.
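Once a cluster is running (installation is covered below), a few read-only kubectl commands against the master are enough to see the whole cluster; a minimal sketch:
# every kubectl call goes through the master's api-server
kubectl cluster-info    # where the control plane (api-server) listens
kubectl get nodes       # every machine the master knows about
kubectl get pods -A     # everything currently running, cluster-wide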
2、How it works
![Components of Kubernetes](images/4.Kubernetes-基础入门/components-of-kubernetes.svg)
Master node (the Control Plane): controls the entire cluster.
Core components on the master:
- Controller Manager: the controller manager
- etcd: a key-value database (think Redis); the cluster's ledger
- scheduler: the scheduler
- api server: the API gateway (all control flows through the api-server)
Node (worker) components:
- kubelet (the foreman): must be installed on every node
- kube-proxy: the proxy; it proxies the network (a quick check below shows all of these on a live cluster)
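On a kubeadm-built cluster (like the one installed below), all of these control-plane components run as pods in the kube-system namespace, so you can see them directly:
kubectl get pods -n kube-system -o wide
# typically shows kube-apiserver-*, kube-controller-manager-*, kube-scheduler-*, etcd-*,
# plus one kube-proxy-* per node and the CoreDNS pods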
How does deploying an application work?
Operator: use the CLI to tell the master to deploy a tomcat application.
- Every call first goes to the master's gateway, the api-server. It is the master's single entry point (the C layer of the MVC pattern).
- The api-server receives the request and hands it to the controller-manager.
- The controller-manager handles the deployment request.
- The controller-manager generates a deployment record (e.g. tomcat --image tomcat6 --port 8080); it does not actually deploy anything itself.
- The deployment record is stored in etcd.
- The scheduler picks up the pending deployment from etcd and starts scheduling: it works out which node fits best.
- The scheduler writes its scheduling decision back to etcd.
- The kubelet on every node stays in constant contact with the master (it keeps polling the api-server for the latest data), so every node's kubelet learns about the new assignment from the master.
- Suppose the kubelet on node2 finds the command is addressed to it: it has to deploy.
- That kubelet runs the application on its own machine, keeps reporting the application's status back to the master, and the pod gets an IP assigned.
- Nodes and the master always communicate through the master's api-server.
- The kube-proxy on every machine knows the whole cluster network. Whenever a node talks to others or others talk to the node, its kube-proxy computes the route and forwards the traffic. (The sketch below shows this hand-off in a pod's event log.)
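Once a pod exists you can watch exactly this hand-off in its event log; a sketch (the pod name is hypothetical; use the one kubectl get pods prints for you):
kubectl describe pod tomcat-6f8f9d6c4b-xxxxx
# the Events section mirrors the steps above:
#   Scheduled        default-scheduler assigned the pod to a node
#   Pulling/Pulled   the kubelet on that node pulled the image
#   Created/Started  the kubelet ran the container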
The next figure shows the same flow as above; walk through it once more.
![deployment flow](images/4.Kubernetes-基础入门/1619075196642.png)
No matter which machine you access, you reach the real application (that is what a Service provides).
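For instance (a sketch, assuming the tomcat deployment created earlier in this section): expose it as a NodePort Service, and any node's IP reaches it, because kube-proxy forwards the traffic.
kubectl expose deployment tomcat --port=8080 --type=NodePort
kubectl get svc tomcat     # note the mapped node port, e.g. 8080:3xxxx/TCP
# curl <any-node-ip>:<node-port> now reaches tomcat, whichever node actually runs the pod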
3、The pieces, one by one
1、Master node
![master node components](images/4.Kubernetes-基础入门/1619062152511.png)
Quick overview:
The master also runs kubelet and kube-proxy (it is a machine in the cluster too).
Front-end access (UI/CLI) goes to:
kube-apiserver: the single entry point for every request
scheduler: decides which node each pod should run on
controller manager: runs the control loops that drive the cluster toward the desired state
etcd: the cluster's key-value store
kubelet + kube-proxy: required on every node, plus docker (the container runtime environment)
2、Worker node
![worker node components](images/4.Kubernetes-基础入门/1619062201206.png)
Quick overview:
- Pod:
  - docker run starts a container; the container is Docker's basic unit, and one application is one container
  - kubelet run starts an application called a Pod; the Pod is k8s's basic unit
  - a Pod is a re-packaging of containers
  - analogy: a stable facade over a changing implementation, like slf4j (the facade, never changes) over log4j (the concrete class)
  - application == Pod == wrapper around docker container(s)
  - one container often cannot represent a full application; a blog, say, needs php + mysql working together
  - so a Pod may contain multiple containers; one Pod represents one basic application (a runnable sketch follows this list)
  - like an iPod (watch movies, listen to music, play games): one basic product, an atom;
  - a Pod (music container, movie container): one basic product, atomic
- Kubelet: the foreman; talks to the master's api-server and starts/stops applications on its own machine; on the master it acts as the master's assistant. On every machine, the kubelet is what really does the work.
- Kube-proxy: the per-node network proxy; it programs the rules that make cluster traffic flow.
Other: every node also runs the container runtime (docker).
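A minimal sketch of "one Pod, multiple containers", mirroring the blog example above (the pod name blog and both images are illustrative only):
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: blog              # hypothetical name
spec:
  containers:
  - name: web
    image: php:7.4-apache
  - name: db
    image: mysql:5.7
    env:
    - name: MYSQL_ROOT_PASSWORD
      value: "example"    # illustrative only; use a Secret for real deployments
EOF
kubectl get pod blog      # READY shows 2/2 once both containers are up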
2、How the components interact
![component interaction flow](images/4.Kubernetes-基础入门/1619076211983.png)
Want k8s to deploy a tomcat?
0、At boot, the kubelet on every node, plus the scheduler and controller-manager on the master, all watch the master's api-server for event changes, forever (an endless for(;;) loop).
1、The operator uses the CLI tool kubectl: kubectl create deploy tomcat --image=tomcat8 (tell the master to deploy a tomcat application from the tomcat8 image).
2、kubectl sends the request to the api-server; the api-server saves this creation request into etcd.
3、etcd reports an event to the api-server: someone just stored something in me (deploy Tomcat [deploy]).
4、The controller-manager, watching the api-server, sees the (deploy Tomcat [deploy]) event.
5、The controller-manager processes the (deploy Tomcat [deploy]) event and generates the Pod's deployment info [pod info].
6、The controller-manager hands the [pod info] to the api-server, which again saves it to etcd.
7、etcd reports the [pod info] event to the api-server.
8、The scheduler watches specifically for [pod info], takes its content, computes, and decides which node should run this Pod [pod info after scheduling (node: node-02)].
9、The scheduler hands [pod info after scheduling (node: node-02)] to the api-server to save into etcd.
10、etcd reports the [pod info after scheduling (node: node-02)] event to the api-server.
11、The kubelets watch specifically for [pod info after scheduling] events, so every node's kubelet receives [pod info after scheduling (node: node-02)] from the api-server.
12、Each kubelet checks whether the event is its business; the kubelet on node-02 finds that it is.
13、node-02's kubelet starts this pod and reports everything about the started pod back to the master.
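The whole chain can be watched from the CLI; a sketch, assuming a working cluster (using a real image tag such as tomcat:8):
kubectl create deploy tomcat --image=tomcat:8      # step 1 above
kubectl get pods -o wide -w                        # watch the pod appear, get a node assigned, then start
kubectl get events --sort-by=.metadata.creationTimestamp   # the api-server events behind steps 2-13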
3、Installation
1、Overview
Installation methods
- binary install (recommended for production)
- MiniKube…
- kubeadm bootstrap (officially recommended; it is GA)
Rough flow
- prepare N servers, connected over a private network
- install Docker as the container environment [note: k8s has since dropped dockershim]
- install Kubernetes
- install the core packages on all three machines: kubeadm (the cluster bootstrap tool), kubelet, and kubectl (the operator's command line)
- kubelet can then create the core components themselves (api-server, …) as containers [the official core components ship as images]
- let kubeadm bootstrap the cluster
2、Step by step
1、Prepare the machines
- provision three machines on a shared private network, with public IPs configured; CentOS 7.8/7.9; 2 vCPU / 4 GB each is enough for the lab
- do not use localhost as any machine's hostname; use names like k8s-01, k8s-02, k8s-03 [no underscores, dots, or uppercase letters] (this can also be done in a later step)
2、Prerequisite environment (run on every machine)
1、Base environment
#########################################################################
# disable the firewall; on cloud servers, open the required ports in the security group instead:
# https://kubernetes.io/zh/docs/setup/production-environment/tools/kubeadm/install-kubeadm/#check-required-ports
systemctl stop firewalld
systemctl disable firewalld
# set the hostname
hostnamectl set-hostname k8s-01
# verify the change
hostnamectl status
# add a hosts entry for the hostname
echo "127.0.0.1 $(hostname)" >> /etc/hosts
# disable selinux
sed -i 's/enforcing/disabled/' /etc/selinux/config
setenforce 0
# disable swap
swapoff -a
sed -ri 's/.*swap.*/#&/' /etc/fstab
printf "################## configuring ip forwarding ################## \n"
cat <<EOF | sudo tee /etc/modules-load.d/k8s.conf
br_netfilter
EOF
echo 'net.ipv4.ip_forward = 1' >> /etc/sysctl.d/k8s.conf
## required: bridge ipv6 traffic to iptables
echo 'net.bridge.bridge-nf-call-ip6tables = 1' >> /etc/sysctl.d/k8s.conf
## required: bridge ipv4 traffic to iptables
echo 'net.bridge.bridge-nf-call-iptables = 1' >> /etc/sysctl.d/k8s.conf
echo "net.ipv6.conf.all.disable_ipv6 = 1" >> /etc/sysctl.d/k8s.conf
echo "net.ipv6.conf.default.disable_ipv6 = 1" >> /etc/sysctl.d/k8s.conf
echo "net.ipv6.conf.lo.disable_ipv6 = 1" >> /etc/sysctl.d/k8s.conf
echo "net.ipv6.conf.all.forwarding = 1" >> /etc/sysctl.d/k8s.conf
modprobe br_netfilter
sudo sysctl --system
printf "################## configuring ipvs ################## \n"
cat <<EOF | sudo tee /etc/sysconfig/modules/ipvs.modules
#!/bin/bash
modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
modprobe -- nf_conntrack_ipv4
EOF
chmod 755 /etc/sysconfig/modules/ipvs.modules
sh /etc/sysconfig/modules/ipvs.modules
printf "################## installing ipset/ipvsadm ################## \n"
yum install -y ipset ipvsadm
#################################################################
2、Docker environment
sudo yum remove docker*
sudo yum install -y yum-utils
sudo yum-config-manager --add-repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
yum install -y docker-ce-19.03.9 docker-ce-cli-19.03.9 containerd.io
systemctl enable docker
systemctl start docker
sudo mkdir -p /etc/docker
sudo tee /etc/docker/daemon.json <<-'EOF'
{
"registry-mirrors": ["https://82m9ar63.mirror.aliyuncs.com"]
}
EOF
sudo systemctl daemon-reload
sudo systemctl restart docker
3、Install the k8s core packages (run on all machines)
# configure the Kubernetes yum repo
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=http://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=http://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg
http://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
# remove any old versions
yum remove -y kubelet kubeadm kubectl
# list the installable versions
yum list kubelet --showduplicates | sort -r
# install pinned versions of kubelet, kubeadm, kubectl
yum install -y kubelet-1.21.0 kubeadm-1.21.0 kubectl-1.21.0
# enable kubelet at boot and start it
systemctl enable kubelet && systemctl start kubelet
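# note: at this point kubelet restarts in a crash loop every few seconds; that is expected,
# since it is waiting for instructions from kubeadm init / kubeadm join.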
4、Initialize the master node (run on the master only)
############ download the core images; `kubeadm config images list` shows which ones are needed ###########
#### save the following as images.sh -- needed on all nodes
#!/bin/bash
images=(
kube-apiserver:v1.21.0
kube-proxy:v1.21.0
kube-controller-manager:v1.21.0
kube-scheduler:v1.21.0
coredns:v1.8.0
etcd:3.4.13-0
pause:3.4.1
)
for imageName in ${images[@]} ; do
docker pull registry.cn-hangzhou.aliyuncs.com/xue_k8s_images/$imageName
done
##### end of script
chmod +x images.sh && ./images.sh
# registry.cn-hangzhou.aliyuncs.com/xue_k8s_images/coredns:v1.8.0
## note: the coredns image for k8s 1.21.0 is special; when pulled from this Aliyun mirror it must be re-tagged
docker tag registry.cn-hangzhou.aliyuncs.com/xue_k8s_images/coredns:v1.8.0 registry.cn-hangzhou.aliyuncs.com/xue_k8s_images/coredns/coredns:v1.8.0
######## kubeadm init: one master ########################
######## kubeadm join: all other workers ########################
### replace 10.170.12.22 with your own ip -- the private one
kubeadm init \
--apiserver-advertise-address=10.170.12.22 \
--image-repository registry.cn-hangzhou.aliyuncs.com/xue_k8s_images \
--kubernetes-version v1.21.0 \
--service-cidr=10.96.0.0/16 \
--pod-network-cidr=192.168.0.0/16
## note: pod-network-cidr vs service-cidr
# CIDR = Classless Inter-Domain Routing
# each defines a reachable network range: the pod subnet, the service (load-balancing) subnet, and the host's own subnet must not overlap
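# worked example with the values above, assuming the hosts sit in, say, 10.170.0.0/16:
#   services get 10.96.0.0/16 and pods get 192.168.0.0/16: three disjoint ranges.
# if your machines themselves sat on 192.168.x.x, you would pick a different pod CIDR,
#   e.g. --pod-network-cidr=172.16.0.0/16 (illustrative value).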
###### follow the printed instructions from here ######
## step 1 after init: copy the kubeconfig into place
To start using your cluster, you need to run the following as a regular user:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
## alternatively, export the environment variable
Alternatively, if you are the root user, you can run:
export KUBECONFIG=/etc/kubernetes/admin.conf
### deploy a pod network
You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/
############## for example, install calico #####################
kubectl apply -f https://docs.projectcalico.org/manifests/calico.yaml
### sanity checks
kubectl get pod -A ## list every pod deployed in the cluster
kubectl get nodes ## check the state of every machine in the cluster
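## nodes usually report NotReady until the calico pods are Running; watch progress with:
kubectl get pods -n kube-system -w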
Then you can join any number of worker nodes by running the following on each as root:
kubeadm join 172.24.80.222:6443 --token nz9azl.9bl27pyr4exy2wz4 \
--discovery-token-ca-cert-hash sha256:4bdc81a83b80f6bdd30bb56225f9013006a45ed423f131ac256ffe16bae73a20
5、Initialize the worker nodes (run on each worker)
## just run the join command that the master printed
kubeadm join 172.24.80.222:6443 --token nz9azl.9bl27pyr4exy2wz4 \
--discovery-token-ca-cert-hash sha256:4bdc81a83b80f6bdd30bb56225f9013006a45ed423f131ac256ffe16bae73a20
## if the token has expired, generate a new join command
kubeadm token create --print-join-command
kubeadm token create --ttl 0 --print-join-command
kubeadm join --token y1eyw5.ylg568kvohfdsfco --discovery-token-ca-cert-hash sha256:6c35e4f73f72afd89bf1c8c303ee55677d2cdb1342d67bb23c852aba2efc7c73
![worker join output](images/4.Kubernetes-基础入门/1619100578888.png)
6、Verify the cluster
# list all nodes
kubectl get nodes
# label a node
## everything in k8s is an object: node = machine, Pod = application container
### add a label
kubectl label node k8s-02 node-role.kubernetes.io/worker=''
### remove a label (note the trailing minus)
kubectl label node k8s-02 node-role.kubernetes.io/worker-
## in a k8s cluster, a rebooted machine rejoins the cluster automatically; a rebooted master rejoins the control plane automatically
7、Switch to ipvs mode
For cluster-wide reachability, kube-proxy defaults to iptables mode, whose performance drops as the cluster grows (kube-proxy keeps syncing iptables rules across the cluster).
#1、check which mode the running kube-proxy uses (the pod name comes from kubectl get pod -A)
kubectl logs -n kube-system kube-proxy-28xv4
#2、edit the kube-proxy config and set mode to ipvs; the default iptables gets slow once the cluster is large
kubectl edit cm kube-proxy -n kube-system
Change it as follows:
ipvs:
excludeCIDRs: null
minSyncPeriod: 0s
scheduler: ""
strictARP: false
syncPeriod: 30s
kind: KubeProxyConfiguration
metricsBindAddress: 127.0.0.1:10249
mode: "ipvs"
### after changing the kube-proxy config, kill the old kube-proxy pods so the change takes effect
kubectl get pod -A|grep kube-proxy
kubectl delete pod kube-proxy-pqgnt -n kube-system
### the recreated kube-proxy pods come back up with the new mode
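A sketch to confirm the switch took effect (pod name illustrative; use one of your own):
kubectl logs -n kube-system kube-proxy-xxxxx | grep -i proxier   # should mention the ipvs proxier
ipvsadm -Ln    # the ipvs rule table, populated once ipvs mode is active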
8、Let other clients run kubectl against the cluster
#1、on the master, read the admin kubeconfig
cat /etc/kubernetes/admin.conf
#2、on the other machine, save the same content to
vi ~/.kube/config
#3、then test kubectl from that machine (a copy-free sketch follows)
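A minimal sketch of the same thing without copy-pasting by hand (hostname k8s-01 assumed to be the master):
mkdir -p ~/.kube                                           # on the target machine
scp root@k8s-01:/etc/kubernetes/admin.conf ~/.kube/config  # pull the config from the master
kubectl get nodes                                          # should now work from this machine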
9、Fast-track installation
1、Set each of the three machines' hostname (never localhost). On cloud providers, make sure the three machines can reach each other.
- QingCloud requires enabling intra-group mutual trust
- Alibaba Cloud machines can reach each other by default
- for local VMs, disable the firewall on every machine
# set the hostname; replace k8s-01 with this machine's own name
hostnamectl set-hostname k8s-01
# add a hosts entry for the hostname
echo "127.0.0.1 $(hostname)" >> /etc/hosts
2、Run the following script on every machine
# on every machine: vi k8s.sh
# enter insert mode (press i) and paste one of the scripts below
# make it executable: chmod +x k8s.sh
# run it: ./k8s.sh
Script for machines without Docker installed
#!/bin/sh
####################### environment setup #####################################
printf "################## configuring base environment ################## \n"
printf "################## disabling selinux ################## \n"
sed -i 's/enforcing/disabled/' /etc/selinux/config
setenforce 0
printf "################## disabling swap ################## \n"
swapoff -a
sed -ri 's/.*swap.*/#&/' /etc/fstab
printf "################## configuring ip forwarding ################## \n"
cat <<EOF | sudo tee /etc/modules-load.d/k8s.conf
br_netfilter
EOF
echo 'net.ipv4.ip_forward = 1' >> /etc/sysctl.d/k8s.conf
## required: bridge ipv6 traffic to iptables
echo 'net.bridge.bridge-nf-call-ip6tables = 1' >> /etc/sysctl.d/k8s.conf
## required: bridge ipv4 traffic to iptables
echo 'net.bridge.bridge-nf-call-iptables = 1' >> /etc/sysctl.d/k8s.conf
echo "net.ipv6.conf.all.disable_ipv6 = 1" >> /etc/sysctl.d/k8s.conf
echo "net.ipv6.conf.default.disable_ipv6 = 1" >> /etc/sysctl.d/k8s.conf
echo "net.ipv6.conf.lo.disable_ipv6 = 1" >> /etc/sysctl.d/k8s.conf
echo "net.ipv6.conf.all.forwarding = 1" >> /etc/sysctl.d/k8s.conf
modprobe br_netfilter
sudo sysctl --system
printf "##################配置ipvs################## \n"
cat <<EOF | sudo tee /etc/sysconfig/modules/ipvs.modules
#!/bin/bash
modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
modprobe -- nf_conntrack_ipv4
EOF
chmod 755 /etc/sysconfig/modules/ipvs.modules
sh /etc/sysconfig/modules/ipvs.modules
printf "##################安装ipvsadm相关软件################## \n"
yum install -y ipset ipvsadm
printf "##################安装docker容器环境################## \n"
sudo yum remove docker*
sudo yum install -y yum-utils
sudo yum-config-manager --add-repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
yum install -y docker-ce-19.03.9 docker-ce-cli-19.03.9 containerd.io
systemctl enable docker
systemctl start docker
sudo mkdir -p /etc/docker
### put your own Aliyun registry mirror below (JSON does not allow comments inside the file itself)
sudo tee /etc/docker/daemon.json <<-'EOF'
{
  "registry-mirrors": ["https://82m9ar63.mirror.aliyuncs.com"]
}
EOF
sudo systemctl daemon-reload
sudo systemctl restart docker
printf "##################安装k8s核心包 kubeadm kubelet kubectl################## \n"
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=http://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=http://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg
http://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
### pin the k8s version
yum install -y kubelet-1.21.0 kubeadm-1.21.0 kubectl-1.21.0
### enable kubelet and start it right away
systemctl enable kubelet
systemctl start kubelet
printf "##################下载api-server等核心镜像################## \n"
sudo tee ./images.sh <<-'EOF'
#!/bin/bash
images=(
kube-apiserver:v1.21.0
kube-proxy:v1.21.0
kube-controller-manager:v1.21.0
kube-scheduler:v1.21.0
coredns:v1.8.0
etcd:3.4.13-0
pause:3.4.1
)
for imageName in ${images[@]} ; do
docker pull registry.cn-hangzhou.aliyuncs.com/xue_k8s_images/$imageName
done
## after all pulls finish, re-tag the coredns image
docker tag registry.cn-hangzhou.aliyuncs.com/xue_k8s_images/coredns:v1.8.0 registry.cn-hangzhou.aliyuncs.com/xue_k8s_images/coredns/coredns:v1.8.0
EOF
chmod +x ./images.sh && ./images.sh
### the k8s base environment is now fully set up
Script for machines that already have Docker
#!/bin/sh
####################### environment setup #####################################
printf "################## configuring base environment ################## \n"
printf "################## disabling selinux ################## \n"
sed -i 's/enforcing/disabled/' /etc/selinux/config
setenforce 0
printf "##################关闭swap################## \n"
swapoff -a
sed -ri 's/.*swap.*/#&/' /etc/fstab
printf "##################配置路由转发################## \n"
cat <<EOF | sudo tee /etc/modules-load.d/k8s.conf
br_netfilter
EOF
echo 'net.ipv4.ip_forward = 1' >> /etc/sysctl.d/k8s.conf
## required: bridge ipv6 traffic to iptables
echo 'net.bridge.bridge-nf-call-ip6tables = 1' >> /etc/sysctl.d/k8s.conf
## required: bridge ipv4 traffic to iptables
echo 'net.bridge.bridge-nf-call-iptables = 1' >> /etc/sysctl.d/k8s.conf
echo "net.ipv6.conf.all.disable_ipv6 = 1" >> /etc/sysctl.d/k8s.conf
echo "net.ipv6.conf.default.disable_ipv6 = 1" >> /etc/sysctl.d/k8s.conf
echo "net.ipv6.conf.lo.disable_ipv6 = 1" >> /etc/sysctl.d/k8s.conf
echo "net.ipv6.conf.all.forwarding = 1" >> /etc/sysctl.d/k8s.conf
modprobe br_netfilter
sudo sysctl --system
printf "##################配置ipvs################## \n"
cat <<EOF | sudo tee /etc/sysconfig/modules/ipvs.modules
#!/bin/bash
modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
modprobe -- nf_conntrack_ipv4
EOF
chmod 755 /etc/sysconfig/modules/ipvs.modules
sh /etc/sysconfig/modules/ipvs.modules
printf "##################安装ipvsadm相关软件################## \n"
yum install -y ipset ipvsadm
printf "##################安装k8s核心包 kubeadm kubelet kubectl################## \n"
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=http://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=http://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg
http://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
### pin the k8s version
yum install -y kubelet-1.21.0 kubeadm-1.21.0 kubectl-1.21.0
### enable kubelet and start it right away
systemctl enable kubelet
systemctl start kubelet
printf "##################下载api-server等核心镜像################## \n"
sudo tee ./images.sh <<-'EOF'
#!/bin/bash
images=(
kube-apiserver:v1.21.0
kube-proxy:v1.21.0
kube-controller-manager:v1.21.0
kube-scheduler:v1.21.0
coredns:v1.8.0
etcd:3.4.13-0
pause:3.4.1
)
for imageName in ${images[@]} ; do
docker pull registry.cn-hangzhou.aliyuncs.com/xue_k8s_images/$imageName
done
## after all pulls finish, re-tag the coredns image
docker tag registry.cn-hangzhou.aliyuncs.com/xue_k8s_images/coredns:v1.8.0 registry.cn-hangzhou.aliyuncs.com/xue_k8s_images/coredns/coredns:v1.8.0
EOF
chmod +x ./images.sh && ./images.sh
### the k8s base environment is now fully set up
3、Bootstrap the cluster with kubeadm (continue as in "Initialize the master node" above)
#### --apiserver-advertise-address must be your own master machine's ip address
#### i.e. the private ip your VM or cloud provider assigned (e.g. a 10.x or 192.168.x address)
#### run the following on the master node only
#### replace 10.140.120.18 with your own ip; check the address of eth0 with: ip a
kubeadm init \
--apiserver-advertise-address=10.140.120.18 \
--image-repository registry.cn-hangzhou.aliyuncs.com/xue_k8s_images \
--kubernetes-version v1.21.0 \
--service-cidr=10.96.0.0/16 \
--pod-network-cidr=192.168.0.0/16
4、After init finishes on the master, follow the console output
## step 1
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
## step 2
export KUBECONFIG=/etc/kubernetes/admin.conf
## step 3: deploy the network plugin
kubectl apply -f https://docs.projectcalico.org/manifests/calico.yaml
## step 4: run the kubeadm join printed on the console on each of the other (worker) nodes
kubeadm join 10.140.120.18:6443 --token zu037w.5wm5o0zj1cobfmk2 \
--discovery-token-ca-cert-hash sha256:7f774e0b497de7140afafd8415af6738fbe3493f57ac5d4bd89b612f1a59465b
## if that command is lost, regenerate it with
kubeadm token create --print-join-command
5、Verify the cluster
# wait a moment, then run on the master
kubectl get nodes
![kubectl get nodes output](images/4.Kubernetes-基础入门/image-20210715193920604.png)
6、Set kube-proxy to ipvs mode
## edit kube-proxy's default config
kubectl edit cm kube-proxy -n kube-system
## set mode: "ipvs"
## then restart kube-proxy
### find all the kube-proxy pods
kubectl get pod -n kube-system |grep kube-proxy
### deleting the old pods is enough; they are recreated with the new config
kubectl delete pod <the kube-proxy pod names you found above> -n kube-system
kubectl delete pod kube-proxy-52tvq kube-proxy-7lrhp kube-proxy-l6f7q -n kube-system
![editing the kube-proxy configmap](images/4.Kubernetes-基础入门/1619265364639.png)
![kube-proxy pods recreated](images/4.Kubernetes-基础入门/1619265568111.png)
10、Sample master init log
[root@i-iqrlgkwc ~]# kubeadm init \
> --apiserver-advertise-address=10.170.11.8 \
> --image-repository registry.cn-hangzhou.aliyuncs.com/xue_k8s_images \
> --kubernetes-version v1.21.0 \
> --service-cidr=10.96.0.0/16 \
> --pod-network-cidr=192.168.0.0/16
[init] Using Kubernetes version: v1.21.0
[preflight] Running pre-flight checks
[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [k8s-01 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 10.170.11.8]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [k8s-01 localhost] and IPs [10.170.11.8 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [k8s-01 localhost] and IPs [10.170.11.8 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.
[apiclient] All control plane components are healthy after 66.504822 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.21" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node k8s-01 as control-plane by adding the labels: [node-role.kubernetes.io/master(deprecated) node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
[mark-control-plane] Marking the node k8s-01 as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: os234q.tqr5fxmvapgu0b71
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy
Your Kubernetes control-plane has initialized successfully!
To start using your cluster, you need to run the following as a regular user:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
Alternatively, if you are the root user, you can run:
export KUBECONFIG=/etc/kubernetes/admin.conf
You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/
Then you can join any number of worker nodes by running the following on each as root:
kubeadm join 10.170.11.8:6443 --token os234q.tqr5fxmvapgu0b71 \
--discovery-token-ca-cert-hash sha256:68251032e1f77a7356e784bdeb8e1f7f728cb0fb31c258dc7b44befc9f516f85