Quick Kubernetes Deployment with kubeadm

I. Deploying kubeadm 1.20


master (2C/4G, more than 2 CPU cores required)     192.168.80.10        docker, kubeadm, kubelet, kubectl, flannel
node01 (2C/2G)                                     192.168.80.11        docker, kubeadm, kubelet, kubectl, flannel
node02 (2C/2G)                                     192.168.80.12        docker, kubeadm, kubelet, kubectl, flannel
Harbor node (hub.kgc.com)                          192.168.80.13        docker, docker-compose, harbor-offline-v1.2.2

1. Install Docker and kubeadm on all nodes
2. Deploy the Kubernetes master
3. Deploy the container network plugin
4. Deploy the Kubernetes nodes and join them to the cluster
5. Deploy the Dashboard web UI for a visual view of Kubernetes resources
6. Deploy a Harbor private registry to store image resources


------------------------------ Environment preparation ------------------------------
//On all nodes: disable firewall rules, disable selinux, disable swap
systemctl stop firewalld
systemctl disable firewalld
setenforce 0
sed -i 's/enforcing/disabled/' /etc/selinux/config
iptables -F && iptables -t nat -F && iptables -t mangle -F && iptables -X
swapoff -a                        #swap must be disabled
sed -ri 's/.*swap.*/#&/' /etc/fstab        #permanently disable swap; in sed, & stands for the previously matched text
#Load the ip_vs modules
for i in $(ls /usr/lib/modules/$(uname -r)/kernel/net/netfilter/ipvs|grep -o "^[^.]*");do echo $i; /sbin/modinfo -F filename $i >/dev/null 2>&1 && /sbin/modprobe $i;done
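#To confirm the modules actually loaded, a quick check such as the following can be used:
lsmod | grep -E 'ip_vs|nf_conntrack'        #the ip_vs and related modules should appear in the list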

//Set the hostnames
hostnamectl set-hostname master01
hostnamectl set-hostname node01
hostnamectl set-hostname node02

//Edit the hosts file on all nodes
vim /etc/hosts
192.168.80.10 master01
192.168.80.11 node01
192.168.80.12 node02

//Tune kernel parameters
cat > /etc/sysctl.d/kubernetes.conf << EOF
#Enable bridge mode so that bridged traffic is passed to the iptables chains
net.bridge.bridge-nf-call-ip6tables=1
net.bridge.bridge-nf-call-iptables=1
#Disable the IPv6 protocol
net.ipv6.conf.all.disable_ipv6=1
net.ipv4.ip_forward=1
EOF

//Apply the parameters
sysctl --system  
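#To verify the settings took effect (this assumes the br_netfilter module is already loaded, e.g. by Docker), the values can be queried directly:
sysctl net.bridge.bridge-nf-call-iptables net.ipv4.ip_forward        #both should report 1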


-------------------- Install Docker on all nodes --------------------
yum install -y yum-utils device-mapper-persistent-data lvm2 
yum-config-manager --add-repo https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo 
yum install -y docker-ce docker-ce-cli containerd.io

mkdir /etc/docker
cat > /etc/docker/daemon.json <<EOF
{
  "registry-mirrors": ["https://6ijb8ubo.mirror.aliyuncs.com"],
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "500m", "max-file": "3"
  }
}
EOF
#Use the systemd-managed cgroup driver for resource control, because systemd is simpler, more mature and more stable than cgroupfs for limiting CPU, memory and other resources.
#Container logs use the json-file driver, capped here at 500 MB per file with up to 3 files kept; kubelet symlinks pod container logs under /var/log/containers, which makes collection by log systems such as ELK easier.

systemctl daemon-reload
systemctl restart docker.service
systemctl enable docker.service 

docker info | grep "Cgroup Driver"
Cgroup Driver: systemd


-------------------- Install kubeadm, kubelet and kubectl on all nodes --------------------
//Define the Kubernetes repo
cat > /etc/yum.repos.d/kubernetes.repo << EOF
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF

yum install -y kubelet-1.20.15 kubeadm-1.20.15 kubectl-1.20.15

//Enable kubelet to start on boot
systemctl enable kubelet.service
#After a kubeadm install the K8S components run as Pods, i.e. as containers underneath, so kubelet must be enabled to start on boot


-------------------- Deploy the K8S cluster -------------------- 
//List the images required for initialization
kubeadm config images list --kubernetes-version 1.20.15

//On the master node, upload the v1.20.15.zip archive to the /opt directory
unzip v1.20.15.zip -d /opt/k8s
cd /opt/k8s/
for i in $(ls *.tar); do docker load -i $i; done

//Copy the images and the load script to the node nodes, then run the script on each node to load the image files (see the load loop after the scp commands)
scp -r /opt/k8s root@node01:/opt
scp -r /opt/k8s root@node02:/opt
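#On node01 and node02, the same load loop as above can then be run:
cd /opt/k8s
for i in $(ls *.tar); do docker load -i $i; done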

//Initialize kubeadm
Method 1:
kubeadm config print init-defaults > /opt/kubeadm-config.yaml

cd /opt/
vim kubeadm-config.yaml
......
11 localAPIEndpoint:
12   advertiseAddress: 192.168.80.10        #specify the master node's IP address
13   bindPort: 6443
......
32 imageRepository: registry.cn-hangzhou.aliyuncs.com/google_containers          #specify the registry to pull images from; the default is k8s.gcr.io
33 kind: ClusterConfiguration
34 kubernetesVersion: v1.20.15                #specify the Kubernetes version
35 networking:
36   dnsDomain: cluster.local
37   podSubnet: "10.244.0.0/16"                #specify the pod subnet; 10.244.0.0/16 matches flannel's default subnet
38   serviceSubnet: 10.96.0.0/16            #specify the service subnet
39 scheduler: {}
#Append the following at the end of the file
--- 
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
mode: ipvs                                    #change the default kube-proxy mode to ipvs


//Pull the images online
kubeadm config images pull --config /opt/kubeadm-config.yaml

//Initialize the master
kubeadm init --config=/opt/kubeadm-config.yaml --upload-certs | tee kubeadm-init.log
#--upload-certs uploads the certificates so they can be distributed automatically when additional nodes join later
#tee kubeadm-init.log saves the output to a log file

//View the kubeadm-init log
less kubeadm-init.log

//Kubernetes configuration directory
ls /etc/kubernetes/

//Directory holding the CA and other certificates and keys
ls /etc/kubernetes/pki        


Method 2:
kubeadm init \
--apiserver-advertise-address=192.168.80.10 \
--image-repository registry.aliyuncs.com/google_containers \
--kubernetes-version=v1.20.15 \
--service-cidr=10.96.0.0/16 \
--pod-network-cidr=10.244.0.0/16 \
--token-ttl=0
--------------------------------------------------------------------------------------------
The cluster is initialized with kubeadm init; you can either pass the parameters directly or initialize from a configuration file.
Optional parameters:
--apiserver-advertise-address: the IP address the apiserver advertises to the other components; usually the master node's IP used for intra-cluster communication. 0.0.0.0 means all available addresses on the node
--apiserver-bind-port: the port the apiserver listens on, default 6443
--cert-dir: directory for the SSL certificates used for communication, default /etc/kubernetes/pki
--control-plane-endpoint: shared endpoint for the control plane; can be a load-balancer IP address or a DNS name; required for a highly available cluster
--image-repository: the registry to pull images from, default k8s.gcr.io
--kubernetes-version: the Kubernetes version to deploy
--pod-network-cidr: the pod subnet; must match the pod network plugin. Flannel defaults to 10.244.0.0/16, Calico defaults to 192.168.0.0/16
--service-cidr: the service subnet
--service-dns-domain: suffix of the service FQDNs, default cluster.local
--token-ttl: the default token lifetime is 24 hours; add --token-ttl=0 if you do not want it to expire
---------------------------------------------------------------------------------------------

After initializing with method 2, the kube-proxy configmap needs to be edited to enable ipvs
kubectl edit cm kube-proxy -n=kube-system
set mode: ipvs
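#The configmap change only applies to newly created kube-proxy pods, so (a sketch, assuming the default kubeadm label k8s-app=kube-proxy) the existing pods can be recreated:
kubectl -n kube-system delete pod -l k8s-app=kube-proxy        #the DaemonSet recreates them with the new mode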

Output:
......
Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.80.10:6443 --token rc0kfs.a1sfe3gl4dvopck5 \
    --discovery-token-ca-cert-hash sha256:864fe553c812df2af262b406b707db68b0fd450dc08b34efb73dd5a4771d37a2


//Set up kubectl
kubectl has to be authenticated and authorized by the API server before it can perform management operations. A kubeadm-deployed cluster generates an admin-privileged kubeconfig for this, /etc/kubernetes/admin.conf, which kubectl loads from the default path "$HOME/.kube/config".

mkdir -p $HOME/.kube
cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
chown $(id -u):$(id -g) $HOME/.kube/config


//If kubectl get cs reports the cluster as unhealthy, edit the following two files
vim /etc/kubernetes/manifests/kube-scheduler.yaml 
vim /etc/kubernetes/manifests/kube-controller-manager.yaml
# Make the following changes
Change --bind-address=127.0.0.1 to --bind-address=192.168.80.10        #use the IP of the k8s control-plane node master01
Under the httpGet: fields, change host from 127.0.0.1 to 192.168.80.10 (two places)
#- --port=0                    # search for port=0 and comment out that line

systemctl restart kubelet


//Deploy the flannel network plugin on all nodes
Method 1:
//On all nodes, upload the flannel image flannel.tar and the CNI plugins cni-plugins-linux-amd64-v0.8.6.tgz to /opt; on the master node, also upload the kube-flannel.yml file
cd /opt
docker load < flannel.tar

mv /opt/cni /opt/cni_bak
mkdir -p /opt/cni/bin
tar zxvf cni-plugins-linux-amd64-v0.8.6.tgz -C /opt/cni/bin

//Create the flannel resources on the master node
kubectl apply -f kube-flannel.yml 


Method 2:
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
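#Either way, whether flannel came up can be checked from the master (the namespace depends on the manifest version, hence the cluster-wide listing):
kubectl get pods -A -o wide | grep flannel        #one flannel pod per node should be Running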


//On the node nodes, run the kubeadm join command to join the cluster
kubeadm join 192.168.80.10:6443 --token rc0kfs.a1sfe3gl4dvopck5 \
    --discovery-token-ca-cert-hash sha256:864fe553c812df2af262b406b707db68b0fd450dc08b34efb73dd5a4771d37a2


//Check node status on the master node
kubectl get nodes

kubectl get pods -n kube-system
NAME                             READY   STATUS    RESTARTS   AGE
coredns-bccdc95cf-c9w6l          1/1     Running   0          71m
coredns-bccdc95cf-nql5j          1/1     Running   0          71m
etcd-master                      1/1     Running   0          71m
kube-apiserver-master            1/1     Running   0          70m
kube-controller-manager-master   1/1     Running   0          70m
kube-flannel-ds-amd64-kfhwf      1/1     Running   0          2m53s
kube-flannel-ds-amd64-qkdfh      1/1     Running   0          46m
kube-flannel-ds-amd64-vffxv      1/1     Running   0          2m56s
kube-proxy-558p8                 1/1     Running   0          2m53s
kube-proxy-nwd7g                 1/1     Running   0          2m56s
kube-proxy-qpz8t                 1/1     Running   0          71m
kube-scheduler-master            1/1     Running   0          70m


//Test pod creation
kubectl create deployment nginx --image=nginx

kubectl get pods -o wide
NAME                     READY   STATUS    RESTARTS   AGE   IP           NODE     NOMINATED NODE   READINESS GATES
nginx-554b9c67f9-zr2xs   1/1     Running   0          14m   10.244.1.2   node01   <none>           <none>

//Expose a port to provide the service
kubectl expose deployment nginx --port=80 --type=NodePort

kubectl get svc
NAME         TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)        AGE
kubernetes   ClusterIP   10.96.0.1      <none>        443/TCP        25h
nginx        NodePort    10.96.15.132   <none>        80:32698/TCP   4s

//Test access
curl http://node01:32698

//Scale out to 3 replicas
kubectl scale deployment nginx --replicas=3
kubectl get pods -o wide
NAME                     READY   STATUS    RESTARTS   AGE   IP           NODE     NOMINATED NODE   READINESS GATES
nginx-554b9c67f9-9kh4s   1/1     Running   0          66s   10.244.1.3   node01   <none>           <none>
nginx-554b9c67f9-rv77q   1/1     Running   0          66s   10.244.2.2   node02   <none>           <none>
nginx-554b9c67f9-zr2xs   1/1     Running   0          17m   10.244.1.2   node01   <none>           <none>


------------------------------ Deploy the Dashboard ------------------------------
//On the master01 node
#Upload the recommended.yaml file to the /opt/k8s directory
cd /opt/k8s
vim recommended.yaml
#By default the Dashboard is only reachable from inside the cluster; change the Service to NodePort to expose it externally:
kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
spec:
  ports:
    - port: 443
      targetPort: 8443
      nodePort: 30001     #add
  type: NodePort          #add
  selector:
    k8s-app: kubernetes-dashboard
    
kubectl apply -f recommended.yaml

#Create a service account and bind it to the default cluster-admin cluster role
kubectl create serviceaccount dashboard-admin -n kube-system
kubectl create clusterrolebinding dashboard-admin --clusterrole=cluster-admin --serviceaccount=kube-system:dashboard-admin
kubectl describe secrets -n kube-system $(kubectl -n kube-system get secret | awk '/dashboard-admin/{print $1}')

#Use the token from the output to log in to the Dashboard
https://NodeIP:30001
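#If only the token itself is needed, it can also be extracted directly (a sketch relying on the token secret that 1.20 still auto-creates for the service account):
kubectl -n kube-system get secret $(kubectl -n kube-system get sa dashboard-admin -o jsonpath='{.secrets[0].name}') -o jsonpath='{.data.token}' | base64 -d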


-------------------- Install the Harbor private registry --------------------
//Set the hostname
hostnamectl set-hostname hub.kgc.com

//Add the hostname mapping on all nodes
echo '192.168.80.13 hub.kgc.com' >> /etc/hosts

//Install docker
yum install -y yum-utils device-mapper-persistent-data lvm2 
yum-config-manager --add-repo https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo 
yum install -y docker-ce docker-ce-cli containerd.io

mkdir /etc/docker
cat > /etc/docker/daemon.json <<EOF
{
  "registry-mirrors": ["https://6ijb8ubo.mirror.aliyuncs.com"],
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "500m", "max-file": "3"
  },
  "insecure-registries": ["https://hub.kgc.com"]
}
EOF

systemctl start docker
systemctl enable docker


//On all node nodes, edit the docker configuration file and add the private registry entry
cat > /etc/docker/daemon.json <<EOF
{
  "registry-mirrors": ["https://6ijb8ubo.mirror.aliyuncs.com"],
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "500m", "max-file": "3"
  },
  "insecure-registries": ["https://hub.kgc.com"]
}
EOF

systemctl daemon-reload
systemctl restart docker


//Install Harbor
//Upload harbor-offline-installer-v1.2.2.tgz and the docker-compose binary to the /opt directory
cd /opt
cp docker-compose /usr/local/bin/
chmod +x /usr/local/bin/docker-compose

tar zxvf harbor-offline-installer-v1.2.2.tgz
cd harbor/
vim harbor.cfg
5  hostname = hub.kgc.com
9  ui_url_protocol = https
24 ssl_cert = /data/cert/server.crt
25 ssl_cert_key = /data/cert/server.key
59 harbor_admin_password = Harbor12345


//Generate certificates
mkdir -p /data/cert
cd /data/cert
#Generate the private key
openssl genrsa -des3 -out server.key 2048
Enter the passphrase twice: 123456

#Generate the certificate signing request
openssl req -new -key server.key -out server.csr
Enter the private key passphrase: 123456
Country Name: CN
State or Province Name: BJ
Locality Name: BJ
Organization Name: KGC
Organizational Unit Name: KGC
Common Name (domain): hub.kgc.com
Email Address: admin@kgc.com
Press Enter for all remaining prompts
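#The same CSR can also be produced non-interactively (a sketch; the -subj string mirrors the answers above):
openssl req -new -key server.key -subj "/C=CN/ST=BJ/L=BJ/O=KGC/OU=KGC/CN=hub.kgc.com/emailAddress=admin@kgc.com" -out server.csr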

#Back up the private key
cp server.key server.key.org

#Strip the passphrase from the private key
openssl rsa -in server.key.org -out server.key
Enter the private key passphrase: 123456

#Sign the certificate
openssl x509 -req -days 1000 -in server.csr -signkey server.key -out server.crt

chmod +x /data/cert/*

cd /opt/harbor/
./install.sh

On the local machine, open https://hub.kgc.com in Firefox
Add Exception -> Confirm Security Exception
Username: admin
Password: Harbor12345

//Log in to harbor from one of the node nodes
docker login -u admin -p Harbor12345 https://hub.kgc.com

//Push an image
docker tag nginx:latest hub.kgc.com/library/nginx:v1
docker push hub.kgc.com/library/nginx:v1

//On the master node, delete the nginx resources created earlier
kubectl delete deployment nginx

kubectl create deployment nginx-deployment --image=hub.kgc.com/library/nginx:v1 --port=80 --replicas=3        #kubectl run no longer supports --replicas in 1.20, so create deployment is used instead

kubectl expose deployment nginx-deployment --port=30000 --target-port=80
kubectl get svc,pods
NAME                       TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)     AGE
service/kubernetes         ClusterIP   10.96.0.1       <none>        443/TCP     10m
service/nginx-deployment   ClusterIP   10.96.222.161   <none>        30000/TCP   3m15s

NAME                                    READY   STATUS    RESTARTS   AGE
pod/nginx-deployment-77bcbfbfdc-bv5bz   1/1     Running   0          16s
pod/nginx-deployment-77bcbfbfdc-fq8wr   1/1     Running   0          16s
pod/nginx-deployment-77bcbfbfdc-xrg45   1/1     Running   0          3m39s


yum install ipvsadm -y
ipvsadm -Ln

curl 10.96.222.161:30000


kubectl edit svc nginx-deployment
25   type: NodePort                        #change the Service type to NodePort

kubectl get svc
NAME                       TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)           AGE
service/kubernetes         ClusterIP   10.96.0.1       <none>        443/TCP           29m
service/nginx-deployment   NodePort    10.96.222.161   <none>        30000:32340/TCP   22m

Access from a browser:
192.168.80.10:32340
192.168.80.11:32340
192.168.80.12:32340


#Grant the cluster-admin role to the user system:anonymous
kubectl create clusterrolebinding cluster-system-anonymous --clusterrole=cluster-admin --user=system:anonymous


########### Kernel parameter optimization ##########
cat > /etc/sysctl.d/kubernetes.conf <<EOF
net.bridge.bridge-nf-call-iptables=1
net.bridge.bridge-nf-call-ip6tables=1
net.ipv4.ip_forward=1
net.ipv4.tcp_tw_recycle=0
#avoid using swap; it is only used when the system runs out of memory (OOM)
vm.swappiness=0
#do not check whether physical memory is sufficient
vm.overcommit_memory=1
#do not panic on OOM, let the OOM killer handle it
vm.panic_on_oom=0
fs.inotify.max_user_instances=8192
fs.inotify.max_user_watches=1048576
#maximum number of file handles
fs.file-max=52706963
#only supported on kernel 4.4 and later
fs.nr_open=52706963
net.ipv6.conf.all.disable_ipv6=1
net.netfilter.nf_conntrack_max=2310720
EOF

If initialization fails, run the following:
# kubeadm reset -f
# ipvsadm --clear 
# rm -rf ~/.kube
# then run the initialization again

II. K8S 1.20 - highly available cluster deployment

Notes:
master nodes require more than 2 CPU cores
●The newest release is not necessarily the best choice: compared with older releases the core features are stable, but newly added features and interfaces are relatively unstable
●Once you have learned the HA deployment for one version, other versions work much the same way
●Upgrade the hosts to CentOS 7.9 where possible
●Upgrade the kernel to a stable release such as 4.19+
●When choosing a k8s version, prefer patch releases above 5, e.g. 1.xx.5+ (these are generally the more stable releases)

------------------------------ Environment preparation ------------------------------
//On all nodes: disable firewall rules, disable selinux, disable swap
systemctl stop firewalld
systemctl disable firewalld
setenforce 0
sed -i 's/enforcing/disabled/' /etc/selinux/config
iptables -F && iptables -t nat -F && iptables -t mangle -F && iptables -X
swapoff -a
sed -ri 's/.*swap.*/#&/' /etc/fstab


//Set the hostnames
hostnamectl set-hostname master01
hostnamectl set-hostname master02
hostnamectl set-hostname master03
hostnamectl set-hostname node01
hostnamectl set-hostname node02


//Edit the hosts file on all nodes
vim /etc/hosts
192.168.80.10 master01
192.168.80.11 master02
192.168.80.12 master03
192.168.80.20 node01
192.168.80.30 node02


//Synchronize time on all nodes
yum -y install ntpdate
ln -sf /usr/share/zoneinfo/Asia/Shanghai /etc/localtime
echo 'Asia/Shanghai' >/etc/timezone
ntpdate time2.aliyun.com

systemctl enable --now crond

crontab -e
*/30 * * * * /usr/sbin/ntpdate time2.aliyun.com


//Raise the Linux resource limits on all nodes
vim /etc/security/limits.conf
* soft nofile 65536
* hard nofile 131072
* soft nproc 65535
* hard nproc 655350
* soft memlock unlimited
* hard memlock unlimited
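#After logging in again, the new limits can be sanity-checked with the shell built-ins:
ulimit -n        #open files
ulimit -u        #max user processes
ulimit -l        #max locked memory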


//Upgrade the kernel on all nodes
wget http://193.49.22.109/elrepo/kernel/el7/x86_64/RPMS/kernel-ml-devel-4.19.12-1.el7.elrepo.x86_64.rpm -O /opt/kernel-ml-devel-4.19.12-1.el7.elrepo.x86_64.rpm
wget http://193.49.22.109/elrepo/kernel/el7/x86_64/RPMS/kernel-ml-4.19.12-1.el7.elrepo.x86_64.rpm -O /opt/kernel-ml-4.19.12-1.el7.elrepo.x86_64.rpm

cd /opt/
yum localinstall -y kernel-ml*

#Set the new kernel as the default boot entry
grub2-set-default 0 && grub2-mkconfig -o /etc/grub2.cfg
grubby --args="user_namespace.enable=1" --update-kernel="$(grubby --default-kernel)"
grubby --default-kernel
reboot


//Tune kernel parameters
cat > /etc/sysctl.d/k8s.conf <<EOF
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
fs.may_detach_mounts = 1
vm.overcommit_memory=1
vm.panic_on_oom=0
fs.inotify.max_user_watches=89100
fs.file-max=52706963
fs.nr_open=52706963
net.netfilter.nf_conntrack_max=2310720

net.ipv4.tcp_keepalive_time = 600
net.ipv4.tcp_keepalive_probes = 3
net.ipv4.tcp_keepalive_intvl =15
net.ipv4.tcp_max_tw_buckets = 36000
net.ipv4.tcp_tw_reuse = 1
net.ipv4.tcp_max_orphans = 327680
net.ipv4.tcp_orphan_retries = 3
net.ipv4.tcp_syncookies = 1
net.ipv4.tcp_max_syn_backlog = 16384
net.ipv4.ip_conntrack_max = 65536
net.ipv4.tcp_max_syn_backlog = 16384
net.ipv4.tcp_timestamps = 0
net.core.somaxconn = 16384
EOF

#Apply the parameters
sysctl --system  


//Load the ip_vs modules
for i in $(ls /usr/lib/modules/$(uname -r)/kernel/net/netfilter/ipvs|grep -o "^[^.]*");do echo $i; /sbin/modinfo -F filename $i >/dev/null 2>&1 && /sbin/modprobe $i;done


-------------------- Install Docker on all nodes --------------------
yum install -y yum-utils device-mapper-persistent-data lvm2 
yum-config-manager --add-repo https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo 
yum install -y docker-ce docker-ce-cli containerd.io

mkdir /etc/docker
cat > /etc/docker/daemon.json <<EOF
{
  "registry-mirrors": ["https://6ijb8ubo.mirror.aliyuncs.com"],
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "500m", "max-file": "3"
  }
}
EOF

systemctl daemon-reload
systemctl restart docker.service
systemctl enable docker.service 

docker info | grep "Cgroup Driver"
Cgroup Driver: systemd


-------------------- Install kubeadm, kubelet and kubectl on all nodes --------------------
//Define the Kubernetes repo
cat > /etc/yum.repos.d/kubernetes.repo << EOF
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF

yum install -y kubelet-1.20.15 kubeadm-1.20.15 kubectl-1.20.15

#Configure kubelet to use the Aliyun pause image
cat > /etc/sysconfig/kubelet <<EOF
KUBELET_EXTRA_ARGS="--cgroup-driver=systemd --pod-infra-container-image=registry.cn-hangzhou.aliyuncs.com/google_containers/pause-amd64:3.2"
EOF


//Enable kubelet to start on boot
systemctl enable --now kubelet


-------------------- Install and configure the HA components --------------------
//Deploy HAProxy on all master nodes
yum -y install haproxy keepalived

cat > /etc/haproxy/haproxy.cfg << EOF
global
    log         127.0.0.1 local0 info
    log         127.0.0.1 local1 warning
    chroot      /var/lib/haproxy
    pidfile     /var/run/haproxy.pid
    maxconn     4000
    user        haproxy
    group       haproxy
    daemon
    stats socket /var/lib/haproxy/stats

defaults
    mode                    tcp
    log                     global
    option                  tcplog
    option                  dontlognull
    option                  redispatch
    retries                 3
    timeout queue           1m
    timeout connect         10s
    timeout client          1m
    timeout server          1m
    timeout check           10s
    maxconn                 3000

frontend monitor-in
    bind *:33305
    mode http
    option httplog
    monitor-uri /monitor

frontend k8s-master
    bind *:16443
    mode tcp
    option tcplog
    default_backend k8s-master

backend k8s-master
    mode tcp
    option tcplog
    option tcp-check
    balance roundrobin
    server k8s-master1 192.168.80.10:6443  check inter 10000 fall 2 rise 2 weight 1
    server k8s-master2 192.168.80.11:6443  check inter 10000 fall 2 rise 2 weight 1
    server k8s-master3 192.168.80.12:6443  check inter 10000 fall 2 rise 2 weight 1
EOF
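#Before starting the service, the configuration can be syntax-checked with haproxy's check mode:
haproxy -c -f /etc/haproxy/haproxy.cfg        #should report that the configuration is valid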


//Deploy keepalived on all master nodes
yum -y install keepalived

cd /etc/keepalived/
vim keepalived.conf
! Configuration File for keepalived
global_defs {
    router_id LVS_HA1            #router identifier; configure a different value on each node
}

vrrp_script chk_haproxy {
    script "/etc/keepalived/check_haproxy.sh"
    interval 2
    weight 2
}

vrrp_instance VI_1 {
    state MASTER                #state of this instance, MASTER/BACKUP; set BACKUP in the standby nodes' config files
    interface ens33
    virtual_router_id 51
    priority 100                #initial priority of this node; use a lower value on the standby nodes
    advert_int 1
    virtual_ipaddress {
        192.168.80.100          #the VIP address
    }
    track_script {
        chk_haproxy
    }
}


vim check_haproxy.sh
#!/bin/bash
#If haproxy is no longer running, stop keepalived so that the VIP fails over to another node
if ! killall -0 haproxy; then
    systemctl stop keepalived
fi
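#The script also needs to be executable for keepalived to run it (an easy step to miss):
chmod +x /etc/keepalived/check_haproxy.sh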


systemctl enable --now haproxy
systemctl enable --now keepalived
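#Once both services are up, the VIP and the HAProxy health page can be checked on the current MASTER node (a quick sketch using the monitor-uri defined above):
ip addr show ens33 | grep 192.168.80.100        #the VIP should be bound on the MASTER node
curl http://127.0.0.1:33305/monitor             #HAProxy status page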


-------------------- Deploy the K8S cluster -------------------- 
//Set up the cluster initialization configuration file on the master01 node
kubeadm config print init-defaults > /opt/kubeadm-config.yaml

cd /opt/
vim kubeadm-config.yaml
......
11 localAPIEndpoint:
12   advertiseAddress: 192.168.80.10        #specify the current master node's IP address
13   bindPort: 6443

21 apiServer:
22   certSANs:                                #add a certSANs list under the apiServer section containing all master node IPs and the cluster VIP
23   - 192.168.80.100
24   - 192.168.80.10
25   - 192.168.80.11
26   - 192.168.80.12

30 clusterName: kubernetes
31 controlPlaneEndpoint: "192.168.80.100:16443"        #specify the cluster VIP address and the HAProxy port
32 controllerManager: {}

38 imageRepository: registry.cn-hangzhou.aliyuncs.com/google_containers            #specify the registry to pull images from
39 kind: ClusterConfiguration
40 kubernetesVersion: v1.20.15                #specify the Kubernetes version
41 networking:
42   dnsDomain: cluster.local
43   podSubnet: "10.244.0.0/16"                #specify the pod subnet; 10.244.0.0/16 matches flannel's default subnet
44   serviceSubnet: 10.96.0.0/16            #specify the service subnet
45 scheduler: {}
#Append the following at the end of the file
--- 
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
mode: ipvs                                    #change the default kube-proxy mode to ipvs


#Migrate the cluster initialization configuration file to the current schema
kubeadm config migrate --old-config kubeadm-config.yaml --new-config new.yaml


//Pull the images on all nodes
#Copy the yaml configuration file to the other hosts so that each can pull images from it
for i in master02 master03 node01 node02; do scp /opt/new.yaml $i:/opt/; done

kubeadm config images pull --config /opt/new.yaml


//Run the initialization on the master01 node
kubeadm init --config new.yaml --upload-certs | tee kubeadm-init.log
#Output:
.........
Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of the control-plane node running the following command on each as root:
#The command for additional master nodes to join; record it!
  kubeadm join 192.168.80.100:16443 --token 7t2weq.bjbawausm0jaxury \
    --discovery-token-ca-cert-hash sha256:e76e4525ca29a9ccd5c24142a724bdb6ab86512420215242c4313fb830a4eb98 \
    --control-plane --certificate-key 0f2a7ff2c46ec172f834e237fcca8a02e7c29500746594c25d995b78c92dde96

Please note that the certificate-key gives access to cluster sensitive data, keep it secret!
As a safeguard, uploaded-certs will be deleted in two hours; If necessary, you can use
"kubeadm init phase upload-certs --upload-certs" to reload certs afterward.

Then you can join any number of worker nodes by running the following on each as root:
#The command for node nodes to join; record it!
kubeadm join 192.168.80.100:16443 --token 7t2weq.bjbawausm0jaxury \
    --discovery-token-ca-cert-hash sha256:e76e4525ca29a9ccd5c24142a724bdb6ab86512420215242c4313fb830a4eb98

#If initialization fails, run the following:
kubeadm reset -f
ipvsadm --clear 
rm -rf ~/.kube
then run the initialization again


//Environment configuration on the master01 node
#Configure kubectl
mkdir -p $HOME/.kube
cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
chown $(id -u):$(id -g) $HOME/.kube/config

#Edit the controller-manager and scheduler manifests
vim /etc/kubernetes/manifests/kube-scheduler.yaml 
vim /etc/kubernetes/manifests/kube-controller-manager.yaml
......
    #- --port=0                    #search for port=0 and comment out that line

systemctl restart kubelet

#Deploy the flannel network plugin
On all nodes, upload the flannel image flannel.tar and the CNI plugins cni-plugins-linux-amd64-v0.8.6.tgz to /opt; on the master node, also upload the kube-flannel.yml file
cd /opt
docker load < flannel.tar

mv /opt/cni /opt/cni_bak
mkdir -p /opt/cni/bin
tar zxvf cni-plugins-linux-amd64-v0.8.6.tgz -C /opt/cni/bin

kubectl apply -f kube-flannel.yml 


//Join all nodes to the cluster
#master nodes join the cluster
kubeadm join 192.168.80.100:16443 --token 7t2weq.bjbawausm0jaxury \
    --discovery-token-ca-cert-hash sha256:e76e4525ca29a9ccd5c24142a724bdb6ab86512420215242c4313fb830a4eb98 \
    --control-plane --certificate-key 0f2a7ff2c46ec172f834e237fcca8a02e7c29500746594c25d995b78c92dde96

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

#node nodes join the cluster
kubeadm join 192.168.80.100:16443 --token 7t2weq.bjbawausm0jaxury \
    --discovery-token-ca-cert-hash sha256:e76e4525ca29a9ccd5c24142a724bdb6ab86512420215242c4313fb830a4eb98


#View the cluster information on master01
kubectl get nodes
NAME       STATUS   ROLES                  AGE    VERSION
master01   Ready    control-plane,master   2h5m   v1.20.15
master02   Ready    control-plane,master   2h5m   v1.20.15
master03   Ready    control-plane,master   2h5m   v1.20.15
node01     Ready    <none>                 2h5m   v1.20.15
node02     Ready    <none>                 2h5m   v1.20.15


kubectl get pod -n kube-system 
NAME                               READY   STATUS    RESTARTS   AGE
coredns-74ff55c5b-4fg44            1/1     Running   2          2h5m
coredns-74ff55c5b-jsdxz            1/1     Running   0          2h5m
etcd-master01                      1/1     Running   1          2h5m
etcd-master02                      1/1     Running   1          2h5m
etcd-master03                      1/1     Running   1          2h5m
kube-apiserver-master01            1/1     Running   1          2h5m
kube-apiserver-master02            1/1     Running   1          2h5m
kube-apiserver-master03            1/1     Running   1          2h5m
kube-controller-manager-master01   1/1     Running   3          2h5m
kube-controller-manager-master02   1/1     Running   1          2h5m
kube-controller-manager-master03   1/1     Running   2          2h5m
kube-flannel-ds-8qtx6              1/1     Running   2          2h4m
kube-flannel-ds-lmzdz              1/1     Running   0          2h4m
kube-flannel-ds-nb9qx              1/1     Running   1          2h4m
kube-flannel-ds-t4l4x              1/1     Running   1          2h4m
kube-flannel-ds-v592x              1/1     Running   1          2h4m
kube-proxy-6gd5j                   1/1     Running   1          2h5m
kube-proxy-f8k96                   1/1     Running   3          2h5m
kube-proxy-h7nrf                   1/1     Running   1          2h5m
kube-proxy-j96b6                   1/1     Running   1          2h5m
kube-proxy-mgmx6                   1/1     Running   0          2h5m
kube-scheduler-master01            1/1     Running   1          2h5m
kube-scheduler-master02            1/1     Running   2          2h5m
kube-scheduler-master03            1/1     Running   2          2h5m

//Troubleshooting
1. The token for joining the cluster has expired
Note: the token created at cluster initialization is valid for 24 hours. Once it has expired, generate a new token and join again (the certificate key uploaded with --upload-certs for control-plane joins is only kept for 2 hours).

1.1 Generate a token for node nodes to join the cluster
kubeadm token create --print-join-command
kubeadm join 192.168.80.100:16443 --token menw99.1hbsurvl5fiz119n     --discovery-token-ca-cert-hash sha256:e76e4525ca29a9ccd5c24142a724bdb6ab86512420215242c4313fb830a4eb98
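#Existing tokens and their expiry can be listed as well:
kubeadm token list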


1.2 Generate the --certificate-key for master nodes to join the cluster
kubeadm init phase upload-certs  --upload-certs
I1105 12:33:08.201601   93226 version.go:254] remote version is much newer: v1.22.3; falling back to: stable-1.20
[upload-certs] Storing the certificates in Secret "kubeadm-certs" in the "kube-system" Namespace
[upload-certs] Using certificate key:
38dba94af7a38700c3698b8acdf8e23f273be07877f5c86f4977dc023e333deb

#Command for a master node to join the cluster
kubeadm join 192.168.80.100:16443 --token menw99.1hbsurvl5fiz119n     --discovery-token-ca-cert-hash sha256:e76e4525ca29a9ccd5c24142a724bdb6ab86512420215242c4313fb830a4eb98 \
 --control-plane --certificate-key 38dba94af7a38700c3698b8acdf8e23f273be07877f5c86f4977dc023e333deb


2. The master nodes cannot run non-system Pods
Explanation: the master nodes carry a taint that prevents non-system Pods from being scheduled on them. In a test environment the taint can be removed to free up resources and improve utilization.

2.1 View the taints
kubectl  describe node -l node-role.kubernetes.io/master=  | grep Taints
Taints:             node-role.kubernetes.io/master:NoSchedule
Taints:             node-role.kubernetes.io/master:NoSchedule
Taints:             node-role.kubernetes.io/master:NoSchedule


2.2 Remove the taints
kubectl  taint node  -l node-role.kubernetes.io/master node-role.kubernetes.io/master:NoSchedule-
node/master01 untainted
node/master02 untainted
node/master03 untainted

kubectl  describe node -l node-role.kubernetes.io/master=  | grep Taints
Taints:             <none>
Taints:             <none>
Taints:             <none>


3. Change the default NodePort range
By default k8s only exposes NodePort services in the 30000-32767 range. The range can be changed by editing the apiserver configuration file.

#Error message
The Service "nginx-svc" is invalid: spec.ports[0].nodePort: Invalid value: 80: provided port is not in the valid range. The range of valid ports is 30000-32767


[root@k8s-master1 ~]# vim /etc/kubernetes/manifests/kube-apiserver.yaml
- --service-cluster-ip-range=10.96.0.0/16
- --service-node-port-range=1-65535    #add this line

#No manual restart is needed; the kubelet recreates the static apiserver pod and the change takes effect automatically


4. Configuration for using an external etcd
kubeadm config print init-defaults > /opt/kubeadm-config.yaml

cd /opt/
vim kubeadm-config.yaml
apiVersion: kubeadm.k8s.io/v1beta2
bootstrapTokens:
- groups:
  - system:bootstrappers:kubeadm:default-node-token
  token: abcdef.0123456789abcdef
  ttl: 24h0m0s
  usages:
  - signing
  - authentication
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 192.168.80.14
  bindPort: 6443
nodeRegistration:
  criSocket: /var/run/dockershim.sock
  name: master01
  taints:
  - effect: NoSchedule
    key: node-role.kubernetes.io/master
---
apiServer:
  certSANs:
  - 10.96.0.1
  - 127.0.0.1
  - localhost
  - kubernetes
  - kubernetes.default
  - kubernetes.default.svc
  - kubernetes.default.svc.cluster.local
  - 192.168.80.100
  - 192.168.80.10
  - 192.168.80.11
  - 192.168.80.12
  - master01
  - master02
  - master03
  timeoutForControlPlane: 4m0s
apiVersion: kubeadm.k8s.io/v1beta2
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
controlPlaneEndpoint: 192.168.80.100:16443
controllerManager: {}
dns:
  type: CoreDNS
etcd:
  external:                                #use an external etcd
    endpoints:
    - https://192.168.80.10:2379
    - https://192.168.80.11:2379
    - https://192.168.80.12:2379
    caFile: /opt/etcd/ssl/ca.pem           #the etcd certificates must be copied to all master nodes
    certFile: /opt/etcd/ssl/server.pem
    keyFile: /opt/etcd/ssl/server-key.pem
imageRepository: registry.cn-hangzhou.aliyuncs.com/google_containers
kind: ClusterConfiguration
kubernetesVersion: v1.20.15
networking:
  dnsDomain: cluster.local
  podSubnet: "10.244.0.0/16"
  serviceSubnet: 10.96.0.0/16
scheduler: {}
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
mode: ipvs


III. Deploying k8s 1.25


master (2C/4G, more than 2 CPU cores required)     192.168.80.10        docker, kubeadm, kubelet, kubectl, flannel
node01 (2C/2G)                                     192.168.80.11        docker, kubeadm, kubelet, kubectl, flannel
node02 (2C/2G)                                     192.168.80.12        docker, kubeadm, kubelet, kubectl, flannel
Harbor node (hub.kgc.com)                          192.168.80.13        docker, docker-compose, harbor-offline-v1.2.2


------------------------------ Environment preparation ------------------------------
//On all nodes: disable firewall rules, disable selinux, disable swap
systemctl stop firewalld
systemctl disable firewalld
setenforce 0
sed -i 's/enforcing/disabled/' /etc/selinux/config
iptables -F && iptables -t nat -F && iptables -t mangle -F && iptables -X
swapoff -a
sed -ri 's/.*swap.*/#&/' /etc/fstab
#Load the ip_vs modules
for i in $(ls /usr/lib/modules/$(uname -r)/kernel/net/netfilter/ipvs|grep -o "^[^.]*");do echo $i; /sbin/modinfo -F filename $i >/dev/null 2>&1 && /sbin/modprobe $i;done

//Set the hostnames
hostnamectl set-hostname master01
hostnamectl set-hostname node01
hostnamectl set-hostname node02

//Edit the hosts file on all nodes
vim /etc/hosts
192.168.80.10 master01
192.168.80.11 node01
192.168.80.12 node02

//Tune kernel parameters
cat > /etc/sysctl.d/kubernetes.conf << EOF
#Enable bridge mode so that bridged traffic is passed to the iptables chains
net.bridge.bridge-nf-call-ip6tables=1
net.bridge.bridge-nf-call-iptables=1
#Disable the IPv6 protocol
net.ipv6.conf.all.disable_ipv6=1
net.ipv4.ip_forward=1
EOF

//Apply the parameters
sysctl --system  


-------------------- Install Docker on all nodes --------------------
yum install -y yum-utils device-mapper-persistent-data lvm2 
yum-config-manager --add-repo https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo 
yum install -y docker-ce docker-ce-cli containerd.io

mkdir /etc/docker
cat > /etc/docker/daemon.json <<EOF
{
  "registry-mirrors": ["https://6ijb8ubo.mirror.aliyuncs.com"],
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m"
  }
}
EOF
#Use the systemd-managed cgroup driver for resource control, because systemd is simpler, more mature and more stable than cgroupfs for limiting CPU, memory and other resources.
#Container logs use the json-file driver, capped at 100 MB; kubelet symlinks pod container logs under /var/log/containers, which makes collection by log systems such as ELK easier.

systemctl daemon-reload
systemctl restart docker.service
systemctl enable docker.service 

docker info | grep "Cgroup Driver"
Cgroup Driver: systemd


//Install cri-dockerd on all hosts
Kubernetes removed dockershim support in v1.24, and Docker Engine does not natively implement the CRI, so the two can no longer be integrated directly. Mirantis and Docker therefore jointly created the cri-dockerd project, a shim that exposes a CRI-compliant interface for Docker Engine so that Kubernetes can control Docker through the CRI.

Project: https://github.com/Mirantis/cri-dockerd

cd /opt/

wget https://github.com/Mirantis/cri-dockerd/releases/download/v0.2.6/cri-dockerd-0.2.6-3.el7.x86_64.rpm
 

yum localinstall -y cri-dockerd-0.2.6-3.el7.x86_64.rpm

vim /lib/systemd/system/cri-docker.service
#Change the ExecStart line as follows
ExecStart=/usr/bin/cri-dockerd --network-plugin=cni --pod-infra-container-image=registry.aliyuncs.com/google_containers/pause:3.8

systemctl daemon-reload
systemctl enable --now cri-docker
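#Whether the shim is running and its socket is available can be checked quickly:
systemctl is-active cri-docker                    #should print active
ls -l /var/run/cri-dockerd.sock                   #the CRI socket kubeadm will be pointed at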


-------------------- Install kubeadm, kubelet and kubectl on all nodes --------------------
//Define the Kubernetes repo
cat > /etc/yum.repos.d/kubernetes.repo << EOF
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF

yum install -y kubelet-1.25.4 kubeadm-1.25.4 kubectl-1.25.4

kubeadm version

//Enable kubelet to start on boot
systemctl enable kubelet.service
#After a kubeadm install the K8S components run as Pods, i.e. as containers underneath, so kubelet must be enabled to start on boot


-------------------- Deploy the K8S cluster -------------------- 
//Set up the cluster initialization configuration file on the master01 node
kubeadm config print init-defaults > /opt/kubeadm-config.yaml

cd /opt/
vim kubeadm-config.yaml
apiVersion: kubeadm.k8s.io/v1beta3
bootstrapTokens:
- groups:
  - system:bootstrappers:kubeadm:default-node-token
  token: abcdef.0123456789abcdef
  ttl: 24h0m0s
  usages:
  - signing
  - authentication
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 192.168.80.10
  bindPort: 6443
nodeRegistration:
  criSocket: unix:///var/run/cri-dockerd.sock
  imagePullPolicy: IfNotPresent
  name: master01
  taints:
  - effect: NoSchedule
    key: node-role.kubernetes.io/master
---
apiServer:
  timeoutForControlPlane: 4m0s
apiVersion: kubeadm.k8s.io/v1beta3
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
controllerManager: {}
dns: {}
etcd:
  local:
    dataDir: /var/lib/etcd
imageRepository: registry.aliyuncs.com/google_containers
kind: ClusterConfiguration
kubernetesVersion: v1.25.4
networking:
  dnsDomain: cluster.local
  podSubnet: 10.244.0.0/16
  serviceSubnet: 10.96.0.0/16
scheduler: {}
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
mode: ipvs


//List the images required for initialization
kubeadm config images list --kubernetes-version 1.25.4

//Pull the images on all nodes
#Copy the yaml configuration file to the other hosts so that each can pull images from it
for i in node01 node02; do scp /opt/kubeadm-config.yaml $i:/opt/; done

kubeadm config images pull --config /opt/kubeadm-config.yaml


//Initialize kubeadm on the master01 node
Method 1:
kubeadm init --config=kubeadm-config.yaml --upload-certs | tee kubeadm-init.log
#--upload-certs uploads the certificates so they can be distributed automatically when additional nodes join later
#tee kubeadm-init.log saves the output to a log file

//View the kubeadm-init log
less kubeadm-init.log

//Kubernetes configuration directory
ls /etc/kubernetes/

//Directory holding the CA and other certificates and keys
ls /etc/kubernetes/pki        


Method 2:
kubeadm init \
--apiserver-advertise-address=192.168.80.10 \
--image-repository registry.aliyuncs.com/google_containers \
--kubernetes-version v1.25.4 \
--service-cidr=10.96.0.0/16 \
--pod-network-cidr=10.244.0.0/16 \
--cri-socket /var/run/cri-dockerd.sock \
--upload-certs

--------------------------------------------------------------------------------------------
The cluster is initialized with kubeadm init; you can either pass the parameters directly or initialize from a configuration file.
Optional parameters:
--apiserver-advertise-address: the IP address the apiserver advertises to the other components; usually the master node's IP used for intra-cluster communication. 0.0.0.0 means all available addresses on the node
--apiserver-bind-port: the port the apiserver listens on, default 6443
--cert-dir: directory for the SSL certificates used for communication, default /etc/kubernetes/pki
--control-plane-endpoint: required for multi-master setups; specifies a fixed access address for the control plane. Note: kubeadm does not support converting a single control-plane cluster created without --control-plane-endpoint into a highly available cluster
--image-repository: the registry to pull images from, default k8s.gcr.io
--kubernetes-version: the Kubernetes version to deploy
--pod-network-cidr: the pod subnet; must match the pod network plugin. Flannel defaults to 10.244.0.0/16, Calico defaults to 192.168.0.0/16
--service-cidr: the service subnet
--service-dns-domain: suffix of the service FQDNs, default cluster.local
--token-ttl: the default token lifetime is 24 hours; 0 means it never expires
--cri-socket: from v1.24 on, specifies the CRI socket path to connect to; note that each CRI uses a different socket file
#if the CRI is containerd, use --cri-socket unix:///run/containerd/containerd.sock
#if the CRI is docker (via cri-dockerd), use --cri-socket unix:///var/run/cri-dockerd.sock
#if the CRI is CRI-O, use --cri-socket unix:///var/run/crio/crio.sock
#Note: CRI-O and containerd manage containers differently, so their image files are not interchangeable.
---------------------------------------------------------------------------------------------

After initializing with method 2, the kube-proxy configmap needs to be edited to enable ipvs
kubectl edit cm kube-proxy -n=kube-system
set mode: ipvs

Output:
......
Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.80.10:6443 --token rc0kfs.a1sfe3gl4dvopck5 \
    --discovery-token-ca-cert-hash sha256:864fe553c812df2af262b406b707db68b0fd450dc08b34efb73dd5a4771d37a2


//Set up kubectl
kubectl has to be authenticated and authorized by the API server before it can perform management operations. A kubeadm-deployed cluster generates an admin-privileged kubeconfig for this, /etc/kubernetes/admin.conf, which kubectl loads from the default path "$HOME/.kube/config".

mkdir -p $HOME/.kube
cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
chown $(id -u):$(id -g) $HOME/.kube/config


//Deploy the flannel network plugin on all nodes
#On all nodes, upload the flannel images flannel.tar and flannel-cni-plugin.tar and the CNI plugins cni-plugins-linux-amd64-v1.1.1.tgz to /opt; on the master node, also upload the kube-flannel.yml file
mv /opt/cni /opt/cni_bak
mkdir -p /opt/cni/bin
tar zxvf cni-plugins-linux-amd64-v1.1.1.tgz -C /opt/cni/bin

Method 1:
cd /opt
docker load < flannel.tar
docker load < flannel-cni-plugin.tar

kubectl apply -f kube-flannel.yml 


Method 2:
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml


//On the node nodes, run the kubeadm join command to join the cluster
kubeadm join 192.168.80.10:6443 --token rc0kfs.a1sfe3gl4dvopck5 \
    --discovery-token-ca-cert-hash sha256:864fe553c812df2af262b406b707db68b0fd450dc08b34efb73dd5a4771d37a2 \
    --cri-socket /var/run/cri-dockerd.sock            #the cri-dockerd socket must be specified explicitly


//Check node status on the master node
kubectl get nodes
NAME       STATUS   ROLES           AGE    VERSION
master01   Ready    control-plane   88m    v1.25.4
node01     Ready    <none>          110s   v1.25.4
node02     Ready    <none>          105s   v1.25.4

kubectl get pods -A
NAMESPACE      NAME                               READY   STATUS    RESTARTS   AGE
kube-flannel   kube-flannel-ds-lc7lg              1/1     Running   0          83m
kube-flannel   kube-flannel-ds-phlnb              1/1     Running   0          88s
kube-flannel   kube-flannel-ds-wlvvk              1/1     Running   0          93s
kube-system    coredns-c676cc86f-5x7b5            1/1     Running   0          88m
kube-system    coredns-c676cc86f-8wxg7            1/1     Running   0          88m
kube-system    etcd-master01                      1/1     Running   0          88m
kube-system    kube-apiserver-master01            1/1     Running   0          88m
kube-system    kube-controller-manager-master01   1/1     Running   0          88m
kube-system    kube-proxy-rjs6g                   1/1     Running   0          88s
kube-system    kube-proxy-vp2b5                   1/1     Running   0          88m
kube-system    kube-proxy-xnllf                   1/1     Running   0          93s
kube-system    kube-scheduler-master01            1/1     Running   0          88m


//Test pod creation
kubectl create deployment nginx --image=nginx

kubectl get pods -o wide
NAME                     READY   STATUS    RESTARTS   AGE   IP           NODE     NOMINATED NODE   READINESS GATES
nginx-554b9c67f9-zr2xs   1/1     Running   0          14m   10.244.1.2   node01   <none>           <none>

//Expose a port to provide the service
kubectl expose deployment nginx --port=80 --type=NodePort

kubectl get svc
NAME         TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)        AGE
kubernetes   ClusterIP   10.96.0.1      <none>        443/TCP        25h
nginx        NodePort    10.96.15.132   <none>        80:32698/TCP   4s

//Test access
curl http://node01:32698

//Scale out to 3 replicas
kubectl scale deployment nginx --replicas=3

kubectl get pods -o wide
NAME                     READY   STATUS    RESTARTS   AGE   IP           NODE     NOMINATED NODE   READINESS GATES
nginx-554b9c67f9-9kh4s   1/1     Running   0          66s   10.244.1.3   node01   <none>           <none>
nginx-554b9c67f9-rv77q   1/1     Running   0          66s   10.244.2.2   node02   <none>           <none>
nginx-554b9c67f9-zr2xs   1/1     Running   0          17m   10.244.1.2   node01   <none>           <none>


//Deploy the Dashboard
kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.6.1/aio/deploy/recommended.yaml


########### Kernel parameter optimization ##########
cat > /etc/sysctl.d/kubernetes.conf <<EOF
net.bridge.bridge-nf-call-iptables=1
net.bridge.bridge-nf-call-ip6tables=1
net.ipv4.ip_forward=1
net.ipv4.tcp_tw_recycle=0
#avoid using swap; it is only used when the system runs out of memory (OOM)
vm.swappiness=0
#do not check whether physical memory is sufficient
vm.overcommit_memory=1
#do not panic on OOM, let the OOM killer handle it
vm.panic_on_oom=0
fs.inotify.max_user_instances=8192
fs.inotify.max_user_watches=1048576
#maximum number of file handles
fs.file-max=52706963
#only supported on kernel 4.4 and later
fs.nr_open=52706963
net.ipv6.conf.all.disable_ipv6=1
net.netfilter.nf_conntrack_max=2310720
EOF

If initialization fails, run the following:
# kubeadm reset -f
# ipvsadm --clear 
# rm -rf ~/.kube
# then run the initialization again

IV. containerd, an open-source container runtime for managing the container lifecycle

containerd is an open-source container runtime for managing the container lifecycle. It was developed by Docker, Inc. and donated to the Cloud Native Computing Foundation (CNCF) as an independent project in 2017.

containerd's main goal is to provide a reliable, efficient and extensible container runtime environment for creating, starting, stopping and monitoring containers in the container ecosystem. Its main capabilities are:

1. Container management: containerd provides a set of APIs for creating and managing containers. It integrates with container orchestration tools (such as Kubernetes); through these APIs the container's runtime environment, network and storage can be created and configured.

2. Container lifecycle management: containerd manages the entire container lifecycle, including creating, starting, stopping and destroying containers. It is responsible for the container's processes, filesystem and resource isolation, ensuring containers run safely in an isolated environment.

3. Image management: containerd supports pulling, pushing and managing container images. It interacts with container registries (such as Docker Hub or private registries), can download and distribute images, and provides image versioning and caching.

4. Storage management: containerd provides management and support for container data volumes. It can create and manage data volumes and associate them with containers, so containers can persist and access data.

5. Security and isolation: containerd implements a series of security mechanisms to keep containers isolated from each other and running safely. It uses Linux namespaces and control groups (cgroups) to provide strong container isolation.

Overall, containerd is a lightweight container runtime that provides the core container-management functionality and serves as the foundation for other tools in the container ecosystem (orchestration tools, build tools, and so on). It plays a key role in creating, managing and running containers and is widely used across container platforms and tooling.

mkdir -p /etc/containerd
containerd config default > /etc/containerd/config.toml

Edit the configuration file:
vim /etc/containerd/config.toml
61  sandbox_image = "registry.aliyuncs.com/google_containers/pause:3.8"
125 SystemdCgroup = true
145 config_path = "/etc/containerd/certs.d"


systemctl enable --now containerd

ctr -v            #prints the containerd version
crictl -v        #prints the crictl version, which tracks the k8s release; crictl is the CRI client used with k8s


Configure a containerd registry mirror
mkdir -p /etc/containerd/certs.d/docker.io/

vim /etc/containerd/certs.d/docker.io/hosts.toml
server = "https://docker.io"
[host."https://6ijb8ubo.mirror.aliyuncs.com"]
  capabilities = ["pull","resolve"]
[host."https://docker.mirrors.ustc.edu.cn"]
  capabilities = ["pull","resolve"]
[host."https://registry-1.docker.io"]
  capabilities = ["pull","resolve","push"]

systemctl restart containerd


Configure crictl to use containerd as the container runtime
cat > /etc/crictl.yaml <<EOF
runtime-endpoint: unix:///run/containerd/containerd.sock
image-endpoint: unix:///run/containerd/containerd.sock
timeout: 10
debug: false
EOF


systemctl restart containerd
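#With the CRI endpoint configured, image pulls can be tested through crictl (the images land in the k8s.io namespace):
crictl pull docker.io/library/nginx:latest
crictl images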


yum install -y kubelet-1.25.8 kubeadm-1.25.8 kubectl-1.25.8
systemctl enable kubelet

kubeadm config print init-defaults > /opt/kubeadm-config.yaml

cd /opt/
vim kubeadm-config.yaml
apiVersion: kubeadm.k8s.io/v1beta3
bootstrapTokens:
- groups:
  - system:bootstrappers:kubeadm:default-node-token
  token: abcdef.0123456789abcdef
  ttl: 24h0m0s
  usages:
  - signing
  - authentication
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 192.168.80.10      #change to the master's IP address
  bindPort: 6443
nodeRegistration:
  criSocket: unix:///run/containerd/containerd.sock
  imagePullPolicy: IfNotPresent
  name: master01      #change to the master's hostname
  taints:             #add the taint to the master node
  - effect: NoSchedule
    key: node-role.kubernetes.io/master
---
apiServer:
  timeoutForControlPlane: 4m0s
apiVersion: kubeadm.k8s.io/v1beta3
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
controllerManager: {}
dns: {}
etcd:
  local:
    dataDir: /var/lib/etcd
imageRepository: registry.cn-hangzhou.aliyuncs.com/google_containers       #change the image registry
kind: ClusterConfiguration
kubernetesVersion: 1.25.8       #specify the kubernetes version
networking:
  dnsDomain: cluster.local
  serviceSubnet: 10.96.0.0/16   #specify the service subnet
  podSubnet: 10.244.0.0/16      #add: specify the pod subnet
scheduler: {}
#Append the following sections
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
mode: ipvs                     #change the default kube-proxy mode to ipvs
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: systemd          #use the systemd cgroup driver


kubeadm config images pull --config /opt/kubeadm-config.yaml

1. docker is made up of docker-client, dockerd, containerd and runC, so containerd is one of docker's underlying components.
----------------------------------------
#runC (run container) is a lightweight container runtime tool implementing the OCI standard (the Open Container Initiative standard, which defines the lifecycle management of containers and images); it is used to create and run containers.
#containerd maintains the running state of the containers created by runC: runC creates and runs containers, while containerd is the resident daemon that manages them.
#Once runC and containerd became the cornerstone of standardized container services, higher-level applications could be built directly on top of containerd and runC.
#runC and containerd both originated in docker: docker donated runC to the OCI and later donated containerd to the CNCF.
----------------------------------------

2. From the k8s point of view, either containerd or docker can be chosen as the runtime component. containerd has a shorter call chain and fewer components, is more stable, and uses fewer node resources, so from version 1.24 on k8s uses containerd by default.

3. With docker as the k8s container runtime, the call chain is: kubelet --> dockershim (inside the kubelet process) --> dockerd --> containerd
With containerd as the k8s container runtime, the call chain is: kubelet --> cri plugin (inside the containerd process) --> containerd

4. Compared with docker, containerd adds the concept of namespaces: each image and container is only visible within its own namespace.
----------------------------------------
#Because containerd has namespaces, the ctr client distinguishes three namespaces for upper-layer orchestration systems: k8s.io, moby and default.
#crictl always operates in the k8s.io namespace; to see those images with ctr you must add the -n parameter. crictl knows only the k8s.io namespace and has no -n parameter.
#Images pulled with ctr images pull go into default by default, while images pulled by crictl pull and by the kubelet go into the k8s.io namespace.
#So when importing images with ctr, take particular care to specify the namespace.
----------------------------------------
ctr -n k8s.io image ls
crictl image
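#For example (a sketch; nginx.tar is a hypothetical image archive), importing an archive into the namespace the kubelet uses looks like this:
ctr -n k8s.io images import nginx.tar        #import into the k8s.io namespace so crictl and the kubelet can see it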

mkdir /opt/nerdctl 
tar xf nerdctl-0.22.2-linux-amd64.tar.gz -C /opt/nerdctl
cp /opt/nerdctl/nerdctl /usr/local/bin/

#Install the bash-completion dependency package
yum install -y epel-release bash-completion
source /usr/share/bash-completion/bash_completion

echo "source <(nerdctl completion bash)" >> ~/.bashrc
source ~/.bashrc
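#nerdctl then works much like the docker CLI against containerd; a couple of usage sketches:
nerdctl -n k8s.io images        #list images in the k8s.io namespace
nerdctl -n k8s.io ps -a         #list containers in the k8s.io namespace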

Quick deployment commands:

---------------- all nodes -------------
systemctl stop firewalld
systemctl disable firewalld
setenforce 0
sed -i 's/enforcing/disabled/' /etc/selinux/config
iptables -F && iptables -t nat -F && iptables -t mangle -F && iptables -X
swapoff -a						
sed -ri 's/.*swap.*/#&/' /etc/fstab		
for i in $(ls /usr/lib/modules/$(uname -r)/kernel/net/netfilter/ipvs|grep -o "^[^.]*");do echo $i; /sbin/modinfo -F filename $i >/dev/null 2>&1 && /sbin/modprobe $i;done

echo '192.168.92.10 master01
192.168.92.20 node01
192.168.92.30 node02' >>/etc/hosts


cat > /etc/sysctl.d/kubernetes.conf << EOF
net.bridge.bridge-nf-call-ip6tables=1
net.bridge.bridge-nf-call-iptables=1
net.ipv6.conf.all.disable_ipv6=1
net.ipv4.ip_forward=1
EOF

sysctl --system  




systemctl stop firewalld.service
setenforce 0
yum install -y yum-utils device-mapper-persistent-data lvm2
wget -O /etc/yum.repos.d/CentOS-Base.repo http://mirrors.aliyun.com/repo/Centos-7.repo
yum install epel-release -y
yum install container-selinux -y
yum-config-manager --add-repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
yum install -y docker-ce docker-ce-cli containerd.io
systemctl start docker.service
systemctl enable docker.service 


 cat > /etc/docker/daemon.json <<EOF
{
  "registry-mirrors": ["https://o4ogh00n.mirror.aliyuncs.com"],
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": {
     "max-size": "500m", "max-file": "3"
  }
}
EOF


systemctl daemon-reload
systemctl restart docker.service
systemctl enable docker.service 

docker info | grep "Cgroup Driver"
Cgroup Driver: systemd

cat > /etc/yum.repos.d/kubernetes.repo << EOF
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF

yum install -y kubelet-1.20.15 kubeadm-1.20.15 kubectl-1.20.15


systemctl enable --now kubelet.service
systemctl status kubelet

hostnamectl set-hostname master01
hostnamectl set-hostname node01
hostnamectl set-hostname node02

---------------- on the master ----------------
mkdir /opt/k8s  
cd /opt/k8s
kubeadm config print init-defaults > /opt/k8s/kubeadm-config.yaml
vim kubeadm-config.yaml
......
11 localAPIEndpoint:
12   advertiseAddress: 192.168.247.128        #specify the master node's IP address
13   bindPort: 6443
......
32 imageRepository: registry.cn-hangzhou.aliyuncs.com/google_containers          #specify the registry to pull images from; the default is k8s.gcr.io
33 kind: ClusterConfiguration
34 kubernetesVersion: v1.20.15                #specify the Kubernetes version
35 networking:
36   dnsDomain: cluster.local
37   podSubnet: "10.244.0.0/16"                #specify the pod subnet; 10.244.0.0/16 matches flannel's default subnet
38   serviceSubnet: 10.96.0.0/16            #specify the service subnet
39 scheduler: {}
#Append the following at the end of the file
--- 
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
mode: ipvs                                    #change the default kube-proxy mode to ipvs
Save and quit, then:
kubeadm config images pull --config /opt/k8s/kubeadm-config.yaml
kubeadm init --config=/opt/k8s/kubeadm-config.yaml --upload-certs | tee kubeadm-init.log


mkdir -p $HOME/.kube
cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
chown $(id -u):$(id -g) $HOME/.kube/config
//If kubectl get cs reports the cluster as unhealthy, edit the following two files
vim /etc/kubernetes/manifests/kube-scheduler.yaml 
vim /etc/kubernetes/manifests/kube-controller-manager.yaml
# Make the same changes in both files
sed -i 's/127.0.0.1/192.168.247.128/1' /etc/kubernetes/manifests/kube-controller-manager.yaml 
Change --bind-address=127.0.0.1 to --bind-address=192.168.247.128        #use the IP of the k8s control-plane node master01
Under the httpGet: fields, change host from 127.0.0.1 to 192.168.247.128 (two places)
#- --port=0                    # search for port=0 and comment out that line
systemctl restart kubelet
On the node nodes, run:
kubeadm join 192.168.247.128:6443 --token abcdef.0123456789abcdef \
    --discovery-token-ca-cert-hash sha256:feb15376b4dc83bcd7453e6edb1ee1597c420d9304960ae138ea5156132fd661 
to join the node nodes to the cluster

Upload flannel-v0.21.5 to the /opt/k8s directory on the master node
cd  /opt/k8s
unzip flannel-v0.21.5.zip 
scp flannel.tar flannel-cni-plugin.tar node01:/opt
scp flannel.tar flannel-cni-plugin.tar node02:/opt
On node01:
cd /opt
docker load -i flannel.tar
docker load -i flannel-cni-plugin.tar
Repeat the same steps on node02
Repeat the same steps on master01
On the master node:
cd /opt
mv cni cni.bak
mkdir -p /opt/cni/bin
Repeat the steps above on node01 and node02
On the master node:
cd /opt/k8s
tar xf cni-plugins-linux-amd64-v1.3.0.tgz -C /opt/cni/bin
scp -r /opt/cni node01:/opt
scp -r /opt/cni node02:/opt
kubectl apply -f kube-flannel.yml 
Check with: kubectl get pods -A and kubectl get nodes
If you no longer remember the node join command, it is recorded in kubeadm-init.log (e.g. vim kubeadm-init.log)
















