Deploying a Kubernetes 1.29.x Cluster from Scratch


1. Environment Preparation

The lab environment for this deployment is VMware Workstation 16.

1.1 Node Planning

| Node | Hostname | IP address | OS | Role | Hardware | Notes |
| --- | --- | --- | --- | --- | --- | --- |
| Jump server | jumpserver.shiyan.com | 172.172.8.11 | CentOS 7 | NTP server | 4 GB RAM, 4 CPUs, 20 GB disk (no partitioning requirement) | Optional VM; in this lab the other nodes can only be reached through the jump server |
| k8s-master01 | k8s-master01.shiyan.com | 172.172.8.61 | CentOS 7 | k8s-master | 4 GB RAM, 4 CPUs, 50 GB disk (no partitioning requirement) | Kernel 3.10 or newer required; swap, firewall, and SELinux should be disabled |
| k8s-node1 | k8s-node1.shiyan.com | 172.172.8.65 | CentOS 7 | k8s-node | 4 GB RAM, 4 CPUs, 50 GB disk (no partitioning requirement) | Kernel 3.10 or newer required; swap, firewall, and SELinux should be disabled |
| k8s-node2 | k8s-node2.shiyan.com | 172.172.8.66 | CentOS 7 | k8s-node | 4 GB RAM, 4 CPUs, 50 GB disk (no partitioning requirement) | Kernel 3.10 or newer required; swap, firewall, and SELinux should be disabled |

1.2 Operating System and Component Versions

Core component: Kubernetes 1.29.3
Container engine: Docker CE 25.0.4
Container runtime interface: cri-dockerd 0.3.8
Network plugin: Calico v3.27.2
Operating system: CentOS 7.5, kernel 5.4.272 (after the kernel upgrade in section 1.3.11)

1.3 Pre-deployment Preparation

1.3.1 Create a working directory

[root@jumpserver ~]# mkdir /root/k8s-cluster ; cd /root/k8s-cluster

1.3.2 Configure Ansible

[root@jumpserver k8s-cluster]# vim ansible.cfg
[defaults]
inventory      = iplist
host_key_checking = False
remote_user = root

[root@jumpserver k8s-cluster]# vim iplist 
[k8s]
172.172.8.61 hostname=k8s-master01.shiyan.com ansible_ssh_pass=123456
172.172.8.65 hostname=k8s-node1.shiyan.com ansible_ssh_pass=123456
172.172.8.66 hostname=k8s-node2.shiyan.com ansible_ssh_pass=123456

1.3.3 Configure passwordless login from the jump server to the other hosts

[root@jumpserver k8s-cluster]# ssh-keygen  # press Enter through all prompts
[root@jumpserver k8s-cluster]# ansible all  -m authorized_key -a "user=root state=present key='{{ lookup('file', '/root/.ssh/id_rsa.pub') }}'"
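A quick sanity check (not part of the original steps) that the public key was installed and Ansible can reach every host:

[root@jumpserver k8s-cluster]# ansible all -m ping
[root@jumpserver k8s-cluster]# ssh root@172.172.8.61 hostname   # should return the hostname without a password prompt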

1.3.4 Set the hostnames of the lab nodes

[root@jumpserver k8s-cluster]# ansible all  -m shell -a 'hostnamectl set-hostname {{hostname}}'
[root@jumpserver k8s-cluster]# ansible all  -m shell -a 'hostname'
172.172.8.61 | CHANGED | rc=0 >>
k8s-master01.shiyan.com
172.172.8.66 | CHANGED | rc=0 >>
k8s-node2.shiyan.com
172.172.8.65 | CHANGED | rc=0 >>
k8s-node1.shiyan.com

1.3.5 Configure /etc/hosts and sync it to all hosts

[root@jumpserver k8s-cluster]# vim hosts.yaml
---  
- name: Configure /etc/hosts
  hosts: all
  become: yes  
  tasks:  
    - name: Configure /etc/hosts
      blockinfile:  
        path: /etc/hosts  
        block: |  
          172.172.8.61 k8s-master01.shiyan.com k8s-master01
          172.172.8.65 k8s-node1.shiyan.com k8s-node1
          172.172.8.66 k8s-node2.shiyan.com k8s-node2
          
[root@jumpserver k8s-cluster]# ansible-playbook hosts.yaml 

[root@jumpserver k8s-cluster]# ansible all -m shell -a 'cat /etc/hosts'

1.3.6 Configure the NTP server

The jump server acts as the NTP server and the other nodes as clients; this lab uses chrony.

# Server-side configuration
[root@jumpserver k8s-cluster]# yum -y install chrony ntpdate
[root@jumpserver k8s-cluster]# ntpdate ntp.aliyun.com	# sync the time once manually
[root@jumpserver k8s-cluster]# vim /etc/chrony.conf
[root@jumpserver k8s-cluster]# cat /etc/chrony.conf
server ntp.aliyun.com
rtcsync
allow 172.172.8.0/24
local stratum 10
logdir /var/log/chrony
[root@jumpserver k8s-cluster]# systemctl restart chronyd
[root@jumpserver k8s-cluster]# systemctl enable chronyd

# Client configuration
## Create the config file template
[root@jumpserver k8s-cluster]# vim chrony.conf.template
[root@jumpserver k8s-cluster]# cat chrony.conf.template
server 172.172.8.11
driftfile /var/lib/chrony/drift
makestep 1.0 3
rtcsync
logdir /var/log/chrony

[root@jumpserver k8s-cluster]# vim chrony.yaml
---  
- hosts: all  
  gather_facts: no  
  tasks:  
    - name: Install chrony ntpdate  
      yum:  
        name:  
          - chrony  
          - ntpdate  
        state: present
    - name: Copy file chrony.conf.template to server
      copy:
        src: chrony.conf.template
        dest: /etc/chrony.conf
    - name: ntpdate 172.172.8.11
      shell: ntpdate 172.172.8.11
    - name: Restart service chronyd
      service:
        name: chronyd
        state: restarted
        enabled: true 
        
[root@jumpserver k8s-cluster]# ansible-playbook chrony.yaml
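To confirm that each node is actually syncing time from the jump server, a simple verification with chrony's client tool (not in the original steps) could be:

[root@jumpserver k8s-cluster]# ansible all -m shell -a 'chronyc sources'
# each node should list 172.172.8.11 as its time source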

1.3.7 Check machine-id

Because the lab VMs were cloned, they all share the same machine-id and it has to be regenerated.

[root@jumpserver k8s-cluster]# ansible all -m shell -a 'cat /etc/machine-id'
[root@jumpserver k8s-cluster]# ansible all -m shell -a 'rm -f /etc/machine-id && systemd-machine-id-setup'
[root@jumpserver k8s-cluster]# ansible all -m shell -a 'cat /etc/machine-id'

1.3.8 Disable the firewall and SELinux

[root@jumpserver k8s-cluster]# ansible all -m shell -a "systemctl disable firewalld --now"

# Disable permanently (config file) and disable immediately (setenforce)
[root@jumpserver k8s-cluster]# ansible all -m shell -a "sed -i 's/SELINUX=enforcing/SELINUX=disabled/' /etc/selinux/config && setenforce 0"

1.3.9 Disable swap

[root@jumpserver k8s-cluster]# ansible all -m shell -a 'swapon -s'
[root@jumpserver k8s-cluster]# ansible all -m shell -a 'swapoff -a'
[root@jumpserver k8s-cluster]# ansible all -m shell -a "cp /etc/fstab /etc/fstab_bak_`date +%Y%m%d%H%M%S` "
[root@jumpserver k8s-cluster]# ansible all -m shell -a "sed -i 's/.*swap.*/#&/g' /etc/fstab"

After finishing the steps above, it is recommended to shut the VMs down and take a snapshot.

1.3.10 Tune kernel parameters

[root@jumpserver k8s-cluster]# cat k8s.conf 
vm.swappiness=0
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1

[root@jumpserver k8s-cluster]# ansible all -m copy -a 'src=k8s.conf dest=/etc/sysctl.d/k8s.conf'
[root@jumpserver k8s-cluster]# ansible all -m shell -a 'modprobe br_netfilter; modprobe overlay'
[root@jumpserver k8s-cluster]# ansible all -m shell -a 'sysctl -p /etc/sysctl.d/k8s.conf '
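To double-check that the parameters are active on every node (just a verification, not required):

[root@jumpserver k8s-cluster]# ansible all -m shell -a 'sysctl net.bridge.bridge-nf-call-iptables net.ipv4.ip_forward'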

1.3.11 Upgrade the system kernel

# Import the GPG key
[root@jumpserver k8s-cluster]# ansible all -m shell -a 'rpm --import https://www.elrepo.org/RPM-GPG-KEY-elrepo.org'
# Add the ELRepo yum repository
[root@jumpserver k8s-cluster]# ansible all -m shell -a 'yum -y install https://www.elrepo.org/elrepo-release-7.0-4.el7.elrepo.noarch.rpm'
# Install the long-term maintenance (lt) kernel
[root@jumpserver k8s-cluster]# ansible all -m shell -a 'yum --enablerepo="elrepo-kernel" -y install kernel-lt.x86_64'
# Set the default GRUB2 boot entry to 0 (the newly installed kernel)
[root@jumpserver k8s-cluster]# ansible all -m shell -a 'grub2-set-default 0'
# Regenerate the GRUB2 configuration
[root@jumpserver k8s-cluster]# ansible all -m shell -a 'grub2-mkconfig -o /boot/grub2/grub.cfg'
[root@jumpserver k8s-cluster]# ansible all -m shell -a 'reboot'
[root@jumpserver k8s-cluster]# ansible all -m shell -a 'uname -r'
172.172.8.66 | CHANGED | rc=0 >>
5.4.272-1.el7.elrepo.x86_64
172.172.8.65 | CHANGED | rc=0 >>
5.4.272-1.el7.elrepo.x86_64
172.172.8.61 | CHANGED | rc=0 >>
5.4.272-1.el7.elrepo.x86_64

1.3.12 Load the IPVS kernel modules

# Create the module-loading script
cat > ipvs.modules <<EOF
#!/bin/bash
modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
modprobe -- nf_conntrack
EOF

[root@jumpserver k8s-cluster]# ansible all -m copy -a 'src=ipvs.modules dest=/etc/sysconfig/modules/ipvs.modules mode=755'
[root@jumpserver k8s-cluster]# ansible all -m shell -a 'bash /etc/sysconfig/modules/ipvs.modules '
 
# Verify that the modules were loaded
[root@jumpserver k8s-cluster]# ansible all -m shell -a 'lsmod | grep -e ip_vs -e nf_conntrack'
172.172.8.65 | CHANGED | rc=0 >>
ip_vs_sh               16384  0 
ip_vs_wrr              16384  0 
ip_vs_rr               16384  0 
ip_vs                 155648  6 ip_vs_rr,ip_vs_sh,ip_vs_wrr
nf_conntrack          147456  1 ip_vs
nf_defrag_ipv6         24576  2 nf_conntrack,ip_vs
nf_defrag_ipv4         16384  1 nf_conntrack
libcrc32c              16384  3 nf_conntrack,xfs,ip_vs
172.172.8.61 | CHANGED | rc=0 >>
ip_vs_sh               16384  0 
ip_vs_wrr              16384  0 
ip_vs_rr               16384  0 
ip_vs                 155648  6 ip_vs_rr,ip_vs_sh,ip_vs_wrr
nf_conntrack          147456  1 ip_vs
nf_defrag_ipv6         24576  2 nf_conntrack,ip_vs
nf_defrag_ipv4         16384  1 nf_conntrack
libcrc32c              16384  3 nf_conntrack,xfs,ip_vs
172.172.8.66 | CHANGED | rc=0 >>
ip_vs_sh               16384  0 
ip_vs_wrr              16384  0 
ip_vs_rr               16384  0 
ip_vs                 155648  6 ip_vs_rr,ip_vs_sh,ip_vs_wrr
nf_conntrack          147456  1 ip_vs
nf_defrag_ipv6         24576  2 nf_conntrack,ip_vs
nf_defrag_ipv4         16384  1 nf_conntrack
libcrc32c              16384  3 nf_conntrack,xfs,ip_vs
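Note that modules loaded with modprobe do not persist across reboots on their own. A minimal sketch (an addition to the original steps, file name k8s-modules.conf is arbitrary) that makes both the br_netfilter/overlay modules from 1.3.10 and the IPVS modules load at boot via systemd's modules-load.d mechanism:

[root@jumpserver k8s-cluster]# cat > k8s-modules.conf <<EOF
overlay
br_netfilter
ip_vs
ip_vs_rr
ip_vs_wrr
ip_vs_sh
nf_conntrack
EOF
[root@jumpserver k8s-cluster]# ansible all -m copy -a 'src=k8s-modules.conf dest=/etc/modules-load.d/k8s-modules.conf'
[root@jumpserver k8s-cluster]# ansible all -m shell -a 'systemctl restart systemd-modules-load'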

2. Install docker-ce and cri-dockerd

Since version 1.24, Kubernetes defaults to containerd and no longer ships the dockershim. To keep using Docker as the container runtime for Kubernetes, cri-dockerd must be installed.

2.1 Install docker-ce

[root@jumpserver k8s-cluster]# ansible all -m shell -a 'yum-config-manager --add-repo https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo'
[root@jumpserver k8s-cluster]# ansible all -m shell -a 'yum -y install docker-ce'

[root@jumpserver k8s-cluster]# ansible all -m shell -a 'docker -v '
172.172.8.66 | CHANGED | rc=0 >>
Docker version 25.0.4, build 1a576c5
172.172.8.65 | CHANGED | rc=0 >>
Docker version 25.0.4, build 1a576c5
172.172.8.61 | CHANGED | rc=0 >>
Docker version 25.0.4, build 1a576c5

# Create the Docker daemon config file (registry mirror + systemd cgroup driver)
cat > daemon.json << EOF
{
  "registry-mirrors": [
    "https://docker.mirrors.ustc.edu.cn"
  ],
  "exec-opts": ["native.cgroupdriver=systemd"]
}
EOF

[root@jumpserver k8s-cluster]# ansible all -m copy -a 'src=daemon.json dest=/etc/docker/daemon.json'
[root@jumpserver k8s-cluster]# ansible all -m shell -a 'systemctl enable --now docker'

[root@jumpserver k8s-cluster]# ansible all -m shell -a 'docker info | grep Registry -A1'
172.172.8.61 | CHANGED | rc=0 >>
 Registry Mirrors:
  https://docker.mirrors.ustc.edu.cn/
172.172.8.65 | CHANGED | rc=0 >>
 Registry Mirrors:
  https://docker.mirrors.ustc.edu.cn/
172.172.8.66 | CHANGED | rc=0 >>
 Registry Mirrors:
  https://docker.mirrors.ustc.edu.cn/

2.2 Install cri-dockerd

[root@jumpserver k8s-cluster]# wget https://github.com/Mirantis/cri-dockerd/releases/download/v0.3.8/cri-dockerd-0.3.8-3.el7.x86_64.rpm
[root@jumpserver k8s-cluster]# ansible all -m copy -a 'src=cri-dockerd-0.3.8-3.el7.x86_64.rpm dest=/root/'
[root@jumpserver k8s-cluster]# ansible all -m shell -a 'yum -y install /root/cri-dockerd-0.3.8-3.el7.x86_64.rpm'
[root@jumpserver k8s-cluster]# ansible all -m shell -a 'systemctl enable --now cri-docker'

# Fetch the unit file that needs to be modified
[root@jumpserver k8s-cluster]# scp 172.172.8.61:/usr/lib/systemd/system/cri-docker.service ./
 
# Append the pause-image flag after ExecStart=/usr/bin/cri-dockerd. If the upstream registry is reachable, use --pod-infra-container-image=registry.k8s.io/pause:3.9
# If it is not, use the Aliyun mirror instead: --pod-infra-container-image=registry.aliyuncs.com/google_containers/pause:3.9

# The modified ExecStart line should look like this:
ExecStart=/usr/bin/cri-dockerd --pod-infra-container-image=registry.aliyuncs.com/google_containers/pause:3.9 --container-runtime-endpoint fd://
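If you prefer not to edit the file by hand, a one-liner that applies the same change to the local copy (assuming the stock unit file layout fetched above) could be:

[root@jumpserver k8s-cluster]# sed -i 's|^ExecStart=/usr/bin/cri-dockerd.*|ExecStart=/usr/bin/cri-dockerd --pod-infra-container-image=registry.aliyuncs.com/google_containers/pause:3.9 --container-runtime-endpoint fd://|' cri-docker.service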

[root@jumpserver k8s-cluster]# ansible all -m copy -a 'src=cri-docker.service dest=/usr/lib/systemd/system/cri-docker.service'
# Restart cri-dockerd
[root@jumpserver k8s-cluster]# ansible all -m shell -a 'systemctl daemon-reload && systemctl restart cri-docker'

[root@jumpserver k8s-cluster]# ansible all -m shell -a 'ps -ef | grep cri-dockerd'
172.172.8.61 | CHANGED | rc=0 >>
root       4288      1  0 14:59 ?        00:00:00 /usr/bin/cri-dockerd --pod-infra-container-image=registry.aliyuncs.com/google_containers/pause:3.9 --container-runtime-endpoint fd://
root       4454   4453  0 15:00 pts/0    00:00:00 /bin/sh -c ps -ef | grep cri-dockerd
root       4456   4454  0 15:00 pts/0    00:00:00 grep cri-dockerd
172.172.8.66 | CHANGED | rc=0 >>
root       4100      1  0 14:59 ?        00:00:00 /usr/bin/cri-dockerd --pod-infra-container-image=registry.aliyuncs.com/google_containers/pause:3.9 --container-runtime-endpoint fd://
root       4263   4262  0 15:00 pts/0    00:00:00 /bin/sh -c ps -ef | grep cri-dockerd
root       4265   4263  0 15:00 pts/0    00:00:00 grep cri-dockerd
172.172.8.65 | CHANGED | rc=0 >>
root       4095      1  0 14:59 ?        00:00:00 /usr/bin/cri-dockerd --pod-infra-container-image=registry.aliyuncs.com/google_containers/pause:3.9 --container-runtime-endpoint fd://
root       4259   4258  0 15:00 pts/0    00:00:00 /bin/sh -c ps -ef | grep cri-dockerd
root       4261   4259  0 15:00 pts/0    00:00:00 grep cri-dockerd

3. Kubernetes Cluster Deployment

3.1 Install kubelet, kubeadm, and kubectl

cat > kubernetes.repo <<EOF
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes-new/core/stable/v1.29/rpm/
enabled=1
gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes-new/core/stable/v1.29/rpm/repodata/repomd.xml.key
EOF

[root@jumpserver k8s-cluster]# ansible all -m copy -a 'src=kubernetes.repo dest=/etc/yum.repos.d/'
[root@jumpserver k8s-cluster]# ansible all -m shell -a 'yum install -y kubelet kubeadm kubectl'
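The repository tracks the latest 1.29.x patch release, so the command above may install a newer patch version than the one used in this guide. If you want to pin the exact version (assuming the repo still serves the 1.29.3 build), you could instead run:

[root@jumpserver k8s-cluster]# ansible all -m shell -a 'yum install -y kubelet-1.29.3 kubeadm-1.29.3 kubectl-1.29.3'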

3.2 Configure kubelet

[root@jumpserver k8s-cluster]# scp 172.172.8.61:/etc/sysconfig/kubelet ./

[root@jumpserver k8s-cluster]# vim kubelet
KUBELET_EXTRA_ARGS="--cgroup-driver=systemd"

[root@jumpserver k8s-cluster]# ansible all -m copy -a 'src=kubelet dest=/etc/sysconfig/kubelet'



# Add the following flags after ExecStart in the unit file:  --kubeconfig=/etc/kubernetes/kubelet.conf --config=/var/lib/kubelet/config.yaml
[root@jumpserver k8s-cluster]# scp 172.172.8.61:/lib/systemd/system/kubelet.service ./
[root@jumpserver k8s-cluster]# cat kubelet.service
[Unit]
Description=kubelet: The Kubernetes Node Agent
Documentation=https://kubernetes.io/docs/
Wants=network-online.target
After=network-online.target
 
[Service]
ExecStart=/usr/bin/kubelet  --kubeconfig=/etc/kubernetes/kubelet.conf --config=/var/lib/kubelet/config.yaml
Restart=always
StartLimitInterval=0
RestartSec=10
 
[Install]
WantedBy=multi-user.target

[root@jumpserver k8s-cluster]# ansible all -m copy -a 'src=kubelet.service dest=/lib/systemd/system/kubelet.service'

[root@jumpserver k8s-cluster]# ansible all -m shell -a 'systemctl enable kubelet && systemctl start kubelet'

3.3 Download the images (run on the master node)

# List the required images. They are hosted on registry.k8s.io, which may be unreachable from mainland China, so switch to the Aliyun mirror.
[root@k8s-master01 ~]# kubeadm config images list 
registry.k8s.io/kube-apiserver:v1.29.3
registry.k8s.io/kube-controller-manager:v1.29.3
registry.k8s.io/kube-scheduler:v1.29.3
registry.k8s.io/kube-proxy:v1.29.3
registry.k8s.io/coredns/coredns:v1.11.1
registry.k8s.io/pause:3.9
registry.k8s.io/etcd:3.5.12-0
 
# Pull the images from the Aliyun mirror
[root@k8s-master01 ~]# kubeadm config images pull --cri-socket unix:///var/run/cri-dockerd.sock --image-repository registry.aliyuncs.com/google_containers --kubernetes-version=v1.29.3
[config/images] Pulled registry.aliyuncs.com/google_containers/kube-apiserver:v1.29.3
[config/images] Pulled registry.aliyuncs.com/google_containers/kube-controller-manager:v1.29.3
[config/images] Pulled registry.aliyuncs.com/google_containers/kube-scheduler:v1.29.3
[config/images] Pulled registry.aliyuncs.com/google_containers/kube-proxy:v1.29.3
[config/images] Pulled registry.aliyuncs.com/google_containers/coredns:v1.11.1
[config/images] Pulled registry.aliyuncs.com/google_containers/pause:3.9
[config/images] Pulled registry.aliyuncs.com/google_containers/etcd:3.5.12-0


[root@k8s-master01 ~]# docker images
REPOSITORY                                                        TAG        IMAGE ID       CREATED         SIZE
registry.aliyuncs.com/google_containers/kube-apiserver            v1.29.3    39f995c9f199   4 days ago      127MB
registry.aliyuncs.com/google_containers/kube-controller-manager   v1.29.3    6052a25da3f9   4 days ago      122MB
registry.aliyuncs.com/google_containers/kube-scheduler            v1.29.3    8c390d98f50c   4 days ago      59.6MB
registry.aliyuncs.com/google_containers/kube-proxy                v1.29.3    a1d263b5dc5b   4 days ago      82.4MB
registry.aliyuncs.com/google_containers/etcd                      3.5.12-0   3861cfcd7c04   6 weeks ago     149MB
registry.aliyuncs.com/google_containers/coredns                   v1.11.1    cbb01a7bd410   7 months ago    59.8MB
registry.aliyuncs.com/google_containers/pause                     3.9        e6f181688397   17 months ago   744kB

3.4 Initialize the cluster

kubeadm init --kubernetes-version=v1.29.3 --pod-network-cidr=100.100.0.0/16 --apiserver-advertise-address=172.172.8.61 --cri-socket unix:///var/run/cri-dockerd.sock --image-repository registry.aliyuncs.com/google_containers 

If errors occur during initialization, clean up the environment before retrying:

[root@k8s-master01 ~]# systemctl stop kubelet

[root@k8s-master01 ~]# for i in `crictl --runtime-endpoint unix:///var/run/cri-dockerd.sock ps -a | awk 'NR == 1 {next} {print $1}'`
do
crictl --runtime-endpoint unix:///var/run/cri-dockerd.sock stop $i
crictl --runtime-endpoint unix:///var/run/cri-dockerd.sock rm $i
done

[root@k8s-master01 ~]#  for i in `docker ps -a | awk 'NR == 1 {next} {print $1}'`
do
docker stop $i
docker rm $i
done

[root@k8s-master01 ~]# mv /etc/kubernetes/ /etc/kubernetes_old_`date +%Y%m%d%H%M%S`
[root@k8s-master01 ~]# mv /var/lib/kubelet /var/lib/kubelet_old_`date +%Y%m%d%H%M%S`
[root@k8s-master01 ~]# rm -rf /var/lib/etcd/*
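Alternatively, kubeadm ships its own cleanup command, which tears down most of what kubeadm init created (CNI configuration and iptables/IPVS state may still need manual cleanup):

[root@k8s-master01 ~]# kubeadm reset -f --cri-socket unix:///var/run/cri-dockerd.sock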

3.4.1 Initialization complete

The log output should end with "Your Kubernetes control-plane has initialized successfully!":

[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 172.172.8.61:6443 --token noq9po.4098vmtkwa8v53pc \
	--discovery-token-ca-cert-hash sha256:683c779a951e125caedcc15a3ce59bd51f917a4e1c629788fd4f9f4ff6369c22 

# Configure kubectl access on the master as instructed above:
  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config
  export KUBECONFIG=/etc/kubernetes/admin.conf
[root@k8s-master01 ~]# kubectl get nodes
NAME                      STATUS     ROLES           AGE     VERSION
k8s-master01.shiyan.com   NotReady   control-plane   3m32s   v1.29.3

3.5 Join the worker nodes

[root@k8s-node1 ~]# kubeadm join 172.172.8.61:6443 --token noq9po.4098vmtkwa8v53pc --discovery-token-ca-cert-hash sha256:683c779a951e125caedcc15a3ce59bd51f917a4e1c629788fd4f9f4ff6369c22 --cri-socket unix:///var/run/cri-dockerd.sock
[root@k8s-node2 ~]# kubeadm join 172.172.8.61:6443 --token noq9po.4098vmtkwa8v53pc --discovery-token-ca-cert-hash sha256:683c779a951e125caedcc15a3ce59bd51f917a4e1c629788fd4f9f4ff6369c22 --cri-socket unix:///var/run/cri-dockerd.sock
[root@k8s-master01 ~]# kubectl get nodes
NAME                      STATUS     ROLES           AGE     VERSION
k8s-master01.shiyan.com   NotReady   control-plane   11m     v1.29.3
k8s-node1.shiyan.com      NotReady   <none>          2m13s   v1.29.3
k8s-node2.shiyan.com      NotReady   <none>          57s     v1.29.3
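The bootstrap token printed by kubeadm init expires after 24 hours by default. If another node needs to join later, a fresh join command can be generated on the master (the --cri-socket flag from above still has to be appended by hand):

[root@k8s-master01 ~]# kubeadm token create --print-join-command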

3.6 Install the Calico network plugin

The following steps only need to be run on the master node.

https://github.com/projectcalico/calico/releases

The latest release at the time of writing is v3.27.2.

[root@k8s-master01 ~]# wget https://github.com/projectcalico/calico/releases/download/v3.27.2/release-v3.27.2.tgz

[root@k8s-master01 ~]# tar xf release-v3.27.2.tgz 
[root@k8s-master01 ~]# mv release-v3.27.2 Calico-v3.27.2
[root@k8s-master01 ~]# ls Calico-v3.27.2/
bin  images  manifests
[root@k8s-master01 ~]# cd Calico-v3.27.2/images/
[root@k8s-master01 images]# ls
calico-cni.tar  calico-dikastes.tar  calico-flannel-migration-controller.tar  calico-kube-controllers.tar  calico-node.tar  calico-pod2daemon.tar  calico-typha.tar
[root@k8s-master01 images]# for file in *.tar ; do docker load -i "$file"; done
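The tarball is only unpacked on the master here, so the worker nodes will still pull the Calico images from the registry when the pods are scheduled there. If the nodes cannot reach the registry, a hedged workaround is to copy and load the same image tarballs on every node (scp/ssh will prompt for passwords unless keys are also set up between the master and the nodes):

[root@k8s-master01 images]# for node in k8s-node1 k8s-node2
do
scp *.tar $node:/root/
ssh $node 'for f in /root/calico-*.tar; do docker load -i $f; done'
done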


[root@k8s-master01 manifests]# pwd
/root/Calico-v3.27.2/manifests


[root@k8s-master01 manifests]# kubectl create -f tigera-operator.yaml   # apply it as-is
namespace/tigera-operator created
customresourcedefinition.apiextensions.k8s.io/bgpconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/bgpfilters.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/bgppeers.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/blockaffinities.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/caliconodestatuses.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/clusterinformations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/felixconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/globalnetworkpolicies.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/globalnetworksets.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/hostendpoints.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamblocks.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamconfigs.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamhandles.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ippools.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipreservations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/kubecontrollersconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/networkpolicies.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/networksets.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/apiservers.operator.tigera.io created
customresourcedefinition.apiextensions.k8s.io/imagesets.operator.tigera.io created
customresourcedefinition.apiextensions.k8s.io/installations.operator.tigera.io created
customresourcedefinition.apiextensions.k8s.io/tigerastatuses.operator.tigera.io created
serviceaccount/tigera-operator created
clusterrole.rbac.authorization.k8s.io/tigera-operator created
clusterrolebinding.rbac.authorization.k8s.io/tigera-operator created
deployment.apps/tigera-operator created

[root@k8s-master01 manifests]# cat custom-resources.yaml 
# This section includes base Calico installation configuration.
# For more information, see: https://docs.tigera.io/calico/latest/reference/installation/api#operator.tigera.io/v1.Installation
apiVersion: operator.tigera.io/v1
kind: Installation
metadata:
  name: default
spec:
  # Configures Calico networking.
  calicoNetwork:
    # Note: The ipPools section cannot be modified post-install.
    ipPools:
    - blockSize: 26
      cidr: 100.100.0.0/16			# Pod network CIDR; must match --pod-network-cidr from kubeadm init
      encapsulation: VXLANCrossSubnet
      natOutgoing: Enabled
      nodeSelector: all()
    nodeAddressAutodetectionV4:
      interface: eth*			# NIC pattern used for node address autodetection
---

# This section configures the Calico API server.
# For more information, see: https://docs.tigera.io/calico/latest/reference/installation/api#operator.tigera.io/v1.APIServer
apiVersion: operator.tigera.io/v1
kind: APIServer
metadata:
  name: default
spec: {}



[root@k8s-master01 manifests]# kubectl create -f custom-resources.yaml 
installation.operator.tigera.io/default created
apiserver.operator.tigera.io/default created
[root@k8s-master01 ~]# kubectl get pods --all-namespaces -o wide
NAMESPACE          NAME                                              READY   STATUS    RESTARTS      AGE     IP                NODE                      NOMINATED NODE   READINESS GATES
calico-apiserver   calico-apiserver-98db6cd5d-b8g29                  1/1     Running   0             8m18s   100.100.121.66    k8s-node1.shiyan.com      <none>           <none>
calico-apiserver   calico-apiserver-98db6cd5d-sslcw                  1/1     Running   0             8m18s   100.100.81.194    k8s-node2.shiyan.com      <none>           <none>
calico-system      calico-kube-controllers-5d9774f4d9-ggrxh          1/1     Running   0             58m     100.100.224.130   k8s-master01.shiyan.com   <none>           <none>
calico-system      calico-node-5nl2k                                 1/1     Running   0             26m     172.172.8.66      k8s-node2.shiyan.com      <none>           <none>
calico-system      calico-node-sgfq8                                 1/1     Running   0             58m     172.172.8.61      k8s-master01.shiyan.com   <none>           <none>
calico-system      calico-node-vkdsw                                 1/1     Running   0             28m     172.172.8.65      k8s-node1.shiyan.com      <none>           <none>
calico-system      calico-typha-77875d84b6-5ngt9                     1/1     Running   0             58m     172.172.8.66      k8s-node2.shiyan.com      <none>           <none>
calico-system      calico-typha-77875d84b6-w8vb4                     1/1     Running   0             58m     172.172.8.65      k8s-node1.shiyan.com      <none>           <none>
calico-system      csi-node-driver-89dck                             2/2     Running   0             58m     100.100.121.65    k8s-node1.shiyan.com      <none>           <none>
calico-system      csi-node-driver-mdms7                             2/2     Running   0             58m     100.100.81.193    k8s-node2.shiyan.com      <none>           <none>
calico-system      csi-node-driver-s6jzx                             2/2     Running   0             58m     100.100.224.129   k8s-master01.shiyan.com   <none>           <none>
kube-system        coredns-857d9ff4c9-pjsql                          1/1     Running   0             46h     100.100.224.132   k8s-master01.shiyan.com   <none>           <none>
kube-system        coredns-857d9ff4c9-stbqh                          1/1     Running   0             46h     100.100.224.131   k8s-master01.shiyan.com   <none>           <none>
kube-system        etcd-k8s-master01.shiyan.com                      1/1     Running   1 (43h ago)   46h     172.172.8.61      k8s-master01.shiyan.com   <none>           <none>
kube-system        kube-apiserver-k8s-master01.shiyan.com            1/1     Running   1 (43h ago)   46h     172.172.8.61      k8s-master01.shiyan.com   <none>           <none>
kube-system        kube-controller-manager-k8s-master01.shiyan.com   1/1     Running   1 (43h ago)   46h     172.172.8.61      k8s-master01.shiyan.com   <none>           <none>
kube-system        kube-proxy-74g69                                  1/1     Running   1 (43h ago)   46h     172.172.8.66      k8s-node2.shiyan.com      <none>           <none>
kube-system        kube-proxy-bhqnc                                  1/1     Running   1 (43h ago)   46h     172.172.8.65      k8s-node1.shiyan.com      <none>           <none>
kube-system        kube-proxy-fxc47                                  1/1     Running   1 (43h ago)   46h     172.172.8.61      k8s-master01.shiyan.com   <none>           <none>
kube-system        kube-scheduler-k8s-master01.shiyan.com            1/1     Running   1 (43h ago)   46h     172.172.8.61      k8s-master01.shiyan.com   <none>           <none>
tigera-operator    tigera-operator-748c69cf45-74qk2                  1/1     Running   5 (60m ago)   62m     172.172.8.65      k8s-node1.shiyan.com      <none>           <none>
[root@k8s-master01 ~]# kubectl get nodes -o wide
NAME                      STATUS   ROLES           AGE   VERSION   INTERNAL-IP    EXTERNAL-IP   OS-IMAGE                KERNEL-VERSION                CONTAINER-RUNTIME
k8s-master01.shiyan.com   Ready    control-plane   46h   v1.29.3   172.172.8.61   <none>        CentOS Linux 7 (Core)   5.4.272-1.el7.elrepo.x86_64   docker://25.0.4
k8s-node1.shiyan.com      Ready    <none>          46h   v1.29.3   172.172.8.65   <none>        CentOS Linux 7 (Core)   5.4.272-1.el7.elrepo.x86_64   docker://25.0.4
k8s-node2.shiyan.com      Ready    <none>          46h   v1.29.3   172.172.8.66   <none>        CentOS Linux 7 (Core)   5.4.272-1.el7.elrepo.x86_64   docker://25.0.4

3.7 Switch the kube-proxy mode to IPVS

kube-proxy defaults to iptables mode, whose performance degrades as the number of services grows, so for production use it is preferable to switch to IPVS mode.

[root@k8s-master01 ~]# kubectl get pods -n kube-system | grep proxy
kube-proxy-74g69                                  1/1     Running   1 (43h ago)   46h
kube-proxy-bhqnc                                  1/1     Running   1 (43h ago)   46h
kube-proxy-fxc47                                  1/1     Running   1 (43h ago)   46h



[root@k8s-master01 ~]# kubectl get configmap kube-proxy   -n kube-system  -o yaml | grep mode 
    mode: ""
    
    
[root@k8s-master01 ~]# kubectl edit configmap kube-proxy -n kube-system		# find the "mode" line and change it to  mode: "ipvs"
[root@k8s-master01 ~]# kubectl get configmap kube-proxy   -n kube-system  -o yaml | grep mode 
    mode: "ipvs"
    
    
# Restart the kube-proxy DaemonSet
[root@k8s-master01 ~]# kubectl rollout restart daemonset kube-proxy -n kube-system

# Check the logs to confirm the IPVS proxier is in use
[root@k8s-master01 ~]# kubectl logs -n kube-system kube-proxy-74kf7 | grep ipvs
I0321 07:35:14.711375       1 server_others.go:236] "Using ipvs Proxier"
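As a final check, the IPVS virtual servers created by kube-proxy can be listed directly on any node. This assumes the ipvsadm package is installed, which is not part of the steps above:

[root@k8s-master01 ~]# yum -y install ipvsadm
[root@k8s-master01 ~]# ipvsadm -Ln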

References

https://blog.csdn.net/weixin_38924998/article/details/135493764
