Installing a k8s cluster (Calico + CRI-O)

Setting up a k8s cluster (v1.21.2) with kubeadm on CentOS 7.9 (ARM)

Environment:

  1. CentOS 7.9, kernel 4.18.0-348.20.1.el7.aarch64

  2. CRI-O: 1.21.8

  3. Go: go1.18.9 linux/arm64

  4. runc: 1.0.2

  5. k8s: 1.21.2

  6. kubelet-1.21.2, kubeadm-1.21.2, kubectl-1.21.2

Notes:

  1. runc must be version 1.0.2 or later

  2. Building k8s 1.21 requires Go 1.16 or later

Machines used for this setup:

hostname     ip              role
k8s-003ecs   172.28.222.80   master
k8s-002ecs   172.28.222.82   node
k8s-001ecs   172.28.222.81   node

Installation steps

Initial configuration

  1. Disable the firewall and all network filtering to guarantee connectivity between the cluster nodes. Do not skip this step, or you may hit all sorts of hard-to-debug connectivity problems later.

# Disable the firewall
systemctl stop firewalld
systemctl disable firewalld

# Disable SELinux
# permanently (takes effect after reboot)
sed -i 's/enforcing/disabled/' /etc/selinux/config
# for the current session
setenforce 0

# Disable swap
# for the current session
swapoff -a
# permanently
sed -ri 's/.*swap.*/#&/' /etc/fstab



# Add hosts entries on the master and nodes
cat >> /etc/hosts << EOF
172.28.222.81   k8s-001ecs
172.28.222.82   k8s-002ecs
172.28.222.80   k8s-003ecs
EOF


# Time synchronization
yum install ntpdate -y
ntpdate time.windows.com

# Kernel parameters
cat > /etc/sysctl.d/99-kubernetes-cri.conf <<EOF
net.bridge.bridge-nf-call-iptables  = 1
net.ipv4.ip_forward                 = 1
net.bridge.bridge-nf-call-ip6tables = 1
vm.swappiness=0
EOF
 
modprobe overlay
modprobe br_netfilter

# Apply
sysctl --system 

# Install ipvsadm and set the ipvs module to load at boot
yum install -y ipvsadm

# Note the quoted 'EOF': without it, $? would be expanded while writing the file
cat > /etc/sysconfig/modules/ipvs.modules << 'EOF'
/sbin/modinfo -F filename ip_vs > /dev/null 2>&1
if [ $? -eq 0 ];then
 /sbin/modprobe ip_vs
fi
EOF
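The stock script above only probes ip_vs itself; kube-proxy's IPVS mode also wants the scheduler and conntrack modules, and the file must be executable for CentOS 7 to run it at boot. A fuller sketch (the helper name is made up; the module list is the commonly used set, and the modinfo guard skips anything the running kernel does not ship):

```shell
# Hypothetical helper: write a module-load script covering the ipvs modules
# kube-proxy commonly needs. $1 is the target path, normally
# /etc/sysconfig/modules/ipvs.modules on CentOS 7.
write_ipvs_modules() {
  cat > "$1" << 'EOF'
#!/bin/bash
# Load each module only if it exists for the running kernel.
for mod in ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack nf_conntrack_ipv4; do
  /sbin/modinfo -F filename "$mod" > /dev/null 2>&1 && /sbin/modprobe "$mod"
done
EOF
  chmod 755 "$1"
}
# write_ipvs_modules /etc/sysconfig/modules/ipvs.modules
```

Making the file mode 755 matters: the boot-time loader only executes scripts in /etc/sysconfig/modules/ that are executable.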

Installing CRI-O

Install the build dependencies

yum install -y \
  containers-common \
  device-mapper-devel \
  git \
  glib2-devel \
  glibc-devel \
  glibc-static \
  gpgme-devel \
  libassuan-devel \
  libgpg-error-devel \
  libseccomp-devel \
  libselinux-devel \
  pkgconfig \
  make
------------------------------------
# Install a newer Go
[root@k8s-003ecs ~]#  wget https://dl.google.com/go/go1.18.9.linux-arm64.tar.gz
[root@k8s-003ecs ~]# tar xvf go1.18.9.linux-arm64.tar.gz -C /usr/local/
[root@k8s-003ecs ~]# ln -s /usr/local/go/bin/* /usr/bin/ 
[root@k8s-003ecs ~]# go version
go version go1.18.9 linux/arm64

# Install a newer runc
git clone https://github.com/opencontainers/runc -b v1.0.2
cd runc
make
make install
[root@k8s-003ecs ~]# runc -v
runc version 1.0.2
spec: 1.0.2-dev
go: go1.17
libseccomp: 2.3.1

[root@k8s-003ecs ~]# go version
go version go1.18.9 linux/arm64
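The transcript above jumps from runc straight to the crio version check without showing the CRI-O build itself. Presumably it follows the same clone-and-make pattern as runc; a sketch, with the branch name an assumption inferred from the 1.21.8 version reported by crio -v:

```shell
# Assumed CRI-O source build; the v1.21.8 branch/tag is inferred, not shown in the source text
git clone https://github.com/cri-o/cri-o -b v1.21.8
cd cri-o
make
sudo make install
```

The later `cd cri-o/` and `make install.config` steps operate inside this same checkout.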
[root@k8s-003ecs ~]# crio -v
crio version 1.21.8
Version:       1.21.8
GitCommit:     6fe878638a6f9dccd239b0147c366e6f22cbdaa1
GitTreeState:  clean
BuildDate:     2023-01-10T03:01:20Z
GoVersion:     go1.18.9
Compiler:      gc
Platform:      linux/arm64
Linkmode:      dynamic

CRI-O's config file lives at /etc/crio/crio.conf by default; generate a default one with crio config --default > /etc/crio/crio.conf. Then change the pause image address; if you don't, CRI-O will later fail to pull the image and error out on startup (remember: every machine in the cluster needs this change).

sed -i 's?k8s.gcr.io/pause:3.5?registry.aliyuncs.com/google_containers/pause:3.5?g' /etc/crio/crio.conf

Installing conmon and the CNI plugins

git clone https://github.com/containers/conmon
cd conmon
make
sudo make install
--------------
git clone https://github.com/containernetworking/plugins
cd plugins
# git checkout v0.8.7

./build_linux.sh
--------------
sudo mkdir -p /opt/cni/bin
sudo cp bin/* /opt/cni/bin/

CRI-O configuration

[root@k8s-003ecs ~]# cd cri-o/
[root@k8s-003ecs cri-o]# make install.config
install  -d /usr/local/share/containers/oci/hooks.d
install  -d /etc/crio/crio.conf.d
install  -D -m 644 crio.conf /etc/crio/crio.conf
install  -D -m 644 crio-umount.conf /usr/local/share/oci-umount/oci-umount.d/crio-umount.conf
install  -D -m 644 crictl.yaml /etc
[root@k8s-003ecs cri-o]# make install.systemd
install  -D -m 644 contrib/systemd/crio.service /usr/local/lib/systemd/system/crio.service
ln -sf crio.service /usr/local/lib/systemd/system/cri-o.service
install  -D -m 644 contrib/systemd/crio-shutdown.service /usr/local/lib/systemd/system/crio-shutdown.service
install  -D -m 644 contrib/systemd/crio-wipe.service /usr/local/lib/systemd/system/crio-wipe.service

Set up CRI-O with systemd
sudo systemctl daemon-reload
sudo systemctl enable crio
sudo systemctl start crio
systemctl status crio
crio --version


[root@k8s-003ecs cri-o]# systemctl status crio
● crio.service - Container Runtime Interface for OCI (CRI-O)
   Loaded: loaded (/usr/local/lib/systemd/system/crio.service; enabled; vendor preset: disabled)
   Active: active (running) since Tue 2023-01-10 11:06:02 CST; 3s ago
     Docs: https://github.com/cri-o/cri-o
 Main PID: 24354 (crio)
   CGroup: /system.slice/crio.service
           └─24354 /usr/local/bin/crio

Installing crictl

VERSION="v1.21.0"
wget https://github.com/kubernetes-sigs/cri-tools/releases/download/$VERSION/crictl-$VERSION-linux-arm64.tar.gz
sudo tar zxvf crictl-$VERSION-linux-arm64.tar.gz -C /usr/local/bin
rm -f crictl-$VERSION-linux-arm64.tar.gz


crictl --runtime-endpoint unix:///var/run/crio/crio.sock version

[root@k8s-001ecs ~]# crictl --version
crictl version v1.21.0
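Rather than passing --runtime-endpoint on every call, crictl reads its defaults from /etc/crictl.yaml (the make install.config step earlier already installed one to /etc, so check it before overwriting). A minimal sketch pointing both endpoints at the CRI-O socket:

```shell
cat > /etc/crictl.yaml << 'EOF'
runtime-endpoint: unix:///var/run/crio/crio.sock
image-endpoint: unix:///var/run/crio/crio.sock
timeout: 10
EOF
```

With this in place, plain `crictl version` or `crictl ps` works without extra flags.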

Deploying Kubernetes with kubeadm (add the Kubernetes yum repo)

cat > /etc/yum.repos.d/kubernetes.repo << EOF
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-aarch64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF

Install kubeadm, kubelet, and kubectl


yum install -y kubelet-1.21.2 kubeadm-1.21.2 kubectl-1.21.2
# If the packages can't be found, your Kubernetes repo is misconfigured;
# check whether the machine is amd64 or arm64 and fix the baseurl suffix accordingly

#### This step matters (the pause image version must match the one configured for CRI-O)
cat > /etc/sysconfig/kubelet <<EOF
KUBELET_EXTRA_ARGS=--container-runtime=remote --container-runtime-endpoint='unix:///var/run/crio/crio.sock' --pod-infra-container-image=registry.aliyuncs.com/google_containers/pause:3.5
EOF

# Enable at boot
systemctl daemon-reload && systemctl enable kubelet --now

# kubelet will not be fully up yet; it needs kubeadm init first, so ignore it for now
# systemctl status kubelet   currently shows "activating"

# Check logs with: journalctl -xe

Configuring the master node

[root@k8s-001ecs ~]# kubeadm config print init-defaults > kubeadm-config.yaml
W0903 15:50:51.208437   16483 kubelet.go:210] cannot automatically set CgroupDriver when starting the Kubelet: cannot execute 'docker info -f {{.CgroupDriver}}': executable file not found in $PATH
# This warning is fine; the config file has been generated

# Edit kubeadm-config.yaml: clear the default content and fill in the config below
Notes:
advertiseAddress must be changed to the master's IP
podSubnet (the virtual pod network) must match what Calico is configured with later
criSocket must point at your CRI-O sock file
imageRepository: point it at your own registry, or use Aliyun's registry.aliyuncs.com/google_containers
------------------------------
apiVersion: kubeadm.k8s.io/v1beta2
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 172.28.222.80
  bindPort: 6443
nodeRegistration:
  criSocket: /var/run/crio/crio.sock
  taints:
  - effect: PreferNoSchedule
    key: node-role.kubernetes.io/master
---
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
kubernetesVersion: v1.21.0
imageRepository: registry.aliyuncs.com/google_containers
networking:
  podSubnet: 10.244.0.0/16

kubeadm init

[root@k8s-003ecs ~]# kubeadm init --config kubeadm-config.yaml
# init failures and their fixes are recorded in the error roundup at the end

Before initializing the cluster, you can pre-pull the images k8s needs on every node with kubeadm config images pull --config kubeadm-config.yaml; if your network is fast and you don't mind waiting, skip this step.

[root@k8s-001ecs ~]# kubeadm config images pull --config kubeadm-config.yaml 
[config/images] Pulled registry.aliyuncs.com/google_containers/kube-apiserver:v1.21.0
[config/images] Pulled registry.aliyuncs.com/google_containers/kube-controller-manager:v1.21.0
[config/images] Pulled registry.aliyuncs.com/google_containers/kube-scheduler:v1.21.0
[config/images] Pulled registry.aliyuncs.com/google_containers/kube-proxy:v1.21.0
[config/images] Pulled registry.aliyuncs.com/google_containers/pause:3.4.1
[config/images] Pulled registry.aliyuncs.com/google_containers/etcd:3.4.13-0
[config/images] Pulled registry.aliyuncs.com/google_containers/coredns:v1.8.0
# pause:3.5 was configured manually and can be pulled by hand; the versions don't conflict, and you can also just configure pause:3.4.1
kubeadm init completed
# on success, init prints the kubeadm join command that nodes use to join the cluster
kubeadm join 172.28.222.80:6443 --token 5ayvdr.6o9ybg0s2fuxjoap \
        --discovery-token-ca-cert-hash sha256:3152d7f2763c20274565c0692444dab9fb2ec02a76269b7b

Set up kubectl [run on the master]

# If you have run init before, delete the old config first:  rm -rf $HOME/.kube
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

# kubectl works from here on; kubelet is up by this point
kubectl get nodes
kubectl get pods --all-namespaces -o wide
kubectl describe pod coredns-7d89967ddf-4qjvl -n kube-system
kubectl describe pod coredns-7d89967ddf-6md2p -n kube-system

# coredns will not be Running yet; without a network plugin it stays stuck in creating, while the other services should be up. Check again after installing Calico.
# All k8s system components live in the kube-system namespace
# Note: etcd, kube-apiserver, kube-controller-manager and kube-scheduler are deployed as static pods; their manifests are in /etc/kubernetes/manifests on the host, which kubelet loads automatically to start the pods.

Installing Calico (manifest)

[root@k8s-001ecs ~]# wget https://docs.projectcalico.org/v3.14/manifests/calico.yaml --no-check-certificate

[root@k8s-001ecs ~]# vim calico.yaml
Change the pod network in calico.yaml to the subnet passed to kubeadm init as --pod-network-cidr.
Open the file in vim, search for 192, and edit as marked below:
# no effect. This should fall within `--cluster-cidr`.
# - name: CALICO_IPV4POOL_CIDR
#   value: "192.168.0.0/16"
# Disable file logging so `kubectl logs` works.
- name: CALICO_DISABLE_FILE_LOGGING
  value: "true"

Remove the two "#" characters (and the space after each) and change 192.168.0.0/16 to 10.244.0.0/16 (the same value as podSubnet in kubeadm-config.yaml):
# no effect. This should fall within `--cluster-cidr`.
- name: CALICO_IPV4POOL_CIDR
  value: "10.244.0.0/16"
# Disable file logging so `kubectl logs` works.
- name: CALICO_DISABLE_FILE_LOGGING
  value: "true"
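The hand edit above can also be scripted. A sketch using sed, assuming the v3.14 manifest's exact comment layout (the helper name is made up):

```shell
# Hypothetical helper: uncomment CALICO_IPV4POOL_CIDR and set the pod CIDR.
# $1 = manifest path, $2 = desired CIDR (e.g. 10.244.0.0/16).
set_calico_cidr() {
  sed -i \
    -e 's|# \(- name: CALICO_IPV4POOL_CIDR\)|\1|' \
    -e "s|#   value: \"192.168.0.0/16\"|  value: \"$2\"|" \
    "$1"
}
# set_calico_cidr calico.yaml 10.244.0.0/16
```

Stripping "# " from the list item and "# " plus one space from the value line keeps the YAML indentation consistent with the surrounding env entries.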

# Apply calico.yaml. It references quite a few images, so allow time for the pulls, or pull them in advance with crictl
[root@k8s-001ecs ~]# kubectl apply -f calico.yaml
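For the pre-pull, note that crictl pull does not read image names from stdin, so the names have to be extracted first. A sketch (the helper name is made up; the pull itself needs CRI-O running):

```shell
# Hypothetical helper: list every unique image referenced in a manifest.
list_images() {
  grep -E 'image:' "$1" | awk '{print $NF}' | sort -u
}
# Pre-pull them all:
# list_images calico.yaml | xargs -n1 crictl pull
```

Run this on every node before applying the manifest so the pods start without waiting on image downloads.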

Joining nodes to the cluster [worker nodes]

Now go to the node1 and node2 servers and run the kubeadm join command that kubeadm init printed, to add the new nodes to the cluster:

# Note: only run this after master init completes. The token and hash are unique per cluster; copy the ones your own init generated!
kubeadm join 172.28.222.80:6443 --token 5ayvdr.6o9ybg0s2fuxjoap \
        --discovery-token-ca-cert-hash sha256:3152d7f2763c20274565c0692444dab9fb2ec02a76269b7b

# The token is valid for 24 hours by default. Once it expires it can no longer be used, and you need to create a new one:
kubeadm token create --print-join-command
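kubeadm token create --print-join-command prints a complete join line. If you ever need just the CA hash on its own, it can also be recomputed from the cluster CA certificate; this is the standard openssl pipeline from the kubeadm reference docs, assuming the default /etc/kubernetes/pki layout:

```shell
# Recompute the value for --discovery-token-ca-cert-hash from a CA cert.
# $1 = path to the CA certificate (normally /etc/kubernetes/pki/ca.crt).
ca_cert_hash() {
  openssl x509 -pubkey -in "$1" \
    | openssl rsa -pubin -outform der 2> /dev/null \
    | openssl dgst -sha256 -hex \
    | sed 's/^.* //'
}
# ca_cert_hash /etc/kubernetes/pki/ca.crt
```

The result is the 64-character hex string that goes after "sha256:" in the join command.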

# Once the nodes have joined, run this on the master to check the cluster:
kubectl get node
[root@k8s-001ecs ~]# kubectl get node
NAME                 STATUS   ROLES                  AGE     VERSION
k8s-003ecs   Ready    control-plane,master   3d20h   v1.21.2
k8s-002ecs   Ready    <none>                 3d20h   v1.21.2

Error roundup

Error: the CRI-O build fails with "cannot find package "." in:"

Your Go version is too old; upgrade Go.

Error: Error initializing source docker://k8s.gcr.io/pause:3.5:

Fix: change the pause_image setting in /etc/crio/crio.conf, as covered earlier (remember: every node needs this change).

Error: "Error getting node" err="node "cnio-master" not found"

Fix: add the hostname-to-IP mapping in /etc/hosts.

[ERROR FileContent–proc-sys-net-bridge-bridge-nf-call-iptables]: /proc/sys/net/bridge/bridge-nf-call-iptables does not exist
[root@k8s-001ecs ~]# cat > /etc/sysctl.d/99-kubernetes-cri.conf <<EOF
> net.bridge.bridge-nf-call-iptables  = 1
> net.ipv4.ip_forward                 = 1
> net.bridge.bridge-nf-call-ip6tables = 1
> vm.swappiness=0
> EOF
# vm.swappiness was missing the first time around

[root@k8s-001ecs ~]# modprobe br_netfilter
[root@k8s-001ecs ~]# sysctl -p /etc/sysctl.d/99-kubernetes-cri.conf
Error: kubectl get cs reports Unhealthy
[root@k8s-001ecs ~]# kubectl get cs
Warning: v1 ComponentStatus is deprecated in v1.19+
NAME                 STATUS      MESSAGE                                                                                       ERROR
scheduler            Unhealthy   Get "http://127.0.0.1:10251/healthz": dial tcp 127.0.0.1:10251: connect: connection refused
controller-manager   Unhealthy   Get "http://127.0.0.1:10252/healthz": dial tcp 127.0.0.1:10252: connect: connection refused
etcd-0               Healthy     {"health":"true"}

Fix: edit the static pod manifests kube-controller-manager.yaml and kube-scheduler.yaml under /etc/kubernetes/manifests/, delete the "- --port=0" line from each command section, and restart kubelet.
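The edit can be scripted; a sketch, assuming "- --port=0" sits on its own line as it does in the default manifests (the helper name is made up):

```shell
# Hypothetical helper: drop the "- --port=0" flag from a static pod manifest.
remove_port_zero() {
  sed -i '/- --port=0/d' "$1"
}
# remove_port_zero /etc/kubernetes/manifests/kube-scheduler.yaml
# remove_port_zero /etc/kubernetes/manifests/kube-controller-manager.yaml
# systemctl restart kubelet
```

kubelet watches the manifests directory, so the pods restart with the flag removed once it reloads them.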
