[K8S] Setting Up a K8s Cluster

1. minikube (for beginners)

See the official installation docs.

2. kubeadm (the tool used below)

2.1 Initial setup (all nodes)

# Disable the firewall
systemctl stop firewalld
systemctl disable firewalld

# Disable SELinux
sed -i 's/enforcing/disabled/' /etc/selinux/config  # permanent
setenforce 0  # temporary

# Disable swap (kubelet refuses to start while swap is enabled)
swapoff -a  # temporary
sed -ri 's/.*swap.*/#&/' /etc/fstab    # permanent

# After disabling swap, be sure to reboot the VM!
# Set each hostname according to your plan
hostnamectl set-hostname <hostname>

# Add hosts entries on the master
cat >> /etc/hosts << EOF
192.168.113.120 k8s-master
192.168.113.121 k8s-node1
192.168.113.122 k8s-node2
EOF
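Before appending, it can help to sanity-check that the planned entries are well-formed (exactly one IPv4 address and one hostname per line). A small sketch that validates the entries in a temp file rather than touching /etc/hosts directly:

```shell
# Write the planned entries to a temp file and validate each line's shape.
# This is only a pre-check sketch; the real append targets /etc/hosts.
cat > /tmp/k8s-hosts <<'EOF'
192.168.113.120 k8s-master
192.168.113.121 k8s-node1
192.168.113.122 k8s-node2
EOF
awk 'NF != 2 || $1 !~ /^[0-9]+\.[0-9]+\.[0-9]+\.[0-9]+$/ { bad = 1 }
     END { exit bad }' /tmp/k8s-hosts && echo "hosts entries look OK"
```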


# Pass bridged IPv4 traffic to iptables chains
cat > /etc/sysctl.d/k8s.conf << EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF

sysctl --system  # apply


# Time sync
yum install ntpdate -y
ntpdate time.windows.com

# Optional extra settings for /etc/sysctl.d/k8s.conf:
# enable bridge mode so bridge traffic is passed to iptables chains
net.bridge.bridge-nf-call-ip6tables=1
net.bridge.bridge-nf-call-iptables=1
# disable the IPv6 protocol, enable IP forwarding
net.ipv6.conf.all.disable_ipv6=1
net.ipv4.ip_forward=1

If ntpdate fails with:

ntpdate[71187]: no server suitable for synchronization found

try one of the commonly used NTP servers:

China National Time Service Center: 210.72.145.44
NTP server (Shanghai): ntp.api.bz

ntpdate -u ntp.api.bz
ntpdate -u 210.72.145.44

Or use chrony instead:

sudo yum install chrony
sudo systemctl start chronyd
sudo systemctl enable chronyd

2.2 Install software (all nodes)

2.2.1 Install Docker (CentOS 7)
# Step 1: install required system tools
sudo yum install -y yum-utils device-mapper-persistent-data lvm2

# Step 2: add the repository
sudo yum-config-manager --add-repo https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
# PS: if you hit the following error
Loaded plugins: fastestmirror
adding repo from: https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
grabbing file https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo to /etc/yum.repos.d/docker-ce.repo
Could not fetch/save url https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo to file /etc/yum.repos.d/docker-ce.repo: [Errno 14] curl#60 - "Peer's Certificate issuer is not recognized."
# edit /etc/yum.conf and add sslverify=0 under [main]
vi /etc/yum.conf
# configuration ----------------
[main]
sslverify=0
# -----------------------------

# Step 3: refresh the cache and install Docker CE
sudo yum makecache fast
sudo yum -y install docker-ce

# Step 4: start the Docker service
sudo service docker start

# Note:
# The repo only enables the latest stable packages by default; you can enable
# other channels (e.g. the test channel) by editing the repo file:
# vim /etc/yum.repos.d/docker-ce.repo
#   change enabled=0 to enabled=1 under [docker-ce-test]
#
# To install a specific Docker CE version:
# Step 1: list available versions:
# yum list docker-ce.x86_64 --showduplicates | sort -r
#   Loading mirror speeds from cached hostfile
#   Loaded plugins: branch, fastestmirror, langpacks
#   docker-ce.x86_64            17.03.1.ce-1.el7.centos            docker-ce-stable
#   docker-ce.x86_64            17.03.1.ce-1.el7.centos            @docker-ce-stable
#   docker-ce.x86_64            17.03.0.ce-1.el7.centos            docker-ce-stable
#   Available Packages
# Step 2: install the chosen version (VERSION e.g. 17.03.0.ce-1.el7.centos above):
# sudo yum -y install docker-ce-[VERSION]

Edit /etc/docker/daemon.json to use a registry mirror (replace the URL below with your own accelerator address):

sudo mkdir -p /etc/docker
sudo tee /etc/docker/daemon.json <<-'EOF'
{
  "registry-mirrors": ["https://s3n1fl03.mirror.aliyuncs.com"]
}
EOF
sudo systemctl daemon-reload
sudo systemctl restart docker
2.2.2 Add the Aliyun yum repo

cat > /etc/yum.repos.d/kubernetes.repo << EOF
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF

2.2.3 Install kubeadm, kubelet and kubectl

yum install -y kubelet-1.23.6 kubeadm-1.23.6 kubectl-1.23.6

systemctl enable kubelet

# Set Docker's cgroup driver to systemd: add the following key to /etc/docker/daemon.json
"exec-opts": ["native.cgroupdriver=systemd"]

# Restart docker
systemctl daemon-reload
systemctl restart docker

# To uninstall: yum remove -y kubelet-1.23.6 kubeadm-1.23.6 kubectl-1.23.6
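Since /etc/docker/daemon.json must remain valid JSON, pasting the exec-opts line in naively (without a comma, or outside the braces) will stop Docker from starting. A merged file combining the mirror from 2.2.1 with the cgroup driver looks like the sketch below; it writes to /tmp for illustration and validates the JSON first, while on a real node the target is /etc/docker/daemon.json:

```shell
# Merged daemon.json: registry mirror + systemd cgroup driver in one JSON object.
# Written to /tmp here for illustration; the real file is /etc/docker/daemon.json.
tee /tmp/daemon.json <<-'EOF'
{
  "registry-mirrors": ["https://s3n1fl03.mirror.aliyuncs.com"],
  "exec-opts": ["native.cgroupdriver=systemd"]
}
EOF
# Validate the JSON before restarting Docker
python3 -m json.tool /tmp/daemon.json > /dev/null && echo "daemon.json is valid"
```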

2.3 Deploy the master

# Run on the master node only

kubeadm init \
      --apiserver-advertise-address=192.168.145.140 \
      --image-repository registry.aliyuncs.com/google_containers \
      --kubernetes-version v1.23.6 \
      --service-cidr=10.96.0.0/12 \
      --pod-network-cidr=10.244.0.0/16

# After a successful init, copy and run the following
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
kubectl get nodes

Sample output from kubeadm init:

[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 24.0.7. Latest validated version: 20.10
....
Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.145.140:6443 --token 6a6jjq.7d6jz1lj6gtzagzv \
        --discovery-token-ca-cert-hash sha256:0856c1ad4d5ceff441164319d038c7f658623e34778d7984bd6c391f538dc547


After running kubectl get nodes:

[root@zcyone ~]# kubectl get nodes
NAME     STATUS     ROLES                  AGE   VERSION
zcyone   NotReady   control-plane,master   88s   v1.23.6

2.4 Join worker nodes

Run on k8s-node1 and k8s-node2 respectively:

# Copy the join command printed on the master after kubeadm init succeeds

kubeadm join 192.168.113.120:6443 --token w34ha2.66if2c8nwmeat9o7 --discovery-token-ca-cert-hash sha256:20e2227554f8883811c01edd850f0cf2f396589d32b57b9984de3353a7389477


# If you lost the init token, retrieve or recreate it as follows
# If the token has expired, create a new one
kubeadm token create

# If the token has not expired, list it
kubeadm token list

# Compute the --discovery-token-ca-cert-hash value; prefix the result with sha256:
openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt | openssl rsa -pubin -outform der 2>/dev/null | \
openssl dgst -sha256 -hex | sed 's/^.* //'

Or simply:

kubeadm token create --print-join-command
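For reference, the --discovery-token-ca-cert-hash value is just the SHA-256 digest of the cluster CA's DER-encoded public key. A self-contained sketch of the same pipeline, using a throwaway certificate in place of /etc/kubernetes/pki/ca.crt:

```shell
# Generate a throwaway CA cert (stand-in for /etc/kubernetes/pki/ca.crt)
openssl req -x509 -newkey rsa:2048 -nodes -days 1 -subj "/CN=demo-ca" \
  -keyout /tmp/demo-ca.key -out /tmp/demo-ca.crt 2>/dev/null

# Same pipeline as above: extract public key -> DER -> sha256 -> hex digest
HASH=$(openssl x509 -pubkey -in /tmp/demo-ca.crt \
  | openssl rsa -pubin -outform der 2>/dev/null \
  | openssl dgst -sha256 -hex | sed 's/^.* //')
echo "sha256:$HASH"   # the value to pass to --discovery-token-ca-cert-hash
```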


[root@zcyone ~]# kubectl get nodes
NAME     STATUS     ROLES                  AGE   VERSION
zcyone   NotReady   control-plane,master   15m   v1.23.6
zcytwo   NotReady   <none>                 18s   v1.23.6


On the master:

kubectl get componentstatus   # short form: kubectl get cs


[root@zcyone ~]# kubectl get componentstatus
Warning: v1 ComponentStatus is deprecated in v1.19+
NAME                 STATUS    MESSAGE                         ERROR
controller-manager   Healthy   ok
etcd-0               Healthy   {"health":"true","reason":""}
scheduler            Healthy   ok

kubectl get pods -n kube-system

Error when joining a new node (probably because the cloned VM was not fully reset):

discovery.bootstrapToken: Invalid value: "": using token-based discovery without caCertHashes can be unsafe. Set unsafeSkipCAVerification as true in your kubeadm config file or pass --discovery-token-unsafe-skip-ca-verification flag to continue
To see the stack trace of this error execute with --v=5 or higher

This error means token-based discovery was used without a valid caCertHashes value, which can be unsafe. To proceed, set unsafeSkipCAVerification to true in the kubeadm config file, or pass the --discovery-token-unsafe-skip-ca-verification flag on the command line to bypass CA verification. Note that skipping CA verification carries some security risk; only do so once you have confirmed it is acceptable.

2.5 Deploy the CNI network plugin

[root@zcyone ~]# cd /opt
[root@zcyone opt]# mkdir k8s
[root@zcyone opt]# cd k8s
[root@zcyone k8s]#

Calico

# Run on the master node
# Download the calico manifest; these two URLs may time out (both were unreachable here)
curl https://docs.projectcalico.org/manifests/calico.yaml -O
curl -O https://docs.tigera.io/calico/latest/manifests/calico.yaml
# https://www.cnblogs.com/lfxx/p/17459878.html

# This archived URL worked:
curl -O https://docs.tigera.io/archive/v3.25/manifests/calico.yaml


# Edit calico.yaml: uncomment CALICO_IPV4POOL_CIDR and set it to the same CIDR
# used at kubeadm init:
- name: CALICO_IPV4POOL_CIDR
  value: "10.244.0.0/16"
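The CIDR edit can be scripted. In the stock v3.25 manifest the variable ships commented out with a default of 192.168.0.0/16; a sed sketch, demonstrated on a stand-in fragment (run the same substitutions against the real calico.yaml):

```shell
# Stand-in for the relevant calico.yaml fragment (indentation as in the manifest)
cat > /tmp/pool-frag.yaml <<'EOF'
            # - name: CALICO_IPV4POOL_CIDR
            #   value: "192.168.0.0/16"
EOF
# Uncomment the variable and set it to the pod CIDR used at kubeadm init
sed -i -e 's|# - name: CALICO_IPV4POOL_CIDR|- name: CALICO_IPV4POOL_CIDR|' \
       -e 's|#   value: "192.168.0.0/16"|  value: "10.244.0.0/16"|' /tmp/pool-frag.yaml
cat /tmp/pool-frag.yaml
```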

# If the node has multiple NICs, also set IP_AUTODETECTION_METHOD to the correct
# interface name

# Strip the docker.io/ prefix from image names to avoid slow pulls failing
[root@zcyone k8s]# grep image calico.yaml
          image: docker.io/calico/cni:v3.25.0
          imagePullPolicy: IfNotPresent
          image: docker.io/calico/cni:v3.25.0
          imagePullPolicy: IfNotPresent
          image: docker.io/calico/node:v3.25.0
          imagePullPolicy: IfNotPresent
          image: docker.io/calico/node:v3.25.0
          imagePullPolicy: IfNotPresent
          image: docker.io/calico/kube-controllers:v3.25.0
          imagePullPolicy: IfNotPresent



[root@zcyone k8s]# sed -i 's#docker.io/##g' calico.yaml
[root@zcyone k8s]# grep image calico.yaml
          image: calico/cni:v3.25.0
          imagePullPolicy: IfNotPresent
          image: calico/cni:v3.25.0
          imagePullPolicy: IfNotPresent
          image: calico/node:v3.25.0
          imagePullPolicy: IfNotPresent
          image: calico/node:v3.25.0
          imagePullPolicy: IfNotPresent
          image: calico/kube-controllers:v3.25.0
          imagePullPolicy: IfNotPresent

# Optionally pre-pull the images above; docker pull takes one image per
# invocation, so loop over the list:
for img in calico/cni:v3.25.0 calico/node:v3.25.0 calico/kube-controllers:v3.25.0; do docker pull "$img"; done

# Apply the manifest
kubectl apply -f calico.yaml
# Check cluster health
kubectl get cs


[root@zcyone k8s]# kubectl get po -n kube-system


# Inspect why a pod is not READY
[root@zcyone k8s]# kubectl describe po calico-node-plmzm -n kube-system
Events:
  Type     Reason     Age                  From               Message
  ----     ------     ----                 ----               -------
  Normal   Scheduled  7m1s                 default-scheduler  Successfully assigned kube-system/calico-node-plmzm to zcyone
  Warning  Failed     75s (x3 over 5m15s)  kubelet            Failed to pull image "calico/cni:v3.25.0": rpc error: code = Unknown desc = context canceled
  Warning  Failed     75s (x3 over 5m15s)  kubelet            Error: ErrImagePull
  Normal   BackOff    37s (x5 over 5m14s)  kubelet            Back-off pulling image "calico/cni:v3.25.0"
  Warning  Failed     37s (x5 over 5m14s)  kubelet            Error: ImagePullBackOff
  Normal   Pulling    26s (x4 over 7m)     kubelet            Pulling image "calico/cni:v3.25.0"

docker pull for this image also hangs (stuck waiting).


Check for NIC problems:
https://blog.csdn.net/u013149714/article/details/127763279
https://blog.csdn.net/bouttime/article/details/121014194

kubectl apply -f calico.yaml

Further references:

k8s 1.27.3 offline install of Calico 3.26.1 (CSDN blog)

Installing the Calico plugin on a k8s cluster - christine-ting (cnblogs.com)

Test

# Create a deployment
kubectl create deployment nginx --image=nginx

# Expose the port
kubectl expose deployment nginx --port=80 --type=NodePort

# Check pod and service info
kubectl get pod,svc
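To hit the service from outside the cluster you need the randomly assigned NodePort, shown after 80: in the PORT(S) column. A parsing sketch against sample output; the port 31234 below is hypothetical, and on a real cluster you would feed it the live kubectl get svc nginx output:

```shell
# Sample `kubectl get svc nginx --no-headers` output; 31234 is a made-up NodePort
SVC_LINE='nginx   NodePort   10.96.120.10   <none>   80:31234/TCP   1m'

# Extract the NodePort from the 80:<port>/TCP field
NODE_PORT=$(echo "$SVC_LINE" | sed -n 's|.*80:\([0-9]*\)/TCP.*|\1|p')
echo "$NODE_PORT"

# On a real cluster: curl http://<any-node-ip>:$NODE_PORT
# (should return the nginx welcome page)
```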

Worker nodes NotReady

kubectl get events

 wanted to free 2960751001 bytes, but freed 0 bytes space with errors in image deletion: rpc error: code = Unknown desc = Error response from daemon: conflict: unable to remove repository reference "hello-world:latest" (must force) - container 2290b2ef2e69 is using its referenced image 9c7a54a9a43c

Causes found:

1. Docker was not started on zcythree.

2. kubelet on zcytwo was in a failed state:

[root@zcytwo ~]# systemctl  status kubelet
● kubelet.service - kubelet: The Kubernetes Node Agent
   Loaded: loaded (/usr/lib/systemd/system/kubelet.service; enabled; vendor preset: disabled)
  Drop-In: /usr/lib/systemd/system/kubelet.service.d
           └─10-kubeadm.conf
   Active: activating (auto-restart) (Result: exit-code) since 日 2024-03-17 16:24:17 CST; 706ms ago
     Docs: https://kubernetes.io/docs/
  Process: 28303 ExecStart=/usr/bin/kubelet $KUBELET_KUBECONFIG_ARGS $KUBELET_CONFIG_ARGS $KUBELET_KUBEADM_ARGS $KUBELET_EXTRA_ARGS (code=exited, status=1/FAILURE)
 Main PID: 28303 (code=exited, status=1/FAILURE)

3月 17 16:24:17 zcytwo systemd[1]: Unit kubelet.service entered failed state.
3月 17 16:24:17 zcytwo systemd[1]: kubelet.service failed.

Root cause: the node's kubelet version did not match the master's. Check the installed versions with:

rpm -qa | grep kube

then install versions matching the master and restart kubelet.

References:

k8s kubelet service fails to start with code=exited, status=1/FAILURE (CSDN blog)

Troubleshooting abnormal kubelet startup (CSDN blog)

kubelet log error: command failed, err="failed to parse kubelet flag: unknown flag: --network-plugin" (zhihu.com)
