Installing k8s on CentOS 7

1. Environment

1.1 Operating system

CentOS Linux release 7.8.2003
Linux master 3.10.0-1127.el7.x86_64 #1 SMP Tue Mar 31 23:36:51 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux

1.2 Three virtual machines

master, node1, node2

2. Preparation

Perform the following on all three machines.

2.1 Disable the firewall

systemctl stop firewalld
systemctl disable firewalld

2.2 Disable SELinux

setenforce 0
sed -i 's/^SELINUX=enforcing$/SELINUX=permissive/' /etc/selinux/config

2.3 Disable swap

Temporarily: swapoff -a
Permanently: edit /etc/fstab and comment out the swap line
Check the result with free
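For the permanent step, a one-line sed sketch (assuming the swap entry in /etc/fstab contains the word "swap") saves editing the file by hand:

sed -i '/ swap / s/^/#/' /etc/fstab   # comment out the swap mount line
free -h                               # the Swap row should now read all zeros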

2.4 Set the hostnames

# on the master node
hostnamectl set-hostname master
# on node1
hostnamectl set-hostname node1
# on node2
hostnamectl set-hostname node2

2.5 Pass bridged IPv4 traffic to iptables chains

Create k8s.conf under /etc/sysctl.d/ with the following content, and copy the file to the other two machines:
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1

Apply the settings:
sysctl --system
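Note that the two bridge settings only apply when the br_netfilter kernel module is loaded; the preflight errors seen later in sections 4.5 and 5.3 are caused by it being absent. A minimal way to load it now and on every boot:

modprobe br_netfilter                              # load the module immediately
echo br_netfilter > /etc/modules-load.d/k8s.conf   # have systemd load it at boot
sysctl --system                                    # re-apply the sysctl settings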

2.6 Configure time synchronization

yum install ntpdate -y
ntpdate ntp.aliyun.com
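ntpdate performs a one-shot sync only. To keep the clocks aligned, one option is a cron entry (a simple sketch; running chronyd instead would be the more robust choice):

(crontab -l 2>/dev/null; echo '*/30 * * * * /usr/sbin/ntpdate ntp.aliyun.com >/dev/null 2>&1') | crontab -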

3. Install containerd

Install on all three machines.

3.1 Note

===========================================
The "cri-containerd-(cni-)-VERSION-OS-ARCH.tar.gz" release bundle has been deprecated since containerd 1.6,
does not work on some Linux distributions, and will be removed in containerd 2.0.

Instead of this, install the following components separately, either from the binary or from the source:
* containerd:  https://github.com/containerd/containerd/releases
* runc:        https://github.com/opencontainers/runc/releases
* CNI plugins: https://github.com/containernetworking/plugins/releases

The CRI plugin has been included in containerd since containerd 1.1.

See also the "Getting started" document:
https://github.com/containerd/containerd/blob/main/docs/getting-started.md
===========================================

3.2 Download

wget https://download.fastgit.org/containerd/containerd/releases/download/v1.6.6/cri-containerd-1.6.6-linux-amd64.tar.gz

If the download fails, find cri-containerd-1.6.6-linux-amd64.tar.gz on the releases page:
https://github.com/containerd/containerd/releases?page=5

3.3 Extract and install

tar -C / -zxf cri-containerd-1.6.6-linux-amd64.tar.gz

3.4 Configure environment variables

Append the following to ~/.bashrc:
export PATH=$PATH:/usr/local/bin:/usr/local/sbin

Apply it immediately:
source ~/.bashrc

3.5 Verify the installation

Start containerd and enable it at boot:
systemctl start containerd
systemctl enable containerd

Check the version; output like the following indicates a successful install:
[root@master ~]# ctr version
Client:
  Version:  v1.6.6
  Revision: 10c12954828e7c7c9b6e0ea9b0c02b01407d3ae1
  Go version: go1.17.11

Server:
  Version:  v1.6.6
  Revision: 10c12954828e7c7c9b6e0ea9b0c02b01407d3ae1
  UUID: c205638a-6c08-43a8-81a4-b15f97ef5cdc

3.6 Configuration file

Create the default configuration:
mkdir /etc/containerd
containerd config default > /etc/containerd/config.toml

Modify it: point the image registries at mirrors reachable from inside China, and switch to the systemd cgroup driver:
sed -i "s#k8s.gcr.io#registry.cn-hangzhou.aliyuncs.com/google_containers#g"  /etc/containerd/config.toml
sed -i "s#SystemdCgroup = false#SystemdCgroup = true#g"  /etc/containerd/config.toml

sed -i "s#https://registry-1.docker.io#https://registry.cn-hangzhou.aliyuncs.com#g"  /etc/containerd/config.toml

Restart containerd:
systemctl restart containerd
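To confirm that the sed edits took effect, dump the live configuration and grep for the changed keys (the pause image tag may vary):

containerd config dump | grep -E 'SystemdCgroup|sandbox_image'
# expected: SystemdCgroup = true
# expected: sandbox_image pointing at the aliyuncs mirror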

3.7 Test creating and starting a container

Pull an image and create a container:
ctr i pull docker.io/library/nginx:alpine   # pull the image
ctr c create --net-host docker.io/library/nginx:alpine nginx   # create the container
ctr task start -d nginx   # start it detached
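If the task starts, a quick way to verify and clean up afterwards (with --net-host, nginx answers on the host's own port 80):

ctr task ls                       # the nginx task should show RUNNING
curl -s http://127.0.0.1 | head   # should print the nginx welcome page
ctr task kill nginx && sleep 2    # stop the task
ctr task rm nginx                 # remove the stopped task
ctr c rm nginx                    # remove the container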

If instead you hit the following error, you need to upgrade libseccomp:

ctr: failed to create shim task: 
OCI runtime create failed: unable to retrieve OCI runtime error 
(open /run/containerd/io.containerd.runtime.v2.task/default/nginx/log.json: no such file or directory): 
runc did not terminate successfully: exit status 127: unknown

Upgrade libseccomp:
wget https://rpmfind.net/linux/centos/8-stream/BaseOS/x86_64/os/Packages/libseccomp-2.5.1-1.el8.x86_64.rpm
rpm -ivh libseccomp-2.5.1-1.el8.x86_64.rpm   # use rpm -Uvh instead if the installed 2.3.x package conflicts
rpm -qa | grep libseccomp

4. Install Kubernetes on the master node

4.1 Edit the hosts file

Append the following to /etc/hosts:
192.168.52.132 master
192.168.52.133 node1
192.168.52.134 node2

Without these entries there is no local name resolution, and the coredns pods will stay in Pending.

4.2 Add the Kubernetes yum repository

Create kubernetes.repo under /etc/yum.repos.d/ with the following content:

[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg

4.3 Install the Kubernetes components

yum install -y kubelet-1.24.3 kubeadm-1.24.3 kubectl-1.24.3
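It is worth enabling kubelet right away; the join step in section 5.3 fails later precisely because the service was left disabled:

systemctl enable kubelet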

4.4 Configuration

Generate a default configuration file:
kubeadm config print init-defaults > kubeadm-init.yaml

Then adjust the parameters for your environment:

apiVersion: kubeadm.k8s.io/v1beta3
bootstrapTokens:
- groups:
  - system:bootstrappers:kubeadm:default-node-token
  token: abcdef.0123456789abcdef
  ttl: 24h0m0s
  usages:
  - signing
  - authentication
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 192.168.52.132    # master node IP (matches the hosts file in 4.1)
  bindPort: 6443
nodeRegistration:
  criSocket: unix:///run/containerd/containerd.sock   # containerd socket path
  imagePullPolicy: IfNotPresent
  name: master                       # node name
  taints: null
---
apiServer:
  timeoutForControlPlane: 4m0s
apiVersion: kubeadm.k8s.io/v1beta3
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
controllerManager: {}
dns: {}
etcd:
  local:
    dataDir: /var/lib/etcd
imageRepository: registry.aliyuncs.com/google_containers    # Aliyun mirror of the k8s images
kind: ClusterConfiguration
kubernetesVersion: 1.24.3
networking:
  dnsDomain: cluster.local
  podSubnet: 10.244.0.0/16     # pod IP subnet
  serviceSubnet: 10.96.0.0/12
scheduler: {}
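Optionally, the control-plane images can be pre-pulled before initializing so that registry problems surface early; kubeadm reads the imageRepository from the same file:

kubeadm config images pull --config kubeadm-init.yaml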

4.5 Initialize

Run the initialization:
kubeadm init --config=kubeadm-init.yaml --v=6

--config: the configuration file to initialize from
--v: log verbosity; higher means more detail

Initialization may fail with:

[preflight] Some fatal errors occurred:
        [ERROR FileContent--proc-sys-net-bridge-bridge-nf-call-iptables]: /proc/sys/net/bridge/bridge-nf-call-iptables does not exist
[preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
error execution phase preflight

Fix (section 2.5 makes these settings persistent):
modprobe br_netfilter
echo 1 > /proc/sys/net/bridge/bridge-nf-call-iptables
echo 1 > /proc/sys/net/ipv4/ip_forward

On success, initialization prints:

......
Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.120.12:6443 --token abcdef.0123456789abcdef \
        --discovery-token-ca-cert-hash sha256:cd52c5f3559e33d42fe2f8844c5556b1e10c9a0eebd307e6139da55a3248731c

Follow the prompt and run:

export KUBECONFIG=/etc/kubernetes/admin.conf

If kubectl get nodes then fails with:

The connection to the server localhost:8080 was refused - did you specify the right host or port?

Run:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

4.6 Check the master node status

[root@master ~]# kubectl get node
NAME     STATUS   ROLES           AGE   VERSION
master   Ready    control-plane   17h   v1.24.3

4.7 Check the k8s component status on the master node

[root@master ~]# kubectl get pod --all-namespaces -o wide
NAMESPACE     NAME                             READY   STATUS    RESTARTS      AGE   IP               NODE     NOMINATED NODE   READINESS GATES
kube-system   coredns-74586cf9b6-26gjh         0/1     Pending   0             17h   <none>           <none>   <none>           <none>
kube-system   coredns-74586cf9b6-9dwsb         0/1     Pending   0             17h   <none>           <none>   <none>           <none>
kube-system   etcd-master                      1/1     Running   1 (48m ago)   17h   192.168.52.132   master   <none>           <none>
kube-system   kube-apiserver-master            1/1     Running   1 (48m ago)   17h   192.168.52.132   master   <none>           <none>
kube-system   kube-controller-manager-master   1/1     Running   1 (48m ago)   17h   192.168.52.132   master   <none>           <none>
kube-system   kube-proxy-4mr78                 1/1     Running   1 (34m ago)   16h   192.168.52.134   node2    <none>           <none>
kube-system   kube-proxy-cldzh                 1/1     Running   1 (35m ago)   16h   192.168.52.133   node1    <none>           <none>
kube-system   kube-proxy-d5klq                 1/1     Running   1 (48m ago)   17h   192.168.52.132   master   <none>           <none>
kube-system   kube-scheduler-master            1/1     Running   1 (48m ago)   17h   192.168.52.132   master   <none>           <none>

The coredns pods are Pending because no network plugin has been installed yet; that happens in a later step.

5. Install Kubernetes on the worker nodes

5.1 Add the Kubernetes yum repository

Create kubernetes.repo under /etc/yum.repos.d/ with the following content:

cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=http://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=http://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg http://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF

5.2 Install the Kubernetes components

yum install -y kubeadm-1.24.3 kubelet-1.24.3 kubectl-1.24.3 --disableexcludes=kubernetes

5.3 Join the cluster

kubeadm join 192.168.120.12:6443 --token abcdef.0123456789abcdef \
        --discovery-token-ca-cert-hash sha256:cd52c5f3559e33d42fe2f8844c5556b1e10c9a0eebd307e6139da55a3248731c

This may fail with:

[preflight] Running pre-flight checks
        [WARNING Hostname]: hostname "node1" could not be reached
        [WARNING Hostname]: hostname "node1": lookup node1 on 192.168.120.254:53: no such host
        [WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
error execution phase preflight: [preflight] Some fatal errors occurred:
        [ERROR FileContent--proc-sys-net-bridge-bridge-nf-call-iptables]: /proc/sys/net/bridge/bridge-nf-call-iptables does not exist
[preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
To see the stack trace of this error execute with --v=5 or higher

Fix:

1. Edit /etc/hosts and map the machine's own hostname to 127.0.0.1, i.e. add the line: 127.0.0.1 node1

2. Run the following:

modprobe br_netfilter
echo 1 > /proc/sys/net/bridge/bridge-nf-call-iptables
echo 1 > /proc/sys/net/ipv4/ip_forward

Run kubeadm join ...... again.

It fails again, this time with:

[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp [::1]:10248: connect: connection refused.

Fix:
systemctl enable kubelet
systemctl start kubelet

Check kubelet's status with systemctl status kubelet, then run kubeadm join ...... again.

Success:
This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.

5.4 Regenerating the token needed by kubeadm join

To add more worker nodes later, run the following on the master to print a fresh join command:
kubeadm token create --print-join-command

If joining a node fails, fix the problem, run kubeadm reset on that node, and then join again.
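Tokens created by kubeadm init are valid for 24 hours by default (the ttl field in kubeadm-init.yaml); to see which tokens are still valid, run on the master:

kubeadm token list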

6. Install the network plugin on the master node

6.1 Download flannel

The coredns pods are not running yet because no network plugin is installed.

[root@master ~]# wget http://down.i4t.com/k8s1.24/kube-flannel.yml

6.2 Edit the configuration

[root@master ~]# vi kube-flannel.yml 

      containers:
      - name: kube-flannel
        #image: flannelcni/flannel:v0.17.0 for ppc64le and mips64le (dockerhub limitations may apply)
        image: rancher/mirrored-flannelcni-flannel:v0.17.0
        command:
        - /opt/bin/flanneld
        args:
        - --ip-masq
        - --kube-subnet-mgr
        - --iface=ens33                // change to your actual NIC name

......

  net-conf.json: |
    {
      "Network": "10.244.0.0/16",   // 同kubeadm-init.yaml文件的podSubnet网段
      "Backend": {
        "Type": "vxlan"
      }
    }
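If you are unsure which interface name to put in --iface, list the IPv4 addresses and pick the interface that carries the node IP:

ip -o -4 addr show
# e.g.  2: ens33    inet 192.168.52.132/24 ...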

6.3 Deploy flannel

[root@master ~]# kubectl apply -f kube-flannel.yml

Check the flannel pods; they are crash-looping:

[root@master ~]# kubectl get pod --all-namespaces -o wide
NAMESPACE     NAME                             READY   STATUS              RESTARTS       AGE   IP               NODE     NOMINATED NODE   READINESS GATES
kube-system   coredns-74586cf9b6-26gjh         0/1     ContainerCreating   0              18h   <none>           node1    <none>           <none>
kube-system   coredns-74586cf9b6-9dwsb         0/1     ContainerCreating   0              18h   <none>           node1    <none>           <none>
kube-system   etcd-master                      1/1     Running             2 (31m ago)    18h   192.168.52.132   master   <none>           <none>
kube-system   kube-apiserver-master            1/1     Running             2 (31m ago)    18h   192.168.52.132   master   <none>           <none>
kube-system   kube-controller-manager-master   1/1     Running             2 (31m ago)    18h   192.168.52.132   master   <none>           <none>
kube-system   kube-flannel-ds-7l2vc            0/1     CrashLoopBackOff    2 (26s ago)    49s   192.168.52.134   node2    <none>           <none>
kube-system   kube-flannel-ds-cw5v6            0/1     CrashLoopBackOff    2 (27s ago)    49s   192.168.52.132   master   <none>           <none>
kube-system   kube-flannel-ds-l9gdx            0/1     CrashLoopBackOff    2 (26s ago)    49s   192.168.52.133   node1    <none>           <none>
kube-system   kube-proxy-4mr78                 1/1     Running             1 (101m ago)   17h   192.168.52.134   node2    <none>           <none>
kube-system   kube-proxy-cldzh                 1/1     Running             1 (103m ago)   17h   192.168.52.133   node1    <none>           <none>
kube-system   kube-proxy-d5klq                 1/1     Running             2 (31m ago)    18h   192.168.52.132   master   <none>           <none>
kube-system   kube-scheduler-master            1/1     Running             2 (31m ago)    18h   192.168.52.132   master   <none>           <none>

Check the logs:

[root@master ~]# kubectl logs kube-flannel-ds-7l2vc 
Error from server (NotFound): pods "kube-flannel-ds-7l2vc" not found
[root@master ~]# kubectl logs kube-flannel-ds-7l2vc -n kube-system
Defaulted container "kube-flannel" out of: kube-flannel, install-cni-plugin (init), install-cni (init)
I1015 03:35:41.838060       1 main.go:205] CLI flags config: {etcdEndpoints:http://127.0.0.1:4001,http://127.0.0.1:2379 etcdPrefix:/coreos.com/network etcdKeyfile: etcdCertfile: etcdCAFile: etcdUsername: etcdPassword: version:false kubeSubnetMgr:true kubeApiUrl: kubeAnnotationPrefix:flannel.alpha.coreos.com kubeConfigFile: iface:[ens33] ifaceRegex:[] ipMasq:true subnetFile:/run/flannel/subnet.env publicIP: publicIPv6: subnetLeaseRenewMargin:60 healthzIP:0.0.0.0 healthzPort:0 iptablesResyncSeconds:5 iptablesForwardRules:true netConfPath:/etc/kube-flannel/net-conf.json setNodeNetworkUnavailable:true}
W1015 03:35:41.838248       1 client_config.go:614] Neither --kubeconfig nor --master was specified.  Using the inClusterConfig.  This might not work.
I1015 03:35:42.138304       1 kube.go:120] Waiting 10m0s for node controller to sync
I1015 03:35:42.138390       1 kube.go:378] Starting kube subnet manager
I1015 03:35:43.138841       1 kube.go:127] Node controller sync successful
I1015 03:35:43.138884       1 main.go:225] Created subnet manager: Kubernetes Subnet Manager - node2
I1015 03:35:43.138892       1 main.go:228] Installing signal handlers
I1015 03:35:43.139037       1 main.go:454] Found network config - Backend type: vxlan
I1015 03:35:43.139722       1 match.go:242] Using interface with name ens33 and address 192.168.52.134
I1015 03:35:43.139931       1 match.go:264] Defaulting external address to interface address (192.168.52.134)
I1015 03:35:43.140036       1 vxlan.go:138] VXLAN config: VNI=1 Port=0 GBP=false Learning=false DirectRouting=false
E1015 03:35:43.140696       1 main.go:317] Error registering network: failed to acquire lease: node "node2" pod cidr not assigned
W1015 03:35:43.141134       1 reflector.go:436] github.com/flannel-io/flannel/subnet/kube/kube.go:379: watch of *v1.Node ended with: an error on the server ("unable to decode an event from the watch stream: context canceled") has prevented the request from succeeding
I1015 03:35:43.141237       1 main.go:434] Stopping shutdownHandler...

Cause:

The cluster was initialized without the --pod-network-cidr flag,

or

it was initialized from kubeadm-init.yaml but the podSubnet field was not set.

Either way, the flannel manifest specifies a pod CIDR that the cluster does not know about, so flannel cannot acquire a lease and crashes.

Fix:

1. Update the cluster configuration

[root@master ~]# kubectl edit cm kubeadm-config -n kube-system

Under networking, add a podSubnet entry; the CIDR must match the one specified in the flannel manifest:

    networking:
      dnsDomain: cluster.local
      serviceSubnet: 10.96.0.0/12
      podSubnet: 10.244.0.0/16

2. Update the static pod kube-controller-manager

[root@master ~]# vi /etc/kubernetes/manifests/kube-controller-manager.yaml

Add these startup flags (kubelet recreates the static pod automatically when the manifest changes):

- --allocate-node-cidrs=true
- --cluster-cidr=10.244.0.0/16

3. Update the kube-proxy configuration

[root@master ~]# kubectl edit cm kube-proxy -n kube-system

clusterCIDR: "10.244.0.0/16"
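After these three changes, each node should have a pod CIDR assigned; the "pod cidr not assigned" error in the flannel log above came from exactly this field being empty:

kubectl get nodes -o custom-columns=NAME:.metadata.name,PODCIDR:.spec.podCIDR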

Check the pod status again; everything is now running:

[root@master ~]# kubectl get pod --all-namespaces -o wide
NAMESPACE     NAME                             READY   STATUS    RESTARTS       AGE   IP               NODE     NOMINATED NODE   READINESS GATES
kube-system   coredns-74586cf9b6-26gjh         1/1     Running   0              19h   10.244.1.2       node1    <none>           <none>
kube-system   coredns-74586cf9b6-9dwsb         1/1     Running   0              19h   10.244.1.3       node1    <none>           <none>
kube-system   etcd-master                      1/1     Running   2 (79m ago)    19h   192.168.52.132   master   <none>           <none>
kube-system   kube-apiserver-master            1/1     Running   2 (79m ago)    19h   192.168.52.132   master   <none>           <none>
kube-system   kube-controller-manager-master   1/1     Running   0              12m   192.168.52.132   master   <none>           <none>
kube-system   kube-flannel-ds-7l2vc            1/1     Running   13 (12m ago)   48m   192.168.52.134   node2    <none>           <none>
kube-system   kube-flannel-ds-cw5v6            1/1     Running   13 (12m ago)   48m   192.168.52.132   master   <none>           <none>
kube-system   kube-flannel-ds-l9gdx            1/1     Running   13 (12m ago)   48m   192.168.52.133   node1    <none>           <none>
kube-system   kube-proxy-4mr78                 1/1     Running   1 (149m ago)   18h   192.168.52.134   node2    <none>           <none>
kube-system   kube-proxy-cldzh                 1/1     Running   1 (151m ago)   18h   192.168.52.133   node1    <none>           <none>
kube-system   kube-proxy-d5klq                 1/1     Running   2 (79m ago)    19h   192.168.52.132   master   <none>           <none>
kube-system   kube-scheduler-master            1/1     Running   2 (79m ago)    19h   192.168.52.132   master   <none>           <none>

6.4 Test the network

Create a file named test.yaml:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
spec:
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - image: nginx:alpine
        name: nginx
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: nginx
spec:
  selector:
    app: nginx
  type: NodePort
  ports:
    - protocol: TCP
      port: 80
      targetPort: 80
      nodePort: 30001
---
apiVersion: v1
kind: Pod
metadata:
  name: busybox
  namespace: default
spec:
  containers:
  - name: busybox
    image: abcdocker9/centos:v1
    command:
      - sleep
      - "3600"
    imagePullPolicy: IfNotPresent
  restartPolicy: Always

Create the pods:

[root@master ~]# kubectl apply -f test.yaml

Check the pod status:

[root@master ~]# kubectl get pod,svc
NAME                         READY   STATUS              RESTARTS   AGE
pod/busybox                  0/1     ContainerCreating   0          31s
pod/nginx-6fb79bc456-nhrmf   0/1     ContainerCreating   0          31s

NAME                 TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)        AGE
service/kubernetes   ClusterIP   10.96.0.1       <none>        443/TCP        19h
service/nginx        NodePort    10.109.86.224   <none>        80:30001/TCP   31s
[root@master ~]# kubectl get pod,svc -o wide
NAME                         READY   STATUS    RESTARTS   AGE   IP           NODE    NOMINATED NODE   READINESS GATES
pod/busybox                  1/1     Running   0          13m   10.244.2.2   node2   <none>           <none>
pod/nginx-6fb79bc456-nhrmf   1/1     Running   0          13m   10.244.2.3   node2   <none>           <none>

NAME                 TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)        AGE   SELECTOR
service/kubernetes   ClusterIP   10.96.0.1       <none>        443/TCP        19h   <none>
service/nginx        NodePort    10.109.86.224   <none>        80:30001/TCP   13m   app=nginx

Use nslookup to check that service names resolve:

[root@master ~]# kubectl exec -it busybox -- nslookup kubernetes
Server:         10.96.0.10
Address:        10.96.0.10#53

Name:   kubernetes.default.svc.cluster.local
Address: 10.96.0.1

Test network connectivity to the nginx service and pods.

Run the following on each of the three machines:

ping 10.244.2.2      # busybox pod IP - reachable

ping 10.244.2.3      # nginx pod IP - reachable

ping 10.109.86.224   # nginx service IP - not reachable
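Note that the service is functional even though its ClusterIP does not answer ICMP; for example, the NodePort defined in test.yaml responds from any of the machines:

curl -s http://192.168.52.133:30001 | head -n 4   # any node IP works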

The nginx service IP cannot be pinged. In the default iptables mode a ClusterIP is purely virtual (it exists only as NAT rules), so ICMP gets no reply; in ipvs mode kube-proxy binds service IPs to a local dummy interface, which does answer ping.

On the master, edit the kube-proxy configuration and set mode to "ipvs":
[root@master ~]# kubectl edit cm kube-proxy -n kube-system

    kind: KubeProxyConfiguration
    metricsBindAddress: ""
    mode: "ipvs"
    nodePortAddresses: null
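ipvs mode requires the IPVS kernel modules; if they are missing, kube-proxy silently falls back to iptables mode. A minimal load-and-check sketch for this CentOS 7 (3.10) kernel:

for m in ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack_ipv4; do modprobe $m; done
lsmod | grep ip_vs   # should list the loaded ip_vs modules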

Delete the existing kube-proxy pods:

[root@master ~]# kubectl get pod -n kube-system | grep kube-proxy |awk '{system("kubectl delete pod "$1" -n kube-system")}'
pod "kube-proxy-4mr78" deleted
pod "kube-proxy-cldzh" deleted
pod "kube-proxy-d5klq" deleted

New kube-proxy pods are started automatically:

[root@master ~]# kubectl get pod -n kube-system -o wide
NAME                             READY   STATUS    RESTARTS       AGE     IP               NODE     NOMINATED NODE   READINESS GATES
coredns-74586cf9b6-26gjh         1/1     Running   2 (11m ago)    23h     10.244.1.7       node1    <none>           <none>
coredns-74586cf9b6-9dwsb         1/1     Running   2 (11m ago)    23h     10.244.1.6       node1    <none>           <none>
etcd-master                      1/1     Running   13 (11m ago)   23h     192.168.52.132   master   <none>           <none>
kube-apiserver-master            1/1     Running   15 (11m ago)   23h     192.168.52.132   master   <none>           <none>
kube-controller-manager-master   1/1     Running   6 (11m ago)    4h10m   192.168.52.132   master   <none>           <none>
kube-flannel-ds-7l2vc            1/1     Running   15 (11m ago)   4h46m   192.168.52.134   node2    <none>           <none>
kube-flannel-ds-cw5v6            1/1     Running   17 (11m ago)   4h46m   192.168.52.132   master   <none>           <none>
kube-flannel-ds-l9gdx            1/1     Running   16 (11m ago)   4h46m   192.168.52.133   node1    <none>           <none>
kube-proxy-59hrz                 1/1     Running   0              22s     192.168.52.133   node1    <none>           <none>
kube-proxy-jxhdj                 1/1     Running   0              21s     192.168.52.132   master   <none>           <none>
kube-proxy-qxr96                 1/1     Running   0              23s     192.168.52.134   node2    <none>           <none>
kube-scheduler-master            1/1     Running   8 (11m ago)    23h     192.168.52.132   master   <none>           <none>

The nginx service IP can now be pinged:

[root@master ~]# ping 10.109.86.224
PING 10.109.86.224 (10.109.86.224) 56(84) bytes of data.
64 bytes from 10.109.86.224: icmp_seq=1 ttl=64 time=0.071 ms
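To inspect the virtual servers that kube-proxy programmed, the ipvsadm tool can be installed (it is not present by default):

yum install -y ipvsadm
ipvsadm -Ln   # lists each service IP with its backend pod endpoints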

Reference: https://blog.csdn.net/sinat_28521487/article/details/126057006
