Deploying a Kubernetes Cluster on CentOS 7

CentOS version:

# cat /etc/redhat-release
CentOS Linux release 7.9.2009 (Core)

Role        IP
k8s-master  172.26.197.206
k8s-node1   172.26.201.116
k8s-node2   172.26.203.93

Of the steps below, 1-8 run on every node; steps 9, 10, and 12 run on the master only; step 11 runs on each worker node.

  • 1 Disable the firewall:
$ systemctl stop firewalld
$ systemctl disable firewalld
  • 2 Disable SELinux:
$ sed -i 's/enforcing/disabled/' /etc/selinux/config # permanent, after reboot
$ setenforce 0 # temporary, effective immediately
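To confirm SELinux is no longer enforcing (a quick check, not part of the original steps):
$ getenforce # prints Permissive after setenforce 0, and Disabled after a reboot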
  • 3 Disable swap:
$ swapoff -a # temporary

$ vi /etc/fstab # permanent
# comment out the line: /dev/mapper/centos-swap swap swap defaults 0 0
$ systemctl reboot # reboot for the change to take effect
$ free -m # if the Swap row shows all zeros, swap is fully disabled
  • 4 Set the hostname:
$ hostnamectl set-hostname <hostname>
  • 5 Add hosts entries:
$ cat >> /etc/hosts << EOF
172.26.197.206 k8s-master
172.26.201.116 k8s-node1
172.26.203.93  k8s-node2
EOF
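A quick sanity check that the names resolve (not part of the original steps):
$ ping -c 1 k8s-node1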
  • 6 Pass bridged IPv4 traffic to the iptables chains (written to a dedicated file here; on CentOS 7, /etc/sysctl.d/99-sysctl.conf is a symlink to /etc/sysctl.conf and would be clobbered):
$ cat > /etc/sysctl.d/k8s.conf << EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
$ sysctl --system # apply
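If sysctl complains "No such file or directory" for the bridge keys, the br_netfilter kernel module is not loaded yet. A minimal fix (standard module name on CentOS 7):
$ modprobe br_netfilter
$ echo br_netfilter > /etc/modules-load.d/br_netfilter.conf # reload on boot
$ sysctl --system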
  • 7 Synchronize time:
$ yum install ntpdate -y
$ ntpdate time.windows.com
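ntpdate is a one-shot sync. To keep clocks aligned after reboots, one option is a cron entry (my assumption; chronyd is another common choice on CentOS 7):
$ (crontab -l 2>/dev/null; echo "*/30 * * * * /usr/sbin/ntpdate time.windows.com >/dev/null 2>&1") | crontab -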
  • 8 Install Docker/kubeadm/kubelet on all nodes

Kubernetes uses Docker as the default CRI (container runtime) here, so install Docker first.
(1) Install Docker

$ wget https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo -O /etc/yum.repos.d/docker-ce.repo
$ yum -y install docker-ce-18.06.1.ce-3.el7
$ systemctl enable docker && systemctl start docker
$ docker --version


(2) Configure Docker and add the Aliyun YUM repository
Set the registry mirror and the systemd cgroup driver:

# cat > /etc/docker/daemon.json << EOF
{
    "registry-mirrors": ["https://b9pmyelo.mirror.aliyuncs.com"],
    "exec-opts": ["native.cgroupdriver=systemd"]
}
EOF

# systemctl stop docker
# systemctl start docker
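Verify that the mirror and cgroup driver took effect (a quick check; the systemd cgroup driver matters later when the kubelet starts):
# docker info | grep -A1 "Registry Mirrors"
# docker info | grep -i "cgroup driver"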

Add the Kubernetes YUM repository:

$ cat > /etc/yum.repos.d/kubernetes.repo << EOF
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF

(3) Install kubeadm, kubelet, and kubectl

$ yum install -y kubelet-1.23.0 kubectl-1.23.0 kubeadm-1.23.0
$ systemctl enable kubelet
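Note: the kubelet will crash-loop until kubeadm init (or join) writes its configuration; that is expected at this stage. You can still confirm the installed versions:
$ kubeadm version -o short
$ kubelet --version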
  • 9 Deploy the Kubernetes master

(1) Run on 172.26.197.206 (the master):

[root@k8s-master ~]# kubeadm init \
--apiserver-advertise-address=172.26.197.206 \
--image-repository registry.aliyuncs.com/google_containers \
--kubernetes-version v1.23.0 \
--service-cidr=10.96.0.0/12 \
--pod-network-cidr=10.244.0.0/16

[init] Using Kubernetes version: v1.23.0
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [k8s-master kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 172.26.197.206]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [k8s-master localhost] and IPs [172.26.197.206 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [k8s-master localhost] and IPs [172.26.197.206 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 33.562436 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.23" in namespace kube-system with the configuration for the kubelets in the cluster
NOTE: The "kubelet-config-1.23" naming of the kubelet ConfigMap is deprecated. Once the UnversionedKubeletConfigMap feature gate graduates to Beta the default name will become just "kubelet-config". Kubeadm upgrade will handle this transition transparently.
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node k8s-master as control-plane by adding the labels: [node-role.kubernetes.io/master(deprecated) node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
[mark-control-plane] Marking the node k8s-master as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: 27smjn.9i4zmj65gmru6aed
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 172.26.197.206:6443 --token 27smjn.9i4zmj65gmru6aed \
	--discovery-token-ca-cert-hash sha256:12cb2a5841863772a233d7124ab4cf4e447452d4f82cef591dab29d458aed3aa 

(The --image-repository flag above points at the Aliyun mirror because the default registry, k8s.gcr.io, is unreachable from mainland China.)
(2) Set up the kubectl tool:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
$ kubectl get nodes
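At this point the master typically reports NotReady because no CNI plugin is installed yet (next step). The system pods can also be inspected (an extra verification, not in the original):
$ kubectl get pods -n kube-system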
  • 10 Install a Pod network plugin (CNI)
$ kubectl apply -f \
https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml

Make sure the quay.io registry is reachable (the flannel images are pulled from there). If the image pulls fail, retry a few times, or use either workaround below.

(1) Via https://www.ipaddress.com, look up the IP address of raw.githubusercontent.com (185.199.108.133 at the time of writing) and add it to /etc/hosts:

$ vi /etc/hosts
185.199.108.133 raw.githubusercontent.com

(2) Download the manifest locally with wget first, then apply the local copy:

$ wget https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
$ kubectl apply -f kube-flannel.yml
  • 11 Join the worker nodes

(1) Run on 172.26.201.116 and 172.26.203.93 (the worker nodes).
To add a node to the cluster, run the kubeadm join command printed by kubeadm init:

[root@k8s-node1 ~]#  kubeadm join 172.26.197.206:6443 --token 27smjn.9i4zmj65gmru6aed \
--discovery-token-ca-cert-hash sha256:12cb2a5841863772a233d7124ab4cf4e447452d4f82cef591dab29d458aed3aa
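Back on the master, confirm that both workers registered (they may show NotReady briefly while flannel starts on them):
$ kubectl get nodes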
  • 12 Test the Kubernetes cluster

On the master, create a pod in the cluster and verify that it runs correctly:

$ kubectl create deployment nginx --image=nginx
$ kubectl expose deployment nginx --port=80 --type=NodePort
$ kubectl get pod,svc
NAME                         READY   STATUS    RESTARTS   AGE
pod/nginx-85b98978db-lr4b6   1/1     Running   0          21m

NAME                 TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)        AGE
service/kubernetes   ClusterIP   10.96.0.1        <none>        443/TCP        20h
service/nginx        NodePort    10.104.120.119   <none>        80:32215/TCP   20m

Access it at http://<NodeIP>:<NodePort>, here:
http://172.26.197.206:32215

Welcome to nginx!

If you see this page, the nginx web server is successfully installed and working. Further configuration is required.

For online documentation and support please refer to nginx.org.
Commercial support is available at nginx.com.

Thank you for using nginx.
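The same check works from the command line, against any node IP, since kube-proxy opens the NodePort on every node:
$ curl -s http://172.26.197.206:32215 | grep title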

------------------------------------------------------------------------------------------------------------------------------------------

Problems you may run into:

  • bridge-nf-call-iptables
[root@k8s-master ~]# kubeadm init --apiserver-advertise-address=172.26.197.206 --image-repository=registry.aliyuncs.com/google_containers --kubernetes-version=v1.23.0 --service-cidr=10.96.0.0/12 --pod-network-cidr=10.244.0.0/16
[init] Using Kubernetes version: v1.23.0
[preflight] Running pre-flight checks
error execution phase preflight: [preflight] Some fatal errors occurred:
	[ERROR FileContent--proc-sys-net-bridge-bridge-nf-call-iptables]: /proc/sys/net/bridge/bridge-nf-call-iptables contents are not set to 1
[preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
To see the stack trace of this error execute with --v=5 or higher

As the error message suggests, run:

echo "1" > /proc/sys/net/bridge/bridge-nf-call-iptables
  • kubeadm join times out
kubeadm join 172.26.197.206:6443 --token 27smjn.9i4zmj65gmru6aed --discovery-token-ca-cert-hash sha256:12cb2a5841863772a233d7124ab4cf4e447452d4f82cef591dab29d458aed3aa
[preflight] Running pre-flight checks
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
[kubelet-check] Initial timeout of 40s passed.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp [::1]:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp [::1]:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp [::1]:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp [::1]:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp [::1]:10248: connect: connection refused.

Unfortunately, an error has occurred:
	timed out waiting for the condition

This error is likely caused by:
	- The kubelet is not running
	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)

If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	- 'systemctl status kubelet'
	- 'journalctl -xeu kubelet'
error execution phase kubelet-start: timed out waiting for the condition
To see the stack trace of this error execute with --v=5 or higher
The kubelet failed to start here because Docker's cgroup driver did not match the kubelet's systemd driver. Set Docker to the systemd cgroup driver (as in step 8) and restart it, then retry the join:

$ vi /etc/docker/daemon.json
{
    "registry-mirrors": ["https://b9pmyelo.mirror.aliyuncs.com"],
    "exec-opts": ["native.cgroupdriver=systemd"]
}
$ systemctl stop docker
$ systemctl start docker
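Verify the driver before retrying the join (and run kubeadm reset first if the failed attempt left state behind; see the Unauthorized section below):
$ docker info | grep -i "cgroup driver"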
  • kubelet fails to start after a server reboot
$ systemctl status kubelet
● kubelet.service - kubelet: The Kubernetes Node Agent
   Loaded: loaded (/usr/lib/systemd/system/kubelet.service; enabled; vendor pres
  Drop-In: /usr/lib/systemd/system/kubelet.service.d
           └─10-kubeadm.conf
   Active: activating (auto-restart) (Result: exit-code) since Tue 2022-09-27 14:
     Docs: https://kubernetes.io/docs/
  Process: 7645 ExecStart=/usr/bin/kubelet $KUBELET_KUBECONFIG_ARGS $KUBELET_CON
 Main PID: 7645 (code=exited, status=1/FAILURE)

Sep 27 14:18:50 k8s-master systemd[1]: Unit kubelet.service entered failed state
Sep 27 14:18:50 k8s-master systemd[1]: kubelet.service failed.


$ journalctl -xefu kubelet 
Sep 27 15:11:52 k8s-master systemd[1]: kubelet.service holdoff time over, scheduling restart.
Sep 27 15:11:52 k8s-master systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
-- Subject: Unit kubelet.service has finished shutting down
-- Defined-By: systemd
-- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel
-- 
-- Unit kubelet.service has finished shutting down.
Sep 27 15:11:52 k8s-master systemd[1]: Started kubelet: The Kubernetes Node Agent.
-- Subject: Unit kubelet.service has finished start-up
-- Defined-By: systemd
-- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel
-- 
-- Unit kubelet.service has finished starting up.
-- 
-- The start-up result is done.
Sep 27 15:11:52 k8s-master kubelet[22728]: Flag --network-plugin has been deprecated, will be removed along with dockershim.
Sep 27 15:11:52 k8s-master kubelet[22728]: Flag --network-plugin has been deprecated, will be removed along with dockershim.
Sep 27 15:11:52 k8s-master kubelet[22728]: I0927 15:11:52.565895   22728 server.go:446] "Kubelet version" kubeletVersion="v1.23.0"
Sep 27 15:11:52 k8s-master kubelet[22728]: I0927 15:11:52.566152   22728 server.go:874] "Client rotation is on, will bootstrap in background"
Sep 27 15:11:52 k8s-master kubelet[22728]: I0927 15:11:52.568237   22728 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
Sep 27 15:11:52 k8s-master kubelet[22728]: I0927 15:11:52.570169   22728 dynamic_cafile_content.go:156] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Sep 27 15:11:52 k8s-master kubelet[22728]: I0927 15:11:52.657911   22728 server.go:693] "--cgroups-per-qos enabled, but --cgroup-root was not specified.  defaulting to /"
Sep 27 15:11:52 k8s-master kubelet[22728]: E0927 15:11:52.658112   22728 server.go:302] "Failed to run kubelet" err="failed to run Kubelet: running with swap on is not supported, please disable swap! or set --fail-swap-on flag to false. /proc/swaps contained: [Filename\t\t\t\tType\t\tSize\tUsed\tPriority /dev/dm-1                               partition\t4063228\t0\t-2]"
Sep 27 15:11:52 k8s-master systemd[1]: kubelet.service: main process exited, code=exited, status=1/FAILURE
Sep 27 15:11:52 k8s-master systemd[1]: Unit kubelet.service entered failed state.
Sep 27 15:11:52 k8s-master systemd[1]: kubelet.service failed.

running with swap on is not supported, please disable swap! or set --fail-swap-on flag to false

Refer to step 3 above and disable swap permanently.
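A one-liner to comment out the swap entry in /etc/fstab (an alternative to editing it by hand; inspect the file before rebooting):
$ sed -ri 's/.*swap.*/#&/' /etc/fstab
$ swapoff -a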

  • If the join token has expired when a node joins, regenerate the join command on the master:
$ kubeadm token create --print-join-command

A related issue: when a node joins the master, kubeadm reports Unauthorized:

[root@k8s-node2 ~]# kubeadm join 172.26.197.206:6443 --token 27smjn.9i4zmj65gmru6aed \
--discovery-token-ca-cert-hash sha256:12cb2a5841863772a233d7124ab4cf4e447452d4f82cef591dab29d458aed3aa
[preflight] Running pre-flight checks
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
[kubelet-check] Initial timeout of 40s passed.
error execution phase kubelet-start: error uploading crisocket: Unauthorized
To see the stack trace of this error execute with --v=5 or higher

The fix is as follows:

[root@k8s-node2 ~]# swapoff -a
[root@k8s-node2 ~]# kubeadm reset
[reset] WARNING: Changes made to this host by 'kubeadm init' or 'kubeadm join' will be reverted.
[reset] Are you sure you want to proceed? [y/N]: y
[preflight] Running pre-flight checks
W1112 10:51:20.127298    3527 removeetcdmember.go:80] [reset] No kubeadm config, using etcd pod spec to get data directory
[reset] No etcd config found. Assuming external etcd
[reset] Please, manually reset etcd to prevent further issues
[reset] Stopping the kubelet service
[reset] Unmounting mounted directories in "/var/lib/kubelet"
[reset] Deleting contents of config directories: [/etc/kubernetes/manifests /etc/kubernetes/pki]
[reset] Deleting files: [/etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/bootstrap-kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf]
[reset] Deleting contents of stateful directories: [/var/lib/kubelet /var/lib/dockershim /var/run/kubernetes /var/lib/cni]

The reset process does not clean CNI configuration. To do so, you must remove /etc/cni/net.d

The reset process does not reset or clean up iptables rules or IPVS tables.
If you wish to reset iptables, you must do so manually by using the "iptables" command.

If your cluster was setup to utilize IPVS, run ipvsadm --clear (or similar)
to reset your system's IPVS tables.

The reset process does not clean your kubeconfig files and you must remove them manually.
Please, check the contents of the $HOME/.kube/config file.
[root@k8s-node2 ~]# rm -rf /etc/cni/net.d
[root@k8s-node2 ~]# systemctl daemon-reload
[root@k8s-node2 ~]# systemctl restart kubelet
[root@k8s-node2 ~]# kubeadm join 172.26.197.206:6443 --token 27smjn.9i4zmj65gmru6aed \
--discovery-token-ca-cert-hash sha256:12cb2a5841863772a233d7124ab4cf4e447452d4f82cef591dab29d458aed3aa
[preflight] Running pre-flight checks
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
