Setting Up a K8s Cluster on CentOS 7

Introduction to Kubernetes

Kubernetes (K8s for short) is an open-source container cluster management system from Google. In the container deployment era, containers are similar to VMs but have better isolation properties: they are decoupled from the underlying infrastructure, so they are portable across clouds and OS distributions. An application is bundled together with its runtime environment, which enables continuous development, integration, and deployment, and because images are immutable, rollbacks are fast.

Container images can be created much faster than VM images, with good isolation and agile application deployment. In production, however, once the number of containers grows, managing their lifecycle by hand becomes unwieldy. If the system itself handles elastic scaling and failover of containers, operational cost drops dramatically, and that is exactly what Kubernetes provides. Using Kubernetes brings the following benefits:

  • Service discovery and load balancing: if traffic to a container is high, Kubernetes can load-balance and distribute network traffic so that the deployment stays stable.
  • Storage orchestration: automatically mount a storage system of your choice, such as local storage or a cloud provider's.
  • Automated rollouts and rollbacks: describe a rollout strategy and Kubernetes will create new containers for your deployment, remove existing containers, and move their resources to the new ones.
  • Automatic bin packing: given a cluster of nodes for running containerized workloads, you tell Kubernetes how much CPU and memory (RAM) each container needs, and it fits containers onto your nodes to make the best use of resources.
  • Self-healing: restarts containers that fail, replaces containers, kills containers that do not respond to user-defined health checks, and does not advertise them to clients until they are ready to serve.
  • Secret and configuration management: store and manage sensitive information such as passwords, OAuth tokens, and SSH keys; deploy and update secrets and application configuration without rebuilding container images and without exposing secrets in your stack configuration.

Installation Preparation

Node Planning

Three CentOS 7 VMs were created locally to form a simple K8s cluster. The node plan is as follows:

Hostname           IP              Role        Spec
learn-k8s-master   192.168.43.210  k8s master  2 cores, 3 GB
learn-k8s-node-1   192.168.43.211  k8s worker  1 core, 2 GB
learn-k8s-node-2   192.168.43.212  k8s worker  1 core, 2 GB


Software versions on the nodes:

OS: CentOS-7.4-64Bit

Docker version: 1.13.1

Kubernetes version: v1.19.4

All nodes need the following components installed:

Docker: packages the application and its dependencies into a lightweight, portable container
kubelet: runs on every Node and is responsible for starting containers and Pods
kubeadm: initializes the cluster
kubectl: the k8s command-line tool, used to deploy/manage applications and CRUD all kinds of resources

Node Configuration

Complete the following configuration on all nodes:

Set the hostname of each node (run the matching command on its node)

hostnamectl --static set-hostname learn-k8s-master
hostnamectl --static set-hostname learn-k8s-node-1
hostnamectl --static set-hostname learn-k8s-node-2

Add hostname/IP mappings for all nodes to /etc/hosts

192.168.43.210 learn-k8s-master
192.168.43.211 learn-k8s-node-1
192.168.43.212 learn-k8s-node-2
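
A quick way to append these on all three machines is a heredoc (a sketch; skip it if the entries already exist):

cat >> /etc/hosts <<EOF
192.168.43.210 learn-k8s-master
192.168.43.211 learn-k8s-node-1
192.168.43.212 learn-k8s-node-2
EOF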

Disable the firewall on all nodes

systemctl disable firewalld.service
systemctl stop firewalld.service

Disable SELinux

vim /etc/sysconfig/selinux

# disable permanently
SELINUX=disabled
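
The config edit only takes effect after a reboot. To also turn enforcement off immediately for the current boot (as the upstream kubeadm install docs do), run:

# switch SELinux to permissive mode right away
setenforce 0
# verify the current mode
getenforce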

Set system parameters

Enable IP forwarding, and make bridged traffic visible to iptables

Create the file /etc/sysctl.d/k8s.conf

vim /etc/sysctl.d/k8s.conf

# add the following content
net.bridge.bridge-nf-call-ip6tables=1
net.bridge.bridge-nf-call-iptables=1
net.ipv4.ip_forward=1
vm.swappiness=0

Apply the changes by running:

sysctl -p /etc/sysctl.d/k8s.conf
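
Note: the two bridge-nf-call keys are provided by the br_netfilter kernel module; if sysctl -p complains that they do not exist, load the module first (a sketch, using the standard modules-load mechanism for persistence):

# load the module now
modprobe br_netfilter
# and automatically on every boot
echo "br_netfilter" > /etc/modules-load.d/br_netfilter.conf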

Prerequisites for enabling ipvs in kube-proxy

Since ipvs has been merged into the mainline kernel, kernel module support is required. Make sure the kernel has loaded the relevant modules; if unsure, run the following script to load them, otherwise you will hit errors such as failed to load kernel modules: [ip_vs_rr ip_vs_sh ip_vs_wrr]

cat > /etc/sysconfig/modules/ipvs.modules <<EOF
#!/bin/bash
modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
modprobe -- nf_conntrack_ipv4
EOF
chmod 755 /etc/sysconfig/modules/ipvs.modules && bash /etc/sysconfig/modules/ipvs.modules && lsmod | grep -e ip_vs -e nf_conntrack_ipv4
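
Optionally, install the ipvs userland tools so that the rules can be inspected later:

yum install -y ipset ipvsadm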

Disable swap on all nodes

swapoff -a
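
swapoff -a only disables swap until the next reboot. To keep it off permanently, also comment out the swap entry in /etc/fstab (a sketch; double-check your fstab before running it):

# comment out every fstab line whose fields include "swap"
sed -i '/\sswap\s/ s/^/#/' /etc/fstab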

Component Installation

Install Docker

See my other blog post for details.

Install kubelet, kubeadm, and kubectl (all nodes)
  • kubeadm: the command used to initialize the cluster.
  • kubelet: runs on every node in the cluster and starts pods, containers, and so on.
  • kubectl: the command-line tool for talking to the cluster.

Set up the yum repository

cat > /etc/yum.repos.d/kubernetes.repo <<EOF
[kubernetes]
name=Kubernetes Repo
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg
EOF

Run the install commands

# install
yum install -y kubelet kubeadm kubectl

# check the version
kubelet --version

# enable at boot
systemctl enable kubelet

Do not start kubelet yet at this point; until kubeadm init (or kubeadm join) has generated its configuration, kubelet will simply crash-loop.
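
Note that a bare yum install pulls the latest available packages, which may be newer than the v1.19.4 this article targets. To pin the versions to match (assuming these package versions are present in the mirror):

yum install -y kubelet-1.19.4 kubeadm-1.19.4 kubectl-1.19.4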

Master Node Configuration
Initialize the k8s cluster

Run the initialization command:

kubeadm init --kubernetes-version=v1.19.4 \
--image-repository registry.aliyuncs.com/google_containers \
--apiserver-advertise-address=192.168.43.210 \
--service-cidr=10.1.0.0/16 \
--pod-network-cidr=10.244.0.0/16
  • --kubernetes-version: specifies the k8s version.

  • --apiserver-advertise-address: specifies which network interface on the Master is used for communication; if omitted, kubeadm automatically picks the interface that has the default gateway.

  • --pod-network-cidr: specifies the Pod network range. The value depends on the network add-on in use; this article installs Calico below and rewrites Calico's default CIDR to the 10.244.0.0/16 given here.

After running the command, the console prints the detailed cluster initialization process:

[root@learn-k8s-master ~]# kubeadm init --kubernetes-version=v1.19.4 \
> --image-repository registry.aliyuncs.com/google_containers \
> --apiserver-advertise-address=192.168.43.210 \
> --service-cidr=10.1.0.0/16 \
> --pod-network-cidr=10.244.0.0/16
W1126 09:53:58.544775   19646 configset.go:348] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
[init] Using Kubernetes version: v1.19.4
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'

[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local learn-k8s-master] and IPs [10.1.0.1 192.168.43.210]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [learn-k8s-master localhost] and IPs [192.168.43.210 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [learn-k8s-master localhost] and IPs [192.168.43.210 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 19.003611 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.19" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node learn-k8s-master as control-plane by adding the label "node-role.kubernetes.io/master=''"
[mark-control-plane] Marking the node learn-k8s-master as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: ev6351.4vdcer0778qg6zk7
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.43.210:6443 --token ev6351.4vdcer0778qg6zk7 \
    --discovery-token-ca-cert-hash sha256:d2575d72454c6db6ff8c06d51aac7591dce912a3edeab658e48b7af7b52ef85e
[root@learn-k8s-master ~]#

The node join command is printed at the end; write it down, as it will be used on the slave nodes later.

kubeadm join 192.168.43.210:6443 --token ev6351.4vdcer0778qg6zk7 \
    --discovery-token-ca-cert-hash sha256:d2575d72454c6db6ff8c06d51aac7591dce912a3edeab658e48b7af7b52ef85e
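
Bootstrap tokens expire after 24 hours by default; if a node joins later than that, regenerate a fresh join command on the master with:

kubeadm token create --print-join-command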

Now start kubelet:

systemctl restart kubelet


Configure the kubectl tool

On the Master node, run the following commands as root to configure kubectl:

echo "export KUBECONFIG=/etc/kubernetes/admin.conf" >> /etc/profile
source /etc/profile
echo $KUBECONFIG
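
To verify that kubectl can reach the API server, for example:

kubectl cluster-info
kubectl get nodes

At this point the master will still show NotReady, because no Pod network has been installed yet; that is the next step.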
Install the Pod Network

A Pod network is a prerequisite for Pods to communicate with each other. k8s supports many network add-ons; flannel and Calico are the common choices, and Calico is used here:

mkdir k8s
cd k8s

wget https://docs.projectcalico.org/v3.10/getting-started/kubernetes/installation/hosted/kubernetes-datastore/calico-networking/1.7/calico.yaml

sed -i 's/192.168.0.0/10.244.0.0/g' calico.yaml

kubectl apply -f calico.yaml

A warning appears:

Warning: apiextensions.k8s.io/v1beta1 CustomResourceDefinition is deprecated in v1.16+, unavailable in v1.22+; use apiextensions.k8s.io/v1 CustomResourceDefinition

Kubernetes v1.19 added a mechanism that lets the API server send warnings to API clients, including warnings for deprecated APIs, which is what you see here.

Check the status of all Pods:

kubectl get pods --all-namespaces -o wide


Slave Node Configuration
Join the slave nodes to the cluster

Use the join command produced earlier on the master to join the k8s cluster:

[root@learn-k8s-node-1 ~]# kubeadm join 192.168.43.210:6443 --token ev6351.4vdcer0778qg6zk7 \
>     --discovery-token-ca-cert-hash sha256:d2575d72454c6db6ff8c06d51aac7591dce912a3edeab658e48b7af7b52ef85e
[preflight] Running pre-flight checks
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.


Enable and start kubelet:
systemctl enable kubelet && systemctl start kubelet

Checking the kubelet status reveals an error:

systemctl status kubelet


Failed to get system container stats for "/system.slice/kubelet.service": failed to get cgroup stats for "/system.slice/kubelet.service":

The cause is a compatibility issue between the Kubernetes and Docker versions.
Edit the file /var/lib/kubelet/kubeadm-flags.env with vim:

vim /var/lib/kubelet/kubeadm-flags.env

# append these flags to the existing KUBELET_KUBEADM_ARGS line
--runtime-cgroups=/systemd/system.slice --kubelet-cgroups=/systemd/system.slice
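
After the edit, the file should look roughly like the following; the pre-existing flags differ from machine to machine, so treat this as an illustration only:

# /var/lib/kubelet/kubeadm-flags.env (illustrative; your original flags will differ)
KUBELET_KUBEADM_ARGS="--network-plugin=cni --runtime-cgroups=/systemd/system.slice --kubelet-cgroups=/systemd/system.slice"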


After reloading and restarting, kubelet runs normally:

systemctl daemon-reload
systemctl restart kubelet


Verifying the Result

Check the node status; all three nodes should report Ready once the Calico pods are running:

kubectl get nodes


Install the Dashboard

Fetch the official Dashboard manifest:
wget https://raw.githubusercontent.com/kubernetes/dashboard/v2.0.0/aio/deploy/recommended.yaml


If raw.githubusercontent.com is unreachable, look up a usable IP at https://site.ip138.com/raw.githubusercontent.com/ and add it to the hosts file:

vim /etc/hosts

151.101.76.133 raw.githubusercontent.com

With the download succeeding this time, edit recommended.yaml.

Change the Dashboard Service to expose its port via NodePort:

kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
spec:
  type: NodePort
  ports:
    - port: 443
      targetPort: 8443
      nodePort: 30001
  selector:
    k8s-app: kubernetes-dashboard

Create the dashboard resources:

kubectl create -f recommended.yaml

After a short wait, the pods and deployments for kubernetesui/metrics-scraper and kubernetesui/dashboard are all in the ready state.
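
To check, for example:

kubectl -n kubernetes-dashboard get pods,svc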

  • kubernetesui/metrics-scraper
  • kubernetesui/dashboard

If these two images cannot be pulled normally, pull them from a mirror first and then re-tag them:

docker pull registry.cn-hangzhou.aliyuncs.com/ccgg/metrics-scraper:v1.0.4
docker pull registry.cn-hangzhou.aliyuncs.com/ccgg/dashboard:v2.0.0

docker tag registry.cn-hangzhou.aliyuncs.com/ccgg/metrics-scraper:v1.0.4 kubernetesui/metrics-scraper:v1.0.4
docker tag registry.cn-hangzhou.aliyuncs.com/ccgg/dashboard:v2.0.0 kubernetesui/dashboard:v2.0.0

Then re-run: kubectl apply -f recommended.yaml

Create the Service Account and ClusterRoleBinding

Edit auth.yaml and add the following content:

apiVersion: v1
kind: ServiceAccount
metadata:
  name: admin-user
  namespace: kubernetes-dashboard
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: admin-user
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: admin-user
  namespace: kubernetes-dashboard

Run kubectl apply -f auth.yaml


Get the token for logging in to the Kubernetes Dashboard:
kubectl -n kubernetes-dashboard describe secret $(kubectl -n kubernetes-dashboard get secret | grep admin-user | awk '{print $1}')
Log in to the Kubernetes Dashboard


Open a browser at https://<NodeIP>:30001 (the NodePort configured above; the self-signed certificate will trigger a browser warning) and enter the token to log in to the cluster management page.


After logging in successfully, you can see the running state of the k8s cluster's resources.

References

  • https://kubernetes.io/docs/concepts/overview/what-is-kubernetes/
  • https://github.com/hansonwang99/JavaCollection
  • https://developer.aliyun.com/article/745086
  • https://www.bilibili.com/video/BV1kJ411p7mV?p=15&t=94