I. Environment Planning
1. Cluster types
Kubernetes clusters broadly fall into two categories: single-master and multi-master.
Single master, multiple workers: one master node and several worker nodes. Simple to set up, but the master is a single point of failure; suitable for test environments.
Multiple masters, multiple workers: several master nodes and several worker nodes. More involved to set up, but highly available; suitable for production.
2. Installation methods
Kubernetes can be deployed in several ways; the mainstream options are kubeadm, Minikube, and binary packages.
1. Minikube: a tool for quickly standing up a single-node Kubernetes instance.
2. kubeadm: a tool for quickly bootstrapping a full Kubernetes cluster: https://kubernetes.io/docs/reference/setup-tools/kubeadm/kubeadm/
3. Binary packages: download each component's binaries from the official site and install them one by one. This approach is the most instructive for understanding the Kubernetes components: https://github.com/kubernetes/kubernetes
Note: we want a full cluster environment without too much hassle, so we use kubeadm.
3. Host planning
Role | IP address | Components |
---|---|---|
master | 192.168.253.148 | docker, kubectl, kubeadm, kubelet |
node1 | 192.168.253.149 | docker, kubectl, kubeadm, kubelet |
node2 | 192.168.253.150 | docker, kubectl, kubeadm, kubelet |
II. Environment Setup
This setup uses three Linux hosts (one master, two workers) running CentOS 8, on each of which we install docker, kubeadm (1.25.4), kubelet (1.25.4), and kubectl (1.25.4).
1. Host installation
- Pay attention to the following settings when installing the virtual machines:
- 1. OS environment: 2 CPUs, 2 GB RAM, 50 GB disk, CentOS 8
- 2. Language: Simplified Chinese / English
- 3. Software selection: Infrastructure Server
- 4. Partitioning: automatic / manual
- 5. Network: configure the address information below
IP address: 192.168.253.(148, 149, 150)
Netmask: 255.255.255.0
Default gateway: 192.168.253.254
DNS: 8.8.8.8
- 6. Hostnames:
Master node: master
Worker node: node1
Worker node: node2
2. Hostname resolution (on all three nodes)
To let the cluster nodes reach each other by name, configure hostname resolution here; in production an internal DNS server is recommended.
[root@master ~]# vim /etc/hosts
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.253.148 master.example.com master
192.168.253.149 node1.example.com node1
192.168.253.150 node2.example.com node2
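With /etc/hosts in place, a quick loop confirms that each name resolves and the hosts are reachable:

```shell
# ping each cluster member once by its short name
for h in master node1 node2; do
  ping -c 1 "$h"
done
```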
3. Time synchronization
Kubernetes requires the clocks of all cluster nodes to be closely synchronized; here we use the chronyd service to sync time over the network.
In production an internal time server is recommended.
Master: allow the worker nodes to use this host as their time server, and let it serve time even when its own upstream is unreachable
vim /etc/chrony.conf
allow 192.168.253.0/24
local stratum 10
systemctl restart chronyd
systemctl enable chronyd
hwclock -w
node1 and node2: sync from the master
vim /etc/chrony.conf
server master.example.com iburst
systemctl restart chronyd
systemctl enable chronyd
hwclock -w
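On node1/node2, chronyc can confirm that the master is actually being used as the time source:

```shell
# list configured time sources; the master should appear,
# ideally marked ^* (currently selected sync source)
chronyc sources -v
```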
4. Disable firewalld, SELinux, and postfix (on all three nodes)
systemctl stop firewalld
systemctl disable firewalld
vim /etc/selinux/config
SELINUX=disabled
setenforce 0
systemctl stop postfix
systemctl disable postfix
5. Disable the swap partition (on all three nodes)
vim /etc/fstab
Comment out the swap line:
#/dev/mapper/cs-swap none swap defaults 0 0
swapoff -a
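The kubelet refuses to start while swap is active, so it is worth verifying that it is fully off:

```shell
swapon --show   # prints nothing when no swap is active
free -m         # the Swap line should show 0 total
```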
6. Enable IP forwarding and adjust kernel parameters (on all three nodes)
[root@master ~]# vim /etc/sysctl.d/k8s.conf
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
[root@master ~]# modprobe br_netfilter
[root@master ~]# sysctl -p /etc/sysctl.d/k8s.conf
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
node1 and node2: same as above.
7. Enable IPVS support (on all three nodes)
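The /etc/sysconfig/modules/ipvs.modules script executed below is assumed to look like this (the module list matches the lsmod output further down):

```shell
cat > /etc/sysconfig/modules/ipvs.modules << 'EOF'
#!/bin/bash
# load the kernel modules kube-proxy needs for IPVS mode
modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
modprobe -- nf_conntrack
EOF
```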
[root@master ~]# chmod +x /etc/sysconfig/modules/ipvs.modules
[root@master ~]# bash /etc/sysconfig/modules/ipvs.modules
[root@master ~]# lsmod | grep -e ip_vs
ip_vs_sh 16384 0
ip_vs_wrr 16384 0
ip_vs_rr 16384 0
ip_vs 172032 6 ip_vs_rr,ip_vs_sh,ip_vs_wrr
nf_conntrack 172032 1 ip_vs
nf_defrag_ipv6 20480 2 nf_conntrack,ip_vs
libcrc32c 16384 3 nf_conntrack,xfs,ip_vs
[root@master ~]# reboot
III. Install docker (on all three nodes)
1. Switch the package mirrors
wget -O /etc/yum.repos.d/CentOS-Base.repo https://mirrors.aliyun.com/repo/Centos-vault-8.5.2111.repo
dnf -y install epel-release
wget -O /etc/yum.repos.d/docker-ce.repo https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
2. Install docker-ce
dnf -y install docker-ce --allowerasing
systemctl restart docker
systemctl enable docker
3. Add a configuration file to set a registry mirror and the systemd cgroup driver (on all three nodes)
[root@master ~]# cat > /etc/docker/daemon.json << EOF
{
"registry-mirrors": ["https://14lrk6zd.mirror.aliyuncs.com"],
"exec-opts": ["native.cgroupdriver=systemd"],
"log-driver": "json-file",
"log-opts": {
"max-size": "100m"
},
"storage-driver": "overlay2"
}
EOF
[root@master ~]# systemctl daemon-reload
[root@master ~]# systemctl restart docker
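To confirm docker picked up the systemd cgroup driver from daemon.json:

```shell
# query the active cgroup driver via docker's Go-template output
docker info --format '{{.CgroupDriver}}'
# expected: systemd
```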
node1 and node2: write the same /etc/docker/daemon.json, then run systemctl daemon-reload and systemctl restart docker on each.
IV. Install the Kubernetes components (steps 1-3 must be run on all nodes)
1. The Kubernetes packages are hosted abroad and download slowly, so switch to a domestic mirror first
[root@master ~]# cd /etc/yum.repos.d/
[root@master yum.repos.d]# vim kubernetes.repo
[root@master yum.repos.d]# cat kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
Same on node1 and node2.
2. Install the kubeadm, kubelet, and kubectl tools (to guarantee the 1.25.4 versions used below, the packages can also be pinned, e.g. kubeadm-1.25.4)
[root@master yum.repos.d]# dnf -y install kubeadm kubelet kubectl
[root@master yum.repos.d]# systemctl restart kubelet
[root@master yum.repos.d]# systemctl enable kubelet
Created symlink /etc/systemd/system/multi-user.target.wants/kubelet.service → /usr/lib/systemd/system/kubelet.service.
Same on node1 and node2.
3. Configure containerd
To make sure cluster initialization and node joins succeed later, containerd's configuration file /etc/containerd/config.toml must be adjusted. Do this on all nodes.
Change the Kubernetes image repository in /etc/containerd/config.toml to registry.aliyuncs.com/google_containers:
[root@master yum.repos.d]# containerd config default > /etc/containerd/config.toml
[root@master yum.repos.d]# vim /etc/containerd/config.toml
sandbox_image = "registry.aliyuncs.com/google_containers/pause:3.6"
[root@master yum.repos.d]# systemctl restart containerd
[root@master yum.repos.d]# systemctl enable containerd
Created symlink /etc/systemd/system/multi-user.target.wants/containerd.service → /usr/lib/systemd/system/containerd.service.
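kubeadm 1.25 defaults the kubelet to the systemd cgroup driver, so while editing config.toml it is also worth making sure containerd's runc runtime matches; in the generated file this is the SystemdCgroup option:

```shell
# switch the runc runtime to the systemd cgroup driver, then apply
sed -i 's/SystemdCgroup = false/SystemdCgroup = true/' /etc/containerd/config.toml
systemctl restart containerd
```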
4. Deploy the Kubernetes master node (run on the master)
--apiserver-advertise-address is the master's own IP, and --pod-network-cidr matches the 10.244.0.0/16 network that flannel uses by default:
[root@master ~]# kubeadm init \
--apiserver-advertise-address=192.168.253.148 \
--image-repository registry.aliyuncs.com/google_containers \
--kubernetes-version v1.25.4 \
--service-cidr=10.96.0.0/12 \
--pod-network-cidr=10.244.0.0/16
Your Kubernetes control-plane has initialized successfully!
To start using your cluster, you need to run the following as a regular user:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
Alternatively, if you are the root user, you can run:
export KUBECONFIG=/etc/kubernetes/admin.conf
You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/
Then you can join any number of worker nodes by running the following on each as root:
kubeadm join 192.168.253.148:6443 --token 7t3ksv.a0a8wuuq1yejhndx \
--discovery-token-ca-cert-hash sha256:87a3a1d46a3813c79b29e61ae3a0bc9836a971136731974822246738aad34460
Save the join command from the output above for later, and make the KUBECONFIG setting permanent:
[root@master ~]# vim /etc/profile.d/k8s.sh
[root@master ~]# cat /etc/profile.d/k8s.sh
export KUBECONFIG=/etc/kubernetes/admin.conf
[root@master ~]# source /etc/profile.d/k8s.sh
5. Install a pod network add-on (CNI: flannel)
[root@master ~]# wget https://raw.githubusercontent.com/flannel-io/flannel/master/Documentation/kube-flannel.yml
[root@master ~]# kubectl apply -f kube-flannel.yml
namespace/kube-flannel created
clusterrole.rbac.authorization.k8s.io/flannel created
clusterrolebinding.rbac.authorization.k8s.io/flannel created
serviceaccount/flannel created
configmap/kube-flannel-cfg created
daemonset.apps/kube-flannel-ds created
[root@master ~]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
master NotReady control-plane 24m v1.25.4
6. Join the worker nodes to the cluster
node1:
[root@node1 ~]# kubeadm join 192.168.253.148:6443 --token 7t3ksv.a0a8wuuq1yejhndx \
> --discovery-token-ca-cert-hash sha256:87a3a1d46a3813c79b29e61ae3a0bc9836a971136731974822246738aad34460
[preflight] Running pre-flight checks
[WARNING FileExisting-tc]: tc not found in system path
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.
Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
node2:
[root@node2 ~]# kubeadm join 192.168.253.148:6443 --token 7t3ksv.a0a8wuuq1yejhndx --discovery-token-ca-cert-hash sha256:87a3a1d46a3813c79b29e61ae3a0bc9836a971136731974822246738aad34460
7. Check node status with kubectl get nodes (nodes stay NotReady until the flannel pods have finished starting)
[root@master ~]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
master NotReady control-plane 33m v1.25.4
node1 NotReady <none> 5m34s v1.25.4
node2 NotReady <none> 22s v1.25.4
8. Create a deployment running an nginx container on the cluster, then test it
[root@master ~]# kubectl create deployment nginx --image nginx
deployment.apps/nginx created
[root@master ~]# kubectl expose deployment nginx --port 80 --type NodePort
service/nginx exposed
[root@master ~]# kubectl get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
nginx-76d6c9b8c-mjvkt 1/1 Running 0 12m 10.244.2.2 node2 <none> <none>
[root@master ~]# kubectl get services
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 19m
nginx NodePort 10.98.195.122 <none> 80:30022/TCP 16s
9. Test access
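The nginx service was exposed on NodePort 30022 (the port is assigned by the cluster; use whatever kubectl get services printed), so it is reachable from outside via any node's IP:

```shell
# request the service through the NodePort on the master's IP
curl http://192.168.253.148:30022
```

This should return the default nginx welcome page.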
10. Change the default page
[root@master ~]# kubectl exec -it pod/nginx-76d6c9b8c-mjvkt -- /bin/bash
root@nginx-76d6c9b8c-mjvkt:/# cd /usr/share/nginx/html/
root@nginx-76d6c9b8c-mjvkt:/usr/share/nginx/html# echo "renweiwei" > index.html
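After replacing index.html inside the pod, the same NodePort request verifies the change:

```shell
curl http://192.168.253.148:30022
```

The response body should now be renweiwei instead of the welcome page.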