Introduction to Kubernetes and Quick Kubernetes Deployment
I. Kubernetes Overview
Kubernetes is an open-source platform for automating the deployment, scaling, and management of containerized applications. At its core, Kubernetes is a server cluster: it runs specific programs on every node in the cluster to manage the containers on that node.
Its goal is to automate resource management, and it provides the following main features:
- Self-healing: when a container crashes, a replacement container can be started within roughly a second
- Auto-scaling: the number of running containers in the cluster can be adjusted automatically as needed
- Service discovery: a service can automatically locate the services it depends on
- Load balancing: if a service runs multiple containers, requests are automatically load-balanced across them
- Version rollback: if a newly released version turns out to be faulty, it can be rolled back to the previous version immediately
- Storage orchestration: storage volumes can be created automatically based on the needs of the containers
II. Kubernetes Components
A Kubernetes cluster consists of **control nodes (master)** and **worker nodes (node)**, each running a different set of components.
master: the cluster's control plane, responsible for cluster decisions (management)
ApiServer: the single entry point for resource operations; it receives user commands and provides authentication, authorization, API registration, and discovery
Scheduler: responsible for cluster resource scheduling; places Pods onto the appropriate node according to the configured scheduling policy
ControllerManager: responsible for maintaining cluster state, e.g. workload deployment, failure detection, auto-scaling, and rolling updates
Etcd: responsible for storing information about the cluster's resource objects
node: the cluster's data plane, responsible for providing the runtime environment for containers (the workers)
Kubelet: responsible for managing the container lifecycle, i.e. creating, updating, and destroying containers by driving the container runtime (e.g. Docker)
KubeProxy: responsible for service discovery and load balancing inside the cluster
Docker: responsible for container operations on the node
III. Deploying the Kubernetes Environment
Environment
Hostname | IP address | Software to install | Role | OS version
---|---|---|---|---
master | 192.168.37.134 | docker, kubeadm, kubelet, kubectl | control node | RedHat 8
node1 | 192.168.37.135 | docker, kubeadm, kubelet, kubectl | worker node | RedHat 8
node2 | 192.168.37.136 | docker, kubeadm, kubelet, kubectl | worker node | RedHat 8
1) Prepare the environment (all nodes)
1. Configure the yum repository
# curl -o /etc/yum.repos.d/CentOS-Base.repo https://mirrors.aliyun.com/repo/Centos-vault-8.5.2111.repo
# sed -i -e '/mirrors.cloud.aliyuncs.com/d' -e '/mirrors.aliyuncs.com/d' /etc/yum.repos.d/CentOS-Base.repo
# yum clean all
# yum makecache
2. Disable the firewall and SELinux
# systemctl disable --now firewalld
# sed -i 's/^SELINUX=.*/SELINUX=disabled/' /etc/selinux/config
# setenforce 0
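The SELinux edit can be checked on a throwaway copy of the config before touching /etc/selinux/config. A sketch (the /tmp path and file contents are made up for illustration); note the regex must be `SELINUX=.*`, not `SELINUX=*`, so that the entire existing value is replaced:

```shell
# Throwaway stand-in for /etc/selinux/config (illustration only)
printf 'SELINUX=enforcing\nSELINUXTYPE=targeted\n' > /tmp/selinux-demo

# ".*" after "SELINUX=" replaces the whole value; "=*" would merely match
# the prefix and produce a mangled line like "SELINUX=disabledenforcing".
sed -i 's/^SELINUX=.*/SELINUX=disabled/' /tmp/selinux-demo

grep '^SELINUX=' /tmp/selinux-demo   # SELINUX=disabled
```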
3. Disable the swap partition
# swapoff -a
# vim /etc/fstab    # comment out the swap line, as shown below
# /dev/mapper/rhel-swap swap swap defaults 0 0
# free -h
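The manual vim edit to /etc/fstab (commenting out the swap line) can also be scripted. A sketch on a throwaway copy (path and file contents are illustrative only):

```shell
# Throwaway stand-in for /etc/fstab (illustration only)
cat > /tmp/fstab-demo << 'EOF'
/dev/mapper/rhel-root   /       xfs     defaults        0 0
/dev/mapper/rhel-swap   swap    swap    defaults        0 0
EOF

# Comment out any uncommented entry whose mount point is "swap"
sed -ri 's/^([^#].*[[:space:]]swap[[:space:]].*)/#\1/' /tmp/fstab-demo

grep swap /tmp/fstab-demo   # the swap line is now commented out
```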
4. Set the hostname (run the matching command on each node)
# hostnamectl set-hostname master
# hostnamectl set-hostname node1
# hostnamectl set-hostname node2
5. Add name resolution to the hosts file
# cat >> /etc/hosts << EOF
192.168.37.134 master
192.168.37.135 node1
192.168.37.136 node2
EOF
6. Pass bridged IPv4 traffic to the iptables chains
# cat > /etc/sysctl.d/k8s.conf << EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
EOF
# sysctl --system
7. Set the time zone and synchronize time
# Set the time zone on all nodes (use your own local time zone; Asia/Shanghai is used here)
# timedatectl set-timezone Asia/Shanghai
# Time synchronization
# Configuration on master (control node)
# yum -y install chrony
# vim /etc/chrony.conf
# head -3 /etc/chrony.conf
.......
# Use public servers from the pool.ntp.org project.
# Please consider joining the pool (http://www.pool.ntp.org/join.html).
# Replace the default time servers with Aliyun's NTP server
pool time1.aliyun.com iburst
.......
# Start chronyd
# systemctl restart chronyd
# systemctl enable chronyd
# chronyc sources
......
MS Name/IP address Stratum Poll Reach LastRx Last sample
===============================================================================
^* 203.107.6.88 2 6 367 56 -3114us[-4960us] +/- 39ms
......
# Configuration on the nodes (worker nodes)
# yum -y install chrony
# vim /etc/chrony.conf
# head -3 /etc/chrony.conf
......
# Use public servers from the pool.ntp.org project.
# Please consider joining the pool (http://www.pool.ntp.org/join.html).
# Point the default time servers at the master host (control node)
pool master iburst
......
# Start chronyd
# systemctl restart chronyd
# systemctl enable chronyd
# chronyc sources
......
MS Name/IP address Stratum Poll Reach LastRx Last sample
===============================================================================
^? master 0 6 0 - +0ns[ +0ns] +/- 0ns
......
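On both master and the worker nodes the change to /etc/chrony.conf is a one-line substitution, so it can be scripted instead of edited in vim. A sketch on a stand-in file (the default pool line is an assumption based on the stock config and may differ on your system):

```shell
# Throwaway stand-in for /etc/chrony.conf (illustration only)
printf 'pool 2.pool.ntp.org iburst\ndriftfile /var/lib/chrony/drift\n' > /tmp/chrony-demo

# On master: point at Aliyun's NTP service; on the worker nodes the same
# substitution would use "pool master iburst" instead.
sed -i 's/^pool .*/pool time1.aliyun.com iburst/' /tmp/chrony-demo

grep '^pool' /tmp/chrony-demo   # pool time1.aliyun.com iburst
```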
8. Passwordless SSH
# Configure passwordless login; only done on the master host (control node)
ssh-keygen -t rsa
ssh-copy-id root@master
ssh-copy-id root@node1
ssh-copy-id root@node2
2) Install kubeadm, kubelet, and kubectl (all nodes)
1. Install Docker
# curl -o /etc/yum.repos.d/docker-ce.repo https://mirrors.tuna.tsinghua.edu.cn/docker-ce/linux/centos/docker-ce.repo
# sed -i 's@https://download.docker.com@https://mirrors.tuna.tsinghua.edu.cn/docker-ce@g' /etc/yum.repos.d/docker-ce.repo
# yum -y install docker-ce
# systemctl enable --now docker
# docker version
# cat > /etc/docker/daemon.json << EOF
{
  "registry-mirrors": ["https://b9pmyelo.mirror.aliyuncs.com"],
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m"
  },
  "storage-driver": "overlay2"
}
EOF
# systemctl restart docker
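A syntax error in daemon.json will stop the Docker daemon from starting, so it is worth validating the file before the restart. A sketch using a scratch copy (on a real node, point json.tool at /etc/docker/daemon.json instead; python3 is assumed to be available):

```shell
# Write the same daemon.json to a scratch path (illustration only)
mkdir -p /tmp/docker-demo
cat > /tmp/docker-demo/daemon.json << 'EOF'
{
  "registry-mirrors": ["https://b9pmyelo.mirror.aliyuncs.com"],
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": { "max-size": "100m" },
  "storage-driver": "overlay2"
}
EOF

# json.tool exits non-zero on invalid JSON, catching typos before the restart
python3 -m json.tool /tmp/docker-demo/daemon.json > /dev/null && echo "daemon.json OK"
```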
2. Install kubeadm, kubelet, and kubectl
# Add the Aliyun Kubernetes YUM repository
# cat > /etc/yum.repos.d/kubernetes.repo << EOF
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
# Install kubeadm, kubelet, and kubectl
yum -y install kubelet kubeadm kubectl
# Enable kubelet at boot (enable only; do not start it yet)
systemctl enable kubelet
# Configure containerd
# /etc/containerd/ contains a config.toml that is almost entirely commented out;
# generate a fresh config.toml instead:
containerd config default > /etc/containerd/config.toml
# In this file, find the line sandbox_image = "registry.k8s.io/pause:3.6"
# and change it to sandbox_image = "registry.aliyuncs.com/google_containers/pause:3.6":
sed -i 's#registry.k8s.io#registry.aliyuncs.com/google_containers#g' /etc/containerd/config.toml
systemctl restart containerd
systemctl enable containerd
# The file can then be copied to the two worker nodes with scp
scp /etc/containerd/config.toml root@node1:/etc/containerd/
scp /etc/containerd/config.toml root@node2:/etc/containerd/
# Then, on node1 and node2:
systemctl restart containerd
systemctl enable containerd
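The effect of the sed rewrite on the sandbox_image line can be previewed on a throwaway file before touching /etc/containerd/config.toml. A sketch (the file path is illustrative only):

```shell
# One sample line from config.toml, written to a throwaway file
echo 'sandbox_image = "registry.k8s.io/pause:3.6"' > /tmp/containerd-demo

# Same substitution as used against the real config: swap the upstream
# registry for the Aliyun mirror everywhere it appears
sed -i 's#registry.k8s.io#registry.aliyuncs.com/google_containers#g' /tmp/containerd-demo

cat /tmp/containerd-demo   # sandbox_image = "registry.aliyuncs.com/google_containers/pause:3.6"
```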
3) Deploy the Kubernetes Master
1. Initialize the cluster
# Run on the master:
[root@master ~]# kubeadm init \
--apiserver-advertise-address=192.168.37.134 \
--image-repository registry.aliyuncs.com/google_containers \
--kubernetes-version v1.28.2 \
--service-cidr=10.96.0.0/12 \
--pod-network-cidr=10.244.0.0/16
# On success, output like the following appears:
...... (truncated)
Your Kubernetes control-plane has initialized successfully!
To start using your cluster, you need to run the following as a regular user:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
Alternatively, if you are the root user, you can run:
export KUBECONFIG=/etc/kubernetes/admin.conf
You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/
Then you can join any number of worker nodes by running the following on each as root:
kubeadm join 192.168.37.134:6443 --token kklakv.i7sf6ueb90xpgrb3 \
--discovery-token-ca-cert-hash sha256:3fed6a12a77140b0d38df3dca2ba9e8fbf476250efd0dd9c7ac97f7b19ca0dfc
...... (truncated)
# Set the KUBECONFIG environment variable
# echo "export KUBECONFIG=/etc/kubernetes/admin.conf" > /etc/profile.d/k8s.sh
# source /etc/profile.d/k8s.sh
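The --service-cidr and --pod-network-cidr passed to kubeadm init must not overlap with each other or with the host network; a quick sanity check (python3 with the stdlib ipaddress module is assumed to be available):

```shell
# Verify the three address ranges used in this deployment are disjoint
python3 - << 'EOF'
import ipaddress

svc = ipaddress.ip_network("10.96.0.0/12")    # --service-cidr
pod = ipaddress.ip_network("10.244.0.0/16")   # --pod-network-cidr
host = ipaddress.ip_network("192.168.37.0/24")  # the lab network used above

for a, b in [(svc, pod), (svc, host), (pod, host)]:
    assert not a.overlaps(b), f"{a} overlaps {b}"
print("CIDRs OK")
EOF
```

Note that 10.244.0.0/16 matches the default used by the flannel manifest applied in the next step; changing --pod-network-cidr would also require editing kube-flannel.yml.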
2. Install a Pod network add-on (CNI)
# Download the flannel resource manifest
# wget https://github.com/flannel-io/flannel/releases/latest/download/kube-flannel.yml
# Create the flannel resources
# kubectl apply -f kube-flannel.yml
# Check the flannel resources
# kubectl get -f kube-flannel.yml
# Check the k8s component pods
# kubectl get pods -n kube-system
4) Join the Kubernetes Nodes
Run on the (worker) nodes.
Add new nodes to the cluster by running the kubeadm join command printed by kubeadm init (the bootstrap token expires after 24 hours by default; a fresh join command can be printed on the master with kubeadm token create --print-join-command):
kubeadm join 192.168.37.134:6443 --token kklakv.i7sf6ueb90xpgrb3 \
--discovery-token-ca-cert-hash sha256:3fed6a12a77140b0d38df3dca2ba9e8fbf476250efd0dd9c7ac97f7b19ca0dfc
# View the joined nodes from the master
# kubectl get nodes
NAME STATUS ROLES AGE VERSION
master Ready control-plane 23m v1.28.2
node1 Ready <none> 113s v1.28.2
node2 Ready <none> 108s v1.28.2
# kubectl get pods -n kube-flannel -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
kube-flannel-ds-2lck8 1/1 Running 0 11m 192.168.37.134 master <none> <none>
kube-flannel-ds-4wx4z 1/1 Running 0 2m8s 192.168.37.136 node2 <none> <none>
kube-flannel-ds-95tlk 1/1 Running 0 2m13s 192.168.37.135 node1 <none> <none>
IV. Testing the Kubernetes Cluster
# Create a pod in the Kubernetes cluster and verify that it runs normally:
# A deployment (deploy for short) is a controller: you tell it how many pods you need, and it keeps that many pods running. If one of the pods dies, the deployment creates a new one to replace it.
# kubectl create deployment nginx --image=nginx
# Check the pod status; inside the cluster, the pod can be reached via its container IP
# kubectl get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
nginx-7854ff8877-h8b84 1/1 Running 0 26s 10.244.1.3 node1 <none> <none>
# Expose the port so the pod can be reached via the host's IP
# A ClusterIP Service is reachable only from inside the cluster
# A NodePort Service exposes a port on every node, so it can be reached from outside
# kubectl expose deployment nginx --port=80 --type=NodePort
# kubectl get pod,svc
NAME READY STATUS RESTARTS AGE
pod/nginx-7854ff8877-h8b84 1/1 Running 0 48s
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 121m
service/nginx NodePort 10.106.200.214 <none> 80:30751/TCP 7s
# Delete the pod (note: the NodePort Service created above is not removed with the deployment; delete it separately with kubectl delete service nginx)
# kubectl delete deployments.apps nginx
# Check the pod status
# kubectl get pods -o wide
No resources found in default namespace.