HCIE-kubernetes(k8s)

1、kubernetes

Kubernetes (abbreviated k8s because there are 8 letters between the k and the s) is a container orchestration tool. A production environment runs thousands of containers, and managing that many containers by hand is impractical, so a tool is needed to orchestrate and manage all of them. Docker has its own orchestration tool, Swarm, which ships with Docker and needs no extra installation, whereas k8s is an independent tool that must be deployed and installed separately.
Understanding the Pod concept
A Pod is the smallest unit that a k8s cluster manages and schedules. A Pod contains at least two containers: one (the pause container) carries the Pod's network, and the remaining containers run the actual workload; the containers in a Pod share all of the Pod's resources. Kubernetes does not operate on containers directly; it controls containers through Pods.
By default one Pod runs one container (a Pod can also run several). Containers are wrapped into Pods and scheduling is done against Pods. k8s manages Pods through the Deployment controller, which has a replicas parameter: if a Deployment specifies a replica count of 3, the controller starts 3 identical Pods in the cluster, possibly spread across different nodes or placed on the same node. If the host running one of those Pods fails, the Pod on it dies; the controller removes the failed Pod and recreates that replica on another node, keeping the cluster at 3 Pods.
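As a hedged minimal sketch of that idea (the Deployment name demo and the nginx image are placeholders, not part of this guide's later examples):

cat <<EOF | kubectl apply -f -
apiVersion: apps/v1
kind: Deployment
metadata:
  name: demo
spec:
  replicas: 3                    # the controller keeps 3 identical Pods running
  selector:
    matchLabels:
      app: demo
  template:
    metadata:
      labels:
        app: demo
    spec:
      containers:
      - name: nginx
        image: nginx
EOF
kubectl get pod -o wide          # the 3 Pods may share a node or be spread across nodes

If a node carrying one of the replicas fails, the controller recreates that Pod on another node so the count stays at 3.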
The k8s architecture
Kubernetes is an open-source platform for container orchestration and management. It provides a set of core components that automate the deployment, scaling and management of containerized applications. The basic components and their roles:
Master node:
kube-apiserver: the Kubernetes API server. It exposes the Kubernetes API, receives and responds to all requests, and is used to manage and control the whole cluster.
kube-scheduler: the cluster scheduler. Based on resource requirements and constraints, it schedules Pods onto suitable nodes.
kube-controller-manager: a set of controllers that maintain the cluster's state and perform automated management, such as the Node controller, Replication controller, Endpoints controller, and the Service Account & Token controllers.
etcd: a distributed key-value store, the Kubernetes database, holding all cluster information (configuration data and state).
kubelet: the agent software; it periodically collects status information and reports it back to the apiserver.
Node nodes:
kubelet: the agent running on every Node; it manages the containers and Pods on that node and communicates with the Master.
kube-proxy: provides network proxying and load balancing for Pods and handles network traffic within the cluster.
Traffic reaches a Service (SVC) through the port the SVC exposes; kube-proxy (iptables mode by default, although ipvs is more efficient in practice) then forwards it to the backend Pods.
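A hedged minimal sketch of that path, continuing the demo placeholder above: a NodePort Service that selects Pods labelled app=demo (the names and the port are assumptions, not from this guide):

cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Service
metadata:
  name: demo-svc
spec:
  type: NodePort
  selector:
    app: demo                    # backend Pods are chosen by this label
  ports:
  - port: 80                     # cluster-internal Service port
    targetPort: 80               # container port on the Pods
    nodePort: 30080              # exposed on every node; kube-proxy forwards it to the Pods
EOF

kube-proxy watches Services and Endpoints through the apiserver and programs the iptables (or ipvs) rules that actually forward this traffic.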

2、K8s cluster setup

containerd was originally a core component of Docker Engine; it was later open-sourced and donated to the CNCF (Cloud Native Computing Foundation) and is run as an independent project. In other words, Docker Engine contains containerd: install Docker and you get containerd. containerd can also be installed on its own, without Docker: yum install -y containerd.
Early on, k8s integrated with docker and called Docker Engine's containerd to manage the container lifecycle. To support more container runtimes (such as containerd and CRI-O), k8s defined the CRI (Container Runtime Interface) standard, whose purpose is to let an orchestrator (such as Kubernetes) interoperate with different container runtimes. Docker, however, predates the standard and does not satisfy it: Docker Engine is a complete stack of its own, does not implement the CRI, and was never going to change its underlying architecture just to fit the CRI standard defined for the k8s orchestrator. So k8s introduced a temporary workaround, dockershim, which lets kubelet (the k8s agent) drive Docker Engine through the CRI.
As of version 1.7, k8s has integrated kubelet with containerd; in the end it is always containerd, the container runtime, that gets called. If Docker Engine is not integrated with k8s via cri-dockerd, the docker command cannot see the k8s (crictl) images, and vice versa.
docker (docker): the docker command-line tool; no separate installation needed.
ctr (containerd): the containerd command-line tool; no separate installation needed, it ships with containerd and is used mostly for testing or development.
nerdctl (containerd): a containerd client whose usage matches the docker command syntax. It must be installed separately.
crictl (kubernetes): a command-line tool that follows the CRI specification, usually used to inspect and manage the container runtime and images on a kubelet node. It has no tag or push subcommands.
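A quick hedged comparison of the four tools on a node where containerd is the runtime (containerd keeps Kubernetes images in its k8s.io namespace, hence the -n flag):

docker images                    # Docker Engine's own store; separate from k8s unless cri-dockerd is wired in
ctr -n k8s.io images ls          # low-level containerd client, listing the k8s.io namespace
nerdctl -n k8s.io images         # docker-compatible syntax on top of containerd
crictl images                    # the CRI view used by kubelet; crictl has no tag/push subcommands
crictl ps                        # running containers as the kubelet sees them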
System environment configuration
Prepare the environment: make 3 full clones of a CentOS Stream 8 virtual machine
master 10.1.1.200 2 vCPUs/8G master node, with outbound internet access (NAT mode)
node1 10.1.1.201 2 vCPUs/8G node1, with outbound internet access
node2 10.1.1.202 2 vCPUs/8G node2, with outbound internet access (at least 2 vCPUs; 4 vCPUs, 4G RAM and a 100G disk are recommended)
Fully clone the 3 VMs, then change each VM's hostname and IP address. Note: when changing the IP address, check what the NIC is called. Here it is ifcfg-ens160; with VMware 17 a CentOS Stream 8 install names the NIC ens160, while with VMware 16 it is called ens33.

All three hosts need their hostname and IP changed, e.g. hostnamectl set-hostname master
cd /etc/sysconfig/network-scripts/
[root@kmaster network-scripts]# cat ifcfg-ens160
TYPE=Ethernet
BOOTPROTO=none
NAME=ens160
DEVICE=ens160
ONBOOT=yes
IPADDR=10.1.1.200
NETMASK=255.255.255.0
GATEWAY=10.1.1.2
DNS1=10.1.1.2

Upload the Stream8-k8s-v1.27.0.sh script to the home directory on all three hosts and run it on all three.
Note: the only thing to change in the script is the NIC name: hostip=$(ifconfig ens160 |grep -w "inet" |awk '{print $2}')

[root@master ~]# chmod +x Stream8-k8s-v1.27.0.sh
[root@master ~]# ./Stream8-k8s-v1.27.0.sh
Or run the script with the shell directly: [root@master ~]# sh Stream8-k8s-v1.27.0.sh

If the script errors out here, it is because the system defaults to downloading from CentOS-Stream-Extras-common.repo; disabling that repo (enabled=0) fixes it.
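One hedged way to disable it (the repo file name is taken from the error above; check the exact file and repo id with yum repolist all before relying on this):

sed -i 's/^enabled=1/enabled=0/' /etc/yum.repos.d/CentOS-Stream-Extras-common.repo
yum clean all && yum makecache       # refresh the metadata after the change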
The content of Stream8-k8s-v1.27.0.sh:

#!/bin/bash
# CentOS Stream 8 install kubernetes 1.27.0
# the number of available CPUs 1 is less than the required 2
# k8s requires at least 2 virtual CPUs
# Usage: run this script on all nodes; after every node is configured, copy the step-11 command and run it on the master node only to initialize the cluster.
#1 rpm
echo '###00 Checking RPM###'
yum install -y yum-utils vim bash-completion net-tools wget
echo "00 configuration successful ^_^"
#Basic Information
echo '###01 Basic Information###'
hostname=`hostname`
hostip=$(ifconfig ens160 |grep -w "inet" |awk '{print $2}')
echo 'The Hostname is:'$hostname
echo 'The IPAddress is:'$hostip

#2 /etc/hosts
echo '###02 Checking File:/etc/hosts###'
hosts=$(cat /etc/hosts)
result01=$(echo $hosts |grep -w "${hostname}")
if [[ "$result01" != "" ]]
then
	echo "Configuration passed ^_^"
else
	echo "hostname and ip not set,configuring......"
	echo "$hostip $hostname" >> /etc/hosts
	echo "configuration successful ^_^"
fi
echo "02 configuration successful ^_^"

#3 firewall & selinux
echo '###03 Checking Firewall and SELinux###'
systemctl stop firewalld
systemctl disable firewalld
se01="SELINUX=disabled"
se02=$(cat /etc/selinux/config |grep -w "^SELINUX")
if [[ "$se01" == "$se02" ]]
then
	echo "Configuration passed ^_^"
else
	echo "SELinux Not Closed,configuring......"
	sed -i 's/^SELINUX=enforcing/SELINUX=disabled/g' /etc/selinux/config
	echo "configuration successful ^_^"
fi
echo "03 configuration successful ^_^"

#4 swap
echo '###04 Checking swap###'
swapoff -a
sed -i "s/^.*swap/#&/g" /etc/fstab
echo "04 configuration successful ^_^"

#5 docker-ce
echo '###05 Checking docker###'
yum-config-manager --add-repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
echo 'list docker-ce versions'
yum list docker-ce --showduplicates | sort -r
yum install -y docker-ce
systemctl start docker 
systemctl enable docker
cat <<EOF > /etc/docker/daemon.json
{
  "registry-mirrors": ["https://cc2d8woc.mirror.aliyuncs.com"]
}
EOF
systemctl restart docker
echo "05 configuration successful ^_^"

#6 iptables
echo '###06 Checking iptables###'
cat <<EOF > /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
EOF
sysctl -p /etc/sysctl.d/k8s.conf
echo "06 configuration successful ^_^"

#7 cgroup(systemd/cgroupfs)
echo '###07 Checking cgroup###'
containerd config default > /etc/containerd/config.toml
sed -i "s#registry.k8s.io/pause#registry.aliyuncs.com/google_containers/pause#g" /etc/containerd/config.toml
sed -i 's/SystemdCgroup = false/SystemdCgroup = true/g' /etc/containerd/config.toml
systemctl restart containerd
echo "07 configuration successful ^_^"

#8 kubenetes.repo
echo '###08 Checking repo###'
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
echo "08 configuration successful ^_^"

#9 crictl
echo "Checking crictl"
cat <<EOF > /etc/crictl.yaml 
runtime-endpoint: unix:///run/containerd/containerd.sock
image-endpoint: unix:///run/containerd/containerd.sock
timeout: 5
debug: false
EOF
echo "09 configuration successful ^_^"

#10 kube1.27.0
echo "Checking kube"
yum install -y kubelet-1.27.0 kubeadm-1.27.0 kubectl-1.27.0 --disableexcludes=kubernetes
systemctl enable --now kubelet
echo "10 configuration successful ^_^"
echo "Congratulations ! The basic configuration has been completed"

#11 Initialize the cluster
# Initialize the cluster on the master host only
# kubeadm init --image-repository registry.aliyuncs.com/google_containers --kubernetes-version=v1.27.0 --pod-network-cidr=10.244.0.0/16

Cluster setup
1、Initialize the k8s cluster: copy step 11 from the script and run it on the master host; cluster initialization is done on the master only.

kubeadm init --image-repository registry.aliyuncs.com/google_containers --kubernetes-version=v1.27.0 --pod-network-cidr=10.244.0.0/16

2、Configure the environment variables; after cluster initialization, run the following commands on the master host only

[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy
Your Kubernetes control-plane has initialized successfully!
To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/
Then you can join any number of worker nodes by running the following on each as root:
kubeadm join 10.1.1.200:6443 --token k5ieso.no0jtanbbgodlta7 \
        --discovery-token-ca-cert-hash sha256:418a41208ad32b04f39b5ba70ca4b59084450a65d3c852b8dc2b7f53d462a540
[root@master ~]# mkdir -p $HOME/.kube
[root@master ~]# sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
[root@master ~]# sudo chown $(id -u):$(id -g) $HOME/.kube/config
[root@master ~]# echo 'export KUBECONFIG=/etc/kubernetes/admin.conf' >> /etc/profile
[root@master ~]# source /etc/profile

3、Join the worker nodes to the cluster: run the join command on each node

[root@node1 ~]# kubeadm join 10.1.1.200:6443 --token k5ieso.no0jtanbbgodlta7 --discovery-token-ca-cert-hash sha256:418a41208ad32b04f39b5ba70ca4b59084450a65d3c852b8dc2b7f53d462a540
[preflight] Running pre-flight checks
        [WARNING FileExisting-tc]: tc not found in system path
        [WARNING Hostname]: hostname "node1" could not be reached
        [WARNING Hostname]: hostname "node1": lookup node1 on 10.1.1.2:53: no such host
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
[root@master ~]# kubectl get no
NAME     STATUS     ROLES           AGE   VERSION
master   NotReady   control-plane   18m   v1.27.0
[root@master ~]# kubectl get no
NAME     STATUS     ROLES           AGE   VERSION
master   NotReady   control-plane   19m   v1.27.0
node1    NotReady   <none>          38s   v1.27.0
node2    NotReady   <none>          30s   v1.27.0

4、Install the Calico network components (master node only); upload tigera-operator-3-26-1.yaml and custom-resources-3-26-1.yaml to the home directory

Before the Calico network components are installed the cluster status is NotReady; after installation, wait a while and it becomes Ready.
Check the cluster status: [root@master ~]# kubectl get nodes  or  kubectl get no
NAME     STATUS     ROLES           AGE   VERSION
master   NotReady   control-plane   97m   v1.27.0
node1    NotReady   <none>          78m   v1.27.0
node2    NotReady   <none>          78m   v1.27.0
List all namespaces: [root@master ~]# kubectl get namespaces  or  kubectl get ns
NAME              STATUS   AGE
default           Active   97m
kube-node-lease   Active   97m
kube-public       Active   97m
kube-system       Active   97m
Install the Tigera Calico operator
[root@master ~]# kubectl create -f tigera-operator-3-26-1.yaml    this adds a tigera-operator namespace
Configure custom-resources.yaml: [root@kmaster ~]# vim custom-resources-3-26-1.yaml
Change the CIDR of the IP address pool so that it matches the --pod-network-cidr used at cluster initialization (already changed in this file): cidr: 10.244.0.0/16
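For reference, the stanza being edited looks roughly like this in the upstream custom-resources.yaml (field values other than cidr are upstream defaults and may differ between Calico versions):

apiVersion: operator.tigera.io/v1
kind: Installation
metadata:
  name: default
spec:
  calicoNetwork:
    ipPools:
    - blockSize: 26
      cidr: 10.244.0.0/16          # must match the --pod-network-cidr given to kubeadm init
      encapsulation: VXLANCrossSubnet
      natOutgoing: Enabled
      nodeSelector: all()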
[root@master ~]# kubectl create -f custom-resources-3-26-1.yaml   this adds a calico-system namespace

List all namespaces: [root@master ~]# kubectl get namespaces  or  kubectl get ns
[root@master ~]# kubectl get namespaces
NAME              STATUS   AGE
calico-system     Active   17m
default           Active   119m
kube-node-lease   Active   119m
kube-public       Active   119m
kube-system       Active   119m
tigera-operator   Active   18m
This takes time, so wait patiently. If the installation goes wrong, for example kubectl get ns shows a STATUS of Terminating,
delete the manifests in reverse order and apply them again; if the deletion hangs, reboot all nodes after deleting and then rerun the installation
[root@master ~]# kubectl delete -f custom-resources-3-26-1.yaml
[root@master ~]# kubectl delete -f tigera-operator-3-26-1.yaml
Watch the Calico Pods; once they are all Running, the cluster status becomes normal. (On a slow network this can take hours.)
[root@kmaster ~]# watch kubectl get pods -n calico-system
NAME                                       READY   STATUS    RESTARTS      AGE
calico-kube-controllers-6c677477b7-6tthc   1/1     Running   0             12h
calico-node-cdjl8                          1/1     Running   0             12h
calico-node-cxcwb                          1/1     Running   0             12h
calico-node-rg75v                          1/1     Running   0             12h
calico-typha-69c55766b7-54ct4              1/1     Running   2 (10m ago)   12h
calico-typha-69c55766b7-nxzl9              1/1     Running   2 (10m ago)   12h
csi-node-driver-kdqqx                      2/2     Running   0             12h
csi-node-driver-qrjz5                      2/2     Running   0             12h
csi-node-driver-wsxx6                      2/2     Running   0             12h
Check the cluster status again: [root@master ~]# kubectl get nodes
NAME     STATUS   ROLES           AGE   VERSION
master   Ready    control-plane   13h   v1.27.0
node1    Ready    <none>          13h   v1.27.0
node2    Ready    <none>          13h   v1.27.0

CNI Plugin: the CNI network plugin; Calico hooks into kubelet through it to provide Pod networking.
Calico Node: the per-node Calico agent, responsible for managing node routing information and policy rules and for creating Calico's virtual network devices.
Calico Controller: the Calico network policy controller. It allows "NetworkPolicy" resource objects to be created and, based on the policy they define, creates iptables rules on the relevant nodes for traffic entering or leaving Pods.
Calico Typha (optional add-on): Typha lets Calico talk to etcd directly instead of going through kube-apiserver. It is usually recommended once the cluster exceeds 50 nodes, to reduce the load on kube-apiserver. Each calico-typha Pod can carry connections from roughly 100-200 Calico nodes; do not exceed 200.
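A hedged way to look at these components in the cluster built above (resource names follow the tigera-operator install used earlier and may differ in other Calico versions):

kubectl get pods -n calico-system -o wide               # calico-node, calico-typha, calico-kube-controllers, csi-node-driver
kubectl get deployment calico-typha -n calico-system    # the operator scales the typha replica count with cluster size
kubectl get tigerastatus                                # overall status of the operator-managed components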

Tab completion; the bash-completion package was already installed by the script

[root@master ~]# rpm -qa |grep bash-completion
bash-completion-2.7-5.el8.noarch
[root@master ~]# vim /etc/profile
add the following on line 2 of the file
source <(kubectl completion bash)
[root@master ~]# source /etc/profile

metrics-server: querying node CPU and memory utilization with kubectl

[root@master ~]# kubectl top nodes
error: Metrics API not available
To query node CPU and memory utilization with the top command, the metrics-server add-on is required.
Download the yaml file; the upstream project is https://github.com/kubernetes-sigs/metrics-server
Installing metrics-server directly usually fails because the image cannot be pulled over the network:
kubectl apply -f https://github.com/kubernetes-sigs/metrics-server/releases/latest/download/components.yaml
Since the connection abroad is unreliable, download components.yaml locally, upload it, and change two lines:
  - --kubelet-insecure-tls
  image: registry.cn-hangzhou.aliyuncs.com/cloudcs/metrics-server:v0.6.2
After the change:
- args:
        - --kubelet-insecure-tls
        - --cert-dir=/tmp
        - --secure-port=4443
        - --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname
        - --kubelet-use-node-status-port
        - --metric-resolution=15s
        image: registry.cn-hangzhou.aliyuncs.com/cloudcs/metrics-server:v0.6.2
Upload components.yaml to the master home directory: [root@master ~]# kubectl apply -f components.yaml
serviceaccount/metrics-server created
clusterrole.rbac.authorization.k8s.io/system:aggregated-metrics-reader created
clusterrole.rbac.authorization.k8s.io/system:metrics-server created
rolebinding.rbac.authorization.k8s.io/metrics-server-auth-reader created
clusterrolebinding.rbac.authorization.k8s.io/metrics-server:system:auth-delegator created
clusterrolebinding.rbac.authorization.k8s.io/system:metrics-server created
service/metrics-server created
deployment.apps/metrics-server created
apiservice.apiregistration.k8s.io/v1beta1.metrics.k8s.io created

[root@master ~]# kubectl top nodes
NAME     CPU(cores)   CPU%   MEMORY(bytes)   MEMORY%
master   129m         3%     1338Mi          35%
node1    59m          1%     726Mi           19%
node2    74m          1%     692Mi           18%
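metrics-server also serves Pod-level metrics through the same API; a couple of hedged usage examples:

kubectl top pods -A                  # CPU/memory of Pods in all namespaces
kubectl top pods -n kube-system      # a single namespace
kubectl top nodes --sort-by=cpu      # nodes sorted by CPU usage (recent kubectl versions)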

3、Pod management operations

Understanding namespaces and the related operations:
A k8s cluster has multiple namespaces and a namespace holds multiple Pods; by default one Pod runs one container (a Pod can also run several). Pods are isolated by namespace and run inside a namespace.

List all namespaces: [root@master ~]# kubectl get namespaces  or  kubectl get ns
Show the current namespace: kubectl config get-contexts; an empty NAMESPACE column means default
[root@master ~]# kubectl config get-contexts
CURRENT   NAME                          CLUSTER      AUTHINFO           NAMESPACE
*         kubernetes-admin@kubernetes   kubernetes   kubernetes-admin

List the Pods in the current namespace: kubectl get pod
List the Pods of a given namespace: with administrator rights (the default /etc/kubernetes/admin.conf is an admin kubeconfig) Pods in other namespaces can be viewed with -n
kubectl get pod -n tigera-operator
kubectl get pod -n calico-system
kubectl get pod -n kube-system
Switch namespaces
kubectl config set-context --current --namespace kube-system
kubectl config set-context --current --namespace default
Create a namespace: [root@master ~]# kubectl create ns ns1

Typing these commands every time is tedious, so here is a small helper script, kubens. Copy it to /bin and make it executable with chmod +x /bin/kubens, then use kubens directly. Its content:

#!/usr/bin/env bash
#
# kubens(1) is a utility to switch between Kubernetes namespaces.

# Copyright 2017 Google Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

[[ -n $DEBUG ]] && set -x

set -eou pipefail
IFS=$'\n\t'

KUBENS_DIR="${HOME}/.kube/kubens"

usage() {
  cat <<"EOF"
USAGE:
  kubens                    : list the namespaces in the current context
  kubens <NAME>             : change the active namespace of current context
  kubens -                  : switch to the previous namespace in this context
  kubens -h,--help          : show this message
EOF
  exit 1
}

current_namespace() {
  local cur_ctx
  cur_ctx="$(current_context)"
  ns="$(kubectl config view -o=jsonpath="{.contexts[?(@.name==\"${cur_ctx}\")].context.namespace}")"
  if [[ -z "${ns}" ]]; then
    echo "default"
  else
    echo "${ns}"
  fi
}

current_context() {
  kubectl config view -o=jsonpath='{.current-context}'
}

get_namespaces() {
  kubectl get namespaces -o=jsonpath='{range .items[*].metadata.name}{@}{"\n"}{end}'
}

escape_context_name() {
  echo "${1//\//-}"
}

namespace_file() {
  local ctx="$(escape_context_name "${1}")"
  echo "${KUBENS_DIR}/${ctx}"
}

read_namespace() {
  local f
  f="$(namespace_file "${1}")"
  [[ -f "${f}" ]] && cat "${f}"
  return 0
}

save_namespace() {
  mkdir -p "${KUBENS_DIR}"
  local f saved
  f="$(namespace_file "${1}")"
  saved="$(read_namespace "${1}")"

  if [[ "${saved}" != "${2}" ]]; then
    printf %s "${2}" > "${f}"
  fi
}

switch_namespace() {
  local ctx="${1}"
  kubectl config set-context "${ctx}" --namespace="${2}"
  echo "Active namespace is \"${2}\".">&2
}

set_namespace() {
  local ctx prev
  ctx="$(current_context)"
  prev="$(current_namespace)"

  if grep -q ^"${1}"\$ <(get_namespaces); then
    switch_namespace "${ctx}" "${1}"

    if [[ "${prev}" != "${1}" ]]; then
      save_namespace "${ctx}" "${prev}"
    fi
  else
    echo "error: no namespace exists with name \"${1}\".">&2
    exit 1
  fi
}

list_namespaces() {
  local yellow darkbg normal
  yellow=$(tput setaf 3)
  darkbg=$(tput setab 0)
  normal=$(tput sgr0)

  local cur_ctx_fg cur_ctx_bg
  cur_ctx_fg=${KUBECTX_CURRENT_FGCOLOR:-$yellow}
  cur_ctx_bg=${KUBECTX_CURRENT_BGCOLOR:-$darkbg}

  local cur ns_list
  cur="$(current_namespace)"
  ns_list=$(get_namespaces)
  for c in $ns_list; do
    if [[ -t 1 && -z "${NO_COLOR:-}" && "${c}" = "${cur}" ]]; then
      echo "${cur_ctx_bg}${cur_ctx_fg}${c}${normal}"
    else
      echo "${c}"
    fi
  done
}

swap_namespace() {
  local ctx ns
  ctx="$(current_context)"
  ns="$(read_namespace "${ctx}")"
  if [[ -z "${ns}" ]]; then
    echo "error: No previous namespace found for current context." >&2
    exit 1
  fi
  set_namespace "${ns}"
}

main() {
  if [[ "$#" -eq 0 ]]; then
    list_namespaces
  elif [[ "$#" -eq 1 ]]; then
    if [[ "${1}" == '-h' || "${1}" == '--help' ]]; then
      usage
    elif [[ "${1}" == "-" ]]; then
      swap_namespace
    elif [[ "${1}" =~ ^-(.*) ]]; then
      echo "error: unrecognized flag \"${1}\"" >&2
      usage
    else
      set_namespace "${1}"
    fi
  else
    echo "error: too many flags" >&2
    usage
  fi
}

main "$@"

List all namespaces and highlight the one currently in use: kubens

Switch namespaces: [root@master ~]# kubens kube-system
Context "kubernetes-admin@kubernetes" modified.
Active namespace is "kube-system".
[root@master ~]# kubens default
Context "kubernetes-admin@kubernetes" modified.
Active namespace is "default".

How is a Pod created?
In a k8s cluster the smallest unit k8s schedules is the pod, and the pod runs containers (containerd). K8s calls containerd; you can either (1) install containerd on its own, or (2) install docker, which bundles containerd. Option 2 is used here. The dockershim stopgap was removed in k8s 1.24, so docker and k8s are now two separate systems: the docker command cannot see the k8s (crictl) images, and vice versa.
With docker you build custom images and push them to a registry, and k8s pulls them from the registry to use. Images are built with a Dockerfile; the focus of k8s is managing containers, not building images.
There are 2 ways to create a Pod: on the command line, or by writing a yaml file (recommended)

Get help: [root@master ~]# kubectl run --help

Create a Pod on the command line; the image is pulled first, then the Pod is created
[root@master ~]# kubectl run nginx --image=nginx
pod/nginx created
List the Pods in the current namespace: [root@master ~]# kubectl get pod
NAME    READY   STATUS              RESTARTS   AGE
nginx   0/1     ContainerCreating   0          5s
[root@master ~]# kubectl get pod
NAME    READY   STATUS    RESTARTS   AGE
nginx   1/1     Running   0          2m2s
Show Pod details: [root@master ~]# kubectl get pod -o wide
NAME    READY   STATUS    RESTARTS   AGE     IP               NODE    NOMINATED NODE   READINESS GATES
nginx   1/1     Running   0          2m18s   10.244.166.131   node1   <none>           <none>

kubectl run nginx2 --image nginx  the equals sign after --image is optional
Show Pod details: [root@master ~]# kubectl get pod -o wide
NAME     READY   STATUS    RESTARTS   AGE   IP               NODE    NOMINATED NODE   READINESS GATES
nginx    1/1     Running   0          24m   10.244.166.131   node1   <none>           <none>
nginx2   1/1     Running   0          84s   10.244.104.4     node2   <none>           <none>
Note that the image is pulled onto whichever node the Pod runs on; crictl images on that node will show it.
[root@master ~]# kubectl describe pod pod1   show a Pod's description; useful detail for troubleshooting
[root@master ~]# kubectl describe nodes node1   show a node's description
[root@master ~]# kubectl run pod2 --image nginx --image-pull-policy IfNotPresent
Image pull policies
Always: always check the registry for the newest image and download it, whether or not a local copy exists.
Never: only use the local image, never download.
IfNotPresent: download only when the image is not present locally.
Creating a Pod by writing a yaml file; --dry-run=client must use the equals sign, not a space
Redirect the output to pod3.yaml: [root@master ~]# kubectl run pod3 --image nginx --dry-run=client -o yaml > pod3.yaml
[root@master ~]# cat pod3.yaml
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    run: pod3
  name: pod3
spec:
  containers:
  - image: nginx
    name: pod3
    resources: {}
  dnsPolicy: ClusterFirst
  restartPolicy: Always
status: {}
[root@master ~]# kubectl apply -f pod3.yaml
pod/pod3 created
[root@master ~]# kubectl get pod -o wide
NAME     READY   STATUS    RESTARTS   AGE   IP               NODE    NOMINATED NODE   READINESS GATES
nginx    1/1     Running   0          33m   10.244.166.131   node1   <none>           <none>
nginx2   1/1     Running   0          10m   10.244.104.4     node2   <none>           <none>
pod3     1/1     Running   0          72s   10.244.166.132   node1   <none>           <none>

Delete Pods

[root@master ~]# kubectl delete pods nginx
pod "nginx" deleted
[root@master ~]# kubectl delete pod nginx2
pod "nginx2" deleted
[root@master ~]# kubectl delete pod/pod1
pod "pod1" deleted
[root@master ~]# kubectl delete pods/pod2
pod "pod2" deleted
[root@master ~]# kubectl delete -f pod3.yaml     delete the Pod via its file (the file itself is not deleted)
pod "pod3" deleted

Image pull policy imagePullPolicy: when the image tag is latest the default policy is Always; with any other tag the default is IfNotPresent. --image-pull-policy Never corresponds to imagePullPolicy: Never in the yaml file
[root@master ~]# kubectl run pod2 --image nginx --image-pull-policy Never --dry-run=client -o yaml > pod2.yaml
Never: use the local image (already downloaded) directly, never download; if there is no local image, creation fails.
IfNotPresent: use the local image if present, otherwise download it.
Always: download every time, whether or not a local copy exists.

[root@master ~]# cat pod2.yaml
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    run: pod2
  name: pod2
spec:
  containers:
  - image: nginx
    imagePullPolicy: Never
    name: pod2
    resources: {}
  dnsPolicy: ClusterFirst
  restartPolicy: Always
status: {}

yaml syntax rules:
1. Indent with spaces only; tabs are not allowed
2. The number of spaces per level is not fixed, but items at the same level must be left-aligned
Multiple containers in one Pod: pod4 runs the containers c1 and c2

[root@master ~]# vim pod4.yaml 
[root@master ~]# cat pod4.yaml 
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    run: pod4
  name: pod4
spec:
  containers:
  - image: nginx
    imagePullPolicy: IfNotPresent
    args:
    - sleep
    - "3600"
    name: c1
    resources: {}
  - image: nginx
    imagePullPolicy: IfNotPresent
    name: c2
    resources: {}
  dnsPolicy: ClusterFirst
  restartPolicy: Always
status: {}
[root@master ~]#  kubectl apply -f pod4.yaml
pod/pod4 created
[root@master ~]#  kubectl get pod -o wide
NAME   READY   STATUS    RESTARTS   AGE   IP               NODE    NOMINATED NODE   READINESS GATES
pod4   2/2     Running   0          20s   10.244.166.135   node1   <none>           <none>

On node1, list the running containers: crictl ps

[root@master ~]# kubectl exec -ti pod4 /bin/bash
kubectl exec [POD] [COMMAND] is DEPRECATED and will be removed in a future version. Use kubectl exec [POD] -- [COMMAND] instead.
Defaulted container "c1" out of: c1, c2
root@pod4:/# exit
exit
[root@master ~]# kubectl exec -ti pod4 -- bash     the standard form; the message says which container you entered by default
Defaulted container "c1" out of: c1, c2
root@pod4:/# exit
exit
[root@master ~]# kubectl exec -ti pod4 -c c2 -- bash    the -c flag selects which container to enter
root@pod4:/# ls
bin  boot  dev  docker-entrypoint.d  docker-entrypoint.sh  etc  home  lib  lib32  lib64  libx32  media  mnt  opt  proc  root  run  sbin  srv  sys  tmp  usr  var

Container restart policy restartPolicy (default Always)
Always: restart no matter how the container errored or exited
Never: never restart, no matter how the container errored or exited
OnFailure: if the command finished and exited normally, do not restart; restart on abnormal exit or error

Always: restart no matter how the container errored or exited
[root@master ~]# cat pod2.yaml
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    run: pod2
  name: pod2
spec:
  containers:
  - image: nginx
    imagePullPolicy: Never
    args:
    - sleep
    - "10"
    name: pod2
    resources: {}
  dnsPolicy: ClusterFirst
  restartPolicy: Always
status: {}

[root@master ~]# kubectl apply -f pod2.yaml
pod/pod2 created
[root@master ~]# kubectl get pod -o wide
NAME   READY   STATUS             RESTARTS      AGE    IP               NODE    NOMINATED NODE   READINESS GATES
pod2   0/1     CrashLoopBackOff   4 (32s ago)   3m3s   10.244.166.141   node1   <none>           <none>
Never: never restart, no matter how the container errored or exited
[root@master ~]# cat pod2.yaml
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    run: pod2
  name: pod2
spec:
  containers:
  - image: nginx
    imagePullPolicy: Never
    args:
    - sleep
    - "10"
    name: pod2
    resources: {}
  dnsPolicy: ClusterFirst
  restartPolicy: Never
status: {}

[root@master ~]# kubectl apply -f pod2.yaml
pod/pod2 created
[root@master ~]# kubectl get pod -o wide
NAME   READY   STATUS      RESTARTS   AGE   IP               NODE    NOMINATED NODE   READINESS GATES
pod2   0/1     Completed   0          15s   10.244.166.137   node1   <none>           <none>
OnFailure: a command that completes and exits normally is not restarted; abnormal exits are restarted
OnFailure, normal exit
[root@master ~]# cat pod2.yaml
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    run: pod2
  name: pod2
spec:
  containers:
  - image: nginx
    imagePullPolicy: Never
    args:
    - sleep
    - "10"
    name: pod2
    resources: {}
  dnsPolicy: ClusterFirst
  restartPolicy: OnFailure
status: {}

[root@master ~]# kubectl apply -f pod2.yaml
pod/pod2 created
[root@master ~]# kubectl get pod -o wide
NAME   READY   STATUS      RESTARTS   AGE   IP               NODE    NOMINATED NODE   READINESS GATES
pod2   0/1     Completed   0          21s   10.244.166.138   node1   <none>           <none>
OnFailure, abnormal exit: restarts repeatedly
sleeperror is a bogus command, so the container exits abnormally and restarts over and over (CrashLoopBackOff)
[root@master ~]# cat pod2.yaml
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    run: pod2
  name: pod2
spec:
  containers:
  - image: nginx
    imagePullPolicy: Never
    args:
    - sleeperror
    - "10"
    name: pod2
    resources: {}
  dnsPolicy: ClusterFirst
  restartPolicy: OnFailure
status: {}
[root@master ~]# kubectl apply -f pod2.yaml
pod/pod2 created
[root@master ~]# kubectl get pod -w    short for kubectl get pod --watch    watches the Pod and streams status updates
NAME   READY   STATUS             RESTARTS     AGE
pod2   0/1     CrashLoopBackOff   1 (8s ago)   9s
[root@master ~]# kubectl get pod -o wide
NAME   READY   STATUS             RESTARTS        AGE   IP               NODE    NOMINATED NODE   READINESS GATES
pod2   0/1     CrashLoopBackOff   6 (4m46s ago)   10m   10.244.166.139   node1   <none>           <none>

Labels; there are node (host) labels and Pod labels

Show node labels: [root@master ~]# kubectl get nodes --show-labels
NAME     STATUS   ROLES           AGE   VERSION   LABELS
master   Ready    control-plane   19h   v1.27.0   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=master,kubernetes.io/os=linux,node-role.kubernetes.io/control-plane=,node.kubernetes.io/exclude-from-external-load-balancers=
node1    Ready    <none>          19h   v1.27.0   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=node1,kubernetes.io/os=linux
node2    Ready    <none>          19h   v1.27.0   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=node2,kubernetes.io/os=linux
Add a label to a node: [root@master ~]# kubectl label nodes node2 disk=ssd
node/node2 labeled
[root@master ~]# kubectl get nodes node2 --show-labels
NAME    STATUS   ROLES    AGE   VERSION   LABELS
node2   Ready    <none>   19h   v1.27.0   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,disk=ssd,kubernetes.io/arch=amd64,kubernetes.io/hostname=node2,kubernetes.io/os=linux

Note: labels are key-value pairs, with = separating key and value, e.g.
disk=ssd    aaa/bbb=ccc    aaa.bbb/ccc=ddd
Remove a label from a node: [root@master ~]# kubectl label nodes node2 disk-
node/node2 unlabeled
[root@master ~]#  kubectl get nodes node2 --show-labels
NAME    STATUS   ROLES    AGE   VERSION   LABELS
node2   Ready    <none>   19h   v1.27.0   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=node2,kubernetes.io/os=linux
Add labels to a Pod; a Pod gets a run label by default
[root@master ~]# cat pod1.yaml
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    run: pod1
    aaa: bbb
    ccc: memeda
  name: pod1
spec:
  containers:
  - image: nginx
    name: pod1
    resources: {}
  dnsPolicy: ClusterFirst
  restartPolicy: Always
status: {}

[root@master ~]# kubectl apply -f pod1.yaml
pod/pod1 created
Show Pod labels: [root@master ~]# kubectl get pod pod1 --show-labels
NAME   READY   STATUS    RESTARTS   AGE   LABELS
pod1   1/1     Running   0          18s   aaa=bbb,ccc=memeda,run=pod1

Add a label to a Pod on the command line: [root@master ~]# kubectl label pod pod1 abc=hehehe
pod/pod1 labeled
[root@master ~]# kubectl get pod pod1 --show-labels
NAME   READY   STATUS    RESTARTS   AGE     LABELS
pod1   1/1     Running   0          7m19s   aaa=bbb,abc=hehehe,ccc=memeda,run=pod1
Remove labels from a Pod: [root@master ~]# kubectl label pod pod1 abc-
pod/pod1 unlabeled
[root@master ~]# kubectl label pod pod1 aaa-
pod/pod1 unlabeled
[root@master ~]# kubectl label pod pod1 ccc-
pod/pod1 unlabeled
[root@master ~]# kubectl get pod pod1 --show-labels
NAME   READY   STATUS    RESTARTS   AGE     LABELS
pod1   1/1     Running   0          8m50s   run=pod1

Node labels let a Pod choose which node it is scheduled to (i.e. pin a Pod to a node); Pod labels are what controllers match to manage Pods.
A common misconception: the node has a label disk=ssd and the Pod also defines a label disk=ssd, so the Pod will surely land on the disk=ssd node.
In fact a Pod is not matched to a labelled node through its own labels, but through the Pod's nodeSelector field. To match a node label, a Pod uses its own nodeSelector, not its own labels; a Pod's labels exist mainly for controllers such as Deployment to match and manage it.
The Deployment controller manages Pods through their labels; the scheduler decides whether a Pod is placed on node1 or node2.
Run a Pod on a specified node with nodeSelector

[root@master ~]# kubectl label nodes node2 disk=ssd
node/node2 labeled
[root@master ~]# kubectl get nodes node2 --show-labels
NAME    STATUS   ROLES    AGE   VERSION   LABELS
node2   Ready    <none>   20h   v1.27.0   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,disk=ssd,kubernetes.io/arch=amd64,kubernetes.io/hostname=node2,kubernetes.io/os=linux
[root@master ~]# cat pod1.yaml
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    run: pod1
    aaa: bbb
    ccc: memeda
  name: pod1
spec:
  nodeSelector:
    disk: ssd
  containers:
  - image: nginx
    name: pod1
    resources: {}
  dnsPolicy: ClusterFirst
  restartPolicy: Always
status: {}

[root@master ~]# kubectl apply -f pod1.yaml
pod/pod1 created
[root@master ~]# kubectl get pod -o wide
NAME   READY   STATUS             RESTARTS       AGE   IP               NODE    NOMINATED NODE   READINESS GATES
pod1   1/1     Running            0              7s    10.244.104.7     node2   <none>           <none>
If the Pod specifies a node label that does not exist, it simply hangs in Pending
[root@master ~]# cat pod1.yaml
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    run: pod1
    aaa: bbb
    ccc: memeda
  name: pod1
spec:
  nodeSelector:
    disk: ssdaaa
  containers:
  - image: nginx
    name: pod1
    resources: {}
  dnsPolicy: ClusterFirst
  restartPolicy: Always
status: {}

[root@master ~]# kubectl apply -f pod1.yaml
pod/pod1 created
[root@master ~]# kubectl get pod -o wide
NAME   READY   STATUS             RESTARTS       AGE   IP               NODE     NOMINATED NODE   READINESS GATES
pod1   0/1     Pending            0              12s   <none>           <none>   <none>           <none>

Special labels manage role names; a node's role is defined by a special label

The master node's control-plane role comes from the special label node-role.kubernetes.io/control-plane=
[root@master ~]# kubectl get nodes
NAME     STATUS   ROLES           AGE   VERSION
master   Ready    control-plane   20h   v1.27.0
node1    Ready    <none>          20h   v1.27.0
node2    Ready    <none>          20h   v1.27.0
[root@master ~]# kubectl get nodes --show-labels
NAME     STATUS   ROLES           AGE   VERSION   LABELS
master   Ready    control-plane   20h   v1.27.0   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=master,kubernetes.io/os=linux,node-role.kubernetes.io/control-plane=,node.kubernetes.io/exclude-from-external-load-balancers=
node1    Ready    <none>          20h   v1.27.0   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=node1,kubernetes.io/os=linux
node2    Ready    <none>          20h   v1.27.0   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,disk=ssd,kubernetes.io/arch=amd64,kubernetes.io/hostname=node2,kubernetes.io/os=linux
Remove the role: [root@master ~]# kubectl label nodes master node-role.kubernetes.io/control-plane-
node/master unlabeled
[root@master ~]# kubectl get nodes
NAME     STATUS   ROLES    AGE   VERSION
master   Ready    <none>   20h   v1.27.0
node1    Ready    <none>   20h   v1.27.0
node2    Ready    <none>   20h   v1.27.0
Add a custom role: [root@master ~]# kubectl label nodes master node-role.kubernetes.io/master=
node/master labeled
[root@master ~]# kubectl get nodes
NAME     STATUS   ROLES    AGE   VERSION
master   Ready    master   20h   v1.27.0
node1    Ready    <none>   20h   v1.27.0
node2    Ready    <none>   20h   v1.27.0
[root@master ~]# kubectl label nodes node1 node-role.kubernetes.io/node1=
node/node1 labeled
[root@master ~]# kubectl label nodes node2 node-role.kubernetes.io/node2=
node/node2 labeled
[root@master ~]# kubectl get nodes
NAME     STATUS   ROLES    AGE   VERSION
master   Ready    master   20h   v1.27.0
node1    Ready    node1    20h   v1.27.0
node2    Ready    node2    20h   v1.27.0

4、Pod scheduling management

cordon
drain: a combination of two actions (cordon + evict)
taint
When a Pod is created, the scheduler's algorithm spreads it across the nodes.
cordon (node maintenance, a temporary cordon): once a node is cordoned, no new Pods can be scheduled onto it, while Pods already running on it are unaffected. The node shows SchedulingDisabled.

Cordon a node: [root@master ~]#  kubectl cordon node1
node/node1 cordoned
[root@master ~]# kubectl get nodes
NAME     STATUS                     ROLES    AGE   VERSION
master   Ready                      master   26h   v1.27.0
node1    Ready,SchedulingDisabled   node1    25h   v1.27.0
node2    Ready                      node2    25h   v1.27.0
[root@master ~]# kubectl apply -f pod3.yaml
pod/pod3 created
[root@master ~]# kubectl get pod -o wide
NAME   READY   STATUS    RESTARTS   AGE   IP             NODE    NOMINATED NODE   READINESS GATES
pod3   1/1     Running   0          29s   10.244.104.8   node2   <none>           <none>
Uncordon a node: [root@master ~]# kubectl get nodes
NAME     STATUS                     ROLES    AGE   VERSION
master   Ready                      master   26h   v1.27.0
node1    Ready,SchedulingDisabled   node1    26h   v1.27.0
node2    Ready                      node2    26h   v1.27.0
[root@master ~]#  kubectl uncordon node1
node/node1 uncordoned
[root@master ~]# kubectl get nodes
NAME     STATUS   ROLES    AGE   VERSION
master   Ready    master   26h   v1.27.0
node1    Ready    node1    26h   v1.27.0
node2    Ready    node2    26h   v1.27.0
[root@master ~]# kubectl apply -f pod2.yaml
pod/pod2 created
[root@master ~]# kubectl get pod -o wide
NAME   READY   STATUS             RESTARTS      AGE     IP               NODE    NOMINATED NODE   READINESS GATES
pod2   0/1     CrashLoopBackOff   2 (16s ago)   65s     10.244.166.142   node1   <none>           <none>
pod3   1/1     Running            0             5m51s   10.244.104.8     node2   <none>           <none>

drain combines two actions (cordon + evict): new Pods cannot be scheduled onto the node, and the Pods already running on it are deleted and recreated on other nodes. (Eviction is not migration; the original Pod is deleted and a new one is created elsewhere.)

[root@master ~]# kubectl run pod5 --image nginx --dry-run=client -o yaml > pod5.yaml
[root@master ~]# kubectl apply -f pod5.yaml
pod/pod5 created
[root@master ~]# kubectl get pod -o wide
NAME   READY   STATUS    RESTARTS   AGE   IP               NODE    NOMINATED NODE   READINESS GATES
pod5   1/1     Running   0          45s   10.244.166.143   node1   <none>           <none>
[root@master ~]# kubectl drain node1
node/node1 cordoned
error: unable to drain node "node1" due to error:[cannot delete DaemonSet-managed Pods (use --ignore-daemonsets to ignore): calico-system/calico-node-cdjl8, calico-system/csi-node-driver-kdqqx, kube-system/kube-proxy-dwgth, cannot delete Pods declare no controller (use --force to override): default/pod5], continuing command...
There are pending nodes to be drained:
 node1
cannot delete DaemonSet-managed Pods (use --ignore-daemonsets to ignore): calico-system/calico-node-cdjl8, calico-system/csi-node-driver-kdqqx, kube-system/kube-proxy-dwgth
cannot delete Pods declare no controller (use --force to override): default/pod5
[root@master ~]# kubectl drain node1 --ignore-daemonsets --force
node/node1 already cordoned
Warning: ignoring DaemonSet-managed Pods: calico-system/calico-node-cdjl8, calico-system/csi-node-driver-kdqqx, kube-system/kube-proxy-dwgth; deleting Pods that declare no controller: default/pod5
evicting pod default/pod5
evicting pod calico-system/calico-typha-69c55766b7-54ct4
evicting pod tigera-operator/tigera-operator-5f4668786-l7kgw
evicting pod calico-apiserver/calico-apiserver-7f679fc855-g58ph
pod/tigera-operator-5f4668786-l7kgw evicted
pod/calico-typha-69c55766b7-54ct4 evicted
pod/calico-apiserver-7f679fc855-g58ph evicted
pod/pod5 evicted
node/node1 drained

[root@master ~]# kubectl get pod
No resources found in default namespace.
[root@master ~]# kubectl get nodes
NAME     STATUS                     ROLES    AGE   VERSION
master   Ready                      master   27h   v1.27.0
node1    Ready,SchedulingDisabled   node1    26h   v1.27.0
node2    Ready                      node2    26h   v1.27.0
[root@master ~]# kubectl uncordon node1     lift the cordon
node/node1 uncordoned
[root@master ~]# kubectl get nodes
NAME     STATUS   ROLES    AGE   VERSION
master   Ready    master   27h   v1.27.0
node1    Ready    node1    26h   v1.27.0
node2    Ready    node2    26h   v1.27.0

To really see eviction in action, manage the Pods with a Deployment controller.

[root@master ~]# kubectl create deployment web --image nginx --dry-run=client -o yaml > web.yaml
[root@master ~]# vim web.yaml   set the replica count to 5 (replicas: 5) and use the local image (imagePullPolicy: Never)
[root@master ~]# cat web.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  creationTimestamp: null
  labels:
    app: web
  name: web
spec:
  replicas: 5
  selector:
    matchLabels:
      app: web
  strategy: {}
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: web
    spec:
      containers:
      - image: nginx
        imagePullPolicy: Never
        name: nginx
        resources: {}
status: {}
[root@master ~]# kubectl apply -f web.yaml
deployment.apps/web created
[root@master ~]# kubectl get pod -o wide
NAME                   READY   STATUS    RESTARTS   AGE   IP               NODE    NOMINATED NODE   READINESS GATES
web-74b7d5df6b-2dps5   1/1     Running   0          11s   10.244.166.146   node1   <none>            <none>
web-74b7d5df6b-7bv9n   1/1     Running   0          11s   10.244.166.145   node1   <none>            <none>
web-74b7d5df6b-7mc2k   1/1     Running   0          11s   10.244.104.10    node2   <none>            <none>
web-74b7d5df6b-9hvfl   1/1     Running   0          11s   10.244.166.144   node1   <none>            <none>
web-74b7d5df6b-h7cns   1/1     Running   0          11s   10.244.104.9     node2   <none>            <none>
[root@master ~]# kubectl drain node1 --ignore-daemonsets
node/node1 cordoned
Warning: ignoring DaemonSet-managed Pods: calico-system/calico-node-cdjl8, calico-system/csi-node-driver-kdqqx, kube-system/kube-proxy-dwgth
evicting pod default/web-74b7d5df6b-9hvfl
evicting pod calico-system/calico-typha-69c55766b7-25m7j
evicting pod default/web-74b7d5df6b-2dps5
evicting pod default/web-74b7d5df6b-7bv9n
pod/calico-typha-69c55766b7-25m7j evicted
pod/web-74b7d5df6b-7bv9n evicted
pod/web-74b7d5df6b-9hvfl evicted
pod/web-74b7d5df6b-2dps5 evicted
node/node1 drained
[root@master ~]# kubectl get pod -o wide
NAME                   READY   STATUS    RESTARTS   AGE     IP              NODE    NOMINATED NODE   READINESS GATES
web-74b7d5df6b-7jnxb   1/1     Running   0          4m30s   10.244.104.13   node2   <none>           <none>
web-74b7d5df6b-7mc2k   1/1     Running   0          5m14s   10.244.104.10   node2   <none>           <none>
web-74b7d5df6b-8zztw   1/1     Running   0          4m30s   10.244.104.12   node2   <none>           <none>
web-74b7d5df6b-9s57p   1/1     Running   0          4m30s   10.244.104.11   node2   <none>           <none>
web-74b7d5df6b-h7cns   1/1     Running   0          5m14s   10.244.104.9    node2   <none>           <none>
[root@master ~]# kubectl uncordon node1    lift the cordon
node/node1 uncordoned
[root@master ~]# kubectl get nodes
NAME     STATUS   ROLES    AGE   VERSION
master   Ready    master   27h   v1.27.0
node1    Ready    node1    27h   v1.27.0
node2    Ready    node2    27h   v1.27.0

taint
No matter how Pods are scheduled, they are never placed on the master node automatically. Why? Because the master node carries a taint by default.

[root@master ~]# kubectl describe nodes master |grep -i taint
Taints:             node-role.kubernetes.io/control-plane:NoSchedule
[root@master ~]# kubectl describe nodes node1 |grep -i taint
Taints:             <none>
[root@master ~]# kubectl describe nodes node2 |grep -i taint
Taints:             <none>

How does a taint differ from cordon? Both block scheduling. A cordoned node cannot receive any new Pods at all, whereas a tainted node can still receive Pods that declare a matching toleration.
Add a taint; :NoSchedule is a fixed suffix

[root@master ~]#  kubectl taint node node2 aaa=bbb:NoSchedule
node/node2 tainted
[root@master ~]# kubectl describe nodes node2 |grep -i taint
Taints:             aaa=bbb:NoSchedule
[root@master ~]# kubectl delete -f web.yaml
deployment.apps "web" deleted
[root@master ~]# kubectl get pod -o wide
No resources found in default namespace.
[root@master ~]# kubectl apply -f web.yaml    the tainted node node2 will not run the Pods
deployment.apps/web created
As the experiment shows, all 5 Pods run on node1, because node2 now has a taint.
[root@master ~]# kubectl get pod -o wide   
NAME                   READY   STATUS    RESTARTS   AGE   IP               NODE    NOMINATED NODE   READINESS GATES
web-74b7d5df6b-6ln6s   1/1     Running   0          10s   10.244.166.150   node1   <none>           <none>
web-74b7d5df6b-8crhm   1/1     Running   0          10s   10.244.166.147   node1   <none>           <none>
web-74b7d5df6b-d8w6q   1/1     Running   0          10s   10.244.166.148   node1   <none>           <none>
web-74b7d5df6b-ngk5f   1/1     Running   0          10s   10.244.166.149   node1   <none>           <none>
web-74b7d5df6b-sxzb9   1/1     Running   0          10s   10.244.166.151   node1   <none>           <none>

Remove a taint with :NoSchedule-

[root@master ~]# kubectl taint node node2 aaa=bbb:NoSchedule-
node/node2 untainted
[root@master ~]#  kubectl describe nodes node2 |grep -i taint
Taints:             <none>
[root@master ~]# kubectl delete -f web.yaml
deployment.apps "web" deleted
[root@master ~]# kubectl apply -f web.yaml
deployment.apps/web created
[root@master ~]# kubectl get pod -o wide
NAME                   READY   STATUS    RESTARTS   AGE   IP               NODE    NOMINATED NODE   READINESS GATES
web-74b7d5df6b-g29qk   1/1     Running   0          4s    10.244.104.14    node2   <none>           <none>
web-74b7d5df6b-h7x9w   1/1     Running   0          4s    10.244.166.154   node1   <none>           <none>
web-74b7d5df6b-n9k6c   1/1     Running   0          4s    10.244.104.15    node2   <none>           <none>
web-74b7d5df6b-pwrgt   1/1     Running   0          4s    10.244.166.153   node1   <none>           <none>
web-74b7d5df6b-tt9st   1/1     Running   0          4s    10.244.166.152   node1   <none>           <none>

Toleration
Although node2 is tainted, Pods can still be placed on it by adding a toleration to the Pod; here the toleration only needs to equal the taint (operator: "Equal"). See the official docs (the k8s site also has a Chinese version).

[root@master ~]# kubectl delete -f web.yaml
deployment.apps "web" deleted
[root@master ~]# kubectl taint node node2 aaa=bbb:NoSchedule
node/node2 tainted
[root@master ~]# kubectl describe nodes node2 |grep -i taint
Taints:             aaa=bbb:NoSchedule
[root@master ~]# vim web.yaml   a toleration with key: "aaa" and value: "bbb" (operator: "Equal") is enough
[root@master ~]# cat web.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  creationTimestamp: null
  labels:
    app: web
  name: web
spec:
  replicas: 5
  selector:
    matchLabels:
      app: web
  strategy: {}
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: web
    spec:
      containers:
      - image: nginx
        imagePullPolicy: Never
        name: nginx
        resources: {}
      tolerations:
      - key: "aaa"
        operator: "Equal"
        value: "bbb"
        effect: "NoSchedule"
status: {}
[root@master ~]# kubectl apply -f web.yaml
deployment.apps/web created
[root@master ~]# kubectl get pod -o wide
NAME                   READY   STATUS    RESTARTS   AGE   IP               NODE    NOMINATED NODE   READINESS GATES
web-6bcf86dff4-68vwc   1/1     Running   0          7s    10.244.166.159   node1   <none>           <none>
web-6bcf86dff4-9ntmd   1/1     Running   0          7s    10.244.104.18    node2   <none>           <none>
web-6bcf86dff4-bt2d4   1/1     Running   0          7s    10.244.104.19    node2   <none>           <none>
web-6bcf86dff4-ls89g   1/1     Running   0          7s    10.244.166.158   node1   <none>           <none>
web-6bcf86dff4-wxbwj   1/1     Running   0          7s    10.244.166.160   node1   <none>           <none>

5、Storage management

Volumes: by default docker stores data in the container layer, so deleting the container deletes the data as well.
In k8s the Pod runs the containers; if no storage is specified, any data written inside the Pod is gone once the Pod is deleted.
Local storage: emptyDir (temporary), hostPath (persistent)
emptyDir is temporary (not persistent: when the Pod is deleted, the random directory on the host is deleted too)
With emptyDir, a random directory is created on the node running the Pod and the Pod's containers mount it; when the Pod is deleted, the random directory is removed as well.

[root@master ~]# kubectl run podvol1 --image nginx --image-pull-policy Never --dry-run=client -o yaml > podvol1.yaml
[root@master ~]# vim podvol1.yaml
[root@master ~]# cat podvol1.yaml
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    run: podvol1
  name: podvol1
spec:
  volumes:
  - name: vol1
    emptyDir: {}
  containers:
  - image: nginx
    imagePullPolicy: Never
    name: podvol1
    resources: {}
    volumeMounts:
    - name: vol1
      mountPath: /abc       # mount path inside the container
  dnsPolicy: ClusterFirst
  restartPolicy: Always
status: {}
[root@master ~]# kubectl apply -f podvol1.yaml
pod/podvol1 created
[root@master ~]# kubectl get pod -o wide
NAME      READY   STATUS    RESTARTS   AGE   IP               NODE    NOMINATED NODE   READINESS GATES
podvol1   1/1     Running   0          64s   10.244.166.162   node1   <none>           <none>
[root@master ~]# kubectl exec -ti pods/podvol1 -- bash
root@podvol1:/# cd /abc
root@podvol1:/abc# touch mpp.txt
root@podvol1:/abc# ls
mpp.txt
On node1, the node running the Pod, locate the file
[root@node1 ~]# find / -name mpp.txt
/var/lib/kubelet/pods/d4038fb5-ca09-4291-af0e-75a6de6ebe20/volumes/kubernetes.io~empty-dir/vol1/mpp.txt
Once the Pod is deleted its data goes with it; node1 no longer has the file
[root@master ~]# kubectl delete -f podvol1.yaml
pod "podvol1" deleted
[root@node1 ~]# find / -name mpp.txt
[root@node1 ~]#

hostPath (persistent, similar to docker -v /host_dir:/docker_dir) mounts a directory you specify on the host.

[root@master ~]# cat podvol2.yaml
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    run: podvol2
  name: podvol2
spec:
  volumes:
  - name: vol1
    emptyDir: {}
  - name: vol2
    hostPath:
      path: /host_dir      # mount directory on the host node
  containers:
  - image: nginx
    imagePullPolicy: Never
    name: podvol2
    resources: {}
    volumeMounts:
    - name: vol2
      mountPath: /container_dir      # mount path inside the container
  dnsPolicy: ClusterFirst
  restartPolicy: Always
status: {}
[root@master ~]# kubectl apply -f podvol2.yaml
pod/podvol2 created
[root@master ~]#  kubectl get pod -o wide
NAME      READY   STATUS    RESTARTS   AGE   IP               NODE    NOMINATED NODE   READINESS GATES
podvol2   1/1     Running   0          7s    10.244.166.164   node1   <none>          <none>
[root@master ~]#  kubectl exec -ti pods/podvol2 -- bash
root@podvol2:/# cd /container_dir
root@podvol2:/container_dir# touch 2.txt

[root@node1 ~]# cd /host_dir
[root@node1 host_dir]# ls
2.txt

[root@master ~]#  kubectl delete -f podvol2.yaml
pod "podvol2" deleted
[root@node1 host_dir]# ls
2.txt

Network storage: NFS
If the Pod above ran on node1 and its directory is a persistent directory on node1, then if the Pod is later scheduled onto node2 the data cannot be found and the records are gone; there is no way to synchronise between nodes. Network storage solves this.
Many kinds of network storage can serve as the backend: nfs, ceph, iscsi and so on. NFS is used as the example.
Fully clone another VM as the NFS server (an existing node can also be used to host NFS).

yum install -y yum-utils vim bash-completion net-tools wget nfs-utils    or simply yum install -y nfs-utils
[root@nfs ~]# mkdir /nfsdata
[root@nfs ~]# systemctl start nfs-server.service
[root@nfs ~]# systemctl enable nfs-server.service 
[root@nfs ~]# systemctl stop firewalld.service 
[root@nfs ~]# systemctl disable firewalld.service
[root@nfs ~]# setenforce 0
[root@nfs ~]# vim /etc/selinux/config 
[root@nfs ~]# vim /etc/exports
[root@nfs ~]# cat /etc/exports
/nfsdata *(rw,async,no_root_squash)
[root@nfs ~]# exportfs -arv      the configuration takes effect without restarting the nfs service
no_root_squash: a user who accesses the shared directory as root keeps root privileges on that share. This is insecure; writing as root is not recommended.
exportfs options:
-a export (or unexport) all directories
-r re-export all directories
-u unexport a directory
-v verbose, show the shared directories
Although it is the Pod that will use NFS, the node the Pod runs on is what actually connects to NFS, so the nodes (the clients) also need the NFS client installed.
[root@node1 ~]# yum install -y nfs-utils
[root@node2 ~]# yum install -y nfs-utils
Try mounting from a client
[root@node1 ~]# mount 10.1.1.202:/nfsdata /mnt
[root@node1 ~]# df -Th
10.1.1.202:/nfsdata nfs4       96G  4.8G   91G   6% /mnt
[root@node1 ~]# umount /mnt
[root@master ~]# vim podvol3.yaml
[root@master ~]# cat podvol3.yaml
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    run: podvol3
  name: podvol3
spec:
  volumes:
  - name: vol1
    emptyDir: {}
  - name: vol2
    hostPath:
      path: /host_data
  - name: vol3
    nfs:
      server: 10.1.1.202     # the NFS service here was set up on node2
      path: /nfsdata
  containers:
  - image: nginx
    imagePullPolicy: Never
    name: podvol3
    resources: {}
    volumeMounts:
    - name: vol3
      mountPath: /container_data
  dnsPolicy: ClusterFirst
  restartPolicy: Always
status: {}
[root@master ~]# kubectl apply -f podvol3.yaml
pod/podvol3 created
[root@master ~]# kubectl get pod -o wide
NAME      READY   STATUS              RESTARTS   AGE     IP               NODE    NOMINATED NODE   READINESS GATES
mypod     0/1     ContainerCreating   0          6d18h   <none>           node1   <none>           <none>
podvol3   1/1     Running             0          25m     10.244.166.167   node1   <none>           <none>
Note: if the Pod stays in ContainerCreating, the firewall may not have been turned off.
[root@master ~]# kubectl exec -ti podvol3 -- bash
root@podvol3:/# df -Th
Filesystem          Type     Size  Used Avail Use% Mounted on
10.1.1.202:/nfsdata nfs4      96G  4.8G   91G   6% /container_data
Write data inside the Pod
root@podvol3:/# cd /container_data
root@podvol3:/container_data# touch haha.txt
root@podvol3:/container_data# ls
haha.txt
On node1, check whether the host has the mount
[root@node1 ~]# df -Th
10.1.1.202:/nfsdata nfs4       96G  4.8G   91G   6% /var/lib/kubelet/pods/fa892ec1-bee8-465d-a428-01ea4dcaf4ee/volumes/kubernetes.io~nfs/vol3
Check on the NFS server
[root@nfs ~]# ls /nfsdata/
haha.txt

When many clients and users connect to the same data on one storage backend, directories collide, and a later user might even delete the whole share or someone else's data, which is a security risk.
Persistent storage (the core storage topic)
Persistent storage involves two resource types: PersistentVolume and PersistentVolumeClaim. So-called persistent storage is simply k8s's mechanism for managing storage; the backend is whatever storage you attach (NFS, Ceph, ...).
PV and PVC are not a storage backend themselves; a PV can be backed by NFS, Ceph and so on. They are a way of managing Pod storage, and a PV and a PVC bind one-to-one.
Below, a storage server (NFS) shares one or more directories (/data). The administrator creates PersistentVolumes (PVs) in the k8s cluster; a PV is globally visible (to the whole cluster) and is associated with a directory on the storage server. Users then create their own PVCs; PVCs are isolated per namespace and visible only within it, while PVs are global. Finally a PVC is bound one-to-one to a PV.
Persistent volumes (static PVC): create the PV (tied to the backend storage), create the PVC (bound to the PV), create the Pod (using the PVC)
Create the PV; this ties the PV to the storage directory

[root@master ~]# vim pv01.yaml
[root@master ~]# cat pv01.yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv01
spec:
  capacity:
    storage: 5Gi
  volumeMode: Filesystem
  accessModes:
    - ReadWriteOnce
  storageClassName: manual
  nfs:
    path: "/nfsdata"
    server: 10.1.1.202
[root@master ~]# kubectl apply -f pv01.yaml
persistentvolume/pv01 created
[root@master ~]# kubectl get pv
NAME   CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM   STORAGECLASS   REASON   AGE
pv01   5Gi        RWO            Retain           Available           manual                  10s
Show the PV's details
[root@master ~]# kubectl describe pv pv01

CLAIM is empty because the PV is not yet bound to any PVC. A PV is global: switching namespaces still shows it.

[root@master ~]# kubens calico-system
[root@master ~]# kubens
calico-apiserver
calico-system
default
kube-node-lease
kube-public
kube-system
tigera-operator
[root@master ~]# kubectl get pv
NAME   CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM   STORAGECLASS   REASON   AGE
pv01   5Gi        RWO            Retain           Available           manual                  75s
[root@master ~]# kubens default
Context "kubernetes-admin@kubernetes" modified.
Active namespace is "default".

Create the PVC; once created, it binds to the PV automatically.

[root@master ~]# vim pvc01.yaml
[root@master ~]# cat pvc01.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc01
spec:
  storageClassName: manual
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi
[root@master ~]# kubectl apply -f pvc01.yaml
persistentvolumeclaim/pvc01 created
[root@master ~]# kubectl get pv
NAME   CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM           STORAGECLASS   REASON   AGE
pv01   5Gi        RWO            Retain           Bound    default/pvc01   manual                  2m16s
[root@master ~]# kubectl get pvc
NAME    STATUS   VOLUME   CAPACITY   ACCESS MODES   STORAGECLASS   AGE
pvc01   Bound    pv01     5Gi        RWO            manual         14s

What binds a PVC to a PV? First accessModes, which must be identical in the PV and the PVC; then size: the size requested by the PVC must be less than or equal to the PV's size for the binding to succeed.
Also, one PV can only bind to one PVC. For example, switch namespaces, apply the same PVC again and check its status: it stays in Pending forever.

[root@master ~]# kubens calico-system
Context "kubernetes-admin@kubernetes" modified.
Active namespace is "calico-system".
[root@master ~]# kubectl get pv
NAME   CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM           STORAGECLASS   REASON   AGE
pv01   5Gi        RWO            Retain           Bound    default/pvc01   manual                  12m
[root@master ~]# kubectl get pvc
No resources found in calico-system namespace.
[root@master ~]# kubectl apply -f pvc01.yaml
persistentvolumeclaim/pvc01 created
[root@master ~]# kubectl get pvc
NAME    STATUS    VOLUME   CAPACITY   ACCESS MODES   STORAGECLASS   AGE
pvc01   Pending                                      manual         4s

Whoever acts first gets the binding, and it is uncontrollable because namespaces are independent and cannot see each other.
Besides accessModes and size, there is also storageClassName. Its value is arbitrary, but it has the highest priority: if the PV defines it and the PVC does not, the binding fails even when accessModes and size match.

[root@master ~]# kubectl delete pvc pvc01
persistentvolumeclaim "pvc01" deleted
[root@master ~]# kubectl get pvc
No resources found in calico-system namespace.
[root@master ~]# kubens default
Context "kubernetes-admin@kubernetes" modified.
Active namespace is "default".
[root@master ~]# kubens
calico-apiserver
calico-system
default
kube-node-lease
kube-public
kube-system
tigera-operator
[root@master ~]# kubectl delete pvc pvc01
persistentvolumeclaim "pvc01" deleted
[root@master ~]# kubectl delete pv pv01
persistentvolume "pv01" deleted
[root@master ~]# kubectl get pvc
No resources found in default namespace.
[root@master ~]# kubectl get pv
No resources found
[root@master ~]# vim pv01.yaml
[root@master ~]# cat pv01.yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv01
spec:
  capacity:
    storage: 5Gi
  volumeMode: Filesystem
  accessModes:
    - ReadWriteOnce
  storageClassName: aaa
  nfs:
    path: "/nfsdata"
    server: 10.1.1.202
[root@master ~]# kubectl apply -f pv01.yaml
persistentvolume/pv01 created
[root@master ~]# kubectl get pv
NAME   CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM   STORAGECLASS   REASON   AGE
pv01   5Gi        RWO            Retain           Available           aaa                     4s
[root@master ~]# cat pvc01.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc01
spec:
  storageClassName: manual
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi
[root@master ~]# kubectl apply -f pvc01.yaml
persistentvolumeclaim/pvc01 created
[root@master ~]# kubectl get pvc
NAME    STATUS    VOLUME   CAPACITY   ACCESS MODES   STORAGECLASS   AGE
pvc01   Pending                                      manual         4s

创建pod(注意:pod要能正常挂载,pvc01必须先处于Bound状态;需要把pv和pvc的storageClassName改成一致,例如都用manual或都用aaa,重新apply绑定成功后再创建pod)

[root@master ~]# vim podpv.yaml
[root@master ~]# cat podpv.yaml
apiVersion: v1
kind: Pod
metadata:
  name: mypod
spec:
  containers:
    - name: myfrontend
      image: nginx
      imagePullPolicy: Never
      volumeMounts:
      - mountPath: "/var/www/html"
        name: mypd
  volumes:
    - name: mypd
      persistentVolumeClaim:
        claimName: pvc01
[root@master ~]# kubectl apply -f podpv.yaml
pod/mypod created
[root@master ~]# kubectl get pod -o wide
NAME      READY   STATUS    RESTARTS   AGE   IP               NODE    NOMINATED NODE   READINESS GATES
mypod     1/1     Running   0          24s   10.244.166.168   node1   <none>           <none>
[root@master ~]# kubectl exec -ti mypod -- bash
root@mypod:/# cd /var/www/html/
root@mypod:/var/www/html# ls
haha.txt
root@mypod:/var/www/html# touch ddd
如果写数据提示permission denied,说明没有权限。注意/etc/exports里面的权限要加上no_root_squash(NFS服务端的导出配置可参考下面的示例)。
root@mypod:/var/www/html# ls
haha.txt  ddd
root@mypod:/var/www/html# exit
exit
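补充:NFS服务端导出配置的一个参考写法(示意,假设NFS服务器为node2、共享目录为/nfsdata、客户端网段为10.1.1.0/24,按实际环境调整):
[root@node2 ~]# cat /etc/exports
/nfsdata 10.1.1.0/24(rw,sync,no_root_squash)
[root@node2 ~]# exportfs -r      重新导出,使配置生效
[root@node2 ~]# exportfs -v      查看当前生效的导出项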

静态pvc,创建PV(关联后端存储),创建PVC(关联PV),创建POD(关联pvc)
删除,POD删除,存储数据依然存在。PVC删除,存储数据依然存在。PV删除,存储数据依然存在。

删除所有资源,删除pod、pvc、pv
[root@master ~]# kubectl delete -f podpv.yaml 
pod "mypod" deleted
[root@master ~]# kubectl delete pvc pvc01 
persistentvolumeclaim "pvc01" deleted
[root@master ~]# kubectl delete pv pv01 
persistentvolume "pv01" deleted
删除资源后,NFS数据依然存在
[root@yw ~]# ls /nfsdata/
haha.txt  ddd

持久卷(静态pvc)优缺点
优点:如果误删除了pvc,那么数据依然被保留下来,不会丢数据。
缺点:如果我确实要删除pvc,那么需要手工把底层存储数据单独删除。
动态卷(动态pvc)
动态卷(动态pvc)可以实现删除PVC时,底层存储数据也随之删除:
POD删除,存储数据不会删除;
PVC删除,PV和底层存储数据自动删除。
动态卷不需要手工单独创建pv,只需要创建pvc,pv会自动创建出来;删除pvc,pv随之自动删除,底层存储数据也会随之删除。
总结:pv和pvc都是手工创建的,叫持久卷(静态pvc);只创建pvc,pv自动跟着创建出来的,就是动态卷(动态pvc)。
查看pv时RECLAIM POLICY 有两种常用取值:Delete、Retain;
Delete:表示删除PVC的时候,PV也会一起删除,同时也删除PV所指向的实际存储空间;
Retain:表示删除PVC的时候,PV不会一起删除,而是变成Released状态等待管理员手动清理;
Delete:
优点:实现数据卷的全生命周期管理,应用删除PVC会自动删除后端云盘。能有效避免出现大量闲置云盘没有删除的情况。
缺点:删除PVC时候一起把后端云盘一起删除,如果不小心误删pvc,会出现后端数据丢失;
Retain:
优点:后端云盘需要手动清理,所以出现误删的可能性比较小;
缺点:没有实现数据卷全生命周期管理,常常会造成pvc、pv删除后,后端云盘闲置没清理,长此以往导致大量磁盘浪费。
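补充:已有pv的回收策略也可以直接在线修改,不必重建(示意,假设pv名为pv01,把Retain改成Delete):
[root@master ~]# kubectl patch pv pv01 -p '{"spec":{"persistentVolumeReclaimPolicy":"Delete"}}'
[root@master ~]# kubectl get pv pv01 -o jsonpath='{.spec.persistentVolumeReclaimPolicy}'      确认修改结果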

查看当前的回收策略
[root@master ~]# kubectl get pv
NAME   CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM           STORAGECLASS   REASON   AGE
pv01   5Gi        RWO            Retain           Bound    default/pvc01   manual                  12m
看RECLAIM POLICY,Retain表示不回收数据:删除pvc后,pv进入Released状态,不能被新的pvc再次绑定,并长期保持该状态;删除pvc和pv后,底层存储数据依然存在。如果想再次使用这个pv,必须把它删除后重新创建。
[root@master ~]# kubectl delete pvc pvc01
persistentvolumeclaim "pvc01" deleted
[root@master ~]# kubectl get pv
NAME   CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS     CLAIM           STORAGECLASS   REASON   AGE
pv01   5Gi        RWO            Retain           Released   default/pvc01   manual                  104s

nfs-subdir-external-provisioner是一个Kubernetes的外部卷(Persistent Volume)插件,它允许在Kubernetes集群中动态地创建和管理基于NFS共享的子目录卷。
获取 NFS Subdir External Provisioner 文件,地址:https://github.com/kubernetes-sigs/nfs-subdir-external-provisioner/tree/master/deploy
1、上传nfs-subdir-external-provisioner.tar.gz文件到master节点家目录并解压(已用node2节点搭建NFS服务)
[root@master ~]# tar -zxvf nfs-subdir-external-provisioner.tar.gz
设置RBAC授权:这里使用默认命名空间,和rbac.yaml里写的default一致,所以rbac.yaml中不用改命名空间,直接kubectl apply -f rbac.yaml即可。
在这里插入图片描述
配置deployment.yaml,将镜像路径改为阿里云镜像路径,并把NFS服务器地址和共享目录改成实际环境的值(关键字段可参考下面的示例)
image: registry.cn-hangzhou.aliyuncs.com/cloudcs/nfs-subdir-external-provisioner:v4.0.2
在这里插入图片描述
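deployment.yaml中需要修改的关键片段大致如下(示意,镜像、NFS地址和路径按实际环境填写,这里假设NFS服务器为10.1.1.202、共享目录为/nfsdata):
          image: registry.cn-hangzhou.aliyuncs.com/cloudcs/nfs-subdir-external-provisioner:v4.0.2
          env:
            - name: PROVISIONER_NAME
              value: k8s-sigs.io/nfs-subdir-external-provisioner
            - name: NFS_SERVER
              value: 10.1.1.202
            - name: NFS_PATH
              value: /nfsdata
      volumes:
        - name: nfs-client-root
          nfs:
            server: 10.1.1.202
            path: /nfsdata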

[root@master deploy]# kubectl apply -f deployment.yaml
deployment.apps/nfs-client-provisioner created
[root@master deploy]# kubectl get pod
NAME                                      READY   STATUS    RESTARTS   AGE
nfs-client-provisioner-5d44b8c459-vszvz   1/1     Running   0          40s

创建storage class 存储类,才能实现动态卷供应

[root@master deploy]# kubectl get sc
No resources found
[root@master deploy]# kubectl apply -f class.yaml
storageclass.storage.k8s.io/nfs-client created
[root@master deploy]# kubectl get sc
NAME         PROVISIONER                                   RECLAIMPOLICY   VOLUMEBINDINGMODE   ALLOWVOLUMEEXPANSION   AGE
nfs-client   k8s-sigs.io/nfs-subdir-external-provisioner   Delete          Immediate           false                  10s

创建PVC,动态卷不用先手工创建pv,可直接创建pvc,pv会跟着创建出来。
修改test-claim.yaml,把申请的大小修改为5Gi(修改后的文件可参考下面的示例)
在这里插入图片描述
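修改后的test-claim.yaml大致如下(示意,storageClassName对应上面创建的nfs-client存储类):
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: test-claim
spec:
  storageClassName: nfs-client
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 5Gi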

[root@master deploy]# kubectl get pv
No resources found
[root@master deploy]# kubectl get pvc
No resources found in default namespace.
[root@master deploy]# kubectl apply -f test-claim.yaml
persistentvolumeclaim/test-claim created
[root@master deploy]# kubectl get pv
NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                STORAGECLASS   REASON   AGE
pvc-cf280353-4495-4d91-9028-966007d43238   5Gi        RWX            Delete           Bound    default/test-claim   nfs-client              19s
[root@master deploy]# kubectl get pvc
NAME         STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
test-claim   Bound    pvc-cf280353-4495-4d91-9028-966007d43238   5Gi        RWX            nfs-client     21s

nfs查看,会自动创建这个目录,后面删除也会删除这个目录
# ls /nfsdata
default-test-claim-pvc-cf280353-4495-4d91-9028-966007d43238

在这里插入图片描述
创建pod,将自带的test-pod.yaml修改下

[root@master deploy]# vim test-pod.yaml
[root@master deploy]# cat test-pod.yaml
kind: Pod
apiVersion: v1
metadata:
  name: test-pod
spec:
  containers:
  - name: test-pod
    image: nginx
    imagePullPolicy: IfNotPresent
    volumeMounts:
      - name: nfs-pvc
        mountPath: "/abc"
  restartPolicy: "Never"
  volumes:
    - name: nfs-pvc
      persistentVolumeClaim:
        claimName: test-claim
[root@master deploy]# kubectl apply -f test-pod.yaml
pod/test-pod created
[root@master deploy]# kubectl get pod -w    是kubectl get pod --watch简写 监测创建过程的动态更新
NAME                                      READY   STATUS    RESTARTS        AGE
nfs-client-provisioner-5d44b8c459-k27nb   1/1     Running   7 (2m20s ago)   11h
test-pod                                  1/1     Running   0               15s
^C[root@master deploy]# kubectl get pod -o wide
NAME                                      READY   STATUS    RESTARTS        AGE   IP               NODE    NOMINATED NODE   READINESS GATES
nfs-client-provisioner-5d44b8c459-k27nb   1/1     Running   7 (2m34s ago)   11h   10.244.104.14    node2   <none>           <none>
test-pod                                  1/1     Running   0               29s   10.244.166.140   node1   <none>           <none>
[root@master deploy]# kubectl exec -ti test-pod -- bash
root@test-pod:/# cd /abc
root@test-pod:/abc# touch a.txt
root@test-pod:/abc# exit
exit

nfs查看,会有刚在容器中创建的文件
# ls /nfsdata
default-test-claim-pvc-cf280353-4495-4d91-9028-966007d43238
# cd /nfsdata/default-test-claim-pvc-cf280353-4495-4d91-9028-966007d43238
[root@nfs default-test-claim-pvc-cf280353-4495-4d91-9028-966007d43238]# ls
a.txt

删除PVC,发现PV和NFS存储都删除了

[root@master deploy]# kubectl delete -f test-pod.yaml
pod "test-pod" deleted
[root@master deploy]# kubectl get pvc
NAME         STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
test-claim   Bound    pvc-cf280353-4495-4d91-9028-966007d43238   5Gi        RWX            nfs-client     21s
[root@master deploy]# kubectl get pv
NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                STORAGECLASS   REASON   AGE
pvc-cf280353-4495-4d91-9028-966007d43238   5Gi        RWX            Delete           Bound    default/test-claim   nfs-client              19s
[root@master deploy]# kubectl delete -f test-claim.yaml
persistentvolumeclaim "test-claim" deleted
[root@master deploy]# kubectl get pvc
No resources found in default namespace.
[root@master deploy]# kubectl get pv
No resources found

NFS查看底层数据也被删除了,这个目录default-test-claim-pvc-cf280353-4495-4d91-9028-966007d43238删除了
# ls /nfsdata

扩充 PVC 申领,参考官网,https://kubernetes.io/zh-cn/docs/concepts/storage/persistent-volumes/#expanding-persistent-volumes-claims
如果要为某 PVC 请求较大的存储卷,可以编辑 PVC 对象,设置一个更大的尺寸值。 这一编辑操作会触发为下层 PersistentVolume 提供存储的卷的扩充。 Kubernetes 不会创建新的 PV 卷来满足此申领的请求。 与之相反,现有的卷会被调整大小。

动态扩展,需要在存储类class.yaml里面添加一行参数
只有当PVC的存储类中将allowVolumeExpansion设置为true时,才可以扩充该PVC申领
[root@master deploy]# vim class.yaml
[root@master deploy]# cat class.yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: nfs-client
provisioner: k8s-sigs.io/nfs-subdir-external-provisioner # or choose another name, must match deployment's env PROVISIONER_NAME'
parameters:
  archiveOnDelete: "false"
allowVolumeExpansion: true
现在对扩充 PVC 申领的支持默认处于被启用状态,可以扩充以下类型的卷:
azureFile(已弃用)
csi
flexVolume(已弃用)
gcePersistentDisk(已弃用)
rbd
portworxVolume(已弃用)
[root@master deploy]# kubectl apply -f class.yaml
storageclass.storage.k8s.io/nfs-client configured
[root@master deploy]# kubectl get sc
NAME         PROVISIONER                                   RECLAIMPOLICY   VOLUMEBINDINGMODE   ALLOWVOLUMEEXPANSION   AGE
nfs-client   k8s-sigs.io/nfs-subdir-external-provisioner   Delete          Immediate           true                   12h
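之后就可以直接修改pvc申请的容量来触发扩容(示意,假设pvc名为test-claim,从5Gi扩到10Gi;也可以kubectl edit pvc test-claim直接改storage字段):
[root@master deploy]# kubectl patch pvc test-claim -p '{"spec":{"resources":{"requests":{"storage":"10Gi"}}}}'
[root@master deploy]# kubectl get pvc test-claim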

6、Deployment控制器

在k8s里面,最小的调度单位是pod,但是单独创建的pod本身不稳定、没有自愈能力,系统不够健壮。
在集群中,业务需要很多个pod,对这些pod的管理,k8s提供了很多控制器,其中一个就叫Deployment。
kind: Pod、kind: Deployment(以及kind: DaemonSet)都是k8s中的对象,Deployment是pod的一个控制器,可以实现pod的高可用。

查下简称,[root@master ~]# kubectl api-resources |grep depl
deployments    deploy      apps/v1    true   Deployment
[root@master ~]# kubectl api-resources |grep daemon
daemonsets    ds       apps/v1    true     DaemonSet
deployment(deploy)对应CCE中的无状态工作负载;daemonset(ds)对应CCE中的守护进程集(每个节点一个副本);CCE中的有状态工作负载对应的是statefulset。

daemonset也是一种控制器,也是用来创建pod的,但是和deployment不一样,deploy需要指定副本数,每个节点上都可以运行多个副本。
daemonset不需要指定副本数,会自动的在每个节点上都创建1个副本,不可运行多个。
daemonset作用就是在每个节点上收集日志、监控和管理等。
应用场景:
网络插件的 Agent 组件,都必须运行在每一个节点上,用来处理这个节点上的容器网络。
存储插件的 Agent 组件,也必须运行在每一个节点上,用来在这个节点上挂载远程存储目录,操作容器的 Volume 目录,比如:glusterd、ceph。
监控组件和日志组件,也必须运行在每一个节点上,负责这个节点上的监控信息和日志搜集,比如:fluentd、logstash、Prometheus 等。
[root@master ~]# kubectl get ds
No resources found in default namespace.
[root@master ~]# kubectl get ds -n kube-system
NAME         DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR            AGE
kube-proxy   3         3         3       3            3           kubernetes.io/os=linux   45h
[root@master ~]# kubectl get namespaces
NAME               STATUS   AGE
calico-apiserver   Active   44h
calico-system      Active   44h
default            Active   45h
kube-node-lease    Active   45h
kube-public        Active   45h
kube-system        Active   45h
tigera-operator    Active   44h
创建daemonset
[root@master ~]# kubectl create deployment ds1 --image nginx --dry-run=client -o yaml -- sh -c "sleep 3600" > ds1.yaml
[root@master ~]# vim ds1.yaml
[root@master ~]# cat ds1.yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  creationTimestamp: null
  labels:
    app: ds1
  name: ds1
spec:
  selector:
    matchLabels:
      app: ds1
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: ds1
    spec:
      containers:
      - command:
        - sh
        - -c
        - sleep 3600
        image: nginx
        name: nginx
        resources: {}
1.更改kind: DaemonSet
2.删除副本数replicas
3.删除strategy
4.删除status
[root@master ~]# kubectl get ds
No resources found in default namespace.
[root@master ~]# kubectl apply -f ds1.yaml
daemonset.apps/ds1 created
[root@master ~]# kubectl get ds
NAME   DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR   AGE
ds1    2         2         2       2            2           <none>          7s
[root@master ~]# kubectl get ds -o wide
NAME   DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR   AGE   CONTAINERS   IMAGES   SELECTOR
ds1    2         2         2       2            2           <none>          13s   nginx        nginx    app=ds1
[root@master ~]#  kubectl get pod -o wide
NAME        READY   STATUS    RESTARTS   AGE   IP               NODE    NOMINATED NODE   READINESS GATES
ds1-crtl4   1/1     Running   0          29s   10.244.166.172   node1   <none>           <none>
ds1-kcpjt   1/1     Running   0          29s   10.244.104.54    node2   <none>           <none>
每个节点上都创建1个副本,不可运行多个。因为master上有污点,默认master不会创建pod。
[root@master ~]# kubectl describe nodes master |grep -i taint
Taints:             node-role.kubernetes.io/control-plane:NoSchedule
[root@master ~]# kubectl delete ds ds1
daemonset.apps "ds1" deleted

集群中只需要告诉deploy需要多少个pod即可,一旦某个pod宕掉,deploy会生成新的pod,保证集群中始终存在指定数量(比如3个)的pod:少一个就补一个,多一个就删一个。

[root@master ~]# kubectl create deployment web1 --image nginx --dry-run=client -o yaml > web1.yaml
[root@master ~]# vim web1.yaml   将副本数改为3
[root@master ~]# cat web1.yaml
apiVersion: apps/v1
kind: Deployment     #指定的类型
metadata:     #这个控制器web1的属性信息
  creationTimestamp: null
  labels:
    app: web1
  name: web1
spec:
  replicas: 3      #pod副本数默认1  这里修改为3
  selector:
    matchLabels:    #匹配标签  Deployment管理pod标签为web1的这些pod,这里的matchlabels 必须匹配下面labels里面的标签(下面的标签可以有多个,但至少匹配一个),如果不匹配,deploy无法管理,会直接报错。
      app: web1 
  strategy: {}
  template:     #所有的副本通过这个模板创建出来的,3个副本pod按照这个template模板定义的标签labels和镜像image生成pod
    metadata:
      creationTimestamp: null
      labels:   #创建出来pod的标签信息
        app: web1
    spec:
      containers:
      - image: nginx
        imagePullPolicy: IfNotPresent
        name: nginx
        resources: {}
status: {}

deployment通过标签app=web1管理这3个pod:selector指定了当前deploy控制器通过app=web1这个标签来选择和控制pod,并持续监控这些pod,删掉一个,就再启动一个。所以如果不先把deploy删除,这些pod是永远删不掉的。

[root@master ~]# kubectl apply -f web1.yaml
deployment.apps/web1 created
[root@master ~]# kubectl get pod -o wide
NAME                    READY   STATUS    RESTARTS   AGE   IP               NODE    NOMINATED NODE   READINESS GATES
web1-58d8bd576b-h98jb   1/1     Running   0          11s   10.244.166.143   node1   <none>           <none>
web1-58d8bd576b-nt7rs   1/1     Running   0          11s   10.244.166.142   node1   <none>           <none>
web1-58d8bd576b-p8rq5   1/1     Running   0          11s   10.244.104.27    node2   <none>           <none>
[root@master ~]# kubectl get pods --show-labels    查看pod标签
NAME                    READY   STATUS    RESTARTS   AGE   LABELS
web1-58d8bd576b-h98jb   1/1     Running   0          76s   app=web1,pod-template-hash=58d8bd576b
web1-58d8bd576b-nt7rs   1/1     Running   0          76s   app=web1,pod-template-hash=58d8bd576b
web1-58d8bd576b-p8rq5   1/1     Running   0          76s   app=web1,pod-template-hash=58d8bd576b
查看deploy控制器
[root@master ~]# kubectl get deployments.apps
NAME   READY   UP-TO-DATE   AVAILABLE   AGE
web1   3/3     3            3           18h
[root@master ~]# kubectl get deployments.apps -o wide
NAME   READY   UP-TO-DATE   AVAILABLE   AGE   CONTAINERS   IMAGES   SELECTOR
web1   3/3     3            3           18h   nginx        nginx    app=web1
查看deploy详细信息
[root@master ~]# kubectl describe  deployments.apps web1

现在删除node2上的pod,发现立刻又创建一个pod。删除3个pod又重新创建3个pod。
Deployment实现pod的高可用:删除3个pod后又重新创建3个pod,但这3个pod的ip变了。这并不影响业务访问,因为对外提供访问入口的不是pod:外部业务先访问service,service再把流量分摊到后端的pod,而service的ip是固定的。

[root@master ~]# kubectl delete pod web1-58d8bd576b-p8rq5
pod "web1-58d8bd576b-p8rq5" deleted
[root@master ~]# kubectl get pod -o wide
NAME                    READY   STATUS    RESTARTS   AGE    IP               NODE    NOMINATED NODE   READINESS GATES
web1-58d8bd576b-h98jb   1/1     Running   0          159m   10.244.166.143   node1   <none>           <none>
web1-58d8bd576b-nt7rs   1/1     Running   0          159m   10.244.166.142   node1   <none>           <none>
web1-58d8bd576b-tbkkh   1/1     Running   0          30s    10.244.104.28    node2   <none>           <none>
[root@master ~]# kubectl delete pods/web1-58d8bd576b-{h98jb,nt7rs,tbkkh}
pod "web1-58d8bd576b-h98jb" deleted
pod "web1-58d8bd576b-nt7rs" deleted
pod "web1-58d8bd576b-tbkkh" deleted
[root@master ~]# kubectl get pod -o wide
NAME                    READY   STATUS    RESTARTS   AGE   IP               NODE    NOMINATED NODE   READINESS GATES
web1-58d8bd576b-2zx9z   1/1     Running   0          6s    10.244.104.29    node2   <none>           <none>
web1-58d8bd576b-km4f7   1/1     Running   0          6s    10.244.166.144   node1   <none>           <none>
web1-58d8bd576b-nk69d   1/1     Running   0          6s    10.244.104.30    node2   <none>           <none>

replicas副本数的修改

1、命令行修改replicas副本数
[root@master ~]# kubectl scale deployment web1 --replicas 5
deployment.apps/web1 scaled
[root@master ~]# kubectl get pod -o wide
NAME                    READY   STATUS    RESTARTS   AGE     IP               NODE    NOMINATED NODE   READINESS GATES
web1-58d8bd576b-2zx9z   1/1     Running   0          6m58s   10.244.104.29    node2   <none>           <none>
web1-58d8bd576b-jt8z4   1/1     Running   0          7s      10.244.104.31    node2   <none>           <none>
web1-58d8bd576b-km4f7   1/1     Running   0          6m58s   10.244.166.144   node1   <none>           <none>
web1-58d8bd576b-nk69d   1/1     Running   0          6m58s   10.244.104.30    node2   <none>           <none>
web1-58d8bd576b-qrqlb   1/1     Running   0          7s      10.244.166.145   node1   <none>           <none>
2、在线修改pod副本数
[root@master ~]# kubectl edit deployments.apps web1
3、修改yaml文件,之后通过apply -f 更新
[root@master ~]# kubectl apply -f web1.yaml

动态扩展HPA
以上修改副本数都是手工操作的,面对未知的业务系统,业务并发量忽高忽低,不可能来回手工修改。HPA(Horizontal Pod Autoscaler)水平自动伸缩,通过检测pod的CPU负载,在负载过高时动态伸缩pod的数量来分担压力。HPA一旦监测到pod负载过高,就会通知deploy创建更多的副本,这样每个pod的负载就会轻一些。HPA依赖metrics-server组件来采集指标,之前已经安装好了。

HPA动态检测deployment控制器管理的pod。假设HPA设置CPU阈值为60%(不设置时默认80%),一旦某个pod的CPU使用率超过60%,HPA就会通知deployment扩容,扩容的数量受HPA设置的上下限限制(如MIN 2、MAX 10)。例如deployment扩容到5个pod后分担了压力,pod的CPU使用率降下去(比如降到40%),低于阈值一段时间后,HPA又会让deployment缩容,释放掉刚扩出来的pod,最终回到最小副本数。
在这里插入图片描述
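除了kubectl autoscale命令,HPA也可以用yaml来声明(示意,基于autoscaling/v2,假设目标deployment为web1、CPU阈值60%、副本2~10):
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: web1
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: web1
  minReplicas: 2
  maxReplicas: 10
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 60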

查看节点和pod的cpu和内存
[root@master ~]# kubectl top node
NAME     CPU(cores)   CPU%   MEMORY(bytes)   MEMORY%
master   158m         3%     1438Mi          37%
node1    87m          2%     1036Mi          27%
node2    102m         2%     1119Mi          29%
[root@master ~]# kubectl top pod
NAME                    CPU(cores)   MEMORY(bytes)
web1-58d8bd576b-2gn9v   0m           4Mi
web1-58d8bd576b-2zx9z   0m           4Mi
web1-58d8bd576b-nk69d   0m           4Mi
cpu数量是如何计算的?
CPU以毫核(m)为单位,1000m = 1个vcpu:
500m/1000 = 0.5个vcpu
1000m/1000 = 1个vcpu

创建hpa

[root@master ~]# kubectl autoscale deployment web1 --min 2 --max 10 --cpu-percent 40
horizontalpodautoscaler.autoscaling/web1 autoscaled
--cpu-percent指定CPU使用率阈值(不指定时默认80%),--min 2 --max 10为pod的最小/最大副本数;内存等其他指标需要用autoscaling/v2的yaml方式定义。
[root@master ~]# kubectl get hpa
NAME   REFERENCE         TARGETS         MINPODS   MAXPODS   REPLICAS   AGE
web1   Deployment/web1   <unknown>/40%   2         10        3          88s
TARGETS显示<unknown>,是因为pod没有设置CPU requests:这个40%是相对于pod申请的CPU(requests)来计算的,没有requests,HPA就不知道该按多少CPU来计算使用率。

创建hpa时如果没有指定名称,默认使用deployment的名称(如web1);也可以用--name指定hpa名称。一般让hpa名称和deployment名称保持一致,方便管理。
[root@master ~]# kubectl describe hpa web1
describe查看详细信息,会提示获取CPU使用率失败:因为没有给pod初始化CPU requests,HPA拿不到计算百分比所需的基准值。
在这里插入图片描述
删除hpa

[root@master ~]# kubectl delete hpa web1
horizontalpodautoscaler.autoscaling "web1" deleted
[root@master ~]# kubectl get hpa
No resources found in default namespace.

在线修改deployment
[root@master ~]# kubectl edit deployments.apps web1
在这里插入图片描述
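这一步是在deployment的容器spec里补上CPU requests,让HPA有计算使用率的基准(示意,数值按需调整):
        resources:
          requests:
            cpu: 100m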
创建hpa

[root@master ~]# kubectl autoscale deployment web1 --name hpa1 --min 2 --max 10 --cpu-percent 40
horizontalpodautoscaler.autoscaling/hpa1 autoscaled
[root@master ~]# kubectl get hpa
NAME   REFERENCE         TARGETS   MINPODS   MAXPODS   REPLICAS   AGE
hpa1   Deployment/web1   0%/40%    2         10        3          59s

测试hpa,消耗cpu

[root@master ~]# kubectl get pod -o wide
NAME                    READY   STATUS    RESTARTS   AGE     IP               NODE    NOMINATED NODE   READINESS GATES
web1-6d875cb995-72qjs   1/1     Running   0          4m49s   10.244.166.150   node1   <none>           <none>
web1-6d875cb995-lspx2   1/1     Running   0          4m51s   10.244.104.32    node2   <none>           <none>
web1-6d875cb995-m69j2   1/1     Running   0          4m53s   10.244.166.149   node1   <none>           <none>
[root@master ~]# kubectl exec -ti web1-6d875cb995-m69j2 -- bash
root@web1-6d875cb995-m69j2:/# cat /dev/zero > /dev/null &
[1] 39
root@web1-6d875cb995-m69j2:/# cat /dev/zero > /dev/null &
[2] 40
root@web1-6d875cb995-m69j2:/# cat /dev/zero > /dev/null &
[3] 41
root@web1-6d875cb995-m69j2:/# cat /dev/zero > /dev/null &
[4] 42
root@web1-6d875cb995-m69j2:/# cat /dev/zero > /dev/null &
[5] 43

观察HPA的值

[root@master ~]# kubectl get hpa
NAME   REFERENCE         TARGETS    MINPODS   MAXPODS   REPLICAS   AGE
hpa1   Deployment/web1   196%/40%   2         10        3          4m39s
[root@master ~]# kubectl get hpa
NAME   REFERENCE         TARGETS   MINPODS   MAXPODS   REPLICAS   AGE
hpa1   Deployment/web1   85%/40%   2         10        10         5m21s

hpa设置了pod最少2个最多10个,按理说扩容出来的pod应该为那个高负载的pod分担CPU压力,但从效果上看并没有分担。因为这里的负载是在pod内部用cat /dev/zero人为制造的,不经过任何访问入口,无法转移;如果压力来自外部请求,经过service做负载均衡,就会分摊到各个pod上。
在这里插入图片描述
到这个pod所运行的节点node1上把cat进程停掉:ps -ef |grep cat先查出进程id,再kill -9杀掉进程(CPU降下来后,要等几分钟HPA才会缩容删除多出来的pod)
在这里插入图片描述
外部流量访问集群内任意主机的NodePort端口都会被转到service(NodePort会在集群内所有主机上打开同一个端口并映射到service端口),service具备负载均衡能力,会把流量分摊到它管理的不同pod上。
在这里插入图片描述
为了测试效果明显把cpu使用率改成10%

[root@master ~]# kubectl delete hpa hpa1
horizontalpodautoscaler.autoscaling "hpa1" deleted
[root@master ~]# kubectl get hpa
No resources found in default namespace.
[root@master ~]# kubectl autoscale deployment web1 --min 2 --max 10 --cpu-percent 10
horizontalpodautoscaler.autoscaling/web1 autoscaled
[root@master ~]# kubectl get hpa
NAME   REFERENCE         TARGETS   MINPODS   MAXPODS   REPLICAS   AGE
web1   Deployment/web1   0%/10%    2         10        2          2m16s

为deployment创建一个service服务,接受外部请求,类型为NodePort。
[root@master ~]# kubectl expose --help |grep dep
[root@master ~]# kubectl expose deployment web1 --port 80 --target-port 80 --type NodePort
--port 80指的是service端口,--target-port 80指的是service管理的pod的端口,NodePort指物理主机端口,默认在30000-32767范围内随机分配,如这里分配的是30122端口,网页输入k8s集群中某个主机ip:30122就能访问nginx。

[root@master ~]# kubectl expose deployment web1 --port 80 --target-port 80 --type NodePort
service/web1 exposed
[root@master ~]# kubectl get svc
NAME         TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)        AGE
kubernetes   ClusterIP   10.96.0.1      <none>        443/TCP        22h
web1         NodePort    10.98.101.25   <none>        80:30122/TCP   49s
直接访问k8s集群中某个主机30122端口,就可以访问web1的这个service了,这个service会把请求丢给pod
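如果希望固定NodePort端口而不是随机分配,也可以用yaml在service里显式指定nodePort(示意,端口需在默认的30000-32767范围内):
spec:
  type: NodePort
  ports:
  - port: 80
    targetPort: 80
    nodePort: 30122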

安装访问压力测试工具,[root@master ~]# yum install -y httpd-tools.x86_64
安装ab工具:ab是ApacheBench的缩写,是apache自带的压力测试工具。
模拟外部访问压力测试,[root@kmaster ~]# ab -t 600 -n 1000000 -c 1000 http://10.1.1.201:30122/index.html
-t 600:持续600秒;-n 1000000:总请求数;-c 1000:并发量。ab -h可查看帮助。
压力测试过程中,另一个窗口查看hpa
在这里插入图片描述
镜像升级和回滚
比如集群里面有2个pod(nginx:latest),发现版本有问题,这时候需要更换版本。
可以先在两个node节点把镜像下载,crictl pull nginx:1.20
查看镜像,crictl images
在这里插入图片描述
1、可以在线直接修改image版本,当修改deployment的时候,本质上是删除旧的pod,重新创建新的pod。
[root@master ~]# kubectl edit deployments.apps web1
在这里插入图片描述
在这里插入图片描述
2、修改yaml文件,之后通过apply -f 更新
[root@master ~]# kubectl apply -f web1.yaml
3、命令行的方式进行更新
[root@master ~]# kubectl set image deploy web1 nginx=nginx:1.20
在这里插入图片描述
这些历史操作默认没有记录(CHANGE-CAUSE显示<none>),加上--record=true可以把操作命令记录进历史版本(该参数已标记为弃用,但目前仍可用)。
[root@master ~]# kubectl rollout history deployment web1
[root@master ~]# kubectl set image deploy web1 nginx=nginx:1.20 --record=true
在这里插入图片描述
rollout undo --to-revision 5 表示回滚到历史版本5
[root@master ~]# kubectl rollout undo deployment web1 --to-revision 5
在这里插入图片描述
思考:镜像的升级(更新),会不会影响到上层业务呢?

采用滚动式更新,上层业务不受影响
      rollingUpdate:
        maxSurge: 25%  更新过程中最多可以比期望副本数多创建多少个pod(数值或百分比)
        maxUnavailable: 25%  更新过程中最多允许多少个pod不可用(数值或百分比)
kubectl edit deployments.apps web1   在线修改这两个参数
kubectl edit deployments.apps web1   在线修改pod副本数,把副本数修改为20个
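完整的滚动更新策略在deployment的spec里大致是这样一段(示意):
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 25%
      maxUnavailable: 25%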

在这里插入图片描述
更新镜像,查看效果(如果副本数显示不正常,就在web1.yaml中把副本数改成20再重新apply -f;还不行就删除重新创建)
发现会根据指定的参数,删除1个,创建1个,以此类推,而上层的业务是不受影响的。

[root@master ~]# kubectl get deployments.apps  -o wide
NAME   READY   UP-TO-DATE   AVAILABLE   AGE   CONTAINERS   IMAGES   SELECTOR
web1   0/20    20           0           20s   nginx        nginx    app=web1
[root@master ~]# kubectl set image deploy web1 nginx=nginx:1.20 --record=true
Flag --record has been deprecated, --record will be removed in the future
deployment.apps/web1 image updated
[root@master ~]# kubectl get pod
NAME                    READY   STATUS              RESTARTS   AGE
web1-58d8bd576b-26mdn   1/1     Running             0          2m27s
web1-58d8bd576b-45t7f   1/1     Running             0          2m27s
web1-58d8bd576b-6vl7s   1/1     Running             0          2m27s
web1-58d8bd576b-7h2tx   1/1     Running             0          2m27s
web1-58d8bd576b-8764c   1/1     Running             0          2m27s
web1-58d8bd576b-b4hzx   1/1     Running             0          2m27s
web1-58d8bd576b-bht4n   1/1     Running             0          2m27s
web1-58d8bd576b-c96d7   1/1     Terminating         0          2m27s
web1-58d8bd576b-f9d6h   1/1     Running             0          2m27s
web1-58d8bd576b-fd9dg   1/1     Running             0          2m27s
web1-58d8bd576b-fpbpm   1/1     Running             0          2m27s
web1-58d8bd576b-kl5wh   1/1     Running             0          2m27s
web1-58d8bd576b-kq75w   1/1     Terminating         0          2m27s
web1-58d8bd576b-ldk97   1/1     Running             0          2m27s
web1-58d8bd576b-r45tm   1/1     Running             0          2m27s
web1-58d8bd576b-tldb9   1/1     Running             0          2m27s
web1-58d8bd576b-v44w9   1/1     Running             0          2m27s
web1-58d8bd576b-vx6r4   1/1     Running             0          2m27s
web1-58d8bd576b-xc4dj   1/1     Running             0          2m27s
web1-5cc484c649-27t88   0/1     ContainerCreating   0          2s
web1-5cc484c649-ns2wb   0/1     ContainerCreating   0          3s
web1-5cc484c649-ntdkb   1/1     Running             0          4s
web1-5cc484c649-zvr5z   1/1     Running             0          4s
[root@master ~]# kubectl get pod
NAME                    READY   STATUS    RESTARTS   AGE
web1-5cc484c649-27t88   1/1     Running   0          2m48s
web1-5cc484c649-7zknm   1/1     Running   0          2m39s
web1-5cc484c649-88244   1/1     Running   0          2m35s
web1-5cc484c649-8kncg   1/1     Running   0          2m41s
web1-5cc484c649-8rl2c   1/1     Running   0          2m35s
web1-5cc484c649-9vjpm   1/1     Running   0          2m34s
web1-5cc484c649-fdhw8   1/1     Running   0          2m36s
web1-5cc484c649-gmzgk   1/1     Running   0          2m42s
web1-5cc484c649-jzt4j   1/1     Running   0          2m34s
web1-5cc484c649-kd4lz   1/1     Running   0          2m40s
web1-5cc484c649-ktkqg   1/1     Running   0          2m37s
web1-5cc484c649-l8jnp   1/1     Running   0          2m38s
web1-5cc484c649-lf84g   1/1     Running   0          2m36s
web1-5cc484c649-n879x   1/1     Running   0          2m39s
web1-5cc484c649-ns2wb   1/1     Running   0          2m49s
web1-5cc484c649-ntdkb   1/1     Running   0          2m50s
web1-5cc484c649-phs4b   1/1     Running   0          2m37s
web1-5cc484c649-x8r9b   1/1     Running   0          2m38s
web1-5cc484c649-xwskl   1/1     Running   0          2m41s
web1-5cc484c649-zvr5z   1/1     Running   0          2m50s
[root@master ~]# kubectl get deployments.apps -o wide
NAME   READY   UP-TO-DATE   AVAILABLE   AGE     CONTAINERS   IMAGES       SELECTOR
web1   20/20   20           20          3m14s   nginx        nginx:1.20   app=web1

secret和configmap
使用某些镜像例如mysql,需要通过环境变量来传递密码,也就是在编写yaml文件的时候,要在参数里写明文密码。这样会有安全隐患,例如别人查看yaml文件就能看到所有变量参数里的密码。为了安全起见,需要把密码单独保存到某个地方。

比如创建一个mysql容器
[root@master ~]# kubectl run db --image mysql --image-pull-policy IfNotPresent --env="MYSQL_ROOT_PASSWORD=redhat" --dry-run=client -o yaml > db.yaml
[root@master ~]# kubectl apply -f db.yaml
pod/db created
[root@master ~]# kubectl get pod
NAME   READY   STATUS    RESTARTS   AGE
db     1/1     Running   0          12s
[root@master ~]# kubectl exec -ti db -- bash
bash-4.4# mysql -uroot -predhat
mysql: [Warning] Using a password on the command line interface can be insecure.
Welcome to the MySQL monitor.  Commands end with ; or \g.
Your MySQL connection id is 8
Server version: 8.2.0 MySQL Community Server - GPL
Copyright (c) 2000, 2023, Oracle and/or its affiliates.
Oracle is a registered trademark of Oracle Corporation and/or its
affiliates. Other names may be trademarks of their respective
owners.
Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.
mysql> exit;
[root@master ~]# mysql -uroot -predhat -h 10.1.1.201
注意:如果执行mysql提示没有该命令,说明没有安装mysql客户端。
yum install -y mariadb

[root@master ~]# cat db.yaml
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    run: db
  name: db
spec:
  containers:
  - env:
    - name: MYSQL_ROOT_PASSWORD
      value: redhat
    image: mysql
    imagePullPolicy: IfNotPresent
    name: db
    resources: {}
  dnsPolicy: ClusterFirst
  restartPolicy: Always
status: {}

创建db的语句被记录在db.yaml文件里面,当别人有权限打开这个db.yaml文件的时候,就会发现数据库root的密码。这是不安全的,如果是参数和密码类的,直接使用secret进行封装和调用。
创建secret,有三种创建方式

[root@master ~]# kubectl create secret generic --help |grep from
  kubectl create secret generic my-secret --from-file=path/to/bar
  kubectl create secret generic my-secret --from-file=ssh-privatekey=path/to/id_rsa --from-file=ssh-publickey=path/to/id_rsa.pub
  kubectl create secret generic my-secret --from-literal=key1=supersecret --from-literal=key2=topsecret
  kubectl create secret generic my-secret --from-file=ssh-privatekey=path/to/id_rsa --from-literal=passphrase=topsecret
  # Create a new secret named my-secret from env files
  kubectl create secret generic my-secret --from-env-file=path/to/foo.env --from-env-file=path/to/bar.env
    --from-env-file=[]:
    --from-file=[]:
    --from-literal=[]:
  kubectl create secret generic NAME [--type=string] [--from-file=[key=]source] [--from-literal=key1=value1] [--dry-run=server|client|none] [options]
1.--from-literal  键值对形式(推荐使用此方式)
[root@master ~]# kubectl create secret generic secret01 --from-literal=MYSQL_ROOT_PASSWORD=redhat --from-literal=MYSQL_DATABASE=wordpress --from-literal=aaa=111 --from-literal=bbb=222
secret/secret01 created
[root@master ~]# kubectl get secrets
NAME       TYPE     DATA   AGE
secret01   Opaque   4      25s
一般情况下,这个secret不会被其他命名空间看到,也不会给到其他用户,相对来说比明文更加安全。

secret是基于命名空间的,相互独立无法看到。
在这里插入图片描述

如果想查看里面的具体值,可以通过base64 --decode(简写base64 -d)进行解码操作。
[root@master ~]# kubectl get secrets secret01 -o yaml
apiVersion: v1
data:
  MYSQL_DATABASE: d29yZHByZXNz
  MYSQL_ROOT_PASSWORD: cmVkaGF0
  aaa: MTEx
  bbb: MjIy
kind: Secret
metadata:
  creationTimestamp: "2023-10-28T10:58:02Z"
  name: secret01
  namespace: default
  resourceVersion: "220456"
  uid: 12eca400-73c9-48b5-8b8e-5347ad69aad0
type: Opaque
[root@master ~]# echo -n d29yZHByZXNz | base64 -d
wordpress[root@master ~]# echo -n cmVkaGF0 |base64 -d
redhat[root@master ~]#
2.--from-file  文件方式   文件名充当键,文件里面内容充当键的值
[root@master ~]# echo 123 > abc
[root@master ~]# cat abc
123
[root@master ~]# kubectl create secret generic secret02 --from-file=abc
secret/secret02 created
[root@master ~]# kubectl get secrets
NAME       TYPE     DATA   AGE
secret01   Opaque   4      11m
secret02   Opaque   1      35s
[root@master ~]# kubectl get secrets secret02 -o yaml
apiVersion: v1
data:
  abc: MTIzCg==
kind: Secret
metadata:
  creationTimestamp: "2023-10-28T11:08:39Z"
  name: secret02
  namespace: default
  resourceVersion: "221730"
  uid: 265e0882-a502-451c-b1c0-011ed8140fe2
type: Opaque
[root@master ~]# echo -n MTIzCg== | base64 -d
123
3.--from-env-file   变量文件
[root@master ~]# vim memeda
[root@master ~]# cat memeda
aaa=111
bbb=222
ccc=333
ddd=444
eee=Huawei12#
[root@master ~]# kubectl create secret generic secret03 --from-env-file=memeda
secret/secret03 created
[root@master ~]# kubectl get secrets
NAME       TYPE     DATA   AGE
secret01   Opaque   4      14m
secret02   Opaque   1      4m20s
secret03   Opaque   5      8s
[root@master ~]# kubectl get secrets secret03 -o yaml
apiVersion: v1
data:
  aaa: MTEx
  bbb: MjIy
  ccc: MzMz
  ddd: NDQ0
  eee: SHVhd2VpMTIj
kind: Secret
metadata:
  creationTimestamp: "2023-10-28T11:12:51Z"
  name: secret03
  namespace: default
  resourceVersion: "222236"
  uid: f97f8597-d40a-461d-aa29-2c4e61b0209d
type: Opaque
[root@master ~]# echo -n SHVhd2VpMTIj |base64 -d
Huawei12#[root@master ~]#

secret用法,创建一个mysql数据库,并以调用secret的方式进行安全创建。

[root@master ~]# kubectl run mydb --image mysql --image-pull-policy IfNotPresent --port 3306 --dry-run=client -o yaml > mydb.yaml
[root@master ~]# kubectl delete secrets secret02
secret "secret02" deleted
[root@master ~]# kubectl get secrets
NAME       TYPE     DATA   AGE
secret01   Opaque   4      3h1m
secret03   Opaque   5      166m

[root@master ~]#  kubectl get secrets secret01 -o yaml
apiVersion: v1
data:
  MYSQL_DATABASE: d29yZHByZXNz
  MYSQL_ROOT_PASSWORD: cmVkaGF0
  aaa: MTEx
  bbb: MjIy
kind: Secret
metadata:
  creationTimestamp: "2023-10-28T10:58:02Z"
  name: secret01
  namespace: default
  resourceVersion: "220456"
  uid: 12eca400-73c9-48b5-8b8e-5347ad69aad0
type: Opaque
编辑数据库mydb.yaml文件,调用secret
[root@master ~]# vim mydb.yaml
[root@master ~]# cat mydb.yaml
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    run: mydb
  name: mydb
spec:
  containers:
  - image: mysql
    imagePullPolicy: IfNotPresent
    name: mydb
    ports:
    - containerPort: 3306
    resources: {}
    env:
    - name: MYSQL_ROOT_PASSWORD
      valueFrom:
        secretKeyRef:
          name: secret01
          key: MYSQL_ROOT_PASSWORD
  dnsPolicy: ClusterFirst
  restartPolicy: Always
status: {}
[root@master ~]# kubectl apply -f mydb.yaml
pod/mydb created
[root@master ~]# kubectl get pod
NAME   READY   STATUS    RESTARTS   AGE
db     1/1     Running   0          30m
mydb   1/1     Running   0          11s
[root@master ~]# kubectl exec -ti mydb -- bash
bash-4.4# mysql -uroot -p
Enter password:
Welcome to the MySQL monitor.  Commands end with ; or \g.
Your MySQL connection id is 8
Server version: 8.2.0 MySQL Community Server - GPL
Copyright (c) 2000, 2023, Oracle and/or its affiliates.
Oracle is a registered trademark of Oracle Corporation and/or its
affiliates. Other names may be trademarks of their respective
owners.
Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.
mysql>
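补充:如果想把secret里的所有键值一次性注入为环境变量,也可以在容器里用envFrom(示意,假设沿用secret01,secret里的每个键都会变成一个同名环境变量):
    envFrom:
    - secretRef:
        name: secret01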

如果是配置文件(网站配置文件/参数配置文件/初始化文件等),直接使用configmap进行封装和调用。
configmap 创建和secret一样,有3种方式

[root@master ~]# kubectl create configmap --help |grep from
  kubectl create configmap my-config --from-file=path/to/bar
  kubectl create configmap my-config --from-file=key1=/path/to/bar/file1.txt --from-file=key2=/path/to/bar/file2.txt
  kubectl create configmap my-config --from-literal=key1=config1 --from-literal=key2=config2
  # Create a new config map named my-config from the key=value pairs in the file
  kubectl create configmap my-config --from-file=path/to/bar
  # Create a new config map named my-config from an env file
  kubectl create configmap my-config --from-env-file=path/to/foo.env --from-env-file=path/to/bar.env
    --from-env-file=[]:
    --from-file=[]:
    --from-literal=[]:
  kubectl create configmap NAME [--from-file=[key=]source] [--from-literal=key1=value1] [--dry-run=server|client|none] [options]
nginx访问路径,/usr/share/nginx/html
apache访问路径,/var/www/html
[root@master ~]# kubectl get configmap
NAME               DATA   AGE
kube-root-ca.crt   1      3d2h
[root@master ~]# vim index.html
[root@master ~]# cat index.html
hello world
[root@master ~]# kubectl create configmap config01 --from-file=index.html
configmap/config01 created
[root@master ~]# kubectl get configmaps config01 -o yaml      configmap明文显示不加密的
apiVersion: v1
data:
  index.html: |
    hello world
kind: ConfigMap
metadata:
  creationTimestamp: "2023-10-28T11:32:33Z"
  name: config01
  namespace: default
  resourceVersion: "224610"
  uid: 6e332ace-ca27-40fb-9f25-c641f5490075
[root@master ~]# kubectl get configmap
NAME               DATA   AGE
config01           1      36s
kube-root-ca.crt   1      3d2h

configmap用法:当手工进行镜像升级的时候,原来pod里面的内容必然会发生改变,所以一旦升级,之前网站里面的内容就没了。比如原来的站点根目录是/aaa/bbb/index.html,更换镜像后,新镜像里默认的网站根目录是/usr/share/nginx/html/index.html。
现在可以直接把index.html的内容封装在configmap里面,未来不管怎么升级,容器都会挂载并加载这个index.html文件里的内容,更新出来的镜像使用的依然是configmap里面的内容。

[root@master ~]# vim web1.yaml        把config01加载到mountPath: "/usr/share/nginx/html"
[root@master ~]# cat web1.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  creationTimestamp: null
  labels:
    app: web1
  name: web1
spec:
  replicas: 3
  selector:
    matchLabels:
      app: web1
  strategy: {}
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: web1
    spec:
      volumes:
      - name: foo
        configMap:
          name: config01
      containers:
      - image: nginx
        imagePullPolicy: IfNotPresent
        name: nginx
        volumeMounts:
        - name: foo
          mountPath: "/usr/share/nginx/html"
          readOnly: true
        resources: {}
status: {}
[root@master ~]# kubectl apply -f web1.yaml
deployment.apps/web1 created
[root@master ~]# kubectl get deployments.apps web1 -o wide
NAME   READY   UP-TO-DATE   AVAILABLE   AGE   CONTAINERS   IMAGES   SELECTOR
web1   3/3     3            3           26s   nginx        nginx    app=web1
为deployment创建一个service服务,接受外部请求,类型为NodePort。
[root@master ~]# kubectl expose deployment web1 --port 80 --target-port 80 --type NodePort
service/web1 exposed
--port 80指的是service端口,--target-port 80指的是service管理的pod的端口,NodePort指物理主机端口,随机分配3万以上的物理主机端口,如这里分配的是32060端口。
[root@master ~]# kubectl get svc
NAME         TYPE        CLUSTER-IP    EXTERNAL-IP   PORT(S)        AGE
kubernetes   ClusterIP   10.96.0.1     <none>        443/TCP        3d2h
web1         NodePort    10.96.71.74   <none>        80:32060/TCP   10s

在这里插入图片描述
之后更新镜像,再次刷新网页内容没有改变

[root@master ~]# kubectl set image deployment web1 nginx=nginx:1.20
deployment.apps/web1 image updated
[root@master ~]# kubectl get deployments.apps web1 -o wide
NAME   READY   UP-TO-DATE   AVAILABLE   AGE   CONTAINERS   IMAGES       SELECTOR
web1   3/3     3            3           26m   nginx        nginx:1.20   app=web1

7、service服务

在这里插入图片描述
k8s里面的最小调度单位是pod,pod里面包含的有容器,pod是最终对外提供服务的。

[root@master ~]# kubectl run  pod1 --image nginx --image-pull-policy IfNotPresent --dry-run=client -o yaml > pod1.yaml
[root@master ~]# kubectl apply -f pod1.yaml
pod/pod1 created
[root@master ~]# kubectl get pod -o wide
NAME   READY   STATUS    RESTARTS   AGE   IP              NODE    NOMINATED NODE   READINESS GATES
pod1   1/1     Running   0          18s   10.244.104.20   node2   <none>           <none>
[root@master ~]# ping 10.244.104.20
PING 10.244.104.20 (10.244.104.20) 56(84) bytes of data.
64 bytes from 10.244.104.20: icmp_seq=1 ttl=63 time=13.6 ms
64 bytes from 10.244.104.20: icmp_seq=2 ttl=63 time=0.594 ms
pod的ip只有集群内部可见,也就是说只有集群内主机可以访问,其他pod也可以访问。

为何集群内部可以互通?因为配置了calico网络,会建立起很多iptables及转发规则。
但是外部的主机是无法连通这个pod的,比如用物理windows主机打开一个浏览器,是无法访问到pod里面的内容的。如果想让外界访问,可以做端口映射。

[root@master ~]# kubectl run  pod2 --image nginx --image-pull-policy IfNotPresent --port 80 --dry-run=client -o yaml > pod2.yaml
[root@master ~]# vim pod2.yaml
[root@master ~]# cat pod2.yaml
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    run: pod2
  name: pod2
spec:
  containers:
  - image: nginx
    imagePullPolicy: IfNotPresent
    name: pod2
    ports:
    - containerPort: 80
      hostPort: 5000
    resources: {}
  dnsPolicy: ClusterFirst
  restartPolicy: Always
status: {}
[root@master ~]# kubectl apply -f pod2.yaml
pod/pod2 created
[root@master ~]# kubectl get pod -o wide
NAME   READY   STATUS    RESTARTS   AGE     IP               NODE    NOMINATED NODE   READINESS GATES
pod1   1/1     Running   0          8m44s   10.244.104.20    node2   <none>           <none>
pod2   1/1     Running   0          11s     10.244.166.181   node1   <none>           <none>

查看该pod是在node1上运行的,所以通过windows访问node1的ip地址加端口号 http://10.1.1.201:5000/即可访问到nginx服务。
为什么能访问到?因为添加的hostPort 会修改iptables规则。

[root@node1 ~]# iptables -S -t nat |grep '5000 -j'
-A CNI-HOSTPORT-DNAT -p tcp -m comment --comment "dnat name: \"k8s-pod-network\" id: \"057fe1dad9729cebfd079c1b43c542d8b5ebc9a8ea5adb2305c044a83a8dc7c1\"" -m multiport --dports 5000 -j CNI-DN-1e7e7e47f27d6006e8068
-A CNI-DN-1e7e7e47f27d6006e8068 -s 10.244.166.181/32 -p tcp -m tcp --dport 5000 -j CNI-HOSTPORT-SETMARK
-A CNI-DN-1e7e7e47f27d6006e8068 -s 127.0.0.1/32 -p tcp -m tcp --dport 5000 -j CNI-HOSTPORT-SETMARK
-A CNI-DN-1e7e7e47f27d6006e8068 -p tcp -m tcp --dport 5000 -j DNAT --to-destination 10.244.166.181:80

但是,这种方式比较麻烦,而且存在一个问题。比如:创建一个deployment控制器
https://kubernetes.io/docs/concepts/workloads/controllers/deployment/

[root@master ~]# vim pod5.yaml
[root@master ~]# cat pod5.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 80
          hostPort: 5500
[root@master ~]# kubectl apply -f pod5.yaml
deployment.apps/nginx-deployment created
[root@master ~]# kubectl get pod -o wide
NAME                                READY   STATUS    RESTARTS   AGE    IP               NODE     NOMINATED NODE   READINESS GATES
nginx-deployment-7c5c77bf5d-cwxqc   1/1     Running   0          13s    10.244.166.180   node1    <none>           <none>
nginx-deployment-7c5c77bf5d-mt7tr   0/1     Pending   0          13s    <none>           <none>   <none>           <none>
nginx-deployment-7c5c77bf5d-s5dq4   1/1     Running   0          13s    10.244.104.8     node2    <none>           <none>
有一个pod处于Pending,其他两个pod是好的。因为node1和node2上的5500端口都已被占用,而master有污点不参与调度,第三个pod找不到能做端口映射的节点,只能Pending。

在这里插入图片描述
现在把副本数改成两个,每个节点上一个pod映射一个端口。

[root@master ~]# vim pod5.yaml
[root@master ~]# cat pod5.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 80
          hostPort: 5500
[root@master ~]# kubectl apply -f pod5.yaml
deployment.apps/nginx-deployment configured
[root@master ~]# kubectl get pod -o wide
NAME                                READY   STATUS    RESTARTS   AGE     IP               NODE    NOMINATED NODE   READINESS GATES
nginx-deployment-7c5c77bf5d-cwxqc   1/1     Running   0          7m50s   10.244.166.180   node1   <none>           <none>
nginx-deployment-7c5c77bf5d-s5dq4   1/1     Running   0          7m50s   10.244.104.8     node2   <none>           <none>
但这时候只有2个节点各自映射端口,并没有真正的负载均衡:客户端要么固定连第一台,要么固定连第二台。

使用svc(Service)负载均衡器来解决hostPort端口占用以及负载均衡的问题。
deployment控制器通过标签来管理pod,svc也是通过标签找到对应的pod:svc是负载均衡器,后端的pod叫real server(endpoint)。deploy创建出来的pod标签都是一样的,一旦某个pod宕掉,deploy会立刻创建一个新的pod,标签和之前一样,所以svc能自动识别到。
客户端请求到了svc,svc再把请求转发给后端的pod,转发是由pod所在物理主机上的kube-proxy完成的(kube-proxy有两种模式:iptables和ipvs,前者是默认,后者性能更好),常用做法参考下面的示例。
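补充:查看和切换kube-proxy模式的常用做法(示意,kubeadm部署的集群中kube-proxy的配置保存在kube-system命名空间的configmap里;切换到ipvs前需确认内核已加载ip_vs相关模块):
[root@master ~]# curl -s 127.0.0.1:10249/proxyMode      查看当前模式(iptables或ipvs)
[root@master ~]# kubectl edit configmap kube-proxy -n kube-system      把 mode: "" 改成 mode: "ipvs"
[root@master ~]# kubectl delete pod -n kube-system -l k8s-app=kube-proxy      重建kube-proxy的pod使配置生效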

[root@master ~]# kubectl delete -f pod5.yaml
deployment.apps "nginx-deployment" deleted
[root@master ~]# vim pod5.yaml
[root@master ~]# cat pod5.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx
        imagePullPolicy: IfNotPresent
[root@master ~]# kubectl apply -f pod5.yaml
deployment.apps/nginx-deployment created
[root@master ~]# kubectl get pod -o wide
NAME                                READY   STATUS    RESTARTS   AGE   IP               NODE    NOMINATED NODE   READINESS GATES
nginx-deployment-5f5c64f949-bcvmw   1/1     Running   0          10s   10.244.104.19    node2   <none>           <none>
nginx-deployment-5f5c64f949-z8v4q   1/1     Running   0          10s   10.244.166.188   node1   <none>           <none>
nginx-deployment-5f5c64f949-zclkp   1/1     Running   0          10s   10.244.166.183   node1   <none>           <none>
修改/usr/share/nginx/html/index.html里面的内容
[root@master ~]# kubectl exec -ti nginx-deployment-5f5c64f949-bcvmw -- bash
root@nginx-deployment-5f5c64f949-bcvmw:/# echo 111 > /usr/share/nginx/html/index.html
root@nginx-deployment-5f5c64f949-bcvmw:/# exit
exit
[root@master ~]# kubectl exec -ti nginx-deployment-5f5c64f949-z8v4q -- bash
root@nginx-deployment-5f5c64f949-z8v4q:/# echo 222 > /usr/share/nginx/html/index.html
root@nginx-deployment-5f5c64f949-z8v4q:/# exit
exit
[root@master ~]# kubectl exec -ti nginx-deployment-5f5c64f949-zclkp -- bash
root@nginx-deployment-5f5c64f949-zclkp:/# echo 333 > /usr/share/nginx/html/index.html
root@nginx-deployment-5f5c64f949-zclkp:/# exit
exit

创建svc,如果没有指定svc名字,默认使用deployment名字。
--port指定的是svc本身的端口(自定义),svc本身不提供业务服务,只是负载均衡器;--target-port指的是后端pod本身开放的端口。

[root@master ~]# kubectl expose --name svc1 deployment nginx-deployment --port 5500 --target-port 80 --dry-run=client -o yaml > svc1.yaml
[root@master ~]# cat svc1.yaml
apiVersion: v1
kind: Service
metadata:
  creationTimestamp: null
  labels:
    app: nginx
  name: svc1
spec:
  ports:
  - port: 5500
    protocol: TCP
    targetPort: 80
  selector:
    app: nginx
status:
  loadBalancer: {}
[root@master ~]# kubectl apply -f svc1.yaml
service/svc1 created
[root@master ~]# kubectl get svc
NAME         TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)    AGE
kubernetes   ClusterIP   10.96.0.1        <none>        443/TCP    3d7h
svc1         ClusterIP   10.110.201.163   <none>        5500/TCP   4s
在某个节点执行访问svc1测试
[root@master ~]# curl -s 10.110.201.163:5500
333
[root@master ~]# curl -s 10.110.201.163:5500
111
[root@master ~]# curl -s 10.110.201.163:5500
222
svc1 对应的 selector app=nginx ,这个标签是为了选择pod的。
[root@master ~]# kubectl get svc -o wide
NAME         TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)    AGE    SELECTOR
kubernetes   ClusterIP   10.96.0.1        <none>        443/TCP    3d8h   <none>
svc1         ClusterIP   10.110.201.163   <none>        5500/TCP   10m    app=nginx
[root@master ~]# kubectl get svc --show-labels
NAME         TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)    AGE    LABELS
kubernetes   ClusterIP   10.96.0.1        <none>        443/TCP    3d8h   component=apiserver,provider=kubernetes
svc1         ClusterIP   10.110.201.163   <none>        5500/TCP   12m    app=nginx
[root@master ~]#  kubectl get pod -l app=nginx  
NAME                                READY   STATUS    RESTARTS   AGE
nginx-deployment-5f5c64f949-bcvmw   1/1     Running   0          22m
nginx-deployment-5f5c64f949-z8v4q   1/1     Running   0          22m
nginx-deployment-5f5c64f949-zclkp   1/1     Running   0          22m
svc关联哪些pod是由svc的selector决定的,而不是由svc自身的label决定的。
[root@master ~]# kubectl describe svc svc1
Name:              svc1
Namespace:         default
Labels:            app=nginx
Annotations:       <none>
Selector:          app=nginx
Type:              ClusterIP
IP Family Policy:  SingleStack
IP Families:       IPv4
IP:                10.110.201.163
IPs:               10.110.201.163
Port:              <unset>  5500/TCP
TargetPort:        80/TCP
Endpoints:         10.244.104.19:80,10.244.166.183:80,10.244.166.188:80
Session Affinity:  None
Events:            <none>

之前都是通过deployment控制器管理pod,svc可以自动定位到pod标签。如果手工创建pod呢?
不管手工创建pod还是deployment控制器管理pod,svc都是根据selector来定位pod标签的。

首先查看当前的svc
[root@master ~]# kubectl describe svc svc1
Name:              svc1
Namespace:         default
Labels:            app=nginx
Annotations:       <none>
Selector:          app=nginx
Type:              ClusterIP
IP Family Policy:  SingleStack
IP Families:       IPv4
IP:                10.110.201.163
IPs:               10.110.201.163
Port:              <unset>  5500/TCP
TargetPort:        80/TCP
Endpoints:         10.244.104.19:80,10.244.166.183:80,10.244.166.188:80
Session Affinity:  None
Events:            <none>
手工创建pod
[root@master ~]# vim pod1.yaml
[root@master ~]# cat pod1.yaml
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    run: pod1
    app: nginx
  name: pod1
spec:
  containers:
  - image: nginx
    imagePullPolicy: IfNotPresent
    name: pod1
    resources: {}
  dnsPolicy: ClusterFirst
  restartPolicy: Always
status: {}
[root@master ~]# kubectl apply -f pod1.yaml
pod/pod1 created
[root@master ~]# kubectl get pod -o wide
NAME                                READY   STATUS    RESTARTS   AGE    IP               NODE    NOMINATED NODE   READINESS GATES
nginx-deployment-5f5c64f949-bcvmw   1/1     Running   0          55m    10.244.104.19    node2   <none>           <none>
nginx-deployment-5f5c64f949-z8v4q   1/1     Running   0          55m    10.244.166.188   node1   <none>           <none>
nginx-deployment-5f5c64f949-zclkp   1/1     Running   0          55m    10.244.166.183   node1   <none>           <none>
pod1                                1/1     Running   0          109s   10.244.104.10    node2   <none>           <none>
查看标签对应的pod
[root@master ~]# kubectl get pod -l app=nginx
NAME                                READY   STATUS    RESTARTS   AGE
nginx-deployment-5f5c64f949-bcvmw   1/1     Running   0          53m
nginx-deployment-5f5c64f949-z8v4q   1/1     Running   0          53m
nginx-deployment-5f5c64f949-zclkp   1/1     Running   0          53m
pod1                                1/1     Running   0          12s
[root@master ~]# kubectl get pod --show-labels
NAME                                READY   STATUS    RESTARTS   AGE   LABELS
nginx-deployment-5f5c64f949-bcvmw   1/1     Running   0          54m   app=nginx,pod-template-hash=5f5c64f949
nginx-deployment-5f5c64f949-z8v4q   1/1     Running   0          54m   app=nginx,pod-template-hash=5f5c64f949
nginx-deployment-5f5c64f949-zclkp   1/1     Running   0          54m   app=nginx,pod-template-hash=5f5c64f949
pod1                                1/1     Running   0          23s   app=nginx,run=pod1
查看手工创建的pod是否在endpoint列表里面
[root@master ~]# kubectl describe svc svc1
Name:              svc1
Namespace:         default
Labels:            app=nginx
Annotations:       <none>
Selector:          app=nginx
Type:              ClusterIP
IP Family Policy:  SingleStack
IP Families:       IPv4
IP:                10.110.201.163
IPs:               10.110.201.163
Port:              <unset>  5500/TCP
TargetPort:        80/TCP
Endpoints:         10.244.104.10:80,10.244.104.19:80,10.244.166.183:80 + 1 more...
Session Affinity:  None
Events:            <none>
[root@master ~]# kubectl get endpoints
NAME                                          ENDPOINTS                                                         AGE
k8s-sigs.io-nfs-subdir-external-provisioner   <none>                                                            3d4h
kubernetes                                    10.1.1.200:6443                                                   3d8h
svc1                                          10.244.104.10:80,10.244.104.19:80,10.244.166.183:80 + 1 more...   45m
endpoints信息显示不全时,可以在线查看endpoint端点
[root@master ~]# kubectl edit ep svc1   或kubectl edit endpoints svc1  在线查看和修改endpoint端点

注意:
1.svc的IP也是集群内部可访问的。
2.svc的IP地址不会发生改变,除非删除重新创建。
3.svc是通过selector定位到后端的pod标签的。
所以,不需要担心pod的IP地址发生改变,因为外部访问并不是直接访问pod,而是访问svc的IP。svc只开放了tcp的5500端口,并没有开放icmp,所以不要用ping测试,可以用wget测试([root@master ~]# wget 10.110.201.163:5500),或者直接访问测试页([root@master ~]# curl 10.110.201.163:5500)。

SVC服务的发现:ClusterIP/变量/DNS方式(建议)
SVC服务的发现有3种方式:1.ClusterIP 2.变量方式 3.DNS方式(建议)
svc的IP地址只是集群内部可见的,master或node,或者集群里面的pod。
在这里插入图片描述
ClusterIP

创建svc,每个svc都会有自己的clusterIP,先查看clusterIP,然后在pod里面通过该IP访问即可。
[root@master ~]# kubectl get svc
NAME         TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)    AGE
kubernetes   ClusterIP   10.96.0.1        <none>        443/TCP    3d11h
svc1         ClusterIP   10.110.201.163   <none>        5500/TCP   3h10m

K8S部署wordpress博客系统,先在node节点分别下载wordpress和mysql镜像:crictl pull mysql、crictl pull wordpress(crictl pull一次拉取一个镜像)
1、创建mysql pod

[root@master ~]# kubectl run mysql --image mysql --env="MYSQL_ROOT_PASSWORD=redhat" --env="MYSQL_DATABASE=wordpress" --dry-run=client -o yaml > db.yaml
[root@master ~]# vim db.yaml
[root@master ~]# cat db.yaml
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    run: mysql
  name: mysql
spec:
  containers:
  - env:
    - name: MYSQL_ROOT_PASSWORD
      value: redhat
    - name: MYSQL_DATABASE
      value: wordpress
    image: mysql
    imagePullPolicy: IfNotPresent
    name: mysql
    resources: {}
  dnsPolicy: ClusterFirst
  restartPolicy: Always
status: {}
[root@master ~]# kubectl apply -f db.yaml
pod/mysql created
[root@master ~]# kubectl get pod -o wide
NAME                                READY   STATUS    RESTARTS   AGE     IP               NODE    NOMINATED NODE   READINESS GATES
mysql                               1/1     Running   0          12s     10.244.104.21    node2   <none>           <none>

2、创建mysql svc,关联mysql pod

[root@master ~]# kubectl expose pod mysql --name db --port 3306 --target-port 3306
service/db exposed
[root@master ~]# kubectl get svc -o wide
NAME         TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)    AGE     SELECTOR
db           ClusterIP   10.105.193.40    <none>        3306/TCP   25s     run=mysql
kubernetes   ClusterIP   10.96.0.1        <none>        443/TCP    3d11h   <none>
svc1         ClusterIP   10.110.201.163   <none>        5500/TCP   3h22m   app=nginx

3、创建wordpress pod 通过指定ClusterIP方式

[root@master ~]# kubectl run blog --image wordpress --env="WORDPRESS_DB_HOST=10.105.193.40" --env="WORDPRESS_DB_USER=root" --env="WORDPRESS_DB_PASSWORD=redhat" --env="WORDPRESS_DB_NAME=wordpress" --dry-run=client -o yaml > blog.yaml
[root@master ~]# vim blog.yaml
[root@master ~]# cat blog.yaml
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    run: blog
  name: blog
spec:
  containers:
  - env:
    - name: WORDPRESS_DB_HOST
      value: 10.105.193.40
    - name: WORDPRESS_DB_USER
      value: root
    - name: WORDPRESS_DB_PASSWORD
      value: redhat
    - name: WORDPRESS_DB_NAME
      value: wordpress
    image: wordpress
    imagePullPolicy: IfNotPresent
    name: blog
    resources: {}
  dnsPolicy: ClusterFirst
  restartPolicy: Always
status: {}
[root@master ~]# kubectl apply -f blog.yaml
pod/blog created
[root@master ~]# kubectl get pod -o wide
NAME    READY   STATUS    RESTARTS   AGE   IP               NODE    NOMINATED NODE   READINESS GATES
blog    1/1     Running   0          49s   10.244.166.190   node1   <none>           <none>
mysql   1/1     Running   0          12m   10.244.104.21    node2   <none>           <none>

4、创建wordpress svc

[root@master ~]# kubectl expose pod blog --name blog  --port 80 --target-port 80 --type NodePort
service/blog exposed
[root@master ~]# kubectl get svc
NAME         TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)        AGE
blog         NodePort    10.98.134.203    <none>        80:31289/TCP   3s
db           ClusterIP   10.105.193.40    <none>        3306/TCP       17m
kubernetes   ClusterIP   10.96.0.1        <none>        443/TCP        3d11h
svc1         ClusterIP   10.110.201.163   <none>        5500/TCP       3h39m

打开浏览器,输入集群内任意一个主机ip,并带上端口31289进行访问,如 http://10.1.1.200:31289/ ,就可以看到安装界面,并且点击下一步会直接跳过数据库配置。这种方式称为ClusterIP方式。
变量方式
在这里插入图片描述
每创建一个pod,pod里面会自动注入一些环境变量,包含了创建该pod之前(同一命名空间中)已存在的svc的相关变量:
pod1创建后,里面包含svc1的相关变量,没有svc2和svc3;
pod2创建后,里面包含svc1和svc2的相关变量,没有svc3。

创建一个临时pod
[root@master ~]# kubectl run nginx --image nginx --image-pull-policy IfNotPresent --rm -ti -- bash
If you don't see a command prompt, try pressing enter.
root@nginx:/# env |grep SVC1
SVC1_SERVICE_HOST=10.110.201.163
SVC1_PORT_5500_TCP_ADDR=10.110.201.163
SVC1_PORT=tcp://10.110.201.163:5500
SVC1_PORT_5500_TCP_PROTO=tcp
SVC1_PORT_5500_TCP_PORT=5500
SVC1_SERVICE_PORT=5500
SVC1_PORT_5500_TCP=tcp://10.110.201.163:5500
The pod has picked up SVC1's variables. Now open a second terminal window and create a second Service, SVC2.
[root@master ~]# kubectl get svc svc1 -o wide
NAME   TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)    AGE     SELECTOR
svc1   ClusterIP   10.110.201.163   <none>        5500/TCP   4h59m   app=nginx
[root@master ~]# cat pod5.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx
        imagePullPolicy: IfNotPresent
[root@master ~]# kubectl apply -f pod5.yaml
deployment.apps/nginx-deployment created
[root@master ~]# kubectl expose deployment nginx-deployment --name svc2 --port 5500 --target-port 80 --selector app=nginx
service/svc2 exposed
[root@master ~]#  kubectl get svc
NAME         TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)        AGE
blog         NodePort    10.98.134.203    <none>        80:31289/TCP   81m
db           ClusterIP   10.105.193.40    <none>        3306/TCP       99m
kubernetes   ClusterIP   10.96.0.1        <none>        443/TCP        3d12h
svc1         ClusterIP   10.110.201.163   <none>        5500/TCP       5h1m
svc2         ClusterIP   10.97.217.131    <none>        5500/TCP       13s
之前的临时pod没有包含SVC2的变量
root@nginx:/# env |grep SVC2
root@nginx:/#
这时候再次创建第二个临时pod,发现有svc1和svc2变量
[root@master ~]# kubectl run nginx2 --image nginx --image-pull-policy IfNotPresent --rm -ti -- bash
If you don't see a command prompt, try pressing enter.
root@nginx2:/# env |grep SVC1
SVC1_SERVICE_HOST=10.110.201.163
SVC1_PORT_5500_TCP_ADDR=10.110.201.163
SVC1_PORT=tcp://10.110.201.163:5500
SVC1_PORT_5500_TCP_PROTO=tcp
SVC1_PORT_5500_TCP_PORT=5500
SVC1_SERVICE_PORT=5500
SVC1_PORT_5500_TCP=tcp://10.110.201.163:5500
root@nginx2:/# env |grep SVC2
SVC2_PORT_5500_TCP=tcp://10.97.217.131:5500
SVC2_PORT_5500_TCP_PROTO=tcp
SVC2_SERVICE_PORT=5500
SVC2_PORT_5500_TCP_PORT=5500
SVC2_PORT_5500_TCP_ADDR=10.97.217.131
SVC2_SERVICE_HOST=10.97.217.131
SVC2_PORT=tcp://10.97.217.131:5500
root@nginx2:/#

采用变量方式来创建blog
Previously the wordpress pod's yaml had to hard-code the Service's ClusterIP. With the variable method that is no longer necessary: when a pod is created, the environment variables of the Services that already exist are injected into it, one of which is <SVC_NAME>_SERVICE_HOST — DB_SERVICE_HOST for the svc named db — so referencing that variable is enough. (The pieces involved: mysql pod, mysql svc (db), wordpress pod (blog), wordpress svc (blog).)

[root@master ~]# kubectl exec -ti blog -- bash
root@blog:/var/www/html# env |grep SERVICE_HOST
SVC1_SERVICE_HOST=10.110.201.163
DB_SERVICE_HOST=10.105.193.40
KUBERNETES_SERVICE_HOST=10.96.0.1
The database Service is named db, so its injected variable is DB_SERVICE_HOST. Update the yaml file:
[root@master ~]# vim blog.yaml
[root@master ~]# cat blog.yaml
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    run: blog
  name: blog
spec:
  containers:
  - env:
    - name: WORDPRESS_DB_HOST
      value: $(DB_SERVICE_HOST)
    - name: WORDPRESS_DB_USER
      value: root
    - name: WORDPRESS_DB_PASSWORD
      value: redhat
    - name: WORDPRESS_DB_NAME
      value: wordpress
    image: wordpress
    name: blog
    resources: {}
  dnsPolicy: ClusterFirst
  restartPolicy: Always
status: {}
[root@master ~]# kubectl delete -f blog.yaml
pod "blog" deleted
[root@master ~]# kubectl apply -f blog.yaml
pod/blog created
[root@master ~]# kubectl get pod -o wide
NAME                                READY   STATUS    RESTARTS   AGE     IP               NODE    NOMINATED NODE   READINESS GATES
blog                                1/1     Running   0          8s      10.244.166.187   node1   <none>           <none>
mysql                               1/1     Running   0          4h53m   10.244.104.21    node2   <none>           <none>
[root@master ~]# kubectl get svc -o wide
NAME         TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)        AGE     SELECTOR
blog         NodePort    10.98.134.203    <none>        80:31289/TCP   4h35m   run=blog
db           ClusterIP   10.105.193.40    <none>        3306/TCP       4h52m   run=mysql

http://10.1.1.200:31289/wp-login.php — this is service discovery via environment variables. The method has limitations: the Service must exist before the pod is created (ordering matters), and only Services in the same namespace are injected. And if a Service is deleted and recreated, its ClusterIP changes while existing pods keep the stale variables, so the upper-layer application has to be re-pointed or redeployed — quite inconvenient.
DNS方式(建议)
在 kube-system 命名空间中,有2个DNS服务器 Pod(一个deployment控制器)。
在其他命名空间里面 创建 svc 的时候,svc会去dns上自动注册。对于DNS,就知道了svc的ip地址。
在同一个命名空间中,创建svc,创建pod,那么pod可以直接通过 svc 的服务名来访问。不需要找svc对应的ClusterIP地址。
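As a quick sanity check of the DNS method (a minimal sketch — the pod name dns-test is arbitrary, and it assumes the svc1 Service created earlier still exists): the full name CoreDNS registers for a Service is <service>.<namespace>.svc.cluster.local, and it resolves from any throwaway pod:

kubectl run dns-test --image busybox --image-pull-policy IfNotPresent --restart=Never --rm -it -- nslookup svc1.default.svc.cluster.local
# expected answer: the ClusterIP of svc1 (10.110.201.163 in this environment), served by kube-dns at 10.96.0.10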

[root@master ~]# kubectl get pod -n kube-system
NAME                              READY   STATUS    RESTARTS       AGE
coredns-7bdc4cb885-4kvvb          1/1     Running   6 (10h ago)    3d16h
coredns-7bdc4cb885-4tw5z          1/1     Running   6 (10h ago)    3d16h
etcd-master                       1/1     Running   5 (15h ago)    3d16h
kube-apiserver-master             1/1     Running   7 (15h ago)    3d16h
kube-controller-manager-master    1/1     Running   19 (10h ago)   3d16h
kube-proxy-hmk7x                  1/1     Running   5 (15h ago)    3d16h
kube-proxy-x7hmw                  1/1     Running   5 (15h ago)    3d16h
kube-proxy-xr55g                  1/1     Running   5 (15h ago)    3d16h
kube-scheduler-master             1/1     Running   18 (10h ago)   3d16h
metrics-server-7f5dc7b49f-nhrfh   1/1     Running   21 (10h ago)   3d16h
[root@master ~]# kubectl get deployments.apps -n kube-system
NAME             READY   UP-TO-DATE   AVAILABLE   AGE
coredns          2/2     2            2           3d16h
metrics-server   1/1     1            1           3d16h
[root@master ~]# kubectl get svc -n kube-system
NAME             TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)                  AGE
kube-dns         ClusterIP   10.96.0.10      <none>        53/UDP,53/TCP,9153/TCP   3d17h
metrics-server   ClusterIP   10.99.160.220   <none>        443/TCP                  3d17h

比如上面有个svc1 ,再手工创建一个临时容器。

[root@master ~]# vim svc1.yaml    原来svc1是 5500 端口,现修改为 80端口
[root@master ~]# cat svc1.yaml
apiVersion: v1
kind: Service
metadata:
  creationTimestamp: null
  labels:
    app: nginx
  name: svc1
spec:
  ports:
  - port: 80
    protocol: TCP
    targetPort: 80
  selector:
    app: nginx
status:
  loadBalancer: {}
[root@master ~]# kubectl apply -f svc1.yaml
service/svc1 configured
[root@master ~]# kubectl get svc
NAME         TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)        AGE
svc1         ClusterIP   10.110.201.163   <none>        80/TCP         9h
[root@master ~]# curl -s 10.110.201.163:80
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
html { color-scheme: light dark; }
body { width: 35em; margin: 0 auto;
font-family: Tahoma, Verdana, Arial, sans-serif; }
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>

<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>

<p><em>Thank you for using nginx.</em></p>
</body>
</html>

busybox和alpine 是精简版linux系统用于测试,默认没有curl命令

同一个命名空间
[root@master ~]# kubectl run busybox1 --image busybox --image-pull-policy IfNotPresent --rm -ti -- sh
If you don't see a command prompt, try pressing enter.
/ # wget svc1
Connecting to svc1 (10.110.201.163:80)
saving to 'index.html'
index.html           100% |*********************************************************************************************************************************|   615  0:00:00 ETA
'index.html' saved
/ #
Or use alpine:
[root@kmaster ~]# kubectl run alpine1 --image alpine --image-pull-policy IfNotPresent --rm -ti -- sh
If you don't see a command prompt, try pressing enter.
/ # wget svc1
Connecting to svc1 (10.105.57.91:80)
saving to 'index.html'
index.html           100% |*******************|     4  0:00:00 ETA
'index.html' saved
/ # 

如果创建临时pod的时候看到这个错误
  Warning  Failed     11s (x2 over 12s)  kubelet            
  Error: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: 
  exec: "bash": executable file not found in $PATH: unknown
或者
 Warning  Failed     4s (x3 over 22s)  kubelet            Error: failed to create containerd task: 
 failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: 
 exec: "/bin/bash": stat /bin/bash: no such file or directory: unknown
This almost always means the image does not ship bash; pass sh as the command instead (i.e. end the command with -- sh rather than -- bash).

When a pod looks up the name svc1, the query is sent to the cluster DNS pods, which resolve the name and return the Service's IP, so the pod knows where to reach svc1.
Where does the nameserver IP recorded in the pod's DNS configuration come from? Check /etc/resolv.conf inside a pod:

[root@master ~]# kubectl run busybox1 --image busybox --image-pull-policy IfNotPresent --rm -ti -- sh
If you don't see a command prompt, try pressing enter.
/ # cat /etc/resolv.conf
search kube-public.svc.cluster.local svc.cluster.local cluster.local
nameserver 10.96.0.10
options ndots:5
/ #
在kube-system命名空间中有个svc,叫 kube-dns,记录的就是它。
[root@master ~]# kubectl get svc -n kube-system
NAME             TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)                  AGE
kube-dns         ClusterIP   10.96.0.10      <none>        53/UDP,53/TCP,9153/TCP   3d17h
metrics-server   ClusterIP   10.99.160.220   <none>        443/TCP                  3d17h

如果切换到其他命名空间呢?

跨命名空间,使用 svc名称.命名空间 格式来定义
[root@master ~]# kubens kube-public
Context "kubernetes-admin@kubernetes" modified.
Active namespace is "kube-public".
[root@master ~]# kubectl run busybox1 --image busybox --image-pull-policy IfNotPresent --rm -ti -- sh
If you don't see a command prompt, try pressing enter.
/ # wget svc1
wget: bad address 'svc1'
/ # wget svc1.default
Connecting to svc1.default (10.110.201.163:80)
saving to 'index.html'
index.html           100% |*********************************************************************************************************************************|   615  0:00:00 ETA
'index.html' saved
/ #

接下来,删除wordpress pod ,修改yaml文件

[root@master ~]# kubectl delete -f blog.yaml
pod "blog" deleted
[root@master ~]# kubectl get svc
NAME         TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)        AGE
blog         NodePort    10.98.134.203    <none>        80:31289/TCP   6h6m
db           ClusterIP   10.105.193.40    <none>        3306/TCP       6h23m
[root@master ~]# vim blog.yaml
[root@master ~]# cat blog.yaml
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    run: blog
  name: blog
spec:
  containers:
  - env:
    - name: WORDPRESS_DB_HOST
      value: db    #同一命名空间直接输入svc对应的名称即可kubectl get svc,跨命名空间使用 svc名称.命名空间
    - name: WORDPRESS_DB_USER
      value: root
    - name: WORDPRESS_DB_PASSWORD
      value: redhat
    - name: WORDPRESS_DB_NAME
      value: wordpress
    image: wordpress
    name: blog
    resources: {}
  dnsPolicy: ClusterFirst
  restartPolicy: Always
status: {}
[root@master ~]# kubectl apply -f blog.yaml
pod/blog created
[root@master ~]# kubectl get pod -o wide
NAME      READY   STATUS    RESTARTS   AGE     IP               NODE    NOMINATED NODE   READINESS GATES
blog      1/1     Running   0          16s     10.244.166.129   node1   <none>           <none>
mysql     1/1     Running   0          6h31m   10.244.104.21    node2   <none>           <none>

再次刷新网页,http://10.1.1.200:31289/wp-login.php,连接成功且无需配置数据库。
Why can a machine outside the cluster reach the application through a node's IP? Because the Service has been published (exposed) to the outside.
Publishing a Service: NodePort / LoadBalancer / Ingress
A Service's ClusterIP is only reachable inside the cluster, but the whole point of running the cluster is to serve external clients. Since the ClusterIP cannot be reached from outside, the Service has to be published so that external hosts can access it.
There are three ways to publish a Service: 1. NodePort 2. LoadBalancer 3. Ingress
NodePort
默认情况下,外界主机是无法连通svc的clusterip地址的。
NodePort maps the Service onto a port of every node in the cluster. The Service can then be reached as <node IP>:<node port>, and from there kube-proxy (iptables rules by default) forwards the traffic to the backend pods.
通过NodePort映射出去有两种方法:
1.在创建SVC服务的时候,直接使用 type 关键字来指定
2.可以在线修改
方法1:使用type

[root@master ~]# cat pod5.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx
        imagePullPolicy: IfNotPresent
[root@master ~]# kubectl apply -f pod5.yaml
deployment.apps/nginx-deployment created
[root@master ~]# kubectl expose --name svc1 deployment nginx-deployment --port 80 --target-port 80 --type=NodePort
service/svc1 exposed
[root@master ~]# kubectl get svc
NAME         TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)        AGE
svc1         NodePort    10.107.168.203   <none>        80:30806/TCP   79s
--port 80 is the Service port, --target-port 80 is the port of the pods behind the Service, and the NodePort is the port opened on every node, allocated from the default range 30000-32767.
In earlier versions the NodePort could be seen with netstat, but here it no longer shows up there, so inspect the iptables NAT rules instead:
[root@master ~]# netstat -ntulp |grep 30806
[root@master ~]# iptables -S -t nat |grep 30806
-A KUBE-NODEPORTS -p tcp -m comment --comment "default/svc1" -m tcp --dport 30806 -j KUBE-EXT-DZERXHZGH3HKTTEJ
可以访问集群内任意一个主机ip:30806 来访问nginx服务。
http://10.1.1.200:30806/
http://10.1.1.201:30806/
http://10.1.1.202:30806/
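For reference, the declarative equivalent of the expose command above looks like this (a sketch only; nodePort may be omitted to let Kubernetes pick one from 30000-32767, here pinned to the auto-assigned 30806):

apiVersion: v1
kind: Service
metadata:
  name: svc1
spec:
  type: NodePort
  selector:
    app: nginx        # matches the nginx-deployment pods
  ports:
  - port: 80          # Service port (ClusterIP side)
    targetPort: 80    # container port in the backend pods
    nodePort: 30806   # port opened on every node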

[root@master ~]# kubectl edit svc svc1 现在把端口30806改为 30888

[root@master ~]# iptables -S -t nat |grep 30806
[root@master ~]# iptables -S -t nat |grep 30888
-A KUBE-NODEPORTS -p tcp -m comment --comment "default/svc1" -m tcp --dport 30888 -j KUBE-EXT-DZERXHZGH3HKTTEJ
[root@master ~]# kubectl get svc
svc1         NodePort    10.107.168.203   <none>        80:30888/TCP   8m56s
这时候再次访问,就要更换端口30888了。
http://10.1.1.200:30888/
http://10.1.1.201:30888/
http://10.1.1.202:30888/

方法2:在线修改ClusterIP为NodePort

[root@master ~]# kubectl get svc
svc1         NodePort    10.107.168.203   <none>        80:30888/TCP   73m
[root@master ~]# kubectl edit svc svc1
service/svc1 edited
[root@master ~]# kubectl get svc    先将svc1改成ClusterIP
NAME         TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)        AGE
svc1         ClusterIP   10.107.168.203   <none>        80/TCP         76m


[root@master ~]# kubectl edit svc svc1
service/svc1 edited
[root@master ~]# kubectl get svc
NAME         TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)        AGE
svc1         NodePort    10.107.168.203   <none>        80:31000/TCP   80m

In production the cluster nodes themselves are not exposed on the Internet; a load balancer and/or firewall sits in front of them. Port 80 is opened on the firewall/LB and forwarded to the NodePort on the backend nodes.
DNS then points the domain at the firewall/LB, so clients simply use the default port 80.
The cluster itself uses only private addresses.
LoadBalancer
Note: this approach requires publicly routable IPs to be available to the cluster, which is costly, and it exposes the Service directly on the Internet.
在某节点上有个svc1,指定模式为lb,就会从地址池里面分配一个公网ip地址。
外网用户直接访问该公网ip即可访问。
最终公网ip会出现在 external-ip 这个字段上(默认none)

[root@master ~]# kubectl get svc
NAME         TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)        AGE
svc1         NodePort    10.107.168.203   <none>        80:31000/TCP   80m

安装模拟公网插件官网,https://metallb.universe.tf/installation/
kubectl apply -f https://raw.githubusercontent.com/metallb/metallb/v0.13.9/config/manifests/metallb-native.yaml
网络原因,先把文件拷贝过来,将代码复制重命名上传。

将metallb.yaml上传到家目录,查询这个文件用到的镜像
[root@master ~]# grep image metallb.yaml -n
1712:        image: quay.io/metallb/controller:v0.13.9
1805:        image: quay.io/metallb/speaker:v0.13.9
可以提前在所有节点手工下载这两个镜像,并修改yaml文件的imagePullPolicy: IfNotPresent
crictl pull quay.io/metallb/controller:v0.13.9
crictl pull quay.io/metallb/speaker:v0.13.9
[root@node2 ~]# crictl images |grep me   在节点查看镜像下载情况
quay.io/metallb/controller                                                  v0.13.9             26952499c3023       27.8MB
quay.io/metallb/speaker                                                     v0.13.9             697605b359357       50.1MB

[root@master ~]# kubectl apply -f metallb.yaml
[root@master ~]# kubectl get ns    发现多了个metallb-system 命名空间
NAME               STATUS   AGE
calico-apiserver   Active   3d20h
calico-system      Active   3d20h
default            Active   3d20h
kube-node-lease    Active   3d20h
kube-public        Active   3d20h
kube-system        Active   3d20h
metallb-system     Active   16s
tigera-operator    Active   3d20h
[root@master ~]# kubectl get pod -n metallb-system -o wide    speaker是daemonset控制器管理的,每个节点运行一个speaker pod
NAME                          READY   STATUS    RESTARTS      AGE   IP              NODE     NOMINATED NODE   READINESS GATES
controller-7948676b95-xbw2m   1/1     Running   6 (12m ago)   19h   10.244.104.15   node2    <none>           <none>
speaker-fjzd6                 1/1     Running   0             19h   10.1.1.201      node1    <none>           <none>
speaker-wq8qn                 1/1     Running   5 (13m ago)   19h   10.1.1.200      master   <none>           <none>
speaker-xb5rx                 1/1     Running   7 (11m ago)   19h   10.1.1.202      node2    <none>           <none>
[root@master ~]# kubectl get deployments.apps -n metallb-system   controller是deployment控制器管理的
NAME         READY   UP-TO-DATE   AVAILABLE   AGE
controller   1/1     1            1           19h
[root@master ~]# kubectl get deployments.apps -A   查询所有deployment控制器
NAMESPACE          NAME                      READY   UP-TO-DATE   AVAILABLE   AGE
calico-apiserver   calico-apiserver          2/2     2            2           4d15h
calico-system      calico-kube-controllers   1/1     1            1           4d16h
calico-system      calico-typha              2/2     2            2           4d16h
kube-system        coredns                   2/2     2            2           4d16h
kube-system        metrics-server            1/1     1            1           4d15h
metallb-system     controller                1/1     1            1           19h
tigera-operator    tigera-operator           1/1     1            1           4d16h
[root@master ~]# kubectl get daemonsets.apps -A    查询所有daemonset控制器,speaker是daemonset控制器管理的
NAMESPACE        NAME              DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR            AGE
calico-system    calico-node       3         3         3       3            3           kubernetes.io/os=linux   4d16h
calico-system    csi-node-driver   3         3         3       3            3           kubernetes.io/os=linux   4d16h
kube-system      kube-proxy        3         3         3       3            3           kubernetes.io/os=linux   4d16h
metallb-system   speaker           3         3         3       3            3           kubernetes.io/os=linux   19h

创建地址池官网,https://metallb.universe.tf/configuration/

安装小工具sipcalc查看网络段,计算ip子网
[root@master yum.repos.d]# yum install -y epel-release    下载epel源后,yum源发现多了个epel源
[root@master ~]#  yum install -y sipcalc
当前master节点ip地址为:10.1.1.200
[root@master ~]# sipcalc 10.1.1.200/24
-[ipv4 : 10.1.1.200/24] - 0

[CIDR]
Host address            - 10.1.1.200
Host address (decimal)  - 167838152
Host address (hex)      - A0101C8
Network address         - 10.1.1.0
Network mask            - 255.255.255.0
Network mask (bits)     - 24
Network mask (hex)      - FFFFFF00
Broadcast address       - 10.1.1.255
Cisco wildcard          - 0.0.0.255
Addresses in network    - 256
Network range           - 10.1.1.0 - 10.1.1.255
Usable range            - 10.1.1.1 - 10.1.1.254
创建地址池
[root@master ~]# vim pools.yaml
[root@master ~]# cat pools.yaml
apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: first-pool
  namespace: metallb-system
spec:
  addresses:
  - 10.1.1.210-10.1.1.220      #模拟公网的IP地址池
创建实例并绑定地址池
[root@master ~]# vim l2.yaml
[root@master ~]# cat l2.yaml
apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
  name: example
  namespace: metallb-system
spec:
  ipAddressPools:
  - first-pool
[root@master ~]# kubectl apply -f pools.yaml
ipaddresspool.metallb.io/first-pool created
[root@master ~]# kubectl apply -f l2.yaml
l2advertisement.metallb.io/example created
[root@master ~]# kubectl get ipaddresspools.metallb.io -n metallb-system
NAME         AUTO ASSIGN   AVOID BUGGY IPS   ADDRESSES
first-pool   true          false             ["10.1.1.210-10.1.1.220"]
创建svc (先创建deployment)
[root@master ~]# kubectl get deployments.apps
No resources found in default namespace.
[root@master ~]# cat pod5.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx
        imagePullPolicy: IfNotPresent
[root@master ~]# kubectl apply -f pod5.yaml
deployment.apps/nginx-deployment created
[root@master ~]# kubectl get deployments.apps
NAME               READY   UP-TO-DATE   AVAILABLE   AGE
nginx-deployment   3/3     3            3           71s
[root@master ~]# kubectl expose --name svc666 deployment nginx-deployment --port 80 --target-port 80 --type LoadBalancer
service/svc666 exposed
[root@master ~]# kubectl get svc
NAME         TYPE           CLUSTER-IP      EXTERNAL-IP   PORT(S)        AGE
kubernetes   ClusterIP      10.96.0.1       <none>        443/TCP        4d20h
svc666       LoadBalancer   10.109.75.168   10.1.1.210    80:30557/TCP   9s
最后通过 10.1.1.210访问nginx服务。注意:通过公网IP直接访问,不需要加端口号!!!
ip地址池可以有多个,但实例只能有一个。
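For reference, the svc666 created above with kubectl expose is equivalent to this manifest (a sketch assuming the MetalLB pool above is active; the external IP is assigned automatically from 10.1.1.210-10.1.1.220):

apiVersion: v1
kind: Service
metadata:
  name: svc666
spec:
  type: LoadBalancer
  selector:
    app: nginx        # matches the nginx-deployment pods
  ports:
  - port: 80
    targetPort: 80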

Ingress (strongly recommended; in Huawei Cloud CCE this is called a Route)

Forward proxy: egress direction, generally used for outbound traffic

华为云
|
代理服务器 proxy 一般用于出去
|
pc电脑
反向代理:ingress入方向,一般用于进来的数据包
一般服务器提供反向代理的,外部流量进入里面,处理进来的数据包

用户(www1.baidu.com/www2.baidu.com 解析地址都是一样的)
|  
反向代理服务器 nginx
|  
server1  server2 
根据用户不同需求,反向代理会将数据包转发到不同的server上面去。
如下图:箭头往下

    反向代理 nginx控制器 (nginx本身具备负载均衡能力)
              |         
www1.xx.com   www2.xx.com  www3.xx.com      ---virtual hosts; rule: requests for www1 are forwarded to service1

svcn1           svcn2          svcn3       ---clusterip 

podn1(111)      podn2 (222)    podn3

Ingress rules route traffic to different Services: the cluster runs several Services fronting different pods, and based on the requested host/path the Ingress routes each request to the corresponding Service and thus to the right pods.
Why use a reverse proxy?
With NodePort, every Service maps to yet another 30000+ port on the nodes; with many Services that means many open ports on the hosts, which is a real security concern.
Instead, only the nginx load balancer is mapped out on a single port. External hosts talk to nginx, and a request for www1 is switched to service1, and so on — none of the individual Service ports need to be exposed.

给两个pod分别创建svc,类型LB
# kubectl expose pod web201 --name web201 --port 80 --target-port 80 --type LoadBalancer
# kubectl expose pod web202 --name web202 --port 80 --target-port 80 --type LoadBalancer
# kubectl get svc
NAME         TYPE           CLUSTER-IP      EXTERNAL-IP       PORT(S)        AGE
kubernetes   ClusterIP      10.96.0.1       <none>            443/TCP        22d
web201       LoadBalancer   10.107.66.15    192.168.100.212   80:30801/TCP   17s
web202       LoadBalancer   10.110.72.130   192.168.100.213   80:30498/TCP   2s

At this point there are two pods and two sites, each reached through its own port — every site consumes its own public IP and port.
DNS round-robin can spread traffic for a single domain across them, but only if the sites serve identical content, i.e. purely for high availability.
When the sites are different, each domain needs its own IP/port mapping, so the host ends up with many open ports, which is insecure.
What we actually want is one host IP and one port, with domain-based rules routing to different sites — and that is exactly what Ingress provides.

步骤:
1.安装nginx(反向代理/负载均衡–NodePort类型发布出去)
2.创建三个svc
3.创建三个pod(111/222/333)
定义规则:
客户端访问www1 ,转发到svc1上面,访问www2,转发svc2

1、创建pod及svc
先清空环境,删除svc,deployment,pod
[root@master ~]# kubectl run n1 --image nginx --image-pull-policy IfNotPresent
pod/n1 created
[root@master ~]# kubectl run n2 --image nginx --image-pull-policy IfNotPresent
pod/n2 created
[root@master ~]# kubectl run n3 --image nginx --image-pull-policy IfNotPresent
pod/n3 created
[root@master ~]# kubectl get pod
NAME   READY   STATUS    RESTARTS   AGE
n1     1/1     Running   0          21s
n2     1/1     Running   0          11s
n3     1/1     Running   0          5s
[root@master ~]# kubectl exec -ti n1 -- bash
root@n1:/# echo 111 > /usr/share/nginx/html/index.html
root@n1:/# exit
exit
[root@master ~]# kubectl exec -ti n2 -- bash
root@n2:/# echo 222 > /usr/share/nginx/html/index.html
root@n2:/# exit
exit
[root@master ~]# kubectl exec -ti n3 -- bash
root@n3:/# echo 333 > /usr/share/nginx/html/index.html
root@n3:/# mkdir /usr/share/nginx/html/abc
root@n3:/# echo 444 > /usr/share/nginx/html/abc/index.html
root@n3:/# exit
exit
[root@master ~]# kubectl expose --name svc1 pod n1 --port 80 --target-port 80
service/svc1 exposed
[root@master ~]# kubectl expose --name svc2 pod n2 --port 80 --target-port 80
service/svc2 exposed
[root@master ~]# kubectl expose --name svc3 pod n3 --port 80 --target-port 80
service/svc3 exposed
[root@master ~]# kubectl get svc
NAME         TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)   AGE
kubernetes   ClusterIP   10.96.0.1        <none>        443/TCP   4d22h
svc1         ClusterIP   10.108.173.117   <none>        80/TCP    32s
svc2         ClusterIP   10.106.230.216   <none>        80/TCP    23s
svc3         ClusterIP   10.101.50.139    <none>        80/TCP    14s
2、配置反向代理
官网,https://kubernetes.github.io/ingress-nginx/deploy/
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.6.4/deploy/static/provider/cloud/deploy.yaml
网络原因,先把文件拷贝过来,将代码复制重命名上传。
将deploy.yaml上传到家目录,查询这个文件用到的镜像
[root@master ~]# grep image deploy.yaml    看需要安装哪些包,这些包在线联网安装比较慢,可以提前手工下载
        image: registry.cn-hangzhou.aliyuncs.com/cloudcs/controller:v1.6.4
        imagePullPolicy: IfNotPresent
        image: registry.cn-hangzhou.aliyuncs.com/cloudcs/kube-webhook-certgen:v20220916-gd32f8c343
        imagePullPolicy: IfNotPresent
        image: registry.cn-hangzhou.aliyuncs.com/cloudcs/kube-webhook-certgen:v20220916-gd32f8c343
        imagePullPolicy: IfNotPresent
可以提前在所有节点手工下载这两个镜像
[root@master ~]# crictl pull registry.cn-hangzhou.aliyuncs.com/cloudcs/kube-webhook-certgen:v20220916-gd32f8c343
Image is up to date for sha256:520347519a8caefcdff1c480be13cea37a66bccf517302949b569a654b0656b5
[root@master ~]# crictl pull registry.cn-hangzhou.aliyuncs.com/cloudcs/controller:v1.6.4
Image is up to date for sha256:7744eedd958ffb7011ea5dda4b9010de8e69a9f114ba3312c149bb7943ddbcd6
因为镜像无法pull下来,所以申请香港主机
yum install -y yum-utils vim bash-completion net-tools wget
systemctl stop firewalld
systemctl disable firewalld
yum-config-manager --add-repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
yum list docker-ce --showduplicates | sort -r
yum install -y docker-ce
systemctl start docker 
systemctl enable docker
docker -v
systemctl is-active docker
systemctl is-enabled docker

为方便后期使用,安装完docker后,直接做成私有镜像。
进行下载,之后推送到阿里云中
[root@ecs-docker ~]# docker pull registry.k8s.io/ingress-nginx/controller:v1.6.4
[root@ecs-docker ~]# docker pull registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20220916-gd32f8c343

推送到阿里云
[root@ecs-docker ~]# docker login --username=clisdodo@126.com registry.cn-hangzhou.aliyuncs.com
Password: 
WARNING! Your password will be stored unencrypted in /root/.docker/config.json.
Configure a credential helper to remove this warning. See
https://docs.docker.com/engine/reference/commandline/login/#credentials-store

Login Succeeded
[root@ecs-docker ~]# docker tag registry.k8s.io/ingress-nginx/controller:v1.6.4 registry.cn-hangzhou.aliyuncs.com/cloudcs/controller:v1.6.4
[root@ecs-docker ~]# docker tag registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20220916-gd32f8c343 registry.cn-hangzhou.aliyuncs.com/cloudcs/kube-webhook-certgen:v20220916-gd32f8c343

[root@ecs-docker ~]# docker push registry.cn-hangzhou.aliyuncs.com/cloudcs/kube-webhook-certgen:v20220916-gd32f8c343
[root@ecs-docker ~]# docker push registry.cn-hangzhou.aliyuncs.com/cloudcs/controller:v1.6.4

然后,从阿里云下载(所有节点)
[root@kmaster ~]# crictl pull registry.cn-hangzhou.aliyuncs.com/cloudcs/kube-webhook-certgen:v20220916-gd32f8c343
Image is up to date for sha256:520347519a8caefcdff1c480be13cea37a66bccf517302949b569a654b0656b5
[root@kmaster ~]# crictl pull registry.cn-hangzhou.aliyuncs.com/cloudcs/controller:v1.6.4
Image is up to date for sha256:7744eedd958ffb7011ea5dda4b9010de8e69a9f114ba3312c149bb7943ddbcd6
修改deploy.yaml文件,可在上传文件前修改好
[root@master ~]# vim deploy.yaml 
将里面的镜像路径修改成阿里云私有镜像仓库地址
[root@master ~]# grep image deploy.yaml    修改前的
        image: registry.k8s.io/ingress-nginx/controller:v1.6.4@sha256:15be4666c53052484dd2992efacf2f50ea77a78ae8aa21ccd91af6baaa7ea22f
        imagePullPolicy: IfNotPresent
        image: registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20220916-gd32f8c343@sha256:39c5b2e3310dc4264d638ad28d9d1d96c4cbb2b2dcfb52368fe4e3c63f61e10f
        imagePullPolicy: IfNotPresent
        image: registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20220916-gd32f8c343@sha256:39c5b2e3310dc4264d638ad28d9d1d96c4cbb2b2dcfb52368fe4e3c63f61e10f
        imagePullPolicy: IfNotPresent
[root@master ~]# grep image deploy.yaml    修改后的
        image: registry.cn-hangzhou.aliyuncs.com/cloudcs/controller:v1.6.4
        imagePullPolicy: IfNotPresent
        image: registry.cn-hangzhou.aliyuncs.com/cloudcs/kube-webhook-certgen:v20220916-gd32f8c343
        imagePullPolicy: IfNotPresent
        image: registry.cn-hangzhou.aliyuncs.com/cloudcs/kube-webhook-certgen:v20220916-gd32f8c343
        imagePullPolicy: IfNotPresent
[root@master ~]# kubectl apply -f deploy.yaml
[root@master ~]# kubectl get ns    发现多了个ingress-nginx 命名空间
NAME               STATUS   AGE
calico-apiserver   Active   4d22h
calico-system      Active   4d22h
default            Active   4d22h
ingress-nginx      Active   34s
kube-node-lease    Active   4d22h
kube-public        Active   4d22h
kube-system        Active   4d22h
metallb-system     Active   25h
tigera-operator    Active   4d22h
[root@master ~]# kubectl get pod -n ingress-nginx
NAME                                       READY   STATUS      RESTARTS   AGE
ingress-nginx-admission-create-wvgkt       0/1     Completed   0          79s
ingress-nginx-admission-patch-xf6hx        0/1     Completed   0          79s
ingress-nginx-controller-7dbcbbc65-6qpgr   1/1     Running     0          79s
The two admission-* pods above are created by one-shot Jobs, not long-running workloads; Completed is their normal state.
创建好deploy.yaml之后,ingress-nginx命名空间默认会有两个svc
[root@master ~]# kubectl get svc -n ingress-nginx
NAME                                 TYPE           CLUSTER-IP      EXTERNAL-IP   PORT(S)                      AGE
ingress-nginx-controller             LoadBalancer   10.108.131.81   10.1.1.210    80:31887/TCP,443:31575/TCP   2m43s
ingress-nginx-controller-admission   ClusterIP      10.99.101.228   <none>        443/TCP                      2m42s
查看deployment
[root@master ~]# kubectl get deployments.apps -n ingress-nginx
NAME                       READY   UP-TO-DATE   AVAILABLE   AGE
ingress-nginx-controller   1/1     1            1           4m10s

下面这段可以不要,不需要使用NodePort对外发布,使用它自己的LB即可。
因为当前环境存在LB地址池,所以会自动获取公网ip,如果没有,建议手工以NodePort方式对外发布。
把nginx 的 svc发布出去
[root@master ~]# kubectl expose --name inss deployment ingress-nginx-controller --type NodePort -n ingress-nginx
service/inss exposed
[root@master ~]# kubectl get svc -n ingress-nginx
NAME                                 TYPE           CLUSTER-IP      EXTERNAL-IP   PORT(S)                                     AGE
ingress-nginx-controller             LoadBalancer   10.108.131.81   10.1.1.210    80:31887/TCP,443:31575/TCP                  5m24s
ingress-nginx-controller-admission   ClusterIP      10.99.101.228   <none>        443/TCP                                     5m23s
inss                                 NodePort       10.110.146.54   <none>        80:31661/TCP,443:31248/TCP,8443:32553/TCP   21s
通过集群内任一台主机ip的31661端口发布出去了。
3、配置规则
官网,https://kubernetes.io/docs/concepts/services-networking/ingress/
[root@master ~]# kubectl get ingress -n ingress-nginx    现在是没有任何规则的
No resources found in ingress-nginx namespace.
[root@master ~]# vim inss.yaml
[root@master ~]# cat inss.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress-wildcard-host
spec:
  rules:
  - host: "www.meme1.com"
    http:
      paths:
      - pathType: Prefix
        path: "/"
        backend:
          service:
            name: svc1
            port:
              number: 80
  - host: "www.meme3.com"
    http:
      paths:
      - pathType: Prefix
        path: "/"
        backend:
          service:
            name: svc3
            port:
              number: 80
      - pathType: Prefix
        path: "/abc"
        backend:
          service:
            name: svc3
            port:
              number: 80
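An alternative to the default-class trick described next is to reference the class explicitly inside the Ingress itself, using the standard spec.ingressClassName field (shown here as a fragment of the same manifest; the rules stay unchanged):

spec:
  ingressClassName: nginx   # the class name listed by: kubectl get ingressclasses
  rules:
  # ... same rules as above, unchanged

Either approach works; the walk-through below uses the default-class annotation instead.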
Note that the Ingress also needs to be associated with an IngressClass, otherwise the controller will complain. How to use the default ingress class:
官网,https://kubernetes.io/zh-cn/docs/concepts/services-networking/ingress/#default-ingress-class
[root@master ~]# kubectl get pod -n ingress-nginx
NAME                                       READY   STATUS      RESTARTS   AGE
ingress-nginx-admission-create-wvgkt       0/1     Completed   0          15m
ingress-nginx-admission-patch-xf6hx        0/1     Completed   0          15m
ingress-nginx-controller-7dbcbbc65-6qpgr   1/1     Running     0          15m
[root@master ~]# kubectl logs pods/ingress-nginx-controller-7dbcbbc65-6qpgr -n ingress-nginx
-------------------------------------------------------------------------------
NGINX Ingress controller
  Release:       v1.6.4
  Build:         69e8833858fb6bda12a44990f1d5eaa7b13f4b75
  Repository:    https://github.com/kubernetes/ingress-nginx
  nginx version: nginx/1.21.6

-------------------------------------------------------------------------------

W1030 07:59:24.896781       7 client_config.go:618] Neither --kubeconfig nor --master was specified.  Using the inClusterConfig.  This might not work.
I1030 07:59:24.897076       7 main.go:209] "Creating API client" host="https://10.96.0.1:443"
I1030 07:59:24.911366       7 main.go:253] "Running in Kubernetes cluster" major="1" minor="27" git="v1.27.0" state="clean" commit="1b4df30b3cdfeaba6024e81e559a6cd09a089d65" platform="linux/amd64"
I1030 07:59:25.728939       7 main.go:104] "SSL fake certificate created" file="/etc/ingress-controller/ssl/default-fake-certificate.pem"
I1030 07:59:25.829025       7 ssl.go:533] "loading tls certificate" path="/usr/local/certificates/cert" key="/usr/local/certificates/key"
I1030 07:59:25.847438       7 nginx.go:261] "Starting NGINX Ingress controller"
I1030 07:59:25.976946       7 event.go:285] Event(v1.ObjectReference{Kind:"ConfigMap", Namespace:"ingress-nginx", Name:"ingress-nginx-controller", UID:"c4a8a216-a7b3-48ec-b6b2-66ba86c1f4b1", APIVersion:"v1", ResourceVersion:"410806", FieldPath:""}): type: 'Normal' reason: 'CREATE' ConfigMap ingress-nginx/ingress-nginx-controller
I1030 07:59:27.157281       7 nginx.go:304] "Starting NGINX process"
I1030 07:59:27.157526       7 leaderelection.go:248] attempting to acquire leader lease ingress-nginx/ingress-nginx-leader...
I1030 07:59:27.158083       7 nginx.go:324] "Starting validation webhook" address=":8443" certPath="/usr/local/certificates/cert" keyPath="/usr/local/certificates/key"
I1030 07:59:27.158406       7 controller.go:188] "Configuration changes detected, backend reload required"
I1030 07:59:27.166951       7 leaderelection.go:258] successfully acquired lease ingress-nginx/ingress-nginx-leader
I1030 07:59:27.167048       7 status.go:84] "New leader elected" identity="ingress-nginx-controller-7dbcbbc65-6qpgr"
I1030 07:59:27.257972       7 controller.go:205] "Backend successfully reloaded"
I1030 07:59:27.258376       7 controller.go:216] "Initial sync, sleeping for 1 second"
I1030 07:59:27.259765       7 event.go:285] Event(v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-7dbcbbc65-6qpgr", UID:"287e1cfa-ec26-4b1e-9d02-a7ceeeb56ae9", APIVersion:"v1", ResourceVersion:"410884", FieldPath:""}): type: 'Normal' reason: 'RELOAD' NGINX reload triggered due to a change in configuration
可以将一个特定的 IngressClass 标记为集群默认 Ingress 类。 
将一个 IngressClass 资源的 ingressclass.kubernetes.io/is-default-class 注解设置为 true 
将确保新的未指定 ingressClassName 字段的 Ingress 能够分配为这个默认的 IngressClass

默认有个nginx的class类。
[root@master ~]# kubectl get ingressclasses.networking.k8s.io
NAME    CONTROLLER             PARAMETERS   AGE
nginx   k8s.io/ingress-nginx   <none>       19m
[root@master ~]# kubectl edit ingressclasses.networking.k8s.io nginx
ingressclass.networking.k8s.io/nginx edited
在annotations:下面添加ingressclass一行
  annotations:
    ingressclass.kubernetes.io/is-default-class: "true"
    kubectl.kubernetes.io/last-applied-configuration: |
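The same annotation can also be applied non-interactively (one line, assuming the class is named nginx as listed above):

kubectl annotate ingressclass nginx ingressclass.kubernetes.io/is-default-class="true" --overwrite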


修改完毕后,需要重新创建ingress
[root@master ~]# kubectl apply -f inss.yaml
ingress.networking.k8s.io/ingress-wildcard-host created
[root@master ~]# kubectl get pod
NAME   READY   STATUS    RESTARTS   AGE
n1     1/1     Running   0          70m
n2     1/1     Running   0          69m
n3     1/1     Running   0          69m
[root@master ~]# kubectl get ingress
NAME                    CLASS   HOSTS                         ADDRESS      PORTS   AGE
ingress-wildcard-host   nginx   www.meme1.com,www.meme3.com   10.1.1.210   80      9m18s
[root@master ~]# kubectl describe ingress ingress-wildcard-host
Name:             ingress-wildcard-host
Labels:           <none>
Namespace:        default
Address:          10.1.1.210
Ingress Class:    nginx
Default backend:  <default>
Rules:
  Host           Path  Backends
  ----           ----  --------
  www.meme1.com
                 /   svc1:80 (10.244.104.28:80)
  www.meme3.com
                 /      svc3:80 (10.244.166.134:80)
                 /abc   svc3:80 (10.244.166.134:80)
Annotations:     <none>
Events:
  Type    Reason  Age                    From                      Message
  ----    ------  ----                   ----                      -------
  Normal  Sync    9m19s (x2 over 9m41s)  nginx-ingress-controller  Scheduled for sync   #同步成功
负载均衡是在哪里运行的?
[root@master ~]# kubectl get pod -n ingress-nginx -o wide
NAME                                       READY   STATUS      RESTARTS   AGE   IP               NODE    NOMINATED NODE   READINESS GATES
ingress-nginx-admission-create-wvgkt       0/1     Completed   0          44m   10.244.104.27    node2   <none>           <none>
ingress-nginx-admission-patch-xf6hx        0/1     Completed   0          44m   10.244.166.141   node1   <none>           <none>
ingress-nginx-controller-7dbcbbc65-6qpgr   1/1     Running     0          44m   10.244.104.35    node2   <none>           <none>
4、测试,随便找台图形化linux虚拟机,修改/etc/hosts
[root@master ~]# kubectl get svc -n ingress-nginx
NAME                                 TYPE           CLUSTER-IP      EXTERNAL-IP   PORT(S)                                     AGE
ingress-nginx-controller             LoadBalancer   10.108.131.81   10.1.1.210    80:31887/TCP,443:31575/TCP                  61m
ingress-nginx-controller-admission   ClusterIP      10.99.101.228   <none>        443/TCP                                     61m
inss                                 NodePort       10.110.146.54   <none>        80:31661/TCP,443:31248/TCP,8443:32553/TCP   56m
以LoadBalancer方式发布出去,在hosts配置EXTERNAL-IP 10.1.1.210,在要测试的虚拟机先ping通10.1.1.210
[root@localhost ~]# echo '10.1.1.210 www.meme1.com' >> /etc/hosts  添加 ingress-nginx-controller LB 地址(公网ip)
[root@localhost ~]# echo '10.1.1.210 www.meme3.com' >> /etc/hosts
[root@localhost ~]# cat /etc/hosts
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
10.1.1.210 www.meme1.com
10.1.1.210 www.meme3.com
解析的地址都是一样的,在测试主机上打开浏览器。
如果不起作用怎么办?查看ingress-nginx 命名空间下面 nginx-controller 的日志。
[root@master ~]# kubectl logs pods/ingress-nginx-controller-7dbcbbc65-6qpgr -n ingress-nginx 
或许会报错class类错误。
正确的日志是这样:
I1030 07:59:27.257972       7 controller.go:205] "Backend successfully reloaded"
I1030 07:59:27.258376       7 controller.go:216] "Initial sync, sleeping for 1 second"
I1030 07:59:27.259765       7 event.go:285] Event(v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-7dbcbbc65-6qpgr", UID:"287e1cfa-ec26-4b1e-9d02-a7ceeeb56ae9", APIVersion:"v1", ResourceVersion:"410884", FieldPath:""}): type: 'Normal' reason: 'RELOAD' NGINX reload triggered due to a change in configuration
I1030 08:31:09.922478       7 admission.go:149] processed ingress via admission controller {testedIngressLength:1 testedIngressTime:0.074s renderingIngressLength:1 renderingIngressTime:0.001s admissionTime:29.6kBs testedConfigurationSize:0.075}
I1030 08:31:09.922538       7 main.go:100] "successfully validated configuration, accepting" ingress="default/ingress-wildcard-host"
I1030 08:31:09.983927       7 store.go:433] "Found valid IngressClass" ingress="default/ingress-wildcard-host" ingressclass="nginx"
I1030 08:31:09.984248       7 event.go:285] Event(v1.ObjectReference{Kind:"Ingress", Namespace:"default", Name:"ingress-wildcard-host", UID:"9efd1931-6362-45fe-8c90-b4e2c227ff74", APIVersion:"networking.k8s.io/v1", ResourceVersion:"414980", FieldPath:""}): type: 'Normal' reason: 'Sync' Scheduled for sync
I1030 08:31:09.984483       7 controller.go:188] "Configuration changes detected, backend reload required"
I1030 08:31:10.098694       7 controller.go:205] "Backend successfully reloaded"

以NodePort方式发布出去,在hosts配置集群内任一台主机ip,在要测试的虚拟机先ping通集群内任一台主机ip(有问题)



8、helm

On Linux, rpm packages are installed with yum/dnf, which resolves all dependencies automatically. Helm plays a similar role for Kubernetes: it bundles all the definitions an application needs (pods, svc, deployments, pvc/pv and so on) so they can be managed together in a repository and installed in one step.
Without Helm you would have to write all of those yaml files up front; with Helm an application installs with a single command. Many pre-packaged applications are available from Helm repositories on the Internet, which work much like yum repositories full of packages. In short, Helm is the package manager for Kubernetes: it lets you search, download and install applications quickly.

源1             源2
包1 包2.tar   包3 包4
     |
     |
     |
A package can be downloaded locally (with helm itself) and then unpacked by hand; the downloaded .tgz file is the package.
tar --unpack--> a directory (called a chart, containing everything needed to create the application) ---------> k8s environment
With helm installed locally, it deploys the chart into the k8s environment according to the chart's definitions.
Alternatively you can install straight from a repository without downloading anything, but downloading the chart locally and adjusting it before installing is recommended.

helm版本:v2和v3版本,v2版本有些复杂,v3版本简化了很多操作,没有了tiller端,只有单纯的helm客户端。
安装过程官网,https://helm.sh/docs/intro/install/
下载软件,https://github.com/helm/helm/releases/tag/v3.13.1,helm-v3.13.1-linux-amd64.tar.gz

[root@master ~]# mkdir /helm
[root@master ~]# cd /helm
[root@master helm]# ls
helm-v3.13.1-linux-amd64.tar.gz
[root@master helm]# tar -zxvf helm-v3.13.1-linux-amd64.tar.gz
linux-amd64/
linux-amd64/LICENSE
linux-amd64/helm
linux-amd64/README.md
[root@master helm]# ls
helm-v3.13.1-linux-amd64.tar.gz  linux-amd64
[root@master helm]# cd linux-amd64/
[root@master linux-amd64]# ls
helm  LICENSE  README.md
[root@master linux-amd64]# helm help
-bash: helm: command not found
在Linux系统中,/usr/local/bin路径一般被设为环境变量PATH的一部分,# echo $PATH,/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/root/bin
因此在任何地方,只要输入该目录下的可执行文件名,系统就会自动查找该文件并执行。
[root@master linux-amd64]# mv helm /usr/local/bin/helm
[root@master linux-amd64]# helm help
The Kubernetes package manager
Common actions for Helm:
- helm search:    search for charts
- helm pull:      download a chart to your local directory to view
- helm install:   upload the chart to Kubernetes
- helm list:      list releases of charts
配置环境,一个>覆盖写入(没有此文件就新创建此文件并写入),两个>>换行追加写入
[root@master linux-amd64]# helm completion bash > ~/.helm
[root@master linux-amd64]# helm completion bash > ~/.helmrc
[root@master linux-amd64]# echo "source ~/.helmrc" >> ~/.bashrc
[root@master linux-amd64]# source ~/.bashrc
配置仓库,阿里云无法使用。
微软的chart仓库,http://mirror.azure.cn/kubernetes/charts/,这个仓库强烈推荐,基本上官网有的chart这里都有。
[root@master linux-amd64]# helm repo list
Error: no repositories to show
添加helm源并命名源
[root@master linux-amd64]# helm repo add weiruan http://mirror.azure.cn/kubernetes/charts/
"weiruan" has been added to your repositories
查看helm源
[root@master linux-amd64]# helm repo list
NAME    URL
weiruan http://mirror.azure.cn/kubernetes/charts/
查看helm源有哪些操作
[root@master linux-amd64]# helm repo
This command consists of multiple subcommands to interact with chart repositories.
It can be used to add, remove, list, and index chart repositories.
Usage:
  helm repo [command]
Available Commands:
  add         add a chart repository
  index       generate an index file given a directory containing packaged charts
  list        list chart repositories
  remove      remove one or more chart repositories
  update      update information of available charts locally from chart repositories
删除helm源
[root@master linux-amd64]# helm repo remove weiruan
"weiruan" has been removed from your repositories
[root@master linux-amd64]# helm repo list
Error: no repositories to show
添加helm源并命名源
[root@master linux-amd64]# helm repo add weiruan http://mirror.azure.cn/kubernetes/charts/
"weiruan" has been added to your repositories
[root@master linux-amd64]# helm repo list
NAME    URL
weiruan http://mirror.azure.cn/kubernetes/charts/
搜索helm源中mysql包
[root@master linux-amd64]# helm search repo mysql
NAME                                    CHART VERSION   APP VERSION     DESCRIPTION
weiruan/mysql                           1.6.9           5.7.30          DEPRECATED - Fast, reliable, scalable, and easy...
weiruan/mysqldump                       2.6.2           2.4.1           DEPRECATED! - A Helm chart to help backup MySQL...
weiruan/prometheus-mysql-exporter       0.7.1           v0.11.0         DEPRECATED A Helm chart for prometheus mysql ex...
weiruan/percona                         1.2.3           5.7.26          DEPRECATED - free, fully compatible, enhanced, ...
weiruan/percona-xtradb-cluster          1.0.8           5.7.19          DEPRECATED - free, fully compatible, enhanced, ...
weiruan/phpmyadmin                      4.3.5           5.0.1           DEPRECATED phpMyAdmin is an mysql administratio...
weiruan/gcloud-sqlproxy                 0.6.1           1.11            DEPRECATED Google Cloud SQL Proxy
weiruan/mariadb                         7.3.14          10.3.22         DEPRECATED Fast, reliable, scalable, and easy t...
从helm源中下载mysql包
[root@master linux-amd64]# helm pull weiruan/mysql
[root@master linux-amd64]# ls
LICENSE  mysql-1.6.9.tgz  README.md
tar--解压--得到文件夹(叫chart,里面包含了创建应用的所有的属性)---------> k8s 环境
[root@master linux-amd64]# tar -zxvf mysql-1.6.9.tgz
[root@master linux-amd64]# ls
LICENSE  mysql  mysql-1.6.9.tgz  README.md
mysql这个目录就是一个chart,也可以在本地打包
[root@master linux-amd64]# rm -rf mysql-1.6.9.tgz
[root@master linux-amd64]# ls
LICENSE  mysql  README.md
[root@master linux-amd64]# helm package mysql/     打包
Successfully packaged chart and saved it to: /helm/linux-amd64/mysql-1.6.9.tgz
[root@master linux-amd64]# ls
LICENSE  mysql  mysql-1.6.9.tgz  README.md
文件夹mysql打包后,它怎么知道具体版本的呢?mysql文件夹里面有个Chart.yaml文件,里面记录了元数据信息。
[root@master linux-amd64]# cd mysql
[root@master mysql]# cat Chart.yaml
apiVersion: v1
appVersion: 5.7.30
deprecated: true
description: DEPRECATED - Fast, reliable, scalable, and easy to use open-source relational
  database system.
home: https://www.mysql.com/
icon: https://www.mysql.com/common/logos/logo-mysql-170x115.png
keywords:
- mysql
- database
- sql
name: mysql
sources:
- https://github.com/kubernetes/charts
- https://github.com/docker-library/mysql
version: 1.6.9
然后根据values.yaml文件,部署pod
[root@master mysql]# ls
Chart.yaml  README.md  templates  values.yaml
查看values.yaml文件内容
版本可以改为最新的
busybox:
  image: "busybox"
  tag: "latest"

testFramework关闭
testFramework:
  enabled: false

mysql的root密码
## Default: random 10 character string
mysqlRootPassword: redhat

是否使用持久卷,改为false
## Persist data to a persistent volume
persistence:
  enabled: false

关闭ssl安全认证
ssl:
  enabled: false
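Instead of editing values.yaml by hand, the same overrides could also be passed on the command line with --set (for illustration only — the walk-through below keeps editing values.yaml; the keys are exactly the ones shown above):

helm install db ./mysql \
  --set mysqlRootPassword=redhat \
  --set persistence.enabled=false \
  --set ssl.enabled=false \
  --set testFramework.enabled=false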
这个values.yaml文件里面指定的这么多参数,它是如何自动创建的呢?比如持久卷如何创建?
[root@master mysql]# ls templates/
configurationFiles-configmap.yaml  _helpers.tpl                        NOTES.txt  secrets.yaml         servicemonitor.yaml  tests
deployment.yaml                    initializationFiles-configmap.yaml  pvc.yaml   serviceaccount.yaml  svc.yaml
The templates directory holds the yaml templates (pvc, svc, secrets and so on) from which the resources are created. These files are written with template variables, and the actual values are pulled in from values.yaml.
For example pvc.yaml:
[root@master templates]# cat pvc.yaml
{{- if and .Values.persistence.enabled (not .Values.persistence.existingClaim) }}
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: {{ template "mysql.fullname" . }}
  namespace: {{ .Release.Namespace }}
{{- with .Values.persistence.annotations  }}
  annotations:
{{ toYaml . | indent 4 }}
{{- end }}
  labels:
    app: {{ template "mysql.fullname" . }}
    chart: "{{ .Chart.Name }}-{{ .Chart.Version }}"
    release: "{{ .Release.Name }}"
    heritage: "{{ .Release.Service }}"
spec:
  accessModes:
    - {{ .Values.persistence.accessMode | quote }}
  resources:
    requests:
      storage: {{ .Values.persistence.size | quote }}
{{- if .Values.persistence.storageClass }}
{{- if (eq "-" .Values.persistence.storageClass) }}
  storageClassName: ""
{{- else }}
  storageClassName: "{{ .Values.persistence.storageClass }}"
{{- end }}
{{- end }}
{{- end }}
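To see how the templates and values.yaml combine without touching the cluster, the chart can be rendered locally with helm template (a client-side render only; the release name db is just an example):

helm template db mysql | less                                        # render every template with the current values.yaml
helm template db mysql --set persistence.enabled=true | grep -B1 -A3 PersistentVolumeClaim   # check that pvc.yaml now renders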

通过helm源来部署一个应用

[root@master linux-amd64]# helm ls
NAME    NAMESPACE       REVISION        UPDATED STATUS  CHART   APP VERSION
可以在本地构建(建议本地构建)
helm install name chart目录
也可以在线构建
helm install name helm源/名称
Generally, with a public helm repository you don't know in advance which values a chart expects, so it is best to pull the chart locally first (or at least inspect its default values, as shown below).
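If you'd rather not download the chart at all, its default values can still be dumped for review and overridden from a file (mysql-defaults.yaml is just an arbitrary local filename):

helm show values weiruan/mysql > mysql-defaults.yaml     # dump the chart's values.yaml for review/editing
helm install db weiruan/mysql -f mysql-defaults.yaml     # install using the edited copy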
[root@master linux-amd64]# helm install db mysql
[root@master linux-amd64]# helm ls
NAME    NAMESPACE       REVISION        UPDATED                                 STATUS          CHART           APP VERSION
db      default         1               2023-10-31 14:04:57.787520189 +0800 CST deployed        mysql-1.6.9     5.7.30
[root@master linux-amd64]# kubectl get pod -o wide
NAME                       READY   STATUS            RESTARTS   AGE     IP               NODE    NOMINATED NODE   READINESS GATES
db-mysql-f8bdc94b5-gdkpw   0/1     PodInitializing   0          5m51s   10.244.104.36    node2   <none>           <none>
PodInitializing说明pod还在初始化,node2下载mysql5.7.30镜像要点时间
[root@master linux-amd64]# kubectl get pod -o wide
NAME                       READY   STATUS    RESTARTS        AGE   IP               NODE    NOMINATED NODE   READINESS GATES
db-mysql-f8bdc94b5-gdkpw   1/1     Running   1 (3m12s ago)   10m   10.244.104.36    node2   <none>           <none>
安装mysql客户端,尝试连接
[root@master linux-amd64]# yum install -y mariadb
[root@master linux-amd64]# mysql -uroot -predhat -h 10.244.104.36
ERROR 1130 (HY000): Host '10.1.1.200' is not allowed to connect to this MySQL server
If the database cannot be reached, delete the release and install it again:
[root@master linux-amd64]# helm delete db
[root@master linux-amd64]# kubectl get pod -o wide
NAME                       READY   STATUS    RESTARTS   AGE   IP               NODE    NOMINATED NODE   READINESS GATES
db-mysql-f8bdc94b5-btxd8   0/1     Running   0          11s   10.244.104.37    node2   <none>           <none>
[root@master linux-amd64]# mysql -uroot -predhat -h 10.244.104.37
Welcome to the MariaDB monitor.  Commands end with ; or \g.
Your MySQL connection id is 4
Server version: 5.7.30 MySQL Community Server (GPL)

Copyright (c) 2000, 2018, Oracle, MariaDB Corporation Ab and others.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

MySQL [(none)]> show databases;
+--------------------+
| Database           |
+--------------------+
| information_schema |
| mysql              |
| performance_schema |
| sys                |
+--------------------+
4 rows in set (0.001 sec)
删除helm
[root@master linux-amd64]# helm delete db
[root@master linux-amd64]# helm ls
NAME    NAMESPACE       REVISION        UPDATED STATUS  CHART   APP VERSION
[root@master linux-amd64]# kubectl get pod
No resources found in default namespace.

使用helm安装软件包的时候,都是连接到互联网上去,但是生产环境有可能无法联网,这时候就可以选择搭建自己的私有仓库。比如就拿mysql为例。
用一个web服务器(为了方便演示,直接使用容器来做),创建一个新的目录,专门保存包资源。

[root@node2 ~]# docker images
REPOSITORY   TAG       IMAGE ID   CREATED   SIZE
[root@node2 ~]# docker pull nginx
Using default tag: latest
latest: Pulling from library/nginx
a2abf6c4d29d: Pull complete
a9edb18cadd1: Pull complete
589b7251471a: Pull complete
186b1aaa4aa6: Pull complete
b4df32aa5a72: Pull complete
a0bcbecc962e: Pull complete
Digest: sha256:0d17b565c37bcbd895e9d92315a05c1c3c9a29f762b011a10c54a66cd53c9b31
Status: Downloaded newer image for nginx:latest
docker.io/library/nginx:latest
[root@node2 ~]# docker images
REPOSITORY   TAG       IMAGE ID       CREATED         SIZE
nginx        latest    605c77e624dd   22 months ago   141MB
[root@node2 ~]# docker run -tid --name web-chart --restart always -p 80:80 -v /charts:/usr/share/nginx/html/charts nginx
67020e3ae81cbb8aebbf270e3621fba14f1a9a9d8411964987201d3b1c9c6860

注意:当访问node2主机ip的时候,默认加载的网站根目录为 /usr/share/nginx/html,如果要访问charts目录,则是主机ip/charts

首先将chart目录进行打包
[root@master linux-amd64]# ls
LICENSE  mysql  mysql-1.6.9.tgz  README.md
[root@master linux-amd64]# mkdir aa
[root@master linux-amd64]# helm package mysql
Successfully packaged chart and saved it to: /helm/linux-amd64/mysql-1.6.9.tgz
[root@master linux-amd64]# ls
aa  LICENSE  mysql  mysql-1.6.9.tgz  README.md
[root@master linux-amd64]# cp mysql-1.6.9.tgz aa/
[root@master linux-amd64]# ls aa/
mysql-1.6.9.tgz
为mysql包创建索引信息
[root@master linux-amd64]# helm repo index aa/ --url http://10.1.1.202/charts
[root@master linux-amd64]# ls aa/
index.yaml  mysql-1.6.9.tgz
[root@master linux-amd64]# cat aa/index.yaml
apiVersion: v1
entries:
  mysql:
  - apiVersion: v1
    appVersion: 5.7.30
    created: "2023-10-31T15:04:08.279395136+08:00"
    deprecated: true
    description: DEPRECATED - Fast, reliable, scalable, and easy to use open-source
      relational database system.
    digest: 990b7060d12861f9730f3faa8992035d00478576d336feeda0b1c180f400cfff
    home: https://www.mysql.com/
    icon: https://www.mysql.com/common/logos/logo-mysql-170x115.png
    keywords:
    - mysql
    - database
    - sql
    name: mysql
    sources:
    - https://github.com/kubernetes/charts
    - https://github.com/docker-library/mysql
    urls:
    - http://10.1.1.202/charts/mysql-1.6.9.tgz
    version: 1.6.9
generated: "2023-10-31T15:04:08.255484502+08:00"
将aa目录下的索引文件及包资源拷贝到容器web服务器里面
[root@master linux-amd64]# scp aa/* 10.1.1.202:/charts
The authenticity of host '10.1.1.202 (10.1.1.202)' can't be established.
ECDSA key fingerprint is SHA256:riWZTwAbdcBotXCHPTUeP5BaRlM2nJiPNbB49nz+YKA.
Are you sure you want to continue connecting (yes/no/[fingerprint])? yes
Warning: Permanently added '10.1.1.202' (ECDSA) to the list of known hosts.
root@10.1.1.202's password:
index.yaml                                                                                                                                     100%  737   548.1KB/s   00:00
mysql-1.6.9.tgz                                                                                                                                100%   11KB   7.1MB/s   00:00
[root@node2 ~]# ls /charts
index.yaml  mysql-1.6.9.tgz
[root@node2 ~]# docker ps
CONTAINER ID   IMAGE     COMMAND                  CREATED          STATUS          PORTS                               NAMES
67020e3ae81c   nginx     "/docker-entrypoint.…"   11 minutes ago   Up 11 minutes   0.0.0.0:80->80/tcp, :::80->80/tcp   web-chart
[root@node2 ~]# docker exec -ti web-chart /bin/bash
root@67020e3ae81c:/# ls /usr/share/nginx/html/charts/
index.yaml  mysql-1.6.9.tgz
做好之后,在helm中添加新的本地仓库。
[root@master linux-amd64]# helm repo add myrepo http://10.1.1.202/charts
"myrepo" has been added to your repositories
[root@master linux-amd64]# helm repo list
NAME    URL
weiruan http://mirror.azure.cn/kubernetes/charts/
myrepo  http://10.1.1.202/charts
这样,私有仓库就搭建好了。查询下。
[root@master linux-amd64]# helm search repo mysql
NAME                                    CHART VERSION   APP VERSION     DESCRIPTION
myrepo/mysql                            1.6.9           5.7.30          DEPRECATED - Fast, reliable, scalable, and easy...
weiruan/mysql                           1.6.9           5.7.30          DEPRECATED - Fast, reliable, scalable, and easy...
weiruan/mysqldump                       2.6.2           2.4.1           DEPRECATED! - A Helm chart to help backup MySQL...
weiruan/prometheus-mysql-exporter       0.7.1           v0.11.0         DEPRECATED A Helm chart for prometheus mysql ex...
weiruan/percona                         1.2.3           5.7.26          DEPRECATED - free, fully compatible, enhanced, ...
weiruan/percona-xtradb-cluster          1.0.8           5.7.19          DEPRECATED - free, fully compatible, enhanced, ...
weiruan/phpmyadmin                      4.3.5           5.0.1           DEPRECATED phpMyAdmin is an mysql administratio...
weiruan/gcloud-sqlproxy                 0.6.1           1.11            DEPRECATED Google Cloud SQL Proxy
weiruan/mariadb                         7.3.14          10.3.22         DEPRECATED Fast, reliable, scalable, and easy t...
通过helm私有仓库在线安装mysql。
[root@master linux-amd64]# helm ls
NAME    NAMESPACE       REVISION        UPDATED STATUS  CHART   APP VERSION
[root@master linux-amd64]# kubectl get pod
No resources found in default namespace.
[root@master linux-amd64]# helm install db myrepo/mysql
WARNING: This chart is deprecated
[root@master linux-amd64]# helm ls
NAME    NAMESPACE       REVISION        UPDATED                                 STATUS          CHART           APP VERSION
db      default         1               2023-10-31 15:15:16.776065805 +0800 CST deployed        mysql-1.6.9     5.7.30
[root@master linux-amd64]# kubectl get pod -o wide
NAME                       READY   STATUS    RESTARTS   AGE   IP               NODE    NOMINATED NODE   READINESS GATES
db-mysql-f8bdc94b5-jqbgt   0/1     Running   0          23s   10.244.104.45    node2   <none>           <none>
[root@master linux-amd64]# mysql -uroot -predhat -h 10.244.104.45
Welcome to the MariaDB monitor.  Commands end with ; or \g.
Your MySQL connection id is 22
Server version: 5.7.30 MySQL Community Server (GPL)

Copyright (c) 2000, 2018, Oracle, MariaDB Corporation Ab and others.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

MySQL [(none)]> show databases;
+--------------------+
| Database           |
+--------------------+
| information_schema |
| mysql              |
| performance_schema |
| sys                |
+--------------------+
4 rows in set (0.001 sec)
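If the release needs a fixed root password instead of whatever the chart defaults to, the chart values can be overridden at install time. A hedged sketch (mysqlRootPassword is the value name used by the deprecated stable/mysql chart; confirm with helm show values before relying on it):

helm show values myrepo/mysql                                # list the chart's configurable values
helm install db myrepo/mysql --set mysqlRootPassword=redhat  # override a value at install time
helm uninstall db                                            # remove the release when finished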

9、Probes

Creating and managing pods through a deployment is convenient and gives pods high availability. A deployment judges whether a pod is healthy from the state of the containers inside it (for a standalone pod, running crictl ps on the node and then crictl stop <container id> leaves the pod in a CrashLoopBackOff state on the master; for a deployment-managed pod, stopping the container the same way does not change the pod's status: the container is simply recreated and crictl ps shows a new container id). But what if the pod stays Running while a file inside it has been lost? A deployment cannot detect that. This is what probes are for: they test whether the pod is actually doing its job, and how the problem is handled depends on the probe type. There are three probe types: livenessProbe, readinessProbe and startupProbe. All three can coexist on one container, and startupProbe has the highest priority.
Official docs: https://kubernetes.io/zh-cn/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/
livenessProbe (liveness probe): runs in a loop after the container starts and fixes problems by rebuilding (the container inside the pod is recreated).
readinessProbe (readiness probe): also runs in a loop after the container starts; when it detects a problem nothing is restarted, the svc simply stops forwarding requests to this pod.
startupProbe (startup probe): only acts while the container is starting up; once it has completed (a single pass) it never runs again.
livenessProbe (liveness probe)
Fixes problems by rebuilding (the container inside the pod is recreated). Three probe mechanisms are available: exec command / httpGet / tcpSocket.

command (exec) method
[root@master ~]# mkdir /project
[root@master ~]# cd /project
[root@master project]# vim pod1.yaml
[root@master project]# cat pod1.yaml
apiVersion: v1
kind: Pod
metadata:
  labels:
    test: liveness
  name: liveness-exec
spec:
  containers:
  - name: liveness
    image: busybox
    imagePullPolicy: IfNotPresent
    args:
    - /bin/sh
    - -c
    - touch /tmp/healthy; sleep 30; rm -f /tmp/healthy; sleep 600
The container creates the file, deletes it 30 seconds later, sleeps another 600 seconds and then exits; combined with the container restart policy it is then restarted and keeps running.
Container restart policy restartPolicy (default Always)
Always: the container is restarted whenever it errors out or exits, no matter why (the other possible values are OnFailure and Never).
[root@master project]# kubectl apply -f pod1.yaml
pod/liveness-exec created
[root@master project]# kubectl get pod
NAME            READY   STATUS    RESTARTS   AGE
liveness-exec   1/1     Running   0          4s
[root@master project]# kubectl get pod -w    (watch continuously)
NAME            READY   STATUS    RESTARTS   AGE
liveness-exec   1/1     Running   0          11s
Or watch dynamically every 0.5 or 1 second:
[root@master project]# watch -n .5 'kubectl get pod'
[root@master project]# watch -n 1 'kubectl get pod'
The pod keeps running even though the file has already been deleted.
[root@master project]# kubectl exec -ti liveness-exec -- bash 
error: Internal error occurred: error executing command in container: failed to exec in container: failed to start exec "6053bd62931d4ca97ecf9039bdd0944e8c4383ca18cce4d42aa5830bd1de572c": OCI runtime exec failed: exec failed: unable to start container process: exec: "bash": executable file not found in $PATH: unknown
Note: this errors out because busybox does not ship bash; use /bin/sh instead.
[root@master project]# kubectl exec -ti liveness-exec -- /bin/sh
/ # ls /tmp
/ # exit
[root@master project]# kubectl exec -ti liveness-exec -- ls /tmp
So the file inside the container has been deleted, yet the pod status is still Running as if nothing were wrong.
Now modify the yaml again and add an exec-command liveness probe.
[root@master project]# vim pod1.yaml
[root@master project]# cat pod1.yaml
apiVersion: v1
kind: Pod
metadata:
  labels:
    test: liveness
  name: liveness-exec
spec:
  containers:
  - name: liveness
    image: busybox
    imagePullPolicy: IfNotPresent
    args:
    - /bin/sh
    - -c
    - touch /tmp/healthy; sleep 30; rm -f /tmp/healthy; sleep 600
    livenessProbe:
      exec:
        command:
        - cat
        - /tmp/healthy
      initialDelaySeconds: 5
      periodSeconds: 5

initialDelaySeconds: 5  do not probe during the first 5 seconds after startup
periodSeconds: 5  probe every 5 seconds
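Besides initialDelaySeconds and periodSeconds, a probe accepts a few more tunables. A minimal sketch (illustrative values, not what pod1.yaml uses):

    livenessProbe:
      exec:
        command: ["cat", "/tmp/healthy"]
      initialDelaySeconds: 5   # no probing during the first 5s after start
      periodSeconds: 5         # probe every 5s
      timeoutSeconds: 1        # each probe attempt times out after 1s (default 1)
      failureThreshold: 3      # act only after 3 consecutive failures (default 3)
      successThreshold: 1      # one success marks it healthy again (must be 1 for liveness)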
If the probed command succeeds it returns 0; a failure returns a non-zero code such as 1, and "command not found" returns 127, and so on.
[root@node1 ~]# echo 11
11
[root@node1 ~]# echo $?
0
[root@node1 ~]# cat aa
cat: aa: No such file or directory
[root@node1 ~]# echo $?
1
So if the probe command returns 0 the file is still there; if it returns 1 the file has been deleted.
[root@master project]# kubectl delete -f pod1.yaml
pod "liveness-exec" deleted
[root@master project]# kubectl apply -f pod1.yaml
pod/liveness-exec created
[root@master project]# kubectl exec -ti liveness-exec -- ls /tmp
healthy
After the pod is created, check the file in the container; 30 seconds later the file is deleted.
In another window keep checking the file: [root@master ~]# kubectl exec -ti liveness-exec -- ls /tmp
[root@master ~]# kubectl exec -ti liveness-exec -- ls /tmp
[root@master ~]# kubectl exec -ti liveness-exec -- ls /tmp
healthy
[root@master ~]# kubectl exec -ti liveness-exec -- ls /tmp
[root@master ~]# kubectl exec -ti liveness-exec -- ls /tmp
[root@master ~]# kubectl exec -ti liveness-exec -- ls /tmp
healthy
In this window keep watching the pod: [root@master project]# kubectl get pod -w
NAME            READY   STATUS    RESTARTS   AGE
liveness-exec   1/1     Running   0          25s
liveness-exec   1/1     Running   1 (1s ago)   77s
liveness-exec   1/1     Running   2 (1s ago)   2m32s
[root@master project]# kubectl get pod liveness-exec  -o wide
NAME            READY   STATUS    RESTARTS        AGE     IP               NODE    NOMINATED NODE   READINESS GATES
liveness-exec   1/1     Running   6 (2m13s ago)   9m44s   10.244.166.143   node1   <none>           <none>

The pod itself does not change, but the container id inside it does: the probe fixes the problem by rebuilding the container.

[root@node1 ~]# crictl ps
CONTAINER           IMAGE               CREATED              STATE               NAME                        ATTEMPT             POD ID              POD
cec44fc9f26e3       a416a98b71e22       About a minute ago   Running             liveness                    0                   d0fc724dfc17e       liveness-exec
[root@node1 ~]# crictl ps
CONTAINER           IMAGE               CREATED              STATE               NAME                        ATTEMPT             POD ID              POD
3d8f86816ce18       a416a98b71e22       About a minute ago   Running             liveness                    1                   d0fc724dfc17e       liveness-exec
[root@node1 ~]# crictl ps
CONTAINER           IMAGE               CREATED              STATE               NAME                        ATTEMPT             POD ID              POD
9e9f2a4b080c4       a416a98b71e22       38 seconds ago       Running             liveness                    2                   d0fc724dfc17e       liveness-exec
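The restarts can also be confirmed from the master through the pod's events (output not reproduced here):

kubectl describe pod liveness-exec    # look for "Liveness probe failed" and "Killing"/"Created" events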
httpGet method
[root@master project]# cat pod2.yaml
apiVersion: v1
kind: Pod
metadata:
  labels:
    test: liveness
  name: pod2
spec:
  containers:
  - name: pod2
    image: nginx
    imagePullPolicy: IfNotPresent
    livenessProbe:
      httpGet:
        path: /index.html
        port: 80
      initialDelaySeconds: 3
      periodSeconds: 3
Here / maps to /usr/share/nginx/html/, so /index.html means /usr/share/nginx/html/index.html.
[root@master project]# kubectl apply -f pod2.yaml
pod/pod2 created
[root@master project]# kubectl get pod -o wide
NAME   READY   STATUS    RESTARTS   AGE   IP               NODE    NOMINATED NODE   READINESS GATES
pod2   1/1     Running   0          4s    10.244.166.137   node1   <none>           <none>
Keep watching the pod status in this window: [root@master project]# kubectl get pod -w
NAME   READY   STATUS    RESTARTS   AGE
pod2   1/1     Running   0          9s
pod2   1/1     Running   1 (1s ago)   71s

In another window, enter the pod and delete the file:
[root@master ~]# kubectl exec -ti pod2 -- bash
root@pod2:/# cd /usr/share/nginx/html/
root@pod2:/usr/share/nginx/html# ls     (inside the pod, the index file is there)
50x.html  index.html
Delete the index file and wait a moment: the shell is kicked out automatically because the probe has detected the problem and the container is rebuilt.
root@pod2:/usr/share/nginx/html# rm -rf index.html 
root@pod2:/usr/share/nginx/html# ls
50x.html
root@pod2:/usr/share/nginx/html# command terminated with exit code 137
[root@master ~]# kubectl exec -ti pod2 -- bash
root@pod2:/# cd /usr/share/nginx/html/
root@pod2:/usr/share/nginx/html# ls     (checking again, the file is back)
50x.html  index.html
[root@master ~]# kubectl describe pod pod2    (the events explain why the container was restarted)
Events:
  Type     Reason     Age                    From               Message
  ----     ------     ----                   ----               -------
  Normal   Scheduled  8m39s                  default-scheduler  Successfully assigned default/pod2 to node1
  Normal   Pulled     7m29s (x2 over 8m37s)  kubelet            Container image "nginx" already present on machine
  Normal   Created    7m29s (x2 over 8m37s)  kubelet            Created container pod2
  Normal   Started    7m29s (x2 over 8m37s)  kubelet            Started container pod2
  Warning  Unhealthy  7m29s (x3 over 7m35s)  kubelet            Liveness probe failed: HTTP probe failed with statuscode: 404
  Normal   Killing    7m29s                  kubelet            Container pod2 failed liveness probe, will be restarted
tcpSocket method
[root@master project]# cat pod3.yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod3
  labels:
    app: goproxy
spec:
  containers:
  - name: nginx
    image: nginx
    imagePullPolicy: IfNotPresent
    livenessProbe:
      tcpSocket:
        port: 81
      initialDelaySeconds: 15
      periodSeconds: 20
The probe attempts a TCP three-way handshake to port 81, but nginx only listens on 80, so the probe keeps failing and the container is restarted over and over.
[root@master project]# kubectl apply -f pod3.yaml
pod/pod3 created
[root@master project]# kubectl get pod -w
NAME   READY   STATUS    RESTARTS   AGE
pod3   1/1     Running   0          14s
pod3   1/1     Running   1 (1s ago)   62s
pod3   1/1     Running   2 (1s ago)   2m2s
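Changing the probed port to the one nginx actually listens on stops the restart loop; only the probe block needs to change (a sketch):

    livenessProbe:
      tcpSocket:
        port: 80               # nginx listens on 80, so the TCP handshake succeeds
      initialDelaySeconds: 15
      periodSeconds: 20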

readinessProbe (readiness probe)

livenessProbe (liveness probe): fixes problems by rebuilding (the container inside the pod is recreated).
readinessProbe (readiness probe): detects the problem but restarts nothing; the svc simply stops forwarding requests to this pod.
[root@master ~]# kubectl create deployment web1 --image=nginx --dry-run=client -o yaml > web1.yaml
[root@master ~]# vim web1.yaml
[root@master ~]# cat web1.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  creationTimestamp: null
  labels:
    app: web1
  name: web1
spec:
  replicas: 3
  selector:
    matchLabels:
      app: web1
  strategy: {}
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: web1
    spec:
      containers:
      - image: nginx
        name: nginx
        resources: {}
        imagePullPolicy: IfNotPresent
        readinessProbe:
          httpGet:
            path: /index.html
            port: 80
          initialDelaySeconds: 3
          periodSeconds: 3
status: {}
[root@master ~]# kubectl apply -f web1.yaml
deployment.apps/web1 created
[root@master ~]# kubectl get pod -o wide
NAME                   READY   STATUS              RESTARTS   AGE   IP       NODE    NOMINATED NODE   READINESS GATES
web1-8579c46f4-2prvq   0/1     ContainerCreating   0          3s    <none>   node1   <none>           <none>
web1-8579c46f4-84p4d   0/1     ContainerCreating   0          3s    <none>   node1   <none>           <none>
web1-8579c46f4-vlnzr   0/1     ContainerCreating   0          3s    <none>   node2   <none>           <none>
[root@master ~]# kubectl exec -ti web1-8579c46f4-2prvq -- bash
root@web1-8579c46f4-2prvq:/# echo host01 > /usr/share/nginx/html/host.html
root@web1-8579c46f4-2prvq:/# exit
exit
[root@master ~]# kubectl exec -ti web1-8579c46f4-84p4d -- bash
root@web1-8579c46f4-84p4d:/# echo host02 > /usr/share/nginx/html/host.html
root@web1-8579c46f4-84p4d:/# exit
exit
[root@master ~]# kubectl exec -ti web1-8579c46f4-vlnzr -- bash
root@web1-8579c46f4-vlnzr:/# echo host03 > /usr/share/nginx/html/host.html
root@web1-8579c46f4-vlnzr:/# exit
exit
Create a host.html file in each pod, with contents host01 / host02 / host03 respectively.
Create an svc; traffic sent to it is forwarded to the backend pods.
[root@master ~]# kubectl get svc
NAME         TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
kubernetes   ClusterIP   10.96.0.1    <none>        443/TCP   7d20h
[root@master ~]# kubectl expose deployment web1 --port 80 --target-port 80
service/web1 exposed
[root@master ~]# kubectl get svc
NAME         TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)   AGE
kubernetes   ClusterIP   10.96.0.1        <none>        443/TCP   7d20h
web1         ClusterIP   10.107.205.110   <none>        80/TCP    22s
An svc is created with IP 10.107.205.110; test access from a node inside the cluster. (Normally requests are distributed across all the pods, so each host file's content shows up.)
[root@master ~]# curl -s 10.107.205.110/host.html
host02
[root@master ~]# curl -s 10.107.205.110/host.html
host01
[root@master ~]# curl -s 10.107.205.110/host.html
host01
[root@master ~]# curl -s 10.107.205.110/host.html
host02
[root@master ~]# curl -s 10.107.205.110/host.html
host03
Go to the host01 pod and delete index.html. This will affect whether host.html is still served from it.
[root@master ~]# kubectl exec -ti web1-8579c46f4-2prvq -- bash
root@web1-8579c46f4-2prvq:/# cd /usr/share/nginx/html/
root@web1-8579c46f4-2prvq:/usr/share/nginx/html# ls
50x.html  host.html  index.html
root@web1-8579c46f4-2prvq:/usr/share/nginx/html# rm -rf index.html
root@web1-8579c46f4-2prvq:/usr/share/nginx/html# ls
50x.html  host.html
root@web1-8579c46f4-2prvq:/usr/share/nginx/html# exit
exit
Test again: requests are no longer forwarded to host01.
[root@master ~]#  curl -s 10.107.205.110/host.html
host03
[root@master ~]#  curl -s 10.107.205.110/host.html
host02
[root@master ~]#  curl -s 10.107.205.110/host.html
host02
[root@master ~]#  curl -s 10.107.205.110/host.html
host02
[root@master ~]#  curl -s 10.107.205.110/host.html
host03
[root@master ~]#  curl -s 10.107.205.110/host.html
host02
[root@master ~]#  curl -s 10.107.205.110/host.html
host02
The readiness probe defined in the yaml checks index.html; since that file no longer exists, the pod is considered not ready.
But because it is a readiness probe, the pod is not restarted, so the problem is not fixed automatically.
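One way to confirm this behaviour is to check the READY column and the Service endpoints (commands only, output omitted):

kubectl get pod -l app=web1          # the pod whose index.html was deleted shows READY 0/1
kubectl get endpoints web1           # that pod's IP is no longer listed behind the svc
kubectl exec -ti web1-8579c46f4-2prvq -- touch /usr/share/nginx/html/index.html
                                     # recreating the file (even empty) lets the GET return 200 and the pod turns Ready again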

startupProbe (startup probe)
The startup probe only acts while the container is starting; once it completes (a single pass) it never runs again.
Initialization and dependencies: a container may need extra time after it starts to finish initializing, for example establishing database connections, loading configuration files or running other setup tasks. A startup probe ensures the container does not receive traffic before that initialization is complete, and can also gate on dependencies such as a database or another service being ready.
Avoiding unnecessary restarts: early in startup the application may not be fully up, or its dependencies may not be ready yet. Sending traffic to the container at that point could make the application fail or behave incompletely. A startup probe holds traffic back until the container is fully started and ready, avoiding pointless restarts.
Self-healing: some applications fail during startup. Without a proper probe and timely recovery, they may keep failing or cause other problems. A startup probe lets Kubernetes detect startup problems automatically and apply the configured restart policy.
In short, a startup probe ensures the container comes up cleanly and is ready before it is put to work; it is an important mechanism for improving reliability and self-healing.

apiVersion: v1
kind: Pod
metadata:
  labels:
    test: liveness
  name: pod2
spec:
  containers:
  - name: pod2
    image: nginx
    imagePullPolicy: IfNotPresent
    livenessProbe:
      httpGet:
        path: /index.html
        port: 80
      initialDelaySeconds: 3
      periodSeconds: 3

    startupProbe:
      httpGet:
        path: /healthz
        port: liveness-port
      failureThreshold: 30
      periodSeconds: 10
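The snippet above refers to a named port, liveness-port, that this pod never defines, and nginx does not serve /healthz by default. A self-contained sketch using those names (illustrative only):

apiVersion: v1
kind: Pod
metadata:
  name: startup-demo
spec:
  containers:
  - name: web
    image: nginx
    imagePullPolicy: IfNotPresent
    ports:
    - name: liveness-port      # the named port both probes refer to
      containerPort: 80
    startupProbe:
      httpGet:
        path: /index.html      # stand-in for /healthz; nginx serves it by default
        port: liveness-port
      failureThreshold: 30     # allow up to 30 * 10 = 300s for startup
      periodSeconds: 10
    livenessProbe:
      httpGet:
        path: /index.html
        port: liveness-port
      periodSeconds: 3
While the startup probe is still running, the liveness and readiness probes are held back; once it succeeds it never runs again.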

10、Job and CronJob

A pod run the traditional way, whether managed by a deployment or created by hand, keeps running once it is created: the pod runs a daemon process and, as long as nothing goes wrong, it runs indefinitely. Sometimes, though, you just want to do something once, such as run a test or execute a script that finishes quickly. That is what Job and CronJob are for.
Official docs: https://kubernetes.io/zh-cn/docs/concepts/workloads/controllers/job/#job-termination-and-cleanup
A Job is a one-off task; a CronJob is a periodic (scheduled) task.

[root@master ~]# mkdir /job
[root@master ~]# cd /job
[root@master job]# kubectl create job job1 --image busybox --dry-run=client -o yaml -- sh -c "echo 123" > job1.yaml
[root@master job]# cat job1.yaml
apiVersion: batch/v1
kind: Job
metadata:
  creationTimestamp: null
  name: job1
spec:
  template:
    metadata:
      creationTimestamp: null
    spec:
      containers:
      - command:
        - sh
        - -c
        - echo 123
        image: busybox
        name: job1
        resources: {}
      restartPolicy: Never
status: {}
[root@master job]# kubectl apply -f job1.yaml
job.batch/job1 created
[root@master job]# kubectl get pod
NAME         READY   STATUS      RESTARTS   AGE
job1-hzzqd   0/1     Completed   0          5s
Once the pod finishes it is not restarted, because pods created by a Job cannot use the Always restart policy (only Never or OnFailure). A Job is one-off: when the work is done, it is done.

Other Job parameters
1. Parallel Jobs with a work queue: .spec.parallelism sets how many pods run at once; if .spec.completions (how many pods must complete) is not set, it defaults to the value of .spec.parallelism. parallelism never exceeds completions.
The pods must coordinate with one another, or rely on an external service, to decide which work item each of them handles; for example, each pod may take up to N items from the work queue.
Each pod can independently tell whether the other pods are finished and therefore whether the whole Job is complete.
2. Jobs with a fixed completion count: any Job whose .spec.completions is non-null can set a completion mode in .spec.completionMode. completions is the number of pods that must finish successfully (status Completed) for the Job to be considered done.
NonIndexed (the default): the Job is complete once the number of successfully finished pods reaches .spec.completions; every completion is independent and interchangeable. When .spec.completions is null, the Job is implicitly treated as NonIndexed.
3. Pod backoff failure policy: .spec.backoffLimit is the number of retries before the Job is marked failed (a sketch of a Job that exhausts its backoffLimit follows below).
In some situations you want the Job to fail outright after a few retries, because repeated failures usually mean a configuration error.
To achieve that, set .spec.backoffLimit to the number of retries allowed before the Job is considered failed.
The default is 6. Failed pods are recreated by the Job controller with an exponentially growing back-off delay (10s, 20s, 40s, ...) capped at 6 minutes.
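A quick way to see backoffLimit in action is a Job whose command always fails (an illustrative sketch, not part of the lab above):

apiVersion: batch/v1
kind: Job
metadata:
  name: job-fail
spec:
  backoffLimit: 2              # give up after 2 retries
  template:
    spec:
      containers:
      - name: job-fail
        image: busybox
        imagePullPolicy: IfNotPresent
        command: ["sh", "-c", "exit 1"]   # always fails
      restartPolicy: Never
After the limit is exceeded the Job is marked Failed and no further pods are created.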

[root@master job]# cp job1.yaml job2.yaml
[root@master job]# vim job2.yaml
[root@master job]# cat job2.yaml
apiVersion: batch/v1
kind: Job
metadata:
  creationTimestamp: null
  name: job2
spec:
  parallelism: 3
  completions: 6
  backoffLimit: 2
  template:
    metadata:
      creationTimestamp: null
    spec:
      containers:
      - command:
        - sh
        - -c
        - echo 123
        image: busybox
        imagePullPolicy: IfNotPresent
        name: job2
        resources: {}
      restartPolicy: Never
status: {}

  parallelism: 3  ---how many pods run in parallel
  completions: 6  ---how many pods must complete
  backoffLimit: 2  ---how many retries on failure
[root@master job]# kubectl apply -f job2.yaml
job.batch/job2 created
[root@master job]# kubectl get pod
NAME         READY   STATUS      RESTARTS   AGE
job2-86lh2   0/1     Completed   0          5s
job2-v7lqm   0/1     Completed   0          5s
job2-xdw4g   0/1     Completed   0          5s
[root@master job]# kubectl get pod
NAME         READY   STATUS      RESTARTS   AGE
job2-6qrsl   0/1     Completed   0          4s
job2-86lh2   0/1     Completed   0          9s
job2-jkkd5   0/1     Completed   0          4s
job2-r2wd5   0/1     Completed   0          4s
job2-v7lqm   0/1     Completed   0          9s
job2-xdw4g   0/1     Completed   0          9s

Also note the Job restartPolicy:
restartPolicy specifies when the container should be restarted. For a Job it can only be Never or OnFailure; other controllers (such as a Deployment) can use Always.
Never: if the task has not completed, a new pod is created and run until the Job completes, so several pods may be produced.
OnFailure: if the pod has not completed, the pod itself is restarted until the Job completes.
Periodic scheduled tasks: CronJob, abbreviated cj.

[root@master job]# kubectl get cj
No resources found in default namespace.
[root@master job]# kubectl create cj --help
As in Linux cron, use * for any time unit you do not care about.
minute   hour   day-of-month   month   day-of-week
*/1 * * * *  means run once every minute.
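A few more schedule strings for reference (standard five-field cron syntax):

*/5 * * * *     # every 5 minutes
0 2 * * *       # every day at 02:00
0 0 * * 0       # every Sunday at midnight
30 1 1 * *      # 01:30 on the 1st day of every month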
[root@master job]# kubectl create cronjob my-job --image=busybox --schedule="*/1 * * * *" --dry-run=client -o yaml -- sh -c "echo \$(date \"+%Y-%m-%d %H:%M:%S\")" > myjob3.yaml
[root@master job]# vim myjob3.yaml
[root@master job]# cat myjob3.yaml
apiVersion: batch/v1
kind: CronJob
metadata:
  creationTimestamp: null
  name: my-job
spec:
  jobTemplate:
    metadata:
      creationTimestamp: null
      name: my-job
    spec:
      template:
        metadata:
          creationTimestamp: null
        spec:
          containers:
          - command:
            - sh
            - -c
            - echo $(date "+%Y-%m-%d %H:%M:%S")
            image: busybox
            imagePullPolicy: IfNotPresent
            name: my-job
            resources: {}
          restartPolicy: OnFailure
  schedule: '*/1 * * * *'
status: {}
[root@master job]# kubectl apply -f myjob3.yaml
cronjob.batch/my-job created
[root@master job]# kubectl get cj
NAME     SCHEDULE      SUSPEND   ACTIVE   LAST SCHEDULE   AGE
my-job   */1 * * * *   False     1        3s              9s
[root@master job]# kubectl get pod
NAME                    READY   STATUS      RESTARTS   AGE
my-job-28316573-j2kwn   0/1     Completed   0          14s
[root@master job]# kubectl logs my-job-28316573-j2kwn
2023-11-03 06:53:01
[root@master job]# kubectl get pod
NAME                    READY   STATUS      RESTARTS   AGE
my-job-28316573-j2kwn   0/1     Completed   0          66s
my-job-28316574-fcb59   0/1     Completed   0          6s
[root@master job]# kubectl logs my-job-28316574-fcb59
2023-11-03 06:54:01
[root@master job]# kubectl get pod
NAME                    READY   STATUS      RESTARTS   AGE
my-job-28316576-gkbq6   0/1     Completed   0          2m23s
my-job-28316577-mmg7h   0/1     Completed   0          83s
my-job-28316578-s964g   0/1     Completed   0          23s
[root@master job]# kubectl get pod
NAME                    READY   STATUS      RESTARTS   AGE
my-job-28316579-8h46d   0/1     Completed   0          2m20s
my-job-28316580-wsjrw   0/1     Completed   0          80s
my-job-28316581-kjzr8   0/1     Completed   0          20s
Note: why does the CronJob only keep 3 completed pods?
Official docs: https://kubernetes.io/zh-cn/docs/concepts/workloads/controllers/cron-jobs/
Jobs history limits
The .spec.successfulJobsHistoryLimit and .spec.failedJobsHistoryLimit fields are optional.
They specify how many completed and how many failed Jobs should be kept; the defaults are 3 and 1 respectively.
Setting a limit to 0 means that kind of Job is not kept at all once finished.
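To keep more (or fewer) finished Jobs, set these limits on the CronJob spec. A partial sketch (only the relevant fields shown):

spec:
  schedule: '*/1 * * * *'
  successfulJobsHistoryLimit: 5   # keep the 5 most recent successful Jobs (default 3)
  failedJobsHistoryLimit: 2       # keep the 2 most recent failed Jobs (default 1)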
Automatic clean-up of finished Jobs: with ttlSecondsAfterFinished: 60 in the Job template, a finished Job (and its pod) is deleted 60 seconds after it completes, so by the time the next pod is created the previous one is already gone.
[root@master job]# cp myjob3.yaml myjob4.yaml
[root@master job]# vim myjob4.yaml
[root@master job]# cat myjob4.yaml
apiVersion: batch/v1
kind: CronJob
metadata:
  creationTimestamp: null
  name: my-job
spec:
  jobTemplate:
    metadata:
      creationTimestamp: null
      name: my-job
    spec:
      ttlSecondsAfterFinished: 60
      template:
        metadata:
          creationTimestamp: null
        spec:
          containers:
          - command:
            - sh
            - -c
            - echo $(date "+%Y-%m-%d %H:%M:%S")
            image: busybox
            imagePullPolicy: IfNotPresent
            name: my-job
            resources: {}
          restartPolicy: OnFailure
  schedule: '*/1 * * * *'
status: {}
[root@master job]# kubectl apply -f myjob4.yaml
cronjob.batch/my-job created
[root@master job]# kubectl get cj
NAME     SCHEDULE      SUSPEND   ACTIVE   LAST SCHEDULE   AGE
my-job   */1 * * * *   False     0        <none>          10s
[root@master job]# kubectl get pod   (just created, be patient and wait a moment)
No resources found in default namespace.
[root@master job]# kubectl get pod
No resources found in default namespace.
[root@master job]# kubectl get pod
NAME                    READY   STATUS      RESTARTS   AGE
my-job-28316589-wbg5l   0/1     Completed   0          13s
[root@master job]# kubectl logs my-job-28316589-wbg5l
2023-11-03 07:09:01
Wait another 60 seconds: a new pod has been created and the previous one has been deleted.
[root@master job]# kubectl get pod
NAME                    READY   STATUS      RESTARTS   AGE
my-job-28316590-rff5m   0/1     Completed   0          10s
[root@master job]# kubectl logs my-job-28316590-rff5m
2023-11-03 07:10:01
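A few other CronJob spec fields that often come up, shown as a sketch (none of them are set in myjob4.yaml):

spec:
  concurrencyPolicy: Forbid       # skip the new run if the previous one is still running (default Allow)
  startingDeadlineSeconds: 60     # a run may start at most 60s late, otherwise it counts as missed
  suspend: true                   # pause scheduling without deleting the CronJob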