I. Deploying a highly available Kubernetes v1.27.x environment from binaries
Environment planning:
Hostname | IP address | Role | OS |
---|---|---|---|
k8s-deployer | 172.18.10.121 | deployment node | CentOS 7.7 |
k8s-master1 | 172.18.10.122 | master node | CentOS 7.7 |
k8s-master2 | 172.18.10.123 | master node | CentOS 7.7 |
k8s-node1 | 172.18.10.124 | worker node | CentOS 7.7 |
k8s-node2 | 172.18.10.125 | worker node | CentOS 7.7 |
k8s-etcd1 | 172.18.10.126 | etcd node | CentOS 7.7 |
k8s-etcd2 | 172.18.10.127 | etcd node | CentOS 7.7 |
k8s-etcd3 | 172.18.10.128 | etcd node | CentOS 7.7 |
k8s-ha | 172.18.10.129 | load balancer | CentOS 7.7 |
k8s-harbor | 172.18.10.130 | Harbor registry | CentOS 7.7 |
1. Deploy the load balancer node
[root@k8s-ha Python-3.6.8]# yum install keepalived -y
[root@k8s-ha Python-3.6.8]# vim /etc/keepalived/keepalived.conf
! Configuration File for keepalived
global_defs {
notification_email {
acassen
}
notification_email_from Alexandre.Cassen@firewall.loc
smtp_server 192.168.200.1
smtp_connect_timeout 30
router_id LVS_DEVEL
}
vrrp_instance VI_1 {
state MASTER
interface ens33
garp_master_delay 10
smtp_alert
virtual_router_id 55
priority 100
advert_int 1
authentication {
auth_type PASS
auth_pass 123456
}
virtual_ipaddress {
172.18.10.198 dev ens33 label ens33:0
172.18.10.199 dev ens33 label ens33:1
172.18.10.200 dev ens33 label ens33:2
}
}
[root@k8s-ha Python-3.6.8]# systemctl restart keepalived.service
[root@k8s-ha Python-3.6.8]# systemctl enable keepalived.service
# Verify the VIPs
[root@k8s-ha Python-3.6.8]# ip a s ens33
[root@k8s-ha Python-3.6.8]# ping 172.18.10.198
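A quick loop (a sketch, assuming the three VIP addresses configured above) to confirm each VIP answers locally:
# Check all three keepalived VIPs in one pass:
for vip in 172.18.10.198 172.18.10.199 172.18.10.200; do
  ping -c 1 -W 1 "$vip" >/dev/null && echo "$vip up" || echo "$vip DOWN"
done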
# Deploy haproxy
[root@k8s-ha ~]# yum install -y haproxy
[root@k8s-ha Python-3.6.8]# vim /etc/haproxy/haproxy.cfg
# Append at the end of the file
listen k8s-api-6443
bind 172.18.10.198:6443 # VIP and port to listen on; connections are forwarded to the servers below
mode tcp # TCP mode
server server1 172.18.10.122:6443 check inter 3s fall 3 rise 3 # forward to master1's apiserver, with a health check every 3s
server server2 172.18.10.123:6443 check inter 3s fall 3 rise 3 # forward to master2's apiserver, with a health check every 3s
[root@k8s-ha Python-3.6.8]# systemctl restart haproxy.service
[root@k8s-ha Python-3.6.8]# systemctl enable haproxy.service
# Verify the port is listening
[root@k8s-ha Python-3.6.8]# ss -tnl |grep 6443
LISTEN 0 128 172.18.10.198:6443 *:*
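Once the masters are deployed (step 04 below), forwarding through the VIP can be spot-checked; this is a sketch, and any TLS-level response (even a 401/403 JSON body, since we pass no client certificate) proves the TCP path works:
# Probe the apiserver through the VIP:
curl -k --max-time 3 https://172.18.10.198:6443/healthz ; echo
# Or just test the TCP handshake:
timeout 3 bash -c '</dev/tcp/172.18.10.198/6443' && echo "VIP:6443 reachable"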
2. Deploy the Harbor node
[root@k8s-harbor docker-install]# bash runtime-install.sh docker
[root@k8s-harbor apps]# ls
9186868_harbor.linuxarchitect.io_nginx.zip harbor-offline-installer-v2.5.6.tgz
[root@k8s-harbor apps]# tar -xf harbor-offline-installer-v2.5.6.tgz
# Unpack the certificates
[root@k8s-harbor apps]# mkdir ssl
[root@k8s-harbor apps]# mv 9186868_harbor.linuxarchitect.io_nginx.zip ssl/
[root@k8s-harbor ssl]# unzip 9186868_harbor.linuxarchitect.io_nginx.zip
[root@k8s-harbor ssl]# ls
9186868_harbor.linuxarchitect.io_nginx.zip harbor.linuxarchitect.io.key harbor.linuxarchitect.io.pem
[root@k8s-harbor harbor]# cd /apps/harbor
[root@k8s-harbor harbor]# mv harbor.yml.tmpl harbor.yml
[root@k8s-harbor harbor]# vim harbor.yml
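The fields below are the ones typically edited before running install.sh; the hostname matches this lab's domain, while the certificate paths are assumptions based on where the ZIP was unpacked above:
# harbor.yml (excerpt; adjust paths/password to your environment):
# hostname: harbor.linuxarchitect.io
# https:
#   port: 443
#   certificate: /apps/ssl/harbor.linuxarchitect.io.pem
#   private_key: /apps/ssl/harbor.linuxarchitect.io.key
# harbor_admin_password: <your admin password>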
# Install Harbor; --with-trivy also installs the Trivy scanner
[root@k8s-harbor harbor]# ./install.sh --with-trivy
# On Windows, add a hosts-file entry: 172.18.10.130 harbor.linuxarchitect.io
# Browse to harbor.linuxarchitect.io to verify
3. Deploy etcd
1) Steps required on every node
# Sync clocks on every node: crontab -e, then add:
*/5 * * * * /usr/bin/ntpdate time1.aliyun.com &> /dev/null && hwclock -w
# Upgrade the kernel
# Import the ELRepo GPG key
rpm --import https://www.elrepo.org/RPM-GPG-KEY-elrepo.org
# Install the ELRepo release package
rpm -Uvh http://www.elrepo.org/elrepo-release-7.0-3.el7.elrepo.noarch.rpm
# Load the elrepo-kernel repository metadata
yum --disablerepo=\* --enablerepo=elrepo-kernel repolist
# List the available kernel packages
yum --disablerepo=\* --enablerepo=elrepo-kernel list kernel*
# Install the long-term-support kernel
yum --disablerepo=\* --enablerepo=elrepo-kernel install -y kernel-lt.x86_64
# Remove the old kernel tool packages
yum remove kernel-tools-libs.x86_64 kernel-tools.x86_64 -y
# Install the matching new tool packages
yum --disablerepo=\* --enablerepo=elrepo-kernel install -y kernel-lt-tools.x86_64
# Check the default boot order
awk -F\' '$1=="menuentry " {print $2}' /etc/grub2.cfg
CentOS Linux (4.4.183-1.el7.elrepo.x86_64) 7 (Core)
CentOS Linux (3.10.0-327.10.1.el7.x86_64) 7 (Core)
CentOS Linux (0-rescue-c52097a1078c403da03b8eddeac5080b) 7 (Core)
# Menu entries are numbered from 0, and the new kernel is inserted at the top (the 4.4 kernel is at position 0, the old 3.10 kernel at 1), so select 0.
grub2-set-default 0
# Reboot and verify
reboot
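After the reboot, a quick check that the new kernel is actually active:
uname -r   # should print the kernel-lt version installed above, not 3.10.0-xxx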
# Install ansible (on the deployer)
[root@k8s-deployer ~]# rpm -Uvh http://mirrors.ustc.edu.cn/epel/epel-release-latest-7.noarch.rpm
[root@k8s-deployer ~]# yum install epel-release -y
[root@k8s-deployer ~]# yum install -y ansible
# Configure passwordless SSH login
[root@k8s-deployer ~]# ssh-keygen -t rsa-sha2-512 -b 4096
[root@k8s-deployer ~]# yum -y install sshpass
[root@k8s-deployer ~]# cat key-scp.sh
#!/bin/bash
# Target host list
IP="
172.18.10.122
172.18.10.123
172.18.10.124
172.18.10.125
172.18.10.126
172.18.10.127
172.18.10.128
172.18.10.129
172.18.10.130
"
REMOTE_PORT="22"
REMOTE_USER="root"
REMOTE_PASS="123456"
for REMOTE_HOST in ${IP};do
#Add the remote host's key to known_hosts
ssh-keyscan -p "${REMOTE_PORT}" "${REMOTE_HOST}" >> ~/.ssh/known_hosts
#Push the public key via sshpass for passwordless login (optionally create a python3 symlink)
sshpass -p "${REMOTE_PASS}" ssh-copy-id "${REMOTE_USER}@${REMOTE_HOST}"
#ssh ${REMOTE_HOST} ln -sv /usr/bin/python3 /usr/bin/python
echo "${REMOTE_HOST} passwordless login configured!"
done
[root@k8s-deployer ~]# bash key-scp.sh
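A sketch to spot-check that passwordless login now works on every host in the list:
for ip in 172.18.10.122 172.18.10.123 172.18.10.124 172.18.10.125 172.18.10.126 172.18.10.127 172.18.10.128 172.18.10.129 172.18.10.130; do
  ssh -o BatchMode=yes root@${ip} hostname || echo "${ip} FAILED"
done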
# Install git and download the kubeasz project
[root@k8s-deployer ~]# yum install -y git
[root@k8s-deployer ~]# export release=3.6.1
[root@k8s-deployer ~]# wget https://github.com/easzlab/kubeasz/releases/download/${release}/ezdown
[root@k8s-deployer ~]# chmod +x ./ezdown
# Install docker manually (ezdown installs it automatically if missing)
[root@k8s-deployer src]# tar -xf runtime-docker24.0.2-containerd1.6.21-binary-install.tar.gz
[root@k8s-deployer src]# bash runtime-install.sh docker
[root@k8s-deployer ~]# ./ezdown -D
[root@k8s-deployer ~]# cd /etc/kubeasz/
# Create a new cluster definition
[root@k8s-deployer kubeasz]# ./ezctl new k8s-cluster2
[root@k8s-deployer kubeasz]# cat clusters/k8s-cluster2/hosts
# 'etcd' cluster should have odd member(s) (1,3,5,...)
[etcd]
172.18.10.126
172.18.10.127
172.18.10.128
# master node(s), set unique 'k8s_nodename' for each node
# CAUTION: 'k8s_nodename' must consist of lower case alphanumeric characters, '-' or '.',
# and must start and end with an alphanumeric character
[kube_master]
172.18.10.122 k8s_nodename='172.18.10.122'
172.18.10.123 k8s_nodename='172.18.10.123'
# work node(s), set unique 'k8s_nodename' for each node
# CAUTION: 'k8s_nodename' must consist of lower case alphanumeric characters, '-' or '.',
# and must start and end with an alphanumeric character
[kube_node]
172.18.10.124 k8s_nodename='172.18.10.124'
172.18.10.125 k8s_nodename='172.18.10.125'
# [optional] harbor server, a private docker registry
# 'NEW_INSTALL': 'true' to install a harbor server; 'false' to integrate with existed one
[harbor]
#172.18.10.8 NEW_INSTALL=false
# [optional] loadbalance for accessing k8s from outside
[ex_lb] # external load balancer, deployed separately (see step 1 above)
#172.18.10.6 LB_ROLE=backup EX_APISERVER_VIP=172.18.10.250 EX_APISERVER_PORT=8443
#172.18.10.7 LB_ROLE=master EX_APISERVER_VIP=172.18.10.250 EX_APISERVER_PORT=8443
# [optional] ntp server for the cluster
[chrony] # NTP time server
#172.18.10.1
[all:vars]
# --------- Main Variables ---------------
# Secure port for apiservers
SECURE_PORT="6443"
# Cluster container-runtime supported: docker, containerd
# if k8s version >= 1.24, docker is not supported
# container runtime
CONTAINER_RUNTIME="containerd"
# Network plugins supported: calico, flannel, kube-router, cilium, kube-ovn
#
CLUSTER_NETWORK="calico"
# Service proxy mode of kube-proxy: 'iptables' or 'ipvs'
# kube-proxy service mode
PROXY_MODE="ipvs"
# K8S Service CIDR, not overlap with node(host) networking
# Service address range
SERVICE_CIDR="10.100.0.0/16"
# Cluster CIDR (Pod CIDR), not overlap with node(host) networking
# Pod address range
CLUSTER_CIDR="10.200.0.0/16"
# NodePort port range
NODE_PORT_RANGE="30000-32767"
# Cluster DNS domain suffix
CLUSTER_DNS_DOMAIN="cluster.local"
# -------- Additional Variables (don't change the default value right now) ---
# Binaries directory
bin_dir="/usr/local/bin"
# Deploy directory (kubeasz workspace)
base_dir="/etc/kubeasz"
# Directory for a specific cluster
cluster_dir="{{ base_dir }}/clusters/k8s-cluster2"
# CA and other components' cert/key directory (generated certificates)
ca_dir="/etc/kubernetes/ssl"
# Default 'k8s_nodename' is empty (the node IP is then used as the node name)
k8s_nodename=''
# Default python interpreter
ansible_python_interpreter=/usr/bin/python3
[root@k8s-deployer kubeasz]# cat clusters/k8s-cluster2/config.yml
############################
# prepare
############################
# Optionally install system packages offline (offline|online)
INSTALL_SOURCE: "online"
# Optional OS security hardening, see github.com/dev-sec/ansible-collection-hardening
OS_HARDEN: false
############################
# role:deploy
############################
# default: ca will expire in 100 years (CA and certificate lifetimes)
# default: certs issued by the ca will expire in 50 years
CA_EXPIRY: "876000h"
CERT_EXPIRY: "438000h"
# force to recreate CA and other certs, not suggested to set 'true'
CHANGE_CA: false
# kubeconfig settings
CLUSTER_NAME: "cluster1"
CONTEXT_NAME: "context-{{ CLUSTER_NAME }}"
# k8s version
K8S_VER: "1.27.2"
# set unique 'k8s_nodename' for each node, if not set(default:'') ip add will be used
# CAUTION: 'k8s_nodename' must consist of lower case alphanumeric characters, '-' or '.',
# and must start and end with an alphanumeric character (e.g. 'example.com'),
# regex used for validation is '[a-z0-9]([-a-z0-9]*[a-z0-9])?(\.[a-z0-9]([-a-z0-9]*[a-z0-9])?)*'
K8S_NODENAME: "{%- if k8s_nodename != '' -%} \
{{ k8s_nodename|replace('_', '-')|lower }} \
{%- else -%} \
{{ inventory_hostname }} \
{%- endif -%}"
############################
# role:etcd
############################
# A separate WAL directory avoids disk I/O contention and improves performance
ETCD_DATA_DIR: "/var/lib/etcd"
ETCD_WAL_DIR: ""
############################
# role:runtime [containerd,docker]
############################
# ------------------------------------------- containerd
# [.] enable registry mirrors
ENABLE_MIRROR_REGISTRY: true
# [containerd] sandbox (pause) image; change to your own registry address and remember to push the pause image there
SANDBOX_IMAGE: "harbor.linuxarchitect.io/baseimages/pause:3.9"
# [containerd] persistent storage directory
CONTAINERD_STORAGE_DIR: "/var/lib/containerd"
# ------------------------------------------- docker
# [docker] container storage directory
DOCKER_STORAGE_DIR: "/var/lib/docker"
# [docker] enable the remote RESTful API
ENABLE_REMOTE_API: false
# [docker] trusted insecure (HTTP) registries
INSECURE_REG:
- "http://easzlab.io.local:5000"
- "https://{{ HARBOR_REGISTRY }}"
############################
# role:kube-master
############################
# SANs for the master certificates; multiple IPs and domains can be added (e.g. a public IP, the VIP, extra domains)
MASTER_CERT_HOSTS:
- "172.18.10.198"
- "api.linuxarchitect.io"
- "api.magedu.net"
#- "www.test.com"
# Pod subnet mask length per node (caps how many pod IPs each node can allocate)
# If flannel runs with --kube-subnet-mgr, it reads this value to assign per-node pod subnets
# https://github.com/coreos/flannel/issues/847
NODE_CIDR_LEN: 24
############################
# role:kube-node
############################
# kubelet root directory
KUBELET_ROOT_DIR: "/var/lib/kubelet"
# max pods per node
MAX_PODS: 200
# Resources reserved for kube components (kubelet, kube-proxy, dockerd, etc.)
# see templates/kubelet-config.yaml.j2 for the values
KUBE_RESERVED_ENABLED: "no"
# Upstream does not recommend enabling system-reserved casually, unless long-term monitoring
# tells you the system's real resource usage; reservations may need to grow over time (values in templates/kubelet-config.yaml.j2)
# The defaults assume a 4c/8g VM with a minimal OS install; raise them on high-end physical machines
# Also, apiserver and friends briefly spike during cluster installation, so reserve at least 1 GB of memory
SYS_RESERVED_ENABLED: "no"
############################
# role:network [flannel,calico,cilium,kube-ovn,kube-router]
############################
# ------------------------------------------- flannel
# [flannel] backend: "host-gw", "vxlan", etc.
FLANNEL_BACKEND: "vxlan"
DIRECT_ROUTING: false
# [flannel]
flannel_ver: "v0.21.4"
# ------------------------------------------- calico
# [calico] IPIP tunnel mode options: [Always, CrossSubnet, Never]; across subnets use Always or CrossSubnet
# (on public clouds Always is simplest; otherwise the cloud's network configuration must be adjusted accordingly).
# CrossSubnet is a tunnel + BGP routing hybrid that can improve performance; within a single subnet, Never is sufficient.
CALICO_IPV4POOL_IPIP: "Always"
# [calico] host IP used by calico-node; BGP peers connect over this address; set manually or auto-detect
IP_AUTODETECTION_METHOD: "can-reach={{ groups['kube_master'][0] }}"
# [calico] network backend: bird, vxlan, none
CALICO_NETWORKING_BACKEND: "bird"
# [calico] whether to use route reflectors
# recommended for clusters with more than ~50 nodes
CALICO_RR_ENABLED: false
# CALICO_RR_NODES: the route reflector nodes; defaults to the cluster's master nodes if unset
# CALICO_RR_NODES: ["192.168.1.1", "192.168.1.2"]
CALICO_RR_NODES: []
# [calico] supported calico versions: ["3.19", "3.23"]
calico_ver: "v3.24.6"
# [calico] calico major.minor version
calico_ver_main: "{{ calico_ver.split('.')[0] }}.{{ calico_ver.split('.')[1] }}"
# ------------------------------------------- cilium
# [cilium] image version
cilium_ver: "1.13.2"
cilium_connectivity_check: true
cilium_hubble_enabled: false
cilium_hubble_ui_enabled: false
# ------------------------------------------- kube-ovn
# [kube-ovn] offline image tarball version
kube_ovn_ver: "v1.11.5"
# ------------------------------------------- kube-router
# [kube-router] public clouds usually impose restrictions, so keep ipinip always on; on your own network "subnet" can be used
OVERLAY_TYPE: "full"
# [kube-router] NetworkPolicy support switch
FIREWALL_ENABLE: true
# [kube-router] image version
kube_router_ver: "v1.5.4"
############################
# role:cluster-addon
############################
# coredns auto-install
dns_install: "no"
corednsVer: "1.9.3"
ENABLE_LOCAL_DNS_CACHE: false
dnsNodeCacheVer: "1.22.20"
# local dns cache address
LOCAL_DNS_CACHE: "169.254.20.10"
# metrics-server auto-install
metricsserver_install: "no"
metricsVer: "v0.6.3"
# dashboard auto-install
dashboard_install: "no"
dashboardVer: "v2.7.0"
dashboardMetricsScraperVer: "v1.0.8"
# prometheus auto-install
prom_install: "no"
prom_namespace: "monitor"
prom_chart_ver: "45.23.0"
# kubeapps auto-install; if enabled, local-storage (storageClass: "local-path") is installed as well
kubeapps_install: "no"
kubeapps_install_namespace: "kubeapps"
kubeapps_working_namespace: "default"
kubeapps_storage_class: "local-path"
kubeapps_chart_ver: "12.4.3"
# local-storage (local-path-provisioner) auto-install
local_path_provisioner_install: "no"
local_path_provisioner_ver: "v0.0.24"
# default local storage path
local_path_provisioner_dir: "/opt/local-path-provisioner"
# nfs-provisioner auto-install
nfs_provisioner_install: "no"
nfs_provisioner_namespace: "kube-system"
nfs_provisioner_ver: "v4.0.2"
nfs_storage_class: "managed-nfs-storage"
nfs_server: "192.168.1.10"
nfs_path: "/data/nfs"
# network-check auto-install
network_check_enabled: false
network_check_schedule: "*/5 * * * *"
############################
# role:harbor
############################
# harbor version (full version string)
HARBOR_VER: "v2.6.4"
HARBOR_DOMAIN: "harbor.easzlab.io.local"
HARBOR_PATH: /var/data
HARBOR_TLS_PORT: 8443
HARBOR_REGISTRY: "{{ HARBOR_DOMAIN }}:{{ HARBOR_TLS_PORT }}"
# if set 'false', you need to put certs named harbor.pem and harbor-key.pem in directory 'down'
HARBOR_SELF_SIGNED_CERT: false
# install extra component
HARBOR_WITH_NOTARY: false
HARBOR_WITH_TRIVY: false
HARBOR_WITH_CHARTMUSEUM: false
# Run step by step; step 01 initializes the cluster (you have 5 seconds to cancel)
[root@k8s-deployer kubeasz]# ./ezctl setup k8s-cluster2 01
# Step 02: install etcd
[root@k8s-deployer kubeasz]# ./ezctl setup k8s-cluster2 02
# Verify the etcd service (run on an etcd node)
[root@k8s-etcd1 ~]# export NODE_IPS="172.18.10.126 172.18.10.127 172.18.10.128"
[root@k8s-etcd1 ~]# for ip in ${NODE_IPS}; do ETCDCTL_API=3 /usr/local/bin/etcdctl --endpoints=https://${ip}:2379 --cacert=/etc/kubernetes/ssl/ca.pem --cert=/etc/kubernetes/ssl/etcd.pem --key=/etc/kubernetes/ssl/etcd-key.pem endpoint health; done
https://172.18.10.126:2379 is healthy: successfully committed proposal: took = 6.139803ms
https://172.18.10.127:2379 is healthy: successfully committed proposal: took = 5.824546ms
https://172.18.10.128:2379 is healthy: successfully committed proposal: took = 10.976508ms
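Beyond endpoint health, cluster membership can also be listed (a sketch using the same certificates as above):
[root@k8s-etcd1 ~]# ETCDCTL_API=3 /usr/local/bin/etcdctl --endpoints=https://172.18.10.126:2379 --cacert=/etc/kubernetes/ssl/ca.pem --cert=/etc/kubernetes/ssl/etcd.pem --key=/etc/kubernetes/ssl/etcd-key.pem member list --write-out=table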
4. containerd runtime
# Deploy the containerd runtime, switching image pulls to the local registry
[root@k8s-deployer kubeasz]# grep SANDBOX_IMAGE ./clusters/* -R
./clusters/k8s-cluster2/config.yml:SANDBOX_IMAGE: "harbor.linuxarchitect.io/baseimages/pause:3.9"
# Add name resolution for the harbor registry (if your company runs a DNS server, use that instead)
[root@k8s-deployer kubeasz]# vim roles/containerd/tasks/main.yml
- name: Add hosts entry for Harbor
  shell: "echo '172.18.10.130 harbor.linuxarchitect.io' >> /etc/hosts"
# Optionally customize the containerd config template
[root@k8s-deployer kubeasz]# vim roles/containerd/templates/config.toml.j2
SystemdCgroup = true
[root@k8s-deployer kubeasz]# wget https://github.com/containerd/nerdctl/releases/download/v1.5.0/nerdctl-1.5.0-linux-amd64.tar.gz
[root@k8s-deployer kubeasz]# tar xvf nerdctl-1.5.0-linux-amd64.tar.gz -C /etc/kubeasz/bin/containerd-bin/
[root@k8s-deployer kubeasz]# cat roles/containerd/tasks/main.yml
- name: Check whether containerd is already installed
  shell: 'systemctl is-active containerd || echo "NoFound"'
  register: containerd_svc

- block:
    - name: Prepare containerd directories
      file: name={{ item }} state=directory
      with_items:
        - "{{ bin_dir }}"
        - "/etc/containerd"
        - "/etc/nerdctl" # nerdctl config directory

    - name: Load the overlay kernel module
      modprobe: name=overlay state=present

    - name: Distribute the containerd binaries
      copy: src={{ item }} dest={{ bin_dir }}/ mode=0755
      with_fileglob:
        - "{{ base_dir }}/bin/containerd-bin/*"
      tags: upgrade

    - name: Distribute crictl
      copy: src={{ base_dir }}/bin/crictl dest={{ bin_dir }}/crictl mode=0755

    - name: Add crictl shell completion
      lineinfile:
        dest: ~/.bashrc
        state: present
        regexp: 'crictl completion'
        line: 'source <(crictl completion bash) # generated by kubeasz'

    - name: Create the containerd config file
      template: src=config.toml.j2 dest=/etc/containerd/config.toml
      tags: upgrade

    - name: Create the nerdctl config file
      template: src=nerdctl.toml.j2 dest=/etc/nerdctl/nerdctl.toml # distribute the nerdctl config file
      tags: upgrade

    - name: Add hosts entry for Harbor
      shell: "echo '172.18.10.130 harbor.linuxarchitect.io' >> /etc/hosts"

    - name: Create the systemd unit file
      template: src=containerd.service.j2 dest=/etc/systemd/system/containerd.service
      tags: upgrade

    - name: Create the crictl config
      template: src=crictl.yaml.j2 dest=/etc/crictl.yaml

    - name: Enable containerd at boot
      shell: systemctl enable containerd
      ignore_errors: true

    - name: Start the containerd service
      shell: systemctl daemon-reload && systemctl restart containerd
      tags: upgrade

    - name: Poll until the containerd service is running
      shell: "systemctl is-active containerd.service"
      register: containerd_status
      until: '"active" in containerd_status.stdout'
      retries: 8
      delay: 2
      tags: upgrade
  when: "'NoFound' in containerd_svc.stdout"
[root@k8s-deployer kubeasz]# cat roles/containerd/templates/nerdctl.toml.j2
namespace = "k8s.io"
debug = false
debug_full = false
insecure_registry = true
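Because nerdctl.toml sets namespace = "k8s.io" (the namespace containerd's CRI plugin uses), nerdctl on the nodes sees kubelet-created containers without needing a --namespace flag; a quick check once the runtime is deployed:
nerdctl ps      # lists the pods' containers from the k8s.io namespace
nerdctl images  # lists images pulled by the kubelet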
[root@k8s-deployer kubeasz]# cat roles/containerd/templates/containerd.service.j2
[Unit]
Description=containerd container runtime
Documentation=https://containerd.io
After=network.target
[Service]
Environment="PATH={{ bin_dir }}:/bin:/sbin:/usr/bin:/usr/sbin"
ExecStartPre=-/sbin/modprobe overlay
ExecStart={{ bin_dir }}/containerd
Restart=always
RestartSec=5
Delegate=yes
KillMode=process
OOMScoreAdjust=-999
LimitNOFILE=1048576
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNPROC=infinity
LimitCORE=infinity
[Install]
WantedBy=multi-user.target
# Run the runtime deployment step
[root@k8s-deployer kubeasz]# ./ezctl setup k8s-cluster2 03
# Verify containerd on the nodes
[root@k8s-master1 ~]# containerd -v
containerd github.com/containerd/containerd v1.6.20 2806fc1057397dbaeefbea0e4e17bddfbd388f38
[root@k8s-master1 ~]# nerdctl version
[root@k8s-master1 ~]# nerdctl pull harbor.linuxarchitect.io/baseimages/pause@sha256:3ec9d4ec5512356b5e77b13fddac2e9016e7aba17dd295ae23c94b2b901813de
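The SANDBOX_IMAGE configured earlier assumes pause:3.9 was already pushed to Harbor; a sketch of that upload from any host with nerdctl (the baseimages project must exist in Harbor first, and the upstream image name is an assumption):
nerdctl login harbor.linuxarchitect.io
nerdctl pull registry.k8s.io/pause:3.9
nerdctl tag registry.k8s.io/pause:3.9 harbor.linuxarchitect.io/baseimages/pause:3.9
nerdctl push harbor.linuxarchitect.io/baseimages/pause:3.9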
5. Master nodes
# Deploy the master nodes
[root@k8s-deployer kubeasz]# ./ezctl setup k8s-cluster2 04
# Verify the Kubernetes cluster status
[root@k8s-deployer kubeasz]# kubectl get node
NAME STATUS ROLES AGE VERSION
172.18.10.122 Ready,SchedulingDisabled master 7m11s v1.27.4
172.18.10.123 Ready,SchedulingDisabled master 7m11s v1.27.4
6. Worker nodes
# Deploy the worker nodes
[root@k8s-deployer kubeasz]# ./ezctl setup k8s-cluster2 05
# Verify the nodes
[root@k8s-deployer kubeasz]# kubectl get node
NAME STATUS ROLES AGE VERSION
172.18.10.122 Ready,SchedulingDisabled master 9m51s v1.27.4
172.18.10.123 Ready,SchedulingDisabled master 9m51s v1.27.4
172.18.10.124 Ready node 11s v1.27.4
172.18.10.125 Ready node 12s v1.27.4
7. Calico network plugin
[root@k8s-deployer kubeasz]# vim calico3.26.1-ipip_ubuntu2204-k8s-1.27.x.yaml
- name: CALICO_IPV4POOL_CIDR
value: "10.200.0.0/16"
- name: IP_AUTODETECTION_METHOD
value: "interface=ens33" #指定使用eth0网卡
[root@k8s-deployer kubeasz]# kubectl apply -f calico3.26.1-ipip_ubuntu2204-k8s-1.27.x.yaml
[root@k8s-deployer kubeasz]# kubectl get pod -A
8. Create Pods to verify network connectivity
# Create test containers and verify communication
[root@k8s-deployer kubeasz]# kubectl run net-test2 --image=alpine sleep 360000
pod/net-test2 created
[root@k8s-deployer kubeasz]# kubectl run net-test1 --image=alpine sleep 360000
pod/net-test1 created
[root@k8s-deployer kubeasz]# kubectl run net-test3 --image=alpine sleep 360000
pod/net-test3 created
[root@k8s-deployer kubeasz]# kubectl get pod -o wide
# Verify communication (sketch below)
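A sketch of the connectivity test; substitute a real pod IP from the kubectl get pod -o wide output above:
kubectl exec -it net-test1 -- ping -c 2 <net-test2-pod-IP>   # pod-to-pod
kubectl exec -it net-test1 -- ping -c 2 223.6.6.6            # pod-to-external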
9. Deploy coredns
[root@k8s-deployer ~]# unzip kubernetes-v1.27.x-files-20230806.zip
[root@k8s-deployer 1.coredns]# cd /root/kubernetes-v1.27.x-files/20230416-cases/1.coredns
[root@k8s-deployer ~]# kubectl apply -f coredns-v1.10.1.yaml
# Verify (sketch below)
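A sketch verifying in-cluster DNS through one of the test pods created earlier; the service name should resolve to the first IP of SERVICE_CIDR (here 10.100.0.1):
kubectl exec -it net-test1 -- nslookup kubernetes.default.svc.cluster.local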
II. Summary of kubectl command usage
# Find all Pods on a given node
kubectl get pods -A --field-selector=spec.nodeName=172.18.10.125 -o wide
# Basic commands: create, delete, update, query
kubectl create -f xxx.yml # create a resource from a file or stdin
kubectl apply -f xxx.yml # apply a configuration to a resource from a file or stdin; supports updating live config
kubectl delete -f xxx.yml # delete resources from a file or by resource name
kubectl edit # edit a resource on the server
kubectl get # list resources (pod, node, svc, ...); -o wide shows more detail
kubectl describe # show detailed state of a resource, including recent events
kubectl logs # print a container's logs in a pod
kubectl exec # run a command in a container, similar to docker exec
kubectl scale # set a new size for a Deployment, ReplicaSet, ReplicationController, or Job
kubectl explain # show documentation for resource fields
kubectl label # set labels on a resource
kubectl cluster-info # show cluster information
kubectl top # show resource (CPU/memory) usage
kubectl cordon # mark a node unschedulable
kubectl uncordon # mark a node schedulable
kubectl drain # evict pods from a node, e.g. before taking it offline
kubectl taint # taint a node so that pods without a matching toleration are repelled
kubectl config # modify kubeconfig files
kubectl version # print client and server version information
kubectl api-versions # list the API versions supported by the server
# Common commands
kubectl get service --all-namespaces -o wide
kubectl get pods --all-namespaces -o wide
kubectl get nodes --all-namespaces -o wide
kubectl get deployment --all-namespaces
kubectl get deployment -n myserver -o wide # change the output format
kubectl describe pods myserver-tomcat-app1-deployment -n myserver # show details of a specific resource
kubectl create -f tomcat-app1.yaml
kubectl delete -f tomcat-app1.yaml
kubectl apply -f tomcat-app1.yaml # recommended; apply creates/updates resource objects from YAML/JSON files, stdin, or a URL
kubectl exec -it myserver-tomcat-app1-deployment-6bccd8f9c7-g76s5 -n myserver -- bash
kubectl logs myserver-tomcat-app1-deployment-6bccd8f9c7-g76s5 -n myserver
kubectl delete pods myserver-tomcat-app1-deployment-6bccd8f9c7-g76s5 -n myserver
Docs: https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands
III. Testing a third-party dashboard: Kuboard
[root@k8s-deployer kubeasz]# yum install -y nfs-utils
[root@k8s-deployer kubeasz]# mkdir /root/kuboard-data/
[root@k8s-deployer ~]# cat /etc/exports
/data/k8sdata/kuboard *(rw,no_root_squash)
[root@k8s-deployer ~]# systemctl restart nfs-server.service
[root@k8s-deployer ~]# systemctl enable nfs-server.service
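Before starting Kuboard, a quick check that the NFS export is visible (a sketch, assuming the deployer at 172.18.10.121 is the NFS server):
showmount -e 172.18.10.121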
[root@k8s-deployer 3.kuboard]# docker pull swr.cn-east-2.myhuaweicloud.com/kuboard/kuboard:v3
[root@k8s-deployer 3.kuboard]# docker run -d \
> --restart=unless-stopped \
> --name=kuboard \
> -p 80:80/tcp \
> -p 10081:10081/tcp \
> -e KUBOARD_ENDPOINT="http://172.18.10.121:80" \
> -e KUBOARD_AGENT_SERVER_TCP_PORT="10081" \
> -v /root/kuboard-data:/data \
> swr.cn-east-2.myhuaweicloud.com/kuboard/kuboard:v3
# Access and verify
http://172.18.10.121:80
Username: admin
Password: Kuboard123
Add the cluster:
Configure the cluster by pasting in the contents of ~/.kube/config
View the cluster's status
Cluster overview:
Node info:
Namespace/container info:
Open a shell in a container:
IV. Summary of the etcd cluster election mechanism
etcd uses the Raft algorithm for cluster leader election; Raft is also used by Consul, InfluxDB, Kafka (KRaft), and others.
Node roles: each node in the cluster is in exactly one of three states: Leader, Follower, or Candidate.
follower: a node that follows the leader (like a Redis Cluster slave)
candidate: a node taking part in an ongoing election
leader: the primary node (like a Redis Cluster master)
After startup, nodes vote for one another based on the term ID, an integer that defaults to 0. In Raft, a term represents one leader's tenure: whenever a node becomes leader a new term begins, and every node increments its term ID to distinguish the round from the previous election.
Election process:
First election:
1. Each etcd node starts as a follower with term ID 0; if it finds no leader in the cluster, it becomes a candidate and starts a leader election.
2. Candidates send vote requests (RequestVote) to the other candidates, voting for themselves by default.
3. Each candidate receives the others' requests (A gets B's and C's, B gets A's and C's, C gets A's and B's) and compares the included log with its own; if the other log is more up to date, it grants its vote and replies with its own latest log info. If C's log is the most up to date, C gets A's, B's, and C's votes and wins unanimously; even if B is down, C still wins a majority with A's and C's votes.
4. C then sends leader heartbeats to the other nodes to maintain its leadership (heartbeat-interval, default 100 ms).
5. The other nodes switch to the follower role and sync data from the leader.
6. If the election times out (election-timeout), a new election is held; if two leaders ever emerge, only the one holding more than half the cluster's votes is valid.
Subsequent elections:
1. When a follower receives no message from the leader within the timeout, it becomes a candidate, sends vote requests (carrying its term ID and log state) to the other nodes, and waits for responses; if its log is the most up to date, it wins a majority of the votes and becomes the new leader.
2. The new leader increments its term ID and announces it to the other nodes.
3. If the old leader recovers and finds a new leader already in place, it rejoins as a follower and updates its term ID to match the leader's; within a single term, all nodes share the same term ID.
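The terms and leadership described above can be observed directly on this lab's etcd cluster (a sketch, using the same certificates as in section 3):
ETCDCTL_API=3 /usr/local/bin/etcdctl --endpoints=https://172.18.10.126:2379,https://172.18.10.127:2379,https://172.18.10.128:2379 --cacert=/etc/kubernetes/ssl/ca.pem --cert=/etc/kubernetes/ssl/etcd.pem --key=/etc/kubernetes/ssl/etcd-key.pem endpoint status --write-out=table
# The IS LEADER and RAFT TERM columns identify the current leader and term; stopping the
# leader (systemctl stop etcd) and re-running should show a new leader and a higher term.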
V. Upgrading the Kubernetes cluster version
# Check the version before the upgrade
[root@k8s-deployer k8s]# cd /usr/local/src/k8s
[root@k8s-deployer k8s]# tar -xf kubernetes-client-linux-amd64.tar.gz
[root@k8s-deployer k8s]# tar -xf kubernetes-node-linux-amd64.tar.gz
[root@k8s-deployer k8s]# tar -xf kubernetes.tar.gz
[root@k8s-deployer k8s]# tar -xf kubernetes-server-linux-amd64.tar.gz
[root@k8s-deployer k8s]# cd kubernetes/server/bin
[root@k8s-deployer bin]# cp kube-apiserver kube-controller-manager kube-scheduler kubelet kube-proxy kubectl /etc/kubeasz/bin/
[root@k8s-deployer bin]# cd /etc/kubeasz/bin/
[root@k8s-deployer bin]# ./kube-apiserver --version
Kubernetes v1.27.2
[root@k8s-deployer kubeasz]# cd /etc/kubeasz/
# Upgrade the cluster
[root@k8s-deployer kubeasz]# ./ezctl upgrade k8s-cluster2
# Verify the upgrade result (sketch below)
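A sketch of the post-upgrade check; every node should report the new kubelet version:
kubectl get node -o wide
kubectl version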