Kubernetes 1.24.2 Binary Deployment

This article walks through deploying Kubernetes 1.24.2 with the kubeasz project and then scaling the cluster out.
kubeasz project: https://github.com/easzlab/kubeasz

Host Planning

| Role | IP | Hostname | OS |
|------|----|----------|----|
| harbor | 192.168.122.10 | harbor-server1.linux.io | Ubuntu 20.04 |
| harbor | 192.168.122.11 | harbor-server2.linux.io | Ubuntu 20.04 |
| haproxy+keepalived | 192.168.122.12 | haproxy-server1.linux.io | Ubuntu 20.04 |
| haproxy+keepalived | 192.168.122.13 | haproxy-server2.linux.io | Ubuntu 20.04 |
| master-node+deploy-node | 192.168.122.14 | k8s-master01.linux.io | Ubuntu 20.04 |
| master-node | 192.168.122.15 | k8s-master02.linux.io | Ubuntu 20.04 |
| master-node | 192.168.122.16 | k8s-master03.linux.io | Ubuntu 20.04 |
| etcd-node | 192.168.122.17 | k8s-etcd01.linux.io | Ubuntu 20.04 |
| etcd-node | 192.168.122.18 | k8s-etcd02.linux.io | Ubuntu 20.04 |
| etcd-node | 192.168.122.19 | k8s-etcd03.linux.io | Ubuntu 20.04 |
| worker-node | 192.168.122.20 | k8s-node01.linux.io | Ubuntu 20.04 |
| worker-node | 192.168.122.21 | k8s-node02.linux.io | Ubuntu 20.04 |
| worker-node | 192.168.122.22 | k8s-node03.linux.io | Ubuntu 20.04 |
| VIP | 192.168.122.188 | harbor-server.linux.io & api-k8s.linux.io | - |

Node Environment Preparation

Prepare all nodes, including hostname configuration, time synchronization, firewall settings, and /etc/hosts name resolution.
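A minimal sketch of the per-node preparation, run on every machine (the hostname value and the chrony choice are examples; adapt them to your environment):

# set the hostname (value differs per node)
hostnamectl set-hostname k8s-master01.linux.io
# time synchronization
apt -y install chrony && systemctl enable --now chrony
# firewall off (or open the required ports instead)
ufw disable
# local name resolution for the VIP domains
echo "192.168.122.188 harbor-server.linux.io api-k8s.linux.io" >> /etc/hosts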

Harbor High-Availability Deployment

Deploy Harbor and haproxy+keepalived by following this article: https://blog.csdn.net/weixin_43266367/article/details/126022696?spm=1001.2014.3001.5501

After Harbor is deployed, create an additional kubernetes project and configure a replication rule for it; this project holds the images needed by the Kubernetes deployment.

K8s Deployment

Prepare the Ansible Environment

Configure passwordless SSH from the deploy node (master01) to every other cluster node:

ssh-keygen
for i in {14..22};do sshpass -p Wang@Passw0rd ssh-copy-id -o StrictHostKeyChecking=no root@192.168.122.$i;done
# create a python symlink on every node (kubeasz's ansible expects /usr/bin/python)
for i in {14..22};do ssh root@192.168.122.$i ln -s /usr/bin/python3 /usr/bin/python ;done
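An optional quick check that the key-based login works on all nodes:

for i in {14..22};do ssh root@192.168.122.$i hostname;done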

Install Ansible

apt -y install ansible
Download the kubeasz components
wget https://github.com/easzlab/kubeasz/releases/download/3.3.1/ezdown
chmod +x ezdown
./ezdown -D	# download the kubeasz code, binaries, default container images, etc.

Once the download completes, the deploy node will have pulled a number of Docker images. Some of these can be pushed to our own Harbor registry so that multiple environments can reuse them:

# push the pause image
docker image tag easzlab/pause:3.7 harbor-server.linux.io/kubernetes/pause:3.7
docker push harbor-server.linux.io/kubernetes/pause:3.7
# push the calico images
docker image tag calico/node:v3.19.4 harbor-server.linux.io/kubernetes/calico-node:v3.19.4
docker image push harbor-server.linux.io/kubernetes/calico-node:v3.19.4
docker image tag calico/pod2daemon-flexvol:v3.19.4 harbor-server.linux.io/kubernetes/calico-pod2daemon-flexvol:v3.19.4
docker push harbor-server.linux.io/kubernetes/calico-pod2daemon-flexvol:v3.19.4
docker image tag calico/cni:v3.19.4 harbor-server.linux.io/kubernetes/calico-cni:v3.19.4
docker image push harbor-server.linux.io/kubernetes/calico-cni:v3.19.4
docker image tag calico/kube-controllers:v3.19.4 harbor-server.linux.io/kubernetes/calico-kube-controllers:v3.19.4
docker image push harbor-server.linux.io/kubernetes/calico-kube-controllers:v3.19.4
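A quick smoke test that the pushes actually landed in Harbor (run from any machine that trusts the registry):

docker pull harbor-server.linux.io/kubernetes/pause:3.7
docker pull harbor-server.linux.io/kubernetes/calico-node:v3.19.4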
Modify the kubeasz Configuration

kubeasz can manage multiple clusters; start by initializing a new one:

cd /etc/kubeasz/	# kubeasz configuration directory
./ezctl new k8s-cluster1	# initialize a new cluster

Initializing a cluster creates a directory of the same name under /etc/kubeasz/clusters, containing two configuration files, hosts and config.yml:

  • hosts: the Ansible inventory file, listing the k8s cluster's node information
  • config.yml: the parameter settings used when deploying the k8s cluster

First edit the hosts file and add the master, node, and etcd node entries, as shown below:

# 'etcd' cluster should have odd member(s) (1,3,5,...)
[etcd]
192.168.122.17
192.168.122.18
192.168.122.19

# master node(s)
[kube_master]
192.168.122.14
192.168.122.15

# work node(s)
[kube_node]
192.168.122.20
192.168.122.21

# [optional] harbor server, a private docker registry
# 'NEW_INSTALL': 'true' to install a harbor server; 'false' to integrate with existed one
[harbor]
#192.168.1.8 NEW_INSTALL=false

# [optional] loadbalance for accessing k8s from outside
[ex_lb]
#192.168.1.6 LB_ROLE=backup EX_APISERVER_VIP=192.168.1.250 EX_APISERVER_PORT=8443
#192.168.1.7 LB_ROLE=master EX_APISERVER_VIP=192.168.1.250 EX_APISERVER_PORT=8443

# [optional] ntp server for the cluster
[chrony]
#192.168.1.1

[all:vars]
# --------- Main Variables ---------------
# Secure port for apiservers
SECURE_PORT="6443"	# apiserver port, keep the default

# Cluster container-runtime supported: docker, containerd
# if k8s version >= 1.24, docker is not supported
CONTAINER_RUNTIME="containerd"	# container runtime, keep the default

# Network plugins supported: calico, flannel, kube-router, cilium, kube-ovn
CLUSTER_NETWORK="calico"	# network plugin, calico by default, change as needed

# Service proxy mode of kube-proxy: 'iptables' or 'ipvs'
PROXY_MODE="ipvs"	# kube-proxy proxy mode

# K8S Service CIDR, not overlap with node(host) networking
SERVICE_CIDR="10.68.0.0/16"	# Service network, change as needed

# Cluster CIDR (Pod CIDR), not overlap with node(host) networking
CLUSTER_CIDR="172.20.0.0/16"	# Pod network, change as needed

# NodePort Range
NODE_PORT_RANGE="30000-40000"	# NodePort range, change as needed

# Cluster DNS Domain
CLUSTER_DNS_DOMAIN="cluster.local"	# cluster domain suffix, change as needed

# -------- Additional Variables (don't change the default value right now) ---
# Binaries Directory
bin_dir="/usr/local/bin"	# directory for the k8s binaries

# Deploy Directory (kubeasz workspace)
base_dir="/etc/kubeasz"

# Directory for a specific cluster
cluster_dir="{{ base_dir }}/clusters/k8s-cluster1"

# CA and other components cert/key Directory
ca_dir="/etc/kubernetes/ssl"	# directory for certificate files

Only two masters and two workers are added for now; the remaining master and worker are reserved for the scale-out later. The other settings can be adjusted as needed.

Then edit config.yml to set the cluster-related parameters, as shown below:

############################
# prepare
############################
# optional: offline installation of system packages (offline|online)
INSTALL_SOURCE: "online"

# optional: system security hardening, see github.com/dev-sec/ansible-collection-hardening
OS_HARDEN: false


############################
# role:deploy
############################
# default: ca will expire in 100 years
# default: certs issued by the ca will expire in 50 years
CA_EXPIRY: "876000h"
CERT_EXPIRY: "438000h"

# kubeconfig parameters
CLUSTER_NAME: "cluster1"
CONTEXT_NAME: "context-{{ CLUSTER_NAME }}"

# k8s version
K8S_VER: "1.24.2"

############################
# role:etcd
############################
# using a separate WAL directory avoids disk I/O contention and improves performance
ETCD_DATA_DIR: "/var/lib/etcd"	# etcd data directory
ETCD_WAL_DIR: ""


############################
# role:runtime [containerd,docker]
############################
# ------------------------------------------- containerd
# [.] enable registry mirrors
ENABLE_MIRROR_REGISTRY: true

# [containerd] sandbox (pause) image
SANDBOX_IMAGE: "harbor-server.linux.io/kubernetes/pause:3.7"	# changed to pull the pause image from our Harbor

# [containerd] container persistent storage directory
CONTAINERD_STORAGE_DIR: "/var/lib/containerd"

# ------------------------------------------- docker
# [docker] container storage directory
DOCKER_STORAGE_DIR: "/var/lib/docker"

# [docker] enable the RESTful API
ENABLE_REMOTE_API: false

# [docker] trusted HTTP registries
INSECURE_REG: '["http://easzlab.io.local:5000"]'


############################
# role:kube-master
############################
# certificate configuration for the k8s master nodes; multiple IPs and domains can be added (e.g. a public IP and domain)
MASTER_CERT_HOSTS:	# add the VIP and its domain name here
  - "192.168.122.188"
  - "api-k8s.linux.io"
  #- "www.test.com"

# pod subnet mask length on each node (determines the maximum number of pod IPs per node)
# if flannel runs with --kube-subnet-mgr, it reads this setting to assign a pod subnet to each node
# https://github.com/coreos/flannel/issues/847
NODE_CIDR_LEN: 24


############################
# role:kube-node
############################
# kubelet root directory
KUBELET_ROOT_DIR: "/var/lib/kubelet"

# maximum number of pods per node
MAX_PODS: 200	# adjust based on node resources

# resources reserved for kube components (kubelet, kube-proxy, dockerd, etc.)
# see templates/kubelet-config.yaml.j2 for the values
KUBE_RESERVED_ENABLED: "no"

# upstream k8s advises against enabling system-reserved casually, unless long-term monitoring
# has given you a clear picture of the system's resource usage; the reservation should also
# grow as the system's uptime increases; see templates/kubelet-config.yaml.j2 for the values
# the system reservation assumes a 4c/8g VM with a minimal OS install; increase it on high-end physical machines
# note that apiserver and friends briefly use a lot of resources during installation; reserving at least 1 GB of memory is recommended
SYS_RESERVED_ENABLED: "no"


############################
# role:network [flannel,calico,cilium,kube-ovn,kube-router]
############################
# ------------------------------------------- flannel
# [flannel] flannel backend: "host-gw", "vxlan", etc.
FLANNEL_BACKEND: "vxlan"
DIRECT_ROUTING: false

# [flannel] flanneld_image: "quay.io/coreos/flannel:v0.10.0-amd64"
flannelVer: "v0.15.1"
flanneld_image: "easzlab.io.local:5000/easzlab/flannel:{{ flannelVer }}"

# ------------------------------------------- calico settings, change as needed
# [calico] setting CALICO_IPV4POOL_IPIP="off" improves network performance; see docs/setup/calico.md for the constraints
CALICO_IPV4POOL_IPIP: "Always"

# [calico] host IP used by calico-node; BGP neighbors are established over this address; can be set manually or auto-detected
IP_AUTODETECTION_METHOD: "can-reach={{ groups['kube_master'][0] }}"

# [calico] calico network backend: brid, vxlan, none
CALICO_NETWORKING_BACKEND: "brid"

# [calico] whether calico uses route reflectors
# recommended for clusters with more than 50 nodes
CALICO_RR_ENABLED: false

# CALICO_RR_NODES sets the route reflector nodes; defaults to the cluster master nodes if unset
# CALICO_RR_NODES: ["192.168.1.1", "192.168.1.2"]
CALICO_RR_NODES: []

# [calico] supported calico versions: [v3.3.x] [v3.4.x] [v3.8.x] [v3.15.x]
calico_ver: "v3.19.4"

# [calico] calico major.minor version
calico_ver_main: "{{ calico_ver.split('.')[0] }}.{{ calico_ver.split('.')[1] }}"

# ------------------------------------------- cilium
# [cilium] image version
cilium_ver: "1.11.6"
cilium_connectivity_check: true
cilium_hubble_enabled: false
cilium_hubble_ui_enabled: false

# ------------------------------------------- kube-ovn
# [kube-ovn] node for the OVN DB and OVN Control Plane, defaults to the first master node
OVN_DB_NODE: "{{ groups['kube_master'][0] }}"

# [kube-ovn] offline image tarball
kube_ovn_ver: "v1.5.3"

# ------------------------------------------- kube-router
# [kube-router] public clouds have restrictions and generally require ipinip to stay enabled; on your own infrastructure this can be set to "subnet"
OVERLAY_TYPE: "full"

# [kube-router] NetworkPolicy support switch
FIREWALL_ENABLE: true

# [kube-router] kube-router image version
kube_router_ver: "v0.3.1"
busybox_ver: "1.28.4"


############################
# role:cluster-addon
############################
# automatic coredns installation
dns_install: "no"	# disabled, will be installed manually
corednsVer: "1.9.3"
ENABLE_LOCAL_DNS_CACHE: true
dnsNodeCacheVer: "1.21.1"
# local dns cache address
LOCAL_DNS_CACHE: "169.254.20.10"

# automatic metrics-server installation
metricsserver_install: "no"	# disabled
metricsVer: "v0.5.2"

# automatic dashboard installation
dashboard_install: "no"	# disabled
dashboardVer: "v2.5.1"
dashboardMetricsScraperVer: "v1.0.8"

# automatic prometheus installation
prom_install: "no"	# disabled
prom_namespace: "monitor"
prom_chart_ver: "35.5.1"

# automatic nfs-provisioner installation
nfs_provisioner_install: "no"	# disabled
nfs_provisioner_namespace: "kube-system"
nfs_provisioner_ver: "v4.0.2"
nfs_storage_class: "managed-nfs-storage"
nfs_server: "192.168.1.10"
nfs_path: "/data/nfs"

# automatic network-check installation
network_check_enabled: false
network_check_schedule: "*/5 * * * *"

############################
# role:harbor
############################
# harbor version, full version string
HARBOR_VER: "v2.1.3"
HARBOR_DOMAIN: "harbor.easzlab.io.local"
HARBOR_TLS_PORT: 8443

# if set 'false', you need to put certs named harbor.pem and harbor-key.pem in directory 'down'
HARBOR_SELF_SIGNED_CERT: true

# install extra component
HARBOR_WITH_NOTARY: false
HARBOR_WITH_TRIVY: false
HARBOR_WITH_CLAIR: false
HARBOR_WITH_CHARTMUSEUM: true

Install the Cluster

Run the initialization:
cd /etc/kubeasz/
./ezctl setup k8s-cluster1 01	# step 01: cluster initialization
Deploy etcd:
./ezctl setup k8s-cluster1 02

After etcd is deployed, log in to any etcd node and run the following commands to check the health of the etcd cluster:

export NODE_IPS="192.168.122.17 192.168.122.18 192.168.122.19"
for ip in ${NODE_IPS};do /usr/local/bin/etcdctl \
--endpoints=https://${ip}:2379 \
--cacert=/etc/kubernetes/ssl/ca.pem \
--cert=/etc/kubernetes/ssl/etcd.pem \
--key=/etc/kubernetes/ssl/etcd-key.pem endpoint health ;done
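Optionally, print the member status as a table as well (--write-out=table is a standard etcdctl flag):

for ip in ${NODE_IPS};do /usr/local/bin/etcdctl \
--endpoints=https://${ip}:2379 \
--cacert=/etc/kubernetes/ssl/ca.pem \
--cert=/etc/kubernetes/ssl/etcd.pem \
--key=/etc/kubernetes/ssl/etcd-key.pem --write-out=table endpoint status ;done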


Deploy the Container Runtime (containerd)

First modify containerd's template configuration file so that containerd can reach the self-hosted Harbor:

vim /etc/kubeasz/roles/containerd/templates/config.toml.j2	# add the following at line 157
157         [plugins."io.containerd.grpc.v1.cri".registry.mirrors."harbor-server.linux.io"]
158           endpoint = ["https://harbor-server.linux.io"]
159         [plugins."io.containerd.grpc.v1.cri".registry.configs."harbor-server.linux.io".tls]
160           insecure_skip_verify = true
161         [plugins."io.containerd.grpc.v1.cri".registry.configs."harbor-server.linux.io".auth]
162           username = "admin"
163           password = "Passw0rd"

Then deploy containerd:

./ezctl setup k8s-cluster1 03

After deployment, go to any master or node and pull a test image to verify that the Harbor configuration took effect; if the image downloads, the configuration works:

crictl pull harbor-server.linux.io/base-images/rockylinux:9.0
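To confirm the template change was rendered onto the node, the registry entries should also appear in containerd's configuration (path per kubeasz's containerd role):

grep -A1 'harbor-server.linux.io' /etc/containerd/config.toml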


Deploy the Master Nodes
./ezctl setup k8s-cluster1 04

After deployment, check the master node status with kubectl get node.
The masters also need to be added to a load-balancing stanza in the haproxy configuration, as below, followed by a haproxy restart.

listen k8s_apiserver_6443
       bind 192.168.122.188:6443
       option tcplog
       mode tcp
       balance source
       server api-server1 192.168.122.14:6443 check inter 2000 fall 3 rise 5
       server api-server2 192.168.122.15:6443 check inter 2000 fall 3 rise 5
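A quick check, once haproxy is reloaded, that the apiserver answers through the VIP (/version is normally served to anonymous clients; -k skips TLS verification):

curl -k https://192.168.122.188:6443/version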
Deploy the Worker Nodes
./ezctl setup k8s-cluster1 05

After deployment, check the node status with kubectl get node.

Deploy Calico

Modify the calico deployment manifest template: first point all of the images at our Harbor, then change the subnet block size calico assigns to each node from the default 26 to 24:

cat /etc/kubeasz/roles/calico/templates/calico-v3.19.yaml.j2 |grep -n image
213:          image: harbor-server.linux.io/kubernetes/calico-cni:{{ calico_ver }}
257:          image: harbor-server.linux.io/kubernetes/calico-pod2daemon-flexvol:{{ calico_ver }}
268:          image: harbor-server.linux.io/kubernetes/calico-node:{{ calico_ver }}
519:          image: harbor-server.linux.io/kubernetes/calico-kube-controllers:{{ calico_ver }}
cat /etc/kubeasz/roles/calico/templates/calico-v3.19.yaml.j2 |grep -A1 -n SIZE
351:            - name: CALICO_IPV4POOL_BLOCK_SIZE
352-              value: "24"

With the changes in place, start the deployment:

./ezctl setup k8s-cluster1 06

Wait until all calico-related Pods are Running, then run the quick health checks below before moving on to a Pod network test.
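A hedged sanity check (this assumes calicoctl is on the node's PATH; kubeasz normally installs it alongside the other binaries):

kubectl get pods -n kube-system -o wide | grep calico
calicoctl node status	# run on a cluster node; BGP peers should show "Established"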
Create the test Pods:

kubectl create deployment net-test --replicas=3 --image=alpine -- sleep 36000
kubectl get pods	# check Pod status

Once the test Pods are Running, verify network connectivity.
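A minimal check; the pod name net-test-xxxx and the target IP 172.20.1.2 below are placeholders for what kubectl get pods -o wide actually reports in the 172.20.0.0/16 pod network:

kubectl get pods -o wide -l app=net-test	# note the pod names and IPs
# ping from one test pod to another pod's IP
kubectl exec -it net-test-xxxx -- ping -c 3 172.20.1.2
# ping an external address to verify egress
kubectl exec -it net-test-xxxx -- ping -c 3 223.5.5.5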

Scale Out the Cluster

Add a Master Node

Add the master node that was held back earlier (192.168.122.16):

./ezctl add-master k8s-cluster1 192.168.122.16

Once added, check the status of the new master node.
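For example, on the deploy node:

kubectl get node	# 192.168.122.16 should now be listed and Ready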
Then update the haproxy configuration to add the new master to the load-balancing pool and restart haproxy, as follows:

listen k8s_apiserver_6443
       bind 192.168.122.188:6443
       option tcplog
       mode tcp
       balance source
       server api-server1 192.168.122.14:6443 check inter 2000 fall 3 rise 5
       server api-server2 192.168.122.15:6443 check inter 2000 fall 3 rise 5
       # added
       server api-server3 192.168.122.16:6443 check inter 2000 fall 3 rise 5	
Add a Worker Node

Add the worker node that was held back earlier (192.168.122.22):

./ezctl add-node k8s-cluster1 192.168.122.22

Once added, check the node status.
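For example:

kubectl get node -o wide	# 192.168.122.22 should appear as a Ready node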
