Deploying k8s v1.26.6 with Kubespray

Kubespray is a free and open-source tool that provides Ansible playbooks to deploy and manage Kubernetes clusters. It aims to simplify installing Kubernetes across multiple nodes, letting users quickly and easily stand up and operate production-ready clusters.

It supports a range of operating systems, including Ubuntu, CentOS, Rocky Linux, and Red Hat Enterprise Linux (RHEL), and it can deploy Kubernetes on a variety of platforms, including bare metal, public clouds, and private clouds.


kubespray-2.19.0
Features / major changes:
Added support for Kubernetes 1.24.0, 1.24.1, 1.21.12, v1.21.13, 1.22.8, 1.22.9, v1.22.10, 1.21.11, 1.23.5, 1.23.6, and v1.23.7, and made Kubernetes v1.23.7 the default.

kubespray-2.22.1 (the latest release as of 2023-07-12)
The minimum supported Kubernetes version is v1.25.

1. Kubespray installation

1.0 Passwordless SSH, hostname setup, kernel upgrade

cat > mianmi.sh << 'eof'
#!/bin/bash
# bash is required: the script uses an associative array (declare -A)

# Define the K8s host map (hostname -> IP)
declare -A MASTERS
MASTERS=([kubespray]="192.168.6.225" [k8s-master-01]="192.168.6.220" [k8s-master-02]="192.168.6.221" [k8s-master-03]="192.168.6.222" [k8s-node-01]="192.168.6.223" [k8s-node-02]="192.168.6.224")

# Print all keys  : echo ${!MASTERS[*]}
# Print all values: echo ${MASTERS[*]}


echo -e "\033[42;37m >>> Passwordless SSH login <<< \033[0m"
yum -y install sshpass &>/dev/null
# Generate a key pair if one does not exist yet
# (note: DSA keys are rejected by newer OpenSSH defaults; switch to rsa/ed25519 if ssh-copy-id fails)
if [ ! -f ~/.ssh/id_dsa.pub ]; then
    ssh-keygen -t dsa -f ~/.ssh/id_dsa -P "" &>/dev/null
fi
for ip in ${MASTERS[*]}
do
    echo -e "\033[33m $ip \033[0m"
    sshpass -p "1qaz@WSX" ssh-copy-id -i ~/.ssh/id_dsa.pub -p 22 -o StrictHostKeyChecking=no root@$ip &>/dev/null
    ssh root@$ip "echo '$ip: SSH connection test OK'"
done




echo -e "\033[42;37m >>> 修改主机名 <<< \033[0m"
for hostname in ${!MASTERS[*]}
do
    echo -e "\033[33m ${MASTERS[$hostname]} \033[0m" 
    ssh root@${MASTERS[$hostname]} "hostnamectl set-hostname $hostname  && hostname"
done

echo -e "\033[42;37m >>> 添加hosts解析 <<< \033[0m"
cat >/etc/hosts<<EOF
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
${MASTERS[k8s-master-01]} k8s-master-01
${MASTERS[k8s-master-02]} k8s-master-02
${MASTERS[k8s-master-03]} k8s-master-03
${MASTERS[k8s-node-01]} k8s-node-01
${MASTERS[k8s-node-02]} k8s-node-02
${MASTERS[kubespray]} kubespray
EOF
for hostname in ${!MASTERS[*]}
do
    echo -e "\033[33m ${MASTERS[$hostname]} \033[0m" 
    scp /etc/hosts root@${MASTERS[$hostname]}:/etc/hosts
done

echo -e "\033[42;37m >>> 升级内核 <<< \033[0m"
for hostname in ${!MASTERS[*]}
do
    echo -e "\033[33m ${MASTERS[$hostname]} \033[0m" 
    ssh root@${MASTERS[$hostname]} "rpm --import https://www.elrepo.org/RPM-GPG-KEY-elrepo.org && rpm -Uvh http://www.elrepo.org/elrepo-release-7.0-3.el7.elrepo.noarch.rpm && yum --enablerepo=elrepo-kernel install kernel-lt -y && grub2-set-default  0 && grub2-mkconfig -o /etc/grub2.cfg && reboot"
done

eof




bash mianmi.sh

1.1 Prepare Python 3 on the kubespray node (all of the following steps run on the kubespray host)

yum install -y ncurses-devel gdbm-devel xz-devel sqlite-devel tk-devel uuid-devel readline-devel bzip2-devel libffi-devel curl wget
curl -o /etc/yum.repos.d/epel.repo http://mirrors.aliyun.com/repo/epel-7.repo

yum install -y openssl-devel openssl11 openssl11-devel

openssl11 version


1.2 Install Python 3.10.4


wget https://www.python.org/ftp/python/3.10.4/Python-3.10.4.tgz

The main point to watch during compilation is setting the build FLAGS so that the newer OpenSSL library (openssl11) is used.

export CFLAGS=$(pkg-config --cflags openssl11)
export LDFLAGS=$(pkg-config --libs openssl11)


echo $CFLAGS
# expected output: -I/usr/include/openssl11
echo $LDFLAGS
# expected output: -L/usr/lib64/openssl11 -lssl -lcrypto


tar xf Python-3.10.4.tgz
cd Python-3.10.4/
./configure --enable-optimizations && make altinstall


python3.10 --version # should print the version below
#Python 3.10.4
pip3.10 --version # should print the version below
#pip 22.0.4 from /usr/local/lib/python3.10/site-packages/pip (python 3.10)

ln -sf /usr/local/bin/python3.10 /usr/bin/python3
ln -sf /usr/local/bin/pip3.10  /usr/bin/pip3

1.3 Fetch the Kubespray source


yum install git -y
git clone https://github.com/kubernetes-sigs/kubespray.git

cd /root/kubespray/
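
# This guide follows kubespray 2.22.1 (see the version notes above), while a plain clone
# tracks the master branch. Optionally pin the matching release before installing the
# requirements -- a minimal sketch, assuming the upstream release tag is named v2.22.1:
git checkout v2.22.1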

pip3 install -r requirements.txt
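
# If pypi.org is slow to reach from your network, the same requirements can optionally be
# installed through a domestic PyPI mirror instead -- a sketch using the Tsinghua mirror
# (any PyPI mirror URL works):
pip3 install -r requirements.txt -i https://pypi.tuna.tsinghua.edu.cn/simple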


ansible --version

1.4 Create the host inventory

[root@kubespray kubespray]# ls inventory/
local  sample
[root@kubespray kubespray]# cp -rfp inventory/sample inventory/mycluster
[root@kubespray kubespray]# ls inventory/
local  mycluster  sample


# Use the real hostnames (otherwise Kubespray will rename your hosts to node1/node2, and so on)
[root@kubespray kubespray]# export USE_REAL_HOSTNAME=true
# IP addresses of the cluster servers
declare -a IPS=(192.168.6.220 192.168.6.221 192.168.6.222 192.168.6.223 192.168.6.224)


[root@kubespray kubespray]# ls inventory/mycluster/
group_vars  inventory.ini  patches


[root@kubespray kubespray]# CONFIG_FILE=inventory/mycluster/hosts.yaml python3 contrib/inventory_builder/inventory.py ${IPS[@]}


DEBUG: Adding group all
DEBUG: Adding group kube_control_plane
DEBUG: Adding group kube_node
DEBUG: Adding group etcd
DEBUG: Adding group k8s_cluster
DEBUG: Adding group calico_rr
DEBUG: adding host node1 to group all
DEBUG: adding host node2 to group all
DEBUG: adding host node3 to group all
DEBUG: adding host node4 to group all
DEBUG: adding host node5 to group all
DEBUG: adding host node1 to group etcd
DEBUG: adding host node2 to group etcd
DEBUG: adding host node3 to group etcd
DEBUG: adding host node1 to group kube_control_plane
DEBUG: adding host node2 to group kube_control_plane
DEBUG: adding host node1 to group kube_node
DEBUG: adding host node2 to group kube_node
DEBUG: adding host node3 to group kube_node
DEBUG: adding host node4 to group kube_node
DEBUG: adding host node5 to group kube_node
[root@kubespray kubespray]# ls inventory/mycluster/
group_vars  hosts.yaml  inventory.ini  patches

    
# Edited: added one master and removed two nodes, using the real hostnames
[root@kubespray kubespray]# vim inventory/mycluster/hosts.yaml
all:
  hosts:
    k8s-master-01:
      ansible_host: 192.168.6.220
      ip: 192.168.6.220
      access_ip: 192.168.6.220
    k8s-master-02:
      ansible_host: 192.168.6.221
      ip: 192.168.6.221
      access_ip: 192.168.6.221
    k8s-master-03:
      ansible_host: 192.168.6.222
      ip: 192.168.6.222
      access_ip: 192.168.6.222
    k8s-node-01:
      ansible_host: 192.168.6.223
      ip: 192.168.6.223
      access_ip: 192.168.6.223
    k8s-node-02:
      ansible_host: 192.168.6.224
      ip: 192.168.6.224
      access_ip: 192.168.6.224
  children:
    kube_control_plane:
      hosts:
        k8s-master-01:
        k8s-master-02:
        k8s-master-03:
    kube_node:
      hosts:
        k8s-node-01:
        k8s-node-02:
    etcd:
      hosts:
        k8s-master-01:
        k8s-master-02:
        k8s-master-03:
    k8s_cluster:
      children:
        kube_control_plane:
        kube_node:
    calico_rr:
      hosts: {}

1.5 Prepare the K8s cluster configuration file

[root@kubespray kubespray]# cat inventory/mycluster/group_vars/k8s_cluster/k8s-cluster.yml
---
# Kubernetes configuration dirs and system namespace.
# Those are where all the additional config stuff goes
# the kubernetes normally puts in /srv/kubernetes.
# This puts them in a sane location and namespace.
# Editing those values will almost surely break something.
kube_config_dir: /etc/kubernetes
kube_script_dir: "{{ bin_dir }}/kubernetes-scripts"
kube_manifest_dir: "{{ kube_config_dir }}/manifests"

# This is where all the cert scripts and certs will be located
kube_cert_dir: "{{ kube_config_dir }}/ssl"

# This is where all of the bearer tokens will be stored
kube_token_dir: "{{ kube_config_dir }}/tokens"

kube_api_anonymous_auth: true

## Change this to use another Kubernetes version, e.g. a current beta release
kube_version: v1.26.6

# Where the binaries will be downloaded.
# Note: ensure that you've enough disk space (about 1G)
local_release_dir: "/tmp/releases"
# Random shifts for retrying failed ops like pushing/downloading
retry_stagger: 5

# This is the user that owns the cluster installation.
kube_owner: kube

Changes: pay particular attention to lines 20, 70, 76, 81, 160, 229, etc. of this file.
The defaults can generally be left unchanged.
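
For orientation, these are the settings most deployments end up reviewing in k8s-cluster.yml — a sketch of typical values, matching the Kubespray defaults for a containerd + Calico cluster (the subnets below are assumptions; make sure they do not overlap your host network):

kube_version: v1.26.6
container_manager: containerd
kube_network_plugin: calico
kube_service_addresses: 10.233.0.0/18
kube_pods_subnet: 10.233.64.0/18
kube_proxy_mode: ipvs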

1.6 Prepare the cluster add-ons file

To enable add-ons such as the Kubernetes Dashboard and an ingress controller, set the corresponding parameters to enabled in inventory/mycluster/group_vars/k8s_cluster/addons.yml.

Enable only the services your workloads actually need. For example:
[root@kubespray kubespray]# vim inventory/mycluster/group_vars/k8s_cluster/addons.yml
---
# Kubernetes dashboard
# RBAC required. see docs/getting-started.md for access details.
dashboard_enabled: true

# Helm deployment
helm_enabled: false

# Registry deployment
registry_enabled: false
# registry_namespace: kube-system
# registry_storage_class: ""
# registry_disk_size: "10Gi"

# Metrics Server deployment
metrics_server_enabled: false
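
The prose above also mentions an ingress controller; the same addons.yml contains a switch for the bundled ingress-nginx add-on — a sketch, assuming you want it deployed:

# Nginx ingress controller deployment
ingress_nginx_enabled: true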

1.7 Grant the sysops user sudo rights on the K8s cluster nodes

Run on every K8s cluster node:

echo "sysops ALL=(ALL) NOPASSWD:ALL" | sudo tee /etc/sudoers.d/sysops

1.8 Prepare the K8s cluster hosts

cd /root/kubespray/

# Stop and disable the firewall
ansible all -i inventory/mycluster/hosts.yaml -m shell -a "systemctl stop firewalld && systemctl disable firewalld"

# Enable IP forwarding on all cluster hosts
ansible all -i inventory/mycluster/hosts.yaml -m shell -a "echo 'net.ipv4.ip_forward=1' | tee -a /etc/sysctl.conf"

# Disable the swap partition
ansible all -i inventory/mycluster/hosts.yaml -m shell -a "sed -i '/ swap / s/^\(.*\)$/#\1/g' /etc/fstab &&  swapoff -a"
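
# The ip_forward entry above is only appended to /etc/sysctl.conf; it can be applied
# immediately, and the swap state double-checked, with two more ad-hoc calls -- a small sketch:
ansible all -i inventory/mycluster/hosts.yaml -m shell -a "sysctl -p"
ansible all -i inventory/mycluster/hosts.yaml -m shell -a "free -m | grep -i swap"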


1.9 K8s cluster deployment: switch to domestic image registries


[root@kubespray ~]# cd /root/kubespray/


# Switch to registries reachable from China (DaoCloud mirror proxies)
cp inventory/mycluster/group_vars/all/offline.yml inventory/mycluster/group_vars/all/mirror.yml
sed -i -E '/# .*\{\{ files_repo/s/^# //g' inventory/mycluster/group_vars/all/mirror.yml
tee -a inventory/mycluster/group_vars/all/mirror.yml <<EOF
gcr_image_repo: "gcr.m.daocloud.io"
kube_image_repo: "k8s.m.daocloud.io"
docker_image_repo: "docker.m.daocloud.io"
quay_image_repo: "quay.m.daocloud.io"
github_image_repo: "ghcr.m.daocloud.io"
files_repo: "https://files.m.daocloud.io"
EOF



ansible-playbook -i inventory/mycluster/hosts.yaml --become --become-user=root cluster.yml
# If the run does not succeed, it can safely be re-run multiple times.
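
# When the playbook finishes, kubectl is installed on the control plane nodes. If running
# kubectl there as root complains about a missing kubeconfig, copy the kubeadm admin config
# into place -- a minimal sketch, assuming the standard kubeadm path:
mkdir -p ~/.kube && cp /etc/kubernetes/admin.conf ~/.kube/config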

1.10 Verify cluster availability from a master node

kubectl get nodes

kubectl get componentstatuses

kubectl get pods -A


[root@k8s-master01 ~]# kubectl create deployment demo-nginx-kubespray --image=nginx --replicas=2
deployment.apps/demo-nginx-kubespray created


[root@k8s-master01 ~]# kubectl get pods
NAME                                   READY   STATUS              RESTARTS   AGE
demo-nginx-kubespray-b65cf84cd-jzkzf   1/1     Running             0          16s
demo-nginx-kubespray-b65cf84cd-v2nv4   0/1     ContainerCreating   0          16s


[root@k8s-master01 ~]# kubectl expose deployment demo-nginx-kubespray --type NodePort --port=80
service/demo-nginx-kubespray exposed


[root@k8s-master01 ~]# kubectl get svc
NAME                   TYPE        CLUSTER-IP    EXTERNAL-IP   PORT(S)        AGE
demo-nginx-kubespray   NodePort    10.233.7.87   <none>        80:30532/TCP   4s
kubernetes             ClusterIP   10.233.0.1    <none>        443/TCP        16m


[root@k8s-master01 ~]# kubectl get  deployments.apps
NAME                   READY   UP-TO-DATE   AVAILABLE   AGE
demo-nginx-kubespray   2/2     2            2           116s


[root@k8s-master01 ~]# kubectl get pods
NAME                                   READY   STATUS    RESTARTS   AGE
demo-nginx-kubespray-b65cf84cd-jzkzf   1/1     Running   0          44s
demo-nginx-kubespray-b65cf84cd-v2nv4   1/1     Running   0          44s


[root@k8s-master01 ~]# kubectl get svc demo-nginx-kubespray
NAME                   TYPE       CLUSTER-IP    EXTERNAL-IP   PORT(S)        AGE
demo-nginx-kubespray   NodePort   10.233.7.87   <none>        80:30532/TCP   17s
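
# The service is reachable on any node IP at the assigned NodePort (30532 in the output above).
# A quick check from the shell, using the first master's IP from this guide:
curl -I http://192.168.6.220:30532    # expect an HTTP 200 response served by nginx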


1.11 Remove a node

No changes to hosts.yaml are required.

 ansible-playbook -i inventory/mycluster/hosts.yaml --become --become-user=root remove-node.yml -v -b --extra-vars "node=k8s-node-02"
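
# After the playbook completes, confirm from a master node that the node is gone:
kubectl get nodes    # k8s-node-02 should no longer be listed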

1.12 Add a node

hosts.yaml must be updated first: add the new node's entry to inventory/mycluster/hosts.yaml.

 
# k8s-node-03 has been added
[root@kubespray kubespray]# vim inventory/mycluster/hosts.yaml
all:
  hosts:
    k8s-master-01:
      ansible_host: 192.168.6.220
      ip: 192.168.6.220
      access_ip: 192.168.6.220
    k8s-master-02:
      ansible_host: 192.168.6.221
      ip: 192.168.6.221
      access_ip: 192.168.6.221
    k8s-master-03:
      ansible_host: 192.168.6.222
      ip: 192.168.6.222
      access_ip: 192.168.6.222
    k8s-node-01:
      ansible_host: 192.168.6.223
      ip: 192.168.6.223
      access_ip: 192.168.6.223
    k8s-node-02:
      ansible_host: 192.168.6.224
      ip: 192.168.6.224
      access_ip: 192.168.6.224
    k8s-node-03:
      ansible_host: 192.168.6.226
      ip: 192.168.6.226
      access_ip: 192.168.6.226
  children:
    kube_control_plane:
      hosts:
        k8s-master-01:
        k8s-master-02:
        k8s-master-03:
    kube_node:
      hosts:
        k8s-node-01:
        k8s-node-02:
        k8s-node-03:
    etcd:
      hosts:
        k8s-master-01:
        k8s-master-02:
        k8s-master-03:
    k8s_cluster:
      children:
        kube_control_plane:
        kube_node:
    calico_rr:
      hosts: {}

[root@kubespray kubespray]# ansible-playbook -i inventory/mycluster/hosts.yaml --become --become-user=root scale.yml -v -b
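
# Once scale.yml finishes, confirm from a master node that the new worker has joined:
kubectl get nodes -o wide    # k8s-node-03 should appear and reach the Ready state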

1.13 Tear down the K8s cluster

[root@kubespray ~]# cd kubespray/
[root@kubespray kubespray]# ansible-playbook -i inventory/mycluster/hosts.yaml  --become --become-user=root reset.yml
