Deploying a Kubernetes 1.20 Cluster with kubeadm


Preface

The k8s architecture model is not covered here.

kubeadm is a tool released by the official Kubernetes community for quickly deploying a Kubernetes cluster.
With it, a cluster can be stood up with two commands:
# Create a master node
$ kubeadm init
# Join a node to the cluster
$ kubeadm join <master node IP and port>

I. Environment Preparation

Single-master architecture

Role          IP
k8s-master1   192.168.131.128
k8s-node1     192.168.131.129
k8s-node2     192.168.131.130

Environment initialization

Disable the firewall
systemctl disable --now firewalld
systemctl disable --now iptables

Disable SELinux
sed -i 's/SELINUX=enforcing/SELINUX=disabled/' /etc/selinux/config
setenforce 0

Disable swap
swapoff -a  # turn swap off for the current boot
sed -ri 's/.*swap.*/#&/' /etc/fstab  # comment out the swap line so it stays off after reboot

Load kernel modules
Run lsmod | grep br_netfilter  # if it prints nothing, load the module manually with the commands below
modprobe br_netfilter
tee /etc/modules-load.d/k8s.conf <<EOF
br_netfilter
EOF
Allow iptables to inspect bridged traffic
# Write the settings to /etc/sysctl.d/k8s.conf, then load them
tee /etc/sysctl.d/k8s.conf << EOF
net.bridge.bridge-nf-call-iptables=1
net.bridge.bridge-nf-call-ip6tables=1
EOF
sysctl -p /etc/sysctl.d/k8s.conf
# Configure hosts
cat >> /etc/hosts << EOF
192.168.131.128 k8s-master1
192.168.131.129 k8s-node1
192.168.131.130 k8s-node2
EOF
Set the hostname on each machine
hostnamectl set-hostname <hostname>

Install and configure IPVS

# kube-proxy can use IPVS for in-cluster traffic forwarding (its default mode is iptables, so IPVS must be enabled explicitly in the kube-proxy configuration)
yum install -y ipset ipvsadm  # install the userspace tools
# Load the modules into the kernel
modprobe ip_vs
modprobe ip_vs_rr
modprobe ip_vs_wrr
modprobe ip_vs_sh
modprobe nf_conntrack
# Add the IPVS modules to /etc/modules-load.d/k8s.conf so they load automatically at boot.
root@k8s-master1:~ # more /etc/modules-load.d/k8s.conf
br_netfilter
ip_vs
ip_vs_rr
ip_vs_wrr
ip_vs_sh
nf_conntrack
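
Confirm the modules are actually loaded:
lsmod | grep -e ip_vs -e nf_conntrack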

# Kernel tuning (adjust to your environment)
net.ipv4.ip_forward=1
net.ipv4.tcp_tw_recycle=0
vm.swappiness=0  # avoid swapping before reclaiming memory
vm.overcommit_memory=1  # always allow memory overcommit
vm.panic_on_oom=0  # do not panic on OOM; let the OOM killer handle it
fs.inotify.max_user_instances=8192
fs.inotify.max_user_watches=1048576
fs.file-max=52706963
fs.nr_open=52706963
net.ipv6.conf.all.disable_ipv6=1
net.netfilter.nf_conntrack_max=2310720
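
These keys take effect only after being written to a sysctl file and reloaded; for example, append them to the /etc/sysctl.d/k8s.conf created earlier, then run:
sysctl --system  # reloads every file under /etc/sysctl.d/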

Deploy chrony for time synchronization

Reference: https://blog.csdn.net/u010674953/article/details/117701938
Use chrony to synchronize master1 with an Internet time server, then have the other servers synchronize their clocks from master1, as sketched below.
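
A minimal sketch of that layout (ntp.aliyun.com is only an example upstream; substitute any reachable NTP server):
# On k8s-master1, in /etc/chrony.conf:
#   server ntp.aliyun.com iburst    # sync master1 from the Internet
#   allow 192.168.131.0/24          # let the nodes sync from master1
# On k8s-node1 and k8s-node2, in /etc/chrony.conf:
#   server 192.168.131.128 iburst   # sync from master1
# Then on every host:
systemctl enable --now chronyd
chronyc sources  # verify the configured source is reachable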

II. Installing Docker


#Configure the Docker yum repository
sudo yum-config-manager \
--add-repo \
http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo

#Install pinned versions
yum install -y docker-ce-20.10.7 docker-ce-cli-20.10.7 containerd.io-1.4.6

#Configure the Docker daemon: registry mirrors, logging, and the systemd cgroup driver that kubelet expects
mkdir -p /etc/docker
tee /etc/docker/daemon.json <<-'EOF'
{
        "exec-opts": ["native.cgroupdriver=systemd"],
        "registry-mirrors": [
                "http://hub-mirror.c.163.com/",
                "https://docker.mirrors.ustc.edu.cn/",
                "https://registry.docker-cn.com"
        ],
        "storage-driver": "overlay2",
        "storage-opts": ["overlay2.override_kernel_check=true"],
        "log-driver": "json-file",
        "log-opts": {
                "max-size": "100m",
                "max-file": "10"
        },
        "log-level": "debug"
}
EOF

# Enable and start Docker
systemctl enable docker
systemctl start docker
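
Before moving on, confirm the daemon picked up the systemd cgroup driver, since kubeadm expects kubelet and Docker to agree on it:
docker info | grep -i "cgroup driver"  # should print: Cgroup Driver: systemd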

III. Deploying the Base Components

Add the Kubernetes yum repository

tee /etc/yum.repos.d/kubernetes.repo << EOF
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
EOF

1. Install kubeadm, kubelet, and kubectl

# Install on all nodes
yum install kubeadm-1.20.15 kubelet-1.20.15 kubectl-1.20.15 -y
# Enable kubelet at boot (kubeadm init warns if it is not enabled)
systemctl enable kubelet
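
A quick check that the pinned versions landed (expected output in comments):
kubeadm version -o short  # v1.20.15
kubelet --version         # Kubernetes v1.20.15
kubectl version --client --short  # Client Version: v1.20.15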

2. Cluster initialization

Pick one master node and run the initialization there. By default kubeadm pulls its images from k8s.gcr.io, which is unreachable from mainland China, so the required images can first be downloaded from the Aliyun mirror registry.

List the images and versions the installation needs

root@k8s-master1:~ # kubeadm config images list
I0510 21:47:45.036406   38396 version.go:254] remote version is much newer: v1.24.0; falling back to: stable-1.20
k8s.gcr.io/kube-apiserver:v1.20.15
k8s.gcr.io/kube-controller-manager:v1.20.15
k8s.gcr.io/kube-scheduler:v1.20.15
k8s.gcr.io/kube-proxy:v1.20.15
k8s.gcr.io/pause:3.2
k8s.gcr.io/etcd:3.4.13-0
k8s.gcr.io/coredns:1.7.0

Run a script to download the images
root@k8s-master1:/opt/kubernetes # cat k8simages.sh 
#!/bin/bash
# Pull each image from the Aliyun mirror, retag it as k8s.gcr.io, then drop the mirror tag
for i in k8s.gcr.io/kube-apiserver:v1.20.15 k8s.gcr.io/kube-controller-manager:v1.20.15 \
    k8s.gcr.io/kube-scheduler:v1.20.15 k8s.gcr.io/kube-proxy:v1.20.15 k8s.gcr.io/pause:3.2 \
    k8s.gcr.io/etcd:3.4.13-0 k8s.gcr.io/coredns:1.7.0; do
   temp=${i#k8s.gcr.io/}
   docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/${temp}
   docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/${temp} k8s.gcr.io/${temp}
   docker rmi registry.cn-hangzhou.aliyuncs.com/google_containers/${temp}
done
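
After the script finishes, all seven images should be tagged locally under the k8s.gcr.io name:
docker images | grep k8s.gcr.io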

Initialize the cluster

kubeadm init \
--apiserver-advertise-address=192.168.131.128 \
--kubernetes-version v1.20.15 \
--service-cidr=10.96.0.0/12 \
--pod-network-cidr=10.244.0.0/16 \
--service-dns-domain=cluster.local \
--upload-certs
#Specify the image repository (I pre-pulled the images, so the default is fine here; if you did not pre-pull, this flag is required):
#--image-repository registry.cn-hangzhou.aliyuncs.com/google_containers
.........

To start using your cluster, you need to run the following as a regular user:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/
You can now join any number of the control-plane node running the following command on each as root:
Please note that the certificate-key gives access to cluster sensitive data, keep it secret!
As a safeguard, uploaded-certs will be deleted in two hours; If necessary, you can use
"kubeadm init phase upload-certs --upload-certs" to reload certs afterward.
Then you can join any number of worker nodes by running the following on each as root:
kubeadm join 192.168.131.128:6443 --token x3hbaf.cu55ms3dy7xyzi12 \
    --discovery-token-ca-cert-hash sha256:6d42d48700fd4c331eec891d552c0e66e88c168bfdd2c00d692feed7efda297f
[root@k8s-master1 kubernetes]#

# Continue with the following (also run it on any machine where you want to use kubectl)
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
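
With the kubeconfig in place, kubectl should respond (the master stays NotReady until the network plugin is installed in section IV):
kubectl get nodes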

A problem encountered
After the nodes joined the cluster and status was checked, the scheduler and controller-manager components reported Unhealthy with the errors below.

[root@k8s-master1 ~]# kubectl get cs
Warning: v1 ComponentStatus is deprecated in v1.19+
NAME                 STATUS      MESSAGE                                                                                       ERROR
scheduler            Unhealthy   Get "http://127.0.0.1:10251/healthz": dial tcp 127.0.0.1:10251: connect: connection refused   
controller-manager   Unhealthy   Get "http://127.0.0.1:10252/healthz": dial tcp 127.0.0.1:10252: connect: connection refused   
etcd-0               Healthy     {"health":"true"} 
# Fix
# Both components start with --port=0 by default, which disables the insecure health endpoints probed here; comment the flag out in the static pod manifests
vim /etc/kubernetes/manifests/kube-controller-manager.yaml 
 #- --port=0
vim /etc/kubernetes/manifests/kube-scheduler.yaml 
 #- --port=0
# Restart the kubelet service
systemctl restart kubelet

# Check the status again
[root@k8s-master1 ~]# kubectl get cs
Warning: v1 ComponentStatus is deprecated in v1.19+
NAME                 STATUS    MESSAGE             ERROR
controller-manager   Healthy   ok                  
scheduler            Healthy   ok                  
etcd-0               Healthy   {"health":"true"}  

Join the worker nodes to the cluster

If no image repository was specified at init time, first pull the pause image on each node (a sketch follows); without it the kube-proxy pod cannot start.
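A minimal sketch, reusing the same Aliyun mirror as the k8simages.sh script above:
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.2
docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.2 k8s.gcr.io/pause:3.2
docker rmi registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.2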
Then run the join command on each node:

kubeadm join 192.168.131.128:6443 --token x3hbaf.cu55ms3dy7xyzi12 \
    --discovery-token-ca-cert-hash sha256:6d42d48700fd4c331eec891d552c0e66e88c168bfdd2c00d692feed7efda297f
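
If the token from kubeadm init has expired (tokens are valid for 24 hours by default), generate a fresh join command on the master:
kubeadm token create --print-join-command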

#Now check the node status
root@k8s-master1:/opt/kubernetes # kubectl get nodes
NAME          STATUS   ROLES                  AGE   VERSION
k8s-master1   NotReady    control-plane,master   1h   v1.20.15
k8s-node1     NotReady    <none>                 1h   v1.20.15
k8s-node2     NotReady    <none>                 1h   v1.20.15
# Normal at this stage
All nodes show NotReady because the network plugin has not been deployed yet.

IV. Deploying the Network Plugin

Install the Calico plugin

# Applying on the master is enough; the Calico DaemonSet schedules the agent onto the other nodes automatically
# Download the calico.yaml file
wget https://docs.projectcalico.org/v3.8/manifests/calico.yaml
# After downloading, change the pod network CIDR; the default is the 192.168.0.0/16 range

            # The default IPv4 pool to create on startup if none exists. Pod IPs will be
            # chosen from this range. Changing this value after installation will have
            # no effect. This should fall within `--cluster-cidr`.
            - name: CALICO_IPV4POOL_CIDR
              value: "10.244.0.0/16"  	# 修改
            # Disable file logging so `kubectl logs` works.
            - name: CALICO_DISABLE_FILE_LOGGING
              value: "true"
            # Set Felix endpoint to host default action to ACCEPT.
            - name: FELIX_DEFAULTENDPOINTTOHOSTACTION
              value: "ACCEPT"
            # Disable IPv6 on Kubernetes.
            - name: FELIX_IPV6SUPPORT
              value: "false"
            # Set Felix logging to "info"
            - name: FELIX_LOGSEVERITYSCREEN
              value: "info"
# Deploy
kubectl apply -f calico.yaml

Check that all pods are running

root@k8s-master1:/opt/kubernetes # kubectl get pods -n kube-system -o wide
NAME                                      READY   STATUS    RESTARTS   AGE   IP                NODE          NOMINATED NODE   READINESS GATES
calico-kube-controllers-bcc6f659f-vz8h9   1/1     Running   1          8h    10.244.159.133    k8s-master1   <none>           <none>
calico-node-5spdb                         1/1     Running   1          8h    192.168.131.128   k8s-master1   <none>           <none>
calico-node-d2jsk                         1/1     Running   1          8h    192.168.131.130   k8s-node2     <none>           <none>
calico-node-nw9vg                         1/1     Running   1          8h    192.168.131.129   k8s-node1     <none>           <none>
coredns-74ff55c5b-gxnlv                   1/1     Running   1          10h   10.244.159.134    k8s-master1   <none>           <none>
coredns-74ff55c5b-qbzzf                   1/1     Running   1          10h   10.244.159.132    k8s-master1   <none>           <none>
etcd-k8s-master1                          1/1     Running   1          10h   192.168.131.128   k8s-master1   <none>           <none>
kube-apiserver-k8s-master1                1/1     Running   1          10h   192.168.131.128   k8s-master1   <none>           <none>
kube-controller-manager-k8s-master1       1/1     Running   1          8h    192.168.131.128   k8s-master1   <none>           <none>
kube-proxy-nwzzw                          1/1     Running   1          10h   192.168.131.128   k8s-master1   <none>           <none>
kube-proxy-qv9kh                          1/1     Running   1          10h   192.168.131.129   k8s-node1     <none>           <none>
kube-proxy-vpxjl                          1/1     Running   1          10h   192.168.131.130   k8s-node2     <none>           <none>
kube-scheduler-k8s-master1                1/1     Running   1          8h    192.168.131.128   k8s-master1   <none>           <none>
root@k8s-master1:/opt/kubernetes # 

# Verify: deploy nginx and expose it with a NodePort service
kubectl create deployment nginx --image=nginx:1.20.1-alpine
kubectl expose deployment/nginx --name=nginx --port=80 --type=NodePort

root@k8s-master1:/opt/kubernetes # kubectl get service
NAME         TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)        AGE
kubernetes   ClusterIP   10.96.0.1       <none>        443/TCP        10h
nginx        NodePort    10.98.254.107   <none>        80:30545/TCP   28s

#If an external browser can reach http://192.168.131.128:30545/ and shows the nginx welcome page, everything works
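
The same check from a shell, without a browser (any node IP works for a NodePort service):
curl -I http://192.168.131.128:30545/  # expect HTTP/1.1 200 OK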

Convenience tweaks

# Set up kubectl command auto-completion
yum install -y bash-completion
echo "source /usr/share/bash-completion/bash_completion" >> ~/.bashrc
echo "source <(kubectl completion bash)" >> ~/.bashrc
source ~/.bashrc
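
Optionally add a short alias that reuses the same completion (__start_kubectl is the function defined by the completion script loaded above):
echo "alias k=kubectl" >> ~/.bashrc
echo "complete -o default -F __start_kubectl k" >> ~/.bashrc
source ~/.bashrc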

This completes the most basic Kubernetes cluster deployed with kubeadm.
Deploying the other Kubernetes components (metrics-server, ingress, dashboard, and so on) and scaling out the masters will be covered later.
