Building a k8s Container Cluster Management System with kubeadm

Contents

1 Environment Preparation

1.1 Cluster types

1.2 Installation methods

1.3 Host planning

1.4 Environment checks

2 Cluster Setup

2.1 Install Docker

2.2 Install kubernetes

2.3 Prepare the cluster images

2.4 Cluster initialization

3 Adding Nodes

3.1 Install a network plugin

4 Testing the Cluster

1 Environment Preparation

1.1 Cluster types

  • one master, multiple workers: for test environments
  • multiple masters, multiple workers: highly available, for production environments (several master and several node machines)

Note: to keep the test simple, this guide builds a one-master, two-worker cluster

1.2 Installation methods

kubernetes can be deployed in several ways; the mainstream options today are kubeadm, minikube, and binary packages

  • minikube: a tool for quickly standing up a single-node kubernetes
  • kubeadm: a tool for quickly standing up a kubernetes cluster
  • binary packages: download each component's binary from the official site and install them one by one; good for understanding how the kubernetes components fit together

1.3 Host planning

Host      IP
master    192.168.1.105
server1   192.168.1.106
server2   192.168.1.107

1.4 Environment checks

1. Check the CentOS version (must be 7.5 or later)
[root@server1 ~]# cat /etc/redhat-release
CentOS Linux release 7.9.2009 (Core)

2. Hostname resolution
[root@master ~]# cat /etc/hosts
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
 192.168.1.105 master
 192.168.1.106 server1
 192.168.1.107 server2

3. Time synchronization
[root@master ~]# ntpdate -u cn.pool.ntp.org
 8 Feb 20:53:34 ntpdate[18932]: adjust time server 139.199.215.251 offset -0.002730 sec
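
ntpdate is a one-shot adjustment; for an ongoing sync, one option is to enable chronyd, which ships with CentOS 7:
[root@master ~]# systemctl enable chronyd
[root@master ~]# systemctl start chronyd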

4. Disable the firewall, iptables, and SELinux
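For example, on CentOS 7 this can be done with the following (run on every node):
[root@master ~]# systemctl stop firewalld && systemctl disable firewalld
[root@master ~]# setenforce 0   ## permissive for the current session
[root@master ~]# sed -i 's/^SELINUX=enforcing/SELINUX=disabled/' /etc/selinux/config   ## persists across reboots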

5. Disable the swap partition
[root@master ~]# vim /etc/fstab 
Comment out the line that contains swap  ## disables swap permanently
[root@server1 ~]# mount -a
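
Editing fstab only takes effect after a reboot; swap can also be switched off immediately:
[root@master ~]# swapoff -a   ## disable all active swap for the current session
[root@master ~]# free -h      ## verify: the Swap line should read 0B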

6. Adjust the Linux kernel parameters
[root@master ~]# vim /etc/sysctl.d/kubernetes.conf
Add the following:
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1

[root@master ~]# modprobe br_netfilter  # load the bridge-netfilter module first; without it the bridge-nf keys above do not exist
[root@master ~]# sysctl --system        # reload all config files, including those under /etc/sysctl.d/
[root@master ~]# lsmod | grep br_netfilter  # check which modules are now loaded
br_netfilter           22256  0 
bridge                151336  1 br_netfilter

7. Enable IPVS support

 7.1 Install ipset and ipvsadm
[root@master ~]# yum -y install ipset ipvsadm
Create /etc/sysconfig/modules/ipvs.modules with the following content:
#!/bin/bash
modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
modprobe -- nf_conntrack_ipv4
 7.2 Make the script executable and run it
[root@master ~]# chmod +x /etc/sysconfig/modules/ipvs.modules
[root@master ~]# /bin/bash /etc/sysconfig/modules/ipvs.modules
 7.3 Check that the modules loaded
[root@master ~]# lsmod | grep -e ip_vs -e nf_conntrack_ipv4
ip_vs_sh               12688  0 
ip_vs_wrr              12697  0 
ip_vs_rr               12600  0 
ip_vs                 145458  6 ip_vs_rr,ip_vs_sh,ip_vs_wrr
nf_conntrack_ipv4      15053  2 

8. Complete all the steps above on every server, then reboot.
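
After the reboot, a quick sanity check that the settings persisted:
[root@master ~]# free -h                      ## Swap should read 0B
[root@master ~]# getenforce                   ## should print Disabled
[root@master ~]# sysctl net.ipv4.ip_forward   ## should print net.ipv4.ip_forward = 1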



2 Cluster Setup

2.1 Install Docker


# Step 1: install the prerequisite system tools
sudo yum install -y yum-utils device-mapper-persistent-data lvm2
# Step 2: add the repository
sudo yum-config-manager --add-repo https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
# Step 3: point the repo at the Aliyun mirror
sudo sed -i 's+download.docker.com+mirrors.aliyun.com/docker-ce+' /etc/yum.repos.d/docker-ce.repo
# Step 4: refresh the cache and install Docker CE
sudo yum makecache fast
sudo yum -y install docker-ce
# Step 5: start the Docker service
sudo service docker start
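
# Step 6: the steps above do not enable Docker at boot; since every node gets rebooted, add:
sudo systemctl enable docker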

# Note:
# The official repo only enables the stable channel by default; the other channels can be enabled by editing the repo file. For example, the test channel is disabled out of the box and can be switched on as follows (the same applies to the other channels):
# vim /etc/yum.repos.d/docker-ce.repo
#   under [docker-ce-test], change enabled=0 to enabled=1
#
# Installing a specific version of Docker CE:
# Step 1: list the available versions:
# yum list docker-ce.x86_64 --showduplicates | sort -r
#   Loading mirror speeds from cached hostfile
#   Loaded plugins: branch, fastestmirror, langpacks
#   docker-ce.x86_64            17.03.1.ce-1.el7.centos            docker-ce-stable
#   docker-ce.x86_64            17.03.1.ce-1.el7.centos            @docker-ce-stable
#   docker-ce.x86_64            17.03.0.ce-1.el7.centos            docker-ce-stable
#   Available Packages
# Step 2: install the chosen version (VERSION is e.g. 17.03.0.ce-1.el7.centos from the list above):
# sudo yum -y install docker-ce-[VERSION]
2. Add a Docker configuration file

# Docker uses the cgroupfs driver by default, while kubernetes recommends systemd

[root@master ~]# cat <<EOF > /etc/docker/daemon.json
> {
> "exec-opts": ["native.cgroupdriver=systemd"],
> "registry-mirrors": ["https://620kdbvq.mirror.aliyuncs.com"]
> }
> EOF

3. Restart Docker and verify
[root@master ~]# systemctl restart docker
[root@master ~]# docker info -f '{{.CgroupDriver}}'   ## should print systemd
[root@master ~]# docker --version
Docker version 23.0.0, build e92dd87

2.2 Install kubernetes

# The kubernetes packages are hosted abroad and download slowly, so switch to a domestic mirror

1. Add the repository

[root@server1 ~]# cat <<EOF > /etc/yum.repos.d/kubernetes.repo
> [kubernetes]
> name=Kubernetes
> baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
> enabled=1
> gpgcheck=1
> repo_gpgcheck=1
> gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
> EOF



[root@master ~]# yum install -y kubelet kubeadm kubectl

[root@master ~]# systemctl enable kubelet  ## enable only; the cluster starts it automatically
Note: the upstream does not expose a sync mechanism, so the GPG index check may fail; in that case install with yum install -y --nogpgcheck kubelet kubeadm kubectl
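
Since the control plane below is initialized at v1.23.6 (see 2.4) and versions from 1.24 on no longer work with Docker, it is safer to pin the packages rather than take the latest, assuming the mirror still carries these builds:
[root@master ~]# yum install -y kubelet-1.23.6 kubeadm-1.23.6 kubectl-1.23.6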
2. Configure the kubelet cgroup driver
[root@master ~]# vim /etc/sysconfig/kubelet

KUBELET_EXTRA_ARGS="--cgroup-driver=systemd"
KUBE_PROXY_MODE="ipvs"

2.3 Prepare the cluster images

## the images must be ready before the cluster is installed
1. Check which images are required
[root@master ~]# kubeadm config images list  ## list the required images
k8s.gcr.io/kube-apiserver:v1.23.6
k8s.gcr.io/kube-controller-manager:v1.23.6
k8s.gcr.io/kube-scheduler:v1.23.6
k8s.gcr.io/kube-proxy:v1.23.6
k8s.gcr.io/pause:3.6
k8s.gcr.io/etcd:3.5.1-0
k8s.gcr.io/coredns/coredns:v1.8.6


2. Write a script to batch-pull the images (the tags must match the v1.23.6 list above)
[root@master ~]# cat 2.sh
#!/bin/bash
# pull each image from the Aliyun mirror, re-tag it under the k8s.gcr.io name kubeadm expects, then drop the mirror tag
images=(
        kube-apiserver:v1.23.6
        kube-controller-manager:v1.23.6
        kube-scheduler:v1.23.6
        kube-proxy:v1.23.6
        pause:3.6
        etcd:3.5.1-0
        coredns:v1.8.6
)

for imagesName in ${images[@]} ; do
        docker pull registry.aliyuncs.com/google_containers/$imagesName
        docker tag registry.aliyuncs.com/google_containers/$imagesName  k8s.gcr.io/$imagesName
        docker rmi registry.aliyuncs.com/google_containers/$imagesName
done

# kubeadm lists coredns as k8s.gcr.io/coredns/coredns (see the list above), so give it that extra tag
docker tag k8s.gcr.io/coredns:v1.8.6 k8s.gcr.io/coredns/coredns:v1.8.6
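
Run it and check that the images landed locally (assuming the script was saved as 2.sh, as shown):
[root@master ~]# bash 2.sh
[root@master ~]# docker images | grep k8s.gcr.io   ## every image from the list should appear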



2.4 Cluster initialization

# The initialization command itself only needs to run on the master node

[root@master ~]# echo "1" >/proc/sys/net/bridge/bridge-nf-call-iptables
## this setting is needed on every node

[root@master ~]# vim /etc/containerd/config.toml
## comment out the line disabled_plugins = ["cri"], then restart containerd (systemctl restart containerd)

A hard-to-debug pitfall: the kubernetes version must be below 1.24, otherwise initialization keeps failing; v1.24 removed the dockershim, so newer kubelets cannot use Docker as the container runtime directly.

1. Initialize the cluster

[root@master ~]# cat 1.sh
#!/bin/bash
kubeadm init --kubernetes-version=v1.23.6  --pod-network-cidr=10.244.0.0/16 --service-cidr=10.96.0.0/12  --image-repository  registry.aliyuncs.com/google_containers  --apiserver-advertise-address=192.168.1.105 

Seeing "Your Kubernetes control-plane has initialized successfully!" means the init worked. Note that --pod-network-cidr=10.244.0.0/16 matches the Network value in the flannel manifest applied later, and --apiserver-advertise-address is the master's own IP.

2. Read the initialization output

## To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.1.105:6443 --token kq0v21.y3jmc63lrk1yys0z \
	--discovery-token-ca-cert-hash sha256:847af937d79791a70de9cecd8ed080444bbb630474c2d4faafbe87ba6a59f080 

3 Adding Nodes

1. Run the join command on each worker
[root@server1 ~]# kubeadm join 192.168.1.105:6443 --token kq0v21.y3jmc63lrk1yys0z --discovery-token-ca-cert-hash sha256:847af937d79791a70de9cecd8ed080444bbb630474c2d4faafbe87ba6a59f080 
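
Tokens expire after 24 hours by default; if the join fails with an expired token, print a fresh join command on the master:
[root@master ~]# kubeadm token create --print-join-command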

2. Check the nodes from the master
[root@master ~]# kubectl get nodes
NAME      STATUS     ROLES                  AGE     VERSION
master    NotReady   control-plane,master   23m     v1.23.6
server1   NotReady   <none>                 4m42s   v1.23.6
server2   NotReady   <none>                 4m43s   v1.23.6



## all nodes are NotReady because no pod network is installed yet

3.1 Install a network plugin

  • kubernetes supports several network plugins, e.g. flannel, calico, etc.
  • the following steps only need to run on the master node
[root@master ~]# cat kube-flannel.yml 
---
kind: Namespace
apiVersion: v1
metadata:
  name: kube-flannel
  labels:
    pod-security.kubernetes.io/enforce: privileged
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: flannel
rules:
- apiGroups:
  - ""
  resources:
  - pods
  verbs:
  - get
- apiGroups:
  - ""
  resources:
  - nodes
  verbs:
  - get
  - list
  - watch
- apiGroups:
  - ""
  resources:
  - nodes/status
  verbs:
  - patch
- apiGroups:
  - "networking.k8s.io"
  resources:
  - clustercidrs
  verbs:
  - list
  - watch
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: flannel
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: flannel
subjects:
- kind: ServiceAccount
  name: flannel
  namespace: kube-flannel
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: flannel
  namespace: kube-flannel
---
kind: ConfigMap
apiVersion: v1
metadata:
  name: kube-flannel-cfg
  namespace: kube-flannel
  labels:
    tier: node
    app: flannel
data:
  cni-conf.json: |
    {
      "name": "cbr0",
      "cniVersion": "0.3.1",
      "plugins": [
        {
          "type": "flannel",
          "delegate": {
            "hairpinMode": true,
            "isDefaultGateway": true
          }
        },
        {
          "type": "portmap",
          "capabilities": {
            "portMappings": true
          }
        }
      ]
    }
  net-conf.json: |
    {
      "Network": "10.244.0.0/16",
      "Backend": {
        "Type": "vxlan"
      }
    }
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: kube-flannel-ds
  namespace: kube-flannel
  labels:
    tier: node
    app: flannel
spec:
  selector:
    matchLabels:
      app: flannel
  template:
    metadata:
      labels:
        tier: node
        app: flannel
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: kubernetes.io/os
                operator: In
                values:
                - linux
      hostNetwork: true
      priorityClassName: system-node-critical
      tolerations:
      - operator: Exists
        effect: NoSchedule
      serviceAccountName: flannel
      initContainers:
      - name: install-cni-plugin
        image: docker.io/flannel/flannel-cni-plugin:v1.1.2
       #image: docker.io/rancher/mirrored-flannelcni-flannel-cni-plugin:v1.1.2
        command:
        - cp
        args:
        - -f
        - /flannel
        - /opt/cni/bin/flannel
        volumeMounts:
        - name: cni-plugin
          mountPath: /opt/cni/bin
      - name: install-cni
        image: docker.io/flannel/flannel:v0.20.2
       #image: docker.io/rancher/mirrored-flannelcni-flannel:v0.20.2
        command:
        - cp
        args:
        - -f
        - /etc/kube-flannel/cni-conf.json
        - /etc/cni/net.d/10-flannel.conflist
        volumeMounts:
        - name: cni
          mountPath: /etc/cni/net.d
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      containers:
      - name: kube-flannel
        image: docker.io/flannel/flannel:v0.20.2
       #image: docker.io/rancher/mirrored-flannelcni-flannel:v0.20.2
        command:
        - /opt/bin/flanneld
        args:
        - --ip-masq
        - --kube-subnet-mgr
        resources:
          requests:
            cpu: "100m"
            memory: "50Mi"
        securityContext:
          privileged: false
          capabilities:
            add: ["NET_ADMIN", "NET_RAW"]
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        - name: EVENT_QUEUE_DEPTH
          value: "5000"
        volumeMounts:
        - name: run
          mountPath: /run/flannel
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
        - name: xtables-lock
          mountPath: /run/xtables.lock
      volumes:
      - name: run
        hostPath:
          path: /run/flannel
      - name: cni-plugin
        hostPath:
          path: /opt/cni/bin
      - name: cni
        hostPath:
          path: /etc/cni/net.d
      - name: flannel-cfg
        configMap:
          name: kube-flannel-cfg
      - name: xtables-lock
        hostPath:
          path: /run/xtables.lock
          type: FileOrCreate

This manifest is long; it can be copied wholesale.

[root@master ~]# kubectl apply -f kube-flannel.yml


## wait a while, then all nodes show Ready
[root@master ~]# kubectl get nodes
NAME      STATUS   ROLES                  AGE   VERSION
master    Ready    control-plane,master   64m   v1.23.6
server1   Ready    <none>                 45m   v1.23.6
server2   Ready    <none>                 45m   v1.23.6
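
The flannel DaemonSet itself can be watched while waiting; it runs in the kube-flannel namespace created by the manifest:
[root@master ~]# kubectl get pods -n kube-flannel   ## expect one kube-flannel-ds pod per node, all Running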

4 Testing the Cluster

1. Create an nginx pod
[root@master ~]# kubectl create deployment nginx --image=nginx
deployment.apps/nginx created

2. Expose the port
[root@master ~]# kubectl expose deployment nginx --port=80 --type=NodePort
service/nginx exposed

3. Check the service status

[root@master ~]# kubectl get pod
NAME                     READY   STATUS              RESTARTS   AGE
nginx-85b98978db-ndcpk   0/1     ContainerCreating   0          106s
## the pod has not started; the flannel subnet file has to be sent to every node
[root@master flannel]# scp /run/flannel/* root@server1:/run/flannel/
subnet.env                                        100%   96   210.1KB/s   00:00
[root@master flannel]# kubectl get pods  ## healthy now
NAME                     READY   STATUS    RESTARTS   AGE
nginx-85b98978db-zg9s6   1/1     Running   0          3m51s
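
server2 needs the same file, or pods scheduled there will hit the same problem:
[root@master flannel]# scp /run/flannel/* root@server2:/run/flannel/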

[root@master ~]# kubectl get svc

NAME         TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)        AGE
kubernetes   ClusterIP   10.96.0.1      <none>        443/TCP        69m
nginx        NodePort    10.105.1.184   <none>        80:30744/TCP   24s
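
As a final check, the NodePort (30744 in the output above) should serve nginx from any node's IP:
[root@master ~]# curl http://192.168.1.105:30744   ## expect the "Welcome to nginx!" page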

 
