(2) Installing Kubernetes

Environment Planning

Kubernetes clusters broadly fall into two categories: single-master and multi-master:
Single master, multiple nodes: one master node plus several worker nodes; simple to set up, but the master is a single point of failure; generally used for test environments
Multiple masters, multiple nodes: several master nodes plus several worker nodes; more complex to set up, but more resilient; used for production environments

Installation Methods

There are three ways to install Kubernetes: minikube, kubeadm, or binary packages.
minikube: a tool for quickly standing up a single-node Kubernetes instance (not recommended)
kubeadm: a tool for quickly bootstrapping a full Kubernetes cluster
Binary packages: download the binaries from the official site and install them one by one; this approach is more involved, but it helps in understanding how Kubernetes fits together

We will use the kubeadm approach here.

Host Planning

No.   Host IP           Node type   OS           Spec
1     192.168.100.100   master      CentOS 7.6   2 CPU, 3 GB RAM, 20 GB disk
2     192.168.100.101   node1       CentOS 7.6   2 CPU, 3 GB RAM, 20 GB disk
3     192.168.100.102   node2       CentOS 7.6   2 CPU, 3 GB RAM, 20 GB disk

Environment Setup

Provisioning the machines themselves is not covered here; search online if you need a walkthrough.

Environment Initialization

  1. Check the operating system version
[root@k8s-master ~]# cat /etc/redhat-release
CentOS Linux release 7.6.1810 (Core)
  2. Add hostname entries
cat >> /etc/hosts <<EOF
192.168.100.100 master
192.168.100.101 node1
192.168.100.102 node2
EOF
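
To confirm the new entries resolve, you can ping each host by name (a quick sanity check):

ping -c 1 node1
ping -c 1 node2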
  3. Time synchronization
# start the time-synchronization service
systemctl start chronyd
# start it at boot as well
systemctl enable chronyd
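
To verify that the clock is actually being synchronized, chrony's status can be queried (assuming chronyd started cleanly):

# show the configured time sources and their sync state
chronyc sources
date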

  4. Disable the firewall
CentOS 6 uses iptables; CentOS 7 uses firewalld.

# kubernetes and docker generate many iptables rules of their own; to keep them from getting tangled with the system's rules, simply turn the system firewall off
# stop the firewall
systemctl stop firewalld
# keep it from starting at boot
systemctl disable firewalld

# stop the iptables service
systemctl stop iptables
# keep it from starting at boot
systemctl disable iptables
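
A quick way to confirm the firewall is down (on CentOS 7 the iptables service usually does not exist, so the commands for it may simply report that):

firewall-cmd --state     # should print "not running"
systemctl is-enabled firewalld     # should print disabled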
  5. Disable SELinux
# edit the /etc/selinux/config file and set SELINUX to disabled
# a reboot is required for the change to take effect
SELINUX=disabled
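
To put SELinux into permissive mode for the current session without waiting for the reboot (the config change above still applies from the next boot):

setenforce 0
getenforce     # should now print Permissive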
  6. Disable the swap partition
# open /etc/fstab and comment out the line for the swap partition
# note: a reboot is required for the change to take effect

/dev/mapper/centos-root /                       xfs     defaults        0 0
UUID=dfba36c7-cbf4-4e53-9a2c-af7c1ea381e7 /boot                   xfs     defaults        0 0
#/dev/mapper/centos-swap swap                    swap    defaults        0 0
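
Swap can also be switched off immediately, without waiting for the reboot (the fstab change keeps it off across reboots):

swapoff -a
free -m     # the Swap row should now show 0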
  7. Adjust Linux kernel parameters
# add bridge filtering and IP forwarding support
# edit /etc/sysctl.d/kubernetes.conf and add the following settings
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1

# load the br_netfilter module first; the net.bridge.* keys above only exist once it is loaded
[root@master ~]# modprobe br_netfilter

# verify the bridge-filter module is loaded
[root@master ~]# lsmod | grep br_netfilter

# then reload the configuration (a bare sysctl -p only reads /etc/sysctl.conf, so pass our file explicitly)
[root@master ~]# sysctl -p /etc/sysctl.d/kubernetes.conf
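
A quick check that the settings are in effect (each should print 1):

sysctl net.bridge.bridge-nf-call-iptables net.ipv4.ip_forward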
  8. Enable ipvs support
    In Kubernetes, kube-proxy supports two Service proxy modes: one based on iptables and one based on ipvs. Of the two, ipvs performs better, but using it requires loading the ipvs kernel modules by hand.
#1. install the ipset and ipvsadm utilities
yum install ipset ipvsadm -y

#2. write the modules that need loading into a script file
cat > /etc/sysconfig/modules/ipvs.modules <<EOF
#! /bin/bash
modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
modprobe -- nf_conntrack_ipv4
EOF

#3. make the script executable
chmod +x /etc/sysconfig/modules/ipvs.modules

#4. run the script
/bin/bash /etc/sysconfig/modules/ipvs.modules

#5. verify the modules loaded successfully
lsmod | grep -e ip_vs -e nf_conntrack_ipv4
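
Later, once the cluster is running with kube-proxy in ipvs mode, the rule table can be inspected with ipvsadm (it will still be empty at this stage):

ipvsadm -Ln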
  9. Reboot the servers so the SELinux and swap changes take effect

Installing Docker

  1. Switch to the Aliyun yum repository
wget https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo -O /etc/yum.repos.d/docker-ce.repo
  2. List the Docker versions the repository provides
yum list docker-ce --showduplicates
  3. Install a specific version of docker-ce. You must pass --setopt=obsoletes=0; otherwise yum installs the latest version instead
yum install --setopt=obsoletes=0 docker-ce-18.06.3.ce-3.el7 -y
  4. Add a configuration file. By default Docker uses cgroupfs as its Cgroup Driver, while Kubernetes recommends systemd instead
mkdir /etc/docker
cat > /etc/docker/daemon.json <<EOF
{
    "exec-opts": ["native.cgroupdriver=systemd"],
    "registry-mirrors": ["https://kn0t2bca.mirror.aliyuncs.com"]
}
EOF
  5. Start Docker
systemctl restart docker
systemctl enable docker
  6. Check Docker's status and version
docker version
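
You can also confirm that the cgroup driver setting from daemon.json took effect (assuming Docker is now running):

docker info | grep -i "cgroup driver"     # should show: systemd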

Installing the Kubernetes Components

  1. Add a Kubernetes package repository
    The foreign repository is slow, unstable, and may be unreachable, so switch to a domestic mirror. Create the /etc/yum.repos.d/kubernetes.repo file with the following content:
cat > /etc/yum.repos.d/kubernetes.repo << EOF
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
  2. Install kubeadm, kubelet, and kubectl
    Since new versions are released frequently, pin the version number explicitly:
yum install --setopt=obsoletes=0 -y kubelet-1.18.17 kubeadm-1.18.17 kubectl-1.18.17
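
A quick way to confirm the pinned versions landed:

kubeadm version -o short     # should print v1.18.17
kubelet --version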
  3. Configure kubelet's cgroup driver
    To keep kubelet's cgroup driver consistent with the one Docker uses, edit the "/etc/sysconfig/kubelet" file to read:
KUBELET_CGROUP_ARGS="--cgroup-driver=systemd"
KUBE_PROXY_MODE="ipvs"
  4. Set kubelet to start on boot
systemctl enable kubelet
There is no need to start kubelet by hand; it will keep restarting until kubeadm init (or kubeadm join) runs, which is expected at this stage.

Preparing the Cluster Images

While kubeadm runs, it automatically downloads the Kubernetes component images under the hood. Because those images are blocked, the download fails, so we fetch equivalents from Alibaba Cloud ahead of time; with the images staged locally, the Kubernetes cluster installs smoothly.

  1. Before installing Kubernetes, the images the cluster needs must be prepared in advance. The required images can be listed with the following command:
kubeadm config images list

The command produces output like the following:

[root@node1 ~]# kubeadm config images list
I0326 18:46:59.015283   32344 version.go:252] remote version is much newer: v1.20.5; falling back to: stable-1.18
W0326 18:47:05.964186   32344 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
k8s.gcr.io/kube-apiserver:v1.18.17
k8s.gcr.io/kube-controller-manager:v1.18.17
k8s.gcr.io/kube-scheduler:v1.18.17
k8s.gcr.io/kube-proxy:v1.18.17
k8s.gcr.io/pause:3.2
k8s.gcr.io/etcd:3.4.3-0
k8s.gcr.io/coredns:1.6.7

The output shows the exact image versions the cluster depends on. Because these registries are blocked, the images cannot be pulled directly; instead, pull the copies that Alibaba Cloud hosts and retag them to the target names.
Every image prefixed with k8s.gcr.io can be obtained by substituting the following prefix:

registry.aliyuncs.com/google_containers

For example, to download k8s.gcr.io/kube-apiserver:v1.18.17, use the following commands:

# first, pull the image using the Aliyun prefix
docker pull registry.aliyuncs.com/google_containers/kube-apiserver:v1.18.17
# retag it to the k8s.gcr.io name
docker tag registry.aliyuncs.com/google_containers/kube-apiserver:v1.18.17 k8s.gcr.io/kube-apiserver:v1.18.17
# remove the original mirror-tagged image
docker rmi registry.aliyuncs.com/google_containers/kube-apiserver:v1.18.17

That completes the download of one image. The image can also be saved to a tar file so it never has to be downloaded again:

docker save -o kube-apiserver.tar k8s.gcr.io/kube-apiserver:v1.18.17

To restore it later, upload the tar file to the server and load it back into the local Docker image store:

docker load -i kube-apiserver.tar
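
Repeating the pull/tag/rmi sequence for every image is tedious; a small shell loop can cover the whole list (a sketch, with the image names and versions taken from the kubeadm config images list output above):

images=(
    kube-apiserver:v1.18.17
    kube-controller-manager:v1.18.17
    kube-scheduler:v1.18.17
    kube-proxy:v1.18.17
    pause:3.2
    etcd:3.4.3-0
    coredns:1.6.7
)
for img in "${images[@]}"; do
    # pull from the Aliyun mirror, retag to the k8s.gcr.io name, drop the mirror tag
    docker pull registry.aliyuncs.com/google_containers/$img
    docker tag registry.aliyuncs.com/google_containers/$img k8s.gcr.io/$img
    docker rmi registry.aliyuncs.com/google_containers/$img
done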
  2. Cluster initialization
    Now initialize the cluster and join the node machines to it.
  • The following only needs to be run on the master node:
kubeadm init \
    --kubernetes-version=v1.18.17 \
    --pod-network-cidr=10.244.0.0/16 \
    --service-cidr=10.96.0.0/12 \
    --apiserver-advertise-address=192.168.100.100     # change this to your own master's IP address

Alternatively, use the following form to tell kubeadm to pull its images directly from the Aliyun registry:

kubeadm init \
    --image-repository registry.aliyuncs.com/google_containers \
    --kubernetes-version=v1.18.17 \
    --pod-network-cidr=10.244.0.0/16 \
    --service-cidr=10.96.0.0/12 \
    --apiserver-advertise-address=192.168.100.100     # change this to your own master's IP address

Create the required files (run this on every machine; kubectl will not work where it is skipped):

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
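
kubectl should now respond on the master. The nodes will report NotReady until the network plugin is installed in the next section:

kubectl get nodes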
  • The following only needs to be run on the node machines:
    If kubeadm init succeeded on the master, its output includes a shell command; copy it onto each node and run it to join that node to the Kubernetes cluster. It looks something like this:
kubeadm join 192.168.100.100:6443 --token 1coiqe.i0zt321f61aanqf9 \
    --discovery-token-ca-cert-hash sha256:asjdf8972345hlk;jfds9yg3245h322397fdsoifaowiufew
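
If the join command is lost or the token has expired (tokens are valid for 24 hours by default), a fresh one can be printed on the master:

kubeadm token create --print-join-command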

Installing a Network Plugin

Kubernetes supports many network plugins, such as flannel, calico, and canal; pick any one. Here we choose flannel.

  1. Fetch the flannel manifest
wget https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml

Since the file is quite hard to download, the full kube-flannel.yml source is included here:

---
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
  name: psp.flannel.unprivileged
  annotations:
    seccomp.security.alpha.kubernetes.io/allowedProfileNames: docker/default
    seccomp.security.alpha.kubernetes.io/defaultProfileName: docker/default
    apparmor.security.beta.kubernetes.io/allowedProfileNames: runtime/default
    apparmor.security.beta.kubernetes.io/defaultProfileName: runtime/default
spec:
  privileged: false
  volumes:
    - configMap
    - secret
    - emptyDir
    - hostPath
  allowedHostPaths:
    - pathPrefix: "/etc/cni/net.d"
    - pathPrefix: "/etc/kube-flannel"
    - pathPrefix: "/run/flannel"
  readOnlyRootFilesystem: false
  # Users and groups
  runAsUser:
    rule: RunAsAny
  supplementalGroups:
    rule: RunAsAny
  fsGroup:
    rule: RunAsAny
  # Privilege Escalation
  allowPrivilegeEscalation: false
  defaultAllowPrivilegeEscalation: false
  # Capabilities
  allowedCapabilities: ['NET_ADMIN']
  defaultAddCapabilities: []
  requiredDropCapabilities: []
  # Host namespaces
  hostPID: false
  hostIPC: false
  hostNetwork: true
  hostPorts:
  - min: 0
    max: 65535
  # SELinux
  seLinux:
    # SELinux is unused in CaaSP
    rule: 'RunAsAny'
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: flannel
rules:
  - apiGroups: ['extensions']
    resources: ['podsecuritypolicies']
    verbs: ['use']
    resourceNames: ['psp.flannel.unprivileged']
  - apiGroups:
      - ""
    resources:
      - pods
    verbs:
      - get
  - apiGroups:
      - ""
    resources:
      - nodes
    verbs:
      - list
      - watch
  - apiGroups:
      - ""
    resources:
      - nodes/status
    verbs:
      - patch
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: flannel
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: flannel
subjects:
- kind: ServiceAccount
  name: flannel
  namespace: kube-system
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: flannel
  namespace: kube-system
---
kind: ConfigMap
apiVersion: v1
metadata:
  name: kube-flannel-cfg
  namespace: kube-system
  labels:
    tier: node
    app: flannel
data:
  cni-conf.json: |
    {
      "name": "cbr0",
      "cniVersion": "0.3.1",
      "plugins": [
        {
          "type": "flannel",
          "delegate": {
            "hairpinMode": true,
            "isDefaultGateway": true
          }
        },
        {
          "type": "portmap",
          "capabilities": {
            "portMappings": true
          }
        }
      ]
    }
  net-conf.json: |
    {
      "Network": "10.244.0.0/16",
      "Backend": {
        "Type": "vxlan"
      }
    }
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: kube-flannel-ds-amd64
  namespace: kube-system
  labels:
    tier: node
    app: flannel
spec:
  selector:
    matchLabels:
      app: flannel
  template:
    metadata:
      labels:
        tier: node
        app: flannel
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
              - matchExpressions:
                  - key: beta.kubernetes.io/os
                    operator: In
                    values:
                      - linux
                  - key: beta.kubernetes.io/arch
                    operator: In
                    values:
                      - amd64
      hostNetwork: true
      tolerations:
      - operator: Exists
        effect: NoSchedule
      serviceAccountName: flannel
      initContainers:
      - name: install-cni
        image: quay.io/coreos/flannel:v0.12.0-amd64
        command:
        - cp
        args:
        - -f
        - /etc/kube-flannel/cni-conf.json
        - /etc/cni/net.d/10-flannel.conflist
        volumeMounts:
        - name: cni
          mountPath: /etc/cni/net.d
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      containers:
      - name: kube-flannel
        image: quay.io/coreos/flannel:v0.12.0-amd64
        command:
        - /opt/bin/flanneld
        args:
        - --ip-masq
        - --kube-subnet-mgr
        resources:
          requests:
            cpu: "100m"
            memory: "50Mi"
          limits:
            cpu: "100m"
            memory: "50Mi"
        securityContext:
          privileged: false
          capabilities:
            add: ["NET_ADMIN"]
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        volumeMounts:
        - name: run
          mountPath: /run/flannel
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      volumes:
        - name: run
          hostPath:
            path: /run/flannel
        - name: cni
          hostPath:
            path: /etc/cni/net.d
        - name: flannel-cfg
          configMap:
            name: kube-flannel-cfg
  2. Optionally, replace the quay.io registry in the file with quay-mirror.qiniu.com. Whether to do so is up to you: in my case the images downloaded fine without the change, and it was the modified mirror address that failed to download.

  3. Deploy flannel using the manifest

kubectl apply -f kube-flannel.yml
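
Right after applying, you can watch the flannel pods come up on every node (the app=flannel label comes from the manifest above):

kubectl get pods -n kube-system -l app=flannel -o wide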

Behind the scenes, this pulls the flannel images in the kube-system namespace. You can inspect the pull progress of an individual pod with:

kubectl describe pod kube-flannel-ds-amd64-54n98 -n kube-system

Once the images have been pulled and the pod is running, the same command produces output like this:

Name:         kube-flannel-ds-amd64-54n98
Namespace:    kube-system
Priority:     0
Node:         master/192.168.100.100
Start Time:   Fri, 26 Mar 2021 16:36:56 +0800
Labels:       app=flannel
              controller-revision-hash=56bf6995cf
              pod-template-generation=3
              tier=node
Annotations:  <none>
Status:       Running
IP:           192.168.100.100
IPs:
  IP:           192.168.100.100
Controlled By:  DaemonSet/kube-flannel-ds-amd64
Init Containers:
  install-cni:
    Container ID:  docker://2e3f779eb93e47f79fc325b6a31b4bffa381f743a582caab0a826af75b512530
    Image:         quay-mirror.qiniu.com/coreos/flannel:v0.11.0-amd64
    Image ID:      docker-pullable://quay.io/coreos/flannel@sha256:6d451d92c921f14bfb38196aacb6e506d4593c5b3c9d40a8b8a2506010dc3e10
    Port:          <none>
    Host Port:     <none>
    Command:
      cp
    Args:
      -f
      /etc/kube-flannel/cni-conf.json
      /etc/cni/net.d/10-flannel.conflist
    State:          Terminated
      Reason:       Completed
      Exit Code:    0
      Started:      Fri, 26 Mar 2021 16:48:37 +0800
      Finished:     Fri, 26 Mar 2021 16:48:37 +0800
    Ready:          True
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /etc/cni/net.d from cni (rw)
      /etc/kube-flannel/ from flannel-cfg (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from flannel-token-ksh5r (ro)
Containers:
  kube-flannel:
    Container ID:  docker://01122e880b4df0fa3a63f90fad1b476cb3140c2402ad4b5741633de35090bec8
    Image:         quay-mirror.qiniu.com/coreos/flannel:v0.11.0-amd64
    Image ID:      docker-pullable://quay.io/coreos/flannel@sha256:6d451d92c921f14bfb38196aacb6e506d4593c5b3c9d40a8b8a2506010dc3e10
    Port:          <none>
    Host Port:     <none>
    Command:
      /opt/bin/flanneld
    Args:
      --ip-masq
      --kube-subnet-mgr
    State:          Running
      Started:      Fri, 26 Mar 2021 16:48:38 +0800
    Ready:          True
    Restart Count:  0
    Limits:
      cpu:     100m
      memory:  50Mi
    Requests:
      cpu:     100m
      memory:  50Mi
    Environment:
      POD_NAME:       kube-flannel-ds-amd64-54n98 (v1:metadata.name)
      POD_NAMESPACE:  kube-system (v1:metadata.namespace)
    Mounts:
      /etc/kube-flannel/ from flannel-cfg (rw)
      /run/flannel from run (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from flannel-token-ksh5r (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             True
  ContainersReady   True
  PodScheduled      True
Volumes:
  run:
    Type:          HostPath (bare host directory volume)
    Path:          /run/flannel
    HostPathType:
  cni:
    Type:          HostPath (bare host directory volume)
    Path:          /etc/cni/net.d
    HostPathType:
  flannel-cfg:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      kube-flannel-cfg
    Optional:  false
  flannel-token-ksh5r:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  flannel-token-ksh5r
    Optional:    false
QoS Class:       Burstable
Node-Selectors:  <none>
Tolerations:     :NoSchedule
                 node.kubernetes.io/disk-pressure:NoSchedule
                 node.kubernetes.io/memory-pressure:NoSchedule
                 node.kubernetes.io/network-unavailable:NoSchedule
                 node.kubernetes.io/not-ready:NoExecute
                 node.kubernetes.io/pid-pressure:NoSchedule
                 node.kubernetes.io/unreachable:NoExecute
                 node.kubernetes.io/unschedulable:NoSchedule
Events:          <none>
  4. Once the images have been pulled and the pods are running, check the cluster node status again
kubectl get nodes

The result looks like this:

[root@master ~]# kubectl get nodes
NAME     STATUS   ROLES    AGE    VERSION
master   Ready    master   3h5m   v1.18.17
node1    Ready    <none>   3h4m   v1.18.17
node2    Ready    <none>   3h3m   v1.18.17

With that, the Kubernetes cluster environment is complete.
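
As an optional smoke test (not part of the steps above; the nginx image tag here is just an example), deploy a workload and expose it:

# create a deployment and expose it through a NodePort service
kubectl create deployment nginx --image=nginx:1.14-alpine
kubectl expose deployment nginx --port=80 --type=NodePort
# the pod should reach Running, and the service shows the assigned NodePort
kubectl get pods,svc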
