Initializing a Kubernetes Cluster with kubeadm

Contents

1. Environment Preparation

1.1 Set the SELinux state

1.2 Temporarily disable the firewall

1.3 Set hostnames

1.4 Configure the hosts file on all three machines

1.5 Disable swap

1.6 Reboot the virtual machines

2. Deployment

2.1 Set up the package repositories

2.2 Copy the repo configs to the other nodes

2.3 Verify the components installed

2.4 Enable and start the services

2.5 Initialize with kubeadm

2.6 Check that all nodes are healthy

2.7 Install the overlay network (Flannel)

3. Notes

3.1 Changing the NodePort range

3.2 Restarting the apiserver

3.3 Verifying the result


1. Environment Preparation

Host IP        Hostname      Role
192.168.1.91   k8s-master    master
192.168.1.92   k8s-node-1    node-1
192.168.1.93   k8s-node-2    node-2

1.1 Set the SELinux state

vim /etc/selinux/config

Change the line to SELINUX=disabled
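
The config-file change only takes effect after a reboot. To switch SELinux off for the current boot as well (a quick sketch; setenforce itself does not persist across reboots):

setenforce 0      # permissive mode until the next reboot
getenforce        # should now print Permissive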

1.2 Temporarily disable the firewall

systemctl stop firewalld     # stop it now
systemctl disable firewalld  # keep it off after reboots
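
To confirm the firewall is stopped and will stay off:

systemctl is-active firewalld    # expect: inactive
systemctl is-enabled firewalld   # expect: disabled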

1.3 Set hostnames

192.168.1.91:
hostnamectl set-hostname k8s-master

192.168.1.92:
hostnamectl set-hostname k8s-node-1

192.168.1.93:
hostnamectl set-hostname k8s-node-2

1.4 Configure the hosts file on all three machines

vim /etc/hosts

192.168.1.91 k8s-master
192.168.1.92 k8s-node-1
192.168.1.93 k8s-node-2
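
A quick check that the three names resolve from every machine:

ping -c 1 k8s-master
ping -c 1 k8s-node-1
ping -c 1 k8s-node-2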

1.5 Disable swap

Disable swap permanently by commenting out the swap entry in fstab:
vim /etc/fstab

Disable swap for the current boot:
swapoff -a
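
A non-interactive version of both steps (a sketch; the sed pattern assumes the fstab entry contains the word "swap"):

swapoff -a
sed -i '/\sswap\s/ s/^/#/' /etc/fstab   # comment out the swap line(s)
free -h                                 # the Swap row should read 0B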

1.6 Reboot the virtual machines: reboot

2. Deployment

2.1 Set up the package repositories

Download the docker-ce yum repo:
cd /etc/yum.repos.d
wget https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo

Kubernetes yum repo:
vim kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=0
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg

Refresh the repo metadata:
yum repolist

yum install docker-ce kubelet kubeadm kubectl -y

2.2 Copy the repo configs to the other nodes

scp docker-ce.repo kubernetes.repo k8s-node-1:/etc/yum.repos.d/
scp docker-ce.repo kubernetes.repo k8s-node-2:/etc/yum.repos.d/

Then, on each node:
yum repolist
yum install docker-ce kubelet kubeadm kubectl -y

2.3 Verify the components installed

[root@master yum.repos.d]# rpm -ql kubelet
/etc/kubernetes/manifests
/etc/sysconfig/kubelet
/usr/bin/kubelet
/usr/lib/systemd/system/kubelet.service

No node may have swap enabled. In early releases Kubernetes refused outright to install or start with swap on; turning swap on broke the cluster. The kubelet can be told to tolerate swap with the parameter set below.

[root@master yum.repos.d]# cat /etc/sysconfig/kubelet
KUBELET_EXTRA_ARGS=

Change it to:
KUBELET_EXTRA_ARGS="--fail-swap-on=false"
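
For reference, this works because on RPM installs the kubelet's systemd drop-in sources /etc/sysconfig/kubelet as an environment file. You can confirm the wiring (the exact drop-in path may vary slightly by version):

grep -r EnvironmentFile /usr/lib/systemd/system/kubelet.service.d/
# expect a line like: EnvironmentFile=-/etc/sysconfig/kubelet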

2.4 Enable and start the services

systemctl enable kubelet && systemctl enable docker

Set Docker's cgroup driver to systemd and configure a domestic registry mirror. Both settings live in the same /etc/docker/daemon.json, so write them in a single file rather than overwriting it twice:

cat > /etc/docker/daemon.json <<EOF
{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "registry-mirrors": ["https://docker.mirrors.ustc.edu.cn"]
}
EOF

Other common registry mirrors:

Docker official China mirror
--registry-mirror=https://registry.docker-cn.com
NetEase 163 mirror
--registry-mirror=http://hub-mirror.c.163.com
USTC mirror
--registry-mirror=https://docker.mirrors.ustc.edu.cn
Aliyun mirror
--registry-mirror=https://{your_id}.mirror.aliyuncs.com
DaoCloud mirror
--registry-mirror=http://{your_id}.m.daocloud.io

Start docker:
systemctl daemon-reload
systemctl start docker
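
Once Docker is up, confirm both settings took effect:

docker info | grep -i 'cgroup driver'       # expect: Cgroup Driver: systemd
docker info | grep -iA1 'registry mirrors'  # expect the mirror URL configured above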


Both of the following values must be 1, not 0:
cat /proc/sys/net/bridge/bridge-nf-call-ip6tables
1

cat /proc/sys/net/bridge/bridge-nf-call-iptables
1

If either value is 0, fix it as follows:

vim /etc/sysctl.conf 
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-arptables = 1

sysctl -p
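
If those /proc entries do not exist at all, the br_netfilter kernel module is probably not loaded. Load it now and make it persistent (using the standard modules-load.d mechanism):

modprobe br_netfilter
echo br_netfilter > /etc/modules-load.d/br_netfilter.conf   # load on every boot
sysctl -p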

2.5 Initialize with kubeadm

Method 1:
kubeadm init --image-repository registry.aliyuncs.com/google_containers  --service-cidr=10.96.0.0/12 --pod-network-cidr=10.244.0.0/16 --ignore-preflight-errors=Swap

If initialization fails, run the following before retrying:
kubeadm reset

Method 2:
Generate a default init configuration file:
kubeadm config print init-defaults > init.default.yaml


vim init.default.yaml
Edit the following fields:
localAPIEndpoint:
  advertiseAddress: 192.168.1.91 # master IP
  bindPort: 6443
imageRepository: registry.cn-hangzhou.aliyuncs.com/google_containers  # domestic mirror
kubernetesVersion: v1.19.0       # k8s version
networking:
  dnsDomain: cluster.local
  serviceSubnet: 10.96.0.0/12    # the default is fine, or set your own CIDR
  podSubnet: 10.244.0.0/16       # add the pod network CIDR

Initialize:
kubeadm init --config=init.default.yaml

If initialization fails, run
kubeadm reset
and then re-run the kubeadm init command.

You can list the required images ahead of time with:
kubeadm config images list
k8s.gcr.io/kube-apiserver:v1.19.16
k8s.gcr.io/kube-controller-manager:v1.19.16
k8s.gcr.io/kube-scheduler:v1.19.16
k8s.gcr.io/kube-proxy:v1.19.16
k8s.gcr.io/pause:3.2
k8s.gcr.io/etcd:3.4.9-1
k8s.gcr.io/coredns:1.7.0

Pre-pull the images with:
kubeadm config images pull --config=init.default.yaml

Once the download finishes, run the init command again.

After initialization succeeds, run:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
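
If you are running as root, kubeadm's own output notes that you can instead just point KUBECONFIG at the admin config:

export KUBECONFIG=/etc/kubernetes/admin.conf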

Copy the join command (with its token) printed at the end of kubeadm init and run it on each of the other nodes to join them to the cluster:

kubeadm join 192.168.1.91:6443 --token ipe8mu.bxr60a4p97fnyrrz \
        --discovery-token-ca-cert-hash sha256:24b9203e549e5bb53170a1165449b3a21185b15647e131e9092aae09efa2f82f 
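
The bootstrap token expires after 24 hours by default. If it has expired, regenerate a complete join command on the master:

kubeadm token create --print-join-command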

2.6 Check that all nodes are healthy

kubectl get componentstatus    # or: kubectl get cs

Note:

If controller-manager and scheduler report Unhealthy here, it is because kube-controller-manager.yaml and kube-scheduler.yaml under /etc/kubernetes/manifests/ set a default port of 0. The fix is to comment out the corresponding port line in each file, then check the component status again.
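
A minimal sketch of that fix, assuming each manifest contains a literal "- --port=0" line (the kubelet restarts the static pods automatically once the files change):

cd /etc/kubernetes/manifests
sed -i 's/^\( *- --port=0\)/#\1/' kube-controller-manager.yaml kube-scheduler.yaml
kubectl get cs    # re-check once the pods have restarted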

If kubectl get nodes shows a node whose status is NotReady, some critical pod has not come up. First check the pod status in kube-system:

kubectl get pod -n kube-system

This is usually caused by a failure to pull the flannel image.
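
To see why a particular pod is stuck, describe it; image-pull failures show up in the Events section (replace <pod-name> with the failing pod):

kubectl describe pod <pod-name> -n kube-system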

2.7 Install the overlay network (Flannel)

vim kube-flannel.yml
Paste in the following manifest:
---
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
  name: psp.flannel.unprivileged
  annotations:
    seccomp.security.alpha.kubernetes.io/allowedProfileNames: docker/default
    seccomp.security.alpha.kubernetes.io/defaultProfileName: docker/default
    apparmor.security.beta.kubernetes.io/allowedProfileNames: runtime/default
    apparmor.security.beta.kubernetes.io/defaultProfileName: runtime/default
spec:
  privileged: false
  volumes:
  - configMap
  - secret
  - emptyDir
  - hostPath
  allowedHostPaths:
  - pathPrefix: "/etc/cni/net.d"
  - pathPrefix: "/etc/kube-flannel"
  - pathPrefix: "/run/flannel"
  readOnlyRootFilesystem: false
  # Users and groups
  runAsUser:
    rule: RunAsAny
  supplementalGroups:
    rule: RunAsAny
  fsGroup:
    rule: RunAsAny
  # Privilege Escalation
  allowPrivilegeEscalation: false
  defaultAllowPrivilegeEscalation: false
  # Capabilities
  allowedCapabilities: ['NET_ADMIN', 'NET_RAW']
  defaultAddCapabilities: []
  requiredDropCapabilities: []
  # Host namespaces
  hostPID: false
  hostIPC: false
  hostNetwork: true
  hostPorts:
  - min: 0
    max: 65535
  # SELinux
  seLinux:
    # SELinux is unused in CaaSP
    rule: 'RunAsAny'
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: flannel
rules:
- apiGroups: ['extensions']
  resources: ['podsecuritypolicies']
  verbs: ['use']
  resourceNames: ['psp.flannel.unprivileged']
- apiGroups:
  - ""
  resources:
  - pods
  verbs:
  - get
- apiGroups:
  - ""
  resources:
  - nodes
  verbs:
  - list
  - watch
- apiGroups:
  - ""
  resources:
  - nodes/status
  verbs:
  - patch
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: flannel
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: flannel
subjects:
- kind: ServiceAccount
  name: flannel
  namespace: kube-system
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: flannel
  namespace: kube-system
---
kind: ConfigMap
apiVersion: v1
metadata:
  name: kube-flannel-cfg
  namespace: kube-system
  labels:
    tier: node
    app: flannel
data:
  cni-conf.json: |
    {
      "name": "cbr0",
      "cniVersion": "0.3.1",
      "plugins": [
        {
          "type": "flannel",
          "delegate": {
            "hairpinMode": true,
            "isDefaultGateway": true
          }
        },
        {
          "type": "portmap",
          "capabilities": {
            "portMappings": true
          }
        }
      ]
    }
  net-conf.json: |
    {
      "Network": "10.244.0.0/16",
      "Backend": {
        "Type": "vxlan"
      }
    }
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: kube-flannel-ds
  namespace: kube-system
  labels:
    tier: node
    app: flannel
spec:
  selector:
    matchLabels:
      app: flannel
  template:
    metadata:
      labels:
        tier: node
        app: flannel
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: kubernetes.io/os
                operator: In
                values:
                - linux
      hostNetwork: true
      priorityClassName: system-node-critical
      tolerations:
      - operator: Exists
        effect: NoSchedule
      serviceAccountName: flannel
      initContainers:
      - name: install-cni-plugin
        image: rancher/mirrored-flannelcni-flannel-cni-plugin:v1.2
        command:
        - cp
        args:
        - -f
        - /flannel
        - /opt/cni/bin/flannel
        volumeMounts:
        - name: cni-plugin
          mountPath: /opt/cni/bin
      - name: install-cni
        image: quay.io/coreos/flannel:v0.15.0
        command:
        - cp
        args:
        - -f
        - /etc/kube-flannel/cni-conf.json
        - /etc/cni/net.d/10-flannel.conflist
        volumeMounts:
        - name: cni
          mountPath: /etc/cni/net.d
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      containers:
      - name: kube-flannel
        image: quay.io/coreos/flannel:v0.15.0
        command:
        - /opt/bin/flanneld
        args:
        - --ip-masq
        - --kube-subnet-mgr
        resources:
          requests:
            cpu: "100m"
            memory: "50Mi"
          limits:
            cpu: "100m"
            memory: "50Mi"
        securityContext:
          privileged: false
          capabilities:
            add: ["NET_ADMIN", "NET_RAW"]
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        volumeMounts:
        - name: run
          mountPath: /run/flannel
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      volumes:
      - name: run
        hostPath:
          path: /run/flannel
      - name: cni-plugin
        hostPath:
          path: /opt/cni/bin
      - name: cni
        hostPath:
          path: /etc/cni/net.d
      - name: flannel-cfg
        configMap:
          name: kube-flannel-cfg

Apply it:

kubectl apply -f kube-flannel.yml

Then wait a few minutes for the pods to come up.
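
You can watch progress and confirm every node eventually reports Ready:

kubectl get pods -n kube-system -l app=flannel -w   # wait for Running on each node
kubectl get nodes                                   # all nodes should show Ready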

3. Notes

3.1 Changing the NodePort range

In a Kubernetes cluster, the default NodePort range is 30000-32767.

On a cluster installed with kubeadm, the master node has the file /etc/kubernetes/manifests/kube-apiserver.yaml. Edit it and, below the line

 - --service-cluster-ip-range=10.96.0.0/12

add

 - --service-node-port-range=20000-22767

(substitute whatever range you actually need). To apply the change, restart the apiserver as described in 3.2; an extreme alternative is to reboot the virtual machine outright.

3.2 Restarting the apiserver

Run the following to restart the apiserver:

# get the apiserver pod name
export apiserver_pods=$(kubectl get pods --selector=component=kube-apiserver -n kube-system --output=jsonpath={.items..metadata.name})
# delete the apiserver pod; the kubelet recreates it because it is a static pod
kubectl delete pod $apiserver_pods -n kube-system

3.3 Verifying the result

Run the following and check that the new flag appears in the apiserver's command line:

kubectl describe pod $apiserver_pods -n kube-system
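
For example, grep for the flag directly and, to be thorough, expose a test service and confirm its assigned port lands in the new range (the nginx deployment here is just for illustration):

kubectl describe pod $apiserver_pods -n kube-system | grep service-node-port-range

kubectl create deployment nginx --image=nginx
kubectl expose deployment nginx --port=80 --type=NodePort
kubectl get svc nginx    # the NodePort shown should fall inside 20000-22767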

That is the basic kubeadm procedure for initializing a Kubernetes cluster.
