Quickly setting up a Kubernetes cluster

Run the following steps on both machines unless noted otherwise.

Machine requirements: at least 2 CPUs each.

192.168.0.137 (master)

192.168.0.138 (worker)

  1. Disable swap:

         swapoff -a

     Disable the firewall:

         sudo ufw disable

     Run `sudo ufw status` to check the firewall state; `inactive` means it is off.
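Note that `swapoff -a` only disables swap until the next reboot. To keep it off permanently, the swap entry in /etc/fstab can be commented out. The sketch below demonstrates the edit on a throwaway copy of fstab rather than the real file; on the actual machines, run the same `sed` command against /etc/fstab with sudo:

```shell
# Work on a sample fstab so the edit can be tried safely.
tmpfstab=$(mktemp)
cat > "$tmpfstab" <<'EOF'
UUID=abcd-1234 /        ext4 defaults 0 1
/swap.img      none     swap sw       0 0
EOF
# Prefix swap entries with '#' so they are ignored at boot.
sed -ri 's/^([^#].*\sswap\s)/#\1/' "$tmpfstab"
grep swap "$tmpfstab"   # the swap line is now commented out
```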

2. Download the containerd release tarball:

https://github.com/containerd/containerd/releases

Upload the downloaded archive to the server, then extract it into /usr/local:

tar Cxzvf /usr/local containerd-1.6.6-linux-amd64.tar.gz

(The leading C tells tar to change into /usr/local before extracting.)

Run ctr to verify the installation succeeded.

3. Create the directory /usr/local/lib/systemd/system/:

mkdir -p /usr/local/lib/systemd/system/

Open vi /usr/local/lib/systemd/system/containerd.service and paste in the unit file below:

# Copyright The containerd Authors.

#

# Licensed under the Apache License, Version 2.0 (the "License");

# you may not use this file except in compliance with the License.

# You may obtain a copy of the License at

#

#     http://www.apache.org/licenses/LICENSE-2.0

#

# Unless required by applicable law or agreed to in writing, software

# distributed under the License is distributed on an "AS IS" BASIS,

# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.

# See the License for the specific language governing permissions and

# limitations under the License.

[Unit]

Description=containerd container runtime

Documentation=https://containerd.io

After=network.target local-fs.target

[Service]

#uncomment to enable the experimental sbservice (sandboxed) version of containerd/cri integration

#Environment="ENABLE_CRI_SANDBOXES=sandboxed"

ExecStartPre=-/sbin/modprobe overlay

ExecStart=/usr/local/bin/containerd

Type=notify

Delegate=yes

KillMode=process

Restart=always

RestartSec=5

# Having non-zero Limit*s causes performance problems due to accounting overhead

# in the kernel. We recommend using cgroups to do container-local accounting.

LimitNPROC=infinity

LimitCORE=infinity

LimitNOFILE=infinity

# Comment TasksMax if your systemd version does not support it.

# Only systemd 226 and above support this option.

TasksMax=infinity

OOMScoreAdjust=-999

[Install]

WantedBy=multi-user.target

5. Enable containerd at boot:

systemctl daemon-reload

systemctl enable --now containerd

Check that it is running:

systemctl status containerd

6. Download the runc.amd64 binary:

https://github.com/opencontainers/runc/releases

Upload the file to the server, then install it:

install -m 755 runc.amd64 /usr/local/sbin/runc

7. Generate a default containerd configuration file.

Create the directory first: mkdir /etc/containerd/

Then generate the config:

containerd config default > /etc/containerd/config.toml

8. Edit the configuration file and set SystemdCgroup = true (under [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]):

vi /etc/containerd/config.toml

Restart containerd after the change:

sudo systemctl restart containerd
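If you prefer not to edit the file by hand, a sed one-liner makes the same change. The sketch below runs against a sample snippet; on the real machines, point the same command at /etc/containerd/config.toml (with sudo) before restarting containerd:

```shell
# Sample fragment of the generated config, for demonstration.
tmpconf=$(mktemp)
cat > "$tmpconf" <<'EOF'
[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
  SystemdCgroup = false
EOF
# Flip SystemdCgroup to true -- the same edit as the manual vi step.
sed -i 's/SystemdCgroup = false/SystemdCgroup = true/' "$tmpconf"
grep SystemdCgroup "$tmpconf"
```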

9. Configure the Aliyun apt source and install kubelet, kubeadm, and kubectl:

apt-get update && apt-get install -y apt-transport-https

curl https://mirrors.aliyun.com/kubernetes/apt/doc/apt-key.gpg | apt-key add -

cat <<EOF >/etc/apt/sources.list.d/kubernetes.list
deb https://mirrors.aliyun.com/kubernetes/apt/ kubernetes-xenial main
EOF

apt-get update

apt-get install -y kubelet kubeadm kubectl

10. Set the hostnames (one command per machine):

On 192.168.0.137: hostnamectl set-hostname master1

On 192.168.0.138: hostnamectl set-hostname worker1
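Optionally (this is not part of the original steps), adding both hostnames to /etc/hosts on each machine lets the nodes reach each other by name; the IPs and names are the ones used in this guide. The sketch appends to a temporary file so it can be tried safely; on the real machines, append the same two lines to /etc/hosts:

```shell
tmphosts=$(mktemp)
# On the real machines, append these lines to /etc/hosts instead.
cat >> "$tmphosts" <<'EOF'
192.168.0.137 master1
192.168.0.138 worker1
EOF
cat "$tmphosts"
```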

11. Apply the kernel module and sysctl settings (copy and run the whole block):

cat <<EOF | sudo tee /etc/modules-load.d/k8s.conf
overlay
br_netfilter
EOF

sudo modprobe overlay

sudo modprobe br_netfilter

# sysctl params required by setup, params persist across reboots

cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-iptables  = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.ipv4.ip_forward                 = 1
EOF

# Apply sysctl params without reboot

sudo sysctl --system

12. List the images kubeadm needs: kubeadm config images list

Pull them from the Aliyun mirror:

kubeadm config images pull --image-repository registry.aliyuncs.com/google_containers

Edit /etc/containerd/config.toml:

vi /etc/containerd/config.toml

After the change, restart containerd:

systemctl restart containerd
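The edit this step most likely refers to (an assumption, since the original does not name the field) is re-pointing the CRI plugin's sandbox_image at the Aliyun mirror so the kubelet can pull the pause image; the exact registry and pause tag in your generated config may differ. The sketch below performs the substitution on a sample line; on the real machines, run the same sed against /etc/containerd/config.toml:

```shell
tmpconf=$(mktemp)
# Sample sandbox_image line; your generated config may use k8s.gcr.io
# or registry.k8s.io, and a different pause tag.
echo 'sandbox_image = "registry.k8s.io/pause:3.6"' > "$tmpconf"
# Re-point the sandbox image at the Aliyun mirror, keeping the tag.
sed -i -E 's#(k8s\.gcr\.io|registry\.k8s\.io)/pause#registry.aliyuncs.com/google_containers/pause#' "$tmpconf"
cat "$tmpconf"
```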

13. Initialize Kubernetes (master node only):

kubeadm init --pod-network-cidr="10.244.0.0/16" --image-repository registry.aliyuncs.com/google_containers

On worker1, run the kubeadm join command printed at the end of the init output to join the cluster.

# If the token has expired, regenerate the join command (on the master):

sudo kubeadm token create --print-join-command
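The join command printed by kubeadm has the following shape; the token and hash below are placeholders for illustration only, so always substitute the values printed by `kubeadm init` or `kubeadm token create --print-join-command`:

```shell
# Placeholder values -- substitute the real ones from the master's output.
MASTER=192.168.0.137
TOKEN=abcdef.0123456789abcdef
HASH=sha256:0000000000000000000000000000000000000000000000000000000000000000
echo "sudo kubeadm join ${MASTER}:6443 --token ${TOKEN} --discovery-token-ca-cert-hash ${HASH}"
```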

On the master, set up kubectl access:

mkdir -p $HOME/.kube

sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config

sudo chown $(id -u):$(id -g) $HOME/.kube/config

At this point, kubectl get node shows the nodes as NotReady, because no network plug-in is installed yet.

Create the network add-on manifest:

vi kube-flannel.yml

Copy the following into kube-flannel.yml:

---

kind: Namespace

apiVersion: v1

metadata:

  name: kube-flannel

  labels:

    pod-security.kubernetes.io/enforce: privileged

---

kind: ClusterRole

apiVersion: rbac.authorization.k8s.io/v1

metadata:

  name: flannel

rules:

- apiGroups:

  - ""

  resources:

  - pods

  verbs:

  - get

- apiGroups:

  - ""

  resources:

  - nodes

  verbs:

  - list

  - watch

- apiGroups:

  - ""

  resources:

  - nodes/status

  verbs:

  - patch

---

kind: ClusterRoleBinding

apiVersion: rbac.authorization.k8s.io/v1

metadata:

  name: flannel

roleRef:

  apiGroup: rbac.authorization.k8s.io

  kind: ClusterRole

  name: flannel

subjects:

- kind: ServiceAccount

  name: flannel

  namespace: kube-flannel

---

apiVersion: v1

kind: ServiceAccount

metadata:

  name: flannel

  namespace: kube-flannel

---

kind: ConfigMap

apiVersion: v1

metadata:

  name: kube-flannel-cfg

  namespace: kube-flannel

  labels:

    tier: node

    app: flannel

data:

  cni-conf.json: |

    {

      "name": "cbr0",

      "cniVersion": "0.3.1",

      "plugins": [

        {

          "type": "flannel",

          "delegate": {

            "hairpinMode": true,

            "isDefaultGateway": true

          }

        },

        {

          "type": "portmap",

          "capabilities": {

            "portMappings": true

          }

        }

      ]

    }

  net-conf.json: |

    {

      "Network": "10.244.0.0/16",

      "Backend": {

        "Type": "vxlan"

      }

    }

---

apiVersion: apps/v1

kind: DaemonSet

metadata:

  name: kube-flannel-ds

  namespace: kube-flannel

  labels:

    tier: node

    app: flannel

spec:

  selector:

    matchLabels:

      app: flannel

  template:

    metadata:

      labels:

        tier: node

        app: flannel

    spec:

      affinity:

        nodeAffinity:

          requiredDuringSchedulingIgnoredDuringExecution:

            nodeSelectorTerms:

            - matchExpressions:

              - key: kubernetes.io/os

                operator: In

                values:

                - linux

      hostNetwork: true

      priorityClassName: system-node-critical

      tolerations:

      - operator: Exists

        effect: NoSchedule

      serviceAccountName: flannel

      initContainers:

      - name: install-cni-plugin

       #image: flannelcni/flannel-cni-plugin:v1.1.0 for ppc64le and mips64le (dockerhub limitations may apply)

        image: docker.io/rancher/mirrored-flannelcni-flannel-cni-plugin:v1.1.0

        command:

        - cp

        args:

        - -f

        - /flannel

        - /opt/cni/bin/flannel

        volumeMounts:

        - name: cni-plugin

          mountPath: /opt/cni/bin

      - name: install-cni

       #image: flannelcni/flannel:v0.19.0 for ppc64le and mips64le (dockerhub limitations may apply)

        image: docker.io/rancher/mirrored-flannelcni-flannel:v0.19.0

        command:

        - cp

        args:

        - -f

        - /etc/kube-flannel/cni-conf.json

        - /etc/cni/net.d/10-flannel.conflist

        volumeMounts:

        - name: cni

          mountPath: /etc/cni/net.d

        - name: flannel-cfg

          mountPath: /etc/kube-flannel/

      containers:

      - name: kube-flannel

       #image: flannelcni/flannel:v0.19.0 for ppc64le and mips64le (dockerhub limitations may apply)

        image: docker.io/rancher/mirrored-flannelcni-flannel:v0.19.0

        command:

        - /opt/bin/flanneld

        args:

        - --ip-masq

        - --kube-subnet-mgr

        resources:

          requests:

            cpu: "100m"

            memory: "50Mi"

          limits:

            cpu: "100m"

            memory: "50Mi"

        securityContext:

          privileged: false

          capabilities:

            add: ["NET_ADMIN", "NET_RAW"]

        env:

        - name: POD_NAME

          valueFrom:

            fieldRef:

              fieldPath: metadata.name

        - name: POD_NAMESPACE

          valueFrom:

            fieldRef:

              fieldPath: metadata.namespace

        - name: EVENT_QUEUE_DEPTH

          value: "5000"

        volumeMounts:

        - name: run

          mountPath: /run/flannel

        - name: flannel-cfg

          mountPath: /etc/kube-flannel/

        - name: xtables-lock

          mountPath: /run/xtables.lock

      volumes:

      - name: run

        hostPath:

          path: /run/flannel

      - name: cni-plugin

        hostPath:

          path: /opt/cni/bin

      - name: cni

        hostPath:

          path: /etc/cni/net.d

      - name: flannel-cfg

        configMap:

          name: kube-flannel-cfg

      - name: xtables-lock

        hostPath:

          path: /run/xtables.lock

          type: FileOrCreate

Finally, apply the manifest:

kubectl apply -f kube-flannel.yml

Installation complete!
