Installing k8s on Ubuntu 20.04

1. Change the hostname

hostnamectl set-hostname "name"
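
For example, to name the master node and confirm the change (the name k8s-master is just an illustration):

hostnamectl set-hostname k8s-master
hostnamectl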

2. Configure a static IP

2.1 Run ip addr to see which interface is live; the interface with a cable attached shows UP in its flags

1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
2: eno1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
    link/ether 18:9b:a5:80:b5:9d brd ff:ff:ff:ff:ff:ff
    inet 192.168.2.130/24 brd 192.168.2.255 scope global eno1
       valid_lft forever preferred_lft forever
3: eno2: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc mq state DOWN group default qlen 1000
    link/ether 18:9b:a5:80:b5:9e brd ff:ff:ff:ff:ff:ff
4: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default 
    link/ether 02:42:ac:b1:d3:f1 brd ff:ff:ff:ff:ff:ff
    inet 172.17.0.1/16 brd 172.17.255.255 scope global docker0
       valid_lft forever preferred_lft forever

2.2 vi /etc/netplan/00-installer-config.yaml and give the UP interface a fixed static IP

# This is the network config written by 'subiquity'
network:
  ethernets:
    eno1: # the interface that shows UP
      critical: true
      dhcp-identifier: mac
      dhcp4: false # turn off DHCP
      addresses: [192.168.2.130/24] # the static IP
      gateway4: 192.168.2.254 # the gateway address
      nameservers:
        addresses:
        - 114.114.114.114
    eno2:
      dhcp4: false
  version: 2

2.3 Apply the configuration

netplan apply
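
If you are working over SSH, netplan try is the safer variant: it applies the configuration but rolls it back automatically after a timeout unless you confirm, so a mistyped static IP cannot lock you out:

netplan try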

Extra: removing an unwanted IP address

ip addr del [ip/len] dev [interface]
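
For example, to drop a leftover DHCP address from eno1 (the values are illustrative):

ip addr del 192.168.2.131/24 dev eno1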

3. Disable IPv6

To avoid hard-to-diagnose network problems later on, disable IPv6 up front by setting kernel parameters:

echo "net.ipv6.conf.all.disable_ipv6     = 1" >>/etc/sysctl.conf \
&& echo "net.ipv6.conf.default.disable_ipv6 = 1" >>/etc/sysctl.conf \
&& echo "net.ipv6.conf.lo.disable_ipv6      = 1" >>/etc/sysctl.conf \
&& sysctl -p
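
To confirm IPv6 is off, the parameter should read 1 and no global IPv6 addresses should remain:

sysctl net.ipv6.conf.all.disable_ipv6
ip -6 addr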

4. Raise the open-file and process limits

tee -a /etc/security/limits.conf << EOF 
* soft nofile 6400000
* hard nofile 6400000
* soft nproc 6400000
* hard nproc 6400000
EOF
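
The new limits apply only to sessions started after the change. A quick check after logging in again:

ulimit -n # should print 6400000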

5. Install the NFS server

NFS provides shared storage across multiple servers.

5.1 Install nfs-kernel-server

apt-get install -y nfs-kernel-server
systemctl start nfs-kernel-server
systemctl enable nfs-kernel-server

5.2 Create the shared directory

mkdir -p /data/share && chmod 777 /data/share && chown nobody:nogroup /data/share

5.3 Configure the export

vi /etc/exports

# /etc/exports: the access control list for filesystems which may be exported
#		to NFS clients.  See exports(5).
#
# Example for NFSv2 and NFSv3:
# /srv/homes       hostname1(rw,sync,no_subtree_check) hostname2(ro,sync,no_subtree_check)
#
# Example for NFSv4:
# /srv/nfs4        gss/krb5i(rw,sync,fsid=0,crossmnt,no_subtree_check)
# /srv/nfs4/homes  gss/krb5i(rw,sync,no_subtree_check)
#
/data/share *(rw,sync,no_subtree_check,no_root_squash)

5.4 Apply the export configuration

exportfs -arv
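
A quick way to verify that the export is visible:

showmount -e localhost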

6. Install the NFS client

6.1 Install nfs-common

apt-get install -y nfs-common

6.2 Create the mount point and mount the share

mkdir -p /data/share
mount -t nfs [ip]:/data/share /data/share

6.3 Mount automatically at boot

vi /etc/fstab and add the entry below:

[ip]:/data/share /data/share nfs defaults 0 0

6.4 Mount all fstab entries manually

mount -a
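
To confirm the share is actually mounted:

df -h /data/share
mount | grep /data/share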

7. Install Docker

7.1 Add the Docker apt repository (Aliyun mirror)

apt-get update \
&& apt-get -y install \
apt-transport-https ca-certificates curl software-properties-common \
&& curl -fsSL https://mirrors.aliyun.com/docker-ce/linux/ubuntu/gpg | sudo apt-key add - \
&& add-apt-repository "deb [arch=amd64] https://mirrors.aliyun.com/docker-ce/linux/ubuntu $(lsb_release -cs) stable" \
&& apt-get -y update

7.2 Install docker-ce

apt-get install -y docker-ce

7.3 [Optional] Add the nvidia-docker apt repository

distribution=$(. /etc/os-release;echo $ID$VERSION_ID) \
&& curl -s -L https://nvidia.github.io/nvidia-docker/gpgkey | sudo apt-key add - \
&& curl -s -L https://nvidia.github.io/nvidia-docker/$distribution/nvidia-docker.list | tee /etc/apt/sources.list.d/nvidia-docker.list \
&& apt-get update

7.4 [Optional] Install nvidia-docker2

apt-get install -y nvidia-docker2
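
Once Docker has been restarted with the nvidia runtime configured (see 8.2), a GPU smoke test is to run nvidia-smi inside a CUDA container; the image tag here is only an example and may need adjusting to your driver version:

docker run --rm --gpus all nvidia/cuda:11.0-base nvidia-smi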

7.5 Start Docker and enable it at boot

systemctl start docker && systemctl enable docker

8. Adjust the Docker configuration

8.1 vi /etc/docker/daemon.json

JSON does not allow comments, so the file must contain only the settings themselves. In the configuration below, default-runtime is set to nvidia and should be kept only on machines with a GPU (omit it, along with the runtimes block, otherwise); data-root is Docker's data directory; insecure-registries lists registries reached over plain HTTP, such as a private Harbor instance (harborip:harborport is a placeholder).

{
    "default-runtime": "nvidia",
    "exec-opts": [
        "native.cgroupdriver=systemd"
    ],
    "data-root": "/DATA/disk1/docker",
    "insecure-registries": [
        "harborip:harborport"
    ],
    "registry-mirrors": [
        "https://docker.mirrors.ustc.edu.cn/",
        "https://hub-mirror.c.163.com",
        "https://registry.docker-cn.com"
    ],
    "runtimes": {
        "nvidia": {
            "path": "nvidia-container-runtime",
            "runtimeArgs": []
        }
    },
    "log-driver": "json-file",
    "log-opts": {"max-size": "1g", "max-file": "10"}
}

8.2 Apply the configuration

systemctl daemon-reload && systemctl restart docker
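
Verify that the new settings took effect, in particular that the cgroup driver is now systemd (it must match the kubelet's):

docker info | grep -i cgroup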

9. Install k8s

9.1 Disable swap

swapoff -a
# Comment out the swap line in /etc/fstab so the change survives a reboot
vim /etc/fstab
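
If you prefer not to open an editor, a one-liner can comment the entry out, assuming swap appears as a whitespace-separated field as in a default install:

sed -i '/\sswap\s/ s/^/#/' /etc/fstab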

9.2 Set kernel parameters

# Check whether the br_netfilter module is loaded
lsmod | grep br_netfilter
# If it is not, load it now and on every boot (it is a kernel module, so no package needs to be installed)
modprobe br_netfilter
echo br_netfilter > /etc/modules-load.d/k8s.conf
# Inspect the current values
sysctl -a | grep bridge-nf-call
# If they are not set yet:
cat <<EOF | tee /etc/sysctl.d/k8s.conf
vm.swappiness=0
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
EOF
sysctl -p /etc/sysctl.d/k8s.conf

9.3 Add the k8s apt repository (Aliyun mirror)

apt-get install -y ca-certificates curl software-properties-common apt-transport-https \
&& curl -s https://mirrors.aliyun.com/kubernetes/apt/doc/apt-key.gpg | sudo apt-key add - \
&& tee /etc/apt/sources.list.d/kubernetes.list <<EOF
deb https://mirrors.aliyun.com/kubernetes/apt/ kubernetes-xenial main
EOF

9.4 Install kubelet, kubeadm, and kubectl

apt-get update && apt-get install -y kubelet=1.23.6-00 kubeadm=1.23.6-00 kubectl=1.23.6-00
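
To keep a routine apt upgrade from unexpectedly moving the cluster to a different version, the packages can be pinned:

apt-mark hold kubelet kubeadm kubectl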

9.5 Initialize the master

Pull the images in advance:

kubeadm config images pull --image-repository registry.cn-hangzhou.aliyuncs.com/google_containers --kubernetes-version 1.23.6

Initialize with an internal IP, adding an external IP to the API server certificate SANs:

kubeadm init \
--image-repository registry.cn-hangzhou.aliyuncs.com/google_containers \
--apiserver-advertise-address [internal-ip] \
--apiserver-cert-extra-sans [external-ip] \
--token-ttl 0 \
--pod-network-cidr 10.244.0.0/16 \
--service-cidr 10.96.0.0/12 \
--kubernetes-version 1.23.6

Or initialize with an internal IP only:

kubeadm init \
--image-repository registry.cn-hangzhou.aliyuncs.com/google_containers \
--apiserver-advertise-address [internal-ip] \
--token-ttl 0 \
--pod-network-cidr 10.244.0.0/16 \
--service-cidr 10.96.0.0/12 \
--kubernetes-version 1.23.6
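
Either way, kubeadm prints a kubeadm join command at the end of init; run it on each worker node. Because --token-ttl 0 makes the token non-expiring, the same command keeps working, and it can be regenerated at any time with:

kubeadm token create --print-join-command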

9.6 Copy the kubectl client config

mkdir -p $HOME/.kube && cp -f /etc/kubernetes/admin.conf $HOME/.kube/config
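
At this point kubectl can reach the cluster; the node will report NotReady until a network plugin is installed in step 11:

kubectl get nodes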

10. Renew certificates on a schedule

kubeadm issues certificates that are valid for one year, so they need periodic renewal. The script below renews all certificates, restarts the control-plane pods so they pick up the new ones, and refreshes the kubectl config.

vi /root/renew_pki.sh

#!/bin/bash

/usr/bin/kubeadm certs renew all \
&& /usr/bin/kubectl get pods -n kube-system |grep -E 'etcd|apiserver|controller-manager|scheduler' |awk '{print $1}' |xargs /usr/bin/kubectl delete pod -n kube-system \
&& cp -f /etc/kubernetes/admin.conf /root/.kube/config # -f rather than -i: a cron job has no terminal to answer an interactive overwrite prompt

crontab -e and add the entry below, which fires at 00:00 every January 1:

0 0 1 1 * bash /root/renew_pki.sh
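
Certificate lifetimes can be checked at any time, which is worth doing since the yearly schedule above is not tied to the cluster's creation date:

kubeadm certs check-expiration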

11. Install a network plugin

Pick a network plugin; flannel is used here. Save the manifest that follows as flannel.yaml, then apply it:

kubectl apply -f flannel.yaml

---
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
  name: psp.flannel.unprivileged
  annotations:
    seccomp.security.alpha.kubernetes.io/allowedProfileNames: docker/default
    seccomp.security.alpha.kubernetes.io/defaultProfileName: docker/default
    apparmor.security.beta.kubernetes.io/allowedProfileNames: runtime/default
    apparmor.security.beta.kubernetes.io/defaultProfileName: runtime/default
spec:
  privileged: false
  volumes:
  - configMap
  - secret
  - emptyDir
  - hostPath
  allowedHostPaths:
  - pathPrefix: "/etc/cni/net.d"
  - pathPrefix: "/etc/kube-flannel"
  - pathPrefix: "/run/flannel"
  readOnlyRootFilesystem: false
  # Users and groups
  runAsUser:
    rule: RunAsAny
  supplementalGroups:
    rule: RunAsAny
  fsGroup:
    rule: RunAsAny
  # Privilege Escalation
  allowPrivilegeEscalation: false
  defaultAllowPrivilegeEscalation: false
  # Capabilities
  allowedCapabilities: ['NET_ADMIN', 'NET_RAW']
  defaultAddCapabilities: []
  requiredDropCapabilities: []
  # Host namespaces
  hostPID: false
  hostIPC: false
  hostNetwork: true
  hostPorts:
  - min: 0
    max: 65535
  # SELinux
  seLinux:
    # SELinux is unused in CaaSP
    rule: 'RunAsAny'
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: flannel
rules:
- apiGroups: ['extensions']
  resources: ['podsecuritypolicies']
  verbs: ['use']
  resourceNames: ['psp.flannel.unprivileged']
- apiGroups:
  - ""
  resources:
  - pods
  verbs:
  - get
- apiGroups:
  - ""
  resources:
  - nodes
  verbs:
  - list
  - watch
- apiGroups:
  - ""
  resources:
  - nodes/status
  verbs:
  - patch
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: flannel
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: flannel
subjects:
- kind: ServiceAccount
  name: flannel
  namespace: kube-system
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: flannel
  namespace: kube-system
---
kind: ConfigMap
apiVersion: v1
metadata:
  name: kube-flannel-cfg
  namespace: kube-system
  labels:
    tier: node
    app: flannel
data:
  cni-conf.json: |
    {
      "name": "cbr0",
      "cniVersion": "0.3.1",
      "plugins": [
        {
          "type": "flannel",
          "delegate": {
            "hairpinMode": true,
            "isDefaultGateway": true
          }
        },
        {
          "type": "portmap",
          "capabilities": {
            "portMappings": true
          }
        }
      ]
    }
  net-conf.json: |
    {
      "Network": "10.244.0.0/16",
      "Backend": {
        "Type": "vxlan"
      }
    }
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: kube-flannel-ds
  namespace: kube-system
  labels:
    tier: node
    app: flannel
spec:
  selector:
    matchLabels:
      app: flannel
  template:
    metadata:
      labels:
        tier: node
        app: flannel
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: kubernetes.io/os
                operator: In
                values:
                - linux
      hostNetwork: true
      priorityClassName: system-node-critical
      tolerations:
      - operator: Exists
        effect: NoSchedule
      serviceAccountName: flannel
      initContainers:
      - name: install-cni-plugin
        #image: flannelcni/flannel-cni-plugin:v1.0.1 for ppc64le and mips64le (dockerhub limitations may apply)
        image: rancher/mirrored-flannelcni-flannel-cni-plugin:v1.0.1
        command:
        - cp
        args:
        - -f
        - /flannel
        - /opt/cni/bin/flannel
        volumeMounts:
        - name: cni-plugin
          mountPath: /opt/cni/bin
      - name: install-cni
        #image: flannelcni/flannel:v0.17.0 for ppc64le and mips64le (dockerhub limitations may apply)
        image: rancher/mirrored-flannelcni-flannel:v0.17.0
        command:
        - cp
        args:
        - -f
        - /etc/kube-flannel/cni-conf.json
        - /etc/cni/net.d/10-flannel.conflist
        volumeMounts:
        - name: cni
          mountPath: /etc/cni/net.d
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      containers:
      - name: kube-flannel
        #image: flannelcni/flannel:v0.17.0 for ppc64le and mips64le (dockerhub limitations may apply)
        image: rancher/mirrored-flannelcni-flannel:v0.17.0
        command:
        - /opt/bin/flanneld
        args:
        - --ip-masq
        - --kube-subnet-mgr
        resources:
          requests:
            cpu: "100m"
            memory: "500Mi"
          limits:
            cpu: "100m"
            memory: "500Mi"
        securityContext:
          privileged: false
          capabilities:
            add: ["NET_ADMIN", "NET_RAW"]
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        - name: EVENT_QUEUE_DEPTH
          value: "5000"
        volumeMounts:
        - name: run
          mountPath: /run/flannel
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
        - name: xtables-lock
          mountPath: /run/xtables.lock
      volumes:
      - name: run
        hostPath:
          path: /run/flannel
      - name: cni-plugin
        hostPath:
          path: /opt/cni/bin
      - name: cni
        hostPath:
          path: /etc/cni/net.d
      - name: flannel-cfg
        configMap:
          name: kube-flannel-cfg
      - name: xtables-lock
        hostPath:
          path: /run/xtables.lock
          type: FileOrCreate
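
After applying the manifest, the flannel DaemonSet should come up on every node and the nodes should turn Ready:

kubectl get pods -n kube-system -l app=flannel
kubectl get nodes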
