K8s Deployment

1. Server preparation (one master, two worker nodes):

192.168.43.100 k8smaster
192.168.43.101 k8snode1
192.168.43.102 k8snode2

2. CentOS version

CentOS 7 is recommended; keep the same version on every node.

[root@k8smaster ~]# cat /etc/redhat-release 
CentOS Linux release 7.4.1708 (Core)

3. Operating system preparation (run these steps on every node)

3.1 Disable the firewall
#Stop the firewall for the current session
[root@k8smaster ~]# systemctl stop firewalld
#Keep the firewall disabled across reboots
[root@k8smaster ~]# systemctl disable firewalld
3.2 Disable SELinux
#Disable temporarily (until reboot)
[root@k8smaster ~]# setenforce 0
#Disable permanently
[root@k8smaster ~]# vi /etc/selinux/config

# This file controls the state of SELinux on the system.
# SELINUX= can take one of these three values:
#     enforcing - SELinux security policy is enforced.
#     permissive - SELinux prints warnings instead of enforcing.
#     disabled - No SELinux policy is loaded.
# Change SELINUX=enforcing to SELINUX=disabled
#SELINUX=enforcing
SELINUX=disabled
# SELINUXTYPE= can take one of three values:
#     targeted - Targeted processes are protected,
#     minimum - Modification of targeted policy. Only selected processes are protected.
#     mls - Multi Level Security protection.
SELINUXTYPE=targeted
3.3 Disable swap
#Turn off swap for the current session
[root@k8smaster ~]# swapoff -a 
#Disable permanently
[root@k8smaster ~]# vi /etc/fstab
#Comment out the swap line
#
# /etc/fstab
# Created by anaconda on Sat Feb 24 15:39:59 2018
#
# Accessible filesystems, by reference, are maintained under '/dev/disk'
# See man pages fstab(5), findfs(8), mount(8) and/or blkid(8) for more info
#
/dev/mapper/centos-root /                       xfs     defaults        0 0
UUID=399770e1-a642-4e57-82e2-e91246f4a907 /boot                   xfs     defaults        0 0
#The swap line, commented out:
#/dev/mapper/centos-swap swap                    swap    defaults        0 0
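A quick way to confirm swap is fully off (a verification step added here, not part of the original transcript):

#The Swap line in free should read all zeros, and swapon -s should print nothing
[root@k8smaster ~]# free -m
[root@k8smaster ~]# swapon -s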
3.4 Map hostnames to IP addresses (use the real IPs and hostnames)
[root@k8smaster ~]# vim /etc/hosts
#Add the following entries:
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6

192.168.43.100  k8smaster
192.168.43.101  k8snode1
192.168.43.102  k8snode2
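The same entries are needed in /etc/hosts on every node. One way to distribute the file from the master (a sketch assuming root SSH access to the workers):

#Copy the hosts file from the master to both worker nodes
for h in k8snode1 k8snode2; do
  scp /etc/hosts root@$h:/etc/hosts
done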
3.5 Pass bridged IPv4 traffic to iptables chains
[root@k8smaster ~]# cat > /etc/sysctl.d/k8s.conf << EOF
> net.bridge.bridge-nf-call-ip6tables = 1
> net.bridge.bridge-nf-call-iptables = 1
> EOF
[root@localhost ~]# sysctl --system
* Applying /usr/lib/sysctl.d/00-system.conf ...
* Applying /usr/lib/sysctl.d/10-default-yama-scope.conf ...
kernel.yama.ptrace_scope = 0
* Applying /usr/lib/sysctl.d/50-default.conf ...
kernel.sysrq = 16
kernel.core_uses_pid = 1
net.ipv4.conf.default.rp_filter = 1
net.ipv4.conf.all.rp_filter = 1
net.ipv4.conf.default.accept_source_route = 0
net.ipv4.conf.all.accept_source_route = 0
net.ipv4.conf.default.promote_secondaries = 1
net.ipv4.conf.all.promote_secondaries = 1
fs.protected_hardlinks = 1
fs.protected_symlinks = 1
* Applying /etc/sysctl.d/99-sysctl.conf ...
* Applying /etc/sysctl.d/k8s.conf ...
* Applying /etc/sysctl.conf ...
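The two net.bridge keys only exist while the br_netfilter kernel module is loaded. If sysctl --system reports them as unknown keys, load the module first (a common extra step on minimal installs, not shown in the transcript above):

#Load br_netfilter now and on every boot, then verify the settings
[root@k8smaster ~]# modprobe br_netfilter
[root@k8smaster ~]# echo br_netfilter > /etc/modules-load.d/br_netfilter.conf
[root@k8smaster ~]# sysctl net.bridge.bridge-nf-call-iptables net.bridge.bridge-nf-call-ip6tables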

4. Install Docker

4.1 Download the repo file and install
[root@k8smaster ~]# wget https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo -O /etc/yum.repos.d/docker-ce.repo
--2021-03-04 19:51:21--  https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
Resolving mirrors.aliyun.com (mirrors.aliyun.com)... 124.165.126.243, 124.165.126.248, 61.162.46.241, ...
Connecting to mirrors.aliyun.com (mirrors.aliyun.com)|124.165.126.243|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 1919 (1.9K) [application/octet-stream]
Saving to: ‘/etc/yum.repos.d/docker-ce.repo’

100%[==============================================================================>] 1,919       --.-K/s   in 0s      

2021-03-04 19:51:21 (139 MB/s) - ‘/etc/yum.repos.d/docker-ce.repo’ saved [1919/1919]

[root@k8smaster ~]# yum install docker
========================================================================================================================
 Package                                      Arch         Version                                  Repository     Size
========================================================================================================================
Installing:
 docker                                       x86_64       2:1.13.1-203.git0be3e21.el7.centos       extras         18 M
Installing for dependencies:
 PyYAML                                       x86_64       3.10-11.el7                              base          153 k

Transaction Summary
========================================================================================================================
Install  1 Package (+30 Dependent packages)

Total download size: 26 M
Installed size: 91 M
Is this ok [y/d/N]: y
4.2 Enable Docker on boot
[root@k8smaster ~]# systemctl enable docker
Created symlink from /etc/systemd/system/multi-user.target.wants/docker.service to /usr/lib/systemd/system/docker.service.
4.3 Start Docker
[root@k8smaster ~]# systemctl start docker
4.4 Check the Docker version
[root@k8smaster ~]# docker --version
Docker version 1.13.1, build 0be3e21/1.13.1
4.5 Change the Docker cgroup driver
[root@k8smaster ~]# vi /etc/docker/daemon.json
{
  "exec-opts": ["native.cgroupdriver=systemd"]
}

4.6 Restart Docker
[root@k8smaster ~]# systemctl restart docker
#If the restart fails, remove the setting added in 4.5
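Once Docker is back up, it is worth confirming the new cgroup driver took effect (a quick check; kubeadm warns during init when Docker still uses cgroupfs):

#Should print: Cgroup Driver: systemd
[root@k8smaster ~]# docker info 2>/dev/null | grep -i 'cgroup driver'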
4.7 Check Docker status
[root@k8smaster ~]# systemctl status docker
● docker.service - Docker Application Container Engine
   Loaded: loaded (/usr/lib/systemd/system/docker.service; enabled; vendor preset: disabled)
   Active: active (running) since Thu 2021-03-04 20:11:38 CST; 1min 0s ago
     Docs: http://docs.docker.com
 Main PID: 48307 (dockerd-current)
   CGroup: /system.slice/docker.service
           ├─48307 /usr/bin/dockerd-current --add-runtime docker-runc=/usr/libexec/docker/docker-runc-current --defau...
           └─48315 /usr/bin/docker-containerd-current -l unix:///var/run/docker/libcontainerd/docker-containerd.sock ...

Mar 04 20:11:37 k8smaster dockerd-current[48307]: time="2021-03-04T20:11:37.573321729+08:00" level=info msg="libc...315"
Mar 04 20:11:38 k8smaster dockerd-current[48307]: time="2021-03-04T20:11:38.630508792+08:00" level=info msg="Grap...nds"
Mar 04 20:11:38 k8smaster dockerd-current[48307]: time="2021-03-04T20:11:38.631340400+08:00" level=info msg="Load...rt."
Mar 04 20:11:38 k8smaster dockerd-current[48307]: time="2021-03-04T20:11:38.639911718+08:00" level=info msg="Fire...lse"
Mar 04 20:11:38 k8smaster dockerd-current[48307]: time="2021-03-04T20:11:38.694346880+08:00" level=info msg="Defa...ess"
Mar 04 20:11:38 k8smaster dockerd-current[48307]: time="2021-03-04T20:11:38.712225773+08:00" level=info msg="Load...ne."
Mar 04 20:11:38 k8smaster dockerd-current[48307]: time="2021-03-04T20:11:38.725328261+08:00" level=info msg="Daem...ion"
Mar 04 20:11:38 k8smaster dockerd-current[48307]: time="2021-03-04T20:11:38.725418154+08:00" level=info msg="Dock...13.1
Mar 04 20:11:38 k8smaster dockerd-current[48307]: time="2021-03-04T20:11:38.729251684+08:00" level=info msg="API ...ock"
Mar 04 20:11:38 k8smaster systemd[1]: Started Docker Application Container Engine.
Hint: Some lines were ellipsized, use -l to show in full.
4.8 Add the Aliyun Kubernetes YUM repository
[root@k8smaster ~]# cat > /etc/yum.repos.d/kubernetes.repo << EOF
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
4.9 List available docker-ce versions
[root@k8smaster ~]# yum list docker-ce --showduplicates | sort -r
docker-ce.x86_64            17.12.1.ce-1.el7.centos             docker-ce-stable
docker-ce.x86_64            17.12.0.ce-1.el7.centos             docker-ce-stable
docker-ce.x86_64            17.09.1.ce-1.el7.centos             docker-ce-stable
docker-ce.x86_64            17.09.0.ce-1.el7.centos             docker-ce-stable
docker-ce.x86_64            17.06.2.ce-1.el7.centos             docker-ce-stable
docker-ce.x86_64            17.06.1.ce-1.el7.centos             docker-ce-stable
docker-ce.x86_64            17.06.0.ce-1.el7.centos             docker-ce-stable
docker-ce.x86_64            17.03.2.ce-1.el7.centos             docker-ce-stable
docker-ce.x86_64            17.03.1.ce-1.el7.centos             docker-ce-stable
docker-ce.x86_64            17.03.0.ce-1.el7.centos             docker-ce-stable
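If you would rather run Docker CE than the distro docker package installed in 4.1, a pinned install might look like this (a sketch; pick the exact version string from the list above and remove the old package first):

#Remove the distro docker package, then install a pinned docker-ce release
yum remove -y docker docker-client docker-common
yum install -y docker-ce-17.03.2.ce-1.el7.centos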

5. Install kubeadm, kubelet, and kubectl

5.1 Install

Kubernetes requires the master node and worker nodes to run the same version; a version mismatch leads to hard-to-diagnose problems. This section shows how to install a specific Kubernetes version with yum on CentOS.

To pin a version, pass it as part of the package name when running yum:

#Pin a specific version:
yum install -y kubelet-<version> kubectl-<version> kubeadm-<version>
#For example:
yum install -y kubelet-1.20.4 kubectl-1.20.4 kubeadm-1.20.4
#Or install the latest available version:
yum install -y kubelet kubeadm kubectl
[root@k8smaster ~]# yum install -y kubelet kubeadm kubectl
Loaded plugins: fastestmirror, product-id, search-disabled-repos, subscription-manager
Loading mirror speeds from cached hostfile
 * base: mirrors.neusoft.edu.cn
 * extras: mirrors.aliyun.com
 * updates: mirrors.aliyun.com
Resolving Dependencies
--> Running transaction check
---> Package kubeadm.x86_64 0:1.20.4-0 will be installed
--> Processing Dependency: kubernetes-cni >= 0.8.6 for package: kubeadm-1.20.4-0.x86_64
--> Processing Dependency: cri-tools >= 1.13.0 for package: kubeadm-1.20.4-0.x86_64
---> Package kubectl.x86_64 0:1.20.4-0 will be installed
---> Package kubelet.x86_64 0:1.20.4-0 will be installed
--> Processing Dependency: socat for package: kubelet-1.20.4-0.x86_64
--> Processing Dependency: conntrack for package: kubelet-1.20.4-0.x86_64
--> Running transaction check
---> Package conntrack-tools.x86_64 0:1.4.4-7.el7 will be installed
--> Processing Dependency: libnetfilter_cttimeout.so.1(LIBNETFILTER_CTTIMEOUT_1.1)(64bit) for package: conntrack-tools-1.4.4-7.el7.x86_64
--> Processing Dependency: libnetfilter_cttimeout.so.1(LIBNETFILTER_CTTIMEOUT_1.0)(64bit) for package: conntrack-tools-1.4.4-7.el7.x86_64
--> Processing Dependency: libnetfilter_cthelper.so.0(LIBNETFILTER_CTHELPER_1.0)(64bit) for package: conntrack-tools-1.4.4-7.el7.x86_64
--> Processing Dependency: libnetfilter_queue.so.1()(64bit) for package: conntrack-tools-1.4.4-7.el7.x86_64
--> Processing Dependency: libnetfilter_cttimeout.so.1()(64bit) for package: conntrack-tools-1.4.4-7.el7.x86_64
--> Processing Dependency: libnetfilter_cthelper.so.0()(64bit) for package: conntrack-tools-1.4.4-7.el7.x86_64
---> Package cri-tools.x86_64 0:1.13.0-0 will be installed
---> Package kubernetes-cni.x86_64 0:0.8.7-0 will be installed
---> Package socat.x86_64 0:1.7.3.2-2.el7 will be installed
--> Running transaction check
---> Package libnetfilter_cthelper.x86_64 0:1.0.0-11.el7 will be installed
---> Package libnetfilter_cttimeout.x86_64 0:1.0.0-7.el7 will be installed
---> Package libnetfilter_queue.x86_64 0:1.0.2-2.el7_2 will be installed
--> Finished Dependency Resolution

Dependencies Resolved

========================================================================================================================
 Package                              Arch                 Version                       Repository                Size
========================================================================================================================
Installing:
 kubeadm                              x86_64               1.20.4-0                      kubernetes               8.3 M
 kubectl                              x86_64               1.20.4-0                      kubernetes               8.5 M
 kubelet                              x86_64               1.20.4-0                      kubernetes                20 M
Installing for dependencies:
 conntrack-tools                      x86_64               1.4.4-7.el7                   base                     187 k
 cri-tools                            x86_64               1.13.0-0                      kubernetes               5.1 M
 kubernetes-cni                       x86_64               0.8.7-0                       kubernetes                19 M
 libnetfilter_cthelper                x86_64               1.0.0-11.el7                  base                      18 k
 libnetfilter_cttimeout               x86_64               1.0.0-7.el7                   base                      18 k
 libnetfilter_queue                   x86_64               1.0.2-2.el7_2                 base                      23 k
 socat                                x86_64               1.7.3.2-2.el7                 base                     290 k

Transaction Summary
========================================================================================================================
Install  3 Packages (+7 Dependent packages)
5.2 Enable kubelet on boot
[root@k8smaster ~]# systemctl enable kubelet
Created symlink from /etc/systemd/system/multi-user.target.wants/kubelet.service to /usr/lib/systemd/system/kubelet.service.
5.3 Deploy the Kubernetes master (not executed on worker nodes)
5.3.1 Initialize with kubeadm
[root@k8smaster ~]# kubeadm init --apiserver-advertise-address=192.168.43.100 --image-repository registry.aliyuncs.com/google_containers --service-cidr=10.1.0.0/16 --kubernetes-version v1.20.4 --pod-network-cidr=10.244.0.0/16
[init] Using Kubernetes version: v1.20.4
[preflight] Running pre-flight checks
	[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [k8smaster kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.1.0.1 192.168.43.100]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [k8smaster localhost] and IPs [192.168.43.100 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [k8smaster localhost] and IPs [192.168.43.100 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.
[apiclient] All control plane components are healthy after 104.001975 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.20" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node k8smaster as control-plane by adding the labels "node-role.kubernetes.io/master=''" and "node-role.kubernetes.io/control-plane='' (deprecated)"
[mark-control-plane] Marking the node k8smaster as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: igautj.9p1cd3n3mgst7fmr
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.43.100:6443 --token igautj.9p1cd3n3mgst7fmr \
    --discovery-token-ca-cert-hash sha256:97a33442714c6a6b5a84df1598f34bd08c367f17b4ea4ca07c83403242a82ff9 
[root@k8smaster ~]# mkdir -p $HOME/.kube
[root@k8smaster ~]# sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
[root@k8smaster ~]# sudo chown $(id -u):$(id -g) $HOME/.kube/config
#Add an environment variable so kubectl always finds the admin kubeconfig
[root@k8smaster ~]# vi .bash_profile
# .bash_profile

# Get the aliases and functions
if [ -f ~/.bashrc ]; then
        . ~/.bashrc
fi

# User specific environment and startup programs

PATH=$PATH:$HOME/bin

export PATH

export KUBECONFIG=/etc/kubernetes/admin.conf
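After reloading the profile, a quick sanity check confirms kubectl can reach the new control plane (verification commands added here, not in the original run):

[root@k8smaster ~]# source ~/.bash_profile
[root@k8smaster ~]# kubectl cluster-info
[root@k8smaster ~]# kubectl get nodes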
5.3.2 Install the pod network add-on
5.3.2.1 The kube-flannel.yml file
---
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
  name: psp.flannel.unprivileged
  annotations:
    seccomp.security.alpha.kubernetes.io/allowedProfileNames: docker/default
    seccomp.security.alpha.kubernetes.io/defaultProfileName: docker/default
    apparmor.security.beta.kubernetes.io/allowedProfileNames: runtime/default
    apparmor.security.beta.kubernetes.io/defaultProfileName: runtime/default
spec:
  privileged: false
  volumes:
    - configMap
    - secret
    - emptyDir
    - hostPath
  allowedHostPaths:
    - pathPrefix: "/etc/cni/net.d"
    - pathPrefix: "/etc/kube-flannel"
    - pathPrefix: "/run/flannel"
  readOnlyRootFilesystem: false
  # Users and groups
  runAsUser:
    rule: RunAsAny
  supplementalGroups:
    rule: RunAsAny
  fsGroup:
    rule: RunAsAny
  # Privilege Escalation
  allowPrivilegeEscalation: false
  defaultAllowPrivilegeEscalation: false
  # Capabilities
  allowedCapabilities: ['NET_ADMIN']
  defaultAddCapabilities: []
  requiredDropCapabilities: []
  # Host namespaces
  hostPID: false
  hostIPC: false
  hostNetwork: true
  hostPorts:
  - min: 0
    max: 65535
  # SELinux
  seLinux:
    # SELinux is unused in CaaSP
    rule: 'RunAsAny'
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: flannel
rules:
  - apiGroups: ['extensions']
    resources: ['podsecuritypolicies']
    verbs: ['use']
    resourceNames: ['psp.flannel.unprivileged']
  - apiGroups:
      - ""
    resources:
      - pods
    verbs:
      - get
  - apiGroups:
      - ""
    resources:
      - nodes
    verbs:
      - list
      - watch
  - apiGroups:
      - ""
    resources:
      - nodes/status
    verbs:
      - patch
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: flannel
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: flannel
subjects:
- kind: ServiceAccount
  name: flannel
  namespace: kube-system
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: flannel
  namespace: kube-system
---
kind: ConfigMap
apiVersion: v1
metadata:
  name: kube-flannel-cfg
  namespace: kube-system
  labels:
    tier: node
    app: flannel
data:
  cni-conf.json: |
    {
      "name": "cbr0",
      "cniVersion": "0.3.1",
      "plugins": [
        {
          "type": "flannel",
          "delegate": {
            "hairpinMode": true,
            "isDefaultGateway": true
          }
        },
        {
          "type": "portmap",
          "capabilities": {
            "portMappings": true
          }
        }
      ]
    }
  net-conf.json: |
    {
      "Network": "10.244.0.0/16",
      "Backend": {
        "Type": "vxlan"
      }
    }
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: kube-flannel-ds-amd64
  namespace: kube-system
  labels:
    tier: node
    app: flannel
spec:
  selector:
    matchLabels:
      app: flannel
  template:
    metadata:
      labels:
        tier: node
        app: flannel
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
              - matchExpressions:
                  - key: kubernetes.io/os
                    operator: In
                    values:
                      - linux
                  - key: kubernetes.io/arch
                    operator: In
                    values:
                      - amd64
      hostNetwork: true
      tolerations:
      - operator: Exists
        effect: NoSchedule
      serviceAccountName: flannel
      initContainers:
      - name: install-cni
        image: quay.io/coreos/flannel:v0.12.0-amd64
        command:
        - cp
        args:
        - -f
        - /etc/kube-flannel/cni-conf.json
        - /etc/cni/net.d/10-flannel.conflist
        volumeMounts:
        - name: cni
          mountPath: /etc/cni/net.d
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      containers:
      - name: kube-flannel
        image: quay.io/coreos/flannel:v0.12.0-amd64
        command:
        - /opt/bin/flanneld
        args:
        - --ip-masq
        - --kube-subnet-mgr
        resources:
          requests:
            cpu: "100m"
            memory: "50Mi"
          limits:
            cpu: "100m"
            memory: "50Mi"
        securityContext:
          privileged: false
          capabilities:
            add: ["NET_ADMIN"]
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        volumeMounts:
        - name: run
          mountPath: /run/flannel
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      volumes:
        - name: run
          hostPath:
            path: /run/flannel
        - name: cni
          hostPath:
            path: /etc/cni/net.d
        - name: flannel-cfg
          configMap:
            name: kube-flannel-cfg
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: kube-flannel-ds-arm64
  namespace: kube-system
  labels:
    tier: node
    app: flannel
spec:
  selector:
    matchLabels:
      app: flannel
  template:
    metadata:
      labels:
        tier: node
        app: flannel
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
              - matchExpressions:
                  - key: kubernetes.io/os
                    operator: In
                    values:
                      - linux
                  - key: kubernetes.io/arch
                    operator: In
                    values:
                      - arm64
      hostNetwork: true
      tolerations:
      - operator: Exists
        effect: NoSchedule
      serviceAccountName: flannel
      initContainers:
      - name: install-cni
        image: quay.io/coreos/flannel:v0.12.0-arm64
        command:
        - cp
        args:
        - -f
        - /etc/kube-flannel/cni-conf.json
        - /etc/cni/net.d/10-flannel.conflist
        volumeMounts:
        - name: cni
          mountPath: /etc/cni/net.d
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      containers:
      - name: kube-flannel
        image: quay.io/coreos/flannel:v0.12.0-arm64
        command:
        - /opt/bin/flanneld
        args:
        - --ip-masq
        - --kube-subnet-mgr
        resources:
          requests:
            cpu: "100m"
            memory: "50Mi"
          limits:
            cpu: "100m"
            memory: "50Mi"
        securityContext:
          privileged: false
          capabilities:
             add: ["NET_ADMIN"]
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        volumeMounts:
        - name: run
          mountPath: /run/flannel
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      volumes:
        - name: run
          hostPath:
            path: /run/flannel
        - name: cni
          hostPath:
            path: /etc/cni/net.d
        - name: flannel-cfg
          configMap:
            name: kube-flannel-cfg
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: kube-flannel-ds-arm
  namespace: kube-system
  labels:
    tier: node
    app: flannel
spec:
  selector:
    matchLabels:
      app: flannel
  template:
    metadata:
      labels:
        tier: node
        app: flannel
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
              - matchExpressions:
                  - key: kubernetes.io/os
                    operator: In
                    values:
                      - linux
                  - key: kubernetes.io/arch
                    operator: In
                    values:
                      - arm
      hostNetwork: true
      tolerations:
      - operator: Exists
        effect: NoSchedule
      serviceAccountName: flannel
      initContainers:
      - name: install-cni
        image: quay.io/coreos/flannel:v0.12.0-arm
        command:
        - cp
        args:
        - -f
        - /etc/kube-flannel/cni-conf.json
        - /etc/cni/net.d/10-flannel.conflist
        volumeMounts:
        - name: cni
          mountPath: /etc/cni/net.d
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      containers:
      - name: kube-flannel
        image: quay.io/coreos/flannel:v0.12.0-arm
        command:
        - /opt/bin/flanneld
        args:
        - --ip-masq
        - --kube-subnet-mgr
        resources:
          requests:
            cpu: "100m"
            memory: "50Mi"
          limits:
            cpu: "100m"
            memory: "50Mi"
        securityContext:
          privileged: false
          capabilities:
             add: ["NET_ADMIN"]
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        volumeMounts:
        - name: run
          mountPath: /run/flannel
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      volumes:
        - name: run
          hostPath:
            path: /run/flannel
        - name: cni
          hostPath:
            path: /etc/cni/net.d
        - name: flannel-cfg
          configMap:
            name: kube-flannel-cfg
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: kube-flannel-ds-ppc64le
  namespace: kube-system
  labels:
    tier: node
    app: flannel
spec:
  selector:
    matchLabels:
      app: flannel
  template:
    metadata:
      labels:
        tier: node
        app: flannel
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
              - matchExpressions:
                  - key: kubernetes.io/os
                    operator: In
                    values:
                      - linux
                  - key: kubernetes.io/arch
                    operator: In
                    values:
                      - ppc64le
      hostNetwork: true
      tolerations:
      - operator: Exists
        effect: NoSchedule
      serviceAccountName: flannel
      initContainers:
      - name: install-cni
        image: quay.io/coreos/flannel:v0.12.0-ppc64le
        command:
        - cp
        args:
        - -f
        - /etc/kube-flannel/cni-conf.json
        - /etc/cni/net.d/10-flannel.conflist
        volumeMounts:
        - name: cni
          mountPath: /etc/cni/net.d
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      containers:
      - name: kube-flannel
        image: quay.io/coreos/flannel:v0.12.0-ppc64le
        command:
        - /opt/bin/flanneld
        args:
        - --ip-masq
        - --kube-subnet-mgr
        resources:
          requests:
            cpu: "100m"
            memory: "50Mi"
          limits:
            cpu: "100m"
            memory: "50Mi"
        securityContext:
          privileged: false
          capabilities:
             add: ["NET_ADMIN"]
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        volumeMounts:
        - name: run
          mountPath: /run/flannel
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      volumes:
        - name: run
          hostPath:
            path: /run/flannel
        - name: cni
          hostPath:
            path: /etc/cni/net.d
        - name: flannel-cfg
          configMap:
            name: kube-flannel-cfg
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: kube-flannel-ds-s390x
  namespace: kube-system
  labels:
    tier: node
    app: flannel
spec:
  selector:
    matchLabels:
      app: flannel
  template:
    metadata:
      labels:
        tier: node
        app: flannel
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
              - matchExpressions:
                  - key: kubernetes.io/os
                    operator: In
                    values:
                      - linux
                  - key: kubernetes.io/arch
                    operator: In
                    values:
                      - s390x
      hostNetwork: true
      tolerations:
      - operator: Exists
        effect: NoSchedule
      serviceAccountName: flannel
      initContainers:
      - name: install-cni
        image: quay.io/coreos/flannel:v0.12.0-s390x
        command:
        - cp
        args:
        - -f
        - /etc/kube-flannel/cni-conf.json
        - /etc/cni/net.d/10-flannel.conflist
        volumeMounts:
        - name: cni
          mountPath: /etc/cni/net.d
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      containers:
      - name: kube-flannel
        image: quay.io/coreos/flannel:v0.12.0-s390x
        command:
        - /opt/bin/flanneld
        args:
        - --ip-masq
        - --kube-subnet-mgr
        resources:
          requests:
            cpu: "100m"
            memory: "50Mi"
          limits:
            cpu: "100m"
            memory: "50Mi"
        securityContext:
          privileged: false
          capabilities:
             add: ["NET_ADMIN"]
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        volumeMounts:
        - name: run
          mountPath: /run/flannel
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      volumes:
        - name: run
          hostPath:
            path: /run/flannel
        - name: cni
          hostPath:
            path: /etc/cni/net.d
        - name: flannel-cfg
          configMap:
            name: kube-flannel-cfg
5.3.2.2 Apply the manifest
kubectl apply -f kube-flannel.yml
5.3.3 Verify the deployment
kubectl get pods -n kube-system
If the setup fails, you can run kubeadm reset and configure again.
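To follow the rollout live until the flannel and control-plane pods reach Running (a convenience command, not in the original):

#Watch kube-system pods; press Ctrl-C once everything is Running
kubectl get pods -n kube-system -w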
5.4 Join worker nodes to the cluster

To add a node to the cluster, run the kubeadm join command printed by kubeadm init on that node:

[root@k8snode1 ~]# kubeadm join 192.168.43.100:6443 --token b9nz9g.ijowwzgysk51j88d \
>     --discovery-token-ca-cert-hash sha256:273d35ffe2fa7d190c68f3fdc9350b3c881009eb9a2f4e72c51235ef90ce738c
[root@k8snode2 ~]# kubeadm join 192.168.43.100:6443 --token b9nz9g.ijowwzgysk51j88d \
>     --discovery-token-ca-cert-hash sha256:273d35ffe2fa7d190c68f3fdc9350b3c881009eb9a2f4e72c51235ef90ce738c
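The bootstrap token printed by kubeadm init expires after 24 hours. If it has expired by the time a node joins, generate a fresh join command on the master (standard kubeadm behavior, not shown in the original run):

[root@k8smaster ~]# kubeadm token create --print-join-command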
5.5 Check node status
[root@k8smaster ~]# kubectl get nodes
NAME        STATUS     ROLES                  AGE    VERSION
k8smaster   Ready      control-plane,master   8m7s   v1.20.4
k8snode1    NotReady   <none>                 78s    v1.20.4
k8snode2    NotReady   <none>                 62s    v1.20.4
#If some nodes are NotReady, wait a while (the network add-on may still be coming up), or check on the node itself
[root@k8snode1 ~]# kubectl get nodes
The connection to the server localhost:8080 was refused - did you specify the right host or port?
#This error means kubectl on the worker node has no kubeconfig and falls back to localhost:8080;
#it is not a firewall problem. kubectl is normally run from the master. To use it on a worker,
#copy /etc/kubernetes/admin.conf from the master to the node and export KUBECONFIG as in 5.3.1.
#Once the flannel pods are running, the nodes become Ready on their own:
[root@k8smaster ~]# kubectl get nodes
NAME        STATUS   ROLES                  AGE     VERSION
k8smaster   Ready    control-plane,master   12m     v1.20.4
k8snode1    Ready    <none>                 5m43s   v1.20.4
k8snode2    Ready    <none>                 5m27s   v1.20.4
5.6 Create an admin user
#Contents of admin.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: admin-user
  namespace: kube-system

[root@k8smaster ~]# kubectl apply -f admin.yaml 
serviceaccount/admin-user created
5.7 Bind the admin user to the cluster-admin role
#Contents of bind.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: admin-user
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: admin-user
  namespace: kube-system
  
[root@k8smaster ~]# kubectl apply -f bind.yaml 
clusterrolebinding.rbac.authorization.k8s.io/admin-user created
5.8 Retrieve the admin user's token
[root@k8smaster ~]# kubectl -n kube-system describe secret | grep -A 10 admin-user
Name:         admin-user-token-v4n8s
Namespace:    kube-system
Labels:       <none>
Annotations:  kubernetes.io/service-account.name: admin-user
              kubernetes.io/service-account.uid: 3ed4c4aa-79db-45d6-ad4d-bdc0f2b2a8ef

Type:  kubernetes.io/service-account-token

Data
====
ca.crt:     1066 bytes
namespace:  11 bytes
token:      eyJhbGciOiJSUzI1NiIsImtpZCI6IkxyVmRJcWFiV0sxUGJqeFVGNUpwakt6TGt6SVRxLXV0Qkw5SmxpYk9EMW8ifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJhZG1pbi11c2VyLXRva2VuLXY0bjhzIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQubmFtZSI6ImFkbWluLXVzZXIiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC51aWQiOiIzZWQ0YzRhYS03OWRiLTQ1ZDYtYWQ0ZC1iZGMwZjJiMmE4ZWYiLCJzdWIiOiJzeXN0ZW06c2VydmljZWFjY291bnQ6a3ViZS1zeXN0ZW06YWRtaW4tdXNlciJ9.c62Sp5-xBubZXspb1FZWH9YZAFQqulQyNuXczdaztpMmwTrrNj7kRZZQwDb9hGqZeTvP76-utAhKU1KyUQoqhg_kNC34qQj3PqUg3ivuZIZ2_0ZpKbUJJNk1qsnXpyVcEbSdC4NtZC913PcRDaJCi6a7afaV0VibXq5Xl6dZ66Uv6ZIPbs7Gcu5bUnuW36RDN9r3oFN2qnEOYuKDeL5WdYkNOOkf3kofnPeICqWZntYWrftPQzcCT3D8AZyRszRzFpDe0vnUDTV3Qb09K9enBnd1kRK2ZyLFder6yOJiaDH9LO2B-kbqxW0_RgG0-Hx87eIqJj1TeqZ4wSeQ-UeI2A
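An equivalent way to print only the token is to read it straight from the ServiceAccount's auto-created secret (a convenience one-liner, not in the original; Kubernetes 1.20 still auto-creates these secrets):

kubectl -n kube-system get secret \
  $(kubectl -n kube-system get sa admin-user -o jsonpath='{.secrets[0].name}') \
  -o jsonpath='{.data.token}' | base64 -d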
5.9 Test the Kubernetes cluster
[root@k8smaster ~]# kubectl create deployment nginx --image=nginx
deployment.apps/nginx created
[root@k8smaster ~]# kubectl expose deployment nginx --port=80 --type=NodePort
service/nginx exposed
[root@k8smaster ~]# kubectl get pod,svc
NAME                         READY   STATUS              RESTARTS   AGE
pod/nginx-6799fc88d8-hchk9   0/1     ContainerCreating   0          18s

NAME                 TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)        AGE
service/kubernetes   ClusterIP   10.1.0.1     <none>        443/TCP        47m
service/nginx        NodePort    10.1.1.190   <none>        80:32040/TCP   9s
[root@k8smaster ~]# kubectl get pod,svc -o wide
NAME                         READY   STATUS              RESTARTS   AGE   IP       NODE       NOMINATED NODE   READINESS GATES
pod/nginx-6799fc88d8-hchk9   0/1     ContainerCreating   0          29s   <none>   k8snode1   <none>           <none>

NAME                 TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)        AGE   SELECTOR
service/kubernetes   ClusterIP   10.1.0.1     <none>        443/TCP        47m   <none>
service/nginx        NodePort    10.1.1.190   <none>        80:32040/TCP   20s   app=nginx
#The nginx welcome page is now reachable at 192.168.43.100:32040
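A quick check from any machine that can reach the nodes (the NodePort, 32040 here, is assigned per deployment):

#Any node IP works for a NodePort service
curl http://192.168.43.100:32040
curl http://192.168.43.101:32040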

PS

1. How to change the hostname
#CentOS 6
[root@centos6 ~]$ hostname                                              # Show the current hostname
centos6.magedu.com
[root@centos6 ~]$ vim /etc/sysconfig/network                            # Edit the HOSTNAME line in the network file (takes effect after reboot)
[root@centos6 ~]$ cat /etc/sysconfig/network                            # Verify the change
NETWORKING=yes
HOSTNAME=centos66.magedu.com
[root@centos6 ~]$ hostname centos66.magedu.com                          # Set the hostname for the current session (effective immediately)
[root@centos6 ~]$ vim /etc/hosts                                        # Edit /etc/hosts and add the hostname to the 127.0.0.1 line
[root@centos6 ~]$ cat /etc/hosts                                        # Verify
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4 centos66.magedu.com
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6

#CentOS 7
[root@centos7 ~]$ hostnamectl set-hostname centos77.magedu.com             # Takes effect immediately and persists across reboots
[root@centos7 ~]$ hostname                                                 # Verify
centos77.magedu.com
[root@centos7 ~]$ vim /etc/hosts                                           # Edit /etc/hosts and add the hostname to the 127.0.0.1 line
[root@centos7 ~]$ cat /etc/hosts                                           # Verify
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4 centos77.magedu.com
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
2. How to check whether Docker is installed on CentOS 7

1) If it was installed as an RPM package, rpm -qa lists it; to search for a specific package, pipe through grep:

rpm -qa | grep docker

2) If it was installed via yum, use yum list installed; to search for a specific package, pipe through grep:

yum list installed | grep docker
3. How to uninstall a kubeadm-installed cluster
# Reset the cluster state on the node
kubeadm reset

# Remove the RPM packages
rpm -qa | grep kube | xargs rpm --nodeps -e

# Remove containers and images
docker ps -aq | xargs docker rm -f
docker images -qa | xargs docker rmi -f
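kubeadm reset does not clean up everything; per the reminders it prints, these follow-up steps are usually needed as well:

# Remove leftover CNI configuration and kubeconfig files
rm -rf /etc/cni/net.d $HOME/.kube
# Flush the iptables rules left behind by kube-proxy and flannel
iptables -F && iptables -t nat -F && iptables -t mangle -F && iptables -X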
4. Upgrade the kernel
The stock 3.10.x kernel on CentOS 7.x has known bugs that make Docker unstable, so upgrading the kernel is recommended.

#Install the ELRepo repository
rpm -Uvh http://www.elrepo.org/elrepo-release-7.0-3.el7.elrepo.noarch.rpm
# Install the latest long-term-support kernel
yum --enablerepo=elrepo-kernel install -y kernel-lt
# List the available kernel boot entries
cat /boot/grub2/grub.cfg | grep menuentry
# Set the new kernel as the default boot entry
grub2-set-default "CentOS Linux (4.4.230-1.el7.elrepo.x86_64) 7 (Core)"
# Check the default boot entry
grub2-editenv list
# Reboot to activate the new kernel
reboot
# Verify the running kernel version
uname -r

Common problems

1 The kubelet is not running

This error is likely caused by:
- The kubelet is not running
- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)

Open /usr/lib/systemd/system/docker.service and, on the --exec-opt line shown below, change native.cgroupdriver=systemd to cgroupfs:

[root@k8smaster ~]# vim /usr/lib/systemd/system/docker.service
[Unit]
Description=Docker Application Container Engine
Documentation=http://docs.docker.com
After=network.target
Wants=docker-storage-setup.service
Requires=docker-cleanup.timer

[Service]
Type=notify
NotifyAccess=main
EnvironmentFile=-/run/containers/registries.conf
EnvironmentFile=-/etc/sysconfig/docker
EnvironmentFile=-/etc/sysconfig/docker-storage
EnvironmentFile=-/etc/sysconfig/docker-network
Environment=GOTRACEBACK=crash
Environment=DOCKER_HTTP_HOST_COMPAT=1
Environment=PATH=/usr/libexec/docker:/usr/bin:/usr/sbin
ExecStart=/usr/bin/dockerd-current \
          --add-runtime docker-runc=/usr/libexec/docker/docker-runc-current \
          --default-runtime=docker-runc \
          --exec-opt native.cgroupdriver=cgroupfs \
          --userland-proxy-path=/usr/libexec/docker/docker-proxy-current \
          --init-path=/usr/libexec/docker/docker-init-current \
          --seccomp-profile=/etc/docker/seccomp.json \
          $OPTIONS \
          $DOCKER_STORAGE_OPTIONS \
          $DOCKER_NETWORK_OPTIONS \
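After editing the unit file, systemd must re-read it before the change takes effect:

[root@k8smaster ~]# systemctl daemon-reload
[root@k8smaster ~]# systemctl restart docker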