Understanding and Consolidating K8s Deployment and Basic Components (Part 1)

1. Pod

A Kubernetes cluster can run many Pods. Each Pod holds one or more containers; it is a collection of containers and the smallest object Kubernetes manages. Pods can be given names.


2. Label

For example, objects such as Jobs and Deployments can use labels to select, say, our database Pods.

A Label is a key/value pair attached to a Kubernetes object to identify it (a label key name can be at most 63 characters, optionally prefixed by a DNS subdomain of up to 253 characters; a value can be empty or a string of at most 63 characters).
Labels are not unique; in practice many objects (such as Pods) carry the same label to mark a specific application.
Once labels are defined, other objects can use a Label Selector to select a group of objects with the same label (for example, a Service uses labels to select a group of Pods). Label Selectors support the following forms:
● Equality-based, e.g. app=nginx and env!=production
● Set-based, e.g. env in (production, qa)
● Multiple labels (combined with AND), e.g. app=nginx,env=test


A label is simply a key/value pair attached to an object such as a Pod. Labels should capture identifying characteristics of the object that are meaningful to users (so you can tell at a glance that, say, this Pod is the database). Labels are eventually indexed and reverse-indexed for efficient queries and watches, and they are used in the UI or CLI for sorting, grouping and so on. We do not want to pollute labels with non-identifying information, especially large or structured data; non-identifying information belongs in Annotations instead.
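As a quick illustration (not from the original article; the names nginx-pod, app=nginx, env=test and the image tag are made up), a Pod carrying labels and kubectl queries that select it might look like this:

apiVersion: v1
kind: Pod
metadata:
  name: nginx-pod        # hypothetical Pod name
  labels:
    app: nginx           # which application this Pod belongs to
    env: test            # which environment it runs in
spec:
  containers:
  - name: nginx
    image: nginx:1.21    # illustrative image

# equality-based and set-based selectors against the labels above
kubectl get pods -l app=nginx,env=test
kubectl get pods -l 'env in (production, qa)'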

3. Namespace

Namespaces provide isolation. They are used to separate Pods into development, testing, production and other environments. Different Namespaces can contain objects and labels with exactly the same names.
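For example (a hedged sketch; the namespace name dev is made up):

kubectl create namespace dev           # create an isolated namespace
kubectl get pods -n dev                # only Pods in the dev namespace
kubectl get pods --all-namespaces      # Pods across every namespace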

4. Deployment

Should we create Pods by hand? If we want multiple copies of the same container, do we have to create them one by one? And can Pods be grouped logically?

A Deployment ensures that a specified number of Pod "replicas" are running at any time. If you create a Deployment for a Pod and specify 3 replicas, it creates 3 Pods and continuously monitors them. If a Pod stops responding, the Deployment replaces it to keep the total at 3.

If the unresponsive Pod later recovers, there would briefly be 4 Pods, so the Deployment terminates one to bring the total back to 3. If you change the replica count to 5 while it is running, the Deployment immediately starts 2 new Pods to reach 5. Deployments also support rollbacks and rolling upgrades.
When you create a Deployment, you specify two things (a minimal manifest sketch follows the list):
● A Pod template, used to create the Pod replicas
● Labels, which identify the Pods the Deployment should monitor
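A minimal sketch of such a Deployment manifest (the name nginx-deploy, the label app=nginx and the image tag are illustrative, not taken from the article):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deploy           # hypothetical name
spec:
  replicas: 3                  # keep 3 Pod replicas running at all times
  selector:
    matchLabels:
      app: nginx               # the label the Deployment watches
  template:                    # the Pod template used to create the replicas
    metadata:
      labels:
        app: nginx             # must match the selector above
    spec:
      containers:
      - name: nginx
        image: nginx:1.21      # illustrative image

Scaling to 5 replicas as described above is then a single command: kubectl scale deployment nginx-deploy --replicas=5.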

Now that we have several replicas of a Pod, how do we balance the load across them? That is what a Service is for.

5. Service

A Service is an abstraction of an application service: it provides load balancing and service discovery for the application via labels. The IPs and ports of the Pods matching the labels form the endpoints, and kube-proxy load-balances the service IP across these endpoints.
Each Service is automatically assigned a cluster IP (a virtual address reachable only inside the cluster) and a DNS name. Other containers can access the service through that address or name without knowing anything about the backend containers.
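A minimal Service sketch that selects the Pods from the Deployment example above via the app=nginx label (the name nginx-svc is again made up):

apiVersion: v1
kind: Service
metadata:
  name: nginx-svc              # hypothetical Service name
spec:
  selector:
    app: nginx                 # Pods matching this label become the endpoints
  ports:
  - port: 80                   # port exposed on the cluster IP
    targetPort: 80             # container port the traffic is forwarded to

Inside the cluster, other Pods can then reach it as http://nginx-svc (its DNS name) or through the automatically assigned cluster IP.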

5.1 Endpoint

For example, when a Service named hello is created in a Kubernetes cluster, an Endpoints object with the same name is generated. The endpoints are simply the IP addresses and ports of the Pods associated with the Service.

Endpoints is a resource object in the Kubernetes cluster, stored in etcd, that records the access addresses of all Pods backing a Service. The endpoint controller automatically creates the corresponding Endpoints object only if the Service defines a selector; otherwise no Endpoints object is generated.

A Service is backed by a group of backend Pods, which are exposed through an Endpoints object. The Service selector is evaluated continuously, and the result is POSTed to an Endpoints object named after the Service (hello in this example). When a Pod terminates, it is automatically removed from the Endpoints; new Pods that match the Service selector are automatically added. If you inspect the Endpoints object, you will notice that its IP addresses are exactly those of the Pods you created. You can now curl the hello Service from any node in the cluster. Note that the Service IP is entirely virtual and never goes on the wire; if you are curious about how this works, read more about the service proxy.

Endpoints is the collection of endpoints that actually implement the service.
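To see this relationship for the hello Service mentioned above, you could run the following (assuming a Service named hello exists in the current namespace):

kubectl get endpoints hello      # the Pod IP:port pairs backing the Service
kubectl describe service hello   # shows the same list in its Endpoints field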

Basic concepts and components

Almost every concept in Kubernetes is abstracted into a resource object managed by Kubernetes. Let's review the resource objects we met in the previous session:

● Master: the Master node is the control node of the Kubernetes cluster, responsible for managing and controlling the whole cluster. It runs the following components:
  ○ kube-apiserver: the entry point for cluster control, exposing an HTTP REST API
  ○ kube-controller-manager: the automated control center for all resource objects in the cluster
  ○ kube-scheduler: responsible for scheduling Pods
● Node: Nodes are the worker machines of the cluster. Their workloads, mainly containerized applications, are assigned by the Master. Each Node runs:
  ○ kubelet: responsible for creating, starting, monitoring, restarting and destroying Pods, and cooperating with the Master to provide basic cluster management
  ○ kube-proxy: implements the communication and load balancing for Kubernetes Services
  ○ the containerized (Pod) applications themselves
● Pod: the most basic unit of deployment and scheduling in Kubernetes. Each Pod consists of one or more application containers plus a root container (the pause container). A Pod represents one instance of an application.
● ReplicaSet: an abstraction over Pod replicas, used to scale Pods up and down
● Deployment: represents a deployment; internally it is implemented with a ReplicaSet. A Deployment generates the corresponding ReplicaSet, which creates the Pod replicas.
● Service: the most important resource object in Kubernetes. A Service can correspond to a microservice in a microservice architecture. It defines the access entry point of a service; callers reach the backend Pod replicas through this address. The Service is bound to its backend Pods via a Label Selector, while the Deployment keeps the number of backend Pods stable, which gives the service its elasticity.


The lifecycle of creating a Pod looks like this (a hedged kubectl walk-through follows the list):
● The user creates a Pod through the REST API
● The apiserver writes it to etcd
● The scheduler detects a Pod not yet bound to a Node, schedules it, and updates the Pod's Node binding
● The kubelet on that Node detects the newly scheduled Pod and runs it through the container runtime
● The kubelet reads the Pod's status from the container runtime and updates it in the apiserver
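You can watch this flow for a single Pod with kubectl; the file name pod.yaml and the Pod name nginx-pod below are assumptions for the example:

kubectl apply -f pod.yaml           # user -> apiserver -> etcd
kubectl get pod nginx-pod -o wide   # the NODE column shows the scheduler's choice
kubectl describe pod nginx-pod      # the Events section shows Scheduled, Pulling, Created, Started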

---------- Installing with kubeadm ----------

# Add host entries on every node:
$ vim /etc/hosts
192.168.31.40 master1
192.168.31.154 node1
192.168.31.21 node2
# Disable the firewall:
systemctl stop firewalld
systemctl disable firewalld
# Disable SELinux:
cat /etc/selinux/config
sed -i 's/SELINUX=enforcing/SELINUX=disabled/g' /etc/selinux/config
# Enabling IPv4 forwarding in the kernel requires the br_netfilter module, so load it:
modprobe br_netfilter
echo "net.bridge.bridge-nf-call-ip6tables = 1"  >>/etc/sysctl.d/k8s.conf
echo "net.bridge.bridge-nf-call-iptables = 1" >>/etc/sysctl.d/k8s.conf
echo "net.ipv4.ip_forward = 1" >>/etc/sysctl.d/k8s.conf
# Turn off the swap partition:
swapoff -a
vim /etc/fstab
# In /etc/fstab, comment out the swap line with a leading '#':
# UUID=48f429fc-2abd-4283-905a-982d8bf56452 swap                    swap    defaults        0 0
# Comment out the automatic swap mount in /etc/fstab and use free -m to confirm swap is off. To adjust the swappiness parameter, add the following line to /etc/sysctl.d/k8s.conf:
echo "vm.swappiness=0" >> /etc/sysctl.d/k8s.conf
sysctl -p /etc/sysctl.d/k8s.conf
# Set up IPVS by loading the required kernel modules:
cat > /etc/sysconfig/modules/ipvs.modules <<EOF
#!/bin/bash
modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
modprobe -- nf_conntrack_ipv4
EOF
chmod 755 /etc/sysconfig/modules/ipvs.modules && bash /etc/sysconfig/modules/ipvs.modules && lsmod | grep -e ip_vs -e nf_conntrack_ipv4
# The script above creates /etc/sysconfig/modules/ipvs.modules so that the required modules are loaded again after a reboot. Use lsmod | grep -e ip_vs -e nf_conntrack_ipv4 to check that the kernel modules were loaded correctly.

# Next, make sure the ipset package is installed on each node:
yum install ipset
# To make it easier to inspect the IPVS proxy rules, also install the management tool ipvsadm:
yum install ipvsadm -y
# Synchronize server time:
yum install chrony -y
systemctl enable chronyd
systemctl start chronyd
chronyc sources
210 Number of sources = 4
MS Name/IP address         Stratum Poll Reach LastRx Last sample
===============================================================================
^+ sv1.ggsrv.de                  2   6    17    32   -823us[-1128us] +/-   98ms
^- montreal.ca.logiplex.net      2   6    17    32    -17ms[  -17ms] +/-  179ms
^- ntp6.flashdance.cx            2   6    17    32    -32ms[  -32ms] +/-  161ms
^* 119.28.183.184                2   6    33    32   +661us[ +357us] +/-   38ms
$ date
Tue Aug 27 09:28:41 CST 2019
# Install Docker:
yum install -y yum-utils device-mapper-persistent-data lvm2
#yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
yum-config-manager --add-repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
yum list docker-ce --showduplicates | sort -r
yum install -y docker-ce-19.03.11

# Start Docker, then configure the daemon:
$ systemctl start docker
$ systemctl enable docker
cat <<EOF > /etc/docker/daemon.json
{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m"
  },
  "storage-driver": "overlay2",
  "registry-mirrors" : [
    "https://ot2k4d59.mirror.aliyuncs.com/"
  ]
}
EOF
$ systemctl restart docker
# Install kubeadm, kubelet and kubectl
# Google repo (needs access to packages.cloud.google.com):
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg
        https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
EOF
# Aliyun mirror repo:
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=http://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=http://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg
        http://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF

# Then install kubeadm, kubelet and kubectl:
# --disableexcludes=kubernetes disables every repo except the kubernetes one
yum install -y kubelet-1.19.3 kubeadm-1.19.3 kubectl-1.19.3 --disableexcludes=kubernetes
# Enable kubelet to start on boot:
systemctl enable --now kubelet
kubeadm version   # check the installed version
kubeadm version: &version.Info{Major:"1", Minor:"19", GitVersion:"v1.19.3", GitCommit:"1e11e4a2108024935ecfcb2912226cedeafd99df", GitTreeState:"clean", BuildDate:"2020-10-14T12:47:53Z", GoVersion:"go1.15.2", Compiler:"gc", Platform:"linux/amd64"}

Initializing the cluster

Next, prepare the kubeadm initialization file on the master node. You can export the default configuration with the following command:

$ kubeadm config print init-defaults > kubeadm.yaml

Then adjust the configuration to your needs: for example change imageRepository, set the kube-proxy mode to ipvs, and, since we plan to install the flannel network plugin, set networking.podSubnet to 10.244.0.0/16:

apiVersion: kubeadm.k8s.io/v1beta2
bootstrapTokens:
- groups:
  - system:bootstrappers:kubeadm:default-node-token
  token: abcdef.0123456789abcdef
  ttl: 24h0m0s
  usages:
  - signing
  - authentication
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 10.151.30.11  # apiserver advertise address: set this to master1's internal IP (192.168.31.40 in the hosts file above)
  bindPort: 6443
nodeRegistration:
  criSocket: /var/run/dockershim.sock
  name: master1 # must match this node's hostname
  taints:
  - effect: NoSchedule
    key: node-role.kubernetes.io/master
---
apiServer:
  timeoutForControlPlane: 4m0s
apiVersion: kubeadm.k8s.io/v1beta2
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
controllerManager: {}
dns:
  type: CoreDNS
etcd:
  local:
    dataDir: /var/lib/etcd
imageRepository: registry.aliyuncs.com/google_containers
# switched to an Aliyun mirror; registry.aliyuncs.com/k8sxio is another option
kind: ClusterConfiguration
kubernetesVersion: v1.19.3
networking:
  dnsDomain: cluster.local
  podSubnet: 10.244.0.0/16  # Pod network CIDR; the flannel plugin uses this range
  serviceSubnet: 10.96.0.0/12
scheduler: {}
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
mode: ipvs  # kube-proxy mode

Configuration tip

The documentation for this manifest is somewhat scattered. To see every field these resource objects support, consult the godoc at https://godoc.org/k8s.io/kubernetes/cmd/kubeadm/app/apis/kubeadm/v1beta2.

Then initialize the cluster with the configuration file above:

# You can also pull the required images in advance:
$ kubeadm config images pull --config kubeadm.yaml
$ kubeadm init --config kubeadm.yaml


W1017 17:52:13.831477    8682 configset.go:348] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
[init] Using Kubernetes version: v1.19.3
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local master1] and IPs [10.96.0.1 10.151.30.11]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [localhost master1] and IPs [10.151.30.11 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [localhost master1] and IPs [10.151.30.11 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 27.507850 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.19" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node master1 as control-plane by adding the label "node-role.kubernetes.io/master=''"
[mark-control-plane] Marking the node master1 as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: abcdef.0123456789abcdef
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.31.40:6443 --token abcdef.0123456789abcdef \
    --discovery-token-ca-cert-hash sha256:570e650eb241640ae0cdb0e1c4dd38cfac84b3c38a50d812cfddbb885ee10efa
# Copy $HOME/.kube/config from the master node to every master and node that needs kubectl access
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

Adding nodes

Make sure the preparation steps from the sections above are done on each node first, copy the master's $HOME/.kube/config file to the corresponding path on the node, install kubeadm, kubelet and (optionally) kubectl, and then run the join command printed at the end of the init output:

$ kubeadm join 10.151.30.11:6443 --token abcdef.0123456789abcdef \
    --discovery-token-ca-cert-hash sha256:da20ca0b12aea4afedc2a05026c285668ac3403949a5d091aa3123a7e87b9913
[preflight] Running pre-flight checks
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.

The join command

If you lose the join command above, you can regenerate it at any time with kubeadm token create --print-join-command.
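For example, run it on the master node (the token and hash in the output will differ for every cluster):

$ kubeadm token create --print-join-command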

The execution flow of the kubeadm init command is shown in the figure below:

(figure: kubeadm init execution flow)

After the join succeeds, run kubectl get nodes:

$ kubectl get nodes
NAME          STATUS     ROLES    AGE    VERSION
ydzs-master   NotReady   master   39m    v1.15.3
ydzs-node1    NotReady   <none>   106s   v1.15.3

The nodes are in NotReady state because no network plugin has been installed yet. Pick a network plugin from https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/create-cluster-kubeadm/; here we install flannel:

Installing the flannel network plugin

$ wget https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
# Some nodes have more than one network interface, so the internal NIC needs to be specified in the manifest:
# find the DaemonSet named kube-flannel-ds and set it under the kube-flannel container (e.g. flanneld's --iface argument)
$ vi kube-flannel.yml
---
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
  name: psp.flannel.unprivileged
  annotations:
    seccomp.security.alpha.kubernetes.io/allowedProfileNames: docker/default
    seccomp.security.alpha.kubernetes.io/defaultProfileName: docker/default
    apparmor.security.beta.kubernetes.io/allowedProfileNames: runtime/default
    apparmor.security.beta.kubernetes.io/defaultProfileName: runtime/default
spec:
  privileged: false
  volumes:
  - configMap
  - secret
  - emptyDir
  - hostPath
  allowedHostPaths:
  - pathPrefix: "/etc/cni/net.d"
  - pathPrefix: "/etc/kube-flannel"
  - pathPrefix: "/run/flannel"
  readOnlyRootFilesystem: false
  # Users and groups
  runAsUser:
    rule: RunAsAny
  supplementalGroups:
    rule: RunAsAny
  fsGroup:
    rule: RunAsAny
  # Privilege Escalation
  allowPrivilegeEscalation: false
  defaultAllowPrivilegeEscalation: false
  # Capabilities
  allowedCapabilities: ['NET_ADMIN', 'NET_RAW']
  defaultAddCapabilities: []
  requiredDropCapabilities: []
  # Host namespaces
  hostPID: false
  hostIPC: false
  hostNetwork: true
  hostPorts:
  - min: 0
    max: 65535
  # SELinux
  seLinux:
    # SELinux is unused in CaaSP
    rule: 'RunAsAny'
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: flannel
rules:
- apiGroups: ['extensions']
  resources: ['podsecuritypolicies']
  verbs: ['use']
  resourceNames: ['psp.flannel.unprivileged']
- apiGroups:
  - ""
  resources:
  - pods
  verbs:
  - get
- apiGroups:
  - ""
  resources:
  - nodes
  verbs:
  - list
  - watch
- apiGroups:
  - ""
  resources:
  - nodes/status
  verbs:
  - patch
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: flannel
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: flannel
subjects:
- kind: ServiceAccount
  name: flannel
  namespace: kube-system
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: flannel
  namespace: kube-system
---
kind: ConfigMap
apiVersion: v1
metadata:
  name: kube-flannel-cfg
  namespace: kube-system
  labels:
    tier: node
    app: flannel
data:
  cni-conf.json: |
    {
      "name": "cbr0",
      "cniVersion": "0.3.1",
      "plugins": [
        {
          "type": "flannel",
          "delegate": {
            "hairpinMode": true,
            "isDefaultGateway": true
          }
        },
        {
          "type": "portmap",
          "capabilities": {
            "portMappings": true
          }
        }
      ]
    }
  net-conf.json: |
    {
      "Network": "10.244.0.0/16",
      "Backend": {
        "Type": "vxlan"
      }
    }
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: kube-flannel-ds
  namespace: kube-system
  labels:
    tier: node
    app: flannel
spec:
  selector:
    matchLabels:
      app: flannel
  template:
    metadata:
      labels:
        tier: node
        app: flannel
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: kubernetes.io/os
                operator: In
                values:
                - linux
      hostNetwork: true
      priorityClassName: system-node-critical
      tolerations:
      - operator: Exists
        effect: NoSchedule
      serviceAccountName: flannel
      initContainers:
      - name: install-cni
        image: quay.io/coreos/flannel:v0.14.0
        command:
        - cp
        args:
        - -f
        - /etc/kube-flannel/cni-conf.json
        - /etc/cni/net.d/10-flannel.conflist
        volumeMounts:
        - name: cni
          mountPath: /etc/cni/net.d
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      containers:
      - name: kube-flannel
        image: quay.io/coreos/flannel:v0.14.0
        command:
        - /opt/bin/flanneld
        args:
        - --ip-masq
        - --kube-subnet-mgr
        resources:
          requests:
            cpu: "100m"
            memory: "50Mi"
          limits:
            cpu: "100m"
            memory: "50Mi"
        securityContext:
          privileged: false
          capabilities:
            add: ["NET_ADMIN", "NET_RAW"]
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        volumeMounts:
        - name: run
          mountPath: /run/flannel
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      volumes:
      - name: run
        hostPath:
          path: /run/flannel
      - name: cni
        hostPath:
          path: /etc/cni/net.d
      - name: flannel-cfg
        configMap:
          name: kube-flannel-cfg

$ kubectl apply -f kube-flannel.yml  # install the flannel network plugin

Wait a moment and then check the Pod status:

$ kubectl get pods -n kube-system
NAME                              READY   STATUS    RESTARTS   AGE
coredns-6d56c8448f-plwrw          1/1     Running   0          4m5s
coredns-6d56c8448f-s46mp          1/1     Running   0          4m5s
etcd-master1                      1/1     Running   0          4m13s
kube-apiserver-master1            1/1     Running   0          4m13s
kube-controller-manager-master1   1/1     Running   0          4m13s
kube-flannel-ds-6tv9h             1/1     Running   0          50s
kube-flannel-ds-t6m2x             1/1     Running   0          50s
kube-proxy-bcdv5                  1/1     Running   0          4m5s
kube-proxy-fmhs7                  1/1     Running   0          2m43s
kube-scheduler-master1            1/1     Running   0          4m13s

Flannel network plugin

After deploying the network plugin, running ifconfig should normally show two new virtual devices, cni0 and flannel.1. If cni0 is missing, don't worry; check whether the /var/lib/cni directory exists. If it does not, that does not mean the deployment failed, it simply means no application has run on that node yet. As soon as a Pod runs on the node, the directory is created and the cni0 device appears as well.
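A quick way to check (a sketch; it only assumes the standard ip command is available on the node):

ip addr show flannel.1   # the vxlan device created by flannel
ip addr show cni0        # the bridge that appears once a Pod has run here
ls /var/lib/cni          # this directory also appears after the first Pod runs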

With the network plugin running, the node status is now normal:

$ kubectl get nodes
NAME      STATUS   ROLES    AGE     VERSION
master1   Ready    master   5m7s    v1.19.3
node1     Ready    <none>   3m23s   v1.19.3

Add any other nodes the same way.

Installing the Dashboard

A v1.19.3 cluster needs the newer 2.0+ version of the Dashboard:

# The recommended way:
$ wget https://raw.githubusercontent.com/kubernetes/dashboard/v2.0.4/aio/deploy/recommended.yaml
$ vi recommended.yaml
# Change the Service to type NodePort
......
kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
spec:
  ports:
    - port: 443
      targetPort: 8443
  selector:
    k8s-app: kubernetes-dashboard
  type: NodePort  # add type: NodePort to make it a NodePort Service
......

Monitoring component

In the YAML you can see that the new Dashboard ships with a metrics-scraper component, which collects basic resource metrics through the Kubernetes Metrics API and shows them in the web UI. To display monitoring data on the page you therefore need something that provides the Metrics API, such as Metrics Server.
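One common way to provide the Metrics API, not covered by the original steps and therefore only a hedged sketch, is to install Metrics Server from its upstream manifest; on clusters with self-signed kubelet certificates you may also need to add the --kubelet-insecure-tls argument to its Deployment:

kubectl apply -f https://github.com/kubernetes-sigs/metrics-server/releases/latest/download/components.yaml
kubectl top nodes   # works once the Metrics API is being served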

Create it directly:

$ kubectl apply -f recommended.yaml

The new Dashboard is installed into the kubernetes-dashboard namespace by default:

$ kubectl get pods -n kubernetes-dashboard
NAME                                         READY   STATUS    RESTARTS   AGE
dashboard-metrics-scraper-7b59f7d4df-d228v   1/1     Running   0          4m23s
kubernetes-dashboard-665f4c5ff-4zmst         1/1     Running   0          4m24s
$ kubectl get svc -n kubernetes-dashboard
NAME                        TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)         AGE
dashboard-metrics-scraper   ClusterIP   10.97.184.215    <none>        8000/TCP        31s
kubernetes-dashboard        NodePort    10.106.248.135   <none>        443:30750/TCP   32s

You can now access the Dashboard through port 30750 shown above. Remember to use https. If Chrome refuses to open it, try Firefox; if the page still does not open, click through to trust the certificate:
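A quick reachability check from any machine that can reach a node (<node-ip> is a placeholder for one of your node IPs; -k tells curl to accept the self-signed certificate):

curl -k https://<node-ip>:30750/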

(figure: trusting the certificate)

Once the certificate is trusted, you will reach the Dashboard login page:

(figure: Kubernetes Dashboard login page)

Then create a user with cluster-wide admin permissions to log in to the Dashboard (admin.yaml):

kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: admin
roleRef:
  kind: ClusterRole
  name: cluster-admin
  apiGroup: rbac.authorization.k8s.io
subjects:
- kind: ServiceAccount
  name: admin
  namespace: kubernetes-dashboard
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: admin
  namespace: kubernetes-dashboard

Create it directly:

$ kubectl apply -f admin.yaml
$ kubectl get secret -n kubernetes-dashboard|grep admin-token
admin-token-lwmmx                  kubernetes.io/service-account-token   3         1d
$ kubectl get secret admin-token-lwmmx -o jsonpath={.data.token} -n kubernetes-dashboard | base64 -d  # prints a long decoded token string

Use the decoded string above as the token to log in to the Dashboard. The new version also adds a dark mode:

(figure: Kubernetes Dashboard)

With that, we have used kubeadm to build a v1.19.3 Kubernetes cluster with CoreDNS, ipvs and flannel.

Cleanup

If you ran into problems during installation, you can reset everything with the following commands and start over:

$ kubeadm reset
$ ifconfig cni0 down && ip link delete cni0
$ ifconfig flannel.1 down && ip link delete flannel.1
$ rm -rf /var/lib/cni/