Building a Log Analysis Platform with Kubernetes + EFK

Table of Contents

Elasticsearch Overview

How Fluentd Works

Kibana Overview

1. Environment Preparation

1.1 Host Initialization

1.2 Deploying the Docker Environment

2. Deploying the Kubernetes Cluster

2.1 Component Overview

2.2 Configuring the Aliyun YUM Repository

2.3 Installing kubelet, kubeadm, and kubectl

2.4 Configuring init-config.yaml

2.5 Installing the Master Node

2.6 Installing the Worker Nodes

2.7 Installing Flannel

3. Deploying the Enterprise Image Registry

3.1 Deploying the Harbor Registry

3.2 Importing the EFK Images

4. Deploying the EFK Stack

4.1 Preparing the Component YAML Files

4.2 Deploying Elasticsearch

4.3 Deploying Kibana

4.4 Deploying Fluentd

4.5 Verifying Container Log Collection

4.6 Configuring Kibana


        With the rapid development of Docker containers and cloud-native technology, vendors at home and abroad have been steadily moving toward cloud native, and Kubernetes, the flagship cloud-native technology, has become the first choice of major vendors thanks to its powerful feature set. Because Kubernetes leads the container-orchestration field so decisively, more and more enterprises are migrating their business onto container-management platforms built on the Docker + Kubernetes stack, so building an efficient, reliable log collection system for business workloads in a Kubernetes cluster has become a problem enterprises must face. This chapter presents a complete log collection solution for a Kubernetes cluster based on the Elasticsearch, Fluentd, and Kibana (EFK) stack.

Elasticsearch Overview

        Elasticsearch is an open-source, RESTful distributed search engine with search and data-analysis capabilities, built on top of the open-source library Apache Lucene. Elasticsearch has the following characteristics.

  • a distributed real-time document store in which every field can be indexed and searched;
  • a distributed real-time analytics search engine;
  • able to scale out to hundreds of service nodes while supporting petabytes of structured or unstructured data.
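
As a quick illustration of the RESTful interface, the two requests below index a document and then search for it. This is only a sketch: es-host:9200 and the index name test-logs are placeholders, not part of this chapter's deployment (where the service is reached on port 9200 in Section 4.2).

curl -X POST "http://es-host:9200/test-logs/_doc" \
  -H 'Content-Type: application/json' \
  -d '{"message": "hello efk", "level": "info"}'        # index one JSON document

curl "http://es-host:9200/test-logs/_search?q=message:hello&pretty"   # full-text search on the message field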

How Fluentd Works

        Fluentd is a log collection, processing, and forwarding system. Through its rich plugin ecosystem it can collect logs from all kinds of systems and applications, convert them into a user-specified format, and forward them to a user-specified log storage system.

        Fluentd scrapes log data from a given set of sources, processes it (converting it into a structured data format), and forwards it to other services such as Elasticsearch or object storage. Fluentd supports more than 300 log storage and analysis services, so its support in this area is very flexible. Its plugin architecture gives it high extensibility and high availability, and it also implements highly reliable message forwarding. Its main processing steps are as follows:

(1) Fluentd first fetches data from multiple log sources.

(2) It structures and tags the data.

(3) Finally, it sends the data to multiple target services according to the matching tags.
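
A minimal Fluentd configuration fragment illustrating these three steps might look like the following. This is a sketch only; the actual pipeline used in this chapter lives in fluentd-configmap.yaml (Section 4.4), and the paths and tag names here are illustrative.

<source>
  @type tail                      # step 1: collect from a log source
  path /var/log/containers/*.log
  pos_file /var/log/fluentd-containers.log.pos
  tag kubernetes.*
  <parse>
    @type json                    # step 2: structure the records
  </parse>
</source>

<match kubernetes.**>
  @type elasticsearch             # step 3: route by tag to a target service
  host elasticsearch
  port 9200
</match>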

Kibana Overview

        Kibana is an open-source visual analytics platform designed to work with Elasticsearch. Kibana can search, view, and interact with the data stored in Elasticsearch indices, and makes it easy to perform advanced data analysis and visualize data as charts, tables, and maps. Its simple, browser-based interface renders large volumes of data well, and dynamic dashboards can be created and shared quickly, displaying changes to Elasticsearch queries in real time.

1. Environment Preparation

Operating System   IP Address      Hostname     Components
CentOS 7.x         192.168.2.115   k8s-master   kubeadm, kubelet, kubectl, docker-ce
CentOS 7.x         192.168.2.116   k8s-node1    kubeadm, kubelet, kubectl, docker-ce, elasticsearch, fluentd
CentOS 7.x         192.168.2.117   k8s-node2    kubeadm, kubelet, kubectl, docker-ce, kibana, fluentd
CentOS 7.x         192.168.2.118   harbor       docker-ce, docker-compose, harbor

Note: 2+ CPU cores and 4 GB+ memory are recommended for every host, and the node running Elasticsearch needs sufficient memory (no less than 4 GB). If the Elasticsearch container exits, check the /var/log/messages log on the host to see whether the process was killed by a system OOM.
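
If the Elasticsearch container does exit later on, a check along these lines on the host that ran it confirms whether the kernel OOM killer was responsible:

grep -iE 'out of memory|killed process' /var/log/messages   # OOM killer activity
free -h                                                     # remaining memory on the node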

Project topology (diagram omitted)

1.1 Host Initialization

Disable the firewall and SELinux on all hosts:

[root@localhost ~]# setenforce 0
[root@localhost ~]# iptables -F
[root@localhost ~]# systemctl stop firewalld
[root@localhost ~]# systemctl disable firewalld
[root@localhost ~]# systemctl stop NetworkManager
[root@localhost ~]# systemctl disable NetworkManager
[root@localhost ~]# sed -i '/^SELINUX=/s/enforcing/disabled/' /etc/selinux/config

Configure the hostname and the hosts bindings (each host gets its own name):

[root@localhost ~]# hostname k8s-master    # set per host; each host gets its own name
[root@localhost ~]# bash

[root@k8s-master ~]# cat <<EOF>> /etc/hosts
> 192.168.2.115 k8s-master
> 192.168.2.116 k8s-node1
> 192.168.2.117 k8s-node2
> EOF

[root@k8s-master ~]# scp /etc/hosts 192.168.2.116:/etc/hosts
[root@k8s-master ~]# scp /etc/hosts 192.168.2.117:/etc/hosts

[root@localhost ~]# hostname k8s-node1
[root@localhost ~]# bash
[root@k8s-node1 ~]#

[root@localhost ~]# hostname k8s-node2
[root@localhost ~]# bash
[root@k8s-node2 ~]#

Initialize the host configuration (all hosts):

[root@k8s-master ~]# yum -y install vim wget net-tools lrzsz

[root@k8s-master ~]# swapoff -a
[root@k8s-master ~]# sed -i '/swap/s/^/#/' /etc/fstab

[root@k8s-node01 ~]# cat << EOF >> /etc/sysctl.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF

[root@k8s-node01 ~]# modprobe br_netfilter
[root@k8s-node01 ~]# sysctl -p

1.2 Deploying the Docker Environment

Deploy Docker on each of the three cluster hosts, since Kubernetes orchestration of containers in this setup requires Docker support.

[root@k8s-master ~]# cd /etc/yum.repos.d/

[root@k8s-master yum.repos.d]# ls
CentOS-Base.repo       CentOS-fasttrack.repo  CentOS-Vault.repo
CentOS-CR.repo         CentOS-Media.repo      CentOS-x86_64-kernel.repo
CentOS-Debuginfo.repo  CentOS-Sources.repo

[root@k8s-master yum.repos.d]# mkdir test

[root@k8s-master yum.repos.d]# mv CentOS-* test/

[root@k8s-master yum.repos.d]# ls
test

[root@k8s-master yum.repos.d]# wget -O /etc/yum.repos.d/CentOS-Base.repo http://mirrors.aliyun.com/repo/Centos-7.repo
--2023-08-16 14:59:38--  http://mirrors.aliyun.com/repo/Centos-7.repo
Resolving mirrors.aliyun.com (mirrors.aliyun.com)... 42.202.208.239, 42.202.208.240, 42.202.208.241, ...
Connecting to mirrors.aliyun.com (mirrors.aliyun.com)|42.202.208.239|:80... connected.
HTTP request sent, awaiting response... 200 OK
Length: 2523 (2.5K) [application/octet-stream]
Saving to: '/etc/yum.repos.d/CentOS-Base.repo'

100%[========================================>] 2,523       --.-K/s   in 0.002s  

2023-08-16 14:59:38 (1.17 MB/s) - '/etc/yum.repos.d/CentOS-Base.repo' saved [2523/2523]

When installing Docker via YUM, the Aliyun YUM repository is recommended. (The yum-config-manager tool used below is provided by the yum-utils package.)

[root@k8s-master yum.repos.d]# yum-config-manager --add-repo https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
已加载插件:fastestmirror
adding repo from: https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
grabbing file https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo to /etc/yum.repos.d/docker-ce.repo
repo saved to /etc/yum.repos.d/docker-ce.repo

[root@k8s-master ~]# yum -y install docker-ce
[root@k8s-master ~]# systemctl start docker
[root@k8s-master ~]# systemctl enable docker

Configure a registry mirror (all hosts):

[root@k8s-master ~]# cat << END > /etc/docker/daemon.json
{
        "registry-mirrors":[ "https://nyakyfun.mirror.aliyuncs.com" ]
}
END
[root@k8s-master ~]# systemctl daemon-reload
[root@k8s-master ~]# systemctl restart docker

2. Deploying the Kubernetes Cluster

2.1 Component Overview

All three nodes need the following three components installed:

  1. kubeadm: the bootstrap tool; it sets up the cluster so that all components run as containers
  2. kubectl: the command-line client for the Kubernetes API
  3. kubelet: the agent that runs on the nodes and starts containers

2.2 Configuring the Aliyun YUM Repository

When installing Kubernetes via YUM, the Aliyun YUM repository is recommended.

[root@k8s-master ~]#  cat <<EOF > /etc/yum.repos.d/kubernetes.repo 
> [kubernetes]
> name=Kubernetes
> baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
> enabled=1
> gpgcheck=1
> repo_gpgcheck=1
> gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg
>        https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
> EOF

[root@k8s-master ~]# ls /etc/yum.repos.d/
CentOS-Base.repo  docker-ce.repo  kubernetes.repo  test

2.3 Installing kubelet, kubeadm, and kubectl

Run on all hosts:

[root@k8s-master ~]# yum install -y kubelet-1.20.0 kubeadm-1.20.0 kubectl-1.20.0

[root@k8s-master ~]# systemctl enable kubelet
Created symlink from /etc/systemd/system/multi-user.target.wants/kubelet.service to /usr/lib/systemd/system/kubelet.service.
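
As a quick sanity check, the installed versions can be confirmed with the following commands (expected to report v1.20.0 here):

kubeadm version -o short
kubectl version --client --short
kubelet --version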

        Right after installation, kubelet cannot be started with systemctl start kubelet; it only starts successfully once the node has joined the cluster or been initialized as the master.
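
Until then, kubelet sits in a restart loop, which can be observed as follows (typical behavior for a not-yet-joined node):

systemctl status kubelet    # shows "activating (auto-restart)" before the node joins
journalctl -u kubelet -e    # typically complains that /var/lib/kubelet/config.yaml is missing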

2.4 Configuring init-config.yaml

        Kubeadm provides many configuration options. In a Kubernetes cluster, kubeadm configuration is stored in a ConfigMap; it can also be written out to a configuration file, which makes complex sets of options easier to manage. The kubeadm config command writes this configuration to a file.

        Install on the master node (the master is defined as 192.168.2.115). Generate the default init-config.yaml file with the following command:

[root@k8s-master ~]# kubeadm config print init-defaults > init-config.yaml

[root@k8s-master ~]# cat init-config.yaml 

apiVersion: kubeadm.k8s.io/v1beta2
bootstrapTokens:
- groups:
  - system:bootstrappers:kubeadm:default-node-token
  token: abcdef.0123456789abcdef
  ttl: 24h0m0s
  usages:
  - signing
  - authentication
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 192.168.2.115		# master node IP address
  bindPort: 6443
nodeRegistration:
  criSocket: /var/run/dockershim.sock
  name: k8s-master		# if a domain name is used, make sure it resolves; or use the IP address directly
  taints:
  - effect: NoSchedule
    key: node-role.kubernetes.io/master
---
apiServer:
  timeoutForControlPlane: 4m0s
apiVersion: kubeadm.k8s.io/v1beta2
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
controllerManager: {}
dns:
  type: CoreDNS
etcd:
  local:
    dataDir: /var/lib/etcd		# local directory mounted into the etcd container
imageRepository: registry.aliyuncs.com/google_containers	# changed to a domestic mirror
kind: ClusterConfiguration
kubernetesVersion: v1.20.0
networking:
  dnsDomain: cluster.local
  serviceSubnet: 10.96.0.0/12
  podSubnet: 10.244.0.0/16 	# added: Pod network CIDR (the Flannel default)
scheduler: {}

2.5 Installing the Master Node

Pull the required images

Image download link: https://pan.baidu.com/s/1rVjcflmO2K5_HW4jwnG2vQ?pwd=v54d 
Access code: v54d

[root@k8s-master ~]#  kubeadm config images list --config init-config.yaml

registry.aliyuncs.com/google_containers/kube-apiserver:v1.20.0
registry.aliyuncs.com/google_containers/kube-controller-manager:v1.20.0
registry.aliyuncs.com/google_containers/kube-scheduler:v1.20.0
registry.aliyuncs.com/google_containers/kube-proxy:v1.20.0
registry.aliyuncs.com/google_containers/pause:3.2
registry.aliyuncs.com/google_containers/etcd:3.4.13-0
registry.aliyuncs.com/google_containers/coredns:1.7.0

[root@k8s-master ~]# mkdir master 

[root@k8s-master ~]# cd master

[root@k8s-master master]# rz -E
rz waiting to receive.

[root@k8s-master master]# ls | while read line
> do
> docker load < $line
> done
225df95e717c: Loading layer  336.4kB/336.4kB
96d17b0b58a7: Loading layer  45.02MB/45.02MB
Loaded image: registry.aliyuncs.com/google_containers/coredns:1.7.0
d72a74c56330: Loading layer  3.031MB/3.031MB
d61c79b29299: Loading layer   2.13MB/2.13MB
1a4e46412eb0: Loading layer  225.3MB/225.3MB
bfa5849f3d09: Loading layer   2.19MB/2.19MB
bb63b9467928: Loading layer  21.98MB/21.98MB
Loaded image: registry.aliyuncs.com/google_containers/etcd:3.4.13-0
e7ee84ae4d13: Loading layer  3.041MB/3.041MB
597f1090d8e9: Loading layer  1.734MB/1.734MB
52d5280a7533: Loading layer  118.1MB/118.1MB
Loaded image: registry.aliyuncs.com/google_containers/kube-apiserver:v1.20.0
201617abe922: Loading layer  112.3MB/112.3MB
Loaded image: registry.aliyuncs.com/google_containers/kube-controller-manager:v1.20.0
f00bc8568f7b: Loading layer  53.89MB/53.89MB
6ee930b14c6f: Loading layer  22.05MB/22.05MB
2b046f2c8708: Loading layer  4.894MB/4.894MB
f6be8a0f65af: Loading layer  4.608kB/4.608kB
3a90582021f9: Loading layer  8.192kB/8.192kB
94812b0f02ce: Loading layer  8.704kB/8.704kB
3a478f418c9c: Loading layer  39.49MB/39.49MB
Loaded image: registry.aliyuncs.com/google_containers/kube-proxy:v1.20.0
aa679bed73e1: Loading layer  42.85MB/42.85MB
Loaded image: registry.aliyuncs.com/google_containers/kube-scheduler:v1.20.0
ba0dae6243cc: Loading layer  684.5kB/684.5kB
Loaded image: registry.aliyuncs.com/google_containers/pause:3.2

Install the master node

[root@k8s-master ~]# kubeadm init --config=init-config.yaml	        # initialize the Kubernetes control plane

[init] Using Kubernetes version: v1.20.0
[preflight] Running pre-flight checks
	[WARNING Service-Docker]: docker service is not enabled, please run 'systemctl enable docker.service'
	[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
	[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 24.0.5. Latest validated version: 19.03
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [k8s-master kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.2.115]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [k8s-master localhost] and IPs [192.168.2.115 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [k8s-master localhost] and IPs [192.168.2.115 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 13.003694 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.20" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node k8s-master as control-plane by adding the labels "node-role.kubernetes.io/master=''" and "node-role.kubernetes.io/control-plane='' (deprecated)"
[mark-control-plane] Marking the node k8s-master as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: abcdef.0123456789abcdef
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.2.115:6443 --token abcdef.0123456789abcdef \
    --discovery-token-ca-cert-hash sha256:ec00f259095635fec9e0b54bc6fd6d1b7c65c20ee8a60b1494f14a43b0d65cb6 

Follow the prompts

        By default, kubectl looks for a config file in the .kube directory under the invoking user's home directory. Here, the admin.conf generated during the [kubeconfig] step of initialization is copied to .kube/config.

[root@k8s-master ~]#   mkdir -p $HOME/.kube
[root@k8s-master ~]#   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
[root@k8s-master ~]#   sudo chown $(id -u):$(id -g) $HOME/.kube/config

        A kubeadm-initialized installation does not include a network plugin, which means the cluster has no pod networking yet after init: nodes listed on k8s-master show "NotReady", the CoreDNS Pods cannot provide service, and so on.
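
This state is easy to confirm before installing a network plugin (Section 2.7): the nodes report NotReady and the CoreDNS Pods stay Pending:

kubectl get nodes
kubectl get pods -n kube-system | grep coredns   # Pending until a CNI plugin is applied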

2.6 Installing the Worker Nodes

Using the join command printed during the master installation, run on each worker node:

[root@k8s-node1 ~]# kubeadm join 192.168.2.115:6443 --token abcdef.0123456789abcdef --discovery-token-ca-cert-hash sha256:ec00f259095635fec9e0b54bc6fd6d1b7c65c20ee8a60b1494f14a43b0d65cb6

[root@k8s-node2 ~]# kubeadm join 192.168.2.115:6443 --token abcdef.0123456789abcdef --discovery-token-ca-cert-hash sha256:ec00f259095635fec9e0b54bc6fd6d1b7c65c20ee8a60b1494f14a43b0d65cb6
[preflight] Running pre-flight checks
	[WARNING Service-Docker]: docker service is not enabled, please run 'systemctl enable docker.service'
	[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
	[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 24.0.5. Latest validated version: 19.03
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
W0816 15:22:52.578889    3319 common.go:148] WARNING: could not obtain a bind address for the API Server: no default routes found in "/proc/net/route" or "/proc/net/ipv6_route"; using: 0.0.0.0
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.

[root@k8s-master ~]# kubectl get nodes
NAME         STATUS     ROLES                  AGE     VERSION
k8s-master   NotReady   control-plane,master   5m25s   v1.20.0
k8s-node1    NotReady   <none>                 2m32s   v1.20.0
k8s-node2    NotReady   <none>                 2m38s   v1.20.0

        As mentioned earlier, k8s-master was initialized without any network configuration, so it cannot yet communicate with the worker nodes and every node reports "NotReady". The nodes joined via kubeadm join are nevertheless already visible from k8s-master.

2.7 Installing Flannel

The master node is NotReady because no network plugin has been installed yet, so connectivity between the nodes and the master is not fully up. The most popular Kubernetes network plugins are Flannel, Calico, Canal, and Weave; Flannel is used here.

Flannel download link: https://pan.baidu.com/s/1lj2DuEVvzvvM1u0v1dwb3w?pwd=sylj 
Access code: sylj

Upload flannel_v0.12.0-amd64.tar and cni-plugins-linux-amd64-v0.8.6.tgz to all hosts

[root@k8s-master ~]# rz -E
rz waiting to receive.
[root@k8s-master ~]# docker load < flannel_v0.12.0-amd64.tar 
256a7af3acb1: Loading layer  5.844MB/5.844MB
d572e5d9d39b: Loading layer  10.37MB/10.37MB
57c10be5852f: Loading layer  2.249MB/2.249MB
7412f8eefb77: Loading layer  35.26MB/35.26MB
05116c9ff7bf: Loading layer   5.12kB/5.12kB
Loaded image: quay.io/coreos/flannel:v0.12.0-amd64

[root@k8s-master ~]# tar xf cni-plugins-linux-amd64-v0.8.6.tgz 
[root@k8s-master ~]# cp flannel /opt/cni/bin/

Upload kube-flannel.yml to the master

Configuration on the master host:

[root@k8s-master ~]# kubectl apply -f kube-flannel.yml

[root@k8s-master ~]# kubectl get nodes
NAME         STATUS   ROLES                  AGE   VERSION
k8s-master   Ready    control-plane,master   14m   v1.20.0
k8s-node1    Ready    <none>                 11m   v1.20.0
k8s-node2    Ready    <none>                 11m   v1.20.0

[root@k8s-master ~]# kubectl get pods -n kube-system
NAME                                 READY   STATUS    RESTARTS   AGE
coredns-7f89b7bc75-skttb             1/1     Running   0          15m
coredns-7f89b7bc75-t29xn             1/1     Running   0          15m
etcd-k8s-master                      1/1     Running   0          16m
kube-apiserver-k8s-master            1/1     Running   0          16m
kube-controller-manager-k8s-master   1/1     Running   0          16m
kube-flannel-ds-amd64-2qs6m          1/1     Running   1          6m59s
kube-flannel-ds-amd64-724d6          1/1     Running   0          6m59s
kube-flannel-ds-amd64-fpt5l          1/1     Running   0          6m59s
kube-proxy-5q72g                     1/1     Running   0          13m
kube-proxy-mdf2s                     1/1     Running   0          15m
kube-proxy-xtfk8                     1/1     Running   0          13m
kube-scheduler-k8s-master            1/1     Running   0          16m

All nodes are now in the Ready state.

3. Deploying the Enterprise Image Registry

3.1 Deploying the Harbor Registry

Disable the firewall and SELinux on the Harbor host:

[root@localhost ~]# setenforce 0
[root@localhost ~]# iptables -F
[root@localhost ~]# systemctl stop firewalld
[root@localhost ~]# systemctl disable firewalld
[root@localhost ~]# systemctl stop NetworkManager
[root@localhost ~]# systemctl disable NetworkManager
[root@localhost ~]# sed -i '/^SELINUX=/s/enforcing/disabled/' /etc/selinux/config

Configure the hostname

[root@localhost ~]# hostname harbor
[root@localhost ~]# bash
[root@harbor ~]# 

Deploy the Docker environment

The Harbor registry runs as Docker containers, so a Docker environment is essential.

[root@harbor yum.repos.d]# mkdir test
[root@harbor yum.repos.d]# mv CentOS-* test/

[root@harbor ~]# wget -O /etc/yum.repos.d/CentOS-Base.repo http://mirrors.aliyun.com/repo/Centos-7.repo
--2023-08-16 15:46:03--  http://mirrors.aliyun.com/repo/Centos-7.repo
Resolving mirrors.aliyun.com (mirrors.aliyun.com)... 42.202.208.244, 42.202.208.248, 42.202.208.238, ...
Connecting to mirrors.aliyun.com (mirrors.aliyun.com)|42.202.208.244|:80... connected.
HTTP request sent, awaiting response... 200 OK
Length: 2523 (2.5K) [application/octet-stream]
Saving to: '/etc/yum.repos.d/CentOS-Base.repo'

100%[====================================================>] 2,523       --.-K/s   in 0.004s  

2023-08-16 15:46:03 (593 KB/s) - '/etc/yum.repos.d/CentOS-Base.repo' saved [2523/2523]

[root@harbor ~]#  yum install -y yum-utils device-mapper-persistent-data lvm2


[root@harbor ~]# yum-config-manager --add-repo https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo

Loaded plugins: fastestmirror
adding repo from: https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
grabbing file https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo to /etc/yum.repos.d/docker-ce.repo
repo saved to /etc/yum.repos.d/docker-ce.repo

[root@harbor ~]# yum clean all && yum makecache fast

Loaded plugins: fastestmirror
Cleaning repos: base docker-ce-stable extras updates
Cleaning up list of fastest mirrors
Loaded plugins: fastestmirror
Determining fastest mirrors
 * base: mirrors.aliyun.com
 * extras: mirrors.aliyun.com
 * updates: mirrors.aliyun.com
base                                                                   | 3.6 kB  00:00:00     
docker-ce-stable                                                       | 3.5 kB  00:00:00     
extras                                                                 | 2.9 kB  00:00:00     
updates                                                                | 2.9 kB  00:00:00     
(1/6): docker-ce-stable/7/x86_64/updateinfo                            |   55 B  00:00:00     
(2/6): base/7/x86_64/group_gz                                          | 153 kB  00:00:00     
(3/6): docker-ce-stable/7/x86_64/primary_db                            | 116 kB  00:00:00     
(4/6): extras/7/x86_64/primary_db                                      | 250 kB  00:00:00     
(5/6): base/7/x86_64/primary_db                                        | 6.1 MB  00:00:14     
(6/6): updates/7/x86_64/primary_db                                     |  22 MB  00:00:53     
Metadata Cache Created

[root@harbor ~]# yum -y install docker-ce
[root@harbor ~]# systemctl start docker
[root@harbor ~]#  systemctl enable docker
Created symlink from /etc/systemd/system/multi-user.target.wants/docker.service to /usr/lib/systemd/system/docker.service.
[root@harbor ~]# cat << END > /etc/docker/daemon.json
> {
>         "registry-mirrors":[ "https://nyakyfun.mirror.aliyuncs.com" ]
> }
> END
[root@harbor ~]# systemctl daemon-reload
[root@harbor ~]# systemctl restart docker

Deploy docker-compose (upload the docker-compose binary)

docker-compose download link: https://pan.baidu.com/s/1UVQ7l0Sme1V6ahpk0XNzRg?pwd=yau1 
Access code: yau1

[root@harbor ~]# rz -E
rz waiting to receive.
[root@harbor ~]# mv docker-compose /usr/local/bin/
[root@harbor ~]# chmod +x /usr/local/bin/docker-compose 

Deploy Harbor

        The Harbor private registry is deployed with docker-compose: different functions and applications run in separate containers, which gives good compatibility and lets Harbor run on the many systems that support Docker.

Download link: https://pan.baidu.com/s/1_ah_OL00YhmbCkCbVPhrXQ?pwd=38c6 
Access code: 38c6

[root@harbor ~]# rz -E
rz waiting to receive.

[root@harbor ~]# tar xf harbor-offline-installer-v2.0.0.tgz 

[root@harbor ~]# mv harbor /usr/local/

        Harbor's configuration file is /usr/local/harbor/harbor.yml. The default hostname must be changed to the IP address of the Harbor host.

[root@harbor ~]# vim /usr/local/harbor/harbor.yml.tmpl

  5 hostname: 192.168.2.118
 13 #https:   # comment out all https-related settings: https, port, certificate, and private_key
 14   # https port for harbor, default is 443
 15   #port: 443
 16   # The path of cert and key files for nginx
 17   #certificate: /your/certificate/path
 18   #private_key: /your/private/key/path

Start Harbor

[root@harbor ~]# cd /usr/local/harbor/

[root@harbor harbor]# mv harbor.yml.tmpl harbor.yml

[root@harbor harbor]# ls

common.sh  harbor.v2.0.0.tar.gz  harbor.yml  input  install.sh  LICENSE  prepare

[root@harbor harbor]# sh install.sh 

[Step 0]: checking if docker is installed ...

Note: docker version: 24.0.5

[Step 1]: checking docker-compose is installed ...

Note: docker-compose version: 1.21.1

[Step 2]: loading Harbor images ...
Loaded image: goharbor/notary-signer-photon:v2.0.0
Loaded image: goharbor/clair-adapter-photon:v2.0.0
Loaded image: goharbor/chartmuseum-photon:v2.0.0
Loaded image: goharbor/harbor-log:v2.0.0
Loaded image: goharbor/harbor-registryctl:v2.0.0
Loaded image: goharbor/registry-photon:v2.0.0
Loaded image: goharbor/clair-photon:v2.0.0
Loaded image: goharbor/notary-server-photon:v2.0.0
Loaded image: goharbor/redis-photon:v2.0.0
Loaded image: goharbor/nginx-photon:v2.0.0
Loaded image: goharbor/harbor-core:v2.0.0
Loaded image: goharbor/harbor-db:v2.0.0
Loaded image: goharbor/harbor-jobservice:v2.0.0
Loaded image: goharbor/trivy-adapter-photon:v2.0.0
Loaded image: goharbor/prepare:v2.0.0
Loaded image: goharbor/harbor-portal:v2.0.0


[Step 3]: preparing environment ...

[Step 4]: preparing harbor configs ...
prepare base dir is set to /usr/local/harbor
WARNING:root:WARNING: HTTP protocol is insecure. Harbor will deprecate http protocol in the future. Please make sure to upgrade to https
Generated configuration file: /config/log/logrotate.conf
Generated configuration file: /config/log/rsyslog_docker.conf
Generated configuration file: /config/nginx/nginx.conf
Generated configuration file: /config/core/env
Generated configuration file: /config/core/app.conf
Generated configuration file: /config/registry/config.yml
Generated configuration file: /config/registryctl/env
Generated configuration file: /config/registryctl/config.yml
Generated configuration file: /config/db/env
Generated configuration file: /config/jobservice/env
Generated configuration file: /config/jobservice/config.yml
Generated and saved secret to file: /data/secret/keys/secretkey
Successfully called func: create_root_cert
Generated configuration file: /compose_location/docker-compose.yml
Clean up the input dir



[Step 5]: starting Harbor ...
Creating network "harbor_harbor" with the default driver
Creating harbor-log ... done
Creating harbor-db     ... done
Creating redis         ... done
Creating registry      ... done
Creating registryctl   ... done
Creating harbor-portal ... done
Creating harbor-core   ... done
Creating harbor-jobservice ... done
Creating nginx             ... done
✔ ----Harbor has been installed and started successfully.----


[root@harbor harbor]# docker-compose ps
      Name                     Command                  State                   Ports             
--------------------------------------------------------------------------------------------------
harbor-core         /harbor/entrypoint.sh            Up (healthy)                                 
harbor-db           /docker-entrypoint.sh            Up (healthy)   5432/tcp                      
harbor-jobservice   /harbor/entrypoint.sh            Up (healthy)                                 
harbor-log          /bin/sh -c /usr/local/bin/ ...   Up (healthy)   127.0.0.1:1514->10514/tcp     
harbor-portal       nginx -g daemon off;             Up (healthy)   8080/tcp                      
nginx               nginx -g daemon off;             Up (healthy)   0.0.0.0:80->8080/tcp, :::80->8080/tcp
redis               redis-server /etc/redis.conf     Up (healthy)   6379/tcp                      
registry            /home/harbor/entrypoint.sh       Up (healthy)   5000/tcp                      
registryctl         /home/harbor/start.sh            Up (healthy)                                 

After Harbor has started, open http://192.168.2.118 in a browser to reach the Harbor web UI. (Screenshots omitted.)

Modify the Docker startup script on all hosts

[root@harbor ~]# vim /usr/lib/systemd/system/docker.service

ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock --insecure-registry 192.168.2.118
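
An alternative that avoids editing the systemd unit is to declare the registry in /etc/docker/daemon.json, which this chapter already uses for the mirror configuration. A sketch (the file is rewritten, so the existing mirror entry is kept alongside the new key):

cat << END > /etc/docker/daemon.json
{
        "registry-mirrors": [ "https://nyakyfun.mirror.aliyuncs.com" ],
        "insecure-registries": [ "192.168.2.118" ]
}
END
systemctl daemon-reload && systemctl restart docker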


[root@harbor ~]# scp /usr/lib/systemd/system/docker.service 192.168.2.115:/usr/lib/systemd/system/

The authenticity of host '192.168.2.115 (192.168.2.115)' can't be established.
ECDSA key fingerprint is SHA256:stDOWr6tPaYeJ0AOLVXtS1p4CCHL9jFdbywH36Wa6ko.
ECDSA key fingerprint is MD5:68:2b:49:24:3b:cf:97:33:c0:3e:d5:ee:bc:2d:35:a1.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added '192.168.2.115' (ECDSA) to the list of known hosts.
root@192.168.2.115's password: 
docker.service                                                  100% 1764     2.2MB/s   00:00    

[root@harbor ~]# scp /usr/lib/systemd/system/docker.service 192.168.2.116:/usr/lib/systemd/system/ 

The authenticity of host '192.168.2.116 (192.168.2.116)' can't be established.
ECDSA key fingerprint is SHA256:RG6SwP4IEdCtwZTqmw5B3lW7k3e06TBVBtpIQQhXXU8.
ECDSA key fingerprint is MD5:30:ae:c1:97:d5:fd:9f:ca:6b:36:a1:6d:e3:b7:06:d2.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added '192.168.2.116' (ECDSA) to the list of known hosts.
root@192.168.2.116's password: 
docker.service                                                  100% 1764     1.4MB/s   00:00    

[root@harbor ~]# scp /usr/lib/systemd/system/docker.service 192.168.2.117:/usr/lib/systemd/system/ 

The authenticity of host '192.168.2.117 (192.168.2.117)' can't be established.
ECDSA key fingerprint is SHA256:a7IpGawJCffvD7q1hMT/WIP+ZT/Bm9Qhy8NxapJa1GA.
ECDSA key fingerprint is MD5:a6:56:1e:0c:59:62:fa:bf:f5:9b:77:d5:f0:0c:65:5d.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added '192.168.2.117' (ECDSA) to the list of known hosts.
root@192.168.2.117's password: 
docker.service                                                  100% 1764     2.4MB/s   00:00    

Restart the Docker service on all hosts

[root@harbor ~]# systemctl daemon-reload
[root@harbor ~]# systemctl restart docker

3.2 Importing the EFK Images

Restarting Docker stops the Harbor containers, so re-run the install script to bring Harbor back up:

[root@harbor ~]# sh /usr/local/harbor/install.sh 

[Step 0]: checking if docker is installed ...

Note: docker version: 24.0.5

[Step 1]: checking docker-compose is installed ...

Note: docker-compose version: 1.21.1

[Step 2]: loading Harbor images ...
Loaded image: goharbor/notary-signer-photon:v2.0.0
Loaded image: goharbor/clair-adapter-photon:v2.0.0
Loaded image: goharbor/chartmuseum-photon:v2.0.0
Loaded image: goharbor/harbor-log:v2.0.0
Loaded image: goharbor/harbor-registryctl:v2.0.0
Loaded image: goharbor/registry-photon:v2.0.0
Loaded image: goharbor/clair-photon:v2.0.0
Loaded image: goharbor/notary-server-photon:v2.0.0
Loaded image: goharbor/redis-photon:v2.0.0
Loaded image: goharbor/nginx-photon:v2.0.0
Loaded image: goharbor/harbor-core:v2.0.0
Loaded image: goharbor/harbor-db:v2.0.0
Loaded image: goharbor/harbor-jobservice:v2.0.0
Loaded image: goharbor/trivy-adapter-photon:v2.0.0
Loaded image: goharbor/prepare:v2.0.0
Loaded image: goharbor/harbor-portal:v2.0.0


[Step 3]: preparing environment ...

[Step 4]: preparing harbor configs ...
prepare base dir is set to /usr/local/harbor
WARNING:root:WARNING: HTTP protocol is insecure. Harbor will deprecate http protocol in the future. Please make sure to upgrade to https
Clearing the configuration file: /config/log/logrotate.conf
Clearing the configuration file: /config/log/rsyslog_docker.conf
Clearing the configuration file: /config/nginx/nginx.conf
Clearing the configuration file: /config/core/env
Clearing the configuration file: /config/core/app.conf
Clearing the configuration file: /config/registry/passwd
Clearing the configuration file: /config/registry/config.yml
Clearing the configuration file: /config/registry/root.crt
Clearing the configuration file: /config/registryctl/env
Clearing the configuration file: /config/registryctl/config.yml
Clearing the configuration file: /config/db/env
Clearing the configuration file: /config/jobservice/env
Clearing the configuration file: /config/jobservice/config.yml
Generated configuration file: /config/log/logrotate.conf
Generated configuration file: /config/log/rsyslog_docker.conf
Generated configuration file: /config/nginx/nginx.conf
Generated configuration file: /config/core/env
Generated configuration file: /config/core/app.conf
Generated configuration file: /config/registry/config.yml
Generated configuration file: /config/registryctl/env
Generated configuration file: /config/registryctl/config.yml
Generated configuration file: /config/db/env
Generated configuration file: /config/jobservice/env
Generated configuration file: /config/jobservice/config.yml
loaded secret from file: /data/secret/keys/secretkey
Generated configuration file: /compose_location/docker-compose.yml
Clean up the input dir


Note: stopping existing Harbor instance ...
Stopping nginx             ... done
Stopping harbor-jobservice ... done
Stopping harbor-core       ... done
Stopping harbor-portal     ... done
Stopping registry          ... done
Stopping harbor-db         ... done
Stopping registryctl       ... done
Stopping redis             ... done
Stopping harbor-log        ... done
Removing nginx             ... done
Removing harbor-jobservice ... done
Removing harbor-core       ... done
Removing harbor-portal     ... done
Removing registry          ... done
Removing harbor-db         ... done
Removing registryctl       ... done
Removing redis             ... done
Removing harbor-log        ... done
Removing network harbor_harbor


[Step 5]: starting Harbor ...
Creating network "harbor_harbor" with the default driver
Creating harbor-log ... done
Creating harbor-portal ... done
Creating harbor-db     ... done
Creating redis         ... done
Creating registryctl   ... done
Creating registry      ... done
Creating harbor-core   ... done
Creating nginx             ... done
Creating harbor-jobservice ... done
✔ ----Harbor has been installed and started successfully.----


[root@harbor ~]# docker login -u admin -p Harbor12345 http://192.168.2.118

WARNING! Using --password via the CLI is insecure. Use --password-stdin.
WARNING! Your password will be stored unencrypted in /root/.docker/config.json.
Configure a credential helper to remove this warning. See
https://docs.docker.com/engine/reference/commandline/login/#credentials-store

Login Succeeded

# upload elasticsearch-7.4.2.tar, fluentd-es.tar, kibana-7.4.2.tar, and alpine-3.6.tar

[root@harbor ~]# rz -E
rz waiting to receive.

[root@harbor ~]# ls
alpine-3.6.tar   elasticsearch-7.4.2.tar  harbor-offline-installer-v2.0.0.tgz
anaconda-ks.cfg  fluentd-es.tar           kibana-7.4.2.tar

[root@harbor ~]# docker load < elasticsearch-7.4.2.tar 

877b494a9f30: Loading layer  209.6MB/209.6MB
d88e30aec0f6: Loading layer    173MB/173MB
1c5056425cea: Loading layer  379.4kB/379.4kB
5e54fa4095eb: Loading layer  487.1MB/487.1MB
4e90d125f1e5: Loading layer  4.608kB/4.608kB
115e3e5a8759: Loading layer   7.68kB/7.68kB
4d2c21064d7c: Loading layer  9.728kB/9.728kB
Loaded image ID: sha256:b1179d41a7b42f921f8ea0c5fa319c8aac4a3083dd733494170428917007e55f

[root@harbor ~]# docker  load < fluentd-es.tar 

f1b5933fe4b5: Loading layer  5.796MB/5.796MB
741e6e2c94d4: Loading layer   43.4MB/43.4MB
08e69dc70e17: Loading layer  13.82kB/13.82kB
921d47fed072: Loading layer  3.584kB/3.584kB
9e5da15b8bd1: Loading layer  3.072kB/3.072kB
ac62943fb2fc: Loading layer  2.729MB/2.729MB
5d526fbfafec: Loading layer  147.5kB/147.5kB
55e5db4c6e54: Loading layer  12.93MB/12.93MB
3e2e1fdff900: Loading layer  61.95kB/61.95kB
e9ed50a34eed: Loading layer  31.74kB/31.74kB
9d2b48d1be99: Loading layer  248.3kB/248.3kB
7432b737c247: Loading layer  3.072kB/3.072kB
19293dbdc1a1: Loading layer  3.072kB/3.072kB
89f0ad125648: Loading layer   1.32MB/1.32MB
22c80d316b5c: Loading layer  178.1MB/178.1MB
bda24b542570: Loading layer  7.534MB/7.534MB
3f0b69f8144c: Loading layer  4.559MB/4.559MB
38714652f7e6: Loading layer  11.16MB/11.16MB
9af19f0dccca: Loading layer  256.5kB/256.5kB
Loaded image ID: sha256:636f3d316141c1fcef8c793b5d9935347a13eba8a8b5bb6a75b2574722f863c1

[root@harbor ~]# docker load < kibana-7.4.2.tar 

01fc07851996: Loading layer  179.4MB/179.4MB
9cfbbc5ed16e: Loading layer  82.43kB/82.43kB
80642e52e3f7: Loading layer  57.86kB/57.86kB
90cabfea98a8: Loading layer  806.2MB/806.2MB
0b8e2117240b: Loading layer  2.048kB/2.048kB
b34d7a00ef38: Loading layer  4.096kB/4.096kB
cb5328c9d961: Loading layer  10.24kB/10.24kB
3234e7f24c9c: Loading layer   2.56kB/2.56kB
ccde9566aa40: Loading layer  374.8kB/374.8kB
Loaded image ID: sha256:230d3ded1abc1468536e41d80a9cc6a67908358c0e4ebf065c29b8ef0370ba4b

[root@harbor ~]# docker load < alpine-3.6.tar 

721384ec99e5: Loading layer  4.283MB/4.283MB
Loaded image ID: sha256:43773d1dba76c4d537b494a8454558a41729b92aa2ad0feb23521c3e58cd0440

[root@harbor ~]# docker tag b1179d 192.168.2.118/efk/elasticsearch:7.4.2
[root@harbor ~]# docker tag 636f3d 192.168.2.118/efk/fluentd-es-root:v2.5.2
[root@harbor ~]# docker tag 230d3d 192.168.2.118/efk/kibana:7.4.2
[root@harbor ~]# docker tag 43773d 192.168.2.118/efk/alpine:3.6

[root@harbor ~]# docker push 192.168.2.118/efk/elasticsearch:7.4.2

The push refers to repository [192.168.2.118/efk/elasticsearch]
4d2c21064d7c: Preparing 
115e3e5a8759: Preparing 
4e90d125f1e5: Preparing 
5e54fa4095eb: Preparing 
1c5056425cea: Preparing 
d88e30aec0f6: Waiting 
877b494a9f30: Waiting 
unauthorized: project not found, name: efk: project not found, name: efk

[root@harbor ~]# docker push 192.168.2.118/efk/fluentd-es-root:v2.5.2

The push refers to repository [192.168.2.118/efk/fluentd-es-root]
9af19f0dccca: Preparing 
38714652f7e6: Preparing 
3f0b69f8144c: Preparing 
bda24b542570: Preparing 
22c80d316b5c: Preparing 
89f0ad125648: Waiting 
19293dbdc1a1: Waiting 
7432b737c247: Waiting 
9d2b48d1be99: Waiting 
e9ed50a34eed: Waiting 
3e2e1fdff900: Waiting 
55e5db4c6e54: Waiting 
5d526fbfafec: Waiting 
ac62943fb2fc: Waiting 
9e5da15b8bd1: Waiting 
921d47fed072: Waiting 
08e69dc70e17: Waiting 
741e6e2c94d4: Waiting 
f1b5933fe4b5: Waiting 
unauthorized: project not found, name: efk: project not found, name: efk

[root@harbor ~]# docker push 192.168.2.118/efk/kibana:7.4.2

The push refers to repository [192.168.2.118/efk/kibana]
ccde9566aa40: Preparing 
3234e7f24c9c: Preparing 
cb5328c9d961: Preparing 
b34d7a00ef38: Preparing 
0b8e2117240b: Preparing 
90cabfea98a8: Waiting 
80642e52e3f7: Waiting 
9cfbbc5ed16e: Waiting 
01fc07851996: Waiting 
877b494a9f30: Waiting 
unauthorized: project not found, name: efk: project not found, name: efk

[root@harbor ~]# docker push 192.168.2.118/efk/alpine:3.6

The push refers to repository [192.168.2.118/efk/alpine]
721384ec99e5: Preparing 
unauthorized: project not found, name: efk: project not found, name: efk
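
All four pushes above fail with unauthorized: project not found, name: efk because Harbor only accepts pushes into a project that already exists. Create an efk project first, either in the web UI (Projects > New Project, marked public) or, as sketched below, through Harbor's v2 REST API (the endpoint shown matches Harbor 2.0; treat it as an assumption for other versions), and then re-run the pushes:

curl -u admin:Harbor12345 -X POST "http://192.168.2.118/api/v2.0/projects" \
  -H "Content-Type: application/json" \
  -d '{"project_name": "efk", "metadata": {"public": "true"}}'

docker push 192.168.2.118/efk/elasticsearch:7.4.2
docker push 192.168.2.118/efk/fluentd-es-root:v2.5.2
docker push 192.168.2.118/efk/kibana:7.4.2
docker push 192.168.2.118/efk/alpine:3.6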

4. Deploying the EFK Stack

4.1 Preparing the Component YAML Files

Download link: https://pan.baidu.com/s/1iUA2zhTBfrpNAEpAJw6pMQ?pwd=xaev 
Access code: xaev

Pay attention to the image addresses and the nodeSelector values in the YAML files; both need to be adjusted to match your environment.
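
If the files were written for a different registry address, a bulk rewrite along these lines can save manual editing. This is a sketch; it assumes the manifests reference the images with an .../efk/... path on their image: lines:

cd /opt/efk
sed -i 's#image: .*/efk/#image: 192.168.2.118/efk/#' elasticsearch.yaml kibana.yaml fluentd.yaml test-pod.yaml
grep 'image:' *.yaml    # verify the registry address after the change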

[root@k8s-master ~]# cd /opt/efk/
[root@k8s-master efk]# rz -E            # upload the corresponding YAML files
rz waiting to receive.

[root@k8s-master efk]# vim elasticsearch.yaml 
        image: 192.168.2.118/efk/elasticsearch:7.4.2
        image: 192.168.2.118/efk/alpine:3.6
        image: 192.168.2.118/efk/alpine:3.6
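
The two alpine:3.6 entries are evidently init containers that prepare the node before Elasticsearch starts. The actual file is not reproduced in this chapter, but manifests of this kind typically raise vm.max_map_count (Elasticsearch requires at least 262144) and fix the ownership of the data directory; an assumed, representative fragment:

      initContainers:
      - name: elasticsearch-logging-init
        image: 192.168.2.118/efk/alpine:3.6
        command: ["sysctl", "-w", "vm.max_map_count=262144"]   # kernel setting required by ES
        securityContext:
          privileged: true
      - name: fix-permissions
        image: 192.168.2.118/efk/alpine:3.6
        command: ["chown", "-R", "1000:1000", "/usr/share/elasticsearch/data"]   # ES runs as uid 1000
        volumeMounts:
        - name: elasticsearch-logging
          mountPath: /usr/share/elasticsearch/data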

        The nodeSelector decides which node the Elasticsearch service is deployed to. The configuration file currently schedules it to k8s-node1; adjust this according to your actual load. Node names can be listed with kubectl get nodes, and the chosen node must have sufficient resources.

[root@k8s-master efk]# vim elasticsearch.yaml
      nodeSelector:
        kubernetes.io/hostname: k8s-node1

Modify the image address and scheduling node in kibana.yaml as well, deploying Kibana to k8s-node2.

[root@k8s-master efk]# vim kibana.yaml 

      nodeSelector:
        kubernetes.io/hostname: k8s-node2

        image: 192.168.2.118/efk/kibana:7.4.2

Modify the image address in fluentd.yaml

[root@k8s-master efk]# vim fluentd.yaml 

        image: 192.168.2.118/efk/fluentd-es-root:v2.5.2

Modify the image address in test-pod.yaml

[root@k8s-master efk]# vim test-pod.yaml 

    image: 192.168.2.118/efk/alpine:3.6

4.2 Deploying Elasticsearch

Create the namespace

        Create a namespace named logging to hold the EFK services. Work from the /opt/efk directory on the k8s-master node.

[root@k8s-master efk]# kubectl create -f namespace.yaml
namespace/logging created

[root@k8s-master efk]# kubectl get namespace | grep logging
logging           Active   34s

Create the es data directory

Elasticsearch is often abbreviated as es. On k8s-node1, create the data directory /esdata.

[root@k8s-node1 ~]# mkdir /esdata

Deploy the es container

From the /opt/efk directory on the k8s-master node, deploy the es container as follows.

[root@k8s-master ~]# cd /opt/efk/

[root@k8s-master efk]# kubectl create -f elasticsearch.yaml

statefulset.apps/elasticsearch-logging created
service/elasticsearch created

After a short wait, the es Pod is visible, deployed to k8s-node1 with status Running.

[root@k8s-master efk]# kubectl -n logging get pods -o wide
NAME                      READY   STATUS    RESTARTS   AGE   IP           NODE        NOMINATED NODE   READINESS GATES
elasticsearch-logging-0   1/1     Running   0          66s   10.244.2.5   k8s-node1   <none>           <none>


[root@k8s-master efk]# kubectl -n logging get svc
NAME            TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)    AGE
elasticsearch   ClusterIP   10.97.190.214   <none>        9200/TCP   2m20s

Access the service with curl to verify that es was deployed successfully.

[root@k8s-master efk]# curl 10.97.190.214:9200
{
  "name" : "elasticsearch-logging-0",
  "cluster_name" : "docker-cluster",
  "cluster_uuid" : "eJ6WwijmSweA-Ap8aj8kdg",
  "version" : {
    "number" : "7.4.2",
    "build_flavor" : "default",
    "build_type" : "docker",
    "build_hash" : "2f90bbf7b93631e52bafb59b3b049cb44ec25e96",
    "build_date" : "2019-10-28T20:40:44.881551Z",
    "build_snapshot" : false,
    "lucene_version" : "8.2.0",
    "minimum_wire_compatibility_version" : "6.8.0",
    "minimum_index_compatibility_version" : "6.0.0-beta1"
  },
  "tagline" : "You Know, for Search"
}

4.3 Deploying Kibana

From the /opt/efk directory on k8s-master, run the following.

[root@k8s-master efk]# kubectl create -f kibana.yaml
service/kibana created
deployment.apps/kibana created

Check the Pod status.

[root@k8s-master efk]# kubectl -n logging get pods
NAME                      READY   STATUS    RESTARTS   AGE
elasticsearch-logging-0   1/1     Running   0          5m45s
kibana-769f5fd4cf-sffbj   1/1     Running   0          78s

        Check the corresponding Service; here the NodePort value is 30380. This port is assigned randomly and differs between environments, so use the value from your own output.

[root@k8s-master efk]# kubectl -n logging get svc |grep kibana
kibana          NodePort    10.103.116.114   <none>        5601:30380/TCP   119s

        Open a cluster node IP plus the NodePort, e.g. http://192.168.2.115:30380 (replace 30380 with your actual port), and check that the Kibana interface loads. If it opens normally, Kibana's connection to es is working.

4.4 Deploying Fluentd

Label the cluster nodes

        To control which nodes' business-container logs are collected, label k8s-node1 and k8s-node2 with fluentd=true.

[root@k8s-master efk]# kubectl label node k8s-node1 fluentd=true
node/k8s-node1 labeled
[root@k8s-master efk]# kubectl label node k8s-node2 fluentd=true
node/k8s-node2 labeled

        Now that k8s-node1 and k8s-node2 carry the fluentd=true label, the Fluentd service will start on those two nodes, which means the logs of Pods running there will be collected.
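
The label works because the DaemonSet in fluentd.yaml selects on it. The relevant stanza presumably looks like the fragment below (a sketch patterned on the standard fluentd-elasticsearch addon, using the image address from this chapter):

    spec:
      nodeSelector:
        fluentd: "true"          # only nodes carrying this label run the DaemonSet Pods
      containers:
      - name: fluentd-es
        image: 192.168.2.118/efk/fluentd-es-root:v2.5.2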

Start the Fluentd service

From the /opt/efk directory on the k8s-master node, start the Fluentd service:

[root@k8s-master efk]# kubectl create -f fluentd-es-config-main.yaml

configmap/fluentd-es-config-main created

[root@k8s-master efk]# kubectl create -f fluentd-configmap.yaml

configmap/fluentd-config created

[root@k8s-master efk]# kubectl create -f fluentd.yaml

serviceaccount/fluentd-es created
clusterrole.rbac.authorization.k8s.io/fluentd-es created
clusterrolebinding.rbac.authorization.k8s.io/fluentd-es created
daemonset.apps/fluentd-es-v2.5.2 created

Check that the Pods have started successfully on k8s-node1 and k8s-node2.

[root@k8s-master efk]# kubectl -n logging get pods
NAME                      READY   STATUS    RESTARTS   AGE
elasticsearch-logging-0   1/1     Running   0          13m
fluentd-es-v2.5.2-pxqxt   1/1     Running   0          47s
fluentd-es-v2.5.2-zf49b   1/1     Running   0          47s
kibana-769f5fd4cf-sffbj   1/1     Running   0          8m34s

4.5 Verifying Container Log Collection

Create a test container

From the /opt/efk directory on k8s-master, run the following.

[root@k8s-master efk]# kubectl create -f test-pod.yaml

pod/counter created

[root@k8s-master efk]# kubectl get pods

NAME      READY   STATUS    RESTARTS   AGE
counter   1/1     Running   0          5s
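
test-pod.yaml is the classic counter example: a Pod that writes a numbered line to stdout every second, giving Fluentd something predictable to collect. The file is not reproduced in this chapter; a sketch of what it presumably contains:

apiVersion: v1
kind: Pod
metadata:
  name: counter
spec:
  containers:
  - name: count
    image: 192.168.2.118/efk/alpine:3.6
    # print an incrementing counter with a timestamp, once per second
    args: [/bin/sh, -c, 'i=0; while true; do echo "$i: $(date)"; i=$((i+1)); sleep 1; done']

Confirm that it is producing output with kubectl logs counter; these are the lines that should become searchable in Kibana.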

4.6 Configuring Kibana

        In the Kibana UI, first create an index pattern matching the indices written by Fluentd (typically logstash-*). Once the index pattern is created, you will see that multiple index fields have been generated. Wait a moment, then click the Discover icon at the top left to enter the log search page.

        Then filter on the index fields, for example kubernetes.host, kubernetes.container_name, or kubernetes.container_image_id.

        Other metadata can also be used to filter the log data: click any log entry to view additional metadata such as container name, Kubernetes node, and namespace.

        At this point, EFK has been successfully deployed on the Kubernetes cluster.
