How to Deploy an EFK Stack to Kubernetes


Introduction to the EFK Stack

When running multiple services and applications on a Kubernetes cluster, a centralized, cluster-level logging stack helps you quickly sort through and analyze the heavy volume of log data produced by your Pods. One popular centralized logging solution is the Elasticsearch, Fluentd, and Kibana (EFK) stack. Elasticsearch is a real-time, distributed, and scalable search engine that allows for full-text and structured search, as well as analytics.

Elasticsearch is commonly deployed alongside Kibana, a powerful data visualization frontend and dashboard for Elasticsearch. Here we will use Fluentd to collect, transform, and ship log data to the Elasticsearch backend.

Hola, let's start with the lab:

We will be working through the following list of contents:

Step 1 — How to create a Namespace?
Step 2 — How to create the Elasticsearch StatefulSet?
Step 3 — How to create the Kibana Deployment and Service?
Step 4 — How to create the Fluentd DaemonSet in the cluster?

Step 1 — Creating a Namespace

Once we begin building out the cluster stack, we first need to create the Namespace where our logging components will live. To start, let's inspect the current Namespaces in the existing cluster using kubectl:

kubectl get namespaces

You should then see the following three initial Namespaces, which come preinstalled with your K8s cluster:

Output
NAME          STATUS    AGE
default       Active    5m
kube-system   Active    5m
kube-public   Active    5m

To create the kube-logging Namespace, we'll first open and edit a file called kube-logging.yaml using nano or vi. Inside the editor, paste the following Namespace object YAML:

kind: Namespace
apiVersion: v1
metadata:
  name: kube-logging

Then, save and close the file.

Once we have created the kube-logging.yaml Namespace object file, create the Namespace with kubectl create using the -f filename flag:

kubectl create -f kube-logging.yaml

You should see the following output:

namespace/kube-logging created

We can then confirm that the Namespace was successfully created.
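
The same listing command used above works here:

kubectl get namespaces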

Now, we should see the new kube-logging Namespace:

NAME           STATUS    AGE
default        Active    12m
kube-logging   Active    2m
kube-public    Active    13m
kube-system    Active    12m

We can now go ahead and deploy an Elasticsearch cluster into this Namespace.

Step 2 — How to Create the Elasticsearch StatefulSet

Now that we've created a Namespace for our stack, we can begin rolling out its various components.

We will begin by deploying a 3-node Elasticsearch cluster.

Let's Start by Creating the Headless Service

We'll create a headless Kubernetes Service called elasticsearch that will define a DNS domain for the three Pods. Open a file called elasticsearch_svc.yaml using nano and paste in the following Kubernetes Service YAML:
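
A minimal sketch of what that Service YAML can look like, matching the clusterIP: None and ports 9200/9300 described just below (the app: elasticsearch label and the rest/inter-node port names are illustrative choices):

kind: Service
apiVersion: v1
metadata:
  name: elasticsearch
  namespace: kube-logging
  labels:
    app: elasticsearch
spec:
  selector:
    app: elasticsearch    # match the label carried by the Elasticsearch Pods
  clusterIP: None         # headless: no cluster IP, DNS returns the Pod IPs directly
  ports:
    - port: 9200
      name: rest          # Elasticsearch REST API
    - port: 9300
      name: inter-node    # inter-node transport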

Then, save the file.


Note that we set clusterIP: None, which makes the Service headless. Finally, we define ports 9200 and 9300, which are used to interact with the REST API and for inter-node communication within the cluster.

Now, create the service using kubectl:


kubectl create -f elasticsearch_svc.yaml

Output:

service/elasticsearch created

Finally, confirm that the Service was successfully created using kubectl get:

kubectl get services --namespace=kube-logging

You should now see the elasticsearch Service listed.

Now that we've set up our headless Service and a stable .elasticsearch.kube-logging.svc.cluster.local domain for our Pods, we can go ahead and create the StatefulSet.
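
The StatefulSet manifest is only sketched here. A minimal version for a 3-node cluster, assuming the docker.elastic.co/elasticsearch/elasticsearch:7.2.0 image and illustrative names, resource values, and storage size, might look like the following; you would save it as, say, elasticsearch_statefulset.yaml and roll it out with kubectl create -f:

apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: es-cluster            # illustrative name; Pods become es-cluster-0, es-cluster-1, es-cluster-2
  namespace: kube-logging
spec:
  serviceName: elasticsearch  # ties the Pods to the headless Service created above
  replicas: 3
  selector:
    matchLabels:
      app: elasticsearch
  template:
    metadata:
      labels:
        app: elasticsearch
    spec:
      # Production setups usually also add initContainers here to raise
      # vm.max_map_count and fix data-directory ownership; omitted in this sketch.
      containers:
      - name: elasticsearch
        image: docker.elastic.co/elasticsearch/elasticsearch:7.2.0
        resources:
          requests:
            cpu: 100m         # illustrative values; size for your workload
          limits:
            cpu: 1000m
        ports:
        - containerPort: 9200
          name: rest
        - containerPort: 9300
          name: inter-node
        volumeMounts:
        - name: data
          mountPath: /usr/share/elasticsearch/data
        env:
        - name: cluster.name
          value: k8s-logs
        - name: node.name
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: discovery.seed_hosts
          value: "es-cluster-0.elasticsearch,es-cluster-1.elasticsearch,es-cluster-2.elasticsearch"
        - name: cluster.initial_master_nodes
          value: "es-cluster-0,es-cluster-1,es-cluster-2"
        - name: ES_JAVA_OPTS
          value: "-Xms512m -Xmx512m"
  volumeClaimTemplates:
  - metadata:
      name: data
      labels:
        app: elasticsearch
    spec:
      accessModes: [ "ReadWriteOnce" ]   # uses the cluster's default StorageClass
      resources:
        requests:
          storage: 100Gi                 # illustrative size

You would then wait for all three Pods to reach Running before moving on to Kibana.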

Step 3 — How to Create the Kibana Deployment and Service

To launch Kibana on Kubernetes, we'll create a Service called kibana, and a Deployment consisting of one Pod replica. You can scale the number of replicas depending on your production needs, and optionally specify a LoadBalancer type for the Service to load balance requests across the Deployment's Pods.

This time, we'll create the Service and Deployment in the same file. Open up a file called kibana.yaml in your favorite editor:

Paste in the following Service and Deployment spec:
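
A sketch of that combined spec, matching the description that follows (a Service on port 5601 selecting app: kibana, one replica, the docker.elastic.co/kibana/kibana:7.2.0 image, a 0.1 vCPU request with a 1 vCPU limit, and the ELASTICSEARCH_URL environment variable); the exact layout is illustrative:

apiVersion: v1
kind: Service
metadata:
  name: kibana
  namespace: kube-logging
  labels:
    app: kibana
spec:
  ports:
  - port: 5601              # Kibana is exposed inside the cluster on 5601
  selector:
    app: kibana             # forward traffic to Pods carrying the app: kibana label
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: kibana
  namespace: kube-logging
  labels:
    app: kibana
spec:
  replicas: 1               # scale this up for production as needed
  selector:
    matchLabels:
      app: kibana
  template:
    metadata:
      labels:
        app: kibana
    spec:
      containers:
      - name: kibana
        image: docker.elastic.co/kibana/kibana:7.2.0
        resources:
          requests:
            cpu: 100m       # at least 0.1 vCPU guaranteed
          limits:
            cpu: 1000m      # burst up to 1 vCPU
        env:
        - name: ELASTICSEARCH_URL
          value: http://elasticsearch:9200   # resolves via the headless elasticsearch Service
        ports:
        - containerPort: 5601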

Then, save the file.


In this spec, we've defined a Service called kibana in the kube-logging Namespace and given it the app: kibana label.

We've also specified that it should be accessible on port 5601 and use the app: kibana label to select the Service's target Pods.

In the Deployment spec, we define a Deployment called kibana and specify that we'd like one Pod replica.

We use the docker.elastic.co/kibana/kibana:7.2.0 image. At this point you may substitute your own private or public Kibana image.

We specify that we'd like at least 0.1 vCPU guaranteed to the Pod, bursting up to a limit of 1 vCPU. You may change these parameters depending on your anticipated load and available resources.

Next, we use the ELASTICSEARCH_URL environment variable to set the endpoint and port for the Elasticsearch cluster. Using Kubernetes DNS, this endpoint corresponds to its Service name, elasticsearch. This domain will resolve to a list of IP addresses for the three Elasticsearch Pods. To learn more about Kubernetes DNS, consult DNS for Services and Pods.

Finally, we set Kibana's container port to 5601, to which the kibana Service will forward requests.

Once you're happy with your Kibana configuration, you can roll out the Service and Deployment using kubectl:
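
Following the same kubectl create -f pattern used for the earlier manifests:

kubectl create -f kibana.yaml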

You should see the following output:

service/kibana created
deployment.apps/kibana created

You can confirm that the rollout succeeded by checking the Deployment's status.
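
One way is kubectl rollout status:

kubectl rollout status deployment/kibana --namespace=kube-logging

You should see the following output: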

deployment "kibana" successfully rolled out

To access the Kibana interface, we'll forward a local port to the Kubernetes node running Kibana. Grab the Kibana Pod details using kubectl get:

kubectl get pods --namespace=kube-logging

Here we observe that our Kibana Pod is named kibana-9cfcnhb7-lghs2. Forward local port 5601 to port 5601 on this Pod:

kubectl port-forward kibana-9cfcnhb7-lghs2 5601:5601 --namespace=kube-logging

You should see the following output:

Forwarding from 127.0.0.1:5601 -> 5601
Forwarding from [::1]:5601 -> 5601

Visit the following URL in your web browser:

http://localhost:5601

If you see the Kibana welcome page, you have successfully deployed Kibana into your Kubernetes cluster:

[Screenshot: Kibana welcome page]

You can now move on to rolling out the final component of the EFK stack: the log collector, Fluentd.

Step 4 — How to Create the Fluentd DaemonSet in the Cluster

Begin by opening a file called fluentd.yaml in nano. We use the Fluentd DaemonSet spec provided by the Fluentd maintainers. Another helpful resource provided by the Fluentd maintainers is Kubernetes Fluentd.

First, paste in the following ServiceAccount definition:
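
A minimal sketch of that definition (the app: fluentd label mirrors the one used throughout this step):

apiVersion: v1
kind: ServiceAccount
metadata:
  name: fluentd
  namespace: kube-logging
  labels:
    app: fluentd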

Here, we create a Service Account called fluentd that the Fluentd Pods will use to access the Kubernetes API.

Next, paste in the following ClusterRole block:
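
A typical minimal ClusterRole for Fluentd grants read access to Pod and Namespace metadata; the read-only rules shown here are an assumption:

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: fluentd
  labels:
    app: fluentd
rules:
- apiGroups:
  - ""            # core API group
  resources:
  - pods
  - namespaces
  verbs:
  - get
  - list
  - watch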

Now, paste in the following ClusterRoleBinding block:

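A sketch of the binding, tying the fluentd ClusterRole to the fluentd ServiceAccount in the kube-logging Namespace:

kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: fluentd
roleRef:
  kind: ClusterRole
  name: fluentd
  apiGroup: rbac.authorization.k8s.io
subjects:
- kind: ServiceAccount
  name: fluentd
  namespace: kube-logging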

At this point we can begin pasting in the actual DaemonSet spec:
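
A sketch of the opening of that spec, matching the name, Namespace, and label described just below:

apiVersion: apps/v1    # older manifests used extensions/v1beta1, as the creation output later suggests
kind: DaemonSet
metadata:
  name: fluentd
  namespace: kube-logging
  labels:
    app: fluentd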

Here, we define a DaemonSet called fluentd in the kube-logging Namespace and give it the app: fluentd label.

Next, paste in the following section:
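
A sketch of that section, under the assumption that the DaemonSet also tolerates the master taint so a Fluentd Pod can run on every node:

spec:
  selector:
    matchLabels:
      app: fluentd        # must match .spec.template.metadata.labels
  template:
    metadata:
      labels:
        app: fluentd
    spec:
      serviceAccount: fluentd
      serviceAccountName: fluentd
      tolerations:
      - key: node-role.kubernetes.io/master   # assumption: also schedule onto master nodes
        effect: NoSchedule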

Here, we match the app: fluentd label in .metadata.labels and assign the DaemonSet the fluentd Service Account.

Finally, paste in the following section:
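
A sketch of the container section, assuming the fluent/fluentd-kubernetes-daemonset Elasticsearch image (the exact tag and resource values are illustrative); it points Fluentd at the elasticsearch Service and mounts the node's log directories:

      containers:
      - name: fluentd
        image: fluent/fluentd-kubernetes-daemonset:v1.4.2-debian-elasticsearch-1.1   # illustrative tag
        env:
        - name: FLUENT_ELASTICSEARCH_HOST
          value: "elasticsearch.kube-logging.svc.cluster.local"   # the headless Service created earlier
        - name: FLUENT_ELASTICSEARCH_PORT
          value: "9200"
        - name: FLUENT_ELASTICSEARCH_SCHEME
          value: "http"
        - name: FLUENTD_SYSTEMD_CONF
          value: disable
        resources:
          limits:
            memory: 512Mi
          requests:
            cpu: 100m
            memory: 200Mi
        volumeMounts:
        - name: varlog
          mountPath: /var/log
        - name: varlibdockercontainers
          mountPath: /var/lib/docker/containers
          readOnly: true
      terminationGracePeriodSeconds: 30
      volumes:
      - name: varlog
        hostPath:
          path: /var/log
      - name: varlibdockercontainers
        hostPath:
          path: /var/lib/docker/containers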

The entire fluentd.yaml spec is the combination of the sections above.

Now that we've finished configuring the Fluentd DaemonSet, save the file. Then, roll out the DaemonSet using kubectl:
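
As with the other manifests, kubectl create -f does the job:

kubectl create -f fluentd.yaml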

You should see the following output:

serviceaccount/fluentd created
clusterrole.rbac.authorization.k8s.io/fluentd created
clusterrolebinding.rbac.authorization.k8s.io/fluentd created
daemonset.extensions/fluentd created

Confirm that your DaemonSet rolled out successfully.
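
One way is to list the DaemonSets in the kube-logging Namespace with kubectl:

kubectl get ds --namespace=kube-logging

You should see the following status output: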

NAME      DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR   AGE
fluentd   3         3         3       3            3            <none>          58s

This indicates that three fluentd Pods are running in the cluster.

We can now check Kibana to verify that log data is being properly collected and shipped to Elasticsearch.

With the kubectl port-forward still open, navigate to http://localhost:5601.


Click Discover in the left-hand navigation menu. You will see this window:

[Screenshot: index pattern setup window]

After that window, this section will appear:

Here, select the @timestamp field from the dropdown, and hit Create index pattern as mentioned.

[Screenshot: selecting the @timestamp field and creating the index pattern]

You should see a bar chart and a few of the latest log entries here:

[Screenshot: log histogram and latest log entries]

Search for Kubernetes here, and you will see the latest logs:

[Screenshot: search results for Kubernetes]

At this point, you have successfully set up and rolled out the EFK stack on your Kubernetes cluster.


Thanks


For reference, this article was translated from: https://medium.com/avmconsulting-blog/how-to-deploy-an-efk-stack-to-kubernetes-ebc1b539d063
