ELK Installation, Deployment, and Usage Guide

ELK, the popular log analysis stack, is made up of three parts: Logstash, Elasticsearch, and Kibana. These three are the core components, though not the whole stack.

Elasticsearch is a real-time full-text search and analytics engine providing three main capabilities: collecting, analyzing, and storing data. It is a scalable distributed system that exposes efficient search features through REST and Java APIs, and it is built on top of the Apache Lucene search engine library.

Logstash is a tool for collecting, parsing, and filtering logs. It supports almost any type of log, including system logs, error logs, and custom application logs. It can receive logs from many sources, including syslog, messaging systems (such as RabbitMQ), and JMX, and it can output data in many ways, including email, WebSockets, and Elasticsearch.

Kibana is a web-based graphical interface for searching, analyzing, and visualizing log data stored in Elasticsearch indices. It uses Elasticsearch's REST interface to retrieve the data, and lets users not only build custom dashboard views of their own data, but also query and filter it in ad-hoc ways.

A typical architecture looks like this:

(figure: typical ELK architecture diagram)

The three components are introduced one by one below.

Logstash


I. Logstash Installation

1. Download

Download the appropriate version from the official site. Note that Logstash requires at least JDK 8:

7.8: Java 8 minimum; Java 11 and Java 14 supported
6.8: JDK 8 minimum; JDK 11 supported
6.0: JDK 8 minimum; JDK 9 supported
5.0: JDK 8 minimum; JDK 9 supported

Official site: https://www.elastic.co/logstash

2. Installation

Extract the downloaded tar.gz package; the resulting directory looks like this:

eyecool@eyecool-OptiPlex-7060:~/logstash/logstash-7.8.1$ ls
 bin  config  CONTRIBUTORS  data  Gemfile  Gemfile.lock  lib  LICENSE.txt  logs  logstash-core  logstash-core-plugin-api  modules  NOTICE.TXT  tools  vendor  x-pack

Note that JAVA_HOME must be configured.
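As a minimal sketch (the JDK path below is an example, adjust it to your machine), JAVA_HOME can be exported before starting Logstash:

```shell
# Point JAVA_HOME at the JDK installation (path is an assumption for this example)
export JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64
# Put the JDK's bin directory first on PATH so the right java is picked up
export PATH="$JAVA_HOME/bin:$PATH"
```

Adding these two lines to ~/.profile (or to your startup script) makes the setting persistent.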

3. Start / Stop

mac: after extracting, run xattr -d -r com.apple.quarantine logstash-7.8.1 to remove the quarantine attribute.

linux: bin/logstash [options]

In practice the process usually needs to run in the background, preferably without producing extra files such as nohup.out. To avoid that, you can write your own startup command, for example:

nohup ./bin/logstash -f ./config/logstash.conf  >/dev/null 2>&1 &

You can check the startup status in the files under logs, or verify with ps.

Kill the Logstash process with kill pid.
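The start and stop steps above can be wrapped in a small script. This is a sketch: LS_HOME, the config path, and the PID file location are assumptions, so adjust them to your layout.

```shell
#!/bin/bash
# Minimal Logstash start/stop wrapper (paths are assumptions for this sketch)
LS_HOME="${LS_HOME:-$HOME/logstash/logstash-7.8.1}"
PID_FILE="$LS_HOME/logstash.pid"

start() {
  # Run in the background and discard stdout/stderr so no nohup.out is produced
  nohup "$LS_HOME/bin/logstash" -f "$LS_HOME/config/logstash.conf" >/dev/null 2>&1 &
  echo $! > "$PID_FILE"   # remember the background PID for later shutdown
}

stop() {
  if [ -f "$PID_FILE" ]; then
    kill "$(cat "$PID_FILE")" && rm -f "$PID_FILE"
  fi
}

case "$1" in
  start) start ;;
  stop)  stop ;;
  *) echo "usage: $0 {start|stop}" ;;
esac
```

Saving this as logstash-ctl.sh lets you run ./logstash-ctl.sh start and ./logstash-ctl.sh stop instead of hunting for the PID by hand.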

II. How It Works

A Logstash pipeline consists of three parts: inputs, filters, and outputs. The official documentation describes them as follows:

The Logstash event processing pipeline has three stages: inputs → filters → outputs. Inputs generate events, filters modify them, and outputs ship them elsewhere. Inputs and outputs support codecs that enable you to encode or decode the data as it enters or exits the pipeline without having to use a separate filter.

Inputs: configure an input to ingest data. Commonly used inputs include file, syslog (port 514), and redis.
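As an illustrative sketch (the path and port below are assumptions), an input block combining two of these plugins might look like:

```conf
input {
  # Tail an application log file from the beginning
  file {
    path => "/var/log/myapp/app.log"
    start_position => "beginning"
  }
  # Listen for syslog messages (514 requires root; 5514 is a common unprivileged choice)
  syslog { port => 5514 }
}
```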

Filters: filters are the intermediate processing stages of the Logstash pipeline. You can combine filters with conditionals to perform actions on events that match certain criteria. Some useful filters include:

grok: parse and structure arbitrary text. Grok is currently the best way in Logstash to parse unstructured log data into something structured and queryable. With 120 patterns built-in to Logstash, it’s more than likely you’ll find one that meets your needs!
mutate: perform general transformations on event fields. You can rename, remove, replace, and modify fields in your events.
drop: drop an event completely, for example, debug events.
clone: make a copy of an event, possibly adding or removing fields.
geoip: add information about geographical location of IP addresses (also displays amazing charts in Kibana!)
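A sketch combining two of these filters (the field names are assumptions; %{COMBINEDAPACHELOG} is one of the patterns that ships with Logstash):

```conf
filter {
  # Parse Apache access-log lines into structured fields
  grok {
    match => { "message" => "%{COMBINEDAPACHELOG}" }
  }
  # Enrich events with the geographic location of the client IP
  geoip { source => "clientip" }
}
```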

Outputs: outputs are the final stage of the pipeline. An event can pass through multiple outputs, and once all output processing is complete, the event has finished execution. Some commonly used outputs include:

elasticsearch: send event data to Elasticsearch. If you’re planning to save your data in an efficient, convenient, and easily queryable format…​ Elasticsearch is the way to go. Period. Yes, we’re biased :)
file: write event data to a file on disk.
graphite: send event data to graphite, a popular open source tool for storing and graphing metrics. http://graphite.readthedocs.io/en/latest/
statsd: send event data to statsd, a service that "listens for statistics, like counters and timers, sent over UDP and sends aggregates to one or more pluggable backend services". If you’re already using statsd, this could be useful for you!

III. Configuration

Logstash must be started with a configuration file. After the package is extracted, the config directory contains a logstash-sample.conf file; copy it and name the copy logstash.conf.

This file defines which logs to collect. Below is a reference configuration with multiple inputs and multiple outputs.

# Sample Logstash configuration for creating a simple
# Beats -> Logstash -> Elasticsearch pipeline.

input {
  file {
    path => "/home/eyecool/abis/logs/sys-info.log"
    type => "abis-info"
    start_position => "beginning"
  }
  file {
    path => "/home/eyecool/abis/logs/sys-error.log"
    type => "abis-error"
    start_position => "beginning"
  }
}

output {
  if [type] == "abis-info" {
    elasticsearch {
      hosts => ["http://localhost:9200"]
      index => "abis-info-%{+YYYY.MM.dd}"
    }
  }

  if [type] == "abis-error" {
    elasticsearch {
      hosts => ["http://localhost:9200"]
      index => "abis-error-%{+YYYY.MM.dd}"
    }
  }

  stdout { codec => rubydebug }
}

The index in each output is the index name you can later query in Elasticsearch; it is also what you will reference when creating index patterns in Kibana.

The hosts in each output is the address of the corresponding Elasticsearch; multiple addresses can be configured, separated by commas.
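For example, a multi-host elasticsearch output might look like this (the host names are assumptions):

```conf
elasticsearch {
  hosts => ["http://es1:9200", "http://es2:9200", "http://es3:9200"]
  index => "abis-info-%{+YYYY.MM.dd}"
}
```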

IV. Miscellaneous

For more advanced usage, see the official documentation: https://www.elastic.co/guide/en/logstash/current/index.html

ElasticSearch

Elasticsearch is the distributed search and analytics engine at the heart of the Elastic Stack. Logstash and Beats help collect, aggregate, and enrich your data and store it in Elasticsearch. With Kibana, you can interactively explore, visualize, and share insights into the data, and manage and monitor the stack.

I. Elasticsearch Installation

Several of the system-level settings Elasticsearch needs at startup require root privileges. If you cannot obtain root on the deployment machine, think twice before proceeding.

1. Download

Download the appropriate version from the official site.

Official site: https://www.elastic.co/elasticsearch/

2. Installation

Extract the downloaded package (tar.gz); the directory looks like this:

eyecool@eyecool-OptiPlex-7060:~/elasticsearch/elasticsearch-7.8.1$ ls
bin  config  jdk  lib  LICENSE.txt  logs  modules  NOTICE.txt  plugins  README.asciidoc

3. Start / Stop

Many errors can come up during startup. The errors and fixes below were collected while testing on Ubuntu 18.

Start: ./bin/elasticsearch -d -p pid (writes the process ID to the file pid)
Stop: pkill -F pid

1) If you start with ./bin/elasticsearch -d -p pid without changing the configuration at all, you will see the following output:

future versions of Elasticsearch will require Java 11; your Java version from [/usr/lib/jvm/java-8-oracle/jre] does not meet this requirement

Even so, ps shows the process, so the node does start. The warning says that future versions of Elasticsearch will require Java 11; this machine runs JDK 8, which 7.8 still accepts.

You can verify with curl 127.0.0.1:9200; if it returns something like the following, the node started successfully:

{
  "name" : "eyecool-OptiPlex-7060",
  "cluster_name" : "elasticsearch",
  "cluster_uuid" : "bc9L_L1nTWO3F3f2d4PTLA",
  "version" : {
    "number" : "7.8.1",
    "build_flavor" : "default",
    "build_type" : "tar",
    "build_hash" : "b5ca9c58fb664ca8bf9e4057fc229b3396bf3a89",
    "build_date" : "2020-07-21T16:40:44.668009Z",
    "build_snapshot" : false,
    "lucene_version" : "8.5.1",
    "minimum_wire_compatibility_version" : "6.8.0",
    "minimum_index_compatibility_version" : "6.0.0-beta1"
  },
  "tagline" : "You Know, for Search"
}

2) To allow other machines to access it from a browser, edit the configuration file: uncomment network.host and set it to 0.0.0.0

# ---------------------------------- Network -----------------------------------
#
# Set the bind address to a specific IP (IPv4 or IPv6):
#
network.host: 0.0.0.0
#
# Set a custom port for HTTP:
#
#http.port: 9200
#
# For more information, consult the network module documentation.

3) Even with only network.host changed, startup may still fail with errors such as:

max virtual memory areas vm.max_map_count [65530] is too low, increase to at least [262144]

This requires root privileges: edit /etc/sysctl.conf and append at the end of the file

vm.max_map_count=262144

then apply the change with sysctl -p as root (or reboot the machine).

4) After that, startup may still report an error:

[1]: the default discovery settings are unsuitable for production use; at least one of [discovery.seed_hosts, discovery.seed_providers, cluster.initial_master_nodes] must be configured

Edit the elasticsearch.yml configuration again and initialize a node:

#
# --------------------------------- Discovery ----------------------------------
#
# Pass an initial list of hosts to perform discovery when this node is started:
# The default list of hosts is ["127.0.0.1", "[::1]"]
#
#discovery.seed_hosts: ["host1", "host2"]
#
# Bootstrap the cluster using an initial set of master-eligible nodes:
#
#cluster.initial_master_nodes: ["node-1", "node-2"]
cluster.initial_master_nodes: ["node-1"]
#
# For more information, consult the discovery and cluster formation module documentation.
#

5) Other errors:

org.elasticsearch.ElasticsearchException: Failure running machine learning native code. This could be due to running on an unsupported OS or distribution, missing OS libraries, or a problem with the temp directory. To bypass this problem by running Elasticsearch without machine learning functionality set [xpack.ml.enabled: false].
	at org.elasticsearch.xpack.ml.MachineLearning.createComponents(MachineLearning.java:618) ~[?:?]
	at org.elasticsearch.node.Node.lambda$new$11(Node.java:484) ~[elasticsearch-7.8.1.jar:7.8.1]
	at java.util.stream.ReferencePipeline$7$1.accept(ReferencePipeline.java:267) ~[?:1.8.0_191]
	at java.util.ArrayList$ArrayListSpliterator.forEachRemaining(ArrayList.java:1382) ~[?:1.8.0_191]
	at java.util.stream.AbstractPipeline.copyInto(AbstractPipeline.java:481) ~[?:1.8.0_191]
	at java.util.stream.AbstractPipeline.wrapAndCopyInto(AbstractPipeline.java:471) ~[?:1.8.0_191]
	at java.util.stream.ReduceOps$ReduceOp.evaluateSequential(ReduceOps.java:708) ~[?:1.8.0_191]
	at java.util.stream.AbstractPipeline.evaluate(AbstractPipeline.java:234) ~[?:1.8.0_191]
	at java.util.stream.ReferencePipeline.collect(ReferencePipeline.java:499) ~[?:1.8.0_191]
	at org.elasticsearch.node.Node.<init>(Node.java:488) ~[elasticsearch-7.8.1.jar:7.8.1]
	at org.elasticsearch.node.Node.<init>(Node.java:266) ~[elasticsearch-7.8.1.jar:7.8.1]
	at org.elasticsearch.bootstrap.Bootstrap$5.<init>(Bootstrap.java:227) ~[elasticsearch-7.8.1.jar:7.8.1]
	at org.elasticsearch.bootstrap.Bootstrap.setup(Bootstrap.java:227) ~[elasticsearch-7.8.1.jar:7.8.1]
	at org.elasticsearch.bootstrap.Bootstrap.init(Bootstrap.java:393) [elasticsearch-7.8.1.jar:7.8.1]
	at org.elasticsearch.bootstrap.Elasticsearch.init(Elasticsearch.java:170) [elasticsearch-7.8.1.jar:7.8.1]
	at org.elasticsearch.bootstrap.Elasticsearch.execute(Elasticsearch.java:161) [elasticsearch-7.8.1.jar:7.8.1]
	at org.elasticsearch.cli.EnvironmentAwareCommand.execute(EnvironmentAwareCommand.java:86) [elasticsearch-7.8.1.jar:7.8.1]
	at org.elasticsearch.cli.Command.mainWithoutErrorHandling(Command.java:127) [elasticsearch-cli-7.8.1.jar:7.8.1]
	at org.elasticsearch.cli.Command.main(Command.java:90) [elasticsearch-cli-7.8.1.jar:7.8.1]
	at org.elasticsearch.bootstrap.Elasticsearch.main(Elasticsearch.java:126) [elasticsearch-7.8.1.jar:7.8.1]
	at org.elasticsearch.bootstrap.Elasticsearch.main(Elasticsearch.java:92) [elasticsearch-7.8.1.jar:7.8.1]
[2020-08-13T13:55:36,823][ERROR][o.e.b.ElasticsearchUncaughtExceptionHandler] [eyecool-OptiPlex-7060] uncaught exception in thread [main]
org.elasticsearch.bootstrap.StartupException: ElasticsearchException[Failure running machine learning native code. This could be due to running on an unsupported OS or distribution, missing OS libraries, or a problem with the temp directory. To bypass this problem by running Elasticsearch without machine learning functionality set [xpack.ml.enabled: false].]
	at org.elasticsearch.bootstrap.Elasticsearch.init(Elasticsearch.java:174) ~[elasticsearch-7.8.1.jar:7.8.1]
	at org.elasticsearch.bootstrap.Elasticsearch.execute(Elasticsearch.java:161) ~[elasticsearch-7.8.1.jar:7.8.1]
	at org.elasticsearch.cli.EnvironmentAwareCommand.execute(EnvironmentAwareCommand.java:86) ~[elasticsearch-7.8.1.jar:7.8.1]
	at org.elasticsearch.cli.Command.mainWithoutErrorHandling(Command.java:127) ~[elasticsearch-cli-7.8.1.jar:7.8.1]
	at org.elasticsearch.cli.Command.main(Command.java:90) ~[elasticsearch-cli-7.8.1.jar:7.8.1]
	at org.elasticsearch.bootstrap.Elasticsearch.main(Elasticsearch.java:126) ~[elasticsearch-7.8.1.jar:7.8.1]
	at org.elasticsearch.bootstrap.Elasticsearch.main(Elasticsearch.java:92) ~[elasticsearch-7.8.1.jar:7.8.1]
	Caused by: org.elasticsearch.ElasticsearchException: ... (the same exception and stack trace as shown above)

To bypass it, edit the yml configuration and disable the machine learning module via xpack:

# ---------------------------------- Various -----------------------------------
#
# Require explicit names when deleting indices:
#
xpack.ml.enabled: false

Other fixes found in community resources:

max file descriptors: as root, edit /etc/security/limits.conf, set both the soft nofile and hard nofile values to 65536, save and exit, then log in again
max virtual memory: as root, run sysctl -w vm.max_map_count=262144

II. Working with Indices

1. Listing Indices

Visit http://192.168.61.95:9200/_cat/indices to list all indices:

green  open .apm-custom-link               6m_UeC5dTb-EeKnKuBI4zA 1 0   0 0    208b    208b
green  open .kibana_task_manager_1         FWIWep9_QR-PNG3RIXYGeQ 1 0   5 2  32.3kb  32.3kb
green  open .apm-agent-configuration       HPDHiZ5XSFKMCEc26Vvteg 1 0   0 0    208b    208b
yellow open es-message-2020.08.13          OrS5JDSYQBWPz1fsgoDAlQ 1 1 342 0 138.3kb 138.3kb
green  open .async-search                  SR6eAcwrTYWtiawmV043_Q 1 0   1 0  26.7kb  26.7kb
green  open .kibana_1                      j0r9N6gJTyaXVfHBC70GVw 1 0  30 6    56kb    56kb
green  open .kibana-event-log-7.8.1-000001 tRwQz06qSROfLgUUDK7FpA 1 0   1 0   5.3kb   5.3kb
yellow open es-message-2020.08.14          RtT3NH_XTjGAGmiH8nZ9AQ 1 1   2 0  13.1kb  13.1kb

2. Deleting Indices

Delete a single index:

curl -XDELETE -u elastic:changeme http://localhost:9200/es-message-2020.08.14

Delete several indices at once, separated by commas:

curl -XDELETE -u elastic:changeme http://localhost:9200/abis-info-2020.08.09,abis-error-2020.08.10

Delete by wildcard match:

curl -XDELETE -u elastic:changeme http://localhost:9200/abis-*

Delete all indices:

curl -XDELETE http://localhost:9200/_all
or curl -XDELETE http://localhost:9200/*

	_all and * match every index.
    Using wildcards here is generally discouraged: an accidental deletion is severe, since every index would be removed.
    To be safe, you can disable _all and * for destructive operations in elasticsearch.yml:
    action.destructive_requires_name: true
    After that, _all and * can no longer be used to delete indices.

To delete old indices on a schedule, you can use a shell script like the following:

#!/bin/bash
# es-index-clear
# Keep only the last 15 days of log indices: each run deletes the
# indices dated 15 days ago, so schedule the script to run daily.
LAST_DATA=`date -d "-15 days" "+%Y.%m.%d"`
curl -XDELETE 'http://ip:port/*-'${LAST_DATA}'*'
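To run such a cleanup script daily, it can be scheduled with cron (the script path below is an assumption); add a line like this via crontab -e:

```conf
# Delete the 15-day-old indices every day at 01:30
30 1 * * * /opt/scripts/es-index-clear.sh >/dev/null 2>&1
```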

III. Miscellaneous

For more advanced usage, see the official documentation: https://www.elastic.co/guide/en/elasticsearch/reference/current/index.html

Kibana

Kibana provides a web interface for aggregating, analyzing, and visualizing the collected log data, and the interface itself is quite polished.

Kibana was designed from the start with Elasticsearch as its data source: think of Elasticsearch as the engine that stores and processes the data, with Kibana sitting on top of it.

I. Kibana Installation

Since version 6.0, Kibana supports only 64-bit operating systems.

1. Download

Download the appropriate version from the official site.

https://www.elastic.co/kibana

2. Installation

Extract the downloaded package (tar.gz); the directory looks like this:

eyecool@eyecool-OptiPlex-7060:~/kibana/kibana-7.8.1-linux-x86_64$ ls
bin  built_assets  config  data  LICENSE.txt  node  node_modules  NOTICE.txt  optimize  package.json  plugins  README.txt  src  webpackShims  x-pack

3. Start / Stop

Start: ./bin/kibana

Started this way, the service stops as soon as you press Ctrl+C.

In practice you can use:

nohup ./bin/kibana >/dev/null 2>&1 &

On Linux, ps -ef|grep kibana will not find the process; reportedly this is because Kibana is written in Node.js.

So to look for the Kibana process, use ps -ef|grep node instead; however, there may be several node processes and no obvious way to tell which one is Kibana. Since Kibana listens on port 5601 by default, netstat can identify which node process it is:

netstat -tunlp|grep 5601

Then kill the process with kill pid.
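The lookup-and-kill step can be sketched with a small helper (this assumes the GNU net-tools netstat output format, where column 4 is the local address and column 7 is "PID/Program"):

```shell
# extract_pid reads `netstat -tunlp` output on stdin and prints the PID
# bound to the given port (column 4 = local address, column 7 = "PID/Program")
extract_pid() { awk '$4 ~ /:'"$1"'$/ {split($7,a,"/"); print a[1]; exit}'; }

PID=$(netstat -tunlp 2>/dev/null | extract_pid 5601)
if [ -n "$PID" ]; then
  echo "kibana pid: $PID"   # run: kill "$PID" to stop it
fi
```

This avoids guessing among several node processes by matching on the listening port instead of the process name.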

4. Access

The default port is 5601; open http://192.168.61.95:5601/app/kibana in a browser.

II. Configuration

Since Kibana's data comes from Elasticsearch, you must point Kibana at it in the configuration file before the data you want shows up in the UI.

The kibana.yml file can be found in the .../config directory:

eyecool@eyecool-OptiPlex-7060:~/kibana/kibana-7.8.1-linux-x86_64/config$ more kibana.yml 
# Kibana is served by a back end server. This setting specifies the port to use.
server.port: 5601

# Specifies the address to which the Kibana server will bind. IP addresses and host names are both valid values.
# The default is 'localhost', which usually means remote machines will not be able to connect.
# To allow connections from remote users, set this parameter to a non-loopback address.
server.host: "0.0.0.0"

# Enables you to specify a path to mount Kibana at if you are running behind a proxy.
# Use the `server.rewriteBasePath` setting to tell Kibana if it should remove the basePath
# from requests it receives, and to prevent a deprecation warning at startup.
# This setting cannot end in a slash.
#server.basePath: ""

# Specifies whether Kibana should rewrite requests that are prefixed with
# `server.basePath` or require that they are rewritten by your reverse proxy.
# This setting was effectively always `false` before Kibana 6.3 and will
# default to `true` starting in Kibana 7.0.
#server.rewriteBasePath: false

# The maximum payload size in bytes for incoming server requests.
#server.maxPayloadBytes: 1048576

# The Kibana server's name.  This is used for display purposes.
#server.name: "your-hostname"

# The URLs of the Elasticsearch instances to use for all your queries.
elasticsearch.hosts: ["http://192.168.61.95:9200"]

where:

server.port: 5601 is the default port

server.host: "0.0.0.0" allows access from other machines

elasticsearch.hosts: ["http://ip:port"] is the data source; multiple hosts can be listed

Other parameters are explained in the comments of the configuration file and in the official documentation.

Restart Kibana, and the querying can begin.

III. Querying

1. Index Patterns

To view data in the UI, you first have to create index patterns. Logs of the same kind can be grouped into one pattern: for example, since Logstash creates one index per day, a single index pattern can cover the whole series of daily indices, or a pattern can be created for one individual index, whichever fits the situation.

From the menu, navigate to Management -> Stack Management -> Kibana -> Index Patterns and follow the prompts to create an index pattern.


Once created, the index pattern appears in the list.


2. Search

In the left-hand menu, navigate to Home -> Kibana -> Discover.


In the drop-down on the left, select an index pattern you created to query it.

The results can then be filtered via Selected fields and Available fields.

IV. Miscellaneous

For more advanced usage, see the official documentation: https://www.elastic.co/guide/en/kibana/current/index.html
