A Simple ELK Setup and Usage Guide

1. Introduction to ELK

At scale, inspecting logs host by host is inefficient: there is too much log data to archive easily, text search is slow, and multi-dimensional queries are hard. What is needed is centralized log management, with the logs from all servers collected and aggregated. The common approach is to build a centralized log collection system through which the logs of every node are collected, managed, and accessed in one place.

Large systems are generally deployed as distributed architectures, with different service modules on different servers. When a problem occurs, you usually need to use the key information it exposes to pinpoint the specific server and service module. A centralized logging system makes that kind of troubleshooting far more efficient.

A complete centralized logging system needs these core capabilities:

Collection - able to gather log data from multiple sources
Transport - able to ship log data to the central system reliably
Storage - how the log data is stored
Analysis - supports analysis through a UI
Alerting - provides error reporting and monitoring

ELK provides a complete solution built from open-source components that work together seamlessly and efficiently cover a wide range of scenarios. It is currently one of the mainstream choices for logging systems.

About ELK:
ELK is an acronym for three open-source projects: Elasticsearch, Logstash, and Kibana. A newer addition is Filebeat, a lightweight log collection agent. Filebeat uses few resources and is well suited to collecting logs on individual servers and shipping them to Logstash; it is also the officially recommended tool for that job.

Elasticsearch is an open-source distributed search engine that handles collecting, analyzing, and storing data. Its features include: distributed operation, zero configuration, automatic discovery, automatic index sharding, an index replica mechanism, a RESTful interface, multiple data sources, and automatic search load balancing.

Logstash is a tool for collecting, parsing, and filtering logs, and it supports a large number of ways to acquire data. It generally works in a client/server fashion: the client side runs on the hosts whose logs are being collected, while the server side filters and transforms the events received from each node before forwarding them to elasticsearch.

Kibana is likewise open source and free. It provides a friendly web UI for analyzing the logs in Logstash and ElasticSearch, helping you aggregate, analyze, and search important log data.

Filebeat belongs to the Beats family, which currently includes four tools:

Packetbeat (collects network traffic data)
Topbeat (collects system-, process-, and filesystem-level CPU and memory usage data)
Filebeat (collects log file data)
Winlogbeat (collects Windows event log data)

2. Environment

203.48.12.202  elasticsearch, kibana  (master-eligible, non-data node)
203.48.12.66   elasticsearch, kibana, logstash, filebeat  (master-eligible, non-data node)
203.48.27.105  elasticsearch, logstash  (non-master, data node)

The installation packages are under /mnt, and everything is installed into /opt.
JDK 1.8 is required. Check the version with java -version:

[root@root ~]# java -version
java version "1.8.0_101"
Java(TM) SE Runtime Environment (build 1.8.0_101-b13)
Java HotSpot(TM) 64-Bit Server VM (build 25.101-b13, mixed mode)

Disable the firewall and disable SELinux.

[root@root ~]# vim /etc/sysctl.conf
vm.max_map_count=655360			#add this line: the maximum number of memory map areas a process may have (required by es)
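
For reference, a minimal sketch of these host-preparation steps (assuming CentOS 7 with firewalld; adjust for your distribution):

[root@root ~]# systemctl stop firewalld && systemctl disable firewalld
[root@root ~]# setenforce 0			#temporary; set SELINUX=disabled in /etc/selinux/config to persist
[root@root ~]# sysctl -p			#apply the vm.max_map_count change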

3. Setup

3.1 elasticsearch (hereafter "es")

Note: es cannot be started as the root user.
First create an elasticsearch user and change the ownership of the es installation directory:

[root@root ~]# groupadd  elasticsearch
[root@root ~]# useradd elasticsearch -g elasticsearch
[root@root ~]# chown elasticsearch:elasticsearch /opt/elasticsearch-7.0.0/ -R

Configuration on 203.48.12.202:

[root@root ~]# vim /opt/elasticsearch-7.0.0/config/elasticsearch.yml 
cluster.name: devin		#cluster name
node.name: devin-1		#node name
node.master: true		#master-eligible node?
node.data: false		#data node?
network.host: 203.48.12.202		#this machine's IP
http.port: 9200		#exposed HTTP port
discovery.seed_hosts: ["203.48.12.202", "203.48.12.66","203.48.27.105"]		#cluster hosts
cluster.initial_master_nodes: ["devin-1", "devin-2"]		#nodes that may become master
gateway.recover_after_nodes: 3		#start data recovery only once 3 nodes have joined (prevents individual nodes from recovering data on their own during cluster startup)

On the other two nodes, only the following values need to change. (Note: do not copy the already-unpacked directory over from the first node. Once the first node has started, /opt/elasticsearch-7.0.0/data contains its node state, and a second node started from that data will conflict with it. Copy the original package over and unpack it fresh, or clear out the data directory.)

node.name: devin-3	#node name
node.master: false		#master-eligible node?
node.data: true		#data node?
network.host: 203.48.27.105		#this machine's IP

To start (as the elasticsearch user):

[root@root ~]# su - elasticsearch
[elasticsearch@root ~]# cd /opt/elasticsearch-7.0.0/bin
[elasticsearch@root ~]# nohup ./elasticsearch &

After startup, check that ports 9200 and 9300 are listening.
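
For example, with ss (netstat works as well):

[root@root ~]# ss -lntp | grep -E ':(9200|9300)'
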
Check that es is responding:

[root@root ~]# curl 203.48.12.66:9200
{
  "name" : "devin-2",
  "cluster_name" : "devin",
  "cluster_uuid" : "zgY6Ay1DRMG7IvK-0Td8HQ",
  "version" : {
    "number" : "7.0.0",
    "build_flavor" : "default",
    "build_type" : "tar",
    "build_hash" : "b7e28a7",
    "build_date" : "2019-04-05T22:55:32.697037Z",
    "build_snapshot" : false,
    "lucene_version" : "8.0.0",
    "minimum_wire_compatibility_version" : "6.7.0",
    "minimum_index_compatibility_version" : "6.0.0-beta1"
  },
  "tagline" : "You Know, for Search"
}

Or check in a browser.
Check that the cluster is healthy:

[root@root ~]# curl http://203.48.12.66:9200/_cluster/health?pretty
{
  "cluster_name" : "devin",
  "status" : "green",
  "timed_out" : false,
  "number_of_nodes" : 3,
  "number_of_data_nodes" : 2,
  "active_primary_shards" : 8,
  "active_shards" : 16,
  "relocating_shards" : 0,
  "initializing_shards" : 0,
  "unassigned_shards" : 0,
  "delayed_unassigned_shards" : 0,
  "number_of_pending_tasks" : 0,
  "number_of_in_flight_fetch" : 0,
  "task_max_waiting_in_queue_millis" : 0,
  "active_shards_percent_as_number" : 100.0
}
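
The individual nodes and their roles can also be listed via the _cat API:

[root@root ~]# curl '203.48.12.66:9200/_cat/nodes?v'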

Or check in a browser.

View the current master node:

[root@root ~]# curl http://203.48.12.202:9200/_cat/master?v
id                     host         ip           node
2hBzl_aQRbCAWdUlrNZbeg 203.48.12.66 203.48.12.66 devin-2

Or check in a browser.

3.2 kibana

[root@root ~]# vim /opt/kibana-7.0.0-linux-x86_64/config/kibana.yml
server.port: 5601
server.host: "203.48.12.66"
elasticsearch.hosts: ["http://203.48.12.202:9200","http://203.48.12.66:9200"]

Start it:

[root@root ~]# nohup /opt/kibana-7.0.0-linux-x86_64/bin/kibana &

Because Kibana is written in Node.js, there may be no kibana process to find; search for node instead:

[root@root ~]# ps -ef | grep node

Check that port 5601 is listening.
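
For example:

[root@root ~]# ss -lntp | grep 5601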

Or open a browser at the Kibana server's IP on port 5601.

3.3 logstash

3.3.1. Basic Logstash syntax
input {
  # define inputs here
}

output {
  # define outputs here
}
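
A pipeline can also include an optional filter block between input and output; a minimal sketch:

input {
  stdin {}
}

filter {
  # parse or transform events here, e.g. with grok or mutate
}

output {
  stdout { codec => rubydebug }
}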

3.3.2. Testing standard input and output

Use the rubydebug codec to print events in the foreground for display and testing:

[root@root bin]# /opt/logstash-7.0.0/bin/logstash -e 'input { stdin {} } output { stdout { codec => rubydebug } }'
Sending Logstash logs to /opt/logstash-7.0.0/logs which is now configured via log4j2.properties
[2019-04-17T14:28:44,507][WARN ][logstash.config.source.multilocal] Ignoring the 'pipelines.yml' file because modules or command line options are specified
[2019-04-17T14:28:44,526][INFO ][logstash.runner          ] Starting Logstash {"logstash.version"=>"7.0.0"}
[2019-04-17T14:28:53,828][INFO ][logstash.javapipeline    ] Starting pipeline {:pipeline_id=>"main", "pipeline.workers"=>4, "pipeline.batch.size"=>125, "pipeline.batch.delay"=>50, "pipeline.max_inflight"=>500, :thread=>"#<Thread:0x1ef7aadb run>"}
[2019-04-17T14:28:53,950][INFO ][logstash.javapipeline    ] Pipeline started {"pipeline.id"=>"main"}
The stdin plugin is now waiting for input:
[2019-04-17T14:28:54,185][INFO ][logstash.agent           ] Pipelines running {:count=>1, :running_pipelines=>[:main], :non_running_pipelines=>[]}
[2019-04-17T14:28:54,922][INFO ][logstash.agent           ] Successfully started Logstash API endpoint {:port=>9600}
hello
/opt/logstash-7.0.0/vendor/bundle/jruby/2.5.0/gems/awesome_print-1.7.0/lib/awesome_print/formatters/base_formatter.rb:31: warning: constant ::Fixnum is deprecated
{
          "host" => "data-node1",		#host标记事件发生在哪里
       "message" => "hello",			#消息的具体内容
      "@version" => "1",				#@version时间版本号,一个事件就是一个ruby对象
    "@timestamp" => 2019-04-17T06:29:09.643Z			#@timestamp,用来标记当前事件发生时间
}


3.3.3. Testing output to a file
[root@root bin]# /opt/logstash-7.0.0/bin/logstash -e 'input { stdin {} } output { file { path => "/tmp/test-%{+YYYY.MM.dd}.log" } }'
Sending Logstash logs to /opt/logstash-7.0.0/logs which is now configured via log4j2.properties
[2019-04-17T14:32:04,753][WARN ][logstash.config.source.multilocal] Ignoring the 'pipelines.yml' file because modules or command line options are specified
[2019-04-17T14:32:04,780][INFO ][logstash.runner          ] Starting Logstash {"logstash.version"=>"7.0.0"}
hello
[2019-04-17T14:32:13,382][INFO ][logstash.javapipeline    ] Starting pipeline {:pipeline_id=>"main", "pipeline.workers"=>4, "pipeline.batch.size"=>125, "pipeline.batch.delay"=>50, "pipeline.max_inflight"=>500, :thread=>"#<Thread:0x48144385 run>"}
[2019-04-17T14:32:13,541][INFO ][logstash.javapipeline    ] Pipeline started {"pipeline.id"=>"main"}
The stdin plugin is now waiting for input:
[2019-04-17T14:32:13,742][INFO ][logstash.agent           ] Pipelines running {:count=>1, :running_pipelines=>[:main], :non_running_pipelines=>[]}
[2019-04-17T14:32:14,329][INFO ][logstash.agent           ] Successfully started Logstash API endpoint {:port=>9600}
[2019-04-17T14:32:15,049][INFO ][logstash.outputs.file    ] Opening file {:path=>"/tmp/test-2019.04.17.log"}
i am devin
[2019-04-17T14:32:43,445][INFO ][logstash.outputs.file    ] Closing file /tmp/test-2019.04.17.log
go go go
[2019-04-17T14:32:55,547][INFO ][logstash.outputs.file    ] Opening file {:path=>"/tmp/test-2019.04.17.log"}
[2019-04-17T14:33:13,533][INFO ][logstash.outputs.file    ] Closing file /tmp/test-2019.04.17.log
^C[2019-04-17T14:34:16,221][WARN ][logstash.runner          ] SIGINT received. Shutting down.
[2019-04-17T14:34:16,570][INFO ][logstash.javapipeline    ] Pipeline terminated {"pipeline.id"=>"main"}
[2019-04-17T14:34:17,392][INFO ][logstash.runner          ] Logstash shut down.

Check the output file:

[root@root tmp]# pwd
/tmp
[root@root tmp]# cat test-2019.04.17.log 
{"message":"hellp\bo\b\b\b","@version":"1","host":"data-node1","@timestamp":"2019-04-17T06:32:13.750Z"}
{"message":"hello","@version":"1","host":"data-node1","@timestamp":"2019-04-17T06:32:13.829Z"}
{"message":"i am devin","@version":"1","host":"data-node1","@timestamp":"2019-04-17T06:32:24.396Z"}
{"message":"go go go","@version":"1","host":"data-node1","@timestamp":"2019-04-17T06:32:55.443Z"}


3.3.4. Enabling gzip-compressed output
[root@root bin]# /opt/logstash-7.0.0/bin/logstash -e 'input { stdin {} } output { file { path => "/tmp/test-%{+YYYY.MM.dd}.log.tar.gz" gzip => true } }'
Sending Logstash logs to /opt/logstash-7.0.0/logs which is now configured via log4j2.properties
[2019-04-17T15:15:19,077][WARN ][logstash.config.source.multilocal] Ignoring the 'pipelines.yml' file because modules or command line options are specified
[2019-04-17T15:15:19,100][INFO ][logstash.runner          ] Starting Logstash {"logstash.version"=>"7.0.0"}
[2019-04-17T15:15:26,712][INFO ][logstash.javapipeline    ] Starting pipeline {:pipeline_id=>"main", "pipeline.workers"=>4, "pipeline.batch.size"=>125, "pipeline.batch.delay"=>50, "pipeline.max_inflight"=>500, :thread=>"#<Thread:0xa342f5b run>"}
[2019-04-17T15:15:26,847][INFO ][logstash.javapipeline    ] Pipeline started {"pipeline.id"=>"main"}
The stdin plugin is now waiting for input:
[2019-04-17T15:15:26,998][INFO ][logstash.agent           ] Pipelines running {:count=>1, :running_pipelines=>[:main], :non_running_pipelines=>[]}
[2019-04-17T15:15:27,597][INFO ][logstash.agent           ] Successfully started Logstash API endpoint {:port=>9600}
hello  it's mo^He
[2019-04-17T15:15:41,588][INFO ][logstash.outputs.file    ] Opening file {:path=>"/tmp/test-2019.04.17.log.tar.gz"}
[2019-04-17T15:15:56,786][INFO ][logstash.outputs.file    ] Closing file /tmp/test-2019.04.17.log.tar.gz
^C[2019-04-17T15:16:19,982][WARN ][logstash.runner          ] SIGINT received. Shutting down.
[2019-04-17T15:16:20,343][INFO ][logstash.javapipeline    ] Pipeline terminated {"pipeline.id"=>"main"}
[2019-04-17T15:16:20,748][INFO ][logstash.runner          ] Logstash shut down.
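
Since the file output writes plain gzip data (the .tar.gz suffix notwithstanding), the result can be inspected with zcat, for example:

[root@root bin]# zcat /tmp/test-2019.04.17.log.tar.gz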


3.3.5. Testing output to elasticsearch

[root@root ~]# /opt/logstash-7.0.0/bin/logstash -e 'input { stdin {} } output { elasticsearch { hosts => ["203.48.12.202:9200","203.48.12.66:9200"] index => "logstash-test-%{+YYYY.MM.dd}"}}'
Sending Logstash logs to /opt/logstash-7.0.0/logs which is now configured via log4j2.properties
[2019-04-18T08:35:16,559][WARN ][logstash.config.source.multilocal] Ignoring the 'pipelines.yml' file because modules or command line options are specified
[2019-04-18T08:35:16,590][INFO ][logstash.runner          ] Starting Logstash {"logstash.version"=>"7.0.0"}
[2019-04-18T08:35:24,451][INFO ][logstash.outputs.elasticsearch] Elasticsearch pool URLs updated {:changes=>{:removed=>[], :added=>[http://203.48.12.202:9200/, http://203.48.12.66:9200/]}}
[2019-04-18T08:35:24,704][WARN ][logstash.outputs.elasticsearch] Restored connection to ES instance {:url=>"http://203.48.12.202:9200/"}
[2019-04-18T08:35:24,770][INFO ][logstash.outputs.elasticsearch] ES Output version determined {:es_version=>7}
[2019-04-18T08:35:24,774][WARN ][logstash.outputs.elasticsearch] Detected a 6.x and above cluster: the `type` event field won't be used to determine the document _type {:es_version=>7}
[2019-04-18T08:35:24,792][WARN ][logstash.outputs.elasticsearch] Restored connection to ES instance {:url=>"http://203.48.12.66:9200/"}
[2019-04-18T08:35:24,837][INFO ][logstash.outputs.elasticsearch] New Elasticsearch output {:class=>"LogStash::Outputs::ElasticSearch", :hosts=>["//203.48.12.202:9200", "//203.48.12.66:9200"]}
[2019-04-18T08:35:24,865][INFO ][logstash.javapipeline    ] Starting pipeline {:pipeline_id=>"main", "pipeline.workers"=>4, "pipeline.batch.size"=>125, "pipeline.batch.delay"=>50, "pipeline.max_inflight"=>500, :thread=>"#<Thread:0x2da92183 run>"}
[2019-04-18T08:35:24,875][INFO ][logstash.outputs.elasticsearch] Using default mapping template
[2019-04-18T08:35:25,105][INFO ][logstash.javapipeline    ] Pipeline started {"pipeline.id"=>"main"}
[2019-04-18T08:35:25,130][INFO ][logstash.outputs.elasticsearch] Attempting to install template {:manage_template=>{"index_patterns"=>"logstash-*", "version"=>60001, "settings"=>{"index.refresh_interval"=>"5s", "number_of_shards"=>1, "index.lifecycle.name"=>"logstash-policy", "index.lifecycle.rollover_alias"=>"logstash"}, "mappings"=>{"dynamic_templates"=>[{"message_field"=>{"path_match"=>"message", "match_mapping_type"=>"string", "mapping"=>{"type"=>"text", "norms"=>false}}}, {"string_fields"=>{"match"=>"*", "match_mapping_type"=>"string", "mapping"=>{"type"=>"text", "norms"=>false, "fields"=>{"keyword"=>{"type"=>"keyword", "ignore_above"=>256}}}}}], "properties"=>{"@timestamp"=>{"type"=>"date"}, "@version"=>{"type"=>"keyword"}, "geoip"=>{"dynamic"=>true, "properties"=>{"ip"=>{"type"=>"ip"}, "location"=>{"type"=>"geo_point"}, "latitude"=>{"type"=>"half_float"}, "longitude"=>{"type"=>"half_float"}}}}}}}
The stdin plugin is now waiting for input:
[2019-04-18T08:35:25,376][INFO ][logstash.agent           ] Pipelines running {:count=>1, :running_pipelines=>[:main], :non_running_pipelines=>[]}
[2019-04-18T08:35:25,854][INFO ][logstash.agent           ] Successfully started Logstash API endpoint {:port=>9600}
hello  it's me
hi

Check the data directory on the es node:

[root@root ~]# ll /opt/elasticsearch-7.0.0/data/nodes/0/indices/
total 20
drwxrwxr-x 4 elasticsearch elasticsearch 4096 Apr 16 17:20 bNL_nC_pQrSRDqifStgu_w
drwxrwxr-x 4 elasticsearch elasticsearch 4096 Apr 16 17:20 ejWHSFgeS02J-CRgboF49w
drwxrwxr-x 4 elasticsearch elasticsearch 4096 Apr 17 11:44 En_-XArvT_-TdKtN18krrQ
drwxrwxr-x 4 elasticsearch elasticsearch 4096 Apr 17 17:06 eOQKT_d2SzyEWeYdnggO5Q
drwxrwxr-x 4 elasticsearch elasticsearch 4096 Apr 18 08:35 IziuWw1BSB2jd0wbxewDGQ


3.3.6. Using a config file to collect syslog

[root@root bin]# vim /opt/logstash-7.0.0/config/logstash.conf
input {
  syslog {
    type => "system-syslog"
    port => 10514
  }
}

output {
  stdout {
    codec => rubydebug
  }
}

Check the config file for errors:

[root@root bin]# /opt/logstash-7.0.0/bin/logstash --path.settings /opt/logstash-7.0.0/ -f /opt/logstash-7.0.0/config/logstash.conf --config.test_and_exit
WARNING: Could not find logstash.yml which is typically located in $LS_HOME/config or /etc/logstash. You can specify the path using --path.settings. Continuing using the defaults
Could not find log4j2 configuration at path /opt/logstash-7.0.0/log4j2.properties. Using default config which logs errors to the console
[WARN ] 2019-04-18 10:08:50.906 [LogStash::Runner] multilocal - Ignoring the 'pipelines.yml' file because modules or command line options are specified
Configuration OK
[INFO ] 2019-04-18 10:08:59.513 [LogStash::Runner] runner - Using config.test_and_exit mode. Config Validation Result: OK. Exiting Logstash

Configure rsyslog to forward all logs to the Logstash listener (@@ forwards over TCP; a single @ would use UDP):

[root@root bin]# vim /etc/rsyslog.conf
#### RULES ####

*.* @@203.48.12.66:10514


[root@root ~]# systemctl restart rsyslog
[root@root ~]# systemctl status rsyslog


Start Logstash with the config file, then SSH into this machine from another host to generate some syslog events:

[root@root bin]# ./logstash --path.settings /opt/logstash-7.0.0/ -f /opt/logstash-7.0.0/config/logstash.conf
WARNING: Could not find logstash.yml which is typically located in $LS_HOME/config or /etc/logstash. You can specify the path using --path.settings. Continuing using the defaults
Could not find log4j2 configuration at path /opt/logstash-7.0.0/log4j2.properties. Using default config which logs errors to the console
[WARN ] 2019-04-18 09:59:45.477 [LogStash::Runner] multilocal - Ignoring the 'pipelines.yml' file because modules or command line options are specified
[INFO ] 2019-04-18 09:59:45.504 [LogStash::Runner] runner - Starting Logstash {"logstash.version"=>"7.0.0"}
[INFO ] 2019-04-18 09:59:55.582 [[main]-pipeline-manager] javapipeline - Starting pipeline {:pipeline_id=>"main", "pipeline.workers"=>4, "pipeline.batch.size"=>125, "pipeline.batch.delay"=>50, "pipeline.max_inflight"=>500, :thread=>"#<Thread:0x5a3d5ff7 run>"}
[INFO ] 2019-04-18 09:59:56.308 [[main]-pipeline-manager] javapipeline - Pipeline started {"pipeline.id"=>"main"}
[INFO ] 2019-04-18 09:59:56.440 [Ruby-0-Thread-11: :1] syslog - Starting syslog tcp listener {:address=>"0.0.0.0:10514"}
[INFO ] 2019-04-18 09:59:56.445 [Ruby-0-Thread-10: :1] syslog - Starting syslog udp listener {:address=>"0.0.0.0:10514"}
[INFO ] 2019-04-18 09:59:56.523 [Ruby-0-Thread-1: /opt/logstash-7.0.0/lib/bootstrap/environment.rb:6] agent - Pipelines running {:count=>1, :running_pipelines=>[:main], :non_running_pipelines=>[]}
[INFO ] 2019-04-18 09:59:57.314 [Api Webserver] agent - Successfully started Logstash API endpoint {:port=>9600}
[INFO ] 2019-04-18 10:00:01.271 [Ruby-0-Thread-17: /opt/logstash-7.0.0/vendor/bundle/jruby/2.5.0/gems/logstash-input-syslog-3.4.1/lib/logstash/inputs/syslog.rb:130] syslog - new connection {:client=>"203.48.12.66:44910"}
/opt/logstash-7.0.0/vendor/bundle/jruby/2.5.0/gems/awesome_print-1.7.0/lib/awesome_print/formatters/base_formatter.rb:31: warning: constant ::Fixnum is deprecated
{
        "@timestamp" => 2019-04-18T02:00:01.000Z,
         "logsource" => "root",
          "@version" => "1",
              "type" => "system-syslog",
    "facility_label" => "system",
          "severity" => 6,
           "program" => "systemd",
         "timestamp" => "Apr 18 10:00:01",
    "severity_label" => "Informational",
           "message" => "Started Session 3362 of user root.\n",
              "host" => "203.48.12.66",
          "priority" => 30,
          "facility" => 3
}
{
        "@timestamp" => 2019-04-18T02:00:01.000Z,
         "logsource" => "root",
          "@version" => "1",
              "type" => "system-syslog",
    "facility_label" => "system",
          "severity" => 6,
           "program" => "systemd",
         "timestamp" => "Apr 18 10:00:01",
    "severity_label" => "Informational",
           "message" => "Starting Session 3362 of user root.\n",
              "host" => "203.48.12.66",
          "priority" => 30,
          "facility" => 3
}
{
        "@timestamp" => 2019-04-18T02:00:01.000Z,
         "logsource" => "root",
          "@version" => "1",
              "type" => "system-syslog",
    "facility_label" => "clock",
          "severity" => 6,
           "program" => "CROND",
         "timestamp" => "Apr 18 10:00:01",
    "severity_label" => "Informational",
           "message" => "(root) CMD (/usr/lib64/sa/sa1 1 1)\n",
              "host" => "203.48.12.66",
               "pid" => "18152",
          "priority" => 78,
          "facility" => 9
}
{
        "@timestamp" => 2019-04-18T02:00:40.000Z,
         "logsource" => "root",
          "@version" => "1",
              "type" => "system-syslog",
    "facility_label" => "security/authorization",
          "severity" => 5,
           "program" => "sshd",
         "timestamp" => "Apr 18 10:00:40",
    "severity_label" => "Notice",
           "message" => "pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=203.48.27.105  user=elasticsearch\n",
              "host" => "203.48.12.66",
               "pid" => "18158",
          "priority" => 85,
          "facility" => 10
}
{
        "@timestamp" => 2019-04-18T02:00:42.000Z,
         "logsource" => "root",
          "@version" => "1",
              "type" => "system-syslog",
    "facility_label" => "security/authorization",
          "severity" => 6,
           "program" => "sshd",
         "timestamp" => "Apr 18 10:00:42",
    "severity_label" => "Informational",
           "message" => "Failed password for elasticsearch from 203.48.27.105 port 58020 ssh2\n",
              "host" => "203.48.12.66",
               "pid" => "18158",
          "priority" => 86,
          "facility" => 10
}
{
        "@timestamp" => 2019-04-18T02:00:48.000Z,
         "logsource" => "root",
          "@version" => "1",
              "type" => "system-syslog",
    "facility_label" => "security/authorization",
          "severity" => 6,
           "program" => "sshd",
         "timestamp" => "Apr 18 10:00:48",
    "severity_label" => "Informational",
           "message" => "Failed password for elasticsearch from 203.48.27.105 port 58020 ssh2\n",
              "host" => "203.48.12.66",
               "pid" => "18158",
          "priority" => 86,
          "facility" => 10
}
{
        "@timestamp" => 2019-04-18T02:00:54.000Z,
         "logsource" => "root",
          "@version" => "1",
              "type" => "system-syslog",
    "facility_label" => "security/authorization",
          "severity" => 6,
           "program" => "sshd",
         "timestamp" => "Apr 18 10:00:54",
    "severity_label" => "Informational",
           "message" => "Connection closed by 203.48.27.105 [preauth]\n",
              "host" => "203.48.12.66",
               "pid" => "18158",
          "priority" => 86,
          "facility" => 10
}
{
        "@timestamp" => 2019-04-18T02:00:54.000Z,
         "logsource" => "root",
          "@version" => "1",
              "type" => "system-syslog",
    "facility_label" => "security/authorization",
          "severity" => 5,
           "program" => "sshd",
         "timestamp" => "Apr 18 10:00:54",
    "severity_label" => "Notice",
           "message" => "PAM 1 more authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=203.48.27.105  user=elasticsearch\n",
              "host" => "203.48.12.66",
               "pid" => "18158",
          "priority" => 85,
          "facility" => 10
}
{
        "@timestamp" => 2019-04-18T02:01:01.000Z,
         "logsource" => "root",
          "@version" => "1",
              "type" => "system-syslog",
    "facility_label" => "clock",
          "severity" => 5,
         "timestamp" => "Apr 18 10:01:01",
    "severity_label" => "Notice",
           "message" => "run-parts(/etc/cron.hourly)[1817 starting 0anacron\n",
              "host" => "203.48.12.66",
          "priority" => 77,
          "facility" => 9
}
{
        "@timestamp" => 2019-04-18T02:01:01.000Z,
         "logsource" => "root",
          "@version" => "1",
              "type" => "system-syslog",
    "facility_label" => "system",
          "severity" => 6,
           "program" => "systemd",
         "timestamp" => "Apr 18 10:01:01",
    "severity_label" => "Informational",
           "message" => "Started Session 3363 of user root.\n",
              "host" => "203.48.12.66",
          "priority" => 30,
          "facility" => 3
}
{
        "@timestamp" => 2019-04-18T02:01:01.000Z,
         "logsource" => "root",
          "@version" => "1",
              "type" => "system-syslog",
    "facility_label" => "clock",
          "severity" => 5,
         "timestamp" => "Apr 18 10:01:01",
    "severity_label" => "Notice",
           "message" => "run-parts(/etc/cron.hourly)[1818 finished 0anacron\n",
              "host" => "203.48.12.66",
          "priority" => 77,
          "facility" => 9
}
{
        "@timestamp" => 2019-04-18T02:01:01.000Z,
         "logsource" => "root",
          "@version" => "1",
              "type" => "system-syslog",
    "facility_label" => "clock",
          "severity" => 6,
           "program" => "CROND",
         "timestamp" => "Apr 18 10:01:01",
    "severity_label" => "Informational",
           "message" => "(root) CMD (run-parts /etc/cron.hourly)\n",
              "host" => "203.48.12.66",
               "pid" => "18177",
          "priority" => 78,
          "facility" => 9
}
{
        "@timestamp" => 2019-04-18T02:01:01.000Z,
         "logsource" => "root",
          "@version" => "1",
              "type" => "system-syslog",
    "facility_label" => "clock",
          "severity" => 5,
         "timestamp" => "Apr 18 10:01:01",
    "severity_label" => "Notice",
           "message" => "run-parts(/etc/cron.hourly)[1819 finished 0yum-hourly.cron\n",
              "host" => "203.48.12.66",
          "priority" => 77,
          "facility" => 9
}
{
        "@timestamp" => 2019-04-18T02:01:01.000Z,
         "logsource" => "root",
          "@version" => "1",
              "type" => "system-syslog",
    "facility_label" => "system",
          "severity" => 6,
           "program" => "systemd",
         "timestamp" => "Apr 18 10:01:01",
    "severity_label" => "Informational",
           "message" => "Starting Session 3363 of user root.\n",
              "host" => "203.48.12.66",
          "priority" => 30,
          "facility" => 3
}
{
        "@timestamp" => 2019-04-18T02:01:01.000Z,
         "logsource" => "root",
          "@version" => "1",
              "type" => "system-syslog",
    "facility_label" => "clock",
          "severity" => 5,
         "timestamp" => "Apr 18 10:01:01",
    "severity_label" => "Notice",
           "message" => "run-parts(/etc/cron.hourly)[1817 starting 0yum-hourly.cron\n",
              "host" => "203.48.12.66",
          "priority" => 77,
          "facility" => 9
}
{
        "@timestamp" => 2019-04-18T02:01:05.000Z,
         "logsource" => "root",
          "@version" => "1",
              "type" => "system-syslog",
    "facility_label" => "security/authorization",
          "severity" => 6,
           "program" => "systemd-logind",
         "timestamp" => "Apr 18 10:01:05",
    "severity_label" => "Informational",
           "message" => "New session 3364 of user root.\n",
              "host" => "203.48.12.66",
          "priority" => 38,
          "facility" => 4
}
{
        "@timestamp" => 2019-04-18T02:01:05.000Z,
         "logsource" => "root",
          "@version" => "1",
              "type" => "system-syslog",
    "facility_label" => "security/authorization",
          "severity" => 6,
           "program" => "sshd",
         "timestamp" => "Apr 18 10:01:05",
    "severity_label" => "Informational",
           "message" => "Accepted password for root from 203.48.27.105 port 59056 ssh2\n",
              "host" => "203.48.12.66",
               "pid" => "18194",
          "priority" => 86,
          "facility" => 10
}
{
        "@timestamp" => 2019-04-18T02:01:05.000Z,
         "logsource" => "root",
          "@version" => "1",
              "type" => "system-syslog",
    "facility_label" => "security/authorization",
          "severity" => 6,
           "program" => "sshd",
         "timestamp" => "Apr 18 10:01:05",
    "severity_label" => "Informational",
           "message" => "pam_unix(sshd:session): session opened for user root by (uid=0)\n",
              "host" => "203.48.12.66",
               "pid" => "18194",
          "priority" => 86,
          "facility" => 10
}
{
        "@timestamp" => 2019-04-18T02:01:05.000Z,
         "logsource" => "root",
          "@version" => "1",
              "type" => "system-syslog",
    "facility_label" => "system",
          "severity" => 6,
           "program" => "systemd",
         "timestamp" => "Apr 18 10:01:05",
    "severity_label" => "Informational",
           "message" => "Starting Session 3364 of user root.\n",
              "host" => "203.48.12.66",
          "priority" => 30,
          "facility" => 3
}
{
        "@timestamp" => 2019-04-18T02:01:05.000Z,
         "logsource" => "root",
          "@version" => "1",
              "type" => "system-syslog",
    "facility_label" => "system",
          "severity" => 6,
           "program" => "systemd",
         "timestamp" => "Apr 18 10:01:05",
    "severity_label" => "Informational",
           "message" => "Started Session 3364 of user root.\n",
              "host" => "203.48.12.66",
          "priority" => 30,
          "facility" => 3
}

As shown above, the collected logs are printed to the terminal in JSON format, so the test succeeded.

That was only a test configuration. Next, change the config file so the collected logs are sent to the es servers rather than the current terminal:

[root@root bin]# vim /opt/logstash-7.0.0/config/logstash.conf
input {
  syslog {
    type => "system-syslog"
    port => 10514
  }
}

output {
  elasticsearch {
    hosts => ["http://203.48.12.202:9200","http://203.48.12.66:9200"]		##定义es服务器的ip
    index => "system-syslog-%{+YYYY.MM}"			##定义索引
    #user => "elastic"
    #password => "changeme"
  }
}


Check the config file for errors:

[root@root bin]# /opt/logstash-7.0.0/bin/logstash --path.settings /opt/logstash-7.0.0/ -f /opt/logstash-7.0.0/config/logstash.conf --config.test_and_exit
WARNING: Could not find logstash.yml which is typically located in $LS_HOME/config or /etc/logstash. You can specify the path using --path.settings. Continuing using the defaults
Could not find log4j2 configuration at path /opt/logstash-7.0.0/log4j2.properties. Using default config which logs errors to the console
[WARN ] 2019-04-18 10:08:50.906 [LogStash::Runner] multilocal - Ignoring the 'pipelines.yml' file because modules or command line options are specified
Configuration OK
[INFO ] 2019-04-18 10:08:59.513 [LogStash::Runner] runner - Using config.test_and_exit mode. Config Validation Result: OK. Exiting Logstash


[root@root bin]# /opt/logstash-7.0.0/bin/logstash --path.settings /opt/logstash-7.0.0/ -f /opt/logstash-7.0.0/config/logstash.conf
WARNING: Could not find logstash.yml which is typically located in $LS_HOME/config or /etc/logstash. You can specify the path using --path.settings. Continuing using the defaults
Could not find log4j2 configuration at path /opt/logstash-7.0.0/log4j2.properties. Using default config which logs errors to the console
[WARN ] 2019-04-18 10:11:14.039 [LogStash::Runner] multilocal - Ignoring the 'pipelines.yml' file because modules or command line options are specified
[INFO ] 2019-04-18 10:11:14.064 [LogStash::Runner] runner - Starting Logstash {"logstash.version"=>"7.0.0"}
[INFO ] 2019-04-18 10:11:24.466 [[main]-pipeline-manager] elasticsearch - Elasticsearch pool URLs updated {:changes=>{:removed=>[], :added=>[http://203.48.12.202:9200/, http://203.48.12.66:9200/]}}
[WARN ] 2019-04-18 10:11:24.814 [[main]-pipeline-manager] elasticsearch - Restored connection to ES instance {:url=>"http://203.48.12.202:9200/"}
[INFO ] 2019-04-18 10:11:25.103 [[main]-pipeline-manager] elasticsearch - ES Output version determined {:es_version=>7}
[WARN ] 2019-04-18 10:11:25.109 [[main]-pipeline-manager] elasticsearch - Detected a 6.x and above cluster: the `type` event field won't be used to determine the document _type {:es_version=>7}
[WARN ] 2019-04-18 10:11:25.123 [[main]-pipeline-manager] elasticsearch - Restored connection to ES instance {:url=>"http://203.48.12.66:9200/"}
[INFO ] 2019-04-18 10:11:25.222 [[main]-pipeline-manager] elasticsearch - New Elasticsearch output {:class=>"LogStash::Outputs::ElasticSearch", :hosts=>["http://203.48.12.202:9200", "http://203.48.12.66:9200"]}
[INFO ] 2019-04-18 10:11:25.226 [Ruby-0-Thread-5: :1] elasticsearch - Using default mapping template
[INFO ] 2019-04-18 10:11:25.291 [[main]-pipeline-manager] javapipeline - Starting pipeline {:pipeline_id=>"main", "pipeline.workers"=>4, "pipeline.batch.size"=>125, "pipeline.batch.delay"=>50, "pipeline.max_inflight"=>500, :thread=>"#<Thread:0x32647ea5 run>"}
[INFO ] 2019-04-18 10:11:25.661 [Ruby-0-Thread-5: :1] elasticsearch - Attempting to install template {:manage_template=>{"index_patterns"=>"logstash-*", "version"=>60001, "settings"=>{"index.refresh_interval"=>"5s", "number_of_shards"=>1, "index.lifecycle.name"=>"logstash-policy", "index.lifecycle.rollover_alias"=>"logstash"}, "mappings"=>{"dynamic_templates"=>[{"message_field"=>{"path_match"=>"message", "match_mapping_type"=>"string", "mapping"=>{"type"=>"text", "norms"=>false}}}, {"string_fields"=>{"match"=>"*", "match_mapping_type"=>"string", "mapping"=>{"type"=>"text", "norms"=>false, "fields"=>{"keyword"=>{"type"=>"keyword", "ignore_above"=>256}}}}}], "properties"=>{"@timestamp"=>{"type"=>"date"}, "@version"=>{"type"=>"keyword"}, "geoip"=>{"dynamic"=>true, "properties"=>{"ip"=>{"type"=>"ip"}, "location"=>{"type"=>"geo_point"}, "latitude"=>{"type"=>"half_float"}, "longitude"=>{"type"=>"half_float"}}}}}}}
[INFO ] 2019-04-18 10:11:26.209 [[main]-pipeline-manager] javapipeline - Pipeline started {"pipeline.id"=>"main"}
[INFO ] 2019-04-18 10:11:26.289 [Ruby-0-Thread-12: :1] syslog - Starting syslog udp listener {:address=>"0.0.0.0:10514"}
[INFO ] 2019-04-18 10:11:26.300 [Ruby-0-Thread-13: :1] syslog - Starting syslog tcp listener {:address=>"0.0.0.0:10514"}
[INFO ] 2019-04-18 10:11:26.390 [Ruby-0-Thread-1: /opt/logstash-7.0.0/lib/bootstrap/environment.rb:6] agent - Pipelines running {:count=>1, :running_pipelines=>[:main], :non_running_pipelines=>[]}
[INFO ] 2019-04-18 10:11:27.042 [Api Webserver] agent - Successfully started Logstash API endpoint {:port=>9600}
[INFO ] 2019-04-18 10:11:36.790 [Ruby-0-Thread-19: :1] syslog - new connection {:client=>"203.48.12.66:59024"}


Viewing the logs in Kibana
Once Logstash is set up, go back to the Kibana server and use the following command to list the index information:

[root@root ~]# curl '203.48.12.66:9200/_cat/indices?v'
health status index                    uuid                   pri rep docs.count docs.deleted store.size pri.store.size
green  open   logstash-test-2019.04.17 eOQKT_d2SzyEWeYdnggO5Q   1   1          1            0      8.7kb          4.3kb
green  open   logstash-test-2019.04.18 IziuWw1BSB2jd0wbxewDGQ   1   1          2            0       17kb          8.5kb
green  open   .kibana_1                bNL_nC_pQrSRDqifStgu_w   1   1          5            1     74.8kb         37.4kb
green  open   .kibana_task_manager     ejWHSFgeS02J-CRgboF49w   1   1          2            0     25.6kb         12.8kb
green  open   system-syslog-2019.04    2luEjCAQShSG7axn79EMOw   1   1         11            0      133kb         80.2kb
green  open   error-2019.04.17         En_-XArvT_-TdKtN18krrQ   1   1          1            0     12.3kb          6.1kb


The system-syslog index defined in the Logstash config file has been created, which proves the configuration works and that Logstash and elasticsearch are communicating normally.

Get the details of a specific index:

[root@root ~]# curl -XGET "203.48.12.66:9200/system-syslog-2019.04?pretty"
{
  "system-syslog-2019.04" : {
    "aliases" : { },
    "mappings" : {
      "properties" : {
        "@timestamp" : {
          "type" : "date"
        },
        "@version" : {
          "type" : "text",
          "fields" : {
            "keyword" : {
              "type" : "keyword",
              "ignore_above" : 256
            }
          }
        },
        "facility" : {
          "type" : "long"
        },
        "facility_label" : {
          "type" : "text",
          "fields" : {
            "keyword" : {
              "type" : "keyword",
              "ignore_above" : 256
            }
          }
        },
        "host" : {
          "type" : "text",
          "fields" : {
            "keyword" : {
              "type" : "keyword",
              "ignore_above" : 256
            }
          }
        },
        "logsource" : {
          "type" : "text",
          "fields" : {
            "keyword" : {
              "type" : "keyword",
              "ignore_above" : 256
            }
          }
        },
        "message" : {
          "type" : "text",
          "fields" : {
            "keyword" : {
              "type" : "keyword",
              "ignore_above" : 256
            }
          }
        },
        "pid" : {
          "type" : "text",
          "fields" : {
            "keyword" : {
              "type" : "keyword",
              "ignore_above" : 256
            }
          }
        },
        "priority" : {
          "type" : "long"
        },
        "program" : {
          "type" : "text",
          "fields" : {
            "keyword" : {
              "type" : "keyword",
              "ignore_above" : 256
            }
          }
        },
        "severity" : {
          "type" : "long"
        },
        "severity_label" : {
          "type" : "text",
          "fields" : {
            "keyword" : {
              "type" : "keyword",
              "ignore_above" : 256
            }
          }
        },
        "timestamp" : {
          "type" : "text",
          "fields" : {
            "keyword" : {
              "type" : "keyword",
              "ignore_above" : 256
            }
          }
        },
        "type" : {
          "type" : "text",
          "fields" : {
            "keyword" : {
              "type" : "keyword",
              "ignore_above" : 256
            }
          }
        }
      }
    },
    "settings" : {
      "index" : {
        "creation_date" : "1555553497309",
        "number_of_shards" : "1",
        "number_of_replicas" : "1",
        "uuid" : "2luEjCAQShSG7axn79EMOw",
        "version" : {
          "created" : "7000099"
        },
        "provided_name" : "system-syslog-2019.04"
      }
    }
  }
}
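
Documents in an index can also be queried directly via the _search API, e.g. to fetch one sample document:

[root@root ~]# curl '203.48.12.66:9200/system-syslog-2019.04/_search?pretty&size=1'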


To delete an index, use the following command:

[root@root ~]# curl -XDELETE "203.48.12.66:9200/system-syslog-2019.04"
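
Listing the indices again afterwards confirms that system-syslog-2019.04 is gone:

[root@root ~]# curl '203.48.12.66:9200/_cat/indices?v'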

Once communication works, Kibana can be configured: open the Kibana server at 203.48.12.66:5601 and create the index pattern in the UI.


Or use a wildcard to match multiple indices at once.



Then click Discover.

Select the matching index pattern.

Indices can also be deleted under Management.
3.3.7. Collecting nginx logs with Logstash

As with syslog, first write a config file:

[root@root ~]# vim /opt/logstash-7.0.0/config/nginx.conf 

input {
  file {
    path => "/tmp/elk_access.log"
    start_position => "beginning"
    type => "nginx"
  }
}
filter {
    grok {
        match => { "message" => "%{IPORHOST:http_host} %{IPORHOST:clientip} - %{USERNAME:remote_user} \[%{HTTPDATE:timestamp}\] \"(?:%{WORD:http_verb} %{NOTSPACE:http_request}(?: HTTP/%{NUMBER:http_version})?|%{DATA:raw_http_request})\" %{NUMBER:response} (?:%{NUMBER:bytes_read}|-) %{QS:referrer} %{QS:agent} %{QS:xforwardedfor} %{NUMBER:request_time:float}"}
    }
    geoip {
        source => "clientip"
    }
}
output {
    stdout { codec => rubydebug }
    elasticsearch {
        hosts => ["203.48.12.202:9200","203.48.12.66:9200"]
        index => "nginx-test-%{+YYYY.MM.dd}"
    }
}

Check the config file for errors:

[root@root ~]# /opt/logstash-7.0.0/bin/logstash --path.settings /opt/logstash-7.0.0/ -f /opt/logstash-7.0.0/config/nginx.conf --config.test_and_exit
WARNING: Could not find logstash.yml which is typically located in $LS_HOME/config or /etc/logstash. You can specify the path using --path.settings. Continuing using the defaults
Could not find log4j2 configuration at path /opt/logstash-7.0.0/log4j2.properties. Using default config which logs errors to the console
[WARN ] 2019-04-18 16:43:59.168 [LogStash::Runner] multilocal - Ignoring the 'pipelines.yml' file because modules or command line options are specified
Configuration OK
[INFO ] 2019-04-18 16:44:09.160 [LogStash::Runner] runner - Using config.test_and_exit mode. Config Validation Result: OK. Exiting Logstash


Once that reports OK, go to the directory holding the nginx virtual host configuration and create a new virtual host:

[root@root conf]# vim /opt/nginx/conf/nginx.conf
    server {
        listen 80;
        server_name elk.test.com;

        location / {
            proxy_pass      http://203.48.12.66:5601;
            proxy_set_header Host   $host;
            proxy_set_header X-Real-IP      $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        }

        access_log  /tmp/elk_access.log main2;
    }


Next edit nginx's main config file to define the log format; add the following below log_format main:

[root@root conf]# vim /opt/nginx/conf/nginx.conf
    log_format main2 '$http_host $remote_addr - $remote_user [$time_local] "$request" '
                     '$status $body_bytes_sent "$http_referer" '
                     '"$http_user_agent" "$upstream_addr" $request_time';

When done, test the nginx config and reload it:
[root@root conf]# /usr/bin/nginx -t
nginx: the configuration file /opt/nginx/conf/nginx.conf syntax is ok
nginx: configuration file /opt/nginx/conf/nginx.conf test is successful
[root@root conf]# /usr/bin/nginx -s reload
On Windows, add the domain to the hosts file (C:\Windows\System32\drivers\etc\hosts):

203.48.12.66  elk.test.com

Access the site in a browser via the domain name.
The access log file is then generated:

[root@root ~]# ls /tmp/elk_access.log 
/tmp/elk_access.log
[root@root ~]# wc -l !$
wc -l /tmp/elk_access.log
64 /tmp/elk_access.log


Restart Logstash with the nginx config to generate the log index:

[root@root ~]# /opt/logstash-7.0.0/bin/logstash --path.settings /opt/logstash-7.0.0/ -f /opt/logstash-7.0.0/config/nginx.conf

After the restart, check on the es server whether an index starting with nginx-test has appeared:

[root@root bin]# curl '203.48.12.66:9200/_cat/indices?v'
health status index                    uuid                   pri rep docs.count docs.deleted store.size pri.store.size
green  open   logstash-test-2019.04.17 eOQKT_d2SzyEWeYdnggO5Q   1   1          1            0      8.7kb          4.3kb
green  open   logstash-test-2019.04.18 IziuWw1BSB2jd0wbxewDGQ   1   1          2            0       17kb          8.5kb
green  open   .kibana_1                bNL_nC_pQrSRDqifStgu_w   1   1          8            2    136.4kb         68.7kb
green  open   .kibana_task_manager     ejWHSFgeS02J-CRgboF49w   1   1          2            0     25.6kb         12.8kb
green  open   system-syslog-2019.04    2luEjCAQShSG7axn79EMOw   1   1        123            0    206.2kb         79.2kb
green  open   error-2019.04.17         En_-XArvT_-TdKtN18krrQ   1   1          1            0     12.3kb          6.1kb
green  open   nginx-test-2019.04.18    xeVZPV9yQQKVWzoipverjQ   1   1         65            0     44.9kb         22.4kb

The index has been created, so it can now be configured in Kibana.

View the nginx access data under Discover.

4. beats

Collecting logs with Beats
Beats is a newer family of log collection tools in the ELK ecosystem. It is more lightweight than Logstash, which consumes comparatively more resources, and is officially recommended; the Beats are also extensible and support custom builds.

Install Filebeat on 203.48.12.66; Filebeat is the Beats tool for collecting log files:

[root@root opt]# vim /opt/filebeat-7.0.0-linux-x86_64/filebeat.yml
#=========================== Filebeat inputs =============================

filebeat.inputs:

# Each - is an input. Most options can be set at the input level, so
# you can use different inputs for various configurations.
# Below are the input specific configurations.

- type: log

  # Change to true to enable this input configuration.
  # enabled: false		#comment this out for now

  # Paths that should be crawled and fetched. Glob based paths.
  paths:
    - /var/log/*.log

#-------------------------- Elasticsearch output ------------------------------
#output.elasticsearch:
  # Array of hosts to connect to.		#comment these out for now
  #hosts: ["localhost:9200"]		#comment these out for now
output.console:
  enabled: true

After configuring, run the command below and check whether log data is printed to the terminal; if it is, Filebeat is working correctly.
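
The command in question is presumably the same Filebeat invocation used later in this section, run in the foreground against the edited config (a sketch):

[root@root opt]# /opt/filebeat-7.0.0-linux-x86_64/filebeat -c /opt/filebeat-7.0.0-linux-x86_64/filebeat.yml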

The above merely tests that Filebeat collects logs correctly. Next, modify the config file again so that Filebeat ships to Elasticsearch and can be run as a service:

[root@root opt]# vim /opt/filebeat-7.0.0-linux-x86_64/filebeat.yml
#=========================== Filebeat inputs =============================

filebeat.inputs:

# Each - is an input. Most options can be set at the input level, so
# you can use different inputs for various configurations.
# Below are the input specific configurations.

- type: log

  # Change to true to enable this input configuration.
  # enabled: false

  # Paths that should be crawled and fetched. Glob based paths.
  paths:
    - /var/log/*.log

#-------------------------- Elasticsearch output ------------------------------
output.elasticsearch:
  # Array of hosts to connect to.
  hosts: ["203.48.12.202:9200","203.48.12.66:9200"]
#output.console:
  # enabled: true
  # Optional protocol and basic auth credentials.
  #protocol: "https"
  #username: "elastic"
  #password: "changeme"

Then start the service:

[root@root opt]# /opt/filebeat-7.0.0-linux-x86_64/filebeat -c /opt/filebeat-7.0.0-linux-x86_64/filebeat.yml
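
To keep it running after the shell session ends, it can be started in the background instead, e.g.:

[root@root opt]# cd /opt/filebeat-7.0.0-linux-x86_64 && nohup ./filebeat -c filebeat.yml &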

Once it starts successfully, check the indices on the es server. A filebeat index appears, which shows that Filebeat and es are communicating normally:

[root@root ~]# curl "203.48.12.66:9200/_cat/indices?v"
health status index                            uuid                   pri rep docs.count docs.deleted store.size pri.store.size
green  open   logstash-test-2019.04.17         eOQKT_d2SzyEWeYdnggO5Q   1   1          1            0      8.7kb          4.3kb
green  open   logstash-test-2019.04.18         IziuWw1BSB2jd0wbxewDGQ   1   1          2            0       17kb          8.5kb
green  open   .kibana_1                        bNL_nC_pQrSRDqifStgu_w   1   1          9            2    135.5kb         68.3kb
green  open   .kibana_task_manager             ejWHSFgeS02J-CRgboF49w   1   1          2            0     25.6kb         12.8kb
green  open   filebeat-7.0.0-2019.04.18-000001 ozivraIPQ8qq-J9hBXTn6Q   1   1      26605            0        7mb          3.6mb
green  open   system-syslog-2019.04            2luEjCAQShSG7axn79EMOw   1   1        123            0    206.2kb         79.2kb
green  open   error-2019.04.17                 En_-XArvT_-TdKtN18krrQ   1   1          1            0     12.3kb          6.1kb
green  open   nginx-test-2019.04.18            xeVZPV9yQQKVWzoipverjQ   1   1         65            0     45.1kb         22.5kb

Then view it in Kibana.

That completes the Filebeat setup, which is comparatively simpler than Logstash.
