ELK Log Analysis System Explained: Theory and Hands-On Lab

1 Introduction to the ELK log analysis system

1.1 Log servers

●Improved security
●Centralized log storage
●Drawback: analyzing the logs is difficult

1.2 The ELK log analysis system

●Elasticsearch
●Logstash
●Kibana

1.3 Log processing steps

  1. Centralize log management
  2. Normalize the logs (Logstash) and ship them to Elasticsearch
  3. Index and store the normalized data (Elasticsearch)
  4. Present the data in a front end (Kibana)
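The steps above can be sketched in miniature. The following Python snippet is only an illustration of the pipeline idea, not how Logstash or Elasticsearch actually work internally: a raw Apache-style log line is parsed into a structured event (step 2), then stored under a date-stamped index name in a toy in-memory store (step 3). The regex and field names are simplified assumptions.

```python
import re

# Simplified, grok-like pattern for an Apache common-log line (illustrative only).
LOG_PATTERN = re.compile(
    r'(?P<client>\S+) \S+ \S+ \[(?P<time>[^\]]+)\] '
    r'"(?P<request>[^"]*)" (?P<status>\d{3}) (?P<bytes>\S+)'
)

def format_event(raw_line):
    """Step 2: turn a raw log line into a structured event (Logstash's job)."""
    match = LOG_PATTERN.match(raw_line)
    if match is None:
        return None
    event = match.groupdict()
    event["status"] = int(event["status"])
    return event

def index_event(store, index_name, event):
    """Step 3: store the structured event under an index (Elasticsearch's job)."""
    store.setdefault(index_name, []).append(event)

store = {}
line = '20.0.0.1 - - [29/Oct/2020:15:16:00 +0800] "GET / HTTP/1.1" 200 4897'
event = format_event(line)
index_event(store, "apache_access-2020.10.29", event)
```

Step 4 (Kibana) would then query such indices and render the results as charts.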

2 Introduction to Elasticsearch

2.1 Elasticsearch overview

● Provides a distributed, multi-tenant-capable full-text search engine

2.2 Elasticsearch core concepts

●Near real-time
●Cluster
●Node
●Index: index (database) ---- type (table) ---- document (record)
●Shards and replicas
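Sharding and replication can be sketched as follows. This is a hypothetical illustration of the idea only (the real Elasticsearch routing formula differs): a document ID is hashed to choose one of the primary shards, and each primary additionally gets `replicas` copies.

```python
import zlib

def route_to_shard(doc_id, num_primary_shards):
    # Stable hash of the document ID modulo the shard count
    # (illustrative; not the actual Elasticsearch routing formula).
    return zlib.crc32(doc_id.encode()) % num_primary_shards

def total_shard_copies(num_primary_shards, replicas):
    # Every primary shard gets `replicas` additional copies for redundancy.
    return num_primary_shards * (1 + replicas)

# Example: 5 primary shards with 1 replica -> 10 shard copies in total.
copies = total_shard_copies(5, 1)
shard = route_to_shard("doc-1", 5)
```

Because the shard is derived from the document ID, the same document always routes to the same primary shard, which is what makes lookups by ID cheap.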

3 Introduction to Logstash

3.1 Logstash overview

●A powerful data-processing tool
●Handles data transport, format processing, and formatted output
●Covers data input, data transformation (filtering, rewriting, etc.), and data output

3.2 Main Logstash components

●Shipper
●Indexer
●Broker
●Search and Storage
●Web Interface

4 Introduction to Kibana

4.1 Kibana overview

●An open-source analytics and visualization platform for Elasticsearch
●Searches and views data stored in Elasticsearch indices
●Performs advanced data analysis and presents it through a variety of charts

4.2 Main Kibana features

●Seamless integration with Elasticsearch
●Data consolidation and complex data analysis
●Benefits more team members
●Flexible interfaces make sharing easier
●Simple configuration; visualizes multiple data sources
●Simple data export

5 Deploying the ELK log analysis system

5.1 Lab environment

Role    | OS         | Hostname / IP address | Main software
--------|------------|-----------------------|------------------
Server  | CentOS 7.4 | apache / 20.0.0.101   | Logstash, Apache
Server  | CentOS 7.4 | node1 / 20.0.0.102    | Elasticsearch
Server  | CentOS 7.4 | node2 / 20.0.0.103    | Elasticsearch
Server  | CentOS 7.4 | kibana / 20.0.0.104   | Kibana

5.2 Procedure

5.2.1 Environment setup

## Disable the firewall and SELinux (all servers) ##
[root@node1 ~]# systemctl stop firewalld
[root@node1 ~]# setenforce 0

## Set the hostnames ##
[root@localhost ~]# hostnamectl set-hostname node1
[root@localhost ~]# hostnamectl set-hostname node2
[root@localhost ~]# hostnamectl set-hostname kibana
[root@localhost ~]# hostnamectl set-hostname apache

## Configure name resolution for Elasticsearch (node1 and node2) ##
[root@node1 ~]# vi /etc/hosts
20.0.0.102      node1
20.0.0.103      node2

[root@node2 ~]# vi /etc/hosts
20.0.0.102      node1
20.0.0.103      node2

## Check the Java version ##
[root@node1 ~]# java -version
openjdk version "1.8.0_131"
OpenJDK Runtime Environment (build 1.8.0_131-b12)
OpenJDK 64-Bit Server VM (build 25.131-b12, mixed mode)

[root@node2 ~]# java -version
openjdk version "1.8.0_131"
OpenJDK Runtime Environment (build 1.8.0_131-b12)
OpenJDK 64-Bit Server VM (build 25.131-b12, mixed mode)

5.2.2 Install Elasticsearch (node1 and node2)

1. Install the Elasticsearch package
[root@node1 opt]# ll                         //package uploaded in advance
total 32616
-rw-r--r--  1 root root 33396354 Aug 11  2017 elasticsearch-5.5.0.rpm
[root@node1 opt]# rpm -ivh elasticsearch-5.5.0.rpm 

2. Register the service with systemd
[root@node1 opt]# systemctl daemon-reload 
[root@node1 opt]# systemctl enable elasticsearch.service 

3. Edit the main Elasticsearch configuration file
[root@node1 opt]# vim /etc/elasticsearch/elasticsearch.yml
 17 cluster.name: my-elk-cluster 
 23 node.name: node-1
 33 path.data: /data/elk_data
 37 path.logs: /var/log/elasticsearch/
 43 bootstrap.memory_lock: false
 55 network.host: 0.0.0.0
 59 http.port: 9200
 68 discovery.zen.ping.unicast.hosts: ["node1", "node2"]

4. Create the data directory and set its ownership
[root@node1 opt]# mkdir -p /data/elk_data
[root@node1 opt]# chown elasticsearch:elasticsearch /data/elk_data/

5. Start Elasticsearch and verify that it is running
[root@node1 opt]# systemctl start elasticsearch.service
[root@node1 opt]# netstat -anpt | grep 9200
tcp6       0      0 :::9200                 :::*                    LISTEN      2514/java  
6. View the node information: in a browser on the host machine, open http://20.0.0.102:9200 and http://20.0.0.103:9200; the node information is displayed


7. In a browser on the host machine, open http://20.0.0.102:9200/_cluster/health?pretty     //check the cluster health

8. In a browser on the host machine, open http://20.0.0.102:9200/_cluster/state?pretty     //check the cluster state

9. Compile and install the Node.js dependency packages
[root@node1 opt]# yum -y install gcc gcc-c++ make
[root@node1 opt]# tar xzvf node-v8.2.1.tar.gz                //source tarball uploaded in advance
[root@node1 opt]# cd node-v8.2.1/
[root@node1 node-v8.2.1]# ./configure
[root@node1 node-v8.2.1]# make -j4             //this takes quite a while
[root@node1 node-v8.2.1]# make install

10. Install PhantomJS                    //front-end framework
[root@node1 node-v8.2.1]# cd /opt/                          
[root@node1 opt]# tar xjvf phantomjs-2.1.1-linux-x86_64.tar.bz2                     //source tarball uploaded in advance
[root@node1 opt]# cd phantomjs-2.1.1-linux-x86_64/bin/
[root@node1 bin]# cp phantomjs /usr/local/bin/


11. Install the Elasticsearch-head plugin                   //data visualization tool
[root@node1 bin]# cd /opt/
[root@node1 opt]# tar xzvf elasticsearch-head.tar.gz                                //package uploaded in advance
[root@node1 opt]# cd elasticsearch-head/
[root@node1 elasticsearch-head]# npm install

12. Edit the main configuration file
[root@node1 elasticsearch-head]# cd ~
[root@node1 ~]# vim /etc/elasticsearch/elasticsearch.yml                 //append the following two lines at the end
http.cors.enabled: true                           
http.cors.allow-origin: "*"
[root@node1 ~]# systemctl restart elasticsearch.service


13. Start Elasticsearch-head
[root@node1 ~]# cd /opt/elasticsearch-head/
[root@node1 elasticsearch-head]# npm run start &
[4] 86571
[root@node1 elasticsearch-head]# 
> elasticsearch-head@0.0.0 start /opt/elasticsearch-head
> grunt server

Running "connect:server" (connect) task
Waiting forever...
Started connect web server on http://localhost:9100

[root@node1 elasticsearch-head]# netstat -lnupt | grep 9100
tcp        0      0 0.0.0.0:9100            0.0.0.0:*               LISTEN      86581/grunt 
[root@node1 elasticsearch-head]# netstat -lnupt | grep 9200
tcp6       0      0 :::9200                 :::*                    LISTEN
14. In a browser on the host machine, open http://20.0.0.102:9100/     //the cluster indicator is green when the cluster is healthy
    In the Elasticsearch field, enter http://20.0.0.102:9200
15. On node1, create an index named index-demo with type test
[root@node1 ~]# curl -XPUT 'localhost:9200/index-demo/test/1?pretty&pretty' -H 'Content-Type: application/json' -d '{"user":"zhangsan","mesg":"hello world"}'
{
  "_index" : "index-demo",
  "_type" : "test",
  "_id" : "1",
  "_version" : 1,
  "result" : "created",
  "_shards" : {
    "total" : 2,
    "successful" : 2,
    "failed" : 0
  },
  "created" : true
}


5.2.3 Install Logstash (20.0.0.101)

1. Install the Apache service
[root@apache ~]# yum -y install httpd
[root@apache ~]# systemctl start httpd.service 
[root@apache ~]# cd /var/log/httpd/
[root@apache httpd]# ll
total 4
-rw-r--r-- 1 root root   0 Oct 29 15:16 access_log
-rw-r--r-- 1 root root 797 Oct 29 15:16 error_log


2. Install the Java environment
[root@apache ~]# java -version                              //if not installed, run: yum -y install java
openjdk version "1.8.0_131"
OpenJDK Runtime Environment (build 1.8.0_131-b12)
OpenJDK 64-Bit Server VM (build 25.131-b12, mixed mode)

3. Install Logstash
[root@apache ~]# cd /opt/
[root@apache opt]# rpm -ivh logstash-5.5.1.rpm 
[root@apache opt]# systemctl start logstash.service 
[root@apache opt]# systemctl enable logstash.service 
Created symlink from /etc/systemd/system/multi-user.target.wants/logstash.service to /etc/systemd/system/logstash.service.
[root@apache opt]# ln -s /usr/share/logstash/bin/logstash /usr/local/bin/

4. Test with standard input and standard output
[root@apache opt]# logstash -e 'input {  stdin{} } output { stdout{} }'
ERROR StatusLogger No log4j2 configuration file found. Using default configuration: logging only errors to the console.
WARNING: Could not find logstash.yml which is typically located in $LS_HOME/config or /etc/logstash. You can specify the path using --path.settings. Continuing using the defaults
Could not find log4j2 configuration at path //usr/share/logstash/config/log4j2.properties. Using default config which logs to console
15:23:36.322 [main] INFO  logstash.setting.writabledirectory - Creating directory {:setting=>"path.queue", :path=>"/usr/share/logstash/data/queue"}
15:23:36.330 [main] INFO  logstash.setting.writabledirectory - Creating directory {:setting=>"path.dead_letter_queue", :path=>"/usr/share/logstash/data/dead_letter_queue"}
15:23:36.360 [LogStash::Runner] INFO  logstash.agent - No persistent UUID file found. Generating new UUID {:uuid=>"6413fa3b-b0f1-450d-a681-4ddb8311763b", :path=>"/usr/share/logstash/data/uuid"}
15:23:36.522 [[main]-pipeline-manager] INFO  logstash.pipeline - Starting pipeline {"id"=>"main", "pipeline.workers"=>4, "pipeline.batch.size"=>125, "pipeline.batch.delay"=>5, "pipeline.max_inflight"=>500}
15:23:36.610 [[main]-pipeline-manager] INFO  logstash.pipeline - Pipeline main started
The stdin plugin is now waiting for input:
15:23:36.649 [Api Webserver] INFO  logstash.agent - Successfully started Logstash API endpoint {:port=>9600}
www.baidu.com                                  //type www.baidu.com
2020-10-29T07:23:48.093Z apache www.baidu.com                       

[root@apache opt]# netstat -anpt | grep 9600                                  //be sure to terminate the process: end it as soon as the input test is done, instead of sending it to the background
tcp6       0      0 127.0.0.1:9600          :::*                    LISTEN      4763/java

5. Use the rubydebug codec for detailed output (a codec is an encoder/decoder)
[root@apache opt]# logstash -e 'input {  stdin{} } output { stdout{ codec=>rubydebug } }'
ERROR StatusLogger No log4j2 configuration file found. Using default configuration: logging only errors to the console.
WARNING: Could not find logstash.yml which is typically located in $LS_HOME/config or /etc/logstash. You can specify the path using --path.settings. Continuing using the defaults
Could not find log4j2 configuration at path //usr/share/logstash/config/log4j2.properties. Using default config which logs to console
15:32:35.837 [[main]-pipeline-manager] INFO  logstash.pipeline - Starting pipeline {"id"=>"main", "pipeline.workers"=>4, "pipeline.batch.size"=>125, "pipeline.batch.delay"=>5, "pipeline.max_inflight"=>500}
15:32:35.913 [[main]-pipeline-manager] INFO  logstash.pipeline - Pipeline main started
The stdin plugin is now waiting for input:
15:32:35.964 [Api Webserver] INFO  logstash.agent - Successfully started Logstash API endpoint {:port=>9600}
www.baidu.com
{
    "@timestamp" => 2020-10-29T07:34:08.728Z,
      "@version" => "1",
          "host" => "apache",
       "message" => "www.baidu.com"
}
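The rubydebug output above is simply a structured event with four fields. As a minimal sketch (the field names come from the output above; the construction itself is illustrative, not Logstash's real code path), such an event could be assembled like this:

```python
from datetime import datetime, timezone
import socket

def make_event(message):
    # Mirror the four fields shown in the rubydebug output above.
    return {
        "@timestamp": datetime.now(timezone.utc).isoformat(),
        "@version": "1",
        "host": socket.gethostname(),
        "message": message,
    }

event = make_event("www.baidu.com")
```

Every downstream output plugin (stdout, elasticsearch, ...) receives events in this structured form.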

6. Use Logstash to write events into Elasticsearch
[root@apache opt]# logstash -e 'input {  stdin{} } output { elasticsearch { hosts=>["20.0.0.102:9200"] } }'
ERROR StatusLogger No log4j2 configuration file found. Using default configuration: logging only errors to the console.
WARNING: Could not find logstash.yml which is typically located in $LS_HOME/config or /etc/logstash. You can specify the path using --path.settings. Continuing using the defaults
Could not find log4j2 configuration at path //usr/share/logstash/config/log4j2.properties. Using default config which logs to console
15:37:02.743 [[main]-pipeline-manager] INFO  logstash.outputs.elasticsearch - Elasticsearch pool URLs updated {:changes=>{:removed=>[], :added=>[http://20.0.0.102:9200/]}}
15:37:02.749 [[main]-pipeline-manager] INFO  logstash.outputs.elasticsearch - Running health check to see if an Elasticsearch connection is working {:healthcheck_url=>http://20.0.0.102:9200/, :path=>"/"}
15:37:02.819 [[main]-pipeline-manager] WARN  logstash.outputs.elasticsearch - Restored connection to ES instance {:url=>#<Java::JavaNet::URI:0x1f7fc717>}
15:37:02.822 [[main]-pipeline-manager] INFO  logstash.outputs.elasticsearch - Using mapping template from {:path=>nil}
15:37:02.981 [[main]-pipeline-manager] INFO  logstash.outputs.elasticsearch - Attempting to install template {:manage_template=>{"template"=>"logstash-*", "version"=>50001, "settings"=>{"index.refresh_interval"=>"5s"}, "mappings"=>{"_default_"=>{"_all"=>{"enabled"=>true, "norms"=>false}, "dynamic_templates"=>[{"message_field"=>{"path_match"=>"message", "match_mapping_type"=>"string", "mapping"=>{"type"=>"text", "norms"=>false}}}, {"string_fields"=>{"match"=>"*", "match_mapping_type"=>"string", "mapping"=>{"type"=>"text", "norms"=>false, "fields"=>{"keyword"=>{"type"=>"keyword", "ignore_above"=>256}}}}}], "properties"=>{"@timestamp"=>{"type"=>"date", "include_in_all"=>false}, "@version"=>{"type"=>"keyword", "include_in_all"=>false}, "geoip"=>{"dynamic"=>true, "properties"=>{"ip"=>{"type"=>"ip"}, "location"=>{"type"=>"geo_point"}, "latitude"=>{"type"=>"half_float"}, "longitude"=>{"type"=>"half_float"}}}}}}}}
15:37:02.989 [[main]-pipeline-manager] INFO  logstash.outputs.elasticsearch - Installing elasticsearch template to _template/logstash
15:37:03.117 [[main]-pipeline-manager] INFO  logstash.outputs.elasticsearch - New Elasticsearch output {:class=>"LogStash::Outputs::ElasticSearch", :hosts=>[#<Java::JavaNet::URI:0x541526b0>]}
15:37:03.122 [[main]-pipeline-manager] INFO  logstash.pipeline - Starting pipeline {"id"=>"main", "pipeline.workers"=>4, "pipeline.batch.size"=>125, "pipeline.batch.delay"=>5, "pipeline.max_inflight"=>500}
15:37:03.168 [[main]-pipeline-manager] INFO  logstash.pipeline - Pipeline main started
The stdin plugin is now waiting for input:
15:37:03.274 [Api Webserver] INFO  logstash.agent - Successfully started Logstash API endpoint {:port=>9600}
www.baidu.com


7. Configure the Apache host to ship its system log
[root@apache opt]# chmod o+r /var/log/messages 
[root@apache opt]# ll /var/log/messages 
-rw----r--. 1 root root 1090206 Oct 29 15:55 /var/log/messages
[root@apache opt]# vi /etc/logstash/conf.d/system.conf
input {
    file{
        path => "/var/log/messages"
        type => "system"
        start_position => "beginning"
        }
       }
output {
    elasticsearch {
        hosts => ["20.0.0.102:9200"]
        index => "system-%{+YYYY.MM.dd}"
                   }
        }
[root@apache opt]# systemctl restart logstash.service 
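The `index` option stamps each event's date into the index name using a Joda-style pattern such as `%{+YYYY.MM.dd}`, so Elasticsearch creates one new index per day. A sketch of the equivalent name formatting in Python (using `strftime` rather than Logstash's Joda syntax, purely for illustration):

```python
from datetime import date

def daily_index_name(prefix, day):
    # "system-%{+YYYY.MM.dd}" in Logstash -> "system-2020.10.29"
    return "{}-{}".format(prefix, day.strftime("%Y.%m.%d"))

name = daily_index_name("system", date(2020, 10, 29))
```

Daily indices keep each index small and make it easy to expire old log data by deleting whole indices.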
8. In a browser on the host machine, open http://20.0.0.102:9100/ to view the index information

5.2.4 Install Kibana

1. Install the Kibana package
[root@kibana ~]# cd /opt/
[root@kibana opt]# rpm -ivh kibana-5.5.1-x86_64.rpm
[root@kibana opt]# cd /etc/kibana/
[root@kibana kibana]# cp -p kibana.yml kibana.yml.bak
[root@kibana kibana]# vim kibana.yml
  2 server.port: 5601
  7 server.host: "0.0.0.0"
 21 elasticsearch.url: "http://20.0.0.102:9200"
 30 kibana.index: ".kibana"
[root@kibana kibana]# systemctl start kibana.service 
[root@kibana kibana]# systemctl enable kibana.service

2. In a browser on the host machine, open 20.0.0.104:5601

5.2.5 Ingest the Apache host's Apache log files (access, error)

[root@apache opt]# cd /etc/logstash/conf.d/
[root@apache conf.d]# touch apache_log.conf
[root@apache conf.d]# vim apache_log.conf
input {
    file{
        path => "/etc/httpd/logs/access_log"
        type => "access"
        start_position => "beginning"
        }
    file{
        path => "/etc/httpd/logs/error_log"
        type => "error"
        start_position => "beginning"
        }
    }
output {
    if [type] == "access" {
        elasticsearch {
            hosts => ["20.0.0.102:9200"]
            index => "apache_access-%{+YYYY.MM.dd}"
            }
        }
    if [type] == "error" {
        elasticsearch {
            hosts => ["20.0.0.102:9200"]
            index => "apache_error-%{+YYYY.MM.dd}"
            }
        }
    }
[root@apache conf.d]# /usr/share/logstash/bin/logstash -f apache_log.conf
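The output block of apache_log.conf routes each event to a different index depending on the `type` field set by its input. A minimal Python sketch of that routing decision (illustrative only; the date is a fixed assumption here, whereas Logstash fills it in per event):

```python
def route_index(event, day="2020.10.29"):
    # Mirrors the conditional output block: "access" and "error" events
    # go to separate date-stamped indices; anything else is unrouted (None).
    if event.get("type") == "access":
        return "apache_access-" + day
    if event.get("type") == "error":
        return "apache_error-" + day
    return None

a = route_index({"type": "access", "message": "GET /"})
b = route_index({"type": "error", "message": "File does not exist"})
```

Splitting access and error logs into separate indices lets Kibana query and visualize them independently.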

5.2.6 Testing

  1. In a browser on the host machine, open http://20.0.0.101     //the Apache test page opens

  2. In a browser on the host machine, open http://20.0.0.102:9100     //view the index information
    Both the access and error indices should now be visible

  3. In a browser on the host machine, open http://20.0.0.104:5601
    In the lower-left corner, click the Management option ------ Index Patterns ------ Create Index Pattern, and create index patterns for apache_error-* and apache_access-*
