ELK Log Analysis System --- Installing Logstash and Kibana

This post covers installing and configuring Logstash on the Apache virtual machine to take in data, process it, and ship it to Elasticsearch, and then installing and configuring Kibana on another virtual machine to visualize the data stored in Elasticsearch. The walkthrough includes Logstash input/output configuration, Kibana setup and startup, and finally viewing and analyzing the Apache logs through Kibana.


I. Introduction to Logstash

1. Concept

Logstash is a powerful data-processing tool. It provides data transport, format processing, and formatted output, and covers data input, data transformation (such as filtering and rewriting), and data output.
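As a rough sketch of that structure (illustrative only, not one of the lab steps below), every Logstash pipeline is declared as input / filter / output sections; the plugins used here are just common placeholders:

input {
        stdin { }                               # where events come from (file, beats, syslog, ...)
       }
filter {
        # optional processing, e.g. grok to parse fields or mutate to rewrite them
       }
output {
        stdout { codec => rubydebug }           # where events end up (elasticsearch, file, ...)
       }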

2. Main components

Shipper: the agent that collects log data on each node and forwards it on.

Indexer: receives the log data and writes (indexes) it into storage.

Broker: a buffer that connects multiple shippers to the indexers.

Search and Storage: storing and searching the events; in the ELK stack this is Elasticsearch.

Web Interface: the front end for browsing and analyzing the data; in the ELK stack this is Kibana.

3. Lab

Apache virtual machine: 20.0.0.10

[root@localhost ~]# hostnamectl set-hostname apache
[root@localhost ~]# yum -y install httpd
[root@apache ~]# systemctl stop firewalld.service
[root@apache ~]# setenforce 0
[root@apache ~]# cd /opt
[root@apache opt]# ls
logstash-5.5.1.rpm  rh
[root@apache opt]# rpm -ivh logstash-5.5.1.rpm 
[root@apache opt]# ln -s /usr/share/logstash/bin/logstash /usr/local/bin/
[root@apache opt]# systemctl start httpd
[root@apache opt]# systemctl start logstash.service 
[root@apache opt]# systemctl enable logstash.service 
Created symlink from /etc/systemd/system/multi-user.target.wants/logstash.service to /etc/systemd/system/logstash.service.

[root@apache opt]# logstash -e 'input { stdin{} } output { stdout{} }'
ERROR StatusLogger No log4j2 configuration file found. Using default configuration: logging only errors to the console.
WARNING: Could not find logstash.yml which is typically located in $LS_HOME/config or /etc/logstash. You can specify the path using --path.settings. Continuing using the defaults
Could not find log4j2 configuration at path //usr/share/logstash/config/log4j2.properties. Using default config which logs to console
11:46:48.516 [main] INFO  logstash.setting.writabledirectory - Creating directory {:setting=>"path.queue", :path=>"/usr/share/logstash/data/queue"}
11:46:48.522 [main] INFO  logstash.setting.writabledirectory - Creating directory {:setting=>"path.dead_letter_queue", :path=>"/usr/share/logstash/data/dead_letter_queue"}
11:46:48.618 [LogStash::Runner] INFO  logstash.agent - No persistent UUID file found. Generating new UUID {:uuid=>"3da52c0d-a43c-4a19-b9ff-c1eaf57521d7", :path=>"/usr/share/logstash/data/uuid"}
11:46:48.960 [[main]-pipeline-manager] INFO  logstash.pipeline - Starting pipeline {"id"=>"main", "pipeline.workers"=>4, "pipeline.batch.size"=>125, "pipeline.batch.delay"=>5, "pipeline.max_inflight"=>500}
11:46:49.062 [[main]-pipeline-manager] INFO  logstash.pipeline - Pipeline main started
The stdin plugin is now waiting for input:
11:46:49.162 [Api Webserver] INFO  logstash.agent - Successfully started Logstash API endpoint {:port=>9600}
www.baidu.com
2020-10-29T03:46:57.718Z apache www.baidu.com
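The "Successfully started Logstash API endpoint {:port=>9600}" line means the monitoring API is up. As an optional check from a second terminal on the same host (default settings assumed), the following should return the node's host, version, and HTTP address:

curl -XGET 'http://localhost:9600/?pretty'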

[root@apache opt]# logstash -e 'input { stdin{} } output { stdout{ codec=>rubydebug } }'
ERROR StatusLogger No log4j2 configuration file found. Using default configuration: logging only errors to the console.
WARNING: Could not find logstash.yml which is typically located in $LS_HOME/config or /etc/logstash. You can specify the path using --path.settings. Continuing using the defaults
Could not find log4j2 configuration at path //usr/share/logstash/config/log4j2.properties. Using default config which logs to console
11:48:04.820 [[main]-pipeline-manager] INFO  logstash.pipeline - Starting pipeline {"id"=>"main", "pipeline.workers"=>4, "pipeline.batch.size"=>125, "pipeline.batch.delay"=>5, "pipeline.max_inflight"=>500}
11:48:04.866 [[main]-pipeline-manager] INFO  logstash.pipeline - Pipeline main started
The stdin plugin is now waiting for input:
11:48:04.943 [Api Webserver] INFO  logstash.agent - Successfully started Logstash API endpoint {:port=>9600}
www.baidu.com
{
    "@timestamp" => 2020-10-29T03:48:14.014Z,
      "@version" => "1",
          "host" => "apache",
       "message" => "www.baidu.com"
}

[root@apache opt]# logstash -e 'input { stdin{} } output { elasticsearch { hosts=>["20.0.0.12:9200"] } }' 
.......
The stdin plugin is now waiting for input:
12:03:37.664 [Api Webserver] INFO  logstash.agent - Successfully started Logstash API endpoint {:port=>9600}
www.baidu.com

From the host machine, browse to 20.0.0.12:9100 and check the index information.
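If the head page is not handy, the index list can also be pulled straight from Elasticsearch with the _cat API (optional alternative check; the stdin test above should have produced a logstash-<date> index):

curl -XGET 'http://20.0.0.12:9200/_cat/indices?v'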

The logstash service runs as the unprivileged logstash user, so give it read permission on /var/log/messages before collecting the system log:

[root@apache opt]# ll /var/log/messages
-rw-------. 1 root root 414579 10月 29 12:09 /var/log/messages
[root@apache opt]# chmod o+r /var/log/messages
[root@apache opt]# ll /var/log/messages
-rw----r--. 1 root root 416632 10月 29 12:10 /var/log/messages
[root@apache opt]# vim /etc/logstash/conf.d/system.conf

input {
       file{
         path => "/var/log/messages"            # system log file to collect
         type => "system"                       # type tag used to tell event sources apart
         start_position => "beginning"          # read the file from the start, not only new lines
         }
      }
output {
        elasticsearch {
          hosts => ["20.0.0.12:9200"]           # Elasticsearch node
          index => "system-%{+YYYY.MM.dd}"      # one index per day
          }
       }
[root@apache opt]# systemctl restart logstash.service
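If the restart fails or no new index shows up, the configuration file can be checked with Logstash's built-in syntax test (optional; it only parses the file and exits):

logstash -f /etc/logstash/conf.d/system.conf --config.test_and_exit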

From the host machine, refresh 20.0.0.12:9100 and check that the new system index appears.
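A quick way to confirm that events are actually arriving, without the head plugin, is to ask Elasticsearch how many documents the new index holds (optional check):

curl -XGET 'http://20.0.0.12:9200/system-*/_count?pretty'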

II. Kibana

1. Concept

Kibana is an open-source analytics and visualization platform for Elasticsearch. It is used to search and view the data stored in Elasticsearch indices and to perform advanced data analysis and presentation through a variety of charts.

2. Features

Seamless integration with Elasticsearch

Data consolidation and complex data analysis

Benefits more team members

Flexible interface, easier sharing

Simple configuration, visualization of multiple data sources

Simple data export

3. Lab

Virtual machine 20.0.0.11

[root@localhost ~]# cd /opt
[root@localhost opt]# ls
kibana-5.5.1-x86_64.rpm  rh
[root@localhost opt]# systemctl stop firewalld.service 
[root@localhost opt]# setenforce 0
[root@localhost opt]# rpm -ivh kibana-5.5.1-x86_64.rpm 
[root@localhost opt]# cd /etc/kibana
[root@localhost kibana]# ls
kibana.yml
[root@localhost kibana]# cp kibana.yml kibana.yml.bak
[root@localhost kibana]# ls
kibana.yml  kibana.yml.bak
[root@localhost kibana]# vim kibana.yml

Uncomment and edit the following settings (the leading numbers are the line numbers inside kibana.yml):

2 server.port: 5601                              # port Kibana listens on
7 server.host: "0.0.0.0"                         # listen on all interfaces
21 elasticsearch.url: "http://20.0.0.12:9200"    # Elasticsearch instance to query
30 kibana.index: ".kibana"                       # index Kibana stores its own settings in
[root@localhost kibana]# systemctl start kibana.service 
[root@localhost kibana]# systemctl enable kibana.service 
Created symlink from /etc/systemd/system/multi-user.target.wants/kibana.service to /etc/systemd/system/kibana.service

From the host machine, browse to 20.0.0.11:5601; the Kibana page should appear.
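If the page does not come up, a quick command-line check that Kibana is listening on 5601 (run from any machine that can reach 20.0.0.11):

curl -I http://20.0.0.11:5601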

Apache host: 20.0.0.10

[root@apache opt]# cd /etc/logstash/conf.d
[root@apache conf.d]# touch apache_log.conf
[root@apache conf.d]# vi apache_log.conf

input {
       file{
         path => "/etc/httpd/logs/access_log"
         type => "access"
         start_position => "beginning"
         }
       file{
         path => "/etc/httpd/logs/error_log"
         type => "error"
         start_position => "beginning"
         }
      }
output {
        if [type] == "access" {
        elasticsearch {
          hosts => ["20.0.0.12:9200"]
          index => "system-%{+YYYY.MM.dd}"
          }
        }
        if [type] == "error" {
        elasticsearch {
          hosts => ["20.0.0.12:9200"]
          index => "system-%{+YYYY.MM.dd}"
          }
       }
       }
[root@apache conf.d]# logstash -f apache_log.conf

From the host machine, refresh the Elasticsearch index page and check whether the apache_error index has appeared (error_log already has entries from starting httpd; to get an apache_access index as well, visit http://20.0.0.10 in a browser first so access_log receives some entries).
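The same check can also be done from the command line (optional; the pattern matches the dated apache_access/apache_error indices written by the config above):

curl -XGET 'http://20.0.0.12:9200/_cat/indices/apache_*?v'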

Log in to the Kibana page and add index patterns for the apache_access and apache_error indices (apache_access-* and apache_error-*); the Apache logs can then be viewed and analyzed visually.

That completes the ELK installation. I won't walk through every page in detail here; install it and explore for yourself.
