ELK Log Analysis System

I. Basic environment configuration

1. IP addresses

192.168.200.10 elasticsearch+kibana ELK-1
192.168.200.20 elasticsearch+logstash ELK-2
192.168.200.30 elasticsearch ELK-3

[root@localhost ~]# hostnamectl set-hostname elk-1
[root@localhost ~]# hostnamectl set-hostname elk-2
[root@localhost ~]# hostnamectl set-hostname elk-3

2. Configure hosts (all three nodes)

[root@elk-1 ~]# vi /etc/hosts
[root@elk-1 ~]# cat /etc/hosts
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.200.10 elk-1
192.168.200.20 elk-2
192.168.200.30 elk-3
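A quick sanity check that all three entries are in place; this is a sketch run against an inline copy of the entries above so it works anywhere (on a real node you would check /etc/hosts itself, e.g. with getent hosts elk-1):

```shell
# Verify that all three cluster entries are present.
# The entries are inlined here for illustration; on a real node,
# replace $hosts with "$(cat /etc/hosts)".
hosts='192.168.200.10 elk-1
192.168.200.20 elk-2
192.168.200.30 elk-3'
for h in elk-1 elk-2 elk-3; do
  if printf '%s\n' "$hosts" | grep -qw "$h"; then
    echo "$h: ok"
  else
    echo "$h: missing"
  fi
done
```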

3. Install the JDK (all three nodes)

[root@elk-1 ~]# yum install -y java-1.8.0-openjdk java-1.8.0-openjdk-devel
……
[root@elk-1 ~]# java -version
openjdk version "1.8.0_242"
OpenJDK Runtime Environment (build 1.8.0_242-b08)
OpenJDK 64-Bit Server VM (build 25.242-b08, mixed mode)

II. Installing Elasticsearch

(1) Install Elasticsearch

[root@elk-1 ~]# rpm -ivh elasticsearch-6.0.0.rpm 
[root@elk-2 ~]# rpm -ivh elasticsearch-6.0.0.rpm 
[root@elk-3 ~]# rpm -ivh elasticsearch-6.0.0.rpm 
 # -i installs the package, -v prints verbose output, -h shows hash-mark progress

(2) Configure Elasticsearch

Edit the Elasticsearch configuration file, located at /etc/elasticsearch/elasticsearch.yml.
elk-1 node (the settings to change are annotated with trailing # comments):
[root@elk-1 ~]# vi /etc/elasticsearch/elasticsearch.yml
[root@elk-1 ~]# cat /etc/elasticsearch/elasticsearch.yml

```yaml
# ======= Elasticsearch Configuration ===========
#
# NOTE: Elasticsearch comes with reasonable defaults for most settings.
#       Before you set out to tweak and tune the configuration, make sure you
#       understand what are you trying to accomplish and the consequences.
#
# The primary way of configuring a node is via this file. This template lists
# the most important settings you may want to configure for a production cluster.
#
# Please consult the documentation for further information on configuration options:
# https://www.elastic.co/guide/en/elasticsearch/reference/index.html
#
# ------------------Cluster --------------------
# Use a descriptive name for your cluster:
cluster.name: ELK  # Cluster name (default: elasticsearch). Nodes on the same network segment discover each other automatically, so this name distinguishes multiple clusters sharing one segment.
# ------------------------Node -----------------
# Use a descriptive name for the node:
node.name: elk-1  # Node name; by default a random name is picked from a list shipped with Elasticsearch.
node.master: true  # Whether this node may be elected master (default: true). The first machine in the cluster becomes master; if it dies, a new master is elected. Set to false on the other two nodes.
node.data: false  # Whether this node stores index data (default: true). Set to true on the other two nodes.
# ----------------- Paths ----------------
# Path to directory where to store the data (separate multiple locations by comma):
path.data: /var/lib/elasticsearch  # Index data location (keep the default, and make sure the line is uncommented)
# Path to log files:
path.logs: /var/log/elasticsearch  # Log file location (default: the logs folder under the Elasticsearch root)
# --------------- Network ------------------
# Set the bind address to a specific IP (IPv4 or IPv6):
network.host: 192.168.200.10  # Bind address, IPv4 or IPv6 (default: 0.0.0.0)
# Set a custom port for HTTP:
http.port: 9200  # HTTP port for external access (default: 9200)
# For more information, consult the network module documentation.
# --------------------Discovery ----------------
# Pass an initial list of hosts to perform discovery when new node is started:
# The default list of hosts is ["127.0.0.1", "[::1]"]
#discovery.zen.ping.unicast.hosts: ["host1", "host2"]
discovery.zen.ping.unicast.hosts: ["elk-1","elk-2","elk-3"]  # Initial list of master-eligible hosts, used to discover new nodes joining the cluster
```

elk-2 node:

[root@elk-2 ~]# vi /etc/elasticsearch/elasticsearch.yml 
[root@elk-2 ~]# cat /etc/elasticsearch/elasticsearch.yml |grep -v ^# |grep -v ^$
cluster.name: ELK
node.name: elk-2
node.master: false
node.data: true
path.data: /var/lib/elasticsearch
path.logs: /var/log/elasticsearch
network.host: 192.168.200.20
http.port: 9200
discovery.zen.ping.unicast.hosts: ["elk-1","elk-2","elk-3"]

elk-3 node:

[root@elk-3 ~]# vi /etc/elasticsearch/elasticsearch.yml 
[root@elk-3 ~]# cat /etc/elasticsearch/elasticsearch.yml |grep -v ^# |grep -v ^$
cluster.name: ELK
node.name: elk-3
node.master: false
node.data: true
path.data: /var/lib/elasticsearch
path.logs: /var/log/elasticsearch
network.host: 192.168.200.30
http.port: 9200
discovery.zen.ping.unicast.hosts: ["elk-1","elk-2","elk-3"]
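The only differences between the three files are the node name, the bind address, and the master/data flags, so they can be stamped out of one template. A throwaway sketch (the temporary output directory is purely for illustration; on real nodes the file lives at /etc/elasticsearch/elasticsearch.yml):

```shell
# Generate the three per-node elasticsearch.yml files from one template.
outdir=$(mktemp -d)
gen() {  # gen <node-name> <ip> <master?> <data?>
  cat > "$outdir/elasticsearch-$1.yml" <<EOF
cluster.name: ELK
node.name: $1
node.master: $3
node.data: $4
path.data: /var/lib/elasticsearch
path.logs: /var/log/elasticsearch
network.host: $2
http.port: 9200
discovery.zen.ping.unicast.hosts: ["elk-1","elk-2","elk-3"]
EOF
}
gen elk-1 192.168.200.10 true  false
gen elk-2 192.168.200.20 false true
gen elk-3 192.168.200.30 false true
grep -H '^node.name' "$outdir"/elasticsearch-*.yml
```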

(3) Start the service

Start the Elasticsearch service on each node, then use netstat to confirm the ports are listening:

[root@elk-1 ~]# systemctl start elasticsearch
[root@elk-2 ~]# systemctl start elasticsearch
[root@elk-3 ~]# systemctl start elasticsearch
[root@elk-1 ~]# netstat -lntp
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address           Foreign Address         State       PID/Program name    
tcp        0      0 0.0.0.0:22              0.0.0.0:*               LISTEN      1446/sshd           
tcp        0      0 127.0.0.1:25            0.0.0.0:*               LISTEN      1994/master         
tcp6       0      0 192.168.200.10:9200      :::*                    LISTEN      19280/java          
tcp6       0      0 192.168.200.10:9300      :::*                    LISTEN      19280/java          

(4) Check the cluster health

Check the cluster state with curl 'IP:9200/_cluster/health?pretty', for example:
elk-1 node:

[root@elk-1 ~]# curl '192.168.200.10:9200/_cluster/health?pretty'
{
  "cluster_name" : "ELK",
  "status" : "green",	// green means healthy; yellow or red means the cluster has a problem
  "timed_out" : false,	// whether the request timed out
  "number_of_nodes" : 3,	// number of nodes in the cluster
  "number_of_data_nodes" : 2,	// number of data nodes in the cluster
  "active_primary_shards" : 1,
  "active_shards" : 2,
  "relocating_shards" : 0,
  "initializing_shards" : 0,
  "unassigned_shards" : 0,
  "delayed_unassigned_shards" : 0,
  "number_of_pending_tasks" : 0,
  "number_of_in_flight_fetch" : 0,
  "task_max_waiting_in_queue_millis" : 0,
  "active_shards_percent_as_number" : 100.0
}
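For scripted monitoring, the status field can be pulled out of that response with plain sed. A minimal sketch, run here against a saved sample response so it needs no live cluster (on a node you would feed it `curl -s '192.168.200.10:9200/_cluster/health'` instead):

```shell
# Extract "status" from a _cluster/health response and flag anything
# that is not green. The JSON below is a saved sample for illustration.
health='{"cluster_name":"ELK","status":"green","timed_out":false,"number_of_nodes":3}'
status=$(printf '%s' "$health" | sed -n 's/.*"status":"\([a-z]*\)".*/\1/p')
if [ "$status" = "green" ]; then
  echo "cluster healthy ($status)"
else
  echo "cluster problem ($status)" >&2
fi
```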

III. Deploying Kibana

(1) Install Kibana

[root@elk-1 ~]# rpm -ivh kibana-6.0.0-x86_64.rpm 

(2) Configure Kibana

[root@elk-1 ~]# vi /etc/kibana/kibana.yml 
[root@elk-1 ~]# cat /etc/kibana/kibana.yml |grep -v ^#
server.port: 5601
server.host: 192.168.200.10
elasticsearch.url: "http://192.168.200.10:9200"

(3) Start Kibana

[root@elk-1 ~]# systemctl start kibana

Check the port:

[root@elk-1 ~]# netstat -lntp |grep node
tcp        0      0 192.168.200.10:5601      0.0.0.0:*               LISTEN      19958/node     

Once Kibana is up, open 192.168.200.10:5601 in a browser to reach the Kibana web interface.
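Kibana can take a little while to come up, so a loop that waits for the port is handy before opening the browser. A sketch using bash's /dev/tcp pseudo-device (host, port, and timeout are parameters; the final line probes a deliberately closed local port just so the sketch runs anywhere):

```shell
# Wait up to $3 seconds for $1:$2 to accept TCP connections.
wait_port() {
  local i
  for i in $(seq 1 "$3"); do
    (exec 3<>"/dev/tcp/$1/$2") 2>/dev/null && return 0
    sleep 1
  done
  return 1
}
# On the elk-1 node: wait_port 192.168.200.10 5601 30 && echo "kibana is up"
wait_port 127.0.0.1 1 1 || echo "nothing listening on 127.0.0.1:1"
```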

IV. Deploying Logstash

(1) Install Logstash

[root@elk-2 ~]# rpm -ivh logstash-6.0.0.0.rpm 

(2) Configure Logstash

Edit /etc/logstash/logstash.yml and add the following:

[root@elk-2 ~]# vi /etc/logstash/logstash.yml
http.host: "192.168.200.20"
Configure Logstash to collect syslog messages:
[root@elk-2 ~]# vi /etc/logstash/conf.d/syslog.conf 
[root@elk-2 ~]# cat /etc/logstash/conf.d/syslog.conf 
input {  # define the log source
    file {
        path => "/var/log/messages"  # source log path; the file needs mode 644 or Logstash cannot read it
        type => "systemlog"   # tag events with a type
        start_position => "beginning" 
        stat_interval => "3" 
    }
}
output {   # define the log destination
    if [type] == "systemlog" {
        elasticsearch {
            hosts => ["192.168.200.10:9200"]
            index => "system-log-%{+YYYY.MM.dd}"
        }
    }
}
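While debugging, it helps to see events on the terminal before trusting that they reach Elasticsearch. A sketch of an alternative output section using Logstash's stdout plugin with the rubydebug codec (swap it in temporarily, or add it alongside the elasticsearch block):

```
output {
    if [type] == "systemlog" {
        stdout { codec => rubydebug }   # print each event to the console for inspection
    }
}
```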

Make /var/log/messages readable:

[root@elk-2 ~]# chmod 644 /var/log/messages

#Create a symlink so the logstash command can be used directly

[root@elk-2 ~]# ln -s /usr/share/logstash/bin/logstash /usr/bin

#Test the configuration file for errors:

[root@elk-2 ~]# logstash --path.settings /etc/logstash/ -f /etc/logstash/conf.d/syslog.conf --config.test_and_exit
Sending Logstash's logs to /var/log/logstash which is now configured via log4j2.properties
Configuration OK #"Configuration OK" means the file is valid
//--path.settings specifies the directory holding the Logstash settings files
//-f specifies the pipeline config file to check
//--config.test_and_exit exits after the check instead of starting the pipeline
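The test-then-start sequence is easy to fold into one guard. A sketch, with the real logstash binary stubbed out as a shell function so the control flow runs anywhere (drop the stub line and uncomment the restart on a real node):

```shell
# Only restart logstash when the config test passes.
logstash() { echo "Configuration OK"; }   # stub for illustration; remove on a real node
if logstash --path.settings /etc/logstash/ \
     -f /etc/logstash/conf.d/syslog.conf --config.test_and_exit \
   | grep -q 'Configuration OK'; then
  echo "config valid, restarting logstash"
  # systemctl restart logstash
else
  echo "config invalid, leaving the running service alone" >&2
fi
```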

(3) Start Logstash

Once the configuration checks out, have rsyslog forward its logs: edit /etc/rsyslog.conf and add one line under the #### RULES #### section, then start the logstash service:

[root@elk-2 ~]# vi /etc/rsyslog.conf
*.* @@192.168.200.20:10514
[root@elk-2 ~]# systemctl start logstash

Check the listening ports with netstat -lntp:

[root@elk-2 ~]# netstat -lntp
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address           Foreign Address         State       PID/Program name    
tcp        0      0 0.0.0.0:22              0.0.0.0:*               LISTEN      1443/sshd           
tcp        0      0 127.0.0.1:25            0.0.0.0:*               LISTEN      2009/master         
tcp6       0      0 192.168.200.20:9200      :::*                    LISTEN      19365/java          
tcp6       0      0 :::10514                :::*                    LISTEN      21835/java          
tcp6       0      0 192.168.200.20:9300      :::*                    LISTEN      19365/java          
tcp6       0      0 :::22                   :::*                    LISTEN      1443/sshd           
tcp6       0      0 ::1:25                  :::*                    LISTEN      2009/master         
tcp6       0      0 192.168.200.20:9600      :::*                    LISTEN      21835/java  

If, after starting the service, the process exists but the ports do not appear, check the log:

[root@elk-2 ~]# cat /var/log/logstash/logstash-plain.log
//A permissions problem: logstash was previously run from the terminal as root, so the files it created are owned by root
[root@elk-2 ~]# ll /var/lib/logstash/  
total 4
drwxr-xr-x. 2 root root      6 Dec  6 15:45 dead_letter_queue
drwxr-xr-x. 2 root root      6 Dec  6 15:45 queue
-rw-r--r--. 1 root root     36 Dec  6 15:45 uuid
[root@elk-2 ~]# chown -R logstash /var/lib/logstash/
[root@elk-2 ~]# systemctl restart logstash #the ports appear after the restart

After Logstash is running, generate some syslog traffic: log in to the elk-2 machine from the third host, then log out.

V. Finishing up

(1) View the log index
When Kibana was first deployed there was no log index to search yet. Now that Logstash is running, go back to the Elasticsearch side and list the indices:

[root@elk-1 ~]# curl '192.168.200.10:9200/_cat/indices?v'
health status index                 uuid                   pri rep docs.count docs.deleted store.size pri.store.size
green  open   system-log-2019.12.06 UeKk3IY6TiebNu_OD04YZA   5   1        938            0      816kb        412.2kb
green  open   .kibana               KL7WlNw_T7K36_HSbchBcw   1   1          1            0      7.3kb          3.6kb
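The interesting columns of that listing can be pulled out with awk. A sketch run against a saved copy of the output above so it works offline (on a live node, pipe `curl -s '192.168.200.10:9200/_cat/indices?v'` into the awk instead):

```shell
# Print index name and document count from a saved _cat/indices?v listing.
indices='health status index                 uuid pri rep docs.count docs.deleted store.size pri.store.size
green  open   system-log-2019.12.06 UeKk3IY6TiebNu_OD04YZA 5 1 938 0 816kb 412.2kb
green  open   .kibana               KL7WlNw_T7K36_HSbchBcw 1 1 1 0 7.3kb 3.6kb'
printf '%s\n' "$indices" | awk 'NR > 1 { printf "%-25s %s docs\n", $3, $7 }'
```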
[root@elk-1 ~]# curl -XGET '192.168.200.10:9200/system-log-2021.10.31?pretty'  //use -XDELETE instead of -XGET to delete the index

{
  "system-log-20" : {
    "aliases" : { },
    "mappings" : {
      "systemlog" : {
        "properties" : {
          "@timestamp" : {
            "type" : "date"
          },
          "@version" : {
            "type" : "text",
            "fields" : {
              "keyword" : {
                "type" : "keyword",
                "ignore_above" : 256
              }
            }
          },
          "host" : {
            "type" : "text",
            "fields" : {
              "keyword" : {
                "type" : "keyword",
                "ignore_above" : 256
              }
            }
            },
……
(2) Web UI configuration
Open 192.168.200.10:5601 in a browser and configure the index pattern in Kibana: