Log Collection with ELK

I. Elasticsearch Installation

1. Download Elasticsearch

https://www.elastic.co/cn/downloads/past-releases/elasticsearch-7-7-0

2. Edit the configuration file

/usr/local/elk/elasticsearch-7.7.0/config/elasticsearch.yml

network.host: 192.168.42.103  # the IP of this machine
node.name: node-1
cluster.initial_master_nodes: ["node-1"]
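
For reference, a fuller single-node sketch of the same file (cluster.name and http.port are illustrative additions; 9200 is already the default port):

cluster.name: my-application
node.name: node-1
network.host: 192.168.42.103
http.port: 9200
cluster.initial_master_nodes: ["node-1"]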

3. Start

First create an elasticsearch user and start with it; Elasticsearch refuses to run as root.
adduser elasticsearch
passwd elasticsearch
chown -R elasticsearch /usr/local/elasticsearch
su elasticsearch
# edit config/jvm.options and lower the heap size:
-Xms512m
-Xmx512m
cd bin
./elasticsearch
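
To run Elasticsearch in the background instead of holding the terminal, the standard -d (daemonize) and -p (pid file) startup options work; the pid file location here is an arbitrary choice:

./elasticsearch -d -p /tmp/elasticsearch.pid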

Startup reports these errors:

[1]: max file descriptors [4096] for elasticsearch process is too low, increase to at least [65535]
[2]: max number of threads [3805] for user [elasticsearch] is too low, increase to at least [4096]
[3]: max virtual memory areas vm.max_map_count [65530] is too low, increase to at least [262144]

[1]: max file descriptors [4096] for elasticsearch process is too low, increase to at least [65535]

vi /etc/security/limits.conf  (add the configuration below; a reboot is required afterwards)

*               soft    nofile          65536
*               hard    nofile          65536
# * applies to all users; you can name the elasticsearch user specifically instead. If you do, switch to the elasticsearch user first and then run the verification commands below.

After rebooting, verify:

[elasticsearch@localhost root]$ ulimit -Hn
65536
[elasticsearch@localhost root]$ ulimit -Sn
65536
[elasticsearch@localhost root]$ 

[2]: max number of threads [3805] for user [elasticsearch] is too low, increase to at least [4096]

vi /etc/security/limits.conf  (add the configuration below; as above, a reboot is also required)
*               soft    nproc           4096
*               hard    nproc           4096

Verify:

[elasticsearch@localhost root]$ ulimit -Hu
4096
[elasticsearch@localhost root]$ ulimit -Su
4096

[3]: max virtual memory areas vm.max_map_count [65530] is too low, increase to at least [262144]

Edit /etc/sysctl.conf, add vm.max_map_count=262144, then reload:

vi /etc/sysctl.conf
sysctl -p
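
Verify the new value took effect:

sysctl vm.max_map_count
# expected output: vm.max_map_count = 262144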

4. Access

If the address below is still unreachable, stop the firewall: systemctl stop firewalld.service

http://192.168.42.103:9200/
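
A quick check from the shell: hitting the root endpoint should return a small JSON banner with the node name, cluster name, and version.

curl http://192.168.42.103:9200/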

II. Kibana Installation

1. Download

https://www.elastic.co/cn/downloads/past-releases/kibana-7-7-0

2. Edit the configuration file

vi /usr/local/elk/kibana-7.7.0-linux-x86_64/config/kibana.yml
server.host: "192.168.42.103"  # must be set (not commented out), or Kibana only listens on localhost
elasticsearch.hosts: ["http://192.168.42.103:9200"]

3. Start

cd bin
./kibana
It prompts:
Kibana should not be run as root.  Use --allow-root to continue.
Kibana also refuses to run as root, so:
chown -R elasticsearch kibana-7.7.0-linux-x86_64
su elasticsearch
./kibana
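
To keep Kibana alive after the shell exits, a plain nohup is enough (the log path is an arbitrary choice):

nohup ./kibana > /tmp/kibana.log 2>&1 &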

4. Access

 http://192.168.42.103:5601

III. Logstash Installation

1. Download

https://www.elastic.co/cn/downloads/past-releases/logstash-7-7-0

2. Create a configuration file

Run Logstash as the elasticsearch user as well. Create config/conf/es_log.conf (the path used by the start command below) with the content that follows. The multiline codec merges any line that does not start with [ into the previous event, so multi-line entries such as stack traces end up in a single message:

input {
  file {
    path => "/usr/local/elasticsearch-7.7.0/logs/elasticsearch.log"
    start_position => "beginning"
    codec => multiline {
      pattern => "^\["
      negate => true
      what => "previous"
    }
  }
}
output {
  elasticsearch {
    hosts => ["http://192.168.42.109:9200"]
    index => "es-log-%{+YYYY.MM.dd}"
  }
  stdout {}
}
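
To experiment with how the multiline codec groups lines without touching the real log file, a stdin-based variant of the config can be pasted into interactively (a sketch; stdin, stdout, and the rubydebug codec are standard Logstash plugins):

input {
  stdin {
    codec => multiline {
      pattern => "^\["
      negate => true
      what => "previous"
    }
  }
}
output {
  stdout { codec => rubydebug }
}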

Validate the syntax:

cd bin
./logstash -f ../config/conf/es_log.conf -t

Output like the following means the configuration is OK:

Sending Logstash logs to /usr/local/logstash-7.7.0/logs which is now configured via log4j2.properties
[2020-09-10T00:01:50,868][WARN ][logstash.config.source.multilocal] Ignoring the 'pipelines.yml' file because modules or command line options are specified
[2020-09-10T00:01:53,207][INFO ][org.reflections.Reflections] Reflections took 72 ms to scan 1 urls, producing 21 keys and 41 values 
Configuration OK
[2020-09-10T00:01:56,051][INFO ][logstash.runner          ] Using config.test_and_exit mode. Config Validation Result: OK. Exiting Logstash

3. Start

./logstash -f ../config/conf/es_log.conf

4. Verify

http://192.168.42.109:9200/_cat/indices?v

The index has been created. Open Kibana and query it:

GET es-log-2020.09.10/_search
{
  "query": {
    "match_all": {}
  }
}
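
The same query from the shell:

curl "http://192.168.42.109:9200/es-log-2020.09.10/_search?pretty"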

A log document written to ES looks like this:

{
        "_index" : "es-log-2020.09.10",
        "_type" : "_doc",
        "_id" : "-YjUdnQByadAa3SH9Fs7",
        "_score" : 1.0,
        "_source" : {
          "path" : "/usr/local/elasticsearch-7.7.0/logs/elasticsearch.log",
          "message" : "[2020-09-10T00:04:56,757][INFO ][o.e.c.m.MetaDataIndexTemplateService] [node-1] adding template [logstash] for index patterns [logstash-*]",
          "@version" : "1",
          "host" : "localhost.localdomain",
          "@timestamp" : "2020-09-10T07:05:00.399Z"
        }
      }

5. Using the grok filter

Above, the whole log line lands in a single message field. With grok, each line can be parsed and split into separate fields that are stored as distinct properties in ES.
Consider the message field of the Elasticsearch document above:

"message" : "[2020-09-10T00:04:56,757][INFO ][o.e.c.m.MetaDataIndexTemplateService] [node-1] adding template [logstash] for index patterns [logstash-*]"

The format of this log line is:

[timestamp][log level][logger class name][node name] log message

A filter plugin can therefore parse the text into these pieces before it is stored in ES.

6. Create a new es_log_grok.conf

input {
  file {
    path => "/usr/local/elasticsearch-7.7.0/logs/elasticsearch.log"
    start_position => "beginning"
    codec => multiline {
      pattern => "^\["
      negate => true
      what => "previous"
    }
  }
}
filter {
  grok {
    match => {
      "message" => "\[%{TIMESTAMP_ISO8601:time}\]\[%{LOGLEVEL:level}%{SPACE}\]\[%{NOTSPACE:loggerclass}%{SPACE}\]%{SPACE}\[%{DATA:nodename}\]%{SPACE} %{GREEDYDATA:msg}"
    }
  }
}
output {
  stdout {}
}
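
When the pattern fails to match a line, grok tags the event with _grokparsefailure. While tuning the pattern, a conditional output (standard Logstash conditional syntax) makes such events easy to spot. A debugging sketch:

output {
  if "_grokparsefailure" in [tags] {
    # only events the grok pattern could not parse reach this branch
    stdout { codec => rubydebug }
  }
}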

Start it:

./logstash -f ../config/conf/es_log_grok.conf 

On the command line you can see the log line has been split into fields:

{
    "loggerclass" => "o.e.m.j.JvmGcMonitorService",
     "@timestamp" => 2020-09-10T08:21:52.935Z,
           "path" => "/usr/local/elasticsearch-7.7.0/logs/elasticsearch.log",
           "host" => "localhost.localdomain",
        "message" => "[2020-09-10T01:09:08,167][INFO ][o.e.m.j.JvmGcMonitorService] [node-1] [gc][8806] overhead, spent [490ms] collecting in the last [1s]",
          "level" => "INFO",
            "msg" => "[gc][8806] overhead, spent [490ms] collecting in the last [1s]",
       "@version" => "1",
           "time" => "2020-09-10T01:09:08,167",
       "nodename" => "node-1"
}

The original output that wrote straight to Elasticsearch is not enough on its own: without a template, ES would give the new fields default dynamic mappings (time would end up as text rather than date, because the comma before the milliseconds defeats date detection). Create an Elasticsearch index mapping template like the following (time gets an explicit date format for the same reason):

{
  "index_patterns": ["es-log-*"],
  "settings": {
    "index.refresh_interval": "1s"
  },
  "mappings": {
    "properties": {
      "time": {
        "type": "date",
        "format": "yyyy-MM-dd'T'HH:mm:ss,SSS"
      },
      "level": {
        "type": "keyword"
      },
      "loggerclass": {
        "type": "keyword"
      },
      "nodename": {
        "type": "keyword"
      },
      "msg": {
        "type": "text"
      },
      "message": {
        "type": "text"
      }
    }
  }
}
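
The template can also be installed by hand through the legacy template API; the file name es_template.json is an assumption (the original does not name the file):

curl -X PUT "http://192.168.42.109:9200/_template/es_template" \
     -H 'Content-Type: application/json' \
     -d @/usr/local/logstash-7.7.0/config/conf/es_template.json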

Modify the earlier es_log_grok.conf and add an elasticsearch block to the output, specifying the template name and its location on disk. The file name es_template.json below is an assumption; use whatever name you saved the template under.

elasticsearch {
  hosts => ["http://192.168.42.109:9200"]
  index => "es-log-%{+YYYY.MM.dd}"
  template_name => "es_template"
  template => "/usr/local/logstash-7.7.0/config/conf/es_template.json"  # assumed file name
}
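
If the template changes later, the elasticsearch output's template_overwrite => true option (a standard option of this plugin) forces Logstash to re-install it on startup; what is currently installed can be checked with:

curl "http://192.168.42.109:9200/_template/es_template?pretty"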

Start it again; the result below shows the log format has been split:

{
     "@timestamp" => 2020-09-10T08:58:25.392Z,
       "nodename" => "node-1",
            "msg" => "[gc][11638] overhead, spent [1s] collecting in the last [1.6s]",
           "time" => "2020-09-10T01:56:39,876",
          "level" => "WARN",
        "message" => "[2020-09-10T01:56:39,876][WARN ][o.e.m.j.JvmGcMonitorService] [node-1] [gc][11638] overhead, spent [1s] collecting in the last [1.6s]",
       "@version" => "1",
           "path" => "/usr/local/elasticsearch-7.7.0/logs/elasticsearch.log",
    "loggerclass" => "o.e.m.j.JvmGcMonitorService",
           "host" => "localhost.localdomain"
}