Notes: Installing Elasticsearch + Logstash + Kibana 6 for Log Collection on CentOS 6.5

  1. Environment

Following the earlier articles, two CentOS 6.5 servers were set up: host 172.16.1.176 runs ELK plus Redis, and host 172.16.1.177 runs Elasticsearch and Logstash, all installed and configured.


For reference, the Elasticsearch configuration file used in this article:

[root@elk logstash]# grep '^[a-z]' /usr/share/elasticsearch-6.2.2/config/elasticsearch.yml 
cluster.name: li-application #cluster name
node.name: linux-1  #node name
path.data: /data/es-data   #data directory
path.logs: /var/log/es-log   #log directory
bootstrap.memory_lock: true   
bootstrap.system_call_filter: false
network.host: 172.16.1.176
http.port: 9200
discovery.zen.ping.unicast.hosts: ["172.16.1.176", "172.16.1.177"]   #cluster discovery; unicast is used here
http.cors.enabled: true 
http.cors.allow-origin: "*" 
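
Before going further, it is worth confirming that the two nodes actually formed one cluster. A minimal check with curl, assuming Elasticsearch is already running on both hosts:

curl 'http://172.16.1.176:9200/_cluster/health?pretty'
# expect "number_of_nodes" : 2 and, once all replicas are allocated, "status" : "green"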

2. Installing the HEAD Plugin

The HEAD plugin gives a fairly intuitive view of the Elasticsearch cluster: its status, topology, and shard allocation, plus data browsing and ad-hoc request testing.

Here the HEAD plugin is installed on the 176 host.

https://github.com/mobz/elasticsearch-head

  • Install Node.js
yum install -y nodejs
  • Install cnpm (an npm client that uses the Taobao registry mirror)
npm install -g cnpm --registry=https://registry.npm.taobao.org
  • Install grunt and grunt-cli via npm
npm install -g grunt
npm install -g grunt-cli --registry=https://registry.npm.taobao.org --no-proxy
  • Download HEAD

https://github.com/mobz/elasticsearch-head/archive/master.zip

 wget https://github.com/mobz/elasticsearch-head/archive/master.zip
  • Unpack the plugin, then configure it
unzip master.zip #unpack the plugin archive
npm install   #install the dependencies (run inside the unpacked elasticsearch-head-master/ directory)
#stop Elasticsearch first
vi $ES_HOME/config/elasticsearch.yml #edit the ES config file; $ES_HOME is wherever you installed ES, /usr/share/elasticsearch-6.2.2/ in these notes. Add the following:
http.cors.enabled: true
http.cors.allow-origin: "*"

vi Gruntfile.js #edit this config file in the HEAD plugin directory, changing the connect block as follows:
                connect: {
                        server: {
                                options: {
                                        hostname: '172.16.1.176',
                                        port: 9100,
                                        base: '.',
                                        keepalive: true
                                }
                        }
                }

/usr/share/elasticsearch-6.2.2/bin/elasticsearch   #start ES again
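
With ES back up, one way to confirm the CORS settings took effect is to send a request carrying an Origin header and look for the CORS header in the response (a sketch; the header only appears when the origin is allowed, so if it is missing, recheck elasticsearch.yml):

curl -s -I -H 'Origin: http://172.16.1.176:9100' 'http://172.16.1.176:9200/' | grep -i access-control
# expect: access-control-allow-origin: *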
  • Start the HEAD plugin

 From the HEAD plugin's root directory, start it with grunt server.

 Alternatively, npm run start also starts HEAD.

[root@elk elasticsearch-head-master]# grunt server
Running "connect:server" (connect) task
Waiting forever...
Started connect web server on http://172.16.1.176:9100

Open a browser and test: http://172.16.1.176:9100/


On the HEAD page, both es nodes show up and the cluster health is displayed in the top-right corner; the installation succeeded.

3. Installing Redis

    Why put a Redis message queue in the middle:
    1. It prevents log loss when Logstash and ES cannot communicate.
    2. It prevents log loss when the log volume exceeds the write load ES can absorb.
    3. Applications (PHP, Java) can write their logs straight into the queue, completing collection without extra steps.
    Note: if Redis hits a scaling bottleneck as a queue, it can be replaced with something more capable such as Kafka or another MQ.
[root@elk ~]# yum install -y redis
Loaded plugins: fastestmirror, refresh-packagekit, security
Loading mirror speeds from cached hostfile
 * base: mirrors.aliyun.com
 * epel: mirror01.idc.hinet.net
 * extras: mirrors.aliyun.com
 * updates: mirrors.aliyun.com
Setting up Install Process
Package redis-3.2.11-1.el6.x86_64 already installed and latest version
Nothing to do

This article installs it via yum; the version is 3.2.11.

Edit the Redis configuration file, changing the following:

[root@elk ~]# vim /etc/redis.conf
bind 172.16.1.176 #bind IP
port 6379 #port
daemonize yes #run in the background

Start Redis:

[root@elk ~]# service redis start
Starting :                                                 [  OK  ]

Check that the Redis service is running: inspect the listening port with netstat, and connect with redis-cli -h.

[root@elk ~]# netstat -ntlp
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address               Foreign Address             State       PID/Program name   
tcp        0      0 172.16.1.176:6379           0.0.0.0:*                   LISTEN      14270/redis-server

[root@elk ~]# redis-cli -h 172.16.1.176 
172.16.1.176:6379> info
# Server
redis_version:3.2.11
redis_git_sha1:00000000
redis_git_dirty:0
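
The data_type => "list" setting that Logstash uses later in this article maps onto ordinary Redis list operations: the output plugin pushes events with RPUSH and the input plugin pops them from the head of the list. A quick sanity check of those semantics from the shell, using a hypothetical key named demo:

redis-cli -h 172.16.1.176 RPUSH demo "log line 1"   # enqueue at the tail
redis-cli -h 172.16.1.176 LPOP demo                 # dequeue from the head, returns "log line 1"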

4. Collecting Logs with Logstash

4.1 Collecting syslog Logs (syslog input; redis output)

  • On host 172.16.1.176, create the Logstash config file shipper.conf, which reads the various logs and writes them to Redis
[root@elk logstash]# vim shipper.conf
input{
        syslog {
                type => "system" #define a type, matched against in the output block below
                host => "172.16.1.176" #address to listen on for syslog
                port => "514" #syslog port
        }
}
output{
        if [type] == "system" {
        redis  {
                host => "172.16.1.176"   #redis address
                port => "6379"   #port
                db => "3"    #use database 3
                data_type => "list"   #store as a list
                key => "system"   #key name; pick your own, ideally matching the log type
        }
    }
}

The input uses the syslog plugin to collect logs.

syslog plugin docs: https://www.elastic.co/guide/en/logstash/current/plugins-inputs-syslog.html

The output uses the redis plugin to write into Redis.

redis plugin docs: https://www.elastic.co/guide/en/logstash/current/plugins-outputs-redis.html

  • Update the local rsyslog configuration
[root@elk logstash]# vim /etc/rsyslog.conf
# remote host is: name/ip:port, e.g. 192.168.0.1:514, port optional
*.* @@172.16.1.176:514

Set the remote host to this machine's IP address; note that @@ forwards over TCP, while a single @ would use UDP.

Restart the rsyslog process for the change to take effect:

[root@elk logstash]# service rsyslog restart
Shutting down system logger:                               [  OK  ]
Starting system logger:                                    [  OK  ]
[root@elk logstash]# 

Run Logstash with the shipper.conf config file (Ctrl+C stops it):

[root@elk logstash]# bin/logstash -f shipper.conf 

Use the logger command to produce some log entries by hand, then check in Redis that they arrived:

[root@elk ~]# logger "test LOG"
[root@elk ~]# redis-cli -h 172.16.1.176
172.16.1.176:6379> SELECT 3
OK
172.16.1.176:6379[3]> 
172.16.1.176:6379[3]> KEYS *
1) "system"
172.16.1.176:6379[3]> LINDEX system -1
"{\"facility\":1,\"facility_label\":\"user-level\",\"timestamp\":\"Mar  6 16:18:04\",\"program\":\"root\",\"type\":\"system\",\"severity\":5,\"@timestamp\":\"2018-03-06T08:18:04.000Z\",\"@version\":\"1\",\"host\":\"172.16.1.176\",\"logsource\":\"elk\",\"message\":\"test LOG\\n\",\"severity_label\":\"Notice\",\"priority\":13}"
172.16.1.176:6379[3]> exit

The data was written successfully.
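
While the shipper is running, the queue depth can be watched from the shell (a small sketch; -n selects the database):

redis-cli -h 172.16.1.176 -n 3 LLEN system   # number of events waiting in the list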

  • On host 172.16.1.177, create the Logstash config file indexer.conf, which reads from Redis and writes into ES

Create the config file indexer.conf:

[root@elk2 logstash]# vim indexer.conf
input{
        redis  {
                host => "172.16.1.176" #redis host; the settings below must match what host 172.16.1.176 writes into redis
                port => "6379"   #port
                db => "3"   #database number
                data_type => "list"   #data type
                key => "system"
                type => "system"
        }

}
output{
    if [type] == "system" {
         elasticsearch{
          hosts => ["172.16.1.176:9200"]
          index => "system-%{+YYYY.MM.dd}"  #index named system- plus the date
        }
    }
}

The input uses the redis plugin to collect the queued events.

redis plugin docs: https://www.elastic.co/guide/en/logstash/current/plugins-inputs-redis.html

Since E, L, and K are three old friends, the output of course uses the elasticsearch plugin to write into ES.

elasticsearch plugin docs: https://www.elastic.co/guide/en/logstash/current/plugins-outputs-elasticsearch.html

Start Logstash on host 172.16.1.177 (Ctrl+C stops it):

[root@elk2 logstash]# bin/logstash -f indexer.conf 

Back on the Redis host, you can see the queued data has already been consumed:

[root@elk ~]# redis-cli -h 172.16.1.176
172.16.1.176:6379> SELECT 3
OK
172.16.1.176:6379[3]> KEYS *
(empty list or set)
172.16.1.176:6379[3]> 

Open the es HEAD plugin page and the data is visible in ES.
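
The same check works without HEAD, straight against the REST API (a sketch, assuming the index pattern above):

curl 'http://172.16.1.176:9200/_cat/indices/system-*?v'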


Log in to Kibana and create a default index pattern: system*


Use the time field as the time filter.



The log data is now presented in Kibana.

4.2 Collecting nginx Logs (file input plugin)

file input plugin docs for this section: https://www.elastic.co/guide/en/logstash/current/plugins-inputs-file.html
  • Format the nginx log as JSON so it is easy for Logstash to collect

The default access.log format is:

log_format  main  '$remote_addr - $remote_user [$time_local] "$request" '
                      '$status $body_bytes_sent "$http_referer" '
                      '"$http_user_agent" "$http_x_forwarded_for"';

To switch to JSON output, a new log_format entry has to be added to the nginx configuration file:

[root@elk logstash]# vim /etc/nginx/nginx.conf

Add the log_format and access_log directives inside the http block:

log_format access_log_json '{"user_ip":"$remote_addr","@timestamp":"$time_iso8601","user_req":"$request","http_code":"$status","body_bytes_sents":"$body_bytes_sent","referer":"$http_referer","responsetime":"$request_time","user_ua":"$http_user_agent"}';
    access_log  /var/log/nginx/access.log  access_log_json;

The http block then looks like this:

http {
    log_format  main  '$remote_addr - $remote_user [$time_local] "$request" '
                      '$status $body_bytes_sent "$http_referer" '
                      '"$http_user_agent" "$http_x_forwarded_for"';
    log_format access_log_json '{"user_ip":"$remote_addr","@timestamp":"$time_iso8601","user_req":"$request","http_code":"$status","body_bytes_sents":"$body_bytes_sent","referer":"$http_referer","responsetime":"$request_time","user_ua":"$http_user_agent"}';
    access_log  /var/log/nginx/access.log  access_log_json;

    sendfile            on;
    tcp_nopush          on;
    tcp_nodelay         on;
    keepalive_timeout   65;
    types_hash_max_size 2048;

    include             /etc/nginx/mime.types;
    default_type        application/octet-stream;

    # Load modular configuration files from the /etc/nginx/conf.d directory.
    # See http://nginx.org/en/docs/ngx_core_module.html#include
    # for more information.
    include /etc/nginx/conf.d/*.conf;
}

Save, verify, and reload the nginx configuration:

[root@elk logstash]# nginx -t
nginx: the configuration file /etc/nginx/nginx.conf syntax is ok
nginx: configuration file /etc/nginx/nginx.conf test is successful
[root@elk logstash]# nginx -s reload

Make a test request and inspect the access log:

[root@elk logstash]# cat /var/log/nginx/access.log
{"user_ip":"172.16.38.18","@timestamp":"2018-03-07T10:44:27+08:00","user_req":"GET /nginx-logo.png HTTP/1.1","http_code":"200","body_bytes_sents":"368","referer":"http://172.16.1.176/","responsetime":"0.000","user_ua":"Mozilla/5.0 (Windows NT 10.0; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/63.0.3239.84 Safari/537.36"}
{"user_ip":"172.16.38.18","@timestamp":"2018-03-07T10:44:27+08:00","user_req":"GET /poweredby.png HTTP/1.1","http_code":"200","body_bytes_sents":"2811","referer":"http://172.16.1.176/","responsetime":"0.000","user_ua":"Mozilla/5.0 (Windows NT 10.0; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/63.0.3239.84 Safari/537.36"}
{"user_ip":"172.16.38.18","@timestamp":"2018-03-07T10:44:28+08:00","user_req":"GET /favicon.ico HTTP/1.1","http_code":"404","body_bytes_sents":"3652","referer":"http://172.16.1.176/","responsetime":"0.000","user_ua":"Mozilla/5.0 (Windows NT 10.0; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/63.0.3239.84 Safari/537.36"}

Update the Logstash config file shipper.conf on host 172.16.1.176 (kept under /usr/share/logstash in this article):

[root@elk logstash]# vim shipper.conf
input{
        syslog {
                type => "system"                #define a type, matched in the output block below
                host => "172.16.1.176"          #address to listen on for syslog
                port => "514"                   #syslog port
        }
        file {                                  #read from a file
                path => "/var/log/nginx/access.log" #file path
                codec => "json"                     #use the json codec to parse the JSON lines
                start_position => "beginning"       #start reading from the beginning
                type => "nginx-log"
        }
}
output{
    if [type] == "system" {              #write system logs to redis
        redis  {
                host => "172.16.1.176"   #redis address
                port => "6379"           #port
                db => "3"                #use database 3
                data_type => "list"      #store as a list
                key => "system"          #key name; pick your own, ideally matching the log type
        }
    }
    if [type] == "nginx-log" {           #write nginx logs to redis
        redis  {
                host => "172.16.1.176"
                port => "6379"
                db => "3"
                data_type => "list"
                key => "nginx-log"
        }
    }
}

Update the Logstash config file indexer.conf on host 172.16.1.177:

[root@elk2 logstash]# pwd
/usr/share/logstash
[root@elk2 logstash]# vim indexer.conf
input{
        redis  {
            host => "172.16.1.176"  #redis host; the settings below must match what host 172.16.1.176 writes into redis
            port => "6379"          #port
            db => "3"               #database number
            data_type => "list"     #data type
            key => "system"
            type => "system"
        }
        redis  {                    #read nginx logs from redis
            host => "172.16.1.176"
            port => "6379"
            db => "3"
            data_type => "list"
            key => "nginx-log"      #must match the key in redis
            type => "nginx-log"
        }

}
output{

    if [type] == "system" {      #write system logs to es
         elasticsearch{
          hosts => ["172.16.1.176:9200"]
          index => "system-%{+YYYY.MM.dd}"  #index named system- plus the date
        }
    }
    if [type] == "nginx-log" {   #write nginx logs to es
         elasticsearch{
          hosts => ["172.16.1.176:9200"]
          index => "nginx-log-%{+YYYY.MM.dd}"
        }

    }
}

Start Logstash on both 172.16.1.176 and 172.16.1.177:

[root@elk logstash]# bin/logstash -f shipper.conf
[root@elk2 logstash]# bin/logstash -f indexer.conf

Hit nginx to generate some traffic; the log entries can now be seen in Kibana.



4.3 Collecting ES Logs (multiline codec)

multiline codec docs: https://www.elastic.co/guide/en/logstash/current/plugins-codecs-multiline.html

With ES configured as in my earlier articles, the ES log is stored by default at:

[root@elk logstash]# cat /var/log/es-log/li-application.log

Sample log:

[root@elk logstash]# cat /var/log/es-log/li-application.log 
[2018-03-07T08:00:01,741][INFO ][o.e.c.m.MetaDataCreateIndexService] [linux-1] [system-2018.03.07] creating index, cause [auto(bulk api)], templates [], shards [5]/[1], mappings []
[2018-03-07T08:00:02,871][INFO ][o.e.c.m.MetaDataMappingService] [linux-1] [system-2018.03.07/30wHARyNQOWmA8M7m5VhAg] create_mapping [doc]
[2018-03-07T08:00:03,902][INFO ][o.e.c.r.a.AllocationService] [linux-1] Cluster health status changed from [YELLOW] to [GREEN] (reason: [shards started [[system-2018.03.07][4]] ...]).
[2018-03-07T11:43:35,996][INFO ][o.e.c.m.MetaDataCreateIndexService] [linux-1] [nginx-log-2018.03.07] creating index, cause [auto(bulk api)], templates [], shards [5]/[1], mappings []
[2018-03-07T11:43:36,861][INFO ][o.e.c.m.MetaDataMappingService] [linux-1] [nginx-log-2018.03.07/EEPYUpQ5SFehkouvUrWSEQ] create_mapping [doc]
[2018-03-07T11:43:38,362][INFO ][o.e.c.r.a.AllocationService] [linux-1] Cluster health status changed from [YELLOW] to [GREEN] (reason: [shards started [[nginx-log-2018.03.07][4]] ...]).
[2018-03-07T11:50:28,465][INFO ][o.e.c.r.a.AllocationService] [linux-1] Cluster health status changed from [GREEN] to [YELLOW] (reason: [{linux-2}{6bO0Kw5aTx6nYZN_0Y4mwg}{63mbD-B4Q6ueOw3OtevIGg}{172.16.1.177}{172.16.1.177:9300} left]).
[2018-03-07T11:50:28,466][INFO ][o.e.c.s.MasterService    ] [linux-1] zen-disco-node-left({linux-2}{6bO0Kw5aTx6nYZN_0Y4mwg}{63mbD-B4Q6ueOw3OtevIGg}{172.16.1.177}{172.16.1.177:9300}), reason(left)[{linux-2}{6bO0Kw5aTx6nYZN_0Y4mwg}{63mbD-B4Q6ueOw3OtevIGg}{172.16.1.177}{172.16.1.177:9300} left], reason: removed {{linux-2}{6bO0Kw5aTx6nYZN_0Y4mwg}{63mbD-B4Q6ueOw3OtevIGg}{172.16.1.177}{172.16.1.177:9300},}
[2018-03-07T11:50:28,466][INFO ][o.e.c.s.ClusterApplierService] [linux-1] removed {{linux-2}{6bO0Kw5aTx6nYZN_0Y4mwg}{63mbD-B4Q6ueOw3OtevIGg}{172.16.1.177}{172.16.1.177:9300},}, reason: apply cluster state (from master [master {linux-1}{RY3FNOHwT9-CPjSb8nM8Tw}{RgVROHXBR26GVXabtyRZ3Q}{172.16.1.176}{172.16.1.176:9300} committed version [54] source [zen-disco-node-left({linux-2}{6bO0Kw5aTx6nYZN_0Y4mwg}{63mbD-B4Q6ueOw3OtevIGg}{172.16.1.177}{172.16.1.177:9300}), reason(left)[{linux-2}{6bO0Kw5aTx6nYZN_0Y4mwg}{63mbD-B4Q6ueOw3OtevIGg}{172.16.1.177}{172.16.1.177:9300} left]]])
[2018-03-07T11:50:28,639][INFO ][o.e.i.s.IndexShard       ] [linux-1] [nginx-log-2018.03.07][1] primary-replica resync completed with 0 operations
[2018-03-07T11:50:28,699][INFO ][o.e.i.s.IndexShard       ] [linux-1] [nginx-log-2018.03.07][3] primary-replica resync completed with 0 operations
[2018-03-07T11:50:28,718][WARN ][o.e.c.NodeConnectionsService] [linux-1] failed to connect to node {linux-2}{6bO0Kw5aTx6nYZN_0Y4mwg}{63mbD-B4Q6ueOw3OtevIGg}{172.16.1.177}{172.16.1.177:9300} (tried [1] times)
org.elasticsearch.transport.ConnectTransportException: [linux-2][172.16.1.177:9300] connect_exception
	at org.elasticsearch.transport.TcpChannel.awaitConnected(TcpChannel.java:165) ~[elasticsearch-6.2.2.jar:6.2.2]
	at org.elasticsearch.transport.TcpTransport.openConnection(TcpTransport.java:616) ~[elasticsearch-6.2.2.jar:6.2.2]
	at org.elasticsearch.transport.TcpTransport.connectToNode(TcpTransport.java:513) ~[elasticsearch-6.2.2.jar:6.2.2]
	at org.elasticsearch.transport.TransportService.connectToNode(TransportService.java:331) ~[elasticsearch-6.2.2.jar:6.2.2]
	at org.elasticsearch.transport.TransportService.connectToNode(TransportService.java:318) ~[elasticsearch-6.2.2.jar:6.2.2]
	at org.elasticsearch.cluster.NodeConnectionsService.validateAndConnectIfNeeded(NodeConnectionsService.java:154) [elasticsearch-6.2.2.jar:6.2.2]
	at org.elasticsearch.cluster.NodeConnectionsService$ConnectionChecker.doRun(NodeConnectionsService.java:183) [elasticsearch-6.2.2.jar:6.2.2]
	at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingAbstractRunnable.doRun(ThreadContext.java:672) [elasticsearch-6.2.2.jar:6.2.2]
	at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37) [elasticsearch-6.2.2.jar:6.2.2]
	at java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source) [?:1.8.0_161]
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source) [?:1.8.0_161]
	at java.lang.Thread.run(Unknown Source) [?:1.8.0_161]
Caused by: io.netty.channel.AbstractChannel$AnnotatedConnectException: Connection refused: 172.16.1.177/172.16.1.177:9300
	at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) ~[?:?]
	at sun.nio.ch.SocketChannelImpl.finishConnect(Unknown Source) ~[?:?]
	at io.netty.channel.socket.nio.NioSocketChannel.doFinishConnect(NioSocketChannel.java:323) ~[?:?]
	at io.netty.channel.nio.AbstractNioChannel$AbstractNioUnsafe.finishConnect(AbstractNioChannel.java:340) ~[?:?]
	at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:633) ~[?:?]
	at io.netty.channel.nio.NioEventLoop.processSelectedKeysPlain(NioEventLoop.java:545) ~[?:?]
	at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:499) ~[?:?]
	at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:459) ~[?:?]
	at io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858) ~[?:?]
	... 1 more
Caused by: java.net.ConnectException: Connection refused
	at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) ~[?:?]
	at sun.nio.ch.SocketChannelImpl.finishConnect(Unknown Source) ~[?:?]
	at io.netty.channel.socket.nio.NioSocketChannel.doFinishConnect(NioSocketChannel.java:323) ~[?:?]
	at io.netty.channel.nio.AbstractNioChannel$AbstractNioUnsafe.finishConnect(AbstractNioChannel.java:340) ~[?:?]
	at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:633) ~[?:?]
	at io.netty.channel.nio.NioEventLoop.processSelectedKeysPlain(NioEventLoop.java:545) ~[?:?]
	at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:499) ~[?:?]
	at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:459) ~[?:?]
	at io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858) ~[?:?]
	... 1 more
[2018-03-07T11:50:28,752][INFO ][o.e.i.s.IndexShard       ] [linux-1] [system-2018.03.07][1] primary-replica resync completed with 0 operations

This is Java-style logging, so collection needs a rule that folds every line not starting with "[" back into the previous record; that way a Java stack trace is not split across separate events when browsing.
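
The rule can be tried out interactively before touching shipper.conf (a quick sketch: paste a few of the log lines above into stdin and watch how they are grouped):

bin/logstash -e 'input { stdin { codec => multiline { pattern => "^\[" negate => true what => "previous" } } } output { stdout { codec => rubydebug } }'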

  • Update the Logstash config file shipper.conf on host 172.16.1.176 (kept under /usr/share/logstash)
input{
        syslog {
                type => "system"                #define a type, matched in the output block below
                host => "172.16.1.176"          #address to listen on for syslog
                port => "514"                   #syslog port
        }
        file {                                  #read from a file
                path => "/var/log/nginx/access.log" #file path
                codec => "json"                     #use the json codec to parse the JSON lines
                start_position => "beginning"       #start reading from the beginning
                type => "nginx-log"
        }
        file {                                  #read from a file
                path => "/var/log/es-log/li-application.log"
                start_position => "beginning"
                codec => multiline{             #use the multiline codec
                        pattern => "^\["        #match lines that begin with [
                        negate => true          #negate inverts the match
                        what => "previous"      #so lines NOT beginning with [ are folded into the previous record
                }
                type => "es-log"
        }
}
output{
    if [type] == "system" {              #write system logs to redis
        redis  {
                host => "172.16.1.176"   #redis address
                port => "6379"           #port
                db => "3"                #use database 3
                data_type => "list"      #store as a list
                key => "system"          #key name; pick your own, ideally matching the log type
        }
    }
    if [type] == "nginx-log" {           #write nginx logs to redis
        redis  {
                host => "172.16.1.176"
                port => "6379"
                db => "3"
                data_type => "list"
                key => "nginx-log"
        }
    }
    if [type] == "es-log"{               #write es logs to redis
        redis  {
                host => "172.16.1.176"
                port => "6379"
                db => "3"
                data_type => "list"
                key => "es-log"
        }
    }
}

Update the Logstash config file indexer.conf on host 172.16.1.177:

[root@elk2 logstash]# vim indexer.conf
input{
        redis  {
            host => "172.16.1.176"  #redis host; the settings below must match what host 172.16.1.176 writes into redis
            port => "6379"          #port
            db => "3"               #database number
            data_type => "list"     #data type
            key => "system"
            type => "system"
        }
        redis  {                    #read nginx logs from redis
            host => "172.16.1.176"
            port => "6379"
            db => "3"
            data_type => "list"
            key => "nginx-log"      #must match the key in redis
            type => "nginx-log"
        }
        redis  {                    #read es logs from redis
            host => "172.16.1.176"
            port => "6379"
            db => "3"
            data_type => "list"
            key => "es-log"
            type => "es-log"
        }

}
output{

    if [type] == "system" {      #write system logs to es
         elasticsearch{
          hosts => ["172.16.1.176:9200"]
          index => "system-%{+YYYY.MM.dd}"  #index named system- plus the date
        }
    }
    if [type] == "nginx-log" {   #write nginx logs to es
         elasticsearch{
          hosts => ["172.16.1.176:9200"]
          index => "nginx-log-%{+YYYY.MM.dd}"
        }

    }
    if [type] == "es-log"{       #write es logs to es
        elasticsearch{
          hosts => ["172.16.1.176:9200"]
          index => "es-log-%{+YYYY.MM.dd}"
        }
    }
}
Run Logstash on both hosts again, then open Kibana to see the es logs.

4.4 Collecting TCP Logs (tcp input)

tcp input docs for this section: https://www.elastic.co/guide/en/logstash/current/plugins-inputs-tcp.html

Update shipper.conf on host 172.16.1.176:

[root@elk logstash]# vim shipper.conf
input{
        syslog {
                type => "system"                #define a type, matched in the output block below
                host => "172.16.1.176"          #address to listen on for syslog
                port => "514"                   #syslog port
        }
        file {                                  #read from a file
                path => "/var/log/nginx/access.log" #file path
                codec => "json"                     #use the json codec to parse the JSON lines
                start_position => "beginning"       #start reading from the beginning
                type => "nginx-log"
        }
        file {                                  #read from a file
                path => "/var/log/es-log/li-application.log"
                start_position => "beginning"
                codec => multiline{             #use the multiline codec
                        pattern => "^\["        #match lines that begin with [
                        negate => true          #negate inverts the match
                        what => "previous"      #so lines NOT beginning with [ are folded into the previous record
                }
                type => "es-log"
        }
        tcp {                                   #open a TCP port and collect whatever is sent to it
                host => "172.16.1.176"
                port => "6666"
                type => "tcp-log"
        }
}
output{
    if [type] == "system" {              #write system logs to redis
        redis  {
                host => "172.16.1.176"   #redis address
                port => "6379"           #port
                db => "3"                #use database 3
                data_type => "list"      #store as a list
                key => "system"          #key name; pick your own, ideally matching the log type
        }
    }
    if [type] == "nginx-log" {           #write nginx logs to redis
        redis  {
                host => "172.16.1.176"
                port => "6379"
                db => "3"
                data_type => "list"
                key => "nginx-log"
        }
    }
    if [type] == "es-log"{               #write es logs to redis
        redis  {
                host => "172.16.1.176"
                port => "6379"
                db => "3"
                data_type => "list"
                key => "es-log"
        }
    }
    if [type] == "tcp-log"{              #write tcp logs to redis
        redis  {
                host => "172.16.1.176"
                port => "6379"
                db => "3"
                data_type => "list"
                key => "tcp-log"
        }
    }
}

Start Logstash, then check the open port:

[root@elk logstash]# bin/logstash -f shipper.conf
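
Once Logstash is up, the 6666/tcp listener can be confirmed (a sketch; the PID and process name will differ on your machine):

netstat -ntlp | grep 6666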


Update the indexer.conf config file on host 172.16.1.177:

[root@elk2 logstash]# vim indexer.conf
input{
        redis  {
            host => "172.16.1.176"  #redis host; the settings below must match what host 172.16.1.176 writes into redis
            port => "6379"          #port
            db => "3"               #database number
            data_type => "list"     #data type
            key => "system"
            type => "system"
        }
        redis  {                    #read nginx logs from redis
            host => "172.16.1.176"
            port => "6379"
            db => "3"
            data_type => "list"
            key => "nginx-log"      #must match the key in redis
            type => "nginx-log"
        }
        redis  {                    #read es logs from redis
            host => "172.16.1.176"
            port => "6379"
            db => "3"
            data_type => "list"
            key => "es-log"
            type => "es-log"
        }
        redis  {                    #read tcp logs from redis
            host => "172.16.1.176"
            port => "6379"
            db => "3"
            data_type => "list"
            key => "tcp-log"
            type => "tcp-log"
        }

}
output{

    if [type] == "system" {      #write system logs to es
         elasticsearch{
          hosts => ["172.16.1.176:9200"]
          index => "system-%{+YYYY.MM.dd}"  #index named system- plus the date
        }
    }
    if [type] == "nginx-log" {   #write nginx logs to es
         elasticsearch{
          hosts => ["172.16.1.176:9200"]
          index => "nginx-log-%{+YYYY.MM.dd}"
        }

    }
    if [type] == "es-log"{       #write es logs to es
        elasticsearch{
          hosts => ["172.16.1.176:9200"]
          index => "es-log-%{+YYYY.MM.dd}"
        }
    }
    if [type] == "tcp-log"{      #write tcp logs to es
        elasticsearch{
          hosts => ["172.16.1.176:9200"]
          index => "tcp-log-%{+YYYY.MM.dd}"
        }
    }
}
Run Logstash:
[root@elk2 logstash]# bin/logstash -f indexer.conf

Use the nc tool to send some records to the TCP port, e.g. ship an arbitrary log file (install nc via yum if it is missing). You can also write directly to the TCP device file, as in the last command below:

[root@elk ~]# nc 172.16.1.176 6666 < install.log
[root@elk ~]# echo "test tcp log 1 " | nc 172.16.1.176 6666
[root@elk ~]# echo "test tcp log 2 " > /dev/tcp/172.16.1.176/6666

After setting up the Kibana index pattern, the collected logs are visible.


4.5 Collecting MySQL Slow-Query Logs (grok plugin)

grok plugin docs for this section: https://www.elastic.co/guide/en/logstash/current/plugins-filters-grok.html

Since this machine has no MySQL environment, a MySQL slowlog was created by hand under /var/log/ as a test:

[root@elk logstash]# cat /var/log/slowlog.log 
# User@Host: taobao[taobao] @ regular_exp [192.168.35.23]  Id:  1235
# Schema: bat_db  Last_errno: 0  Killed: 0
# Query_time: 3.101086  Lock_time: 0.181175  Rows_sent: 0  Rows_examined: 360321  Rows_affected: 103560
# Bytes_sent: 58
SET timestamp=1450288856;
create table just_for_temp_case as
  select '2015-12-16' as the_date,
         t2.user_id,
         t1.stuff_no,
         count(*) as buy_times
  from stuff_entries as t1
  join bill as t2
  on t1.orderserino = t2.id
  where t2.notification_ts < '2015-12-17 00:00:00'
    and t2.notification_ts >= '2015-09-18 00:00:00'
  group by t2.user_id,
           t1.stuff_no;
# Time: 151217 18:03:47
# User@Host: taobao[taobao] @ regular_exp [192.168.35.23]  Id:  1235
# Schema: bat_db  Last_errno: 0  Killed: 0
# Query_time: 3.101086  Lock_time: 0.181175  Rows_sent: 0  Rows_examined: 360321  Rows_affected: 103560
# Bytes_sent: 58
SET timestamp=1450288856;
create table just_for_temp_case as
  select '2015-12-16' as the_date,
         t2.user_id,
         t1.stuff_no,
         count(*) as buy_times
  from stuff_entries as t1
  join bill as t2
  on t1.orderserino = t2.id
  where t2.notification_ts < '2015-12-17 00:00:00'
    and t2.notification_ts >= '2015-09-18 00:00:00'
  group by t2.user_id,
           t1.stuff_no;
# Time: 151217 18:03:47
# User@Host: taobao[taobao] @ regular_exp [192.168.35.23]  Id:  1235
# Schema: bat_db  Last_errno: 0  Killed: 0
# Query_time: 3.101086  Lock_time: 0.181175  Rows_sent: 0  Rows_examined: 360321  Rows_affected: 103560
# Bytes_sent: 58
SET timestamp=1450288856;
create table just_for_temp_case as
  select '2015-12-16' as the_date,
         t2.user_id,
         t1.stuff_no,
         count(*) as buy_times
  from stuff_entries as t1
  join bill as t2
  on t1.orderserino = t2.id
  where t2.notification_ts < '2015-12-17 00:00:00'
    and t2.notification_ts >= '2015-09-18 00:00:00'
  group by t2.user_id,
           t1.stuff_no;
# Time: 151217 18:03:47

Edit shipper.conf on host 172.16.1.176: add a file input that reads the MySQL slow log, and add a filter block using the grok plugin.

Grok is responsible for parsing text patterns: it applies regular expressions and binds the matches to identifiers.

The syntax of a grok pattern is %{PATTERN:IDENTIFIER}. A Logstash filter can combine several grok patterns, each matching part of a log message and assigning it to a different identifier; this is exactly what turns raw log text into structured data.
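
Purely as an illustration, a pattern for the "# Query_time:" line of the sample slowlog above might look like the snippet below; the field names (query_time and so on) are my own choice, and a real slowlog usually needs several more patterns:

grok {
    # matches: "# Query_time: 3.101086  Lock_time: 0.181175  Rows_sent: 0  Rows_examined: 360321"
    match => [ "message", "# Query_time: %{NUMBER:query_time:float}\s+Lock_time: %{NUMBER:lock_time:float}\s+Rows_sent: %{NUMBER:rows_sent:int}\s+Rows_examined: %{NUMBER:rows_examined:int}" ]
}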

[root@elk logstash]# vim shipper.conf 
input{
        syslog {
                type => "system"                #define a type, matched in the output block below
                host => "172.16.1.176"          #address to listen on for syslog
                port => "514"                   #syslog port
        }
        file {                                  #read from a file
                path => "/var/log/nginx/access.log" #file path
                codec => "json"                     #use the json codec to parse the JSON lines
                start_position => "beginning"       #start reading from the beginning
                type => "nginx-log"
        }
        file {                                  #read from a file
                path => "/var/log/es-log/li-application.log"
                start_position => "beginning"
                codec => multiline{             #use the multiline codec
                        pattern => "^\["        #match lines that begin with [
                        negate => true          #negate inverts the match
                        what => "previous"      #so lines NOT beginning with [ are folded into the previous record
                }
                type => "es-log"
        }
        tcp {                                   #open a TCP port and collect whatever is sent to it
                host => "172.16.1.176"
                port => "6666"
                type => "tcp-log"
        }
        file {
                path => "/var/log/slowlog.log"  #path of the test slowlog; in production use your real path
                start_position => "beginning"
                codec => multiline {            #as with the es log, the multiple lines must be merged
                        pattern => "^# User@Host:"
                        negate => true
                        what => "previous"
                }
                type => "mysql-slowlog"
        }
}
filter{
        if [type] == "mysql-slowlog"{   #only applied when type is mysql-slowlog
                grok {
                        match => [ "message", "#write your own regex against the slowlog here; rules and syntax are in the link above#"]
                }
                date {                  #the SET timestamp=... value in the slowlog is a UNIX epoch
                        match => [ "timestamp", "UNIX" ]
                        #remove_field => [ "timestamp" ]
                }
        }
}

output{
    if [type] == "system" {              #write system logs to redis
        redis  {
                host => "172.16.1.176"   #redis address
                port => "6379"           #port
                db => "3"                #use database 3
                data_type => "list"      #store as a list
                key => "system"          #key name; pick your own, ideally matching the log type
        }
    }
    if [type] == "nginx-log" {           #write nginx logs to redis
        redis  {
                host => "172.16.1.176"
                port => "6379"
                db => "3"
                data_type => "list"
                key => "nginx-log"
        }
    }
    if [type] == "es-log"{               #write es logs to redis
        redis  {
                host => "172.16.1.176"
                port => "6379"
                db => "3"
                data_type => "list"
                key => "es-log"
        }
    }
    if [type] == "tcp-log"{              #write tcp logs to redis
        redis  {
                host => "172.16.1.176"
                port => "6379"
                db => "3"
                data_type => "list"
                key => "tcp-log"
        }
    }
    if [type] == "mysql-slowlog"{        #write slowlog entries to redis
        redis  {
                host => "172.16.1.176"
                port => "6379"
                db => "3"
                data_type => "list"
                key => "mysql-slowlog"
        }
    }
}


Edit the indexer.conf config file on 172.16.1.177:

[root@elk2 logstash]# vim indexer.conf
input{
        redis  {
            host => "172.16.1.176"  #redis host; the settings below must match what host 172.16.1.176 writes into redis
            port => "6379"          #port
            db => "3"               #database number
            data_type => "list"     #data type
            key => "system"
            type => "system"
        }
        redis  {                    #read nginx logs from redis
            host => "172.16.1.176"
            port => "6379"
            db => "3"
            data_type => "list"
            key => "nginx-log"      #must match the key in redis
            type => "nginx-log"
        }
        redis  {                    #read es logs from redis
            host => "172.16.1.176"
            port => "6379"
            db => "3"
            data_type => "list"
            key => "es-log"
            type => "es-log"
        }
        redis  {                    #read tcp logs from redis
            host => "172.16.1.176"
            port => "6379"
            db => "3"
            data_type => "list"
            key => "tcp-log"
            type => "tcp-log"
        }
        redis  {                    #read slowlog entries from redis
            host => "172.16.1.176"
            port => "6379"
            db => "3"
            data_type => "list"
            key => "mysql-slowlog"
            type => "mysql-slowlog"
        }

}
output{

    if [type] == "system" {      #write system logs to es
         elasticsearch{
          hosts => ["172.16.1.176:9200"]
          index => "system-%{+YYYY.MM.dd}"  #index named system- plus the date
        }
    }
    if [type] == "nginx-log" {   #write nginx logs to es
         elasticsearch{
          hosts => ["172.16.1.176:9200"]
          index => "nginx-log-%{+YYYY.MM.dd}"
        }

    }
    if [type] == "es-log"{       #write es logs to es
        elasticsearch{
          hosts => ["172.16.1.176:9200"]
          index => "es-log-%{+YYYY.MM.dd}"
        }
    }
    if [type] == "tcp-log"{      #write tcp logs to es
        elasticsearch{
          hosts => ["172.16.1.176:9200"]
          index => "tcp-log-%{+YYYY.MM.dd}"
        }
    }
    if [type] == "mysql-slowlog"{    #write slowlog entries to es
        elasticsearch{
          hosts => ["172.16.1.176:9200"]
          index => "mysqlslow-log-%{+YYYY.MM.dd}"
        }
    }
}

Run Logstash on both hosts.

In the HEAD plugin you can see the data written into es, which means the filter took effect.


The log records also show up in Kibana. Because my test slowlog dates from 2015, take care to pick a suitable time range for viewing, e.g. the last 5 years.




