ELK Notes

Elasticsearch indices

Where Elasticsearch stores its index data

Elasticsearch keeps its index data under nodes/0/indices/ in its installation directory. For example, if the installation directory is /disk/elasticsearch/, the index data lives in
/disk/elasticsearch/nodes/0/indices/. The entries in this directory correspond to what curl 'localhost:9200/_cat/indices'
returns, and when you delete all indices with curl -XDELETE 'localhost:9200/*', this directory becomes empty as well.

ll /disk/elasticsearch/nodes/0/indices/
total 312
drwxr-xr-x 8 elasticsearch elasticsearch 4096 Nov 23 16:52 0cOZr6QDRuWGuuL3NHWMQw
drwxr-xr-x 8 elasticsearch elasticsearch 4096 Nov 23 16:52 1GAI8R22SpWiUP2vOk8giA
drwxr-xr-x 8 elasticsearch elasticsearch 4096 Nov 23 16:52 1keMx5glRkiwzOpFPwGiSg
drwxr-xr-x 8 elasticsearch elasticsearch 4096 Nov 23 16:52 1SNoBdHjR1iAn-gDGS1t9A
drwxr-xr-x 8 elasticsearch elasticsearch 4096 Nov 23 16:52 1xCemHQATSyiIrb6fmzSLw
drwxr-xr-x 8 elasticsearch elasticsearch 4096 Nov 23 16:52 2FlztzZOS6iO-_t8iKIS8g
drwxr-xr-x 8 elasticsearch elasticsearch 4096 Nov 23 16:52 2kY1wg2PQliTnwUiAdmlcA
drwxr-xr-x 8 elasticsearch elasticsearch 4096 Nov 23 16:52 6EylnVSRT7C1KsJhHUEGtw
drwxr-xr-x 8 elasticsearch elasticsearch 4096 Nov 23 16:52 -6IpgO-6QdmYk3dB8fIiUg
drwxr-xr-x 6 elasticsearch elasticsearch 4096 Nov 23 16:52 7hjMYetnTbi8bu1kbBdRvQ
drwxr-xr-x 8 elasticsearch elasticsearch 4096 Nov 23 16:52 7oQEEnhzRAOgCq1YLlMrFQ
drwxr-xr-x 8 elasticsearch elasticsearch 4096 Nov 23 16:52 98IdWgcES5-wHbRxHpmRzw
drwxr-xr-x 8 elasticsearch elasticsearch 4096 Nov 23 16:52 9Ik3-J-7SVOlAQ6q3ZT19A
drwxr-xr-x 8 elasticsearch elasticsearch 4096 Nov 23 16:52 A7zdWQEVSdOksT1Ow9FtaA
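
The directory names are index UUIDs. To map them back to index names, you can ask the _cat API to print the uuid column next to the index name (a quick check; other _cat columns can be added the same way):

curl 'localhost:9200/_cat/indices?v&h=index,uuid'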

Viewing Elasticsearch indices

curl localhost:9200/_cat/indices

Deleting Elasticsearch indices

Delete a single index:

curl -XDELETE 'localhost:9200/system-syslog-2018.03'

You can also delete several indices at once by separating them with commas:

curl -XDELETE 'localhost:9200/index_one,index_two'

You can also delete all indices:

curl -XDELETE 'localhost:9200/*'

How Elasticsearch deletes data

There are two kinds of deletion: deleting an index (removes the data and the index structure together, like DROP TABLE in MySQL) and deleting documents (keeps the index structure, like a MySQL DELETE statement).
Deleting an index also removes its data under the indices directory, whereas deleting only the documents leaves the index structure in place.

Caveat: from a data-safety standpoint, being able to wipe everything with a single command can have terrible consequences. To guard against mass deletion, set action.destructive_requires_name: true in elasticsearch.yml (or in the dynamic cluster settings).
Once set, indices can only be deleted by their explicit names; deleting with _all or with wildcards no longer works.
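
For reference, a sketch of setting the same option dynamically through the cluster settings API (whether to use persistent or transient is up to you):

curl -XPUT 'localhost:9200/_cluster/settings' -H 'Content-Type: application/json' -d '
{
  "persistent": {
    "action.destructive_requires_name": true
  }
}'

After this, curl -XDELETE 'localhost:9200/*' is rejected instead of deleting everything.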

ELK errors

Elasticsearch error

[2020-11-23T16:04:21,183][DEBUG][o.e.a.a.i.m.p.TransportPutMappingAction] [master-node] failed to put mappings on indices [[[winlogbeat-6.5.4-2020.11.03/8OlQk_SeTQSttOv8pywafg]]], type [doc]
org.elasticsearch.cluster.metadata.ProcessClusterEventTimeoutException: failed to process cluster event (put-mapping) within 30s
	at org.elasticsearch.cluster.service.MasterService$Batcher.lambda$null$0(MasterService.java:122) ~[elasticsearch-6.0.0.jar:6.0.0]
	at java.util.ArrayList.forEach(ArrayList.java:1257) ~[?:1.8.0_181]
	at org.elasticsearch.cluster.service.MasterService$Batcher.lambda$onTimeout$1(MasterService.java:121) ~[elasticsearch-6.0.0.jar:6.0.0]
	at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingRunnable.run(ThreadContext.java:569) [elasticsearch-6.0.0.jar:6.0.0]
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) [?:1.8.0_181]
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) [?:1.8.0_181]
	at java.lang.Thread.run(Thread.java:748) [?:1.8.0_181]

Cause:
As the log indicates, the index winlogbeat-6.5.4-2020.11.03 would end up with two types, which is not allowed, so an earlier type must already exist on it. It is better to define the relevant mapping yourself ahead of time rather than relying on automatic creation.

curl localhost:9200/_cat/indices?v|grep winlogbeat-6.5.4-2020.11.03
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
  0     0    0     0    0     0      0      0 --:--:--  0:00:03 --:--:--     0yellow open   winlogbeat-6.5.4-2020.11.03             8OlQk_SeTQSttOv8pywafg   3   1          0            0       699b           699b
100  323k  100  323k    0     0   105k      0  0:00:03  0:00:03 --:--:--  105k
curl -XDELETE 'localhost:9200/winlogbeat-6.5.4-2020.11.03'
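
One way to pre-define the mapping is an index template that matches the beat's indices before any data arrives. A minimal sketch for Elasticsearch 6.x; the two fields below are only placeholders, in practice you would load the full template shipped with winlogbeat:

curl -XPUT 'localhost:9200/_template/winlogbeat' -H 'Content-Type: application/json' -d '
{
  "index_patterns": ["winlogbeat-6.5.4-*"],
  "mappings": {
    "doc": {
      "properties": {
        "@timestamp": { "type": "date" },
        "message":    { "type": "text" }
      }
    }
  }
}'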

Kibana error

(screenshot of the Kibana error page)
Cause:
This error has many possible causes. In my case I had stopped the kibana and elasticsearch processes, and the error appeared right after restarting them.
Fix:
Wait a little while and it recovers on its own.

Elasticsearch index not being created

The config file was fine, so check the Logstash log, which showed this error:

[2020-12-01T15:18:37,621][WARN ][logstash.inputs.file     ] failed to open /var/log/messages: Permission denied - /var/log/messages

Logstash does not have permission to read the log file.
Grant read access:

chmod +r /var/log/messages
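
A quick way to confirm the fix, assuming Logstash runs as the logstash user (the default for package installs):

# should print the first line of the file instead of "Permission denied"
sudo -u logstash head -1 /var/log/messages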

The logstash-filter-multiline plugin

Installing the plugin

Installing the standard way

/usr/share/logstash/bin/logstash-plugin install logstash-filter-multiline

Installing after switching the gem mirror

Point Ruby at the Chinese mirror https://gems.ruby-china.com/ instead of https://rubygems.org.

Note that the old https://gems.ruby-china.org/ is no longer available; use https://gems.ruby-china.com/ instead.

Install and update gem

yum install -y gem
gem -v
2.0.14.1

gem update --system

gem -v
2.6.13

Check and change the gem source

# gem sources -l
*** CURRENT SOURCES ***

https://rubygems.org/
gem sources --add https://gems.ruby-china.com/ --remove https://rubygems.org/
https://gems.ruby-china.com/ added to sources
https://rubygems.org/ removed from sources
cat ~/.gemrc 
---
:backtrace: false
:bulk_threshold: 1000
:sources:
- https://gems.ruby-china.com/
:update_sources: true
:verbose: true

Install bundler and change its mirror

gem install bundler
bundle config mirror.https://rubygems.org https://gems.ruby-china.com

Change the gem source in Logstash's Gemfile

vim /usr/share/logstash/Gemfile
 
# This is a Logstash generated Gemfile.
# If you modify this file manually all comments and formatting will be lost.
 
source "https://rubygems.org"
gem "logstash-core", :path => "./logstash-core"
......
Change the default https://rubygems.org to https://gems.ruby-china.com.
The Gemfile.jruby-1.9.lock file does not need editing; it is updated automatically.
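
If you prefer a one-liner over editing the Gemfile by hand, something like the following works, assuming the source line looks exactly as above (sed keeps a .bak backup):

sed -i.bak 's#source "https://rubygems.org"#source "https://gems.ruby-china.com"#' /usr/share/logstash/Gemfile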

Install logstash-filter-multiline

/usr/share/logstash/bin/logstash-plugin install logstash-filter-multiline
Validating logstash-filter-multiline
Installing logstash-filter-multiline
Installation successful

List the plugins installed in Logstash

/usr/share/logstash/bin/logstash-plugin list

Packaging the plugin for offline installation

With the plugin installed, you can build an offline zip package for easy installation in production. The offline package contains all of the plugin's dependencies.

cd /usr/share/logstash/bin
./logstash-plugin prepare-offline-pack --overwrite --output logstash-filter-multiline.zip logstash-filter-multiline
Output:
Offline package created at: logstash-filter-multiline.zip
 
You can install it with this command `bin/logstash-plugin install file:///usr/share/logstash/bin/logstash-filter-multiline.zip`

Installing the plugin offline

bin/logstash-plugin install file:///usr/share/logstash/bin/logstash-filter-multiline.zip

Using logstash-filter-multiline

codec => multiline {
     charset => ...          # optional               character encoding
     max_bytes => ...        # optional   bytes       maximum number of bytes per event
     max_lines => ...        # optional   number      maximum number of lines per event, default 500
     multiline_tag => ...    # optional   string      tag added to merged events, default "multiline"
     pattern => ...          # required   string      regular expression to match against
     patterns_dir => ...     # optional   array       directories with additional pattern files
     negate => ...           # optional   boolean     default false; may be set to true
     what => ...             # required               "previous" (merge backwards) or "next" (merge forwards)
 }
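
A sketch of pattern combined with patterns_dir, assuming a custom pattern file (the path and the APPDATE pattern name are made up for illustration):

# /etc/logstash/patterns/extra contains one line:
# APPDATE \d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}
codec => multiline {
    patterns_dir => ["/etc/logstash/patterns"]
    pattern      => "^%{APPDATE}"    # lines that do not start with a timestamp...
    negate       => true
    what         => "previous"       # ...are appended to the previous line
}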

negate only accepts a boolean, true or false, and defaults to false.
When set to true, the lines that do NOT match the regular expression (pattern) are the ones that get merged;
whether they are merged with the preceding or the following line is decided by what. When set to false, the lines that DO match pattern are the ones that get merged.

what is either "previous" or "next": it decides whether a line covered by the rule above is joined to the preceding line or to the following one.

With negate at its default of false, the lines matching pattern are merged,
forwards or backwards according to what.

With negate set to true, the lines that do not match pattern are merged,
forwards or backwards according to what.

Common filter syntax in Logstash

1. Drop the current event based on a condition

  if "caoke" not in [docker]{
     drop {}
   }
   if "caoke" != [className]{
      drop {}
   }

2. Remove a field

mutate {
    remove_field => ["message"]
}

3. Add a field

mutate{
     add_field => {
         "timestamp" => "%{[message]}"
   }
 }

4. Convert a field's type

mutate{
       convert => {
               "ip" => "string"
       }
}

5. Rename a field

mutate{
        convert => {
                "ip" => "string"
        }
        rename => {
                "ip"=>"IP"
        }
}

6. Reference a field's value: %{message}
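
The same sprintf syntax works for nested fields; a small sketch (the field names here are only examples):

mutate {
    add_field => { "source_host" => "%{[host][name]}" }    # value of the nested field [host][name]
}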
7. Logstash conditionals

Conditionals decide which events a filter or output applies to. Logstash conditionals look like those of a programming language: if, else if and else are supported and can be nested.
Comparison operators:
Equality and comparison: ==, !=, <, >, <=, >=
Regexp: =~ (matches), !~ (does not match)
Inclusion: in, not in
Boolean operators:
and, or, nand, xor
Unary operators:
! (negation)
() (compound expression), !() (negate the result of a compound expression)

A statement such as if [foo] in "String" fails when there is no field named foo, because the value of the missing field cannot be converted to a String. So it is best to add a field-exists check first.
Checking whether a field exists looks like this:

if ["foo"] {
   mutate {
     add_field => { "bar" => "%{foo}"}
   }
 }

 example:
   filter{
       if "start" in [message]{
           grok{
               match => xxxxxxxxx
           }
       }else if "complete" in [message]{
           grok{
               xxxxxxxxxx
           }
       }else{
           grok{
               xxxxxxx
           }
       }

   }

Creating an index template

PUT _template/temp_jiagou
{
  "order": 0,
  "index_patterns": [
    "jiagou-*"
  ],
  "settings": {
    "index": {
      "number_of_shards": "1",
      "number_of_replicas": "1",
      "refresh_interval": "5s"
    }
  },
  "mappings": {
    "_default_": {
      "properties": {
        "logTimestamp": {
          "type": "date",
          "format": "yyyy-MM-dd HH:mm:ss||yyyy-MM-dd HH:mm:ss.SSS||epoch_millis"
        },
        "partition": {
          "type": "integer"
        },
        "offset": {
          "type": "long"
        },
        "lineNum": {
          "type": "integer"
        }
      }
    }
  }
}
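
To confirm that the template was stored, a quick query:

curl 'localhost:9200/_template/temp_jiagou?pretty'

New indices whose names match jiagou-* pick up these settings and mappings when they are created.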

Merging multi-line log entries into one event with Logstash

The multiline plugin merges several physical lines into one event; its what option decides whether a matched line is merged with the preceding lines or with the following ones.

A script that appends test log entries

The Kibana log is used for the test here; the script below keeps appending a multi-line entry to kibana.log.

while true; do sleep 3; echo -e "[ERROR] [] 2017-10-23 09:34:37,855 操作超时,请重新登录 \n  at com.*****.*****.base.session.MobileIntercepter.preHandle(MobileIntercepter.java:102)\n at org.springframework.web.servlet.HandlerExecutionChain.applyPreHandle(HandlerExecutionChain.java:130)" >> /var/log/kibana.log;done

Logstash configuration without multi-line merging

cat /etc/logstash/conf.d/kibana.log 
input {
    file {
        type => "kibana-log"
        path => "/var/log/kibana.log"    #日志的路径
        start_position => "end"      #从哪里开始读取日志,这里是从末尾读取
        sincedb_path => "/dev/null"
    }
}
output {
    stdout {
      codec => rubydebug    # print events to the terminal
    }
}

Check the log file

cat /var/log/kibana.log
[ERROR] [] 2017-10-23 09:34:37,855 操作超时,请重新登录 
  at com.*****.*****.base.session.MobileIntercepter.preHandle(MobileIntercepter.java:102)
 at org.springframework.web.servlet.HandlerExecutionChain.applyPreHandle(HandlerExecutionChain.java:130)
[ERROR] [] 2017-10-23 09:34:37,855 操作超时,请重新登录 
  at com.*****.*****.base.session.MobileIntercepter.preHandle(MobileIntercepter.java:102)
 at org.springframework.web.servlet.HandlerExecutionChain.applyPreHandle(HandlerExecutionChain.java:130)

Test

./logstash --path.settings /etc/logstash/ -f /etc/logstash/conf.d/kibana.log

As the output shows, the multi-line entries are not merged; each line becomes its own event.
(screenshot of the Logstash rubydebug output)

Run the append script again

while true; do sleep 3; echo -e "[ERROR] [] 2017-10-23 09:34:37,855 操作超时,请重新登录 \n  at com.*****.*****.base.session.MobileIntercepter.preHandle(MobileIntercepter.java:102)\n at org.springframework.web.servlet.HandlerExecutionChain.applyPreHandle(HandlerExecutionChain.java:130)" >> /var/log/kibana.log;done

Logstash configuration that merges multi-line logs into one event

Use the logstash-codec-multiline plugin to do the merging.
This requires that logstash-codec-multiline is already installed.
Check whether the plugin is installed:

/usr/share/logstash/bin/logstash-plugin list|grep multiline
logstash-codec-multiline
logstash-filter-multiline

Configuration

cat /etc/logstash/conf.d/kibana.log 
input {
    file {
        type => "kibana-log"
        path => "/var/log/kibana.log"    #日志的路径
        start_position => "end"      #从哪里开始读取日志,这里是从末尾读取
        sincedb_path => "/dev/null"
        codec => multiline {
			pattern => "^\["						#以"["开头进行正则匹配
			negate => true 							#正则匹配成功
			what => "previous"						#和前面的内容进行合并
		#这样配置只会作用于该配置文件当前type
		}
    }
}

output {
    stdout {
      codec => rubydebug    # print events to the terminal
    }
}

The merging can also be done with the logstash-filter-multiline plugin.
This requires that logstash-filter-multiline is already installed.
Check whether the plugin is installed:

/usr/share/logstash/bin/logstash-plugin list|grep multiline
logstash-codec-multiline
logstash-filter-multiline

Configuration

cat /etc/logstash/conf.d/kibana.log 
input {
    file {
        type => "kibana-log"
        path => "/var/log/kibana.log"    #日志的路径
        start_position => "end"      #从哪里开始读取日志,这里是从末尾读取
        sincedb_path => "/dev/null"
    }
}

filter {
       multiline {
            pattern => "^\["
            negate => true   
            what => "previous" 
        }       
} 
# configured this way, the filter applies to every type in this config file
output {
    stdout {
     codec => rubydebug    # print events to the terminal
    }
}

Check the log file

cat /var/log/kibana.log
[ERROR] [] 2017-10-23 09:34:37,855 操作超时,请重新登录 
  at com.*****.*****.base.session.MobileIntercepter.preHandle(MobileIntercepter.java:102)
 at org.springframework.web.servlet.HandlerExecutionChain.applyPreHandle(HandlerExecutionChain.java:130)
[ERROR] [] 2017-10-23 09:34:37,855 操作超时,请重新登录 
  at com.*****.*****.base.session.MobileIntercepter.preHandle(MobileIntercepter.java:102)
 at org.springframework.web.servlet.HandlerExecutionChain.applyPreHandle(HandlerExecutionChain.java:130)

Verify that the lines are merged into one event

./logstash --path.settings /etc/logstash/ -f /etc/logstash/conf.d/kibana.log

As the output shows, the multi-line entry is now merged into a single event.
(screenshot of the Logstash rubydebug output)

Check whether the lines are merged on the Kibana side

Send the log to Elasticsearch and check in Kibana whether the entry appears as a single event.

cat /etc/logstash/conf.d/kibana.conf 
input {
    file {
        type => "kibana-log"
        path => "/var/log/kibana.log"    #日志的路径
        start_position => "end"      #从哪里开始读取日志,这里是从开始读取
        sincedb_path => "/dev/null"
    }
}
#filter {
#       grok {
#               match => { "mesage" => "{COMBINEDAPACHELOG}"}
#}

#}
filter {
       multiline {
            pattern => "^\[" 
            negate => true   
            what => "previous" 
        }       
} 
output {
    stdout {
     codec => rubydebug
    }
if [type] == "kibana-log" {
    elasticsearch{
       hosts => ["192.168.229.116:9200"]
        index => "kibana-log-%{+YYYY.MM.dd}"
        #document_type => "log4j_type"
    }   
        }
}

(screenshot of the merged event in Kibana)
As the screenshot shows, the log entry also appears as a single event in Kibana.

Logstash configuration

Check a config file for errors

/usr/share/logstash/bin/logstash --path.settings /etc/logstash/ -f /etc/logstash/conf.d/syslog.conf --config.test_and_exit
Sending Logstash's logs to /var/log/logstash which is now configured via log4j2.properties
[ERROR] 2020-11-24 14:09:24.268 [LogStash::Runner] sourceloader - No configuration found in the configured sources.
Configuration OK

Notes:
1. If a config file is wrong, Logstash may still start but will not create the index, and if even one file under /etc/logstash/conf.d is broken, none of the configs in that directory will produce indices.
2. If a config uses a plugin that is not installed (for example logstash-filter-multiline), the config check reports an error.
3. Restart the Logstash process after changing a config file.
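
Because one broken file can stop the whole directory from producing indices, it is convenient to validate everything under conf.d in one pass; -f accepts a directory as well as a single file:

/usr/share/logstash/bin/logstash --path.settings /etc/logstash/ -f /etc/logstash/conf.d/ --config.test_and_exit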

Collecting Hadoop logs

Configuration on a Hadoop node that has Logstash installed; with this in place the indices are created automatically.

cat /etc/logstash/conf.d/hadoop.conf 
input {
    file {
        type => "hadoop-1-namenode-log"
        path => "/disk/hadoop-2.8.0/logs/hadoop-root-namenode-c7001.log"    #日志的路径
        start_position => "beginning"      #从哪里开始读取日志,这里是从开始读取
        sincedb_path => "/dev/null"
    }
    file {
       type => "hadoop-1-namenode-out"
       path => "/disk/hadoop-2.8.0/logs/hadoop-root-namenode-c7001.out"    #日志的路径
       start_position => "beginning"      #从哪里开始读取日志,这里是从开始读取
       sincedb_path => "/dev/null"
    }
    file {
       type => "hadoop-1-datanode-log"
       path => "/disk/hadoop-2.8.0/logs/hadoop-root-datanode-c7001.log"    #日志的路径
       start_position => "beginning"      #从哪里开始读取日志,这里是从开始读取
       sincedb_path => "/dev/null"
    }
    file {
       type => "hadoop-1-datanode-out"
       path => "/disk/hadoop-2.8.0/logs/hadoop-root-datanode-c7001.out"    #日志的路径
       start_position => "beginning"      #从哪里开始读取日志,这里是从开始读取
       sincedb_path => "/dev/null"
    }
    file {
       type => "hadoop-1-journalnode-log"
       path => "/disk/hadoop-2.8.0/logs/hadoop-root-journalnode-c7001.log"    #日志的路径
       start_position => "beginning"      #从哪里开始读取日志,这里是从开始读取
       sincedb_path => "/dev/null"
    }
    file {
       type => "hadoop-1-journalnode-out"
       path => "/disk/hadoop-2.8.0/logs/hadoop-root-journalnode-c7001.out"    #日志的路径
       start_position => "beginning"      #从哪里开始读取日志,这里是从开始读取
       sincedb_path => "/dev/null"
    }
    file {
       type => "hadoop-1-regionserver-out"
       path => "/disk/hadoop-2.8.0/logs/hadoop-root-regionserver-c7001.out"    #日志的路径
       start_position => "beginning"      #从哪里开始读取日志,这里是从开始读取
       sincedb_path => "/dev/null"
    }
    file {
       type => "hadoop-1-zkfc-log"
       path => "/disk/hadoop-2.8.0/logs/hadoop-root-zkfc-c7001.log"    #日志的路径
       start_position => "beginning"      #从哪里开始读取日志,这里是从开始读取
       sincedb_path => "/dev/null"  
    }  
    file {
       type => "hadoop-1-zkfc-out"
       path => "/disk/hadoop-2.8.0/logs/hadoop-root-zkfc-c7001.out"    #日志的路径
       start_position => "beginning"      #从哪里开始读取日志,这里是从开始读取
       sincedb_path => "/dev/null"  
    }  
    file {
       type => "yarn-1-resourcemanager-log"
       path => "/disk/hadoop-2.8.0/logs/yarn-root-resourcemanager-c7001.log"    #日志的路径
       start_position => "beginning"      #从哪里开始读取日志,这里是从开始读取
       sincedb_path => "/dev/null"
    }
    file {
       type => "yarn-1-resourcemanager-out"
       path => "/disk/hadoop-2.8.0/logs/yarn-root-resourcemanager-c7001.out"    #日志的路径
       start_position => "beginning"      #从哪里开始读取日志,这里是从开始读取
       sincedb_path => "/dev/null"
    }
    file {
       type => "yarn-1-nodemanager-log "
       path => "/disk/hadoop-2.8.0/logs/yarn-root-nodemanager-c7001.log"    #日志的路径
       start_position => "beginning"      #从哪里开始读取日志,这里是从开始读取
       sincedb_path => "/dev/null"
    }
    file {
       type => "yarn-1-nodemanager-out "
       path => "/disk/hadoop-2.8.0/logs/yarn-root-nodemanager-c7001.out"    #日志的路径
       start_position => "beginning"      #从哪里开始读取日志,这里是从开始读取
       sincedb_path => "/dev/null"
    }
}
filter {
       multiline {
            pattern => "^\[0-9]"
            negate => true   
            what => "previous" 
        }       
} 
output {
    stdout {
      codec => rubydebug
    }
	if [type] == "hadoop-1-namenode-log" {
    elasticsearch{
        hosts => ["172.18.0.252:9200","172.18.0.224:9200"]
        index => "hadoop-1-namenode-log-%{+YYYY.MM.dd}"
        document_type => "log4j_type"
    }
	}
if [type] == "hadoop-1-namenode-out" {
    elasticsearch{
        hosts => ["172.18.0.252:9200","172.18.0.224:9200"]
        index => "hadoop-1-namenode-out-%{+YYYY.MM.dd}"
        document_type => "log4j_type"
    }   
        }
if [type] == "hadoop-1-datanode-log" {
    elasticsearch{
        hosts => ["172.18.0.252:9200","172.18.0.224:9200"]
        index => "hadoop-1-datanode-log-%{+YYYY.MM.dd}"
        document_type => "log4j_type"
    }   
        }
if [type] == "hadoop-1-datanode-out" {
    elasticsearch{
        hosts => ["172.18.0.252:9200","172.18.0.224:9200"]
        index => "hadoop-1-datanode-out-%{+YYYY.MM.dd}"
        document_type => "log4j_type"
    }   
        }
if [type] == "hadoop-1-journalnode-log" {
    elasticsearch{
        hosts => ["172.18.0.252:9200","172.18.0.224:9200"]
        index => "hadoop-1-journalnode-log-%{+YYYY.MM.dd}"
        document_type => "log4j_type"
    }   
        }
if [type] == "hadoop-1-journalnode-out" {
    elasticsearch{
        hosts => ["172.18.0.252:9200","172.18.0.224:9200"]
        index => "hadoop-1-journalnode-out-%{+YYYY.MM.dd}"
        document_type => "log4j_type"
    }   
        }
if [type] == "hadoop-1-regionserver-out" {
    elasticsearch{
        hosts => ["172.18.0.252:9200","172.18.0.224:9200"]
        index => "hadoop-1-regionserver-out-%{+YYYY.MM.dd}"
        document_type => "log4j_type"
    }   
        }
if [type] == "hadoop-1-zkfc-log" {
    elasticsearch{
        hosts => ["172.18.0.252:9200","172.18.0.224:9200"]
        index => "hadoop-1-zkfc-log-%{+YYYY.MM.dd}"
        document_type => "log4j_type"
    }   
        }
if [type] == "hadoop-1-zkfc-out" {
    elasticsearch{
        hosts => ["172.18.0.252:9200","172.18.0.224:9200"]
        index => "hadoop-1-zkfc-out-%{+YYYY.MM.dd}"
        document_type => "log4j_type"
    }   
        }
if [type] == "yarn-1-resourcemanager-log" {
    elasticsearch{
        hosts => ["172.18.0.252:9200","172.18.0.224:9200"]
        index => "yarn-1-resourcemanager-log-%{+YYYY.MM.dd}"
        document_type => "log4j_type"
    }
}
if [type] == "yarn-1-resourcemanager-out" {
    elasticsearch{
        hosts => ["172.18.0.252:9200","172.18.0.224:9200"]
        index => "yarn-1-resourcemanager-out-%{+YYYY.MM.dd}"
        document_type => "log4j_type"
    }
        }
if [type] == "yarn-1-nodemanager-log" {
    elasticsearch{
        hosts => ["172.18.0.252:9200","172.18.0.224:9200"]
        index => "yarn-1-nodemanager-log-%{+YYYY.MM.dd}"
        document_type => "log4j_type"
    }
        }
if [type] == "yarn-1-nodemanager-out" {
    elasticsearch{
        hosts => ["172.18.0.252:9200","172.18.0.224:9200"]
        index => "yarn-1-nodemanager-out-%{+YYYY.MM.dd}"
        document_type => "log4j_type"
    }
        }
}
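
Since every branch in the output above differs only in the index name, a more compact alternative (a sketch; it assumes the type values are all valid in index names, as they are here) is to interpolate the type directly:

output {
    stdout { codec => rubydebug }
    elasticsearch {
        hosts => ["172.18.0.252:9200","172.18.0.224:9200"]
        index => "%{type}-%{+YYYY.MM.dd}"    # e.g. hadoop-1-namenode-log-2020.12.01
        document_type => "log4j_type"
    }
}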

Collecting HBase logs

cat /etc/logstash/conf.d/hbase.conf 
input {
    file {
        type => "hbase-1-master-log"
        path => "/disk/hbase-1.3.0/logs/hbase-root-master-c7001.log"    #日志的路径
        start_position => "beginning"      #从哪里开始读取日志,这里是从开始读取
        sincedb_path => "/dev/null"
    }
    file {
       type => "hbase-1-master-out"
       path => "/disk/hbase-1.3.0/logs/hbase-root-master-c7001.out"    #日志的路径
       start_position => "beginning"      #从哪里开始读取日志,这里是从开始读取
       sincedb_path => "/dev/null"
    }
    file {
       type => "hbase-1-regionserver-log"
       path => "/disk/hbase-1.3.0/logs/hbase-root-regionserver-c7001.log"    #日志的路径
       start_position => "beginning"      #从哪里开始读取日志,这里是从开始读取
       sincedb_path => "/dev/null"
    }
    file {
       type => "hbase-1-regionserver-out"
       path => "/disk/hbase-1.3.0/logs/hbase-root-regionserver-c7001.out"    #日志的路径
       start_position => "beginning"      #从哪里开始读取日志,这里是从开始读取
       sincedb_path => "/dev/null"
    }
}
filter {
       multiline {
            pattern => "^\[0-9]"
            negate => true   
            what => "previous" 
        }       
} 
output {
    stdout {
      codec => rubydebug
    }
if [type] == "hbase-1-master-log" {
    elasticsearch{
        hosts => ["172.18.0.252:9200","172.18.0.224:9200"]
        index => "hbase-1-master-log-%{+YYYY.MM.dd}"
        document_type => "log4j_type"
    }   
        }
if [type] == "hbase-1-master-out" {
    elasticsearch{
        hosts => ["172.18.0.252:9200","172.18.0.224:9200"]
        index => "hbase-1-master-out-%{+YYYY.MM.dd}"
        document_type => "log4j_type"
    }   
        }
if [type] == "hbase-1-regionserver-log" {
    elasticsearch{
        hosts => ["172.18.0.252:9200","172.18.0.224:9200"]
        index => "hbase-1-regionserver-log-%{+YYYY.MM.dd}"
        document_type => "log4j_type"
    }   
        }
if [type] == "hbase-1-regionserver-out" {
    elasticsearch{
        hosts => ["172.18.0.252:9200","172.18.0.224:9200"]
        index => "hbase-1-regionserver-out-%{+YYYY.MM.dd}"
        document_type => "log4j_type"
    }   
        }
}

Collecting OpenTSDB logs

cat opentsdb.conf 
input {
    file {
        type => "opentsdb-1-log"
        path => "/tmp/opentsdb.log"    #日志的路径
        start_position => "end"      #从哪里开始读取日志,这里是从开始读取
        sincedb_path => "/dev/null"
    }
}
}
filter {
       multiline {
            pattern => "^\[0-9]"
            negate => true   
            what => "previous" 
        }       
} 
output {
    stdout {
      codec => rubydebug
    }
if [type] == "opentsdb-1-log" {
    elasticsearch{
        hosts => ["172.18.0.252:9200","172.18.0.224:9200"]
        index => "opentsdb-1-log-%{+YYYY.MM.dd}"
        document_type => "log4j_type"
    }   
        }
}

Collecting MySQL logs

cat /etc/logstash/conf.d/mysql.conf 
input {
    file {
        type => "mysql-01-log"
        path => "/var/log/mariadb/mariadb.log"    #日志的路径
        start_position => "beginning"      #从哪里开始读取日志,这里是从开始读取
        sincedb_path => "/dev/null"
    }
}
#filter {
#       grok {
#               match => { "mesage" => "{COMBINEDAPACHELOG}"}
#}

#}
output {
    stdout {
      codec => rubydebug
    }
if [type] == "mysql-01-log" {
    elasticsearch{
        hosts => ["172.18.0.252:9200","172.18.0.224:9200"]
        index => "mysql-01-log-%{+YYYY.MM.dd}"
        document_type => "log4j_type"
    }   
        }
}

Collecting system logs

cat /etc/logstash/conf.d/system.conf
input {
    file {
        type => "mysql-01-system-log"
        path => "/var/log/messages"    #日志的路径
        start_position => "beginning"      #从哪里开始读取日志,这里是从开始读取
        sincedb_path => "/dev/null"
    }
}

output {
    stdout {
      codec => rubydebug
    }
if [type] == "mysql-01-system-log" {
    elasticsearch{
        hosts => ["172.18.0.252:9200","172.18.0.224:9200"]
        index => "mysql-01-system-log-%{+YYYY.MM.dd}"
        document_type => "log4j_type"
    }   
        }
}

Shipping logs through the system's own syslog daemon

cat /etc/logstash/conf.d/system.conf 
input {  # define the log source
  syslog {
    type => "system-syslog"  # define the type
    port => 10514    # define the listening port
  }
}
#output {  # define the log output
#  stdout {
#    codec => rubydebug  # print events to the current terminal
#  }
#}
output {
  elasticsearch {
    hosts => ["172.18.0.252:9200"]  # 定义es服务器的ip
    index => "c7001-syslog-%{+2019.02.18}" # 定义索引
  }
}
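
For the syslog input to receive anything, the local syslog daemon has to forward its messages to that port. A minimal sketch, assuming rsyslog and that Logstash listens on the same host (adjust the address to wherever Logstash runs); append to /etc/rsyslog.conf and restart rsyslog:

# forward everything to the Logstash syslog input over TCP (@@ = TCP, a single @ = UDP)
*.* @@127.0.0.1:10514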