Collecting ClickHouse logs into Elasticsearch with Logstash 7.11.1

1. ClickHouse log format:

2021.04.14 09:20:43.114711 [ 1 ] {} <Information> Application: Will watch for the process with pid 44
2021.04.14 09:20:43.114917 [ 44 ] {} <Information> Application: Forked a child process to watch
2021.04.14 09:20:43.115122 [ 44 ] {} <Information> SentryWriter: Sending crash reports is disabled
2021.04.14 09:20:43.115331 [ 44 ] {} <Trace> Pipe: Pipe capacity is 1.00 MiB
2021.04.14 09:20:43.172831 [ 44 ] {} <Information> : Starting ClickHouse 21.2.7.11 with revision 54447, build id: 982020A4FECFEEE237071F94D04B9D693ADFA78D, PID 44
2021.04.14 09:20:43.173005 [ 44 ] {} <Information> Application: starting up
2021.04.14 09:20:44.048112 [ 44 ] {} <Information> Application: Calculated checksum of the binary: C862DF75BDC516833103C0C73375024F, integrity check passed.
2021.04.14 09:20:44.048195 [ 44 ] {} <Information> Application: It looks like the process has no CAP_IPC_LOCK capability, binary mlock will be disabled. It could happen due to incorrect ClickHouse package installation. You could resolve the problem manually with 'sudo setcap cap_ipc_lock=+ep /usr/bin/clickhouse'. Note that it will not work on 'nosuid' mounted filesystems.
2021.04.14 09:20:44.048703 [ 44 ] {} <Debug> Application: rlimit on number of file descriptors is 65536
2021.04.14 09:20:44.048730 [ 44 ] {} <Debug> Application: Initializing DateLUT.
2021.04.14 09:20:44.048748 [ 44 ] {} <Trace> Application: Initialized DateLUT with time zone 'UTC'.
2021.04.14 09:20:44.048796 [ 44 ] {} <Debug> Application: Setting up /var/lib/clickhouse/tmp/ to store temporary data in it
2021.04.14 09:20:44.061626 [ 44 ] {} <Debug> Application: Configuration parameter 'interserver_http_host' doesn't exist or exists and empty. Will use 'ab06b1a21b9b' as replica host.
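Before wiring anything into Logstash, the three extraction patterns used in the grok filters later in this post can be tried against one of the sample lines above in plain Ruby. This standalone sketch is for illustration only and is not part of the pipeline:

```ruby
# One sample line from the ClickHouse log above.
line = "2021.04.14 09:20:43.115331 [ 44 ] {} <Trace> Pipe: Pipe capacity is 1.00 MiB"

# Date/time prefix (roughly what the %{YEAR}...%{SECOND} grok pattern captures).
time = line[/\d{4}[.\/-]\d{2}[.\/-]\d{2}[- ]\d{2}:\d{2}:\d{2}\.?\d*/]

# Text between < and > -- the log level.
log_level = line[/(?<=<).*?(?=>)/]

# Everything after > -- the message body.
log_content = line[/(?<=>).*/]

puts time        # => "2021.04.14 09:20:43.115331"
puts log_level   # => "Trace"
puts log_content # => " Pipe: Pipe capacity is 1.00 MiB"
```

Note that the message body keeps a leading space; the grok filter in the config behaves the same way.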

2. Logstash 7.x requires JDK 1.8 or later; make sure the Java environment is configured correctly. Downloading and extracting Logstash is skipped here.

Go into logstash-7.11.1/config, copy logstash-sample.conf to a new file named logstash.conf, and edit logstash.conf so its contents are as follows:

# Sample Logstash configuration for creating a simple
# Beats -> Logstash -> Elasticsearch pipeline.

input {
    file { # watch .log files generated under the ClickHouse log directory
        type => "clickhouselog"
        path => "/home/its/logs/clickhouse/*.log"
        discover_interval => 10
        start_position => "beginning"
    }
}
filter {
    if [type] == "clickhouselog" {
        grok { # extract the date/time with a regex and assign it to the "time" field
            match => ["message", "(?<time>%{YEAR}[./-]%{MONTHNUM}[./-]%{MONTHDAY}[- ]%{HOUR}:%{MINUTE}:%{SECOND})"]
        }
        grok { # capture the text between < and > as the log level, e.g. Debug, Information, Trace
            match => {
                "message" => "(?<logLevel>(?<=<).*?(?=>))"
            }
        }
        grok { # capture everything after > as the message body
            match => {
                "message" => "(?<logContent>(?<=>)(.*)/?)"
            }
        }
        ruby { # parse the time string into a timestamp and assign it to the "collet_time" field
            code => "
                event.set('collet_time', Time.parse(event.get('time')))
            "
        }
#       date {
#           match => [ "time", "yyyy-MMM-dd HH:mm:ss Z", "ISO8601" ]
#           locale => "cn"
#           add_tag => "@timestamp"
#           timezone => "Asia/Shanghai"
#       }
        mutate { # remove the temporary field so it is not indexed into ES
            remove_field => [ "time" ]
        }
    }
}
output { # Elasticsearch output
    elasticsearch {
        index => "log-%{+YYYY.MM.dd}"
        hosts => ["192.168.100.41:8200"]
    }
    stdout { codec => rubydebug }
}
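The `Time.parse` call inside the ruby filter can be sanity-checked outside Logstash; dotted dates such as `2021.04.14` parse without any extra format string. This is a standalone sketch, not pipeline code:

```ruby
require 'time'

# Mirrors what the ruby filter does for each event:
#   event.set('collet_time', Time.parse(event.get('time')))
t = Time.parse("2021.04.14 09:20:43.114711")

puts t.strftime("%Y-%m-%d %H:%M:%S")  # => "2021-04-14 09:20:43"
```

`Time.parse` interprets a string without a timezone suffix in the local zone, which is why the commented-out `date` filter with `timezone => "Asia/Shanghai"` is an alternative worth keeping around.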

3. Start the Logstash service

# go to the Logstash bin directory: /home/soft/app/logstash-7.11.1/bin
[root@localhost bin]# ./logstash -f /home/soft/app/logstash-7.11.1/config/logstash.conf 

If the console starts printing parsed events (the rubydebug output from stdout), the logs are being written normally. (Screenshot omitted.)

Thanks to the authors of the following articles:

https://developer.aliyun.com/article/154341

https://ruby-doc.org/stdlib-2.4.1/libdoc/date/rdoc/Date.html#method-i-to_datetime

https://www.junmajinlong.com/ruby/ruby_datetime/

https://blog.csdn.net/cai750415222/article/details/86614854
