Different versions of these tools have compatibility problems, so it took many attempts and other people's blog posts found online before the following combination finally worked.
flume:1.6.0
elasticsearch:1.7.6
kibana:4.1.9
Copy elasticsearch-1.7.6.jar and lucene-core-4.10.4.jar from elasticsearch's lib directory into flume's lib directory. Our servers use log4cxx and spdlog, and logs are collected via syslog. (spdlog's syslog sink cannot write to a remote server, so you have to modify its source yourself to send the messages out over UDP.)
Sample log line: <15>(Balance)[2016-12-15 14:53:28,610][DEBUG] ServerName[Entrance] statistics_total: total_UserCount = [cur_total=11,max_total=0],
<15>: the syslog priority number, derived from the facility and the log level (when modifying spdlog's syslog sink you must prepend this number, wrapped in angle brackets, to every log line)
(Balance): the server or module name that the regular expression below extracts
[2016-12-15 14:53:28,610]: the time, which is also converted into a timestamp below
[DEBUG]: the log level
everything after that: the message body
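The `<15>` prefix can be reproduced with the standard RFC 3164 formula (priority = facility * 8 + severity). A minimal sketch, my own illustration rather than code from the actual spdlog patch:

```java
// Sketch: how the syslog priority number in the sample line is derived.
// RFC 3164 defines PRI = facility * 8 + severity.
public class SyslogPri {
    static int pri(int facility, int severity) {
        return facility * 8 + severity;
    }

    public static void main(String[] args) {
        int userFacility = 1;  // LOG_USER, matching Facility=USER in the log4cxx config
        int debugSeverity = 7; // LOG_DEBUG
        System.out.println(pri(userFacility, debugSeverity)); // prints 15
    }
}
```

This is why the DEBUG line above carries `<15>`: USER (1) * 8 + DEBUG (7) = 15.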
log4cxx configuration:
log4j.appender.RootSyslog=org.apache.log4j.net.SyslogAppender
# flume address
log4j.appender.RootSyslog.SyslogHost=127.0.0.1
log4j.appender.RootSyslog.Facility=USER
log4j.appender.RootSyslog.Append=true
log4j.appender.RootSyslog.layout=org.apache.log4j.PatternLayout
log4j.appender.RootSyslog.layout.ConversionPattern=(server or module name)[%d][%p] %m%n
flume configuration:
agent.sources = sysSource
agent.channels = memoryChannel
agent.sinks = k1
#syslog source
agent.sources.sysSource.type = syslogudp
agent.sources.sysSource.bind = 0.0.0.0
agent.sources.sysSource.port = 514
agent.sources.sysSource.channels = memoryChannel
agent.sources.sysSource.interceptors = i1 i2 i3
# timestamp
agent.sources.sysSource.interceptors.i1.type = org.apache.flume.interceptor.TimestampInterceptor$Builder
# server IP
agent.sources.sysSource.interceptors.i2.type = org.apache.flume.interceptor.HostInterceptor$Builder
agent.sources.sysSource.interceptors.i2.hostHeader = host
# regular expression that extracts the string inside our custom parentheses as the server type name
agent.sources.sysSource.interceptors.i3.type = org.apache.flume.interceptor.RegexExtractorInterceptor$Builder
agent.sources.sysSource.interceptors.i3.regex = ((?<=\\()[^\\)]+)
agent.sources.sysSource.interceptors.i3.serializers = s1
agent.sources.sysSource.interceptors.i3.serializers.s1.name = server_type
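What the i3 interceptor's regex actually matches can be checked in plain Java (the doubled backslashes in the Flume config are properties-file escaping; in Java source one level is enough). A small sketch of my own:

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

// Sketch: the lookbehind regex from the Flume config, applied to the
// sample log line. It matches the text after "(" up to the next ")".
public class ServerTypeRegex {
    static final Pattern SERVER_TYPE = Pattern.compile("(?<=\\()[^\\)]+");

    static String extract(String line) {
        Matcher m = SERVER_TYPE.matcher(line);
        return m.find() ? m.group() : null;
    }

    public static void main(String[] args) {
        String line = "<15>(Balance)[2016-12-15 14:53:28,610][DEBUG] statistics_total: ...";
        System.out.println(extract(line)); // prints Balance
    }
}
```

The interceptor stores that match in the `server_type` header, which the LogStash-style serializer then writes into the elasticsearch document.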
#ElasticSearchSink
agent.sinks.k1.type = org.apache.flume.sink.elasticsearch.ElasticSearchSink
agent.sinks.k1.channel = memoryChannel
# elasticsearch address
agent.sinks.k1.hostNames = 10.105.92.225:9300
# elasticsearch index name
agent.sinks.k1.indexName = server_log
agent.sinks.k1.batchSize = 100
agent.sinks.k1.indexType = log
# this must match elasticsearch's cluster.name
agent.sinks.k1.clusterName = log-es
agent.sinks.k1.serializer = org.apache.flume.sink.elasticsearch.ElasticSearchLogStashEventSerializer
#memory channel
agent.channels.memoryChannel.type = memory
agent.channels.memoryChannel.capacity = 10000
agent.channels.memoryChannel.transactionCapacity = 100
agent.channels.memoryChannel.byteCapacityBufferPercentage = 20
agent.channels.memoryChannel.byteCapacity = 10240000
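To smoke-test the pipeline without touching the real servers, you can hand-build one syslog line in the format shown earlier and fire it at the syslogudp source over UDP. This is a hypothetical helper of my own, not part of the original setup; host and port match the source config above:

```java
import java.net.DatagramPacket;
import java.net.DatagramSocket;
import java.net.InetAddress;
import java.nio.charset.StandardCharsets;

// Sketch: send one syslog-formatted line over UDP to Flume's syslogudp source.
public class SyslogUdpSmokeTest {
    // Builds a line in the same shape as the sample: <PRI>(module)[time][level] msg
    static String buildLine(int pri, String module, String ts, String level, String msg) {
        return "<" + pri + ">(" + module + ")[" + ts + "][" + level + "] " + msg;
    }

    public static void main(String[] args) throws Exception {
        String line = buildLine(15, "Balance", "2016-12-15 14:53:28,610", "DEBUG",
                "hello from smoke test");
        byte[] payload = line.getBytes(StandardCharsets.UTF_8);
        try (DatagramSocket socket = new DatagramSocket()) {
            socket.send(new DatagramPacket(payload, payload.length,
                    InetAddress.getByName("127.0.0.1"), 514));
        }
        System.out.println(line);
    }
}
```

If everything is wired up, the event should appear in the server_log index a moment later with `server_type` set to Balance.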
elasticsearch configuration:
cluster.name: log-es
node.name: "log-es01"
node.master: true
node.data: true
You can also set up kibana and the head plugin for elasticsearch.