The previous posts covered the design of log events, how to instrument code, and how JVM-based programs integrate with our system; next we look at how the logs are indexed. As the first three posts showed, data written through logging calls such as LOGGER.info ends up in Kafka, so indexing the logs only requires reading from Kafka in real time and writing to Elasticsearch. To raise the indexing rate, we deploy 3 instances that consume Kafka's 9 partitions in real time and use the Elasticsearch bulk API. In our tests, 3 PCs can index 20,000+ records per second in real time, while plain real-time processing of Kafka data into files reaches roughly 500,000+ records per second, which fully meets our company's current needs for real-time log collection and indexing.
The code is fairly simple; the core logic is as follows:
BulkRequestBuilder bulkRequest = transportClient.prepareBulk();
int count = 0;
try {
    while (true) {
        ConsumerRecords<byte[], String> records = this.kafkaConsumerApp.poll(this.kafkaProperties.getPollTimeout());
        if (!records.isEmpty()) {
            for (ConsumerRecord<byte[], String> record : records) {
                String value = record.value();
                XContentBuilder source = this.buildXContentBuilder(value);
                if (source != null) {
                    bulkRequest.add(transportClient.prepareIndex(this.esProperties.getIndex(), this.esProperties.getDoc())
                            .setSource(source));
                } else {
                    LOGGER.info("record transform error, {}", value);
                }
                currentOffsets.put(new TopicPartition(record.topic(), record.partition()), new OffsetAndMetadata(record.offset() + 1));
                count++;
                if (count >= 1000) {
                    // once 1000 records have been accumulated, commit the offsets to Kafka
                    kafkaConsumerApp.commitAsync(currentOffsets, new KafkaOffsetCommitCallback());
                    count = 0;
                }
            }
            int size = bulkRequest.numberOfActions();
            if (size != 0) {
                bulkRequest.execute().actionGet();
            }
            LOGGER.info("total record: {}, indexed {} records to es", records.count(), size);
            bulkRequest = transportClient.prepareBulk();
            kafkaConsumerApp.commitAsync(currentOffsets, new KafkaOffsetCommitCallback());
        }
    }
} catch (WakeupException e) {
    // do not process, this is shutdown
    LOGGER.error("wakeup, start to shutdown, {}", e);
} catch (Exception e) {
    LOGGER.error("process records error, {}", e);
} finally {
    kafkaConsumerApp.commitSync(currentOffsets);
    LOGGER.info("finally commit the offset");
    // no need to call kafkaConsumer.close() explicitly; the Spring bean container will close it
}
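The loop assumes that transportClient has already been constructed elsewhere (in our case it is wired up as a Spring bean). For reference, a minimal sketch of that setup is shown below; it assumes the Elasticsearch 2.x TransportClient API, and the cluster name, host and port are placeholders rather than our real values.

import java.net.InetAddress;

import org.elasticsearch.client.transport.TransportClient;
import org.elasticsearch.common.settings.Settings;
import org.elasticsearch.common.transport.InetSocketTransportAddress;

public class EsClientFactory {

    /**
     * Builds a TransportClient for bulk indexing.
     * Cluster name, host and port below are placeholders, not the values used in this series.
     */
    public static TransportClient create() throws Exception {
        Settings settings = Settings.settingsBuilder()
                .put("cluster.name", "log-es-cluster") // hypothetical cluster name
                .build();
        return TransportClient.builder()
                .settings(settings)
                .build()
                .addTransportAddress(new InetSocketTransportAddress(InetAddress.getByName("es-host"), 9300));
    }
}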
The Kafka consumer group used here is es-indexer-consume-group.
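The consumer itself is wired up through Spring configuration properties; the sketch below only illustrates what an equivalent plain-Java setup with this group id could look like. The broker address, topic name and deserializer choices are assumptions for illustration, not values from this series.

import java.util.Collections;
import java.util.Properties;

import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.KafkaConsumer;

public class EsIndexerConsumerFactory {

    public static KafkaConsumer<byte[], String> create() {
        Properties props = new Properties();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "kafka-host:9092"); // hypothetical broker list
        props.put(ConsumerConfig.GROUP_ID_CONFIG, "es-indexer-consume-group");
        // offsets are committed manually in the indexing loop, so disable auto commit
        props.put(ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG, "false");
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG,
                "org.apache.kafka.common.serialization.ByteArrayDeserializer");
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG,
                "org.apache.kafka.common.serialization.StringDeserializer");

        KafkaConsumer<byte[], String> consumer = new KafkaConsumer<>(props);
        consumer.subscribe(Collections.singletonList("log-topic")); // hypothetical topic name
        return consumer;
    }
}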
/**
 * Build an XContentBuilder from a raw log line.
 * jsonBuilder() is the static import org.elasticsearch.common.xcontent.XContentFactory.jsonBuilder.
 * @param line the raw log line read from Kafka
 * @return the builder, or null if the line cannot be parsed
 */
private XContentBuilder buildXContentBuilder(String line) {
    try {
        LogDto logDto = new LogDto(line);
        return jsonBuilder()
                .startObject()
                .field(Constants.DAY, logDto.getDay())
                .field(Constants.TIME, logDto.getTime())
                .field(Constants.NANOTIME, logDto.getNanoTime())
                .field(Constants.CREATED, logDto.getCreated())
                .field(Constants.APP, logDto.getApp())
                .field(Constants.HOST, logDto.getHost())
                .field(Constants.THREAD, logDto.getThread())
                .field(Constants.LEVEL, logDto.getLevel())
                .field(Constants.EVENT_TYPE, logDto.getEventType())
                .field(Constants.PACK, logDto.getPack())
                .field(Constants.CLAZZ, logDto.getClazz())
                .field(Constants.LINE, logDto.getLine())
                .field(Constants.MESSAGE_SMART, logDto.getMessageSmart())
                .field(Constants.MESSAGE_MAX, logDto.getMessageMax())
                .endObject();
    } catch (Exception e) {
        return null;
    }
}
Since this is log consumption, a certain amount of loss and duplicate consumption is acceptable, but it should still be avoided as much as possible.
The code itself is simple; a few points are worth explaining:
- When consuming from Kafka, manage the offsets yourself as much as possible, so that an abnormal situation in Kafka does not lead to large-scale duplicate consumption or loss:
  - When the Kafka consumer rebalances, the current consumer's offsets need to be committed.
  - commitSync(xxx) commits the offsets synchronously and waits for the commit to complete.
  - commitAsync(xxx, callback) commits asynchronously and does not wait.
- Given the above, the synchronous commit is best placed in the rebalance path, while the asynchronous commit should be used during normal consumption, and a failed commit must log the exception so the error can be investigated; see the sketch after this list.
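The KafkaOffsetCommitCallback used above and the rebalance handling are not shown in this post; the following is a minimal sketch of both under the standard Kafka client API, reusing the currentOffsets map from the consume loop. Class and field names other than KafkaOffsetCommitCallback are hypothetical.

import java.util.Collection;
import java.util.Map;

import org.apache.kafka.clients.consumer.ConsumerRebalanceListener;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.clients.consumer.OffsetAndMetadata;
import org.apache.kafka.clients.consumer.OffsetCommitCallback;
import org.apache.kafka.common.TopicPartition;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

/** Logs commit failures so they can be investigated; used with commitAsync during normal consumption. */
public class KafkaOffsetCommitCallback implements OffsetCommitCallback {

    private static final Logger LOGGER = LoggerFactory.getLogger(KafkaOffsetCommitCallback.class);

    @Override
    public void onComplete(Map<TopicPartition, OffsetAndMetadata> offsets, Exception exception) {
        if (exception != null) {
            LOGGER.error("commit offset error, offsets: {}", offsets, exception);
        }
    }
}

/** Commits the in-flight offsets synchronously before partitions are revoked during a rebalance. */
class IndexerRebalanceListener implements ConsumerRebalanceListener {

    private final KafkaConsumer<byte[], String> consumer;
    private final Map<TopicPartition, OffsetAndMetadata> currentOffsets;

    IndexerRebalanceListener(KafkaConsumer<byte[], String> consumer,
                             Map<TopicPartition, OffsetAndMetadata> currentOffsets) {
        this.consumer = consumer;
        this.currentOffsets = currentOffsets;
    }

    @Override
    public void onPartitionsRevoked(Collection<TopicPartition> partitions) {
        // synchronous commit: make sure the offsets are persisted before the partitions move away
        consumer.commitSync(currentOffsets);
    }

    @Override
    public void onPartitionsAssigned(Collection<TopicPartition> partitions) {
        // nothing to do: consumption resumes from the committed offsets
    }
}

The rebalance listener would be registered when subscribing, e.g. consumer.subscribe(topics, listener).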
The code above commits once every 1000 records, and it also commits when a single poll returns fewer than 1000 records. This keeps the bulk submissions to Elasticsearch efficient while still committing offsets regularly. The approach can still produce some duplicate consumption or loss: the process may crash after the bulk request has been submitted to Elasticsearch but before the offsets are committed to Kafka, or it may crash after committing the offsets but before the bulk request is sent; both cases are rare. A later post will cover how to guarantee that each log record is consumed exactly once, which requires a rollback mechanism and storing the offsets in an external cache.
A shutdown hook is added so that when the process is killed, the consumer thread can finish its current iteration before the program exits.
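The hook itself is not shown in this post; a minimal sketch, assuming the consume loop runs on its own thread and that consumer.wakeup() is what produces the WakeupException caught above, could look like this:

import org.apache.kafka.clients.consumer.KafkaConsumer;

public class ShutdownHookExample {

    public static void registerShutdownHook(KafkaConsumer<byte[], String> consumer, Thread consumerThread) {
        Runtime.getRuntime().addShutdownHook(new Thread(() -> {
            // wakeup() makes the blocking poll() throw WakeupException,
            // which the consume loop treats as the shutdown signal
            consumer.wakeup();
            try {
                // wait for the consumer thread to finish its final commit before the JVM exits
                consumerThread.join();
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        }));
    }
}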