Kafka Streams consuming Telegraf data: the "has invalid (negative) timestamp" exception explained

[2017-08-09 10:03:16,632] WARN stream-thread [MyKstream-ffaae771-ce86-47c2-9b4b-177504e016d5-StreamThread-1] Unexpected state transition from RUNNING to DEAD. (org.apache.kafka.streams.processor.internals.StreamThread)
Exception in thread "MyKstream-ffaae771-ce86-47c2-9b4b-177504e016d5-StreamThread-1" org.apache.kafka.streams.errors.StreamsException: Input record ConsumerRecord(topic = telegraf, partition = 1, offset = 0, CreateTime = -1, serialized key size = -1, serialized value size = 283, headers = RecordHeaders(headers = [], isReadOnly = false), key = null, value = {"fields":{"free":19354337280,"inodes_free":1860595,"inodes_total":1957888,"inodes_used":97293,"total":31439347712,"used":10464387072,"used_percent":35.093342520194476},"name":"disk","tags":{"device":"rootfs","fstype":"rootfs","host":"linux01","path":"/"},"timestamp":1502244180000}
) has invalid (negative) timestamp. Possibly because a pre-0.10 producer client was used to write this record to Kafka without embedding a timestamp, or because the input topic was created before upgrading the Kafka cluster to 0.10+. Use a different TimestampExtractor to process this data.
at org.apache.kafka.streams.processor.FailOnInvalidTimestamp.onInvalidTimestamp(FailOnInvalidTimestamp.java:63)
at org.apache.kafka.streams.processor.ExtractRecordMetadataTimestamp.extract(ExtractRecordMetadataTimestamp.java:61)
at org.apache.kafka.streams.processor.FailOnInvalidTimestamp.extract(FailOnInvalidTimestamp.java:46)
at org.apache.kafka.streams.processor.internals.RecordQueue.addRawRecords(RecordQueue.java:85)
at org.apache.kafka.streams.processor.internals.PartitionGroup.addRawRecords(PartitionGroup.java:117)
at org.apache.kafka.streams.processor.internals.StreamTask.addRecords(StreamTask.java:464)
at org.apache.kafka.streams.processor.internals.StreamThread.addRecordsToTasks(StreamThread.java:650)
at org.apache.kafka.streams.processor.internals.StreamThread.runLoop(StreamThread.java:556)
at org.apache.kafka.streams.processor.internals.StreamThread.run(StreamThread.java:527)


Today I used Kafka Streams as the consumer and Telegraf as the producer, and the exception above was thrown while Kafka Streams consumed data from the topic. At first I assumed Telegraf's timestamp format was wrong (Telegraf's default timestamp is a 10-digit, second-precision one), but the exception persisted after changing the timestamp format. Both the json and influx output formats triggered the error, while data produced with console-producer.sh did not; anything Telegraf wrote in json or influx format failed.

The actual cause: the records sent by Telegraf carry no Kafka record timestamp (hence CreateTime = -1 in the log above), and my Kafka is version 0.11.0.0, whose default FailOnInvalidTimestamp extractor (visible in the stack trace) rejects such records, so the exception above is thrown.
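The diagnosis can be confirmed with a plain consumer that prints the broker-side timestamp of each record. This is only a minimal sketch, assuming a local broker at localhost:9092 and the topic name telegraf; the class name is made up for illustration:

import java.util.Collections;
import java.util.Properties;

import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;

// Prints the timestamp type and timestamp the broker stored for each record;
// for the records produced by Telegraf here, the timestamp shows up as -1.
public class TimestampCheck {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        props.put("group.id", "timestamp-check");
        props.put("auto.offset.reset", "earliest");
        props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(Collections.singletonList("telegraf"));
            ConsumerRecords<String, String> records = consumer.poll(5000);
            for (ConsumerRecord<String, String> record : records) {
                System.out.println(record.timestampType() + " " + record.timestamp());
            }
        }
    }
}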

The fix is as follows: supply a custom TimestampExtractor.

import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.streams.processor.TimestampExtractor;

// Custom TimestampExtractor: ignore the (missing) record timestamp and use the
// current wall-clock time instead, so Kafka Streams no longer fails on CreateTime = -1.
public class MyEventTimeExtractor implements TimestampExtractor {

    @Override
    public long extract(ConsumerRecord<Object, Object> record, long previousTimestamp) {
        return System.currentTimeMillis();
    }
}
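If you would rather keep the event time that Telegraf already embeds in the payload (the "timestamp" field visible in the record above) instead of using wall-clock time, a variant along these lines is possible. The class name and the regex-based parsing are only an illustrative sketch, not part of the original fix; it is registered the same way as shown below:

import java.util.regex.Matcher;
import java.util.regex.Pattern;

import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.streams.processor.TimestampExtractor;

// Pulls the event time out of the Telegraf JSON payload instead of using wall-clock
// time. Assumes the value deserializes to a String containing a numeric "timestamp" field.
public class TelegrafPayloadTimestampExtractor implements TimestampExtractor {

    private static final Pattern TS = Pattern.compile("\"timestamp\"\\s*:\\s*(\\d+)");

    @Override
    public long extract(ConsumerRecord<Object, Object> record, long previousTimestamp) {
        Object value = record.value();
        if (value != null) {
            Matcher m = TS.matcher(value.toString());
            if (m.find()) {
                long ts = Long.parseLong(m.group(1));
                // Telegraf's default precision is seconds (10 digits); scale up to milliseconds.
                return ts < 10_000_000_000L ? ts * 1000L : ts;
            }
        }
        // No usable timestamp in the payload: fall back to wall-clock time.
        return System.currentTimeMillis();
    }
}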

Then register the chosen extractor in the StreamsConfig properties:

Properties properties = new Properties();
properties.put(StreamsConfig.TIMESTAMP_EXTRACTOR_CLASS_CONFIG, MyEventTimeExtractor.class.getName());

With that, the exception is resolved.
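For completeness, here is a minimal end-to-end sketch of how the extractor is wired into a Streams application on the 0.11 API (KStreamBuilder). The application id, broker address and the simple foreach are assumptions for illustration only:

import java.util.Properties;

import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.StreamsConfig;
import org.apache.kafka.streams.kstream.KStream;
import org.apache.kafka.streams.kstream.KStreamBuilder;

// Consumes the "telegraf" topic and prints each value, using the custom extractor
// instead of the default FailOnInvalidTimestamp.
public class MyKstream {
    public static void main(String[] args) {
        Properties properties = new Properties();
        properties.put(StreamsConfig.APPLICATION_ID_CONFIG, "MyKstream");
        properties.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        properties.put(StreamsConfig.KEY_SERDE_CLASS_CONFIG, Serdes.String().getClass().getName());
        properties.put(StreamsConfig.VALUE_SERDE_CLASS_CONFIG, Serdes.String().getClass().getName());
        // the fix: replace the default timestamp extractor
        properties.put(StreamsConfig.TIMESTAMP_EXTRACTOR_CLASS_CONFIG, MyEventTimeExtractor.class.getName());

        KStreamBuilder builder = new KStreamBuilder();
        KStream<String, String> stream = builder.stream("telegraf");
        stream.foreach((key, value) -> System.out.println(value));

        KafkaStreams streams = new KafkaStreams(builder, properties);
        streams.start();
    }
}

Note that Kafka Streams also ships org.apache.kafka.streams.processor.WallclockTimestampExtractor, which does essentially what MyEventTimeExtractor does above, so configuring that built-in class is an equally valid fix.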
