Exception in thread "MyKstream-ffaae771-ce86-47c2-9b4b-177504e016d5-StreamThread-1" org.apache.kafka.streams.errors.StreamsException: Input record ConsumerRecord(topic = telegraf, partition = 1, offset = 0, CreateTime = -1, serialized key size = -1, serialized value size = 283, headers = RecordHeaders(headers = [], isReadOnly = false), key = null, value = {"fields":{"free":19354337280,"inodes_free":1860595,"inodes_total":1957888,"inodes_used":97293,"total":31439347712,"used":10464387072,"used_percent":35.093342520194476},"name":"disk","tags":{"device":"rootfs","fstype":"rootfs","host":"linux01","path":"/"},"timestamp":1502244180000}
) has invalid (negative) timestamp. Possibly because a pre-0.10 producer client was used to write this record to Kafka without embedding a timestamp, or because the input topic was created before upgrading the Kafka cluster to 0.10+. Use a different TimestampExtractor to process this data.
at org.apache.kafka.streams.processor.FailOnInvalidTimestamp.onInvalidTimestamp(FailOnInvalidTimestamp.java:63)
at org.apache.kafka.streams.processor.ExtractRecordMetadataTimestamp.extract(ExtractRecordMetadataTimestamp.java:61)
at org.apache.kafka.streams.processor.FailOnInvalidTimestamp.extract(FailOnInvalidTimestamp.java:46)
at org.apache.kafka.streams.processor.internals.RecordQueue.addRawRecords(RecordQueue.java:85)
at org.apache.kafka.streams.processor.internals.PartitionGroup.addRawRecords(PartitionGroup.java:117)
at org.apache.kafka.streams.processor.internals.StreamTask.addRecords(StreamTask.java:464)
at org.apache.kafka.streams.processor.internals.StreamThread.addRecordsToTasks(StreamThread.java:650)
at org.apache.kafka.streams.processor.internals.StreamThread.runLoop(StreamThread.java:556)
at org.apache.kafka.streams.processor.internals.StreamThread.run(StreamThread.java:527)
Today I used Kafka Streams as the consumer and Telegraf as the producer, and the exception above was thrown while Kafka Streams was consuming the topic. I first suspected Telegraf's timestamp format was wrong (Telegraf's default timestamp is 10 digits, i.e. second precision), but the exception persisted after changing the timestamp format. Both the json and influx output data formats triggered the error, while data produced with console-producer.sh did not; anything Telegraf sent in json or influx format failed.
The actual cause: the records Telegraf sends carry no record-level timestamp (note CreateTime = -1 in the message above), and my Kafka is version 0.11.0.0, whose default FailOnInvalidTimestamp extractor rejects such records, producing the exception above.
The solution is as follows: implement a custom TimestampExtractor.
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.streams.processor.TimestampExtractor;

public class MyEventTimeExtractor implements TimestampExtractor {

    @Override
    public long extract(ConsumerRecord<Object, Object> record, long previousTimestamp) {
        // Ignore the invalid (-1) record timestamp and stamp the record with wall-clock time
        return System.currentTimeMillis();
    }
}
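Returning wall-clock time works, but it discards the event time that Telegraf actually embeds in the JSON payload (the "timestamp" field visible in the record above). As a sketch of an alternative, the parsing below could recover that embedded timestamp; TelegrafTimestamps is a hypothetical helper of my own, it assumes the value arrives as a JSON String, and the regex-based field lookup is illustrative rather than production-grade JSON handling:

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

/** Illustrative helper for pulling the embedded "timestamp" field out of a Telegraf JSON payload. */
public class TelegrafTimestamps {

    // Matches the "timestamp" field Telegraf writes into its json output format
    private static final Pattern TS_FIELD = Pattern.compile("\"timestamp\"\\s*:\\s*(\\d+)");

    /** Returns the embedded timestamp in milliseconds, or fallback when the field is absent. */
    public static long fromJson(String json, long fallback) {
        if (json != null) {
            Matcher m = TS_FIELD.matcher(json);
            if (m.find()) {
                long ts = Long.parseLong(m.group(1));
                // Telegraf's default precision is seconds (10 digits); normalize to milliseconds
                return ts < 10_000_000_000L ? ts * 1000 : ts;
            }
        }
        return fallback;
    }
}
```

An extractor's extract() method could then return TelegrafTimestamps.fromJson((String) record.value(), System.currentTimeMillis()), keeping wall-clock time only as the fallback.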
Then register the extractor in the Streams configuration:

import java.util.Properties;
import org.apache.kafka.streams.StreamsConfig;

Properties properties = new Properties();
properties.put(StreamsConfig.TIMESTAMP_EXTRACTOR_CLASS_CONFIG, MyEventTimeExtractor.class.getName());
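As an aside, Kafka Streams already ships an extractor with exactly this wall-clock behavior, org.apache.kafka.streams.processor.WallclockTimestampExtractor, so the custom class can be replaced by a one-line configuration change (a config fragment, not a complete program):

```java
import org.apache.kafka.streams.StreamsConfig;
import org.apache.kafka.streams.processor.WallclockTimestampExtractor;

// Drop-in replacement for the hand-written MyEventTimeExtractor above
properties.put(StreamsConfig.TIMESTAMP_EXTRACTOR_CLASS_CONFIG,
        WallclockTimestampExtractor.class.getName());
```

If you only need to tolerate the occasional bad record rather than replace every timestamp, LogAndSkipOnInvalidTimestamp (also shipped with Kafka Streams in this version, as far as I know) drops invalid records instead of failing the thread.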
With that, the exception is resolved.