Data loss is a major concern when Spark Streaming reads from Kafka. With the direct approach, Streaming offers a checkpoint mechanism that maintains the Kafka offsets itself and persists them to HDFS.
Approach:
import kafka.serializer.StringDecoder
import org.apache.spark.{SparkConf, SparkContext}
import org.apache.spark.streaming.{Seconds, StreamingContext}
import org.apache.spark.streaming.kafka.KafkaUtils

def main(args: Array[String]) {
  // Factory that builds a fresh StreamingContext; only invoked when no checkpoint exists yet.
  def func(): StreamingContext = {
    val conf = new SparkConf().setAppName("streamingKafka").setMaster("local[2]")
    val sc = SparkContext.getOrCreate(conf)
    val ssc = new StreamingContext(sc, Seconds(5))
    ssc.checkpoint("hdfs://imedia-dev-web3:9000/BJJStreaming/checkpoint/test")

    // Not used by the direct stream below, which reads from the brokers instead of ZooKeeper.
    val kafkaParams = Map(
      "zookeeper.connect" -> "192.168.225.15:2181,192.168.225.16:2181,192.168.225.17:2181",
      "group.id" -> "spark-streaming-test",
      "zookeeper.connection.timeout.ms" -> "4000")

    val topic = "hello"
    val topics = Set(topic)
    val brokers = "192.168.225.15:9092,192.168.225.16:9092,192.168.225.17:9092"
    val kafkaParam1s = Map[String, String](
      "metadata.broker.list" -> brokers,
      "serializer.class" -> "kafka.serializer.StringEncoder")

    // Create a direct stream
    val kafkaStream = KafkaUtils.createDirectStream[String, String, StringDecoder, StringDecoder](ssc, kafkaParam1s, topics)
    kafkaStream.print()
    ssc
  }

  // Recover the StreamingContext from the checkpoint if one exists, otherwise build a new one with func.
  val ssc = StreamingContext.getOrCreate("hdfs://imedia-dev-web3:9000/BJJStreaming/checkpoint/test", func)
  ssc.start()
  ssc.awaitTermination()
}
But this approach has a problem: whenever you upgrade the application, the old checkpoint data can no longer be used, and trying to reuse it leads to all kinds of errors such as "task not serializable".
This makes upgrades quite painful, although there is a workaround: manage the Kafka offsets yourself instead of relying on the checkpoint.
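Below is a minimal sketch of that workaround, against the same spark-streaming-kafka 0.8 API used above: on startup, read the last saved offsets from your own store and build the direct stream from them; after each batch is processed, persist that batch's offsets. loadOffsets and saveOffsets are hypothetical placeholders for whatever storage you choose (ZooKeeper, HBase, a database, ...).

import kafka.common.TopicAndPartition
import kafka.message.MessageAndMetadata
import kafka.serializer.StringDecoder
import org.apache.spark.streaming.StreamingContext
import org.apache.spark.streaming.kafka.{HasOffsetRanges, KafkaUtils, OffsetRange}

// Hypothetical helpers: replace with your own offset store (ZooKeeper, HBase, a database, ...).
def loadOffsets(): Map[TopicAndPartition, Long] =
  Map(TopicAndPartition("hello", 0) -> 0L)
def saveOffsets(ranges: Array[OffsetRange]): Unit =
  ranges.foreach(r => println(s"${r.topic}-${r.partition}: ${r.untilOffset}"))

def createStream(ssc: StreamingContext, kafkaParams: Map[String, String]) = {
  // Start reading exactly where the previous run (or previous version of the jar) stopped.
  val fromOffsets = loadOffsets()
  val messageHandler = (mmd: MessageAndMetadata[String, String]) => (mmd.key, mmd.message)
  val stream = KafkaUtils.createDirectStream[String, String, StringDecoder, StringDecoder, (String, String)](
    ssc, kafkaParams, fromOffsets, messageHandler)

  stream.foreachRDD { rdd =>
    // Capture the offset ranges before any transformation hides the underlying KafkaRDD.
    val offsetRanges = rdd.asInstanceOf[HasOffsetRanges].offsetRanges
    // ... process rdd here ...
    saveOffsets(offsetRanges) // persist only after the batch has been processed successfully
  }
  stream
}

Because the offsets live outside the checkpoint in this pattern, an upgraded jar can simply resume from whatever saveOffsets last recorded.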