Kafka (13): Consuming Kafka data in Spark Streaming with the Receiver API and the Direct API

I. Overview

Spark Streaming can consume Kafka data in two ways. For the reference implementation, see the Spark documentation: http://spark.apache.org/docs/2.1.0/streaming-kafka-0-8-integration.html

II. Environment

1. Spark 2.1.0

2. Kafka 0.9.0.0

3. pom.xml

  <properties>
    <scala.version>2.11.8</scala.version>
    <kafka.version>0.9.0.0</kafka.version>
    <!--<kafka.version>0.10.2.1</kafka.version>-->
    <hbase.version>1.2.0-cdh5.7.0</hbase.version>
    <hadoop.version>2.6.0-cdh5.7.0</hadoop.version>
  </properties>

  <dependencies>
    <dependency>
      <groupId>org.apache.spark</groupId>
      <artifactId>spark-streaming_2.11</artifactId>
      <version>2.1.0</version>
    </dependency>

    <dependency>
      <groupId>org.apache.spark</groupId>
      <artifactId>spark-streaming-kafka-0-8_2.11</artifactId>
      <version>2.1.0</version>
    </dependency>

    <!-- https://mvnrepository.com/artifact/org.apache.zookeeper/zookeeper -->
    <dependency>
      <groupId>org.apache.zookeeper</groupId>
      <artifactId>zookeeper</artifactId>
      <version>3.4.5-cdh5.7.0</version>
    </dependency>

    <dependency>
      <groupId>org.scala-lang</groupId>
      <artifactId>scala-library</artifactId>
      <version>${scala.version}</version>
    </dependency>

  </dependencies>

III. Receiver-based approach

1. How it works

Spark Streaming launches a receiver on an executor; the receiver keeps pulling messages from Kafka (via the ZooKeeper-based consumer configured below) and stores them in Spark as blocks until each batch interval is processed.
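If received data has to survive a receiver or executor failure, the write-ahead log can be enabled in addition to the replicated storage level. A minimal sketch, assuming a placeholder checkpoint path (/tmp/streaming-checkpoint) that would normally point to HDFS:

import org.apache.spark.SparkConf
import org.apache.spark.streaming.{Seconds, StreamingContext}

val conf = new SparkConf()
  .setMaster("local[2]")
  .setAppName("ReceiverWithWAL")
  // write received blocks to a write-ahead log before they are acknowledged,
  // so they can be replayed if the receiver or its executor fails
  .set("spark.streaming.receiver.writeAheadLog.enable", "true")

val ssc = new StreamingContext(conf, Seconds(10))
// the write-ahead log lives under the checkpoint directory, so one must be set
ssc.checkpoint("/tmp/streaming-checkpoint")

With the log enabled, the Spark docs recommend a non-replicated storage level such as StorageLevel.MEMORY_AND_DISK_SER, since the log already provides redundancy.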

2. Code

package com.base.spark._191207SparkStreaming

import kafka.serializer.StringDecoder
import org.apache.spark.storage.StorageLevel
import org.apache.spark.streaming.kafka.KafkaUtils
import org.apache.spark.streaming.{Seconds, StreamingContext}
import org.apache.spark.{SparkConf, SparkContext}

/**
  * Created by Administrator on 2018/8/5.
  */
object UseReceiveKafkaStreaming08 extends App{
  val conf = new SparkConf()
    .setMaster("local[2]")
    .setAppName("UseReceiveKafkaStreaming")
    .set("spark.streaming.blockInterval","1s") //1s内的数据生成一个文件块,保存到内存中
  //spark.streaming.blockInterval:当使用接收器接收数据,在一定时间内,生成一个block.如果这个过小,则产生多个小文件;如果过大,则会导致数据丢失.
  val sc = SparkContext.getOrCreate(conf)
  //  val sc = SparkUtil.createSparkContext(true,"StreamingWC")


  val ssc = new StreamingContext(sc,Seconds(10))

  // data source: basic configuration
  val zkQuorum="hadoop01:2181/kafka_09_streaming" // ZooKeeper quorum plus the chroot of the Kafka cluster
  val topics=Map[String,Int]("hello_topic"-> 1)   // topic name -> number of consumer threads for it
  val groupId="sparkstreaming"                    // consumer group id

  //=============================== Creating the DStream, option 1: API 1 =============================
  /**
    * ssc: StreamingContext,
      zkQuorum: String,
      groupId: String,
      topics: Map[String, Int],
      storageLevel: StorageLevel = StorageLevel.MEMORY_AND_DISK_SER_2
    */
  //  val kafkaDStream_api1=KafkaUtils.createStream(ssc,zkQuorum,groupId,topics,StorageLevel.MEMORY_AND_DISK_SER_2)
//      .map(word=>(word._2,1))
//      .reduceByKey(_ + _)



  //=============================== Creating the DStream, option 2: API 2 =============================
  /**
    *   def createStream[K: ClassTag, V: ClassTag, U <: Decoder[_]: ClassTag, T <: Decoder[_]: ClassTag](
      ssc: StreamingContext,
      kafkaParams: Map[String, String],
      topics: Map[String, Int],
      storageLevel: StorageLevel
    ): ReceiverInputDStream[(K, V)] = {
    val walEnabled = WriteAheadLogUtils.enableReceiverLog(ssc.conf)
    new KafkaInputDStream[K, V, U, T](ssc, kafkaParams, topics, walEnabled, storageLevel)
  }
    */
  val kafkaParams: Map[String, String] = Map[String,String](
    "zookeeper.connect" -> zkQuorum,
    "group.id" -> groupId,
    "zookeeper.connection.timeout.ms" -> "10000",
    // where to start consuming: "smallest" reads from the earliest offsets, "largest" from the latest
    "auto.offset.reset" -> "smallest")
  // API 2:
  val kafkaDStream = KafkaUtils.createStream[String,String,
    StringDecoder,StringDecoder](ssc,kafkaParams,topics,StorageLevel.MEMORY_AND_DISK)
    .flatMap(line => line._2.split(" "))
    .map(word => (word,1))
    .reduceByKey(_ + _)

  kafkaDStream.print()
  ssc.start()
  ssc.awaitTermination()
  //=============================== API 2 above was verified (2018-08-06) =============================

}
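A single receiver caps throughput at what one consumer can pull, so a common way to scale the receiver-based approach is to create several input streams and union them. A sketch, reusing the ssc, kafkaParams and topics defined in the object above (the receiver count of 3 is only an example; the master must offer more cores than there are receivers, so local[2] would not be enough):

// hypothetical number of receivers; each one permanently occupies an executor core
val numReceivers = 3
val kafkaStreams = (1 to numReceivers).map { _ =>
  KafkaUtils.createStream[String, String, StringDecoder, StringDecoder](
    ssc, kafkaParams, topics, StorageLevel.MEMORY_AND_DISK)
}
// union the partial streams into one DStream before doing the word count
val unioned = ssc.union(kafkaStreams)
  .flatMap(pair => pair._2.split(" "))
  .map(word => (word, 1))
  .reduceByKey(_ + _)
unioned.print()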

 

IV. Direct approach

1. How it works

The direct API uses no receiver and no ZooKeeper: the driver computes the offset range to read for each batch, and the executors fetch exactly those ranges straight from the Kafka brokers.

2. Two variants

(1) Variant 1: pass the topics directly

val kafkaDirectDStream1: InputDStream[(String, String)] = KafkaUtils.createDirectStream[String,String,StringDecoder,StringDecoder](ssc,kafkaParams,topics)

(2) Variant 2: pass explicit starting offsets

  val kafkaDirectDStream2: InputDStream[(Long,String)] =  KafkaUtils.createDirectStream[String,String,
    StringDecoder,StringDecoder, (Long,String)](ssc,kafkaParams,fromOffsets,messageHandler)
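Because the direct approach stores no offsets in ZooKeeper, the application has to record the consumed offsets itself if it should resume where it left off after a restart. A minimal sketch of reading each batch's offset ranges, following the pattern from the Spark integration guide and assuming the kafkaDirectDStream1 from variant 1; where the offsets are persisted (ZooKeeper, a database, ...) is left open:

import org.apache.spark.streaming.kafka.{HasOffsetRanges, OffsetRange}

// driver-side holder for the offset ranges of the current batch
var offsetRanges = Array.empty[OffsetRange]

kafkaDirectDStream1.transform { rdd =>
  // every RDD produced by createDirectStream implements HasOffsetRanges
  offsetRanges = rdd.asInstanceOf[HasOffsetRanges].offsetRanges
  rdd
}.foreachRDD { _ =>
  offsetRanges.foreach { o =>
    // o.untilOffset is what would be saved and later fed back in as fromOffsets
    println(s"${o.topic} partition ${o.partition}: ${o.fromOffset} -> ${o.untilOffset}")
  }
}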

3. Scala source

package com.base.spark._191207SparkStreaming

import kafka.common.TopicAndPartition
import kafka.message.MessageAndMetadata
import kafka.serializer.StringDecoder
import org.apache.spark.streaming.dstream.{DStream, InputDStream}
import org.apache.spark.streaming.kafka.KafkaUtils
import org.apache.spark.streaming.{Seconds, StreamingContext}
import org.apache.spark.{SparkConf, SparkContext}

/**
  * Created by Administrator on 2018/8/9.
  */
object DirectKafkaStreaming08 extends App{
  // 1. Create the SparkConf
  val sparkConf: SparkConf = new SparkConf()
    .setAppName("DirectKafkaStreaming")
    .setMaster("local[2]")
  // 2. Create the SparkContext
  val sc = new SparkContext(sparkConf)
  sc.setLogLevel("WARN")
  // 3. Create the StreamingContext
  val ssc = new StreamingContext(sc,Seconds(5))

  // 4. Configure the Kafka parameters
//  val kafkaParams=Map("metadata.broker.list"->"hadoop:9092,hadoop:9093")
  val kafkaParams: Map[String, String] = Map[String,String](
    "metadata.broker.list" -> "hadoop01:9092",   // brokers are contacted directly; no ZooKeeper address needed
    "auto.offset.reset" -> "smallest"            // start from the earliest available offsets
  )

  // Direct variant 1: pass the topics directly ==================================================
//  val topics=Set("hello_topic")
//  val kafkaDirectDStream1 = KafkaUtils.createDirectStream[String,String,StringDecoder,StringDecoder](ssc,kafkaParams,topics)
//  val resultDStream1=kafkaDirectDStream1.map(line=>(line._2)).flatMap(_.split(" "))
//    .map(word=>(word,1))
//    .reduceByKey(_+_)
//  resultDStream1.print()

  /**
    * Output:
    -------------------------------------------
    Time: 1575796560000 ms
    -------------------------------------------
    (fsdsdf,1)
    (we,1)
    (,1)
    (sdfs,1)
    (r,1)
    (fsdfwe,1)
    (fsd,42)
    (sdf,43)
    (dfsdf,43)
    (ds,43)
    */

  // Direct variant 2: pass explicit starting offsets ==================================================
  /* Signature (for reference):
 def createDirectStream[
   K: ClassTag,
   V: ClassTag,
   KD <: Decoder[K]: ClassTag,
   VD <: Decoder[V]: ClassTag,
   R: ClassTag] (
     ssc: StreamingContext,
     kafkaParams: Map[String, String],
     fromOffsets: Map[TopicAndPartition, Long],
     messageHandler: MessageAndMetadata[K, V] => R
 ): InputDStream[R] = {
   val cleanedHandler = ssc.sc.clean(messageHandler)
   new DirectKafkaInputDStream[K, V, KD, VD, R](
     ssc, kafkaParams, fromOffsets, cleanedHandler)
 }
  */
  // fromOffsets needs one entry per partition that should be consumed:
  // it specifies, for each topic and partition, the offset at which consumption starts
  // [a second topic, hello_topic1 with 5 partitions, was also created for testing]
  val fromOffsets = Map[TopicAndPartition,Long](
//    TopicAndPartition("hello_topic1",0) -> 0l,  //从0分区,从0位置开始读取
//    TopicAndPartition("hello_topic1",1) -> 0l,  //从1分区,从0位置开始读取
//    TopicAndPartition("hello_topic1",2) -> 0l,  //从2分区,从0位置开始读取
//    TopicAndPartition("hello_topic1",3) -> 0l,  //从3分区,从0位置开始读取
//    TopicAndPartition("hello_topic1",4) -> 0l   //从4分区,从0位置开始读取
    TopicAndPartition("hello_topic",0) -> 0l

  )
  // how each received message is mapped to a record: here (offset, message)
  val messageHandler: (MessageAndMetadata[String, String]) => (Long,String) = (mmd: MessageAndMetadata[String, String]) =>
    (mmd.offset,mmd.message)

   val kafkaDirectDStream2: InputDStream[(Long,String)] =  KafkaUtils.createDirectStream[String,String,
    StringDecoder,StringDecoder, (Long,String)](ssc,kafkaParams,fromOffsets,messageHandler)

  val resultDStream2: DStream[((Long, String), Int)] = kafkaDirectDStream2.flatMap(tuple =>{
    // tuple._1 is the offset; tuple._2 is the message string
    tuple._2.split(" ").map(word =>(tuple._1,word))
  })
    .map(t2 => {
      ((t2._1,t2._2),1)
    })
    .reduceByKey(_ + _)
    resultDStream2.print()
 /**
    * Output:
    -------------------------------------------
    Time: 1575797065000 ms
    -------------------------------------------
    ((18,ds),1)
    ((17,fsd),1)
    ((19,fsd),1)
    ((38,dfsdf),1)
    ((18,sdf),1)
    ((0,fsd),1)
    ((35,sdf),1)
    ((36,ds),1)
    ((32,ds),1)
    ((40,dfsdf),1)
    */
    //===== End of the Direct approach ================================================================



  ssc.start()
  ssc.awaitTermination()


}

(Tested successfully.)
