Spark Streaming Integration with Flume, Explained
1. Flume-style Push-based Approach
1) Push approach: the Flume agent's sink must be an Avro sink, and the Spark application acts as a receiver that accepts the data Flume pushes to it.
2) Because this is the push approach, the Spark application has to be running before Flume is started; otherwise the Avro sink has no receiver to connect to.
3) Flume configuration: push_flume.properties
netcat_memmory_avro.sources = netcat_source
netcat_memmory_avro.sinks = avro_sink
netcat_memmory_avro.channels = memory_channel
netcat_memmory_avro.sources.netcat_source.type = netcat
# 192.168.126.31 is the server's IP address
netcat_memmory_avro.sources.netcat_source.bind = 192.168.126.31
netcat_memmory_avro.sources.netcat_source.port = 6666
netcat_memmory_avro.sinks.avro_sink.type = avro
# 192.168.1.230 is the local Windows IP, so the job can be debugged locally
netcat_memmory_avro.sinks.avro_sink.hostname = 192.168.1.230
netcat_memmory_avro.sinks.avro_sink.port = 30333
netcat_memmory_avro.channels.memory_channel.type = memory
netcat_memmory_avro.channels.memory_channel.capacity = 1000
netcat_memmory_avro.channels.memory_channel.transactionCapacity = 100
netcat_memmory_avro.sources.netcat_source.channels = memory_channel
netcat_memmory_avro.sinks.avro_sink.channel = memory_channel
4) Code. Add the spark-streaming-flume dependency (the _2.11 suffix must match the Scala version of your Spark build):
<dependency>
<groupId>org.apache.spark</groupId>
<artifactId>spark-streaming-flume_2.11</artifactId>
<version>2.1.1</version>
</dependency>
import org.apache.spark.SparkConf
import org.apache.spark.streaming.{Seconds, StreamingContext}
import org.apache.spark.streaming.flume.FlumeUtils

def main(args: Array[String]): Unit = {
  if (args.length != 2) {
    println("Usage: SparkStreamingByPush <host> <port>")
    System.exit(0)
  }
  val Array(host, port) = args
  val conf = new SparkConf().setMaster("local[2]").setAppName("SparkStreamingByPush")
  val ssc = new StreamingContext(conf, Seconds(5))
  // Push-based receiver: listens on host:port for events pushed by the Avro sink
  val dStream = FlumeUtils.createStream(ssc, host, port.toInt)
  // Each SparkFlumeEvent wraps a Flume event; the payload lives in event.getBody
  val lines = dStream.map(x => new String(x.event.getBody.array()).trim)
  val words = lines.flatMap(_.split(" "))
  val pairs = words.map(word => (word, 1))
  val wordCounts = pairs.reduceByKey(_ + _)
  wordCounts.print()
  ssc.start()
  ssc.awaitTermination()
}
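For reference, FlumeUtils.createStream also has an overload that takes an explicit StorageLevel; the three-argument form above defaults to MEMORY_AND_DISK_SER_2 (serialized, replicated). A minimal variant, reusing ssc, host, and port from the code above:

import org.apache.spark.storage.StorageLevel
import org.apache.spark.streaming.flume.FlumeUtils

// Same receiver as above, with the storage level spelled out;
// a cheaper level such as MEMORY_ONLY is fine for local debugging.
val dStream = FlumeUtils.createStream(ssc, host, port.toInt, StorageLevel.MEMORY_AND_DISK_SER_2)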
5) Local test
- The host argument passed to the program is the local Windows IP address
- Change the sink hostname in the Flume config to the local Windows IP address
- Run the Spark application first
- Then start Flume: flume-ng agent --name netcat_memmory_avro --conf $FLUME_HOME/conf --conf-file $FLUME_HOME/conf/push_flume.properties -Dflume.root.logger=INFO,console
- Send data to Flume: telnet 192.168.126.31 6666
6) Cluster test
spark-submit --class com.zhm.sparkstreaming.flume.SparkStreamingByPush \
--master local[2] \
--packages org.apache.spark:spark-streaming-flume_2.11:2.1.1 \
/testJar/sparkstreaming-1.0.jar \
192.168.1.230 30333
2. Pull-based Approach Using a Custom Sink
1) The pull approach uses a custom Flume sink, SparkSink: Flume pushes data into the SparkSink's buffer, and Spark Streaming uses a receiver to pull the data out of that buffer.
2) The pull approach gives stronger reliability and fault-tolerance guarantees than the push approach, because the Flume transaction commits only after Spark Streaming has received and replicated the data (see the checkpointing sketch below).
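To get the most out of that guarantee on the driver side, add metadata checkpointing so a restarted driver recovers its context instead of starting from scratch. A minimal sketch, not from the original post, assuming a hypothetical checkpoint directory hdfs:///checkpoint/flume and the same pull pipeline built in section 4) below:

import org.apache.spark.SparkConf
import org.apache.spark.streaming.{Seconds, StreamingContext}
import org.apache.spark.streaming.flume.FlumeUtils

val checkpointDir = "hdfs:///checkpoint/flume"  // hypothetical path, adjust to your cluster

def createContext(): StreamingContext = {
  val conf = new SparkConf().setMaster("local[2]").setAppName("SparkStreamingByPull")
  val ssc = new StreamingContext(conf, Seconds(5))
  ssc.checkpoint(checkpointDir)  // persist DStream metadata for driver recovery
  FlumeUtils.createPollingStream(ssc, "192.168.126.31", 30331)
    .map(e => new String(e.event.getBody.array()).trim)
    .print()
  ssc
}

// Rebuild from the checkpoint after a restart, or create a fresh context
val ssc = StreamingContext.getOrCreate(checkpointDir, createContext _)
ssc.start()
ssc.awaitTermination()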
3) Flume configuration: pull_flume.properties
**Pitfall 1:** this approach uses a custom sink, so the required jars have to be copied into $FLUME_HOME/lib. Jars needed by SparkSink:
spark-streaming-flume-sink_2.11-2.1.1.jar
scala-library-2.11.7.jar
commons-lang3-3.5.jar
**Pitfall 2:** $FLUME_HOME/lib already ships a Scala library; after copying in the new scala-library jar, delete the old Scala jar to avoid version conflicts.
**Pitfall 3:** SparkSink is a custom sink that runs inside the Flume agent, so even when debugging locally it cannot sink to the local Windows IP; its hostname must be an IP on the cluster machine that runs Flume.
netcat_memmory_spark.sources = netcat_source
netcat_memmory_spark.sinks = spark_sink
netcat_memmory_spark.channels = memory_channel
netcat_memmory_spark.sources.netcat_source.type = netcat
netcat_memmory_spark.sources.netcat_source.bind = 192.168.126.31
netcat_memmory_spark.sources.netcat_source.port = 666
netcat_memmory_spark.sinks.spark_sink.type = org.apache.spark.streaming.flume.sink.SparkSink
# per pitfall 3: the SparkSink binds on the Flume machine, so use the server's IP
netcat_memmory_spark.sinks.spark_sink.hostname = 192.168.126.31
netcat_memmory_spark.sinks.spark_sink.port = 30331
netcat_memmory_spark.channels.memory_channel.type = memory
netcat_memmory_spark.channels.memory_channel.capacity = 1000
netcat_memmory_spark.channels.memory_channel.transactionCapacity = 100
netcat_memmory_spark.sources.netcat_source.channels = memory_channel
netcat_memmory_spark.sinks.spark_sink.channel = memory_channel
Start the agent: flume-ng agent --name netcat_memmory_spark --conf $FLUME_HOME/conf --conf-file $FLUME_HOME/conf/pull_flume.properties -Dflume.root.logger=INFO,console
4) Code. The dependencies below match the jars from pitfall 1; in addition, the application still needs the spark-streaming-flume_2.11 dependency from section 1, since FlumeUtils.createPollingStream lives in that artifact.
<dependency>
<groupId>org.apache.spark</groupId>
<artifactId>spark-streaming-flume-sink_2.11</artifactId>
<version>${sparkstreaming_version}</version>
</dependency>
<dependency>
<groupId>org.scala-lang</groupId>
<artifactId>scala-library</artifactId>
<version>${scala_version}</version>
</dependency>
<dependency>
<groupId>org.apache.commons</groupId>
<artifactId>commons-lang3</artifactId>
<version>3.5</version>
</dependency>
import org.apache.spark.SparkConf
import org.apache.spark.streaming.{Seconds, StreamingContext}
import org.apache.spark.streaming.flume.FlumeUtils

def main(args: Array[String]): Unit = {
  if (args.length != 2) {
    println("Usage: SparkStreamingByPull <host> <port>")
    System.exit(0)
  }
  val Array(host, port) = args
  val conf = new SparkConf().setMaster("local[2]").setAppName("SparkStreamingByPull")
  val ssc = new StreamingContext(conf, Seconds(5))
  // Pull-based receiver: connects to the SparkSink at host:port and polls it for events
  val dStream = FlumeUtils.createPollingStream(ssc, host, port.toInt)
  val lines = dStream.map(x => new String(x.event.getBody.array()).trim)
  val wordCounts = lines.flatMap(_.split(" ")).map((_, 1)).reduceByKey(_ + _)
  wordCounts.print()
  ssc.start()
  ssc.awaitTermination()
}
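When a single agent is not enough, createPollingStream can poll several SparkSinks with one receiver by taking a list of addresses instead of a single host/port pair. A sketch, where the second agent's address (192.168.126.32) is hypothetical:

import java.net.InetSocketAddress
import org.apache.spark.storage.StorageLevel
import org.apache.spark.streaming.flume.FlumeUtils

// One receiver pulling events from two SparkSinks
val addresses = Seq(
  new InetSocketAddress("192.168.126.31", 30331),
  new InetSocketAddress("192.168.126.32", 30331))  // hypothetical second agent
val dStream = FlumeUtils.createPollingStream(ssc, addresses, StorageLevel.MEMORY_AND_DISK_SER_2)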
5) Local test
- The host argument passed to the program is the SparkSink address from the Flume config (per pitfall 3, a cluster IP rather than the Windows IP)
- The sink hostname in the Flume config likewise stays on the cluster machine
- Start Flume first: flume-ng agent --name netcat_memmory_spark --conf $FLUME_HOME/conf --conf-file $FLUME_HOME/conf/pull_flume.properties -Dflume.root.logger=INFO,console
- Then run the Spark application
- Send data to Flume: telnet 192.168.126.31 666
6) Cluster test
spark-submit --class com.zhm.sparkstreaming.flume.SparkStreamingByPull \
--master local[2] \
--packages org.apache.spark:spark-streaming-flume_2.11:2.1.1 \
/testJar/sparkstreaming-1.0.jar \
192.168.126.31 30331