Spark Streaming Integration with Flume in Practice
Hands-on 1: Flume-style Push-based Approach
Flume agent configuration: flume-push-streaming.conf
Create it under $FLUME_HOME/conf:
simple-agent.sources = netcat-source
simple-agent.sinks = avro-sink
simple-agent.channels = memory-channel
simple-agent.sources.netcat-source.type = netcat
simple-agent.sources.netcat-source.bind = hadoop000
simple-agent.sources.netcat-source.port = 44444
simple-agent.sinks.avro-sink.type = avro
simple-agent.sinks.avro-sink.hostname = 192.168.15.130
simple-agent.sinks.avro-sink.port = 41414
simple-agent.channels.memory-channel.type = memory
simple-agent.sources.netcat-source.channels = memory-channel
simple-agent.sinks.avro-sink.channel = memory-channel
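For reference, a minimal Scala sketch of what the com.imooc.spark.FlumePushWordCount application submitted later in this section could look like: it registers a push-based receiver with FlumeUtils.createStream on the host/port that the avro sink pushes to. The argument handling and word-count logic here are assumptions based on this section; the actual course code may differ.

package com.imooc.spark

import org.apache.spark.SparkConf
import org.apache.spark.streaming.flume.FlumeUtils
import org.apache.spark.streaming.{Seconds, StreamingContext}

// Push-based approach: Spark Streaming listens on <hostname>:<port> and
// Flume's avro sink pushes events to it.
object FlumePushWordCount {

  def main(args: Array[String]): Unit = {
    if (args.length != 2) {
      System.err.println("Usage: FlumePushWordCount <hostname> <port>")
      System.exit(1)
    }
    val Array(hostname, port) = args

    // Add .setMaster("local[2]") only when running directly from the IDE;
    // with spark-submit the master is passed on the command line.
    val sparkConf = new SparkConf().setAppName("FlumePushWordCount")
    val ssc = new StreamingContext(sparkConf, Seconds(5))

    // The receiver binds to hostname:port; the Flume avro sink must point here
    val flumeStream = FlumeUtils.createStream(ssc, hostname, port.toInt)

    flumeStream.map(e => new String(e.event.getBody.array()).trim)
      .flatMap(_.split(" "))
      .map((_, 1))
      .reduceByKey(_ + _)
      .print()

    ssc.start()
    ssc.awaitTermination()
  }
}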
Start the Flume agent:
flume-ng agent \
--name simple-agent \
--conf $FLUME_HOME/conf \
--conf-file $FLUME_HOME/conf/flume-push-streaming.conf \
-Dflume.root.logger=INFO,console
hadoop000 is the server's address; the Spark Streaming code is tested in local mode on 192.168.15.130.
Local test summary:
1) Start the Spark Streaming job
2) Start the Flume agent
3) Send data via telnet and watch the output in the IDEA console
Push-based integration: joint testing in the server environment
Package the project: [hadoop@hadoop000 sparktrain]$ mvn clean package -DskipTests
spark-submit \
--class com.imooc.spark.FlumePushWordCount \
--master local[2] \
--packages org.apache.spark:spark-streaming-flume_2.11:2.2.0 \
/home/hadoop/lib/sparktrain-1.0.jar \
hadoop000 41414
Hands-on 2: Pull-based Approach using a Custom Sink
Note: start the Flume agent first, then start the Spark Streaming application.
Flume agent configuration: flume-pull-streaming.conf
Create it under $FLUME_HOME/conf:
simple-agent.sources = netcat-source
simple-agent.sinks = spark-sink
simple-agent.channels = memory-channel
simple-agent.sources.netcat-source.type = netcat
simple-agent.sources.netcat-source.bind = hadoop000
simple-agent.sources.netcat-source.port = 44444
simple-agent.sinks.spark-sink.type = org.apache.spark.streaming.flume.sink.SparkSink
simple-agent.sinks.spark-sink.hostname = hadoop000
simple-agent.sinks.spark-sink.port = 41414
simple-agent.channels.memory-channel.type = memory
simple-agent.sources.netcat-source.channels = memory-channel
simple-agent.sinks.spark-sink.channel = memory-channel
Start the Flume agent:
flume-ng agent \
--name simple-agent \
--conf $FLUME_HOME/conf \
--conf-file $FLUME_HOME/conf/flume-pull-streaming.conf \
-Dflume.root.logger=INFO,console
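Likewise, a minimal Scala sketch of what com.imooc.spark.FlumePullWordCount could look like: instead of a push receiver it polls the Flume SparkSink via FlumeUtils.createPollingStream. The argument handling and word-count logic are again assumptions; the actual course code may differ.

package com.imooc.spark

import org.apache.spark.SparkConf
import org.apache.spark.streaming.flume.FlumeUtils
import org.apache.spark.streaming.{Seconds, StreamingContext}

// Pull-based approach: Flume buffers events in its SparkSink and Spark
// Streaming pulls them with a reliable polling receiver.
object FlumePullWordCount {

  def main(args: Array[String]): Unit = {
    if (args.length != 2) {
      System.err.println("Usage: FlumePullWordCount <hostname> <port>")
      System.exit(1)
    }
    val Array(hostname, port) = args

    val sparkConf = new SparkConf().setAppName("FlumePullWordCount")
    val ssc = new StreamingContext(sparkConf, Seconds(5))

    // Poll the Flume SparkSink at hostname:port (the agent must already be running)
    val flumeStream = FlumeUtils.createPollingStream(ssc, hostname, port.toInt)

    flumeStream.map(e => new String(e.event.getBody.array()).trim)
      .flatMap(_.split(" "))
      .map((_, 1))
      .reduceByKey(_ + _)
      .print()

    ssc.start()
    ssc.awaitTermination()
  }
}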
Packaging command: [hadoop@hadoop000 sparktrain]$ mvn clean package -DskipTests
Submit:
spark-submit \
--class com.imooc.spark.FlumePullWordCount \
--master local[2] \
--packages org.apache.spark:spark-streaming-flume_2.11:2.2.0 \
/home/hadoop/lib/sparktrain-1.0.jar \
hadoop000 41414