Integrating Flume with Spark Streaming via the Push and Pull Approaches

1. PUSH Architecture

2. Flume Configuration

Create a new configuration file under $FLUME_HOME/conf: flume_push_streaming.conf

The configuration approach is as follows:

  1. Use a netcat source, configured with a hostname and port
  2. Use an avro sink, configured with a hostname and port
  3. Use a memory channel
  4. Wire the source to the channel
  5. Wire the sink to the channel

simple-agent.sources = netcat-source
simple-agent.sinks = avro-sink
simple-agent.channels = memory-channel

simple-agent.sources.netcat-source.type = netcat
simple-agent.sources.netcat-source.bind = hadoop000
simple-agent.sources.netcat-source.port = 44444

simple-agent.sinks.avro-sink.type = avro
simple-agent.sinks.avro-sink.hostname = hadoop000
simple-agent.sinks.avro-sink.port = 41414

simple-agent.channels.memory-channel.type = memory

simple-agent.sources.netcat-source.channels = memory-channel
simple-agent.sinks.avro-sink.channel = memory-channel
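
The memory channel above runs with Flume's defaults. If events arrive faster than the sink drains them, its size can be raised; a minimal sketch, assuming the standard memory-channel properties capacity and transactionCapacity (the values here are illustrative only):

# optional tuning; values are illustrative, not required by this tutorial
simple-agent.channels.memory-channel.capacity = 10000
simple-agent.channels.memory-channel.transactionCapacity = 1000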

3. Writing the Spark Streaming Application

Add the following to pom.xml:

    <!-- Spark Streaming + Flume integration dependency -->
    <dependency>
      <groupId>org.apache.spark</groupId>
      <artifactId>spark-streaming-flume_2.11</artifactId>
      <version>${spark.version}</version>
    </dependency>
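
The ${spark.version} placeholder assumes a property declared in the pom's <properties> block; a minimal sketch, using 2.2.0 to match the --packages coordinate used later (adjust to your cluster's version):

    <properties>
      <!-- assumed value for illustration; keep in sync with the cluster -->
      <spark.version>2.2.0</spark.version>
    </properties>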

The code works as follows:

  1. Check that exactly two arguments are passed: the hostname and the port
  2. Read the hostname and port from args
  3. Create a SparkConf object and pass it to a StreamingContext, along with the batch interval
  4. Obtain the Flume stream with FlumeUtils.createStream
  5. Note that a Flume event carries headers and a body, so read the payload with x.event.getBody.array() and trim it
  6. Apply the usual word-count transformations to the data
  7. Finally, start the StreamingContext and await termination

package com.taipark.spark

import org.apache.spark.SparkConf
import org.apache.spark.streaming.flume.FlumeUtils
import org.apache.spark.streaming.{Seconds, StreamingContext}

/**
  * Spark Streaming + Flume integration, approach 1 (push-based).
  */
object FlumePushWordCount {

  def main(args: Array[String]): Unit = {

    if (args.length != 2) {
      System.err.println("Usage: FlumePushWordCount <hostname> <port>")
      System.exit(1)
    }

    val Array(hostname, port) = args

    // Master and app name are supplied on the spark-submit command line.
    val sparkConf = new SparkConf() //.setMaster("local[2]").setAppName("FlumePushWordCount")
    val ssc = new StreamingContext(sparkConf, Seconds(5))

    // Push-based receiver: Flume's avro sink pushes events to this address.
    val flumeStream = FlumeUtils.createStream(ssc, hostname, port.toInt)

    // A Flume event has headers and a body; the text payload is in the body.
    flumeStream.map(x => new String(x.event.getBody.array()).trim)
      .flatMap(_.split(" ")).map((_, 1)).reduceByKey(_ + _)
      .print(10)

    ssc.start()
    ssc.awaitTermination()
  }
}

Once the code is finished, package it with Maven and upload the jar to the server.
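
A typical packaging command, assuming the project produces the sparktrain-1.0.jar artifact referenced below:

mvn clean package -DskipTests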

First, run the jar with spark-submit. Note that the spark-streaming-flume_2.11 package must be brought along.

The parameters to specify are:

  • the class to run
  • the master
  • the dependency package
  • the jar to run
  • the hostname and port as the two program arguments

spark-submit \
--class com.taipark.spark.FlumePushWordCount \
--master local[2] \
--packages org.apache.spark:spark-streaming-flume_2.11:2.2.0 \
/home/hadoop/tplib/sparktrain-1.0.jar \
hadoop000 41414

Next, start Flume:

The parameters to specify are:

  • the agent name (must match the one in the configuration)
  • the configuration directory
  • the configuration file
  • logging to the console

flume-ng agent \
--name simple-agent \
--conf $FLUME_HOME/conf \
--conf-file $FLUME_HOME/conf/flume_push_streaming.conf \
-Dflume.root.logger=INFO,console

Finally, send test data to port 44444 (as an aside, to close telnet press Ctrl+] and then type quit):

telnet localhost 44444

The console running the jar prints the results:
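
Roughly, the print() output looks like the following (an illustrative sketch only; the batch timestamp and the counts depend on what was typed into telnet):

-------------------------------------------
Time: 1585555555000 ms
-------------------------------------------
(hello,2)
(spark,1)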

Done.

4. The PULL Approach

Pull is reliable while Push is not: with the pull-based approach, events stay buffered in the custom Flume sink until the Spark receiver has pulled them and acknowledged them in a transaction, so Pull is the more common choice in production.

On the Flume side, the main change is to point the sink at a spark-sink, setting its type to org.apache.spark.streaming.flume.sink.SparkSink.

simple-agent.sources = netcat-source
simple-agent.sinks = spark-sink
simple-agent.channels = memory-channel

simple-agent.sources.netcat-source.type = netcat
simple-agent.sources.netcat-source.bind = hadoop000
simple-agent.sources.netcat-source.port = 44444

simple-agent.sinks.spark-sink.type = org.apache.spark.streaming.flume.sink.SparkSink
simple-agent.sinks.spark-sink.hostname = hadoop000
simple-agent.sinks.spark-sink.port = 41414

simple-agent.channels.memory-channel.type = memory

simple-agent.sources.netcat-source.channels = memory-channel
simple-agent.sinks.spark-sink.channel = memory-channel

On the Spark Streaming side, two additional dependencies are needed for the Spark Sink:

    <dependency>
      <groupId>org.apache.spark</groupId>
      <artifactId>spark-streaming-flume-sink_2.11</artifactId>
      <version>${spark.version}</version>
    </dependency>

    <dependency>
      <groupId>org.apache.commons</groupId>
      <artifactId>commons-lang3</artifactId>
      <version>3.5</version>
    </dependency>
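
Note that the Flume agent itself also needs the custom sink class on its classpath. Per the Spark Streaming + Flume integration guide, the spark-streaming-flume-sink_2.11, scala-library, and commons-lang3 jars are typically placed in Flume's classpath, for example (illustrative file names and paths, assuming Spark 2.2.0 and Scala 2.11.8):

cp spark-streaming-flume-sink_2.11-2.2.0.jar \
   scala-library-2.11.8.jar \
   commons-lang3-3.5.jar \
   $FLUME_HOME/lib/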

In the code, the only change needed is to replace the PUSH-side createStream in FlumeUtils with the PULL-side createPollingStream:

package com.taipark.spark

import org.apache.spark.SparkConf
import org.apache.spark.streaming.flume.FlumeUtils
import org.apache.spark.streaming.{Seconds, StreamingContext}

/**
  * Spark Streaming + Flume integration, approach 2 (pull-based).
  */
object FlumePullWordCount {

  def main(args: Array[String]): Unit = {

    if (args.length != 2) {
      System.err.println("Usage: FlumePullWordCount <hostname> <port>")
      System.exit(1)
    }

    val Array(hostname, port) = args

    // Master and app name are supplied on the spark-submit command line.
    val sparkConf = new SparkConf() //.setMaster("local[2]").setAppName("FlumePullWordCount")
    val ssc = new StreamingContext(sparkConf, Seconds(5))

    // Pull-based receiver: polls events from Flume's SparkSink at this address.
    val flumeStream = FlumeUtils.createPollingStream(ssc, hostname, port.toInt)

    // A Flume event has headers and a body; the text payload is in the body.
    flumeStream.map(x => new String(x.event.getBody.array()).trim)
      .flatMap(_.split(" ")).map((_, 1)).reduceByKey(_ + _)
      .print(10)

    ssc.start()
    ssc.awaitTermination()
  }
}
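
For reference, createPollingStream also appears to offer overloads that take an explicit storage level and a list of sink addresses, so one stream can poll several SparkSink agents (an assumption based on the spark-streaming-flume 2.x API; verify against the version in use):

import java.net.InetSocketAddress
import org.apache.spark.storage.StorageLevel

// Assumed overload: given the ssc defined above, poll multiple SparkSink
// addresses with an explicit storage level.
val addresses = Seq(new InetSocketAddress("hadoop000", 41414))
val flumeStream = FlumeUtils.createPollingStream(
  ssc, addresses, StorageLevel.MEMORY_AND_DISK_SER_2)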

The other key difference is that with PULL, Flume must be started before Spark Streaming.

Start Flume:

flume-ng agent \
--name simple-agent \
--conf $FLUME_HOME/conf \
--conf-file $FLUME_HOME/conf/flume_pull_streaming.conf \
-Dflume.root.logger=INFO,console

Start Spark Streaming:

spark-submit \
--class com.taipark.spark.FlumePullWordCount \
--master local[2] \
--packages org.apache.spark:spark-streaming-flume_2.11:2.2.0 \
/home/hadoop/tplib/sparktrain-1.0.jar \
hadoop000 41414

Send data to port 44444 to test:

telnet localhost 44444

The results show up in Spark Streaming:

The PULL approach works!
