Big Data (11) --------- Integrating Spark Streaming with Flume

1. Install Flume 1.6 or later.

2. Download the dependency jar spark-streaming-flume-sink_2.11-2.0.2.jar and put it into Flume's lib directory.

3. Replace the scala-library jar in Flume's lib directory: use whatever version your pom declares, locate that jar, and upload it to flume/lib/. (This step matters: the scala-library version in flume/lib must match the Scala version used by your project; see the sketch just below.)
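A minimal shell sketch of steps 2 and 3, assuming Flume is installed under /opt/software/apache-flume-1.7.0-bin (the path used in step 8) and that both jars have already been downloaded to the current directory:

FLUME_HOME=/opt/software/apache-flume-1.7.0-bin    # adjust to your installation

# step 2: copy the Spark sink jar into Flume's lib directory
cp spark-streaming-flume-sink_2.11-2.0.2.jar $FLUME_HOME/lib/

# step 3: replace Flume's bundled scala-library with the version declared in the pom (2.11.8 here)
rm -f $FLUME_HOME/lib/scala-library-*.jar
cp scala-library-2.11.8.jar $FLUME_HOME/lib/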

4. Dependencies:

<properties>
    <scala.version>2.11.8</scala.version>
    <hadoop.version>2.7.4</hadoop.version>
    <spark.version>2.0.2</spark.version>
</properties>
<dependencies>
    <dependency>
        <groupId>org.scala-lang</groupId>
        <artifactId>scala-library</artifactId>
        <version>${scala.version}</version>
    </dependency>
    <dependency>
        <groupId>org.apache.spark</groupId>
        <artifactId>spark-core_2.11</artifactId>
        <version>${spark.version}</version>
    </dependency>
    <dependency>
        <groupId>org.apache.spark</groupId>
        <artifactId>spark-streaming_2.11</artifactId>
        <version>${spark.version}</version>
    </dependency>
    <dependency>
        <groupId>org.apache.spark</groupId>
        <artifactId>spark-streaming-flume_2.11</artifactId>
        <version>${spark.version}</version>
    </dependency>
</dependencies>

5. Create the flume-poll.conf configuration file under Flume's conf directory:

a1.sources = r1
a1.sinks = k1
a1.channels = c1
#source
a1.sources.r1.channels = c1
a1.sources.r1.type = spooldir
# directory that Flume watches for data files; change the path as needed
a1.sources.r1.spoolDir = /root/data
a1.sources.r1.fileHeader = true
#channel
a1.channels.c1.type = memory
a1.channels.c1.capacity = 20000
a1.channels.c1.transactionCapacity = 5000
#sinks
a1.sinks.k1.channel = c1
a1.sinks.k1.type = org.apache.spark.streaming.flume.sink.SparkSink
# hostname of the node running this Flume agent (the Spark job connects to this host and port)
a1.sinks.k1.hostname = node01
a1.sinks.k1.port = 8888
a1.sinks.k1.batchSize = 2000

6. Prepare a data file data.txt under /root/data on the server.
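For example, assuming the directory from the config above (the word content below is only illustrative):

mkdir -p /root/data
echo "hadoop spark flume spark streaming" > /root/data/data.txt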

7. The code:

package com.nb.lpq

import java.net.InetSocketAddress
import org.apache.spark.storage.StorageLevel
import org.apache.spark.streaming.dstream.{DStream, ReceiverInputDStream}
import org.apache.spark.streaming.flume.{FlumeUtils, SparkFlumeEvent}
import org.apache.spark.streaming.{Seconds, StreamingContext}
import org.apache.spark.{SparkConf, SparkContext}

/**
  * Spark Streaming integrated with Flume in pull (poll) mode
  */
object SparkStreaming_Flume_Poll {
  //newValues: all the 1s produced for the same word in this batch's (word, 1) pairs
  //runningCount: the accumulated count of that word from all previous batches
  def updateFunction(newValues: Seq[Int], runningCount: Option[Int]): Option[Int] = {
    val newCount = runningCount.getOrElse(0) + newValues.sum
    Some(newCount)
  }


  def main(args: Array[String]): Unit = {
    //configure SparkConf
    val sparkConf: SparkConf = new SparkConf().setAppName("SparkStreaming_Flume_Poll").setMaster("local[2]")
    //build the SparkContext
    val sc: SparkContext = new SparkContext(sparkConf)

    sc.setLogLevel("WARN")

    //build the StreamingContext; the second argument is the batch interval
    val scc: StreamingContext = new StreamingContext(sc, Seconds(5))
    //set the checkpoint directory (required by updateStateByKey)
    scc.checkpoint("./")
    //addresses of the Flume SparkSink(s); more than one agent can be listed
    val address = Seq(new InetSocketAddress("192.168.248.123", 8888))
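    // To pull from several agents at once, list each SparkSink address, for example
    // (node02:8888 is a hypothetical second agent):
    // val address = Seq(new InetSocketAddress("192.168.248.123", 8888), new InetSocketAddress("node02", 8888))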
    // pull data from Flume
    val flumeStream: ReceiverInputDStream[SparkFlumeEvent] = FlumeUtils.createPollingStream(scc,address,StorageLevel.MEMORY_AND_DISK)

    //the payload sits in the event body; convert it to a String
    val lineStream: DStream[String] = flumeStream.map(x=>new String(x.event.getBody.array()))
    //word count, accumulating counts across batches with updateStateByKey
    val result: DStream[(String, Int)] = lineStream.flatMap(_.split(" ")).map((_,1)).updateStateByKey(updateFunction)

    result.print()
    scc.start()
    scc.awaitTermination()
  }

}

8. Start Flume:

flume-ng agent -n a1 -c /opt/software/apache-flume-1.7.0-bin/conf -f /opt/software/apache-flume-1.7.0-bin/conf/flume-poll.conf -Dflume.root.logger=INFO,console

9. Start the Spark program and watch the console output.
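Because the job hard-codes setMaster("local[2]"), it can simply be run from the IDE. If you would rather submit the packaged jar, a rough sketch follows (spark-flume-demo.jar is a hypothetical placeholder for whatever your build produces; --packages pulls in the flume integration classes if they are not already bundled into a fat jar):

spark-submit \
  --class com.nb.lpq.SparkStreaming_Flume_Poll \
  --master local[2] \
  --packages org.apache.spark:spark-streaming-flume_2.11:2.0.2 \
  spark-flume-demo.jar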

Problem encountered:

      Both Flume and the Spark job run without errors, yet nothing is printed. Why?

      A file in the spooled directory is loaded only once; after ingestion its name is changed to xxx.COMPLETED.

      Rename the data file under /root/data, e.g. mv data.txt.COMPLETED data.txt, and the file will be picked up and loaded again.
