Integrating Flume with Spark Streaming


Use Flume to collect real-time click logs: the backend POSTs JSON data to Flume over HTTP, and Spark Streaming processes the data. Here Spark Streaming connects to Flume in push mode, so the startup order is to start Spark Streaming first and then start Flume.
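
As a minimal sketch of the push-mode wiring (the full program appears below): in push mode Spark Streaming acts as the Avro server, and Flume's avro sink connects to it as a client, which is why Spark Streaming has to be running first.

import org.apache.spark.SparkConf
import org.apache.spark.streaming.{Seconds, StreamingContext}
import org.apache.spark.streaming.flume.FlumeUtils

// The listener that createStream opens on localhost:1234 must be up before
// the Flume agent starts, or the avro sink has nothing to connect to.
val ssc = new StreamingContext(
  new SparkConf().setAppName("demo").setMaster("local[2]"), Seconds(1))
val events = FlumeUtils.createStream(ssc, "localhost", 1234)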

Add the Maven Dependencies

<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
    <modelVersion>4.0.0</modelVersion>

    <groupId>groupId</groupId>
    <artifactId>KanChaNiDeLian</artifactId>
    <version>1.0-SNAPSHOT</version>

    <properties>
        <spark.version>2.2.0</spark.version>
        <scala.version>2.11</scala.version>
    </properties>

    <dependencies>
        <dependency>
            <groupId>org.apache.spark</groupId>
            <artifactId>spark-core_${scala.version}</artifactId>
            <version>${spark.version}</version>
        </dependency>
        <dependency>
            <groupId>org.apache.spark</groupId>
            <artifactId>spark-streaming_${scala.version}</artifactId>
            <version>${spark.version}</version>
        </dependency>
        <dependency>
            <groupId>org.apache.spark</groupId>
            <artifactId>spark-sql_${scala.version}</artifactId>
            <version>${spark.version}</version>
        </dependency>
        <dependency>
            <groupId>org.apache.spark</groupId>
            <artifactId>spark-hive_${scala.version}</artifactId>
            <version>${spark.version}</version>
        </dependency>
        <dependency>
            <groupId>org.apache.spark</groupId>
            <artifactId>spark-mllib_${scala.version}</artifactId>
            <version>${spark.version}</version>
        </dependency>
        <!-- https://mvnrepository.com/artifact/org.apache.spark/spark-streaming-flume -->
        <dependency>
            <groupId>org.apache.spark</groupId>
            <artifactId>spark-streaming-flume_2.11</artifactId>
            <version>2.2.0</version>
        </dependency>
        <!-- https://mvnrepository.com/artifact/org.apache.spark/spark-streaming-flume-sink -->
        <dependency>
            <groupId>org.apache.spark</groupId>
            <artifactId>spark-streaming-flume-sink_2.11</artifactId>
            <version>2.2.0</version>
        </dependency>
        <!-- https://mvnrepository.com/artifact/org.apache.spark/spark-streaming-flume-assembly -->
        <dependency>
            <groupId>org.apache.spark</groupId>
            <artifactId>spark-streaming-flume-assembly_2.11</artifactId>
            <version>2.2.0</version>
        </dependency>


        <!-- https://mvnrepository.com/artifact/com.google.code.gson/gson -->
        <dependency>
            <groupId>com.google.code.gson</groupId>
            <artifactId>gson</artifactId>
            <version>2.8.5</version>
        </dependency>

        <!--解析json字符串-->
        <dependency>
            <groupId>com.alibaba</groupId>
            <artifactId>fastjson</artifactId>
            <version>1.2.36</version>
        </dependency>

    </dependencies>

    <repositories>
        <repository>
            <id>alimaven</id>
            <name>aliyun maven</name>
            <url>http://maven.aliyun.com/nexus/content/groups/public/</url>
        </repository>
    </repositories>

    <build>
        <plugins>
            <plugin>
                <groupId>org.scala-tools</groupId>
                <artifactId>maven-scala-plugin</artifactId>
                <version>2.15.2</version>
                <executions>
                    <execution>
                        <goals>
                            <goal>compile</goal>
                            <goal>testCompile</goal>
                        </goals>
                    </execution>
                </executions>
            </plugin>

            <plugin>
                <artifactId>maven-compiler-plugin</artifactId>
                <version>3.6.0</version>
                <configuration>
                    <source>1.8</source>
                    <target>1.8</target>
                </configuration>
            </plugin>


            <plugin>
                <groupId>org.apache.maven.plugins</groupId>
                <artifactId>maven-surefire-plugin</artifactId>
                <configuration>
                    <skip>true</skip>
                </configuration>
            </plugin>

        </plugins>
    </build>
</project>

Configure the Backend Code to Send HTTP Requests

Send a POST request from Django; views.py is shown below. Note that the JSON sent to Flume must have the format [{"headers": {...}, "body": "..."}, ...] (the exact payload is shown after the view).

from django.shortcuts import HttpResponse
import json
import requests


def get_json(request):
    # Note the event format Flume's JSONHandler expects: a list of {"headers": ..., "body": ...} objects
    j = [{
        "headers": {
            "userId": '233',
            "age": '20',
            "name": 'madala',
            "itemId": '777',
            "click": '1',
        },
        "body": "abc abc df df"
    }]
    json_data = json.dumps(j)
    # POST the events to the Flume HTTP source
    headers = {'Content-Type': 'application/json;charset=UTF-8'}
    res = requests.post('http://localhost:5555', data=json_data, headers=headers)
    print(res)

    return HttpResponse(json_data, content_type='application/json')
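
For reference, the request body this view produces is a JSON array of events, each carrying a headers map (all values strings) and a body string; this is the shape Flume's JSONHandler accepts:

[{"headers": {"userId": "233", "age": "20", "name": "madala", "itemId": "777", "click": "1"},
  "body": "abc abc df df"}]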

Configure Flume

The rough steps for configuring a Flume agent are:

  • Configure the source
  • Configure the sink
  • Configure the channel
  • Wire the channel to the source, and the channel to the sink

a1.sources = r1
a1.sinks = k1
a1.channels = c1

a1.sources.r1.type = http
a1.sources.r1.bind = localhost
a1.sources.r1.port = 5555
# Note: the property is "channels" (plural); without the s the agent fails to start
a1.sources.r1.channels = c1
a1.sources.r1.handler = org.apache.flume.source.http.JSONHandler

# Pull (poll) alternative: Flume hosts a SparkSink and Spark polls it (see the Spark-side sketch after this config)
# a1.sinks.k1.type = org.apache.spark.streaming.flume.sink.SparkSink
# a1.sinks.k1.hostname = localhost
# a1.sinks.k1.port = 1234
# a1.sinks.k1.channel = c1

# Push mode: the avro sink pushes events to the Avro listener that Spark
# Streaming's FlumeUtils.createStream opens on localhost:1234
a1.sinks.k1.type = avro
a1.sinks.k1.hostname = localhost
a1.sinks.k1.port = 1234
a1.sinks.k1.channel = c1


a1.channels.c1.type = memory
a1.channels.c1.capacity = 1000
a1.channels.c1.transactionCapacity = 100
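
The commented-out sink above is the pull (poll) variant: Flume buffers events in a SparkSink and Spark Streaming polls them, so the startup order reverses and the Flume agent starts first. On the Spark side the only change from the program below is roughly the following sketch (it assumes the spark-streaming-flume-sink jar is on the Flume agent's classpath and that ssc is the StreamingContext defined below):

import org.apache.spark.streaming.flume.FlumeUtils

// Pull mode: Spark Streaming is the client and polls Flume's SparkSink.
val pollStream = FlumeUtils.createPollingStream(ssc, "localhost", 1234)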

Spark Streaming

Process the DStream on a 1-second batch interval, using the fastjson library to parse the JSON.

import org.apache.spark.{SparkConf, SparkContext}
import org.apache.spark.streaming.{Seconds, StreamingContext}
import org.apache.spark.streaming.dstream.{DStream, ReceiverInputDStream}
import org.apache.spark.streaming.flume.{FlumeUtils, SparkFlumeEvent}
import com.alibaba.fastjson.JSON


object FlumePushStreaming {
  def main(args: Array[String]): Unit = {
    val sparkConf: SparkConf = new SparkConf()
      .setAppName("FlumePushStreaming") // an app name is required, or SparkContext refuses to start
      .setMaster("local[3]") // at least 2 threads: one for the receiver, the rest for processing
    val sc = new SparkContext(sparkConf)
    sc.setLogLevel("WARN")
    val ssc = new StreamingContext(sc, Seconds(1))
    val stream: ReceiverInputDStream[SparkFlumeEvent] = FlumeUtils.createStream(ssc, "localhost", 1234)
    // Extract the headers of each Flume event; getHeaders returns a Java Map,
    // so toString yields something like {userId=233, age=20, ...}
    val dstream: DStream[String] = stream.map(x => x.event.getHeaders.toString)
    // Parse each headers string into a tuple of fields
    // (a sturdier map-based alternative is sketched after this program)
    val dataStream = dstream.map(json => {
      // Rewrite the Map.toString output into valid JSON:
      // {userId=233, age=20} -> {"userId":"233","age":"20"}
      // (this only works because the header values contain no spaces, commas, or '=')
      val fixedJson: String = json.replaceAll("=", "\":\"")
        .replaceAll(",", "\",\"")
        .replaceAll("[{]", "{\"")
        .replaceAll("[}]", "\"}")
        .replaceAll(" ", "")

      val jsonObject = JSON.parseObject(fixedJson)
      val userId = jsonObject.getString("userId")
      val age = jsonObject.getString("age")
      val name = jsonObject.getString("name")
      val itemId = jsonObject.getString("itemId")
      val click = jsonObject.getString("click")
      (userId, age, name, itemId, click)
    })
    dataStream.print()

    // Start the streaming computation and block until it is stopped
    ssc.start()
    ssc.awaitTermination()
  }
}
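
The replaceAll chain above works for this particular payload, but it breaks as soon as a header value contains a space, comma, or '='. A sturdier alternative (a sketch, using the same stream as above) skips the Map.toString round-trip and reads the headers map directly:

import scala.collection.JavaConverters._

// event.getHeaders is a java.util.Map[CharSequence, CharSequence];
// convert it to a Scala Map[String, String] instead of re-parsing its toString.
val headerStream = stream.map { e =>
  e.event.getHeaders.asScala.map { case (k, v) => (k.toString, v.toString) }.toMap
}
headerStream
  .map(h => (h.get("userId"), h.get("age"), h.get("name"), h.get("itemId"), h.get("click")))
  .print()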