Spark Streaming + Kafka test on CDH 5

1. CDH cluster environment
CDH 5.13.2
JDK 1.8
Scala 2.10.6
ZooKeeper 3.4.5
Hadoop 2.6.0
YARN 2.6.0
Spark 1.6.0, 2.1.0
Kafka 2.1.0 (CDH Kafka parcel)
Redis 3.0.0

2. pom.xml

<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
    <modelVersion>4.0.0</modelVersion>

    <groupId>com.xxx.xx</groupId>
    <artifactId>spark-kafka</artifactId>
    <version>1.0-SNAPSHOT</version>

    <properties>
        <maven.compiler.source>1.8</maven.compiler.source>
        <maven.compiler.target>1.8</maven.compiler.target>
        <scala.compat.version>2.10</scala.compat.version>
        <!-- encoding used when compiling sources -->
        <maven.compiler.encoding>UTF-8</maven.compiler.encoding>
        <!-- encoding used when copying resources -->
        <project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
        <project.reporting.outputEncoding>UTF-8</project.reporting.outputEncoding>
        <scala.version>2.10.6</scala.version>
        <spark.version>1.6.0</spark.version>
    </properties>
    
    <!-- repository hosting the Scala Maven plugin -->
    <pluginRepositories>
        <pluginRepository>
            <id>scala-tools.org</id>
            <name>Scala-tools Maven2 Repository</name>
            <url>http://scala-tools.org/repo-releases</url>
        </pluginRepository>
    </pluginRepositories>

    <dependencies>
        <dependency>
            <groupId>org.scala-lang</groupId>
            <artifactId>scala-library</artifactId>
            <version>${scala.version}</version>
            <scope>provided</scope>
        </dependency>

        <!-- provided: excluded from the packaged jar because the cluster already provides it -->
        <dependency>
            <groupId>org.apache.spark</groupId>
            <artifactId>spark-core_2.10</artifactId>
            <version>${spark.version}</version>
            <scope>provided</scope>
        </dependency>

        <!-- https://mvnrepository.com/artifact/org.apache.kafka/kafka -->
        <dependency>
            <groupId>org.apache.kafka</groupId>
            <artifactId>kafka_2.10</artifactId>
            <version>0.10.2.1</version>
        </dependency>

        <!-- https://mvnrepository.com/artifact/org.apache.spark/spark-streaming -->
        <dependency>
            <groupId>org.apache.spark</groupId>
            <artifactId>spark-streaming_2.10</artifactId>
            <version>${spark.version}</version>
            <scope>provided</scope>
        </dependency>

        <!-- https://mvnrepository.com/artifact/org.apache.spark/spark-streaming-kafka -->
        <dependency>
            <groupId>org.apache.spark</groupId>
            <artifactId>spark-streaming-kafka_2.10</artifactId>
            <version>${spark.version}</version>
        </dependency>

        <!-- https://mvnrepository.com/artifact/redis.clients/jedis -->
        <dependency>
            <groupId>redis.clients</groupId>
            <artifactId>jedis</artifactId>
            <version>3.0.0</version>
        </dependency>
    </dependencies>

    <build>
        <!-- locations of the main and test sources -->
        <sourceDirectory>src/main/scala</sourceDirectory>
        <testSourceDirectory>src/test/scala</testSourceDirectory>
        <plugins>
            <!-- Scala plugin so Maven can compile, test and run Scala code -->
            <plugin>
                <groupId>org.scala-tools</groupId>
                <artifactId>maven-scala-plugin</artifactId>
                <executions>
                    <execution>
                        <goals>
                            <goal>compile</goal>
                            <goal>testCompile</goal>
                        </goals>
                    </execution>
                </executions>

                <configuration>
                    <scalaVersion>${scala.version}</scalaVersion>
                    <args>
                        <arg>-target:jvm-1.5</arg>
                    </args>
                </configuration>
            </plugin>

            <!-- Eclipse plugin: lets the Maven project be imported into Eclipse as a Scala project -->
            <plugin>
                <groupId>org.apache.maven.plugins</groupId>
                <artifactId>maven-eclipse-plugin</artifactId>
                <configuration>
                    <downloadSources>true</downloadSources>
                    <buildcommands>
                        <buildcommand>ch.epfl.lamp.sdt.core.scalabuilder</buildcommand>
                    </buildcommands>
                    <additionalProjectnatures>
                        <projectnature>ch.epfl.lamp.sdt.core.scalanature</projectnature>
                    </additionalProjectnatures>
                    <classpathContainers>
                        <classpathContainer>org.eclipse.jdt.launching.JRE_CONTAINER</classpathContainer>
                        <classpathContainer>ch.epfl.lamp.sdt.launching.SCALA_CONTAINER</classpathContainer>
                    </classpathContainers>
                </configuration>
            </plugin>
            <!-- assembly plugin (commented out): bundles all dependencies into the jar at package time -->
            <!--<plugin>
                <artifactId>maven-assembly-plugin</artifactId>
                <configuration>
                    <descriptorRefs>
                        <descriptorRef>jar-with-dependencies</descriptorRef>
                    </descriptorRefs>
                </configuration>
                <executions>
                    <execution>
                        <id>make-assembly</id>
                        <phase>package</phase>
                        <goals>
                            <goal>single</goal>
                        </goals>
                    </execution>
                </executions>
            </plugin>-->
        </plugins>
    </build>


    <reporting>
        <plugins>
            <plugin>
                <groupId>org.scala-tools</groupId>
                <artifactId>maven-scala-plugin</artifactId>
                <configuration>
                    <scalaVersion>${scala.version}</scalaVersion>
                </configuration>
            </plugin>
        </plugins>
    </reporting>

</project>

3. Scala code


import java.text.SimpleDateFormat
import java.util.Locale

import kafka.common.TopicAndPartition
import kafka.message.MessageAndMetadata
import kafka.serializer.StringDecoder
import kafka.utils.{ZKGroupTopicDirs, ZkUtils}
import org.I0Itec.zkclient.ZkClient
import org.I0Itec.zkclient.exception.ZkMarshallingError
import org.I0Itec.zkclient.serialize.ZkSerializer
import org.apache.commons.pool2.impl.GenericObjectPoolConfig
import org.apache.spark.SparkConf
import org.apache.spark.streaming.dstream.{DStream, InputDStream}
import org.apache.spark.streaming.kafka.{HasOffsetRanges, KafkaUtils, OffsetRange}
import org.apache.spark.streaming.{Milliseconds, StreamingContext}
import redis.clients.jedis.JedisPool

import scala.collection.mutable.ListBuffer

object KafkaOffsetToZookeeper {
  def main(args: Array[String]): Unit = {
    val conf: SparkConf = new SparkConf()
      //.setAppName("CreditRedisKeyCount_new")
      .setAppName(this.getClass.getName)
      //Spark Streaming backpressure (since 1.5): dynamically adjusts the ingestion rate based on the current batch scheduling delay and processing time, so the system only receives as fast as it can process. Internally this sets the receivers' maximum rate; the ceiling is still bounded by spark.streaming.receiver.maxRate and spark.streaming.kafka.maxRatePerPartition. (default false)
      .set("spark.streaming.backpressure.enabled", "true")
      //Interval at which data received by receivers is chunked into blocks before being stored in Spark. Values below 50ms are not recommended, otherwise the per-block processing time becomes smaller than the task launch overhead.
      //.set("spark.streaming.blockInterval", "100")
      //Receiver-based mode: maximum rate (records per second) at which each receiver ingests data; each stream consumes at most this many records per second. 0 or a negative value means no limit. No default.
      //.set("spark.streaming.receiver.maxRate", "1000")
      //Enable the receiver write-ahead log: all data received by receivers is written to a WAL so it can be recovered after a driver failure. (default false)
      // .set("spark.streaming.receiver.writeAheadLog.enable", "true")
      //Automatically unpersist RDDs generated and persisted by Spark Streaming, and clear the raw input data it received. Setting this to false keeps raw data and persisted RDDs accessible outside the streaming application, since they are no longer cleared automatically, at the cost of higher memory usage. (default true)
      //.set("spark.streaming.unpersist", "true")
      //If true, the StreamingContext is shut down gracefully on JVM shutdown instead of immediately. (default false)
      //.set("spark.streaming.stopGracefullyOnShutdown", "false")
      //Direct mode: maximum rate (messages per second) at which the direct API reads each Kafka partition. No default.
      //If the consumer offsets lag far behind the leaders' latest offsets, the first batch after a restart has to handle the whole backlog, and the resources given to spark-submit may not cope, crashing the job. Setting spark.streaming.kafka.maxRatePerPartition (here or via --conf on spark-submit) caps how many messages per second are consumed from each partition, so the backlog is spread over several batches; to drain it faster, increase the compute resources and/or this value.
      .set("spark.streaming.kafka.maxRatePerPartition", "10000")
      //Maximum number of consecutive retries the driver makes to find the latest offsets on each partition's leader (default 1, i.e. the driver tries at most twice). Only applies to the new Kafka direct stream API.
      //.set("spark.streaming.kafka.maxRetries", "1")
      //How many batches the Spark Streaming UI and status APIs remember before garbage collection. (default 1000)
      //.set("spark.streaming.ui.retainedBatches", "1000")
      //Whether to close the file after writing a write-ahead log record on the driver. Set to true when the driver metadata WAL is on S3 (or any filesystem that does not support flushing). (default false)
      //.set("spark.streaming.driver.writeAheadLog.closeFileAfterWrite", "false")
      //Same as above, but for the data WAL on the receivers. (default false)
      //.set("spark.streaming.receiver.writeAheadLog.closeFileAfterWrite", "false")
      //Number of Kafka receivers
      //.set("spark.receivers.num", "5")
      //Number of streaming jobs that may run concurrently; raise it only if you have enough resources, e.g. at submit time. (default 1)
      .set("spark.streaming.concurrentJobs", "10")
      //Consolidate shuffle map output files on the map side
      .set("spark.shuffle.consolidateFiles", "true")
      //Buffer size of each shuffle write task's in-memory output stream (default 32k; the bare value here is in KiB). Raising it moderately reduces the number of disk I/O operations.
      .set("spark.shuffle.file.buffer", "128")
      //Buffer size of each shuffle read task (default 48m; the bare value here is in MiB); it bounds how much data a reduce task fetches at a time. Fetched data lands in this buffer first and is then aggregated through the hash map in the executor heap fraction reserved for shuffle (0.2 by default).
      .set("spark.reducer.maxSizeInFlight", "96")
      //Fraction of executor memory reserved for reduce-side aggregation (default 0.2). This region buffers shuffles, joins, sorts and aggregations to avoid frequent I/O; raise it if the job does a lot of these operations.
      //.set("spark.shuffle.memoryFraction", "0.4")
      //Maximum size of messages (e.g. map output statuses) exchanged between executors and the driver (default 128 MB); consider raising it when there are many map tasks.
      .set("spark.rpc.message.maxSize", "256")
      .set("spark.storage.blockManagerHeartBeatMs", "6000000ms")
      .set("spark.serializer", "org.apache.spark.serializer.KryoSerializer")
      //Register custom classes for Kryo serialization
      //.registerKryoClasses(Array(classOf[]))
      //Default parallelism: 2-3x num-executors * executor-cores is a reasonable value; already set via the spark-submit arguments
      //.set("spark.default.parallelism", "15")
      //Maximum number of retries when fetching a shuffle file fails (default 3)
      .set("spark.shuffle.io.maxRetries", "60")
      //Wait between shuffle fetch retries (default 5s)
      .set("spark.shuffle.io.retryWait", "60s")
    val ssc: StreamingContext = new StreamingContext(conf, Milliseconds(4500L))
    val groupId = "gpname"
    val topic = "topicNginxLog"
    //Kafka broker list (with the direct approach the Spark Streaming tasks connect straight to the Kafka partitions and consume through the low-level API, which is more efficient)
    val brokerList = "192.168.226.88:9092,192.168.226.89:9092,192.168.226.90:9092"
    //ZooKeeper address, used later to update the consumed offsets (Redis or MySQL could also be used to store offsets)
    val zk = "192.168.226.88:2181/kafkaTest"
    //topic set; several topics can be consumed at once
    val topics: Set[String] = Set(topic)
    //input DStream for the batches
    var messages: InputDStream[(String, String)] = null
    //map of partition -> offset, later used as the consumer's starting positions
    var fromOffsets: Map[TopicAndPartition, Long] = Map()
    //OffsetRange per partition (topic, partition, fromOffset, untilOffset)
    var offsetRanges: Array[OffsetRange] = Array[OffsetRange]()

    //ZKGroupTopicDirs builds the consumer paths in ZooKeeper
    val topicDirs: ZKGroupTopicDirs = new ZKGroupTopicDirs(groupId, topic)

    //ZooKeeper path "/consumers/[group]/offsets/[topic]", here "/consumers/gpname/offsets/topicNginxLog"
    //val zkTopicPath: String = s"${topicDirs.consumerOffsetDir}"
    val zkTopicPath: String = topicDirs.consumerOffsetDir

    //Kafka parameters
    val kafkaParams: Map[String, String] = Map(
      "metadata.broker.list" -> brokerList,
      "group.id" -> groupId,
      //resume from the last committed offset, or start from the beginning if there is none -- smallest
      //"auto.offset.reset" -> kafka.api.OffsetRequest.SmallestTimeString,
      //resume from the last committed offset, or start from the latest messages if there is none -- largest
      "auto.offset.reset" -> kafka.api.OffsetRequest.LargestTimeString,
      //disable automatic offset commits
      "auto.commit.enable" -> "false"
    )

    //ZooKeeper client used to read and update offsets; it needs a string serializer
    val zkClient: ZkClient = new ZkClient(zk, 60000, 60000, new ZkSerializer {
      override def serialize(data: Object): Array[Byte] = {
        try {
          return data.toString.getBytes("UTF-8")
        } catch {
          case e: ZkMarshallingError => return null
        }
      }

      override def deserialize(bytes: Array[Byte]): AnyRef = {
        try {
          return new String(bytes, "UTF-8")
        } catch {
          case e: ZkMarshallingError => return null
        }
      }
    })

    //number of partition nodes under /consumers/gpname/offsets/topicNginxLog; returns 0 if the path does not exist
    val children: Int = zkClient.countChildren(zkTopicPath)

    //TopicAndPartition holds the topic name and partition id
    //if ZooKeeper already stores offsets for this group, use them as the starting positions of the Kafka stream
    if (children > 0) {
      for (i <- 0 until children) {
        //  /consumers/gpname/offsets/topicNginxLog/0
        val partitionOffset: String = zkClient.readData[String](zkTopicPath + "/" + i)
        val tp: TopicAndPartition = TopicAndPartition(topic, i)
        //add each partition's stored offset to fromOffsets
        fromOffsets += (tp -> partitionOffset.toLong)
      }

      //message handler that maps each Kafka record to a (key, message) tuple: the key is the Kafka message key (may be null), the value is the message body
      val messageHandler: (MessageAndMetadata[String, String]) => (String, String) = (mmd: MessageAndMetadata[String, String]) => (mmd.key(), mmd.message())

      //direct stream that starts consuming from the offsets in fromOffsets
      messages = KafkaUtils.createDirectStream[String, String, StringDecoder, StringDecoder, (String, String)](
        ssc,
        kafkaParams,
        fromOffsets,
        messageHandler
      )
    } else {
      //no offsets stored: start from the latest or earliest offsets according to auto.offset.reset
      messages = KafkaUtils.createDirectStream[String, String, StringDecoder, StringDecoder](
        ssc,
        kafkaParams,
        topics
      )
    }

    //capture each batch's per-partition offset ranges
    val dStream: DStream[(String, String)] = messages.transform(rdd => {
      offsetRanges = rdd.asInstanceOf[HasOffsetRanges].offsetRanges
      rdd
    })

    //intermediate processing that turns the raw stream into the DStream we need
    val targetDstream: DStream[(String, String, String, String, String, String)] = dealData(dStream)
    //testing showed that caching here hurts performance badly
    //val targetDstream = tmp_targetDstream.cache()
    val resultDstream: DStream[(String, Long)] = targetDstream.mapPartitions(it => {
      val list: ListBuffer[(String, Long)] = new ListBuffer[Tuple2[String, Long]]
      it.foreach(x => {
        ...
        ...
      })
      list.toIterator
    }).reduceByKey(_ + _)


    resultDstream.foreachRDD(rdd => {
      rdd.foreachPartition(it => {
        object InternalRedisClient extends Serializable {
          private var pool: JedisPool = null

          def makePool(redisHost: String, redisPort: Int, redisTimeout: Int, maxTotal: Int, maxIdle: Int, minIdle: Int): Unit = {
            makePool(redisHost, redisPort, redisTimeout, maxTotal, maxIdle, minIdle, false, false, 10000)
          }

          def makePool(redisHost: String, redisPort: Int, redisTimeout: Int, maxTotal: Int, maxIdle: Int,
                       minIdle: Int, testOnBorrow: Boolean, testOnReturn: Boolean, maxWaitMillis: Long): Unit = {
            if (pool == null) {
              val poolConfig = new GenericObjectPoolConfig()
              poolConfig.setMaxTotal(maxTotal)
              poolConfig.setMaxIdle(maxIdle)
              poolConfig.setMinIdle(minIdle)
              poolConfig.setTestOnBorrow(testOnBorrow)
              poolConfig.setTestOnReturn(testOnReturn)
              poolConfig.setMaxWaitMillis(maxWaitMillis)
              pool = new JedisPool(poolConfig, redisHost, redisPort, redisTimeout)
              val hook = new Thread {
                override def run = pool.destroy()
              }
              sys.addShutdownHook(hook.run)
            }
          }

          def getPool: JedisPool = {
            assert(pool != null)
            pool
          }
        }
        //redis config
        val maxTotal = 100
        val maxIdle = 10
        val minIdle = 1
        val redisHost = "192.168.226.88"
        //default Redis port 6379
        val redisPort = 6379
        val redisTimeout = 60000
        //val dbIndex = 1
        InternalRedisClient.makePool(redisHost, redisPort, redisTimeout, maxTotal, maxIdle, minIdle)
        val poolInst = InternalRedisClient.getPool
        var jedis = poolInst.getResource()
        jedis.select(0)
        it.foreach(x => {
               ...
               ...
        })
        jedis.close()
        poolInst.close()
      })
      for (o <- offsetRanges) {
        //build the zk path for this partition, e.g. /consumers/gpname/offsets/topicNginxLog/0
        val zkPath: String = s"${topicDirs.consumerOffsetDir}/${o.partition}"
        //write this batch's until-offset back to ZooKeeper: /consumers/gpname/offsets/topicNginxLog/<partition>
        ZkUtils(zkClient, isZkSecurityEnabled = false).updatePersistentPath(zkPath, o.untilOffset.toString)
      }
    })

    ssc.start()
    ssc.awaitTermination()
  }
}
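
Note: dealData and the two "..." bodies above are deliberately elided; their real logic is application specific. Purely as a hypothetical sketch (the tab-separated field layout, the (key, 1L) counting scheme and the Redis hash name below are assumptions of mine, not part of the original job), they could look like this:

import org.apache.spark.streaming.dstream.DStream

//possible dealData: split each Kafka message value (assumed tab-separated) into a 6-field tuple
def dealData(in: DStream[(String, String)]): DStream[(String, String, String, String, String, String)] = {
  in.map(_._2)                                    //drop the Kafka key, keep the message body
    .map(_.split("\t", -1))
    .filter(_.length >= 6)
    .map(a => (a(0), a(1), a(2), a(3), a(4), a(5)))
}

//possible mapPartitions body: emit (field1, 1L) so that reduceByKey(_ + _) produces per-key counts
//  it.foreach(x => list += ((x._1, 1L)))

//possible foreachPartition body: accumulate the counts into a Redis hash (hypothetical key name)
//  it.foreach(x => jedis.hincrBy("nginx:pv", x._1, x._2))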

4. Launch script

#!/bin/sh
BIN_DIR=$(cd `dirname $0`; pwd)
#BIN_DIR="$(cd $(dirname $BASH_SOURCE) && pwd)"
LOG_DIR=${BIN_DIR}/../logs
LOG_TIME=`date +%Y-%m-%d`
spark-submit --class KafkaOffsetToZookeeper \
--master yarn \
--deploy-mode client \
--queue default \
--executor-memory 2g \
--num-executors 2 \
--jars /home/spark/jars/libJars/redis/commons-pool2-2.4.3.jar,/home/spark/jars/libJars/redis/jedis-3.0.0.jar \
/home/spark/jars/myJars/spark-kafka-1.0-SNAPSHOT.jar  > ${LOG_DIR}/sparkKafka_${LOG_TIME}.log 2>&1

5. Kafka, ZooKeeper, Redis and YARN commands:
5.1 Kafka
Create a topic:
cd /opt/cloudera/parcels/KAFKA/lib/kafka/bin
./kafka-topics.sh --zookeeper hadoop01:2181,hadoop02:2181,hadoop03:2181 --create --replication-factor 1 --partitions 2 --topic topicNginxLog
Send messages from the console producer:
./kafka-console-producer.sh --broker-list hadoop01:9092,hadoop02:9092,hadoop03:9092 --topic topicNginxLog
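
Test messages can also be produced from code. As a minimal sketch only, using the producer API pulled in transitively by the kafka_2.10 0.10.2.1 dependency in the pom (broker list and topic are the ones used above; the message text is just an example):

import java.util.Properties
import org.apache.kafka.clients.producer.{KafkaProducer, ProducerRecord}

val props = new Properties()
props.put("bootstrap.servers", "hadoop01:9092,hadoop02:9092,hadoop03:9092")
props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer")
props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer")

val producer = new KafkaProducer[String, String](props)
//send one test record to the topic consumed by the streaming job
producer.send(new ProducerRecord[String, String]("topicNginxLog", "hello from the scala producer"))
producer.close()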

5.2 ZooKeeper
ZooKeeper CLI:
/opt/cloudera/parcels/CDH/lib/zookeeper/bin/zkCli.sh -server 192.168.226.88:2181

Create nodes (without -s or -e the node is non-sequential and persistent by default):
create -s {path} ""      creates a sequential node; an increasing ten-digit suffix is appended to the name
create -e {path} ""      creates an ephemeral node, removed when the session is closed
create -s -e {path} ""   creates an ephemeral sequential node
References:
https://zhuanlan.zhihu.com/p/81765612
https://blog.csdn.net/qq_36014509/article/details/81538132#Create%E5%91%BD%E4%BB%A4

Delete a consumer group:
rmr /consumers/groupname      works even if the node has children
delete /consumers/groupname   only if the node has no children

List the children of the root node:
ls /

Get a partition offset:
get /mktkafka/consumers/gp1992/offsets/topicNginxLog/0
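
The same offset can be read from Scala, mirroring what the job does at startup. A minimal sketch using the ZkUtils and ZKGroupTopicDirs classes already imported in section 3 (connection string, group and topic are the job's values; adjust the chroot to whatever your cluster actually uses):

import kafka.utils.{ZKGroupTopicDirs, ZkUtils}

//connect through the same chroot the job uses and read partition 0's committed offset
val zkUtils = ZkUtils("192.168.226.88:2181/kafkaTest", 60000, 60000, isZkSecurityEnabled = false)
val dirs = new ZKGroupTopicDirs("gpname", "topicNginxLog")
val (offset, _) = zkUtils.readData(dirs.consumerOffsetDir + "/0")
println(offset)
zkUtils.close()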

5.3 Redis
5.3.1
Default port: 6379
Built from C source on hadoop01: /usr/local/redis-3.0.0
Installed binaries: cd /usr/local/redis/bin
Start the server: ./redis-server redis.conf
Stop the server: ./redis-cli shutdown
Check the process: ps -ef | grep redis
5.3.2
Client: ./redis-cli -h 192.168.226.88 -p 6379
keys *                              list all keys in the current database (database 0 by default)
hget myhash field1                  value of field field1 in hash myhash
hlen myhash                         number of fields in hash myhash
hexists myhash field1               whether field field1 exists in hash myhash
hmget myhash field1 field2 field3   get several fields at once
hgetall myhash                      all fields and values of hash myhash
hkeys myhash                        all field names of hash myhash
hvals myhash                        all field values of hash myhash
del myhash                          delete the key myhash
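
The same hash commands can be issued from Scala through the Jedis client already used in section 3; a minimal sketch (the key and field names simply mirror the CLI examples above):

import redis.clients.jedis.Jedis

val jedis = new Jedis("192.168.226.88", 6379)
jedis.select(0)                           //same database index the job selects
println(jedis.hgetAll("myhash"))          //hgetall myhash
println(jedis.hget("myhash", "field1"))   //hget myhash field1
println(jedis.hlen("myhash"))             //hlen myhash
jedis.close()
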
5.3.3
Redis data durability: RDB snapshots and appending write operations to the AOF file.
Reference: https://blog.csdn.net/zhililang/article/details/76275493

5.4 YARN
yarn application -list
yarn application -kill application_1590674270000_0003
yarn logs -applicationId application_1590674270000_0003 > xxxx.log   (export the application logs)

6. Troubleshooting
6.1 Missing dependencies on the cluster
Error:
java.lang.NoClassDefFoundError: org/apache/commons/pool2/impl/GenericObjectPoolConfig
java.lang.ClassNotFoundException: redis.clients.jedis.JedisPool
Fix:
Upload the dependency jars to the cluster node and pass their paths to spark-submit (alternatively, enable the commented-out maven-assembly-plugin in the pom to build a jar with dependencies):
--jars /home/spark/jars/libJars/redis/commons-pool2-2.4.3.jar,/home/spark/jars/libJars/redis/jedis-3.0.0.jar

6.2 ZooKeeper namespace node not created
Error:
org.apache.kafka.common.config.ConfigException: Zookeeper namespace does not exist
Fix:
Create the persistent node through the ZooKeeper CLI:
/opt/cloudera/parcels/CDH/lib/zookeeper/bin/zkCli.sh -server 192.168.226.88:2181
create /<node name> ""   (the empty quotes are required)
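
The node can also be created from code with the same zkclient library the job already uses; a minimal sketch (the node name /kafkaTest matches the chroot used in section 3, adjust as needed):

import org.I0Itec.zkclient.ZkClient

//the serializer does not matter here because no data is written to the node
val zkClient = new ZkClient("192.168.226.88:2181", 60000, 60000)
//createPersistent with createParents = true also creates any missing parent nodes
zkClient.createPersistent("/kafkaTest", true)
zkClient.close()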
