Explanation of parameter configuration in Spark Streaming:
1. Location (partition assignment) strategies
LocationStrategies decide on which executors the Kafka partitions are consumed. From the given topics and cluster addresses, the consumer is created, the DStream is built, and the received input data is returned.
LocationStrategies.PreferConsistent: consistently distribute partitions evenly across all executors (every selected executor is assigned partitions).
LocationStrategies.PreferBrokers: if an executor runs on the same machine as a Kafka broker, prefer that executor.
LocationStrategies.PreferFixed: if the machines are not evenly loaded, you can pin partitions to specific hosts; partitions without an explicit mapping fall back to LocationStrategies.PreferConsistent, as the sketch below shows.
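A minimal sketch of building each strategy (the topic name, partition, and host below are made-up placeholders, not values from this project):

import org.apache.kafka.common.TopicPartition
import org.apache.spark.streaming.kafka010.{LocationStrategies, LocationStrategy}

val consistent: LocationStrategy = LocationStrategies.PreferConsistent
val brokers: LocationStrategy = LocationStrategies.PreferBrokers
// Pin partition 0 of testTopic to host-a; unmapped partitions fall back to PreferConsistent
val fixed: LocationStrategy = LocationStrategies.PreferFixed(
  Map(new TopicPartition("testTopic", 0) -> "host-a")
)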
2. auto.offset.reset
earliest: when a partition has a committed offset, consume from that offset; when there is no committed offset, consume from the beginning.
latest: when a partition has a committed offset, consume from that offset; when there is no committed offset, consume only newly produced data in that partition.
none: when every partition has a committed offset, consume from those offsets; if even one partition has no committed offset, throw an exception.
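As a minimal illustration of where the setting lives (the broker address and group id simply mirror the demo further down):

val kafkaParams = Map[String, Object](
  "bootstrap.servers" -> "10.21.0.210:9092",
  "group.id"          -> "testgroup",
  // First run (no committed offsets yet): "earliest" replays the log from the beginning,
  // "latest" reads only new records; later runs resume from the committed offsets.
  "auto.offset.reset" -> "earliest"
)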
3. key.serializer and value.serializer
These are provided by Kafka and serialize strings, which is why plain strings can be passed into a ProducerRecord for transmission.
Correspondingly, on the consumer side we add the two deserializer parameters (key.deserializer and value.deserializer),
which deserialize the data received from Kafka.
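For instance, a minimal producer-side sketch (the broker address, topic, and record contents are placeholders); the consumer side mirrors it with key.deserializer / value.deserializer, exactly as in the kafkaParams of the full example below:

import java.util.Properties
import org.apache.kafka.clients.producer.{KafkaProducer, ProducerRecord}

val props = new Properties()
props.put("bootstrap.servers", "10.21.0.210:9092")
props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer")
props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer")
// Because both serializers are StringSerializer, plain strings can go straight into ProducerRecord
val producer = new KafkaProducer[String, String](props)
producer.send(new ProducerRecord[String, String]("testTopic", "1001", "{\"name\":\"tom\"}"))
producer.close()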
4. Starting offsets per partition
// specify from which record each Kafka partition starts consuming
ConsumerStrategies.Subscribe[String, String](topics, kafkaParams, Map(new TopicPartition(topics(0), 0) -> 90L))
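Wired into an actual stream this looks roughly as follows, assuming ssc, topics, and kafkaParams are defined as in the full example further down:

import org.apache.kafka.common.TopicPartition
import org.apache.spark.streaming.kafka010._

// Start partition 0 of the first topic at offset 90; other partitions use auto.offset.reset
val fromOffsets = Map(new TopicPartition(topics(0), 0) -> 90L)
val stream = KafkaUtils.createDirectStream[String, String](
  ssc,
  LocationStrategies.PreferConsistent,
  ConsumerStrategies.Subscribe[String, String](topics, kafkaParams, fromOffsets)
)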
================
pom:
<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
    <modelVersion>4.0.0</modelVersion>
    <groupId>com.tzb.bigdata</groupId>
    <artifactId>spark-test</artifactId>
    <!--<packaging>pom</packaging>-->
    <version>1.0</version>
    <!--<modules>-->
    <!--<module>hbase</module>-->
    <!--</modules>-->
    <properties>
        <scala.version>2.10.6</scala.version>
        <hadoop.version>2.6.0</hadoop.version>
    </properties>
    <dependencies>
        <dependency>
            <groupId>org.apache.spark</groupId>
            <artifactId>spark-core_2.11</artifactId>
            <version>2.1.1</version>
        </dependency>
        <dependency>
            <groupId>org.apache.spark</groupId>
            <artifactId>spark-sql_2.11</artifactId>
            <version>2.1.1</version>
        </dependency>
        <!--<dependency>-->
        <!--<groupId>org.apache.spark</groupId>-->
        <!--<artifactId>spark-sql_2.10</artifactId>-->
        <!--<version>1.6.0</version>-->
        <!--</dependency>-->
        <dependency>
            <groupId>org.apache.spark</groupId>
            <artifactId>spark-hive_2.11</artifactId>
            <version>2.1.1</version>
        </dependency>
        <dependency>
            <groupId>com.typesafe.play</groupId>
            <artifactId>play-mailer_2.11</artifactId>
            <version>7.0.0</version>
        </dependency>
        <dependency>
            <groupId>mysql</groupId>
            <artifactId>mysql-connector-java</artifactId>
            <version>5.1.41</version>
        </dependency>
        <dependency>
            <groupId>org.apache.spark</groupId>
            <artifactId>spark-streaming_2.11</artifactId>
            <version>2.1.1</version>
        </dependency>
        <!--=========================spark-streaming-kafka===========================-->
        <!-- 0.8 version -->
        <!--<dependency>-->
        <!--<groupId>org.apache.spark</groupId>-->
        <!--<artifactId>spark-streaming-kafka-0-8_2.11</artifactId>-->
        <!--<version>2.1.1</version>-->
        <!--</dependency>-->
        <!-- 0.10 version (newer) -->
        <dependency>
            <groupId>org.apache.spark</groupId>
            <artifactId>spark-streaming-kafka-0-10_2.11</artifactId>
            <version>2.3.0</version>
            <exclusions>
                <exclusion>
                    <artifactId>scala-library</artifactId>
                    <groupId>org.scala-lang</groupId>
                </exclusion>
            </exclusions>
        </dependency>
        <!--======================================================================-->
        <dependency>
            <groupId>org.apache.kafka</groupId>
            <artifactId>kafka-clients</artifactId>
            <version>0.11.0.2</version>
        </dependency>
        <!--<dependency>-->
        <!--<groupId>org.scala-lang</groupId>-->
        <!--<artifactId>scala-library</artifactId>-->
        <!--<version>2.10.6</version>-->
        <!--</dependency>-->
        <!--<dependency>-->
        <!--<groupId>org.apache.hadoop</groupId>-->
        <!--<artifactId>hadoop-common</artifactId>-->
        <!--</dependency>-->
        <!-- Enable only when testing HBase; otherwise a local IDEA run against the test environment throws errors -->
        <dependency>
            <groupId>org.apache.hbase</groupId>
            <artifactId>hbase-client</artifactId>
            <version>2.0.1</version>
            <exclusions>
                <exclusion>
                    <groupId>com.fasterxml.jackson.core</groupId>
                    <artifactId>jackson-databind</artifactId>
                </exclusion>
            </exclusions>
        </dependency>
        <dependency>
            <groupId>net.sf.json-lib</groupId>
            <artifactId>json-lib</artifactId>
            <version>2.4</version>
            <classifier>jdk15</classifier>
        </dependency>
        <dependency>
            <groupId>org.neo4j.driver</groupId>
            <artifactId>neo4j-java-driver</artifactId>
            <version>4.0.0</version>
        </dependency>
        <dependency>
            <groupId>com.google.code.gson</groupId>
            <artifactId>gson</artifactId>
            <version>2.8.5</version>
        </dependency>
        <dependency>
            <groupId>junit</groupId>
            <artifactId>junit</artifactId>
            <version>4.12</version>
            <!-- scope removed so the default compile scope applies: effective for compilation, tests, and runtime -->
            <!--<scope>test</scope>-->
        </dependency>
        <dependency>
            <groupId>net.minidev</groupId>
            <artifactId>json-smart</artifactId>
            <version>2.3</version>
        </dependency>
        <!-- mail sending -->
        <!--<dependency>-->
        <!--<groupId>com.typesafe.play</groupId>-->
        <!--<artifactId>play-mailer_2.11</artifactId>-->
        <!--<version>7.0.0</version>-->
        <!--</dependency>-->
        <!--<dependency>-->
        <!--<groupId>org.apache.poi</groupId>-->
        <!--<artifactId>poi</artifactId>-->
        <!--<version>3.12</version>-->
        <!--</dependency>-->
        <dependency>
            <groupId>joda-time</groupId>
            <artifactId>joda-time</artifactId>
            <version>2.10.1</version>
        </dependency>
        <!-- https://mvnrepository.com/artifact/org.apache.spark/spark-catalyst -->
        <!--<dependency>-->
        <!--<groupId>org.apache.spark</groupId>-->
        <!--<artifactId>spark-catalyst_2.11</artifactId>-->
        <!--<version>2.3.0</version>-->
        <!--<scope>test</scope>-->
        <!--</dependency>-->
        <!-- Chinese word segmentation -->
        <dependency>
            <groupId>com.huaban</groupId>
            <artifactId>jieba-analysis</artifactId>
            <version>1.0.2</version>
        </dependency>
        <dependency>
            <groupId>com.alibaba</groupId>
            <artifactId>fastjson</artifactId>
            <version>1.2.68</version>
        </dependency>
        <!-- Elasticsearch -->
        <dependency>
            <groupId>org.elasticsearch</groupId>
            <artifactId>elasticsearch-spark-20_2.11</artifactId>
            <version>6.2.4</version>
        </dependency>
        <!-- POI for Excel -->
        <dependency>
            <groupId>org.apache.poi</groupId>
            <artifactId>poi</artifactId>
            <version>3.12</version>
        </dependency>
    </dependencies>
    <build>
        <finalName>spark-test</finalName>
        <plugins>
            <plugin>
                <groupId>net.alchim31.maven</groupId>
                <artifactId>scala-maven-plugin</artifactId>
                <version>3.2.2</version>
                <executions>
                    <execution>
                        <goals>
                            <goal>compile</goal>
                            <goal>testCompile</goal>
                        </goals>
                    </execution>
                </executions>
            </plugin>
            <plugin>
                <groupId>org.apache.maven.plugins</groupId>
                <artifactId>maven-assembly-plugin</artifactId>
                <!--<version>3.0.0</version>-->
                <configuration>
                    <archive>
                        <manifest>
                            <mainClass>WordCount</mainClass>
                        </manifest>
                    </archive>
                    <descriptorRefs>
                        <descriptorRef>jar-with-dependencies</descriptorRef>
                    </descriptorRefs>
                </configuration>
                <executions>
                    <execution>
                        <id>make-assembly</id>
                        <phase>package</phase>
                        <goals>
                            <goal>single</goal>
                        </goals>
                    </execution>
                </executions>
            </plugin>
            <plugin>
                <groupId>org.apache.maven.plugins</groupId>
                <artifactId>maven-compiler-plugin</artifactId>
                <configuration>
                    <source>8</source>
                    <target>8</target>
                </configuration>
            </plugin>
        </plugins>
    </build>
</project>
Straight to the code:
Example 1:
Tags: hbase
DataChangeStreaming:
package com.tzb.sparkstreaming.prod
import java.io.{FileNotFoundException, IOException}
import java.util
import com.alibaba.fastjson.{JSON, JSONObject}
import com.tzb.utils.{ConfigUtils, HBaseUtil, StringUtil}
import net.sf.json.JSONArray
import org.apache.hadoop.hbase.TableExistsException
import org.apache.hadoop.hbase.client._
import org.apache.kafka.clients.consumer.ConsumerRecord
import org.apache.kafka.common.serialization.StringDeserializer
import org.apache.log4j.{Level, Logger}
import org.apache.spark.SparkConf
import org.apache.spark.streaming.dstream.InputDStream
import org.apache.spark.streaming.kafka010._
import org.apache.spark.streaming.{Seconds, StreamingContext}
import org.slf4j
import org.slf4j.LoggerFactory
import scala.collection.mutable.ArrayBuffer
/**
 * <!-- Enable the HBase dependency only when testing HBase; otherwise a local IDEA run against the test environment throws errors -->
 * Spark Streaming with Kafka integration version 0.10
 * Note: this program is an example combining Spark Streaming with Kafka and HBase. In the test environment,
 * Kafka and the ZooKeeper it depends on run on machine 210; HBase and the ZooKeeper it depends on run on machine 211.
 *
 * Tested successfully both locally and on the 210 Linux test machine:
 * open Kafka Tool and push data to a topic,
 * then run the main method to start consuming the data.
 * Example JSON message sent from Kafka Tool:
 * {
 *   "tableName": "hbasetable6",
 *   "option": "put",
 *   "rowKey": "1001",
 *   "families": [
 *     "info1",
 *     "info2"],
 *   "cols_data": {
 *     "name": "tom",
 *     "age": "20"
 *   }
 * }
 * How to check the offsets of a topic for your consumer group:
 * https://blog.51cto.com/13639264/2135877
 * [root@xg kafka_2.11-2.0.0]# bin/kafka-consumer-groups.sh --bootstrap-server 10.21.0.210:9092 --group testgroup --describe
 * Consumer group 'testgroup' has no active members.
 * TOPIC     PARTITION  CURRENT-OFFSET  LOG-END-OFFSET  LAG  CONSUMER-ID  HOST  CLIENT-ID
 * testTopic 0          8               9               1    -            -     -
 *
 * Resetting the offset of a topic for a given consumer group (did not take effect??):
 * bin/kafka-consumer-groups.sh --bootstrap-server 10.21.0.210:9092 --group testgroup --topic testTopic --execute --reset-offsets --to-offset 9
 *
 * Packaged and submitted successfully:
 * spark-submit --master yarn-client --conf spark.driver.memory=2g --class com.tzb.sparkstreaming.prod.DataChangeStreaming --executor-memory 8G --num-executors 5 --executor-cores 2 /var/lib/hadoop-hdfs/spride_sqoop_beijing/bi_table/test/spark-test-jar-with-dependencies.jar >> /var/lib/hadoop-hdfs/spride_sqoop_beijing/bi_table/test/sparkstreaming_datachange.log
 * To run in production, change the IPs or domain names of Kafka, ZooKeeper, HBase and the other components in the code to the production ones, change spark-submit to spark-submit2, and append & to the command to start it in the background so the current window can be closed.
 *
 * How to stop the job:
 * run ps -ef | grep DataChangeStreaming and kill the corresponding process ID.
 */
object DataChangeStreaming {

  // Set the log level
  Logger.getLogger("org.apache").setLevel(Level.ERROR)
  val logger: slf4j.Logger = LoggerFactory.getLogger(this.getClass.getSimpleName)

  def main(args: Array[String]): Unit = {
    val sparkConf: SparkConf = new SparkConf()
      .setAppName(this.getClass.getSimpleName)
      .setMaster("local[*]")
    val ssc: StreamingContext = new StreamingContext(sparkConf, Seconds(5))

    // Location strategy
    val preferredHosts: LocationStrategy = LocationStrategies.PreferConsistent

    // Kafka topics
    val topics = Array("testTopic")
    val groupId = "testgroup"
    val kafkaParams: Map[String, Object] = Map[String, Object](
      "bootstrap.servers" -> ConfigUtils.brokers, // Kafka broker (bootstrap server) addresses
      "key.deserializer" -> classOf[StringDeserializer].getName,
      "value.deserializer" -> classOf[StringDeserializer].getName,
      "group.id" -> groupId,
      // latest, earliest, none
      "auto.offset.reset" -> "earliest",
      "enable.auto.commit" -> "false" // do not auto-commit offsets
    )
    val stream: InputDStream[ConsumerRecord[String, String]] = KafkaUtils.createDirectStream[String, String](
      ssc,
      preferredHosts,
      ConsumerStrategies.Subscribe[String, String