Spark Streaming: Basic Usage
A while back a business team raised a requirement that called for stream processing, and we weighed Flink, Spark Streaming, and Storm for quite some time.
Taking the business requirements together:
1. Latency requirements are at the second level.
2. All existing workloads already run on the Spark framework (low learning cost).
3. We are optimistic about Spark's future development.
In the end we settled on Spark Streaming.
Development environment
- hadoop : 2.6.0
- spark-2.1.0-bin-hadoop2.6
- scala 2.11.6
- confluent-3.1.2
- zookeeper-3.4.5-cdh5.8.2
Setting up and testing Confluent
Reference: [http://www.cnblogs.com/zdfjf/p/5696921.html]
1. Start ZooKeeper
/opt/moduls/zookeeper-3.4.5-cdh5.8.2/sbin/zkServer.sh start
2. Start Kafka
cd /opt/moduls/confluent-3.1.2
./bin/kafka-server-start ./etc/kafka/server.properties
3. Start the Schema Registry
cd /opt/moduls/confluent-3.1.2
./bin/schema-registry-start ./etc/schema-registry/schema-registry.properties
4. Start a producer and create a topic (a programmatic sketch follows this list)
./bin/kafka-avro-console-producer --broker-list localhost:9092 --topic test --property value.schema='{"type":"record","name":"myrecord","fields":[{"name":"f1","type":"string"}]}'
Enter a few test records:
{"f1": "value1"}
{"f1": "value2"}
{"f1": "value3"}
5. Start a consumer
cd /opt/moduls/confluent-3.1.2/bin
./kafka-console-consumer --zookeeper localhost --topic test
6. If the consumer receives the records above, the Kafka setup is working.
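The console producer in step 4 can also be reproduced programmatically. A minimal Scala sketch, assuming Confluent's kafka-avro-serializer is on the classpath; the topic, broker, and registry addresses mirror the local test above and are illustrative:

import java.util.Properties
import org.apache.avro.Schema
import org.apache.avro.generic.GenericData
import org.apache.kafka.clients.producer.{KafkaProducer, ProducerRecord}

object AvroProducerSketch {
  def main(args: Array[String]): Unit = {
    // Same schema the console producer registered above
    val schema = new Schema.Parser().parse(
      """{"type":"record","name":"myrecord","fields":[{"name":"f1","type":"string"}]}""")

    val props = new Properties()
    props.put("bootstrap.servers", "localhost:9092")
    props.put("schema.registry.url", "http://localhost:8081") // default registry port
    props.put("key.serializer", "io.confluent.kafka.serializers.KafkaAvroSerializer")
    props.put("value.serializer", "io.confluent.kafka.serializers.KafkaAvroSerializer")

    val producer = new KafkaProducer[AnyRef, AnyRef](props)
    try {
      for (i <- 1 to 3) {
        val record = new GenericData.Record(schema)
        record.put("f1", s"value$i")
        // Block until the broker acknowledges each record
        producer.send(new ProducerRecord[AnyRef, AnyRef]("test", record)).get()
      }
    } finally {
      producer.close()
    }
  }
}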
Setting up the development environment in IDEA
Reference: [http://blog.csdn.net/bitbyteworld/article/details/52782776?locationNum=8&fps=1]
SBT configuration
version := "1.0"
scalaVersion := "2.11.6"
libraryDependencies += "org.apache.spark" %% "spark-sql" % "2.1.0"
libraryDependencies += "org.apache.spark" %% "spark-streaming" % "2.1.0"
libraryDependencies += "org.apache.spark" % "spark-streaming-kafka-0-8-assembly_2.11" % "2.1.0"
libraryDependencies += "org.elasticsearch" % "elasticsearch" % "5.2.1"
Consumer
import java.text.SimpleDateFormat
import java.util.Date

import io.confluent.kafka.serializers.KafkaAvroDecoder
import kafka.serializer.StringDecoder
import org.apache.spark.SparkConf
import org.apache.spark.sql.SparkSession
import org.apache.spark.streaming.dstream.DStream
import org.apache.spark.streaming.kafka.KafkaUtils
import org.apache.spark.streaming.{Seconds, StreamingContext}
/**
 * Created by squigley on 2/20/16.
 * $ spark-submit --class example.StreamingJob --driver-java-options "-Dconfig.file=conf/application.conf -Dlog4j.configuration=file:conf/log4j.properties" target/scala-2.11/StreamingExample-assembly-1.0.jar
 */
object StreamingJob {

  // Lazily initialized singleton so every micro-batch reuses one SparkSession
  object SparkSessionSingleton {
    @transient private var instance: SparkSession = _

    def getInstance(sparkConf: SparkConf): SparkSession = {
      if (instance == null) {
        instance = SparkSession
          .builder
          .config(sparkConf)
          .getOrCreate()
      }
      instance
    }
  }

  def main(args: Array[String]) {
    if (args.length < 4) {
      System.err.println("Usage: StreamingJob <topic> <group.id> <bootstrap.servers> <schema.registry.url>")
      System.exit(1)
    }
    val Array(topic, groupId, bootstrap, registryUrl) = args

    val sparkConf = new SparkConf()
      .setAppName("StreamingExample")
      .set("spark.serializer", "org.apache.spark.serializer.KryoSerializer")
    // 20-second micro-batches are enough for second-level latency requirements
    val ssc = new StreamingContext(sparkConf, Seconds(20))

    // Direct Kafka stream: keys are strings, values are Avro records
    // decoded against the Schema Registry
    val kafkaParams = Map(
      "bootstrap.servers" -> bootstrap,
      "schema.registry.url" -> registryUrl,
      "group.id" -> groupId
    )
    @transient val kafkaStream: DStream[(String, Object)] =
      KafkaUtils.createDirectStream[String, Object, StringDecoder, KafkaAvroDecoder](
        ssc, kafkaParams, Set(topic)
      )

    // Parse each record's JSON representation into a DataFrame
    kafkaStream.foreachRDD { rdd =>
      // Get the singleton SparkSession for this batch
      val spark = SparkSessionSingleton.getInstance(rdd.sparkContext.getConf)
      val topicValueStrings = rdd.map(_._2.toString)
      val df = spark.read.json(topicValueStrings)
      val time = new SimpleDateFormat("yyyyMMddHHmmss").format(new Date())
      df.show(5, false)
      // df.repartition(1).write.mode("append").parquet("/tmp/ClouderBehavier/ClouderBehavier" + "_" + time)
    }

    ssc.start()
    ssc.awaitTermination()
  }
}
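The commented-out Parquet write above would create an output directory even for empty micro-batches. A sketch of a guarded foreachRDD body, under the assumption that only non-empty batches should be persisted (the /tmp/behavior path is illustrative):

// Guarded sink: skip empty batches, then append each one as Parquet
kafkaStream.foreachRDD { rdd =>
  if (!rdd.isEmpty()) {
    val spark = SparkSessionSingleton.getInstance(rdd.sparkContext.getConf)
    val df = spark.read.json(rdd.map(_._2.toString))
    val time = new SimpleDateFormat("yyyyMMddHHmmss").format(new Date())
    df.repartition(1).write.mode("append").parquet("/tmp/behavior/behavior_" + time)
  }
}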
Packaging
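The spark-submit example in the code comment refers to an assembly jar, so one plausible route (an assumption; the original does not say which tool was used) is the sbt-assembly plugin:

// project/plugins.sbt (sketch; sbt-assembly is an assumed choice)
addSbtPlugin("com.eed3si9n" % "sbt-assembly" % "0.14.5")

Running sbt assembly then produces target/scala-2.11/StreamingExample-assembly-1.0.jar; marking the Spark dependencies as "provided" keeps the jar small, since the cluster supplies them at runtime.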
Submitting and running via script
unset SPARK_HOME
unset SPARK_JAR
export PYSPARK_PYTHON=/usr/bin/python
export PYSPARK_DRIVER_PYTHON=/usr/bin/python
export SPARK_HOME=/opt/spark-2.1.0-bin-hadoop2.6
export HADOOP_CONF_DIR=/opt/hadoop-2.6.0/etc/hadoop
export ONLINE_WS=~/online/tesla
cd $ONLINE_WS
$SPARK_HOME/bin/spark-submit --class StreamingJob --master yarn-client --num-executors 10 \
  --driver-memory 6g --executor-cores 1 --executor-memory 10g \
  --files /home/core_adm/online/tesla/tesla.zip \
  --conf spark.memory.useLegacyMode=true --conf spark.storage.memoryFraction=0.05 \
  --jars /data/test/spark-streaming-kafka-0-8-assembly_2.11-2.1.0.jar,/data/test/kafka-avro-serializer-2.0.1.jar,/data/test/common-config-2.0.1.jar,/data/test/common-utils-2.0.1.jar,/data/test/kafka-schema-registry-client-2.0.1.jar \
  /data/test/avro_streaming.jar \
  topic_test my-consumer-group localhost1:9092,localhost2:9092,localhost3:9092 registry_url_1:8081,registry_url_2:8081,registry_url_3:8081
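The extra jars on --jars (kafka-avro-serializer, common-config, common-utils, kafka-schema-registry-client) supply the Confluent decoder and Schema Registry client, which are not bundled with Spark. Note also that --master yarn-client has been deprecated since Spark 2.0 in favor of --master yarn --deploy-mode client; the old spelling still works but logs a warning.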