Spark Big Data: Kafka as an Input Source
Kafka basics
Kafka is a high-throughput, distributed publish-subscribe messaging system: producers publish messages and consumers subscribe to them.
broker: each server node in a Kafka cluster is called a broker.
topic: messages are published to a topic; to receive them, a consumer subscribes to that topic.
partition: a single topic can carry a very large number of messages, so it is split into partitions spread across multiple servers.
producer: publishes messages to Kafka brokers.
consumer: reads messages from Kafka brokers.
group: every consumer belongs to exactly one consumer group. (The sketch after the architecture diagram below shows how these pieces fit together in code.)
Kafka architecture diagram (figure not reproduced here).
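To make the broker / topic / partition / consumer-group relationship concrete, here is a minimal sketch of a plain Kafka consumer using the kafka-clients API; it is not part of the tutorial's own code, and the topic wordsendertest and group id demo-group are only illustrative. Consumers sharing a group.id divide a topic's partitions among themselves, while consumers in different groups each receive every message.
import java.util.Properties
import scala.collection.JavaConverters._
import org.apache.kafka.clients.consumer.{ConsumerConfig, KafkaConsumer}

object GroupDemo {
  def main(args: Array[String]): Unit = {
    val props = new Properties()
    props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "192.168.1.30:9092")  // any broker in the cluster
    props.put(ConsumerConfig.GROUP_ID_CONFIG, "demo-group")                  // consumers sharing this id split the partitions
    props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG,
      "org.apache.kafka.common.serialization.StringDeserializer")
    props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG,
      "org.apache.kafka.common.serialization.StringDeserializer")
    val consumer = new KafkaConsumer[String, String](props)
    consumer.subscribe(java.util.Arrays.asList("wordsendertest"))            // subscribe by topic name
    val records = consumer.poll(java.time.Duration.ofSeconds(5))             // Kafka 2.0+ client API
    for (r <- records.asScala)
      println(s"partition=${r.partition} offset=${r.offset} value=${r.value}")
    consumer.close()
  }
}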
Test-starting Kafka:
1. Kafka relies on a ZooKeeper service, so start ZooKeeper first. A distributed ZooKeeper ensemble generally only needs three nodes; here it is installed on 192.168.1.30, 192.168.1.31 and 192.168.1.32. Start it on each node as follows:
cd /opt/opensoc/zookeeper-3.4.12/bin/
./zkServer.sh start
2. Then start the Kafka service. Here Kafka is installed on six machines (192.168.1.30 through 192.168.1.35); start it on each of them as follows (-daemon runs the foreground command in the background):
cd /opt/opensoc/kafka_2.12-2.0.1/bin/
./kafka-server-start.sh -daemon ../config/server.properties
3. Create a topic with a replication factor of 1 and a single partition:
cd /opt/opensoc/kafka_2.12-2.0.1/
./bin/kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 1 --partitions 1 --topic wordsendertest
// List all existing topics to verify that the topic was created successfully
./bin/kafka-topics.sh --list --zookeeper localhost:2181
4. Start a console producer:
cd /opt/opensoc/kafka_2.12-2.0.1/
bin/kafka-console-producer.sh --broker-list 192.168.1.30:9092 --topic wordsendertest
5. Start a console consumer:
cd /opt/opensoc/kafka_2.12-2.0.1/
bin/kafka-console-consumer.sh --bootstrap-server 192.168.1.30:9092 --topic wordsendertest --from-beginning
Alternatively, on older Kafka releases (before 2.0, where the console consumer still accepted --zookeeper):
cd /usr/local/kafka
./bin/kafka-console-consumer.sh --zookeeper localhost:2181 --topic wordsendertest --from-beginning
6. Type words into the producer terminal and press Enter; they should show up in the consumer terminal.
Spark + Kafka
1. Download the spark-streaming-kafka-0-10_2.11-2.2.1.jar connector and copy it into Spark's jars directory; also copy all jars under Kafka's libs directory into Spark's jars directory (create the target subdirectory first if it does not exist). For other versions, consult the official documentation for the matching artifact:
cd /usr/local/kafka/libs
ls
cp ./* /usr/local/spark/jars/kafka
2. When starting spark-shell, add that jar directory to the classpath (for example via the --jars option) so that the Kafka-related imports resolve without errors.
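As a quick sanity check (nothing here is specific to this tutorial), the following imports should complete without error when pasted into spark-shell if the jars are on the classpath:
// Paste into spark-shell; if the Kafka jars are visible these imports succeed silently
import org.apache.kafka.clients.producer.KafkaProducer
import org.apache.kafka.common.serialization.StringDeserializer
import org.apache.spark.streaming.kafka010.KafkaUtils
import org.apache.spark.streaming.kafka010.LocationStrategies.PreferConsistent
import org.apache.spark.streaming.kafka010.ConsumerStrategies.Subscribe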
3. Write a Spark Streaming program that uses Kafka as the data source:
- Create the code directory
cd /usr/local/spark/mycode
mkdir kafka
cd kafka
mkdir -p src/main/scala
cd src/main/scala
vim KafkaWordProducer.scala
- Write the producer program (each message consists of 5 random digits between 0 and 9, and 3 messages are sent per second)
// Producer
import java.util.HashMap
import org.apache.spark.SparkConf
import org.apache.spark.streaming._
import org.apache.kafka.clients.producer.{KafkaProducer,ProducerConfig,ProducerRecord}
import org.apache.spark.streaming.kafka010._
object KafkaWordProducer {
  def main(args: Array[String]) {
    if (args.length < 4) {
      System.err.println("Usage: KafkaWordProducer <metadataBrokerList> <topic> " +
        "<messagesPerSec> <wordsPerMessage>")
      System.exit(1)
    }
    val Array(brokers, topic, messagesPerSec, wordsPerMessage) = args
    // Kafka producer connection properties
    val props = new HashMap[String, Object]()
    props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, brokers)
    props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG,
      "org.apache.kafka.common.serialization.StringSerializer")
    props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG,
      "org.apache.kafka.common.serialization.StringSerializer")
    val producer = new KafkaProducer[String, String](props)
    // Send messagesPerSec messages per second, each made of wordsPerMessage random digits
    while (true) {
      (1 to messagesPerSec.toInt).foreach { messageNum =>
        val str = (1 to wordsPerMessage.toInt).map(x => scala.util.Random.nextInt(10).toString)
          .mkString(" ")
        println(str)
        val message = new ProducerRecord[String, String](topic, null, str)
        producer.send(message)
      }
      Thread.sleep(1000)
    }
  }
}
// Be sure to use a concrete broker address here instead of localhost, otherwise odd connectivity problems can occur
KafkaWordProducer.main(Array("192.168.1.30:9092","wordsender","3","5"))
- Write the consumer program (it performs a word count over the messages above and prints the result)
// Consumer
import org.apache.spark._
import org.apache.spark.SparkConf
import org.apache.spark.streaming._
import org.apache.spark.streaming.StreamingContext._
import org.apache.kafka.clients.consumer.ConsumerRecord
import org.apache.kafka.common.serialization.StringDeserializer
import org.apache.spark.streaming.kafka010._
import org.apache.spark.streaming.kafka010.KafkaUtils
import org.apache.spark.streaming.kafka010.LocationStrategies.PreferConsistent
import org.apache.spark.streaming.kafka010.ConsumerStrategies.Subscribe
// Singleton helper that configures the log level
import org.apache.spark.internal.Logging
import org.apache.log4j.{Level, Logger}
/** Utility functions for Spark Streaming examples. */
object StreamingExamples extends Logging {
  /** Set reasonable logging levels for streaming if the user has not configured log4j. */
  def setStreamingLogLevels() {
    val log4jInitialized = Logger.getRootLogger.getAllAppenders.hasMoreElements
    if (!log4jInitialized) {
      // We first log something to initialize Spark's default logging, then we override the
      // logging level.
      logInfo("Setting log level to [WARN] for streaming example." +
        " To override add a custom log4j.properties to the classpath.")
      Logger.getRootLogger.setLevel(Level.WARN)
    }
  }
}
object KafkaWordCount {
  def main(args: Array[String]) {
    // Configure the logging level
    StreamingExamples.setStreamingLogLevels()
    // Create the StreamingContext with a 10-second batch interval
    val sc = new SparkConf().setAppName("KafkaWordCount").setMaster("local[2]")
    val ssc = new StreamingContext(sc, Seconds(10))
    // Set a checkpoint directory (required by the windowed reduce with an inverse function below)
    ssc.checkpoint("home/ziyu_bigdata/quick_learn_spark/checkpoint")
    // If HDFS is available, point the checkpoint at an HDFS path instead,
    // e.g. ssc.checkpoint("root/usr/checkpoint")
    // The old receiver-based (Kafka 0.8) API would have looked like this:
    // val zkQuorum = "localhost:2181"   // ZooKeeper server address
    // val group = "1"                   // consumer group of the topic; any name works, e.g. "test-consumer-group"
    // val topics = "wordsender"
    // val numThreads = 1                // number of consumer threads per topic
    // val topicMap = topics.split(",").map((_, numThreads.toInt)).toMap
    // val lineMap = KafkaUtils.createStream(ssc, zkQuorum, group, topicMap)
    // val lines = lineMap.map(_._2)
    // With the 0-10 direct connector, configure the consumer and subscribe instead:
    val kafkaParams = Map[String, Object](
      "bootstrap.servers" -> "192.168.1.30:9092",
      "key.deserializer" -> classOf[StringDeserializer],
      "value.deserializer" -> classOf[StringDeserializer],
      "group.id" -> "1",
      "auto.offset.reset" -> "latest",
      "enable.auto.commit" -> (false: java.lang.Boolean)
    )
    val topics = Array("wordsender")
    val lineMap = KafkaUtils.createDirectStream[String, String](
      ssc,
      PreferConsistent,
      Subscribe[String, String](topics, kafkaParams)
    )
    val lines = lineMap.map(record => record.value)
    val words = lines.flatMap(_.split(" "))
    val pair = words.map(x => (x, 1))
    // Word count over a 2-minute sliding window that advances every 10 seconds
    val wordCounts = pair.reduceByKeyAndWindow(_ + _, _ - _, Minutes(2), Seconds(10), 2)
    wordCounts.print()
    ssc.start()
    ssc.awaitTermination()
  }
}
KafkaWordCount.main(Array())
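One note on the windowed count above: the reduceByKeyAndWindow variant used here takes an inverse function (_ - _) so that counts sliding out of the 2-minute window are subtracted incrementally, which is why the checkpoint directory is mandatory. As a rough sketch, if you would rather skip checkpointing, the simpler (but less efficient) variant recomputes each window from scratch:
// Recomputes the whole 2-minute window every 10 seconds; no checkpoint required
val wordCounts = pair.reduceByKeyAndWindow((a: Int, b: Int) => a + b, Minutes(2), Seconds(10))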
That completes the Spark Streaming pipeline with Kafka as the data source.