1. Start the ZooKeeper and Kafka base environment
Start the ZooKeeper cluster; it must be started on all three machines:
# start ZooKeeper and check its status
cd /opt/module/zookeeper-3.6.3/bin/
./zkServer.sh start
./zkServer.sh status
Start the Kafka cluster; it must likewise be started on all three machines:
cd /opt/module/kafka/kafka/bin/
./kafka-server-start.sh -daemon ../config/server.properties
Check whether Kafka is up:
# list the broker ids registered in ZooKeeper
/opt/module/zookeeper-3.6.3/bin/zkCli.sh
ls /brokers/ids
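Alternatively, a quick jps on each node confirms the processes are running; on a typical install the broker's main class shows up as Kafka and ZooKeeper's as QuorumPeerMain (the PIDs below are placeholders):
jps
# expected on each node (PIDs will differ):
# 12345 QuorumPeerMain
# 12456 Kafka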
Note: if the atguiguNew topic has not been created yet, see my previous article for how to create it.
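For reference, a topic like atguiguNew is usually created along these lines; the partition and replication-factor values here are assumptions, so adjust them to your cluster (on Kafka releases before 2.2, use --zookeeper instead of --bootstrap-server):
cd /opt/module/kafka/kafka/bin/
# create the topic (partition/replication counts are assumptions)
./kafka-topics.sh --create --bootstrap-server 192.168.200.102:9092 --topic atguiguNew --partitions 3 --replication-factor 3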
2. Add the project's jar dependencies, create the source data, and test that Kafka can consume it
Add the following dependencies to pom.xml. The _2.12 suffixes pin the Spark artifacts to Scala 2.12, so keep them consistent with the project's Scala version:
<dependencies>
    <dependency>
        <groupId>org.apache.spark</groupId>
        <artifactId>spark-core_2.12</artifactId>
        <version>3.0.0</version>
    </dependency>
    <dependency>
        <groupId>org.apache.spark</groupId>
        <artifactId>spark-sql_2.12</artifactId>
        <version>3.0.0</version>
    </dependency>
    <dependency>
        <groupId>org.apache.spark</groupId>
        <artifactId>spark-hive_2.12</artifactId>
        <version>3.0.0</version>
    </dependency>
    <dependency>
        <groupId>org.apache.hive</groupId>
        <artifactId>hive-exec</artifactId>
        <version>1.2.1</version>
    </dependency>
    <dependency>
        <groupId>mysql</groupId>
        <artifactId>mysql-connector-java</artifactId>
        <version>5.1.27</version>
    </dependency>
    <dependency>
        <groupId>org.apache.spark</groupId>
        <artifactId>spark-streaming_2.12</artifactId>
        <version>3.0.0</version>
    </dependency>
    <dependency>
        <groupId>org.apache.spark</groupId>
        <artifactId>spark-streaming-kafka-0-10_2.12</artifactId>
        <version>3.0.0</version>
    </dependency>
    <dependency>
        <groupId>com.fasterxml.jackson.core</groupId>
        <artifactId>jackson-core</artifactId>
        <version>2.10.1</version>
    </dependency>
</dependencies>
SparkStreaming10_MockData: generate the mock data
package com.atguigu.bigdata.spark.core.streaming

import org.apache.kafka.clients.producer.{KafkaProducer, ProducerConfig, ProducerRecord}

import java.util.{Properties, Random}
import scala.collection.mutable.ListBuffer

object SparkStreaming10_MockData {

    def main(args: Array[String]): Unit = {
        // Generate mock data
        // Format : timestamp area city userid adid
        // Meaning: timestamp region city user ad
        // Application => Kafka => SparkStreaming => Analysis

        // Kafka producer configuration
        val prop = new Properties()
        prop.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "192.168.200.102:9092")
        prop.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, "org.apache.kafka.common.serialization.StringSerializer")
        prop.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, "org.apache.kafka.common.serialization.StringSerializer")
        val producer = new KafkaProducer[String, String](prop)

        while (true) {
            mockdata().foreach(
                data => {
                    // send each record to Kafka
                    val record = new ProducerRecord[String, String]("atguiguNew", data)
                    producer.send(record)
                    println(data)
                }
            )
            Thread.sleep(2000)
        }
    }

    def mockdata() = {
        val list = ListBuffer[String]()
        val areaList = ListBuffer[String]("华北", "华东", "华南")
        val cityList = ListBuffer[String]("北京", "上海", "深圳")

        // produce a random number of records (0 to 49) per round
        for (i <- 1 to new Random().nextInt(50)) {
            val area = areaList(new Random().nextInt(3))
            val city = cityList(new Random().nextInt(3))
            val userid = new Random().nextInt(6) + 1
            val adid = new Random().nextInt(6) + 1
            list.append(s"${System.currentTimeMillis()} ${area} ${city} ${userid} ${adid}")
        }
        list
    }
}
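With the producer running, you can first confirm from the command line that the records actually land in Kafka. A minimal check with Kafka's console consumer, using the broker address from the producer config above:
cd /opt/module/kafka/kafka/bin/
# tail the atguiguNew topic; each mock record should appear as one line
./kafka-console-consumer.sh --bootstrap-server 192.168.200.102:9092 --topic atguiguNew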
SparkStreaming11_Req1: check whether the data generated above can be consumed from Kafka
package com.atguigu.bigdata.spark.core.streaming

import org.apache.kafka.clients.consumer.{ConsumerConfig, ConsumerRecord}
import org.apache.spark.SparkConf
import org.apache.spark.streaming.dstream.InputDStream
import org.apache.spark.streaming.kafka010.{ConsumerStrategies, KafkaUtils, LocationStrategies}
import org.apache.spark.streaming.{Seconds, StreamingContext}

object SparkStreaming11_Req1 {

    def main(args: Array[String]): Unit = {
        val sparkConf = new SparkConf().setMaster("local[*]").setAppName("SparkStreaming")
        // 3-second batch interval
        val ssc = new StreamingContext(sparkConf, Seconds(3))

        // Kafka consumer configuration
        val kafkaPara: Map[String, Object] = Map[String, Object](
            ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG -> "192.168.200.102:9092,192.168.200.103:9092,192.168.200.104:9092",
            ConsumerConfig.GROUP_ID_CONFIG -> "atguigu",
            ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG -> "org.apache.kafka.common.serialization.StringDeserializer",
            ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG -> "org.apache.kafka.common.serialization.StringDeserializer"
        )

        // subscribe to the atguiguNew topic as a direct stream
        val kafkaDataDS: InputDStream[ConsumerRecord[String, String]] = KafkaUtils.createDirectStream[String, String](
            ssc,
            LocationStrategies.PreferConsistent,
            ConsumerStrategies.Subscribe[String, String](Set("atguiguNew"), kafkaPara)
        )
        // print the record values of each batch
        kafkaDataDS.map(_.value()).print()

        ssc.start()
        ssc.awaitTermination()
    }
}
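To test end to end, start SparkStreaming11_Req1 first, then run SparkStreaming10_MockData in a separate process; every 3-second batch, the print() output should list the records the producer just generated.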