First Example
Install nc
yum install -y nc
Example from the official docs
http://spark.apache.org/docs/2.3.0/streaming-programming-guide.html#a-quick-example
Start the services
nc -lk 9999
./run-example streaming.NetworkWordCount localhost 9999
Notes
At least two cores are required: one core receives the data and one core processes it.
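This is why local runs use local[2]; with local[1] the single available thread is taken by the receiver and no batches ever get processed. A minimal sketch:

import org.apache.spark.SparkConf

// Two local threads: one for the socket receiver, one for batch processing.
// local[1] would leave no thread free to process the received data.
val sparkConf = new SparkConf()
  .setAppName("NetworkWordCount")
  .setMaster("local[2]")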
Programming model
NetworkWordCount
import org.apache.spark.SparkConf
import org.apache.spark.storage.StorageLevel
import org.apache.spark.streaming.{Seconds, StreamingContext}

object NetworkWordCount {
  def main(args: Array[String]): Unit = {
    if (args.length < 2) {
      System.err.println("Usage: NetworkWordCount <hostname> <port>")
      System.exit(1)
    }
    val sparkConf = new SparkConf().setAppName("NetworkWordCount")
    // Uncomment when testing on the local machine
    //sparkConf.setMaster("local[2]")
    val ssc = new StreamingContext(sparkConf, Seconds(1))
    // Receive text over a socket; store serialized in memory, spilling to disk if needed
    val lines = ssc.socketTextStream(args(0), args(1).toInt, StorageLevel.MEMORY_AND_DISK_SER)
    val words = lines.flatMap(_.split(" "))
    val wordCounts = words.map(x => (x, 1)).reduceByKey(_ + _)
    wordCounts.print()
    ssc.start()
    ssc.awaitTermination()
  }
}
Local test run
Check which process is listening on port 9999 on cdh1
netstat -tunlp | grep 9999
Start the nc service
nc -lk 9999
Run the program from Eclipse
Test run on Linux
./spark-submit --class com.pcitc.sparkstreaming.NetworkWordCount --master local[2] /root/app/sparktest/streamingwordcount.jar localhost 9999
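With nc and the submitted job both running, typing for example `hello hello spark` into the nc terminal should print a batch summary on the driver console, roughly like this (the timestamp will differ):

-------------------------------------------
Time: 1528806385000 ms
-------------------------------------------
(hello,2)
(spark,1)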
Flume + Kafka + Spark Streaming integration
Design
Server | Service | Source | Sink
cdh2 | Flume | Watch a file for changes (tail -F) | Forward collected data to the Flume agent on cdh1
cdh1 | Flume | Listen for data sent from cdh2/cdh3 | Write collected data to Kafka

Server | Service | Role
cdh1 | Kafka | Data source for Spark Streaming
cdh2 | Kafka | Data source for Spark Streaming
cdh3 | Kafka | Data source for Spark Streaming
Local program | Spark Streaming | Read from Kafka, count identical keys, write results to MySQL
Configuration
flume-conf-1 (cdh2)
agent.sources = execSource
agent.channels = memoryChannel
agent.sinks = avroSink
# For each one of the sources, the type is defined
agent.sources.execSource.type = exec
agent.sources.execSource.command = tail -F /root/data/flume/test.log
agent.sources.execSource.channels = memoryChannel
# Each channel's type is defined.
agent.channels.memoryChannel.type = memory
agent.channels.memoryChannel.capacity = 100
# Each sink's type must be defined
agent.sinks.avroSink.type = avro
agent.sinks.avroSink.channel = memoryChannel
agent.sinks.avroSink.hostname = cdh1
agent.sinks.avroSink.port = 1234
flume-conf-3 (cdh1)
agent.sources = avroSource
agent.channels = memoryChannel
agent.sinks = kafkaSink
# For each one of the sources, the type is defined
agent.sources.avroSource.type = avro
agent.sources.avroSource.channels = memoryChannel
agent.sources.avroSource.bind = 0.0.0.0
agent.sources.avroSource.port = 1234
# Each channel's type is defined.
agent.channels.memoryChannel.type = memory
agent.channels.memoryChannel.capacity = 100
# Each sink's type must be defined
agent.sinks.kafkaSink.type = org.apache.flume.sink.kafka.KafkaSink
agent.sinks.kafkaSink.kafka.topic = test
agent.sinks.kafkaSink.kafka.bootstrap.servers = cdh1:9092,cdh2:9092,cdh3:9092
agent.sinks.kafkaSink.channel = memoryChannel
Spark Streaming core code
import org.apache.kafka.common.serialization.StringDeserializer
import org.apache.spark.SparkConf
import org.apache.spark.streaming.{Seconds, StreamingContext}
import org.apache.spark.streaming.kafka010.ConsumerStrategies.Subscribe
import org.apache.spark.streaming.kafka010.KafkaUtils
import org.apache.spark.streaming.kafka010.LocationStrategies.PreferConsistent

def main(args: Array[String]): Unit = {
  // Create the context with a 1 second batch size
  val sparkConf = new SparkConf().setAppName("applogs").setMaster("local[2]")
  val ssc = new StreamingContext(sparkConf, Seconds(1))
  val kafkaParams = Map[String, Object](
    "bootstrap.servers" -> "cdh1:9092,cdh2:9092,cdh3:9092",
    "key.deserializer" -> classOf[StringDeserializer],
    "value.deserializer" -> classOf[StringDeserializer],
    "group.id" -> "applogs",
    "auto.offset.reset" -> "latest",
    "enable.auto.commit" -> (true: java.lang.Boolean)
  )
  val topics = Array("test")
  val stream = KafkaUtils.createDirectStream[String, String](
    ssc,
    PreferConsistent,
    Subscribe[String, String](topics, kafkaParams)
  )
  val lines = stream.map(record => record.value)
  val wordCounts = lines.map(x => (x, 1)).reduceByKey(_ + _)
  wordCounts.foreachRDD(rdd => {
    print("--------------------------------------------")
    // Write each partition in parallel; myFun is sketched below
    rdd.foreachPartition(myFun)
  })
  wordCounts.print()
  ssc.start()
  ssc.awaitTermination()
}
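myFun is not shown in the original snippet. Below is a minimal sketch of a per-partition MySQL writer, assuming a hypothetical database applogs with a table wordcounts(word, cnt) and the MySQL JDBC driver on the classpath; the URL, table, and credentials are assumptions, not part of the original:

import java.sql.DriverManager

// Hypothetical per-partition writer; table, URL, and credentials are assumptions.
def myFun(records: Iterator[(String, Int)]): Unit = {
  // Open one JDBC connection per partition, not per record
  val conn = DriverManager.getConnection(
    "jdbc:mysql://localhost:3306/applogs", "root", "password")
  try {
    val stmt = conn.prepareStatement(
      "INSERT INTO wordcounts (word, cnt) VALUES (?, ?) " +
      "ON DUPLICATE KEY UPDATE cnt = cnt + VALUES(cnt)")
    records.foreach { case (word, count) =>
      stmt.setString(1, word)
      stmt.setInt(2, count)
      stmt.executeUpdate()
    }
    stmt.close()
  } finally {
    conn.close()
  }
}

Opening the connection inside foreachPartition matters: a connection created on the driver would not be serializable, and a per-record connection would be far too slow.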
Start the programs: cdh2 (Flume)
bin/flume-ng agent -n agent -c conf -f conf/flume-conf-1.properties -Dflume.root.logger=INFO,console
cdh1 (Flume)
bin/flume-ng agent -n agent -c conf -f conf/flume-conf-3.properties -Dflume.root.logger=INFO,console
cdh1 / cdh2 / cdh3 (Kafka)
bin/kafka-server-start.sh config/server.properties &
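The test topic must exist before Flume can write to it. For a Kafka version of this era (pre-2.2, matching Spark 2.3), a creation command along these lines should work; the partition and replication counts are assumptions:

bin/kafka-topics.sh --create --zookeeper cdh1:2181 --replication-factor 3 --partitions 3 --topic test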
Start the local Spark Streaming program
Test the pipeline
cdh2
[root@cdh2 flume]# echo "spark" >>test.log
......
The Spark Streaming program picks up the new data
and the results are written to MySQL