This exercise follows the case study from Lin Ziyu's team at Xiamen University. I expected it to be easy, but it took two days to get working: the main problem was a mismatched Spark version, and only after a lot of debugging and downloading the correct version did it finally run. Notes below:
Tutorial: http://dblab.xmu.edu.cn/post/8274/ Spark course lab case: Spark + Kafka real-time analysis Dashboard (freely shared)
This case implements the following:
Part I. Download the data and test processing it with Kafka
1. Download the dataset: use the "click here" link on the tutorial page to get data_format.zip (a quick look at the file follows).
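Before wiring Kafka in, it helps to peek at the file itself. The sketch below is my own addition, not part of the tutorial, and assumes data_format.zip has been unzipped so that user_log.csv sits at ../data/user_log.csv, the same relative path producer.py below reads from:
import csv

# Print the header and the gender field of the first few rows of user_log.csv
with open("../data/user_log.csv", "r") as f:
    reader = csv.reader(f)
    header = next(reader)      # first row holds the column names
    print(header)              # 'gender' should be the column at index 9
    for i, row in enumerate(reader):
        print(row[9])          # gender value of this log record
        if i >= 4:             # only peek at the first few rows
            break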
2. Install the required Python modules with pip (an import check follows the list):
- pip3 install flask
- pip3 install flask-socketio
- pip3 install kafka-python
- pip3 install kafka
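To confirm the installs worked, a quick check like the one below (my own addition) should run without errors; kafka-python is the package that provides the kafka module imported by the scripts later on:
# Verify that the installed packages can be imported
import flask
import flask_socketio
import kafka
print("flask, flask-socketio and kafka-python imported OK")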
3. Start ZooKeeper and Kafka (a connectivity check in Python follows the commands)
cd /home/linbin/software/kafka_2.12-2.1.0
bin/zookeeper-server-start.sh config/zookeeper.properties &
bin/kafka-server-start.sh config/server.properties &
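Before moving on, it is worth checking that the broker is actually reachable. This is my own sketch, not part of the tutorial, and assumes the default listener on localhost:9092:
from kafka import KafkaConsumer

# Connect to the local broker and list its topics;
# raises kafka.errors.NoBrokersAvailable if Kafka is not running.
consumer = KafkaConsumer(bootstrap_servers='localhost:9092')
print(consumer.topics())
consumer.close()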
4. Create producer.py as the producer that keeps sending data:
# coding: utf-8
import csv
import time
from kafka import KafkaProducer
# Create a KafkaProducer instance for publishing messages to Kafka
producer = KafkaProducer(bootstrap_servers='localhost:9092')
# Open the data file
csvfile = open("../data/user_log.csv", "r")
# Create a reader for the csv file
reader = csv.reader(csvfile)
for line in reader:
    gender = line[9]  # gender is the field at index 9 of each log row
    if gender == 'gender':
        continue  # skip the header row
    time.sleep(0.1)  # send one row every 0.1 seconds
    # Send the gender value to the 'sex' topic
    producer.send('sex', line[9].encode('utf8'))
5. Create consumer.py as a consumer to test receiving the data (a variant that replays earlier messages follows the script):
from kafka import KafkaConsumer
consumer = KafkaConsumer('sex')
for msg in consumer:
    print(msg.value.decode('utf8'))
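By default this consumer connects to localhost:9092 and only sees messages produced after it starts. To replay everything already stored in the topic, a variant like the following works (my own addition, not from the tutorial):
from kafka import KafkaConsumer

# Same consumer, but reading the 'sex' topic from the earliest offset
# instead of only messages produced after startup
consumer = KafkaConsumer('sex',
                         bootstrap_servers='localhost:9092',
                         auto_offset_reset='earliest')
for msg in consumer:
    print(msg.value.decode('utf8'))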
6. Test
Run python3 producer.py in one terminal.
Run python3 consumer.py in another terminal.
If the consumer terminal shows a continuous stream of values, the setup works; a small counting sketch follows.
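For a slightly more informative test than eyeballing the stream, the sketch below (my own addition) tallies the first hundred gender values received; consumer_timeout_ms makes it stop if nothing arrives for ten seconds, and the keys it prints should simply be the dataset's gender codes:
from collections import Counter
from kafka import KafkaConsumer

# Tally the first 100 messages from the 'sex' topic;
# stop waiting if no message arrives for 10 seconds.
consumer = KafkaConsumer('sex', consumer_timeout_ms=10000)
counts = Counter()
for i, msg in enumerate(consumer):
    counts[msg.value.decode('utf8')] += 1
    if i >= 99:
        break
print(counts)
consumer.close()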
Part II. Real-time processing with Spark Streaming (Python version)
1. Check that the following jar exists; if it does not, download it:
~/spark-2.2.0-bin-hadoop2.6/jars/spark-streaming-kafka-0-8-assembly_2.11-2.2.0.jar
2. Edit spark-env.sh
cd ~/spark-2.2.0-bin-hadoop2.6/conf
nano spark-env.sh
Add this line: export PYSPARK_PYTHON=/usr/bin/python3
Change this line: export SPARK_DIST_CLASSPATH=$(/home/linbin/software/hadoop-2.6.0-cdh5.15.1/bin/hadoop classpath):/home/linbin/software/kafka_2.12-2.1.0/libs/*
3. Edit the file ~/spark-2.2.0-bin-hadoop2.6/bin/pyspark
Find PYSPARK_PYTHON=python and change it to PYSPARK_PYTHON=python3 (a quick interpreter check follows).
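To confirm PySpark now runs on Python 3, a one-liner typed inside the pyspark shell (my own addition) is enough:
import sys
print(sys.version)   # should report a 3.x interpreter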
4. Create the directory ~/spark-2.2.0-bin-hadoop2.6/mycode/kafka
and create the file kafka_test.py in it:
from kafka import KafkaProducer
from pyspark.streaming import StreamingContext
from pyspark.streaming.kafka import KafkaUtils
from pyspark import SparkConf, SparkContext
import json
import sys
def KafkaWordCount(zkQuorum, group, topics, numThreads):
    spark_conf = SparkConf().setAppName("KafkaWordCount")
    sc = SparkContext(conf=spark_conf)
    sc.setLogLevel("ERROR")
    ssc = StreamingContext(sc, 1)
    ssc.checkpoint(".")
    # This writes the checkpoint files to the distributed file system HDFS, so Hadoop must be running
    #ssc.checkpoint("file:///home/linbin/software/spark-1.6.0-cdh5.15.1/mycode/kafka//checkpoint")
    topicAry = topics.split(",")
    # Turn the topics into a hashmap; a Python dict serves as the hashmap
    topicMap = {}
    for topic in topicAry:
        topicMap[topic] = numThreads
    lines = KafkaUtils.createStream(ssc, zkQuorum, group, topicMap).map(lambda x: x[1])
    words = lines.flatMap(lambda x: x.split(" "))
    # Windowed count per value: add new counts and subtract expired ones (window and slide interval are both 1 second)
    wordcount = words.map(lambda x: (x, 1)).reduceByKeyAndWindow((lambda x, y: x + y), (lambda x, y: x - y), 1, 1, 1)