Write a SocketTest.java file to simulate a log file being generated one record at a time
SocketTest.java:
import java.io.*;

public class SocketTest {
    public static void main(String[] args) throws Exception {
        File ctoFile = new File(args[0]);  // source data file
        File dest = new File(args[1]);     // target file
        InputStreamReader rdCto = new InputStreamReader(new FileInputStream(ctoFile));
        OutputStreamWriter writer = new OutputStreamWriter(new FileOutputStream(dest));
        BufferedReader bfReader = new BufferedReader(rdCto);
        BufferedWriter bwriter = new BufferedWriter(writer);
        PrintWriter pw = new PrintWriter(bwriter);
        String txtline;
        while ((txtline = bfReader.readLine()) != null) {
            Thread.sleep(2000);  // pause 2 seconds to simulate one log record arriving at a time
            pw.println(txtline);
            pw.flush();
        }
        bfReader.close();
        pw.close();
    }
}
Compile it to a .class file and upload it to the Linux machine. It takes two arguments: the first is the source data file, the second is the target file. (Later it will be started with java SocketTest access.20120104.log sparktest/data.log.)
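For example, it can be compiled on the Linux machine with javac (assuming a JDK is on the PATH):
javac SocketTest.java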
Under ~, create a directory sparktest and a new file data.log inside it; this file will hold the uploaded records as they are written one by one.
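In shell form, roughly:
mkdir ~/sparktest
touch ~/sparktest/data.log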
Write the Flume configuration file (follow the official documentation; the 1.6 and 1.8 configurations differ, so go to http://flume.apache.org/, scroll down to the 1.6 release, and look up its source and sink in http://flume.apache.org/releases/content/1.6.0/FlumeUserGuide.html)
Configure the source
# set the source type to exec
agent.sources.r1.type = exec
# the command --- tail the monitored file
agent.sources.r1.command = tail -F /home/ice/sparktest/data.log
# bind the source to the channel
agent.sources.r1.channels = c1
Configure the sink
# log to the console first, to check whether the source actually picks up new lines from the input file
agent.sinks.k1.type = logger
Then change the sink
# point the sink at Kafka
agent.sinks.k1.type = org.apache.flume.sink.kafka.KafkaSink
agent.sinks.k1.topic = test
agent.sinks.k1.brokerList = localhost:9092
agent.sinks.k1.batchSize = 20
agent.sinks.k1.requiredAcks = 1
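The snippets above leave out the top-level component declarations and the channel definition that the agent also needs. Assembled, tokafka.conf could look roughly like this (the memory channel c1 and its capacity settings are assumptions, not taken from the original config):
agent.sources = r1
agent.channels = c1
agent.sinks = k1

agent.sources.r1.type = exec
agent.sources.r1.command = tail -F /home/ice/sparktest/data.log
agent.sources.r1.channels = c1

agent.channels.c1.type = memory
agent.channels.c1.capacity = 1000
agent.channels.c1.transactionCapacity = 100

agent.sinks.k1.type = org.apache.flume.sink.kafka.KafkaSink
agent.sinks.k1.topic = test
agent.sinks.k1.brokerList = localhost:9092
agent.sinks.k1.batchSize = 20
agent.sinks.k1.requiredAcks = 1
agent.sinks.k1.channel = c1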
Start Kafka, check whether the data reaches Kafka, and consume it with a consumer
Start ZooKeeper
zookeeper-server-start.sh ~/soft/kafka/config/zookeeper.properties (or use zkServer.sh start)
Start the Kafka server
kafka-server-start.sh ~/soft/kafka/config/server.properties
Create the topic
kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 1 --partitions 1 --topic test
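Optionally, verify that the topic exists (same ZooKeeper address as above):
kafka-topics.sh --list --zookeeper localhost:2181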
Start SocketTest
java SocketTest access.20120104.log sparktest/data.log
Start Flume
flume-ng agent --conf ~/soft/flume/conf --conf-file ~/soft/flume/conf/tokafka.conf --name agent -Dflume.root.logger=INFO,console
Start a Kafka console consumer to test
kafka-console-consumer.sh --bootstrap-server localhost:9092 --topic test --from-beginning
Kafka receives the data: success!
Add the spark-streaming-kafka-0-10_2.11 dependency
Maven:
<!-- https://mvnrepository.com/artifact/org.apache.spark/spark-streaming-kafka-0-10 -->
<dependency>
<groupId>org.apache.spark</groupId>
<artifactId>spark-streaming-kafka-0-10_2.11</artifactId>
<version>2.1.2</version>
</dependency>
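The connector does not provide Spark itself, so the project also needs the matching Spark streaming artifact on the classpath (the 2.1.2 version here is assumed to match the connector); if it is not already declared, something along these lines is needed:
<dependency>
<groupId>org.apache.spark</groupId>
<artifactId>spark-streaming_2.11</artifactId>
<version>2.1.2</version>
</dependency>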
The streaming program
import org.apache.kafka.common.serialization.StringDeserializer
import org.apache.spark.SparkConf
import org.apache.spark.streaming.{Seconds, StreamingContext}
import org.apache.spark.streaming.kafka010.ConsumerStrategies.Subscribe
import org.apache.spark.streaming.kafka010.KafkaUtils
import org.apache.spark.streaming.kafka010.LocationStrategies.PreferConsistent

object FromKafka {
  def main(args: Array[String]): Unit = {
    val conf = new SparkConf().setAppName("fromkafka").setMaster("local[2]")
    val ssc = new StreamingContext(conf, Seconds(5))
    ssc.sparkContext.setLogLevel("ERROR")

    val kafkaParams = Map[String, Object](
      // address of the Kafka broker; use localhost:9092 when running on the same machine as Kafka
      "bootstrap.servers" -> "master:9092",
      "key.deserializer" -> classOf[StringDeserializer],
      "value.deserializer" -> classOf[StringDeserializer],
      "group.id" -> "test",
      "auto.offset.reset" -> "latest",
      "enable.auto.commit" -> (false: java.lang.Boolean)
    )
    val topics = Array("test")
    val stream = KafkaUtils.createDirectStream[String, String](
      ssc,
      PreferConsistent,
      Subscribe[String, String](topics, kafkaParams)
    )
    // print the message values (the log lines); the keys written by the Flume sink are typically null
    stream.map(record => record.value()).print()

    ssc.start()
    ssc.awaitTermination()
  }
}
Run it and test
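The job can be run directly from the IDE, or packaged into a jar and submitted with spark-submit; the jar name below is a placeholder, and FromKafka is just the object name used in the sketch above:
spark-submit --class FromKafka --master local[2] target/fromkafka.jar
With ZooKeeper, Kafka, Flume, and SocketTest all running, the console should print a new batch of log lines roughly every 5 seconds.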