0. Generating logs automatically
import org.apache.log4j.Logger;

// Simulates log generation: writes one log line every 2 seconds
public class LoggerGenerator {

    private static Logger logger = Logger.getLogger(LoggerGenerator.class.getName());

    public static void main(String[] args) throws InterruptedException {
        int index = 0;
        while (true) {
            Thread.sleep(2000);
            logger.info("value:" + index++);
        }
    }
}
The log generator is built on top of log4j, which handles the actual log output.
1. Configure the log4j output format so that the automatically generated logs are sent to port 41414 on 192.168.43.150:
log4j.rootLogger=INFO,stdout,flume
log4j.appender.stdout = org.apache.log4j.ConsoleAppender
log4j.appender.stdout.target = System.out
log4j.appender.stdout.layout=org.apache.log4j.PatternLayout
log4j.appender.stdout.layout.ConversionPattern=%d{yyyy-MM-dd HH:mm:ss,SSS} [%t] [%c] [%p] - %m%n
log4j.appender.flume = org.apache.flume.clients.log4jappender.Log4jAppender
log4j.appender.flume.Hostname = 192.168.43.150
log4j.appender.flume.Port = 41414
log4j.appender.flume.UnsafeMode = true
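Note: org.apache.flume.clients.log4jappender.Log4jAppender is not part of log4j itself, so the application needs the Flume log4j appender client jar on its classpath, otherwise log4j cannot instantiate the flume appender. Assuming Maven is used, the coordinates should be org.apache.flume.flume-ng-clients:flume-ng-log4jappender, with the version matching the Flume installation.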
Collecting the log4j output with Flume
2. Create a Flume configuration that receives the logs arriving on port 41414 of 192.168.43.150.
streaming.conf:
agent1.sources=avro-source
agent1.channels=logger-channel
agent1.sinks=log-sink
#define source
agent1.sources.avro-source.type=avro
agent1.sources.avro-source.bind=0.0.0.0
agent1.sources.avro-source.port=41414
#define channel
agent1.channels.logger-channel.type=memory
#define sink
agent1.sinks.log-sink.type=logger
agent1.sources.avro-source.channels=logger-channel
agent1.sinks.log-sink.channel=logger-channel
flume-ng agent --conf $FLUME_HOME/conf --conf-file $FLUME_HOME/conf/streaming.conf --name agent1 -Dflume.root.logger=INFO,console
Start Flume first, then start the log-generating program in IDEA. If everything is wired up correctly, the agent console should print each incoming event (the logger sink writes events at INFO level), with bodies such as value:0, value:1, and so on.
3. Use KafkaSink to forward the data collected by Flume to Kafka
After Flume collects the logs arriving on port 41414 of 192.168.43.150, it writes them to the Kafka topic streamingtopic.
Start ZooKeeper first, then start Kafka.
Create a topic:
cd $KAFKA_HOME/bin
./kafka-topics.sh --create --zookeeper hadoop000:2181 --replication-factor 1 --partitions 1 --topic streamingtopic
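As a quick optional check, list the topics registered in ZooKeeper and confirm that streamingtopic shows up:
./kafka-topics.sh --list --zookeeper hadoop000:2181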
Create the Flume configuration file streaming2.conf:
agent1.sources=avro-source
agent1.channels=logger-channel
agent1.sinks=kafka-sink
#define source
agent1.sources.avro-source.type=avro
agent1.sources.avro-source.bind=0.0.0.0
agent1.sources.avro-source.port=41414
#define channel
agent1.channels.logger-channel.type=memory
#define sink
agent1.sinks.kafka-sink.type=org.apache.flume.sink.kafka.KafkaSink
agent1.sinks.kafka-sink.topic = streamingtopic
agent1.sinks.kafka-sink.brokerList = hadoop000:9092
agent1.sinks.kafka-sink.requiredAcks = 1
agent1.sinks.kafka-sink.batchSize = 20
agent1.sources.avro-source.channels=logger-channel
agent1.sinks.kafka-sink.channel=logger-channel
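The memory channel above runs with Flume's default capacity, which is plenty for one event every 2 seconds; if the generator is sped up, the channel can be sized explicitly with the standard memory-channel properties, for example:
agent1.channels.logger-channel.capacity = 1000
agent1.channels.logger-channel.transactionCapacity = 100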
Start Flume:
flume-ng agent --conf $FLUME_HOME/conf --conf-file $FLUME_HOME/conf/streaming2.conf --name agent1 -Dflume.root.logger=INFO,console
Start a Kafka console consumer on the streamingtopic topic:
./kafka-console-consumer.sh --zookeeper hadoop000:2181 --topic streamingtopic
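Note: the --zookeeper option only works with older Kafka console consumers; on newer Kafka versions the consumer connects to the broker directly, e.g. ./kafka-console-consumer.sh --bootstrap-server hadoop000:9092 --topic streamingtopic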
Note: agent1.sinks.kafka-sink.batchSize = 20 means the KafkaSink takes events from the channel in batches of up to 20 before writing to Kafka, so the console consumer may see the data arrive in bursts rather than strictly one line at a time.
import org.apache.spark.SparkConf
import org.apache.spark.streaming.kafka.KafkaUtils
import org.apache.spark.streaming.{Seconds, StreamingContext}

/**
 * Spark Streaming integration with Kafka
 */
object KafkaStreamingApp {

  def main(args: Array[String]): Unit = {

    if (args.length != 4) {
      System.err.println("Usage: KafkaStreamingApp <zkQuorum> <group> <topics> <numThreads>")
      System.exit(1)  // exit instead of continuing with missing arguments
    }

    val Array(zkQuorum, group, topics, numThreads) = args

    val sparkConf = new SparkConf()
      //.setAppName("KafkaReceiverWordCount")
      //.setMaster("local[2]")

    val ssc = new StreamingContext(sparkConf, Seconds(5))

    val topicMap = topics.split(",").map((_, numThreads.toInt)).toMap

    // TODO... how Spark Streaming connects to Kafka
    val messages = KafkaUtils.createStream(ssc, zkQuorum, group, topicMap)

    // TODO... test for yourself why the second element is taken
    messages.map(_._2).count().print()

    ssc.start()
    ssc.awaitTermination()
  }
}
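For reference, createStream returns a DStream of (key, value) pairs: the key is the Kafka message key (typically null for the messages produced here) and the value is the message body, which is why the second element of the tuple is taken. To run this from the IDE, re-enable the commented-out setAppName/setMaster lines and pass arguments of the form zkQuorum group topics numThreads, for example (the group name is just an example):
hadoop000:2181 test streamingtopic 1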
End-to-end integration test in a production-like setup:
1. Package the LoggerGenerator class into a jar and run it:
spark-submit --class LoggerGenerator --master local[*] --packages org.apache.spark:spark-streaming-flume_2.11:2.2.0 /home/hadoop/lib/sparkstream-1.0-SNAPSHOT.jar
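Make sure the log4j.properties from step 1 (with Hostname pointing at the Flume host) is on the classpath of this jar; otherwise the flume appender is never configured and no logs reach Flume.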
2. Flume and Kafka are configured exactly as in the test above.
Start Flume and Kafka, then start the log-generating program (now the jar from step 1 rather than the IDEA run).
3. Package the Spark Streaming code into a jar as well and submit it with spark-submit, choosing the deployment mode (local/YARN/standalone/Mesos) according to the environment:
spark-submit \
--class com.imooc.spark.KafkaStreamingApp \
--master local[2] \
--name KafkaStreamingApp \
--packages org.apache.spark:spark-streaming-kafka-0-8_2.11:2.2.0 \
/home/hadoop/lib/sparkstream-1.0-SNAPSHOT.jar hadoop000:2181 test streamingtopic 1
(The four trailing arguments correspond to <zkQuorum> <group> <topics> <numThreads>; createStream goes through ZooKeeper, so the ZooKeeper address is passed rather than the broker list, and the consumer group name is arbitrary.)
In production the overall streaming pipeline is the same; the difference lies in the complexity of the business logic.