Spark Streaming Study Notes 4 (2020-2-15): Spark Streaming Real-Time Stream Processing Project in Action

12-8 Producing a batch of data every minute with a scheduled job (cron)

1. Online crontab expression tool

https://tool.lu/crontab/

Edit the crontab:

crontab -e

Add an entry that runs the generator script once a minute:

*/1 * * * * /home/hadoop/data/project/log_generator.sh

To disable the job, comment the line out with #.
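The notes don't show the contents of log_generator.sh. A minimal sketch, assuming the Python generator is a script such as generate_log.py sitting in the project directory (both the generator's file name and its location are assumptions, not from the original notes):

```shell
#!/bin/bash
# log_generator.sh — hypothetical wrapper invoked by cron once a minute.
# Assumes the Python log generator is generate_log.py in the project dir.
cd /home/hadoop/data/project
python generate_log.py
```

Remember to make the script executable (chmod +x log_generator.sh); cron runs jobs with a minimal environment, so absolute paths are safest.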


2. Feeding the logs from the Python log generator into Flume

Name the agent configuration file streaming_project.conf.

Component selection: access.log ==> console output

    source:  exec
    channel: memory
    sink:    logger

Full streaming_project.conf configuration:
exec-memory-logger.sources = exec-source
exec-memory-logger.sinks = logger-sink
exec-memory-logger.channels = memory-channel

exec-memory-logger.sources.exec-source.type = exec
exec-memory-logger.sources.exec-source.command = tail -F /home/hadoop/data/project/logs/access.log
exec-memory-logger.sources.exec-source.shell = /bin/sh -c

exec-memory-logger.channels.memory-channel.type = memory

exec-memory-logger.sinks.logger-sink.type = logger

exec-memory-logger.sources.exec-source.channels = memory-channel
exec-memory-logger.sinks.logger-sink.channel = memory-channel
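As configured, the memory channel runs with Flume's defaults (capacity = 100 events, transactionCapacity = 100). For a once-a-minute batch that is usually enough; if the generator ever produces larger bursts, the limits can be raised explicitly. The values below are illustrative, not from the original notes:

```properties
exec-memory-logger.channels.memory-channel.capacity = 1000
exec-memory-logger.channels.memory-channel.transactionCapacity = 100
```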
 

Startup command:

flume-ng agent --name exec-memory-logger --conf $FLUME_HOME/conf --conf-file /home/hadoop/data/project/streaming_project.conf -Dflume.root.logger=INFO,console

3. Logs ==> Kafka

(1) Start ZooKeeper:

cd /home/hadoop/app/zookeeper-3.4.5-cdh5.7.0/bin
./zkServer.sh start
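Before starting Kafka it is worth confirming that ZooKeeper actually came up; the status subcommand does this:

```shell
# From the same bin directory; a standalone deployment reports "Mode: standalone"
./zkServer.sh status
```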

 

(2) Start the Kafka server:

cd /home/hadoop/app/kafka_2.11-0.9.0.0/bin/
./kafka-server-start.sh -daemon /home/hadoop/app/kafka_2.11-0.9.0.0/config/server.properties

Now modify the Flume configuration so that the sink writes to Kafka instead of the console, and save it as streaming_project2.conf:

exec-memory-kafka.sources = exec-source
exec-memory-kafka.sinks = kafka-sink
exec-memory-kafka.channels = memory-channel

exec-memory-kafka.sources.exec-source.type = exec
exec-memory-kafka.sources.exec-source.command = tail -F /home/hadoop/data/project/logs/access.log
exec-memory-kafka.sources.exec-source.shell = /bin/sh -c

exec-memory-kafka.channels.memory-channel.type = memory

exec-memory-kafka.sinks.kafka-sink.type = org.apache.flume.sink.kafka.KafkaSink
exec-memory-kafka.sinks.kafka-sink.brokerList = hadoop000:9092
exec-memory-kafka.sinks.kafka-sink.topic = streamingtopic
exec-memory-kafka.sinks.kafka-sink.batchSize = 5
exec-memory-kafka.sinks.kafka-sink.requiredAcks = 1

exec-memory-kafka.sources.exec-source.channels = memory-channel
exec-memory-kafka.sinks.kafka-sink.channel = memory-channel
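The sink above publishes to the topic streamingtopic. Kafka's broker-side auto.create.topics.enable defaults to true, so the topic may be created automatically on first write, but creating it explicitly avoids surprises (Kafka 0.9 tooling registers topics through ZooKeeper). The partition and replication values below are illustrative for a single-broker setup:

```shell
cd /home/hadoop/app/kafka_2.11-0.9.0.0/bin/
./kafka-topics.sh --create --zookeeper hadoop000:2181 \
  --replication-factor 1 --partitions 1 --topic streamingtopic
```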
 

(3) Start a Kafka console consumer to check the data:

kafka-console-consumer.sh --zookeeper hadoop000:2181 --topic streamingtopic

(4) Start the Flume agent:

flume-ng agent --name exec-memory-kafka --conf $FLUME_HOME/conf --conf-file /home/hadoop/data/project/streaming_project2.conf -Dflume.root.logger=INFO,console
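Once the cron job, the Flume agent, and the console consumer are all running, the pipeline can be smoke-tested by appending a line to the tailed file; it should show up in the consumer within a few seconds. The log line below is an arbitrary example, not the generator's real format:

```shell
echo "127.0.0.1 [$(date '+%Y-%m-%d %H:%M:%S')] GET /class/112.html" >> /home/hadoop/data/project/logs/access.log
```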
