Spark Streaming Real-Time Streaming Project in Practice, Notes 10

Integrating Spark Streaming with Kafka in Practice

Example 1: Receiver-based approach

1) Start ZooKeeper first
2) Start Kafka

[hadoop@hadoop000 bin]$ ./kafka-server-start.sh -daemon /home/hadoop/app/kafka_2.11-0.9.0.0/config/server.properties
[hadoop@hadoop000 bin]$ jps    # verify that the Kafka broker process is running

3) Create a topic

[hadoop@hadoop000 bin]$ ./kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 1 --partitions 1 --topic  kafka_streaming_topic
WARNING: Due to limitations in metric names, topics with a period ('.') or underscore ('_') could collide. To avoid issues it is best to use either, but not both.
Created topic "kafka_streaming_topic".


[hadoop@hadoop000 bin]$ ./kafka-topics.sh --list --zookeeper localhost:2181
hello_topic
kafka_streaming_topic
my-replicated-topic

4) Use the console producer and consumer to verify that messages can be produced and consumed correctly

./kafka-console-producer.sh --broker-list localhost:9092 --topic kafka_streaming_topic

./kafka-console-consumer.sh --zookeeper localhost:2181 --topic kafka_streaming_topic

spark-submit \
--class com.imooc.spark.KafkaReceiverWordCount \
--master local[2] \
--name KafkaReceiverWordCount \
--packages org.apache.spark:spark-streaming-kafka-0-8_2.11:2.2.0 \
/home/hadoop/lib/sparktrain-1.0.jar hadoop000:2181 test kafka_streaming_topic 1
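
The class submitted above takes four arguments: the ZooKeeper quorum, the consumer group, the topic list, and the number of receiver threads. A minimal sketch of what `KafkaReceiverWordCount` might look like, built on the receiver-based `KafkaUtils.createStream` API from `spark-streaming-kafka-0-8` (the body here is an assumption; only the class name and argument order come from the spark-submit command):

```scala
import org.apache.spark.SparkConf
import org.apache.spark.streaming.{Seconds, StreamingContext}
import org.apache.spark.streaming.kafka.KafkaUtils

object KafkaReceiverWordCount {
  def main(args: Array[String]): Unit = {
    if (args.length != 4) {
      System.err.println("Usage: KafkaReceiverWordCount <zkQuorum> <group> <topics> <numThreads>")
      System.exit(1)
    }
    val Array(zkQuorum, group, topics, numThreads) = args

    // master and appName are supplied via spark-submit (--master, --name)
    val sparkConf = new SparkConf()
    val ssc = new StreamingContext(sparkConf, Seconds(5))

    // topics is a comma-separated list; each topic gets numThreads receiver threads
    val topicMap = topics.split(",").map((_, numThreads.toInt)).toMap
    val messages = KafkaUtils.createStream(ssc, zkQuorum, group, topicMap)

    // the receiver stream yields (key, value) pairs; we only need the message value
    messages.map(_._2)
      .flatMap(_.split(" "))
      .map((_, 1))
      .reduceByKey(_ + _)
      .print()

    ssc.start()
    ssc.awaitTermination()
  }
}
```

Note that with the receiver-based approach, offsets are tracked in ZooKeeper by the consumer group, and the receiver itself occupies one of the cores allocated via `local[2]`.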

Example 2: Direct approach

Kafka: Spark Streaming 2.2.0 is compatible with Kafka broker versions 0.8.2.1 or higher. See the Kafka Integration Guide for more details. (This is how the version was chosen.)
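
Unlike the receiver-based approach, the direct approach uses no receiver: each batch reads a range of offsets straight from the Kafka brokers, giving exactly-once semantics on the read side and one-to-one mapping between Kafka partitions and RDD partitions. A hedged sketch using `KafkaUtils.createDirectStream` from the same `spark-streaming-kafka-0-8` package (the class name `KafkaDirectWordCount` and its body are hypothetical, not from the source):

```scala
import kafka.serializer.StringDecoder
import org.apache.spark.SparkConf
import org.apache.spark.streaming.{Seconds, StreamingContext}
import org.apache.spark.streaming.kafka.KafkaUtils

object KafkaDirectWordCount {
  def main(args: Array[String]): Unit = {
    if (args.length != 2) {
      System.err.println("Usage: KafkaDirectWordCount <brokers> <topics>")
      System.exit(1)
    }
    // note: the direct approach connects to Kafka brokers, not ZooKeeper
    val Array(brokers, topics) = args

    val sparkConf = new SparkConf()
    val ssc = new StreamingContext(sparkConf, Seconds(5))

    val kafkaParams = Map[String, String]("metadata.broker.list" -> brokers)
    val topicSet = topics.split(",").toSet
    val messages = KafkaUtils.createDirectStream[String, String, StringDecoder, StringDecoder](
      ssc, kafkaParams, topicSet)

    messages.map(_._2)
      .flatMap(_.split(" "))
      .map((_, 1))
      .reduceByKey(_ + _)
      .print()

    ssc.start()
    ssc.awaitTermination()
  }
}
```

When submitting, pass the broker list (e.g. `hadoop000:9092`) instead of the ZooKeeper quorum; no consumer group or receiver-thread count is needed.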
