The Way of Spark (Advanced), Spark from Beginner to Master: Section 15, Kafka 0.8.2.1 Cluster Setup

Author: Zhou Zhihu (周志湖)
WeChat: zhouzhihubeyond

This section lays the groundwork for the next one, in which Kafka is used together with Spark Streaming.

Main contents

1. Kafka cluster setup

1. Kafka cluster setup

  1. Kafka installation and configuration

    Download Scala 2.10 - kafka_2.10-0.8.2.1.tgz from
    http://kafka.apache.org/downloads.html
    When the download finishes, extract it with:

tar -zxvf  kafka_2.10-0.8.2.1.tgz 

This extracts the archive; the resulting directory layout is shown below.
[Screenshot: layout of the extracted kafka_2.10-0.8.2.1 directory]
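For reference, the top level of a stock kafka_2.10-0.8.2.1 tarball unpacks to roughly the following (your listing may differ slightly):

root@sparkmaster:/hadoopLearning# ls kafka_2.10-0.8.2.1
LICENSE  NOTICE  bin  config  libs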

Go into the config directory and edit server.properties so that it reads as follows:

############################# Server Basics #############################

# The id of the broker. This must be set to a unique integer for each broker.
broker.id=0

############################# Socket Server Settings #############################

# The port the socket server listens on
port=9092

# Hostname the broker will bind to. If not set, the server will bind to all interfaces
host.name=sparkmaster

# ... (settings in between omitted; the defaults are fine)
############################# Zookeeper #############################

# Zookeeper connection string (see zookeeper docs for details).
# This is a comma separated host:port pairs, each corresponding to a zk
# server. e.g. "127.0.0.1:3000,127.0.0.1:3001,127.0.0.1:3002".
# You can also append an optional chroot string to the urls to specify the
# root directory for all kafka znodes.
zookeeper.connect=sparkmaster:2181,sparkslave01:2181,sparkslave02:2181

# Timeout in ms for connecting to zookeeper
zookeeper.connection.timeout.ms=6000
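One of the omitted defaults worth a second look is log.dirs: the stock server.properties points it at /tmp/kafka-logs, which on many systems is wiped at reboot. For anything beyond a quick test you may want to point it at a persistent path, for example (the path below is only an illustration):

# Directory in which Kafka stores its log segments (the message data)
log.dirs=/hadoopLearning/kafka-logs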

Copy the whole installation directory to the other two machines:

root@sparkmaster:/hadoopLearning# scp -r kafka_2.10-0.8.2.1/ sparkslave01:/hadoopLearning/ 
root@sparkmaster:/hadoopLearning# scp -r kafka_2.10-0.8.2.1/ sparkslave02:/hadoopLearning/ 
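If you would rather script the two per-host edits (broker.id and host.name) than change each file by hand, here is a minimal sketch, assuming the same /hadoopLearning install path on every node and passwordless SSH between them:

# Run on sparkmaster: rewrite broker.id and host.name in place on each slave
root@sparkmaster:/hadoopLearning# ssh sparkslave01 "sed -i 's/^broker.id=.*/broker.id=1/; s/^host.name=.*/host.name=sparkslave01/' /hadoopLearning/kafka_2.10-0.8.2.1/config/server.properties"
root@sparkmaster:/hadoopLearning# ssh sparkslave02 "sed -i 's/^broker.id=.*/broker.id=2/; s/^host.name=.*/host.name=sparkslave02/' /hadoopLearning/kafka_2.10-0.8.2.1/config/server.properties"

Either way, the resulting files on the two slaves should look like the ones below.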

On sparkslave01, edit server.properties so that it reads as follows:


############################# Server Basics #############################

# The id of the broker. This must be set to a unique integer for each broker.
broker.id=1

############################# Socket Server Settings #############################

# The port the socket server listens on
port=9092

# Hostname the broker will bind to. If not set, the server will bind to all interfaces
host.name=sparkslave01

# ... (settings in between omitted; the defaults are fine)

############################# Zookeeper #############################

# Zookeeper connection string (see zookeeper docs for details).
# This is a comma separated host:port pairs, each corresponding to a zk
# server. e.g. "127.0.0.1:3000,127.0.0.1:3001,127.0.0.1:3002".
# You can also append an optional chroot string to the urls to specify the
# root directory for all kafka znodes.
zookeeper.connect=sparkmaster:2181,sparkslave01:2181,sparkslave02:2181

# Timeout in ms for connecting to zookeeper
zookeeper.connection.timeout.ms=6000

On sparkslave02, edit server.properties so that it reads as follows:

############################# Server Basics #############################

# The id of the broker. This must be set to a unique integer for each broker.
broker.id=2

############################# Socket Server Settings #############################

# The port the socket server listens on
port=9092

# Hostname the broker will bind to. If not set, the server will bind to all interfaces
host.name=sparkslave02


# ... (settings in between omitted; the defaults are fine)

############################# Zookeeper #############################

# Zookeeper connection string (see zookeeper docs for details).
# This is a comma separated host:port pairs, each corresponding to a zk
# server. e.g. "127.0.0.1:3000,127.0.0.1:3001,127.0.0.1:3002".
# You can also append an optional chroot string to the urls to specify the
# root directory for all kafka znodes.
zookeeper.connect=sparkmaster:2181,sparkslave01:2181,sparkslave02:2181

# Timeout in ms for connecting to zookeeper
zookeeper.connection.timeout.ms=6000
  2. Start the Kafka cluster

Run the following command on each of the three machines:
root@sparkslave02:/hadoopLearning/kafka_2.10-0.8.2.1# bin/kafka-server-start.sh config/server.properties 
root@sparkslave01:/hadoopLearning/kafka_2.10-0.8.2.1# bin/kafka-server-start.sh config/server.properties 
root@sparkmaster:/hadoopLearning/kafka_2.10-0.8.2.1# bin/kafka-server-start.sh config/server.properties 
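Note that kafka-server-start.sh runs in the foreground and holds on to the terminal. A common alternative is to push each broker into the background with nohup and confirm it is up with jps:

root@sparkmaster:/hadoopLearning/kafka_2.10-0.8.2.1# nohup bin/kafka-server-start.sh config/server.properties > kafka.log 2>&1 &
root@sparkmaster:/hadoopLearning/kafka_2.10-0.8.2.1# jps
# jps should list a process whose main class is Kafka on every broker machine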


3. Create a topic

Run the following command on sparkmaster to create a topic:

root@sparkmaster:/hadoopLearning/kafka_2.10-0.8.2.1# bin/kafka-topics.sh --create --topic kafkatopictest --replication-factor 3 --partitions 2 --zookeeper sparkmaster:2181
Created topic "kafkatopictest".
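To double-check the new topic, you can list all topics and describe this one; the --describe output shows, for each partition, the leader broker, the replica set, and the in-sync replicas (Isr):

root@sparkmaster:/hadoopLearning/kafka_2.10-0.8.2.1# bin/kafka-topics.sh --list --zookeeper sparkmaster:2181
root@sparkmaster:/hadoopLearning/kafka_2.10-0.8.2.1# bin/kafka-topics.sh --describe --topic kafkatopictest --zookeeper sparkmaster:2181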

4. Send messages to Kafka

Run the following command on sparkslave01 and type a message to send to Kafka:

root@sparkslave01:/hadoopLearning/kafka_2.10-0.8.2.1# bin/kafka-console-producer.sh --broker-list sparkslave01:9092 --sync --topic kafkatopictest
Hello Kafka, I will test Spark Streaming on you next lesson
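The console producer reads lines from standard input, so besides typing messages interactively you can also pipe them in, which is handy for quick scripted tests:

root@sparkslave01:/hadoopLearning/kafka_2.10-0.8.2.1# echo "Another test message" | bin/kafka-console-producer.sh --broker-list sparkslave01:9092 --topic kafkatopictest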


5. Receive the messages from Kafka

Run the following command on sparkslave02 to consume the messages sent to Kafka:

root@sparkslave02:/hadoopLearning/kafka_2.10-0.8.2.1# bin/kafka-console-consumer.sh --zookeeper sparkmaster:2181 --topic kafkatopictest --from-beginning
Hello Kafka, I will test Spark Streaming on you next lesson


At this point the Kafka cluster has been set up and tested.
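If you later need to take the cluster down cleanly, each broker can be stopped with the bundled script, run from the Kafka installation directory on every node:

root@sparkmaster:/hadoopLearning/kafka_2.10-0.8.2.1# bin/kafka-server-stop.sh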

In the next section we will demonstrate how to use Kafka together with Spark Streaming.
