Big Data and Spark Series: Kafka Cluster Deployment

1. Kafka cluster plan
Hosts:
  • slave61 192.168.9.61
  • slave62 192.168.9.62
  • slave63 192.168.9.63

2. Download the Kafka installation package
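The release used below is Kafka 1.0.0 built for Scala 2.12; it can be fetched from the Apache archive. The ~/tools download directory matches the tar command in step 3 and is otherwise arbitrary:
$ mkdir -p ~/tools && cd ~/tools
$ wget https://archive.apache.org/dist/kafka/1.0.0/kafka_2.12-1.0.0.tgz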

3. Create the working directory and unpack the archive
$ cd /apps/svr/
$ mkdir kafka
$ tar xzf ~/tools/kafka_2.12-1.0.0.tgz -C /apps/svr/kafka/

4. Configure zookeeper.properties
In each server.N line, port 2888 is used by followers to connect to the quorum leader and port 3888 is used for leader election; N must match the myid value written in step 5.2.
$ cd /apps/svr/kafka/kafka_2.12-1.0.0/
$ cd config/
$ vim zookeeper.properties
····························································
dataDir=/apps/svr/kafka/zookeeper
dataLogDir=/apps/svr/kafka/logs

clientPort=2181
maxClientCnxns=100
tickTime=2000
initLimit=10
syncLimit=5
server.1=192.168.9.61:2888:3888
server.2=192.168.9.62:2888:3888
server.3=192.168.9.63:2888:3888
····························································

5. Configure ZooKeeper
$ mkdir -p /apps/svr/kafka/zookeeper
$ mkdir -p /apps/svr/kafka/logs
5.1 Copy the kafka directory to slave62 and slave63
$ cd /apps/svr/
$ scp -r kafka/ slave62:/apps/svr/
$ scp -r kafka/ slave63:/apps/svr/
5.2 Create the myid file; set it to 1, 2, and 3 on the three servers respectively (the value below is slave61's; see the sketch after this block for the other hosts)
$ cd /apps/svr/kafka/zookeeper
$ vim myid
····························································
1
····························································
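Rather than editing with vim on each host, a minimal sketch that writes all three files directly, assuming passwordless ssh from slave61:
$ echo 1 > /apps/svr/kafka/zookeeper/myid
$ ssh slave62 'echo 2 > /apps/svr/kafka/zookeeper/myid'
$ ssh slave63 'echo 3 > /apps/svr/kafka/zookeeper/myid'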
5.3 From the Kafka working directory, start ZooKeeper on all three servers
$ cd /apps/svr/kafka/kafka_2.12-1.0.0/
$ ./bin/zookeeper-server-start.sh config/zookeeper.properties &
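To verify each node came up: jps should list a QuorumPeerMain process, and ZooKeeper's four-letter-word stat command reports whether the node is a leader or a follower (this assumes a netcat binary is installed; four-letter-word commands are enabled by default in the ZooKeeper 3.4 bundled with Kafka 1.0.0). The start script also accepts -daemon, like kafka-server-start.sh in step 8, if you prefer that to backgrounding with &.
$ jps | grep QuorumPeerMain
$ echo stat | nc 192.168.9.61 2181 | grep Mode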

6. Configure server.properties
  • broker.id must differ across the three nodes; set it to 0, 1, and 2 respectively (see the sketch after this step)
  • The directory in log.dirs must already exist; it is not created automatically from the configuration file
$ cd /apps/svr/kafka/kafka_2.12-1.0.0/config/
$ vim server.properties
····························································
broker.id=0
log.dirs=/apps/svr/kafka/kafka-logs
zookeeper.connect=192.168.9.61:2181,192.168.9.62:2181,192.168.9.63:2181
····························································
$ mkdir -p /apps/svr/kafka/kafka-logs
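The scp in step 5.1 ran before server.properties was edited, so the changes above still need to reach slave62 and slave63, each with its own broker.id, and the log directory must be created there as well. A minimal sketch, assuming passwordless ssh from slave61 and the config directory as the current directory:
$ scp server.properties slave62:/apps/svr/kafka/kafka_2.12-1.0.0/config/
$ scp server.properties slave63:/apps/svr/kafka/kafka_2.12-1.0.0/config/
$ ssh slave62 "sed -i 's/^broker.id=0/broker.id=1/' /apps/svr/kafka/kafka_2.12-1.0.0/config/server.properties && mkdir -p /apps/svr/kafka/kafka-logs"
$ ssh slave63 "sed -i 's/^broker.id=0/broker.id=2/' /apps/svr/kafka/kafka_2.12-1.0.0/config/server.properties && mkdir -p /apps/svr/kafka/kafka-logs"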

7. Configure the Kafka environment variables
$ vim ~/.bash_profile
····························································
# KAFKA_HOME
export KAFKA_HOME=/apps/svr/kafka/kafka_2.12-1.0.0
export PATH=$PATH:$KAFKA_HOME/bin
····························································
$ source ~/.bash_profile
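To confirm the variables took effect, the shell should now resolve the Kafka scripts from anywhere:
$ which kafka-topics.sh                    # expect /apps/svr/kafka/kafka_2.12-1.0.0/bin/kafka-topics.sh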

8. From the Kafka working directory, start Kafka on all three servers
$ cd /apps/svr/kafka/kafka_2.12-1.0.0/
$ ./bin/kafka-server-start.sh -daemon config/server.properties
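Each broker registers its id in ZooKeeper on startup, so listing /brokers/ids is a quick cluster health check; all three ids should appear:
$ ./bin/zookeeper-shell.sh 192.168.9.61:2181 ls /brokers/ids   # expect [0, 1, 2]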

9. Common Kafka commands
  • Create a topic named test
$ kafka-topics.sh --create --zookeeper 192.168.9.61:2181,192.168.9.62:2181,192.168.9.63:2181 --replication-factor 3 --partitions 3 --topic test
  • List existing topics
$ kafka-topics.sh --list --zookeeper localhost:2181
  • Simulate a client producing messages
$ kafka-console-producer.sh --broker-list 192.168.9.61:9092,192.168.9.62:9092,192.168.9.63:9092 --topic test
  • Simulate a client consuming messages (see the note after this list)
$ kafka-console-consumer.sh --zookeeper 192.168.9.61:2181,192.168.9.62:2181,192.168.9.63:2181 --from-beginning --topic test
  • Describe a specific topic
$ kafka-topics.sh --describe --zookeeper localhost:2181 --topic test
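In Kafka 1.0.0 the --zookeeper flag on the console consumer selects the deprecated old consumer. The same tool can also talk to the brokers directly through the new consumer API:
$ kafka-console-consumer.sh --bootstrap-server 192.168.9.61:9092,192.168.9.62:9092,192.168.9.63:9092 --from-beginning --topic test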