Upload the files and set up the environment
Kafka version: kafka_2.12-2.8.2.tgz
ZooKeeper version: apache-zookeeper-3.5.7-bin.tar.gz
Upload both archives and extract them
Configure ZooKeeper
Kafka depends on ZooKeeper, so we configure ZooKeeper first.
Configure the environment variables
vi /etc/profile
Add the following lines:
export ZOOKEEPER_HOME=/bigdata/zookeeper
export PATH=$PATH:$JAVA_HOME/bin:$HADOOP_HOME/bin:$HADOOP_HOME/sbin:$HIVE_HOME/bin:$SCALA_HOME/bin:$SPARK_HOME/bin:$FLINK_HOME/bin:$ZOOKEEPER_HOME/bin
Save the file, then reload it:
source /etc/profile
Modify the configuration file (in ZooKeeper's conf directory)
cp zoo_sample.cfg zoo.cfg
vi zoo.cfg
Change the contents as follows:
server.1=master:2888:3888
server.2=slave1:2888:3888
server.3=slave2:2888:3888
# The number of milliseconds of each tick
tickTime=2000
# The number of ticks that the initial
# synchronization phase can take
initLimit=10
# The number of ticks that can pass between
# sending a request and getting an acknowledgement
syncLimit=5
# the directory where the snapshot is stored.
# do not use /tmp for storage, /tmp here is just
# example sakes.
dataDir=/bigdata/zookeeper/zkdata
# the port at which the clients will connect
clientPort=2181
admin.serverPort=8282
# the maximum number of client connections.
# increase this if you need to handle more clients
#maxClientCnxns=60
#
# Be sure to read the maintenance section of the
# administrator guide before turning on autopurge.
#
# http://zookeeper.apache.org/doc/current/zookeeperAdmin.html#sc_maintenance
#
# The number of snapshots to retain in dataDir
#autopurge.snapRetainCount=3
# Purge task interval in hours
# Set to "0" to disable auto purge feature
#autopurge.purgeInterval=1
Create the zkdata directory under the ZooKeeper install directory
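The data-directory setup above can be sketched as follows, assuming ZooKeeper is installed at /bigdata/zookeeper as in the profile settings:

```shell
# Create the data directory referenced by dataDir in zoo.cfg
mkdir -p /bigdata/zookeeper/zkdata

# Write this node's id; it must match this host's server.N entry in zoo.cfg
# (1 on master, 2 on slave1, 3 on slave2)
echo 1 > /bigdata/zookeeper/zkdata/myid
```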
Distribute ZooKeeper
Copy the ZooKeeper directory to the other machines, distribute /etc/profile as well, and run source /etc/profile on each node.
On each node, edit myid so that it matches that host's server.N id in zoo.cfg.
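A minimal distribution sketch, assuming passwordless SSH between the nodes and the slave1/slave2 hostnames from zoo.cfg:

```shell
# Copy the ZooKeeper install and the profile to the other two nodes
for host in slave1 slave2; do
  scp -r /bigdata/zookeeper "$host":/bigdata/
  scp /etc/profile "$host":/etc/profile
done

# Then, on each node, reload the profile and fix myid to match its
# server.N entry, e.g. on slave1:
#   source /etc/profile
#   echo 2 > /bigdata/zookeeper/zkdata/myid
```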
Start ZooKeeper
zkServer.sh start     # run on all three machines
zkServer.sh status    # check the status
If one node reports Mode: leader and the other two report Mode: follower, the cluster started successfully.
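For reference, a successful status check looks roughly like this (the banner lines vary by version; the Mode line is what matters):

```shell
zkServer.sh status
# ZooKeeper JMX enabled by default
# Mode: follower   (or "Mode: leader" on exactly one node)
```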
Configure Kafka
Go into the config directory under the Kafka install directory
vi server.properties
broker.id=1    # the same value as this node's myid
# you need to add the following three lines yourself
port=9092
advertised.host.name=master    # this node's hostname
delete.topic.enable=true    # allows topics to be deleted
advertised.listeners=PLAINTEXT://master:9092    # change to your own hostname
log.dirs=/bigdata/kafka/kafka-logs    # where Kafka stores its log data
num.partitions=3    # default number of partitions
zookeeper.connect=master:2181,slave1:2181,slave2:2181    # ZooKeeper cluster connection string
Add the Kafka environment variables, then source /etc/profile to reload.
Finally, distribute Kafka to the other machines; after distributing, change broker.id on each node so that it is unique.
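A sketch of these last two steps, assuming Kafka is installed at /bigdata/kafka (implied by the log.dirs setting above) and passwordless SSH between the nodes:

```shell
# Append to /etc/profile on every node, then run: source /etc/profile
export KAFKA_HOME=/bigdata/kafka
export PATH=$PATH:$KAFKA_HOME/bin

# Distribute the Kafka install to the other nodes
for host in slave1 slave2; do
  scp -r /bigdata/kafka "$host":/bigdata/
done

# On each node, make broker.id unique, e.g. on slave1:
#   sed -i 's/^broker.id=.*/broker.id=2/' /bigdata/kafka/config/server.properties
```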
Start the services
Start ZooKeeper first: since we are not using the ZooKeeper bundled with Kafka, start ZooKeeper on all three machines.
Then start Kafka.
Note that Kafka is started per node, so run this on every broker.
# start in the foreground, printing logs to the console
kafka-server-start.sh ./config/server.properties &
# start as a daemon, without console logs
kafka-server-start.sh -daemon ./config/server.properties
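To verify the cluster is working end to end, you can create and list a test topic (the topic name smoke-test is just an example; --bootstrap-server is supported by kafka-topics.sh in Kafka 2.8):

```shell
# Create a topic replicated across all three brokers
kafka-topics.sh --create --topic smoke-test \
  --bootstrap-server master:9092 \
  --partitions 3 --replication-factor 3

# List topics to confirm it exists
kafka-topics.sh --list --bootstrap-server master:9092
```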