Prerequisite: a ZooKeeper cluster is already deployed.
Kafka cluster deployment and configuration
- Download Kafka
- Official site: http://kafka.apache.org/
wget https://archive.apache.org/dist/kafka/2.4.0/kafka_2.11-2.4.0.tgz
- Edit the configuration file
vi $KAFKA_HOME/config/server.properties
(1) Set the broker ID. Every server needs a distinct value: 1 here, 2 and 3 on the other two nodes.
broker.id=1
(2) Open the listener port. On the other two nodes, replace master with slave1 / slave2.
listeners=PLAINTEXT://master:9092
# optional: the address advertised to clients (registered in ZooKeeper)
advertised.listeners=PLAINTEXT://master:9092
(3) Set the log directory; create the logs directory at this location first.
log.dirs=/usr/local/kafka/kafkalogs
(4) Set zookeeper.connect. Note the /kafka chroot goes once, at the end of the whole list, not after every host:
zookeeper.connect=master:2181,slave1:2181,slave2:2181/kafka
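The four edits above differ per node only in the broker id and the hostname, so they can be scripted. Below is a minimal sketch; the demo file in /tmp stands in for the real server.properties, and the values are illustrative, not part of the original setup:

```shell
# Demo: stamp per-node values into a copy of server.properties with sed.
# On a real node, point CONF at $KAFKA_HOME/config/server.properties instead.
CONF=/tmp/server.properties.demo
printf 'broker.id=1\nlisteners=PLAINTEXT://master:9092\n' > "$CONF"  # demo input

BROKER_ID=2        # 1 on master, 2 on slave1, 3 on slave2
NODE_HOST=slave1   # this node's hostname

sed -i \
  -e "s/^broker.id=.*/broker.id=${BROKER_ID}/" \
  -e "s#^listeners=.*#listeners=PLAINTEXT://${NODE_HOST}:9092#" \
  "$CONF"
cat "$CONF"
```

sed -i as used here is the GNU form; on BSD/macOS sed it needs a backup-suffix argument.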
- scp to the worker nodes (repeat for slave2)
scp -r ./kafka_2.11-2.4.0/ root@slave1:/bigdata/binfile/
- Start the server on each node
kafka-server-start.sh $KAFKA_HOME/config/server.properties &
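The copy-and-start steps above can be sketched as two loops. Everything is echoed rather than executed so the commands can be reviewed first; the hostnames and paths are the ones assumed throughout this guide:

```shell
# Print the distribution/startup command for each node; drop `echo` to run them.
for host in slave1 slave2; do
  echo scp -r ./kafka_2.11-2.4.0/ "root@$host:/bigdata/binfile/"
done
for host in master slave1 slave2; do
  echo ssh "root@$host" "kafka-server-start.sh -daemon \$KAFKA_HOME/config/server.properties"
done
```

-daemon is used here instead of a trailing &; the troubleshooting notes further down explain why that matters.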
- Test
jps should show a Kafka process on every node.
kafka-topics.sh --create --bootstrap-server master:9092 --replication-factor 1 --partitions 1 --topic flinktest
kafka-topics.sh --list --bootstrap-server master:9092 ## list the topic just created
kafka-topics.sh --list --bootstrap-server slave1:9092 ## check the topic is visible from this node
kafka-topics.sh --list --bootstrap-server slave2:9092 ## check the topic is visible from this node
If Kafka stops on its own some time after starting, the usual cause is that it was not launched as a daemon with -daemon.
① Case 1:
[hadoop@master kafka_2.11-0.11.0.2]$ bin/kafka-server-start.sh -daemon config/server.properties
To see why, look at kafka-run-class.sh under /opt/module/kafka_2.11-0.11.0.2/bin:
##### Launch mode
##### started with -daemon
if [ "x$DAEMON_MODE" = "xtrue" ]; then
nohup $JAVA $KAFKA_HEAP_OPTS $KAFKA_JVM_PERFORMANCE_OPTS $KAFKA_GC_LOG_OPTS $KAFKA_JMX_OPTS $KAFKA_LOG4J_OPTS -cp $CLASSPATH $KAFKA_OPTS "$@" > "$CONSOLE_OUTPUT_FILE" 2>&1 < /dev/null &
else
##### started without -daemon
exec $JAVA $KAFKA_HEAP_OPTS $KAFKA_JVM_PERFORMANCE_OPTS $KAFKA_GC_LOG_OPTS $KAFKA_JMX_OPTS $KAFKA_LOG4J_OPTS -cp $CLASSPATH $KAFKA_OPTS "$@"
fi
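The difference between the two branches can be demonstrated with a stand-in process; in this sketch, sleep replaces the JVM and the output path is an arbitrary choice:

```shell
# DAEMON_MODE=true mirrors the -daemon branch above: nohup, background,
# stdout/stderr redirected, stdin detached - so the process outlives the shell.
DAEMON_MODE=true
CONSOLE_OUTPUT_FILE=/tmp/kafka-demo.out
if [ "x$DAEMON_MODE" = "xtrue" ]; then
  nohup sleep 1 > "$CONSOLE_OUTPUT_FILE" 2>&1 < /dev/null &
  echo "daemon mode: pid $! is detached from the terminal"
else
  # foreground branch: exec replaces this shell, so closing the
  # terminal (SIGHUP) takes the broker down with it
  exec sleep 1
fi
```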
② Case 2: Kafka did not exit cleanly the previous time, i.e. ZooKeeper was shut down while Kafka was still running.
Check the Kafka startup log, normally server.log under kafka/logs.
The error looks like this:
[2020-02-02 00:02:04,660] INFO Result of znode creation is: NODEEXISTS (kafka.utils.ZKCheckedEphemeral)
[2020-02-02 00:02:04,663] FATAL [Kafka Server 3], Fatal error during KafkaServer startup. Prepare to shutdown (kafka.server.KafkaServer)
java.lang.RuntimeException: A broker is already registered on the path /brokers/ids/3. This probably indicates that you either have configured a brokerid that is already in use, or else you have shutdown this broker and restarted it faster than the zookeeper timeout so it appears to be re-registering.
at kafka.utils.ZkUtils.registerBrokerInZk(ZkUtils.scala:417)
at kafka.utils.ZkUtils.registerBrokerInZk(ZkUtils.scala:403)
at kafka.server.KafkaHealthcheck.register(KafkaHealthcheck.scala:70)
at kafka.server.KafkaHealthcheck.startup(KafkaHealthcheck.scala:50)
at kafka.server.KafkaServer.startup(KafkaServer.scala:280)
at kafka.server.KafkaServerStartable.startup(KafkaServerStartable.scala:38)
at kafka.Kafka$.main(Kafka.scala:65)
at kafka.Kafka.main(Kafka.scala)
Fix:
Stop Kafka, start zkCli.sh, and delete the stale broker znode. Deleting only the stale id (e.g. rmr /brokers/ids/3) is enough; removing all of /brokers/ids, as in the transcript below, also wipes healthy broker registrations.
[atguigu@hadoop102 jobs]$ zkCli.sh
[zk: localhost:2181(CONNECTED) 2] ls /brokers/ids
[1]
[zk: localhost:2181(CONNECTED) 2] rmr /brokers/ids
[zk: localhost:2181(CONNECTED) 2] ls /brokers/ids
[]
Then restart Kafka; at this point the problem is normally resolved.