Kafka Cluster Installation

I. Installation Environment

1. Server preparation
  • Prepare three machines, 4 CPUs / 4 GB RAM each:
    kafka1: 172.20.67.52
    kafka2: 172.20.67.56
    kafka3: 172.20.67.57

  • Set the hostname on each machine to kafka1, kafka2, and kafka3 respectively

  • Configure /etc/hosts:

172.20.67.52   kafka1
172.20.67.56   kafka2
172.20.67.57   kafka3
  • It is recommended to mount a dedicated data disk at /data to serve as Kafka's log data directory
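The hostname can be set on each node with `hostnamectl set-hostname kafka1` (kafka2 / kafka3 on the other two). The /etc/hosts entries above can then be appended idempotently; a sketch, shown against a scratch file so it is safe to try anywhere (use HOSTS=/etc/hosts on the real machines):

```shell
# Target file; switch to /etc/hosts on the actual servers
HOSTS=$(mktemp)

# Append the cluster entries only if they are not already present
grep -q 'kafka1' "$HOSTS" || cat >> "$HOSTS" <<'EOF'
172.20.67.52   kafka1
172.20.67.56   kafka2
172.20.67.57   kafka3
EOF
```

The `grep` guard makes the step safe to re-run during provisioning.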
2. Install ZooKeeper

Setting up a Kafka cluster requires a ZooKeeper environment (cluster) to be ready in advance.

For ZooKeeper cluster deployment, see: https://blog.csdn.net/huchao_lingo/article/details/103491566

3. Install the JDK

https://www.oracle.com/technetwork/java/javase/downloads/jdk8-downloads-2133151.html

rpm -ivh jdk-8u231-linux-x64.rpm

II. Installing Kafka

1. Create the required directories
mkdir -p  /opt/server
mkdir -p  /data/logs/kafka
mkdir -p  /data/kafka-logs
2. Download Kafka
cd /opt/server
wget https://archive.apache.org/dist/kafka/2.0.0/kafka_2.12-2.0.0.tgz
tar -xzvf kafka_2.12-2.0.0.tgz
mv kafka_2.12-2.0.0 kafka
3. Edit the configuration file
cd /opt/server/kafka/config
vim server.properties

Modify the following settings:

# The id of the broker. This must be set to a unique integer for each broker.
broker.id=1

# A comma separated list of directories under which to store log files
log.dirs=/data/kafka-logs

############################# Zookeeper #############################

# Zookeeper connection string (see zookeeper docs for details).
# This is a comma separated host:port pairs, each corresponding to a zk
# server. e.g. "127.0.0.1:3000,127.0.0.1:3001,127.0.0.1:3002".
# You can also append an optional chroot string to the urls to specify the
# root directory for all kafka znodes.
zookeeper.connect=172.16.10.205:2181,172.16.10.206:2181,172.16.10.207:2181

# Timeout in ms for connecting to zookeeper
zookeeper.connection.timeout.ms=6000

num.partitions=3
delete.topic.enable=true
default.replication.factor=3
  • broker.id should be 1, 2, and 3 on kafka1, kafka2, and kafka3 respectively

  • /data/kafka-logs is the data directory; mounting it on a dedicated disk is recommended

  • zookeeper.connect must point at your own ZooKeeper ensemble; the remaining settings can keep their defaults. Save and exit.
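Since the three brokers differ only in broker.id, the edits can also be scripted with sed. A sketch shown against a scratch copy so it is safe to try (on a real broker, point CONF at /opt/server/kafka/config/server.properties and set BROKER_ID per host):

```shell
# Scratch copy standing in for server.properties; use the real path on a broker
CONF=$(mktemp)
printf 'broker.id=0\nlog.dirs=/tmp/kafka-logs\nnum.partitions=1\n' > "$CONF"

BROKER_ID=1   # 1 on kafka1, 2 on kafka2, 3 on kafka3
sed -i \
  -e "s|^broker.id=.*|broker.id=${BROKER_ID}|" \
  -e "s|^log.dirs=.*|log.dirs=/data/kafka-logs|" \
  -e "s|^num.partitions=.*|num.partitions=3|" "$CONF"

# Settings absent from the stock file are appended instead of substituted
cat >> "$CONF" <<'EOF'
delete.topic.enable=true
default.replication.factor=3
EOF
```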

III. Registering Kafka as a systemd Service

1. Create a dedicated user for running Kafka
useradd -M -s /sbin/nologin kafka

chown -R kafka:kafka /opt/server/kafka
chown -R kafka:kafka /data/logs/kafka
chown -R kafka:kafka /data/kafka-logs
2. Create the systemd unit file

cd /usr/lib/systemd/system/

vim kafka.service

[Unit]
Description=Apache Kafka server (broker)
After=network.target

[Service]
Type=simple
User=kafka
Group=kafka
ExecStart=/opt/server/kafka/bin/kafka-server-start.sh /opt/server/kafka/config/server.properties
ExecStop=/opt/server/kafka/bin/kafka-server-stop.sh
Restart=on-failure

[Install]
WantedBy=multi-user.target
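After saving the unit file, make systemd pick it up, and optionally enable the service so the brokers start automatically at boot:

```shell
systemctl daemon-reload   # reload unit definitions so kafka.service is known
systemctl enable kafka    # start kafka automatically at boot
```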

IV. Changing the Kafka Log Output Directory

Edit $KAFKA_HOME/bin/kafka-run-class.sh:

vim bin/kafka-run-class.sh

Locate the LOG_DIR block:

# Log directory to use
if [ "x$LOG_DIR" = "x" ]; then
  LOG_DIR="$base_dir/logs"
fi

Add a line above it, so the section reads:

LOG_DIR=/data/logs/kafka
# Log directory to use
if [ "x$LOG_DIR" = "x" ]; then
  LOG_DIR="$base_dir/logs"
fi
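This edit can likewise be scripted by inserting the override line just above the existing comment. A sketch with GNU sed, shown against a scratch copy (on a real node, point SCRIPT at /opt/server/kafka/bin/kafka-run-class.sh):

```shell
# Scratch copy standing in for kafka-run-class.sh
SCRIPT=$(mktemp)
printf '# Log directory to use\nif [ "x$LOG_DIR" = "x" ]; then\n  LOG_DIR="$base_dir/logs"\nfi\n' > "$SCRIPT"

# Insert the override immediately before the "# Log directory to use" comment
sed -i '/^# Log directory to use$/i LOG_DIR=/data/logs/kafka' "$SCRIPT"
```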

V. Running Kafka

  • Start the Kafka service
systemctl start kafka
  • Stop the Kafka service
systemctl stop kafka
  • Check the Kafka service status
systemctl status kafka
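To confirm the brokers registered correctly, check that each node listens on port 9092 and that all three broker ids appear under /brokers/ids in ZooKeeper (zookeeper-shell.sh ships with Kafka; adjust the ZooKeeper address to your own ensemble):

```shell
ss -ltn | grep 9092   # broker should be listening on its default port

# List registered broker ids; expect 1, 2 and 3 once all nodes are up
/opt/server/kafka/bin/zookeeper-shell.sh 172.16.10.205:2181 ls /brokers/ids
```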

VI. Command Reference

  • Create a topic

    /opt/server/kafka/bin/kafka-topics.sh --create --zookeeper 172.16.10.205:2181,172.16.10.206:2181,172.16.10.207:2181 --replication-factor 3 --partitions 10 --topic service-logs
    
  • Check consumer group status

    /opt/server/kafka/bin/kafka-consumer-groups.sh --group logstash --describe --bootstrap-server kafka1:9092,kafka2:9092,kafka3:9092
    
  • Dump log segment contents

    /opt/server/kafka/bin/kafka-run-class.sh kafka.tools.DumpLogSegments --files /data/kafka-logs/service-logs-0/00000000000000000000.log --print-data-log
    
    /opt/server/kafka/bin/kafka-run-class.sh kafka.tools.DumpLogSegments --files /data/kafka-logs/service-logs-0/00000000000000000000.index --print-data-log
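A quick end-to-end smoke test is also possible with the console producer and consumer that ship with Kafka (the Kafka 2.0 console producer still takes --broker-list; run the two commands in separate terminals):

```shell
# Terminal 1: each line typed becomes a message; Ctrl-C to quit
/opt/server/kafka/bin/kafka-console-producer.sh --broker-list kafka1:9092,kafka2:9092,kafka3:9092 --topic service-logs

# Terminal 2: read the topic from the beginning
/opt/server/kafka/bin/kafka-console-consumer.sh --bootstrap-server kafka1:9092,kafka2:9092,kafka3:9092 --topic service-logs --from-beginning
```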
    