kafka-01 Deployment and Startup

Starting Kafka

Supplement:
Check the version number: find ./libs/ -name '*kafka_*' | head -1 | grep -o 'kafka.*'
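The jar name encodes both the Scala build version and the Kafka version. A minimal sketch of pulling them apart, assuming a hypothetical jar named kafka_2.13-3.1.0.jar:

```shell
# Hypothetical jar name; a real install has one under ./libs/
jar="kafka_2.13-3.1.0.jar"
# The part after "kafka_" and before the dash is the Scala version,
# the part after the dash is the Kafka version
scala_ver=$(echo "$jar" | sed -E 's/^kafka_([0-9.]+)-.*$/\1/')
kafka_ver=$(echo "$jar" | sed -E 's/^kafka_[0-9.]+-([0-9.]+)\.jar$/\1/')
echo "Scala: $scala_ver, Kafka: $kafka_ver"
```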

Methods:

  1. Kafka's bundled ZooKeeper service
// By default the server starts in the foreground; closing the terminal window kills the process
# ./bin/zookeeper-server-start.sh ./config/zookeeper.properties
// Background start
# nohup ./bin/zookeeper-server-start.sh ./config/zookeeper.properties &
or
# ./bin/zookeeper-server-start.sh -daemon ./config/zookeeper.properties
// The "-daemon" flag starts the ZooKeeper server as a daemon process

nohup ./bin/kafka-server-start.sh ./config/server.properties & // start Kafka

// Check whether the ports are being listened on
lsof -i:2181  // ZooKeeper port
lsof -i:9092  // Kafka port
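If lsof is unavailable, bash's built-in /dev/tcp redirection can probe the same ports; a sketch (whether each port reports "listening" depends on what is actually running on the host):

```shell
# Probe each port by attempting a TCP connection via bash's /dev/tcp;
# a failed open (nothing listening, or a non-bash shell) falls to the else branch
status=""
for port in 2181 9092; do
  if (exec 3<>"/dev/tcp/127.0.0.1/$port") 2>/dev/null; then
    status="$status port $port: listening;"
  else
    status="$status port $port: not listening;"
  fi
done
echo "$status"
```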
  • Common commands:

./bin/kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 1 --partitions 1 --topic test // create a topic

./bin/kafka-topics.sh --describe --zookeeper localhost:2181 --topic test // describe the topic

./bin/kafka-topics.sh --list --zookeeper localhost:2181 // list topics
  2. Standalone ZooKeeper service
    Note: be sure to stop Kafka's bundled ZooKeeper process and the Kafka process first
./zkServer.sh start /root/zookeeper/conf/zoo.cfg // start ZooKeeper
./zkServer.sh status /root/zookeeper/conf/zoo.cfg
./zkServer.sh stop /root/zookeeper/conf/zoo.cfg

nohup ./bin/kafka-server-start.sh ./config/server.properties & // start Kafka

Check:

# lsof -i:2181
COMMAND   PID USER   FD   TYPE DEVICE SIZE/OFF NODE NAME
java    18676 root   53u  IPv6 561661      0t0  TCP *:eforward (LISTEN)

# lsof -i:9092
COMMAND   PID USER   FD   TYPE DEVICE SIZE/OFF NODE NAME
java    18968 root  154u  IPv6 561700      0t0  TCP server-1:XmlIpcRegSvc (LISTEN)
java    18968 root  173u  IPv6 558773      0t0  TCP server-1:36496->server-1:XmlIpcRegSvc (ESTABLISHED)
java    18968 root  174u  IPv6 561707      0t0  TCP server-1:XmlIpcRegSvc->server-1:36496 (ESTABLISHED)
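When scripting against this check, the PID column can be extracted with awk; a sketch using a sample of the lsof output above:

```shell
# Sample lsof output (copied from above); the first data row's second field is the PID
lsof_sample='COMMAND   PID USER   FD   TYPE DEVICE SIZE/OFF NODE NAME
java    18676 root   53u  IPv6 561661      0t0  TCP *:eforward (LISTEN)'
# Skip the header line, print the PID of the first java process
zk_pid=$(echo "$lsof_sample" | awk 'NR > 1 && $1 == "java" { print $2; exit }')
echo "$zk_pid"
```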

Kafka Single-Node Deployment

  1. Download Kafka
    Download the release from the Apache Kafka downloads page.
    In a package name such as kafka_2.13-3.1.0.tgz, Scala 2.13 is the compiler version it was built with and 3.1.0 is the Kafka version; download the Scala 2.13 build.
    Upload it to the server, rename the directory to kafka, and grant permissions.

  2. Start ZooKeeper
    Kafka depends on ZooKeeper, so a ZooKeeper server must be started first.

    The ZooKeeper configuration is zookeeper.properties under the config directory; the default port is 2181.
    Start command: nohup ./bin/zookeeper-server-start.sh ./config/zookeeper.properties & (this keeps ZooKeeper running in the background).
    Run ps -ef | grep zookeeper to check whether ZooKeeper started successfully.
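One caveat with ps -ef | grep zookeeper: the grep process itself appears in the ps output and matches. The usual fix is the bracket trick, ps -ef | grep '[z]ookeeper'. A sketch demonstrating why it works, on hypothetical ps output lines:

```shell
# Two hypothetical ps lines: the real zookeeper JVM, and the grep command itself
sample='root 18676 1 0 10:00 ? 00:00:05 java ... zookeeper.properties
root 19001 2 0 10:01 ? 00:00:00 grep [z]ookeeper'
# The regex [z]ookeeper matches the literal string "zookeeper", but the grep
# line only contains "[z]ookeeper", which the regex does not match
matched=$(echo "$sample" | grep -c '[z]ookeeper')
echo "$matched"
```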

Configuration file:
dataDir=/kafka/zookeeper  
# the port at which the clients will connect
clientPort=2181
# disable the per-ip limit on the number of connections since this is a non-production config
maxClientCnxns=0
# Disable the adminserver by default to avoid port conflicts.
# Set the port to something non-conflicting if choosing to enable this
admin.enableServer=false
  3. Start Kafka
    The config directory provides Kafka's configuration file server.properties. To allow remote access to Kafka, two settings must be changed:
listeners=PLAINTEXT://:9092   // uncomment
advertised.listeners=PLAINTEXT://your.host.name:9092 // uncomment and set to the host's address

broker.id=0
log.dirs=/kafka/kafka-logs  
zookeeper.connect=localhost:2181
# zookeeper.connect=localhost:2181,*:2181,*:2181
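As an illustration, if the broker were reachable at the hypothetical address 192.168.1.100, the two listener settings might read:

```properties
listeners=PLAINTEXT://:9092
advertised.listeners=PLAINTEXT://192.168.1.100:9092
```

advertised.listeners is the address the broker hands back to clients, so it must be resolvable and reachable from the client side.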

Start Kafka: nohup ./bin/kafka-server-start.sh ./config/server.properties &
Check: ps -ef | grep kafka

  4. Common commands
// start ZooKeeper
nohup ./bin/zookeeper-server-start.sh ./config/zookeeper.properties &

// start Kafka
nohup ./bin/kafka-server-start.sh ./config/server.properties &

// create a topic
./bin/kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 1 --partitions 1 --topic test

// list topics
./bin/kafka-topics.sh --list --zookeeper localhost:2181

// describe a topic
./bin/kafka-topics.sh --describe --zookeeper localhost:2181 --topic test

// start a console producer
./bin/kafka-console-producer.sh --broker-list localhost:9092 --topic test

// start a console consumer
./bin/kafka-console-consumer.sh --bootstrap-server localhost:9092 --topic test --from-beginning

zookeeper vs. bootstrap-server
Starting with Kafka 0.9.0.0 there were a few notable changes: the GroupCoordinator role was added on the server side, and topic offsets moved from being stored in ZooKeeper to a special topic (__consumer_offsets).

[via ZooKeeper]  --zookeeper localhost:2181
[via the broker] --bootstrap-server localhost:9092

Officially, for Kafka 2.2 and later, --bootstrap-server localhost:9092 is recommended in place of --zookeeper localhost:2181 (2.2 and later still accept --zookeeper). The command is:

kafka-topics.sh --create --bootstrap-server localhost:9092 --replication-factor 1 --partitions 1 --topic test1

If the Kafka version is below 2.2, the command is:

kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 1 --partitions 1 --topic test1

Note: 2181 is ZooKeeper's listening port; 9092 is Kafka's listening port.

The old style uses the --zookeeper parameter with ZooKeeper's host (or IP) and port, i.e. the value of the zookeeper.connect property in server.properties.

The new style uses the --bootstrap-server parameter with the host (or IP) and port of any broker node, i.e. host(or IP):9092.

For consumers, Kafka has two places this can be set: old consumers use the --zookeeper parameter; new consumers use --bootstrap-server.
If --zookeeper is used, the consumer's offsets are stored in ZooKeeper.

To inspect them, open a ZooKeeper client with ./zookeeper-client (or bin/zookeeper-shell.sh in the Kafka distribution) and run ls /consumers/[group_id]/offsets/[topic]/[broker_id-part_id], which shows a group's offset for a given topic partition.

If --bootstrap-server is used, the consumer's offsets are stored in Kafka itself.

For the console producer, the --broker-list parameter specifies the brokers to use.

The bin directory contains many shell scripts for operational tasks:

Script                                 Purpose
connect-distributed.sh                 run Kafka Connect in distributed (cluster) mode
connect-standalone.sh                  run Kafka Connect in standalone mode
kafka-acls.sh                          manage ACLs
kafka-broker-api-versions.sh           list the API versions supported by brokers
kafka-configs.sh                       configuration management
kafka-console-consumer.sh              Kafka console consumer
kafka-console-producer.sh              Kafka console producer
kafka-consumer-groups.sh               consumer group information
kafka-consumer-perf-test.sh            consumer performance test
kafka-delegation-tokens.sh             manage delegation tokens
kafka-delete-records.sh                delete log records below the low watermark
kafka-log-dirs.sh                      message log directory information
kafka-mirror-maker.sh                  replicate a Kafka cluster across data centers
kafka-preferred-replica-election.sh    trigger a preferred replica election
kafka-producer-perf-test.sh            producer performance test
kafka-reassign-partitions.sh           partition reassignment
kafka-replay-log-producer.sh           todo
kafka-replica-verification.sh          verify replication progress
kafka-run-class.sh                     run a Java class with the Kafka classpath
kafka-server-start.sh                  start the Kafka server
kafka-server-stop.sh                   stop the Kafka server
kafka-simple-consumer-shell.sh         deprecated; use kafka-console-consumer.sh instead
kafka-streams-application-reset.sh     reset a Kafka Streams application
kafka-topics.sh                        topic management
kafka-verifiable-consumer.sh           verifiable Kafka consumer (for testing)
kafka-verifiable-producer.sh           verifiable Kafka producer (for testing)
trogdor.sh                             run the Trogdor test framework
zookeeper-security-migration.sh        migrate ZooKeeper ACLs
zookeeper-server-start.sh              start the ZooKeeper server
zookeeper-server-stop.sh               stop the ZooKeeper server
zookeeper-shell.sh                     ZooKeeper client shell

View topic status

# ./bin/kafka-topics.sh --describe --zookeeper localhost:2181 --topic test1
Topic: test1	PartitionCount: 4	ReplicationFactor: 2	Configs: 
	Topic: test1	Partition: 0	Leader: 1	Replicas: 1,3	Isr: 1
	Topic: test1	Partition: 1	Leader: 2	Replicas: 2,1	Isr: 2,1
	Topic: test1	Partition: 2	Leader: none	Replicas: 3,2	Isr: 2
	Topic: test1	Partition: 3	Leader: 2	Replicas: 1,2	Isr: 2,1

Status explanation: test1 has four partitions and a replication factor of 2. Partition 1's leader is broker 2 (the broker.id), it has two replicas (brokers 2 and 1), and both are in the Isr (in-sync replicas, eligible to be elected leader). Partition 2 currently has no leader, and only broker 2 of its replicas is in sync.
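The same health check can be scripted: a sketch that parses the --describe output above (copied here as a literal string) and flags partitions with no leader, or with fewer in-sync replicas than assigned replicas:

```shell
# Sample kafka-topics.sh --describe output (copied from above, whitespace-separated)
describe='Topic: test1 Partition: 0 Leader: 1 Replicas: 1,3 Isr: 1
Topic: test1 Partition: 1 Leader: 2 Replicas: 2,1 Isr: 2,1
Topic: test1 Partition: 2 Leader: none Replicas: 3,2 Isr: 2
Topic: test1 Partition: 3 Leader: 2 Replicas: 1,2 Isr: 2,1'

# $4 = partition id, $6 = leader, $8 = replica list, $10 = Isr list
unhealthy=$(echo "$describe" | awk '{
  nr = split($8, r, ",")    # number of assigned replicas
  ni = split($10, s, ",")   # number of in-sync replicas
  if ($6 == "none" || ni < nr) print "partition " $4
}')
echo "$unhealthy"
```

Against the sample above this reports partitions 0 and 2: partition 0 has a replica out of the Isr, and partition 2 has no leader.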
