Kafka Installation (Part 2)

1. Create a topic

localhost:bin jack$ kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 1 --partitions 1 --topic test
Created topic "test".
localhost:bin jack$ kafka-topics.sh --list --zookeeper localhost:2181
test


2. Send some messages

localhost:bin jack$ kafka-console-producer.sh --broker-list localhost:9092 --topic test
[2016-06-22 14:35:14,700] WARN Property topic is not valid (kafka.utils.VerifiableProperties)
This is a message
This is another message


3. Start a consumer

localhost:bin jack$ kafka-console-consumer.sh  --zookeeper localhost:2181 --topic test --from-beginning
This is a message
This is another message

If you only want the latest messages, simply omit the --from-beginning flag.

4. Setting up a multi-broker cluster (multiple brokers on a single node)

Make two more copies of the kafka_2.10-0.8.2.2 directory, named kafka_2 and kafka_3:

cp -r kafka_2.10-0.8.2.2 kafka_2
cp -r kafka_2.10-0.8.2.2 kafka_3

Then edit kafka_2/config/server.properties and kafka_3/config/server.properties, changing broker.id and port so that each broker's values are unique:

kafka_2.10-0.8.2.2/config/server.properties  
broker.id=1 
port=9092 
kafka_2/config/server.properties  
broker.id=2  
port=9093  
kafka_3/config/server.properties  
broker.id=3  
port=9094  
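The per-broker edits can also be scripted instead of done by hand. A minimal sketch using sed, run here against a throwaway sample file so it is self-contained; in a real setup you would point it at kafka_2/config/server.properties and kafka_3/config/server.properties:

```shell
# Demonstrate the broker.id / port / log.dirs edits on a sample file.
tmp=$(mktemp -d)
cat > "$tmp/server.properties" <<'EOF'
broker.id=1
port=9092
log.dirs=/tmp/kafka-logs
EOF

id=2   # broker number; use 3 for kafka_3
sed -i.bak \
    -e "s/^broker\.id=.*/broker.id=${id}/" \
    -e "s/^port=.*/port=$((9091 + id))/" \
    -e "s|^log\.dirs=.*|log.dirs=/tmp/kafka${id}-logs|" \
    "$tmp/server.properties"

cat "$tmp/server.properties"
# prints:
#   broker.id=2
#   port=9093
#   log.dirs=/tmp/kafka2-logs
```

The `-i.bak` form of in-place editing works with both GNU and BSD sed, and `9091 + id` reproduces the port scheme above (broker 1 → 9092, broker 2 → 9093, broker 3 → 9094).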

Multiple brokers cannot share the same log directory, so also give each broker its own log.dirs:

log.dirs=/tmp/kafka2-logs
log.dirs=/tmp/kafka3-logs

Otherwise startup fails with:
kafka.common.KafkaException: Failed to acquire lock on file .lock in /tmp/kafka-logs. A Kafka instance in another process or thread is using this directory.
	at kafka.log.LogManager$$anonfun$lockLogDirs$1.apply(LogManager.scala:98)
	at kafka.log.LogManager$$anonfun$lockLogDirs$1.apply(LogManager.scala:95)
	at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:244)
	at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:244)
	at scala.collection.IndexedSeqOptimized$class.foreach(IndexedSeqOptimized.scala:33)
	at scala.collection.mutable.WrappedArray.foreach(WrappedArray.scala:34)
	at scala.collection.TraversableLike$class.map(TraversableLike.scala:244)
	at scala.collection.AbstractTraversable.map(Traversable.scala:105)
	at kafka.log.LogManager.lockLogDirs(LogManager.scala:95)
	at kafka.log.LogManager.<init>(LogManager.scala:57)
	at kafka.server.KafkaServer.createLogManager(KafkaServer.scala:335)
	at kafka.server.KafkaServer.startup(KafkaServer.scala:85)
	at kafka.server.KafkaServerStartable.startup(KafkaServerStartable.scala:29)
	at kafka.Kafka$.main(Kafka.scala:46)
	at kafka.Kafka.main(Kafka.scala)
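To catch that mistake before starting anything, the configs can be checked for duplicate log.dirs values. A minimal sketch, run here against throwaway sample files so it is self-contained; substitute the real */config/server.properties paths:

```shell
# Flag any log.dirs value shared by two or more broker configs.
tmp=$(mktemp -d)
printf 'log.dirs=/tmp/kafka-logs\n'  > "$tmp/s1.properties"
printf 'log.dirs=/tmp/kafka2-logs\n' > "$tmp/s2.properties"
printf 'log.dirs=/tmp/kafka3-logs\n' > "$tmp/s3.properties"

dupes=$(grep -h '^log\.dirs=' "$tmp"/*.properties | sort | uniq -d)
if [ -z "$dupes" ]; then
    echo "log.dirs values are unique"
else
    echo "duplicate log.dirs: $dupes"
fi
```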

Start the three brokers, running each one against its own config file:

kafka-server-start.sh ../config/server.properties &
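With the three installation directories sitting side by side (the names below follow the copies made earlier), starting every broker can be wrapped in a loop. This sketch only prints the commands so it can run anywhere; drop the leading echo to actually launch the brokers:

```shell
# Print the start command for each broker directory; remove the
# leading echo to launch the brokers for real.
for dir in kafka_2.10-0.8.2.2 kafka_2 kafka_3; do
    echo "$dir/bin/kafka-server-start.sh" "$dir/config/server.properties" "&"
done
```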


Create a topic with a replication factor of 3, then check its status:

localhost:bin jack$ kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 3 --partitions 1 --topic my-replicated-topic
Created topic "my-replicated-topic".
localhost:bin jack$ kafka-topics.sh --describe --zookeeper localhost:2181 --topic my-replicated-topic
Topic:my-replicated-topic	PartitionCount:1	ReplicationFactor:3	Configs:
	Topic: my-replicated-topic	Partition: 0	Leader: 1	Replicas: 1,2,3	Isr: 1,2,3

The output shows that the topic has 1 partition and a replication factor of 3, and that node 1 is the leader.

The fields are explained as follows:

  • "leader" is the node responsible for all reads and writes for the given partition. Each node will be the leader for a randomly selected portion of the partitions.
  • "replicas" is the list of nodes that replicate the log for this partition regardless of whether they are the leader or even if they are currently alive.
  • "isr" is the set of "in-sync" replicas. This is the subset of the replicas list that is currently alive and caught-up to the leader. 
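When scripting against a cluster, those three fields can be pulled out of the --describe output with awk. A minimal sketch using a captured sample line so it is self-contained; in practice the input would be piped from kafka-topics.sh --describe:

```shell
# Extract the Leader / Replicas / Isr fields from one describe line.
# The describe output is tab-separated, so split on tabs.
line=$(printf '\tTopic: my-replicated-topic\tPartition: 0\tLeader: 1\tReplicas: 1,2,3\tIsr: 1,2,3')
printf '%s\n' "$line" | awk -F'\t' '{
    for (i = 1; i <= NF; i++)
        if ($i ~ /^(Leader|Replicas|Isr):/) print $i
}'
# prints:
#   Leader: 1
#   Replicas: 1,2,3
#   Isr: 1,2,3
```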

Let's publish a few messages to our new topic:

kafka-console-producer.sh --broker-list localhost:9092 --topic my-replicated-topic

my test message 1

my test message 2

^C


Now let's consume these messages:

kafka-console-consumer.sh --zookeeper localhost:2181 --from-beginning --topic my-replicated-topic

...

my test message 1

my test message 2

^C


References:

http://kafka.apache.org/documentation.html#quickstart_send

http://www.centoscn.com/CentosServer/cluster/2015/0312/4863.html
