Apache Kafka Programming Primer: Setting the Number of Partitions and the Replication Factor

Reposted from: http://m.blog.csdn.net/article/details?id=50926633

We previously learned how to write a simple Kafka Producer. In that example, if the topic being written to did not exist, the Producer would create it automatically. As we all know (assuming you do), every topic has a number of partitions and a replication factor, but we cannot set these through the Producer API: when a topic is auto-created this way, the broker simply uses the num.partitions and default.replication.factor values from its server.properties file. Does that mean we have no way to define a topic's number of partitions and replication factor from our own program?
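For reference, these two broker-side settings live in the broker's server.properties file. The values below are only illustrative defaults, not taken from this article's cluster:

num.partitions=1
default.replication.factor=1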

We can create the topic ourselves with the AdminUtils.createTopic function that Kafka provides. Its signature is as follows:

def createTopic(zkClient: ZkClient, 
      topic: String,
      partitions: Int,   
      replicationFactor: Int,  
      topicConfig: Properties = new Properties)
This function has no return value. As the parameter list shows, the partitions and replicationFactor parameters are exactly the number of partitions and the replication factor discussed above, so we can create a topic with the settings we want through this function. Before calling createTopic we need to create a zkClient object, which wraps the API for talking to ZooKeeper. This API is not bundled with Kafka, so we first need to add the following dependency:

<dependency>
      <groupId>com.101tec</groupId>
      <artifactId>zkclient</artifactId>
      <version>0.3</version>
</dependency>
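If your project is built with sbt rather than Maven, the same dependency (same coordinates as above) can be declared in build.sbt like this:

libraryDependencies += "com.101tec" % "zkclient" % "0.3"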
Then we can create the ZkClient object:

val zk = "www.iteblog.com:2181"
val sessionTimeoutMs = 10000
val connectionTimeoutMs = 10000
val zkClient = new ZkClient(zk, sessionTimeoutMs, connectionTimeoutMs, ZKStringSerializer)
Note in particular that we must pass in the ZKStringSerializer object. If we leave it out, the code will still appear to work: you will see that the topic has been created in ZooKeeper, and it will even show up when you list the topics, but as soon as you send messages to it you will get the following exception:
[2016-02-05 16:45:52,335] ERROR Failed to collate messages by topic, partition due to: Failed to fetch topic metadata for topic: iteblog (kafka.producer.async.DefaultEventHandler)
[2016-02-05 16:45:52,441] WARN Error while fetching metadata [{TopicMetadata for topic iteblog ->
No partition metadata for topic flight due to kafka.common.LeaderNotAvailableException}] for topic [flight]: class kafka.common.LeaderNotAvailableException  (kafka.producer.BrokerPartitionInfo)
[2016-02-05 16:45:52,441] ERROR Failed to send requests for topics flight with correlation ids in [41,48] (kafka.producer.async.DefaultEventHandler)
[2016-02-05 16:45:52,441] ERROR Error in handling batch of 1 events (kafka.producer.async.ProducerSendThread)
kafka.common.FailedToSendMessageException: Failed to send messages after 3 tries.
    at kafka.producer.async.DefaultEventHandler.handle(DefaultEventHandler.scala:90)
    at kafka.producer.async.ProducerSendThread.tryToHandle(ProducerSendThread.scala:105)
    at kafka.producer.async.ProducerSendThread$$anonfun$processEvents$3.apply(ProducerSendThread.scala:88)
    at kafka.producer.async.ProducerSendThread$$anonfun$processEvents$3.apply(ProducerSendThread.scala:68)
    at scala.collection.immutable.Stream.foreach(Stream.scala:547)
    at kafka.producer.async.ProducerSendThread.processEvents(ProducerSendThread.scala:67)
    at kafka.producer.async.ProducerSendThread.run(ProducerSendThread.scala:45)

The producer cannot find a leader for the topic, so the messages cannot be sent. The reason is that without ZKStringSerializer only the topic metadata ends up in ZooKeeper; Kafka itself never actually creates the topic! Now we can use AdminUtils.createTopic to create the topic:

val topic = "iteblog"
val replicationFactor = 1
val numPartitions = 2
AdminUtils.createTopic(zkClient, topic, numPartitions, replicationFactor)
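The topicConfig parameter in the signature above can additionally carry topic-level configuration overrides at creation time. Here is a minimal sketch, assuming retention.ms is a valid topic-level property in your Kafka version:

import java.util.Properties

val topicConfig = new Properties()
// Illustrative topic-level override: keep messages for roughly one day
topicConfig.put("retention.ms", "86400000")
AdminUtils.createTopic(zkClient, topic, numPartitions, replicationFactor, topicConfig)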
If the topic already exists, the program will throw an exception:

Exception in thread "main" kafka.common.TopicExistsException: Topic "iteblog" already exists.
    at kafka.admin.AdminUtils$.createOrUpdateTopicPartitionAssignmentPathInZK(AdminUtils.scala:187)
    at kafka.admin.AdminUtils$.createTopic(AdminUtils.scala:172)
    at com.iteblog.kafka.IteblogProducerV3$.main(IteblogProducerV3.scala:46)
    at com.iteblog.kafka.IteblogProducerV3.main(IteblogProducerV3.scala)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    at java.lang.reflect.Method.invoke(Method.java:597)
    at com.intellij.rt.execution.application.AppMain.main(AppMain.java:144)

Otherwise, if nothing is printed, your topic has been created successfully! The complete code is as follows:

package com.iteblog.kafka

import kafka.admin.AdminUtils
import kafka.utils.ZKStringSerializer
import org.I0Itec.zkclient.ZkClient

object CreateTopic {

  def main(args: Array[String]) {
    // ZooKeeper connection settings
    val zk = "www.iteblog.com:2181"
    val sessionTimeoutMs = 10000
    val connectionTimeoutMs = 10000
    // ZKStringSerializer is required; without it Kafka will not pick up the new topic
    val zkClient = new ZkClient(zk, sessionTimeoutMs, connectionTimeoutMs, ZKStringSerializer)
    // Create the topic "iteblog" with 2 partitions and a replication factor of 1
    val topic = "iteblog"
    val replicationFactor = 1
    val numPartitions = 2
    AdminUtils.createTopic(zkClient, topic, numPartitions, replicationFactor)
  }
}
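If you want to avoid the TopicExistsException shown earlier, you can check whether the topic already exists before creating it. A minimal sketch, assuming the AdminUtils.topicExists helper is available in your Kafka version:

if (!AdminUtils.topicExists(zkClient, topic)) {
  AdminUtils.createTopic(zkClient, topic, numPartitions, replicationFactor)
} else {
  println(s"Topic $topic already exists, skipping creation")
}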
Besides using AdminUtils.createTopic as above to set the replication factor and the number of partitions when creating a topic, we can also use kafka.admin.TopicCommand to achieve the same thing, as follows:

val arguments = Array("--create", "--zookeeper", zk, "--replication-factor", "2", "--partitions", "2", "--topic", "iteblog")
TopicCommand.main(arguments)