kafka
Author: blackproof
storm kafka consumer
Preparation and related classes: GlobalPartitionInformation (storm.kafka.trident) records the mapping between a partition id and its broker, e.g. GlobalPartitionInformation info = new GlobalPartitionInformation(); info.addPartition(0, new Broker("10.1.110.24", 9092)); info.addPartition(0, n… (2015-11-06)
Kafka protocol
The Kafka protocol is fairly simple; there are only six client request APIs. Metadata - describes the currently available brokers, their host and port information, and which broker hosts which partitions (i.e. the live brokers, their host/port, and the partitions each one holds). Send - send messages to a broker. Fetch - fetch messages from a broker: one variant fetches data, one gets cluster metadata, and one gets offset informa… (2015-07-10)
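All of the request APIs listed above share one binary framing on the wire. As a rough sketch (not the full protocol; field layout follows the common request header described in the protocol guide: int32 size prefix, then int16 api_key, int16 api_version, int32 correlation_id, and a length-prefixed client_id string):

```python
import struct

def encode_request_header(api_key, api_version, correlation_id, client_id):
    """Encode a Kafka request header: int16 api_key, int16 api_version,
    int32 correlation_id, then client_id as an int16-length-prefixed string.
    The whole request is prefixed with its int32 size."""
    cid = client_id.encode("utf-8")
    body = struct.pack(">hhih", api_key, api_version, correlation_id, len(cid)) + cid
    return struct.pack(">i", len(body)) + body

def decode_request_header(buf):
    """Inverse of encode_request_header; returns (size, api_key,
    api_version, correlation_id, client_id)."""
    size = struct.unpack_from(">i", buf, 0)[0]
    api_key, api_version, corr_id, cid_len = struct.unpack_from(">hhih", buf, 4)
    client_id = buf[14:14 + cid_len].decode("utf-8")
    return size, api_key, api_version, corr_id, client_id
```

A real request would append the API-specific body after this header; the sketch only round-trips the header itself.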
kafka storm error
…t>(NimbusClient.java:36) at backtype.storm.utils.NimbusClient.getConfiguredClient(NimbusClient.java:17) at backtype.storm.utils.Utils.downloadFromMaster(Utils.java:190) at com.alipay.bluewhale.core.daemon.supervisor.SynchronizeSupervisor.download… (2015-07-07)
kafka Reassign Partitions Tool
Kafka 0.8 added a partition-reassignment feature for scaling out, adding or removing replicas, and moving partitions, via the kafka-reassign-partitions.sh script. Option / Description: --broker-list <brokerlist> The list of brokers to which the… (2015-07-02)
kafka topic commands
The kafka topic command can create a topic on specified broker ids and partitions, and can also add partitions. kafka-topics.sh: Create, delete, describe, or change a topic. Option / Description: … (2015-07-02)
kafka replication tools
How the replication tool works (an asynchronous process; it returns as soon as step 1 completes): 1. update the /admin/preferred_replica_election node in ZooKeeper with the replica positions (excluding any crashed leader); 2. the controller's ZooKeeper listener reads the topic partitions' replica positions; 3. the controller gets each top… (2015-07-01)
kafka configuration parameters
server.properties. ############# Server Basics ############# # The id of the broker. This must be set to a unique integer for each broker. broker.id=0341 ############# Socket Server Settings ############# # The port the socket server listens on. port=9092 # Hostname the broker will bind to. If not set, the server will bind to all interfaces. host.name=ip… (2015-07-08)
kafka client-side producer
…serializer value: org.apache.kafka.common.serialization.StringSerializer. 3. Compute which partition of the target topic to send to: use the partition set on the record; if it is empty, compute one with the partitioner class org.apache.kafka.clients.producer.internals.DefaultPartitioner. 4. Make sure the serialized size of the message does not exceed the threshold MAX_REQUES… (2015-06-19)
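Step 3 above (record partition wins, otherwise the partitioner decides) can be sketched as follows. This is a hypothetical simplification, not the real client code: the actual DefaultPartitioner hashes the serialized key with murmur2, while CRC32 stands in here to keep the sketch dependency-free.

```python
import itertools
import zlib

class SketchPartitioner:
    """Simplified stand-in for DefaultPartitioner: explicit partition on the
    record wins, then keyed hash, then round-robin for unkeyed records."""

    def __init__(self):
        self._rr = itertools.count()  # round-robin counter for unkeyed records

    def partition(self, record_partition, key, num_partitions):
        if record_partition is not None:
            return record_partition  # the partition set on the record wins
        if key is not None:
            # real client: murmur2 over the serialized key; CRC32 here
            return zlib.crc32(key) % num_partitions
        return next(self._rr) % num_partitions  # spread unkeyed records evenly
```

Usage: the same key always lands on the same partition, which is what lets keyed records stay ordered per key.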
kafka: getting the latest partition offset
To get a partition's latest offset you need Kafka's SimpleConsumer. import java.util.ArrayList; import java.util.Collections; import java.util.Date; import java.util.HashMap; import java.util.List; import java.util.Map; import java.util.Properties; import java.util.TreeMap; import java.util.Map.Entry; import kafka.api.PartitionOffsetRequestInfo; import kafka.common.TopicAndPartition; import kafka.consumer.Consumer; import kafka.consumer… (2015-06-05)
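The PartitionOffsetRequestInfo used with SimpleConsumer takes a timestamp, where -1 means "latest" and -2 means "earliest". A hedged offline sketch of how the broker resolves such a request (hypothetical simplification of the broker's offsets-before lookup; segment layout and function names are assumptions):

```python
LATEST_TIME = -1    # OffsetRequest.LatestTime: resolve to the log end offset
EARLIEST_TIME = -2  # OffsetRequest.EarliestTime: resolve to the oldest segment

def offsets_before(segments, log_end_offset, time, max_offsets=1):
    """segments: list of (base_offset, last_modified_ms), oldest first.
    Returns up to max_offsets offsets, largest first, for the given time."""
    if time == LATEST_TIME:
        candidates = [log_end_offset]
    elif time == EARLIEST_TIME:
        candidates = [segments[0][0]]
    else:
        # base offsets of segments last modified at or before the timestamp
        candidates = [base for base, mtime in segments if mtime <= time]
        candidates.reverse()  # largest offsets first
    return candidates[:max_offsets]
```

So "latest partition offset" is simply the log-end offset returned for time = -1.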
Kafka protocol, part 2 (details)
Kafka does not route a message to a topic partition for you, so the producer must send to the broker hosting the partition. A client can obtain cluster metadata from any broker to find a partition's leader broker. When the leader broker fails to process data there are two cases: 1. the broker died; 2. the broker no longer hosts this partition. So the loop is: whenever a response indicates an error, refresh the metadata and retry. From the official docs: Cycle through a list of "bootstrap" kafka urls until we find one we can connect to. Fetch cluster metadata. Process fetch or produce… (2015-07-10)
kafka: reassigning leaders with kafka-preferred-replica-election.sh
bin/kafka-preferred-replica-election.sh --zookeeper hostzk/kafka-real; bin/kafka-preferred-replica-election.sh --zookeeper localhost:12913/kafka --path-to-json-file topicPartitionList.json. topicPartitionList.json: {"partitions":[{"topic":"topic","partition": 0},{"topic":"topic","partition": 1},{"topic":"topic","partition": 2},{"topic"… (2015-07-17)
kafka LogManager class: Kafka's storage mechanism
messageSet: the channel class over each log file. base offset: the absolute offset within the topic. offsetIndex: a channel/map class per log index file, storing relative offset values and file positions. Topics are split by partition and distributed across machines; a partition has multiple log files, each log file with one index file. The log file holds the actual data; the index file holds each entry's offset relative to the log file's base offset, plus its position within the log file. (2015-08-26)
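The relative-offset scheme described above can be sketched in a few lines (a minimal illustration of the idea, not Kafka's actual OffsetIndex code; names are hypothetical):

```python
def index_entry(absolute_offset, base_offset, file_position):
    """The index stores (relative offset, file position); the relative
    offset is the message's absolute offset minus the segment's base offset,
    which keeps each entry small (an int32 instead of an int64)."""
    return (absolute_offset - base_offset, file_position)

def lookup(index, base_offset, target_offset):
    """Return the file position of the last indexed entry at or before
    target_offset; the broker then scans the log file forward from there.
    index must be sorted by relative offset."""
    rel = target_offset - base_offset
    pos = 0
    for r, p in index:
        if r <= rel:
            pos = p
        else:
            break
    return pos
```

Because the index is sparse, lookup lands on the nearest preceding entry rather than the exact message, which is why a short forward scan in the log file follows.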
kafka: fetching metadata
…data. The NetworkClient class's poll method checks whether the metadata needs updating: /** Add a metadata request to the list of sends if we can make one */ private void maybeUpdateMetadata(List<NetworkSend> sends, long now) { // Beware that the… (2015-10-14)
kafka leader balance
Balancing leadership: whenever a broker stops or crashes, leadership for that broker's partitions transfers to other replicas. This means that by default, when the broker is restarted it will only be a follower for all its partitions, meaning it will not be used for client reads and writes. To avoid this imbalance, Kafka has a notion of preferred replicas. If the list of replicas for a partition is 1,5,9 then node 1 is preferred as the leader to… (2015-10-14)
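The preferred-replica rule above (for replicas 1,5,9 the preferred leader is node 1) is simple to state in code. A minimal sketch, assuming the assignment is a dict of (topic, partition) to replica list; function names are hypothetical:

```python
def preferred_leader(replicas):
    """The first replica in the assignment list is the preferred leader,
    e.g. for replicas [1, 5, 9] node 1 is preferred."""
    return replicas[0]

def imbalanced_partitions(assignment, current_leaders):
    """Partitions whose current leader is not the preferred one; these are
    the candidates that preferred-replica election would move back."""
    return [tp for tp, replicas in assignment.items()
            if current_leaders[tp] != preferred_leader(replicas)]
```

This is the check behind leader rebalancing: after a restarted broker rejoins, these are exactly the partitions whose leadership gets handed back to it.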
kafka: broker crashes & leader election
…ener() extends IZkChildListener with Logging { this.logIdent = "[BrokerChangeListener on Controller " + controller.config.brokerId + "]: " def handleChildChange(parentPath : String, currentBrokerList : java.util.List[String])… (2015-10-09)
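When the controller's broker-change listener fires for a dead leader, a new leader has to be elected for each affected partition. A sketch of the idea behind the offline-partition leader selector (not the controller's actual Scala code; this is a simplified illustration):

```python
def elect_leader(assigned_replicas, isr, live_brokers):
    """Prefer the first assigned replica that is both alive and in the ISR;
    failing that, fall back to any live assigned replica (an 'unclean'
    election, which may lose messages); else the partition stays offline."""
    for r in assigned_replicas:
        if r in live_brokers and r in isr:
            return r
    for r in assigned_replicas:
        if r in live_brokers:
            return r  # unclean: new leader was not in the ISR
    return None  # no live replica: partition is offline
```

The two-pass structure is the point: correctness (leader from the ISR) is tried first, availability (any live replica) only as a fallback.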
kafka producer, server side
…the KafkaApis class calls the handleProducerOrOffsetCommitRequest method: def handle(request: RequestChannel.Request) { try { trace("Handling request: " + request.requestObj + " from client: " + request.remoteAddress) request.requestId match {… (2015-09-01)
kafka KafkaRequestHandlerPool class
KafkaRequestHandlerPool is the handler pool for KafkaRequestHandler; it processes everything on the request queue, delegating the actual handling to the KafkaApis class. for(i <- 0 until numThreads) { runnables(i) = new KafkaRequestHandler(i, brokerId, aggregateIdleMeter, numThreads, requestChannel, apis) threads(i) = Utils.daemonThread("kafka-request-handler-" + i, runnables(i)) threads(i).start() } The run method: def run() { while(true) { tr… (2015-09-01)
kafka SocketServer class
(2015-09-01)
kafka ReplicaManager class
// start ISR expiration thread scheduler.schedule("isr-expiration", maybeShrinkIsr, period = config.replicaLagTimeMaxMs, unit = TimeUnit.MILLISECONDS) } The main method is maybeShrinkIsr: private def maybeShrinkIsr(): Unit = { trace("… (2015-08-27)
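The ISR-shrinking check that maybeShrinkIsr performs can be sketched as follows (a simplified stand-in for the broker's logic, not the real Scala code; the per-replica caught-up timestamps are an assumed input):

```python
def shrink_isr(isr, leader, last_caught_up_ms, now_ms, replica_lag_time_max_ms):
    """Drop followers that have not caught up to the leader within
    replica.lag.time.max.ms; the leader itself always stays in the ISR."""
    return [r for r in isr
            if r == leader
            or now_ms - last_caught_up_ms.get(r, 0) <= replica_lag_time_max_ms]
```

Running this periodically (as the scheduled "isr-expiration" task above does) is what keeps slow or dead followers from blocking acks that wait on the full ISR.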
kafka TopicConfigManager class
…per-log configuration (one log per topic per partition). /** Register the config-change listener. Begin watching for config changes */ def startup() { ZkUtils.makeSurePersistentPathExists(zkClient, ZkUtils.TopicConfigChangesPath) // watch the children of /config/changes; Confi… (2015-08-27)
kafka parameters (reposted)
…tions and replicas); the socket connections established from this list are used to send the actual data. The list can be a subset of the brokers, or a VIP pointing at a subset of the brokers. request.required.acks (default: 0) controls when a produce request counts as complete; more precisely, how many brokers must have committed the data to their log file and sent an ack to the leader. Possible values: 0 means the producer never waits for an ack from the broker (the 0.7 behavior); this option gives the lowest latency, but the durability guarantee is… (2015-06-05)
kafka: building from source
(2015-06-06)
kafka & storm commands
…ost36:2181,host37:2181,host38:2181 bin/kafka-topics.sh --create --zookeeper host34:2181,host36:2181,host37:2181,host38:2181 --topic dirkzhang bin/kafka-topics.sh --describe --zookeeper host34:2181,host36:2181,host37:2181,host38:2181 --topic dirkzhang (2015-06-03)
Kafka introduction (reposted)
…a standards-compliant implementation. Kafka groups stored messages by topic; message senders are called producers and message receivers are called consumers. A Kafka cluster consists of multiple Kafka instances, each instance (server) being a broker. The cluster, producers, and consumers all rely on ZooKeeper to keep the system available and to store metadata. Main features: 1) message persistence: to get real value out of big data, no information can be lost; Apache Kafka is designed around an O(1) disk str… (2015-04-19)