[Kafka Tutorial Series 14] Kafka Design Motivation

Kafka is designed to act as a unified platform for handling all of a large company's real-time data feeds. This requires high throughput to support large-scale event streams such as real-time log aggregation, graceful handling of large data backlogs, and low-latency delivery. The system uses partitioned, distributed processing and guarantees fault tolerance, making it suitable for a wide range of real-time data processing scenarios.

We designed Kafka to be able to act as a unified platform for handling all the real-time data feeds a large company might have. To do this we had to think through a fairly broad set of use cases.

It would have to have high-throughput to support high volume event streams such as real-time log aggregation.

It would need to deal gracefully with large data backlogs to be able to support periodic data loads from offline systems.

It also meant the system would have to handle low-latency delivery to handle more traditional messaging use-cases.

We wanted to support partitioned, distributed, real-time processing of these feeds to create new, derived feeds. This motivated our partitioning and consumer model.
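
As a rough illustration of that consumer model (not part of the original article), the sketch below uses the Kafka Java client: consumers that share a group.id divide a topic's partitions among themselves, and each instance republishes derived records to a new topic. The broker address, topic names (page-views, page-views-derived) and group id are hypothetical placeholders.

```java
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

import java.time.Duration;
import java.util.Collections;
import java.util.Properties;

public class DerivedFeedProcessor {
    public static void main(String[] args) {
        Properties consumerProps = new Properties();
        consumerProps.put("bootstrap.servers", "localhost:9092");
        // Consumers with the same group.id split the topic's partitions among themselves.
        consumerProps.put("group.id", "page-view-aggregators");
        consumerProps.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        consumerProps.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");

        Properties producerProps = new Properties();
        producerProps.put("bootstrap.servers", "localhost:9092");
        producerProps.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        producerProps.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(consumerProps);
             KafkaProducer<String, String> producer = new KafkaProducer<>(producerProps)) {
            consumer.subscribe(Collections.singletonList("page-views"));
            while (true) {
                ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(500));
                for (ConsumerRecord<String, String> record : records) {
                    // Derive a new record from the raw event and publish it to a downstream feed.
                    String derived = record.value().toUpperCase();
                    producer.send(new ProducerRecord<>("page-views-derived", record.key(), derived));
                }
            }
        }
    }
}
```

Running several copies of this process with the same group id spreads the topic's partitions across them, which is the parallelism model the paragraph above refers to.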

Finally, in cases where the stream is fed into other data systems for serving, we knew the system would have to be able to guarantee fault-tolerance in the presence of machine failures.

Supporting these uses led us to a design with a number of unique elements, more akin to a database log than a traditional messaging system. We will outline some elements of the design in the following sections.
