Facebook has built a system called the Social Inbox that integrates email, IM, SMS, and text messages.
Facebook needs to store 135 billion messages every month. Kannan Muthukkaruppan has published a post, The Underlying Technology of Messages, describing the storage technology behind it: HBase.
HBase beat out MySQL, Cassandra, and similar systems.
Cassandra, the previous-generation store, was built by Facebook specifically for inbox-type applications, but in practice Facebook's engineers found that Cassandra's eventual consistency model was not a good match for their new real-time Messages product. Facebook also runs an extensive MySQL infrastructure, but MySQL performance became unacceptable once the data sets and indexes grew large enough. Rather than building a new system from scratch, Facebook's engineers chose HBase.
HBase supports high rates of row-level updates over massive amounts of data, which is exactly what a messaging system needs. HBase is also a column-oriented key-value store built on the BigTable model, good at fetching and scanning rows by key, again just what a messaging system needs. Complex queries, however, are not supported by HBase; those are handed off to tools such as Hive, which Facebook developed to query its petabyte-scale data warehouse. Hive is built on HDFS, the same file system that HBase uses.
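The article does not show what handing a complex query to Hive looks like in practice; as a hedged illustration, the sketch below sends an analytic question to Hive over the HiveServer2 JDBC driver. The warehouse host, the `messages` table, and its `sent_at` column are assumptions invented for this example, not anything Facebook has published.

```java
// Hedged sketch: sending a complex, full-data-set query to Hive over JDBC.
// Host, database, table, and column names are assumptions for illustration;
// the article only states that complex queries go to Hive rather than HBase.
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class HiveQuerySketch {
    public static void main(String[] args) throws Exception {
        Class.forName("org.apache.hive.jdbc.HiveDriver");
        try (Connection conn = DriverManager.getConnection("jdbc:hive2://warehouse-host:10000/default");
             Statement stmt = conn.createStatement();
             // An analytic question HBase itself would not answer well:
             // total message volume per day across the whole data set.
             ResultSet rs = stmt.executeQuery(
                 "SELECT to_date(sent_at) AS day, COUNT(*) AS msgs " +
                 "FROM messages GROUP BY to_date(sent_at) ORDER BY day")) {
            while (rs.next()) {
                System.out.println(rs.getString("day") + "\t" + rs.getLong("msgs"));
            }
        }
    }
}
```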
Facebook chose HBase based on two data patterns they observed:
1. A short-lived set of temporal, volatile data
2. An ever-growing set of data that is rarely accessed
For example, the contents of an inbox are usually read only once and rarely looked at again afterwards.
Key aspects of the system:
* HBase
    * Uses a simpler consistency model than Cassandra.
    * Better performance and scalability for their particular data patterns.
    * Automatic load balancing and failover, compression, and multiple shards per server.
* HDFS, which supports replication, end-to-end checksums, and automatic rebalancing.
* Facebook's operations staff are already experienced with HDFS.
* Haystack is used to store attachments.
* Dedicated application servers handle messages coming in from many different sources.
* A user discovery service built on top of ZooKeeper.
* Infrastructure services for email account verification, friend relationships, privacy decisions, and message delivery routing.
* 15 engineers built 20 new infrastructure services in one year.
* Facebook will not standardize on a single database platform; different kinds of tasks should use different platforms.
Don't underestimate how much Facebook's existing experience with HDFS/Hadoop/Hive drove its adoption of HBase. It is every product's dream to ride the momentum of another popular product and be pulled into its ecosystem, and HBase has achieved exactly that. Given the many strengths HBase brings to persistence -- real-time, distributed, linearly scalable, robust, BigData, key-value, column-oriented -- we can expect it to become even more popular, and Facebook's choice will only accelerate that.
Facebook's New Real-time Messaging System: HBase to Store 135+ Billion Messages a Month
You may have read somewhere that Facebook has introduced a new Social Inbox integrating email, IM, SMS, text messages, and on-site Facebook messages. All-in-all they need to store over 135 billion messages a month. Where do they store all that stuff? Facebook's Kannan Muthukkaruppan gives the surprise answer in The Underlying Technology of Messages: HBase. HBase beat out MySQL, Cassandra, and a few others.
Why a surprise? Facebook created Cassandra and it was purpose-built for an inbox type application, but they found Cassandra's eventual consistency model wasn't a good match for their new real-time Messages product. Facebook also has an extensive MySQL infrastructure, but they found performance suffered as data sets and indexes grew larger. And they could have built their own, but they chose HBase.
HBase is a scaleout table store supporting very high rates of row-level updates over massive amounts of data. Exactly what is needed for a Messaging system. HBase is also a column-based key-value store built on the BigTable model. It's good at fetching rows by key or scanning ranges of rows and filtering. Also what is needed for a Messaging system. Complex queries are not supported, however. Queries are generally given over to an analytics tool like Hive, which Facebook created to make sense of their multi-petabyte data warehouse, and Hive is based on Hadoop's file system, HDFS, which is also used by HBase.
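To make that access pattern concrete, here is a minimal sketch using the classic (pre-1.0) HBase Java client: a row-level update to store one message, then a key-range scan to read a mailbox back. The `messages` table, the `m` column family, and the user-prefixed row key are illustrative assumptions, not Facebook's published schema.

```java
// Minimal sketch of the access pattern described above, using the classic
// HBase Java client. Table name, column family, and row-key layout are
// illustrative assumptions, not Facebook's actual schema.
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.*;
import org.apache.hadoop.hbase.util.Bytes;

public class MailboxSketch {
    public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        HTable table = new HTable(conf, "messages");

        // Row-level update: append one message under a user-prefixed row key.
        byte[] rowKey = Bytes.toBytes("user42#msg-000123");
        Put put = new Put(rowKey);
        put.add(Bytes.toBytes("m"), Bytes.toBytes("body"), Bytes.toBytes("hello"));
        table.put(put);

        // Mailbox read: scan the contiguous range of keys for that user.
        Scan scan = new Scan(Bytes.toBytes("user42#"), Bytes.toBytes("user42$"));
        ResultScanner scanner = table.getScanner(scan);
        try {
            for (Result row : scanner) {
                byte[] body = row.getValue(Bytes.toBytes("m"), Bytes.toBytes("body"));
                System.out.println(Bytes.toString(row.getRow()) + " -> " + Bytes.toString(body));
            }
        } finally {
            scanner.close();
            table.close();
        }
    }
}
```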
Facebook chose HBase because they monitored their usage and figured out what they really needed. What they needed was a system that could handle two types of data patterns:
- A short set of temporal data that tends to be volatile
- An ever-growing set of data that rarely gets accessed
Makes sense. You read what's current in your inbox once and then rarely if ever take a look at it again. These are so different one might expect two different systems to be used, but apparently HBase works well enough for both. How they handle generic search functionality isn't clear as that's not a strength of HBase, though it does integrate with various search systems.
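One common way to serve both patterns from a single table is to encode the row key so that a user's newest messages sort first: a short scan then returns only the hot, recent slice, while older, rarely read rows simply sit further down the key range. Below is a hedged sketch of such a key builder, assuming a numeric user id and a millisecond timestamp; it illustrates the data pattern, not Facebook's actual key design.

```java
// Hypothetical row-key builder: user id plus a reversed timestamp, so the
// newest messages for a user sort first and a small scan returns only the
// recent, frequently read slice. Illustration only, not Facebook's schema.
import org.apache.hadoop.hbase.util.Bytes;

public class MessageRowKey {
    public static byte[] build(long userId, long epochMillis) {
        byte[] user = Bytes.toBytes(userId);
        // Reverse the timestamp so lexicographic order == newest-first.
        byte[] reversedTs = Bytes.toBytes(Long.MAX_VALUE - epochMillis);
        return Bytes.add(user, reversedTs);
    }
}
```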
Some key aspects of their system:
- HBase:
    - Has a simpler consistency model than Cassandra.
    - Very good scalability and performance for their data patterns.
    - Most feature rich for their requirements: auto load balancing and failover, compression support, multiple shards per server, etc.
- HDFS, the filesystem used by HBase, supports replication, end-to-end checksums, and automatic rebalancing.
- Facebook's operational teams have a lot of experience using HDFS because Facebook is a big user of Hadoop and Hadoop uses HDFS as its distributed file system.
- Haystack is used to store attachments.
- A custom application server was written from scratch in order to service the massive inflows of messages from many different sources.
- A user discovery service was written on top of ZooKeeper (see the hypothetical sketch after this list).
- Infrastructure services are accessed for: email account verification, friend relationships, privacy decisions, and delivery decisions (should a message be sent over chat or SMS?).
- Keeping with their small-teams-doing-amazing-things approach, 20 new infrastructure services are being released by 15 engineers in one year.
- Facebook is not going to standardize on a single database platform; they will use separate platforms for separate tasks.
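The article gives no internals for the ZooKeeper-based user discovery service, so the following is only a plausible sketch: each messaging cell is assumed to register itself under a hypothetical `/messaging/cells` znode, and the lookup hashes a user id onto one of the registered cells and reads that cell's endpoint from the znode's data. All names and the session timeout are assumptions.

```java
// Hypothetical sketch of a ZooKeeper-backed "user discovery" lookup.
// The /messaging/cells znode layout and the userId -> cell hashing are
// assumptions for illustration; the article gives no internal details.
import org.apache.zookeeper.KeeperException;
import org.apache.zookeeper.ZooKeeper;

import java.nio.charset.StandardCharsets;
import java.util.List;

public class UserCellLookup {
    private final ZooKeeper zk;

    public UserCellLookup(String zkQuorum) throws Exception {
        // Session timeout and the empty watcher are arbitrary choices for the sketch.
        this.zk = new ZooKeeper(zkQuorum, 30_000, event -> { });
    }

    /** Returns the endpoint of the cell assumed to host this user's mailbox. */
    public String findCellFor(long userId) throws KeeperException, InterruptedException {
        // Assume each cell registers itself as a child of /messaging/cells.
        List<String> cells = zk.getChildren("/messaging/cells", false);
        // Toy assignment: hash the user id onto one of the registered cells.
        String cell = cells.get((int) (Math.abs(userId) % cells.size()));
        // The znode's data is assumed to hold the cell's service endpoint.
        byte[] endpoint = zk.getData("/messaging/cells/" + cell, false, null);
        return new String(endpoint, StandardCharsets.UTF_8);
    }
}
```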
I wouldn't sleep on the idea that Facebook's existing experience with HDFS/Hadoop/Hive was a big adoption driver for HBase. It's the dream of any product to partner with another very popular product in the hope of being pulled in as part of the ecosystem. That's what HBase has achieved. Given how HBase covers a nice spot in the persistence spectrum--real-time, distributed, linearly scalable, robust, BigData, open-source, key-value, column-oriented--we should see it become even more popular, especially with its anointment by Facebook.
Related Articles
- Integrating Hive and HBase by Carl Steinbach
- 1 Billion Reasons Why Adobe Chose HBase
- HBase Architecture 101 - Write-ahead-Log by Lars George
- HBase Architecture 101 - Storage by Lars George
- BigTable Model with Cassandra and HBase by Ricky Ho
- New Facebook Chat Feature Scales To 70 Million Users Using Erlang