Facebook's System Architecture

Facebook Engineering: What is Facebook's architecture?
 

From various readings and conversations I have had, my understanding of Facebook's current architecture is:
  • Web front-end written in PHP. Facebook's HipHop compiler [1] then converts it to C++ and compiles it with g++, thus providing a high-performance templating and Web-logic execution layer.
  • Because of the limitations of relying entirely on static compilation, Facebook has started work on a HipHop Interpreter [2] as well as a HipHop Virtual Machine, which translates PHP code to HipHop bytecode [3].
  • Business logic is exposed as services using Thrift [4]. These services are implemented in PHP, C++, or Java depending on their requirements (some other languages are probably used as well...).
  • Services implemented in Java don't use any usual enterprise application server; instead they use Facebook's own custom application server. At first this can look like reinventing the wheel, but since these services are exposed and consumed only (or mostly) via Thrift, the overhead of Tomcat, or even Jetty, was probably too high, with no significant added value for their needs.
  • Persistence is done using MySQL, Memcached [5], and Hadoop's HBase [6]. Memcached is used as a cache for MySQL as well as a general-purpose cache (a cache-aside sketch follows this list). Facebook engineers have acknowledged that their use of Cassandra is declining, as they prefer HBase for its simpler consistency model and its MapReduce support.
  • Offline processing is done using Hadoop and Hive.
  • Data such as logs, clicks, and feeds transit through Scribe [7] and are aggregated and stored in HDFS using Scribe-HDFS [8], thus allowing extended analysis using MapReduce (a toy version of this category-based logging appears after this list).
  • BigPipe [9] is their custom technology to accelerate page rendering using pipelining logic (sketched after this list).
  • Varnish Cache [10] is used for HTTP proxying. They've preferred it for its high performance and efficiency [11].
  • The storage of the billions of photos posted by users is handled by Haystack, an ad-hoc storage solution developed by Facebook that brings low-level optimizations and append-only writes [12] (see the sketch after this list).
  • Facebook Messages uses its own architecture, notably based on infrastructure sharding and dynamic cluster management. Business logic and persistence are encapsulated in so-called 'Cells'. Each Cell handles a subset of users; new Cells can be added as popularity grows [13] (a toy directory/Cell model follows this list). Persistence is achieved using HBase [14].
  • Facebook Messages' search engine is built with an inverted index stored in HBase [15] (sketched after this list).
  • The implementation details of Facebook's search engine are, as far as I know, not public.
  • The typeahead search uses custom storage and retrieval logic [16].
  • Chat is based on an epoll server developed in Erlang and accessed using Thrift [17] (an event-loop sketch appears after this list).
  • They've built an automated system that responds to monitoring alerts by launching the appropriate repair workflow, or escalating to humans if the outage can't be resolved automatically [18] (see the last sketch after this list).
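
To make the Memcached bullet above concrete, here is a minimal Python sketch of the cache-aside pattern, using the pymemcache client; the key scheme and the query_mysql helper are illustrative assumptions, not Facebook's actual code:

```python
# Cache-aside lookup: try Memcached first, fall back to MySQL on a miss.
# query_mysql is a hypothetical stand-in for a real database query.
import json

from pymemcache.client.base import Client  # pip install pymemcache

cache = Client(("127.0.0.1", 11211))

def query_mysql(user_id):
    # Placeholder: in reality this would run a SELECT against MySQL.
    return {"id": user_id, "name": "example"}

def get_user(user_id, ttl=300):
    key = f"user:{user_id}"           # illustrative key scheme
    cached = cache.get(key)
    if cached is not None:
        return json.loads(cached)     # cache hit
    user = query_mysql(user_id)       # cache miss: go to the database
    cache.set(key, json.dumps(user), expire=ttl)
    return user
```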
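
The Scribe bullet describes category-tagged messages being buffered and aggregated toward HDFS; this toy relay sketches that model, with local files standing in for the Scribe-HDFS sink (the class name and threshold are invented):

```python
# Toy version of Scribe's model: each message is tagged with a category,
# buffered, and flushed in batches to per-category storage (local files
# here; HDFS in the real Scribe-HDFS setup).
from collections import defaultdict

class ScribeLikeRelay:
    def __init__(self, flush_threshold=1000):
        self.buffers = defaultdict(list)
        self.flush_threshold = flush_threshold

    def log(self, category, message):
        self.buffers[category].append(message)
        if len(self.buffers[category]) >= self.flush_threshold:
            self.flush(category)

    def flush(self, category):
        # One append-only file per category stands in for an HDFS sink.
        with open(f"{category}.log", "a") as sink:
            sink.write("\n".join(self.buffers[category]) + "\n")
        self.buffers[category].clear()

relay = ScribeLikeRelay(flush_threshold=2)
relay.log("clicks", "user=1 item=42")
relay.log("clicks", "user=2 item=7")  # second message triggers a flush
```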
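
BigPipe's pipelining idea is roughly: flush the page skeleton immediately, then stream each "pagelet" as soon as its data is ready. A hand-wavy sketch, where injectPagelet stands in for client-side JavaScript that the real implementation would provide:

```python
# BigPipe in miniature: flush the page skeleton immediately, then stream
# each "pagelet" as soon as its data is ready instead of blocking until
# the whole page is rendered. injectPagelet stands in for client-side JS.
import json
import time

def render_feed():
    time.sleep(0.1)  # simulate a slow backend call
    return "<ul><li>story 1</li></ul>"

def render_chat():
    return "<ul><li>friend online</li></ul>"

def render_page():
    yield "<html><body><div id='feed'></div><div id='chat'></div>"
    for pagelet_id, render in [("feed", render_feed), ("chat", render_chat)]:
        payload = json.dumps({"id": pagelet_id, "html": render()})
        yield f"<script>injectPagelet({payload})</script>"
    yield "</body></html>"

for chunk in render_page():
    print(chunk)  # a real server would flush each chunk to the client
```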
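
Haystack's low-level optimization is essentially an append-only volume file plus an in-memory index mapping each photo to an (offset, size) pair, so a read costs one seek. A minimal sketch under those assumptions (the real on-disk "needle" format is simplified away):

```python
# Haystack-style store: photos ("needles") are appended to one large
# volume file, and an in-memory index maps photo id -> (offset, size),
# so each read costs a single seek. The on-disk format is simplified.
class HaystackLikeStore:
    def __init__(self, path):
        self.path = path
        self.index = {}              # photo_id -> (offset, size)
        open(path, "ab").close()     # make sure the volume file exists

    def put(self, photo_id, data: bytes):
        with open(self.path, "ab") as volume:  # append-only writes
            offset = volume.tell()
            volume.write(data)
        self.index[photo_id] = (offset, len(data))

    def get(self, photo_id) -> bytes:
        offset, size = self.index[photo_id]
        with open(self.path, "rb") as volume:
            volume.seek(offset)      # one seek, one read
            return volume.read(size)

store = HaystackLikeStore("volume.dat")
store.put("photo1", b"\x89PNG...")
assert store.get("photo1") == b"\x89PNG..."
```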
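
The Cell model from the Messages bullet can be sketched as a directory service that pins each user to a Cell and sends new users to freshly added Cells; everything here (class names, assignment policy) is an illustrative guess, not Facebook's actual logic:

```python
# Toy model of Messages-style Cells: a directory service pins each user
# to a cell, each cell owns its own app servers and HBase cluster, and
# capacity grows by adding cells. The assignment policy is invented.
class Cell:
    def __init__(self, name):
        self.name = name

    def handle(self, user_id, message):
        # In production this would hit the cell's own servers and HBase.
        return f"cell {self.name} stored message for {user_id}: {message}"

class Directory:
    def __init__(self):
        self.cells = []
        self.assignment = {}  # user_id -> Cell

    def add_cell(self, name):
        self.cells.append(Cell(name))

    def cell_for(self, user_id):
        if user_id not in self.assignment:
            # Invented policy: new users land on the newest cell.
            self.assignment[user_id] = self.cells[-1]
        return self.assignment[user_id]

directory = Directory()
directory.add_cell("cell-1")
print(directory.cell_for("alice").handle("alice", "hi"))
directory.add_cell("cell-2")            # added as popularity grows
print(directory.cell_for("bob").name)   # new users go to cell-2
```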
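
An inverted index like the one Messages search keeps in HBase maps, per user, each term to the set of message ids containing it. A dict-of-dicts sketch with naive tokenization (the real row/column layout in HBase is richer):

```python
# Per-user inverted index: for each user, each term maps to the set of
# message ids containing it. The nested dicts stand in for HBase rows
# and columns; tokenization here is deliberately naive.
from collections import defaultdict

index = defaultdict(lambda: defaultdict(set))  # user -> term -> ids

def index_message(user_id, message_id, text):
    for term in text.lower().split():
        index[user_id][term].add(message_id)

def search(user_id, term):
    return index[user_id][term.lower()]

index_message("alice", "m1", "lunch tomorrow")
index_message("alice", "m2", "tomorrow works")
print(search("alice", "tomorrow"))  # {'m1', 'm2'}
```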
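
The chat bullet describes an epoll-based server; Python's standard selectors module picks epoll on Linux, so this minimal event loop shows the same pattern of one thread multiplexing many mostly-idle connections (it just echoes bytes rather than routing chat messages):

```python
# One thread multiplexing many mostly-idle connections. Python's
# selectors module uses epoll on Linux; this loop just echoes data
# back instead of routing chat messages.
import selectors
import socket

sel = selectors.DefaultSelector()

def accept(server):
    conn, _ = server.accept()
    conn.setblocking(False)
    sel.register(conn, selectors.EVENT_READ, read)

def read(conn):
    data = conn.recv(4096)
    if data:
        conn.sendall(data)   # echo; a chat server would route it instead
    else:
        sel.unregister(conn)
        conn.close()

server = socket.socket()
server.bind(("127.0.0.1", 9000))
server.listen()
server.setblocking(False)
sel.register(server, selectors.EVENT_READ, accept)

while True:  # event loop: wait for readiness, then dispatch callbacks
    for key, _ in sel.select():
        key.data(key.fileobj)
```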
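
Finally, the alert-remediation bullet boils down to: map alert types to repair workflows, and escalate anything unknown or unfixed. A sketch with invented alert names and workflows:

```python
# Alert-driven remediation: known alert types trigger a repair workflow;
# anything unrecognized, or a repair that fails, escalates to a human.
# Alert names and workflows are invented for illustration.
def restart_web_server(alert):
    print(f"restarting web tier on {alert['host']}")
    return True  # pretend the repair succeeded

WORKFLOWS = {"web_server_down": restart_web_server}

def page_oncall(alert):
    print(f"escalating to on-call engineer: {alert}")

def handle_alert(alert):
    workflow = WORKFLOWS.get(alert["type"])
    if workflow is None or not workflow(alert):
        page_oncall(alert)  # no playbook, or the repair failed

handle_alert({"type": "web_server_down", "host": "web042"})
handle_alert({"type": "switch_flapping", "host": "rack17"})  # escalates
```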
Some information and numbers are known about the resources provisioned for each of these components:
  • Facebook is estimated to own more than 60,000 servers [18]. Their recent datacenter in Prineville, Oregon is based entirely on self-designed hardware [19] that was recently unveiled as the Open Compute Project [20].
  • 300 TB of data is stored in Memcached processes [21]
  • Their Hadoop and Hive cluster is made of 3,000 servers, each with 8 cores, 32 GB of RAM, and 12 TB of disk, for a total of 24,000 cores, 96 TB of RAM, and 36 PB of disk [22] (a quick arithmetic check follows this list).
  • 100 billion hits per day, 50 billion photos, 3 trillion objects cached, and 130 TB of logs per day as of July 2010 [22].
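
A quick arithmetic check of the cluster totals quoted above:

```python
# Checking the per-server specs against the quoted cluster totals [22].
servers = 3000
print(servers * 8)           # 24000 cores
print(servers * 32 / 1000)   # 96.0 TB of RAM (32 GB each)
print(servers * 12 / 1000)   # 36.0 PB of disk (12 TB each)
```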

[1] HipHop for PHP: http://developers.facebook.com/b...
[2] Making HPHPi Faster: http://www.facebook.com/note.php...
[3] The HipHop Virtual Machine: http://www.facebook.com/note.php...
[4] Thrift: http://thrift.apache.org/
[5] Memcached: http://memcached.org/
[6] HBase: http://hbase.apache.org/
[7] Scribe: https://github.com/facebook/scribe
[8] Scribe-HDFS: http://hadoopblog.blogspot.com/2...
[9] BigPipe: http://www.facebook.com/notes/fa...
[10] Varnish Cache: http://www.varnish-cache.org/
[11] Facebook goes for Varnish: http://www.varnish-software.com/...
[12] Needle in a haystack: efficient storage of billions of photos: http://www.facebook.com/note.php...
[13] Scaling the Messages Application Back End: http://www.facebook.com/note.php...
[14] The Underlying Technology of Messages: https://www.facebook.com/note.ph...
[15] The Underlying Technology of Messages Tech Talk: http://www.facebook.com/video/vi...
[16] Facebook's typeahead search architecture: http://www.facebook.com/video/vi...
[17] Facebook Chat: http://www.facebook.com/note.php...
[18] Who has the most Web Servers?: http://www.datacenterknowledge.c...
[19] Building Efficient Data Centers with the Open Compute Project: http://www.facebook.com/note.php...
[20] Open Compute Project: http://opencompute.org/
[21] Facebook's architecture presentation at Devoxx 2010: http://www.devoxx.com
[22] Scaling Facebook to 500 million users and beyond: http://www.facebook.com/note.php...

 


Original article: quora.com
Translation: cnbeta
Reposted from: https://my.oschina.net/u/136923/blog/69231
