File Storage Solution Comparison

Requirements

Store a massive number of files (images, documents, etc.) and share them across systems.


  • Data safety
    Data must be stored redundantly so that no single failure can cause data loss.
  • Linear scalability
    As the data grows to TB or even PB scale, the storage solution must be able to scale out linearly.
  • Highly available storage
    If one storage server goes down, the storage service as a whole must remain available.
  • Performance
    Performance must meet the application's requirements.

Open-Source Candidates

Ceph

Ceph is an open-source distributed storage system that provides object, block, and file storage in one platform.
Ceph support was merged into the Linux kernel in version 2.6.34, and Red Hat ships a commercial product, Red Hat Ceph Storage, based on it.
(Figure: the Ceph storage stack, from http://docs.ceph.org.cn/_images/stack.png)

Ceph architecture
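Since Ceph exposes its object storage through the RADOS Gateway (RGW) with an S3-compatible API, any S3 client can be used against it. Below is a minimal sketch in Python with boto3; the endpoint URL, credentials, and bucket name are placeholders, not values from this article.

```python
import boto3

# Hypothetical RGW endpoint and credentials -- replace with real values.
s3 = boto3.client(
    "s3",
    endpoint_url="http://rgw.example.local:7480",
    aws_access_key_id="ACCESS_KEY",
    aws_secret_access_key="SECRET_KEY",
)

s3.create_bucket(Bucket="shared-files")                           # create a bucket
s3.upload_file("report.pdf", "shared-files", "docs/report.pdf")   # store a document
s3.download_file("shared-files", "docs/report.pdf", "copy.pdf")   # read it back from another system
```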

OpenStack Swift

Swift is OpenStack's storage project. It provides an elastic, scalable, and highly available distributed object storage service, well suited to storing large volumes of unstructured data.
(Figure: OpenStack Swift architecture, from http://www.ibm.com/developerworks/cn/cloud/library/1310_zhanghua_openstackswift/image005.png)
As a stable and highly available open-source object store, OpenStack Swift has been deployed commercially by many companies; for example, Sina's App Engine runs an object storage service based on Swift, as does Korea Telecom's Ucloud Storage service.

OpenStack Swift principles, architecture, and API introduction
OpenStack Swift features
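Swift is accessed over a RESTful HTTP API, which the python-swiftclient library wraps. The sketch below assumes TempAuth-style v1.0 authentication; the auth URL, credentials, and container name are placeholders.

```python
from swiftclient.client import Connection

# Hypothetical auth endpoint and credentials -- replace with real values.
conn = Connection(
    authurl="http://swift.example.local:8080/auth/v1.0",
    user="tenant:user",
    key="SECRET_KEY",
)

conn.put_container("shared-files")                                    # create a container
with open("logo.png", "rb") as f:
    conn.put_object("shared-files", "images/logo.png", contents=f)    # upload an object
headers, body = conn.get_object("shared-files", "images/logo.png")    # download it again
```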

HBase/HDFS

HDFS (Hadoop Distributed File System) is a distributed file system written in Java. It scales very well, supporting more than a billion files, hundreds of PB of data, and clusters of thousands of nodes.
HDFS is designed for **batch computation over massive data sets**, not for interactive use by end users. (A small WebHDFS access sketch follows the excerpt below.)
Drawbacks

  • It had a single point of failure until the recent versions of HDFS
  • It isn’t POSIX compliant
  • It stores at least 3 copies of data
  • It has a centralized name server resulting in scalability challenges
Assumptions and Goals (excerpted from the Apache HDFS Architecture Guide)
  • Hardware Failure
    Hardware failure is the norm rather than the exception. An HDFS instance may consist of hundreds or thousands of server machines, each storing part of the file system’s data. The fact that there are a huge number of components and that each component has a non-trivial probability of failure means that some component of HDFS is always non-functional. Therefore, detection of faults and quick, automatic recovery from them is a core architectural goal of HDFS.

  • Streaming Data Access
    Applications that run on HDFS need streaming access to their data sets. They are not general purpose applications that typically run on general purpose file systems. HDFS is designed more for batch processing rather than interactive use by users. The emphasis is on high throughput of data access rather than low latency of data access. POSIX imposes many hard requirements that are not needed for applications that are targeted for HDFS. POSIX semantics in a few key areas has been traded to increase data throughput rates.

  • Large Data Sets
    Applications that run on HDFS have large data sets. A typical file in HDFS is gigabytes to terabytes in size. Thus, HDFS is tuned to support large files. It should provide high aggregate data bandwidth and scale to hundreds of nodes in a single cluster. It should support tens of millions of files in a single instance.

  • Simple Coherency Model
    HDFS applications need a write-once-read-many access model for files. A file once created, written, and closed need not be changed. This assumption simplifies data coherency issues and enables high throughput data access. A MapReduce application or a web crawler application fits perfectly with this model. There is a plan to support appending-writes to files in the future.

  • “Moving Computation is Cheaper than Moving Data”
    A computation requested by an application is much more efficient if it is executed near the data it operates on. This is especially true when the size of the data set is huge. This minimizes network congestion and increases the overall throughput of the system. The assumption is that it is often better to migrate the computation closer to where the data is located rather than moving the data to where the application is running. HDFS provides interfaces for applications to move themselves closer to where the data is located.

  • Portability Across Heterogeneous Hardware and Software Platforms
    HDFS has been designed to be easily portable from one platform to another. This facilitates widespread adoption of HDFS as a platform of choice for a large set of applications.
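To make the write-once-read-many and streaming-access points above concrete, here is a minimal sketch that talks to HDFS over WebHDFS using the third-party `hdfs` Python package (not part of Hadoop itself). The NameNode URL, user, and paths are placeholders.

```python
from hdfs import InsecureClient

# Hypothetical NameNode WebHDFS endpoint -- replace with a real one.
client = InsecureClient("http://namenode.example.local:9870", user="hadoop")

# Write a file once; HDFS files are not meant to be modified in place after closing.
client.write("/data/events/part-00000.log", data=b"event-1\nevent-2\n", overwrite=True)

# Stream the file back for batch processing.
with client.read("/data/events/part-00000.log") as reader:
    print(reader.read())
```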

GlusterFS

  • GlusterFS is an open-source distributed file system that supports PB-scale data and thousands of clients, with no metadata server; clients mount a volume as a regular POSIX file system (see the sketch below).
  • Red Hat bought Gluster for about $136 million in 2011 and released a commercial storage product based on GlusterFS.

GlusterFS architecture and maintenance
Installing the GlusterFS client
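Because a GlusterFS volume is mounted (typically via the FUSE client) as an ordinary POSIX file system, applications need no special SDK. The sketch below assumes a volume already mounted at /mnt/gluster; the mount point and file names are placeholders.

```python
from pathlib import Path

# Hypothetical mount point of an already-mounted GlusterFS volume.
share = Path("/mnt/gluster/shared-files")
share.mkdir(parents=True, exist_ok=True)

# One system writes through the mount; another system reads the same path.
(share / "notes.txt").write_text("hello from system A\n")
print((share / "notes.txt").read_text())
```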

FastDFS

FastDFS is a personal project by Yu Qing (余庆) of Alibaba. It is used at some internet startups, but it has no official website, is not very active, and has only two contributors.

TFS

TFS is a distributed storage system open-sourced by Taobao, designed to store huge numbers of small files. Its design resembles HDFS: it consists of two NameServers and multiple DataServers, and many small files are merged into one large file (a Block, 64 MB by default).
(Figure: TFS structure, from http://code.taobao.org/p/tfs/file/305/structure.png)

The most recent change on GitHub was four years ago, the most recent update on Alibaba's open-source site was three years ago, and documentation is scarce.

MinIO

MinIO is a distributed object storage system written in Go that provides an Amazon S3-compatible API. What sets it apart from other distributed storage systems is that it is simple, lightweight, and developer-friendly: its authors see storage as a development problem rather than an operations problem.

MinIO founder Anand Babu Periasamy
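A minimal sketch with the official MinIO Python SDK (`pip install minio`); the endpoint, credentials, and bucket name are placeholders. Because the API is S3-compatible, the boto3 example shown earlier for Ceph RGW would also work against MinIO.

```python
from minio import Minio

# Hypothetical endpoint and credentials -- replace with real values.
client = Minio(
    "minio.example.local:9000",
    access_key="ACCESS_KEY",
    secret_key="SECRET_KEY",
    secure=False,
)

if not client.bucket_exists("shared-files"):
    client.make_bucket("shared-files")

client.fput_object("shared-files", "docs/report.pdf", "report.pdf")  # upload a local file
client.fget_object("shared-files", "docs/report.pdf", "copy.pdf")    # download it elsewhere
```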

Comparison

| Feature | Ceph | MinIO | Swift | HBase/HDFS | GlusterFS | FastDFS |
| --- | --- | --- | --- | --- | --- | --- |
| Implementation language | C++ | Go | Python | Java | C | C |
| Data redundancy | Replicas, erasure coding | Erasure coding (Reed-Solomon) | Replicas | Replicas | Replicas | Replicas |
| Consistency | Strong | Strong | Eventual | Eventual | ? | ? |
| Dynamic scaling | Hash-based placement (CRUSH) | No dynamic node addition | Consistent hashing | ? | ? | ? |
| Performance | ? | ? | ? | ? | ? | ? |
| Central node | None for object storage; CephFS has a metadata service | None | None | Single NameNode | ? | ? |
| Storage types | Block, file, object | Object (chunked) | Object | File | ? | ? |
| Activity | High; the Chinese community is not very active | High; no Chinese community | | | | |
| Maturity | ? | ? | | | | |
| Operating systems | Linux 3.10.0+ | Linux, Windows | ? | Any OS with a JVM | ? | ? |
| Underlying file system | EXT4, XFS | EXT4, XFS | ? | ? | ? | ? |
| Clients | C, Python, S3 | Java, S3 | Java, RESTful | Java, RESTful | ? | ? |
| Resumable transfer | S3-compatible multipart upload, ranged download | S3-compatible multipart upload, ranged download | Not supported | Not supported | ? | ? |
| Learning cost | ? | ? | ? | | | |
| Outlook (author's rating) | 10 | 8 | 9 | 9 | 7 | 5 |
| License | LGPL v2.1 | Apache 2.0 | Apache 2.0 | ? | ? | ? |
| Management tools | ceph-admin, ceph-mgr, Zabbix plugin | Web console, `mc` command-line tool | ? | ? | ? | ? |

Ceph vs. MinIO

Based on this comparison, the file-storage short list comes down to Ceph and MinIO.

Ceph pros and cons

Pros
  1. Mature

    • Effectively backed by Red Hat; the creator of Ceph has joined Red Hat.
    • There is a so-called "Ceph China community", a private organization; it is not very active, its documentation lags behind upstream, and there is no sign of it being updated.
    • Judging from the Git committers, engineers from several Chinese companies contribute code, such as XSKY (星辰天合) and EasyStack; Tencent and Alibaba build cloud storage on Ceph but are not active in the open-source community, apart from one Alibaba engineer named liupan.
  2. Powerful

    • Supports clusters of thousands of nodes.
    • Supports adding nodes dynamically and rebalances data automatically. (TODO: how long rebalancing takes, and whether the cluster keeps serving without interruption while nodes are added.)
    • Highly configurable; can be tuned for different workloads.
Cons
  1. Steep learning curve; installation and operations are complex. (Or rather, this is not Ceph's shortcoming but our own.)

MinIO pros and cons

Pros
  1. Low learning cost; simple to install and operate; works out of the box.
  2. The MinIO forum is actively promoted at the moment, and every question gets an answer.
  3. Java and JavaScript client libraries are available.
Cons
  1. The community is not yet mature, and there is little reference material in the industry.
  2. No support for adding nodes dynamically; the MinIO founder's design philosophy is that dynamic node addition is too complex, and capacity expansion will be handled by other means, as he explains below (a rough migration sketch follows the quotes):

Dynamic addition and removal of nodes are essential when all the storage nodes are managed by the Minio server. Such a design is too complex and restrictive when it comes to cloud native applications. The old design is to give all the resources to the storage system and let it manage them efficiently between the tenants. Minio is different by design. It is designed to solve all the needs of a single tenant. Spinning up Minio per tenant is the job of the external orchestration layer. Any addition or removal means one has to rebalance the nodes. When Minio does it internally, it behaves like a black box. It also adds significant complexity to Minio. Minio is designed to be deployed once and forgotten. We don't even want users to be replacing failed drives and nodes. Erasure code has enough redundancy built in. By the time half the nodes or drives are gone, it is time to refresh all the hardware. If the user still requires rebalancing, one can always start a new Minio server on the same system on a different port and simply migrate the data over. This is essentially what Minio would do internally. Doing it externally means more control and visibility.

We are planning to integrate bucket-name-based routing inside the Minio server itself. This means you can have 16 servers handle a rack full of drives (say, a few petabytes). Minio will schedule buckets to free 16 drives and route all operations appropriately.
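The "start a new server and migrate the data over" expansion approach described above can be sketched with the MinIO Python SDK as follows. Both endpoints, the credentials, and the bucket name are hypothetical; this only illustrates the idea and is not an official migration tool (in practice the MinIO client's `mc mirror` command would be the usual choice).

```python
from minio import Minio

# Hypothetical old and new deployments -- replace endpoints and credentials.
old = Minio("old-minio.example.local:9000",
            access_key="OLD_KEY", secret_key="OLD_SECRET", secure=False)
new = Minio("new-minio.example.local:9000",
            access_key="NEW_KEY", secret_key="NEW_SECRET", secure=False)

bucket = "shared-files"
if not new.bucket_exists(bucket):
    new.make_bucket(bucket)

# Copy every object from the old deployment to the new one.
for obj in old.list_objects(bucket, recursive=True):
    data = old.get_object(bucket, obj.object_name)
    try:
        new.put_object(bucket, obj.object_name, data, obj.size)
    finally:
        data.close()
        data.release_conn()
```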

References

Storage architecture
Amazon S3 documentation


Ceph committers
Blog of liupan (Alibaba)
Alibaba Ceph slides
Ceph Day Beijing 2016
Ceph Day Beijing 2017
A chat about Ceph's development and current state in China
BlueStore, a new Ceph storage backend
Ceph: placement groups
Introduction to Ceph
XSKY developers on the official Ceph site


MinIO official documentation
MinIO community
Interview with the MinIO author
