Node roles:
ceph-node1: mon.0, mds.0
ceph-node4: osd client (osd.0, osd.1)
ceph-node5: osd client (osd.2, osd.3)
1. Install the required support packages
Package installation:
yum install automake autoconf boost-devel
yum install fuse-devel libtool libuuid-devel
yum install libblkid-devel keyutils-libs-devel
yum install cryptopp-devel fcgi-devel libcurl-devel
Install cryptopp-devel:
cryptopp: rpm -ivh http://apt.sw.be/redhat/el6/en/x86_64/rpmforge/RPMS/cryptopp-5.5.2-1.el6.rf.x86_64.rpm
cryptopp-devel: rpm -ivh http://apt.sw.be/redhat/el6/en/x86_64/rpmforge/RPMS/cryptopp-devel-5.5.2-1.el6.rf.x86_64.rpm
Install fcgi-devel:
epel-release installs a yum repository that carries ceph-related packages: rpm -ivh http://dl.fedoraproject.org/pub/epel/6/x86_64/epel-release-6-8.noarch.rpm
In the epel.repo file, uncomment the baseurl lines and comment out the mirrorlist lines.
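The epel.repo edit can be scripted; a sketch, assuming the stock CentOS 6 path /etc/yum.repos.d/epel.repo:

```shell
# Sketch: switch epel.repo from mirrorlist to baseurl.
# Only lines beginning with "#baseurl" or "mirrorlist" are touched;
# a .bak backup of the original file is kept.
REPO="${REPO:-/etc/yum.repos.d/epel.repo}"
if [ -f "$REPO" ]; then
    sed -i.bak -e 's/^#baseurl/baseurl/' -e 's/^mirrorlist/#mirrorlist/' "$REPO"
else
    echo "$REPO not found; is epel-release installed?" >&2
fi
```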
yum install expat-devel gperftools-devel libedit-devel
yum install libatomic_ops-devel snappy-devel leveldb-devel
yum install libaio-devel xfsprogs-devel
yum install libudev-devel btrfs-progs
2. Compile the source package
The source tarball is ceph-0.80.1.tar.gz.
Build and install:
cd /root/ceph-0.80.1
./autogen.sh
./configure --prefix=/usr/local/ceph
make && make install
3. Ceph installation and configuration
① Initial configuration
yum install python-pip
pip install argparse
; the two packages above are dependencies for creating OSDs and can be installed straight from yum
cp /usr/local/ceph/bin/ceph /usr/local/bin/   ; the build does not install the ceph executable into /usr/local/bin, so copy it by hand
/usr/local/bin/ceph osd create   ; create an OSD
Running the command above may produce the following error output:
[root@ceph1 ~]# /usr/local/bin/ceph osd create
Traceback (most recent call last):
  File "/usr/local/bin/ceph", line 56, in <module>
    import rados
ImportError: No module named rados
To fix this, run:
cp -vf /usr/local/ceph/lib/python2.6/site-packages/* /usr/lib64/python2.6
echo /usr/local/ceph/lib > /etc/ld.so.conf.d/ceph.conf
echo /usr/local/lib > /etc/ld.so.conf.d/libtcmalloc.conf
ldconfig
② On every node, create the directory: mkdir /etc/ceph
③ Add the environment variable: export PATH=$PATH:/usr/local/ceph/bin:/usr/local/ceph/sbin
④ Install the ceph init service: cp init-ceph /etc/init.d/ceph   // this file must be copied, and must not be modified in any way
########################################################################################
The operations above were all performed on the ceph1 node; the steps below involve the other nodes, in particular configuring ceph-node4 and creating its partitions. To keep installation simple, clone the system once the steps above are complete.
########################################################################################
4. Passwordless login
Set up passwordless SSH login between all nodes; when a new node is added later, do the same for it. Note that every node must be able to log in to every other node without a password.
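The key distribution can be sketched as follows (the hostnames are this document's nodes; run the script on each node in turn, adjusting NODES for your cluster):

```shell
#!/bin/sh
# Sketch: passwordless SSH from this node to the others.
NODES="ceph-node1 ceph-node4 ceph-node5"
KEY="${KEY:-$HOME/.ssh/id_rsa}"

# Generate an RSA key pair with an empty passphrase, if none exists yet.
mkdir -p "$(dirname "$KEY")"
[ -f "$KEY" ] || ssh-keygen -q -t rsa -N "" -f "$KEY"

# Append the public key to root's authorized_keys on every node
# (you are prompted for each node's password once).
for node in $NODES; do
    ssh-copy-id -i "$KEY.pub" "root@$node" || echo "could not reach $node" >&2
done
```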
5. Create partitions on ceph-node4 and ceph-node5
① The partition layout is as follows:
[osd.0]
host = ceph-node4
devs = /dev/sdb1
[osd.1]
host = ceph-node4
devs = /dev/sdc1
[osd.2]
host = ceph-node5
devs = /dev/sdb1
[osd.3]
host = ceph-node5
devs = /dev/sdc1
Add two 50 GB disks to each OSD node and mount them at the corresponding mount points.
② Create an xfs filesystem on each partition.
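The formatting and mounting steps for one OSD node can be sketched as below (device names and mount points follow this document's layout; mkfs.xfs is destructive, so the device is checked first):

```shell
#!/bin/sh
# Sketch for ceph-node4 (osd.0 on /dev/sdb1, osd.1 on /dev/sdc1).
# On ceph-node5 use /mnt/osd2 and /mnt/osd3 instead, matching ceph.conf.
for pair in "/dev/sdb1:/mnt/osd0" "/dev/sdc1:/mnt/osd1"; do
    dev="${pair%%:*}"
    mnt="${pair##*:}"
    mkdir -p "$mnt" 2>/dev/null || echo "cannot create $mnt (need root?)" >&2
    if [ -b "$dev" ]; then
        # Format as xfs (matches "osd mkfs type = xfs") and mount it.
        mkfs.xfs -f "$dev" && mount -o noatime "$dev" "$mnt"
    else
        echo "skipping $dev: not a block device on this machine" >&2
    fi
done
```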
6. Script notes
On ceph-node1:
cd /root/ceph-0.80.1/src
cp sample.ceph.conf /etc/ceph/ceph.conf   ; optional: you can also create the file by hand and paste in the configuration below
cp sample.fetch_config /etc/ceph/fetch_config   ; fetch_config is not really needed and can be skipped
7. The ceph.conf configuration file:
Create the directories referenced by the configuration:
on ceph-node1: /var/lib/ceph/mon/mon.0
on ceph-node4 and ceph-node5: the mount points matching the osd data directories in the configuration
journal directories only if the journal lives on a different disk from the OSD data; otherwise do not create them, or the OSDs may fail to start because the mount point is not empty.
The configuration file:
[global]
public network = 10.10.2.0/24
cluster network = 10.10.2.0/24
;fsid = a3fa7253-63c2-4e98-a13c-9f9376157561
pid file = /var/run/ceph/$name.pid
max open files = 131072
auth cluster required = cephx
auth service required = cephx
auth client required = cephx
keyring = /etc/ceph/$cluster.$name.keyring
cephx require signatures = true
osd pool default size = 2
osd pool default min size = 1
osd pool default crush rule = 0
osd crush chooseleaf type = 1
osd pool default pg num = 192
osd pool default pgp num = 192
osd auto discovery = false
journal collocation = false
raw multi journal = true
[mon]
mon data = /var/lib/ceph/mon/$name
mon clock drift allowed = .15
keyring = /etc/ceph/keyring.$name
[mon.0]
host = ceph-node1
mon addr = 10.10.2.171:6789
[mds]
keyring = /etc/ceph/keyring.$name
[mds.0]
host = ceph-node1
[osd]
osd data = /mnt/osd$id
osd recovery max active = 5
osd mkfs type = xfs
;osd mount options btrfs = noatime,nodiratime
osd journal = /mnt/osd$id/journal
osd journal size = 1000
keyring = /etc/ceph/keyring.$name
[osd.0]
host = ceph-node4
devs = /dev/sdb1
[osd.1]
host = ceph-node4
devs = /dev/sdc1
[osd.2]
host = ceph-node5
devs = /dev/sdb1
[osd.3]
host = ceph-node5
devs = /dev/sdc1
To push the file to the other nodes, the following sync script can be used:
#!/bin/bash
cp /etc/ceph/ceph.conf /usr/local/ceph/etc/ceph/
scp /etc/ceph/ceph.conf ceph-node4:/usr/local/ceph/etc/ceph/
scp /etc/ceph/ceph.conf ceph-node4:/etc/ceph/
#scp /etc/ceph/ceph.conf ceph-node2:/usr/local/ceph/etc/ceph/
#scp /etc/ceph/ceph.conf ceph-node2:/etc/ceph/
scp /etc/ceph/ceph.conf ceph-node5:/usr/local/ceph/etc/ceph/
scp /etc/ceph/ceph.conf ceph-node5:/etc/ceph/
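As a side note, the pg num value of 192 used above is in the range suggested by the common rule of thumb, (OSD count × 100) / replica count, rounded up to a power of two; with 4 OSDs and pool size 2 that rule gives 256. A quick sketch of the calculation:

```shell
# Rule-of-thumb placement-group count (this document chooses 192,
# which is the same order of magnitude and fine for a small test cluster):
# osds * 100 / size, rounded up to the next power of two.
osds=4
size=2
target=$(( osds * 100 / size ))   # 200
pg=1
while [ "$pg" -lt "$target" ]; do
    pg=$(( pg * 2 ))
done
echo "$pg"   # 256
```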
8. Start the cluster
Before running the scripts below, run "yum install redhat-lsb"; otherwise the init script may fail with "/etc/init.d/ceph: line 15: /lib/lsb/init-functions: No such file or directory" (see "init-functions No such file or directory.txt" for details).
For a first start, or to restart after a failed start, the formatting (mkcephfs) step in the script below must be run:
Start script:
#!/bin/bash
sh reset_settings.sh
# reset_settings.sh clears out the data from previous runs and syncs the new configuration file
mkcephfs -a -c /etc/ceph/ceph.conf --mkfs
/etc/init.d/ceph start
ceph osd create
ceph osd create
ceph osd create
ceph osd create
ssh root@ceph-node4 "/etc/init.d/ceph start osd"
ssh root@ceph-node5 "/etc/init.d/ceph start osd"
Stop script:
#!/bin/bash
/etc/init.d/ceph -a stop
Note: run "ceph osd create" once per OSD you want; ids are assigned counting from 0.
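The four "ceph osd create" calls in the start script can equally be written as a loop; each successful call registers one OSD and prints the newly allocated id, starting at 0 (guarded here so it degrades gracefully on a machine without a running cluster):

```shell
#!/bin/sh
# One `ceph osd create` per OSD defined in ceph.conf (osd.0 .. osd.3).
count=4
i=0
while [ "$i" -lt "$count" ]; do
    ceph osd create || echo "ceph cluster not reachable from this host" >&2
    i=$(( i + 1 ))
done
```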
9. Mount on the client
Method 1:
Upgrade the kernel to 3.10 or later, then mount:
mount -t ceph 10.10.2.171:/ /mnt/ceph
If cephx authentication is enabled, the mount needs the secret:
mount -t ceph 10.10.2.171:/ /mnt/ceph -o name=admin,secret=AQCGdJ5TYLNrCBAAkoMJgdYHW66ITpnWyItccw==
or equivalently:
mount -t ceph 10.10.2.171:/ /mnt/ceph -o name=admin,secret=`ceph-authtool /etc/ceph/keyring.client.admin -p`
The secret is stored in keyring.client.admin.
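A related sketch: mount.ceph also accepts a secretfile= option, which avoids exposing the key on the command line (where it is visible in ps). Paths and the monitor address below follow this document:

```shell
#!/bin/sh
# Sketch: keep the admin secret in a root-only file and mount with it.
SECRET_FILE="/tmp/admin.secret"
ceph-authtool /etc/ceph/keyring.client.admin -p > "$SECRET_FILE" \
    || echo "ceph-authtool or the keyring is missing on this host" >&2
chmod 600 "$SECRET_FILE"
mkdir -p /mnt/ceph 2>/dev/null
mount -t ceph 10.10.2.171:/ /mnt/ceph -o name=admin,secretfile="$SECRET_FILE" \
    || echo "mount needs the ceph kernel client (kernel >= 3.10 here)" >&2
```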
Method 2:
No kernel upgrade is needed.
scp ceph.client.admin.keyring ceph-node4:/etc/ceph/
scp ceph.client.admin.keyring ceph-node5:/etc/ceph/
Then run on the client:
ceph-fuse /mnt/ceph
This method is not recommended, since letting the client hold the ceph.conf file is insecure, but it is what is used for now.
Notes:
① Ceph source code download: http://ceph.com/download/
② Related RPM package downloads: http://ceph.com/rpm/el6/x86_64/
③ The article "Centos 6.5 安装 ceph" can be used as a reference, but avoid running its steps directly; it contains too many errors.
④ Log file "init-functions No such file or directory.txt"
⑤ Log file "坑死人的fsid.txt"
⑥ For versions after 0.80.2, format using the method described at http://blog.csdn.net/skdkjzz/article/details/41445847