Ceph Deployment in Practice

This deployment follows the official Ceph documentation. Since Ceph needs to be integrated with OpenStack, it is deployed on three of the OpenStack nodes:


1. Test environment:

10.0.0.102  controller  (admin and mon node)
10.0.0.103  compute1    (osd0)
10.0.0.106  compute3    (osd1)
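
ceph-deploy and the connectivity checks later in this guide address the nodes by their short hostnames. If DNS does not already resolve these names, an /etc/hosts entry like the following on every node (a sketch built from the addresses above) is enough:

10.0.0.102 controller
10.0.0.103 compute1
10.0.0.106 compute3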


2. Install the Ceph tools on the admin node

2.1 Add the release key

wget -q -O- 'https://ceph.com/git/?p=ceph.git;a=blob_plain;f=keys/release.asc' | sudo apt-key add -
2.2 Add the Ceph packages to your repository

echo deb http://ceph.com/debian-{ceph-stable-release}/ $(lsb_release -sc) main | sudo tee /etc/apt/sources.list.d/ceph.list
Note that {ceph-stable-release} must be replaced with the name of a stable release (e.g., cuttlefish, dumpling, emperor, firefly, etc.); here I use firefly, as in the concrete line below.
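
With firefly substituted, the repository line used here is:

echo deb http://ceph.com/debian-firefly/ $(lsb_release -sc) main | sudo tee /etc/apt/sources.list.d/ceph.list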

2.3 Install ceph-deploy

sudo apt-get update && sudo apt-get install ceph-deploy

3. Install NTP and an SSH server on every node:

sudo apt-get install ntp
sudo apt-get install openssh-server
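
Clock skew between the monitor and the other daemons causes warnings later, so it is worth confirming that NTP is actually synchronizing on each node; ntpq ships with the ntp package installed above:

ntpq -p
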
4. Create the ceph user

4.1 Create a ceph user on each node

ssh {user}@{node}	(log in to each of controller, compute1, and compute3 as an existing account with sudo rights; the ceph user does not exist yet)
sudo useradd -d /home/ceph -m ceph
sudo passwd ceph
4.2 Give the ceph user passwordless sudo (root privileges) on each node

echo "ceph ALL = (root) NOPASSWD:ALL" | sudo tee /etc/sudoers.d/ceph
sudo chmod 0440 /etc/sudoers.d/ceph
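
To confirm that the sudoers entry took effect (a quick check, not part of the original steps), switch to the ceph user and run a harmless sudo command; it should succeed without prompting for a password:

su - ceph
sudo whoami	(should print root, with no password prompt)
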
4.3 Generate an SSH key pair on the admin node so it can log in to every node without a password. Be sure to generate the key as the ceph user, not as root, otherwise it can cause a number of other problems later.

ssh-keygen
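
When prompted, accept the default key location and leave the passphrase empty so that ceph-deploy can later reach the nodes non-interactively. The same thing can be done in one shot with standard OpenSSH options (an equivalent form, not from the original write-up):

ssh-keygen -t rsa -N "" -f ~/.ssh/id_rsa
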
4.4 Copy the public key to every node, including the admin node

ssh-copy-id ceph@controller
ssh-copy-id ceph@compute1
ssh-copy-id ceph@compute3
4.5 Edit ~/.ssh/config so that it reads:

Host controller
        Hostname controller
        User ceph
Host compute1
        Hostname compute1
        User ceph
Host compute3
        Hostname compute3
        User ceph
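
At this point passwordless SSH from the admin node should also work; each of the following should print the remote hostname without asking for a password (a quick sanity check added here):

ssh compute1 hostname
ssh compute3 hostname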

4.6 Ping each node by its short name to check that the names resolve:

ceph@controller:~/.ssh$ ping controller
PING controller (10.0.0.102) 56(84) bytes of data.
64 bytes from controller (10.0.0.102): icmp_seq=1 ttl=64 time=0.096 ms
64 bytes from controller (10.0.0.102): icmp_seq=2 ttl=64 time=0.068 ms
64 bytes from controller (10.0.0.102): icmp_seq=3 ttl=64 time=0.068 ms
^C
--- controller ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 1998ms
rtt min/avg/max/mdev = 0.068/0.077/0.096/0.015 ms
ceph@controller:~/.ssh$ ping compute1
PING compute1 (10.0.0.103) 56(84) bytes of data.
64 bytes from compute1 (10.0.0.103): icmp_seq=1 ttl=64 time=1.30 ms
64 bytes from compute1 (10.0.0.103): icmp_seq=2 ttl=64 time=0.656 ms
64 bytes from compute1 (10.0.0.103): icmp_seq=3 ttl=64 time=0.786 ms
^C
--- compute1 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2001ms
rtt min/avg/max/mdev = 0.656/0.916/1.308/0.283 ms
ceph@controller:~/.ssh$ ping compute3
PING compute3 (10.0.0.106) 56(84) bytes of data.
64 bytes from compute3 (10.0.0.106): icmp_seq=1 ttl=64 time=0.995 ms
64 bytes from compute3 (10.0.0.106): icmp_seq=2 ttl=64 time=0.713 ms
64 bytes from compute3 (10.0.0.106): icmp_seq=3 ttl=64 time=0.769 ms


5. Create a cluster from the controller

First, switch to the ceph user:

su - ceph

5.1 The deployment generates quite a few files, so create a my-cluster directory under ~ and run everything from inside it:
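
Concretely, on the controller as the ceph user:

mkdir my-cluster
cd my-cluster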

ceph@controller:~/my-cluster$

5.2 Create a cluster

ceph-deploy new controller
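
Since this setup has only two OSDs, the default replica count of 3 can leave the cluster stuck in a warning state. A common optional adjustment (an addition to the original steps) is to edit the ceph.conf that ceph-deploy new just generated in my-cluster and add, under [global]:

osd pool default size = 2
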
5.3 Install Ceph

ceph-deploy install controller compute1 compute3
5.4 Add the initial monitor(s) and gather the keys

ceph-deploy mon create-initial
If you are using an older Ceph release, this has to be done in two separate steps:
ceph-deploy mon create controller

ceph-deploy gatherkeys controller
5.5 Once this finishes, check that the my-cluster directory contains files such as the following:


ceph.bootstrap-mds.keyring  
ceph.bootstrap-osd.keyring  
ceph.client.admin.keyring   


6. OSD installation

For the cluster to run properly, the Ceph developers recommend using XFS (the Silicon Graphics journaling file system) or the B-tree file system (Btrfs) as the file system backing object storage.

6.1 Format the added disk:

mkfs.xfs /dev/vdb

6.2 Mount the disk on the chosen directory:

Here I use:

compute1:/var/local/osd0 and compute3:/var/local/osd1
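
The mount points have to exist before mounting; assuming they have not been created yet, on each node:

compute1:
sudo mkdir -p /var/local/osd0

compute3:
sudo mkdir -p /var/local/osd1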

That is, just use the mount command:

compute1:

mount /dev/vdb /var/local/osd0

compute3:

mount /dev/vdb /var/local/osd1
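
Note that these mounts do not survive a reboot on their own; if persistence matters, an /etc/fstab line along these lines on compute1 (and the analogous /var/local/osd1 path on compute3) keeps the disk mounted. This is an addition to the original steps:

/dev/vdb  /var/local/osd0  xfs  defaults  0  0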

*** (Since the mount was done as root, I changed the ownership of the mount directories to the ceph user, i.e. chown -R ceph:ceph /var/local/osd0 on compute1 and chown -R ceph:ceph /var/local/osd1 on compute3; otherwise the later steps fail with permission errors.)

6.3 Prepare the OSDs from the admin node

ceph-deploy osd prepare compute1:/var/local/osd0 compute3:/var/local/osd1
6.4 Activate the OSDs:

ceph-deploy osd activate compute1:/var/local/osd0 compute3:/var/local/osd1
6.5 Copy the configuration and keyring files to every node:

ceph-deploy admin controller compute1 compute3
6.6 Make sure ceph.client.admin.keyring is readable on every node; without read permission the ceph commands cannot talk to the monitor:

sudo chmod +r /etc/ceph/ceph.client.admin.keyring
6.7 Check the cluster's health:

ceph health
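
If everything came up correctly, ceph health typically reports HEALTH_OK; a HEALTH_WARN state can appear temporarily while placement groups are still being created. For a fuller view of the monitor, OSDs, and placement groups:

ceph -s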















