Simple Deployment Steps for Ceph

If any part of this article is inaccurate, or if you run into other problems, feedback is welcome. Thanks!


1. Cluster plan:

One admin node, one monitor, and two OSDs

192.168.126.130 admin

192.168.126.131 node1

192.168.126.132 node2

192.168.126.133 node3

The admin node is used to install the cluster, i.e. we log in to the other machines from this machine to configure them.

Install four virtual machines in VMware. We configure the monitor first and then clone the two OSD nodes.

Each OSD node needs two virtual disks: one for the operating system and one for the OSD.
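Once a node is installed, it can help to confirm that the second virtual disk actually shows up (an optional check; the device name /dev/sdb is an assumption based on VMware's usual disk ordering):

lsblk -o NAME,SIZE,TYPE    # expect sda (system disk) and a second disk, e.g. sdb, for the OSD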

 

2. Install and configure the OS

2.1 Do a minimal install of CentOS 7; download the minimal ISO from the official site.

                cat /etc/centos-release

CentOS Linux release 7.2.1511 (Core)

 

2.2 Add the ceph user to the sudoers list

echo 'ceph ALL = (root) NOPASSWD:ALL' | sudo tee /etc/sudoers.d/ceph

sudo chmod 440 /etc/sudoers.d/ceph
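To double-check that the sudoers entry works (optional), list the ceph user's sudo privileges; the output should include (root) NOPASSWD: ALL:

sudo -l -U ceph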

 

2.3 Synchronize time

yum install -y ntp ntpdate ntp-doc

vi /etc/ntp.conf

Add the following servers:

server 0.cn.pool.ntp.org

server 1.asia.pool.ntp.org

server 2.asia.pool.ntp.org

Remove the existing "server N.centos.pool.ntp.org iburst" lines.

 

[root@localhost ~]# ntpdate 0.cn.pool.ntp.org

[root@localhost ~]# hwclock -w

[root@localhost ~]# systemctl enable ntpd.service

[root@localhost ~]# systemctl start ntpd.service
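Once ntpd is running, you can optionally confirm that the node is talking to the configured pool servers; the peer marked with * is the currently selected time source:

ntpq -p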

 

2.4 Disable the firewall. On this minimal install neither firewalld nor iptables is present:

[root@localhost ~]# systemctl status firewalld
firewalld.service
   Loaded: not-found (Reason: No such file or directory)
   Active: inactive (dead)

[root@localhost ~]# systemctl status iptables
iptables.service
   Loaded: not-found (Reason: No such file or directory)
   Active: inactive (dead)

 

2.5 Disable SELinux (in production, do not disable it; write the appropriate security policies instead)

[root@localhost ~]# sed -i 's/SELINUX=enforcing/SELINUX=disabled/g' /etc/sysconfig/selinux

2.6 Remove the requiretty option

So that installing the cluster nodes from the admin machine does not throw errors, the requiretty option must also be removed from sudoers:

[root@localhost ~]# sed -i 's/Defaults    requiretty/#Defaults    requiretty/g' /etc/sudoers

If the substitution does not take effect, the number of spaces between "Defaults" and "requiretty" probably does not match; in that case open /etc/sudoers directly and comment the line out by hand.
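You can verify both changes (optional; the SELINUX=disabled setting only takes effect after a reboot, so setenforce 0 is used here to switch the running system to permissive immediately):

getenforce                           # shows the current SELinux mode
sudo setenforce 0                    # permissive for the current boot
sudo grep requiretty /etc/sudoers    # the Defaults requiretty line should now be commented out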


3. Install Ceph

3.1 Add the release key

rpm --import 'https://git.ceph.com/?p=ceph.git;a=blob_plain;f=keys/release.asc'

3.2 Create the ceph repo file, making sure the priority option is set to priority=2

[ceph@mdsmon ~]$ sudo vi /etc/yum.repos.d/ceph.repo

[ceph]

name=Ceph packages for $basearch

baseurl=http://ceph.com/rpm-hammer/el7/$basearch

enabled=1

priority=2

gpgcheck=1

type=rpm-md

gpgkey=https://ceph.com/git/?p=ceph.git;a=blob_plain;f=keys/release.asc

 

[ceph-noarch]

name=Ceph noarch packages

baseurl=http://ceph.com/rpm-hammer/el7/noarch

enabled=1

priority=2

gpgcheck=1

type=rpm-md

gpgkey=https://ceph.com/git/?p=ceph.git;a=blob_plain;f=keys/release.asc

 

[ceph-source]

name=Ceph source packages

baseurl=http://ceph.com/rpm-hammer/el7/SRPMS

enabled=0

priority=2

gpgcheck=1

type=rpm-md

gpgkey=https://ceph.com/git/?p=ceph.git;a=blob_plain;f=keys/release.asc

 

 

3.3 Install the yum plugin yum-plugin-priorities

sudo yum -y install yum-plugin-priorities

After it finishes, confirm that the file /etc/yum/pluginconf.d/priorities.conf exists and that the plugin is enabled; the file content should be:

[main]

enabled = 1

 

3.4 Install dependency packages

The required dependencies are snappy, leveldb, gdisk, python-argparse, and gperftools-libs.

However, not all of these packages are in the distribution's default yum repositories; leveldb and gperftools-libs are in EPEL (Extra Packages for Enterprise Linux), so the EPEL repository must be installed first: yum install epel-release

Once that succeeds, two files appear in /etc/yum.repos.d/: epel.repo and epel-testing.repo. You can point baseurl at a mirror that has been consistently fast for you and comment out mirrorlist (this way a specific fast mirror is used directly and the fastestmirror plugin does not waste time probing); if you do not know which mirror is fastest, leave the files unchanged (I did not change them). After any changes, clean and refresh the repositories:

sudo yum clean all

sudo yum update

Reference mirror: http://mirrors.neusoft.edu.cn/epel/7

The modified file, using the [epel] section as an example:

[epel]

name=Extra Packages for Enterprise Linux 7 - $basearch

baseurl=http://mirrors.neusoft.edu.cn/epel/7/$basearch

#baseurl=http://download.fedoraproject.org/pub/epel/7/$basearch

#mirrorlist=https://mirrors.fedoraproject.org/metalink?repo=epel-7&arch=$basearch

failovermethod=priority

enabled=1

gpgcheck=1

gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-EPEL-7
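Before installing the packages, you can optionally confirm that both the ceph and epel repositories are enabled:

yum repolist enabled | grep -E 'ceph|epel'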

 

Install the third-party packages: sudo yum -y install snappy leveldb gdisk python-argparse gperftools-libs

 

Install Ceph: sudo yum -y install ceph
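When the installation finishes, a quick way to confirm it worked is to check the installed version; with the rpm-hammer repository configured above it should report a 0.94.x (Hammer) release:

ceph --version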

 

Every machine must be configured following steps 2 and 3.

You can configure one machine first, shut it down, and then use the VMware Clone feature to create the two OSD nodes. Clone steps: open the menu, choose VM, then Clone, and keep clicking Next; when asked for the Clone type, be sure to select "Create a full clone", then finish.



4. SSH setup:

        4.1 Change the hostnames

Start all the nodes.

Edit /etc/hostname on each machine (vi /etc/hostname) and set the hostnames to admin, node1, node2, and node3.

        4.2 SSH configuration

sudo vi /etc/hosts

Add the following:

192.168.126.130 admin

192.168.126.131 node1

192.168.126.132 node2

192.168.126.133 node3

 

Generate an SSH key on the admin machine: ssh-keygen -t rsa

Copy the admin machine's key to the other machines: ssh-copy-id ceph@node1 (you will be prompted for the ceph user's password).
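If you prefer, the key can be pushed to all nodes in one loop (a small sketch; it assumes the hostnames are already resolvable through the /etc/hosts entries above):

for h in node1 node2 node3; do ssh-copy-id ceph@$h; done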

 

Then edit ~/.ssh/config

with the following content:

Host admin

Hostname admin

User ceph

 

Host node1

Hostname node1

User ceph

 

Host node2

Hostname node2

User ceph

 

Host node3

Hostname node3

User ceph

 

Change the file permissions: chmod 640 ~/.ssh/config

After that, you can log in to a node with ssh <hostname>.

Of course, you can also log in directly with ssh ceph@<ip address>, or log in on the node's console without SSH at all.
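A quick way to confirm passwordless login from the admin node to every cluster node (optional; each line of output should be the remote hostname, with no password prompt):

for h in node1 node2 node3; do ssh $h hostname; done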

 

 

5. Deploy the monitor

ssh node1

5.1 Generate a UUID to use as the fsid

[ceph@node1 ~]$ uuidgen

2d9cec7e-e0fd-40d8-82a1-fa0bcad5d13e

 

5.2 Add the UUID to ceph.conf

[ceph@mdsmon ~]$ sudo vi /etc/ceph/ceph.conf

fsid = 2d9cec7e-e0fd-40d8-82a1-fa0bcad5d13e

 

5.3 Add the mon node's hostname and IP address to ceph.conf

mon initial members = node1

mon host = 192.168.126.131

 

5.4 Create the monitor keyring

[ceph@node1 ~]$ ceph-authtool --create-keyring /tmp/ceph.mon.keyring --gen-key -n mon. --cap mon 'allow *'

 

5.5 Create the client.admin keyring

[ceph@node1 ~]$ sudo ceph-authtool --create-keyring /etc/ceph/ceph.client.admin.keyring --gen-key -n client.admin --set-uid=0 --cap mon 'allow *' --cap osd 'allow *' --cap mds 'allow'

 

5.6 Import client.admin.keyring into ceph.mon.keyring

[ceph@node1 ~]$ sudo ceph-authtool /tmp/ceph.mon.keyring --import-keyring /etc/ceph/ceph.client.admin.keyring
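You can optionally list the combined keyring to confirm it now contains both the mon. and client.admin entries:

[ceph@node1 ~]$ sudo ceph-authtool /tmp/ceph.mon.keyring --list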

5.7 Create the monitor map from the short hostname, IP, and fsid

[ceph@node1 ~]$ monmaptool --create --add node1 192.168.126.131 --fsid 2d9cec7e-e0fd-40d8-82a1-fa0bcad5d13e /tmp/monmap
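To check that the map was built correctly (optional), print it back; it should show the fsid above and mon.node1 at 192.168.126.131:6789:

[ceph@node1 ~]$ monmaptool --print /tmp/monmap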

 

5.8 Create the data directory on the monitor node

[ceph@node1 ~]$ sudo mkdir /var/lib/ceph/mon/ceph-node1

 

5.9 Initialize the monitor

[ceph@node1 ~]$ sudo ceph-mon --mkfs -i node1 --monmap /tmp/monmap --keyring /tmp/ceph.mon.keyring
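If the mkfs step succeeded, the monitor's data directory should now be populated (optional check; it should contain at least a keyring file and a store.db directory):

[ceph@node1 ~]$ sudo ls /var/lib/ceph/mon/ceph-node1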

 

5.10 The completed ceph.conf file:

[ceph@node1 ~]$ sudo vi /etc/ceph/ceph.conf

fsid = 2d9cec7e-e0fd-40d8-82a1-fa0bcad5d13e

mon initial members = node1

mon host = 192.168.126.131

public network = 192.168.126.0/24

auth cluster required = cephx

auth service required = cephx

auth client required = cephx

osd journal size = 1024

filestore xattr use omap = true

osd pool default size = 2

osd pool default min size = 1

osd pool default pg num = 333

osd pool default pgp num = 333

osd crush chooseleaf type = 1

 

 

5.11 Mark the initialization as complete

[ceph@node1 ~]$ sudo touch /var/lib/ceph/mon/ceph-node1/done

 

5.12 Set the startup method: create an empty marker file named sysvinit in the data directory so the monitor node can be started via sysvinit.

[ceph@node1 ~]$ sudo touch /var/lib/ceph/mon/ceph-node1/sysvinit

5.13 Start the monitor node

[ceph@node1 ~]$ sudo /etc/init.d/ceph start mon.node1

 

 

5.14 Verify:

[ceph@node1 ~]$ sudo ceph -s

    cluster 2d9cec7e-e0fd-40d8-82a1-fa0bcad5d13e
     health HEALTH_ERR
            64 pgs stuck inactive
            64 pgs stuck unclean
            no osds
     monmap e1: 1 mons at {node1=192.168.126.135:6789/0}
            election epoch 2, quorum 0 node1
     osdmap e1: 0 osds: 0 up, 0 in
      pgmap v2: 64 pgs, 1 pools, 0 bytes data, 0 objects
            0 kB used, 0 kB / 0 kB avail
                  64 creating

 

5.15 Enable the ceph service to start automatically at boot

[ceph@node1 ~]$ sudo chkconfig ceph on

 

6. Configure the OSDs

        6.1 Copy the files

Copy /etc/ceph/ceph.conf, /etc/ceph/ceph.client.admin.keyring, and /var/lib/ceph/bootstrap-osd/ceph.keyring from the monitor to the corresponding directories on the two OSD nodes, as sketched below.
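One possible way to do the copy with scp, run from node1 (a sketch, not the only way; it assumes the ceph user can sudo without a password on both machines, as set up in step 2.2, and that you either accept typing the ceph password once or have also distributed SSH keys from node1):

scp /etc/ceph/ceph.conf ceph@node2:/tmp/
sudo cat /etc/ceph/ceph.client.admin.keyring | ssh ceph@node2 'cat > /tmp/ceph.client.admin.keyring'
sudo cat /var/lib/ceph/bootstrap-osd/ceph.keyring | ssh ceph@node2 'cat > /tmp/bootstrap.keyring'
ssh ceph@node2 'sudo mv /tmp/ceph.conf /tmp/ceph.client.admin.keyring /etc/ceph/ && sudo mkdir -p /var/lib/ceph/bootstrap-osd && sudo mv /tmp/bootstrap.keyring /var/lib/ceph/bootstrap-osd/ceph.keyring'

Repeat the same commands with node3 in place of node2.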

        6.2 Configure an OSD (quick method)

[root@admin ~]# ssh node3

                  Prepare the OSD:

[ceph@node3 ~]$ sudo ceph-disk prepare --cluster ceph --cluster-uuid 2d9cec7e-e0fd-40d8-82a1-fa0bcad5d13e --fs-type xfs /dev/sdb

Activate the OSD:

[ceph@localhost ~]$ sudo ceph-disk activate /dev/sdb1
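After activation, the new OSD should be registered with the monitor and marked up (optional check):

[ceph@node3 ~]$ sudo ceph osd tree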

 

Configure the other OSD in the same way. When both are done, check the Ceph cluster status:

[ceph@localhost ~]$ sudo ceph -s

   cluster 2d9cec7e-e0fd-40d8-82a1-fa0bcad5d13e

    health HEALTH_WARN

           40 pgs degraded

           40 pgs stuck degraded

           40 pgs stuck unclean

           40 pgs stuck undersized

           40 pgs undersized

    monmap e1: 1 mons at {node1=192.168.126.135:6789/0}

           election epoch 2, quorum 0 node1

    osdmap e9: 2 osds: 2 up, 2 in

     pgmap v13: 64 pgs, 1 pools, 0 bytes data, 0 objects

           5186 MB used, 30630 MB / 35816 MB avail

                 40 active+undersized+degraded

                 24 active+clean

 

Errors encountered and how to fix them:

0

[root@admin ~]# ceph
Error initializing cluster client: Error('error calling conf_read_file: errno EINVAL',)
Cause: the ceph.conf file is missing, so create it:
[root@admin ~]# sudo touch /etc/ceph/ceph.conf

 

1

部署完 MON之后,通过 ceph -s查看集群状态,提示 ERROR: missing keyring, cannot use cephx for authentication

[ceph@mdsmon ~]$ ceph -s

2015-12-29 23:49:16.867852 7f7a7a047700 -1 monclient(hunting): ERROR: missing keyring, cannot use cephx for authentication

2015-12-29 23:49:16.867854 7f7a7a047700  0 librados: client.admin initialization error (2) No such file or directory

Error connecting to cluster: ObjectNotFound

Cause:

The permissions on /etc/ceph/ceph.client.admin.keyring allow only root to read and write it:

[ceph@mdsmon ~]$ ls /etc/ceph/ceph.client.admin.keyring -l
-rw-------. 1 root root 137 Dec 29 22:49 /etc/ceph/ceph.client.admin.keyring

Fixes:

a. Prefix the command with sudo: sudo ceph -s

b. Change the permissions on /etc/ceph/ceph.client.admin.keyring so that non-root users can read it: sudo chmod 640 /etc/ceph/ceph.client.admin.keyring

 

2

[ceph@mdsmon ~]$ sudo ceph osd lspools
2015-12-30 21:09:59.332198 7fa390125700  0 -- :/1023521 >>192.168.126.129:6789/0 pipe(0x7fa38c064010 sd=3 :0 s=1 pgs=0 cs=0 l=1c=0x7fa38c05c720).fault
2015-12-30 21:10:02.315085 7fa388ff9700  0 -- :/1023521 >>192.168.126.129:6789/0 pipe(0x7fa380000c00 sd=3 :0 s=1 pgs=0 cs=0 l=1c=0x7fa380004ef0).fault

Fix: restart the ceph service: sudo /etc/init.d/ceph restart

 

3. The command ceph osd create (e.g. [root@mdsmon ~]# ceph osd create) is run on the mon, not on the OSD node. Each time it is executed, the osd number increases by 1. (Manual OSD deployment requires this command.)

 

4. Note that every node needs Ceph and its dependencies installed, and all nodes must have the same ceph.conf file.


References:

Chinese community: http://bbs.ceph.org.cn/question/138

Official documentation: http://docs.ceph.com/docs/master/install/manual-deployment/

                                                                                                                                                           





