Ceph Deployment Steps
Pre-installation preparation: 1. Check that an NTP time server is installed and that the clocks across the cluster are synchronized.
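A minimal way to verify this, assuming the nodes run the stock ntpd daemon (if they use chrony instead, check with chronyc sources):
systemctl status ntpd    # confirms the NTP service is installed and running
ntpq -p    # lists upstream time servers; a leading * marks the peer currently synced to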
1. Change the yum sources (run on all three machines)
yum clean all
curl http://mirrors.aliyun.com/repo/Centos-7.repo >/etc/yum.repos.d/CentOS-Base.repo
curl http://mirrors.aliyun.com/repo/epel-7.repo >/etc/yum.repos.d/epel.repo
sed -i '/aliyuncs/d' /etc/yum.repos.d/CentOS-Base.repo
sed -i '/aliyuncs/d' /etc/yum.repos.d/epel.repo
yum makecache
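To confirm the new repositories are active, a quick sanity check (not part of the original steps):
yum repolist enabled    # the aliyun base and epel repositories should appear in the list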

2. Add the Ceph repository (run on all three machines)
vim /etc/yum.repos.d/ceph.repo
## File contents:
[ceph]
name=ceph
baseurl=http://mirrors.163.com/ceph/rpm-jewel/el7/x86_64/
gpgcheck=0
[ceph-noarch]
name=cephnoarch
baseurl=http://mirrors.163.com/ceph/rpm-jewel/el7/noarch/
gpgcheck=0
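Equivalently, the same repo file can be written non-interactively with a heredoc, which is convenient when repeating this on all three machines (identical content to the above):
cat > /etc/yum.repos.d/ceph.repo <<'EOF'
[ceph]
name=ceph
baseurl=http://mirrors.163.com/ceph/rpm-jewel/el7/x86_64/
gpgcheck=0
[ceph-noarch]
name=cephnoarch
baseurl=http://mirrors.163.com/ceph/rpm-jewel/el7/noarch/
gpgcheck=0
EOF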

3. Install the Ceph client packages (run on all three machines)
yum makecache

yum install ceph ceph-radosgw rdate -y

4. Disable SELinux & firewalld (run on all three machines)
sed -i 's/SELINUX=.*/SELINUX=disabled/' /etc/selinux/config
setenforce 0

systemctl stop firewalld

systemctl disable firewalld
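To verify both are off (a quick check, assuming the commands above succeeded):
getenforce    # should print Permissive now, and Disabled after the next reboot
firewall-cmd --state    # should print "not running"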

5. Install the ceph-deploy tool (run on all three machines)
[root@ceph-1 ~]# yum -y install ceph-deploy
[root@ceph-1 ~]# ceph-deploy --version
1.5.39
[root@ceph-1 ~]# ceph -v
ceph version 10.2.11 (e4b061b47f07f583c92a050d9e84b1813a35671e)

When this output appears, Ceph has been installed successfully.
6. Set up passwordless SSH login (run on all three machines so they can SSH to each other without a password)
[root@ceph-1 cluster]# ssh-keygen
Generating public/private rsa key pair.
Enter file in which to save the key (/root/.ssh/id_rsa):
Created directory '/root/.ssh'.
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /root/.ssh/id_rsa.
Your public key has been saved in /root/.ssh/id_rsa.pub.
The key fingerprint is:
54:f8:9b:25:56:3b:b1:ce:fc:6d:c5:61:b1:55:79:49 root@ceph-1
The key's randomart image is:
+--[ RSA 2048]----+
|          ... .E=|
|         ... o +o|
|        ... . + =|
|       . + = +   |
|        S. O  ...|
|            o + o|
|           .  ...|
|              . o|
|               . |
+-----------------+

[root@ceph-1 cluster]# ssh-copy-id ceph-2    (note: this enables passwordless login from ceph-1 to ceph-2; to do the same from ceph-1 to ceph-3, the command is ssh-copy-id ceph-3)
/usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
/usr/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
Warning: Permanently added '10.39.47.63' (ECDSA) to the list of known hosts.
root@10.39.47.63's password:

Number of key(s) added: 1

Now try logging into the machine, with: "ssh '10.39.47.63'"
and check to make sure that only the key(s) you wanted were added.

[root@ceph-1 cluster]# ssh-copy-id ceph-3
/usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
/usr/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
Warning: Permanently added '10.39.47.64' (ECDSA) to the list of known hosts.
root@10.39.47.64's password:

Number of key(s) added: 1

Now try logging into the machine, with: "ssh '10.39.47.64'"
and check to make sure that only the key(s) you wanted were added.
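A quick way to confirm passwordless login now works from ceph-1 (a sanity check, not part of the original steps):
for h in ceph-2 ceph-3; do ssh $h hostname; done    # should print each hostname without asking for a password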

7. Create a deployment directory on the deployment node and start the deployment
(1. First create a cluster directory under root's home directory. 2. cd into cluster for the following steps.)
[root@ceph-1 ~]# mkdir cluster

[root@ceph-1 ~]# cd cluster/

[root@ceph-1 cluster]# ceph-deploy new ceph-1 ceph-2 ceph-3
(If hostname resolution has not been set up, see the steps at the end of this document.)
After the command finishes, the following files are generated:
[root@ceph-1 cluster]# ls -l
total 16
-rw-r--r-- 1 root root 235 Nov 2 10:40 ceph.conf
-rw-r--r-- 1 root root 4879 Nov 2 10:40 ceph-deploy-ceph.log
-rw------- 1 root root 73 Nov 2 10:40 ceph.mon.keyring

8. Add public_network to ceph.conf according to your own IP configuration, and slightly increase the allowed clock drift between mons (default 0.05 s, here changed to 2 s):
[root@ceph-1 cluster]# echo public_network=10.39.47.0/24 >> ceph.conf
10.39.47.0/24 is CIDR shorthand for 10.39.47.0/255.255.255.0; the network must cover the mon_host addresses.
[root@ceph-1 cluster]# echo mon_clock_drift_allowed = 2 >> ceph.conf
[root@ceph-1 cluster]# cat ceph.conf
[global]
fsid = 4a3e86f0-1511-4ad7-9f69-b435ae16dc28
mon_initial_members = ceph-1, ceph-2, ceph-3
mon_host = 10.39.47.63,10.39.47.64,10.39.47.65
auth_cluster_required = cephx
auth_service_required = cephx
auth_client_required = cephx

public_network=10.39.47.0/24
mon_clock_drift_allowed = 2
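As an aside: if you edit ceph.conf again after the cluster is deployed, the updated file can be distributed to all nodes with ceph-deploy's config push subcommand:
ceph-deploy --overwrite-conf config push ceph-1 ceph-2 ceph-3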

9. Deploy the monitors
[root@ceph-1 cluster]# ceph-deploy mon create-initial
// After it succeeds, the directory contains:
[root@ceph-1 cluster]# ls -l
total 56
-rw------- 1 root root 113 Nov 2 10:45 ceph.bootstrap-mds.keyring
-rw------- 1 root root 71 Nov 2 10:45 ceph.bootstrap-mgr.keyring
-rw------- 1 root root 113 Nov 2 10:45 ceph.bootstrap-osd.keyring
-rw------- 1 root root 113 Nov 2 10:45 ceph.bootstrap-rgw.keyring
-rw------- 1 root root 129 Nov 2 10:45 ceph.client.admin.keyring
-rw-r--r-- 1 root root 292 Nov 2 10:43 ceph.conf
-rw-r--r-- 1 root root 27974 Nov 2 10:45 ceph-deploy-ceph.log
-rw------- 1 root root 73 Nov 2 10:40 ceph.mon.keyring
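To confirm the monitors formed a quorum, a quick check with the standard mon stat subcommand:
ceph mon stat    # should report 3 mons and quorum 0,1,2 ceph-1,ceph-2,ceph-3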

10. Create the OSD directories and set their ownership (run on all three machines)
mkdir -p /mnt/xRaid0/dev/vdc
chown -R ceph:ceph /mnt/xRaid0/dev/vdc
11. Deploy the OSDs
ceph-deploy --overwrite-conf osd prepare ceph-1:/mnt/xRaid0/dev/vdc ceph-2:/mnt/xRaid0/dev/vdc ceph-3:/mnt/xRaid0/dev/vdc --zap-disk
(prepare the OSD storage on each server node)
ceph-deploy --overwrite-conf osd activate ceph-1:/mnt/xRaid0/dev/vdc ceph-2:/mnt/xRaid0/dev/vdc ceph-3:/mnt/xRaid0/dev/vdc
(activate the OSD storage on each server)
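Once activated, the OSDs can be listed to confirm they came up (a standard check, not in the original steps):
ceph osd tree    # the three OSDs should be shown as up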
12. Check the Ceph cluster status
Command: ceph -s
 health HEALTH_OK
 monmap e1: 3 mons at {ceph25=1.1.1.181:6789/0,ceph26=1.1.1.182:6789/0,ceph27=1.1.1.183:6789/0}
        election epoch 8, quorum 0,1,2 ceph25,ceph26,ceph27
 osdmap e20: 4 osds: 3 up, 3 in
        flags sortbitwise,require_jewel_osds
  pgmap v39: 64 pgs, 1 pools, 0 bytes data, 0 objects
        118 GB used, 89298 GB / 89416 GB avail
              64 active+clean

Steps to add hostname resolution on Linux
1. vi /etc/hosts
2. Map each machine's configured IP address to its hostname, as in the example below.
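For the three nodes in this guide (names and addresses taken from the outputs above), the entries would be:
10.39.47.63 ceph-1
10.39.47.64 ceph-2
10.39.47.65 ceph-3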
To change a machine's hostname:
vi /etc/hostname
Creating Ceph block storage
1. Create a 10 GB block image.
rbd create Data --image-feature layering --size 10G
2. Map the block image to a device.
rbd map Data    (replace Data with the name of the image you created)
3. Format the device.
mkfs.xfs /dev/rbd0
4. Make the mount of the mapped block device permanent: look up its UUID with blkid, then add an fstab entry.
blkid

vi /etc/fstab

UUID=3b8528b1-1f28-4566-8ab8-d036574c5a6e /mnt/xRaid0/ceph-block-divce xfs defaults 0 0

5. Remount everything listed in fstab.
mount -a
If no errors are reported, the mount succeeded.
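To double-check the mapping and the mount (standard commands, not part of the original steps):
rbd showmapped    # the Data image should be listed as mapped to /dev/rbd0
df -h /mnt/xRaid0/ceph-block-divce    # confirms the filesystem is mounted at the fstab mountpoint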
