This article walks through setting up a Ceph cluster on three servers, step by step.
1. Set the hostname and /etc/hosts on each server, and configure passwordless SSH between the three nodes.
hostnamectl --static set-hostname yz-25-60-36
vim /etc/hosts
172.25.60.36 yz-25-60-36
172.25.60.37 yz-25-60-37
172.25.60.38 yz-25-60-38
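Passwordless SSH between the nodes can be set up with ssh-keygen and ssh-copy-id. A minimal sketch (the key path and the root user are examples, not from the original; the copy step requires the other nodes to be reachable, so it is shown as comments):

```shell
# Generate a key pair with no passphrase on the deploy node (path is an example):
ssh-keygen -t rsa -N '' -f /tmp/ceph_deploy_key -q
# Then push the public key to the other two nodes (run interactively, once per node):
#   ssh-copy-id -i /tmp/ceph_deploy_key.pub root@yz-25-60-37
#   ssh-copy-id -i /tmp/ceph_deploy_key.pub root@yz-25-60-38
ls /tmp/ceph_deploy_key.pub
```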
2. Add a domestic Ceph yum repository (the Aliyun mirror). Several release branches are available:
rpm-giant/
rpm-hammer/
rpm-infernalis/
rpm-jewel/
rpm-kraken/
rpm-luminous/
rpm-mimic/
rpm-nautilus/
rpm-testing/
The repo file (e.g. /etc/yum.repos.d/ceph.repo; this example uses the jewel branch):
[ceph_local]
name=ceph
baseurl=http://mirrors.aliyun.com/ceph/rpm-jewel/el7/x86_64
gpgcheck=0
3. Install ceph and ceph-deploy
yum makecache
yum install ceph-deploy
yum install ceph ceph-radosgw ntp -y
4. Disable SELinux and firewalld
sed -i 's/SELINUX=.*/SELINUX=disabled/' /etc/selinux/config
setenforce 0
systemctl stop firewalld
systemctl disable firewalld
5. Configure NTP for time synchronization (MONs report health warnings if the clocks drift apart)
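The ntp package was installed in step 3. A minimal ntpd setup, as a sketch assuming yz-25-60-36 serves time to the other two nodes (the upstream server is only an example):

```
# /etc/ntp.conf on yz-25-60-37 and yz-25-60-38 (sketch):
server 172.25.60.36 iburst

# On yz-25-60-36 itself, keep a public upstream, e.g.:
# server ntp.aliyun.com iburst

# Then on every node:
# systemctl enable ntpd && systemctl start ntpd
```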
6. Create a working directory for ceph-deploy
mkdir /export/my-cluster
7. Create the cluster (run ceph-deploy from /export/my-cluster so the generated files land there)
ceph-deploy new yz-25-60-36 yz-25-60-37 yz-25-60-38
8. Edit the generated configuration file (ceph.conf)
[global]
fsid = 07327f31-cc6f-4bda-a18d-20f55391dc6e
mon_initial_members = yz-25-60-36, yz-25-60-37, yz-25-60-38
mon_host = 172.25.60.36,172.25.60.37,172.25.60.38
auth_cluster_required = cephx
auth_service_required = cephx
auth_client_required = cephx
public network = 172.25.60.0/24
cluster network = 172.25.60.0/24
# The default replica count is 3; reduced to 2 for this test environment
osd pool default size = 2
# Pool deletion is blocked by default; set as needed
mon_allow_pool_delete = true
osd_max_object_name_len = 256
osd_max_object_namespace_len = 64
osd_check_max_object_name_len_on_startup = false
9. Bootstrap the cluster monitors; this also pulls the keyrings into the local directory
ceph-deploy mon create-initial
10. If there were no errors, the cluster MONs are up; verify with ceph -s
11. ceph-deploy has two major versions, 1.5.x and 2.0.x:
1.5.x cannot deploy mgr daemons
2.0.x cannot create OSDs at the directory level (unclear whether this will ever be supported)
With ceph-deploy 1.5.x, run:
# On each OSD node (yz-25-60-37 and yz-25-60-38):
mkdir -p /export/my-cluster
chown ceph:ceph /export/my-cluster
ceph-deploy osd prepare yz-25-60-37:/export/my-cluster yz-25-60-38:/export/my-cluster
ceph-deploy osd activate yz-25-60-37:/export/my-cluster yz-25-60-38:/export/my-cluster
12. Add an MDS
ceph-deploy --overwrite-conf mds create yz-25-60-36
Check its status:
ceph mds stat
2019-08-23 10:59:08.678780 7fa801493700 -1 WARNING: unknown auth protocol defined: cc
2019-08-23 10:59:08.678781 7fa801493700 -1 WARNING: unknown auth protocol defined: network
2019-08-23 10:59:08.678783 7fa801493700 -1 WARNING: unknown auth protocol defined: 172.25.60.0/24
e2:, 1 up:standby
13. Create a CephFS file system (reference: https://docs.ceph.com/docs/master/cephfs/createfs/)
Create the data pool:
ceph osd pool create cephfs_data 128
Create the metadata pool:
ceph osd pool create cephfs_metadata 128
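The 128 placement groups used above match the common rule of thumb: total PGs ≈ (OSDs × 100) / replica count, rounded up to the next power of two. A quick check for this setup (the numbers come from steps 8 and 11 of this walkthrough):

```shell
# Rule-of-thumb PG estimate: (OSDs * 100) / replicas, rounded up to a power of 2.
osds=2        # two directory OSDs created in step 11
replicas=2    # osd pool default size from ceph.conf
raw=$(( osds * 100 / replicas ))
echo "$raw"   # 100; the next power of two is 128, the value used above
```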
Enable the file system:
ceph fs new cephfs cephfs_metadata cephfs_data
Verify:
[ceph@yz-25-60-36 my-cluster]$ ceph fs ls
name: cephfs, metadata pool: cephfs_metadata, data pools: [cephfs_data ]
[ceph@yz-25-60-36 my-cluster]$ ceph mds stat
e5: 1/1/1 up {0=yz-25-60-36=up:active}
14. Mount the file system
Get the admin key:
cat /export/my-cluster/ceph.client.admin.keyring
[client.admin]
key = AQBQUl9dW5yDOBAAkJYKdI4xxAZ3hOQ1vv5XZg==
caps mds = "allow *"
caps mon = "allow *"
caps osd = "allow *"
Write the key into admin.secret (the file must contain only the key string):
vim /export/my-cluster/admin.secret
AQBQUl9dW5yDOBAAkJYKdI4xxAZ3hOQ1vv5XZg==
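Instead of copying the key by hand, it can be extracted with awk. A sketch; the /tmp paths and the sample keyring below are only for illustration (on a real node you would point awk at /export/my-cluster/ceph.client.admin.keyring):

```shell
# Sample keyring for illustration (same shape as ceph.client.admin.keyring above):
mkdir -p /tmp/my-cluster
cat > /tmp/my-cluster/ceph.client.admin.keyring <<'EOF'
[client.admin]
	key = AQBQUl9dW5yDOBAAkJYKdI4xxAZ3hOQ1vv5XZg==
	caps mds = "allow *"
EOF
# Pull out only the key string, which is what the mount secretfile needs:
awk '$1 == "key" {print $3}' /tmp/my-cluster/ceph.client.admin.keyring > /tmp/my-cluster/admin.secret
chmod 600 /tmp/my-cluster/admin.secret
cat /tmp/my-cluster/admin.secret
```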
Mount (create the mount point first):
sudo mkdir -p /cephMount
sudo mount -t ceph yz-25-60-36:6789:/ /cephMount -o name=admin,secretfile=/export/my-cluster/admin.secret
Verify the mount with: mount | grep ceph
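To make the mount persistent across reboots, an /etc/fstab entry can be added. A sketch using the same options as the mount command above (_netdev and noatime are additions: _netdev delays mounting until the network is up):

```
# /etc/fstab
yz-25-60-36:6789:/  /cephMount  ceph  name=admin,secretfile=/export/my-cluster/admin.secret,noatime,_netdev  0 0
```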