- Environment
| Hostname | IP | OS |
| --- | --- | --- |
| ceph1 | 192.168.48.132 | CentOS 7 |
| ceph2 | 192.168.48.133 | CentOS 7 |
| ceph3 | 192.168.48.134 | CentOS 7 |
- Preliminary preparation
- Disable the firewall and SELinux, configure the hosts file, configure ceph.repo, configure NTP, and create a user with passwordless SSH login.
[root@ceph1 ~]# systemctl stop firewalld
[root@ceph1 ~]# setenforce 0
[root@ceph1 ~]# cat /etc/selinux/config
# This file controls the state of SELinux on the system.
# SELINUX= can take one of these three values:
# enforcing - SELinux security policy is enforced.
# permissive - SELinux prints warnings instead of enforcing.
# disabled - No SELinux policy is loaded.
SELINUX=disabled    # disable SELinux
# SELINUXTYPE= can take one of three values:
# targeted - Targeted processes are protected,
# minimum - Modification of targeted policy. Only selected processes are protected.
# mls - Multi Level Security protection.
SELINUXTYPE=targeted
[root@ceph1 ~]# wget -O /etc/yum.repos.d/ceph.repo https://raw.githubusercontent.com/aishangwei/ceph-demo/master/ceph-deploy/ceph.repo
[root@ceph1 ~]# yum -y install ntpdate ntp
[root@ceph1 ~]# cat /etc/ntp.conf
server ntp1.aliyun.com iburst
[root@ceph1 ~]# systemctl restart ntpd
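To confirm that ntpd is actually syncing against the Aliyun server (and comes back after a reboot), a quick check along these lines should work; both are standard systemd/ntp commands:
[root@ceph1 ~]# systemctl enable ntpd    # start ntpd automatically at boot
[root@ceph1 ~]# ntpq -p                  # once synced, ntp1.aliyun.com appears with a leading '*'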
[root@ceph1 ~]# useradd ceph-admin
[root@ceph1 ~]# echo "ceph-admin" | passwd --stdin ceph-admin
Changing password for user ceph-admin.
passwd: all authentication tokens updated successfully.
[root@ceph1 ~]# echo "ceph-admin ALL = (root) NOPASSWD:ALL" | tee /etc/sudoers.d/ceph-admin
ceph-admin ALL = (root) NOPASSWD:ALL
[root@ceph1 ~]# cat /etc/sudoers.d/ceph-admin
ceph-admin ALL = (root) NOPASSWD:ALL
[root@ceph1 ~]# chmod 0440 /etc/sudoers.d/ceph-admin
[root@ceph1 ~]# cat /etc/hosts
192.168.48.132 ceph1
192.168.48.133 ceph2
192.168.48.134 ceph3
[root@ceph1 ~]#
[root@ceph1 ~]# sed -i 's/Defaults requiretty/#Defaults requiretty/' /etc/sudoers    # let sudo run without a tty
Official installation guide: http://docs.ceph.com/docs/master/start/quick-start-preflight/
Note: repeat all of the above on the other two machines; the steps are identical.
2) Deploy the cluster with ceph-deploy
[root@ceph1 ~]# su - ceph-admin
[ceph-admin@ceph1 ~]$ ssh-keygen
[ceph-admin@ceph1 ~]$ ssh-copy-id ceph-admin@ceph1
[ceph-admin@ceph1 ~]$ ssh-copy-id ceph-admin@ceph2
[ceph-admin@ceph1 ~]$ ssh-copy-id ceph-admin@ceph3
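Before moving on, it is worth verifying that passwordless login really works from ceph1 to every node; a one-liner like the following (purely an illustrative check) should print each hostname without asking for a password:
[ceph-admin@ceph1 ~]$ for h in ceph1 ceph2 ceph3; do ssh ceph-admin@$h hostname; done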
[ceph-admin@ceph1 sudoers.d]$ sudo yum install -y ceph-deploy python-pip
[ceph-admin@ceph1 ~]$ mkdir my-cluster        # no sudo: the directory must be writable by ceph-admin
[ceph-admin@ceph1 ~]$ cd my-cluster/          # sudo cd has no effect; cd is a shell builtin
[ceph-admin@ceph1 my-cluster]$ ceph-deploy new ceph1 ceph2 ceph3    # define the initial monitor nodes
[ceph-admin@ceph1 my-cluster]$ ls
ceph.conf ceph-deploy-ceph.log ceph.mon.keyring
[ceph-admin@ceph1 my-cluster]$ cat ceph.conf    # the last two lines (public/cluster network) were added by hand
[global]
fsid = 37e48ca8-8b87-40eb-9f64-cfdc0b659cf2
mon_initial_members = ceph1, ceph2, ceph3
mon_host = 192.168.48.132,192.168.48.133,192.168.48.134
auth_cluster_required = cephx
auth_service_required = cephx
auth_client_required = cephx
public network = 192.168.48.0/24
cluster network = 192.168.48.0/24
# Install the Ceph packages directly, replacing "ceph-deploy install node1 node2"; the same command must also be run on ceph2 and ceph3
[ceph-admin@ceph1 my-cluster]$ sudo yum -y install ceph ceph-radosgw
[ceph-admin@ceph2 ~]$ sudo yum -y install ceph ceph-radosgw
[ceph-admin@ceph3 ~]$ sudo yum -y install ceph ceph-radosgw
Configure the initial monitor(s) and gather all keys:
[ceph-admin@ceph1 my-cluster]$ ceph-deploy mon create-initial
Copy the configuration and admin keyring to each node:
[ceph-admin@ceph1 my-cluster]$ ceph-deploy admin ceph1 ceph2 ceph3
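If a non-root user (such as ceph-admin here) will run ceph commands on the nodes, note that the admin keyring pushed by ceph-deploy is readable only by root; granting read access is a common follow-up step (an optional sketch):
[ceph-admin@ceph1 my-cluster]$ sudo chmod +r /etc/ceph/ceph.client.admin.keyring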
Configure the OSDs:
[ceph-admin@ceph1 my-cluster]$ for dev in /dev/sdb /dev/sdc /dev/sdd
> do
> ceph-deploy disk zap ceph1 $dev
> ceph-deploy osd create ceph1 --data $dev
> ceph-deploy disk zap ceph2 $dev
> ceph-deploy osd create ceph2 --data $dev
> ceph-deploy disk zap ceph3 $dev
> ceph-deploy osd create ceph3 --data $dev
> done
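With three data disks per node, the loop above should produce nine OSDs. A quick sanity check (standard ceph commands, run from any node holding the admin keyring):
[ceph-admin@ceph1 my-cluster]$ sudo ceph -s          # should report 9 osds: 9 up, 9 in
[ceph-admin@ceph1 my-cluster]$ sudo ceph osd tree    # shows how the 9 OSDs map onto ceph1/2/3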
Deploy the mgr daemons (only needed from the Luminous release onward). Note the duplicated "ceph3" in the command: ceph2 never receives a mgr, which is why the ceph -s output later shows only ceph1 and ceph3.
[ceph-admin@ceph1 my-cluster]$ ceph-deploy mgr create ceph1 ceph3 ceph3
[ceph-admin@ceph1 my-cluster]$ ceph mgr module enable dashboard
Note: if the command reports an error, apply the following fix:
[ceph-admin@ceph1 my-cluster]$ sudo chown -R ceph-admin /etc/ceph
[ceph-admin@ceph1 my-cluster]$ ll /etc/ceph/
total 12
-rw------- 1 ceph-admin root 63 Dec 15 11:05 ceph.client.admin.keyring
-rw-r--r-- 1 ceph-admin root 308 Dec 15 11:15 ceph.conf
-rw-r--r-- 1 ceph-admin root 92 Nov 27 04:20 rbdmap
-rw------- 1 ceph-admin root 0 Dec 15 10:49 tmpqjm9oQ
[ceph-admin@ceph1 my-cluster]$ ceph mgr module enable dashboard
[ceph-admin@ceph1 my-cluster]$ sudo netstat -tupln | grep 7000
tcp6 0 0 :::7000 :::* LISTEN 8298/ceph-mgr
Open http://192.168.48.132:7000/ in a browser (this is the node the active mgr is listening on, per the netstat output above) and the graphical dashboard should come up.
That completes the cluster deployment.
3) Install the Ceph block storage client
Ceph block devices, formerly known as RADOS block devices (RBD), provide clients with reliable, distributed, high-performance block storage. RBD uses the librbd library and stores blocks of data striped across multiple OSDs in the Ceph cluster. Because RBD is backed by Ceph's RADOS layer, every block device is spread over multiple Ceph nodes, giving high performance and excellent reliability. RBD has native Linux kernel support, and the RBD driver has been well integrated with the mainline kernel for years. Beyond reliability and performance, RBD offers enterprise features such as full and incremental snapshots, thin provisioning, copy-on-write clones, dynamic resizing, and more. RBD also supports in-memory caching, which greatly improves its performance.
Any ordinary Linux host (RHEL- or Debian-based) can act as a Ceph client. The client talks to the storage cluster over the network to store and retrieve user data. Ceph RBD support has been in the mainline Linux kernel since version 2.6.34.
[ceph-admin@ceph1 ceph]$ ll /etc/ceph/*
-rw------- 1 ceph-admin root 63 Dec 15 11:05 /etc/ceph/ceph.client.admin.keyring    # the admin (root) keyring
-rw-r--r-- 1 ceph-admin root 308 Dec 15 11:15 /etc/ceph/ceph.conf
-rw-r--r-- 1 ceph-admin root 92 Nov 27 04:20 /etc/ceph/rbdmap
-rw------- 1 ceph-admin root 0 Dec 15 10:49 /etc/ceph/tmpqjm9oQ
Create a Ceph block client user name and authentication key:
[ceph-admin@ceph1 my-cluster]$ ceph auth get-or-create client.rbd mon 'allow r' osd 'allow class-read object_prifix rbd_children, allow rwx=rbd' | tee ./ceph.client.rbd.keyring
[client.rbd]
key = AQDmeBRcPXpMNBAAlSJxwDM9PbcH2UMgx2cAYQ==
Now switch to a fresh client machine. Note: it must be able to resolve the cluster hostnames (configure its hosts file).
[ceph-admin@ceph1 my-cluster]$ scp ceph.client.rbd.keyring /etc/ceph/ceph.conf client:/etc/ceph/
Check that the client meets the block device requirements (kernel 2.6.34 or later, with the rbd module available):
[root@client ceph]# uname -r
3.10.0-514.el7.x86_64
[root@client ceph]# modprobe rbd
[root@client ceph]# echo $?
0
[root@client ceph]#
Install the Ceph client:
[root@client ceph]# wget -O /etc/yum.repos.d/ceph.repo https://raw.githubusercontent.com/aishangwei/ceph-demo/master/ceph-deploy/ceph.repo
[root@client ~]# yum -y install ceph
[root@client ~]# cat /etc/ceph/ceph.client.rbd.keyring
[root@client ~]# ceph -s --name client.rbd
  cluster:
    id:     37e48ca8-8b87-40eb-9f64-cfdc0b659cf2
    health: HEALTH_OK

  services:
    mon: 3 daemons, quorum ceph1,ceph2,ceph3
    mgr: ceph3(active), standbys: ceph1
    osd: 9 osds: 9 up, 9 in

  data:
    pools:   0 pools, 0 pgs
    objects: 0 objects, 0B
    usage:   9.05GiB used, 261GiB / 270GiB avail
    pgs:
[root@client ~]#
Create and map a block device on the client
* Create a block device
By default a block device is created in the rbd pool, but a ceph-deploy installation does not create that pool, so it must be created first.
# Create the pool and the block device
[ceph-admin@ceph1 my-cluster]$ ceph osd lspools    # list the cluster's storage pools
[ceph-admin@ceph1 my-cluster]$ ceph osd pool create rbd 512    # 512 is the placement group (PG) count; later tests need plenty of PGs, so 512 is used here
Choosing a pg_num value is mandatory because it cannot be calculated automatically. A few commonly used values:
• Fewer than 5 OSDs: set pg_num to 128
• 5 to 10 OSDs: set pg_num to 512
• 10 to 50 OSDs: set pg_num to 4096
• More than 50 OSDs: you need to understand the trade-offs and calculate pg_num yourself
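As a cross-check, the common rule of thumb (total PGs ≈ OSDs × 100 / replica count, rounded up to the next power of two) lands on the same value for this cluster; the replica count of 3 used below is Ceph's default pool size, assumed here rather than read from this deployment's config:
[ceph-admin@ceph1 my-cluster]$ echo $(( 9 * 100 / 3 ))    # 9 OSDs, 3 replicas -> 300; round up to 512
300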
# Create the block device from the client
[root@client ~]# rbd create rbd1 --size 10240 --name client.rbd
rbd: create error: (1) Operation not permitted
2018-12-15 21:47:52.435474 7f9706ae2d40 -1 librbd: Could not tell if rbd1 already exists
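The "Operation not permitted" error is most likely caused by the typos in the caps string used when creating client.rbd above: "object_prifix" should be "object_prefix", and "rwx=rbd" should be "rwx pool=rbd". Assuming that is the cause, the caps can be corrected in place with ceph auth caps (the key itself does not change), after which the create should succeed:
[ceph-admin@ceph1 my-cluster]$ ceph auth caps client.rbd mon 'allow r' osd 'allow class-read object_prefix rbd_children, allow rwx pool=rbd'
[root@client ~]# rbd create rbd1 --size 10240 --name client.rbd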