1. Environment
CentOS 7.6
2. Lab outline
- Build a Ceph cluster from nodes ceph01 and ceph02
- Expand the cluster by adding node ceph03 (add a mon and an osd)
- Simulate deleting an osd
- Recover the osd
- Common Ceph commands (create the mgr service, add and delete pools)
3. Deploying the Ceph cluster
- On all three hosts, disable the firewall and install common tools
systemctl stop firewalld
systemctl disable firewalld
setenforce 0
yum install wget -y              ## download tool
yum install net-tools -y         ## provides ifconfig, route, etc. missing from a minimal install
yum install bash-completion -y   ## command completion for a minimal install
yum install ntp ntpdate -y       ## time synchronization tools
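The prep steps above must run on every node. A small loop over SSH saves the repetition; this is a hedged sketch that only prints the per-node commands (drop the `echo` to execute for real, and note the hostnames assume the /etc/hosts mapping set up below).

```shell
# Dry run: print the prep command for each node instead of executing it.
NODES="ceph01 ceph02 ceph03"
PREP='systemctl stop firewalld; systemctl disable firewalld; setenforce 0; yum install -y wget net-tools bash-completion ntp ntpdate'
for node in $NODES; do
  echo ssh "root@$node" "$PREP"   # remove echo once passwordless SSH is in place
done
```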
- Add hostname mappings to /etc/hosts on ceph01 and ceph02
vi /etc/hosts
192.168.100.10 ceph01
192.168.100.11 ceph02
- Set up passwordless SSH and time synchronization between ceph01 and ceph02
#### On ceph01, set up passwordless SSH
ssh-keygen -t rsa
ssh-copy-id ceph02
### Time synchronization, run on ceph01
ntpdate ntp.aliyun.com   ## sync against the Aliyun time server
vi /etc/ntp.conf
## line 8: change to  restrict default nomodify
## line 17: change to  restrict 192.168.100.0 mask 255.255.255.0 nomodify notrap
## delete lines 21 through 24 ##
21 server 0.centos.pool.ntp.org iburst
22 server 1.centos.pool.ntp.org iburst
23 server 2.centos.pool.ntp.org iburst
24 server 3.centos.pool.ntp.org iburst
### insert the following in place of the deleted lines ###
server 127.127.1.0
fudge 127.127.1.0 stratum 10
## restart the ntp service
systemctl restart ntpd
systemctl enable ntpd
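The manual vi edits above can also be scripted. The sketch below works on a miniature canned copy of the file so it is safe to try anywhere; to apply it for real, point `CONF` at /etc/ntp.conf (and verify the stock CentOS 7 line layout first, since the line numbers cited above are an assumption about that file).

```shell
# Dry run against a sample file; set CONF=/etc/ntp.conf to apply for real.
CONF=/tmp/ntp.conf.sample
cat > "$CONF" <<'EOF'
restrict default nomodify notrap nopeer noquery
restrict 192.168.1.0 mask 255.255.255.0 nomodify notrap
server 0.centos.pool.ntp.org iburst
server 1.centos.pool.ntp.org iburst
server 2.centos.pool.ntp.org iburst
server 3.centos.pool.ntp.org iburst
EOF
# drop the upstream pool servers, then declare the local clock as a stratum-10 source
sed -i '/centos\.pool\.ntp\.org/d' "$CONF"
cat >> "$CONF" <<'EOF'
server 127.127.1.0
fudge 127.127.1.0 stratum 10
EOF
tail -n 2 "$CONF"
```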
### run on ceph02
ntpdate ceph01
## afterwards, run date and check that ceph01 and ceph02 show the same time
date
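Eyeballing `date` output works, but the skew can be quantified in seconds. In this sketch both timestamps are taken locally so the drift prints as (near) zero; on a real node the second one would come from `ssh ceph01 'date +%s'`, as noted in the comment.

```shell
local_ts=$(date +%s)
remote_ts=$(date +%s)   # in practice: remote_ts=$(ssh ceph01 'date +%s')
drift=$(( local_ts - remote_ts ))
if [ "$drift" -lt 0 ]; then drift=$(( -drift )); fi
echo "clock drift: ${drift}s"
```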
- Configure the public Ceph repository on ceph01 and ceph02
rm -rf /etc/yum.repos.d/*
wget -O /etc/yum.repos.d/CentOS-Base.repo http://mirrors.aliyun.com/repo/Centos-7.repo
wget -O /etc/yum.repos.d/epel.repo http://mirrors.aliyun.com/repo/epel-7.repo
vim /etc/yum.repos.d/ceph.repo
[ceph]
name=Ceph packages for $basearch
baseurl=https://mirrors.aliyun.com/ceph/rpm-mimic/el7/$basearch
enabled=1
gpgcheck=1
type=rpm-md
gpgkey=https://mirrors.aliyun.com/ceph/keys/release.asc
priority=1
[ceph-noarch]
name=Ceph noarch packages
baseurl=https://mirrors.aliyun.com/ceph/rpm-mimic/el7/noarch/
enabled=1
gpgcheck=1
type=rpm-md
gpgkey=https://mirrors.aliyun.com/ceph/keys/release.asc
priority=1
[ceph-source]
name=Ceph source packages
baseurl=https://mirrors.aliyun.com/ceph/rpm-mimic/el7/SRPMS/
enabled=1
gpgcheck=1
type=rpm-md
gpgkey=https://mirrors.aliyun.com/ceph/keys/release.asc
priority=1
- With the repos in place, update ceph01 and ceph02
yum update -y
- Install Ceph and the deployment tools on ceph01 and ceph02
yum install ceph -y
yum -y install ceph-deploy
yum -y install python-setuptools
- Create the cluster on ceph01
cd /etc/ceph
ceph-deploy new ceph01 ceph02
- Create the initial monitors from ceph01
cd /etc/ceph
ceph-deploy mon create-initial
- Create the OSDs from ceph01
cd /etc/ceph
ceph-deploy osd create --data /dev/sdb ceph01
ceph-deploy osd create --data /dev/sdb ceph02
- Push the admin keyring from ceph01
ceph-deploy admin ceph01 ceph02
- Fix the permissions on /etc/ceph/ceph.client.admin.keyring on ceph01 and ceph02
## run on both ceph01 and ceph02
chmod +r /etc/ceph/ceph.client.admin.keyring
- Check the cluster status
ceph -s
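When scripting later steps, it helps to gate them on cluster health rather than reading `ceph -s` by eye. This hedged sketch parses a canned status string; on a real node you would populate `status` from `ceph health` as noted in the comment.

```shell
status="HEALTH_OK"   # in practice: status=$(ceph health | awk '{print $1}')
case "$status" in
  HEALTH_OK)   echo "cluster healthy, safe to continue" ;;
  HEALTH_WARN) echo "warnings present, investigate before continuing" ;;
  *)           echo "cluster in error state" >&2 ;;
esac
```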
4. Expanding the cluster (adding ceph03)
- Update the hosts file on all three nodes
192.168.100.10 ceph01
192.168.100.11 ceph02
192.168.100.12 ceph03
- On ceph01, set up passwordless SSH to ceph03
ssh-copy-id ceph03
- Synchronize ceph03's clock
ntpdate ceph01
- Configure the Ceph repository on ceph03
rm -rf /etc/yum.repos.d/*
wget -O /etc/yum.repos.d/CentOS-Base.repo http://mirrors.aliyun.com/repo/Centos-7.repo
wget -O /etc/yum.repos.d/epel.repo http://mirrors.aliyun.com/repo/epel-7.repo
vi /etc/yum.repos.d/ceph.repo
[ceph]
name=Ceph packages for $basearch
baseurl=https://mirrors.aliyun.com/ceph/rpm-mimic/el7/$basearch
enabled=1
gpgcheck=1
type=rpm-md
gpgkey=https://mirrors.aliyun.com/ceph/keys/release.asc
priority=1
[ceph-noarch]
name=Ceph noarch packages
baseurl=https://mirrors.aliyun.com/ceph/rpm-mimic/el7/noarch/
enabled=1
gpgcheck=1
type=rpm-md
gpgkey=https://mirrors.aliyun.com/ceph/keys/release.asc
priority=1
[ceph-source]
name=Ceph source packages
baseurl=https://mirrors.aliyun.com/ceph/rpm-mimic/el7/SRPMS/
enabled=1
gpgcheck=1
type=rpm-md
gpgkey=https://mirrors.aliyun.com/ceph/keys/release.asc
priority=1
- With the repos in place, update ceph03
yum update -y
- Install Ceph and the deployment tools on ceph03
yum install ceph -y
yum -y install ceph-deploy
yum -y install python-setuptools
- On ceph01, edit the configuration file and push it to ceph02 and ceph03
vi /etc/ceph/ceph.conf
### change
mon_initial_members = ceph01, ceph02, ceph03
mon_host = 192.168.100.10,192.168.100.11,192.168.100.12
## add the cluster's public network
public_network = 192.168.100.0/24
### push the configuration to ceph02 and ceph03
ceph-deploy --overwrite-conf admin ceph02 ceph03
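After the push, it is worth confirming that each node's copy actually lists all three monitors before adding daemons. This sketch checks a canned copy of the file; on a real node point `CONF` at /etc/ceph/ceph.conf instead.

```shell
# Verify every expected monitor appears in mon_initial_members.
CONF=/tmp/ceph.conf.check          # real nodes: CONF=/etc/ceph/ceph.conf
cat > "$CONF" <<'EOF'
mon_initial_members = ceph01, ceph02, ceph03
mon_host = 192.168.100.10,192.168.100.11,192.168.100.12
public_network = 192.168.100.0/24
EOF
for m in ceph01 ceph02 ceph03; do
  grep -q "mon_initial_members.*$m" "$CONF" && echo "$m: listed as monitor"
done
```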
- Add the new osd and mon
## on ceph03, fix the permissions of the pushed keyring
chmod +r /etc/ceph/ceph.client.admin.keyring
### on ceph01
ceph-deploy osd create --data /dev/sdb ceph03 ##add the osd
ceph-deploy mon add ceph03 ##add the mon
- When the expansion finishes, check the cluster
ceph -s
5. OSD data recovery
- View OSD information
ceph osd tree
- Simulate deleting osd.2
ceph osd out osd.2
ceph osd crush remove osd.2
ceph auth del osd.2 ##delete osd.2's authentication key
systemctl restart ceph-osd.target ##restart on ceph03
ceph osd rm osd.2 ##remove it completely
- After the deletion, view OSD information
ceph osd tree
- Recover the deleted osd.2
On ceph03 (because osd.2 lives on ceph03):
df -hT ##check the osd mounts
tmpfs tmpfs 3.9G 52K 3.9G 1% /var/lib/ceph/osd/ceph-2
cd /var/lib/ceph/osd/ceph-2
more fsid ###read the osd's fsid
490cb174-2126-4e00-818e-b395c761fdde
##perform the recovery
ceph osd create 490cb174-2126-4e00-818e-b395c761fdde
ceph auth add osd.2 osd 'allow *' mon 'allow rwx' -i /var/lib/ceph/osd/ceph-2/keyring
ceph osd crush add 2 0.99899 host=ceph03
ceph osd in osd.2
- After recovery, restart the service on ceph03 and view OSD information
systemctl restart ceph-osd.target
ceph osd tree
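The recovery steps above can be bundled into one script. This is a dry-run sketch that echoes each command instead of running it; the fsid and CRUSH weight are the values from this lab, and on another cluster you would read the fsid from the OSD's data directory as shown in the comment.

```shell
OSD_ID=2
FSID="490cb174-2126-4e00-818e-b395c761fdde"  # real run: FSID=$(cat /var/lib/ceph/osd/ceph-$OSD_ID/fsid)
HOST=ceph03
WEIGHT=0.99899
for cmd in \
  "ceph osd create $FSID" \
  "ceph auth add osd.$OSD_ID osd 'allow *' mon 'allow rwx' -i /var/lib/ceph/osd/ceph-$OSD_ID/keyring" \
  "ceph osd crush add $OSD_ID $WEIGHT host=$HOST" \
  "ceph osd in osd.$OSD_ID" \
  "systemctl restart ceph-osd.target"
do
  echo "$cmd"   # drop the echo to execute for real
done
```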
6. Common Ceph commands
Create the mgr service
ceph-deploy mgr create ceph01 ceph02 ceph03
Create pools
ceph osd pool create cinder 64 ##create the cinder pool with 64 placement groups
ceph osd pool create nova 64
ceph osd pool create glance 64
##list the pools
ceph osd pool ls
cinder
nova
glance
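Note that the 64 passed to `ceph osd pool create` is the placement-group count, not a size in gigabytes. A common sizing heuristic (a rule of thumb, not something this lab prescribes) is total PGs ≈ OSDs × 100 / replica count, rounded up to a power of two:

```shell
# PG-count heuristic for the whole cluster, assuming 3 OSDs and 3 replicas.
OSDS=3
REPLICAS=3
target=$(( OSDS * 100 / REPLICAS ))
pg=1
while [ "$pg" -lt "$target" ]; do pg=$(( pg * 2 )); done
echo "suggested total pg_num across pools: $pg"
```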
Delete a pool
On ceph01:
vi /etc/ceph/ceph.conf ###grant deletion permission
mon_allow_pool_delete = true
ceph-deploy --overwrite-conf admin ceph02 ceph03 ##push the configuration to the other nodes
systemctl restart ceph-mon.target ###restart the mon on all three nodes
ceph osd pool rm cinder cinder --yes-i-really-really-mean-it ##delete cinder
Rename a pool
ceph osd pool rename cinder cinder01 ##rename cinder to cinder01
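Because pool deletion is irreversible, a small wrapper that demands the pool name twice (mirroring the safeguard in the `ceph osd pool rm` syntax above) can prevent slips. This sketch is a dry run: the function prints the delete command rather than executing it.

```shell
# Print the delete command only when the caller confirms the pool name twice.
delete_pool() {
  pool="$1"; confirm="$2"
  if [ "$pool" = "$confirm" ]; then
    echo "ceph osd pool rm $pool $pool --yes-i-really-really-mean-it"
  else
    echo "pool name mismatch; refusing to delete" >&2
    return 1
  fi
}
delete_pool cinder cinder
```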