Lab Notes: Deploying Ceph Distributed Storage
一、Lab Environment
| | Deploy | Node1 | Node2 | Node3 | Client |
|---|---|---|---|---|---|
| Hostname | deploy.ceph.local | node1.ceph.local | node2.ceph.local | node3.ceph.local | client.ceph.local |
| CPU | 2C | 2C | 2C | 2C | 2C |
| Memory | 4GB | 4GB | 4GB | 4GB | 2GB |
| Disk | 32G | 32G+3*20G | 32G+3*20G | 32G+3*20G | 32G |
| NIC | Nic1: 192.168.0.10 | Nic1: 192.168.0.11<br>Nic2: 10.0.0.11 | Nic1: 192.168.0.12<br>Nic2: 10.0.0.12 | Nic1: 192.168.0.13<br>Nic2: 10.0.0.13 | Nic1: 192.168.0.100 |
All nodes run a minimal installation of CentOS 7.x.
二、Building the Ceph Storage Cluster
1、Basic System Configuration
Unless stated otherwise, the following steps are executed on all nodes.
(1) Install basic packages
yum -y install wget vim
(2) Add software repositories
mkdir /etc/yum.repos.d/bak
mv /etc/yum.repos.d/*.repo /etc/yum.repos.d/bak
wget -O /etc/yum.repos.d/CentOS-Base.repo http://mirrors.aliyun.com/repo/Centos-7.repo
wget -O /etc/yum.repos.d/epel.repo http://mirrors.aliyun.com/repo/epel-7.repo
vim /etc/yum.repos.d/ceph.repo
[ceph]
name=ceph
baseurl=http://mirrors.aliyun.com/ceph/rpm-nautilus/el7/x86_64/
gpgcheck=0
priority=1
[ceph-noarch]
name=cephnoarch
baseurl=http://mirrors.aliyun.com/ceph/rpm-nautilus/el7/noarch/
gpgcheck=0
priority=1
[ceph-source]
name=Ceph source packages
baseurl=http://mirrors.aliyun.com/ceph/rpm-nautilus/el7/SRPMS
gpgcheck=0
priority=1
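After adding the repositories, the yum metadata cache can optionally be rebuilt so the new repos take effect immediately (a routine extra step, not part of the original notes):
yum clean all
yum makecache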
(3) Update the system
yum -y update
systemctl reboot
(4) Disable the firewall and SELinux
systemctl stop firewalld
systemctl disable firewalld
setenforce 0
sed -i 's/SELINUX=enforcing/SELINUX=disabled/g' /etc/selinux/config
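A quick way to confirm that the firewall is stopped and SELinux is no longer enforcing for the current boot (optional verification, not in the original steps):
systemctl is-active firewalld   # should report inactive
getenforce                      # should report Permissive (Disabled after the next reboot)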
(5) Edit the hosts file
Deploy node:
vim /etc/hosts
192.168.0.10 deploy.ceph.local
192.168.0.11 node1.ceph.local
192.168.0.12 node2.ceph.local
192.168.0.13 node3.ceph.local
ping deploy.ceph.local -c 1
ping node1.ceph.local -c 1
ping node2.ceph.local -c 1
ping node3.ceph.local -c 1
(6) Configure passwordless SSH (mutual trust)
Deploy node:
ssh-keygen
for host in deploy.ceph.local node1.ceph.local node2.ceph.local node3.ceph.local; do ssh-copy-id -i ~/.ssh/id_rsa.pub $host; done
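To confirm that passwordless login works, the same host list can be looped over again (an optional check using the hosts defined above):
for host in deploy.ceph.local node1.ceph.local node2.ceph.local node3.ceph.local; do ssh $host hostname; done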
(7) Configure NTP
Node hosts:
yum -y install ntp ntpdate ntp-doc
systemctl start ntpd
systemctl status ntpd
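To keep time synchronized across reboots, ntpd can also be enabled at boot and its peer status checked (a common follow-up, not part of the original notes):
systemctl enable ntpd
ntpq -p    # lists NTP peers and sync state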
2、Creating the Ceph Storage Cluster
Deploy node:
(1) Install ceph-deploy
yum -y install ceph-deploy python-setuptools
(2) Create a working directory
Create a directory to hold the configuration files and keyrings generated by ceph-deploy.
mkdir cluster
cd cluster/
(3) Create the cluster
ceph-deploy new deploy.ceph.local
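If the command succeeds, the working directory should now contain the initial configuration file, the monitor keyring, and a deployment log; listing it is a quick sanity check:
ls
# expected: ceph.conf  ceph-deploy-ceph.log  ceph.mon.keyring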
(4) Modify the configuration file
Specify the public (front-end) and cluster (back-end) networks; these options belong in the [global] section:
vim ceph.conf
public network = 192.168.0.0/24
cluster network = 10.0.0.0/24
[mon]
mon allow pool delete = true
If the configuration file is changed later, push it to every node again and restart the affected daemons (see the example after the push commands below):
ceph-deploy --overwrite-conf config push deploy.ceph.local
ceph-deploy --overwrite-conf config push node1.ceph.local
ceph-deploy --overwrite-conf config push node2.ceph.local
ceph-deploy --overwrite-conf config push node3.ceph.local
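The restart mentioned above can be done through the systemd targets on each affected node (a sketch assuming systemd-managed Nautilus daemons; restart only the daemon types whose options changed):
systemctl restart ceph-mon.target   # on monitor hosts
systemctl restart ceph-osd.target   # on OSD hosts
systemctl restart ceph-mgr.target   # on manager hosts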
(5) Install Ceph
ceph-deploy install deploy.ceph.local
ceph-deploy install node1.ceph.local
ceph-deploy install node2.ceph.local
ceph-deploy install node3.ceph.local
(6) Initialize the MON node
ceph-deploy mon create-initial
(7) Collect keys
ceph-deploy gatherkeys deploy.ceph.local
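After the keys are gathered, the admin keyring is available in the working directory, so the monitor status can be checked from there by pointing the ceph CLI at the local files (a quick check under that assumption):
ceph -c ceph.conf -k ceph.client.admin.keyring -s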
(8) List the node disks
ceph-deploy disk list node1.ceph.local
ceph-deploy disk list node2.ceph.local
ceph-deploy disk list node3.ceph.local
(9) Zap (wipe) the node data disks
ceph-deploy disk zap node1.ceph.local /dev/sdb
ceph-deploy disk zap node1.ceph.local /dev/sdc
ceph-deploy disk zap node1.ceph.local /dev/sdd
ceph-deploy disk zap node2.ceph.local /dev/sdb
ceph-deploy disk zap node2.ceph.local /dev/sdc
ceph-deploy disk zap node2.ceph.local /dev/sdd
ceph-deploy disk zap node3.ceph.local /dev/sdb
ceph-deploy disk zap node3.ceph.local /dev/sdc
ceph-deploy disk zap node3.ceph.local /dev/sdd
(10) Create the OSDs
ceph-deploy osd create node1.ceph.local --data /dev/sdb
ceph-deploy osd create node1.ceph.local --data /dev/sdc
ceph-deploy osd create node1.ceph.local --data /dev/sdd
ceph-deploy osd create node2.ceph.local --data /dev/sdb
ceph-deploy osd create node2.ceph.local --data /dev/sdc
ceph-deploy osd create node2.ceph.local --data /dev/sdd
ceph-deploy osd create node3.ceph.local --data /dev/sdb
ceph-deploy osd create node3.ceph.local --data /dev/sdc
ceph-deploy osd create node3.ceph.local --data /dev/sdd
View the disk and partition information
ceph-deploy disk list node1.ceph.local
ceph-deploy disk list node2.ceph.local
ceph-deploy disk list node3.ceph.local
lsblk
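Beyond the disk listing, the OSDs themselves can be verified from the working directory on the Deploy node (same assumption as above about the local ceph.conf and admin keyring):
ceph -c ceph.conf -k ceph.client.admin.keyring osd tree    # expect 9 OSDs, 3 per node, all up/in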
(11) Copy the configuration and keys
Copy the configuration file and admin keyring to the Ceph nodes.
ceph-deploy admin node1.ceph.local node2.ceph.local node3.ceph.local
(12) Initialize the MGR nodes
ceph-deploy mgr create node1.ceph.local
ceph-deploy mgr create node2.ceph.local
ceph-deploy mgr create node3.ceph.local
cp ~/cluster/*.keyr
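Because step (11) pushed the configuration and admin keyring to the Node hosts, overall cluster health, including the active and standby managers, can be checked directly on any of them (a final sanity check):
ssh node1.ceph.local ceph -s    # expect 1 mon, 3 mgr (1 active, 2 standby), 9 osd up/in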