Environment
ceph-1 | 192.168.1.120 | deploy, mon*1, osd*3
ceph-2 | 192.168.1.121 | deploy, mon*1, osd*3
Hardware environment
Operating system: CentOS 7.3
Software environment
OpenStack: Ocata
Ceph: Jewel
Installing Ceph
1: Prepare the repos
yum clean all
rm -rf /etc/yum.repos.d/*.repo
wget -O /etc/yum.repos.d/CentOS-Base.repo http://mirrors.aliyun.com/repo/Centos-7.repo
wget -O /etc/yum.repos.d/epel.repo http://mirrors.aliyun.com/repo/epel-7.repo
sed -i '/aliyuncs/d' /etc/yum.repos.d/CentOS-Base.repo
sed -i '/aliyuncs/d' /etc/yum.repos.d/epel.repo
sed -i 's/$releasever/7/g' /etc/yum.repos.d/CentOS-Base.repo
vi /etc/yum.repos.d/ceph.repo
[ceph]
name=ceph
baseurl=http://mirrors.163.com/ceph/rpm-jewel/el7/x86_64/
gpgcheck=0
[ceph-noarch]
name=cephnoarch
baseurl=http://mirrors.163.com/ceph/rpm-jewel/el7/noarch/
gpgcheck=0
yum update -y
2: Operating system configuration
Open the Ceph monitor and OSD ports in the firewall
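The heading above names the ports but gives no commands. A minimal sketch, assuming the CentOS 7 default firewalld is in use: the monitor listens on 6789/tcp and OSD daemons use the 6800-7300/tcp range. Run this on every node.

```shell
# Sketch (assumes firewalld): open the Ceph monitor port (6789/tcp)
# and the OSD port range (6800-7300/tcp).
for port in 6789/tcp 6800-7300/tcp; do
    if command -v firewall-cmd >/dev/null 2>&1; then
        firewall-cmd --zone=public --permanent --add-port="$port" \
            || echo "failed to add $port"
    else
        echo "firewall-cmd not available; skipping $port"
    fi
done
# Apply the permanent rules to the running firewall, if present.
command -v firewall-cmd >/dev/null 2>&1 && firewall-cmd --reload || true
```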
Disable SELinux (note: setenforce 0 only lasts until the next reboot)
setenforce 0
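setenforce 0 only changes the running system; to keep SELinux permissive after a reboot, the config file has to change too. A sketch, assuming the stock /etc/selinux/config layout on CentOS 7:

```shell
# Make the SELinux change persistent across reboots
# (no-op if the file is absent or not set to enforcing).
if [ -f /etc/selinux/config ]; then
    sed -i 's/^SELINUX=enforcing/SELINUX=permissive/' /etc/selinux/config
fi
```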
Install NTP
yum install ntp ntpdate -y
systemctl restart ntpdate.service
systemctl restart ntpd.service
systemctl enable ntpd.service ntpdate.service
Passwordless SSH configuration
Configure passwordless SSH access between the deploy node and the other Ceph nodes:
sudo su -
ssh-keygen
ssh-copy-id ceph-1
ssh-copy-id ceph-2
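The steps above can be sketched as one loop over the node list from the environment table (the host names are taken from that table; the ConnectTimeout option is an added safety flag, not in the original):

```shell
# Generate a key once (if missing) and push it to every cluster node.
mkdir -p ~/.ssh
[ -f ~/.ssh/id_rsa ] || ssh-keygen -t rsa -N '' -f ~/.ssh/id_rsa
for host in ceph-1 ceph-2; do
    ssh-copy-id -o ConnectTimeout=5 "$host" \
        || echo "could not copy key to $host"
done
```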
3: Deploy the Ceph cluster
Install ceph-deploy
yum install ceph-deploy -y
Create the Ceph cluster with ceph-deploy
mkdir /etc/ceph
cd /etc/ceph
ceph-deploy new ceph-1
Install the Ceph binary packages
ceph-deploy install --no-adjust-repos ceph-1
Edit the Ceph configuration file
[global]
fsid = 7bac6963-0e1d-4cea-9e2e-f02bbae96ba7
mon_initial_members = ceph-1
mon_host = 192.168.1.120
auth_cluster_required = cephx
auth_service_required = cephx
auth_client_required = cephx
public network = 192.168.1.0/24
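Not part of the original configuration: on a cluster this small, with all OSDs on one or two hosts, the default replication settings (size 3, replicas split across hosts) can leave placement groups undersized. The following optional [global] additions are a common workaround for test clusters; treat them as an assumption, not the author's config.

```ini
# Optional, for a small test cluster only (assumption, not in the
# original walkthrough): fewer replicas, and allow CRUSH to place
# replicas on the same host.
osd pool default size = 2
osd pool default min size = 1
osd crush chooseleaf type = 0
```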
Create the first Ceph monitor on ceph-1
ceph-deploy mon create-initial
Create the OSDs on ceph-1
ceph-deploy disk list ceph-1    # list the disks
ceph-deploy disk zap ceph-1:sdb ceph-1:sdc ceph-1:sdd
ceph-deploy osd create ceph-1:sdb ceph-1:sdc ceph-1:sdd
Summary
With the steps above, an all-in-one Ceph cluster has been deployed. Check its status:
ceph -s