System Architect from Beginner to Master 1.0: Ceph Cluster Deployment, the Smooth, No-Errors Edition

Background

Why study building a Ceph cluster?

  • First, commercial storage arrays are very expensive.
  • Second, as new servers move into the machine room, the old ones can be refitted and reused.
  • Third, data stays on our own servers, which improves data safety.

Lab Environment

This lab uses three virtual machines created in VMware Workstation.

  • Specs: 1 vCPU, 1 GB RAM, four 20 GB disks, and two NICs (one host-only + one NAT)
  • Networking: the host-only NIC gets only a static IP + netmask; the NAT NIC just uses DHCP (a sample ifcfg sketch follows this list)
  • Disks: sda is the OS disk; sdb/sdc/sdd are the ceph data disks
  • OS: CentOS 7.8 x86_64
  • Ceph version: 14.2.16 nautilus (stable)
  • ceph-deploy version: 2.0.1
  • python-pip version: python2-pip-8.1.2-14.el7.noarch
  • IP plan: ceph-node1 192.168.0.1/10.0.0.1, ceph-node2 192.168.0.2/10.0.0.2, ceph-node3 192.168.0.3/10.0.0.3
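
For reference, a minimal sketch of the static NIC configuration on ceph-node1 (the interface name ens33 and which subnet sits on which NIC are my assumptions, not from the original; check yours with `ip addr`):

# /etc/sysconfig/network-scripts/ifcfg-ens33   (ens33 is an assumed name)
TYPE=Ethernet
BOOTPROTO=none        # static addressing; the NAT NIC keeps BOOTPROTO=dhcp
NAME=ens33
DEVICE=ens33
ONBOOT=yes
IPADDR=10.0.0.1       # substitute each node's address from the IP plan above
PREFIX=24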

Procedure

Part 1: Create the ceph cluster and deploy the first node

1.1 Disable the firewalld firewall and SELinux
systemctl stop firewalld && systemctl disable firewalld &> /dev/null
setenforce 0
cat > /etc/selinux/config << EOF
SELINUX=disabled
SELINUXTYPE=targeted
EOF
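
To confirm the change took effect (a quick check of my own, not in the original):

getenforce                          # Permissive now; Disabled after the next reboot
grep ^SELINUX= /etc/selinux/config  # should print SELINUX=disabled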

1.2 Configure hostname resolution
cat > /etc/hosts << EOF
10.0.0.1 ceph-node1
10.0.0.2 ceph-node2
10.0.0.3 ceph-node3
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
EOF
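
A one-liner to confirm every name resolves and answers (my addition; it assumes the other two VMs are already online, and -W1 caps the wait at one second):

for h in ceph-node1 ceph-node2 ceph-node3; do ping -c1 -W1 $h; done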

1.3 Point all nodes at domestic (Aliyun) yum mirrors and add the EPEL repository
1.3.1 Back up CentOS-Base.repo
mv /etc/yum.repos.d/CentOS-Base.repo /etc/yum.repos.d/CentOS-Base.repo.backup
1.3.2 Download the new repo file
wget -O /etc/yum.repos.d/CentOS-Base.repo http://mirrors.aliyun.com/repo/Centos-7.repo
1.3.3 Add the EPEL repo
wget -O /etc/yum.repos.d/epel.repo http://mirrors.aliyun.com/repo/epel-7.repo
1.3.4 Write the ceph.repo file by hand
cat > /etc/yum.repos.d/ceph.repo << EOF
[ceph]
name=ceph
baseurl=http://mirrors.aliyun.com/ceph/rpm-nautilus/el7/x86_64/
gpgcheck=0
priority=1
[ceph-noarch]
name=cephnoarch
baseurl=http://mirrors.aliyun.com/ceph/rpm-nautilus/el7/noarch/
gpgcheck=0
priority=1
[ceph-source]
name=Ceph source packages
baseurl=http://mirrors.aliyun.com/ceph/rpm-nautilus/el7/SRPMS
gpgcheck=0
priority=1
EOF
1.3.5 Clean the yum cache
yum clean all
1.3.6 Rebuild the local yum cache
yum makecache
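
The nautilus packages should now be visible from the mirror (a quick check, my addition):

yum list ceph ceph-deploy    # both should resolve from the aliyun repos just added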

1.4 Configure NTP
yum install ntp ntpdate -y
ntpdate pool.ntp.org
systemctl restart ntpdate.service
systemctl restart ntpd.service
systemctl enable ntpd.service
systemctl enable ntpdate.service
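
To verify the daemon is actually syncing (my addition; the peer marked with '*' is the selected time source):

ntpq -p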

1.5 Create the ceph cluster
1.5.1 Initialize the cluster
mkdir /etc/ceph
cd /etc/ceph
ceph-deploy new --cluster-network 10.0.0.0/24 --public-network 192.168.0.0/24 ceph-node1
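
ceph-deploy new writes a ceph.conf into the current directory. It should look roughly like the sketch below (my reconstruction, not captured output: the fsid matches the cluster id shown later, mon_host follows the /etc/hosts entry, and exact key order may differ):

[global]
fsid = 231326c7-fb58-40e7-92f4-627c6def0200
mon_initial_members = ceph-node1
mon_host = 10.0.0.1
public_network = 192.168.0.0/24
cluster_network = 10.0.0.0/24
auth_cluster_required = cephx
auth_service_required = cephx
auth_client_required = cephx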
1.5.2 Install the ceph packages
ceph-deploy install ceph-node1
[root@ceph-node1 ceph]# ceph -v
ceph version 14.2.16 (762032d6f509d5e7ee7dc008d80fe9c87086603c) nautilus (stable)
1.5.3 Initialize and create the first monitor
ceph-deploy mon create-initial
1.5.4 Create three OSDs and add them to the cluster
fdisk /dev/sdb
n -> p -> 1 -> Enter -> Enter -> w
mkfs.xfs /dev/sdb1
ceph-deploy osd create ceph-node1 --data /dev/sdb1
fdisk /dev/sdc
n -> p -> 1 -> Enter -> Enter -> w
mkfs.xfs /dev/sdc1
ceph-deploy osd create ceph-node1 --data /dev/sdc1
fdisk /dev/sdd
n -> p -> 1 -> Enter -> Enter -> w
mkfs.xfs /dev/sdd1
ceph-deploy osd create ceph-node1 --data /dev/sdd1
Check the cluster status:
[root@ceph-node1 ceph]# ceph status
  cluster:
    id:     231326c7-fb58-40e7-92f4-627c6def0200
    health: HEALTH_WARN
            no active mgr

  services:
    mon: 1 daemons, quorum ceph-node1 (age 10m)
    mgr: no daemons active
    osd: 3 osds: 3 up (since 43s), 3 in (since 43s)

  data:
    pools:   0 pools, 0 pgs
    objects: 0 objects, 0 B
    usage:   0 B used, 0 B / 0 B avail
    pgs:
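
The HEALTH_WARN is expected at this point: no mgr has been deployed yet (that happens in 1.7). To see the new OSDs in the CRUSH tree (my addition):

ceph osd tree    # osd.0 through osd.2 should appear under ceph-node1, all up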

1.6 Set up passwordless SSH
Run the following on ceph-node1:
ssh-keygen (press Enter three times to accept the defaults)
ssh-copy-id ceph-node1
(type yes, then enter ceph-node1's root password)
ssh-copy-id ceph-node2
(type yes, then enter ceph-node2's root password)
ssh-copy-id ceph-node3
(type yes, then enter ceph-node3's root password)
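
A quick way to confirm the keys landed (my addition):

# each command should print the remote hostname without asking for a password
for h in ceph-node1 ceph-node2 ceph-node3; do ssh $h hostname; done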

1.7 Deploy a Manager
ceph-deploy mgr create ceph-node1
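
Once the mgr is up, the HEALTH_WARN from 1.5.4 should clear. A quick check (my addition):

ceph -s | grep -E 'health|mgr'   # health should now read HEALTH_OK, mgr active on ceph-node1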

Part 2: Add the second node to the ceph cluster

2.1 Disable the firewalld firewall and SELinux
systemctl stop firewalld && systemctl disable firewalld &> /dev/null
setenforce 0
cat > /etc/selinux/config << EOF
SELINUX=disabled
SELINUXTYPE=targeted
EOF

2.2 Configure hostname resolution
cat > /etc/hosts << EOF
10.0.0.1 ceph-node1
10.0.0.2 ceph-node2
10.0.0.3 ceph-node3
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
EOF

2.3 Point all nodes at domestic (Aliyun) yum mirrors and add the EPEL repository
2.3.1 Back up CentOS-Base.repo
mv /etc/yum.repos.d/CentOS-Base.repo /etc/yum.repos.d/CentOS-Base.repo.backup
2.3.2 Download the new repo file
wget -O /etc/yum.repos.d/CentOS-Base.repo http://mirrors.aliyun.com/repo/Centos-7.repo
2.3.3 Add the EPEL repo
wget -O /etc/yum.repos.d/epel.repo http://mirrors.aliyun.com/repo/epel-7.repo
2.3.4 Write the ceph.repo file by hand
cat > /etc/yum.repos.d/ceph.repo << EOF
[ceph]
name=ceph
baseurl=http://mirrors.aliyun.com/ceph/rpm-nautilus/el7/x86_64/
gpgcheck=0
priority=1
[ceph-noarch]
name=cephnoarch
baseurl=http://mirrors.aliyun.com/ceph/rpm-nautilus/el7/noarch/
gpgcheck=0
priority=1
[ceph-source]
name=Ceph source packages
baseurl=http://mirrors.aliyun.com/ceph/rpm-nautilus/el7/SRPMS
gpgcheck=0
priority=1
EOF
2.3.5 Clean the yum cache
yum clean all
2.3.6 Rebuild the local yum cache
yum makecache

2.4 Configure NTP
yum install ntp ntpdate -y
ntpdate pool.ntp.org
systemctl restart ntpdate.service
systemctl restart ntpd.service
systemctl enable ntpd.service
systemctl enable ntpdate.service

2.5 Add ceph-node2's components to the cluster (the ceph-deploy commands run on ceph-node1)
2.5.1 Install the ceph packages
ceph-deploy install ceph-node2
[root@ceph-node1 ceph]# ceph -v
ceph version 14.2.16 (762032d6f509d5e7ee7dc008d80fe9c87086603c) nautilus (stable)
2.5.2 Add the second monitor
ceph-deploy mon add ceph-node2
2.5.3 Create three disk partitions on ceph-node2
fdisk /dev/sdb
n -> p -> 1 -> Enter -> Enter -> w
mkfs.xfs /dev/sdb1
fdisk /dev/sdc
n -> p -> 1 -> Enter -> Enter -> w
mkfs.xfs /dev/sdc1
fdisk /dev/sdd
n -> p -> 1 -> Enter -> Enter -> w
mkfs.xfs /dev/sdd1
2.5.4 Add the three OSDs from ceph-node1
ceph-deploy osd create ceph-node2 --data /dev/sdb1
ceph-deploy osd create ceph-node2 --data /dev/sdc1
ceph-deploy osd create ceph-node2 --data /dev/sdd1
Check the cluster status:
[root@ceph-node1 ceph]# ceph status
  cluster:
    id:     231326c7-fb58-40e7-92f4-627c6def0200
    health: HEALTH_WARN
            no active mgr

  services:
    mon: 2 daemons, quorum ceph-node1,ceph-node2 (age 26m)
    mgr: no daemons active
    osd: 6 osds: 6 up (since 86s), 6 in (since 86s)

  data:
    pools:   0 pools, 0 pgs
    objects: 0 objects, 0 B
    usage:   0 B used, 0 B / 0 B avail
    pgs:
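
A two-monitor quorum cannot survive losing either mon, which is why the third monitor in Part 3 matters. To confirm both mons joined (my addition):

ceph mon stat    # should report 2 mons in quorum: ceph-node1,ceph-node2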

2.6 From ceph-node1, add a Manager on ceph-node2
ceph-deploy mgr create ceph-node2

Part 3: Add the third node to the ceph cluster

3.1 Disable the firewalld firewall and SELinux
systemctl stop firewalld && systemctl disable firewalld &> /dev/null
setenforce 0
cat > /etc/selinux/config << EOF
SELINUX=disabled
SELINUXTYPE=targeted
EOF

3.2 Configure hostname resolution
cat > /etc/hosts << EOF
10.0.0.1 ceph-node1
10.0.0.2 ceph-node2
10.0.0.3 ceph-node3
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
EOF

3.3 Point all nodes at domestic (Aliyun) yum mirrors and add the EPEL repository
3.3.1 Back up CentOS-Base.repo
mv /etc/yum.repos.d/CentOS-Base.repo /etc/yum.repos.d/CentOS-Base.repo.backup
3.3.2 Download the new repo file
wget -O /etc/yum.repos.d/CentOS-Base.repo http://mirrors.aliyun.com/repo/Centos-7.repo
3.3.3 Add the EPEL repo
wget -O /etc/yum.repos.d/epel.repo http://mirrors.aliyun.com/repo/epel-7.repo
3.3.4 Write the ceph.repo file by hand
cat > /etc/yum.repos.d/ceph.repo << EOF
[ceph]
name=ceph
baseurl=http://mirrors.aliyun.com/ceph/rpm-nautilus/el7/x86_64/
gpgcheck=0
priority=1
[ceph-noarch]
name=cephnoarch
baseurl=http://mirrors.aliyun.com/ceph/rpm-nautilus/el7/noarch/
gpgcheck=0
priority=1
[ceph-source]
name=Ceph source packages
baseurl=http://mirrors.aliyun.com/ceph/rpm-nautilus/el7/SRPMS
gpgcheck=0
priority=1
EOF
3.3.5 Clean the yum cache
yum clean all
3.3.6 Rebuild the local yum cache
yum makecache

3.4 Configure NTP
yum install ntp ntpdate -y
ntpdate pool.ntp.org
systemctl restart ntpdate.service
systemctl restart ntpd.service
systemctl enable ntpd.service
systemctl enable ntpdate.service

3.5 Add ceph-node3's components to the cluster (the ceph-deploy commands run on ceph-node1)
3.5.1 Install the ceph packages
ceph-deploy install ceph-node3
[root@ceph-node1 ceph]# ceph -v
ceph version 14.2.16 (762032d6f509d5e7ee7dc008d80fe9c87086603c) nautilus (stable)
3.5.2 Add the third monitor
ceph-deploy mon add ceph-node3
3.5.3 Create three disk partitions on ceph-node3
fdisk /dev/sdb
n -> p -> 1 -> Enter -> Enter -> w
mkfs.xfs /dev/sdb1
fdisk /dev/sdc
n -> p -> 1 -> Enter -> Enter -> w
mkfs.xfs /dev/sdc1
fdisk /dev/sdd
n -> p -> 1 -> Enter -> Enter -> w
mkfs.xfs /dev/sdd1
3.5.4 Add the three OSDs from ceph-node1
ceph-deploy osd create ceph-node3 --data /dev/sdb1
ceph-deploy osd create ceph-node3 --data /dev/sdc1
ceph-deploy osd create ceph-node3 --data /dev/sdd1
Check the cluster status:
[root@ceph-node1 ceph]# ceph status
  cluster:
    id:     231326c7-fb58-40e7-92f4-627c6def0200
    health: HEALTH_WARN
            no active mgr

  services:
    mon: 3 daemons, quorum ceph-node1,ceph-node2,ceph-node3 (age 4m)
    mgr: no daemons active
    osd: 9 osds: 9 up (since 9s), 9 in (since 9s)

  data:
    pools:   0 pools, 0 pgs
    objects: 0 objects, 0 B
    usage:   0 B used, 0 B / 0 B avail
    pgs:

3.6 From ceph-node1, add a Manager on ceph-node3
ceph-deploy mgr create ceph-node3

Part 4: Final state of the finished Ceph cluster

[root@ceph-node1 ceph]# ceph -s
  cluster:
    id:     231326c7-fb58-40e7-92f4-627c6def0200
    health: HEALTH_OK

  services:
    mon: 3 daemons, quorum ceph-node1,ceph-node2,ceph-node3 (age 111m)
    mgr: ceph-node1(active, since 89s), standbys: ceph-node3, ceph-node2
    osd: 9 osds: 9 up (since 111m), 9 in (since 2h)

  data:
    pools:   0 pools, 0 pgs
    objects: 0 objects, 0 B
    usage:   9.1 GiB used, 171 GiB / 180 GiB avail
    pgs:

[root@ceph-node1 ceph]# ceph df
RAW STORAGE:
    CLASS     SIZE        AVAIL       USED       RAW USED     %RAW USED
    hdd       180 GiB     171 GiB     57 MiB     9.1 GiB           5.03
    TOTAL     180 GiB     171 GiB     57 MiB     9.1 GiB           5.03

POOLS:
    POOL     ID     PGS     STORED     OBJECTS     USED     %USED     MAX AVAIL
[root@ceph-node1 ceph]#
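
As an optional smoke test (my addition, not part of the original walkthrough; the pool name and PG count are arbitrary choices for nine small OSDs):

ceph osd pool create testpool 32 32            # 32 placement groups
ceph osd pool application enable testpool rbd  # silences the 'application not enabled' warning
echo hello > /tmp/obj.txt
rados -p testpool put obj1 /tmp/obj.txt        # store one object
rados -p testpool ls                           # should list obj1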

Summary

That wraps up the 3-node ceph cluster. What comes next is deeper study and putting it to practical use; there is still plenty of unknown territory to explore. See you in the next installment.
