Lab environment:
Hostname | IP | Specs |
ceph01 (deploy node) | 192.168.122.41 | 2 vCPU / 4 GB RAM / 20 GB OS disk + 2 x 20 GB data disks |
ceph02 | 192.168.122.42 | 2 vCPU / 4 GB RAM / 20 GB OS disk + 2 x 20 GB data disks |
ceph03 | 192.168.122.43 | 2 vCPU / 4 GB RAM / 20 GB OS disk + 2 x 20 GB data disks |
Lab steps:
1.Basic configuration
On every node: set the hostname, populate /etc/hosts, disable iptables/firewalld, disable SELinux, and configure NTP time sync (details not covered here).
2.Configure the yum repositories
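The basic configuration above can be sketched as follows. This is a minimal sketch for a lab setup, using the hostnames and IPs from the environment table; run the equivalent on every node, changing the hostname per host:

```shell
# Set the hostname (ceph01 here; use ceph02/ceph03 on the other nodes)
hostnamectl set-hostname ceph01

# Name resolution for all cluster nodes
cat >> /etc/hosts <<EOF
192.168.122.41 ceph01
192.168.122.42 ceph02
192.168.122.43 ceph03
EOF

# Disable the firewall and SELinux (acceptable in a lab only)
systemctl disable --now firewalld
setenforce 0
sed -i 's/^SELINUX=enforcing/SELINUX=disabled/' /etc/selinux/config

# Time sync via chrony, the stock NTP client on CentOS 7
yum install -y chrony
systemctl enable --now chronyd
```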
yum install -y https://dl.fedoraproject.org/pub/epel/epel-release-latest-7.noarch.rpm
cat > /etc/yum.repos.d/ceph.repo <<EOF
[ceph]
name=Ceph noarch packages
baseurl=https://download.ceph.com/rpm-octopus/el7/noarch
enabled=1
gpgcheck=1
type=rpm-md
gpgkey=https://download.ceph.com/keys/release.asc
EOF
3.Update the system and reboot
yum upgrade -y
reboot
4.Install ceph-deploy and other base packages
yum install ceph-deploy python-setuptools openssh-server -y
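ceph-deploy drives the other nodes over SSH, so the deploy node needs passwordless SSH to every host before the script in step 6 will work. A minimal sketch, assuming deployment as root (the official guide recommends a dedicated non-root user with passwordless sudo instead):

```shell
# Generate a key pair with no passphrase, then push it to each node
ssh-keygen -t rsa -N '' -f ~/.ssh/id_rsa
for node in ceph01 ceph02 ceph03; do
    ssh-copy-id root@$node
done
```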
5.Remove the ceph.repo file (ceph-deploy writes its own repo file on each node during install)
rm -f /etc/yum.repos.d/ceph.repo
6.Run the following script on the deploy node
cat install_ceph.sh
#!/bin/bash
# Working directory for the generated config files and keyrings
mkdir -p my-cluster
cd my-cluster
# Generate the initial ceph.conf and monitor keyring
ceph-deploy new ceph01 ceph02 ceph03
# Install Ceph packages on every node
ceph-deploy install ceph01 ceph02 ceph03
# Deploy the initial monitors and gather the keys
ceph-deploy mon create-initial
# Push ceph.conf and the admin keyring to all nodes
ceph-deploy admin ceph01 ceph02 ceph03
# Deploy manager daemons
ceph-deploy mgr create ceph01 ceph02 ceph03
# Create one OSD per data disk on each node
for node in ceph01 ceph02 ceph03
do
    for disk in sdb sdc
    do
        ceph-deploy osd create --data /dev/$disk $node
    done
done
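If the data disks have been used before, `osd create` can fail on leftover partition tables or LVM signatures. A hedged pre-check, run from the my-cluster directory before the OSD loop above (destroys all data on sdb/sdc):

```shell
# Show the disks ceph-deploy sees on each node, then wipe the two data disks
for node in ceph01 ceph02 ceph03; do
    ceph-deploy disk list $node
    ceph-deploy disk zap $node /dev/sdb
    ceph-deploy disk zap $node /dev/sdc
done
```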
7.After the deployment succeeds, check the cluster status
[root@ceph01 my-cluster]# ceph -s
  cluster:
    id:     e205ae15-22aa-4a25-b787-7eafe6a60728
    health: HEALTH_OK
  services:
    mon: 3 daemons, quorum ceph01,ceph02,ceph03
    mgr: ceph01(active), standbys: ceph02, ceph03
    osd: 6 osds: 6 up, 6 in
  data:
    pools:   0 pools, 0 pgs
    objects: 0 objects, 0 B
    usage:   6.0 GiB used, 114 GiB / 120 GiB avail
    pgs:
[root@ceph01 my-cluster]# ceph osd tree
ID CLASS WEIGHT TYPE NAME STATUS REWEIGHT PRI-AFF
-1 0.11691 root default
-3 0.03897 host ceph01
0 hdd 0.01949 osd.0 up 1.00000 1.00000
1 hdd 0.01949 osd.1 up 1.00000 1.00000
-5 0.03897 host ceph02
2 hdd 0.01949 osd.2 up 1.00000 1.00000
3 hdd 0.01949 osd.3 up 1.00000 1.00000
-7 0.03897 host ceph03
4 hdd 0.01949 osd.4 up 1.00000 1.00000
5 hdd 0.01949 osd.5 up 1.00000 1.00000
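Beyond `ceph -s`, a quick functional check is to create a small pool and write one object through RADOS. The pool name `test` and the PG count of 32 are arbitrary choices for this lab, not values from the deployment above:

```shell
# Create a replicated pool with 32 placement groups
ceph osd pool create test 32
# Write a file as an object, then list the pool's contents
rados -p test put demo-object /etc/hosts
rados -p test ls
```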
Reference: https://ceph.readthedocs.io/en/latest/install/ceph-deploy/quick-ceph-deploy/