Preface
A colleague on the development team asked me to set up a single-node Ceph 16.2.7 cluster on a physical machine, matching the version the customer runs, for experimentation.
I. Environment and Versions
OS: Ubuntu 20.04.6 LTS
Ceph cluster: v16.2.7
II. Preparing the Base Environment
1. Time synchronization, firewall, and time zone
# Install the time synchronization service
sudo apt install -y chrony && sudo systemctl enable --now chrony
# Stop and disable the firewall
sudo systemctl stop ufw
sudo systemctl disable ufw
# Set the time zone
sudo timedatectl set-timezone Asia/Shanghai
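To confirm that time synchronization and the time zone change took effect, you can check chrony's status and the system clock settings with the standard tools:
# Show whether chrony is synchronized and to which source
chronyc tracking
# Show the configured NTP sources
chronyc sources
# Confirm the time zone and NTP status
timedatectl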
2. Install Docker
# Remove any previous Docker installation and its data
sudo apt-get purge docker-ce docker-ce-cli containerd.io
sudo rm -rf /var/lib/docker
# Add the Aliyun mirror of the Docker CE repository
sudo apt-get update
sudo apt install curl -y
sudo curl -fsSL http://mirrors.aliyun.com/docker-ce/linux/ubuntu/gpg | sudo apt-key add -
sudo add-apt-repository "deb [arch=amd64] http://mirrors.aliyun.com/docker-ce/linux/ubuntu $(lsb_release -cs) stable"
# Install Docker CE
sudo apt-get -y update
sudo apt-get -y upgrade
sudo apt-get -y install docker-ce docker-ce-cli containerd.io
# Confirm the service is running
sudo systemctl status docker
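A quick sanity check that the Docker daemon is up and the client can reach it (plain Docker commands, independent of Ceph):
# Client and server versions; the Server section only appears if the daemon is reachable
sudo docker version
# Daemon details such as the storage driver and cgroup driver
sudo docker info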
3. Install LVM
sudo apt install lvm2 -y
III. Preparing for Deployment
1. Obtain the cephadm binary
If your network is restricted and the download below fails, I have uploaded the file as a resource that can be downloaded directly.
cephadm resource download: https://download.csdn.net/download/baidu_35848778/89680096
curl --silent --remote-name --location https://github.com/ceph/ceph/raw/pacific/src/cephadm/cephadm
# Make it executable
chmod +x cephadm
# Add the repository for the Ceph version to be installed
./cephadm add-repo --version 16.2.7
./cephadm install
# Check the version
cephadm version
# Output
# root@ubuntu:~# cephadm version
# Using recent ceph image quay.io/ceph/ceph@sha256:f15b41add2c01a65229b0db515d2dd57925636ea39678ccc682a49e2e9713d98
# ceph version 16.2.15 (618f440892089921c3e944a991122ddc44e60516) pacific (stable)
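Note that cephadm version above reports the version of cephadm's default container image (16.2.15 here), which is not necessarily the version the cluster will end up running. Assuming the global --image option also applies to the version subcommand, you can ask cephadm to report against the 16.2.7 image instead:
# Should report 16.2.7 by running ceph --version inside the specified image (this is an assumption, not from the original post)
cephadm --image quay.io/ceph/ceph:v16.2.7 version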
2. Pull the required images
Because of registry accessibility problems, you can either pull from a domestic mirror or pull the images on a cloud host with unrestricted internet access and transfer them over (see the sketch after the pull commands below).
docker pull quay.io/ceph/ceph:v16
docker pull quay.io/ceph/ceph:v16.2.7
docker pull quay.io/ceph/ceph-grafana:8.3.5
docker pull quay.io/prometheus/prometheus:v2.33.4
docker pull quay.io/prometheus/node-exporter:v1.3.1
docker pull quay.io/prometheus/alertmanager:v0.23.0
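If the deployment host itself cannot reach quay.io, one option, as mentioned above, is to pull the images elsewhere, export them, and load them on the target host; a sketch with a hypothetical archive name:
# On the machine with internet access: export all required images into one archive
docker save quay.io/ceph/ceph:v16 quay.io/ceph/ceph:v16.2.7 quay.io/ceph/ceph-grafana:8.3.5 \
  quay.io/prometheus/prometheus:v2.33.4 quay.io/prometheus/node-exporter:v1.3.1 \
  quay.io/prometheus/alertmanager:v0.23.0 -o ceph-16.2.7-images.tar
# Copy the archive to the deployment host (scp, USB, etc.), then load it
docker load -i ceph-16.2.7-images.tar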
IV. Deployment
1. Bootstrap the cluster
# Check that the host is ready for cephadm
cephadm prepare-host
# Bootstrap the cluster with the first mon and mgr on this host
cephadm bootstrap --mon-ip 172.16.112.50 --cluster-network 172.16.112.0/21 --single-host-defaults
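As noted earlier, cephadm defaults to the newest pacific image (16.2.15). If you want the bootstrapped cluster to actually run 16.2.7, the image can be pinned with cephadm's global --image option; a sketch reusing the same parameters as the command above, as an alternative to it:
cephadm --image quay.io/ceph/ceph:v16.2.7 bootstrap --mon-ip 172.16.112.50 --cluster-network 172.16.112.0/21 --single-host-defaults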
# Install the client tools
sudo apt install ceph-common -y
# Set the number of mon daemons
ceph orch apply mon 1
# Set the number of mgr daemons
ceph orch apply mgr 1
# List the orchestrator services
ceph orch ls
# List the hosts in the cluster
ceph orch host ls
# Output
# root@ubuntu:~# ceph orch host ls
# HOST ADDR LABELS STATUS
# ubuntu 172.16.112.50 _admin
# 1 hosts in cluster
# List the devices on a host
ceph orch device ls ubuntu
# Output (I ran this after deployment, so the devices are no longer AVAILABLE)
# root@ubuntu:~# ceph orch device ls ubuntu
# HOST PATH TYPE DEVICE ID SIZE AVAILABLE REFRESHED REJECT REASONS
# ubuntu /dev/sdb ssd SAMSUNG_MZ7LH3T8_S456NC0T819104 3576G 10m ago Has a FileSystem, Insufficient space (<10 extents) on vgs, LVM detected
# ubuntu /dev/sdc ssd SAMSUNG_MZ7LH3T8_S456NC0T819105 3576G 10m ago Has a FileSystem, Insufficient space (<10 extents) on vgs, LVM detected
# ubuntu /dev/sdd ssd SAMSUNG_MZ7LH3T8_S456NC0T819096 3576G 10m ago Has a FileSystem, Insufficient space (<10 extents) on vgs, LVM detected
# ubuntu /dev/sde ssd SAMSUNG_MZ7LH3T8_S456NC0T819103 3576G 10m ago Has a FileSystem, Insufficient space (<10 extents) on vgs, LVM detected
# ubuntu /dev/sdf ssd SAMSUNG_MZ7LH3T8_S456NC0T819099 3576G 10m ago Has a FileSystem, Insufficient space (<10 extents) on vgs, LVM detected
# ubuntu /dev/sdg ssd SAMSUNG_MZ7LH3T8_S456NC0T819100 3576G 10m ago Has a FileSystem, Insufficient space (<10 extents) on vgs, LVM detected
# Zap (wipe) a disk on the host
# ceph orch device zap <hostname> <path> [--force]
ceph orch device zap ubuntu /dev/sdX --force
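Since this host has six data disks (sdb through sdg, per the device listing above), a small loop can zap them all; adjust the device range and hostname to your own hardware:
# Wipe each of the six data disks listed earlier (destroys all data on them)
for dev in /dev/sd{b..g}; do
  ceph orch device zap ubuntu "$dev" --force
done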
# Deploy OSDs on all available devices
ceph orch apply osd --all-available-devices
# Check the deployment result
ceph osd status
# Output
# root@ubuntu:~# ceph osd status
# ID HOST USED AVAIL WR OPS WR DATA RD OPS RD DATA STATE
# 0 ubuntu 290M 3576G 0 0 0 0 exists,up
# 1 ubuntu 310M 3576G 0 0 0 0 exists,up
# 2 ubuntu 290M 3576G 0 0 0 0 exists,up
# 3 ubuntu 292M 3576G 0 0 0 0 exists,up
# 4 ubuntu 304M 3576G 0 0 0 0 exists,up
# 5 ubuntu 290M 3576G 0 0 0 0 exists,up
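Besides ceph osd status, the overall cluster state and the CRUSH layout can be checked with the usual commands:
# Overall health, daemon counts, and capacity
ceph -s
# Per-pool and raw usage
ceph df
# All OSDs should appear under the single host "ubuntu"
ceph osd tree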
2. Create a CephFS file system
ceph osd pool create cephfs_data 32 32
ceph osd pool create cephfs_metadata 32 32
ceph fs new cephfs cephfs_metadata cephfs_data
# Deploy the MDS daemons
# ceph orch apply mds <fs_name> --placement="NUMBER_OF_DAEMONS HOST_NAME_1 HOST_NAME_2 HOST_NAME_3"
ceph orch apply mds cephfs --placement="1 ubuntu"
# Check the result
ceph fs ls
# Output
# root@ubuntu:~# ceph fs ls
# name: cephfs, metadata pool: cephfs_metadata, data pools: [cephfs_data ]
ceph fs status
# Output
# root@ceph:~# ceph fs status
# cephfs - 0 clients
# ======
# RANK STATE MDS ACTIVITY DNS INOS DIRS CAPS
# 0 active cephfs.ubuntu.vfywjb Reqs: 0 /s 68 40 37 0
# POOL TYPE USED AVAIL
# cephfs_metadata metadata 10.3M 9.95T
# cephfs_data data 8192 9.95T
# MDS version: ceph version 16.2.15 (618f440892089921c3e944a991122ddc44e60516) pacific (stable)
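To verify the file system from this same host, it can be mounted with the kernel client that comes with the ceph-common package installed earlier; the mount point and the use of client.admin here are only for a quick test:
# Create a mount point (path is only an example)
sudo mkdir -p /mnt/cephfs
# Print the admin key; substitute it for <admin-key> below
ceph auth get-key client.admin
# Mount CephFS using the mon address from the bootstrap step
sudo mount -t ceph 172.16.112.50:6789:/ /mnt/cephfs -o name=admin,secret=<admin-key>
df -h /mnt/cephfs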
Summary
This post records the deployment of a single-node cluster of a specific Ceph version (16.2.7) using the cephadm tool.
Finally, a screenshot to serve as the cover image.