Deploying Ceph Nautilus with ceph-deploy

Reference: https://www.e-learn.cn/topic/3678897

Introduction to Ceph

Ceph basics

Ceph is a reliable, self-rebalancing, self-healing distributed storage system. By use case it is usually divided into three parts: object storage (RGW), block storage (RBD) and the file system service (CephFS). Ceph's advantage over other storage systems is that it is not just storage: it also uses the compute power of the storage nodes. The location of every piece of data is calculated at write time, so data is spread as evenly as possible, and thanks to its design, which relies on the CRUSH algorithm and hashing, Ceph has no traditional single point of failure and its performance does not degrade as the cluster grows.

Ceph core components

The core Ceph components are Ceph OSD, Ceph Monitor and Ceph MDS.

Ceph OSD: OSD stands for Object Storage Device. Its main jobs are storing, replicating, rebalancing and recovering data, exchanging heartbeats with other OSDs, and reporting state changes to the Ceph Monitor. Normally one disk is managed by one OSD, although a single partition can also back an OSD.

A Ceph OSD is built from a physical disk drive, a Linux filesystem and the Ceph OSD service. For the Ceph OSD daemon, the Linux filesystem determines much of its capability. Common choices are BTRFS, XFS and Ext4; BTRFS has many attractive features but has not reached production-level stability, so XFS is generally recommended.

Alongside the OSD there is the concept of a journal disk. Data written to the Ceph cluster is first written to the journal and then, every few seconds (for example 5 s), flushed from the journal to the filesystem. To keep latency low the journal is usually placed on an SSD, typically 10 GB or larger (more is better). The journal exists because it lets a Ceph OSD commit small writes quickly: a random write first lands sequentially in the journal and is flushed to the filesystem later, which gives the filesystem time to merge writes to disk. Using an SSD journal is an effective buffer against bursty workloads.

Ceph Monitor: as the name suggests, it monitors the Ceph cluster and maintains its health state. It also maintains the cluster's maps, such as the OSD Map, Monitor Map, PG Map and CRUSH Map, collectively called the Cluster Map. The Cluster Map is the key RADOS data structure; it records all cluster members, their relationships and attributes, and governs data placement. For example, when data is stored into the cluster, the latest maps are first fetched from a monitor, and the final storage location is then computed from the maps and the object id.

Ceph MDS: the Ceph MetaData Server, which holds the metadata for the file system service; object storage and block devices do not need it.

The various maps can be inspected with commands of the form: ceph osd (mon, pg) dump
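
For example, on a node that has the admin keyring in /etc/ceph:

ceph osd dump
ceph mon dump
ceph pg dump | head -n 20    # the PG map is large, so only the first lines are shown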

Cluster plan

Hostname   IP address      Roles                        Hardware
ceph1      192.168.2.30    ceph-deploy, mon, mgr, osd   1 x 500 GB data disk
ceph2      192.168.2.31    mon, mgr, osd                1 x 500 GB data disk
ceph3      192.168.2.32    mon, mgr, osd                1 x 500 GB data disk

System initialization

Update the kernel

Import the public key of the ELRepo repository

rpm --import https://www.elrepo.org/RPM-GPG-KEY-elrepo.org

Install the ELRepo yum repository

rpm -Uvh http://www.elrepo.org/elrepo-release-7.0-3.el7.elrepo.noarch.rpm

Enable the elrepo-kernel repository and install the latest mainline kernel (kernel-ml)

yum --enablerepo=elrepo-kernel install kernel-ml -y

List the available kernels and set the default boot entry

sudo awk -F\' '$1=="menuentry " {print i++ " : " $2}' /etc/grub2.cfg

The machine now has more than one kernel installed; to boot the newest one, set it as the default with grub2-set-default 0 and regenerate the grub configuration.

grub2-set-default 0   #the first entry of the boot menu (index 0) becomes the default kernel
grub2-mkconfig -o /boot/grub2/grub.cfg  #regenerate the grub configuration

Reboot and verify the running kernel

reboot
uname -r

Remove the old kernels

yum -y remove kernel kernel-tools

Disable the firewall

systemctl stop firewalld && systemctl disable firewalld

Disable SELinux

Run the following commands on all three hosts to disable SELinux and reset iptables

sed -i 's/SELINUX=enforcing/SELINUX=disabled/' /etc/selinux/config
grep SELINUX=disabled /etc/selinux/config
setenforce 0
getenforce
#flush and reset the iptables rules
iptables -F && iptables -X && iptables -F -t nat && iptables -X -t nat
iptables -P FORWARD ACCEPT

Configure hostnames

#ceph1 
hostnamectl set-hostname ceph1

#ceph2
hostnamectl set-hostname ceph2

#ceph3
hostnamectl set-hostname ceph3

Configure /etc/hosts

Configure /etc/hosts on ceph1 and copy it to the other two hosts

#ceph1

vi /etc/hosts
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.2.30 ceph1
192.168.2.31 ceph2
192.168.2.32 ceph3
scp /etc/hosts ceph2:/etc/
scp /etc/hosts ceph3:/etc/

Configure passwordless SSH

Generate a key pair on ceph1 (leave the passphrase empty) and copy it to all three nodes

[root@localhost ~]# ssh-keygen 
ssh-copy-id -i /root/.ssh/id_rsa.pub ceph1
ssh-copy-id -i /root/.ssh/id_rsa.pub ceph2
ssh-copy-id -i /root/.ssh/id_rsa.pub ceph3

Verify
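
A quick check from ceph1 that key-based login works on all three nodes (a minimal sketch):

for h in ceph1 ceph2 ceph3; do ssh $h hostname; done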

Configure time synchronization

Configure time synchronization on ceph1

Install ntpdate and add an hourly sync job to cron

yum install ntpdate -y
crontab -e
0 */1 * * * /usr/sbin/ntpdate time.windows.com   #run the sync once per hour

ntpdate time.windows.com
date
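
ceph2 and ceph3 need the same time synchronization, since the monitors are sensitive to clock skew. A sketch that pushes the identical setup from ceph1 (note: it overwrites any existing root crontab on those hosts and assumes the SSH trust configured above):

for h in ceph2 ceph3; do
  ssh $h "yum -y install ntpdate; echo '0 */1 * * * /usr/sbin/ntpdate time.windows.com' | crontab -; /usr/sbin/ntpdate time.windows.com"
done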


Configure the yum repositories

cd /etc/yum.repos.d/
mkdir bak
mv *.repo bak

Base repository

curl -o /etc/yum.repos.d/CentOS-Base.repo https://mirrors.aliyun.com/repo/Centos-7.repo

EPEL repository

curl -o /etc/yum.repos.d/epel.repo http://mirrors.aliyun.com/repo/epel-7.repo

Ceph repository

cat << EOF |tee /etc/yum.repos.d/ceph.repo

[Ceph]
name=Ceph packages for \$basearch
baseurl=https://mirrors.aliyun.com/ceph/rpm-nautilus/el7/\$basearch
enabled=1
gpgcheck=1
type=rpm-md
gpgkey=https://download.ceph.com/keys/release.asc
priority=1

[Ceph-noarch]
name=Ceph noarch packages 
baseurl=https://mirrors.aliyun.com/ceph/rpm-nautilus/el7/noarch
enabled=1
gpgcheck=1
type=rpm-md
gpgkey=https://download.ceph.com/keys/release.asc
priority=1

[Ceph-source]
name=Ceph source packages 
baseurl=https://mirrors.aliyun.com/ceph/rpm-nautilus/el7/SRPMS
enabled=1
gpgcheck=1
type=rpm-md
gpgkey=https://download.ceph.com/keys/release.asc

EOF

Rebuild the yum cache

yum clean all
yum makecache fast

yum list |grep ceph
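
The same repository configuration is needed on ceph2 and ceph3 before Ceph packages can be installed there. One way to copy it over (a sketch assuming the SSH trust set up earlier; move aside any old .repo files on those hosts first if desired):

for h in ceph2 ceph3; do
  scp /etc/yum.repos.d/*.repo $h:/etc/yum.repos.d/
  ssh $h "yum clean all && yum makecache fast"
done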

Deploy Ceph

Deploy the admin node

Install ceph-deploy and the ceph packages on ceph1

yum -y install ceph-deploy ceph
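
The ceph package must also be present on ceph2 and ceph3 before monitors and OSDs can be created there; a minimal sketch:

for h in ceph2 ceph3; do ssh $h "yum -y install ceph"; done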

Create a cluster working directory on ceph1; all subsequent commands are run from this directory

[root@ceph1 ~]# mkdir /ceph-cluster
[root@ceph1 ~]# cd /ceph-cluster

Create the new cluster with ceph1, ceph2 and ceph3 as the initial monitors

[root@ceph1 ceph-cluster]# ceph-deploy new ceph1 ceph2 ceph3
......
[ceph_deploy.new][DEBUG ] Resolving host ceph3
[ceph_deploy.new][DEBUG ] Monitor ceph3 at 192.168.2.32
[ceph_deploy.new][DEBUG ] Monitor initial members are ['ceph1', 'ceph2', 'ceph3']
[ceph_deploy.new][DEBUG ] Monitor addrs are ['192.168.2.30', '192.168.2.31', '192.168.2.32']
[ceph_deploy.new][DEBUG ] Creating a random mon key...
[ceph_deploy.new][DEBUG ] Writing monitor keyring to ceph.mon.keyring...
[ceph_deploy.new][DEBUG ] Writing initial config to ceph.conf...

No errors in the output means this step succeeded.

Check the Ceph version

[root@ceph1 ceph-cluster]# ceph -v
ceph version 14.2.22 (ca74598065096e6fcbd8433c8779a2be0c889351) nautilus (stable)

Create the initial monitors

[root@ceph1 ceph-cluster]# ceph-deploy mon create-initial
......
[ceph1][INFO  ] Running command: /usr/bin/ceph --connect-timeout=25 --cluster=ceph --admin-daemon=/var/run/ceph/ceph-mon.ceph1.asok mon_status
[ceph1][INFO  ] Running command: /usr/bin/ceph --connect-timeout=25 --cluster=ceph --name mon. --keyring=/var/lib/ceph/mon/ceph-ceph1/keyring auth get client.admin
[ceph1][INFO  ] Running command: /usr/bin/ceph --connect-timeout=25 --cluster=ceph --name mon. --keyring=/var/lib/ceph/mon/ceph-ceph1/keyring auth get client.bootstrap-mds
[ceph1][INFO  ] Running command: /usr/bin/ceph --connect-timeout=25 --cluster=ceph --name mon. --keyring=/var/lib/ceph/mon/ceph-ceph1/keyring auth get client.bootstrap-mgr
[ceph1][INFO  ] Running command: /usr/bin/ceph --connect-timeout=25 --cluster=ceph --name mon. --keyring=/var/lib/ceph/mon/ceph-ceph1/keyring auth get client.bootstrap-osd
[ceph1][INFO  ] Running command: /usr/bin/ceph --connect-timeout=25 --cluster=ceph --name mon. --keyring=/var/lib/ceph/mon/ceph-ceph1/keyring auth get client.bootstrap-rgw
[ceph_deploy.gatherkeys][INFO  ] Storing ceph.client.admin.keyring
[ceph_deploy.gatherkeys][INFO  ] Storing ceph.bootstrap-mds.keyring
[ceph_deploy.gatherkeys][INFO  ] Storing ceph.bootstrap-mgr.keyring
[ceph_deploy.gatherkeys][INFO  ] keyring 'ceph.mon.keyring' already exists
[ceph_deploy.gatherkeys][INFO  ] Storing ceph.bootstrap-osd.keyring
[ceph_deploy.gatherkeys][INFO  ] Storing ceph.bootstrap-rgw.keyring
[ceph_deploy.gatherkeys][INFO  ] Destroy temp directory /tmp/tmpRev3cc

Distribute the admin keyring and configuration to all nodes

[root@ceph1 ceph-cluster]# ceph-deploy admin ceph1 ceph2 ceph3 
......
[ceph1][DEBUG ] connected to host: ceph1 
[ceph1][DEBUG ] detect platform information from remote host
[ceph1][DEBUG ] detect machine type
[ceph1][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
[ceph_deploy.admin][DEBUG ] Pushing admin keys and conf to ceph2
[ceph2][DEBUG ] connected to host: ceph2 
[ceph2][DEBUG ] detect platform information from remote host
[ceph2][DEBUG ] detect machine type
[ceph2][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
[ceph_deploy.admin][DEBUG ] Pushing admin keys and conf to ceph3
[ceph3][DEBUG ] connected to host: ceph3 
[ceph3][DEBUG ] detect platform information from remote host
[ceph3][DEBUG ] detect machine type
[ceph3][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
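
With the admin keyring and ceph.conf now in /etc/ceph on every node, cluster commands work from any of the three hosts; for example, to confirm monitor quorum:

ceph mon stat
ssh ceph2 ceph health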

Deploy the MGR daemons, which provide the web-based management interface (optional)

[root@ceph1 ceph-cluster]# ceph-deploy mgr create ceph1 ceph2 ceph3 
......
[ceph_deploy.mgr][INFO  ] Distro info: CentOS Linux 7.9.2009 Core
[ceph_deploy.mgr][DEBUG ] remote host will use systemd
[ceph_deploy.mgr][DEBUG ] deploying mgr bootstrap to ceph3
[ceph3][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
[ceph3][WARNIN] mgr keyring does not exist yet, creating one
[ceph3][DEBUG ] create a keyring file
[ceph3][DEBUG ] create path recursively if it doesn't exist
[ceph3][INFO  ] Running command: ceph --cluster ceph --name client.bootstrap-mgr --keyring /var/lib/ceph/bootstrap-mgr/ceph.keyring auth get-or-create mgr.ceph3 mon allow profile mgr osd allow * mds allow * -o /var/lib/ceph/mgr/ceph-ceph3/keyring
[ceph3][INFO  ] Running command: systemctl enable ceph-mgr@ceph3
[ceph3][WARNIN] Created symlink from /etc/systemd/system/ceph-mgr.target.wants/ceph-mgr@ceph3.service to /usr/lib/systemd/system/ceph-mgr@.service.
[ceph3][INFO  ] Running command: systemctl start ceph-mgr@ceph3
[ceph3][INFO  ] Running command: systemctl enable ceph.target

Deploy the RGW (object gateway)

yum -y install ceph-radosgw
[root@ceph1 ceph-cluster]# ceph-deploy rgw create ceph1
......
[ceph_deploy.rgw][INFO  ] Distro info: CentOS Linux 7.9.2009 Core
[ceph_deploy.rgw][DEBUG ] remote host will use systemd
[ceph_deploy.rgw][DEBUG ] deploying rgw bootstrap to ceph1
[ceph1][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
[ceph1][WARNIN] rgw keyring does not exist yet, creating one
[ceph1][DEBUG ] create a keyring file
[ceph1][DEBUG ] create path recursively if it doesn't exist
[ceph1][INFO  ] Running command: ceph --cluster ceph --name client.bootstrap-rgw --keyring /var/lib/ceph/bootstrap-rgw/ceph.keyring auth get-or-create client.rgw.ceph1 osd allow rwx mon allow rw -o /var/lib/ceph/radosgw/ceph-rgw.ceph1/keyring
[ceph1][INFO  ] Running command: systemctl enable ceph-radosgw@rgw.ceph1
[ceph1][WARNIN] Created symlink from /etc/systemd/system/ceph-radosgw.target.wants/ceph-radosgw@rgw.ceph1.service to /usr/lib/systemd/system/ceph-radosgw@.service.
[ceph1][INFO  ] Running command: systemctl start ceph-radosgw@rgw.ceph1
[ceph1][INFO  ] Running command: systemctl enable ceph.target
[ceph_deploy.rgw][INFO  ] The Ceph Object Gateway (RGW) is now running on host ceph1 and default port 7480
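
A quick check that the gateway answers on its default port; it should return an empty S3 ListAllMyBucketsResult XML document:

curl http://ceph1:7480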

Deploy the MDS daemons for CephFS (optional)

[root@ceph1 ceph-cluster]# ceph-deploy mds create ceph1 ceph2 ceph3

[ceph_deploy.conf][DEBUG ] found configuration file at: /root/.cephdeploy.conf
[ceph_deploy.cli][INFO  ] Invoked (2.0.1): /usr/bin/ceph-deploy mds create ceph1 ceph2 ceph3
[ceph_deploy.cli][INFO  ] ceph-deploy options:
[ceph_deploy.cli][INFO  ]  username                      : None
[ceph_deploy.cli][INFO  ]  verbose                       : False
[ceph_deploy.cli][INFO  ]  overwrite_conf                : False
[ceph_deploy.cli][INFO  ]  subcommand                    : create
[ceph_deploy.cli][INFO  ]  quiet                         : False
[ceph_deploy.cli][INFO  ]  cd_conf                       : <ceph_deploy.conf.cephdeploy.Conf instance at 0x7fb889afe2d8>
[ceph_deploy.cli][INFO  ]  cluster                       : ceph
[ceph_deploy.cli][INFO  ]  func                          : <function mds at 0x7fb889b3ced8>
[ceph_deploy.cli][INFO  ]  ceph_conf                     : None
[ceph_deploy.cli][INFO  ]  mds                           : [('ceph1', 'ceph1'), ('ceph2', 'ceph2'), ('ceph3', 'ceph3')]
[ceph_deploy.cli][INFO  ]  default_release               : False
[ceph_deploy.mds][DEBUG ] Deploying mds, cluster ceph hosts ceph1:ceph1 ceph2:ceph2 ceph3:ceph3
[ceph1][DEBUG ] connected to host: ceph1 
[ceph1][DEBUG ] detect platform information from remote host
[ceph1][DEBUG ] detect machine type
[ceph_deploy.mds][INFO  ] Distro info: CentOS Linux 7.9.2009 Core
[ceph_deploy.mds][DEBUG ] remote host will use systemd
[ceph_deploy.mds][DEBUG ] deploying mds bootstrap to ceph1
[ceph1][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
[ceph1][WARNIN] mds keyring does not exist yet, creating one
[ceph1][DEBUG ] create a keyring file
[ceph1][DEBUG ] create path if it doesn't exist
[ceph1][INFO  ] Running command: ceph --cluster ceph --name client.bootstrap-mds --keyring /var/lib/ceph/bootstrap-mds/ceph.keyring auth get-or-create mds.ceph1 osd allow rwx mds allow mon allow profile mds -o /var/lib/ceph/mds/ceph-ceph1/keyring
[ceph1][INFO  ] Running command: systemctl enable ceph-mds@ceph1
[ceph1][WARNIN] Created symlink from /etc/systemd/system/ceph-mds.target.wants/ceph-mds@ceph1.service to /usr/lib/systemd/system/ceph-mds@.service.
[ceph1][INFO  ] Running command: systemctl start ceph-mds@ceph1
[ceph1][INFO  ] Running command: systemctl enable ceph.target
[ceph2][DEBUG ] connected to host: ceph2 
[ceph2][DEBUG ] detect platform information from remote host
[ceph2][DEBUG ] detect machine type
[ceph_deploy.mds][INFO  ] Distro info: CentOS Linux 7.9.2009 Core
[ceph_deploy.mds][DEBUG ] remote host will use systemd
[ceph_deploy.mds][DEBUG ] deploying mds bootstrap to ceph2
[ceph2][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
[ceph2][WARNIN] mds keyring does not exist yet, creating one
[ceph2][DEBUG ] create a keyring file
[ceph2][DEBUG ] create path if it doesn't exist
[ceph2][INFO  ] Running command: ceph --cluster ceph --name client.bootstrap-mds --keyring /var/lib/ceph/bootstrap-mds/ceph.keyring auth get-or-create mds.ceph2 osd allow rwx mds allow mon allow profile mds -o /var/lib/ceph/mds/ceph-ceph2/keyring
[ceph2][INFO  ] Running command: systemctl enable ceph-mds@ceph2
[ceph2][WARNIN] Created symlink from /etc/systemd/system/ceph-mds.target.wants/ceph-mds@ceph2.service to /usr/lib/systemd/system/ceph-mds@.service.
[ceph2][INFO  ] Running command: systemctl start ceph-mds@ceph2
[ceph2][INFO  ] Running command: systemctl enable ceph.target
[ceph3][DEBUG ] connected to host: ceph3 
[ceph3][DEBUG ] detect platform information from remote host
[ceph3][DEBUG ] detect machine type
[ceph_deploy.mds][INFO  ] Distro info: CentOS Linux 7.9.2009 Core
[ceph_deploy.mds][DEBUG ] remote host will use systemd
[ceph_deploy.mds][DEBUG ] deploying mds bootstrap to ceph3
[ceph3][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
[ceph3][WARNIN] mds keyring does not exist yet, creating one
[ceph3][DEBUG ] create a keyring file
[ceph3][DEBUG ] create path if it doesn't exist
[ceph3][INFO  ] Running command: ceph --cluster ceph --name client.bootstrap-mds --keyring /var/lib/ceph/bootstrap-mds/ceph.keyring auth get-or-create mds.ceph3 osd allow rwx mds allow mon allow profile mds -o /var/lib/ceph/mds/ceph-ceph3/keyring
[ceph3][INFO  ] Running command: systemctl enable ceph-mds@ceph3
[ceph3][WARNIN] Created symlink from /etc/systemd/system/ceph-mds.target.wants/ceph-mds@ceph3.service to /usr/lib/systemd/system/ceph-mds@.service.
[ceph3][INFO  ] Running command: systemctl start ceph-mds@ceph3
[ceph3][INFO  ] Running command: systemctl enable ceph.target
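
The MDS daemons stay in standby until a filesystem is created, which is why ceph -s below reports 3 up:standby. If a CephFS is actually wanted, a minimal sketch looks like this (the pool names, PG counts and filesystem name are examples only and are not part of this walkthrough):

ceph osd pool create cephfs_data 64
ceph osd pool create cephfs_metadata 32
ceph fs new mycephfs cephfs_metadata cephfs_data
ceph fs status mycephfs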

Initialize the OSDs

From ceph1, initialize the data disk of each node in turn (each node here has a single data disk)

ceph-deploy osd create --data /dev/sdb ceph1
ceph-deploy osd create --data /dev/sdb ceph2
ceph-deploy osd create --data /dev/sdb ceph3
##additional disks, if present
#ceph-deploy osd create --data /dev/sdc ceph1
#ceph-deploy osd create --data /dev/sdc ceph2
#ceph-deploy osd create --data /dev/sdc ceph3

#ceph-deploy osd create --data /dev/sdd ceph1
#ceph-deploy osd create --data /dev/sdd ceph2
#ceph-deploy osd create --data /dev/sdd ceph3
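
If a disk has been used before (old partition table or leftover LVM metadata), ceph-deploy may refuse it; zapping it first is optional but often necessary, for example for ceph1's /dev/sdb:

ceph-deploy disk zap ceph1 /dev/sdb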

Check the OSD status

[root@ceph1 ceph-cluster]# ceph osd status
+----+-------+-------+-------+--------+---------+--------+---------+-----------+
| id |  host |  used | avail | wr ops | wr data | rd ops | rd data |   state   |
+----+-------+-------+-------+--------+---------+--------+---------+-----------+
| 0  | ceph1 | 1027M |  498G |    0   |     0   |    0   |     0   | exists,up |
| 1  | ceph2 | 1027M |  498G |    0   |     0   |    0   |     0   | exists,up |
| 2  | ceph3 | 1027M |  498G |    0   |     0   |    0   |     0   | exists,up |
+----+-------+-------+-------+--------+---------+--------+---------+-----------+

Check the cluster status

[root@ceph1 ceph-cluster]# ceph -s
  cluster:
    id:     028978bb-0f6e-4208-8519-cd8db3f5978e
    health: HEALTH_WARN
            mons are allowing insecure global_id reclaim
 
  services:
    mon: 3 daemons, quorum ceph1,ceph2,ceph3 (age 4m)
    mgr: ceph1(active, since 3m), standbys: ceph2, ceph3
    mds:  3 up:standby
    osd: 3 osds: 3 up (since 35s), 3 in (since 35s)
    rgw: 1 daemon active (ceph1)
 
  task status:
 
  data:
    pools:   4 pools, 128 pgs
    objects: 187 objects, 1.2 KiB
    usage:   3.0 GiB used, 1.5 TiB / 1.5 TiB avail
    pgs:     128 active+clean
 
  io:
    recovery: 0 B/s, 2 objects/s

The cluster is up, apart from one warning:

health: HEALTH_WARN
        mons are allowing insecure global_id reclaim

Fix: ceph config set mon auth_allow_insecure_global_id_reclaim false

Enable the dashboard module
Note: enabling the module alone is not enough to reach the dashboard; SSL must be disabled (or properly configured) and a listen address must be set.

The dashboard module is enabled on the mgr node; SSL is disabled first here.

Enable the dashboard

[ceph@ceph1 ceph-cluster]$ ceph mgr module enable dashboard
#disable dashboard SSL

[ceph@ceph1 ceph-cluster]$ ceph config set mgr mgr/dashboard/ssl false
#set the dashboard listen address (the mgr daemon name in this cluster is ceph1)
[ceph@ceph1 ceph-cluster]$ ceph config set mgr mgr/dashboard/ceph1/server_addr 0.0.0.0 

#set the dashboard listen port
[ceph@ceph1 ceph-cluster]$ ceph config set mgr mgr/dashboard/ceph1/server_port 7000

#Check the cluster status; if the following error appears: 
Module 'dashboard' has failed: error('No socket could be created',) 
#check whether the mgr service is running properly; restarting the mgr service usually clears it

#dashboard SSL
#To access the dashboard over SSL, a signed certificate is required; it can be generated with the ceph command or with openssl.

#ceph self-signed certificate
# With SSL still disabled, check the current dashboard URL
[root@ceph1 ceph-cluster]# ceph mgr services
{
    "dashboard": "http://ceph1:7000/"
}



# generate a self-signed certificate
[ceph@ceph1 ceph-cluster]$ ceph dashboard create-self-signed-cert

# enable SSL
[ceph@ceph1 ceph-cluster]$ ceph config set mgr mgr/dashboard/ssl true
 
[ceph@ceph1 ceph-cluster]$ systemctl restart ceph-mgr@ceph1
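
Logging in to the dashboard also requires a dashboard account. On recent 14.2.x releases the password has to be supplied from a file; the user name and password below are examples only:

echo 'Admin@123' > /tmp/dashboard_pass
ceph dashboard ac-user-create admin -i /tmp/dashboard_pass administrator
rm -f /tmp/dashboard_pass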

Allow pool deletion

cd /etc/ceph/
cat /etc/ceph/ceph.conf     # excerpt; add the following line under the [global] section
[global]
mon_allow_pool_delete = true

#push the updated configuration to all nodes
ceph-deploy --overwrite-conf admin ceph1 ceph2 ceph3

Restart the mon and mgr services on every node

[root@ceph1 ceph]# systemctl list-units --type=service|grep ceph

[root@ceph1 ceph]# systemctl restart ceph-mgr@ceph1.service
[root@ceph1 ceph]# systemctl restart ceph-mon@ceph1.service

[root@ceph2 ~]# systemctl restart ceph-mgr@ceph2.service
[root@ceph2 ~]# systemctl restart ceph-mon@ceph2.service

[root@ceph3 ~]# systemctl restart ceph-mgr@ceph3.service
[root@ceph3 ~]# systemctl restart ceph-mon@ceph3.service

Delete a pool

#Note: the pool name must be typed twice; a pool that is used as a cache tier cannot be deleted.
ceph osd pool rm test test --yes-i-really-really-mean-it

#a pool with a cache tier cannot be deleted (expected error)
ceph osd pool rm volumes volumes  --yes-i-really-really-mean-it

Error EBUSY: pool 'volumes' has tiers cache-pool

Create a storage pool

[root@ceph1 ceph-cluster]# ceph osd pool create mypool1 128
pool 'mypool1' created

#list the existing pools
[root@ceph1 ceph-cluster]# ceph osd pool ls 
.rgw.root
default.rgw.control
default.rgw.meta
default.rgw.log
mypool1

View detailed pool parameters

[root@ceph1 ceph-cluster]# ceph osd pool ls detail 

View a single pool parameter

# replica count
[root@ceph1 ceph-cluster]# ceph osd pool get mypool1 size
size: 3
# pg_num
[root@ceph1 ceph-cluster]# ceph osd pool get mypool1 pg_num
pg_num: 128

Modify a pool parameter

[root@ceph1 ceph-cluster]# ceph osd pool set mypool1 pg_num 64
set pool 5 pg_num to 64

Create an erasure-code profile and an EC pool (note: with crush-failure-domain=osd, k=3 m=2 needs at least five OSDs, so on this three-OSD cluster the EC pool's PGs cannot become fully active+clean)

[root@ceph1 ceph-cluster]# ceph osd erasure-code-profile set ec001 k=3 m=2 crush-failure-domain=osd
[root@ceph1 ceph-cluster]# ceph osd pool create mypool2 100 erasure ec001

[root@ceph1 ceph-cluster]# ceph osd pool ls detail

Using the rgw pool with the rados client

Tag mypool2 for rgw use, then test upload and download with the rados client tool

[root@ceph1 ceph-cluster]# ceph osd pool application enable mypool2 rgw

# upload /etc/passwd to mypool2 as an object named t_pass
[root@ceph1 ceph-cluster]# rados -p mypool2 put t_pass /etc/passwd
# list the objects in mypool2
[root@ceph1 ceph-cluster]# rados -p mypool2 ls

# download object t_pass from mypool2 to /tmp/passwd
[root@ceph1 ceph-cluster]# rados -p mypool2 get t_pass /tmp/passwd
# inspect the downloaded file
[root@ceph1 ceph-cluster]# cat /tmp/passwd 
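
A simple round-trip check:

diff /etc/passwd /tmp/passwd && echo "downloaded object matches the source file"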

Using RBD

Tag mypool3 for rbd, create a volume, map it and mount it on /mnt; the same procedure is used to map volumes to application servers.

# create pool mypool3
[root@ceph1 ceph-cluster]# ceph osd pool create mypool3 64
pool 'mypool3' created
# tag the pool for rbd
[root@ceph1 ceph-cluster]# ceph osd pool application enable mypool3 rbd
enabled application 'rbd' on pool 'mypool3'
# create a 1 GB image named disk1 in the pool
[root@ceph1 ceph-cluster]# rbd create mypool3/disk1 --size 1G
# map the image as a block device
[root@ceph1 ceph-cluster]# rbd map mypool3/disk1

# if the map fails because of unsupported image features, disable those features and map again
[root@ceph1 ceph-cluster]# rbd feature disable mypool3/disk1 object-map fast-diff deep-flatten
[root@ceph1 ceph-cluster]# rbd map mypool3/disk1

[root@ceph1 ceph-cluster]# ll /dev/rbd

[root@ceph1 ceph-cluster]# ll /dev/rbd0

# create a filesystem on the block device
[root@ceph1 ceph-cluster]# mkfs.ext4 /dev/rbd0


# mount and use it
[root@ceph1 ceph-cluster]# mount /dev/rbd0 /mnt
[root@ceph1 ceph-cluster]# df -h
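
When the test is finished, the reverse steps clean everything up (rbd rm permanently deletes the image):

umount /mnt
rbd unmap /dev/rbd0
rbd rm mypool3/disk1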

ceph.conf parameter reference

[global]
fsid = 028978bb-0f6e-4208-8519-cd8db3f5978e #cluster ID
mon_initial_members = ceph1, ceph2, ceph3   #initial monitors (set when the monitors are created)
mon_host = 192.168.2.30,192.168.2.31,192.168.2.32 #monitor IP addresses
auth_cluster_required = cephx 			#cluster authentication
auth_service_required = cephx 			#service authentication
auth_client_required = cephx			#client authentication

osd pool default size = 3                               #default replica count (default 3)
osd pool default min size = 1                           #minimum replicas a PG needs to accept I/O; a degraded PG can still serve I/O down to this count
public network = 192.168.2.0/24                            #public network (the monitor subnet) 
cluster network = 192.168.2.0/24                           #cluster (replication) network
max open files = 131072                                 #default 0; when set, Ceph raises the system's max open fds to this value


[mon]
mon data = /var/lib/ceph/mon/ceph-$id
mon clock drift allowed = 1                             #default 0.05; allowed clock drift between monitors (seconds)
mon osd min down reporters = 13                         #default 1; minimum number of OSDs reporting a peer down before the monitor marks it down
mon osd down out interval = 600      #default 300      #seconds Ceph waits before marking a down OSD as out


[osd]
osd data = /var/lib/ceph/osd/ceph-$id
osd mkfs type = xfs                                     #filesystem type used when formatting OSDs
osd max write size = 512 #default 90                       #maximum size of a single OSD write (MB)
osd client message size cap = 2147483648 #default 100      #maximum amount of client data allowed in memory (bytes)
osd deep scrub stride = 131072 #default 524288         #read size used during deep scrub (bytes)
osd op threads = 16 #default 2                         #threads for concurrent filesystem operations
osd disk threads = 4 #default 1                        #threads for disk-intensive work such as recovery and scrubbing
osd map cache size = 1024 #default 500                 #OSD map cache size (MB)
osd map cache bl size = 128 #default 50                #OSD map cache kept in memory by the OSD process (MB)
osd mount options xfs = "rw,noexec,nodev,noatime,nodiratime,nobarrier" #default rw,noatime,inode64  #mount options for xfs-backed OSDs
osd recovery op priority = 2 #default 10               #recovery op priority, 1-63; higher values use more resources
osd recovery max active = 10 #default 15               #number of recovery requests active at the same time 
osd max backfills = 4  #default 10                     #maximum number of backfills allowed per OSD
osd min pg log entries = 30000 #default 3000           #minimum number of PG log entries kept when the PG log is trimmed
osd max pg log entries = 100000 #default 10000         #maximum number of PG log entries kept when the PG log is trimmed
osd mon heartbeat interval = 40 #default 30            #interval (seconds) at which an OSD pings a monitor
ms dispatch throttle bytes = 1048576000 #default 104857600 #maximum bytes of messages waiting to be dispatched
objecter inflight ops = 819200 #default 1024           #client throttle: maximum number of unsent I/O requests; beyond this, application I/O blocks; 0 means unlimited
osd op log threshold = 50 #default 5                   #how many ops are shown in the log at a time
osd crush chooseleaf type = 0 #default 1             #bucket type used when CRUSH rules call chooseleaf


[client]
rbd cache = true #default true      #enable the RBD cache
rbd cache size = 335544320 #default 33554432           #RBD cache size (bytes)
rbd cache max dirty = 134217728 #default 25165824      #maximum dirty bytes in write-back mode; 0 means write-through
rbd cache max dirty age = 30 #default 1                #seconds dirty data may stay in the cache before being flushed to disk
rbd cache writethrough until flush = false #default true  #compatibility option for virtio drivers older than linux-2.6.32 that never send flush requests:
              #librbd runs in writethrough mode until the first flush request arrives, then switches to writeback.
rbd cache max dirty object = 2 #default 0              #maximum number of cached objects; 0 means it is derived from rbd cache size. librbd splits an image into 4 MB chunks,
      #each chunk becoming one Object; the cache is managed per Object, so raising this value can improve performance
rbd cache target dirty = 235544320 #default 16777216    #dirty-data size at which write-back starts; must not exceed rbd_cache_max_dirty