1. Basic environment preparation
Host IP | Hostname | Services | Notes |
---|---|---|---|
192.168.0.91 | admin-node | ceph, ceph-deploy, mon | the mon node is also referred to as the master node |
192.168.0.92 | ceph01 | ceph | osd |
192.168.0.93 | ceph02 | ceph | osd |
Ceph version: 10.2.11 (Jewel) | ceph-deploy version: 1.5.39 | OS: CentOS 7.6.1810 | Kernel: 3.10.0-957.el7.x86_64
Disable the firewall and SELinux on every node
[root@admin-node ~]# systemctl stop firewalld
[root@admin-node ~]# systemctl disable firewalld
[root@admin-node ~]# sed -i 's/SELINUX=enforcing/SELINUX=disabled/g' /etc/selinux/config
[root@admin-node ~]# setenforce 0
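A quick sanity check that both are off (run on each node):
[root@admin-node ~]# systemctl is-active firewalld
[root@admin-node ~]# getenforce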
Configure time synchronization on every node
[root@admin-node ~]# yum -y install ntp ntpdate ntp-doc
[root@admin-node ~]# systemctl enable ntpd
[root@admin-node ~]# systemctl start ntpd
[root@admin-node ~]# /usr/sbin/ntpdate ntp1.aliyun.com
[root@admin-node ~]# hwclock --systohc
[root@admin-node ~]# timedatectl set-timezone Asia/Shanghai
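To confirm ntpd is actually syncing, ntpq (installed with the ntp package above) lists the peers being tracked, and timedatectl shows the overall clock state:
[root@admin-node ~]# ntpq -p
[root@admin-node ~]# timedatectl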
Set the hostname on each node
[root@admin-node ~]# hostnamectl set-hostname admin-node
[root@ceph01 ~]# hostnamectl set-hostname ceph01
[root@ceph02 ~]# hostnamectl set-hostname ceph02
Add the hosts entries on every node
[root@admin-node ~]# cat >> /etc/hosts <<EOF
192.168.0.91 admin-node
192.168.0.92 ceph01
192.168.0.93 ceph02
EOF
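A quick check that the names resolve as expected:
[root@admin-node ~]# ping -c 1 ceph01
[root@admin-node ~]# ping -c 1 ceph02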
Configure the yum repositories
[root@admin-node ~]# wget -O /etc/yum.repos.d/CentOS-Base.repo http://mirrors.aliyun.com/repo/Centos-7.repo
[root@admin-node ~]# wget -O /etc/yum.repos.d/epel.repo http://mirrors.aliyun.com/repo/epel-7.repo
[root@admin-node ~]# yum clean all && yum makecache
Add the Ceph repository
[root@admin-node ~]# cat > /etc/yum.repos.d/ceph.repo <<EOF
[ceph]
name=ceph
baseurl=http://mirrors.aliyun.com/ceph/rpm-jewel/el7/x86_64/
gpgcheck=0
priority=1
[ceph-noarch]
name=cephnoarch
baseurl=http://mirrors.aliyun.com/ceph/rpm-jewel/el7/noarch/
gpgcheck=0
priority=1
[ceph-source]
name=Ceph source packages
baseurl=http://mirrors.aliyun.com/ceph/rpm-jewel/el7/SRPMS
gpgcheck=0
priority=1
EOF
Rebuild the yum cache
[root@admin-node ~]# yum clean all && yum makecache
Create the cephuser account with passwordless sudo on every node
[root@admin-node ~]# useradd -d /home/cephuser -m cephuser
[root@admin-node ~]# echo "cephuser"|passwd --stdin cephuser
[root@admin-node ~]# echo "cephuser ALL = (root) NOPASSWD:ALL" | sudo tee /etc/sudoers.d/cephuser
[root@admin-node ~]# chmod 0440 /etc/sudoers.d/cephuser
[root@admin-node ~]# sed -i 's/Defaults requiretty/#Defaults requiretty/g' /etc/sudoers
Test cephuser's sudo access
[root@admin-node ~]# su - cephuser
[cephuser@admin-node ~]$ sudo su -
Because ceph-deploy cannot prompt for passwords, the master node must be configured for passwordless SSH logins to every Ceph node.
[root@admin-node ~]# su - cephuser
[cephuser@admin-node ~]$ ssh-keygen -t rsa
[cephuser@admin-node ~]$ ssh-copy-id cephuser@admin-node
[cephuser@admin-node ~]$ ssh-copy-id cephuser@ceph01
[cephuser@admin-node ~]$ ssh-copy-id cephuser@ceph02
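Before moving on, it is worth confirming that passwordless SSH really works (ceph-deploy will fail later if it does not); each command should return the remote hostname without asking for a password:
[cephuser@admin-node ~]$ ssh cephuser@ceph01 hostname
[cephuser@admin-node ~]$ ssh cephuser@ceph02 hostname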
2. Prepare the disks
Add a 30 GB disk to each of the three hosts; I added them in VMware, which is straightforward.
After adding the disk, check it with fdisk
[cephuser@admin-node ~]$ sudo fdisk -l
Disk /dev/sda: 53.7 GB, 53687091200 bytes, 104857600 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk label type: dos
Disk identifier: 0x000a93ad
Device Boot Start End Blocks Id System
/dev/sda1 * 2048 104857599 52427776 83 Linux
Disk /dev/sdb: 32.2 GB, 32212254720 bytes, 62914560 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Partition the disk: enter "n", "p", "1", press Enter twice to accept the default first and last sectors, then enter "w" to write the partition table.
[cephuser@admin-node ~]$ sudo fdisk /dev/sdb
Welcome to fdisk (util-linux 2.23.2).
Changes will remain in memory only, until you decide to write them.
Be careful before using the write command.
Device does not contain a recognized partition table
Building a new DOS disklabel with disk identifier 0x8aec5f5a.
Command (m for help): n
Partition type:
p primary (0 primary, 0 extended, 4 free)
e extended
Select (default p): p
Partition number (1-4, default 1): 1
First sector (2048-62914559, default 2048):
Using default value 2048
Last sector, +sectors or +size{K,M,G} (2048-62914559, default 62914559):
Using default value 62914559
Partition 1 of type Linux and of size 30 GiB is set
Command (m for help): w
The partition table has been altered!
Calling ioctl() to re-read partition table.
Syncing disks.
fdisk -l now shows the newly partitioned disk
[cephuser@admin-node ~]$ sudo fdisk -l
Disk /dev/sdb: 32.2 GB, 32212254720 bytes, 62914560 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk label type: dos
Disk identifier: 0x8aec5f5a
Device Boot Start End Blocks Id System
/dev/sdb1 2048 62914559 31456256 83 Linux
Format the partition
[cephuser@admin-node ~]$ sudo mkfs.xfs /dev/sdb1
meta-data=/dev/sdb1 isize=512 agcount=4, agsize=1966016 blks
= sectsz=512 attr=2, projid32bit=1
= crc=1 finobt=0, sparse=0
data = bsize=4096 blocks=7864064, imaxpct=25
= sunit=0 swidth=0 blks
naming =version 2 bsize=4096 ascii-ci=0 ftype=1
log =internal log bsize=4096 blocks=3839, version=2
= sectsz=512 sunit=0 blks, lazy-count=1
realtime =none extsz=4096 blocks=0, rtextents=0
Add the mount to /etc/fstab
[root@admin-node ~]# echo '/dev/sdb1 /ceph xfs defaults 0 0' >> /etc/fstab
[root@admin-node ~]# mkdir /ceph
[root@admin-node ~]# mount -a
[root@admin-node ~]# df -h
Filesystem Size Used Avail Use% Mounted on
/dev/sda1 50G 1.7G 49G 4% /
devtmpfs 1.9G 0 1.9G 0% /dev
tmpfs 1.9G 0 1.9G 0% /dev/shm
tmpfs 1.9G 12M 1.9G 1% /run
tmpfs 1.9G 0 1.9G 0% /sys/fs/cgroup
tmpfs 378M 0 378M 0% /run/user/0
/dev/sdb1 30G 33M 30G 1% /ceph
3. Deployment
Deploy the Ceph cluster with the ceph-deploy tool: create a new cluster on the master node and use ceph-deploy to manage all three nodes.
[root@admin-node ~]# su - cephuser
[cephuser@admin-node ~]$ sudo yum -y install ceph ceph-deploy
Create the cluster directory
[cephuser@admin-node ~]$ mkdir cluster
[cephuser@admin-node ~]$ cd cluster/
Create the monitor on the master node
[cephuser@admin-node cluster]$ ceph-deploy new admin-node
[ceph_deploy.conf][DEBUG ] found configuration file at: /home/cephuser/.cephdeploy.conf
[ceph_deploy.cli][INFO ] Invoked (1.5.39): /bin/ceph-deploy new admin-node
[ceph_deploy.cli][INFO ] ceph-deploy options:
[ceph_deploy.cli][INFO ] username : None
[ceph_deploy.cli][INFO ] func : <function new at 0x7f27b483f5f0>
[ceph_deploy.cli][INFO ] verbose : False
[ceph_deploy.cli][INFO ] overwrite_conf : False
[ceph_deploy.cli][INFO ] quiet : False
[ceph_deploy.cli][INFO ] cd_conf : <ceph_deploy.conf.cephdeploy.Conf instance at 0x7f27b3fbc5a8>
[ceph_deploy.cli][INFO ] cluster : ceph
[ceph_deploy.cli][INFO ] ssh_copykey : True
[ceph_deploy.cli][INFO ] mon : ['admin-node']
[ceph_deploy.cli][INFO ] public_network : None
[ceph_deploy.cli][INFO ] ceph_conf : None
[ceph_deploy.cli][INFO ] cluster_network : None
[ceph_deploy.cli][INFO ] default_release : False
[ceph_deploy.cli][INFO ] fsid : None
[ceph_deploy.new][DEBUG ] Creating new cluster named ceph
[ceph_deploy.new][INFO ] making sure passwordless SSH succeeds
[admin-node][DEBUG ] connection detected need for sudo
[admin-node][DEBUG ] connected to host: admin-node
[admin-node][DEBUG ] detect platform information from remote host
[admin-node][DEBUG ] detect machine type
[admin-node][DEBUG ] find the location of an executable
[admin-node][INFO ] Running command: sudo /usr/sbin/ip link show
[admin-node][INFO ] Running command: sudo /usr/sbin/ip addr show
[admin-node][DEBUG ] IP addresses found: [u'192.168.0.91']
[ceph_deploy.new][DEBUG ] Resolving host admin-node
[ceph_deploy.new][DEBUG ] Monitor admin-node at 192.168.0.91
[ceph_deploy.new][DEBUG ] Monitor initial members are ['admin-node']
[ceph_deploy.new][DEBUG ] Monitor addrs are ['192.168.0.91']
[ceph_deploy.new][DEBUG ] Creating a random mon key...
[ceph_deploy.new][DEBUG ] Writing monitor keyring to ceph.mon.keyring...
[ceph_deploy.new][DEBUG ] Writing initial config to ceph.conf...
After this command runs, admin-node acts as the monitor node. To have multiple mon nodes backing each other up, list the additional hosts in this command as well; those nodes must also have the Ceph packages installed.
Once the step above completes, a ceph.conf file appears in the cluster directory. Edit ceph.conf (note: mon_host must be in the same subnet as the public network) and append the following 4 lines:
osd pool default size = 3
rbd_default_features = 1
public network = 192.168.0.0/24
osd journal size = 2000
What these mean:
# osd pool default size = 3
Sets the default replica count to 3; if one replica fails, the OSDs holding the other two copies keep serving data. Note that with a replica count of 3 the cluster needs at least 3 OSDs.
# rbd_default_features = 1
Permanently changes the default RBD feature set (1 = layering only). Because this is a permanent default, make sure it is set on every node in the cluster.
# public network = 192.168.0.0/24
Sets the subnet the cluster lives on.
# osd journal size = 2000
The OSD journal size, in MB.
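For reference, after appending those four lines the [global] section of ceph.conf in the cluster directory should look roughly like this (the fsid and the three auth lines are generated by ceph-deploy new and will differ per cluster):
[global]
fsid = e76f5c31-89d6-4d5f-a302-8f1faf17655e
mon_initial_members = admin-node
mon_host = 192.168.0.91
auth_cluster_required = cephx
auth_service_required = cephx
auth_client_required = cephx
osd pool default size = 3
rbd_default_features = 1
public network = 192.168.0.0/24
osd journal size = 2000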
Install Ceph
This step takes quite a while.
[cephuser@admin-node cluster]$ ceph-deploy install admin-node ceph01 ceph02
Initialize the mon node and gather the keys from the master node
[cephuser@admin-node cluster]$ ceph-deploy mon create-initial
[ceph_deploy.conf][DEBUG ] found configuration file at: /home/cephuser/.cephdeploy.conf
[ceph_deploy.cli][INFO ] Invoked (1.5.39): /bin/ceph-deploy mon create-initial
[ceph_deploy.cli][INFO ] ceph-deploy options:
[ceph_deploy.cli][INFO ] username : None
[ceph_deploy.cli][INFO ] verbose : False
[ceph_deploy.cli][INFO ] overwrite_conf : False
[ceph_deploy.cli][INFO ] subcommand : create-initial
[ceph_deploy.cli][INFO ] quiet : False
[ceph_deploy.cli][INFO ] cd_conf : <ceph_deploy.conf.cephdeploy.Conf instance at 0x7fbc00e79ef0>
[ceph_deploy.cli][INFO ] cluster : ceph
[ceph_deploy.cli][INFO ] func : <function mon at 0x7fbc00e566e0>
[ceph_deploy.cli][INFO ] ceph_conf : None
[ceph_deploy.cli][INFO ] default_release : False
[ceph_deploy.cli][INFO ] keyrings : None
[ceph_deploy.mon][DEBUG ] Deploying mon, cluster ceph hosts admin-node
[ceph_deploy.mon][DEBUG ] detecting platform for host admin-node ...
[admin-node][DEBUG ] connection detected need for sudo
[admin-node][DEBUG ] connected to host: admin-node
......
[admin-node][INFO ] Running command: sudo /usr/bin/ceph --connect-timeout=25 --cluster=ceph --name mon. --keyring=/var/lib/ceph/mon/ceph-admin-node/keyring auth get client.bootstrap-rgw
[ceph_deploy.gatherkeys][INFO ] Storing ceph.client.admin.keyring
[ceph_deploy.gatherkeys][INFO ] Storing ceph.bootstrap-mds.keyring
[ceph_deploy.gatherkeys][INFO ] Storing ceph.bootstrap-mgr.keyring
[ceph_deploy.gatherkeys][INFO ] keyring 'ceph.mon.keyring' already exists
[ceph_deploy.gatherkeys][INFO ] Storing ceph.bootstrap-osd.keyring
[ceph_deploy.gatherkeys][INFO ] Storing ceph.bootstrap-rgw.keyring
[ceph_deploy.gatherkeys][INFO ] Destroy temp directory /tmp/tmpmLTa1b
Create the OSDs from the master node:
Prepare the OSDs
[cephuser@admin-node cluster]$ ceph-deploy osd prepare admin-node:/ceph ceph01:/ceph ceph02:/ceph
Activate the OSDs
Before activating, change the ownership of the /ceph directory to the ceph user: since the Infernalis release, Ceph daemons run as the ceph user instead of root, and the OSD data directory (/ceph here, mounted before prepare/activate) was most likely created by another user such as root.
[cephuser@admin-node cluster]$ sudo chown -R ceph:ceph /ceph
[cephuser@admin-node cluster]$ ceph-deploy osd activate admin-node:/ceph ceph01:/ceph ceph02:/ceph
Create the mon node from the master: it monitors the cluster state and also manages the cluster and the MDS
[cephuser@admin-node cluster]$ ceph-deploy mon create admin-node
To manage cluster nodes from another machine, install ceph-deploy on that node as well, since ceph-deploy is the tool used to manage Ceph nodes.
To add more mon hosts later, use ceph-deploy mon add <hostname>, for example as sketched below.
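A sketch, assuming ceph01 were also to run a monitor (the new mon must be reachable on the public network configured above):
[cephuser@admin-node cluster]$ ceph-deploy mon add ceph01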
Adjust the keyring permissions
[cephuser@admin-node cluster]$ sudo chmod 644 /etc/ceph/ceph.client.admin.keyring
Check the Ceph status
[cephuser@admin-node cluster]$ sudo ceph health
HEALTH_OK
[cephuser@admin-node cluster]$ sudo ceph -s
cluster e76f5c31-89d6-4d5f-a302-8f1faf17655e
health HEALTH_OK
monmap e1: 1 mons at {admin-node=192.168.0.91:6789/0}
election epoch 4, quorum 0 admin-node
osdmap e15: 3 osds: 3 up, 3 in
flags sortbitwise,require_jewel_osds
pgmap v31: 64 pgs, 1 pools, 0 bytes data, 0 objects
6321 MB used, 85790 MB / 92112 MB avail
64 active+clean
Check the OSD running status
[cephuser@admin-node cluster]$ ceph osd stat
osdmap e15: 3 osds: 3 up, 3 in
flags sortbitwise,require_jewel_osds
View the cluster's disk usage
[cephuser@admin-node cluster]$ ceph df
GLOBAL:
SIZE AVAIL RAW USED %RAW USED
92112M 85790M 6321M 6.86
POOLS:
NAME ID USED %USED MAX AVAIL OBJECTS
rbd 0 0 0 27061M 0
The three servers with one 30 GB disk each add up to roughly 90 GB of raw capacity.
View the OSD tree
[cephuser@admin-node cluster]$ ceph osd tree
ID WEIGHT TYPE NAME UP/DOWN REWEIGHT PRIMARY-AFFINITY
-1 0.08789 root default
-2 0.02930 host admin-node
0 0.02930 osd.0 up 1.00000 1.00000
-3 0.02930 host ceph01
1 0.02930 osd.1 up 1.00000 1.00000
-4 0.02930 host ceph02
2 0.02930 osd.2 up 1.00000 1.00000
List the OSDs and where their data is stored
[cephuser@admin-node cluster]$ ceph-deploy osd list admin-node ceph01 ceph02
[ceph_deploy.conf][DEBUG ] found configuration file at: /home/cephuser/.cephdeploy.conf
[ceph_deploy.cli][INFO ] Invoked (1.5.39): /bin/ceph-deploy osd list admin-node ceph01 ceph02
[ceph_deploy.cli][INFO ] ceph-deploy options:
[ceph_deploy.cli][INFO ] username : None
[ceph_deploy.cli][INFO ] verbose : False
[ceph_deploy.cli][INFO ] overwrite_conf : False
[ceph_deploy.cli][INFO ] subcommand : list
[ceph_deploy.cli][INFO ] quiet : False
[ceph_deploy.cli][INFO ] cd_conf : <ceph_deploy.conf.cephdeploy.Conf instance at 0x7f614b4ba170>
[ceph_deploy.cli][INFO ] cluster : ceph
[ceph_deploy.cli][INFO ] func : <function osd at 0x7f614b4fdf50>
[ceph_deploy.cli][INFO ] ceph_conf : None
[ceph_deploy.cli][INFO ] default_release : False
[ceph_deploy.cli][INFO ] disk : [('admin-node', None, None), ('ceph01', None, None), ('ceph02', None, None)]
[admin-node][DEBUG ] connection detected need for sudo
[admin-node][DEBUG ] connected to host: admin-node
[admin-node][DEBUG ] detect platform information from remote host
[admin-node][DEBUG ] detect machine type
[admin-node][DEBUG ] find the location of an executable
[admin-node][DEBUG ] find the location of an executable
[admin-node][INFO ] Running command: sudo /bin/ceph --cluster=ceph osd tree --format=json
[admin-node][DEBUG ] connection detected need for sudo
[admin-node][DEBUG ] connected to host: admin-node
[admin-node][DEBUG ] detect platform information from remote host
[admin-node][DEBUG ] detect machine type
[admin-node][DEBUG ] find the location of an executable
[admin-node][INFO ] Running command: sudo /usr/sbin/ceph-disk list
[admin-node][INFO ] ----------------------------------------
[admin-node][INFO ] ceph-0
[admin-node][INFO ] ----------------------------------------
[admin-node][INFO ] Path /var/lib/ceph/osd/ceph-0
[admin-node][INFO ] ID 0
[admin-node][INFO ] Name osd.0
[admin-node][INFO ] Status up
[admin-node][INFO ] Reweight 1.0
[admin-node][INFO ] Active ok
[admin-node][INFO ] Magic ceph osd volume v026
[admin-node][INFO ] Whoami 0
[admin-node][INFO ] Journal path /ceph/journal
[admin-node][INFO ] ----------------------------------------
[ceph01][DEBUG ] connection detected need for sudo
[ceph01][DEBUG ] connected to host: ceph01
[ceph01][DEBUG ] detect platform information from remote host
[ceph01][DEBUG ] detect machine type
[ceph01][DEBUG ] find the location of an executable
[ceph01][INFO ] Running command: sudo /usr/sbin/ceph-disk list
[ceph01][INFO ] ----------------------------------------
[ceph01][INFO ] ceph-1
[ceph01][INFO ] ----------------------------------------
[ceph01][INFO ] Path /var/lib/ceph/osd/ceph-1
[ceph01][INFO ] ID 1
[ceph01][INFO ] Name osd.1
[ceph01][INFO ] Status up
[ceph01][INFO ] Reweight 1.0
[ceph01][INFO ] Active ok
[ceph01][INFO ] Magic ceph osd volume v026
[ceph01][INFO ] Whoami 1
[ceph01][INFO ] Journal path /ceph/journal
[ceph01][INFO ] ----------------------------------------
[ceph02][DEBUG ] connection detected need for sudo
[ceph02][DEBUG ] connected to host: ceph02
[ceph02][DEBUG ] detect platform information from remote host
[ceph02][DEBUG ] detect machine type
[ceph02][DEBUG ] find the location of an executable
[ceph02][INFO ] Running command: sudo /usr/sbin/ceph-disk list
[ceph02][INFO ] ----------------------------------------
[ceph02][INFO ] ceph-2
[ceph02][INFO ] ----------------------------------------
[ceph02][INFO ] Path /var/lib/ceph/osd/ceph-2
[ceph02][INFO ] ID 2
[ceph02][INFO ] Name osd.2
[ceph02][INFO ] Status up
[ceph02][INFO ] Reweight 1.0
[ceph02][INFO ] Active ok
[ceph02][INFO ] Magic ceph osd volume v026
[ceph02][INFO ] Whoami 2
[ceph02][INFO ] Journal path /ceph/journal
[ceph02][INFO ] ----------------------------------------
Check the mon quorum status
[cephuser@admin-node cluster]$ ceph quorum_status --format json-pretty
{
"election_epoch": 4,
"quorum": [
0
],
"quorum_names": [
"admin-node"
],
"quorum_leader_name": "admin-node",
"monmap": {
"epoch": 1,
"fsid": "e76f5c31-89d6-4d5f-a302-8f1faf17655e",
"modified": "2020-09-28 15:18:13.294522",
"created": "2020-09-28 15:18:13.294522",
"mons": [
{
"rank": 0,
"name": "admin-node",
"addr": "192.168.0.91:6789\/0"
}
]
}
}
Create the CephFS filesystem
First check the metadata server (MDS) status; by default there is none
[cephuser@admin-node ~]$ ceph mds stat
e1:
Create the MDS (admin-node acts as the metadata server)
Note: without an MDS, clients will not be able to mount the Ceph filesystem!
[cephuser@admin-node ~]$ cd /home/cephuser/cluster/
[cephuser@admin-node cluster]$ ceph-deploy mds create admin-node
Check the MDS status again; a daemon is now up in standby
[cephuser@admin-node cluster]$ ceph mds stat
e2:, 1 up:standby
Create the pools. A pool is a logical partition of Ceph storage and acts as a namespace.
[cephuser@admin-node cluster]$ ceph osd lspools #list the existing pools first
0 rbd,
A freshly created cluster has only the rbd pool, so create a new one. A common rule of thumb for the PG count: total PGs ≈ (OSDs × 100) / replica size, rounded up to a power of two; with 3 OSDs and 3 replicas that gives 100, rounded up to 128. This budget is shared by all pools, which is why the cluster later warns about too many PGs per OSD.
[cephuser@admin-node cluster]$ ceph osd pool create cephfs_data 128 #the trailing number is the PG count
pool 'cephfs_data' created
## Command reference ##
# create a pool
ceph osd pool create [pool name] 128
# delete a pool (the name must be given twice, plus a confirmation flag)
ceph osd pool delete [pool name] [pool name] --yes-i-really-really-mean-it
# change the replica count
ceph osd pool set [pool name] size 2
###
[cephuser@admin-node cluster]$ ceph osd pool create cephfs_metadata 128 #a pool for the CephFS metadata
pool 'cephfs_metadata' created
[cephuser@admin-node cluster]$ ceph fs new mycephfs cephfs_metadata cephfs_data
new fs with metadata pool 2 and data pool 1
List the pools again
[cephuser@admin-node cluster]$ ceph osd lspools
0 rbd,1 cephfs_data,2 cephfs_metadata,
Check the MDS status
[cephuser@admin-node cluster]$ ceph mds stat
e5: 1/1/1 up {0=admin-node=up:active}
Check the Ceph cluster status
[cephuser@admin-node cluster]$ ceph -s
cluster e76f5c31-89d6-4d5f-a302-8f1faf17655e
health HEALTH_WARN
too many PGs per OSD (320 > max 300)
monmap e1: 1 mons at {admin-node=192.168.0.91:6789/0}
election epoch 5, quorum 0 admin-node
fsmap e5: 1/1/1 up {0=admin-node=up:active}
osdmap e29: 3 osds: 3 up, 3 in
flags sortbitwise,require_jewel_osds
pgmap v248: 320 pgs, 3 pools, 2068 bytes data, 20 objects
6323 MB used, 85788 MB / 92112 MB avail
320 active+clean
Mount CephFS into a directory on a server
Since my CephFS storage holds the logs of Java applications running in pods, mounting it on a server makes the logs easy to inspect.
Look up and record the admin user's key (the admin user exists by default and does not need to be created):
[cephuser@admin-node cluster]$ ceph auth get-key client.admin
AQA2jnFfBT8QBRAAd504A08U+Jlk/pt6TkcS4Q==
View the user's capabilities
[cephuser@admin-node cluster]$ ceph auth get client.admin
exported keyring for client.admin
[client.admin]
key = AQA2jnFfBT8QBRAAd504A08U+Jlk/pt6TkcS4Q==
caps mds = "allow *"
caps mon = "allow *"
caps osd = "allow *"
List the Ceph auth entries
[cephuser@admin-node cluster]$ ceph auth list
installed auth entries:
mds.admin-node
key: AQCj+XNfUd6qLBAAERbdVdkbeV3AmKyxvDLDyA==
caps: [mds] allow
caps: [mon] allow profile mds
caps: [osd] allow rwx
osd.0
key: AQAZqHJfsM7fCxAAX4BRw+HxNAPMezGpKKTafw==
caps: [mon] allow profile osd
caps: [osd] allow *
osd.1
key: AQDMqHJfJxAIAhAAyccoTUr5PciteZ5X/vTRLw==
caps: [mon] allow profile osd
caps: [osd] allow *
osd.2
key: AQDXqHJfkilwCBAAtw2QqKZ6LATMijK/8Ggi7Q==
caps: [mon] allow profile osd
caps: [osd] allow *
client.admin
key: AQA2jnFfBT8QBRAAd504A08U+Jlk/pt6TkcS4Q==
caps: [mds] allow *
caps: [mon] allow *
caps: [osd] allow *
client.bootstrap-mds
key: AQA2jnFfRdOMLRAA9LuyGs5YsZYfH+Mu4wQrsA==
caps: [mon] allow profile bootstrap-mds
client.bootstrap-mgr
key: AQA5jnFfp87tDxAAtx4rvkE7OIobER2iKQDyew==
caps: [mon] allow profile bootstrap-mgr
client.bootstrap-osd
key: AQA2jnFfocEJFRAArMJYW9Bjd8B3zT/PpsRZJg==
caps: [mon] allow profile bootstrap-osd
client.bootstrap-rgw
key: AQA2jnFfrOQqIRAAmH3v7pvVNBmy5CqxtxeNsg==
caps: [mon] allow profile bootstrap-rgw
Mount on the client machine
The client needs the Ceph packages installed; the yum repository setup is the same as above and is not repeated here.
[root@localhost ~]# yum -y install ceph
[root@localhost ~]# mkdir /mnt/cephfs
[root@localhost ~]# mount.ceph 192.168.0.91:6789:/ /mnt/cephfs/ -o name=admin,secret=AQA2jnFfBT8QBRAAd504A08U+Jlk/pt6TkcS4Q==
[root@localhost ~]# df -h
Filesystem Size Used Avail Use% Mounted on
/dev/sda1 50G 2.0G 48G 4% /
devtmpfs 1.9G 0 1.9G 0% /dev
tmpfs 1.9G 0 1.9G 0% /dev/shm
tmpfs 1.9G 12M 1.9G 1% /run
tmpfs 1.9G 0 1.9G 0% /sys/fs/cgroup
tmpfs 378M 0 378M 0% /run/user/0
192.168.0.91:6789:/ 90G 6.2G 84G 7% /mnt/cephfs
Then add the following entry to /etc/fstab so it is mounted at boot:
192.168.0.91:6789:/ /mnt/cephfs/ ceph name=admin,secret=AQA2jnFfBT8QBRAAd504A08U+Jlk/pt6TkcS4Q== 0 0
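If you prefer not to keep the key in plain text inside /etc/fstab, mount.ceph also accepts a secretfile option; a sketch, with /etc/ceph/admin.secret as an illustrative path:
[root@localhost ~]# echo 'AQA2jnFfBT8QBRAAd504A08U+Jlk/pt6TkcS4Q==' > /etc/ceph/admin.secret
[root@localhost ~]# chmod 600 /etc/ceph/admin.secret
192.168.0.91:6789:/ /mnt/cephfs/ ceph name=admin,secretfile=/etc/ceph/admin.secret,_netdev 0 0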
Create the Provisioner
Notes:
1. Ceph must be installed on both the k8s master and the worker nodes; since I use the StorageClass mode later, ceph-common is also required on the k8s master, otherwise it will report errors.
2. Only CephFS (i.e. file storage) supports ReadWriteMany, which is why the CephFS filesystem was created above.
3. When using CephFS storage in k8s, ceph-fuse must be installed on the nodes, otherwise mounting will fail (a sample install command follows this list).
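For example, on each k8s node (assuming the same yum repositories configured earlier are also set up there; the hostname is illustrative):
[root@k8s-node ~]# yum -y install ceph-common ceph-fuse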
Create the secret
ceph auth get-key client.admin > /opt/secret
kubectl create ns cephfs
kubectl create secret generic ceph-secret-admin --from-file=/opt/secret --namespace=cephfs
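To confirm the secret landed in the right namespace:
kubectl -n cephfs get secret ceph-secret-admin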
Deploy the Provisioner
git clone https://github.com/kubernetes-retired/external-storage.git
cd external-storage/ceph/cephfs/deploy
NAMESPACE=cephfs
sed -r -i "s/namespace: [^ ]+/namespace: $NAMESPACE/g" ./rbac/*.yaml
sed -r -i "N;s/(name: PROVISIONER_SECRET_NAMESPACE.*\n[[:space:]]*)value:.*/\1value: $NAMESPACE/" ./rbac/deployment.yaml
kubectl -n $NAMESPACE apply -f ./rbac
Check that the provisioner pod is running
kubectl get pods -n cephfs
Create the StorageClass
cat > storageclass.yaml <<EOF
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: cephfs
provisioner: ceph.com/cephfs
parameters:
  monitors: 192.168.0.91:6789   # replace with your own Ceph monitor address
  adminId: admin
  adminSecretName: ceph-secret-admin
  adminSecretNamespace: cephfs
  claimRoot: /logs
EOF
kubectl apply -f storageclass.yaml
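A quick check that the StorageClass is registered:
kubectl get storageclass cephfs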
Create a PVC
cat > pvc.yaml <<EOF
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: claim1
spec:
  storageClassName: cephfs
  accessModes:
  - ReadWriteMany
  resources:
    requests:
      storage: 2Gi
EOF
kubectl apply -f pvc.yaml
Check the PVC
kubectl get pvc
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
claim1 Bound pvc-7c1740a3-a3ab-4889-9d94-e50c736701fd 2Gi RWX cephfs 27m
With the StorageClass and PVC in place, my Java applications can mount this PVC directly for their log directories, and multiple pods can use the same PVC at the same time, which makes log management and collection much easier; a sketch of such a pod follows below.
Note: to stress it once more, only Ceph's file storage, CephFS, supports ReadWriteMany.
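As an illustration of how a pod would consume the PVC (the pod name, image, and paths below are hypothetical, not taken from my actual deployment):
cat > log-writer-pod.yaml <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: log-writer
spec:
  containers:
  - name: app
    image: busybox
    command: ["sh", "-c", "while true; do date >> /logs/app.log; sleep 60; done"]
    volumeMounts:
    - name: logs
      mountPath: /logs
  volumes:
  - name: logs
    persistentVolumeClaim:
      claimName: claim1
EOF
kubectl apply -f log-writer-pod.yaml
Because the access mode is ReadWriteMany, several pods can mount claim1 at the same time.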