Configuring a Storage Server on CentOS 7
I. Configure NFS Server
Configure the NFS service to share directories over a local network.
1. Configure NFS Server
Configure NFS Server to share directories on your Network.
This example is based on the environment below.
+----------------------+ | +----------------------+
| [ NFS Server ] |10.0.0.30 | 10.0.0.31| [ NFS Client ] |
| dlp.srv.world +----------+----------+ www.srv.world |
| | | |
+----------------------+ +----------------------+
[1] Configure NFS Server.
[root@dlp ~]# yum -y install nfs-utils
[root@dlp ~]# vi /etc/idmapd.conf
# line 5: uncomment and change to your domain name
Domain = srv.world
[root@dlp ~]# vi /etc/exports
# write settings for NFS exports
/home 10.0.0.0/24(rw,no_root_squash)
[root@dlp ~]# systemctl start rpcbind nfs-server
[root@dlp ~]# systemctl enable rpcbind nfs-server
[2] If Firewalld is running, allow NFS service.
[root@dlp ~]# firewall-cmd --add-service=nfs --permanent
success
[root@dlp ~]# firewall-cmd --reload
success
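If you change /etc/exports later, the new settings can be applied and verified without restarting the services. A quick check (output will vary with your settings):
# re-export all directories in /etc/exports
[root@dlp ~]# exportfs -ra
# show current exports with their active options
[root@dlp ~]# exportfs -v
# list exports as a client would see them
[root@dlp ~]# showmount -e localhost
Export list for localhost:
/home 10.0.0.0/24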
Basic options for /etc/exports are as follows.
Option Description
rw Allow both read and write requests on an NFS volume.
ro Allow only read requests on an NFS volume.
sync Reply to requests only after the changes have been committed to stable storage. (Default)
async This option allows the NFS server to violate the NFS protocol and reply to requests before any changes made by that request have been committed to stable storage.
secure This option requires that requests originate on an Internet port less than IPPORT_RESERVED (1024). (Default)
insecure This option accepts all ports.
wdelay Delay committing a write request to disc slightly if it suspects that another related write request may be in progress or may arrive soon. (Default)
no_wdelay This option has no effect if async is also set. The NFS server will normally delay committing a write request to disc slightly if it suspects that another related write request may be in progress or may arrive soon. This allows multiple write requests to be committed to disc with the one operation which can improve performance. If an NFS server received mainly small unrelated requests, this behaviour could actually reduce performance, so no_wdelay is available to turn it off.
subtree_check This option enables subtree checking. (Default)
no_subtree_check This option disables subtree checking, which has mild security implications, but can improve reliability in some circumstances.
root_squash Map requests from uid/gid 0 to the anonymous uid/gid. Note that this does not apply to any other uids or gids that might be equally sensitive, such as user bin or group staff.
no_root_squash Turn off root squashing. This option is mainly useful for disk-less clients.
all_squash Map all uids and gids to the anonymous user. Useful for NFS exported public FTP directories, news spool directories, etc.
no_all_squash Turn off all squashing. (Default)
anonuid=UID These options explicitly set the uid and gid of the anonymous account. This option is primarily useful for PC/NFS clients, where you might want all requests to appear to be from one user (see the illustrative /home/joe entry after this table, which maps all requests to uid 150).
anongid=GID Same as above, but for the gid (see anonuid=UID).
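As a concrete illustration of the squash options above, the following hypothetical entry (not part of the environment in this chapter) exports /home/joe so that every request is mapped to uid 150 / gid 100:
/home/joe 10.0.0.0/24(rw,all_squash,anonuid=150,anongid=100)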
2. Configure NFS Client
This example is based on the environment below.
+----------------------+ | +----------------------+
| [ NFS Server ] |10.0.0.30 | 10.0.0.31| [ NFS Client ] |
| dlp.srv.world +----------+----------+ www.srv.world |
| | | |
+----------------------+ +----------------------+
[1] Configure NFS Client.
[root@www ~]# yum -y install nfs-utils
[root@www ~]# vi /etc/idmapd.conf
# line 5: uncomment and change to your domain name
Domain = srv.world
[root@www ~]# systemctl start rpcbind
[root@www ~]# systemctl enable rpcbind
[root@www ~]# mount -t nfs dlp.srv.world:/home /home
[root@www ~]# df -hT
Filesystem Type Size Used Avail Use% Mounted on
/dev/mapper/centos-root xfs 46G 1.4G 45G 4% /
devtmpfs devtmpfs 1.9G 0 1.9G 0% /dev
tmpfs tmpfs 1.9G 0 1.9G 0% /dev/shm
tmpfs tmpfs 1.9G 8.3M 1.9G 1% /run
tmpfs tmpfs 1.9G 0 1.9G 0% /sys/fs/cgroup
/dev/vda1 xfs 497M 219M 278M 45% /boot
dlp.srv.world:/home nfs4 46G 1.4G 45G 4% /home
# /home from NFS server is mounted
[2] Add the NFS mount to /etc/fstab so it is mounted when the system boots.
[root@www ~]# vi /etc/fstab
/dev/mapper/centos-root / xfs defaults 1 1
UUID=a18716b4-cd67-4aec-af91-51be7bce2a0b /boot xfs defaults 1 2
/dev/mapper/centos-swap swap swap defaults 0 0
# add the following to the end
dlp.srv.world:/home /home nfs defaults 0 0
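If the NFS server may be unreachable at boot, mount options such as [_netdev] or [bg] keep the boot from blocking on the mount; an illustrative variant of the same fstab line:
dlp.srv.world:/home /home nfs defaults,_netdev,bg 0 0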
[3] Configure auto-mounting with autofs. For example, mount the NFS directory on /mntdir.
[root@www ~]# yum -y install autofs
[root@www ~]# vi /etc/auto.master
# add the following to the end
/- /etc/auto.mount
[root@www ~]# vi /etc/auto.mount
# create new : [mount point] [option] [location]
/mntdir -fstype=nfs,rw dlp.srv.world:/home
[root@www ~]# mkdir /mntdir
[root@www ~]# systemctl start autofs
[root@www ~]# systemctl enable autofs
# move to the mount point to make sure it is mounted normally
[root@www ~]# cd /mntdir
[root@www mntdir]# ll
total 0
drwx------ 2 cent cent 59 Jul 9 2014 cent
[root@www mntdir]# cat /proc/mounts | grep mntdir
/etc/auto.mount /mntdir autofs rw,relatime,fd=18,pgrp=2093,timeout=300,minproto=5,maxproto=5,direct 0 0
dlp.srv.world:/home /mntdir nfs4 rw,relatime,vers=4.0,rsize=524288,wsize=524288,namlen=255,hard,proto=tcp,
port=0,timeo=600,retrans=2,sec=sys,clientaddr=10.0.0.31,local_lock=none,addr=10.0.0.30 0 0
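The [timeout=300] shown above is autofs's default idle timeout (in seconds). It can be tuned per map in /etc/auto.master; for example, to unmount after 60 seconds of inactivity:
[root@www ~]# vi /etc/auto.master
# append the timeout option to the map entry
/- /etc/auto.mount --timeout=60
[root@www ~]# systemctl restart autofs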
II. Configure iSCSI Target
1. Configure Storage Server with iSCSI (targetcli)
Storage shared over a network is called an iSCSI Target; a client that connects to an iSCSI Target is called an iSCSI Initiator.
This example is based on the environment below.
+----------------------+ | +----------------------+
| [ iSCSI Target ] |10.0.0.30 | 10.0.0.31| [ iSCSI Initiator ] |
| dlp.srv.world +----------+----------+ www.srv.world |
| | | |
+----------------------+ +----------------------+
[1] Install administration tools first.
[root@dlp ~]# yum -y install targetcli
[2] Configure iSCSI Target.
For example, create a disk image under the /iscsi_disks directory and set it as a SCSI device.
# create a directory
[root@dlp ~]# mkdir /iscsi_disks
# enter the admin console
[root@dlp ~]# targetcli
targetcli shell version 2.1.fb34
Copyright 2011-2013 by Datera, Inc and others.
For help on commands, type 'help'.
/> cd backstores/fileio
# create a disk-image with the name "disk01" on /iscsi_disks/disk01.img with 10G
/backstores/fileio> create disk01 /iscsi_disks/disk01.img 10G
Created fileio disk01 with size 10737418240
/backstores/fileio> cd /iscsi
# create a target
/iscsi> create iqn.2014-07.world.srv:storage.target00
Created target iqn.2014-07.world.srv:storage.target00.
Created TPG 1.
Global pref auto_add_default_portal=true
Created default portal listening on all IPs (0.0.0.0), port 3260.
/iscsi> cd iqn.2014-07.world.srv:storage.target00/tpg1/luns
# set LUN
/iscsi/iqn.20...t00/tpg1/luns> create /backstores/fileio/disk01
Created LUN 0.
/iscsi/iqn.20...t00/tpg1/luns> cd ../acls
# set ACL (it's the IQN of an initiator you permit to connect)
/iscsi/iqn.20...t00/tpg1/acls> create iqn.2014-07.world.srv:www.srv.world
Created Node ACL for iqn.2014-07.world.srv:www.srv.world
Created mapped LUN 0.
/iscsi/iqn.20...t00/tpg1/acls> cd iqn.2014-07.world.srv:www.srv.world
# set UserID for authentication
/iscsi/iqn.20....srv.world> set auth userid=username
Parameter userid is now 'username'.
/iscsi/iqn.20....srv.world> set auth password=password
Parameter password is now 'password'.
/iscsi/iqn.20....srv.world> exit
Global pref auto_save_on_exit=true
Last 10 configs saved in /etc/target/backup.
Configuration saved to /etc/target/saveconfig.json
# after the configuration above, the target is listening as follows
[root@dlp ~]# ss -napt | grep 3260
LISTEN 0 256 *:3260 *:*
[root@dlp ~]# systemctl enable target
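The saved configuration tree can also be reviewed at any time without entering the interactive console:
# prints backstores, targets, LUNs, ACLs and portals as a tree
[root@dlp ~]# targetcli ls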
[3] If Firewalld is running, allow iSCSI Target service.
[root@dlp ~]# firewall-cmd --add-service=iscsi-target --permanent
success
[root@dlp ~]# firewall-cmd --reload
success
2. Configure iSCSI Target (tgt)
This is an example of configuring an iSCSI Target with scsi-target-utils.
[1] Install scsi-target-utils.
# install from EPEL
[root@dlp ~]# yum --enablerepo=epel -y install scsi-target-utils
Note: on CentOS 7.3 the package above may fail to install; in that case, skip this tgt-based method and its commands (the targetcli method in the previous section works).
[2] Configure iSCSI Target.
For example, create a disk image under the [/iscsi_disks] directory and set it as a shared disk.
# create a disk image
[root@dlp ~]# mkdir /iscsi_disks
[root@dlp ~]# dd if=/dev/zero of=/iscsi_disks/disk01.img count=0 bs=1 seek=10G
[root@dlp ~]# vi /etc/tgt/targets.conf
# add the following to the end
# if you define more devices, add another <target>...</target> block in the same way
# naming rule : [ iqn.year-month.domain:any name ]
<target iqn.2015-12.world.srv:target00>
    # provided device as an iSCSI target
    backing-store /iscsi_disks/disk01.img
    # iSCSI Initiator's IP address you allow to connect
    initiator-address 10.0.0.31
    # authentication info ( set anyone you like for "username", "password" )
    incominguser username password
</target>
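If you edit targets.conf again while tgtd is already running (it is started in step [5] below), the configuration can be re-read without restarting the daemon:
[root@dlp ~]# tgt-admin --update ALL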
[3] If SELinux is enabled, change SELinux Context.
[root@dlp ~]# chcon -R -t tgtd_var_lib_t /iscsi_disks
[root@dlp ~]# semanage fcontext -a -t tgtd_var_lib_t /iscsi_disks
[4] If Firewalld is running, allow iSCSI Target service.
[root@dlp ~]# firewall-cmd --add-service=iscsi-target --permanent
success
[root@dlp ~]# firewall-cmd --reload
success
[5] Start tgtd and verify status.
[root@dlp ~]# systemctl start tgtd
[root@dlp ~]# systemctl enable tgtd
# show status
[root@dlp ~]# tgtadm --mode target --op show
Target 1: iqn.2015-12.world.srv:target00
System information:
Driver: iscsi
State: ready
I_T nexus information:
LUN information:
LUN: 0
Type: controller
SCSI ID: IET 00010000
SCSI SN: beaf10
Size: 0 MB, Block size: 1
Online: Yes
Removable media: No
Prevent removal: No
Readonly: No
SWP: No
Thin-provisioning: No
Backing store type: null
Backing store path: None
Backing store flags:
LUN: 1
Type: disk
SCSI ID: IET 00010001
SCSI SN: beaf11
Size: 10737 MB, Block size: 512
Online: Yes
Removable media: No
Prevent removal: No
Readonly: No
SWP: No
Thin-provisioning: No
Backing store type: rdwr
Backing store path: /iscsi_disks/disk01.img
Backing store flags:
Account information:
username
ACL information:
10.0.0.31
3. Configure iSCSI Initiator
This example is based on the environment below.
+----------------------+ | +----------------------+
| [ iSCSI Target ] |10.0.0.30 | 10.0.0.31| [ iSCSI Initiator ] |
| dlp.srv.world +----------+----------+ www.srv.world |
| | | |
+----------------------+ +----------------------+
[1] Configure iSCSI Initiator.
[root@www ~]# yum -y install iscsi-initiator-utils
[root@www ~]# vi /etc/iscsi/initiatorname.iscsi
# change to the same IQN you set on the iSCSI target server
InitiatorName=iqn.2014-07.world.srv:www.srv.world
[root@www ~]# vi /etc/iscsi/iscsid.conf
# line 57: uncomment
node.session.auth.authmethod = CHAP
# line 61,62: uncomment and specify the username and password you set on the iSCSI target server
node.session.auth.username = username
node.session.auth.password = password
# restart the iscsid service to apply the changes
[root@www ~]# systemctl restart iscsid
# discover target
[root@www ~]# iscsiadm -m discovery -t sendtargets -p 10.0.0.30
[ 635.510656] iscsi: registered transport (tcp)
10.0.0.30:3260,1 iqn.2014-07.world.srv:storage.target00
# confirm status after discovery
[root@www ~]# iscsiadm -m node -o show
# BEGIN RECORD 6.2.0.873-21
node.name = iqn.2014-07.world.srv:storage.target00
node.tpgt = 1
node.startup = automatic
node.leading_login = No
...
...
...
node.conn[0].iscsi.IFMarker = No
node.conn[0].iscsi.OFMarker = No
# END RECORD
# login to the target
[root@www ~]# iscsiadm -m node --login
Logging in to [iface: default, target: iqn.2014-07.world.srv:storage.target00, portal: 10.0.0.30,3260] (multiple)
[ 708.383308] scsi2 : iSCSI Initiator over TCP/IP
[ 709.393277] scsi 2:0:0:0: Direct-Access LIO-ORG disk01 4.0 PQ: 0 ANSI: 5
[ 709.395709] scsi 2:0:0:0: alua: supports implicit and explicit TPGS
[ 709.398155] scsi 2:0:0:0: alua: port group 00 rel port 01
[ 709.399762] scsi 2:0:0:0: alua: port group 00 state A non-preferred supports TOlUSNA
[ 709.401763] scsi 2:0:0:0: alua: Attached
[ 709.402910] scsi 2:0:0:0: Attached scsi generic sg0 type 0
Login to [iface: default, target: iqn.2014-07.world.srv:storage.target00, portal: 10.0.0.30,3260] successful.
# confirm the established session
[root@www ~]# iscsiadm -m session -o show
tcp: [1] 10.0.0.30:3260,1 iqn.2014-07.world.srv:storage.target00 (non-flash)
# confirm the partitions
[root@www ~]# cat /proc/partitions
major minor #blocks name
252 0 52428800 sda
252 1 512000 sda1
252 2 51915776 sda2
253 0 4079616 dm-0
253 1 47833088 dm-1
8 0 20971520 sdb
# added new device provided from the target server as "sdb"
[2] After attaching the iSCSI device, configure it on the Initiator as follows.
# create label
[root@www ~]# parted --script /dev/sdb "mklabel msdos"
# create partition
[root@www ~]# parted --script /dev/sdb "mkpart primary 0% 100%"
# format with XFS
[root@www ~]# mkfs.xfs -i size=1024 -s size=4096 /dev/sdb1
meta-data=/dev/sdb1 isize=1024 agcount=16, agsize=327616 blks
= sectsz=4096 attr=2, projid32bit=1
= crc=0
data = bsize=4096 blocks=5241856, imaxpct=25
= sunit=0 swidth=0 blks
naming =version 2 bsize=4096 ascii-ci=0 ftype=0
log =internal log bsize=4096 blocks=2560, version=2
= sectsz=4096 sunit=1 blks, lazy-count=1
realtime =none extsz=4096 blocks=0, rtextents=0
# mount it
[root@www ~]# mount /dev/sdb1 /mnt
[ 6894.010661] XFS (sdb1): Mounting Filesystem
[ 6894.031358] XFS (sdb1): Ending clean mount
[root@www ~]# df -hT
Filesystem Type Size Used Avail Use% Mounted on
/dev/mapper/centos-root xfs 46G 1023M 45G 3% /
devtmpfs devtmpfs 1.9G 0 1.9G 0% /dev
tmpfs tmpfs 1.9G 0 1.9G 0% /dev/shm
tmpfs tmpfs 1.9G 8.3M 1.9G 1% /run
tmpfs tmpfs 1.9G 0 1.9G 0% /sys/fs/cgroup
/dev/sda1 xfs 497M 120M 378M 25% /boot
/dev/sdb1 xfs 20G 33M 20G 1% /mnt
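To make this mount persistent across reboots, add it to /etc/fstab with the [_netdev] option so that mounting waits for the network and the iSCSI login. Since iSCSI device names can change between boots, it is safer to reference the filesystem UUID reported by blkid; a sketch for the partition created above:
# find the UUID of the new filesystem
[root@www ~]# blkid /dev/sdb1
# add to the end of /etc/fstab (replace the UUID with the value from blkid)
UUID=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx /mnt xfs _netdev 0 0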
III. Ceph : Configure Ceph Cluster
1. Install the Distributed File System "Ceph" to Configure a Storage Cluster
In this example, configure a cluster with 1 admin node and 3 storage nodes as follows.
|
+--------------------+ | +-------------------+
| [dlp.srv.world] |10.0.0.30 | 10.0.0.x| [ Client ] |
| Ceph-Deploy +-----------+-----------+ |
| | | | |
+--------------------+ | +-------------------+
+----------------------------+----------------------------+
| | |
|10.0.0.51 |10.0.0.52 |10.0.0.53
+-----------+-----------+ +-----------+-----------+ +-----------+-----------+
| [node01.srv.world] | | [node02.srv.world] | | [node03.srv.world] |
| Object Storage +----+ Object Storage +----+ Object Storage |
| Monitor Daemon | | | | |
| | | | | |
+-----------------------+ +-----------------------+ +-----------------------+
[1] Add a Ceph admin user on all nodes. This example uses a user named "cent".
[2] Grant root privileges to the Ceph admin user added above via sudo, and install the required packages.
Furthermore, if Firewalld is running, allow the SSH service.
Apply all of the following on all nodes.
[root@dlp ~]# echo -e 'Defaults:cent !requiretty\ncent ALL = (root) NOPASSWD:ALL' | tee /etc/sudoers.d/ceph
[root@dlp ~]# chmod 440 /etc/sudoers.d/ceph
[root@dlp ~]# yum -y install centos-release-ceph-hammer epel-release yum-plugin-priorities
[root@dlp ~]# sed -i -e "s/enabled=1/enabled=1\npriority=1/g" /etc/yum.repos.d/CentOS-Ceph-Hammer.repo
[root@dlp ~]# firewall-cmd --add-service=ssh --permanent
[root@dlp ~]# firewall-cmd --reload
[3] On the Monitor Node (Monitor Daemon), if Firewalld is running, allow the required port.
[root@dlp ~]# firewall-cmd --add-port=6789/tcp --permanent
[root@dlp ~]# firewall-cmd --reload
[4] On the Storage Nodes (Object Storage), if Firewalld is running, allow the required ports.
[root@dlp ~]# firewall-cmd --add-port=6800-7100/tcp --permanent
[root@dlp ~]# firewall-cmd --reload
[5] Log in as the Ceph admin user and configure Ceph.
Set up an SSH key pair from the Ceph admin node ("dlp.srv.world" in this example) to all storage nodes.
[cent@dlp ~]$ ssh-keygen
Generating public/private rsa key pair.
Enter file in which to save the key (/home/cent/.ssh/id_rsa):
Created directory '/home/cent/.ssh'.
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /home/cent/.ssh/id_rsa.
Your public key has been saved in /home/cent/.ssh/id_rsa.pub.
The key fingerprint is:
54:c3:12:0e:d3:65:11:49:11:73:35:1b:e3:e8:63:5a cent@dlp.srv.world
The key's randomart image is:
[cent@dlp ~]$ vi ~/.ssh/config
# create new ( define all nodes and users )
Host dlp
Hostname dlp.srv.world
User cent
Host node01
Hostname node01.srv.world
User cent
Host node02
Hostname node02.srv.world
User cent
Host node03
Hostname node03.srv.world
User cent
[cent@dlp ~]$ chmod 600 ~/.ssh/config
# transfer key file
[cent@dlp ~]$ ssh-copy-id node01
cent@node01.srv.world's password:
Number of key(s) added: 1
Now try logging into the machine, with: "ssh 'node01'"
and check to make sure that only the key(s) you wanted were added.
[cent@dlp ~]$ ssh-copy-id node02
[cent@dlp ~]$ ssh-copy-id node03
[6] Install Ceph on all nodes from the admin node.
[cent@dlp ~]$ sudo yum -y install ceph-deploy
[cent@dlp ~]$ mkdir ceph
[cent@dlp ~]$ cd ceph
[cent@dlp ceph]$ ceph-deploy new node01
[cent@dlp ceph]$ vi ./ceph.conf
# add to the end
osd pool default size = 2
# Install Ceph on each Node
[cent@dlp ceph]$ ceph-deploy install dlp node01 node02 node03
# settings for monitoring and keys
[cent@dlp ceph]$ ceph-deploy mon create-initial
[7] Configure the Ceph cluster from the admin node.
Before that, create a directory /storage01 on node01, /storage02 on node02, and /storage03 on node03 (in this example).
# prepare Object Storage Daemon
[cent@dlp ceph]$ ceph-deploy osd prepare node01:/storage01 node02:/storage02 node03:/storage03
# activate Object Storage Daemon
[cent@dlp ceph]$ ceph-deploy osd activate node01:/storage01 node02:/storage02 node03:/storage03
# transfer config files
[cent@dlp ceph]$ ceph-deploy admin dlp node01 node02 node03
[cent@dlp ceph]$ sudo chmod 644 /etc/ceph/ceph.client.admin.keyring
# show status (displays as follows if there is no problem)
[cent@dlp ceph]$ ceph health
HEALTH_OK
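Beyond [ceph health], the cluster can be inspected in more detail from any node that has the admin keyring:
# overall cluster status (monitors, OSDs, placement groups, usage)
[cent@dlp ceph]$ ceph status
# OSD layout across hosts
[cent@dlp ceph]$ ceph osd tree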
[8] If you'd like to clear the settings and configure again from scratch, do as follows.
# remove packages
[cent@dlp ceph]$ ceph-deploy purge dlp node01 node02 node03
# remove settings
[cent@dlp ceph]$ ceph-deploy purgedata dlp node01 node02 node03
[cent@dlp ceph]$ ceph-deploy forgetkeys
2. Ceph : Use as Block Device
Configure a client to use Ceph storage as follows.
|
+--------------------+ | +-------------------+
| [dlp.srv.world] |10.0.0.30 | 10.0.0.x| [ Client ] |
| Ceph-Deploy +-----------+-----------+ |
| | | | |
+--------------------+ | +-------------------+
+----------------------------+----------------------------+
| | |
|10.0.0.51 |10.0.0.52 |10.0.0.53
+-----------+-----------+ +-----------+-----------+ +-----------+-----------+
| [node01.srv.world] | | [node02.srv.world] | | [node03.srv.world] |
| Object Storage +----+ Object Storage +----+ Object Storage |
| Monitor Daemon | | | | |
| | | | | |
+-----------------------+ +-----------------------+ +-----------------------+
For example, create a block device and mount it on a client.
[1] First, configure sudo and an SSH key pair for a user on the client, then install Ceph on it from the Ceph admin node as follows.
[cent@dlp ceph]$ ceph-deploy install client
[cent@dlp ceph]$ ceph-deploy admin client
[2] Create a Block device and mount it on a Client.
[cent@client ~]$ sudo chmod 644 /etc/ceph/ceph.client.admin.keyring
# create a disk with 10G
[cent@client ~]$ rbd create disk01 --size 10240
# show list
[cent@client ~]$ rbd ls -l
NAME SIZE PARENT FMT PROT LOCK
disk01 10240M 2
# map the image to device
[cent@client ~]$ sudo rbd map disk01
/dev/rbd0
# show mapping
[cent@client ~]$ rbd showmapped
id pool image snap device
0 rbd disk01 - /dev/rbd0
# format with XFS
[cent@client ~]$ sudo mkfs.xfs /dev/rbd0
# mount device
[cent@client ~]$ sudo mount /dev/rbd0 /mnt
[cent@client ~]$ df -hT
Filesystem Type Size Used Avail Use% Mounted on
/dev/mapper/centos-root xfs 27G 1.3G 26G 5% /
devtmpfs devtmpfs 2.0G 0 2.0G 0% /dev
tmpfs tmpfs 2.0G 0 2.0G 0% /dev/shm
tmpfs tmpfs 2.0G 8.4M 2.0G 1% /run
tmpfs tmpfs 2.0G 0 2.0G 0% /sys/fs/cgroup
/dev/vda1 xfs 497M 151M 347M 31% /boot
/dev/rbd0 xfs 10G 33M 10G 1% /mnt
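When the block device is no longer needed, it can be released by reversing the steps above:
[cent@client ~]$ sudo umount /mnt
# unmap the device
[cent@client ~]$ sudo rbd unmap /dev/rbd0
# delete the image
[cent@client ~]$ rbd rm disk01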
3. Ceph : Use as File System
Configure a client to use Ceph storage as follows.
|
+--------------------+ | +-------------------+
| [dlp.srv.world] |10.0.0.30 | 10.0.0.x| [ Client ] |
| Ceph-Deploy +-----------+-----------+ |
| | | | |
+--------------------+ | +-------------------+
+----------------------------+----------------------------+
| | |
|10.0.0.51 |10.0.0.52 |10.0.0.53
+-----------+-----------+ +-----------+-----------+ +-----------+-----------+
| [node01.srv.world] | | [node02.srv.world] | | [node03.srv.world] |
| Object Storage +----+ Object Storage +----+ Object Storage |
| Monitor Daemon | | | | |
| | | | | |
+-----------------------+ +-----------------------+ +-----------------------+
For example, mount it as a filesystem on a client.
[1] Create an MDS (MetaData Server) on the node of your choice. This example uses node01.
[cent@dlp ceph]$ ceph-deploy mds create node01
[2] Create at least 2 RADOS pools on the MDS node and activate the MetaData Server.
For the pg_num value specified at the end of each create command, refer to the official documentation and choose an appropriate value.
⇒ http://docs.ceph.com/docs/master/rados/operations/placement-groups/
[cent@node01 ~]$ sudo chmod 644 /etc/ceph/ceph.client.admin.keyring
# create pools
[cent@node01 ~]$ ceph osd pool create cephfs_data 128
pool 'cephfs_data' created
[cent@node01 ~]$ ceph osd pool create cephfs_metadata 128
pool 'cephfs_metadata' created
# enable pools
[cent@node01 ~]$ ceph fs new cephfs cephfs_metadata cephfs_data
new fs with metadata pool 2 and data pool 1
# show list
[cent@node01 ~]$ ceph fs ls
name: cephfs, metadata pool: cephfs_metadata, data pools: [cephfs_data ]
[cent@node01 ~]$ ceph mds stat
e5: 1/1/1 up {0=node01=up:active}
[3] Mount CephFS on a Client.
[root@client ~]# yum -y install ceph-fuse
# get admin key
[root@client ~]# ssh cent@node01.srv.world "sudo ceph-authtool -p /etc/ceph/ceph.client.admin.keyring" > admin.key
cent@node01.srv.world's password:
[root@client ~]# chmod 600 admin.key
[root@client ~]# mount -t ceph node01.srv.world:6789:/ /mnt -o name=admin,secretfile=admin.key
[root@client ~]# df -hT
Filesystem Type Size Used Avail Use% Mounted on
/dev/mapper/centos-root xfs 27G 1.3G 26G 5% /
devtmpfs devtmpfs 2.0G 0 2.0G 0% /dev
tmpfs tmpfs 2.0G 0 2.0G 0% /dev/shm
tmpfs tmpfs 2.0G 8.3M 2.0G 1% /run
tmpfs tmpfs 2.0G 0 2.0G 0% /sys/fs/cgroup
/dev/vda1 xfs 497M 151M 347M 31% /boot
10.0.0.51:6789:/ ceph 80G 19G 61G 24% /mnt
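Since the ceph-fuse package is installed in step [3], mounting through FUSE is an alternative to the kernel client. A sketch, assuming /etc/ceph/ceph.conf and the admin keyring are present on the client (e.g. distributed with [ceph-deploy admin] as in the previous section):
[root@client ~]# umount /mnt
[root@client ~]# ceph-fuse -m node01.srv.world:6789 /mnt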
IV. GlusterFS Installation
1. Install GlusterFS to Configure a Storage Cluster
It is recommended to put GlusterFS volumes on partitions separate from the / partition.
In this example, sdb1 is mounted on the /glusterfs directory on all nodes for the GlusterFS configuration.
[1] Install GlusterFS Server on all Nodes in Cluster.
[root@node01 ~]# curl http://download.gluster.org/pub/gluster/glusterfs/LATEST/EPEL.repo/glusterfs-epel.repo -o /etc/yum.repos.d/glusterfs-epel.repo
# enable EPEL, too
[root@node01 ~]# yum --enablerepo=epel -y install glusterfs-server
[root@node01 ~]# systemctl start glusterd
[root@node01 ~]# systemctl enable glusterd
[2] If Firewalld is running, allow GlusterFS service on all nodes.
[root@node01 ~]# firewall-cmd --add-service=glusterfs --permanent
success
[root@node01 ~]# firewall-cmd --reload
success
The steps above are sufficient if clients mount GlusterFS volumes with the GlusterFS Native Client.
[3] GlusterFS also supports NFS (v3), so if clients mount GlusterFS volumes with NFS, additionally configure as follows.
[root@node01 ~]# yum -y install rpcbind
[root@node01 ~]# systemctl start rpcbind
[root@node01 ~]# systemctl enable rpcbind
[root@node01 ~]# systemctl restart glusterd
[4] Installation and basic settings of GlusterFS are now complete. Refer to the next sections for clustering settings.
2. GlusterFS : Distributed Configuration
Configure storage clustering.
For example, create a distributed volume with 2 servers.
This example uses 2 servers, but it is also possible to use 3 or more.
|
+----------------------+ | +----------------------+
| [GlusterFS Server#1] |10.0.0.51 | 10.0.0.52| [GlusterFS Server#2] |
| node01.srv.world +----------+----------+ node02.srv.world |
| | | |
+----------------------+ +----------------------+
It is recommended to put GlusterFS volumes on partitions separate from the / partition.
In this example, sdb1 is mounted on the /glusterfs directory on all nodes for the GlusterFS configuration.
[1] Install GlusterFS Server on all nodes; refer to section 1 above.
[2] Create a Directory for GlusterFS Volume on all Nodes.
[root@node01 ~]# mkdir /glusterfs/distributed
[3] Configure clustering as follows on one node. (any node is OK)
# probe the node
[root@node01 ~]# gluster peer probe node02
peer probe: success.
# show status
[root@node01 ~]# gluster peer status
Number of Peers: 1
Hostname: node02
Uuid: 2ca22769-28a1-4204-9957-886579db2231
State: Peer in Cluster (Connected)
# create volume
[root@node01 ~]# gluster volume create vol_distributed transport tcp \
node01:/glusterfs/distributed \
node02:/glusterfs/distributed
volume create: vol_distributed: success: please start the volume to access data
# start volume
[root@node01 ~]# gluster volume start vol_distributed
volume start: vol_distributed: success
# show volume info
[root@node01 ~]# gluster volume info
Volume Name: vol_distributed
Type: Distribute
Volume ID: 6677caa9-9aab-4c1a-83e5-2921ee78150d
Status: Started
Number of Bricks: 2
Transport-type: tcp
Bricks:
Brick1: node01:/glusterfs/distributed
Brick2: node02:/glusterfs/distributed
Options Reconfigured:
performance.readdir-ahead: on
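In addition to [gluster volume info], the status command shows whether each brick process is online, along with its port and PID:
[root@node01 ~]# gluster volume status vol_distributed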
[4] To mount the GlusterFS volume on clients, refer to section 7 (Client Settings) below.
3. GlusterFS : Replication Configuration
For example, create a replicated volume with 2 servers.
This example uses 2 servers, but it is also possible to use 3 or more.
|
+----------------------+ | +----------------------+
| [GlusterFS Server#1] |10.0.0.51 | 10.0.0.52| [GlusterFS Server#2] |
| node01.srv.world +----------+----------+ node02.srv.world |
| | | |
+----------------------+ +----------------------+
It is recommended to put GlusterFS volumes on partitions separate from the / partition.
In this example, sdb1 is mounted on the /glusterfs directory on all nodes for the GlusterFS configuration.
[1] Install GlusterFS Server on all nodes; refer to section 1 above.
[2] Create a Directory for GlusterFS Volume on all Nodes.
[root@node01 ~]# mkdir /glusterfs/replica
[3] Configure clustering as follows on one node. (any node is OK)
# probe the node
[root@node01 ~]# gluster peer probe node02
peer probe: success.
# show status
[root@node01 ~]# gluster peer status
Number of Peers: 1
Hostname: node02
Uuid: 2ca22769-28a1-4204-9957-886579db2231
State: Peer in Cluster (Connected)
# create volume
[root@node01 ~]# gluster volume create vol_replica replica 2 transport tcp \
node01:/glusterfs/replica \
node02:/glusterfs/replica
volume create: vol_replica: success: please start the volume to access data
# start volume
[root@node01 ~]# gluster volume start vol_replica
volume start: vol_replica: success
# show volume info
[root@node01 ~]# gluster volume info
Volume Name: vol_replica
Type: Replicate
Volume ID: 0d5d5ef7-bdfa-416c-8046-205c4d9766e6
Status: Started
Number of Bricks: 1 x 2 = 2
Transport-type: tcp
Bricks:
Brick1: node01:/glusterfs/replica
Brick2: node02:/glusterfs/replica
Options Reconfigured:
performance.readdir-ahead: on
[4] To mount the GlusterFS volume on clients, refer to section 7 (Client Settings) below.
4. GlusterFS : Striping Configuration
Configure storage clustering.
For example, create a striped volume with 2 servers.
This example uses 2 servers, but it is also possible to use 3 or more.
|
+----------------------+ | +----------------------+
| [GlusterFS Server#1] |10.0.0.51 | 10.0.0.52| [GlusterFS Server#2] |
| node01.srv.world +----------+----------+ node02.srv.world |
| | | |
+----------------------+ +----------------------+
It is recommended to put GlusterFS volumes on partitions separate from the / partition.
In this example, sdb1 is mounted on the /glusterfs directory on all nodes for the GlusterFS configuration.
[1] Install GlusterFS Server on all nodes; refer to section 1 above.
[2] Create a Directory for GlusterFS Volume on all Nodes.
[root@node01 ~]# mkdir /glusterfs/striped
[3] Configure clustering as follows on one node. (any node is OK)
# probe the node
[root@node01 ~]# gluster peer probe node02
peer probe: success.
# show status
[root@node01 ~]# gluster peer status
Number of Peers: 1
Hostname: node02
Uuid: 2ca22769-28a1-4204-9957-886579db2231
State: Peer in Cluster (Connected)
# create volume
[root@node01 ~]# gluster volume create vol_striped stripe 2 transport tcp \
node01:/glusterfs/striped \
node02:/glusterfs/striped
volume create: vol_striped: success: please start the volume to access data
# start volume
[root@node01 ~]# gluster volume start vol_striped
volume start: vol_striped: success
# show volume info
[root@node01 ~]# gluster volume info
Volume Name: vol_striped
Type: Stripe
Volume ID: b6f6b090-3856-418c-aed3-bc430db91dc6
Status: Started
Number of Bricks: 1 x 2 = 2
Transport-type: tcp
Bricks:
Brick1: node01:/glusterfs/striped
Brick2: node02:/glusterfs/striped
Options Reconfigured:
performance.readdir-ahead: on
[4] To mount the GlusterFS volume on clients, refer to section 7 (Client Settings) below.
5. GlusterFS : Distributed + Replication
Configure storage clustering.
For example, create a distributed + replicated volume with 4 servers.
|
+----------------------+ | +----------------------+
| [GlusterFS Server#1] |10.0.0.51 | 10.0.0.52| [GlusterFS Server#2] |
| node01.srv.world +----------+----------+ node02.srv.world |
| | | | |
+----------------------+ | +----------------------+
|
+----------------------+ | +----------------------+
| [GlusterFS Server#3] |10.0.0.53 | 10.0.0.54| [GlusterFS Server#4] |
| node03.srv.world +----------+----------+ node04.srv.world |
| | | |
+----------------------+ +----------------------+
It is recommended to put GlusterFS volumes on partitions separate from the / partition.
In this example, sdb1 is mounted on the /glusterfs directory on all nodes for the GlusterFS configuration.
[1] Install GlusterFS Server on all nodes; refer to section 1 above.
[2] Create a Directory for GlusterFS Volume on all Nodes.
[root@node01 ~]# mkdir /glusterfs/dist-replica
[3] Configure clustering as follows on one node. (any node is OK)
# probe the node
[root@node01 ~]# gluster peer probe node02
peer probe: success.
[root@node01 ~]# gluster peer probe node03
peer probe: success.
[root@node01 ~]# gluster peer probe node04
peer probe: success.
# show status
[root@node01 ~]# gluster peer status
Number of Peers: 3
Hostname: node02
Uuid: 2ca22769-28a1-4204-9957-886579db2231
State: Peer in Cluster (Connected)
Hostname: node03
Uuid: 79cff591-1e98-4617-953c-0d3e334cf96a
State: Peer in Cluster (Connected)
Hostname: node04
Uuid: 779ab1b3-fda9-46da-af95-ba56477bf638
State: Peer in Cluster (Connected)
# create volume
[root@node01 ~]# gluster volume create vol_dist-replica replica 2 transport tcp \
node01:/glusterfs/dist-replica \
node02:/glusterfs/dist-replica \
node03:/glusterfs/dist-replica \
node04:/glusterfs/dist-replica
volume create: vol_dist-replica: success: please start the volume to access data
# start volume
[root@node01 ~]# gluster volume start vol_dist-replica
volume start: vol_dist-replica: success
# show volume info
[root@node01 ~]# gluster volume info
Volume Name: vol_dist-replica
Type: Distributed-Replicate
Volume ID: 784d2953-6599-4102-afc2-9069932894cc
Status: Started
Number of Bricks: 2 x 2 = 4
Transport-type: tcp
Bricks:
Brick1: node01:/glusterfs/dist-replica
Brick2: node02:/glusterfs/dist-replica
Brick3: node03:/glusterfs/dist-replica
Brick4: node04:/glusterfs/dist-replica
Options Reconfigured:
performance.readdir-ahead: on
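Note that with [replica 2], bricks form replica sets in the order they are listed: node01/node02 hold one pair of copies and node03/node04 the other, and files are then distributed across the two pairs. Brick health can be checked per volume:
[root@node01 ~]# gluster volume status vol_dist-replica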
6. GlusterFS : Striping + Replication
Configure storage clustering.
For example, create a striped + replicated volume with 4 servers.
|
+----------------------+ | +----------------------+
| [GlusterFS Server#1] |10.0.0.51 | 10.0.0.52| [GlusterFS Server#2] |
| node01.srv.world +----------+----------+ node02.srv.world |
| | | | |
+----------------------+ | +----------------------+
|
+----------------------+ | +----------------------+
| [GlusterFS Server#3] |10.0.0.53 | 10.0.0.54| [GlusterFS Server#4] |
| node03.srv.world +----------+----------+ node04.srv.world |
| | | |
+----------------------+ +----------------------+
It is recommended to put GlusterFS volumes on partitions separate from the / partition.
In this example, sdb1 is mounted on the /glusterfs directory on all nodes for the GlusterFS configuration.
[1] Install GlusterFS Server on all nodes; refer to section 1 above.
[2] Create a Directory for GlusterFS Volume on all Nodes.
[root@node01 ~]# mkdir /glusterfs/strip-replica
[3] Configure clustering as follows on one node. (any node is OK)
# probe the node
[root@node01 ~]# gluster peer probe node02
peer probe: success.
[root@node01 ~]# gluster peer probe node03
peer probe: success.
[root@node01 ~]# gluster peer probe node04
peer probe: success.
# show status
[root@node01 ~]# gluster peer status
Number of Peers: 3
Hostname: node02
Uuid: 2ca22769-28a1-4204-9957-886579db2231
State: Peer in Cluster (Connected)
Hostname: node03
Uuid: 79cff591-1e98-4617-953c-0d3e334cf96a
State: Peer in Cluster (Connected)
Hostname: node04
Uuid: 779ab1b3-fda9-46da-af95-ba56477bf638
State: Peer in Cluster (Connected)
# create volume
[root@node01 ~]# gluster volume create vol_strip-replica stripe 2 replica 2 transport tcp \
node01:/glusterfs/strip-replica \
node02:/glusterfs/strip-replica \
node03:/glusterfs/strip-replica \
node04:/glusterfs/strip-replica
volume create: vol_strip-replica: success: please start the volume to access data
# start volume
[root@node01 ~]# gluster volume start vol_strip-replica
volume start: vol_strip-replica: success
# show volume info
[root@node01 ~]# gluster volume info
Volume Name: vol_strip-replica
Type: Striped-Replicate
Volume ID: ec36b0d3-8467-47f6-aa83-1020555f58b6
Status: Started
Number of Bricks: 1 x 2 x 2 = 4
Transport-type: tcp
Bricks:
Brick1: node01:/glusterfs/strip-replica
Brick2: node02:/glusterfs/strip-replica
Brick3: node03:/glusterfs/strip-replica
Brick4: node04:/glusterfs/strip-replica
Options Reconfigured:
performance.readdir-ahead: on
7. GlusterFS : Client Settings
These are the settings for GlusterFS clients to mount GlusterFS volumes.
[1] To mount with the GlusterFS Native Client, configure as follows.
[root@client ~]# curl http://download.gluster.org/pub/gluster/glusterfs/LATEST/EPEL.repo/glusterfs-epel.repo -o /etc/yum.repos.d/glusterfs-epel.repo
[root@client ~]# yum -y install glusterfs glusterfs-fuse
# mount vol_distributed volume on /mnt
[root@client ~]# mount -t glusterfs node01.srv.world:/vol_distributed /mnt
[root@client ~]# df -hT
Filesystem Type Size Used Avail Use% Mounted on
/dev/mapper/centos-root xfs 27G 1.1G 26G 5% /
devtmpfs devtmpfs 2.0G 0 2.0G 0% /dev
tmpfs tmpfs 2.0G 0 2.0G 0% /dev/shm
tmpfs tmpfs 2.0G 8.3M 2.0G 1% /run
tmpfs tmpfs 2.0G 0 2.0G 0% /sys/fs/cgroup
/dev/vda1 xfs 497M 151M 347M 31% /boot
node01.srv.world:/vol_distributed fuse.glusterfs 40G 65M 40G 1% /mnt
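To make the native mount persistent across reboots, an fstab entry can be used; [_netdev] delays the mount until networking is up (a sketch for the volume above):
[root@client ~]# vi /etc/fstab
# add to the end
node01.srv.world:/vol_distributed /mnt glusterfs defaults,_netdev 0 0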
[2] NFS (v3) is also supported, so it is possible to mount with NFS.
Configure the GlusterFS servers for NFS first; refer to step [3] of the installation section.
[root@client ~]# yum -y install nfs-utils
[root@client ~]# systemctl start rpcbind rpc-statd
[root@client ~]# systemctl enable rpcbind rpc-statd
[root@client ~]# mount -t nfs -o mountvers=3 node01.srv.world:/vol_distributed /mnt
[root@client ~]# df -hT
Filesystem Type Size Used Avail Use% Mounted on
/dev/mapper/centos-root xfs 27G 1.1G 26G 5% /
devtmpfs devtmpfs 2.0G 0 2.0G 0% /dev
tmpfs tmpfs 2.0G 0 2.0G 0% /dev/shm
tmpfs tmpfs 2.0G 8.3M 2.0G 1% /run
tmpfs tmpfs 2.0G 0 2.0G 0% /sys/fs/cgroup
/dev/vda1 xfs 497M 151M 347M 31% /boot
node01.srv.world:/vol_distributed nfs 40G 64M 40G 1% /mnt
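Likewise, the NFS mount can be made persistent with an fstab entry; note vers=3, since GlusterFS's built-in NFS server speaks NFSv3 only (an illustrative entry):
[root@client ~]# vi /etc/fstab
# add to the end
node01.srv.world:/vol_distributed /mnt nfs defaults,_netdev,vers=3 0 0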