Enterprise Hands-On: RHCS Shared Storage Cluster

1. Shared Storage Cluster

(1) Lab preparation

This lab requires one more virtual machine:

server3 : IP 172.25.63.3, role: storage server
An additional disk also needs to be attached to this machine.
(2) Configure the SCSI target service

On server3:

 yum install scsi-* -y          # install the SCSI target (tgt) packages
 yum install parted-2.1-21.el6.x86_64 -y
 partprobe                      # re-read the partition tables

On server1 and server2:

yum install iscsi-* -y          # install the iSCSI initiator packages

On server3:

 vim /etc/tgt/targets.conf 
Add the following target definition:
 <target iqn.2020-02.com.example:server.target1>
     backing-store /dev/vda
 </target>
[root@server3 ~]# /etc/init.d/tgtd start					# start the tgtd service
Starting SCSI target daemon:                               [  OK  ]
[root@server3 ~]# tgt-admin -s
......

            Prevent removal: No
            Readonly: No
            Backing store type: rdwr
            Backing store path: /dev/vda			# the backing store is configured correctly
            Backing store flags: 
    Account information:
    ACL information:
        ALL
[root@server3 ~]# ps -ax
......
 1008 ?        S      0:00 [virtio-blk]
 1040 ?        S<     0:00 /sbin/udevd -d
 1057 ?        Ssl    0:00 tgtd				# two tgtd processes indicate the target started correctly
 1060 ?        S      0:00 tgtd
 1093 pts/0    R+     0:00 ps -ax
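
The "ACL information: ALL" line above means that any initiator may log in to this target. If access should be limited to the two cluster nodes, /etc/tgt/targets.conf also accepts initiator-address lines; the snippet below is an optional sketch, not part of the original lab:

 <target iqn.2020-02.com.example:server.target1>
     backing-store /dev/vda
     initiator-address 172.25.63.1
     initiator-address 172.25.63.2
 </target>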

Then on server1 and server2 (server1 is shown as an example; the steps on server2 are identical):

[root@server1 ~]# iscsiadm -m discovery -t st -p 172.25.63.3
Starting iscsid:                                           [  OK  ]
172.25.63.3:3260,1 iqn.2020-02.com.example:server.target1
[root@server1 ~]# iscsiadm -m node -l
Logging in to [iface: default, target: iqn.2020-02.com.example:server.target1, portal: 172.25.63.3,3260] (multiple)
Login to [iface: default, target: iqn.2020-02.com.example:server.target1, portal: 172.25.63.3,3260] successful.
[root@server1 ~]# partprobe
Warning: ......
[root@server1 ~]# fdisk -l

.....

Disk /dev/sdb: 21.5 GB, 21474836480 bytes				# the shared iSCSI disk is visible; configuration successful
64 heads, 32 sectors/track, 20480 cylinders
Units = cylinders of 2048 * 512 = 1048576 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000

(3) Partition the disk

On server1 or server2 (this only needs to be done on one node):

[root@server1 ~]# fdisk -cu /dev/sdb
Device contains neither a valid DOS partition table, nor Sun, SGI or OSF disklabel
Building a new DOS disklabel with disk identifier 0x6b388308.
Changes will remain in memory only, until you decide to write them.
After that, of course, the previous content won't be recoverable.

Warning: invalid flag 0x0000 of partition table 4 will be corrected by w(rite)

Command (m for help): n							# create a new partition
Command action
   e   extended
   p   primary partition (1-4)
p												# primary partition
Partition number (1-4): 1
First sector (2048-41943039, default 2048): 
Using default value 2048
Last sector, +sectors or +size{K,M,G} (2048-41943039, default 41943039): 
Using default value 41943039

Command (m for help): t
Selected partition 1
Hex code (type L to list codes): 8e			# set the partition type to Linux LVM
Changed system type of partition 1 to 8e (Linux LVM)

Command (m for help): p

Disk /dev/sdb: 21.5 GB, 21474836480 bytes
64 heads, 32 sectors/track, 20480 cylinders, total 41943040 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x6b388308

   Device Boot      Start         End      Blocks   Id  System
/dev/sdb1            2048    41943039    20970496   8e  Linux LVM

Command (m for help): wq					# write the partition table and quit
The partition table has been altered!

Calling ioctl() to re-read partition table.
Syncing disks.

[root@server1 ~]# partprobe 				# re-read the partition table; this must be run on both nodes
Warning: WARNING: the kernel failed to re-read the partition table on /dev/sda (Device or resource busy).  As a result, it may not reflect all of your changes until after reboot.
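
On server2, as the comment above says, the partition table also has to be re-read so that the new partition becomes visible there. A minimal check, assuming the iSCSI login from the previous step is already in place:

[root@server2 ~]# partprobe /dev/sdb		# re-read the partition table of the shared disk
[root@server2 ~]# fdisk -l /dev/sdb		# /dev/sdb1 should now be listed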

(4) Create the LVM volumes

On server1:

[root@server1 ~]# pvcreate /dev/sdb1			# create the physical volume (PV)
  dev_is_mpath: failed to get device for 8:17
  Physical volume "/dev/sdb1" successfully created
[root@server1 ~]# vgcreate dangdang /dev/sdb1			# create the volume group (VG)
  Clustered volume group "dangdang" successfully created
[root@server1 ~]# pvs
  PV         VG       Fmt  Attr PSize  PFree 
  /dev/sda2  VolGroup lvm2 a--   9.51g     0 
  /dev/sdb1  dangdang lvm2 a--  20.00g 20.00g

[root@server1 ~]# lvcreate -L 4G -n dd dangdang			# create a 4 GB logical volume (LV)
  Logical volume "dd" created
[root@server1 ~]# lvs	
  LV      VG       Attr       LSize   Pool Origin Data%  Move Log Cpy%Sync Convert
  lv_root VolGroup -wi-ao----   8.54g                                             
  lv_swap VolGroup -wi-ao---- 992.00m                                             
  dd      dangdang -wi-a-----   4.00g                  

On server2:

[root@server2 ~]# lvs
  LV      VG       Attr       LSize   Pool Origin Data%  Move Log Cpy%Sync Convert
  lv_root VolGroup -wi-ao----   8.54g                                             
  lv_swap VolGroup -wi-ao---- 992.00m                                             
  dd      dangdang -wi-a-----   4.00g   			# the LV is visible on server2; configuration successful
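
The reason server2 immediately sees the same volume group and logical volume is clustered LVM (clvmd), which comes with the cluster suite. A quick sanity check on both nodes might look like this (a sketch, not part of the original steps):

/etc/init.d/clvmd status			# clvmd should be running on both nodes
grep locking_type /etc/lvm/lvm.conf		# should be set to 3 (cluster-wide locking)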

(5) Create a file system and mount it

On server1:

[root@server1 ~]# mkfs.ext4 /dev/dangdang/dd 

Mount it:

[root@server1 ~]# mount /dev/dangdang/dd /mnt/
[root@server1 ~]# df
Filesystem                   1K-blocks    Used Available Use% Mounted on
/dev/mapper/VolGroup-lv_root   8813300 1083348   7282260  13% /
tmpfs                           251136   25656    225480  11% /dev/shm
/dev/sda1                       495844   33462    436782   8% /boot
/dev/mapper/dangdang-dd        4128448  139256   3779480   4% /mnt			# mounted successfully

ext4 is a local file system and does not support concurrent writes: if server1 and server2 both mount the device and write to it at the same time, errors will occur.
Therefore, the cluster is configured so that whichever node holds the VIP also mounts /dev/dangdang/dd on /var/www/html/; clients then see the content stored on the storage server server3, published through the Apache server (server1 or server2).

On server1, mount the device and create the test page; the goal is to write the test page onto the device /dev/dangdang/dd:

[root@server1 ~]# mount /dev/dangdang/dd /mnt/
[root@server1 ~]# cd /mnt/
[root@server1 mnt]# vim index.html
[root@server1 mnt]# cat index.html 
server3 iscsi:index.html
[root@server1 mnt]# cd
[root@server1 ~]# umount /mnt/

(6) Configure luci

For this step, the service from the previous lab must be stopped:

[root@server2 ~]# clusvcadm -d apache
Local machine disabling service:apache...Success
[root@server2 ~]# clustat
Cluster Status for redhat_cl @ Mon Feb 24 00:43:22 2020
Member Status: Quorate

 Member Name                             ID   Status
 ------ ----                             ---- ------
 server1                                     1 Online, rgmanager
 server2                                     2 Online, Local, rgmanager

 Service Name                   Owner (Last)                   State         
 ------- ----                   ----- ------                   -----         
 service:apache                 (server1)                      disabled   			# the service has been stopped

Add the resources in luci: the IP Address, the Filesystem, and the Script.
Then group them into the apache service group; the order must be: IP Address, Filesystem, Script.
Finally, click the start button to start the service.
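
For reference, after these luci steps the service section written to /etc/cluster/cluster.conf should look roughly like the sketch below; the resource names (webdata, httpd) and the failover domain name are assumptions for illustration, not values taken from the lab:

<rm>
    <resources>
        <ip address="172.25.63.100" monitor_link="on"/>
        <fs name="webdata" device="/dev/dangdang/dd" mountpoint="/var/www/html" fstype="ext4"/>
        <script name="httpd" file="/etc/init.d/httpd"/>
    </resources>
    <service autostart="1" domain="webfail" name="apache" recovery="relocate">
        <ip ref="172.25.63.100"/>
        <fs ref="webdata"/>
        <script ref="httpd"/>
    </service>
</rm>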

(7) Test

On the physical host:

[root@foundation8 ~]# curl 172.25.63.100 
server3 iscsi:index.html					# the test page is served; configuration successful
[root@foundation8 ~]# curl 172.25.63.100 
server3 iscsi:index.html

The cluster service is currently running on server1; test relocating it to server2:

[root@server1 ~]# clusvcadm -r apache -m server2				# relocate the service to server2
Trying to relocate service:apache to server2...Success
service:apache is now running on server2
[root@server1 ~]# clustat
Cluster Status for redhat_cl @ Mon Feb 24 00:59:25 2020
Member Status: Quorate

 Member Name                             ID   Status
 ------ ----                             ---- ------
 server1                                     1 Online, Local, rgmanager
 server2                                     2 Online, rgmanager

 Service Name                   Owner (Last)                   State         
 ------- ----                   ----- ------                   -----         
 service:apache                 server2                        started       	# relocation successful

Now on server2:

[root@server2 ~]# df
Filesystem                   1K-blocks    Used Available Use% Mounted on
/dev/mapper/VolGroup-lv_root   8813300 1026124   7339484  13% /
tmpfs                           251136   25656    225480  11% /dev/shm
/dev/sda1                       495844   33462    436782   8% /boot
/dev/mapper/dangdang-dd        4128448  139260   3779476   4% /var/www/html				# the device is now mounted on server2
[root@server2 ~]# ip addr show eth0
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 52:54:00:fa:30:e7 brd ff:ff:ff:ff:ff:ff
    inet 172.25.63.2/24 brd 172.25.63.255 scope global eth0
    inet 172.25.63.100/24 scope global secondary eth0				# server2 now holds the VIP
    inet6 fe80::5054:ff:fefa:30e7/64 scope link 
       valid_lft forever preferred_lft forever

Client access is completely unaffected. The test above shows that after the service was relocated to server2, server2 acquired the VIP and mounted /dev/dangdang/dd on /var/www/html, so the test page stored on the device is still visible to clients.

[root@foundation8 ~]# curl 172.25.63.100 
server3 iscsi:index.html
[root@foundation8 ~]# curl 172.25.63.100 
server3 iscsi:index.html

2. Concurrent Read/Write with the GFS2 File System

In the previous lab, the local ext4 file system could not handle concurrent access; to allow several nodes to read and write at the same time, RHEL provides the GFS2 cluster file system.

(1) Format the device

First stop the service and make sure the device is not mounted on either server1 or server2:

[root@server2 ~]# clusvcadm -d apache
Local machine disabling service:apache...Success				# stop the service

Format the device:

[root@server1 ~]# mkfs.gfs2 -p lock_dlm -j 2 -t redhat_cl:mygfs2 /dev/dangdang/dd 
This will destroy any data on /dev/dangdang/dd.
It appears to contain: symbolic link to `../dm-2'

Are you sure you want to proceed? [y/n] y

Device:                    /dev/dangdang/dd
Blocksize:                 4096
Device Size                4.00 GB (1048576 blocks)
Filesystem Size:           4.00 GB (1048575 blocks)
Journals:                  2
Resource Groups:           16
Locking Protocol:          "lock_dlm"
Lock Table:                "redhat_cl:mygfs2"
UUID:                      10713819-87fc-b666-7530-f5c3d90fe6a0

mkfs.gfs2 is the tool that creates GFS2 file systems. Its most commonly used options are:

-b BlockSize: file system block size; the minimum is 512 and the default is 4096.

-J MegaBytes: size of each GFS2 journal; the default is 128 MB and the minimum is 8 MB.

-j Number: number of journals to create; one journal is needed for every node that will mount the file system, so specify as many journals as there are cluster nodes.

-p LockProtoName: the lock protocol to use, normally either lock_dlm or lock_nolock.

-t LockTableName: the lock table name, in the form clustername:fsname. A cluster file system needs a lock table name so that cluster nodes know which cluster file system a file lock belongs to. The clustername part must match the cluster name in the cluster configuration file, so only nodes of that cluster can access the file
system; in addition, every file system name must be unique within the same cluster.
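
As an illustration of how these options fit together, the command used above could also be written with the defaults spelled out (the -b and -J values below simply restate the defaults for this two-node lab; they are not new requirements):

 mkfs.gfs2 -b 4096 -J 128 -j 2 -p lock_dlm -t redhat_cl:mygfs2 /dev/dangdang/dd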
Once formatting is complete, server1 and server2 can both mount the device and write to it at the same time.

Note: the GFS2 file system depends on the cluster, but a cluster does not have to use GFS2.
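
As an aside related to that note: if the cluster stack is unavailable and the data must be reached from a single node, GFS2 can be mounted with the lock protocol overridden at mount time. This is not part of the lab and must only be done on one node at a time:

 mount -o lockproto=lock_nolock /dev/dangdang/dd /mnt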

(2) Write the GFS2 test page

With the device not mounted on either server1 or server2:

[root@server1 ~]# mount /dev/dangdang/dd /var/www/html/
[root@server1 ~]# vim /var/www/html/index.html
[root@server1 ~]# cat /var/www/html/index.html 
gfs2:index.html

The purpose of this step is to write the GFS2 test page onto the device.

(3) Make the mount persistent

On server1 and server2:

vim /etc/fstab
Append the following line:

/dev/dangdang/dd        /var/www/html           gfs2    _netdev         0 0

Now run mount -a on server1 and server2 to mount the device (the _netdev option delays the mount until the network, and therefore the iSCSI device, is available):

[root@server1 ~]# mount -a
[root@server1 ~]# df
Filesystem                   1K-blocks    Used Available Use% Mounted on
/dev/mapper/VolGroup-lv_root   8813300 1081448   7284160  13% /
tmpfs                           251136   31816    219320  13% /dev/shm
/dev/sda1                       495844   33462    436782   8% /boot
/dev/mapper/dangdang-dd        4193856  264780   3929076   7% /var/www/html
[root@server2 ~]# mount -a
[root@server2 ~]# df
Filesystem                   1K-blocks    Used Available Use% Mounted on
/dev/mapper/VolGroup-lv_root   8813300 1024196   7341412  13% /
tmpfs                           251136   31816    219320  13% /dev/shm
/dev/sda1                       495844   33462    436782   8% /boot
/dev/mapper/dangdang-dd        4193856  264780   3929076   7% /var/www/html
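
With both nodes mounted, concurrent access can be checked quickly (the file name below is just for illustration):

[root@server1 ~]# touch /var/www/html/written_on_server1		# create a file on server1
[root@server2 ~]# ls /var/www/html					# on server2 the new file shows up immediately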

(4) Configure luci

Since the device is now mounted through /etc/fstab, remove the Filesystem entry from the apache service group in luci, and then delete the data (Filesystem) resource from the resource list.
(5) Test

On the physical host:

[root@foundation63 ~]# curl 172.25.63.100 
gfs2:index.html
[root@foundation63 ~]# curl 172.25.63.100 
gfs2:index.html
[root@foundation63 ~]# curl 172.25.63.100 
gfs2:index.html

This shows the configuration is successful.
(6) GFS2-related operations

View the superblock information of the configured GFS2 file system:

[root@server1 ~]# gfs2_tool sb /dev/dangdang/dd all
  mh_magic = 0x01161970
  mh_type = 1
  mh_format = 100
  sb_fs_format = 1801
  sb_multihost_format = 1900
  sb_bsize = 4096
  sb_bsize_shift = 12
  no_formal_ino = 2
  no_addr = 23
  no_formal_ino = 1
  no_addr = 22
  sb_lockproto = lock_dlm
  sb_locktable = redhat_cl:mygfs2
  uuid = 0281d23d-fc3f-5759-9259-39cfd2bac068

View the journals kept on the file system:

[root@server1 ~]# gfs2_tool journals /dev/dangdang/dd
journal1 - 128MB
journal0 - 128MB
2 journal(s) found.

If three more mount points (nodes) are added, three more journals have to be added accordingly:

[root@server1 ~]# gfs2_jadd -j 3 /dev/dangdang/dd 
Filesystem:            /var/www/html
Old Journals           2
New Journals           5
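
The gfs2_tool journals command shown earlier can be rerun to confirm the result:

[root@server1 ~]# gfs2_tool journals /dev/dangdang/dd		# should now list 5 journals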

Extend the device (grow the underlying logical volume):

[root@server1 ~]# lvextend -L +1G /dev/dangdang/dd 
  Extending logical volume dd to 5.00 GiB
  Logical volume dd successfully resized
[root@server1 ~]# df -h
Filesystem                    Size  Used Avail Use% Mounted on
/dev/mapper/VolGroup-lv_root  8.5G  1.1G  7.0G  13% /
tmpfs                         246M   32M  215M  13% /dev/shm
/dev/sda1                     485M   33M  427M   8% /boot
/dev/mapper/dangdang-dd       4.0G  647M  3.4G  16% /var/www/html

Note: the file system size has not changed yet at this point; the following command is needed to grow it:

[root@server1 ~]# gfs2_grow /dev/dangdang/dd 
FS: Mount Point: /var/www/html
FS: Device:      /dev/dm-2
FS: Size:        1048575 (0xfffff)
FS: RG size:     65533 (0xfffd)
DEV: Size:       1310720 (0x140000)
The file system grew by 1024MB.
gfs2_grow complete.
[root@server1 ~]# df -h
Filesystem                    Size  Used Avail Use% Mounted on
/dev/mapper/VolGroup-lv_root  8.5G  1.1G  7.0G  13% /
tmpfs                         246M   32M  215M  13% /dev/shm
/dev/sda1                     485M   33M  427M   8% /boot
/dev/mapper/dangdang-dd       5.0G  647M  4.4G  13% /var/www/html