Environment
server5: iSCSI target (172.25.62.5)
server1: iSCSI initiator (172.25.62.1)
server4: iSCSI initiator (172.25.62.4)
[root@server1 ~]# yum install -y iscsi-*
[root@server4 ~]# yum install -y iscsi-*
server5:
1. Add a new disk
[root@server5 ~]# fdisk -l
Disk /dev/vda: 21.5 GB, 21474836480 bytes
16 heads, 63 sectors/track, 41610 cylinders
Units = cylinders of 1008 * 512 = 516096 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x000cbcd2
Device Boot Start End Blocks Id System
/dev/vda1 * 3 1018 512000 83 Linux
Partition 1 does not end on cylinder boundary.
/dev/vda2 1018 41611 20458496 8e Linux LVM
Partition 2 does not end on cylinder boundary.
Disk /dev/vdb: 8589 MB, 8589934592 bytes ## the new disk
16 heads, 63 sectors/track, 16644 cylinders
Units = cylinders of 1008 * 512 = 516096 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000
Disk /dev/mapper/VolGroup-lv_root: 19.9 GB, 19906166784 bytes
255 heads, 63 sectors/track, 2420 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000
Disk /dev/mapper/VolGroup-lv_swap: 1040 MB, 1040187392 bytes
255 heads, 63 sectors/track, 126 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000
server5:
2. Install the SCSI target service
[root@server5 ~]# yum install scsi-* -y
Edit the targets configuration file:
[root@server5 ~]# vim /etc/tgt/targets.conf
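The transcript does not show what was written to targets.conf. A minimal sketch, assuming the new /dev/vdb disk from step 1 is the device being exported, and using the target name that tgt-admin reports below:

```
<target iqn.2018-06.com.example:server.target1>
    # export the new 8 GB disk added in step 1
    backing-store /dev/vdb
</target>
```

With no initiator-address line, tgt allows all initiators to connect, which matches the "ACL information: ALL" in the tgt-admin output below.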
[root@server5 ~]# /etc/init.d/tgtd start
Starting SCSI target daemon: [ OK ]
[root@server5 ~]# tgt-admin -s ## verify that the target is configured
Target 1: iqn.2018-06.com.example:server.target1
System information:
Driver: iscsi
State: ready
I_T nexus information:
LUN information:
LUN: 0
Type: controller
SCSI ID: IET 00010000
SCSI SN: beaf10
Size: 0 MB, Block size: 1
Online: Yes
Removable media: No
Prevent removal: No
Readonly: No
Backing store type: null ## the controller LUN (LUN 0) has no backing store
Backing store path: None
Backing store flags:
Account information:
ACL information:
ALL
3. Configure the iSCSI initiators on server1 and server4
Perform the same steps on both server1 and server4.
[root@server1 ~]# iscsiadm -m discovery -t st -p 172.25.62.5
Starting iscsid: [ OK ]
172.25.62.5:3260,1 iqn.2018-06.com.example:server.target1
[root@server1 ~]# iscsiadm -m node -l
Logging in to [iface: default, target: iqn.2018-06.com.example:server.target1, portal: 172.25.62.5,3260] (multiple)
Login to [iface: default, target: iqn.2018-06.com.example:server.target1, portal: 172.25.62.5,3260] successful.
[root@server1 nodes]# fdisk -l
Disk /dev/vda: 21.5 GB, 21474836480 bytes
16 heads, 63 sectors/track, 41610 cylinders
Units = cylinders of 1008 * 512 = 516096 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x000cbcd2
Device Boot Start End Blocks Id System
/dev/vda1 * 3 1018 512000 83 Linux
Partition 1 does not end on cylinder boundary.
/dev/vda2 1018 41611 20458496 8e Linux LVM
Partition 2 does not end on cylinder boundary.
Disk /dev/mapper/VolGroup-lv_root: 19.9 GB, 19906166784 bytes
255 heads, 63 sectors/track, 2420 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000
Disk /dev/mapper/VolGroup-lv_swap: 1040 MB, 1040187392 bytes
255 heads, 63 sectors/track, 126 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000
Disk /dev/sda: 8589 MB, 8589934592 bytes ## the iSCSI disk exported by server5
64 heads, 32 sectors/track, 8192 cylinders
Units = cylinders of 2048 * 512 = 1048576 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000
4. On server1, create a partition on the iSCSI disk for LVM
[root@server1 nodes]# fdisk -cu /dev/sda
Device contains neither a valid DOS partition table, nor Sun, SGI or OSF disklabel
Building a new DOS disklabel with disk identifier 0xdd3b0421.
Changes will remain in memory only, until you decide to write them.
After that, of course, the previous content won't be recoverable.
Warning: invalid flag 0x0000 of partition table 4 will be corrected by w(rite)
Command (m for help): n
Command action
e extended
p primary partition (1-4)
p
Partition number (1-4):
Value out of range.
Partition number (1-4): 1
First sector (2048-16777215, default 2048):
Using default value 2048
Last sector, +sectors or +size{K,M,G} (2048-16777215, default 16777215): +2G
Command (m for help): p
Disk /dev/sda: 8589 MB, 8589934592 bytes
64 heads, 32 sectors/track, 8192 cylinders, total 16777216 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0xdd3b0421
Device Boot Start End Blocks Id System
/dev/sda1 2048 4196351 2097152 83 Linux
Command (m for help): w
The partition table has been altered!
Calling ioctl() to re-read partition table.
Syncing disks.
[root@server1 nodes]# pvcreate /dev/sda1 ## create the physical volume
dev_is_mpath: failed to get device for 8:1
Physical volume "/dev/sda1" successfully created
[root@server1 nodes]# pvs
PV VG Fmt Attr PSize PFree
/dev/sda1 lvm2 a-- 2.00g 2.00g
/dev/vda2 VolGroup lvm2 a-- 19.51g 0
[root@server1 nodes]# /etc/init.d/clvmd status
clvmd (pid 1500) is running...
Clustered Volume Groups: (none)
Active clustered Logical Volumes: (none)
[root@server1 nodes]# vim /etc/lvm/lvm.conf
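The transcript does not show which setting was edited in lvm.conf. For clvmd to manage clustered volume groups, the usual change is to switch LVM to clustered locking; a sketch of the relevant excerpt:

```
# /etc/lvm/lvm.conf (excerpt)
# locking_type 1 is the default local, file-based locking;
# type 3 uses the clustered locking provided by clvmd
locking_type = 3
```

On RHEL 6 the same change can be made with the `lvmconf --enable-cluster` helper.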
[root@server1 nodes]# vgcreate cluster_vg /dev/sda1 ## create a clustered volume group
Clustered volume group "cluster_vg" successfully created
[root@server1 nodes]# lvcreate -L 1.5G -n demo cluster_vg ## create the logical volume
Error locking on node server4: Volume group for uuid not found: sB4omRy0S7h1mt55V5LNu1zZ3nrnzJRPCy5BQCcdVWjOnaf56hq09L36zwy4q1E2
Failed to activate new LV. ## if this error appears during creation, server4 has not picked up the new partition table; rescan it on server4 manually
Rescan the partition table on server4:
[root@server4 ~]# partprobe
server1
[root@server1 nodes]# lvcreate -L 1G -n demo cluster_vg ## create the logical volume
Logical volume "demo" created
[root@server1 nodes]# lvs
LV VG Attr LSize Pool Origin Data% Move Log Cpy%Sync Convert
lv_root VolGroup -wi-ao---- 18.54g
lv_swap VolGroup -wi-ao---- 992.00m
demo cluster_vg -wi-a----- 1.00g
Format the volume as ext4:
[root@server1 nodes]# mkfs.ext4 /dev/mapper/cluster_vg-demo
mke2fs 1.41.12 (17-May-2010)
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
Stride=0 blocks, Stripe width=0 blocks
65536 inodes, 262144 blocks
13107 blocks (5.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=268435456
8 block groups
32768 blocks per group, 32768 fragments per group
8192 inodes per group
Superblock backups stored on blocks:
32768, 98304, 163840, 229376
Writing inode tables: done
Creating journal (8192 blocks): done
Writing superblocks and filesystem accounting information: done
This filesystem will be automatically checked every 29 mounts or
180 days, whichever comes first. Use tune2fs -c or -i to override.
The formatted iSCSI disk can now be mounted on server1 and server4.
5. Add a MySQL service to the cluster
1) Configure MySQL
[root@server4 /]# yum install -y mysql-server
[root@server4 ~]# /etc/init.d/mysqld start
[root@server1 ~]# yum install mysql-server -y
[root@server1 ~]# /etc/init.d/mysqld start
2) Unmount the LV from /mnt and remount it on MySQL's data directory
[root@server1 mysql]# chown mysql.mysql /var/lib/mysql/ -R
[root@server1 ~]# mount /dev/mapper/cluster_vg-demo /var/lib/mysql/
[root@server4 mysql]# chown mysql.mysql /var/lib/mysql/ -R
[root@server4 ~]# mount /dev/mapper/cluster_vg-demo /var/lib/mysql/
If you write data into MySQL on server4, it does not appear on server1 right away; the volume must be unmounted and remounted before the new data shows up. This is the limitation of a local filesystem such as ext4: it cannot safely be written from two nodes at once, or the filesystem will be corrupted (split-brain). Unmount on both nodes when testing is done.
6. Create a highly available shared cluster disk that both nodes can read and write at the same time
1) Reformat the iSCSI volume with the GFS2 cluster filesystem
[root@server1 ~]# mkfs.gfs2 -j 3 -p lock_dlm -t westos_hwj:mygfs2 /dev/cluster_vg/demo
This will destroy any data on /dev/cluster_vg/demo.
It appears to contain: symbolic link to `../dm-2'
Are you sure you want to proceed? [y/n] y
Device: /dev/cluster_vg/demo
Blocksize: 4096
Device Size 1.00 GB (262144 blocks)
Filesystem Size: 1.00 GB (262142 blocks)
Journals: 3
Resource Groups: 4
Locking Protocol: "lock_dlm"
Lock Table: "westos_hwj:mygfs2"
UUID: 8d734613-f2e3-e512-2ac9-259ca1075c22
2) Find the LV's UUID and add it to /etc/fstab so it is mounted automatically at boot
[root@server1 ~]# blkid
/dev/vda1: UUID="beb86874-992f-447f-af86-259b13a17eb4" TYPE="ext4"
/dev/vda2: UUID="4kv7Rw-cMbP-wPAu-wnb7-22KV-Vcfn-OUMgsn" TYPE="LVM2_member"
/dev/mapper/VolGroup-lv_root: UUID="a96203aa-603e-4ebf-ab41-409f814f3f3a" TYPE="ext4"
/dev/mapper/VolGroup-lv_swap: UUID="a48ec496-2a4e-4048-b0ad-3da47be6abfb" TYPE="swap"
/dev/sda1: UUID="PmBoaI-O2DE-7nzU-pzk5-jRZn-THdD-eRjH36" TYPE="LVM2_member"
/dev/mapper/cluster_vg-demo: LABEL="westos_hwj:mygfs2" UUID="8d734613-f2e3-e512-2ac9-259ca1075c22" TYPE="gfs2"
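The transcript does not show the fstab line itself. With the UUID from blkid, a sketch of the entry (the `_netdev` option delays mounting until the network, and therefore the iSCSI session, is available; the mount point /var/lib/mysql follows section 5):

```
# /etc/fstab (excerpt)
UUID=8d734613-f2e3-e512-2ac9-259ca1075c22  /var/lib/mysql  gfs2  _netdev,defaults  0 0
```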
Configure the service in the luci web management interface.
Test:
[root@server4 ~]# clustat
Cluster Status for westos_hwj @ Thu Jun 28 16:09:46 2018
Member Status: Quorate
Member Name ID Status
------ ---- ---- ------
server1 1 Online
server4 2 Online, Local, rgmanager
Service Name Owner (Last) State
------- ---- ----- ------ -----
service:web server4 started
[root@server4 ~]# clusvcadm -r web -m server1 ## relocate the cluster service to server1
[root@server4 ~]# clusvcadm -e web ## enable (start) the web service
[root@server4 ~]# clusvcadm -d web ## disable (stop) the web service