HighAvailability

Environment: three virtual machines with 1 GB of RAM each. On every machine, disable SELinux and the firewall, add host name resolution for all three hosts, and configure time synchronization.
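A minimal sketch of that preparation, run on every node (the host names and addresses are the ones used later in this article; 172.25.30.250 as NTP source is only an assumption here, substitute your own time server):
setenforce 0                                  # and set SELINUX=disabled in /etc/selinux/config
/etc/init.d/iptables stop; chkconfig iptables off
cat >> /etc/hosts <<EOF
172.25.30.1 server1.example.com
172.25.30.2 server2.example.com
172.25.30.3 server3.example.com
EOF
ntpdate 172.25.30.250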
server1.example.com:
[root@server1 ~]# /etc/init.d/iptables stop
[root@server1 ~]# cd /etc/yum.repos.d/
[root@server1 yum.repos.d]# ls
rhel-source.repo
[root@server1 yum.repos.d]# yum repolist
Loaded plugins: product-id, subscription-manager
This system is not registered to Red Hat Subscription Management. You can use subscription-manager to register.
ftp://172.25.5.105/pub/yum6.5/repodata/repomd.xml: [Errno 14] PYCURL ERROR 9 - "Server denied you to change to the given directory"
Trying other mirror.
repo id                                  repo name                        status
rhel-source-beta                         localyum                         3,690
repolist: 3,690
[root@server1 yum.repos.d]# ls
rhel-source.repo
[root@server1 yum.repos.d]# vim rhel-source.repo
[rhel-source-beta]
name=localyum
baseurl=ftp://172.25.30.250/pub/yum6.5
gpgcheck=0

[HighAvailability]
name=localyum
baseurl=ftp://172.25.30.250/pub/yum6.5/HighAvailability
gpgcheck=0

[LoadBalancer]
name=localyum
baseurl=ftp://172.25.30.250/pub/yum6.5/LoadBalancer
gpgcheck=0

[ResilientStorage]
name=localyum
baseurl=ftp://172.25.30.250/pub/yum6.5/ResilientStorage
gpgcheck=0

[ScalableFileSystem]
name=localyum
baseurl=ftp://172.25.30.250/pub/yum6.5/ScalableFileSystem
gpgcheck=0
[root@server1 yum.repos.d]# yum repolist
Loaded plugins: product-id, subscription-manager
This system is not registered to Red Hat Subscription Management. You can use subscription-manager to register.
HighAvailability                                         | 3.9 kB     00:00     
HighAvailability/primary_db                              |  43 kB     00:00     
LoadBalancer                                             | 3.9 kB     00:00     
LoadBalancer/primary_db                                  | 7.0 kB     00:00     
ResilientStorage                                         | 3.9 kB     00:00     
ResilientStorage/primary_db                              |  47 kB     00:00     
ScalableFileSystem                                       | 3.9 kB     00:00     
ScalableFileSystem/primary_db                            | 6.8 kB     00:00     
rhel-source-beta                                         | 3.9 kB     00:00     
repo id                                   repo name                       status
HighAvailability                          localyum                           56
LoadBalancer                              localyum                            4
ResilientStorage                          localyum                           62
ScalableFileSystem                        localyum                            7
rhel-source-beta                          localyum                        3,690
repolist: 3,819
[root@server1 yum.repos.d]# yum install ricci -y
[root@server1 yum.repos.d]# chkconfig ricci on
[root@server1 yum.repos.d]# echo westos | passwd --stdin ricci
Changing password for user ricci.
passwd: all authentication tokens updated successfully.
[root@server1 yum.repos.d]# /etc/init.d/ricci start

[root@server1 yum.repos.d]# scp rhel-source.repo 172.25.30.2:/etc/yum.repos.d/
[root@server1 yum.repos.d]# scp rhel-source.repo 172.25.30.3:/etc/yum.repos.d/

server2.example.com:
[root@server2 ~]# /etc/init.d/iptables stop
[root@server2 ~]# cd /etc/yum.repos.d/
[root@server2 yum.repos.d]# yum install ricci -y
[root@server2 yum.repos.d]# chkconfig ricci on
[root@server2 yum.repos.d]# echo westos | passwd --stdin ricci
Changing password for user ricci.
passwd: all authentication tokens updated successfully.
[root@server2 yum.repos.d]# /etc/init.d/ricci start
[root@server2 yum.repos.d]# echo westos | passwd --stdin ricci
Changing password for user ricci.
passwd: all authentication tokens updated successfully.
[root@server2 yum.repos.d]# /etc/init.d/ricci restart
Shutting down ricci:                                       [  OK  ]
Starting ricci:                                            [  OK  ]

server3.example.com:
[root@server3 ~]# cd /etc/yum.repos.d/
[root@server3 yum.repos.d]# yum install luci -y
[root@server3 yum.repos.d]# /etc/init.d/luci start
Adding following auto-detected host IDs (IP addresses/domain names), corresponding to `server3.example.com' address, to the configuration of self-managed certificate `/var/lib/luci/etc/cacert.config' (you can change them by editing `/var/lib/luci/etc/cacert.config', removing the generated certificate `/var/lib/luci/certs/host.pem' and restarting luci):
    (none suitable found, you can still do it manually as mentioned above)

Generating a 2048 bit RSA private key
writing new private key to '/var/lib/luci/certs/host.pem'
Starting saslauthd:                                        [  OK  ]
Start luci...                                              [  OK  ]
Point your web browser to https://server3.example.com:8084 (or equivalent) to access luci


Now open a browser and go to https://server3.example.com:8084. Accept the certificate, log in as root, then under Clusters create a cluster and add server1.example.com and server2.example.com to it.
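When the cluster is created, luci talks to ricci on each node, installs and starts the cluster services (cman, rgmanager, modclusterd) and usually reboots the nodes. A quick sanity check afterwards, on either node:
clustat                     # both members should be listed as Online
chkconfig --list cman       # should be on if the node is meant to rejoin the cluster after a reboot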



############ fence ##########
[root@server1 ~]# cat /etc/cluster/cluster.conf
<?xml version="1.0"?>
<cluster config_version="1" name="liyao_ha">
    <clusternodes>
        <clusternode name="server1.example.com" nodeid="1"/>
        <clusternode name="server2.example.com" nodeid="2"/>
    </clusternodes>
    <cman expected_votes="1" two_node="1"/>
    <fencedevices/>
    <rm/>
</cluster>

[root@server2 ~]# cd /etc/cluster/
[root@server2 cluster]# cat cluster.conf
<?xml version="1.0"?>
<cluster config_version="2" name="liyao_ha">
    <clusternodes>
        <clusternode name="server1.example.com" nodeid="1"/>
        <clusternode name="server2.example.com" nodeid="2"/>
    </clusternodes>
    <cman expected_votes="1" two_node="1"/>
    <fencedevices>
        <fencedevice agent="fence_xvm" name="vmfence"/>
    </fencedevices>
</cluster>

cluster.conf must be identical on both nodes. (The server1 copy above was captured before the fence device was added; once luci propagates the change, both nodes show config_version 2 with the vmfence device.)
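Updates made through luci are pushed to every node automatically via ricci. If cluster.conf is edited by hand instead, one way to distribute the new version (a sketch; cman must be running) is:
cman_tool version -r        # propagate the updated cluster.conf to all nodes
cman_tool version           # run on each node; the config version should match everywhere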

[root@foundation5 ~]# yum search fence | grep fence-virtd
fence-virtd.x86_64 : Daemon which handles requests from fence-virt
fence-virtd-libvirt.x86_64 : Libvirt backend for fence-virtd
fence-virtd-multicast.x86_64 : Multicast listener for fence-virtd
fence-virtd-serial.x86_64 : Serial VMChannel listener for fence-virtd
[root@foundation5 ~]# yum install fence-virtd.x86_64  fence-virtd-libvirt.x86_64  fence-virtd-multicast.x86_64 fence-virtd-serial.x86_64  -y
[root@foundation5 ~]# systemctl status fence-virtd
fence-virtd.service
   Loaded: not-found (Reason: No such file or directory)
   Active: inactive (dead)
[root@foundation5 ~]# mkdir /etc/cluster/
[root@foundation5 ~]# cd /etc/cluster/
[root@foundation5 cluster]# ls
[root@foundation5 cluster]# fence_virtd -c
Module search path [/usr/lib64/fence-virt]:

Available backends:
    libvirt 0.1
Available listeners:
    serial 0.4
    multicast 1.2

Listener modules are responsible for accepting requests
from fencing clients.

Listener module [multicast]:

The multicast listener module is designed for use environments
where the guests and hosts may communicate over a network using
multicast.

The multicast address is the address that a client will use to
send fencing requests to fence_virtd.

Multicast IP Address [225.0.0.12]:

Using ipv4 as family.

Multicast IP Port [1229]:

Setting a preferred interface causes fence_virtd to listen only
on that interface.  Normally, it listens on all interfaces.
In environments where the virtual machines are using the host
machine as a gateway, this *must* be set (typically to virbr0).
Set to 'none' for no interface.

Interface [br0]:

The key file is the shared key information which is used to
authenticate fencing requests.  The contents of this file must
be distributed to each physical host and virtual machine within
a cluster.

Key File [/etc/cluster/fence_xvm.key]:

Backend modules are responsible for routing requests to
the appropriate hypervisor or management layer.

Backend module [libvirt]:

Configuration complete.

=== Begin Configuration ===
fence_virtd {
    listener = "multicast";
    backend = "libvirt";
    module_path = "/usr/lib64/fence-virt";
}

listeners {
    multicast {
        key_file = "/etc/cluster/fence_xvm.key";
        address = "225.0.0.12";
        interface = "br0";
        family = "ipv4";
        port = "1229";
    }

}

backends {
    libvirt {
        uri = "qemu:///system";
    }

}

=== End Configuration ===
Replace /etc/fence_virt.conf with the above [y/N]? y
[root@foundation5 cluster]# ls
[root@foundation5 cluster]# pwd
/etc/cluster
[root@foundation5 cluster]# dd if=/dev/urandom of=fence_xvm.key bs=128 count=1    ############# generate the fence key #############
1+0 records in
1+0 records out
128 bytes (128 B) copied, 0.000169515 s, 755 kB/s
[root@foundation5 cluster]# file fence_xvm.key
fence_xvm.key: data
[root@foundation5 cluster]# ll
total 4
-rw-r--r-- 1 root root 128 Jul 10 14:52 fence_xvm.key
############ copy the key to the two cluster nodes (server1 and server2) ############
[root@foundation5 cluster]# scp fence_xvm.key 172.25.30.1:/etc/cluster/
[root@foundation5 cluster]# scp fence_xvm.key 172.25.30.2:/etc/cluster/
####### generate the key while fence_virtd is stopped, then start or restart the service; otherwise the new key does not take effect ##########
[root@foundation5 cluster]# systemctl restart fence_virtd
[root@foundation5 cluster]# systemctl status fence_virtd

######### check that the service's port is open ######
[root@foundation5 cluster]# netstat -anulp | grep 1229
udp        0      0 0.0.0.0:1229            0.0.0.0:*                           19981/fence_virtd   
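With the key in place on the nodes and fence_virtd listening on the host, the multicast path can be verified from a cluster node (assuming the fence-virt agent is installed there, which the HighAvailability channel provides):
fence_xvm -o list           # should print the virtual machines known to the host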

server1.example.com
[root@server1 ~]# clustat
Cluster Status for liyao_ha @ Sun Jul 10 14:25:04 2016
Member Status: Quorate

 Member Name                             ID   Status
 ------ ----                             ---- ------
 server1.example.com                         1 Online, Local
 server2.example.com                         2 Online

[root@server1 ~]# cd /etc/cluster/
[root@server1 cluster]# ls
cluster.conf  cman-notify.d  fence_xvm.key

server2.example.com
[root@server2 ~]# cd /etc/cluster/
[root@server2 cluster]# cat cluster.conf
<?xml version="1.0"?>
<cluster config_version="2" name="liyao_ha">
    <clusternodes>
        <clusternode name="server1.example.com" nodeid="1"/>
        <clusternode name="server2.example.com" nodeid="2"/>
    </clusternodes>
    <cman expected_votes="1" two_node="1"/>
    <fencedevices>
        <fencedevice agent="fence_xvm" name="vmfence"/>
    </fencedevices>
</cluster>
[root@server2 cluster]# ls
cluster.conf  cman-notify.d  fence_xvm.key

In the browser, log in as root, click Fence Devices, add a fence device, choose Fence virt (Multicast Mode), give it a name (vmfence here), and submit.
Under Nodes, click server1.example.com, then Add Fence Method, enter any Method Name, and submit. Then click Add Fence Instance, select vmfence (xvm Virtual Machine Fencing), enter the virtual machine's UUID in the Domain field, and submit. Configure server2.example.com the same way.

Test:
Run the following command on one node; the other node will be powered off by the fence device (here we fence server2):
fence_node server2.example.com
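The fenced node should power off, reboot and rejoin the cluster; watch clustat on the surviving node while it happens. The same operation can also be triggered through the fence agent itself (a sketch; the argument must be the libvirt domain name or UUID that was entered in luci):
fence_xvm -H <domain-or-UUID-of-server2>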


Next, under Failover Domains click Add. In Add Failover Domain to Cluster, set Name to dbfail, check Prioritized, Restricted and No Failback, and set the priorities of server1.example.com and server2.example.com: the lower the Priority value, the higher the priority. Here server1.example.com gets priority 1 and server2.example.com priority 2. Submit.
Then under Resources click Add. In Add Resource to Cluster choose IP Address, set IP Address to 172.25.30.100 (any unused address), Netmask Bits to 24, check Monitor Link, and set Number of Seconds to Sleep After Removing an IP Address to 10 (any reasonable value), then submit. Add a second resource of type Script, with Name mysqld and Full Path to Script File /etc/init.d/mysqld, then submit.
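For reference, the failover domain and the two resources end up in the <rm> section of cluster.conf roughly like this (a sketch of what luci writes; exact attribute names can differ between versions):
<rm>
    <failoverdomains>
        <failoverdomain name="dbfail" ordered="1" restricted="1" nofailback="1">
            <failoverdomainnode name="server1.example.com" priority="1"/>
            <failoverdomainnode name="server2.example.com" priority="2"/>
        </failoverdomain>
    </failoverdomains>
    <resources>
        <ip address="172.25.30.100/24" monitor_link="on" sleeptime="10"/>
        <script file="/etc/init.d/mysqld" name="mysqld"/>
    </resources>
</rm>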

Now install mysql-server on both server1 and server2, start it once to confirm it works, then stop it again.
[root@server1 cluster]# yum install mysql-server -y
[root@server1 cluster]# /etc/init.d/mysqld start
[root@server1 cluster]# /etc/init.d/mysqld stop
Stopping mysqld:                                           [  OK  ]
[root@server1 cluster]# clustat

[root@server2 ~]# yum install mysql-server -y
[root@server2 ~]# /etc/init.d/mysqld start
[root@server2 ~]# mysql
Welcome to the MySQL monitor.  Commands end with ; or \g.
Your MySQL connection id is 2
Server version: 5.1.71 Source distribution

Copyright (c) 2000, 2013, Oracle and/or its affiliates. All rights reserved.

Oracle is a registered trademark of Oracle Corporation and/or its
affiliates. Other names may be trademarks of their respective
owners.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

mysql> quit
Bye
[root@server2 ~]# /etc/init.d/mysqld stop
Stopping mysqld:                                           [  OK  ]


Then under Service Groups click Add. In Add Service Group to Cluster, set Service Name to mysql and check Automatically Start This Service and Run Exclusive. Set Failover Domain to dbfail and Recovery Policy to Relocate, then click Add Resource and add the IP address and the mysqld script to the service, and submit.
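The service itself appears inside <rm> roughly as follows (again only a sketch, not the exact XML luci produces):
<service autostart="1" domain="dbfail" exclusive="1" name="mysql" recovery="relocate">
    <ip ref="172.25.30.100/24"/>
    <script ref="mysqld"/>
</service>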

######### relocate the service to the other node ##########
Command: clusvcadm -r mysql -m server2.example.com

[root@server2 ~]# clustat
Cluster Status for westos_ha @ Sat Jul 16 11:47:01 2016
Member Status: Quorate

 Member Name                        ID   Status
 ------ ----                        ---- ------
 server1.example.com                    1 Online, rgmanager
 server2.example.com                    2 Online, Local, rgmanager

 Service Name              Owner (Last)              State         
 ------- ----              ----- ------              -----         
 service:mysql             server1.example.com       started       
[root@server2 ~]# clusvcadm -r mysql -m server2.example.com
Trying to relocate service:mysql to server2.example.com...

Success
service:mysql is now running on server2.example.com
[root@server2 ~]# clustat
Cluster Status for westos_ha @ Sat Jul 16 11:48:05 2016
Member Status: Quorate

 Member Name                        ID   Status
 ------ ----                        ---- ------
 server1.example.com                    1 Online, rgmanager
 server2.example.com                    2 Online, Local, rgmanager

 Service Name              Owner (Last)              State         
 ------- ----              ----- ------              -----         
 service:mysql             server2.example.com       started       
[root@server2 ~]# ip addr show
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 16436 qdisc noqueue state UNKNOWN
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 52:54:00:4f:1a:29 brd ff:ff:ff:ff:ff:ff
    inet 172.25.30.2/24 brd 172.25.30.255 scope global eth0
    inet 172.25.30.100/24 scope global secondary eth0
    inet6 fe80::5054:ff:fe4f:1a29/64 scope link
       valid_lft forever preferred_lft forever
3: eth1: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN qlen 1000
    link/ether 52:54:00:0a:5f:60 brd ff:ff:ff:ff:ff:ff



########### shared storage ######
First add an extra disk to server3.
[root@server3 ~]# fdisk -l

Disk /dev/sda: 8589 MB, 8589934592 bytes
255 heads, 63 sectors/track, 1044 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x0009bf58

   Device Boot      Start         End      Blocks   Id  System
/dev/sda1   *           1          64      512000   83  Linux
Partition 1 does not end on cylinder boundary.
/dev/sda2              64        1045     7875584   8e  Linux LVM

Disk /dev/vda: 8589 MB, 8589934592 bytes
16 heads, 63 sectors/track, 16644 cylinders
Units = cylinders of 1008 * 512 = 516096 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000



[root@server3 ~]# fdisk /dev/vda
Device contains neither a valid DOS partition table, nor Sun, SGI or OSF disklabel
Building a new DOS disklabel with disk identifier 0x15393fe4.
Changes will remain in memory only, until you decide to write them.
After that, of course, the previous content won't be recoverable.

Warning: invalid flag 0x0000 of partition table 4 will be corrected by w(rite)

WARNING: DOS-compatible mode is deprecated. It's strongly recommended to
         switch off the mode (command 'c') and change display units to
         sectors (command 'u').

Command (m for help): n
Command action
   e   extended
   p   primary partition (1-4)
p
Partition number (1-4): 1
First cylinder (1-16644, default 1):
Using default value 1
Last cylinder, +cylinders or +size{K,M,G} (1-16644, default 16644):
Using default value 16644

Command (m for help): p

Disk /dev/vda: 8589 MB, 8589934592 bytes
16 heads, 63 sectors/track, 16644 cylinders
Units = cylinders of 1008 * 512 = 516096 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x15393fe4

   Device Boot      Start         End      Blocks   Id  System
/dev/vda1               1       16644     8388544+  83  Linux

Command (m for help): wq
The partition table has been altered!

Calling ioctl() to re-read partition table.
Syncing disks.

[root@server3 ~]# df
Filesystem                           1K-blocks   Used Available Use% Mounted on
/dev/mapper/vg_foundation104-lv_root   6926264 965676   5608744  15% /
tmpfs                                   510200      0    510200   0% /dev/shm
/dev/sda1                               495844  33464    436780   8% /boot
[root@server3 ~]# lvs
  LV      VG               Attr       LSize   Pool Origin Data%  Move Log Cpy%Sync Convert
  lv_root vg_foundation104 -wi-ao----   6.71g                                             
  lv_swap vg_foundation104 -wi-ao---- 816.00m  
[root@server3 ~]# yum install scsi-* -y
[root@server3 ~]# vim /etc/tgt/targets.conf
<target iqn.2016-07.com.example:server.target1>
     backing-store /dev/vda1
     initiator-address 172.25.30.1
     initiator-address 172.25.30.2
</target>
[root@server3 ~]# /etc/init.d/tgtd start
Starting SCSI target daemon:                               [  OK  ]
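To confirm the export before moving on, the target definition can be dumped on server3 (tgt-admin ships with scsi-target-utils):
tgt-admin --show            # the target should list /dev/vda1 as backing store and the two initiator addresses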

server1:
[root@server1 ~]# yum install -y iscsi-*
[root@server1 ~]# iscsiadm -m discovery -t st -p 172.25.30.3
172.25.30.3:3260,1 iqn.2016-07.com.example:server.target1
[root@server1 ~]# iscsiadm -m node -l
Logging in to [iface: default, target: iqn.2016-07.com.example:server.target1, portal: 172.25.30.3,3260] (multiple)
Login to [iface: default, target: iqn.2016-07.com.example:server.target1, portal: 172.25.30.3,3260] successful.
[root@server1 ~]# fdisk -l

Disk /dev/sdb: 8589 MB, 8589869568 bytes
64 heads, 32 sectors/track, 8191 cylinders
Units = cylinders of 2048 * 512 = 1048576 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000

[root@server1 ~]# /etc/init.d/clvmd status
clvmd (pid  1518) is running...
Clustered Volume Groups: (none)
Active clustered Logical Volumes: (none)
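clvmd coordinates LVM metadata across the nodes, but only when LVM is switched to cluster-wide locking. The cluster packages normally set this up for you; if in doubt, it can be checked or enabled like this (a sketch):
lvmconf --enable-cluster
grep locking_type /etc/lvm/lvm.conf     # should show locking_type = 3 on both nodes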
[root@server1 ~]# fdisk -cu /dev/sdb
Device contains neither a valid DOS partition table, nor Sun, SGI or OSF disklabel
Building a new DOS disklabel with disk identifier 0x1a853dc6.
Changes will remain in memory only, until you decide to write them.
After that, of course, the previous content won't be recoverable.

Warning: invalid flag 0x0000 of partition table 4 will be corrected by w(rite)

Command (m for help): n
Command action
   e   extended
   p   primary partition (1-4)
p
Partition number (1-4): 1
First sector (2048-16777088, default 2048):
Using default value 2048
Last sector, +sectors or +size{K,M,G} (2048-16777088, default 16777088):
Using default value 16777088

Command (m for help): p

Disk /dev/sdb: 8589 MB, 8589869568 bytes
64 heads, 32 sectors/track, 8191 cylinders, total 16777089 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x1a853dc6

   Device Boot      Start         End      Blocks   Id  System
/dev/sdb1            2048    16777088     8387520+  83  Linux

Command (m for help): t
Selected partition 1
Hex code (type L to list codes): 8e
Changed system type of partition 1 to 8e (Linux LVM)

Command (m for help): p

Disk /dev/sdb: 8589 MB, 8589869568 bytes
64 heads, 32 sectors/track, 8191 cylinders, total 16777089 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x1a853dc6

   Device Boot      Start         End      Blocks   Id  System
/dev/sdb1            2048    16777088     8387520+  8e  Linux LVM

Command (m for help): wq
The partition table has been altered!

Calling ioctl() to re-read partition table.
Syncing disks.

server2:
[root@server2 ~]# yum install -y iscsi-*
[root@server2 ~]#  iscsiadm -m discovery -t st -p 172.25.30.3
Starting iscsid:                                           [  OK  ]
172.25.30.3:3260,1 iqn.2016-07.com.example:server.target1
[root@server2 ~]# iscsiadm -m node -l
Logging in to [iface: default, target: iqn.2016-07.com.example:server.target1, portal: 172.25.30.3,3260] (multiple)
Login to [iface: default, target: iqn.2016-07.com.example:server.target1, portal: 172.25.30.3,3260] successful.
[root@server2 ~]# fdisk -l

At this point the shared disk sdb is visible on both server1 and server2.
[root@server2 ~]# cat /proc/partitions
major minor  #blocks  name

   8        0    8388608 sda
   8        1     512000 sda1
   8        2    7875584 sda2
 253        0    7036928 dm-0
 253        1     835584 dm-1
   8       16    8388544 sdb
[root@server2 ~]# partprobe  #### re-read the partition tables ####
Warning: WARNING: the kernel failed to re-read the partition table on /dev/sda (Device or resource busy).  As a result, it may not reflect all of your changes until after reboot.
[root@server2 ~]# cat /proc/partitions
major minor  #blocks  name

   8        0    8388608 sda
   8        1     512000 sda1
   8        2    7875584 sda2
 253        0    7036928 dm-0
 253        1     835584 dm-1
   8       16    8388544 sdb
   8       17    8387520 sdb1

########### create /dev/clustervg/demo #########
server1:
[root@server1 ~]# pvcreate /dev/sdb1
  dev_is_mpath: failed to get device for 8:17
  Physical volume "/dev/sdb1" successfully created
[root@server1 ~]# vgcreate clustervg /dev/sdb1
  Clustered volume group "clustervg" successfully created
[root@server1 ~]# lvcreate -L 2G -n demo clustervg
  Logical volume "demo" created
server2 (nothing needs to be created on this node; the clustered VG and LV show up as soon as LVM is queried):

[root@server2 ~]# pvs
  PV         VG               Fmt  Attr PSize PFree
  /dev/sda2  vg_foundation104 lvm2 a--  7.51g    0
[root@server2 ~]# vgs
  VG               #PV #LV #SN Attr   VSize VFree
  clustervg          1   0   0 wz--nc 8.00g 8.00g
  vg_foundation104   1   2   0 wz--n- 7.51g    0
[root@server2 ~]# vgdisplay clustervg
  --- Volume group ---
  VG Name               clustervg
  System ID             
  Format                lvm2
  Metadata Areas        1
  Metadata Sequence No  1
  VG Access             read/write
  VG Status             resizable
  Clustered             yes
  Shared                no
  MAX LV                0
  Cur LV                0
  Open LV               0
  Max PV                0
  Cur PV                1
  Act PV                1
  VG Size               8.00 GiB
  PE Size               4.00 MiB
  Total PE              2047
  Alloc PE / Size       0 / 0   
  Free  PE / Size       2047 / 8.00 GiB
  VG UUID               GV74DE-Lrtf-1KbM-KsLw-NTSI-vzYX-eyzf1B
   
[root@server2 ~]# lvs
  LV      VG               Attr       LSize   Pool Origin Data%  Move Log Cpy%Sync Convert
  demo    clustervg        -wi-a-----   2.00g                                             
  lv_root vg_foundation104 -wi-ao----   6.71g                                             
  lv_swap vg_foundation104 -wi-ao---- 816.00m                                             

############## Format /dev/clustervg/demo as ext4. Because ext4 is not a cluster file system, when the LV is mounted on /var/lib/mysql the two nodes do not stay in sync; only one of server1 and server2 can safely use it at a time.
[root@server1 ~]# mkfs.ext4 /dev/clustervg/demo
mke2fs 1.41.12 (17-May-2010)
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
Stride=0 blocks, Stripe width=0 blocks
131072 inodes, 524288 blocks
26214 blocks (5.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=536870912
16 block groups
32768 blocks per group, 32768 fragments per group
8192 inodes per group
Superblock backups stored on blocks:
    32768, 98304, 163840, 229376, 294912

Writing inode tables: done                            
Creating journal (16384 blocks): done
Writing superblocks and filesystem accounting information: done

This filesystem will be automatically checked every 36 mounts or
180 days, whichever comes first.  Use tune2fs -c or -i to override.

########## then mount it on /var/lib/mysql/ on each node in turn
[root@server1 ~]# mount /dev/clustervg/demo /mnt/
[root@server1 ~]# cd /var/lib/mysql/
[root@server1 mysql]# cp -rp * /mnt/
[root@server1 mysql]# ls
ibdata1  ib_logfile0  ib_logfile1  mysql  test
[root@server1 mysql]# cd /mnt
[root@server1 mnt]# ls
ibdata1  ib_logfile0  ib_logfile1  lost+found  mysql  test
[root@server1 mnt]# chown mysql.mysql .
[root@server1 mnt]# df
Filesystem                           1K-blocks    Used Available Use% Mounted on
/dev/mapper/vg_foundation104-lv_root   6926264 1082620   5491800  17% /
tmpfs                                   510200   25656    484544   6% /dev/shm
/dev/sda1                               495844   33464    436780   8% /boot
/dev/mapper/clustervg-demo             2064208   90076   1869276   5% /mnt
[root@server1 mnt]# cd
[root@server1 ~]# umount /mnt
[root@server1 ~]# mount /dev/clustervg/demo /var/lib/mysql
[root@server1 ~]# ll -d /var/lib/mysql
drwxr-xr-x 5 mysql mysql 4096 Jul 16 14:38 /var/lib/mysql
[root@server1 ~]# /etc/init.d/mysqld start
Starting mysqld:                                           [  OK  ]
[root@server1 ~]# /etc/init.d/mysqld stop
Stopping mysqld:                                           [  OK  ]
[root@server1 ~]# umount /var/lib/mysql/

server2:
[root@server2 ~]# mount /dev/clustervg/demo /var/lib/mysql/
[root@server2 ~]# cd /var/lib/mysql
[root@server2 mysql]# ll -d .
drwxr-xr-x 5 mysql mysql 4096 Jul 16 14:41 .
[root@server2 mysql]# /etc/init.d/mysqld start
Starting mysqld:                                           [  OK  ]
[root@server2 mysql]# /etc/init.d/mysqld stop
Stopping mysqld:                                           [  OK  ]
[root@server2 mysql]# cd
[root@server2 ~]# df
Filesystem                           1K-blocks    Used Available Use% Mounted on
/dev/mapper/vg_foundation104-lv_root   6926264 1082792   5491628  17% /
tmpfs                                   510200   25656    484544   6% /dev/shm
/dev/sda1                               495844   33464    436780   8% /boot
/dev/mapper/clustervg-demo             2064208   90076   1869276   5% /var/lib/mysql
[root@server2 ~]# umount /var/lib/mysql/

[root@server1 ~]# clustat
Cluster Status for westos_ha @ Sat Jul 16 14:57:01 2016
Member Status: Quorate

 Member Name                              ID   Status
 ------ ----                              ---- ------
 server1.example.com                          1 Online, Local, rgmanager
 server2.example.com                          2 Online, rgmanager

 Service Name                    Owner (Last)                    State         
 ------- ----                    ----- ------                    -----         
 service:mysql                   (server1.example.com)           recoverable   
##### disable the mysql service ##############
[root@server1 ~]# clusvcadm -d mysql
Local machine disabling service:mysql...Success
[root@server1 ~]# clustat
Cluster Status for westos_ha @ Sat Jul 16 15:19:13 2016
Member Status: Quorate

 Member Name                              ID   Status
 ------ ----                              ---- ------
 server1.example.com                          1 Online, Local, rgmanager
 server2.example.com                          2 Online, rgmanager

 Service Name                    Owner (Last)                    State         
 ------- ----                    ----- ------                    -----         
 service:mysql                   (server2.example.com)           disabled   

Now, in the browser, add a Filesystem entry under Resources (pointing /dev/clustervg/demo at /var/lib/mysql), then add it to the mysql service group.
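In cluster.conf this adds a filesystem resource and a reference to it inside the mysql service, roughly like this (a sketch; the resource name dbdata is only an example):
<resources>
    <ip address="172.25.30.100/24" monitor_link="on" sleeptime="10"/>
    <fs device="/dev/clustervg/demo" fstype="ext4" mountpoint="/var/lib/mysql" name="dbdata" force_unmount="1"/>
    <script file="/etc/init.d/mysqld" name="mysqld"/>
</resources>
<service autostart="1" domain="dbfail" exclusive="1" name="mysql" recovery="relocate">
    <ip ref="172.25.30.100/24"/>
    <fs ref="dbdata"/>
    <script ref="mysqld"/>
</service>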
[root@server1 ~]# clusvcadm -e mysql
Local machine trying to enable service:mysql...Success
service:mysql is now running on server1.example.com
[root@server1 ~]# clustat
Cluster Status for westos_ha @ Sat Jul 16 15:19:47 2016
Member Status: Quorate

 Member Name                              ID   Status
 ------ ----                              ---- ------
 server1.example.com                          1 Online, Local, rgmanager
 server2.example.com                          2 Online, rgmanager

 Service Name                    Owner (Last)                    State         
 ------- ----                    ----- ------                    -----         
 service:mysql                   server1.example.com             started       
[root@server1 ~]# df
Filesystem                           1K-blocks    Used Available Use% Mounted on
/dev/mapper/vg_foundation104-lv_root   6926264 1083784   5490636  17% /
tmpfs                                   510200   25656    484544   6% /dev/shm
/dev/sda1                               495844   33464    436780   8% /boot
/dev/mapper/clustervg-demo             2064208   90076   1869276   5% /var/lib/mysql
[root@server1 ~]# ip addr show
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 16436 qdisc noqueue state UNKNOWN
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 52:54:00:ae:3e:14 brd ff:ff:ff:ff:ff:ff
    inet 172.25.30.1/24 brd 172.25.30.255 scope global eth0
    inet 172.25.30.100/24 scope global secondary eth0
    inet6 fe80::5054:ff:feae:3e14/64 scope link
       valid_lft forever preferred_lft forever
3: eth1: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN qlen 1000
    link/ether 52:54:00:b2:64:1c brd ff:ff:ff:ff:ff:ff
[root@server1 ~]# lvs
  LV      VG               Attr       LSize   Pool Origin Data%  Move Log Cpy%Sync Convert
  demo    clustervg        -wi-ao----   2.00g                                             
  lv_root vg_foundation104 -wi-ao----   6.71g                                             
  lv_swap vg_foundation104 -wi-ao---- 816.00m                                             

############## reformat /dev/clustervg/demo as gfs2
First remove the Filesystem entry from the service group and from Resources in the browser.
[root@server1 ~]# clusvcadm -d mysql
Local machine disabling service:mysql...Success
[root@server1 ~]# mkfs.gfs2 -p lock_dlm -t westos_ha:mygfs2 -j 3 /dev/clustervg/demo
This will destroy any data on /dev/clustervg/demo.
It appears to contain: symbolic link to `../dm-2'

Are you sure you want to proceed? [y/n] y

Device:                    /dev/clustervg/demo
Blocksize:                 4096
Device Size                2.00 GB (524288 blocks)
Filesystem Size:           2.00 GB (524288 blocks)
Journals:                  3
Resource Groups:           8
Locking Protocol:          "lock_dlm"
Lock Table:                "westos_ha:mygfs2"
UUID:                      9ae9c9ac-bcbd-8b00-4610-30cccf981f5e
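The -t argument is <cluster name>:<filesystem name>; the first half has to match the cluster name in cluster.conf (westos_ha here), or the nodes will refuse to mount the filesystem. -j 3 writes three journals, one for each node that may mount it plus a spare. The lock table stored in the superblock can be checked with the gfs2_tool sb command listed in the help output below:
gfs2_tool sb /dev/clustervg/demo table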

[root@server1 ~]# mount /dev/clustervg/demo /mnt
[root@server1 ~]# cd /mnt
[root@server1 mnt]# ls
[root@server1 mnt]# cp /etc/passwd .
[root@server1 mnt]# ls
passwd
[root@server1 mnt]# ls
fstab  passwd
server2:
[root@server2 ~]# mount /dev/clustervg/demo /mnt
[root@server2 ~]# cd /mnt
[root@server2 mnt]# ls
passwd
[root@server2 mnt]# cp /etc/fstab .
[root@server2 mnt]# ls
fstab  passwd
[root@server1 mnt]# ll
total 8
-rw-r--r-- 1 root root  795 Jul 16 15:26 fstab
-rw-r--r-- 1 root root 1254 Jul 16 15:26 passwd
[root@server1 mnt]# gfs2_tool -h
Clear a flag on a inode
  gfs2_tool clearflag flag <filenames>
Freeze a GFS2 cluster:
  gfs2_tool freeze <mountpoint>
Get tuneable parameters for a filesystem
  gfs2_tool gettune <mountpoint>
List the file system's journals:
  gfs2_tool journals <mountpoint>
Have GFS2 dump its lock state:
  gfs2_tool lockdump <mountpoint> [buffersize]
Tune a GFS2 superblock
  gfs2_tool sb <device> proto [newval]
  gfs2_tool sb <device> table [newval]
  gfs2_tool sb <device> ondisk [newval]
  gfs2_tool sb <device> multihost [newval]
  gfs2_tool sb <device> all
Set a flag on a inode
  gfs2_tool setflag flag <filenames>
Tune a running filesystem
  gfs2_tool settune <mountpoint> <parameter> <value>
Unfreeze a GFS2 cluster:
  gfs2_tool unfreeze <mountpoint>
Print tool version information
  gfs2_tool version
Withdraw this machine from participating in a filesystem:
  gfs2_tool withdraw <mountpoint>

[root@server1 mnt]# gfs2_tool journals /dev/clustervg/demo       ##### list the journals #########
journal2 - 128MB
journal1 - 128MB
journal0 - 128MB
3 journal(s) found.
[root@server1 mnt]# df -h
Filesystem                            Size  Used Avail Use% Mounted on
/dev/mapper/vg_foundation104-lv_root  6.7G  1.1G  5.3G  17% /
tmpfs                                 499M   32M  468M   7% /dev/shm
/dev/sda1                             485M   33M  427M   8% /boot
/dev/mapper/clustervg-demo            2.0G  388M  1.7G  19% /mnt
[root@server1 mnt]# ls
fstab  passwd
[root@server1 mnt]# rm -fr *
[root@server1 mnt]# cd
[root@server1 ~]# umount /mnt
[root@server1 ~]# mount /dev/clustervg/demo /var/lib/mysql/
[root@server1 ~]# df
Filesystem                           1K-blocks    Used Available Use% Mounted on
/dev/mapper/vg_foundation104-lv_root   6926264 1083780   5490640  17% /
tmpfs                                   510200   31816    478384   7% /dev/shm
/dev/sda1                               495844   33464    436780   8% /boot
/dev/mapper/clustervg-demo             2096912  397148   1699764  19% /var/lib/mysql
[root@server1 ~]# chown mysql.mysql /var/lib/mysql/
[root@server1 ~]# clustat
Cluster Status for westos_ha @ Sat Jul 16 15:31:23 2016
Member Status: Quorate

 Member Name                              ID   Status
 ------ ----                              ---- ------
 server1.example.com                          1 Online, Local, rgmanager
 server2.example.com                          2 Online, rgmanager

 Service Name                    Owner (Last)                    State         
 ------- ----                    ----- ------                    -----         
 service:mysql                   (server1.example.com)           disabled      
[root@server2 ~]# umount /mnt
[root@server2 ~]# ll -d /var/lib/mysql/
drwxr-xr-x 4 mysql mysql 4096 Jul 16 11:47 /var/lib/mysql/
[root@server2 ~]# mount /dev/clustervg/demo /var/lib/mysql/
[root@server2 ~]# df
Filesystem                           1K-blocks    Used Available Use% Mounted on
/dev/mapper/vg_foundation104-lv_root   6926264 1082796   5491624  17% /
tmpfs                                   510200   31816    478384   7% /dev/shm
/dev/sda1                               495844   33464    436780   8% /boot
/dev/mapper/clustervg-demo             2096912  418888   1678024  20% /var/lib/mysql
### grow the gfs2 filesystem on /dev/clustervg/demo (only useful after the LV has been extended) ###
[root@server1 ~]# gfs2_grow /dev/clustervg/demo
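gfs2_grow only claims space that already exists on the device, so it does nothing unless the logical volume was extended first; the usual sequence (sizes here are just an example) is:
lvextend -L +2G /dev/clustervg/demo
gfs2_grow /dev/clustervg/demo       # run on one node while the filesystem is mounted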
### add 3 more journals to /dev/clustervg/demo ####
[root@server1 ~]# gfs2_jadd -j 3 /dev/clustervg/demo
### mount /dev/clustervg/demo on /var/lib/mysql automatically at boot
[root@server1 ~]# vim /etc/fstab
UUID="9ae9c9ac-bcbd-8b00-4610-30cccf981f5e"  /var/lib/mysql gfs2  _netdev 0 0

############ tear down everything configured above ###############
First disable the mysql service:
[root@server1 ~]# clusvcadm -d mysql
Local machine disabling service:mysql...Success
[root@server1 ~]# df
Filesystem                           1K-blocks    Used Available Use% Mounted on
/dev/mapper/vg_foundation104-lv_root   6926264 1083860   5490560  17% /
tmpfs                                   510200   31816    478384   7% /dev/shm
/dev/sda1                               495844   33464    436780   8% /boot
/dev/mapper/clustervg-demo             2096912  418884   1678028  20% /var/lib/mysql
[root@server1 ~]# clustat
Cluster Status for westos_ha @ Sat Jul 16 16:29:50 2016
Member Status: Quorate

 Member Name                            ID   Status
 ------ ----                            ---- ------
 server1.example.com                        1 Online, Local, rgmanager
 server2.example.com                        2 Online, rgmanager

 Service Name                  Owner (Last)                  State         
 ------- ----                  ----- ------                  -----         
 service:mysql                 (server1.example.com)         disabled     

Now, in the browser under Nodes, first have server1.example.com and server2.example.com leave the cluster, then delete them, and finally turn off all the related services so they no longer start at boot.
 
[root@server1 ~]# cd /etc/cluster/
[root@server1 cluster]# ls
cman-notify.d  fence_xvm.key
[root@server1 cluster]# chkconfig --list cman
cman               0:off    1:off    2:off    3:off    4:off    5:off    6:off
[root@server1 cluster]# chkconfig --list modclusterd
modclusterd        0:off    1:off    2:on    3:on    4:on    5:on    6:off
[root@server1 cluster]# /etc/init.d/modclusterd stop
Shutting down Cluster Module - cluster monitor:            [  OK  ]
[root@server1 cluster]# chkconfig modclusterd off
[root@server1 cluster]# fdisk -l
[root@server1 cluster]# lvs
  Skipping clustered volume group clustervg
  Skipping volume group clustervg
  LV      VG               Attr       LSize   Pool Origin Data%  Move Log Cpy%Sync Convert
  lv_root vg_foundation104 -wi-ao----   6.71g                                             
  lv_swap vg_foundation104 -wi-ao---- 816.00m                                             
[root@server1 cluster]# iscsiadm -m node -u  ## log out of the target first
Logging out of session [sid: 1, target: iqn.2016-07.com.example:server.target1, portal: 172.25.30.3,3260]
Logout of [sid: 1, target: iqn.2016-07.com.example:server.target1, portal: 172.25.30.3,3260] successful.
[root@server1 cluster]# iscsiadm -m node -o delete  ## then delete the recorded node entries
[root@server1 cluster]# /etc/init.d/ricci stop
Shutting down ricci:                                       [  OK  ]
[root@server1 cluster]# chkconfig ricci off
[root@server1 cluster]# iptables -L
Chain INPUT (policy ACCEPT)
target     prot opt source               destination         

Chain FORWARD (policy ACCEPT)
target     prot opt source               destination         

Chain OUTPUT (policy ACCEPT)
target     prot opt source               destination    

server2:
[root@server2 ~]# /etc/init.d/modclusterd stop
Shutting down Cluster Module - cluster monitor:            [  OK  ]
[root@server2 ~]# chkconfig modclusterd off
[root@server2 ~]# iscsiadm -m node -u
Logging out of session [sid: 1, target: iqn.2016-07.com.example:server.target1, portal: 172.25.30.3,3260]
Logout of [sid: 1, target: iqn.2016-07.com.example:server.target1, portal: 172.25.30.3,3260] successful.
[root@server2 ~]# iscsiadm -m node -o delete
[root@server2 ~]# df
Filesystem                           1K-blocks    Used Available Use% Mounted on
/dev/mapper/vg_foundation104-lv_root   6926264 1082748   5491672  17% /
tmpfs                                   510200       0    510200   0% /dev/shm
/dev/sda1                               495844   33464    436780   8% /boot
[root@server2 ~]# lvs
  LV      VG               Attr       LSize   Pool Origin Data%  Move Log Cpy%Sync Convert
  lv_root vg_foundation104 -wi-ao----   6.71g                                             
  lv_swap vg_foundation104 -wi-ao---- 816.00m                                             
[root@server2 ~]# vgs
  VG               #PV #LV #SN Attr   VSize VFree
  vg_foundation104   1   2   0 wz--n- 7.51g    0
[root@server2 ~]# pvs
  PV         VG               Fmt  Attr PSize PFree
  /dev/sda2  vg_foundation104 lvm2 a--  7.51g    0
[root@server2 ~]# /etc/init.d/ricci stop
Shutting down ricci:                                       [  OK  ]
[root@server2 ~]# chkconfig ricci off

Also remove the /etc/fstab entry on server1 and server2 that mounts /dev/clustervg/demo on /var/lib/mysql at boot.
