Building a High-Availability Cluster with the RHCS Suite

I. RHCS provides two types of clusters: high availability and load balancing
1. High availability: application/service failover - a cluster of n server nodes provides failover for critical applications and services
2. Load balancing: IP load balancing - distributes incoming IP network requests across a group of servers
[screenshot]

server2 is ha1 and server3 is ha2; the two schedulers act as the high-availability nodes, and all cluster hosts must have identical configuration.
server2: 172.25.53.2
server3: 172.25.53.3

Preparation

Configure the high-availability yum repositories
[root@server2 ~]# vim /etc/yum.repos.d/rhel-source.repo 

[rhel-source]
name=Red Hat Enterprise Linux $releasever - $basearch - Source
baseurl=http://172.25.53.250/rhel6.5
enabled=1
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-release


[HighAvailability]
name=HighAvailability
baseurl=http://172.25.53.250/rhel6.5/HighAvailability
gpgcheck=0

[LoadBalancer]
name=LoadBalancer
baseurl=http://172.25.53.250/rhel6.5/LoadBalancer
gpgcheck=0

[ResilientStorage]
name=ResilientStorage
baseurl=http://172.25.53.250/rhel6.5/ResilientStorage
gpgcheck=0

[ScalableFileSystem]
name=ScalableFileSystem
baseurl=http://172.25.53.250/rhel6.5/ScalableFileSystem
gpgcheck=0



[root@server2 ~]# yum clean all
[root@server2 ~]# yum repolist
HighAvailability                  HighAvailability                                                     56
LoadBalancer                      LoadBalancer                                                          4
ResilientStorage                  ResilientStorage                                                     62
ScalableFileSystem                ScalableFileSystem                                                    7
rhel-source                       Red Hat Enterprise Linux 6Server - x86_64 - Source                3,690
repolist: 3,819
Copy the high-availability yum repository file and the nginx installation to server3
[root@server2 ~]# scp /etc/yum.repos.d/rhel-source.repo server3:/etc/yum.repos.d/
[root@server2 ~]# scp -r /usr/local/nginx/ server3:/usr/local/
Install luci and ricci, enable them at boot, and set a password for the ricci user
[root@server2 ~]# yum install -y luci ricci
[root@server2 ~]# chkconfig luci on
[root@server2 ~]# chkconfig ricci on
[root@server2 ~]# passwd ricci
[root@server2 ~]# /etc/init.d/luci start
[root@server2 ~]# netstat -antlp | grep ricci
tcp        0      0 :::11111                    :::*                        LISTEN      1250/ricci   
Sync the high-availability yum repositories on server3 as well
[root@server3 ~]# yum clean all
[root@server3 ~]# yum repolist

repo id                           repo name                                                         status
HighAvailability                  HighAvailability                                                     56
LoadBalancer                      LoadBalancer                                                          4
ResilientStorage                  ResilientStorage                                                     62
ScalableFileSystem                ScalableFileSystem                                                    7
rhel-source                       Red Hat Enterprise Linux 6Server - x86_64 - Source                3,690
repolist: 3,819
Create a symlink so the nginx command is on the PATH
[root@server3 ~]# ln -s /usr/local/nginx/sbin/nginx /usr/local/sbin/
Create the nginx user, the same as on server2
[root@server3 ~]# useradd -M -d /usr/local/nginx/ nginx
[root@server3 ~]# nginx 
[root@server3 ~]# netstat -antlp | grep nginx
tcp        0      0 0.0.0.0:80                  0.0.0.0:*                   LISTEN      1209/nginx          
[root@server3 ~]# yum install -y ricci
[root@server3 ~]# /etc/init.d/ricci start
[root@server3 ~]# netstat -antlp | grep ricci
tcp        0      0 :::11111                    :::*                        LISTEN      1338/ricci   

Name resolution is required: the test machine (the local physical host), server2, and server3 must all have the same hosts entries

/etc/hosts
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
172.25.53.1 server1
172.25.53.2 server2
172.25.53.3 server3
172.25.53.4 server4
172.25.53.5 server5
172.25.53.6 server6
172.25.53.7 server7

Open the RHCS management interface:

In a browser, visit https://server2:8084. luci serves its web interface on port 8084 and communicates with the ricci agent on port 11111 of each cluster node.
The browser warns that the connection is not secure; accept the warning and continue.
[screenshot]
Add a security exception for the certificate
[screenshot]
Log in with server2's root password
[screenshot]
[screenshot]
Create a new cluster and add the cluster nodes
[screenshot]

1) Use the Same Password for All Nodes: when checked, all cluster nodes use the same password, namely the password just set for the ricci user
2) Download Packages: automatically download the required packages
3) Use Locally Installed Packages: install from packages already present on the nodes
4) Reboot Nodes Before Joining Cluster: the nodes are rebooted after the cluster is created, which is why luci and ricci were set to start at boot beforehand
5) Enable Shared Storage Support: enable clustered shared-storage support
Since it is best not to use digits in the cluster name, it has been changed to hello_ha below

[screenshot]
[screenshot]
Add a failover domain to the cluster
Prioritized: enable node priorities (the lower the number, the higher the priority)
Restricted: the service may run only on the nodes in this domain
No Failback: when the failed node becomes available again, the service does not move back to it by priority (a sketch of the resulting /etc/cluster/cluster.conf follows)
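For reference, a minimal sketch of what luci writes to /etc/cluster/cluster.conf after these steps, assuming the cluster name hello_ha, the nodes server2 and server3, and a failover domain here called webfail (the domain name and priorities are illustrative, not read from the screenshots):

<?xml version="1.0"?>
<cluster config_version="2" name="hello_ha">
    <clusternodes>
        <clusternode name="server2" nodeid="1"/>
        <clusternode name="server3" nodeid="2"/>
    </clusternodes>
    <rm>
        <failoverdomains>
            <!-- ordered="1": use priorities; restricted="1": run only on listed nodes;
                 nofailback="1": do not move the service back when a failed node recovers -->
            <failoverdomain name="webfail" ordered="1" restricted="1" nofailback="1">
                <failoverdomainnode name="server2" priority="1"/>
                <failoverdomainnode name="server3" priority="2"/>
            </failoverdomain>
        </failoverdomains>
    </rm>
</cluster>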

Create the /etc/init.d/nginx script on both server2 and server3
[root@server2 ~]# cat /etc/init.d/nginx 
#!/bin/bash
# nginx Startup script for the Nginx HTTP Server
# it is v.0.0.2 version.
# chkconfig: - 85 15
# description: Nginx is a high-performance web and proxy server.
#              It has a lot of features, but it's not for everyone.
# processname: nginx
# pidfile: /var/run/nginx.pid
# config: /usr/local/nginx/conf/nginx.conf
nginxd=/usr/local/nginx/sbin/nginx
nginx_config=/usr/local/nginx/conf/nginx.conf
nginx_pid=/var/run/nginx.pid
RETVAL=0
prog="nginx"
# Source function library.
. /etc/rc.d/init.d/functions
# Source networking configuration.
. /etc/sysconfig/network
# Check that networking is up.
[ ${NETWORKING} = "no" ] && exit 0
[ -x $nginxd ] || exit 0
# Start nginx daemons functions.
start() {
if [ -e $nginx_pid ];then
   echo "nginx already running...."
   exit 1
fi
   echo -n $"Starting $prog: "
   daemon $nginxd -c ${nginx_config}
   RETVAL=$?
   echo
   [ $RETVAL = 0 ] && touch /var/lock/subsys/nginx
   return $RETVAL
}
# Stop nginx daemons functions.
stop() {
        echo -n $"Stopping $prog: "
        killproc $nginxd
        RETVAL=$?
        echo
        [ $RETVAL = 0 ] && rm -f /var/lock/subsys/nginx /var/run/nginx.pid
}
# reload nginx service functions.
reload() {
    echo -n $"Reloading $prog: "
    #kill -HUP `cat ${nginx_pid}`
    killproc $nginxd -HUP
    RETVAL=$?
    echo
}
# See how we were called.
case "$1" in
start)
        start
        ;;
stop)
        stop
        ;;
reload)
        reload
        ;;
restart)
        stop
        start
        ;;
status)
        status $prog
        RETVAL=$?
        ;;

*)
        echo $"Usage: $prog {start|stop|restart|reload|status|help}"
        exit 1
esac
exit $RETVAL
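After saving the script on both nodes, make it executable and exercise it by hand before handing it over to rgmanager; a short check, assuming nginx is not already running when you start it:

[root@server2 ~]# chmod +x /etc/init.d/nginx
[root@server2 ~]# /etc/init.d/nginx start
[root@server2 ~]# /etc/init.d/nginx status
[root@server2 ~]# /etc/init.d/nginx stop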

Configure global cluster resources
First add the VIP
[screenshot]
Then add the script resource (make sure the script exists on both nodes); the corresponding cluster.conf entries are sketched below
[screenshot]
[screenshot]
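A rough sketch of the matching <resources> section inside <rm> in /etc/cluster/cluster.conf, assuming the VIP 172.25.53.100 and the /etc/init.d/nginx script above (the /24 prefix and monitor_link setting are assumptions):

    <resources>
        <!-- floating IP that moves with the service between nodes -->
        <ip address="172.25.53.100/24" monitor_link="on"/>
        <!-- LSB init script used to start/stop/status nginx -->
        <script file="/etc/init.d/nginx" name="nginx"/>
    </resources>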

Add the cluster service group to the cluster (a command-line check is sketched after the screenshots)

[screenshot]
[screenshot]
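Once the service group (named nginx, matching the clusvcadm commands used later) is bound to the VIP and script resources, it can be checked and moved from either node:

[root@server2 ~]# clustat                           # show cluster members and where service:nginx runs
[root@server2 ~]# clusvcadm -r nginx -m server3     # relocate the service to server3
[root@server2 ~]# clusvcadm -d nginx                # disable (stop) the service
[root@server2 ~]# clusvcadm -e nginx                # enable (start) the service again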

Start two more virtual machines, server4 and server5,
start httpd on them,
and put some identifying content in their document roots.
Then on server2 and server3
make the following change to the main nginx configuration file:


vim /usr/local/nginx/conf/nginx.conf

http {
        upstream westos {
        #ip_hash;
        server 172.25.53.5:80 weight=2;
        server 172.25.53.4:80;
        server 127.0.0.1:80 backup;
        }
    include       mime.types;
    default_type  application/octet-stream;
Restart the nginx service for the change to take effect (a sketch of the missing proxy section and the reload commands follows).
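The excerpt above only defines the upstream pool; for nginx to actually forward requests to it, the server block inside http must proxy to westos. A minimal sketch of the assumed server block (server_name taken from the hosts entry below), followed by the reload commands:

    server {
        listen       80;
        server_name  www.westos.org;

        location / {
            # hand requests to the upstream pool defined above
            proxy_pass http://westos;
        }
    }

# check the configuration, then reload without dropping connections
[root@server2 ~]# nginx -t && nginx -s reload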



On the client, resolve www.westos.org to the VIP, i.e. 172.25.53.100


[root@foundation53 Desktop]# cat /etc/hosts
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
#nameserver 114.114.114.114
172.25.53.1 server1
172.25.53.2 server2
172.25.53.3 server3
172.25.53.4 server4
172.25.53.5 server5
172.25.53.6 server6
172.25.53.7 server7
172.25.53.100   www.westos.org bbs.westos.org westos.org



Access the site in a browser; the result is shown below (a curl check from the client follows).

[screenshot]
[screenshot]
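The load balancing can also be checked from the client with curl, assuming server4 and server5 publish pages that identify themselves:

[root@foundation53 ~]# for i in $(seq 1 6); do curl -s www.westos.org; done
# with weight=2 on 172.25.53.5, roughly two responses from server5 are expected
# for every one from server4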

If high availability is configured with keepalived instead, the keepalived instances talk to each other over the VRRP protocol; if one of them dies because of a network or similar problem,
data may be lost.

Why use a fence device?
If one node in the cluster stops communicating, the remaining nodes must be able to cut the failed node off from the shared resources it was using (such as shared storage). The failed node cannot do this itself, because at that point it may no longer be responding (for example, it may be hung), so an external mechanism is required. This approach is called fencing, performed by a fence agent.

Without fencing we have no way of knowing whether a node that lost contact has actually released the resources it was using. If no fence agent (or device) is configured, the system may wrongly assume the node has released its resources, which can corrupt or lose data. Without a fence device, data integrity cannot be guaranteed and the cluster configuration is unsupported.

While a fencing action is in progress, no other cluster operation is allowed, including failing over services and taking new locks on a GFS or GFS2 filesystem. The cluster cannot return to normal operation until the fencing action completes, or until the fenced node has rebooted and rejoined the cluster.

A fence agent (or device) is an external device the cluster can use to cut an abnormal node off from shared storage (or to hard-reset that node).

How is the fence device implemented?
We use libvirt to build a simulated fence device: libvirt can power virtual machines on and off.

Setting up the fence device

On the local physical host:
[root@foundation53 ~]# yum search fence

[root@foundation53 ~]# yum install fence-virtd.x86_64 fence-virtd-libvirt.x86_64 fence-virtd-multicast.x86_64 -y
    fence-virtd.x86_64              the fence_virtd daemon that emulates a fence device
    fence-virtd-libvirt.x86_64      the libvirt backend, so libvirt carries out the fencing
    fence-virtd-multicast.x86_64    the multicast listener that receives fencing requests

Create the fence configuration


Run fence_virtd -c
as shown below.
Do not just press Enter at every prompt: check each default, accept it if it is correct, and change it if it is not.
[root@foundation53 Desktop]# fence_virtd -c
Module search path [/usr/lib64/fence-virt]: 

Available backends:
    libvirt 0.1
Available listeners:
    multicast 1.2

Listener modules are responsible for accepting requests
from fencing clients.

Listener module [multicast]: 

The multicast listener module is designed for use environments
where the guests and hosts may communicate over a network using
multicast.

The multicast address is the address that a client will use to
send fencing requests to fence_virtd.

Multicast IP Address [225.0.0.12]: 

Using ipv4 as family.

Multicast IP Port [1229]: 

Setting a preferred interface causes fence_virtd to listen only
on that interface.  Normally, it listens on all interfaces.
In environments where the virtual machines are using the host
machine as a gateway, this *must* be set (typically to virbr0).
Set to 'none' for no interface.

Interface [virbr0]: br0

The key file is the shared key information which is used to
authenticate fencing requests.  The contents of this file must
be distributed to each physical host and virtual machine within
a cluster.

Key File [/etc/cluster/fence_xvm.key]: 

Backend modules are responsible for routing requests to
the appropriate hypervisor or management layer.

Backend module [libvirt]: 

Configuration complete.

=== Begin Configuration ===
fence_virtd {
    listener = "multicast";
    backend = "libvirt";
    module_path = "/usr/lib64/fence-virt";
}

listeners {
    multicast {
        key_file = "/etc/cluster/fence_xvm.key";
        address = "225.0.0.12";
        interface = "br0";
        family = "ipv4";
        port = "1229";
    }

}

backends {
    libvirt {
        uri = "qemu:///system";
    }

}

=== End Configuration ===
Replace /etc/fence_virt.conf with the above [y/N]? y
[root@foundation53 Desktop]# systemctl restart fence_virtd.service 
[root@foundation53 Desktop]# netstat -anulp | grep:1229
bash: grep:1229: command not found...
[root@foundation53 Desktop]# netstat -anulp | grep :1229
udp        0      0 0.0.0.0:1229            0.0.0.0:*                           12213/fence_virtd   
[root@foundation53 Desktop]# cd /etc/cluster/
[root@foundation53 cluster]# ls
fence_xvm.key
The key file already exists; it can be regenerated (overwritten) with fresh random data:
[root@foundation53 cluster]# dd if=/dev/urandom of=/etc/cluster/fence_xvm.key bs=128 count=1
1+0 records in
1+0 records out
128 bytes (128 B) copied, 0.000179386 s, 714 kB/s
[root@foundation53 cluster]# systemctl restart fence_virtd.service 
[root@foundation53 cluster]# ls
fence_xvm.key
[root@foundation53 cluster]# scp fence_xvm.key root@server2:/etc/cluster/
[root@foundation53 cluster]# scp fence_xvm.key root@server3:/etc/cluster/

On server2 and server3, check that the key has arrived (a checksum comparison is sketched below)

[screenshot]
[screenshot]
[screenshot]
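A quick way to confirm the key is identical on the host and on both nodes is to compare checksums (the checksum value itself will differ on every system):

# run on foundation53, server2 and server3 and compare the output
md5sum /etc/cluster/fence_xvm.key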

Note: fence_xvm.key is used to authenticate fencing requests against the cluster nodes.

Note: the following steps must be completed on both server2 and server3.

Add the fence device in the web interface:
[screenshot]
Select Add Fence Instance
[screenshot]

Be sure to add the fence instance on both server2 and server3.
Each cluster node name is mapped to the corresponding virtual machine through the VM's UUID (the Domain field of the fence instance); see the note below.
[root@server2 ~]# blkid 
/dev/vda1: UUID="c642d64c-b392-4e97-9c21-b5a52690b285" TYPE="ext4" 
/dev/vda2: UUID="FEaXmj-wkjT-X4s3-XFJx-BYGr-A1RG-YUBu06" TYPE="LVM2_member" 
/dev/mapper/VolGroup-lv_root: UUID="4a18c6cf-630e-4203-932a-1e154210e8b1" TYPE="ext4" 
/dev/mapper/VolGroup-lv_swap: UUID="e51c93b6-957d-4de7-b563-3ded9b90fc20" TYPE="swap"
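Note that fence_xvm identifies a guest by its libvirt domain name or domain UUID on the physical host, not by the filesystem UUIDs that blkid prints inside the guest. A sketch of reading those values on foundation53, assuming the guest domains there are actually named server2 and server3 (adjust to the names shown by virsh list --all):

[root@foundation53 ~]# virsh list --all          # confirm the real domain names of the guests
[root@foundation53 ~]# virsh domuuid server2
[root@foundation53 ~]# virsh domuuid server3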

[screenshot]
[screenshot]
[screenshot]

[root@server3 ~]# clustat 
Cluster Status for hello_ha @ Thu Aug  2 14:16:47 2018
Member Status: Quorate

 Member Name                             ID   Status
 ------ ----                             ---- ------
 server2                                     1 Online, rgmanager
 server3                                     2 Online, Local, rgmanager

 Service Name                   Owner (Last)                   State         
 ------- ----                   ----- ------                   -----         
 service:nginx                  server2                        started       
Fence server2; server3 takes over the service by itself:
[root@server3 ~]# fence_node server2
fence server2 success
[root@server3 ~]# clustat 
Cluster Status for hello_ha @ Thu Aug  2 14:17:49 2018
Member Status: Quorate

 Member Name                             ID   Status
 ------ ----                             ---- ------
 server2                                     1 Online
 server3                                     2 Online, Local, rgmanager

 Service Name                   Owner (Last)                   State         
 ------- ----                   ----- ------                   -----         
 service:nginx                  server3                        started  

Adding shared storage

On the back-end server (server4),

[screenshot]

add an 8 GB virtual disk.


[root@server4 html]# df
Filesystem                   1K-blocks   Used Available Use% Mounted on
/dev/mapper/VolGroup-lv_root  19134332 935228  17227124   6% /
tmpfs                           251124      0    251124   0% /dev/shm
/dev/vda1                       495844  33455    436789   8% /boot
[root@server4 html]# fdisk -l



Disk /dev/vdb: 8589 MB, 8589934592 bytes
16 heads, 63 sectors/track, 16644 cylinders
Units = cylinders of 1008 * 512 = 516096 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000
The back-end server now shows /dev/vdb as added.

On the back-end server, install the target software:
[root@server4 html]# yum install -y scsi-*

On the cluster servers (do this on both server2 and server3; run the discovery and login after the target below has been configured and tgtd has been started on server4):
[root@server2 ~]# yum install -y iscsi-*
[root@server2 ~]# iscsiadm -m discovery -t st -p 172.25.53.4
[root@server2 ~]# iscsiadm -m node -l

On the back-end server, configure the iSCSI target:
[root@server4 ~]# vim /etc/tgt/targets.conf 

[screenshot]
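The screenshot shows the target definition. A minimal sketch of what /etc/tgt/targets.conf is assumed to contain, consistent with the tgt-admin -s output below (the IQN string is illustrative):

<target iqn.2018-08.com.example:server4.disk1>
    # export the whole 8 GB virtual disk
    backing-store /dev/vdb
    # only the two cluster nodes may log in
    initiator-address 172.25.53.2
    initiator-address 172.25.53.3
</target>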

[root@server4 ~]# /etc/init.d/tgtd start
Starting SCSI target daemon:                               [  OK  ]
[root@server4 ~]# tgt-admin -s
            Backing store path: /dev/vdb
            Backing store flags: 
    Account information:
    ACL information:
        172.25.53.2
        172.25.53.3

[root@server2 ~]# fdisk -l



Disk /dev/sda: 8589 MB, 8589934592 bytes
64 heads, 32 sectors/track, 8192 cylinders
Units = cylinders of 2048 * 512 = 1048576 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000

[root@server2 ~]# /etc/init.d/clvmd status
clvmd (pid  1262) is running...
Clustered Volume Groups: (none)
Active clustered Logical Volumes: (none)
[root@server2 ~]# lvs
  LV      VG       Attr       LSize   Pool Origin Data%  Move Log Cpy%Sync Convert
  lv_root VolGroup -wi-ao----  18.54g                                             
  lv_swap VolGroup -wi-ao---- 992.00m   
[root@server2 ~]# pvcreate /dev/sda
  Physical volume "/dev/sda" successfully created
[root@server2 ~]# vgcreate clustervg /dev/sda
  Clustered volume group "clustervg" successfully created
[root@server2 ~]# lvcreate -L 4G -n demo clustervg
  Logical volume "demo" created
[root@server2 ~]# lvs
  LV      VG        Attr       LSize   Pool Origin Data%  Move Log Cpy%Sync Convert
  lv_root VolGroup  -wi-ao----  18.54g                                             
  lv_swap VolGroup  -wi-ao---- 992.00m                                             
  demo    clustervg -wi-a-----   4.00g 




Run pvs on server3: after refreshing, the new physical volume appears, showing that the LVM metadata is synchronized across the cluster
[root@server3 ~]# pvs
  PV         VG       Fmt  Attr PSize  PFree
  /dev/vda2  VolGroup lvm2 a--  19.51g    0 
[root@server3 ~]# pvs
  PV         VG       Fmt  Attr PSize  PFree
  /dev/sda            lvm2 a--   8.00g 8.00g
  /dev/vda2  VolGroup lvm2 a--  19.51g    0 



[root@server2 ~]# lvs
  LV      VG        Attr       LSize   Pool Origin Data%  Move Log Cpy%Sync Convert
  lv_root VolGroup  -wi-ao----  18.54g                                             
  lv_swap VolGroup  -wi-ao---- 992.00m                                             
  demo    clustervg -wi-a-----   4.00g                                             


Refresh on server3

[root@server3 ~]# vgs
  VG        #PV #LV #SN Attr   VSize  VFree
  VolGroup    1   2   0 wz--n- 19.51g    0 
  clustervg   1   0   0 wz--nc  8.00g 8.00g
[root@server2 ~]# mkfs.ext4 /dev/clustervg/demo 
[root@server2 ~]# clusvcadm -d nginx


Remove the nginx service group and script resource in luci

[screenshot]
[screenshot]

On server2 and server3, install the database, then mount, start, and stop it to make sure it runs on both nodes.

[root@server2 ~]# yum install -y mysql-server
Mount the shared volume as the MySQL data directory
[root@server2 mysql]# mount /dev/clustervg/demo /var/lib/mysql/
[root@server2 mysql]# chown mysql.mysql /var/lib/mysql/
Start the database
[root@server2 mysql]# /etc/init.d/mysqld start
Stop the database
[root@server2 ~]# /etc/init.d/mysqld stop


Create the filesystem resource for the database

[screenshot]
Create the database script resource
[screenshot]
Add the database service group
[screenshot]

Make sure the resources are added to the service in order: IP address, filesystem, then script (a sketch of the resulting service definition follows)
[screenshot]
[screenshot]
[screenshot]
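A rough sketch of how this ordered service might appear in /etc/cluster/cluster.conf, assuming the service name mysql, the ext4 filesystem resource created above, and the failover domain sketched earlier (the resource names and the domain name are illustrative):

<service domain="webfail" name="mysql" recovery="relocate">
    <!-- resources are started top-down and stopped in reverse: IP, then filesystem, then script -->
    <ip address="172.25.53.100/24" monitor_link="on"/>
    <fs device="/dev/clustervg/demo" mountpoint="/var/lib/mysql" fstype="ext4" name="mysqldata" force_unmount="1"/>
    <script file="/etc/init.d/mysqld" name="mysqld"/>
</service>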

[root@server2 ~]# clustat
Cluster Status for hello_ha @ Thu Aug  2 16:23:55 2018
Member Status: Quorate

 Member Name                             ID   Status
 ------ ----                             ---- ------
 server2                                     1 Online, Local, rgmanager
 server3                                     2 Online, rgmanager

 Service Name                   Owner (Last)                   State         
 ------- ----                   ----- ------                   -----         
 service:mysql                  server2                        started       

Relocate the service to the other node (in luci)
[screenshot]

[root@server2 ~]# clustat
Cluster Status for hello_ha @ Thu Aug  2 16:24:29 2018
Member Status: Quorate

 Member Name                             ID   Status
 ------ ----                             ---- ------
 server2                                     1 Online, Local, rgmanager
 server3                                     2 Online, rgmanager

 Service Name                   Owner (Last)                   State         
 ------- ----                   ----- ------                   -----         
 service:mysql                  server2                        stopping      
[root@server2 ~]# clustat
Cluster Status for hello_ha @ Thu Aug  2 16:24:34 2018
Member Status: Quorate

 Member Name                             ID   Status
 ------ ----                             ---- ------
 server2                                     1 Online, Local, rgmanager
 server3                                     2 Online, rgmanager

 Service Name                   Owner (Last)                   State         
 ------- ----                   ----- ------                   -----         
 service:mysql                  server3                        started 
[root@server3 ~]# clustat 
Cluster Status for hello_ha @ Fri Aug  3 09:27:24 2018
Member Status: Quorate

 Member Name                             ID   Status
 ------ ----                             ---- ------
 server2                                     1 Online, rgmanager
 server3                                     2 Online, Local, rgmanager

 Service Name                   Owner (Last)                   State         
 ------- ----                   ----- ------                   -----         
 service:mysql                  server3                        started       
[root@server3 ~]# clusvcadm -d mysql
Local machine disabling service:mysql...Success
[root@server3 ~]# df
Filesystem                   1K-blocks    Used Available Use% Mounted on
/dev/mapper/VolGroup-lv_root  19134332 1157576  17004776   7% /
tmpfs                           251124   25656    225468  11% /dev/shm
/dev/vda1                       495844   33455    436789   8% /boot

Remove the filesystem resource from the service; the result is shown below
[screenshot]
[screenshot]

[root@server2 ~]# df
Filesystem                   1K-blocks    Used Available Use% Mounted on
/dev/mapper/VolGroup-lv_root  19134332 1237544  16924808   7% /
tmpfs                           510188   25656    484532   6% /dev/shm
/dev/vda1                       495844   33455    436789   8% /boot
[root@server2 ~]# lvs
  LV      VG        Attr       LSize   Pool Origin Data%  Move Log Cpy%Sync Convert
  lv_root VolGroup  -wi-ao----  18.54g                                             
  lv_swap VolGroup  -wi-ao---- 992.00m                                             
  demo    clustervg -wi-a-----   4.00g                                             
[root@server2 ~]# mount /dev/clustervg/demo /var/lib/mysql/
[root@server2 ~]# df
Filesystem                   1K-blocks    Used Available Use% Mounted on
/dev/mapper/VolGroup-lv_root  19134332 1237544  16924808   7% /
tmpfs                           510188   25656    484532   6% /dev/shm
/dev/vda1                       495844   33455    436789   8% /boot
/dev/mapper/clustervg-demo     4128448  139256   3779480   4% /var/lib/mysql
[root@server2 ~]# vgs
  VG        #PV #LV #SN Attr   VSize  VFree
  VolGroup    1   2   0 wz--n- 19.51g    0 
  clustervg   1   1   0 wz--nc  8.00g 4.00g
[root@server2 ~]# lvextend -l +1023 /dev/clustervg/demo 
  Extending logical volume demo to 8.00 GiB
  Logical volume demo successfully resized
[root@server3 ~]# pvs
  PV         VG        Fmt  Attr PSize  PFree
  /dev/sda   clustervg lvm2 a--   8.00g 4.00g
  /dev/vda2  VolGroup  lvm2 a--  19.51g    0 
[root@server3 ~]# vgs
  VG        #PV #LV #SN Attr   VSize  VFree
  VolGroup    1   2   0 wz--n- 19.51g    0 
  clustervg   1   1   0 wz--nc  8.00g 4.00g
[root@server2 ~]# vgs
  VG        #PV #LV #SN Attr   VSize  VFree
  VolGroup    1   2   0 wz--n- 19.51g    0 
  clustervg   1   1   0 wz--nc  8.00g    0 
[root@server2 ~]# lvs
  LV      VG        Attr       LSize   Pool Origin Data%  Move Log Cpy%Sync Convert
  lv_root VolGroup  -wi-ao----  18.54g                                             
  lv_swap VolGroup  -wi-ao---- 992.00m                                             
  demo    clustervg -wi-ao----   8.00g                                             
[root@server2 ~]# resize2fs /dev/clustervg/demo 
resize2fs 1.41.12 (17-May-2010)
Filesystem at /dev/clustervg/demo is mounted on /var/lib/mysql; on-line resizing required
old desc_blocks = 1, new_desc_blocks = 1
Performing an on-line resize of /dev/clustervg/demo to 2096128 (4k) blocks.
The filesystem on /dev/clustervg/demo is now 2096128 blocks long.
[root@server2 ~]# df
Filesystem                   1K-blocks    Used Available Use% Mounted on
/dev/mapper/VolGroup-lv_root  19134332 1237548  16924804   7% /
tmpfs                           510188   25656    484532   6% /dev/shm
/dev/vda1                       495844   33455    436789   8% /boot
/dev/mapper/clustervg-demo     8252856  140276   7693408   2% /var/lib/mysql
[root@server2 ~]# df -h
Filesystem                    Size  Used Avail Use% Mounted on
/dev/mapper/VolGroup-lv_root   19G  1.2G   17G   7% /
tmpfs                         499M   26M  474M   6% /dev/shm
/dev/vda1                     485M   33M  427M   8% /boot
/dev/mapper/clustervg-demo    7.9G  137M  7.4G   2% /var/lib/mysql
[root@server3 ~]# mount /dev/clustervg/demo /var/lib/mysql/
[root@server3 ~]# df
Filesystem                   1K-blocks    Used Available Use% Mounted on
/dev/mapper/VolGroup-lv_root  19134332 1157600  17004752   7% /
tmpfs                           251124   25656    225468  11% /dev/shm
/dev/vda1                       495844   33455    436789   8% /boot
/dev/mapper/clustervg-demo     8252856  140276   7693408   2% /var/lib/mysql
[root@server3 ~]# df -h
Filesystem                    Size  Used Avail Use% Mounted on
/dev/mapper/VolGroup-lv_root   19G  1.2G   17G   7% /
tmpfs                         246M   26M  221M  11% /dev/shm
/dev/vda1                     485M   33M  427M   8% /boot
/dev/mapper/clustervg-demo    7.9G  137M  7.4G   2% /var/lib/mysql
[root@server2 ~]# ls
anaconda-ks.cfg     nginx-1.10.1         nginx-1.14.0.tar.gz
install.log         nginx-1.10.1.tar.gz  nginx-sticky-module-ng
install.log.syslog  nginx-1.14.0         nginx-sticky-module-ng.tar.gz
[root@server2 ~]# cd /var/lib/mysql/
[root@server2 mysql]# ls
lost+found
[root@server2 mysql]# cp /etc/passwd .
[root@server2 mysql]# ls
lost+found  passwd
[root@server2 mysql]# ll
total 20
drwx------ 2 root root 16384 Aug  3 09:40 lost+found
-rw-r--r-- 1 root root  1433 Aug  3 10:08 passwd
[root@server3 ~]# cd /var/lib/mysql/
[root@server3 mysql]# ls
ls: cannot access passwd: Input/output error
lost+found  passwd
[root@server3 mysql]# ll
ls: cannot access passwd: Input/output error
total 16
drwx------ 2 root root 16384 Aug  3 09:40 lost+found
-????????? ? ?    ?        ?            ? passwd
[root@server3 mysql]# cd
[root@server3 ~]# umount /var/lib/mysql/
[root@server3 ~]# df
Filesystem                   1K-blocks    Used Available Use% Mounted on
/dev/mapper/VolGroup-lv_root  19134332 1157612  17004740   7% /
tmpfs                           251124   25656    225468  11% /dev/shm
/dev/vda1                       495844   33455    436789   8% /boot
[root@server3 ~]# cd /var/lib/mysql/
[root@server3 mysql]# ls
ibdata1  ib_logfile0  ib_logfile1  mysql  test
[root@server3 mysql]# mount /dev/clustervg/demo /var/lib/mysql/
[root@server3 mysql]# cd /var/lib/mysql/
[root@server3 mysql]# ls
lost+found  passwd

Because ext4 is a local filesystem with no cluster-wide locking, one node does not see changes made by the other until it remounts, and mounting the same ext4 volume on both nodes at once risks exactly the corruption shown by the Input/output error above; this is why we switch to GFS2 below. Unmount the volume on both nodes:
[root@server3 ~]# umount /var/lib/mysql/

[root@server2 ~]# clustat
Cluster Status for hello_ha @ Fri Aug  3 10:16:02 2018
Member Status: Quorate

 Member Name                             ID   Status
 ------ ----                             ---- ------
 server2                                     1 Online, Local, rgmanager
 server3                                     2 Online, rgmanager

 Service Name                   Owner (Last)                   State         
 ------- ----                   ----- ------                   -----         
 service:mysql                  (server3)                      disabled      
[root@server2 ~]# lvs
  LV      VG        Attr       LSize   Pool Origin Data%  Move Log Cpy%Sync Convert
  lv_root VolGroup  -wi-ao----  18.54g                                             
  lv_swap VolGroup  -wi-ao---- 992.00m                                             
  demo    clustervg -wi-a-----   8.00g                                             
[root@server2 ~]# lvremove /dev/clustervg/demo 
Do you really want to remove active clustered logical volume demo? [y/n]: y
  Logical volume "demo" successfully removed
[root@server2 ~]# lvcreate -L 4G -n demo clustervg
  Logical volume "demo" created
[root@server2 ~]# mkfs.gfs2 -j 3 -p lock_dlm -t hello_ha:mygfs2
no device specified (try -h for help)
[root@server2 ~]# mkfs.gfs2 -j 3 -p lock_dlm -t hello_ha:mygfs2 /dev/clustervg/demo 
This will destroy any data on /dev/clustervg/demo.
It appears to contain: symbolic link to `../dm-2'

Are you sure you want to proceed? [y/n] y

Device:                    /dev/clustervg/demo
Blocksize:                 4096
Device Size                4.00 GB (1048576 blocks)
Filesystem Size:           4.00 GB (1048575 blocks)
Journals:                  3
Resource Groups:           16
Locking Protocol:          "lock_dlm"
Lock Table:                "hello_ha:mygfs2"
UUID:                      f3cf1d58-cd65-6b5f-0657-3e61a66d6e97

[root@server2 ~]# mount /dev/clustervg/demo /var/lib/mysql/
[root@server2 ~]# cd /var/lib/mysql/
[root@server2 mysql]# ls
[root@server2 mysql]# chown mysql.mysql .
[root@server2 mysql]# ll
total 0
[root@server2 mysql]# ll -d .
drwxr-xr-x 2 mysql mysql 3864 Aug  3 10:20 .

[root@server3 ~]# clustat 
Cluster Status for hello_ha @ Fri Aug  3 10:19:14 2018
Member Status: Quorate

 Member Name                             ID   Status
 ------ ----                             ---- ------
 server2                                     1 Online, rgmanager
 server3                                     2 Online, Local, rgmanager

 Service Name                   Owner (Last)                   State         
 ------- ----                   ----- ------                   -----         
 service:mysql                  (server3)                      disabled      
[root@server3 ~]# mount /dev/clustervg/demo /var/lib/mysql/
[root@server3 ~]# ll -d
dr-xr-x---. 3 root root 4096 Aug  2 14:21 .
[root@server3 ~]# ll -d /var/lib/mysql/
drwxr-xr-x 2 mysql mysql 3864 Aug  3 10:20 /var/lib/mysql/
[root@server3 ~]# cd /var/lib/mysql/
[root@server3 mysql]# ls
[root@server2 mysql]# /etc/init.d/mysqld start
[root@server2 mysql]# ls
ibdata1  ib_logfile0  ib_logfile1  mysql  mysql.sock  test
[root@server2 mysql]# pwd
/var/lib/mysql
[root@server2 mysql]# /etc/init.d/mysqld stop
Stopping mysqld:                                           [  OK  ]
[root@server2 mysql]# ls
ibdata1  ib_logfile0  ib_logfile1  mysql  test
[root@server3 mysql]# gfs2_tool sb /dev/clustervg/demo all
  mh_magic = 0x01161970
  mh_type = 1
  mh_format = 100
  sb_fs_format = 1801
  sb_multihost_format = 1900
  sb_bsize = 4096
  sb_bsize_shift = 12
  no_formal_ino = 2
  no_addr = 23
  no_formal_ino = 1
  no_addr = 22
  sb_lockproto = lock_dlm
  sb_locktable = hello_ha:mygfs2
  uuid = f3cf1d58-cd65-6b5f-0657-3e61a66d6e97
[root@server3 mysql]# gfs2_tool journals /dev/clustervg/demo 
journal2 - 128MB
journal1 - 128MB
journal0 - 128MB
3 journal(s) found.
[root@server3 mysql]# df -h
Filesystem                    Size  Used Avail Use% Mounted on
/dev/mapper/VolGroup-lv_root   19G  1.2G   17G   7% /
tmpfs                         246M   32M  215M  13% /dev/shm
/dev/vda1                     485M   33M  427M   8% /boot
/dev/mapper/clustervg-demo    4.0G  410M  3.7G  10% /var/lib/mysql
[root@server2 mysql]# vim /etc/fstab
[root@server2 mysql]# df
Filesystem                   1K-blocks    Used Available Use% Mounted on
/dev/mapper/VolGroup-lv_root  19134332 1237564  16924788   7% /
tmpfs                           510188   31816    478372   7% /dev/shm
/dev/vda1                       495844   33455    436789   8% /boot
/dev/mapper/clustervg-demo     4193856  418884   3774972  10% /var/lib/mysql
[root@server2 mysql]# cd
[root@server2 ~]# umount /var/lib/mysql/
[root@server2 ~]# df
Filesystem                   1K-blocks    Used Available Use% Mounted on
/dev/mapper/VolGroup-lv_root  19134332 1237560  16924792   7% /
tmpfs                           510188   25656    484532   6% /dev/shm
/dev/vda1                       495844   33455    436789   8% /boot
[root@server2 ~]# mount -a 
[root@server2 ~]# df
Filesystem                   1K-blocks    Used Available Use% Mounted on
/dev/mapper/VolGroup-lv_root  19134332 1237560  16924792   7% /
tmpfs                           510188   31816    478372   7% /dev/shm
/dev/vda1                       495844   33455    436789   8% /boot
/dev/mapper/clustervg-demo     4193856  418884   3774972  10% /var/lib/mysql
[root@server2 ~]# tail -n 1 /etc/fstab    (add this entry on both server2 and server3 so the GFS2 volume is mounted automatically)
/dev/clustervg/demo /var/lib/mysql      gfs2    _netdev     0 0 
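For the _netdev mount to come back cleanly after a reboot, the cluster services also need to be enabled at boot on both nodes; luci usually enables most of these when a node joins, but it is worth verifying. A sketch of the usual chkconfig calls on RHEL 6 (service names assumed to match the standard RHCS packages):

chkconfig cman on          # cluster membership and fencing
chkconfig clvmd on         # clustered LVM
chkconfig rgmanager on     # resource group manager (runs service:mysql)
chkconfig gfs2 on          # mounts the gfs2 entries in /etc/fstab at boot
chkconfig modclusterd on   # keeps luci/ricci in sync with the node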